| title | diff | body | url | created_at | closed_at | merged_at | updated_at |
|---|---|---|---|---|---|---|---|
DOC: update release notes | diff --git a/doc/source/release.rst b/doc/source/release.rst
index f89fec9fb86e6..2587962299569 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -37,6 +37,248 @@ analysis / manipulation tool available in any language.
* Binary installers on PyPI: http://pypi.python.org/pypi/pandas
* Documentation: http://pandas.pydata.org
+pandas 0.20.0 / 0.20.1
+----------------------
+
+**Release date:** May 5, 2017
+
+
+This is a major release from 0.19.2 and includes a number of API changes, deprecations, new features,
+enhancements, and performance improvements along with a large number of bug fixes. We recommend that all
+users upgrade to this version.
+
+Highlights include:
+
+- New ``.agg()`` API for Series/DataFrame similar to the groupby-rolling-resample API's, see :ref:`here <whatsnew_0200.enhancements.agg>`
+- Integration with the ``feather-format``, including a new top-level ``pd.read_feather()`` and ``DataFrame.to_feather()`` method, see :ref:`here <io.feather>`.
+- The ``.ix`` indexer has been deprecated, see :ref:`here <whatsnew_0200.api_breaking.deprecate_ix>`
+- ``Panel`` has been deprecated, see :ref:`here <whatsnew_0200.api_breaking.deprecate_panel>`
+- Addition of an ``IntervalIndex`` and ``Interval`` scalar type, see :ref:`here <whatsnew_0200.enhancements.intervalindex>`
+- Improved user API when grouping by index levels in ``.groupby()``, see :ref:`here <whatsnew_0200.enhancements.groupby_access>`
+- Improved support for ``UInt64`` dtypes, see :ref:`here <whatsnew_0200.enhancements.uint64_support>`
+- A new orient for JSON serialization, ``orient='table'``, that uses the Table Schema spec and that gives the possibility for a more interactive repr in the Jupyter Notebook, see :ref:`here <whatsnew_0200.enhancements.table_schema>`
+- Experimental support for exporting styled DataFrames (``DataFrame.style``) to Excel, see :ref:`here <whatsnew_0200.enhancements.style_excel>`
+- Window binary corr/cov operations now return a MultiIndexed ``DataFrame`` rather than a ``Panel``, as ``Panel`` is now deprecated, see :ref:`here <whatsnew_0200.api_breaking.rolling_pairwise>`
+- Support for S3 handling now uses ``s3fs``, see :ref:`here <whatsnew_0200.api_breaking.s3>`
+- Google BigQuery support now uses the ``pandas-gbq`` library, see :ref:`here <whatsnew_0200.api_breaking.gbq>`
+
+See the :ref:`v0.20.1 Whatsnew <whatsnew_0200>` overview for an extensive list
+of all enhancements and bugs that have been fixed in 0.20.1.
+
+
+.. note::
+
+ This is a combined release for 0.20.0 and 0.20.1.
+ Version 0.20.1 contains one additional change for backwards-compatibility with downstream projects using pandas' ``utils`` routines. (:issue:`16250`)
+
+Thanks
+~~~~~~
+
+- abaldenko
+- Adam J. Stewart
+- Adrian
+- adrian-stepien
+- Ajay Saxena
+- Akash Tandon
+- Albert Villanova del Moral
+- Aleksey Bilogur
+- alexandercbooth
+- Alexis Mignon
+- Amol Kahat
+- Andreas Winkler
+- Andrew Kittredge
+- Anthonios Partheniou
+- Arco Bast
+- Ashish Singal
+- atbd
+- bastewart
+- Baurzhan Muftakhidinov
+- Ben Kandel
+- Ben Thayer
+- Ben Welsh
+- Bill Chambers
+- bmagnusson
+- Brandon M. Burroughs
+- Brian
+- Brian McFee
+- carlosdanielcsantos
+- Carlos Souza
+- chaimdemulder
+- Chris
+- chris-b1
+- Chris Ham
+- Christopher C. Aycock
+- Christoph Gohlke
+- Christoph Paulik
+- Chris Warth
+- Clemens Brunner
+- DaanVanHauwermeiren
+- Daniel Himmelstein
+- Dave Willmer
+- David Cook
+- David Gwynne
+- David Hoffman
+- David Krych
+- dickreuter
+- Diego Fernandez
+- Dimitris Spathis
+- discort
+- Dmitry L
+- Dody Suria Wijaya
+- Dominik Stanczak
+- Dr-Irv
+- Dr. Irv
+- dr-leo
+- D.S. McNeil
+- dubourg
+- dwkenefick
+- Elliott Sales de Andrade
+- Ennemoser Christoph
+- Francesc Alted
+- Fumito Hamamura
+- funnycrab
+- gfyoung
+- Giacomo Ferroni
+- goldenbull
+- Graham R. Jeffries
+- Greg Williams
+- Guilherme Beltramini
+- Guilherme Samora
+- Hao Wu
+- Harshit Patni
+- hesham.shabana@hotmail.com
+- Ilya V. Schurov
+- Iván Vallés Pérez
+- Jackie Leng
+- Jaehoon Hwang
+- James Draper
+- James Goppert
+- James McBride
+- James Santucci
+- Jan Schulz
+- Jeff Carey
+- Jeff Reback
+- JennaVergeynst
+- Jim
+- Jim Crist
+- Joe Jevnik
+- Joel Nothman
+- John
+- John Tucker
+- John W. O'Brien
+- John Zwinck
+- jojomdt
+- Jonathan de Bruin
+- Jonathan Whitmore
+- Jon Mease
+- Jon M. Mease
+- Joost Kranendonk
+- Joris Van den Bossche
+- Joshua Bradt
+- Julian Santander
+- Julien Marrec
+- Jun Kim
+- Justin Solinsky
+- Kacawi
+- Kamal Kamalaldin
+- Kerby Shedden
+- Kernc
+- Keshav Ramaswamy
+- Kevin Sheppard
+- Kyle Kelley
+- Larry Ren
+- Leon Yin
+- linebp
+- Line Pedersen
+- Lorenzo Cestaro
+- Luca Scarabello
+- Lukasz
+- Mahmoud Lababidi
+- manu
+- manuels
+- Mark Mandel
+- Matthew Brett
+- Matthew Roeschke
+- mattip
+- Matti Picus
+- Matt Roeschke
+- maxalbert
+- Maximilian Roos
+- mcocdawc
+- Michael Charlton
+- Michael Felt
+- Michael Lamparski
+- Michiel Stock
+- Mikolaj Chwalisz
+- Min RK
+- Miroslav Šedivý
+- Mykola Golubyev
+- Nate Yoder
+- Nathalie Rud
+- Nicholas Ver Halen
+- Nick Chmura
+- Nolan Nichols
+- nuffe
+- Pankaj Pandey
+- paul-mannino
+- Pawel Kordek
+- pbreach
+- Pete Huang
+- Peter
+- Peter Csizsek
+- Petio Petrov
+- Phil Ruffwind
+- Pietro Battiston
+- Piotr Chromiec
+- Prasanjit Prakash
+- Robert Bradshaw
+- Rob Forgione
+- Robin
+- Rodolfo Fernandez
+- Roger Thomas
+- Rouz Azari
+- Sahil Dua
+- sakkemo
+- Sam Foo
+- Sami Salonen
+- Sarah Bird
+- Sarma Tangirala
+- scls19fr
+- Scott Sanderson
+- Sebastian Bank
+- Sebastian Gsänger
+- Sébastien de Menten
+- Shawn Heide
+- Shyam Saladi
+- sinhrks
+- Sinhrks
+- Stephen Rauch
+- stijnvanhoey
+- Tara Adiseshan
+- themrmax
+- the-nose-knows
+- Thiago Serafim
+- Thoralf Gutierrez
+- Thrasibule
+- Tobias Gustafsson
+- Tom Augspurger
+- tomrod
+- Tong Shen
+- Tong SHEN
+- TrigonaMinima
+- tzinckgraf
+- Uwe
+- wandersoncferreira
+- watercrossing
+- wcwagner
+- Wes Turner
+- Wiktor Tomczak
+- WillAyd
+- xgdgsc
+- Yaroslav Halchenko
+- Yimeng Zhang
+- yui-knk
+
pandas 0.19.2
-------------
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/16259 | 2017-05-05T16:55:44Z | 2017-05-05T16:57:45Z | 2017-05-05T16:57:45Z | 2023-05-11T01:15:33Z |
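The first highlight in the release notes above is the new ``.agg()`` API on Series/DataFrame. A minimal illustration of what that API looks like (an editor's sketch, not code from the PR itself):

```python
import pandas as pd

# DataFrame.agg() mirrors the groupby/rolling/resample aggregation API:
# a list of functions yields one labelled row per function, while a dict
# applies a (possibly different) function per column.
df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]})

by_list = df.agg(['sum', 'min'])            # rows labelled 'sum' and 'min'
by_dict = df.agg({'A': 'sum', 'B': 'min'})  # one aggregate per column

print(by_list)
print(by_dict)
```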
COMPAT/TEST test, fix for unsafe Vector.resize(), which allows refche… | diff --git a/pandas/_libs/hashtable.pxd b/pandas/_libs/hashtable.pxd
index 9b352ae1c003b..3366751af144d 100644
--- a/pandas/_libs/hashtable.pxd
+++ b/pandas/_libs/hashtable.pxd
@@ -52,6 +52,7 @@ cdef struct Int64VectorData:
cdef class Int64Vector:
cdef Int64VectorData *data
cdef ndarray ao
+ cdef bint external_view_exists
cdef resize(self)
cpdef to_array(self)
diff --git a/pandas/_libs/hashtable.pyx b/pandas/_libs/hashtable.pyx
index c8aedcef77502..101e2c031f26e 100644
--- a/pandas/_libs/hashtable.pyx
+++ b/pandas/_libs/hashtable.pyx
@@ -64,6 +64,10 @@ cdef class Factorizer:
>>> factorize(np.array([1,2,np.nan], dtype='O'), na_sentinel=20)
array([ 0, 1, 20])
"""
+ if self.uniques.external_view_exists:
+ uniques = ObjectVector()
+ uniques.extend(self.uniques.to_array())
+ self.uniques = uniques
labels = self.table.get_labels(values, self.uniques,
self.count, na_sentinel, check_null)
mask = (labels == na_sentinel)
@@ -99,6 +103,15 @@ cdef class Int64Factorizer:
def factorize(self, int64_t[:] values, sort=False,
na_sentinel=-1, check_null=True):
+ """
+ Factorize values with nans replaced by na_sentinel
+ >>> factorize(np.array([1,2,np.nan], dtype='O'), na_sentinel=20)
+ array([ 0, 1, 20])
+ """
+ if self.uniques.external_view_exists:
+ uniques = Int64Vector()
+ uniques.extend(self.uniques.to_array())
+ self.uniques = uniques
labels = self.table.get_labels(values, self.uniques,
self.count, na_sentinel,
check_null)
diff --git a/pandas/_libs/hashtable_class_helper.pxi.in b/pandas/_libs/hashtable_class_helper.pxi.in
index 3ce82dace40a9..b80a592669eca 100644
--- a/pandas/_libs/hashtable_class_helper.pxi.in
+++ b/pandas/_libs/hashtable_class_helper.pxi.in
@@ -71,6 +71,7 @@ cdef class {{name}}Vector:
{{if dtype != 'int64'}}
cdef:
+ bint external_view_exists
{{name}}VectorData *data
ndarray ao
{{endif}}
@@ -80,6 +81,7 @@ cdef class {{name}}Vector:
sizeof({{name}}VectorData))
if not self.data:
raise MemoryError()
+ self.external_view_exists = False
self.data.n = 0
self.data.m = _INIT_VEC_CAP
self.ao = np.empty(self.data.m, dtype={{idtype}})
@@ -87,7 +89,7 @@ cdef class {{name}}Vector:
cdef resize(self):
self.data.m = max(self.data.m * 4, _INIT_VEC_CAP)
- self.ao.resize(self.data.m)
+ self.ao.resize(self.data.m, refcheck=False)
self.data.data = <{{arg}}*> self.ao.data
def __dealloc__(self):
@@ -99,13 +101,20 @@ cdef class {{name}}Vector:
return self.data.n
cpdef to_array(self):
- self.ao.resize(self.data.n)
- self.data.m = self.data.n
+ if self.data.m != self.data.n:
+ if self.external_view_exists:
+ # should never happen
+ raise ValueError("should have raised on append()")
+ self.ao.resize(self.data.n, refcheck=False)
+ self.data.m = self.data.n
+ self.external_view_exists = True
return self.ao
cdef inline void append(self, {{arg}} x):
if needs_resize(self.data):
+ if self.external_view_exists:
+ raise ValueError("external reference but Vector.resize() needed")
self.resize()
append_data_{{dtype}}(self.data, x)
@@ -120,15 +129,19 @@ cdef class StringVector:
cdef:
StringVectorData *data
+ bint external_view_exists
def __cinit__(self):
self.data = <StringVectorData *>PyMem_Malloc(
sizeof(StringVectorData))
if not self.data:
raise MemoryError()
+ self.external_view_exists = False
self.data.n = 0
self.data.m = _INIT_VEC_CAP
self.data.data = <char **> malloc(self.data.m * sizeof(char *))
+ if not self.data.data:
+ raise MemoryError()
cdef resize(self):
cdef:
@@ -138,9 +151,10 @@ cdef class StringVector:
m = self.data.m
self.data.m = max(self.data.m * 4, _INIT_VEC_CAP)
- # TODO: can resize?
orig_data = self.data.data
self.data.data = <char **> malloc(self.data.m * sizeof(char *))
+ if not self.data.data:
+ raise MemoryError()
for i in range(m):
self.data.data[i] = orig_data[i]
@@ -164,6 +178,7 @@ cdef class StringVector:
for i in range(self.data.n):
val = self.data.data[i]
ao[i] = val
+ self.external_view_exists = True
self.data.m = self.data.n
return ao
@@ -174,6 +189,9 @@ cdef class StringVector:
append_data_string(self.data, x)
+ cdef extend(self, ndarray[:] x):
+ for i in range(len(x)):
+ self.append(x[i])
cdef class ObjectVector:
@@ -181,8 +199,10 @@ cdef class ObjectVector:
PyObject **data
size_t n, m
ndarray ao
+ bint external_view_exists
def __cinit__(self):
+ self.external_view_exists = False
self.n = 0
self.m = _INIT_VEC_CAP
self.ao = np.empty(_INIT_VEC_CAP, dtype=object)
@@ -193,8 +213,10 @@ cdef class ObjectVector:
cdef inline append(self, object o):
if self.n == self.m:
+ if self.external_view_exists:
+ raise ValueError("external reference but Vector.resize() needed")
self.m = max(self.m * 2, _INIT_VEC_CAP)
- self.ao.resize(self.m)
+ self.ao.resize(self.m, refcheck=False)
self.data = <PyObject**> self.ao.data
Py_INCREF(o)
@@ -202,10 +224,17 @@ cdef class ObjectVector:
self.n += 1
def to_array(self):
- self.ao.resize(self.n)
- self.m = self.n
+ if self.m != self.n:
+ if self.external_view_exists:
+ raise ValueError("should have raised on append()")
+ self.ao.resize(self.n, refcheck=False)
+ self.m = self.n
+ self.external_view_exists = True
return self.ao
+ cdef extend(self, ndarray[:] x):
+ for i in range(len(x)):
+ self.append(x[i])
#----------------------------------------------------------------------
# HashTable
@@ -362,6 +391,9 @@ cdef class {{name}}HashTable(HashTable):
if needs_resize(ud):
with gil:
+ if uniques.external_view_exists:
+ raise ValueError("external reference to uniques held, "
+ "but Vector.resize() needed")
uniques.resize()
append_data_{{dtype}}(ud, val)
labels[i] = count
diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index 093730fb2478b..351e646cbb0b2 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -14,7 +14,7 @@
from pandas import compat
from pandas._libs import (groupby as libgroupby, algos as libalgos,
- hashtable)
+ hashtable as ht)
from pandas._libs.hashtable import unique_label_indices
from pandas.compat import lrange, range
import pandas.core.algorithms as algos
@@ -259,7 +259,7 @@ def test_factorize_nan(self):
# rizer.factorize should not raise an exception if na_sentinel indexes
# outside of reverse_indexer
key = np.array([1, 2, 1, np.nan], dtype='O')
- rizer = hashtable.Factorizer(len(key))
+ rizer = ht.Factorizer(len(key))
for na_sentinel in (-1, 20):
ids = rizer.factorize(key, sort=True, na_sentinel=na_sentinel)
expected = np.array([0, 1, 0, na_sentinel], dtype='int32')
@@ -1049,14 +1049,14 @@ class TestHashTable(object):
def test_lookup_nan(self):
xs = np.array([2.718, 3.14, np.nan, -7, 5, 2, 3])
- m = hashtable.Float64HashTable()
+ m = ht.Float64HashTable()
m.map_locations(xs)
tm.assert_numpy_array_equal(m.lookup(xs), np.arange(len(xs),
dtype=np.int64))
def test_lookup_overflow(self):
xs = np.array([1, 2, 2**63], dtype=np.uint64)
- m = hashtable.UInt64HashTable()
+ m = ht.UInt64HashTable()
m.map_locations(xs)
tm.assert_numpy_array_equal(m.lookup(xs), np.arange(len(xs),
dtype=np.int64))
@@ -1070,25 +1070,35 @@ def test_vector_resize(self):
# Test for memory errors after internal vector
# reallocations (pull request #7157)
- def _test_vector_resize(htable, uniques, dtype, nvals):
+ def _test_vector_resize(htable, uniques, dtype, nvals, safely_resizes):
vals = np.array(np.random.randn(1000), dtype=dtype)
- # get_labels appends to the vector
+ # get_labels may append to uniques
htable.get_labels(vals[:nvals], uniques, 0, -1)
- # to_array resizes the vector
- uniques.to_array()
- htable.get_labels(vals, uniques, 0, -1)
+ # to_array() set an external_view_exists flag on uniques.
+ tmp = uniques.to_array()
+ oldshape = tmp.shape
+ # subsequent get_labels() calls can no longer append to it
+ # (for all but StringHashTables + ObjectVector)
+ if safely_resizes:
+ htable.get_labels(vals, uniques, 0, -1)
+ else:
+ with pytest.raises(ValueError) as excinfo:
+ htable.get_labels(vals, uniques, 0, -1)
+ assert str(excinfo.value).startswith('external reference')
+ uniques.to_array() # should not raise here
+ assert tmp.shape == oldshape
test_cases = [
- (hashtable.PyObjectHashTable, hashtable.ObjectVector, 'object'),
- (hashtable.StringHashTable, hashtable.ObjectVector, 'object'),
- (hashtable.Float64HashTable, hashtable.Float64Vector, 'float64'),
- (hashtable.Int64HashTable, hashtable.Int64Vector, 'int64'),
- (hashtable.UInt64HashTable, hashtable.UInt64Vector, 'uint64')]
+ (ht.PyObjectHashTable, ht.ObjectVector, 'object', False),
+ (ht.StringHashTable, ht.ObjectVector, 'object', True),
+ (ht.Float64HashTable, ht.Float64Vector, 'float64', False),
+ (ht.Int64HashTable, ht.Int64Vector, 'int64', False),
+ (ht.UInt64HashTable, ht.UInt64Vector, 'uint64', False)]
- for (tbl, vect, dtype) in test_cases:
+ for (tbl, vect, dtype, safely_resizes) in test_cases:
# resizing to empty is a special case
- _test_vector_resize(tbl(), vect(), dtype, 0)
- _test_vector_resize(tbl(), vect(), dtype, 10)
+ _test_vector_resize(tbl(), vect(), dtype, 0, safely_resizes)
+ _test_vector_resize(tbl(), vect(), dtype, 10, safely_resizes)
def test_quantile():
| closes issue #15854, supersedes pull request #16224, #16193
Adds a test showing how the ``uniques`` attribute leaks to user space: calling ``get_labels()`` again with different data could change the underlying ndarray out from under the caller. With this pull request, ``append()`` raises an exception once ``to_array()`` has handed out an external view, which makes the test pass. That guard in turn makes it safe to pass ``refcheck=False`` to ``ndarray.resize()``, which fixes the issue above. | https://api.github.com/repos/pandas-dev/pandas/pulls/16258 | 2017-05-05T14:19:23Z | 2017-05-11T11:39:07Z | 2017-05-11T11:39:07Z | 2017-05-11T11:39:51Z |
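The hazard this PR guards against — reallocating a buffer while an external view of it exists — can be seen in plain NumPy (a minimal sketch of the underlying problem, not pandas code):

```python
import numpy as np

a = np.arange(5)
view = a[:3]          # an external view of a's buffer now exists

# With the default refcheck=True, NumPy refuses to reallocate a buffer
# that another array still references:
try:
    a.resize(10)
    raised = False
except ValueError:
    raised = True

# refcheck=False skips that safety check and reallocates anyway; `view`
# may now point at freed memory, so it must never be touched again.
# That is why the PR tracks external_view_exists itself and raises its
# own ValueError on append(), rather than relying on NumPy's refcount
# check (which the Cython-level references defeat anyway).
a.resize(10, refcheck=False)
```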
DOC: add read_gbq as top-level in api.rst | diff --git a/doc/source/api.rst b/doc/source/api.rst
index c652573bc6677..cb5136df1ff8b 100644
--- a/doc/source/api.rst
+++ b/doc/source/api.rst
@@ -128,7 +128,6 @@ SQL
Google BigQuery
~~~~~~~~~~~~~~~
-.. currentmodule:: pandas.io.gbq
.. autosummary::
:toctree: generated/
@@ -136,9 +135,6 @@ Google BigQuery
read_gbq
-.. currentmodule:: pandas
-
-
STATA
~~~~~
diff --git a/doc/source/conf.py b/doc/source/conf.py
index 556e5f0227471..394fa44c30573 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -243,6 +243,7 @@
('pandas.io.clipboard.read_clipboard', 'pandas.read_clipboard'),
('pandas.io.excel.ExcelFile.parse', 'pandas.ExcelFile.parse'),
('pandas.io.excel.read_excel', 'pandas.read_excel'),
+ ('pandas.io.gbq.read_gbq', 'pandas.read_gbq'),
('pandas.io.html.read_html', 'pandas.read_html'),
('pandas.io.json.read_json', 'pandas.read_json'),
('pandas.io.parsers.read_csv', 'pandas.read_csv'),
| Noticed that the link to the top-level one did not work in the whatsnew file | https://api.github.com/repos/pandas-dev/pandas/pulls/16256 | 2017-05-05T13:35:55Z | 2017-05-05T14:35:50Z | 2017-05-05T14:35:50Z | 2017-05-05T15:14:33Z |
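The ``conf.py`` entries in the diff above pair an old autosummary page name with its new top-level alias; the doc build uses such pairs to emit redirect stubs for moved pages. A standalone sketch of that idea (the helper below is hypothetical, not pandas' actual Sphinx machinery):

```python
# Given (old, new) page-name pairs like the conf.py entry added above,
# emit a minimal meta-refresh HTML stub for each old generated/ page.
moved_api_pages = [
    ('pandas.io.gbq.read_gbq', 'pandas.read_gbq'),
]

def redirect_stub(new_page):
    # hypothetical helper: points an old page at the new one
    return ('<html><head>'
            '<meta http-equiv="refresh" content="0;url={0}.html"/>'
            '</head></html>').format(new_page)

stubs = {old: redirect_stub(new) for old, new in moved_api_pages}
```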
DOC: some reviewing of the 0.20 whatsnew file | diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
index 61042071a52ec..551c4bd67146b 100644
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -14,14 +14,13 @@ Highlights include:
- The ``.ix`` indexer has been deprecated, see :ref:`here <whatsnew_0200.api_breaking.deprecate_ix>`
- ``Panel`` has been deprecated, see :ref:`here <whatsnew_0200.api_breaking.deprecate_panel>`
- Addition of an ``IntervalIndex`` and ``Interval`` scalar type, see :ref:`here <whatsnew_0200.enhancements.intervalindex>`
-- Improved user API when accessing levels in ``.groupby()``, see :ref:`here <whatsnew_0200.enhancements.groupby_access>`
+- Improved user API when grouping by index levels in ``.groupby()``, see :ref:`here <whatsnew_0200.enhancements.groupby_access>`
- Improved support for ``UInt64`` dtypes, see :ref:`here <whatsnew_0200.enhancements.uint64_support>`
-- A new orient for JSON serialization, ``orient='table'``, that uses the :ref:`Table Schema spec <whatsnew_0200.enhancements.table_schema>`
-- Experimental support for exporting ``DataFrame.style`` formats to Excel, see :ref:`here <whatsnew_0200.enhancements.style_excel>`
+- A new orient for JSON serialization, ``orient='table'``, that uses the Table Schema spec and that gives the possibility for a more interactive repr in the Jupyter Notebook, see :ref:`here <whatsnew_0200.enhancements.table_schema>`
+- Experimental support for exporting styled DataFrames (``DataFrame.style``) to Excel, see :ref:`here <whatsnew_0200.enhancements.style_excel>`
- Window binary corr/cov operations now return a MultiIndexed ``DataFrame`` rather than a ``Panel``, as ``Panel`` is now deprecated, see :ref:`here <whatsnew_0200.api_breaking.rolling_pairwise>`
- Support for S3 handling now uses ``s3fs``, see :ref:`here <whatsnew_0200.api_breaking.s3>`
- Google BigQuery support now uses the ``pandas-gbq`` library, see :ref:`here <whatsnew_0200.api_breaking.gbq>`
-- Switched the test framework to use `pytest <http://doc.pytest.org/en/latest>`__ (:issue:`13097`)
.. warning::
@@ -41,12 +40,12 @@ New features
.. _whatsnew_0200.enhancements.agg:
-``agg`` API
-^^^^^^^^^^^
+``agg`` API for DataFrame/Series
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Series & DataFrame have been enhanced to support the aggregation API. This is a familiar API
-from groupby, window operations, and resampling. This allows aggregation operations in a concise
-by using :meth:`~DataFrame.agg`, and :meth:`~DataFrame.transform`. The full documentation
+from groupby, window operations, and resampling. This allows aggregation operations in a concise way
+by using :meth:`~DataFrame.agg` and :meth:`~DataFrame.transform`. The full documentation
is :ref:`here <basics.aggregate>` (:issue:`1623`).
Here is a sample
@@ -107,22 +106,14 @@ aggregations. This is similiar to how groupby ``.agg()`` works. (:issue:`15015`)
``dtype`` keyword for data IO
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-The ``'python'`` engine for :func:`read_csv` now accepts the ``dtype`` keyword argument for specifying the types of specific columns (:issue:`14295`). See the :ref:`io docs <io.dtypes>` for more information.
+The ``'python'`` engine for :func:`read_csv`, as well as the :func:`read_fwf` function for parsing
+fixed-width text files and :func:`read_excel` for parsing Excel files, now accept the ``dtype`` keyword argument for specifying the types of specific columns (:issue:`14295`). See the :ref:`io docs <io.dtypes>` for more information.
.. ipython:: python
:suppress:
from pandas.compat import StringIO
-.. ipython:: python
-
- data = "a,b\n1,2\n3,4"
- pd.read_csv(StringIO(data), engine='python').dtypes
- pd.read_csv(StringIO(data), engine='python', dtype={'a':'float64', 'b':'object'}).dtypes
-
-The ``dtype`` keyword argument is also now supported in the :func:`read_fwf` function for parsing
-fixed-width text files, and :func:`read_excel` for parsing Excel files.
-
.. ipython:: python
data = "a b\n1 2\n3 4"
@@ -135,16 +126,16 @@ fixed-width text files, and :func:`read_excel` for parsing Excel files.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
:func:`to_datetime` has gained a new parameter, ``origin``, to define a reference date
-from where to compute the resulting ``DatetimeIndex`` when ``unit`` is specified. (:issue:`11276`, :issue:`11745`)
+from where to compute the resulting timestamps when parsing numerical values with a specific ``unit`` specified. (:issue:`11276`, :issue:`11745`)
-Start with 1960-01-01 as the starting date
+For example, with 1960-01-01 as the starting date:
.. ipython:: python
pd.to_datetime([1, 2, 3], unit='D', origin=pd.Timestamp('1960-01-01'))
-The default is set at ``origin='unix'``, which defaults to ``1970-01-01 00:00:00``.
-Commonly called 'unix epoch' or POSIX time. This was the previous default, so this is a backward compatible change.
+The default is set at ``origin='unix'``, which defaults to ``1970-01-01 00:00:00``, which is
+commonly called 'unix epoch' or POSIX time. This was the previous default, so this is a backward compatible change.
.. ipython:: python
@@ -156,7 +147,7 @@ Commonly called 'unix epoch' or POSIX time. This was the previous default, so th
Groupby Enhancements
^^^^^^^^^^^^^^^^^^^^
-Strings passed to ``DataFrame.groupby()`` as the ``by`` parameter may now reference either column names or index level names.
-Strings passed to ``DataFrame.groupby()`` as the ``by`` parameter may now reference either column names or index level names. Previously, only column names could be referenced. This makes it easy to group by a column and an index level at the same time. (:issue:`5677`)
.. ipython:: python
@@ -172,8 +163,6 @@ Strings passed to ``DataFrame.groupby()`` as the ``by`` parameter may now refere
df.groupby(['second', 'A']).sum()
-Previously, only column names could be referenced. (:issue:`5677`)
-
.. _whatsnew_0200.enhancements.compressed_urls:
@@ -203,7 +192,7 @@ support for bz2 compression in the python 2 C-engine improved (:issue:`14874`).
Pickle file I/O now supports compression
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-:func:`read_pickle`, :meth:`DataFame.to_pickle` and :meth:`Series.to_pickle`
+:func:`read_pickle`, :meth:`DataFrame.to_pickle` and :meth:`Series.to_pickle`
can now read from and write to compressed pickle files. Compression methods
can be an explicit parameter or be inferred from the file extension.
See :ref:`the docs here. <io.pickle.compression>`
@@ -221,33 +210,24 @@ Using an explicit compression type
df.to_pickle("data.pkl.compress", compression="gzip")
rt = pd.read_pickle("data.pkl.compress", compression="gzip")
- rt
-
-Inferring compression type from the extension
-
-.. ipython:: python
+ rt.head()
- df.to_pickle("data.pkl.xz", compression="infer")
- rt = pd.read_pickle("data.pkl.xz", compression="infer")
- rt
-
-The default is to ``infer``:
+The default is to infer the compression type from the extension (``compression='infer'``):
.. ipython:: python
df.to_pickle("data.pkl.gz")
rt = pd.read_pickle("data.pkl.gz")
- rt
+ rt.head()
df["A"].to_pickle("s1.pkl.bz2")
rt = pd.read_pickle("s1.pkl.bz2")
- rt
+ rt.head()
.. ipython:: python
:suppress:
import os
os.remove("data.pkl.compress")
- os.remove("data.pkl.xz")
os.remove("data.pkl.gz")
os.remove("s1.pkl.bz2")
@@ -293,7 +273,7 @@ In previous versions, ``.groupby(..., sort=False)`` would fail with a ``ValueErr
ordered=True)})
df
-Previous Behavior:
+**Previous Behavior**:
.. code-block:: ipython
@@ -301,7 +281,7 @@ Previous Behavior:
---------------------------------------------------------------------------
ValueError: items in new_categories are not the same as in old categories
-New Behavior:
+**New Behavior**:
.. ipython:: python
@@ -327,7 +307,7 @@ the data.
df.to_json(orient='table')
-See :ref:`IO: Table Schema for more<io.table_schema>`.
+See :ref:`IO: Table Schema for more information <io.table_schema>`.
Additionally, the repr for ``DataFrame`` and ``Series`` can now publish
this JSON Table schema representation of the Series or DataFrame if you are
@@ -411,6 +391,11 @@ pandas has gained an ``IntervalIndex`` with its own dtype, ``interval`` as well
notation, specifically as a return type for the categories in :func:`cut` and :func:`qcut`. The ``IntervalIndex`` allows some unique indexing, see the
:ref:`docs <indexing.intervallindex>`. (:issue:`7640`, :issue:`8625`)
+.. warning::
+
+ These indexing behaviors of the IntervalIndex are provisional and may change in a future version of pandas. Feedback on usage is welcome.
+
+
Previous behavior:
The returned categories were strings, representing Intervals
@@ -473,9 +458,8 @@ Other Enhancements
- ``Series.str.replace()`` now accepts a callable, as replacement, which is passed to ``re.sub`` (:issue:`15055`)
- ``Series.str.replace()`` now accepts a compiled regular expression as a pattern (:issue:`15446`)
- ``Series.sort_index`` accepts parameters ``kind`` and ``na_position`` (:issue:`13589`, :issue:`14444`)
-- ``DataFrame`` has gained a ``nunique()`` method to count the distinct values over an axis (:issue:`14336`).
+- ``DataFrame`` and ``DataFrame.groupby()`` have gained a ``nunique()`` method to count the distinct values over an axis (:issue:`14336`, :issue:`15197`).
- ``DataFrame`` has gained a ``melt()`` method, equivalent to ``pd.melt()``, for unpivoting from a wide to long format (:issue:`12640`).
-- ``DataFrame.groupby()`` has gained a ``.nunique()`` method to count the distinct values for all columns within each group (:issue:`14336`, :issue:`15197`).
- ``pd.read_excel()`` now preserves sheet order when using ``sheetname=None`` (:issue:`9930`)
- Multiple offset aliases with decimal points are now supported (e.g. ``0.5min`` is parsed as ``30s``) (:issue:`8419`)
- ``.isnull()`` and ``.notnull()`` have been added to ``Index`` object to make them more consistent with the ``Series`` API (:issue:`15300`)
@@ -506,9 +490,8 @@ Other Enhancements
- ``DataFrame.to_excel()`` has a new ``freeze_panes`` parameter to turn on Freeze Panes when exporting to Excel (:issue:`15160`)
- ``pd.read_html()`` will parse multiple header rows, creating a MutliIndex header. (:issue:`13434`).
- HTML table output skips ``colspan`` or ``rowspan`` attribute if equal to 1. (:issue:`15403`)
-- :class:`pandas.io.formats.style.Styler`` template now has blocks for easier extension, :ref:`see the example notebook <style.ipynb#Subclassing>` (:issue:`15649`)
-- :meth:`pandas.io.formats.style.Styler.render` now accepts ``**kwargs`` to allow user-defined variables in the template (:issue:`15649`)
-- ``pd.io.api.Styler.render`` now accepts ``**kwargs`` to allow user-defined variables in the template (:issue:`15649`)
+- :class:`pandas.io.formats.style.Styler` template now has blocks for easier extension, :ref:`see the example notebook <style.ipynb#Subclassing>` (:issue:`15649`)
+- :meth:`Styler.render() <pandas.io.formats.style.Styler.render>` now accepts ``**kwargs`` to allow user-defined variables in the template (:issue:`15649`)
- Compatibility with Jupyter notebook 5.0; MultiIndex column labels are left-aligned and MultiIndex row-labels are top-aligned (:issue:`15379`)
- ``TimedeltaIndex`` now has a custom date-tick formatter specifically designed for nanosecond level precision (:issue:`8711`)
- ``pd.api.types.union_categoricals`` gained the ``ignore_ordered`` argument to allow ignoring the ordered attribute of unioned categoricals (:issue:`13410`). See the :ref:`categorical union docs <categorical.union>` for more information.
@@ -519,7 +502,7 @@ Other Enhancements
- ``pandas.io.json.json_normalize()`` gained the option ``errors='ignore'|'raise'``; the default is ``errors='raise'`` which is backward compatible. (:issue:`14583`)
- ``pandas.io.json.json_normalize()`` with an empty ``list`` will return an empty ``DataFrame`` (:issue:`15534`)
- ``pandas.io.json.json_normalize()`` has gained a ``sep`` option that accepts ``str`` to separate joined fields; the default is ".", which is backward compatible. (:issue:`14883`)
-- :meth:`~MultiIndex.remove_unused_levels` has been added to facilitate :ref:`removing unused levels <advanced.shown_levels>`. (:issue:`15694`)
+- :meth:`MultiIndex.remove_unused_levels` has been added to facilitate :ref:`removing unused levels <advanced.shown_levels>`. (:issue:`15694`)
- ``pd.read_csv()`` will now raise a ``ParserError`` error whenever any parsing error occurs (:issue:`15913`, :issue:`15925`)
- ``pd.read_csv()`` now supports the ``error_bad_lines`` and ``warn_bad_lines`` arguments for the Python parser (:issue:`15925`)
- The ``display.show_dimensions`` option can now also be used to specify
@@ -542,7 +525,7 @@ Backwards incompatible API changes
Possible incompatibility for HDF5 formats created with pandas < 0.13.0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-``pd.TimeSeries`` was deprecated officially in 0.17.0, though has only been an alias since 0.13.0. It has
+``pd.TimeSeries`` was deprecated officially in 0.17.0, though has already been an alias since 0.13.0. It has
been dropped in favor of ``pd.Series``. (:issue:`15098`).
This *may* cause HDF5 files that were created in prior versions to become unreadable if ``pd.TimeSeries``
@@ -680,7 +663,7 @@ ndarray, you can always convert explicitly using ``np.asarray(idx.hour)``.
pd.unique will now be consistent with extension types
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-In prior versions, using ``Series.unique()`` and :func:`unique` on ``Categorical`` and tz-aware
+In prior versions, using :meth:`Series.unique` and :func:`pandas.unique` on ``Categorical`` and tz-aware
data-types would yield different return types. These are now made consistent. (:issue:`15903`)
- Datetime tz-aware
@@ -729,12 +712,12 @@ data-types would yield different return types. These are now made consistent. (:
.. code-block:: ipython
- In [1]: pd.Series(pd.Categorical(list('baabc'))).unique()
+ In [1]: pd.Series(list('baabc'), dtype='category').unique()
Out[1]:
[b, a, c]
Categories (3, object): [b, a, c]
- In [2]: pd.unique(pd.Series(pd.Categorical(list('baabc'))))
+ In [2]: pd.unique(pd.Series(list('baabc'), dtype='category'))
Out[2]: array(['b', 'a', 'c'], dtype=object)
New Behavior:
@@ -742,8 +725,8 @@ data-types would yield different return types. These are now made consistent. (:
.. ipython:: python
# returns a Categorical
- pd.Series(pd.Categorical(list('baabc'))).unique()
- pd.unique(pd.Series(pd.Categorical(list('baabc'))).unique())
+ pd.Series(list('baabc'), dtype='category').unique()
+ pd.unique(pd.Series(list('baabc'), dtype='category'))
.. _whatsnew_0200.api_breaking.s3:
@@ -804,8 +787,6 @@ Now the smallest acceptable dtype will be used (:issue:`13247`)
df1 = pd.DataFrame(np.array([1.0], dtype=np.float32, ndmin=2))
df1.dtypes
-.. ipython:: python
-
df2 = pd.DataFrame(np.array([np.nan], dtype=np.float32, ndmin=2))
df2.dtypes
@@ -813,7 +794,7 @@ Previous Behavior:
.. code-block:: ipython
- In [7]: pd.concat([df1,df2]).dtypes
+ In [7]: pd.concat([df1, df2]).dtypes
Out[7]:
0 float64
dtype: object
@@ -822,7 +803,7 @@ New Behavior:
.. ipython:: python
- pd.concat([df1,df2]).dtypes
+ pd.concat([df1, df2]).dtypes
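A minimal sketch of the upcasting rule this hunk documents (assumes pandas and NumPy are available): since ``float32`` can already hold ``NaN``, concatenation no longer upcasts to ``float64``.

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame(np.array([1.0], dtype=np.float32, ndmin=2))
df2 = pd.DataFrame(np.array([np.nan], dtype=np.float32, ndmin=2))

# the smallest acceptable dtype is kept
result = pd.concat([df1, df2])
assert str(result.dtypes.iloc[0]) == 'float32'
```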
.. _whatsnew_0200.api_breaking.gbq:
@@ -1012,7 +993,7 @@ See the section on :ref:`Windowed Binary Operations <stats.moments.binary>` for
periods=100, freq='D', name='foo'))
df.tail()
-Old Behavior:
+Previous Behavior:
.. code-block:: ipython
@@ -1228,12 +1209,12 @@ If indicated, a deprecation warning will be issued if you reference theses modul
"pandas.algos", "pandas._libs.algos", ""
"pandas.hashtable", "pandas._libs.hashtable", ""
"pandas.indexes", "pandas.core.indexes", ""
- "pandas.json", "pandas._libs.json", "X"
+ "pandas.json", "pandas._libs.json / pandas.io.json", "X"
"pandas.parser", "pandas._libs.parsers", "X"
"pandas.formats", "pandas.io.formats", ""
"pandas.sparse", "pandas.core.sparse", ""
- "pandas.tools", "pandas.core.reshape", ""
- "pandas.types", "pandas.core.dtypes", ""
+ "pandas.tools", "pandas.core.reshape", "X"
+ "pandas.types", "pandas.core.dtypes", "X"
"pandas.io.sas.saslib", "pandas.io.sas._sas", ""
"pandas._join", "pandas._libs.join", ""
"pandas._hash", "pandas._libs.hashing", ""
@@ -1249,11 +1230,12 @@ exposed in the top-level namespace: ``pandas.errors``, ``pandas.plotting`` and
certain functions in the ``pandas.io`` and ``pandas.tseries`` submodules;
these are now the public subpackages.
+Further changes:
- The function :func:`~pandas.api.types.union_categoricals` is now importable from ``pandas.api.types``, formerly from ``pandas.types.concat`` (:issue:`15998`)
- The type import ``pandas.tslib.NaTType`` is deprecated and can be replaced by using ``type(pandas.NaT)`` (:issue:`16146`)
- The public functions in ``pandas.tools.hashing`` are deprecated from that location, but are now importable from ``pandas.util`` (:issue:`16223`)
-- The modules in ``pandas.util``: ``decorators``, ``print_versions``, ``doctools``, `validators``, ``depr_module`` are now private (:issue:`16223`)
+- The modules in ``pandas.util``: ``decorators``, ``print_versions``, ``doctools``, ``validators``, ``depr_module`` are now private. Only the functions exposed in ``pandas.util`` itself are public (:issue:`16223`)
.. _whatsnew_0200.privacy.errors:
@@ -1320,7 +1302,7 @@ Deprecations
Deprecate ``.ix``
^^^^^^^^^^^^^^^^^
-The ``.ix`` indexer is deprecated, in favor of the more strict ``.iloc`` and ``.loc`` indexers. ``.ix`` offers a lot of magic on the inference of what the user wants to do. To wit, ``.ix`` can decide to index *positionally* OR via *labels*, depending on the data type of the index. This has caused quite a bit of user confusion over the years. The full indexing documentation are :ref:`here <indexing>`. (:issue:`14218`)
+The ``.ix`` indexer is deprecated, in favor of the more strict ``.iloc`` and ``.loc`` indexers. ``.ix`` offers a lot of magic on the inference of what the user wants to do. To wit, ``.ix`` can decide to index *positionally* OR via *labels*, depending on the data type of the index. This has caused quite a bit of user confusion over the years. The full indexing documentation is :ref:`here <indexing>`. (:issue:`14218`)
The recommended methods of indexing are:
@@ -1368,7 +1350,7 @@ Deprecate Panel
``Panel`` is deprecated and will be removed in a future version. The recommended way to represent 3-D data is
with a ``MultiIndex`` on a ``DataFrame`` via the :meth:`~Panel.to_frame` or with the `xarray package <http://xarray.pydata.org/en/stable/>`__. Pandas
-provides a :meth:`~Panel.to_xarray` method to automate this conversion. See the documentation :ref:`Deprecate Panel <dsintro.deprecate_panel>`. (:issue:`13563`).
+provides a :meth:`~Panel.to_xarray` method to automate this conversion. For more details see the :ref:`Deprecate Panel <dsintro.deprecate_panel>` documentation. (:issue:`13563`).
.. ipython:: python
:okwarning:
@@ -1416,7 +1398,7 @@ This is an illustrative example:
Here is a typical useful syntax for computing different aggregations for different columns. This
is a natural and useful syntax. We aggregate from the dict-to-list by taking the specified
-columns and applying the list of functions. This returns a ``MultiIndex`` for the columns.
+columns and applying the list of functions. This returns a ``MultiIndex`` for the columns (this is *not* deprecated).
.. ipython:: python
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index 8363cead01e56..b1523cd6c0d0c 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -99,6 +99,9 @@ class IntervalIndex(IntervalMixin, Index):
.. versionadded:: 0.20.0
+ Warning: the indexing behaviors are provisional and may change in
+ a future version of pandas.
+
Attributes
----------
left, right : array-like (1-dimensional)
| Taking the opportunity to do some further edits to the whatsnew file | https://api.github.com/repos/pandas-dev/pandas/pulls/16254 | 2017-05-05T13:22:04Z | 2017-05-05T15:21:32Z | 2017-05-05T15:21:31Z | 2017-05-05T15:22:55Z |
DOC: Updated release notes for 0.20.1 | diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
index 61042071a52ec..b0aac2aee4238 100644
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -1,6 +1,6 @@
.. _whatsnew_0200:
-v0.20.0 (May 4, 2017)
+v0.20.1 (May 5, 2017)
---------------------
This is a major release from 0.19.2 and includes a number of API changes, deprecations, new features,
@@ -30,6 +30,11 @@ Highlights include:
Check the :ref:`API Changes <whatsnew_0200.api_breaking>` and :ref:`deprecations <whatsnew_0200.deprecations>` before updating.
+.. note::
+
+   This is a combined release for 0.20.0 and 0.20.1.
+ Version 0.20.1 contains one additional change for backwards-compatibility with downstream projects using pandas' ``utils`` routines. (:issue:`16250`)
+
.. contents:: What's new in v0.20.0
:local:
:backlinks: none
@@ -388,8 +393,7 @@ For example, after running the following, ``styled.xlsx`` renders as below:
df
styled = df.style.\
applymap(lambda val: 'color: %s' % 'red' if val < 0 else 'black').\
- apply(lambda s: ['background-color: yellow' if v else ''
- for v in s == s.max()])
+ highlight_max()
styled.to_excel('styled.xlsx', engine='openpyxl')
.. image:: _static/style-excel.png
diff --git a/doc/source/whatsnew/v0.20.1.txt b/doc/source/whatsnew/v0.20.2.txt
similarity index 100%
rename from doc/source/whatsnew/v0.20.1.txt
rename to doc/source/whatsnew/v0.20.2.txt
| Also added release note for https://github.com/pandas-dev/pandas/pull/16250 | https://api.github.com/repos/pandas-dev/pandas/pulls/16251 | 2017-05-05T12:17:05Z | 2017-05-05T14:06:38Z | 2017-05-05T14:06:38Z | 2017-05-27T16:36:39Z |
DEPR: add shims for util + TST: test that we work in downstream packages | diff --git a/ci/install_release_build.sh b/ci/install_release_build.sh
new file mode 100644
index 0000000000000..f8373176643fa
--- /dev/null
+++ b/ci/install_release_build.sh
@@ -0,0 +1,10 @@
+#!/bin/bash
+
+# this requires cython to be installed
+
+# this builds the release cleanly
+rm -rf dist
+git clean -xfd
+python setup.py clean
+python setup.py cython
+python setup.py sdist --formats=gztar
diff --git a/ci/install_travis.sh b/ci/install_travis.sh
index 09668cbccc9d2..601edded29f5a 100755
--- a/ci/install_travis.sh
+++ b/ci/install_travis.sh
@@ -123,12 +123,9 @@ if [ "$BUILD_TEST" ]; then
# build & install testing
echo ["Starting installation test."]
- rm -rf dist
- python setup.py clean
- python setup.py build_ext --inplace
- python setup.py sdist --formats=gztar
- conda uninstall cython
- pip install dist/*tar.gz || exit 1
+ bash ci/install_release_build.sh
+ conda uninstall -y cython
+ time pip install dist/*tar.gz || exit 1
else
@@ -162,14 +159,13 @@ if [ -e ${REQ} ]; then
time bash $REQ || exit 1
fi
-# finish install if we are not doing a build-testk
-if [ -z "$BUILD_TEST" ]; then
+# remove any installed pandas package
+# w/o removing anything else
+echo
+echo "[removing installed pandas]"
+conda remove pandas --force
- # remove any installed pandas package
- # w/o removing anything else
- echo
- echo "[removing installed pandas]"
- conda remove pandas --force
+if [ -z "$BUILD_TEST" ]; then
# install our pandas
echo
@@ -178,6 +174,10 @@ if [ -z "$BUILD_TEST" ]; then
fi
+echo
+echo "[show pandas]"
+conda list pandas
+
echo
echo "[done]"
exit 0
diff --git a/ci/requirements-2.7_BUILD_TEST.pip b/ci/requirements-2.7_BUILD_TEST.pip
new file mode 100644
index 0000000000000..a0fc77c40bc00
--- /dev/null
+++ b/ci/requirements-2.7_BUILD_TEST.pip
@@ -0,0 +1,7 @@
+xarray
+geopandas
+seaborn
+pandas_gbq
+pandas_datareader
+statsmodels
+scikit-learn
diff --git a/ci/requirements-2.7_BUILD_TEST.sh b/ci/requirements-2.7_BUILD_TEST.sh
new file mode 100755
index 0000000000000..78941fd0944e5
--- /dev/null
+++ b/ci/requirements-2.7_BUILD_TEST.sh
@@ -0,0 +1,7 @@
+#!/bin/bash
+
+source activate pandas
+
+echo "install 27 BUILD_TEST"
+
+conda install -n pandas -c conda-forge pyarrow dask
diff --git a/pandas/tests/test_downstream.py b/pandas/tests/test_downstream.py
new file mode 100644
index 0000000000000..2baedb82aa2a7
--- /dev/null
+++ b/pandas/tests/test_downstream.py
@@ -0,0 +1,85 @@
+"""
+Testing that we work in the downstream packages
+"""
+import pytest
+import numpy as np # noqa
+from pandas import DataFrame
+from pandas.util import testing as tm
+
+
+@pytest.fixture
+def df():
+ return DataFrame({'A': [1, 2, 3]})
+
+
+def test_dask(df):
+
+ toolz = pytest.importorskip('toolz') # noqa
+ dask = pytest.importorskip('dask') # noqa
+
+ import dask.dataframe as dd
+
+ ddf = dd.from_pandas(df, npartitions=3)
+ assert ddf.A is not None
+ assert ddf.compute() is not None
+
+
+def test_xarray(df):
+
+ xarray = pytest.importorskip('xarray') # noqa
+
+ assert df.to_xarray() is not None
+
+
+def test_statsmodels():
+
+ statsmodels = pytest.importorskip('statsmodels') # noqa
+ import statsmodels.api as sm
+ import statsmodels.formula.api as smf
+ df = sm.datasets.get_rdataset("Guerry", "HistData").data
+ smf.ols('Lottery ~ Literacy + np.log(Pop1831)', data=df).fit()
+
+
+def test_scikit_learn(df):
+
+ sklearn = pytest.importorskip('sklearn') # noqa
+ from sklearn import svm, datasets
+
+ digits = datasets.load_digits()
+ clf = svm.SVC(gamma=0.001, C=100.)
+ clf.fit(digits.data[:-1], digits.target[:-1])
+ clf.predict(digits.data[-1:])
+
+
+def test_seaborn():
+
+ seaborn = pytest.importorskip('seaborn')
+ tips = seaborn.load_dataset("tips")
+ seaborn.stripplot(x="day", y="total_bill", data=tips)
+
+
+def test_pandas_gbq(df):
+
+    pandas_gbq = pytest.importorskip('pandas_gbq')  # noqa
+
+
+@tm.network
+def test_pandas_datareader():
+
+    pandas_datareader = pytest.importorskip('pandas_datareader')  # noqa
+ pandas_datareader.get_data_yahoo('AAPL')
+
+
+def test_geopandas():
+
+ geopandas = pytest.importorskip('geopandas') # noqa
+ fp = geopandas.datasets.get_path('naturalearth_lowres')
+ assert geopandas.read_file(fp) is not None
+
+
+def test_pyarrow(df):
+
+ pyarrow = pytest.importorskip('pyarrow') # noqa
+ table = pyarrow.Table.from_pandas(df)
+ result = table.to_pandas()
+ tm.assert_frame_equal(result, df)
diff --git a/pandas/types/common.py b/pandas/types/common.py
new file mode 100644
index 0000000000000..a125c27d04596
--- /dev/null
+++ b/pandas/types/common.py
@@ -0,0 +1,8 @@
+import warnings
+
+warnings.warn("pandas.types.common is deprecated and will be "
+ "removed in a future version, import "
+ "from pandas.api.types",
+ DeprecationWarning, stacklevel=3)
+
+from pandas.core.dtypes.common import * # noqa
diff --git a/pandas/util/decorators.py b/pandas/util/decorators.py
new file mode 100644
index 0000000000000..54bb834e829f3
--- /dev/null
+++ b/pandas/util/decorators.py
@@ -0,0 +1,8 @@
+import warnings
+
+warnings.warn("pandas.util.decorators is deprecated and will be "
+ "removed in a future version, import "
+ "from pandas.util",
+ DeprecationWarning, stacklevel=3)
+
+from pandas.util._decorators import * # noqa
diff --git a/pandas/util/hashing.py b/pandas/util/hashing.py
new file mode 100644
index 0000000000000..f97a7ac507407
--- /dev/null
+++ b/pandas/util/hashing.py
@@ -0,0 +1,18 @@
+import warnings
+import sys
+
+m = sys.modules['pandas.util.hashing']
+for t in ['hash_pandas_object', 'hash_array']:
+
+ def outer(t=t):
+
+ def wrapper(*args, **kwargs):
+ from pandas import util
+ warnings.warn("pandas.util.hashing is deprecated and will be "
+ "removed in a future version, import "
+ "from pandas.util",
+ DeprecationWarning, stacklevel=3)
+ return getattr(util, t)(*args, **kwargs)
+ return wrapper
+
+ setattr(m, t, outer(t))
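The shim module above relies on a common pattern: generating deprecated wrapper functions in a loop, using a default argument (``t=t``) to freeze the loop variable in each closure. A standalone sketch of that pattern — the ``_new_home`` dict here is a hypothetical stand-in for the relocated functions, not a pandas API:

```python
import warnings

# hypothetical "new home" for the relocated functions
_new_home = {
    'hash_array': lambda x: ('hashed', x),
    'hash_pandas_object': lambda x: ('hashed-obj', x),
}


def _make_deprecated(name=None):
    # the default argument freezes `name` per iteration; without it,
    # every wrapper would see the final value of the loop variable
    def wrapper(*args, **kwargs):
        warnings.warn("this location is deprecated, import "
                      "from the new home instead",
                      DeprecationWarning, stacklevel=2)
        return _new_home[name](*args, **kwargs)
    wrapper.__name__ = name
    return wrapper


shims = {}
for t in ['hash_array', 'hash_pandas_object']:
    shims[t] = _make_deprecated(name=t)

# demo: the call warns and forwards to the relocated function
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    out = shims['hash_array']([1, 2])
assert out == ('hashed', [1, 2])
assert caught and issubclass(caught[0].category, DeprecationWarning)
```

Each shim keeps the original function name, so introspection and error messages stay readable.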
diff --git a/scripts/build_dist.sh b/scripts/build_dist.sh
index c9c36c18bed9c..d6a7d0ba67239 100755
--- a/scripts/build_dist.sh
+++ b/scripts/build_dist.sh
@@ -10,8 +10,10 @@ read -p "Ok to continue (y/n)? " answer
case ${answer:0:1} in
y|Y )
echo "Building distribution"
+ rm -rf dist
+ git clean -xfd
python setup.py clean
- python setup.py build_ext --inplace
+ python setup.py cython
python setup.py sdist --formats=gztar
;;
* )
| https://api.github.com/repos/pandas-dev/pandas/pulls/16250 | 2017-05-05T10:09:48Z | 2017-05-05T16:54:48Z | 2017-05-05T16:54:48Z | 2017-05-05T16:56:32Z | |
BUG: rolling.quantile does not return an interpolated result | diff --git a/asv_bench/benchmarks/rolling.py b/asv_bench/benchmarks/rolling.py
new file mode 100644
index 0000000000000..9da9d0b855323
--- /dev/null
+++ b/asv_bench/benchmarks/rolling.py
@@ -0,0 +1,185 @@
+from .pandas_vb_common import *
+import pandas as pd
+import numpy as np
+
+
+class DataframeRolling(object):
+ goal_time = 0.2
+
+ def setup(self):
+ self.N = 100000
+ self.Ns = 10000
+ self.df = pd.DataFrame({'a': np.random.random(self.N)})
+ self.dfs = pd.DataFrame({'a': np.random.random(self.Ns)})
+ self.wins = 10
+ self.winl = 1000
+
+ def time_rolling_quantile_0(self):
+ (self.df.rolling(self.wins).quantile(0.0))
+
+ def time_rolling_quantile_1(self):
+ (self.df.rolling(self.wins).quantile(1.0))
+
+ def time_rolling_quantile_median(self):
+ (self.df.rolling(self.wins).quantile(0.5))
+
+ def time_rolling_median(self):
+ (self.df.rolling(self.wins).median())
+
+    def time_rolling_mean(self):
+ (self.df.rolling(self.wins).mean())
+
+ def time_rolling_max(self):
+ (self.df.rolling(self.wins).max())
+
+ def time_rolling_min(self):
+ (self.df.rolling(self.wins).min())
+
+ def time_rolling_std(self):
+ (self.df.rolling(self.wins).std())
+
+ def time_rolling_count(self):
+ (self.df.rolling(self.wins).count())
+
+ def time_rolling_skew(self):
+ (self.df.rolling(self.wins).skew())
+
+ def time_rolling_kurt(self):
+ (self.df.rolling(self.wins).kurt())
+
+ def time_rolling_sum(self):
+ (self.df.rolling(self.wins).sum())
+
+ def time_rolling_corr(self):
+ (self.dfs.rolling(self.wins).corr())
+
+ def time_rolling_cov(self):
+ (self.dfs.rolling(self.wins).cov())
+
+ def time_rolling_quantile_0_l(self):
+ (self.df.rolling(self.winl).quantile(0.0))
+
+ def time_rolling_quantile_1_l(self):
+ (self.df.rolling(self.winl).quantile(1.0))
+
+ def time_rolling_quantile_median_l(self):
+ (self.df.rolling(self.winl).quantile(0.5))
+
+ def time_rolling_median_l(self):
+ (self.df.rolling(self.winl).median())
+
+    def time_rolling_mean_l(self):
+ (self.df.rolling(self.winl).mean())
+
+ def time_rolling_max_l(self):
+ (self.df.rolling(self.winl).max())
+
+ def time_rolling_min_l(self):
+ (self.df.rolling(self.winl).min())
+
+    def time_rolling_std_l(self):
+        (self.df.rolling(self.winl).std())
+
+    def time_rolling_count_l(self):
+        (self.df.rolling(self.winl).count())
+
+    def time_rolling_skew_l(self):
+        (self.df.rolling(self.winl).skew())
+
+    def time_rolling_kurt_l(self):
+        (self.df.rolling(self.winl).kurt())
+
+    def time_rolling_sum_l(self):
+        (self.df.rolling(self.winl).sum())
+
+
+class SeriesRolling(object):
+ goal_time = 0.2
+
+ def setup(self):
+ self.N = 100000
+ self.Ns = 10000
+ self.df = pd.DataFrame({'a': np.random.random(self.N)})
+ self.dfs = pd.DataFrame({'a': np.random.random(self.Ns)})
+ self.sr = self.df.a
+ self.srs = self.dfs.a
+ self.wins = 10
+ self.winl = 1000
+
+ def time_rolling_quantile_0(self):
+ (self.sr.rolling(self.wins).quantile(0.0))
+
+ def time_rolling_quantile_1(self):
+ (self.sr.rolling(self.wins).quantile(1.0))
+
+ def time_rolling_quantile_median(self):
+ (self.sr.rolling(self.wins).quantile(0.5))
+
+ def time_rolling_median(self):
+ (self.sr.rolling(self.wins).median())
+
+    def time_rolling_mean(self):
+ (self.sr.rolling(self.wins).mean())
+
+ def time_rolling_max(self):
+ (self.sr.rolling(self.wins).max())
+
+ def time_rolling_min(self):
+ (self.sr.rolling(self.wins).min())
+
+ def time_rolling_std(self):
+ (self.sr.rolling(self.wins).std())
+
+ def time_rolling_count(self):
+ (self.sr.rolling(self.wins).count())
+
+ def time_rolling_skew(self):
+ (self.sr.rolling(self.wins).skew())
+
+ def time_rolling_kurt(self):
+ (self.sr.rolling(self.wins).kurt())
+
+ def time_rolling_sum(self):
+ (self.sr.rolling(self.wins).sum())
+
+ def time_rolling_corr(self):
+ (self.srs.rolling(self.wins).corr())
+
+ def time_rolling_cov(self):
+ (self.srs.rolling(self.wins).cov())
+
+ def time_rolling_quantile_0_l(self):
+ (self.sr.rolling(self.winl).quantile(0.0))
+
+ def time_rolling_quantile_1_l(self):
+ (self.sr.rolling(self.winl).quantile(1.0))
+
+ def time_rolling_quantile_median_l(self):
+ (self.sr.rolling(self.winl).quantile(0.5))
+
+ def time_rolling_median_l(self):
+ (self.sr.rolling(self.winl).median())
+
+    def time_rolling_mean_l(self):
+ (self.sr.rolling(self.winl).mean())
+
+ def time_rolling_max_l(self):
+ (self.sr.rolling(self.winl).max())
+
+ def time_rolling_min_l(self):
+ (self.sr.rolling(self.winl).min())
+
+    def time_rolling_std_l(self):
+        (self.sr.rolling(self.winl).std())
+
+    def time_rolling_count_l(self):
+        (self.sr.rolling(self.winl).count())
+
+    def time_rolling_skew_l(self):
+        (self.sr.rolling(self.winl).skew())
+
+    def time_rolling_kurt_l(self):
+        (self.sr.rolling(self.winl).kurt())
+
+    def time_rolling_sum_l(self):
+        (self.sr.rolling(self.winl).sum())
diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index d5cc3d6ddca8e..5fcbfd8b571c8 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -167,9 +167,11 @@ Plotting
Groupby/Resample/Rolling
^^^^^^^^^^^^^^^^^^^^^^^^
-- Bug in ``DataFrame.resample().size()`` where an empty ``DataFrame`` did not return a ``Series`` (:issue:`14962`)
+- Bug in ``DataFrame.resample().size()`` where an empty ``DataFrame`` did not return a ``Series`` (:issue:`14962`)
- Bug in ``infer_freq`` causing indices with 2-day gaps during the working week to be wrongly inferred as business daily (:issue:`16624`)
+- Bug in ``.rolling.quantile()`` which incorrectly used different defaults than :func:`Series.quantile()` and :func:`DataFrame.quantile()` (:issue:`9413`, :issue:`16211`)
+
Sparse
^^^^^^
@@ -190,6 +192,7 @@ Categorical
^^^^^^^^^^^
+
Other
^^^^^
- Bug in :func:`eval` where the ``inplace`` parameter was being incorrectly handled (:issue:`16732`)
diff --git a/pandas/_libs/window.pyx b/pandas/_libs/window.pyx
index 3bb8abe26c781..2450eea5500cd 100644
--- a/pandas/_libs/window.pyx
+++ b/pandas/_libs/window.pyx
@@ -1348,8 +1348,9 @@ def roll_quantile(ndarray[float64_t, cast=True] input, int64_t win,
bint is_variable
ndarray[int64_t] start, end
ndarray[double_t] output
+ double vlow, vhigh
- if quantile < 0.0 or quantile > 1.0:
+ if quantile <= 0.0 or quantile >= 1.0:
        raise ValueError("quantile value {0} not in (0, 1)".format(quantile))
# we use the Fixed/Variable Indexer here as the
@@ -1391,7 +1392,17 @@ def roll_quantile(ndarray[float64_t, cast=True] input, int64_t win,
if nobs >= minp:
idx = int(quantile * <double>(nobs - 1))
- output[i] = skiplist.get(idx)
+
+ # Single value in skip list
+ if nobs == 1:
+ output[i] = skiplist.get(0)
+
+ # Interpolated quantile
+ else:
+ vlow = skiplist.get(idx)
+ vhigh = skiplist.get(idx + 1)
+ output[i] = (vlow + (vhigh - vlow) *
+ (quantile * (nobs - 1) - idx))
else:
output[i] = NaN
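The hunk above implements the same linear interpolation that ``np.percentile`` uses by default. A pure-Python sketch of the per-window computation (ignoring the skiplist machinery used for efficiency):

```python
def interpolated_quantile(window, quantile):
    """Linear-interpolation quantile over one window's values."""
    values = sorted(window)
    n = len(values)
    if n == 1:
        return values[0]
    # fractional position of the quantile among the n sorted values
    pos = quantile * (n - 1)
    idx = int(pos)
    if idx == n - 1:
        return values[-1]
    vlow, vhigh = values[idx], values[idx + 1]
    return vlow + (vhigh - vlow) * (pos - idx)


# the median of four values falls midway between the middle pair
assert interpolated_quantile([1, 2, 3, 4], 0.5) == 2.5
```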
diff --git a/pandas/core/window.py b/pandas/core/window.py
index 02b508bb94e4c..57611794c375f 100644
--- a/pandas/core/window.py
+++ b/pandas/core/window.py
@@ -975,8 +975,15 @@ def quantile(self, quantile, **kwargs):
def f(arg, *args, **kwargs):
minp = _use_window(self.min_periods, window)
- return _window.roll_quantile(arg, window, minp, indexi,
- self.closed, quantile)
+ if quantile == 1.0:
+ return _window.roll_max(arg, window, minp, indexi,
+ self.closed)
+ elif quantile == 0.0:
+ return _window.roll_min(arg, window, minp, indexi,
+ self.closed)
+ else:
+ return _window.roll_quantile(arg, window, minp, indexi,
+ self.closed, quantile)
return self._apply(f, 'quantile', quantile=quantile,
**kwargs)
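The dispatch above treats the endpoints specially: quantile 0.0 of any window is its minimum and quantile 1.0 its maximum, so the cheaper min/max kernels can be used. A quick sanity check (a sketch, assuming pandas and NumPy are installed):

```python
import numpy as np
import pandas as pd

s = pd.Series(np.random.RandomState(0).rand(50))

# the quantile endpoints coincide with the rolling min / max
assert s.rolling(10).quantile(0.0).equals(s.rolling(10).min())
assert s.rolling(10).quantile(1.0).equals(s.rolling(10).max())
```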
diff --git a/pandas/tests/test_window.py b/pandas/tests/test_window.py
index 9c3765ffdb716..3ba5d2065cddf 100644
--- a/pandas/tests/test_window.py
+++ b/pandas/tests/test_window.py
@@ -1122,8 +1122,19 @@ def test_rolling_quantile(self):
def scoreatpercentile(a, per):
values = np.sort(a, axis=0)
- idx = per / 1. * (values.shape[0] - 1)
- return values[int(idx)]
+ idx = int(per / 1. * (values.shape[0] - 1))
+
+ if idx == values.shape[0] - 1:
+ retval = values[-1]
+
+ else:
+ qlow = float(idx) / float(values.shape[0] - 1)
+ qhig = float(idx + 1) / float(values.shape[0] - 1)
+ vlow = values[idx]
+ vhig = values[idx + 1]
+ retval = vlow + (vhig - vlow) * (per - qlow) / (qhig - qlow)
+
+ return retval
for q in qs:
@@ -1138,6 +1149,30 @@ def alt(x):
self._check_moment_func(f, alt, name='quantile', quantile=q)
+ def test_rolling_quantile_np_percentile(self):
+ # #9413: Tests that rolling window's quantile default behavior
+        # is analogous to NumPy's percentile
+ row = 10
+ col = 5
+        idx = pd.date_range('20100101', periods=row, freq='B')
+ df = pd.DataFrame(np.random.rand(row * col).reshape((row, -1)),
+ index=idx)
+
+ df_quantile = df.quantile([0.25, 0.5, 0.75], axis=0)
+ np_percentile = np.percentile(df, [25, 50, 75], axis=0)
+
+ tm.assert_almost_equal(df_quantile.values, np.array(np_percentile))
+
+ def test_rolling_quantile_series(self):
+ # #16211: Tests that rolling window's quantile default behavior
+        # is analogous to pd.Series' quantile
+ arr = np.arange(100)
+ s = pd.Series(arr)
+ q1 = s.quantile(0.1)
+ q2 = s.rolling(100).quantile(0.1).iloc[-1]
+
+ tm.assert_almost_equal(q1, q2)
+
def test_rolling_quantile_param(self):
ser = Series([0.0, .1, .5, .9, 1.0])
@@ -3558,7 +3593,7 @@ def test_ragged_quantile(self):
result = df.rolling(window='2s', min_periods=1).quantile(0.5)
expected = df.copy()
- expected['B'] = [0.0, 1, 1.0, 3.0, 3.0]
+ expected['B'] = [0.0, 1, 1.5, 3.0, 3.5]
tm.assert_frame_equal(result, expected)
def test_ragged_std(self):
| Now computing the quantile of a rolling window matches the default of the rolling method in Series, DataFrame and np.percentile.
Fixes bugs #9413 and #16211
- [x] closes #9413
- [x] tests added / passed
- [x] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff``
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/16247 | 2017-05-05T06:52:13Z | 2017-07-10T10:15:08Z | 2017-07-10T10:15:08Z | 2017-07-10T10:15:11Z |
DOC: Whatsnew cleanup | diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
index bfd8031b4c305..61042071a52ec 100644
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -1,7 +1,7 @@
.. _whatsnew_0200:
-v0.20.0 (May 12, 2017)
-------------------------
+v0.20.0 (May 4, 2017)
+---------------------
This is a major release from 0.19.2 and includes a number of API changes, deprecations, new features,
enhancements, and performance improvements along with a large number of bug fixes. We recommend that all
@@ -17,8 +17,8 @@ Highlights include:
- Improved user API when accessing levels in ``.groupby()``, see :ref:`here <whatsnew_0200.enhancements.groupby_access>`
- Improved support for ``UInt64`` dtypes, see :ref:`here <whatsnew_0200.enhancements.uint64_support>`
- A new orient for JSON serialization, ``orient='table'``, that uses the :ref:`Table Schema spec <whatsnew_0200.enhancements.table_schema>`
-- Experimental support for exporting ``DataFrame.style`` formats to Excel , see :ref:`here <whatsnew_0200.enhancements.style_excel>`
-- Window Binary Corr/Cov operations now return a MultiIndexed ``DataFrame`` rather than a ``Panel``, as ``Panel`` is now deprecated, see :ref:`here <whatsnew_0200.api_breaking.rolling_pairwise>`
+- Experimental support for exporting ``DataFrame.style`` formats to Excel, see :ref:`here <whatsnew_0200.enhancements.style_excel>`
+- Window binary corr/cov operations now return a MultiIndexed ``DataFrame`` rather than a ``Panel``, as ``Panel`` is now deprecated, see :ref:`here <whatsnew_0200.api_breaking.rolling_pairwise>`
- Support for S3 handling now uses ``s3fs``, see :ref:`here <whatsnew_0200.api_breaking.s3>`
- Google BigQuery support now uses the ``pandas-gbq`` library, see :ref:`here <whatsnew_0200.api_breaking.gbq>`
- Switched the test framework to use `pytest <http://doc.pytest.org/en/latest>`__ (:issue:`13097`)
@@ -44,10 +44,10 @@ New features
``agg`` API
^^^^^^^^^^^
-Series & DataFrame have been enhanced to support the aggregation API. This is an already familiar API that
-is supported for groupby, window operations, and resampling. This allows one to express aggregation operations
-in a single concise way by using :meth:`~DataFrame.agg`,
-and :meth:`~DataFrame.transform`. The full documentation is :ref:`here <basics.aggregate>` (:issue:`1623`).
+Series & DataFrame have been enhanced to support the aggregation API. This is a familiar API
+from groupby, window operations, and resampling. This allows aggregation operations in a concise
+way by using :meth:`~DataFrame.agg` and :meth:`~DataFrame.transform`. The full documentation
+is :ref:`here <basics.aggregate>` (:issue:`1623`).
Here is a sample
@@ -66,28 +66,28 @@ Using a single function is equivalent to ``.apply``.
df.agg('sum')
-Multiple functions in lists.
+Multiple aggregations with a list of functions.
.. ipython:: python
df.agg(['sum', 'min'])
-Using a dict provides the ability to have selective aggregation per column.
-You will get a matrix-like output of all of the aggregators. The output will consist
-of all unique functions. Those that are not noted for a particular column will be ``NaN``:
+Using a dict provides the ability to apply specific aggregations per column.
+You will get a matrix-like output of all of the aggregators. The output has one column
+per unique function. Those functions applied to a particular column will be ``NaN``:
.. ipython:: python
df.agg({'A' : ['sum', 'min'], 'B' : ['min', 'max']})
-The API also supports a ``.transform()`` function to provide for broadcasting results.
+The API also supports a ``.transform()`` function for broadcasting results.
.. ipython:: python
:okwarning:
df.transform(['abs', lambda x: x - x.min()])
-When presented with mixed dtypes that cannot aggregate, ``.agg()`` will only take the valid
+When presented with mixed dtypes that cannot be aggregated, ``.agg()`` will only take the valid
aggregations. This is similar to how groupby ``.agg()`` works. (:issue:`15015`)
.. ipython:: python
@@ -107,7 +107,7 @@ aggregations. This is similiar to how groupby ``.agg()`` works. (:issue:`15015`)
``dtype`` keyword for data IO
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-The ``dtype`` keyword argument in the :func:`read_csv` function for specifying the types of parsed columns is now supported with the ``'python'`` engine (:issue:`14295`). See the :ref:`io docs <io.dtypes>` for more information.
+The ``'python'`` engine for :func:`read_csv` now accepts the ``dtype`` keyword argument for specifying the types of specific columns (:issue:`14295`). See the :ref:`io docs <io.dtypes>` for more information.
.. ipython:: python
:suppress:
@@ -156,7 +156,7 @@ Commonly called 'unix epoch' or POSIX time. This was the previous default, so th
Groupby Enhancements
^^^^^^^^^^^^^^^^^^^^
-Strings passed to ``DataFrame.groupby()`` as the ``by`` parameter may now reference either column names or index level names (:issue:`5677`)
+Strings passed to ``DataFrame.groupby()`` as the ``by`` parameter may now reference either column names or index level names.
.. ipython:: python
@@ -172,6 +172,9 @@ Strings passed to ``DataFrame.groupby()`` as the ``by`` parameter may now refere
df.groupby(['second', 'A']).sum()
+Previously, only column names could be referenced. (:issue:`5677`)
+
+
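A minimal sketch of the new behavior (assumes pandas is installed): the string passed to ``groupby`` may now name an index level as well as a column.

```python
import numpy as np
import pandas as pd

arrays = [['bar', 'bar', 'foo', 'foo'],
          ['one', 'two', 'one', 'two']]
index = pd.MultiIndex.from_arrays(arrays, names=['first', 'second'])
df = pd.DataFrame({'A': [1, 1, 2, 2],
                   'B': np.arange(4)},
                  index=index)

# 'second' is an index level name, not a column
result = df.groupby('second').sum()
assert list(result.index) == ['one', 'two']
assert list(result['B']) == [2, 4]
```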
.. _whatsnew_0200.enhancements.compressed_urls:
Better support for compressed URLs in ``read_csv``
@@ -181,8 +184,8 @@ The compression code was refactored (:issue:`12688`). As a result, reading
dataframes from URLs in :func:`read_csv` or :func:`read_table` now supports
additional compression methods: ``xz``, ``bz2``, and ``zip`` (:issue:`14570`).
Previously, only ``gzip`` compression was supported. By default, compression of
-URLs and paths are now both inferred using their file extensions. Additionally,
-support for bz2 compression in the python 2 c-engine improved (:issue:`14874`).
+URLs and paths are now inferred using their file extensions. Additionally,
+support for bz2 compression in the Python 2 C-engine has improved (:issue:`14874`).
.. ipython:: python
@@ -203,7 +206,7 @@ Pickle file I/O now supports compression
:func:`read_pickle`, :meth:`DataFame.to_pickle` and :meth:`Series.to_pickle`
can now read from and write to compressed pickle files. Compression methods
can be an explicit parameter or be inferred from the file extension.
-See :ref:`the docs here <io.pickle.compression>`
+See :ref:`the docs here <io.pickle.compression>`.
.. ipython:: python
@@ -432,7 +435,7 @@ New behavior:
c
c.categories
-Furthermore, this allows one to bin *other* data with these same bins, with ``NaN`` represents a missing
+Furthermore, this allows one to bin *other* data with these same bins, with ``NaN`` representing a missing
value similar to other dtypes.
.. ipython:: python
@@ -465,7 +468,7 @@ Selecting via a scalar value that is contained *in* the intervals.
Other Enhancements
^^^^^^^^^^^^^^^^^^
-- ``DataFrame.rolling()`` now accepts the parameter ``closed='right'|'left'|'both'|'neither'`` to choose the rolling window endpoint closedness. See the :ref:`documentation <stats.rolling_window.endpoints>` (:issue:`13965`)
+- ``DataFrame.rolling()`` now accepts the parameter ``closed='right'|'left'|'both'|'neither'`` to choose the rolling window-endpoint closedness. See the :ref:`documentation <stats.rolling_window.endpoints>` (:issue:`13965`)
- Integration with the ``feather-format``, including a new top-level ``pd.read_feather()`` and ``DataFrame.to_feather()`` method, see :ref:`here <io.feather>`.
- ``Series.str.replace()`` now accepts a callable, as replacement, which is passed to ``re.sub`` (:issue:`15055`)
- ``Series.str.replace()`` now accepts a compiled regular expression as a pattern (:issue:`15446`)
@@ -473,11 +476,9 @@ Other Enhancements
- ``DataFrame`` has gained a ``nunique()`` method to count the distinct values over an axis (:issue:`14336`).
- ``DataFrame`` has gained a ``melt()`` method, equivalent to ``pd.melt()``, for unpivoting from a wide to long format (:issue:`12640`).
- ``DataFrame.groupby()`` has gained a ``.nunique()`` method to count the distinct values for all columns within each group (:issue:`14336`, :issue:`15197`).
-
- ``pd.read_excel()`` now preserves sheet order when using ``sheetname=None`` (:issue:`9930`)
- Multiple offset aliases with decimal points are now supported (e.g. ``0.5min`` is parsed as ``30s``) (:issue:`8419`)
- ``.isnull()`` and ``.notnull()`` have been added to ``Index`` object to make them more consistent with the ``Series`` API (:issue:`15300`)
-
- New ``UnsortedIndexError`` (subclass of ``KeyError``) raised when indexing/slicing into an
unsorted MultiIndex (:issue:`11897`). This allows differentiation between errors due to lack
of sorting or an incorrect key. See :ref:`here <advanced.unsorted>`
@@ -497,20 +498,19 @@ Other Enhancements
- ``Timedelta.isoformat`` method added for formatting Timedeltas as an `ISO 8601 duration`_. See the :ref:`Timedelta docs <timedeltas.isoformat>` (:issue:`15136`)
- ``.select_dtypes()`` now allows the string ``datetimetz`` to generically select datetimes with tz (:issue:`14910`)
- The ``.to_latex()`` method will now accept ``multicolumn`` and ``multirow`` arguments to use the accompanying LaTeX enhancements
-
- ``pd.merge_asof()`` gained the option ``direction='backward'|'forward'|'nearest'`` (:issue:`14887`)
- ``Series/DataFrame.asfreq()`` have gained a ``fill_value`` parameter, to fill missing values (:issue:`3715`).
- ``Series/DataFrame.resample.asfreq`` have gained a ``fill_value`` parameter, to fill missing values during resampling (:issue:`3715`).
-- ``pandas.util.hashing`` has gained a ``hash_tuples`` routine, and ``hash_pandas_object`` has gained the ability to hash a ``MultiIndex`` (:issue:`15224`)
+- :func:`pandas.util.hash_pandas_object` has gained the ability to hash a ``MultiIndex`` (:issue:`15224`)
- ``Series/DataFrame.squeeze()`` have gained the ``axis`` parameter. (:issue:`15339`)
- ``DataFrame.to_excel()`` has a new ``freeze_panes`` parameter to turn on Freeze Panes when exporting to Excel (:issue:`15160`)
-- ``pd.read_html()`` will parse multiple header rows, creating a multiindex header. (:issue:`13434`).
+- ``pd.read_html()`` will parse multiple header rows, creating a MultiIndex header. (:issue:`13434`).
- HTML table output skips ``colspan`` or ``rowspan`` attribute if equal to 1. (:issue:`15403`)
-- ``pd.io.api.Styler`` template now has blocks for easier extension, :ref:`see the example notebook <style.ipynb#Subclassing>` (:issue:`15649`)
+- :class:`pandas.io.formats.style.Styler` template now has blocks for easier extension, :ref:`see the example notebook <style.ipynb#Subclassing>` (:issue:`15649`)
+- :meth:`pandas.io.formats.style.Styler.render` now accepts ``**kwargs`` to allow user-defined variables in the template (:issue:`15649`)
- ``pd.io.api.Styler.render`` now accepts ``**kwargs`` to allow user-defined variables in the template (:issue:`15649`)
-- Compatability with Jupyter notebook 5.0; MultiIndex column labels are left-aligned and MultiIndex row-labels are top-aligned (:issue:`15379`)
-
-- ``TimedeltaIndex`` now has a custom datetick formatter specifically designed for nanosecond level precision (:issue:`8711`)
+- Compatibility with Jupyter notebook 5.0; MultiIndex column labels are left-aligned and MultiIndex row-labels are top-aligned (:issue:`15379`)
+- ``TimedeltaIndex`` now has a custom date-tick formatter specifically designed for nanosecond level precision (:issue:`8711`)
- ``pd.api.types.union_categoricals`` gained the ``ignore_ordered`` argument to allow ignoring the ordered attribute of unioned categoricals (:issue:`13410`). See the :ref:`categorical union docs <categorical.union>` for more information.
- ``DataFrame.to_latex()`` and ``DataFrame.to_string()`` now allow optional header aliases. (:issue:`15536`)
- Re-enable the ``parse_dates`` keyword of ``pd.read_excel()`` to parse string columns as dates (:issue:`14326`)
@@ -524,9 +524,8 @@ Other Enhancements
- ``pd.read_csv()`` now supports the ``error_bad_lines`` and ``warn_bad_lines`` arguments for the Python parser (:issue:`15925`)
- The ``display.show_dimensions`` option can now also be used to specify
whether the length of a ``Series`` should be shown in its repr (:issue:`7117`).
-- ``parallel_coordinates()`` has gained a ``sort_labels`` keyword arg that sorts class labels and the colours assigned to them (:issue:`15908`)
+- ``parallel_coordinates()`` has gained a ``sort_labels`` keyword argument that sorts class labels and the colors assigned to them (:issue:`15908`)
- Options added to allow one to turn on/off using ``bottleneck`` and ``numexpr``, see :ref:`here <basics.accelerate>` (:issue:`16157`)
-
- ``DataFrame.style.bar()`` now accepts two more options to further customize the bar chart. Bar alignment is set with ``align='left'|'mid'|'zero'``, the default is "left", which is backward compatible; you can now pass a list of ``color=[color_negative, color_positive]``. (:issue:`14757`)
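Two of the enhancements listed above, ``DataFrame.nunique()`` and ``DataFrame.melt()``, can be sketched with a small self-contained example (the frame and column names are illustrative, not taken from the PR):

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 1, 2], "B": [1, 2, 3]})

# nunique() counts distinct values along an axis (per column by default)
counts = df.nunique()
assert counts["A"] == 2 and counts["B"] == 3

# melt() is the method equivalent of pd.melt(): unpivot wide data to long
long_df = df.melt(id_vars="A", value_vars="B")
assert list(long_df.columns) == ["A", "variable", "value"]
assert len(long_df) == 3
```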
@@ -653,7 +652,7 @@ Accessing datetime fields of Index now return Index
The datetime-related attributes (see :ref:`here <timeseries.components>`
for an overview) of ``DatetimeIndex``, ``PeriodIndex`` and ``TimedeltaIndex`` previously
returned numpy arrays. They will now return a new ``Index`` object, except
-in the case of a boolean field, where the result will stil be a boolean ndarray. (:issue:`15022`)
+in the case of a boolean field, where the result will still be a boolean ndarray. (:issue:`15022`)
Previous behaviour:
@@ -682,7 +681,7 @@ pd.unique will now be consistent with extension types
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In prior versions, using ``Series.unique()`` and :func:`unique` on ``Categorical`` and tz-aware
-datatypes would yield different return types. These are now made consistent. (:issue:`15903`)
+data-types would yield different return types. These are now made consistent. (:issue:`15903`)
- Datetime tz-aware
@@ -1044,7 +1043,7 @@ HDFStore where string comparison
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In previous versions most types could be compared to a string column in an ``HDFStore``
-usually resulting in an invalid comparsion, returning an empty result frame. These comparisions will now raise a
+usually resulting in an invalid comparison, returning an empty result frame. These comparisons will now raise a
``TypeError`` (:issue:`15492`)
.. ipython:: python
@@ -1085,8 +1084,8 @@ Index.intersection and inner join now preserve the order of the left Index
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
:meth:`Index.intersection` now preserves the order of the calling ``Index`` (left)
-instead of the other ``Index`` (right) (:issue:`15582`). This affects the inner
-joins, :meth:`DataFrame.join` and :func:`merge`, and the ``.align`` methods.
+instead of the other ``Index`` (right) (:issue:`15582`). This affects inner
+joins, :meth:`DataFrame.join` and :func:`merge`, and the ``.align`` method.
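A quick sketch of the behaviour described above (index values are illustrative; only membership is asserted, since the default ``sort`` behaviour of ``Index.intersection`` has changed across later versions):

```python
import pandas as pd

left = pd.Index([2, 1, 0])
right = pd.Index([1, 2, 3])

# The intersection is computed from the calling (left) index
result = left.intersection(right)
assert set(result) == {1, 2}
```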
- ``Index.intersection``
@@ -1141,7 +1140,7 @@ Pivot Table always returns a DataFrame
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The documentation for :meth:`pivot_table` states that a ``DataFrame`` is *always* returned. Here a bug
-is fixed that allowed this to return a ``Series`` under a narrow circumstance. (:issue:`4386`)
+is fixed that allowed this to return a ``Series`` under certain circumstances. (:issue:`4386`)
.. ipython:: python
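A minimal illustration of the fixed contract (data chosen arbitrarily): even in the narrow one-value-column case, a ``DataFrame`` comes back:

```python
import pandas as pd

df = pd.DataFrame({"col": [3, 4, 5], "val": [1, 2, 3]})

# pivot_table() aggregates with mean by default; after this fix the
# result is always a DataFrame, never a Series
result = df.pivot_table(index="col", values="val")
assert isinstance(result, pd.DataFrame)
assert result["val"].tolist() == [1, 2, 3]
```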
@@ -1199,7 +1198,6 @@ Other API Changes
- ``NaT`` will now returns ``NaT`` for ``tz_localize`` and ``tz_convert``
methods (:issue:`15830`)
- ``DataFrame`` and ``Panel`` constructors with invalid input will now raise ``ValueError`` rather than ``PandasError``, if called with scalar inputs and not axes (:issue:`15541`)
-
- ``DataFrame`` and ``Panel`` constructors with invalid input will now raise ``ValueError`` rather than ``pandas.core.common.PandasError``, if called with scalar inputs and not axes; The exception ``PandasError`` is removed as well. (:issue:`15541`)
- The exception ``pandas.core.common.AmbiguousIndexError`` is removed as it is not referenced (:issue:`15541`)
@@ -1324,7 +1322,6 @@ Deprecate ``.ix``
The ``.ix`` indexer is deprecated, in favor of the more strict ``.iloc`` and ``.loc`` indexers. ``.ix`` offers a lot of magic on the inference of what the user wants to do. To wit, ``.ix`` can decide to index *positionally* OR via *labels*, depending on the data type of the index. This has caused quite a bit of user confusion over the years. The full indexing documentation is :ref:`here <indexing>`. (:issue:`14218`)
-
The recommended methods of indexing are:
- ``.loc`` if you want to *label* index
@@ -1720,7 +1717,7 @@ Reshaping
- Bug in ``DataFrame.pivot_table()`` where ``dropna=True`` would not drop all-NaN columns when the columns was a ``category`` dtype (:issue:`15193`)
- Bug in ``pd.melt()`` where passing a tuple value for ``value_vars`` caused a ``TypeError`` (:issue:`15348`)
- Bug in ``pd.pivot_table()`` where no error was raised when values argument was not in the columns (:issue:`14938`)
-- Bug in ``pd.concat()`` in which concatting with an empty dataframe with ``join='inner'`` was being improperly handled (:issue:`15328`)
+- Bug in ``pd.concat()`` in which concatenating with an empty dataframe with ``join='inner'`` was being improperly handled (:issue:`15328`)
- Bug with ``sort=True`` in ``DataFrame.join`` and ``pd.merge`` when joining on indexes (:issue:`15582`)
- Bug in ``DataFrame.nsmallest`` and ``DataFrame.nlargest`` where identical values resulted in duplicated rows (:issue:`15297`)
| https://api.github.com/repos/pandas-dev/pandas/pulls/16245 | 2017-05-05T01:59:15Z | 2017-05-05T02:23:35Z | 2017-05-05T02:23:35Z | 2017-05-27T16:36:38Z | |
BUG: Incorrect handling of rolling.cov with offset window | diff --git a/doc/source/whatsnew/v0.20.2.txt b/doc/source/whatsnew/v0.20.2.txt
index 13365401f1d1c..8182a1c111a0a 100644
--- a/doc/source/whatsnew/v0.20.2.txt
+++ b/doc/source/whatsnew/v0.20.2.txt
@@ -80,6 +80,7 @@ Groupby/Resample/Rolling
^^^^^^^^^^^^^^^^^^^^^^^^
- Bug creating datetime rolling window on an empty DataFrame (:issue:`15819`)
+- Bug in ``rolling.cov()`` with offset window (:issue:`16058`)
Sparse
diff --git a/pandas/core/window.py b/pandas/core/window.py
index cf1bad706ae1d..ba7e79944ab0e 100644
--- a/pandas/core/window.py
+++ b/pandas/core/window.py
@@ -81,6 +81,7 @@ def __init__(self, obj, window=None, min_periods=None, freq=None,
self.freq = freq
self.center = center
self.win_type = win_type
+ self.win_freq = None
self.axis = obj._get_axis_number(axis) if axis is not None else None
self.validate()
@@ -996,7 +997,12 @@ def cov(self, other=None, pairwise=None, ddof=1, **kwargs):
# only default unset
pairwise = True if pairwise is None else pairwise
other = self._shallow_copy(other)
- window = self._get_window(other)
+
+ # GH 16058: offset window
+ if self.is_freq_type:
+ window = self.win_freq
+ else:
+ window = self._get_window(other)
def _get_cov(X, Y):
# GH #12373 : rolling functions error on float32 data
@@ -1088,6 +1094,7 @@ def validate(self):
"based windows")
# this will raise ValueError on non-fixed freqs
+ self.win_freq = self.window
self.window = freq.nanos
self.win_type = 'freq'
diff --git a/pandas/tests/test_window.py b/pandas/tests/test_window.py
index 6a640d62108b3..cbb3c345a9353 100644
--- a/pandas/tests/test_window.py
+++ b/pandas/tests/test_window.py
@@ -3833,3 +3833,26 @@ def test_non_monotonic(self):
df2 = df.sort_values('B')
result = df2.groupby('A').rolling('4s', on='B').C.mean()
tm.assert_series_equal(result, expected)
+
+ def test_rolling_cov_offset(self):
+ # GH16058
+
+ idx = pd.date_range('2017-01-01', periods=24, freq='1h')
+ ss = pd.Series(np.arange(len(idx)), index=idx)
+
+ result = ss.rolling('2h').cov()
+ expected = pd.Series([np.nan] + [0.5 for _ in range(len(idx) - 1)],
+ index=idx)
+ tm.assert_series_equal(result, expected)
+
+ expected2 = ss.rolling(2, min_periods=1).cov()
+ tm.assert_series_equal(result, expected2)
+
+ result = ss.rolling('3h').cov()
+ expected = pd.Series([np.nan, 0.5] +
+ [1.0 for _ in range(len(idx) - 2)],
+ index=idx)
+ tm.assert_series_equal(result, expected)
+
+ expected2 = ss.rolling(3, min_periods=1).cov()
+ tm.assert_series_equal(result, expected2)
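Outside the test suite, the fixed behaviour can be sketched as follows: with evenly spaced hourly data, a ``'2h'`` offset window and a fixed window of two observations should agree (``cov()`` with no ``other`` is the rolling variance of the series with itself):

```python
import numpy as np
import pandas as pd

idx = pd.date_range("2017-01-01", periods=24, freq="1h")
ss = pd.Series(np.arange(len(idx), dtype="float64"), index=idx)

by_offset = ss.rolling("2h").cov()
by_count = ss.rolling(2, min_periods=1).cov()

# First value is NaN (one observation with ddof=1); for consecutive
# integers every later two-point window has variance 0.5
assert np.isnan(by_offset.iloc[0])
assert (by_offset.iloc[1:] == 0.5).all()
assert by_offset.equals(by_count)
```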
| - [x] closes #16058
- [x] tests added / passed
- [x] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff``
- [x] whatsnew entry | https://api.github.com/repos/pandas-dev/pandas/pulls/16244 | 2017-05-04T22:33:50Z | 2017-05-30T23:12:51Z | 2017-05-30T23:12:51Z | 2017-06-04T17:02:55Z |
TST: Test CategoricalIndex in test_is_categorical | diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py
index 6c2bbe330eeee..bfec1ec3ebe8c 100644
--- a/pandas/core/dtypes/common.py
+++ b/pandas/core/dtypes/common.py
@@ -205,13 +205,15 @@ def is_categorical(arr):
>>> is_categorical([1, 2, 3])
False
- Categoricals and Series Categoricals will return True.
+ Categoricals, Series Categoricals, and CategoricalIndex will return True.
>>> cat = pd.Categorical([1, 2, 3])
>>> is_categorical(cat)
True
>>> is_categorical(pd.Series(cat))
True
+ >>> is_categorical(pd.CategoricalIndex([1, 2, 3]))
+ True
"""
return isinstance(arr, ABCCategorical) or is_categorical_dtype(arr)
diff --git a/pandas/tests/dtypes/test_common.py b/pandas/tests/dtypes/test_common.py
index 5b74397b1e770..4633dde5ed537 100644
--- a/pandas/tests/dtypes/test_common.py
+++ b/pandas/tests/dtypes/test_common.py
@@ -158,6 +158,7 @@ def test_is_categorical():
cat = pd.Categorical([1, 2, 3])
assert com.is_categorical(cat)
assert com.is_categorical(pd.Series(cat))
+ assert com.is_categorical(pd.CategoricalIndex([1, 2, 3]))
assert not com.is_categorical([1, 2, 3])
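The three cases exercised by the test all carry a ``CategoricalDtype``, so the same distinction can be sketched with only public, dtype-based checks (this is the equivalent idiom, not the internal ``com.is_categorical`` call itself):

```python
import pandas as pd

cat = pd.Categorical([1, 2, 3])

# A Categorical, a Series wrapping it, and a CategoricalIndex all
# expose a CategoricalDtype; a plain Index does not
assert isinstance(cat.dtype, pd.CategoricalDtype)
assert isinstance(pd.Series(cat).dtype, pd.CategoricalDtype)
assert isinstance(pd.CategoricalIndex([1, 2, 3]).dtype, pd.CategoricalDtype)
assert not isinstance(pd.Index([1, 2, 3]).dtype, pd.CategoricalDtype)
```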
| Title is self-explanatory.
Follow-up to #16237.
| https://api.github.com/repos/pandas-dev/pandas/pulls/16243 | 2017-05-04T22:03:23Z | 2017-05-04T23:31:49Z | 2017-05-04T23:31:49Z | 2017-05-05T01:11:28Z |
TST: xfail some bottleneck on windows | diff --git a/ci/requirements-3.6_WIN.run b/ci/requirements-3.6_WIN.run
index 840d2867e9297..899bfbc6b6b23 100644
--- a/ci/requirements-3.6_WIN.run
+++ b/ci/requirements-3.6_WIN.run
@@ -1,6 +1,7 @@
python-dateutil
pytz
numpy=1.12*
+bottleneck
openpyxl
xlsxwriter
xlrd
diff --git a/pandas/tests/series/test_analytics.py b/pandas/tests/series/test_analytics.py
index 257f992f57f6d..ec6a118ec3639 100644
--- a/pandas/tests/series/test_analytics.py
+++ b/pandas/tests/series/test_analytics.py
@@ -19,7 +19,7 @@
import pandas.core.nanops as nanops
-from pandas.compat import lrange, range
+from pandas.compat import lrange, range, is_platform_windows
from pandas import compat
from pandas.util.testing import (assert_series_equal, assert_almost_equal,
assert_frame_equal, assert_index_equal)
@@ -28,6 +28,10 @@
from .common import TestData
+skip_if_bottleneck_on_windows = (is_platform_windows() and
+ nanops._USE_BOTTLENECK)
+
+
class TestSeriesAnalytics(TestData):
def test_sum_zero(self):
@@ -64,14 +68,6 @@ def test_overflow(self):
result = s.max(skipna=False)
assert int(result) == v[-1]
- # use bottleneck if available
- result = s.sum()
- assert int(result) == v.sum(dtype='int64')
- result = s.min()
- assert int(result) == 0
- result = s.max()
- assert int(result) == v[-1]
-
for dtype in ['float32', 'float64']:
v = np.arange(5000000, dtype=dtype)
s = Series(v)
@@ -84,6 +80,28 @@ def test_overflow(self):
result = s.max(skipna=False)
assert np.allclose(float(result), v[-1])
+ @pytest.mark.xfail(
+ skip_if_bottleneck_on_windows,
+ reason="buggy bottleneck with sum overflow on windows")
+ def test_overflow_with_bottleneck(self):
+ # GH 6915
+ # overflowing on the smaller int dtypes
+ for dtype in ['int32', 'int64']:
+ v = np.arange(5000000, dtype=dtype)
+ s = Series(v)
+
+ # use bottleneck if available
+ result = s.sum()
+ assert int(result) == v.sum(dtype='int64')
+ result = s.min()
+ assert int(result) == 0
+ result = s.max()
+ assert int(result) == v[-1]
+
+ for dtype in ['float32', 'float64']:
+ v = np.arange(5000000, dtype=dtype)
+ s = Series(v)
+
# use bottleneck if available
result = s.sum()
assert result == v.sum(dtype=dtype)
@@ -92,6 +110,9 @@ def test_overflow(self):
result = s.max()
assert np.allclose(float(result), v[-1])
+ @pytest.mark.xfail(
+ skip_if_bottleneck_on_windows,
+ reason="buggy bottleneck with sum overflow on windows")
def test_sum(self):
self._check_stat_op('sum', np.sum, check_allna=True)
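The overflow scenario behind these xfails can be reproduced directly: the sum of the first five million integers does not fit in int32, so a correct reduction has to accumulate in 64 bits (which the buggy bottleneck path on Windows failed to do):

```python
import numpy as np
import pandas as pd

v = np.arange(5000000, dtype="int32")
s = pd.Series(v)

# n * (n - 1) / 2 = 12_499_997_500_000, far beyond the int32 max (~2.1e9)
expected = 5000000 * (5000000 - 1) // 2
assert int(s.sum()) == expected
assert int(v.sum(dtype="int64")) == expected
```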
| xref https://github.com/pandas-dev/pandas/issues/16049#issuecomment-299298192 | https://api.github.com/repos/pandas-dev/pandas/pulls/16240 | 2017-05-04T21:28:24Z | 2017-05-04T23:26:09Z | 2017-05-04T23:26:09Z | 2017-05-04T23:27:05Z |
Fix ModuleNotFoundError: No module named 'pandas.formats' | diff --git a/setup.py b/setup.py
index 806047a344281..d101358fb63dd 100755
--- a/setup.py
+++ b/setup.py
@@ -648,6 +648,7 @@ def pxd(name):
'pandas.core.util',
'pandas.computation',
'pandas.errors',
+ 'pandas.formats',
'pandas.io',
'pandas.io.json',
'pandas.io.sas',
| Fixes `test_shim` failure:
```
================================== FAILURES ===================================
__________________________________ test_shim __________________________________
def test_shim():
# https://github.com/pandas-dev/pandas/pull/16059
# Remove in 0.21
with tm.assert_produces_warning(FutureWarning,
check_stacklevel=False):
> from pandas.formats.style import Styler as _styler # noqa
E ModuleNotFoundError: No module named 'pandas.formats'
X:\Python36\lib\site-packages\pandas\tests\io\formats\test_style.py:866: ModuleNotFoundError
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/16239 | 2017-05-04T20:13:13Z | 2017-05-04T20:58:25Z | 2017-05-04T20:58:25Z | 2017-05-04T20:58:33Z |
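The failing ``test_shim`` above uses ``tm.assert_produces_warning`` to check that the shim import emits a ``FutureWarning``; the same kind of assertion can be sketched with only the standard library (the warning message here is illustrative):

```python
import warnings

def shim():
    # stand-in for a deprecated import path such as pandas.formats.style
    warnings.warn("moved; use the new location", FutureWarning, stacklevel=2)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    shim()

assert any(issubclass(w.category, FutureWarning) for w in caught)
```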
TST: Remove __init__ statements in testing | diff --git a/pandas/tests/indexes/test_multi.py b/pandas/tests/indexes/test_multi.py
index 402dba0ba08b8..1fe4d85815c4b 100644
--- a/pandas/tests/indexes/test_multi.py
+++ b/pandas/tests/indexes/test_multi.py
@@ -486,7 +486,7 @@ def test_copy_names(self):
def test_names(self):
- # names are assigned in __init__
+ # names are assigned in setup
names = self.index_names
level_names = [level.name for level in self.index.levels]
assert names == level_names
diff --git a/pandas/tests/test_config.py b/pandas/tests/test_config.py
index f014b16976d39..8d6f36ac6a798 100644
--- a/pandas/tests/test_config.py
+++ b/pandas/tests/test_config.py
@@ -8,22 +8,28 @@
class TestConfig(object):
- def __init__(self, *args):
- super(TestConfig, self).__init__(*args)
-
+ @classmethod
+ def setup_class(cls):
from copy import deepcopy
- self.cf = pd.core.config
- self.gc = deepcopy(getattr(self.cf, '_global_config'))
- self.do = deepcopy(getattr(self.cf, '_deprecated_options'))
- self.ro = deepcopy(getattr(self.cf, '_registered_options'))
+
+ cls.cf = pd.core.config
+ cls.gc = deepcopy(getattr(cls.cf, '_global_config'))
+ cls.do = deepcopy(getattr(cls.cf, '_deprecated_options'))
+ cls.ro = deepcopy(getattr(cls.cf, '_registered_options'))
def setup_method(self, method):
setattr(self.cf, '_global_config', {})
- setattr(
- self.cf, 'options', self.cf.DictWrapper(self.cf._global_config))
+ setattr(self.cf, 'options', self.cf.DictWrapper(
+ self.cf._global_config))
setattr(self.cf, '_deprecated_options', {})
setattr(self.cf, '_registered_options', {})
+ # Our test fixture in conftest.py sets "chained_assignment"
+ # to "raise" only after all test methods have been setup.
+ # However, after this setup, there is no longer any
+ # "chained_assignment" option, so re-register it.
+ self.cf.register_option('chained_assignment', 'raise')
+
def teardown_method(self, method):
setattr(self.cf, '_global_config', self.gc)
setattr(self.cf, '_deprecated_options', self.do)
| We should never have had `__init__` in a test case (even with the old `nose` paradigm).
Closes #16235.
| https://api.github.com/repos/pandas-dev/pandas/pulls/16238 | 2017-05-04T19:45:25Z | 2017-05-04T21:29:16Z | 2017-05-04T21:29:15Z | 2017-05-04T21:29:58Z |
DOC, TST: Document and Test Functions in dtypes/common.py | diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py
index ba822071a3b72..6c2bbe330eeee 100644
--- a/pandas/core/dtypes/common.py
+++ b/pandas/core/dtypes/common.py
@@ -37,7 +37,7 @@ def _ensure_float(arr):
Parameters
----------
- arr : ndarray, Series
+ arr : array-like
The array whose data type we want to enforce as float.
Returns
@@ -82,46 +82,243 @@ def _ensure_categorical(arr):
def is_object_dtype(arr_or_dtype):
+ """
+ Check whether an array-like or dtype is of the object dtype.
+
+ Parameters
+ ----------
+ arr_or_dtype : array-like
+ The array-like or dtype to check.
+
+ Returns
+ -------
+ boolean : Whether or not the array-like or dtype is of the object dtype.
+
+ Examples
+ --------
+ >>> is_object_dtype(object)
+ True
+ >>> is_object_dtype(int)
+ False
+ >>> is_object_dtype(np.array([], dtype=object))
+ True
+ >>> is_object_dtype(np.array([], dtype=int))
+ False
+ >>> is_object_dtype([1, 2, 3])
+ False
+ """
+
if arr_or_dtype is None:
return False
tipo = _get_dtype_type(arr_or_dtype)
return issubclass(tipo, np.object_)
-def is_sparse(array):
- """ return if we are a sparse array """
- return isinstance(array, (ABCSparseArray, ABCSparseSeries))
+def is_sparse(arr):
+ """
+ Check whether an array-like is a pandas sparse array.
+
+ Parameters
+ ----------
+ arr : array-like
+ The array-like to check.
+
+ Returns
+ -------
+ boolean : Whether or not the array-like is a pandas sparse array.
+
+ Examples
+ --------
+ >>> is_sparse(np.array([1, 2, 3]))
+ False
+ >>> is_sparse(pd.SparseArray([1, 2, 3]))
+ True
+ >>> is_sparse(pd.SparseSeries([1, 2, 3]))
+ True
+
+ This function checks only for pandas sparse array instances, so
+ sparse arrays from other libraries will return False.
+
+ >>> from scipy.sparse import bsr_matrix
+ >>> is_sparse(bsr_matrix([1, 2, 3]))
+ False
+ """
+
+ return isinstance(arr, (ABCSparseArray, ABCSparseSeries))
+
+
+def is_scipy_sparse(arr):
+ """
+ Check whether an array-like is a scipy.sparse.spmatrix instance.
+
+ Parameters
+ ----------
+ arr : array-like
+ The array-like to check.
+
+ Returns
+ -------
+ boolean : Whether or not the array-like is a
+ scipy.sparse.spmatrix instance.
+
+ Notes
+ -----
+ If scipy is not installed, this function will always return False.
+
+ Examples
+ --------
+ >>> from scipy.sparse import bsr_matrix
+ >>> is_scipy_sparse(bsr_matrix([1, 2, 3]))
+ True
+ >>> is_scipy_sparse(pd.SparseArray([1, 2, 3]))
+ False
+ >>> is_scipy_sparse(pd.SparseSeries([1, 2, 3]))
+ False
+ """
-def is_scipy_sparse(array):
- """ return if we are a scipy.sparse.spmatrix """
global _is_scipy_sparse
+
if _is_scipy_sparse is None:
try:
from scipy.sparse import issparse as _is_scipy_sparse
except ImportError:
_is_scipy_sparse = lambda _: False
- return _is_scipy_sparse(array)
+ return _is_scipy_sparse(arr)
-def is_categorical(array):
- """ return if we are a categorical possibility """
- return isinstance(array, ABCCategorical) or is_categorical_dtype(array)
+def is_categorical(arr):
+ """
+ Check whether an array-like is a Categorical instance.
-def is_datetimetz(array):
- """ return if we are a datetime with tz array """
- return ((isinstance(array, ABCDatetimeIndex) and
- getattr(array, 'tz', None) is not None) or
- is_datetime64tz_dtype(array))
+ Parameters
+ ----------
+ arr : array-like
+ The array-like to check.
+
+ Returns
+ -------
+ boolean : Whether or not the array-like is of a Categorical instance.
+
+ Examples
+ --------
+ >>> is_categorical([1, 2, 3])
+ False
+
+ Categoricals and Series Categoricals will return True.
+
+ >>> cat = pd.Categorical([1, 2, 3])
+ >>> is_categorical(cat)
+ True
+ >>> is_categorical(pd.Series(cat))
+ True
+ """
+
+ return isinstance(arr, ABCCategorical) or is_categorical_dtype(arr)
+
+
+def is_datetimetz(arr):
+ """
+ Check whether an array-like is a datetime array-like with a timezone
+ component in its dtype.
+
+ Parameters
+ ----------
+ arr : array-like
+ The array-like to check.
+
+ Returns
+ -------
+ boolean : Whether or not the array-like is a datetime array-like with
+ a timezone component in its dtype.
+
+ Examples
+ --------
+ >>> is_datetimetz([1, 2, 3])
+ False
+
+ Although the following examples are both DatetimeIndex objects,
+ the first one returns False because it has no timezone component
+ unlike the second one, which returns True.
+
+ >>> is_datetimetz(pd.DatetimeIndex([1, 2, 3]))
+ False
+ >>> is_datetimetz(pd.DatetimeIndex([1, 2, 3], tz="US/Eastern"))
+ True
+
+ The object need not be a DatetimeIndex object. It just needs to have
+ a dtype which has a timezone component.
+
+ >>> dtype = DatetimeTZDtype("ns", tz="US/Eastern")
+ >>> s = pd.Series([], dtype=dtype)
+ >>> is_datetimetz(s)
+ True
+ """
+
+ # TODO: do we need this function?
+ # It seems like a repeat of is_datetime64tz_dtype.
+
+ return ((isinstance(arr, ABCDatetimeIndex) and
+ getattr(arr, 'tz', None) is not None) or
+ is_datetime64tz_dtype(arr))
+
+
+def is_period(arr):
+ """
+ Check whether an array-like is a periodical index.
+
+ Parameters
+ ----------
+ arr : array-like
+ The array-like to check.
+
+ Returns
+ -------
+ boolean : Whether or not the array-like is a periodical index.
+
+ Examples
+ --------
+ >>> is_period([1, 2, 3])
+ False
+ >>> is_period(pd.Index([1, 2, 3]))
+ False
+ >>> is_period(pd.PeriodIndex(["2017-01-01"], freq="D"))
+ True
+ """
-def is_period(array):
- """ return if we are a period array """
- return isinstance(array, ABCPeriodIndex) or is_period_arraylike(array)
+ # TODO: do we need this function?
+ # It seems like a repeat of is_period_arraylike.
+ return isinstance(arr, ABCPeriodIndex) or is_period_arraylike(arr)
def is_datetime64_dtype(arr_or_dtype):
+ """
+ Check whether an array-like or dtype is of the datetime64 dtype.
+
+ Parameters
+ ----------
+ arr_or_dtype : array-like
+ The array-like or dtype to check.
+
+ Returns
+ -------
+ boolean : Whether or not the array-like or dtype is of
+ the datetime64 dtype.
+
+ Examples
+ --------
+ >>> is_datetime64_dtype(object)
+ False
+ >>> is_datetime64_dtype(np.datetime64)
+ True
+ >>> is_datetime64_dtype(np.array([], dtype=int))
+ False
+ >>> is_datetime64_dtype(np.array([], dtype=np.datetime64))
+ True
+ >>> is_datetime64_dtype([1, 2, 3])
+ False
+ """
+
if arr_or_dtype is None:
return False
try:
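The doctest examples added above can be exercised through the public ``pandas.api.types`` namespace, e.g.:

```python
import numpy as np
from pandas.api.types import is_datetime64_dtype

# Mirrors the doctests added in this PR
assert not is_datetime64_dtype(object)
assert is_datetime64_dtype(np.datetime64)
assert is_datetime64_dtype(np.array([], dtype=np.datetime64))
assert not is_datetime64_dtype([1, 2, 3])
```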
@@ -132,12 +329,69 @@ def is_datetime64_dtype(arr_or_dtype):
def is_datetime64tz_dtype(arr_or_dtype):
+ """
+ Check whether an array-like or dtype is of a DatetimeTZDtype dtype.
+
+ Parameters
+ ----------
+ arr_or_dtype : array-like
+ The array-like or dtype to check.
+
+ Returns
+ -------
+ boolean : Whether or not the array-like or dtype is of
+ a DatetimeTZDtype dtype.
+
+ Examples
+ --------
+ >>> is_datetime64tz_dtype(object)
+ False
+ >>> is_datetime64tz_dtype([1, 2, 3])
+ False
+ >>> is_datetime64tz_dtype(pd.DatetimeIndex([1, 2, 3])) # tz-naive
+ False
+ >>> is_datetime64tz_dtype(pd.DatetimeIndex([1, 2, 3], tz="US/Eastern"))
+ True
+
+ >>> dtype = DatetimeTZDtype("ns", tz="US/Eastern")
+ >>> s = pd.Series([], dtype=dtype)
+ >>> is_datetime64tz_dtype(dtype)
+ True
+ >>> is_datetime64tz_dtype(s)
+ True
+ """
+
if arr_or_dtype is None:
return False
return DatetimeTZDtype.is_dtype(arr_or_dtype)
def is_timedelta64_dtype(arr_or_dtype):
+ """
+ Check whether an array-like or dtype is of the timedelta64 dtype.
+
+ Parameters
+ ----------
+ arr_or_dtype : array-like
+ The array-like or dtype to check.
+
+ Returns
+ -------
+ boolean : Whether or not the array-like or dtype is
+ of the timedelta64 dtype.
+
+ Examples
+ --------
+ >>> is_timedelta64_dtype(object)
+ False
+ >>> is_timedelta64_dtype(np.timedelta64)
+ True
+ >>> is_timedelta64_dtype([1, 2, 3])
+ False
+ >>> is_timedelta64_dtype(pd.Series([], dtype="timedelta64[ns]"))
+ True
+ """
+
if arr_or_dtype is None:
return False
tipo = _get_dtype_type(arr_or_dtype)
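Likewise for ``is_timedelta64_dtype``, runnable via the public API:

```python
import numpy as np
import pandas as pd
from pandas.api.types import is_timedelta64_dtype

# Mirrors the doctests added in this PR
assert is_timedelta64_dtype(np.timedelta64)
assert is_timedelta64_dtype(pd.Series([], dtype="timedelta64[ns]"))
assert not is_timedelta64_dtype(object)
assert not is_timedelta64_dtype([1, 2, 3])
```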
@@ -145,18 +399,102 @@ def is_timedelta64_dtype(arr_or_dtype):
def is_period_dtype(arr_or_dtype):
+ """
+ Check whether an array-like or dtype is of the Period dtype.
+
+ Parameters
+ ----------
+ arr_or_dtype : array-like
+ The array-like or dtype to check.
+
+ Returns
+ -------
+ boolean : Whether or not the array-like or dtype is of the Period dtype.
+
+ Examples
+ --------
+ >>> is_period_dtype(object)
+ False
+ >>> is_period_dtype(PeriodDtype(freq="D"))
+ True
+ >>> is_period_dtype([1, 2, 3])
+ False
+ >>> is_period_dtype(pd.Period("2017-01-01"))
+ False
+ >>> is_period_dtype(pd.PeriodIndex([], freq="A"))
+ True
+ """
+
+ # TODO: Consider making Period an instance of PeriodDtype
if arr_or_dtype is None:
return False
return PeriodDtype.is_dtype(arr_or_dtype)
def is_interval_dtype(arr_or_dtype):
+ """
+ Check whether an array-like or dtype is of the Interval dtype.
+
+ Parameters
+ ----------
+ arr_or_dtype : array-like
+ The array-like or dtype to check.
+
+ Returns
+ -------
+ boolean : Whether or not the array-like or dtype is
+ of the Interval dtype.
+
+ Examples
+ --------
+ >>> is_interval_dtype(object)
+ False
+ >>> is_interval_dtype(IntervalDtype())
+ True
+ >>> is_interval_dtype([1, 2, 3])
+ False
+ >>>
+ >>> interval = pd.Interval(1, 2, closed="right")
+ >>> is_interval_dtype(interval)
+ False
+ >>> is_interval_dtype(pd.IntervalIndex([interval]))
+ True
+ """
+
+ # TODO: Consider making Interval an instance of IntervalDtype
if arr_or_dtype is None:
return False
return IntervalDtype.is_dtype(arr_or_dtype)
def is_categorical_dtype(arr_or_dtype):
+ """
+ Check whether an array-like or dtype is of the Categorical dtype.
+
+ Parameters
+ ----------
+ arr_or_dtype : array-like
+ The array-like or dtype to check.
+
+ Returns
+ -------
+ boolean : Whether or not the array-like or dtype is
+ of the Categorical dtype.
+
+ Examples
+ --------
+ >>> is_categorical_dtype(object)
+ False
+ >>> is_categorical_dtype(CategoricalDtype())
+ True
+ >>> is_categorical_dtype([1, 2, 3])
+ False
+ >>> is_categorical_dtype(pd.Categorical([1, 2, 3]))
+ True
+ >>> is_categorical_dtype(pd.CategoricalIndex([1, 2, 3]))
+ True
+ """
+
if arr_or_dtype is None:
return False
return CategoricalDtype.is_dtype(arr_or_dtype)
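``is_period_dtype``, ``is_interval_dtype``, and ``is_categorical_dtype`` each reduce to a dtype-class check, which can also be written directly with ``isinstance`` against the public dtype classes (a sketch of the equivalent idiom, not the helpers themselves):

```python
import pandas as pd

pi = pd.period_range("2017-01", periods=3, freq="M")
ii = pd.IntervalIndex.from_breaks([0, 1, 2])
ci = pd.CategoricalIndex(["a", "b", "a"])

assert isinstance(pi.dtype, pd.PeriodDtype)
assert isinstance(ii.dtype, pd.IntervalDtype)
assert isinstance(ci.dtype, pd.CategoricalDtype)
assert not isinstance(pd.Index([1, 2, 3]).dtype, pd.PeriodDtype)
```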
@@ -168,7 +506,7 @@ def is_string_dtype(arr_or_dtype):
Parameters
----------
- arr_or_dtype : ndarray, dtype, type
+ arr_or_dtype : array-like
The array or dtype to check.
Returns
@@ -186,7 +524,7 @@ def is_string_dtype(arr_or_dtype):
>>>
>>> is_string_dtype(np.array(['a', 'b']))
True
- >>> is_string_dtype(np.array([1, 2]))
+ >>> is_string_dtype(pd.Series([1, 2]))
False
"""
@@ -202,7 +540,29 @@ def is_string_dtype(arr_or_dtype):
def is_period_arraylike(arr):
- """ return if we are period arraylike / PeriodIndex """
+ """
+ Check whether an array-like is a periodical array-like or PeriodIndex.
+
+ Parameters
+ ----------
+ arr : array-like
+ The array-like to check.
+
+ Returns
+ -------
+ boolean : Whether or not the array-like is a periodical
+ array-like or PeriodIndex instance.
+
+ Examples
+ --------
+ >>> is_period_arraylike([1, 2, 3])
+ False
+ >>> is_period_arraylike(pd.Index([1, 2, 3]))
+ False
+ >>> is_period_arraylike(pd.PeriodIndex(["2017-01-01"], freq="D"))
+ True
+ """
+
if isinstance(arr, ABCPeriodIndex):
return True
elif isinstance(arr, (np.ndarray, ABCSeries)):
@@ -211,7 +571,29 @@ def is_period_arraylike(arr):
def is_datetime_arraylike(arr):
- """ return if we are datetime arraylike / DatetimeIndex """
+ """
+ Check whether an array-like is a datetime array-like or DatetimeIndex.
+
+ Parameters
+ ----------
+ arr : array-like
+ The array-like to check.
+
+ Returns
+ -------
+ boolean : Whether or not the array-like is a datetime
+ array-like or DatetimeIndex.
+
+ Examples
+ --------
+ >>> is_datetime_arraylike([1, 2, 3])
+ False
+ >>> is_datetime_arraylike(pd.Index([1, 2, 3]))
+ False
+ >>> is_datetime_arraylike(pd.DatetimeIndex([1, 2, 3]))
+ True
+ """
+
if isinstance(arr, ABCDatetimeIndex):
return True
elif isinstance(arr, (np.ndarray, ABCSeries)):
@@ -220,6 +602,44 @@ def is_datetime_arraylike(arr):
def is_datetimelike(arr):
+ """
+ Check whether an array-like is a datetime-like array-like.
+
+ Acceptable datetime-like objects are (but not limited to) datetime
+ indices, periodic indices, and timedelta indices.
+
+ Parameters
+ ----------
+ arr : array-like
+ The array-like to check.
+
+ Returns
+ -------
+ boolean : Whether or not the array-like is a datetime-like array-like.
+
+ Examples
+ --------
+ >>> is_datetimelike([1, 2, 3])
+ False
+ >>> is_datetimelike(pd.Index([1, 2, 3]))
+ False
+ >>> is_datetimelike(pd.DatetimeIndex([1, 2, 3]))
+ True
+ >>> is_datetimelike(pd.DatetimeIndex([1, 2, 3], tz="US/Eastern"))
+ True
+ >>> is_datetimelike(pd.PeriodIndex([], freq="A"))
+ True
+ >>> is_datetimelike(np.array([], dtype=np.datetime64))
+ True
+ >>> is_datetimelike(pd.Series([], dtype="timedelta64[ns]"))
+ True
+ >>>
+ >>> dtype = DatetimeTZDtype("ns", tz="US/Eastern")
+ >>> s = pd.Series([], dtype=dtype)
+ >>> is_datetimelike(s)
+ True
+ """
+
return (is_datetime64_dtype(arr) or is_datetime64tz_dtype(arr) or
is_timedelta64_dtype(arr) or
isinstance(arr, ABCPeriodIndex) or
@@ -227,7 +647,32 @@ def is_datetimelike(arr):
def is_dtype_equal(source, target):
- """ return a boolean if the dtypes are equal """
+ """
+ Check if two dtypes are equal.
+
+ Parameters
+ ----------
+    source : The first dtype to compare.
+    target : The second dtype to compare.
+
+    Returns
+    -------
+    boolean : Whether or not the two dtypes are equal.
+
+ Examples
+ --------
+ >>> is_dtype_equal(int, float)
+ False
+ >>> is_dtype_equal("int", int)
+ True
+ >>> is_dtype_equal(object, "category")
+ False
+ >>> is_dtype_equal(CategoricalDtype(), "category")
+ True
+ >>> is_dtype_equal(DatetimeTZDtype(), "datetime64")
+ False
+ """
+
try:
source = _get_dtype(source)
target = _get_dtype(target)
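As a quick illustration of the normalization performed by `_get_dtype` above, which is what lets string aliases and Python builtins compare equal to real NumPy dtypes, here is a small sketch using the public `pandas.api.types` re-export of this function:

```python
import numpy as np
from pandas.api.types import is_dtype_equal

# String aliases and Python builtins are normalized to real dtypes
# before comparison, so "int64", np.int64, and np.dtype("int64") all
# compare equal to one another.
assert is_dtype_equal("int64", np.int64)
assert is_dtype_equal(np.dtype("float64"), float)

# The comparison is strict about itemsize, not just kind.
assert not is_dtype_equal(np.int32, np.int64)
```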
@@ -240,6 +685,47 @@ def is_dtype_equal(source, target):
def is_any_int_dtype(arr_or_dtype):
+ """
+ DEPRECATED: This function will be removed in a future version.
+
+ Check whether the provided array or dtype is of an integer dtype.
+
+ In this function, timedelta64 instances are also considered "any-integer"
+ type objects and will return True.
+
+ Parameters
+ ----------
+ arr_or_dtype : array-like
+ The array or dtype to check.
+
+ Returns
+ -------
+ boolean : Whether or not the array or dtype is of an integer dtype.
+
+ Examples
+ --------
+ >>> is_any_int_dtype(str)
+ False
+ >>> is_any_int_dtype(int)
+ True
+ >>> is_any_int_dtype(float)
+ False
+ >>> is_any_int_dtype(np.uint64)
+ True
+ >>> is_any_int_dtype(np.datetime64)
+ False
+ >>> is_any_int_dtype(np.timedelta64)
+ True
+ >>> is_any_int_dtype(np.array(['a', 'b']))
+ False
+ >>> is_any_int_dtype(pd.Series([1, 2]))
+ True
+ >>> is_any_int_dtype(np.array([], dtype=np.timedelta64))
+ True
+ >>> is_any_int_dtype(pd.Index([1, 2.])) # float
+ False
+ """
+
if arr_or_dtype is None:
return False
tipo = _get_dtype_type(arr_or_dtype)
@@ -247,6 +733,45 @@ def is_any_int_dtype(arr_or_dtype):
def is_integer_dtype(arr_or_dtype):
+ """
+ Check whether the provided array or dtype is of an integer dtype.
+
+    Unlike in `is_any_int_dtype`, timedelta64 instances will return False.
+
+ Parameters
+ ----------
+ arr_or_dtype : array-like
+ The array or dtype to check.
+
+ Returns
+ -------
+ boolean : Whether or not the array or dtype is of an integer dtype
+ and not an instance of timedelta64.
+
+ Examples
+ --------
+ >>> is_integer_dtype(str)
+ False
+ >>> is_integer_dtype(int)
+ True
+ >>> is_integer_dtype(float)
+ False
+ >>> is_integer_dtype(np.uint64)
+ True
+ >>> is_integer_dtype(np.datetime64)
+ False
+ >>> is_integer_dtype(np.timedelta64)
+ False
+ >>> is_integer_dtype(np.array(['a', 'b']))
+ False
+ >>> is_integer_dtype(pd.Series([1, 2]))
+ True
+ >>> is_integer_dtype(np.array([], dtype=np.timedelta64))
+ False
+ >>> is_integer_dtype(pd.Index([1, 2.])) # float
+ False
+ """
+
if arr_or_dtype is None:
return False
tipo = _get_dtype_type(arr_or_dtype)
@@ -255,6 +780,47 @@ def is_integer_dtype(arr_or_dtype):
def is_signed_integer_dtype(arr_or_dtype):
+ """
+ Check whether the provided array or dtype is of a signed integer dtype.
+
+    Unlike in `is_any_int_dtype`, timedelta64 instances will return False.
+
+ Parameters
+ ----------
+ arr_or_dtype : array-like
+ The array or dtype to check.
+
+ Returns
+ -------
+ boolean : Whether or not the array or dtype is of a signed integer dtype
+ and not an instance of timedelta64.
+
+ Examples
+ --------
+ >>> is_signed_integer_dtype(str)
+ False
+ >>> is_signed_integer_dtype(int)
+ True
+ >>> is_signed_integer_dtype(float)
+ False
+ >>> is_signed_integer_dtype(np.uint64) # unsigned
+ False
+ >>> is_signed_integer_dtype(np.datetime64)
+ False
+ >>> is_signed_integer_dtype(np.timedelta64)
+ False
+ >>> is_signed_integer_dtype(np.array(['a', 'b']))
+ False
+ >>> is_signed_integer_dtype(pd.Series([1, 2]))
+ True
+ >>> is_signed_integer_dtype(np.array([], dtype=np.timedelta64))
+ False
+ >>> is_signed_integer_dtype(pd.Index([1, 2.])) # float
+ False
+ >>> is_signed_integer_dtype(np.array([1, 2], dtype=np.uint32)) # unsigned
+ False
+ """
+
if arr_or_dtype is None:
return False
tipo = _get_dtype_type(arr_or_dtype)
@@ -263,6 +829,39 @@ def is_signed_integer_dtype(arr_or_dtype):
def is_unsigned_integer_dtype(arr_or_dtype):
+ """
+ Check whether the provided array or dtype is of an unsigned integer dtype.
+
+ Parameters
+ ----------
+ arr_or_dtype : array-like
+ The array or dtype to check.
+
+ Returns
+ -------
+ boolean : Whether or not the array or dtype is of an
+ unsigned integer dtype.
+
+ Examples
+ --------
+ >>> is_unsigned_integer_dtype(str)
+ False
+ >>> is_unsigned_integer_dtype(int) # signed
+ False
+ >>> is_unsigned_integer_dtype(float)
+ False
+ >>> is_unsigned_integer_dtype(np.uint64)
+ True
+ >>> is_unsigned_integer_dtype(np.array(['a', 'b']))
+ False
+ >>> is_unsigned_integer_dtype(pd.Series([1, 2])) # signed
+ False
+ >>> is_unsigned_integer_dtype(pd.Index([1, 2.])) # float
+ False
+ >>> is_unsigned_integer_dtype(np.array([1, 2], dtype=np.uint32))
+ True
+ """
+
if arr_or_dtype is None:
return False
tipo = _get_dtype_type(arr_or_dtype)
@@ -271,6 +870,46 @@ def is_unsigned_integer_dtype(arr_or_dtype):
def is_int64_dtype(arr_or_dtype):
+ """
+ Check whether the provided array or dtype is of the int64 dtype.
+
+ Parameters
+ ----------
+ arr_or_dtype : array-like
+ The array or dtype to check.
+
+ Returns
+ -------
+ boolean : Whether or not the array or dtype is of the int64 dtype.
+
+ Notes
+ -----
+    Depending on system architecture, the return value of `is_int64_dtype(
+    int)` will be True if the platform's default integer is 64 bits and
+    False if it is 32 bits.
+
+ Examples
+ --------
+ >>> is_int64_dtype(str)
+ False
+ >>> is_int64_dtype(np.int32)
+ False
+ >>> is_int64_dtype(np.int64)
+ True
+ >>> is_int64_dtype(float)
+ False
+ >>> is_int64_dtype(np.uint64) # unsigned
+ False
+ >>> is_int64_dtype(np.array(['a', 'b']))
+ False
+ >>> is_int64_dtype(np.array([1, 2], dtype=np.int64))
+ True
+ >>> is_int64_dtype(pd.Index([1, 2.])) # float
+ False
+ >>> is_int64_dtype(np.array([1, 2], dtype=np.uint32)) # unsigned
+ False
+ """
+
if arr_or_dtype is None:
return False
tipo = _get_dtype_type(arr_or_dtype)
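The platform note above can be checked directly. A small sketch using the `pandas.api.types` re-export (note that later pandas versions deprecate `is_int64_dtype`, so treat the entry point as an assumption about your pandas version):

```python
import numpy as np
from pandas.api.types import is_int64_dtype

# An explicit np.int64 always qualifies, regardless of platform.
assert is_int64_dtype(np.int64)
assert is_int64_dtype(np.array([1, 2], dtype=np.int64))

# Narrower or unsigned integer dtypes never do.
assert not is_int64_dtype(np.int32)
assert not is_int64_dtype(np.uint64)

# The builtin `int` maps to the platform's default integer width,
# which is why is_int64_dtype(int) varies across systems.
print("default int dtype:", np.dtype(int))
```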
@@ -278,6 +917,46 @@ def is_int64_dtype(arr_or_dtype):
def is_int_or_datetime_dtype(arr_or_dtype):
+ """
+ Check whether the provided array or dtype is of an
+ integer, timedelta64, or datetime64 dtype.
+
+ Parameters
+ ----------
+ arr_or_dtype : array-like
+ The array or dtype to check.
+
+ Returns
+ -------
+ boolean : Whether or not the array or dtype is of an
+ integer, timedelta64, or datetime64 dtype.
+
+ Examples
+ --------
+ >>> is_int_or_datetime_dtype(str)
+ False
+ >>> is_int_or_datetime_dtype(int)
+ True
+ >>> is_int_or_datetime_dtype(float)
+ False
+ >>> is_int_or_datetime_dtype(np.uint64)
+ True
+ >>> is_int_or_datetime_dtype(np.datetime64)
+ True
+ >>> is_int_or_datetime_dtype(np.timedelta64)
+ True
+ >>> is_int_or_datetime_dtype(np.array(['a', 'b']))
+ False
+ >>> is_int_or_datetime_dtype(pd.Series([1, 2]))
+ True
+ >>> is_int_or_datetime_dtype(np.array([], dtype=np.timedelta64))
+ True
+ >>> is_int_or_datetime_dtype(np.array([], dtype=np.datetime64))
+ True
+ >>> is_int_or_datetime_dtype(pd.Index([1, 2.])) # float
+ False
+ """
+
if arr_or_dtype is None:
return False
tipo = _get_dtype_type(arr_or_dtype)
@@ -285,7 +964,40 @@ def is_int_or_datetime_dtype(arr_or_dtype):
issubclass(tipo, (np.datetime64, np.timedelta64)))
-def is_datetime64_any_dtype(arr_or_dtype):
+def is_datetime64_any_dtype(arr_or_dtype):
+ """
+ Check whether the provided array or dtype is of the datetime64 dtype.
+
+ Parameters
+ ----------
+ arr_or_dtype : array-like
+ The array or dtype to check.
+
+ Returns
+ -------
+ boolean : Whether or not the array or dtype is of the datetime64 dtype.
+
+ Examples
+ --------
+ >>> is_datetime64_any_dtype(str)
+ False
+ >>> is_datetime64_any_dtype(int)
+ False
+ >>> is_datetime64_any_dtype(np.datetime64) # can be tz-naive
+ True
+ >>> is_datetime64_any_dtype(DatetimeTZDtype("ns", "US/Eastern"))
+ True
+ >>> is_datetime64_any_dtype(np.array(['a', 'b']))
+ False
+ >>> is_datetime64_any_dtype(np.array([1, 2]))
+ False
+ >>> is_datetime64_any_dtype(np.array([], dtype=np.datetime64))
+ True
+    >>> is_datetime64_any_dtype(pd.DatetimeIndex([1, 2, 3],
+    ...                         dtype=np.datetime64))
+    True
+ """
+
if arr_or_dtype is None:
return False
return (is_datetime64_dtype(arr_or_dtype) or
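The contrast between this "any" check and the stricter nanosecond-resolution check that follows can be sketched with the public `pandas.api.types` re-exports:

```python
import numpy as np
from pandas.api.types import (is_datetime64_any_dtype,
                              is_datetime64_ns_dtype)

# The "any" variant accepts every datetime64 resolution...
sec = np.array([], dtype="datetime64[s]")
ns = np.array([], dtype="datetime64[ns]")
assert is_datetime64_any_dtype(sec)
assert is_datetime64_any_dtype(ns)

# ...while the "ns" variant insists on nanosecond resolution.
assert not is_datetime64_ns_dtype(sec)
assert is_datetime64_ns_dtype(ns)
```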
@@ -293,6 +1005,42 @@ def is_datetime64_any_dtype(arr_or_dtype):
def is_datetime64_ns_dtype(arr_or_dtype):
+ """
+ Check whether the provided array or dtype is of the datetime64[ns] dtype.
+
+ Parameters
+ ----------
+ arr_or_dtype : array-like
+ The array or dtype to check.
+
+ Returns
+ -------
+ boolean : Whether or not the array or dtype is of the datetime64[ns] dtype.
+
+ Examples
+ --------
+ >>> is_datetime64_ns_dtype(str)
+ False
+ >>> is_datetime64_ns_dtype(int)
+ False
+ >>> is_datetime64_ns_dtype(np.datetime64) # no unit
+ False
+ >>> is_datetime64_ns_dtype(DatetimeTZDtype("ns", "US/Eastern"))
+ True
+ >>> is_datetime64_ns_dtype(np.array(['a', 'b']))
+ False
+ >>> is_datetime64_ns_dtype(np.array([1, 2]))
+ False
+ >>> is_datetime64_ns_dtype(np.array([], dtype=np.datetime64)) # no unit
+ False
+    >>> is_datetime64_ns_dtype(np.array([],
+    ...                        dtype="datetime64[ps]"))  # wrong unit
+    False
+    >>> is_datetime64_ns_dtype(pd.DatetimeIndex([1, 2, 3],
+    ...                        dtype=np.datetime64))  # has 'ns' unit
+    True
+ """
+
if arr_or_dtype is None:
return False
try:
@@ -314,21 +1062,20 @@ def is_timedelta64_ns_dtype(arr_or_dtype):
Parameters
----------
- arr_or_dtype : ndarray, dtype, type
+ arr_or_dtype : array-like
The array or dtype to check.
Returns
-------
- boolean : Whether or not the array or dtype
- is of the timedelta64[ns] dtype.
+ boolean : Whether or not the array or dtype is of the
+ timedelta64[ns] dtype.
Examples
--------
- >>> is_timedelta64_ns_dtype(np.dtype('m8[ns]')
+ >>> is_timedelta64_ns_dtype(np.dtype('m8[ns]'))
True
- >>> is_timedelta64_ns_dtype(np.dtype('m8[ps]') # Wrong frequency
+ >>> is_timedelta64_ns_dtype(np.dtype('m8[ps]')) # Wrong frequency
False
- >>>
>>> is_timedelta64_ns_dtype(np.array([1, 2], dtype='m8[ns]'))
True
>>> is_timedelta64_ns_dtype(np.array([1, 2], dtype=np.timedelta64))
@@ -345,6 +1092,40 @@ def is_timedelta64_ns_dtype(arr_or_dtype):
def is_datetime_or_timedelta_dtype(arr_or_dtype):
+ """
+ Check whether the provided array or dtype is of
+ a timedelta64 or datetime64 dtype.
+
+ Parameters
+ ----------
+ arr_or_dtype : array-like
+ The array or dtype to check.
+
+ Returns
+ -------
+ boolean : Whether or not the array or dtype is of a
+        timedelta64 or datetime64 dtype.
+
+ Examples
+ --------
+ >>> is_datetime_or_timedelta_dtype(str)
+ False
+ >>> is_datetime_or_timedelta_dtype(int)
+ False
+ >>> is_datetime_or_timedelta_dtype(np.datetime64)
+ True
+ >>> is_datetime_or_timedelta_dtype(np.timedelta64)
+ True
+ >>> is_datetime_or_timedelta_dtype(np.array(['a', 'b']))
+ False
+ >>> is_datetime_or_timedelta_dtype(pd.Series([1, 2]))
+ False
+ >>> is_datetime_or_timedelta_dtype(np.array([], dtype=np.timedelta64))
+ True
+ >>> is_datetime_or_timedelta_dtype(np.array([], dtype=np.datetime64))
+ True
+ """
+
if arr_or_dtype is None:
return False
tipo = _get_dtype_type(arr_or_dtype)
@@ -378,11 +1159,45 @@ def _is_unorderable_exception(e):
def is_numeric_v_string_like(a, b):
"""
- numpy doesn't like to compare numeric arrays vs scalar string-likes
+ Check if we are comparing a string-like object to a numeric ndarray.
+
+ NumPy doesn't like to compare such objects, especially numeric arrays
+ and scalar string-likes.
+
+ Parameters
+ ----------
+ a : array-like, scalar
+ The first object to check.
+ b : array-like, scalar
+ The second object to check.
- return a boolean result if this is the case for a,b or b,a
+
+    Returns
+    -------
+    boolean : Whether we are comparing a string-like
+        object to a numeric array.
+
+    Examples
+ --------
+ >>> is_numeric_v_string_like(1, 1)
+ False
+ >>> is_numeric_v_string_like("foo", "foo")
+ False
+ >>> is_numeric_v_string_like(1, "foo") # non-array numeric
+ False
+ >>> is_numeric_v_string_like(np.array([1]), "foo")
+ True
+ >>> is_numeric_v_string_like("foo", np.array([1])) # symmetric check
+ True
+ >>> is_numeric_v_string_like(np.array([1, 2]), np.array(["foo"]))
+ True
+ >>> is_numeric_v_string_like(np.array(["foo"]), np.array([1, 2]))
+ True
+ >>> is_numeric_v_string_like(np.array([1]), np.array([2]))
+ False
+ >>> is_numeric_v_string_like(np.array(["foo"]), np.array(["foo"]))
+ False
"""
+
is_a_array = isinstance(a, np.ndarray)
is_b_array = isinstance(b, np.ndarray)
@@ -401,13 +1216,56 @@ def is_numeric_v_string_like(a, b):
def is_datetimelike_v_numeric(a, b):
- # return if we have an i8 convertible and numeric comparison
+ """
+ Check if we are comparing a datetime-like object to a numeric object.
+
+ By "numeric," we mean an object that is either of an int or float dtype.
+
+ Parameters
+ ----------
+ a : array-like, scalar
+ The first object to check.
+ b : array-like, scalar
+ The second object to check.
+
+ Returns
+ -------
+    boolean : Whether we are comparing a datetime-like
+        to a numeric object.
+
+ Examples
+ --------
+ >>> dt = np.datetime64(pd.datetime(2017, 1, 1))
+ >>>
+ >>> is_datetimelike_v_numeric(1, 1)
+ False
+ >>> is_datetimelike_v_numeric(dt, dt)
+ False
+ >>> is_datetimelike_v_numeric(1, dt)
+ True
+ >>> is_datetimelike_v_numeric(dt, 1) # symmetric check
+ True
+ >>> is_datetimelike_v_numeric(np.array([dt]), 1)
+ True
+ >>> is_datetimelike_v_numeric(np.array([1]), dt)
+ True
+ >>> is_datetimelike_v_numeric(np.array([dt]), np.array([1]))
+ True
+ >>> is_datetimelike_v_numeric(np.array([1]), np.array([2]))
+ False
+ >>> is_datetimelike_v_numeric(np.array([dt]), np.array([dt]))
+ False
+ """
+
if not hasattr(a, 'dtype'):
a = np.asarray(a)
if not hasattr(b, 'dtype'):
b = np.asarray(b)
def is_numeric(x):
+ """
+ Check if an object has a numeric dtype (i.e. integer or float).
+ """
return is_integer_dtype(x) or is_float_dtype(x)
is_datetimelike = needs_i8_conversion
@@ -416,24 +1274,92 @@ def is_numeric(x):
def is_datetimelike_v_object(a, b):
- # return if we have an i8 convertible and object comparsion
+ """
+ Check if we are comparing a datetime-like object to an object instance.
+
+ Parameters
+ ----------
+ a : array-like, scalar
+ The first object to check.
+ b : array-like, scalar
+ The second object to check.
+
+ Returns
+ -------
+    boolean : Whether we are comparing a datetime-like
+        to an object instance.
+
+ Examples
+ --------
+ >>> obj = object()
+ >>> dt = np.datetime64(pd.datetime(2017, 1, 1))
+ >>>
+ >>> is_datetimelike_v_object(obj, obj)
+ False
+ >>> is_datetimelike_v_object(dt, dt)
+ False
+ >>> is_datetimelike_v_object(obj, dt)
+ True
+ >>> is_datetimelike_v_object(dt, obj) # symmetric check
+ True
+ >>> is_datetimelike_v_object(np.array([dt]), obj)
+ True
+ >>> is_datetimelike_v_object(np.array([obj]), dt)
+ True
+ >>> is_datetimelike_v_object(np.array([dt]), np.array([obj]))
+ True
+ >>> is_datetimelike_v_object(np.array([obj]), np.array([obj]))
+ False
+ >>> is_datetimelike_v_object(np.array([dt]), np.array([1]))
+ False
+ >>> is_datetimelike_v_object(np.array([dt]), np.array([dt]))
+ False
+ """
+
if not hasattr(a, 'dtype'):
a = np.asarray(a)
if not hasattr(b, 'dtype'):
b = np.asarray(b)
- def f(x):
- return is_object_dtype(x)
-
- def is_object(x):
- return is_integer_dtype(x) or is_float_dtype(x)
-
is_datetimelike = needs_i8_conversion
- return ((is_datetimelike(a) and is_object(b)) or
- (is_datetimelike(b) and is_object(a)))
+ return ((is_datetimelike(a) and is_object_dtype(b)) or
+ (is_datetimelike(b) and is_object_dtype(a)))
def needs_i8_conversion(arr_or_dtype):
+ """
+ Check whether the array or dtype should be converted to int64.
+
+ An array-like or dtype "needs" such a conversion if the array-like
+    or dtype is of a datetime-like dtype.
+
+ Parameters
+ ----------
+ arr_or_dtype : array-like
+ The array or dtype to check.
+
+ Returns
+ -------
+ boolean : Whether or not the array or dtype should be converted to int64.
+
+ Examples
+ --------
+ >>> needs_i8_conversion(str)
+ False
+ >>> needs_i8_conversion(np.int64)
+ False
+ >>> needs_i8_conversion(np.datetime64)
+ True
+ >>> needs_i8_conversion(np.array(['a', 'b']))
+ False
+ >>> needs_i8_conversion(pd.Series([1, 2]))
+ False
+ >>> needs_i8_conversion(pd.Series([], dtype="timedelta64[ns]"))
+ True
+ >>> needs_i8_conversion(pd.DatetimeIndex([1, 2, 3], tz="US/Eastern"))
+ True
+ """
+
if arr_or_dtype is None:
return False
return (is_datetime_or_timedelta_dtype(arr_or_dtype) or
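The "conversion to int64" this docstring refers to is a zero-copy reinterpretation of datetime-like data as raw nanosecond counts. A pure-NumPy sketch of the idea:

```python
import numpy as np

# datetime64[ns] values are stored as 64-bit nanosecond counts, so a
# zero-copy view exposes the underlying i8 (int64) data directly.
stamps = np.array(["2017-01-01", "2017-01-02"], dtype="datetime64[ns]")
i8 = stamps.view("i8")
assert i8.dtype == np.dtype("int64")

# Viewing the integers back as datetime64[ns] round-trips losslessly.
assert (i8.view("datetime64[ns]") == stamps).all()
```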
@@ -442,6 +1368,42 @@ def needs_i8_conversion(arr_or_dtype):
def is_numeric_dtype(arr_or_dtype):
+ """
+ Check whether the provided array or dtype is of a numeric dtype.
+
+ Parameters
+ ----------
+ arr_or_dtype : array-like
+ The array or dtype to check.
+
+ Returns
+ -------
+ boolean : Whether or not the array or dtype is of a numeric dtype.
+
+ Examples
+ --------
+ >>> is_numeric_dtype(str)
+ False
+ >>> is_numeric_dtype(int)
+ True
+ >>> is_numeric_dtype(float)
+ True
+ >>> is_numeric_dtype(np.uint64)
+ True
+ >>> is_numeric_dtype(np.datetime64)
+ False
+ >>> is_numeric_dtype(np.timedelta64)
+ False
+ >>> is_numeric_dtype(np.array(['a', 'b']))
+ False
+ >>> is_numeric_dtype(pd.Series([1, 2]))
+ True
+ >>> is_numeric_dtype(pd.Index([1, 2.]))
+ True
+ >>> is_numeric_dtype(np.array([], dtype=np.timedelta64))
+ False
+ """
+
if arr_or_dtype is None:
return False
tipo = _get_dtype_type(arr_or_dtype)
@@ -458,7 +1420,7 @@ def is_string_like_dtype(arr_or_dtype):
Parameters
----------
- arr_or_dtype : ndarray, dtype, type
+ arr_or_dtype : array-like
The array or dtype to check.
Returns
@@ -471,10 +1433,9 @@ def is_string_like_dtype(arr_or_dtype):
True
>>> is_string_like_dtype(object)
False
- >>>
>>> is_string_like_dtype(np.array(['a', 'b']))
True
- >>> is_string_like_dtype(np.array([1, 2]))
+ >>> is_string_like_dtype(pd.Series([1, 2]))
False
"""
@@ -488,6 +1449,34 @@ def is_string_like_dtype(arr_or_dtype):
def is_float_dtype(arr_or_dtype):
+ """
+ Check whether the provided array or dtype is of a float dtype.
+
+ Parameters
+ ----------
+ arr_or_dtype : array-like
+ The array or dtype to check.
+
+ Returns
+ -------
+ boolean : Whether or not the array or dtype is of a float dtype.
+
+ Examples
+ --------
+ >>> is_float_dtype(str)
+ False
+ >>> is_float_dtype(int)
+ False
+ >>> is_float_dtype(float)
+ True
+ >>> is_float_dtype(np.array(['a', 'b']))
+ False
+ >>> is_float_dtype(pd.Series([1, 2]))
+ False
+ >>> is_float_dtype(pd.Index([1, 2.]))
+ True
+ """
+
if arr_or_dtype is None:
return False
tipo = _get_dtype_type(arr_or_dtype)
@@ -495,6 +1484,16 @@ def is_float_dtype(arr_or_dtype):
def is_floating_dtype(arr_or_dtype):
+ """
+ DEPRECATED: This function will be removed in a future version.
+
+ Check whether the provided array or dtype is an instance of
+ numpy's float dtype.
+
+    Unlike `is_float_dtype`, this check is stricter, as it requires
+    `isinstance` of `np.floating` and not `issubclass`.
+ """
+
if arr_or_dtype is None:
return False
tipo = _get_dtype_type(arr_or_dtype)
@@ -502,6 +1501,36 @@ def is_floating_dtype(arr_or_dtype):
def is_bool_dtype(arr_or_dtype):
+ """
+ Check whether the provided array or dtype is of a boolean dtype.
+
+ Parameters
+ ----------
+ arr_or_dtype : array-like
+ The array or dtype to check.
+
+ Returns
+ -------
+ boolean : Whether or not the array or dtype is of a boolean dtype.
+
+ Examples
+ --------
+ >>> is_bool_dtype(str)
+ False
+ >>> is_bool_dtype(int)
+ False
+ >>> is_bool_dtype(bool)
+ True
+ >>> is_bool_dtype(np.bool)
+ True
+ >>> is_bool_dtype(np.array(['a', 'b']))
+ False
+ >>> is_bool_dtype(pd.Series([1, 2]))
+ False
+ >>> is_bool_dtype(np.array([True, False]))
+ True
+ """
+
if arr_or_dtype is None:
return False
try:
@@ -512,21 +1541,94 @@ def is_bool_dtype(arr_or_dtype):
return issubclass(tipo, np.bool_)
-def is_extension_type(value):
+def is_extension_type(arr):
"""
- if we are a klass that is preserved by the internals
- these are internal klasses that we represent (and don't use a np.array)
+ Check whether an array-like is of a pandas extension class instance.
+
+ Extension classes include categoricals, pandas sparse objects (i.e.
+ classes represented within the pandas library and not ones external
+ to it like scipy sparse matrices), and datetime-like arrays.
+
+ Parameters
+ ----------
+ arr : array-like
+ The array-like to check.
+
+ Returns
+ -------
+ boolean : Whether or not the array-like is of a pandas
+ extension class instance.
+
+ Examples
+ --------
+ >>> is_extension_type([1, 2, 3])
+ False
+ >>> is_extension_type(np.array([1, 2, 3]))
+ False
+ >>>
+ >>> cat = pd.Categorical([1, 2, 3])
+ >>>
+ >>> is_extension_type(cat)
+ True
+ >>> is_extension_type(pd.Series(cat))
+ True
+ >>> is_extension_type(pd.SparseArray([1, 2, 3]))
+ True
+ >>> is_extension_type(pd.SparseSeries([1, 2, 3]))
+ True
+ >>>
+ >>> from scipy.sparse import bsr_matrix
+ >>> is_extension_type(bsr_matrix([1, 2, 3]))
+ False
+ >>> is_extension_type(pd.DatetimeIndex([1, 2, 3]))
+ False
+ >>> is_extension_type(pd.DatetimeIndex([1, 2, 3], tz="US/Eastern"))
+ True
+ >>>
+ >>> dtype = DatetimeTZDtype("ns", tz="US/Eastern")
+ >>> s = pd.Series([], dtype=dtype)
+ >>> is_extension_type(s)
+ True
"""
- if is_categorical(value):
+
+ if is_categorical(arr):
return True
- elif is_sparse(value):
+ elif is_sparse(arr):
return True
- elif is_datetimetz(value):
+ elif is_datetimetz(arr):
return True
return False
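As a sanity check of which containers count as extension types, here is a sketch using `pandas.api.types.is_extension_array_dtype`, the check that later pandas versions recommend over `is_extension_type` (an assumption about the reader's pandas version):

```python
import numpy as np
import pandas as pd
from pandas.api.types import is_extension_array_dtype

# Categorical data is backed by a pandas extension array rather than a
# plain NumPy ndarray, so both the array and a Series wrapping it match.
cat = pd.Categorical(["a", "b", "a"])
assert is_extension_array_dtype(cat)
assert is_extension_array_dtype(pd.Series(cat))

# Ordinary NumPy-backed containers do not.
assert not is_extension_array_dtype(np.array([1, 2, 3]))
assert not is_extension_array_dtype(pd.Series([1.0, 2.0]))
```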
def is_complex_dtype(arr_or_dtype):
+ """
+ Check whether the provided array or dtype is of a complex dtype.
+
+ Parameters
+ ----------
+ arr_or_dtype : array-like
+ The array or dtype to check.
+
+ Returns
+ -------
+    boolean : Whether or not the array or dtype is of a complex dtype.
+
+ Examples
+ --------
+ >>> is_complex_dtype(str)
+ False
+ >>> is_complex_dtype(int)
+ False
+ >>> is_complex_dtype(np.complex)
+ True
+ >>> is_complex_dtype(np.array(['a', 'b']))
+ False
+ >>> is_complex_dtype(pd.Series([1, 2]))
+ False
+ >>> is_complex_dtype(np.array([1 + 1j, 5]))
+ True
+ """
+
if arr_or_dtype is None:
return False
tipo = _get_dtype_type(arr_or_dtype)
@@ -570,7 +1672,7 @@ def _get_dtype(arr_or_dtype):
Parameters
----------
- arr_or_dtype : ndarray, Series, dtype, type
+ arr_or_dtype : array-like
The array-like or dtype object whose dtype we want to extract.
Returns
@@ -619,7 +1721,7 @@ def _get_dtype_type(arr_or_dtype):
Parameters
----------
- arr_or_dtype : ndarray, Series, dtype, type
+ arr_or_dtype : array-like
The array-like or dtype object whose type we want to extract.
Returns
@@ -754,6 +1856,7 @@ def pandas_dtype(dtype):
-------
np.dtype or a pandas dtype
"""
+
if isinstance(dtype, DatetimeTZDtype):
return dtype
elif isinstance(dtype, PeriodDtype):
diff --git a/pandas/tests/dtypes/test_common.py b/pandas/tests/dtypes/test_common.py
index 68518e235d417..5b74397b1e770 100644
--- a/pandas/tests/dtypes/test_common.py
+++ b/pandas/tests/dtypes/test_common.py
@@ -4,11 +4,10 @@
import numpy as np
import pandas as pd
-from pandas.core.dtypes.dtypes import (
- DatetimeTZDtype, PeriodDtype, CategoricalDtype)
-from pandas.core.dtypes.common import (
- pandas_dtype, is_dtype_equal)
+from pandas.core.dtypes.dtypes import (DatetimeTZDtype, PeriodDtype,
+ CategoricalDtype, IntervalDtype)
+import pandas.core.dtypes.common as com
import pandas.util.testing as tm
@@ -21,49 +20,49 @@ def test_invalid_dtype_error(self):
invalid_list = [pd.Timestamp, 'pd.Timestamp', list]
for dtype in invalid_list:
with tm.assert_raises_regex(TypeError, msg):
- pandas_dtype(dtype)
+ com.pandas_dtype(dtype)
valid_list = [object, 'float64', np.object_, np.dtype('object'), 'O',
np.float64, float, np.dtype('float64')]
for dtype in valid_list:
- pandas_dtype(dtype)
+ com.pandas_dtype(dtype)
def test_numpy_dtype(self):
for dtype in ['M8[ns]', 'm8[ns]', 'object', 'float64', 'int64']:
- assert pandas_dtype(dtype) == np.dtype(dtype)
+ assert com.pandas_dtype(dtype) == np.dtype(dtype)
def test_numpy_string_dtype(self):
# do not parse freq-like string as period dtype
- assert pandas_dtype('U') == np.dtype('U')
- assert pandas_dtype('S') == np.dtype('S')
+ assert com.pandas_dtype('U') == np.dtype('U')
+ assert com.pandas_dtype('S') == np.dtype('S')
def test_datetimetz_dtype(self):
for dtype in ['datetime64[ns, US/Eastern]',
'datetime64[ns, Asia/Tokyo]',
'datetime64[ns, UTC]']:
- assert pandas_dtype(dtype) is DatetimeTZDtype(dtype)
- assert pandas_dtype(dtype) == DatetimeTZDtype(dtype)
- assert pandas_dtype(dtype) == dtype
+ assert com.pandas_dtype(dtype) is DatetimeTZDtype(dtype)
+ assert com.pandas_dtype(dtype) == DatetimeTZDtype(dtype)
+ assert com.pandas_dtype(dtype) == dtype
def test_categorical_dtype(self):
- assert pandas_dtype('category') == CategoricalDtype()
+ assert com.pandas_dtype('category') == CategoricalDtype()
def test_period_dtype(self):
for dtype in ['period[D]', 'period[3M]', 'period[U]',
'Period[D]', 'Period[3M]', 'Period[U]']:
- assert pandas_dtype(dtype) is PeriodDtype(dtype)
- assert pandas_dtype(dtype) == PeriodDtype(dtype)
- assert pandas_dtype(dtype) == dtype
+ assert com.pandas_dtype(dtype) is PeriodDtype(dtype)
+ assert com.pandas_dtype(dtype) == PeriodDtype(dtype)
+ assert com.pandas_dtype(dtype) == dtype
-dtypes = dict(datetime_tz=pandas_dtype('datetime64[ns, US/Eastern]'),
- datetime=pandas_dtype('datetime64[ns]'),
- timedelta=pandas_dtype('timedelta64[ns]'),
+dtypes = dict(datetime_tz=com.pandas_dtype('datetime64[ns, US/Eastern]'),
+ datetime=com.pandas_dtype('datetime64[ns]'),
+ timedelta=com.pandas_dtype('timedelta64[ns]'),
period=PeriodDtype('D'),
integer=np.dtype(np.int64),
float=np.dtype(np.float64),
object=np.dtype(np.object),
- category=pandas_dtype('category'))
+ category=com.pandas_dtype('category'))
@pytest.mark.parametrize('name1,dtype1',
@@ -75,31 +74,30 @@ def test_period_dtype(self):
def test_dtype_equal(name1, dtype1, name2, dtype2):
# match equal to self, but not equal to other
- assert is_dtype_equal(dtype1, dtype1)
+ assert com.is_dtype_equal(dtype1, dtype1)
if name1 != name2:
- assert not is_dtype_equal(dtype1, dtype2)
+ assert not com.is_dtype_equal(dtype1, dtype2)
def test_dtype_equal_strict():
# we are strict on kind equality
for dtype in [np.int8, np.int16, np.int32]:
- assert not is_dtype_equal(np.int64, dtype)
+ assert not com.is_dtype_equal(np.int64, dtype)
for dtype in [np.float32]:
- assert not is_dtype_equal(np.float64, dtype)
+ assert not com.is_dtype_equal(np.float64, dtype)
# strict w.r.t. PeriodDtype
- assert not is_dtype_equal(PeriodDtype('D'),
- PeriodDtype('2D'))
+ assert not com.is_dtype_equal(PeriodDtype('D'), PeriodDtype('2D'))
# strict w.r.t. datetime64
- assert not is_dtype_equal(
- pandas_dtype('datetime64[ns, US/Eastern]'),
- pandas_dtype('datetime64[ns, CET]'))
+ assert not com.is_dtype_equal(
+ com.pandas_dtype('datetime64[ns, US/Eastern]'),
+ com.pandas_dtype('datetime64[ns, CET]'))
# see gh-15941: no exception should be raised
- assert not is_dtype_equal(None, None)
+ assert not com.is_dtype_equal(None, None)
def get_is_dtype_funcs():
@@ -108,7 +106,6 @@ def get_is_dtype_funcs():
begin with 'is_' and end with 'dtype'
"""
- import pandas.core.dtypes.common as com
fnames = [f for f in dir(com) if (f.startswith('is_') and
f.endswith('dtype'))]
@@ -124,3 +121,403 @@ def test_get_dtype_error_catch(func):
# No exception should be raised.
assert not func(None)
+
+
+def test_is_object():
+ assert com.is_object_dtype(object)
+ assert com.is_object_dtype(np.array([], dtype=object))
+
+ assert not com.is_object_dtype(int)
+ assert not com.is_object_dtype(np.array([], dtype=int))
+ assert not com.is_object_dtype([1, 2, 3])
+
+
+def test_is_sparse():
+ assert com.is_sparse(pd.SparseArray([1, 2, 3]))
+ assert com.is_sparse(pd.SparseSeries([1, 2, 3]))
+
+ assert not com.is_sparse(np.array([1, 2, 3]))
+
+ # This test will only skip if the previous assertions
+ # pass AND scipy is not installed.
+ sparse = pytest.importorskip("scipy.sparse")
+ assert not com.is_sparse(sparse.bsr_matrix([1, 2, 3]))
+
+
+def test_is_scipy_sparse():
+ tm._skip_if_no_scipy()
+
+ from scipy.sparse import bsr_matrix
+ assert com.is_scipy_sparse(bsr_matrix([1, 2, 3]))
+
+ assert not com.is_scipy_sparse(pd.SparseArray([1, 2, 3]))
+ assert not com.is_scipy_sparse(pd.SparseSeries([1, 2, 3]))
+
+
+def test_is_categorical():
+ cat = pd.Categorical([1, 2, 3])
+ assert com.is_categorical(cat)
+ assert com.is_categorical(pd.Series(cat))
+
+ assert not com.is_categorical([1, 2, 3])
+
+
+def test_is_datetimetz():
+ assert not com.is_datetimetz([1, 2, 3])
+ assert not com.is_datetimetz(pd.DatetimeIndex([1, 2, 3]))
+
+ assert com.is_datetimetz(pd.DatetimeIndex([1, 2, 3], tz="US/Eastern"))
+
+ dtype = DatetimeTZDtype("ns", tz="US/Eastern")
+ s = pd.Series([], dtype=dtype)
+ assert com.is_datetimetz(s)
+
+
+def test_is_period():
+ assert not com.is_period([1, 2, 3])
+ assert not com.is_period(pd.Index([1, 2, 3]))
+ assert com.is_period(pd.PeriodIndex(["2017-01-01"], freq="D"))
+
+
+def test_is_datetime64_dtype():
+ assert not com.is_datetime64_dtype(object)
+ assert not com.is_datetime64_dtype([1, 2, 3])
+ assert not com.is_datetime64_dtype(np.array([], dtype=int))
+
+ assert com.is_datetime64_dtype(np.datetime64)
+ assert com.is_datetime64_dtype(np.array([], dtype=np.datetime64))
+
+
+def test_is_datetime64tz_dtype():
+ assert not com.is_datetime64tz_dtype(object)
+ assert not com.is_datetime64tz_dtype([1, 2, 3])
+ assert not com.is_datetime64tz_dtype(pd.DatetimeIndex([1, 2, 3]))
+ assert com.is_datetime64tz_dtype(pd.DatetimeIndex(
+ [1, 2, 3], tz="US/Eastern"))
+
+
+def test_is_timedelta64_dtype():
+ assert not com.is_timedelta64_dtype(object)
+ assert not com.is_timedelta64_dtype([1, 2, 3])
+
+ assert com.is_timedelta64_dtype(np.timedelta64)
+ assert com.is_timedelta64_dtype(pd.Series([], dtype="timedelta64[ns]"))
+
+
+def test_is_period_dtype():
+ assert not com.is_period_dtype(object)
+ assert not com.is_period_dtype([1, 2, 3])
+ assert not com.is_period_dtype(pd.Period("2017-01-01"))
+
+ assert com.is_period_dtype(PeriodDtype(freq="D"))
+ assert com.is_period_dtype(pd.PeriodIndex([], freq="A"))
+
+
+def test_is_interval_dtype():
+ assert not com.is_interval_dtype(object)
+ assert not com.is_interval_dtype([1, 2, 3])
+
+ assert com.is_interval_dtype(IntervalDtype())
+
+ interval = pd.Interval(1, 2, closed="right")
+ assert not com.is_interval_dtype(interval)
+ assert com.is_interval_dtype(pd.IntervalIndex([interval]))
+
+
+def test_is_categorical_dtype():
+ assert not com.is_categorical_dtype(object)
+ assert not com.is_categorical_dtype([1, 2, 3])
+
+ assert com.is_categorical_dtype(CategoricalDtype())
+ assert com.is_categorical_dtype(pd.Categorical([1, 2, 3]))
+ assert com.is_categorical_dtype(pd.CategoricalIndex([1, 2, 3]))
+
+
+def test_is_string_dtype():
+ assert not com.is_string_dtype(int)
+ assert not com.is_string_dtype(pd.Series([1, 2]))
+
+ assert com.is_string_dtype(str)
+ assert com.is_string_dtype(object)
+ assert com.is_string_dtype(np.array(['a', 'b']))
+
+
+def test_is_period_arraylike():
+ assert not com.is_period_arraylike([1, 2, 3])
+ assert not com.is_period_arraylike(pd.Index([1, 2, 3]))
+ assert com.is_period_arraylike(pd.PeriodIndex(["2017-01-01"], freq="D"))
+
+
+def test_is_datetime_arraylike():
+ assert not com.is_datetime_arraylike([1, 2, 3])
+ assert not com.is_datetime_arraylike(pd.Index([1, 2, 3]))
+ assert com.is_datetime_arraylike(pd.DatetimeIndex([1, 2, 3]))
+
+
+def test_is_datetimelike():
+ assert not com.is_datetimelike([1, 2, 3])
+ assert not com.is_datetimelike(pd.Index([1, 2, 3]))
+
+ assert com.is_datetimelike(pd.DatetimeIndex([1, 2, 3]))
+ assert com.is_datetimelike(pd.PeriodIndex([], freq="A"))
+ assert com.is_datetimelike(np.array([], dtype=np.datetime64))
+ assert com.is_datetimelike(pd.Series([], dtype="timedelta64[ns]"))
+ assert com.is_datetimelike(pd.DatetimeIndex([1, 2, 3], tz="US/Eastern"))
+
+ dtype = DatetimeTZDtype("ns", tz="US/Eastern")
+ s = pd.Series([], dtype=dtype)
+ assert com.is_datetimelike(s)
+
+
+def test_is_integer_dtype():
+ assert not com.is_integer_dtype(str)
+ assert not com.is_integer_dtype(float)
+ assert not com.is_integer_dtype(np.datetime64)
+ assert not com.is_integer_dtype(np.timedelta64)
+ assert not com.is_integer_dtype(pd.Index([1, 2.]))
+ assert not com.is_integer_dtype(np.array(['a', 'b']))
+ assert not com.is_integer_dtype(np.array([], dtype=np.timedelta64))
+
+ assert com.is_integer_dtype(int)
+ assert com.is_integer_dtype(np.uint64)
+ assert com.is_integer_dtype(pd.Series([1, 2]))
+
+
+def test_is_signed_integer_dtype():
+ assert not com.is_signed_integer_dtype(str)
+ assert not com.is_signed_integer_dtype(float)
+ assert not com.is_signed_integer_dtype(np.uint64)
+ assert not com.is_signed_integer_dtype(np.datetime64)
+ assert not com.is_signed_integer_dtype(np.timedelta64)
+ assert not com.is_signed_integer_dtype(pd.Index([1, 2.]))
+ assert not com.is_signed_integer_dtype(np.array(['a', 'b']))
+ assert not com.is_signed_integer_dtype(np.array([1, 2], dtype=np.uint32))
+ assert not com.is_signed_integer_dtype(np.array([], dtype=np.timedelta64))
+
+ assert com.is_signed_integer_dtype(int)
+ assert com.is_signed_integer_dtype(pd.Series([1, 2]))
+
+
+def test_is_unsigned_integer_dtype():
+ assert not com.is_unsigned_integer_dtype(str)
+ assert not com.is_unsigned_integer_dtype(int)
+ assert not com.is_unsigned_integer_dtype(float)
+ assert not com.is_unsigned_integer_dtype(pd.Series([1, 2]))
+ assert not com.is_unsigned_integer_dtype(pd.Index([1, 2.]))
+ assert not com.is_unsigned_integer_dtype(np.array(['a', 'b']))
+
+ assert com.is_unsigned_integer_dtype(np.uint64)
+ assert com.is_unsigned_integer_dtype(np.array([1, 2], dtype=np.uint32))
+
+
+def test_is_int64_dtype():
+ assert not com.is_int64_dtype(str)
+ assert not com.is_int64_dtype(float)
+ assert not com.is_int64_dtype(np.int32)
+ assert not com.is_int64_dtype(np.uint64)
+ assert not com.is_int64_dtype(pd.Index([1, 2.]))
+ assert not com.is_int64_dtype(np.array(['a', 'b']))
+ assert not com.is_int64_dtype(np.array([1, 2], dtype=np.uint32))
+
+ assert com.is_int64_dtype(np.int64)
+ assert com.is_int64_dtype(np.array([1, 2], dtype=np.int64))
+
+
+def test_is_int_or_datetime_dtype():
+ assert not com.is_int_or_datetime_dtype(str)
+ assert not com.is_int_or_datetime_dtype(float)
+ assert not com.is_int_or_datetime_dtype(pd.Index([1, 2.]))
+ assert not com.is_int_or_datetime_dtype(np.array(['a', 'b']))
+
+ assert com.is_int_or_datetime_dtype(int)
+ assert com.is_int_or_datetime_dtype(np.uint64)
+ assert com.is_int_or_datetime_dtype(np.datetime64)
+ assert com.is_int_or_datetime_dtype(np.timedelta64)
+ assert com.is_int_or_datetime_dtype(pd.Series([1, 2]))
+ assert com.is_int_or_datetime_dtype(np.array([], dtype=np.datetime64))
+ assert com.is_int_or_datetime_dtype(np.array([], dtype=np.timedelta64))
+
+
+def test_is_datetime64_any_dtype():
+ assert not com.is_datetime64_any_dtype(int)
+ assert not com.is_datetime64_any_dtype(str)
+ assert not com.is_datetime64_any_dtype(np.array([1, 2]))
+ assert not com.is_datetime64_any_dtype(np.array(['a', 'b']))
+
+ assert com.is_datetime64_any_dtype(np.datetime64)
+ assert com.is_datetime64_any_dtype(np.array([], dtype=np.datetime64))
+ assert com.is_datetime64_any_dtype(DatetimeTZDtype("ns", "US/Eastern"))
+ assert com.is_datetime64_any_dtype(pd.DatetimeIndex([1, 2, 3],
+ dtype=np.datetime64))
+
+
+def test_is_datetime64_ns_dtype():
+ assert not com.is_datetime64_ns_dtype(int)
+ assert not com.is_datetime64_ns_dtype(str)
+ assert not com.is_datetime64_ns_dtype(np.datetime64)
+ assert not com.is_datetime64_ns_dtype(np.array([1, 2]))
+ assert not com.is_datetime64_ns_dtype(np.array(['a', 'b']))
+ assert not com.is_datetime64_ns_dtype(np.array([], dtype=np.datetime64))
+
+ # This datetime array has the wrong unit (ps instead of ns)
+ assert not com.is_datetime64_ns_dtype(np.array([], dtype="datetime64[ps]"))
+
+ assert com.is_datetime64_ns_dtype(DatetimeTZDtype("ns", "US/Eastern"))
+ assert com.is_datetime64_ns_dtype(pd.DatetimeIndex([1, 2, 3],
+ dtype=np.datetime64))
+
+
+def test_is_timedelta64_ns_dtype():
+ assert not com.is_timedelta64_ns_dtype(np.dtype('m8[ps]'))
+ assert not com.is_timedelta64_ns_dtype(
+ np.array([1, 2], dtype=np.timedelta64))
+
+ assert com.is_timedelta64_ns_dtype(np.dtype('m8[ns]'))
+ assert com.is_timedelta64_ns_dtype(np.array([1, 2], dtype='m8[ns]'))
+
+
+def test_is_datetime_or_timedelta_dtype():
+ assert not com.is_datetime_or_timedelta_dtype(int)
+ assert not com.is_datetime_or_timedelta_dtype(str)
+ assert not com.is_datetime_or_timedelta_dtype(pd.Series([1, 2]))
+ assert not com.is_datetime_or_timedelta_dtype(np.array(['a', 'b']))
+
+ assert com.is_datetime_or_timedelta_dtype(np.datetime64)
+ assert com.is_datetime_or_timedelta_dtype(np.timedelta64)
+ assert com.is_datetime_or_timedelta_dtype(
+ np.array([], dtype=np.timedelta64))
+ assert com.is_datetime_or_timedelta_dtype(
+ np.array([], dtype=np.datetime64))
+
+
+def test_is_numeric_v_string_like():
+ assert not com.is_numeric_v_string_like(1, 1)
+ assert not com.is_numeric_v_string_like(1, "foo")
+ assert not com.is_numeric_v_string_like("foo", "foo")
+ assert not com.is_numeric_v_string_like(np.array([1]), np.array([2]))
+ assert not com.is_numeric_v_string_like(
+ np.array(["foo"]), np.array(["foo"]))
+
+ assert com.is_numeric_v_string_like(np.array([1]), "foo")
+ assert com.is_numeric_v_string_like("foo", np.array([1]))
+ assert com.is_numeric_v_string_like(np.array([1, 2]), np.array(["foo"]))
+ assert com.is_numeric_v_string_like(np.array(["foo"]), np.array([1, 2]))
+
+
+def test_is_datetimelike_v_numeric():
+ dt = np.datetime64(pd.datetime(2017, 1, 1))
+
+ assert not com.is_datetimelike_v_numeric(1, 1)
+ assert not com.is_datetimelike_v_numeric(dt, dt)
+ assert not com.is_datetimelike_v_numeric(np.array([1]), np.array([2]))
+ assert not com.is_datetimelike_v_numeric(np.array([dt]), np.array([dt]))
+
+ assert com.is_datetimelike_v_numeric(1, dt)
+ assert com.is_datetimelike_v_numeric(1, dt)
+ assert com.is_datetimelike_v_numeric(np.array([dt]), 1)
+ assert com.is_datetimelike_v_numeric(np.array([1]), dt)
+ assert com.is_datetimelike_v_numeric(np.array([dt]), np.array([1]))
+
+
+def test_is_datetimelike_v_object():
+ obj = object()
+ dt = np.datetime64(pd.datetime(2017, 1, 1))
+
+ assert not com.is_datetimelike_v_object(dt, dt)
+ assert not com.is_datetimelike_v_object(obj, obj)
+ assert not com.is_datetimelike_v_object(np.array([dt]), np.array([1]))
+ assert not com.is_datetimelike_v_object(np.array([dt]), np.array([dt]))
+ assert not com.is_datetimelike_v_object(np.array([obj]), np.array([obj]))
+
+ assert com.is_datetimelike_v_object(dt, obj)
+ assert com.is_datetimelike_v_object(obj, dt)
+ assert com.is_datetimelike_v_object(np.array([dt]), obj)
+ assert com.is_datetimelike_v_object(np.array([obj]), dt)
+ assert com.is_datetimelike_v_object(np.array([dt]), np.array([obj]))
+
+
+def test_needs_i8_conversion():
+ assert not com.needs_i8_conversion(str)
+ assert not com.needs_i8_conversion(np.int64)
+ assert not com.needs_i8_conversion(pd.Series([1, 2]))
+ assert not com.needs_i8_conversion(np.array(['a', 'b']))
+
+ assert com.needs_i8_conversion(np.datetime64)
+ assert com.needs_i8_conversion(pd.Series([], dtype="timedelta64[ns]"))
+ assert com.needs_i8_conversion(pd.DatetimeIndex(
+ [1, 2, 3], tz="US/Eastern"))
+
+
+def test_is_numeric_dtype():
+ assert not com.is_numeric_dtype(str)
+ assert not com.is_numeric_dtype(np.datetime64)
+ assert not com.is_numeric_dtype(np.timedelta64)
+ assert not com.is_numeric_dtype(np.array(['a', 'b']))
+ assert not com.is_numeric_dtype(np.array([], dtype=np.timedelta64))
+
+ assert com.is_numeric_dtype(int)
+ assert com.is_numeric_dtype(float)
+ assert com.is_numeric_dtype(np.uint64)
+ assert com.is_numeric_dtype(pd.Series([1, 2]))
+ assert com.is_numeric_dtype(pd.Index([1, 2.]))
+
+
+def test_is_string_like_dtype():
+ assert not com.is_string_like_dtype(object)
+ assert not com.is_string_like_dtype(pd.Series([1, 2]))
+
+ assert com.is_string_like_dtype(str)
+ assert com.is_string_like_dtype(np.array(['a', 'b']))
+
+
+def test_is_float_dtype():
+ assert not com.is_float_dtype(str)
+ assert not com.is_float_dtype(int)
+ assert not com.is_float_dtype(pd.Series([1, 2]))
+ assert not com.is_float_dtype(np.array(['a', 'b']))
+
+ assert com.is_float_dtype(float)
+ assert com.is_float_dtype(pd.Index([1, 2.]))
+
+
+def test_is_bool_dtype():
+ assert not com.is_bool_dtype(int)
+ assert not com.is_bool_dtype(str)
+ assert not com.is_bool_dtype(pd.Series([1, 2]))
+ assert not com.is_bool_dtype(np.array(['a', 'b']))
+
+ assert com.is_bool_dtype(bool)
+ assert com.is_bool_dtype(np.bool)
+ assert com.is_bool_dtype(np.array([True, False]))
+
+
+def test_is_extension_type():
+ assert not com.is_extension_type([1, 2, 3])
+ assert not com.is_extension_type(np.array([1, 2, 3]))
+ assert not com.is_extension_type(pd.DatetimeIndex([1, 2, 3]))
+
+ cat = pd.Categorical([1, 2, 3])
+ assert com.is_extension_type(cat)
+ assert com.is_extension_type(pd.Series(cat))
+ assert com.is_extension_type(pd.SparseArray([1, 2, 3]))
+ assert com.is_extension_type(pd.SparseSeries([1, 2, 3]))
+ assert com.is_extension_type(pd.DatetimeIndex([1, 2, 3], tz="US/Eastern"))
+
+ dtype = DatetimeTZDtype("ns", tz="US/Eastern")
+ s = pd.Series([], dtype=dtype)
+ assert com.is_extension_type(s)
+
+ # This test will only skip if the previous assertions
+ # pass AND scipy is not installed.
+ sparse = pytest.importorskip("scipy.sparse")
+ assert not com.is_extension_type(sparse.bsr_matrix([1, 2, 3]))
+
+
+def test_is_complex_dtype():
+ assert not com.is_complex_dtype(int)
+ assert not com.is_complex_dtype(str)
+ assert not com.is_complex_dtype(pd.Series([1, 2]))
+ assert not com.is_complex_dtype(np.array(['a', 'b']))
+
+ assert com.is_complex_dtype(np.complex)
+ assert com.is_complex_dtype(np.array([1 + 1j, 5]))
| Title is self-explanatory.
Closes #15895. | https://api.github.com/repos/pandas-dev/pandas/pulls/16237 | 2017-05-04T19:10:29Z | 2017-05-04T21:37:15Z | 2017-05-04T21:37:15Z | 2017-05-04T21:39:06Z |
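Most of the ``com.is_*`` helpers exercised by these tests reduce to resolving the argument to a NumPy dtype and then inspecting it. As a rough standalone sketch of the two timedelta checks (hypothetical helpers written under that assumption, not pandas' actual implementation):

```python
import numpy as np

def _get_dtype(x):
    # Resolve arrays/Series via their .dtype; anything else goes through
    # np.dtype(), which raises TypeError for non-dtype-like input (e.g. lists).
    dtype = getattr(x, "dtype", None)
    if isinstance(dtype, np.dtype):
        return dtype
    try:
        return np.dtype(x)
    except TypeError:
        return None

def is_timedelta64_dtype(x):
    # Any timedelta64 unit counts: m8[ns], m8[ps], the generic np.timedelta64, ...
    dtype = _get_dtype(x)
    return dtype is not None and dtype.kind == "m"

def is_timedelta64_ns_dtype(x):
    # Stricter: only the nanosecond unit that pandas uses internally.
    return _get_dtype(x) == np.dtype("m8[ns]")
```

The generic ``np.timedelta64`` satisfies the first check but not the second, since its resolved dtype carries no ``[ns]`` unit — which is the distinction the ``test_is_timedelta64_dtype`` / ``test_is_timedelta64_ns_dtype`` pair above is probing.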
CLN: Index.append() refactoring | diff --git a/pandas/core/dtypes/concat.py b/pandas/core/dtypes/concat.py
index 292d5f608d4cb..0ce45eea119ed 100644
--- a/pandas/core/dtypes/concat.py
+++ b/pandas/core/dtypes/concat.py
@@ -19,7 +19,7 @@
_TD_DTYPE)
from pandas.core.dtypes.generic import (
ABCDatetimeIndex, ABCTimedeltaIndex,
- ABCPeriodIndex)
+ ABCPeriodIndex, ABCRangeIndex)
def get_dtype_kinds(l):
@@ -41,6 +41,8 @@ def get_dtype_kinds(l):
typ = 'category'
elif is_sparse(arr):
typ = 'sparse'
+ elif isinstance(arr, ABCRangeIndex):
+ typ = 'range'
elif is_datetimetz(arr):
# if to_concat contains different tz,
# the result must be object dtype
@@ -559,3 +561,47 @@ def convert_sparse(x, axis):
# coerce to object if needed
result = result.astype('object')
return result
+
+
+def _concat_rangeindex_same_dtype(indexes):
+ """
+ Concatenates multiple RangeIndex instances. All members of "indexes" must
+ be of type RangeIndex; result will be RangeIndex if possible, Int64Index
+ otherwise. E.g.:
+ indexes = [RangeIndex(3), RangeIndex(3, 6)] -> RangeIndex(6)
+ indexes = [RangeIndex(3), RangeIndex(4, 6)] -> Int64Index([0,1,2,4,5])
+ """
+
+ start = step = next = None
+
+ for obj in indexes:
+ if not len(obj):
+ continue
+
+ if start is None:
+ # This is set by the first non-empty index
+ start = obj._start
+ if step is None and len(obj) > 1:
+ step = obj._step
+ elif step is None:
+ # First non-empty index had only one element
+ if obj._start == start:
+ return _concat_index_asobject(indexes)
+ step = obj._start - start
+
+ non_consecutive = ((step != obj._step and len(obj) > 1) or
+ (next is not None and obj._start != next))
+ if non_consecutive:
+ # Int64Index._append_same_dtype([ix.astype(int) for ix in indexes])
+ # would be preferred... but it currently resorts to
+ # _concat_index_asobject anyway.
+ return _concat_index_asobject(indexes)
+
+ if step is not None:
+ next = obj[-1] + step
+
+ if start is None:
+ start = obj._start
+ step = obj._step
+ stop = obj._stop if next is None else next
+ return indexes[0].__class__(start, stop, step)
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 4aecc75d95971..e3c32629f54a0 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -1741,18 +1741,17 @@ def append(self, other):
names = set([obj.name for obj in to_concat])
name = None if len(names) > 1 else self.name
- if self.is_categorical():
- # if calling index is category, don't check dtype of others
- from pandas.core.indexes.category import CategoricalIndex
- return CategoricalIndex._append_same_dtype(self, to_concat, name)
+ return self._concat(to_concat, name)
+
+ def _concat(self, to_concat, name):
typs = _concat.get_dtype_kinds(to_concat)
if len(typs) == 1:
- return self._append_same_dtype(to_concat, name=name)
+ return self._concat_same_dtype(to_concat, name=name)
return _concat._concat_index_asobject(to_concat, name=name)
- def _append_same_dtype(self, to_concat, name):
+ def _concat_same_dtype(self, to_concat, name):
"""
Concatenate to_concat which has the same class
"""
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index ac4698b570d17..f22407308e094 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -633,7 +633,11 @@ def insert(self, loc, item):
codes = np.concatenate((codes[:loc], code, codes[loc:]))
return self._create_from_codes(codes)
- def _append_same_dtype(self, to_concat, name):
+ def _concat(self, to_concat, name):
+ # if calling index is category, don't check dtype of others
+ return CategoricalIndex._concat_same_dtype(self, to_concat, name)
+
+ def _concat_same_dtype(self, to_concat, name):
"""
Concatenate to_concat which has the same class
ValueError if other is not in the categories
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 845c71b6c41d8..c3232627fce74 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -837,7 +837,7 @@ def summary(self, name=None):
result = result.replace("'", "")
return result
- def _append_same_dtype(self, to_concat, name):
+ def _concat_same_dtype(self, to_concat, name):
"""
Concatenate to_concat which has the same class
"""
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index aa2ad21ae37fd..c855dbf82c2af 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -867,7 +867,7 @@ def _as_like_interval_index(self, other, error_msg):
raise ValueError(error_msg)
return other
- def _append_same_dtype(self, to_concat, name):
+ def _concat_same_dtype(self, to_concat, name):
"""
assert that we all have the same .closed
we allow a 0-len index here as well
@@ -876,7 +876,7 @@ def _append_same_dtype(self, to_concat, name):
msg = ('can only append two IntervalIndex objects '
'that are closed on the same side')
raise ValueError(msg)
- return super(IntervalIndex, self)._append_same_dtype(to_concat, name)
+ return super(IntervalIndex, self)._concat_same_dtype(to_concat, name)
@Appender(_index_shared_docs['take'] % _index_doc_kwargs)
def take(self, indices, axis=0, allow_fill=True,
diff --git a/pandas/core/indexes/range.py b/pandas/core/indexes/range.py
index 5071b50bbebdf..3f24bdeac0420 100644
--- a/pandas/core/indexes/range.py
+++ b/pandas/core/indexes/range.py
@@ -14,6 +14,7 @@
from pandas.compat.numpy import function as nv
from pandas.core.indexes.base import Index, _index_shared_docs
from pandas.util._decorators import Appender, cache_readonly
+import pandas.core.dtypes.concat as _concat
import pandas.core.indexes.base as ibase
from pandas.core.indexes.numeric import Int64Index
@@ -443,62 +444,8 @@ def join(self, other, how='left', level=None, return_indexers=False,
return super(RangeIndex, self).join(other, how, level, return_indexers,
sort)
- def append(self, other):
- """
- Append a collection of Index options together
-
- Parameters
- ----------
- other : Index or list/tuple of indices
-
- Returns
- -------
- appended : RangeIndex if all indexes are consecutive RangeIndexes,
- otherwise Int64Index or Index
- """
-
- to_concat = [self]
-
- if isinstance(other, (list, tuple)):
- to_concat = to_concat + list(other)
- else:
- to_concat.append(other)
-
- if not all([isinstance(i, RangeIndex) for i in to_concat]):
- return super(RangeIndex, self).append(other)
-
- start = step = next = None
-
- for obj in to_concat:
- if not len(obj):
- continue
-
- if start is None:
- # This is set by the first non-empty index
- start = obj._start
- if step is None and len(obj) > 1:
- step = obj._step
- elif step is None:
- # First non-empty index had only one element
- if obj._start == start:
- return super(RangeIndex, self).append(other)
- step = obj._start - start
-
- non_consecutive = ((step != obj._step and len(obj) > 1) or
- (next is not None and obj._start != next))
- if non_consecutive:
- return super(RangeIndex, self).append(other)
-
- if step is not None:
- next = obj[-1] + step
-
- if start is None:
- start = obj._start
- step = obj._step
- stop = obj._stop if next is None else next
- names = set([obj.name for obj in to_concat])
- name = None if len(names) > 1 else self.name
- return RangeIndex(start, stop, step, name=name)
+ def _concat_same_dtype(self, indexes, name):
+ return _concat._concat_rangeindex_same_dtype(indexes).rename(name)
def __len__(self):
"""
| - [x] tests passed (one xpassed due to #16234)
- [x] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff``
The first commit is just #16213.
The others reorganize the code for ``Index.append()`` a bit. | https://api.github.com/repos/pandas-dev/pandas/pulls/16236 | 2017-05-04T16:05:54Z | 2017-08-22T08:11:11Z | 2017-08-22T08:11:11Z | 2017-08-22T08:42:14Z |
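The core of the logic moved into ``_concat_rangeindex_same_dtype`` can be sketched with plain Python ``range`` objects, with ``None`` standing in for the fallback to ``Int64Index``/object dtype (a simplified sketch, not the pandas code itself):

```python
def concat_ranges(ranges):
    # Return a single range if the pieces chain consecutively with one
    # common step; otherwise None (pandas falls back to Int64Index/object).
    start = step = nxt = last = None
    for r in ranges:
        if len(r) == 0:
            continue
        if start is None:
            # The first non-empty piece fixes the start (and the step,
            # when it has more than one element).
            start = r.start
            if len(r) > 1:
                step = r.step
        elif step is None:
            # The first non-empty piece was a singleton: infer the step
            # from the gap to this piece's start.
            if r.start == start:
                return None
            step = r.start - start
        non_consecutive = (
            (step is not None and len(r) > 1 and r.step != step) or
            (nxt is not None and r.start != nxt)
        )
        if non_consecutive:
            return None
        if step is not None:
            nxt = r[-1] + step
        last = r
    if last is None:
        return range(0)      # every piece was empty
    stop = nxt if nxt is not None else last.stop
    return range(start, stop, step if step is not None else 1)
```

At the pandas level this is what makes ``pd.RangeIndex(3).append(pd.RangeIndex(3, 6))`` come back as a single ``RangeIndex(start=0, stop=6, step=1)`` rather than an ``Int64Index``, matching the docstring examples in the diff above.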
BUG: Accept list-like color with single col in plot | diff --git a/doc/source/whatsnew/v0.20.2.txt b/doc/source/whatsnew/v0.20.2.txt
index 983f3edfa2f46..cbfebee2ceba2 100644
--- a/doc/source/whatsnew/v0.20.2.txt
+++ b/doc/source/whatsnew/v0.20.2.txt
@@ -55,6 +55,8 @@ I/O
Plotting
^^^^^^^^
+- Bug in ``DataFrame.plot`` with a single column and a list-like ``color`` (:issue:`3486`)
+
diff --git a/pandas/plotting/_core.py b/pandas/plotting/_core.py
index e88979b14c8af..c0f9f62106330 100644
--- a/pandas/plotting/_core.py
+++ b/pandas/plotting/_core.py
@@ -180,7 +180,8 @@ def _validate_color_args(self):
colors = self.kwds.pop('colors')
self.kwds['color'] = colors
- if ('color' in self.kwds and self.nseries == 1):
+ if ('color' in self.kwds and self.nseries == 1 and
+ not is_list_like(self.kwds['color'])):
# support series.plot(color='green')
self.kwds['color'] = [self.kwds['color']]
diff --git a/pandas/tests/plotting/test_frame.py b/pandas/tests/plotting/test_frame.py
index 9abbb348fbfa8..e40ec5a1faea8 100644
--- a/pandas/tests/plotting/test_frame.py
+++ b/pandas/tests/plotting/test_frame.py
@@ -153,6 +153,11 @@ def test_mpl2_color_cycle_str(self):
else:
pytest.skip("not supported in matplotlib < 2.0.0")
+ def test_color_single_series_list(self):
+ # GH 3486
+ df = DataFrame({"A": [1, 2, 3]})
+ _check_plot_works(df.plot, color=['red'])
+
def test_color_empty_string(self):
df = DataFrame(randn(10, 2))
with pytest.raises(ValueError):
| Close #3486 | https://api.github.com/repos/pandas-dev/pandas/pulls/16233 | 2017-05-04T13:27:43Z | 2017-05-11T16:07:28Z | 2017-05-11T16:07:28Z | 2017-05-30T12:20:36Z |
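The one-line guard added above keeps scalar colors working while leaving list-likes alone. The intent, isolated from the plotting machinery (``normalize_color`` and the toy ``is_list_like`` are hypothetical stand-ins, not pandas API):

```python
def is_list_like(obj):
    # Rough stand-in for pandas.api.types.is_list_like: iterable, but
    # not a plain string.
    return hasattr(obj, "__iter__") and not isinstance(obj, str)

def normalize_color(color, nseries):
    # Before the fix, a single-column frame always wrapped `color`,
    # so df.plot(color=['red']) became [['red']] and confused matplotlib.
    if nseries == 1 and not is_list_like(color):
        return [color]   # still supports series.plot(color='green')
    return color
```

With the fix, ``df.plot(color=['red'])`` on a one-column frame passes ``['red']`` through unchanged — the GH3486 case covered by the new ``test_color_single_series_list``.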
MAINT: Remove tm.TestCase from testing | diff --git a/doc/source/contributing.rst b/doc/source/contributing.rst
index 26a2f56f3c1a1..aacfe25b91564 100644
--- a/doc/source/contributing.rst
+++ b/doc/source/contributing.rst
@@ -617,11 +617,11 @@ the expected correct result::
Transitioning to ``pytest``
~~~~~~~~~~~~~~~~~~~~~~~~~~~
-*pandas* existing test structure is *mostly* classed based, meaning that you will typically find tests wrapped in a class, inheriting from ``tm.TestCase``.
+*pandas*' existing test structure is *mostly* class based, meaning that you will typically find tests wrapped in a class.
.. code-block:: python
- class TestReallyCoolFeature(tm.TestCase):
+ class TestReallyCoolFeature(object):
....
Going forward, we are moving to a more *functional* style using the `pytest <http://doc.pytest.org/en/latest/>`__ framework, which offers a richer testing
diff --git a/pandas/conftest.py b/pandas/conftest.py
index caced6a0c568e..1149fae3fc0b0 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -25,6 +25,13 @@ def pytest_runtest_setup(item):
pytest.skip("skipping due to --skip-network")
+# Configurations for all tests and all test modules
+
+@pytest.fixture(autouse=True)
+def configure_tests():
+ pandas.set_option('chained_assignment', 'raise')
+
+
# For running doctests: make np and pd names available
@pytest.fixture(autouse=True)
diff --git a/pandas/tests/api/test_api.py b/pandas/tests/api/test_api.py
index 4678db4a52c5a..b1652cf6eb6db 100644
--- a/pandas/tests/api/test_api.py
+++ b/pandas/tests/api/test_api.py
@@ -23,7 +23,7 @@ def check(self, namespace, expected, ignored=None):
tm.assert_almost_equal(result, expected)
-class TestPDApi(Base, tm.TestCase):
+class TestPDApi(Base):
# these are optionally imported based on testing
# & need to be ignored
@@ -117,7 +117,7 @@ def test_api(self):
self.ignored)
-class TestApi(Base, tm.TestCase):
+class TestApi(Base):
allowed = ['types']
@@ -137,7 +137,7 @@ def test_testing(self):
self.check(testing, self.funcs)
-class TestDatetoolsDeprecation(tm.TestCase):
+class TestDatetoolsDeprecation(object):
def test_deprecation_access_func(self):
with tm.assert_produces_warning(FutureWarning,
@@ -150,7 +150,7 @@ def test_deprecation_access_obj(self):
pd.datetools.monthEnd
-class TestTopLevelDeprecations(tm.TestCase):
+class TestTopLevelDeprecations(object):
# top-level API deprecations
# GH 13790
@@ -191,35 +191,35 @@ def test_get_store(self):
s.close()
-class TestJson(tm.TestCase):
+class TestJson(object):
def test_deprecation_access_func(self):
with catch_warnings(record=True):
pd.json.dumps([])
-class TestParser(tm.TestCase):
+class TestParser(object):
def test_deprecation_access_func(self):
with catch_warnings(record=True):
pd.parser.na_values
-class TestLib(tm.TestCase):
+class TestLib(object):
def test_deprecation_access_func(self):
with catch_warnings(record=True):
pd.lib.infer_dtype('foo')
-class TestTSLib(tm.TestCase):
+class TestTSLib(object):
def test_deprecation_access_func(self):
with catch_warnings(record=True):
pd.tslib.Timestamp('20160101')
-class TestTypes(tm.TestCase):
+class TestTypes(object):
def test_deprecation_access_func(self):
with tm.assert_produces_warning(
diff --git a/pandas/tests/api/test_types.py b/pandas/tests/api/test_types.py
index 834857b87960c..1cbcf3f9109a4 100644
--- a/pandas/tests/api/test_types.py
+++ b/pandas/tests/api/test_types.py
@@ -13,7 +13,7 @@
from .test_api import Base
-class TestTypes(Base, tm.TestCase):
+class TestTypes(Base):
allowed = ['is_bool', 'is_bool_dtype',
'is_categorical', 'is_categorical_dtype', 'is_complex',
diff --git a/pandas/tests/computation/test_eval.py b/pandas/tests/computation/test_eval.py
index 5086b803419c6..89ab4531877a4 100644
--- a/pandas/tests/computation/test_eval.py
+++ b/pandas/tests/computation/test_eval.py
@@ -95,11 +95,10 @@ def _is_py3_complex_incompat(result, expected):
_good_arith_ops = com.difference(_arith_ops_syms, _special_case_arith_ops_syms)
-class TestEvalNumexprPandas(tm.TestCase):
+class TestEvalNumexprPandas(object):
@classmethod
def setup_class(cls):
- super(TestEvalNumexprPandas, cls).setup_class()
tm.skip_if_no_ne()
import numexpr as ne
cls.ne = ne
@@ -108,7 +107,6 @@ def setup_class(cls):
@classmethod
def teardown_class(cls):
- super(TestEvalNumexprPandas, cls).teardown_class()
del cls.engine, cls.parser
if hasattr(cls, 'ne'):
del cls.ne
@@ -1067,11 +1065,10 @@ def test_performance_warning_for_poor_alignment(self, engine, parser):
# ------------------------------------
# Slightly more complex ops
-class TestOperationsNumExprPandas(tm.TestCase):
+class TestOperationsNumExprPandas(object):
@classmethod
def setup_class(cls):
- super(TestOperationsNumExprPandas, cls).setup_class()
tm.skip_if_no_ne()
cls.engine = 'numexpr'
cls.parser = 'pandas'
@@ -1079,7 +1076,6 @@ def setup_class(cls):
@classmethod
def teardown_class(cls):
- super(TestOperationsNumExprPandas, cls).teardown_class()
del cls.engine, cls.parser
def eval(self, *args, **kwargs):
@@ -1584,11 +1580,10 @@ def setup_class(cls):
cls.arith_ops = expr._arith_ops_syms + expr._cmp_ops_syms
-class TestMathPythonPython(tm.TestCase):
+class TestMathPythonPython(object):
@classmethod
def setup_class(cls):
- super(TestMathPythonPython, cls).setup_class()
tm.skip_if_no_ne()
cls.engine = 'python'
cls.parser = 'pandas'
@@ -1873,7 +1868,7 @@ def test_negate_lt_eq_le(engine, parser):
tm.assert_frame_equal(result, expected)
-class TestValidate(tm.TestCase):
+class TestValidate(object):
def test_validate_bool_args(self):
invalid_values = [1, "True", [1, 2, 3], 5.0]
diff --git a/pandas/tests/dtypes/test_cast.py b/pandas/tests/dtypes/test_cast.py
index cbf049b95b6ef..e92724a5d9cd4 100644
--- a/pandas/tests/dtypes/test_cast.py
+++ b/pandas/tests/dtypes/test_cast.py
@@ -26,7 +26,7 @@
from pandas.util import testing as tm
-class TestMaybeDowncast(tm.TestCase):
+class TestMaybeDowncast(object):
def test_downcast_conv(self):
# test downcasting
@@ -156,7 +156,7 @@ def test_infer_dtype_from_array(self, arr, expected):
assert dtype == expected
-class TestMaybe(tm.TestCase):
+class TestMaybe(object):
def test_maybe_convert_string_to_array(self):
result = maybe_convert_string_to_object('x')
@@ -214,7 +214,7 @@ def test_maybe_convert_scalar(self):
assert result == Timedelta('1 day 1 min').value
-class TestConvert(tm.TestCase):
+class TestConvert(object):
def test_maybe_convert_objects_copy(self):
values = np.array([1, 2])
@@ -233,7 +233,7 @@ def test_maybe_convert_objects_copy(self):
assert values is not out
-class TestCommonTypes(tm.TestCase):
+class TestCommonTypes(object):
def test_numpy_dtypes(self):
# (source_types, destination_type)
diff --git a/pandas/tests/dtypes/test_common.py b/pandas/tests/dtypes/test_common.py
index 0472f0599cd9b..68518e235d417 100644
--- a/pandas/tests/dtypes/test_common.py
+++ b/pandas/tests/dtypes/test_common.py
@@ -12,7 +12,7 @@
import pandas.util.testing as tm
-class TestPandasDtype(tm.TestCase):
+class TestPandasDtype(object):
# Passing invalid dtype, both as a string or object, must raise TypeError
# Per issue GH15520
diff --git a/pandas/tests/dtypes/test_concat.py b/pandas/tests/dtypes/test_concat.py
index c0be0dc38d27f..ca579e2dc9390 100644
--- a/pandas/tests/dtypes/test_concat.py
+++ b/pandas/tests/dtypes/test_concat.py
@@ -2,10 +2,9 @@
import pandas as pd
import pandas.core.dtypes.concat as _concat
-import pandas.util.testing as tm
-class TestConcatCompat(tm.TestCase):
+class TestConcatCompat(object):
def check_concat(self, to_concat, exp):
for klass in [pd.Index, pd.Series]:
diff --git a/pandas/tests/dtypes/test_generic.py b/pandas/tests/dtypes/test_generic.py
index e9af53aaa1e1a..653d7d3082c08 100644
--- a/pandas/tests/dtypes/test_generic.py
+++ b/pandas/tests/dtypes/test_generic.py
@@ -3,11 +3,10 @@
from warnings import catch_warnings
import numpy as np
import pandas as pd
-import pandas.util.testing as tm
from pandas.core.dtypes import generic as gt
-class TestABCClasses(tm.TestCase):
+class TestABCClasses(object):
tuples = [[1, 2, 2], ['red', 'blue', 'red']]
multi_index = pd.MultiIndex.from_arrays(tuples, names=('number', 'color'))
datetime_index = pd.to_datetime(['2000/1/1', '2010/1/1'])
diff --git a/pandas/tests/dtypes/test_inference.py b/pandas/tests/dtypes/test_inference.py
index ec02a5a200308..3790ebe0d3e7c 100644
--- a/pandas/tests/dtypes/test_inference.py
+++ b/pandas/tests/dtypes/test_inference.py
@@ -226,7 +226,7 @@ def test_is_recompilable():
assert not inference.is_re_compilable(f)
-class TestInference(tm.TestCase):
+class TestInference(object):
def test_infer_dtype_bytes(self):
compare = 'string' if PY2 else 'bytes'
@@ -405,7 +405,7 @@ def test_mixed_dtypes_remain_object_array(self):
tm.assert_numpy_array_equal(result, array)
-class TestTypeInference(tm.TestCase):
+class TestTypeInference(object):
def test_length_zero(self):
result = lib.infer_dtype(np.array([], dtype='i4'))
@@ -774,7 +774,7 @@ def test_categorical(self):
assert result == 'categorical'
-class TestNumberScalar(tm.TestCase):
+class TestNumberScalar(object):
def test_is_number(self):
@@ -917,7 +917,7 @@ def test_is_timedelta(self):
assert not is_timedelta64_ns_dtype(tdi.astype('timedelta64[h]'))
-class Testisscalar(tm.TestCase):
+class Testisscalar(object):
def test_isscalar_builtin_scalars(self):
assert is_scalar(None)
diff --git a/pandas/tests/dtypes/test_io.py b/pandas/tests/dtypes/test_io.py
index 443c0c5410e61..58a1c3540cd03 100644
--- a/pandas/tests/dtypes/test_io.py
+++ b/pandas/tests/dtypes/test_io.py
@@ -7,7 +7,7 @@
from pandas.compat import long, u
-class TestParseSQL(tm.TestCase):
+class TestParseSQL(object):
def test_convert_sql_column_floats(self):
arr = np.array([1.5, None, 3, 4.2], dtype=object)
diff --git a/pandas/tests/dtypes/test_missing.py b/pandas/tests/dtypes/test_missing.py
index 78396a8d89d91..90993890b7553 100644
--- a/pandas/tests/dtypes/test_missing.py
+++ b/pandas/tests/dtypes/test_missing.py
@@ -45,7 +45,7 @@ def test_notnull():
assert (isinstance(isnull(s), Series))
-class TestIsNull(tm.TestCase):
+class TestIsNull(object):
def test_0d_array(self):
assert isnull(np.array(np.nan))
diff --git a/pandas/tests/frame/test_alter_axes.py b/pandas/tests/frame/test_alter_axes.py
index 34ab0b72f9b9a..e6313dfc602a8 100644
--- a/pandas/tests/frame/test_alter_axes.py
+++ b/pandas/tests/frame/test_alter_axes.py
@@ -25,7 +25,7 @@
from pandas.tests.frame.common import TestData
-class TestDataFrameAlterAxes(tm.TestCase, TestData):
+class TestDataFrameAlterAxes(TestData):
def test_set_index(self):
idx = Index(np.arange(len(self.mixed_frame)))
@@ -806,7 +806,7 @@ def test_set_index_preserve_categorical_dtype(self):
tm.assert_frame_equal(result, df)
-class TestIntervalIndex(tm.TestCase):
+class TestIntervalIndex(object):
def test_setitem(self):
diff --git a/pandas/tests/frame/test_analytics.py b/pandas/tests/frame/test_analytics.py
index 89ee096b4434e..be89b27912d1c 100644
--- a/pandas/tests/frame/test_analytics.py
+++ b/pandas/tests/frame/test_analytics.py
@@ -24,7 +24,7 @@
from pandas.tests.frame.common import TestData
-class TestDataFrameAnalytics(tm.TestCase, TestData):
+class TestDataFrameAnalytics(TestData):
# ---------------------------------------------------------------------=
# Correlation and covariance
diff --git a/pandas/tests/frame/test_api.py b/pandas/tests/frame/test_api.py
index 208c7b5ace50e..f63918c97c614 100644
--- a/pandas/tests/frame/test_api.py
+++ b/pandas/tests/frame/test_api.py
@@ -69,7 +69,7 @@ def test_add_prefix_suffix(self):
tm.assert_index_equal(with_suffix.columns, expected)
-class TestDataFrameMisc(tm.TestCase, SharedWithSparse, TestData):
+class TestDataFrameMisc(SharedWithSparse, TestData):
klass = DataFrame
diff --git a/pandas/tests/frame/test_apply.py b/pandas/tests/frame/test_apply.py
index 5febe8c62abe8..aa7c7a7120c1b 100644
--- a/pandas/tests/frame/test_apply.py
+++ b/pandas/tests/frame/test_apply.py
@@ -19,7 +19,7 @@
from pandas.tests.frame.common import TestData
-class TestDataFrameApply(tm.TestCase, TestData):
+class TestDataFrameApply(TestData):
def test_apply(self):
with np.errstate(all='ignore'):
@@ -482,7 +482,7 @@ def zip_frames(*frames):
return pd.concat(zipped, axis=1)
-class TestDataFrameAggregate(tm.TestCase, TestData):
+class TestDataFrameAggregate(TestData):
_multiprocess_can_split_ = True
diff --git a/pandas/tests/frame/test_asof.py b/pandas/tests/frame/test_asof.py
index 4207238f0cd4f..d4e3d541937dc 100644
--- a/pandas/tests/frame/test_asof.py
+++ b/pandas/tests/frame/test_asof.py
@@ -9,7 +9,7 @@
from .common import TestData
-class TestFrameAsof(TestData, tm.TestCase):
+class TestFrameAsof(TestData):
def setup_method(self, method):
self.N = N = 50
self.rng = date_range('1/1/1990', periods=N, freq='53s')
diff --git a/pandas/tests/frame/test_axis_select_reindex.py b/pandas/tests/frame/test_axis_select_reindex.py
index a563b678a3786..a6326083c1bee 100644
--- a/pandas/tests/frame/test_axis_select_reindex.py
+++ b/pandas/tests/frame/test_axis_select_reindex.py
@@ -22,7 +22,7 @@
from pandas.tests.frame.common import TestData
-class TestDataFrameSelectReindex(tm.TestCase, TestData):
+class TestDataFrameSelectReindex(TestData):
# These are specific reindex-based tests; other indexing tests should go in
# test_indexing
diff --git a/pandas/tests/frame/test_block_internals.py b/pandas/tests/frame/test_block_internals.py
index 44dc6df756f3d..c1a5b437be5d0 100644
--- a/pandas/tests/frame/test_block_internals.py
+++ b/pandas/tests/frame/test_block_internals.py
@@ -28,7 +28,7 @@
# structure
-class TestDataFrameBlockInternals(tm.TestCase, TestData):
+class TestDataFrameBlockInternals(TestData):
def test_cast_internals(self):
casted = DataFrame(self.frame._data, dtype=int)
diff --git a/pandas/tests/frame/test_combine_concat.py b/pandas/tests/frame/test_combine_concat.py
index 44f17faabe20d..688cacdee263e 100644
--- a/pandas/tests/frame/test_combine_concat.py
+++ b/pandas/tests/frame/test_combine_concat.py
@@ -18,7 +18,7 @@
from pandas.util.testing import assert_frame_equal, assert_series_equal
-class TestDataFrameConcatCommon(tm.TestCase, TestData):
+class TestDataFrameConcatCommon(TestData):
def test_concat_multiple_frames_dtypes(self):
@@ -441,7 +441,7 @@ def test_concat_numerical_names(self):
tm.assert_frame_equal(result, expected)
-class TestDataFrameCombineFirst(tm.TestCase, TestData):
+class TestDataFrameCombineFirst(TestData):
def test_combine_first_mixed(self):
a = Series(['a', 'b'], index=lrange(2))
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index 5b00ddc51da46..8459900ea1059 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -32,7 +32,7 @@
'int32', 'int64']
-class TestDataFrameConstructors(tm.TestCase, TestData):
+class TestDataFrameConstructors(TestData):
def test_constructor(self):
df = DataFrame()
@@ -1903,7 +1903,7 @@ def test_to_frame_with_falsey_names(self):
tm.assert_series_equal(result, expected)
-class TestDataFrameConstructorWithDatetimeTZ(tm.TestCase, TestData):
+class TestDataFrameConstructorWithDatetimeTZ(TestData):
def test_from_dict(self):
diff --git a/pandas/tests/frame/test_convert_to.py b/pandas/tests/frame/test_convert_to.py
index 353b4b873332e..e0cdca7904db7 100644
--- a/pandas/tests/frame/test_convert_to.py
+++ b/pandas/tests/frame/test_convert_to.py
@@ -11,7 +11,7 @@
from pandas.tests.frame.common import TestData
-class TestDataFrameConvertTo(tm.TestCase, TestData):
+class TestDataFrameConvertTo(TestData):
def test_to_dict(self):
test_data = {
diff --git a/pandas/tests/frame/test_dtypes.py b/pandas/tests/frame/test_dtypes.py
index 2d39db16dbd8d..b99a6fabfa42b 100644
--- a/pandas/tests/frame/test_dtypes.py
+++ b/pandas/tests/frame/test_dtypes.py
@@ -19,7 +19,7 @@
import pandas as pd
-class TestDataFrameDataTypes(tm.TestCase, TestData):
+class TestDataFrameDataTypes(TestData):
def test_concat_empty_dataframe_dtypes(self):
df = DataFrame(columns=list("abc"))
@@ -542,7 +542,7 @@ def test_arg_for_errors_in_astype(self):
df.astype(np.int8, errors='ignore')
-class TestDataFrameDatetimeWithTZ(tm.TestCase, TestData):
+class TestDataFrameDatetimeWithTZ(TestData):
def test_interleave(self):
diff --git a/pandas/tests/frame/test_indexing.py b/pandas/tests/frame/test_indexing.py
index 42eb7148d616e..f0503b60eeefa 100644
--- a/pandas/tests/frame/test_indexing.py
+++ b/pandas/tests/frame/test_indexing.py
@@ -36,7 +36,7 @@
from pandas.tests.frame.common import TestData
-class TestDataFrameIndexing(tm.TestCase, TestData):
+class TestDataFrameIndexing(TestData):
def test_getitem(self):
# Slicing
@@ -2912,7 +2912,7 @@ def test_type_error_multiindex(self):
assert_series_equal(result, expected)
-class TestDataFrameIndexingDatetimeWithTZ(tm.TestCase, TestData):
+class TestDataFrameIndexingDatetimeWithTZ(TestData):
def setup_method(self, method):
self.idx = Index(date_range('20130101', periods=3, tz='US/Eastern'),
@@ -2970,7 +2970,7 @@ def test_transpose(self):
assert_frame_equal(result, expected)
-class TestDataFrameIndexingUInt64(tm.TestCase, TestData):
+class TestDataFrameIndexingUInt64(TestData):
def setup_method(self, method):
self.ir = Index(np.arange(3), dtype=np.uint64)
diff --git a/pandas/tests/frame/test_missing.py b/pandas/tests/frame/test_missing.py
index ffba141ddc15d..77f0357685cab 100644
--- a/pandas/tests/frame/test_missing.py
+++ b/pandas/tests/frame/test_missing.py
@@ -34,7 +34,7 @@ def _skip_if_no_pchip():
pytest.skip('scipy.interpolate.pchip missing')
-class TestDataFrameMissingData(tm.TestCase, TestData):
+class TestDataFrameMissingData(TestData):
def test_dropEmptyRows(self):
N = len(self.frame.index)
@@ -519,7 +519,7 @@ def test_fill_value_when_combine_const(self):
assert_frame_equal(res, exp)
-class TestDataFrameInterpolate(tm.TestCase, TestData):
+class TestDataFrameInterpolate(TestData):
def test_interp_basic(self):
df = DataFrame({'A': [1, 2, np.nan, 4],
diff --git a/pandas/tests/frame/test_mutate_columns.py b/pandas/tests/frame/test_mutate_columns.py
index ac76970aaa901..4462260a290d9 100644
--- a/pandas/tests/frame/test_mutate_columns.py
+++ b/pandas/tests/frame/test_mutate_columns.py
@@ -17,7 +17,7 @@
# Column add, remove, delete.
-class TestDataFrameMutateColumns(tm.TestCase, TestData):
+class TestDataFrameMutateColumns(TestData):
def test_assign(self):
df = DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]})
diff --git a/pandas/tests/frame/test_nonunique_indexes.py b/pandas/tests/frame/test_nonunique_indexes.py
index 4bc0176b570e3..4f77ba0ae1f5a 100644
--- a/pandas/tests/frame/test_nonunique_indexes.py
+++ b/pandas/tests/frame/test_nonunique_indexes.py
@@ -16,7 +16,7 @@
from pandas.tests.frame.common import TestData
-class TestDataFrameNonuniqueIndexes(tm.TestCase, TestData):
+class TestDataFrameNonuniqueIndexes(TestData):
def test_column_dups_operations(self):
diff --git a/pandas/tests/frame/test_operators.py b/pandas/tests/frame/test_operators.py
index 9083b7952909e..8ec6c6e6263d8 100644
--- a/pandas/tests/frame/test_operators.py
+++ b/pandas/tests/frame/test_operators.py
@@ -28,7 +28,7 @@
_check_mixed_int)
-class TestDataFrameOperators(tm.TestCase, TestData):
+class TestDataFrameOperators(TestData):
def test_operators(self):
garbage = random.random(4)
diff --git a/pandas/tests/frame/test_period.py b/pandas/tests/frame/test_period.py
index 49de3b8e8cd9b..482210966fe6b 100644
--- a/pandas/tests/frame/test_period.py
+++ b/pandas/tests/frame/test_period.py
@@ -12,7 +12,7 @@ def _permute(obj):
return obj.take(np.random.permutation(len(obj)))
-class TestPeriodIndex(tm.TestCase):
+class TestPeriodIndex(object):
def setup_method(self, method):
pass
diff --git a/pandas/tests/frame/test_quantile.py b/pandas/tests/frame/test_quantile.py
index 33f72cde1b9a3..2482e493dbefd 100644
--- a/pandas/tests/frame/test_quantile.py
+++ b/pandas/tests/frame/test_quantile.py
@@ -17,7 +17,7 @@
from pandas.tests.frame.common import TestData
-class TestDataFrameQuantile(tm.TestCase, TestData):
+class TestDataFrameQuantile(TestData):
def test_quantile(self):
from numpy import percentile
diff --git a/pandas/tests/frame/test_query_eval.py b/pandas/tests/frame/test_query_eval.py
index 6a06e3f4872ce..f0f1a2df27e93 100644
--- a/pandas/tests/frame/test_query_eval.py
+++ b/pandas/tests/frame/test_query_eval.py
@@ -48,7 +48,7 @@ def skip_if_no_ne(engine='numexpr'):
"installed")
-class TestCompat(tm.TestCase):
+class TestCompat(object):
def setup_method(self, method):
self.df = DataFrame({'A': [1, 2, 3]})
@@ -96,7 +96,7 @@ def test_query_numexpr(self):
lambda: df.eval('A+1', engine='numexpr'))
-class TestDataFrameEval(tm.TestCase, TestData):
+class TestDataFrameEval(TestData):
def test_ops(self):
@@ -172,7 +172,7 @@ def test_eval_resolvers_as_list(self):
dict1['a'] + dict2['b'])
-class TestDataFrameQueryWithMultiIndex(tm.TestCase):
+class TestDataFrameQueryWithMultiIndex(object):
def test_query_with_named_multiindex(self, parser, engine):
tm.skip_if_no_ne(engine)
@@ -384,18 +384,16 @@ def test_raise_on_panel4d_with_multiindex(self, parser, engine):
pd.eval('p4d + 1', parser=parser, engine=engine)
-class TestDataFrameQueryNumExprPandas(tm.TestCase):
+class TestDataFrameQueryNumExprPandas(object):
@classmethod
def setup_class(cls):
- super(TestDataFrameQueryNumExprPandas, cls).setup_class()
cls.engine = 'numexpr'
cls.parser = 'pandas'
tm.skip_if_no_ne(cls.engine)
@classmethod
def teardown_class(cls):
- super(TestDataFrameQueryNumExprPandas, cls).teardown_class()
del cls.engine, cls.parser
def test_date_query_with_attribute_access(self):
@@ -858,7 +856,7 @@ def test_query_builtin(self):
assert_frame_equal(expected, result)
-class TestDataFrameQueryStrings(tm.TestCase):
+class TestDataFrameQueryStrings(object):
def test_str_query_method(self, parser, engine):
tm.skip_if_no_ne(engine)
@@ -1039,11 +1037,10 @@ def test_query_string_scalar_variable(self, parser, engine):
assert_frame_equal(e, r)
-class TestDataFrameEvalNumExprPandas(tm.TestCase):
+class TestDataFrameEvalNumExprPandas(object):
@classmethod
def setup_class(cls):
- super(TestDataFrameEvalNumExprPandas, cls).setup_class()
cls.engine = 'numexpr'
cls.parser = 'pandas'
tm.skip_if_no_ne()
@@ -1099,5 +1096,4 @@ class TestDataFrameEvalPythonPython(TestDataFrameEvalNumExprPython):
@classmethod
def setup_class(cls):
- super(TestDataFrameEvalPythonPython, cls).teardown_class()
cls.engine = cls.parser = 'python'
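The hunks above also delete the `super(...).setup_class()` / `super(...).teardown_class()` chains: once the base class is plain `object`, there is no inherited `setup_class` to chain to, and the call would fail with `AttributeError`. Chaining remains valid only between the test classes themselves. A hedged sketch of that surviving pattern, using hypothetical class names that mirror (but are not) the query/eval classes here:

```python
# With object as the root, only subclass-to-parent chaining makes sense:
# the parent defines setup_class itself, so super() has something to call.

class TestQueryBase(object):
    @classmethod
    def setup_class(cls):
        # class-level fixture: pytest runs this once before the class's tests
        cls.engine = 'numexpr'
        cls.parser = 'pandas'

    @classmethod
    def teardown_class(cls):
        del cls.engine, cls.parser


class TestQueryPython(TestQueryBase):
    @classmethod
    def setup_class(cls):
        # chaining to the parent (not to object) is still fine
        super(TestQueryPython, cls).setup_class()
        cls.engine = cls.parser = 'python'
```

Note the hunk at `@@ -1099,5 +1096,4 @@` removes a line that called `teardown_class()` from inside `setup_class` — a pre-existing bug that this cleanup deletes rather than fixes in place.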
diff --git a/pandas/tests/frame/test_rank.py b/pandas/tests/frame/test_rank.py
index b115218d76958..acf887d047c9e 100644
--- a/pandas/tests/frame/test_rank.py
+++ b/pandas/tests/frame/test_rank.py
@@ -12,7 +12,7 @@
from pandas.tests.frame.common import TestData
-class TestRank(tm.TestCase, TestData):
+class TestRank(TestData):
s = Series([1, 3, 4, 2, nan, 2, 1, 5, nan, 3])
df = DataFrame({'A': s, 'B': s})
diff --git a/pandas/tests/frame/test_replace.py b/pandas/tests/frame/test_replace.py
index 3f160012cb446..fbc4accd0e41e 100644
--- a/pandas/tests/frame/test_replace.py
+++ b/pandas/tests/frame/test_replace.py
@@ -23,7 +23,7 @@
from pandas.tests.frame.common import TestData
-class TestDataFrameReplace(tm.TestCase, TestData):
+class TestDataFrameReplace(TestData):
def test_replace_inplace(self):
self.tsframe['A'][:5] = nan
diff --git a/pandas/tests/frame/test_repr_info.py b/pandas/tests/frame/test_repr_info.py
index 0300c53e086cd..cc37f8cc3cb02 100644
--- a/pandas/tests/frame/test_repr_info.py
+++ b/pandas/tests/frame/test_repr_info.py
@@ -23,7 +23,7 @@
# structure
-class TestDataFrameReprInfoEtc(tm.TestCase, TestData):
+class TestDataFrameReprInfoEtc(TestData):
def test_repr_empty(self):
# empty
diff --git a/pandas/tests/frame/test_reshape.py b/pandas/tests/frame/test_reshape.py
index 79ee76ee362c3..fdb0119d8ae60 100644
--- a/pandas/tests/frame/test_reshape.py
+++ b/pandas/tests/frame/test_reshape.py
@@ -24,7 +24,7 @@
from pandas.tests.frame.common import TestData
-class TestDataFrameReshape(tm.TestCase, TestData):
+class TestDataFrameReshape(TestData):
def test_pivot(self):
data = {
diff --git a/pandas/tests/frame/test_sorting.py b/pandas/tests/frame/test_sorting.py
index 457ea32ec56f7..98f7f82c0ace7 100644
--- a/pandas/tests/frame/test_sorting.py
+++ b/pandas/tests/frame/test_sorting.py
@@ -18,7 +18,7 @@
from pandas.tests.frame.common import TestData
-class TestDataFrameSorting(tm.TestCase, TestData):
+class TestDataFrameSorting(TestData):
def test_sort(self):
frame = DataFrame(np.arange(16).reshape(4, 4), index=[1, 2, 3, 4],
@@ -315,7 +315,7 @@ def test_sort_nat_values_in_int_column(self):
assert_frame_equal(df_sorted, df_reversed)
-class TestDataFrameSortIndexKinds(tm.TestCase, TestData):
+class TestDataFrameSortIndexKinds(TestData):
def test_sort_index_multicolumn(self):
A = np.arange(5).repeat(20)
diff --git a/pandas/tests/frame/test_subclass.py b/pandas/tests/frame/test_subclass.py
index 40a8ece852623..52c591e4dcbb0 100644
--- a/pandas/tests/frame/test_subclass.py
+++ b/pandas/tests/frame/test_subclass.py
@@ -12,7 +12,7 @@
from pandas.tests.frame.common import TestData
-class TestDataFrameSubclassing(tm.TestCase, TestData):
+class TestDataFrameSubclassing(TestData):
def test_frame_subclassing_and_slicing(self):
# Subclass frame and ensure it returns the right class on slicing it
diff --git a/pandas/tests/frame/test_timeseries.py b/pandas/tests/frame/test_timeseries.py
index f52f4697b1b08..143a7ea8f6fb2 100644
--- a/pandas/tests/frame/test_timeseries.py
+++ b/pandas/tests/frame/test_timeseries.py
@@ -24,7 +24,7 @@
from pandas.tests.frame.common import TestData
-class TestDataFrameTimeSeriesMethods(tm.TestCase, TestData):
+class TestDataFrameTimeSeriesMethods(TestData):
def test_diff(self):
the_diff = self.tsframe.diff(1)
diff --git a/pandas/tests/frame/test_to_csv.py b/pandas/tests/frame/test_to_csv.py
index 3e38f2a71d99d..69bd2b008416f 100644
--- a/pandas/tests/frame/test_to_csv.py
+++ b/pandas/tests/frame/test_to_csv.py
@@ -29,7 +29,7 @@
'int32', 'int64']
-class TestDataFrameToCSV(tm.TestCase, TestData):
+class TestDataFrameToCSV(TestData):
def test_to_csv_from_csv1(self):
diff --git a/pandas/tests/frame/test_validate.py b/pandas/tests/frame/test_validate.py
index 343853b3fcfa0..d6065e6042908 100644
--- a/pandas/tests/frame/test_validate.py
+++ b/pandas/tests/frame/test_validate.py
@@ -1,10 +1,9 @@
from pandas.core.frame import DataFrame
-import pandas.util.testing as tm
import pytest
-class TestDataFrameValidate(tm.TestCase):
+class TestDataFrameValidate(object):
"""Tests for error handling related to data types of method arguments."""
df = DataFrame({'a': [1, 2], 'b': [3, 4]})
diff --git a/pandas/tests/groupby/test_aggregate.py b/pandas/tests/groupby/test_aggregate.py
index 769e4d14d354b..d7b46e6748b99 100644
--- a/pandas/tests/groupby/test_aggregate.py
+++ b/pandas/tests/groupby/test_aggregate.py
@@ -25,7 +25,7 @@
import pandas.util.testing as tm
-class TestGroupByAggregate(tm.TestCase):
+class TestGroupByAggregate(object):
def setup_method(self, method):
self.ts = tm.makeTimeSeries()
diff --git a/pandas/tests/groupby/test_bin_groupby.py b/pandas/tests/groupby/test_bin_groupby.py
index bdac535b3d2e2..f527c732fb76b 100644
--- a/pandas/tests/groupby/test_bin_groupby.py
+++ b/pandas/tests/groupby/test_bin_groupby.py
@@ -46,7 +46,7 @@ def test_series_bin_grouper():
assert_almost_equal(counts, exp_counts)
-class TestBinGroupers(tm.TestCase):
+class TestBinGroupers(object):
def setup_method(self, method):
self.obj = np.random.randn(10, 1)
@@ -117,11 +117,11 @@ def _ohlc(group):
_check('float64')
-class TestMoments(tm.TestCase):
+class TestMoments(object):
pass
-class TestReducer(tm.TestCase):
+class TestReducer(object):
def test_int_index(self):
from pandas.core.series import Series
diff --git a/pandas/tests/groupby/test_categorical.py b/pandas/tests/groupby/test_categorical.py
index 9d2134927389d..fdc03acd3e931 100644
--- a/pandas/tests/groupby/test_categorical.py
+++ b/pandas/tests/groupby/test_categorical.py
@@ -15,7 +15,7 @@
from .common import MixIn
-class TestGroupByCategorical(MixIn, tm.TestCase):
+class TestGroupByCategorical(MixIn):
def test_level_groupby_get_group(self):
# GH15155
diff --git a/pandas/tests/groupby/test_filters.py b/pandas/tests/groupby/test_filters.py
index b05b938fd8205..cac6b46af8f87 100644
--- a/pandas/tests/groupby/test_filters.py
+++ b/pandas/tests/groupby/test_filters.py
@@ -23,7 +23,7 @@
import pandas as pd
-class TestGroupByFilter(tm.TestCase):
+class TestGroupByFilter(object):
def setup_method(self, method):
self.ts = tm.makeTimeSeries()
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index 8d86d40c379bf..88afa51e46b6c 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -28,7 +28,7 @@
from .common import MixIn
-class TestGroupBy(MixIn, tm.TestCase):
+class TestGroupBy(MixIn):
def test_basic(self):
def checkit(dtype):
diff --git a/pandas/tests/groupby/test_nth.py b/pandas/tests/groupby/test_nth.py
index 0b6aeaf155f86..7912b4bf3bdf6 100644
--- a/pandas/tests/groupby/test_nth.py
+++ b/pandas/tests/groupby/test_nth.py
@@ -2,13 +2,12 @@
import pandas as pd
from pandas import DataFrame, MultiIndex, Index, Series, isnull
from pandas.compat import lrange
-from pandas.util import testing as tm
from pandas.util.testing import assert_frame_equal, assert_series_equal
from .common import MixIn
-class TestNth(MixIn, tm.TestCase):
+class TestNth(MixIn):
def test_first_last_nth(self):
# tests for first / last / nth
diff --git a/pandas/tests/groupby/test_timegrouper.py b/pandas/tests/groupby/test_timegrouper.py
index 42caecbdb700e..2196318d1920e 100644
--- a/pandas/tests/groupby/test_timegrouper.py
+++ b/pandas/tests/groupby/test_timegrouper.py
@@ -14,7 +14,7 @@
from pandas.util.testing import assert_frame_equal, assert_series_equal
-class TestGroupBy(tm.TestCase):
+class TestGroupBy(object):
def test_groupby_with_timegrouper(self):
# GH 4161
diff --git a/pandas/tests/groupby/test_transform.py b/pandas/tests/groupby/test_transform.py
index 0b81235ef2117..40434ff510421 100644
--- a/pandas/tests/groupby/test_transform.py
+++ b/pandas/tests/groupby/test_transform.py
@@ -17,7 +17,7 @@
from pandas.core.config import option_context
-class TestGroupBy(MixIn, tm.TestCase):
+class TestGroupBy(MixIn):
def test_transform(self):
data = Series(np.arange(9) // 3, index=np.arange(9))
diff --git a/pandas/tests/indexes/datetimes/test_astype.py b/pandas/tests/indexes/datetimes/test_astype.py
index 185787d75f6e1..0f7acf1febae8 100644
--- a/pandas/tests/indexes/datetimes/test_astype.py
+++ b/pandas/tests/indexes/datetimes/test_astype.py
@@ -9,7 +9,7 @@
Int64Index, Period)
-class TestDatetimeIndex(tm.TestCase):
+class TestDatetimeIndex(object):
def test_astype(self):
# GH 13149, GH 13209
@@ -185,7 +185,7 @@ def _check_rng(rng):
_check_rng(rng_utc)
-class TestToPeriod(tm.TestCase):
+class TestToPeriod(object):
def setup_method(self, method):
data = [Timestamp('2007-01-01 10:11:12.123456Z'),
diff --git a/pandas/tests/indexes/datetimes/test_construction.py b/pandas/tests/indexes/datetimes/test_construction.py
index 9af4136afd025..fcfc56ea823da 100644
--- a/pandas/tests/indexes/datetimes/test_construction.py
+++ b/pandas/tests/indexes/datetimes/test_construction.py
@@ -12,7 +12,7 @@
to_datetime)
-class TestDatetimeIndex(tm.TestCase):
+class TestDatetimeIndex(object):
def test_construction_caching(self):
@@ -446,7 +446,7 @@ def test_000constructor_resolution(self):
assert idx.nanosecond[0] == t1.nanosecond
-class TestTimeSeries(tm.TestCase):
+class TestTimeSeries(object):
def test_dti_constructor_preserve_dti_freq(self):
rng = date_range('1/1/2000', '1/2/2000', freq='5min')
diff --git a/pandas/tests/indexes/datetimes/test_date_range.py b/pandas/tests/indexes/datetimes/test_date_range.py
index 67d6b0f314ecb..0586ea9c4db2b 100644
--- a/pandas/tests/indexes/datetimes/test_date_range.py
+++ b/pandas/tests/indexes/datetimes/test_date_range.py
@@ -26,7 +26,7 @@ def eq_gen_range(kwargs, expected):
assert (np.array_equal(list(rng), expected))
-class TestDateRanges(TestData, tm.TestCase):
+class TestDateRanges(TestData):
def test_date_range_gen_error(self):
rng = date_range('1/1/2000 00:00', '1/1/2000 00:18', freq='5min')
@@ -147,7 +147,7 @@ def test_catch_infinite_loop(self):
datetime(2011, 11, 12), freq=offset)
-class TestGenRangeGeneration(tm.TestCase):
+class TestGenRangeGeneration(object):
def test_generate(self):
rng1 = list(generate_range(START, END, offset=BDay()))
@@ -196,7 +196,7 @@ def test_precision_finer_than_offset(self):
tm.assert_index_equal(result2, expected2)
-class TestBusinessDateRange(tm.TestCase):
+class TestBusinessDateRange(object):
def setup_method(self, method):
self.rng = bdate_range(START, END)
@@ -482,7 +482,7 @@ def test_freq_divides_end_in_nanos(self):
tm.assert_index_equal(result_2, expected_2)
-class TestCustomDateRange(tm.TestCase):
+class TestCustomDateRange(object):
def setup_method(self, method):
self.rng = cdate_range(START, END)
diff --git a/pandas/tests/indexes/datetimes/test_datetime.py b/pandas/tests/indexes/datetimes/test_datetime.py
index 7b22d1615fbeb..96c8da546ff9d 100644
--- a/pandas/tests/indexes/datetimes/test_datetime.py
+++ b/pandas/tests/indexes/datetimes/test_datetime.py
@@ -15,7 +15,7 @@
randn = np.random.randn
-class TestDatetimeIndex(tm.TestCase):
+class TestDatetimeIndex(object):
def test_get_loc(self):
idx = pd.date_range('2000-01-01', periods=3)
diff --git a/pandas/tests/indexes/datetimes/test_datetimelike.py b/pandas/tests/indexes/datetimes/test_datetimelike.py
index 2e184b1aa4e51..3b970ee382521 100644
--- a/pandas/tests/indexes/datetimes/test_datetimelike.py
+++ b/pandas/tests/indexes/datetimes/test_datetimelike.py
@@ -8,7 +8,7 @@
from ..datetimelike import DatetimeLike
-class TestDatetimeIndex(DatetimeLike, tm.TestCase):
+class TestDatetimeIndex(DatetimeLike):
_holder = DatetimeIndex
def setup_method(self, method):
diff --git a/pandas/tests/indexes/datetimes/test_indexing.py b/pandas/tests/indexes/datetimes/test_indexing.py
index 92134a296b08f..a9ea028c9d0f7 100644
--- a/pandas/tests/indexes/datetimes/test_indexing.py
+++ b/pandas/tests/indexes/datetimes/test_indexing.py
@@ -7,7 +7,7 @@
from pandas import notnull, Index, DatetimeIndex, datetime, date_range
-class TestDatetimeIndex(tm.TestCase):
+class TestDatetimeIndex(object):
def test_where_other(self):
diff --git a/pandas/tests/indexes/datetimes/test_misc.py b/pandas/tests/indexes/datetimes/test_misc.py
index d9a61776a0d1c..951aa2c520d0f 100644
--- a/pandas/tests/indexes/datetimes/test_misc.py
+++ b/pandas/tests/indexes/datetimes/test_misc.py
@@ -7,7 +7,7 @@
Float64Index, date_range, Timestamp)
-class TestDateTimeIndexToJulianDate(tm.TestCase):
+class TestDateTimeIndexToJulianDate(object):
def test_1700(self):
r1 = Float64Index([2345897.5, 2345898.5, 2345899.5, 2345900.5,
@@ -53,7 +53,7 @@ def test_second(self):
tm.assert_index_equal(r1, r2)
-class TestTimeSeries(tm.TestCase):
+class TestTimeSeries(object):
def test_pass_datetimeindex_to_index(self):
# Bugs in #1396
@@ -170,7 +170,7 @@ def test_normalize(self):
assert not rng.is_normalized
-class TestDatetime64(tm.TestCase):
+class TestDatetime64(object):
def test_datetimeindex_accessors(self):
diff --git a/pandas/tests/indexes/datetimes/test_missing.py b/pandas/tests/indexes/datetimes/test_missing.py
index 0c356e3251e2f..adc0b7b3d81e8 100644
--- a/pandas/tests/indexes/datetimes/test_missing.py
+++ b/pandas/tests/indexes/datetimes/test_missing.py
@@ -2,7 +2,7 @@
import pandas.util.testing as tm
-class TestDatetimeIndex(tm.TestCase):
+class TestDatetimeIndex(object):
def test_fillna_datetime64(self):
# GH 11343
diff --git a/pandas/tests/indexes/datetimes/test_ops.py b/pandas/tests/indexes/datetimes/test_ops.py
index 75c6626b47401..80e93a1f76a66 100644
--- a/pandas/tests/indexes/datetimes/test_ops.py
+++ b/pandas/tests/indexes/datetimes/test_ops.py
@@ -931,7 +931,7 @@ def test_equals(self):
assert not idx.equals(pd.Series(idx3))
-class TestDateTimeIndexToJulianDate(tm.TestCase):
+class TestDateTimeIndexToJulianDate(object):
def test_1700(self):
r1 = Float64Index([2345897.5, 2345898.5, 2345899.5, 2345900.5,
@@ -1107,7 +1107,7 @@ def test_shift_months(years, months):
tm.assert_index_equal(actual, expected)
-class TestBusinessDatetimeIndex(tm.TestCase):
+class TestBusinessDatetimeIndex(object):
def setup_method(self, method):
self.rng = bdate_range(START, END)
@@ -1207,7 +1207,7 @@ def test_identical(self):
assert not t1.identical(t2v)
-class TestCustomDatetimeIndex(tm.TestCase):
+class TestCustomDatetimeIndex(object):
def setup_method(self, method):
self.rng = cdate_range(START, END)
diff --git a/pandas/tests/indexes/datetimes/test_partial_slicing.py b/pandas/tests/indexes/datetimes/test_partial_slicing.py
index b3661ae0e7a97..e7d03aa193cbd 100644
--- a/pandas/tests/indexes/datetimes/test_partial_slicing.py
+++ b/pandas/tests/indexes/datetimes/test_partial_slicing.py
@@ -11,7 +11,7 @@
from pandas.util import testing as tm
-class TestSlicing(tm.TestCase):
+class TestSlicing(object):
def test_slice_year(self):
dti = DatetimeIndex(freq='B', start=datetime(2005, 1, 1), periods=500)
diff --git a/pandas/tests/indexes/datetimes/test_setops.py b/pandas/tests/indexes/datetimes/test_setops.py
index fb4b6e9d226f8..f3af7dd30c27f 100644
--- a/pandas/tests/indexes/datetimes/test_setops.py
+++ b/pandas/tests/indexes/datetimes/test_setops.py
@@ -12,7 +12,7 @@
START, END = datetime(2009, 1, 1), datetime(2010, 1, 1)
-class TestDatetimeIndex(tm.TestCase):
+class TestDatetimeIndex(object):
def test_union(self):
i1 = Int64Index(np.arange(0, 20, 2))
@@ -199,7 +199,7 @@ def test_join_nonunique(self):
assert rs.is_monotonic
-class TestBusinessDatetimeIndex(tm.TestCase):
+class TestBusinessDatetimeIndex(object):
def setup_method(self, method):
self.rng = bdate_range(START, END)
@@ -343,7 +343,7 @@ def test_month_range_union_tz_dateutil(self):
early_dr.union(late_dr)
-class TestCustomDatetimeIndex(tm.TestCase):
+class TestCustomDatetimeIndex(object):
def setup_method(self, method):
self.rng = cdate_range(START, END)
diff --git a/pandas/tests/indexes/datetimes/test_tools.py b/pandas/tests/indexes/datetimes/test_tools.py
index 3c7f2e424f779..648df01be5289 100644
--- a/pandas/tests/indexes/datetimes/test_tools.py
+++ b/pandas/tests/indexes/datetimes/test_tools.py
@@ -22,7 +22,7 @@
compat)
-class TimeConversionFormats(tm.TestCase):
+class TimeConversionFormats(object):
def test_to_datetime_format(self):
values = ['1/1/2000', '1/2/2000', '1/3/2000']
@@ -170,7 +170,7 @@ def test_to_datetime_format_weeks(self):
assert to_datetime(s, format=format) == dt
-class TestToDatetime(tm.TestCase):
+class TestToDatetime(object):
def test_to_datetime_dt64s(self):
in_bound_dts = [
@@ -335,7 +335,7 @@ def test_datetime_invalid_datatype(self):
pd.to_datetime(pd.to_datetime)
-class ToDatetimeUnit(tm.TestCase):
+class ToDatetimeUnit(object):
def test_unit(self):
# GH 11758
@@ -595,7 +595,7 @@ def test_dataframe_dtypes(self):
to_datetime(df)
-class ToDatetimeMisc(tm.TestCase):
+class ToDatetimeMisc(object):
def test_index_to_datetime(self):
idx = Index(['1/1/2000', '1/2/2000', '1/3/2000'])
@@ -829,7 +829,7 @@ def test_dayfirst(self):
tm.assert_index_equal(expected, idx6)
-class TestGuessDatetimeFormat(tm.TestCase):
+class TestGuessDatetimeFormat(object):
def test_guess_datetime_format_with_parseable_formats(self):
tm._skip_if_not_us_locale()
@@ -914,7 +914,7 @@ def test_guess_datetime_format_for_array(self):
assert format_for_string_of_nans is None
-class TestToDatetimeInferFormat(tm.TestCase):
+class TestToDatetimeInferFormat(object):
def test_to_datetime_infer_datetime_format_consistent_format(self):
s = pd.Series(pd.date_range('20000101', periods=50, freq='H'))
@@ -974,7 +974,7 @@ def test_to_datetime_iso8601_noleading_0s(self):
tm.assert_series_equal(pd.to_datetime(s, format='%Y-%m-%d'), expected)
-class TestDaysInMonth(tm.TestCase):
+class TestDaysInMonth(object):
# tests for issue #10154
def test_day_not_in_month_coerce(self):
@@ -1006,7 +1006,7 @@ def test_day_not_in_month_ignore(self):
format="%Y-%m-%d") == '2015-04-31'
-class TestDatetimeParsingWrappers(tm.TestCase):
+class TestDatetimeParsingWrappers(object):
def test_does_not_convert_mixed_integer(self):
bad_date_strings = ('-50000', '999', '123.1234', 'm', 'T')
@@ -1362,7 +1362,7 @@ def test_parsers_iso8601(self):
raise Exception(date_str)
-class TestArrayToDatetime(tm.TestCase):
+class TestArrayToDatetime(object):
def test_try_parse_dates(self):
from dateutil.parser import parse
diff --git a/pandas/tests/indexes/period/test_asfreq.py b/pandas/tests/indexes/period/test_asfreq.py
index b97be3f61a2dd..c8724b2a3bc91 100644
--- a/pandas/tests/indexes/period/test_asfreq.py
+++ b/pandas/tests/indexes/period/test_asfreq.py
@@ -6,7 +6,7 @@
from pandas import PeriodIndex, Series, DataFrame
-class TestPeriodIndex(tm.TestCase):
+class TestPeriodIndex(object):
def setup_method(self, method):
pass
diff --git a/pandas/tests/indexes/period/test_construction.py b/pandas/tests/indexes/period/test_construction.py
index b0db27b5f2cea..6a188c0987f91 100644
--- a/pandas/tests/indexes/period/test_construction.py
+++ b/pandas/tests/indexes/period/test_construction.py
@@ -9,7 +9,7 @@
Series, Index)
-class TestPeriodIndex(tm.TestCase):
+class TestPeriodIndex(object):
def setup_method(self, method):
pass
@@ -473,7 +473,7 @@ def test_map_with_string_constructor(self):
tm.assert_index_equal(res, expected)
-class TestSeriesPeriod(tm.TestCase):
+class TestSeriesPeriod(object):
def setup_method(self, method):
self.series = Series(period_range('2000-01-01', periods=10, freq='D'))
diff --git a/pandas/tests/indexes/period/test_indexing.py b/pandas/tests/indexes/period/test_indexing.py
index 36db56b751633..d4dac1cf88fff 100644
--- a/pandas/tests/indexes/period/test_indexing.py
+++ b/pandas/tests/indexes/period/test_indexing.py
@@ -11,7 +11,7 @@
period_range, Period, _np_version_under1p9)
-class TestGetItem(tm.TestCase):
+class TestGetItem(object):
def setup_method(self, method):
pass
@@ -200,7 +200,7 @@ def test_getitem_day(self):
s[v]
-class TestIndexing(tm.TestCase):
+class TestIndexing(object):
def test_get_loc_msg(self):
idx = period_range('2000-1-1', freq='A', periods=10)
diff --git a/pandas/tests/indexes/period/test_ops.py b/pandas/tests/indexes/period/test_ops.py
index 583848f75c6b4..7acc335c31be4 100644
--- a/pandas/tests/indexes/period/test_ops.py
+++ b/pandas/tests/indexes/period/test_ops.py
@@ -851,7 +851,7 @@ def test_equals(self):
assert not idx.equals(pd.Series(idx3))
-class TestPeriodIndexSeriesMethods(tm.TestCase):
+class TestPeriodIndexSeriesMethods(object):
""" Test PeriodIndex and Period Series Ops consistency """
def _check(self, values, func, expected):
@@ -1135,7 +1135,7 @@ def test_pi_comp_period_nat(self):
self._check(idx, f, exp)
-class TestSeriesPeriod(tm.TestCase):
+class TestSeriesPeriod(object):
def setup_method(self, method):
self.series = Series(period_range('2000-01-01', periods=10, freq='D'))
@@ -1175,7 +1175,7 @@ def test_ops_series_period(self):
tm.assert_series_equal(s - s2, -exp)
-class TestFramePeriod(tm.TestCase):
+class TestFramePeriod(object):
def test_ops_frame_period(self):
# GH 13043
@@ -1206,7 +1206,7 @@ def test_ops_frame_period(self):
tm.assert_frame_equal(df - df2, -exp)
-class TestPeriodIndexComparisons(tm.TestCase):
+class TestPeriodIndexComparisons(object):
def test_pi_pi_comp(self):
diff --git a/pandas/tests/indexes/period/test_partial_slicing.py b/pandas/tests/indexes/period/test_partial_slicing.py
index 88a9ff5752322..6d142722c315a 100644
--- a/pandas/tests/indexes/period/test_partial_slicing.py
+++ b/pandas/tests/indexes/period/test_partial_slicing.py
@@ -8,7 +8,7 @@
DataFrame, _np_version_under1p12, Period)
-class TestPeriodIndex(tm.TestCase):
+class TestPeriodIndex(object):
def setup_method(self, method):
pass
diff --git a/pandas/tests/indexes/period/test_period.py b/pandas/tests/indexes/period/test_period.py
index 11ec3bc215cf8..6f73e7c15e4d9 100644
--- a/pandas/tests/indexes/period/test_period.py
+++ b/pandas/tests/indexes/period/test_period.py
@@ -13,7 +13,7 @@
from ..datetimelike import DatetimeLike
-class TestPeriodIndex(DatetimeLike, tm.TestCase):
+class TestPeriodIndex(DatetimeLike):
_holder = PeriodIndex
_multiprocess_can_split_ = True
diff --git a/pandas/tests/indexes/period/test_setops.py b/pandas/tests/indexes/period/test_setops.py
index 7041724faeb89..1ac05f9fa94b7 100644
--- a/pandas/tests/indexes/period/test_setops.py
+++ b/pandas/tests/indexes/period/test_setops.py
@@ -12,7 +12,7 @@ def _permute(obj):
return obj.take(np.random.permutation(len(obj)))
-class TestPeriodIndex(tm.TestCase):
+class TestPeriodIndex(object):
def setup_method(self, method):
pass
diff --git a/pandas/tests/indexes/period/test_tools.py b/pandas/tests/indexes/period/test_tools.py
index bd80c2c4f341e..074678164e6f9 100644
--- a/pandas/tests/indexes/period/test_tools.py
+++ b/pandas/tests/indexes/period/test_tools.py
@@ -11,7 +11,7 @@
date_range, to_datetime, period_range)
-class TestPeriodRepresentation(tm.TestCase):
+class TestPeriodRepresentation(object):
"""
Wish to match NumPy units
"""
@@ -73,7 +73,7 @@ def test_negone_ordinals(self):
repr(period)
-class TestTslib(tm.TestCase):
+class TestTslib(object):
def test_intraday_conversion_factors(self):
assert period_asfreq(1, get_freq('D'), get_freq('H'), False) == 24
assert period_asfreq(1, get_freq('D'), get_freq('T'), False) == 1440
@@ -150,7 +150,7 @@ def test_period_ordinal_business_day(self):
0, 0, 0, 0, get_freq('B')) == 11418
-class TestPeriodIndex(tm.TestCase):
+class TestPeriodIndex(object):
def setup_method(self, method):
pass
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index ce3f4b5d68d89..6a2087b37631e 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -29,7 +29,7 @@
from pandas._libs.lib import Timestamp
-class TestIndex(Base, tm.TestCase):
+class TestIndex(Base):
_holder = Index
def setup_method(self, method):
@@ -1801,7 +1801,7 @@ def test_string_index_repr(self):
assert coerce(idx) == expected
-class TestMixedIntIndex(Base, tm.TestCase):
+class TestMixedIntIndex(Base):
# Mostly the tests from common.py for which the results differ
# in py2 and py3 because ints and strings are uncomparable in py3
# (GH 13514)
diff --git a/pandas/tests/indexes/test_category.py b/pandas/tests/indexes/test_category.py
index 94349b4860698..4e4f9b29f9a4c 100644
--- a/pandas/tests/indexes/test_category.py
+++ b/pandas/tests/indexes/test_category.py
@@ -19,7 +19,7 @@
unicode = lambda x: x
-class TestCategoricalIndex(Base, tm.TestCase):
+class TestCategoricalIndex(Base):
_holder = CategoricalIndex
def setup_method(self, method):
diff --git a/pandas/tests/indexes/test_frozen.py b/pandas/tests/indexes/test_frozen.py
index ae4a130c24310..ca9841112b1d5 100644
--- a/pandas/tests/indexes/test_frozen.py
+++ b/pandas/tests/indexes/test_frozen.py
@@ -5,7 +5,7 @@
from pandas.compat import u
-class TestFrozenList(CheckImmutable, CheckStringMixin, tm.TestCase):
+class TestFrozenList(CheckImmutable, CheckStringMixin):
mutable_methods = ('extend', 'pop', 'remove', 'insert')
unicode_container = FrozenList([u("\u05d0"), u("\u05d1"), "c"])
@@ -31,7 +31,7 @@ def test_inplace(self):
self.check_result(r, self.lst)
-class TestFrozenNDArray(CheckImmutable, CheckStringMixin, tm.TestCase):
+class TestFrozenNDArray(CheckImmutable, CheckStringMixin):
mutable_methods = ('put', 'itemset', 'fill')
unicode_container = FrozenNDArray([u("\u05d0"), u("\u05d1"), "c"])
diff --git a/pandas/tests/indexes/test_interval.py b/pandas/tests/indexes/test_interval.py
index 90e5b1b6c9788..33745017fe3d6 100644
--- a/pandas/tests/indexes/test_interval.py
+++ b/pandas/tests/indexes/test_interval.py
@@ -12,7 +12,7 @@
import pandas as pd
-class TestIntervalIndex(Base, tm.TestCase):
+class TestIntervalIndex(Base):
_holder = IntervalIndex
def setup_method(self, method):
@@ -682,7 +682,7 @@ def f():
pytest.raises(ValueError, f)
-class TestIntervalRange(tm.TestCase):
+class TestIntervalRange(object):
def test_construction(self):
result = interval_range(0, 5, name='foo', closed='both')
@@ -720,7 +720,7 @@ def f():
pytest.raises(ValueError, f)
-class TestIntervalTree(tm.TestCase):
+class TestIntervalTree(object):
def setup_method(self, method):
gentree = lambda dtype: IntervalTree(np.arange(5, dtype=dtype),
np.arange(5, dtype=dtype) + 2)
diff --git a/pandas/tests/indexes/test_multi.py b/pandas/tests/indexes/test_multi.py
index d2024340c522e..402dba0ba08b8 100644
--- a/pandas/tests/indexes/test_multi.py
+++ b/pandas/tests/indexes/test_multi.py
@@ -27,7 +27,7 @@
from .common import Base
-class TestMultiIndex(Base, tm.TestCase):
+class TestMultiIndex(Base):
_holder = MultiIndex
_compat_props = ['shape', 'ndim', 'size', 'itemsize']
diff --git a/pandas/tests/indexes/test_numeric.py b/pandas/tests/indexes/test_numeric.py
index e82b1c5e74543..3d06f1672ae32 100644
--- a/pandas/tests/indexes/test_numeric.py
+++ b/pandas/tests/indexes/test_numeric.py
@@ -176,7 +176,7 @@ def test_modulo(self):
tm.assert_index_equal(index % 2, expected)
-class TestFloat64Index(Numeric, tm.TestCase):
+class TestFloat64Index(Numeric):
_holder = Float64Index
def setup_method(self, method):
@@ -621,7 +621,7 @@ def test_ufunc_coercions(self):
tm.assert_index_equal(result, exp)
-class TestInt64Index(NumericInt, tm.TestCase):
+class TestInt64Index(NumericInt):
_dtype = 'int64'
_holder = Int64Index
@@ -915,7 +915,7 @@ def test_join_outer(self):
tm.assert_numpy_array_equal(ridx, eridx)
-class TestUInt64Index(NumericInt, tm.TestCase):
+class TestUInt64Index(NumericInt):
_dtype = 'uint64'
_holder = UInt64Index
diff --git a/pandas/tests/indexes/test_range.py b/pandas/tests/indexes/test_range.py
index cc3a76aa7cac1..18539989084e9 100644
--- a/pandas/tests/indexes/test_range.py
+++ b/pandas/tests/indexes/test_range.py
@@ -20,7 +20,7 @@
from .test_numeric import Numeric
-class TestRangeIndex(Numeric, tm.TestCase):
+class TestRangeIndex(Numeric):
_holder = RangeIndex
_compat_props = ['shape', 'ndim', 'size', 'itemsize']
diff --git a/pandas/tests/indexes/timedeltas/test_astype.py b/pandas/tests/indexes/timedeltas/test_astype.py
index b9720f4a300d1..586b96f980f8f 100644
--- a/pandas/tests/indexes/timedeltas/test_astype.py
+++ b/pandas/tests/indexes/timedeltas/test_astype.py
@@ -10,7 +10,7 @@
from ..datetimelike import DatetimeLike
-class TestTimedeltaIndex(DatetimeLike, tm.TestCase):
+class TestTimedeltaIndex(DatetimeLike):
_holder = TimedeltaIndex
_multiprocess_can_split_ = True
diff --git a/pandas/tests/indexes/timedeltas/test_construction.py b/pandas/tests/indexes/timedeltas/test_construction.py
index bdaa62c5ce221..dd25e2cca2e55 100644
--- a/pandas/tests/indexes/timedeltas/test_construction.py
+++ b/pandas/tests/indexes/timedeltas/test_construction.py
@@ -8,7 +8,7 @@
from pandas import TimedeltaIndex, timedelta_range, to_timedelta
-class TestTimedeltaIndex(tm.TestCase):
+class TestTimedeltaIndex(object):
_multiprocess_can_split_ = True
def test_construction_base_constructor(self):
diff --git a/pandas/tests/indexes/timedeltas/test_indexing.py b/pandas/tests/indexes/timedeltas/test_indexing.py
index 6ffe3516c4a94..844033cc19eed 100644
--- a/pandas/tests/indexes/timedeltas/test_indexing.py
+++ b/pandas/tests/indexes/timedeltas/test_indexing.py
@@ -6,7 +6,7 @@
from pandas import TimedeltaIndex, timedelta_range, compat, Index, Timedelta
-class TestTimedeltaIndex(tm.TestCase):
+class TestTimedeltaIndex(object):
_multiprocess_can_split_ = True
def test_insert(self):
diff --git a/pandas/tests/indexes/timedeltas/test_ops.py b/pandas/tests/indexes/timedeltas/test_ops.py
index 12d29dc00e273..9a9912d4f0ab1 100644
--- a/pandas/tests/indexes/timedeltas/test_ops.py
+++ b/pandas/tests/indexes/timedeltas/test_ops.py
@@ -861,7 +861,7 @@ def test_equals(self):
assert not idx.equals(pd.Series(idx2))
-class TestTimedeltas(tm.TestCase):
+class TestTimedeltas(object):
_multiprocess_can_split_ = True
def test_ops(self):
@@ -1209,7 +1209,7 @@ def test_compare_timedelta_ndarray(self):
tm.assert_numpy_array_equal(result, expected)
-class TestSlicing(tm.TestCase):
+class TestSlicing(object):
def test_tdi_ops_attributes(self):
rng = timedelta_range('2 days', periods=5, freq='2D', name='x')
diff --git a/pandas/tests/indexes/timedeltas/test_partial_slicing.py b/pandas/tests/indexes/timedeltas/test_partial_slicing.py
index 5e6e1440a7c04..8e5eae2a7a3ef 100644
--- a/pandas/tests/indexes/timedeltas/test_partial_slicing.py
+++ b/pandas/tests/indexes/timedeltas/test_partial_slicing.py
@@ -8,7 +8,7 @@
from pandas.util.testing import assert_series_equal
-class TestSlicing(tm.TestCase):
+class TestSlicing(object):
def test_partial_slice(self):
rng = timedelta_range('1 day 10:11:12', freq='h', periods=500)
diff --git a/pandas/tests/indexes/timedeltas/test_setops.py b/pandas/tests/indexes/timedeltas/test_setops.py
index 8779f6d49cdd5..22546d25273a7 100644
--- a/pandas/tests/indexes/timedeltas/test_setops.py
+++ b/pandas/tests/indexes/timedeltas/test_setops.py
@@ -5,7 +5,7 @@
from pandas import TimedeltaIndex, timedelta_range, Int64Index
-class TestTimedeltaIndex(tm.TestCase):
+class TestTimedeltaIndex(object):
_multiprocess_can_split_ = True
def test_union(self):
diff --git a/pandas/tests/indexes/timedeltas/test_timedelta.py b/pandas/tests/indexes/timedeltas/test_timedelta.py
index 933674c425cd8..79fe0a864f246 100644
--- a/pandas/tests/indexes/timedeltas/test_timedelta.py
+++ b/pandas/tests/indexes/timedeltas/test_timedelta.py
@@ -16,7 +16,7 @@
randn = np.random.randn
-class TestTimedeltaIndex(DatetimeLike, tm.TestCase):
+class TestTimedeltaIndex(DatetimeLike):
_holder = TimedeltaIndex
_multiprocess_can_split_ = True
@@ -563,7 +563,7 @@ def test_freq_conversion(self):
assert_index_equal(result, expected)
-class TestSlicing(tm.TestCase):
+class TestSlicing(object):
def test_timedelta(self):
# this is valid too
@@ -589,7 +589,7 @@ def test_timedelta(self):
tm.assert_index_equal(result2, result3)
-class TestTimeSeries(tm.TestCase):
+class TestTimeSeries(object):
_multiprocess_can_split_ = True
def test_series_box_timedelta(self):
diff --git a/pandas/tests/indexes/timedeltas/test_timedelta_range.py b/pandas/tests/indexes/timedeltas/test_timedelta_range.py
index 55f16c10e9945..4732a0ce110de 100644
--- a/pandas/tests/indexes/timedeltas/test_timedelta_range.py
+++ b/pandas/tests/indexes/timedeltas/test_timedelta_range.py
@@ -7,7 +7,7 @@
from pandas.util.testing import assert_frame_equal
-class TestTimedeltas(tm.TestCase):
+class TestTimedeltas(object):
_multiprocess_can_split_ = True
def test_timedelta_range(self):
diff --git a/pandas/tests/indexes/timedeltas/test_tools.py b/pandas/tests/indexes/timedeltas/test_tools.py
index faee627488dc0..a991b7bbe140a 100644
--- a/pandas/tests/indexes/timedeltas/test_tools.py
+++ b/pandas/tests/indexes/timedeltas/test_tools.py
@@ -11,7 +11,7 @@
from pandas._libs.tslib import iNaT
-class TestTimedeltas(tm.TestCase):
+class TestTimedeltas(object):
_multiprocess_can_split_ = True
def test_to_timedelta(self):
diff --git a/pandas/tests/indexing/test_callable.py b/pandas/tests/indexing/test_callable.py
index 727c87ac90872..95b406517be62 100644
--- a/pandas/tests/indexing/test_callable.py
+++ b/pandas/tests/indexing/test_callable.py
@@ -6,7 +6,7 @@
import pandas.util.testing as tm
-class TestIndexingCallable(tm.TestCase):
+class TestIndexingCallable(object):
def test_frame_loc_ix_callable(self):
# GH 11485
diff --git a/pandas/tests/indexing/test_categorical.py b/pandas/tests/indexing/test_categorical.py
index 6d2723ae0ff01..6874fedaa705f 100644
--- a/pandas/tests/indexing/test_categorical.py
+++ b/pandas/tests/indexing/test_categorical.py
@@ -10,7 +10,7 @@
from pandas.util import testing as tm
-class TestCategoricalIndex(tm.TestCase):
+class TestCategoricalIndex(object):
def setup_method(self, method):
diff --git a/pandas/tests/indexing/test_chaining_and_caching.py b/pandas/tests/indexing/test_chaining_and_caching.py
index c1f5d2941106d..27a889e58e55e 100644
--- a/pandas/tests/indexing/test_chaining_and_caching.py
+++ b/pandas/tests/indexing/test_chaining_and_caching.py
@@ -10,7 +10,7 @@
from pandas.util import testing as tm
-class TestCaching(tm.TestCase):
+class TestCaching(object):
def test_slice_consolidate_invalidate_item_cache(self):
@@ -90,7 +90,7 @@ def test_setitem_cache_updating(self):
tm.assert_series_equal(out['A'], expected['A'])
-class TestChaining(tm.TestCase):
+class TestChaining(object):
def test_setitem_chained_setfault(self):
diff --git a/pandas/tests/indexing/test_coercion.py b/pandas/tests/indexing/test_coercion.py
index 8e81a3bd1df7a..25cc810299678 100644
--- a/pandas/tests/indexing/test_coercion.py
+++ b/pandas/tests/indexing/test_coercion.py
@@ -44,7 +44,7 @@ def test_has_comprehensive_tests(self):
raise AssertionError(msg.format(type(self), method_name))
-class TestSetitemCoercion(CoercionBase, tm.TestCase):
+class TestSetitemCoercion(CoercionBase):
method = 'setitem'
@@ -330,7 +330,7 @@ def test_setitem_index_period(self):
pass
-class TestInsertIndexCoercion(CoercionBase, tm.TestCase):
+class TestInsertIndexCoercion(CoercionBase):
klasses = ['index']
method = 'insert'
@@ -514,7 +514,7 @@ def test_insert_index_period(self):
self._assert_insert_conversion(obj, 'x', exp, np.object)
-class TestWhereCoercion(CoercionBase, tm.TestCase):
+class TestWhereCoercion(CoercionBase):
method = 'where'
@@ -852,7 +852,7 @@ def test_where_index_period(self):
pass
-class TestFillnaSeriesCoercion(CoercionBase, tm.TestCase):
+class TestFillnaSeriesCoercion(CoercionBase):
# not indexing, but place here for consisntency
@@ -1139,7 +1139,7 @@ def test_fillna_index_period(self):
pass
-class TestReplaceSeriesCoercion(CoercionBase, tm.TestCase):
+class TestReplaceSeriesCoercion(CoercionBase):
# not indexing, but place here for consisntency
diff --git a/pandas/tests/indexing/test_datetime.py b/pandas/tests/indexing/test_datetime.py
index 3089bc1dbddea..da8a896cb6f4a 100644
--- a/pandas/tests/indexing/test_datetime.py
+++ b/pandas/tests/indexing/test_datetime.py
@@ -6,7 +6,7 @@
from pandas.util import testing as tm
-class TestDatetimeIndex(tm.TestCase):
+class TestDatetimeIndex(object):
def test_indexing_with_datetime_tz(self):
diff --git a/pandas/tests/indexing/test_floats.py b/pandas/tests/indexing/test_floats.py
index 1701dd9f6ba90..00a2b8166ceed 100644
--- a/pandas/tests/indexing/test_floats.py
+++ b/pandas/tests/indexing/test_floats.py
@@ -9,7 +9,7 @@
import pandas.util.testing as tm
-class TestFloatIndexers(tm.TestCase):
+class TestFloatIndexers(object):
def check(self, result, original, indexer, getitem):
"""
diff --git a/pandas/tests/indexing/test_iloc.py b/pandas/tests/indexing/test_iloc.py
index 3e625fa483f7b..af4b9e1f0cc25 100644
--- a/pandas/tests/indexing/test_iloc.py
+++ b/pandas/tests/indexing/test_iloc.py
@@ -12,7 +12,7 @@
from pandas.tests.indexing.common import Base
-class TestiLoc(Base, tm.TestCase):
+class TestiLoc(Base):
def test_iloc_exceeds_bounds(self):
diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py
index 0759dc2333ad5..9fa677eb624ae 100644
--- a/pandas/tests/indexing/test_indexing.py
+++ b/pandas/tests/indexing/test_indexing.py
@@ -26,7 +26,7 @@
# Indexing test cases
-class TestFancy(Base, tm.TestCase):
+class TestFancy(Base):
""" pure get/set item & fancy indexing """
def test_setitem_ndarray_1d(self):
@@ -599,7 +599,7 @@ def test_index_type_coercion(self):
assert s2.index.is_object()
-class TestMisc(Base, tm.TestCase):
+class TestMisc(Base):
def test_indexer_caching(self):
# GH5727
@@ -800,7 +800,7 @@ def test_maybe_numeric_slice(self):
assert result == expected
-class TestSeriesNoneCoercion(tm.TestCase):
+class TestSeriesNoneCoercion(object):
EXPECTED_RESULTS = [
# For numeric series, we should coerce to NaN.
([1, 2, 3], [np.nan, 2, 3]),
@@ -847,7 +847,7 @@ def test_coercion_with_loc_and_series(self):
tm.assert_series_equal(start_series, expected_series)
-class TestDataframeNoneCoercion(tm.TestCase):
+class TestDataframeNoneCoercion(object):
EXPECTED_SINGLE_ROW_RESULTS = [
# For numeric series, we should coerce to NaN.
([1, 2, 3], [np.nan, 2, 3]),
diff --git a/pandas/tests/indexing/test_indexing_slow.py b/pandas/tests/indexing/test_indexing_slow.py
index 21cdbb17f52ce..08d390a6a213e 100644
--- a/pandas/tests/indexing/test_indexing_slow.py
+++ b/pandas/tests/indexing/test_indexing_slow.py
@@ -8,7 +8,7 @@
import pandas.util.testing as tm
-class TestIndexingSlow(tm.TestCase):
+class TestIndexingSlow(object):
@tm.slow
def test_multiindex_get_loc(self): # GH7724, GH2646
diff --git a/pandas/tests/indexing/test_interval.py b/pandas/tests/indexing/test_interval.py
index b8d8739af1d15..2552fc066cc87 100644
--- a/pandas/tests/indexing/test_interval.py
+++ b/pandas/tests/indexing/test_interval.py
@@ -6,7 +6,7 @@
import pandas.util.testing as tm
-class TestIntervalIndex(tm.TestCase):
+class TestIntervalIndex(object):
def setup_method(self, method):
self.s = Series(np.arange(5), IntervalIndex.from_breaks(np.arange(6)))
diff --git a/pandas/tests/indexing/test_ix.py b/pandas/tests/indexing/test_ix.py
index 8290bc80edac1..dc9a591ee3101 100644
--- a/pandas/tests/indexing/test_ix.py
+++ b/pandas/tests/indexing/test_ix.py
@@ -14,7 +14,7 @@
from pandas.errors import PerformanceWarning
-class TestIX(tm.TestCase):
+class TestIX(object):
def test_ix_deprecation(self):
# GH 15114
diff --git a/pandas/tests/indexing/test_loc.py b/pandas/tests/indexing/test_loc.py
index 410d01431ef5a..fe2318be72eda 100644
--- a/pandas/tests/indexing/test_loc.py
+++ b/pandas/tests/indexing/test_loc.py
@@ -14,7 +14,7 @@
from pandas.tests.indexing.common import Base
-class TestLoc(Base, tm.TestCase):
+class TestLoc(Base):
def test_loc_getitem_dups(self):
# GH 5678
diff --git a/pandas/tests/indexing/test_multiindex.py b/pandas/tests/indexing/test_multiindex.py
index b8c34f9f28d83..483c39ed8694e 100644
--- a/pandas/tests/indexing/test_multiindex.py
+++ b/pandas/tests/indexing/test_multiindex.py
@@ -9,7 +9,7 @@
from pandas.tests.indexing.common import _mklbl
-class TestMultiIndexBasic(tm.TestCase):
+class TestMultiIndexBasic(object):
def test_iloc_getitem_multiindex2(self):
# TODO(wesm): fix this
@@ -698,7 +698,7 @@ def test_multiindex_slice_first_level(self):
tm.assert_frame_equal(result, expected)
-class TestMultiIndexSlicers(tm.TestCase):
+class TestMultiIndexSlicers(object):
def test_per_axis_per_level_getitem(self):
@@ -1188,7 +1188,7 @@ def f():
tm.assert_frame_equal(df, expected)
-class TestMultiIndexPanel(tm.TestCase):
+class TestMultiIndexPanel(object):
def test_iloc_getitem_panel_multiindex(self):
diff --git a/pandas/tests/indexing/test_panel.py b/pandas/tests/indexing/test_panel.py
index b704e15b81502..2d4ffd6a4e783 100644
--- a/pandas/tests/indexing/test_panel.py
+++ b/pandas/tests/indexing/test_panel.py
@@ -6,7 +6,7 @@
from pandas import Panel, date_range, DataFrame
-class TestPanel(tm.TestCase):
+class TestPanel(object):
def test_iloc_getitem_panel(self):
diff --git a/pandas/tests/indexing/test_partial.py b/pandas/tests/indexing/test_partial.py
index 20cec2a3aa7db..93a85e247a787 100644
--- a/pandas/tests/indexing/test_partial.py
+++ b/pandas/tests/indexing/test_partial.py
@@ -14,7 +14,7 @@
from pandas.util import testing as tm
-class TestPartialSetting(tm.TestCase):
+class TestPartialSetting(object):
def test_partial_setting(self):
diff --git a/pandas/tests/indexing/test_scalar.py b/pandas/tests/indexing/test_scalar.py
index fb40c539e16ba..5dd1714b903eb 100644
--- a/pandas/tests/indexing/test_scalar.py
+++ b/pandas/tests/indexing/test_scalar.py
@@ -10,7 +10,7 @@
from pandas.tests.indexing.common import Base
-class TestScalar(Base, tm.TestCase):
+class TestScalar(Base):
def test_at_and_iat_get(self):
def _check(f, func, values=False):
diff --git a/pandas/tests/indexing/test_timedelta.py b/pandas/tests/indexing/test_timedelta.py
index 5f0088382ce57..cf8cc6c2d345d 100644
--- a/pandas/tests/indexing/test_timedelta.py
+++ b/pandas/tests/indexing/test_timedelta.py
@@ -2,7 +2,7 @@
from pandas.util import testing as tm
-class TestTimedeltaIndexing(tm.TestCase):
+class TestTimedeltaIndexing(object):
def test_boolean_indexing(self):
# GH 14946
diff --git a/pandas/tests/io/formats/test_eng_formatting.py b/pandas/tests/io/formats/test_eng_formatting.py
index e064d1200d672..9d5773283176c 100644
--- a/pandas/tests/io/formats/test_eng_formatting.py
+++ b/pandas/tests/io/formats/test_eng_formatting.py
@@ -6,7 +6,7 @@
from pandas.util import testing as tm
-class TestEngFormatter(tm.TestCase):
+class TestEngFormatter(object):
def test_eng_float_formatter(self):
df = DataFrame({'A': [1.41, 141., 14100, 1410000.]})
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index 3cea731cfd440..e99c70952e5b3 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -105,7 +105,7 @@ def has_expanded_repr(df):
return False
-class TestDataFrameFormatting(tm.TestCase):
+class TestDataFrameFormatting(object):
def setup_method(self, method):
self.warn_filters = warnings.filters
@@ -1604,7 +1604,7 @@ def gen_series_formatting():
return test_sers
-class TestSeriesFormatting(tm.TestCase):
+class TestSeriesFormatting(object):
def setup_method(self, method):
self.ts = tm.makeTimeSeries()
@@ -2152,7 +2152,7 @@ def _three_digit_exp():
return '%.4g' % 1.7e8 == '1.7e+008'
-class TestFloatArrayFormatter(tm.TestCase):
+class TestFloatArrayFormatter(object):
def test_misc(self):
obj = fmt.FloatArrayFormatter(np.array([], dtype=np.float64))
@@ -2238,7 +2238,7 @@ def test_too_long(self):
assert str(df) == ' x\n0 1.2346e+04\n1 2.0000e+06'
-class TestRepr_timedelta64(tm.TestCase):
+class TestRepr_timedelta64(object):
def test_none(self):
delta_1d = pd.to_timedelta(1, unit='D')
@@ -2311,7 +2311,7 @@ def test_all(self):
assert drepr(delta_1ns) == "0 days 00:00:00.000000001"
-class TestTimedelta64Formatter(tm.TestCase):
+class TestTimedelta64Formatter(object):
def test_days(self):
x = pd.to_timedelta(list(range(5)) + [pd.NaT], unit='D')
@@ -2357,7 +2357,7 @@ def test_zero(self):
assert result[0].strip() == "'0 days'"
-class TestDatetime64Formatter(tm.TestCase):
+class TestDatetime64Formatter(object):
def test_mixed(self):
x = Series([datetime(2013, 1, 1), datetime(2013, 1, 1, 12), pd.NaT])
@@ -2438,7 +2438,7 @@ def format_func(x):
assert result == ['10:10', '12:12']
-class TestNaTFormatting(tm.TestCase):
+class TestNaTFormatting(object):
def test_repr(self):
assert repr(pd.NaT) == "NaT"
@@ -2447,7 +2447,7 @@ def test_str(self):
assert str(pd.NaT) == "NaT"
-class TestDatetimeIndexFormat(tm.TestCase):
+class TestDatetimeIndexFormat(object):
def test_datetime(self):
formatted = pd.to_datetime([datetime(2003, 1, 1, 12), pd.NaT]).format()
@@ -2474,7 +2474,7 @@ def test_date_explict_date_format(self):
assert formatted[1] == "UT"
-class TestDatetimeIndexUnicode(tm.TestCase):
+class TestDatetimeIndexUnicode(object):
def test_dates(self):
text = str(pd.to_datetime([datetime(2013, 1, 1), datetime(2014, 1, 1)
@@ -2489,7 +2489,7 @@ def test_mixed(self):
assert "'2014-01-01 00:00:00']" in text
-class TestStringRepTimestamp(tm.TestCase):
+class TestStringRepTimestamp(object):
def test_no_tz(self):
dt_date = datetime(2013, 1, 2)
diff --git a/pandas/tests/io/formats/test_printing.py b/pandas/tests/io/formats/test_printing.py
index 05b697ffbb756..aae3ba31648ff 100644
--- a/pandas/tests/io/formats/test_printing.py
+++ b/pandas/tests/io/formats/test_printing.py
@@ -7,7 +7,6 @@
from pandas import compat
import pandas.io.formats.printing as printing
import pandas.io.formats.format as fmt
-import pandas.util.testing as tm
import pandas.core.config as cf
@@ -35,7 +34,7 @@ def test_repr_binary_type():
assert res == b
-class TestFormattBase(tm.TestCase):
+class TestFormattBase(object):
def test_adjoin(self):
data = [['a', 'b', 'c'], ['dd', 'ee', 'ff'], ['ggg', 'hhh', 'iii']]
@@ -123,7 +122,7 @@ def test_ambiguous_width(self):
assert adjoined == expected
-class TestTableSchemaRepr(tm.TestCase):
+class TestTableSchemaRepr(object):
@classmethod
def setup_class(cls):
diff --git a/pandas/tests/io/formats/test_style.py b/pandas/tests/io/formats/test_style.py
index 687e78e64a3e7..ee7356f12f498 100644
--- a/pandas/tests/io/formats/test_style.py
+++ b/pandas/tests/io/formats/test_style.py
@@ -11,7 +11,7 @@
from pandas.io.formats.style import Styler, _get_level_lengths # noqa
-class TestStyler(tm.TestCase):
+class TestStyler(object):
def setup_method(self, method):
np.random.seed(24)
@@ -812,7 +812,7 @@ def test_mi_sparse_column_names(self):
assert head == expected
-class TestStylerMatplotlibDep(tm.TestCase):
+class TestStylerMatplotlibDep(object):
def test_background_gradient(self):
tm._skip_if_no_mpl()
diff --git a/pandas/tests/io/formats/test_to_csv.py b/pandas/tests/io/formats/test_to_csv.py
index 552fb77bb54cc..1073fbcef5aec 100644
--- a/pandas/tests/io/formats/test_to_csv.py
+++ b/pandas/tests/io/formats/test_to_csv.py
@@ -4,7 +4,7 @@
from pandas.util import testing as tm
-class TestToCSV(tm.TestCase):
+class TestToCSV(object):
def test_to_csv_quotechar(self):
df = DataFrame({'col': [1, 2]})
diff --git a/pandas/tests/io/formats/test_to_html.py b/pandas/tests/io/formats/test_to_html.py
index 4a4546dd807f1..cde920b1511d2 100644
--- a/pandas/tests/io/formats/test_to_html.py
+++ b/pandas/tests/io/formats/test_to_html.py
@@ -22,7 +22,7 @@
pass
-class TestToHTML(tm.TestCase):
+class TestToHTML(object):
def test_to_html_with_col_space(self):
def check_with_width(df, col_space):
diff --git a/pandas/tests/io/json/test_json_table_schema.py b/pandas/tests/io/json/test_json_table_schema.py
index 1e667245809ec..e447a74b2b462 100644
--- a/pandas/tests/io/json/test_json_table_schema.py
+++ b/pandas/tests/io/json/test_json_table_schema.py
@@ -9,7 +9,6 @@
from pandas import DataFrame
from pandas.core.dtypes.dtypes import (
PeriodDtype, CategoricalDtype, DatetimeTZDtype)
-import pandas.util.testing as tm
from pandas.io.json.table_schema import (
as_json_table_type,
build_table_schema,
@@ -17,7 +16,7 @@
set_default_names)
-class TestBuildSchema(tm.TestCase):
+class TestBuildSchema(object):
def setup_method(self, method):
self.df = DataFrame(
@@ -85,7 +84,7 @@ def test_multiindex(self):
assert result == expected
-class TestTableSchemaType(tm.TestCase):
+class TestTableSchemaType(object):
def test_as_json_table_type_int_data(self):
int_data = [1, 2, 3]
@@ -169,7 +168,7 @@ def test_as_json_table_type_categorical_dtypes(self):
assert as_json_table_type(CategoricalDtype()) == 'any'
-class TestTableOrient(tm.TestCase):
+class TestTableOrient(object):
def setup_method(self, method):
self.df = DataFrame(
diff --git a/pandas/tests/io/json/test_normalize.py b/pandas/tests/io/json/test_normalize.py
index d24250f534521..49b765b18d623 100644
--- a/pandas/tests/io/json/test_normalize.py
+++ b/pandas/tests/io/json/test_normalize.py
@@ -212,7 +212,7 @@ def test_non_ascii_key(self):
tm.assert_frame_equal(result, expected)
-class TestNestedToRecord(tm.TestCase):
+class TestNestedToRecord(object):
def test_flat_stays_flat(self):
recs = [dict(flat1=1, flat2=2),
diff --git a/pandas/tests/io/json/test_pandas.py b/pandas/tests/io/json/test_pandas.py
index 0cf9000fcffb2..671d4248818e4 100644
--- a/pandas/tests/io/json/test_pandas.py
+++ b/pandas/tests/io/json/test_pandas.py
@@ -35,7 +35,7 @@
_mixed_frame = _frame.copy()
-class TestPandasContainer(tm.TestCase):
+class TestPandasContainer(object):
def setup_method(self, method):
self.dirpath = tm.get_data_path()
diff --git a/pandas/tests/io/json/test_ujson.py b/pandas/tests/io/json/test_ujson.py
index a23ae225c19b0..10f99c4fcd0a8 100644
--- a/pandas/tests/io/json/test_ujson.py
+++ b/pandas/tests/io/json/test_ujson.py
@@ -25,7 +25,7 @@
else partial(json.dumps, encoding="utf-8"))
-class UltraJSONTests(tm.TestCase):
+class UltraJSONTests(object):
@pytest.mark.skipif(compat.is_platform_32bit(),
reason="not compliant on 32-bit, xref #15865")
@@ -946,7 +946,7 @@ def my_obj_handler(obj):
ujson.decode(ujson.encode(l, default_handler=str)))
-class NumpyJSONTests(tm.TestCase):
+class NumpyJSONTests(object):
def testBool(self):
b = np.bool(True)
@@ -1222,7 +1222,7 @@ def testArrayNumpyLabelled(self):
assert (np.array(['a', 'b']) == output[2]).all()
-class PandasJSONTests(tm.TestCase):
+class PandasJSONTests(object):
def testDataFrame(self):
df = DataFrame([[1, 2, 3], [4, 5, 6]], index=[
diff --git a/pandas/tests/io/msgpack/test_limits.py b/pandas/tests/io/msgpack/test_limits.py
index e906d14a2b5a8..07044dbb7e5de 100644
--- a/pandas/tests/io/msgpack/test_limits.py
+++ b/pandas/tests/io/msgpack/test_limits.py
@@ -4,12 +4,10 @@
import pytest
-import pandas.util.testing as tm
-
from pandas.io.msgpack import packb, unpackb, Packer, Unpacker, ExtType
-class TestLimits(tm.TestCase):
+class TestLimits(object):
def test_integer(self):
x = -(2 ** 63)
diff --git a/pandas/tests/io/msgpack/test_unpack.py b/pandas/tests/io/msgpack/test_unpack.py
index 158094d111b54..c056f8d800e11 100644
--- a/pandas/tests/io/msgpack/test_unpack.py
+++ b/pandas/tests/io/msgpack/test_unpack.py
@@ -1,11 +1,10 @@
from io import BytesIO
import sys
from pandas.io.msgpack import Unpacker, packb, OutOfData, ExtType
-import pandas.util.testing as tm
import pytest
-class TestUnpack(tm.TestCase):
+class TestUnpack(object):
def test_unpack_array_header_from_file(self):
f = BytesIO(packb([1, 2, 3, 4]))
diff --git a/pandas/tests/io/parser/test_network.py b/pandas/tests/io/parser/test_network.py
index 26b5c4788d53a..e12945a6a3102 100644
--- a/pandas/tests/io/parser/test_network.py
+++ b/pandas/tests/io/parser/test_network.py
@@ -47,7 +47,7 @@ def check_compressed_urls(salaries_table, compression, extension, mode,
tm.assert_frame_equal(url_table, salaries_table)
-class TestS3(tm.TestCase):
+class TestS3(object):
def setup_method(self, method):
try:
diff --git a/pandas/tests/io/parser/test_parsers.py b/pandas/tests/io/parser/test_parsers.py
index cced8299691df..8d59e3acb3230 100644
--- a/pandas/tests/io/parser/test_parsers.py
+++ b/pandas/tests/io/parser/test_parsers.py
@@ -50,7 +50,7 @@ def setup_method(self, method):
self.csv_shiftjs = os.path.join(self.dirpath, 'sauron.SHIFT_JIS.csv')
-class TestCParserHighMemory(BaseParser, CParserTests, tm.TestCase):
+class TestCParserHighMemory(BaseParser, CParserTests):
engine = 'c'
low_memory = False
float_precision_choices = [None, 'high', 'round_trip']
@@ -68,7 +68,7 @@ def read_table(self, *args, **kwds):
return read_table(*args, **kwds)
-class TestCParserLowMemory(BaseParser, CParserTests, tm.TestCase):
+class TestCParserLowMemory(BaseParser, CParserTests):
engine = 'c'
low_memory = True
float_precision_choices = [None, 'high', 'round_trip']
@@ -86,7 +86,7 @@ def read_table(self, *args, **kwds):
return read_table(*args, **kwds)
-class TestPythonParser(BaseParser, PythonParserTests, tm.TestCase):
+class TestPythonParser(BaseParser, PythonParserTests):
engine = 'python'
float_precision_choices = [None]
diff --git a/pandas/tests/io/parser/test_read_fwf.py b/pandas/tests/io/parser/test_read_fwf.py
index 90231e01d0173..0bfeb5215f370 100644
--- a/pandas/tests/io/parser/test_read_fwf.py
+++ b/pandas/tests/io/parser/test_read_fwf.py
@@ -19,7 +19,7 @@
from pandas.io.parsers import read_csv, read_fwf, EmptyDataError
-class TestFwfParsing(tm.TestCase):
+class TestFwfParsing(object):
def test_fwf(self):
data_expected = """\
diff --git a/pandas/tests/io/parser/test_textreader.py b/pandas/tests/io/parser/test_textreader.py
index f09d8c8e778d5..7cd02a07bbd4c 100644
--- a/pandas/tests/io/parser/test_textreader.py
+++ b/pandas/tests/io/parser/test_textreader.py
@@ -26,7 +26,7 @@
import pandas.io.libparsers as parser
-class TestTextReader(tm.TestCase):
+class TestTextReader(object):
def setup_method(self, method):
self.dirpath = tm.get_data_path()
diff --git a/pandas/tests/io/parser/test_unsupported.py b/pandas/tests/io/parser/test_unsupported.py
index 6c2d883aeb16b..3f62ff44531fb 100644
--- a/pandas/tests/io/parser/test_unsupported.py
+++ b/pandas/tests/io/parser/test_unsupported.py
@@ -17,7 +17,7 @@
from pandas.io.parsers import read_csv, read_table
-class TestUnsupportedFeatures(tm.TestCase):
+class TestUnsupportedFeatures(object):
def test_mangle_dupe_cols_false(self):
# see gh-12935
@@ -102,7 +102,7 @@ def test_python_engine(self):
read_csv(StringIO(data), engine=engine, **kwargs)
-class TestDeprecatedFeatures(tm.TestCase):
+class TestDeprecatedFeatures(object):
def test_deprecated_args(self):
data = '1,2,3'
diff --git a/pandas/tests/io/sas/test_sas.py b/pandas/tests/io/sas/test_sas.py
index 461c0fe1fd848..617df99b99f0b 100644
--- a/pandas/tests/io/sas/test_sas.py
+++ b/pandas/tests/io/sas/test_sas.py
@@ -1,11 +1,10 @@
import pytest
-import pandas.util.testing as tm
from pandas.compat import StringIO
from pandas import read_sas
-class TestSas(tm.TestCase):
+class TestSas(object):
def test_sas_buffer_format(self):
diff --git a/pandas/tests/io/sas/test_sas7bdat.py b/pandas/tests/io/sas/test_sas7bdat.py
index cb28ab6c6c345..a5157744038f4 100644
--- a/pandas/tests/io/sas/test_sas7bdat.py
+++ b/pandas/tests/io/sas/test_sas7bdat.py
@@ -6,7 +6,7 @@
import numpy as np
-class TestSAS7BDAT(tm.TestCase):
+class TestSAS7BDAT(object):
def setup_method(self, method):
self.dirpath = tm.get_data_path()
diff --git a/pandas/tests/io/sas/test_xport.py b/pandas/tests/io/sas/test_xport.py
index 17b286a4915ce..de31c3e36a8d5 100644
--- a/pandas/tests/io/sas/test_xport.py
+++ b/pandas/tests/io/sas/test_xport.py
@@ -16,7 +16,7 @@ def numeric_as_float(data):
data[v] = data[v].astype(np.float64)
-class TestXport(tm.TestCase):
+class TestXport(object):
def setup_method(self, method):
self.dirpath = tm.get_data_path()
diff --git a/pandas/tests/io/test_clipboard.py b/pandas/tests/io/test_clipboard.py
index e9ffb2dca7ae5..406045a69beca 100644
--- a/pandas/tests/io/test_clipboard.py
+++ b/pandas/tests/io/test_clipboard.py
@@ -23,11 +23,10 @@
@pytest.mark.single
@pytest.mark.skipif(not _DEPS_INSTALLED,
reason="clipboard primitives not installed")
-class TestClipboard(tm.TestCase):
+class TestClipboard(object):
@classmethod
def setup_class(cls):
- super(TestClipboard, cls).setup_class()
cls.data = {}
cls.data['string'] = mkdf(5, 3, c_idx_type='s', r_idx_type='i',
c_idx_names=[None], r_idx_names=[None])
@@ -63,7 +62,6 @@ def setup_class(cls):
@classmethod
def teardown_class(cls):
- super(TestClipboard, cls).teardown_class()
del cls.data_types, cls.data
def check_round_trip_frame(self, data_type, excel=None, sep=None,
diff --git a/pandas/tests/io/test_common.py b/pandas/tests/io/test_common.py
index 1837e5381a07e..a1a95e09915f1 100644
--- a/pandas/tests/io/test_common.py
+++ b/pandas/tests/io/test_common.py
@@ -24,7 +24,7 @@
pass
-class TestCommonIOCapabilities(tm.TestCase):
+class TestCommonIOCapabilities(object):
data1 = """index,A,B,C,D
foo,2,3,4,5
bar,7,8,9,10
@@ -90,7 +90,7 @@ def test_iterator(self):
tm.assert_frame_equal(concat(it), expected.iloc[1:])
-class TestMMapWrapper(tm.TestCase):
+class TestMMapWrapper(object):
def setup_method(self, method):
self.mmap_file = os.path.join(tm.get_data_path(),
diff --git a/pandas/tests/io/test_excel.py b/pandas/tests/io/test_excel.py
index 919c521f22f60..c70b5937fea3f 100644
--- a/pandas/tests/io/test_excel.py
+++ b/pandas/tests/io/test_excel.py
@@ -989,19 +989,19 @@ def test_read_excel_squeeze(self):
tm.assert_series_equal(actual, expected)
-class XlsReaderTests(XlrdTests, tm.TestCase):
+class XlsReaderTests(XlrdTests):
ext = '.xls'
engine_name = 'xlrd'
check_skip = staticmethod(_skip_if_no_xlrd)
-class XlsxReaderTests(XlrdTests, tm.TestCase):
+class XlsxReaderTests(XlrdTests):
ext = '.xlsx'
engine_name = 'xlrd'
check_skip = staticmethod(_skip_if_no_xlrd)
-class XlsmReaderTests(XlrdTests, tm.TestCase):
+class XlsmReaderTests(XlrdTests):
ext = '.xlsm'
engine_name = 'xlrd'
check_skip = staticmethod(_skip_if_no_xlrd)
@@ -1887,7 +1887,7 @@ def versioned_raise_on_incompat_version(cls):
@raise_on_incompat_version(1)
-class OpenpyxlTests(ExcelWriterBase, tm.TestCase):
+class OpenpyxlTests(ExcelWriterBase):
ext = '.xlsx'
engine_name = 'openpyxl1'
check_skip = staticmethod(lambda *args, **kwargs: None)
@@ -1923,7 +1923,7 @@ def test_to_excel_styleconverter(self):
def skip_openpyxl_gt21(cls):
- """Skip a TestCase instance if openpyxl >= 2.2"""
+ """Skip test case if openpyxl >= 2.2"""
@classmethod
def setup_class(cls):
@@ -1940,7 +1940,7 @@ def setup_class(cls):
@raise_on_incompat_version(2)
@skip_openpyxl_gt21
-class Openpyxl20Tests(ExcelWriterBase, tm.TestCase):
+class Openpyxl20Tests(ExcelWriterBase):
ext = '.xlsx'
engine_name = 'openpyxl20'
check_skip = staticmethod(lambda *args, **kwargs: None)
@@ -2040,7 +2040,7 @@ def test_write_cells_merge_styled(self):
def skip_openpyxl_lt22(cls):
- """Skip a TestCase instance if openpyxl < 2.2"""
+ """Skip test case if openpyxl < 2.2"""
@classmethod
def setup_class(cls):
@@ -2056,7 +2056,7 @@ def setup_class(cls):
@raise_on_incompat_version(2)
@skip_openpyxl_lt22
-class Openpyxl22Tests(ExcelWriterBase, tm.TestCase):
+class Openpyxl22Tests(ExcelWriterBase):
ext = '.xlsx'
engine_name = 'openpyxl22'
check_skip = staticmethod(lambda *args, **kwargs: None)
@@ -2151,7 +2151,7 @@ def test_write_cells_merge_styled(self):
assert xcell_a2.font == openpyxl_sty_merged
-class XlwtTests(ExcelWriterBase, tm.TestCase):
+class XlwtTests(ExcelWriterBase):
ext = '.xls'
engine_name = 'xlwt'
check_skip = staticmethod(_skip_if_no_xlwt)
@@ -2208,7 +2208,7 @@ def test_to_excel_styleconverter(self):
assert xlwt.Alignment.VERT_TOP == xls_style.alignment.vert
-class XlsxWriterTests(ExcelWriterBase, tm.TestCase):
+class XlsxWriterTests(ExcelWriterBase):
ext = '.xlsx'
engine_name = 'xlsxwriter'
check_skip = staticmethod(_skip_if_no_xlsxwriter)
@@ -2261,7 +2261,7 @@ def test_column_format(self):
assert read_num_format == num_format
-class OpenpyxlTests_NoMerge(ExcelWriterBase, tm.TestCase):
+class OpenpyxlTests_NoMerge(ExcelWriterBase):
ext = '.xlsx'
engine_name = 'openpyxl'
check_skip = staticmethod(_skip_if_no_openpyxl)
@@ -2270,7 +2270,7 @@ class OpenpyxlTests_NoMerge(ExcelWriterBase, tm.TestCase):
merge_cells = False
-class XlwtTests_NoMerge(ExcelWriterBase, tm.TestCase):
+class XlwtTests_NoMerge(ExcelWriterBase):
ext = '.xls'
engine_name = 'xlwt'
check_skip = staticmethod(_skip_if_no_xlwt)
@@ -2279,7 +2279,7 @@ class XlwtTests_NoMerge(ExcelWriterBase, tm.TestCase):
merge_cells = False
-class XlsxWriterTests_NoMerge(ExcelWriterBase, tm.TestCase):
+class XlsxWriterTests_NoMerge(ExcelWriterBase):
ext = '.xlsx'
engine_name = 'xlsxwriter'
check_skip = staticmethod(_skip_if_no_xlsxwriter)
@@ -2288,7 +2288,7 @@ class XlsxWriterTests_NoMerge(ExcelWriterBase, tm.TestCase):
merge_cells = False
-class ExcelWriterEngineTests(tm.TestCase):
+class ExcelWriterEngineTests(object):
def test_ExcelWriter_dispatch(self):
with tm.assert_raises_regex(ValueError, 'No engine'):
diff --git a/pandas/tests/io/test_gbq.py b/pandas/tests/io/test_gbq.py
index 47fc495201754..58a84ad4d47f8 100644
--- a/pandas/tests/io/test_gbq.py
+++ b/pandas/tests/io/test_gbq.py
@@ -10,7 +10,6 @@
from pandas import compat, DataFrame
from pandas.compat import range
-import pandas.util.testing as tm
pandas_gbq = pytest.importorskip('pandas_gbq')
@@ -94,7 +93,7 @@ def make_mixed_dataframe_v2(test_size):
@pytest.mark.single
-class TestToGBQIntegrationWithServiceAccountKeyPath(tm.TestCase):
+class TestToGBQIntegrationWithServiceAccountKeyPath(object):
@classmethod
def setup_class(cls):
diff --git a/pandas/tests/io/test_html.py b/pandas/tests/io/test_html.py
index 6b1215e443b47..fa83c43ba8dd4 100644
--- a/pandas/tests/io/test_html.py
+++ b/pandas/tests/io/test_html.py
@@ -93,14 +93,13 @@ def read_html(self, *args, **kwargs):
return read_html(*args, **kwargs)
-class TestReadHtml(tm.TestCase, ReadHtmlMixin):
+class TestReadHtml(ReadHtmlMixin):
flavor = 'bs4'
spam_data = os.path.join(DATA_PATH, 'spam.html')
banklist_data = os.path.join(DATA_PATH, 'banklist.html')
@classmethod
def setup_class(cls):
- super(TestReadHtml, cls).setup_class()
_skip_if_none_of(('bs4', 'html5lib'))
def test_to_html_compat(self):
@@ -778,13 +777,12 @@ def _lang_enc(filename):
return os.path.splitext(os.path.basename(filename))[0].split('_')
-class TestReadHtmlEncoding(tm.TestCase):
+class TestReadHtmlEncoding(object):
files = glob.glob(os.path.join(DATA_PATH, 'html_encoding', '*.html'))
flavor = 'bs4'
@classmethod
def setup_class(cls):
- super(TestReadHtmlEncoding, cls).setup_class()
_skip_if_none_of((cls.flavor, 'html5lib'))
def read_html(self, *args, **kwargs):
@@ -830,12 +828,11 @@ def setup_class(cls):
_skip_if_no(cls.flavor)
-class TestReadHtmlLxml(tm.TestCase, ReadHtmlMixin):
+class TestReadHtmlLxml(ReadHtmlMixin):
flavor = 'lxml'
@classmethod
def setup_class(cls):
- super(TestReadHtmlLxml, cls).setup_class()
_skip_if_no('lxml')
def test_data_fail(self):
diff --git a/pandas/tests/io/test_packers.py b/pandas/tests/io/test_packers.py
index 96abf3415fff8..4b1145129c364 100644
--- a/pandas/tests/io/test_packers.py
+++ b/pandas/tests/io/test_packers.py
@@ -90,7 +90,7 @@ def check_arbitrary(a, b):
assert(a == b)
-class TestPackers(tm.TestCase):
+class TestPackers(object):
def setup_method(self, method):
self.path = '__%s__.msg' % tm.rands(10)
diff --git a/pandas/tests/io/test_pytables.py b/pandas/tests/io/test_pytables.py
index 9e7196593650a..ee44fea55e51a 100644
--- a/pandas/tests/io/test_pytables.py
+++ b/pandas/tests/io/test_pytables.py
@@ -121,18 +121,16 @@ def _maybe_remove(store, key):
pass
-class Base(tm.TestCase):
+class Base(object):
@classmethod
def setup_class(cls):
- super(Base, cls).setup_class()
# Pytables 3.0.0 deprecates lots of things
tm.reset_testing_mode()
@classmethod
def teardown_class(cls):
- super(Base, cls).teardown_class()
# Pytables 3.0.0 deprecates lots of things
tm.set_testing_mode()
@@ -145,7 +143,7 @@ def teardown_method(self, method):
@pytest.mark.single
-class TestHDFStore(Base, tm.TestCase):
+class TestHDFStore(Base):
def test_factory_fun(self):
path = create_tempfile(self.path)
@@ -5228,7 +5226,7 @@ def test_complex_append(self):
assert_frame_equal(pd.concat([df, df], 0), result)
-class TestTimezones(Base, tm.TestCase):
+class TestTimezones(Base):
def _compare_with_tz(self, a, b):
tm.assert_frame_equal(a, b)
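The pytables hunk above also shows the second half of the migration: once the `tm.TestCase` base is gone, the `super(...).setup_class()` / `super(...).teardown_class()` chaining calls have nothing to chain to and are simply deleted. A hedged sketch of the class-level fixture shape that remains (names are illustrative):

```python
# Class-level fixtures in plain pytest style: setup_class/teardown_class
# run once per class, with no super() call to a TestCase base.

class TestWithClassFixtures(object):
    @classmethod
    def setup_class(cls):
        # expensive shared state created once for all tests in the class
        cls.shared = {"key": "value"}

    @classmethod
    def teardown_class(cls):
        # mirror the cleanup done in the diff (e.g. `del cls.data`)
        del cls.shared

    def test_shared(self):
        assert self.shared["key"] == "value"
```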
diff --git a/pandas/tests/io/test_s3.py b/pandas/tests/io/test_s3.py
index 36a0304bddfaf..8c2a32af33765 100644
--- a/pandas/tests/io/test_s3.py
+++ b/pandas/tests/io/test_s3.py
@@ -1,9 +1,7 @@
-from pandas.util import testing as tm
-
from pandas.io.common import _is_s3_url
-class TestS3URL(tm.TestCase):
+class TestS3URL(object):
def test_is_s3_url(self):
assert _is_s3_url("s3://pandas/somethingelse.com")
diff --git a/pandas/tests/io/test_sql.py b/pandas/tests/io/test_sql.py
index 21de0cd371a37..7b3717281bf89 100644
--- a/pandas/tests/io/test_sql.py
+++ b/pandas/tests/io/test_sql.py
@@ -818,7 +818,7 @@ def test_unicode_column_name(self):
@pytest.mark.single
-class TestSQLApi(SQLAlchemyMixIn, _TestSQLApi, tm.TestCase):
+class TestSQLApi(SQLAlchemyMixIn, _TestSQLApi):
"""
Test the public API as it would be used directly
@@ -998,12 +998,12 @@ def teardown_method(self, method):
@pytest.mark.single
-class TestSQLApiConn(_EngineToConnMixin, TestSQLApi, tm.TestCase):
+class TestSQLApiConn(_EngineToConnMixin, TestSQLApi):
pass
@pytest.mark.single
-class TestSQLiteFallbackApi(SQLiteMixIn, _TestSQLApi, tm.TestCase):
+class TestSQLiteFallbackApi(SQLiteMixIn, _TestSQLApi):
"""
Test the public sqlite connection fallback API
@@ -1821,37 +1821,32 @@ def test_schema_support(self):
@pytest.mark.single
-class TestMySQLAlchemy(_TestMySQLAlchemy, _TestSQLAlchemy, tm.TestCase):
+class TestMySQLAlchemy(_TestMySQLAlchemy, _TestSQLAlchemy):
pass
@pytest.mark.single
-class TestMySQLAlchemyConn(_TestMySQLAlchemy, _TestSQLAlchemyConn,
- tm.TestCase):
+class TestMySQLAlchemyConn(_TestMySQLAlchemy, _TestSQLAlchemyConn):
pass
@pytest.mark.single
-class TestPostgreSQLAlchemy(_TestPostgreSQLAlchemy, _TestSQLAlchemy,
- tm.TestCase):
+class TestPostgreSQLAlchemy(_TestPostgreSQLAlchemy, _TestSQLAlchemy):
pass
@pytest.mark.single
-class TestPostgreSQLAlchemyConn(_TestPostgreSQLAlchemy, _TestSQLAlchemyConn,
- tm.TestCase):
+class TestPostgreSQLAlchemyConn(_TestPostgreSQLAlchemy, _TestSQLAlchemyConn):
pass
@pytest.mark.single
-class TestSQLiteAlchemy(_TestSQLiteAlchemy, _TestSQLAlchemy,
- tm.TestCase):
+class TestSQLiteAlchemy(_TestSQLiteAlchemy, _TestSQLAlchemy):
pass
@pytest.mark.single
-class TestSQLiteAlchemyConn(_TestSQLiteAlchemy, _TestSQLAlchemyConn,
- tm.TestCase):
+class TestSQLiteAlchemyConn(_TestSQLiteAlchemy, _TestSQLAlchemyConn):
pass
@@ -1859,7 +1854,7 @@ class TestSQLiteAlchemyConn(_TestSQLiteAlchemy, _TestSQLAlchemyConn,
# -- Test Sqlite / MySQL fallback
@pytest.mark.single
-class TestSQLiteFallback(SQLiteMixIn, PandasSQLTest, tm.TestCase):
+class TestSQLiteFallback(SQLiteMixIn, PandasSQLTest):
"""
Test the fallback mode against an in-memory sqlite database.
@@ -2083,7 +2078,7 @@ def _skip_if_no_pymysql():
@pytest.mark.single
-class TestXSQLite(SQLiteMixIn, tm.TestCase):
+class TestXSQLite(SQLiteMixIn):
def setup_method(self, method):
self.method = method
@@ -2287,7 +2282,7 @@ def clean_up(test_table_to_drop):
@pytest.mark.single
-class TestSQLFlavorDeprecation(tm.TestCase):
+class TestSQLFlavorDeprecation(object):
"""
gh-13611: test that the 'flavor' parameter
is appropriately deprecated by checking the
@@ -2314,7 +2309,7 @@ def test_deprecated_flavor(self):
@pytest.mark.single
@pytest.mark.skip(reason="gh-13611: there is no support for MySQL "
"if SQLAlchemy is not installed")
-class TestXMySQL(MySQLMixIn, tm.TestCase):
+class TestXMySQL(MySQLMixIn):
@classmethod
def setup_class(cls):
diff --git a/pandas/tests/io/test_stata.py b/pandas/tests/io/test_stata.py
index 7867e6866876a..4c92c19c51e7a 100644
--- a/pandas/tests/io/test_stata.py
+++ b/pandas/tests/io/test_stata.py
@@ -23,7 +23,7 @@
from pandas.core.dtypes.common import is_categorical_dtype
-class TestStata(tm.TestCase):
+class TestStata(object):
def setup_method(self, method):
self.dirpath = tm.get_data_path()
diff --git a/pandas/tests/plotting/common.py b/pandas/tests/plotting/common.py
index 9a24e4ae2dad0..ac490a00bf684 100644
--- a/pandas/tests/plotting/common.py
+++ b/pandas/tests/plotting/common.py
@@ -42,7 +42,7 @@ def _ok_for_gaussian_kde(kind):
return True
-class TestPlotBase(tm.TestCase):
+class TestPlotBase(object):
def setup_method(self, method):
diff --git a/pandas/tests/plotting/test_converter.py b/pandas/tests/plotting/test_converter.py
index 21d8d1f0ab555..e1f64bed5598d 100644
--- a/pandas/tests/plotting/test_converter.py
+++ b/pandas/tests/plotting/test_converter.py
@@ -15,7 +15,7 @@ def test_timtetonum_accepts_unicode():
assert (converter.time2num("00:01") == converter.time2num(u("00:01")))
-class TestDateTimeConverter(tm.TestCase):
+class TestDateTimeConverter(object):
def setup_method(self, method):
self.dtc = converter.DatetimeConverter()
@@ -146,7 +146,7 @@ def test_convert_nested(self):
assert result == expected
-class TestPeriodConverter(tm.TestCase):
+class TestPeriodConverter(object):
def setup_method(self, method):
self.pc = converter.PeriodConverter()
diff --git a/pandas/tests/reshape/test_concat.py b/pandas/tests/reshape/test_concat.py
index 1842af465ca89..4dfa2904313ce 100644
--- a/pandas/tests/reshape/test_concat.py
+++ b/pandas/tests/reshape/test_concat.py
@@ -17,7 +17,7 @@
import pytest
-class ConcatenateBase(tm.TestCase):
+class ConcatenateBase(object):
def setup_method(self, method):
self.frame = DataFrame(tm.getSeriesData())
diff --git a/pandas/tests/reshape/test_hashing.py b/pandas/tests/reshape/test_hashing.py
index 622768353dd50..5f2c67ee300b5 100644
--- a/pandas/tests/reshape/test_hashing.py
+++ b/pandas/tests/reshape/test_hashing.py
@@ -9,7 +9,7 @@
import pandas.util.testing as tm
-class TestHashing(tm.TestCase):
+class TestHashing(object):
def setup_method(self, method):
self.df = DataFrame(
diff --git a/pandas/tests/reshape/test_join.py b/pandas/tests/reshape/test_join.py
index 3a6985fd4a373..e25661fb65271 100644
--- a/pandas/tests/reshape/test_join.py
+++ b/pandas/tests/reshape/test_join.py
@@ -19,7 +19,7 @@
a_ = np.array
-class TestJoin(tm.TestCase):
+class TestJoin(object):
def setup_method(self, method):
# aggregate multiple columns
diff --git a/pandas/tests/reshape/test_merge.py b/pandas/tests/reshape/test_merge.py
index e36b7ecbc3c7b..d3257243d7a2c 100644
--- a/pandas/tests/reshape/test_merge.py
+++ b/pandas/tests/reshape/test_merge.py
@@ -33,7 +33,7 @@ def get_test_data(ngroups=NGROUPS, n=N):
return arr
-class TestMerge(tm.TestCase):
+class TestMerge(object):
def setup_method(self, method):
# aggregate multiple columns
@@ -737,7 +737,7 @@ def _check_merge(x, y):
assert_frame_equal(result, expected, check_names=False)
-class TestMergeMulti(tm.TestCase):
+class TestMergeMulti(object):
def setup_method(self, method):
self.index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'],
diff --git a/pandas/tests/reshape/test_merge_asof.py b/pandas/tests/reshape/test_merge_asof.py
index 7e33449c92665..78bfa2ff8597c 100644
--- a/pandas/tests/reshape/test_merge_asof.py
+++ b/pandas/tests/reshape/test_merge_asof.py
@@ -11,7 +11,7 @@
from pandas.util.testing import assert_frame_equal
-class TestAsOfMerge(tm.TestCase):
+class TestAsOfMerge(object):
def read_data(self, name, dedupe=False):
path = os.path.join(tm.get_data_path(), name)
diff --git a/pandas/tests/reshape/test_merge_ordered.py b/pandas/tests/reshape/test_merge_ordered.py
index 375e2e13847e8..9469e98f336fd 100644
--- a/pandas/tests/reshape/test_merge_ordered.py
+++ b/pandas/tests/reshape/test_merge_ordered.py
@@ -6,7 +6,7 @@
from numpy import nan
-class TestOrderedMerge(tm.TestCase):
+class TestOrderedMerge(object):
def setup_method(self, method):
self.left = DataFrame({'key': ['a', 'c', 'e'],
diff --git a/pandas/tests/reshape/test_pivot.py b/pandas/tests/reshape/test_pivot.py
index 905cd27ca4c58..270a93e4ae382 100644
--- a/pandas/tests/reshape/test_pivot.py
+++ b/pandas/tests/reshape/test_pivot.py
@@ -15,7 +15,7 @@
from pandas.tseries.util import pivot_annual, isleapyear
-class TestPivotTable(tm.TestCase):
+class TestPivotTable(object):
def setup_method(self, method):
self.data = DataFrame({'A': ['foo', 'foo', 'foo', 'foo',
@@ -982,7 +982,7 @@ def test_pivot_table_not_series(self):
tm.assert_frame_equal(result, expected)
-class TestCrosstab(tm.TestCase):
+class TestCrosstab(object):
def setup_method(self, method):
df = DataFrame({'A': ['foo', 'foo', 'foo', 'foo',
@@ -1397,7 +1397,7 @@ def test_crosstab_with_numpy_size(self):
tm.assert_frame_equal(result, expected)
-class TestPivotAnnual(tm.TestCase):
+class TestPivotAnnual(object):
"""
New pandas of scikits.timeseries pivot_annual
"""
diff --git a/pandas/tests/reshape/test_reshape.py b/pandas/tests/reshape/test_reshape.py
index de2fe444bc4ea..79626d89026a7 100644
--- a/pandas/tests/reshape/test_reshape.py
+++ b/pandas/tests/reshape/test_reshape.py
@@ -17,7 +17,7 @@
from pandas.compat import range, u
-class TestMelt(tm.TestCase):
+class TestMelt(object):
def setup_method(self, method):
self.df = tm.makeTimeDataFrame()[:10]
@@ -216,7 +216,7 @@ def test_multiindex(self):
assert res.columns.tolist() == ['CAP', 'low', 'value']
-class TestGetDummies(tm.TestCase):
+class TestGetDummies(object):
sparse = False
@@ -644,7 +644,7 @@ class TestGetDummiesSparse(TestGetDummies):
sparse = True
-class TestMakeAxisDummies(tm.TestCase):
+class TestMakeAxisDummies(object):
def test_preserve_categorical_dtype(self):
# GH13854
@@ -665,7 +665,7 @@ def test_preserve_categorical_dtype(self):
tm.assert_frame_equal(result, expected)
-class TestLreshape(tm.TestCase):
+class TestLreshape(object):
def test_pairs(self):
data = {'birthdt': ['08jan2009', '20dec2008', '30dec2008', '21dec2008',
@@ -737,7 +737,7 @@ def test_pairs(self):
pytest.raises(ValueError, lreshape, df, spec)
-class TestWideToLong(tm.TestCase):
+class TestWideToLong(object):
def test_simple(self):
np.random.seed(123)
diff --git a/pandas/tests/reshape/test_tile.py b/pandas/tests/reshape/test_tile.py
index 2291030a2735c..8602b33856fea 100644
--- a/pandas/tests/reshape/test_tile.py
+++ b/pandas/tests/reshape/test_tile.py
@@ -14,7 +14,7 @@
import pandas.core.reshape.tile as tmod
-class TestCut(tm.TestCase):
+class TestCut(object):
def test_simple(self):
data = np.ones(5, dtype='int64')
diff --git a/pandas/tests/reshape/test_union_categoricals.py b/pandas/tests/reshape/test_union_categoricals.py
index 5cc476718add2..fe8d54005ba9b 100644
--- a/pandas/tests/reshape/test_union_categoricals.py
+++ b/pandas/tests/reshape/test_union_categoricals.py
@@ -7,7 +7,7 @@
from pandas.util import testing as tm
-class TestUnionCategoricals(tm.TestCase):
+class TestUnionCategoricals(object):
def test_union_categorical(self):
# GH 13361
diff --git a/pandas/tests/reshape/test_util.py b/pandas/tests/reshape/test_util.py
index a7fbe8d305011..e4a9591b95c26 100644
--- a/pandas/tests/reshape/test_util.py
+++ b/pandas/tests/reshape/test_util.py
@@ -5,7 +5,7 @@
from pandas.core.reshape.util import cartesian_product
-class TestCartesianProduct(tm.TestCase):
+class TestCartesianProduct(object):
def test_simple(self):
x, y = list('ABC'), [1, 22]
diff --git a/pandas/tests/scalar/test_interval.py b/pandas/tests/scalar/test_interval.py
index fab6f170bec60..e06f7cb34eb52 100644
--- a/pandas/tests/scalar/test_interval.py
+++ b/pandas/tests/scalar/test_interval.py
@@ -5,7 +5,7 @@
import pandas.util.testing as tm
-class TestInterval(tm.TestCase):
+class TestInterval(object):
def setup_method(self, method):
self.interval = Interval(0, 1)
diff --git a/pandas/tests/scalar/test_period.py b/pandas/tests/scalar/test_period.py
index 8c89fa60b12d6..54366dc9b1c3f 100644
--- a/pandas/tests/scalar/test_period.py
+++ b/pandas/tests/scalar/test_period.py
@@ -14,7 +14,7 @@
from pandas.tseries.frequencies import DAYS, MONTHS
-class TestPeriodProperties(tm.TestCase):
+class TestPeriodProperties(object):
"Test properties such as year, month, weekday, etc...."
def test_is_leap_year(self):
@@ -911,7 +911,7 @@ def test_round_trip(self):
assert new_p == p
-class TestPeriodField(tm.TestCase):
+class TestPeriodField(object):
def test_get_period_field_raises_on_out_of_range(self):
pytest.raises(ValueError, libperiod.get_period_field, -1, 0, 0)
@@ -921,7 +921,7 @@ def test_get_period_field_array_raises_on_out_of_range(self):
np.empty(1), 0)
-class TestComparisons(tm.TestCase):
+class TestComparisons(object):
def setup_method(self, method):
self.january1 = Period('2000-01', 'M')
@@ -1006,7 +1006,7 @@ def test_period_nat_comp(self):
assert not left >= right
-class TestMethods(tm.TestCase):
+class TestMethods(object):
def test_add(self):
dt1 = Period(freq='D', year=2008, month=1, day=1)
diff --git a/pandas/tests/scalar/test_period_asfreq.py b/pandas/tests/scalar/test_period_asfreq.py
index 7011cfeef90ae..32cea60c333b7 100644
--- a/pandas/tests/scalar/test_period_asfreq.py
+++ b/pandas/tests/scalar/test_period_asfreq.py
@@ -4,7 +4,7 @@
from pandas.tseries.frequencies import _period_code_map
-class TestFreqConversion(tm.TestCase):
+class TestFreqConversion(object):
"""Test frequency conversion of date objects"""
def test_asfreq_corner(self):
diff --git a/pandas/tests/scalar/test_timedelta.py b/pandas/tests/scalar/test_timedelta.py
index 82d6f6e8c84e5..ecc44204924d3 100644
--- a/pandas/tests/scalar/test_timedelta.py
+++ b/pandas/tests/scalar/test_timedelta.py
@@ -12,7 +12,7 @@
from pandas._libs.tslib import iNaT, NaTType
-class TestTimedeltas(tm.TestCase):
+class TestTimedeltas(object):
_multiprocess_can_split_ = True
def setup_method(self, method):
diff --git a/pandas/tests/scalar/test_timestamp.py b/pandas/tests/scalar/test_timestamp.py
index 64f68112f4b81..5caa0252b69b8 100644
--- a/pandas/tests/scalar/test_timestamp.py
+++ b/pandas/tests/scalar/test_timestamp.py
@@ -22,7 +22,7 @@
RESO_MS, RESO_SEC)
-class TestTimestamp(tm.TestCase):
+class TestTimestamp(object):
def test_constructor(self):
base_str = '2014-07-01 09:00'
@@ -1094,7 +1094,7 @@ def test_is_leap_year(self):
assert not dt.is_leap_year
-class TestTimestampNsOperations(tm.TestCase):
+class TestTimestampNsOperations(object):
def setup_method(self, method):
self.timestamp = Timestamp(datetime.utcnow())
@@ -1181,7 +1181,7 @@ def test_nanosecond_timestamp(self):
assert t.nanosecond == 10
-class TestTimestampOps(tm.TestCase):
+class TestTimestampOps(object):
def test_timestamp_and_datetime(self):
assert ((Timestamp(datetime(2013, 10, 13)) -
@@ -1256,7 +1256,7 @@ def test_resolution(self):
assert result == expected
-class TestTimestampToJulianDate(tm.TestCase):
+class TestTimestampToJulianDate(object):
def test_compare_1700(self):
r = Timestamp('1700-06-23').to_julian_date()
@@ -1279,7 +1279,7 @@ def test_compare_hour13(self):
assert r == 2451769.0416666666666666
-class TestTimeSeries(tm.TestCase):
+class TestTimeSeries(object):
def test_timestamp_to_datetime(self):
tm._skip_if_no_pytz()
@@ -1490,7 +1490,7 @@ def test_woy_boundary(self):
assert (result == [52, 52, 53, 53]).all()
-class TestTsUtil(tm.TestCase):
+class TestTsUtil(object):
def test_min_valid(self):
# Ensure that Timestamp.min is a valid Timestamp
diff --git a/pandas/tests/series/test_alter_axes.py b/pandas/tests/series/test_alter_axes.py
index 33a4cdb6e26c4..150767ee9e2b2 100644
--- a/pandas/tests/series/test_alter_axes.py
+++ b/pandas/tests/series/test_alter_axes.py
@@ -18,7 +18,7 @@
from .common import TestData
-class TestSeriesAlterAxes(TestData, tm.TestCase):
+class TestSeriesAlterAxes(TestData):
def test_setindex(self):
# wrong type
diff --git a/pandas/tests/series/test_analytics.py b/pandas/tests/series/test_analytics.py
index 71131452393a7..257f992f57f6d 100644
--- a/pandas/tests/series/test_analytics.py
+++ b/pandas/tests/series/test_analytics.py
@@ -28,7 +28,7 @@
from .common import TestData
-class TestSeriesAnalytics(TestData, tm.TestCase):
+class TestSeriesAnalytics(TestData):
def test_sum_zero(self):
arr = np.array([])
diff --git a/pandas/tests/series/test_api.py b/pandas/tests/series/test_api.py
index 5bb463c7a2ebe..1eb2b98a7d7cc 100644
--- a/pandas/tests/series/test_api.py
+++ b/pandas/tests/series/test_api.py
@@ -118,7 +118,7 @@ def test_to_sparse_pass_name(self):
assert result.name == self.ts.name
-class TestSeriesMisc(TestData, SharedWithSparse, tm.TestCase):
+class TestSeriesMisc(TestData, SharedWithSparse):
def test_tab_completion(self):
# GH 9910
diff --git a/pandas/tests/series/test_apply.py b/pandas/tests/series/test_apply.py
index 089a2c36a5574..c273d3161fff5 100644
--- a/pandas/tests/series/test_apply.py
+++ b/pandas/tests/series/test_apply.py
@@ -17,7 +17,7 @@
from .common import TestData
-class TestSeriesApply(TestData, tm.TestCase):
+class TestSeriesApply(TestData):
def test_apply(self):
with np.errstate(all='ignore'):
@@ -151,7 +151,7 @@ def test_apply_dict_depr(self):
tsdf.A.agg({'foo': ['sum', 'mean']})
-class TestSeriesAggregate(TestData, tm.TestCase):
+class TestSeriesAggregate(TestData):
_multiprocess_can_split_ = True
@@ -307,7 +307,7 @@ def test_reduce(self):
assert_series_equal(result, expected)
-class TestSeriesMap(TestData, tm.TestCase):
+class TestSeriesMap(TestData):
def test_map(self):
index, data = tm.getMixedTypeDict()
diff --git a/pandas/tests/series/test_asof.py b/pandas/tests/series/test_asof.py
index a839d571c116c..1f62d618b20e1 100644
--- a/pandas/tests/series/test_asof.py
+++ b/pandas/tests/series/test_asof.py
@@ -11,7 +11,7 @@
from .common import TestData
-class TestSeriesAsof(TestData, tm.TestCase):
+class TestSeriesAsof(TestData):
def test_basic(self):
diff --git a/pandas/tests/series/test_combine_concat.py b/pandas/tests/series/test_combine_concat.py
index 1291449ae7ce9..bb998b7fa55dd 100644
--- a/pandas/tests/series/test_combine_concat.py
+++ b/pandas/tests/series/test_combine_concat.py
@@ -18,7 +18,7 @@
from .common import TestData
-class TestSeriesCombine(TestData, tm.TestCase):
+class TestSeriesCombine(TestData):
def test_append(self):
appendedSeries = self.series.append(self.objSeries)
@@ -217,7 +217,7 @@ def test_combine_first_dt64(self):
assert_series_equal(rs, xp)
-class TestTimeseries(tm.TestCase):
+class TestTimeseries(object):
def test_append_concat(self):
rng = date_range('5/8/2012 1:45', periods=10, freq='5T')
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index a0a68a332f735..d591aa4f567a9 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -28,7 +28,7 @@
from .common import TestData
-class TestSeriesConstructors(TestData, tm.TestCase):
+class TestSeriesConstructors(TestData):
def test_invalid_dtype(self):
# GH15520
diff --git a/pandas/tests/series/test_datetime_values.py b/pandas/tests/series/test_datetime_values.py
index 50914eef1abc8..e1fc9af0cca89 100644
--- a/pandas/tests/series/test_datetime_values.py
+++ b/pandas/tests/series/test_datetime_values.py
@@ -20,7 +20,7 @@
from .common import TestData
-class TestSeriesDatetimeValues(TestData, tm.TestCase):
+class TestSeriesDatetimeValues(TestData):
def test_dt_namespace_accessor(self):
diff --git a/pandas/tests/series/test_indexing.py b/pandas/tests/series/test_indexing.py
index 8eae59a473995..7f876357ad3ab 100644
--- a/pandas/tests/series/test_indexing.py
+++ b/pandas/tests/series/test_indexing.py
@@ -31,7 +31,7 @@
JOIN_TYPES = ['inner', 'outer', 'left', 'right']
-class TestSeriesIndexing(TestData, tm.TestCase):
+class TestSeriesIndexing(TestData):
def test_get(self):
@@ -2252,7 +2252,7 @@ def test_setitem_slice_into_readonly_backing_data(self):
assert not array.any()
-class TestTimeSeriesDuplicates(tm.TestCase):
+class TestTimeSeriesDuplicates(object):
def setup_method(self, method):
dates = [datetime(2000, 1, 2), datetime(2000, 1, 2),
@@ -2494,7 +2494,7 @@ def test_indexing(self):
pytest.raises(KeyError, df.__getitem__, df.index[2], )
-class TestDatetimeIndexing(tm.TestCase):
+class TestDatetimeIndexing(object):
"""
Also test support for datetime64[ns] in Series / DataFrame
"""
@@ -2638,7 +2638,7 @@ def test_frame_datetime64_duplicated(self):
assert (-result).all()
-class TestNatIndexing(tm.TestCase):
+class TestNatIndexing(object):
def setup_method(self, method):
self.series = Series(date_range('1/1/2000', periods=10))
diff --git a/pandas/tests/series/test_internals.py b/pandas/tests/series/test_internals.py
index 31492a4ab214a..79e23459ac992 100644
--- a/pandas/tests/series/test_internals.py
+++ b/pandas/tests/series/test_internals.py
@@ -16,7 +16,7 @@
import pandas.util.testing as tm
-class TestSeriesInternals(tm.TestCase):
+class TestSeriesInternals(object):
def test_convert_objects(self):
diff --git a/pandas/tests/series/test_io.py b/pandas/tests/series/test_io.py
index 24bb3bbc7fc16..d1c9e5a6d16cf 100644
--- a/pandas/tests/series/test_io.py
+++ b/pandas/tests/series/test_io.py
@@ -16,7 +16,7 @@
from .common import TestData
-class TestSeriesToCSV(TestData, tm.TestCase):
+class TestSeriesToCSV(TestData):
def test_from_csv(self):
@@ -108,7 +108,7 @@ def test_to_csv_path_is_none(self):
assert isinstance(csv_str, str)
-class TestSeriesIO(TestData, tm.TestCase):
+class TestSeriesIO(TestData):
def test_to_frame(self):
self.ts.name = None
@@ -168,7 +168,7 @@ class SubclassedFrame(DataFrame):
assert_frame_equal(result, expected)
-class TestSeriesToList(TestData, tm.TestCase):
+class TestSeriesToList(TestData):
def test_tolist(self):
rs = self.ts.tolist()
diff --git a/pandas/tests/series/test_missing.py b/pandas/tests/series/test_missing.py
index 0eaab2e588cc2..c52c41877d5c0 100644
--- a/pandas/tests/series/test_missing.py
+++ b/pandas/tests/series/test_missing.py
@@ -48,7 +48,7 @@ def _simple_ts(start, end, freq='D'):
return Series(np.random.randn(len(rng)), index=rng)
-class TestSeriesMissingData(TestData, tm.TestCase):
+class TestSeriesMissingData(TestData):
def test_timedelta_fillna(self):
# GH 3371
@@ -700,7 +700,7 @@ def test_series_pad_backfill_limit(self):
assert_series_equal(result, expected)
-class TestSeriesInterpolateData(TestData, tm.TestCase):
+class TestSeriesInterpolateData(TestData):
def test_interpolate(self):
ts = Series(np.arange(len(self.ts), dtype=float), self.ts.index)
diff --git a/pandas/tests/series/test_operators.py b/pandas/tests/series/test_operators.py
index 7c7b98961d960..db0d06aa35a2a 100644
--- a/pandas/tests/series/test_operators.py
+++ b/pandas/tests/series/test_operators.py
@@ -28,7 +28,7 @@
from .common import TestData
-class TestSeriesOperators(TestData, tm.TestCase):
+class TestSeriesOperators(TestData):
def test_series_comparison_scalars(self):
series = Series(date_range('1/1/2000', periods=10))
diff --git a/pandas/tests/series/test_period.py b/pandas/tests/series/test_period.py
index 792d5b9e5c383..6e8ee38d366e2 100644
--- a/pandas/tests/series/test_period.py
+++ b/pandas/tests/series/test_period.py
@@ -10,7 +10,7 @@ def _permute(obj):
return obj.take(np.random.permutation(len(obj)))
-class TestSeriesPeriod(tm.TestCase):
+class TestSeriesPeriod(object):
def setup_method(self, method):
self.series = Series(period_range('2000-01-01', periods=10, freq='D'))
diff --git a/pandas/tests/series/test_quantile.py b/pandas/tests/series/test_quantile.py
index 6d2cdd046ea7f..2d02260ac7303 100644
--- a/pandas/tests/series/test_quantile.py
+++ b/pandas/tests/series/test_quantile.py
@@ -13,7 +13,7 @@
from .common import TestData
-class TestSeriesQuantile(TestData, tm.TestCase):
+class TestSeriesQuantile(TestData):
def test_quantile(self):
diff --git a/pandas/tests/series/test_rank.py b/pandas/tests/series/test_rank.py
index 1a1829eb5829f..ff489eb7f15b1 100644
--- a/pandas/tests/series/test_rank.py
+++ b/pandas/tests/series/test_rank.py
@@ -15,7 +15,7 @@
from pandas.tests.series.common import TestData
-class TestSeriesRank(tm.TestCase, TestData):
+class TestSeriesRank(TestData):
s = Series([1, 3, 4, 2, nan, 2, 1, 5, nan, 3])
results = {
diff --git a/pandas/tests/series/test_replace.py b/pandas/tests/series/test_replace.py
index 19a99c8351db8..35d13a62ca083 100644
--- a/pandas/tests/series/test_replace.py
+++ b/pandas/tests/series/test_replace.py
@@ -11,7 +11,7 @@
from .common import TestData
-class TestSeriesReplace(TestData, tm.TestCase):
+class TestSeriesReplace(TestData):
def test_replace(self):
N = 100
ser = pd.Series(np.random.randn(N))
diff --git a/pandas/tests/series/test_repr.py b/pandas/tests/series/test_repr.py
index 8c1d74c5c2c23..3af61b0a902d3 100644
--- a/pandas/tests/series/test_repr.py
+++ b/pandas/tests/series/test_repr.py
@@ -18,7 +18,7 @@
from .common import TestData
-class TestSeriesRepr(TestData, tm.TestCase):
+class TestSeriesRepr(TestData):
def test_multilevel_name_print(self):
index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'], ['one', 'two',
diff --git a/pandas/tests/series/test_sorting.py b/pandas/tests/series/test_sorting.py
index 791a7d5db9a26..40b0280de3719 100644
--- a/pandas/tests/series/test_sorting.py
+++ b/pandas/tests/series/test_sorting.py
@@ -13,7 +13,7 @@
from .common import TestData
-class TestSeriesSorting(TestData, tm.TestCase):
+class TestSeriesSorting(TestData):
def test_sortlevel_deprecated(self):
ts = self.ts.copy()
diff --git a/pandas/tests/series/test_subclass.py b/pandas/tests/series/test_subclass.py
index fe8a5e7658d9c..37c8d7343f7f1 100644
--- a/pandas/tests/series/test_subclass.py
+++ b/pandas/tests/series/test_subclass.py
@@ -6,7 +6,7 @@
import pandas.util.testing as tm
-class TestSeriesSubclassing(tm.TestCase):
+class TestSeriesSubclassing(object):
def test_indexing_sliced(self):
s = tm.SubclassedSeries([1, 2, 3, 4], index=list('abcd'))
@@ -33,7 +33,7 @@ def test_to_frame(self):
assert isinstance(res, tm.SubclassedDataFrame)
-class TestSparseSeriesSubclassing(tm.TestCase):
+class TestSparseSeriesSubclassing(object):
def test_subclass_sparse_slice(self):
# int64
diff --git a/pandas/tests/series/test_timeseries.py b/pandas/tests/series/test_timeseries.py
index 78e5d87636532..d5517bdcceac7 100644
--- a/pandas/tests/series/test_timeseries.py
+++ b/pandas/tests/series/test_timeseries.py
@@ -33,7 +33,7 @@ def assert_range_equal(left, right):
assert (left.tz == right.tz)
-class TestTimeSeries(TestData, tm.TestCase):
+class TestTimeSeries(TestData):
def test_shift(self):
shifted = self.ts.shift(1)
diff --git a/pandas/tests/sparse/test_arithmetics.py b/pandas/tests/sparse/test_arithmetics.py
index 468d856ca68ce..f023cd0003910 100644
--- a/pandas/tests/sparse/test_arithmetics.py
+++ b/pandas/tests/sparse/test_arithmetics.py
@@ -3,7 +3,7 @@
import pandas.util.testing as tm
-class TestSparseArrayArithmetics(tm.TestCase):
+class TestSparseArrayArithmetics(object):
_base = np.array
_klass = pd.SparseArray
diff --git a/pandas/tests/sparse/test_array.py b/pandas/tests/sparse/test_array.py
index c205a1efbeeb1..ab7340c89f016 100644
--- a/pandas/tests/sparse/test_array.py
+++ b/pandas/tests/sparse/test_array.py
@@ -15,7 +15,7 @@
import pandas.util.testing as tm
-class TestSparseArray(tm.TestCase):
+class TestSparseArray(object):
def setup_method(self, method):
self.arr_data = np.array([nan, nan, 1, 2, 3, nan, 4, 5, nan, 6])
@@ -656,7 +656,7 @@ def test_fillna_overlap(self):
tm.assert_sp_array_equal(res, exp)
-class TestSparseArrayAnalytics(tm.TestCase):
+class TestSparseArrayAnalytics(object):
def test_sum(self):
data = np.arange(10).astype(float)
diff --git a/pandas/tests/sparse/test_combine_concat.py b/pandas/tests/sparse/test_combine_concat.py
index ab56a83c90530..15639fbe156c6 100644
--- a/pandas/tests/sparse/test_combine_concat.py
+++ b/pandas/tests/sparse/test_combine_concat.py
@@ -5,7 +5,7 @@
import pandas.util.testing as tm
-class TestSparseSeriesConcat(tm.TestCase):
+class TestSparseSeriesConcat(object):
def test_concat(self):
val1 = np.array([1, 2, np.nan, np.nan, 0, np.nan])
@@ -122,7 +122,7 @@ def test_concat_sparse_dense(self):
tm.assert_sp_series_equal(res, exp)
-class TestSparseDataFrameConcat(tm.TestCase):
+class TestSparseDataFrameConcat(object):
def setup_method(self, method):
diff --git a/pandas/tests/sparse/test_format.py b/pandas/tests/sparse/test_format.py
index 74be14ff5cf15..d983bd209085a 100644
--- a/pandas/tests/sparse/test_format.py
+++ b/pandas/tests/sparse/test_format.py
@@ -13,7 +13,7 @@
use_32bit_repr = is_platform_windows() or is_platform_32bit()
-class TestSparseSeriesFormatting(tm.TestCase):
+class TestSparseSeriesFormatting(object):
@property
def dtype_format_for_platform(self):
@@ -105,7 +105,7 @@ def test_sparse_int(self):
assert result == exp
-class TestSparseDataFrameFormatting(tm.TestCase):
+class TestSparseDataFrameFormatting(object):
def test_sparse_frame(self):
# GH 13110
diff --git a/pandas/tests/sparse/test_frame.py b/pandas/tests/sparse/test_frame.py
index 762bfba85dd0a..4a4a596e3bed4 100644
--- a/pandas/tests/sparse/test_frame.py
+++ b/pandas/tests/sparse/test_frame.py
@@ -26,7 +26,7 @@
from pandas.tests.frame.test_api import SharedWithSparse
-class TestSparseDataFrame(tm.TestCase, SharedWithSparse):
+class TestSparseDataFrame(SharedWithSparse):
klass = SparseDataFrame
def setup_method(self, method):
@@ -1245,7 +1245,7 @@ def test_from_to_scipy_object(spmatrix, fill_value):
assert sdf.to_coo().dtype == res_dtype
-class TestSparseDataFrameArithmetic(tm.TestCase):
+class TestSparseDataFrameArithmetic(object):
def test_numeric_op_scalar(self):
df = pd.DataFrame({'A': [nan, nan, 0, 1, ],
@@ -1274,7 +1274,7 @@ def test_comparison_op_scalar(self):
tm.assert_frame_equal(res.to_dense(), df != 0)
-class TestSparseDataFrameAnalytics(tm.TestCase):
+class TestSparseDataFrameAnalytics(object):
def setup_method(self, method):
self.data = {'A': [nan, nan, nan, 0, 1, 2, 3, 4, 5, 6],
'B': [0, 1, 2, nan, nan, nan, 3, 4, 5, 6],
diff --git a/pandas/tests/sparse/test_groupby.py b/pandas/tests/sparse/test_groupby.py
index 501e40c6ebffd..c9049ed9743dd 100644
--- a/pandas/tests/sparse/test_groupby.py
+++ b/pandas/tests/sparse/test_groupby.py
@@ -4,7 +4,7 @@
import pandas.util.testing as tm
-class TestSparseGroupBy(tm.TestCase):
+class TestSparseGroupBy(object):
def setup_method(self, method):
self.dense = pd.DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
diff --git a/pandas/tests/sparse/test_indexing.py b/pandas/tests/sparse/test_indexing.py
index bb449c05729d4..382cff4b9d0ac 100644
--- a/pandas/tests/sparse/test_indexing.py
+++ b/pandas/tests/sparse/test_indexing.py
@@ -6,7 +6,7 @@
import pandas.util.testing as tm
-class TestSparseSeriesIndexing(tm.TestCase):
+class TestSparseSeriesIndexing(object):
def setup_method(self, method):
self.orig = pd.Series([1, np.nan, np.nan, 3, np.nan])
@@ -589,7 +589,7 @@ def test_reindex(self):
assert sparse is not res
-class TestSparseDataFrameIndexing(tm.TestCase):
+class TestSparseDataFrameIndexing(object):
def test_getitem(self):
orig = pd.DataFrame([[1, np.nan, np.nan],
@@ -952,7 +952,7 @@ def test_reindex_fill_value(self):
tm.assert_sp_frame_equal(res, exp)
-class TestMultitype(tm.TestCase):
+class TestMultitype(object):
def setup_method(self, method):
self.cols = ['string', 'int', 'float', 'object']
diff --git a/pandas/tests/sparse/test_libsparse.py b/pandas/tests/sparse/test_libsparse.py
index c7207870b22b9..c41025582c651 100644
--- a/pandas/tests/sparse/test_libsparse.py
+++ b/pandas/tests/sparse/test_libsparse.py
@@ -42,7 +42,7 @@ def _check_case_dict(case):
_check_case([], [], [], [], [], [])
-class TestSparseIndexUnion(tm.TestCase):
+class TestSparseIndexUnion(object):
def test_index_make_union(self):
def _check_case(xloc, xlen, yloc, ylen, eloc, elen):
@@ -188,7 +188,7 @@ def test_intindex_make_union(self):
a.make_union(b)
-class TestSparseIndexIntersect(tm.TestCase):
+class TestSparseIndexIntersect(object):
def test_intersect(self):
def _check_correct(a, b, expected):
@@ -239,7 +239,7 @@ def test_intersect_identical(self):
assert case.intersect(case).equals(case)
-class TestSparseIndexCommon(tm.TestCase):
+class TestSparseIndexCommon(object):
def test_int_internal(self):
idx = _make_index(4, np.array([2, 3], dtype=np.int32), kind='integer')
@@ -387,7 +387,7 @@ def _check(index):
# corner cases
-class TestBlockIndex(tm.TestCase):
+class TestBlockIndex(object):
def test_block_internal(self):
idx = _make_index(4, np.array([2, 3], dtype=np.int32), kind='block')
@@ -472,7 +472,7 @@ def test_to_block_index(self):
assert index.to_block_index() is index
-class TestIntIndex(tm.TestCase):
+class TestIntIndex(object):
def test_check_integrity(self):
@@ -557,7 +557,7 @@ def test_to_int_index(self):
assert index.to_int_index() is index
-class TestSparseOperators(tm.TestCase):
+class TestSparseOperators(object):
def _op_tests(self, sparse_op, python_op):
def _check_case(xloc, xlen, yloc, ylen, eloc, elen):
diff --git a/pandas/tests/sparse/test_list.py b/pandas/tests/sparse/test_list.py
index 3eab34661ae2b..6c721ca813a21 100644
--- a/pandas/tests/sparse/test_list.py
+++ b/pandas/tests/sparse/test_list.py
@@ -7,7 +7,7 @@
import pandas.util.testing as tm
-class TestSparseList(tm.TestCase):
+class TestSparseList(object):
def setup_method(self, method):
self.na_data = np.array([nan, nan, 1, 2, 3, nan, 4, 5, nan, 6])
diff --git a/pandas/tests/sparse/test_pivot.py b/pandas/tests/sparse/test_pivot.py
index 57c47b4e68811..e7eba63e4e0b3 100644
--- a/pandas/tests/sparse/test_pivot.py
+++ b/pandas/tests/sparse/test_pivot.py
@@ -3,7 +3,7 @@
import pandas.util.testing as tm
-class TestPivotTable(tm.TestCase):
+class TestPivotTable(object):
def setup_method(self, method):
self.dense = pd.DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
diff --git a/pandas/tests/sparse/test_series.py b/pandas/tests/sparse/test_series.py
index b756b63523798..344bca54b180b 100644
--- a/pandas/tests/sparse/test_series.py
+++ b/pandas/tests/sparse/test_series.py
@@ -56,7 +56,7 @@ def _test_data2_zero():
return arr, index
-class TestSparseSeries(tm.TestCase, SharedWithSparse):
+class TestSparseSeries(SharedWithSparse):
def setup_method(self, method):
arr, index = _test_data1()
@@ -934,7 +934,7 @@ def test_combine_first(self):
tm.assert_sp_series_equal(result, expected)
-class TestSparseHandlingMultiIndexes(tm.TestCase):
+class TestSparseHandlingMultiIndexes(object):
def setup_method(self, method):
miindex = pd.MultiIndex.from_product(
@@ -960,7 +960,7 @@ def test_round_trip_preserve_multiindex_names(self):
check_names=True)
-class TestSparseSeriesScipyInteraction(tm.TestCase):
+class TestSparseSeriesScipyInteraction(object):
# Issue 8048: add SparseSeries coo methods
def setup_method(self, method):
@@ -1310,7 +1310,7 @@ def _dense_series_compare(s, f):
tm.assert_series_equal(result.to_dense(), dense_result)
-class TestSparseSeriesAnalytics(tm.TestCase):
+class TestSparseSeriesAnalytics(object):
def setup_method(self, method):
arr, index = _test_data1()
diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index dda95426d8011..093730fb2478b 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -23,7 +23,7 @@
from pandas.util.testing import assert_almost_equal
-class TestMatch(tm.TestCase):
+class TestMatch(object):
def test_ints(self):
values = np.array([0, 2, 1])
@@ -59,7 +59,7 @@ def test_strings(self):
tm.assert_series_equal(result, expected)
-class TestSafeSort(tm.TestCase):
+class TestSafeSort(object):
def test_basic_sort(self):
values = [3, 1, 2, 0, 4]
@@ -146,7 +146,7 @@ def test_exceptions(self):
algos.safe_sort(values=[0, 1, 2, 1], labels=[0, 1])
-class TestFactorize(tm.TestCase):
+class TestFactorize(object):
def test_basic(self):
@@ -306,7 +306,7 @@ def test_uint64_factorize(self):
tm.assert_numpy_array_equal(uniques, exp_uniques)
-class TestUnique(tm.TestCase):
+class TestUnique(object):
def test_ints(self):
arr = np.random.randint(0, 100, size=50)
@@ -503,7 +503,7 @@ def test_order_of_appearance(self):
tm.assert_categorical_equal(result, expected)
-class TestIsin(tm.TestCase):
+class TestIsin(object):
def test_invalid(self):
@@ -587,7 +587,7 @@ def test_large(self):
tm.assert_numpy_array_equal(result, expected)
-class TestValueCounts(tm.TestCase):
+class TestValueCounts(object):
def test_value_counts(self):
np.random.seed(1234)
@@ -779,7 +779,7 @@ def test_value_counts_uint64(self):
tm.assert_series_equal(result, expected)
-class TestDuplicated(tm.TestCase):
+class TestDuplicated(object):
def test_duplicated_with_nas(self):
keys = np.array([0, 1, np.nan, 0, 2, np.nan], dtype=object)
@@ -1014,7 +1014,7 @@ def test_group_var_constant(self):
tm.assert_almost_equal(out[0, 0], 0.0)
-class TestGroupVarFloat64(tm.TestCase, GroupVarTestMixin):
+class TestGroupVarFloat64(GroupVarTestMixin):
__test__ = True
algo = libgroupby.group_var_float64
@@ -1037,7 +1037,7 @@ def test_group_var_large_inputs(self):
tm.assert_almost_equal(out[0, 0], 1.0 / 12, check_less_precise=True)
-class TestGroupVarFloat32(tm.TestCase, GroupVarTestMixin):
+class TestGroupVarFloat32(GroupVarTestMixin):
__test__ = True
algo = libgroupby.group_var_float32
@@ -1045,7 +1045,7 @@ class TestGroupVarFloat32(tm.TestCase, GroupVarTestMixin):
rtol = 1e-2
-class TestHashTable(tm.TestCase):
+class TestHashTable(object):
def test_lookup_nan(self):
xs = np.array([2.718, 3.14, np.nan, -7, 5, 2, 3])
@@ -1116,7 +1116,7 @@ def test_unique_label_indices():
check_dtype=False)
-class TestRank(tm.TestCase):
+class TestRank(object):
def test_scipy_compat(self):
tm._skip_if_no_scipy()
@@ -1184,7 +1184,7 @@ def test_arrmap():
assert (result.dtype == np.bool_)
-class TestTseriesUtil(tm.TestCase):
+class TestTseriesUtil(object):
def test_combineFunc(self):
pass
@@ -1378,7 +1378,7 @@ def test_int64_add_overflow():
b_mask=np.array([False, True]))
-class TestMode(tm.TestCase):
+class TestMode(object):
def test_no_mode(self):
exp = Series([], dtype=np.float64)
diff --git a/pandas/tests/test_base.py b/pandas/tests/test_base.py
index dcc685ceef28e..85976b9fabd66 100644
--- a/pandas/tests/test_base.py
+++ b/pandas/tests/test_base.py
@@ -86,7 +86,7 @@ def check_result(self, result, expected, klass=None):
assert result == expected
-class TestPandasDelegate(tm.TestCase):
+class TestPandasDelegate(object):
class Delegator(object):
_properties = ['foo']
@@ -152,7 +152,7 @@ def test_memory_usage(self):
sys.getsizeof(delegate)
-class Ops(tm.TestCase):
+class Ops(object):
def _allow_na_ops(self, obj):
"""Whether to skip test cases including NaN"""
@@ -1008,7 +1008,7 @@ def test_numpy_transpose(self):
np.transpose, obj, axes=1)
-class TestNoNewAttributesMixin(tm.TestCase):
+class TestNoNewAttributesMixin(object):
def test_mixin(self):
class T(NoNewAttributesMixin):
diff --git a/pandas/tests/test_categorical.py b/pandas/tests/test_categorical.py
index 2a53cf15278e0..03adf17f50300 100644
--- a/pandas/tests/test_categorical.py
+++ b/pandas/tests/test_categorical.py
@@ -28,7 +28,7 @@
from pandas.core.config import option_context
-class TestCategorical(tm.TestCase):
+class TestCategorical(object):
def setup_method(self, method):
self.factor = Categorical(['a', 'b', 'b', 'a', 'a', 'c', 'c', 'c'],
@@ -1600,7 +1600,7 @@ def test_validate_inplace(self):
cat.sort_values(inplace=value)
-class TestCategoricalAsBlock(tm.TestCase):
+class TestCategoricalAsBlock(object):
def setup_method(self, method):
self.factor = Categorical(['a', 'b', 'b', 'a', 'a', 'c', 'c', 'c'])
@@ -4411,7 +4411,7 @@ def test_concat_categorical(self):
tm.assert_frame_equal(res, exp)
-class TestCategoricalSubclassing(tm.TestCase):
+class TestCategoricalSubclassing(object):
def test_constructor(self):
sc = tm.SubclassedCategorical(['a', 'b', 'c'])
diff --git a/pandas/tests/test_compat.py b/pandas/tests/test_compat.py
index 5c56142687b5c..ff9d09c033164 100644
--- a/pandas/tests/test_compat.py
+++ b/pandas/tests/test_compat.py
@@ -6,10 +6,9 @@
from pandas.compat import (range, zip, map, filter, lrange, lzip, lmap,
lfilter, builtins, iterkeys, itervalues, iteritems,
next)
-import pandas.util.testing as tm
-class TestBuiltinIterators(tm.TestCase):
+class TestBuiltinIterators(object):
@classmethod
def check_result(cls, actual, expected, lengths):
diff --git a/pandas/tests/test_config.py b/pandas/tests/test_config.py
index 79475b297f83c..f014b16976d39 100644
--- a/pandas/tests/test_config.py
+++ b/pandas/tests/test_config.py
@@ -1,13 +1,12 @@
# -*- coding: utf-8 -*-
import pytest
-import pandas.util.testing as tm
import pandas as pd
import warnings
-class TestConfig(tm.TestCase):
+class TestConfig(object):
def __init__(self, *args):
super(TestConfig, self).__init__(*args)
diff --git a/pandas/tests/test_expressions.py b/pandas/tests/test_expressions.py
index 79b057c0548a9..fae7bfa513dcd 100644
--- a/pandas/tests/test_expressions.py
+++ b/pandas/tests/test_expressions.py
@@ -56,7 +56,7 @@
@pytest.mark.skipif(not expr._USE_NUMEXPR, reason='not using numexpr')
-class TestExpressions(tm.TestCase):
+class TestExpressions(object):
def setup_method(self, method):
diff --git a/pandas/tests/test_internals.py b/pandas/tests/test_internals.py
index 0f2a3ce1d1e94..0900d21b250ed 100644
--- a/pandas/tests/test_internals.py
+++ b/pandas/tests/test_internals.py
@@ -192,7 +192,7 @@ def create_mgr(descr, item_shape=None):
[mgr_items] + [np.arange(n) for n in item_shape])
-class TestBlock(tm.TestCase):
+class TestBlock(object):
def setup_method(self, method):
# self.fblock = get_float_ex() # a,c,e
@@ -309,7 +309,7 @@ def test_split_block_at(self):
# assert len(bs), 0)
-class TestDatetimeBlock(tm.TestCase):
+class TestDatetimeBlock(object):
def test_try_coerce_arg(self):
block = create_block('datetime', [0])
@@ -1072,7 +1072,7 @@ def assert_reindex_indexer_is_ok(mgr, axis, new_labels, indexer,
# reindex_indexer(new_labels, indexer, axis)
-class TestBlockPlacement(tm.TestCase):
+class TestBlockPlacement(object):
def test_slice_len(self):
assert len(BlockPlacement(slice(0, 4))) == 4
diff --git a/pandas/tests/test_join.py b/pandas/tests/test_join.py
index e9e7ffba7fe54..3fc13d23b53f7 100644
--- a/pandas/tests/test_join.py
+++ b/pandas/tests/test_join.py
@@ -8,7 +8,7 @@
from pandas.util.testing import assert_almost_equal
-class TestIndexer(tm.TestCase):
+class TestIndexer(object):
def test_outer_join_indexer(self):
typemap = [('int32', _join.outer_join_indexer_int32),
diff --git a/pandas/tests/test_lib.py b/pandas/tests/test_lib.py
index 0ac05bae624e5..df97095035952 100644
--- a/pandas/tests/test_lib.py
+++ b/pandas/tests/test_lib.py
@@ -8,7 +8,7 @@
import pandas.util.testing as tm
-class TestMisc(tm.TestCase):
+class TestMisc(object):
def test_max_len_string_array(self):
@@ -41,7 +41,7 @@ def test_fast_unique_multiple_list_gen_sort(self):
tm.assert_numpy_array_equal(np.array(out), expected)
-class TestIndexing(tm.TestCase):
+class TestIndexing(object):
def test_maybe_indices_to_slice_left_edge(self):
target = np.arange(100)
@@ -201,7 +201,7 @@ def test_get_reverse_indexer(self):
assert np.array_equal(result, expected)
-class TestNullObj(tm.TestCase):
+class TestNullObj(object):
_1d_methods = ['isnullobj', 'isnullobj_old']
_2d_methods = ['isnullobj2d', 'isnullobj2d_old']
diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py
index bfab10b7e63e7..ab28b8b43f359 100644
--- a/pandas/tests/test_multilevel.py
+++ b/pandas/tests/test_multilevel.py
@@ -56,7 +56,7 @@ def setup_method(self, method):
self.ymd.index.set_names(['year', 'month', 'day'], inplace=True)
-class TestMultiLevel(Base, tm.TestCase):
+class TestMultiLevel(Base):
def test_append(self):
a, b = self.frame[:5], self.frame[5:]
@@ -2352,7 +2352,7 @@ def test_iloc_mi(self):
tm.assert_frame_equal(result, expected)
-class TestSorted(Base, tm.TestCase):
+class TestSorted(Base):
""" everthing you wanted to test about sorting """
def test_sort_index_preserve_levels(self):
diff --git a/pandas/tests/test_nanops.py b/pandas/tests/test_nanops.py
index c5ecd75290fc6..6798e64b01d7e 100644
--- a/pandas/tests/test_nanops.py
+++ b/pandas/tests/test_nanops.py
@@ -16,7 +16,7 @@
use_bn = nanops._USE_BOTTLENECK
-class TestnanopsDataFrame(tm.TestCase):
+class TestnanopsDataFrame(object):
def setup_method(self, method):
np.random.seed(11235)
@@ -742,7 +742,7 @@ def test__bn_ok_dtype(self):
assert not nanops._bn_ok_dtype(self.arr_obj.dtype, 'test')
-class TestEnsureNumeric(tm.TestCase):
+class TestEnsureNumeric(object):
def test_numeric_values(self):
# Test integer
@@ -782,7 +782,7 @@ def test_non_convertable_values(self):
pytest.raises(TypeError, lambda: nanops._ensure_numeric([]))
-class TestNanvarFixedValues(tm.TestCase):
+class TestNanvarFixedValues(object):
# xref GH10242
@@ -895,7 +895,7 @@ def prng(self):
return np.random.RandomState(1234)
-class TestNanskewFixedValues(tm.TestCase):
+class TestNanskewFixedValues(object):
# xref GH 11974
@@ -945,7 +945,7 @@ def prng(self):
return np.random.RandomState(1234)
-class TestNankurtFixedValues(tm.TestCase):
+class TestNankurtFixedValues(object):
# xref GH 11974
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
index 44e1db494c041..3243b69a25acd 100644
--- a/pandas/tests/test_panel.py
+++ b/pandas/tests/test_panel.py
@@ -901,7 +901,7 @@ def test_set_value(self):
self.panel.set_value('a')
-class TestPanel(tm.TestCase, PanelTests, CheckIndexing, SafeForLongAndSparse,
+class TestPanel(PanelTests, CheckIndexing, SafeForLongAndSparse,
SafeForSparse):
@classmethod
@@ -2430,7 +2430,7 @@ def test_all_any_unhandled(self):
pytest.raises(NotImplementedError, self.panel.any, bool_only=True)
-class TestLongPanel(tm.TestCase):
+class TestLongPanel(object):
"""
LongPanel no longer exists, but...
"""
diff --git a/pandas/tests/test_panel4d.py b/pandas/tests/test_panel4d.py
index 7d966422a7d79..96f02d63712fc 100644
--- a/pandas/tests/test_panel4d.py
+++ b/pandas/tests/test_panel4d.py
@@ -593,7 +593,7 @@ def test_set_value(self):
assert is_float_dtype(res3['l4'].values)
-class TestPanel4d(tm.TestCase, CheckIndexing, SafeForSparse,
+class TestPanel4d(CheckIndexing, SafeForSparse,
SafeForLongAndSparse):
def setup_method(self, method):
diff --git a/pandas/tests/test_panelnd.py b/pandas/tests/test_panelnd.py
index 7861b98b0ddd9..c473e3c09cc74 100644
--- a/pandas/tests/test_panelnd.py
+++ b/pandas/tests/test_panelnd.py
@@ -9,7 +9,7 @@
import pandas.util.testing as tm
-class TestPanelnd(tm.TestCase):
+class TestPanelnd(object):
def setup_method(self, method):
pass
diff --git a/pandas/tests/test_resample.py b/pandas/tests/test_resample.py
index c6719790c9e35..9734431c8b012 100644
--- a/pandas/tests/test_resample.py
+++ b/pandas/tests/test_resample.py
@@ -50,7 +50,7 @@ def _simple_pts(start, end, freq='D'):
return Series(np.random.randn(len(rng)), index=rng)
-class TestResampleAPI(tm.TestCase):
+class TestResampleAPI(object):
def setup_method(self, method):
dti = DatetimeIndex(start=datetime(2005, 1, 1),
@@ -847,7 +847,7 @@ def test_resample_loffset_arg_type(self):
assert_frame_equal(result_how, expected)
-class TestDatetimeIndex(Base, tm.TestCase):
+class TestDatetimeIndex(Base):
_index_factory = lambda x: date_range
def setup_method(self, method):
@@ -2165,7 +2165,7 @@ def test_resample_datetime_values(self):
tm.assert_series_equal(res, exp)
-class TestPeriodIndex(Base, tm.TestCase):
+class TestPeriodIndex(Base):
_index_factory = lambda x: period_range
def create_series(self):
@@ -2773,7 +2773,7 @@ def test_evenly_divisible_with_no_extra_bins(self):
assert_frame_equal(result, expected)
-class TestTimedeltaIndex(Base, tm.TestCase):
+class TestTimedeltaIndex(Base):
_index_factory = lambda x: timedelta_range
def create_series(self):
@@ -2794,7 +2794,7 @@ def test_asfreq_bug(self):
assert_frame_equal(result, expected)
-class TestResamplerGrouper(tm.TestCase):
+class TestResamplerGrouper(object):
def setup_method(self, method):
self.frame = DataFrame({'A': [1] * 20 + [2] * 12 + [3] * 8,
@@ -2989,7 +2989,7 @@ def test_median_duplicate_columns(self):
assert_frame_equal(result, expected)
-class TestTimeGrouper(tm.TestCase):
+class TestTimeGrouper(object):
def setup_method(self, method):
self.ts = Series(np.random.randn(1000),
diff --git a/pandas/tests/test_sorting.py b/pandas/tests/test_sorting.py
index c40cbcfdec883..e09270bcadf27 100644
--- a/pandas/tests/test_sorting.py
+++ b/pandas/tests/test_sorting.py
@@ -16,7 +16,7 @@
lexsort_indexer)
-class TestSorting(tm.TestCase):
+class TestSorting(object):
@pytest.mark.slow
def test_int64_overflow(self):
@@ -191,7 +191,7 @@ def test_nargsort(self):
tm.assert_numpy_array_equal(result, np.array(exp), check_dtype=False)
-class TestMerge(tm.TestCase):
+class TestMerge(object):
@pytest.mark.slow
def test_int64_overflow_issues(self):
diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index 412a88e13bb23..f28a5926087ac 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -19,7 +19,7 @@
import pandas.core.strings as strings
-class TestStringMethods(tm.TestCase):
+class TestStringMethods(object):
def test_api(self):
diff --git a/pandas/tests/test_take.py b/pandas/tests/test_take.py
index 617d268be8f67..7b97b0e975df3 100644
--- a/pandas/tests/test_take.py
+++ b/pandas/tests/test_take.py
@@ -9,7 +9,7 @@
from pandas._libs.tslib import iNaT
-class TestTake(tm.TestCase):
+class TestTake(object):
# standard incompatible fill error
fill_error = re.compile("Incompatible type for fill_value")
diff --git a/pandas/tests/test_testing.py b/pandas/tests/test_testing.py
index 2e84638533820..fe7c3b99987f5 100644
--- a/pandas/tests/test_testing.py
+++ b/pandas/tests/test_testing.py
@@ -12,7 +12,7 @@
from pandas.compat import is_platform_windows
-class TestAssertAlmostEqual(tm.TestCase):
+class TestAssertAlmostEqual(object):
def _assert_almost_equal_both(self, a, b, **kwargs):
assert_almost_equal(a, b, **kwargs)
@@ -139,7 +139,7 @@ def test_assert_almost_equal_object(self):
self._assert_almost_equal_both(a, b)
-class TestUtilTesting(tm.TestCase):
+class TestUtilTesting(object):
def test_raise_with_traceback(self):
with tm.assert_raises_regex(LookupError, "error_text"):
@@ -157,7 +157,7 @@ def test_raise_with_traceback(self):
raise_with_traceback(e, traceback)
-class TestAssertNumpyArrayEqual(tm.TestCase):
+class TestAssertNumpyArrayEqual(object):
def test_numpy_array_equal_message(self):
@@ -339,7 +339,7 @@ def test_assert_almost_equal_iterable_message(self):
assert_almost_equal([1, 2], [1, 3])
-class TestAssertIndexEqual(tm.TestCase):
+class TestAssertIndexEqual(object):
def test_index_equal_message(self):
@@ -486,7 +486,7 @@ def test_index_equal_metadata_message(self):
assert_index_equal(idx1, idx2)
-class TestAssertSeriesEqual(tm.TestCase):
+class TestAssertSeriesEqual(object):
def _assert_equal(self, x, y, **kwargs):
assert_series_equal(x, y, **kwargs)
@@ -580,7 +580,7 @@ def test_series_equal_message(self):
check_less_precise=True)
-class TestAssertFrameEqual(tm.TestCase):
+class TestAssertFrameEqual(object):
def _assert_equal(self, x, y, **kwargs):
assert_frame_equal(x, y, **kwargs)
@@ -679,7 +679,7 @@ def test_frame_equal_message(self):
by_blocks=True)
-class TestAssertCategoricalEqual(tm.TestCase):
+class TestAssertCategoricalEqual(object):
def test_categorical_equal_message(self):
@@ -717,7 +717,7 @@ def test_categorical_equal_message(self):
tm.assert_categorical_equal(a, b)
-class TestRNGContext(tm.TestCase):
+class TestRNGContext(object):
def test_RNGContext(self):
expected0 = 1.764052345967664
@@ -729,7 +729,7 @@ def test_RNGContext(self):
assert np.random.randn() == expected0
-class TestLocale(tm.TestCase):
+class TestLocale(object):
def test_locale(self):
if sys.platform == 'win32':
diff --git a/pandas/tests/test_util.py b/pandas/tests/test_util.py
index e9e04f76704f2..2d9ab78ceeb8a 100644
--- a/pandas/tests/test_util.py
+++ b/pandas/tests/test_util.py
@@ -20,7 +20,7 @@
LOCALE_OVERRIDE = os.environ.get('LOCALE_OVERRIDE', None)
-class TestDecorators(tm.TestCase):
+class TestDecorators(object):
def setup_method(self, method):
@deprecate_kwarg('old', 'new')
@@ -89,7 +89,7 @@ def test_rands_array():
assert(len(arr[1, 1]) == 7)
-class TestValidateArgs(tm.TestCase):
+class TestValidateArgs(object):
fname = 'func'
def test_bad_min_fname_arg_count(self):
@@ -159,7 +159,7 @@ def test_validation(self):
validate_args(self.fname, (1, None), 2, compat_args)
-class TestValidateKwargs(tm.TestCase):
+class TestValidateKwargs(object):
fname = 'func'
def test_bad_kwarg(self):
@@ -225,7 +225,7 @@ def test_validate_bool_kwarg(self):
assert validate_bool_kwarg(value, name) == value
-class TestValidateKwargsAndArgs(tm.TestCase):
+class TestValidateKwargsAndArgs(object):
fname = 'func'
def test_invalid_total_length_max_length_one(self):
@@ -322,7 +322,7 @@ def test_validation(self):
compat_args)
-class TestMove(tm.TestCase):
+class TestMove(object):
def test_cannot_create_instance_of_stolenbuffer(self):
"""Stolen buffers need to be created through the smart constructor
@@ -407,11 +407,10 @@ def test_numpy_errstate_is_default():
assert np.geterr() == expected
-class TestLocaleUtils(tm.TestCase):
+class TestLocaleUtils(object):
@classmethod
def setup_class(cls):
- super(TestLocaleUtils, cls).setup_class()
cls.locales = tm.get_locales()
if not cls.locales:
@@ -421,7 +420,6 @@ def setup_class(cls):
@classmethod
def teardown_class(cls):
- super(TestLocaleUtils, cls).teardown_class()
del cls.locales
def test_get_locales(self):
diff --git a/pandas/tests/test_window.py b/pandas/tests/test_window.py
index 5436f3c342019..634cd5fe2586b 100644
--- a/pandas/tests/test_window.py
+++ b/pandas/tests/test_window.py
@@ -30,7 +30,7 @@ def assert_equal(left, right):
tm.assert_frame_equal(left, right)
-class Base(tm.TestCase):
+class Base(object):
_nan_locs = np.arange(20, 40)
_inf_locs = np.array([])
@@ -562,8 +562,8 @@ def test_deprecations(self):
# gh-12373 : rolling functions error on float32 data
# make sure rolling functions works for different dtypes
#
-# NOTE that these are yielded tests and so _create_data is
-# explicity called, nor do these inherit from tm.TestCase
+# NOTE that these are yielded tests and so _create_data
+# is explicitly called.
#
# further note that we are only checking rolling for fully dtype
# compliance (though both expanding and ewm inherit)
@@ -3037,7 +3037,7 @@ def test_rolling_min_max_numeric_types(self):
assert result.dtypes[0] == np.dtype("f8")
-class TestGrouperGrouping(tm.TestCase):
+class TestGrouperGrouping(object):
def setup_method(self, method):
self.series = Series(np.arange(10))
@@ -3182,7 +3182,7 @@ def test_expanding_apply(self):
tm.assert_frame_equal(result, expected)
-class TestRollingTS(tm.TestCase):
+class TestRollingTS(object):
# rolling time-series friendly
# xref GH13327
diff --git a/pandas/tests/tools/test_numeric.py b/pandas/tests/tools/test_numeric.py
index b298df4f4b5d8..f82ad97d7b70f 100644
--- a/pandas/tests/tools/test_numeric.py
+++ b/pandas/tests/tools/test_numeric.py
@@ -9,7 +9,7 @@
from numpy import iinfo
-class TestToNumeric(tm.TestCase):
+class TestToNumeric(object):
def test_series(self):
s = pd.Series(['1', '-3.14', '7'])
diff --git a/pandas/tests/tseries/test_frequencies.py b/pandas/tests/tseries/test_frequencies.py
index a78150e9cf728..2edca1bd4676b 100644
--- a/pandas/tests/tseries/test_frequencies.py
+++ b/pandas/tests/tseries/test_frequencies.py
@@ -19,7 +19,7 @@
from pandas import Timedelta
-class TestToOffset(tm.TestCase):
+class TestToOffset(object):
def test_to_offset_multiple(self):
freqstr = '2h30min'
@@ -342,7 +342,7 @@ def _assert_depr(freq, expected, aliases):
assert (frequencies._period_str_to_code('NS') == 12000)
-class TestFrequencyCode(tm.TestCase):
+class TestFrequencyCode(object):
def test_freq_code(self):
assert frequencies.get_freq('A') == 1000
@@ -493,7 +493,7 @@ def test_get_freq_code(self):
_dti = DatetimeIndex
-class TestFrequencyInference(tm.TestCase):
+class TestFrequencyInference(object):
def test_raise_if_period_index(self):
index = PeriodIndex(start="1/1/1990", periods=20, freq="M")
diff --git a/pandas/tests/tseries/test_holiday.py b/pandas/tests/tseries/test_holiday.py
index 8ea4140bb85a7..59a2a225ab5f8 100644
--- a/pandas/tests/tseries/test_holiday.py
+++ b/pandas/tests/tseries/test_holiday.py
@@ -19,7 +19,7 @@
from pytz import utc
-class TestCalendar(tm.TestCase):
+class TestCalendar(object):
def setup_method(self, method):
self.holiday_list = [
@@ -85,7 +85,7 @@ def test_rule_from_name(self):
assert USFedCal.rule_from_name('Thanksgiving') == USThanksgivingDay
-class TestHoliday(tm.TestCase):
+class TestHoliday(object):
def setup_method(self, method):
self.start_date = datetime(2011, 1, 1)
@@ -284,7 +284,7 @@ def test_factory(self):
assert len(class_3.rules) == 2
-class TestObservanceRules(tm.TestCase):
+class TestObservanceRules(object):
def setup_method(self, method):
self.we = datetime(2014, 4, 9)
@@ -342,7 +342,7 @@ def test_after_nearest_workday(self):
assert after_nearest_workday(self.fr) == self.mo
-class TestFederalHolidayCalendar(tm.TestCase):
+class TestFederalHolidayCalendar(object):
def test_no_mlk_before_1984(self):
# see gh-10278
@@ -375,7 +375,7 @@ class MemorialDay(AbstractHolidayCalendar):
datetime(1979, 5, 28, 0, 0)]
-class TestHolidayConflictingArguments(tm.TestCase):
+class TestHolidayConflictingArguments(object):
def test_both_offset_observance_raises(self):
# see gh-10217
diff --git a/pandas/tests/tseries/test_offsets.py b/pandas/tests/tseries/test_offsets.py
index b6cd5e7958342..09de064c15183 100644
--- a/pandas/tests/tseries/test_offsets.py
+++ b/pandas/tests/tseries/test_offsets.py
@@ -97,7 +97,7 @@ def test_to_m8():
#####
-class Base(tm.TestCase):
+class Base(object):
_offset = None
_offset_types = [getattr(offsets, o) for o in offsets.__all__]
@@ -4334,7 +4334,7 @@ def test_Easter():
assertEq(-Easter(2), datetime(2010, 4, 4), datetime(2008, 3, 23))
-class TestTicks(tm.TestCase):
+class TestTicks(object):
ticks = [Hour, Minute, Second, Milli, Micro, Nano]
@@ -4491,7 +4491,7 @@ def test_compare_ticks(self):
assert kls(3) != kls(4)
-class TestOffsetNames(tm.TestCase):
+class TestOffsetNames(object):
def test_get_offset_name(self):
assert BDay().freqstr == 'B'
@@ -4547,7 +4547,7 @@ def test_get_offset_legacy():
get_offset(name)
-class TestParseTimeString(tm.TestCase):
+class TestParseTimeString(object):
def test_parse_time_string(self):
(date, parsed, reso) = parse_time_string('4Q1984')
@@ -4610,7 +4610,7 @@ def test_quarterly_dont_normalize():
assert (result.time() == date.time())
-class TestOffsetAliases(tm.TestCase):
+class TestOffsetAliases(object):
def setup_method(self, method):
_offset_map.clear()
@@ -4691,7 +4691,7 @@ def get_all_subclasses(cls):
return ret
-class TestCaching(tm.TestCase):
+class TestCaching(object):
# as of GH 6479 (in 0.14.0), offset caching is turned off
# as of v0.12.0 only BusinessMonth/Quarter were actually caching
@@ -4746,7 +4746,7 @@ def test_week_of_month_index_creation(self):
assert inst2 not in _daterange_cache
-class TestReprNames(tm.TestCase):
+class TestReprNames(object):
def test_str_for_named_is_name(self):
# look at all the amazing combinations!
@@ -4771,7 +4771,7 @@ def get_utc_offset_hours(ts):
return (o.days * 24 * 3600 + o.seconds) / 3600.0
-class TestDST(tm.TestCase):
+class TestDST(object):
"""
test DateOffset additions over Daylight Savings Time
"""
diff --git a/pandas/tests/tseries/test_timezones.py b/pandas/tests/tseries/test_timezones.py
index 74220aa5cd183..97c54922d36e9 100644
--- a/pandas/tests/tseries/test_timezones.py
+++ b/pandas/tests/tseries/test_timezones.py
@@ -50,7 +50,7 @@ def dst(self, dt):
fixed_off_no_name = FixedOffset(-330, None)
-class TestTimeZoneSupportPytz(tm.TestCase):
+class TestTimeZoneSupportPytz(object):
def setup_method(self, method):
tm._skip_if_no_pytz()
@@ -1178,7 +1178,7 @@ def test_tz_convert_tzlocal(self):
tm.assert_numpy_array_equal(dti2.asi8, dti.asi8)
-class TestTimeZoneCacheKey(tm.TestCase):
+class TestTimeZoneCacheKey(object):
def test_cache_keys_are_distinct_for_pytz_vs_dateutil(self):
tzs = pytz.common_timezones
@@ -1194,7 +1194,7 @@ def test_cache_keys_are_distinct_for_pytz_vs_dateutil(self):
assert tslib._p_tz_cache_key(tz_p) != tslib._p_tz_cache_key(tz_d)
-class TestTimeZones(tm.TestCase):
+class TestTimeZones(object):
timezones = ['UTC', 'Asia/Tokyo', 'US/Eastern', 'dateutil/US/Pacific']
def setup_method(self, method):
@@ -1719,7 +1719,7 @@ def test_nat(self):
tm.assert_index_equal(idx, DatetimeIndex(expected, tz='US/Eastern'))
-class TestTslib(tm.TestCase):
+class TestTslib(object):
def test_tslib_tz_convert(self):
def compare_utc_to_local(tz_didx, utc_didx):
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index 354e11ce0133a..0d70d51032b3d 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -85,20 +85,6 @@ def reset_testing_mode():
set_testing_mode()
-class TestCase(object):
- """
- Base class for all test case classes.
- """
-
- @classmethod
- def setup_class(cls):
- pd.set_option('chained_assignment', 'raise')
-
- @classmethod
- def teardown_class(cls):
- pass
-
-
def reset_display_options():
"""
Reset the display options for printing and representing objects.
| Title is self-explanatory.
xref #15990
xref <a href="https://github.com/pandas-dev/pandas/pull/16201#pullrequestreview-36158775">#16201 (comment)</a>
| https://api.github.com/repos/pandas-dev/pandas/pulls/16225 | 2017-05-04T04:14:23Z | 2017-05-04T11:01:21Z | 2017-05-04T11:01:21Z | 2017-05-04T13:34:40Z |
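The diff above removes the `tm.TestCase` base class in favor of plain `object`, relying on pytest's naming conventions to run class- and method-level fixtures. A minimal illustrative sketch of the resulting test style (not code from the repo; the class and data are hypothetical):

```python
class TestExample(object):
    @classmethod
    def setup_class(cls):
        # pytest invokes setup_class once per class by naming convention,
        # so no base-class super() call is required anymore.
        cls.data = [1, 2, 3]

    def setup_method(self, method):
        # invoked before every test method
        self.copy = list(self.data)

    def test_sum(self):
        assert sum(self.copy) == 6
```

This is why the PR can delete `super(TestLocaleUtils, cls).setup_class()` and the `TestCase` shim in `pandas/util/testing.py`: pytest discovers these hooks by name, not by inheritance.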
CLN: make submodules of pandas.util private | diff --git a/asv_bench/benchmarks/algorithms.py b/asv_bench/benchmarks/algorithms.py
index d79051ed2d66c..40cfec1bcd4c7 100644
--- a/asv_bench/benchmarks/algorithms.py
+++ b/asv_bench/benchmarks/algorithms.py
@@ -5,7 +5,7 @@
import pandas as pd
from pandas.util import testing as tm
-for imp in ['pandas.util.hashing', 'pandas.tools.hashing']:
+for imp in ['pandas.util', 'pandas.tools.hashing']:
try:
hashing = import_module(imp)
break
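The benchmark loop above tries several historical module locations and keeps the first one that imports, so the same benchmark file runs against multiple pandas versions. A self-contained sketch of that pattern, using only the stdlib (the helper name and the fallback module list are illustrative, not from the repo):

```python
from importlib import import_module

def first_importable(*names):
    """Return the first module in `names` that imports successfully,
    mirroring the try/break loop used by the asv benchmarks."""
    for name in names:
        try:
            return import_module(name)
        except ImportError:
            continue
    raise ImportError("none of %r could be imported" % (names,))

# e.g. prefer a new location, fall back to an older one:
mod = first_importable("this_module_does_not_exist_xyz", "json")
```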
diff --git a/asv_bench/benchmarks/attrs_caching.py b/asv_bench/benchmarks/attrs_caching.py
index 9210f1f2878d4..b7610037bed4d 100644
--- a/asv_bench/benchmarks/attrs_caching.py
+++ b/asv_bench/benchmarks/attrs_caching.py
@@ -1,5 +1,9 @@
from .pandas_vb_common import *
-from pandas.util.decorators import cache_readonly
+
+try:
+ from pandas.util import cache_readonly
+except ImportError:
+ from pandas.util.decorators import cache_readonly
class DataFrameAttributes(object):
diff --git a/doc/source/merging.rst b/doc/source/merging.rst
index fb020727d077e..170dde87c8363 100644
--- a/doc/source/merging.rst
+++ b/doc/source/merging.rst
@@ -13,7 +13,7 @@
import matplotlib.pyplot as plt
plt.close('all')
- import pandas.util.doctools as doctools
+ import pandas.util._doctools as doctools
p = doctools.TablePlotter()
diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
index 230c7c0b90ac0..bfd8031b4c305 100644
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -1230,19 +1230,19 @@ If indicated, a deprecation warning will be issued if you reference theses modul
"pandas.algos", "pandas._libs.algos", ""
"pandas.hashtable", "pandas._libs.hashtable", ""
"pandas.indexes", "pandas.core.indexes", ""
- "pandas.json", "pandas.io.json.libjson", "X"
- "pandas.parser", "pandas.io.libparsers", "X"
+ "pandas.json", "pandas._libs.json", "X"
+ "pandas.parser", "pandas._libs.parsers", "X"
"pandas.formats", "pandas.io.formats", ""
"pandas.sparse", "pandas.core.sparse", ""
"pandas.tools", "pandas.core.reshape", ""
"pandas.types", "pandas.core.dtypes", ""
- "pandas.io.sas.saslib", "pandas.io.sas.libsas", ""
+ "pandas.io.sas.saslib", "pandas.io.sas._sas", ""
"pandas._join", "pandas._libs.join", ""
- "pandas._hash", "pandas.util.libhashing", ""
+ "pandas._hash", "pandas._libs.hashing", ""
"pandas._period", "pandas._libs.period", ""
- "pandas._sparse", "pandas.core.sparse.libsparse", ""
- "pandas._testing", "pandas.util.libtesting", ""
- "pandas._window", "pandas.core.libwindow", ""
+ "pandas._sparse", "pandas._libs.sparse", ""
+ "pandas._testing", "pandas._libs.testing", ""
+ "pandas._window", "pandas._libs.window", ""
Some new subpackages are created with public functionality that is not directly
@@ -1254,6 +1254,8 @@ these are now the public subpackages.
- The function :func:`~pandas.api.types.union_categoricals` is now importable from ``pandas.api.types``, formerly from ``pandas.types.concat`` (:issue:`15998`)
- The type import ``pandas.tslib.NaTType`` is deprecated and can be replaced by using ``type(pandas.NaT)`` (:issue:`16146`)
+- The public functions in ``pandas.tools.hashing`` are deprecated from that location, but are now importable from ``pandas.util`` (:issue:`16223`)
+- The modules in ``pandas.util``: ``decorators``, ``print_versions``, ``doctools``, ``validators``, ``depr_module`` are now private (:issue:`16223`)
.. _whatsnew_0200.privacy.errors:
@@ -1278,7 +1280,7 @@ The following are now part of this API:
'UnsupportedFunctionCall']
-.. _whatsnew_0200.privay.testing:
+.. _whatsnew_0200.privacy.testing:
``pandas.testing``
^^^^^^^^^^^^^^^^^^
@@ -1292,14 +1294,13 @@ The following testing functions are now part of this API:
- :func:`testing.assert_index_equal`
-.. _whatsnew_0200.privay.plotting:
+.. _whatsnew_0200.privacy.plotting:
``pandas.plotting``
^^^^^^^^^^^^^^^^^^^
A new public ``pandas.plotting`` module has been added that holds plotting functionality that was previously in either ``pandas.tools.plotting`` or in the top-level namespace. See the :ref:`deprecations sections <whatsnew_0200.privacy.deprecate_plotting>` for more details.
-
.. _whatsnew_0200.privacy.development:
Other Development Changes
diff --git a/pandas/__init__.py b/pandas/__init__.py
index 20c7e0d9d5993..48ac9d173559d 100644
--- a/pandas/__init__.py
+++ b/pandas/__init__.py
@@ -50,17 +50,17 @@
import pandas.tools.plotting
plot_params = pandas.plotting._style._Options(deprecated=True)
# do not import deprecate to top namespace
-scatter_matrix = pandas.util.decorators.deprecate(
+scatter_matrix = pandas.util._decorators.deprecate(
'pandas.scatter_matrix', pandas.plotting.scatter_matrix,
'pandas.plotting.scatter_matrix')
-from pandas.util.print_versions import show_versions
+from pandas.util._print_versions import show_versions
from pandas.io.api import *
from pandas.util._tester import test
import pandas.testing
# extension module deprecations
-from pandas.util.depr_module import _DeprecatedModule
+from pandas.util._depr_module import _DeprecatedModule
json = _DeprecatedModule(deprmod='pandas.json',
moved={'dumps': 'pandas.io.json.dumps',
diff --git a/pandas/util/hashing.pyx b/pandas/_libs/hashing.pyx
similarity index 100%
rename from pandas/util/hashing.pyx
rename to pandas/_libs/hashing.pyx
diff --git a/pandas/io/parsers.pyx b/pandas/_libs/parsers.pyx
similarity index 100%
rename from pandas/io/parsers.pyx
rename to pandas/_libs/parsers.pyx
diff --git a/pandas/core/sparse/sparse.pyx b/pandas/_libs/sparse.pyx
similarity index 100%
rename from pandas/core/sparse/sparse.pyx
rename to pandas/_libs/sparse.pyx
diff --git a/pandas/core/sparse/sparse_op_helper.pxi.in b/pandas/_libs/sparse_op_helper.pxi.in
similarity index 100%
rename from pandas/core/sparse/sparse_op_helper.pxi.in
rename to pandas/_libs/sparse_op_helper.pxi.in
diff --git a/pandas/_libs/src/ujson/python/ujson.c b/pandas/_libs/src/ujson/python/ujson.c
index ec6720f16bc77..a0c2146c30eed 100644
--- a/pandas/_libs/src/ujson/python/ujson.c
+++ b/pandas/_libs/src/ujson/python/ujson.c
@@ -90,14 +90,14 @@ static struct PyModuleDef moduledef = {
NULL /* m_free */
};
-#define PYMODINITFUNC PyMODINIT_FUNC PyInit_libjson(void)
+#define PYMODINITFUNC PyMODINIT_FUNC PyInit_json(void)
#define PYMODULE_CREATE() PyModule_Create(&moduledef)
#define MODINITERROR return NULL
#else
-#define PYMODINITFUNC PyMODINIT_FUNC initlibjson(void)
-#define PYMODULE_CREATE() Py_InitModule("libjson", ujsonMethods)
+#define PYMODINITFUNC PyMODINIT_FUNC initjson(void)
+#define PYMODULE_CREATE() Py_InitModule("json", ujsonMethods)
#define MODINITERROR return
#endif
diff --git a/pandas/util/testing.pyx b/pandas/_libs/testing.pyx
similarity index 100%
rename from pandas/util/testing.pyx
rename to pandas/_libs/testing.pyx
diff --git a/pandas/core/window.pyx b/pandas/_libs/window.pyx
similarity index 100%
rename from pandas/core/window.pyx
rename to pandas/_libs/window.pyx
diff --git a/pandas/compat/numpy/function.py b/pandas/compat/numpy/function.py
index d707ac66c4eab..a324bf94171ce 100644
--- a/pandas/compat/numpy/function.py
+++ b/pandas/compat/numpy/function.py
@@ -19,8 +19,8 @@
"""
from numpy import ndarray
-from pandas.util.validators import (validate_args, validate_kwargs,
- validate_args_and_kwargs)
+from pandas.util._validators import (validate_args, validate_kwargs,
+ validate_args_and_kwargs)
from pandas.errors import UnsupportedFunctionCall
from pandas.core.dtypes.common import is_integer, is_bool
from pandas.compat import OrderedDict
diff --git a/pandas/compat/pickle_compat.py b/pandas/compat/pickle_compat.py
index 6df365a1cd898..b875bbb0d63c0 100644
--- a/pandas/compat/pickle_compat.py
+++ b/pandas/compat/pickle_compat.py
@@ -71,7 +71,7 @@ def load_reduce(self):
# 12588, extensions moving
('pandas._sparse', 'BlockIndex'):
- ('pandas.core.sparse.libsparse', 'BlockIndex'),
+ ('pandas._libs.sparse', 'BlockIndex'),
('pandas.tslib', 'Timestamp'):
('pandas._libs.tslib', 'Timestamp'),
('pandas.tslib', '__nat_unpickle'):
diff --git a/pandas/core/api.py b/pandas/core/api.py
index 3e84720c32a1c..265fb4004d997 100644
--- a/pandas/core/api.py
+++ b/pandas/core/api.py
@@ -35,7 +35,7 @@
from pandas.core.resample import TimeGrouper
# see gh-14094.
-from pandas.util.depr_module import _DeprecatedModule
+from pandas.util._depr_module import _DeprecatedModule
_removals = ['day', 'bday', 'businessDay', 'cday', 'customBusinessDay',
'customBusinessMonthEnd', 'customBusinessMonthBegin',
diff --git a/pandas/core/base.py b/pandas/core/base.py
index fd0846b0ad33c..a3ef24c80f883 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -9,14 +9,14 @@
from pandas.core.dtypes.missing import isnull
from pandas.core.dtypes.generic import ABCDataFrame, ABCSeries, ABCIndexClass
from pandas.core.dtypes.common import is_object_dtype, is_list_like, is_scalar
-from pandas.util.validators import validate_bool_kwarg
+from pandas.util._validators import validate_bool_kwarg
from pandas.core import common as com
import pandas.core.nanops as nanops
import pandas._libs.lib as lib
from pandas.compat.numpy import function as nv
-from pandas.util.decorators import (Appender, cache_readonly,
- deprecate_kwarg, Substitution)
+from pandas.util._decorators import (Appender, cache_readonly,
+ deprecate_kwarg, Substitution)
from pandas.core.common import AbstractMethodError
_shared_docs = dict()
diff --git a/pandas/core/categorical.py b/pandas/core/categorical.py
index a3667e9322959..7eb86232cbb07 100644
--- a/pandas/core/categorical.py
+++ b/pandas/core/categorical.py
@@ -34,11 +34,11 @@
import pandas.core.common as com
from pandas.core.missing import interpolate_2d
from pandas.compat.numpy import function as nv
-from pandas.util.decorators import (Appender, cache_readonly,
- deprecate_kwarg, Substitution)
+from pandas.util._decorators import (Appender, cache_readonly,
+ deprecate_kwarg, Substitution)
-from pandas.util.terminal import get_terminal_size
-from pandas.util.validators import validate_bool_kwarg
+from pandas.io.formats.terminal import get_terminal_size
+from pandas.util._validators import validate_bool_kwarg
from pandas.core.config import get_option
diff --git a/pandas/core/computation/eval.py b/pandas/core/computation/eval.py
index 15e13025a7c53..22e376306280a 100644
--- a/pandas/core/computation/eval.py
+++ b/pandas/core/computation/eval.py
@@ -11,7 +11,7 @@
from pandas.core.computation.scope import _ensure_scope
from pandas.compat import string_types
from pandas.core.computation.engines import _engines
-from pandas.util.validators import validate_bool_kwarg
+from pandas.util._validators import validate_bool_kwarg
def _check_engine(engine):
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 67966374fcf9a..e6ea58e7e05be 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -80,8 +80,8 @@
OrderedDict, raise_with_traceback)
from pandas import compat
from pandas.compat.numpy import function as nv
-from pandas.util.decorators import Appender, Substitution
-from pandas.util.validators import validate_bool_kwarg
+from pandas.util._decorators import Appender, Substitution
+from pandas.util._validators import validate_bool_kwarg
from pandas.core.indexes.period import PeriodIndex
from pandas.core.indexes.datetimes import DatetimeIndex
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 2bc64795b5f20..27a489293db8f 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -51,8 +51,8 @@
from pandas.compat import (map, zip, lzip, lrange, string_types,
isidentifier, set_function_name)
import pandas.core.nanops as nanops
-from pandas.util.decorators import Appender, Substitution, deprecate_kwarg
-from pandas.util.validators import validate_bool_kwarg
+from pandas.util._decorators import Appender, Substitution, deprecate_kwarg
+from pandas.util._validators import validate_bool_kwarg
from pandas.core import config
# goal is to be able to define the docs close to function, while still being
@@ -1382,7 +1382,7 @@ def to_clipboard(self, excel=None, sep=None, **kwargs):
- Windows: none
- OS X: none
"""
- from pandas.io import clipboard
+ from pandas.io.clipboard import clipboard
clipboard.to_clipboard(self, excel=excel, sep=sep, **kwargs)
def to_xarray(self):
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index 479d2f7d26eb6..91b55c414b507 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -54,10 +54,10 @@
from pandas.core.sorting import (get_group_index_sorter, get_group_index,
compress_group_index, get_flattened_iterator,
decons_obs_group_ids, get_indexer_dict)
-from pandas.util.decorators import (cache_readonly, Substitution,
- Appender, make_signature)
+from pandas.util._decorators import (cache_readonly, Substitution,
+ Appender, make_signature)
from pandas.io.formats.printing import pprint_thing
-from pandas.util.validators import validate_kwargs
+from pandas.util._validators import validate_kwargs
import pandas.core.algorithms as algorithms
import pandas.core.common as com
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 4345c74664bf5..82f3bf3b15462 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -41,8 +41,8 @@
from pandas.core.base import PandasObject, IndexOpsMixin
import pandas.core.base as base
-from pandas.util.decorators import (Appender, Substitution, cache_readonly,
- deprecate, deprecate_kwarg)
+from pandas.util._decorators import (Appender, Substitution, cache_readonly,
+ deprecate, deprecate_kwarg)
from pandas.core.indexes.frozen import FrozenList
import pandas.core.common as com
import pandas.core.dtypes.concat as _concat
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index 760db4ba20675..395513d7b9b81 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -16,7 +16,7 @@
from pandas.core.algorithms import take_1d
-from pandas.util.decorators import Appender, cache_readonly
+from pandas.util._decorators import Appender, cache_readonly
from pandas.core.config import get_option
from pandas.core.indexes.base import Index, _index_shared_docs
import pandas.core.base as base
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 387209ceb038f..cd8559bcca03c 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -28,7 +28,7 @@
from pandas._libs.period import Period
from pandas.core.indexes.base import Index, _index_shared_docs
-from pandas.util.decorators import Appender, cache_readonly
+from pandas.util._decorators import Appender, cache_readonly
import pandas.core.dtypes.concat as _concat
import pandas.tseries.frequencies as frequencies
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index b0264759f2f8d..ec678b1577d81 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -41,8 +41,8 @@
from pandas.core.tools.datetimes import (
parse_time_string, normalize_date, to_time)
from pandas.core.tools.timedeltas import to_timedelta
-from pandas.util.decorators import (Appender, cache_readonly,
- deprecate_kwarg, Substitution)
+from pandas.util._decorators import (Appender, cache_readonly,
+ deprecate_kwarg, Substitution)
import pandas.core.common as com
import pandas.tseries.offsets as offsets
import pandas.core.tools.datetimes as tools
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index ccd0d8bee4abc..039346cba56c8 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -28,7 +28,7 @@
from pandas.core.indexes.multi import MultiIndex
from pandas.compat.numpy import function as nv
from pandas.core import common as com
-from pandas.util.decorators import cache_readonly, Appender
+from pandas.util._decorators import cache_readonly, Appender
from pandas.core.config import get_option
import pandas.core.indexes.base as ibase
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index c760d2943b823..7ef037d8f3536 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -26,8 +26,8 @@
is_null_slice)
import pandas.core.base as base
-from pandas.util.decorators import (Appender, cache_readonly,
- deprecate, deprecate_kwarg)
+from pandas.util._decorators import (Appender, cache_readonly,
+ deprecate, deprecate_kwarg)
import pandas.core.common as com
import pandas.core.missing as missing
import pandas.core.algorithms as algos
@@ -718,7 +718,7 @@ def _inferred_type_levels(self):
@cache_readonly
def _hashed_values(self):
""" return a uint64 ndarray of my hashed values """
- from pandas.util.hashing import hash_tuples
+ from pandas.core.util.hashing import hash_tuples
return hash_tuples(self)
def _hashed_indexing_key(self, key):
@@ -740,7 +740,7 @@ def _hashed_indexing_key(self, key):
we need to stringify if we have mixed levels
"""
- from pandas.util.hashing import hash_tuples
+ from pandas.core.util.hashing import hash_tuples
if not isinstance(key, tuple):
return hash_tuples(key)
diff --git a/pandas/core/indexes/numeric.py b/pandas/core/indexes/numeric.py
index 21ba2a386d96a..bdae0ac7ac5e9 100644
--- a/pandas/core/indexes/numeric.py
+++ b/pandas/core/indexes/numeric.py
@@ -11,7 +11,7 @@
from pandas.core import algorithms
from pandas.core.indexes.base import (
Index, InvalidIndexError, _index_shared_docs)
-from pandas.util.decorators import Appender, cache_readonly
+from pandas.util._decorators import Appender, cache_readonly
import pandas.core.indexes.base as ibase
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index 378661a49e20d..15fd9b7dc2b6a 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -40,8 +40,8 @@
from pandas.core.indexes.base import _index_shared_docs, _ensure_index
from pandas import compat
-from pandas.util.decorators import (Appender, Substitution, cache_readonly,
- deprecate_kwarg)
+from pandas.util._decorators import (Appender, Substitution, cache_readonly,
+ deprecate_kwarg)
from pandas.compat import zip, u
import pandas.core.indexes.base as ibase
diff --git a/pandas/core/indexes/range.py b/pandas/core/indexes/range.py
index acd040693af2e..b7a8e0b54a128 100644
--- a/pandas/core/indexes/range.py
+++ b/pandas/core/indexes/range.py
@@ -13,7 +13,7 @@
from pandas.compat import lrange, range
from pandas.compat.numpy import function as nv
from pandas.core.indexes.base import Index, _index_shared_docs
-from pandas.util.decorators import Appender, cache_readonly
+from pandas.util._decorators import Appender, cache_readonly
import pandas.core.indexes.base as ibase
from pandas.core.indexes.numeric import Int64Index
diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index 1081787b2c0b0..ab94a5bffb4f9 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -27,7 +27,7 @@
from pandas.core.indexes.base import _index_shared_docs
import pandas.core.common as com
import pandas.core.dtypes.concat as _concat
-from pandas.util.decorators import Appender, Substitution, deprecate_kwarg
+from pandas.util._decorators import Appender, Substitution, deprecate_kwarg
from pandas.core.indexes.datetimelike import TimelikeOps, DatetimeIndexOpsMixin
from pandas.core.tools.timedeltas import (
to_timedelta, _coerce_scalar_to_timedelta_type)
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index 840206977cf30..15851a17274ca 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -64,8 +64,8 @@
from pandas._libs.lib import BlockPlacement
import pandas.core.computation.expressions as expressions
-from pandas.util.decorators import cache_readonly
-from pandas.util.validators import validate_bool_kwarg
+from pandas.util._decorators import cache_readonly
+from pandas.util._validators import validate_bool_kwarg
from pandas import compat, _np_version_under1p9
from pandas.compat import range, map, zip, u
diff --git a/pandas/core/ops.py b/pandas/core/ops.py
index 41a17a0957cbf..e7cfbdb0fc9c6 100644
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -15,7 +15,7 @@
tslib as libts, algos as libalgos, iNaT)
from pandas import compat
-from pandas.util.decorators import Appender
+from pandas.util._decorators import Appender
import pandas.core.computation.expressions as expressions
from pandas.compat import bind_method
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index 39d2ebdeec3ac..d1f5b4587059c 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -34,7 +34,7 @@
from pandas.core.ops import _op_descriptions
from pandas.core.series import Series
from pandas.core.reshape.util import cartesian_product
-from pandas.util.decorators import (deprecate, Appender)
+from pandas.util._decorators import (deprecate, Appender)
_shared_doc_kwargs = dict(
axes='items, major_axis, minor_axis',
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index cbb2f6a93c2fd..631b91c3aad11 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -25,7 +25,7 @@
from pandas._libs.lib import Timestamp
from pandas._libs.period import IncompatibleFrequency
-from pandas.util.decorators import Appender
+from pandas.util._decorators import Appender
from pandas.core.generic import _shared_docs
_shared_docs_kwargs = dict()
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index 1ca3786ecc174..c55f4b5bf935f 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -34,7 +34,7 @@
from pandas.core.dtypes.missing import na_value_for_dtype
from pandas.core.internals import (items_overlap_with_suffix,
concatenate_block_managers)
-from pandas.util.decorators import Appender, Substitution
+from pandas.util._decorators import Appender, Substitution
from pandas.core.sorting import is_int64_overflow_possible
import pandas.core.algorithms as algos
diff --git a/pandas/core/reshape/reshape.py b/pandas/core/reshape/reshape.py
index a3cf80d758b7b..779002b300cc7 100644
--- a/pandas/core/reshape/reshape.py
+++ b/pandas/core/reshape/reshape.py
@@ -20,7 +20,7 @@
from pandas.core.sparse.api import SparseDataFrame, SparseSeries
from pandas.core.sparse.array import SparseArray
-from pandas.core.sparse.libsparse import IntIndex
+from pandas._libs.sparse import IntIndex
from pandas.core.categorical import Categorical, _factorize_from_iterable
from pandas.core.sorting import (get_group_index, get_compressed_ids,
@@ -30,7 +30,7 @@
from pandas._libs import algos as _algos, reshape as _reshape
from pandas.core.frame import _shared_docs
-from pandas.util.decorators import Appender
+from pandas.util._decorators import Appender
from pandas.core.index import MultiIndex, _get_na_value
diff --git a/pandas/core/series.py b/pandas/core/series.py
index e5f1d91eedfec..6ec163bbaa73d 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -60,7 +60,7 @@
from pandas.core.indexes.timedeltas import TimedeltaIndex
from pandas.core.indexes.period import PeriodIndex
from pandas import compat
-from pandas.util.terminal import get_terminal_size
+from pandas.io.formats.terminal import get_terminal_size
from pandas.compat import zip, u, OrderedDict, StringIO
from pandas.compat.numpy import function as nv
@@ -70,8 +70,8 @@
import pandas.core.common as com
import pandas.core.nanops as nanops
import pandas.io.formats.format as fmt
-from pandas.util.decorators import Appender, deprecate_kwarg, Substitution
-from pandas.util.validators import validate_bool_kwarg
+from pandas.util._decorators import Appender, deprecate_kwarg, Substitution
+from pandas.util._validators import validate_bool_kwarg
from pandas._libs import index as libindex, tslib as libts, lib, iNaT
from pandas.core.config import get_option
diff --git a/pandas/core/sparse/array.py b/pandas/core/sparse/array.py
index ef3600266c037..8ac9d3916573e 100644
--- a/pandas/core/sparse/array.py
+++ b/pandas/core/sparse/array.py
@@ -29,13 +29,13 @@
astype_nansafe, find_common_type)
from pandas.core.dtypes.missing import isnull, notnull, na_value_for_dtype
-from pandas.core.sparse import libsparse as splib
-from pandas.core.sparse.libsparse import SparseIndex, BlockIndex, IntIndex
+import pandas._libs.sparse as splib
+from pandas._libs.sparse import SparseIndex, BlockIndex, IntIndex
from pandas._libs import index as libindex
import pandas.core.algorithms as algos
import pandas.core.ops as ops
import pandas.io.formats.printing as printing
-from pandas.util.decorators import Appender
+from pandas.util._decorators import Appender
from pandas.core.indexes.base import _index_shared_docs
diff --git a/pandas/core/sparse/frame.py b/pandas/core/sparse/frame.py
index 05c97fac4b53a..3c8f6e8c6257d 100644
--- a/pandas/core/sparse/frame.py
+++ b/pandas/core/sparse/frame.py
@@ -25,8 +25,8 @@
create_block_manager_from_arrays)
import pandas.core.generic as generic
from pandas.core.sparse.series import SparseSeries, SparseArray
-from pandas.core.sparse.libsparse import BlockIndex, get_blocks
-from pandas.util.decorators import Appender
+from pandas._libs.sparse import BlockIndex, get_blocks
+from pandas.util._decorators import Appender
import pandas.core.ops as ops
diff --git a/pandas/core/sparse/list.py b/pandas/core/sparse/list.py
index e69ad6d0ab7ad..e2a8c6a29cc23 100644
--- a/pandas/core/sparse/list.py
+++ b/pandas/core/sparse/list.py
@@ -5,8 +5,8 @@
from pandas.core.dtypes.common import is_scalar
from pandas.core.sparse.array import SparseArray
-from pandas.util.validators import validate_bool_kwarg
-from pandas.core.sparse import libsparse as splib
+from pandas.util._validators import validate_bool_kwarg
+import pandas._libs.sparse as splib
class SparseList(PandasObject):
diff --git a/pandas/core/sparse/series.py b/pandas/core/sparse/series.py
index a77bce8f06783..9dd061e26ba06 100644
--- a/pandas/core/sparse/series.py
+++ b/pandas/core/sparse/series.py
@@ -21,13 +21,13 @@
import pandas.core.common as com
import pandas.core.ops as ops
import pandas._libs.index as _index
-from pandas.util.decorators import Appender
+from pandas.util._decorators import Appender
from pandas.core.sparse.array import (
make_sparse, _sparse_array_op, SparseArray,
_make_index)
-from pandas.core.sparse.libsparse import BlockIndex, IntIndex
-import pandas.core.sparse.libsparse as splib
+from pandas._libs.sparse import BlockIndex, IntIndex
+import pandas._libs.sparse as splib
from pandas.core.sparse.scipy_sparse import (
_sparse_series_to_coo,
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index 5082ac7f80fbf..c57d7a9362490 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -17,7 +17,7 @@
from pandas.core.algorithms import take_1d
import pandas.compat as compat
from pandas.core.base import AccessorProperty, NoNewAttributesMixin
-from pandas.util.decorators import Appender
+from pandas.util._decorators import Appender
import re
import pandas._libs.lib as lib
import warnings
diff --git a/pandas/util/importing.py b/pandas/core/util/__init__.py
similarity index 100%
rename from pandas/util/importing.py
rename to pandas/core/util/__init__.py
diff --git a/pandas/util/hashing.py b/pandas/core/util/hashing.py
similarity index 94%
rename from pandas/util/hashing.py
rename to pandas/core/util/hashing.py
index 3046c62a03f48..6a5343e8a8e25 100644
--- a/pandas/util/hashing.py
+++ b/pandas/core/util/hashing.py
@@ -4,10 +4,10 @@
import itertools
import numpy as np
-from pandas import Series, factorize, Categorical, Index, MultiIndex
-from pandas.util import libhashing as _hash
+from pandas._libs import hashing
from pandas._libs.lib import is_bool_array
from pandas.core.dtypes.generic import (
+ ABCMultiIndex,
ABCIndexClass,
ABCSeries,
ABCDataFrame)
@@ -73,10 +73,11 @@ def hash_pandas_object(obj, index=True, encoding='utf8', hash_key=None,
Series of uint64, same length as the object
"""
+ from pandas import Series
if hash_key is None:
hash_key = _default_hash_key
- if isinstance(obj, MultiIndex):
+ if isinstance(obj, ABCMultiIndex):
return Series(hash_tuples(obj, encoding, hash_key),
dtype='uint64', copy=False)
@@ -143,7 +144,9 @@ def hash_tuples(vals, encoding='utf8', hash_key=None):
elif not is_list_like(vals):
raise TypeError("must be convertible to a list-of-tuples")
- if not isinstance(vals, MultiIndex):
+ from pandas import Categorical, MultiIndex
+
+ if not isinstance(vals, ABCMultiIndex):
vals = MultiIndex.from_tuples(vals)
# create a list-of-Categoricals
@@ -257,17 +260,18 @@ def hash_array(vals, encoding='utf8', hash_key=None, categorize=True):
# then hash and rename categories. We allow skipping the categorization
# when the values are known/likely to be unique.
if categorize:
+ from pandas import factorize, Categorical, Index
codes, categories = factorize(vals, sort=False)
cat = Categorical(codes, Index(categories),
ordered=False, fastpath=True)
return _hash_categorical(cat, encoding, hash_key)
try:
- vals = _hash.hash_object_array(vals, hash_key, encoding)
+ vals = hashing.hash_object_array(vals, hash_key, encoding)
except TypeError:
# we have mixed types
- vals = _hash.hash_object_array(vals.astype(str).astype(object),
- hash_key, encoding)
+ vals = hashing.hash_object_array(vals.astype(str).astype(object),
+ hash_key, encoding)
# Then, redistribute these 64-bit ints within the space of 64-bit ints
vals ^= vals >> 30
diff --git a/pandas/core/window.py b/pandas/core/window.py
index 6d8f12e982f12..df8e0c05009f4 100644
--- a/pandas/core/window.py
+++ b/pandas/core/window.py
@@ -33,12 +33,12 @@
from pandas.core.base import (PandasObject, SelectionMixin,
GroupByMixin)
import pandas.core.common as com
-import pandas.core.libwindow as _window
+import pandas._libs.window as _window
from pandas.tseries.offsets import DateOffset
from pandas import compat
from pandas.compat.numpy import function as nv
-from pandas.util.decorators import (Substitution, Appender,
- cache_readonly)
+from pandas.util._decorators import (Substitution, Appender,
+ cache_readonly)
from pandas.core.generic import _shared_docs
from textwrap import dedent
diff --git a/pandas/io/api.py b/pandas/io/api.py
index e312e7bc2f300..7f0d3c3631f63 100644
--- a/pandas/io/api.py
+++ b/pandas/io/api.py
@@ -5,7 +5,7 @@
# flake8: noqa
from pandas.io.parsers import read_csv, read_table, read_fwf
-from pandas.io.clipboard import read_clipboard
+from pandas.io.clipboard.clipboard import read_clipboard
from pandas.io.excel import ExcelFile, ExcelWriter, read_excel
from pandas.io.pytables import HDFStore, get_store, read_hdf
from pandas.io.json import read_json
diff --git a/pandas/util/clipboard/__init__.py b/pandas/io/clipboard/__init__.py
similarity index 100%
rename from pandas/util/clipboard/__init__.py
rename to pandas/io/clipboard/__init__.py
diff --git a/pandas/io/clipboard.py b/pandas/io/clipboard/clipboard.py
similarity index 97%
rename from pandas/io/clipboard.py
rename to pandas/io/clipboard/clipboard.py
index 3c7ac528d83fd..6252a02b0d63d 100644
--- a/pandas/io/clipboard.py
+++ b/pandas/io/clipboard/clipboard.py
@@ -26,7 +26,7 @@ def read_clipboard(sep='\s+', **kwargs): # pragma: no cover
raise NotImplementedError(
'reading from clipboard only supports utf-8 encoding')
- from pandas.util.clipboard import clipboard_get
+ from pandas.io.clipboard import clipboard_get
from pandas.io.parsers import read_table
text = clipboard_get()
@@ -92,7 +92,7 @@ def to_clipboard(obj, excel=None, sep=None, **kwargs): # pragma: no cover
if encoding is not None and encoding.lower().replace('-', '') != 'utf8':
raise ValueError('clipboard only supports utf-8 encoding')
- from pandas.util.clipboard import clipboard_set
+ from pandas.io.clipboard import clipboard_set
if excel is None:
excel = True
diff --git a/pandas/util/clipboard/clipboards.py b/pandas/io/clipboard/clipboards.py
similarity index 100%
rename from pandas/util/clipboard/clipboards.py
rename to pandas/io/clipboard/clipboards.py
diff --git a/pandas/util/clipboard/exceptions.py b/pandas/io/clipboard/exceptions.py
similarity index 100%
rename from pandas/util/clipboard/exceptions.py
rename to pandas/io/clipboard/exceptions.py
diff --git a/pandas/util/clipboard/windows.py b/pandas/io/clipboard/windows.py
similarity index 100%
rename from pandas/util/clipboard/windows.py
rename to pandas/io/clipboard/windows.py
diff --git a/pandas/io/excel.py b/pandas/io/excel.py
index fbb10ebdfc56d..9b0f49ccc45b1 100644
--- a/pandas/io/excel.py
+++ b/pandas/io/excel.py
@@ -20,7 +20,7 @@
from pandas.io.common import (_is_url, _urlopen, _validate_header_arg,
get_filepath_or_buffer, _NA_VALUES)
from pandas.core.indexes.period import Period
-from pandas.io.json import libjson
+import pandas._libs.json as json
from pandas.compat import (map, zip, reduce, range, lrange, u, add_metaclass,
string_types, OrderedDict)
from pandas.core import config
@@ -29,7 +29,7 @@
import pandas.compat.openpyxl_compat as openpyxl_compat
from warnings import warn
from distutils.version import LooseVersion
-from pandas.util.decorators import Appender
+from pandas.util._decorators import Appender
from textwrap import fill
__all__ = ["read_excel", "ExcelWriter", "ExcelFile"]
@@ -1447,7 +1447,7 @@ def write_cells(self, cells, sheet_name=None, startrow=0, startcol=0,
elif isinstance(cell.val, date):
num_format_str = self.date_format
- stylekey = libjson.dumps(cell.style)
+ stylekey = json.dumps(cell.style)
if num_format_str:
stylekey += num_format_str
@@ -1575,7 +1575,7 @@ def write_cells(self, cells, sheet_name=None, startrow=0, startcol=0,
elif isinstance(cell.val, date):
num_format_str = self.date_format
- stylekey = libjson.dumps(cell.style)
+ stylekey = json.dumps(cell.style)
if num_format_str:
stylekey += num_format_str
diff --git a/pandas/io/formats/console.py b/pandas/io/formats/console.py
index 0e46b0073a53d..ab75e3fa253ce 100644
--- a/pandas/io/formats/console.py
+++ b/pandas/io/formats/console.py
@@ -4,7 +4,7 @@
import sys
import locale
-from pandas.util.terminal import get_terminal_size
+from pandas.io.formats.terminal import get_terminal_size
# -----------------------------------------------------------------------------
# Global formatting options
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 43b0b5fbeee90..65098bb2aa404 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -30,7 +30,7 @@
from pandas import compat
from pandas.compat import (StringIO, lzip, range, map, zip, u,
OrderedDict, unichr)
-from pandas.util.terminal import get_terminal_size
+from pandas.io.formats.terminal import get_terminal_size
from pandas.core.config import get_option, set_option
from pandas.io.common import _get_handle, UnicodeWriter, _expand_user
from pandas.io.formats.printing import adjoin, justify, pprint_thing
diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index 71c61998be092..eac82ddde2318 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -29,7 +29,7 @@
from pandas.core.generic import _shared_docs
import pandas.core.common as com
from pandas.core.indexing import _maybe_numeric_slice, _non_reducing_slice
-from pandas.util.decorators import Appender
+from pandas.util._decorators import Appender
try:
import matplotlib.pyplot as plt
from matplotlib import colors
diff --git a/pandas/util/terminal.py b/pandas/io/formats/terminal.py
similarity index 100%
rename from pandas/util/terminal.py
rename to pandas/io/formats/terminal.py
diff --git a/pandas/io/json/json.py b/pandas/io/json/json.py
index 28ea8298cee9e..b2fe074732cbb 100644
--- a/pandas/io/json/json.py
+++ b/pandas/io/json/json.py
@@ -2,7 +2,7 @@
import os
import numpy as np
-from pandas.io.json import libjson
+import pandas._libs.json as json
from pandas._libs.tslib import iNaT
from pandas.compat import StringIO, long, u
from pandas import compat, isnull
@@ -14,8 +14,8 @@
from .table_schema import build_table_schema
from pandas.core.dtypes.common import is_period_dtype
-loads = libjson.loads
-dumps = libjson.dumps
+loads = json.loads
+dumps = json.dumps
TABLE_SCHEMA_VERSION = '0.20.0'
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 95b1394c88ac2..ce8643504932f 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -37,10 +37,10 @@
_NA_VALUES, _infer_compression)
from pandas.core.tools import datetimes as tools
-from pandas.util.decorators import Appender
+from pandas.util._decorators import Appender
import pandas._libs.lib as lib
-import pandas.io.libparsers as libparsers
+import pandas._libs.parsers as parsers
# BOM character (byte order mark)
@@ -1460,7 +1460,7 @@ def _convert_to_ndarrays(self, dct, na_values, na_fvalues, verbose=False,
if issubclass(cvals.dtype.type, np.integer) and self.compact_ints:
cvals = lib.downcast_int64(
- cvals, libparsers.na_values,
+ cvals, parsers.na_values,
self.use_unsigned)
result[c] = cvals
@@ -1579,7 +1579,7 @@ def __init__(self, src, **kwds):
# #2442
kwds['allow_leading_cols'] = self.index_col is not False
- self._reader = libparsers.TextReader(src, **kwds)
+ self._reader = parsers.TextReader(src, **kwds)
# XXX
self.usecols, self.usecols_dtype = _validate_usecols_arg(
diff --git a/pandas/io/sas/sas7bdat.py b/pandas/io/sas/sas7bdat.py
index d33cee2c5a1bc..20b0cf85e95b7 100644
--- a/pandas/io/sas/sas7bdat.py
+++ b/pandas/io/sas/sas7bdat.py
@@ -20,7 +20,7 @@
import numpy as np
import struct
import pandas.io.sas.sas_constants as const
-from pandas.io.sas.libsas import Parser
+from pandas.io.sas._sas import Parser
class _subheader_pointer(object):
diff --git a/pandas/io/sas/sas_xport.py b/pandas/io/sas/sas_xport.py
index 76fc55154bc49..a43a5988a2ade 100644
--- a/pandas/io/sas/sas_xport.py
+++ b/pandas/io/sas/sas_xport.py
@@ -14,7 +14,7 @@
from pandas import compat
import struct
import numpy as np
-from pandas.util.decorators import Appender
+from pandas.util._decorators import Appender
import warnings
_correct_line1 = ("HEADER RECORD*******LIBRARY HEADER RECORD!!!!!!!"
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
index 691582629251a..55cac83804cd9 100644
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -27,7 +27,7 @@
from pandas import compat, to_timedelta, to_datetime, isnull, DatetimeIndex
from pandas.compat import lrange, lmap, lzip, text_type, string_types, range, \
zip, BytesIO
-from pandas.util.decorators import Appender
+from pandas.util._decorators import Appender
import pandas as pd
from pandas.io.common import get_filepath_or_buffer, BaseIterator
diff --git a/pandas/json.py b/pandas/json.py
index 5b1e395fa4b74..0b87aa22394b9 100644
--- a/pandas/json.py
+++ b/pandas/json.py
@@ -4,4 +4,4 @@
warnings.warn("The pandas.json module is deprecated and will be "
"removed in a future version. Please import from "
"the pandas.io.json instead", FutureWarning, stacklevel=2)
-from pandas.io.json.libjson import dumps, loads
+from pandas._libs.json import dumps, loads
diff --git a/pandas/parser.py b/pandas/parser.py
index af203c3df8cc9..c0c3bf3179a2d 100644
--- a/pandas/parser.py
+++ b/pandas/parser.py
@@ -4,5 +4,5 @@
warnings.warn("The pandas.parser module is deprecated and will be "
"removed in a future version. Please import from "
"the pandas.io.parser instead", FutureWarning, stacklevel=2)
-from pandas.io.libparsers import na_values
+from pandas._libs.parsers import na_values
from pandas.io.common import CParserError
diff --git a/pandas/plotting/_core.py b/pandas/plotting/_core.py
index c3476d1443fc3..e88979b14c8af 100644
--- a/pandas/plotting/_core.py
+++ b/pandas/plotting/_core.py
@@ -9,7 +9,7 @@
import numpy as np
-from pandas.util.decorators import cache_readonly
+from pandas.util._decorators import cache_readonly
from pandas.core.base import PandasObject
from pandas.core.dtypes.common import (
is_list_like,
@@ -25,7 +25,7 @@
from pandas.compat import range, lrange, map, zip, string_types
import pandas.compat as compat
from pandas.io.formats.printing import pprint_thing
-from pandas.util.decorators import Appender
+from pandas.util._decorators import Appender
from pandas.plotting._compat import (_mpl_ge_1_3_1,
_mpl_ge_1_5_0)
diff --git a/pandas/plotting/_misc.py b/pandas/plotting/_misc.py
index 93eceba9a3f02..20ada033c0f58 100644
--- a/pandas/plotting/_misc.py
+++ b/pandas/plotting/_misc.py
@@ -4,7 +4,7 @@
import numpy as np
-from pandas.util.decorators import deprecate_kwarg
+from pandas.util._decorators import deprecate_kwarg
from pandas.core.dtypes.missing import notnull
from pandas.compat import range, lrange, lmap, zip
from pandas.io.formats.printing import pprint_thing
diff --git a/pandas/stats/moments.py b/pandas/stats/moments.py
index f98ffa26e0c2b..f6c3a08c6721a 100644
--- a/pandas/stats/moments.py
+++ b/pandas/stats/moments.py
@@ -8,7 +8,7 @@
import numpy as np
from pandas.core.dtypes.common import is_scalar
from pandas.core.api import DataFrame, Series
-from pandas.util.decorators import Substitution, Appender
+from pandas.util._decorators import Substitution, Appender
__all__ = ['rolling_count', 'rolling_max', 'rolling_min',
'rolling_sum', 'rolling_mean', 'rolling_std', 'rolling_cov',
diff --git a/pandas/tests/dtypes/test_io.py b/pandas/tests/dtypes/test_io.py
index 58a1c3540cd03..ae92e9ecca681 100644
--- a/pandas/tests/dtypes/test_io.py
+++ b/pandas/tests/dtypes/test_io.py
@@ -73,7 +73,7 @@ def test_convert_sql_column_decimals(self):
tm.assert_numpy_array_equal(result, expected)
def test_convert_downcast_int64(self):
- from pandas.io.libparsers import na_values
+ from pandas._libs.parsers import na_values
arr = np.array([1, 2, 7, 8, 10], dtype=np.int64)
expected = np.array([1, 2, 7, 8, 10], dtype=np.int8)
diff --git a/pandas/tests/frame/common.py b/pandas/tests/frame/common.py
index b9cd764c8704c..b475d25eb5dac 100644
--- a/pandas/tests/frame/common.py
+++ b/pandas/tests/frame/common.py
@@ -1,7 +1,7 @@
import numpy as np
from pandas import compat
-from pandas.util.decorators import cache_readonly
+from pandas.util._decorators import cache_readonly
import pandas.util.testing as tm
import pandas as pd
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index e99c70952e5b3..3f08013e05ac8 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -29,7 +29,7 @@
import pandas.io.formats.printing as printing
import pandas.util.testing as tm
-from pandas.util.terminal import get_terminal_size
+from pandas.io.formats.terminal import get_terminal_size
from pandas.core.config import (set_option, get_option, option_context,
reset_option)
diff --git a/pandas/tests/io/json/test_ujson.py b/pandas/tests/io/json/test_ujson.py
index 10f99c4fcd0a8..86b0e5a0c6a2d 100644
--- a/pandas/tests/io/json/test_ujson.py
+++ b/pandas/tests/io/json/test_ujson.py
@@ -13,7 +13,7 @@
import decimal
from functools import partial
from pandas.compat import range, zip, StringIO, u
-import pandas.io.json.libjson as ujson
+import pandas._libs.json as ujson
import pandas.compat as compat
import numpy as np
diff --git a/pandas/tests/io/parser/test_textreader.py b/pandas/tests/io/parser/test_textreader.py
index 7cd02a07bbd4c..c9088d2ecc5e7 100644
--- a/pandas/tests/io/parser/test_textreader.py
+++ b/pandas/tests/io/parser/test_textreader.py
@@ -22,8 +22,8 @@
import pandas.util.testing as tm
-from pandas.io.libparsers import TextReader
-import pandas.io.libparsers as parser
+from pandas._libs.parsers import TextReader
+import pandas._libs.parsers as parser
class TestTextReader(object):
diff --git a/pandas/tests/io/test_clipboard.py b/pandas/tests/io/test_clipboard.py
index 406045a69beca..940a331a9de84 100644
--- a/pandas/tests/io/test_clipboard.py
+++ b/pandas/tests/io/test_clipboard.py
@@ -1,6 +1,7 @@
# -*- coding: utf-8 -*-
import numpy as np
from numpy.random import randint
+from textwrap import dedent
import pytest
import pandas as pd
@@ -10,7 +11,8 @@
from pandas import get_option
from pandas.util import testing as tm
from pandas.util.testing import makeCustomDataframe as mkdf
-from pandas.util.clipboard.exceptions import PyperclipException
+from pandas.io.clipboard.exceptions import PyperclipException
+from pandas.io.clipboard import clipboard_set
try:
@@ -89,8 +91,6 @@ def test_round_trip_frame(self):
self.check_round_trip_frame(dt)
def test_read_clipboard_infer_excel(self):
- from textwrap import dedent
- from pandas.util.clipboard import clipboard_set
text = dedent("""
John James Charlie Mingus
diff --git a/pandas/tests/io/test_html.py b/pandas/tests/io/test_html.py
index fa83c43ba8dd4..6da77bf423609 100644
--- a/pandas/tests/io/test_html.py
+++ b/pandas/tests/io/test_html.py
@@ -23,7 +23,7 @@
is_platform_windows)
from pandas.io.common import URLError, urlopen, file_path_to_url
from pandas.io.html import read_html
-from pandas.io.libparsers import ParserError
+from pandas._libs.parsers import ParserError
import pandas.util.testing as tm
from pandas.util.testing import makeCustomDataframe as mkdf, network
diff --git a/pandas/tests/plotting/common.py b/pandas/tests/plotting/common.py
index ac490a00bf684..1dbba676e4bc5 100644
--- a/pandas/tests/plotting/common.py
+++ b/pandas/tests/plotting/common.py
@@ -7,7 +7,7 @@
from pandas import DataFrame, Series
from pandas.compat import zip, iteritems
-from pandas.util.decorators import cache_readonly
+from pandas.util._decorators import cache_readonly
from pandas.core.dtypes.api import is_list_like
import pandas.util.testing as tm
from pandas.util.testing import (ensure_clean,
diff --git a/pandas/tests/series/common.py b/pandas/tests/series/common.py
index 613961e1c670f..0c25dcb29c3b2 100644
--- a/pandas/tests/series/common.py
+++ b/pandas/tests/series/common.py
@@ -1,4 +1,4 @@
-from pandas.util.decorators import cache_readonly
+from pandas.util._decorators import cache_readonly
import pandas.util.testing as tm
import pandas as pd
diff --git a/pandas/tests/sparse/test_array.py b/pandas/tests/sparse/test_array.py
index ab7340c89f016..4ce03f72dbba6 100644
--- a/pandas/tests/sparse/test_array.py
+++ b/pandas/tests/sparse/test_array.py
@@ -10,7 +10,7 @@
from pandas import _np_version_under1p8
from pandas.core.sparse.api import SparseArray, SparseSeries
-from pandas.core.sparse.libsparse import IntIndex
+from pandas._libs.sparse import IntIndex
from pandas.util.testing import assert_almost_equal
import pandas.util.testing as tm
diff --git a/pandas/tests/sparse/test_frame.py b/pandas/tests/sparse/test_frame.py
index 4a4a596e3bed4..0312b76ec30a5 100644
--- a/pandas/tests/sparse/test_frame.py
+++ b/pandas/tests/sparse/test_frame.py
@@ -21,7 +21,7 @@
from pandas import compat
from pandas.core.sparse import frame as spf
-from pandas.core.sparse.libsparse import BlockIndex, IntIndex
+from pandas._libs.sparse import BlockIndex, IntIndex
from pandas.core.sparse.api import SparseSeries, SparseDataFrame, SparseArray
from pandas.tests.frame.test_api import SharedWithSparse
diff --git a/pandas/tests/sparse/test_libsparse.py b/pandas/tests/sparse/test_libsparse.py
index c41025582c651..4842ebdd103c4 100644
--- a/pandas/tests/sparse/test_libsparse.py
+++ b/pandas/tests/sparse/test_libsparse.py
@@ -8,7 +8,7 @@
from pandas import compat
from pandas.core.sparse.array import IntIndex, BlockIndex, _make_index
-import pandas.core.sparse.libsparse as splib
+import pandas._libs.sparse as splib
TEST_LENGTH = 20
diff --git a/pandas/tests/sparse/test_series.py b/pandas/tests/sparse/test_series.py
index 344bca54b180b..b524d6bfab418 100644
--- a/pandas/tests/sparse/test_series.py
+++ b/pandas/tests/sparse/test_series.py
@@ -17,7 +17,7 @@
import pandas.core.sparse.frame as spf
-from pandas.core.sparse.libsparse import BlockIndex, IntIndex
+from pandas._libs.sparse import BlockIndex, IntIndex
from pandas.core.sparse.api import SparseSeries
from pandas.tests.series.test_api import SharedWithSparse
diff --git a/pandas/tests/util/__init__.py b/pandas/tests/util/__init__.py
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/pandas/tests/reshape/test_hashing.py b/pandas/tests/util/test_hashing.py
similarity index 94%
rename from pandas/tests/reshape/test_hashing.py
rename to pandas/tests/util/test_hashing.py
index 5f2c67ee300b5..e1e6e43529a7d 100644
--- a/pandas/tests/reshape/test_hashing.py
+++ b/pandas/tests/util/test_hashing.py
@@ -5,7 +5,8 @@
import pandas as pd
from pandas import DataFrame, Series, Index, MultiIndex
-from pandas.util.hashing import hash_array, hash_tuples, hash_pandas_object
+from pandas.util import hash_array, hash_pandas_object
+from pandas.core.util.hashing import hash_tuples
import pandas.util.testing as tm
@@ -267,3 +268,18 @@ def test_hash_collisions(self):
result = hash_array(np.asarray(L, dtype=object), 'utf8')
tm.assert_numpy_array_equal(
result, np.concatenate([expected1, expected2], axis=0))
+
+
+def test_deprecation():
+
+ with tm.assert_produces_warning(DeprecationWarning,
+ check_stacklevel=False):
+ from pandas.tools.hashing import hash_pandas_object
+ obj = Series(list('abc'))
+ hash_pandas_object(obj, hash_key='9876543210123456')
+
+ with tm.assert_produces_warning(DeprecationWarning,
+ check_stacklevel=False):
+ from pandas.tools.hashing import hash_array
+ obj = np.array([1, 2, 3])
+ hash_array(obj, hash_key='9876543210123456')
diff --git a/pandas/tests/test_testing.py b/pandas/tests/util/test_testing.py
similarity index 100%
rename from pandas/tests/test_testing.py
rename to pandas/tests/util/test_testing.py
diff --git a/pandas/tests/test_util.py b/pandas/tests/util/test_util.py
similarity index 98%
rename from pandas/tests/test_util.py
rename to pandas/tests/util/test_util.py
index 2d9ab78ceeb8a..532d596220501 100644
--- a/pandas/tests/test_util.py
+++ b/pandas/tests/util/test_util.py
@@ -9,10 +9,10 @@
import pytest
from pandas.compat import intern
from pandas.util._move import move_into_mutable_buffer, BadMove, stolenbuf
-from pandas.util.decorators import deprecate_kwarg
-from pandas.util.validators import (validate_args, validate_kwargs,
- validate_args_and_kwargs,
- validate_bool_kwarg)
+from pandas.util._decorators import deprecate_kwarg
+from pandas.util._validators import (validate_args, validate_kwargs,
+ validate_args_and_kwargs,
+ validate_bool_kwarg)
import pandas.util.testing as tm
diff --git a/pandas/tools/hashing.py b/pandas/tools/hashing.py
new file mode 100644
index 0000000000000..ba38710b607af
--- /dev/null
+++ b/pandas/tools/hashing.py
@@ -0,0 +1,18 @@
+import warnings
+import sys
+
+m = sys.modules['pandas.tools.hashing']
+for t in ['hash_pandas_object', 'hash_array']:
+
+ def outer(t=t):
+
+ def wrapper(*args, **kwargs):
+ from pandas import util
+ warnings.warn("pandas.tools.hashing is deprecated and will be "
+ "removed in a future version, import "
+ "from pandas.util",
+ DeprecationWarning, stacklevel=3)
+ return getattr(util, t)(*args, **kwargs)
+ return wrapper
+
+ setattr(m, t, outer(t))
diff --git a/pandas/tseries/frequencies.py b/pandas/tseries/frequencies.py
index 06d70f1456518..dddf835424f67 100644
--- a/pandas/tseries/frequencies.py
+++ b/pandas/tseries/frequencies.py
@@ -16,7 +16,7 @@
import pandas.core.algorithms as algos
from pandas.core.algorithms import unique
from pandas.tseries.offsets import DateOffset
-from pandas.util.decorators import cache_readonly, deprecate_kwarg
+from pandas.util._decorators import cache_readonly, deprecate_kwarg
import pandas.tseries.offsets as offsets
from pandas._libs import lib, tslib
diff --git a/pandas/util/__init__.py b/pandas/util/__init__.py
index e69de29bb2d1d..e86af930fef7c 100644
--- a/pandas/util/__init__.py
+++ b/pandas/util/__init__.py
@@ -0,0 +1,2 @@
+from pandas.core.util.hashing import hash_pandas_object, hash_array # noqa
+from pandas.util._decorators import Appender, Substitution, cache_readonly # noqa
diff --git a/pandas/util/decorators.py b/pandas/util/_decorators.py
similarity index 100%
rename from pandas/util/decorators.py
rename to pandas/util/_decorators.py
diff --git a/pandas/util/depr_module.py b/pandas/util/_depr_module.py
similarity index 100%
rename from pandas/util/depr_module.py
rename to pandas/util/_depr_module.py
diff --git a/pandas/util/doctools.py b/pandas/util/_doctools.py
similarity index 100%
rename from pandas/util/doctools.py
rename to pandas/util/_doctools.py
diff --git a/pandas/util/print_versions.py b/pandas/util/_print_versions.py
similarity index 100%
rename from pandas/util/print_versions.py
rename to pandas/util/_print_versions.py
diff --git a/pandas/util/validators.py b/pandas/util/_validators.py
similarity index 100%
rename from pandas/util/validators.py
rename to pandas/util/_validators.py
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index 0d70d51032b3d..f6b572cdf7179 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -48,7 +48,7 @@
Index, MultiIndex,
Series, DataFrame, Panel, Panel4D)
-from pandas.util import libtesting
+from pandas._libs import testing as _testing
from pandas.io.common import urlopen
try:
import pytest
@@ -170,7 +170,7 @@ def assert_almost_equal(left, right, check_exact=False,
else:
obj = 'Input'
assert_class_equal(left, right, obj=obj)
- return libtesting.assert_almost_equal(
+ return _testing.assert_almost_equal(
left, right,
check_dtype=check_dtype,
check_less_precise=check_less_precise,
@@ -206,7 +206,7 @@ def _check_isinstance(left, right, cls):
def assert_dict_equal(left, right, compare_keys=True):
_check_isinstance(left, right, dict)
- return libtesting.assert_dict_equal(left, right, compare_keys=compare_keys)
+ return _testing.assert_dict_equal(left, right, compare_keys=compare_keys)
def randbool(size=(), p=0.5):
@@ -923,10 +923,10 @@ def _get_ilevel_values(index, level):
.format(obj, np.round(diff, 5))
raise_assert_detail(obj, msg, left, right)
else:
- libtesting.assert_almost_equal(left.values, right.values,
- check_less_precise=check_less_precise,
- check_dtype=exact,
- obj=obj, lobj=left, robj=right)
+ _testing.assert_almost_equal(left.values, right.values,
+ check_less_precise=check_less_precise,
+ check_dtype=exact,
+ obj=obj, lobj=left, robj=right)
# metadata comparison
if check_names:
@@ -1259,10 +1259,10 @@ def assert_series_equal(left, right, check_dtype=True,
assert_index_equal(l, r, obj='{0}.index'.format(obj))
else:
- libtesting.assert_almost_equal(left.get_values(), right.get_values(),
- check_less_precise=check_less_precise,
- check_dtype=check_dtype,
- obj='{0}'.format(obj))
+ _testing.assert_almost_equal(left.get_values(), right.get_values(),
+ check_less_precise=check_less_precise,
+ check_dtype=check_dtype,
+ obj='{0}'.format(obj))
# metadata comparison
if check_names:
@@ -1476,8 +1476,8 @@ def assert_sp_array_equal(left, right, check_dtype=True):
check_dtype=check_dtype)
# SparseIndex comparison
- assert isinstance(left.sp_index, pd.core.sparse.libsparse.SparseIndex)
- assert isinstance(right.sp_index, pd.core.sparse.libsparse.SparseIndex)
+ assert isinstance(left.sp_index, pd._libs.sparse.SparseIndex)
+ assert isinstance(right.sp_index, pd._libs.sparse.SparseIndex)
if not left.sp_index.equals(right.sp_index):
raise_assert_detail('SparseArray.index', 'index are not equal',
diff --git a/setup.py b/setup.py
index 6f3ddbe2ad9d0..806047a344281 100755
--- a/setup.py
+++ b/setup.py
@@ -116,9 +116,9 @@ def is_platform_mac():
'join': ['_libs/join_helper.pxi.in', '_libs/join_func_helper.pxi.in'],
'reshape': ['_libs/reshape_helper.pxi.in'],
'hashtable': ['_libs/hashtable_class_helper.pxi.in',
- '_libs/hashtable_func_helper.pxi.in'],
+ '_libs/hashtable_func_helper.pxi.in'],
'index': ['_libs/index_class_helper.pxi.in'],
- 'sparse': ['core/sparse/sparse_op_helper.pxi.in'],
+ 'sparse': ['_libs/sparse_op_helper.pxi.in'],
'interval': ['_libs/intervaltree.pxi.in']
}
@@ -337,11 +337,11 @@ class CheckSDist(sdist_class):
'pandas/_libs/algos.pyx',
'pandas/_libs/join.pyx',
'pandas/_libs/interval.pyx',
- 'pandas/core/window.pyx',
- 'pandas/core/sparse/sparse.pyx',
- 'pandas/util/testing.pyx',
- 'pandas/tools/hash.pyx',
- 'pandas/io/parsers.pyx',
+ 'pandas/_libs/hashing.pyx',
+ 'pandas/_libs/testing.pyx',
+ 'pandas/_libs/window.pyx',
+ 'pandas/_libs/sparse.pyx',
+ 'pandas/_libs/parsers.pyx',
'pandas/io/sas/sas.pyx']
def initialize_options(self):
@@ -513,24 +513,24 @@ def pxd(name):
'_libs.interval': {'pyxfile': '_libs/interval',
'pxdfiles': ['_libs/hashtable'],
'depends': _pxi_dep['interval']},
- 'core.libwindow': {'pyxfile': 'core/window',
- 'pxdfiles': ['_libs/src/skiplist', '_libs/src/util'],
- 'depends': ['pandas/_libs/src/skiplist.pyx',
- 'pandas/_libs/src/skiplist.h']},
- 'io.libparsers': {'pyxfile': 'io/parsers',
+ '_libs.window': {'pyxfile': '_libs/window',
+ 'pxdfiles': ['_libs/src/skiplist', '_libs/src/util'],
+ 'depends': ['pandas/_libs/src/skiplist.pyx',
+ 'pandas/_libs/src/skiplist.h']},
+ '_libs.parsers': {'pyxfile': '_libs/parsers',
'depends': ['pandas/_libs/src/parser/tokenizer.h',
'pandas/_libs/src/parser/io.h',
'pandas/_libs/src/numpy_helper.h'],
'sources': ['pandas/_libs/src/parser/tokenizer.c',
'pandas/_libs/src/parser/io.c']},
- 'core.sparse.libsparse': {'pyxfile': 'core/sparse/sparse',
- 'depends': (['pandas/core/sparse/sparse.pyx'] +
- _pxi_dep['sparse'])},
- 'util.libtesting': {'pyxfile': 'util/testing',
- 'depends': ['pandas/util/testing.pyx']},
- 'util.libhashing': {'pyxfile': 'util/hashing',
- 'depends': ['pandas/util/hashing.pyx']},
- 'io.sas.libsas': {'pyxfile': 'io/sas/sas'},
+ '_libs.sparse': {'pyxfile': '_libs/sparse',
+ 'depends': (['pandas/core/sparse/sparse.pyx'] +
+ _pxi_dep['sparse'])},
+ '_libs.testing': {'pyxfile': '_libs/testing',
+ 'depends': ['pandas/_libs/testing.pyx']},
+ '_libs.hashing': {'pyxfile': '_libs/hashing',
+ 'depends': ['pandas/_libs/hashing.pyx']},
+ 'io.sas._sas': {'pyxfile': 'io/sas/sas'},
}
extensions = []
@@ -596,7 +596,7 @@ def pxd(name):
root, _ = os.path.splitext(ext.sources[0])
ext.sources[0] = root + suffix
-ujson_ext = Extension('pandas.io.json.libjson',
+ujson_ext = Extension('pandas._libs.json',
depends=['pandas/_libs/src/ujson/lib/ultrajson.h',
'pandas/_libs/src/datetime_helper.h',
'pandas/_libs/src/numpy_helper.h'],
@@ -645,6 +645,7 @@ def pxd(name):
'pandas.core.reshape',
'pandas.core.sparse',
'pandas.core.tools',
+ 'pandas.core.util',
'pandas.computation',
'pandas.errors',
'pandas.io',
@@ -652,6 +653,7 @@ def pxd(name):
'pandas.io.sas',
'pandas.io.msgpack',
'pandas.io.formats',
+ 'pandas.io.clipboard',
'pandas._libs',
'pandas.plotting',
'pandas.stats',
@@ -679,9 +681,9 @@ def pxd(name):
'pandas.tests.tseries',
'pandas.tests.plotting',
'pandas.tests.tools',
+ 'pandas.tests.util',
'pandas.tools',
'pandas.tseries',
- 'pandas.util.clipboard'
],
package_data={'pandas.tests': ['data/*.csv'],
'pandas.tests.indexes': ['data/*.pickle'],
| xref #13634
| https://api.github.com/repos/pandas-dev/pandas/pulls/16223 | 2017-05-04T02:15:03Z | 2017-05-04T14:27:36Z | 2017-05-04T14:27:36Z | 2017-05-05T10:44:28Z |
DOC: don't include all methods/attributes of IntervalIndex | diff --git a/doc/source/api.rst b/doc/source/api.rst
index 491bec3c83f61..c652573bc6677 100644
--- a/doc/source/api.rst
+++ b/doc/source/api.rst
@@ -618,7 +618,6 @@ strings and apply several methods to it. These can be accessed like
Series.cat
Series.dt
Index.str
- CategoricalIndex.str
MultiIndex.str
DatetimeIndex.str
TimedeltaIndex.str
@@ -1404,6 +1403,7 @@ CategoricalIndex
.. autosummary::
:toctree: generated/
+ :template: autosummary/class_without_autosummary.rst
CategoricalIndex
@@ -1432,6 +1432,7 @@ IntervalIndex
.. autosummary::
:toctree: generated/
+ :template: autosummary/class_without_autosummary.rst
IntervalIndex
diff --git a/doc/sphinxext/numpydoc/numpydoc.py b/doc/sphinxext/numpydoc/numpydoc.py
index 0cccf72de3745..710c3cc9842c4 100755
--- a/doc/sphinxext/numpydoc/numpydoc.py
+++ b/doc/sphinxext/numpydoc/numpydoc.py
@@ -43,7 +43,9 @@ def mangle_docstrings(app, what, name, obj, options, lines,
)
# PANDAS HACK (to remove the list of methods/attributes for Categorical)
- if what == "class" and name.endswith(".Categorical"):
+ if what == "class" and (name.endswith(".Categorical") or
+ name.endswith("CategoricalIndex") or
+ name.endswith("IntervalIndex")):
cfg['class_members_list'] = False
if what == 'module':
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index 760db4ba20675..e7921dafabc3c 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -47,6 +47,9 @@ class CategoricalIndex(Index, base.PandasDelegate):
name : object
Name to be stored in the index
+ See Also
+ --------
+ Categorical, Index
"""
_typ = 'categoricalindex'
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index ccd0d8bee4abc..518b3f99b64ec 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -110,6 +110,10 @@ class IntervalIndex(IntervalMixin, Index):
Name to be stored in the index.
copy : boolean, default False
Copy the meta-data
+
+ See Also
+ --------
+ Index
"""
_typ = 'intervalindex'
_comparables = ['name']
| Alternative for #16050
I totally forgot I already had made such a "class without autosummary table" template before for Categorical, so I hope this should work for IntervalIndex as well. | https://api.github.com/repos/pandas-dev/pandas/pulls/16221 | 2017-05-03T23:03:15Z | 2017-05-04T21:15:56Z | 2017-05-04T21:15:56Z | 2017-05-04T23:27:41Z |
ENH: Provide dict object for to_dict() #16122 | diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index 3df0a21facb02..ce80a80e58cd7 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -26,6 +26,7 @@ New features
Other Enhancements
^^^^^^^^^^^^^^^^^^
+- ``Series.to_dict()`` and ``DataFrame.to_dict()`` now support an ``into`` keyword which allows you to specify the ``collections.Mapping`` subclass that you would like returned. The default is ``dict``, which is backwards compatible. (:issue:`16122`)
- ``RangeIndex.append`` now returns a ``RangeIndex`` object when possible (:issue:`16212`)
diff --git a/pandas/core/common.py b/pandas/core/common.py
index 39a5da0aa6912..0dc6a7a1e9c7b 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -6,6 +6,8 @@
import warnings
from datetime import datetime, timedelta
from functools import partial
+import inspect
+import collections
import numpy as np
from pandas._libs import lib, tslib
@@ -479,6 +481,42 @@ def _dict_compat(d):
for key, value in iteritems(d))
+def standardize_mapping(into):
+ """
+ Helper function to standardize a supplied mapping.
+
+ .. versionadded:: 0.21.0
+
+ Parameters
+ ----------
+ into : instance or subclass of collections.Mapping
+ Must be a class, an initialized collections.defaultdict,
+ or an instance of a collections.Mapping subclass.
+
+ Returns
+ -------
+ mapping : a collections.Mapping subclass or other constructor
+ a callable object that can accept an iterator to create
+ the desired Mapping.
+
+ See Also
+ --------
+ DataFrame.to_dict
+ Series.to_dict
+ """
+ if not inspect.isclass(into):
+ if isinstance(into, collections.defaultdict):
+ return partial(
+ collections.defaultdict, into.default_factory)
+ into = type(into)
+ if not issubclass(into, collections.Mapping):
+ raise TypeError('unsupported type: {}'.format(into))
+ elif into == collections.defaultdict:
+ raise TypeError(
+ 'to_dict() only accepts initialized defaultdicts')
+ return into
+
+
def sentinel_factory():
class Sentinel(object):
pass
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 8d437102e4d18..3b0cc5619a1cd 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -63,7 +63,8 @@
_default_index,
_values_from_object,
_maybe_box_datetimelike,
- _dict_compat)
+ _dict_compat,
+ standardize_mapping)
from pandas.core.generic import NDFrame, _shared_docs
from pandas.core.index import Index, MultiIndex, _ensure_index
from pandas.core.indexing import (maybe_droplevels, convert_to_index_sliceable,
@@ -860,7 +861,7 @@ def from_dict(cls, data, orient='columns', dtype=None):
return cls(data, index=index, columns=columns, dtype=dtype)
- def to_dict(self, orient='dict'):
+ def to_dict(self, orient='dict', into=dict):
"""Convert DataFrame to dictionary.
Parameters
@@ -882,32 +883,85 @@ def to_dict(self, orient='dict'):
Abbreviations are allowed. `s` indicates `series` and `sp`
indicates `split`.
+ into : class, default dict
+ The collections.Mapping subclass used for all Mappings
+ in the return value. Can be the actual class or an empty
+ instance of the mapping type you want. If you want a
+ collections.defaultdict, you must pass it initialized.
+
+ .. versionadded:: 0.21.0
+
Returns
-------
- result : dict like {column -> {index -> value}}
+ result : collections.Mapping like {column -> {index -> value}}
+
+ Examples
+ --------
+ >>> df = pd.DataFrame(
+ ... {'col1': [1, 2], 'col2': [0.5, 0.75]}, index=['a', 'b'])
+ >>> df
+ col1 col2
+ a 1 0.50
+ b 2 0.75
+ >>> df.to_dict()
+ {'col1': {'a': 1, 'b': 2}, 'col2': {'a': 0.5, 'b': 0.75}}
+
+ You can specify the return orientation.
+
+ >>> df.to_dict('series')
+ {'col1': a 1
+ b 2
+ Name: col1, dtype: int64, 'col2': a 0.50
+ b 0.75
+ Name: col2, dtype: float64}
+ >>> df.to_dict('split')
+ {'columns': ['col1', 'col2'],
+ 'data': [[1.0, 0.5], [2.0, 0.75]],
+ 'index': ['a', 'b']}
+ >>> df.to_dict('records')
+ [{'col1': 1.0, 'col2': 0.5}, {'col1': 2.0, 'col2': 0.75}]
+ >>> df.to_dict('index')
+ {'a': {'col1': 1.0, 'col2': 0.5}, 'b': {'col1': 2.0, 'col2': 0.75}}
+
+ You can also specify the mapping type.
+
+ >>> from collections import OrderedDict, defaultdict
+ >>> df.to_dict(into=OrderedDict)
+ OrderedDict([('col1', OrderedDict([('a', 1), ('b', 2)])),
+ ('col2', OrderedDict([('a', 0.5), ('b', 0.75)]))])
+
+ If you want a `defaultdict`, you need to initialize it:
+
+ >>> dd = defaultdict(list)
+ >>> df.to_dict('records', into=dd)
+ [defaultdict(<type 'list'>, {'col2': 0.5, 'col1': 1.0}),
+ defaultdict(<type 'list'>, {'col2': 0.75, 'col1': 2.0})]
"""
if not self.columns.is_unique:
warnings.warn("DataFrame columns are not unique, some "
"columns will be omitted.", UserWarning)
+ # GH16122
+ into_c = standardize_mapping(into)
if orient.lower().startswith('d'):
- return dict((k, v.to_dict()) for k, v in compat.iteritems(self))
+ return into_c(
+ (k, v.to_dict(into)) for k, v in compat.iteritems(self))
elif orient.lower().startswith('l'):
- return dict((k, v.tolist()) for k, v in compat.iteritems(self))
+ return into_c((k, v.tolist()) for k, v in compat.iteritems(self))
elif orient.lower().startswith('sp'):
- return {'index': self.index.tolist(),
- 'columns': self.columns.tolist(),
- 'data': lib.map_infer(self.values.ravel(),
- _maybe_box_datetimelike)
- .reshape(self.values.shape).tolist()}
+ return into_c((('index', self.index.tolist()),
+ ('columns', self.columns.tolist()),
+ ('data', lib.map_infer(self.values.ravel(),
+ _maybe_box_datetimelike)
+ .reshape(self.values.shape).tolist())))
elif orient.lower().startswith('s'):
- return dict((k, _maybe_box_datetimelike(v))
- for k, v in compat.iteritems(self))
+ return into_c((k, _maybe_box_datetimelike(v))
+ for k, v in compat.iteritems(self))
elif orient.lower().startswith('r'):
- return [dict((k, _maybe_box_datetimelike(v))
- for k, v in zip(self.columns, row))
+ return [into_c((k, _maybe_box_datetimelike(v))
+ for k, v in zip(self.columns, row))
for row in self.values]
elif orient.lower().startswith('i'):
- return dict((k, v.to_dict()) for k, v in self.iterrows())
+ return into_c((k, v.to_dict(into)) for k, v in self.iterrows())
else:
raise ValueError("orient '%s' not understood" % orient)
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 6ec163bbaa73d..129f291e5f843 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -46,7 +46,8 @@
_maybe_match_name,
SettingWithCopyError,
_maybe_box_datetimelike,
- _dict_compat)
+ _dict_compat,
+ standardize_mapping)
from pandas.core.index import (Index, MultiIndex, InvalidIndexError,
Float64Index, _ensure_index)
from pandas.core.indexing import check_bool_indexer, maybe_convert_indices
@@ -1074,15 +1075,39 @@ def tolist(self):
""" Convert Series to a nested list """
return list(self.asobject)
- def to_dict(self):
+ def to_dict(self, into=dict):
"""
- Convert Series to {label -> value} dict
+ Convert Series to {label -> value} dict or dict-like object.
+
+ Parameters
+ ----------
+ into : class, default dict
+ The collections.Mapping subclass to use as the return
+ object. Can be the actual class or an empty
+ instance of the mapping type you want. If you want a
+ collections.defaultdict, you must pass it initialized.
+
+ .. versionadded:: 0.21.0
Returns
-------
- value_dict : dict
- """
- return dict(compat.iteritems(self))
+ value_dict : collections.Mapping
+
+ Examples
+ --------
+ >>> s = pd.Series([1, 2, 3, 4])
+ >>> s.to_dict()
+ {0: 1, 1: 2, 2: 3, 3: 4}
+ >>> from collections import OrderedDict, defaultdict
+ >>> s.to_dict(OrderedDict)
+ OrderedDict([(0, 1), (1, 2), (2, 3), (3, 4)])
+ >>> dd = defaultdict(list)
+ >>> s.to_dict(dd)
+ defaultdict(<type 'list'>, {0: 1, 1: 2, 2: 3, 3: 4})
+ """
+ # GH16122
+ into_c = standardize_mapping(into)
+ return into_c(compat.iteritems(self))
def to_frame(self, name=None):
"""
diff --git a/pandas/tests/frame/test_convert_to.py b/pandas/tests/frame/test_convert_to.py
index e0cdca7904db7..34dd138ee1c80 100644
--- a/pandas/tests/frame/test_convert_to.py
+++ b/pandas/tests/frame/test_convert_to.py
@@ -1,6 +1,7 @@
# -*- coding: utf-8 -*-
import pytest
+import collections
import numpy as np
from pandas import compat
@@ -13,50 +14,6 @@
class TestDataFrameConvertTo(TestData):
- def test_to_dict(self):
- test_data = {
- 'A': {'1': 1, '2': 2},
- 'B': {'1': '1', '2': '2', '3': '3'},
- }
- recons_data = DataFrame(test_data).to_dict()
-
- for k, v in compat.iteritems(test_data):
- for k2, v2 in compat.iteritems(v):
- assert v2 == recons_data[k][k2]
-
- recons_data = DataFrame(test_data).to_dict("l")
-
- for k, v in compat.iteritems(test_data):
- for k2, v2 in compat.iteritems(v):
- assert v2 == recons_data[k][int(k2) - 1]
-
- recons_data = DataFrame(test_data).to_dict("s")
-
- for k, v in compat.iteritems(test_data):
- for k2, v2 in compat.iteritems(v):
- assert v2 == recons_data[k][k2]
-
- recons_data = DataFrame(test_data).to_dict("sp")
- expected_split = {'columns': ['A', 'B'], 'index': ['1', '2', '3'],
- 'data': [[1.0, '1'], [2.0, '2'], [np.nan, '3']]}
- tm.assert_dict_equal(recons_data, expected_split)
-
- recons_data = DataFrame(test_data).to_dict("r")
- expected_records = [{'A': 1.0, 'B': '1'},
- {'A': 2.0, 'B': '2'},
- {'A': np.nan, 'B': '3'}]
- assert isinstance(recons_data, list)
- assert len(recons_data) == 3
- for l, r in zip(recons_data, expected_records):
- tm.assert_dict_equal(l, r)
-
- # GH10844
- recons_data = DataFrame(test_data).to_dict("i")
-
- for k, v in compat.iteritems(test_data):
- for k2, v2 in compat.iteritems(v):
- assert v2 == recons_data[k2][k]
-
def test_to_dict_timestamp(self):
# GH11247
@@ -190,17 +147,85 @@ def test_to_records_with_unicode_column_names(self):
)
tm.assert_almost_equal(result, expected)
+ @pytest.mark.parametrize('mapping', [
+ dict,
+ collections.defaultdict(list),
+ collections.OrderedDict])
+ def test_to_dict(self, mapping):
+ test_data = {
+ 'A': {'1': 1, '2': 2},
+ 'B': {'1': '1', '2': '2', '3': '3'},
+ }
+
+ # GH16122
+ recons_data = DataFrame(test_data).to_dict(into=mapping)
+
+ for k, v in compat.iteritems(test_data):
+ for k2, v2 in compat.iteritems(v):
+ assert (v2 == recons_data[k][k2])
+
+ recons_data = DataFrame(test_data).to_dict("l", mapping)
+
+ for k, v in compat.iteritems(test_data):
+ for k2, v2 in compat.iteritems(v):
+ assert (v2 == recons_data[k][int(k2) - 1])
+
+ recons_data = DataFrame(test_data).to_dict("s", mapping)
+
+ for k, v in compat.iteritems(test_data):
+ for k2, v2 in compat.iteritems(v):
+ assert (v2 == recons_data[k][k2])
+
+ recons_data = DataFrame(test_data).to_dict("sp", mapping)
+ expected_split = {'columns': ['A', 'B'], 'index': ['1', '2', '3'],
+ 'data': [[1.0, '1'], [2.0, '2'], [np.nan, '3']]}
+ tm.assert_dict_equal(recons_data, expected_split)
+
+ recons_data = DataFrame(test_data).to_dict("r", mapping)
+ expected_records = [{'A': 1.0, 'B': '1'},
+ {'A': 2.0, 'B': '2'},
+ {'A': np.nan, 'B': '3'}]
+ assert isinstance(recons_data, list)
+ assert (len(recons_data) == 3)
+ for l, r in zip(recons_data, expected_records):
+ tm.assert_dict_equal(l, r)
+
+ # GH10844
+ recons_data = DataFrame(test_data).to_dict("i")
+
+ for k, v in compat.iteritems(test_data):
+ for k2, v2 in compat.iteritems(v):
+ assert (v2 == recons_data[k2][k])
+
+ df = DataFrame(test_data)
+ df['duped'] = df[df.columns[0]]
+ recons_data = df.to_dict("i")
+ comp_data = test_data.copy()
+ comp_data['duped'] = comp_data[df.columns[0]]
+ for k, v in compat.iteritems(comp_data):
+ for k2, v2 in compat.iteritems(v):
+ assert (v2 == recons_data[k2][k])
+
+ @pytest.mark.parametrize('mapping', [
+ list,
+ collections.defaultdict,
+ []])
+ def test_to_dict_errors(self, mapping):
+ # GH16122
+ df = DataFrame(np.random.randn(3, 3))
+ with pytest.raises(TypeError):
+ df.to_dict(into=mapping)
-@pytest.mark.parametrize('tz', ['UTC', 'GMT', 'US/Eastern'])
-def test_to_records_datetimeindex_with_tz(tz):
- # GH13937
- dr = date_range('2016-01-01', periods=10,
- freq='S', tz=tz)
+ @pytest.mark.parametrize('tz', ['UTC', 'GMT', 'US/Eastern'])
+ def test_to_records_datetimeindex_with_tz(self, tz):
+ # GH13937
+ dr = date_range('2016-01-01', periods=10,
+ freq='S', tz=tz)
- df = DataFrame({'datetime': dr}, index=dr)
+ df = DataFrame({'datetime': dr}, index=dr)
- expected = df.to_records()
- result = df.tz_convert("UTC").to_records()
+ expected = df.to_records()
+ result = df.tz_convert("UTC").to_records()
- # both converted to UTC, so they are equal
- tm.assert_numpy_array_equal(result, expected)
+ # both converted to UTC, so they are equal
+ tm.assert_numpy_array_equal(result, expected)
diff --git a/pandas/tests/series/test_io.py b/pandas/tests/series/test_io.py
index d1c9e5a6d16cf..503185de427f1 100644
--- a/pandas/tests/series/test_io.py
+++ b/pandas/tests/series/test_io.py
@@ -2,6 +2,8 @@
# pylint: disable-msg=E1101,W0612
from datetime import datetime
+import collections
+import pytest
import numpy as np
import pandas as pd
@@ -126,9 +128,6 @@ def test_to_frame(self):
dict(testdifferent=self.ts.values), index=self.ts.index)
assert_frame_equal(rs, xp)
- def test_to_dict(self):
- tm.assert_series_equal(Series(self.ts.to_dict(), name='ts'), self.ts)
-
def test_timeseries_periodindex(self):
# GH2891
from pandas import period_range
@@ -167,6 +166,19 @@ class SubclassedFrame(DataFrame):
expected = SubclassedFrame({'X': [1, 2, 3]})
assert_frame_equal(result, expected)
+ @pytest.mark.parametrize('mapping', (
+ dict,
+ collections.defaultdict(list),
+ collections.OrderedDict))
+ def test_to_dict(self, mapping):
+ # GH16122
+ ts = TestData().ts
+ tm.assert_series_equal(
+ Series(ts.to_dict(mapping), name='ts'), ts)
+ from_method = Series(ts.to_dict(collections.Counter))
+ from_constructor = Series(collections.Counter(ts.iteritems()))
+ tm.assert_series_equal(from_method, from_constructor)
+
class TestSeriesToList(TestData):
diff --git a/pandas/tests/test_common.py b/pandas/tests/test_common.py
index d7dbaccb87ee8..4893f99f7cf0f 100644
--- a/pandas/tests/test_common.py
+++ b/pandas/tests/test_common.py
@@ -1,6 +1,8 @@
# -*- coding: utf-8 -*-
import pytest
+import collections
+from functools import partial
import numpy as np
@@ -195,3 +197,26 @@ def test_dict_compat():
assert (com._dict_compat(data_datetime64) == expected)
assert (com._dict_compat(expected) == expected)
assert (com._dict_compat(data_unchanged) == data_unchanged)
+
+
+def test_standardize_mapping():
+ # No uninitialized defaultdicts
+ with pytest.raises(TypeError):
+ com.standardize_mapping(collections.defaultdict)
+
+ # No non-mapping subtypes, instance
+ with pytest.raises(TypeError):
+ com.standardize_mapping([])
+
+ # No non-mapping subtypes, class
+ with pytest.raises(TypeError):
+ com.standardize_mapping(list)
+
+ fill = {'bad': 'data'}
+ assert (com.standardize_mapping(fill) == dict)
+
+ # Convert instance to type
+ assert (com.standardize_mapping({}) == dict)
+
+ dd = collections.defaultdict(list)
+ assert isinstance(com.standardize_mapping(dd), partial)
| - [x] closes #16122
- [x] tests added / passed
- [x] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff``
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/16220 | 2017-05-03T21:05:46Z | 2017-05-16T16:06:19Z | 2017-05-16T16:06:19Z | 2018-04-18T18:24:05Z |
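The helper at the heart of the `to_dict()` change above, `standardize_mapping`, can be sketched in stdlib-only form. This mirrors the logic in the diff (with `collections.abc.Mapping` substituted for the 2017-era `collections.Mapping` alias, which no longer exists in modern Python):

```python
import collections
import collections.abc
import inspect
from functools import partial


def standardize_mapping(into):
    """Normalize ``into`` to a callable that builds the requested mapping.

    Accepts a Mapping subclass, an instance of one, or an initialized
    collections.defaultdict (whose default_factory is preserved).
    """
    if not inspect.isclass(into):
        if isinstance(into, collections.defaultdict):
            # Rebuild fresh defaultdicts that share the same default_factory
            return partial(collections.defaultdict, into.default_factory)
        into = type(into)
    if not issubclass(into, collections.abc.Mapping):
        raise TypeError('unsupported type: {}'.format(into))
    if into == collections.defaultdict:
        # A bare defaultdict class carries no default_factory to preserve
        raise TypeError('to_dict() only accepts initialized defaultdicts')
    return into
```

Called as `standardize_mapping(OrderedDict)` it returns the class itself; called with an initialized `defaultdict(list)` it returns a `partial`, so every nested mapping built for `orient='dict'` output gets its own fresh defaultdict.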
ENH: Make RangeIndex.append() return RangeIndex when possible | diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index 36dffc3d3378b..3df0a21facb02 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -26,6 +26,7 @@ New features
Other Enhancements
^^^^^^^^^^^^^^^^^^
+- ``RangeIndex.append`` now returns a ``RangeIndex`` object when possible (:issue:`16212`)
diff --git a/pandas/core/indexes/range.py b/pandas/core/indexes/range.py
index acd040693af2e..0b645b46b95b6 100644
--- a/pandas/core/indexes/range.py
+++ b/pandas/core/indexes/range.py
@@ -443,6 +443,63 @@ def join(self, other, how='left', level=None, return_indexers=False,
return super(RangeIndex, self).join(other, how, level, return_indexers,
sort)
+ def append(self, other):
+ """
+ Append a collection of Index options together
+
+ Parameters
+ ----------
+ other : Index or list/tuple of indices
+
+ Returns
+ -------
+ appended : RangeIndex if all indexes are consecutive RangeIndexes,
+ otherwise Int64Index or Index
+ """
+
+ to_concat = [self]
+
+ if isinstance(other, (list, tuple)):
+ to_concat = to_concat + list(other)
+ else:
+ to_concat.append(other)
+
+ if not all([isinstance(i, RangeIndex) for i in to_concat]):
+ return super(RangeIndex, self).append(other)
+
+ start = step = next = None
+
+ for obj in to_concat:
+ if not len(obj):
+ continue
+
+ if start is None:
+ # This is set by the first non-empty index
+ start = obj._start
+ if step is None and len(obj) > 1:
+ step = obj._step
+ elif step is None:
+ # First non-empty index had only one element
+ if obj._start == start:
+ return super(RangeIndex, self).append(other)
+ step = obj._start - start
+
+ non_consecutive = ((step != obj._step and len(obj) > 1) or
+ (next is not None and obj._start != next))
+ if non_consecutive:
+ return super(RangeIndex, self).append(other)
+
+ if step is not None:
+ next = obj[-1] + step
+
+ if start is None:
+ start = obj._start
+ step = obj._step
+ stop = obj._stop if next is None else next
+ names = set([obj.name for obj in to_concat])
+ name = None if len(names) > 1 else self.name
+ return RangeIndex(start, stop, step, name=name)
+
def __len__(self):
"""
return the length of the RangeIndex
diff --git a/pandas/tests/indexes/test_range.py b/pandas/tests/indexes/test_range.py
index 0379718b004e1..4e6de216de4ac 100644
--- a/pandas/tests/indexes/test_range.py
+++ b/pandas/tests/indexes/test_range.py
@@ -941,3 +941,39 @@ def test_where_array_like(self):
for klass in klasses:
result = i.where(klass(cond))
tm.assert_index_equal(result, expected)
+
+ def test_append(self):
+ # GH16212
+ RI = RangeIndex
+ I64 = Int64Index
+ F64 = Float64Index
+ OI = Index
+ cases = [([RI(1, 12, 5)], RI(1, 12, 5)),
+ ([RI(0, 6, 4)], RI(0, 6, 4)),
+ ([RI(1, 3), RI(3, 7)], RI(1, 7)),
+ ([RI(1, 5, 2), RI(5, 6)], RI(1, 6, 2)),
+ ([RI(1, 3, 2), RI(4, 7, 3)], RI(1, 7, 3)),
+ ([RI(-4, 3, 2), RI(4, 7, 2)], RI(-4, 7, 2)),
+ ([RI(-4, -8), RI(-8, -12)], RI(-8, -12)),
+ ([RI(-4, -8), RI(3, -4)], RI(3, -8)),
+ ([RI(-4, -8), RI(3, 5)], RI(3, 5)),
+ ([RI(-4, -2), RI(3, 5)], I64([-4, -3, 3, 4])),
+ ([RI(-2,), RI(3, 5)], RI(3, 5)),
+ ([RI(2,), RI(2)], I64([0, 1, 0, 1])),
+ ([RI(2,), RI(2, 5), RI(5, 8, 4)], RI(0, 6)),
+ ([RI(2,), RI(3, 5), RI(5, 8, 4)], I64([0, 1, 3, 4, 5])),
+ ([RI(-2, 2), RI(2, 5), RI(5, 8, 4)], RI(-2, 6)),
+ ([RI(3,), I64([-1, 3, 15])], I64([0, 1, 2, -1, 3, 15])),
+ ([RI(3,), F64([-1, 3.1, 15.])], F64([0, 1, 2, -1, 3.1, 15.])),
+ ([RI(3,), OI(['a', None, 14])], OI([0, 1, 2, 'a', None, 14])),
+ ([RI(3, 1), OI(['a', None, 14])], OI(['a', None, 14]))
+ ]
+
+ for indices, expected in cases:
+ result = indices[0].append(indices[1:])
+ tm.assert_index_equal(result, expected, exact=True)
+
+ if len(indices) == 2:
+ # Append single item rather than list
+ result2 = indices[0].append(indices[1])
+ tm.assert_index_equal(result2, expected, exact=True)
| - [x] closes #16212
- [x] tests added / passed
- [x] passes ``git diff master --name-only -- '*.py' | flake8 --diff``
- [x] whatsnew entry
This is analogous to what is already done for ``RangeIndex.union()``, but there doesn't seem to be much scope for code reuse (what could in principle be reused in ``.append`` of all ``Index`` subclasses is the initial casting to list and the name handling; I could do that in a separate PR). | https://api.github.com/repos/pandas-dev/pandas/pulls/16213 | 2017-05-03T11:42:17Z | 2017-05-12T11:45:06Z | 2017-05-12T11:45:06Z | 2017-05-12T11:57:49Z |
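The fast path this PR adds can be illustrated with builtin `range` objects standing in for `RangeIndex`. This is a hedged sketch, not the pandas implementation: it flattens and re-checks the arithmetic progression rather than tracking start/step/next incrementally as the diff does:

```python
def append_ranges(ranges):
    """Concatenate ranges; return a single range when the pieces form one
    arithmetic progression, otherwise fall back to a plain list (the
    analogue of falling back to Int64Index in the PR)."""
    parts = [r for r in ranges if len(r)]
    if not parts:
        return range(0)
    flat = [x for r in parts for x in r]
    if len(flat) == 1:
        return range(flat[0], flat[0] + 1)
    step = flat[1] - flat[0]
    # Consecutive iff every adjacent pair advances by the same nonzero step
    if step != 0 and all(b - a == step for a, b in zip(flat, flat[1:])):
        return range(flat[0], flat[-1] + step, step)
    return flat
```

For example `append_ranges([range(1, 3), range(3, 7)])` collapses to `range(1, 7)`, while `append_ranges([range(2), range(2)])` falls back to the list `[0, 1, 0, 1]`, matching the test cases in the diff above.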
BUG: Categorical scatter plot has KeyError #16199 | diff --git a/doc/source/whatsnew/v0.20.3.txt b/doc/source/whatsnew/v0.20.3.txt
index 52f7701724f18..0210ec53125b8 100644
--- a/doc/source/whatsnew/v0.20.3.txt
+++ b/doc/source/whatsnew/v0.20.3.txt
@@ -36,6 +36,7 @@ Performance Improvements
Bug Fixes
~~~~~~~~~
+- Fixed ``DataFrame`` scatter plots with categorical data incorrectly raising a ``KeyError`` (column key not found); a clear ``ValueError`` is now raised instead (:issue:`16199`)
diff --git a/pandas/plotting/_core.py b/pandas/plotting/_core.py
index 9169eb86895fb..391fa377f3c6f 100644
--- a/pandas/plotting/_core.py
+++ b/pandas/plotting/_core.py
@@ -778,6 +778,11 @@ def __init__(self, data, x, y, **kwargs):
x = self.data.columns[x]
if is_integer(y) and not self.data.columns.holds_integer():
y = self.data.columns[y]
+ if len(self.data[x]._get_numeric_data()) == 0:
+ raise ValueError(self._kind + ' requires x column to be numeric')
+ if len(self.data[y]._get_numeric_data()) == 0:
+ raise ValueError(self._kind + ' requires y column to be numeric')
+
self.x = x
self.y = y
diff --git a/pandas/tests/plotting/test_frame.py b/pandas/tests/plotting/test_frame.py
index e40ec5a1faea8..ba674e10be384 100644
--- a/pandas/tests/plotting/test_frame.py
+++ b/pandas/tests/plotting/test_frame.py
@@ -915,6 +915,24 @@ def test_plot_scatter(self):
axes = df.plot(x='x', y='y', kind='scatter', subplots=True)
self._check_axes_shape(axes, axes_num=1, layout=(1, 1))
+ @slow
+ def test_plot_scatter_with_categorical_data(self):
+ # GH 16199
+ df = pd.DataFrame({'x': [1, 2, 3, 4],
+ 'y': pd.Categorical(['a', 'b', 'a', 'c'])})
+
+ with pytest.raises(ValueError) as ve:
+ df.plot(x='x', y='y', kind='scatter')
+ ve.match('requires y column to be numeric')
+
+ with pytest.raises(ValueError) as ve:
+ df.plot(x='y', y='x', kind='scatter')
+ ve.match('requires x column to be numeric')
+
+ with pytest.raises(ValueError) as ve:
+ df.plot(x='y', y='y', kind='scatter')
+ ve.match('requires x column to be numeric')
+
@slow
def test_plot_scatter_with_c(self):
df = DataFrame(randn(6, 4),
Appropriately handles categorical data for DataFrame scatter plots, which
currently raise a KeyError when a categorical column is used.
- [x] closes #16199
- [x] tests added / passed
- [x] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff``
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/16208 | 2017-05-03T01:58:51Z | 2017-06-12T21:55:23Z | 2017-06-12T21:55:23Z | 2017-07-07T13:12:14Z |
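The guard added in `_core.py` above boils down to "both plot columns must contain at least some numeric data". Here is a stdlib-only sketch of the same check, using a plain dict of column lists in place of a DataFrame (`validate_scatter_columns` is an illustrative name, not a pandas function):

```python
def validate_scatter_columns(data, x, y, kind='scatter'):
    """Raise ValueError when the x or y column has no numeric data,
    mirroring the up-front check this PR adds before plotting."""
    for axis, key in (('x', x), ('y', y)):
        numeric = [v for v in data[key]
                   if isinstance(v, (int, float)) and not isinstance(v, bool)]
        if not numeric:
            raise ValueError(kind + ' requires ' + axis +
                             ' column to be numeric')
```

As in the PR, the check fires per axis, so plotting a categorical column on either `x` or `y` reports which axis is at fault instead of failing later with an unrelated `KeyError`.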
COMPAT: ensure proper extension dtype's don't pickle the cache | diff --git a/pandas/core/dtypes/dtypes.py b/pandas/core/dtypes/dtypes.py
index 59c23addd418e..561f1951a4151 100644
--- a/pandas/core/dtypes/dtypes.py
+++ b/pandas/core/dtypes/dtypes.py
@@ -24,6 +24,7 @@ class ExtensionDtype(object):
isbuiltin = 0
isnative = 0
_metadata = []
+ _cache = {}
def __unicode__(self):
return self.name
@@ -71,6 +72,15 @@ def __eq__(self, other):
def __ne__(self, other):
return not self.__eq__(other)
+ def __getstate__(self):
+ # pickle support; we don't want to pickle the cache
+ return {k: getattr(self, k, None) for k in self._metadata}
+
+ @classmethod
+ def reset_cache(cls):
+ """ clear the cache """
+ cls._cache = {}
+
@classmethod
def is_dtype(cls, dtype):
""" Return a boolean if the passed type is an actual dtype that
@@ -110,6 +120,7 @@ class CategoricalDtype(ExtensionDtype):
kind = 'O'
str = '|O08'
base = np.dtype('O')
+ _metadata = []
_cache = {}
def __new__(cls):
@@ -408,9 +419,15 @@ def __new__(cls, subtype=None):
if isinstance(subtype, IntervalDtype):
return subtype
- elif subtype is None or (isinstance(subtype, compat.string_types) and
- subtype == 'interval'):
- subtype = None
+ elif subtype is None:
+ # we are called as an empty constructor
+ # generally for pickle compat
+ u = object.__new__(cls)
+ u.subtype = None
+ return u
+ elif (isinstance(subtype, compat.string_types) and
+ subtype == 'interval'):
+ subtype = ''
else:
if isinstance(subtype, compat.string_types):
m = cls._match.search(subtype)
@@ -423,6 +440,11 @@ def __new__(cls, subtype=None):
except TypeError:
raise ValueError("could not construct IntervalDtype")
+ if subtype is None:
+ u = object.__new__(cls)
+ u.subtype = None
+ return u
+
try:
return cls._cache[str(subtype)]
except KeyError:
diff --git a/pandas/tests/dtypes/test_dtypes.py b/pandas/tests/dtypes/test_dtypes.py
index da3120145fe38..fb20571213c15 100644
--- a/pandas/tests/dtypes/test_dtypes.py
+++ b/pandas/tests/dtypes/test_dtypes.py
@@ -23,6 +23,9 @@
class Base(object):
+ def setup_method(self, method):
+ self.dtype = self.create()
+
def test_hash(self):
hash(self.dtype)
@@ -37,14 +40,38 @@ def test_numpy_informed(self):
assert not np.str_ == self.dtype
def test_pickle(self):
+ # make sure our cache is NOT pickled
+
+ # clear the cache
+ type(self.dtype).reset_cache()
+ assert not len(self.dtype._cache)
+
+ # force back to the cache
result = tm.round_trip_pickle(self.dtype)
+ assert not len(self.dtype._cache)
assert result == self.dtype
-class TestCategoricalDtype(Base, tm.TestCase):
+class TestCategoricalDtype(Base):
+
+ def create(self):
+ return CategoricalDtype()
+
+ def test_pickle(self):
+ # make sure our cache is NOT pickled
+
+ # clear the cache
+ type(self.dtype).reset_cache()
+ assert not len(self.dtype._cache)
- def setUp(self):
- self.dtype = CategoricalDtype()
+ # force back to the cache
+ result = tm.round_trip_pickle(self.dtype)
+
+ # we are a singular object so we are added
+ # back to the cache upon unpickling
+ # this is to ensure object identity
+ assert len(self.dtype._cache) == 1
+ assert result == self.dtype
def test_hash_vs_equality(self):
# make sure that we satisfy is semantics
@@ -93,10 +120,10 @@ def test_basic(self):
assert not is_categorical(1.0)
-class TestDatetimeTZDtype(Base, tm.TestCase):
+class TestDatetimeTZDtype(Base):
- def setUp(self):
- self.dtype = DatetimeTZDtype('ns', 'US/Eastern')
+ def create(self):
+ return DatetimeTZDtype('ns', 'US/Eastern')
def test_hash_vs_equality(self):
# make sure that we satisfy is semantics
@@ -209,10 +236,24 @@ def test_empty(self):
str(dt)
-class TestPeriodDtype(Base, tm.TestCase):
+class TestPeriodDtype(Base):
- def setUp(self):
- self.dtype = PeriodDtype('D')
+ def create(self):
+ return PeriodDtype('D')
+
+ def test_hash_vs_equality(self):
+ # make sure that we satisfy is semantics
+ dtype = self.dtype
+ dtype2 = PeriodDtype('D')
+ dtype3 = PeriodDtype(dtype2)
+ assert dtype == dtype2
+ assert dtype2 == dtype
+ assert dtype3 == dtype
+ assert dtype is dtype2
+ assert dtype2 is dtype
+ assert dtype3 is dtype
+ assert hash(dtype) == hash(dtype2)
+ assert hash(dtype) == hash(dtype3)
def test_construction(self):
with pytest.raises(ValueError):
@@ -338,11 +379,37 @@ def test_not_string(self):
assert not is_string_dtype(PeriodDtype('D'))
-class TestIntervalDtype(Base, tm.TestCase):
+class TestIntervalDtype(Base):
+
+ def create(self):
+ return IntervalDtype('int64')
+
+ def test_hash_vs_equality(self):
+ # make sure that we satisfy is semantics
+ dtype = self.dtype
+ dtype2 = IntervalDtype('int64')
+ dtype3 = IntervalDtype(dtype2)
+ assert dtype == dtype2
+ assert dtype2 == dtype
+ assert dtype3 == dtype
+ assert dtype is dtype2
+ assert dtype2 is dtype
+ assert dtype3 is dtype
+ assert hash(dtype) == hash(dtype2)
+ assert hash(dtype) == hash(dtype3)
- # TODO: placeholder
- def setUp(self):
- self.dtype = IntervalDtype('int64')
+ dtype1 = IntervalDtype('interval')
+ dtype2 = IntervalDtype(dtype1)
+ dtype3 = IntervalDtype('interval')
+ assert dtype2 == dtype1
+ assert dtype2 == dtype2
+ assert dtype2 == dtype3
+ assert dtype2 is dtype1
+ assert dtype2 is dtype2
+ assert dtype2 is dtype3
+ assert hash(dtype2) == hash(dtype1)
+ assert hash(dtype2) == hash(dtype2)
+ assert hash(dtype2) == hash(dtype3)
def test_construction(self):
with pytest.raises(ValueError):
@@ -356,9 +423,9 @@ def test_construction(self):
def test_construction_generic(self):
# generic
i = IntervalDtype('interval')
- assert i.subtype is None
+ assert i.subtype == ''
assert is_interval_dtype(i)
- assert str(i) == 'interval'
+ assert str(i) == 'interval[]'
i = IntervalDtype()
assert i.subtype is None
@@ -445,3 +512,15 @@ def test_basic_dtype(self):
assert not is_interval_dtype(np.object_)
assert not is_interval_dtype(np.int64)
assert not is_interval_dtype(np.float64)
+
+ def test_caching(self):
+ IntervalDtype.reset_cache()
+ dtype = IntervalDtype("int64")
+ assert len(IntervalDtype._cache) == 1
+
+ IntervalDtype("interval")
+ assert len(IntervalDtype._cache) == 2
+
+ IntervalDtype.reset_cache()
+ tm.round_trip_pickle(dtype)
+ assert len(IntervalDtype._cache) == 0
| xref #16201 | https://api.github.com/repos/pandas-dev/pandas/pulls/16207 | 2017-05-02T22:59:33Z | 2017-05-03T00:54:59Z | 2017-05-03T00:54:59Z | 2017-05-03T00:58:40Z |
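The pattern being fixed here — a class-level instance cache whose contents must not leak into pickles — can be reduced to a small stdlib example. `CachedDtype` is an illustrative stand-in for the pandas extension dtypes; the `__getstate__` mirrors the one added in the diff, restricting pickled state to the attributes named in `_metadata`:

```python
import pickle


class CachedDtype:
    """A type with a shared per-class cache of instances.

    ``__getstate__`` pickles only the attributes listed in ``_metadata``,
    so nothing else attached to an instance is serialized with it.
    """
    _metadata = ['subtype']
    _cache = {}

    def __init__(self, subtype):
        self.subtype = subtype

    def __getstate__(self):
        # Pickle only declared metadata, never any other instance state.
        return {k: getattr(self, k, None) for k in self._metadata}

    def __setstate__(self, state):
        self.__dict__.update(state)

    @classmethod
    def reset_cache(cls):
        """Clear the shared cache (as the tests in this PR do)."""
        cls._cache = {}
```

A round trip through `pickle` then reconstructs an instance from its metadata alone, which is also why the PR's `__new__` paths need to handle an empty constructor call for pickle compatibility.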
DOC: Remove various warnings from doc build | diff --git a/doc/source/basics.rst b/doc/source/basics.rst
index 7a056203ed447..134cc5106015b 100644
--- a/doc/source/basics.rst
+++ b/doc/source/basics.rst
@@ -1004,6 +1004,7 @@ Transform the entire frame. ``.transform()`` allows input functions as: a numpy
function name or a user defined function.
.. ipython:: python
+ :okwarning:
tsdf.transform(np.abs)
tsdf.transform('abs')
@@ -1055,6 +1056,7 @@ Passing a dict of lists will generate a multi-indexed DataFrame with these
selective transforms.
.. ipython:: python
+ :okwarning:
tsdf.transform({'A': np.abs, 'B': [lambda x: x+1, 'sqrt']})
diff --git a/doc/source/cookbook.rst b/doc/source/cookbook.rst
index 8466b3d3c3297..62aa487069132 100644
--- a/doc/source/cookbook.rst
+++ b/doc/source/cookbook.rst
@@ -968,7 +968,8 @@ You can use the same approach to read all files matching a pattern. Here is an
Finally, this strategy will work with the other ``pd.read_*(...)`` functions described in the :ref:`io docs<io>`.
.. ipython:: python
- :supress:
+ :suppress:
+
for i in range(3):
os.remove('file_{}.csv'.format(i))
diff --git a/doc/source/whatsnew/v0.19.0.txt b/doc/source/whatsnew/v0.19.0.txt
index 0c9bb029b9b68..bc5e278df743f 100644
--- a/doc/source/whatsnew/v0.19.0.txt
+++ b/doc/source/whatsnew/v0.19.0.txt
@@ -479,7 +479,7 @@ Other enhancements
df.resample('M', on='date').sum()
df.resample('M', level='d').sum()
-- The ``.get_credentials()`` method of ``GbqConnector`` can now first try to fetch `the application default credentials <https://developers.google.com/identity/protocols/application-default-credentials>`__. See the :ref:`docs <io.bigquery_authentication>` for more details (:issue:`13577`).
+- The ``.get_credentials()`` method of ``GbqConnector`` can now first try to fetch `the application default credentials <https://developers.google.com/identity/protocols/application-default-credentials>`__. See the docs for more details (:issue:`13577`).
- The ``.tz_localize()`` method of ``DatetimeIndex`` and ``Timestamp`` has gained the ``errors`` keyword, so you can potentially coerce nonexistent timestamps to ``NaT``. The default behavior remains to raising a ``NonExistentTimeError`` (:issue:`13057`)
- ``.to_hdf/read_hdf()`` now accept path objects (e.g. ``pathlib.Path``, ``py.path.local``) for the file path (:issue:`11773`)
- The ``pd.read_csv()`` with ``engine='python'`` has gained support for the
diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
index 4882acbe820ea..230c7c0b90ac0 100644
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -83,6 +83,7 @@ of all unique functions. Those that are not noted for a particular column will b
The API also supports a ``.transform()`` function to provide for broadcasting results.
.. ipython:: python
+ :okwarning:
df.transform(['abs', lambda x: x - x.min()])
@@ -373,26 +374,28 @@ Experimental support has been added to export ``DataFrame.style`` formats to Exc
For example, after running the following, ``styled.xlsx`` renders as below:
.. ipython:: python
+ :okwarning:
- np.random.seed(24)
- df = pd.DataFrame({'A': np.linspace(1, 10, 10)})
- df = pd.concat([df, pd.DataFrame(np.random.RandomState(24).randn(10, 4),
- columns=list('BCDE'))],
- axis=1)
- df.iloc[0, 2] = np.nan
- df
- styled = df.style.\
- applymap(lambda val: 'color: %s' % 'red' if val < 0 else 'black').\
- apply(lambda s: ['background-color: yellow' if v else ''
- for v in s == s.max()])
- styled.to_excel('styled.xlsx', engine='openpyxl')
+ np.random.seed(24)
+ df = pd.DataFrame({'A': np.linspace(1, 10, 10)})
+ df = pd.concat([df, pd.DataFrame(np.random.RandomState(24).randn(10, 4),
+ columns=list('BCDE'))],
+ axis=1)
+ df.iloc[0, 2] = np.nan
+ df
+ styled = df.style.\
+ applymap(lambda val: 'color: %s' % 'red' if val < 0 else 'black').\
+ apply(lambda s: ['background-color: yellow' if v else ''
+ for v in s == s.max()])
+ styled.to_excel('styled.xlsx', engine='openpyxl')
.. image:: _static/style-excel.png
.. ipython:: python
- :suppress:
- import os
- os.remove('styled.xlsx')
+ :suppress:
+
+ import os
+ os.remove('styled.xlsx')
See the :ref:`Style documentation <style.ipynb#Export-to-Excel>` for more detail.
@@ -490,7 +493,7 @@ Other Enhancements
- ``Series.interpolate()`` now supports timedelta as an index type with ``method='time'`` (:issue:`6424`)
- Addition of a ``level`` keyword to ``DataFrame/Series.rename`` to rename
labels in the specified level of a MultiIndex (:issue:`4160`).
-- ``DataFrame.reset_index()`` will now interpret a tuple ``index.name`` as a key spanning across levels of ``columns``, if this is a ``MultiIndex`` (:issues:`16164`)
+- ``DataFrame.reset_index()`` will now interpret a tuple ``index.name`` as a key spanning across levels of ``columns``, if this is a ``MultiIndex`` (:issue:`16164`)
- ``Timedelta.isoformat`` method added for formatting Timedeltas as an `ISO 8601 duration`_. See the :ref:`Timedelta docs <timedeltas.isoformat>` (:issue:`15136`)
- ``.select_dtypes()`` now allows the string ``datetimetz`` to generically select datetimes with tz (:issue:`14910`)
- The ``.to_latex()`` method will now accept ``multicolumn`` and ``multirow`` arguments to use the accompanying LaTeX enhancements
diff --git a/doc/source/whatsnew/v0.8.0.txt b/doc/source/whatsnew/v0.8.0.txt
index 4136c108fba57..b9cece752981e 100644
--- a/doc/source/whatsnew/v0.8.0.txt
+++ b/doc/source/whatsnew/v0.8.0.txt
@@ -168,7 +168,6 @@ New plotting methods
fx['FR'].plot(style='g')
- @savefig whatsnew_secondary_y.png
fx['IT'].plot(style='k--', secondary_y=True)
Vytautas Jancauskas, the 2012 GSOC participant, has added many new plot
@@ -180,7 +179,6 @@ types. For example, ``'kde'`` is a new option:
np.random.randn(1000) * 0.5 + 3)))
plt.figure()
s.hist(normed=True, alpha=0.2)
- @savefig whatsnew_kde.png
s.plot(kind='kde')
See :ref:`the plotting page <visualization.other>` for much more.
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index f14e7bf6bd183..ccd0d8bee4abc 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -99,7 +99,7 @@ class IntervalIndex(IntervalMixin, Index):
.. versionadded:: 0.20.0
- Properties
+ Attributes
----------
left, right : array-like (1-dimensional)
Left and right bounds for each interval.
diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index f1ff2966dca48..71c61998be092 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -982,7 +982,9 @@ def bar(self, subset=None, axis=0, color='#d65f5f', width=100,
"""
Color the background ``color`` proptional to the values in each column.
Excludes non-numeric data by default.
+
.. versionadded:: 0.17.1
+
Parameters
----------
subset: IndexSlice, default None
| This gets a handful, though it hasn't addressed https://github.com/pandas-dev/pandas/pull/16050 yet. | https://api.github.com/repos/pandas-dev/pandas/pulls/16206 | 2017-05-02T21:02:30Z | 2017-05-03T18:16:45Z | 2017-05-03T18:16:45Z | 2023-05-11T01:15:29Z |
BUG: Fixed renaming of falsey names in build_table_schema | diff --git a/pandas/io/json/table_schema.py b/pandas/io/json/table_schema.py
index d8ef3afc9591f..c3865afa9c0c0 100644
--- a/pandas/io/json/table_schema.py
+++ b/pandas/io/json/table_schema.py
@@ -76,7 +76,11 @@ def set_default_names(data):
def make_field(arr, dtype=None):
dtype = dtype or arr.dtype
- field = {'name': arr.name or 'values',
+ if arr.name is None:
+ name = 'values'
+ else:
+ name = arr.name
+ field = {'name': name,
'type': as_json_table_type(dtype)}
if is_categorical_dtype(arr):
diff --git a/pandas/tests/io/json/test_json_table_schema.py b/pandas/tests/io/json/test_json_table_schema.py
index 0f77a886dd302..c3a976973bb29 100644
--- a/pandas/tests/io/json/test_json_table_schema.py
+++ b/pandas/tests/io/json/test_json_table_schema.py
@@ -461,3 +461,11 @@ def test_overlapping_names(self):
data.to_json(orient='table')
assert 'Overlapping' in str(excinfo.value)
+
+ def test_mi_falsey_name(self):
+ # GH 16203
+ df = pd.DataFrame(np.random.randn(4, 4),
+ index=pd.MultiIndex.from_product([('A', 'B'),
+ ('a', 'b')]))
+ result = [x['name'] for x in build_table_schema(df)['fields']]
+ assert result == ['level_0', 'level_1', 0, 1, 2, 3]
| Closes https://github.com/pandas-dev/pandas/issues/16203 | https://api.github.com/repos/pandas-dev/pandas/pulls/16205 | 2017-05-02T19:49:56Z | 2017-05-03T01:22:50Z | 2017-05-03T01:22:50Z | 2017-05-03T18:24:27Z |
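The one-line fix in this diff replaces `arr.name or 'values'` with an explicit `is None` check. The difference matters because `or` falls through for *every* falsey value — including the integer column label `0` that the new `test_mi_falsey_name` exercises — not just for a missing name. A standalone sketch of the two behaviors (helper names are illustrative):

```python
def field_name_buggy(name):
    # `or` substitutes the default for any falsey name: None, 0, '', ...
    return name or 'values'

def field_name_fixed(name):
    # Only a genuinely missing name gets the default
    return 'values' if name is None else name

assert field_name_buggy(None) == 'values'   # intended behavior
assert field_name_buggy(0) == 'values'      # bug: label 0 silently renamed
assert field_name_fixed(0) == 0             # fix: label 0 preserved
assert field_name_fixed(None) == 'values'   # default still applied
```

The same hazard applies to empty-string names, which is why the test asserts the literal `[..., 0, 1, 2, 3]` column labels survive the schema round trip.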
API Change repr name for table schema | diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py
index 81fb8090a7afe..7e6ffaaffb72b 100644
--- a/pandas/core/config_init.py
+++ b/pandas/core/config_init.py
@@ -9,7 +9,6 @@
module is imported, register them here rather then in the module.
"""
-import sys
import warnings
import pandas.core.config as cf
@@ -342,36 +341,8 @@ def mpl_style_cb(key):
def table_schema_cb(key):
- # first, check if we are in IPython
- if 'IPython' not in sys.modules:
- # definitely not in IPython
- return
- from IPython import get_ipython
- ip = get_ipython()
- if ip is None:
- # still not in IPython
- return
-
- formatters = ip.display_formatter.formatters
-
- mimetype = "application/vnd.dataresource+json"
-
- if cf.get_option(key):
- if mimetype not in formatters:
- # define tableschema formatter
- from IPython.core.formatters import BaseFormatter
-
- class TableSchemaFormatter(BaseFormatter):
- print_method = '_repr_table_schema_'
- _return_type = (dict,)
- # register it:
- formatters[mimetype] = TableSchemaFormatter()
- # enable it if it's been disabled:
- formatters[mimetype].enabled = True
- else:
- # unregister tableschema mime-type
- if mimetype in formatters:
- formatters[mimetype].enabled = False
+ from pandas.io.formats.printing import _enable_data_resource_formatter
+ _enable_data_resource_formatter(cf.get_option(key))
with cf.config_prefix('display'):
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index b3498583f6e14..2bc64795b5f20 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -129,7 +129,7 @@ def __init__(self, data, axes=None, copy=False, dtype=None,
object.__setattr__(self, '_data', data)
object.__setattr__(self, '_item_cache', {})
- def _repr_table_schema_(self):
+ def _repr_data_resource_(self):
"""
Not a real Jupyter special repr method, but we use the same
naming convention.
diff --git a/pandas/io/formats/printing.py b/pandas/io/formats/printing.py
index 5ea47df2c817f..cbad603630bd3 100644
--- a/pandas/io/formats/printing.py
+++ b/pandas/io/formats/printing.py
@@ -2,6 +2,7 @@
printing tools
"""
+import sys
from pandas.core.dtypes.inference import is_sequence
from pandas import compat
from pandas.compat import u
@@ -233,3 +234,34 @@ def as_escaped_unicode(thing, escape_chars=escape_chars):
def pprint_thing_encoded(object, encoding='utf-8', errors='replace', **kwds):
value = pprint_thing(object) # get unicode representation of object
return value.encode(encoding, errors, **kwds)
+
+
+def _enable_data_resource_formatter(enable):
+ if 'IPython' not in sys.modules:
+ # definitely not in IPython
+ return
+ from IPython import get_ipython
+ ip = get_ipython()
+ if ip is None:
+ # still not in IPython
+ return
+
+ formatters = ip.display_formatter.formatters
+ mimetype = "application/vnd.dataresource+json"
+
+ if enable:
+ if mimetype not in formatters:
+ # define tableschema formatter
+ from IPython.core.formatters import BaseFormatter
+
+ class TableSchemaFormatter(BaseFormatter):
+ print_method = '_repr_data_resource_'
+ _return_type = (dict,)
+ # register it:
+ formatters[mimetype] = TableSchemaFormatter()
+ # enable it if it's been disabled:
+ formatters[mimetype].enabled = True
+ else:
+ # unregister tableschema mime-type
+ if mimetype in formatters:
+ formatters[mimetype].enabled = False
diff --git a/pandas/tests/io/formats/test_printing.py b/pandas/tests/io/formats/test_printing.py
index 3acd5c7a5e8c5..44fbd5a958d8c 100644
--- a/pandas/tests/io/formats/test_printing.py
+++ b/pandas/tests/io/formats/test_printing.py
@@ -180,23 +180,19 @@ def test_publishes_not_implemented(self):
def test_config_on(self):
df = pd.DataFrame({"A": [1, 2]})
with pd.option_context("display.html.table_schema", True):
- result = df._repr_table_schema_()
+ result = df._repr_data_resource_()
assert result is not None
def test_config_default_off(self):
df = pd.DataFrame({"A": [1, 2]})
with pd.option_context("display.html.table_schema", False):
- result = df._repr_table_schema_()
+ result = df._repr_data_resource_()
assert result is None
- def test_config_monkeypatches(self):
+ def test_enable_data_resource_formatter(self):
# GH 10491
- df = pd.DataFrame({"A": [1, 2]})
- assert not hasattr(df, '_ipython_display_')
- assert not hasattr(df['A'], '_ipython_display_')
-
formatters = self.display_formatter.formatters
mimetype = 'application/vnd.dataresource+json'
| Not API breaking, since pandas 0.20.0 hasn't been released yet.
`_repr_table_schema_` isn't the right name, since we include both the schema and the data.
xref https://github.com/pandas-dev/pandas/pull/16198#discussion_r114342141
@rgbkrk do you need any kind of backwards compatibility with `_repr_table_schema_`? | https://api.github.com/repos/pandas-dev/pandas/pulls/16204 | 2017-05-02T19:38:10Z | 2017-05-03T18:23:29Z | 2017-05-03T18:23:28Z | 2017-05-03T18:23:44Z |
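The `_enable_data_resource_formatter` helper moved in this diff follows a register-once, toggle-thereafter pattern: the formatter object is created only on first enable, then switched on and off via its `enabled` flag rather than being removed from the registry. A simplified model with a plain dict standing in for IPython's formatter registry (names other than the mimetype are illustrative):

```python
# The mimetype string is taken verbatim from the diff above.
MIMETYPE = "application/vnd.dataresource+json"

class DummyFormatter:
    """Stand-in for IPython's BaseFormatter subclass in the diff."""
    def __init__(self):
        self.enabled = False

def enable_formatter(formatters, enable):
    if enable:
        if MIMETYPE not in formatters:
            # register on first enable only
            formatters[MIMETYPE] = DummyFormatter()
        # re-enable if it was previously disabled
        formatters[MIMETYPE].enabled = True
    elif MIMETYPE in formatters:
        # disabling toggles the flag; the entry stays registered
        formatters[MIMETYPE].enabled = False

registry = {}
enable_formatter(registry, True)
assert registry[MIMETYPE].enabled
enable_formatter(registry, False)
assert MIMETYPE in registry and not registry[MIMETYPE].enabled
```

Keeping the entry registered but disabled means repeated option flips never accumulate duplicate formatters.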
DEPR: correct deprecation message for datetools | diff --git a/pandas/util/depr_module.py b/pandas/util/depr_module.py
index b438c91d980af..9c648b76fdad1 100644
--- a/pandas/util/depr_module.py
+++ b/pandas/util/depr_module.py
@@ -83,8 +83,7 @@ def __getattr__(self, name):
FutureWarning, stacklevel=2)
else:
if deprmodto is None:
- deprmodto = "{modname}.{name}".format(
- modname=obj.__module__, name=name)
+ deprmodto = obj.__module__
# The object is actually located in another module.
warnings.warn(
"{deprmod}.{name} is deprecated. Please use "
| Small error in the datetools depr message I just noticed:
Before ("Day.Day"):
```
In [33]: from pandas import datetools
In [34]: datetools.Day(1)
/home/joris/miniconda3/envs/dev/bin/ipython:1: FutureWarning: pandas.core.datetools.Day is deprecated. Please use pandas.tseries.offsets.Day.Day instead.
#!/home/joris/miniconda3/envs/dev/bin/python
Out[34]: <Day>
```
After:
```
In [2]: datetools.Day(1)
/home/joris/miniconda3/envs/dev/bin/ipython:1: FutureWarning: pandas.core.datetools.Day is deprecated. Please use pandas.tseries.offsets.Day instead.
#!/home/joris/miniconda3/envs/dev/bin/python
Out[2]: <Day>
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/16202 | 2017-05-02T17:11:30Z | 2017-05-03T19:18:42Z | 2017-05-03T19:18:42Z | 2017-05-03T22:15:43Z |
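The bug this PR fixes is a double-append: the old fallback built `deprmodto` as `"{modname}.{name}"`, but the warning template appends `.{name}` again, yielding "Day.Day". A self-contained reproduction of the message formatting (the module path matches the PR body; the template wording is an abbreviated stand-in for the real warning text):

```python
modname = 'pandas.tseries.offsets'   # obj.__module__ in the real code
name = 'Day'                          # the attribute being accessed

# Old fallback already folded the name into deprmodto
deprmodto_old = "{modname}.{name}".format(modname=modname, name=name)
# Fixed fallback: just the module, the template adds the name itself
deprmodto_new = modname

template = "Please use {deprmodto}.{name} instead."

assert template.format(deprmodto=deprmodto_old, name=name) == \
    "Please use pandas.tseries.offsets.Day.Day instead."   # duplicated
assert template.format(deprmodto=deprmodto_new, name=name) == \
    "Please use pandas.tseries.offsets.Day instead."       # fixed
```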
MAINT: Complete Conversion to Pytest Idiom | diff --git a/doc/source/contributing.rst b/doc/source/contributing.rst
index 08e28582e7469..26a2f56f3c1a1 100644
--- a/doc/source/contributing.rst
+++ b/doc/source/contributing.rst
@@ -632,14 +632,6 @@ framework that will facilitate testing and developing. Thus, instead of writing
def test_really_cool_feature():
....
-Sometimes, it does make sense to bundle test functions together into a single class, either because the test file is testing multiple functions from a single module, and
-using test classes allows for better organization. However, instead of inheriting from ``tm.TestCase``, we should just inherit from ``object``:
-
-.. code-block:: python
-
- class TestReallyCoolFeature(object):
- ....
-
Using ``pytest``
~~~~~~~~~~~~~~~~
diff --git a/pandas/tests/computation/test_eval.py b/pandas/tests/computation/test_eval.py
index f8f84985142a8..5086b803419c6 100644
--- a/pandas/tests/computation/test_eval.py
+++ b/pandas/tests/computation/test_eval.py
@@ -98,8 +98,8 @@ def _is_py3_complex_incompat(result, expected):
class TestEvalNumexprPandas(tm.TestCase):
@classmethod
- def setUpClass(cls):
- super(TestEvalNumexprPandas, cls).setUpClass()
+ def setup_class(cls):
+ super(TestEvalNumexprPandas, cls).setup_class()
tm.skip_if_no_ne()
import numexpr as ne
cls.ne = ne
@@ -107,8 +107,8 @@ def setUpClass(cls):
cls.parser = 'pandas'
@classmethod
- def tearDownClass(cls):
- super(TestEvalNumexprPandas, cls).tearDownClass()
+ def teardown_class(cls):
+ super(TestEvalNumexprPandas, cls).teardown_class()
del cls.engine, cls.parser
if hasattr(cls, 'ne'):
del cls.ne
@@ -137,12 +137,12 @@ def setup_ops(self):
self.arith_ops = _good_arith_ops
self.unary_ops = '-', '~', 'not '
- def setUp(self):
+ def setup_method(self, method):
self.setup_ops()
self.setup_data()
self.current_engines = filter(lambda x: x != self.engine, _engines)
- def tearDown(self):
+ def teardown_method(self, method):
del self.lhses, self.rhses, self.scalar_rhses, self.scalar_lhses
del self.pandas_rhses, self.pandas_lhses, self.current_engines
@@ -723,8 +723,8 @@ def test_float_truncation(self):
class TestEvalNumexprPython(TestEvalNumexprPandas):
@classmethod
- def setUpClass(cls):
- super(TestEvalNumexprPython, cls).setUpClass()
+ def setup_class(cls):
+ super(TestEvalNumexprPython, cls).setup_class()
tm.skip_if_no_ne()
import numexpr as ne
cls.ne = ne
@@ -750,8 +750,8 @@ def check_chained_cmp_op(self, lhs, cmp1, mid, cmp2, rhs):
class TestEvalPythonPython(TestEvalNumexprPython):
@classmethod
- def setUpClass(cls):
- super(TestEvalPythonPython, cls).setUpClass()
+ def setup_class(cls):
+ super(TestEvalPythonPython, cls).setup_class()
cls.engine = 'python'
cls.parser = 'python'
@@ -780,8 +780,8 @@ def check_alignment(self, result, nlhs, ghs, op):
class TestEvalPythonPandas(TestEvalPythonPython):
@classmethod
- def setUpClass(cls):
- super(TestEvalPythonPandas, cls).setUpClass()
+ def setup_class(cls):
+ super(TestEvalPythonPandas, cls).setup_class()
cls.engine = 'python'
cls.parser = 'pandas'
@@ -1070,16 +1070,16 @@ def test_performance_warning_for_poor_alignment(self, engine, parser):
class TestOperationsNumExprPandas(tm.TestCase):
@classmethod
- def setUpClass(cls):
- super(TestOperationsNumExprPandas, cls).setUpClass()
+ def setup_class(cls):
+ super(TestOperationsNumExprPandas, cls).setup_class()
tm.skip_if_no_ne()
cls.engine = 'numexpr'
cls.parser = 'pandas'
cls.arith_ops = expr._arith_ops_syms + expr._cmp_ops_syms
@classmethod
- def tearDownClass(cls):
- super(TestOperationsNumExprPandas, cls).tearDownClass()
+ def teardown_class(cls):
+ super(TestOperationsNumExprPandas, cls).teardown_class()
del cls.engine, cls.parser
def eval(self, *args, **kwargs):
@@ -1492,8 +1492,8 @@ def test_simple_in_ops(self):
class TestOperationsNumExprPython(TestOperationsNumExprPandas):
@classmethod
- def setUpClass(cls):
- super(TestOperationsNumExprPython, cls).setUpClass()
+ def setup_class(cls):
+ super(TestOperationsNumExprPython, cls).setup_class()
cls.engine = 'numexpr'
cls.parser = 'python'
tm.skip_if_no_ne(cls.engine)
@@ -1566,8 +1566,8 @@ def test_simple_bool_ops(self):
class TestOperationsPythonPython(TestOperationsNumExprPython):
@classmethod
- def setUpClass(cls):
- super(TestOperationsPythonPython, cls).setUpClass()
+ def setup_class(cls):
+ super(TestOperationsPythonPython, cls).setup_class()
cls.engine = cls.parser = 'python'
cls.arith_ops = expr._arith_ops_syms + expr._cmp_ops_syms
cls.arith_ops = filter(lambda x: x not in ('in', 'not in'),
@@ -1577,8 +1577,8 @@ def setUpClass(cls):
class TestOperationsPythonPandas(TestOperationsNumExprPandas):
@classmethod
- def setUpClass(cls):
- super(TestOperationsPythonPandas, cls).setUpClass()
+ def setup_class(cls):
+ super(TestOperationsPythonPandas, cls).setup_class()
cls.engine = 'python'
cls.parser = 'pandas'
cls.arith_ops = expr._arith_ops_syms + expr._cmp_ops_syms
@@ -1587,8 +1587,8 @@ def setUpClass(cls):
class TestMathPythonPython(tm.TestCase):
@classmethod
- def setUpClass(cls):
- super(TestMathPythonPython, cls).setUpClass()
+ def setup_class(cls):
+ super(TestMathPythonPython, cls).setup_class()
tm.skip_if_no_ne()
cls.engine = 'python'
cls.parser = 'pandas'
@@ -1596,7 +1596,7 @@ def setUpClass(cls):
cls.binary_fns = _binary_math_ops
@classmethod
- def tearDownClass(cls):
+ def teardown_class(cls):
del cls.engine, cls.parser
def eval(self, *args, **kwargs):
@@ -1694,8 +1694,8 @@ def test_keyword_arg(self):
class TestMathPythonPandas(TestMathPythonPython):
@classmethod
- def setUpClass(cls):
- super(TestMathPythonPandas, cls).setUpClass()
+ def setup_class(cls):
+ super(TestMathPythonPandas, cls).setup_class()
cls.engine = 'python'
cls.parser = 'pandas'
@@ -1703,8 +1703,8 @@ def setUpClass(cls):
class TestMathNumExprPandas(TestMathPythonPython):
@classmethod
- def setUpClass(cls):
- super(TestMathNumExprPandas, cls).setUpClass()
+ def setup_class(cls):
+ super(TestMathNumExprPandas, cls).setup_class()
cls.engine = 'numexpr'
cls.parser = 'pandas'
@@ -1712,8 +1712,8 @@ def setUpClass(cls):
class TestMathNumExprPython(TestMathPythonPython):
@classmethod
- def setUpClass(cls):
- super(TestMathNumExprPython, cls).setUpClass()
+ def setup_class(cls):
+ super(TestMathNumExprPython, cls).setup_class()
cls.engine = 'numexpr'
cls.parser = 'python'
diff --git a/pandas/tests/frame/test_asof.py b/pandas/tests/frame/test_asof.py
index ba3e239756f51..4207238f0cd4f 100644
--- a/pandas/tests/frame/test_asof.py
+++ b/pandas/tests/frame/test_asof.py
@@ -10,7 +10,7 @@
class TestFrameAsof(TestData, tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.N = N = 50
self.rng = date_range('1/1/1990', periods=N, freq='53s')
self.df = DataFrame({'A': np.arange(N), 'B': np.arange(N)},
diff --git a/pandas/tests/frame/test_indexing.py b/pandas/tests/frame/test_indexing.py
index 75d4263cbe68f..42eb7148d616e 100644
--- a/pandas/tests/frame/test_indexing.py
+++ b/pandas/tests/frame/test_indexing.py
@@ -2914,7 +2914,7 @@ def test_type_error_multiindex(self):
class TestDataFrameIndexingDatetimeWithTZ(tm.TestCase, TestData):
- def setUp(self):
+ def setup_method(self, method):
self.idx = Index(date_range('20130101', periods=3, tz='US/Eastern'),
name='foo')
self.dr = date_range('20130110', periods=3)
@@ -2972,7 +2972,7 @@ def test_transpose(self):
class TestDataFrameIndexingUInt64(tm.TestCase, TestData):
- def setUp(self):
+ def setup_method(self, method):
self.ir = Index(np.arange(3), dtype=np.uint64)
self.idx = Index([2**63, 2**63 + 5, 2**63 + 10], name='foo')
diff --git a/pandas/tests/frame/test_period.py b/pandas/tests/frame/test_period.py
index 826ece2ed2c9b..49de3b8e8cd9b 100644
--- a/pandas/tests/frame/test_period.py
+++ b/pandas/tests/frame/test_period.py
@@ -14,7 +14,7 @@ def _permute(obj):
class TestPeriodIndex(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
pass
def test_as_frame_columns(self):
diff --git a/pandas/tests/frame/test_query_eval.py b/pandas/tests/frame/test_query_eval.py
index 80db2c50c3eb6..6a06e3f4872ce 100644
--- a/pandas/tests/frame/test_query_eval.py
+++ b/pandas/tests/frame/test_query_eval.py
@@ -4,7 +4,6 @@
import operator
import pytest
-from itertools import product
from pandas.compat import (zip, range, lrange, StringIO)
from pandas import DataFrame, Series, Index, MultiIndex, date_range
@@ -27,6 +26,16 @@
ENGINES = 'python', 'numexpr'
+@pytest.fixture(params=PARSERS, ids=lambda x: x)
+def parser(request):
+ return request.param
+
+
+@pytest.fixture(params=ENGINES, ids=lambda x: x)
+def engine(request):
+ return request.param
+
+
def skip_if_no_pandas_parser(parser):
if parser != 'pandas':
pytest.skip("cannot evaluate with parser {0!r}".format(parser))
@@ -41,7 +50,7 @@ def skip_if_no_ne(engine='numexpr'):
class TestCompat(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.df = DataFrame({'A': [1, 2, 3]})
self.expected1 = self.df[self.df.A > 0]
self.expected2 = self.df.A + 1
@@ -165,8 +174,9 @@ def test_eval_resolvers_as_list(self):
class TestDataFrameQueryWithMultiIndex(tm.TestCase):
- def check_query_with_named_multiindex(self, parser, engine):
+ def test_query_with_named_multiindex(self, parser, engine):
tm.skip_if_no_ne(engine)
+ skip_if_no_pandas_parser(parser)
a = np.random.choice(['red', 'green'], size=10)
b = np.random.choice(['eggs', 'ham'], size=10)
index = MultiIndex.from_arrays([a, b], names=['color', 'food'])
@@ -214,12 +224,9 @@ def check_query_with_named_multiindex(self, parser, engine):
assert_frame_equal(res1, exp)
assert_frame_equal(res2, exp)
- def test_query_with_named_multiindex(self):
- for parser, engine in product(['pandas'], ENGINES):
- yield self.check_query_with_named_multiindex, parser, engine
-
- def check_query_with_unnamed_multiindex(self, parser, engine):
+ def test_query_with_unnamed_multiindex(self, parser, engine):
tm.skip_if_no_ne(engine)
+ skip_if_no_pandas_parser(parser)
a = np.random.choice(['red', 'green'], size=10)
b = np.random.choice(['eggs', 'ham'], size=10)
index = MultiIndex.from_arrays([a, b])
@@ -308,12 +315,9 @@ def check_query_with_unnamed_multiindex(self, parser, engine):
assert_frame_equal(res1, exp)
assert_frame_equal(res2, exp)
- def test_query_with_unnamed_multiindex(self):
- for parser, engine in product(['pandas'], ENGINES):
- yield self.check_query_with_unnamed_multiindex, parser, engine
-
- def check_query_with_partially_named_multiindex(self, parser, engine):
+ def test_query_with_partially_named_multiindex(self, parser, engine):
tm.skip_if_no_ne(engine)
+ skip_if_no_pandas_parser(parser)
a = np.random.choice(['red', 'green'], size=10)
b = np.arange(10)
index = MultiIndex.from_arrays([a, b])
@@ -341,17 +345,7 @@ def check_query_with_partially_named_multiindex(self, parser, engine):
exp = df[ind != "red"]
assert_frame_equal(res, exp)
- def test_query_with_partially_named_multiindex(self):
- for parser, engine in product(['pandas'], ENGINES):
- yield (self.check_query_with_partially_named_multiindex,
- parser, engine)
-
def test_query_multiindex_get_index_resolvers(self):
- for parser, engine in product(['pandas'], ENGINES):
- yield (self.check_query_multiindex_get_index_resolvers, parser,
- engine)
-
- def check_query_multiindex_get_index_resolvers(self, parser, engine):
df = mkdf(10, 3, r_idx_nlevels=2, r_idx_names=['spam', 'eggs'])
resolvers = df._get_index_resolvers()
@@ -375,22 +369,14 @@ def to_series(mi, level):
else:
raise AssertionError("object must be a Series or Index")
- def test_raise_on_panel_with_multiindex(self):
- for parser, engine in product(PARSERS, ENGINES):
- yield self.check_raise_on_panel_with_multiindex, parser, engine
-
- def check_raise_on_panel_with_multiindex(self, parser, engine):
+ def test_raise_on_panel_with_multiindex(self, parser, engine):
tm.skip_if_no_ne()
p = tm.makePanel(7)
p.items = tm.makeCustomIndex(len(p.items), nlevels=2)
with pytest.raises(NotImplementedError):
pd.eval('p + 1', parser=parser, engine=engine)
- def test_raise_on_panel4d_with_multiindex(self):
- for parser, engine in product(PARSERS, ENGINES):
- yield self.check_raise_on_panel4d_with_multiindex, parser, engine
-
- def check_raise_on_panel4d_with_multiindex(self, parser, engine):
+ def test_raise_on_panel4d_with_multiindex(self, parser, engine):
tm.skip_if_no_ne()
p4d = tm.makePanel4D(7)
p4d.items = tm.makeCustomIndex(len(p4d.items), nlevels=2)
@@ -401,15 +387,15 @@ def check_raise_on_panel4d_with_multiindex(self, parser, engine):
class TestDataFrameQueryNumExprPandas(tm.TestCase):
@classmethod
- def setUpClass(cls):
- super(TestDataFrameQueryNumExprPandas, cls).setUpClass()
+ def setup_class(cls):
+ super(TestDataFrameQueryNumExprPandas, cls).setup_class()
cls.engine = 'numexpr'
cls.parser = 'pandas'
tm.skip_if_no_ne(cls.engine)
@classmethod
- def tearDownClass(cls):
- super(TestDataFrameQueryNumExprPandas, cls).tearDownClass()
+ def teardown_class(cls):
+ super(TestDataFrameQueryNumExprPandas, cls).teardown_class()
del cls.engine, cls.parser
def test_date_query_with_attribute_access(self):
@@ -733,8 +719,8 @@ def test_inf(self):
class TestDataFrameQueryNumExprPython(TestDataFrameQueryNumExprPandas):
@classmethod
- def setUpClass(cls):
- super(TestDataFrameQueryNumExprPython, cls).setUpClass()
+ def setup_class(cls):
+ super(TestDataFrameQueryNumExprPython, cls).setup_class()
cls.engine = 'numexpr'
cls.parser = 'python'
tm.skip_if_no_ne(cls.engine)
@@ -834,8 +820,8 @@ def test_nested_scope(self):
class TestDataFrameQueryPythonPandas(TestDataFrameQueryNumExprPandas):
@classmethod
- def setUpClass(cls):
- super(TestDataFrameQueryPythonPandas, cls).setUpClass()
+ def setup_class(cls):
+ super(TestDataFrameQueryPythonPandas, cls).setup_class()
cls.engine = 'python'
cls.parser = 'pandas'
cls.frame = TestData().frame
@@ -855,8 +841,8 @@ def test_query_builtin(self):
class TestDataFrameQueryPythonPython(TestDataFrameQueryNumExprPython):
@classmethod
- def setUpClass(cls):
- super(TestDataFrameQueryPythonPython, cls).setUpClass()
+ def setup_class(cls):
+ super(TestDataFrameQueryPythonPython, cls).setup_class()
cls.engine = cls.parser = 'python'
cls.frame = TestData().frame
@@ -874,7 +860,7 @@ def test_query_builtin(self):
class TestDataFrameQueryStrings(tm.TestCase):
- def check_str_query_method(self, parser, engine):
+ def test_str_query_method(self, parser, engine):
tm.skip_if_no_ne(engine)
df = DataFrame(randn(10, 1), columns=['b'])
df['strings'] = Series(list('aabbccddee'))
@@ -911,15 +897,7 @@ def check_str_query_method(self, parser, engine):
assert_frame_equal(res, expect)
assert_frame_equal(res, df[~df.strings.isin(['a'])])
- def test_str_query_method(self):
- for parser, engine in product(PARSERS, ENGINES):
- yield self.check_str_query_method, parser, engine
-
- def test_str_list_query_method(self):
- for parser, engine in product(PARSERS, ENGINES):
- yield self.check_str_list_query_method, parser, engine
-
- def check_str_list_query_method(self, parser, engine):
+ def test_str_list_query_method(self, parser, engine):
tm.skip_if_no_ne(engine)
df = DataFrame(randn(10, 1), columns=['b'])
df['strings'] = Series(list('aabbccddee'))
@@ -958,7 +936,7 @@ def check_str_list_query_method(self, parser, engine):
parser=parser)
assert_frame_equal(res, expect)
- def check_query_with_string_columns(self, parser, engine):
+ def test_query_with_string_columns(self, parser, engine):
tm.skip_if_no_ne(engine)
df = DataFrame({'a': list('aaaabbbbcccc'),
'b': list('aabbccddeeff'),
@@ -979,11 +957,7 @@ def check_query_with_string_columns(self, parser, engine):
with pytest.raises(NotImplementedError):
df.query('a in b and c < d', parser=parser, engine=engine)
- def test_query_with_string_columns(self):
- for parser, engine in product(PARSERS, ENGINES):
- yield self.check_query_with_string_columns, parser, engine
-
- def check_object_array_eq_ne(self, parser, engine):
+ def test_object_array_eq_ne(self, parser, engine):
tm.skip_if_no_ne(engine)
df = DataFrame({'a': list('aaaabbbbcccc'),
'b': list('aabbccddeeff'),
@@ -997,11 +971,7 @@ def check_object_array_eq_ne(self, parser, engine):
exp = df[df.a != df.b]
assert_frame_equal(res, exp)
- def test_object_array_eq_ne(self):
- for parser, engine in product(PARSERS, ENGINES):
- yield self.check_object_array_eq_ne, parser, engine
-
- def check_query_with_nested_strings(self, parser, engine):
+ def test_query_with_nested_strings(self, parser, engine):
tm.skip_if_no_ne(engine)
skip_if_no_pandas_parser(parser)
raw = """id event timestamp
@@ -1025,11 +995,7 @@ def check_query_with_nested_strings(self, parser, engine):
engine=engine)
assert_frame_equal(expected, res)
- def test_query_with_nested_string(self):
- for parser, engine in product(PARSERS, ENGINES):
- yield self.check_query_with_nested_strings, parser, engine
-
- def check_query_with_nested_special_character(self, parser, engine):
+ def test_query_with_nested_special_character(self, parser, engine):
skip_if_no_pandas_parser(parser)
tm.skip_if_no_ne(engine)
df = DataFrame({'a': ['a', 'b', 'test & test'],
@@ -1038,12 +1004,7 @@ def check_query_with_nested_special_character(self, parser, engine):
expec = df[df.a == 'test & test']
assert_frame_equal(res, expec)
- def test_query_with_nested_special_character(self):
- for parser, engine in product(PARSERS, ENGINES):
- yield (self.check_query_with_nested_special_character,
- parser, engine)
-
- def check_query_lex_compare_strings(self, parser, engine):
+ def test_query_lex_compare_strings(self, parser, engine):
tm.skip_if_no_ne(engine=engine)
import operator as opr
@@ -1058,11 +1019,7 @@ def check_query_lex_compare_strings(self, parser, engine):
expected = df[func(df.X, 'd')]
assert_frame_equal(res, expected)
- def test_query_lex_compare_strings(self):
- for parser, engine in product(PARSERS, ENGINES):
- yield self.check_query_lex_compare_strings, parser, engine
-
- def check_query_single_element_booleans(self, parser, engine):
+ def test_query_single_element_booleans(self, parser, engine):
tm.skip_if_no_ne(engine)
columns = 'bid', 'bidsize', 'ask', 'asksize'
data = np.random.randint(2, size=(1, len(columns))).astype(bool)
@@ -1071,12 +1028,9 @@ def check_query_single_element_booleans(self, parser, engine):
expected = df[df.bid & df.ask]
assert_frame_equal(res, expected)
- def test_query_single_element_booleans(self):
- for parser, engine in product(PARSERS, ENGINES):
- yield self.check_query_single_element_booleans, parser, engine
-
- def check_query_string_scalar_variable(self, parser, engine):
+ def test_query_string_scalar_variable(self, parser, engine):
tm.skip_if_no_ne(engine)
+ skip_if_no_pandas_parser(parser)
df = pd.DataFrame({'Symbol': ['BUD US', 'BUD US', 'IBM US', 'IBM US'],
'Price': [109.70, 109.72, 183.30, 183.35]})
e = df[df.Symbol == 'BUD US']
@@ -1084,24 +1038,20 @@ def check_query_string_scalar_variable(self, parser, engine):
r = df.query('Symbol == @symb', parser=parser, engine=engine)
assert_frame_equal(e, r)
- def test_query_string_scalar_variable(self):
- for parser, engine in product(['pandas'], ENGINES):
- yield self.check_query_string_scalar_variable, parser, engine
-
class TestDataFrameEvalNumExprPandas(tm.TestCase):
@classmethod
- def setUpClass(cls):
- super(TestDataFrameEvalNumExprPandas, cls).setUpClass()
+ def setup_class(cls):
+ super(TestDataFrameEvalNumExprPandas, cls).setup_class()
cls.engine = 'numexpr'
cls.parser = 'pandas'
tm.skip_if_no_ne()
- def setUp(self):
+ def setup_method(self, method):
self.frame = DataFrame(randn(10, 3), columns=list('abc'))
- def tearDown(self):
+ def teardown_method(self, method):
del self.frame
def test_simple_expr(self):
@@ -1129,8 +1079,8 @@ def test_invalid_type_for_operator_raises(self):
class TestDataFrameEvalNumExprPython(TestDataFrameEvalNumExprPandas):
@classmethod
- def setUpClass(cls):
- super(TestDataFrameEvalNumExprPython, cls).setUpClass()
+ def setup_class(cls):
+ super(TestDataFrameEvalNumExprPython, cls).setup_class()
cls.engine = 'numexpr'
cls.parser = 'python'
tm.skip_if_no_ne(cls.engine)
@@ -1139,8 +1089,8 @@ def setUpClass(cls):
class TestDataFrameEvalPythonPandas(TestDataFrameEvalNumExprPandas):
@classmethod
- def setUpClass(cls):
- super(TestDataFrameEvalPythonPandas, cls).setUpClass()
+ def setup_class(cls):
+ super(TestDataFrameEvalPythonPandas, cls).setup_class()
cls.engine = 'python'
cls.parser = 'pandas'
@@ -1148,6 +1098,6 @@ def setUpClass(cls):
class TestDataFrameEvalPythonPython(TestDataFrameEvalNumExprPython):
@classmethod
-    def setUpClass(cls):
-        super(TestDataFrameEvalPythonPython, cls).tearDownClass()
+    def setup_class(cls):
+        super(TestDataFrameEvalPythonPython, cls).setup_class()
cls.engine = cls.parser = 'python'
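For orientation, the pattern this patch applies across every test module is the straight rename of unittest's camelCase fixtures to pytest's xunit-style hooks: `setUpClass` → `setup_class`, `setUp` → `setup_method(self, method)`, `tearDown` → `teardown_method(self, method)`. The sketch below is illustrative only (the class and attribute names are made up, not taken from the diff); it shows the hook shapes pytest collects, which is all the mechanical edits above rely on:

```python
# Minimal sketch of the pytest xunit-style hooks this patch migrates to.
# Class and attribute names here are hypothetical, for illustration only.
class TestExample(object):

    @classmethod
    def setup_class(cls):
        # runs once before any test in the class (replaces setUpClass)
        cls.engine = 'python'

    def setup_method(self, method):
        # runs before each test method (replaces setUp);
        # `method` is the bound test function about to run
        self.frame = [1, 2, 3]

    def teardown_method(self, method):
        # runs after each test method (replaces tearDown)
        del self.frame

    def test_frame_exists(self):
        assert self.frame == [1, 2, 3]
```

Note that pytest passes the test `method` as an argument to `setup_method`/`teardown_method`, which is why every `super(...).setup_method(method)` call in the diff forwards it even when the body ignores it.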
diff --git a/pandas/tests/frame/test_validate.py b/pandas/tests/frame/test_validate.py
index 4c4abb7e58e75..343853b3fcfa0 100644
--- a/pandas/tests/frame/test_validate.py
+++ b/pandas/tests/frame/test_validate.py
@@ -1,10 +1,10 @@
-from unittest import TestCase
from pandas.core.frame import DataFrame
+import pandas.util.testing as tm
import pytest
-class TestDataFrameValidate(TestCase):
+class TestDataFrameValidate(tm.TestCase):
"""Tests for error handling related to data types of method arguments."""
df = DataFrame({'a': [1, 2], 'b': [3, 4]})
diff --git a/pandas/tests/groupby/common.py b/pandas/tests/groupby/common.py
index f3dccf473f53a..3e99e8211b4f8 100644
--- a/pandas/tests/groupby/common.py
+++ b/pandas/tests/groupby/common.py
@@ -28,7 +28,7 @@ def df():
class MixIn(object):
- def setUp(self):
+ def setup_method(self, method):
self.ts = tm.makeTimeSeries()
self.seriesd = tm.getSeriesData()
diff --git a/pandas/tests/groupby/test_aggregate.py b/pandas/tests/groupby/test_aggregate.py
index 310a5aca77b77..769e4d14d354b 100644
--- a/pandas/tests/groupby/test_aggregate.py
+++ b/pandas/tests/groupby/test_aggregate.py
@@ -27,7 +27,7 @@
class TestGroupByAggregate(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.ts = tm.makeTimeSeries()
self.seriesd = tm.getSeriesData()
diff --git a/pandas/tests/groupby/test_bin_groupby.py b/pandas/tests/groupby/test_bin_groupby.py
index 320acacff483c..bdac535b3d2e2 100644
--- a/pandas/tests/groupby/test_bin_groupby.py
+++ b/pandas/tests/groupby/test_bin_groupby.py
@@ -48,7 +48,7 @@ def test_series_bin_grouper():
class TestBinGroupers(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.obj = np.random.randn(10, 1)
self.labels = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 2], dtype=np.int64)
self.bins = np.array([3, 6], dtype=np.int64)
diff --git a/pandas/tests/groupby/test_filters.py b/pandas/tests/groupby/test_filters.py
index 2cfbe0ab68c8e..b05b938fd8205 100644
--- a/pandas/tests/groupby/test_filters.py
+++ b/pandas/tests/groupby/test_filters.py
@@ -25,7 +25,7 @@
class TestGroupByFilter(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.ts = tm.makeTimeSeries()
self.seriesd = tm.getSeriesData()
diff --git a/pandas/tests/indexes/datetimes/test_astype.py b/pandas/tests/indexes/datetimes/test_astype.py
index 1c8189d0c75ac..185787d75f6e1 100644
--- a/pandas/tests/indexes/datetimes/test_astype.py
+++ b/pandas/tests/indexes/datetimes/test_astype.py
@@ -187,7 +187,7 @@ def _check_rng(rng):
class TestToPeriod(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
data = [Timestamp('2007-01-01 10:11:12.123456Z'),
Timestamp('2007-01-01 10:11:13.789123Z')]
self.index = DatetimeIndex(data)
diff --git a/pandas/tests/indexes/datetimes/test_date_range.py b/pandas/tests/indexes/datetimes/test_date_range.py
index a9fdd40406770..67d6b0f314ecb 100644
--- a/pandas/tests/indexes/datetimes/test_date_range.py
+++ b/pandas/tests/indexes/datetimes/test_date_range.py
@@ -198,7 +198,7 @@ def test_precision_finer_than_offset(self):
class TestBusinessDateRange(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.rng = bdate_range(START, END)
def test_constructor(self):
@@ -483,7 +483,7 @@ def test_freq_divides_end_in_nanos(self):
class TestCustomDateRange(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.rng = cdate_range(START, END)
def test_constructor(self):
diff --git a/pandas/tests/indexes/datetimes/test_datetimelike.py b/pandas/tests/indexes/datetimes/test_datetimelike.py
index 0eb565bf0ec55..2e184b1aa4e51 100644
--- a/pandas/tests/indexes/datetimes/test_datetimelike.py
+++ b/pandas/tests/indexes/datetimes/test_datetimelike.py
@@ -11,7 +11,7 @@
class TestDatetimeIndex(DatetimeLike, tm.TestCase):
_holder = DatetimeIndex
- def setUp(self):
+ def setup_method(self, method):
self.indices = dict(index=tm.makeDateIndex(10))
self.setup_indices()
diff --git a/pandas/tests/indexes/datetimes/test_ops.py b/pandas/tests/indexes/datetimes/test_ops.py
index e25e3d448190e..75c6626b47401 100644
--- a/pandas/tests/indexes/datetimes/test_ops.py
+++ b/pandas/tests/indexes/datetimes/test_ops.py
@@ -23,8 +23,8 @@ class TestDatetimeIndexOps(Ops):
tz = [None, 'UTC', 'Asia/Tokyo', 'US/Eastern', 'dateutil/Asia/Singapore',
'dateutil/US/Pacific']
- def setUp(self):
- super(TestDatetimeIndexOps, self).setUp()
+ def setup_method(self, method):
+ super(TestDatetimeIndexOps, self).setup_method(method)
mask = lambda x: (isinstance(x, DatetimeIndex) or
isinstance(x, PeriodIndex))
self.is_valid_objs = [o for o in self.objs if mask(o)]
@@ -1109,7 +1109,7 @@ def test_shift_months(years, months):
class TestBusinessDatetimeIndex(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.rng = bdate_range(START, END)
def test_comparison(self):
@@ -1209,7 +1209,7 @@ def test_identical(self):
class TestCustomDatetimeIndex(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.rng = cdate_range(START, END)
def test_comparison(self):
diff --git a/pandas/tests/indexes/datetimes/test_setops.py b/pandas/tests/indexes/datetimes/test_setops.py
index b25fdaf6be3b0..fb4b6e9d226f8 100644
--- a/pandas/tests/indexes/datetimes/test_setops.py
+++ b/pandas/tests/indexes/datetimes/test_setops.py
@@ -201,7 +201,7 @@ def test_join_nonunique(self):
class TestBusinessDatetimeIndex(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.rng = bdate_range(START, END)
def test_union(self):
@@ -345,7 +345,7 @@ def test_month_range_union_tz_dateutil(self):
class TestCustomDatetimeIndex(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.rng = cdate_range(START, END)
def test_union(self):
diff --git a/pandas/tests/indexes/period/test_asfreq.py b/pandas/tests/indexes/period/test_asfreq.py
index f9effd3d1aea6..b97be3f61a2dd 100644
--- a/pandas/tests/indexes/period/test_asfreq.py
+++ b/pandas/tests/indexes/period/test_asfreq.py
@@ -8,7 +8,7 @@
class TestPeriodIndex(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
pass
def test_asfreq(self):
diff --git a/pandas/tests/indexes/period/test_construction.py b/pandas/tests/indexes/period/test_construction.py
index a95ad808cadce..b0db27b5f2cea 100644
--- a/pandas/tests/indexes/period/test_construction.py
+++ b/pandas/tests/indexes/period/test_construction.py
@@ -11,7 +11,7 @@
class TestPeriodIndex(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
pass
def test_construction_base_constructor(self):
@@ -475,7 +475,7 @@ def test_map_with_string_constructor(self):
class TestSeriesPeriod(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.series = Series(period_range('2000-01-01', periods=10, freq='D'))
def test_constructor_cant_cast_period(self):
diff --git a/pandas/tests/indexes/period/test_indexing.py b/pandas/tests/indexes/period/test_indexing.py
index ebbe05d51598c..36db56b751633 100644
--- a/pandas/tests/indexes/period/test_indexing.py
+++ b/pandas/tests/indexes/period/test_indexing.py
@@ -13,7 +13,7 @@
class TestGetItem(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
pass
def test_getitem(self):
diff --git a/pandas/tests/indexes/period/test_ops.py b/pandas/tests/indexes/period/test_ops.py
index fb688bda58ae8..583848f75c6b4 100644
--- a/pandas/tests/indexes/period/test_ops.py
+++ b/pandas/tests/indexes/period/test_ops.py
@@ -15,8 +15,8 @@
class TestPeriodIndexOps(Ops):
- def setUp(self):
- super(TestPeriodIndexOps, self).setUp()
+ def setup_method(self, method):
+ super(TestPeriodIndexOps, self).setup_method(method)
mask = lambda x: (isinstance(x, DatetimeIndex) or
isinstance(x, PeriodIndex))
self.is_valid_objs = [o for o in self.objs if mask(o)]
@@ -1137,7 +1137,7 @@ def test_pi_comp_period_nat(self):
class TestSeriesPeriod(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.series = Series(period_range('2000-01-01', periods=10, freq='D'))
def test_ops_series_timedelta(self):
diff --git a/pandas/tests/indexes/period/test_partial_slicing.py b/pandas/tests/indexes/period/test_partial_slicing.py
index 04b4e6795e770..88a9ff5752322 100644
--- a/pandas/tests/indexes/period/test_partial_slicing.py
+++ b/pandas/tests/indexes/period/test_partial_slicing.py
@@ -10,7 +10,7 @@
class TestPeriodIndex(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
pass
def test_slice_with_negative_step(self):
diff --git a/pandas/tests/indexes/period/test_period.py b/pandas/tests/indexes/period/test_period.py
index 6ec567509cd76..11ec3bc215cf8 100644
--- a/pandas/tests/indexes/period/test_period.py
+++ b/pandas/tests/indexes/period/test_period.py
@@ -17,7 +17,7 @@ class TestPeriodIndex(DatetimeLike, tm.TestCase):
_holder = PeriodIndex
_multiprocess_can_split_ = True
- def setUp(self):
+ def setup_method(self, method):
self.indices = dict(index=tm.makePeriodIndex(10))
self.setup_indices()
diff --git a/pandas/tests/indexes/period/test_setops.py b/pandas/tests/indexes/period/test_setops.py
index 025ee7e732a7c..7041724faeb89 100644
--- a/pandas/tests/indexes/period/test_setops.py
+++ b/pandas/tests/indexes/period/test_setops.py
@@ -14,7 +14,7 @@ def _permute(obj):
class TestPeriodIndex(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
pass
def test_joins(self):
diff --git a/pandas/tests/indexes/period/test_tools.py b/pandas/tests/indexes/period/test_tools.py
index 9e5994dd54f50..bd80c2c4f341e 100644
--- a/pandas/tests/indexes/period/test_tools.py
+++ b/pandas/tests/indexes/period/test_tools.py
@@ -152,7 +152,7 @@ def test_period_ordinal_business_day(self):
class TestPeriodIndex(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
pass
def test_tolist(self):
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index 10958681af450..ce3f4b5d68d89 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -32,7 +32,7 @@
class TestIndex(Base, tm.TestCase):
_holder = Index
- def setUp(self):
+ def setup_method(self, method):
self.indices = dict(unicodeIndex=tm.makeUnicodeIndex(100),
strIndex=tm.makeStringIndex(100),
dateIndex=tm.makeDateIndex(100),
@@ -1808,7 +1808,7 @@ class TestMixedIntIndex(Base, tm.TestCase):
_holder = Index
- def setUp(self):
+ def setup_method(self, method):
self.indices = dict(mixedIndex=Index([0, 'a', 1, 'b', 2, 'c']))
self.setup_indices()
diff --git a/pandas/tests/indexes/test_category.py b/pandas/tests/indexes/test_category.py
index 6a2eea0b84b72..94349b4860698 100644
--- a/pandas/tests/indexes/test_category.py
+++ b/pandas/tests/indexes/test_category.py
@@ -22,7 +22,7 @@
class TestCategoricalIndex(Base, tm.TestCase):
_holder = CategoricalIndex
- def setUp(self):
+ def setup_method(self, method):
self.indices = dict(catIndex=tm.makeCategoricalIndex(100))
self.setup_indices()
diff --git a/pandas/tests/indexes/test_frozen.py b/pandas/tests/indexes/test_frozen.py
index ed2e3d94aa4a4..ae4a130c24310 100644
--- a/pandas/tests/indexes/test_frozen.py
+++ b/pandas/tests/indexes/test_frozen.py
@@ -9,7 +9,7 @@ class TestFrozenList(CheckImmutable, CheckStringMixin, tm.TestCase):
mutable_methods = ('extend', 'pop', 'remove', 'insert')
unicode_container = FrozenList([u("\u05d0"), u("\u05d1"), "c"])
- def setUp(self):
+ def setup_method(self, method):
self.lst = [1, 2, 3, 4, 5]
self.container = FrozenList(self.lst)
self.klass = FrozenList
@@ -35,7 +35,7 @@ class TestFrozenNDArray(CheckImmutable, CheckStringMixin, tm.TestCase):
mutable_methods = ('put', 'itemset', 'fill')
unicode_container = FrozenNDArray([u("\u05d0"), u("\u05d1"), "c"])
- def setUp(self):
+ def setup_method(self, method):
self.lst = [3, 5, 7, -2]
self.container = FrozenNDArray(self.lst)
self.klass = FrozenNDArray
diff --git a/pandas/tests/indexes/test_interval.py b/pandas/tests/indexes/test_interval.py
index 00897f290f292..90e5b1b6c9788 100644
--- a/pandas/tests/indexes/test_interval.py
+++ b/pandas/tests/indexes/test_interval.py
@@ -15,7 +15,7 @@
class TestIntervalIndex(Base, tm.TestCase):
_holder = IntervalIndex
- def setUp(self):
+ def setup_method(self, method):
self.index = IntervalIndex.from_arrays([0, 1], [1, 2])
self.index_with_nan = IntervalIndex.from_tuples(
[(0, 1), np.nan, (1, 2)])
@@ -721,7 +721,7 @@ def f():
class TestIntervalTree(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
gentree = lambda dtype: IntervalTree(np.arange(5, dtype=dtype),
np.arange(5, dtype=dtype) + 2)
self.tree = gentree('int64')
diff --git a/pandas/tests/indexes/test_multi.py b/pandas/tests/indexes/test_multi.py
index a840711e37fb0..d2024340c522e 100644
--- a/pandas/tests/indexes/test_multi.py
+++ b/pandas/tests/indexes/test_multi.py
@@ -31,7 +31,7 @@ class TestMultiIndex(Base, tm.TestCase):
_holder = MultiIndex
_compat_props = ['shape', 'ndim', 'size', 'itemsize']
- def setUp(self):
+ def setup_method(self, method):
major_axis = Index(['foo', 'bar', 'baz', 'qux'])
minor_axis = Index(['one', 'two'])
diff --git a/pandas/tests/indexes/test_numeric.py b/pandas/tests/indexes/test_numeric.py
index 428c261df5654..e82b1c5e74543 100644
--- a/pandas/tests/indexes/test_numeric.py
+++ b/pandas/tests/indexes/test_numeric.py
@@ -179,7 +179,7 @@ def test_modulo(self):
class TestFloat64Index(Numeric, tm.TestCase):
_holder = Float64Index
- def setUp(self):
+ def setup_method(self, method):
self.indices = dict(mixed=Float64Index([1.5, 2, 3, 4, 5]),
float=Float64Index(np.arange(5) * 2.5))
self.setup_indices()
@@ -625,7 +625,7 @@ class TestInt64Index(NumericInt, tm.TestCase):
_dtype = 'int64'
_holder = Int64Index
- def setUp(self):
+ def setup_method(self, method):
self.indices = dict(index=Int64Index(np.arange(0, 20, 2)))
self.setup_indices()
@@ -920,7 +920,7 @@ class TestUInt64Index(NumericInt, tm.TestCase):
_dtype = 'uint64'
_holder = UInt64Index
- def setUp(self):
+ def setup_method(self, method):
self.indices = dict(index=UInt64Index([2**63, 2**63 + 10, 2**63 + 15,
2**63 + 20, 2**63 + 25]))
self.setup_indices()
diff --git a/pandas/tests/indexes/test_range.py b/pandas/tests/indexes/test_range.py
index 0379718b004e1..cc3a76aa7cac1 100644
--- a/pandas/tests/indexes/test_range.py
+++ b/pandas/tests/indexes/test_range.py
@@ -24,7 +24,7 @@ class TestRangeIndex(Numeric, tm.TestCase):
_holder = RangeIndex
_compat_props = ['shape', 'ndim', 'size', 'itemsize']
- def setUp(self):
+ def setup_method(self, method):
self.indices = dict(index=RangeIndex(0, 20, 2, name='foo'))
self.setup_indices()
diff --git a/pandas/tests/indexes/timedeltas/test_astype.py b/pandas/tests/indexes/timedeltas/test_astype.py
index 6e82f165e4909..b9720f4a300d1 100644
--- a/pandas/tests/indexes/timedeltas/test_astype.py
+++ b/pandas/tests/indexes/timedeltas/test_astype.py
@@ -14,7 +14,7 @@ class TestTimedeltaIndex(DatetimeLike, tm.TestCase):
_holder = TimedeltaIndex
_multiprocess_can_split_ = True
- def setUp(self):
+ def setup_method(self, method):
self.indices = dict(index=tm.makeTimedeltaIndex(10))
self.setup_indices()
diff --git a/pandas/tests/indexes/timedeltas/test_ops.py b/pandas/tests/indexes/timedeltas/test_ops.py
index 474dd283530c5..12d29dc00e273 100644
--- a/pandas/tests/indexes/timedeltas/test_ops.py
+++ b/pandas/tests/indexes/timedeltas/test_ops.py
@@ -16,8 +16,8 @@
class TestTimedeltaIndexOps(Ops):
- def setUp(self):
- super(TestTimedeltaIndexOps, self).setUp()
+ def setup_method(self, method):
+ super(TestTimedeltaIndexOps, self).setup_method(method)
mask = lambda x: isinstance(x, TimedeltaIndex)
self.is_valid_objs = [o for o in self.objs if mask(o)]
self.not_valid_objs = []
diff --git a/pandas/tests/indexes/timedeltas/test_timedelta.py b/pandas/tests/indexes/timedeltas/test_timedelta.py
index d1379973dfec5..933674c425cd8 100644
--- a/pandas/tests/indexes/timedeltas/test_timedelta.py
+++ b/pandas/tests/indexes/timedeltas/test_timedelta.py
@@ -20,7 +20,7 @@ class TestTimedeltaIndex(DatetimeLike, tm.TestCase):
_holder = TimedeltaIndex
_multiprocess_can_split_ = True
- def setUp(self):
+ def setup_method(self, method):
self.indices = dict(index=tm.makeTimedeltaIndex(10))
self.setup_indices()
diff --git a/pandas/tests/indexing/common.py b/pandas/tests/indexing/common.py
index bd5b7f45a6f4c..259a8aea94df0 100644
--- a/pandas/tests/indexing/common.py
+++ b/pandas/tests/indexing/common.py
@@ -31,7 +31,7 @@ class Base(object):
_typs = set(['ints', 'uints', 'labels', 'mixed',
'ts', 'floats', 'empty', 'ts_rev'])
- def setUp(self):
+ def setup_method(self, method):
self.series_ints = Series(np.random.rand(4), index=lrange(0, 8, 2))
self.frame_ints = DataFrame(np.random.randn(4, 4),
diff --git a/pandas/tests/indexing/test_categorical.py b/pandas/tests/indexing/test_categorical.py
index f9fcef16c12d4..6d2723ae0ff01 100644
--- a/pandas/tests/indexing/test_categorical.py
+++ b/pandas/tests/indexing/test_categorical.py
@@ -12,7 +12,7 @@
class TestCategoricalIndex(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.df = DataFrame({'A': np.arange(6, dtype='int64'),
'B': Series(list('aabbca')).astype(
diff --git a/pandas/tests/indexing/test_coercion.py b/pandas/tests/indexing/test_coercion.py
index 56bc8c1d72bb8..8e81a3bd1df7a 100644
--- a/pandas/tests/indexing/test_coercion.py
+++ b/pandas/tests/indexing/test_coercion.py
@@ -1146,7 +1146,7 @@ class TestReplaceSeriesCoercion(CoercionBase, tm.TestCase):
klasses = ['series']
method = 'replace'
- def setUp(self):
+ def setup_method(self, method):
self.rep = {}
self.rep['object'] = ['a', 'b']
self.rep['int64'] = [4, 5]
diff --git a/pandas/tests/indexing/test_interval.py b/pandas/tests/indexing/test_interval.py
index bccc21ed6c086..b8d8739af1d15 100644
--- a/pandas/tests/indexing/test_interval.py
+++ b/pandas/tests/indexing/test_interval.py
@@ -8,7 +8,7 @@
class TestIntervalIndex(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.s = Series(np.arange(5), IntervalIndex.from_breaks(np.arange(6)))
def test_loc_with_scalar(self):
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index ac00e441047dd..3cea731cfd440 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -107,14 +107,14 @@ def has_expanded_repr(df):
class TestDataFrameFormatting(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.warn_filters = warnings.filters
warnings.filterwarnings('ignore', category=FutureWarning,
module=".*format")
self.frame = _frame.copy()
- def tearDown(self):
+ def teardown_method(self, method):
warnings.filters = self.warn_filters
def test_repr_embedded_ndarray(self):
@@ -1606,7 +1606,7 @@ def gen_series_formatting():
class TestSeriesFormatting(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.ts = tm.makeTimeSeries()
def test_repr_unicode(self):
diff --git a/pandas/tests/io/formats/test_printing.py b/pandas/tests/io/formats/test_printing.py
index 44fbd5a958d8c..05b697ffbb756 100644
--- a/pandas/tests/io/formats/test_printing.py
+++ b/pandas/tests/io/formats/test_printing.py
@@ -126,7 +126,7 @@ def test_ambiguous_width(self):
class TestTableSchemaRepr(tm.TestCase):
@classmethod
- def setUpClass(cls):
+ def setup_class(cls):
pytest.importorskip('IPython')
try:
import mock
diff --git a/pandas/tests/io/formats/test_style.py b/pandas/tests/io/formats/test_style.py
index 1cd338479bd0c..687e78e64a3e7 100644
--- a/pandas/tests/io/formats/test_style.py
+++ b/pandas/tests/io/formats/test_style.py
@@ -5,16 +5,15 @@
import numpy as np
import pandas as pd
from pandas import DataFrame
-from pandas.util.testing import TestCase
import pandas.util.testing as tm
jinja2 = pytest.importorskip('jinja2')
from pandas.io.formats.style import Styler, _get_level_lengths # noqa
-class TestStyler(TestCase):
+class TestStyler(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
np.random.seed(24)
self.s = DataFrame({'A': np.random.permutation(range(6))})
self.df = DataFrame({'A': [0, 1], 'B': np.random.randn(2)})
@@ -813,10 +812,10 @@ def test_mi_sparse_column_names(self):
assert head == expected
-@tm.mplskip
-class TestStylerMatplotlibDep(TestCase):
+class TestStylerMatplotlibDep(tm.TestCase):
def test_background_gradient(self):
+ tm._skip_if_no_mpl()
df = pd.DataFrame([[1, 2], [2, 4]], columns=['A', 'B'])
for c_map in [None, 'YlOrRd']:
diff --git a/pandas/tests/io/json/test_json_table_schema.py b/pandas/tests/io/json/test_json_table_schema.py
index c3a976973bb29..1e667245809ec 100644
--- a/pandas/tests/io/json/test_json_table_schema.py
+++ b/pandas/tests/io/json/test_json_table_schema.py
@@ -19,7 +19,7 @@
class TestBuildSchema(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.df = DataFrame(
{'A': [1, 2, 3, 4],
'B': ['a', 'b', 'c', 'c'],
@@ -171,7 +171,7 @@ def test_as_json_table_type_categorical_dtypes(self):
class TestTableOrient(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.df = DataFrame(
{'A': [1, 2, 3, 4],
'B': ['a', 'b', 'c', 'c'],
diff --git a/pandas/tests/io/json/test_pandas.py b/pandas/tests/io/json/test_pandas.py
index 2e92910f82b74..0cf9000fcffb2 100644
--- a/pandas/tests/io/json/test_pandas.py
+++ b/pandas/tests/io/json/test_pandas.py
@@ -37,7 +37,7 @@
class TestPandasContainer(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.dirpath = tm.get_data_path()
self.ts = tm.makeTimeSeries()
@@ -59,7 +59,7 @@ def setUp(self):
self.mixed_frame = _mixed_frame.copy()
self.categorical = _cat_frame.copy()
- def tearDown(self):
+ def teardown_method(self, method):
del self.dirpath
del self.ts
diff --git a/pandas/tests/io/json/test_ujson.py b/pandas/tests/io/json/test_ujson.py
index b749cd150d445..a23ae225c19b0 100644
--- a/pandas/tests/io/json/test_ujson.py
+++ b/pandas/tests/io/json/test_ujson.py
@@ -1,7 +1,5 @@
# -*- coding: utf-8 -*-
-from unittest import TestCase
-
try:
import json
except ImportError:
@@ -27,7 +25,7 @@
else partial(json.dumps, encoding="utf-8"))
-class UltraJSONTests(TestCase):
+class UltraJSONTests(tm.TestCase):
@pytest.mark.skipif(compat.is_platform_32bit(),
reason="not compliant on 32-bit, xref #15865")
@@ -948,7 +946,7 @@ def my_obj_handler(obj):
ujson.decode(ujson.encode(l, default_handler=str)))
-class NumpyJSONTests(TestCase):
+class NumpyJSONTests(tm.TestCase):
def testBool(self):
b = np.bool(True)
@@ -1224,7 +1222,7 @@ def testArrayNumpyLabelled(self):
assert (np.array(['a', 'b']) == output[2]).all()
-class PandasJSONTests(TestCase):
+class PandasJSONTests(tm.TestCase):
def testDataFrame(self):
df = DataFrame([[1, 2, 3], [4, 5, 6]], index=[
diff --git a/pandas/tests/io/parser/test_network.py b/pandas/tests/io/parser/test_network.py
index cabee76dd6dfc..26b5c4788d53a 100644
--- a/pandas/tests/io/parser/test_network.py
+++ b/pandas/tests/io/parser/test_network.py
@@ -49,7 +49,7 @@ def check_compressed_urls(salaries_table, compression, extension, mode,
class TestS3(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
try:
import s3fs # noqa
except ImportError:
diff --git a/pandas/tests/io/parser/test_parsers.py b/pandas/tests/io/parser/test_parsers.py
index 2ae557a7d57db..cced8299691df 100644
--- a/pandas/tests/io/parser/test_parsers.py
+++ b/pandas/tests/io/parser/test_parsers.py
@@ -42,7 +42,7 @@ def read_table(self, *args, **kwargs):
def float_precision_choices(self):
raise AbstractMethodError(self)
- def setUp(self):
+ def setup_method(self, method):
self.dirpath = tm.get_data_path()
self.csv1 = os.path.join(self.dirpath, 'test1.csv')
self.csv2 = os.path.join(self.dirpath, 'test2.csv')
diff --git a/pandas/tests/io/parser/test_textreader.py b/pandas/tests/io/parser/test_textreader.py
index d8ae66a2b275c..f09d8c8e778d5 100644
--- a/pandas/tests/io/parser/test_textreader.py
+++ b/pandas/tests/io/parser/test_textreader.py
@@ -28,7 +28,7 @@
class TestTextReader(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.dirpath = tm.get_data_path()
self.csv1 = os.path.join(self.dirpath, 'test1.csv')
self.csv2 = os.path.join(self.dirpath, 'test2.csv')
diff --git a/pandas/tests/io/sas/test_sas7bdat.py b/pandas/tests/io/sas/test_sas7bdat.py
index afd40e7017cff..cb28ab6c6c345 100644
--- a/pandas/tests/io/sas/test_sas7bdat.py
+++ b/pandas/tests/io/sas/test_sas7bdat.py
@@ -8,7 +8,7 @@
class TestSAS7BDAT(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.dirpath = tm.get_data_path()
self.data = []
self.test_ix = [list(range(1, 16)), [16]]
diff --git a/pandas/tests/io/sas/test_xport.py b/pandas/tests/io/sas/test_xport.py
index 2ed7ebbbfce32..17b286a4915ce 100644
--- a/pandas/tests/io/sas/test_xport.py
+++ b/pandas/tests/io/sas/test_xport.py
@@ -18,7 +18,7 @@ def numeric_as_float(data):
class TestXport(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.dirpath = tm.get_data_path()
self.file01 = os.path.join(self.dirpath, "DEMO_G.xpt")
self.file02 = os.path.join(self.dirpath, "SSHSV1_A.xpt")
diff --git a/pandas/tests/io/test_clipboard.py b/pandas/tests/io/test_clipboard.py
index 756dd0db8c3b7..e9ffb2dca7ae5 100644
--- a/pandas/tests/io/test_clipboard.py
+++ b/pandas/tests/io/test_clipboard.py
@@ -26,8 +26,8 @@
class TestClipboard(tm.TestCase):
@classmethod
- def setUpClass(cls):
- super(TestClipboard, cls).setUpClass()
+ def setup_class(cls):
+ super(TestClipboard, cls).setup_class()
cls.data = {}
cls.data['string'] = mkdf(5, 3, c_idx_type='s', r_idx_type='i',
c_idx_names=[None], r_idx_names=[None])
@@ -62,8 +62,8 @@ def setUpClass(cls):
cls.data_types = list(cls.data.keys())
@classmethod
- def tearDownClass(cls):
- super(TestClipboard, cls).tearDownClass()
+ def teardown_class(cls):
+ super(TestClipboard, cls).teardown_class()
del cls.data_types, cls.data
def check_round_trip_frame(self, data_type, excel=None, sep=None,
diff --git a/pandas/tests/io/test_common.py b/pandas/tests/io/test_common.py
index c427fab4103e0..1837e5381a07e 100644
--- a/pandas/tests/io/test_common.py
+++ b/pandas/tests/io/test_common.py
@@ -92,7 +92,7 @@ def test_iterator(self):
class TestMMapWrapper(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.mmap_file = os.path.join(tm.get_data_path(),
'test_mmap.csv')
diff --git a/pandas/tests/io/test_excel.py b/pandas/tests/io/test_excel.py
index d733f26b2c04d..919c521f22f60 100644
--- a/pandas/tests/io/test_excel.py
+++ b/pandas/tests/io/test_excel.py
@@ -84,7 +84,7 @@ def _skip_if_no_s3fs():
class SharedItems(object):
- def setUp(self):
+ def setup_method(self, method):
self.dirpath = tm.get_data_path()
self.frame = _frame.copy()
self.frame2 = _frame2.copy()
@@ -161,9 +161,9 @@ class ReadingTestsBase(SharedItems):
# 3. Add a property engine_name, which is the name of the reader class.
# For the reader this is not used for anything at the moment.
- def setUp(self):
+ def setup_method(self, method):
self.check_skip()
- super(ReadingTestsBase, self).setUp()
+ super(ReadingTestsBase, self).setup_method(method)
def test_parse_cols_int(self):
@@ -1019,14 +1019,14 @@ class ExcelWriterBase(SharedItems):
# Test with MultiIndex and Hierarchical Rows as merged cells.
merge_cells = True
- def setUp(self):
+ def setup_method(self, method):
self.check_skip()
- super(ExcelWriterBase, self).setUp()
+ super(ExcelWriterBase, self).setup_method(method)
self.option_name = 'io.excel.%s.writer' % self.ext.strip('.')
self.prev_engine = get_option(self.option_name)
set_option(self.option_name, self.engine_name)
- def tearDown(self):
+ def teardown_method(self, method):
set_option(self.option_name, self.prev_engine)
def test_excel_sheet_by_name_raise(self):
@@ -1926,7 +1926,7 @@ def skip_openpyxl_gt21(cls):
"""Skip a TestCase instance if openpyxl >= 2.2"""
@classmethod
- def setUpClass(cls):
+ def setup_class(cls):
_skip_if_no_openpyxl()
import openpyxl
ver = openpyxl.__version__
@@ -1934,7 +1934,7 @@ def setUpClass(cls):
LooseVersion(ver) < LooseVersion('2.2.0'))):
pytest.skip("openpyxl %s >= 2.2" % str(ver))
- cls.setUpClass = setUpClass
+ cls.setup_class = setup_class
return cls
@@ -2043,14 +2043,14 @@ def skip_openpyxl_lt22(cls):
"""Skip a TestCase instance if openpyxl < 2.2"""
@classmethod
- def setUpClass(cls):
+ def setup_class(cls):
_skip_if_no_openpyxl()
import openpyxl
ver = openpyxl.__version__
if LooseVersion(ver) < LooseVersion('2.2.0'):
pytest.skip("openpyxl %s < 2.2" % str(ver))
- cls.setUpClass = setUpClass
+ cls.setup_class = setup_class
return cls
diff --git a/pandas/tests/io/test_gbq.py b/pandas/tests/io/test_gbq.py
index 138def3ea1ac9..47fc495201754 100644
--- a/pandas/tests/io/test_gbq.py
+++ b/pandas/tests/io/test_gbq.py
@@ -97,7 +97,7 @@ def make_mixed_dataframe_v2(test_size):
class TestToGBQIntegrationWithServiceAccountKeyPath(tm.TestCase):
@classmethod
- def setUpClass(cls):
+ def setup_class(cls):
# - GLOBAL CLASS FIXTURES -
# put here any instruction you want to execute only *ONCE* *BEFORE*
# executing *ALL* tests described below.
@@ -111,7 +111,7 @@ def setUpClass(cls):
).create(DATASET_ID + "1")
@classmethod
- def tearDownClass(cls):
+ def teardown_class(cls):
# - GLOBAL CLASS FIXTURES -
# put here any instruction you want to execute only *ONCE* *AFTER*
# executing all tests.
diff --git a/pandas/tests/io/test_html.py b/pandas/tests/io/test_html.py
index 0a79173df731c..6b1215e443b47 100644
--- a/pandas/tests/io/test_html.py
+++ b/pandas/tests/io/test_html.py
@@ -99,8 +99,8 @@ class TestReadHtml(tm.TestCase, ReadHtmlMixin):
banklist_data = os.path.join(DATA_PATH, 'banklist.html')
@classmethod
- def setUpClass(cls):
- super(TestReadHtml, cls).setUpClass()
+ def setup_class(cls):
+ super(TestReadHtml, cls).setup_class()
_skip_if_none_of(('bs4', 'html5lib'))
def test_to_html_compat(self):
@@ -783,8 +783,8 @@ class TestReadHtmlEncoding(tm.TestCase):
flavor = 'bs4'
@classmethod
- def setUpClass(cls):
- super(TestReadHtmlEncoding, cls).setUpClass()
+ def setup_class(cls):
+ super(TestReadHtmlEncoding, cls).setup_class()
_skip_if_none_of((cls.flavor, 'html5lib'))
def read_html(self, *args, **kwargs):
@@ -825,8 +825,8 @@ class TestReadHtmlEncodingLxml(TestReadHtmlEncoding):
flavor = 'lxml'
@classmethod
- def setUpClass(cls):
- super(TestReadHtmlEncodingLxml, cls).setUpClass()
+ def setup_class(cls):
+ super(TestReadHtmlEncodingLxml, cls).setup_class()
_skip_if_no(cls.flavor)
@@ -834,8 +834,8 @@ class TestReadHtmlLxml(tm.TestCase, ReadHtmlMixin):
flavor = 'lxml'
@classmethod
- def setUpClass(cls):
- super(TestReadHtmlLxml, cls).setUpClass()
+ def setup_class(cls):
+ super(TestReadHtmlLxml, cls).setup_class()
_skip_if_no('lxml')
def test_data_fail(self):
diff --git a/pandas/tests/io/test_packers.py b/pandas/tests/io/test_packers.py
index 451cce125e228..96abf3415fff8 100644
--- a/pandas/tests/io/test_packers.py
+++ b/pandas/tests/io/test_packers.py
@@ -92,10 +92,10 @@ def check_arbitrary(a, b):
class TestPackers(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.path = '__%s__.msg' % tm.rands(10)
- def tearDown(self):
+ def teardown_method(self, method):
pass
def encode_decode(self, x, compress=None, **kwargs):
@@ -301,8 +301,8 @@ def test_timedeltas(self):
class TestIndex(TestPackers):
- def setUp(self):
- super(TestIndex, self).setUp()
+ def setup_method(self, method):
+ super(TestIndex, self).setup_method(method)
self.d = {
'string': tm.makeStringIndex(100),
@@ -364,8 +364,8 @@ def categorical_index(self):
class TestSeries(TestPackers):
- def setUp(self):
- super(TestSeries, self).setUp()
+ def setup_method(self, method):
+ super(TestSeries, self).setup_method(method)
self.d = {}
@@ -412,8 +412,8 @@ def test_basic(self):
class TestCategorical(TestPackers):
- def setUp(self):
- super(TestCategorical, self).setUp()
+ def setup_method(self, method):
+ super(TestCategorical, self).setup_method(method)
self.d = {}
@@ -435,8 +435,8 @@ def test_basic(self):
class TestNDFrame(TestPackers):
- def setUp(self):
- super(TestNDFrame, self).setUp()
+ def setup_method(self, method):
+ super(TestNDFrame, self).setup_method(method)
data = {
'A': [0., 1., 2., 3., np.nan],
@@ -579,7 +579,7 @@ class TestCompression(TestPackers):
"""See https://github.com/pandas-dev/pandas/pull/9783
"""
- def setUp(self):
+ def setup_method(self, method):
try:
from sqlalchemy import create_engine
self._create_sql_engine = create_engine
@@ -588,7 +588,7 @@ def setUp(self):
else:
self._SQLALCHEMY_INSTALLED = True
- super(TestCompression, self).setUp()
+ super(TestCompression, self).setup_method(method)
data = {
'A': np.arange(1000, dtype=np.float64),
'B': np.arange(1000, dtype=np.int32),
@@ -773,8 +773,8 @@ def test_readonly_axis_zlib_to_sql(self):
class TestEncoding(TestPackers):
- def setUp(self):
- super(TestEncoding, self).setUp()
+ def setup_method(self, method):
+ super(TestEncoding, self).setup_method(method)
data = {
'A': [compat.u('\u2019')] * 1000,
'B': np.arange(1000, dtype=np.int32),
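The packers hunks above chain each subclass hook to its parent with `super(...).setup_method(method)`. A self-contained sketch of that chaining pattern, using hypothetical class names:

```python
class TestPackersBase(object):
    def setup_method(self, method):
        # base hook: record setup order so the chain is observable
        self.calls = ["base"]


class TestPackersDerived(TestPackersBase):
    def setup_method(self, method):
        # chain to the parent hook explicitly, as the super() calls above do;
        # pytest does not call parent hooks automatically when overridden
        super(TestPackersDerived, self).setup_method(method)
        self.calls.append("derived")
```

Dropping the `super()` call would silently skip the base-class fixture, which is why each converted `setup_method` keeps the chain intact.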
diff --git a/pandas/tests/io/test_pytables.py b/pandas/tests/io/test_pytables.py
index a268fa96175cf..9e7196593650a 100644
--- a/pandas/tests/io/test_pytables.py
+++ b/pandas/tests/io/test_pytables.py
@@ -124,23 +124,23 @@ def _maybe_remove(store, key):
class Base(tm.TestCase):
@classmethod
- def setUpClass(cls):
- super(Base, cls).setUpClass()
+ def setup_class(cls):
+ super(Base, cls).setup_class()
# Pytables 3.0.0 deprecates lots of things
tm.reset_testing_mode()
@classmethod
- def tearDownClass(cls):
- super(Base, cls).tearDownClass()
+ def teardown_class(cls):
+ super(Base, cls).teardown_class()
# Pytables 3.0.0 deprecates lots of things
tm.set_testing_mode()
- def setUp(self):
+ def setup_method(self, method):
self.path = 'tmp.__%s__.h5' % tm.rands(10)
- def tearDown(self):
+ def teardown_method(self, method):
pass
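One difference worth noting in the per-test hooks above: pytest passes the test function itself as the `method` argument, which unittest's `setUp` never did. A minimal sketch (hypothetical class, mirroring the temp-path pattern above):

```python
class TestTempPath(object):
    def setup_method(self, method):
        # ``method`` is the test function about to run; pytest passes it in,
        # so per-test resources can be named after the test
        self.path = "tmp.__%s__.h5" % method.__name__

    def teardown_method(self, method):
        # release the per-test resource
        self.path = None

    def test_path_name(self):
        assert self.path == "tmp.__test_path_name__.h5"
```

This extra argument is also why some classes in the diff stash `self.method = method` in setup: tests that re-run setup manually need it available later.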
diff --git a/pandas/tests/io/test_sql.py b/pandas/tests/io/test_sql.py
index 52883a41b08c2..21de0cd371a37 100644
--- a/pandas/tests/io/test_sql.py
+++ b/pandas/tests/io/test_sql.py
@@ -20,7 +20,6 @@
from __future__ import print_function
from warnings import catch_warnings
import pytest
-import unittest
import sqlite3
import csv
import os
@@ -179,7 +178,7 @@
class MixInBase(object):
- def tearDown(self):
+ def teardown_method(self, method):
for tbl in self._get_all_tables():
self.drop_table(tbl)
self._close_conn()
@@ -498,7 +497,7 @@ class _TestSQLApi(PandasSQLTest):
flavor = 'sqlite'
mode = None
- def setUp(self):
+ def setup_method(self, method):
self.conn = self.connect()
self._load_iris_data()
self._load_iris_view()
@@ -819,7 +818,7 @@ def test_unicode_column_name(self):
@pytest.mark.single
-class TestSQLApi(SQLAlchemyMixIn, _TestSQLApi, unittest.TestCase):
+class TestSQLApi(SQLAlchemyMixIn, _TestSQLApi, tm.TestCase):
"""
Test the public API as it would be used directly
@@ -981,8 +980,8 @@ class _EngineToConnMixin(object):
A mixin that causes setup_connect to create a conn rather than an engine.
"""
- def setUp(self):
- super(_EngineToConnMixin, self).setUp()
+ def setup_method(self, method):
+ super(_EngineToConnMixin, self).setup_method(method)
engine = self.conn
conn = engine.connect()
self.__tx = conn.begin()
@@ -990,21 +989,21 @@ def setUp(self):
self.__engine = engine
self.conn = conn
- def tearDown(self):
+ def teardown_method(self, method):
self.__tx.rollback()
self.conn.close()
self.conn = self.__engine
self.pandasSQL = sql.SQLDatabase(self.__engine)
- super(_EngineToConnMixin, self).tearDown()
+ super(_EngineToConnMixin, self).teardown_method(method)
@pytest.mark.single
-class TestSQLApiConn(_EngineToConnMixin, TestSQLApi, unittest.TestCase):
+class TestSQLApiConn(_EngineToConnMixin, TestSQLApi, tm.TestCase):
pass
@pytest.mark.single
-class TestSQLiteFallbackApi(SQLiteMixIn, _TestSQLApi, unittest.TestCase):
+class TestSQLiteFallbackApi(SQLiteMixIn, _TestSQLApi, tm.TestCase):
"""
Test the public sqlite connection fallback API
@@ -1093,7 +1092,7 @@ class _TestSQLAlchemy(SQLAlchemyMixIn, PandasSQLTest):
flavor = None
@classmethod
- def setUpClass(cls):
+ def setup_class(cls):
cls.setup_import()
cls.setup_driver()
@@ -1105,7 +1104,7 @@ def setUpClass(cls):
msg = "{0} - can't connect to {1} server".format(cls, cls.flavor)
pytest.skip(msg)
- def setUp(self):
+ def setup_method(self, method):
self.setup_connect()
self._load_iris_data()
@@ -1822,37 +1821,37 @@ def test_schema_support(self):
@pytest.mark.single
-class TestMySQLAlchemy(_TestMySQLAlchemy, _TestSQLAlchemy, unittest.TestCase):
+class TestMySQLAlchemy(_TestMySQLAlchemy, _TestSQLAlchemy, tm.TestCase):
pass
@pytest.mark.single
class TestMySQLAlchemyConn(_TestMySQLAlchemy, _TestSQLAlchemyConn,
- unittest.TestCase):
+ tm.TestCase):
pass
@pytest.mark.single
class TestPostgreSQLAlchemy(_TestPostgreSQLAlchemy, _TestSQLAlchemy,
- unittest.TestCase):
+ tm.TestCase):
pass
@pytest.mark.single
class TestPostgreSQLAlchemyConn(_TestPostgreSQLAlchemy, _TestSQLAlchemyConn,
- unittest.TestCase):
+ tm.TestCase):
pass
@pytest.mark.single
class TestSQLiteAlchemy(_TestSQLiteAlchemy, _TestSQLAlchemy,
- unittest.TestCase):
+ tm.TestCase):
pass
@pytest.mark.single
class TestSQLiteAlchemyConn(_TestSQLiteAlchemy, _TestSQLAlchemyConn,
- unittest.TestCase):
+ tm.TestCase):
pass
@@ -1860,7 +1859,7 @@ class TestSQLiteAlchemyConn(_TestSQLiteAlchemy, _TestSQLAlchemyConn,
# -- Test Sqlite / MySQL fallback
@pytest.mark.single
-class TestSQLiteFallback(SQLiteMixIn, PandasSQLTest, unittest.TestCase):
+class TestSQLiteFallback(SQLiteMixIn, PandasSQLTest, tm.TestCase):
"""
Test the fallback mode against an in-memory sqlite database.
@@ -1871,7 +1870,7 @@ class TestSQLiteFallback(SQLiteMixIn, PandasSQLTest, unittest.TestCase):
def connect(cls):
return sqlite3.connect(':memory:')
- def setUp(self):
+ def setup_method(self, method):
self.conn = self.connect()
self.pandasSQL = sql.SQLiteDatabase(self.conn)
@@ -2086,7 +2085,8 @@ def _skip_if_no_pymysql():
@pytest.mark.single
class TestXSQLite(SQLiteMixIn, tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
+ self.method = method
self.conn = sqlite3.connect(':memory:')
def test_basic(self):
@@ -2186,7 +2186,7 @@ def test_execute_closed_connection(self):
tquery("select * from test", con=self.conn)
# Initialize connection again (needed for tearDown)
- self.setUp()
+ self.setup_method(self.method)
def test_na_roundtrip(self):
pass
@@ -2317,7 +2317,7 @@ def test_deprecated_flavor(self):
class TestXMySQL(MySQLMixIn, tm.TestCase):
@classmethod
- def setUpClass(cls):
+ def setup_class(cls):
_skip_if_no_pymysql()
# test connection
@@ -2345,7 +2345,7 @@ def setUpClass(cls):
"[pandas] in your system's mysql default file, "
"typically located at ~/.my.cnf or /etc/.my.cnf. ")
- def setUp(self):
+ def setup_method(self, method):
_skip_if_no_pymysql()
import pymysql
try:
@@ -2371,6 +2371,8 @@ def setUp(self):
"[pandas] in your system's mysql default file, "
"typically located at ~/.my.cnf or /etc/.my.cnf. ")
+ self.method = method
+
def test_basic(self):
_skip_if_no_pymysql()
frame = tm.makeTimeDataFrame()
@@ -2498,7 +2500,7 @@ def test_execute_closed_connection(self):
tquery("select * from test", con=self.conn)
# Initialize connection again (needed for tearDown)
- self.setUp()
+ self.setup_method(self.method)
def test_na_roundtrip(self):
_skip_if_no_pymysql()
diff --git a/pandas/tests/io/test_stata.py b/pandas/tests/io/test_stata.py
index 945f0b009a9da..7867e6866876a 100644
--- a/pandas/tests/io/test_stata.py
+++ b/pandas/tests/io/test_stata.py
@@ -25,7 +25,7 @@
class TestStata(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.dirpath = tm.get_data_path()
self.dta1_114 = os.path.join(self.dirpath, 'stata1_114.dta')
self.dta1_117 = os.path.join(self.dirpath, 'stata1_117.dta')
diff --git a/pandas/tests/plotting/common.py b/pandas/tests/plotting/common.py
index 2c0ac974e9e43..9a24e4ae2dad0 100644
--- a/pandas/tests/plotting/common.py
+++ b/pandas/tests/plotting/common.py
@@ -19,11 +19,12 @@
import pandas.plotting as plotting
from pandas.plotting._tools import _flatten
-
"""
This is a common base class used for various plotting tests
"""
+tm._skip_module_if_no_mpl()
+
def _skip_if_no_scipy_gaussian_kde():
try:
@@ -41,10 +42,9 @@ def _ok_for_gaussian_kde(kind):
return True
-@tm.mplskip
class TestPlotBase(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
import matplotlib as mpl
mpl.rcdefaults()
@@ -95,7 +95,7 @@ def setUp(self):
"C": np.arange(20) + np.random.uniform(
size=20)})
- def tearDown(self):
+ def teardown_method(self, method):
tm.close()
@cache_readonly
diff --git a/pandas/tests/plotting/test_boxplot_method.py b/pandas/tests/plotting/test_boxplot_method.py
index 1f70d408767f3..1e06c13980657 100644
--- a/pandas/tests/plotting/test_boxplot_method.py
+++ b/pandas/tests/plotting/test_boxplot_method.py
@@ -21,6 +21,8 @@
""" Test cases for .boxplot method """
+tm._skip_module_if_no_mpl()
+
def _skip_if_mpl_14_or_dev_boxplot():
# GH 8382
@@ -31,7 +33,6 @@ def _skip_if_mpl_14_or_dev_boxplot():
pytest.skip("Matplotlib Regression in 1.4 and current dev.")
-@tm.mplskip
class TestDataFramePlots(TestPlotBase):
@slow
@@ -165,7 +166,6 @@ def test_fontsize(self):
xlabelsize=16, ylabelsize=16)
-@tm.mplskip
class TestDataFrameGroupByPlots(TestPlotBase):
@slow
diff --git a/pandas/tests/plotting/test_converter.py b/pandas/tests/plotting/test_converter.py
index e23bc2ef6c563..21d8d1f0ab555 100644
--- a/pandas/tests/plotting/test_converter.py
+++ b/pandas/tests/plotting/test_converter.py
@@ -17,7 +17,7 @@ def test_timtetonum_accepts_unicode():
class TestDateTimeConverter(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.dtc = converter.DatetimeConverter()
self.tc = converter.TimeFormatter(None)
@@ -148,7 +148,7 @@ def test_convert_nested(self):
class TestPeriodConverter(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.pc = converter.PeriodConverter()
class Axis(object):
diff --git a/pandas/tests/plotting/test_datetimelike.py b/pandas/tests/plotting/test_datetimelike.py
index ae8faa031174e..ed198de11bac1 100644
--- a/pandas/tests/plotting/test_datetimelike.py
+++ b/pandas/tests/plotting/test_datetimelike.py
@@ -20,12 +20,13 @@
from pandas.tests.plotting.common import (TestPlotBase,
_skip_if_no_scipy_gaussian_kde)
+tm._skip_module_if_no_mpl()
+
-@tm.mplskip
class TestTSPlot(TestPlotBase):
- def setUp(self):
- TestPlotBase.setUp(self)
+ def setup_method(self, method):
+ TestPlotBase.setup_method(self, method)
freq = ['S', 'T', 'H', 'D', 'W', 'M', 'Q', 'A']
idx = [period_range('12/31/1999', freq=x, periods=100) for x in freq]
@@ -41,7 +42,7 @@ def setUp(self):
columns=['A', 'B', 'C'])
for x in idx]
- def tearDown(self):
+ def teardown_method(self, method):
tm.close()
@slow
diff --git a/pandas/tests/plotting/test_deprecated.py b/pandas/tests/plotting/test_deprecated.py
index d7eaa69460a3a..48030df48deca 100644
--- a/pandas/tests/plotting/test_deprecated.py
+++ b/pandas/tests/plotting/test_deprecated.py
@@ -18,8 +18,9 @@
pandas.tools.plotting
"""
+tm._skip_module_if_no_mpl()
+
-@tm.mplskip
class TestDeprecatedNameSpace(TestPlotBase):
@slow
diff --git a/pandas/tests/plotting/test_frame.py b/pandas/tests/plotting/test_frame.py
index 03bc477d6f852..4a4a71d7ea639 100644
--- a/pandas/tests/plotting/test_frame.py
+++ b/pandas/tests/plotting/test_frame.py
@@ -27,12 +27,13 @@
_skip_if_no_scipy_gaussian_kde,
_ok_for_gaussian_kde)
+tm._skip_module_if_no_mpl()
+
-@tm.mplskip
class TestDataFramePlots(TestPlotBase):
- def setUp(self):
- TestPlotBase.setUp(self)
+ def setup_method(self, method):
+ TestPlotBase.setup_method(self, method)
import matplotlib as mpl
mpl.rcdefaults()
diff --git a/pandas/tests/plotting/test_groupby.py b/pandas/tests/plotting/test_groupby.py
index 121f2f9b75698..8dcf73bce03c0 100644
--- a/pandas/tests/plotting/test_groupby.py
+++ b/pandas/tests/plotting/test_groupby.py
@@ -10,8 +10,9 @@
from pandas.tests.plotting.common import TestPlotBase
+tm._skip_module_if_no_mpl()
+
-@tm.mplskip
class TestDataFrameGroupByPlots(TestPlotBase):
def test_series_groupby_plotting_nominally_works(self):
diff --git a/pandas/tests/plotting/test_hist_method.py b/pandas/tests/plotting/test_hist_method.py
index b75fcd4d8b680..c3e32f52e0474 100644
--- a/pandas/tests/plotting/test_hist_method.py
+++ b/pandas/tests/plotting/test_hist_method.py
@@ -15,11 +15,13 @@
from pandas.tests.plotting.common import (TestPlotBase, _check_plot_works)
-@tm.mplskip
+tm._skip_module_if_no_mpl()
+
+
class TestSeriesPlots(TestPlotBase):
- def setUp(self):
- TestPlotBase.setUp(self)
+ def setup_method(self, method):
+ TestPlotBase.setup_method(self, method)
import matplotlib as mpl
mpl.rcdefaults()
@@ -140,7 +142,6 @@ def test_plot_fails_when_ax_differs_from_figure(self):
self.ts.hist(ax=ax1, figure=fig2)
-@tm.mplskip
class TestDataFramePlots(TestPlotBase):
@slow
@@ -251,7 +252,6 @@ def test_tight_layout(self):
tm.close()
-@tm.mplskip
class TestDataFrameGroupByPlots(TestPlotBase):
@slow
diff --git a/pandas/tests/plotting/test_misc.py b/pandas/tests/plotting/test_misc.py
index 3a9cb309db707..9eace32aa19a3 100644
--- a/pandas/tests/plotting/test_misc.py
+++ b/pandas/tests/plotting/test_misc.py
@@ -17,12 +17,13 @@
from pandas.tests.plotting.common import (TestPlotBase, _check_plot_works,
_ok_for_gaussian_kde)
+tm._skip_module_if_no_mpl()
+
-@tm.mplskip
class TestSeriesPlots(TestPlotBase):
- def setUp(self):
- TestPlotBase.setUp(self)
+ def setup_method(self, method):
+ TestPlotBase.setup_method(self, method)
import matplotlib as mpl
mpl.rcdefaults()
@@ -50,7 +51,6 @@ def test_bootstrap_plot(self):
_check_plot_works(bootstrap_plot, series=self.ts, size=10)
-@tm.mplskip
class TestDataFramePlots(TestPlotBase):
@slow
diff --git a/pandas/tests/plotting/test_series.py b/pandas/tests/plotting/test_series.py
index 91a27142069c7..448661c7af0e9 100644
--- a/pandas/tests/plotting/test_series.py
+++ b/pandas/tests/plotting/test_series.py
@@ -22,12 +22,13 @@
_skip_if_no_scipy_gaussian_kde,
_ok_for_gaussian_kde)
+tm._skip_module_if_no_mpl()
+
-@tm.mplskip
class TestSeriesPlots(TestPlotBase):
- def setUp(self):
- TestPlotBase.setUp(self)
+ def setup_method(self, method):
+ TestPlotBase.setup_method(self, method)
import matplotlib as mpl
mpl.rcdefaults()
diff --git a/pandas/tests/reshape/test_concat.py b/pandas/tests/reshape/test_concat.py
index 2d4d0a09060de..1842af465ca89 100644
--- a/pandas/tests/reshape/test_concat.py
+++ b/pandas/tests/reshape/test_concat.py
@@ -19,7 +19,7 @@
class ConcatenateBase(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.frame = DataFrame(tm.getSeriesData())
self.mixed_frame = self.frame.copy()
self.mixed_frame['foo'] = 'bar'
@@ -31,7 +31,7 @@ class TestConcatAppendCommon(ConcatenateBase):
Test common dtype coercion rules between concat and append.
"""
- def setUp(self):
+ def setup_method(self, method):
dt_data = [pd.Timestamp('2011-01-01'),
pd.Timestamp('2011-01-02'),
diff --git a/pandas/tests/reshape/test_hashing.py b/pandas/tests/reshape/test_hashing.py
index 85807da33e38d..622768353dd50 100644
--- a/pandas/tests/reshape/test_hashing.py
+++ b/pandas/tests/reshape/test_hashing.py
@@ -11,7 +11,7 @@
class TestHashing(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.df = DataFrame(
{'i32': np.array([1, 2, 3] * 3, dtype='int32'),
'f32': np.array([None, 2.5, 3.5] * 3, dtype='float32'),
diff --git a/pandas/tests/reshape/test_join.py b/pandas/tests/reshape/test_join.py
index cda343175fd0a..3a6985fd4a373 100644
--- a/pandas/tests/reshape/test_join.py
+++ b/pandas/tests/reshape/test_join.py
@@ -21,7 +21,7 @@
class TestJoin(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
# aggregate multiple columns
self.df = DataFrame({'key1': get_test_data(),
'key2': get_test_data(),
diff --git a/pandas/tests/reshape/test_merge.py b/pandas/tests/reshape/test_merge.py
index db0e4631381f1..e36b7ecbc3c7b 100644
--- a/pandas/tests/reshape/test_merge.py
+++ b/pandas/tests/reshape/test_merge.py
@@ -35,7 +35,7 @@ def get_test_data(ngroups=NGROUPS, n=N):
class TestMerge(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
# aggregate multiple columns
self.df = DataFrame({'key1': get_test_data(),
'key2': get_test_data(),
@@ -739,7 +739,7 @@ def _check_merge(x, y):
class TestMergeMulti(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'],
['one', 'two', 'three']],
labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
diff --git a/pandas/tests/reshape/test_merge_asof.py b/pandas/tests/reshape/test_merge_asof.py
index 7934b8abf85a8..7e33449c92665 100644
--- a/pandas/tests/reshape/test_merge_asof.py
+++ b/pandas/tests/reshape/test_merge_asof.py
@@ -23,7 +23,7 @@ def read_data(self, name, dedupe=False):
x.time = to_datetime(x.time)
return x
- def setUp(self):
+ def setup_method(self, method):
self.trades = self.read_data('trades.csv')
self.quotes = self.read_data('quotes.csv', dedupe=True)
diff --git a/pandas/tests/reshape/test_merge_ordered.py b/pandas/tests/reshape/test_merge_ordered.py
index 1f1eee0e9980b..375e2e13847e8 100644
--- a/pandas/tests/reshape/test_merge_ordered.py
+++ b/pandas/tests/reshape/test_merge_ordered.py
@@ -8,7 +8,7 @@
class TestOrderedMerge(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.left = DataFrame({'key': ['a', 'c', 'e'],
'lvalue': [1, 2., 3]})
diff --git a/pandas/tests/reshape/test_pivot.py b/pandas/tests/reshape/test_pivot.py
index df679966e0002..905cd27ca4c58 100644
--- a/pandas/tests/reshape/test_pivot.py
+++ b/pandas/tests/reshape/test_pivot.py
@@ -17,7 +17,7 @@
class TestPivotTable(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.data = DataFrame({'A': ['foo', 'foo', 'foo', 'foo',
'bar', 'bar', 'bar', 'bar',
'foo', 'foo', 'foo'],
@@ -984,7 +984,7 @@ def test_pivot_table_not_series(self):
class TestCrosstab(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
df = DataFrame({'A': ['foo', 'foo', 'foo', 'foo',
'bar', 'bar', 'bar', 'bar',
'foo', 'foo', 'foo'],
diff --git a/pandas/tests/reshape/test_reshape.py b/pandas/tests/reshape/test_reshape.py
index 87cd0637f1125..de2fe444bc4ea 100644
--- a/pandas/tests/reshape/test_reshape.py
+++ b/pandas/tests/reshape/test_reshape.py
@@ -19,7 +19,7 @@
class TestMelt(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.df = tm.makeTimeDataFrame()[:10]
self.df['id1'] = (self.df['A'] > 0).astype(np.int64)
self.df['id2'] = (self.df['B'] > 0).astype(np.int64)
@@ -220,7 +220,7 @@ class TestGetDummies(tm.TestCase):
sparse = False
- def setUp(self):
+ def setup_method(self, method):
self.df = DataFrame({'A': ['a', 'b', 'a'],
'B': ['b', 'b', 'c'],
'C': [1, 2, 3]})
diff --git a/pandas/tests/scalar/test_interval.py b/pandas/tests/scalar/test_interval.py
index 079c41657bec6..fab6f170bec60 100644
--- a/pandas/tests/scalar/test_interval.py
+++ b/pandas/tests/scalar/test_interval.py
@@ -6,7 +6,7 @@
class TestInterval(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.interval = Interval(0, 1)
def test_properties(self):
diff --git a/pandas/tests/scalar/test_period.py b/pandas/tests/scalar/test_period.py
index 2e60cfdb7a4f2..8c89fa60b12d6 100644
--- a/pandas/tests/scalar/test_period.py
+++ b/pandas/tests/scalar/test_period.py
@@ -923,7 +923,7 @@ def test_get_period_field_array_raises_on_out_of_range(self):
class TestComparisons(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.january1 = Period('2000-01', 'M')
self.january2 = Period('2000-01', 'M')
self.february = Period('2000-02', 'M')
diff --git a/pandas/tests/scalar/test_timedelta.py b/pandas/tests/scalar/test_timedelta.py
index 5659bc26fc1cc..82d6f6e8c84e5 100644
--- a/pandas/tests/scalar/test_timedelta.py
+++ b/pandas/tests/scalar/test_timedelta.py
@@ -15,7 +15,7 @@
class TestTimedeltas(tm.TestCase):
_multiprocess_can_split_ = True
- def setUp(self):
+ def setup_method(self, method):
pass
def test_construction(self):
diff --git a/pandas/tests/scalar/test_timestamp.py b/pandas/tests/scalar/test_timestamp.py
index 04b33bbc6c3bf..64f68112f4b81 100644
--- a/pandas/tests/scalar/test_timestamp.py
+++ b/pandas/tests/scalar/test_timestamp.py
@@ -1096,7 +1096,7 @@ def test_is_leap_year(self):
class TestTimestampNsOperations(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.timestamp = Timestamp(datetime.utcnow())
def assert_ns_timedelta(self, modified_timestamp, expected_value):
diff --git a/pandas/tests/series/test_indexing.py b/pandas/tests/series/test_indexing.py
index 394ae88983faa..8eae59a473995 100644
--- a/pandas/tests/series/test_indexing.py
+++ b/pandas/tests/series/test_indexing.py
@@ -2254,7 +2254,7 @@ def test_setitem_slice_into_readonly_backing_data(self):
class TestTimeSeriesDuplicates(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
dates = [datetime(2000, 1, 2), datetime(2000, 1, 2),
datetime(2000, 1, 2), datetime(2000, 1, 3),
datetime(2000, 1, 3), datetime(2000, 1, 3),
@@ -2499,7 +2499,7 @@ class TestDatetimeIndexing(tm.TestCase):
Also test support for datetime64[ns] in Series / DataFrame
"""
- def setUp(self):
+ def setup_method(self, method):
dti = DatetimeIndex(start=datetime(2005, 1, 1),
end=datetime(2005, 1, 10), freq='Min')
self.series = Series(np.random.rand(len(dti)), dti)
@@ -2640,7 +2640,7 @@ def test_frame_datetime64_duplicated(self):
class TestNatIndexing(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.series = Series(date_range('1/1/2000', periods=10))
# ---------------------------------------------------------------------
diff --git a/pandas/tests/series/test_period.py b/pandas/tests/series/test_period.py
index 5ea27d605c28a..792d5b9e5c383 100644
--- a/pandas/tests/series/test_period.py
+++ b/pandas/tests/series/test_period.py
@@ -12,7 +12,7 @@ def _permute(obj):
class TestSeriesPeriod(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.series = Series(period_range('2000-01-01', periods=10, freq='D'))
def test_auto_conversion(self):
diff --git a/pandas/tests/sparse/test_array.py b/pandas/tests/sparse/test_array.py
index 9a2c958a252af..c205a1efbeeb1 100644
--- a/pandas/tests/sparse/test_array.py
+++ b/pandas/tests/sparse/test_array.py
@@ -17,7 +17,7 @@
class TestSparseArray(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.arr_data = np.array([nan, nan, 1, 2, 3, nan, 4, 5, nan, 6])
self.arr = SparseArray(self.arr_data)
self.zarr = SparseArray([0, 0, 1, 2, 3, 0, 4, 5, 0, 6], fill_value=0)
diff --git a/pandas/tests/sparse/test_combine_concat.py b/pandas/tests/sparse/test_combine_concat.py
index 57b4065744e32..ab56a83c90530 100644
--- a/pandas/tests/sparse/test_combine_concat.py
+++ b/pandas/tests/sparse/test_combine_concat.py
@@ -124,7 +124,7 @@ def test_concat_sparse_dense(self):
class TestSparseDataFrameConcat(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.dense1 = pd.DataFrame({'A': [0., 1., 2., np.nan],
'B': [0., 0., 0., 0.],
diff --git a/pandas/tests/sparse/test_frame.py b/pandas/tests/sparse/test_frame.py
index f2dd2aa79cc6a..762bfba85dd0a 100644
--- a/pandas/tests/sparse/test_frame.py
+++ b/pandas/tests/sparse/test_frame.py
@@ -29,7 +29,7 @@
class TestSparseDataFrame(tm.TestCase, SharedWithSparse):
klass = SparseDataFrame
- def setUp(self):
+ def setup_method(self, method):
self.data = {'A': [nan, nan, nan, 0, 1, 2, 3, 4, 5, 6],
'B': [0, 1, 2, nan, nan, nan, 3, 4, 5, 6],
'C': np.arange(10, dtype=np.float64),
@@ -1275,7 +1275,7 @@ def test_comparison_op_scalar(self):
class TestSparseDataFrameAnalytics(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.data = {'A': [nan, nan, nan, 0, 1, 2, 3, 4, 5, 6],
'B': [0, 1, 2, nan, nan, nan, 3, 4, 5, 6],
'C': np.arange(10, dtype=float),
diff --git a/pandas/tests/sparse/test_groupby.py b/pandas/tests/sparse/test_groupby.py
index 23bea94a2aef8..501e40c6ebffd 100644
--- a/pandas/tests/sparse/test_groupby.py
+++ b/pandas/tests/sparse/test_groupby.py
@@ -6,7 +6,7 @@
class TestSparseGroupBy(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.dense = pd.DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
'foo', 'bar', 'foo', 'foo'],
'B': ['one', 'one', 'two', 'three',
diff --git a/pandas/tests/sparse/test_indexing.py b/pandas/tests/sparse/test_indexing.py
index 0fc2211bbeeae..bb449c05729d4 100644
--- a/pandas/tests/sparse/test_indexing.py
+++ b/pandas/tests/sparse/test_indexing.py
@@ -8,7 +8,7 @@
class TestSparseSeriesIndexing(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.orig = pd.Series([1, np.nan, np.nan, 3, np.nan])
self.sparse = self.orig.to_sparse()
@@ -446,7 +446,7 @@ def tests_indexing_with_sparse(self):
class TestSparseSeriesMultiIndexing(TestSparseSeriesIndexing):
- def setUp(self):
+ def setup_method(self, method):
# Mi with duplicated values
idx = pd.MultiIndex.from_tuples([('A', 0), ('A', 1), ('B', 0),
('C', 0), ('C', 1)])
@@ -954,7 +954,7 @@ def test_reindex_fill_value(self):
class TestMultitype(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.cols = ['string', 'int', 'float', 'object']
self.string_series = pd.SparseSeries(['a', 'b', 'c'])
diff --git a/pandas/tests/sparse/test_list.py b/pandas/tests/sparse/test_list.py
index 941e07a5582b0..3eab34661ae2b 100644
--- a/pandas/tests/sparse/test_list.py
+++ b/pandas/tests/sparse/test_list.py
@@ -1,5 +1,4 @@
from pandas.compat import range
-import unittest
from numpy import nan
import numpy as np
@@ -8,9 +7,9 @@
import pandas.util.testing as tm
-class TestSparseList(unittest.TestCase):
+class TestSparseList(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.na_data = np.array([nan, nan, 1, 2, 3, nan, 4, 5, nan, 6])
self.zero_data = np.array([0, 0, 1, 2, 3, 0, 4, 5, 0, 6])
diff --git a/pandas/tests/sparse/test_pivot.py b/pandas/tests/sparse/test_pivot.py
index 4ff9f20093c67..57c47b4e68811 100644
--- a/pandas/tests/sparse/test_pivot.py
+++ b/pandas/tests/sparse/test_pivot.py
@@ -5,7 +5,7 @@
class TestPivotTable(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.dense = pd.DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
'foo', 'bar', 'foo', 'foo'],
'B': ['one', 'one', 'two', 'three',
diff --git a/pandas/tests/sparse/test_series.py b/pandas/tests/sparse/test_series.py
index 0f04e1a06900d..b756b63523798 100644
--- a/pandas/tests/sparse/test_series.py
+++ b/pandas/tests/sparse/test_series.py
@@ -58,7 +58,7 @@ def _test_data2_zero():
class TestSparseSeries(tm.TestCase, SharedWithSparse):
- def setUp(self):
+ def setup_method(self, method):
arr, index = _test_data1()
date_index = bdate_range('1/1/2011', periods=len(index))
@@ -936,7 +936,7 @@ def test_combine_first(self):
class TestSparseHandlingMultiIndexes(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
miindex = pd.MultiIndex.from_product(
[["x", "y"], ["10", "20"]], names=['row-foo', 'row-bar'])
micol = pd.MultiIndex.from_product(
@@ -963,7 +963,7 @@ def test_round_trip_preserve_multiindex_names(self):
class TestSparseSeriesScipyInteraction(tm.TestCase):
# Issue 8048: add SparseSeries coo methods
- def setUp(self):
+ def setup_method(self, method):
tm._skip_if_no_scipy()
import scipy.sparse
# SparseSeries inputs used in tests, the tests rely on the order
@@ -1312,7 +1312,7 @@ def _dense_series_compare(s, f):
class TestSparseSeriesAnalytics(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
arr, index = _test_data1()
self.bseries = SparseSeries(arr, index=index, kind='block',
name='bseries')
diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index 86d9ab3643cc9..dda95426d8011 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -282,7 +282,7 @@ def test_complex_sorting(self):
# gh 12666 - check no segfault
# Test not valid numpy versions older than 1.11
if pd._np_version_under1p11:
- self.skipTest("Test valid only for numpy 1.11+")
+ pytest.skip("Test valid only for numpy 1.11+")
x17 = np.array([complex(i) for i in range(17)], dtype=object)
diff --git a/pandas/tests/test_base.py b/pandas/tests/test_base.py
index ed0d61cdbbaf9..dcc685ceef28e 100644
--- a/pandas/tests/test_base.py
+++ b/pandas/tests/test_base.py
@@ -109,7 +109,7 @@ class Delegate(PandasDelegate):
def __init__(self, obj):
self.obj = obj
- def setUp(self):
+ def setup_method(self, method):
pass
def test_invalida_delgation(self):
@@ -162,7 +162,7 @@ def _allow_na_ops(self, obj):
return False
return True
- def setUp(self):
+ def setup_method(self, method):
self.bool_index = tm.makeBoolIndex(10, name='a')
self.int_index = tm.makeIntIndex(10, name='a')
self.float_index = tm.makeFloatIndex(10, name='a')
@@ -259,8 +259,8 @@ def test_binary_ops_docs(self):
class TestIndexOps(Ops):
- def setUp(self):
- super(TestIndexOps, self).setUp()
+ def setup_method(self, method):
+ super(TestIndexOps, self).setup_method(method)
self.is_valid_objs = [o for o in self.objs if o._allow_index_ops]
self.not_valid_objs = [o for o in self.objs if not o._allow_index_ops]
diff --git a/pandas/tests/test_categorical.py b/pandas/tests/test_categorical.py
index 515ca8d9cedc5..2a53cf15278e0 100644
--- a/pandas/tests/test_categorical.py
+++ b/pandas/tests/test_categorical.py
@@ -30,7 +30,7 @@
class TestCategorical(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.factor = Categorical(['a', 'b', 'b', 'a', 'a', 'c', 'c', 'c'],
ordered=True)
@@ -1602,7 +1602,7 @@ def test_validate_inplace(self):
class TestCategoricalAsBlock(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.factor = Categorical(['a', 'b', 'b', 'a', 'a', 'c', 'c', 'c'])
df = DataFrame({'value': np.random.randint(0, 10000, 100)})
diff --git a/pandas/tests/test_config.py b/pandas/tests/test_config.py
index ba055b105dc41..79475b297f83c 100644
--- a/pandas/tests/test_config.py
+++ b/pandas/tests/test_config.py
@@ -1,12 +1,13 @@
# -*- coding: utf-8 -*-
import pytest
+import pandas.util.testing as tm
import pandas as pd
-import unittest
+
import warnings
-class TestConfig(unittest.TestCase):
+class TestConfig(tm.TestCase):
def __init__(self, *args):
super(TestConfig, self).__init__(*args)
@@ -17,14 +18,14 @@ def __init__(self, *args):
self.do = deepcopy(getattr(self.cf, '_deprecated_options'))
self.ro = deepcopy(getattr(self.cf, '_registered_options'))
- def setUp(self):
+ def setup_method(self, method):
setattr(self.cf, '_global_config', {})
setattr(
self.cf, 'options', self.cf.DictWrapper(self.cf._global_config))
setattr(self.cf, '_deprecated_options', {})
setattr(self.cf, '_registered_options', {})
- def tearDown(self):
+ def teardown_method(self, method):
setattr(self.cf, '_global_config', self.gc)
setattr(self.cf, '_deprecated_options', self.do)
setattr(self.cf, '_registered_options', self.ro)
diff --git a/pandas/tests/test_expressions.py b/pandas/tests/test_expressions.py
index 8ef29097b66e8..79b057c0548a9 100644
--- a/pandas/tests/test_expressions.py
+++ b/pandas/tests/test_expressions.py
@@ -58,7 +58,7 @@
@pytest.mark.skipif(not expr._USE_NUMEXPR, reason='not using numexpr')
class TestExpressions(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.frame = _frame.copy()
self.frame2 = _frame2.copy()
@@ -67,7 +67,7 @@ def setUp(self):
self.integer = _integer.copy()
self._MIN_ELEMENTS = expr._MIN_ELEMENTS
- def tearDown(self):
+ def teardown_method(self, method):
expr._MIN_ELEMENTS = self._MIN_ELEMENTS
def run_arithmetic(self, df, other, assert_func, check_dtype=False,
diff --git a/pandas/tests/test_internals.py b/pandas/tests/test_internals.py
index 61b4369d21ab4..0f2a3ce1d1e94 100644
--- a/pandas/tests/test_internals.py
+++ b/pandas/tests/test_internals.py
@@ -194,7 +194,7 @@ def create_mgr(descr, item_shape=None):
class TestBlock(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
# self.fblock = get_float_ex() # a,c,e
# self.cblock = get_complex_ex() #
# self.oblock = get_obj_ex()
diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py
index f4cb07625faf2..bfab10b7e63e7 100644
--- a/pandas/tests/test_multilevel.py
+++ b/pandas/tests/test_multilevel.py
@@ -22,7 +22,7 @@
class Base(object):
- def setUp(self):
+ def setup_method(self, method):
index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'], ['one', 'two',
'three']],
diff --git a/pandas/tests/test_nanops.py b/pandas/tests/test_nanops.py
index efa647fd91a0d..c5ecd75290fc6 100644
--- a/pandas/tests/test_nanops.py
+++ b/pandas/tests/test_nanops.py
@@ -18,7 +18,7 @@
class TestnanopsDataFrame(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
np.random.seed(11235)
nanops._USE_BOTTLENECK = False
@@ -118,7 +118,7 @@ def setUp(self):
self.arr_float_nan_inf_1d = self.arr_float_nan_inf[:, 0, 0]
self.arr_nan_nan_inf_1d = self.arr_nan_nan_inf[:, 0, 0]
- def tearDown(self):
+ def teardown_method(self, method):
nanops._USE_BOTTLENECK = use_bn
def check_results(self, targ, res, axis, check_dtype=True):
@@ -786,7 +786,7 @@ class TestNanvarFixedValues(tm.TestCase):
# xref GH10242
- def setUp(self):
+ def setup_method(self, method):
# Samples from a normal distribution.
self.variance = variance = 3.0
self.samples = self.prng.normal(scale=variance ** 0.5, size=100000)
@@ -899,7 +899,7 @@ class TestNanskewFixedValues(tm.TestCase):
# xref GH 11974
- def setUp(self):
+ def setup_method(self, method):
# Test data + skewness value (computed with scipy.stats.skew)
self.samples = np.sin(np.linspace(0, 1, 200))
self.actual_skew = -0.1875895205961754
@@ -949,7 +949,7 @@ class TestNankurtFixedValues(tm.TestCase):
# xref GH 11974
- def setUp(self):
+ def setup_method(self, method):
# Test data + kurtosis value (computed with scipy.stats.kurtosis)
self.samples = np.sin(np.linspace(0, 1, 200))
self.actual_kurt = -1.2058303433799713
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
index b9cceab4d65f4..44e1db494c041 100644
--- a/pandas/tests/test_panel.py
+++ b/pandas/tests/test_panel.py
@@ -908,7 +908,7 @@ class TestPanel(tm.TestCase, PanelTests, CheckIndexing, SafeForLongAndSparse,
def assert_panel_equal(cls, x, y):
assert_panel_equal(x, y)
- def setUp(self):
+ def setup_method(self, method):
self.panel = make_test_panel()
self.panel.major_axis.name = None
self.panel.minor_axis.name = None
@@ -2435,7 +2435,7 @@ class TestLongPanel(tm.TestCase):
LongPanel no longer exists, but...
"""
- def setUp(self):
+ def setup_method(self, method):
panel = make_test_panel()
self.panel = panel.to_frame()
self.unfiltered_panel = panel.to_frame(filter_observations=False)
diff --git a/pandas/tests/test_panel4d.py b/pandas/tests/test_panel4d.py
index 1b611309aece0..7d966422a7d79 100644
--- a/pandas/tests/test_panel4d.py
+++ b/pandas/tests/test_panel4d.py
@@ -596,7 +596,7 @@ def test_set_value(self):
class TestPanel4d(tm.TestCase, CheckIndexing, SafeForSparse,
SafeForLongAndSparse):
- def setUp(self):
+ def setup_method(self, method):
with catch_warnings(record=True):
self.panel4d = tm.makePanel4D(nper=8)
add_nans(self.panel4d)
@@ -685,7 +685,7 @@ def test_ctor_dict(self):
tm.assert_panel_equal(panel4d['A'], self.panel4d['l1'])
tm.assert_frame_equal(panel4d.loc['B', 'ItemB', :, :],
self.panel4d.loc['l2', ['ItemB'],
- :, :]['ItemB'])
+ :, :]['ItemB'])
def test_constructor_dict_mixed(self):
with catch_warnings(record=True):
@@ -798,7 +798,7 @@ def test_reindex(self):
method='pad')
tm.assert_panel_equal(larger.loc[:, :,
- self.panel4d.major_axis[1], :],
+ self.panel4d.major_axis[1], :],
smaller.loc[:, :, smaller_major[0], :])
# don't necessarily copy
diff --git a/pandas/tests/test_panelnd.py b/pandas/tests/test_panelnd.py
index 33c37e9c8feb2..7861b98b0ddd9 100644
--- a/pandas/tests/test_panelnd.py
+++ b/pandas/tests/test_panelnd.py
@@ -11,7 +11,7 @@
class TestPanelnd(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
pass
def test_4d_construction(self):
diff --git a/pandas/tests/test_resample.py b/pandas/tests/test_resample.py
index 276e9a12c1993..c6719790c9e35 100644
--- a/pandas/tests/test_resample.py
+++ b/pandas/tests/test_resample.py
@@ -52,7 +52,7 @@ def _simple_pts(start, end, freq='D'):
class TestResampleAPI(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
dti = DatetimeIndex(start=datetime(2005, 1, 1),
end=datetime(2005, 1, 10), freq='Min')
@@ -850,7 +850,7 @@ def test_resample_loffset_arg_type(self):
class TestDatetimeIndex(Base, tm.TestCase):
_index_factory = lambda x: date_range
- def setUp(self):
+ def setup_method(self, method):
dti = DatetimeIndex(start=datetime(2005, 1, 1),
end=datetime(2005, 1, 10), freq='Min')
@@ -2796,7 +2796,7 @@ def test_asfreq_bug(self):
class TestResamplerGrouper(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.frame = DataFrame({'A': [1] * 20 + [2] * 12 + [3] * 8,
'B': np.arange(40)},
index=date_range('1/1/2000',
@@ -2991,7 +2991,7 @@ def test_median_duplicate_columns(self):
class TestTimeGrouper(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.ts = Series(np.random.randn(1000),
index=date_range('1/1/2000', periods=1000))
diff --git a/pandas/tests/test_testing.py b/pandas/tests/test_testing.py
index 2c0cd55205a5a..2e84638533820 100644
--- a/pandas/tests/test_testing.py
+++ b/pandas/tests/test_testing.py
@@ -1,6 +1,5 @@
# -*- coding: utf-8 -*-
import pandas as pd
-import unittest
import pytest
import numpy as np
import sys
@@ -340,7 +339,7 @@ def test_assert_almost_equal_iterable_message(self):
assert_almost_equal([1, 2], [1, 3])
-class TestAssertIndexEqual(unittest.TestCase):
+class TestAssertIndexEqual(tm.TestCase):
def test_index_equal_message(self):
@@ -680,7 +679,7 @@ def test_frame_equal_message(self):
by_blocks=True)
-class TestAssertCategoricalEqual(unittest.TestCase):
+class TestAssertCategoricalEqual(tm.TestCase):
def test_categorical_equal_message(self):
@@ -718,7 +717,7 @@ def test_categorical_equal_message(self):
tm.assert_categorical_equal(a, b)
-class TestRNGContext(unittest.TestCase):
+class TestRNGContext(tm.TestCase):
def test_RNGContext(self):
expected0 = 1.764052345967664
diff --git a/pandas/tests/test_util.py b/pandas/tests/test_util.py
index 80eb5bb9dfe16..e9e04f76704f2 100644
--- a/pandas/tests/test_util.py
+++ b/pandas/tests/test_util.py
@@ -22,7 +22,7 @@
class TestDecorators(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
@deprecate_kwarg('old', 'new')
def _f1(new=False):
return new
@@ -410,8 +410,8 @@ def test_numpy_errstate_is_default():
class TestLocaleUtils(tm.TestCase):
@classmethod
- def setUpClass(cls):
- super(TestLocaleUtils, cls).setUpClass()
+ def setup_class(cls):
+ super(TestLocaleUtils, cls).setup_class()
cls.locales = tm.get_locales()
if not cls.locales:
@@ -420,8 +420,8 @@ def setUpClass(cls):
tm._skip_if_windows()
@classmethod
- def tearDownClass(cls):
- super(TestLocaleUtils, cls).tearDownClass()
+ def teardown_class(cls):
+ super(TestLocaleUtils, cls).teardown_class()
del cls.locales
def test_get_locales(self):
diff --git a/pandas/tests/test_window.py b/pandas/tests/test_window.py
index d3e427dfb4c7b..5436f3c342019 100644
--- a/pandas/tests/test_window.py
+++ b/pandas/tests/test_window.py
@@ -48,7 +48,7 @@ def _create_data(self):
class TestApi(Base):
- def setUp(self):
+ def setup_method(self, method):
self._create_data()
def test_getitem(self):
@@ -315,7 +315,7 @@ def test_how_compat(self):
class TestWindow(Base):
- def setUp(self):
+ def setup_method(self, method):
self._create_data()
def test_constructor(self):
@@ -360,7 +360,7 @@ def test_numpy_compat(self):
class TestRolling(Base):
- def setUp(self):
+ def setup_method(self, method):
self._create_data()
def test_doc_string(self):
@@ -444,7 +444,7 @@ def test_closed(self):
class TestExpanding(Base):
- def setUp(self):
+ def setup_method(self, method):
self._create_data()
def test_doc_string(self):
@@ -486,7 +486,7 @@ def test_numpy_compat(self):
class TestEWM(Base):
- def setUp(self):
+ def setup_method(self, method):
self._create_data()
def test_doc_string(self):
@@ -549,7 +549,7 @@ def test_numpy_compat(self):
class TestDeprecations(Base):
""" test that we are catching deprecation warnings """
- def setUp(self):
+ def setup_method(self, method):
self._create_data()
def test_deprecations(self):
@@ -559,11 +559,11 @@ def test_deprecations(self):
mom.rolling_mean(Series(np.ones(10)), 3, center=True, axis=0)
-# GH #12373 : rolling functions error on float32 data
+# gh-12373 : rolling functions error on float32 data
# make sure rolling functions works for different dtypes
#
# NOTE that these are yielded tests and so _create_data is
-# explicity called, nor do these inherit from unittest.TestCase
+# explicity called, nor do these inherit from tm.TestCase
#
# further note that we are only checking rolling for fully dtype
# compliance (though both expanding and ewm inherit)
@@ -775,7 +775,7 @@ def _create_data(self):
class TestMoments(Base):
- def setUp(self):
+ def setup_method(self, method):
self._create_data()
def test_centered_axis_validation(self):
@@ -1958,7 +1958,7 @@ def _create_data(self):
super(TestMomentsConsistency, self)._create_data()
self.data = _consistency_data
- def setUp(self):
+ def setup_method(self, method):
self._create_data()
def _test_moments_consistency(self, min_periods, count, mean, mock_mean,
@@ -3039,7 +3039,7 @@ def test_rolling_min_max_numeric_types(self):
class TestGrouperGrouping(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.series = Series(np.arange(10))
self.frame = DataFrame({'A': [1] * 20 + [2] * 12 + [3] * 8,
'B': np.arange(40)})
@@ -3187,7 +3187,7 @@ class TestRollingTS(tm.TestCase):
# rolling time-series friendly
# xref GH13327
- def setUp(self):
+ def setup_method(self, method):
self.regular = DataFrame({'A': pd.date_range('20130101',
periods=5,
diff --git a/pandas/tests/tseries/test_holiday.py b/pandas/tests/tseries/test_holiday.py
index 109adaaa7e0b0..8ea4140bb85a7 100644
--- a/pandas/tests/tseries/test_holiday.py
+++ b/pandas/tests/tseries/test_holiday.py
@@ -21,7 +21,7 @@
class TestCalendar(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.holiday_list = [
datetime(2012, 1, 2),
datetime(2012, 1, 16),
@@ -87,7 +87,7 @@ def test_rule_from_name(self):
class TestHoliday(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.start_date = datetime(2011, 1, 1)
self.end_date = datetime(2020, 12, 31)
@@ -286,7 +286,7 @@ def test_factory(self):
class TestObservanceRules(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
self.we = datetime(2014, 4, 9)
self.th = datetime(2014, 4, 10)
self.fr = datetime(2014, 4, 11)
diff --git a/pandas/tests/tseries/test_offsets.py b/pandas/tests/tseries/test_offsets.py
index 79190aa98f8d9..b6cd5e7958342 100644
--- a/pandas/tests/tseries/test_offsets.py
+++ b/pandas/tests/tseries/test_offsets.py
@@ -167,7 +167,7 @@ def test_apply_out_of_range(self):
class TestCommon(Base):
- def setUp(self):
+ def setup_method(self, method):
# exected value created by Base._get_offset
# are applied to 2011/01/01 09:00 (Saturday)
# used for .apply and .rollforward
@@ -507,7 +507,7 @@ def test_pickle_v0_15_2(self):
class TestDateOffset(Base):
- def setUp(self):
+ def setup_method(self, method):
self.d = Timestamp(datetime(2008, 1, 2))
_offset_map.clear()
@@ -547,7 +547,7 @@ def test_eq(self):
class TestBusinessDay(Base):
_offset = BDay
- def setUp(self):
+ def setup_method(self, method):
self.d = datetime(2008, 1, 1)
self.offset = BDay()
@@ -724,7 +724,7 @@ def test_offsets_compare_equal(self):
class TestBusinessHour(Base):
_offset = BusinessHour
- def setUp(self):
+ def setup_method(self, method):
self.d = datetime(2014, 7, 1, 10, 00)
self.offset1 = BusinessHour()
@@ -1418,7 +1418,7 @@ def test_datetimeindex(self):
class TestCustomBusinessHour(Base):
_offset = CustomBusinessHour
- def setUp(self):
+ def setup_method(self, method):
# 2014 Calendar to check custom holidays
# Sun Mon Tue Wed Thu Fri Sat
# 6/22 23 24 25 26 27 28
@@ -1674,7 +1674,7 @@ def test_apply_nanoseconds(self):
class TestCustomBusinessDay(Base):
_offset = CDay
- def setUp(self):
+ def setup_method(self, method):
self.d = datetime(2008, 1, 1)
self.nd = np_datetime64_compat('2008-01-01 00:00:00Z')
@@ -1910,7 +1910,7 @@ def test_pickle_compat_0_14_1(self):
class CustomBusinessMonthBase(object):
- def setUp(self):
+ def setup_method(self, method):
self.d = datetime(2008, 1, 1)
self.offset = self._object()
@@ -4612,7 +4612,7 @@ def test_quarterly_dont_normalize():
class TestOffsetAliases(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
_offset_map.clear()
def test_alias_equality(self):
@@ -4696,7 +4696,7 @@ class TestCaching(tm.TestCase):
# as of GH 6479 (in 0.14.0), offset caching is turned off
# as of v0.12.0 only BusinessMonth/Quarter were actually caching
- def setUp(self):
+ def setup_method(self, method):
_daterange_cache.clear()
_offset_map.clear()
diff --git a/pandas/tests/tseries/test_timezones.py b/pandas/tests/tseries/test_timezones.py
index 10776381974de..74220aa5cd183 100644
--- a/pandas/tests/tseries/test_timezones.py
+++ b/pandas/tests/tseries/test_timezones.py
@@ -52,7 +52,7 @@ def dst(self, dt):
class TestTimeZoneSupportPytz(tm.TestCase):
- def setUp(self):
+ def setup_method(self, method):
tm._skip_if_no_pytz()
def tz(self, tz):
@@ -944,7 +944,7 @@ def test_datetimeindex_tz_nat(self):
class TestTimeZoneSupportDateutil(TestTimeZoneSupportPytz):
- def setUp(self):
+ def setup_method(self, method):
tm._skip_if_no_dateutil()
def tz(self, tz):
@@ -1197,7 +1197,7 @@ def test_cache_keys_are_distinct_for_pytz_vs_dateutil(self):
class TestTimeZones(tm.TestCase):
timezones = ['UTC', 'Asia/Tokyo', 'US/Eastern', 'dateutil/US/Pacific']
- def setUp(self):
+ def setup_method(self, method):
tm._skip_if_no_pytz()
def test_replace(self):
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index d0c56e9974a3f..354e11ce0133a 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -10,7 +10,6 @@
import os
import subprocess
import locale
-import unittest
import traceback
from datetime import datetime
@@ -86,22 +85,17 @@ def reset_testing_mode():
set_testing_mode()
-class TestCase(unittest.TestCase):
+class TestCase(object):
"""
- The test case class that we originally used when using the
- nosetests framework. Under the new pytest framework, we are
- moving away from this class.
-
- Do not create new test classes derived from this one. Rather,
- they should inherit from object directly.
+ Base class for all test case classes.
"""
@classmethod
- def setUpClass(cls):
+ def setup_class(cls):
pd.set_option('chained_assignment', 'raise')
@classmethod
- def tearDownClass(cls):
+ def teardown_class(cls):
pass
@@ -295,36 +289,31 @@ def _skip_if_32bit():
pytest.skip("skipping for 32 bit")
-def mplskip(cls):
- """Skip a TestCase instance if matplotlib isn't installed"""
-
- @classmethod
- def setUpClass(cls):
- try:
- import matplotlib as mpl
- mpl.use("Agg", warn=False)
- except ImportError:
- import pytest
- pytest.skip("matplotlib not installed")
+def _skip_module_if_no_mpl():
+ import pytest
- cls.setUpClass = setUpClass
- return cls
+ mpl = pytest.importorskip("matplotlib")
+ mpl.use("Agg", warn=False)
def _skip_if_no_mpl():
try:
- import matplotlib # noqa
+ import matplotlib as mpl
+ mpl.use("Agg", warn=False)
except ImportError:
import pytest
pytest.skip("matplotlib not installed")
def _skip_if_mpl_1_5():
- import matplotlib
- v = matplotlib.__version__
+ import matplotlib as mpl
+
+ v = mpl.__version__
if v > LooseVersion('1.4.3') or v[0] == '0':
import pytest
pytest.skip("matplotlib 1.5")
+ else:
+ mpl.use("Agg", warn=False)
def _skip_if_no_scipy():
| * Convert all test setup/teardown to `pytest` idiom
* Remove any classes that inherit from `unittest.TestCase`
* Update documentation that we are still using `tm.TestCase`
Closes #15990.
`pytest` idiom for test setup/teardown can be found <a href="https://docs.pytest.org/en/2.7.3/xunit_setup.html">here</a>. | https://api.github.com/repos/pandas-dev/pandas/pulls/16201 | 2017-05-02T16:31:20Z | 2017-05-04T02:38:13Z | 2017-05-04T02:38:13Z | 2017-05-04T03:13:55Z |
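The record above migrates pandas tests from unittest's `setUp`/`tearDown` to pytest's xunit-style `setup_method`/`teardown_method`. A minimal, runnable sketch of that idiom (the class and data here are illustrative, not from pandas; pytest normally drives the lifecycle itself, so the manual calls at the end only mimic what the runner does):

```python
class TestExample:
    def setup_method(self, method):
        # pytest calls this before every test method in the class;
        # `method` is the test function object about to run
        self.data = [1, 2, 3]

    def teardown_method(self, method):
        # pytest calls this after every test method
        self.data = None

    def test_sum(self):
        assert sum(self.data) == 6


# Mimic the lifecycle pytest would drive automatically:
t = TestExample()
t.setup_method(TestExample.test_sum)
t.test_sum()
t.teardown_method(TestExample.test_sum)
```

Note that, unlike `unittest.TestCase`, the class inherits from `object` (implicitly), so pytest collects it purely by the `Test` name prefix — which is exactly why the diff also drops the `unittest.TestCase` base classes.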
DOC: Add redirect for moved classes | diff --git a/doc/_templates/api_redirect.html b/doc/_templates/api_redirect.html
index 24bdd8363830f..c04a8b58ce544 100644
--- a/doc/_templates/api_redirect.html
+++ b/doc/_templates/api_redirect.html
@@ -1,15 +1,10 @@
-{% set pgn = pagename.split('.') -%}
-{% if pgn[-2][0].isupper() -%}
- {% set redirect = ["pandas", pgn[-2], pgn[-1], 'html']|join('.') -%}
-{% else -%}
- {% set redirect = ["pandas", pgn[-1], 'html']|join('.') -%}
-{% endif -%}
+{% set redirect = redirects[pagename.split("/")[-1]] %}
<html>
<head>
- <meta http-equiv="Refresh" content="0; url={{ redirect }}" />
+ <meta http-equiv="Refresh" content="0; url={{ redirect }}.html" />
<title>This API page has moved</title>
</head>
<body>
- <p>This API page has moved <a href="{{ redirect }}">here</a>.</p>
+ <p>This API page has moved <a href="{{ redirect }}.html">here</a>.</p>
</body>
-</html>
\ No newline at end of file
+</html>
diff --git a/doc/source/conf.py b/doc/source/conf.py
index a2a6dca57c34c..556e5f0227471 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -14,6 +14,7 @@
import os
import re
import inspect
+import importlib
from pandas.compat import u, PY3
# https://github.com/sphinx-doc/sphinx/pull/2325/files
@@ -226,20 +227,69 @@
# Additional templates that should be rendered to pages, maps page names to
# template names.
-# Add redirect for previously existing API pages (which are now included in
-# the API pages as top-level functions) based on a template (GH9911)
+# Add redirect for previously existing API pages
+# each item is like `(from_old, to_new)`
+# To redirect a class and all its methods, see below
+# https://github.com/pandas-dev/pandas/issues/16186
+
moved_api_pages = [
- 'pandas.core.common.isnull', 'pandas.core.common.notnull', 'pandas.core.reshape.get_dummies',
- 'pandas.tools.merge.concat', 'pandas.tools.merge.merge', 'pandas.tools.pivot.pivot_table',
- 'pandas.tseries.tools.to_datetime', 'pandas.io.clipboard.read_clipboard', 'pandas.io.excel.ExcelFile.parse',
- 'pandas.io.excel.read_excel', 'pandas.io.html.read_html', 'pandas.io.json.read_json',
- 'pandas.io.parsers.read_csv', 'pandas.io.parsers.read_fwf', 'pandas.io.parsers.read_table',
- 'pandas.io.pickle.read_pickle', 'pandas.io.pytables.HDFStore.append', 'pandas.io.pytables.HDFStore.get',
- 'pandas.io.pytables.HDFStore.put', 'pandas.io.pytables.HDFStore.select', 'pandas.io.pytables.read_hdf',
- 'pandas.io.sql.read_sql', 'pandas.io.sql.read_frame', 'pandas.io.sql.write_frame',
- 'pandas.io.stata.read_stata']
-
-html_additional_pages = {'generated/' + page: 'api_redirect.html' for page in moved_api_pages}
+ ('pandas.core.common.isnull', 'pandas.isnull'),
+ ('pandas.core.common.notnull', 'pandas.notnull'),
+ ('pandas.core.reshape.get_dummies', 'pandas.get_dummies'),
+ ('pandas.tools.merge.concat', 'pandas.concat'),
+ ('pandas.tools.merge.merge', 'pandas.merge'),
+ ('pandas.tools.pivot.pivot_table', 'pandas.pivot_table'),
+ ('pandas.tseries.tools.to_datetime', 'pandas.to_datetime'),
+ ('pandas.io.clipboard.read_clipboard', 'pandas.read_clipboard'),
+ ('pandas.io.excel.ExcelFile.parse', 'pandas.ExcelFile.parse'),
+ ('pandas.io.excel.read_excel', 'pandas.read_excel'),
+ ('pandas.io.html.read_html', 'pandas.read_html'),
+ ('pandas.io.json.read_json', 'pandas.read_json'),
+ ('pandas.io.parsers.read_csv', 'pandas.read_csv'),
+ ('pandas.io.parsers.read_fwf', 'pandas.read_fwf'),
+ ('pandas.io.parsers.read_table', 'pandas.read_table'),
+ ('pandas.io.pickle.read_pickle', 'pandas.read_pickle'),
+ ('pandas.io.pytables.HDFStore.append', 'pandas.HDFStore.append'),
+ ('pandas.io.pytables.HDFStore.get', 'pandas.HDFStore.get'),
+ ('pandas.io.pytables.HDFStore.put', 'pandas.HDFStore.put'),
+ ('pandas.io.pytables.HDFStore.select', 'pandas.HDFStore.select'),
+ ('pandas.io.pytables.read_hdf', 'pandas.read_hdf'),
+ ('pandas.io.sql.read_sql', 'pandas.read_sql'),
+ ('pandas.io.sql.read_frame', 'pandas.read_frame'),
+ ('pandas.io.sql.write_frame', 'pandas.write_frame'),
+ ('pandas.io.stata.read_stata', 'pandas.read_stata'),
+]
+
+# Again, tuples of (from_old, to_new)
+moved_classes = [
+ ('pandas.tseries.resample.Resampler', 'pandas.core.resample.Resampler'),
+ ('pandas.formats.style.Styler', 'pandas.io.formats.style.Styler'),
+]
+
+for old, new in moved_classes:
+ # the class itself...
+ moved_api_pages.append((old, new))
+
+ mod, classname = new.rsplit('.', 1)
+ klass = getattr(importlib.import_module(mod), classname)
+ methods = [x for x in dir(klass)
+ if not x.startswith('_') or x in ('__iter__', '__array__')]
+
+ for method in methods:
+ # ... and each of its public methods
+ moved_api_pages.append(
+ ("{old}.{method}".format(old=old, method=method),
+ "{new}.{method}".format(new=new, method=method))
+ )
+
+html_additional_pages = {
+ 'generated/' + page[0]: 'api_redirect.html'
+ for page in moved_api_pages
+}
+
+html_context = {
+ 'redirects': {old: new for old, new in moved_api_pages}
+}
# If false, no module index is generated.
html_use_modindex = True
| The new redirects in this commit are for Resampler and Styler
Refactor how we do redirects. Moved all the logic into the
config file, where you state the methods / classes to be redirected.
Removed all the logic from the template, and just look up in the
new html_context variable.
Closes https://github.com/pandas-dev/pandas/issues/16186
Running a test locally ATM. | https://api.github.com/repos/pandas-dev/pandas/pulls/16200 | 2017-05-02T16:21:12Z | 2017-05-02T17:52:47Z | 2017-05-02T17:52:47Z | 2017-05-03T18:24:35Z |
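The `conf.py` change above expands each moved class into one redirect for the class itself plus one per public method. A dependency-free sketch of that expansion logic, using a hypothetical `DummyStyler` stand-in (the real code imports the actual pandas class via `importlib`):

```python
# Stand-in for a relocated class; only the public methods matter here.
class DummyStyler:
    def render(self):
        pass

    def to_excel(self):
        pass

    def _private(self):  # excluded: leading underscore
        pass


# (old dotted path, new class) pairs, mirroring `moved_classes` in conf.py
moved_classes = [('pandas.formats.style.Styler', DummyStyler)]

moved_api_pages = []
for old, klass in moved_classes:
    new = 'pandas.io.formats.style.{}'.format(klass.__name__)
    # the class itself...
    moved_api_pages.append((old, new))
    # ...and each of its public methods
    methods = [m for m in dir(klass) if not m.startswith('_')]
    for m in methods:
        moved_api_pages.append(
            ('{}.{}'.format(old, m), '{}.{}'.format(new, m)))

# This dict is what the template looks up via `redirects[pagename...]`
redirects = dict(moved_api_pages)
```

The design point of the PR is that all mapping logic lives in `conf.py`, so the Jinja template shrinks to a single dictionary lookup instead of guessing the target from the page name.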
register custom DisplayFormatter for table schema | diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py
index d77d17aa4d00e..81fb8090a7afe 100644
--- a/pandas/core/config_init.py
+++ b/pandas/core/config_init.py
@@ -9,6 +9,7 @@
module is imported, register them here rather then in the module.
"""
+import sys
import warnings
import pandas.core.config as cf
@@ -341,18 +342,36 @@ def mpl_style_cb(key):
def table_schema_cb(key):
- # Having _ipython_display_ defined messes with the return value
- # from cells, so the Out[x] dictionary breaks.
- # Currently table schema is the only thing using it, so we'll
- # monkey patch `_ipython_display_` onto NDFrame when config option
- # is set
- # see https://github.com/pandas-dev/pandas/issues/16168
- from pandas.core.generic import NDFrame, _ipython_display_
+ # first, check if we are in IPython
+ if 'IPython' not in sys.modules:
+ # definitely not in IPython
+ return
+ from IPython import get_ipython
+ ip = get_ipython()
+ if ip is None:
+ # still not in IPython
+ return
+
+ formatters = ip.display_formatter.formatters
+
+ mimetype = "application/vnd.dataresource+json"
if cf.get_option(key):
- NDFrame._ipython_display_ = _ipython_display_
- elif getattr(NDFrame, '_ipython_display_', None):
- del NDFrame._ipython_display_
+ if mimetype not in formatters:
+ # define tableschema formatter
+ from IPython.core.formatters import BaseFormatter
+
+ class TableSchemaFormatter(BaseFormatter):
+ print_method = '_repr_table_schema_'
+ _return_type = (dict,)
+ # register it:
+ formatters[mimetype] = TableSchemaFormatter()
+ # enable it if it's been disabled:
+ formatters[mimetype].enabled = True
+ else:
+ # unregister tableschema mime-type
+ if mimetype in formatters:
+ formatters[mimetype].enabled = False
with cf.config_prefix('display'):
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 48ee1842dc4a0..b3498583f6e14 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -43,7 +43,6 @@
import pandas.core.algorithms as algos
import pandas.core.common as com
import pandas.core.missing as missing
-from pandas.errors import UnserializableWarning
from pandas.io.formats.printing import pprint_thing
from pandas.io.formats.format import format_percentiles
from pandas.tseries.frequencies import to_offset
@@ -6279,38 +6278,6 @@ def logical_func(self, axis=None, bool_only=None, skipna=None, level=None,
return set_function_name(logical_func, name, cls)
-def _ipython_display_(self):
- # Having _ipython_display_ defined messes with the return value
- # from cells, so the Out[x] dictionary breaks.
- # Currently table schema is the only thing using it, so we'll
- # monkey patch `_ipython_display_` onto NDFrame when config option
- # is set
- # see https://github.com/pandas-dev/pandas/issues/16168
- try:
- from IPython.display import display
- except ImportError:
- return None
-
- # Series doesn't define _repr_html_ or _repr_latex_
- latex = self._repr_latex_() if hasattr(self, '_repr_latex_') else None
- html = self._repr_html_() if hasattr(self, '_repr_html_') else None
- try:
- table_schema = self._repr_table_schema_()
- except Exception as e:
- warnings.warn("Cannot create table schema representation. "
- "{}".format(e), UnserializableWarning)
- table_schema = None
- # We need the inital newline since we aren't going through the
- # usual __repr__. See
- # https://github.com/pandas-dev/pandas/pull/14904#issuecomment-277829277
- text = "\n" + repr(self)
-
- reprs = {"text/plain": text, "text/html": html, "text/latex": latex,
- "application/vnd.dataresource+json": table_schema}
- reprs = {k: v for k, v in reprs.items() if v}
- display(reprs, raw=True)
-
-
# install the indexes
for _name, _indexer in indexing.get_indexers_list():
NDFrame._create_indexer(_name, _indexer)
diff --git a/pandas/errors/__init__.py b/pandas/errors/__init__.py
index 9b6c9c5be319c..805e689dca840 100644
--- a/pandas/errors/__init__.py
+++ b/pandas/errors/__init__.py
@@ -57,9 +57,3 @@ class ParserWarning(Warning):
"""
-class UnserializableWarning(Warning):
- """
- Warning that is raised when a DataFrame cannot be serialized.
-
- .. versionadded:: 0.20.0
- """
diff --git a/pandas/tests/io/formats/test_printing.py b/pandas/tests/io/formats/test_printing.py
index b8d6e9578339f..3acd5c7a5e8c5 100644
--- a/pandas/tests/io/formats/test_printing.py
+++ b/pandas/tests/io/formats/test_printing.py
@@ -5,7 +5,6 @@
import pandas as pd
from pandas import compat
-from pandas.errors import UnserializableWarning
import pandas.io.formats.printing as printing
import pandas.io.formats.format as fmt
import pandas.util.testing as tm
@@ -137,8 +136,11 @@ def setUpClass(cls):
except ImportError:
pytest.skip("Mock is not installed")
cls.mock = mock
+ from IPython.core.interactiveshell import InteractiveShell
+ cls.display_formatter = InteractiveShell.instance().display_formatter
def test_publishes(self):
+
df = pd.DataFrame({"A": [1, 2]})
objects = [df['A'], df, df] # dataframe / series
expected_keys = [
@@ -146,29 +148,20 @@ def test_publishes(self):
{'text/plain', 'text/html', 'application/vnd.dataresource+json'},
]
- make_patch = self.mock.patch('IPython.display.display')
opt = pd.option_context('display.html.table_schema', True)
for obj, expected in zip(objects, expected_keys):
- with opt, make_patch as mock_display:
- handle = obj._ipython_display_()
- assert mock_display.call_count == 1
- assert handle is None
- args, kwargs = mock_display.call_args
- arg, = args # just one argument
-
- assert kwargs == {"raw": True}
- assert set(arg.keys()) == expected
+ with opt:
+ formatted = self.display_formatter.format(obj)
+ assert set(formatted[0].keys()) == expected
with_latex = pd.option_context('display.latex.repr', True)
- with opt, with_latex, make_patch as mock_display:
- handle = obj._ipython_display_()
- args, kwargs = mock_display.call_args
- arg, = args
+ with opt, with_latex:
+ formatted = self.display_formatter.format(obj)
expected = {'text/plain', 'text/html', 'text/latex',
'application/vnd.dataresource+json'}
- assert set(arg.keys()) == expected
+ assert set(formatted[0].keys()) == expected
def test_publishes_not_implemented(self):
# column MultiIndex
@@ -176,18 +169,13 @@ def test_publishes_not_implemented(self):
midx = pd.MultiIndex.from_product([['A', 'B'], ['a', 'b', 'c']])
df = pd.DataFrame(np.random.randn(5, len(midx)), columns=midx)
- make_patch = self.mock.patch('IPython.display.display')
opt = pd.option_context('display.html.table_schema', True)
- with opt, make_patch as mock_display:
- with pytest.warns(UnserializableWarning) as record:
- df._ipython_display_()
- args, _ = mock_display.call_args
- arg, = args # just one argument
+
+ with opt:
+ formatted = self.display_formatter.format(df)
expected = {'text/plain', 'text/html'}
- assert set(arg.keys()) == expected
- assert "orient='table' is not supported for MultiIndex" in (
- record[-1].message.args[0])
+ assert set(formatted[0].keys()) == expected
def test_config_on(self):
df = pd.DataFrame({"A": [1, 2]})
@@ -209,26 +197,23 @@ def test_config_monkeypatches(self):
assert not hasattr(df, '_ipython_display_')
assert not hasattr(df['A'], '_ipython_display_')
+ formatters = self.display_formatter.formatters
+ mimetype = 'application/vnd.dataresource+json'
+
with pd.option_context('display.html.table_schema', True):
- assert hasattr(df, '_ipython_display_')
- # smoke test that it works
- df._ipython_display_()
- assert hasattr(df['A'], '_ipython_display_')
- df['A']._ipython_display_()
+ assert 'application/vnd.dataresource+json' in formatters
+ assert formatters[mimetype].enabled
- assert not hasattr(df, '_ipython_display_')
- assert not hasattr(df['A'], '_ipython_display_')
- # re-unsetting is OK
- assert not hasattr(df, '_ipython_display_')
- assert not hasattr(df['A'], '_ipython_display_')
+ # still there, just disabled
+ assert 'application/vnd.dataresource+json' in formatters
+ assert not formatters[mimetype].enabled
# able to re-set
with pd.option_context('display.html.table_schema', True):
- assert hasattr(df, '_ipython_display_')
+ assert 'application/vnd.dataresource+json' in formatters
+ assert formatters[mimetype].enabled
# smoke test that it works
- df._ipython_display_()
- assert hasattr(df['A'], '_ipython_display_')
- df['A']._ipython_display_()
+ self.display_formatter.format(cf)
# TODO: fix this broken test
| instead of using `_ipython_display_` for custom mime-types
This should solve the issues in #16171, without needing to register/unregister `_ipython_display_`.
In the long run, https://github.com/ipython/ipython/pull/10496 should remove the need for any custom registration, in favor of a simple `_repr_mimebundle_` method.
I think the only thing missing is the UnserializableWarning because IPython does its own catching and displaying of tracebacks when formatters fail. I can copy and modify a little more code out of the base Formatter to preserve that warning, though. | https://api.github.com/repos/pandas-dev/pandas/pulls/16198 | 2017-05-02T14:59:42Z | 2017-05-02T19:17:10Z | 2017-05-02T19:17:10Z | 2023-05-11T01:15:29Z |
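The enabled-flag pattern these tests now assert — a mimetype formatter stays registered and is merely toggled on and off, instead of a `_ipython_display_` method being added and removed — can be sketched outside IPython. This is a simplified stand-in for illustration, not IPython's actual `DisplayFormatter` API:

```python
# Minimal sketch of a mimetype formatter registry with an `enabled` toggle,
# mirroring what the rewritten tests check against self.display_formatter.
class MimeFormatter:
    def __init__(self, fn):
        self.fn = fn
        self.enabled = False  # registered but inactive by default

class DisplayFormatter:
    def __init__(self):
        self.formatters = {}

    def register(self, mimetype, fn):
        self.formatters[mimetype] = MimeFormatter(fn)

    def format(self, obj):
        # return {mimetype: payload} for every *enabled* formatter
        return {mime: f.fn(obj)
                for mime, f in self.formatters.items() if f.enabled}

fmt = DisplayFormatter()
fmt.register('text/plain', repr)
fmt.register('application/vnd.dataresource+json', lambda o: {'data': o})
fmt.formatters['text/plain'].enabled = True

# the table-schema payload only appears once its formatter is enabled
assert set(fmt.format([1, 2])) == {'text/plain'}
fmt.formatters['application/vnd.dataresource+json'].enabled = True
assert set(fmt.format([1, 2])) == {'text/plain',
                                   'application/vnd.dataresource+json'}
# disabling keeps it registered, just inactive — like the option context exit
fmt.formatters['application/vnd.dataresource+json'].enabled = False
assert 'application/vnd.dataresource+json' in fmt.formatters
assert set(fmt.format([1, 2])) == {'text/plain'}
```

This is why the test can assert "still there, just disabled": turning the option off no longer unregisters anything.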
Unblock supported compression libs in pytables | diff --git a/doc/source/whatsnew/v0.20.2.txt b/doc/source/whatsnew/v0.20.2.txt
index 983f3edfa2f46..95e88f610004f 100644
--- a/doc/source/whatsnew/v0.20.2.txt
+++ b/doc/source/whatsnew/v0.20.2.txt
@@ -19,7 +19,7 @@ Highlights include:
Enhancements
~~~~~~~~~~~~
-
+- Unblocked access to additional compression types supported in pytables: 'blosc:blosclz, 'blosc:lz4', 'blosc:lz4hc', 'blosc:snappy', 'blosc:zlib', 'blosc:zstd' (:issue:`14478`)
.. _whatsnew_0202.performance:
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index b72f83ce723cc..777cfcae7a326 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -1266,12 +1266,17 @@ def to_hdf(self, path_or_buf, key, **kwargs):
<http://pandas.pydata.org/pandas-docs/stable/io.html#query-via-data-columns>`__.
Applicable only to format='table'.
- complevel : int, 1-9, default 0
- If a complib is specified compression will be applied
- where possible
- complib : {'zlib', 'bzip2', 'lzo', 'blosc', None}, default None
- If complevel is > 0 apply compression to objects written
- in the store wherever possible
+ complevel : int, 0-9, default 0
+ Specifies a compression level for data.
+ A value of 0 disables compression.
+ complib : {'zlib', 'lzo', 'bzip2', 'blosc', None}, default None
+ Specifies the compression library to be used.
+ As of v0.20.2 these additional compressors for Blosc are supported
+ (default if no compressor specified: 'blosc:blosclz'):
+ {'blosc:blosclz', 'blosc:lz4', 'blosc:lz4hc', 'blosc:snappy',
+ 'blosc:zlib', 'blosc:zstd'}.
+ Specifying a compression library which is not available issues
+ a ValueError.
fletcher32 : bool, default False
If applying compression use the fletcher32 checksum
dropna : boolean, default False.
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 17bedd016f617..f017421c1f83a 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -402,12 +402,17 @@ class HDFStore(StringMixin):
and if the file does not exist it is created.
``'r+'``
It is similar to ``'a'``, but the file must already exist.
- complevel : int, 1-9, default 0
- If a complib is specified compression will be applied
- where possible
- complib : {'zlib', 'bzip2', 'lzo', 'blosc', None}, default None
- If complevel is > 0 apply compression to objects written
- in the store wherever possible
+ complevel : int, 0-9, default 0
+ Specifies a compression level for data.
+ A value of 0 disables compression.
+ complib : {'zlib', 'lzo', 'bzip2', 'blosc', None}, default None
+ Specifies the compression library to be used.
+ As of v0.20.2 these additional compressors for Blosc are supported
+ (default if no compressor specified: 'blosc:blosclz'):
+ {'blosc:blosclz', 'blosc:lz4', 'blosc:lz4hc', 'blosc:snappy',
+ 'blosc:zlib', 'blosc:zstd'}.
+ Specifying a compression library which is not available issues
+ a ValueError.
fletcher32 : bool, default False
If applying compression use the fletcher32 checksum
@@ -430,9 +435,10 @@ def __init__(self, path, mode=None, complevel=None, complib=None,
raise ImportError('HDFStore requires PyTables, "{ex}" problem '
'importing'.format(ex=str(ex)))
- if complib not in (None, 'blosc', 'bzip2', 'lzo', 'zlib'):
- raise ValueError("complib only supports 'blosc', 'bzip2', lzo' "
- "or 'zlib' compression.")
+ if complib is not None and complib not in tables.filters.all_complibs:
+ raise ValueError(
+ "complib only supports {libs} compression.".format(
+ libs=tables.filters.all_complibs))
self._path = path
if mode is None:
diff --git a/pandas/tests/io/test_pytables.py b/pandas/tests/io/test_pytables.py
index 873bb20b3bba9..abfd88a6f13e1 100644
--- a/pandas/tests/io/test_pytables.py
+++ b/pandas/tests/io/test_pytables.py
@@ -734,6 +734,39 @@ def test_put_compression_blosc(self):
store.put('c', df, format='table', complib='blosc')
tm.assert_frame_equal(store['c'], df)
+ def test_complibs(self):
+ # GH14478
+ df = tm.makeDataFrame()
+
+ # Building list of all complibs and complevels tuples
+ all_complibs = tables.filters.all_complibs
+ # Remove lzo if its not available on this platform
+ if not tables.which_lib_version('lzo'):
+ all_complibs.remove('lzo')
+ all_levels = range(0, 10)
+ all_tests = [(lib, lvl) for lib in all_complibs for lvl in all_levels]
+
+ for (lib, lvl) in all_tests:
+ with ensure_clean_path(self.path) as tmpfile:
+ gname = 'foo'
+
+ # Write and read file to see if data is consistent
+ df.to_hdf(tmpfile, gname, complib=lib, complevel=lvl)
+ result = pd.read_hdf(tmpfile, gname)
+ tm.assert_frame_equal(result, df)
+
+ # Open file and check metadata
+ # for correct amount of compression
+ h5table = tables.open_file(tmpfile, mode='r')
+ for node in h5table.walk_nodes(where='/' + gname,
+ classname='Leaf'):
+ assert node.filters.complevel == lvl
+ if lvl == 0:
+ assert node.filters.complib is None
+ else:
+ assert node.filters.complib == lib
+ h5table.close()
+
def test_put_integer(self):
# non-date, non-string index
df = DataFrame(np.random.randn(50, 100))
@@ -4939,8 +4972,8 @@ def test_invalid_complib(self):
index=list('abcd'),
columns=list('ABCDE'))
with ensure_clean_path(self.path) as path:
- pytest.raises(ValueError, df.to_hdf, path,
- 'df', complib='blosc:zlib')
+ with pytest.raises(ValueError):
+ df.to_hdf(path, 'df', complib='foolib')
# GH10443
def test_read_nokey(self):
| - [x] closes #14478
- [x] tests added / passed
- [x] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff``
- [ ] whatsnew entry | https://api.github.com/repos/pandas-dev/pandas/pulls/16196 | 2017-05-02T08:36:23Z | 2017-05-11T22:55:10Z | 2017-05-11T22:55:09Z | 2017-05-30T12:20:36Z |
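The relaxed validation this PR introduces in `HDFStore.__init__` can be sketched without PyTables installed. The `all_complibs` list below is a hard-coded stand-in for the real `tables.filters.all_complibs` attribute, which on a real install depends on how PyTables was built:

```python
# Stand-in for tables.filters.all_complibs (assumed contents for this sketch).
all_complibs = ['zlib', 'lzo', 'bzip2', 'blosc', 'blosc:blosclz',
                'blosc:lz4', 'blosc:lz4hc', 'blosc:snappy',
                'blosc:zlib', 'blosc:zstd']

def check_complib(complib):
    # mirrors the new check: accept None, or anything PyTables reports
    if complib is not None and complib not in all_complibs:
        raise ValueError(
            "complib only supports {libs} compression.".format(
                libs=all_complibs))
    return complib

assert check_complib('blosc:zstd') == 'blosc:zstd'  # previously rejected
assert check_complib(None) is None                  # no compression
try:
    check_complib('foolib')
except ValueError:
    pass
else:
    raise AssertionError("unknown complib should raise")
```

Validating against the library's own list (rather than a hand-maintained whitelist) is what lets new Blosc sub-compressors work without further pandas changes.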
TST: test reset_index with tuple index name and col_level!=0 | diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py
index 5b2057f830102..f4cb07625faf2 100644
--- a/pandas/tests/test_multilevel.py
+++ b/pandas/tests/test_multilevel.py
@@ -2286,6 +2286,11 @@ def test_reset_index_multiindex_columns(self):
"incomplete column name \('C', 'c'\)")):
df2.rename_axis([('C', 'c')]).reset_index(col_fill=None)
+ # with col_level != 0
+ result = df2.rename_axis([('c', 'ii')]).reset_index(col_level=1,
+ col_fill='C')
+ tm.assert_frame_equal(result, expected)
+
def test_set_index_period(self):
# GH 6631
df = DataFrame(np.random.random(6))
| - [x] tests added / passed
- [x] passes ``git diff master --name-only -- '*.py' | flake8 --diff``
This just completes #16165 as [asked](https://github.com/pandas-dev/pandas/pull/16165#issuecomment-298427315) by @jorisvandenbossche | https://api.github.com/repos/pandas-dev/pandas/pulls/16195 | 2017-05-02T07:53:19Z | 2017-05-02T10:26:20Z | 2017-05-02T10:26:20Z | 2017-05-03T23:17:50Z |
COMPAT: PySlice_GetIndicesEx is a macro on PyPy | diff --git a/pandas/_libs/src/compat_helper.h b/pandas/_libs/src/compat_helper.h
index 8f86bb3f8e62f..bdff61d7d4150 100644
--- a/pandas/_libs/src/compat_helper.h
+++ b/pandas/_libs/src/compat_helper.h
@@ -26,8 +26,10 @@ the macro, which restores compat.
https://bugs.python.org/issue29943
*/
-#if PY_VERSION_HEX < 0x03070000 && defined(PySlice_GetIndicesEx)
- #undef PySlice_GetIndicesEx
+#ifndef PYPY_VERSION
+# if PY_VERSION_HEX < 0x03070000 && defined(PySlice_GetIndicesEx)
+# undef PySlice_GetIndicesEx
+# endif
#endif
PANDAS_INLINE int slice_get_indices(PyObject *s,
| The fix for issue #15961 broke compatibility with PyPy. This pull request fixes it (fixed formatting for linter). | https://api.github.com/repos/pandas-dev/pandas/pulls/16194 | 2017-05-02T05:58:23Z | 2017-05-02T10:28:23Z | 2017-05-02T10:28:23Z | 2017-05-02T10:28:26Z |
BUG: incorrect handling of scipy.sparse.dok formats | diff --git a/doc/source/whatsnew/v0.20.2.txt b/doc/source/whatsnew/v0.20.2.txt
index e0a8065d9a507..ad8a85843ea17 100644
--- a/doc/source/whatsnew/v0.20.2.txt
+++ b/doc/source/whatsnew/v0.20.2.txt
@@ -66,8 +66,7 @@ Groupby/Resample/Rolling
Sparse
^^^^^^
-
-
+- Bug in construction of SparseDataFrame from ``scipy.sparse.dok_matrix`` (:issue:`16179`)
Reshaping
^^^^^^^^^
diff --git a/pandas/core/sparse/frame.py b/pandas/core/sparse/frame.py
index 3c8f6e8c6257d..461dd50c5da6e 100644
--- a/pandas/core/sparse/frame.py
+++ b/pandas/core/sparse/frame.py
@@ -190,8 +190,8 @@ def _init_spmatrix(self, data, index, columns, dtype=None,
values = Series(data.data, index=data.row, copy=False)
for col, rowvals in values.groupby(data.col):
# get_blocks expects int32 row indices in sorted order
+ rowvals = rowvals.sort_index()
rows = rowvals.index.values.astype(np.int32)
- rows.sort()
blocs, blens = get_blocks(rows)
sdict[columns[col]] = SparseSeries(
diff --git a/pandas/tests/sparse/test_frame.py b/pandas/tests/sparse/test_frame.py
index 0312b76ec30a5..654d12b782f37 100644
--- a/pandas/tests/sparse/test_frame.py
+++ b/pandas/tests/sparse/test_frame.py
@@ -1146,8 +1146,8 @@ def test_isnotnull(self):
tm.assert_frame_equal(res.to_dense(), exp)
-@pytest.mark.parametrize('index', [None, list('ab')]) # noqa: F811
-@pytest.mark.parametrize('columns', [None, list('cd')])
+@pytest.mark.parametrize('index', [None, list('abc')]) # noqa: F811
+@pytest.mark.parametrize('columns', [None, list('def')])
@pytest.mark.parametrize('fill_value', [None, 0, np.nan])
@pytest.mark.parametrize('dtype', [bool, int, float, np.uint16])
def test_from_to_scipy(spmatrix, index, columns, fill_value, dtype):
@@ -1156,7 +1156,9 @@ def test_from_to_scipy(spmatrix, index, columns, fill_value, dtype):
# Make one ndarray and from it one sparse matrix, both to be used for
# constructing frames and comparing results
- arr = np.eye(2, dtype=dtype)
+ arr = np.eye(3, dtype=dtype)
+ # GH 16179
+ arr[0, 1] = dtype(2)
try:
spm = spmatrix(arr)
assert spm.dtype == arr.dtype
@@ -1245,6 +1247,26 @@ def test_from_to_scipy_object(spmatrix, fill_value):
assert sdf.to_coo().dtype == res_dtype
+def test_from_scipy_correct_ordering(spmatrix):
+ # GH 16179
+ tm.skip_if_no_package('scipy')
+
+ arr = np.arange(1, 5).reshape(2, 2)
+ try:
+ spm = spmatrix(arr)
+ assert spm.dtype == arr.dtype
+ except (TypeError, AssertionError):
+ # If conversion to sparse fails for this spmatrix type and arr.dtype,
+ # then the combination is not currently supported in NumPy, so we
+ # can just skip testing it thoroughly
+ return
+
+ sdf = pd.SparseDataFrame(spm)
+ expected = pd.SparseDataFrame(arr)
+ tm.assert_sp_frame_equal(sdf, expected)
+ tm.assert_frame_equal(sdf.to_dense(), expected.to_dense())
+
+
class TestSparseDataFrameArithmetic(object):
def test_numeric_op_scalar(self):
| - [x] closes #16179
- [x] tests added / passed
- [x] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff`` | https://api.github.com/repos/pandas-dev/pandas/pulls/16191 | 2017-05-02T03:06:20Z | 2017-05-11T23:00:05Z | 2017-05-11T23:00:05Z | 2017-05-30T12:20:36Z |
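The root cause can be illustrated without scipy: the old `rows.sort()` sorted the row indices on their own, decoupling them from their values, while the new `rowvals.sort_index()` keeps each (row, value) pair together. A pure-Python sketch of one column's entries, in the unordered form a `dok_matrix` can yield:

```python
# (row, value) entries for one column, not in row order
entries = [(1, 20.0), (0, 10.0)]

# buggy pairing: sort the index array alone — values keep their old order
rows_buggy = sorted(r for r, _ in entries)
vals = [v for _, v in entries]
buggy = list(zip(rows_buggy, vals))   # pairs value 20.0 with row 0 — wrong

# fixed pairing (what rowvals.sort_index() does): sort rows and values together
fixed = sorted(entries)

assert buggy == [(0, 20.0), (1, 10.0)]   # silently scrambled data
assert fixed == [(0, 10.0), (1, 20.0)]   # correct alignment
```

CSR/CSC inputs happened to arrive already row-sorted, which is why the bug only surfaced for formats like `dok` whose COO export is unordered.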
MAINT: Remove vestigial self.assert* | diff --git a/pandas/tests/frame/test_repr_info.py b/pandas/tests/frame/test_repr_info.py
index 74301b918bd02..0300c53e086cd 100644
--- a/pandas/tests/frame/test_repr_info.py
+++ b/pandas/tests/frame/test_repr_info.py
@@ -331,13 +331,13 @@ def test_info_memory_usage(self):
res = buf.getvalue().splitlines()
assert re.match(r"memory usage: [^+]+$", res[-1])
- self.assertGreater(df_with_object_index.memory_usage(index=True,
- deep=True).sum(),
- df_with_object_index.memory_usage(index=True).sum())
+ assert (df_with_object_index.memory_usage(
+ index=True, deep=True).sum() > df_with_object_index.memory_usage(
+ index=True).sum())
df_object = pd.DataFrame({'a': ['a']})
- self.assertGreater(df_object.memory_usage(deep=True).sum(),
- df_object.memory_usage().sum())
+ assert (df_object.memory_usage(deep=True).sum() >
+ df_object.memory_usage().sum())
# Test a DataFrame with duplicate columns
dtypes = ['int64', 'int64', 'int64', 'float64']
diff --git a/pandas/tests/io/formats/test_style.py b/pandas/tests/io/formats/test_style.py
index 9219ac1c9c26b..1cd338479bd0c 100644
--- a/pandas/tests/io/formats/test_style.py
+++ b/pandas/tests/io/formats/test_style.py
@@ -374,7 +374,7 @@ def test_bar_align_mid_pos_and_neg(self):
'#5fba7d 10.0%, #5fba7d 100.0%, '
'transparent 100.0%)']}
- self.assertEqual(result, expected)
+ assert result == expected
def test_bar_align_mid_all_pos(self):
df = pd.DataFrame({'A': [10, 20, 50, 100]})
@@ -399,7 +399,7 @@ def test_bar_align_mid_all_pos(self):
'transparent 0%, transparent 0.0%, #5fba7d 0.0%, '
'#5fba7d 100.0%, transparent 100.0%)']}
- self.assertEqual(result, expected)
+ assert result == expected
def test_bar_align_mid_all_neg(self):
df = pd.DataFrame({'A': [-100, -60, -30, -20]})
diff --git a/pandas/tests/test_nanops.py b/pandas/tests/test_nanops.py
index 2aa3638b18e9b..efa647fd91a0d 100644
--- a/pandas/tests/test_nanops.py
+++ b/pandas/tests/test_nanops.py
@@ -914,11 +914,11 @@ def test_constant_series(self):
def test_all_finite(self):
alpha, beta = 0.3, 0.1
left_tailed = self.prng.beta(alpha, beta, size=100)
- self.assertLess(nanops.nanskew(left_tailed), 0)
+ assert nanops.nanskew(left_tailed) < 0
alpha, beta = 0.1, 0.3
right_tailed = self.prng.beta(alpha, beta, size=100)
- self.assertGreater(nanops.nanskew(right_tailed), 0)
+ assert nanops.nanskew(right_tailed) > 0
def test_ground_truth(self):
skew = nanops.nanskew(self.samples)
@@ -964,11 +964,11 @@ def test_constant_series(self):
def test_all_finite(self):
alpha, beta = 0.3, 0.1
left_tailed = self.prng.beta(alpha, beta, size=100)
- self.assertLess(nanops.nankurt(left_tailed), 0)
+ assert nanops.nankurt(left_tailed) < 0
alpha, beta = 0.1, 0.3
right_tailed = self.prng.beta(alpha, beta, size=100)
- self.assertGreater(nanops.nankurt(right_tailed), 0)
+ assert nanops.nankurt(right_tailed) > 0
def test_ground_truth(self):
kurt = nanops.nankurt(self.samples)
diff --git a/pandas/tests/test_panel4d.py b/pandas/tests/test_panel4d.py
index 041e36848e1d8..1b611309aece0 100644
--- a/pandas/tests/test_panel4d.py
+++ b/pandas/tests/test_panel4d.py
@@ -13,10 +13,7 @@
from pandas.core.series import remove_na
from pandas.tseries.offsets import BDay
-from pandas.util.testing import (assert_panel_equal,
- assert_panel4d_equal,
- assert_frame_equal,
- assert_series_equal,
+from pandas.util.testing import (assert_frame_equal, assert_series_equal,
assert_almost_equal)
import pandas.util.testing as tm
@@ -133,7 +130,7 @@ def wrapper(x):
for i in range(obj.ndim):
result = f(axis=i, skipna=False)
expected = obj.apply(wrapper, axis=i)
- assert_panel_equal(result, expected)
+ tm.assert_panel_equal(result, expected)
else:
skipna_wrapper = alternative
wrapper = alternative
@@ -143,26 +140,18 @@ def wrapper(x):
result = f(axis=i)
if not tm._incompat_bottleneck_version(name):
expected = obj.apply(skipna_wrapper, axis=i)
- assert_panel_equal(result, expected)
+ tm.assert_panel_equal(result, expected)
pytest.raises(Exception, f, axis=obj.ndim)
class SafeForSparse(object):
- @classmethod
- def assert_panel_equal(cls, x, y):
- assert_panel_equal(x, y)
-
- @classmethod
- def assert_panel4d_equal(cls, x, y):
- assert_panel4d_equal(x, y)
-
def test_get_axis(self):
- assert(self.panel4d._get_axis(0) is self.panel4d.labels)
- assert(self.panel4d._get_axis(1) is self.panel4d.items)
- assert(self.panel4d._get_axis(2) is self.panel4d.major_axis)
- assert(self.panel4d._get_axis(3) is self.panel4d.minor_axis)
+ assert self.panel4d._get_axis(0) is self.panel4d.labels
+ assert self.panel4d._get_axis(1) is self.panel4d.items
+ assert self.panel4d._get_axis(2) is self.panel4d.major_axis
+ assert self.panel4d._get_axis(3) is self.panel4d.minor_axis
def test_set_axis(self):
with catch_warnings(record=True):
@@ -226,7 +215,7 @@ def test_arith(self):
@staticmethod
def _test_op(panel4d, op):
result = op(panel4d, 1)
- assert_panel_equal(result['l1'], op(panel4d['l1'], 1))
+ tm.assert_panel_equal(result['l1'], op(panel4d['l1'], 1))
def test_keys(self):
tm.equalContents(list(self.panel4d.keys()), self.panel4d.labels)
@@ -240,11 +229,11 @@ def test_iteritems(self):
def test_combinePanel4d(self):
with catch_warnings(record=True):
result = self.panel4d.add(self.panel4d)
- self.assert_panel4d_equal(result, self.panel4d * 2)
+ tm.assert_panel4d_equal(result, self.panel4d * 2)
def test_neg(self):
with catch_warnings(record=True):
- self.assert_panel4d_equal(-self.panel4d, self.panel4d * -1)
+ tm.assert_panel4d_equal(-self.panel4d, self.panel4d * -1)
def test_select(self):
with catch_warnings(record=True):
@@ -254,28 +243,28 @@ def test_select(self):
# select labels
result = p.select(lambda x: x in ('l1', 'l3'), axis='labels')
expected = p.reindex(labels=['l1', 'l3'])
- self.assert_panel4d_equal(result, expected)
+ tm.assert_panel4d_equal(result, expected)
# select items
result = p.select(lambda x: x in ('ItemA', 'ItemC'), axis='items')
expected = p.reindex(items=['ItemA', 'ItemC'])
- self.assert_panel4d_equal(result, expected)
+ tm.assert_panel4d_equal(result, expected)
# select major_axis
result = p.select(lambda x: x >= datetime(2000, 1, 15),
axis='major')
new_major = p.major_axis[p.major_axis >= datetime(2000, 1, 15)]
expected = p.reindex(major=new_major)
- self.assert_panel4d_equal(result, expected)
+ tm.assert_panel4d_equal(result, expected)
# select minor_axis
result = p.select(lambda x: x in ('D', 'A'), axis=3)
expected = p.reindex(minor=['A', 'D'])
- self.assert_panel4d_equal(result, expected)
+ tm.assert_panel4d_equal(result, expected)
# corner case, empty thing
result = p.select(lambda x: x in ('foo',), axis='items')
- self.assert_panel4d_equal(result, p.reindex(items=[]))
+ tm.assert_panel4d_equal(result, p.reindex(items=[]))
def test_get_value(self):
@@ -291,12 +280,12 @@ def test_abs(self):
with catch_warnings(record=True):
result = self.panel4d.abs()
expected = np.abs(self.panel4d)
- self.assert_panel4d_equal(result, expected)
+ tm.assert_panel4d_equal(result, expected)
p = self.panel4d['l1']
result = p.abs()
expected = np.abs(p)
- assert_panel_equal(result, expected)
+ tm.assert_panel_equal(result, expected)
df = p['ItemA']
result = df.abs()
@@ -314,7 +303,7 @@ def test_delitem_and_pop(self):
with catch_warnings(record=True):
expected = self.panel4d['l2']
result = self.panel4d.pop('l2')
- assert_panel_equal(expected, result)
+ tm.assert_panel_equal(expected, result)
assert 'l2' not in self.panel4d.labels
del self.panel4d['l3']
@@ -367,9 +356,9 @@ def test_setitem(self):
p2 = self.panel4d['l4']
- assert_panel_equal(p, p2.reindex(items=p.items,
- major_axis=p.major_axis,
- minor_axis=p.minor_axis))
+ tm.assert_panel_equal(p, p2.reindex(items=p.items,
+ major_axis=p.major_axis,
+ minor_axis=p.minor_axis))
# scalar
self.panel4d['lG'] = 1
@@ -534,34 +523,34 @@ def test_getitem_fancy_labels(self):
cols = ['D', 'C', 'F']
# all 4 specified
- assert_panel4d_equal(panel4d.loc[labels, items, dates, cols],
- panel4d.reindex(labels=labels, items=items,
- major=dates, minor=cols))
+ tm.assert_panel4d_equal(panel4d.loc[labels, items, dates, cols],
+ panel4d.reindex(labels=labels, items=items,
+ major=dates, minor=cols))
# 3 specified
- assert_panel4d_equal(panel4d.loc[:, items, dates, cols],
- panel4d.reindex(items=items, major=dates,
- minor=cols))
+ tm.assert_panel4d_equal(panel4d.loc[:, items, dates, cols],
+ panel4d.reindex(items=items, major=dates,
+ minor=cols))
# 2 specified
- assert_panel4d_equal(panel4d.loc[:, :, dates, cols],
- panel4d.reindex(major=dates, minor=cols))
+ tm.assert_panel4d_equal(panel4d.loc[:, :, dates, cols],
+ panel4d.reindex(major=dates, minor=cols))
- assert_panel4d_equal(panel4d.loc[:, items, :, cols],
- panel4d.reindex(items=items, minor=cols))
+ tm.assert_panel4d_equal(panel4d.loc[:, items, :, cols],
+ panel4d.reindex(items=items, minor=cols))
- assert_panel4d_equal(panel4d.loc[:, items, dates, :],
- panel4d.reindex(items=items, major=dates))
+ tm.assert_panel4d_equal(panel4d.loc[:, items, dates, :],
+ panel4d.reindex(items=items, major=dates))
# only 1
- assert_panel4d_equal(panel4d.loc[:, items, :, :],
- panel4d.reindex(items=items))
+ tm.assert_panel4d_equal(panel4d.loc[:, items, :, :],
+ panel4d.reindex(items=items))
- assert_panel4d_equal(panel4d.loc[:, :, dates, :],
- panel4d.reindex(major=dates))
+ tm.assert_panel4d_equal(panel4d.loc[:, :, dates, :],
+ panel4d.reindex(major=dates))
- assert_panel4d_equal(panel4d.loc[:, :, :, cols],
- panel4d.reindex(minor=cols))
+ tm.assert_panel4d_equal(panel4d.loc[:, :, :, cols],
+ panel4d.reindex(minor=cols))
def test_getitem_fancy_slice(self):
pass
@@ -607,10 +596,6 @@ def test_set_value(self):
class TestPanel4d(tm.TestCase, CheckIndexing, SafeForSparse,
SafeForLongAndSparse):
- @classmethod
- def assert_panel4d_equal(cls, x, y):
- assert_panel4d_equal(x, y)
-
def setUp(self):
with catch_warnings(record=True):
self.panel4d = tm.makePanel4D(nper=8)
@@ -697,10 +682,10 @@ def test_ctor_dict(self):
d = {'A': l1, 'B': l2.loc[['ItemB'], :, :]}
panel4d = Panel4D(d)
- assert_panel_equal(panel4d['A'], self.panel4d['l1'])
- assert_frame_equal(panel4d.loc['B', 'ItemB', :, :],
- self.panel4d.loc['l2', ['ItemB'],
- :, :]['ItemB'])
+ tm.assert_panel_equal(panel4d['A'], self.panel4d['l1'])
+ tm.assert_frame_equal(panel4d.loc['B', 'ItemB', :, :],
+ self.panel4d.loc['l2', ['ItemB'],
+ :, :]['ItemB'])
def test_constructor_dict_mixed(self):
with catch_warnings(record=True):
@@ -715,12 +700,12 @@ def test_constructor_dict_mixed(self):
items=self.panel4d.items,
major_axis=self.panel4d.major_axis,
minor_axis=self.panel4d.minor_axis)
- assert_panel4d_equal(result, self.panel4d)
+ tm.assert_panel4d_equal(result, self.panel4d)
data['l2'] = self.panel4d['l2']
result = Panel4D(data)
- assert_panel4d_equal(result, self.panel4d)
+ tm.assert_panel4d_equal(result, self.panel4d)
# corner, blow up
data['l2'] = data['l2']['ItemB']
@@ -741,19 +726,19 @@ def test_constructor_resize(self):
major_axis=major, minor_axis=minor)
expected = self.panel4d.reindex(
labels=labels, items=items, major=major, minor=minor)
- assert_panel4d_equal(result, expected)
+ tm.assert_panel4d_equal(result, expected)
result = Panel4D(data, items=items, major_axis=major)
expected = self.panel4d.reindex(items=items, major=major)
- assert_panel4d_equal(result, expected)
+ tm.assert_panel4d_equal(result, expected)
result = Panel4D(data, items=items)
expected = self.panel4d.reindex(items=items)
- assert_panel4d_equal(result, expected)
+ tm.assert_panel4d_equal(result, expected)
result = Panel4D(data, minor_axis=minor)
expected = self.panel4d.reindex(minor=minor)
- assert_panel4d_equal(result, expected)
+ tm.assert_panel4d_equal(result, expected)
def test_conform(self):
with catch_warnings(record=True):
@@ -773,7 +758,7 @@ def test_reindex(self):
# labels
result = self.panel4d.reindex(labels=['l1', 'l2'])
- assert_panel_equal(result['l2'], ref)
+ tm.assert_panel_equal(result['l2'], ref)
# items
result = self.panel4d.reindex(items=['ItemA', 'ItemB'])
@@ -802,7 +787,7 @@ def test_reindex(self):
# don't necessarily copy
result = self.panel4d.reindex()
- assert_panel4d_equal(result, self.panel4d)
+ tm.assert_panel4d_equal(result, self.panel4d)
assert result is not self.panel4d
# with filling
@@ -812,13 +797,14 @@ def test_reindex(self):
larger = smaller.reindex(major=self.panel4d.major_axis,
method='pad')
- assert_panel_equal(larger.loc[:, :, self.panel4d.major_axis[1], :],
- smaller.loc[:, :, smaller_major[0], :])
+ tm.assert_panel_equal(larger.loc[:, :,
+ self.panel4d.major_axis[1], :],
+ smaller.loc[:, :, smaller_major[0], :])
# don't necessarily copy
result = self.panel4d.reindex(
major=self.panel4d.major_axis, copy=False)
- assert_panel4d_equal(result, self.panel4d)
+ tm.assert_panel4d_equal(result, self.panel4d)
assert result is self.panel4d
def test_not_hashable(self):
@@ -835,7 +821,7 @@ def test_reindex_like(self):
major=self.panel4d.major_axis[:-1],
minor=self.panel4d.minor_axis[:-1])
smaller_like = self.panel4d.reindex_like(smaller)
- assert_panel4d_equal(smaller, smaller_like)
+ tm.assert_panel4d_equal(smaller, smaller_like)
def test_sort_index(self):
with catch_warnings(record=True):
@@ -852,7 +838,7 @@ def test_sort_index(self):
random_order = self.panel4d.reindex(labels=rlabels)
sorted_panel4d = random_order.sort_index(axis=0)
- assert_panel4d_equal(sorted_panel4d, self.panel4d)
+ tm.assert_panel4d_equal(sorted_panel4d, self.panel4d)
def test_fillna(self):
@@ -887,7 +873,7 @@ def test_swapaxes(self):
# this works, but return a copy
result = self.panel4d.swapaxes('items', 'items')
- assert_panel4d_equal(self.panel4d, result)
+ tm.assert_panel4d_equal(self.panel4d, result)
assert id(self.panel4d) != id(result)
def test_update(self):
@@ -916,7 +902,7 @@ def test_update(self):
[1.5, np.nan, 3.],
[1.5, np.nan, 3.]]]])
- assert_panel4d_equal(p4d, expected)
+ tm.assert_panel4d_equal(p4d, expected)
def test_dtypes(self):
@@ -952,4 +938,4 @@ def test_rename(self):
assert (self.panel4d['l1'].values == 3).all()
def test_get_attr(self):
- assert_panel_equal(self.panel4d['l1'], self.panel4d.l1)
+ tm.assert_panel_equal(self.panel4d['l1'], self.panel4d.l1)
| Remove all remaining `self.assert*` method calls originating from `unittest`. Any that are left are calls to methods directly defined in the test class or a higher derived `pandas` test class.
Partially addresses #15990.
Once this PR is merged, it is important that we remain vigilant about requiring PRs to adhere to the `pytest` idiom until we remove our dependency on `unittest.TestCase` (after which the builds will do the work when tests fail). | https://api.github.com/repos/pandas-dev/pandas/pulls/16190 | 2017-05-02T02:21:54Z | 2017-05-02T10:24:17Z | 2017-05-02T10:24:17Z | 2017-05-02T13:29:51Z |
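The conversion pattern applied throughout this diff, shown as a minimal before/after sketch — `unittest`'s `assert*` helpers become bare `assert` statements, which pytest's assertion rewriting turns into equally informative failures:

```python
import unittest

class Before(unittest.TestCase):
    # old style: unittest helper methods
    def runTest(self):
        self.assertGreater(3, 2)
        self.assertEqual({'a'}, {'a'})

def test_after():
    # new style: bare asserts; pytest rewrites these to show both operands
    # on failure, so no expressiveness is lost
    assert 3 > 2
    assert {'a'} == {'a'}

Before().runTest()  # both forms pass
test_after()
```

Chained helpers like `assertGreater(a, b)` map directly to `assert a > b`, as in the `test_info_memory_usage` and `nanops` hunks above.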
DEPR: deprecate pandas.api.types.is_sequence | diff --git a/doc/source/api.rst b/doc/source/api.rst
index 7102258318b5b..491bec3c83f61 100644
--- a/doc/source/api.rst
+++ b/doc/source/api.rst
@@ -1969,15 +1969,14 @@ Dtype introspection
Iterable introspection
+.. autosummary::
+ :toctree: generated/
+
api.types.is_dict_like
api.types.is_file_like
api.types.is_list_like
api.types.is_named_tuple
api.types.is_iterator
- api.types.is_sequence
-
-.. autosummary::
- :toctree: generated/
Scalar introspection
diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
index 6e4756c3c5245..cdad8094e8dd6 100644
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -1521,7 +1521,7 @@ Other Deprecations
* ``pd.match()``, is removed.
* ``pd.groupby()``, replaced by using the ``.groupby()`` method directly on a ``Series/DataFrame``
* ``pd.get_store()``, replaced by a direct call to ``pd.HDFStore(...)``
-- ``is_any_int_dtype`` and ``is_floating_dtype`` are deprecated from ``pandas.api.types`` (:issue:`16042`)
+- ``is_any_int_dtype``, ``is_floating_dtype``, and ``is_sequence`` are deprecated from ``pandas.api.types`` (:issue:`16042`)
.. _whatsnew_0200.prior_deprecations:
diff --git a/pandas/core/dtypes/api.py b/pandas/core/dtypes/api.py
index 242c62125664c..a2180ecc4632f 100644
--- a/pandas/core/dtypes/api.py
+++ b/pandas/core/dtypes/api.py
@@ -57,14 +57,13 @@
is_file_like,
is_list_like,
is_hashable,
- is_named_tuple,
- is_sequence)
+ is_named_tuple)
# deprecated
m = sys.modules['pandas.core.dtypes.api']
-for t in ['is_any_int_dtype', 'is_floating_dtype']:
+for t in ['is_any_int_dtype', 'is_floating_dtype', 'is_sequence']:
def outer(t=t):
diff --git a/pandas/tests/api/test_types.py b/pandas/tests/api/test_types.py
index b9198c42e2eff..834857b87960c 100644
--- a/pandas/tests/api/test_types.py
+++ b/pandas/tests/api/test_types.py
@@ -31,9 +31,9 @@ class TestTypes(Base, tm.TestCase):
'is_re', 'is_re_compilable',
'is_dict_like', 'is_iterator', 'is_file_like',
'is_list_like', 'is_hashable',
- 'is_named_tuple', 'is_sequence',
+ 'is_named_tuple',
'pandas_dtype', 'union_categoricals', 'infer_dtype']
- deprecated = ['is_any_int_dtype', 'is_floating_dtype']
+ deprecated = ['is_any_int_dtype', 'is_floating_dtype', 'is_sequence']
dtypes = ['CategoricalDtype', 'DatetimeTZDtype',
'PeriodDtype', 'IntervalDtype']
@@ -90,7 +90,7 @@ def test_removed_from_core_common(self):
def test_deprecated_from_api_types(self):
- for t in ['is_any_int_dtype', 'is_floating_dtype']:
+ for t in self.deprecated:
with tm.assert_produces_warning(FutureWarning,
check_stacklevel=False):
getattr(types, t)(1)
| xref #16042
| https://api.github.com/repos/pandas-dev/pandas/pulls/16189 | 2017-05-02T00:30:29Z | 2017-05-02T01:22:24Z | 2017-05-02T01:22:24Z | 2017-05-02T06:11:13Z |
DOC: update the .agg doc-string with examples | diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
index 3332bfcd65d50..4882acbe820ea 100644
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -9,7 +9,7 @@ users upgrade to this version.
Highlights include:
-- new ``.agg()`` API for Series/DataFrame similar to the groupby-rolling-resample API's, see :ref:`here <whatsnew_0200.enhancements.agg>`
+- New ``.agg()`` API for Series/DataFrame similar to the groupby-rolling-resample API's, see :ref:`here <whatsnew_0200.enhancements.agg>`
- Integration with the ``feather-format``, including a new top-level ``pd.read_feather()`` and ``DataFrame.to_feather()`` method, see :ref:`here <io.feather>`.
- The ``.ix`` indexer has been deprecated, see :ref:`here <whatsnew_0200.api_breaking.deprecate_ix>`
- ``Panel`` has been deprecated, see :ref:`here <whatsnew_0200.api_breaking.deprecate_panel>`
@@ -45,8 +45,8 @@ New features
^^^^^^^^^^^
Series & DataFrame have been enhanced to support the aggregation API. This is an already familiar API that
-is supported for groupby, window operations, and resampling. This allows one to express, possibly multiple,
-aggregation operations in a single concise way by using :meth:`~DataFrame.agg`,
+is supported for groupby, window operations, and resampling. This allows one to express aggregation operations
+in a single concise way by using :meth:`~DataFrame.agg`,
and :meth:`~DataFrame.transform`. The full documentation is :ref:`here <basics.aggregate>` (:issue:`1623`).
Here is a sample
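(The sample itself is cut off by the hunk above. As a rough, pure-Python sketch — not pandas' implementation — of the dispatch `.agg` performs over a string, a callable, or a list of either:)

```python
def aggregate(values, func):
    """Minimal stand-in for Series.agg: a string names a known reducer,
    a callable is applied directly, and a list yields one result per
    function, keyed by name."""
    named = {'sum': sum, 'min': min, 'max': max}
    if isinstance(func, list):
        return {getattr(f, '__name__', f): aggregate(values, f)
                for f in func}
    if isinstance(func, str):
        func = named[func]
    return func(values)

assert aggregate([1, 2, 3], 'sum') == 6
assert aggregate([1, 2, 3], [min, 'sum']) == {'min': 1, 'sum': 6}
```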
diff --git a/pandas/core/base.py b/pandas/core/base.py
index 87c649c5fbd79..fd0846b0ad33c 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -370,42 +370,6 @@ def _gotitem(self, key, ndim, subset=None):
"""
raise AbstractMethodError(self)
- _agg_doc = """Aggregate using input function or dict of {column ->
-function}
-
-Parameters
-----------
-arg : function or dict
- Function to use for aggregating groups. If a function, must either
- work when passed a DataFrame or when passed to DataFrame.apply. If
- passed a dict, the keys must be DataFrame column names.
-
- Accepted Combinations are:
- - string cythonized function name
- - function
- - list of functions
- - dict of columns -> functions
- - nested dict of names -> dicts of functions
-
-Notes
------
-Numpy functions mean/median/prod/sum/std/var are special cased so the
-default behavior is applying the function along axis=0
-(e.g., np.mean(arr_2d, axis=0)) as opposed to
-mimicking the default Numpy behavior (e.g., np.mean(arr_2d)).
-
-Returns
--------
-aggregated : DataFrame
-"""
-
- _see_also_template = """
-See also
---------
-pandas.Series.%(name)s
-pandas.DataFrame.%(name)s
-"""
-
def aggregate(self, func, *args, **kwargs):
raise AbstractMethodError(self)
@@ -1150,30 +1114,39 @@ def factorize(self, sort=False, na_sentinel=-1):
Examples
--------
+
>>> x = pd.Series([1, 2, 3])
>>> x
0 1
1 2
2 3
dtype: int64
+
>>> x.searchsorted(4)
array([3])
+
>>> x.searchsorted([0, 4])
array([0, 3])
+
>>> x.searchsorted([1, 3], side='left')
array([0, 2])
+
>>> x.searchsorted([1, 3], side='right')
array([1, 3])
- >>>
+
>>> x = pd.Categorical(['apple', 'bread', 'bread', 'cheese', 'milk' ])
[apple, bread, bread, cheese, milk]
Categories (4, object): [apple < bread < cheese < milk]
+
>>> x.searchsorted('bread')
array([1]) # Note: an array, not a scalar
+
>>> x.searchsorted(['bread'])
array([1])
+
>>> x.searchsorted(['bread', 'eggs'])
array([1, 4])
+
>>> x.searchsorted(['bread', 'eggs'], side='right')
array([3, 4]) # eggs before milk
""")
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 9a62259202653..67966374fcf9a 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -18,6 +18,7 @@
import sys
import types
import warnings
+from textwrap import dedent
from numpy import nan as NA
import numpy as np
@@ -4200,7 +4201,43 @@ def _gotitem(self, key, ndim, subset=None):
# TODO: _shallow_copy(subset)?
return self[key]
- @Appender(_shared_docs['aggregate'] % _shared_doc_kwargs)
+ _agg_doc = dedent("""
+ Examples
+ --------
+
+ >>> df = pd.DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C'],
+ ... index=pd.date_range('1/1/2000', periods=10))
+ >>> df.iloc[3:7] = np.nan
+
+ Aggregate these functions across all columns
+
+ >>> df.agg(['sum', 'min'])
+ A B C
+ sum -0.182253 -0.614014 -2.909534
+ min -1.916563 -1.460076 -1.568297
+
+ Different aggregations per column
+
+ >>> df.agg({'A' : ['sum', 'min'], 'B' : ['min', 'max']})
+ A B
+ max NaN 1.514318
+ min -1.916563 -1.460076
+ sum -0.182253 NaN
+
+ See also
+ --------
+ pandas.DataFrame.apply
+ pandas.DataFrame.transform
+ pandas.DataFrame.groupby.aggregate
+ pandas.DataFrame.resample.aggregate
+ pandas.DataFrame.rolling.aggregate
+
+ """)
+
+ @Appender(_agg_doc)
+ @Appender(_shared_docs['aggregate'] % dict(
+ versionadded='.. versionadded:: 0.20.0',
+ **_shared_doc_kwargs))
def aggregate(self, func, axis=0, *args, **kwargs):
axis = self._get_axis_number(axis)
@@ -4272,7 +4309,7 @@ def apply(self, func, axis=0, broadcast=False, raw=False, reduce=None,
See also
--------
DataFrame.applymap: For elementwise operations
- DataFrame.agg: only perform aggregating type operations
+ DataFrame.aggregate: only perform aggregating type operations
DataFrame.transform: only perform transformating type operations
Returns
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 9318a9f5ef27c..48ee1842dc4a0 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -2854,19 +2854,19 @@ def pipe(self, func, *args, **kwargs):
return func(self, *args, **kwargs)
_shared_docs['aggregate'] = ("""
- Aggregate using input function or dict of {column ->
- function}
+ Aggregate using callable, string, dict, or list of string/callables
- .. versionadded:: 0.20.0
+ %(versionadded)s
Parameters
----------
func : callable, string, dictionary, or list of string/callables
Function to use for aggregating the data. If a function, must either
- work when passed a DataFrame or when passed to DataFrame.apply. If
- passed a dict, the keys must be DataFrame column names.
+ work when passed a %(klass)s or when passed to %(klass)s.apply. For
+ a DataFrame, a dict can be passed whose keys are DataFrame column names.
Accepted Combinations are:
+
- string function name
- function
- list of functions
@@ -2879,12 +2879,11 @@ def pipe(self, func, *args, **kwargs):
(e.g., np.mean(arr_2d, axis=0)) as opposed to
mimicking the default Numpy behavior (e.g., np.mean(arr_2d)).
+ agg is an alias for aggregate. Use it.
+
Returns
-------
aggregated : %(klass)s
-
- See also
- --------
""")
_shared_docs['transform'] = ("""
@@ -2899,18 +2898,40 @@ def pipe(self, func, *args, **kwargs):
To apply to column
Accepted Combinations are:
+
- string function name
- function
- list of functions
- dict of column names -> functions (or list of functions)
+ Returns
+ -------
+ transformed : %(klass)s
+
Examples
--------
+ >>> df = pd.DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C'],
+ ... index=pd.date_range('1/1/2000', periods=10))
+ >>> df.iloc[3:7] = np.nan
+
>>> df.transform(lambda x: (x - x.mean()) / x.std())
+ A B C
+ 2000-01-01 0.579457 1.236184 0.123424
+ 2000-01-02 0.370357 -0.605875 -1.231325
+ 2000-01-03 1.455756 -0.277446 0.288967
+ 2000-01-04 NaN NaN NaN
+ 2000-01-05 NaN NaN NaN
+ 2000-01-06 NaN NaN NaN
+ 2000-01-07 NaN NaN NaN
+ 2000-01-08 -0.498658 1.274522 1.642524
+ 2000-01-09 -0.540524 -1.012676 -0.828968
+ 2000-01-10 -1.366388 -0.614710 0.005378
+
+ See also
+ --------
+ pandas.%(klass)s.aggregate
+ pandas.%(klass)s.apply
- Returns
- -------
- transformed : %(klass)s
""")
# ----------------------------------------------------------------------
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index 1f715c685c27e..479d2f7d26eb6 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -5,6 +5,7 @@
import collections
import warnings
import copy
+from textwrap import dedent
from pandas.compat import (
zip, range, lzip,
@@ -46,7 +47,7 @@
CategoricalIndex, _ensure_index)
from pandas.core.categorical import Categorical
from pandas.core.frame import DataFrame
-from pandas.core.generic import NDFrame
+from pandas.core.generic import NDFrame, _shared_docs
from pandas.core.internals import BlockManager, make_block
from pandas.core.series import Series
from pandas.core.panel import Panel
@@ -2749,57 +2750,47 @@ def _selection_name(self):
else:
return self._selection
- def aggregate(self, func_or_funcs, *args, **kwargs):
- """
- Apply aggregation function or functions to groups, yielding most likely
- Series but in some cases DataFrame depending on the output of the
- aggregation function
+ _agg_doc = dedent("""
+ Examples
+ --------
- Parameters
- ----------
- func_or_funcs : function or list / dict of functions
- List/dict of functions will produce DataFrame with column names
- determined by the function names themselves (list) or the keys in
- the dict
+ >>> s = Series([1, 2, 3, 4])
- Notes
- -----
- agg is an alias for aggregate. Use it.
+ >>> s
+ 0 1
+ 1 2
+ 2 3
+ 3 4
+ dtype: int64
- Examples
- --------
- >>> series
- bar 1.0
- baz 2.0
- qot 3.0
- qux 4.0
-
- >>> mapper = lambda x: x[0] # first letter
- >>> grouped = series.groupby(mapper)
-
- >>> grouped.aggregate(np.sum)
- b 3.0
- q 7.0
-
- >>> grouped.aggregate([np.sum, np.mean, np.std])
- mean std sum
- b 1.5 0.5 3
- q 3.5 0.5 7
-
- >>> grouped.agg({'result' : lambda x: x.mean() / x.std(),
- ... 'total' : np.sum})
- result total
- b 2.121 3
- q 4.95 7
+ >>> s.groupby([1, 1, 2, 2]).min()
+ 1 1
+ 2 3
+ dtype: int64
- See also
- --------
- apply, transform
+ >>> s.groupby([1, 1, 2, 2]).agg('min')
+ 1 1
+ 2 3
+ dtype: int64
- Returns
- -------
- Series or DataFrame
- """
+ >>> s.groupby([1, 1, 2, 2]).agg(['min', 'max'])
+ min max
+ 1 1 2
+ 2 3 4
+
+ See also
+ --------
+ pandas.Series.groupby.apply
+ pandas.Series.groupby.transform
+ pandas.Series.aggregate
+
+ """)
+
+ @Appender(_agg_doc)
+ @Appender(_shared_docs['aggregate'] % dict(
+ klass='Series',
+ versionadded=''))
+ def aggregate(self, func_or_funcs, *args, **kwargs):
_level = kwargs.pop('_level', None)
if isinstance(func_or_funcs, compat.string_types):
return getattr(self, func_or_funcs)(*args, **kwargs)
@@ -3905,9 +3896,67 @@ class DataFrameGroupBy(NDFrameGroupBy):
_block_agg_axis = 1
- @Substitution(name='groupby')
- @Appender(SelectionMixin._see_also_template)
- @Appender(SelectionMixin._agg_doc)
+ _agg_doc = dedent("""
+ Examples
+ --------
+
+ >>> df = pd.DataFrame({'A': [1, 1, 2, 2],
+ ... 'B': [1, 2, 3, 4],
+ ... 'C': np.random.randn(4)})
+
+ >>> df
+ A B C
+ 0 1 1 0.362838
+ 1 1 2 0.227877
+ 2 2 3 1.267767
+ 3 2 4 -0.562860
+
+ The aggregation is for each column.
+
+ >>> df.groupby('A').agg('min')
+ B C
+ A
+ 1 1 0.227877
+ 2 3 -0.562860
+
+ Multiple aggregations
+
+ >>> df.groupby('A').agg(['min', 'max'])
+ B C
+ min max min max
+ A
+ 1 1 2 0.227877 0.362838
+ 2 3 4 -0.562860 1.267767
+
+ Select a column for aggregation
+
+ >>> df.groupby('A').B.agg(['min', 'max'])
+ min max
+ A
+ 1 1 2
+ 2 3 4
+
+ Different aggregations per column
+
+ >>> df.groupby('A').agg({'B': ['min', 'max'], 'C': 'sum'})
+ B C
+ min max sum
+ A
+ 1 1 2 0.590716
+ 2 3 4 0.704907
+
+ See also
+ --------
+ pandas.DataFrame.groupby.apply
+ pandas.DataFrame.groupby.transform
+ pandas.DataFrame.aggregate
+
+ """)
+
+ @Appender(_agg_doc)
+ @Appender(_shared_docs['aggregate'] % dict(
+ klass='DataFrame',
+ versionadded=''))
def aggregate(self, arg, *args, **kwargs):
return super(DataFrameGroupBy, self).aggregate(arg, *args, **kwargs)
@@ -4166,9 +4215,6 @@ def groupby_series(obj, col=None):
class PanelGroupBy(NDFrameGroupBy):
- @Substitution(name='groupby')
- @Appender(SelectionMixin._see_also_template)
- @Appender(SelectionMixin._agg_doc)
def aggregate(self, arg, *args, **kwargs):
return super(PanelGroupBy, self).aggregate(arg, *args, **kwargs)
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index 1685a5d75245d..cbb2f6a93c2fd 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -2,6 +2,7 @@
import numpy as np
import warnings
import copy
+from textwrap import dedent
import pandas as pd
from pandas.core.base import AbstractMethodError, GroupByMixin
@@ -254,66 +255,56 @@ def plot(self, *args, **kwargs):
# have the warnings shown here and just have this work
return self._deprecated('plot').plot(*args, **kwargs)
+ _agg_doc = dedent("""
+
+ Examples
+ --------
+ >>> s = Series([1,2,3,4,5],
+ index=pd.date_range('20130101',
+ periods=5,freq='s'))
+ 2013-01-01 00:00:00 1
+ 2013-01-01 00:00:01 2
+ 2013-01-01 00:00:02 3
+ 2013-01-01 00:00:03 4
+ 2013-01-01 00:00:04 5
+ Freq: S, dtype: int64
+
+ >>> r = s.resample('2s')
+ DatetimeIndexResampler [freq=<2 * Seconds>, axis=0, closed=left,
+ label=left, convention=start, base=0]
+
+ >>> r.agg(np.sum)
+ 2013-01-01 00:00:00 3
+ 2013-01-01 00:00:02 7
+ 2013-01-01 00:00:04 5
+ Freq: 2S, dtype: int64
+
+ >>> r.agg(['sum','mean','max'])
+ sum mean max
+ 2013-01-01 00:00:00 3 1.5 2
+ 2013-01-01 00:00:02 7 3.5 4
+ 2013-01-01 00:00:04 5 5.0 5
+
+ >>> r.agg({'result' : lambda x: x.mean() / x.std(),
+ 'total' : np.sum})
+ total result
+ 2013-01-01 00:00:00 3 2.121320
+ 2013-01-01 00:00:02 7 4.949747
+ 2013-01-01 00:00:04 5 NaN
+
+ See also
+ --------
+ pandas.DataFrame.groupby.aggregate
+ pandas.DataFrame.resample.transform
+ pandas.DataFrame.aggregate
+
+ """)
+
+ @Appender(_agg_doc)
+ @Appender(_shared_docs['aggregate'] % dict(
+ klass='DataFrame',
+ versionadded=''))
def aggregate(self, arg, *args, **kwargs):
- """
- Apply aggregation function or functions to resampled groups, yielding
- most likely Series but in some cases DataFrame depending on the output
- of the aggregation function
-
- Parameters
- ----------
- func_or_funcs : function or list / dict of functions
- List/dict of functions will produce DataFrame with column names
- determined by the function names themselves (list) or the keys in
- the dict
-
- Notes
- -----
- agg is an alias for aggregate. Use it.
-
- Examples
- --------
- >>> s = Series([1,2,3,4,5],
- index=pd.date_range('20130101',
- periods=5,freq='s'))
- 2013-01-01 00:00:00 1
- 2013-01-01 00:00:01 2
- 2013-01-01 00:00:02 3
- 2013-01-01 00:00:03 4
- 2013-01-01 00:00:04 5
- Freq: S, dtype: int64
-
- >>> r = s.resample('2s')
- DatetimeIndexResampler [freq=<2 * Seconds>, axis=0, closed=left,
- label=left, convention=start, base=0]
-
- >>> r.agg(np.sum)
- 2013-01-01 00:00:00 3
- 2013-01-01 00:00:02 7
- 2013-01-01 00:00:04 5
- Freq: 2S, dtype: int64
-
- >>> r.agg(['sum','mean','max'])
- sum mean max
- 2013-01-01 00:00:00 3 1.5 2
- 2013-01-01 00:00:02 7 3.5 4
- 2013-01-01 00:00:04 5 5.0 5
-
- >>> r.agg({'result' : lambda x: x.mean() / x.std(),
- 'total' : np.sum})
- total result
- 2013-01-01 00:00:00 3 2.121320
- 2013-01-01 00:00:02 7 4.949747
- 2013-01-01 00:00:04 5 NaN
-
- See also
- --------
- transform
-
- Returns
- -------
- Series or DataFrame
- """
self._set_binner()
result, how = self._aggregate(arg, *args, **kwargs)
diff --git a/pandas/core/series.py b/pandas/core/series.py
index f03091d7e6a66..e5f1d91eedfec 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -8,6 +8,7 @@
import types
import warnings
+from textwrap import dedent
from numpy import nan, ndarray
import numpy as np
@@ -2174,7 +2175,31 @@ def _gotitem(self, key, ndim, subset=None):
"""
return self
- @Appender(generic._shared_docs['aggregate'] % _shared_doc_kwargs)
+ _agg_doc = dedent("""
+ Examples
+ --------
+
+ >>> s = Series(np.random.randn(10))
+
+ >>> s.agg('min')
+ -1.3018049988556679
+
+ >>> s.agg(['min', 'max'])
+ min -1.301805
+ max 1.127688
+ dtype: float64
+
+ See also
+ --------
+ pandas.Series.apply
+ pandas.Series.transform
+
+ """)
+
+ @Appender(_agg_doc)
+ @Appender(generic._shared_docs['aggregate'] % dict(
+ versionadded='.. versionadded:: 0.20.0',
+ **_shared_doc_kwargs))
def aggregate(self, func, axis=0, *args, **kwargs):
axis = self._get_axis_number(axis)
result, how = self._aggregate(func, *args, **kwargs)
diff --git a/pandas/core/window.py b/pandas/core/window.py
index 6fdc05a13b773..6d8f12e982f12 100644
--- a/pandas/core/window.py
+++ b/pandas/core/window.py
@@ -39,10 +39,11 @@
from pandas.compat.numpy import function as nv
from pandas.util.decorators import (Substitution, Appender,
cache_readonly)
+from pandas.core.generic import _shared_docs
from textwrap import dedent
-_shared_docs = dict()
+_shared_docs = dict(**_shared_docs)
_doc_template = """
Returns
@@ -611,9 +612,48 @@ def f(arg, *args, **kwargs):
return self._wrap_results(results, blocks, obj)
- @Substitution(name='rolling')
- @Appender(SelectionMixin._see_also_template)
- @Appender(SelectionMixin._agg_doc)
+ _agg_doc = dedent("""
+ Examples
+ --------
+
+ >>> df = pd.DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C'])
+ >>> df
+ A B C
+ 0 -2.385977 -0.102758 0.438822
+ 1 -1.004295 0.905829 -0.954544
+ 2 0.735167 -0.165272 -1.619346
+ 3 -0.702657 -1.340923 -0.706334
+ 4 -0.246845 0.211596 -0.901819
+ 5 2.463718 3.157577 -1.380906
+ 6 -1.142255 2.340594 -0.039875
+ 7 1.396598 -1.647453 1.677227
+ 8 -0.543425 1.761277 -0.220481
+ 9 -0.640505 0.289374 -1.550670
+
+ >>> df.rolling(3, win_type='boxcar').agg('mean')
+ A B C
+ 0 NaN NaN NaN
+ 1 NaN NaN NaN
+ 2 -0.885035 0.212600 -0.711689
+ 3 -0.323928 -0.200122 -1.093408
+ 4 -0.071445 -0.431533 -1.075833
+ 5 0.504739 0.676083 -0.996353
+ 6 0.358206 1.903256 -0.774200
+ 7 0.906020 1.283573 0.085482
+ 8 -0.096361 0.818139 0.472290
+ 9 0.070889 0.134399 -0.031308
+
+ See also
+ --------
+ pandas.DataFrame.rolling.aggregate
+ pandas.DataFrame.aggregate
+
+ """)
+
+ @Appender(_agg_doc)
+ @Appender(_shared_docs['aggregate'] % dict(
+ versionadded='',
+ klass='Series/DataFrame'))
def aggregate(self, arg, *args, **kwargs):
result, how = self._aggregate(arg, *args, **kwargs)
if result is None:
@@ -1081,9 +1121,62 @@ def _validate_freq(self):
"compat with a datetimelike "
"index".format(self.window))
- @Substitution(name='rolling')
- @Appender(SelectionMixin._see_also_template)
- @Appender(SelectionMixin._agg_doc)
+ _agg_doc = dedent("""
+ Examples
+ --------
+
+ >>> df = pd.DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C'])
+ >>> df
+ A B C
+ 0 -2.385977 -0.102758 0.438822
+ 1 -1.004295 0.905829 -0.954544
+ 2 0.735167 -0.165272 -1.619346
+ 3 -0.702657 -1.340923 -0.706334
+ 4 -0.246845 0.211596 -0.901819
+ 5 2.463718 3.157577 -1.380906
+ 6 -1.142255 2.340594 -0.039875
+ 7 1.396598 -1.647453 1.677227
+ 8 -0.543425 1.761277 -0.220481
+ 9 -0.640505 0.289374 -1.550670
+
+ >>> df.rolling(3).sum()
+ A B C
+ 0 NaN NaN NaN
+ 1 NaN NaN NaN
+ 2 -2.655105 0.637799 -2.135068
+ 3 -0.971785 -0.600366 -3.280224
+ 4 -0.214334 -1.294599 -3.227500
+ 5 1.514216 2.028250 -2.989060
+ 6 1.074618 5.709767 -2.322600
+ 7 2.718061 3.850718 0.256446
+ 8 -0.289082 2.454418 1.416871
+ 9 0.212668 0.403198 -0.093924
+
+
+ >>> df.rolling(3).agg({'A':'sum', 'B':'min'})
+ A B
+ 0 NaN NaN
+ 1 NaN NaN
+ 2 -2.655105 -0.165272
+ 3 -0.971785 -1.340923
+ 4 -0.214334 -1.340923
+ 5 1.514216 -1.340923
+ 6 1.074618 0.211596
+ 7 2.718061 -1.647453
+ 8 -0.289082 -1.647453
+ 9 0.212668 -1.647453
+
+ See also
+ --------
+ pandas.Series.rolling
+ pandas.DataFrame.rolling
+
+ """)
+
+ @Appender(_agg_doc)
+ @Appender(_shared_docs['aggregate'] % dict(
+ versionadded='',
+ klass='Series/DataFrame'))
def aggregate(self, arg, *args, **kwargs):
return super(Rolling, self).aggregate(arg, *args, **kwargs)
@@ -1288,9 +1381,49 @@ def _get_window(self, other=None):
return (max((len(obj) + len(obj)), self.min_periods)
if self.min_periods else (len(obj) + len(obj)))
- @Substitution(name='expanding')
- @Appender(SelectionMixin._see_also_template)
- @Appender(SelectionMixin._agg_doc)
+ _agg_doc = dedent("""
+ Examples
+ --------
+
+ >>> df = pd.DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C'])
+ >>> df
+ A B C
+ 0 -2.385977 -0.102758 0.438822
+ 1 -1.004295 0.905829 -0.954544
+ 2 0.735167 -0.165272 -1.619346
+ 3 -0.702657 -1.340923 -0.706334
+ 4 -0.246845 0.211596 -0.901819
+ 5 2.463718 3.157577 -1.380906
+ 6 -1.142255 2.340594 -0.039875
+ 7 1.396598 -1.647453 1.677227
+ 8 -0.543425 1.761277 -0.220481
+ 9 -0.640505 0.289374 -1.550670
+
+ >>> df.ewm(alpha=0.5).mean()
+ A B C
+ 0 -2.385977 -0.102758 0.438822
+ 1 -1.464856 0.569633 -0.490089
+ 2 -0.207700 0.149687 -1.135379
+ 3 -0.471677 -0.645305 -0.906555
+ 4 -0.355635 -0.203033 -0.904111
+ 5 1.076417 1.503943 -1.146293
+ 6 -0.041654 1.925562 -0.588728
+ 7 0.680292 0.132049 0.548693
+ 8 0.067236 0.948257 0.163353
+ 9 -0.286980 0.618493 -0.694496
+
+ See also
+ --------
+ pandas.DataFrame.expanding.aggregate
+ pandas.DataFrame.rolling.aggregate
+ pandas.DataFrame.aggregate
+
+ """)
+
+ @Appender(_agg_doc)
+ @Appender(_shared_docs['aggregate'] % dict(
+ versionadded='',
+ klass='Series/DataFrame'))
def aggregate(self, arg, *args, **kwargs):
return super(Expanding, self).aggregate(arg, *args, **kwargs)
@@ -1534,9 +1667,47 @@ def __init__(self, obj, com=None, span=None, halflife=None, alpha=None,
def _constructor(self):
return EWM
- @Substitution(name='ewm')
- @Appender(SelectionMixin._see_also_template)
- @Appender(SelectionMixin._agg_doc)
+ _agg_doc = dedent("""
+ Examples
+ --------
+
+ >>> df = pd.DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C'])
+ >>> df
+ A B C
+ 0 -2.385977 -0.102758 0.438822
+ 1 -1.004295 0.905829 -0.954544
+ 2 0.735167 -0.165272 -1.619346
+ 3 -0.702657 -1.340923 -0.706334
+ 4 -0.246845 0.211596 -0.901819
+ 5 2.463718 3.157577 -1.380906
+ 6 -1.142255 2.340594 -0.039875
+ 7 1.396598 -1.647453 1.677227
+ 8 -0.543425 1.761277 -0.220481
+ 9 -0.640505 0.289374 -1.550670
+
+ >>> df.ewm(alpha=0.5).mean()
+ A B C
+ 0 -2.385977 -0.102758 0.438822
+ 1 -1.464856 0.569633 -0.490089
+ 2 -0.207700 0.149687 -1.135379
+ 3 -0.471677 -0.645305 -0.906555
+ 4 -0.355635 -0.203033 -0.904111
+ 5 1.076417 1.503943 -1.146293
+ 6 -0.041654 1.925562 -0.588728
+ 7 0.680292 0.132049 0.548693
+ 8 0.067236 0.948257 0.163353
+ 9 -0.286980 0.618493 -0.694496
+
+ See also
+ --------
+ pandas.DataFrame.rolling.aggregate
+
+ """)
+
+ @Appender(_agg_doc)
+ @Appender(_shared_docs['aggregate'] % dict(
+ versionadded='',
+ klass='Series/DataFrame'))
def aggregate(self, arg, *args, **kwargs):
return super(EWM, self).aggregate(arg, *args, **kwargs)
| consolidates .agg docs across frame/series & groupby & window
| https://api.github.com/repos/pandas-dev/pandas/pulls/16188 | 2017-05-02T00:10:23Z | 2017-05-02T11:26:28Z | 2017-05-02T11:26:28Z | 2017-05-02T12:08:32Z |
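The PR above consolidates the `.agg` documentation precisely because the same spelling now works on a frame, a groupby, and a window object. A minimal runnable sketch of that shared API (the inputs and names here are illustrative, not taken from the PR; exact output formatting depends on the pandas version):

```python
import pandas as pd

# Deterministic toy data so the results below are reproducible.
df = pd.DataFrame({'A': [1, 1, 2, 2], 'B': [1, 2, 3, 4]})

# The same .agg() spelling works on the frame itself...
frame_agg = df.agg(['min', 'max'])          # index: ['min', 'max']

# ...on a groupby (dict of column -> list of funcs gives MultiIndex columns)...
grouped = df.groupby('A').agg({'B': ['min', 'max']})

# ...and on a rolling window.
rolled = df['B'].rolling(2).agg('sum')      # [NaN, 3.0, 5.0, 7.0]

print(frame_agg.loc['min', 'B'])            # 1
print(grouped.loc[1, ('B', 'max')])         # 2
print(rolled.iloc[-1])                      # 7.0
```

One shared `_shared_docs['aggregate']` template with per-class `%(klass)s`/`%(versionadded)s` substitutions, plus a per-class `Examples` appendix, is what keeps these five docstrings from drifting apart.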
MAINT: Remove self.assertAlmostEqual from testing | diff --git a/pandas/tests/indexing/test_chaining_and_caching.py b/pandas/tests/indexing/test_chaining_and_caching.py
index c39876a8c6e44..c1f5d2941106d 100644
--- a/pandas/tests/indexing/test_chaining_and_caching.py
+++ b/pandas/tests/indexing/test_chaining_and_caching.py
@@ -32,7 +32,7 @@ def test_slice_consolidate_invalidate_item_cache(self):
# Assignment to wrong series
df['bb'].iloc[0] = 0.17
df._clear_item_cache()
- self.assertAlmostEqual(df['bb'][0], 0.17)
+ tm.assert_almost_equal(df['bb'][0], 0.17)
def test_setitem_cache_updating(self):
# GH 5424
diff --git a/pandas/tests/io/json/test_ujson.py b/pandas/tests/io/json/test_ujson.py
index b132322952024..b749cd150d445 100644
--- a/pandas/tests/io/json/test_ujson.py
+++ b/pandas/tests/io/json/test_ujson.py
@@ -738,37 +738,37 @@ def test_numericIntExp(self):
def test_numericIntFrcExp(self):
input = "1.337E40"
output = ujson.decode(input)
- self.assertAlmostEqual(output, json.loads(input))
+ tm.assert_almost_equal(output, json.loads(input))
def test_decodeNumericIntExpEPLUS(self):
input = "1337E+9"
output = ujson.decode(input)
- self.assertAlmostEqual(output, json.loads(input))
+ tm.assert_almost_equal(output, json.loads(input))
def test_decodeNumericIntExpePLUS(self):
input = "1.337e+40"
output = ujson.decode(input)
- self.assertAlmostEqual(output, json.loads(input))
+ tm.assert_almost_equal(output, json.loads(input))
def test_decodeNumericIntExpE(self):
input = "1337E40"
output = ujson.decode(input)
- self.assertAlmostEqual(output, json.loads(input))
+ tm.assert_almost_equal(output, json.loads(input))
def test_decodeNumericIntExpe(self):
input = "1337e40"
output = ujson.decode(input)
- self.assertAlmostEqual(output, json.loads(input))
+ tm.assert_almost_equal(output, json.loads(input))
def test_decodeNumericIntExpEMinus(self):
input = "1.337E-4"
output = ujson.decode(input)
- self.assertAlmostEqual(output, json.loads(input))
+ tm.assert_almost_equal(output, json.loads(input))
def test_decodeNumericIntExpeMinus(self):
input = "1.337e-4"
output = ujson.decode(input)
- self.assertAlmostEqual(output, json.loads(input))
+ tm.assert_almost_equal(output, json.loads(input))
def test_dumpToFile(self):
f = StringIO()
@@ -1583,36 +1583,49 @@ def test_decodeArrayFaultyUnicode(self):
def test_decodeFloatingPointAdditionalTests(self):
places = 15
- self.assertAlmostEqual(-1.1234567893,
- ujson.loads("-1.1234567893"), places=places)
- self.assertAlmostEqual(-1.234567893,
- ujson.loads("-1.234567893"), places=places)
- self.assertAlmostEqual(-1.34567893,
- ujson.loads("-1.34567893"), places=places)
- self.assertAlmostEqual(-1.4567893,
- ujson.loads("-1.4567893"), places=places)
- self.assertAlmostEqual(-1.567893,
- ujson.loads("-1.567893"), places=places)
- self.assertAlmostEqual(-1.67893,
- ujson.loads("-1.67893"), places=places)
- self.assertAlmostEqual(-1.7893, ujson.loads("-1.7893"), places=places)
- self.assertAlmostEqual(-1.893, ujson.loads("-1.893"), places=places)
- self.assertAlmostEqual(-1.3, ujson.loads("-1.3"), places=places)
-
- self.assertAlmostEqual(1.1234567893, ujson.loads(
- "1.1234567893"), places=places)
- self.assertAlmostEqual(1.234567893, ujson.loads(
- "1.234567893"), places=places)
- self.assertAlmostEqual(
- 1.34567893, ujson.loads("1.34567893"), places=places)
- self.assertAlmostEqual(
- 1.4567893, ujson.loads("1.4567893"), places=places)
- self.assertAlmostEqual(
- 1.567893, ujson.loads("1.567893"), places=places)
- self.assertAlmostEqual(1.67893, ujson.loads("1.67893"), places=places)
- self.assertAlmostEqual(1.7893, ujson.loads("1.7893"), places=places)
- self.assertAlmostEqual(1.893, ujson.loads("1.893"), places=places)
- self.assertAlmostEqual(1.3, ujson.loads("1.3"), places=places)
+ tm.assert_almost_equal(-1.1234567893,
+ ujson.loads("-1.1234567893"),
+ check_less_precise=places)
+ tm.assert_almost_equal(-1.234567893,
+ ujson.loads("-1.234567893"),
+ check_less_precise=places)
+ tm.assert_almost_equal(-1.34567893,
+ ujson.loads("-1.34567893"),
+ check_less_precise=places)
+ tm.assert_almost_equal(-1.4567893,
+ ujson.loads("-1.4567893"),
+ check_less_precise=places)
+ tm.assert_almost_equal(-1.567893,
+ ujson.loads("-1.567893"),
+ check_less_precise=places)
+ tm.assert_almost_equal(-1.67893,
+ ujson.loads("-1.67893"),
+ check_less_precise=places)
+ tm.assert_almost_equal(-1.7893, ujson.loads("-1.7893"),
+ check_less_precise=places)
+ tm.assert_almost_equal(-1.893, ujson.loads("-1.893"),
+ check_less_precise=places)
+ tm.assert_almost_equal(-1.3, ujson.loads("-1.3"),
+ check_less_precise=places)
+
+ tm.assert_almost_equal(1.1234567893, ujson.loads(
+ "1.1234567893"), check_less_precise=places)
+ tm.assert_almost_equal(1.234567893, ujson.loads(
+ "1.234567893"), check_less_precise=places)
+ tm.assert_almost_equal(
+ 1.34567893, ujson.loads("1.34567893"), check_less_precise=places)
+ tm.assert_almost_equal(
+ 1.4567893, ujson.loads("1.4567893"), check_less_precise=places)
+ tm.assert_almost_equal(
+ 1.567893, ujson.loads("1.567893"), check_less_precise=places)
+ tm.assert_almost_equal(1.67893, ujson.loads("1.67893"),
+ check_less_precise=places)
+ tm.assert_almost_equal(1.7893, ujson.loads("1.7893"),
+ check_less_precise=places)
+ tm.assert_almost_equal(1.893, ujson.loads("1.893"),
+ check_less_precise=places)
+ tm.assert_almost_equal(1.3, ujson.loads("1.3"),
+ check_less_precise=places)
def test_encodeBigSet(self):
s = set()
diff --git a/pandas/tests/plotting/common.py b/pandas/tests/plotting/common.py
index 7d0c39dae6e4b..2c0ac974e9e43 100644
--- a/pandas/tests/plotting/common.py
+++ b/pandas/tests/plotting/common.py
@@ -292,10 +292,10 @@ def _check_ticks_props(self, axes, xlabelsize=None, xrot=None,
for label in labels:
if xlabelsize is not None:
- self.assertAlmostEqual(label.get_fontsize(),
+ tm.assert_almost_equal(label.get_fontsize(),
xlabelsize)
if xrot is not None:
- self.assertAlmostEqual(label.get_rotation(), xrot)
+ tm.assert_almost_equal(label.get_rotation(), xrot)
if ylabelsize or yrot:
if isinstance(ax.yaxis.get_minor_formatter(), NullFormatter):
@@ -306,10 +306,10 @@ def _check_ticks_props(self, axes, xlabelsize=None, xrot=None,
for label in labels:
if ylabelsize is not None:
- self.assertAlmostEqual(label.get_fontsize(),
+ tm.assert_almost_equal(label.get_fontsize(),
ylabelsize)
if yrot is not None:
- self.assertAlmostEqual(label.get_rotation(), yrot)
+ tm.assert_almost_equal(label.get_rotation(), yrot)
def _check_ax_scales(self, axes, xaxis='linear', yaxis='linear'):
"""
diff --git a/pandas/tests/plotting/test_frame.py b/pandas/tests/plotting/test_frame.py
index 7297e3548b956..03bc477d6f852 100644
--- a/pandas/tests/plotting/test_frame.py
+++ b/pandas/tests/plotting/test_frame.py
@@ -1036,8 +1036,8 @@ def _check_bar_alignment(self, df, kind='bar', stacked=False,
# GH 7498
# compare margins between lim and bar edges
- self.assertAlmostEqual(ax_min, min_edge - 0.25)
- self.assertAlmostEqual(ax_max, max_edge + 0.25)
+ tm.assert_almost_equal(ax_min, min_edge - 0.25)
+ tm.assert_almost_equal(ax_max, max_edge + 0.25)
p = ax.patches[0]
if kind == 'bar' and (stacked is True or subplots is True):
@@ -1061,10 +1061,10 @@ def _check_bar_alignment(self, df, kind='bar', stacked=False,
if align == 'center':
# Check whether the bar locates on center
- self.assertAlmostEqual(axis.get_ticklocs()[0], center)
+ tm.assert_almost_equal(axis.get_ticklocs()[0], center)
elif align == 'edge':
# Check whether the bar's edge starts from the tick
- self.assertAlmostEqual(axis.get_ticklocs()[0], edge)
+ tm.assert_almost_equal(axis.get_ticklocs()[0], edge)
else:
raise ValueError
@@ -1314,13 +1314,13 @@ def test_hist_df(self):
ax = series.plot.hist(normed=True, cumulative=True, bins=4)
# height of last bin (index 5) must be 1.0
rects = [x for x in ax.get_children() if isinstance(x, Rectangle)]
- self.assertAlmostEqual(rects[-1].get_height(), 1.0)
+ tm.assert_almost_equal(rects[-1].get_height(), 1.0)
tm.close()
ax = series.plot.hist(cumulative=True, bins=4)
rects = [x for x in ax.get_children() if isinstance(x, Rectangle)]
- self.assertAlmostEqual(rects[-2].get_height(), 100.0)
+ tm.assert_almost_equal(rects[-2].get_height(), 100.0)
tm.close()
# if horizontal, yticklabels are rotated
diff --git a/pandas/tests/plotting/test_hist_method.py b/pandas/tests/plotting/test_hist_method.py
index 39bab59242c22..b75fcd4d8b680 100644
--- a/pandas/tests/plotting/test_hist_method.py
+++ b/pandas/tests/plotting/test_hist_method.py
@@ -196,7 +196,7 @@ def test_hist_df_legacy(self):
ax = ser.hist(normed=True, cumulative=True, bins=4)
# height of last bin (index 5) must be 1.0
rects = [x for x in ax.get_children() if isinstance(x, Rectangle)]
- self.assertAlmostEqual(rects[-1].get_height(), 1.0)
+ tm.assert_almost_equal(rects[-1].get_height(), 1.0)
tm.close()
ax = ser.hist(log=True)
@@ -286,7 +286,7 @@ def test_grouped_hist_legacy(self):
for ax in axes.ravel():
rects = [x for x in ax.get_children() if isinstance(x, Rectangle)]
height = rects[-1].get_height()
- self.assertAlmostEqual(height, 1.0)
+ tm.assert_almost_equal(height, 1.0)
self._check_ticks_props(axes, xlabelsize=xf, xrot=xrot,
ylabelsize=yf, yrot=yrot)
diff --git a/pandas/tests/plotting/test_series.py b/pandas/tests/plotting/test_series.py
index d1325c7130d04..91a27142069c7 100644
--- a/pandas/tests/plotting/test_series.py
+++ b/pandas/tests/plotting/test_series.py
@@ -222,15 +222,15 @@ def test_bar_log(self):
ymin = 0.0007943282347242822 if self.mpl_ge_2_0_0 else 0.001
ymax = 0.12589254117941673 if self.mpl_ge_2_0_0 else .10000000000000001
res = ax.get_ylim()
- self.assertAlmostEqual(res[0], ymin)
- self.assertAlmostEqual(res[1], ymax)
+ tm.assert_almost_equal(res[0], ymin)
+ tm.assert_almost_equal(res[1], ymax)
tm.assert_numpy_array_equal(ax.yaxis.get_ticklocs(), expected)
tm.close()
ax = Series([0.1, 0.01, 0.001]).plot(log=True, kind='barh')
res = ax.get_xlim()
- self.assertAlmostEqual(res[0], ymin)
- self.assertAlmostEqual(res[1], ymax)
+ tm.assert_almost_equal(res[0], ymin)
+ tm.assert_almost_equal(res[1], ymax)
tm.assert_numpy_array_equal(ax.xaxis.get_ticklocs(), expected)
@slow
diff --git a/pandas/tests/series/test_analytics.py b/pandas/tests/series/test_analytics.py
index 73515c47388ea..71131452393a7 100644
--- a/pandas/tests/series/test_analytics.py
+++ b/pandas/tests/series/test_analytics.py
@@ -123,7 +123,7 @@ def test_median(self):
# test with integers, test failure
int_ts = Series(np.ones(10, dtype=int), index=lrange(10))
- self.assertAlmostEqual(np.median(int_ts), int_ts.median())
+ tm.assert_almost_equal(np.median(int_ts), int_ts.median())
def test_mode(self):
# No mode should be found.
@@ -298,7 +298,7 @@ def test_kurt(self):
labels=[[0, 0, 0, 0, 0, 0], [0, 1, 2, 0, 1, 2],
[0, 1, 0, 1, 0, 1]])
s = Series(np.random.randn(6), index=index)
- self.assertAlmostEqual(s.kurt(), s.kurt(level=0)['bar'])
+ tm.assert_almost_equal(s.kurt(), s.kurt(level=0)['bar'])
# test corner cases, kurt() returns NaN unless there's at least 4
# values
@@ -743,10 +743,10 @@ def test_corr(self):
import scipy.stats as stats
# full overlap
- self.assertAlmostEqual(self.ts.corr(self.ts), 1)
+ tm.assert_almost_equal(self.ts.corr(self.ts), 1)
# partial overlap
- self.assertAlmostEqual(self.ts[:15].corr(self.ts[5:]), 1)
+ tm.assert_almost_equal(self.ts[:15].corr(self.ts[5:]), 1)
assert isnull(self.ts[:15].corr(self.ts[5:], min_periods=12))
@@ -766,7 +766,7 @@ def test_corr(self):
B = tm.makeTimeSeries()
result = A.corr(B)
expected, _ = stats.pearsonr(A, B)
- self.assertAlmostEqual(result, expected)
+ tm.assert_almost_equal(result, expected)
def test_corr_rank(self):
tm._skip_if_no_scipy()
@@ -780,11 +780,11 @@ def test_corr_rank(self):
A[-5:] = A[:5]
result = A.corr(B, method='kendall')
expected = stats.kendalltau(A, B)[0]
- self.assertAlmostEqual(result, expected)
+ tm.assert_almost_equal(result, expected)
result = A.corr(B, method='spearman')
expected = stats.spearmanr(A, B)[0]
- self.assertAlmostEqual(result, expected)
+ tm.assert_almost_equal(result, expected)
# these methods got rewritten in 0.8
if scipy.__version__ < LooseVersion('0.9'):
@@ -800,15 +800,15 @@ def test_corr_rank(self):
1.17258718, -1.06009347, -0.10222060, -0.89076239, 0.89372375])
kexp = 0.4319297
sexp = 0.5853767
- self.assertAlmostEqual(A.corr(B, method='kendall'), kexp)
- self.assertAlmostEqual(A.corr(B, method='spearman'), sexp)
+ tm.assert_almost_equal(A.corr(B, method='kendall'), kexp)
+ tm.assert_almost_equal(A.corr(B, method='spearman'), sexp)
def test_cov(self):
# full overlap
- self.assertAlmostEqual(self.ts.cov(self.ts), self.ts.std() ** 2)
+ tm.assert_almost_equal(self.ts.cov(self.ts), self.ts.std() ** 2)
# partial overlap
- self.assertAlmostEqual(self.ts[:15].cov(self.ts[5:]),
+ tm.assert_almost_equal(self.ts[:15].cov(self.ts[5:]),
self.ts[5:15].std() ** 2)
# No overlap
diff --git a/pandas/tests/series/test_indexing.py b/pandas/tests/series/test_indexing.py
index 9f5d80411ed17..394ae88983faa 100644
--- a/pandas/tests/series/test_indexing.py
+++ b/pandas/tests/series/test_indexing.py
@@ -558,7 +558,7 @@ def test_getitem_setitem_integers(self):
assert s.iloc[0] == s['a']
s.iloc[0] = 5
- self.assertAlmostEqual(s['a'], 5)
+ tm.assert_almost_equal(s['a'], 5)
def test_getitem_box_float64(self):
value = self.ts[5]
diff --git a/pandas/tests/test_nanops.py b/pandas/tests/test_nanops.py
index 35d0198ae06a9..2aa3638b18e9b 100644
--- a/pandas/tests/test_nanops.py
+++ b/pandas/tests/test_nanops.py
@@ -922,7 +922,7 @@ def test_all_finite(self):
def test_ground_truth(self):
skew = nanops.nanskew(self.samples)
- self.assertAlmostEqual(skew, self.actual_skew)
+ tm.assert_almost_equal(skew, self.actual_skew)
def test_axis(self):
samples = np.vstack([self.samples,
@@ -972,7 +972,7 @@ def test_all_finite(self):
def test_ground_truth(self):
kurt = nanops.nankurt(self.samples)
- self.assertAlmostEqual(kurt, self.actual_kurt)
+ tm.assert_almost_equal(kurt, self.actual_kurt)
def test_axis(self):
samples = np.vstack([self.samples,
diff --git a/pandas/tests/test_window.py b/pandas/tests/test_window.py
index 55be6302036f1..d3e427dfb4c7b 100644
--- a/pandas/tests/test_window.py
+++ b/pandas/tests/test_window.py
@@ -1406,7 +1406,7 @@ def get_result(obj, window, min_periods=None, freq=None, center=False):
trunc_series = self.series[::2].truncate(prev_date, last_date)
trunc_frame = self.frame[::2].truncate(prev_date, last_date)
- self.assertAlmostEqual(series_result[-1],
+ tm.assert_almost_equal(series_result[-1],
static_comp(trunc_series))
tm.assert_series_equal(frame_result.xs(last_date),
diff --git a/pandas/tests/tseries/test_timezones.py b/pandas/tests/tseries/test_timezones.py
index 0c8aaf77aec12..10776381974de 100644
--- a/pandas/tests/tseries/test_timezones.py
+++ b/pandas/tests/tseries/test_timezones.py
@@ -729,7 +729,7 @@ def test_string_index_alias_tz_aware(self):
ts = Series(np.random.randn(len(rng)), index=rng)
result = ts['1/3/2000']
- self.assertAlmostEqual(result, ts[2])
+ tm.assert_almost_equal(result, ts[2])
def test_fixed_offset(self):
dates = [datetime(2000, 1, 1, tzinfo=fixed_off),
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index 3f07937a6e552..d0c56e9974a3f 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -140,7 +140,8 @@ def round_trip_pickle(obj, path=None):
def assert_almost_equal(left, right, check_exact=False,
check_dtype='equiv', check_less_precise=False,
**kwargs):
- """Check that left and right Index are equal.
+ """
+ Check that the left and right objects are approximately equal.
Parameters
----------
| Title is self-explanatory.
Partially addresses #15990.
| https://api.github.com/repos/pandas-dev/pandas/pulls/16183 | 2017-05-01T17:19:43Z | 2017-05-02T00:01:59Z | 2017-05-02T00:01:59Z | 2017-05-02T00:51:57Z |
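The diff above swaps `unittest`'s `self.assertAlmostEqual` (a `TestCase` method) for the module-level `tm.assert_almost_equal`, which needs no test-class instance. As a rough sketch of the tolerance check involved — using only the stdlib, since `pandas.util.testing` is internal and its layout has since changed — the idea is:

```python
import math

# unittest style (requires a TestCase instance):
#     self.assertAlmostEqual(result, expected)
# pandas/pytest style (plain module-level function, no class needed):
#     tm.assert_almost_equal(result, expected)
#
# A stdlib analogue of the underlying tolerance comparison:
def almost_equal(left, right, rel_tol=1e-5):
    """Approximate-equality check, similar in spirit to
    pandas' tm.assert_almost_equal for plain floats."""
    return math.isclose(left, right, rel_tol=rel_tol)

# Floating-point round-off is tolerated, exact equality is not required:
print(almost_equal(0.1 + 0.2, 0.3))
```

The `rel_tol` default here is illustrative; the real pandas helper exposes `check_less_precise`/`check_dtype` knobs instead.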
TST: DatetimeIndex and its Timestamp elements returning same .weekofyear with tz (#6538) | diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
index aded04e82ee7e..6e4756c3c5245 100644
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -1580,7 +1580,7 @@ Conversion
- Bug in ``Timestamp.replace`` now raises ``TypeError`` when incorrect argument names are given; previously this raised ``ValueError`` (:issue:`15240`)
- Bug in ``Timestamp.replace`` with compat for passing long integers (:issue:`15030`)
-- Bug in ``Timestamp`` returning UTC based time/date attributes when a timezone was provided (:issue:`13303`)
+- Bug in ``Timestamp`` returning UTC based time/date attributes when a timezone was provided (:issue:`13303`, :issue:`6538`)
- Bug in ``Timestamp`` incorrectly localizing timezones during construction (:issue:`11481`, :issue:`15777`)
- Bug in ``TimedeltaIndex`` addition where overflow was being allowed without error (:issue:`14816`)
- Bug in ``TimedeltaIndex`` raising a ``ValueError`` when boolean indexing with ``loc`` (:issue:`14946`)
diff --git a/pandas/tests/indexes/datetimes/test_misc.py b/pandas/tests/indexes/datetimes/test_misc.py
index ae5d29ca426b4..d9a61776a0d1c 100644
--- a/pandas/tests/indexes/datetimes/test_misc.py
+++ b/pandas/tests/indexes/datetimes/test_misc.py
@@ -334,6 +334,14 @@ def test_datetimeindex_accessors(self):
for ts, value in tests:
assert ts == value
+ # GH 6538: Check that DatetimeIndex and its TimeStamp elements
+ # return the same weekofyear accessor close to new year w/ tz
+ dates = ["2013/12/29", "2013/12/30", "2013/12/31"]
+ dates = DatetimeIndex(dates, tz="Europe/Brussels")
+ expected = [52, 1, 1]
+ assert dates.weekofyear.tolist() == expected
+ assert [d.weekofyear for d in dates] == expected
+
def test_nanosecond_field(self):
dti = DatetimeIndex(np.arange(10))
| - [x] closes #6538
- [x] tests added / passed
- [x] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff``
I think this was closed by PR #15740.
| https://api.github.com/repos/pandas-dev/pandas/pulls/16181 | 2017-05-01T00:00:04Z | 2017-05-01T11:34:20Z | 2017-05-01T11:34:20Z | 2017-12-20T02:00:02Z |
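The regression test added above can be reproduced outside the test suite. This sketch assumes a recent pandas, where the deprecated `.weekofyear` accessor used in the PR is spelled `.isocalendar().week`; the per-element check uses `datetime.isocalendar()`, which `Timestamp` inherits:

```python
import pandas as pd

# gh-6538: close to a year boundary, the vectorized index accessor and the
# per-element Timestamp values must report the same ISO week, even with a tz.
# (ISO numbering: Dec 30-31 2013 already belong to week 1 of 2014.)
dates = pd.DatetimeIndex(["2013-12-29", "2013-12-30", "2013-12-31"],
                         tz="Europe/Brussels")

vectorized = dates.isocalendar()["week"].tolist()
elementwise = [ts.isocalendar()[1] for ts in dates]

print(vectorized)    # expected [52, 1, 1], per the PR's test
print(elementwise)
assert vectorized == elementwise
```

Before the fix, the tz-aware `DatetimeIndex` path computed the attribute from the UTC values, so the two lists could disagree.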
MAINT: Remove self.assertNotEqual from testing | diff --git a/pandas/tests/frame/test_api.py b/pandas/tests/frame/test_api.py
index d2a1e32f015b2..208c7b5ace50e 100644
--- a/pandas/tests/frame/test_api.py
+++ b/pandas/tests/frame/test_api.py
@@ -247,7 +247,7 @@ def test_deepcopy(self):
series = cp['A']
series[:] = 10
for idx, value in compat.iteritems(series):
- self.assertNotEqual(self.frame['A'][idx], value)
+ assert self.frame['A'][idx] != value
# ---------------------------------------------------------------------
# Transposing
diff --git a/pandas/tests/frame/test_indexing.py b/pandas/tests/frame/test_indexing.py
index cd1529d04c991..75d4263cbe68f 100644
--- a/pandas/tests/frame/test_indexing.py
+++ b/pandas/tests/frame/test_indexing.py
@@ -1890,7 +1890,7 @@ def test_nested_exception(self):
try:
repr(df)
except Exception as e:
- self.assertNotEqual(type(e), UnboundLocalError)
+ assert type(e) != UnboundLocalError
def test_reindex_methods(self):
df = pd.DataFrame({'x': list(range(5))})
diff --git a/pandas/tests/frame/test_sorting.py b/pandas/tests/frame/test_sorting.py
index bdb5fd0e8354c..457ea32ec56f7 100644
--- a/pandas/tests/frame/test_sorting.py
+++ b/pandas/tests/frame/test_sorting.py
@@ -361,7 +361,7 @@ def test_sort_index_inplace(self):
df.sort_index(inplace=True)
expected = frame
assert_frame_equal(df, expected)
- self.assertNotEqual(a_id, id(df['A']))
+ assert a_id != id(df['A'])
df = unordered.copy()
df.sort_index(ascending=False, inplace=True)
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index 09643e918af31..8d86d40c379bf 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -1032,7 +1032,7 @@ def test_frame_set_name_single(self):
assert result.index.name == 'A'
result = self.df.groupby('A', as_index=False).mean()
- self.assertNotEqual(result.index.name, 'A')
+ assert result.index.name != 'A'
result = grouped.agg(np.mean)
assert result.index.name == 'A'
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index 23c72e511d2b3..10958681af450 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -81,7 +81,7 @@ def test_constructor(self):
assert index.name == 'name'
tm.assert_numpy_array_equal(arr, index.values)
arr[0] = "SOMEBIGLONGSTRING"
- self.assertNotEqual(index[0], "SOMEBIGLONGSTRING")
+ assert index[0] != "SOMEBIGLONGSTRING"
# what to do here?
# arr = np.array(5.)
diff --git a/pandas/tests/indexes/test_numeric.py b/pandas/tests/indexes/test_numeric.py
index 19bca875e650d..428c261df5654 100644
--- a/pandas/tests/indexes/test_numeric.py
+++ b/pandas/tests/indexes/test_numeric.py
@@ -653,7 +653,7 @@ def test_constructor(self):
# this should not change index
arr[0] = val
- self.assertNotEqual(new_index[0], val)
+ assert new_index[0] != val
# interpret list-like
expected = Int64Index([5, 0])
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index dee645e9d70ec..ac00e441047dd 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -918,7 +918,7 @@ def test_wide_repr(self):
assert "10 rows x %d columns" % (max_cols - 1) in rep_str
set_option('display.expand_frame_repr', True)
wide_repr = repr(df)
- self.assertNotEqual(rep_str, wide_repr)
+ assert rep_str != wide_repr
with option_context('display.width', 120):
wider_repr = repr(df)
@@ -944,7 +944,7 @@ def test_wide_repr_named(self):
rep_str = repr(df)
set_option('display.expand_frame_repr', True)
wide_repr = repr(df)
- self.assertNotEqual(rep_str, wide_repr)
+ assert rep_str != wide_repr
with option_context('display.width', 150):
wider_repr = repr(df)
@@ -966,7 +966,7 @@ def test_wide_repr_multiindex(self):
rep_str = repr(df)
set_option('display.expand_frame_repr', True)
wide_repr = repr(df)
- self.assertNotEqual(rep_str, wide_repr)
+ assert rep_str != wide_repr
with option_context('display.width', 150):
wider_repr = repr(df)
@@ -990,7 +990,7 @@ def test_wide_repr_multiindex_cols(self):
rep_str = repr(df)
set_option('display.expand_frame_repr', True)
wide_repr = repr(df)
- self.assertNotEqual(rep_str, wide_repr)
+ assert rep_str != wide_repr
with option_context('display.width', 150):
wider_repr = repr(df)
@@ -1006,7 +1006,7 @@ def test_wide_repr_unicode(self):
rep_str = repr(df)
set_option('display.expand_frame_repr', True)
wide_repr = repr(df)
- self.assertNotEqual(rep_str, wide_repr)
+ assert rep_str != wide_repr
with option_context('display.width', 150):
wider_repr = repr(df)
diff --git a/pandas/tests/io/formats/test_style.py b/pandas/tests/io/formats/test_style.py
index 371cc2b61634a..f421c0f8e6d69 100644
--- a/pandas/tests/io/formats/test_style.py
+++ b/pandas/tests/io/formats/test_style.py
@@ -85,9 +85,9 @@ def test_deepcopy(self):
self.styler._update_ctx(self.attrs)
self.styler.highlight_max()
- self.assertNotEqual(self.styler.ctx, s2.ctx)
+ assert self.styler.ctx != s2.ctx
assert s2._todo == []
- self.assertNotEqual(self.styler._todo, s2._todo)
+ assert self.styler._todo != s2._todo
def test_clear(self):
s = self.df.style.highlight_max()._compute()
diff --git a/pandas/tests/io/test_common.py b/pandas/tests/io/test_common.py
index 804d76c3c9eca..c427fab4103e0 100644
--- a/pandas/tests/io/test_common.py
+++ b/pandas/tests/io/test_common.py
@@ -38,7 +38,7 @@ def test_expand_user(self):
filename = '~/sometest'
expanded_name = common._expand_user(filename)
- self.assertNotEqual(expanded_name, filename)
+ assert expanded_name != filename
assert isabs(expanded_name)
assert os.path.expanduser(filename) == expanded_name
@@ -68,7 +68,7 @@ def test_stringify_path_localpath(self):
def test_get_filepath_or_buffer_with_path(self):
filename = '~/sometest'
filepath_or_buffer, _, _ = common.get_filepath_or_buffer(filename)
- self.assertNotEqual(filepath_or_buffer, filename)
+ assert filepath_or_buffer != filename
assert isabs(filepath_or_buffer)
assert os.path.expanduser(filename) == filepath_or_buffer
diff --git a/pandas/tests/scalar/test_period.py b/pandas/tests/scalar/test_period.py
index 00a1fa1b507b6..2e60cfdb7a4f2 100644
--- a/pandas/tests/scalar/test_period.py
+++ b/pandas/tests/scalar/test_period.py
@@ -938,8 +938,8 @@ def test_equal_Raises_Value(self):
self.january1 == self.day
def test_notEqual(self):
- self.assertNotEqual(self.january1, 1)
- self.assertNotEqual(self.january1, self.february)
+ assert self.january1 != 1
+ assert self.january1 != self.february
def test_greater(self):
assert self.february > self.january1
diff --git a/pandas/tests/scalar/test_timedelta.py b/pandas/tests/scalar/test_timedelta.py
index faddbcc84109f..5659bc26fc1cc 100644
--- a/pandas/tests/scalar/test_timedelta.py
+++ b/pandas/tests/scalar/test_timedelta.py
@@ -560,7 +560,7 @@ def test_timedelta_hash_equality(self):
# python timedeltas drop ns resolution
ns_td = Timedelta(1, 'ns')
- self.assertNotEqual(hash(ns_td), hash(ns_td.to_pytimedelta()))
+ assert hash(ns_td) != hash(ns_td.to_pytimedelta())
def test_implementation_limits(self):
min_td = Timedelta(Timedelta.min)
diff --git a/pandas/tests/scalar/test_timestamp.py b/pandas/tests/scalar/test_timestamp.py
index 8a28a9a4bedd0..04b33bbc6c3bf 100644
--- a/pandas/tests/scalar/test_timestamp.py
+++ b/pandas/tests/scalar/test_timestamp.py
@@ -873,8 +873,8 @@ def test_comparison(self):
other = Timestamp(stamp + 100)
- self.assertNotEqual(val, other)
- self.assertNotEqual(val, other)
+ assert val != other
+ assert val != other
assert val < other
assert val <= other
assert other > val
@@ -1375,9 +1375,9 @@ def test_timestamp_compare_with_early_datetime(self):
assert not stamp == datetime.min
assert not stamp == datetime(1600, 1, 1)
assert not stamp == datetime(2700, 1, 1)
- self.assertNotEqual(stamp, datetime.min)
- self.assertNotEqual(stamp, datetime(1600, 1, 1))
- self.assertNotEqual(stamp, datetime(2700, 1, 1))
+ assert stamp != datetime.min
+ assert stamp != datetime(1600, 1, 1)
+ assert stamp != datetime(2700, 1, 1)
assert stamp > datetime(1600, 1, 1)
assert stamp >= datetime(1600, 1, 1)
assert stamp < datetime(2700, 1, 1)
diff --git a/pandas/tests/series/test_missing.py b/pandas/tests/series/test_missing.py
index 9937f6a34172e..0eaab2e588cc2 100644
--- a/pandas/tests/series/test_missing.py
+++ b/pandas/tests/series/test_missing.py
@@ -1078,8 +1078,8 @@ def test_spline_extrapolate(self):
def test_spline_smooth(self):
tm._skip_if_no_scipy()
s = Series([1, 2, np.nan, 4, 5.1, np.nan, 7])
- self.assertNotEqual(s.interpolate(method='spline', order=3, s=0)[5],
- s.interpolate(method='spline', order=3)[5])
+ assert (s.interpolate(method='spline', order=3, s=0)[5] !=
+ s.interpolate(method='spline', order=3)[5])
def test_spline_interpolation(self):
tm._skip_if_no_scipy()
@@ -1090,8 +1090,8 @@ def test_spline_interpolation(self):
expected1 = s.interpolate(method='spline', order=1)
assert_series_equal(result1, expected1)
- # GH #10633
def test_spline_error(self):
+ # see gh-10633
tm._skip_if_no_scipy()
s = pd.Series(np.arange(10) ** 2)
diff --git a/pandas/tests/test_expressions.py b/pandas/tests/test_expressions.py
index ae505a66ad75a..8ef29097b66e8 100644
--- a/pandas/tests/test_expressions.py
+++ b/pandas/tests/test_expressions.py
@@ -293,7 +293,7 @@ def testit():
if op is not None:
result = expr._can_use_numexpr(op, op_str, f, f,
'evaluate')
- self.assertNotEqual(result, f._is_mixed_type)
+ assert result != f._is_mixed_type
result = expr.evaluate(op, op_str, f, f,
use_numexpr=True)
@@ -336,7 +336,7 @@ def testit():
result = expr._can_use_numexpr(op, op_str, f11, f12,
'evaluate')
- self.assertNotEqual(result, f11._is_mixed_type)
+ assert result != f11._is_mixed_type
result = expr.evaluate(op, op_str, f11, f12,
use_numexpr=True)
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
index a692f6b26c61e..b9cceab4d65f4 100644
--- a/pandas/tests/test_panel.py
+++ b/pandas/tests/test_panel.py
@@ -1636,7 +1636,7 @@ def test_swapaxes(self):
# this works, but return a copy
result = self.panel.swapaxes('items', 'items')
assert_panel_equal(self.panel, result)
- self.assertNotEqual(id(self.panel), id(result))
+ assert id(self.panel) != id(result)
def test_transpose(self):
with catch_warnings(record=True):
diff --git a/pandas/tests/test_panel4d.py b/pandas/tests/test_panel4d.py
index f2a1414957d44..041e36848e1d8 100644
--- a/pandas/tests/test_panel4d.py
+++ b/pandas/tests/test_panel4d.py
@@ -888,7 +888,7 @@ def test_swapaxes(self):
# this works, but return a copy
result = self.panel4d.swapaxes('items', 'items')
assert_panel4d_equal(self.panel4d, result)
- self.assertNotEqual(id(self.panel4d), id(result))
+ assert id(self.panel4d) != id(result)
def test_update(self):
diff --git a/pandas/tests/tseries/test_offsets.py b/pandas/tests/tseries/test_offsets.py
index ce4208a8cea69..79190aa98f8d9 100644
--- a/pandas/tests/tseries/test_offsets.py
+++ b/pandas/tests/tseries/test_offsets.py
@@ -541,7 +541,7 @@ def test_eq(self):
offset1 = DateOffset(days=1)
offset2 = DateOffset(days=365)
- self.assertNotEqual(offset1, offset2)
+ assert offset1 != offset2
class TestBusinessDay(Base):
@@ -775,12 +775,11 @@ def testEQ(self):
for offset in [self.offset1, self.offset2, self.offset3, self.offset4]:
assert offset == offset
- self.assertNotEqual(BusinessHour(), BusinessHour(-1))
+ assert BusinessHour() != BusinessHour(-1)
assert BusinessHour(start='09:00') == BusinessHour()
- self.assertNotEqual(BusinessHour(start='09:00'),
- BusinessHour(start='09:01'))
- self.assertNotEqual(BusinessHour(start='09:00', end='17:00'),
- BusinessHour(start='17:00', end='09:01'))
+ assert BusinessHour(start='09:00') != BusinessHour(start='09:01')
+ assert (BusinessHour(start='09:00', end='17:00') !=
+ BusinessHour(start='17:00', end='09:01'))
def test_hash(self):
for offset in [self.offset1, self.offset2, self.offset3, self.offset4]:
@@ -4362,7 +4361,7 @@ def test_Hour(self):
assert Hour(3) + Hour(2) == Hour(5)
assert Hour(3) - Hour(2) == Hour()
- self.assertNotEqual(Hour(4), Hour(1))
+ assert Hour(4) != Hour(1)
def test_Minute(self):
assertEq(Minute(), datetime(2010, 1, 1), datetime(2010, 1, 1, 0, 1))
@@ -4374,7 +4373,7 @@ def test_Minute(self):
assert Minute(3) + Minute(2) == Minute(5)
assert Minute(3) - Minute(2) == Minute()
- self.assertNotEqual(Minute(5), Minute())
+ assert Minute(5) != Minute()
def test_Second(self):
assertEq(Second(), datetime(2010, 1, 1), datetime(2010, 1, 1, 0, 0, 1))
@@ -4464,8 +4463,8 @@ def test_tick_equalities(self):
assert t() == t(1)
# not equals
- self.assertNotEqual(t(3), t(2))
- self.assertNotEqual(t(3), t(-3))
+ assert t(3) != t(2)
+ assert t(3) != t(-3)
def test_tick_operators(self):
for t in self.ticks:
diff --git a/pandas/tests/tseries/test_timezones.py b/pandas/tests/tseries/test_timezones.py
index 8b6774885c8b7..0c8aaf77aec12 100644
--- a/pandas/tests/tseries/test_timezones.py
+++ b/pandas/tests/tseries/test_timezones.py
@@ -1191,8 +1191,7 @@ def test_cache_keys_are_distinct_for_pytz_vs_dateutil(self):
if tz_d is None:
# skip timezones that dateutil doesn't know about.
continue
- self.assertNotEqual(tslib._p_tz_cache_key(
- tz_p), tslib._p_tz_cache_key(tz_d))
+ assert tslib._p_tz_cache_key(tz_p) != tslib._p_tz_cache_key(tz_d)
class TestTimeZones(tm.TestCase):
| Title is self-explanatory.
Partially addresses #15990.
| https://api.github.com/repos/pandas-dev/pandas/pulls/16176 | 2017-04-29T21:09:16Z | 2017-05-01T11:40:42Z | 2017-05-01T11:40:42Z | 2017-05-01T15:46:22Z |
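The conversion above follows the project-wide move (#15990) from `unittest.TestCase` comparison helpers to bare `assert` statements, which pytest rewrites at import time so that failures report both operands. A minimal illustration of the two spellings — the string operands here are placeholders, not values from the suite:

```python
import unittest

# unittest spelling: the helper must be called on a TestCase instance.
class OldStyle(unittest.TestCase):
    def runTest(self):
        self.assertNotEqual("rep_str", "wide_repr")

# pytest spelling: a bare assert in a plain function.  pytest's assertion
# rewriting shows both operands on failure, so no helper method is needed.
def test_new_style():
    assert "rep_str" != "wide_repr"

OldStyle().runTest()   # passes silently
test_new_style()       # passes silently
```

Both spellings perform the same `!=` check; the pytest form just drops the class dependency.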
Update docs.ecosystem.api.pandasdmx | diff --git a/doc/source/ecosystem.rst b/doc/source/ecosystem.rst
index ee0ea60c6f220..31849fc142aea 100644
--- a/doc/source/ecosystem.rst
+++ b/doc/source/ecosystem.rst
@@ -173,13 +173,15 @@ This package requires valid credentials for this API (non free).
`pandaSDMX <https://pandasdmx.readthedocs.io>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-pandaSDMX is an extensible library to retrieve and acquire statistical data
+pandaSDMX is a library to retrieve and acquire statistical data
and metadata disseminated in
-`SDMX <http://www.sdmx.org>`_ 2.1. This standard is currently supported by
-the European statistics office (Eurostat)
-and the European Central Bank (ECB). Datasets may be returned as pandas Series
-or multi-indexed DataFrames.
-
+`SDMX <http://www.sdmx.org>`_ 2.1, an ISO-standard
+widely used by institutions such as statistics offices, central banks,
+and international organisations. pandaSDMX can expose datasets and related
+structural metadata including dataflows, code-lists,
+and datastructure definitions as pandas Series
+or multi-indexed DataFrames.
+
`fredapi <https://github.com/mortada/fredapi>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
fredapi is a Python interface to the `Federal Reserve Economic Data (FRED) <http://research.stlouisfed.org/fred2/>`__
| I am the author of pandaSDMX and contributed the section about two years ago. It is outdated and has become misleading.
- [ ] tests added / passed
- [ ] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff``
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/16172 | 2017-04-29T15:49:55Z | 2017-05-01T20:10:12Z | 2017-05-01T20:10:12Z | 2017-05-01T20:20:43Z |
BUG: Restore return value in notebook reprs | diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py
index f8cbdffa27bb4..790a9ed24b007 100644
--- a/pandas/core/config_init.py
+++ b/pandas/core/config_init.py
@@ -307,6 +307,21 @@ def mpl_style_cb(key):
return val
+def table_schema_cb(key):
+ # Having _ipython_display_ defined messes with the return value
+ # from cells, so the Out[x] dictionary breaks.
+ # Currently table schema is the only thing using it, so we'll
+ # monkey patch `_ipython_display_` onto NDFrame when config option
+ # is set
+ # see https://github.com/pandas-dev/pandas/issues/16168
+ from pandas.core.generic import NDFrame, _ipython_display_
+
+ if cf.get_option(key):
+ NDFrame._ipython_display_ = _ipython_display_
+ elif getattr(NDFrame, '_ipython_display_', None):
+ del NDFrame._ipython_display_
+
+
with cf.config_prefix('display'):
cf.register_option('precision', 6, pc_precision_doc, validator=is_int)
cf.register_option('float_format', None, float_format_doc,
@@ -374,7 +389,7 @@ def mpl_style_cb(key):
cf.register_option('latex.multirow', False, pc_latex_multirow,
validator=is_bool)
cf.register_option('html.table_schema', False, pc_table_schema_doc,
- validator=is_bool)
+ validator=is_bool, cb=table_schema_cb)
cf.deprecate_option('display.line_width',
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 70862015dff5b..9318a9f5ef27c 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -130,31 +130,6 @@ def __init__(self, data, axes=None, copy=False, dtype=None,
object.__setattr__(self, '_data', data)
object.__setattr__(self, '_item_cache', {})
- def _ipython_display_(self):
- try:
- from IPython.display import display
- except ImportError:
- return None
-
- # Series doesn't define _repr_html_ or _repr_latex_
- latex = self._repr_latex_() if hasattr(self, '_repr_latex_') else None
- html = self._repr_html_() if hasattr(self, '_repr_html_') else None
- try:
- table_schema = self._repr_table_schema_()
- except Exception as e:
- warnings.warn("Cannot create table schema representation. "
- "{}".format(e), UnserializableWarning)
- table_schema = None
- # We need the inital newline since we aren't going through the
- # usual __repr__. See
- # https://github.com/pandas-dev/pandas/pull/14904#issuecomment-277829277
- text = "\n" + repr(self)
-
- reprs = {"text/plain": text, "text/html": html, "text/latex": latex,
- "application/vnd.dataresource+json": table_schema}
- reprs = {k: v for k, v in reprs.items() if v}
- display(reprs, raw=True)
-
def _repr_table_schema_(self):
"""
Not a real Jupyter special repr method, but we use the same
@@ -6283,6 +6258,38 @@ def logical_func(self, axis=None, bool_only=None, skipna=None, level=None,
return set_function_name(logical_func, name, cls)
+def _ipython_display_(self):
+ # Having _ipython_display_ defined messes with the return value
+ # from cells, so the Out[x] dictionary breaks.
+ # Currently table schema is the only thing using it, so we'll
+ # monkey patch `_ipython_display_` onto NDFrame when config option
+ # is set
+ # see https://github.com/pandas-dev/pandas/issues/16168
+ try:
+ from IPython.display import display
+ except ImportError:
+ return None
+
+ # Series doesn't define _repr_html_ or _repr_latex_
+ latex = self._repr_latex_() if hasattr(self, '_repr_latex_') else None
+ html = self._repr_html_() if hasattr(self, '_repr_html_') else None
+ try:
+ table_schema = self._repr_table_schema_()
+ except Exception as e:
+ warnings.warn("Cannot create table schema representation. "
+ "{}".format(e), UnserializableWarning)
+ table_schema = None
+ # We need the inital newline since we aren't going through the
+ # usual __repr__. See
+ # https://github.com/pandas-dev/pandas/pull/14904#issuecomment-277829277
+ text = "\n" + repr(self)
+
+ reprs = {"text/plain": text, "text/html": html, "text/latex": latex,
+ "application/vnd.dataresource+json": table_schema}
+ reprs = {k: v for k, v in reprs.items() if v}
+ display(reprs, raw=True)
+
+
# install the indexes
for _name, _indexer in indexing.get_indexers_list():
NDFrame._create_indexer(_name, _indexer)
diff --git a/pandas/tests/io/formats/test_printing.py b/pandas/tests/io/formats/test_printing.py
index 63cd08545610f..a5961910593f6 100644
--- a/pandas/tests/io/formats/test_printing.py
+++ b/pandas/tests/io/formats/test_printing.py
@@ -203,6 +203,33 @@ def test_config_default_off(self):
assert result is None
+ def test_config_monkeypatches(self):
+ # GH 10491
+ df = pd.DataFrame({"A": [1, 2]})
+ assert not hasattr(df, '_ipython_display_')
+ assert not hasattr(df['A'], '_ipython_display_')
+
+ with pd.option_context('display.html.table_schema', True):
+ assert hasattr(df, '_ipython_display_')
+ # smoke test that it works
+ df._ipython_display_()
+ assert hasattr(df['A'], '_ipython_display_')
+ df['A']._ipython_display_()
+
+ assert not hasattr(df, '_ipython_display_')
+ assert not hasattr(df['A'], '_ipython_display_')
+ # re-unsetting is OK
+ assert not hasattr(df, '_ipython_display_')
+ assert not hasattr(df['A'], '_ipython_display_')
+
+ # able to re-set
+ with pd.option_context('display.html.table_schema', True):
+ assert hasattr(df, '_ipython_display_')
+ # smoke test that it works
+ df._ipython_display_()
+ assert hasattr(df['A'], '_ipython_display_')
+ df['A']._ipython_display_()
+
# TODO: fix this broken test
| Monkey-patches the `_ipython_display_` method onto NDFrame so that
notebook cells have a real return value. Setting the
`display.html.table_schema` option monkey-patches the method on
and removes it when the option is unset.
closes https://github.com/ipython/ipython/issues/10491 | https://api.github.com/repos/pandas-dev/pandas/pulls/16171 | 2017-04-29T11:39:44Z | 2017-05-01T16:20:34Z | 2017-05-01T16:20:34Z | 2017-05-03T22:10:09Z |
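The pattern in this PR — attaching and detaching `_ipython_display_` from a config callback, so the attribute only exists while the option is on — can be sketched in isolation. `NDFrameLike` and the callback signature below are simplified stand-ins for the PR's `NDFrame` and `table_schema_cb`:

```python
class NDFrameLike:
    """Stand-in for pandas' NDFrame; starts with no display hook."""

def _ipython_display_(self):
    # Module-level hook; only meaningful while patched onto the class.
    print("rich repr for", type(self).__name__)

def table_schema_cb(enabled):
    """Fired when the (here simplified) option flips: attach the hook
    while enabled, remove it when disabled so IPython falls back to the
    normal repr machinery and cells keep a real Out[n] return value."""
    if enabled:
        NDFrameLike._ipython_display_ = _ipython_display_
    elif getattr(NDFrameLike, "_ipython_display_", None):
        del NDFrameLike._ipython_display_

table_schema_cb(True)
assert hasattr(NDFrameLike, "_ipython_display_")
table_schema_cb(False)
assert not hasattr(NDFrameLike, "_ipython_display_")
table_schema_cb(False)  # re-unsetting is a no-op, as in the PR's test
```

The key point, per the issue linked above: merely *defining* `_ipython_display_` makes IPython route output through it, which suppresses the cell's return value — hence patching it on only when the table-schema repr is actually wanted.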
MAINT: Remove self.assertEqual from testing | diff --git a/pandas/compat/__init__.py b/pandas/compat/__init__.py
index 7ebdd9735b967..2fe6359fd1ea6 100644
--- a/pandas/compat/__init__.py
+++ b/pandas/compat/__init__.py
@@ -104,6 +104,7 @@ def signature(f):
map = map
zip = zip
filter = filter
+ intern = sys.intern
reduce = functools.reduce
long = int
unichr = chr
@@ -146,6 +147,7 @@ def signature(f):
# import iterator versions of these functions
range = xrange
+ intern = intern
zip = itertools.izip
filter = itertools.ifilter
map = itertools.imap
diff --git a/pandas/tests/computation/test_eval.py b/pandas/tests/computation/test_eval.py
index 827a4668ed0bc..f8f84985142a8 100644
--- a/pandas/tests/computation/test_eval.py
+++ b/pandas/tests/computation/test_eval.py
@@ -8,7 +8,7 @@
from numpy.random import randn, rand, randint
import numpy as np
-from pandas.core.dtypes.common import is_list_like, is_scalar
+from pandas.core.dtypes.common import is_bool, is_list_like, is_scalar
import pandas as pd
from pandas.core import common as com
from pandas.errors import PerformanceWarning
@@ -209,7 +209,7 @@ def check_equal(self, result, expected):
elif isinstance(result, np.ndarray):
tm.assert_numpy_array_equal(result, expected)
else:
- self.assertEqual(result, expected)
+ assert result == expected
def check_complex_cmp_op(self, lhs, cmp1, rhs, binop, cmp2):
skip_these = _scalar_skip
@@ -610,30 +610,28 @@ def test_scalar_unary(self):
with pytest.raises(TypeError):
pd.eval('~1.0', engine=self.engine, parser=self.parser)
- self.assertEqual(
- pd.eval('-1.0', parser=self.parser, engine=self.engine), -1.0)
- self.assertEqual(
- pd.eval('+1.0', parser=self.parser, engine=self.engine), +1.0)
-
- self.assertEqual(
- pd.eval('~1', parser=self.parser, engine=self.engine), ~1)
- self.assertEqual(
- pd.eval('-1', parser=self.parser, engine=self.engine), -1)
- self.assertEqual(
- pd.eval('+1', parser=self.parser, engine=self.engine), +1)
-
- self.assertEqual(
- pd.eval('~True', parser=self.parser, engine=self.engine), ~True)
- self.assertEqual(
- pd.eval('~False', parser=self.parser, engine=self.engine), ~False)
- self.assertEqual(
- pd.eval('-True', parser=self.parser, engine=self.engine), -True)
- self.assertEqual(
- pd.eval('-False', parser=self.parser, engine=self.engine), -False)
- self.assertEqual(
- pd.eval('+True', parser=self.parser, engine=self.engine), +True)
- self.assertEqual(
- pd.eval('+False', parser=self.parser, engine=self.engine), +False)
+ assert pd.eval('-1.0', parser=self.parser,
+ engine=self.engine) == -1.0
+ assert pd.eval('+1.0', parser=self.parser,
+ engine=self.engine) == +1.0
+ assert pd.eval('~1', parser=self.parser,
+ engine=self.engine) == ~1
+ assert pd.eval('-1', parser=self.parser,
+ engine=self.engine) == -1
+ assert pd.eval('+1', parser=self.parser,
+ engine=self.engine) == +1
+ assert pd.eval('~True', parser=self.parser,
+ engine=self.engine) == ~True
+ assert pd.eval('~False', parser=self.parser,
+ engine=self.engine) == ~False
+ assert pd.eval('-True', parser=self.parser,
+ engine=self.engine) == -True
+ assert pd.eval('-False', parser=self.parser,
+ engine=self.engine) == -False
+ assert pd.eval('+True', parser=self.parser,
+ engine=self.engine) == +True
+ assert pd.eval('+False', parser=self.parser,
+ engine=self.engine) == +False
def test_unary_in_array(self):
# GH 11235
@@ -658,50 +656,51 @@ def test_disallow_scalar_bool_ops(self):
pd.eval(ex, engine=self.engine, parser=self.parser)
def test_identical(self):
- # GH 10546
+ # see gh-10546
x = 1
result = pd.eval('x', engine=self.engine, parser=self.parser)
- self.assertEqual(result, 1)
+ assert result == 1
assert is_scalar(result)
x = 1.5
result = pd.eval('x', engine=self.engine, parser=self.parser)
- self.assertEqual(result, 1.5)
+ assert result == 1.5
assert is_scalar(result)
x = False
result = pd.eval('x', engine=self.engine, parser=self.parser)
- self.assertEqual(result, False)
+ assert not result
+ assert is_bool(result)
assert is_scalar(result)
x = np.array([1])
result = pd.eval('x', engine=self.engine, parser=self.parser)
tm.assert_numpy_array_equal(result, np.array([1]))
- self.assertEqual(result.shape, (1, ))
+ assert result.shape == (1, )
x = np.array([1.5])
result = pd.eval('x', engine=self.engine, parser=self.parser)
tm.assert_numpy_array_equal(result, np.array([1.5]))
- self.assertEqual(result.shape, (1, ))
+ assert result.shape == (1, )
x = np.array([False]) # noqa
result = pd.eval('x', engine=self.engine, parser=self.parser)
tm.assert_numpy_array_equal(result, np.array([False]))
- self.assertEqual(result.shape, (1, ))
+ assert result.shape == (1, )
def test_line_continuation(self):
# GH 11149
exp = """1 + 2 * \
5 - 1 + 2 """
result = pd.eval(exp, engine=self.engine, parser=self.parser)
- self.assertEqual(result, 12)
+ assert result == 12
def test_float_truncation(self):
# GH 14241
exp = '1000000000.006'
result = pd.eval(exp, engine=self.engine, parser=self.parser)
expected = np.float64(exp)
- self.assertEqual(result, expected)
+ assert result == expected
df = pd.DataFrame({'A': [1000000000.0009,
1000000000.0011,
@@ -1121,7 +1120,7 @@ def test_simple_bool_ops(self):
ex = '{0} {1} {2}'.format(lhs, op, rhs)
res = self.eval(ex)
exp = eval(ex)
- self.assertEqual(res, exp)
+ assert res == exp
def test_bool_ops_with_constants(self):
for op, lhs, rhs in product(expr._bool_ops_syms, ('True', 'False'),
@@ -1129,7 +1128,7 @@ def test_bool_ops_with_constants(self):
ex = '{0} {1} {2}'.format(lhs, op, rhs)
res = self.eval(ex)
exp = eval(ex)
- self.assertEqual(res, exp)
+ assert res == exp
def test_panel_fails(self):
with catch_warnings(record=True):
@@ -1169,19 +1168,19 @@ def test_truediv(self):
res = self.eval('1 / 2', truediv=True)
expec = 0.5
- self.assertEqual(res, expec)
+ assert res == expec
res = self.eval('1 / 2', truediv=False)
expec = 0.5
- self.assertEqual(res, expec)
+ assert res == expec
res = self.eval('s / 2', truediv=False)
expec = 0.5
- self.assertEqual(res, expec)
+ assert res == expec
res = self.eval('s / 2', truediv=True)
expec = 0.5
- self.assertEqual(res, expec)
+ assert res == expec
else:
res = self.eval(ex, truediv=False)
tm.assert_numpy_array_equal(res, np.array([1]))
@@ -1191,19 +1190,19 @@ def test_truediv(self):
res = self.eval('1 / 2', truediv=True)
expec = 0.5
- self.assertEqual(res, expec)
+ assert res == expec
res = self.eval('1 / 2', truediv=False)
expec = 0
- self.assertEqual(res, expec)
+ assert res == expec
res = self.eval('s / 2', truediv=False)
expec = 0
- self.assertEqual(res, expec)
+ assert res == expec
res = self.eval('s / 2', truediv=True)
expec = 0.5
- self.assertEqual(res, expec)
+ assert res == expec
def test_failing_subscript_with_name_error(self):
df = DataFrame(np.random.randn(5, 3)) # noqa
@@ -1549,7 +1548,7 @@ def test_bool_ops_with_constants(self):
else:
res = self.eval(ex)
exp = eval(ex)
- self.assertEqual(res, exp)
+ assert res == exp
def test_simple_bool_ops(self):
for op, lhs, rhs in product(expr._bool_ops_syms, (True, False),
@@ -1561,7 +1560,7 @@ def test_simple_bool_ops(self):
else:
res = pd.eval(ex, engine=self.engine, parser=self.parser)
exp = eval(ex)
- self.assertEqual(res, exp)
+ assert res == exp
class TestOperationsPythonPython(TestOperationsNumExprPython):
@@ -1650,14 +1649,14 @@ def test_df_arithmetic_subexpression(self):
def check_result_type(self, dtype, expect_dtype):
df = DataFrame({'a': np.random.randn(10).astype(dtype)})
- self.assertEqual(df.a.dtype, dtype)
+ assert df.a.dtype == dtype
df.eval("b = sin(a)",
engine=self.engine,
parser=self.parser, inplace=True)
got = df.b
expect = np.sin(df.a)
- self.assertEqual(expect.dtype, got.dtype)
- self.assertEqual(expect_dtype, got.dtype)
+ assert expect.dtype == got.dtype
+ assert expect_dtype == got.dtype
tm.assert_series_equal(got, expect, check_names=False)
def test_result_types(self):
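The hunks above all follow one mechanical pattern: unittest-style `self.assertEqual(a, b)` calls become bare `assert a == b` statements, which pytest's assertion rewriting expands into a failure message showing both operands. A minimal, hypothetical before/after sketch of the pattern (not part of the pandas test suite):

```python
# Hypothetical illustration of the migration in these hunks: the same
# check written unittest-style and pytest-style. With pytest, the bare
# assert reports both operands on failure, so the helper method is no
# longer needed.
import unittest


class OldStyle(unittest.TestCase):
    def test_eval(self):
        result = 1 / 2
        self.assertEqual(result, 0.5)  # before: unittest helper


def test_eval():
    result = 1 / 2
    assert result == 0.5               # after: bare assert
```
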
diff --git a/pandas/tests/dtypes/test_cast.py b/pandas/tests/dtypes/test_cast.py
index 22640729c262f..cbf049b95b6ef 100644
--- a/pandas/tests/dtypes/test_cast.py
+++ b/pandas/tests/dtypes/test_cast.py
@@ -164,7 +164,7 @@ def test_maybe_convert_string_to_array(self):
assert result.dtype == object
result = maybe_convert_string_to_object(1)
- self.assertEqual(result, 1)
+ assert result == 1
arr = np.array(['x', 'y'], dtype=str)
result = maybe_convert_string_to_object(arr)
@@ -187,31 +187,31 @@ def test_maybe_convert_scalar(self):
# pass thru
result = maybe_convert_scalar('x')
- self.assertEqual(result, 'x')
+ assert result == 'x'
result = maybe_convert_scalar(np.array([1]))
- self.assertEqual(result, np.array([1]))
+ assert result == np.array([1])
# leave scalar dtype
result = maybe_convert_scalar(np.int64(1))
- self.assertEqual(result, np.int64(1))
+ assert result == np.int64(1)
result = maybe_convert_scalar(np.int32(1))
- self.assertEqual(result, np.int32(1))
+ assert result == np.int32(1)
result = maybe_convert_scalar(np.float32(1))
- self.assertEqual(result, np.float32(1))
+ assert result == np.float32(1)
result = maybe_convert_scalar(np.int64(1))
- self.assertEqual(result, np.float64(1))
+ assert result == np.float64(1)
# coerce
result = maybe_convert_scalar(1)
- self.assertEqual(result, np.int64(1))
+ assert result == np.int64(1)
result = maybe_convert_scalar(1.0)
- self.assertEqual(result, np.float64(1))
+ assert result == np.float64(1)
result = maybe_convert_scalar(Timestamp('20130101'))
- self.assertEqual(result, Timestamp('20130101').value)
+ assert result == Timestamp('20130101').value
result = maybe_convert_scalar(datetime(2013, 1, 1))
- self.assertEqual(result, Timestamp('20130101').value)
+ assert result == Timestamp('20130101').value
result = maybe_convert_scalar(Timedelta('1 day 1 min'))
- self.assertEqual(result, Timedelta('1 day 1 min').value)
+ assert result == Timedelta('1 day 1 min').value
class TestConvert(tm.TestCase):
@@ -291,7 +291,7 @@ def test_numpy_dtypes(self):
((np.dtype('datetime64[ns]'), np.int64), np.object)
)
for src, common in testcases:
- self.assertEqual(find_common_type(src), common)
+ assert find_common_type(src) == common
with pytest.raises(ValueError):
# empty
@@ -299,26 +299,25 @@ def test_numpy_dtypes(self):
def test_categorical_dtype(self):
dtype = CategoricalDtype()
- self.assertEqual(find_common_type([dtype]), 'category')
- self.assertEqual(find_common_type([dtype, dtype]), 'category')
- self.assertEqual(find_common_type([np.object, dtype]), np.object)
+ assert find_common_type([dtype]) == 'category'
+ assert find_common_type([dtype, dtype]) == 'category'
+ assert find_common_type([np.object, dtype]) == np.object
def test_datetimetz_dtype(self):
dtype = DatetimeTZDtype(unit='ns', tz='US/Eastern')
- self.assertEqual(find_common_type([dtype, dtype]),
- 'datetime64[ns, US/Eastern]')
+ assert find_common_type([dtype, dtype]) == 'datetime64[ns, US/Eastern]'
for dtype2 in [DatetimeTZDtype(unit='ns', tz='Asia/Tokyo'),
np.dtype('datetime64[ns]'), np.object, np.int64]:
- self.assertEqual(find_common_type([dtype, dtype2]), np.object)
- self.assertEqual(find_common_type([dtype2, dtype]), np.object)
+ assert find_common_type([dtype, dtype2]) == np.object
+ assert find_common_type([dtype2, dtype]) == np.object
def test_period_dtype(self):
dtype = PeriodDtype(freq='D')
- self.assertEqual(find_common_type([dtype, dtype]), 'period[D]')
+ assert find_common_type([dtype, dtype]) == 'period[D]'
for dtype2 in [DatetimeTZDtype(unit='ns', tz='Asia/Tokyo'),
PeriodDtype(freq='2D'), PeriodDtype(freq='H'),
np.dtype('datetime64[ns]'), np.object, np.int64]:
- self.assertEqual(find_common_type([dtype, dtype2]), np.object)
- self.assertEqual(find_common_type([dtype2, dtype]), np.object)
+ assert find_common_type([dtype, dtype2]) == np.object
+ assert find_common_type([dtype2, dtype]) == np.object
diff --git a/pandas/tests/dtypes/test_common.py b/pandas/tests/dtypes/test_common.py
index 2aad1b6baaac0..0472f0599cd9b 100644
--- a/pandas/tests/dtypes/test_common.py
+++ b/pandas/tests/dtypes/test_common.py
@@ -30,30 +30,30 @@ def test_invalid_dtype_error(self):
def test_numpy_dtype(self):
for dtype in ['M8[ns]', 'm8[ns]', 'object', 'float64', 'int64']:
- self.assertEqual(pandas_dtype(dtype), np.dtype(dtype))
+ assert pandas_dtype(dtype) == np.dtype(dtype)
def test_numpy_string_dtype(self):
# do not parse freq-like string as period dtype
- self.assertEqual(pandas_dtype('U'), np.dtype('U'))
- self.assertEqual(pandas_dtype('S'), np.dtype('S'))
+ assert pandas_dtype('U') == np.dtype('U')
+ assert pandas_dtype('S') == np.dtype('S')
def test_datetimetz_dtype(self):
for dtype in ['datetime64[ns, US/Eastern]',
'datetime64[ns, Asia/Tokyo]',
'datetime64[ns, UTC]']:
assert pandas_dtype(dtype) is DatetimeTZDtype(dtype)
- self.assertEqual(pandas_dtype(dtype), DatetimeTZDtype(dtype))
- self.assertEqual(pandas_dtype(dtype), dtype)
+ assert pandas_dtype(dtype) == DatetimeTZDtype(dtype)
+ assert pandas_dtype(dtype) == dtype
def test_categorical_dtype(self):
- self.assertEqual(pandas_dtype('category'), CategoricalDtype())
+ assert pandas_dtype('category') == CategoricalDtype()
def test_period_dtype(self):
for dtype in ['period[D]', 'period[3M]', 'period[U]',
'Period[D]', 'Period[3M]', 'Period[U]']:
assert pandas_dtype(dtype) is PeriodDtype(dtype)
- self.assertEqual(pandas_dtype(dtype), PeriodDtype(dtype))
- self.assertEqual(pandas_dtype(dtype), dtype)
+ assert pandas_dtype(dtype) == PeriodDtype(dtype)
+ assert pandas_dtype(dtype) == dtype
dtypes = dict(datetime_tz=pandas_dtype('datetime64[ns, US/Eastern]'),
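The `pandas_dtype` hunks above rely on the fact that pandas extension dtypes compare equal both to constructed dtype objects and to their string aliases. A short sketch using the public `pandas.api.types.pandas_dtype` (assuming a pandas installation is available):

```python
# Sketch of the dtype-equality behavior the tests above assert:
# pandas_dtype resolves numpy dtype strings to np.dtype objects, and
# extension dtype strings compare equal to their string form.
import numpy as np
from pandas.api.types import pandas_dtype

assert pandas_dtype("int64") == np.dtype("int64")
assert pandas_dtype("category") == "category"
assert pandas_dtype("period[D]") == "period[D]"
```
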
diff --git a/pandas/tests/dtypes/test_concat.py b/pandas/tests/dtypes/test_concat.py
index e8eb042d78f30..c0be0dc38d27f 100644
--- a/pandas/tests/dtypes/test_concat.py
+++ b/pandas/tests/dtypes/test_concat.py
@@ -11,7 +11,7 @@ def check_concat(self, to_concat, exp):
for klass in [pd.Index, pd.Series]:
to_concat_klass = [klass(c) for c in to_concat]
res = _concat.get_dtype_kinds(to_concat_klass)
- self.assertEqual(res, set(exp))
+ assert res == set(exp)
def test_get_dtype_kinds(self):
to_concat = [['a'], [1, 2]]
@@ -60,19 +60,19 @@ def test_get_dtype_kinds_period(self):
to_concat = [pd.PeriodIndex(['2011-01'], freq='M'),
pd.PeriodIndex(['2011-01'], freq='M')]
res = _concat.get_dtype_kinds(to_concat)
- self.assertEqual(res, set(['period[M]']))
+ assert res == set(['period[M]'])
to_concat = [pd.Series([pd.Period('2011-01', freq='M')]),
pd.Series([pd.Period('2011-02', freq='M')])]
res = _concat.get_dtype_kinds(to_concat)
- self.assertEqual(res, set(['object']))
+ assert res == set(['object'])
to_concat = [pd.PeriodIndex(['2011-01'], freq='M'),
pd.PeriodIndex(['2011-01'], freq='D')]
res = _concat.get_dtype_kinds(to_concat)
- self.assertEqual(res, set(['period[M]', 'period[D]']))
+ assert res == set(['period[M]', 'period[D]'])
to_concat = [pd.Series([pd.Period('2011-01', freq='M')]),
pd.Series([pd.Period('2011-02', freq='D')])]
res = _concat.get_dtype_kinds(to_concat)
- self.assertEqual(res, set(['object']))
+ assert res == set(['object'])
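One subtlety worth noting from the `test_identical` hunk earlier: `self.assertEqual(result, False)` checks value equality, while the migrated form splits this into `assert not result` (truthiness) plus an explicit `assert is_bool(result)` so the type is still verified. A hypothetical sketch of why both checks are needed (the `is_bool` helper here is a stand-in, not the pandas implementation):

```python
# Truthiness alone would also pass for 0, 0.0, [], "", etc., so the
# migrated test adds an explicit boolean-type check alongside it.
import numpy as np

result = np.bool_(False)
assert not result            # truthiness, as in the migrated test


def is_bool(obj):
    # hypothetical stand-in for pandas' is_bool helper
    return isinstance(obj, (bool, np.bool_))


assert is_bool(result)       # the type is still verified
assert not is_bool(0)        # 0 is falsy too, but it is not a bool
```
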
diff --git a/pandas/tests/dtypes/test_dtypes.py b/pandas/tests/dtypes/test_dtypes.py
index b02c846d50c89..da3120145fe38 100644
--- a/pandas/tests/dtypes/test_dtypes.py
+++ b/pandas/tests/dtypes/test_dtypes.py
@@ -124,10 +124,10 @@ def test_subclass(self):
assert issubclass(type(a), type(b))
def test_coerce_to_dtype(self):
- self.assertEqual(_coerce_to_dtype('datetime64[ns, US/Eastern]'),
- DatetimeTZDtype('ns', 'US/Eastern'))
- self.assertEqual(_coerce_to_dtype('datetime64[ns, Asia/Tokyo]'),
- DatetimeTZDtype('ns', 'Asia/Tokyo'))
+ assert (_coerce_to_dtype('datetime64[ns, US/Eastern]') ==
+ DatetimeTZDtype('ns', 'US/Eastern'))
+ assert (_coerce_to_dtype('datetime64[ns, Asia/Tokyo]') ==
+ DatetimeTZDtype('ns', 'Asia/Tokyo'))
def test_compat(self):
assert is_datetime64tz_dtype(self.dtype)
@@ -194,16 +194,14 @@ def test_dst(self):
dr2 = date_range('2013-08-01', periods=3, tz='US/Eastern')
s2 = Series(dr2, name='A')
assert is_datetimetz(s2)
- self.assertEqual(s1.dtype, s2.dtype)
+ assert s1.dtype == s2.dtype
def test_parser(self):
# pr #11245
for tz, constructor in product(('UTC', 'US/Eastern'),
('M8', 'datetime64')):
- self.assertEqual(
- DatetimeTZDtype('%s[ns, %s]' % (constructor, tz)),
- DatetimeTZDtype('ns', tz),
- )
+ assert (DatetimeTZDtype('%s[ns, %s]' % (constructor, tz)) ==
+ DatetimeTZDtype('ns', tz))
def test_empty(self):
dt = DatetimeTZDtype()
@@ -222,18 +220,18 @@ def test_construction(self):
for s in ['period[D]', 'Period[D]', 'D']:
dt = PeriodDtype(s)
- self.assertEqual(dt.freq, pd.tseries.offsets.Day())
+ assert dt.freq == pd.tseries.offsets.Day()
assert is_period_dtype(dt)
for s in ['period[3D]', 'Period[3D]', '3D']:
dt = PeriodDtype(s)
- self.assertEqual(dt.freq, pd.tseries.offsets.Day(3))
+ assert dt.freq == pd.tseries.offsets.Day(3)
assert is_period_dtype(dt)
for s in ['period[26H]', 'Period[26H]', '26H',
'period[1D2H]', 'Period[1D2H]', '1D2H']:
dt = PeriodDtype(s)
- self.assertEqual(dt.freq, pd.tseries.offsets.Hour(26))
+ assert dt.freq == pd.tseries.offsets.Hour(26)
assert is_period_dtype(dt)
def test_subclass(self):
@@ -254,10 +252,8 @@ def test_identity(self):
assert PeriodDtype('period[1S1U]') is PeriodDtype('period[1000001U]')
def test_coerce_to_dtype(self):
- self.assertEqual(_coerce_to_dtype('period[D]'),
- PeriodDtype('period[D]'))
- self.assertEqual(_coerce_to_dtype('period[3M]'),
- PeriodDtype('period[3M]'))
+ assert _coerce_to_dtype('period[D]') == PeriodDtype('period[D]')
+ assert _coerce_to_dtype('period[3M]') == PeriodDtype('period[3M]')
def test_compat(self):
assert not is_datetime64_ns_dtype(self.dtype)
@@ -354,7 +350,7 @@ def test_construction(self):
for s in ['interval[int64]', 'Interval[int64]', 'int64']:
i = IntervalDtype(s)
- self.assertEqual(i.subtype, np.dtype('int64'))
+ assert i.subtype == np.dtype('int64')
assert is_interval_dtype(i)
def test_construction_generic(self):
@@ -393,12 +389,12 @@ def test_is_dtype(self):
assert not IntervalDtype.is_dtype(np.float64)
def test_identity(self):
- self.assertEqual(IntervalDtype('interval[int64]'),
- IntervalDtype('interval[int64]'))
+ assert (IntervalDtype('interval[int64]') ==
+ IntervalDtype('interval[int64]'))
def test_coerce_to_dtype(self):
- self.assertEqual(_coerce_to_dtype('interval[int64]'),
- IntervalDtype('interval[int64]'))
+ assert (_coerce_to_dtype('interval[int64]') ==
+ IntervalDtype('interval[int64]'))
def test_construction_from_string(self):
result = IntervalDtype('interval[int64]')
diff --git a/pandas/tests/dtypes/test_inference.py b/pandas/tests/dtypes/test_inference.py
index 3449d6c56167e..ec02a5a200308 100644
--- a/pandas/tests/dtypes/test_inference.py
+++ b/pandas/tests/dtypes/test_inference.py
@@ -233,11 +233,11 @@ def test_infer_dtype_bytes(self):
# string array of bytes
arr = np.array(list('abc'), dtype='S1')
- self.assertEqual(lib.infer_dtype(arr), compare)
+ assert lib.infer_dtype(arr) == compare
# object array of bytes
arr = arr.astype(object)
- self.assertEqual(lib.infer_dtype(arr), compare)
+ assert lib.infer_dtype(arr) == compare
def test_isinf_scalar(self):
# GH 11352
@@ -409,58 +409,58 @@ class TestTypeInference(tm.TestCase):
def test_length_zero(self):
result = lib.infer_dtype(np.array([], dtype='i4'))
- self.assertEqual(result, 'integer')
+ assert result == 'integer'
result = lib.infer_dtype([])
- self.assertEqual(result, 'empty')
+ assert result == 'empty'
def test_integers(self):
arr = np.array([1, 2, 3, np.int64(4), np.int32(5)], dtype='O')
result = lib.infer_dtype(arr)
- self.assertEqual(result, 'integer')
+ assert result == 'integer'
arr = np.array([1, 2, 3, np.int64(4), np.int32(5), 'foo'], dtype='O')
result = lib.infer_dtype(arr)
- self.assertEqual(result, 'mixed-integer')
+ assert result == 'mixed-integer'
arr = np.array([1, 2, 3, 4, 5], dtype='i4')
result = lib.infer_dtype(arr)
- self.assertEqual(result, 'integer')
+ assert result == 'integer'
def test_bools(self):
arr = np.array([True, False, True, True, True], dtype='O')
result = lib.infer_dtype(arr)
- self.assertEqual(result, 'boolean')
+ assert result == 'boolean'
arr = np.array([np.bool_(True), np.bool_(False)], dtype='O')
result = lib.infer_dtype(arr)
- self.assertEqual(result, 'boolean')
+ assert result == 'boolean'
arr = np.array([True, False, True, 'foo'], dtype='O')
result = lib.infer_dtype(arr)
- self.assertEqual(result, 'mixed')
+ assert result == 'mixed'
arr = np.array([True, False, True], dtype=bool)
result = lib.infer_dtype(arr)
- self.assertEqual(result, 'boolean')
+ assert result == 'boolean'
def test_floats(self):
arr = np.array([1., 2., 3., np.float64(4), np.float32(5)], dtype='O')
result = lib.infer_dtype(arr)
- self.assertEqual(result, 'floating')
+ assert result == 'floating'
arr = np.array([1, 2, 3, np.float64(4), np.float32(5), 'foo'],
dtype='O')
result = lib.infer_dtype(arr)
- self.assertEqual(result, 'mixed-integer')
+ assert result == 'mixed-integer'
arr = np.array([1, 2, 3, 4, 5], dtype='f4')
result = lib.infer_dtype(arr)
- self.assertEqual(result, 'floating')
+ assert result == 'floating'
arr = np.array([1, 2, 3, 4, 5], dtype='f8')
result = lib.infer_dtype(arr)
- self.assertEqual(result, 'floating')
+ assert result == 'floating'
def test_string(self):
pass
@@ -472,198 +472,198 @@ def test_datetime(self):
dates = [datetime(2012, 1, x) for x in range(1, 20)]
index = Index(dates)
- self.assertEqual(index.inferred_type, 'datetime64')
+ assert index.inferred_type == 'datetime64'
def test_infer_dtype_datetime(self):
arr = np.array([Timestamp('2011-01-01'),
Timestamp('2011-01-02')])
- self.assertEqual(lib.infer_dtype(arr), 'datetime')
+ assert lib.infer_dtype(arr) == 'datetime'
arr = np.array([np.datetime64('2011-01-01'),
np.datetime64('2011-01-01')], dtype=object)
- self.assertEqual(lib.infer_dtype(arr), 'datetime64')
+ assert lib.infer_dtype(arr) == 'datetime64'
arr = np.array([datetime(2011, 1, 1), datetime(2012, 2, 1)])
- self.assertEqual(lib.infer_dtype(arr), 'datetime')
+ assert lib.infer_dtype(arr) == 'datetime'
# starts with nan
for n in [pd.NaT, np.nan]:
arr = np.array([n, pd.Timestamp('2011-01-02')])
- self.assertEqual(lib.infer_dtype(arr), 'datetime')
+ assert lib.infer_dtype(arr) == 'datetime'
arr = np.array([n, np.datetime64('2011-01-02')])
- self.assertEqual(lib.infer_dtype(arr), 'datetime64')
+ assert lib.infer_dtype(arr) == 'datetime64'
arr = np.array([n, datetime(2011, 1, 1)])
- self.assertEqual(lib.infer_dtype(arr), 'datetime')
+ assert lib.infer_dtype(arr) == 'datetime'
arr = np.array([n, pd.Timestamp('2011-01-02'), n])
- self.assertEqual(lib.infer_dtype(arr), 'datetime')
+ assert lib.infer_dtype(arr) == 'datetime'
arr = np.array([n, np.datetime64('2011-01-02'), n])
- self.assertEqual(lib.infer_dtype(arr), 'datetime64')
+ assert lib.infer_dtype(arr) == 'datetime64'
arr = np.array([n, datetime(2011, 1, 1), n])
- self.assertEqual(lib.infer_dtype(arr), 'datetime')
+ assert lib.infer_dtype(arr) == 'datetime'
# different type of nat
arr = np.array([np.timedelta64('nat'),
np.datetime64('2011-01-02')], dtype=object)
- self.assertEqual(lib.infer_dtype(arr), 'mixed')
+ assert lib.infer_dtype(arr) == 'mixed'
arr = np.array([np.datetime64('2011-01-02'),
np.timedelta64('nat')], dtype=object)
- self.assertEqual(lib.infer_dtype(arr), 'mixed')
+ assert lib.infer_dtype(arr) == 'mixed'
# mixed datetime
arr = np.array([datetime(2011, 1, 1),
pd.Timestamp('2011-01-02')])
- self.assertEqual(lib.infer_dtype(arr), 'datetime')
+ assert lib.infer_dtype(arr) == 'datetime'
# should be datetime?
arr = np.array([np.datetime64('2011-01-01'),
pd.Timestamp('2011-01-02')])
- self.assertEqual(lib.infer_dtype(arr), 'mixed')
+ assert lib.infer_dtype(arr) == 'mixed'
arr = np.array([pd.Timestamp('2011-01-02'),
np.datetime64('2011-01-01')])
- self.assertEqual(lib.infer_dtype(arr), 'mixed')
+ assert lib.infer_dtype(arr) == 'mixed'
arr = np.array([np.nan, pd.Timestamp('2011-01-02'), 1])
- self.assertEqual(lib.infer_dtype(arr), 'mixed-integer')
+ assert lib.infer_dtype(arr) == 'mixed-integer'
arr = np.array([np.nan, pd.Timestamp('2011-01-02'), 1.1])
- self.assertEqual(lib.infer_dtype(arr), 'mixed')
+ assert lib.infer_dtype(arr) == 'mixed'
arr = np.array([np.nan, '2011-01-01', pd.Timestamp('2011-01-02')])
- self.assertEqual(lib.infer_dtype(arr), 'mixed')
+ assert lib.infer_dtype(arr) == 'mixed'
def test_infer_dtype_timedelta(self):
arr = np.array([pd.Timedelta('1 days'),
pd.Timedelta('2 days')])
- self.assertEqual(lib.infer_dtype(arr), 'timedelta')
+ assert lib.infer_dtype(arr) == 'timedelta'
arr = np.array([np.timedelta64(1, 'D'),
np.timedelta64(2, 'D')], dtype=object)
- self.assertEqual(lib.infer_dtype(arr), 'timedelta')
+ assert lib.infer_dtype(arr) == 'timedelta'
arr = np.array([timedelta(1), timedelta(2)])
- self.assertEqual(lib.infer_dtype(arr), 'timedelta')
+ assert lib.infer_dtype(arr) == 'timedelta'
# starts with nan
for n in [pd.NaT, np.nan]:
arr = np.array([n, Timedelta('1 days')])
- self.assertEqual(lib.infer_dtype(arr), 'timedelta')
+ assert lib.infer_dtype(arr) == 'timedelta'
arr = np.array([n, np.timedelta64(1, 'D')])
- self.assertEqual(lib.infer_dtype(arr), 'timedelta')
+ assert lib.infer_dtype(arr) == 'timedelta'
arr = np.array([n, timedelta(1)])
- self.assertEqual(lib.infer_dtype(arr), 'timedelta')
+ assert lib.infer_dtype(arr) == 'timedelta'
arr = np.array([n, pd.Timedelta('1 days'), n])
- self.assertEqual(lib.infer_dtype(arr), 'timedelta')
+ assert lib.infer_dtype(arr) == 'timedelta'
arr = np.array([n, np.timedelta64(1, 'D'), n])
- self.assertEqual(lib.infer_dtype(arr), 'timedelta')
+ assert lib.infer_dtype(arr) == 'timedelta'
arr = np.array([n, timedelta(1), n])
- self.assertEqual(lib.infer_dtype(arr), 'timedelta')
+ assert lib.infer_dtype(arr) == 'timedelta'
# different type of nat
arr = np.array([np.datetime64('nat'), np.timedelta64(1, 'D')],
dtype=object)
- self.assertEqual(lib.infer_dtype(arr), 'mixed')
+ assert lib.infer_dtype(arr) == 'mixed'
arr = np.array([np.timedelta64(1, 'D'), np.datetime64('nat')],
dtype=object)
- self.assertEqual(lib.infer_dtype(arr), 'mixed')
+ assert lib.infer_dtype(arr) == 'mixed'
def test_infer_dtype_period(self):
# GH 13664
arr = np.array([pd.Period('2011-01', freq='D'),
pd.Period('2011-02', freq='D')])
- self.assertEqual(lib.infer_dtype(arr), 'period')
+ assert lib.infer_dtype(arr) == 'period'
arr = np.array([pd.Period('2011-01', freq='D'),
pd.Period('2011-02', freq='M')])
- self.assertEqual(lib.infer_dtype(arr), 'period')
+ assert lib.infer_dtype(arr) == 'period'
# starts with nan
for n in [pd.NaT, np.nan]:
arr = np.array([n, pd.Period('2011-01', freq='D')])
- self.assertEqual(lib.infer_dtype(arr), 'period')
+ assert lib.infer_dtype(arr) == 'period'
arr = np.array([n, pd.Period('2011-01', freq='D'), n])
- self.assertEqual(lib.infer_dtype(arr), 'period')
+ assert lib.infer_dtype(arr) == 'period'
# different type of nat
arr = np.array([np.datetime64('nat'), pd.Period('2011-01', freq='M')],
dtype=object)
- self.assertEqual(lib.infer_dtype(arr), 'mixed')
+ assert lib.infer_dtype(arr) == 'mixed'
arr = np.array([pd.Period('2011-01', freq='M'), np.datetime64('nat')],
dtype=object)
- self.assertEqual(lib.infer_dtype(arr), 'mixed')
+ assert lib.infer_dtype(arr) == 'mixed'
def test_infer_dtype_all_nan_nat_like(self):
arr = np.array([np.nan, np.nan])
- self.assertEqual(lib.infer_dtype(arr), 'floating')
+ assert lib.infer_dtype(arr) == 'floating'
 # a mix of nan and None results in 'mixed'
arr = np.array([np.nan, np.nan, None])
- self.assertEqual(lib.infer_dtype(arr), 'mixed')
+ assert lib.infer_dtype(arr) == 'mixed'
arr = np.array([None, np.nan, np.nan])
- self.assertEqual(lib.infer_dtype(arr), 'mixed')
+ assert lib.infer_dtype(arr) == 'mixed'
# pd.NaT
arr = np.array([pd.NaT])
- self.assertEqual(lib.infer_dtype(arr), 'datetime')
+ assert lib.infer_dtype(arr) == 'datetime'
arr = np.array([pd.NaT, np.nan])
- self.assertEqual(lib.infer_dtype(arr), 'datetime')
+ assert lib.infer_dtype(arr) == 'datetime'
arr = np.array([np.nan, pd.NaT])
- self.assertEqual(lib.infer_dtype(arr), 'datetime')
+ assert lib.infer_dtype(arr) == 'datetime'
arr = np.array([np.nan, pd.NaT, np.nan])
- self.assertEqual(lib.infer_dtype(arr), 'datetime')
+ assert lib.infer_dtype(arr) == 'datetime'
arr = np.array([None, pd.NaT, None])
- self.assertEqual(lib.infer_dtype(arr), 'datetime')
+ assert lib.infer_dtype(arr) == 'datetime'
# np.datetime64(nat)
arr = np.array([np.datetime64('nat')])
- self.assertEqual(lib.infer_dtype(arr), 'datetime64')
+ assert lib.infer_dtype(arr) == 'datetime64'
for n in [np.nan, pd.NaT, None]:
arr = np.array([n, np.datetime64('nat'), n])
- self.assertEqual(lib.infer_dtype(arr), 'datetime64')
+ assert lib.infer_dtype(arr) == 'datetime64'
arr = np.array([pd.NaT, n, np.datetime64('nat'), n])
- self.assertEqual(lib.infer_dtype(arr), 'datetime64')
+ assert lib.infer_dtype(arr) == 'datetime64'
arr = np.array([np.timedelta64('nat')], dtype=object)
- self.assertEqual(lib.infer_dtype(arr), 'timedelta')
+ assert lib.infer_dtype(arr) == 'timedelta'
for n in [np.nan, pd.NaT, None]:
arr = np.array([n, np.timedelta64('nat'), n])
- self.assertEqual(lib.infer_dtype(arr), 'timedelta')
+ assert lib.infer_dtype(arr) == 'timedelta'
arr = np.array([pd.NaT, n, np.timedelta64('nat'), n])
- self.assertEqual(lib.infer_dtype(arr), 'timedelta')
+ assert lib.infer_dtype(arr) == 'timedelta'
# datetime / timedelta mixed
arr = np.array([pd.NaT, np.datetime64('nat'),
np.timedelta64('nat'), np.nan])
- self.assertEqual(lib.infer_dtype(arr), 'mixed')
+ assert lib.infer_dtype(arr) == 'mixed'
arr = np.array([np.timedelta64('nat'), np.datetime64('nat')],
dtype=object)
- self.assertEqual(lib.infer_dtype(arr), 'mixed')
+ assert lib.infer_dtype(arr) == 'mixed'
def test_is_datetimelike_array_all_nan_nat_like(self):
arr = np.array([np.nan, pd.NaT, np.datetime64('nat')])
@@ -706,7 +706,7 @@ def test_date(self):
dates = [date(2012, 1, x) for x in range(1, 20)]
index = Index(dates)
- self.assertEqual(index.inferred_type, 'date')
+ assert index.inferred_type == 'date'
def test_to_object_array_tuples(self):
r = (5, 6)
@@ -729,7 +729,7 @@ def test_object(self):
# cannot infer more than this as only a single element
arr = np.array([None], dtype='O')
result = lib.infer_dtype(arr)
- self.assertEqual(result, 'mixed')
+ assert result == 'mixed'
def test_to_object_array_width(self):
# see gh-13320
@@ -761,17 +761,17 @@ def test_categorical(self):
from pandas import Categorical, Series
arr = Categorical(list('abc'))
result = lib.infer_dtype(arr)
- self.assertEqual(result, 'categorical')
+ assert result == 'categorical'
result = lib.infer_dtype(Series(arr))
- self.assertEqual(result, 'categorical')
+ assert result == 'categorical'
arr = Categorical(list('abc'), categories=['cegfab'], ordered=True)
result = lib.infer_dtype(arr)
- self.assertEqual(result, 'categorical')
+ assert result == 'categorical'
result = lib.infer_dtype(Series(arr))
- self.assertEqual(result, 'categorical')
+ assert result == 'categorical'
class TestNumberScalar(tm.TestCase):
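The `infer_dtype` hunks above assert against string labels such as `'integer'`, `'floating'`, and `'mixed'`. The public counterpart of the internal `lib.infer_dtype` used in these tests is `pandas.api.types.infer_dtype`; a quick sketch of the labels involved (assuming a pandas installation):

```python
# The string labels asserted in the hunks above, via the public API.
import numpy as np
from pandas.api.types import infer_dtype

assert infer_dtype(np.array([1, 2, 3])) == "integer"
assert infer_dtype(np.array([1.0, 2.0])) == "floating"
assert infer_dtype(np.array([True, "foo"], dtype=object)) == "mixed"
```
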
diff --git a/pandas/tests/frame/test_alter_axes.py b/pandas/tests/frame/test_alter_axes.py
index 303c8cb6e858a..34ab0b72f9b9a 100644
--- a/pandas/tests/frame/test_alter_axes.py
+++ b/pandas/tests/frame/test_alter_axes.py
@@ -69,7 +69,7 @@ def test_set_index2(self):
assert_frame_equal(result, expected)
assert_frame_equal(result_nodrop, expected_nodrop)
- self.assertEqual(result.index.name, index.name)
+ assert result.index.name == index.name
# inplace, single
df2 = df.copy()
@@ -97,7 +97,7 @@ def test_set_index2(self):
assert_frame_equal(result, expected)
assert_frame_equal(result_nodrop, expected_nodrop)
- self.assertEqual(result.index.names, index.names)
+ assert result.index.names == index.names
# inplace
df2 = df.copy()
@@ -127,7 +127,7 @@ def test_set_index2(self):
# Series
result = df.set_index(df.C)
- self.assertEqual(result.index.name, 'C')
+ assert result.index.name == 'C'
def test_set_index_nonuniq(self):
df = DataFrame({'A': ['foo', 'foo', 'foo', 'bar', 'bar'],
@@ -174,7 +174,7 @@ def test_construction_with_categorical_index(self):
idf = df.set_index('B')
str(idf)
tm.assert_index_equal(idf.index, ci, check_names=False)
- self.assertEqual(idf.index.name, 'B')
+ assert idf.index.name == 'B'
# from a CategoricalIndex
df = DataFrame({'A': np.random.randn(10),
@@ -182,17 +182,17 @@ def test_construction_with_categorical_index(self):
idf = df.set_index('B')
str(idf)
tm.assert_index_equal(idf.index, ci, check_names=False)
- self.assertEqual(idf.index.name, 'B')
+ assert idf.index.name == 'B'
idf = df.set_index('B').reset_index().set_index('B')
str(idf)
tm.assert_index_equal(idf.index, ci, check_names=False)
- self.assertEqual(idf.index.name, 'B')
+ assert idf.index.name == 'B'
new_df = idf.reset_index()
new_df.index = df.B
tm.assert_index_equal(new_df.index, ci, check_names=False)
- self.assertEqual(idf.index.name, 'B')
+ assert idf.index.name == 'B'
def test_set_index_cast_datetimeindex(self):
df = DataFrame({'A': [datetime(2000, 1, 1) + timedelta(i)
@@ -224,7 +224,7 @@ def test_set_index_cast_datetimeindex(self):
df['B'] = i
result = df['B']
assert_series_equal(result, expected, check_names=False)
- self.assertEqual(result.name, 'B')
+ assert result.name == 'B'
# keep the timezone
result = i.to_series(keep_tz=True)
@@ -241,7 +241,7 @@ def test_set_index_cast_datetimeindex(self):
df['D'] = i.to_pydatetime()
result = df['D']
assert_series_equal(result, expected, check_names=False)
- self.assertEqual(result.name, 'D')
+ assert result.name == 'D'
# GH 6785
# set the index manually
@@ -279,9 +279,9 @@ def test_set_index_timezone(self):
i = pd.to_datetime(["2014-01-01 10:10:10"],
utc=True).tz_convert('Europe/Rome')
df = DataFrame({'i': i})
- self.assertEqual(df.set_index(i).index[0].hour, 11)
- self.assertEqual(pd.DatetimeIndex(pd.Series(df.i))[0].hour, 11)
- self.assertEqual(df.set_index(df.i).index[0].hour, 11)
+ assert df.set_index(i).index[0].hour == 11
+ assert pd.DatetimeIndex(pd.Series(df.i))[0].hour == 11
+ assert df.set_index(df.i).index[0].hour == 11
def test_set_index_dst(self):
di = pd.date_range('2006-10-29 00:00:00', periods=3,
@@ -365,7 +365,7 @@ def test_dti_set_index_reindex(self):
# TODO: unused?
result = df.set_index(new_index) # noqa
- self.assertEqual(new_index.freq, index.freq)
+ assert new_index.freq == index.freq
# Renaming
@@ -416,7 +416,7 @@ def test_rename(self):
renamed = renamer.rename(index={'foo': 'bar', 'bar': 'foo'})
tm.assert_index_equal(renamed.index,
pd.Index(['bar', 'foo'], name='name'))
- self.assertEqual(renamed.index.name, renamer.index.name)
+ assert renamed.index.name == renamer.index.name
def test_rename_multiindex(self):
@@ -440,8 +440,8 @@ def test_rename_multiindex(self):
names=['fizz', 'buzz'])
tm.assert_index_equal(renamed.index, new_index)
tm.assert_index_equal(renamed.columns, new_columns)
- self.assertEqual(renamed.index.names, df.index.names)
- self.assertEqual(renamed.columns.names, df.columns.names)
+ assert renamed.index.names == df.index.names
+ assert renamed.columns.names == df.columns.names
#
# with specifying a level (GH13766)
@@ -609,7 +609,7 @@ def test_reset_index(self):
# preserve column names
self.frame.columns.name = 'columns'
resetted = self.frame.reset_index()
- self.assertEqual(resetted.columns.name, 'columns')
+ assert resetted.columns.name == 'columns'
# only remove certain columns
frame = self.frame.reset_index().set_index(['index', 'A', 'B'])
@@ -649,10 +649,10 @@ def test_reset_index_right_dtype(self):
df = DataFrame(s1)
resetted = s1.reset_index()
- self.assertEqual(resetted['time'].dtype, np.float64)
+ assert resetted['time'].dtype == np.float64
resetted = df.reset_index()
- self.assertEqual(resetted['time'].dtype, np.float64)
+ assert resetted['time'].dtype == np.float64
def test_reset_index_multiindex_col(self):
vals = np.random.randn(3, 3).astype(object)
@@ -752,7 +752,7 @@ def test_set_index_names(self):
df = pd.util.testing.makeDataFrame()
df.index.name = 'name'
- self.assertEqual(df.set_index(df.index).index.names, ['name'])
+ assert df.set_index(df.index).index.names == ['name']
mi = MultiIndex.from_arrays(df[['A', 'B']].T.values, names=['A', 'B'])
mi2 = MultiIndex.from_arrays(df[['A', 'B', 'A', 'B']].T.values,
@@ -760,7 +760,7 @@ def test_set_index_names(self):
df = df.set_index(['A', 'B'])
- self.assertEqual(df.set_index(df.index).index.names, ['A', 'B'])
+ assert df.set_index(df.index).index.names == ['A', 'B']
# Check that set_index isn't converting a MultiIndex into an Index
assert isinstance(df.set_index(df.index).index, MultiIndex)
diff --git a/pandas/tests/frame/test_analytics.py b/pandas/tests/frame/test_analytics.py
index 8f46f055343d4..89ee096b4434e 100644
--- a/pandas/tests/frame/test_analytics.py
+++ b/pandas/tests/frame/test_analytics.py
@@ -83,8 +83,8 @@ def test_corr_nooverlap(self):
rs = df.corr(meth)
assert isnull(rs.loc['A', 'B'])
assert isnull(rs.loc['B', 'A'])
- self.assertEqual(rs.loc['A', 'A'], 1)
- self.assertEqual(rs.loc['B', 'B'], 1)
+ assert rs.loc['A', 'A'] == 1
+ assert rs.loc['B', 'B'] == 1
assert isnull(rs.loc['C', 'C'])
def test_corr_constant(self):
@@ -335,8 +335,8 @@ def test_describe_datetime_columns(self):
'50%', '75%', 'max'])
expected.columns = exp_columns
tm.assert_frame_equal(result, expected)
- self.assertEqual(result.columns.freq, 'MS')
- self.assertEqual(result.columns.tz, expected.columns.tz)
+ assert result.columns.freq == 'MS'
+ assert result.columns.tz == expected.columns.tz
def test_describe_timedelta_values(self):
# GH 6145
@@ -373,7 +373,7 @@ def test_describe_timedelta_values(self):
"50% 3 days 00:00:00 0 days 03:00:00\n"
"75% 4 days 00:00:00 0 days 04:00:00\n"
"max 5 days 00:00:00 0 days 05:00:00")
- self.assertEqual(repr(res), exp_repr)
+ assert repr(res) == exp_repr
def test_reduce_mixed_frame(self):
# GH 6806
@@ -462,7 +462,7 @@ def test_stat_operators_attempt_obj_array(self):
for df in [df1, df2]:
for meth in methods:
- self.assertEqual(df.values.dtype, np.object_)
+ assert df.values.dtype == np.object_
result = getattr(df, meth)(1)
expected = getattr(df.astype('f8'), meth)(1)
@@ -508,7 +508,7 @@ def test_cummin(self):
# fix issue
cummin_xs = self.tsframe.cummin(axis=1)
- self.assertEqual(np.shape(cummin_xs), np.shape(self.tsframe))
+ assert np.shape(cummin_xs) == np.shape(self.tsframe)
def test_cummax(self):
self.tsframe.loc[5:10, 0] = nan
@@ -531,7 +531,7 @@ def test_cummax(self):
# fix issue
cummax_xs = self.tsframe.cummax(axis=1)
- self.assertEqual(np.shape(cummax_xs), np.shape(self.tsframe))
+ assert np.shape(cummax_xs) == np.shape(self.tsframe)
def test_max(self):
self._check_stat_op('max', np.max, check_dates=True)
@@ -629,7 +629,7 @@ def test_cumsum(self):
# fix issue
cumsum_xs = self.tsframe.cumsum(axis=1)
- self.assertEqual(np.shape(cumsum_xs), np.shape(self.tsframe))
+ assert np.shape(cumsum_xs) == np.shape(self.tsframe)
def test_cumprod(self):
self.tsframe.loc[5:10, 0] = nan
@@ -648,7 +648,7 @@ def test_cumprod(self):
# fix issue
cumprod_xs = self.tsframe.cumprod(axis=1)
- self.assertEqual(np.shape(cumprod_xs), np.shape(self.tsframe))
+ assert np.shape(cumprod_xs) == np.shape(self.tsframe)
# ints
df = self.tsframe.fillna(0).astype(int)
@@ -711,7 +711,7 @@ def alt(x):
kurt2 = df.kurt(level=0).xs('bar')
tm.assert_series_equal(kurt, kurt2, check_names=False)
assert kurt.name is None
- self.assertEqual(kurt2.name, 'bar')
+ assert kurt2.name == 'bar'
def _check_stat_op(self, name, alternative, frame=None, has_skipna=True,
has_numeric_only=False, check_dtype=True,
@@ -771,8 +771,8 @@ def wrapper(x):
# check dtypes
if check_dtype:
lcd_dtype = frame.values.dtype
- self.assertEqual(lcd_dtype, result0.dtype)
- self.assertEqual(lcd_dtype, result1.dtype)
+ assert lcd_dtype == result0.dtype
+ assert lcd_dtype == result1.dtype
# result = f(axis=1)
# comp = frame.apply(alternative, axis=1).reindex(result.index)
@@ -860,16 +860,16 @@ def test_operators_timedelta64(self):
# min
result = diffs.min()
- self.assertEqual(result[0], diffs.loc[0, 'A'])
- self.assertEqual(result[1], diffs.loc[0, 'B'])
+ assert result[0] == diffs.loc[0, 'A']
+ assert result[1] == diffs.loc[0, 'B']
result = diffs.min(axis=1)
assert (result == diffs.loc[0, 'B']).all()
# max
result = diffs.max()
- self.assertEqual(result[0], diffs.loc[2, 'A'])
- self.assertEqual(result[1], diffs.loc[2, 'B'])
+ assert result[0] == diffs.loc[2, 'A']
+ assert result[1] == diffs.loc[2, 'B']
result = diffs.max(axis=1)
assert (result == diffs['A']).all()
@@ -920,7 +920,7 @@ def test_operators_timedelta64(self):
df = DataFrame({'time': date_range('20130102', periods=5),
'time2': date_range('20130105', periods=5)})
df['off1'] = df['time2'] - df['time']
- self.assertEqual(df['off1'].dtype, 'timedelta64[ns]')
+ assert df['off1'].dtype == 'timedelta64[ns]'
df['off2'] = df['time'] - df['time2']
df._consolidate_inplace()
@@ -932,8 +932,8 @@ def test_sum_corner(self):
axis1 = self.empty.sum(1)
assert isinstance(axis0, Series)
assert isinstance(axis1, Series)
- self.assertEqual(len(axis0), 0)
- self.assertEqual(len(axis1), 0)
+ assert len(axis0) == 0
+ assert len(axis1) == 0
def test_sum_object(self):
values = self.frame.values.astype(int)
@@ -963,7 +963,7 @@ def test_mean_corner(self):
# take mean of boolean column
self.frame['bool'] = self.frame['A'] > 0
means = self.frame.mean(0)
- self.assertEqual(means['bool'], self.frame['bool'].values.mean())
+ assert means['bool'] == self.frame['bool'].values.mean()
def test_stats_mixed_type(self):
# don't blow up
@@ -999,7 +999,7 @@ def test_cumsum_corner(self):
def test_sum_bools(self):
df = DataFrame(index=lrange(1), columns=lrange(10))
bools = isnull(df)
- self.assertEqual(bools.sum(axis=1)[0], 10)
+ assert bools.sum(axis=1)[0] == 10
# Index of max / min
@@ -1307,7 +1307,7 @@ def test_drop_duplicates(self):
result = df.drop_duplicates('AAA', keep=False)
expected = df.loc[[]]
tm.assert_frame_equal(result, expected)
- self.assertEqual(len(result), 0)
+ assert len(result) == 0
# multi column
expected = df.loc[[0, 1, 2, 3]]
@@ -1380,7 +1380,7 @@ def test_drop_duplicates(self):
df = df.append([[1] + [0] * 8], ignore_index=True)
for keep in ['first', 'last', False]:
- self.assertEqual(df.duplicated(keep=keep).sum(), 0)
+ assert df.duplicated(keep=keep).sum() == 0
def test_drop_duplicates_for_take_all(self):
df = DataFrame({'AAA': ['foo', 'bar', 'baz', 'bar',
@@ -1435,7 +1435,7 @@ def test_drop_duplicates_tuple(self):
result = df.drop_duplicates(('AA', 'AB'), keep=False)
expected = df.loc[[]] # empty df
- self.assertEqual(len(result), 0)
+ assert len(result) == 0
tm.assert_frame_equal(result, expected)
# multi column
@@ -1464,7 +1464,7 @@ def test_drop_duplicates_NA(self):
result = df.drop_duplicates('A', keep=False)
expected = df.loc[[]] # empty df
tm.assert_frame_equal(result, expected)
- self.assertEqual(len(result), 0)
+ assert len(result) == 0
# multi column
result = df.drop_duplicates(['A', 'B'])
@@ -1499,7 +1499,7 @@ def test_drop_duplicates_NA(self):
result = df.drop_duplicates('C', keep=False)
expected = df.loc[[]] # empty df
tm.assert_frame_equal(result, expected)
- self.assertEqual(len(result), 0)
+ assert len(result) == 0
# multi column
result = df.drop_duplicates(['C', 'B'])
@@ -1574,7 +1574,7 @@ def test_drop_duplicates_inplace(self):
expected = orig.loc[[]]
result = df
tm.assert_frame_equal(result, expected)
- self.assertEqual(len(df), 0)
+ assert len(df) == 0
# multi column
df = orig.copy()
@@ -1840,11 +1840,11 @@ def test_clip_against_series(self):
result = clipped_df.loc[lb_mask, i]
tm.assert_series_equal(result, lb[lb_mask], check_names=False)
- self.assertEqual(result.name, i)
+ assert result.name == i
result = clipped_df.loc[ub_mask, i]
tm.assert_series_equal(result, ub[ub_mask], check_names=False)
- self.assertEqual(result.name, i)
+ assert result.name == i
tm.assert_series_equal(clipped_df.loc[mask, i], df.loc[mask, i])
diff --git a/pandas/tests/frame/test_api.py b/pandas/tests/frame/test_api.py
index 6b1e9d66d2071..d2a1e32f015b2 100644
--- a/pandas/tests/frame/test_api.py
+++ b/pandas/tests/frame/test_api.py
@@ -41,16 +41,16 @@ def test_copy_index_name_checking(self):
def test_getitem_pop_assign_name(self):
s = self.frame['A']
- self.assertEqual(s.name, 'A')
+ assert s.name == 'A'
s = self.frame.pop('A')
- self.assertEqual(s.name, 'A')
+ assert s.name == 'A'
s = self.frame.loc[:, 'B']
- self.assertEqual(s.name, 'B')
+ assert s.name == 'B'
s2 = s.loc[:]
- self.assertEqual(s2.name, 'B')
+ assert s2.name == 'B'
def test_get_value(self):
for idx in self.frame.index:
@@ -75,17 +75,17 @@ class TestDataFrameMisc(tm.TestCase, SharedWithSparse, TestData):
def test_get_axis(self):
f = self.frame
- self.assertEqual(f._get_axis_number(0), 0)
- self.assertEqual(f._get_axis_number(1), 1)
- self.assertEqual(f._get_axis_number('index'), 0)
- self.assertEqual(f._get_axis_number('rows'), 0)
- self.assertEqual(f._get_axis_number('columns'), 1)
-
- self.assertEqual(f._get_axis_name(0), 'index')
- self.assertEqual(f._get_axis_name(1), 'columns')
- self.assertEqual(f._get_axis_name('index'), 'index')
- self.assertEqual(f._get_axis_name('rows'), 'index')
- self.assertEqual(f._get_axis_name('columns'), 'columns')
+ assert f._get_axis_number(0) == 0
+ assert f._get_axis_number(1) == 1
+ assert f._get_axis_number('index') == 0
+ assert f._get_axis_number('rows') == 0
+ assert f._get_axis_number('columns') == 1
+
+ assert f._get_axis_name(0) == 'index'
+ assert f._get_axis_name(1) == 'columns'
+ assert f._get_axis_name('index') == 'index'
+ assert f._get_axis_name('rows') == 'index'
+ assert f._get_axis_name('columns') == 'columns'
assert f._get_axis(0) is f.index
assert f._get_axis(1) is f.columns
@@ -154,7 +154,7 @@ def test_nonzero(self):
def test_iteritems(self):
df = DataFrame([[1, 2, 3], [4, 5, 6]], columns=['a', 'a', 'b'])
for k, v in compat.iteritems(df):
- self.assertEqual(type(v), Series)
+ assert type(v) == Series
def test_iter(self):
assert tm.equalContents(list(self.frame), self.frame.columns)
@@ -183,27 +183,25 @@ def test_itertuples(self):
df = DataFrame(data={"a": [1, 2, 3], "b": [4, 5, 6]})
dfaa = df[['a', 'a']]
- self.assertEqual(list(dfaa.itertuples()), [
- (0, 1, 1), (1, 2, 2), (2, 3, 3)])
- self.assertEqual(repr(list(df.itertuples(name=None))),
- '[(0, 1, 4), (1, 2, 5), (2, 3, 6)]')
+ assert (list(dfaa.itertuples()) ==
+ [(0, 1, 1), (1, 2, 2), (2, 3, 3)])
+ assert (repr(list(df.itertuples(name=None))) ==
+ '[(0, 1, 4), (1, 2, 5), (2, 3, 6)]')
tup = next(df.itertuples(name='TestName'))
- # no support for field renaming in Python 2.6, regular tuples are
- # returned
if sys.version >= LooseVersion('2.7'):
- self.assertEqual(tup._fields, ('Index', 'a', 'b'))
- self.assertEqual((tup.Index, tup.a, tup.b), tup)
- self.assertEqual(type(tup).__name__, 'TestName')
+ assert tup._fields == ('Index', 'a', 'b')
+ assert (tup.Index, tup.a, tup.b) == tup
+ assert type(tup).__name__ == 'TestName'
df.columns = ['def', 'return']
tup2 = next(df.itertuples(name='TestName'))
- self.assertEqual(tup2, (0, 1, 4))
+ assert tup2 == (0, 1, 4)
if sys.version >= LooseVersion('2.7'):
- self.assertEqual(tup2._fields, ('Index', '_1', '_2'))
+ assert tup2._fields == ('Index', '_1', '_2')
df3 = DataFrame(dict(('f' + str(i), [i]) for i in range(1024)))
# will raise SyntaxError if trying to create namedtuple
@@ -212,7 +210,7 @@ def test_itertuples(self):
assert isinstance(tup3, tuple)
def test_len(self):
- self.assertEqual(len(self.frame), len(self.frame.index))
+ assert len(self.frame) == len(self.frame.index)
def test_as_matrix(self):
frame = self.frame
@@ -225,15 +223,15 @@ def test_as_matrix(self):
if np.isnan(value):
assert np.isnan(frame[col][i])
else:
- self.assertEqual(value, frame[col][i])
+ assert value == frame[col][i]
# mixed type
mat = self.mixed_frame.as_matrix(['foo', 'A'])
- self.assertEqual(mat[0, 0], 'bar')
+ assert mat[0, 0] == 'bar'
df = DataFrame({'real': [1, 2, 3], 'complex': [1j, 2j, 3j]})
mat = df.as_matrix()
- self.assertEqual(mat[0, 0], 1j)
+ assert mat[0, 0] == 1j
# single block corner case
mat = self.frame.as_matrix(['A', 'B'])
@@ -262,7 +260,7 @@ def test_transpose(self):
if np.isnan(value):
assert np.isnan(frame[col][idx])
else:
- self.assertEqual(value, frame[col][idx])
+ assert value == frame[col][idx]
# mixed type
index, data = tm.getMixedTypeDict()
@@ -270,7 +268,7 @@ def test_transpose(self):
mixed_T = mixed.T
for col, s in compat.iteritems(mixed_T):
- self.assertEqual(s.dtype, np.object_)
+ assert s.dtype == np.object_
def test_transpose_get_view(self):
dft = self.frame.T
@@ -299,23 +297,23 @@ def test_axis_aliases(self):
def test_more_asMatrix(self):
values = self.mixed_frame.as_matrix()
- self.assertEqual(values.shape[1], len(self.mixed_frame.columns))
+ assert values.shape[1] == len(self.mixed_frame.columns)
def test_repr_with_mi_nat(self):
df = DataFrame({'X': [1, 2]},
index=[[pd.NaT, pd.Timestamp('20130101')], ['a', 'b']])
res = repr(df)
exp = ' X\nNaT a 1\n2013-01-01 b 2'
- self.assertEqual(res, exp)
+ assert res == exp
def test_iteritems_names(self):
for k, v in compat.iteritems(self.mixed_frame):
- self.assertEqual(v.name, k)
+ assert v.name == k
def test_series_put_names(self):
series = self.mixed_frame._series
for k, v in compat.iteritems(series):
- self.assertEqual(v.name, k)
+ assert v.name == k
def test_empty_nonzero(self):
df = DataFrame([1, 2, 3])
diff --git a/pandas/tests/frame/test_apply.py b/pandas/tests/frame/test_apply.py
index 0bccca5cecb27..5febe8c62abe8 100644
--- a/pandas/tests/frame/test_apply.py
+++ b/pandas/tests/frame/test_apply.py
@@ -97,7 +97,7 @@ def test_apply_empty(self):
[], index=pd.Index([], dtype=object)))
# Ensure that x.append hasn't been called
- self.assertEqual(x, [])
+ assert x == []
def test_apply_standard_nonunique(self):
df = DataFrame(
@@ -150,7 +150,7 @@ def test_apply_raw(self):
def test_apply_axis1(self):
d = self.frame.index[0]
tapplied = self.frame.apply(np.mean, axis=1)
- self.assertEqual(tapplied[d], np.mean(self.frame.xs(d)))
+ assert tapplied[d] == np.mean(self.frame.xs(d))
def test_apply_ignore_failures(self):
result = self.mixed_frame._apply_standard(np.mean, 0,
@@ -284,12 +284,11 @@ def transform2(row):
return row
try:
- transformed = data.apply(transform, axis=1) # noqa
+ data.apply(transform, axis=1)
except AttributeError as e:
- self.assertEqual(len(e.args), 2)
- self.assertEqual(e.args[1], 'occurred at index 4')
- self.assertEqual(
- e.args[0], "'float' object has no attribute 'startswith'")
+ assert len(e.args) == 2
+ assert e.args[1] == 'occurred at index 4'
+ assert e.args[0] == "'float' object has no attribute 'startswith'"
def test_apply_bug(self):
@@ -383,23 +382,23 @@ def test_apply_dict(self):
def test_applymap(self):
applied = self.frame.applymap(lambda x: x * 2)
- assert_frame_equal(applied, self.frame * 2)
- result = self.frame.applymap(type)
+ tm.assert_frame_equal(applied, self.frame * 2)
+ self.frame.applymap(type)
- # GH #465, function returning tuples
+ # gh-465: function returning tuples
result = self.frame.applymap(lambda x: (x, x))
assert isinstance(result['A'][0], tuple)
- # GH 2909, object conversion to float in constructor?
+ # gh-2909: object conversion to float in constructor?
df = DataFrame(data=[1, 'a'])
result = df.applymap(lambda x: x)
- self.assertEqual(result.dtypes[0], object)
+ assert result.dtypes[0] == object
df = DataFrame(data=[1., 'a'])
result = df.applymap(lambda x: x)
- self.assertEqual(result.dtypes[0], object)
+ assert result.dtypes[0] == object
- # GH2786
+ # see gh-2786
df = DataFrame(np.random.random((3, 4)))
df2 = df.copy()
cols = ['a', 'a', 'a', 'a']
@@ -408,16 +407,16 @@ def test_applymap(self):
expected = df2.applymap(str)
expected.columns = cols
result = df.applymap(str)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# datetime/timedelta
df['datetime'] = Timestamp('20130101')
df['timedelta'] = pd.Timedelta('1 min')
result = df.applymap(str)
for f in ['datetime', 'timedelta']:
- self.assertEqual(result.loc[0, f], str(df.loc[0, f]))
+ assert result.loc[0, f] == str(df.loc[0, f])
- # GH 8222
+ # see gh-8222
empty_frames = [pd.DataFrame(),
pd.DataFrame(columns=list('ABC')),
pd.DataFrame(index=list('ABC')),
diff --git a/pandas/tests/frame/test_axis_select_reindex.py b/pandas/tests/frame/test_axis_select_reindex.py
index 2c285c6261415..a563b678a3786 100644
--- a/pandas/tests/frame/test_axis_select_reindex.py
+++ b/pandas/tests/frame/test_axis_select_reindex.py
@@ -37,9 +37,9 @@ def test_drop_names(self):
df_inplace_b.drop('b', inplace=True)
df_inplace_e.drop('e', axis=1, inplace=True)
for obj in (df_dropped_b, df_dropped_e, df_inplace_b, df_inplace_e):
- self.assertEqual(obj.index.name, 'first')
- self.assertEqual(obj.columns.name, 'second')
- self.assertEqual(list(df.columns), ['d', 'e', 'f'])
+ assert obj.index.name == 'first'
+ assert obj.columns.name == 'second'
+ assert list(df.columns) == ['d', 'e', 'f']
pytest.raises(ValueError, df.drop, ['g'])
pytest.raises(ValueError, df.drop, ['g'], 1)
@@ -174,14 +174,14 @@ def test_reindex(self):
if np.isnan(val):
assert np.isnan(self.frame[col][idx])
else:
- self.assertEqual(val, self.frame[col][idx])
+ assert val == self.frame[col][idx]
else:
assert np.isnan(val)
for col, series in compat.iteritems(newFrame):
assert tm.equalContents(series.index, newFrame.index)
emptyFrame = self.frame.reindex(Index([]))
- self.assertEqual(len(emptyFrame.index), 0)
+ assert len(emptyFrame.index) == 0
# Cython code should be unit-tested directly
nonContigFrame = self.frame.reindex(self.ts1.index[::2])
@@ -192,7 +192,7 @@ def test_reindex(self):
if np.isnan(val):
assert np.isnan(self.frame[col][idx])
else:
- self.assertEqual(val, self.frame[col][idx])
+ assert val == self.frame[col][idx]
else:
assert np.isnan(val)
@@ -208,13 +208,13 @@ def test_reindex(self):
# length zero
newFrame = self.frame.reindex([])
assert newFrame.empty
- self.assertEqual(len(newFrame.columns), len(self.frame.columns))
+ assert len(newFrame.columns) == len(self.frame.columns)
# length zero with columns reindexed with non-empty index
newFrame = self.frame.reindex([])
newFrame = newFrame.reindex(self.frame.index)
- self.assertEqual(len(newFrame.index), len(self.frame.index))
- self.assertEqual(len(newFrame.columns), len(self.frame.columns))
+ assert len(newFrame.index) == len(self.frame.index)
+ assert len(newFrame.columns) == len(self.frame.columns)
# pass non-Index
newFrame = self.frame.reindex(list(self.ts1.index))
@@ -255,27 +255,27 @@ def test_reindex_name_remains(self):
i = Series(np.arange(10), name='iname')
df = df.reindex(i)
- self.assertEqual(df.index.name, 'iname')
+ assert df.index.name == 'iname'
df = df.reindex(Index(np.arange(10), name='tmpname'))
- self.assertEqual(df.index.name, 'tmpname')
+ assert df.index.name == 'tmpname'
s = Series(random.rand(10))
df = DataFrame(s.T, index=np.arange(len(s)))
i = Series(np.arange(10), name='iname')
df = df.reindex(columns=i)
- self.assertEqual(df.columns.name, 'iname')
+ assert df.columns.name == 'iname'
def test_reindex_int(self):
smaller = self.intframe.reindex(self.intframe.index[::2])
- self.assertEqual(smaller['A'].dtype, np.int64)
+ assert smaller['A'].dtype == np.int64
bigger = smaller.reindex(self.intframe.index)
- self.assertEqual(bigger['A'].dtype, np.float64)
+ assert bigger['A'].dtype == np.float64
smaller = self.intframe.reindex(columns=['A', 'B'])
- self.assertEqual(smaller['A'].dtype, np.int64)
+ assert smaller['A'].dtype == np.int64
def test_reindex_like(self):
other = self.frame.reindex(index=self.frame.index[:10],
@@ -346,8 +346,8 @@ def test_reindex_axes(self):
both_freq = df.reindex(index=time_freq, columns=some_cols).index.freq
seq_freq = df.reindex(index=time_freq).reindex(
columns=some_cols).index.freq
- self.assertEqual(index_freq, both_freq)
- self.assertEqual(index_freq, seq_freq)
+ assert index_freq == both_freq
+ assert index_freq == seq_freq
def test_reindex_fill_value(self):
df = DataFrame(np.random.randn(10, 4))
@@ -732,7 +732,7 @@ def test_filter_regex_search(self):
# regex
filtered = fcopy.filter(regex='[A]+')
- self.assertEqual(len(filtered.columns), 2)
+ assert len(filtered.columns) == 2
assert 'AA' in filtered
# doesn't have to be at beginning
@@ -845,11 +845,11 @@ def test_reindex_boolean(self):
columns=[0, 2])
reindexed = frame.reindex(np.arange(10))
- self.assertEqual(reindexed.values.dtype, np.object_)
+ assert reindexed.values.dtype == np.object_
assert isnull(reindexed[0][1])
reindexed = frame.reindex(columns=lrange(3))
- self.assertEqual(reindexed.values.dtype, np.object_)
+ assert reindexed.values.dtype == np.object_
assert isnull(reindexed[1]).all()
def test_reindex_objects(self):
@@ -867,7 +867,7 @@ def test_reindex_corner(self):
# ints are weird
smaller = self.intframe.reindex(columns=['A', 'B', 'E'])
- self.assertEqual(smaller['E'].dtype, np.float64)
+ assert smaller['E'].dtype == np.float64
def test_reindex_axis(self):
cols = ['A', 'B', 'E']
diff --git a/pandas/tests/frame/test_block_internals.py b/pandas/tests/frame/test_block_internals.py
index 2a319348aca3f..44dc6df756f3d 100644
--- a/pandas/tests/frame/test_block_internals.py
+++ b/pandas/tests/frame/test_block_internals.py
@@ -95,47 +95,47 @@ def test_as_matrix_numeric_cols(self):
self.frame['foo'] = 'bar'
values = self.frame.as_matrix(['A', 'B', 'C', 'D'])
- self.assertEqual(values.dtype, np.float64)
+ assert values.dtype == np.float64
def test_as_matrix_lcd(self):
# mixed lcd
values = self.mixed_float.as_matrix(['A', 'B', 'C', 'D'])
- self.assertEqual(values.dtype, np.float64)
+ assert values.dtype == np.float64
values = self.mixed_float.as_matrix(['A', 'B', 'C'])
- self.assertEqual(values.dtype, np.float32)
+ assert values.dtype == np.float32
values = self.mixed_float.as_matrix(['C'])
- self.assertEqual(values.dtype, np.float16)
+ assert values.dtype == np.float16
# GH 10364
# B uint64 forces float because there are other signed int types
values = self.mixed_int.as_matrix(['A', 'B', 'C', 'D'])
- self.assertEqual(values.dtype, np.float64)
+ assert values.dtype == np.float64
values = self.mixed_int.as_matrix(['A', 'D'])
- self.assertEqual(values.dtype, np.int64)
+ assert values.dtype == np.int64
# B uint64 forces float because there are other signed int types
values = self.mixed_int.as_matrix(['A', 'B', 'C'])
- self.assertEqual(values.dtype, np.float64)
+ assert values.dtype == np.float64
# as B and C are both unsigned, no forcing to float is needed
values = self.mixed_int.as_matrix(['B', 'C'])
- self.assertEqual(values.dtype, np.uint64)
+ assert values.dtype == np.uint64
values = self.mixed_int.as_matrix(['A', 'C'])
- self.assertEqual(values.dtype, np.int32)
+ assert values.dtype == np.int32
values = self.mixed_int.as_matrix(['C', 'D'])
- self.assertEqual(values.dtype, np.int64)
+ assert values.dtype == np.int64
values = self.mixed_int.as_matrix(['A'])
- self.assertEqual(values.dtype, np.int32)
+ assert values.dtype == np.int32
values = self.mixed_int.as_matrix(['C'])
- self.assertEqual(values.dtype, np.uint8)
+ assert values.dtype == np.uint8
def test_constructor_with_convert(self):
# this is actually mostly a test of lib.maybe_convert_objects
@@ -220,8 +220,8 @@ def test_construction_with_mixed(self):
# mixed-type frames
self.mixed_frame['datetime'] = datetime.now()
self.mixed_frame['timedelta'] = timedelta(days=1, seconds=1)
- self.assertEqual(self.mixed_frame['datetime'].dtype, 'M8[ns]')
- self.assertEqual(self.mixed_frame['timedelta'].dtype, 'm8[ns]')
+ assert self.mixed_frame['datetime'].dtype == 'M8[ns]'
+ assert self.mixed_frame['timedelta'].dtype == 'm8[ns]'
result = self.mixed_frame.get_dtype_counts().sort_values()
expected = Series({'float64': 4,
'object': 1,
@@ -452,7 +452,7 @@ def test_convert_objects(self):
oops = self.mixed_frame.T.T
converted = oops._convert(datetime=True)
assert_frame_equal(converted, self.mixed_frame)
- self.assertEqual(converted['A'].dtype, np.float64)
+ assert converted['A'].dtype == np.float64
# force numeric conversion
self.mixed_frame['H'] = '1.'
@@ -464,19 +464,19 @@ def test_convert_objects(self):
self.mixed_frame['K'] = '1'
self.mixed_frame.loc[0:5, ['J', 'K']] = 'garbled'
converted = self.mixed_frame._convert(datetime=True, numeric=True)
- self.assertEqual(converted['H'].dtype, 'float64')
- self.assertEqual(converted['I'].dtype, 'int64')
- self.assertEqual(converted['J'].dtype, 'float64')
- self.assertEqual(converted['K'].dtype, 'float64')
- self.assertEqual(len(converted['J'].dropna()), l - 5)
- self.assertEqual(len(converted['K'].dropna()), l - 5)
+ assert converted['H'].dtype == 'float64'
+ assert converted['I'].dtype == 'int64'
+ assert converted['J'].dtype == 'float64'
+ assert converted['K'].dtype == 'float64'
+ assert len(converted['J'].dropna()) == l - 5
+ assert len(converted['K'].dropna()) == l - 5
# via astype
converted = self.mixed_frame.copy()
converted['H'] = converted['H'].astype('float64')
converted['I'] = converted['I'].astype('int64')
- self.assertEqual(converted['H'].dtype, 'float64')
- self.assertEqual(converted['I'].dtype, 'int64')
+ assert converted['H'].dtype == 'float64'
+ assert converted['I'].dtype == 'int64'
# via astype, but errors
converted = self.mixed_frame.copy()
diff --git a/pandas/tests/frame/test_combine_concat.py b/pandas/tests/frame/test_combine_concat.py
index 5452792def1ac..44f17faabe20d 100644
--- a/pandas/tests/frame/test_combine_concat.py
+++ b/pandas/tests/frame/test_combine_concat.py
@@ -303,7 +303,7 @@ def test_join_str_datetime(self):
tst = A.join(C, on='aa')
- self.assertEqual(len(tst.columns), 3)
+ assert len(tst.columns) == 3
def test_join_multiindex_leftright(self):
# GH 10741
@@ -538,7 +538,7 @@ def test_combine_first_mixed_bug(self):
"col5": ser3})
combined = frame1.combine_first(frame2)
- self.assertEqual(len(combined.columns), 5)
+ assert len(combined.columns) == 5
# gh 3016 (same as in update)
df = DataFrame([[1., 2., False, True], [4., 5., True, False]],
@@ -603,28 +603,28 @@ def test_combine_first_align_nan(self):
dfa = pd.DataFrame([[pd.Timestamp('2011-01-01'), 2]],
columns=['a', 'b'])
dfb = pd.DataFrame([[4], [5]], columns=['b'])
- self.assertEqual(dfa['a'].dtype, 'datetime64[ns]')
- self.assertEqual(dfa['b'].dtype, 'int64')
+ assert dfa['a'].dtype == 'datetime64[ns]'
+ assert dfa['b'].dtype == 'int64'
res = dfa.combine_first(dfb)
exp = pd.DataFrame({'a': [pd.Timestamp('2011-01-01'), pd.NaT],
'b': [2., 5.]}, columns=['a', 'b'])
tm.assert_frame_equal(res, exp)
- self.assertEqual(res['a'].dtype, 'datetime64[ns]')
+ assert res['a'].dtype == 'datetime64[ns]'
# ToDo: this must be int64
- self.assertEqual(res['b'].dtype, 'float64')
+ assert res['b'].dtype == 'float64'
res = dfa.iloc[:0].combine_first(dfb)
exp = pd.DataFrame({'a': [np.nan, np.nan],
'b': [4, 5]}, columns=['a', 'b'])
tm.assert_frame_equal(res, exp)
# ToDo: this must be datetime64
- self.assertEqual(res['a'].dtype, 'float64')
+ assert res['a'].dtype == 'float64'
# ToDo: this must be int64
- self.assertEqual(res['b'].dtype, 'int64')
+ assert res['b'].dtype == 'int64'
def test_combine_first_timezone(self):
- # GH 7630
+ # see gh-7630
data1 = pd.to_datetime('20100101 01:01').tz_localize('UTC')
df1 = pd.DataFrame(columns=['UTCdatetime', 'abc'],
data=data1,
@@ -644,10 +644,10 @@ def test_combine_first_timezone(self):
index=pd.date_range('20140627', periods=2,
freq='D'))
tm.assert_frame_equal(res, exp)
- self.assertEqual(res['UTCdatetime'].dtype, 'datetime64[ns, UTC]')
- self.assertEqual(res['abc'].dtype, 'datetime64[ns, UTC]')
+ assert res['UTCdatetime'].dtype == 'datetime64[ns, UTC]'
+ assert res['abc'].dtype == 'datetime64[ns, UTC]'
- # GH 10567
+ # see gh-10567
dts1 = pd.date_range('2015-01-01', '2015-01-05', tz='UTC')
df1 = pd.DataFrame({'DATE': dts1})
dts2 = pd.date_range('2015-01-03', '2015-01-05', tz='UTC')
@@ -655,7 +655,7 @@ def test_combine_first_timezone(self):
res = df1.combine_first(df2)
tm.assert_frame_equal(res, df1)
- self.assertEqual(res['DATE'].dtype, 'datetime64[ns, UTC]')
+ assert res['DATE'].dtype == 'datetime64[ns, UTC]'
dts1 = pd.DatetimeIndex(['2011-01-01', 'NaT', '2011-01-03',
'2011-01-04'], tz='US/Eastern')
@@ -680,7 +680,7 @@ def test_combine_first_timezone(self):
# if df1 doesn't have NaN, keep its dtype
res = df1.combine_first(df2)
tm.assert_frame_equal(res, df1)
- self.assertEqual(res['DATE'].dtype, 'datetime64[ns, US/Eastern]')
+ assert res['DATE'].dtype == 'datetime64[ns, US/Eastern]'
dts1 = pd.date_range('2015-01-01', '2015-01-02', tz='US/Eastern')
df1 = pd.DataFrame({'DATE': dts1})
@@ -693,7 +693,7 @@ def test_combine_first_timezone(self):
pd.Timestamp('2015-01-03')]
exp = pd.DataFrame({'DATE': exp_dts})
tm.assert_frame_equal(res, exp)
- self.assertEqual(res['DATE'].dtype, 'object')
+ assert res['DATE'].dtype == 'object'
def test_combine_first_timedelta(self):
data1 = pd.TimedeltaIndex(['1 day', 'NaT', '3 day', '4day'])
@@ -706,7 +706,7 @@ def test_combine_first_timedelta(self):
'11 day', '3 day', '4 day'])
exp = pd.DataFrame({'TD': exp_dts}, index=[1, 2, 3, 4, 5, 7])
tm.assert_frame_equal(res, exp)
- self.assertEqual(res['TD'].dtype, 'timedelta64[ns]')
+ assert res['TD'].dtype == 'timedelta64[ns]'
def test_combine_first_period(self):
data1 = pd.PeriodIndex(['2011-01', 'NaT', '2011-03',
@@ -722,7 +722,7 @@ def test_combine_first_period(self):
freq='M')
exp = pd.DataFrame({'P': exp_dts}, index=[1, 2, 3, 4, 5, 7])
tm.assert_frame_equal(res, exp)
- self.assertEqual(res['P'].dtype, 'object')
+ assert res['P'].dtype == 'object'
# different freq
dts2 = pd.PeriodIndex(['2012-01-01', '2012-01-02',
@@ -738,7 +738,7 @@ def test_combine_first_period(self):
pd.Period('2011-04', freq='M')]
exp = pd.DataFrame({'P': exp_dts}, index=[1, 2, 3, 4, 5, 7])
tm.assert_frame_equal(res, exp)
- self.assertEqual(res['P'].dtype, 'object')
+ assert res['P'].dtype == 'object'
def test_combine_first_int(self):
# GH14687 - integer series that do no align exactly
@@ -748,7 +748,7 @@ def test_combine_first_int(self):
res = df1.combine_first(df2)
tm.assert_frame_equal(res, df1)
- self.assertEqual(res['a'].dtype, 'int64')
+ assert res['a'].dtype == 'int64'
def test_concat_datetime_datetime64_frame(self):
# #2624
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index 588182eb30336..5b00ddc51da46 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -36,10 +36,10 @@ class TestDataFrameConstructors(tm.TestCase, TestData):
def test_constructor(self):
df = DataFrame()
- self.assertEqual(len(df.index), 0)
+ assert len(df.index) == 0
df = DataFrame(data={})
- self.assertEqual(len(df.index), 0)
+ assert len(df.index) == 0
def test_constructor_mixed(self):
index, data = tm.getMixedTypeDict()
@@ -48,11 +48,11 @@ def test_constructor_mixed(self):
indexed_frame = DataFrame(data, index=index) # noqa
unindexed_frame = DataFrame(data) # noqa
- self.assertEqual(self.mixed_frame['foo'].dtype, np.object_)
+ assert self.mixed_frame['foo'].dtype == np.object_
def test_constructor_cast_failure(self):
foo = DataFrame({'a': ['a', 'b', 'c']}, dtype=np.float64)
- self.assertEqual(foo['a'].dtype, object)
+ assert foo['a'].dtype == object
# GH 3010, constructing with odd arrays
df = DataFrame(np.ones((4, 2)))
@@ -76,29 +76,28 @@ def test_constructor_dtype_copy(self):
new_df = pd.DataFrame(orig_df, dtype=float, copy=True)
new_df['col1'] = 200.
- self.assertEqual(orig_df['col1'][0], 1.)
+ assert orig_df['col1'][0] == 1.
def test_constructor_dtype_nocast_view(self):
df = DataFrame([[1, 2]])
should_be_view = DataFrame(df, dtype=df[0].dtype)
should_be_view[0][0] = 99
- self.assertEqual(df.values[0, 0], 99)
+ assert df.values[0, 0] == 99
should_be_view = DataFrame(df.values, dtype=df[0].dtype)
should_be_view[0][0] = 97
- self.assertEqual(df.values[0, 0], 97)
+ assert df.values[0, 0] == 97
def test_constructor_dtype_list_data(self):
df = DataFrame([[1, '2'],
[None, 'a']], dtype=object)
assert df.loc[1, 0] is None
- self.assertEqual(df.loc[0, 1], '2')
+ assert df.loc[0, 1] == '2'
def test_constructor_list_frames(self):
-
- # GH 3243
+ # see gh-3243
result = DataFrame([DataFrame([])])
- self.assertEqual(result.shape, (1, 0))
+ assert result.shape == (1, 0)
result = DataFrame([DataFrame(dict(A=lrange(5)))])
assert isinstance(result.iloc[0, 0], DataFrame)
@@ -149,8 +148,8 @@ def test_constructor_complex_dtypes(self):
b = np.random.rand(10).astype(np.complex128)
df = DataFrame({'a': a, 'b': b})
- self.assertEqual(a.dtype, df.a.dtype)
- self.assertEqual(b.dtype, df.b.dtype)
+ assert a.dtype == df.a.dtype
+ assert b.dtype == df.b.dtype
def test_constructor_rec(self):
rec = self.frame.to_records(index=False)
@@ -175,7 +174,7 @@ def test_constructor_rec(self):
def test_constructor_bool(self):
df = DataFrame({0: np.ones(10, dtype=bool),
1: np.zeros(10, dtype=bool)})
- self.assertEqual(df.values.dtype, np.bool_)
+ assert df.values.dtype == np.bool_
def test_constructor_overflow_int64(self):
# see gh-14881
@@ -183,7 +182,7 @@ def test_constructor_overflow_int64(self):
dtype=np.uint64)
result = DataFrame({'a': values})
- self.assertEqual(result['a'].dtype, np.uint64)
+ assert result['a'].dtype == np.uint64
# see gh-2355
data_scores = [(6311132704823138710, 273), (2685045978526272070, 23),
@@ -194,7 +193,7 @@ def test_constructor_overflow_int64(self):
data = np.zeros((len(data_scores),), dtype=dtype)
data[:] = data_scores
df_crawls = DataFrame(data)
- self.assertEqual(df_crawls['uid'].dtype, np.uint64)
+ assert df_crawls['uid'].dtype == np.uint64
def test_constructor_ordereddict(self):
import random
@@ -203,7 +202,7 @@ def test_constructor_ordereddict(self):
random.shuffle(nums)
expected = ['A%d' % i for i in nums]
df = DataFrame(OrderedDict(zip(expected, [[0]] * nitems)))
- self.assertEqual(expected, list(df.columns))
+ assert expected == list(df.columns)
def test_constructor_dict(self):
frame = DataFrame({'col1': self.ts1,
@@ -378,14 +377,14 @@ def test_constructor_dict_cast(self):
'B': {'1': '1', '2': '2', '3': '3'},
}
frame = DataFrame(test_data, dtype=float)
- self.assertEqual(len(frame), 3)
- self.assertEqual(frame['B'].dtype, np.float64)
- self.assertEqual(frame['A'].dtype, np.float64)
+ assert len(frame) == 3
+ assert frame['B'].dtype == np.float64
+ assert frame['A'].dtype == np.float64
frame = DataFrame(test_data)
- self.assertEqual(len(frame), 3)
- self.assertEqual(frame['B'].dtype, np.object_)
- self.assertEqual(frame['A'].dtype, np.float64)
+ assert len(frame) == 3
+ assert frame['B'].dtype == np.object_
+ assert frame['A'].dtype == np.float64
# can't cast to float
test_data = {
@@ -393,9 +392,9 @@ def test_constructor_dict_cast(self):
'B': dict(zip(range(15), randn(15)))
}
frame = DataFrame(test_data, dtype=float)
- self.assertEqual(len(frame), 20)
- self.assertEqual(frame['A'].dtype, np.object_)
- self.assertEqual(frame['B'].dtype, np.float64)
+ assert len(frame) == 20
+ assert frame['A'].dtype == np.object_
+ assert frame['B'].dtype == np.float64
def test_constructor_dict_dont_upcast(self):
d = {'Col1': {'Row1': 'A String', 'Row2': np.nan}}
@@ -494,14 +493,14 @@ def test_constructor_period(self):
a = pd.PeriodIndex(['2012-01', 'NaT', '2012-04'], freq='M')
b = pd.PeriodIndex(['2012-02-01', '2012-03-01', 'NaT'], freq='D')
df = pd.DataFrame({'a': a, 'b': b})
- self.assertEqual(df['a'].dtype, 'object')
- self.assertEqual(df['b'].dtype, 'object')
+ assert df['a'].dtype == 'object'
+ assert df['b'].dtype == 'object'
# list of periods
df = pd.DataFrame({'a': a.asobject.tolist(),
'b': b.asobject.tolist()})
- self.assertEqual(df['a'].dtype, 'object')
- self.assertEqual(df['b'].dtype, 'object')
+ assert df['a'].dtype == 'object'
+ assert df['b'].dtype == 'object'
def test_nested_dict_frame_constructor(self):
rng = pd.period_range('1/1/2000', periods=5)
@@ -530,18 +529,18 @@ def _check_basic_constructor(self, empty):
# 2-D input
frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2])
- self.assertEqual(len(frame.index), 2)
- self.assertEqual(len(frame.columns), 3)
+ assert len(frame.index) == 2
+ assert len(frame.columns) == 3
# 1-D input
frame = DataFrame(empty((3,)), columns=['A'], index=[1, 2, 3])
- self.assertEqual(len(frame.index), 3)
- self.assertEqual(len(frame.columns), 1)
+ assert len(frame.index) == 3
+ assert len(frame.columns) == 1
# cast type
frame = DataFrame(mat, columns=['A', 'B', 'C'],
index=[1, 2], dtype=np.int64)
- self.assertEqual(frame.values.dtype, np.int64)
+ assert frame.values.dtype == np.int64
# wrong size axis labels
msg = r'Shape of passed values is \(3, 2\), indices imply \(3, 1\)'
@@ -569,16 +568,16 @@ def _check_basic_constructor(self, empty):
# 0-length axis
frame = DataFrame(empty((0, 3)))
- self.assertEqual(len(frame.index), 0)
+ assert len(frame.index) == 0
frame = DataFrame(empty((3, 0)))
- self.assertEqual(len(frame.columns), 0)
+ assert len(frame.columns) == 0
def test_constructor_ndarray(self):
self._check_basic_constructor(np.ones)
frame = DataFrame(['foo', 'bar'], index=[0, 1], columns=['A'])
- self.assertEqual(len(frame), 2)
+ assert len(frame) == 2
def test_constructor_maskedarray(self):
self._check_basic_constructor(ma.masked_all)
@@ -588,8 +587,8 @@ def test_constructor_maskedarray(self):
mat[0, 0] = 1.0
mat[1, 2] = 2.0
frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2])
- self.assertEqual(1.0, frame['A'][1])
- self.assertEqual(2.0, frame['C'][2])
+ assert 1.0 == frame['A'][1]
+ assert 2.0 == frame['C'][2]
# what is this even checking??
mat = ma.masked_all((2, 3), dtype=float)
@@ -602,66 +601,66 @@ def test_constructor_maskedarray_nonfloat(self):
# 2-D input
frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2])
- self.assertEqual(len(frame.index), 2)
- self.assertEqual(len(frame.columns), 3)
+ assert len(frame.index) == 2
+ assert len(frame.columns) == 3
assert np.all(~np.asarray(frame == frame))
# cast type
frame = DataFrame(mat, columns=['A', 'B', 'C'],
index=[1, 2], dtype=np.float64)
- self.assertEqual(frame.values.dtype, np.float64)
+ assert frame.values.dtype == np.float64
# Check non-masked values
mat2 = ma.copy(mat)
mat2[0, 0] = 1
mat2[1, 2] = 2
frame = DataFrame(mat2, columns=['A', 'B', 'C'], index=[1, 2])
- self.assertEqual(1, frame['A'][1])
- self.assertEqual(2, frame['C'][2])
+ assert 1 == frame['A'][1]
+ assert 2 == frame['C'][2]
# masked np.datetime64 stays (use lib.NaT as null)
mat = ma.masked_all((2, 3), dtype='M8[ns]')
# 2-D input
frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2])
- self.assertEqual(len(frame.index), 2)
- self.assertEqual(len(frame.columns), 3)
+ assert len(frame.index) == 2
+ assert len(frame.columns) == 3
assert isnull(frame).values.all()
# cast type
frame = DataFrame(mat, columns=['A', 'B', 'C'],
index=[1, 2], dtype=np.int64)
- self.assertEqual(frame.values.dtype, np.int64)
+ assert frame.values.dtype == np.int64
# Check non-masked values
mat2 = ma.copy(mat)
mat2[0, 0] = 1
mat2[1, 2] = 2
frame = DataFrame(mat2, columns=['A', 'B', 'C'], index=[1, 2])
- self.assertEqual(1, frame['A'].view('i8')[1])
- self.assertEqual(2, frame['C'].view('i8')[2])
+ assert 1 == frame['A'].view('i8')[1]
+ assert 2 == frame['C'].view('i8')[2]
# masked bool promoted to object
mat = ma.masked_all((2, 3), dtype=bool)
# 2-D input
frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2])
- self.assertEqual(len(frame.index), 2)
- self.assertEqual(len(frame.columns), 3)
+ assert len(frame.index) == 2
+ assert len(frame.columns) == 3
assert np.all(~np.asarray(frame == frame))
# cast type
frame = DataFrame(mat, columns=['A', 'B', 'C'],
index=[1, 2], dtype=object)
- self.assertEqual(frame.values.dtype, object)
+ assert frame.values.dtype == object
# Check non-masked values
mat2 = ma.copy(mat)
mat2[0, 0] = True
mat2[1, 2] = False
frame = DataFrame(mat2, columns=['A', 'B', 'C'], index=[1, 2])
- self.assertEqual(True, frame['A'][1])
- self.assertEqual(False, frame['C'][2])
+ assert frame['A'][1] is True
+ assert frame['C'][2] is False
def test_constructor_mrecarray(self):
# Ensure mrecarray produces frame identical to dict of masked arrays
@@ -708,34 +707,34 @@ def test_constructor_mrecarray(self):
def test_constructor_corner(self):
df = DataFrame(index=[])
- self.assertEqual(df.values.shape, (0, 0))
+ assert df.values.shape == (0, 0)
# empty but with specified dtype
df = DataFrame(index=lrange(10), columns=['a', 'b'], dtype=object)
- self.assertEqual(df.values.dtype, np.object_)
+ assert df.values.dtype == np.object_
# does not error but ends up float
df = DataFrame(index=lrange(10), columns=['a', 'b'], dtype=int)
- self.assertEqual(df.values.dtype, np.object_)
+ assert df.values.dtype == np.object_
# #1783 empty dtype object
df = DataFrame({}, columns=['foo', 'bar'])
- self.assertEqual(df.values.dtype, np.object_)
+ assert df.values.dtype == np.object_
df = DataFrame({'b': 1}, index=lrange(10), columns=list('abc'),
dtype=int)
- self.assertEqual(df.values.dtype, np.object_)
+ assert df.values.dtype == np.object_
def test_constructor_scalar_inference(self):
data = {'int': 1, 'bool': True,
'float': 3., 'complex': 4j, 'object': 'foo'}
df = DataFrame(data, index=np.arange(10))
- self.assertEqual(df['int'].dtype, np.int64)
- self.assertEqual(df['bool'].dtype, np.bool_)
- self.assertEqual(df['float'].dtype, np.float64)
- self.assertEqual(df['complex'].dtype, np.complex128)
- self.assertEqual(df['object'].dtype, np.object_)
+ assert df['int'].dtype == np.int64
+ assert df['bool'].dtype == np.bool_
+ assert df['float'].dtype == np.float64
+ assert df['complex'].dtype == np.complex128
+ assert df['object'].dtype == np.object_
def test_constructor_arrays_and_scalars(self):
df = DataFrame({'a': randn(10), 'b': True})
@@ -750,28 +749,28 @@ def test_constructor_DataFrame(self):
tm.assert_frame_equal(df, self.frame)
df_casted = DataFrame(self.frame, dtype=np.int64)
- self.assertEqual(df_casted.values.dtype, np.int64)
+ assert df_casted.values.dtype == np.int64
def test_constructor_more(self):
# used to be in test_matrix.py
arr = randn(10)
dm = DataFrame(arr, columns=['A'], index=np.arange(10))
- self.assertEqual(dm.values.ndim, 2)
+ assert dm.values.ndim == 2
arr = randn(0)
dm = DataFrame(arr)
- self.assertEqual(dm.values.ndim, 2)
- self.assertEqual(dm.values.ndim, 2)
+ assert dm.values.ndim == 2
+ assert dm.values.ndim == 2
# no data specified
dm = DataFrame(columns=['A', 'B'], index=np.arange(10))
- self.assertEqual(dm.values.shape, (10, 2))
+ assert dm.values.shape == (10, 2)
dm = DataFrame(columns=['A', 'B'])
- self.assertEqual(dm.values.shape, (0, 2))
+ assert dm.values.shape == (0, 2)
dm = DataFrame(index=np.arange(10))
- self.assertEqual(dm.values.shape, (10, 0))
+ assert dm.values.shape == (10, 0)
# corner, silly
# TODO: Fix this Exception to be better...
@@ -792,8 +791,8 @@ def test_constructor_more(self):
'B': np.ones(10, dtype=np.float64)},
index=np.arange(10))
- self.assertEqual(len(dm.columns), 2)
- self.assertEqual(dm.values.dtype, np.float64)
+ assert len(dm.columns) == 2
+ assert dm.values.dtype == np.float64
def test_constructor_empty_list(self):
df = DataFrame([], index=[])
@@ -818,7 +817,7 @@ def test_constructor_list_of_lists(self):
l = [[1, 'a'], [2, 'b']]
df = DataFrame(data=l, columns=["num", "str"])
assert is_integer_dtype(df['num'])
- self.assertEqual(df['str'].dtype, np.object_)
+ assert df['str'].dtype == np.object_
# GH 4851
# list of 0-dim ndarrays
@@ -1075,7 +1074,7 @@ def test_constructor_orient(self):
def test_constructor_Series_named(self):
a = Series([1, 2, 3], index=['a', 'b', 'c'], name='x')
df = DataFrame(a)
- self.assertEqual(df.columns[0], 'x')
+ assert df.columns[0] == 'x'
tm.assert_index_equal(df.index, a.index)
# ndarray like
@@ -1095,7 +1094,7 @@ def test_constructor_Series_named(self):
# #2234
a = Series([], name='x')
df = DataFrame(a)
- self.assertEqual(df.columns[0], 'x')
+ assert df.columns[0] == 'x'
# series with name and w/o
s1 = Series(arr, name='x')
@@ -1120,12 +1119,12 @@ def test_constructor_Series_differently_indexed(self):
df1 = DataFrame(s1, index=other_index)
exp1 = DataFrame(s1.reindex(other_index))
- self.assertEqual(df1.columns[0], 'x')
+ assert df1.columns[0] == 'x'
tm.assert_frame_equal(df1, exp1)
df2 = DataFrame(s2, index=other_index)
exp2 = DataFrame(s2.reindex(other_index))
- self.assertEqual(df2.columns[0], 0)
+ assert df2.columns[0] == 0
tm.assert_index_equal(df2.index, other_index)
tm.assert_frame_equal(df2, exp2)
@@ -1156,7 +1155,7 @@ def test_constructor_from_items(self):
columns=self.mixed_frame.columns,
orient='index')
tm.assert_frame_equal(recons, self.mixed_frame)
- self.assertEqual(recons['A'].dtype, np.float64)
+ assert recons['A'].dtype == np.float64
with tm.assert_raises_regex(TypeError,
"Must pass columns with "
@@ -1305,7 +1304,7 @@ def test_constructor_with_datetimes(self):
ind = date_range(start="2000-01-01", freq="D", periods=10)
datetimes = [ts.to_pydatetime() for ts in ind]
datetime_s = Series(datetimes)
- self.assertEqual(datetime_s.dtype, 'M8[ns]')
+ assert datetime_s.dtype == 'M8[ns]'
df = DataFrame({'datetime_s': datetime_s})
result = df.get_dtype_counts()
expected = Series({datetime64name: 1})
@@ -1331,12 +1330,12 @@ def test_constructor_with_datetimes(self):
dt = tz.localize(datetime(2012, 1, 1))
df = DataFrame({'End Date': dt}, index=[0])
- self.assertEqual(df.iat[0, 0], dt)
+ assert df.iat[0, 0] == dt
tm.assert_series_equal(df.dtypes, Series(
{'End Date': 'datetime64[ns, US/Eastern]'}))
df = DataFrame([{'End Date': dt}])
- self.assertEqual(df.iat[0, 0], dt)
+ assert df.iat[0, 0] == dt
tm.assert_series_equal(df.dtypes, Series(
{'End Date': 'datetime64[ns, US/Eastern]'}))
@@ -1511,7 +1510,7 @@ def f():
def test_constructor_lists_to_object_dtype(self):
# from #1074
d = DataFrame({'a': [np.nan, False]})
- self.assertEqual(d['a'].dtype, np.object_)
+ assert d['a'].dtype == np.object_
assert not d['a'][1]
def test_from_records_to_records(self):
@@ -1616,7 +1615,7 @@ def test_from_records_columns_not_modified(self):
df = DataFrame.from_records(tuples, columns=columns, index='a') # noqa
- self.assertEqual(columns, original_columns)
+ assert columns == original_columns
def test_from_records_decimal(self):
from decimal import Decimal
@@ -1624,10 +1623,10 @@ def test_from_records_decimal(self):
tuples = [(Decimal('1.5'),), (Decimal('2.5'),), (None,)]
df = DataFrame.from_records(tuples, columns=['a'])
- self.assertEqual(df['a'].dtype, object)
+ assert df['a'].dtype == object
df = DataFrame.from_records(tuples, columns=['a'], coerce_float=True)
- self.assertEqual(df['a'].dtype, np.float64)
+ assert df['a'].dtype == np.float64
assert np.isnan(df['a'].values[-1])
def test_from_records_duplicates(self):
@@ -1648,12 +1647,12 @@ def create_dict(order_id):
documents.append({'order_id': 10, 'quantity': 5})
result = DataFrame.from_records(documents, index='order_id')
- self.assertEqual(result.index.name, 'order_id')
+ assert result.index.name == 'order_id'
# MultiIndex
result = DataFrame.from_records(documents,
index=['order_id', 'quantity'])
- self.assertEqual(result.index.names, ('order_id', 'quantity'))
+ assert result.index.names == ('order_id', 'quantity')
def test_from_records_misc_brokenness(self):
# #2179
@@ -1702,13 +1701,13 @@ def test_from_records_empty_with_nonempty_fields_gh3682(self):
a = np.array([(1, 2)], dtype=[('id', np.int64), ('value', np.int64)])
df = DataFrame.from_records(a, index='id')
tm.assert_index_equal(df.index, Index([1], name='id'))
- self.assertEqual(df.index.name, 'id')
+ assert df.index.name == 'id'
tm.assert_index_equal(df.columns, Index(['value']))
b = np.array([], dtype=[('id', np.int64), ('value', np.int64)])
df = DataFrame.from_records(b, index='id')
tm.assert_index_equal(df.index, Index([], name='id'))
- self.assertEqual(df.index.name, 'id')
+ assert df.index.name == 'id'
def test_from_records_with_datetimes(self):
@@ -1804,13 +1803,13 @@ def test_from_records_sequencelike(self):
# empty case
result = DataFrame.from_records([], columns=['foo', 'bar', 'baz'])
- self.assertEqual(len(result), 0)
+ assert len(result) == 0
tm.assert_index_equal(result.columns,
pd.Index(['foo', 'bar', 'baz']))
result = DataFrame.from_records([])
- self.assertEqual(len(result), 0)
- self.assertEqual(len(result.columns), 0)
+ assert len(result) == 0
+ assert len(result.columns) == 0
def test_from_records_dictlike(self):
@@ -1891,8 +1890,8 @@ def test_from_records_len0_with_columns(self):
columns=['foo', 'bar'])
assert np.array_equal(result.columns, ['bar'])
- self.assertEqual(len(result), 0)
- self.assertEqual(result.index.name, 'foo')
+ assert len(result) == 0
+ assert result.index.name == 'foo'
def test_to_frame_with_falsey_names(self):
# GH 16114
diff --git a/pandas/tests/frame/test_convert_to.py b/pandas/tests/frame/test_convert_to.py
index d3a675e3dc1a3..353b4b873332e 100644
--- a/pandas/tests/frame/test_convert_to.py
+++ b/pandas/tests/frame/test_convert_to.py
@@ -22,19 +22,19 @@ def test_to_dict(self):
for k, v in compat.iteritems(test_data):
for k2, v2 in compat.iteritems(v):
- self.assertEqual(v2, recons_data[k][k2])
+ assert v2 == recons_data[k][k2]
recons_data = DataFrame(test_data).to_dict("l")
for k, v in compat.iteritems(test_data):
for k2, v2 in compat.iteritems(v):
- self.assertEqual(v2, recons_data[k][int(k2) - 1])
+ assert v2 == recons_data[k][int(k2) - 1]
recons_data = DataFrame(test_data).to_dict("s")
for k, v in compat.iteritems(test_data):
for k2, v2 in compat.iteritems(v):
- self.assertEqual(v2, recons_data[k][k2])
+ assert v2 == recons_data[k][k2]
recons_data = DataFrame(test_data).to_dict("sp")
expected_split = {'columns': ['A', 'B'], 'index': ['1', '2', '3'],
@@ -46,7 +46,7 @@ def test_to_dict(self):
{'A': 2.0, 'B': '2'},
{'A': np.nan, 'B': '3'}]
assert isinstance(recons_data, list)
- self.assertEqual(len(recons_data), 3)
+ assert len(recons_data) == 3
for l, r in zip(recons_data, expected_records):
tm.assert_dict_equal(l, r)
@@ -55,7 +55,7 @@ def test_to_dict(self):
for k, v in compat.iteritems(test_data):
for k2, v2 in compat.iteritems(v):
- self.assertEqual(v2, recons_data[k2][k])
+ assert v2 == recons_data[k2][k]
def test_to_dict_timestamp(self):
@@ -72,10 +72,10 @@ def test_to_dict_timestamp(self):
expected_records_mixed = [{'A': tsmp, 'B': 1},
{'A': tsmp, 'B': 2}]
- self.assertEqual(test_data.to_dict(orient='records'),
- expected_records)
- self.assertEqual(test_data_mixed.to_dict(orient='records'),
- expected_records_mixed)
+ assert (test_data.to_dict(orient='records') ==
+ expected_records)
+ assert (test_data_mixed.to_dict(orient='records') ==
+ expected_records_mixed)
expected_series = {
'A': Series([tsmp, tsmp], name='A'),
@@ -117,10 +117,10 @@ def test_to_records_dt64(self):
df = DataFrame([["one", "two", "three"],
["four", "five", "six"]],
index=date_range("2012-01-01", "2012-01-02"))
- self.assertEqual(df.to_records()['index'][0], df.index[0])
+ assert df.to_records()['index'][0] == df.index[0]
rs = df.to_records(convert_datetime64=False)
- self.assertEqual(rs['index'][0], df.index.values[0])
+ assert rs['index'][0] == df.index.values[0]
def test_to_records_with_multindex(self):
# GH3189
diff --git a/pandas/tests/frame/test_dtypes.py b/pandas/tests/frame/test_dtypes.py
index 427834b3dbf38..2d39db16dbd8d 100644
--- a/pandas/tests/frame/test_dtypes.py
+++ b/pandas/tests/frame/test_dtypes.py
@@ -28,14 +28,14 @@ def test_concat_empty_dataframe_dtypes(self):
df['c'] = df['c'].astype(np.float64)
result = pd.concat([df, df])
- self.assertEqual(result['a'].dtype, np.bool_)
- self.assertEqual(result['b'].dtype, np.int32)
- self.assertEqual(result['c'].dtype, np.float64)
+ assert result['a'].dtype == np.bool_
+ assert result['b'].dtype == np.int32
+ assert result['c'].dtype == np.float64
result = pd.concat([df, df.astype(np.float64)])
- self.assertEqual(result['a'].dtype, np.object_)
- self.assertEqual(result['b'].dtype, np.float64)
- self.assertEqual(result['c'].dtype, np.float64)
+ assert result['a'].dtype == np.object_
+ assert result['b'].dtype == np.float64
+ assert result['c'].dtype == np.float64
def test_empty_frame_dtypes_ftypes(self):
empty_df = pd.DataFrame()
@@ -326,9 +326,8 @@ def test_astype(self):
# mixed casting
def _check_cast(df, v):
- self.assertEqual(
- list(set([s.dtype.name
- for _, s in compat.iteritems(df)]))[0], v)
+ assert (list(set([s.dtype.name for
+ _, s in compat.iteritems(df)]))[0] == v)
mn = self.all_mixed._get_numeric_data().copy()
mn['little_float'] = np.array(12345., dtype='float16')
diff --git a/pandas/tests/frame/test_indexing.py b/pandas/tests/frame/test_indexing.py
index 8f6128ad4e525..cd1529d04c991 100644
--- a/pandas/tests/frame/test_indexing.py
+++ b/pandas/tests/frame/test_indexing.py
@@ -113,7 +113,7 @@ def test_getitem_list(self):
assert_frame_equal(result, expected)
assert_frame_equal(result2, expected)
- self.assertEqual(result.columns.name, 'foo')
+ assert result.columns.name == 'foo'
with tm.assert_raises_regex(KeyError, 'not in index'):
self.frame[['B', 'A', 'food']]
@@ -128,7 +128,7 @@ def test_getitem_list(self):
result = df[[('foo', 'bar'), ('baz', 'qux')]]
expected = df.iloc[:, :2]
assert_frame_equal(result, expected)
- self.assertEqual(result.columns.names, ['sth', 'sth2'])
+ assert result.columns.names == ['sth', 'sth2']
def test_getitem_callable(self):
# GH 12533
@@ -282,7 +282,7 @@ def test_getitem_boolean(self):
assert_frame_equal(bif, bifw, check_dtype=False)
for c in df.columns:
if bif[c].dtype != bifw[c].dtype:
- self.assertEqual(bif[c].dtype, df[c].dtype)
+ assert bif[c].dtype == df[c].dtype
def test_getitem_boolean_casting(self):
@@ -404,8 +404,8 @@ def test_getitem_setitem_ix_negative_integers(self):
with catch_warnings(record=True):
assert_series_equal(a.ix[-1], a.ix[-2], check_names=False)
- self.assertEqual(a.ix[-1].name, 'T')
- self.assertEqual(a.ix[-2].name, 'S')
+ assert a.ix[-1].name == 'T'
+ assert a.ix[-2].name == 'S'
def test_getattr(self):
assert_series_equal(self.frame.A, self.frame['A'])
@@ -424,8 +424,8 @@ def test_setitem(self):
self.frame['col5'] = series
assert 'col5' in self.frame
- self.assertEqual(len(series), 15)
- self.assertEqual(len(self.frame), 30)
+ assert len(series) == 15
+ assert len(self.frame) == 30
exp = np.ravel(np.column_stack((series.values, [np.nan] * 15)))
exp = Series(exp, index=self.frame.index, name='col5')
@@ -459,13 +459,13 @@ def test_setitem(self):
def f():
smaller['col10'] = ['1', '2']
pytest.raises(com.SettingWithCopyError, f)
- self.assertEqual(smaller['col10'].dtype, np.object_)
+ assert smaller['col10'].dtype == np.object_
assert (smaller['col10'] == ['1', '2']).all()
# with a dtype
for dtype in ['int32', 'int64', 'float32', 'float64']:
self.frame[dtype] = np.array(arr, dtype=dtype)
- self.assertEqual(self.frame[dtype].dtype.name, dtype)
+ assert self.frame[dtype].dtype.name == dtype
# dtype changing GH4204
df = DataFrame([[0, 0]])
@@ -542,13 +542,13 @@ def test_setitem_boolean(self):
def test_setitem_cast(self):
self.frame['D'] = self.frame['D'].astype('i8')
- self.assertEqual(self.frame['D'].dtype, np.int64)
+ assert self.frame['D'].dtype == np.int64
# #669, should not cast?
# this is now set to int64, which means a replacement of the column to
# the value dtype (and nothing to do with the existing dtype)
self.frame['B'] = 0
- self.assertEqual(self.frame['B'].dtype, np.int64)
+ assert self.frame['B'].dtype == np.int64
# cast if pass array of course
self.frame['B'] = np.arange(len(self.frame))
@@ -556,18 +556,18 @@ def test_setitem_cast(self):
self.frame['foo'] = 'bar'
self.frame['foo'] = 0
- self.assertEqual(self.frame['foo'].dtype, np.int64)
+ assert self.frame['foo'].dtype == np.int64
self.frame['foo'] = 'bar'
self.frame['foo'] = 2.5
- self.assertEqual(self.frame['foo'].dtype, np.float64)
+ assert self.frame['foo'].dtype == np.float64
self.frame['something'] = 0
- self.assertEqual(self.frame['something'].dtype, np.int64)
+ assert self.frame['something'].dtype == np.int64
self.frame['something'] = 2
- self.assertEqual(self.frame['something'].dtype, np.int64)
+ assert self.frame['something'].dtype == np.int64
self.frame['something'] = 2.5
- self.assertEqual(self.frame['something'].dtype, np.float64)
+ assert self.frame['something'].dtype == np.float64
# GH 7704
# dtype conversion on setting
@@ -581,9 +581,9 @@ def test_setitem_cast(self):
# Test that data type is preserved . #5782
df = DataFrame({'one': np.arange(6, dtype=np.int8)})
df.loc[1, 'one'] = 6
- self.assertEqual(df.dtypes.one, np.dtype(np.int8))
+ assert df.dtypes.one == np.dtype(np.int8)
df.one = np.int8(7)
- self.assertEqual(df.dtypes.one, np.dtype(np.int8))
+ assert df.dtypes.one == np.dtype(np.int8)
def test_setitem_boolean_column(self):
expected = self.frame.copy()
@@ -602,7 +602,7 @@ def test_setitem_corner(self):
del df['B']
df['B'] = [1., 2., 3.]
assert 'B' in df
- self.assertEqual(len(df.columns), 2)
+ assert len(df.columns) == 2
df['A'] = 'beginning'
df['E'] = 'foo'
@@ -614,29 +614,29 @@ def test_setitem_corner(self):
dm = DataFrame(index=self.frame.index)
dm['A'] = 'foo'
dm['B'] = 'bar'
- self.assertEqual(len(dm.columns), 2)
- self.assertEqual(dm.values.dtype, np.object_)
+ assert len(dm.columns) == 2
+ assert dm.values.dtype == np.object_
# upcast
dm['C'] = 1
- self.assertEqual(dm['C'].dtype, np.int64)
+ assert dm['C'].dtype == np.int64
dm['E'] = 1.
- self.assertEqual(dm['E'].dtype, np.float64)
+ assert dm['E'].dtype == np.float64
# set existing column
dm['A'] = 'bar'
- self.assertEqual('bar', dm['A'][0])
+ assert 'bar' == dm['A'][0]
dm = DataFrame(index=np.arange(3))
dm['A'] = 1
dm['foo'] = 'bar'
del dm['foo']
dm['foo'] = 'bar'
- self.assertEqual(dm['foo'].dtype, np.object_)
+ assert dm['foo'].dtype == np.object_
dm['coercable'] = ['1', '2', '3']
- self.assertEqual(dm['coercable'].dtype, np.object_)
+ assert dm['coercable'].dtype == np.object_
def test_setitem_corner2(self):
data = {"title": ['foobar', 'bar', 'foobar'] + ['foobar'] * 17,
@@ -648,8 +648,8 @@ def test_setitem_corner2(self):
df.loc[ix, ['title']] = 'foobar'
df.loc[ix, ['cruft']] = 0
- self.assertEqual(df.loc[1, 'title'], 'foobar')
- self.assertEqual(df.loc[1, 'cruft'], 0)
+ assert df.loc[1, 'title'] == 'foobar'
+ assert df.loc[1, 'cruft'] == 0
def test_setitem_ambig(self):
# Difficulties with mixed-type data
@@ -731,10 +731,10 @@ def test_getitem_empty_frame_with_boolean(self):
def test_delitem_corner(self):
f = self.frame.copy()
del f['D']
- self.assertEqual(len(f.columns), 3)
+ assert len(f.columns) == 3
pytest.raises(KeyError, f.__delitem__, 'D')
del f['B']
- self.assertEqual(len(f.columns), 2)
+ assert len(f.columns) == 2
def test_getitem_fancy_2d(self):
f = self.frame
@@ -781,13 +781,13 @@ def test_slice_floats(self):
df = DataFrame(np.random.rand(3, 2), index=index)
s1 = df.loc[52195.1:52196.5]
- self.assertEqual(len(s1), 2)
+ assert len(s1) == 2
s1 = df.loc[52195.1:52196.6]
- self.assertEqual(len(s1), 2)
+ assert len(s1) == 2
s1 = df.loc[52195.1:52198.9]
- self.assertEqual(len(s1), 3)
+ assert len(s1) == 3
def test_getitem_fancy_slice_integers_step(self):
df = DataFrame(np.random.randn(10, 5))
@@ -930,7 +930,7 @@ def test_setitem_fancy_2d(self):
def test_fancy_getitem_slice_mixed(self):
sliced = self.mixed_frame.iloc[:, -3:]
- self.assertEqual(sliced['D'].dtype, np.float64)
+ assert sliced['D'].dtype == np.float64
# get view with single block
# setting it triggers setting with copy
@@ -1282,7 +1282,7 @@ def test_getitem_fancy_scalar(self):
for col in f.columns:
ts = f[col]
for idx in f.index[::5]:
- self.assertEqual(ix[idx, col], ts[idx])
+ assert ix[idx, col] == ts[idx]
def test_setitem_fancy_scalar(self):
f = self.frame
@@ -1394,17 +1394,17 @@ def test_getitem_setitem_float_labels(self):
result = df.loc[1.5:4]
expected = df.reindex([1.5, 2, 3, 4])
assert_frame_equal(result, expected)
- self.assertEqual(len(result), 4)
+ assert len(result) == 4
result = df.loc[4:5]
expected = df.reindex([4, 5]) # reindex with int
assert_frame_equal(result, expected, check_index_type=False)
- self.assertEqual(len(result), 2)
+ assert len(result) == 2
result = df.loc[4:5]
expected = df.reindex([4.0, 5.0]) # reindex with float
assert_frame_equal(result, expected)
- self.assertEqual(len(result), 2)
+ assert len(result) == 2
# loc_float changes this to work properly
result = df.loc[1:2]
@@ -1425,7 +1425,7 @@ def test_getitem_setitem_float_labels(self):
result = df.iloc[4:5]
expected = df.reindex([5.0])
assert_frame_equal(result, expected)
- self.assertEqual(len(result), 1)
+ assert len(result) == 1
cp = df.copy()
@@ -1449,22 +1449,22 @@ def f():
result = df.loc[1.0:5]
expected = df
assert_frame_equal(result, expected)
- self.assertEqual(len(result), 5)
+ assert len(result) == 5
result = df.loc[1.1:5]
expected = df.reindex([2.5, 3.5, 4.5, 5.0])
assert_frame_equal(result, expected)
- self.assertEqual(len(result), 4)
+ assert len(result) == 4
result = df.loc[4.51:5]
expected = df.reindex([5.0])
assert_frame_equal(result, expected)
- self.assertEqual(len(result), 1)
+ assert len(result) == 1
result = df.loc[1.0:5.0]
expected = df.reindex([1.0, 2.5, 3.5, 4.5, 5.0])
assert_frame_equal(result, expected)
- self.assertEqual(len(result), 5)
+ assert len(result) == 5
cp = df.copy()
cp.loc[1.0:5.0] = 0
@@ -1621,7 +1621,7 @@ def test_getitem_list_duplicates(self):
df.columns.name = 'foo'
result = df[['B', 'C']]
- self.assertEqual(result.columns.name, 'foo')
+ assert result.columns.name == 'foo'
expected = df.iloc[:, 2:]
assert_frame_equal(result, expected)
@@ -1631,7 +1631,7 @@ def test_get_value(self):
for col in self.frame.columns:
result = self.frame.get_value(idx, col)
expected = self.frame[col][idx]
- self.assertEqual(result, expected)
+ assert result == expected
def test_lookup(self):
def alt(df, rows, cols, dtype):
@@ -1657,7 +1657,7 @@ def testit(df):
df['mask'] = df.lookup(df.index, 'mask_' + df['label'])
exp_mask = alt(df, df.index, 'mask_' + df['label'], dtype=np.bool_)
tm.assert_series_equal(df['mask'], pd.Series(exp_mask, name='mask'))
- self.assertEqual(df['mask'].dtype, np.bool_)
+ assert df['mask'].dtype == np.bool_
with pytest.raises(KeyError):
self.frame.lookup(['xyz'], ['A'])
@@ -1672,25 +1672,25 @@ def test_set_value(self):
for idx in self.frame.index:
for col in self.frame.columns:
self.frame.set_value(idx, col, 1)
- self.assertEqual(self.frame[col][idx], 1)
+ assert self.frame[col][idx] == 1
def test_set_value_resize(self):
res = self.frame.set_value('foobar', 'B', 0)
assert res is self.frame
- self.assertEqual(res.index[-1], 'foobar')
- self.assertEqual(res.get_value('foobar', 'B'), 0)
+ assert res.index[-1] == 'foobar'
+ assert res.get_value('foobar', 'B') == 0
self.frame.loc['foobar', 'qux'] = 0
- self.assertEqual(self.frame.get_value('foobar', 'qux'), 0)
+ assert self.frame.get_value('foobar', 'qux') == 0
res = self.frame.copy()
res3 = res.set_value('foobar', 'baz', 'sam')
- self.assertEqual(res3['baz'].dtype, np.object_)
+ assert res3['baz'].dtype == np.object_
res = self.frame.copy()
res3 = res.set_value('foobar', 'baz', True)
- self.assertEqual(res3['baz'].dtype, np.object_)
+ assert res3['baz'].dtype == np.object_
res = self.frame.copy()
res3 = res.set_value('foobar', 'baz', 5)
@@ -1705,24 +1705,24 @@ def test_set_value_with_index_dtype_change(self):
# so column is not created
df = df_orig.copy()
df.set_value('C', 2, 1.0)
- self.assertEqual(list(df.index), list(df_orig.index) + ['C'])
- # self.assertEqual(list(df.columns), list(df_orig.columns) + [2])
+ assert list(df.index) == list(df_orig.index) + ['C']
+ # assert list(df.columns) == list(df_orig.columns) + [2]
df = df_orig.copy()
df.loc['C', 2] = 1.0
- self.assertEqual(list(df.index), list(df_orig.index) + ['C'])
- # self.assertEqual(list(df.columns), list(df_orig.columns) + [2])
+ assert list(df.index) == list(df_orig.index) + ['C']
+ # assert list(df.columns) == list(df_orig.columns) + [2]
# create both new
df = df_orig.copy()
df.set_value('C', 'D', 1.0)
- self.assertEqual(list(df.index), list(df_orig.index) + ['C'])
- self.assertEqual(list(df.columns), list(df_orig.columns) + ['D'])
+ assert list(df.index) == list(df_orig.index) + ['C']
+ assert list(df.columns) == list(df_orig.columns) + ['D']
df = df_orig.copy()
df.loc['C', 'D'] = 1.0
- self.assertEqual(list(df.index), list(df_orig.index) + ['C'])
- self.assertEqual(list(df.columns), list(df_orig.columns) + ['D'])
+ assert list(df.index) == list(df_orig.index) + ['C']
+ assert list(df.columns) == list(df_orig.columns) + ['D']
def test_get_set_value_no_partial_indexing(self):
# partial w/ MultiIndex raise exception
@@ -1874,7 +1874,7 @@ def test_iat(self):
for j, col in enumerate(self.frame.columns):
result = self.frame.iat[i, j]
expected = self.frame.at[row, col]
- self.assertEqual(result, expected)
+ assert result == expected
def test_nested_exception(self):
# Ignore the strange way of triggering the problem
@@ -1941,7 +1941,7 @@ def test_reindex_frame_add_nat(self):
def test_set_dataframe_column_ns_dtype(self):
x = DataFrame([datetime.now(), datetime.now()])
- self.assertEqual(x[0].dtype, np.dtype('M8[ns]'))
+ assert x[0].dtype == np.dtype('M8[ns]')
def test_non_monotonic_reindex_methods(self):
dr = pd.date_range('2013-08-01', periods=6, freq='B')
@@ -2095,13 +2095,13 @@ def test_setitem_with_unaligned_tz_aware_datetime_column(self):
assert_series_equal(df['dates'], column)
def test_setitem_datetime_coercion(self):
- # GH 1048
+ # gh-1048
df = pd.DataFrame({'c': [pd.Timestamp('2010-10-01')] * 3})
df.loc[0:1, 'c'] = np.datetime64('2008-08-08')
- self.assertEqual(pd.Timestamp('2008-08-08'), df.loc[0, 'c'])
- self.assertEqual(pd.Timestamp('2008-08-08'), df.loc[1, 'c'])
+ assert pd.Timestamp('2008-08-08') == df.loc[0, 'c']
+ assert pd.Timestamp('2008-08-08') == df.loc[1, 'c']
df.loc[2, 'c'] = date(2005, 5, 5)
- self.assertEqual(pd.Timestamp('2005-05-05'), df.loc[2, 'c'])
+ assert pd.Timestamp('2005-05-05') == df.loc[2, 'c']
def test_setitem_datetimelike_with_inference(self):
# GH 7592
@@ -2139,14 +2139,14 @@ def test_at_time_between_time_datetimeindex(self):
expected2 = df.iloc[ainds]
assert_frame_equal(result, expected)
assert_frame_equal(result, expected2)
- self.assertEqual(len(result), 4)
+ assert len(result) == 4
result = df.between_time(bkey.start, bkey.stop)
expected = df.loc[bkey]
expected2 = df.iloc[binds]
assert_frame_equal(result, expected)
assert_frame_equal(result, expected2)
- self.assertEqual(len(result), 12)
+ assert len(result) == 12
result = df.copy()
result.loc[akey] = 0
@@ -2179,7 +2179,7 @@ def test_xs(self):
if np.isnan(value):
assert np.isnan(self.frame[item][idx])
else:
- self.assertEqual(value, self.frame[item][idx])
+ assert value == self.frame[item][idx]
# mixed-type xs
test_data = {
@@ -2188,9 +2188,9 @@ def test_xs(self):
}
frame = DataFrame(test_data)
xs = frame.xs('1')
- self.assertEqual(xs.dtype, np.object_)
- self.assertEqual(xs['A'], 1)
- self.assertEqual(xs['B'], '1')
+ assert xs.dtype == np.object_
+ assert xs['A'] == 1
+ assert xs['B'] == '1'
with pytest.raises(KeyError):
self.tsframe.xs(self.tsframe.index[0] - BDay())
@@ -2266,10 +2266,10 @@ def test_index_namedtuple(self):
with catch_warnings(record=True):
result = df.ix[IndexType("foo", "bar")]["A"]
- self.assertEqual(result, 1)
+ assert result == 1
result = df.loc[IndexType("foo", "bar")]["A"]
- self.assertEqual(result, 1)
+ assert result == 1
def test_boolean_indexing(self):
idx = lrange(3)
@@ -2442,7 +2442,7 @@ def _check_set(df, cond, check_dtypes=True):
for k, v in compat.iteritems(df.dtypes):
if issubclass(v.type, np.integer) and not cond[k].all():
v = np.dtype('float64')
- self.assertEqual(dfi[k].dtype, v)
+ assert dfi[k].dtype == v
for df in [default_frame, self.mixed_frame, self.mixed_float,
self.mixed_int]:
@@ -3011,7 +3011,7 @@ def test_set_reset(self):
# set/reset
df = DataFrame({'A': [0, 1, 2]}, index=idx)
result = df.reset_index()
- self.assertEqual(result['foo'].dtype, np.dtype('uint64'))
+ assert result['foo'].dtype == np.dtype('uint64')
df = result.set_index('foo')
tm.assert_index_equal(df.index, idx)
diff --git a/pandas/tests/frame/test_missing.py b/pandas/tests/frame/test_missing.py
index 17f12679ae92e..ffba141ddc15d 100644
--- a/pandas/tests/frame/test_missing.py
+++ b/pandas/tests/frame/test_missing.py
@@ -493,7 +493,7 @@ def test_fillna_col_reordering(self):
data = np.random.rand(20, 5)
df = DataFrame(index=lrange(20), columns=cols, data=data)
filled = df.fillna(method='ffill')
- self.assertEqual(df.columns.tolist(), filled.columns.tolist())
+ assert df.columns.tolist() == filled.columns.tolist()
def test_fill_corner(self):
mf = self.mixed_frame
diff --git a/pandas/tests/frame/test_mutate_columns.py b/pandas/tests/frame/test_mutate_columns.py
index fbd1b7be3e431..ac76970aaa901 100644
--- a/pandas/tests/frame/test_mutate_columns.py
+++ b/pandas/tests/frame/test_mutate_columns.py
@@ -150,7 +150,7 @@ def test_insert(self):
df.columns.name = 'some_name'
# preserve columns name field
df.insert(0, 'baz', df['c'])
- self.assertEqual(df.columns.name, 'some_name')
+ assert df.columns.name == 'some_name'
# GH 13522
df = DataFrame(index=['A', 'B', 'C'])
@@ -197,7 +197,7 @@ def test_pop(self):
self.frame['foo'] = 'bar'
self.frame.pop('foo')
assert 'foo' not in self.frame
- # TODO self.assertEqual(self.frame.columns.name, 'baz')
+ # TODO assert self.frame.columns.name == 'baz'
# gh-10912: inplace ops cause caching issue
a = DataFrame([[1, 2, 3], [4, 5, 6]], columns=[
@@ -219,12 +219,12 @@ def test_pop_non_unique_cols(self):
df.columns = ["a", "b", "a"]
res = df.pop("a")
- self.assertEqual(type(res), DataFrame)
- self.assertEqual(len(res), 2)
- self.assertEqual(len(df.columns), 1)
+ assert type(res) == DataFrame
+ assert len(res) == 2
+ assert len(df.columns) == 1
assert "b" in df.columns
assert "a" not in df.columns
- self.assertEqual(len(df.index), 2)
+ assert len(df.index) == 2
def test_insert_column_bug_4032(self):
diff --git a/pandas/tests/frame/test_nonunique_indexes.py b/pandas/tests/frame/test_nonunique_indexes.py
index 61dd92fcd1fab..4bc0176b570e3 100644
--- a/pandas/tests/frame/test_nonunique_indexes.py
+++ b/pandas/tests/frame/test_nonunique_indexes.py
@@ -425,8 +425,8 @@ def test_columns_with_dups(self):
columns=df_float.columns)
df = pd.concat([df_float, df_int, df_bool, df_object, df_dt], axis=1)
- self.assertEqual(len(df._data._blknos), len(df.columns))
- self.assertEqual(len(df._data._blklocs), len(df.columns))
+ assert len(df._data._blknos) == len(df.columns)
+ assert len(df._data._blklocs) == len(df.columns)
# testing iloc
for i in range(len(df.columns)):
diff --git a/pandas/tests/frame/test_operators.py b/pandas/tests/frame/test_operators.py
index efe167297627a..9083b7952909e 100644
--- a/pandas/tests/frame/test_operators.py
+++ b/pandas/tests/frame/test_operators.py
@@ -41,7 +41,7 @@ def test_operators(self):
for idx, val in compat.iteritems(series):
origVal = self.frame[col][idx] * 2
if not np.isnan(val):
- self.assertEqual(val, origVal)
+ assert val == origVal
else:
assert np.isnan(origVal)
@@ -49,7 +49,7 @@ def test_operators(self):
for idx, val in compat.iteritems(series):
origVal = self.frame[col][idx] + colSeries[col]
if not np.isnan(val):
- self.assertEqual(val, origVal)
+ assert val == origVal
else:
assert np.isnan(origVal)
@@ -278,14 +278,14 @@ def _check_bin_op(op):
result = op(df1, df2)
expected = DataFrame(op(df1.values, df2.values), index=df1.index,
columns=df1.columns)
- self.assertEqual(result.values.dtype, np.bool_)
+ assert result.values.dtype == np.bool_
assert_frame_equal(result, expected)
def _check_unary_op(op):
result = op(df1)
expected = DataFrame(op(df1.values), index=df1.index,
columns=df1.columns)
- self.assertEqual(result.values.dtype, np.bool_)
+ assert result.values.dtype == np.bool_
assert_frame_equal(result, expected)
df1 = {'a': {'a': True, 'b': False, 'c': False, 'd': True, 'e': True},
@@ -861,9 +861,9 @@ def test_combineSeries(self):
for key, col in compat.iteritems(self.tsframe):
result = col + ts
assert_series_equal(added[key], result, check_names=False)
- self.assertEqual(added[key].name, key)
+ assert added[key].name == key
if col.name == ts.name:
- self.assertEqual(result.name, 'A')
+ assert result.name == 'A'
else:
assert result.name is None
@@ -891,7 +891,7 @@ def test_combineSeries(self):
# empty but with non-empty index
frame = self.tsframe[:1].reindex(columns=[])
result = frame.mul(ts, axis='index')
- self.assertEqual(len(result), len(ts))
+ assert len(result) == len(ts)
def test_combineFunc(self):
result = self.frame * 2
@@ -906,7 +906,7 @@ def test_combineFunc(self):
result = self.empty * 2
assert result.index is self.empty.index
- self.assertEqual(len(result.columns), 0)
+ assert len(result.columns) == 0
def test_comparisons(self):
df1 = tm.makeTimeDataFrame()
diff --git a/pandas/tests/frame/test_period.py b/pandas/tests/frame/test_period.py
index 0ca37de6bf2d4..826ece2ed2c9b 100644
--- a/pandas/tests/frame/test_period.py
+++ b/pandas/tests/frame/test_period.py
@@ -37,8 +37,8 @@ def test_frame_setitem(self):
df['Index'] = rng
rs = Index(df['Index'])
tm.assert_index_equal(rs, rng, check_names=False)
- self.assertEqual(rs.name, 'Index')
- self.assertEqual(rng.name, 'index')
+ assert rs.name == 'Index'
+ assert rng.name == 'index'
rs = df.reset_index().set_index('index')
assert isinstance(rs.index, PeriodIndex)
@@ -117,8 +117,8 @@ def _get_with_delta(delta, freq='A-DEC'):
tm.assert_numpy_array_equal(result1.columns.asi8, expected.asi8)
tm.assert_numpy_array_equal(result2.columns.asi8, expected.asi8)
# PeriodIndex.to_timestamp always use 'infer'
- self.assertEqual(result1.columns.freqstr, 'AS-JAN')
- self.assertEqual(result2.columns.freqstr, 'AS-JAN')
+ assert result1.columns.freqstr == 'AS-JAN'
+ assert result2.columns.freqstr == 'AS-JAN'
def test_frame_index_to_string(self):
index = PeriodIndex(['2011-1', '2011-2', '2011-3'], freq='M')
diff --git a/pandas/tests/frame/test_quantile.py b/pandas/tests/frame/test_quantile.py
index 406f8107952ef..33f72cde1b9a3 100644
--- a/pandas/tests/frame/test_quantile.py
+++ b/pandas/tests/frame/test_quantile.py
@@ -23,12 +23,12 @@ def test_quantile(self):
from numpy import percentile
q = self.tsframe.quantile(0.1, axis=0)
- self.assertEqual(q['A'], percentile(self.tsframe['A'], 10))
+ assert q['A'] == percentile(self.tsframe['A'], 10)
tm.assert_index_equal(q.index, self.tsframe.columns)
q = self.tsframe.quantile(0.9, axis=1)
- self.assertEqual(q['2000-01-17'],
- percentile(self.tsframe.loc['2000-01-17'], 90))
+ assert (q['2000-01-17'] ==
+ percentile(self.tsframe.loc['2000-01-17'], 90))
tm.assert_index_equal(q.index, self.tsframe.index)
# test degenerate case
@@ -102,7 +102,7 @@ def test_quantile_axis_parameter(self):
pytest.raises(ValueError, df.quantile, 0.1, axis="column")
def test_quantile_interpolation(self):
- # GH #10174
+ # see gh-10174
if _np_version_under1p9:
pytest.skip("Numpy version under 1.9")
@@ -110,32 +110,32 @@ def test_quantile_interpolation(self):
# interpolation = linear (default case)
q = self.tsframe.quantile(0.1, axis=0, interpolation='linear')
- self.assertEqual(q['A'], percentile(self.tsframe['A'], 10))
+ assert q['A'] == percentile(self.tsframe['A'], 10)
q = self.intframe.quantile(0.1)
- self.assertEqual(q['A'], percentile(self.intframe['A'], 10))
+ assert q['A'] == percentile(self.intframe['A'], 10)
# test with and without interpolation keyword
q1 = self.intframe.quantile(0.1)
- self.assertEqual(q1['A'], np.percentile(self.intframe['A'], 10))
- assert_series_equal(q, q1)
+ assert q1['A'] == np.percentile(self.intframe['A'], 10)
+ tm.assert_series_equal(q, q1)
# interpolation method other than default linear
df = DataFrame({"A": [1, 2, 3], "B": [2, 3, 4]}, index=[1, 2, 3])
result = df.quantile(.5, axis=1, interpolation='nearest')
expected = Series([1, 2, 3], index=[1, 2, 3], name=0.5)
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
# cross-check interpolation=nearest results in original dtype
exp = np.percentile(np.array([[1, 2, 3], [2, 3, 4]]), .5,
axis=0, interpolation='nearest')
expected = Series(exp, index=[1, 2, 3], name=0.5, dtype='int64')
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
# float
df = DataFrame({"A": [1., 2., 3.], "B": [2., 3., 4.]}, index=[1, 2, 3])
result = df.quantile(.5, axis=1, interpolation='nearest')
expected = Series([1., 2., 3.], index=[1, 2, 3], name=0.5)
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
exp = np.percentile(np.array([[1., 2., 3.], [2., 3., 4.]]), .5,
axis=0, interpolation='nearest')
expected = Series(exp, index=[1, 2, 3], name=0.5, dtype='float64')
@@ -167,7 +167,7 @@ def test_quantile_interpolation(self):
assert_frame_equal(result, expected)
def test_quantile_interpolation_np_lt_1p9(self):
- # GH #10174
+ # see gh-10174
if not _np_version_under1p9:
pytest.skip("Numpy version is greater than 1.9")
@@ -175,33 +175,33 @@ def test_quantile_interpolation_np_lt_1p9(self):
# interpolation = linear (default case)
q = self.tsframe.quantile(0.1, axis=0, interpolation='linear')
- self.assertEqual(q['A'], percentile(self.tsframe['A'], 10))
+ assert q['A'] == percentile(self.tsframe['A'], 10)
q = self.intframe.quantile(0.1)
- self.assertEqual(q['A'], percentile(self.intframe['A'], 10))
+ assert q['A'] == percentile(self.intframe['A'], 10)
# test with and without interpolation keyword
q1 = self.intframe.quantile(0.1)
- self.assertEqual(q1['A'], np.percentile(self.intframe['A'], 10))
+ assert q1['A'] == np.percentile(self.intframe['A'], 10)
assert_series_equal(q, q1)
# interpolation method other than default linear
- expErrMsg = "Interpolation methods other than linear"
+ msg = "Interpolation methods other than linear"
df = DataFrame({"A": [1, 2, 3], "B": [2, 3, 4]}, index=[1, 2, 3])
- with tm.assert_raises_regex(ValueError, expErrMsg):
+ with tm.assert_raises_regex(ValueError, msg):
df.quantile(.5, axis=1, interpolation='nearest')
- with tm.assert_raises_regex(ValueError, expErrMsg):
+ with tm.assert_raises_regex(ValueError, msg):
df.quantile([.5, .75], axis=1, interpolation='lower')
# test degenerate case
df = DataFrame({'x': [], 'y': []})
- with tm.assert_raises_regex(ValueError, expErrMsg):
+ with tm.assert_raises_regex(ValueError, msg):
q = df.quantile(0.1, axis=0, interpolation='higher')
# multi
df = DataFrame([[1, 1, 1], [2, 2, 2], [3, 3, 3]],
columns=['a', 'b', 'c'])
- with tm.assert_raises_regex(ValueError, expErrMsg):
+ with tm.assert_raises_regex(ValueError, msg):
df.quantile([.25, .5], interpolation='midpoint')
def test_quantile_multi(self):
diff --git a/pandas/tests/frame/test_query_eval.py b/pandas/tests/frame/test_query_eval.py
index 575906fb5c8b2..80db2c50c3eb6 100644
--- a/pandas/tests/frame/test_query_eval.py
+++ b/pandas/tests/frame/test_query_eval.py
@@ -808,7 +808,7 @@ def test_nested_scope(self):
# smoke test
x = 1 # noqa
result = pd.eval('x + 1', engine=engine, parser=parser)
- self.assertEqual(result, 2)
+ assert result == 2
df = DataFrame(np.random.randn(5, 3))
df2 = DataFrame(np.random.randn(5, 3))
diff --git a/pandas/tests/frame/test_replace.py b/pandas/tests/frame/test_replace.py
index 87075e6d6e631..3f160012cb446 100644
--- a/pandas/tests/frame/test_replace.py
+++ b/pandas/tests/frame/test_replace.py
@@ -548,7 +548,7 @@ def test_regex_replace_numeric_to_object_conversion(self):
expec = DataFrame({'a': ['a', 1, 2, 3], 'b': mix['b'], 'c': mix['c']})
res = df.replace(0, 'a')
assert_frame_equal(res, expec)
- self.assertEqual(res.a.dtype, np.object_)
+ assert res.a.dtype == np.object_
def test_replace_regex_metachar(self):
metachars = '[]', '()', r'\d', r'\w', r'\s'
diff --git a/pandas/tests/frame/test_repr_info.py b/pandas/tests/frame/test_repr_info.py
index dbdbebddcc0b5..74301b918bd02 100644
--- a/pandas/tests/frame/test_repr_info.py
+++ b/pandas/tests/frame/test_repr_info.py
@@ -132,11 +132,11 @@ def test_repr_unicode(self):
result = repr(df)
ex_top = ' A'
- self.assertEqual(result.split('\n')[0].rstrip(), ex_top)
+ assert result.split('\n')[0].rstrip() == ex_top
df = DataFrame({'A': [uval, uval]})
result = repr(df)
- self.assertEqual(result.split('\n')[0].rstrip(), ex_top)
+ assert result.split('\n')[0].rstrip() == ex_top
def test_unicode_string_with_unicode(self):
df = DataFrame({'A': [u("\u05d0")]})
@@ -186,7 +186,7 @@ def test_latex_repr(self):
with option_context("display.latex.escape", False,
'display.latex.repr', True):
df = DataFrame([[r'$\alpha$', 'b', 'c'], [1, 2, 3]])
- self.assertEqual(result, df._repr_latex_())
+ assert result == df._repr_latex_()
# GH 12182
assert df._repr_latex_() is None
@@ -217,7 +217,7 @@ def test_info_wide(self):
set_option('display.max_info_columns', 101)
io = StringIO()
df.info(buf=io)
- self.assertEqual(rs, xp)
+ assert rs == xp
reset_option('display.max_info_columns')
def test_info_duplicate_columns(self):
@@ -237,8 +237,8 @@ def test_info_duplicate_columns_shows_correct_dtypes(self):
frame.info(buf=io)
io.seek(0)
lines = io.readlines()
- self.assertEqual('a 1 non-null int64\n', lines[3])
- self.assertEqual('a 1 non-null float64\n', lines[4])
+ assert 'a 1 non-null int64\n' == lines[3]
+ assert 'a 1 non-null float64\n' == lines[4]
def test_info_shows_column_dtypes(self):
dtypes = ['int64', 'float64', 'datetime64[ns]', 'timedelta64[ns]',
@@ -263,7 +263,7 @@ def test_info_max_cols(self):
buf = StringIO()
df.info(buf=buf, verbose=verbose)
res = buf.getvalue()
- self.assertEqual(len(res.strip().split('\n')), len_)
+ assert len(res.strip().split('\n')) == len_
for len_, verbose in [(10, None), (5, False), (10, True)]:
@@ -272,7 +272,7 @@ def test_info_max_cols(self):
buf = StringIO()
df.info(buf=buf, verbose=verbose)
res = buf.getvalue()
- self.assertEqual(len(res.strip().split('\n')), len_)
+ assert len(res.strip().split('\n')) == len_
for len_, max_cols in [(10, 5), (5, 4)]:
# setting truncates
@@ -280,14 +280,14 @@ def test_info_max_cols(self):
buf = StringIO()
df.info(buf=buf, max_cols=max_cols)
res = buf.getvalue()
- self.assertEqual(len(res.strip().split('\n')), len_)
+ assert len(res.strip().split('\n')) == len_
# setting wouldn't truncate
with option_context('max_info_columns', 5):
buf = StringIO()
df.info(buf=buf, max_cols=max_cols)
res = buf.getvalue()
- self.assertEqual(len(res.strip().split('\n')), len_)
+ assert len(res.strip().split('\n')) == len_
def test_info_memory_usage(self):
# Ensure memory usage is displayed, when asserted, on the last line
@@ -352,15 +352,14 @@ def test_info_memory_usage(self):
# (cols * rows * bytes) + index size
df_size = df.memory_usage().sum()
exp_size = len(dtypes) * n * 8 + df.index.nbytes
- self.assertEqual(df_size, exp_size)
+ assert df_size == exp_size
# Ensure number of cols in memory_usage is the same as df
size_df = np.size(df.columns.values) + 1 # index=True; default
- self.assertEqual(size_df, np.size(df.memory_usage()))
+ assert size_df == np.size(df.memory_usage())
# assert deep works only on object
- self.assertEqual(df.memory_usage().sum(),
- df.memory_usage(deep=True).sum())
+ assert df.memory_usage().sum() == df.memory_usage(deep=True).sum()
# test for validity
DataFrame(1, index=['a'], columns=['A']
@@ -428,7 +427,7 @@ def memory_usage(f):
df = DataFrame({'value': np.random.randn(N * M)}, index=index)
unstacked = df.unstack('id')
- self.assertEqual(df.values.nbytes, unstacked.values.nbytes)
+ assert df.values.nbytes == unstacked.values.nbytes
assert memory_usage(df) > memory_usage(unstacked)
# high upper bound
diff --git a/pandas/tests/frame/test_reshape.py b/pandas/tests/frame/test_reshape.py
index 9c48233ff29cd..79ee76ee362c3 100644
--- a/pandas/tests/frame/test_reshape.py
+++ b/pandas/tests/frame/test_reshape.py
@@ -41,25 +41,25 @@ def test_pivot(self):
'One': {'A': 1., 'B': 2., 'C': 3.},
'Two': {'A': 1., 'B': 2., 'C': 3.}
})
- expected.index.name, expected.columns.name = 'index', 'columns'
- assert_frame_equal(pivoted, expected)
+ expected.index.name, expected.columns.name = 'index', 'columns'
+ tm.assert_frame_equal(pivoted, expected)
# name tracking
- self.assertEqual(pivoted.index.name, 'index')
- self.assertEqual(pivoted.columns.name, 'columns')
+ assert pivoted.index.name == 'index'
+ assert pivoted.columns.name == 'columns'
# don't specify values
pivoted = frame.pivot(index='index', columns='columns')
- self.assertEqual(pivoted.index.name, 'index')
- self.assertEqual(pivoted.columns.names, (None, 'columns'))
+ assert pivoted.index.name == 'index'
+ assert pivoted.columns.names == (None, 'columns')
with catch_warnings(record=True):
# pivot multiple columns
wp = tm.makePanel()
lp = wp.to_frame()
df = lp.reset_index()
- assert_frame_equal(df.pivot('major', 'minor'), lp.unstack())
+ tm.assert_frame_equal(df.pivot('major', 'minor'), lp.unstack())
def test_pivot_duplicates(self):
data = DataFrame({'a': ['bar', 'bar', 'foo', 'foo', 'foo'],
@@ -72,7 +72,7 @@ def test_pivot_empty(self):
df = DataFrame({}, columns=['a', 'b', 'c'])
result = df.pivot('a', 'b', 'c')
expected = DataFrame({})
- assert_frame_equal(result, expected, check_names=False)
+ tm.assert_frame_equal(result, expected, check_names=False)
def test_pivot_integer_bug(self):
df = DataFrame(data=[("A", "1", "A1"), ("B", "2", "B2")])
@@ -106,21 +106,14 @@ def test_pivot_index_none(self):
('values', 'Two')],
names=[None, 'columns'])
expected.index.name = 'index'
- assert_frame_equal(result, expected, check_names=False)
- self.assertEqual(result.index.name, 'index',)
- self.assertEqual(result.columns.names, (None, 'columns'))
+ tm.assert_frame_equal(result, expected, check_names=False)
+ assert result.index.name == 'index'
+ assert result.columns.names == (None, 'columns')
expected.columns = expected.columns.droplevel(0)
-
- data = {
- 'index': range(7),
- 'columns': ['One', 'One', 'One', 'Two', 'Two', 'Two'],
- 'values': [1., 2., 3., 3., 2., 1.]
- }
-
result = frame.pivot(columns='columns', values='values')
expected.columns.name = 'columns'
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_stack_unstack(self):
f = self.frame.copy()
@@ -516,8 +509,8 @@ def test_unstack_dtypes(self):
right = right.set_index(['A', 'B']).unstack(0)
right[('D', 'a')] = right[('D', 'a')].astype('int64')
- self.assertEqual(left.shape, (3, 2))
- assert_frame_equal(left, right)
+ assert left.shape == (3, 2)
+ tm.assert_frame_equal(left, right)
def test_unstack_non_unique_index_names(self):
idx = MultiIndex.from_tuples([('a', 'b'), ('c', 'd')],
@@ -540,7 +533,7 @@ def verify(df):
left = sorted(df.iloc[i, j].split('.'))
right = mk_list(df.index[i]) + mk_list(df.columns[j])
right = sorted(list(map(cast, right)))
- self.assertEqual(left, right)
+ assert left == right
df = DataFrame({'jim': ['a', 'b', nan, 'd'],
'joe': ['w', 'x', 'y', 'z'],
@@ -554,7 +547,7 @@ def verify(df):
mi = df.set_index(list(idx))
for lev in range(2):
udf = mi.unstack(level=lev)
- self.assertEqual(udf.notnull().values.sum(), len(df))
+ assert udf.notnull().values.sum() == len(df)
verify(udf['jolie'])
df = DataFrame({'1st': ['d'] * 3 + [nan] * 5 + ['a'] * 2 +
@@ -572,7 +565,7 @@ def verify(df):
mi = df.set_index(list(idx))
for lev in range(3):
udf = mi.unstack(level=lev)
- self.assertEqual(udf.notnull().values.sum(), 2 * len(df))
+ assert udf.notnull().values.sum() == 2 * len(df)
for col in ['4th', '5th']:
verify(udf[col])
@@ -677,12 +670,12 @@ def verify(df):
df.loc[1, '3rd'] = df.loc[4, '3rd'] = nan
left = df.set_index(['1st', '2nd', '3rd']).unstack(['2nd', '3rd'])
- self.assertEqual(left.notnull().values.sum(), 2 * len(df))
+ assert left.notnull().values.sum() == 2 * len(df)
for col in ['jim', 'joe']:
for _, r in df.iterrows():
key = r['1st'], (col, r['2nd'], r['3rd'])
- self.assertEqual(r[col], left.loc[key])
+ assert r[col] == left.loc[key]
def test_stack_datetime_column_multiIndex(self):
# GH 8039
diff --git a/pandas/tests/frame/test_subclass.py b/pandas/tests/frame/test_subclass.py
index ade696885c2e0..40a8ece852623 100644
--- a/pandas/tests/frame/test_subclass.py
+++ b/pandas/tests/frame/test_subclass.py
@@ -55,12 +55,12 @@ def custom_frame_function(self):
# Do we get back our own Series class after selecting a column?
cdf_series = cdf.col1
assert isinstance(cdf_series, CustomSeries)
- self.assertEqual(cdf_series.custom_series_function(), 'OK')
+ assert cdf_series.custom_series_function() == 'OK'
# Do we get back our own DF class after slicing row-wise?
cdf_rows = cdf[1:5]
assert isinstance(cdf_rows, CustomDataFrame)
- self.assertEqual(cdf_rows.custom_frame_function(), 'OK')
+ assert cdf_rows.custom_frame_function() == 'OK'
# Make sure sliced part of multi-index frame is custom class
mcol = pd.MultiIndex.from_tuples([('A', 'A'), ('A', 'B')])
@@ -76,19 +76,19 @@ def test_dataframe_metadata(self):
index=['a', 'b', 'c'])
df.testattr = 'XXX'
- self.assertEqual(df.testattr, 'XXX')
- self.assertEqual(df[['X']].testattr, 'XXX')
- self.assertEqual(df.loc[['a', 'b'], :].testattr, 'XXX')
- self.assertEqual(df.iloc[[0, 1], :].testattr, 'XXX')
+ assert df.testattr == 'XXX'
+ assert df[['X']].testattr == 'XXX'
+ assert df.loc[['a', 'b'], :].testattr == 'XXX'
+ assert df.iloc[[0, 1], :].testattr == 'XXX'
- # GH9776
- self.assertEqual(df.iloc[0:1, :].testattr, 'XXX')
+ # see gh-9776
+ assert df.iloc[0:1, :].testattr == 'XXX'
- # GH10553
+ # see gh-10553
unpickled = tm.round_trip_pickle(df)
tm.assert_frame_equal(df, unpickled)
- self.assertEqual(df._metadata, unpickled._metadata)
- self.assertEqual(df.testattr, unpickled.testattr)
+ assert df._metadata == unpickled._metadata
+ assert df.testattr == unpickled.testattr
def test_indexing_sliced(self):
# GH 11559
diff --git a/pandas/tests/frame/test_timeseries.py b/pandas/tests/frame/test_timeseries.py
index 910f04f0d63c6..f52f4697b1b08 100644
--- a/pandas/tests/frame/test_timeseries.py
+++ b/pandas/tests/frame/test_timeseries.py
@@ -38,7 +38,7 @@ def test_diff(self):
s = Series([a, b])
rs = DataFrame({'s': s}).diff()
- self.assertEqual(rs.s[1], 1)
+ assert rs.s[1] == 1
# mixed numeric
tf = self.tsframe.astype('float32')
@@ -71,7 +71,7 @@ def test_diff_mixed_dtype(self):
df['A'] = np.array([1, 2, 3, 4, 5], dtype=object)
result = df.diff()
- self.assertEqual(result[0].dtype, np.float64)
+ assert result[0].dtype == np.float64
def test_diff_neg_n(self):
rs = self.tsframe.diff(-1)
@@ -153,7 +153,7 @@ def test_frame_add_datetime64_col_other_units(self):
ex_vals = to_datetime(vals.astype('O')).values
- self.assertEqual(df[unit].dtype, ns_dtype)
+ assert df[unit].dtype == ns_dtype
assert (df[unit].values == ex_vals).all()
# Test insertion into existing datetime64 column
@@ -191,7 +191,7 @@ def test_shift(self):
# shift by DateOffset
shiftedFrame = self.tsframe.shift(5, freq=offsets.BDay())
- self.assertEqual(len(shiftedFrame), len(self.tsframe))
+ assert len(shiftedFrame) == len(self.tsframe)
shiftedFrame2 = self.tsframe.shift(5, freq='B')
assert_frame_equal(shiftedFrame, shiftedFrame2)
@@ -408,10 +408,10 @@ def test_first_last_valid(self):
frame = DataFrame({'foo': mat}, index=self.frame.index)
index = frame.first_valid_index()
- self.assertEqual(index, frame.index[5])
+ assert index == frame.index[5]
index = frame.last_valid_index()
- self.assertEqual(index, frame.index[-6])
+ assert index == frame.index[-6]
# GH12800
empty = DataFrame()
@@ -446,7 +446,7 @@ def test_at_time_frame(self):
rng = date_range('1/1/2012', freq='23Min', periods=384)
ts = DataFrame(np.random.randn(len(rng), 2), rng)
rs = ts.at_time('16:00')
- self.assertEqual(len(rs), 0)
+ assert len(rs) == 0
def test_between_time_frame(self):
rng = date_range('1/1/2000', '1/5/2000', freq='5min')
@@ -463,7 +463,7 @@ def test_between_time_frame(self):
if not inc_end:
exp_len -= 4
- self.assertEqual(len(filtered), exp_len)
+ assert len(filtered) == exp_len
for rs in filtered.index:
t = rs.time()
if inc_start:
@@ -495,7 +495,7 @@ def test_between_time_frame(self):
if not inc_end:
exp_len -= 4
- self.assertEqual(len(filtered), exp_len)
+ assert len(filtered) == exp_len
for rs in filtered.index:
t = rs.time()
if inc_start:
diff --git a/pandas/tests/frame/test_to_csv.py b/pandas/tests/frame/test_to_csv.py
index 11c10f1982558..3e38f2a71d99d 100644
--- a/pandas/tests/frame/test_to_csv.py
+++ b/pandas/tests/frame/test_to_csv.py
@@ -433,13 +433,13 @@ def test_to_csv_no_index(self):
assert_frame_equal(df, result)
def test_to_csv_with_mix_columns(self):
- # GH11637, incorrect output when a mix of integer and string column
+ # gh-11637: incorrect output when a mix of integer and string column
# names passed as columns parameter in to_csv
df = DataFrame({0: ['a', 'b', 'c'],
1: ['aa', 'bb', 'cc']})
df['test'] = 'txt'
- self.assertEqual(df.to_csv(), df.to_csv(columns=[0, 1, 'test']))
+ assert df.to_csv() == df.to_csv(columns=[0, 1, 'test'])
def test_to_csv_headers(self):
# GH6186, the presence or absence of `index` incorrectly
@@ -475,7 +475,7 @@ def test_to_csv_multiindex(self):
# TODO to_csv drops column name
assert_frame_equal(frame, df, check_names=False)
- self.assertEqual(frame.index.names, df.index.names)
+ assert frame.index.names == df.index.names
# needed if setUP becomes a classmethod
self.frame.index = old_index
@@ -494,7 +494,7 @@ def test_to_csv_multiindex(self):
# do not load index
tsframe.to_csv(path)
recons = DataFrame.from_csv(path, index_col=None)
- self.assertEqual(len(recons.columns), len(tsframe.columns) + 2)
+ assert len(recons.columns) == len(tsframe.columns) + 2
# no index
tsframe.to_csv(path, index=False)
@@ -604,7 +604,7 @@ def _make_frame(names=None):
exp.index = []
tm.assert_index_equal(recons.columns, exp.columns)
- self.assertEqual(len(recons), 0)
+ assert len(recons) == 0
def test_to_csv_float32_nanrep(self):
df = DataFrame(np.random.randn(1, 4).astype(np.float32))
@@ -615,7 +615,7 @@ def test_to_csv_float32_nanrep(self):
with open(path) as f:
lines = f.readlines()
- self.assertEqual(lines[1].split(',')[2], '999')
+ assert lines[1].split(',')[2] == '999'
def test_to_csv_withcommas(self):
@@ -813,7 +813,7 @@ def test_to_csv_unicodewriter_quoting(self):
'2,"bar"\n'
'3,"baz"\n')
- self.assertEqual(result, expected)
+ assert result == expected
def test_to_csv_quote_none(self):
# GH4328
@@ -824,7 +824,7 @@ def test_to_csv_quote_none(self):
encoding=encoding, index=False)
result = buf.getvalue()
expected = 'A\nhello\n{"hello"}\n'
- self.assertEqual(result, expected)
+ assert result == expected
def test_to_csv_index_no_leading_comma(self):
df = DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]},
@@ -836,7 +836,7 @@ def test_to_csv_index_no_leading_comma(self):
'one,1,4\n'
'two,2,5\n'
'three,3,6\n')
- self.assertEqual(buf.getvalue(), expected)
+ assert buf.getvalue() == expected
def test_to_csv_line_terminators(self):
df = DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]},
@@ -848,7 +848,7 @@ def test_to_csv_line_terminators(self):
'one,1,4\r\n'
'two,2,5\r\n'
'three,3,6\r\n')
- self.assertEqual(buf.getvalue(), expected)
+ assert buf.getvalue() == expected
buf = StringIO()
df.to_csv(buf) # The default line terminator remains \n
@@ -856,7 +856,7 @@ def test_to_csv_line_terminators(self):
'one,1,4\n'
'two,2,5\n'
'three,3,6\n')
- self.assertEqual(buf.getvalue(), expected)
+ assert buf.getvalue() == expected
def test_to_csv_from_csv_categorical(self):
@@ -868,7 +868,7 @@ def test_to_csv_from_csv_categorical(self):
s.to_csv(res)
exp = StringIO()
s2.to_csv(exp)
- self.assertEqual(res.getvalue(), exp.getvalue())
+ assert res.getvalue() == exp.getvalue()
df = DataFrame({"s": s})
df2 = DataFrame({"s": s2})
@@ -876,7 +876,7 @@ def test_to_csv_from_csv_categorical(self):
df.to_csv(res)
exp = StringIO()
df2.to_csv(exp)
- self.assertEqual(res.getvalue(), exp.getvalue())
+ assert res.getvalue() == exp.getvalue()
def test_to_csv_path_is_none(self):
# GH 8215
@@ -1078,13 +1078,13 @@ def test_to_csv_quoting(self):
1,False,3.2,,"b,c"
"""
result = df.to_csv()
- self.assertEqual(result, expected)
+ assert result == expected
result = df.to_csv(quoting=None)
- self.assertEqual(result, expected)
+ assert result == expected
result = df.to_csv(quoting=csv.QUOTE_MINIMAL)
- self.assertEqual(result, expected)
+ assert result == expected
expected = """\
"","c_bool","c_float","c_int","c_string"
@@ -1092,7 +1092,7 @@ def test_to_csv_quoting(self):
"1","False","3.2","","b,c"
"""
result = df.to_csv(quoting=csv.QUOTE_ALL)
- self.assertEqual(result, expected)
+ assert result == expected
# see gh-12922, gh-13259: make sure changes to
# the formatters do not break this behaviour
@@ -1102,7 +1102,7 @@ def test_to_csv_quoting(self):
1,False,3.2,"","b,c"
"""
result = df.to_csv(quoting=csv.QUOTE_NONNUMERIC)
- self.assertEqual(result, expected)
+ assert result == expected
msg = "need to escape, but no escapechar set"
tm.assert_raises_regex(csv.Error, msg, df.to_csv,
@@ -1118,7 +1118,7 @@ def test_to_csv_quoting(self):
"""
result = df.to_csv(quoting=csv.QUOTE_NONE,
escapechar='!')
- self.assertEqual(result, expected)
+ assert result == expected
expected = """\
,c_bool,c_ffloat,c_int,c_string
@@ -1127,7 +1127,7 @@ def test_to_csv_quoting(self):
"""
result = df.to_csv(quoting=csv.QUOTE_NONE,
escapechar='f')
- self.assertEqual(result, expected)
+ assert result == expected
# see gh-3503: quoting Windows line terminators
# presents with encoding?
@@ -1135,14 +1135,14 @@ def test_to_csv_quoting(self):
df = pd.read_csv(StringIO(text))
buf = StringIO()
df.to_csv(buf, encoding='utf-8', index=False)
- self.assertEqual(buf.getvalue(), text)
+ assert buf.getvalue() == text
# xref gh-7791: make sure the quoting parameter is passed through
# with multi-indexes
df = pd.DataFrame({'a': [1, 2], 'b': [3, 4], 'c': [5, 6]})
df = df.set_index(['a', 'b'])
expected = '"a","b","c"\n"1","3","5"\n"2","4","6"\n'
- self.assertEqual(df.to_csv(quoting=csv.QUOTE_ALL), expected)
+ assert df.to_csv(quoting=csv.QUOTE_ALL) == expected
def test_period_index_date_overflow(self):
# see gh-15982
diff --git a/pandas/tests/groupby/test_aggregate.py b/pandas/tests/groupby/test_aggregate.py
index e3f166d2294e2..310a5aca77b77 100644
--- a/pandas/tests/groupby/test_aggregate.py
+++ b/pandas/tests/groupby/test_aggregate.py
@@ -197,7 +197,7 @@ def test_agg_ser_multi_key(self):
def test_agg_apply_corner(self):
# nothing to group, all NA
grouped = self.ts.groupby(self.ts * np.nan)
- self.assertEqual(self.ts.dtype, np.float64)
+ assert self.ts.dtype == np.float64
# groupby float64 values results in Float64Index
exp = Series([], dtype=np.float64, index=pd.Index(
@@ -445,7 +445,7 @@ def test_aggregate_item_by_item(self):
# def aggfun(ser):
# return len(ser + 'a')
# result = grouped.agg(aggfun)
- # self.assertEqual(len(result.columns), 1)
+ # assert len(result.columns) == 1
aggfun = lambda ser: ser.size
result = grouped.agg(aggfun)
@@ -468,7 +468,7 @@ def aggfun(ser):
result = DataFrame().groupby(self.df.A).agg(aggfun)
assert isinstance(result, DataFrame)
- self.assertEqual(len(result), 0)
+ assert len(result) == 0
def test_agg_item_by_item_raise_typeerror(self):
from numpy.random import randint
diff --git a/pandas/tests/groupby/test_categorical.py b/pandas/tests/groupby/test_categorical.py
index b9a731f2204da..9d2134927389d 100644
--- a/pandas/tests/groupby/test_categorical.py
+++ b/pandas/tests/groupby/test_categorical.py
@@ -48,7 +48,7 @@ def get_stats(group):
'mean': group.mean()}
result = self.df.groupby(cats).D.apply(get_stats)
- self.assertEqual(result.index.names[0], 'C')
+ assert result.index.names[0] == 'C'
def test_apply_categorical_data(self):
# GH 10138
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index 278682ccb8d45..09643e918af31 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -41,10 +41,10 @@ def checkit(dtype):
grouped = data.groupby(lambda x: x // 3)
for k, v in grouped:
- self.assertEqual(len(v), 3)
+ assert len(v) == 3
agged = grouped.aggregate(np.mean)
- self.assertEqual(agged[1], 1)
+ assert agged[1] == 1
assert_series_equal(agged, grouped.agg(np.mean)) # shorthand
assert_series_equal(agged, grouped.mean())
@@ -52,7 +52,7 @@ def checkit(dtype):
expected = grouped.apply(lambda x: x * x.sum())
transformed = grouped.transform(lambda x: x * x.sum())
- self.assertEqual(transformed[7], 12)
+ assert transformed[7] == 12
assert_series_equal(transformed, expected)
value_grouped = data.groupby(data)
@@ -68,7 +68,7 @@ def checkit(dtype):
group_constants = {0: 10, 1: 20, 2: 30}
agged = grouped.agg(lambda x: group_constants[x.name] + x.mean())
- self.assertEqual(agged[1], 21)
+ assert agged[1] == 21
# corner cases
pytest.raises(Exception, grouped.aggregate, lambda x: x * 2)
@@ -423,10 +423,10 @@ def test_grouper_getting_correct_binner(self):
assert_frame_equal(result, expected)
def test_grouper_iter(self):
- self.assertEqual(sorted(self.df.groupby('A').grouper), ['bar', 'foo'])
+ assert sorted(self.df.groupby('A').grouper) == ['bar', 'foo']
def test_empty_groups(self):
- # GH # 1048
+ # see gh-1048
pytest.raises(ValueError, self.df.groupby, [])
def test_groupby_grouper(self):
@@ -434,7 +434,7 @@ def test_groupby_grouper(self):
result = self.df.groupby(grouped.grouper).mean()
expected = grouped.mean()
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_groupby_duplicated_column_errormsg(self):
# GH7511
@@ -744,17 +744,17 @@ def test_len(self):
df = tm.makeTimeDataFrame()
grouped = df.groupby([lambda x: x.year, lambda x: x.month,
lambda x: x.day])
- self.assertEqual(len(grouped), len(df))
+ assert len(grouped) == len(df)
grouped = df.groupby([lambda x: x.year, lambda x: x.month])
expected = len(set([(x.year, x.month) for x in df.index]))
- self.assertEqual(len(grouped), expected)
+ assert len(grouped) == expected
# issue 11016
df = pd.DataFrame(dict(a=[np.nan] * 3, b=[1, 2, 3]))
- self.assertEqual(len(df.groupby(('a'))), 0)
- self.assertEqual(len(df.groupby(('b'))), 3)
- self.assertEqual(len(df.groupby(('a', 'b'))), 3)
+ assert len(df.groupby(('a'))) == 0
+ assert len(df.groupby(('b'))) == 3
+ assert len(df.groupby(('a', 'b'))) == 3
def test_groups(self):
grouped = self.df.groupby(['A'])
@@ -900,7 +900,7 @@ def test_series_describe_single(self):
def test_series_index_name(self):
grouped = self.df.loc[:, ['C']].groupby(self.df['A'])
result = grouped.agg(lambda x: x.mean())
- self.assertEqual(result.index.name, 'A')
+ assert result.index.name == 'A'
def test_frame_describe_multikey(self):
grouped = self.tsframe.groupby([lambda x: x.year, lambda x: x.month])
@@ -962,8 +962,8 @@ def test_frame_groupby(self):
# aggregate
aggregated = grouped.aggregate(np.mean)
- self.assertEqual(len(aggregated), 5)
- self.assertEqual(len(aggregated.columns), 4)
+ assert len(aggregated) == 5
+ assert len(aggregated.columns) == 4
# by string
tscopy = self.tsframe.copy()
@@ -974,8 +974,8 @@ def test_frame_groupby(self):
# transform
grouped = self.tsframe.head(30).groupby(lambda x: x.weekday())
transformed = grouped.transform(lambda x: x - x.mean())
- self.assertEqual(len(transformed), 30)
- self.assertEqual(len(transformed.columns), 4)
+ assert len(transformed) == 30
+ assert len(transformed.columns) == 4
# transform propagate
transformed = grouped.transform(lambda x: x.mean())
@@ -987,7 +987,7 @@ def test_frame_groupby(self):
# iterate
for weekday, group in grouped:
- self.assertEqual(group.index[0].weekday(), weekday)
+ assert group.index[0].weekday() == weekday
# groups / group_indices
groups = grouped.groups
@@ -1013,8 +1013,8 @@ def test_frame_groupby_columns(self):
# aggregate
aggregated = grouped.aggregate(np.mean)
- self.assertEqual(len(aggregated), len(self.tsframe))
- self.assertEqual(len(aggregated.columns), 2)
+ assert len(aggregated) == len(self.tsframe)
+ assert len(aggregated.columns) == 2
# transform
tf = lambda x: x - x.mean()
@@ -1023,34 +1023,34 @@ def test_frame_groupby_columns(self):
# iterate
for k, v in grouped:
- self.assertEqual(len(v.columns), 2)
+ assert len(v.columns) == 2
def test_frame_set_name_single(self):
grouped = self.df.groupby('A')
result = grouped.mean()
- self.assertEqual(result.index.name, 'A')
+ assert result.index.name == 'A'
result = self.df.groupby('A', as_index=False).mean()
self.assertNotEqual(result.index.name, 'A')
result = grouped.agg(np.mean)
- self.assertEqual(result.index.name, 'A')
+ assert result.index.name == 'A'
result = grouped.agg({'C': np.mean, 'D': np.std})
- self.assertEqual(result.index.name, 'A')
+ assert result.index.name == 'A'
result = grouped['C'].mean()
- self.assertEqual(result.index.name, 'A')
+ assert result.index.name == 'A'
result = grouped['C'].agg(np.mean)
- self.assertEqual(result.index.name, 'A')
+ assert result.index.name == 'A'
result = grouped['C'].agg([np.mean, np.std])
- self.assertEqual(result.index.name, 'A')
+ assert result.index.name == 'A'
with tm.assert_produces_warning(FutureWarning,
check_stacklevel=False):
result = grouped['C'].agg({'foo': np.mean, 'bar': np.std})
- self.assertEqual(result.index.name, 'A')
+ assert result.index.name == 'A'
def test_multi_iter(self):
s = Series(np.arange(6))
@@ -1064,8 +1064,8 @@ def test_multi_iter(self):
('b', '1', s[[4]]), ('b', '2', s[[3, 5]])]
for i, ((one, two), three) in enumerate(iterated):
e1, e2, e3 = expected[i]
- self.assertEqual(e1, one)
- self.assertEqual(e2, two)
+ assert e1 == one
+ assert e2 == two
assert_series_equal(three, e3)
def test_multi_iter_frame(self):
@@ -1087,8 +1087,8 @@ def test_multi_iter_frame(self):
('b', '2', df.loc[idx[[1]]])]
for i, ((one, two), three) in enumerate(iterated):
e1, e2, e3 = expected[i]
- self.assertEqual(e1, one)
- self.assertEqual(e2, two)
+ assert e1 == one
+ assert e2 == two
assert_frame_equal(three, e3)
# don't iterate through groups with no data
@@ -1098,7 +1098,7 @@ def test_multi_iter_frame(self):
groups = {}
for key, gp in grouped:
groups[key] = gp
- self.assertEqual(len(groups), 2)
+ assert len(groups) == 2
# axis = 1
three_levels = self.three_group.groupby(['A', 'B', 'C']).mean()
@@ -1563,7 +1563,7 @@ def test_empty_groups_corner(self):
agged = grouped.apply(lambda x: x.mean())
agged_A = grouped['A'].apply(np.mean)
assert_series_equal(agged['A'], agged_A)
- self.assertEqual(agged.index.name, 'first')
+ assert agged.index.name == 'first'
def test_apply_concat_preserve_names(self):
grouped = self.three_group.groupby(['A', 'B'])
@@ -1591,13 +1591,13 @@ def desc3(group):
return result
result = grouped.apply(desc)
- self.assertEqual(result.index.names, ('A', 'B', 'stat'))
+ assert result.index.names == ('A', 'B', 'stat')
result2 = grouped.apply(desc2)
- self.assertEqual(result2.index.names, ('A', 'B', 'stat'))
+ assert result2.index.names == ('A', 'B', 'stat')
result3 = grouped.apply(desc3)
- self.assertEqual(result3.index.names, ('A', 'B', None))
+ assert result3.index.names == ('A', 'B', None)
def test_nonsense_func(self):
df = DataFrame([0])
@@ -1789,7 +1789,7 @@ def aggfun(ser):
return ser.sum()
agged2 = df.groupby(keys).aggregate(aggfun)
- self.assertEqual(len(agged2.columns) + 1, len(df.columns))
+ assert len(agged2.columns) + 1 == len(df.columns)
def test_groupby_level(self):
frame = self.mframe
@@ -1804,13 +1804,13 @@ def test_groupby_level(self):
expected0 = expected0.reindex(frame.index.levels[0])
expected1 = expected1.reindex(frame.index.levels[1])
- self.assertEqual(result0.index.name, 'first')
- self.assertEqual(result1.index.name, 'second')
+ assert result0.index.name == 'first'
+ assert result1.index.name == 'second'
assert_frame_equal(result0, expected0)
assert_frame_equal(result1, expected1)
- self.assertEqual(result0.index.name, frame.index.names[0])
- self.assertEqual(result1.index.name, frame.index.names[1])
+ assert result0.index.name == frame.index.names[0]
+ assert result1.index.name == frame.index.names[1]
# groupby level name
result0 = frame.groupby(level='first').sum()
@@ -1860,12 +1860,12 @@ def test_groupby_level_apply(self):
frame = self.mframe
result = frame.groupby(level=0).count()
- self.assertEqual(result.index.name, 'first')
+ assert result.index.name == 'first'
result = frame.groupby(level=1).count()
- self.assertEqual(result.index.name, 'second')
+ assert result.index.name == 'second'
result = frame['A'].groupby(level=0).count()
- self.assertEqual(result.index.name, 'first')
+ assert result.index.name == 'first'
def test_groupby_args(self):
# PR8618 and issue 8015
@@ -1965,7 +1965,7 @@ def f(piece):
def test_apply_series_yield_constant(self):
result = self.df.groupby(['A', 'B'])['C'].apply(len)
- self.assertEqual(result.index.names[:2], ('A', 'B'))
+ assert result.index.names[:2] == ('A', 'B')
def test_apply_frame_yield_constant(self):
# GH13568
@@ -1999,7 +1999,7 @@ def trans2(group):
result = df.groupby('A').apply(trans)
exp = df.groupby('A')['C'].apply(trans2)
assert_series_equal(result, exp, check_names=False)
- self.assertEqual(result.name, 'C')
+ assert result.name == 'C'
def test_apply_transform(self):
grouped = self.ts.groupby(lambda x: x.month)
@@ -2161,17 +2161,17 @@ def test_size(self):
grouped = self.df.groupby(['A', 'B'])
result = grouped.size()
for key, group in grouped:
- self.assertEqual(result[key], len(group))
+ assert result[key] == len(group)
grouped = self.df.groupby('A')
result = grouped.size()
for key, group in grouped:
- self.assertEqual(result[key], len(group))
+ assert result[key] == len(group)
grouped = self.df.groupby('B')
result = grouped.size()
for key, group in grouped:
- self.assertEqual(result[key], len(group))
+ assert result[key] == len(group)
df = DataFrame(np.random.choice(20, (1000, 3)), columns=list('abc'))
for sort, key in cart_product((False, True), ('a', 'b', ['a', 'b'])):
@@ -2481,24 +2481,24 @@ def test_groupby_wrong_multi_labels(self):
def test_groupby_series_with_name(self):
result = self.df.groupby(self.df['A']).mean()
result2 = self.df.groupby(self.df['A'], as_index=False).mean()
- self.assertEqual(result.index.name, 'A')
+ assert result.index.name == 'A'
assert 'A' in result2
result = self.df.groupby([self.df['A'], self.df['B']]).mean()
result2 = self.df.groupby([self.df['A'], self.df['B']],
as_index=False).mean()
- self.assertEqual(result.index.names, ('A', 'B'))
+ assert result.index.names == ('A', 'B')
assert 'A' in result2
assert 'B' in result2
def test_seriesgroupby_name_attr(self):
# GH 6265
result = self.df.groupby('A')['C']
- self.assertEqual(result.count().name, 'C')
- self.assertEqual(result.mean().name, 'C')
+ assert result.count().name == 'C'
+ assert result.mean().name == 'C'
testFunc = lambda x: np.sum(x) * 2
- self.assertEqual(result.agg(testFunc).name, 'C')
+ assert result.agg(testFunc).name == 'C'
def test_consistency_name(self):
# GH 12363
@@ -2530,11 +2530,11 @@ def summarize_random_name(df):
}, name=df.iloc[0]['A'])
metrics = self.df.groupby('A').apply(summarize)
- self.assertEqual(metrics.columns.name, None)
+ assert metrics.columns.name is None
metrics = self.df.groupby('A').apply(summarize, 'metrics')
- self.assertEqual(metrics.columns.name, 'metrics')
+ assert metrics.columns.name == 'metrics'
metrics = self.df.groupby('A').apply(summarize_random_name)
- self.assertEqual(metrics.columns.name, None)
+ assert metrics.columns.name is None
def test_groupby_nonstring_columns(self):
df = DataFrame([np.arange(10) for x in range(10)])
@@ -2595,11 +2595,11 @@ def convert_force_pure(x):
grouped = s.groupby(labels)
result = grouped.agg(convert_fast)
- self.assertEqual(result.dtype, np.object_)
+ assert result.dtype == np.object_
assert isinstance(result[0], Decimal)
result = grouped.agg(convert_force_pure)
- self.assertEqual(result.dtype, np.object_)
+ assert result.dtype == np.object_
assert isinstance(result[0], Decimal)
def test_fast_apply(self):
@@ -2670,7 +2670,7 @@ def test_groupby_aggregation_mixed_dtype(self):
def test_groupby_dtype_inference_empty(self):
# GH 6733
df = DataFrame({'x': [], 'range': np.arange(0, dtype='int64')})
- self.assertEqual(df['x'].dtype, np.float64)
+ assert df['x'].dtype == np.float64
result = df.groupby('x').first()
exp_index = Index([], name='x', dtype=np.float64)
@@ -2725,7 +2725,7 @@ def test_groupby_nat_exclude(self):
expected = [pd.Index([1, 7]), pd.Index([3, 5])]
keys = sorted(grouped.groups.keys())
- self.assertEqual(len(keys), 2)
+ assert len(keys) == 2
for k, e in zip(keys, expected):
# grouped.groups keys are np.datetime64 with system tz
# not to be affected by tz, only compare values
@@ -2733,7 +2733,7 @@ def test_groupby_nat_exclude(self):
# confirm obj is not filtered
tm.assert_frame_equal(grouped.grouper.groupings[0].obj, df)
- self.assertEqual(grouped.ngroups, 2)
+ assert grouped.ngroups == 2
expected = {
Timestamp('2013-01-01 00:00:00'): np.array([1, 7], dtype=np.int64),
@@ -2752,14 +2752,14 @@ def test_groupby_nat_exclude(self):
nan_df = DataFrame({'nan': [np.nan, np.nan, np.nan],
'nat': [pd.NaT, pd.NaT, pd.NaT]})
- self.assertEqual(nan_df['nan'].dtype, 'float64')
- self.assertEqual(nan_df['nat'].dtype, 'datetime64[ns]')
+ assert nan_df['nan'].dtype == 'float64'
+ assert nan_df['nat'].dtype == 'datetime64[ns]'
for key in ['nan', 'nat']:
grouped = nan_df.groupby(key)
- self.assertEqual(grouped.groups, {})
- self.assertEqual(grouped.ngroups, 0)
- self.assertEqual(grouped.indices, {})
+ assert grouped.groups == {}
+ assert grouped.ngroups == 0
+ assert grouped.indices == {}
pytest.raises(KeyError, grouped.get_group, np.nan)
pytest.raises(KeyError, grouped.get_group, pd.NaT)
@@ -2837,7 +2837,7 @@ def test_int32_overflow(self):
left = df.groupby(['A', 'B', 'C', 'D']).sum()
right = df.groupby(['D', 'C', 'B', 'A']).sum()
- self.assertEqual(len(left), len(right))
+ assert len(left) == len(right)
def test_groupby_sort_multi(self):
df = DataFrame({'a': ['foo', 'bar', 'baz'],
@@ -2963,7 +2963,7 @@ def test_multifunc_sum_bug(self):
grouped = x.groupby('test')
result = grouped.agg({'fl': 'sum', 2: 'size'})
- self.assertEqual(result['fl'].dtype, np.float64)
+ assert result['fl'].dtype == np.float64
def test_handle_dict_return_value(self):
def f(group):
@@ -3056,14 +3056,13 @@ def f(group):
assert names == expected_names
def test_no_dummy_key_names(self):
- # GH #1291
-
+ # see gh-1291
result = self.df.groupby(self.df['A'].values).sum()
assert result.index.name is None
result = self.df.groupby([self.df['A'].values, self.df['B'].values
]).sum()
- self.assertEqual(result.index.names, (None, None))
+ assert result.index.names == (None, None)
def test_groupby_sort_multiindex_series(self):
# series multiindex groupby sort argument was not being passed through
@@ -3121,16 +3120,16 @@ def test_multiindex_columns_empty_level(self):
df = DataFrame([[long(1), 'A']], columns=midx)
grouped = df.groupby('to filter').groups
- self.assertEqual(grouped['A'], [0])
+ assert grouped['A'] == [0]
grouped = df.groupby([('to filter', '')]).groups
- self.assertEqual(grouped['A'], [0])
+ assert grouped['A'] == [0]
df = DataFrame([[long(1), 'A'], [long(2), 'B']], columns=midx)
expected = df.groupby('to filter').groups
result = df.groupby([('to filter', '')]).groups
- self.assertEqual(result, expected)
+ assert result == expected
df = DataFrame([[long(1), 'A'], [long(2), 'A']], columns=midx)
@@ -3230,7 +3229,7 @@ def test_groupby_non_arithmetic_agg_intlike_precision(self):
grpd = df.groupby('a')
res = getattr(grpd, method)(*data['args'])
- self.assertEqual(res.iloc[0].b, data['expected'])
+ assert res.iloc[0].b == data['expected']
def test_groupby_multiindex_missing_pair(self):
# GH9049
diff --git a/pandas/tests/groupby/test_nth.py b/pandas/tests/groupby/test_nth.py
index f583fa7aa7e86..0b6aeaf155f86 100644
--- a/pandas/tests/groupby/test_nth.py
+++ b/pandas/tests/groupby/test_nth.py
@@ -87,9 +87,9 @@ def test_first_last_nth_dtypes(self):
idx = lrange(10)
idx.append(9)
s = Series(data=lrange(11), index=idx, name='IntCol')
- self.assertEqual(s.dtype, 'int64')
+ assert s.dtype == 'int64'
f = s.groupby(level=0).first()
- self.assertEqual(f.dtype, 'int64')
+ assert f.dtype == 'int64'
def test_nth(self):
df = DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=['A', 'B'])
@@ -155,12 +155,12 @@ def test_nth(self):
expected2 = s.groupby(g).apply(lambda x: x.iloc[0])
assert_series_equal(expected2, expected, check_names=False)
assert expected.name, 0
- self.assertEqual(expected.name, 1)
+ assert expected.name == 1
# validate first
v = s[g == 1].iloc[0]
- self.assertEqual(expected.iloc[0], v)
- self.assertEqual(expected2.iloc[0], v)
+ assert expected.iloc[0] == v
+ assert expected2.iloc[0] == v
# this is NOT the same as .first (as sorted is default!)
# as it keeps the order in the series (and not the group order)
diff --git a/pandas/tests/groupby/test_timegrouper.py b/pandas/tests/groupby/test_timegrouper.py
index db3fdfa605b5b..42caecbdb700e 100644
--- a/pandas/tests/groupby/test_timegrouper.py
+++ b/pandas/tests/groupby/test_timegrouper.py
@@ -444,7 +444,7 @@ def test_frame_datetime64_handling_groupby(self):
(3, np.datetime64('2012-07-04'))],
columns=['a', 'date'])
result = df.groupby('a').first()
- self.assertEqual(result['date'][3], Timestamp('2012-07-03'))
+ assert result['date'][3] == Timestamp('2012-07-03')
def test_groupby_multi_timezone(self):
@@ -575,10 +575,10 @@ def test_timezone_info(self):
import pytz
df = pd.DataFrame({'a': [1], 'b': [datetime.now(pytz.utc)]})
- self.assertEqual(df['b'][0].tzinfo, pytz.utc)
+ assert df['b'][0].tzinfo == pytz.utc
df = pd.DataFrame({'a': [1, 2, 3]})
df['b'] = datetime.now(pytz.utc)
- self.assertEqual(df['b'][0].tzinfo, pytz.utc)
+ assert df['b'][0].tzinfo == pytz.utc
def test_datetime_count(self):
df = DataFrame({'a': [1, 2, 3] * 2,
diff --git a/pandas/tests/groupby/test_transform.py b/pandas/tests/groupby/test_transform.py
index e0d81003e325f..0b81235ef2117 100644
--- a/pandas/tests/groupby/test_transform.py
+++ b/pandas/tests/groupby/test_transform.py
@@ -29,7 +29,7 @@ def test_transform(self):
grouped = data.groupby(lambda x: x // 3)
transformed = grouped.transform(lambda x: x * x.sum())
- self.assertEqual(transformed[7], 12)
+ assert transformed[7] == 12
# GH 8046
# make sure that we preserve the input order
@@ -408,7 +408,7 @@ def f(group):
grouped = df.groupby('c')
result = grouped.apply(f)
- self.assertEqual(result['d'].dtype, np.float64)
+ assert result['d'].dtype == np.float64
# this is by definition a mutating operation!
with option_context('mode.chained_assignment', None):
diff --git a/pandas/tests/indexes/common.py b/pandas/tests/indexes/common.py
index d9dccc39f469f..bbde902fb87bf 100644
--- a/pandas/tests/indexes/common.py
+++ b/pandas/tests/indexes/common.py
@@ -139,7 +139,7 @@ def test_ndarray_compat_properties(self):
values = idx.values
for prop in self._compat_props:
- self.assertEqual(getattr(idx, prop), getattr(values, prop))
+ assert getattr(idx, prop) == getattr(values, prop)
# test for validity
idx.nbytes
@@ -162,7 +162,7 @@ def test_dtype_str(self):
for idx in self.indices.values():
dtype = idx.dtype_str
assert isinstance(dtype, compat.string_types)
- self.assertEqual(dtype, str(idx.dtype))
+ assert dtype == str(idx.dtype)
def test_repr_max_seq_item_setting(self):
# GH10182
@@ -189,14 +189,14 @@ def test_set_name_methods(self):
original_name = ind.name
new_ind = ind.set_names([new_name])
- self.assertEqual(new_ind.name, new_name)
- self.assertEqual(ind.name, original_name)
+ assert new_ind.name == new_name
+ assert ind.name == original_name
res = ind.rename(new_name, inplace=True)
# should return None
assert res is None
- self.assertEqual(ind.name, new_name)
- self.assertEqual(ind.names, [new_name])
+ assert ind.name == new_name
+ assert ind.names == [new_name]
# with tm.assert_raises_regex(TypeError, "list-like"):
# # should still fail even if it would be the right length
# ind.set_names("a")
@@ -206,8 +206,8 @@ def test_set_name_methods(self):
# rename in place just leaves tuples and other containers alone
name = ('A', 'B')
ind.rename(name, inplace=True)
- self.assertEqual(ind.name, name)
- self.assertEqual(ind.names, [name])
+ assert ind.name == name
+ assert ind.names == [name]
def test_hash_error(self):
for ind in self.indices.values():
@@ -310,7 +310,7 @@ def test_duplicates(self):
# preserve names
idx.name = 'foo'
result = idx.drop_duplicates()
- self.assertEqual(result.name, 'foo')
+ assert result.name == 'foo'
tm.assert_index_equal(result, Index([ind[0]], name='foo'))
def test_get_unique_index(self):
@@ -351,8 +351,8 @@ def test_get_unique_index(self):
idx_unique_nan = ind._shallow_copy(vals_unique)
assert idx_unique_nan.is_unique
- self.assertEqual(idx_nan.dtype, ind.dtype)
- self.assertEqual(idx_unique_nan.dtype, ind.dtype)
+ assert idx_nan.dtype == ind.dtype
+ assert idx_unique_nan.dtype == ind.dtype
for dropna, expected in zip([False, True],
[idx_unique_nan, idx_unique]):
@@ -373,11 +373,11 @@ def test_mutability(self):
def test_view(self):
for ind in self.indices.values():
i_view = ind.view()
- self.assertEqual(i_view.name, ind.name)
+ assert i_view.name == ind.name
def test_compat(self):
for ind in self.indices.values():
- self.assertEqual(ind.tolist(), list(ind))
+ assert ind.tolist() == list(ind)
def test_memory_usage(self):
for name, index in compat.iteritems(self.indices):
@@ -398,7 +398,7 @@ def test_memory_usage(self):
else:
# we report 0 for no-length
- self.assertEqual(result, 0)
+ assert result == 0
def test_argsort(self):
for k, ind in self.indices.items():
@@ -617,7 +617,7 @@ def test_difference_base(self):
elif isinstance(idx, CategoricalIndex):
pass
elif isinstance(idx, (DatetimeIndex, TimedeltaIndex)):
- self.assertEqual(result.__class__, answer.__class__)
+ assert result.__class__ == answer.__class__
tm.assert_numpy_array_equal(result.asi8, answer.asi8)
else:
result = first.difference(case)
@@ -687,12 +687,12 @@ def test_delete_base(self):
expected = idx[1:]
result = idx.delete(0)
assert result.equals(expected)
- self.assertEqual(result.name, expected.name)
+ assert result.name == expected.name
expected = idx[:-1]
result = idx.delete(-1)
assert result.equals(expected)
- self.assertEqual(result.name, expected.name)
+ assert result.name == expected.name
with pytest.raises((IndexError, ValueError)):
# either depending on numpy version
diff --git a/pandas/tests/indexes/datetimes/test_astype.py b/pandas/tests/indexes/datetimes/test_astype.py
index 35031746efebe..1c8189d0c75ac 100644
--- a/pandas/tests/indexes/datetimes/test_astype.py
+++ b/pandas/tests/indexes/datetimes/test_astype.py
@@ -131,8 +131,8 @@ def _check_rng(rng):
assert isinstance(converted, np.ndarray)
for x, stamp in zip(converted, rng):
assert isinstance(x, datetime)
- self.assertEqual(x, stamp.to_pydatetime())
- self.assertEqual(x.tzinfo, stamp.tzinfo)
+ assert x == stamp.to_pydatetime()
+ assert x.tzinfo == stamp.tzinfo
rng = date_range('20090415', '20090519')
rng_eastern = date_range('20090415', '20090519', tz='US/Eastern')
@@ -151,8 +151,8 @@ def _check_rng(rng):
assert isinstance(converted, np.ndarray)
for x, stamp in zip(converted, rng):
assert isinstance(x, datetime)
- self.assertEqual(x, stamp.to_pydatetime())
- self.assertEqual(x.tzinfo, stamp.tzinfo)
+ assert x == stamp.to_pydatetime()
+ assert x.tzinfo == stamp.tzinfo
rng = date_range('20090415', '20090519')
rng_eastern = date_range('20090415', '20090519',
@@ -172,8 +172,8 @@ def _check_rng(rng):
assert isinstance(converted, np.ndarray)
for x, stamp in zip(converted, rng):
assert isinstance(x, datetime)
- self.assertEqual(x, stamp.to_pydatetime())
- self.assertEqual(x.tzinfo, stamp.tzinfo)
+ assert x == stamp.to_pydatetime()
+ assert x.tzinfo == stamp.tzinfo
rng = date_range('20090415', '20090519')
rng_eastern = date_range('20090415', '20090519',
@@ -196,17 +196,17 @@ def test_to_period_millisecond(self):
index = self.index
period = index.to_period(freq='L')
- self.assertEqual(2, len(period))
- self.assertEqual(period[0], Period('2007-01-01 10:11:12.123Z', 'L'))
- self.assertEqual(period[1], Period('2007-01-01 10:11:13.789Z', 'L'))
+ assert 2 == len(period)
+ assert period[0] == Period('2007-01-01 10:11:12.123Z', 'L')
+ assert period[1] == Period('2007-01-01 10:11:13.789Z', 'L')
def test_to_period_microsecond(self):
index = self.index
period = index.to_period(freq='U')
- self.assertEqual(2, len(period))
- self.assertEqual(period[0], Period('2007-01-01 10:11:12.123456Z', 'U'))
- self.assertEqual(period[1], Period('2007-01-01 10:11:13.789123Z', 'U'))
+ assert 2 == len(period)
+ assert period[0] == Period('2007-01-01 10:11:12.123456Z', 'U')
+ assert period[1] == Period('2007-01-01 10:11:13.789123Z', 'U')
def test_to_period_tz_pytz(self):
tm._skip_if_no_pytz()
@@ -220,7 +220,7 @@ def test_to_period_tz_pytz(self):
result = ts.to_period()[0]
expected = ts[0].to_period()
- self.assertEqual(result, expected)
+ assert result == expected
tm.assert_index_equal(ts.to_period(), xp)
ts = date_range('1/1/2000', '4/1/2000', tz=UTC)
@@ -228,7 +228,7 @@ def test_to_period_tz_pytz(self):
result = ts.to_period()[0]
expected = ts[0].to_period()
- self.assertEqual(result, expected)
+ assert result == expected
tm.assert_index_equal(ts.to_period(), xp)
ts = date_range('1/1/2000', '4/1/2000', tz=tzlocal())
@@ -236,7 +236,7 @@ def test_to_period_tz_pytz(self):
result = ts.to_period()[0]
expected = ts[0].to_period()
- self.assertEqual(result, expected)
+ assert result == expected
tm.assert_index_equal(ts.to_period(), xp)
def test_to_period_tz_explicit_pytz(self):
@@ -309,4 +309,4 @@ def test_astype_object(self):
exp_values = list(rng)
tm.assert_index_equal(casted, Index(exp_values, dtype=np.object_))
- self.assertEqual(casted.tolist(), exp_values)
+ assert casted.tolist() == exp_values
diff --git a/pandas/tests/indexes/datetimes/test_construction.py b/pandas/tests/indexes/datetimes/test_construction.py
index 098d4755b385c..9af4136afd025 100644
--- a/pandas/tests/indexes/datetimes/test_construction.py
+++ b/pandas/tests/indexes/datetimes/test_construction.py
@@ -436,14 +436,14 @@ def test_constructor_dtype(self):
def test_constructor_name(self):
idx = DatetimeIndex(start='2000-01-01', periods=1, freq='A',
name='TEST')
- self.assertEqual(idx.name, 'TEST')
+ assert idx.name == 'TEST'
def test_000constructor_resolution(self):
# 2252
t1 = Timestamp((1352934390 * 1000000000) + 1000000 + 1000 + 1)
idx = DatetimeIndex([t1])
- self.assertEqual(idx.nanosecond[0], t1.nanosecond)
+ assert idx.nanosecond[0] == t1.nanosecond
class TestTimeSeries(tm.TestCase):
@@ -452,7 +452,7 @@ def test_dti_constructor_preserve_dti_freq(self):
rng = date_range('1/1/2000', '1/2/2000', freq='5min')
rng2 = DatetimeIndex(rng)
- self.assertEqual(rng.freq, rng2.freq)
+ assert rng.freq == rng2.freq
def test_dti_constructor_years_only(self):
# GH 6961
@@ -487,7 +487,7 @@ def test_dti_constructor_small_int(self):
def test_ctor_str_intraday(self):
rng = DatetimeIndex(['1-1-2000 00:00:01'])
- self.assertEqual(rng[0].second, 1)
+ assert rng[0].second == 1
def test_is_(self):
dti = DatetimeIndex(start='1/1/2005', end='12/1/2005', freq='M')
@@ -565,29 +565,29 @@ def test_datetimeindex_constructor_misc(self):
sdate = datetime(1999, 12, 25)
edate = datetime(2000, 1, 1)
idx = DatetimeIndex(start=sdate, freq='1B', periods=20)
- self.assertEqual(len(idx), 20)
- self.assertEqual(idx[0], sdate + 0 * offsets.BDay())
- self.assertEqual(idx.freq, 'B')
+ assert len(idx) == 20
+ assert idx[0] == sdate + 0 * offsets.BDay()
+ assert idx.freq == 'B'
idx = DatetimeIndex(end=edate, freq=('D', 5), periods=20)
- self.assertEqual(len(idx), 20)
- self.assertEqual(idx[-1], edate)
- self.assertEqual(idx.freq, '5D')
+ assert len(idx) == 20
+ assert idx[-1] == edate
+ assert idx.freq == '5D'
idx1 = DatetimeIndex(start=sdate, end=edate, freq='W-SUN')
idx2 = DatetimeIndex(start=sdate, end=edate,
freq=offsets.Week(weekday=6))
- self.assertEqual(len(idx1), len(idx2))
- self.assertEqual(idx1.offset, idx2.offset)
+ assert len(idx1) == len(idx2)
+ assert idx1.offset == idx2.offset
idx1 = DatetimeIndex(start=sdate, end=edate, freq='QS')
idx2 = DatetimeIndex(start=sdate, end=edate,
freq=offsets.QuarterBegin(startingMonth=1))
- self.assertEqual(len(idx1), len(idx2))
- self.assertEqual(idx1.offset, idx2.offset)
+ assert len(idx1) == len(idx2)
+ assert idx1.offset == idx2.offset
idx1 = DatetimeIndex(start=sdate, end=edate, freq='BQ')
idx2 = DatetimeIndex(start=sdate, end=edate,
freq=offsets.BQuarterEnd(startingMonth=12))
- self.assertEqual(len(idx1), len(idx2))
- self.assertEqual(idx1.offset, idx2.offset)
+ assert len(idx1) == len(idx2)
+ assert idx1.offset == idx2.offset
diff --git a/pandas/tests/indexes/datetimes/test_date_range.py b/pandas/tests/indexes/datetimes/test_date_range.py
index 6b011ad6db98e..a9fdd40406770 100644
--- a/pandas/tests/indexes/datetimes/test_date_range.py
+++ b/pandas/tests/indexes/datetimes/test_date_range.py
@@ -30,7 +30,7 @@ class TestDateRanges(TestData, tm.TestCase):
def test_date_range_gen_error(self):
rng = date_range('1/1/2000 00:00', '1/1/2000 00:18', freq='5min')
- self.assertEqual(len(rng), 4)
+ assert len(rng) == 4
def test_date_range_negative_freq(self):
# GH 11018
@@ -38,20 +38,20 @@ def test_date_range_negative_freq(self):
exp = pd.DatetimeIndex(['2011-12-31', '2009-12-31',
'2007-12-31'], freq='-2A')
tm.assert_index_equal(rng, exp)
- self.assertEqual(rng.freq, '-2A')
+ assert rng.freq == '-2A'
rng = date_range('2011-01-31', freq='-2M', periods=3)
exp = pd.DatetimeIndex(['2011-01-31', '2010-11-30',
'2010-09-30'], freq='-2M')
tm.assert_index_equal(rng, exp)
- self.assertEqual(rng.freq, '-2M')
+ assert rng.freq == '-2M'
def test_date_range_bms_bug(self):
# #1645
rng = date_range('1/1/2000', periods=10, freq='BMS')
ex_first = Timestamp('2000-01-03')
- self.assertEqual(rng[0], ex_first)
+ assert rng[0] == ex_first
def test_date_range_normalize(self):
snap = datetime.today()
@@ -68,13 +68,13 @@ def test_date_range_normalize(self):
freq='B')
the_time = time(8, 15)
for val in rng:
- self.assertEqual(val.time(), the_time)
+ assert val.time() == the_time
def test_date_range_fy5252(self):
dr = date_range(start="2013-01-01", periods=2, freq=offsets.FY5253(
startingMonth=1, weekday=3, variation="nearest"))
- self.assertEqual(dr[0], Timestamp('2013-01-31'))
- self.assertEqual(dr[1], Timestamp('2014-01-30'))
+ assert dr[0] == Timestamp('2013-01-31')
+ assert dr[1] == Timestamp('2014-01-30')
def test_date_range_ambiguous_arguments(self):
# #2538
@@ -138,7 +138,7 @@ def test_compat_replace(self):
freq='QS-JAN'),
periods=f(76),
freq='QS-JAN')
- self.assertEqual(len(result), 76)
+ assert len(result) == 76
def test_catch_infinite_loop(self):
offset = offsets.DateOffset(minute=5)
@@ -152,12 +152,12 @@ class TestGenRangeGeneration(tm.TestCase):
def test_generate(self):
rng1 = list(generate_range(START, END, offset=BDay()))
rng2 = list(generate_range(START, END, time_rule='B'))
- self.assertEqual(rng1, rng2)
+ assert rng1 == rng2
def test_generate_cday(self):
rng1 = list(generate_range(START, END, offset=CDay()))
rng2 = list(generate_range(START, END, time_rule='C'))
- self.assertEqual(rng1, rng2)
+ assert rng1 == rng2
def test_1(self):
eq_gen_range(dict(start=datetime(2009, 3, 25), periods=2),
@@ -241,14 +241,14 @@ def test_cached_range(self):
def test_cached_range_bug(self):
rng = date_range('2010-09-01 05:00:00', periods=50,
freq=DateOffset(hours=6))
- self.assertEqual(len(rng), 50)
- self.assertEqual(rng[0], datetime(2010, 9, 1, 5))
+ assert len(rng) == 50
+ assert rng[0] == datetime(2010, 9, 1, 5)
def test_timezone_comparaison_bug(self):
# smoke test
start = Timestamp('20130220 10:00', tz='US/Eastern')
result = date_range(start, periods=2, tz='US/Eastern')
- self.assertEqual(len(result), 2)
+ assert len(result) == 2
def test_timezone_comparaison_assert(self):
start = Timestamp('20130220 10:00', tz='US/Eastern')
@@ -308,19 +308,19 @@ def test_range_tz_pytz(self):
end = tz.localize(datetime(2011, 1, 3))
dr = date_range(start=start, periods=3)
- self.assertEqual(dr.tz.zone, tz.zone)
- self.assertEqual(dr[0], start)
- self.assertEqual(dr[2], end)
+ assert dr.tz.zone == tz.zone
+ assert dr[0] == start
+ assert dr[2] == end
dr = date_range(end=end, periods=3)
- self.assertEqual(dr.tz.zone, tz.zone)
- self.assertEqual(dr[0], start)
- self.assertEqual(dr[2], end)
+ assert dr.tz.zone == tz.zone
+ assert dr[0] == start
+ assert dr[2] == end
dr = date_range(start=start, end=end)
- self.assertEqual(dr.tz.zone, tz.zone)
- self.assertEqual(dr[0], start)
- self.assertEqual(dr[2], end)
+ assert dr.tz.zone == tz.zone
+ assert dr[0] == start
+ assert dr[2] == end
def test_range_tz_dst_straddle_pytz(self):
@@ -333,20 +333,20 @@ def test_range_tz_dst_straddle_pytz(self):
tz.localize(datetime(2013, 11, 6)))]
for (start, end) in dates:
dr = date_range(start, end, freq='D')
- self.assertEqual(dr[0], start)
- self.assertEqual(dr[-1], end)
- self.assertEqual(np.all(dr.hour == 0), True)
+ assert dr[0] == start
+ assert dr[-1] == end
+ assert np.all(dr.hour == 0)
dr = date_range(start, end, freq='D', tz='US/Eastern')
- self.assertEqual(dr[0], start)
- self.assertEqual(dr[-1], end)
- self.assertEqual(np.all(dr.hour == 0), True)
+ assert dr[0] == start
+ assert dr[-1] == end
+ assert np.all(dr.hour == 0)
dr = date_range(start.replace(tzinfo=None), end.replace(
tzinfo=None), freq='D', tz='US/Eastern')
- self.assertEqual(dr[0], start)
- self.assertEqual(dr[-1], end)
- self.assertEqual(np.all(dr.hour == 0), True)
+ assert dr[0] == start
+ assert dr[-1] == end
+ assert np.all(dr.hour == 0)
def test_range_tz_dateutil(self):
# GH 2906
@@ -461,8 +461,8 @@ def test_range_closed_boundary(self):
def test_years_only(self):
# GH 6961
dr = date_range('2014', '2015', freq='M')
- self.assertEqual(dr[0], datetime(2014, 1, 31))
- self.assertEqual(dr[-1], datetime(2014, 12, 31))
+ assert dr[0] == datetime(2014, 1, 31)
+ assert dr[-1] == datetime(2014, 12, 31)
def test_freq_divides_end_in_nanos(self):
# GH 10885
diff --git a/pandas/tests/indexes/datetimes/test_datetime.py b/pandas/tests/indexes/datetimes/test_datetime.py
index 83f9119377b19..7b22d1615fbeb 100644
--- a/pandas/tests/indexes/datetimes/test_datetime.py
+++ b/pandas/tests/indexes/datetimes/test_datetime.py
@@ -21,35 +21,35 @@ def test_get_loc(self):
idx = pd.date_range('2000-01-01', periods=3)
for method in [None, 'pad', 'backfill', 'nearest']:
- self.assertEqual(idx.get_loc(idx[1], method), 1)
- self.assertEqual(idx.get_loc(idx[1].to_pydatetime(), method), 1)
- self.assertEqual(idx.get_loc(str(idx[1]), method), 1)
+ assert idx.get_loc(idx[1], method) == 1
+ assert idx.get_loc(idx[1].to_pydatetime(), method) == 1
+ assert idx.get_loc(str(idx[1]), method) == 1
+
if method is not None:
- self.assertEqual(idx.get_loc(idx[1], method,
- tolerance=pd.Timedelta('0 days')),
- 1)
-
- self.assertEqual(idx.get_loc('2000-01-01', method='nearest'), 0)
- self.assertEqual(idx.get_loc('2000-01-01T12', method='nearest'), 1)
-
- self.assertEqual(idx.get_loc('2000-01-01T12', method='nearest',
- tolerance='1 day'), 1)
- self.assertEqual(idx.get_loc('2000-01-01T12', method='nearest',
- tolerance=pd.Timedelta('1D')), 1)
- self.assertEqual(idx.get_loc('2000-01-01T12', method='nearest',
- tolerance=np.timedelta64(1, 'D')), 1)
- self.assertEqual(idx.get_loc('2000-01-01T12', method='nearest',
- tolerance=timedelta(1)), 1)
+ assert idx.get_loc(idx[1], method,
+ tolerance=pd.Timedelta('0 days')) == 1
+
+ assert idx.get_loc('2000-01-01', method='nearest') == 0
+ assert idx.get_loc('2000-01-01T12', method='nearest') == 1
+
+ assert idx.get_loc('2000-01-01T12', method='nearest',
+ tolerance='1 day') == 1
+ assert idx.get_loc('2000-01-01T12', method='nearest',
+ tolerance=pd.Timedelta('1D')) == 1
+ assert idx.get_loc('2000-01-01T12', method='nearest',
+ tolerance=np.timedelta64(1, 'D')) == 1
+ assert idx.get_loc('2000-01-01T12', method='nearest',
+ tolerance=timedelta(1)) == 1
with tm.assert_raises_regex(ValueError, 'must be convertible'):
idx.get_loc('2000-01-01T12', method='nearest', tolerance='foo')
with pytest.raises(KeyError):
idx.get_loc('2000-01-01T03', method='nearest', tolerance='2 hours')
- self.assertEqual(idx.get_loc('2000', method='nearest'), slice(0, 3))
- self.assertEqual(idx.get_loc('2000-01', method='nearest'), slice(0, 3))
+ assert idx.get_loc('2000', method='nearest') == slice(0, 3)
+ assert idx.get_loc('2000-01', method='nearest') == slice(0, 3)
- self.assertEqual(idx.get_loc('1999', method='nearest'), 0)
- self.assertEqual(idx.get_loc('2001', method='nearest'), 2)
+ assert idx.get_loc('1999', method='nearest') == 0
+ assert idx.get_loc('2001', method='nearest') == 2
with pytest.raises(KeyError):
idx.get_loc('1999', method='pad')
@@ -62,9 +62,9 @@ def test_get_loc(self):
idx.get_loc(slice(2))
idx = pd.to_datetime(['2000-01-01', '2000-01-04'])
- self.assertEqual(idx.get_loc('2000-01-02', method='nearest'), 0)
- self.assertEqual(idx.get_loc('2000-01-03', method='nearest'), 1)
- self.assertEqual(idx.get_loc('2000-01', method='nearest'), slice(0, 2))
+ assert idx.get_loc('2000-01-02', method='nearest') == 0
+ assert idx.get_loc('2000-01-03', method='nearest') == 1
+ assert idx.get_loc('2000-01', method='nearest') == slice(0, 2)
# time indexing
idx = pd.date_range('2000-01-01', periods=24, freq='H')
@@ -114,8 +114,8 @@ def test_roundtrip_pickle_with_tz(self):
def test_reindex_preserves_tz_if_target_is_empty_list_or_array(self):
# GH7774
index = date_range('20130101', periods=3, tz='US/Eastern')
- self.assertEqual(str(index.reindex([])[0].tz), 'US/Eastern')
- self.assertEqual(str(index.reindex(np.array([]))[0].tz), 'US/Eastern')
+ assert str(index.reindex([])[0].tz) == 'US/Eastern'
+ assert str(index.reindex(np.array([]))[0].tz) == 'US/Eastern'
def test_time_loc(self): # GH8667
from datetime import time
@@ -150,10 +150,10 @@ def test_time_overflow_for_32bit_machines(self):
periods = np.int_(1000)
idx1 = pd.date_range(start='2000', periods=periods, freq='S')
- self.assertEqual(len(idx1), periods)
+ assert len(idx1) == periods
idx2 = pd.date_range(end='2000', periods=periods, freq='S')
- self.assertEqual(len(idx2), periods)
+ assert len(idx2) == periods
def test_nat(self):
assert DatetimeIndex([np.nan])[0] is pd.NaT
@@ -166,13 +166,13 @@ def test_ufunc_coercions(self):
assert isinstance(result, DatetimeIndex)
exp = date_range('2011-01-02', periods=3, freq='2D', name='x')
tm.assert_index_equal(result, exp)
- self.assertEqual(result.freq, '2D')
+ assert result.freq == '2D'
for result in [idx - delta, np.subtract(idx, delta)]:
assert isinstance(result, DatetimeIndex)
exp = date_range('2010-12-31', periods=3, freq='2D', name='x')
tm.assert_index_equal(result, exp)
- self.assertEqual(result.freq, '2D')
+ assert result.freq == '2D'
delta = np.array([np.timedelta64(1, 'D'), np.timedelta64(2, 'D'),
np.timedelta64(3, 'D')])
@@ -181,14 +181,14 @@ def test_ufunc_coercions(self):
exp = DatetimeIndex(['2011-01-02', '2011-01-05', '2011-01-08'],
freq='3D', name='x')
tm.assert_index_equal(result, exp)
- self.assertEqual(result.freq, '3D')
+ assert result.freq == '3D'
for result in [idx - delta, np.subtract(idx, delta)]:
assert isinstance(result, DatetimeIndex)
exp = DatetimeIndex(['2010-12-31', '2011-01-01', '2011-01-02'],
freq='D', name='x')
tm.assert_index_equal(result, exp)
- self.assertEqual(result.freq, 'D')
+ assert result.freq == 'D'
def test_week_of_month_frequency(self):
# GH 5348: "ValueError: Could not evaluate WOM-1SUN" shouldn't raise
@@ -240,14 +240,14 @@ def test_to_period_nofreq(self):
idx = DatetimeIndex(['2000-01-01', '2000-01-02', '2000-01-03'],
freq='infer')
- self.assertEqual(idx.freqstr, 'D')
+ assert idx.freqstr == 'D'
expected = pd.PeriodIndex(['2000-01-01', '2000-01-02',
'2000-01-03'], freq='D')
tm.assert_index_equal(idx.to_period(), expected)
# GH 7606
idx = DatetimeIndex(['2000-01-01', '2000-01-02', '2000-01-03'])
- self.assertEqual(idx.freqstr, None)
+ assert idx.freqstr is None
tm.assert_index_equal(idx.to_period(), expected)
def test_comparisons_coverage(self):
@@ -373,7 +373,7 @@ def test_iteration_preserves_tz(self):
for i, ts in enumerate(index):
result = ts
expected = index[i]
- self.assertEqual(result, expected)
+ assert result == expected
index = date_range("2012-01-01", periods=3, freq='H',
tz=dateutil.tz.tzoffset(None, -28800))
@@ -381,8 +381,8 @@ def test_iteration_preserves_tz(self):
for i, ts in enumerate(index):
result = ts
expected = index[i]
- self.assertEqual(result._repr_base, expected._repr_base)
- self.assertEqual(result, expected)
+ assert result._repr_base == expected._repr_base
+ assert result == expected
# 9100
index = pd.DatetimeIndex(['2014-12-01 03:32:39.987000-08:00',
@@ -390,8 +390,8 @@ def test_iteration_preserves_tz(self):
for i, ts in enumerate(index):
result = ts
expected = index[i]
- self.assertEqual(result._repr_base, expected._repr_base)
- self.assertEqual(result, expected)
+ assert result._repr_base == expected._repr_base
+ assert result == expected
def test_misc_coverage(self):
rng = date_range('1/1/2000', periods=5)
@@ -410,10 +410,10 @@ def test_string_index_series_name_converted(self):
index=date_range('1/1/2000', periods=10))
result = df.loc['1/3/2000']
- self.assertEqual(result.name, df.index[2])
+ assert result.name == df.index[2]
result = df.T['1/3/2000']
- self.assertEqual(result.name, df.index[2])
+ assert result.name == df.index[2]
def test_overflow_offset(self):
# xref https://github.com/statsmodels/statsmodels/issues/3374
@@ -444,8 +444,8 @@ def test_get_duplicates(self):
def test_argmin_argmax(self):
idx = DatetimeIndex(['2000-01-04', '2000-01-01', '2000-01-02'])
- self.assertEqual(idx.argmin(), 1)
- self.assertEqual(idx.argmax(), 0)
+ assert idx.argmin() == 1
+ assert idx.argmax() == 0
def test_sort_values(self):
idx = DatetimeIndex(['2000-01-04', '2000-01-01', '2000-01-02'])
@@ -481,8 +481,8 @@ def test_take(self):
tm.assert_index_equal(taken, expected)
assert isinstance(taken, DatetimeIndex)
assert taken.freq is None
- self.assertEqual(taken.tz, expected.tz)
- self.assertEqual(taken.name, expected.name)
+ assert taken.tz == expected.tz
+ assert taken.name == expected.name
def test_take_fill_value(self):
# GH 12631
@@ -601,8 +601,8 @@ def test_does_not_convert_mixed_integer(self):
r_idx_type='i', c_idx_type='dt')
cols = df.columns.join(df.index, how='outer')
joined = cols.join(df.columns)
- self.assertEqual(cols.dtype, np.dtype('O'))
- self.assertEqual(cols.dtype, joined.dtype)
+ assert cols.dtype == np.dtype('O')
+ assert cols.dtype == joined.dtype
tm.assert_numpy_array_equal(cols.values, joined.values)
def test_slice_keeps_name(self):
@@ -610,7 +610,7 @@ def test_slice_keeps_name(self):
st = pd.Timestamp('2013-07-01 00:00:00', tz='America/Los_Angeles')
et = pd.Timestamp('2013-07-02 00:00:00', tz='America/Los_Angeles')
dr = pd.date_range(st, et, freq='H', name='timebucket')
- self.assertEqual(dr[1:].name, dr.name)
+ assert dr[1:].name == dr.name
def test_join_self(self):
index = date_range('1/1/2000', periods=10)
@@ -769,8 +769,8 @@ def test_slice_bounds_empty(self):
right = empty_idx._maybe_cast_slice_bound('2015-01-02', 'right', 'loc')
exp = Timestamp('2015-01-02 23:59:59.999999999')
- self.assertEqual(right, exp)
+ assert right == exp
left = empty_idx._maybe_cast_slice_bound('2015-01-02', 'left', 'loc')
exp = Timestamp('2015-01-02 00:00:00')
- self.assertEqual(left, exp)
+ assert left == exp
diff --git a/pandas/tests/indexes/datetimes/test_indexing.py b/pandas/tests/indexes/datetimes/test_indexing.py
index 568e045d9f5e7..92134a296b08f 100644
--- a/pandas/tests/indexes/datetimes/test_indexing.py
+++ b/pandas/tests/indexes/datetimes/test_indexing.py
@@ -164,8 +164,8 @@ def test_delete(self):
for n, expected in compat.iteritems(cases):
result = idx.delete(n)
tm.assert_index_equal(result, expected)
- self.assertEqual(result.name, expected.name)
- self.assertEqual(result.freq, expected.freq)
+ assert result.name == expected.name
+ assert result.freq == expected.freq
with pytest.raises((IndexError, ValueError)):
# either depending on numpy version
@@ -179,17 +179,17 @@ def test_delete(self):
freq='H', name='idx', tz=tz)
result = idx.delete(0)
tm.assert_index_equal(result, expected)
- self.assertEqual(result.name, expected.name)
- self.assertEqual(result.freqstr, 'H')
- self.assertEqual(result.tz, expected.tz)
+ assert result.name == expected.name
+ assert result.freqstr == 'H'
+ assert result.tz == expected.tz
expected = date_range(start='2000-01-01 09:00', periods=9,
freq='H', name='idx', tz=tz)
result = idx.delete(-1)
tm.assert_index_equal(result, expected)
- self.assertEqual(result.name, expected.name)
- self.assertEqual(result.freqstr, 'H')
- self.assertEqual(result.tz, expected.tz)
+ assert result.name == expected.name
+ assert result.freqstr == 'H'
+ assert result.tz == expected.tz
def test_delete_slice(self):
idx = date_range(start='2000-01-01', periods=10, freq='D', name='idx')
@@ -211,13 +211,13 @@ def test_delete_slice(self):
for n, expected in compat.iteritems(cases):
result = idx.delete(n)
tm.assert_index_equal(result, expected)
- self.assertEqual(result.name, expected.name)
- self.assertEqual(result.freq, expected.freq)
+ assert result.name == expected.name
+ assert result.freq == expected.freq
result = idx.delete(slice(n[0], n[-1] + 1))
tm.assert_index_equal(result, expected)
- self.assertEqual(result.name, expected.name)
- self.assertEqual(result.freq, expected.freq)
+ assert result.name == expected.name
+ assert result.freq == expected.freq
for tz in [None, 'Asia/Tokyo', 'US/Pacific']:
ts = pd.Series(1, index=pd.date_range(
@@ -227,9 +227,9 @@ def test_delete_slice(self):
expected = pd.date_range('2000-01-01 14:00', periods=5, freq='H',
name='idx', tz=tz)
tm.assert_index_equal(result, expected)
- self.assertEqual(result.name, expected.name)
- self.assertEqual(result.freq, expected.freq)
- self.assertEqual(result.tz, expected.tz)
+ assert result.name == expected.name
+ assert result.freq == expected.freq
+ assert result.tz == expected.tz
# reset freq to None
result = ts.drop(ts.index[[1, 3, 5, 7, 9]]).index
@@ -238,6 +238,6 @@ def test_delete_slice(self):
'2000-01-01 15:00', '2000-01-01 17:00'],
freq=None, name='idx', tz=tz)
tm.assert_index_equal(result, expected)
- self.assertEqual(result.name, expected.name)
- self.assertEqual(result.freq, expected.freq)
- self.assertEqual(result.tz, expected.tz)
+ assert result.name == expected.name
+ assert result.freq == expected.freq
+ assert result.tz == expected.tz
diff --git a/pandas/tests/indexes/datetimes/test_misc.py b/pandas/tests/indexes/datetimes/test_misc.py
index 55165aa39a1a4..ae5d29ca426b4 100644
--- a/pandas/tests/indexes/datetimes/test_misc.py
+++ b/pandas/tests/indexes/datetimes/test_misc.py
@@ -181,81 +181,80 @@ def test_datetimeindex_accessors(self):
periods=365, tz='US/Eastern')
for dti in [dti_naive, dti_tz]:
- self.assertEqual(dti.year[0], 1998)
- self.assertEqual(dti.month[0], 1)
- self.assertEqual(dti.day[0], 1)
- self.assertEqual(dti.hour[0], 0)
- self.assertEqual(dti.minute[0], 0)
- self.assertEqual(dti.second[0], 0)
- self.assertEqual(dti.microsecond[0], 0)
- self.assertEqual(dti.dayofweek[0], 3)
-
- self.assertEqual(dti.dayofyear[0], 1)
- self.assertEqual(dti.dayofyear[120], 121)
-
- self.assertEqual(dti.weekofyear[0], 1)
- self.assertEqual(dti.weekofyear[120], 18)
-
- self.assertEqual(dti.quarter[0], 1)
- self.assertEqual(dti.quarter[120], 2)
-
- self.assertEqual(dti.days_in_month[0], 31)
- self.assertEqual(dti.days_in_month[90], 30)
-
- self.assertEqual(dti.is_month_start[0], True)
- self.assertEqual(dti.is_month_start[1], False)
- self.assertEqual(dti.is_month_start[31], True)
- self.assertEqual(dti.is_quarter_start[0], True)
- self.assertEqual(dti.is_quarter_start[90], True)
- self.assertEqual(dti.is_year_start[0], True)
- self.assertEqual(dti.is_year_start[364], False)
- self.assertEqual(dti.is_month_end[0], False)
- self.assertEqual(dti.is_month_end[30], True)
- self.assertEqual(dti.is_month_end[31], False)
- self.assertEqual(dti.is_month_end[364], True)
- self.assertEqual(dti.is_quarter_end[0], False)
- self.assertEqual(dti.is_quarter_end[30], False)
- self.assertEqual(dti.is_quarter_end[89], True)
- self.assertEqual(dti.is_quarter_end[364], True)
- self.assertEqual(dti.is_year_end[0], False)
- self.assertEqual(dti.is_year_end[364], True)
+ assert dti.year[0] == 1998
+ assert dti.month[0] == 1
+ assert dti.day[0] == 1
+ assert dti.hour[0] == 0
+ assert dti.minute[0] == 0
+ assert dti.second[0] == 0
+ assert dti.microsecond[0] == 0
+ assert dti.dayofweek[0] == 3
+
+ assert dti.dayofyear[0] == 1
+ assert dti.dayofyear[120] == 121
+
+ assert dti.weekofyear[0] == 1
+ assert dti.weekofyear[120] == 18
+
+ assert dti.quarter[0] == 1
+ assert dti.quarter[120] == 2
+
+ assert dti.days_in_month[0] == 31
+ assert dti.days_in_month[90] == 30
+
+ assert dti.is_month_start[0]
+ assert not dti.is_month_start[1]
+ assert dti.is_month_start[31]
+ assert dti.is_quarter_start[0]
+ assert dti.is_quarter_start[90]
+ assert dti.is_year_start[0]
+ assert not dti.is_year_start[364]
+ assert not dti.is_month_end[0]
+ assert dti.is_month_end[30]
+ assert not dti.is_month_end[31]
+ assert dti.is_month_end[364]
+ assert not dti.is_quarter_end[0]
+ assert not dti.is_quarter_end[30]
+ assert dti.is_quarter_end[89]
+ assert dti.is_quarter_end[364]
+ assert not dti.is_year_end[0]
+ assert dti.is_year_end[364]
# GH 11128
- self.assertEqual(dti.weekday_name[4], u'Monday')
- self.assertEqual(dti.weekday_name[5], u'Tuesday')
- self.assertEqual(dti.weekday_name[6], u'Wednesday')
- self.assertEqual(dti.weekday_name[7], u'Thursday')
- self.assertEqual(dti.weekday_name[8], u'Friday')
- self.assertEqual(dti.weekday_name[9], u'Saturday')
- self.assertEqual(dti.weekday_name[10], u'Sunday')
-
- self.assertEqual(Timestamp('2016-04-04').weekday_name, u'Monday')
- self.assertEqual(Timestamp('2016-04-05').weekday_name, u'Tuesday')
- self.assertEqual(Timestamp('2016-04-06').weekday_name,
- u'Wednesday')
- self.assertEqual(Timestamp('2016-04-07').weekday_name, u'Thursday')
- self.assertEqual(Timestamp('2016-04-08').weekday_name, u'Friday')
- self.assertEqual(Timestamp('2016-04-09').weekday_name, u'Saturday')
- self.assertEqual(Timestamp('2016-04-10').weekday_name, u'Sunday')
-
- self.assertEqual(len(dti.year), 365)
- self.assertEqual(len(dti.month), 365)
- self.assertEqual(len(dti.day), 365)
- self.assertEqual(len(dti.hour), 365)
- self.assertEqual(len(dti.minute), 365)
- self.assertEqual(len(dti.second), 365)
- self.assertEqual(len(dti.microsecond), 365)
- self.assertEqual(len(dti.dayofweek), 365)
- self.assertEqual(len(dti.dayofyear), 365)
- self.assertEqual(len(dti.weekofyear), 365)
- self.assertEqual(len(dti.quarter), 365)
- self.assertEqual(len(dti.is_month_start), 365)
- self.assertEqual(len(dti.is_month_end), 365)
- self.assertEqual(len(dti.is_quarter_start), 365)
- self.assertEqual(len(dti.is_quarter_end), 365)
- self.assertEqual(len(dti.is_year_start), 365)
- self.assertEqual(len(dti.is_year_end), 365)
- self.assertEqual(len(dti.weekday_name), 365)
+ assert dti.weekday_name[4] == u'Monday'
+ assert dti.weekday_name[5] == u'Tuesday'
+ assert dti.weekday_name[6] == u'Wednesday'
+ assert dti.weekday_name[7] == u'Thursday'
+ assert dti.weekday_name[8] == u'Friday'
+ assert dti.weekday_name[9] == u'Saturday'
+ assert dti.weekday_name[10] == u'Sunday'
+
+ assert Timestamp('2016-04-04').weekday_name == u'Monday'
+ assert Timestamp('2016-04-05').weekday_name == u'Tuesday'
+ assert Timestamp('2016-04-06').weekday_name == u'Wednesday'
+ assert Timestamp('2016-04-07').weekday_name == u'Thursday'
+ assert Timestamp('2016-04-08').weekday_name == u'Friday'
+ assert Timestamp('2016-04-09').weekday_name == u'Saturday'
+ assert Timestamp('2016-04-10').weekday_name == u'Sunday'
+
+ assert len(dti.year) == 365
+ assert len(dti.month) == 365
+ assert len(dti.day) == 365
+ assert len(dti.hour) == 365
+ assert len(dti.minute) == 365
+ assert len(dti.second) == 365
+ assert len(dti.microsecond) == 365
+ assert len(dti.dayofweek) == 365
+ assert len(dti.dayofyear) == 365
+ assert len(dti.weekofyear) == 365
+ assert len(dti.quarter) == 365
+ assert len(dti.is_month_start) == 365
+ assert len(dti.is_month_end) == 365
+ assert len(dti.is_quarter_start) == 365
+ assert len(dti.is_quarter_end) == 365
+ assert len(dti.is_year_start) == 365
+ assert len(dti.is_year_end) == 365
+ assert len(dti.weekday_name) == 365
dti.name = 'name'
@@ -283,10 +282,10 @@ def test_datetimeindex_accessors(self):
dti = DatetimeIndex(freq='BQ-FEB', start=datetime(1998, 1, 1),
periods=4)
- self.assertEqual(sum(dti.is_quarter_start), 0)
- self.assertEqual(sum(dti.is_quarter_end), 4)
- self.assertEqual(sum(dti.is_year_start), 0)
- self.assertEqual(sum(dti.is_year_end), 1)
+ assert sum(dti.is_quarter_start) == 0
+ assert sum(dti.is_quarter_end) == 4
+ assert sum(dti.is_year_start) == 0
+ assert sum(dti.is_year_end) == 1
# Ensure is_start/end accessors throw ValueError for CustomBusinessDay,
# CBD requires np >= 1.7
@@ -296,7 +295,7 @@ def test_datetimeindex_accessors(self):
dti = DatetimeIndex(['2000-01-01', '2000-01-02', '2000-01-03'])
- self.assertEqual(dti.is_month_start[0], 1)
+ assert dti.is_month_start[0] == 1
tests = [
(Timestamp('2013-06-01', freq='M').is_month_start, 1),
@@ -333,7 +332,7 @@ def test_datetimeindex_accessors(self):
(Timestamp('2013-02-01').days_in_month, 28)]
for ts, value in tests:
- self.assertEqual(ts, value)
+ assert ts == value
def test_nanosecond_field(self):
dti = DatetimeIndex(np.arange(10))
diff --git a/pandas/tests/indexes/datetimes/test_ops.py b/pandas/tests/indexes/datetimes/test_ops.py
index fa1b2c0d7c68d..e25e3d448190e 100644
--- a/pandas/tests/indexes/datetimes/test_ops.py
+++ b/pandas/tests/indexes/datetimes/test_ops.py
@@ -45,9 +45,9 @@ def test_ops_properties_basic(self):
# attribute access should still work!
s = Series(dict(year=2000, month=1, day=10))
- self.assertEqual(s.year, 2000)
- self.assertEqual(s.month, 1)
- self.assertEqual(s.day, 10)
+ assert s.year == 2000
+ assert s.month == 1
+ assert s.day == 10
pytest.raises(AttributeError, lambda: s.weekday)
def test_asobject_tolist(self):
@@ -61,10 +61,10 @@ def test_asobject_tolist(self):
result = idx.asobject
assert isinstance(result, Index)
- self.assertEqual(result.dtype, object)
+ assert result.dtype == object
tm.assert_index_equal(result, expected)
- self.assertEqual(result.name, expected.name)
- self.assertEqual(idx.tolist(), expected_list)
+ assert result.name == expected.name
+ assert idx.tolist() == expected_list
idx = pd.date_range(start='2013-01-01', periods=4, freq='M',
name='idx', tz='Asia/Tokyo')
@@ -75,10 +75,10 @@ def test_asobject_tolist(self):
expected = pd.Index(expected_list, dtype=object, name='idx')
result = idx.asobject
assert isinstance(result, Index)
- self.assertEqual(result.dtype, object)
+ assert result.dtype == object
tm.assert_index_equal(result, expected)
- self.assertEqual(result.name, expected.name)
- self.assertEqual(idx.tolist(), expected_list)
+ assert result.name == expected.name
+ assert idx.tolist() == expected_list
idx = DatetimeIndex([datetime(2013, 1, 1), datetime(2013, 1, 2),
pd.NaT, datetime(2013, 1, 4)], name='idx')
@@ -88,10 +88,10 @@ def test_asobject_tolist(self):
expected = pd.Index(expected_list, dtype=object, name='idx')
result = idx.asobject
assert isinstance(result, Index)
- self.assertEqual(result.dtype, object)
+ assert result.dtype == object
tm.assert_index_equal(result, expected)
- self.assertEqual(result.name, expected.name)
- self.assertEqual(idx.tolist(), expected_list)
+ assert result.name == expected.name
+ assert idx.tolist() == expected_list
def test_minmax(self):
for tz in self.tz:
@@ -106,10 +106,10 @@ def test_minmax(self):
assert not idx2.is_monotonic
for idx in [idx1, idx2]:
- self.assertEqual(idx.min(), Timestamp('2011-01-01', tz=tz))
- self.assertEqual(idx.max(), Timestamp('2011-01-03', tz=tz))
- self.assertEqual(idx.argmin(), 0)
- self.assertEqual(idx.argmax(), 2)
+ assert idx.min() == Timestamp('2011-01-01', tz=tz)
+ assert idx.max() == Timestamp('2011-01-03', tz=tz)
+ assert idx.argmin() == 0
+ assert idx.argmax() == 2
for op in ['min', 'max']:
# Return NaT
@@ -125,17 +125,15 @@ def test_minmax(self):
def test_numpy_minmax(self):
dr = pd.date_range(start='2016-01-15', end='2016-01-20')
- self.assertEqual(np.min(dr),
- Timestamp('2016-01-15 00:00:00', freq='D'))
- self.assertEqual(np.max(dr),
- Timestamp('2016-01-20 00:00:00', freq='D'))
+ assert np.min(dr) == Timestamp('2016-01-15 00:00:00', freq='D')
+ assert np.max(dr) == Timestamp('2016-01-20 00:00:00', freq='D')
errmsg = "the 'out' parameter is not supported"
tm.assert_raises_regex(ValueError, errmsg, np.min, dr, out=0)
tm.assert_raises_regex(ValueError, errmsg, np.max, dr, out=0)
- self.assertEqual(np.argmin(dr), 0)
- self.assertEqual(np.argmax(dr), 5)
+ assert np.argmin(dr) == 0
+ assert np.argmax(dr) == 5
if not _np_version_under1p10:
errmsg = "the 'out' parameter is not supported"
@@ -160,7 +158,7 @@ def test_round(self):
expected_elt = expected_rng[1]
tm.assert_index_equal(rng.round(freq='H'), expected_rng)
- self.assertEqual(elt.round(freq='H'), expected_elt)
+ assert elt.round(freq='H') == expected_elt
msg = pd.tseries.frequencies._INVALID_FREQ_ERROR
with tm.assert_raises_regex(ValueError, msg):
@@ -200,7 +198,7 @@ def test_repeat_range(self):
result = rng.repeat(5)
assert result.freq is None
- self.assertEqual(len(result), 5 * len(rng))
+ assert len(result) == 5 * len(rng)
for tz in self.tz:
index = pd.date_range('2001-01-01', periods=2, freq='D', tz=tz)
@@ -288,7 +286,7 @@ def test_representation(self):
for indx, expected in zip(idx, exp):
for func in ['__repr__', '__unicode__', '__str__']:
result = getattr(indx, func)()
- self.assertEqual(result, expected)
+ assert result == expected
def test_representation_to_series(self):
idx1 = DatetimeIndex([], freq='D')
@@ -336,7 +334,7 @@ def test_representation_to_series(self):
[exp1, exp2, exp3, exp4,
exp5, exp6, exp7]):
result = repr(Series(idx))
- self.assertEqual(result, expected)
+ assert result == expected
def test_summary(self):
# GH9116
@@ -372,7 +370,7 @@ def test_summary(self):
for idx, expected in zip([idx1, idx2, idx3, idx4, idx5, idx6],
[exp1, exp2, exp3, exp4, exp5, exp6]):
result = idx.summary()
- self.assertEqual(result, expected)
+ assert result == expected
def test_resolution(self):
for freq, expected in zip(['A', 'Q', 'M', 'D', 'H', 'T',
@@ -383,7 +381,7 @@ def test_resolution(self):
for tz in self.tz:
idx = pd.date_range(start='2013-04-01', periods=30, freq=freq,
tz=tz)
- self.assertEqual(idx.resolution, expected)
+ assert idx.resolution == expected
def test_union(self):
for tz in self.tz:
@@ -724,39 +722,39 @@ def test_getitem(self):
for idx in [idx1, idx2]:
result = idx[0]
- self.assertEqual(result, Timestamp('2011-01-01', tz=idx.tz))
+ assert result == Timestamp('2011-01-01', tz=idx.tz)
result = idx[0:5]
expected = pd.date_range('2011-01-01', '2011-01-05', freq='D',
tz=idx.tz, name='idx')
tm.assert_index_equal(result, expected)
- self.assertEqual(result.freq, expected.freq)
+ assert result.freq == expected.freq
result = idx[0:10:2]
expected = pd.date_range('2011-01-01', '2011-01-09', freq='2D',
tz=idx.tz, name='idx')
tm.assert_index_equal(result, expected)
- self.assertEqual(result.freq, expected.freq)
+ assert result.freq == expected.freq
result = idx[-20:-5:3]
expected = pd.date_range('2011-01-12', '2011-01-24', freq='3D',
tz=idx.tz, name='idx')
tm.assert_index_equal(result, expected)
- self.assertEqual(result.freq, expected.freq)
+ assert result.freq == expected.freq
result = idx[4::-1]
expected = DatetimeIndex(['2011-01-05', '2011-01-04', '2011-01-03',
'2011-01-02', '2011-01-01'],
freq='-1D', tz=idx.tz, name='idx')
tm.assert_index_equal(result, expected)
- self.assertEqual(result.freq, expected.freq)
+ assert result.freq == expected.freq
def test_drop_duplicates_metadata(self):
# GH 10115
idx = pd.date_range('2011-01-01', '2011-01-31', freq='D', name='idx')
result = idx.drop_duplicates()
tm.assert_index_equal(idx, result)
- self.assertEqual(idx.freq, result.freq)
+ assert idx.freq == result.freq
idx_dup = idx.append(idx)
assert idx_dup.freq is None # freq is reset
@@ -793,25 +791,25 @@ def test_take(self):
for idx in [idx1, idx2]:
result = idx.take([0])
- self.assertEqual(result, Timestamp('2011-01-01', tz=idx.tz))
+ assert result == Timestamp('2011-01-01', tz=idx.tz)
result = idx.take([0, 1, 2])
expected = pd.date_range('2011-01-01', '2011-01-03', freq='D',
tz=idx.tz, name='idx')
tm.assert_index_equal(result, expected)
- self.assertEqual(result.freq, expected.freq)
+ assert result.freq == expected.freq
result = idx.take([0, 2, 4])
expected = pd.date_range('2011-01-01', '2011-01-05', freq='2D',
tz=idx.tz, name='idx')
tm.assert_index_equal(result, expected)
- self.assertEqual(result.freq, expected.freq)
+ assert result.freq == expected.freq
result = idx.take([7, 4, 1])
expected = pd.date_range('2011-01-08', '2011-01-02', freq='-3D',
tz=idx.tz, name='idx')
tm.assert_index_equal(result, expected)
- self.assertEqual(result.freq, expected.freq)
+ assert result.freq == expected.freq
result = idx.take([3, 2, 5])
expected = DatetimeIndex(['2011-01-04', '2011-01-03',
@@ -851,7 +849,7 @@ def test_infer_freq(self):
idx = pd.date_range('2011-01-01 09:00:00', freq=freq, periods=10)
result = pd.DatetimeIndex(idx.asi8, freq='infer')
tm.assert_index_equal(idx, result)
- self.assertEqual(result.freq, freq)
+ assert result.freq == freq
def test_nat_new(self):
idx = pd.date_range('2011-01-01', freq='D', periods=5, name='x')
@@ -1139,18 +1137,18 @@ def test_getitem(self):
exp = DatetimeIndex(self.rng.view(np.ndarray)[:5])
tm.assert_index_equal(smaller, exp)
- self.assertEqual(smaller.offset, self.rng.offset)
+ assert smaller.offset == self.rng.offset
sliced = self.rng[::5]
- self.assertEqual(sliced.offset, BDay() * 5)
+ assert sliced.offset == BDay() * 5
fancy_indexed = self.rng[[4, 3, 2, 1, 0]]
- self.assertEqual(len(fancy_indexed), 5)
+ assert len(fancy_indexed) == 5
assert isinstance(fancy_indexed, DatetimeIndex)
assert fancy_indexed.freq is None
# 32-bit vs. 64-bit platforms
- self.assertEqual(self.rng[4], self.rng[np.int_(4)])
+ assert self.rng[4] == self.rng[np.int_(4)]
def test_getitem_matplotlib_hackaround(self):
values = self.rng[:, None]
@@ -1159,20 +1157,20 @@ def test_getitem_matplotlib_hackaround(self):
def test_shift(self):
shifted = self.rng.shift(5)
- self.assertEqual(shifted[0], self.rng[5])
- self.assertEqual(shifted.offset, self.rng.offset)
+ assert shifted[0] == self.rng[5]
+ assert shifted.offset == self.rng.offset
shifted = self.rng.shift(-5)
- self.assertEqual(shifted[5], self.rng[0])
- self.assertEqual(shifted.offset, self.rng.offset)
+ assert shifted[5] == self.rng[0]
+ assert shifted.offset == self.rng.offset
shifted = self.rng.shift(0)
- self.assertEqual(shifted[0], self.rng[0])
- self.assertEqual(shifted.offset, self.rng.offset)
+ assert shifted[0] == self.rng[0]
+ assert shifted.offset == self.rng.offset
rng = date_range(START, END, freq=BMonthEnd())
shifted = rng.shift(1, freq=BDay())
- self.assertEqual(shifted[0], rng[0] + BDay())
+ assert shifted[0] == rng[0] + BDay()
def test_summary(self):
self.rng.summary()
@@ -1234,18 +1232,18 @@ def test_getitem(self):
smaller = self.rng[:5]
exp = DatetimeIndex(self.rng.view(np.ndarray)[:5])
tm.assert_index_equal(smaller, exp)
- self.assertEqual(smaller.offset, self.rng.offset)
+ assert smaller.offset == self.rng.offset
sliced = self.rng[::5]
- self.assertEqual(sliced.offset, CDay() * 5)
+ assert sliced.offset == CDay() * 5
fancy_indexed = self.rng[[4, 3, 2, 1, 0]]
- self.assertEqual(len(fancy_indexed), 5)
+ assert len(fancy_indexed) == 5
assert isinstance(fancy_indexed, DatetimeIndex)
assert fancy_indexed.freq is None
# 32-bit vs. 64-bit platforms
- self.assertEqual(self.rng[4], self.rng[np.int_(4)])
+ assert self.rng[4] == self.rng[np.int_(4)]
def test_getitem_matplotlib_hackaround(self):
values = self.rng[:, None]
@@ -1255,22 +1253,22 @@ def test_getitem_matplotlib_hackaround(self):
def test_shift(self):
shifted = self.rng.shift(5)
- self.assertEqual(shifted[0], self.rng[5])
- self.assertEqual(shifted.offset, self.rng.offset)
+ assert shifted[0] == self.rng[5]
+ assert shifted.offset == self.rng.offset
shifted = self.rng.shift(-5)
- self.assertEqual(shifted[5], self.rng[0])
- self.assertEqual(shifted.offset, self.rng.offset)
+ assert shifted[5] == self.rng[0]
+ assert shifted.offset == self.rng.offset
shifted = self.rng.shift(0)
- self.assertEqual(shifted[0], self.rng[0])
- self.assertEqual(shifted.offset, self.rng.offset)
+ assert shifted[0] == self.rng[0]
+ assert shifted.offset == self.rng.offset
# PerformanceWarning
with warnings.catch_warnings(record=True):
rng = date_range(START, END, freq=BMonthEnd())
shifted = rng.shift(1, freq=CDay())
- self.assertEqual(shifted[0], rng[0] + CDay())
+ assert shifted[0] == rng[0] + CDay()
def test_pickle_unpickle(self):
unpickled = tm.round_trip_pickle(self.rng)
diff --git a/pandas/tests/indexes/datetimes/test_partial_slicing.py b/pandas/tests/indexes/datetimes/test_partial_slicing.py
index c3eda8b378c96..b3661ae0e7a97 100644
--- a/pandas/tests/indexes/datetimes/test_partial_slicing.py
+++ b/pandas/tests/indexes/datetimes/test_partial_slicing.py
@@ -30,24 +30,24 @@ def test_slice_year(self):
result = rng.get_loc('2009')
expected = slice(3288, 3653)
- self.assertEqual(result, expected)
+ assert result == expected
def test_slice_quarter(self):
dti = DatetimeIndex(freq='D', start=datetime(2000, 6, 1), periods=500)
s = Series(np.arange(len(dti)), index=dti)
- self.assertEqual(len(s['2001Q1']), 90)
+ assert len(s['2001Q1']) == 90
df = DataFrame(np.random.rand(len(dti), 5), index=dti)
- self.assertEqual(len(df.loc['1Q01']), 90)
+ assert len(df.loc['1Q01']) == 90
def test_slice_month(self):
dti = DatetimeIndex(freq='D', start=datetime(2005, 1, 1), periods=500)
s = Series(np.arange(len(dti)), index=dti)
- self.assertEqual(len(s['2005-11']), 30)
+ assert len(s['2005-11']) == 30
df = DataFrame(np.random.rand(len(dti), 5), index=dti)
- self.assertEqual(len(df.loc['2005-11']), 30)
+ assert len(df.loc['2005-11']) == 30
tm.assert_series_equal(s['2005-11'], s['11-2005'])
@@ -68,7 +68,7 @@ def test_partial_slice(self):
tm.assert_series_equal(result, expected)
result = s['2005-1-1']
- self.assertEqual(result, s.iloc[0])
+ assert result == s.iloc[0]
pytest.raises(Exception, s.__getitem__, '2004-12-31')
@@ -92,7 +92,7 @@ def test_partial_slice_hourly(self):
result = s['2005-1-1 20']
tm.assert_series_equal(result, s.iloc[:60])
- self.assertEqual(s['2005-1-1 20:00'], s.iloc[0])
+ assert s['2005-1-1 20:00'] == s.iloc[0]
pytest.raises(Exception, s.__getitem__, '2004-12-31 00:15')
def test_partial_slice_minutely(self):
@@ -106,7 +106,7 @@ def test_partial_slice_minutely(self):
result = s['2005-1-1']
tm.assert_series_equal(result, s.iloc[:60])
- self.assertEqual(s[Timestamp('2005-1-1 23:59:00')], s.iloc[0])
+ assert s[Timestamp('2005-1-1 23:59:00')] == s.iloc[0]
pytest.raises(Exception, s.__getitem__, '2004-12-31 00:00:00')
def test_partial_slice_second_precision(self):
@@ -121,7 +121,7 @@ def test_partial_slice_second_precision(self):
tm.assert_series_equal(s['2005-1-1 00:01'], s.iloc[10:])
tm.assert_series_equal(s['2005-1-1 00:01:00'], s.iloc[10:])
- self.assertEqual(s[Timestamp('2005-1-1 00:00:59.999990')], s.iloc[0])
+ assert s[Timestamp('2005-1-1 00:00:59.999990')] == s.iloc[0]
tm.assert_raises_regex(KeyError, '2005-1-1 00:00:00',
lambda: s['2005-1-1 00:00:00'])
@@ -144,7 +144,7 @@ def test_partial_slicing_dataframe(self):
middate, middate + unit])
values = [1, 2, 3]
df = DataFrame({'a': values}, index, dtype=np.int64)
- self.assertEqual(df.index.resolution, resolution)
+ assert df.index.resolution == resolution
# Timestamp with the same resolution as index
# Should be exact match for Series (return scalar)
@@ -154,7 +154,7 @@ def test_partial_slicing_dataframe(self):
# make ts_string as precise as index
result = df['a'][ts_string]
assert isinstance(result, np.int64)
- self.assertEqual(result, expected)
+ assert result == expected
pytest.raises(KeyError, df.__getitem__, ts_string)
# Timestamp with resolution less precise than index
@@ -181,7 +181,7 @@ def test_partial_slicing_dataframe(self):
ts_string = index[1].strftime(fmt)
result = df['a'][ts_string]
assert isinstance(result, np.int64)
- self.assertEqual(result, 2)
+ assert result == 2
pytest.raises(KeyError, df.__getitem__, ts_string)
# Not compatible with existing key
diff --git a/pandas/tests/indexes/datetimes/test_setops.py b/pandas/tests/indexes/datetimes/test_setops.py
index 6612ab844b849..b25fdaf6be3b0 100644
--- a/pandas/tests/indexes/datetimes/test_setops.py
+++ b/pandas/tests/indexes/datetimes/test_setops.py
@@ -29,7 +29,7 @@ def test_union_coverage(self):
result = ordered[:0].union(ordered)
tm.assert_index_equal(result, ordered)
- self.assertEqual(result.freq, ordered.freq)
+ assert result.freq == ordered.freq
def test_union_bug_1730(self):
rng_a = date_range('1/1/2012', periods=4, freq='3H')
@@ -106,9 +106,9 @@ def test_intersection(self):
(rng4, expected4)]:
result = base.intersection(rng)
tm.assert_index_equal(result, expected)
- self.assertEqual(result.name, expected.name)
- self.assertEqual(result.freq, expected.freq)
- self.assertEqual(result.tz, expected.tz)
+ assert result.name == expected.name
+ assert result.freq == expected.freq
+ assert result.tz == expected.tz
# non-monotonic
base = DatetimeIndex(['2011-01-05', '2011-01-04',
@@ -136,17 +136,17 @@ def test_intersection(self):
(rng4, expected4)]:
result = base.intersection(rng)
tm.assert_index_equal(result, expected)
- self.assertEqual(result.name, expected.name)
+ assert result.name == expected.name
assert result.freq is None
- self.assertEqual(result.tz, expected.tz)
+ assert result.tz == expected.tz
# empty same freq GH2129
rng = date_range('6/1/2000', '6/15/2000', freq='T')
result = rng[0:0].intersection(rng)
- self.assertEqual(len(result), 0)
+ assert len(result) == 0
result = rng.intersection(rng[0:0])
- self.assertEqual(len(result), 0)
+ assert len(result) == 0
def test_intersection_bug_1708(self):
from pandas import DateOffset
@@ -154,7 +154,7 @@ def test_intersection_bug_1708(self):
index_2 = index_1 + DateOffset(hours=1)
result = index_1 & index_2
- self.assertEqual(len(result), 0)
+ assert len(result) == 0
def test_difference_freq(self):
# GH14323: difference of DatetimeIndex should not preserve frequency
@@ -177,7 +177,7 @@ def test_datetimeindex_diff(self):
periods=100)
dti2 = DatetimeIndex(freq='Q-JAN', start=datetime(1997, 12, 31),
periods=98)
- self.assertEqual(len(dti1.difference(dti2)), 2)
+ assert len(dti1.difference(dti2)) == 2
def test_datetimeindex_union_join_empty(self):
dti = DatetimeIndex(start='1/1/2001', end='2/1/2001', freq='D')
@@ -288,7 +288,7 @@ def test_intersection(self):
expected = rng[10:25]
tm.assert_index_equal(the_int, expected)
assert isinstance(the_int, DatetimeIndex)
- self.assertEqual(the_int.offset, rng.offset)
+ assert the_int.offset == rng.offset
the_int = rng1.intersection(rng2.view(DatetimeIndex))
tm.assert_index_equal(the_int, expected)
diff --git a/pandas/tests/indexes/datetimes/test_tools.py b/pandas/tests/indexes/datetimes/test_tools.py
index 4c32f41db207c..3c7f2e424f779 100644
--- a/pandas/tests/indexes/datetimes/test_tools.py
+++ b/pandas/tests/indexes/datetimes/test_tools.py
@@ -45,7 +45,7 @@ def test_to_datetime_format(self):
if isinstance(expected, Series):
assert_series_equal(result, Series(expected))
elif isinstance(expected, Timestamp):
- self.assertEqual(result, expected)
+ assert result == expected
else:
tm.assert_index_equal(result, expected)
@@ -112,7 +112,7 @@ def test_to_datetime_format_microsecond(self):
format = '%d-%b-%Y %H:%M:%S.%f'
result = to_datetime(val, format=format)
exp = datetime.strptime(val, format)
- self.assertEqual(result, exp)
+ assert result == exp
def test_to_datetime_format_time(self):
data = [
@@ -130,7 +130,7 @@ def test_to_datetime_format_time(self):
# Timestamp('2010-01-10 09:12:56')]
]
for s, format, dt in data:
- self.assertEqual(to_datetime(s, format=format), dt)
+ assert to_datetime(s, format=format) == dt
def test_to_datetime_with_non_exact(self):
# GH 10834
@@ -159,7 +159,7 @@ def test_parse_nanoseconds_with_formula(self):
"2012-01-01 09:00:00.001000000", ]:
expected = pd.to_datetime(v)
result = pd.to_datetime(v, format="%Y-%m-%d %H:%M:%S.%f")
- self.assertEqual(result, expected)
+ assert result == expected
def test_to_datetime_format_weeks(self):
data = [
@@ -167,7 +167,7 @@ def test_to_datetime_format_weeks(self):
['2013020', '%Y%U%w', Timestamp('2013-01-13')]
]
for s, format, dt in data:
- self.assertEqual(to_datetime(s, format=format), dt)
+ assert to_datetime(s, format=format) == dt
class TestToDatetime(tm.TestCase):
@@ -312,11 +312,11 @@ def test_datetime_bool(self):
with pytest.raises(TypeError):
to_datetime(False)
assert to_datetime(False, errors="coerce") is NaT
- self.assertEqual(to_datetime(False, errors="ignore"), False)
+ assert to_datetime(False, errors="ignore") is False
with pytest.raises(TypeError):
to_datetime(True)
assert to_datetime(True, errors="coerce") is NaT
- self.assertEqual(to_datetime(True, errors="ignore"), True)
+ assert to_datetime(True, errors="ignore") is True
with pytest.raises(TypeError):
to_datetime([False, datetime.today()])
with pytest.raises(TypeError):
@@ -390,15 +390,15 @@ def test_unit_consistency(self):
# consistency of conversions
expected = Timestamp('1970-05-09 14:25:11')
result = pd.to_datetime(11111111, unit='s', errors='raise')
- self.assertEqual(result, expected)
+ assert result == expected
assert isinstance(result, Timestamp)
result = pd.to_datetime(11111111, unit='s', errors='coerce')
- self.assertEqual(result, expected)
+ assert result == expected
assert isinstance(result, Timestamp)
result = pd.to_datetime(11111111, unit='s', errors='ignore')
- self.assertEqual(result, expected)
+ assert result == expected
assert isinstance(result, Timestamp)
def test_unit_with_numeric(self):
@@ -617,11 +617,11 @@ def test_index_to_datetime(self):
def test_to_datetime_iso8601(self):
result = to_datetime(["2012-01-01 00:00:00"])
exp = Timestamp("2012-01-01 00:00:00")
- self.assertEqual(result[0], exp)
+ assert result[0] == exp
result = to_datetime(['20121001']) # bad iso 8601
exp = Timestamp('2012-10-01')
- self.assertEqual(result[0], exp)
+ assert result[0] == exp
def test_to_datetime_default(self):
rs = to_datetime('2001')
@@ -639,7 +639,7 @@ def test_to_datetime_on_datetime64_series(self):
s = Series(date_range('1/1/2000', periods=10))
result = to_datetime(s)
- self.assertEqual(result[0], s[0])
+ assert result[0] == s[0]
def test_to_datetime_with_space_in_series(self):
# GH 6428
@@ -689,12 +689,12 @@ def test_to_datetime_types(self):
# ints
result = Timestamp(0)
expected = to_datetime(0)
- self.assertEqual(result, expected)
+ assert result == expected
# GH 3888 (strings)
expected = to_datetime(['2012'])[0]
result = to_datetime('2012')
- self.assertEqual(result, expected)
+ assert result == expected
# array = ['2012','20120101','20120101 12:01:01']
array = ['20120101', '20120101 12:01:01']
@@ -705,7 +705,7 @@ def test_to_datetime_types(self):
# currently fails ###
# result = Timestamp('2012')
# expected = to_datetime('2012')
- # self.assertEqual(result, expected)
+ # assert result == expected
def test_to_datetime_unprocessable_input(self):
# GH 4928
@@ -721,10 +721,10 @@ def test_to_datetime_other_datetime64_units(self):
as_obj = scalar.astype('O')
index = DatetimeIndex([scalar])
- self.assertEqual(index[0], scalar.astype('O'))
+ assert index[0] == scalar.astype('O')
value = Timestamp(scalar)
- self.assertEqual(value, as_obj)
+ assert value == as_obj
def test_to_datetime_list_of_integers(self):
rng = date_range('1/1/2000', periods=20)
@@ -739,8 +739,8 @@ def test_to_datetime_list_of_integers(self):
def test_to_datetime_freq(self):
xp = bdate_range('2000-1-1', periods=10, tz='UTC')
rs = xp.to_datetime()
- self.assertEqual(xp.freq, rs.freq)
- self.assertEqual(xp.tzinfo, rs.tzinfo)
+ assert xp.freq == rs.freq
+ assert xp.tzinfo == rs.tzinfo
def test_string_na_nat_conversion(self):
# GH #999, #858
@@ -794,10 +794,10 @@ def test_string_na_nat_conversion(self):
expected[i] = to_datetime(x)
assert_series_equal(result, expected, check_names=False)
- self.assertEqual(result.name, 'foo')
+ assert result.name == 'foo'
assert_series_equal(dresult, expected, check_names=False)
- self.assertEqual(dresult.name, 'foo')
+ assert dresult.name == 'foo'
def test_dti_constructor_numpy_timeunits(self):
# GH 9114
@@ -842,21 +842,14 @@ def test_guess_datetime_format_with_parseable_formats(self):
'%Y-%m-%d %H:%M:%S.%f'), )
for dt_string, dt_format in dt_string_to_format:
- self.assertEqual(
- tools._guess_datetime_format(dt_string),
- dt_format
- )
+ assert tools._guess_datetime_format(dt_string) == dt_format
def test_guess_datetime_format_with_dayfirst(self):
ambiguous_string = '01/01/2011'
- self.assertEqual(
- tools._guess_datetime_format(ambiguous_string, dayfirst=True),
- '%d/%m/%Y'
- )
- self.assertEqual(
- tools._guess_datetime_format(ambiguous_string, dayfirst=False),
- '%m/%d/%Y'
- )
+ assert tools._guess_datetime_format(
+ ambiguous_string, dayfirst=True) == '%d/%m/%Y'
+ assert tools._guess_datetime_format(
+ ambiguous_string, dayfirst=False) == '%m/%d/%Y'
def test_guess_datetime_format_with_locale_specific_formats(self):
# The month names will vary depending on the locale, in which
@@ -868,10 +861,7 @@ def test_guess_datetime_format_with_locale_specific_formats(self):
('30/Dec/2011 00:00:00', '%d/%b/%Y %H:%M:%S'), )
for dt_string, dt_format in dt_string_to_format:
- self.assertEqual(
- tools._guess_datetime_format(dt_string),
- dt_format
- )
+ assert tools._guess_datetime_format(dt_string) == dt_format
def test_guess_datetime_format_invalid_inputs(self):
# A datetime string must include a year, month and a day for it
@@ -901,10 +891,7 @@ def test_guess_datetime_format_nopadding(self):
('2011-1-3T00:00:0', '%Y-%m-%dT%H:%M:%S'))
for dt_string, dt_format in dt_string_to_format:
- self.assertEqual(
- tools._guess_datetime_format(dt_string),
- dt_format
- )
+ assert tools._guess_datetime_format(dt_string) == dt_format
def test_guess_datetime_format_for_array(self):
tm._skip_if_not_us_locale()
@@ -918,10 +905,8 @@ def test_guess_datetime_format_for_array(self):
]
for test_array in test_arrays:
- self.assertEqual(
- tools._guess_datetime_format_for_array(test_array),
- expected_format
- )
+ assert tools._guess_datetime_format_for_array(
+ test_array) == expected_format
format_for_string_of_nans = tools._guess_datetime_format_for_array(
np.array(
@@ -1012,14 +997,13 @@ def test_day_not_in_month_raise(self):
errors='raise', format="%Y-%m-%d")
def test_day_not_in_month_ignore(self):
- self.assertEqual(to_datetime(
- '2015-02-29', errors='ignore'), '2015-02-29')
- self.assertEqual(to_datetime(
- '2015-02-29', errors='ignore', format="%Y-%m-%d"), '2015-02-29')
- self.assertEqual(to_datetime(
- '2015-02-32', errors='ignore', format="%Y-%m-%d"), '2015-02-32')
- self.assertEqual(to_datetime(
- '2015-04-31', errors='ignore', format="%Y-%m-%d"), '2015-04-31')
+ assert to_datetime('2015-02-29', errors='ignore') == '2015-02-29'
+ assert to_datetime('2015-02-29', errors='ignore',
+ format="%Y-%m-%d") == '2015-02-29'
+ assert to_datetime('2015-02-32', errors='ignore',
+ format="%Y-%m-%d") == '2015-02-32'
+ assert to_datetime('2015-04-31', errors='ignore',
+ format="%Y-%m-%d") == '2015-04-31'
class TestDatetimeParsingWrappers(tm.TestCase):
@@ -1110,7 +1094,7 @@ def test_parsers(self):
result9 = DatetimeIndex(Series([date_str]), yearfirst=yearfirst)
for res in [result1, result2]:
- self.assertEqual(res, expected)
+ assert res == expected
for res in [result3, result4, result6, result8, result9]:
exp = DatetimeIndex([pd.Timestamp(expected)])
tm.assert_index_equal(res, exp)
@@ -1118,10 +1102,10 @@ def test_parsers(self):
        # these really need to have yearfirst, but we don't support
if not yearfirst:
result5 = Timestamp(date_str)
- self.assertEqual(result5, expected)
+ assert result5 == expected
result7 = date_range(date_str, freq='S', periods=1,
yearfirst=yearfirst)
- self.assertEqual(result7, expected)
+ assert result7 == expected
# NaT
result1, _, _ = tools.parse_time_string('NaT')
@@ -1215,7 +1199,7 @@ def test_parsers_dayfirst_yearfirst(self):
# compare with dateutil result
dateutil_result = parse(date_str, dayfirst=dayfirst,
yearfirst=yearfirst)
- self.assertEqual(dateutil_result, expected)
+ assert dateutil_result == expected
result1, _, _ = tools.parse_time_string(date_str,
dayfirst=dayfirst,
@@ -1224,7 +1208,7 @@ def test_parsers_dayfirst_yearfirst(self):
# we don't support dayfirst/yearfirst here:
if not dayfirst and not yearfirst:
result2 = Timestamp(date_str)
- self.assertEqual(result2, expected)
+ assert result2 == expected
result3 = to_datetime(date_str, dayfirst=dayfirst,
yearfirst=yearfirst)
@@ -1232,9 +1216,9 @@ def test_parsers_dayfirst_yearfirst(self):
result4 = DatetimeIndex([date_str], dayfirst=dayfirst,
yearfirst=yearfirst)[0]
- self.assertEqual(result1, expected)
- self.assertEqual(result3, expected)
- self.assertEqual(result4, expected)
+ assert result1 == expected
+ assert result3 == expected
+ assert result4 == expected
def test_parsers_timestring(self):
tm._skip_if_no_dateutil()
@@ -1253,11 +1237,11 @@ def test_parsers_timestring(self):
# parse time string return time string based on default date
# others are not, and can't be changed because it is used in
# time series plot
- self.assertEqual(result1, exp_def)
- self.assertEqual(result2, exp_now)
- self.assertEqual(result3, exp_now)
- self.assertEqual(result4, exp_now)
- self.assertEqual(result5, exp_now)
+ assert result1 == exp_def
+ assert result2 == exp_now
+ assert result3 == exp_now
+ assert result4 == exp_now
+ assert result5 == exp_now
def test_parsers_time(self):
# GH11818
@@ -1267,20 +1251,19 @@ def test_parsers_time(self):
expected = time(14, 15)
for time_string in strings:
- self.assertEqual(tools.to_time(time_string), expected)
+ assert tools.to_time(time_string) == expected
new_string = "14.15"
pytest.raises(ValueError, tools.to_time, new_string)
- self.assertEqual(tools.to_time(new_string, format="%H.%M"), expected)
+ assert tools.to_time(new_string, format="%H.%M") == expected
arg = ["14:15", "20:20"]
expected_arr = [time(14, 15), time(20, 20)]
- self.assertEqual(tools.to_time(arg), expected_arr)
- self.assertEqual(tools.to_time(arg, format="%H:%M"), expected_arr)
- self.assertEqual(tools.to_time(arg, infer_time_format=True),
- expected_arr)
- self.assertEqual(tools.to_time(arg, format="%I:%M%p", errors="coerce"),
- [None, None])
+ assert tools.to_time(arg) == expected_arr
+ assert tools.to_time(arg, format="%H:%M") == expected_arr
+ assert tools.to_time(arg, infer_time_format=True) == expected_arr
+ assert tools.to_time(arg, format="%I:%M%p",
+ errors="coerce") == [None, None]
res = tools.to_time(arg, format="%I:%M%p", errors="ignore")
tm.assert_numpy_array_equal(res, np.array(arg, dtype=np.object_))
@@ -1301,7 +1284,7 @@ def test_parsers_monthfreq(self):
for date_str, expected in compat.iteritems(cases):
result1, _, _ = tools.parse_time_string(date_str, freq='M')
- self.assertEqual(result1, expected)
+ assert result1 == expected
def test_parsers_quarterly_with_freq(self):
msg = ('Incorrect quarterly string is given, quarter '
@@ -1321,7 +1304,7 @@ def test_parsers_quarterly_with_freq(self):
for (date_str, freq), exp in compat.iteritems(cases):
result, _, _ = tools.parse_time_string(date_str, freq=freq)
- self.assertEqual(result, exp)
+ assert result == exp
def test_parsers_timezone_minute_offsets_roundtrip(self):
# GH11708
@@ -1337,9 +1320,9 @@ def test_parsers_timezone_minute_offsets_roundtrip(self):
for dt_string, tz, dt_string_repr in dt_strings:
dt_time = to_datetime(dt_string)
- self.assertEqual(base, dt_time)
+ assert base == dt_time
converted_time = dt_time.tz_localize('UTC').tz_convert(tz)
- self.assertEqual(dt_string_repr, repr(converted_time))
+ assert dt_string_repr == repr(converted_time)
def test_parsers_iso8601(self):
# GH 12060
@@ -1358,7 +1341,7 @@ def test_parsers_iso8601(self):
'2013-1-1 5:30:00': datetime(2013, 1, 1, 5, 30)}
for date_str, exp in compat.iteritems(cases):
actual = tslib._test_parse_iso8601(date_str)
- self.assertEqual(actual, exp)
+ assert actual == exp
        # separators must all match - YYYYMM not valid
invalid_cases = ['2011-01/02', '2011^11^11',
diff --git a/pandas/tests/indexes/period/test_asfreq.py b/pandas/tests/indexes/period/test_asfreq.py
index 4d1fe9c46f126..f9effd3d1aea6 100644
--- a/pandas/tests/indexes/period/test_asfreq.py
+++ b/pandas/tests/indexes/period/test_asfreq.py
@@ -20,64 +20,64 @@ def test_asfreq(self):
pi6 = PeriodIndex(freq='Min', start='1/1/2001', end='1/1/2001 00:00')
pi7 = PeriodIndex(freq='S', start='1/1/2001', end='1/1/2001 00:00:00')
- self.assertEqual(pi1.asfreq('Q', 'S'), pi2)
- self.assertEqual(pi1.asfreq('Q', 's'), pi2)
- self.assertEqual(pi1.asfreq('M', 'start'), pi3)
- self.assertEqual(pi1.asfreq('D', 'StarT'), pi4)
- self.assertEqual(pi1.asfreq('H', 'beGIN'), pi5)
- self.assertEqual(pi1.asfreq('Min', 'S'), pi6)
- self.assertEqual(pi1.asfreq('S', 'S'), pi7)
-
- self.assertEqual(pi2.asfreq('A', 'S'), pi1)
- self.assertEqual(pi2.asfreq('M', 'S'), pi3)
- self.assertEqual(pi2.asfreq('D', 'S'), pi4)
- self.assertEqual(pi2.asfreq('H', 'S'), pi5)
- self.assertEqual(pi2.asfreq('Min', 'S'), pi6)
- self.assertEqual(pi2.asfreq('S', 'S'), pi7)
-
- self.assertEqual(pi3.asfreq('A', 'S'), pi1)
- self.assertEqual(pi3.asfreq('Q', 'S'), pi2)
- self.assertEqual(pi3.asfreq('D', 'S'), pi4)
- self.assertEqual(pi3.asfreq('H', 'S'), pi5)
- self.assertEqual(pi3.asfreq('Min', 'S'), pi6)
- self.assertEqual(pi3.asfreq('S', 'S'), pi7)
-
- self.assertEqual(pi4.asfreq('A', 'S'), pi1)
- self.assertEqual(pi4.asfreq('Q', 'S'), pi2)
- self.assertEqual(pi4.asfreq('M', 'S'), pi3)
- self.assertEqual(pi4.asfreq('H', 'S'), pi5)
- self.assertEqual(pi4.asfreq('Min', 'S'), pi6)
- self.assertEqual(pi4.asfreq('S', 'S'), pi7)
-
- self.assertEqual(pi5.asfreq('A', 'S'), pi1)
- self.assertEqual(pi5.asfreq('Q', 'S'), pi2)
- self.assertEqual(pi5.asfreq('M', 'S'), pi3)
- self.assertEqual(pi5.asfreq('D', 'S'), pi4)
- self.assertEqual(pi5.asfreq('Min', 'S'), pi6)
- self.assertEqual(pi5.asfreq('S', 'S'), pi7)
-
- self.assertEqual(pi6.asfreq('A', 'S'), pi1)
- self.assertEqual(pi6.asfreq('Q', 'S'), pi2)
- self.assertEqual(pi6.asfreq('M', 'S'), pi3)
- self.assertEqual(pi6.asfreq('D', 'S'), pi4)
- self.assertEqual(pi6.asfreq('H', 'S'), pi5)
- self.assertEqual(pi6.asfreq('S', 'S'), pi7)
-
- self.assertEqual(pi7.asfreq('A', 'S'), pi1)
- self.assertEqual(pi7.asfreq('Q', 'S'), pi2)
- self.assertEqual(pi7.asfreq('M', 'S'), pi3)
- self.assertEqual(pi7.asfreq('D', 'S'), pi4)
- self.assertEqual(pi7.asfreq('H', 'S'), pi5)
- self.assertEqual(pi7.asfreq('Min', 'S'), pi6)
+ assert pi1.asfreq('Q', 'S') == pi2
+ assert pi1.asfreq('Q', 's') == pi2
+ assert pi1.asfreq('M', 'start') == pi3
+ assert pi1.asfreq('D', 'StarT') == pi4
+ assert pi1.asfreq('H', 'beGIN') == pi5
+ assert pi1.asfreq('Min', 'S') == pi6
+ assert pi1.asfreq('S', 'S') == pi7
+
+ assert pi2.asfreq('A', 'S') == pi1
+ assert pi2.asfreq('M', 'S') == pi3
+ assert pi2.asfreq('D', 'S') == pi4
+ assert pi2.asfreq('H', 'S') == pi5
+ assert pi2.asfreq('Min', 'S') == pi6
+ assert pi2.asfreq('S', 'S') == pi7
+
+ assert pi3.asfreq('A', 'S') == pi1
+ assert pi3.asfreq('Q', 'S') == pi2
+ assert pi3.asfreq('D', 'S') == pi4
+ assert pi3.asfreq('H', 'S') == pi5
+ assert pi3.asfreq('Min', 'S') == pi6
+ assert pi3.asfreq('S', 'S') == pi7
+
+ assert pi4.asfreq('A', 'S') == pi1
+ assert pi4.asfreq('Q', 'S') == pi2
+ assert pi4.asfreq('M', 'S') == pi3
+ assert pi4.asfreq('H', 'S') == pi5
+ assert pi4.asfreq('Min', 'S') == pi6
+ assert pi4.asfreq('S', 'S') == pi7
+
+ assert pi5.asfreq('A', 'S') == pi1
+ assert pi5.asfreq('Q', 'S') == pi2
+ assert pi5.asfreq('M', 'S') == pi3
+ assert pi5.asfreq('D', 'S') == pi4
+ assert pi5.asfreq('Min', 'S') == pi6
+ assert pi5.asfreq('S', 'S') == pi7
+
+ assert pi6.asfreq('A', 'S') == pi1
+ assert pi6.asfreq('Q', 'S') == pi2
+ assert pi6.asfreq('M', 'S') == pi3
+ assert pi6.asfreq('D', 'S') == pi4
+ assert pi6.asfreq('H', 'S') == pi5
+ assert pi6.asfreq('S', 'S') == pi7
+
+ assert pi7.asfreq('A', 'S') == pi1
+ assert pi7.asfreq('Q', 'S') == pi2
+ assert pi7.asfreq('M', 'S') == pi3
+ assert pi7.asfreq('D', 'S') == pi4
+ assert pi7.asfreq('H', 'S') == pi5
+ assert pi7.asfreq('Min', 'S') == pi6
pytest.raises(ValueError, pi7.asfreq, 'T', 'foo')
result1 = pi1.asfreq('3M')
result2 = pi1.asfreq('M')
expected = PeriodIndex(freq='M', start='2001-12', end='2001-12')
tm.assert_numpy_array_equal(result1.asi8, expected.asi8)
- self.assertEqual(result1.freqstr, '3M')
+ assert result1.freqstr == '3M'
tm.assert_numpy_array_equal(result2.asi8, expected.asi8)
- self.assertEqual(result2.freqstr, 'M')
+ assert result2.freqstr == 'M'
def test_asfreq_nat(self):
idx = PeriodIndex(['2011-01', '2011-02', 'NaT', '2011-04'], freq='M')
@@ -93,13 +93,13 @@ def test_asfreq_mult_pi(self):
exp = PeriodIndex(['2001-02-28', '2001-03-31', 'NaT',
'2001-04-30'], freq=freq)
tm.assert_index_equal(result, exp)
- self.assertEqual(result.freq, exp.freq)
+ assert result.freq == exp.freq
result = pi.asfreq(freq, how='S')
exp = PeriodIndex(['2001-01-01', '2001-02-01', 'NaT',
'2001-03-01'], freq=freq)
tm.assert_index_equal(result, exp)
- self.assertEqual(result.freq, exp.freq)
+ assert result.freq == exp.freq
def test_asfreq_combined_pi(self):
pi = pd.PeriodIndex(['2001-01-01 00:00', '2001-01-02 02:00', 'NaT'],
@@ -109,7 +109,7 @@ def test_asfreq_combined_pi(self):
for freq, how in zip(['1D1H', '1H1D'], ['S', 'E']):
result = pi.asfreq(freq, how=how)
tm.assert_index_equal(result, exp)
- self.assertEqual(result.freq, exp.freq)
+ assert result.freq == exp.freq
for freq in ['1D1H', '1H1D']:
pi = pd.PeriodIndex(['2001-01-01 00:00', '2001-01-02 02:00',
@@ -118,7 +118,7 @@ def test_asfreq_combined_pi(self):
exp = PeriodIndex(['2001-01-02 00:00', '2001-01-03 02:00', 'NaT'],
freq='H')
tm.assert_index_equal(result, exp)
- self.assertEqual(result.freq, exp.freq)
+ assert result.freq == exp.freq
pi = pd.PeriodIndex(['2001-01-01 00:00', '2001-01-02 02:00',
'NaT'], freq=freq)
@@ -126,7 +126,7 @@ def test_asfreq_combined_pi(self):
exp = PeriodIndex(['2001-01-01 00:00', '2001-01-02 02:00', 'NaT'],
freq='H')
tm.assert_index_equal(result, exp)
- self.assertEqual(result.freq, exp.freq)
+ assert result.freq == exp.freq
def test_asfreq_ts(self):
index = PeriodIndex(freq='A', start='1/1/2001', end='12/31/2010')
@@ -136,12 +136,12 @@ def test_asfreq_ts(self):
result = ts.asfreq('D', how='end')
df_result = df.asfreq('D', how='end')
exp_index = index.asfreq('D', how='end')
- self.assertEqual(len(result), len(ts))
+ assert len(result) == len(ts)
tm.assert_index_equal(result.index, exp_index)
tm.assert_index_equal(df_result.index, exp_index)
result = ts.asfreq('D', how='start')
- self.assertEqual(len(result), len(ts))
+ assert len(result) == len(ts)
tm.assert_index_equal(result.index, index.asfreq('D', how='start'))
def test_astype_asfreq(self):
diff --git a/pandas/tests/indexes/period/test_construction.py b/pandas/tests/indexes/period/test_construction.py
index 6ab42f14efae6..a95ad808cadce 100644
--- a/pandas/tests/indexes/period/test_construction.py
+++ b/pandas/tests/indexes/period/test_construction.py
@@ -160,12 +160,12 @@ def test_constructor_dtype(self):
idx = PeriodIndex(['2013-01', '2013-03'], dtype='period[M]')
exp = PeriodIndex(['2013-01', '2013-03'], freq='M')
tm.assert_index_equal(idx, exp)
- self.assertEqual(idx.dtype, 'period[M]')
+ assert idx.dtype == 'period[M]'
idx = PeriodIndex(['2013-01-05', '2013-03-05'], dtype='period[3D]')
exp = PeriodIndex(['2013-01-05', '2013-03-05'], freq='3D')
tm.assert_index_equal(idx, exp)
- self.assertEqual(idx.dtype, 'period[3D]')
+ assert idx.dtype == 'period[3D]'
# if we already have a freq and its not the same, then asfreq
# (not changed)
@@ -174,11 +174,11 @@ def test_constructor_dtype(self):
res = PeriodIndex(idx, dtype='period[M]')
exp = PeriodIndex(['2013-01', '2013-01'], freq='M')
tm.assert_index_equal(res, exp)
- self.assertEqual(res.dtype, 'period[M]')
+ assert res.dtype == 'period[M]'
res = PeriodIndex(idx, freq='M')
tm.assert_index_equal(res, exp)
- self.assertEqual(res.dtype, 'period[M]')
+ assert res.dtype == 'period[M]'
msg = 'specified freq and dtype are different'
with tm.assert_raises_regex(period.IncompatibleFrequency, msg):
@@ -187,8 +187,8 @@ def test_constructor_dtype(self):
def test_constructor_empty(self):
idx = pd.PeriodIndex([], freq='M')
assert isinstance(idx, PeriodIndex)
- self.assertEqual(len(idx), 0)
- self.assertEqual(idx.freq, 'M')
+ assert len(idx) == 0
+ assert idx.freq == 'M'
with tm.assert_raises_regex(ValueError, 'freq not specified'):
pd.PeriodIndex([])
@@ -367,64 +367,64 @@ def test_constructor_freq_combined(self):
def test_constructor(self):
pi = PeriodIndex(freq='A', start='1/1/2001', end='12/1/2009')
- self.assertEqual(len(pi), 9)
+ assert len(pi) == 9
pi = PeriodIndex(freq='Q', start='1/1/2001', end='12/1/2009')
- self.assertEqual(len(pi), 4 * 9)
+ assert len(pi) == 4 * 9
pi = PeriodIndex(freq='M', start='1/1/2001', end='12/1/2009')
- self.assertEqual(len(pi), 12 * 9)
+ assert len(pi) == 12 * 9
pi = PeriodIndex(freq='D', start='1/1/2001', end='12/31/2009')
- self.assertEqual(len(pi), 365 * 9 + 2)
+ assert len(pi) == 365 * 9 + 2
pi = PeriodIndex(freq='B', start='1/1/2001', end='12/31/2009')
- self.assertEqual(len(pi), 261 * 9)
+ assert len(pi) == 261 * 9
pi = PeriodIndex(freq='H', start='1/1/2001', end='12/31/2001 23:00')
- self.assertEqual(len(pi), 365 * 24)
+ assert len(pi) == 365 * 24
pi = PeriodIndex(freq='Min', start='1/1/2001', end='1/1/2001 23:59')
- self.assertEqual(len(pi), 24 * 60)
+ assert len(pi) == 24 * 60
pi = PeriodIndex(freq='S', start='1/1/2001', end='1/1/2001 23:59:59')
- self.assertEqual(len(pi), 24 * 60 * 60)
+ assert len(pi) == 24 * 60 * 60
start = Period('02-Apr-2005', 'B')
i1 = PeriodIndex(start=start, periods=20)
- self.assertEqual(len(i1), 20)
- self.assertEqual(i1.freq, start.freq)
- self.assertEqual(i1[0], start)
+ assert len(i1) == 20
+ assert i1.freq == start.freq
+ assert i1[0] == start
end_intv = Period('2006-12-31', 'W')
i1 = PeriodIndex(end=end_intv, periods=10)
- self.assertEqual(len(i1), 10)
- self.assertEqual(i1.freq, end_intv.freq)
- self.assertEqual(i1[-1], end_intv)
+ assert len(i1) == 10
+ assert i1.freq == end_intv.freq
+ assert i1[-1] == end_intv
end_intv = Period('2006-12-31', '1w')
i2 = PeriodIndex(end=end_intv, periods=10)
- self.assertEqual(len(i1), len(i2))
+ assert len(i1) == len(i2)
assert (i1 == i2).all()
- self.assertEqual(i1.freq, i2.freq)
+ assert i1.freq == i2.freq
end_intv = Period('2006-12-31', ('w', 1))
i2 = PeriodIndex(end=end_intv, periods=10)
- self.assertEqual(len(i1), len(i2))
+ assert len(i1) == len(i2)
assert (i1 == i2).all()
- self.assertEqual(i1.freq, i2.freq)
+ assert i1.freq == i2.freq
end_intv = Period('2005-05-01', 'B')
i1 = PeriodIndex(start=start, end=end_intv)
# infer freq from first element
i2 = PeriodIndex([end_intv, Period('2005-05-05', 'B')])
- self.assertEqual(len(i2), 2)
- self.assertEqual(i2[0], end_intv)
+ assert len(i2) == 2
+ assert i2[0] == end_intv
i2 = PeriodIndex(np.array([end_intv, Period('2005-05-05', 'B')]))
- self.assertEqual(len(i2), 2)
- self.assertEqual(i2[0], end_intv)
+ assert len(i2) == 2
+ assert i2[0] == end_intv
# Mixed freq should fail
vals = [end_intv, Period('2006-12-31', 'w')]
diff --git a/pandas/tests/indexes/period/test_indexing.py b/pandas/tests/indexes/period/test_indexing.py
index cf5f741fb09ed..ebbe05d51598c 100644
--- a/pandas/tests/indexes/period/test_indexing.py
+++ b/pandas/tests/indexes/period/test_indexing.py
@@ -22,17 +22,17 @@ def test_getitem(self):
for idx in [idx1]:
result = idx[0]
- self.assertEqual(result, pd.Period('2011-01-01', freq='D'))
+ assert result == pd.Period('2011-01-01', freq='D')
result = idx[-1]
- self.assertEqual(result, pd.Period('2011-01-31', freq='D'))
+ assert result == pd.Period('2011-01-31', freq='D')
result = idx[0:5]
expected = pd.period_range('2011-01-01', '2011-01-05', freq='D',
name='idx')
tm.assert_index_equal(result, expected)
- self.assertEqual(result.freq, expected.freq)
- self.assertEqual(result.freq, 'D')
+ assert result.freq == expected.freq
+ assert result.freq == 'D'
result = idx[0:10:2]
expected = pd.PeriodIndex(['2011-01-01', '2011-01-03',
@@ -40,8 +40,8 @@ def test_getitem(self):
'2011-01-07', '2011-01-09'],
freq='D', name='idx')
tm.assert_index_equal(result, expected)
- self.assertEqual(result.freq, expected.freq)
- self.assertEqual(result.freq, 'D')
+ assert result.freq == expected.freq
+ assert result.freq == 'D'
result = idx[-20:-5:3]
expected = pd.PeriodIndex(['2011-01-12', '2011-01-15',
@@ -49,16 +49,16 @@ def test_getitem(self):
'2011-01-21', '2011-01-24'],
freq='D', name='idx')
tm.assert_index_equal(result, expected)
- self.assertEqual(result.freq, expected.freq)
- self.assertEqual(result.freq, 'D')
+ assert result.freq == expected.freq
+ assert result.freq == 'D'
result = idx[4::-1]
expected = PeriodIndex(['2011-01-05', '2011-01-04', '2011-01-03',
'2011-01-02', '2011-01-01'],
freq='D', name='idx')
tm.assert_index_equal(result, expected)
- self.assertEqual(result.freq, expected.freq)
- self.assertEqual(result.freq, 'D')
+ assert result.freq == expected.freq
+ assert result.freq == 'D'
def test_getitem_index(self):
idx = period_range('2007-01', periods=10, freq='M', name='x')
@@ -84,19 +84,19 @@ def test_getitem_partial(self):
assert (result.index.year == 2008).all()
result = ts['2008':'2009']
- self.assertEqual(len(result), 24)
+ assert len(result) == 24
result = ts['2008-1':'2009-12']
- self.assertEqual(len(result), 24)
+ assert len(result) == 24
result = ts['2008Q1':'2009Q4']
- self.assertEqual(len(result), 24)
+ assert len(result) == 24
result = ts[:'2009']
- self.assertEqual(len(result), 36)
+ assert len(result) == 36
result = ts['2009':]
- self.assertEqual(len(result), 50 - 24)
+ assert len(result) == 50 - 24
exp = result
result = ts[24:]
@@ -120,15 +120,15 @@ def test_getitem_datetime(self):
def test_getitem_nat(self):
idx = pd.PeriodIndex(['2011-01', 'NaT', '2011-02'], freq='M')
- self.assertEqual(idx[0], pd.Period('2011-01', freq='M'))
+ assert idx[0] == pd.Period('2011-01', freq='M')
assert idx[1] is tslib.NaT
s = pd.Series([0, 1, 2], index=idx)
- self.assertEqual(s[pd.NaT], 1)
+ assert s[pd.NaT] == 1
s = pd.Series(idx, index=idx)
- self.assertEqual(s[pd.Period('2011-01', freq='M')],
- pd.Period('2011-01', freq='M'))
+ assert (s[pd.Period('2011-01', freq='M')] ==
+ pd.Period('2011-01', freq='M'))
assert s[pd.NaT] is tslib.NaT
def test_getitem_list_periods(self):
@@ -210,7 +210,7 @@ def test_get_loc_msg(self):
try:
idx.get_loc(bad_period)
except KeyError as inst:
- self.assertEqual(inst.args[0], bad_period)
+ assert inst.args[0] == bad_period
def test_get_loc_nat(self):
didx = DatetimeIndex(['2011-01-01', 'NaT', '2011-01-03'])
@@ -218,10 +218,10 @@ def test_get_loc_nat(self):
# check DatetimeIndex compat
for idx in [didx, pidx]:
- self.assertEqual(idx.get_loc(pd.NaT), 1)
- self.assertEqual(idx.get_loc(None), 1)
- self.assertEqual(idx.get_loc(float('nan')), 1)
- self.assertEqual(idx.get_loc(np.nan), 1)
+ assert idx.get_loc(pd.NaT) == 1
+ assert idx.get_loc(None) == 1
+ assert idx.get_loc(float('nan')) == 1
+ assert idx.get_loc(np.nan) == 1
def test_take(self):
# GH 10295
@@ -230,46 +230,46 @@ def test_take(self):
for idx in [idx1]:
result = idx.take([0])
- self.assertEqual(result, pd.Period('2011-01-01', freq='D'))
+ assert result == pd.Period('2011-01-01', freq='D')
result = idx.take([5])
- self.assertEqual(result, pd.Period('2011-01-06', freq='D'))
+ assert result == pd.Period('2011-01-06', freq='D')
result = idx.take([0, 1, 2])
expected = pd.period_range('2011-01-01', '2011-01-03', freq='D',
name='idx')
tm.assert_index_equal(result, expected)
- self.assertEqual(result.freq, 'D')
- self.assertEqual(result.freq, expected.freq)
+ assert result.freq == 'D'
+ assert result.freq == expected.freq
result = idx.take([0, 2, 4])
expected = pd.PeriodIndex(['2011-01-01', '2011-01-03',
'2011-01-05'], freq='D', name='idx')
tm.assert_index_equal(result, expected)
- self.assertEqual(result.freq, expected.freq)
- self.assertEqual(result.freq, 'D')
+ assert result.freq == expected.freq
+ assert result.freq == 'D'
result = idx.take([7, 4, 1])
expected = pd.PeriodIndex(['2011-01-08', '2011-01-05',
'2011-01-02'],
freq='D', name='idx')
tm.assert_index_equal(result, expected)
- self.assertEqual(result.freq, expected.freq)
- self.assertEqual(result.freq, 'D')
+ assert result.freq == expected.freq
+ assert result.freq == 'D'
result = idx.take([3, 2, 5])
expected = PeriodIndex(['2011-01-04', '2011-01-03', '2011-01-06'],
freq='D', name='idx')
tm.assert_index_equal(result, expected)
- self.assertEqual(result.freq, expected.freq)
- self.assertEqual(result.freq, 'D')
+ assert result.freq == expected.freq
+ assert result.freq == 'D'
result = idx.take([-3, 2, 5])
expected = PeriodIndex(['2011-01-29', '2011-01-03', '2011-01-06'],
freq='D', name='idx')
tm.assert_index_equal(result, expected)
- self.assertEqual(result.freq, expected.freq)
- self.assertEqual(result.freq, 'D')
+ assert result.freq == expected.freq
+ assert result.freq == 'D'
def test_take_misc(self):
index = PeriodIndex(start='1/1/10', end='12/31/12', freq='D',
@@ -284,8 +284,8 @@ def test_take_misc(self):
for taken in [taken1, taken2]:
tm.assert_index_equal(taken, expected)
assert isinstance(taken, PeriodIndex)
- self.assertEqual(taken.freq, index.freq)
- self.assertEqual(taken.name, expected.name)
+ assert taken.freq == index.freq
+ assert taken.name == expected.name
def test_take_fill_value(self):
# GH 12631
diff --git a/pandas/tests/indexes/period/test_ops.py b/pandas/tests/indexes/period/test_ops.py
index af377c1b69922..fb688bda58ae8 100644
--- a/pandas/tests/indexes/period/test_ops.py
+++ b/pandas/tests/indexes/period/test_ops.py
@@ -38,10 +38,10 @@ def test_asobject_tolist(self):
expected = pd.Index(expected_list, dtype=object, name='idx')
result = idx.asobject
assert isinstance(result, Index)
- self.assertEqual(result.dtype, object)
+ assert result.dtype == object
tm.assert_index_equal(result, expected)
- self.assertEqual(result.name, expected.name)
- self.assertEqual(idx.tolist(), expected_list)
+ assert result.name == expected.name
+ assert idx.tolist() == expected_list
idx = PeriodIndex(['2013-01-01', '2013-01-02', 'NaT',
'2013-01-04'], freq='D', name='idx')
@@ -52,16 +52,16 @@ def test_asobject_tolist(self):
expected = pd.Index(expected_list, dtype=object, name='idx')
result = idx.asobject
assert isinstance(result, Index)
- self.assertEqual(result.dtype, object)
+ assert result.dtype == object
tm.assert_index_equal(result, expected)
for i in [0, 1, 3]:
- self.assertEqual(result[i], expected[i])
+ assert result[i] == expected[i]
assert result[2] is pd.NaT
- self.assertEqual(result.name, expected.name)
+ assert result.name == expected.name
result_list = idx.tolist()
for i in [0, 1, 3]:
- self.assertEqual(result_list[i], expected_list[i])
+ assert result_list[i] == expected_list[i]
assert result_list[2] is pd.NaT
def test_minmax(self):
@@ -77,12 +77,12 @@ def test_minmax(self):
assert not idx2.is_monotonic
for idx in [idx1, idx2]:
- self.assertEqual(idx.min(), pd.Period('2011-01-01', freq='D'))
- self.assertEqual(idx.max(), pd.Period('2011-01-03', freq='D'))
- self.assertEqual(idx1.argmin(), 1)
- self.assertEqual(idx2.argmin(), 0)
- self.assertEqual(idx1.argmax(), 3)
- self.assertEqual(idx2.argmax(), 2)
+ assert idx.min() == pd.Period('2011-01-01', freq='D')
+ assert idx.max() == pd.Period('2011-01-03', freq='D')
+ assert idx1.argmin() == 1
+ assert idx2.argmin() == 0
+ assert idx1.argmax() == 3
+ assert idx2.argmax() == 2
for op in ['min', 'max']:
# Return NaT
@@ -101,15 +101,15 @@ def test_minmax(self):
def test_numpy_minmax(self):
pr = pd.period_range(start='2016-01-15', end='2016-01-20')
- self.assertEqual(np.min(pr), Period('2016-01-15', freq='D'))
- self.assertEqual(np.max(pr), Period('2016-01-20', freq='D'))
+ assert np.min(pr) == Period('2016-01-15', freq='D')
+ assert np.max(pr) == Period('2016-01-20', freq='D')
errmsg = "the 'out' parameter is not supported"
tm.assert_raises_regex(ValueError, errmsg, np.min, pr, out=0)
tm.assert_raises_regex(ValueError, errmsg, np.max, pr, out=0)
- self.assertEqual(np.argmin(pr), 0)
- self.assertEqual(np.argmax(pr), 5)
+ assert np.argmin(pr) == 0
+ assert np.argmax(pr) == 5
if not _np_version_under1p10:
errmsg = "the 'out' parameter is not supported"
@@ -167,7 +167,7 @@ def test_representation(self):
exp6, exp7, exp8, exp9, exp10]):
for func in ['__repr__', '__unicode__', '__str__']:
result = getattr(idx, func)()
- self.assertEqual(result, expected)
+ assert result == expected
def test_representation_to_series(self):
# GH 10971
@@ -225,7 +225,7 @@ def test_representation_to_series(self):
[exp1, exp2, exp3, exp4, exp5,
exp6, exp7, exp8, exp9]):
result = repr(pd.Series(idx))
- self.assertEqual(result, expected)
+ assert result == expected
def test_summary(self):
# GH9116
@@ -274,7 +274,7 @@ def test_summary(self):
[exp1, exp2, exp3, exp4, exp5,
exp6, exp7, exp8, exp9]):
result = idx.summary()
- self.assertEqual(result, expected)
+ assert result == expected
def test_resolution(self):
for freq, expected in zip(['A', 'Q', 'M', 'D', 'H',
@@ -284,7 +284,7 @@ def test_resolution(self):
'millisecond', 'microsecond']):
idx = pd.period_range(start='2013-04-01', periods=30, freq=freq)
- self.assertEqual(idx.resolution, expected)
+ assert idx.resolution == expected
def test_add_iadd(self):
rng = pd.period_range('1/1/2000', freq='D', periods=5)
@@ -569,12 +569,12 @@ def test_drop_duplicates_metadata(self):
idx = pd.period_range('2011-01-01', '2011-01-31', freq='D', name='idx')
result = idx.drop_duplicates()
tm.assert_index_equal(idx, result)
- self.assertEqual(idx.freq, result.freq)
+ assert idx.freq == result.freq
idx_dup = idx.append(idx) # freq will not be reset
result = idx_dup.drop_duplicates()
tm.assert_index_equal(idx, result)
- self.assertEqual(idx.freq, result.freq)
+ assert idx.freq == result.freq
def test_drop_duplicates(self):
# to check Index/Series compat
@@ -601,7 +601,7 @@ def test_drop_duplicates(self):
def test_order_compat(self):
def _check_freq(index, expected_index):
if isinstance(index, PeriodIndex):
- self.assertEqual(index.freq, expected_index.freq)
+ assert index.freq == expected_index.freq
pidx = PeriodIndex(['2011', '2012', '2013'], name='pidx', freq='A')
# for compatibility check
@@ -666,13 +666,13 @@ def _check_freq(index, expected_index):
expected = PeriodIndex(['NaT', '2011', '2011', '2013'],
name='pidx', freq='D')
tm.assert_index_equal(result, expected)
- self.assertEqual(result.freq, 'D')
+ assert result.freq == 'D'
result = pidx.sort_values(ascending=False)
expected = PeriodIndex(
['2013', '2011', '2011', 'NaT'], name='pidx', freq='D')
tm.assert_index_equal(result, expected)
- self.assertEqual(result.freq, 'D')
+ assert result.freq == 'D'
def test_order(self):
for freq in ['D', '2D', '4D']:
@@ -681,20 +681,20 @@ def test_order(self):
ordered = idx.sort_values()
tm.assert_index_equal(ordered, idx)
- self.assertEqual(ordered.freq, idx.freq)
+ assert ordered.freq == idx.freq
ordered = idx.sort_values(ascending=False)
expected = idx[::-1]
tm.assert_index_equal(ordered, expected)
- self.assertEqual(ordered.freq, expected.freq)
- self.assertEqual(ordered.freq, freq)
+ assert ordered.freq == expected.freq
+ assert ordered.freq == freq
ordered, indexer = idx.sort_values(return_indexer=True)
tm.assert_index_equal(ordered, idx)
tm.assert_numpy_array_equal(indexer, np.array([0, 1, 2]),
check_dtype=False)
- self.assertEqual(ordered.freq, idx.freq)
- self.assertEqual(ordered.freq, freq)
+ assert ordered.freq == idx.freq
+ assert ordered.freq == freq
ordered, indexer = idx.sort_values(return_indexer=True,
ascending=False)
@@ -702,8 +702,8 @@ def test_order(self):
tm.assert_index_equal(ordered, expected)
tm.assert_numpy_array_equal(indexer, np.array([2, 1, 0]),
check_dtype=False)
- self.assertEqual(ordered.freq, expected.freq)
- self.assertEqual(ordered.freq, freq)
+ assert ordered.freq == expected.freq
+ assert ordered.freq == freq
idx1 = PeriodIndex(['2011-01-01', '2011-01-03', '2011-01-05',
'2011-01-02', '2011-01-01'], freq='D', name='idx1')
@@ -725,18 +725,18 @@ def test_order(self):
for idx, expected in [(idx1, exp1), (idx2, exp2), (idx3, exp3)]:
ordered = idx.sort_values()
tm.assert_index_equal(ordered, expected)
- self.assertEqual(ordered.freq, 'D')
+ assert ordered.freq == 'D'
ordered = idx.sort_values(ascending=False)
tm.assert_index_equal(ordered, expected[::-1])
- self.assertEqual(ordered.freq, 'D')
+ assert ordered.freq == 'D'
ordered, indexer = idx.sort_values(return_indexer=True)
tm.assert_index_equal(ordered, expected)
exp = np.array([0, 4, 3, 1, 2])
tm.assert_numpy_array_equal(indexer, exp, check_dtype=False)
- self.assertEqual(ordered.freq, 'D')
+ assert ordered.freq == 'D'
ordered, indexer = idx.sort_values(return_indexer=True,
ascending=False)
@@ -744,7 +744,7 @@ def test_order(self):
exp = np.array([2, 1, 3, 4, 0])
tm.assert_numpy_array_equal(indexer, exp, check_dtype=False)
- self.assertEqual(ordered.freq, 'D')
+ assert ordered.freq == 'D'
def test_nat_new(self):
@@ -1144,7 +1144,7 @@ def test_ops_series_timedelta(self):
# GH 13043
s = pd.Series([pd.Period('2015-01-01', freq='D'),
pd.Period('2015-01-02', freq='D')], name='xxx')
- self.assertEqual(s.dtype, object)
+ assert s.dtype == object
exp = pd.Series([pd.Period('2015-01-02', freq='D'),
pd.Period('2015-01-03', freq='D')], name='xxx')
@@ -1158,7 +1158,7 @@ def test_ops_series_period(self):
# GH 13043
s = pd.Series([pd.Period('2015-01-01', freq='D'),
pd.Period('2015-01-02', freq='D')], name='xxx')
- self.assertEqual(s.dtype, object)
+ assert s.dtype == object
p = pd.Period('2015-01-10', freq='D')
# dtype will be object because of original dtype
@@ -1168,7 +1168,7 @@ def test_ops_series_period(self):
s2 = pd.Series([pd.Period('2015-01-05', freq='D'),
pd.Period('2015-01-04', freq='D')], name='xxx')
- self.assertEqual(s2.dtype, object)
+ assert s2.dtype == object
exp = pd.Series([4, 2], name='xxx', dtype=object)
tm.assert_series_equal(s2 - s, exp)
@@ -1183,8 +1183,8 @@ def test_ops_frame_period(self):
pd.Period('2015-02', freq='M')],
'B': [pd.Period('2014-01', freq='M'),
pd.Period('2014-02', freq='M')]})
- self.assertEqual(df['A'].dtype, object)
- self.assertEqual(df['B'].dtype, object)
+ assert df['A'].dtype == object
+ assert df['B'].dtype == object
p = pd.Period('2015-03', freq='M')
# dtype will be object because of original dtype
@@ -1197,8 +1197,8 @@ def test_ops_frame_period(self):
pd.Period('2015-06', freq='M')],
'B': [pd.Period('2015-05', freq='M'),
pd.Period('2015-06', freq='M')]})
- self.assertEqual(df2['A'].dtype, object)
- self.assertEqual(df2['B'].dtype, object)
+ assert df2['A'].dtype == object
+ assert df2['B'].dtype == object
exp = pd.DataFrame({'A': np.array([4, 4], dtype=object),
'B': np.array([16, 16], dtype=object)})
diff --git a/pandas/tests/indexes/period/test_partial_slicing.py b/pandas/tests/indexes/period/test_partial_slicing.py
index 7c1279a12450c..04b4e6795e770 100644
--- a/pandas/tests/indexes/period/test_partial_slicing.py
+++ b/pandas/tests/indexes/period/test_partial_slicing.py
@@ -51,7 +51,7 @@ def test_slice_with_zero_step_raises(self):
def test_slice_keep_name(self):
idx = period_range('20010101', periods=10, freq='D', name='bob')
- self.assertEqual(idx.name, idx[1:].name)
+ assert idx.name == idx[1:].name
def test_pindex_slice_index(self):
pi = PeriodIndex(start='1/1/10', end='12/31/12', freq='M')
diff --git a/pandas/tests/indexes/period/test_period.py b/pandas/tests/indexes/period/test_period.py
index 8ee3e9d6707b4..6ec567509cd76 100644
--- a/pandas/tests/indexes/period/test_period.py
+++ b/pandas/tests/indexes/period/test_period.py
@@ -56,8 +56,8 @@ def test_pickle_compat_construction(self):
pass
def test_pickle_round_trip(self):
- for freq in ['D', 'M', 'Y']:
- idx = PeriodIndex(['2016-05-16', 'NaT', NaT, np.NaN], freq='D')
+ for freq in ['D', 'M', 'A']:
+ idx = PeriodIndex(['2016-05-16', 'NaT', NaT, np.NaN], freq=freq)
result = tm.round_trip_pickle(idx)
tm.assert_index_equal(result, idx)
@@ -65,23 +65,22 @@ def test_get_loc(self):
idx = pd.period_range('2000-01-01', periods=3)
for method in [None, 'pad', 'backfill', 'nearest']:
- self.assertEqual(idx.get_loc(idx[1], method), 1)
- self.assertEqual(
- idx.get_loc(idx[1].asfreq('H', how='start'), method), 1)
- self.assertEqual(idx.get_loc(idx[1].to_timestamp(), method), 1)
- self.assertEqual(
- idx.get_loc(idx[1].to_timestamp().to_pydatetime(), method), 1)
- self.assertEqual(idx.get_loc(str(idx[1]), method), 1)
+ assert idx.get_loc(idx[1], method) == 1
+ assert idx.get_loc(idx[1].asfreq('H', how='start'), method) == 1
+ assert idx.get_loc(idx[1].to_timestamp(), method) == 1
+ assert idx.get_loc(idx[1].to_timestamp()
+ .to_pydatetime(), method) == 1
+ assert idx.get_loc(str(idx[1]), method) == 1
idx = pd.period_range('2000-01-01', periods=5)[::2]
- self.assertEqual(idx.get_loc('2000-01-02T12', method='nearest',
- tolerance='1 day'), 1)
- self.assertEqual(idx.get_loc('2000-01-02T12', method='nearest',
- tolerance=pd.Timedelta('1D')), 1)
- self.assertEqual(idx.get_loc('2000-01-02T12', method='nearest',
- tolerance=np.timedelta64(1, 'D')), 1)
- self.assertEqual(idx.get_loc('2000-01-02T12', method='nearest',
- tolerance=timedelta(1)), 1)
+ assert idx.get_loc('2000-01-02T12', method='nearest',
+ tolerance='1 day') == 1
+ assert idx.get_loc('2000-01-02T12', method='nearest',
+ tolerance=pd.Timedelta('1D')) == 1
+ assert idx.get_loc('2000-01-02T12', method='nearest',
+ tolerance=np.timedelta64(1, 'D')) == 1
+ assert idx.get_loc('2000-01-02T12', method='nearest',
+ tolerance=timedelta(1)) == 1
with tm.assert_raises_regex(ValueError, 'must be convertible'):
idx.get_loc('2000-01-10', method='nearest', tolerance='foo')
@@ -164,7 +163,7 @@ def test_repeat(self):
res = idx.repeat(3)
exp = PeriodIndex(idx.values.repeat(3), freq='D')
tm.assert_index_equal(res, exp)
- self.assertEqual(res.freqstr, 'D')
+ assert res.freqstr == 'D'
def test_period_index_indexer(self):
# GH4125
@@ -243,12 +242,12 @@ def test_shallow_copy_empty(self):
def test_dtype_str(self):
pi = pd.PeriodIndex([], freq='M')
- self.assertEqual(pi.dtype_str, 'period[M]')
- self.assertEqual(pi.dtype_str, str(pi.dtype))
+ assert pi.dtype_str == 'period[M]'
+ assert pi.dtype_str == str(pi.dtype)
pi = pd.PeriodIndex([], freq='3M')
- self.assertEqual(pi.dtype_str, 'period[3M]')
- self.assertEqual(pi.dtype_str, str(pi.dtype))
+ assert pi.dtype_str == 'period[3M]'
+ assert pi.dtype_str == str(pi.dtype)
def test_view_asi8(self):
idx = pd.PeriodIndex([], freq='M')
@@ -296,37 +295,37 @@ def test_values(self):
def test_period_index_length(self):
pi = PeriodIndex(freq='A', start='1/1/2001', end='12/1/2009')
- self.assertEqual(len(pi), 9)
+ assert len(pi) == 9
pi = PeriodIndex(freq='Q', start='1/1/2001', end='12/1/2009')
- self.assertEqual(len(pi), 4 * 9)
+ assert len(pi) == 4 * 9
pi = PeriodIndex(freq='M', start='1/1/2001', end='12/1/2009')
- self.assertEqual(len(pi), 12 * 9)
+ assert len(pi) == 12 * 9
start = Period('02-Apr-2005', 'B')
i1 = PeriodIndex(start=start, periods=20)
- self.assertEqual(len(i1), 20)
- self.assertEqual(i1.freq, start.freq)
- self.assertEqual(i1[0], start)
+ assert len(i1) == 20
+ assert i1.freq == start.freq
+ assert i1[0] == start
end_intv = Period('2006-12-31', 'W')
i1 = PeriodIndex(end=end_intv, periods=10)
- self.assertEqual(len(i1), 10)
- self.assertEqual(i1.freq, end_intv.freq)
- self.assertEqual(i1[-1], end_intv)
+ assert len(i1) == 10
+ assert i1.freq == end_intv.freq
+ assert i1[-1] == end_intv
end_intv = Period('2006-12-31', '1w')
i2 = PeriodIndex(end=end_intv, periods=10)
- self.assertEqual(len(i1), len(i2))
+ assert len(i1) == len(i2)
assert (i1 == i2).all()
- self.assertEqual(i1.freq, i2.freq)
+ assert i1.freq == i2.freq
end_intv = Period('2006-12-31', ('w', 1))
i2 = PeriodIndex(end=end_intv, periods=10)
- self.assertEqual(len(i1), len(i2))
+ assert len(i1) == len(i2)
assert (i1 == i2).all()
- self.assertEqual(i1.freq, i2.freq)
+ assert i1.freq == i2.freq
try:
PeriodIndex(start=start, end=end_intv)
@@ -346,12 +345,12 @@ def test_period_index_length(self):
# infer freq from first element
i2 = PeriodIndex([end_intv, Period('2005-05-05', 'B')])
- self.assertEqual(len(i2), 2)
- self.assertEqual(i2[0], end_intv)
+ assert len(i2) == 2
+ assert i2[0] == end_intv
i2 = PeriodIndex(np.array([end_intv, Period('2005-05-05', 'B')]))
- self.assertEqual(len(i2), 2)
- self.assertEqual(i2[0], end_intv)
+ assert len(i2) == 2
+ assert i2[0] == end_intv
# Mixed freq should fail
vals = [end_intv, Period('2006-12-31', 'w')]
@@ -402,17 +401,17 @@ def _check_all_fields(self, periodindex):
for field in fields:
field_idx = getattr(periodindex, field)
- self.assertEqual(len(periodindex), len(field_idx))
+ assert len(periodindex) == len(field_idx)
for x, val in zip(periods, field_idx):
- self.assertEqual(getattr(x, field), val)
+ assert getattr(x, field) == val
if len(s) == 0:
continue
field_s = getattr(s.dt, field)
- self.assertEqual(len(periodindex), len(field_s))
+ assert len(periodindex) == len(field_s)
for x, val in zip(periods, field_s):
- self.assertEqual(getattr(x, field), val)
+ assert getattr(x, field) == val
def test_indexing(self):
@@ -421,7 +420,7 @@ def test_indexing(self):
s = Series(randn(10), index=index)
expected = s[index[0]]
result = s.iat[0]
- self.assertEqual(expected, result)
+ assert expected == result
def test_period_set_index_reindex(self):
# GH 6631
@@ -486,20 +485,19 @@ def test_is_(self):
create_index = lambda: PeriodIndex(freq='A', start='1/1/2001',
end='12/1/2009')
index = create_index()
- self.assertEqual(index.is_(index), True)
- self.assertEqual(index.is_(create_index()), False)
- self.assertEqual(index.is_(index.view()), True)
- self.assertEqual(
- index.is_(index.view().view().view().view().view()), True)
- self.assertEqual(index.view().is_(index), True)
+ assert index.is_(index)
+ assert not index.is_(create_index())
+ assert index.is_(index.view())
+ assert index.is_(index.view().view().view().view().view())
+ assert index.view().is_(index)
ind2 = index.view()
index.name = "Apple"
- self.assertEqual(ind2.is_(index), True)
- self.assertEqual(index.is_(index[:]), False)
- self.assertEqual(index.is_(index.asfreq('M')), False)
- self.assertEqual(index.is_(index.asfreq('A')), False)
- self.assertEqual(index.is_(index - 2), False)
- self.assertEqual(index.is_(index - 0), False)
+ assert ind2.is_(index)
+ assert not index.is_(index[:])
+ assert not index.is_(index.asfreq('M'))
+ assert not index.is_(index.asfreq('A'))
+ assert not index.is_(index - 2)
+ assert not index.is_(index - 0)
def test_comp_period(self):
idx = period_range('2007-01', periods=20, freq='M')
@@ -566,14 +564,14 @@ def test_index_unique(self):
idx = PeriodIndex([2000, 2007, 2007, 2009, 2009], freq='A-JUN')
expected = PeriodIndex([2000, 2007, 2009], freq='A-JUN')
tm.assert_index_equal(idx.unique(), expected)
- self.assertEqual(idx.nunique(), 3)
+ assert idx.nunique() == 3
idx = PeriodIndex([2000, 2007, 2007, 2009, 2007], freq='A-JUN',
tz='US/Eastern')
expected = PeriodIndex([2000, 2007, 2009], freq='A-JUN',
tz='US/Eastern')
tm.assert_index_equal(idx.unique(), expected)
- self.assertEqual(idx.nunique(), 3)
+ assert idx.nunique() == 3
def test_shift_gh8083(self):
@@ -591,32 +589,32 @@ def test_shift(self):
tm.assert_index_equal(pi1.shift(0), pi1)
- self.assertEqual(len(pi1), len(pi2))
+ assert len(pi1) == len(pi2)
tm.assert_index_equal(pi1.shift(1), pi2)
pi1 = PeriodIndex(freq='A', start='1/1/2001', end='12/1/2009')
pi2 = PeriodIndex(freq='A', start='1/1/2000', end='12/1/2008')
- self.assertEqual(len(pi1), len(pi2))
+ assert len(pi1) == len(pi2)
tm.assert_index_equal(pi1.shift(-1), pi2)
pi1 = PeriodIndex(freq='M', start='1/1/2001', end='12/1/2009')
pi2 = PeriodIndex(freq='M', start='2/1/2001', end='1/1/2010')
- self.assertEqual(len(pi1), len(pi2))
+ assert len(pi1) == len(pi2)
tm.assert_index_equal(pi1.shift(1), pi2)
pi1 = PeriodIndex(freq='M', start='1/1/2001', end='12/1/2009')
pi2 = PeriodIndex(freq='M', start='12/1/2000', end='11/1/2009')
- self.assertEqual(len(pi1), len(pi2))
+ assert len(pi1) == len(pi2)
tm.assert_index_equal(pi1.shift(-1), pi2)
pi1 = PeriodIndex(freq='D', start='1/1/2001', end='12/1/2009')
pi2 = PeriodIndex(freq='D', start='1/2/2001', end='12/2/2009')
- self.assertEqual(len(pi1), len(pi2))
+ assert len(pi1) == len(pi2)
tm.assert_index_equal(pi1.shift(1), pi2)
pi1 = PeriodIndex(freq='D', start='1/1/2001', end='12/1/2009')
pi2 = PeriodIndex(freq='D', start='12/31/2000', end='11/30/2009')
- self.assertEqual(len(pi1), len(pi2))
+ assert len(pi1) == len(pi2)
tm.assert_index_equal(pi1.shift(-1), pi2)
def test_shift_nat(self):
@@ -626,7 +624,7 @@ def test_shift_nat(self):
expected = PeriodIndex(['2011-02', '2011-03', 'NaT',
'2011-05'], freq='M', name='idx')
tm.assert_index_equal(result, expected)
- self.assertEqual(result.name, expected.name)
+ assert result.name == expected.name
def test_ndarray_compat_properties(self):
if compat.is_platform_32bit():
@@ -669,7 +667,7 @@ def test_pindex_qaccess(self):
pi = PeriodIndex(['2Q05', '3Q05', '4Q05', '1Q06', '2Q06'], freq='Q')
s = Series(np.random.rand(len(pi)), index=pi).cumsum()
# Todo: fix these accessors!
- self.assertEqual(s['05Q4'], s[2])
+ assert s['05Q4'] == s[2]
def test_numpy_repeat(self):
index = period_range('20010101', periods=2)
@@ -687,25 +685,25 @@ def test_pindex_multiples(self):
expected = PeriodIndex(['2011-01', '2011-03', '2011-05', '2011-07',
'2011-09', '2011-11'], freq='2M')
tm.assert_index_equal(pi, expected)
- self.assertEqual(pi.freq, offsets.MonthEnd(2))
- self.assertEqual(pi.freqstr, '2M')
+ assert pi.freq == offsets.MonthEnd(2)
+ assert pi.freqstr == '2M'
pi = period_range(start='1/1/11', end='12/31/11', freq='2M')
tm.assert_index_equal(pi, expected)
- self.assertEqual(pi.freq, offsets.MonthEnd(2))
- self.assertEqual(pi.freqstr, '2M')
+ assert pi.freq == offsets.MonthEnd(2)
+ assert pi.freqstr == '2M'
pi = period_range(start='1/1/11', periods=6, freq='2M')
tm.assert_index_equal(pi, expected)
- self.assertEqual(pi.freq, offsets.MonthEnd(2))
- self.assertEqual(pi.freqstr, '2M')
+ assert pi.freq == offsets.MonthEnd(2)
+ assert pi.freqstr == '2M'
def test_iteration(self):
index = PeriodIndex(start='1/1/10', periods=4, freq='B')
result = list(index)
assert isinstance(result[0], Period)
- self.assertEqual(result[0].freq, index.freq)
+ assert result[0].freq == index.freq
def test_is_full(self):
index = PeriodIndex([2005, 2007, 2009], freq='A')
@@ -757,14 +755,14 @@ def test_append_concat(self):
# drops index
result = pd.concat([s1, s2])
assert isinstance(result.index, PeriodIndex)
- self.assertEqual(result.index[0], s1.index[0])
+ assert result.index[0] == s1.index[0]
def test_pickle_freq(self):
# GH2891
prng = period_range('1/1/2011', '1/1/2012', freq='M')
new_prng = tm.round_trip_pickle(prng)
- self.assertEqual(new_prng.freq, offsets.MonthEnd())
- self.assertEqual(new_prng.freqstr, 'M')
+ assert new_prng.freq == offsets.MonthEnd()
+ assert new_prng.freqstr == 'M'
def test_map(self):
index = PeriodIndex([2005, 2007, 2009], freq='A')
diff --git a/pandas/tests/indexes/period/test_setops.py b/pandas/tests/indexes/period/test_setops.py
index e1fdc85d670d4..025ee7e732a7c 100644
--- a/pandas/tests/indexes/period/test_setops.py
+++ b/pandas/tests/indexes/period/test_setops.py
@@ -24,7 +24,7 @@ def test_joins(self):
joined = index.join(index[:-5], how=kind)
assert isinstance(joined, PeriodIndex)
- self.assertEqual(joined.freq, index.freq)
+ assert joined.freq == index.freq
def test_join_self(self):
index = period_range('1/1/2000', '1/20/2000', freq='D')
@@ -172,8 +172,8 @@ def test_intersection_cases(self):
(rng4, expected4)]:
result = base.intersection(rng)
tm.assert_index_equal(result, expected)
- self.assertEqual(result.name, expected.name)
- self.assertEqual(result.freq, expected.freq)
+ assert result.name == expected.name
+ assert result.freq == expected.freq
# non-monotonic
base = PeriodIndex(['2011-01-05', '2011-01-04', '2011-01-02',
@@ -198,16 +198,16 @@ def test_intersection_cases(self):
(rng4, expected4)]:
result = base.intersection(rng)
tm.assert_index_equal(result, expected)
- self.assertEqual(result.name, expected.name)
- self.assertEqual(result.freq, 'D')
+ assert result.name == expected.name
+ assert result.freq == 'D'
# empty same freq
rng = date_range('6/1/2000', '6/15/2000', freq='T')
result = rng[0:0].intersection(rng)
- self.assertEqual(len(result), 0)
+ assert len(result) == 0
result = rng.intersection(rng[0:0])
- self.assertEqual(len(result), 0)
+ assert len(result) == 0
def test_difference(self):
# diff
diff --git a/pandas/tests/indexes/period/test_tools.py b/pandas/tests/indexes/period/test_tools.py
index 60ad8fed32399..9e5994dd54f50 100644
--- a/pandas/tests/indexes/period/test_tools.py
+++ b/pandas/tests/indexes/period/test_tools.py
@@ -65,7 +65,7 @@ def test_negone_ordinals(self):
for freq in freqs:
period = Period(ordinal=-1, freq=freq)
repr(period)
- self.assertEqual(period.year, 1969)
+ assert period.year == 1969
period = Period(ordinal=-1, freq='B')
repr(period)
@@ -75,97 +75,79 @@ def test_negone_ordinals(self):
class TestTslib(tm.TestCase):
def test_intraday_conversion_factors(self):
- self.assertEqual(period_asfreq(
- 1, get_freq('D'), get_freq('H'), False), 24)
- self.assertEqual(period_asfreq(
- 1, get_freq('D'), get_freq('T'), False), 1440)
- self.assertEqual(period_asfreq(
- 1, get_freq('D'), get_freq('S'), False), 86400)
- self.assertEqual(period_asfreq(1, get_freq(
- 'D'), get_freq('L'), False), 86400000)
- self.assertEqual(period_asfreq(1, get_freq(
- 'D'), get_freq('U'), False), 86400000000)
- self.assertEqual(period_asfreq(1, get_freq(
- 'D'), get_freq('N'), False), 86400000000000)
-
- self.assertEqual(period_asfreq(
- 1, get_freq('H'), get_freq('T'), False), 60)
- self.assertEqual(period_asfreq(
- 1, get_freq('H'), get_freq('S'), False), 3600)
- self.assertEqual(period_asfreq(1, get_freq('H'),
- get_freq('L'), False), 3600000)
- self.assertEqual(period_asfreq(1, get_freq(
- 'H'), get_freq('U'), False), 3600000000)
- self.assertEqual(period_asfreq(1, get_freq(
- 'H'), get_freq('N'), False), 3600000000000)
-
- self.assertEqual(period_asfreq(
- 1, get_freq('T'), get_freq('S'), False), 60)
- self.assertEqual(period_asfreq(
- 1, get_freq('T'), get_freq('L'), False), 60000)
- self.assertEqual(period_asfreq(1, get_freq(
- 'T'), get_freq('U'), False), 60000000)
- self.assertEqual(period_asfreq(1, get_freq(
- 'T'), get_freq('N'), False), 60000000000)
-
- self.assertEqual(period_asfreq(
- 1, get_freq('S'), get_freq('L'), False), 1000)
- self.assertEqual(period_asfreq(1, get_freq('S'),
- get_freq('U'), False), 1000000)
- self.assertEqual(period_asfreq(1, get_freq(
- 'S'), get_freq('N'), False), 1000000000)
-
- self.assertEqual(period_asfreq(
- 1, get_freq('L'), get_freq('U'), False), 1000)
- self.assertEqual(period_asfreq(1, get_freq('L'),
- get_freq('N'), False), 1000000)
-
- self.assertEqual(period_asfreq(
- 1, get_freq('U'), get_freq('N'), False), 1000)
+ assert period_asfreq(1, get_freq('D'), get_freq('H'), False) == 24
+ assert period_asfreq(1, get_freq('D'), get_freq('T'), False) == 1440
+ assert period_asfreq(1, get_freq('D'), get_freq('S'), False) == 86400
+ assert period_asfreq(1, get_freq('D'),
+ get_freq('L'), False) == 86400000
+ assert period_asfreq(1, get_freq('D'),
+ get_freq('U'), False) == 86400000000
+ assert period_asfreq(1, get_freq('D'),
+ get_freq('N'), False) == 86400000000000
+
+ assert period_asfreq(1, get_freq('H'), get_freq('T'), False) == 60
+ assert period_asfreq(1, get_freq('H'), get_freq('S'), False) == 3600
+ assert period_asfreq(1, get_freq('H'),
+ get_freq('L'), False) == 3600000
+ assert period_asfreq(1, get_freq('H'),
+ get_freq('U'), False) == 3600000000
+ assert period_asfreq(1, get_freq('H'),
+ get_freq('N'), False) == 3600000000000
+
+ assert period_asfreq(1, get_freq('T'), get_freq('S'), False) == 60
+ assert period_asfreq(1, get_freq('T'), get_freq('L'), False) == 60000
+ assert period_asfreq(1, get_freq('T'),
+ get_freq('U'), False) == 60000000
+ assert period_asfreq(1, get_freq('T'),
+ get_freq('N'), False) == 60000000000
+
+ assert period_asfreq(1, get_freq('S'), get_freq('L'), False) == 1000
+ assert period_asfreq(1, get_freq('S'),
+ get_freq('U'), False) == 1000000
+ assert period_asfreq(1, get_freq('S'),
+ get_freq('N'), False) == 1000000000
+
+ assert period_asfreq(1, get_freq('L'), get_freq('U'), False) == 1000
+ assert period_asfreq(1, get_freq('L'),
+ get_freq('N'), False) == 1000000
+
+ assert period_asfreq(1, get_freq('U'), get_freq('N'), False) == 1000
def test_period_ordinal_start_values(self):
# information for 1.1.1970
- self.assertEqual(0, period_ordinal(1970, 1, 1, 0, 0, 0, 0, 0,
- get_freq('A')))
- self.assertEqual(0, period_ordinal(1970, 1, 1, 0, 0, 0, 0, 0,
- get_freq('M')))
- self.assertEqual(1, period_ordinal(1970, 1, 1, 0, 0, 0, 0, 0,
- get_freq('W')))
- self.assertEqual(0, period_ordinal(1970, 1, 1, 0, 0, 0, 0, 0,
- get_freq('D')))
- self.assertEqual(0, period_ordinal(1970, 1, 1, 0, 0, 0, 0, 0,
- get_freq('B')))
+ assert period_ordinal(1970, 1, 1, 0, 0, 0, 0, 0, get_freq('A')) == 0
+ assert period_ordinal(1970, 1, 1, 0, 0, 0, 0, 0, get_freq('M')) == 0
+ assert period_ordinal(1970, 1, 1, 0, 0, 0, 0, 0, get_freq('W')) == 1
+ assert period_ordinal(1970, 1, 1, 0, 0, 0, 0, 0, get_freq('D')) == 0
+ assert period_ordinal(1970, 1, 1, 0, 0, 0, 0, 0, get_freq('B')) == 0
def test_period_ordinal_week(self):
- self.assertEqual(1, period_ordinal(1970, 1, 4, 0, 0, 0, 0, 0,
- get_freq('W')))
- self.assertEqual(2, period_ordinal(1970, 1, 5, 0, 0, 0, 0, 0,
- get_freq('W')))
-
- self.assertEqual(2284, period_ordinal(2013, 10, 6, 0, 0, 0, 0, 0,
- get_freq('W')))
- self.assertEqual(2285, period_ordinal(2013, 10, 7, 0, 0, 0, 0, 0,
- get_freq('W')))
+ assert period_ordinal(1970, 1, 4, 0, 0, 0, 0, 0, get_freq('W')) == 1
+ assert period_ordinal(1970, 1, 5, 0, 0, 0, 0, 0, get_freq('W')) == 2
+ assert period_ordinal(2013, 10, 6, 0,
+ 0, 0, 0, 0, get_freq('W')) == 2284
+ assert period_ordinal(2013, 10, 7, 0,
+ 0, 0, 0, 0, get_freq('W')) == 2285
def test_period_ordinal_business_day(self):
# Thursday
- self.assertEqual(11415, period_ordinal(2013, 10, 3, 0, 0, 0, 0, 0,
- get_freq('B')))
+ assert period_ordinal(2013, 10, 3, 0,
+ 0, 0, 0, 0, get_freq('B')) == 11415
# Friday
- self.assertEqual(11416, period_ordinal(2013, 10, 4, 0, 0, 0, 0, 0,
- get_freq('B')))
+ assert period_ordinal(2013, 10, 4, 0,
+ 0, 0, 0, 0, get_freq('B')) == 11416
# Saturday
- self.assertEqual(11417, period_ordinal(2013, 10, 5, 0, 0, 0, 0, 0,
- get_freq('B')))
+ assert period_ordinal(2013, 10, 5, 0,
+ 0, 0, 0, 0, get_freq('B')) == 11417
# Sunday
- self.assertEqual(11417, period_ordinal(2013, 10, 6, 0, 0, 0, 0, 0,
- get_freq('B')))
+ assert period_ordinal(2013, 10, 6, 0,
+ 0, 0, 0, 0, get_freq('B')) == 11417
# Monday
- self.assertEqual(11417, period_ordinal(2013, 10, 7, 0, 0, 0, 0, 0,
- get_freq('B')))
+ assert period_ordinal(2013, 10, 7, 0,
+ 0, 0, 0, 0, get_freq('B')) == 11417
# Tuesday
- self.assertEqual(11418, period_ordinal(2013, 10, 8, 0, 0, 0, 0, 0,
- get_freq('B')))
+ assert period_ordinal(2013, 10, 8, 0,
+ 0, 0, 0, 0, get_freq('B')) == 11418
class TestPeriodIndex(tm.TestCase):
@@ -189,7 +171,7 @@ def test_to_timestamp(self):
exp_index = date_range('1/1/2001', end='12/31/2009', freq='A-DEC')
result = series.to_timestamp(how='end')
tm.assert_index_equal(result.index, exp_index)
- self.assertEqual(result.name, 'foo')
+ assert result.name == 'foo'
exp_index = date_range('1/1/2001', end='1/1/2009', freq='AS-JAN')
result = series.to_timestamp(how='start')
@@ -221,7 +203,7 @@ def _get_with_delta(delta, freq='A-DEC'):
freq='H')
result = series.to_timestamp(how='end')
tm.assert_index_equal(result.index, exp_index)
- self.assertEqual(result.name, 'foo')
+ assert result.name == 'foo'
def test_to_timestamp_quarterly_bug(self):
years = np.arange(1960, 2000).repeat(4)
@@ -236,10 +218,10 @@ def test_to_timestamp_quarterly_bug(self):
def test_to_timestamp_preserve_name(self):
index = PeriodIndex(freq='A', start='1/1/2001', end='12/1/2009',
name='foo')
- self.assertEqual(index.name, 'foo')
+ assert index.name == 'foo'
conv = index.to_timestamp('D')
- self.assertEqual(conv.name, 'foo')
+ assert conv.name == 'foo'
def test_to_timestamp_repr_is_code(self):
zs = [Timestamp('99-04-17 00:00:00', tz='UTC'),
@@ -247,7 +229,7 @@ def test_to_timestamp_repr_is_code(self):
Timestamp('2001-04-17 00:00:00', tz='America/Los_Angeles'),
Timestamp('2001-04-17 00:00:00', tz=None)]
for z in zs:
- self.assertEqual(eval(repr(z)), z)
+ assert eval(repr(z)) == z
def test_to_timestamp_pi_nat(self):
# GH 7228
@@ -258,16 +240,16 @@ def test_to_timestamp_pi_nat(self):
expected = DatetimeIndex([pd.NaT, datetime(2011, 1, 1),
datetime(2011, 2, 1)], name='idx')
tm.assert_index_equal(result, expected)
- self.assertEqual(result.name, 'idx')
+ assert result.name == 'idx'
result2 = result.to_period(freq='M')
tm.assert_index_equal(result2, index)
- self.assertEqual(result2.name, 'idx')
+ assert result2.name == 'idx'
result3 = result.to_period(freq='3M')
exp = PeriodIndex(['NaT', '2011-01', '2011-02'], freq='3M', name='idx')
tm.assert_index_equal(result3, exp)
- self.assertEqual(result3.freqstr, '3M')
+ assert result3.freqstr == '3M'
msg = ('Frequency must be positive, because it'
' represents span: -2A')
@@ -317,13 +299,13 @@ def test_dti_to_period(self):
pi2 = dti.to_period(freq='D')
pi3 = dti.to_period(freq='3D')
- self.assertEqual(pi1[0], Period('Jan 2005', freq='M'))
- self.assertEqual(pi2[0], Period('1/31/2005', freq='D'))
- self.assertEqual(pi3[0], Period('1/31/2005', freq='3D'))
+ assert pi1[0] == Period('Jan 2005', freq='M')
+ assert pi2[0] == Period('1/31/2005', freq='D')
+ assert pi3[0] == Period('1/31/2005', freq='3D')
- self.assertEqual(pi1[-1], Period('Nov 2005', freq='M'))
- self.assertEqual(pi2[-1], Period('11/30/2005', freq='D'))
- self.assertEqual(pi3[-1], Period('11/30/2005', freq='3D'))
+ assert pi1[-1] == Period('Nov 2005', freq='M')
+ assert pi2[-1] == Period('11/30/2005', freq='D')
+ assert pi3[-1] == Period('11/30/2005', freq='3D')
tm.assert_index_equal(pi1, period_range('1/1/2005', '11/1/2005',
freq='M'))
@@ -365,25 +347,25 @@ def test_to_period_quarterlyish(self):
for off in offsets:
rng = date_range('01-Jan-2012', periods=8, freq=off)
prng = rng.to_period()
- self.assertEqual(prng.freq, 'Q-DEC')
+ assert prng.freq == 'Q-DEC'
def test_to_period_annualish(self):
offsets = ['BA', 'AS', 'BAS']
for off in offsets:
rng = date_range('01-Jan-2012', periods=8, freq=off)
prng = rng.to_period()
- self.assertEqual(prng.freq, 'A-DEC')
+ assert prng.freq == 'A-DEC'
def test_to_period_monthish(self):
offsets = ['MS', 'BM']
for off in offsets:
rng = date_range('01-Jan-2012', periods=8, freq=off)
prng = rng.to_period()
- self.assertEqual(prng.freq, 'M')
+ assert prng.freq == 'M'
rng = date_range('01-Jan-2012', periods=8, freq='M')
prng = rng.to_period()
- self.assertEqual(prng.freq, 'M')
+ assert prng.freq == 'M'
msg = pd.tseries.frequencies._INVALID_FREQ_ERROR
with tm.assert_raises_regex(ValueError, msg):
@@ -402,7 +384,7 @@ def test_to_timestamp_1703(self):
index = period_range('1/1/2012', periods=4, freq='D')
result = index.to_timestamp()
- self.assertEqual(result[0], Timestamp('1/1/2012'))
+ assert result[0] == Timestamp('1/1/2012')
def test_to_datetime_depr(self):
index = period_range('1/1/2012', periods=4, freq='D')
@@ -410,7 +392,7 @@ def test_to_datetime_depr(self):
with tm.assert_produces_warning(FutureWarning,
check_stacklevel=False):
result = index.to_datetime()
- self.assertEqual(result[0], Timestamp('1/1/2012'))
+ assert result[0] == Timestamp('1/1/2012')
def test_combine_first(self):
# GH 3367
@@ -433,10 +415,10 @@ def test_searchsorted(self):
'2014-01-04', '2014-01-05'], freq=freq)
p1 = pd.Period('2014-01-01', freq=freq)
- self.assertEqual(pidx.searchsorted(p1), 0)
+ assert pidx.searchsorted(p1) == 0
p2 = pd.Period('2014-01-04', freq=freq)
- self.assertEqual(pidx.searchsorted(p2), 3)
+ assert pidx.searchsorted(p2) == 3
msg = "Input has different freq=H from PeriodIndex"
with tm.assert_raises_regex(
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index 8ac1ef3e1911b..23c72e511d2b3 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -54,14 +54,14 @@ def create_index(self):
def test_new_axis(self):
new_index = self.dateIndex[None, :]
- self.assertEqual(new_index.ndim, 2)
+ assert new_index.ndim == 2
assert isinstance(new_index, np.ndarray)
def test_copy_and_deepcopy(self):
super(TestIndex, self).test_copy_and_deepcopy()
new_copy2 = self.intIndex.copy(dtype=int)
- self.assertEqual(new_copy2.dtype.kind, 'i')
+ assert new_copy2.dtype.kind == 'i'
def test_constructor(self):
# regular instance creation
@@ -78,7 +78,7 @@ def test_constructor(self):
arr = np.array(self.strIndex)
index = Index(arr, copy=True, name='name')
assert isinstance(index, Index)
- self.assertEqual(index.name, 'name')
+ assert index.name == 'name'
tm.assert_numpy_array_equal(arr, index.values)
arr[0] = "SOMEBIGLONGSTRING"
self.assertNotEqual(index[0], "SOMEBIGLONGSTRING")
@@ -107,11 +107,11 @@ def test_constructor_from_index_datetimetz(self):
tz='US/Eastern')
result = pd.Index(idx)
tm.assert_index_equal(result, idx)
- self.assertEqual(result.tz, idx.tz)
+ assert result.tz == idx.tz
result = pd.Index(idx.asobject)
tm.assert_index_equal(result, idx)
- self.assertEqual(result.tz, idx.tz)
+ assert result.tz == idx.tz
def test_constructor_from_index_timedelta(self):
idx = pd.timedelta_range('1 days', freq='D', periods=3)
@@ -134,7 +134,7 @@ def test_constructor_from_series_datetimetz(self):
tz='US/Eastern')
result = pd.Index(pd.Series(idx))
tm.assert_index_equal(result, idx)
- self.assertEqual(result.tz, idx.tz)
+ assert result.tz == idx.tz
def test_constructor_from_series_timedelta(self):
idx = pd.timedelta_range('1 days', freq='D', periods=3)
@@ -172,7 +172,7 @@ def test_constructor_from_series(self):
result = DatetimeIndex(df['date'], freq='MS')
expected.name = 'date'
tm.assert_index_equal(result, expected)
- self.assertEqual(df['date'].dtype, object)
+ assert df['date'].dtype == object
exp = pd.Series(['1-1-1990', '2-1-1990', '3-1-1990', '4-1-1990',
'5-1-1990'], name='date')
@@ -181,7 +181,7 @@ def test_constructor_from_series(self):
# GH 6274
# infer freq of same
result = pd.infer_freq(df['date'])
- self.assertEqual(result, 'MS')
+ assert result == 'MS'
def test_constructor_ndarray_like(self):
# GH 5460#issuecomment-44474502
@@ -221,17 +221,17 @@ def test_constructor_int_dtype_nan(self):
def test_index_ctor_infer_nan_nat(self):
# GH 13467
exp = pd.Float64Index([np.nan, np.nan])
- self.assertEqual(exp.dtype, np.float64)
+ assert exp.dtype == np.float64
tm.assert_index_equal(Index([np.nan, np.nan]), exp)
tm.assert_index_equal(Index(np.array([np.nan, np.nan])), exp)
exp = pd.DatetimeIndex([pd.NaT, pd.NaT])
- self.assertEqual(exp.dtype, 'datetime64[ns]')
+ assert exp.dtype == 'datetime64[ns]'
tm.assert_index_equal(Index([pd.NaT, pd.NaT]), exp)
tm.assert_index_equal(Index(np.array([pd.NaT, pd.NaT])), exp)
exp = pd.DatetimeIndex([pd.NaT, pd.NaT])
- self.assertEqual(exp.dtype, 'datetime64[ns]')
+ assert exp.dtype == 'datetime64[ns]'
for data in [[pd.NaT, np.nan], [np.nan, pd.NaT],
[np.nan, np.datetime64('nat')],
@@ -240,7 +240,7 @@ def test_index_ctor_infer_nan_nat(self):
tm.assert_index_equal(Index(np.array(data, dtype=object)), exp)
exp = pd.TimedeltaIndex([pd.NaT, pd.NaT])
- self.assertEqual(exp.dtype, 'timedelta64[ns]')
+ assert exp.dtype == 'timedelta64[ns]'
for data in [[np.nan, np.timedelta64('nat')],
[np.timedelta64('nat'), np.nan],
@@ -407,7 +407,7 @@ def test_astype(self):
# pass on name
self.intIndex.name = 'foobar'
casted = self.intIndex.astype('i8')
- self.assertEqual(casted.name, 'foobar')
+ assert casted.name == 'foobar'
def test_equals_object(self):
# same
@@ -449,12 +449,12 @@ def test_delete(self):
expected = Index(['b', 'c', 'd'], name='idx')
result = idx.delete(0)
tm.assert_index_equal(result, expected)
- self.assertEqual(result.name, expected.name)
+ assert result.name == expected.name
expected = Index(['a', 'b', 'c'], name='idx')
result = idx.delete(-1)
tm.assert_index_equal(result, expected)
- self.assertEqual(result.name, expected.name)
+ assert result.name == expected.name
with pytest.raises((IndexError, ValueError)):
# either depending on numpy version
@@ -505,11 +505,11 @@ def test_is_(self):
def test_asof(self):
d = self.dateIndex[0]
- self.assertEqual(self.dateIndex.asof(d), d)
+ assert self.dateIndex.asof(d) == d
assert isnull(self.dateIndex.asof(d - timedelta(1)))
d = self.dateIndex[-1]
- self.assertEqual(self.dateIndex.asof(d + timedelta(1)), d)
+ assert self.dateIndex.asof(d + timedelta(1)) == d
d = self.dateIndex[0].to_pydatetime()
assert isinstance(self.dateIndex.asof(d), Timestamp)
@@ -518,7 +518,7 @@ def test_asof_datetime_partial(self):
idx = pd.date_range('2010-01-01', periods=2, freq='m')
expected = Timestamp('2010-02-28')
result = idx.asof('2010-02')
- self.assertEqual(result, expected)
+ assert result == expected
assert not isinstance(result, Index)
def test_nanosecond_index_access(self):
@@ -529,12 +529,11 @@ def test_nanosecond_index_access(self):
first_value = x.asof(x.index[0])
# this does not yet work, as parsing strings is done via dateutil
- # self.assertEqual(first_value,
- # x['2013-01-01 00:00:00.000000050+0000'])
+ # assert first_value == x['2013-01-01 00:00:00.000000050+0000']
exp_ts = np_datetime64_compat('2013-01-01 00:00:00.000000050+0000',
'ns')
- self.assertEqual(first_value, x[Timestamp(exp_ts)])
+ assert first_value == x[Timestamp(exp_ts)]
def test_comparators(self):
index = self.dateIndex
@@ -564,16 +563,16 @@ def test_booleanindex(self):
subIndex = self.strIndex[boolIdx]
for i, val in enumerate(subIndex):
- self.assertEqual(subIndex.get_loc(val), i)
+ assert subIndex.get_loc(val) == i
subIndex = self.strIndex[list(boolIdx)]
for i, val in enumerate(subIndex):
- self.assertEqual(subIndex.get_loc(val), i)
+ assert subIndex.get_loc(val) == i
def test_fancy(self):
sl = self.strIndex[[1, 2, 3]]
for i in sl:
- self.assertEqual(i, sl[sl.get_loc(i)])
+ assert i == sl[sl.get_loc(i)]
def test_empty_fancy(self):
empty_farr = np.array([], dtype=np.float_)
@@ -598,7 +597,7 @@ def test_getitem(self):
exp = self.dateIndex[5]
exp = _to_m8(exp)
- self.assertEqual(exp, arr[5])
+ assert exp == arr[5]
def test_intersection(self):
first = self.strIndex[:20]
@@ -616,14 +615,14 @@ def test_intersection(self):
expected2 = Index([3, 4, 5], name='idx')
result2 = idx1.intersection(idx2)
tm.assert_index_equal(result2, expected2)
- self.assertEqual(result2.name, expected2.name)
+ assert result2.name == expected2.name
# if target name is different, it will be reset
idx3 = Index([3, 4, 5, 6, 7], name='other')
expected3 = Index([3, 4, 5], name=None)
result3 = idx1.intersection(idx3)
tm.assert_index_equal(result3, expected3)
- self.assertEqual(result3.name, expected3.name)
+ assert result3.name == expected3.name
# non monotonic
idx1 = Index([5, 3, 2, 4, 1], name='idx')
@@ -655,7 +654,7 @@ def test_intersection(self):
first.name = 'A'
second.name = 'A'
intersect = first.intersection(second)
- self.assertEqual(intersect.name, 'A')
+ assert intersect.name == 'A'
second.name = 'B'
intersect = first.intersection(second)
@@ -838,7 +837,7 @@ def test_append_empty_preserve_name(self):
right = Index([1, 2, 3], name='foo')
result = left.append(right)
- self.assertEqual(result.name, 'foo')
+ assert result.name == 'foo'
left = Index([], name='foo')
right = Index([1, 2, 3], name='bar')
@@ -872,22 +871,22 @@ def test_difference(self):
result = first.difference(second)
assert tm.equalContents(result, answer)
- self.assertEqual(result.name, None)
+ assert result.name is None
# same names
second.name = 'name'
result = first.difference(second)
- self.assertEqual(result.name, 'name')
+ assert result.name == 'name'
# with empty
result = first.difference([])
assert tm.equalContents(result, first)
- self.assertEqual(result.name, first.name)
+ assert result.name == first.name
- # with everythin
+ # with everything
result = first.difference(first)
- self.assertEqual(len(result), 0)
- self.assertEqual(result.name, first.name)
+ assert len(result) == 0
+ assert result.name == first.name
def test_symmetric_difference(self):
# smoke
@@ -931,11 +930,11 @@ def test_symmetric_difference(self):
expected = Index([1, 5])
result = idx1.symmetric_difference(idx2)
assert tm.equalContents(result, expected)
- self.assertEqual(result.name, 'idx1')
+ assert result.name == 'idx1'
result = idx1.symmetric_difference(idx2, result_name='new_name')
assert tm.equalContents(result, expected)
- self.assertEqual(result.name, 'new_name')
+ assert result.name == 'new_name'
def test_is_numeric(self):
assert not self.dateIndex.is_numeric()
@@ -978,19 +977,19 @@ def test_format(self):
index = Index([now])
formatted = index.format()
expected = [str(index[0])]
- self.assertEqual(formatted, expected)
+ assert formatted == expected
# 2845
index = Index([1, 2.0 + 3.0j, np.nan])
formatted = index.format()
expected = [str(index[0]), str(index[1]), u('NaN')]
- self.assertEqual(formatted, expected)
+ assert formatted == expected
# is this really allowed?
index = Index([1, 2.0 + 3.0j, None])
formatted = index.format()
expected = [str(index[0]), str(index[1]), u('NaN')]
- self.assertEqual(formatted, expected)
+ assert formatted == expected
self.strIndex[:0].format()
@@ -1000,15 +999,15 @@ def test_format_with_name_time_info(self):
dates = Index([dt + inc for dt in self.dateIndex], name='something')
formatted = dates.format(name=True)
- self.assertEqual(formatted[0], 'something')
+ assert formatted[0] == 'something'
def test_format_datetime_with_time(self):
t = Index([datetime(2012, 2, 7), datetime(2012, 2, 7, 23)])
result = t.format()
expected = ['2012-02-07 00:00:00', '2012-02-07 23:00:00']
- self.assertEqual(len(result), 2)
- self.assertEqual(result, expected)
+ assert len(result) == 2
+ assert result == expected
def test_format_none(self):
values = ['a', 'b', 'c', None]
@@ -1019,8 +1018,8 @@ def test_format_none(self):
def test_logical_compat(self):
idx = self.create_index()
- self.assertEqual(idx.all(), idx.values.all())
- self.assertEqual(idx.any(), idx.values.any())
+ assert idx.all() == idx.values.all()
+ assert idx.any() == idx.values.any()
def _check_method_works(self, method):
method(self.empty)
@@ -1138,17 +1137,17 @@ def test_get_loc(self):
idx = pd.Index([0, 1, 2])
all_methods = [None, 'pad', 'backfill', 'nearest']
for method in all_methods:
- self.assertEqual(idx.get_loc(1, method=method), 1)
+ assert idx.get_loc(1, method=method) == 1
if method is not None:
- self.assertEqual(idx.get_loc(1, method=method, tolerance=0), 1)
+ assert idx.get_loc(1, method=method, tolerance=0) == 1
with pytest.raises(TypeError):
idx.get_loc([1, 2], method=method)
for method, loc in [('pad', 1), ('backfill', 2), ('nearest', 1)]:
- self.assertEqual(idx.get_loc(1.1, method), loc)
+ assert idx.get_loc(1.1, method) == loc
for method, loc in [('pad', 1), ('backfill', 2), ('nearest', 1)]:
- self.assertEqual(idx.get_loc(1.1, method, tolerance=1), loc)
+ assert idx.get_loc(1.1, method, tolerance=1) == loc
for method in ['pad', 'backfill', 'nearest']:
with pytest.raises(KeyError):
@@ -1170,26 +1169,26 @@ def test_slice_locs(self):
idx = Index(np.array([0, 1, 2, 5, 6, 7, 9, 10], dtype=dtype))
n = len(idx)
- self.assertEqual(idx.slice_locs(start=2), (2, n))
- self.assertEqual(idx.slice_locs(start=3), (3, n))
- self.assertEqual(idx.slice_locs(3, 8), (3, 6))
- self.assertEqual(idx.slice_locs(5, 10), (3, n))
- self.assertEqual(idx.slice_locs(end=8), (0, 6))
- self.assertEqual(idx.slice_locs(end=9), (0, 7))
+ assert idx.slice_locs(start=2) == (2, n)
+ assert idx.slice_locs(start=3) == (3, n)
+ assert idx.slice_locs(3, 8) == (3, 6)
+ assert idx.slice_locs(5, 10) == (3, n)
+ assert idx.slice_locs(end=8) == (0, 6)
+ assert idx.slice_locs(end=9) == (0, 7)
# reversed
idx2 = idx[::-1]
- self.assertEqual(idx2.slice_locs(8, 2), (2, 6))
- self.assertEqual(idx2.slice_locs(7, 3), (2, 5))
+ assert idx2.slice_locs(8, 2) == (2, 6)
+ assert idx2.slice_locs(7, 3) == (2, 5)
# float slicing
idx = Index(np.array([0, 1, 2, 5, 6, 7, 9, 10], dtype=float))
n = len(idx)
- self.assertEqual(idx.slice_locs(5.0, 10.0), (3, n))
- self.assertEqual(idx.slice_locs(4.5, 10.5), (3, 8))
+ assert idx.slice_locs(5.0, 10.0) == (3, n)
+ assert idx.slice_locs(4.5, 10.5) == (3, 8)
idx2 = idx[::-1]
- self.assertEqual(idx2.slice_locs(8.5, 1.5), (2, 6))
- self.assertEqual(idx2.slice_locs(10.5, -1), (0, n))
+ assert idx2.slice_locs(8.5, 1.5) == (2, 6)
+ assert idx2.slice_locs(10.5, -1) == (0, n)
# int slicing with floats
# GH 4892, these are all TypeErrors
@@ -1206,35 +1205,35 @@ def test_slice_locs(self):
def test_slice_locs_dup(self):
idx = Index(['a', 'a', 'b', 'c', 'd', 'd'])
- self.assertEqual(idx.slice_locs('a', 'd'), (0, 6))
- self.assertEqual(idx.slice_locs(end='d'), (0, 6))
- self.assertEqual(idx.slice_locs('a', 'c'), (0, 4))
- self.assertEqual(idx.slice_locs('b', 'd'), (2, 6))
+ assert idx.slice_locs('a', 'd') == (0, 6)
+ assert idx.slice_locs(end='d') == (0, 6)
+ assert idx.slice_locs('a', 'c') == (0, 4)
+ assert idx.slice_locs('b', 'd') == (2, 6)
idx2 = idx[::-1]
- self.assertEqual(idx2.slice_locs('d', 'a'), (0, 6))
- self.assertEqual(idx2.slice_locs(end='a'), (0, 6))
- self.assertEqual(idx2.slice_locs('d', 'b'), (0, 4))
- self.assertEqual(idx2.slice_locs('c', 'a'), (2, 6))
+ assert idx2.slice_locs('d', 'a') == (0, 6)
+ assert idx2.slice_locs(end='a') == (0, 6)
+ assert idx2.slice_locs('d', 'b') == (0, 4)
+ assert idx2.slice_locs('c', 'a') == (2, 6)
for dtype in [int, float]:
idx = Index(np.array([10, 12, 12, 14], dtype=dtype))
- self.assertEqual(idx.slice_locs(12, 12), (1, 3))
- self.assertEqual(idx.slice_locs(11, 13), (1, 3))
+ assert idx.slice_locs(12, 12) == (1, 3)
+ assert idx.slice_locs(11, 13) == (1, 3)
idx2 = idx[::-1]
- self.assertEqual(idx2.slice_locs(12, 12), (1, 3))
- self.assertEqual(idx2.slice_locs(13, 11), (1, 3))
+ assert idx2.slice_locs(12, 12) == (1, 3)
+ assert idx2.slice_locs(13, 11) == (1, 3)
def test_slice_locs_na(self):
idx = Index([np.nan, 1, 2])
pytest.raises(KeyError, idx.slice_locs, start=1.5)
pytest.raises(KeyError, idx.slice_locs, end=1.5)
- self.assertEqual(idx.slice_locs(1), (1, 3))
- self.assertEqual(idx.slice_locs(np.nan), (0, 3))
+ assert idx.slice_locs(1) == (1, 3)
+ assert idx.slice_locs(np.nan) == (0, 3)
idx = Index([0, np.nan, np.nan, 1, 2])
- self.assertEqual(idx.slice_locs(np.nan), (1, 5))
+ assert idx.slice_locs(np.nan) == (1, 5)
def test_slice_locs_negative_step(self):
idx = Index(list('bcdxy'))
@@ -1320,13 +1319,13 @@ def test_tuple_union_bug(self):
int_idx = idx1.intersection(idx2)
# needs to be 1d like idx1 and idx2
expected = idx1[:4] # pandas.Index(sorted(set(idx1) & set(idx2)))
- self.assertEqual(int_idx.ndim, 1)
+ assert int_idx.ndim == 1
tm.assert_index_equal(int_idx, expected)
# union broken
union_idx = idx1.union(idx2)
expected = idx2
- self.assertEqual(union_idx.ndim, 1)
+ assert union_idx.ndim == 1
tm.assert_index_equal(union_idx, expected)
def test_is_monotonic_incomparable(self):
@@ -1341,7 +1340,7 @@ def test_get_set_value(self):
assert_almost_equal(self.dateIndex.get_value(values, date), values[67])
self.dateIndex.set_value(values, date, 10)
- self.assertEqual(values[67], 10)
+ assert values[67] == 10
def test_isin(self):
values = ['foo', 'bar', 'quux']
@@ -1358,8 +1357,8 @@ def test_isin(self):
# empty, return dtype bool
idx = Index([])
result = idx.isin(values)
- self.assertEqual(len(result), 0)
- self.assertEqual(result.dtype, np.bool_)
+ assert len(result) == 0
+ assert result.dtype == np.bool_
def test_isin_nan(self):
tm.assert_numpy_array_equal(Index(['a', np.nan]).isin([np.nan]),
@@ -1423,7 +1422,7 @@ def test_get_level_values(self):
def test_slice_keep_name(self):
idx = Index(['a', 'b'], name='asdf')
- self.assertEqual(idx.name, idx[1:].name)
+ assert idx.name == idx[1:].name
def test_join_self(self):
# instance attributes of the form self.<name>Index
@@ -1546,28 +1545,28 @@ def test_reindex_preserves_name_if_target_is_list_or_ndarray(self):
dt_idx = pd.date_range('20130101', periods=3)
idx.name = None
- self.assertEqual(idx.reindex([])[0].name, None)
- self.assertEqual(idx.reindex(np.array([]))[0].name, None)
- self.assertEqual(idx.reindex(idx.tolist())[0].name, None)
- self.assertEqual(idx.reindex(idx.tolist()[:-1])[0].name, None)
- self.assertEqual(idx.reindex(idx.values)[0].name, None)
- self.assertEqual(idx.reindex(idx.values[:-1])[0].name, None)
+ assert idx.reindex([])[0].name is None
+ assert idx.reindex(np.array([]))[0].name is None
+ assert idx.reindex(idx.tolist())[0].name is None
+ assert idx.reindex(idx.tolist()[:-1])[0].name is None
+ assert idx.reindex(idx.values)[0].name is None
+ assert idx.reindex(idx.values[:-1])[0].name is None
# Must preserve name even if dtype changes.
- self.assertEqual(idx.reindex(dt_idx.values)[0].name, None)
- self.assertEqual(idx.reindex(dt_idx.tolist())[0].name, None)
+ assert idx.reindex(dt_idx.values)[0].name is None
+ assert idx.reindex(dt_idx.tolist())[0].name is None
idx.name = 'foobar'
- self.assertEqual(idx.reindex([])[0].name, 'foobar')
- self.assertEqual(idx.reindex(np.array([]))[0].name, 'foobar')
- self.assertEqual(idx.reindex(idx.tolist())[0].name, 'foobar')
- self.assertEqual(idx.reindex(idx.tolist()[:-1])[0].name, 'foobar')
- self.assertEqual(idx.reindex(idx.values)[0].name, 'foobar')
- self.assertEqual(idx.reindex(idx.values[:-1])[0].name, 'foobar')
+ assert idx.reindex([])[0].name == 'foobar'
+ assert idx.reindex(np.array([]))[0].name == 'foobar'
+ assert idx.reindex(idx.tolist())[0].name == 'foobar'
+ assert idx.reindex(idx.tolist()[:-1])[0].name == 'foobar'
+ assert idx.reindex(idx.values)[0].name == 'foobar'
+ assert idx.reindex(idx.values[:-1])[0].name == 'foobar'
# Must preserve name even if dtype changes.
- self.assertEqual(idx.reindex(dt_idx.values)[0].name, 'foobar')
- self.assertEqual(idx.reindex(dt_idx.tolist())[0].name, 'foobar')
+ assert idx.reindex(dt_idx.values)[0].name == 'foobar'
+ assert idx.reindex(dt_idx.tolist())[0].name == 'foobar'
def test_reindex_preserves_type_if_target_is_empty_list_or_array(self):
# GH7774
@@ -1576,10 +1575,9 @@ def test_reindex_preserves_type_if_target_is_empty_list_or_array(self):
def get_reindex_type(target):
return idx.reindex(target)[0].dtype.type
- self.assertEqual(get_reindex_type([]), np.object_)
- self.assertEqual(get_reindex_type(np.array([])), np.object_)
- self.assertEqual(get_reindex_type(np.array([], dtype=np.int64)),
- np.object_)
+ assert get_reindex_type([]) == np.object_
+ assert get_reindex_type(np.array([])) == np.object_
+ assert get_reindex_type(np.array([], dtype=np.int64)) == np.object_
def test_reindex_doesnt_preserve_type_if_target_is_empty_index(self):
# GH7774
@@ -1588,14 +1586,14 @@ def test_reindex_doesnt_preserve_type_if_target_is_empty_index(self):
def get_reindex_type(target):
return idx.reindex(target)[0].dtype.type
- self.assertEqual(get_reindex_type(pd.Int64Index([])), np.int64)
- self.assertEqual(get_reindex_type(pd.Float64Index([])), np.float64)
- self.assertEqual(get_reindex_type(pd.DatetimeIndex([])), np.datetime64)
+ assert get_reindex_type(pd.Int64Index([])) == np.int64
+ assert get_reindex_type(pd.Float64Index([])) == np.float64
+ assert get_reindex_type(pd.DatetimeIndex([])) == np.datetime64
reindexed = idx.reindex(pd.MultiIndex(
[pd.Int64Index([]), pd.Float64Index([])], [[], []]))[0]
- self.assertEqual(reindexed.levels[0].dtype.type, np.int64)
- self.assertEqual(reindexed.levels[1].dtype.type, np.float64)
+ assert reindexed.levels[0].dtype.type == np.int64
+ assert reindexed.levels[1].dtype.type == np.float64
def test_groupby(self):
idx = Index(range(5))
@@ -1628,8 +1626,8 @@ def test_equals_op_multiindex(self):
def test_conversion_preserves_name(self):
# GH 10875
i = pd.Index(['01:02:03', '01:02:04'], name='label')
- self.assertEqual(i.name, pd.to_datetime(i).name)
- self.assertEqual(i.name, pd.to_timedelta(i).name)
+ assert i.name == pd.to_datetime(i).name
+ assert i.name == pd.to_timedelta(i).name
def test_string_index_repr(self):
# py3/py2 repr can differ because of "u" prefix
@@ -1644,10 +1642,10 @@ def test_string_index_repr(self):
idx = pd.Index(['a', 'bb', 'ccc'])
if PY3:
expected = u"""Index(['a', 'bb', 'ccc'], dtype='object')"""
- self.assertEqual(repr(idx), expected)
+ assert repr(idx) == expected
else:
expected = u"""Index([u'a', u'bb', u'ccc'], dtype='object')"""
- self.assertEqual(coerce(idx), expected)
+ assert coerce(idx) == expected
# multiple lines
idx = pd.Index(['a', 'bb', 'ccc'] * 10)
@@ -1658,7 +1656,7 @@ def test_string_index_repr(self):
'a', 'bb', 'ccc', 'a', 'bb', 'ccc'],
dtype='object')"""
- self.assertEqual(repr(idx), expected)
+ assert repr(idx) == expected
else:
expected = u"""\
Index([u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a',
@@ -1666,7 +1664,7 @@ def test_string_index_repr(self):
u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc'],
dtype='object')"""
- self.assertEqual(coerce(idx), expected)
+ assert coerce(idx) == expected
# truncated
idx = pd.Index(['a', 'bb', 'ccc'] * 100)
@@ -1677,7 +1675,7 @@ def test_string_index_repr(self):
'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc'],
dtype='object', length=300)"""
- self.assertEqual(repr(idx), expected)
+ assert repr(idx) == expected
else:
expected = u"""\
Index([u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a',
@@ -1685,16 +1683,16 @@ def test_string_index_repr(self):
u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc'],
dtype='object', length=300)"""
- self.assertEqual(coerce(idx), expected)
+ assert coerce(idx) == expected
# short
idx = pd.Index([u'あ', u'いい', u'ううう'])
if PY3:
expected = u"""Index(['あ', 'いい', 'ううう'], dtype='object')"""
- self.assertEqual(repr(idx), expected)
+ assert repr(idx) == expected
else:
expected = u"""Index([u'あ', u'いい', u'ううう'], dtype='object')"""
- self.assertEqual(coerce(idx), expected)
+ assert coerce(idx) == expected
# multiple lines
idx = pd.Index([u'あ', u'いい', u'ううう'] * 10)
@@ -1706,7 +1704,7 @@ def test_string_index_repr(self):
u" 'あ', 'いい', 'ううう', 'あ', 'いい', "
u"'ううう'],\n"
u" dtype='object')")
- self.assertEqual(repr(idx), expected)
+ assert repr(idx) == expected
else:
expected = (u"Index([u'あ', u'いい', u'ううう', u'あ', u'いい', "
u"u'ううう', u'あ', u'いい', u'ううう', u'あ',\n"
@@ -1715,7 +1713,7 @@ def test_string_index_repr(self):
u" u'ううう', u'あ', u'いい', u'ううう', u'あ', "
u"u'いい', u'ううう', u'あ', u'いい', u'ううう'],\n"
u" dtype='object')")
- self.assertEqual(coerce(idx), expected)
+ assert coerce(idx) == expected
# truncated
idx = pd.Index([u'あ', u'いい', u'ううう'] * 100)
@@ -1726,7 +1724,7 @@ def test_string_index_repr(self):
u" 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', "
u"'ううう', 'あ', 'いい', 'ううう'],\n"
u" dtype='object', length=300)")
- self.assertEqual(repr(idx), expected)
+ assert repr(idx) == expected
else:
expected = (u"Index([u'あ', u'いい', u'ううう', u'あ', u'いい', "
u"u'ううう', u'あ', u'いい', u'ううう', u'あ',\n"
@@ -1735,7 +1733,7 @@ def test_string_index_repr(self):
u"u'いい', u'ううう', u'あ', u'いい', u'ううう'],\n"
u" dtype='object', length=300)")
- self.assertEqual(coerce(idx), expected)
+ assert coerce(idx) == expected
# Enable Unicode option -----------------------------------------
with cf.option_context('display.unicode.east_asian_width', True):
@@ -1745,11 +1743,11 @@ def test_string_index_repr(self):
if PY3:
expected = (u"Index(['あ', 'いい', 'ううう'], "
u"dtype='object')")
- self.assertEqual(repr(idx), expected)
+ assert repr(idx) == expected
else:
expected = (u"Index([u'あ', u'いい', u'ううう'], "
u"dtype='object')")
- self.assertEqual(coerce(idx), expected)
+ assert coerce(idx) == expected
# multiple lines
idx = pd.Index([u'あ', u'いい', u'ううう'] * 10)
@@ -1763,7 +1761,7 @@ def test_string_index_repr(self):
u" 'あ', 'いい', 'ううう'],\n"
u" dtype='object')""")
- self.assertEqual(repr(idx), expected)
+ assert repr(idx) == expected
else:
expected = (u"Index([u'あ', u'いい', u'ううう', u'あ', u'いい', "
u"u'ううう', u'あ', u'いい',\n"
@@ -1775,7 +1773,7 @@ def test_string_index_repr(self):
u"u'あ', u'いい', u'ううう'],\n"
u" dtype='object')")
- self.assertEqual(coerce(idx), expected)
+ assert coerce(idx) == expected
# truncated
idx = pd.Index([u'あ', u'いい', u'ううう'] * 100)
@@ -1789,7 +1787,7 @@ def test_string_index_repr(self):
u" 'ううう'],\n"
u" dtype='object', length=300)")
- self.assertEqual(repr(idx), expected)
+ assert repr(idx) == expected
else:
expected = (u"Index([u'あ', u'いい', u'ううう', u'あ', u'いい', "
u"u'ううう', u'あ', u'いい',\n"
@@ -1800,7 +1798,7 @@ def test_string_index_repr(self):
u" u'いい', u'ううう'],\n"
u" dtype='object', length=300)")
- self.assertEqual(coerce(idx), expected)
+ assert coerce(idx) == expected
class TestMixedIntIndex(Base, tm.TestCase):
@@ -1876,22 +1874,22 @@ def test_copy_name2(self):
idx1 = idx.copy()
assert idx.equals(idx1)
- self.assertEqual(idx.name, 'MyName')
- self.assertEqual(idx1.name, 'MyName')
+ assert idx.name == 'MyName'
+ assert idx1.name == 'MyName'
idx2 = idx.copy(name='NewName')
assert idx.equals(idx2)
- self.assertEqual(idx.name, 'MyName')
- self.assertEqual(idx2.name, 'NewName')
+ assert idx.name == 'MyName'
+ assert idx2.name == 'NewName'
idx3 = idx.copy(names=['NewName'])
assert idx.equals(idx3)
- self.assertEqual(idx.name, 'MyName')
- self.assertEqual(idx.names, ['MyName'])
- self.assertEqual(idx3.name, 'NewName')
- self.assertEqual(idx3.names, ['NewName'])
+ assert idx.name == 'MyName'
+ assert idx.names == ['MyName']
+ assert idx3.name == 'NewName'
+ assert idx3.names == ['NewName']
def test_union_base(self):
idx = self.create_index()
@@ -1960,8 +1958,8 @@ def test_symmetric_difference(self):
def test_logical_compat(self):
idx = self.create_index()
- self.assertEqual(idx.all(), idx.values.all())
- self.assertEqual(idx.any(), idx.values.any())
+ assert idx.all() == idx.values.all()
+ assert idx.any() == idx.values.any()
def test_dropna(self):
# GH 6194
@@ -2074,4 +2072,4 @@ def test_intersect_str_dates(self):
i2 = Index(['aa'], dtype=object)
res = i2.intersection(i1)
- self.assertEqual(len(res), 0)
+ assert len(res) == 0
diff --git a/pandas/tests/indexes/test_category.py b/pandas/tests/indexes/test_category.py
index 7b2d27c9b51a4..6a2eea0b84b72 100644
--- a/pandas/tests/indexes/test_category.py
+++ b/pandas/tests/indexes/test_category.py
@@ -198,8 +198,8 @@ def test_min_max(self):
ci = self.create_index(ordered=True)
- self.assertEqual(ci.min(), 'c')
- self.assertEqual(ci.max(), 'b')
+ assert ci.min() == 'c'
+ assert ci.max() == 'b'
def test_map(self):
ci = pd.CategoricalIndex(list('ABABC'), categories=list('CBA'),
@@ -450,8 +450,8 @@ def test_get_loc(self):
# GH 12531
cidx1 = CategoricalIndex(list('abcde'), categories=list('edabc'))
idx1 = Index(list('abcde'))
- self.assertEqual(cidx1.get_loc('a'), idx1.get_loc('a'))
- self.assertEqual(cidx1.get_loc('e'), idx1.get_loc('e'))
+ assert cidx1.get_loc('a') == idx1.get_loc('a')
+ assert cidx1.get_loc('e') == idx1.get_loc('e')
for i in [cidx1, idx1]:
with pytest.raises(KeyError):
@@ -468,8 +468,8 @@ def test_get_loc(self):
True, False, True]))
# unique element results in scalar
res = cidx2.get_loc('e')
- self.assertEqual(res, idx2.get_loc('e'))
- self.assertEqual(res, 4)
+ assert res == idx2.get_loc('e')
+ assert res == 4
for i in [cidx2, idx2]:
with pytest.raises(KeyError):
@@ -481,12 +481,12 @@ def test_get_loc(self):
# results in slice
res = cidx3.get_loc('a')
- self.assertEqual(res, idx3.get_loc('a'))
- self.assertEqual(res, slice(0, 2, None))
+ assert res == idx3.get_loc('a')
+ assert res == slice(0, 2, None)
res = cidx3.get_loc('b')
- self.assertEqual(res, idx3.get_loc('b'))
- self.assertEqual(res, slice(2, 5, None))
+ assert res == idx3.get_loc('b')
+ assert res == slice(2, 5, None)
for i in [cidx3, idx3]:
with pytest.raises(KeyError):
@@ -612,10 +612,10 @@ def test_string_categorical_index_repr(self):
idx = pd.CategoricalIndex(['a', 'bb', 'ccc'])
if PY3:
expected = u"""CategoricalIndex(['a', 'bb', 'ccc'], categories=['a', 'bb', 'ccc'], ordered=False, dtype='category')""" # noqa
- self.assertEqual(repr(idx), expected)
+ assert repr(idx) == expected
else:
expected = u"""CategoricalIndex([u'a', u'bb', u'ccc'], categories=[u'a', u'bb', u'ccc'], ordered=False, dtype='category')""" # noqa
- self.assertEqual(unicode(idx), expected)
+ assert unicode(idx) == expected
# multiple lines
idx = pd.CategoricalIndex(['a', 'bb', 'ccc'] * 10)
@@ -625,7 +625,7 @@ def test_string_categorical_index_repr(self):
'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc'],
categories=['a', 'bb', 'ccc'], ordered=False, dtype='category')""" # noqa
- self.assertEqual(repr(idx), expected)
+ assert repr(idx) == expected
else:
expected = u"""CategoricalIndex([u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb',
u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a',
@@ -633,7 +633,7 @@ def test_string_categorical_index_repr(self):
u'a', u'bb', u'ccc', u'a', u'bb', u'ccc'],
categories=[u'a', u'bb', u'ccc'], ordered=False, dtype='category')""" # noqa
- self.assertEqual(unicode(idx), expected)
+ assert unicode(idx) == expected
# truncated
idx = pd.CategoricalIndex(['a', 'bb', 'ccc'] * 100)
@@ -643,7 +643,7 @@ def test_string_categorical_index_repr(self):
'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc'],
categories=['a', 'bb', 'ccc'], ordered=False, dtype='category', length=300)""" # noqa
- self.assertEqual(repr(idx), expected)
+ assert repr(idx) == expected
else:
expected = u"""CategoricalIndex([u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb',
u'ccc', u'a',
@@ -652,7 +652,7 @@ def test_string_categorical_index_repr(self):
u'bb', u'ccc'],
categories=[u'a', u'bb', u'ccc'], ordered=False, dtype='category', length=300)""" # noqa
- self.assertEqual(unicode(idx), expected)
+ assert unicode(idx) == expected
# larger categories
idx = pd.CategoricalIndex(list('abcdefghijklmmo'))
@@ -661,22 +661,22 @@ def test_string_categorical_index_repr(self):
'm', 'm', 'o'],
categories=['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', ...], ordered=False, dtype='category')""" # noqa
- self.assertEqual(repr(idx), expected)
+ assert repr(idx) == expected
else:
expected = u"""CategoricalIndex([u'a', u'b', u'c', u'd', u'e', u'f', u'g', u'h', u'i', u'j',
u'k', u'l', u'm', u'm', u'o'],
categories=[u'a', u'b', u'c', u'd', u'e', u'f', u'g', u'h', ...], ordered=False, dtype='category')""" # noqa
- self.assertEqual(unicode(idx), expected)
+ assert unicode(idx) == expected
# short
idx = pd.CategoricalIndex([u'あ', u'いい', u'ううう'])
if PY3:
expected = u"""CategoricalIndex(['あ', 'いい', 'ううう'], categories=['あ', 'いい', 'ううう'], ordered=False, dtype='category')""" # noqa
- self.assertEqual(repr(idx), expected)
+ assert repr(idx) == expected
else:
expected = u"""CategoricalIndex([u'あ', u'いい', u'ううう'], categories=[u'あ', u'いい', u'ううう'], ordered=False, dtype='category')""" # noqa
- self.assertEqual(unicode(idx), expected)
+ assert unicode(idx) == expected
# multiple lines
idx = pd.CategoricalIndex([u'あ', u'いい', u'ううう'] * 10)
@@ -686,7 +686,7 @@ def test_string_categorical_index_repr(self):
'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう'],
categories=['あ', 'いい', 'ううう'], ordered=False, dtype='category')""" # noqa
- self.assertEqual(repr(idx), expected)
+ assert repr(idx) == expected
else:
expected = u"""CategoricalIndex([u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい',
u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ',
@@ -694,7 +694,7 @@ def test_string_categorical_index_repr(self):
u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう'],
categories=[u'あ', u'いい', u'ううう'], ordered=False, dtype='category')""" # noqa
- self.assertEqual(unicode(idx), expected)
+ assert unicode(idx) == expected
# truncated
idx = pd.CategoricalIndex([u'あ', u'いい', u'ううう'] * 100)
@@ -704,7 +704,7 @@ def test_string_categorical_index_repr(self):
'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう'],
categories=['あ', 'いい', 'ううう'], ordered=False, dtype='category', length=300)""" # noqa
- self.assertEqual(repr(idx), expected)
+ assert repr(idx) == expected
else:
expected = u"""CategoricalIndex([u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい',
u'ううう', u'あ',
@@ -713,7 +713,7 @@ def test_string_categorical_index_repr(self):
u'いい', u'ううう'],
categories=[u'あ', u'いい', u'ううう'], ordered=False, dtype='category', length=300)""" # noqa
- self.assertEqual(unicode(idx), expected)
+ assert unicode(idx) == expected
# larger categories
idx = pd.CategoricalIndex(list(u'あいうえおかきくけこさしすせそ'))
@@ -722,13 +722,13 @@ def test_string_categorical_index_repr(self):
'す', 'せ', 'そ'],
categories=['あ', 'い', 'う', 'え', 'お', 'か', 'き', 'く', ...], ordered=False, dtype='category')""" # noqa
- self.assertEqual(repr(idx), expected)
+ assert repr(idx) == expected
else:
expected = u"""CategoricalIndex([u'あ', u'い', u'う', u'え', u'お', u'か', u'き', u'く', u'け', u'こ',
u'さ', u'し', u'す', u'せ', u'そ'],
categories=[u'あ', u'い', u'う', u'え', u'お', u'か', u'き', u'く', ...], ordered=False, dtype='category')""" # noqa
- self.assertEqual(unicode(idx), expected)
+ assert unicode(idx) == expected
# Enable Unicode option -----------------------------------------
with cf.option_context('display.unicode.east_asian_width', True):
@@ -737,10 +737,10 @@ def test_string_categorical_index_repr(self):
idx = pd.CategoricalIndex([u'あ', u'いい', u'ううう'])
if PY3:
expected = u"""CategoricalIndex(['あ', 'いい', 'ううう'], categories=['あ', 'いい', 'ううう'], ordered=False, dtype='category')""" # noqa
- self.assertEqual(repr(idx), expected)
+ assert repr(idx) == expected
else:
expected = u"""CategoricalIndex([u'あ', u'いい', u'ううう'], categories=[u'あ', u'いい', u'ううう'], ordered=False, dtype='category')""" # noqa
- self.assertEqual(unicode(idx), expected)
+ assert unicode(idx) == expected
# multiple lines
idx = pd.CategoricalIndex([u'あ', u'いい', u'ううう'] * 10)
@@ -751,7 +751,7 @@ def test_string_categorical_index_repr(self):
'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう'],
categories=['あ', 'いい', 'ううう'], ordered=False, dtype='category')""" # noqa
- self.assertEqual(repr(idx), expected)
+ assert repr(idx) == expected
else:
expected = u"""CategoricalIndex([u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ',
u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ',
@@ -760,7 +760,7 @@ def test_string_categorical_index_repr(self):
u'いい', u'ううう', u'あ', u'いい', u'ううう'],
categories=[u'あ', u'いい', u'ううう'], ordered=False, dtype='category')""" # noqa
- self.assertEqual(unicode(idx), expected)
+ assert unicode(idx) == expected
# truncated
idx = pd.CategoricalIndex([u'あ', u'いい', u'ううう'] * 100)
@@ -772,7 +772,7 @@ def test_string_categorical_index_repr(self):
'あ', 'いい', 'ううう'],
categories=['あ', 'いい', 'ううう'], ordered=False, dtype='category', length=300)""" # noqa
- self.assertEqual(repr(idx), expected)
+ assert repr(idx) == expected
else:
expected = u"""CategoricalIndex([u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ',
u'いい', u'ううう', u'あ',
@@ -781,7 +781,7 @@ def test_string_categorical_index_repr(self):
u'ううう', u'あ', u'いい', u'ううう'],
categories=[u'あ', u'いい', u'ううう'], ordered=False, dtype='category', length=300)""" # noqa
- self.assertEqual(unicode(idx), expected)
+ assert unicode(idx) == expected
# larger categories
idx = pd.CategoricalIndex(list(u'あいうえおかきくけこさしすせそ'))
@@ -790,13 +790,13 @@ def test_string_categorical_index_repr(self):
'さ', 'し', 'す', 'せ', 'そ'],
categories=['あ', 'い', 'う', 'え', 'お', 'か', 'き', 'く', ...], ordered=False, dtype='category')""" # noqa
- self.assertEqual(repr(idx), expected)
+ assert repr(idx) == expected
else:
expected = u"""CategoricalIndex([u'あ', u'い', u'う', u'え', u'お', u'か', u'き', u'く',
u'け', u'こ', u'さ', u'し', u'す', u'せ', u'そ'],
categories=[u'あ', u'い', u'う', u'え', u'お', u'か', u'き', u'く', ...], ordered=False, dtype='category')""" # noqa
- self.assertEqual(unicode(idx), expected)
+ assert unicode(idx) == expected
def test_fillna_categorical(self):
# GH 11343
diff --git a/pandas/tests/indexes/test_interval.py b/pandas/tests/indexes/test_interval.py
index 815fefa813a9d..00897f290f292 100644
--- a/pandas/tests/indexes/test_interval.py
+++ b/pandas/tests/indexes/test_interval.py
@@ -118,15 +118,15 @@ def f():
def test_properties(self):
index = self.index
- self.assertEqual(len(index), 2)
- self.assertEqual(index.size, 2)
- self.assertEqual(index.shape, (2, ))
+ assert len(index) == 2
+ assert index.size == 2
+ assert index.shape == (2, )
tm.assert_index_equal(index.left, Index([0, 1]))
tm.assert_index_equal(index.right, Index([1, 2]))
tm.assert_index_equal(index.mid, Index([0.5, 1.5]))
- self.assertEqual(index.closed, 'right')
+ assert index.closed == 'right'
expected = np.array([Interval(0, 1), Interval(1, 2)], dtype=object)
tm.assert_numpy_array_equal(np.asarray(index), expected)
@@ -134,15 +134,15 @@ def test_properties(self):
# with nans
index = self.index_with_nan
- self.assertEqual(len(index), 3)
- self.assertEqual(index.size, 3)
- self.assertEqual(index.shape, (3, ))
+ assert len(index) == 3
+ assert index.size == 3
+ assert index.shape == (3, )
tm.assert_index_equal(index.left, Index([0, np.nan, 1]))
tm.assert_index_equal(index.right, Index([1, np.nan, 2]))
tm.assert_index_equal(index.mid, Index([0.5, np.nan, 1.5]))
- self.assertEqual(index.closed, 'right')
+ assert index.closed == 'right'
expected = np.array([Interval(0, 1), np.nan,
Interval(1, 2)], dtype=object)
@@ -285,7 +285,7 @@ def test_repr(self):
"\n right=[1, 2],"
"\n closed='right',"
"\n dtype='interval[int64]')")
- self.assertEqual(repr(i), expected)
+ assert repr(i) == expected
i = IntervalIndex.from_tuples((Timestamp('20130101'),
Timestamp('20130102')),
@@ -296,7 +296,7 @@ def test_repr(self):
"\n right=['2013-01-02', '2013-01-03'],"
"\n closed='right',"
"\n dtype='interval[datetime64[ns]]')")
- self.assertEqual(repr(i), expected)
+ assert repr(i) == expected
@pytest.mark.xfail(reason='not a valid repr as we use interval notation')
def test_repr_max_seq_item_setting(self):
@@ -328,21 +328,21 @@ def test_get_item(self):
def test_get_loc_value(self):
pytest.raises(KeyError, self.index.get_loc, 0)
- self.assertEqual(self.index.get_loc(0.5), 0)
- self.assertEqual(self.index.get_loc(1), 0)
- self.assertEqual(self.index.get_loc(1.5), 1)
- self.assertEqual(self.index.get_loc(2), 1)
+ assert self.index.get_loc(0.5) == 0
+ assert self.index.get_loc(1) == 0
+ assert self.index.get_loc(1.5) == 1
+ assert self.index.get_loc(2) == 1
pytest.raises(KeyError, self.index.get_loc, -1)
pytest.raises(KeyError, self.index.get_loc, 3)
idx = IntervalIndex.from_tuples([(0, 2), (1, 3)])
- self.assertEqual(idx.get_loc(0.5), 0)
- self.assertEqual(idx.get_loc(1), 0)
+ assert idx.get_loc(0.5) == 0
+ assert idx.get_loc(1) == 0
tm.assert_numpy_array_equal(idx.get_loc(1.5),
np.array([0, 1], dtype='int64'))
tm.assert_numpy_array_equal(np.sort(idx.get_loc(2)),
np.array([0, 1], dtype='int64'))
- self.assertEqual(idx.get_loc(3), 1)
+ assert idx.get_loc(3) == 1
pytest.raises(KeyError, idx.get_loc, 3.5)
idx = IntervalIndex.from_arrays([0, 2], [1, 3])
@@ -351,29 +351,29 @@ def test_get_loc_value(self):
def slice_locs_cases(self, breaks):
# TODO: same tests for more index types
index = IntervalIndex.from_breaks([0, 1, 2], closed='right')
- self.assertEqual(index.slice_locs(), (0, 2))
- self.assertEqual(index.slice_locs(0, 1), (0, 1))
- self.assertEqual(index.slice_locs(1, 1), (0, 1))
- self.assertEqual(index.slice_locs(0, 2), (0, 2))
- self.assertEqual(index.slice_locs(0.5, 1.5), (0, 2))
- self.assertEqual(index.slice_locs(0, 0.5), (0, 1))
- self.assertEqual(index.slice_locs(start=1), (0, 2))
- self.assertEqual(index.slice_locs(start=1.2), (1, 2))
- self.assertEqual(index.slice_locs(end=1), (0, 1))
- self.assertEqual(index.slice_locs(end=1.1), (0, 2))
- self.assertEqual(index.slice_locs(end=1.0), (0, 1))
- self.assertEqual(*index.slice_locs(-1, -1))
+ assert index.slice_locs() == (0, 2)
+ assert index.slice_locs(0, 1) == (0, 1)
+ assert index.slice_locs(1, 1) == (0, 1)
+ assert index.slice_locs(0, 2) == (0, 2)
+ assert index.slice_locs(0.5, 1.5) == (0, 2)
+ assert index.slice_locs(0, 0.5) == (0, 1)
+ assert index.slice_locs(start=1) == (0, 2)
+ assert index.slice_locs(start=1.2) == (1, 2)
+ assert index.slice_locs(end=1) == (0, 1)
+ assert index.slice_locs(end=1.1) == (0, 2)
+ assert index.slice_locs(end=1.0) == (0, 1)
+ assert index.slice_locs(-1, -1) == (0, 0)
index = IntervalIndex.from_breaks([0, 1, 2], closed='neither')
- self.assertEqual(index.slice_locs(0, 1), (0, 1))
- self.assertEqual(index.slice_locs(0, 2), (0, 2))
- self.assertEqual(index.slice_locs(0.5, 1.5), (0, 2))
- self.assertEqual(index.slice_locs(1, 1), (1, 1))
- self.assertEqual(index.slice_locs(1, 2), (1, 2))
+ assert index.slice_locs(0, 1) == (0, 1)
+ assert index.slice_locs(0, 2) == (0, 2)
+ assert index.slice_locs(0.5, 1.5) == (0, 2)
+ assert index.slice_locs(1, 1) == (1, 1)
+ assert index.slice_locs(1, 2) == (1, 2)
index = IntervalIndex.from_breaks([0, 1, 2], closed='both')
- self.assertEqual(index.slice_locs(1, 1), (0, 2))
- self.assertEqual(index.slice_locs(1, 2), (0, 2))
+ assert index.slice_locs(1, 1) == (0, 2)
+ assert index.slice_locs(1, 2) == (0, 2)
def test_slice_locs_int64(self):
self.slice_locs_cases([0, 1, 2])
@@ -383,14 +383,16 @@ def test_slice_locs_float64(self):
def slice_locs_decreasing_cases(self, tuples):
index = IntervalIndex.from_tuples(tuples)
- self.assertEqual(index.slice_locs(1.5, 0.5), (1, 3))
- self.assertEqual(index.slice_locs(2, 0), (1, 3))
- self.assertEqual(index.slice_locs(2, 1), (1, 3))
- self.assertEqual(index.slice_locs(3, 1.1), (0, 3))
- self.assertEqual(index.slice_locs(3, 3), (0, 2))
- self.assertEqual(index.slice_locs(3.5, 3.3), (0, 1))
- self.assertEqual(index.slice_locs(1, -3), (2, 3))
- self.assertEqual(*index.slice_locs(-1, -1))
+ assert index.slice_locs(1.5, 0.5) == (1, 3)
+ assert index.slice_locs(2, 0) == (1, 3)
+ assert index.slice_locs(2, 1) == (1, 3)
+ assert index.slice_locs(3, 1.1) == (0, 3)
+ assert index.slice_locs(3, 3) == (0, 2)
+ assert index.slice_locs(3.5, 3.3) == (0, 1)
+ assert index.slice_locs(1, -3) == (2, 3)
+
+ slice_locs = index.slice_locs(-1, -1)
+ assert slice_locs[0] == slice_locs[1]
def test_slice_locs_decreasing_int64(self):
self.slice_locs_cases([(2, 4), (1, 3), (0, 2)])
@@ -404,9 +406,9 @@ def test_slice_locs_fails(self):
index.slice_locs(1, 2)
def test_get_loc_interval(self):
- self.assertEqual(self.index.get_loc(Interval(0, 1)), 0)
- self.assertEqual(self.index.get_loc(Interval(0, 0.5)), 0)
- self.assertEqual(self.index.get_loc(Interval(0, 1, 'left')), 0)
+ assert self.index.get_loc(Interval(0, 1)) == 0
+ assert self.index.get_loc(Interval(0, 0.5)) == 0
+ assert self.index.get_loc(Interval(0, 1, 'left')) == 0
pytest.raises(KeyError, self.index.get_loc, Interval(2, 3))
pytest.raises(KeyError, self.index.get_loc,
Interval(-1, 0, 'left'))
diff --git a/pandas/tests/indexes/test_multi.py b/pandas/tests/indexes/test_multi.py
index 714e901532ed9..a840711e37fb0 100644
--- a/pandas/tests/indexes/test_multi.py
+++ b/pandas/tests/indexes/test_multi.py
@@ -128,35 +128,35 @@ def test_numpy_repeat(self):
def test_set_name_methods(self):
# so long as these are synonyms, we don't need to test set_names
- self.assertEqual(self.index.rename, self.index.set_names)
+ assert self.index.rename == self.index.set_names
new_names = [name + "SUFFIX" for name in self.index_names]
ind = self.index.set_names(new_names)
- self.assertEqual(self.index.names, self.index_names)
- self.assertEqual(ind.names, new_names)
+ assert self.index.names == self.index_names
+ assert ind.names == new_names
with tm.assert_raises_regex(ValueError, "^Length"):
ind.set_names(new_names + new_names)
new_names2 = [name + "SUFFIX2" for name in new_names]
res = ind.set_names(new_names2, inplace=True)
assert res is None
- self.assertEqual(ind.names, new_names2)
+ assert ind.names == new_names2
# set names for specific level (# GH7792)
ind = self.index.set_names(new_names[0], level=0)
- self.assertEqual(self.index.names, self.index_names)
- self.assertEqual(ind.names, [new_names[0], self.index_names[1]])
+ assert self.index.names == self.index_names
+ assert ind.names == [new_names[0], self.index_names[1]]
res = ind.set_names(new_names2[0], level=0, inplace=True)
assert res is None
- self.assertEqual(ind.names, [new_names2[0], self.index_names[1]])
+ assert ind.names == [new_names2[0], self.index_names[1]]
# set names for multiple levels
ind = self.index.set_names(new_names, level=[0, 1])
- self.assertEqual(self.index.names, self.index_names)
- self.assertEqual(ind.names, new_names)
+ assert self.index.names == self.index_names
+ assert ind.names == new_names
res = ind.set_names(new_names2, level=[0, 1], inplace=True)
assert res is None
- self.assertEqual(ind.names, new_names2)
+ assert ind.names == new_names2
def test_set_levels(self):
# side note - you probably wouldn't want to use levels and labels
@@ -167,7 +167,7 @@ def test_set_levels(self):
def assert_matching(actual, expected, check_dtype=False):
# avoid specifying internal representation
# as much as possible
- self.assertEqual(len(actual), len(expected))
+ assert len(actual) == len(expected)
for act, exp in zip(actual, expected):
act = np.asarray(act)
exp = np.asarray(exp)
@@ -256,7 +256,7 @@ def test_set_labels(self):
def assert_matching(actual, expected):
# avoid specifying internal representation
# as much as possible
- self.assertEqual(len(actual), len(expected))
+ assert len(actual) == len(expected)
for act, exp in zip(actual, expected):
act = np.asarray(act)
exp = np.asarray(exp, dtype=np.int8)
@@ -439,12 +439,12 @@ def test_copy_in_constructor(self):
val = labels[0]
mi = MultiIndex(levels=[levels, levels], labels=[labels, labels],
copy=True)
- self.assertEqual(mi.labels[0][0], val)
+ assert mi.labels[0][0] == val
labels[0] = 15
- self.assertEqual(mi.labels[0][0], val)
+ assert mi.labels[0][0] == val
val = levels[0]
levels[0] = "PANDA"
- self.assertEqual(mi.levels[0][0], val)
+ assert mi.levels[0][0] == val
def test_set_value_keeps_names(self):
# motivating example from #3742
@@ -457,10 +457,10 @@ def test_set_value_keeps_names(self):
index=idx)
df = df.sort_index()
assert df.is_copy is None
- self.assertEqual(df.index.names, ('Name', 'Number'))
+ assert df.index.names == ('Name', 'Number')
df = df.set_value(('grethe', '4'), 'one', 99.34)
assert df.is_copy is None
- self.assertEqual(df.index.names, ('Name', 'Number'))
+ assert df.index.names == ('Name', 'Number')
def test_copy_names(self):
# Check that adding a "names" parameter to the copy is honored
@@ -469,27 +469,27 @@ def test_copy_names(self):
multi_idx1 = multi_idx.copy()
assert multi_idx.equals(multi_idx1)
- self.assertEqual(multi_idx.names, ['MyName1', 'MyName2'])
- self.assertEqual(multi_idx1.names, ['MyName1', 'MyName2'])
+ assert multi_idx.names == ['MyName1', 'MyName2']
+ assert multi_idx1.names == ['MyName1', 'MyName2']
multi_idx2 = multi_idx.copy(names=['NewName1', 'NewName2'])
assert multi_idx.equals(multi_idx2)
- self.assertEqual(multi_idx.names, ['MyName1', 'MyName2'])
- self.assertEqual(multi_idx2.names, ['NewName1', 'NewName2'])
+ assert multi_idx.names == ['MyName1', 'MyName2']
+ assert multi_idx2.names == ['NewName1', 'NewName2']
multi_idx3 = multi_idx.copy(name=['NewName1', 'NewName2'])
assert multi_idx.equals(multi_idx3)
- self.assertEqual(multi_idx.names, ['MyName1', 'MyName2'])
- self.assertEqual(multi_idx3.names, ['NewName1', 'NewName2'])
+ assert multi_idx.names == ['MyName1', 'MyName2']
+ assert multi_idx3.names == ['NewName1', 'NewName2']
def test_names(self):
# names are assigned in __init__
names = self.index_names
level_names = [level.name for level in self.index.levels]
- self.assertEqual(names, level_names)
+ assert names == level_names
# setting bad names on existing
index = self.index
@@ -515,7 +515,7 @@ def test_names(self):
index.names = ["a", "b"]
ind_names = list(index.names)
level_names = [level.name for level in index.levels]
- self.assertEqual(ind_names, level_names)
+ assert ind_names == level_names
def test_reference_duplicate_name(self):
idx = MultiIndex.from_tuples(
@@ -623,7 +623,7 @@ def test_view(self):
self.assert_multiindex_copied(i_view, self.index)
def check_level_names(self, index, names):
- self.assertEqual([level.name for level in index.levels], list(names))
+ assert [level.name for level in index.levels] == list(names)
def test_changing_names(self):
@@ -656,8 +656,8 @@ def test_duplicate_names(self):
def test_get_level_number_integer(self):
self.index.names = [1, 0]
- self.assertEqual(self.index._get_level_number(1), 0)
- self.assertEqual(self.index._get_level_number(0), 1)
+ assert self.index._get_level_number(1) == 0
+ assert self.index._get_level_number(0) == 1
pytest.raises(IndexError, self.index._get_level_number, 2)
tm.assert_raises_regex(KeyError, 'Level fourth not found',
self.index._get_level_number, 'fourth')
@@ -668,7 +668,7 @@ def test_from_arrays(self):
arrays.append(np.asarray(lev).take(lab))
result = MultiIndex.from_arrays(arrays)
- self.assertEqual(list(result), list(self.index))
+ assert list(result) == list(self.index)
# infer correctly
result = MultiIndex.from_arrays([[pd.NaT, Timestamp('20130101')],
@@ -819,7 +819,7 @@ def test_from_product(self):
expected = MultiIndex.from_tuples(tuples, names=names)
tm.assert_index_equal(result, expected)
- self.assertEqual(result.names, names)
+ assert result.names == names
def test_from_product_empty(self):
# 0 levels
@@ -914,7 +914,7 @@ def test_append_mixed_dtypes(self):
[1.1, np.nan, 3.3],
['a', 'b', 'c'],
dti, dti_tz, pi])
- self.assertEqual(mi.nlevels, 6)
+ assert mi.nlevels == 6
res = mi.append(mi)
exp = MultiIndex.from_arrays([[1, 2, 3, 1, 2, 3],
@@ -943,7 +943,7 @@ def test_get_level_values(self):
expected = Index(['foo', 'foo', 'bar', 'baz', 'qux', 'qux'],
name='first')
tm.assert_index_equal(result, expected)
- self.assertEqual(result.name, 'first')
+ assert result.name == 'first'
result = self.index.get_level_values('first')
expected = self.index.get_level_values(0)
@@ -989,7 +989,7 @@ def test_get_level_values_na(self):
arrays = [[], []]
index = pd.MultiIndex.from_arrays(arrays)
values = index.get_level_values(0)
- self.assertEqual(values.shape, (0, ))
+ assert values.shape == (0, )
def test_reorder_levels(self):
# this blows up
@@ -997,13 +997,13 @@ def test_reorder_levels(self):
self.index.reorder_levels, [2, 1, 0])
def test_nlevels(self):
- self.assertEqual(self.index.nlevels, 2)
+ assert self.index.nlevels == 2
def test_iter(self):
result = list(self.index)
expected = [('foo', 'one'), ('foo', 'two'), ('bar', 'one'),
('baz', 'two'), ('qux', 'one'), ('qux', 'two')]
- self.assertEqual(result, expected)
+ assert result == expected
def test_legacy_pickle(self):
if PY3:
@@ -1089,7 +1089,7 @@ def test_is_numeric(self):
def test_getitem(self):
# scalar
- self.assertEqual(self.index[2], ('bar', 'one'))
+ assert self.index[2] == ('bar', 'one')
# slice
result = self.index[2:5]
@@ -1105,12 +1105,12 @@ def test_getitem(self):
def test_getitem_group_select(self):
sorted_idx, _ = self.index.sortlevel(0)
- self.assertEqual(sorted_idx.get_loc('baz'), slice(3, 4))
- self.assertEqual(sorted_idx.get_loc('foo'), slice(0, 2))
+ assert sorted_idx.get_loc('baz') == slice(3, 4)
+ assert sorted_idx.get_loc('foo') == slice(0, 2)
def test_get_loc(self):
- self.assertEqual(self.index.get_loc(('foo', 'two')), 1)
- self.assertEqual(self.index.get_loc(('baz', 'two')), 3)
+ assert self.index.get_loc(('foo', 'two')) == 1
+ assert self.index.get_loc(('baz', 'two')) == 3
pytest.raises(KeyError, self.index.get_loc, ('bar', 'two'))
pytest.raises(KeyError, self.index.get_loc, 'quux')
@@ -1122,19 +1122,19 @@ def test_get_loc(self):
lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array(
[0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])])
pytest.raises(KeyError, index.get_loc, (1, 1))
- self.assertEqual(index.get_loc((2, 0)), slice(3, 5))
+ assert index.get_loc((2, 0)) == slice(3, 5)
def test_get_loc_duplicates(self):
index = Index([2, 2, 2, 2])
result = index.get_loc(2)
expected = slice(0, 4)
- self.assertEqual(result, expected)
+ assert result == expected
# pytest.raises(Exception, index.get_loc, 2)
index = Index(['c', 'a', 'a', 'b', 'b'])
rs = index.get_loc('c')
xp = 0
- assert (rs == xp)
+ assert rs == xp
def test_get_value_duplicates(self):
index = MultiIndex(levels=[['D', 'B', 'C'],
@@ -1155,12 +1155,12 @@ def test_get_loc_level(self):
loc, new_index = index.get_loc_level((0, 1))
expected = slice(1, 2)
exp_index = index[expected].droplevel(0).droplevel(0)
- self.assertEqual(loc, expected)
+ assert loc == expected
assert new_index.equals(exp_index)
loc, new_index = index.get_loc_level((0, 1, 0))
expected = 1
- self.assertEqual(loc, expected)
+ assert loc == expected
assert new_index is None
pytest.raises(KeyError, index.get_loc_level, (2, 2))
@@ -1169,7 +1169,7 @@ def test_get_loc_level(self):
[0, 0, 0, 0]), np.array([0, 1, 2, 3])])
result, new_index = index.get_loc_level((2000, slice(None, None)))
expected = slice(None, None)
- self.assertEqual(result, expected)
+ assert result == expected
assert new_index.equals(index.droplevel(0))
def test_slice_locs(self):
@@ -1225,16 +1225,16 @@ def test_slice_locs_partial(self):
sorted_idx, _ = self.index.sortlevel(0)
result = sorted_idx.slice_locs(('foo', 'two'), ('qux', 'one'))
- self.assertEqual(result, (1, 5))
+ assert result == (1, 5)
result = sorted_idx.slice_locs(None, ('qux', 'one'))
- self.assertEqual(result, (0, 5))
+ assert result == (0, 5)
result = sorted_idx.slice_locs(('foo', 'two'), None)
- self.assertEqual(result, (1, len(sorted_idx)))
+ assert result == (1, len(sorted_idx))
result = sorted_idx.slice_locs('bar', 'baz')
- self.assertEqual(result, (2, 4))
+ assert result == (2, 4)
def test_slice_locs_not_contained(self):
# some searchsorted action
@@ -1244,22 +1244,22 @@ def test_slice_locs_not_contained(self):
[0, 1, 2, 1, 2, 2, 0, 1, 2]], sortorder=0)
result = index.slice_locs((1, 0), (5, 2))
- self.assertEqual(result, (3, 6))
+ assert result == (3, 6)
result = index.slice_locs(1, 5)
- self.assertEqual(result, (3, 6))
+ assert result == (3, 6)
result = index.slice_locs((2, 2), (5, 2))
- self.assertEqual(result, (3, 6))
+ assert result == (3, 6)
result = index.slice_locs(2, 5)
- self.assertEqual(result, (3, 6))
+ assert result == (3, 6)
result = index.slice_locs((1, 0), (6, 3))
- self.assertEqual(result, (3, 8))
+ assert result == (3, 8)
result = index.slice_locs(-1, 10)
- self.assertEqual(result, (0, len(index)))
+ assert result == (0, len(index))
def test_consistency(self):
# need to construct an overflow
@@ -1374,7 +1374,7 @@ def test_hash_collisions(self):
for i in [0, 1, len(index) - 2, len(index) - 1]:
result = index.get_loc(index[i])
- self.assertEqual(result, i)
+ assert result == i
def test_format(self):
self.index.format()
@@ -1391,7 +1391,7 @@ def test_format_sparse_display(self):
[0, 1, 0, 0, 1, 0], [0, 0, 0, 0, 0, 0]])
result = index.format()
- self.assertEqual(result[3], '1 0 0 0')
+ assert result[3] == '1 0 0 0'
def test_format_sparse_config(self):
warn_filters = warnings.filters
@@ -1401,7 +1401,7 @@ def test_format_sparse_config(self):
pd.set_option('display.multi_sparse', False)
result = self.index.format()
- self.assertEqual(result[1], 'foo two')
+ assert result[1] == 'foo two'
tm.reset_display_options()
@@ -1452,7 +1452,7 @@ def test_to_hierarchical(self):
labels=[[0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1],
[0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1]])
tm.assert_index_equal(result, expected)
- self.assertEqual(result.names, index.names)
+ assert result.names == index.names
# K > 1
result = index.to_hierarchical(3, 2)
@@ -1460,7 +1460,7 @@ def test_to_hierarchical(self):
labels=[[0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1],
[0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]])
tm.assert_index_equal(result, expected)
- self.assertEqual(result.names, index.names)
+ assert result.names == index.names
# non-sorted
index = MultiIndex.from_tuples([(2, 'c'), (1, 'b'),
@@ -1474,7 +1474,7 @@ def test_to_hierarchical(self):
(2, 'b'), (2, 'b')],
names=['N1', 'N2'])
tm.assert_index_equal(result, expected)
- self.assertEqual(result.names, index.names)
+ assert result.names == index.names
def test_bounds(self):
self.index._bounds
@@ -1655,35 +1655,35 @@ def test_difference(self):
assert isinstance(result, MultiIndex)
assert result.equals(expected)
- self.assertEqual(result.names, self.index.names)
+ assert result.names == self.index.names
# empty difference: reflexive
result = self.index.difference(self.index)
expected = self.index[:0]
assert result.equals(expected)
- self.assertEqual(result.names, self.index.names)
+ assert result.names == self.index.names
# empty difference: superset
result = self.index[-3:].difference(self.index)
expected = self.index[:0]
assert result.equals(expected)
- self.assertEqual(result.names, self.index.names)
+ assert result.names == self.index.names
# empty difference: degenerate
result = self.index[:0].difference(self.index)
expected = self.index[:0]
assert result.equals(expected)
- self.assertEqual(result.names, self.index.names)
+ assert result.names == self.index.names
# names not the same
chunklet = self.index[-3:]
chunklet.names = ['foo', 'baz']
result = first.difference(chunklet)
- self.assertEqual(result.names, (None, None))
+ assert result.names == (None, None)
# empty, but non-equal
result = self.index.difference(self.index.sortlevel(1)[0])
- self.assertEqual(len(result), 0)
+ assert len(result) == 0
# raise Exception called with non-MultiIndex
result = first.difference(first.values)
@@ -1692,14 +1692,14 @@ def test_difference(self):
# name from empty array
result = first.difference([])
assert first.equals(result)
- self.assertEqual(first.names, result.names)
+ assert first.names == result.names
# name from non-empty array
result = first.difference([('foo', 'one')])
expected = pd.MultiIndex.from_tuples([('bar', 'one'), ('baz', 'two'), (
'foo', 'two'), ('qux', 'one'), ('qux', 'two')])
expected.names = first.names
- self.assertEqual(first.names, result.names)
+ assert first.names == result.names
tm.assert_raises_regex(TypeError, "other must be a MultiIndex "
"or a list of tuples",
first.difference, [1, 2, 3, 4, 5])
@@ -1710,7 +1710,7 @@ def test_from_tuples(self):
MultiIndex.from_tuples, [])
idx = MultiIndex.from_tuples(((1, 2), (3, 4)), names=['a', 'b'])
- self.assertEqual(len(idx), 2)
+ assert len(idx) == 2
def test_argsort(self):
result = self.index.argsort()
@@ -1824,14 +1824,14 @@ def test_drop(self):
def test_droplevel_with_names(self):
index = self.index[self.index.get_loc('foo')]
dropped = index.droplevel(0)
- self.assertEqual(dropped.name, 'second')
+ assert dropped.name == 'second'
index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index(
lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array(
[0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])],
names=['one', 'two', 'three'])
dropped = index.droplevel(0)
- self.assertEqual(dropped.names, ('two', 'three'))
+ assert dropped.names == ('two', 'three')
dropped = index.droplevel('two')
expected = index.droplevel(1)
@@ -1873,7 +1873,7 @@ def test_insert(self):
# key contained in all levels
new_index = self.index.insert(0, ('bar', 'two'))
assert new_index.equal_levels(self.index)
- self.assertEqual(new_index[0], ('bar', 'two'))
+ assert new_index[0] == ('bar', 'two')
# key not contained in all levels
new_index = self.index.insert(0, ('abc', 'three'))
@@ -1883,7 +1883,7 @@ def test_insert(self):
exp1 = Index(list(self.index.levels[1]) + ['three'], name='second')
tm.assert_index_equal(new_index.levels[1], exp1)
- self.assertEqual(new_index[0], ('abc', 'three'))
+ assert new_index[0] == ('abc', 'three')
# key wrong length
msg = "Item must have length equal to number of levels"
@@ -1937,7 +1937,7 @@ def test_insert(self):
def test_take_preserve_name(self):
taken = self.index.take([3, 0, 1])
- self.assertEqual(taken.names, self.index.names)
+ assert taken.names == self.index.names
def test_take_fill_value(self):
# GH 12631
@@ -2203,7 +2203,7 @@ def check(nlevels, with_nulls):
for a in [101, 102]:
mi = MultiIndex.from_arrays([[101, a], [3.5, np.nan]])
assert not mi.has_duplicates
- self.assertEqual(mi.get_duplicates(), [])
+ assert mi.get_duplicates() == []
tm.assert_numpy_array_equal(mi.duplicated(), np.zeros(
2, dtype='bool'))
@@ -2213,9 +2213,9 @@ def check(nlevels, with_nulls):
lab = product(range(-1, n), range(-1, m))
mi = MultiIndex(levels=[list('abcde')[:n], list('WXYZ')[:m]],
labels=np.random.permutation(list(lab)).T)
- self.assertEqual(len(mi), (n + 1) * (m + 1))
+ assert len(mi) == (n + 1) * (m + 1)
assert not mi.has_duplicates
- self.assertEqual(mi.get_duplicates(), [])
+ assert mi.get_duplicates() == []
tm.assert_numpy_array_equal(mi.duplicated(), np.zeros(
len(mi), dtype='bool'))
@@ -2228,7 +2228,7 @@ def test_duplicate_meta_data(self):
index.set_names([None, 'Num']),
index.set_names(['Upper', 'Num']), ]:
assert idx.has_duplicates
- self.assertEqual(idx.drop_duplicates().names, idx.names)
+ assert idx.drop_duplicates().names == idx.names
def test_get_unique_index(self):
idx = self.index[[0, 1, 0, 1, 1, 0, 0]]
@@ -2274,7 +2274,7 @@ def test_unique_datetimelike(self):
def test_tolist(self):
result = self.index.tolist()
exp = list(self.index.values)
- self.assertEqual(result, exp)
+ assert result == exp
def test_repr_with_unicode_data(self):
with pd.core.config.option_context("display.encoding", 'UTF-8'):
@@ -2294,10 +2294,8 @@ def test_repr_roundtrip(self):
result = eval(repr(mi))
# string coerces to unicode
tm.assert_index_equal(result, mi, exact=False)
- self.assertEqual(
- mi.get_level_values('first').inferred_type, 'string')
- self.assertEqual(
- result.get_level_values('first').inferred_type, 'unicode')
+ assert mi.get_level_values('first').inferred_type == 'string'
+ assert result.get_level_values('first').inferred_type == 'unicode'
mi_u = MultiIndex.from_product(
[list(u'ab'), range(3)], names=['first', 'second'])
@@ -2313,7 +2311,6 @@ def test_repr_roundtrip(self):
# long format
mi = MultiIndex.from_product([list('abcdefg'), range(10)],
names=['first', 'second'])
- result = str(mi)
if PY3:
tm.assert_index_equal(eval(repr(mi)), mi, exact=True)
@@ -2321,13 +2318,9 @@ def test_repr_roundtrip(self):
result = eval(repr(mi))
# string coerces to unicode
tm.assert_index_equal(result, mi, exact=False)
- self.assertEqual(
- mi.get_level_values('first').inferred_type, 'string')
- self.assertEqual(
- result.get_level_values('first').inferred_type, 'unicode')
+ assert mi.get_level_values('first').inferred_type == 'string'
+ assert result.get_level_values('first').inferred_type == 'unicode'
- mi = MultiIndex.from_product(
- [list(u'abcdefg'), range(10)], names=['first', 'second'])
result = eval(repr(mi_u))
tm.assert_index_equal(result, mi_u, exact=True)
@@ -2356,7 +2349,7 @@ def test_bytestring_with_unicode(self):
def test_slice_keep_name(self):
x = MultiIndex.from_tuples([('a', 'b'), (1, 2), ('c', 'd')],
names=['x', 'y'])
- self.assertEqual(x[1:].names, x.names)
+ assert x[1:].names == x.names
def test_isnull_behavior(self):
# should not segfault GH5123
@@ -2510,8 +2503,8 @@ def test_isin(self):
# empty, return dtype bool
idx = MultiIndex.from_arrays([[], []])
result = idx.isin(values)
- self.assertEqual(len(result), 0)
- self.assertEqual(result.dtype, np.bool_)
+ assert len(result) == 0
+ assert result.dtype == np.bool_
def test_isin_nan(self):
idx = MultiIndex.from_arrays([['foo', 'bar'], [1.0, np.nan]])
@@ -2556,39 +2549,33 @@ def test_reindex_preserves_names_when_target_is_list_or_ndarray(self):
other_dtype = pd.MultiIndex.from_product([[1, 2], [3, 4]])
# list & ndarray cases
- self.assertEqual(idx.reindex([])[0].names, [None, None])
- self.assertEqual(idx.reindex(np.array([]))[0].names, [None, None])
- self.assertEqual(idx.reindex(target.tolist())[0].names, [None, None])
- self.assertEqual(idx.reindex(target.values)[0].names, [None, None])
- self.assertEqual(
- idx.reindex(other_dtype.tolist())[0].names, [None, None])
- self.assertEqual(
- idx.reindex(other_dtype.values)[0].names, [None, None])
+ assert idx.reindex([])[0].names == [None, None]
+ assert idx.reindex(np.array([]))[0].names == [None, None]
+ assert idx.reindex(target.tolist())[0].names == [None, None]
+ assert idx.reindex(target.values)[0].names == [None, None]
+ assert idx.reindex(other_dtype.tolist())[0].names == [None, None]
+ assert idx.reindex(other_dtype.values)[0].names == [None, None]
idx.names = ['foo', 'bar']
- self.assertEqual(idx.reindex([])[0].names, ['foo', 'bar'])
- self.assertEqual(idx.reindex(np.array([]))[0].names, ['foo', 'bar'])
- self.assertEqual(idx.reindex(target.tolist())[0].names, ['foo', 'bar'])
- self.assertEqual(idx.reindex(target.values)[0].names, ['foo', 'bar'])
- self.assertEqual(
- idx.reindex(other_dtype.tolist())[0].names, ['foo', 'bar'])
- self.assertEqual(
- idx.reindex(other_dtype.values)[0].names, ['foo', 'bar'])
+ assert idx.reindex([])[0].names == ['foo', 'bar']
+ assert idx.reindex(np.array([]))[0].names == ['foo', 'bar']
+ assert idx.reindex(target.tolist())[0].names == ['foo', 'bar']
+ assert idx.reindex(target.values)[0].names == ['foo', 'bar']
+ assert idx.reindex(other_dtype.tolist())[0].names == ['foo', 'bar']
+ assert idx.reindex(other_dtype.values)[0].names == ['foo', 'bar']
def test_reindex_lvl_preserves_names_when_target_is_list_or_array(self):
# GH7774
idx = pd.MultiIndex.from_product([[0, 1], ['a', 'b']],
names=['foo', 'bar'])
- self.assertEqual(idx.reindex([], level=0)[0].names, ['foo', 'bar'])
- self.assertEqual(idx.reindex([], level=1)[0].names, ['foo', 'bar'])
+ assert idx.reindex([], level=0)[0].names == ['foo', 'bar']
+ assert idx.reindex([], level=1)[0].names == ['foo', 'bar']
def test_reindex_lvl_preserves_type_if_target_is_empty_list_or_array(self):
# GH7774
idx = pd.MultiIndex.from_product([[0, 1], ['a', 'b']])
- self.assertEqual(idx.reindex([], level=0)[0].levels[0].dtype.type,
- np.int64)
- self.assertEqual(idx.reindex([], level=1)[0].levels[1].dtype.type,
- np.object_)
+ assert idx.reindex([], level=0)[0].levels[0].dtype.type == np.int64
+ assert idx.reindex([], level=1)[0].levels[1].dtype.type == np.object_
def test_groupby(self):
groups = self.index.groupby(np.array([1, 1, 1, 2, 2, 2]))
@@ -2781,7 +2768,7 @@ def test_unsortedindex(self):
with pytest.raises(UnsortedIndexError):
df.loc(axis=0)['z', :]
df.sort_index(inplace=True)
- self.assertEqual(len(df.loc(axis=0)['z', :]), 2)
+ assert len(df.loc(axis=0)['z', :]) == 2
with pytest.raises(KeyError):
df.loc(axis=0)['q', :]
diff --git a/pandas/tests/indexes/test_numeric.py b/pandas/tests/indexes/test_numeric.py
index 68a329a7f741f..19bca875e650d 100644
--- a/pandas/tests/indexes/test_numeric.py
+++ b/pandas/tests/indexes/test_numeric.py
@@ -216,15 +216,15 @@ def test_constructor(self):
assert isinstance(index, Float64Index)
index = Float64Index(np.array([1., 2, 3, 4, 5]))
assert isinstance(index, Float64Index)
- self.assertEqual(index.dtype, float)
+ assert index.dtype == float
index = Float64Index(np.array([1., 2, 3, 4, 5]), dtype=np.float32)
assert isinstance(index, Float64Index)
- self.assertEqual(index.dtype, np.float64)
+ assert index.dtype == np.float64
index = Float64Index(np.array([1, 2, 3, 4, 5]), dtype=np.float32)
assert isinstance(index, Float64Index)
- self.assertEqual(index.dtype, np.float64)
+ assert index.dtype == np.float64
# nan handling
result = Float64Index([np.nan, np.nan])
@@ -336,13 +336,13 @@ def test_get_indexer(self):
def test_get_loc(self):
idx = Float64Index([0.0, 1.0, 2.0])
for method in [None, 'pad', 'backfill', 'nearest']:
- self.assertEqual(idx.get_loc(1, method), 1)
+ assert idx.get_loc(1, method) == 1
if method is not None:
- self.assertEqual(idx.get_loc(1, method, tolerance=0), 1)
+ assert idx.get_loc(1, method, tolerance=0) == 1
for method, loc in [('pad', 1), ('backfill', 2), ('nearest', 1)]:
- self.assertEqual(idx.get_loc(1.1, method), loc)
- self.assertEqual(idx.get_loc(1.1, method, tolerance=0.9), loc)
+ assert idx.get_loc(1.1, method) == loc
+ assert idx.get_loc(1.1, method, tolerance=0.9) == loc
pytest.raises(KeyError, idx.get_loc, 'foo')
pytest.raises(KeyError, idx.get_loc, 1.5)
@@ -354,21 +354,21 @@ def test_get_loc(self):
def test_get_loc_na(self):
idx = Float64Index([np.nan, 1, 2])
- self.assertEqual(idx.get_loc(1), 1)
- self.assertEqual(idx.get_loc(np.nan), 0)
+ assert idx.get_loc(1) == 1
+ assert idx.get_loc(np.nan) == 0
idx = Float64Index([np.nan, 1, np.nan])
- self.assertEqual(idx.get_loc(1), 1)
+ assert idx.get_loc(1) == 1
# representable by slice [0:2:2]
# pytest.raises(KeyError, idx.slice_locs, np.nan)
sliced = idx.slice_locs(np.nan)
assert isinstance(sliced, tuple)
- self.assertEqual(sliced, (0, 3))
+ assert sliced == (0, 3)
# not representable by slice
idx = Float64Index([np.nan, 1, np.nan, np.nan])
- self.assertEqual(idx.get_loc(1), 1)
+ assert idx.get_loc(1) == 1
pytest.raises(KeyError, idx.slice_locs, np.nan)
def test_contains_nans(self):
@@ -400,7 +400,7 @@ def test_astype_from_object(self):
index = Index([1.0, np.nan, 0.2], dtype='object')
result = index.astype(float)
expected = Float64Index([1.0, np.nan, 0.2])
- self.assertEqual(result.dtype, expected.dtype)
+ assert result.dtype == expected.dtype
tm.assert_index_equal(result, expected)
def test_fillna_float64(self):
@@ -454,7 +454,7 @@ def test_view(self):
i = self._holder([], name='Foo')
i_view = i.view()
- self.assertEqual(i_view.name, 'Foo')
+ assert i_view.name == 'Foo'
i_view = i.view(self._dtype)
tm.assert_index_equal(i, self._holder(i_view, name='Foo'))
@@ -478,8 +478,8 @@ def test_is_monotonic(self):
def test_logical_compat(self):
idx = self.create_index()
- self.assertEqual(idx.all(), idx.values.all())
- self.assertEqual(idx.any(), idx.values.any())
+ assert idx.all() == idx.values.all()
+ assert idx.any() == idx.values.any()
def test_identical(self):
i = Index(self.index.copy())
@@ -546,12 +546,12 @@ def test_view_index(self):
def test_prevent_casting(self):
result = self.index.astype('O')
- self.assertEqual(result.dtype, np.object_)
+ assert result.dtype == np.object_
def test_take_preserve_name(self):
index = self._holder([1, 2, 3, 4], name='foo')
taken = index.take([3, 0, 1])
- self.assertEqual(index.name, taken.name)
+ assert index.name == taken.name
def test_take_fill_value(self):
# see gh-12631
@@ -584,7 +584,7 @@ def test_take_fill_value(self):
def test_slice_keep_name(self):
idx = self._holder([1, 2], name='asdf')
- self.assertEqual(idx.name, idx[1:].name)
+ assert idx.name == idx[1:].name
def test_ufunc_coercions(self):
idx = self._holder([1, 2, 3, 4, 5], name='x')
@@ -666,7 +666,7 @@ def test_constructor(self):
def test_constructor_corner(self):
arr = np.array([1, 2, 3, 4], dtype=object)
index = Int64Index(arr)
- self.assertEqual(index.values.dtype, np.int64)
+ assert index.values.dtype == np.int64
tm.assert_index_equal(index, Index(arr))
# preventing casting
diff --git a/pandas/tests/indexes/test_range.py b/pandas/tests/indexes/test_range.py
index 49536be1aa57c..0379718b004e1 100644
--- a/pandas/tests/indexes/test_range.py
+++ b/pandas/tests/indexes/test_range.py
@@ -70,22 +70,22 @@ def test_constructor(self):
index = RangeIndex(5)
expected = np.arange(5, dtype=np.int64)
assert isinstance(index, RangeIndex)
- self.assertEqual(index._start, 0)
- self.assertEqual(index._stop, 5)
- self.assertEqual(index._step, 1)
- self.assertEqual(index.name, None)
+ assert index._start == 0
+ assert index._stop == 5
+ assert index._step == 1
+ assert index.name is None
tm.assert_index_equal(Index(expected), index)
index = RangeIndex(1, 5)
expected = np.arange(1, 5, dtype=np.int64)
assert isinstance(index, RangeIndex)
- self.assertEqual(index._start, 1)
+ assert index._start == 1
tm.assert_index_equal(Index(expected), index)
index = RangeIndex(1, 5, 2)
expected = np.arange(1, 5, 2, dtype=np.int64)
assert isinstance(index, RangeIndex)
- self.assertEqual(index._step, 2)
+ assert index._step == 2
tm.assert_index_equal(Index(expected), index)
msg = "RangeIndex\\(\\.\\.\\.\\) must be called with integers"
@@ -96,9 +96,9 @@ def test_constructor(self):
RangeIndex(0, 0)]:
expected = np.empty(0, dtype=np.int64)
assert isinstance(index, RangeIndex)
- self.assertEqual(index._start, 0)
- self.assertEqual(index._stop, 0)
- self.assertEqual(index._step, 1)
+ assert index._start == 0
+ assert index._stop == 0
+ assert index._step == 1
tm.assert_index_equal(Index(expected), index)
with tm.assert_raises_regex(TypeError, msg):
@@ -109,7 +109,7 @@ def test_constructor(self):
RangeIndex(stop=0, name='Foo'),
RangeIndex(0, 0, name='Foo')]:
assert isinstance(index, RangeIndex)
- self.assertEqual(index.name, 'Foo')
+ assert index.name == 'Foo'
# we don't allow on a bare Index
pytest.raises(TypeError, lambda: Index(0, 1000))
@@ -246,7 +246,7 @@ def test_numeric_compat2(self):
def test_constructor_corner(self):
arr = np.array([1, 2, 3, 4], dtype=object)
index = RangeIndex(1, 5)
- self.assertEqual(index.values.dtype, np.int64)
+ assert index.values.dtype == np.int64
tm.assert_index_equal(index, Index(arr))
# non-int raise Exception
@@ -261,10 +261,10 @@ def test_copy(self):
i_copy = i.copy()
assert i_copy is not i
assert i_copy.identical(i)
- self.assertEqual(i_copy._start, 0)
- self.assertEqual(i_copy._stop, 5)
- self.assertEqual(i_copy._step, 1)
- self.assertEqual(i_copy.name, 'Foo')
+ assert i_copy._start == 0
+ assert i_copy._stop == 5
+ assert i_copy._step == 1
+ assert i_copy.name == 'Foo'
def test_repr(self):
i = RangeIndex(5, name='Foo')
@@ -281,7 +281,7 @@ def test_repr(self):
i = RangeIndex(5, 0, -1)
result = repr(i)
expected = "RangeIndex(start=5, stop=0, step=-1)"
- self.assertEqual(result, expected)
+ assert result == expected
result = eval(result)
tm.assert_index_equal(result, i, exact=True)
@@ -300,12 +300,12 @@ def test_delete(self):
expected = idx[1:].astype(int)
result = idx.delete(0)
tm.assert_index_equal(result, expected)
- self.assertEqual(result.name, expected.name)
+ assert result.name == expected.name
expected = idx[:-1].astype(int)
result = idx.delete(-1)
tm.assert_index_equal(result, expected)
- self.assertEqual(result.name, expected.name)
+ assert result.name == expected.name
with pytest.raises((IndexError, ValueError)):
# either depending on numpy version
@@ -316,7 +316,7 @@ def test_view(self):
i = RangeIndex(0, name='Foo')
i_view = i.view()
- self.assertEqual(i_view.name, 'Foo')
+ assert i_view.name == 'Foo'
i_view = i.view('i8')
tm.assert_numpy_array_equal(i.values, i_view)
@@ -325,7 +325,7 @@ def test_view(self):
tm.assert_index_equal(i, i_view)
def test_dtype(self):
- self.assertEqual(self.index.dtype, np.int64)
+ assert self.index.dtype == np.int64
def test_is_monotonic(self):
assert self.index.is_monotonic
@@ -362,8 +362,8 @@ def test_equals_range(self):
def test_logical_compat(self):
idx = self.create_index()
- self.assertEqual(idx.all(), idx.values.all())
- self.assertEqual(idx.any(), idx.values.any())
+ assert idx.all() == idx.values.all()
+ assert idx.any() == idx.values.any()
def test_identical(self):
i = Index(self.index.copy())
@@ -636,7 +636,7 @@ def test_intersect_str_dates(self):
i2 = Index(['aa'], dtype=object)
res = i2.intersection(i1)
- self.assertEqual(len(res), 0)
+ assert len(res) == 0
def test_union_noncomparable(self):
from datetime import datetime, timedelta
@@ -692,7 +692,7 @@ def test_nbytes(self):
# constant memory usage
i2 = RangeIndex(0, 10)
- self.assertEqual(i.nbytes, i2.nbytes)
+ assert i.nbytes == i2.nbytes
def test_cant_or_shouldnt_cast(self):
# can't
@@ -706,12 +706,12 @@ def test_view_Index(self):
def test_prevent_casting(self):
result = self.index.astype('O')
- self.assertEqual(result.dtype, np.object_)
+ assert result.dtype == np.object_
def test_take_preserve_name(self):
index = RangeIndex(1, 5, name='foo')
taken = index.take([3, 0, 1])
- self.assertEqual(index.name, taken.name)
+ assert index.name == taken.name
def test_take_fill_value(self):
# GH 12631
@@ -751,7 +751,7 @@ def test_repr_roundtrip(self):
def test_slice_keep_name(self):
idx = RangeIndex(1, 2, name='asdf')
- self.assertEqual(idx.name, idx[1:].name)
+ assert idx.name == idx[1:].name
def test_explicit_conversions(self):
@@ -794,48 +794,48 @@ def test_ufunc_compat(self):
def test_extended_gcd(self):
result = self.index._extended_gcd(6, 10)
- self.assertEqual(result[0], result[1] * 6 + result[2] * 10)
- self.assertEqual(2, result[0])
+ assert result[0] == result[1] * 6 + result[2] * 10
+ assert 2 == result[0]
result = self.index._extended_gcd(10, 6)
- self.assertEqual(2, result[1] * 10 + result[2] * 6)
- self.assertEqual(2, result[0])
+ assert 2 == result[1] * 10 + result[2] * 6
+ assert 2 == result[0]
def test_min_fitting_element(self):
result = RangeIndex(0, 20, 2)._min_fitting_element(1)
- self.assertEqual(2, result)
+ assert 2 == result
result = RangeIndex(1, 6)._min_fitting_element(1)
- self.assertEqual(1, result)
+ assert 1 == result
result = RangeIndex(18, -2, -2)._min_fitting_element(1)
- self.assertEqual(2, result)
+ assert 2 == result
result = RangeIndex(5, 0, -1)._min_fitting_element(1)
- self.assertEqual(1, result)
+ assert 1 == result
big_num = 500000000000000000000000
result = RangeIndex(5, big_num * 2, 1)._min_fitting_element(big_num)
- self.assertEqual(big_num, result)
+ assert big_num == result
def test_max_fitting_element(self):
result = RangeIndex(0, 20, 2)._max_fitting_element(17)
- self.assertEqual(16, result)
+ assert 16 == result
result = RangeIndex(1, 6)._max_fitting_element(4)
- self.assertEqual(4, result)
+ assert 4 == result
result = RangeIndex(18, -2, -2)._max_fitting_element(17)
- self.assertEqual(16, result)
+ assert 16 == result
result = RangeIndex(5, 0, -1)._max_fitting_element(4)
- self.assertEqual(4, result)
+ assert 4 == result
big_num = 500000000000000000000000
result = RangeIndex(5, big_num * 2, 1)._max_fitting_element(big_num)
- self.assertEqual(big_num, result)
+ assert big_num == result
def test_pickle_compat_construction(self):
# RangeIndex() is a valid constructor
@@ -846,11 +846,11 @@ def test_slice_specialised(self):
# scalar indexing
res = self.index[1]
expected = 2
- self.assertEqual(res, expected)
+ assert res == expected
res = self.index[-1]
expected = 18
- self.assertEqual(res, expected)
+ assert res == expected
# slicing
# slice value completion
@@ -903,19 +903,19 @@ def test_len_specialised(self):
arr = np.arange(0, 5, step)
i = RangeIndex(0, 5, step)
- self.assertEqual(len(i), len(arr))
+ assert len(i) == len(arr)
i = RangeIndex(5, 0, step)
- self.assertEqual(len(i), 0)
+ assert len(i) == 0
for step in np.arange(-6, -1, 1):
arr = np.arange(5, 0, step)
i = RangeIndex(5, 0, step)
- self.assertEqual(len(i), len(arr))
+ assert len(i) == len(arr)
i = RangeIndex(0, 5, step)
- self.assertEqual(len(i), 0)
+ assert len(i) == 0
def test_where(self):
i = self.create_index()
diff --git a/pandas/tests/indexes/timedeltas/test_construction.py b/pandas/tests/indexes/timedeltas/test_construction.py
index 6681a03a3b271..bdaa62c5ce221 100644
--- a/pandas/tests/indexes/timedeltas/test_construction.py
+++ b/pandas/tests/indexes/timedeltas/test_construction.py
@@ -81,8 +81,8 @@ def test_constructor_coverage(self):
def test_constructor_name(self):
idx = TimedeltaIndex(start='1 days', periods=1, freq='D', name='TEST')
- self.assertEqual(idx.name, 'TEST')
+ assert idx.name == 'TEST'
# GH10025
idx2 = TimedeltaIndex(idx, name='something else')
- self.assertEqual(idx2.name, 'something else')
+ assert idx2.name == 'something else'
diff --git a/pandas/tests/indexes/timedeltas/test_indexing.py b/pandas/tests/indexes/timedeltas/test_indexing.py
index 58b83dde5f402..6ffe3516c4a94 100644
--- a/pandas/tests/indexes/timedeltas/test_indexing.py
+++ b/pandas/tests/indexes/timedeltas/test_indexing.py
@@ -76,8 +76,8 @@ def test_delete(self):
for n, expected in compat.iteritems(cases):
result = idx.delete(n)
tm.assert_index_equal(result, expected)
- self.assertEqual(result.name, expected.name)
- self.assertEqual(result.freq, expected.freq)
+ assert result.name == expected.name
+ assert result.freq == expected.freq
with pytest.raises((IndexError, ValueError)):
# either depending on numpy version
@@ -103,10 +103,10 @@ def test_delete_slice(self):
for n, expected in compat.iteritems(cases):
result = idx.delete(n)
tm.assert_index_equal(result, expected)
- self.assertEqual(result.name, expected.name)
- self.assertEqual(result.freq, expected.freq)
+ assert result.name == expected.name
+ assert result.freq == expected.freq
result = idx.delete(slice(n[0], n[-1] + 1))
tm.assert_index_equal(result, expected)
- self.assertEqual(result.name, expected.name)
- self.assertEqual(result.freq, expected.freq)
+ assert result.name == expected.name
+ assert result.freq == expected.freq
diff --git a/pandas/tests/indexes/timedeltas/test_ops.py b/pandas/tests/indexes/timedeltas/test_ops.py
index feaec50264872..474dd283530c5 100644
--- a/pandas/tests/indexes/timedeltas/test_ops.py
+++ b/pandas/tests/indexes/timedeltas/test_ops.py
@@ -35,10 +35,10 @@ def test_asobject_tolist(self):
result = idx.asobject
assert isinstance(result, Index)
- self.assertEqual(result.dtype, object)
+ assert result.dtype == object
tm.assert_index_equal(result, expected)
- self.assertEqual(result.name, expected.name)
- self.assertEqual(idx.tolist(), expected_list)
+ assert result.name == expected.name
+ assert idx.tolist() == expected_list
idx = TimedeltaIndex([timedelta(days=1), timedelta(days=2), pd.NaT,
timedelta(days=4)], name='idx')
@@ -47,10 +47,10 @@ def test_asobject_tolist(self):
expected = pd.Index(expected_list, dtype=object, name='idx')
result = idx.asobject
assert isinstance(result, Index)
- self.assertEqual(result.dtype, object)
+ assert result.dtype == object
tm.assert_index_equal(result, expected)
- self.assertEqual(result.name, expected.name)
- self.assertEqual(idx.tolist(), expected_list)
+ assert result.name == expected.name
+ assert idx.tolist() == expected_list
def test_minmax(self):
@@ -63,10 +63,10 @@ def test_minmax(self):
assert not idx2.is_monotonic
for idx in [idx1, idx2]:
- self.assertEqual(idx.min(), Timedelta('1 days')),
- self.assertEqual(idx.max(), Timedelta('3 days')),
- self.assertEqual(idx.argmin(), 0)
- self.assertEqual(idx.argmax(), 2)
+ assert idx.min() == Timedelta('1 days')
+ assert idx.max() == Timedelta('3 days')
+ assert idx.argmin() == 0
+ assert idx.argmax() == 2
for op in ['min', 'max']:
# Return NaT
@@ -83,15 +83,15 @@ def test_numpy_minmax(self):
dr = pd.date_range(start='2016-01-15', end='2016-01-20')
td = TimedeltaIndex(np.asarray(dr))
- self.assertEqual(np.min(td), Timedelta('16815 days'))
- self.assertEqual(np.max(td), Timedelta('16820 days'))
+ assert np.min(td) == Timedelta('16815 days')
+ assert np.max(td) == Timedelta('16820 days')
errmsg = "the 'out' parameter is not supported"
tm.assert_raises_regex(ValueError, errmsg, np.min, td, out=0)
tm.assert_raises_regex(ValueError, errmsg, np.max, td, out=0)
- self.assertEqual(np.argmin(td), 0)
- self.assertEqual(np.argmax(td), 5)
+ assert np.argmin(td) == 0
+ assert np.argmax(td) == 5
if not _np_version_under1p10:
errmsg = "the 'out' parameter is not supported"
@@ -114,7 +114,7 @@ def test_round(self):
expected_elt = expected_rng[1]
tm.assert_index_equal(td.round(freq='H'), expected_rng)
- self.assertEqual(elt.round(freq='H'), expected_elt)
+ assert elt.round(freq='H') == expected_elt
msg = pd.tseries.frequencies._INVALID_FREQ_ERROR
with tm.assert_raises_regex(ValueError, msg):
@@ -152,7 +152,7 @@ def test_representation(self):
[exp1, exp2, exp3, exp4, exp5]):
for func in ['__repr__', '__unicode__', '__str__']:
result = getattr(idx, func)()
- self.assertEqual(result, expected)
+ assert result == expected
def test_representation_to_series(self):
idx1 = TimedeltaIndex([], freq='D')
@@ -184,7 +184,7 @@ def test_representation_to_series(self):
for idx, expected in zip([idx1, idx2, idx3, idx4, idx5],
[exp1, exp2, exp3, exp4, exp5]):
result = repr(pd.Series(idx))
- self.assertEqual(result, expected)
+ assert result == expected
def test_summary(self):
# GH9116
@@ -212,7 +212,7 @@ def test_summary(self):
for idx, expected in zip([idx1, idx2, idx3, idx4, idx5],
[exp1, exp2, exp3, exp4, exp5]):
result = idx.summary()
- self.assertEqual(result, expected)
+ assert result == expected
def test_add_iadd(self):
@@ -355,7 +355,7 @@ def test_subtraction_ops_with_tz(self):
td = Timedelta('1 days')
def _check(result, expected):
- self.assertEqual(result, expected)
+ assert result == expected
assert isinstance(result, Timedelta)
# scalars
@@ -491,11 +491,11 @@ def test_addition_ops(self):
result = dt + td
expected = Timestamp('20130102')
- self.assertEqual(result, expected)
+ assert result == expected
result = td + dt
expected = Timestamp('20130102')
- self.assertEqual(result, expected)
+ assert result == expected
def test_comp_nat(self):
left = pd.TimedeltaIndex([pd.Timedelta('1 days'), pd.NaT,
@@ -582,25 +582,25 @@ def test_order(self):
for idx in [idx1, idx2]:
ordered = idx.sort_values()
tm.assert_index_equal(ordered, idx)
- self.assertEqual(ordered.freq, idx.freq)
+ assert ordered.freq == idx.freq
ordered = idx.sort_values(ascending=False)
expected = idx[::-1]
tm.assert_index_equal(ordered, expected)
- self.assertEqual(ordered.freq, expected.freq)
- self.assertEqual(ordered.freq.n, -1)
+ assert ordered.freq == expected.freq
+ assert ordered.freq.n == -1
ordered, indexer = idx.sort_values(return_indexer=True)
tm.assert_index_equal(ordered, idx)
tm.assert_numpy_array_equal(indexer, np.array([0, 1, 2]),
check_dtype=False)
- self.assertEqual(ordered.freq, idx.freq)
+ assert ordered.freq == idx.freq
ordered, indexer = idx.sort_values(return_indexer=True,
ascending=False)
tm.assert_index_equal(ordered, idx[::-1])
- self.assertEqual(ordered.freq, expected.freq)
- self.assertEqual(ordered.freq.n, -1)
+ assert ordered.freq == expected.freq
+ assert ordered.freq.n == -1
idx1 = TimedeltaIndex(['1 hour', '3 hour', '5 hour',
'2 hour ', '1 hour'], name='idx1')
@@ -648,39 +648,39 @@ def test_getitem(self):
for idx in [idx1]:
result = idx[0]
- self.assertEqual(result, pd.Timedelta('1 day'))
+ assert result == pd.Timedelta('1 day')
result = idx[0:5]
expected = pd.timedelta_range('1 day', '5 day', freq='D',
name='idx')
tm.assert_index_equal(result, expected)
- self.assertEqual(result.freq, expected.freq)
+ assert result.freq == expected.freq
result = idx[0:10:2]
expected = pd.timedelta_range('1 day', '9 day', freq='2D',
name='idx')
tm.assert_index_equal(result, expected)
- self.assertEqual(result.freq, expected.freq)
+ assert result.freq == expected.freq
result = idx[-20:-5:3]
expected = pd.timedelta_range('12 day', '24 day', freq='3D',
name='idx')
tm.assert_index_equal(result, expected)
- self.assertEqual(result.freq, expected.freq)
+ assert result.freq == expected.freq
result = idx[4::-1]
expected = TimedeltaIndex(['5 day', '4 day', '3 day',
'2 day', '1 day'],
freq='-1D', name='idx')
tm.assert_index_equal(result, expected)
- self.assertEqual(result.freq, expected.freq)
+ assert result.freq == expected.freq
def test_drop_duplicates_metadata(self):
# GH 10115
idx = pd.timedelta_range('1 day', '31 day', freq='D', name='idx')
result = idx.drop_duplicates()
tm.assert_index_equal(idx, result)
- self.assertEqual(idx.freq, result.freq)
+ assert idx.freq == result.freq
idx_dup = idx.append(idx)
assert idx_dup.freq is None # freq is reset
@@ -715,28 +715,28 @@ def test_take(self):
for idx in [idx1]:
result = idx.take([0])
- self.assertEqual(result, pd.Timedelta('1 day'))
+ assert result == pd.Timedelta('1 day')
result = idx.take([-1])
- self.assertEqual(result, pd.Timedelta('31 day'))
+ assert result == pd.Timedelta('31 day')
result = idx.take([0, 1, 2])
expected = pd.timedelta_range('1 day', '3 day', freq='D',
name='idx')
tm.assert_index_equal(result, expected)
- self.assertEqual(result.freq, expected.freq)
+ assert result.freq == expected.freq
result = idx.take([0, 2, 4])
expected = pd.timedelta_range('1 day', '5 day', freq='2D',
name='idx')
tm.assert_index_equal(result, expected)
- self.assertEqual(result.freq, expected.freq)
+ assert result.freq == expected.freq
result = idx.take([7, 4, 1])
expected = pd.timedelta_range('8 day', '2 day', freq='-3D',
name='idx')
tm.assert_index_equal(result, expected)
- self.assertEqual(result.freq, expected.freq)
+ assert result.freq == expected.freq
result = idx.take([3, 2, 5])
expected = TimedeltaIndex(['4 day', '3 day', '6 day'], name='idx')
@@ -771,7 +771,7 @@ def test_infer_freq(self):
idx = pd.timedelta_range('1', freq=freq, periods=10)
result = pd.TimedeltaIndex(idx.asi8, freq='infer')
tm.assert_index_equal(idx, result)
- self.assertEqual(result.freq, freq)
+ assert result.freq == freq
def test_nat_new(self):
@@ -867,27 +867,27 @@ class TestTimedeltas(tm.TestCase):
def test_ops(self):
td = Timedelta(10, unit='d')
- self.assertEqual(-td, Timedelta(-10, unit='d'))
- self.assertEqual(+td, Timedelta(10, unit='d'))
- self.assertEqual(td - td, Timedelta(0, unit='ns'))
+ assert -td == Timedelta(-10, unit='d')
+ assert +td == Timedelta(10, unit='d')
+ assert td - td == Timedelta(0, unit='ns')
assert (td - pd.NaT) is pd.NaT
- self.assertEqual(td + td, Timedelta(20, unit='d'))
+ assert td + td == Timedelta(20, unit='d')
assert (td + pd.NaT) is pd.NaT
- self.assertEqual(td * 2, Timedelta(20, unit='d'))
+ assert td * 2 == Timedelta(20, unit='d')
assert (td * pd.NaT) is pd.NaT
- self.assertEqual(td / 2, Timedelta(5, unit='d'))
- self.assertEqual(td // 2, Timedelta(5, unit='d'))
- self.assertEqual(abs(td), td)
- self.assertEqual(abs(-td), td)
- self.assertEqual(td / td, 1)
+ assert td / 2 == Timedelta(5, unit='d')
+ assert td // 2 == Timedelta(5, unit='d')
+ assert abs(td) == td
+ assert abs(-td) == td
+ assert td / td == 1
assert (td / pd.NaT) is np.nan
assert (td // pd.NaT) is np.nan
# invert
- self.assertEqual(-td, Timedelta('-10d'))
- self.assertEqual(td * -1, Timedelta('-10d'))
- self.assertEqual(-1 * td, Timedelta('-10d'))
- self.assertEqual(abs(-td), Timedelta('10d'))
+ assert -td == Timedelta('-10d')
+ assert td * -1 == Timedelta('-10d')
+ assert -1 * td == Timedelta('-10d')
+ assert abs(-td) == Timedelta('10d')
# invalid multiply with another timedelta
pytest.raises(TypeError, lambda: td * td)
@@ -898,12 +898,12 @@ def test_ops(self):
def test_ops_offsets(self):
td = Timedelta(10, unit='d')
- self.assertEqual(Timedelta(241, unit='h'), td + pd.offsets.Hour(1))
- self.assertEqual(Timedelta(241, unit='h'), pd.offsets.Hour(1) + td)
- self.assertEqual(240, td / pd.offsets.Hour(1))
- self.assertEqual(1 / 240.0, pd.offsets.Hour(1) / td)
- self.assertEqual(Timedelta(239, unit='h'), td - pd.offsets.Hour(1))
- self.assertEqual(Timedelta(-239, unit='h'), pd.offsets.Hour(1) - td)
+ assert Timedelta(241, unit='h') == td + pd.offsets.Hour(1)
+ assert Timedelta(241, unit='h') == pd.offsets.Hour(1) + td
+ assert 240 == td / pd.offsets.Hour(1)
+ assert 1 / 240.0 == pd.offsets.Hour(1) / td
+ assert Timedelta(239, unit='h') == td - pd.offsets.Hour(1)
+ assert Timedelta(-239, unit='h') == pd.offsets.Hour(1) - td
def test_ops_ndarray(self):
td = Timedelta('1 day')
@@ -961,7 +961,7 @@ def test_ops_series_object(self):
s = pd.Series([pd.Timestamp('2015-01-01', tz='US/Eastern'),
pd.Timestamp('2015-01-01', tz='Asia/Tokyo')],
name='xxx')
- self.assertEqual(s.dtype, object)
+ assert s.dtype == object
exp = pd.Series([pd.Timestamp('2015-01-02', tz='US/Eastern'),
pd.Timestamp('2015-01-02', tz='Asia/Tokyo')],
@@ -973,7 +973,7 @@ def test_ops_series_object(self):
s2 = pd.Series([pd.Timestamp('2015-01-03', tz='US/Eastern'),
pd.Timestamp('2015-01-05', tz='Asia/Tokyo')],
name='xxx')
- self.assertEqual(s2.dtype, object)
+ assert s2.dtype == object
exp = pd.Series([pd.Timedelta('2 days'), pd.Timedelta('4 days')],
name='xxx')
tm.assert_series_equal(s2 - s, exp)
@@ -981,7 +981,7 @@ def test_ops_series_object(self):
s = pd.Series([pd.Timedelta('01:00:00'), pd.Timedelta('02:00:00')],
name='xxx', dtype=object)
- self.assertEqual(s.dtype, object)
+ assert s.dtype == object
exp = pd.Series([pd.Timedelta('01:30:00'), pd.Timedelta('02:30:00')],
name='xxx')
@@ -1027,38 +1027,38 @@ def test_timedelta_ops(self):
result = td.mean()
expected = to_timedelta(timedelta(seconds=9))
- self.assertEqual(result, expected)
+ assert result == expected
result = td.to_frame().mean()
- self.assertEqual(result[0], expected)
+ assert result[0] == expected
result = td.quantile(.1)
expected = Timedelta(np.timedelta64(2600, 'ms'))
- self.assertEqual(result, expected)
+ assert result == expected
result = td.median()
expected = to_timedelta('00:00:09')
- self.assertEqual(result, expected)
+ assert result == expected
result = td.to_frame().median()
- self.assertEqual(result[0], expected)
+ assert result[0] == expected
# GH 6462
# consistency in returned values for sum
result = td.sum()
expected = to_timedelta('00:01:21')
- self.assertEqual(result, expected)
+ assert result == expected
result = td.to_frame().sum()
- self.assertEqual(result[0], expected)
+ assert result[0] == expected
# std
result = td.std()
expected = to_timedelta(Series(td.dropna().values).std())
- self.assertEqual(result, expected)
+ assert result == expected
result = td.to_frame().std()
- self.assertEqual(result[0], expected)
+ assert result[0] == expected
# invalid ops
for op in ['skew', 'kurt', 'sem', 'prod']:
@@ -1067,11 +1067,11 @@ def test_timedelta_ops(self):
# GH 10040
# make sure NaT is properly handled by median()
s = Series([Timestamp('2015-02-03'), Timestamp('2015-02-07')])
- self.assertEqual(s.diff().median(), timedelta(days=4))
+ assert s.diff().median() == timedelta(days=4)
s = Series([Timestamp('2015-02-03'), Timestamp('2015-02-07'),
Timestamp('2015-02-15')])
- self.assertEqual(s.diff().median(), timedelta(days=6))
+ assert s.diff().median() == timedelta(days=6)
def test_timedelta_ops_scalar(self):
# GH 6808
@@ -1084,10 +1084,10 @@ def test_timedelta_ops_scalar(self):
np.timedelta64(10000000000, 'ns'),
pd.offsets.Second(10)]:
result = base + offset
- self.assertEqual(result, expected_add)
+ assert result == expected_add
result = base - offset
- self.assertEqual(result, expected_sub)
+ assert result == expected_sub
base = pd.to_datetime('20130102 09:01:12.123456')
expected_add = pd.to_datetime('20130103 09:01:22.123456')
@@ -1099,10 +1099,10 @@ def test_timedelta_ops_scalar(self):
np.timedelta64(1, 'D') + np.timedelta64(10, 's'),
pd.offsets.Day() + pd.offsets.Second(10)]:
result = base + offset
- self.assertEqual(result, expected_add)
+ assert result == expected_add
result = base - offset
- self.assertEqual(result, expected_sub)
+ assert result == expected_sub
def test_timedelta_ops_with_missing_values(self):
# setup
@@ -1118,9 +1118,9 @@ def test_timedelta_ops_with_missing_values(self):
NA = np.nan
actual = scalar1 + scalar1
- self.assertEqual(actual, scalar2)
+ assert actual == scalar2
actual = scalar2 - scalar1
- self.assertEqual(actual, scalar1)
+ assert actual == scalar1
actual = s1 + s1
assert_series_equal(actual, s2)
@@ -1217,27 +1217,27 @@ def test_tdi_ops_attributes(self):
result = rng + 1
exp = timedelta_range('4 days', periods=5, freq='2D', name='x')
tm.assert_index_equal(result, exp)
- self.assertEqual(result.freq, '2D')
+ assert result.freq == '2D'
result = rng - 2
exp = timedelta_range('-2 days', periods=5, freq='2D', name='x')
tm.assert_index_equal(result, exp)
- self.assertEqual(result.freq, '2D')
+ assert result.freq == '2D'
result = rng * 2
exp = timedelta_range('4 days', periods=5, freq='4D', name='x')
tm.assert_index_equal(result, exp)
- self.assertEqual(result.freq, '4D')
+ assert result.freq == '4D'
result = rng / 2
exp = timedelta_range('1 days', periods=5, freq='D', name='x')
tm.assert_index_equal(result, exp)
- self.assertEqual(result.freq, 'D')
+ assert result.freq == 'D'
result = -rng
exp = timedelta_range('-2 days', periods=5, freq='-2D', name='x')
tm.assert_index_equal(result, exp)
- self.assertEqual(result.freq, '-2D')
+ assert result.freq == '-2D'
rng = pd.timedelta_range('-2 days', periods=5, freq='D', name='x')
@@ -1245,7 +1245,7 @@ def test_tdi_ops_attributes(self):
exp = TimedeltaIndex(['2 days', '1 days', '0 days', '1 days',
'2 days'], name='x')
tm.assert_index_equal(result, exp)
- self.assertEqual(result.freq, None)
+ assert result.freq is None
def test_add_overflow(self):
# see gh-14068
diff --git a/pandas/tests/indexes/timedeltas/test_partial_slicing.py b/pandas/tests/indexes/timedeltas/test_partial_slicing.py
index 230dbe91b4e34..5e6e1440a7c04 100644
--- a/pandas/tests/indexes/timedeltas/test_partial_slicing.py
+++ b/pandas/tests/indexes/timedeltas/test_partial_slicing.py
@@ -27,7 +27,7 @@ def test_partial_slice(self):
assert_series_equal(result, expected)
result = s['6 days, 23:11:12']
- self.assertEqual(result, s.iloc[133])
+ assert result == s.iloc[133]
pytest.raises(KeyError, s.__getitem__, '50 days')
@@ -46,7 +46,7 @@ def test_partial_slice_high_reso(self):
assert_series_equal(result, expected)
result = s['1 days, 10:11:12.001001']
- self.assertEqual(result, s.iloc[1001])
+ assert result == s.iloc[1001]
def test_slice_with_negative_step(self):
ts = Series(np.arange(20), timedelta_range('0', periods=20, freq='H'))
diff --git a/pandas/tests/indexes/timedeltas/test_setops.py b/pandas/tests/indexes/timedeltas/test_setops.py
index 45900788f7bda..8779f6d49cdd5 100644
--- a/pandas/tests/indexes/timedeltas/test_setops.py
+++ b/pandas/tests/indexes/timedeltas/test_setops.py
@@ -30,7 +30,7 @@ def test_union_coverage(self):
result = ordered[:0].union(ordered)
tm.assert_index_equal(result, ordered)
- self.assertEqual(result.freq, ordered.freq)
+ assert result.freq == ordered.freq
def test_union_bug_1730(self):
@@ -66,7 +66,7 @@ def test_intersection_bug_1708(self):
index_2 = index_1 + pd.offsets.Hour(5)
result = index_1 & index_2
- self.assertEqual(len(result), 0)
+ assert len(result) == 0
index_1 = timedelta_range('1 day', periods=4, freq='h')
index_2 = index_1 + pd.offsets.Hour(1)
diff --git a/pandas/tests/indexes/timedeltas/test_timedelta.py b/pandas/tests/indexes/timedeltas/test_timedelta.py
index 8a327d2ecb08f..d1379973dfec5 100644
--- a/pandas/tests/indexes/timedeltas/test_timedelta.py
+++ b/pandas/tests/indexes/timedeltas/test_timedelta.py
@@ -49,29 +49,30 @@ def test_get_loc(self):
idx = pd.to_timedelta(['0 days', '1 days', '2 days'])
for method in [None, 'pad', 'backfill', 'nearest']:
- self.assertEqual(idx.get_loc(idx[1], method), 1)
- self.assertEqual(idx.get_loc(idx[1].to_pytimedelta(), method), 1)
- self.assertEqual(idx.get_loc(str(idx[1]), method), 1)
+ assert idx.get_loc(idx[1], method) == 1
+ assert idx.get_loc(idx[1].to_pytimedelta(), method) == 1
+ assert idx.get_loc(str(idx[1]), method) == 1
- self.assertEqual(
- idx.get_loc(idx[1], 'pad', tolerance=pd.Timedelta(0)), 1)
- self.assertEqual(
- idx.get_loc(idx[1], 'pad', tolerance=np.timedelta64(0, 's')), 1)
- self.assertEqual(idx.get_loc(idx[1], 'pad', tolerance=timedelta(0)), 1)
+ assert idx.get_loc(idx[1], 'pad',
+ tolerance=pd.Timedelta(0)) == 1
+ assert idx.get_loc(idx[1], 'pad',
+ tolerance=np.timedelta64(0, 's')) == 1
+ assert idx.get_loc(idx[1], 'pad',
+ tolerance=timedelta(0)) == 1
with tm.assert_raises_regex(ValueError, 'must be convertible'):
idx.get_loc(idx[1], method='nearest', tolerance='foo')
for method, loc in [('pad', 1), ('backfill', 2), ('nearest', 1)]:
- self.assertEqual(idx.get_loc('1 day 1 hour', method), loc)
+ assert idx.get_loc('1 day 1 hour', method) == loc
def test_get_loc_nat(self):
tidx = TimedeltaIndex(['1 days 01:00:00', 'NaT', '2 days 01:00:00'])
- self.assertEqual(tidx.get_loc(pd.NaT), 1)
- self.assertEqual(tidx.get_loc(None), 1)
- self.assertEqual(tidx.get_loc(float('nan')), 1)
- self.assertEqual(tidx.get_loc(np.nan), 1)
+ assert tidx.get_loc(pd.NaT) == 1
+ assert tidx.get_loc(None) == 1
+ assert tidx.get_loc(float('nan')) == 1
+ assert tidx.get_loc(np.nan) == 1
def test_get_indexer(self):
idx = pd.to_timedelta(['0 days', '1 days', '2 days'])
@@ -138,14 +139,14 @@ def test_ufunc_coercions(self):
exp = TimedeltaIndex(['4H', '8H', '12H', '16H', '20H'],
freq='4H', name='x')
tm.assert_index_equal(result, exp)
- self.assertEqual(result.freq, '4H')
+ assert result.freq == '4H'
for result in [idx / 2, np.divide(idx, 2)]:
assert isinstance(result, TimedeltaIndex)
exp = TimedeltaIndex(['1H', '2H', '3H', '4H', '5H'],
freq='H', name='x')
tm.assert_index_equal(result, exp)
- self.assertEqual(result.freq, 'H')
+ assert result.freq == 'H'
idx = TimedeltaIndex(['2H', '4H', '6H', '8H', '10H'],
freq='2H', name='x')
@@ -154,7 +155,7 @@ def test_ufunc_coercions(self):
exp = TimedeltaIndex(['-2H', '-4H', '-6H', '-8H', '-10H'],
freq='-2H', name='x')
tm.assert_index_equal(result, exp)
- self.assertEqual(result.freq, '-2H')
+ assert result.freq == '-2H'
idx = TimedeltaIndex(['-2H', '-1H', '0H', '1H', '2H'],
freq='H', name='x')
@@ -163,7 +164,7 @@ def test_ufunc_coercions(self):
exp = TimedeltaIndex(['2H', '1H', '0H', '1H', '2H'],
freq=None, name='x')
tm.assert_index_equal(result, exp)
- self.assertEqual(result.freq, None)
+ assert result.freq is None
def test_fillna_timedelta(self):
# GH 11343
@@ -209,7 +210,7 @@ def test_take(self):
tm.assert_index_equal(taken, expected)
assert isinstance(taken, TimedeltaIndex)
assert taken.freq is None
- self.assertEqual(taken.name, expected.name)
+ assert taken.name == expected.name
def test_take_fill_value(self):
# GH 12631
@@ -289,7 +290,7 @@ def test_slice_keeps_name(self):
# GH4226
dr = pd.timedelta_range('1d', '5d', freq='H', name='timebucket')
- self.assertEqual(dr[1:].name, dr.name)
+ assert dr[1:].name == dr.name
def test_does_not_convert_mixed_integer(self):
df = tm.makeCustomDataframe(10, 10,
@@ -299,8 +300,8 @@ def test_does_not_convert_mixed_integer(self):
cols = df.columns.join(df.index, how='outer')
joined = cols.join(df.columns)
- self.assertEqual(cols.dtype, np.dtype('O'))
- self.assertEqual(cols.dtype, joined.dtype)
+ assert cols.dtype == np.dtype('O')
+ assert cols.dtype == joined.dtype
tm.assert_index_equal(cols, joined)
def test_sort_values(self):
@@ -336,8 +337,8 @@ def test_get_duplicates(self):
def test_argmin_argmax(self):
idx = TimedeltaIndex(['1 day 00:00:05', '1 day 00:00:01',
'1 day 00:00:02'])
- self.assertEqual(idx.argmin(), 1)
- self.assertEqual(idx.argmax(), 0)
+ assert idx.argmin() == 1
+ assert idx.argmax() == 0
def test_misc_coverage(self):
@@ -570,8 +571,8 @@ def test_timedelta(self):
shifted = index + timedelta(1)
back = shifted + timedelta(-1)
assert tm.equalContents(index, back)
- self.assertEqual(shifted.freq, index.freq)
- self.assertEqual(shifted.freq, back.freq)
+ assert shifted.freq == index.freq
+ assert shifted.freq == back.freq
result = index - timedelta(1)
expected = index + timedelta(-1)
diff --git a/pandas/tests/indexes/timedeltas/test_tools.py b/pandas/tests/indexes/timedeltas/test_tools.py
index d69f78bfd73b1..faee627488dc0 100644
--- a/pandas/tests/indexes/timedeltas/test_tools.py
+++ b/pandas/tests/indexes/timedeltas/test_tools.py
@@ -20,16 +20,15 @@ def conv(v):
d1 = np.timedelta64(1, 'D')
- self.assertEqual(to_timedelta('1 days 06:05:01.00003', box=False),
- conv(d1 + np.timedelta64(6 * 3600 +
- 5 * 60 + 1, 's') +
- np.timedelta64(30, 'us')))
- self.assertEqual(to_timedelta('15.5us', box=False),
- conv(np.timedelta64(15500, 'ns')))
+ assert (to_timedelta('1 days 06:05:01.00003', box=False) ==
+ conv(d1 + np.timedelta64(6 * 3600 + 5 * 60 + 1, 's') +
+ np.timedelta64(30, 'us')))
+ assert (to_timedelta('15.5us', box=False) ==
+ conv(np.timedelta64(15500, 'ns')))
# empty string
result = to_timedelta('', box=False)
- self.assertEqual(result.astype('int64'), iNaT)
+ assert result.astype('int64') == iNaT
result = to_timedelta(['', ''])
assert isnull(result).all()
@@ -42,7 +41,7 @@ def conv(v):
# ints
result = np.timedelta64(0, 'ns')
expected = to_timedelta(0, box=False)
- self.assertEqual(result, expected)
+ assert result == expected
# Series
expected = Series([timedelta(days=1), timedelta(days=1, seconds=1)])
@@ -59,12 +58,12 @@ def conv(v):
v = timedelta(seconds=1)
result = to_timedelta(v, box=False)
expected = np.timedelta64(timedelta(seconds=1))
- self.assertEqual(result, expected)
+ assert result == expected
v = np.timedelta64(timedelta(seconds=1))
result = to_timedelta(v, box=False)
expected = np.timedelta64(timedelta(seconds=1))
- self.assertEqual(result, expected)
+ assert result == expected
# arrays of various dtypes
arr = np.array([1] * 5, dtype='int64')
@@ -134,8 +133,7 @@ def test_to_timedelta_invalid(self):
# gh-13613: these should not error because errors='ignore'
invalid_data = 'apple'
- self.assertEqual(invalid_data, to_timedelta(
- invalid_data, errors='ignore'))
+ assert invalid_data == to_timedelta(invalid_data, errors='ignore')
invalid_data = ['apple', '1 days']
tm.assert_numpy_array_equal(
@@ -172,32 +170,32 @@ def test_to_timedelta_on_missing_values(self):
assert_series_equal(actual, expected)
actual = pd.to_timedelta(np.nan)
- self.assertEqual(actual.value, timedelta_NaT.astype('int64'))
+ assert actual.value == timedelta_NaT.astype('int64')
actual = pd.to_timedelta(pd.NaT)
- self.assertEqual(actual.value, timedelta_NaT.astype('int64'))
+ assert actual.value == timedelta_NaT.astype('int64')
def test_to_timedelta_on_nanoseconds(self):
# GH 9273
result = Timedelta(nanoseconds=100)
expected = Timedelta('100ns')
- self.assertEqual(result, expected)
+ assert result == expected
result = Timedelta(days=1, hours=1, minutes=1, weeks=1, seconds=1,
milliseconds=1, microseconds=1, nanoseconds=1)
expected = Timedelta(694861001001001)
- self.assertEqual(result, expected)
+ assert result == expected
result = Timedelta(microseconds=1) + Timedelta(nanoseconds=1)
expected = Timedelta('1us1ns')
- self.assertEqual(result, expected)
+ assert result == expected
result = Timedelta(microseconds=1) - Timedelta(nanoseconds=1)
expected = Timedelta('999ns')
- self.assertEqual(result, expected)
+ assert result == expected
result = Timedelta(microseconds=1) + 5 * Timedelta(nanoseconds=-2)
expected = Timedelta('990ns')
- self.assertEqual(result, expected)
+ assert result == expected
pytest.raises(TypeError, lambda: Timedelta(nanoseconds='abc'))
diff --git a/pandas/tests/indexing/common.py b/pandas/tests/indexing/common.py
index b555a9c1fd0df..bd5b7f45a6f4c 100644
--- a/pandas/tests/indexing/common.py
+++ b/pandas/tests/indexing/common.py
@@ -201,7 +201,7 @@ def _print(result, error=None):
try:
if is_scalar(rs) and is_scalar(xp):
- self.assertEqual(rs, xp)
+ assert rs == xp
elif xp.ndim == 1:
tm.assert_series_equal(rs, xp)
elif xp.ndim == 2:
diff --git a/pandas/tests/indexing/test_callable.py b/pandas/tests/indexing/test_callable.py
index 1d70205076b86..727c87ac90872 100644
--- a/pandas/tests/indexing/test_callable.py
+++ b/pandas/tests/indexing/test_callable.py
@@ -59,10 +59,10 @@ def test_frame_loc_ix_callable(self):
# scalar
res = df.loc[lambda x: 1, lambda x: 'A']
- self.assertEqual(res, df.loc[1, 'A'])
+ assert res == df.loc[1, 'A']
res = df.loc[lambda x: 1, lambda x: 'A']
- self.assertEqual(res, df.loc[1, 'A'])
+ assert res == df.loc[1, 'A']
def test_frame_loc_ix_callable_mixture(self):
# GH 11485
diff --git a/pandas/tests/indexing/test_chaining_and_caching.py b/pandas/tests/indexing/test_chaining_and_caching.py
index b776d3c2d08ea..c39876a8c6e44 100644
--- a/pandas/tests/indexing/test_chaining_and_caching.py
+++ b/pandas/tests/indexing/test_chaining_and_caching.py
@@ -50,8 +50,8 @@ def test_setitem_cache_updating(self):
# set it
df.loc[7, 'c'] = 1
- self.assertEqual(df.loc[0, 'c'], 0.0)
- self.assertEqual(df.loc[7, 'c'], 1.0)
+ assert df.loc[0, 'c'] == 0.0
+ assert df.loc[7, 'c'] == 1.0
# GH 7084
# not updating cache on series setting with slices
@@ -395,12 +395,12 @@ def test_cache_updating(self):
# but actually works, since everything is a view
df.loc[0]['z'].iloc[0] = 1.
result = df.loc[(0, 0), 'z']
- self.assertEqual(result, 1)
+ assert result == 1
# correct setting
df.loc[(0, 0), 'z'] = 2
result = df.loc[(0, 0), 'z']
- self.assertEqual(result, 2)
+ assert result == 2
# 10264
df = DataFrame(np.zeros((5, 5), dtype='int64'), columns=[
diff --git a/pandas/tests/indexing/test_coercion.py b/pandas/tests/indexing/test_coercion.py
index b8030d84e7929..56bc8c1d72bb8 100644
--- a/pandas/tests/indexing/test_coercion.py
+++ b/pandas/tests/indexing/test_coercion.py
@@ -31,8 +31,8 @@ def _assert(self, left, right, dtype):
tm.assert_index_equal(left, right)
else:
raise NotImplementedError
- self.assertEqual(left.dtype, dtype)
- self.assertEqual(right.dtype, dtype)
+ assert left.dtype == dtype
+ assert right.dtype == dtype
def test_has_comprehensive_tests(self):
for klass in self.klasses:
@@ -55,7 +55,7 @@ def _assert_setitem_series_conversion(self, original_series, loc_value,
temp[1] = loc_value
tm.assert_series_equal(temp, expected_series)
# check dtype explicitly for sure
- self.assertEqual(temp.dtype, expected_dtype)
+ assert temp.dtype == expected_dtype
# .loc works different rule, temporary disable
# temp = original_series.copy()
@@ -64,7 +64,7 @@ def _assert_setitem_series_conversion(self, original_series, loc_value,
def test_setitem_series_object(self):
obj = pd.Series(list('abcd'))
- self.assertEqual(obj.dtype, np.object)
+ assert obj.dtype == np.object
# object + int -> object
exp = pd.Series(['a', 1, 'c', 'd'])
@@ -84,7 +84,7 @@ def test_setitem_series_object(self):
def test_setitem_series_int64(self):
obj = pd.Series([1, 2, 3, 4])
- self.assertEqual(obj.dtype, np.int64)
+ assert obj.dtype == np.int64
# int + int -> int
exp = pd.Series([1, 1, 3, 4])
@@ -93,7 +93,7 @@ def test_setitem_series_int64(self):
# int + float -> float
# TODO_GH12747 The result must be float
# tm.assert_series_equal(temp, pd.Series([1, 1.1, 3, 4]))
- # self.assertEqual(temp.dtype, np.float64)
+ # assert temp.dtype == np.float64
exp = pd.Series([1, 1, 3, 4])
self._assert_setitem_series_conversion(obj, 1.1, exp, np.int64)
@@ -107,7 +107,7 @@ def test_setitem_series_int64(self):
def test_setitem_series_float64(self):
obj = pd.Series([1.1, 2.2, 3.3, 4.4])
- self.assertEqual(obj.dtype, np.float64)
+ assert obj.dtype == np.float64
# float + int -> float
exp = pd.Series([1.1, 1.0, 3.3, 4.4])
@@ -128,7 +128,7 @@ def test_setitem_series_float64(self):
def test_setitem_series_complex128(self):
obj = pd.Series([1 + 1j, 2 + 2j, 3 + 3j, 4 + 4j])
- self.assertEqual(obj.dtype, np.complex128)
+ assert obj.dtype == np.complex128
# complex + int -> complex
exp = pd.Series([1 + 1j, 1, 3 + 3j, 4 + 4j])
@@ -148,33 +148,33 @@ def test_setitem_series_complex128(self):
def test_setitem_series_bool(self):
obj = pd.Series([True, False, True, False])
- self.assertEqual(obj.dtype, np.bool)
+ assert obj.dtype == np.bool
# bool + int -> int
# TODO_GH12747 The result must be int
# tm.assert_series_equal(temp, pd.Series([1, 1, 1, 0]))
- # self.assertEqual(temp.dtype, np.int64)
+ # assert temp.dtype == np.int64
exp = pd.Series([True, True, True, False])
self._assert_setitem_series_conversion(obj, 1, exp, np.bool)
# TODO_GH12747 The result must be int
# assigning int greater than bool
# tm.assert_series_equal(temp, pd.Series([1, 3, 1, 0]))
- # self.assertEqual(temp.dtype, np.int64)
+ # assert temp.dtype == np.int64
exp = pd.Series([True, True, True, False])
self._assert_setitem_series_conversion(obj, 3, exp, np.bool)
# bool + float -> float
# TODO_GH12747 The result must be float
# tm.assert_series_equal(temp, pd.Series([1., 1.1, 1., 0.]))
- # self.assertEqual(temp.dtype, np.float64)
+ # assert temp.dtype == np.float64
exp = pd.Series([True, True, True, False])
self._assert_setitem_series_conversion(obj, 1.1, exp, np.bool)
# bool + complex -> complex (buggy, results in bool)
# TODO_GH12747 The result must be complex
# tm.assert_series_equal(temp, pd.Series([1, 1 + 1j, 1, 0]))
- # self.assertEqual(temp.dtype, np.complex128)
+ # assert temp.dtype == np.complex128
exp = pd.Series([True, True, True, False])
self._assert_setitem_series_conversion(obj, 1 + 1j, exp, np.bool)
@@ -187,7 +187,7 @@ def test_setitem_series_datetime64(self):
pd.Timestamp('2011-01-02'),
pd.Timestamp('2011-01-03'),
pd.Timestamp('2011-01-04')])
- self.assertEqual(obj.dtype, 'datetime64[ns]')
+ assert obj.dtype == 'datetime64[ns]'
# datetime64 + datetime64 -> datetime64
exp = pd.Series([pd.Timestamp('2011-01-01'),
@@ -213,7 +213,7 @@ def test_setitem_series_datetime64tz(self):
pd.Timestamp('2011-01-02', tz=tz),
pd.Timestamp('2011-01-03', tz=tz),
pd.Timestamp('2011-01-04', tz=tz)])
- self.assertEqual(obj.dtype, 'datetime64[ns, US/Eastern]')
+ assert obj.dtype == 'datetime64[ns, US/Eastern]'
# datetime64tz + datetime64tz -> datetime64tz
exp = pd.Series([pd.Timestamp('2011-01-01', tz=tz),
@@ -249,18 +249,18 @@ def _assert_setitem_index_conversion(self, original_series, loc_key,
exp = pd.Series([1, 2, 3, 4, 5], index=expected_index)
tm.assert_series_equal(temp, exp)
# check dtype explicitly for sure
- self.assertEqual(temp.index.dtype, expected_dtype)
+ assert temp.index.dtype == expected_dtype
temp = original_series.copy()
temp.loc[loc_key] = 5
exp = pd.Series([1, 2, 3, 4, 5], index=expected_index)
tm.assert_series_equal(temp, exp)
# check dtype explicitly for sure
- self.assertEqual(temp.index.dtype, expected_dtype)
+ assert temp.index.dtype == expected_dtype
def test_setitem_index_object(self):
obj = pd.Series([1, 2, 3, 4], index=list('abcd'))
- self.assertEqual(obj.index.dtype, np.object)
+ assert obj.index.dtype == np.object
# object + object -> object
exp_index = pd.Index(list('abcdx'))
@@ -278,7 +278,7 @@ def test_setitem_index_object(self):
def test_setitem_index_int64(self):
# tests setitem with non-existing numeric key
obj = pd.Series([1, 2, 3, 4])
- self.assertEqual(obj.index.dtype, np.int64)
+ assert obj.index.dtype == np.int64
# int + int -> int
exp_index = pd.Index([0, 1, 2, 3, 5])
@@ -295,7 +295,7 @@ def test_setitem_index_int64(self):
def test_setitem_index_float64(self):
# tests setitem with non-existing numeric key
obj = pd.Series([1, 2, 3, 4], index=[1.1, 2.1, 3.1, 4.1])
- self.assertEqual(obj.index.dtype, np.float64)
+ assert obj.index.dtype == np.float64
# float + int -> int
temp = obj.copy()
@@ -341,11 +341,11 @@ def _assert_insert_conversion(self, original, value,
target = original.copy()
res = target.insert(1, value)
tm.assert_index_equal(res, expected)
- self.assertEqual(res.dtype, expected_dtype)
+ assert res.dtype == expected_dtype
def test_insert_index_object(self):
obj = pd.Index(list('abcd'))
- self.assertEqual(obj.dtype, np.object)
+ assert obj.dtype == np.object
# object + int -> object
exp = pd.Index(['a', 1, 'b', 'c', 'd'])
@@ -358,7 +358,7 @@ def test_insert_index_object(self):
# object + bool -> object
res = obj.insert(1, False)
tm.assert_index_equal(res, pd.Index(['a', False, 'b', 'c', 'd']))
- self.assertEqual(res.dtype, np.object)
+ assert res.dtype == np.object
# object + object -> object
exp = pd.Index(['a', 'x', 'b', 'c', 'd'])
@@ -366,7 +366,7 @@ def test_insert_index_object(self):
def test_insert_index_int64(self):
obj = pd.Int64Index([1, 2, 3, 4])
- self.assertEqual(obj.dtype, np.int64)
+ assert obj.dtype == np.int64
# int + int -> int
exp = pd.Index([1, 1, 2, 3, 4])
@@ -386,7 +386,7 @@ def test_insert_index_int64(self):
def test_insert_index_float64(self):
obj = pd.Float64Index([1., 2., 3., 4.])
- self.assertEqual(obj.dtype, np.float64)
+ assert obj.dtype == np.float64
# float + int -> int
exp = pd.Index([1., 1., 2., 3., 4.])
@@ -413,7 +413,7 @@ def test_insert_index_bool(self):
def test_insert_index_datetime64(self):
obj = pd.DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03',
'2011-01-04'])
- self.assertEqual(obj.dtype, 'datetime64[ns]')
+ assert obj.dtype == 'datetime64[ns]'
# datetime64 + datetime64 => datetime64
exp = pd.DatetimeIndex(['2011-01-01', '2012-01-01', '2011-01-02',
@@ -434,7 +434,7 @@ def test_insert_index_datetime64(self):
def test_insert_index_datetime64tz(self):
obj = pd.DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03',
'2011-01-04'], tz='US/Eastern')
- self.assertEqual(obj.dtype, 'datetime64[ns, US/Eastern]')
+ assert obj.dtype == 'datetime64[ns, US/Eastern]'
# datetime64tz + datetime64tz => datetime64
exp = pd.DatetimeIndex(['2011-01-01', '2012-01-01', '2011-01-02',
@@ -460,7 +460,7 @@ def test_insert_index_datetime64tz(self):
def test_insert_index_timedelta64(self):
obj = pd.TimedeltaIndex(['1 day', '2 day', '3 day', '4 day'])
- self.assertEqual(obj.dtype, 'timedelta64[ns]')
+ assert obj.dtype == 'timedelta64[ns]'
# timedelta64 + timedelta64 => timedelta64
exp = pd.TimedeltaIndex(['1 day', '10 day', '2 day', '3 day', '4 day'])
@@ -480,7 +480,7 @@ def test_insert_index_timedelta64(self):
def test_insert_index_period(self):
obj = pd.PeriodIndex(['2011-01', '2011-02', '2011-03', '2011-04'],
freq='M')
- self.assertEqual(obj.dtype, 'period[M]')
+ assert obj.dtype == 'period[M]'
# period + period => period
exp = pd.PeriodIndex(['2011-01', '2012-01', '2011-02',
@@ -527,7 +527,7 @@ def _assert_where_conversion(self, original, cond, values,
def _where_object_common(self, klass):
obj = klass(list('abcd'))
- self.assertEqual(obj.dtype, np.object)
+ assert obj.dtype == np.object
cond = klass([True, False, True, False])
# object + int -> object
@@ -580,7 +580,7 @@ def test_where_index_object(self):
def _where_int64_common(self, klass):
obj = klass([1, 2, 3, 4])
- self.assertEqual(obj.dtype, np.int64)
+ assert obj.dtype == np.int64
cond = klass([True, False, True, False])
# int + int -> int
@@ -626,7 +626,7 @@ def test_where_index_int64(self):
def _where_float64_common(self, klass):
obj = klass([1.1, 2.2, 3.3, 4.4])
- self.assertEqual(obj.dtype, np.float64)
+ assert obj.dtype == np.float64
cond = klass([True, False, True, False])
# float + int -> float
@@ -672,7 +672,7 @@ def test_where_index_float64(self):
def test_where_series_complex128(self):
obj = pd.Series([1 + 1j, 2 + 2j, 3 + 3j, 4 + 4j])
- self.assertEqual(obj.dtype, np.complex128)
+ assert obj.dtype == np.complex128
cond = pd.Series([True, False, True, False])
# complex + int -> complex
@@ -712,7 +712,7 @@ def test_where_index_complex128(self):
def test_where_series_bool(self):
obj = pd.Series([True, False, True, False])
- self.assertEqual(obj.dtype, np.bool)
+ assert obj.dtype == np.bool
cond = pd.Series([True, False, True, False])
# bool + int -> int
@@ -755,7 +755,7 @@ def test_where_series_datetime64(self):
pd.Timestamp('2011-01-02'),
pd.Timestamp('2011-01-03'),
pd.Timestamp('2011-01-04')])
- self.assertEqual(obj.dtype, 'datetime64[ns]')
+ assert obj.dtype == 'datetime64[ns]'
cond = pd.Series([True, False, True, False])
# datetime64 + datetime64 -> datetime64
@@ -797,7 +797,7 @@ def test_where_index_datetime64(self):
pd.Timestamp('2011-01-02'),
pd.Timestamp('2011-01-03'),
pd.Timestamp('2011-01-04')])
- self.assertEqual(obj.dtype, 'datetime64[ns]')
+ assert obj.dtype == 'datetime64[ns]'
cond = pd.Index([True, False, True, False])
# datetime64 + datetime64 -> datetime64
@@ -867,7 +867,7 @@ def _assert_fillna_conversion(self, original, value,
def _fillna_object_common(self, klass):
obj = klass(['a', np.nan, 'c', 'd'])
- self.assertEqual(obj.dtype, np.object)
+ assert obj.dtype == np.object
# object + int -> object
exp = klass(['a', 1, 'c', 'd'])
@@ -900,7 +900,7 @@ def test_fillna_index_int64(self):
def _fillna_float64_common(self, klass):
obj = klass([1.1, np.nan, 3.3, 4.4])
- self.assertEqual(obj.dtype, np.float64)
+ assert obj.dtype == np.float64
# float + int -> float
exp = klass([1.1, 1.0, 3.3, 4.4])
@@ -933,7 +933,7 @@ def test_fillna_index_float64(self):
def test_fillna_series_complex128(self):
obj = pd.Series([1 + 1j, np.nan, 3 + 3j, 4 + 4j])
- self.assertEqual(obj.dtype, np.complex128)
+ assert obj.dtype == np.complex128
# complex + int -> complex
exp = pd.Series([1 + 1j, 1, 3 + 3j, 4 + 4j])
@@ -966,7 +966,7 @@ def test_fillna_series_datetime64(self):
pd.NaT,
pd.Timestamp('2011-01-03'),
pd.Timestamp('2011-01-04')])
- self.assertEqual(obj.dtype, 'datetime64[ns]')
+ assert obj.dtype == 'datetime64[ns]'
# datetime64 + datetime64 => datetime64
exp = pd.Series([pd.Timestamp('2011-01-01'),
@@ -1006,7 +1006,7 @@ def test_fillna_series_datetime64tz(self):
pd.NaT,
pd.Timestamp('2011-01-03', tz=tz),
pd.Timestamp('2011-01-04', tz=tz)])
- self.assertEqual(obj.dtype, 'datetime64[ns, US/Eastern]')
+ assert obj.dtype == 'datetime64[ns, US/Eastern]'
# datetime64tz + datetime64tz => datetime64tz
exp = pd.Series([pd.Timestamp('2011-01-01', tz=tz),
@@ -1058,7 +1058,7 @@ def test_fillna_series_period(self):
def test_fillna_index_datetime64(self):
obj = pd.DatetimeIndex(['2011-01-01', 'NaT', '2011-01-03',
'2011-01-04'])
- self.assertEqual(obj.dtype, 'datetime64[ns]')
+ assert obj.dtype == 'datetime64[ns]'
# datetime64 + datetime64 => datetime64
exp = pd.DatetimeIndex(['2011-01-01', '2012-01-01',
@@ -1093,7 +1093,7 @@ def test_fillna_index_datetime64tz(self):
obj = pd.DatetimeIndex(['2011-01-01', 'NaT', '2011-01-03',
'2011-01-04'], tz=tz)
- self.assertEqual(obj.dtype, 'datetime64[ns, US/Eastern]')
+ assert obj.dtype == 'datetime64[ns, US/Eastern]'
# datetime64tz + datetime64tz => datetime64tz
exp = pd.DatetimeIndex(['2011-01-01', '2012-01-01',
@@ -1168,7 +1168,7 @@ def setUp(self):
def _assert_replace_conversion(self, from_key, to_key, how):
index = pd.Index([3, 4], name='xxx')
obj = pd.Series(self.rep[from_key], index=index, name='yyy')
- self.assertEqual(obj.dtype, from_key)
+ assert obj.dtype == from_key
if (from_key.startswith('datetime') and to_key.startswith('datetime')):
# different tz, currently mask_missing raises SystemError
@@ -1198,7 +1198,7 @@ def _assert_replace_conversion(self, from_key, to_key, how):
else:
exp = pd.Series(self.rep[to_key], index=index, name='yyy')
- self.assertEqual(exp.dtype, to_key)
+ assert exp.dtype == to_key
tm.assert_series_equal(result, exp)
diff --git a/pandas/tests/indexing/test_datetime.py b/pandas/tests/indexing/test_datetime.py
index 9b224ba796268..3089bc1dbddea 100644
--- a/pandas/tests/indexing/test_datetime.py
+++ b/pandas/tests/indexing/test_datetime.py
@@ -37,10 +37,10 @@ def test_indexing_with_datetime_tz(self):
df = DataFrame({'a': date_range('2014-01-01', periods=10, tz='UTC')})
result = df.iloc[5]
expected = Timestamp('2014-01-06 00:00:00+0000', tz='UTC', freq='D')
- self.assertEqual(result, expected)
+ assert result == expected
result = df.loc[5]
- self.assertEqual(result, expected)
+ assert result == expected
# indexing - boolean
result = df[df.a > df.a[3]]
@@ -129,7 +129,7 @@ def test_indexing_with_datetimeindex_tz(self):
# single element indexing
# getitem
- self.assertEqual(ser[index[1]], 1)
+ assert ser[index[1]] == 1
# setitem
result = ser.copy()
@@ -138,7 +138,7 @@ def test_indexing_with_datetimeindex_tz(self):
tm.assert_series_equal(result, expected)
# .loc getitem
- self.assertEqual(ser.loc[index[1]], 1)
+ assert ser.loc[index[1]] == 1
# .loc setitem
result = ser.copy()
diff --git a/pandas/tests/indexing/test_floats.py b/pandas/tests/indexing/test_floats.py
index 4d4ef65b40074..1701dd9f6ba90 100644
--- a/pandas/tests/indexing/test_floats.py
+++ b/pandas/tests/indexing/test_floats.py
@@ -165,7 +165,7 @@ def f():
result = s2.loc['b']
expected = 2
- self.assertEqual(result, expected)
+ assert result == expected
# mixed index so we have label
# indexing
@@ -180,14 +180,14 @@ def f():
result = idxr(s3)[1]
expected = 2
- self.assertEqual(result, expected)
+ assert result == expected
pytest.raises(TypeError, lambda: s3.iloc[1.0])
pytest.raises(KeyError, lambda: s3.loc[1.0])
result = s3.loc[1.5]
expected = 3
- self.assertEqual(result, expected)
+ assert result == expected
def test_scalar_integer(self):
@@ -216,7 +216,8 @@ def test_scalar_integer(self):
(lambda x: x, True)]:
if isinstance(s, Series):
- compare = self.assertEqual
+ def compare(x, y):
+ assert x == y
expected = 100
else:
compare = tm.assert_series_equal
@@ -576,10 +577,10 @@ def test_floating_index_doc_example(self):
index = Index([1.5, 2, 3, 4.5, 5])
s = Series(range(5), index=index)
- self.assertEqual(s[3], 2)
- self.assertEqual(s.loc[3], 2)
- self.assertEqual(s.loc[3], 2)
- self.assertEqual(s.iloc[3], 3)
+ assert s[3] == 2
+ assert s.loc[3] == 2
+ assert s.loc[3] == 2
+ assert s.iloc[3] == 3
def test_floating_misc(self):
@@ -598,16 +599,16 @@ def test_floating_misc(self):
result1 = s[5.0]
result2 = s.loc[5.0]
result3 = s.loc[5.0]
- self.assertEqual(result1, result2)
- self.assertEqual(result1, result3)
+ assert result1 == result2
+ assert result1 == result3
result1 = s[5]
result2 = s.loc[5]
result3 = s.loc[5]
- self.assertEqual(result1, result2)
- self.assertEqual(result1, result3)
+ assert result1 == result2
+ assert result1 == result3
- self.assertEqual(s[5.0], s[5])
+ assert s[5.0] == s[5]
# value not found (and no fallbacking at all)
@@ -702,15 +703,17 @@ def test_floating_misc(self):
assert_series_equal(result1, Series([1], index=[2.5]))
def test_floating_tuples(self):
- # GH13509
+ # see gh-13509
s = Series([(1, 1), (2, 2), (3, 3)], index=[0.0, 0.1, 0.2], name='foo')
+
result = s[0.0]
- self.assertEqual(result, (1, 1))
+ assert result == (1, 1)
+ expected = Series([(1, 1), (2, 2)], index=[0.0, 0.0], name='foo')
s = Series([(1, 1), (2, 2), (3, 3)], index=[0.0, 0.0, 0.2], name='foo')
+
result = s[0.0]
- expected = Series([(1, 1), (2, 2)], index=[0.0, 0.0], name='foo')
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
def test_float64index_slicing_bug(self):
# GH 5557, related to slicing a float index
diff --git a/pandas/tests/indexing/test_iloc.py b/pandas/tests/indexing/test_iloc.py
index baced46923fd4..3e625fa483f7b 100644
--- a/pandas/tests/indexing/test_iloc.py
+++ b/pandas/tests/indexing/test_iloc.py
@@ -166,7 +166,7 @@ def test_iloc_getitem_neg_int_can_reach_first_index(self):
expected = s.iloc[0]
result = s.iloc[-3]
- self.assertEqual(result, expected)
+ assert result == expected
expected = s.iloc[[0]]
result = s.iloc[[-3]]
@@ -256,7 +256,7 @@ def test_iloc_setitem(self):
df.iloc[1, 1] = 1
result = df.iloc[1, 1]
- self.assertEqual(result, 1)
+ assert result == 1
df.iloc[:, 2:3] = 0
expected = df.iloc[:, 2:3]
@@ -326,7 +326,7 @@ def test_iloc_getitem_frame(self):
result = df.iloc[2, 2]
with catch_warnings(record=True):
exp = df.ix[4, 4]
- self.assertEqual(result, exp)
+ assert result == exp
# slice
result = df.iloc[4:8]
@@ -376,7 +376,7 @@ def test_iloc_getitem_labelled_frame(self):
result = df.iloc[1, 1]
exp = df.loc['b', 'B']
- self.assertEqual(result, exp)
+ assert result == exp
result = df.iloc[:, 2:3]
expected = df.loc[:, ['C']]
@@ -385,7 +385,7 @@ def test_iloc_getitem_labelled_frame(self):
# negative indexing
result = df.iloc[-1, -1]
exp = df.loc['j', 'D']
- self.assertEqual(result, exp)
+ assert result == exp
# out-of-bounds exception
pytest.raises(IndexError, df.iloc.__getitem__, tuple([10, 5]))
@@ -444,7 +444,7 @@ def test_iloc_setitem_series(self):
df.iloc[1, 1] = 1
result = df.iloc[1, 1]
- self.assertEqual(result, 1)
+ assert result == 1
df.iloc[:, 2:3] = 0
expected = df.iloc[:, 2:3]
@@ -455,7 +455,7 @@ def test_iloc_setitem_series(self):
s.iloc[1] = 1
result = s.iloc[1]
- self.assertEqual(result, 1)
+ assert result == 1
s.iloc[:4] = 0
expected = s.iloc[:4]
diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py
index 5924dba488043..0759dc2333ad5 100644
--- a/pandas/tests/indexing/test_indexing.py
+++ b/pandas/tests/indexing/test_indexing.py
@@ -68,7 +68,7 @@ def test_setitem_dtype_upcast(self):
# GH3216
df = DataFrame([{"a": 1}, {"a": 3, "b": 2}])
df['c'] = np.nan
- self.assertEqual(df['c'].dtype, np.float64)
+ assert df['c'].dtype == np.float64
df.loc[0, 'c'] = 'foo'
expected = DataFrame([{"a": 1, "c": 'foo'},
@@ -231,7 +231,7 @@ def test_indexing_mixed_frame_bug(self):
idx = df['test'] == '_'
temp = df.loc[idx, 'a'].apply(lambda x: '-----' if x == 'aaa' else x)
df.loc[idx, 'test'] = temp
- self.assertEqual(df.iloc[0, 2], '-----')
+ assert df.iloc[0, 2] == '-----'
# if I look at df, then element [0,2] equals '_'. If instead I type
# df.ix[idx,'test'], I get '-----', finally by typing df.iloc[0,2] I
@@ -244,7 +244,7 @@ def test_multitype_list_index_access(self):
with pytest.raises(KeyError):
df[[22, 26, -8]]
- self.assertEqual(df[21].shape[0], df.shape[0])
+ assert df[21].shape[0] == df.shape[0]
def test_set_index_nan(self):
@@ -638,9 +638,9 @@ def test_float_index_non_scalar_assignment(self):
def test_float_index_at_iat(self):
s = pd.Series([1, 2, 3], index=[0.1, 0.2, 0.3])
for el, item in s.iteritems():
- self.assertEqual(s.at[el], item)
+ assert s.at[el] == item
for i in range(len(s)):
- self.assertEqual(s.iat[i], i + 1)
+ assert s.iat[i] == i + 1
def test_rhs_alignment(self):
# GH8258, tests that both rows & columns are aligned to what is
@@ -741,7 +741,7 @@ def test_indexing_dtypes_on_empty(self):
with catch_warnings(record=True):
df2 = df.ix[[], :]
- self.assertEqual(df2.loc[:, 'a'].dtype, np.int64)
+ assert df2.loc[:, 'a'].dtype == np.int64
tm.assert_series_equal(df2.loc[:, 'a'], df2.iloc[:, 0])
with catch_warnings(record=True):
tm.assert_series_equal(df2.loc[:, 'a'], df2.ix[:, 0])
@@ -791,13 +791,13 @@ def test_maybe_numeric_slice(self):
df = pd.DataFrame({'A': [1, 2], 'B': ['c', 'd'], 'C': [True, False]})
result = _maybe_numeric_slice(df, slice_=None)
expected = pd.IndexSlice[:, ['A']]
- self.assertEqual(result, expected)
+ assert result == expected
result = _maybe_numeric_slice(df, None, include_bool=True)
expected = pd.IndexSlice[:, ['A', 'C']]
result = _maybe_numeric_slice(df, [1])
expected = [1]
- self.assertEqual(result, expected)
+ assert result == expected
class TestSeriesNoneCoercion(tm.TestCase):
diff --git a/pandas/tests/indexing/test_ix.py b/pandas/tests/indexing/test_ix.py
index 433b44c952ca1..8290bc80edac1 100644
--- a/pandas/tests/indexing/test_ix.py
+++ b/pandas/tests/indexing/test_ix.py
@@ -82,7 +82,7 @@ def test_ix_loc_consistency(self):
def compare(result, expected):
if is_scalar(expected):
- self.assertEqual(result, expected)
+ assert result == expected
else:
assert expected.equals(result)
@@ -216,7 +216,7 @@ def test_ix_assign_column_mixed(self):
indexer = i * 2
v = 1000 + i * 200
expected.loc[indexer, 'y'] = v
- self.assertEqual(expected.loc[indexer, 'y'], v)
+ assert expected.loc[indexer, 'y'] == v
df.loc[df.x % 2 == 0, 'y'] = df.loc[df.x % 2 == 0, 'y'] * 100
tm.assert_frame_equal(df, expected)
@@ -252,21 +252,21 @@ def test_ix_get_set_consistency(self):
index=['e', 7, 'f', 'g'])
with catch_warnings(record=True):
- self.assertEqual(df.ix['e', 8], 2)
- self.assertEqual(df.loc['e', 8], 2)
+ assert df.ix['e', 8] == 2
+ assert df.loc['e', 8] == 2
with catch_warnings(record=True):
df.ix['e', 8] = 42
- self.assertEqual(df.ix['e', 8], 42)
- self.assertEqual(df.loc['e', 8], 42)
+ assert df.ix['e', 8] == 42
+ assert df.loc['e', 8] == 42
df.loc['e', 8] = 45
with catch_warnings(record=True):
- self.assertEqual(df.ix['e', 8], 45)
- self.assertEqual(df.loc['e', 8], 45)
+ assert df.ix['e', 8] == 45
+ assert df.loc['e', 8] == 45
def test_ix_slicing_strings(self):
- # GH3836
+ # see gh-3836
data = {'Classification':
['SA EQUITY CFD', 'bbb', 'SA EQUITY', 'SA SSF', 'aaa'],
'Random': [1, 2, 3, 4, 5],
diff --git a/pandas/tests/indexing/test_loc.py b/pandas/tests/indexing/test_loc.py
index b430f458d48b5..410d01431ef5a 100644
--- a/pandas/tests/indexing/test_loc.py
+++ b/pandas/tests/indexing/test_loc.py
@@ -58,7 +58,7 @@ def test_loc_setitem_dups(self):
indexer = tuple(['r', 'bar'])
df = df_orig.copy()
df.loc[indexer] *= 2.0
- self.assertEqual(df.loc[indexer], 2.0 * df_orig.loc[indexer])
+ assert df.loc[indexer] == 2.0 * df_orig.loc[indexer]
indexer = tuple(['t', ['bar', 'bar2']])
df = df_orig.copy()
@@ -332,7 +332,7 @@ def test_loc_general(self):
result = DataFrame({'a': [Timestamp('20130101')], 'b': [1]}).iloc[0]
expected = Series([Timestamp('20130101'), 1], index=['a', 'b'], name=0)
tm.assert_series_equal(result, expected)
- self.assertEqual(result.dtype, object)
+ assert result.dtype == object
def test_loc_setitem_consistency(self):
# GH 6149
@@ -415,10 +415,10 @@ def test_loc_setitem_frame(self):
df.loc['a', 'A'] = 1
result = df.loc['a', 'A']
- self.assertEqual(result, 1)
+ assert result == 1
result = df.iloc[0, 0]
- self.assertEqual(result, 1)
+ assert result == 1
df.loc[:, 'B':'D'] = 0
expected = df.loc[:, 'B':'D']
@@ -608,14 +608,14 @@ def test_loc_name(self):
df = DataFrame([[1, 1], [1, 1]])
df.index.name = 'index_name'
result = df.iloc[[0, 1]].index.name
- self.assertEqual(result, 'index_name')
+ assert result == 'index_name'
with catch_warnings(record=True):
result = df.ix[[0, 1]].index.name
- self.assertEqual(result, 'index_name')
+ assert result == 'index_name'
result = df.loc[[0, 1]].index.name
- self.assertEqual(result, 'index_name')
+ assert result == 'index_name'
def test_loc_empty_list_indexer_is_ok(self):
from pandas.util.testing import makeCustomDataframe as mkdf
diff --git a/pandas/tests/indexing/test_multiindex.py b/pandas/tests/indexing/test_multiindex.py
index dbd0f5a9e6e1c..b8c34f9f28d83 100644
--- a/pandas/tests/indexing/test_multiindex.py
+++ b/pandas/tests/indexing/test_multiindex.py
@@ -30,7 +30,7 @@ def test_iloc_getitem_multiindex2(self):
rs = df.iloc[2, 2]
xp = df.values[2, 2]
- self.assertEqual(rs, xp)
+ assert rs == xp
# for multiple items
# GH 5528
@@ -50,6 +50,9 @@ def test_setitem_multiindex(self):
for index_fn in ('ix', 'loc'):
+ def assert_equal(a, b):
+ assert a == b
+
def check(target, indexers, value, compare_fn, expected=None):
fn = getattr(target, index_fn)
fn.__setitem__(indexers, value)
@@ -66,28 +69,28 @@ def check(target, indexers, value, compare_fn, expected=None):
'X', 'd', 'profit'],
index=index)
check(target=df, indexers=((t, n), 'X'), value=0,
- compare_fn=self.assertEqual)
+ compare_fn=assert_equal)
df = DataFrame(-999, columns=['A', 'w', 'l', 'a', 'x',
'X', 'd', 'profit'],
index=index)
check(target=df, indexers=((t, n), 'X'), value=1,
- compare_fn=self.assertEqual)
+ compare_fn=assert_equal)
df = DataFrame(columns=['A', 'w', 'l', 'a', 'x',
'X', 'd', 'profit'],
index=index)
check(target=df, indexers=((t, n), 'X'), value=2,
- compare_fn=self.assertEqual)
+ compare_fn=assert_equal)
- # GH 7218, assinging with 0-dim arrays
+ # gh-7218: assigning with 0-dim arrays
df = DataFrame(-999, columns=['A', 'w', 'l', 'a', 'x',
'X', 'd', 'profit'],
index=index)
check(target=df,
indexers=((t, n), 'X'),
value=np.array(3),
- compare_fn=self.assertEqual,
+ compare_fn=assert_equal,
expected=3, )
# GH5206
@@ -215,8 +218,8 @@ def test_iloc_getitem_multiindex(self):
with catch_warnings(record=True):
xp = mi_int.ix[4].ix[8]
tm.assert_series_equal(rs, xp, check_names=False)
- self.assertEqual(rs.name, (4, 8))
- self.assertEqual(xp.name, 8)
+ assert rs.name == (4, 8)
+ assert xp.name == 8
# 2nd (last) columns
rs = mi_int.iloc[:, 2]
@@ -228,13 +231,13 @@ def test_iloc_getitem_multiindex(self):
rs = mi_int.iloc[2, 2]
with catch_warnings(record=True):
xp = mi_int.ix[:, 2].ix[2]
- self.assertEqual(rs, xp)
+ assert rs == xp
# this is basically regular indexing
rs = mi_labels.iloc[2, 2]
with catch_warnings(record=True):
xp = mi_labels.ix['j'].ix[:, 'j'].ix[0, 0]
- self.assertEqual(rs, xp)
+ assert rs == xp
def test_loc_multiindex(self):
@@ -572,7 +575,7 @@ def f():
('functs', 'median')]),
index=['function', 'name'])
result = df.loc['function', ('functs', 'mean')]
- self.assertEqual(result, np.mean)
+ assert result == np.mean
def test_multiindex_assignment(self):
@@ -798,9 +801,9 @@ def f():
tm.assert_frame_equal(result, expected)
# not lexsorted
- self.assertEqual(df.index.lexsort_depth, 2)
+ assert df.index.lexsort_depth == 2
df = df.sort_index(level=1, axis=0)
- self.assertEqual(df.index.lexsort_depth, 0)
+ assert df.index.lexsort_depth == 0
with tm.assert_raises_regex(
UnsortedIndexError,
'MultiIndex Slicing requires the index to be fully '
diff --git a/pandas/tests/indexing/test_panel.py b/pandas/tests/indexing/test_panel.py
index 8aa35a163babc..b704e15b81502 100644
--- a/pandas/tests/indexing/test_panel.py
+++ b/pandas/tests/indexing/test_panel.py
@@ -27,7 +27,7 @@ def test_iloc_getitem_panel(self):
result = p.iloc[1, 1, 1]
expected = p.loc['B', 'b', 'two']
- self.assertEqual(result, expected)
+ assert result == expected
# slice
result = p.iloc[1:3]
@@ -99,16 +99,16 @@ def f():
def test_iloc_panel_issue(self):
with catch_warnings(record=True):
- # GH 3617
+ # see gh-3617
p = Panel(np.random.randn(4, 4, 4))
- self.assertEqual(p.iloc[:3, :3, :3].shape, (3, 3, 3))
- self.assertEqual(p.iloc[1, :3, :3].shape, (3, 3))
- self.assertEqual(p.iloc[:3, 1, :3].shape, (3, 3))
- self.assertEqual(p.iloc[:3, :3, 1].shape, (3, 3))
- self.assertEqual(p.iloc[1, 1, :3].shape, (3, ))
- self.assertEqual(p.iloc[1, :3, 1].shape, (3, ))
- self.assertEqual(p.iloc[:3, 1, 1].shape, (3, ))
+ assert p.iloc[:3, :3, :3].shape == (3, 3, 3)
+ assert p.iloc[1, :3, :3].shape == (3, 3)
+ assert p.iloc[:3, 1, :3].shape == (3, 3)
+ assert p.iloc[:3, :3, 1].shape == (3, 3)
+ assert p.iloc[1, 1, :3].shape == (3, )
+ assert p.iloc[1, :3, 1].shape == (3, )
+ assert p.iloc[:3, 1, 1].shape == (3, )
def test_panel_getitem(self):
diff --git a/pandas/tests/indexing/test_partial.py b/pandas/tests/indexing/test_partial.py
index 80d2d5729c610..20cec2a3aa7db 100644
--- a/pandas/tests/indexing/test_partial.py
+++ b/pandas/tests/indexing/test_partial.py
@@ -392,7 +392,7 @@ def f():
tm.assert_frame_equal(df, exp)
tm.assert_index_equal(df.index,
pd.Index(orig.index.tolist() + ['a']))
- self.assertEqual(df.index.dtype, 'object')
+ assert df.index.dtype == 'object'
def test_partial_set_empty_series(self):
diff --git a/pandas/tests/indexing/test_scalar.py b/pandas/tests/indexing/test_scalar.py
index 70c7eaf7446db..fb40c539e16ba 100644
--- a/pandas/tests/indexing/test_scalar.py
+++ b/pandas/tests/indexing/test_scalar.py
@@ -77,7 +77,7 @@ def test_at_iat_coercion(self):
result = s.at[dates[5]]
xp = s.values[5]
- self.assertEqual(result, xp)
+ assert result == xp
# GH 7729
# make sure we are boxing the returns
@@ -86,14 +86,14 @@ def test_at_iat_coercion(self):
for r in [lambda: s.iat[1], lambda: s.iloc[1]]:
result = r()
- self.assertEqual(result, expected)
+ assert result == expected
s = Series(['1 days', '2 days'], dtype='timedelta64[ns]')
expected = Timedelta('2 days')
for r in [lambda: s.iat[1], lambda: s.iloc[1]]:
result = r()
- self.assertEqual(result, expected)
+ assert result == expected
def test_iat_invalid_args(self):
pass
@@ -105,9 +105,9 @@ def test_imethods_with_dups(self):
s = Series(range(5), index=[1, 1, 2, 2, 3], dtype='int64')
result = s.iloc[2]
- self.assertEqual(result, 2)
+ assert result == 2
result = s.iat[2]
- self.assertEqual(result, 2)
+ assert result == 2
pytest.raises(IndexError, lambda: s.iat[10])
pytest.raises(IndexError, lambda: s.iat[-10])
@@ -123,29 +123,29 @@ def test_imethods_with_dups(self):
result = df.iat[2, 0]
expected = 2
- self.assertEqual(result, 2)
+ assert result == 2
def test_at_to_fail(self):
# at should not fallback
# GH 7814
s = Series([1, 2, 3], index=list('abc'))
result = s.at['a']
- self.assertEqual(result, 1)
+ assert result == 1
pytest.raises(ValueError, lambda: s.at[0])
df = DataFrame({'A': [1, 2, 3]}, index=list('abc'))
result = df.at['a', 'A']
- self.assertEqual(result, 1)
+ assert result == 1
pytest.raises(ValueError, lambda: df.at['a', 0])
s = Series([1, 2, 3], index=[3, 2, 1])
result = s.at[1]
- self.assertEqual(result, 3)
+ assert result == 3
pytest.raises(ValueError, lambda: s.at['a'])
df = DataFrame({0: [1, 2, 3]}, index=[3, 2, 1])
result = df.at[1, 0]
- self.assertEqual(result, 3)
+ assert result == 3
pytest.raises(ValueError, lambda: df.at['a', 0])
# GH 13822, incorrect error string with non-unique columns when missing
diff --git a/pandas/tests/io/formats/test_eng_formatting.py b/pandas/tests/io/formats/test_eng_formatting.py
index 41bb95964b4a2..e064d1200d672 100644
--- a/pandas/tests/io/formats/test_eng_formatting.py
+++ b/pandas/tests/io/formats/test_eng_formatting.py
@@ -18,7 +18,7 @@ def test_eng_float_formatter(self):
'1 141.000E+00\n'
'2 14.100E+03\n'
'3 1.410E+06')
- self.assertEqual(result, expected)
+ assert result == expected
fmt.set_eng_float_format(use_eng_prefix=True)
result = df.to_string()
@@ -27,7 +27,7 @@ def test_eng_float_formatter(self):
'1 141.000\n'
'2 14.100k\n'
'3 1.410M')
- self.assertEqual(result, expected)
+ assert result == expected
fmt.set_eng_float_format(accuracy=0)
result = df.to_string()
@@ -36,15 +36,13 @@ def test_eng_float_formatter(self):
'1 141E+00\n'
'2 14E+03\n'
'3 1E+06')
- self.assertEqual(result, expected)
+ assert result == expected
tm.reset_display_options()
def compare(self, formatter, input, output):
formatted_input = formatter(input)
- msg = ("formatting of %s results in '%s', expected '%s'" %
- (str(input), formatted_input, output))
- self.assertEqual(formatted_input, output, msg)
+ assert formatted_input == output
def compare_all(self, formatter, in_out):
"""
@@ -169,14 +167,14 @@ def test_rounding(self):
formatter = fmt.EngFormatter(accuracy=3, use_eng_prefix=True)
result = formatter(0)
- self.assertEqual(result, u(' 0.000'))
+ assert result == u(' 0.000')
def test_nan(self):
# Issue #11981
formatter = fmt.EngFormatter(accuracy=1, use_eng_prefix=True)
result = formatter(np.nan)
- self.assertEqual(result, u('NaN'))
+ assert result == u('NaN')
df = pd.DataFrame({'a': [1.5, 10.3, 20.5],
'b': [50.3, 60.67, 70.12],
@@ -192,4 +190,4 @@ def test_inf(self):
formatter = fmt.EngFormatter(accuracy=1, use_eng_prefix=True)
result = formatter(np.inf)
- self.assertEqual(result, u('inf'))
+ assert result == u('inf')
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index 6f19a4a126118..dee645e9d70ec 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -196,16 +196,16 @@ def test_repr_truncation(self):
def test_repr_chop_threshold(self):
df = DataFrame([[0.1, 0.5], [0.5, -0.1]])
pd.reset_option("display.chop_threshold") # default None
- self.assertEqual(repr(df), ' 0 1\n0 0.1 0.5\n1 0.5 -0.1')
+ assert repr(df) == ' 0 1\n0 0.1 0.5\n1 0.5 -0.1'
with option_context("display.chop_threshold", 0.2):
- self.assertEqual(repr(df), ' 0 1\n0 0.0 0.5\n1 0.5 0.0')
+ assert repr(df) == ' 0 1\n0 0.0 0.5\n1 0.5 0.0'
with option_context("display.chop_threshold", 0.6):
- self.assertEqual(repr(df), ' 0 1\n0 0.0 0.0\n1 0.0 0.0')
+ assert repr(df) == ' 0 1\n0 0.0 0.0\n1 0.0 0.0'
with option_context("display.chop_threshold", None):
- self.assertEqual(repr(df), ' 0 1\n0 0.1 0.5\n1 0.5 -0.1')
+ assert repr(df) == ' 0 1\n0 0.1 0.5\n1 0.5 -0.1'
def test_repr_obeys_max_seq_limit(self):
with option_context("display.max_seq_items", 2000):
@@ -215,7 +215,7 @@ def test_repr_obeys_max_seq_limit(self):
assert len(printing.pprint_thing(lrange(1000))) < 100
def test_repr_set(self):
- self.assertEqual(printing.pprint_thing(set([1])), '{1}')
+ assert printing.pprint_thing(set([1])) == '{1}'
def test_repr_is_valid_construction_code(self):
        # for the case of Index, where the repr is traditional rather than
@@ -389,7 +389,7 @@ def test_to_string_repr_unicode(self):
except:
pass
if not line.startswith('dtype:'):
- self.assertEqual(len(line), line_len)
+ assert len(line) == line_len
# it works even if sys.stdin in None
_stdin = sys.stdin
@@ -441,11 +441,11 @@ def test_to_string_with_formatters(self):
('object', lambda x: '-%s-' % str(x))]
result = df.to_string(formatters=dict(formatters))
result2 = df.to_string(formatters=lzip(*formatters)[1])
- self.assertEqual(result, (' int float object\n'
- '0 0x1 [ 1.0] -(1, 2)-\n'
- '1 0x2 [ 2.0] -True-\n'
- '2 0x3 [ 3.0] -False-'))
- self.assertEqual(result, result2)
+ assert result == (' int float object\n'
+ '0 0x1 [ 1.0] -(1, 2)-\n'
+ '1 0x2 [ 2.0] -True-\n'
+ '2 0x3 [ 3.0] -False-')
+ assert result == result2
def test_to_string_with_datetime64_monthformatter(self):
months = [datetime(2016, 1, 1), datetime(2016, 2, 2)]
@@ -455,7 +455,7 @@ def format_func(x):
return x.strftime('%Y-%m')
result = x.to_string(formatters={'months': format_func})
expected = 'months\n0 2016-01\n1 2016-02'
- self.assertEqual(result.strip(), expected)
+ assert result.strip() == expected
def test_to_string_with_datetime64_hourformatter(self):
@@ -467,12 +467,12 @@ def format_func(x):
result = x.to_string(formatters={'hod': format_func})
expected = 'hod\n0 10:10\n1 12:12'
- self.assertEqual(result.strip(), expected)
+ assert result.strip() == expected
def test_to_string_with_formatters_unicode(self):
df = DataFrame({u('c/\u03c3'): [1, 2, 3]})
result = df.to_string(formatters={u('c/\u03c3'): lambda x: '%s' % x})
- self.assertEqual(result, u(' c/\u03c3\n') + '0 1\n1 2\n2 3')
+ assert result == u(' c/\u03c3\n') + '0 1\n1 2\n2 3'
def test_east_asian_unicode_frame(self):
if PY3:
@@ -489,7 +489,7 @@ def test_east_asian_unicode_frame(self):
expected = (u" a b\na あ 1\n"
u"bb いいい 222\nc う 33333\n"
u"ddd ええええええ 4")
- self.assertEqual(_rep(df), expected)
+ assert _rep(df) == expected
# last col
df = DataFrame({'a': [1, 222, 33333, 4],
@@ -498,7 +498,7 @@ def test_east_asian_unicode_frame(self):
expected = (u" a b\na 1 あ\n"
u"bb 222 いいい\nc 33333 う\n"
u"ddd 4 ええええええ")
- self.assertEqual(_rep(df), expected)
+ assert _rep(df) == expected
# all col
df = DataFrame({'a': [u'あああああ', u'い', u'う', u'えええ'],
@@ -507,7 +507,7 @@ def test_east_asian_unicode_frame(self):
expected = (u" a b\na あああああ あ\n"
u"bb い いいい\nc う う\n"
u"ddd えええ ええええええ")
- self.assertEqual(_rep(df), expected)
+ assert _rep(df) == expected
# column name
df = DataFrame({u'あああああ': [1, 222, 33333, 4],
@@ -516,7 +516,7 @@ def test_east_asian_unicode_frame(self):
expected = (u" b あああああ\na あ 1\n"
u"bb いいい 222\nc う 33333\n"
u"ddd ええええええ 4")
- self.assertEqual(_rep(df), expected)
+ assert _rep(df) == expected
# index
df = DataFrame({'a': [u'あああああ', u'い', u'う', u'えええ'],
@@ -525,7 +525,7 @@ def test_east_asian_unicode_frame(self):
expected = (u" a b\nあああ あああああ あ\n"
u"いいいいいい い いいい\nうう う う\n"
u"え えええ ええええええ")
- self.assertEqual(_rep(df), expected)
+ assert _rep(df) == expected
# index name
df = DataFrame({'a': [u'あああああ', u'い', u'う', u'えええ'],
@@ -538,7 +538,7 @@ def test_east_asian_unicode_frame(self):
u"い い いいい\n"
u"うう う う\n"
u"え えええ ええええええ")
- self.assertEqual(_rep(df), expected)
+ assert _rep(df) == expected
# all
df = DataFrame({u'あああ': [u'あああ', u'い', u'う', u'えええええ'],
@@ -551,7 +551,7 @@ def test_east_asian_unicode_frame(self):
u"いいい い いいい\n"
u"うう う う\n"
u"え えええええ ええ")
- self.assertEqual(_rep(df), expected)
+ assert _rep(df) == expected
# MultiIndex
idx = pd.MultiIndex.from_tuples([(u'あ', u'いい'), (u'う', u'え'), (
@@ -564,7 +564,7 @@ def test_east_asian_unicode_frame(self):
u"う え い いいい\n"
u"おおお かかかか う う\n"
u"き くく えええ ええええええ")
- self.assertEqual(_rep(df), expected)
+ assert _rep(df) == expected
# truncate
with option_context('display.max_rows', 3, 'display.max_columns', 3):
@@ -577,13 +577,13 @@ def test_east_asian_unicode_frame(self):
expected = (u" a ... ああああ\n0 あああああ ... さ\n"
u".. ... ... ...\n3 えええ ... せ\n"
u"\n[4 rows x 4 columns]")
- self.assertEqual(_rep(df), expected)
+ assert _rep(df) == expected
df.index = [u'あああ', u'いいいい', u'う', 'aaa']
expected = (u" a ... ああああ\nあああ あああああ ... さ\n"
u".. ... ... ...\naaa えええ ... せ\n"
u"\n[4 rows x 4 columns]")
- self.assertEqual(_rep(df), expected)
+ assert _rep(df) == expected
        # Enable Unicode option -----------------------------------------
with option_context('display.unicode.east_asian_width', True):
@@ -595,7 +595,7 @@ def test_east_asian_unicode_frame(self):
expected = (u" a b\na あ 1\n"
u"bb いいい 222\nc う 33333\n"
u"ddd ええええええ 4")
- self.assertEqual(_rep(df), expected)
+ assert _rep(df) == expected
# last col
df = DataFrame({'a': [1, 222, 33333, 4],
@@ -604,7 +604,7 @@ def test_east_asian_unicode_frame(self):
expected = (u" a b\na 1 あ\n"
u"bb 222 いいい\nc 33333 う\n"
u"ddd 4 ええええええ")
- self.assertEqual(_rep(df), expected)
+ assert _rep(df) == expected
# all col
df = DataFrame({'a': [u'あああああ', u'い', u'う', u'えええ'],
@@ -615,7 +615,7 @@ def test_east_asian_unicode_frame(self):
u"bb い いいい\n"
u"c う う\n"
u"ddd えええ ええええええ")
- self.assertEqual(_rep(df), expected)
+ assert _rep(df) == expected
# column name
df = DataFrame({u'あああああ': [1, 222, 33333, 4],
@@ -626,7 +626,7 @@ def test_east_asian_unicode_frame(self):
u"bb いいい 222\n"
u"c う 33333\n"
u"ddd ええええええ 4")
- self.assertEqual(_rep(df), expected)
+ assert _rep(df) == expected
# index
df = DataFrame({'a': [u'あああああ', u'い', u'う', u'えええ'],
@@ -637,7 +637,7 @@ def test_east_asian_unicode_frame(self):
u"いいいいいい い いいい\n"
u"うう う う\n"
u"え えええ ええええええ")
- self.assertEqual(_rep(df), expected)
+ assert _rep(df) == expected
# index name
df = DataFrame({'a': [u'あああああ', u'い', u'う', u'えええ'],
@@ -650,7 +650,7 @@ def test_east_asian_unicode_frame(self):
u"い い いいい\n"
u"うう う う\n"
u"え えええ ええええええ")
- self.assertEqual(_rep(df), expected)
+ assert _rep(df) == expected
# all
df = DataFrame({u'あああ': [u'あああ', u'い', u'う', u'えええええ'],
@@ -663,7 +663,7 @@ def test_east_asian_unicode_frame(self):
u"いいい い いいい\n"
u"うう う う\n"
u"え えええええ ええ")
- self.assertEqual(_rep(df), expected)
+ assert _rep(df) == expected
# MultiIndex
idx = pd.MultiIndex.from_tuples([(u'あ', u'いい'), (u'う', u'え'), (
@@ -676,7 +676,7 @@ def test_east_asian_unicode_frame(self):
u"う え い いいい\n"
u"おおお かかかか う う\n"
u"き くく えええ ええええええ")
- self.assertEqual(_rep(df), expected)
+ assert _rep(df) == expected
# truncate
with option_context('display.max_rows', 3, 'display.max_columns',
@@ -693,7 +693,7 @@ def test_east_asian_unicode_frame(self):
u".. ... ... ...\n"
u"3 えええ ... せ\n"
u"\n[4 rows x 4 columns]")
- self.assertEqual(_rep(df), expected)
+ assert _rep(df) == expected
df.index = [u'あああ', u'いいいい', u'う', 'aaa']
expected = (u" a ... ああああ\n"
@@ -701,7 +701,7 @@ def test_east_asian_unicode_frame(self):
u"... ... ... ...\n"
u"aaa えええ ... せ\n"
u"\n[4 rows x 4 columns]")
- self.assertEqual(_rep(df), expected)
+ assert _rep(df) == expected
# ambiguous unicode
df = DataFrame({u'あああああ': [1, 222, 33333, 4],
@@ -712,7 +712,7 @@ def test_east_asian_unicode_frame(self):
u"bb いいい 222\n"
u"c ¡¡ 33333\n"
u"¡¡¡ ええええええ 4")
- self.assertEqual(_rep(df), expected)
+ assert _rep(df) == expected
def test_to_string_buffer_all_unicode(self):
buf = StringIO()
@@ -738,7 +738,7 @@ def test_to_string_with_col_space(self):
with_header = df.to_string(col_space=20)
with_header_row1 = with_header.splitlines()[1]
no_header = df.to_string(col_space=20, header=False)
- self.assertEqual(len(with_header_row1), len(no_header))
+ assert len(with_header_row1) == len(no_header)
def test_to_string_truncate_indices(self):
for index in [tm.makeStringIndex, tm.makeUnicodeIndex, tm.makeIntIndex,
@@ -825,7 +825,7 @@ def test_datetimelike_frame(self):
'8 NaT 9\n'
'9 NaT 10\n\n'
'[10 rows x 2 columns]')
- self.assertEqual(repr(df), expected)
+ assert repr(df) == expected
dts = [pd.NaT] * 5 + [pd.Timestamp('2011-01-01', tz='US/Eastern')] * 5
df = pd.DataFrame({"dt": dts,
@@ -838,7 +838,7 @@ def test_datetimelike_frame(self):
'8 2011-01-01 00:00:00-05:00 9\n'
'9 2011-01-01 00:00:00-05:00 10\n\n'
'[10 rows x 2 columns]')
- self.assertEqual(repr(df), expected)
+ assert repr(df) == expected
dts = ([pd.Timestamp('2011-01-01', tz='Asia/Tokyo')] * 5 +
[pd.Timestamp('2011-01-01', tz='US/Eastern')] * 5)
@@ -852,13 +852,13 @@ def test_datetimelike_frame(self):
'8 2011-01-01 00:00:00-05:00 9\n'
'9 2011-01-01 00:00:00-05:00 10\n\n'
'[10 rows x 2 columns]')
- self.assertEqual(repr(df), expected)
+ assert repr(df) == expected
def test_nonunicode_nonascii_alignment(self):
df = DataFrame([["aa\xc3\xa4\xc3\xa4", 1], ["bbbb", 2]])
rep_str = df.to_string()
lines = rep_str.split('\n')
- self.assertEqual(len(lines[1]), len(lines[2]))
+ assert len(lines[1]) == len(lines[2])
def test_unicode_problem_decoding_as_ascii(self):
dm = DataFrame({u('c/\u03c3'): Series({'test': np.nan})})
@@ -890,25 +890,21 @@ def test_pprint_thing(self):
if PY3:
pytest.skip("doesn't work on Python 3")
- self.assertEqual(pp_t('a'), u('a'))
- self.assertEqual(pp_t(u('a')), u('a'))
- self.assertEqual(pp_t(None), 'None')
- self.assertEqual(pp_t(u('\u05d0'), quote_strings=True), u("u'\u05d0'"))
- self.assertEqual(pp_t(u('\u05d0'), quote_strings=False), u('\u05d0'))
- self.assertEqual(pp_t((u('\u05d0'),
- u('\u05d1')), quote_strings=True),
- u("(u'\u05d0', u'\u05d1')"))
- self.assertEqual(pp_t((u('\u05d0'), (u('\u05d1'),
- u('\u05d2'))),
- quote_strings=True),
- u("(u'\u05d0', (u'\u05d1', u'\u05d2'))"))
- self.assertEqual(pp_t(('foo', u('\u05d0'), (u('\u05d0'),
- u('\u05d0'))),
- quote_strings=True),
- u("(u'foo', u'\u05d0', (u'\u05d0', u'\u05d0'))"))
-
- # escape embedded tabs in string
- # GH #2038
+ assert pp_t('a') == u('a')
+ assert pp_t(u('a')) == u('a')
+ assert pp_t(None) == 'None'
+ assert pp_t(u('\u05d0'), quote_strings=True) == u("u'\u05d0'")
+ assert pp_t(u('\u05d0'), quote_strings=False) == u('\u05d0')
+ assert (pp_t((u('\u05d0'), u('\u05d1')), quote_strings=True) ==
+ u("(u'\u05d0', u'\u05d1')"))
+ assert (pp_t((u('\u05d0'), (u('\u05d1'), u('\u05d2'))),
+ quote_strings=True) == u("(u'\u05d0', "
+ "(u'\u05d1', u'\u05d2'))"))
+ assert (pp_t(('foo', u('\u05d0'), (u('\u05d0'), u('\u05d0'))),
+ quote_strings=True) == u("(u'foo', u'\u05d0', "
+ "(u'\u05d0', u'\u05d0'))"))
+
+ # gh-2038: escape embedded tabs in string
assert "\t" not in pp_t("a\tb", escape_chars=("\t", ))
def test_wide_repr(self):
@@ -936,7 +932,7 @@ def test_wide_repr_wide_columns(self):
columns=['a' * 90, 'b' * 90, 'c' * 90])
rep_str = repr(df)
- self.assertEqual(len(rep_str.splitlines()), 20)
+ assert len(rep_str.splitlines()) == 20
def test_wide_repr_named(self):
with option_context('mode.sim_interactive', True):
@@ -1036,7 +1032,7 @@ def test_long_series(self):
import re
str_rep = str(s)
nmatches = len(re.findall('dtype', str_rep))
- self.assertEqual(nmatches, 1)
+ assert nmatches == 1
def test_index_with_nan(self):
# GH 2850
@@ -1055,7 +1051,7 @@ def test_index_with_nan(self):
expected = u(
' value\nid1 id2 id3 \n'
'1a3 NaN 78d 123\n9h4 d67 79d 64')
- self.assertEqual(result, expected)
+ assert result == expected
# index
y = df.set_index('id2')
@@ -1063,7 +1059,7 @@ def test_index_with_nan(self):
expected = u(
' id1 id3 value\nid2 \n'
'NaN 1a3 78d 123\nd67 9h4 79d 64')
- self.assertEqual(result, expected)
+ assert result == expected
# with append (this failed in 0.12)
y = df.set_index(['id1', 'id2']).set_index('id3', append=True)
@@ -1071,7 +1067,7 @@ def test_index_with_nan(self):
expected = u(
' value\nid1 id2 id3 \n'
'1a3 NaN 78d 123\n9h4 d67 79d 64')
- self.assertEqual(result, expected)
+ assert result == expected
# all-nan in mi
df2 = df.copy()
@@ -1081,7 +1077,7 @@ def test_index_with_nan(self):
expected = u(
' id1 id3 value\nid2 \n'
'NaN 1a3 78d 123\nNaN 9h4 79d 64')
- self.assertEqual(result, expected)
+ assert result == expected
# partial nan in mi
df2 = df.copy()
@@ -1091,7 +1087,7 @@ def test_index_with_nan(self):
expected = u(
' id1 value\nid2 id3 \n'
'NaN 78d 1a3 123\n 79d 9h4 64')
- self.assertEqual(result, expected)
+ assert result == expected
df = DataFrame({'id1': {0: np.nan,
1: '9h4'},
@@ -1107,7 +1103,7 @@ def test_index_with_nan(self):
expected = u(
' value\nid1 id2 id3 \n'
'NaN NaN NaN 123\n9h4 d67 79d 64')
- self.assertEqual(result, expected)
+ assert result == expected
def test_to_string(self):
@@ -1123,7 +1119,7 @@ def test_to_string(self):
buf = StringIO()
retval = biggie.to_string(buf=buf)
assert retval is None
- self.assertEqual(buf.getvalue(), s)
+ assert buf.getvalue() == s
assert isinstance(s, compat.string_types)
@@ -1136,17 +1132,17 @@ def test_to_string(self):
recons = read_table(StringIO(joined), names=header,
header=None, sep=' ')
tm.assert_series_equal(recons['B'], biggie['B'])
- self.assertEqual(recons['A'].count(), biggie['A'].count())
+ assert recons['A'].count() == biggie['A'].count()
assert (np.abs(recons['A'].dropna() -
biggie['A'].dropna()) < 0.1).all()
# expected = ['B', 'A']
- # self.assertEqual(header, expected)
+ # assert header == expected
result = biggie.to_string(columns=['A'], col_space=17)
header = result.split('\n')[0].strip().split()
expected = ['A']
- self.assertEqual(header, expected)
+ assert header == expected
biggie.to_string(columns=['B', 'A'],
formatters={'A': lambda x: '%.1f' % x})
@@ -1163,7 +1159,7 @@ def test_to_string_no_header(self):
df_s = df.to_string(header=False)
expected = "0 1 4\n1 2 5\n2 3 6"
- self.assertEqual(df_s, expected)
+ assert df_s == expected
def test_to_string_specified_header(self):
df = DataFrame({'x': [1, 2, 3], 'y': [4, 5, 6]})
@@ -1171,7 +1167,7 @@ def test_to_string_specified_header(self):
df_s = df.to_string(header=['X', 'Y'])
expected = ' X Y\n0 1 4\n1 2 5\n2 3 6'
- self.assertEqual(df_s, expected)
+ assert df_s == expected
with pytest.raises(ValueError):
df.to_string(header=['X'])
@@ -1182,7 +1178,7 @@ def test_to_string_no_index(self):
df_s = df.to_string(index=False)
expected = "x y\n1 4\n2 5\n3 6"
- self.assertEqual(df_s, expected)
+ assert df_s == expected
def test_to_string_line_width_no_index(self):
df = DataFrame({'x': [1, 2, 3], 'y': [4, 5, 6]})
@@ -1190,7 +1186,7 @@ def test_to_string_line_width_no_index(self):
df_s = df.to_string(line_width=1, index=False)
expected = "x \\\n1 \n2 \n3 \n\ny \n4 \n5 \n6"
- self.assertEqual(df_s, expected)
+ assert df_s == expected
def test_to_string_float_formatting(self):
tm.reset_display_options()
@@ -1214,16 +1210,16 @@ def test_to_string_float_formatting(self):
'2 3.45600e+03\n3 1.20000e+46\n4 1.64000e+06\n'
'5 1.70000e+08\n6 1.25346e+00\n7 3.14159e+00\n'
'8 -1.00000e+06')
- self.assertEqual(df_s, expected)
+ assert df_s == expected
df = DataFrame({'x': [3234, 0.253]})
df_s = df.to_string()
expected = (' x\n' '0 3234.000\n' '1 0.253')
- self.assertEqual(df_s, expected)
+ assert df_s == expected
tm.reset_display_options()
- self.assertEqual(get_option("display.precision"), 6)
+ assert get_option("display.precision") == 6
df = DataFrame({'x': [1e9, 0.2512]})
df_s = df.to_string()
@@ -1237,7 +1233,7 @@ def test_to_string_float_formatting(self):
expected = (' x\n'
'0 1.000000e+09\n'
'1 2.512000e-01')
- self.assertEqual(df_s, expected)
+ assert df_s == expected
def test_to_string_small_float_values(self):
df = DataFrame({'a': [1.5, 1e-17, -5.5e-7]})
@@ -1254,7 +1250,7 @@ def test_to_string_small_float_values(self):
'0 1.500000e+00\n'
'1 1.000000e-17\n'
'2 -5.500000e-07')
- self.assertEqual(result, expected)
+ assert result == expected
# but not all exactly zero
df = df * 0
@@ -1272,7 +1268,7 @@ def test_to_string_float_index(self):
'3.0 2\n'
'4.0 3\n'
'5.0 4')
- self.assertEqual(result, expected)
+ assert result == expected
def test_to_string_ascii_error(self):
data = [('0 ', u(' .gitignore '), u(' 5 '),
@@ -1289,7 +1285,7 @@ def test_to_string_int_formatting(self):
output = df.to_string()
expected = (' x\n' '0 -15\n' '1 20\n' '2 25\n' '3 -35')
- self.assertEqual(output, expected)
+ assert output == expected
def test_to_string_index_formatter(self):
df = DataFrame([lrange(5), lrange(5, 10), lrange(10, 15)])
@@ -1303,14 +1299,14 @@ def test_to_string_index_formatter(self):
c 10 11 12 13 14\
"""
- self.assertEqual(rs, xp)
+ assert rs == xp
def test_to_string_left_justify_cols(self):
tm.reset_display_options()
df = DataFrame({'x': [3234, 0.253]})
df_s = df.to_string(justify='left')
expected = (' x \n' '0 3234.000\n' '1 0.253')
- self.assertEqual(df_s, expected)
+ assert df_s == expected
def test_to_string_format_na(self):
tm.reset_display_options()
@@ -1324,7 +1320,7 @@ def test_to_string_format_na(self):
'2 -2.1234 foooo\n'
'3 3.0000 fooooo\n'
'4 4.0000 bar')
- self.assertEqual(result, expected)
+ assert result == expected
df = DataFrame({'A': [np.nan, -1., -2., 3., 4.],
'B': [np.nan, 'foo', 'foooo', 'fooooo', 'bar']})
@@ -1336,12 +1332,12 @@ def test_to_string_format_na(self):
'2 -2.0 foooo\n'
'3 3.0 fooooo\n'
'4 4.0 bar')
- self.assertEqual(result, expected)
+ assert result == expected
def test_to_string_line_width(self):
df = DataFrame(123, lrange(10, 15), lrange(30))
s = df.to_string(line_width=80)
- self.assertEqual(max(len(l) for l in s.split('\n')), 80)
+ assert max(len(l) for l in s.split('\n')) == 80
def test_show_dimensions(self):
df = DataFrame(123, lrange(10, 15), lrange(30))
@@ -1596,7 +1592,7 @@ def test_period(self):
exp = (" A B C\n0 2013-01 2011-01 a\n"
"1 2013-02 2011-02-01 b\n2 2013-03 2011-03-01 09:00 c\n"
"3 2013-04 2011-04 d")
- self.assertEqual(str(df), exp)
+ assert str(df) == exp
def gen_series_formatting():
@@ -1628,30 +1624,29 @@ def test_to_string(self):
retval = self.ts.to_string(buf=buf)
assert retval is None
- self.assertEqual(buf.getvalue().strip(), s)
+ assert buf.getvalue().strip() == s
# pass float_format
format = '%.4f'.__mod__
result = self.ts.to_string(float_format=format)
result = [x.split()[1] for x in result.split('\n')[:-1]]
expected = [format(x) for x in self.ts]
- self.assertEqual(result, expected)
+ assert result == expected
# empty string
result = self.ts[:0].to_string()
- self.assertEqual(result, 'Series([], Freq: B)')
+ assert result == 'Series([], Freq: B)'
result = self.ts[:0].to_string(length=0)
- self.assertEqual(result, 'Series([], Freq: B)')
+ assert result == 'Series([], Freq: B)'
# name and length
cp = self.ts.copy()
cp.name = 'foo'
result = cp.to_string(length=True, name=True, dtype=True)
last_line = result.split('\n')[-1].strip()
- self.assertEqual(last_line,
- "Freq: B, Name: foo, Length: %d, dtype: float64" %
- len(cp))
+ assert last_line == ("Freq: B, Name: foo, "
+ "Length: %d, dtype: float64" % len(cp))
def test_freq_name_separation(self):
s = Series(np.random.randn(10),
@@ -1665,18 +1660,18 @@ def test_to_string_mixed(self):
result = s.to_string()
expected = (u('0 foo\n') + u('1 NaN\n') + u('2 -1.23\n') +
u('3 4.56'))
- self.assertEqual(result, expected)
+ assert result == expected
# but don't count NAs as floats
s = Series(['foo', np.nan, 'bar', 'baz'])
result = s.to_string()
expected = (u('0 foo\n') + '1 NaN\n' + '2 bar\n' + '3 baz')
- self.assertEqual(result, expected)
+ assert result == expected
s = Series(['foo', 5, 'bar', 'baz'])
result = s.to_string()
expected = (u('0 foo\n') + '1 5\n' + '2 bar\n' + '3 baz')
- self.assertEqual(result, expected)
+ assert result == expected
def test_to_string_float_na_spacing(self):
s = Series([0., 1.5678, 2., -3., 4.])
@@ -1685,14 +1680,14 @@ def test_to_string_float_na_spacing(self):
result = s.to_string()
expected = (u('0 NaN\n') + '1 1.5678\n' + '2 NaN\n' +
'3 -3.0000\n' + '4 NaN')
- self.assertEqual(result, expected)
+ assert result == expected
def test_to_string_without_index(self):
# GH 11729 Test index=False option
s = Series([1, 2, 3, 4])
result = s.to_string(index=False)
expected = (u('1\n') + '2\n' + '3\n' + '4')
- self.assertEqual(result, expected)
+ assert result == expected
def test_unicode_name_in_footer(self):
s = Series([1, 2], name=u('\u05e2\u05d1\u05e8\u05d9\u05ea'))
@@ -1711,21 +1706,21 @@ def test_east_asian_unicode_series(self):
index=[u'あ', u'いい', u'ううう', u'ええええ'])
expected = (u"あ a\nいい bb\nううう CCC\n"
u"ええええ D\ndtype: object")
- self.assertEqual(_rep(s), expected)
+ assert _rep(s) == expected
# unicode values
s = Series([u'あ', u'いい', u'ううう', u'ええええ'],
index=['a', 'bb', 'c', 'ddd'])
expected = (u"a あ\nbb いい\nc ううう\n"
u"ddd ええええ\ndtype: object")
- self.assertEqual(_rep(s), expected)
+ assert _rep(s) == expected
# both
s = Series([u'あ', u'いい', u'ううう', u'ええええ'],
index=[u'ああ', u'いいいい', u'う', u'えええ'])
expected = (u"ああ あ\nいいいい いい\nう ううう\n"
u"えええ ええええ\ndtype: object")
- self.assertEqual(_rep(s), expected)
+ assert _rep(s) == expected
# unicode footer
s = Series([u'あ', u'いい', u'ううう', u'ええええ'],
@@ -1733,7 +1728,7 @@ def test_east_asian_unicode_series(self):
name=u'おおおおおおお')
expected = (u"ああ あ\nいいいい いい\nう ううう\n"
u"えええ ええええ\nName: おおおおおおお, dtype: object")
- self.assertEqual(_rep(s), expected)
+ assert _rep(s) == expected
# MultiIndex
idx = pd.MultiIndex.from_tuples([(u'あ', u'いい'), (u'う', u'え'), (
@@ -1743,13 +1738,13 @@ def test_east_asian_unicode_series(self):
u"う え 22\n"
u"おおお かかかか 3333\n"
u"き くく 44444\ndtype: int64")
- self.assertEqual(_rep(s), expected)
+ assert _rep(s) == expected
# object dtype, shorter than unicode repr
s = Series([1, 22, 3333, 44444], index=[1, 'AB', np.nan, u'あああ'])
expected = (u"1 1\nAB 22\nNaN 3333\n"
u"あああ 44444\ndtype: int64")
- self.assertEqual(_rep(s), expected)
+ assert _rep(s) == expected
# object dtype, longer than unicode repr
s = Series([1, 22, 3333, 44444],
@@ -1758,7 +1753,7 @@ def test_east_asian_unicode_series(self):
u"AB 22\n"
u"2011-01-01 00:00:00 3333\n"
u"あああ 44444\ndtype: int64")
- self.assertEqual(_rep(s), expected)
+ assert _rep(s) == expected
# truncate
with option_context('display.max_rows', 3):
@@ -1768,13 +1763,13 @@ def test_east_asian_unicode_series(self):
expected = (u"0 あ\n ... \n"
u"3 ええええ\n"
u"Name: おおおおおおお, Length: 4, dtype: object")
- self.assertEqual(_rep(s), expected)
+ assert _rep(s) == expected
s.index = [u'ああ', u'いいいい', u'う', u'えええ']
expected = (u"ああ あ\n ... \n"
u"えええ ええええ\n"
u"Name: おおおおおおお, Length: 4, dtype: object")
- self.assertEqual(_rep(s), expected)
+ assert _rep(s) == expected
# Enable Unicode option -----------------------------------------
with option_context('display.unicode.east_asian_width', True):
@@ -1784,14 +1779,14 @@ def test_east_asian_unicode_series(self):
index=[u'あ', u'いい', u'ううう', u'ええええ'])
expected = (u"あ a\nいい bb\nううう CCC\n"
u"ええええ D\ndtype: object")
- self.assertEqual(_rep(s), expected)
+ assert _rep(s) == expected
# unicode values
s = Series([u'あ', u'いい', u'ううう', u'ええええ'],
index=['a', 'bb', 'c', 'ddd'])
expected = (u"a あ\nbb いい\nc ううう\n"
u"ddd ええええ\ndtype: object")
- self.assertEqual(_rep(s), expected)
+ assert _rep(s) == expected
# both
s = Series([u'あ', u'いい', u'ううう', u'ええええ'],
@@ -1800,7 +1795,7 @@ def test_east_asian_unicode_series(self):
u"いいいい いい\n"
u"う ううう\n"
u"えええ ええええ\ndtype: object")
- self.assertEqual(_rep(s), expected)
+ assert _rep(s) == expected
# unicode footer
s = Series([u'あ', u'いい', u'ううう', u'ええええ'],
@@ -1811,7 +1806,7 @@ def test_east_asian_unicode_series(self):
u"う ううう\n"
u"えええ ええええ\n"
u"Name: おおおおおおお, dtype: object")
- self.assertEqual(_rep(s), expected)
+ assert _rep(s) == expected
# MultiIndex
idx = pd.MultiIndex.from_tuples([(u'あ', u'いい'), (u'う', u'え'), (
@@ -1822,13 +1817,13 @@ def test_east_asian_unicode_series(self):
u"おおお かかかか 3333\n"
u"き くく 44444\n"
u"dtype: int64")
- self.assertEqual(_rep(s), expected)
+ assert _rep(s) == expected
# object dtype, shorter than unicode repr
s = Series([1, 22, 3333, 44444], index=[1, 'AB', np.nan, u'あああ'])
expected = (u"1 1\nAB 22\nNaN 3333\n"
u"あああ 44444\ndtype: int64")
- self.assertEqual(_rep(s), expected)
+ assert _rep(s) == expected
# object dtype, longer than unicode repr
s = Series([1, 22, 3333, 44444],
@@ -1837,7 +1832,7 @@ def test_east_asian_unicode_series(self):
u"AB 22\n"
u"2011-01-01 00:00:00 3333\n"
u"あああ 44444\ndtype: int64")
- self.assertEqual(_rep(s), expected)
+ assert _rep(s) == expected
# truncate
with option_context('display.max_rows', 3):
@@ -1846,14 +1841,14 @@ def test_east_asian_unicode_series(self):
expected = (u"0 あ\n ... \n"
u"3 ええええ\n"
u"Name: おおおおおおお, Length: 4, dtype: object")
- self.assertEqual(_rep(s), expected)
+ assert _rep(s) == expected
s.index = [u'ああ', u'いいいい', u'う', u'えええ']
expected = (u"ああ あ\n"
u" ... \n"
u"えええ ええええ\n"
u"Name: おおおおおおお, Length: 4, dtype: object")
- self.assertEqual(_rep(s), expected)
+ assert _rep(s) == expected
# ambiguous unicode
s = Series([u'¡¡', u'い¡¡', u'ううう', u'ええええ'],
@@ -1862,7 +1857,7 @@ def test_east_asian_unicode_series(self):
u"¡¡¡¡いい い¡¡\n"
u"¡¡ ううう\n"
u"えええ ええええ\ndtype: object")
- self.assertEqual(_rep(s), expected)
+ assert _rep(s) == expected
def test_float_trim_zeros(self):
vals = [2.08430917305e+10, 3.52205017305e+10, 2.30674817305e+10,
@@ -1950,7 +1945,7 @@ def test_timedelta64(self):
# no boxing of the actual elements
td = Series(pd.timedelta_range('1 days', periods=3))
result = td.to_string()
- self.assertEqual(result, u("0 1 days\n1 2 days\n2 3 days"))
+ assert result == u("0 1 days\n1 2 days\n2 3 days")
def test_mixed_datetime64(self):
df = DataFrame({'A': [1, 2], 'B': ['2012-01-01', '2012-01-02']})
@@ -1965,12 +1960,12 @@ def test_period(self):
s = Series(np.arange(6, dtype='int64'), index=index)
exp = ("2013-01 0\n2013-02 1\n2013-03 2\n2013-04 3\n"
"2013-05 4\n2013-06 5\nFreq: M, dtype: int64")
- self.assertEqual(str(s), exp)
+ assert str(s) == exp
s = Series(index)
exp = ("0 2013-01\n1 2013-02\n2 2013-03\n3 2013-04\n"
"4 2013-05\n5 2013-06\ndtype: object")
- self.assertEqual(str(s), exp)
+ assert str(s) == exp
# periods with mixed freq
s = Series([pd.Period('2011-01', freq='M'),
@@ -1978,7 +1973,7 @@ def test_period(self):
pd.Period('2011-03-01 09:00', freq='H')])
exp = ("0 2011-01\n1 2011-02-01\n"
"2 2011-03-01 09:00\ndtype: object")
- self.assertEqual(str(s), exp)
+ assert str(s) == exp
def test_max_multi_index_display(self):
# GH 7101
@@ -1993,29 +1988,29 @@ def test_max_multi_index_display(self):
s = Series(np.random.randn(8), index=index)
with option_context("display.max_rows", 10):
- self.assertEqual(len(str(s).split('\n')), 10)
+ assert len(str(s).split('\n')) == 10
with option_context("display.max_rows", 3):
- self.assertEqual(len(str(s).split('\n')), 5)
+ assert len(str(s).split('\n')) == 5
with option_context("display.max_rows", 2):
- self.assertEqual(len(str(s).split('\n')), 5)
+ assert len(str(s).split('\n')) == 5
with option_context("display.max_rows", 1):
- self.assertEqual(len(str(s).split('\n')), 4)
+ assert len(str(s).split('\n')) == 4
with option_context("display.max_rows", 0):
- self.assertEqual(len(str(s).split('\n')), 10)
+ assert len(str(s).split('\n')) == 10
# index
s = Series(np.random.randn(8), None)
with option_context("display.max_rows", 10):
- self.assertEqual(len(str(s).split('\n')), 9)
+ assert len(str(s).split('\n')) == 9
with option_context("display.max_rows", 3):
- self.assertEqual(len(str(s).split('\n')), 4)
+ assert len(str(s).split('\n')) == 4
with option_context("display.max_rows", 2):
- self.assertEqual(len(str(s).split('\n')), 4)
+ assert len(str(s).split('\n')) == 4
with option_context("display.max_rows", 1):
- self.assertEqual(len(str(s).split('\n')), 3)
+ assert len(str(s).split('\n')) == 3
with option_context("display.max_rows", 0):
- self.assertEqual(len(str(s).split('\n')), 9)
+ assert len(str(s).split('\n')) == 9
# Make sure #8532 is fixed
def test_consistent_format(self):
@@ -2027,7 +2022,7 @@ def test_consistent_format(self):
'1.0000\n4 1.0000\n ... \n125 '
'1.0000\n126 1.0000\n127 0.9999\n128 '
'1.0000\n129 1.0000\ndtype: float64')
- self.assertEqual(res, exp)
+ assert res == exp
def chck_ncols(self, s):
with option_context("display.max_rows", 10):
@@ -2036,7 +2031,7 @@ def chck_ncols(self, s):
lines = [line for line in repr(s).split('\n')
if not re.match(r'[^\.]*\.+', line)][:-1]
ncolsizes = len(set(len(line.strip()) for line in lines))
- self.assertEqual(ncolsizes, 1)
+ assert ncolsizes == 1
def test_format_explicit(self):
test_sers = gen_series_formatting()
@@ -2044,19 +2039,19 @@ def test_format_explicit(self):
"display.show_dimensions", False):
res = repr(test_sers['onel'])
exp = '0 a\n1 a\n ..\n98 a\n99 a\ndtype: object'
- self.assertEqual(exp, res)
+ assert exp == res
res = repr(test_sers['twol'])
exp = ('0 ab\n1 ab\n ..\n98 ab\n99 ab\ndtype:'
' object')
- self.assertEqual(exp, res)
+ assert exp == res
res = repr(test_sers['asc'])
exp = ('0 a\n1 ab\n ... \n4 abcde\n5'
' abcdef\ndtype: object')
- self.assertEqual(exp, res)
+ assert exp == res
res = repr(test_sers['desc'])
exp = ('5 abcdef\n4 abcde\n ... \n1 ab\n0'
' a\ndtype: object')
- self.assertEqual(exp, res)
+ assert exp == res
def test_ncols(self):
test_sers = gen_series_formatting()
@@ -2069,10 +2064,10 @@ def test_max_rows_eq_one(self):
strrepr = repr(s).split('\n')
exp1 = ['0', '0']
res1 = strrepr[0].split()
- self.assertEqual(exp1, res1)
+ assert exp1 == res1
exp2 = ['..']
res2 = strrepr[1].split()
- self.assertEqual(exp2, res2)
+ assert exp2 == res2
def test_truncate_ndots(self):
def getndots(s):
@@ -2081,12 +2076,12 @@ def getndots(s):
s = Series([0, 2, 3, 6])
with option_context("display.max_rows", 2):
strrepr = repr(s).replace('\n', '')
- self.assertEqual(getndots(strrepr), 2)
+ assert getndots(strrepr) == 2
s = Series([0, 100, 200, 400])
with option_context("display.max_rows", 2):
strrepr = repr(s).replace('\n', '')
- self.assertEqual(getndots(strrepr), 3)
+ assert getndots(strrepr) == 3
def test_show_dimensions(self):
# gh-7117
@@ -2109,48 +2104,48 @@ def test_to_string_name(self):
s.name = 'myser'
res = s.to_string(max_rows=2, name=True)
exp = '0 0\n ..\n99 99\nName: myser'
- self.assertEqual(res, exp)
+ assert res == exp
res = s.to_string(max_rows=2, name=False)
exp = '0 0\n ..\n99 99'
- self.assertEqual(res, exp)
+ assert res == exp
def test_to_string_dtype(self):
s = Series(range(100), dtype='int64')
res = s.to_string(max_rows=2, dtype=True)
exp = '0 0\n ..\n99 99\ndtype: int64'
- self.assertEqual(res, exp)
+ assert res == exp
res = s.to_string(max_rows=2, dtype=False)
exp = '0 0\n ..\n99 99'
- self.assertEqual(res, exp)
+ assert res == exp
def test_to_string_length(self):
s = Series(range(100), dtype='int64')
res = s.to_string(max_rows=2, length=True)
exp = '0 0\n ..\n99 99\nLength: 100'
- self.assertEqual(res, exp)
+ assert res == exp
def test_to_string_na_rep(self):
s = pd.Series(index=range(100))
res = s.to_string(na_rep='foo', max_rows=2)
exp = '0 foo\n ..\n99 foo'
- self.assertEqual(res, exp)
+ assert res == exp
def test_to_string_float_format(self):
s = pd.Series(range(10), dtype='float64')
res = s.to_string(float_format=lambda x: '{0:2.1f}'.format(x),
max_rows=2)
exp = '0 0.0\n ..\n9 9.0'
- self.assertEqual(res, exp)
+ assert res == exp
def test_to_string_header(self):
s = pd.Series(range(10), dtype='int64')
s.index.name = 'foo'
res = s.to_string(header=True, max_rows=2)
exp = 'foo\n0 0\n ..\n9 9'
- self.assertEqual(res, exp)
+ assert res == exp
res = s.to_string(header=False, max_rows=2)
exp = '0 0\n ..\n9 9'
- self.assertEqual(res, exp)
+ assert res == exp
def _three_digit_exp():
@@ -2167,8 +2162,8 @@ def test_misc(self):
def test_format(self):
obj = fmt.FloatArrayFormatter(np.array([12, 0], dtype=np.float64))
result = obj.get_result()
- self.assertEqual(result[0], " 12.0")
- self.assertEqual(result[1], " 0.0")
+ assert result[0] == " 12.0"
+ assert result[1] == " 0.0"
def test_output_significant_digits(self):
# Issue #9764
@@ -2228,7 +2223,7 @@ def test_output_significant_digits(self):
}
for (start, stop), v in expected_output.items():
- self.assertEqual(str(d[start:stop]), v)
+ assert str(d[start:stop]) == v
def test_too_long(self):
# GH 10451
@@ -2236,12 +2231,11 @@ def test_too_long(self):
# need both a number > 1e6 and something that normally formats to
# having length > display.precision + 6
df = pd.DataFrame(dict(x=[12345.6789]))
- self.assertEqual(str(df), ' x\n0 12345.6789')
+ assert str(df) == ' x\n0 12345.6789'
df = pd.DataFrame(dict(x=[2e6]))
- self.assertEqual(str(df), ' x\n0 2000000.0')
+ assert str(df) == ' x\n0 2000000.0'
df = pd.DataFrame(dict(x=[12345.6789, 2e6]))
- self.assertEqual(
- str(df), ' x\n0 1.2346e+04\n1 2.0000e+06')
+ assert str(df) == ' x\n0 1.2346e+04\n1 2.0000e+06'
class TestRepr_timedelta64(tm.TestCase):
@@ -2253,14 +2247,13 @@ def test_none(self):
delta_500ms = pd.to_timedelta(500, unit='ms')
drepr = lambda x: x._repr_base()
- self.assertEqual(drepr(delta_1d), "1 days")
- self.assertEqual(drepr(-delta_1d), "-1 days")
- self.assertEqual(drepr(delta_0d), "0 days")
- self.assertEqual(drepr(delta_1s), "0 days 00:00:01")
- self.assertEqual(drepr(delta_500ms), "0 days 00:00:00.500000")
- self.assertEqual(drepr(delta_1d + delta_1s), "1 days 00:00:01")
- self.assertEqual(
- drepr(delta_1d + delta_500ms), "1 days 00:00:00.500000")
+ assert drepr(delta_1d) == "1 days"
+ assert drepr(-delta_1d) == "-1 days"
+ assert drepr(delta_0d) == "0 days"
+ assert drepr(delta_1s) == "0 days 00:00:01"
+ assert drepr(delta_500ms) == "0 days 00:00:00.500000"
+ assert drepr(delta_1d + delta_1s) == "1 days 00:00:01"
+ assert drepr(delta_1d + delta_500ms) == "1 days 00:00:00.500000"
def test_even_day(self):
delta_1d = pd.to_timedelta(1, unit='D')
@@ -2269,14 +2262,13 @@ def test_even_day(self):
delta_500ms = pd.to_timedelta(500, unit='ms')
drepr = lambda x: x._repr_base(format='even_day')
- self.assertEqual(drepr(delta_1d), "1 days")
- self.assertEqual(drepr(-delta_1d), "-1 days")
- self.assertEqual(drepr(delta_0d), "0 days")
- self.assertEqual(drepr(delta_1s), "0 days 00:00:01")
- self.assertEqual(drepr(delta_500ms), "0 days 00:00:00.500000")
- self.assertEqual(drepr(delta_1d + delta_1s), "1 days 00:00:01")
- self.assertEqual(
- drepr(delta_1d + delta_500ms), "1 days 00:00:00.500000")
+ assert drepr(delta_1d) == "1 days"
+ assert drepr(-delta_1d) == "-1 days"
+ assert drepr(delta_0d) == "0 days"
+ assert drepr(delta_1s) == "0 days 00:00:01"
+ assert drepr(delta_500ms) == "0 days 00:00:00.500000"
+ assert drepr(delta_1d + delta_1s) == "1 days 00:00:01"
+ assert drepr(delta_1d + delta_500ms) == "1 days 00:00:00.500000"
def test_sub_day(self):
delta_1d = pd.to_timedelta(1, unit='D')
@@ -2285,14 +2277,13 @@ def test_sub_day(self):
delta_500ms = pd.to_timedelta(500, unit='ms')
drepr = lambda x: x._repr_base(format='sub_day')
- self.assertEqual(drepr(delta_1d), "1 days")
- self.assertEqual(drepr(-delta_1d), "-1 days")
- self.assertEqual(drepr(delta_0d), "00:00:00")
- self.assertEqual(drepr(delta_1s), "00:00:01")
- self.assertEqual(drepr(delta_500ms), "00:00:00.500000")
- self.assertEqual(drepr(delta_1d + delta_1s), "1 days 00:00:01")
- self.assertEqual(
- drepr(delta_1d + delta_500ms), "1 days 00:00:00.500000")
+ assert drepr(delta_1d) == "1 days"
+ assert drepr(-delta_1d) == "-1 days"
+ assert drepr(delta_0d) == "00:00:00"
+ assert drepr(delta_1s) == "00:00:01"
+ assert drepr(delta_500ms) == "00:00:00.500000"
+ assert drepr(delta_1d + delta_1s) == "1 days 00:00:01"
+ assert drepr(delta_1d + delta_500ms) == "1 days 00:00:00.500000"
def test_long(self):
delta_1d = pd.to_timedelta(1, unit='D')
@@ -2301,14 +2292,13 @@ def test_long(self):
delta_500ms = pd.to_timedelta(500, unit='ms')
drepr = lambda x: x._repr_base(format='long')
- self.assertEqual(drepr(delta_1d), "1 days 00:00:00")
- self.assertEqual(drepr(-delta_1d), "-1 days +00:00:00")
- self.assertEqual(drepr(delta_0d), "0 days 00:00:00")
- self.assertEqual(drepr(delta_1s), "0 days 00:00:01")
- self.assertEqual(drepr(delta_500ms), "0 days 00:00:00.500000")
- self.assertEqual(drepr(delta_1d + delta_1s), "1 days 00:00:01")
- self.assertEqual(
- drepr(delta_1d + delta_500ms), "1 days 00:00:00.500000")
+ assert drepr(delta_1d) == "1 days 00:00:00"
+ assert drepr(-delta_1d) == "-1 days +00:00:00"
+ assert drepr(delta_0d) == "0 days 00:00:00"
+ assert drepr(delta_1s) == "0 days 00:00:01"
+ assert drepr(delta_500ms) == "0 days 00:00:00.500000"
+ assert drepr(delta_1d + delta_1s) == "1 days 00:00:01"
+ assert drepr(delta_1d + delta_500ms) == "1 days 00:00:00.500000"
def test_all(self):
delta_1d = pd.to_timedelta(1, unit='D')
@@ -2316,9 +2306,9 @@ def test_all(self):
delta_1ns = pd.to_timedelta(1, unit='ns')
drepr = lambda x: x._repr_base(format='all')
- self.assertEqual(drepr(delta_1d), "1 days 00:00:00.000000000")
- self.assertEqual(drepr(delta_0d), "0 days 00:00:00.000000000")
- self.assertEqual(drepr(delta_1ns), "0 days 00:00:00.000000001")
+ assert drepr(delta_1d) == "1 days 00:00:00.000000000"
+ assert drepr(delta_0d) == "0 days 00:00:00.000000000"
+ assert drepr(delta_1ns) == "0 days 00:00:00.000000001"
class TestTimedelta64Formatter(tm.TestCase):
@@ -2326,45 +2316,45 @@ class TestTimedelta64Formatter(tm.TestCase):
def test_days(self):
x = pd.to_timedelta(list(range(5)) + [pd.NaT], unit='D')
result = fmt.Timedelta64Formatter(x, box=True).get_result()
- self.assertEqual(result[0].strip(), "'0 days'")
- self.assertEqual(result[1].strip(), "'1 days'")
+ assert result[0].strip() == "'0 days'"
+ assert result[1].strip() == "'1 days'"
result = fmt.Timedelta64Formatter(x[1:2], box=True).get_result()
- self.assertEqual(result[0].strip(), "'1 days'")
+ assert result[0].strip() == "'1 days'"
result = fmt.Timedelta64Formatter(x, box=False).get_result()
- self.assertEqual(result[0].strip(), "0 days")
- self.assertEqual(result[1].strip(), "1 days")
+ assert result[0].strip() == "0 days"
+ assert result[1].strip() == "1 days"
result = fmt.Timedelta64Formatter(x[1:2], box=False).get_result()
- self.assertEqual(result[0].strip(), "1 days")
+ assert result[0].strip() == "1 days"
def test_days_neg(self):
x = pd.to_timedelta(list(range(5)) + [pd.NaT], unit='D')
result = fmt.Timedelta64Formatter(-x, box=True).get_result()
- self.assertEqual(result[0].strip(), "'0 days'")
- self.assertEqual(result[1].strip(), "'-1 days'")
+ assert result[0].strip() == "'0 days'"
+ assert result[1].strip() == "'-1 days'"
def test_subdays(self):
y = pd.to_timedelta(list(range(5)) + [pd.NaT], unit='s')
result = fmt.Timedelta64Formatter(y, box=True).get_result()
- self.assertEqual(result[0].strip(), "'00:00:00'")
- self.assertEqual(result[1].strip(), "'00:00:01'")
+ assert result[0].strip() == "'00:00:00'"
+ assert result[1].strip() == "'00:00:01'"
def test_subdays_neg(self):
y = pd.to_timedelta(list(range(5)) + [pd.NaT], unit='s')
result = fmt.Timedelta64Formatter(-y, box=True).get_result()
- self.assertEqual(result[0].strip(), "'00:00:00'")
- self.assertEqual(result[1].strip(), "'-1 days +23:59:59'")
+ assert result[0].strip() == "'00:00:00'"
+ assert result[1].strip() == "'-1 days +23:59:59'"
def test_zero(self):
x = pd.to_timedelta(list(range(1)) + [pd.NaT], unit='D')
result = fmt.Timedelta64Formatter(x, box=True).get_result()
- self.assertEqual(result[0].strip(), "'0 days'")
+ assert result[0].strip() == "'0 days'"
x = pd.to_timedelta(list(range(1)), unit='D')
result = fmt.Timedelta64Formatter(x, box=True).get_result()
- self.assertEqual(result[0].strip(), "'0 days'")
+ assert result[0].strip() == "'0 days'"
class TestDatetime64Formatter(tm.TestCase):
@@ -2372,19 +2362,19 @@ class TestDatetime64Formatter(tm.TestCase):
def test_mixed(self):
x = Series([datetime(2013, 1, 1), datetime(2013, 1, 1, 12), pd.NaT])
result = fmt.Datetime64Formatter(x).get_result()
- self.assertEqual(result[0].strip(), "2013-01-01 00:00:00")
- self.assertEqual(result[1].strip(), "2013-01-01 12:00:00")
+ assert result[0].strip() == "2013-01-01 00:00:00"
+ assert result[1].strip() == "2013-01-01 12:00:00"
def test_dates(self):
x = Series([datetime(2013, 1, 1), datetime(2013, 1, 2), pd.NaT])
result = fmt.Datetime64Formatter(x).get_result()
- self.assertEqual(result[0].strip(), "2013-01-01")
- self.assertEqual(result[1].strip(), "2013-01-02")
+ assert result[0].strip() == "2013-01-01"
+ assert result[1].strip() == "2013-01-02"
def test_date_nanos(self):
x = Series([Timestamp(200)])
result = fmt.Datetime64Formatter(x).get_result()
- self.assertEqual(result[0].strip(), "1970-01-01 00:00:00.000000200")
+ assert result[0].strip() == "1970-01-01 00:00:00.000000200"
def test_dates_display(self):
@@ -2393,37 +2383,37 @@ def test_dates_display(self):
x = Series(date_range('20130101 09:00:00', periods=5, freq='D'))
x.iloc[1] = np.nan
result = fmt.Datetime64Formatter(x).get_result()
- self.assertEqual(result[0].strip(), "2013-01-01 09:00:00")
- self.assertEqual(result[1].strip(), "NaT")
- self.assertEqual(result[4].strip(), "2013-01-05 09:00:00")
+ assert result[0].strip() == "2013-01-01 09:00:00"
+ assert result[1].strip() == "NaT"
+ assert result[4].strip() == "2013-01-05 09:00:00"
x = Series(date_range('20130101 09:00:00', periods=5, freq='s'))
x.iloc[1] = np.nan
result = fmt.Datetime64Formatter(x).get_result()
- self.assertEqual(result[0].strip(), "2013-01-01 09:00:00")
- self.assertEqual(result[1].strip(), "NaT")
- self.assertEqual(result[4].strip(), "2013-01-01 09:00:04")
+ assert result[0].strip() == "2013-01-01 09:00:00"
+ assert result[1].strip() == "NaT"
+ assert result[4].strip() == "2013-01-01 09:00:04"
x = Series(date_range('20130101 09:00:00', periods=5, freq='ms'))
x.iloc[1] = np.nan
result = fmt.Datetime64Formatter(x).get_result()
- self.assertEqual(result[0].strip(), "2013-01-01 09:00:00.000")
- self.assertEqual(result[1].strip(), "NaT")
- self.assertEqual(result[4].strip(), "2013-01-01 09:00:00.004")
+ assert result[0].strip() == "2013-01-01 09:00:00.000"
+ assert result[1].strip() == "NaT"
+ assert result[4].strip() == "2013-01-01 09:00:00.004"
x = Series(date_range('20130101 09:00:00', periods=5, freq='us'))
x.iloc[1] = np.nan
result = fmt.Datetime64Formatter(x).get_result()
- self.assertEqual(result[0].strip(), "2013-01-01 09:00:00.000000")
- self.assertEqual(result[1].strip(), "NaT")
- self.assertEqual(result[4].strip(), "2013-01-01 09:00:00.000004")
+ assert result[0].strip() == "2013-01-01 09:00:00.000000"
+ assert result[1].strip() == "NaT"
+ assert result[4].strip() == "2013-01-01 09:00:00.000004"
x = Series(date_range('20130101 09:00:00', periods=5, freq='N'))
x.iloc[1] = np.nan
result = fmt.Datetime64Formatter(x).get_result()
- self.assertEqual(result[0].strip(), "2013-01-01 09:00:00.000000000")
- self.assertEqual(result[1].strip(), "NaT")
- self.assertEqual(result[4].strip(), "2013-01-01 09:00:00.000000004")
+ assert result[0].strip() == "2013-01-01 09:00:00.000000000"
+ assert result[1].strip() == "NaT"
+ assert result[4].strip() == "2013-01-01 09:00:00.000000004"
def test_datetime64formatter_yearmonth(self):
x = Series([datetime(2016, 1, 1), datetime(2016, 2, 2)])
@@ -2433,7 +2423,7 @@ def format_func(x):
formatter = fmt.Datetime64Formatter(x, formatter=format_func)
result = formatter.get_result()
- self.assertEqual(result, ['2016-01', '2016-02'])
+ assert result == ['2016-01', '2016-02']
def test_datetime64formatter_hoursecond(self):
@@ -2445,43 +2435,43 @@ def format_func(x):
formatter = fmt.Datetime64Formatter(x, formatter=format_func)
result = formatter.get_result()
- self.assertEqual(result, ['10:10', '12:12'])
+ assert result == ['10:10', '12:12']
class TestNaTFormatting(tm.TestCase):
def test_repr(self):
- self.assertEqual(repr(pd.NaT), "NaT")
+ assert repr(pd.NaT) == "NaT"
def test_str(self):
- self.assertEqual(str(pd.NaT), "NaT")
+ assert str(pd.NaT) == "NaT"
class TestDatetimeIndexFormat(tm.TestCase):
def test_datetime(self):
formatted = pd.to_datetime([datetime(2003, 1, 1, 12), pd.NaT]).format()
- self.assertEqual(formatted[0], "2003-01-01 12:00:00")
- self.assertEqual(formatted[1], "NaT")
+ assert formatted[0] == "2003-01-01 12:00:00"
+ assert formatted[1] == "NaT"
def test_date(self):
formatted = pd.to_datetime([datetime(2003, 1, 1), pd.NaT]).format()
- self.assertEqual(formatted[0], "2003-01-01")
- self.assertEqual(formatted[1], "NaT")
+ assert formatted[0] == "2003-01-01"
+ assert formatted[1] == "NaT"
def test_date_tz(self):
formatted = pd.to_datetime([datetime(2013, 1, 1)], utc=True).format()
- self.assertEqual(formatted[0], "2013-01-01 00:00:00+00:00")
+ assert formatted[0] == "2013-01-01 00:00:00+00:00"
formatted = pd.to_datetime(
[datetime(2013, 1, 1), pd.NaT], utc=True).format()
- self.assertEqual(formatted[0], "2013-01-01 00:00:00+00:00")
+ assert formatted[0] == "2013-01-01 00:00:00+00:00"
def test_date_explict_date_format(self):
formatted = pd.to_datetime([datetime(2003, 2, 1), pd.NaT]).format(
date_format="%m-%d-%Y", na_rep="UT")
- self.assertEqual(formatted[0], "02-01-2003")
- self.assertEqual(formatted[1], "UT")
+ assert formatted[0] == "02-01-2003"
+ assert formatted[1] == "UT"
class TestDatetimeIndexUnicode(tm.TestCase):
@@ -2503,19 +2493,19 @@ class TestStringRepTimestamp(tm.TestCase):
def test_no_tz(self):
dt_date = datetime(2013, 1, 2)
- self.assertEqual(str(dt_date), str(Timestamp(dt_date)))
+ assert str(dt_date) == str(Timestamp(dt_date))
dt_datetime = datetime(2013, 1, 2, 12, 1, 3)
- self.assertEqual(str(dt_datetime), str(Timestamp(dt_datetime)))
+ assert str(dt_datetime) == str(Timestamp(dt_datetime))
dt_datetime_us = datetime(2013, 1, 2, 12, 1, 3, 45)
- self.assertEqual(str(dt_datetime_us), str(Timestamp(dt_datetime_us)))
+ assert str(dt_datetime_us) == str(Timestamp(dt_datetime_us))
ts_nanos_only = Timestamp(200)
- self.assertEqual(str(ts_nanos_only), "1970-01-01 00:00:00.000000200")
+ assert str(ts_nanos_only) == "1970-01-01 00:00:00.000000200"
ts_nanos_micros = Timestamp(1200)
- self.assertEqual(str(ts_nanos_micros), "1970-01-01 00:00:00.000001200")
+ assert str(ts_nanos_micros) == "1970-01-01 00:00:00.000001200"
def test_tz_pytz(self):
tm._skip_if_no_pytz()
@@ -2523,13 +2513,13 @@ def test_tz_pytz(self):
import pytz
dt_date = datetime(2013, 1, 2, tzinfo=pytz.utc)
- self.assertEqual(str(dt_date), str(Timestamp(dt_date)))
+ assert str(dt_date) == str(Timestamp(dt_date))
dt_datetime = datetime(2013, 1, 2, 12, 1, 3, tzinfo=pytz.utc)
- self.assertEqual(str(dt_datetime), str(Timestamp(dt_datetime)))
+ assert str(dt_datetime) == str(Timestamp(dt_datetime))
dt_datetime_us = datetime(2013, 1, 2, 12, 1, 3, 45, tzinfo=pytz.utc)
- self.assertEqual(str(dt_datetime_us), str(Timestamp(dt_datetime_us)))
+ assert str(dt_datetime_us) == str(Timestamp(dt_datetime_us))
def test_tz_dateutil(self):
tm._skip_if_no_dateutil()
@@ -2537,17 +2527,17 @@ def test_tz_dateutil(self):
utc = dateutil.tz.tzutc()
dt_date = datetime(2013, 1, 2, tzinfo=utc)
- self.assertEqual(str(dt_date), str(Timestamp(dt_date)))
+ assert str(dt_date) == str(Timestamp(dt_date))
dt_datetime = datetime(2013, 1, 2, 12, 1, 3, tzinfo=utc)
- self.assertEqual(str(dt_datetime), str(Timestamp(dt_datetime)))
+ assert str(dt_datetime) == str(Timestamp(dt_datetime))
dt_datetime_us = datetime(2013, 1, 2, 12, 1, 3, 45, tzinfo=utc)
- self.assertEqual(str(dt_datetime_us), str(Timestamp(dt_datetime_us)))
+ assert str(dt_datetime_us) == str(Timestamp(dt_datetime_us))
def test_nat_representations(self):
for f in (str, repr, methodcaller('isoformat')):
- self.assertEqual(f(pd.NaT), 'NaT')
+ assert f(pd.NaT) == 'NaT'
def test_format_percentiles():
diff --git a/pandas/tests/io/formats/test_printing.py b/pandas/tests/io/formats/test_printing.py
index 63cd08545610f..7725b2063c7b6 100644
--- a/pandas/tests/io/formats/test_printing.py
+++ b/pandas/tests/io/formats/test_printing.py
@@ -44,13 +44,13 @@ def test_adjoin(self):
adjoined = printing.adjoin(2, *data)
- self.assertEqual(adjoined, expected)
+ assert adjoined == expected
def test_adjoin_unicode(self):
data = [[u'あ', 'b', 'c'], ['dd', u'ええ', 'ff'], ['ggg', 'hhh', u'いいい']]
expected = u'あ dd ggg\nb ええ hhh\nc ff いいい'
adjoined = printing.adjoin(2, *data)
- self.assertEqual(adjoined, expected)
+ assert adjoined == expected
adj = fmt.EastAsianTextAdjustment()
@@ -59,22 +59,22 @@ def test_adjoin_unicode(self):
c ff いいい"""
adjoined = adj.adjoin(2, *data)
- self.assertEqual(adjoined, expected)
+ assert adjoined == expected
cols = adjoined.split('\n')
- self.assertEqual(adj.len(cols[0]), 13)
- self.assertEqual(adj.len(cols[1]), 13)
- self.assertEqual(adj.len(cols[2]), 16)
+ assert adj.len(cols[0]) == 13
+ assert adj.len(cols[1]) == 13
+ assert adj.len(cols[2]) == 16
expected = u"""あ dd ggg
b ええ hhh
c ff いいい"""
adjoined = adj.adjoin(7, *data)
- self.assertEqual(adjoined, expected)
+ assert adjoined == expected
cols = adjoined.split('\n')
- self.assertEqual(adj.len(cols[0]), 23)
- self.assertEqual(adj.len(cols[1]), 23)
- self.assertEqual(adj.len(cols[2]), 26)
+ assert adj.len(cols[0]) == 23
+ assert adj.len(cols[1]) == 23
+ assert adj.len(cols[2]) == 26
def test_justify(self):
adj = fmt.EastAsianTextAdjustment()
@@ -83,45 +83,45 @@ def just(x, *args, **kwargs):
# wrapper to test single str
return adj.justify([x], *args, **kwargs)[0]
- self.assertEqual(just('abc', 5, mode='left'), 'abc ')
- self.assertEqual(just('abc', 5, mode='center'), ' abc ')
- self.assertEqual(just('abc', 5, mode='right'), ' abc')
- self.assertEqual(just(u'abc', 5, mode='left'), 'abc ')
- self.assertEqual(just(u'abc', 5, mode='center'), ' abc ')
- self.assertEqual(just(u'abc', 5, mode='right'), ' abc')
+ assert just('abc', 5, mode='left') == 'abc '
+ assert just('abc', 5, mode='center') == ' abc '
+ assert just('abc', 5, mode='right') == ' abc'
+ assert just(u'abc', 5, mode='left') == 'abc '
+ assert just(u'abc', 5, mode='center') == ' abc '
+ assert just(u'abc', 5, mode='right') == ' abc'
- self.assertEqual(just(u'パンダ', 5, mode='left'), u'パンダ')
- self.assertEqual(just(u'パンダ', 5, mode='center'), u'パンダ')
- self.assertEqual(just(u'パンダ', 5, mode='right'), u'パンダ')
+ assert just(u'パンダ', 5, mode='left') == u'パンダ'
+ assert just(u'パンダ', 5, mode='center') == u'パンダ'
+ assert just(u'パンダ', 5, mode='right') == u'パンダ'
- self.assertEqual(just(u'パンダ', 10, mode='left'), u'パンダ ')
- self.assertEqual(just(u'パンダ', 10, mode='center'), u' パンダ ')
- self.assertEqual(just(u'パンダ', 10, mode='right'), u' パンダ')
+ assert just(u'パンダ', 10, mode='left') == u'パンダ '
+ assert just(u'パンダ', 10, mode='center') == u' パンダ '
+ assert just(u'パンダ', 10, mode='right') == u' パンダ'
def test_east_asian_len(self):
adj = fmt.EastAsianTextAdjustment()
- self.assertEqual(adj.len('abc'), 3)
- self.assertEqual(adj.len(u'abc'), 3)
+ assert adj.len('abc') == 3
+ assert adj.len(u'abc') == 3
- self.assertEqual(adj.len(u'パンダ'), 6)
- self.assertEqual(adj.len(u'ﾊﾟﾝﾀﾞ'), 5)
- self.assertEqual(adj.len(u'パンダpanda'), 11)
- self.assertEqual(adj.len(u'ﾊﾟﾝﾀﾞpanda'), 10)
+ assert adj.len(u'パンダ') == 6
+ assert adj.len(u'ﾊﾟﾝﾀﾞ') == 5
+ assert adj.len(u'パンダpanda') == 11
+ assert adj.len(u'ﾊﾟﾝﾀﾞpanda') == 10
def test_ambiguous_width(self):
adj = fmt.EastAsianTextAdjustment()
- self.assertEqual(adj.len(u'¡¡ab'), 4)
+ assert adj.len(u'¡¡ab') == 4
with cf.option_context('display.unicode.ambiguous_as_wide', True):
adj = fmt.EastAsianTextAdjustment()
- self.assertEqual(adj.len(u'¡¡ab'), 6)
+ assert adj.len(u'¡¡ab') == 6
data = [[u'あ', 'b', 'c'], ['dd', u'ええ', 'ff'],
['ggg', u'¡¡ab', u'いいい']]
expected = u'あ dd ggg \nb ええ ¡¡ab\nc ff いいい'
adjoined = adj.adjoin(2, *data)
- self.assertEqual(adjoined, expected)
+ assert adjoined == expected
class TestTableSchemaRepr(tm.TestCase):
@@ -151,13 +151,13 @@ def test_publishes(self):
for obj, expected in zip(objects, expected_keys):
with opt, make_patch as mock_display:
handle = obj._ipython_display_()
- self.assertEqual(mock_display.call_count, 1)
+ assert mock_display.call_count == 1
assert handle is None
args, kwargs = mock_display.call_args
arg, = args # just one argument
- self.assertEqual(kwargs, {"raw": True})
- self.assertEqual(set(arg.keys()), expected)
+ assert kwargs == {"raw": True}
+ assert set(arg.keys()) == expected
with_latex = pd.option_context('display.latex.repr', True)
@@ -168,7 +168,7 @@ def test_publishes(self):
expected = {'text/plain', 'text/html', 'text/latex',
'application/vnd.dataresource+json'}
- self.assertEqual(set(arg.keys()), expected)
+ assert set(arg.keys()) == expected
def test_publishes_not_implemented(self):
# column MultiIndex
diff --git a/pandas/tests/io/formats/test_style.py b/pandas/tests/io/formats/test_style.py
index 7d8ac6f81c31e..371cc2b61634a 100644
--- a/pandas/tests/io/formats/test_style.py
+++ b/pandas/tests/io/formats/test_style.py
@@ -39,7 +39,7 @@ def test_init_non_pandas(self):
def test_init_series(self):
result = Styler(pd.Series([1, 2]))
- self.assertEqual(result.data.ndim, 2)
+ assert result.data.ndim == 2
def test_repr_html_ok(self):
self.styler._repr_html_()
@@ -48,7 +48,7 @@ def test_update_ctx(self):
self.styler._update_ctx(self.attrs)
expected = {(0, 0): ['color: red'],
(1, 0): ['color: blue']}
- self.assertEqual(self.styler.ctx, expected)
+ assert self.styler.ctx == expected
def test_update_ctx_flatten_multi(self):
attrs = DataFrame({"A": ['color: red; foo: bar',
@@ -56,7 +56,7 @@ def test_update_ctx_flatten_multi(self):
self.styler._update_ctx(attrs)
expected = {(0, 0): ['color: red', ' foo: bar'],
(1, 0): ['color: blue', ' foo: baz']}
- self.assertEqual(self.styler.ctx, expected)
+ assert self.styler.ctx == expected
def test_update_ctx_flatten_multi_traliing_semi(self):
attrs = DataFrame({"A": ['color: red; foo: bar;',
@@ -64,7 +64,7 @@ def test_update_ctx_flatten_multi_traliing_semi(self):
self.styler._update_ctx(attrs)
expected = {(0, 0): ['color: red', ' foo: bar'],
(1, 0): ['color: blue', ' foo: baz']}
- self.assertEqual(self.styler.ctx, expected)
+ assert self.styler.ctx == expected
def test_copy(self):
s2 = copy.copy(self.styler)
@@ -74,8 +74,8 @@ def test_copy(self):
self.styler._update_ctx(self.attrs)
self.styler.highlight_max()
- self.assertEqual(self.styler.ctx, s2.ctx)
- self.assertEqual(self.styler._todo, s2._todo)
+ assert self.styler.ctx == s2.ctx
+ assert self.styler._todo == s2._todo
def test_deepcopy(self):
s2 = copy.deepcopy(self.styler)
@@ -86,7 +86,7 @@ def test_deepcopy(self):
self.styler._update_ctx(self.attrs)
self.styler.highlight_max()
self.assertNotEqual(self.styler.ctx, s2.ctx)
- self.assertEqual(s2._todo, [])
+ assert s2._todo == []
self.assertNotEqual(self.styler._todo, s2._todo)
def test_clear(self):
@@ -119,16 +119,16 @@ def test_set_properties(self):
# order is deterministic
v = ["color: white", "size: 10px"]
expected = {(0, 0): v, (1, 0): v}
- self.assertEqual(result.keys(), expected.keys())
+ assert result.keys() == expected.keys()
for v1, v2 in zip(result.values(), expected.values()):
- self.assertEqual(sorted(v1), sorted(v2))
+ assert sorted(v1) == sorted(v2)
def test_set_properties_subset(self):
df = pd.DataFrame({'A': [0, 1]})
result = df.style.set_properties(subset=pd.IndexSlice[0, 'A'],
color='white')._compute().ctx
expected = {(0, 0): ['color: white']}
- self.assertEqual(result, expected)
+ assert result == expected
def test_empty_index_name_doesnt_display(self):
# https://github.com/pandas-dev/pandas/pull/12090#issuecomment-180695902
@@ -156,7 +156,7 @@ def test_empty_index_name_doesnt_display(self):
'is_visible': True,
}]]
- self.assertEqual(result['head'], expected)
+ assert result['head'] == expected
def test_index_name(self):
# https://github.com/pandas-dev/pandas/issues/11655
@@ -174,7 +174,7 @@ def test_index_name(self):
{'class': 'blank', 'type': 'th', 'value': ''},
{'class': 'blank', 'type': 'th', 'value': ''}]]
- self.assertEqual(result['head'], expected)
+ assert result['head'] == expected
def test_multiindex_name(self):
# https://github.com/pandas-dev/pandas/issues/11655
@@ -194,7 +194,7 @@ def test_multiindex_name(self):
'value': 'B'},
{'class': 'blank', 'type': 'th', 'value': ''}]]
- self.assertEqual(result['head'], expected)
+ assert result['head'] == expected
def test_numeric_columns(self):
# https://github.com/pandas-dev/pandas/issues/12125
@@ -206,21 +206,21 @@ def test_apply_axis(self):
df = pd.DataFrame({'A': [0, 0], 'B': [1, 1]})
f = lambda x: ['val: %s' % x.max() for v in x]
result = df.style.apply(f, axis=1)
- self.assertEqual(len(result._todo), 1)
- self.assertEqual(len(result.ctx), 0)
+ assert len(result._todo) == 1
+ assert len(result.ctx) == 0
result._compute()
expected = {(0, 0): ['val: 1'], (0, 1): ['val: 1'],
(1, 0): ['val: 1'], (1, 1): ['val: 1']}
- self.assertEqual(result.ctx, expected)
+ assert result.ctx == expected
result = df.style.apply(f, axis=0)
expected = {(0, 0): ['val: 0'], (0, 1): ['val: 1'],
(1, 0): ['val: 0'], (1, 1): ['val: 1']}
result._compute()
- self.assertEqual(result.ctx, expected)
+ assert result.ctx == expected
result = df.style.apply(f) # default
result._compute()
- self.assertEqual(result.ctx, expected)
+ assert result.ctx == expected
def test_apply_subset(self):
axes = [0, 1]
@@ -236,7 +236,7 @@ def test_apply_subset(self):
for c, col in enumerate(self.df.columns)
if row in self.df.loc[slice_].index and
col in self.df.loc[slice_].columns)
- self.assertEqual(result, expected)
+ assert result == expected
def test_applymap_subset(self):
def f(x):
@@ -253,7 +253,7 @@ def f(x):
for c, col in enumerate(self.df.columns)
if row in self.df.loc[slice_].index and
col in self.df.loc[slice_].columns)
- self.assertEqual(result, expected)
+ assert result == expected
def test_empty(self):
df = pd.DataFrame({'A': [1, 0]})
@@ -264,7 +264,7 @@ def test_empty(self):
result = s._translate()['cellstyle']
expected = [{'props': [['color', ' red']], 'selector': 'row0_col0'},
{'props': [['', '']], 'selector': 'row1_col0'}]
- self.assertEqual(result, expected)
+ assert result == expected
def test_bar(self):
df = pd.DataFrame({'A': [0, 1, 2]})
@@ -278,7 +278,7 @@ def test_bar(self):
'background: linear-gradient('
'90deg,#d65f5f 100.0%, transparent 0%)']
}
- self.assertEqual(result, expected)
+ assert result == expected
result = df.style.bar(color='red', width=50)._compute().ctx
expected = {
@@ -290,14 +290,14 @@ def test_bar(self):
'background: linear-gradient('
'90deg,red 50.0%, transparent 0%)']
}
- self.assertEqual(result, expected)
+ assert result == expected
df['C'] = ['a'] * len(df)
result = df.style.bar(color='red', width=50)._compute().ctx
- self.assertEqual(result, expected)
+ assert result == expected
df['C'] = df['C'].astype('category')
result = df.style.bar(color='red', width=50)._compute().ctx
- self.assertEqual(result, expected)
+ assert result == expected
def test_bar_0points(self):
df = pd.DataFrame([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
@@ -323,7 +323,7 @@ def test_bar_0points(self):
(2, 2): ['width: 10em', ' height: 80%',
'background: linear-gradient(90deg,#d65f5f 100.0%'
', transparent 0%)']}
- self.assertEqual(result, expected)
+ assert result == expected
result = df.style.bar(axis=1)._compute().ctx
expected = {(0, 0): ['width: 10em', ' height: 80%'],
@@ -347,14 +347,14 @@ def test_bar_0points(self):
(2, 2): ['width: 10em', ' height: 80%',
'background: linear-gradient(90deg,#d65f5f 100.0%'
', transparent 0%)']}
- self.assertEqual(result, expected)
+ assert result == expected
def test_highlight_null(self, null_color='red'):
df = pd.DataFrame({'A': [0, np.nan]})
result = df.style.highlight_null()._compute().ctx
expected = {(0, 0): [''],
(1, 0): ['background-color: red']}
- self.assertEqual(result, expected)
+ assert result == expected
def test_nonunique_raises(self):
df = pd.DataFrame([[1, 2]], columns=['A', 'A'])
@@ -372,7 +372,7 @@ def test_caption(self):
styler = self.df.style
result = styler.set_caption('baz')
assert styler is result
- self.assertEqual(styler.caption, 'baz')
+ assert styler.caption == 'baz'
def test_uuid(self):
styler = Styler(self.df, uuid='abc123')
@@ -382,7 +382,7 @@ def test_uuid(self):
styler = self.df.style
result = styler.set_uuid('aaa')
assert result is styler
- self.assertEqual(result.uuid, 'aaa')
+ assert result.uuid == 'aaa'
def test_table_styles(self):
style = [{'selector': 'th', 'props': [('foo', 'bar')]}]
@@ -393,7 +393,7 @@ def test_table_styles(self):
styler = self.df.style
result = styler.set_table_styles(style)
assert styler is result
- self.assertEqual(styler.table_styles, style)
+ assert styler.table_styles == style
def test_table_attributes(self):
attributes = 'class="foo" data-bar'
@@ -407,13 +407,13 @@ def test_table_attributes(self):
def test_precision(self):
with pd.option_context('display.precision', 10):
s = Styler(self.df)
- self.assertEqual(s.precision, 10)
+ assert s.precision == 10
s = Styler(self.df, precision=2)
- self.assertEqual(s.precision, 2)
+ assert s.precision == 2
s2 = s.set_precision(4)
assert s is s2
- self.assertEqual(s.precision, 4)
+ assert s.precision == 4
def test_apply_none(self):
def f(x):
@@ -421,14 +421,14 @@ def f(x):
index=x.index, columns=x.columns)
result = (pd.DataFrame([[1, 2], [3, 4]])
.style.apply(f, axis=None)._compute().ctx)
- self.assertEqual(result[(1, 1)], ['color: red'])
+ assert result[(1, 1)] == ['color: red']
def test_trim(self):
result = self.df.style.render() # trim=True
- self.assertEqual(result.count('#'), 0)
+ assert result.count('#') == 0
result = self.df.style.highlight_max().render()
- self.assertEqual(result.count('#'), len(self.df.columns))
+ assert result.count('#') == len(self.df.columns)
def test_highlight_max(self):
df = pd.DataFrame([[1, 2], [3, 4]], columns=['A', 'B'])
@@ -440,25 +440,25 @@ def test_highlight_max(self):
df = -df
attr = 'highlight_min'
result = getattr(df.style, attr)()._compute().ctx
- self.assertEqual(result[(1, 1)], ['background-color: yellow'])
+ assert result[(1, 1)] == ['background-color: yellow']
result = getattr(df.style, attr)(color='green')._compute().ctx
- self.assertEqual(result[(1, 1)], ['background-color: green'])
+ assert result[(1, 1)] == ['background-color: green']
result = getattr(df.style, attr)(subset='A')._compute().ctx
- self.assertEqual(result[(1, 0)], ['background-color: yellow'])
+ assert result[(1, 0)] == ['background-color: yellow']
result = getattr(df.style, attr)(axis=0)._compute().ctx
expected = {(1, 0): ['background-color: yellow'],
(1, 1): ['background-color: yellow'],
(0, 1): [''], (0, 0): ['']}
- self.assertEqual(result, expected)
+ assert result == expected
result = getattr(df.style, attr)(axis=1)._compute().ctx
expected = {(0, 1): ['background-color: yellow'],
(1, 1): ['background-color: yellow'],
(0, 0): [''], (1, 0): ['']}
- self.assertEqual(result, expected)
+ assert result == expected
# separate since we cant negate the strs
df['C'] = ['a', 'b']
@@ -478,7 +478,7 @@ def test_export(self):
result = style1.export()
style2 = self.df.style
style2.use(result)
- self.assertEqual(style1._todo, style2._todo)
+ assert style1._todo == style2._todo
style2.render()
def test_display_format(self):
@@ -503,48 +503,48 @@ def test_display_subset(self):
ctx = df.style.format({"a": "{:0.1f}", "b": "{0:.2%}"},
subset=pd.IndexSlice[0, :])._translate()
expected = '0.1'
- self.assertEqual(ctx['body'][0][1]['display_value'], expected)
- self.assertEqual(ctx['body'][1][1]['display_value'], '1.1234')
- self.assertEqual(ctx['body'][0][2]['display_value'], '12.34%')
+ assert ctx['body'][0][1]['display_value'] == expected
+ assert ctx['body'][1][1]['display_value'] == '1.1234'
+ assert ctx['body'][0][2]['display_value'] == '12.34%'
raw_11 = '1.1234'
ctx = df.style.format("{:0.1f}",
subset=pd.IndexSlice[0, :])._translate()
- self.assertEqual(ctx['body'][0][1]['display_value'], expected)
- self.assertEqual(ctx['body'][1][1]['display_value'], raw_11)
+ assert ctx['body'][0][1]['display_value'] == expected
+ assert ctx['body'][1][1]['display_value'] == raw_11
ctx = df.style.format("{:0.1f}",
subset=pd.IndexSlice[0, :])._translate()
- self.assertEqual(ctx['body'][0][1]['display_value'], expected)
- self.assertEqual(ctx['body'][1][1]['display_value'], raw_11)
+ assert ctx['body'][0][1]['display_value'] == expected
+ assert ctx['body'][1][1]['display_value'] == raw_11
ctx = df.style.format("{:0.1f}",
subset=pd.IndexSlice['a'])._translate()
- self.assertEqual(ctx['body'][0][1]['display_value'], expected)
- self.assertEqual(ctx['body'][0][2]['display_value'], '0.1234')
+ assert ctx['body'][0][1]['display_value'] == expected
+ assert ctx['body'][0][2]['display_value'] == '0.1234'
ctx = df.style.format("{:0.1f}",
subset=pd.IndexSlice[0, 'a'])._translate()
- self.assertEqual(ctx['body'][0][1]['display_value'], expected)
- self.assertEqual(ctx['body'][1][1]['display_value'], raw_11)
+ assert ctx['body'][0][1]['display_value'] == expected
+ assert ctx['body'][1][1]['display_value'] == raw_11
ctx = df.style.format("{:0.1f}",
subset=pd.IndexSlice[[0, 1], ['a']])._translate()
- self.assertEqual(ctx['body'][0][1]['display_value'], expected)
- self.assertEqual(ctx['body'][1][1]['display_value'], '1.1')
- self.assertEqual(ctx['body'][0][2]['display_value'], '0.1234')
- self.assertEqual(ctx['body'][1][2]['display_value'], '1.1234')
+ assert ctx['body'][0][1]['display_value'] == expected
+ assert ctx['body'][1][1]['display_value'] == '1.1'
+ assert ctx['body'][0][2]['display_value'] == '0.1234'
+ assert ctx['body'][1][2]['display_value'] == '1.1234'
def test_display_dict(self):
df = pd.DataFrame([[.1234, .1234], [1.1234, 1.1234]],
columns=['a', 'b'])
ctx = df.style.format({"a": "{:0.1f}", "b": "{0:.2%}"})._translate()
- self.assertEqual(ctx['body'][0][1]['display_value'], '0.1')
- self.assertEqual(ctx['body'][0][2]['display_value'], '12.34%')
+ assert ctx['body'][0][1]['display_value'] == '0.1'
+ assert ctx['body'][0][2]['display_value'] == '12.34%'
df['c'] = ['aaa', 'bbb']
ctx = df.style.format({"a": "{:0.1f}", "c": str.upper})._translate()
- self.assertEqual(ctx['body'][0][1]['display_value'], '0.1')
- self.assertEqual(ctx['body'][0][3]['display_value'], 'AAA')
+ assert ctx['body'][0][1]['display_value'] == '0.1'
+ assert ctx['body'][0][3]['display_value'] == 'AAA'
def test_bad_apply_shape(self):
df = pd.DataFrame([[1, 2], [3, 4]])
@@ -629,7 +629,7 @@ def test_mi_sparse(self):
'is_visible': True, 'display_value': ''},
{'type': 'th', 'class': 'col_heading level0 col0', 'value': 'A',
'is_visible': True, 'display_value': 'A'}]
- self.assertEqual(head, expected)
+ assert head == expected
def test_mi_sparse_disabled(self):
with pd.option_context('display.multi_sparse', False):
@@ -655,7 +655,7 @@ def test_mi_sparse_index_names(self):
'type': 'th'},
{'class': 'blank', 'value': '', 'type': 'th'}]
- self.assertEqual(head, expected)
+ assert head == expected
def test_mi_sparse_column_names(self):
df = pd.DataFrame(
@@ -698,7 +698,7 @@ def test_mi_sparse_column_names(self):
'type': 'th',
'value': 0},
]
- self.assertEqual(head, expected)
+ assert head == expected
@tm.mplskip
@@ -706,16 +706,16 @@ class TestStylerMatplotlibDep(TestCase):
def test_background_gradient(self):
df = pd.DataFrame([[1, 2], [2, 4]], columns=['A', 'B'])
- for axis in [0, 1, 'index', 'columns']:
- for cmap in [None, 'YlOrRd']:
- result = df.style.background_gradient(cmap=cmap)._compute().ctx
- assert all("#" in x[0] for x in result.values())
- self.assertEqual(result[(0, 0)], result[(0, 1)])
- self.assertEqual(result[(1, 0)], result[(1, 1)])
-
- result = (df.style.background_gradient(subset=pd.IndexSlice[1, 'A'])
- ._compute().ctx)
- self.assertEqual(result[(1, 0)], ['background-color: #fff7fb'])
+
+ for c_map in [None, 'YlOrRd']:
+ result = df.style.background_gradient(cmap=c_map)._compute().ctx
+ assert all("#" in x[0] for x in result.values())
+ assert result[(0, 0)] == result[(0, 1)]
+ assert result[(1, 0)] == result[(1, 1)]
+
+ result = df.style.background_gradient(
+ subset=pd.IndexSlice[1, 'A'])._compute().ctx
+ assert result[(1, 0)] == ['background-color: #fff7fb']
def test_block_names():
diff --git a/pandas/tests/io/formats/test_to_csv.py b/pandas/tests/io/formats/test_to_csv.py
index 02c73019b0f65..552fb77bb54cc 100644
--- a/pandas/tests/io/formats/test_to_csv.py
+++ b/pandas/tests/io/formats/test_to_csv.py
@@ -17,7 +17,7 @@ def test_to_csv_quotechar(self):
with tm.ensure_clean('test.csv') as path:
df.to_csv(path, quoting=1) # 1=QUOTE_ALL
with open(path, 'r') as f:
- self.assertEqual(f.read(), expected)
+ assert f.read() == expected
expected = """\
$$,$col$
@@ -28,7 +28,7 @@ def test_to_csv_quotechar(self):
with tm.ensure_clean('test.csv') as path:
df.to_csv(path, quoting=1, quotechar="$")
with open(path, 'r') as f:
- self.assertEqual(f.read(), expected)
+ assert f.read() == expected
with tm.ensure_clean('test.csv') as path:
with tm.assert_raises_regex(TypeError, 'quotechar'):
@@ -45,7 +45,7 @@ def test_to_csv_doublequote(self):
with tm.ensure_clean('test.csv') as path:
df.to_csv(path, quoting=1, doublequote=True) # QUOTE_ALL
with open(path, 'r') as f:
- self.assertEqual(f.read(), expected)
+ assert f.read() == expected
from _csv import Error
with tm.ensure_clean('test.csv') as path:
@@ -63,7 +63,7 @@ def test_to_csv_escapechar(self):
with tm.ensure_clean('test.csv') as path: # QUOTE_ALL
df.to_csv(path, quoting=1, doublequote=False, escapechar='\\')
with open(path, 'r') as f:
- self.assertEqual(f.read(), expected)
+ assert f.read() == expected
df = DataFrame({'col': ['a,a', ',bb,']})
expected = """\
@@ -75,76 +75,71 @@ def test_to_csv_escapechar(self):
with tm.ensure_clean('test.csv') as path:
df.to_csv(path, quoting=3, escapechar='\\') # QUOTE_NONE
with open(path, 'r') as f:
- self.assertEqual(f.read(), expected)
+ assert f.read() == expected
def test_csv_to_string(self):
df = DataFrame({'col': [1, 2]})
expected = ',col\n0,1\n1,2\n'
- self.assertEqual(df.to_csv(), expected)
+ assert df.to_csv() == expected
def test_to_csv_decimal(self):
# GH 781
df = DataFrame({'col1': [1], 'col2': ['a'], 'col3': [10.1]})
expected_default = ',col1,col2,col3\n0,1,a,10.1\n'
- self.assertEqual(df.to_csv(), expected_default)
+ assert df.to_csv() == expected_default
expected_european_excel = ';col1;col2;col3\n0;1;a;10,1\n'
- self.assertEqual(
- df.to_csv(decimal=',', sep=';'), expected_european_excel)
+ assert df.to_csv(decimal=',', sep=';') == expected_european_excel
expected_float_format_default = ',col1,col2,col3\n0,1,a,10.10\n'
- self.assertEqual(
- df.to_csv(float_format='%.2f'), expected_float_format_default)
+ assert df.to_csv(float_format='%.2f') == expected_float_format_default
expected_float_format = ';col1;col2;col3\n0;1;a;10,10\n'
- self.assertEqual(
- df.to_csv(decimal=',', sep=';',
- float_format='%.2f'), expected_float_format)
+ assert df.to_csv(decimal=',', sep=';',
+ float_format='%.2f') == expected_float_format
# GH 11553: testing if decimal is taken into account for '0.0'
df = pd.DataFrame({'a': [0, 1.1], 'b': [2.2, 3.3], 'c': 1})
expected = 'a,b,c\n0^0,2^2,1\n1^1,3^3,1\n'
- self.assertEqual(df.to_csv(index=False, decimal='^'), expected)
+ assert df.to_csv(index=False, decimal='^') == expected
# same but for an index
- self.assertEqual(df.set_index('a').to_csv(decimal='^'), expected)
+ assert df.set_index('a').to_csv(decimal='^') == expected
# same for a multi-index
- self.assertEqual(
- df.set_index(['a', 'b']).to_csv(decimal="^"), expected)
+ assert df.set_index(['a', 'b']).to_csv(decimal="^") == expected
def test_to_csv_float_format(self):
# testing if float_format is taken into account for the index
# GH 11553
df = pd.DataFrame({'a': [0, 1], 'b': [2.2, 3.3], 'c': 1})
expected = 'a,b,c\n0,2.20,1\n1,3.30,1\n'
- self.assertEqual(
- df.set_index('a').to_csv(float_format='%.2f'), expected)
+ assert df.set_index('a').to_csv(float_format='%.2f') == expected
# same for a multi-index
- self.assertEqual(
- df.set_index(['a', 'b']).to_csv(float_format='%.2f'), expected)
+ assert df.set_index(['a', 'b']).to_csv(
+ float_format='%.2f') == expected
def test_to_csv_na_rep(self):
# testing if NaN values are correctly represented in the index
# GH 11553
df = DataFrame({'a': [0, np.NaN], 'b': [0, 1], 'c': [2, 3]})
expected = "a,b,c\n0.0,0,2\n_,1,3\n"
- self.assertEqual(df.set_index('a').to_csv(na_rep='_'), expected)
- self.assertEqual(df.set_index(['a', 'b']).to_csv(na_rep='_'), expected)
+ assert df.set_index('a').to_csv(na_rep='_') == expected
+ assert df.set_index(['a', 'b']).to_csv(na_rep='_') == expected
# now with an index containing only NaNs
df = DataFrame({'a': np.NaN, 'b': [0, 1], 'c': [2, 3]})
expected = "a,b,c\n_,0,2\n_,1,3\n"
- self.assertEqual(df.set_index('a').to_csv(na_rep='_'), expected)
- self.assertEqual(df.set_index(['a', 'b']).to_csv(na_rep='_'), expected)
+ assert df.set_index('a').to_csv(na_rep='_') == expected
+ assert df.set_index(['a', 'b']).to_csv(na_rep='_') == expected
# check if na_rep parameter does not break anything when no NaN
df = DataFrame({'a': 0, 'b': [0, 1], 'c': [2, 3]})
expected = "a,b,c\n0,0,2\n0,1,3\n"
- self.assertEqual(df.set_index('a').to_csv(na_rep='_'), expected)
- self.assertEqual(df.set_index(['a', 'b']).to_csv(na_rep='_'), expected)
+ assert df.set_index('a').to_csv(na_rep='_') == expected
+ assert df.set_index(['a', 'b']).to_csv(na_rep='_') == expected
def test_to_csv_date_format(self):
# GH 10209
@@ -157,26 +152,23 @@ def test_to_csv_date_format(self):
'2013-01-01 00:00:01\n2,2013-01-01 00:00:02'
'\n3,2013-01-01 00:00:03\n4,'
'2013-01-01 00:00:04\n')
- self.assertEqual(df_sec.to_csv(), expected_default_sec)
+ assert df_sec.to_csv() == expected_default_sec
expected_ymdhms_day = (',A\n0,2013-01-01 00:00:00\n1,'
'2013-01-02 00:00:00\n2,2013-01-03 00:00:00'
'\n3,2013-01-04 00:00:00\n4,'
'2013-01-05 00:00:00\n')
- self.assertEqual(
- df_day.to_csv(
- date_format='%Y-%m-%d %H:%M:%S'), expected_ymdhms_day)
+ assert (df_day.to_csv(date_format='%Y-%m-%d %H:%M:%S') ==
+ expected_ymdhms_day)
expected_ymd_sec = (',A\n0,2013-01-01\n1,2013-01-01\n2,'
'2013-01-01\n3,2013-01-01\n4,2013-01-01\n')
- self.assertEqual(
- df_sec.to_csv(date_format='%Y-%m-%d'), expected_ymd_sec)
+ assert df_sec.to_csv(date_format='%Y-%m-%d') == expected_ymd_sec
expected_default_day = (',A\n0,2013-01-01\n1,2013-01-02\n2,'
'2013-01-03\n3,2013-01-04\n4,2013-01-05\n')
- self.assertEqual(df_day.to_csv(), expected_default_day)
- self.assertEqual(
- df_day.to_csv(date_format='%Y-%m-%d'), expected_default_day)
+ assert df_day.to_csv() == expected_default_day
+ assert df_day.to_csv(date_format='%Y-%m-%d') == expected_default_day
# testing if date_format parameter is taken into account for
# multi-indexed dataframes (GH 7791)
@@ -184,33 +176,33 @@ def test_to_csv_date_format(self):
df_sec['C'] = 1
expected_ymd_sec = 'A,B,C\n2013-01-01,0,1\n'
df_sec_grouped = df_sec.groupby([pd.Grouper(key='A', freq='1h'), 'B'])
- self.assertEqual(df_sec_grouped.mean().to_csv(date_format='%Y-%m-%d'),
- expected_ymd_sec)
+ assert (df_sec_grouped.mean().to_csv(date_format='%Y-%m-%d') ==
+ expected_ymd_sec)
def test_to_csv_multi_index(self):
# see gh-6618
df = DataFrame([1], columns=pd.MultiIndex.from_arrays([[1], [2]]))
exp = ",1\n,2\n0,1\n"
- self.assertEqual(df.to_csv(), exp)
+ assert df.to_csv() == exp
exp = "1\n2\n1\n"
- self.assertEqual(df.to_csv(index=False), exp)
+ assert df.to_csv(index=False) == exp
df = DataFrame([1], columns=pd.MultiIndex.from_arrays([[1], [2]]),
index=pd.MultiIndex.from_arrays([[1], [2]]))
exp = ",,1\n,,2\n1,2,1\n"
- self.assertEqual(df.to_csv(), exp)
+ assert df.to_csv() == exp
exp = "1\n2\n1\n"
- self.assertEqual(df.to_csv(index=False), exp)
+ assert df.to_csv(index=False) == exp
df = DataFrame(
[1], columns=pd.MultiIndex.from_arrays([['foo'], ['bar']]))
exp = ",foo\n,bar\n0,1\n"
- self.assertEqual(df.to_csv(), exp)
+ assert df.to_csv() == exp
exp = "foo\nbar\n1\n"
- self.assertEqual(df.to_csv(index=False), exp)
+ assert df.to_csv(index=False) == exp
diff --git a/pandas/tests/io/formats/test_to_html.py b/pandas/tests/io/formats/test_to_html.py
index fd9ae0851635a..4a4546dd807f1 100644
--- a/pandas/tests/io/formats/test_to_html.py
+++ b/pandas/tests/io/formats/test_to_html.py
@@ -50,10 +50,10 @@ def test_to_html_with_empty_string_label(self):
def test_to_html_unicode(self):
df = DataFrame({u('\u03c3'): np.arange(10.)})
expected = u'<table border="1" class="dataframe">\n <thead>\n <tr style="text-align: right;">\n <th></th>\n <th>\u03c3</th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <th>0</th>\n <td>0.0</td>\n </tr>\n <tr>\n <th>1</th>\n <td>1.0</td>\n </tr>\n <tr>\n <th>2</th>\n <td>2.0</td>\n </tr>\n <tr>\n <th>3</th>\n <td>3.0</td>\n </tr>\n <tr>\n <th>4</th>\n <td>4.0</td>\n </tr>\n <tr>\n <th>5</th>\n <td>5.0</td>\n </tr>\n <tr>\n <th>6</th>\n <td>6.0</td>\n </tr>\n <tr>\n <th>7</th>\n <td>7.0</td>\n </tr>\n <tr>\n <th>8</th>\n <td>8.0</td>\n </tr>\n <tr>\n <th>9</th>\n <td>9.0</td>\n </tr>\n </tbody>\n</table>' # noqa
- self.assertEqual(df.to_html(), expected)
+ assert df.to_html() == expected
df = DataFrame({'A': [u('\u03c3')]})
expected = u'<table border="1" class="dataframe">\n <thead>\n <tr style="text-align: right;">\n <th></th>\n <th>A</th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <th>0</th>\n <td>\u03c3</td>\n </tr>\n </tbody>\n</table>' # noqa
- self.assertEqual(df.to_html(), expected)
+ assert df.to_html() == expected
def test_to_html_decimal(self):
# GH 12031
@@ -81,7 +81,7 @@ def test_to_html_decimal(self):
' </tr>\n'
' </tbody>\n'
'</table>')
- self.assertEqual(result, expected)
+ assert result == expected
def test_to_html_escaped(self):
a = 'str<ing1 &'
@@ -114,7 +114,7 @@ def test_to_html_escaped(self):
</tbody>
</table>"""
- self.assertEqual(xp, rs)
+ assert xp == rs
def test_to_html_escape_disabled(self):
a = 'str<ing1 &'
@@ -147,7 +147,7 @@ def test_to_html_escape_disabled(self):
</tbody>
</table>"""
- self.assertEqual(xp, rs)
+ assert xp == rs
def test_to_html_multiindex_index_false(self):
# issue 8452
@@ -189,11 +189,11 @@ def test_to_html_multiindex_index_false(self):
</tbody>
</table>"""
- self.assertEqual(result, expected)
+ assert result == expected
df.index = Index(df.index.values, name='idx')
result = df.to_html(index=False)
- self.assertEqual(result, expected)
+ assert result == expected
def test_to_html_multiindex_sparsify_false_multi_sparse(self):
with option_context('display.multi_sparse', False):
@@ -247,7 +247,7 @@ def test_to_html_multiindex_sparsify_false_multi_sparse(self):
</tbody>
</table>"""
- self.assertEqual(result, expected)
+ assert result == expected
df = DataFrame([[0, 1], [2, 3], [4, 5], [6, 7]],
columns=index[::2], index=index)
@@ -303,7 +303,7 @@ def test_to_html_multiindex_sparsify_false_multi_sparse(self):
</tbody>
</table>"""
- self.assertEqual(result, expected)
+ assert result == expected
def test_to_html_multiindex_sparsify(self):
index = MultiIndex.from_arrays([[0, 0, 1, 1], [0, 1, 0, 1]],
@@ -353,7 +353,7 @@ def test_to_html_multiindex_sparsify(self):
</tbody>
</table>"""
- self.assertEqual(result, expected)
+ assert result == expected
df = DataFrame([[0, 1], [2, 3], [4, 5], [6, 7]], columns=index[::2],
index=index)
@@ -407,7 +407,7 @@ def test_to_html_multiindex_sparsify(self):
</tbody>
</table>"""
- self.assertEqual(result, expected)
+ assert result == expected
def test_to_html_multiindex_odd_even_truncate(self):
# GH 14882 - Issue on truncation with odd length DataFrame
@@ -692,7 +692,7 @@ def test_to_html_multiindex_odd_even_truncate(self):
</tr>
</tbody>
</table>"""
- self.assertEqual(result, expected)
+ assert result == expected
# Test that ... appears in a middle level
result = df.to_html(max_rows=56)
@@ -955,7 +955,7 @@ def test_to_html_multiindex_odd_even_truncate(self):
</tr>
</tbody>
</table>"""
- self.assertEqual(result, expected)
+ assert result == expected
def test_to_html_index_formatter(self):
df = DataFrame([[0, 1], [2, 3], [4, 5], [6, 7]], columns=['foo', None],
@@ -996,7 +996,7 @@ def test_to_html_index_formatter(self):
</tbody>
</table>"""
- self.assertEqual(result, expected)
+ assert result == expected
def test_to_html_datetime64_monthformatter(self):
months = [datetime(2016, 1, 1), datetime(2016, 2, 2)]
@@ -1024,7 +1024,7 @@ def format_func(x):
</tr>
</tbody>
</table>"""
- self.assertEqual(result, expected)
+ assert result == expected
def test_to_html_datetime64_hourformatter(self):
@@ -1053,7 +1053,7 @@ def format_func(x):
</tr>
</tbody>
</table>"""
- self.assertEqual(result, expected)
+ assert result == expected
def test_to_html_regression_GH6098(self):
df = DataFrame({
@@ -1164,7 +1164,7 @@ def test_to_html_truncate(self):
</div>'''.format(div_style)
if compat.PY2:
expected = expected.decode('utf-8')
- self.assertEqual(result, expected)
+ assert result == expected
def test_to_html_truncate_multi_index(self):
pytest.skip("unreliable on travis")
@@ -1281,7 +1281,7 @@ def test_to_html_truncate_multi_index(self):
</div>'''.format(div_style)
if compat.PY2:
expected = expected.decode('utf-8')
- self.assertEqual(result, expected)
+ assert result == expected
def test_to_html_truncate_multi_index_sparse_off(self):
pytest.skip("unreliable on travis")
@@ -1392,7 +1392,7 @@ def test_to_html_truncate_multi_index_sparse_off(self):
</div>'''.format(div_style)
if compat.PY2:
expected = expected.decode('utf-8')
- self.assertEqual(result, expected)
+ assert result == expected
def test_to_html_border(self):
df = DataFrame({'A': [1, 2]})
@@ -1424,7 +1424,7 @@ def test_to_html(self):
buf = StringIO()
retval = biggie.to_html(buf=buf)
assert retval is None
- self.assertEqual(buf.getvalue(), s)
+ assert buf.getvalue() == s
assert isinstance(s, compat.string_types)
@@ -1450,13 +1450,13 @@ def test_to_html_filename(self):
with open(path, 'r') as f:
s = biggie.to_html()
s2 = f.read()
- self.assertEqual(s, s2)
+ assert s == s2
frame = DataFrame(index=np.arange(200))
with tm.ensure_clean('test.html') as path:
frame.to_html(path)
with open(path, 'r') as f:
- self.assertEqual(frame.to_html(), f.read())
+ assert frame.to_html() == f.read()
def test_to_html_with_no_bold(self):
x = DataFrame({'x': np.random.randn(5)})
@@ -1507,7 +1507,7 @@ def test_to_html_multiindex(self):
' </tbody>\n'
'</table>')
- self.assertEqual(result, expected)
+ assert result == expected
columns = MultiIndex.from_tuples(list(zip(
range(4), np.mod(
@@ -1550,7 +1550,7 @@ def test_to_html_multiindex(self):
' </tbody>\n'
'</table>')
- self.assertEqual(result, expected)
+ assert result == expected
def test_to_html_justify(self):
df = DataFrame({'A': [6, 30000, 2],
@@ -1588,7 +1588,7 @@ def test_to_html_justify(self):
' </tr>\n'
' </tbody>\n'
'</table>')
- self.assertEqual(result, expected)
+ assert result == expected
result = df.to_html(justify='right')
expected = ('<table border="1" class="dataframe">\n'
@@ -1621,7 +1621,7 @@ def test_to_html_justify(self):
' </tr>\n'
' </tbody>\n'
'</table>')
- self.assertEqual(result, expected)
+ assert result == expected
def test_to_html_index(self):
index = ['foo', 'bar', 'baz']
@@ -1836,10 +1836,10 @@ def test_to_html_with_classes(self):
</table>
""").strip()
- self.assertEqual(result, expected)
+ assert result == expected
result = df.to_html(classes=["sortable", "draggable"])
- self.assertEqual(result, expected)
+ assert result == expected
def test_to_html_no_index_max_rows(self):
# GH https://github.com/pandas-dev/pandas/issues/14998
@@ -1858,7 +1858,7 @@ def test_to_html_no_index_max_rows(self):
</tr>
</tbody>
</table>""")
- self.assertEqual(result, expected)
+ assert result == expected
def test_to_html_notebook_has_style(self):
df = pd.DataFrame({"A": [1, 2, 3]})
diff --git a/pandas/tests/io/json/test_json_table_schema.py b/pandas/tests/io/json/test_json_table_schema.py
index 4ec13fa667452..0f77a886dd302 100644
--- a/pandas/tests/io/json/test_json_table_schema.py
+++ b/pandas/tests/io/json/test_json_table_schema.py
@@ -39,7 +39,7 @@ def test_build_table_schema(self):
],
'primaryKey': ['idx']
}
- self.assertEqual(result, expected)
+ assert result == expected
result = build_table_schema(self.df)
assert "pandas_version" in result
@@ -49,7 +49,7 @@ def test_series(self):
expected = {'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'foo', 'type': 'integer'}],
'primaryKey': ['index']}
- self.assertEqual(result, expected)
+ assert result == expected
result = build_table_schema(s)
assert 'pandas_version' in result
@@ -58,7 +58,7 @@ def tets_series_unnamed(self):
expected = {'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values', 'type': 'integer'}],
'primaryKey': ['index']}
- self.assertEqual(result, expected)
+ assert result == expected
def test_multiindex(self):
df = self.df.copy()
@@ -76,13 +76,13 @@ def test_multiindex(self):
],
'primaryKey': ['level_0', 'level_1']
}
- self.assertEqual(result, expected)
+ assert result == expected
df.index.names = ['idx0', None]
expected['fields'][0]['name'] = 'idx0'
expected['primaryKey'] = ['idx0', 'level_1']
result = build_table_schema(df, version=False)
- self.assertEqual(result, expected)
+ assert result == expected
class TestTableSchemaType(tm.TestCase):
@@ -91,23 +91,22 @@ def test_as_json_table_type_int_data(self):
int_data = [1, 2, 3]
int_types = [np.int, np.int16, np.int32, np.int64]
for t in int_types:
- self.assertEqual(as_json_table_type(np.array(int_data, dtype=t)),
- 'integer')
+ assert as_json_table_type(np.array(
+ int_data, dtype=t)) == 'integer'
def test_as_json_table_type_float_data(self):
float_data = [1., 2., 3.]
float_types = [np.float, np.float16, np.float32, np.float64]
for t in float_types:
- self.assertEqual(as_json_table_type(np.array(float_data,
- dtype=t)),
- 'number')
+ assert as_json_table_type(np.array(
+ float_data, dtype=t)) == 'number'
def test_as_json_table_type_bool_data(self):
bool_data = [True, False]
bool_types = [bool, np.bool]
for t in bool_types:
- self.assertEqual(as_json_table_type(np.array(bool_data, dtype=t)),
- 'boolean')
+ assert as_json_table_type(np.array(
+ bool_data, dtype=t)) == 'boolean'
def test_as_json_table_type_date_data(self):
date_data = [pd.to_datetime(['2016']),
@@ -116,20 +115,19 @@ def test_as_json_table_type_date_data(self):
pd.Series(pd.to_datetime(['2016'], utc=True)),
pd.period_range('2016', freq='A', periods=3)]
for arr in date_data:
- self.assertEqual(as_json_table_type(arr), 'datetime')
+ assert as_json_table_type(arr) == 'datetime'
def test_as_json_table_type_string_data(self):
strings = [pd.Series(['a', 'b']), pd.Index(['a', 'b'])]
for t in strings:
- self.assertEqual(as_json_table_type(t), 'string')
+ assert as_json_table_type(t) == 'string'
def test_as_json_table_type_categorical_data(self):
- self.assertEqual(as_json_table_type(pd.Categorical(['a'])), 'any')
- self.assertEqual(as_json_table_type(pd.Categorical([1])), 'any')
- self.assertEqual(as_json_table_type(
- pd.Series(pd.Categorical([1]))), 'any')
- self.assertEqual(as_json_table_type(pd.CategoricalIndex([1])), 'any')
- self.assertEqual(as_json_table_type(pd.Categorical([1])), 'any')
+ assert as_json_table_type(pd.Categorical(['a'])) == 'any'
+ assert as_json_table_type(pd.Categorical([1])) == 'any'
+ assert as_json_table_type(pd.Series(pd.Categorical([1]))) == 'any'
+ assert as_json_table_type(pd.CategoricalIndex([1])) == 'any'
+ assert as_json_table_type(pd.Categorical([1])) == 'any'
# ------
# dtypes
@@ -137,38 +135,38 @@ def test_as_json_table_type_categorical_data(self):
def test_as_json_table_type_int_dtypes(self):
integers = [np.int, np.int16, np.int32, np.int64]
for t in integers:
- self.assertEqual(as_json_table_type(t), 'integer')
+ assert as_json_table_type(t) == 'integer'
def test_as_json_table_type_float_dtypes(self):
floats = [np.float, np.float16, np.float32, np.float64]
for t in floats:
- self.assertEqual(as_json_table_type(t), 'number')
+ assert as_json_table_type(t) == 'number'
def test_as_json_table_type_bool_dtypes(self):
bools = [bool, np.bool]
for t in bools:
- self.assertEqual(as_json_table_type(t), 'boolean')
+ assert as_json_table_type(t) == 'boolean'
def test_as_json_table_type_date_dtypes(self):
# TODO: datedate.date? datetime.time?
dates = [np.datetime64, np.dtype("<M8[ns]"), PeriodDtype(),
DatetimeTZDtype('ns', 'US/Central')]
for t in dates:
- self.assertEqual(as_json_table_type(t), 'datetime')
+ assert as_json_table_type(t) == 'datetime'
def test_as_json_table_type_timedelta_dtypes(self):
durations = [np.timedelta64, np.dtype("<m8[ns]")]
for t in durations:
- self.assertEqual(as_json_table_type(t), 'duration')
+ assert as_json_table_type(t) == 'duration'
def test_as_json_table_type_string_dtypes(self):
strings = [object] # TODO
for t in strings:
- self.assertEqual(as_json_table_type(t), 'string')
+ assert as_json_table_type(t) == 'string'
def test_as_json_table_type_categorical_dtypes(self):
- self.assertEqual(as_json_table_type(pd.Categorical), 'any')
- self.assertEqual(as_json_table_type(CategoricalDtype()), 'any')
+ assert as_json_table_type(pd.Categorical) == 'any'
+ assert as_json_table_type(CategoricalDtype()) == 'any'
class TestTableOrient(tm.TestCase):
@@ -269,7 +267,7 @@ def test_to_json(self):
]),
]
expected = OrderedDict([('schema', schema), ('data', data)])
- self.assertEqual(result, expected)
+ assert result == expected
def test_to_json_float_index(self):
data = pd.Series(1, index=[1., 2.])
@@ -286,7 +284,7 @@ def test_to_json_float_index(self):
('data', [OrderedDict([('index', 1.0), ('values', 1)]),
OrderedDict([('index', 2.0), ('values', 1)])])])
)
- self.assertEqual(result, expected)
+ assert result == expected
def test_to_json_period_index(self):
idx = pd.period_range('2016', freq='Q-JAN', periods=2)
@@ -304,7 +302,7 @@ def test_to_json_period_index(self):
OrderedDict([('index', '2016-02-01T00:00:00.000Z'),
('values', 1)])]
expected = OrderedDict([('schema', schema), ('data', data)])
- self.assertEqual(result, expected)
+ assert result == expected
def test_to_json_categorical_index(self):
data = pd.Series(1, pd.CategoricalIndex(['a', 'b']))
@@ -324,7 +322,7 @@ def test_to_json_categorical_index(self):
('values', 1)]),
OrderedDict([('index', 'b'), ('values', 1)])])])
)
- self.assertEqual(result, expected)
+ assert result == expected
def test_date_format_raises(self):
with pytest.raises(ValueError):
@@ -340,7 +338,7 @@ def test_make_field_int(self):
for kind in kinds:
result = make_field(kind)
expected = {"name": "name", "type": 'integer'}
- self.assertEqual(result, expected)
+ assert result == expected
def test_make_field_float(self):
data = [1., 2., 3.]
@@ -348,7 +346,7 @@ def test_make_field_float(self):
for kind in kinds:
result = make_field(kind)
expected = {"name": "name", "type": 'number'}
- self.assertEqual(result, expected)
+ assert result == expected
def test_make_field_datetime(self):
data = [1., 2., 3.]
@@ -357,19 +355,19 @@ def test_make_field_datetime(self):
for kind in kinds:
result = make_field(kind)
expected = {"name": "values", "type": 'datetime'}
- self.assertEqual(result, expected)
+ assert result == expected
kinds = [pd.Series(pd.to_datetime(data, utc=True), name='values'),
pd.to_datetime(data, utc=True)]
for kind in kinds:
result = make_field(kind)
expected = {"name": "values", "type": 'datetime', "tz": "UTC"}
- self.assertEqual(result, expected)
+ assert result == expected
arr = pd.period_range('2016', freq='A-DEC', periods=4)
result = make_field(arr)
expected = {"name": "values", "type": 'datetime', "freq": "A-DEC"}
- self.assertEqual(result, expected)
+ assert result == expected
def test_make_field_categorical(self):
data = ['a', 'b', 'c']
@@ -381,14 +379,14 @@ def test_make_field_categorical(self):
expected = {"name": "cats", "type": "any",
"constraints": {"enum": data},
"ordered": ordered}
- self.assertEqual(result, expected)
+ assert result == expected
arr = pd.CategoricalIndex(data, ordered=ordered, name='cats')
result = make_field(arr)
expected = {"name": "cats", "type": "any",
"constraints": {"enum": data},
"ordered": ordered}
- self.assertEqual(result, expected)
+ assert result == expected
def test_categorical(self):
s = pd.Series(pd.Categorical(['a', 'b', 'a']))
@@ -409,37 +407,37 @@ def test_categorical(self):
('data', [OrderedDict([('idx', 0), ('values', 'a')]),
OrderedDict([('idx', 1), ('values', 'b')]),
OrderedDict([('idx', 2), ('values', 'a')])])])
- self.assertEqual(result, expected)
+ assert result == expected
def test_set_default_names_unset(self):
data = pd.Series(1, pd.Index([1]))
result = set_default_names(data)
- self.assertEqual(result.index.name, 'index')
+ assert result.index.name == 'index'
def test_set_default_names_set(self):
data = pd.Series(1, pd.Index([1], name='myname'))
result = set_default_names(data)
- self.assertEqual(result.index.name, 'myname')
+ assert result.index.name == 'myname'
def test_set_default_names_mi_unset(self):
data = pd.Series(
1, pd.MultiIndex.from_product([('a', 'b'), ('c', 'd')]))
result = set_default_names(data)
- self.assertEqual(result.index.names, ['level_0', 'level_1'])
+ assert result.index.names == ['level_0', 'level_1']
def test_set_default_names_mi_set(self):
data = pd.Series(
1, pd.MultiIndex.from_product([('a', 'b'), ('c', 'd')],
names=['n1', 'n2']))
result = set_default_names(data)
- self.assertEqual(result.index.names, ['n1', 'n2'])
+ assert result.index.names == ['n1', 'n2']
def test_set_default_names_mi_partion(self):
data = pd.Series(
1, pd.MultiIndex.from_product([('a', 'b'), ('c', 'd')],
names=['n1', None]))
result = set_default_names(data)
- self.assertEqual(result.index.names, ['n1', 'level_1'])
+ assert result.index.names == ['n1', 'level_1']
def test_timestamp_in_columns(self):
df = pd.DataFrame([[1, 2]], columns=[pd.Timestamp('2016'),
diff --git a/pandas/tests/io/json/test_normalize.py b/pandas/tests/io/json/test_normalize.py
index 42456d2630886..d24250f534521 100644
--- a/pandas/tests/io/json/test_normalize.py
+++ b/pandas/tests/io/json/test_normalize.py
@@ -221,7 +221,7 @@ def test_flat_stays_flat(self):
result = nested_to_record(recs)
expected = recs
- self.assertEqual(result, expected)
+ assert result == expected
def test_one_level_deep_flattens(self):
data = dict(flat1=1,
@@ -232,7 +232,7 @@ def test_one_level_deep_flattens(self):
'dict1.d': 2,
'flat1': 1}
- self.assertEqual(result, expected)
+ assert result == expected
def test_nested_flattens(self):
data = dict(flat1=1,
@@ -248,7 +248,7 @@ def test_nested_flattens(self):
'nested.e.c': 1,
'nested.e.d': 2}
- self.assertEqual(result, expected)
+ assert result == expected
def test_json_normalize_errors(self):
# GH14583: If meta keys are not always present
@@ -298,7 +298,7 @@ def test_json_normalize_errors(self):
'price': {0: '0', 1: '0', 2: '0', 3: '0'},
'symbol': {0: 'AAPL', 1: 'GOOG', 2: 'AAPL', 3: 'GOOG'}}
- self.assertEqual(j.fillna('').to_dict(), expected)
+ assert j.fillna('').to_dict() == expected
pytest.raises(KeyError,
json_normalize, data=i['Trades'],
diff --git a/pandas/tests/io/json/test_pandas.py b/pandas/tests/io/json/test_pandas.py
index e7a04e12d7fa4..2e92910f82b74 100644
--- a/pandas/tests/io/json/test_pandas.py
+++ b/pandas/tests/io/json/test_pandas.py
@@ -272,8 +272,7 @@ def _check_all_orients(df, dtype=None, convert_axes=True,
# basic
_check_all_orients(self.frame)
- self.assertEqual(self.frame.to_json(),
- self.frame.to_json(orient="columns"))
+ assert self.frame.to_json() == self.frame.to_json(orient="columns")
_check_all_orients(self.intframe, dtype=self.intframe.values.dtype)
_check_all_orients(self.intframe, dtype=False)
@@ -387,27 +386,27 @@ def test_frame_from_json_nones(self):
def test_frame_to_json_float_precision(self):
df = pd.DataFrame([dict(a_float=0.95)])
encoded = df.to_json(double_precision=1)
- self.assertEqual(encoded, '{"a_float":{"0":1.0}}')
+ assert encoded == '{"a_float":{"0":1.0}}'
df = pd.DataFrame([dict(a_float=1.95)])
encoded = df.to_json(double_precision=1)
- self.assertEqual(encoded, '{"a_float":{"0":2.0}}')
+ assert encoded == '{"a_float":{"0":2.0}}'
df = pd.DataFrame([dict(a_float=-1.95)])
encoded = df.to_json(double_precision=1)
- self.assertEqual(encoded, '{"a_float":{"0":-2.0}}')
+ assert encoded == '{"a_float":{"0":-2.0}}'
df = pd.DataFrame([dict(a_float=0.995)])
encoded = df.to_json(double_precision=2)
- self.assertEqual(encoded, '{"a_float":{"0":1.0}}')
+ assert encoded == '{"a_float":{"0":1.0}}'
df = pd.DataFrame([dict(a_float=0.9995)])
encoded = df.to_json(double_precision=3)
- self.assertEqual(encoded, '{"a_float":{"0":1.0}}')
+ assert encoded == '{"a_float":{"0":1.0}}'
df = pd.DataFrame([dict(a_float=0.99999999999999944)])
encoded = df.to_json(double_precision=15)
- self.assertEqual(encoded, '{"a_float":{"0":1.0}}')
+ assert encoded == '{"a_float":{"0":1.0}}'
def test_frame_to_json_except(self):
df = DataFrame([1, 2, 3])
@@ -566,8 +565,7 @@ def _check_all_orients(series, dtype=None, check_index_type=True):
# basic
_check_all_orients(self.series)
- self.assertEqual(self.series.to_json(),
- self.series.to_json(orient="index"))
+ assert self.series.to_json() == self.series.to_json(orient="index")
objSeries = Series([str(d) for d in self.objSeries],
index=self.objSeries.index,
@@ -576,7 +574,7 @@ def _check_all_orients(series, dtype=None, check_index_type=True):
# empty_series has empty index with object dtype
# which cannot be revert
- self.assertEqual(self.empty_series.index.dtype, np.object_)
+ assert self.empty_series.index.dtype == np.object_
_check_all_orients(self.empty_series, check_index_type=False)
_check_all_orients(self.ts)
@@ -806,25 +804,25 @@ def test_url(self):
url = 'https://api.github.com/repos/pandas-dev/pandas/issues?per_page=5' # noqa
result = read_json(url, convert_dates=True)
for c in ['created_at', 'closed_at', 'updated_at']:
- self.assertEqual(result[c].dtype, 'datetime64[ns]')
+ assert result[c].dtype == 'datetime64[ns]'
def test_timedelta(self):
converter = lambda x: pd.to_timedelta(x, unit='ms')
s = Series([timedelta(23), timedelta(seconds=5)])
- self.assertEqual(s.dtype, 'timedelta64[ns]')
+ assert s.dtype == 'timedelta64[ns]'
result = pd.read_json(s.to_json(), typ='series').apply(converter)
assert_series_equal(result, s)
s = Series([timedelta(23), timedelta(seconds=5)],
index=pd.Index([0, 1]))
- self.assertEqual(s.dtype, 'timedelta64[ns]')
+ assert s.dtype == 'timedelta64[ns]'
result = pd.read_json(s.to_json(), typ='series').apply(converter)
assert_series_equal(result, s)
frame = DataFrame([timedelta(23), timedelta(seconds=5)])
- self.assertEqual(frame[0].dtype, 'timedelta64[ns]')
+ assert frame[0].dtype == 'timedelta64[ns]'
assert_frame_equal(frame, pd.read_json(frame.to_json())
.apply(converter))
@@ -868,8 +866,8 @@ def default(obj):
columns=['a', 'b'])]
expected = ('[9,[[1,null],["STR",null],[[["mathjs","Complex"],'
'["re",4.0],["im",-5.0]],"N\\/A"]]]')
- self.assertEqual(expected, dumps(df_list, default_handler=default,
- orient="values"))
+ assert dumps(df_list, default_handler=default,
+ orient="values") == expected
def test_default_handler_numpy_unsupported_dtype(self):
# GH12554 to_json raises 'Unhandled numpy dtype 15'
@@ -879,8 +877,7 @@ def test_default_handler_numpy_unsupported_dtype(self):
expected = ('[["(1+0j)","(nan+0j)"],'
'["(2.3+0j)","(nan+0j)"],'
'["(4-5j)","(1.2+0j)"]]')
- self.assertEqual(expected, df.to_json(default_handler=str,
- orient="values"))
+ assert df.to_json(default_handler=str, orient="values") == expected
def test_default_handler_raises(self):
def my_handler_raises(obj):
@@ -899,11 +896,11 @@ def test_categorical(self):
expected = df.to_json()
df["B"] = df["A"].astype('category')
- self.assertEqual(expected, df.to_json())
+ assert expected == df.to_json()
s = df["A"]
sc = df["B"]
- self.assertEqual(s.to_json(), sc.to_json())
+ assert s.to_json() == sc.to_json()
def test_datetime_tz(self):
# GH4377 df.to_json segfaults with non-ndarray blocks
@@ -917,11 +914,11 @@ def test_datetime_tz(self):
df_naive = df.copy()
df_naive['A'] = tz_naive
expected = df_naive.to_json()
- self.assertEqual(expected, df.to_json())
+ assert expected == df.to_json()
stz = Series(tz_range)
s_naive = Series(tz_naive)
- self.assertEqual(stz.to_json(), s_naive.to_json())
+ assert stz.to_json() == s_naive.to_json()
def test_sparse(self):
# GH4377 df.to_json segfaults with non-ndarray blocks
@@ -930,33 +927,33 @@ def test_sparse(self):
sdf = df.to_sparse()
expected = df.to_json()
- self.assertEqual(expected, sdf.to_json())
+ assert expected == sdf.to_json()
s = pd.Series(np.random.randn(10))
s.loc[:8] = np.nan
ss = s.to_sparse()
expected = s.to_json()
- self.assertEqual(expected, ss.to_json())
+ assert expected == ss.to_json()
def test_tz_is_utc(self):
from pandas.io.json import dumps
exp = '"2013-01-10T05:00:00.000Z"'
ts = Timestamp('2013-01-10 05:00:00Z')
- self.assertEqual(exp, dumps(ts, iso_dates=True))
+ assert dumps(ts, iso_dates=True) == exp
dt = ts.to_pydatetime()
- self.assertEqual(exp, dumps(dt, iso_dates=True))
+ assert dumps(dt, iso_dates=True) == exp
ts = Timestamp('2013-01-10 00:00:00', tz='US/Eastern')
- self.assertEqual(exp, dumps(ts, iso_dates=True))
+ assert dumps(ts, iso_dates=True) == exp
dt = ts.to_pydatetime()
- self.assertEqual(exp, dumps(dt, iso_dates=True))
+ assert dumps(dt, iso_dates=True) == exp
ts = Timestamp('2013-01-10 00:00:00-0500')
- self.assertEqual(exp, dumps(ts, iso_dates=True))
+ assert dumps(ts, iso_dates=True) == exp
dt = ts.to_pydatetime()
- self.assertEqual(exp, dumps(dt, iso_dates=True))
+ assert dumps(dt, iso_dates=True) == exp
def test_tz_range_is_utc(self):
from pandas.io.json import dumps
@@ -967,26 +964,26 @@ def test_tz_range_is_utc(self):
'"1":"2013-01-02T05:00:00.000Z"}}')
tz_range = pd.date_range('2013-01-01 05:00:00Z', periods=2)
- self.assertEqual(exp, dumps(tz_range, iso_dates=True))
+ assert dumps(tz_range, iso_dates=True) == exp
dti = pd.DatetimeIndex(tz_range)
- self.assertEqual(exp, dumps(dti, iso_dates=True))
+ assert dumps(dti, iso_dates=True) == exp
df = DataFrame({'DT': dti})
- self.assertEqual(dfexp, dumps(df, iso_dates=True))
+ assert dumps(df, iso_dates=True) == dfexp
tz_range = pd.date_range('2013-01-01 00:00:00', periods=2,
tz='US/Eastern')
- self.assertEqual(exp, dumps(tz_range, iso_dates=True))
+ assert dumps(tz_range, iso_dates=True) == exp
dti = pd.DatetimeIndex(tz_range)
- self.assertEqual(exp, dumps(dti, iso_dates=True))
+ assert dumps(dti, iso_dates=True) == exp
df = DataFrame({'DT': dti})
- self.assertEqual(dfexp, dumps(df, iso_dates=True))
+ assert dumps(df, iso_dates=True) == dfexp
tz_range = pd.date_range('2013-01-01 00:00:00-0500', periods=2)
- self.assertEqual(exp, dumps(tz_range, iso_dates=True))
+ assert dumps(tz_range, iso_dates=True) == exp
dti = pd.DatetimeIndex(tz_range)
- self.assertEqual(exp, dumps(dti, iso_dates=True))
+ assert dumps(dti, iso_dates=True) == exp
df = DataFrame({'DT': dti})
- self.assertEqual(dfexp, dumps(df, iso_dates=True))
+ assert dumps(df, iso_dates=True) == dfexp
def test_read_jsonl(self):
# GH9180
@@ -1018,12 +1015,12 @@ def test_to_jsonl(self):
df = DataFrame([[1, 2], [1, 2]], columns=['a', 'b'])
result = df.to_json(orient="records", lines=True)
expected = '{"a":1,"b":2}\n{"a":1,"b":2}'
- self.assertEqual(result, expected)
+ assert result == expected
df = DataFrame([["foo}", "bar"], ['foo"', "bar"]], columns=['a', 'b'])
result = df.to_json(orient="records", lines=True)
expected = '{"a":"foo}","b":"bar"}\n{"a":"foo\\"","b":"bar"}'
- self.assertEqual(result, expected)
+ assert result == expected
assert_frame_equal(pd.read_json(result, lines=True), df)
# GH15096: escaped characters in columns and data
@@ -1032,7 +1029,7 @@ def test_to_jsonl(self):
result = df.to_json(orient="records", lines=True)
expected = ('{"a\\\\":"foo\\\\","b":"bar"}\n'
'{"a\\\\":"foo\\"","b":"bar"}')
- self.assertEqual(result, expected)
+ assert result == expected
assert_frame_equal(pd.read_json(result, lines=True), df)
def test_latin_encoding(self):
@@ -1086,4 +1083,4 @@ def test_data_frame_size_after_to_json(self):
df.to_json()
size_after = df.memory_usage(index=True, deep=True).sum()
- self.assertEqual(size_before, size_after)
+ assert size_before == size_after
diff --git a/pandas/tests/io/json/test_ujson.py b/pandas/tests/io/json/test_ujson.py
index 12d5cd14197b8..b132322952024 100644
--- a/pandas/tests/io/json/test_ujson.py
+++ b/pandas/tests/io/json/test_ujson.py
@@ -35,49 +35,49 @@ def test_encodeDecimal(self):
sut = decimal.Decimal("1337.1337")
encoded = ujson.encode(sut, double_precision=15)
decoded = ujson.decode(encoded)
- self.assertEqual(decoded, 1337.1337)
+ assert decoded == 1337.1337
sut = decimal.Decimal("0.95")
encoded = ujson.encode(sut, double_precision=1)
- self.assertEqual(encoded, "1.0")
+ assert encoded == "1.0"
decoded = ujson.decode(encoded)
- self.assertEqual(decoded, 1.0)
+ assert decoded == 1.0
sut = decimal.Decimal("0.94")
encoded = ujson.encode(sut, double_precision=1)
- self.assertEqual(encoded, "0.9")
+ assert encoded == "0.9"
decoded = ujson.decode(encoded)
- self.assertEqual(decoded, 0.9)
+ assert decoded == 0.9
sut = decimal.Decimal("1.95")
encoded = ujson.encode(sut, double_precision=1)
- self.assertEqual(encoded, "2.0")
+ assert encoded == "2.0"
decoded = ujson.decode(encoded)
- self.assertEqual(decoded, 2.0)
+ assert decoded == 2.0
sut = decimal.Decimal("-1.95")
encoded = ujson.encode(sut, double_precision=1)
- self.assertEqual(encoded, "-2.0")
+ assert encoded == "-2.0"
decoded = ujson.decode(encoded)
- self.assertEqual(decoded, -2.0)
+ assert decoded == -2.0
sut = decimal.Decimal("0.995")
encoded = ujson.encode(sut, double_precision=2)
- self.assertEqual(encoded, "1.0")
+ assert encoded == "1.0"
decoded = ujson.decode(encoded)
- self.assertEqual(decoded, 1.0)
+ assert decoded == 1.0
sut = decimal.Decimal("0.9995")
encoded = ujson.encode(sut, double_precision=3)
- self.assertEqual(encoded, "1.0")
+ assert encoded == "1.0"
decoded = ujson.decode(encoded)
- self.assertEqual(decoded, 1.0)
+ assert decoded == 1.0
sut = decimal.Decimal("0.99999999999999944")
encoded = ujson.encode(sut, double_precision=15)
- self.assertEqual(encoded, "1.0")
+ assert encoded == "1.0"
decoded = ujson.decode(encoded)
- self.assertEqual(decoded, 1.0)
+ assert decoded == 1.0
def test_encodeStringConversion(self):
input = "A string \\ / \b \f \n \r \t </script> &"
@@ -88,9 +88,9 @@ def test_encodeStringConversion(self):
def helper(expected_output, **encode_kwargs):
output = ujson.encode(input, **encode_kwargs)
- self.assertEqual(input, json.loads(output))
- self.assertEqual(output, expected_output)
- self.assertEqual(input, ujson.decode(output))
+ assert input == json.loads(output)
+ assert output == expected_output
+ assert input == ujson.decode(output)
# Default behavior assumes encode_html_chars=False.
helper(not_html_encoded, ensure_ascii=True)
@@ -108,19 +108,19 @@ def test_doubleLongIssue(self):
sut = {u('a'): -4342969734183514}
encoded = json.dumps(sut)
decoded = json.loads(encoded)
- self.assertEqual(sut, decoded)
+ assert sut == decoded
encoded = ujson.encode(sut, double_precision=15)
decoded = ujson.decode(encoded)
- self.assertEqual(sut, decoded)
+ assert sut == decoded
def test_doubleLongDecimalIssue(self):
sut = {u('a'): -12345678901234.56789012}
encoded = json.dumps(sut)
decoded = json.loads(encoded)
- self.assertEqual(sut, decoded)
+ assert sut == decoded
encoded = ujson.encode(sut, double_precision=15)
decoded = ujson.decode(encoded)
- self.assertEqual(sut, decoded)
+ assert sut == decoded
def test_encodeNonCLocale(self):
import locale
@@ -132,8 +132,8 @@ def test_encodeNonCLocale(self):
locale.setlocale(locale.LC_NUMERIC, 'Italian_Italy')
except:
pytest.skip('Could not set locale for testing')
- self.assertEqual(ujson.loads(ujson.dumps(4.78e60)), 4.78e60)
- self.assertEqual(ujson.loads('4.78', precise_float=True), 4.78)
+ assert ujson.loads(ujson.dumps(4.78e60)) == 4.78e60
+ assert ujson.loads('4.78', precise_float=True) == 4.78
locale.setlocale(locale.LC_NUMERIC, savedlocale)
def test_encodeDecodeLongDecimal(self):
@@ -145,17 +145,17 @@ def test_decimalDecodeTestPrecise(self):
sut = {u('a'): 4.56}
encoded = ujson.encode(sut)
decoded = ujson.decode(encoded, precise_float=True)
- self.assertEqual(sut, decoded)
+ assert sut == decoded
@pytest.mark.skipif(compat.is_platform_windows() and not compat.PY3,
reason="buggy on win-64 for py2")
def test_encodeDoubleTinyExponential(self):
num = 1e-40
- self.assertEqual(num, ujson.decode(ujson.encode(num)))
+ assert num == ujson.decode(ujson.encode(num))
num = 1e-100
- self.assertEqual(num, ujson.decode(ujson.encode(num)))
+ assert num == ujson.decode(ujson.encode(num))
num = -1e-45
- self.assertEqual(num, ujson.decode(ujson.encode(num)))
+ assert num == ujson.decode(ujson.encode(num))
num = -1e-145
assert np.allclose(num, ujson.decode(ujson.encode(num)))
@@ -175,27 +175,27 @@ def test_encodeDictWithUnicodeKeys(self):
def test_encodeDoubleConversion(self):
input = math.pi
output = ujson.encode(input)
- self.assertEqual(round(input, 5), round(json.loads(output), 5))
- self.assertEqual(round(input, 5), round(ujson.decode(output), 5))
+ assert round(input, 5) == round(json.loads(output), 5)
+ assert round(input, 5) == round(ujson.decode(output), 5)
def test_encodeWithDecimal(self):
input = 1.0
output = ujson.encode(input)
- self.assertEqual(output, "1.0")
+ assert output == "1.0"
def test_encodeDoubleNegConversion(self):
input = -math.pi
output = ujson.encode(input)
- self.assertEqual(round(input, 5), round(json.loads(output), 5))
- self.assertEqual(round(input, 5), round(ujson.decode(output), 5))
+ assert round(input, 5) == round(json.loads(output), 5)
+ assert round(input, 5) == round(ujson.decode(output), 5)
def test_encodeArrayOfNestedArrays(self):
input = [[[[]]]] * 20
output = ujson.encode(input)
- self.assertEqual(input, json.loads(output))
- # self.assertEqual(output, json.dumps(input))
- self.assertEqual(input, ujson.decode(output))
+ assert input == json.loads(output)
+ # assert output == json.dumps(input)
+ assert input == ujson.decode(output)
input = np.array(input)
tm.assert_numpy_array_equal(input, ujson.decode(
output, numpy=True, dtype=input.dtype))
@@ -203,25 +203,25 @@ def test_encodeArrayOfNestedArrays(self):
def test_encodeArrayOfDoubles(self):
input = [31337.31337, 31337.31337, 31337.31337, 31337.31337] * 10
output = ujson.encode(input)
- self.assertEqual(input, json.loads(output))
- # self.assertEqual(output, json.dumps(input))
- self.assertEqual(input, ujson.decode(output))
+ assert input == json.loads(output)
+ # assert output == json.dumps(input)
+ assert input == ujson.decode(output)
tm.assert_numpy_array_equal(
np.array(input), ujson.decode(output, numpy=True))
def test_doublePrecisionTest(self):
input = 30.012345678901234
output = ujson.encode(input, double_precision=15)
- self.assertEqual(input, json.loads(output))
- self.assertEqual(input, ujson.decode(output))
+ assert input == json.loads(output)
+ assert input == ujson.decode(output)
output = ujson.encode(input, double_precision=9)
- self.assertEqual(round(input, 9), json.loads(output))
- self.assertEqual(round(input, 9), ujson.decode(output))
+ assert round(input, 9) == json.loads(output)
+ assert round(input, 9) == ujson.decode(output)
output = ujson.encode(input, double_precision=3)
- self.assertEqual(round(input, 3), json.loads(output))
- self.assertEqual(round(input, 3), ujson.decode(output))
+ assert round(input, 3) == json.loads(output)
+ assert round(input, 3) == ujson.decode(output)
def test_invalidDoublePrecision(self):
input = 30.12345678901234567890
@@ -238,9 +238,9 @@ def test_invalidDoublePrecision(self):
def test_encodeStringConversion2(self):
input = "A string \\ / \b \f \n \r \t"
output = ujson.encode(input)
- self.assertEqual(input, json.loads(output))
- self.assertEqual(output, '"A string \\\\ \\/ \\b \\f \\n \\r \\t"')
- self.assertEqual(input, ujson.decode(output))
+ assert input == json.loads(output)
+ assert output == '"A string \\\\ \\/ \\b \\f \\n \\r \\t"'
+ assert input == ujson.decode(output)
pass
def test_decodeUnicodeConversion(self):
@@ -250,38 +250,38 @@ def test_encodeUnicodeConversion1(self):
input = "Räksmörgås اسامة بن محمد بن عوض بن لادن"
enc = ujson.encode(input)
dec = ujson.decode(enc)
- self.assertEqual(enc, json_unicode(input))
- self.assertEqual(dec, json.loads(enc))
+ assert enc == json_unicode(input)
+ assert dec == json.loads(enc)
def test_encodeControlEscaping(self):
input = "\x19"
enc = ujson.encode(input)
dec = ujson.decode(enc)
- self.assertEqual(input, dec)
- self.assertEqual(enc, json_unicode(input))
+ assert input == dec
+ assert enc == json_unicode(input)
def test_encodeUnicodeConversion2(self):
input = "\xe6\x97\xa5\xd1\x88"
enc = ujson.encode(input)
dec = ujson.decode(enc)
- self.assertEqual(enc, json_unicode(input))
- self.assertEqual(dec, json.loads(enc))
+ assert enc == json_unicode(input)
+ assert dec == json.loads(enc)
def test_encodeUnicodeSurrogatePair(self):
input = "\xf0\x90\x8d\x86"
enc = ujson.encode(input)
dec = ujson.decode(enc)
- self.assertEqual(enc, json_unicode(input))
- self.assertEqual(dec, json.loads(enc))
+ assert enc == json_unicode(input)
+ assert dec == json.loads(enc)
def test_encodeUnicode4BytesUTF8(self):
input = "\xf0\x91\x80\xb0TRAILINGNORMAL"
enc = ujson.encode(input)
dec = ujson.decode(enc)
- self.assertEqual(enc, json_unicode(input))
- self.assertEqual(dec, json.loads(enc))
+ assert enc == json_unicode(input)
+ assert dec == json.loads(enc)
def test_encodeUnicode4BytesUTF8Highest(self):
input = "\xf3\xbf\xbf\xbfTRAILINGNORMAL"
@@ -289,16 +289,16 @@ def test_encodeUnicode4BytesUTF8Highest(self):
dec = ujson.decode(enc)
- self.assertEqual(enc, json_unicode(input))
- self.assertEqual(dec, json.loads(enc))
+ assert enc == json_unicode(input)
+ assert dec == json.loads(enc)
def test_encodeArrayInArray(self):
input = [[[[]]]]
output = ujson.encode(input)
- self.assertEqual(input, json.loads(output))
- self.assertEqual(output, json.dumps(input))
- self.assertEqual(input, ujson.decode(output))
+ assert input == json.loads(output)
+ assert output == json.dumps(input)
+ assert input == ujson.decode(output)
tm.assert_numpy_array_equal(
np.array(input), ujson.decode(output, numpy=True))
pass
@@ -306,32 +306,32 @@ def test_encodeArrayInArray(self):
def test_encodeIntConversion(self):
input = 31337
output = ujson.encode(input)
- self.assertEqual(input, json.loads(output))
- self.assertEqual(output, json.dumps(input))
- self.assertEqual(input, ujson.decode(output))
+ assert input == json.loads(output)
+ assert output == json.dumps(input)
+ assert input == ujson.decode(output)
pass
def test_encodeIntNegConversion(self):
input = -31337
output = ujson.encode(input)
- self.assertEqual(input, json.loads(output))
- self.assertEqual(output, json.dumps(input))
- self.assertEqual(input, ujson.decode(output))
+ assert input == json.loads(output)
+ assert output == json.dumps(input)
+ assert input == ujson.decode(output)
pass
def test_encodeLongNegConversion(self):
input = -9223372036854775808
output = ujson.encode(input)
- self.assertEqual(input, json.loads(output))
- self.assertEqual(output, json.dumps(input))
- self.assertEqual(input, ujson.decode(output))
+ assert input == json.loads(output)
+ assert output == json.dumps(input)
+ assert input == ujson.decode(output)
def test_encodeListConversion(self):
input = [1, 2, 3, 4]
output = ujson.encode(input)
- self.assertEqual(input, json.loads(output))
- self.assertEqual(input, ujson.decode(output))
+ assert input == json.loads(output)
+ assert input == ujson.decode(output)
tm.assert_numpy_array_equal(
np.array(input), ujson.decode(output, numpy=True))
pass
@@ -339,41 +339,41 @@ def test_encodeListConversion(self):
def test_encodeDictConversion(self):
input = {"k1": 1, "k2": 2, "k3": 3, "k4": 4}
output = ujson.encode(input) # noqa
- self.assertEqual(input, json.loads(output))
- self.assertEqual(input, ujson.decode(output))
- self.assertEqual(input, ujson.decode(output))
+ assert input == json.loads(output)
+ assert input == ujson.decode(output)
+ assert input == ujson.decode(output)
pass
def test_encodeNoneConversion(self):
input = None
output = ujson.encode(input)
- self.assertEqual(input, json.loads(output))
- self.assertEqual(output, json.dumps(input))
- self.assertEqual(input, ujson.decode(output))
+ assert input == json.loads(output)
+ assert output == json.dumps(input)
+ assert input == ujson.decode(output)
pass
def test_encodeTrueConversion(self):
input = True
output = ujson.encode(input)
- self.assertEqual(input, json.loads(output))
- self.assertEqual(output, json.dumps(input))
- self.assertEqual(input, ujson.decode(output))
+ assert input == json.loads(output)
+ assert output == json.dumps(input)
+ assert input == ujson.decode(output)
pass
def test_encodeFalseConversion(self):
input = False
output = ujson.encode(input)
- self.assertEqual(input, json.loads(output))
- self.assertEqual(output, json.dumps(input))
- self.assertEqual(input, ujson.decode(output))
+ assert input == json.loads(output)
+ assert output == json.dumps(input)
+ assert input == ujson.decode(output)
def test_encodeDatetimeConversion(self):
ts = time.time()
input = datetime.datetime.fromtimestamp(ts)
output = ujson.encode(input, date_unit='s')
expected = calendar.timegm(input.utctimetuple())
- self.assertEqual(int(expected), json.loads(output))
- self.assertEqual(int(expected), ujson.decode(output))
+ assert int(expected) == json.loads(output)
+ assert int(expected) == ujson.decode(output)
def test_encodeDateConversion(self):
ts = time.time()
@@ -383,8 +383,8 @@ def test_encodeDateConversion(self):
tup = (input.year, input.month, input.day, 0, 0, 0)
expected = calendar.timegm(tup)
- self.assertEqual(int(expected), json.loads(output))
- self.assertEqual(int(expected), ujson.decode(output))
+ assert int(expected) == json.loads(output)
+ assert int(expected) == ujson.decode(output)
def test_encodeTimeConversion(self):
tests = [
@@ -395,7 +395,7 @@ def test_encodeTimeConversion(self):
for test in tests:
output = ujson.encode(test)
expected = '"%s"' % test.isoformat()
- self.assertEqual(expected, output)
+ assert expected == output
def test_encodeTimeConversion_pytz(self):
# GH11473 to_json segfaults with timezone-aware datetimes
@@ -404,7 +404,7 @@ def test_encodeTimeConversion_pytz(self):
test = datetime.time(10, 12, 15, 343243, pytz.utc)
output = ujson.encode(test)
expected = '"%s"' % test.isoformat()
- self.assertEqual(expected, output)
+ assert expected == output
def test_encodeTimeConversion_dateutil(self):
# GH11473 to_json segfaults with timezone-aware datetimes
@@ -413,7 +413,7 @@ def test_encodeTimeConversion_dateutil(self):
test = datetime.time(10, 12, 15, 343243, dateutil.tz.tzutc())
output = ujson.encode(test)
expected = '"%s"' % test.isoformat()
- self.assertEqual(expected, output)
+ assert expected == output
def test_nat(self):
input = NaT
@@ -435,16 +435,16 @@ def test_datetime_units(self):
stamp = Timestamp(val)
roundtrip = ujson.decode(ujson.encode(val, date_unit='s'))
- self.assertEqual(roundtrip, stamp.value // 10**9)
+ assert roundtrip == stamp.value // 10**9
roundtrip = ujson.decode(ujson.encode(val, date_unit='ms'))
- self.assertEqual(roundtrip, stamp.value // 10**6)
+ assert roundtrip == stamp.value // 10**6
roundtrip = ujson.decode(ujson.encode(val, date_unit='us'))
- self.assertEqual(roundtrip, stamp.value // 10**3)
+ assert roundtrip == stamp.value // 10**3
roundtrip = ujson.decode(ujson.encode(val, date_unit='ns'))
- self.assertEqual(roundtrip, stamp.value)
+ assert roundtrip == stamp.value
pytest.raises(ValueError, ujson.encode, val, date_unit='foo')
@@ -452,14 +452,14 @@ def test_encodeToUTF8(self):
input = "\xe6\x97\xa5\xd1\x88"
enc = ujson.encode(input, ensure_ascii=False)
dec = ujson.decode(enc)
- self.assertEqual(enc, json_unicode(input, ensure_ascii=False))
- self.assertEqual(dec, json.loads(enc))
+ assert enc == json_unicode(input, ensure_ascii=False)
+ assert dec == json.loads(enc)
def test_decodeFromUnicode(self):
input = u("{\"obj\": 31337}")
dec1 = ujson.decode(input)
dec2 = ujson.decode(str(input))
- self.assertEqual(dec1, dec2)
+ assert dec1 == dec2
def test_encodeRecursionMax(self):
# 8 is the max recursion depth
@@ -676,11 +676,11 @@ def test_decodeDictWithNoValue(self):
def test_decodeNumericIntPos(self):
input = "31337"
- self.assertEqual(31337, ujson.decode(input))
+ assert 31337 == ujson.decode(input)
def test_decodeNumericIntNeg(self):
input = "-31337"
- self.assertEqual(-31337, ujson.decode(input))
+ assert -31337 == ujson.decode(input)
@pytest.mark.skipif(compat.PY3, reason="only PY2")
def test_encodeUnicode4BytesUTF8Fail(self):
@@ -694,29 +694,29 @@ def test_encodeUnicode4BytesUTF8Fail(self):
def test_encodeNullCharacter(self):
input = "31337 \x00 1337"
output = ujson.encode(input)
- self.assertEqual(input, json.loads(output))
- self.assertEqual(output, json.dumps(input))
- self.assertEqual(input, ujson.decode(output))
+ assert input == json.loads(output)
+ assert output == json.dumps(input)
+ assert input == ujson.decode(output)
input = "\x00"
output = ujson.encode(input)
- self.assertEqual(input, json.loads(output))
- self.assertEqual(output, json.dumps(input))
- self.assertEqual(input, ujson.decode(output))
+ assert input == json.loads(output)
+ assert output == json.dumps(input)
+ assert input == ujson.decode(output)
- self.assertEqual('" \\u0000\\r\\n "', ujson.dumps(u(" \u0000\r\n ")))
+ assert '" \\u0000\\r\\n "' == ujson.dumps(u(" \u0000\r\n "))
pass
def test_decodeNullCharacter(self):
input = "\"31337 \\u0000 31337\""
- self.assertEqual(ujson.decode(input), json.loads(input))
+ assert ujson.decode(input) == json.loads(input)
def test_encodeListLongConversion(self):
input = [9223372036854775807, 9223372036854775807, 9223372036854775807,
9223372036854775807, 9223372036854775807, 9223372036854775807]
output = ujson.encode(input)
- self.assertEqual(input, json.loads(output))
- self.assertEqual(input, ujson.decode(output))
+ assert input == json.loads(output)
+ assert input == ujson.decode(output)
tm.assert_numpy_array_equal(np.array(input),
ujson.decode(output, numpy=True,
dtype=np.int64))
@@ -725,15 +725,15 @@ def test_encodeListLongConversion(self):
def test_encodeLongConversion(self):
input = 9223372036854775807
output = ujson.encode(input)
- self.assertEqual(input, json.loads(output))
- self.assertEqual(output, json.dumps(input))
- self.assertEqual(input, ujson.decode(output))
+ assert input == json.loads(output)
+ assert output == json.dumps(input)
+ assert input == ujson.decode(output)
pass
def test_numericIntExp(self):
input = "1337E40"
output = ujson.decode(input)
- self.assertEqual(output, json.loads(input))
+ assert output == json.loads(input)
def test_numericIntFrcExp(self):
input = "1.337E40"
@@ -773,7 +773,7 @@ def test_decodeNumericIntExpeMinus(self):
def test_dumpToFile(self):
f = StringIO()
ujson.dump([1, 2, 3], f)
- self.assertEqual("[1,2,3]", f.getvalue())
+ assert "[1,2,3]" == f.getvalue()
def test_dumpToFileLikeObject(self):
class filelike:
@@ -785,7 +785,7 @@ def write(self, bytes):
self.bytes += bytes
f = filelike()
ujson.dump([1, 2, 3], f)
- self.assertEqual("[1,2,3]", f.bytes)
+ assert "[1,2,3]" == f.bytes
def test_dumpFileArgsError(self):
try:
@@ -797,7 +797,8 @@ def test_dumpFileArgsError(self):
def test_loadFile(self):
f = StringIO("[1,2,3,4]")
- self.assertEqual([1, 2, 3, 4], ujson.load(f))
+ assert [1, 2, 3, 4] == ujson.load(f)
+
f = StringIO("[1,2,3,4]")
tm.assert_numpy_array_equal(
np.array([1, 2, 3, 4]), ujson.load(f, numpy=True))
@@ -812,7 +813,8 @@ def read(self):
self.end = True
return "[1,2,3,4]"
f = filelike()
- self.assertEqual([1, 2, 3, 4], ujson.load(f))
+ assert [1, 2, 3, 4] == ujson.load(f)
+
f = filelike()
tm.assert_numpy_array_equal(
np.array([1, 2, 3, 4]), ujson.load(f, numpy=True))
@@ -864,7 +866,7 @@ def test_decodeNumberWith32bitSignBit(self):
)
results = (3590016419, 2**31, 2**32, 2**32 - 1)
for doc, result in zip(docs, results):
- self.assertEqual(ujson.decode(doc)['id'], result)
+ assert ujson.decode(doc)['id'] == result
def test_encodeBigEscape(self):
for x in range(10):
@@ -896,7 +898,7 @@ def toDict(self):
o = DictTest()
output = ujson.encode(o)
dec = ujson.decode(output)
- self.assertEqual(dec, d)
+ assert dec == d
def test_defaultHandler(self):
@@ -913,42 +915,44 @@ def __str__(self):
return str(self.val)
pytest.raises(OverflowError, ujson.encode, _TestObject("foo"))
- self.assertEqual('"foo"', ujson.encode(_TestObject("foo"),
- default_handler=str))
+ assert '"foo"' == ujson.encode(_TestObject("foo"),
+ default_handler=str)
def my_handler(obj):
return "foobar"
- self.assertEqual('"foobar"', ujson.encode(_TestObject("foo"),
- default_handler=my_handler))
+
+ assert '"foobar"' == ujson.encode(_TestObject("foo"),
+ default_handler=my_handler)
def my_handler_raises(obj):
raise TypeError("I raise for anything")
+
with tm.assert_raises_regex(TypeError, "I raise for anything"):
ujson.encode(_TestObject("foo"), default_handler=my_handler_raises)
def my_int_handler(obj):
return 42
- self.assertEqual(
- 42, ujson.decode(ujson.encode(_TestObject("foo"),
- default_handler=my_int_handler)))
+
+ assert ujson.decode(ujson.encode(
+ _TestObject("foo"), default_handler=my_int_handler)) == 42
def my_obj_handler(obj):
return datetime.datetime(2013, 2, 3)
- self.assertEqual(
- ujson.decode(ujson.encode(datetime.datetime(2013, 2, 3))),
- ujson.decode(ujson.encode(_TestObject("foo"),
- default_handler=my_obj_handler)))
+
+ assert (ujson.decode(ujson.encode(datetime.datetime(2013, 2, 3))) ==
+ ujson.decode(ujson.encode(_TestObject("foo"),
+ default_handler=my_obj_handler)))
l = [_TestObject("foo"), _TestObject("bar")]
- self.assertEqual(json.loads(json.dumps(l, default=str)),
- ujson.decode(ujson.encode(l, default_handler=str)))
+ assert (json.loads(json.dumps(l, default=str)) ==
+ ujson.decode(ujson.encode(l, default_handler=str)))
class NumpyJSONTests(TestCase):
def testBool(self):
b = np.bool(True)
- self.assertEqual(ujson.decode(ujson.encode(b)), b)
+ assert ujson.decode(ujson.encode(b)) == b
def testBoolArray(self):
inpt = np.array([True, False, True, True, False, True, False, False],
@@ -958,31 +962,31 @@ def testBoolArray(self):
def testInt(self):
num = np.int(2562010)
- self.assertEqual(np.int(ujson.decode(ujson.encode(num))), num)
+ assert np.int(ujson.decode(ujson.encode(num))) == num
num = np.int8(127)
- self.assertEqual(np.int8(ujson.decode(ujson.encode(num))), num)
+ assert np.int8(ujson.decode(ujson.encode(num))) == num
num = np.int16(2562010)
- self.assertEqual(np.int16(ujson.decode(ujson.encode(num))), num)
+ assert np.int16(ujson.decode(ujson.encode(num))) == num
num = np.int32(2562010)
- self.assertEqual(np.int32(ujson.decode(ujson.encode(num))), num)
+ assert np.int32(ujson.decode(ujson.encode(num))) == num
num = np.int64(2562010)
- self.assertEqual(np.int64(ujson.decode(ujson.encode(num))), num)
+ assert np.int64(ujson.decode(ujson.encode(num))) == num
num = np.uint8(255)
- self.assertEqual(np.uint8(ujson.decode(ujson.encode(num))), num)
+ assert np.uint8(ujson.decode(ujson.encode(num))) == num
num = np.uint16(2562010)
- self.assertEqual(np.uint16(ujson.decode(ujson.encode(num))), num)
+ assert np.uint16(ujson.decode(ujson.encode(num))) == num
num = np.uint32(2562010)
- self.assertEqual(np.uint32(ujson.decode(ujson.encode(num))), num)
+ assert np.uint32(ujson.decode(ujson.encode(num))) == num
num = np.uint64(2562010)
- self.assertEqual(np.uint64(ujson.decode(ujson.encode(num))), num)
+ assert np.uint64(ujson.decode(ujson.encode(num))) == num
def testIntArray(self):
arr = np.arange(100, dtype=np.int)
@@ -995,43 +999,43 @@ def testIntArray(self):
def testIntMax(self):
num = np.int(np.iinfo(np.int).max)
- self.assertEqual(np.int(ujson.decode(ujson.encode(num))), num)
+ assert np.int(ujson.decode(ujson.encode(num))) == num
num = np.int8(np.iinfo(np.int8).max)
- self.assertEqual(np.int8(ujson.decode(ujson.encode(num))), num)
+ assert np.int8(ujson.decode(ujson.encode(num))) == num
num = np.int16(np.iinfo(np.int16).max)
- self.assertEqual(np.int16(ujson.decode(ujson.encode(num))), num)
+ assert np.int16(ujson.decode(ujson.encode(num))) == num
num = np.int32(np.iinfo(np.int32).max)
- self.assertEqual(np.int32(ujson.decode(ujson.encode(num))), num)
+ assert np.int32(ujson.decode(ujson.encode(num))) == num
num = np.uint8(np.iinfo(np.uint8).max)
- self.assertEqual(np.uint8(ujson.decode(ujson.encode(num))), num)
+ assert np.uint8(ujson.decode(ujson.encode(num))) == num
num = np.uint16(np.iinfo(np.uint16).max)
- self.assertEqual(np.uint16(ujson.decode(ujson.encode(num))), num)
+ assert np.uint16(ujson.decode(ujson.encode(num))) == num
num = np.uint32(np.iinfo(np.uint32).max)
- self.assertEqual(np.uint32(ujson.decode(ujson.encode(num))), num)
+ assert np.uint32(ujson.decode(ujson.encode(num))) == num
if not compat.is_platform_32bit():
num = np.int64(np.iinfo(np.int64).max)
- self.assertEqual(np.int64(ujson.decode(ujson.encode(num))), num)
+ assert np.int64(ujson.decode(ujson.encode(num))) == num
# uint64 max will always overflow as it's encoded to signed
num = np.uint64(np.iinfo(np.int64).max)
- self.assertEqual(np.uint64(ujson.decode(ujson.encode(num))), num)
+ assert np.uint64(ujson.decode(ujson.encode(num))) == num
def testFloat(self):
num = np.float(256.2013)
- self.assertEqual(np.float(ujson.decode(ujson.encode(num))), num)
+ assert np.float(ujson.decode(ujson.encode(num))) == num
num = np.float32(256.2013)
- self.assertEqual(np.float32(ujson.decode(ujson.encode(num))), num)
+ assert np.float32(ujson.decode(ujson.encode(num))) == num
num = np.float64(256.2013)
- self.assertEqual(np.float64(ujson.decode(ujson.encode(num))), num)
+ assert np.float64(ujson.decode(ujson.encode(num))) == num
def testFloatArray(self):
arr = np.arange(12.5, 185.72, 1.7322, dtype=np.float)
@@ -1618,7 +1622,7 @@ def test_encodeBigSet(self):
def test_encodeEmptySet(self):
s = set()
- self.assertEqual("[]", ujson.encode(s))
+ assert "[]" == ujson.encode(s)
def test_encodeSet(self):
s = set([1, 2, 3, 4, 5, 6, 7, 8, 9])
diff --git a/pandas/tests/io/parser/c_parser_only.py b/pandas/tests/io/parser/c_parser_only.py
index ac2aaf1f5e4ed..3e7a648474bc3 100644
--- a/pandas/tests/io/parser/c_parser_only.py
+++ b/pandas/tests/io/parser/c_parser_only.py
@@ -152,7 +152,7 @@ def error(val):
precise_errors.append(error(precise_val))
# round-trip should match float()
- self.assertEqual(roundtrip_val, float(text[2:]))
+ assert roundtrip_val == float(text[2:])
assert sum(precise_errors) <= sum(normal_errors)
assert max(precise_errors) <= max(normal_errors)
@@ -173,8 +173,8 @@ def test_pass_dtype_as_recarray(self):
FutureWarning, check_stacklevel=False):
result = self.read_csv(StringIO(data), dtype={
'one': 'u1', 1: 'S1'}, as_recarray=True)
- self.assertEqual(result['one'].dtype, 'u1')
- self.assertEqual(result['two'].dtype, 'S1')
+ assert result['one'].dtype == 'u1'
+ assert result['two'].dtype == 'S1'
def test_usecols_dtypes(self):
data = """\
@@ -211,7 +211,7 @@ def test_disable_bool_parsing(self):
assert (result.dtypes == object).all()
result = self.read_csv(StringIO(data), dtype=object, na_filter=False)
- self.assertEqual(result['B'][2], '')
+ assert result['B'][2] == ''
def test_custom_lineterminator(self):
data = 'a,b,c~1,2,3~4,5,6'
diff --git a/pandas/tests/io/parser/common.py b/pandas/tests/io/parser/common.py
index 9677106f37232..bcce0c6d020ae 100644
--- a/pandas/tests/io/parser/common.py
+++ b/pandas/tests/io/parser/common.py
@@ -207,7 +207,7 @@ def test_quoting(self):
good_line_small = bad_line_small + '"'
df = self.read_table(StringIO(good_line_small), sep='\t')
- self.assertEqual(len(df), 3)
+ assert len(df) == 3
def test_unnamed_columns(self):
data = """A,B,C,,
@@ -237,13 +237,11 @@ def test_duplicate_columns(self):
# check default behavior
df = getattr(self, method)(StringIO(data), sep=',')
- self.assertEqual(list(df.columns),
- ['A', 'A.1', 'B', 'B.1', 'B.2'])
+ assert list(df.columns) == ['A', 'A.1', 'B', 'B.1', 'B.2']
df = getattr(self, method)(StringIO(data), sep=',',
mangle_dupe_cols=True)
- self.assertEqual(list(df.columns),
- ['A', 'A.1', 'B', 'B.1', 'B.2'])
+ assert list(df.columns) == ['A', 'A.1', 'B', 'B.1', 'B.2']
def test_csv_mixed_type(self):
data = """A,B,C
@@ -262,10 +260,10 @@ def test_read_csv_dataframe(self):
df2 = self.read_table(self.csv1, sep=',', index_col=0,
parse_dates=True)
tm.assert_index_equal(df.columns, pd.Index(['A', 'B', 'C', 'D']))
- self.assertEqual(df.index.name, 'index')
+ assert df.index.name == 'index'
assert isinstance(
df.index[0], (datetime, np.datetime64, Timestamp))
- self.assertEqual(df.values.dtype, np.float64)
+ assert df.values.dtype == np.float64
tm.assert_frame_equal(df, df2)
def test_read_csv_no_index_name(self):
@@ -333,7 +331,7 @@ def test_parse_bools(self):
True,3
"""
data = self.read_csv(StringIO(data))
- self.assertEqual(data['A'].dtype, np.bool_)
+ assert data['A'].dtype == np.bool_
data = """A,B
YES,1
@@ -345,7 +343,7 @@ def test_parse_bools(self):
data = self.read_csv(StringIO(data),
true_values=['yes', 'Yes', 'YES'],
false_values=['no', 'NO', 'No'])
- self.assertEqual(data['A'].dtype, np.bool_)
+ assert data['A'].dtype == np.bool_
data = """A,B
TRUE,1
@@ -353,7 +351,7 @@ def test_parse_bools(self):
TRUE,3
"""
data = self.read_csv(StringIO(data))
- self.assertEqual(data['A'].dtype, np.bool_)
+ assert data['A'].dtype == np.bool_
data = """A,B
foo,bar
@@ -370,8 +368,8 @@ def test_int_conversion(self):
3.0,3
"""
data = self.read_csv(StringIO(data))
- self.assertEqual(data['A'].dtype, np.float64)
- self.assertEqual(data['B'].dtype, np.int64)
+ assert data['A'].dtype == np.float64
+ assert data['B'].dtype == np.int64
def test_read_nrows(self):
expected = self.read_csv(StringIO(self.data1))[:3]
@@ -463,7 +461,7 @@ def test_get_chunk_passed_chunksize(self):
result = self.read_csv(StringIO(data), chunksize=2)
piece = result.get_chunk()
- self.assertEqual(len(piece), 2)
+ assert len(piece) == 2
def test_read_chunksize_generated_index(self):
# GH 12185
@@ -537,7 +535,7 @@ def test_iterator(self):
result = list(reader)
expected = DataFrame(dict(A=[1, 4, 7], B=[2, 5, 8], C=[
3, 6, 9]), index=['foo', 'bar', 'baz'])
- self.assertEqual(len(result), 3)
+ assert len(result) == 3
tm.assert_frame_equal(pd.concat(result), expected)
# skipfooter is not supported with the C parser yet
@@ -751,12 +749,12 @@ def test_utf16_example(self):
# it works! and is the right length
result = self.read_table(path, encoding='utf-16')
- self.assertEqual(len(result), 50)
+ assert len(result) == 50
if not compat.PY3:
buf = BytesIO(open(path, 'rb').read())
result = self.read_table(buf, encoding='utf-16')
- self.assertEqual(len(result), 50)
+ assert len(result) == 50
def test_unicode_encoding(self):
pth = tm.get_data_path('unicode_series.csv')
@@ -767,7 +765,7 @@ def test_unicode_encoding(self):
got = result[1][1632]
expected = u('\xc1 k\xf6ldum klaka (Cold Fever) (1994)')
- self.assertEqual(got, expected)
+ assert got == expected
def test_trailing_delimiters(self):
# #2442. grumble grumble
@@ -792,8 +790,8 @@ def test_escapechar(self):
result = self.read_csv(StringIO(data), escapechar='\\',
quotechar='"', encoding='utf-8')
- self.assertEqual(result['SEARCH_TERM'][2],
- 'SLAGBORD, "Bergslagen", IKEA:s 1700-tals serie')
+ assert result['SEARCH_TERM'][2] == ('SLAGBORD, "Bergslagen", '
+ 'IKEA:s 1700-tals serie')
tm.assert_index_equal(result.columns,
Index(['SEARCH_TERM', 'ACTUAL_URL']))
@@ -841,7 +839,7 @@ def test_chunks_have_consistent_numerical_type(self):
df = self.read_csv(StringIO(data))
# Assert that types were coerced.
assert type(df.a[0]) is np.float64
- self.assertEqual(df.a.dtype, np.float)
+ assert df.a.dtype == np.float
def test_warn_if_chunks_have_mismatched_type(self):
warning_type = False
@@ -855,7 +853,7 @@ def test_warn_if_chunks_have_mismatched_type(self):
with tm.assert_produces_warning(warning_type):
df = self.read_csv(StringIO(data))
- self.assertEqual(df.a.dtype, np.object)
+ assert df.a.dtype == np.object
def test_integer_overflow_bug(self):
# see gh-2601
@@ -888,7 +886,7 @@ def test_chunk_begins_with_newline_whitespace(self):
# see gh-10022
data = '\n hello\nworld\n'
result = self.read_csv(StringIO(data), header=None)
- self.assertEqual(len(result), 2)
+ assert len(result) == 2
# see gh-9735: this issue is C parser-specific (bug when
# parsing whitespace and characters at chunk boundary)
@@ -1361,9 +1359,9 @@ def test_euro_decimal_format(self):
3;878,158;108013,434;GHI;rez;2,735694704"""
df2 = self.read_csv(StringIO(data), sep=';', decimal=',')
- self.assertEqual(df2['Number1'].dtype, float)
- self.assertEqual(df2['Number2'].dtype, float)
- self.assertEqual(df2['Number3'].dtype, float)
+ assert df2['Number1'].dtype == float
+ assert df2['Number2'].dtype == float
+ assert df2['Number3'].dtype == float
def test_read_duplicate_names(self):
# See gh-7160
@@ -1463,7 +1461,7 @@ def test_compact_ints_as_recarray(self):
result = self.read_csv(StringIO(data), delimiter=',', header=None,
compact_ints=True, as_recarray=True)
ex_dtype = np.dtype([(str(i), 'i1') for i in range(4)])
- self.assertEqual(result.dtype, ex_dtype)
+ assert result.dtype == ex_dtype
with tm.assert_produces_warning(
FutureWarning, check_stacklevel=False):
@@ -1471,7 +1469,7 @@ def test_compact_ints_as_recarray(self):
as_recarray=True, compact_ints=True,
use_unsigned=True)
ex_dtype = np.dtype([(str(i), 'u1') for i in range(4)])
- self.assertEqual(result.dtype, ex_dtype)
+ assert result.dtype == ex_dtype
def test_as_recarray(self):
# basic test
diff --git a/pandas/tests/io/parser/converters.py b/pandas/tests/io/parser/converters.py
index e10ee016b749a..8fde709e39cae 100644
--- a/pandas/tests/io/parser/converters.py
+++ b/pandas/tests/io/parser/converters.py
@@ -56,7 +56,7 @@ def test_converters_no_implicit_conv(self):
f = lambda x: x.strip()
converter = {0: f}
df = self.read_csv(StringIO(data), header=None, converters=converter)
- self.assertEqual(df[0].dtype, object)
+ assert df[0].dtype == object
def test_converters_euro_decimal_format(self):
data = """Id;Number1;Number2;Text1;Text2;Number3
@@ -66,9 +66,9 @@ def test_converters_euro_decimal_format(self):
f = lambda x: float(x.replace(",", "."))
converter = {'Number1': f, 'Number2': f, 'Number3': f}
df2 = self.read_csv(StringIO(data), sep=';', converters=converter)
- self.assertEqual(df2['Number1'].dtype, float)
- self.assertEqual(df2['Number2'].dtype, float)
- self.assertEqual(df2['Number3'].dtype, float)
+ assert df2['Number1'].dtype == float
+ assert df2['Number2'].dtype == float
+ assert df2['Number3'].dtype == float
def test_converter_return_string_bug(self):
# see gh-583
@@ -79,7 +79,7 @@ def test_converter_return_string_bug(self):
f = lambda x: float(x.replace(",", "."))
converter = {'Number1': f, 'Number2': f, 'Number3': f}
df2 = self.read_csv(StringIO(data), sep=';', converters=converter)
- self.assertEqual(df2['Number1'].dtype, float)
+ assert df2['Number1'].dtype == float
def test_converters_corner_with_nas(self):
# skip aberration observed on Win64 Python 3.2.2
@@ -150,4 +150,4 @@ def test_converter_index_col_bug(self):
xp = DataFrame({'B': [2, 4]}, index=Index([1, 3], name='A'))
tm.assert_frame_equal(rs, xp)
- self.assertEqual(rs.index.name, xp.index.name)
+ assert rs.index.name == xp.index.name
diff --git a/pandas/tests/io/parser/dtypes.py b/pandas/tests/io/parser/dtypes.py
index 6ef2bd8f869dd..7311c9200f269 100644
--- a/pandas/tests/io/parser/dtypes.py
+++ b/pandas/tests/io/parser/dtypes.py
@@ -60,8 +60,8 @@ def test_pass_dtype(self):
4,5.5"""
result = self.read_csv(StringIO(data), dtype={'one': 'u1', 1: 'S1'})
- self.assertEqual(result['one'].dtype, 'u1')
- self.assertEqual(result['two'].dtype, 'object')
+ assert result['one'].dtype == 'u1'
+ assert result['two'].dtype == 'object'
def test_categorical_dtype(self):
# GH 10153
diff --git a/pandas/tests/io/parser/header.py b/pandas/tests/io/parser/header.py
index f7967f4fe9765..1e5fb42b4c1d4 100644
--- a/pandas/tests/io/parser/header.py
+++ b/pandas/tests/io/parser/header.py
@@ -62,7 +62,7 @@ def test_header_with_index_col(self):
names = ['A', 'B', 'C']
df = self.read_csv(StringIO(data), names=names)
- self.assertEqual(names, ['A', 'B', 'C'])
+ assert list(df.columns) == ['A', 'B', 'C']
values = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
expected = DataFrame(values, index=['foo', 'bar', 'baz'],
diff --git a/pandas/tests/io/parser/index_col.py b/pandas/tests/io/parser/index_col.py
index 6283104dffd70..ee9b210443636 100644
--- a/pandas/tests/io/parser/index_col.py
+++ b/pandas/tests/io/parser/index_col.py
@@ -45,11 +45,11 @@ def test_index_col_named(self):
index=Index(['hello', 'world', 'foo'], name='message'))
rs = self.read_csv(StringIO(data), names=names, index_col=['message'])
tm.assert_frame_equal(xp, rs)
- self.assertEqual(xp.index.name, rs.index.name)
+ assert xp.index.name == rs.index.name
rs = self.read_csv(StringIO(data), names=names, index_col='message')
tm.assert_frame_equal(xp, rs)
- self.assertEqual(xp.index.name, rs.index.name)
+ assert xp.index.name == rs.index.name
def test_index_col_is_true(self):
# see gh-9798
diff --git a/pandas/tests/io/parser/na_values.py b/pandas/tests/io/parser/na_values.py
index 787fa304f84b2..362837a46f838 100644
--- a/pandas/tests/io/parser/na_values.py
+++ b/pandas/tests/io/parser/na_values.py
@@ -72,7 +72,7 @@ def test_default_na_values(self):
_NA_VALUES = set(['-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN',
'#N/A', 'N/A', 'NA', '#NA', 'NULL', 'NaN',
'nan', '-NaN', '-nan', '#N/A N/A', ''])
- self.assertEqual(_NA_VALUES, parsers._NA_VALUES)
+ assert _NA_VALUES == parsers._NA_VALUES
nv = len(_NA_VALUES)
def f(i, v):
@@ -248,7 +248,7 @@ def test_na_trailing_columns(self):
2012-05-12,USD,SBUX,SELL,500"""
result = self.read_csv(StringIO(data))
- self.assertEqual(result['Date'][1], '2012-05-12')
+ assert result['Date'][1] == '2012-05-12'
assert result['UnitPrice'].isnull().all()
def test_na_values_scalar(self):
diff --git a/pandas/tests/io/parser/parse_dates.py b/pandas/tests/io/parser/parse_dates.py
index dfccf48b03be3..4507db108b684 100644
--- a/pandas/tests/io/parser/parse_dates.py
+++ b/pandas/tests/io/parser/parse_dates.py
@@ -361,15 +361,15 @@ def test_parse_tz_aware(self):
# it works
result = self.read_csv(data, index_col=0, parse_dates=True)
stamp = result.index[0]
- self.assertEqual(stamp.minute, 39)
+ assert stamp.minute == 39
try:
assert result.index.tz is pytz.utc
except AssertionError: # hello Yaroslav
arr = result.index.to_pydatetime()
result = tools.to_datetime(arr, utc=True)[0]
- self.assertEqual(stamp.minute, result.minute)
- self.assertEqual(stamp.hour, result.hour)
- self.assertEqual(stamp.day, result.day)
+ assert stamp.minute == result.minute
+ assert stamp.hour == result.hour
+ assert stamp.day == result.day
def test_multiple_date_cols_index(self):
data = """
@@ -532,7 +532,7 @@ def test_parse_date_time(self):
parse_dates=datecols,
date_parser=conv.parse_date_time)
assert 'date_time' in df
- self.assertEqual(df.date_time.loc[0], datetime(2001, 1, 5, 10, 0, 0))
+ assert df.date_time.loc[0] == datetime(2001, 1, 5, 10, 0, 0)
data = ("KORD,19990127, 19:00:00, 18:56:00, 0.8100\n"
"KORD,19990127, 20:00:00, 19:56:00, 0.0100\n"
@@ -560,7 +560,7 @@ def test_parse_date_fields(self):
parse_dates=datecols,
date_parser=conv.parse_date_fields)
assert 'ymd' in df
- self.assertEqual(df.ymd.loc[0], datetime(2001, 1, 10))
+ assert df.ymd.loc[0] == datetime(2001, 1, 10)
def test_datetime_six_col(self):
years = np.array([2007, 2008])
@@ -587,7 +587,7 @@ def test_datetime_six_col(self):
parse_dates=datecols,
date_parser=conv.parse_all_fields)
assert 'ymdHMS' in df
- self.assertEqual(df.ymdHMS.loc[0], datetime(2001, 1, 5, 10, 0, 0))
+ assert df.ymdHMS.loc[0] == datetime(2001, 1, 5, 10, 0, 0)
def test_datetime_fractional_seconds(self):
data = """\
@@ -600,10 +600,10 @@ def test_datetime_fractional_seconds(self):
parse_dates=datecols,
date_parser=conv.parse_all_fields)
assert 'ymdHMS' in df
- self.assertEqual(df.ymdHMS.loc[0], datetime(2001, 1, 5, 10, 0, 0,
- microsecond=123456))
- self.assertEqual(df.ymdHMS.loc[1], datetime(2001, 1, 5, 10, 0, 0,
- microsecond=500000))
+ assert df.ymdHMS.loc[0] == datetime(2001, 1, 5, 10, 0, 0,
+ microsecond=123456)
+ assert df.ymdHMS.loc[1] == datetime(2001, 1, 5, 10, 0, 0,
+ microsecond=500000)
def test_generic(self):
data = "year, month, day, a\n 2001, 01, 10, 10.\n 2001, 02, 1, 11."
@@ -613,7 +613,7 @@ def test_generic(self):
parse_dates=datecols,
date_parser=dateconverter)
assert 'ym' in df
- self.assertEqual(df.ym.loc[0], date(2001, 1, 1))
+ assert df.ym.loc[0] == date(2001, 1, 1)
def test_dateparser_resolution_if_not_ns(self):
# GH 10245
diff --git a/pandas/tests/io/parser/python_parser_only.py b/pandas/tests/io/parser/python_parser_only.py
index 1356ace4bb38a..a0784d3aeae2d 100644
--- a/pandas/tests/io/parser/python_parser_only.py
+++ b/pandas/tests/io/parser/python_parser_only.py
@@ -160,7 +160,7 @@ def test_read_table_buglet_4x_multiindex(self):
x q 30 3 -0.6662 -0.5243 -0.3580 0.89145 2.5838"""
df = self.read_table(StringIO(text), sep=r'\s+')
- self.assertEqual(df.index.names, ('one', 'two', 'three', 'four'))
+ assert df.index.names == ('one', 'two', 'three', 'four')
# see gh-6893
data = ' A B C\na b c\n1 3 7 0 3 6\n3 1 4 1 5 9'
diff --git a/pandas/tests/io/parser/test_network.py b/pandas/tests/io/parser/test_network.py
index 046590a3ae4c9..cabee76dd6dfc 100644
--- a/pandas/tests/io/parser/test_network.py
+++ b/pandas/tests/io/parser/test_network.py
@@ -107,7 +107,7 @@ def test_parse_public_s3_bucket_chunked(self):
for ext, comp in [('', None), ('.gz', 'gzip'), ('.bz2', 'bz2')]:
df_reader = read_csv('s3://pandas-test/tips.csv' + ext,
chunksize=chunksize, compression=comp)
- self.assertEqual(df_reader.chunksize, chunksize)
+ assert df_reader.chunksize == chunksize
for i_chunk in [0, 1, 2]:
# Read a couple of chunks and make sure we see them
# properly.
@@ -127,7 +127,7 @@ def test_parse_public_s3_bucket_chunked_python(self):
df_reader = read_csv('s3://pandas-test/tips.csv' + ext,
chunksize=chunksize, compression=comp,
engine='python')
- self.assertEqual(df_reader.chunksize, chunksize)
+ assert df_reader.chunksize == chunksize
for i_chunk in [0, 1, 2]:
# Read a couple of chunks and make sure we see them properly.
df = df_reader.get_chunk()
diff --git a/pandas/tests/io/parser/test_textreader.py b/pandas/tests/io/parser/test_textreader.py
index ad37f828bba6f..d8ae66a2b275c 100644
--- a/pandas/tests/io/parser/test_textreader.py
+++ b/pandas/tests/io/parser/test_textreader.py
@@ -66,7 +66,7 @@ def test_string_factorize(self):
data = 'a\nb\na\nb\na'
reader = TextReader(StringIO(data), header=None)
result = reader.read()
- self.assertEqual(len(set(map(id, result[0]))), 2)
+ assert len(set(map(id, result[0]))) == 2
def test_skipinitialspace(self):
data = ('a, b\n'
@@ -89,7 +89,7 @@ def test_parse_booleans(self):
reader = TextReader(StringIO(data), header=None)
result = reader.read()
- self.assertEqual(result[0].dtype, np.bool_)
+ assert result[0].dtype == np.bool_
def test_delimit_whitespace(self):
data = 'a b\na\t\t "b"\n"a"\t \t b'
@@ -186,7 +186,7 @@ def test_header_not_enough_lines(self):
reader = TextReader(StringIO(data), delimiter=',', header=2)
header = reader.header
expected = [['a', 'b', 'c']]
- self.assertEqual(header, expected)
+ assert header == expected
recs = reader.read()
expected = {0: [1, 4], 1: [2, 5], 2: [3, 6]}
@@ -207,7 +207,7 @@ def test_header_not_enough_lines_as_recarray(self):
as_recarray=True)
header = reader.header
expected = [['a', 'b', 'c']]
- self.assertEqual(header, expected)
+ assert header == expected
recs = reader.read()
expected = {'a': [1, 4], 'b': [2, 5], 'c': [3, 6]}
@@ -250,18 +250,18 @@ def _make_reader(**kwds):
reader = _make_reader(dtype='S5,i4')
result = reader.read()
- self.assertEqual(result[0].dtype, 'S5')
+ assert result[0].dtype == 'S5'
ex_values = np.array(['a', 'aa', 'aaa', 'aaaa', 'aaaaa'], dtype='S5')
assert (result[0] == ex_values).all()
- self.assertEqual(result[1].dtype, 'i4')
+ assert result[1].dtype == 'i4'
reader = _make_reader(dtype='S4')
result = reader.read()
- self.assertEqual(result[0].dtype, 'S4')
+ assert result[0].dtype == 'S4'
ex_values = np.array(['a', 'aa', 'aaa', 'aaaa', 'aaaa'], dtype='S4')
assert (result[0] == ex_values).all()
- self.assertEqual(result[1].dtype, 'S4')
+ assert result[1].dtype == 'S4'
def test_numpy_string_dtype_as_recarray(self):
data = """\
@@ -277,10 +277,10 @@ def _make_reader(**kwds):
reader = _make_reader(dtype='S4', as_recarray=True)
result = reader.read()
- self.assertEqual(result['0'].dtype, 'S4')
+ assert result['0'].dtype == 'S4'
ex_values = np.array(['a', 'aa', 'aaa', 'aaaa', 'aaaa'], dtype='S4')
assert (result['0'] == ex_values).all()
- self.assertEqual(result['1'].dtype, 'S4')
+ assert result['1'].dtype == 'S4'
def test_pass_dtype(self):
data = """\
@@ -295,19 +295,19 @@ def _make_reader(**kwds):
reader = _make_reader(dtype={'one': 'u1', 1: 'S1'})
result = reader.read()
- self.assertEqual(result[0].dtype, 'u1')
- self.assertEqual(result[1].dtype, 'S1')
+ assert result[0].dtype == 'u1'
+ assert result[1].dtype == 'S1'
reader = _make_reader(dtype={'one': np.uint8, 1: object})
result = reader.read()
- self.assertEqual(result[0].dtype, 'u1')
- self.assertEqual(result[1].dtype, 'O')
+ assert result[0].dtype == 'u1'
+ assert result[1].dtype == 'O'
reader = _make_reader(dtype={'one': np.dtype('u1'),
1: np.dtype('O')})
result = reader.read()
- self.assertEqual(result[0].dtype, 'u1')
- self.assertEqual(result[1].dtype, 'O')
+ assert result[0].dtype == 'u1'
+ assert result[1].dtype == 'O'
def test_usecols(self):
data = """\
@@ -324,7 +324,7 @@ def _make_reader(**kwds):
result = reader.read()
exp = _make_reader().read()
- self.assertEqual(len(result), 2)
+ assert len(result) == 2
assert (result[1] == exp[1]).all()
assert (result[2] == exp[2]).all()
diff --git a/pandas/tests/io/parser/usecols.py b/pandas/tests/io/parser/usecols.py
index b52106d9e8595..8761d1ccd3da4 100644
--- a/pandas/tests/io/parser/usecols.py
+++ b/pandas/tests/io/parser/usecols.py
@@ -43,7 +43,7 @@ def test_usecols(self):
result2 = self.read_csv(StringIO(data), usecols=('b', 'c'))
exp = self.read_csv(StringIO(data))
- self.assertEqual(len(result.columns), 2)
+ assert len(result.columns) == 2
assert (result['b'] == exp['b']).all()
assert (result['c'] == exp['c']).all()
diff --git a/pandas/tests/io/test_clipboard.py b/pandas/tests/io/test_clipboard.py
index f373c13c3bc58..756dd0db8c3b7 100644
--- a/pandas/tests/io/test_clipboard.py
+++ b/pandas/tests/io/test_clipboard.py
@@ -103,7 +103,7 @@ def test_read_clipboard_infer_excel(self):
df = pd.read_clipboard()
# excel data is parsed correctly
- self.assertEqual(df.iloc[1][1], 'Harry Carney')
+ assert df.iloc[1][1] == 'Harry Carney'
# having diff tab counts doesn't trigger it
text = dedent("""
diff --git a/pandas/tests/io/test_common.py b/pandas/tests/io/test_common.py
index 3eee3f619f33d..804d76c3c9eca 100644
--- a/pandas/tests/io/test_common.py
+++ b/pandas/tests/io/test_common.py
@@ -40,22 +40,22 @@ def test_expand_user(self):
self.assertNotEqual(expanded_name, filename)
assert isabs(expanded_name)
- self.assertEqual(os.path.expanduser(filename), expanded_name)
+ assert os.path.expanduser(filename) == expanded_name
def test_expand_user_normal_path(self):
filename = '/somefolder/sometest'
expanded_name = common._expand_user(filename)
- self.assertEqual(expanded_name, filename)
- self.assertEqual(os.path.expanduser(filename), expanded_name)
+ assert expanded_name == filename
+ assert os.path.expanduser(filename) == expanded_name
def test_stringify_path_pathlib(self):
tm._skip_if_no_pathlib()
rel_path = common._stringify_path(Path('.'))
- self.assertEqual(rel_path, '.')
+ assert rel_path == '.'
redundant_path = common._stringify_path(Path('foo//bar'))
- self.assertEqual(redundant_path, os.path.join('foo', 'bar'))
+ assert redundant_path == os.path.join('foo', 'bar')
def test_stringify_path_localpath(self):
tm._skip_if_no_localpath()
@@ -63,19 +63,19 @@ def test_stringify_path_localpath(self):
path = os.path.join('foo', 'bar')
abs_path = os.path.abspath(path)
lpath = LocalPath(path)
- self.assertEqual(common._stringify_path(lpath), abs_path)
+ assert common._stringify_path(lpath) == abs_path
def test_get_filepath_or_buffer_with_path(self):
filename = '~/sometest'
filepath_or_buffer, _, _ = common.get_filepath_or_buffer(filename)
self.assertNotEqual(filepath_or_buffer, filename)
assert isabs(filepath_or_buffer)
- self.assertEqual(os.path.expanduser(filename), filepath_or_buffer)
+ assert os.path.expanduser(filename) == filepath_or_buffer
def test_get_filepath_or_buffer_with_buffer(self):
input_buffer = StringIO()
filepath_or_buffer, _, _ = common.get_filepath_or_buffer(input_buffer)
- self.assertEqual(filepath_or_buffer, input_buffer)
+ assert filepath_or_buffer == input_buffer
def test_iterator(self):
reader = read_csv(StringIO(self.data1), chunksize=1)
@@ -138,6 +138,6 @@ def test_next(self):
for line in lines:
next_line = next(wrapper)
- self.assertEqual(next_line.strip(), line.strip())
+ assert next_line.strip() == line.strip()
pytest.raises(StopIteration, next, wrapper)
diff --git a/pandas/tests/io/test_excel.py b/pandas/tests/io/test_excel.py
index 6092cd4180675..d733f26b2c04d 100644
--- a/pandas/tests/io/test_excel.py
+++ b/pandas/tests/io/test_excel.py
@@ -1216,9 +1216,9 @@ def test_sheets(self):
tm.assert_frame_equal(self.frame, recons)
recons = read_excel(reader, 'test2', index_col=0)
tm.assert_frame_equal(self.tsframe, recons)
- self.assertEqual(2, len(reader.sheet_names))
- self.assertEqual('test1', reader.sheet_names[0])
- self.assertEqual('test2', reader.sheet_names[1])
+ assert 2 == len(reader.sheet_names)
+ assert 'test1' == reader.sheet_names[0]
+ assert 'test2' == reader.sheet_names[1]
def test_colaliases(self):
_skip_if_no_xlrd()
@@ -1262,7 +1262,7 @@ def test_roundtrip_indexlabels(self):
index_col=0,
).astype(np.int64)
frame.index.names = ['test']
- self.assertEqual(frame.index.names, recons.index.names)
+ assert frame.index.names == recons.index.names
frame = (DataFrame(np.random.randn(10, 2)) >= 0)
frame.to_excel(path,
@@ -1274,7 +1274,7 @@ def test_roundtrip_indexlabels(self):
index_col=0,
).astype(np.int64)
frame.index.names = ['test']
- self.assertEqual(frame.index.names, recons.index.names)
+ assert frame.index.names == recons.index.names
frame = (DataFrame(np.random.randn(10, 2)) >= 0)
frame.to_excel(path,
@@ -1316,7 +1316,7 @@ def test_excel_roundtrip_indexname(self):
index_col=0)
tm.assert_frame_equal(result, df)
- self.assertEqual(result.index.name, 'foo')
+ assert result.index.name == 'foo'
def test_excel_roundtrip_datetime(self):
_skip_if_no_xlrd()
@@ -1463,7 +1463,7 @@ def test_to_excel_multiindex_dates(self):
index_col=[0, 1])
tm.assert_frame_equal(tsframe, recons)
- self.assertEqual(recons.index.names, ('time', 'foo'))
+ assert recons.index.names == ('time', 'foo')
def test_to_excel_multiindex_no_write_index(self):
_skip_if_no_xlrd()
@@ -1577,21 +1577,20 @@ def test_to_excel_unicode_filename(self):
# wbk = xlrd.open_workbook(filename,
# formatting_info=True)
- # self.assertEqual(["test1"], wbk.sheet_names())
+ # assert ["test1"] == wbk.sheet_names()
# ws = wbk.sheet_by_name('test1')
- # self.assertEqual([(0, 1, 5, 7), (0, 1, 3, 5), (0, 1, 1, 3)],
- # ws.merged_cells)
+ # assert [(0, 1, 5, 7), (0, 1, 3, 5), (0, 1, 1, 3)] == ws.merged_cells
# for i in range(0, 2):
# for j in range(0, 7):
# xfx = ws.cell_xf_index(0, 0)
# cell_xf = wbk.xf_list[xfx]
# font = wbk.font_list
- # self.assertEqual(1, font[cell_xf.font_index].bold)
- # self.assertEqual(1, cell_xf.border.top_line_style)
- # self.assertEqual(1, cell_xf.border.right_line_style)
- # self.assertEqual(1, cell_xf.border.bottom_line_style)
- # self.assertEqual(1, cell_xf.border.left_line_style)
- # self.assertEqual(2, cell_xf.alignment.hor_align)
+ # assert 1 == font[cell_xf.font_index].bold
+ # assert 1 == cell_xf.border.top_line_style
+ # assert 1 == cell_xf.border.right_line_style
+ # assert 1 == cell_xf.border.bottom_line_style
+ # assert 1 == cell_xf.border.left_line_style
+ # assert 2 == cell_xf.alignment.hor_align
# os.remove(filename)
# def test_to_excel_header_styling_xlsx(self):
# import StringIO
@@ -1623,7 +1622,7 @@ def test_to_excel_unicode_filename(self):
# filename = '__tmp_to_excel_header_styling_xlsx__.xlsx'
# pdf.to_excel(filename, 'test1')
# wbk = openpyxl.load_workbook(filename)
- # self.assertEqual(["test1"], wbk.get_sheet_names())
+ # assert ["test1"] == wbk.get_sheet_names()
# ws = wbk.get_sheet_by_name('test1')
# xlsaddrs = ["%s2" % chr(i) for i in range(ord('A'), ord('H'))]
# xlsaddrs += ["A%s" % i for i in range(1, 6)]
@@ -1631,16 +1630,16 @@ def test_to_excel_unicode_filename(self):
# for xlsaddr in xlsaddrs:
# cell = ws.cell(xlsaddr)
# assert cell.style.font.bold
- # self.assertEqual(openpyxl.style.Border.BORDER_THIN,
- # cell.style.borders.top.border_style)
- # self.assertEqual(openpyxl.style.Border.BORDER_THIN,
- # cell.style.borders.right.border_style)
- # self.assertEqual(openpyxl.style.Border.BORDER_THIN,
- # cell.style.borders.bottom.border_style)
- # self.assertEqual(openpyxl.style.Border.BORDER_THIN,
- # cell.style.borders.left.border_style)
- # self.assertEqual(openpyxl.style.Alignment.HORIZONTAL_CENTER,
- # cell.style.alignment.horizontal)
+ # assert (openpyxl.style.Border.BORDER_THIN ==
+ # cell.style.borders.top.border_style)
+ # assert (openpyxl.style.Border.BORDER_THIN ==
+ # cell.style.borders.right.border_style)
+ # assert (openpyxl.style.Border.BORDER_THIN ==
+ # cell.style.borders.bottom.border_style)
+ # assert (openpyxl.style.Border.BORDER_THIN ==
+ # cell.style.borders.left.border_style)
+ # assert (openpyxl.style.Alignment.HORIZONTAL_CENTER ==
+ # cell.style.alignment.horizontal)
# mergedcells_addrs = ["C1", "E1", "G1"]
# for maddr in mergedcells_addrs:
# assert ws.cell(maddr).merged
@@ -1681,10 +1680,10 @@ def roundtrip(df, header=True, parser_hdr=0, index=True):
res = roundtrip(df, use_headers)
if use_headers:
- self.assertEqual(res.shape, (nrows, ncols + i))
+ assert res.shape == (nrows, ncols + i)
else:
# first row taken as columns
- self.assertEqual(res.shape, (nrows - 1, ncols + i))
+ assert res.shape == (nrows - 1, ncols + i)
# no nans
for r in range(len(res.index)):
@@ -1692,11 +1691,11 @@ def roundtrip(df, header=True, parser_hdr=0, index=True):
assert res.iloc[r, c] is not np.nan
res = roundtrip(DataFrame([0]))
- self.assertEqual(res.shape, (1, 1))
+ assert res.shape == (1, 1)
assert res.iloc[0, 0] is not np.nan
res = roundtrip(DataFrame([0]), False, None)
- self.assertEqual(res.shape, (1, 2))
+ assert res.shape == (1, 2)
assert res.iloc[0, 0] is not np.nan
def test_excel_010_hemstring_raises_NotImplementedError(self):
@@ -1909,18 +1908,18 @@ def test_to_excel_styleconverter(self):
xlsx_style = _Openpyxl1Writer._convert_to_style(hstyle)
assert xlsx_style.font.bold
- self.assertEqual(openpyxl.style.Border.BORDER_THIN,
- xlsx_style.borders.top.border_style)
- self.assertEqual(openpyxl.style.Border.BORDER_THIN,
- xlsx_style.borders.right.border_style)
- self.assertEqual(openpyxl.style.Border.BORDER_THIN,
- xlsx_style.borders.bottom.border_style)
- self.assertEqual(openpyxl.style.Border.BORDER_THIN,
- xlsx_style.borders.left.border_style)
- self.assertEqual(openpyxl.style.Alignment.HORIZONTAL_CENTER,
- xlsx_style.alignment.horizontal)
- self.assertEqual(openpyxl.style.Alignment.VERTICAL_TOP,
- xlsx_style.alignment.vertical)
+ assert (openpyxl.style.Border.BORDER_THIN ==
+ xlsx_style.borders.top.border_style)
+ assert (openpyxl.style.Border.BORDER_THIN ==
+ xlsx_style.borders.right.border_style)
+ assert (openpyxl.style.Border.BORDER_THIN ==
+ xlsx_style.borders.bottom.border_style)
+ assert (openpyxl.style.Border.BORDER_THIN ==
+ xlsx_style.borders.left.border_style)
+ assert (openpyxl.style.Alignment.HORIZONTAL_CENTER ==
+ xlsx_style.alignment.horizontal)
+ assert (openpyxl.style.Alignment.VERTICAL_TOP ==
+ xlsx_style.alignment.vertical)
def skip_openpyxl_gt21(cls):
@@ -1999,12 +1998,12 @@ def test_to_excel_styleconverter(self):
protection = styles.Protection(locked=True, hidden=False)
kw = _Openpyxl20Writer._convert_to_style_kwargs(hstyle)
- self.assertEqual(kw['font'], font)
- self.assertEqual(kw['border'], border)
- self.assertEqual(kw['alignment'], alignment)
- self.assertEqual(kw['fill'], fill)
- self.assertEqual(kw['number_format'], number_format)
- self.assertEqual(kw['protection'], protection)
+ assert kw['font'] == font
+ assert kw['border'] == border
+ assert kw['alignment'] == alignment
+ assert kw['fill'] == fill
+ assert kw['number_format'] == number_format
+ assert kw['protection'] == protection
def test_write_cells_merge_styled(self):
from pandas.io.formats.excel import ExcelCell
@@ -2036,8 +2035,8 @@ def test_write_cells_merge_styled(self):
wks = writer.sheets[sheet_name]
xcell_b1 = wks['B1']
xcell_a2 = wks['A2']
- self.assertEqual(xcell_b1.style, openpyxl_sty_merged)
- self.assertEqual(xcell_a2.style, openpyxl_sty_merged)
+ assert xcell_b1.style == openpyxl_sty_merged
+ assert xcell_a2.style == openpyxl_sty_merged
def skip_openpyxl_lt22(cls):
@@ -2109,12 +2108,12 @@ def test_to_excel_styleconverter(self):
protection = styles.Protection(locked=True, hidden=False)
kw = _Openpyxl22Writer._convert_to_style_kwargs(hstyle)
- self.assertEqual(kw['font'], font)
- self.assertEqual(kw['border'], border)
- self.assertEqual(kw['alignment'], alignment)
- self.assertEqual(kw['fill'], fill)
- self.assertEqual(kw['number_format'], number_format)
- self.assertEqual(kw['protection'], protection)
+ assert kw['font'] == font
+ assert kw['border'] == border
+ assert kw['alignment'] == alignment
+ assert kw['fill'] == fill
+ assert kw['number_format'] == number_format
+ assert kw['protection'] == protection
def test_write_cells_merge_styled(self):
if not openpyxl_compat.is_compat(major_ver=2):
@@ -2148,8 +2147,8 @@ def test_write_cells_merge_styled(self):
wks = writer.sheets[sheet_name]
xcell_b1 = wks['B1']
xcell_a2 = wks['A2']
- self.assertEqual(xcell_b1.font, openpyxl_sty_merged)
- self.assertEqual(xcell_a2.font, openpyxl_sty_merged)
+ assert xcell_b1.font == openpyxl_sty_merged
+ assert xcell_a2.font == openpyxl_sty_merged
class XlwtTests(ExcelWriterBase, tm.TestCase):
@@ -2201,12 +2200,12 @@ def test_to_excel_styleconverter(self):
xls_style = _XlwtWriter._convert_to_style(hstyle)
assert xls_style.font.bold
- self.assertEqual(xlwt.Borders.THIN, xls_style.borders.top)
- self.assertEqual(xlwt.Borders.THIN, xls_style.borders.right)
- self.assertEqual(xlwt.Borders.THIN, xls_style.borders.bottom)
- self.assertEqual(xlwt.Borders.THIN, xls_style.borders.left)
- self.assertEqual(xlwt.Alignment.HORZ_CENTER, xls_style.alignment.horz)
- self.assertEqual(xlwt.Alignment.VERT_TOP, xls_style.alignment.vert)
+ assert xlwt.Borders.THIN == xls_style.borders.top
+ assert xlwt.Borders.THIN == xls_style.borders.right
+ assert xlwt.Borders.THIN == xls_style.borders.bottom
+ assert xlwt.Borders.THIN == xls_style.borders.left
+ assert xlwt.Alignment.HORZ_CENTER == xls_style.alignment.horz
+ assert xlwt.Alignment.VERT_TOP == xls_style.alignment.vert
class XlsxWriterTests(ExcelWriterBase, tm.TestCase):
@@ -2259,7 +2258,7 @@ def test_column_format(self):
except:
read_num_format = cell.style.number_format._format_code
- self.assertEqual(read_num_format, num_format)
+ assert read_num_format == num_format
class OpenpyxlTests_NoMerge(ExcelWriterBase, tm.TestCase):
diff --git a/pandas/tests/io/test_gbq.py b/pandas/tests/io/test_gbq.py
index 13529e7b54714..138def3ea1ac9 100644
--- a/pandas/tests/io/test_gbq.py
+++ b/pandas/tests/io/test_gbq.py
@@ -133,4 +133,4 @@ def test_roundtrip(self):
.format(destination_table),
project_id=_get_project_id(),
private_key=_get_private_key_path())
- self.assertEqual(result['num_rows'][0], test_size)
+ assert result['num_rows'][0] == test_size
diff --git a/pandas/tests/io/test_html.py b/pandas/tests/io/test_html.py
index db6ab236ee793..0a79173df731c 100644
--- a/pandas/tests/io/test_html.py
+++ b/pandas/tests/io/test_html.py
@@ -144,16 +144,16 @@ def test_spam_no_types(self):
df2 = self.read_html(self.spam_data, 'Unit')
assert_framelist_equal(df1, df2)
- self.assertEqual(df1[0].iloc[0, 0], 'Proximates')
- self.assertEqual(df1[0].columns[0], 'Nutrient')
+ assert df1[0].iloc[0, 0] == 'Proximates'
+ assert df1[0].columns[0] == 'Nutrient'
def test_spam_with_types(self):
df1 = self.read_html(self.spam_data, '.*Water.*')
df2 = self.read_html(self.spam_data, 'Unit')
assert_framelist_equal(df1, df2)
- self.assertEqual(df1[0].iloc[0, 0], 'Proximates')
- self.assertEqual(df1[0].columns[0], 'Nutrient')
+ assert df1[0].iloc[0, 0] == 'Proximates'
+ assert df1[0].columns[0] == 'Nutrient'
def test_spam_no_match(self):
dfs = self.read_html(self.spam_data)
@@ -167,7 +167,7 @@ def test_banklist_no_match(self):
def test_spam_header(self):
df = self.read_html(self.spam_data, '.*Water.*', header=1)[0]
- self.assertEqual(df.columns[0], 'Proximates')
+ assert df.columns[0] == 'Proximates'
assert not df.empty
def test_skiprows_int(self):
@@ -288,7 +288,7 @@ def test_invalid_url(self):
self.read_html('http://www.a23950sdfa908sd.com',
match='.*Water.*')
except ValueError as e:
- self.assertEqual(str(e), 'No tables found')
+ assert str(e) == 'No tables found'
@tm.slow
def test_file_url(self):
@@ -368,7 +368,7 @@ def test_python_docs_table(self):
url = 'https://docs.python.org/2/'
dfs = self.read_html(url, match='Python')
zz = [df.iloc[0, 0][0:4] for df in dfs]
- self.assertEqual(sorted(zz), sorted(['Repo', 'What']))
+ assert sorted(zz) == sorted(['Repo', 'What'])
@tm.slow
def test_thousands_macau_stats(self):
@@ -518,7 +518,7 @@ def test_nyse_wsj_commas_table(self):
columns = Index(['Issue(Roll over for charts and headlines)',
'Volume', 'Price', 'Chg', '% Chg'])
nrows = 100
- self.assertEqual(df.shape[0], nrows)
+ assert df.shape[0] == nrows
tm.assert_index_equal(df.columns, columns)
@tm.slow
@@ -536,7 +536,7 @@ def try_remove_ws(x):
ground_truth = read_csv(os.path.join(DATA_PATH, 'banklist.csv'),
converters={'Updated Date': Timestamp,
'Closing Date': Timestamp})
- self.assertEqual(df.shape, ground_truth.shape)
+ assert df.shape == ground_truth.shape
old = ['First Vietnamese American BankIn Vietnamese',
'Westernbank Puerto RicoEn Espanol',
'R-G Premier Bank of Puerto RicoEn Espanol',
@@ -663,7 +663,7 @@ def test_wikipedia_states_table(self):
assert os.path.isfile(data), '%r is not a file' % data
assert os.path.getsize(data), '%r is an empty file' % data
result = self.read_html(data, 'Arizona', header=1)[0]
- self.assertEqual(result['sq mi'].dtype, np.dtype('float64'))
+ assert result['sq mi'].dtype == np.dtype('float64')
def test_decimal_rows(self):
diff --git a/pandas/tests/io/test_packers.py b/pandas/tests/io/test_packers.py
index ae1cadcd41496..451cce125e228 100644
--- a/pandas/tests/io/test_packers.py
+++ b/pandas/tests/io/test_packers.py
@@ -217,9 +217,10 @@ def test_dict_float(self):
def test_dict_complex(self):
x = {'foo': 1.0 + 1.0j, 'bar': 2.0 + 2.0j}
x_rec = self.encode_decode(x)
- self.assertEqual(x, x_rec)
+ tm.assert_dict_equal(x, x_rec)
+
for key in x:
- self.assertEqual(type(x[key]), type(x_rec[key]))
+ tm.assert_class_equal(x[key], x_rec[key], obj="complex value")
def test_dict_numpy_float(self):
x = {'foo': np.float32(1.0), 'bar': np.float32(2.0)}
@@ -230,9 +231,10 @@ def test_dict_numpy_complex(self):
x = {'foo': np.complex128(1.0 + 1.0j),
'bar': np.complex128(2.0 + 2.0j)}
x_rec = self.encode_decode(x)
- self.assertEqual(x, x_rec)
+ tm.assert_dict_equal(x, x_rec)
+
for key in x:
- self.assertEqual(type(x[key]), type(x_rec[key]))
+ tm.assert_class_equal(x[key], x_rec[key], obj="numpy complex128")
def test_numpy_array_float(self):
@@ -268,7 +270,7 @@ def test_timestamp(self):
'20130101'), Timestamp('20130101', tz='US/Eastern'),
Timestamp('201301010501')]:
i_rec = self.encode_decode(i)
- self.assertEqual(i, i_rec)
+ assert i == i_rec
def test_nat(self):
nat_rec = self.encode_decode(NaT)
@@ -286,7 +288,7 @@ def test_datetimes(self):
datetime.date(2013, 1, 1),
np.datetime64(datetime.datetime(2013, 1, 5, 2, 15))]:
i_rec = self.encode_decode(i)
- self.assertEqual(i, i_rec)
+ assert i == i_rec
def test_timedeltas(self):
@@ -294,7 +296,7 @@ def test_timedeltas(self):
datetime.timedelta(days=1, seconds=10),
np.timedelta64(1000000)]:
i_rec = self.encode_decode(i)
- self.assertEqual(i, i_rec)
+ assert i == i_rec
class TestIndex(TestPackers):
@@ -668,16 +670,14 @@ def decompress(ob):
for w in ws:
# check the messages from our warnings
- self.assertEqual(
- str(w.message),
- 'copying data after decompressing; this may mean that'
- ' decompress is caching its result',
- )
+ assert str(w.message) == ('copying data after decompressing; '
+ 'this may mean that decompress is '
+ 'caching its result')
for buf, control_buf in zip(not_garbage, control):
# make sure none of our mutations above affected the
# original buffers
- self.assertEqual(buf, control_buf)
+ assert buf == control_buf
def test_compression_warns_when_decompress_caches_zlib(self):
if not _ZLIB_INSTALLED:
@@ -710,7 +710,7 @@ def _test_small_strings_no_warn(self, compress):
# we compare the ord of bytes b'a' with unicode u'a' because they should
# always be the same (unless we were able to mutate the shared
# character singleton, in which case ord(b'a') == ord(b'b')).
- self.assertEqual(ord(b'a'), ord(u'a'))
+ assert ord(b'a') == ord(u'a')
tm.assert_numpy_array_equal(
char_unpacked,
np.array([ord(b'b')], dtype='uint8'),
@@ -801,7 +801,7 @@ def test_default_encoding(self):
for frame in compat.itervalues(self.frame):
result = frame.to_msgpack()
expected = frame.to_msgpack(encoding='utf8')
- self.assertEqual(result, expected)
+ assert result == expected
result = self.encode_decode(frame)
assert_frame_equal(result, frame)
diff --git a/pandas/tests/io/test_pytables.py b/pandas/tests/io/test_pytables.py
index ae1b4137c354f..a268fa96175cf 100644
--- a/pandas/tests/io/test_pytables.py
+++ b/pandas/tests/io/test_pytables.py
@@ -165,8 +165,8 @@ def test_factory_fun(self):
with catch_warnings(record=True):
with get_store(path) as tbl:
- self.assertEqual(len(tbl), 1)
- self.assertEqual(type(tbl['a']), DataFrame)
+ assert len(tbl) == 1
+ assert type(tbl['a']) == DataFrame
finally:
safe_remove(self.path)
@@ -185,8 +185,8 @@ def test_context(self):
tbl['a'] = tm.makeDataFrame()
with HDFStore(path) as tbl:
- self.assertEqual(len(tbl), 1)
- self.assertEqual(type(tbl['a']), DataFrame)
+ assert len(tbl) == 1
+ assert type(tbl['a']) == DataFrame
finally:
safe_remove(path)
@@ -374,7 +374,7 @@ def test_keys(self):
with catch_warnings(record=True):
store['d'] = tm.makePanel()
store['foo/bar'] = tm.makePanel()
- self.assertEqual(len(store), 5)
+ assert len(store) == 5
expected = set(['/a', '/b', '/c', '/d', '/foo/bar'])
assert set(store.keys()) == expected
assert set(store) == expected
@@ -461,9 +461,9 @@ def test_versioning(self):
_maybe_remove(store, 'df1')
store.append('df1', df[:10])
store.append('df1', df[10:])
- self.assertEqual(store.root.a._v_attrs.pandas_version, '0.15.2')
- self.assertEqual(store.root.b._v_attrs.pandas_version, '0.15.2')
- self.assertEqual(store.root.df1._v_attrs.pandas_version, '0.15.2')
+ assert store.root.a._v_attrs.pandas_version == '0.15.2'
+ assert store.root.b._v_attrs.pandas_version == '0.15.2'
+ assert store.root.df1._v_attrs.pandas_version == '0.15.2'
# write a file and wipe its versioning
_maybe_remove(store, 'df2')
@@ -488,7 +488,7 @@ def check(mode):
else:
store = HDFStore(path, mode=mode)
- self.assertEqual(store._handle.mode, mode)
+ assert store._handle.mode == mode
store.close()
with ensure_clean_path(self.path) as path:
@@ -501,7 +501,7 @@ def f():
pytest.raises(IOError, f)
else:
with HDFStore(path, mode=mode) as store:
- self.assertEqual(store._handle.mode, mode)
+ assert store._handle.mode == mode
with ensure_clean_path(self.path) as path:
@@ -550,7 +550,7 @@ def test_reopen_handle(self):
# truncation ok here
store.open('w')
assert store.is_open
- self.assertEqual(len(store), 0)
+ assert len(store) == 0
store.close()
assert not store.is_open
@@ -560,24 +560,24 @@ def test_reopen_handle(self):
# reopen as read
store.open('r')
assert store.is_open
- self.assertEqual(len(store), 1)
- self.assertEqual(store._mode, 'r')
+ assert len(store) == 1
+ assert store._mode == 'r'
store.close()
assert not store.is_open
# reopen as append
store.open('a')
assert store.is_open
- self.assertEqual(len(store), 1)
- self.assertEqual(store._mode, 'a')
+ assert len(store) == 1
+ assert store._mode == 'a'
store.close()
assert not store.is_open
# reopen as append (again)
store.open('a')
assert store.is_open
- self.assertEqual(len(store), 1)
- self.assertEqual(store._mode, 'a')
+ assert len(store) == 1
+ assert store._mode == 'a'
store.close()
assert not store.is_open
@@ -889,7 +889,7 @@ def test_append_series(self):
store.append('ns', ns)
result = store['ns']
tm.assert_series_equal(result, ns)
- self.assertEqual(result.name, ns.name)
+ assert result.name == ns.name
# select on the values
expected = ns[ns > 60]
@@ -1300,8 +1300,8 @@ def test_append_with_strings(self):
dict([(x, "%s_extra" % x) for x in wp.minor_axis]), axis=2)
def check_col(key, name, size):
- self.assertEqual(getattr(store.get_storer(
- key).table.description, name).itemsize, size)
+ assert getattr(store.get_storer(key)
+ .table.description, name).itemsize == size
store.append('s1', wp, min_itemsize=20)
store.append('s1', wp2)
@@ -1395,8 +1395,8 @@ def check_col(key, name, size):
with ensure_clean_store(self.path) as store:
def check_col(key, name, size):
- self.assertEqual(getattr(store.get_storer(
- key).table.description, name).itemsize, size)
+ assert getattr(store.get_storer(key)
+ .table.description, name).itemsize == size
df = DataFrame(dict(A='foo', B='bar'), index=range(10))
@@ -1404,13 +1404,13 @@ def check_col(key, name, size):
_maybe_remove(store, 'df')
store.append('df', df, min_itemsize={'A': 200})
check_col('df', 'A', 200)
- self.assertEqual(store.get_storer('df').data_columns, ['A'])
+ assert store.get_storer('df').data_columns == ['A']
# a min_itemsize that creates a data_column2
_maybe_remove(store, 'df')
store.append('df', df, data_columns=['B'], min_itemsize={'A': 200})
check_col('df', 'A', 200)
- self.assertEqual(store.get_storer('df').data_columns, ['B', 'A'])
+ assert store.get_storer('df').data_columns == ['B', 'A']
# a min_itemsize that creates a data_column2
_maybe_remove(store, 'df')
@@ -1418,7 +1418,7 @@ def check_col(key, name, size):
'B'], min_itemsize={'values': 200})
check_col('df', 'B', 200)
check_col('df', 'values_block_0', 200)
- self.assertEqual(store.get_storer('df').data_columns, ['B'])
+ assert store.get_storer('df').data_columns == ['B']
# infer the .typ on subsequent appends
_maybe_remove(store, 'df')
@@ -1492,8 +1492,8 @@ def test_append_with_data_columns(self):
# using min_itemsize and a data column
def check_col(key, name, size):
- self.assertEqual(getattr(store.get_storer(
- key).table.description, name).itemsize, size)
+ assert getattr(store.get_storer(key)
+ .table.description, name).itemsize == size
with ensure_clean_store(self.path) as store:
_maybe_remove(store, 'df')
@@ -1985,7 +1985,7 @@ def test_append_raise(self):
# list in column
df = tm.makeDataFrame()
df['invalid'] = [['a']] * len(df)
- self.assertEqual(df.dtypes['invalid'], np.object_)
+ assert df.dtypes['invalid'] == np.object_
pytest.raises(TypeError, store.append, 'df', df)
# multiple invalid columns
@@ -1999,7 +1999,7 @@ def test_append_raise(self):
s = s.astype(object)
s[0:5] = np.nan
df['invalid'] = s
- self.assertEqual(df.dtypes['invalid'], np.object_)
+ assert df.dtypes['invalid'] == np.object_
pytest.raises(TypeError, store.append, 'df', df)
# directly ndarray
@@ -2227,11 +2227,11 @@ def test_remove(self):
store['a'] = ts
store['b'] = df
_maybe_remove(store, 'a')
- self.assertEqual(len(store), 1)
+ assert len(store) == 1
tm.assert_frame_equal(df, store['b'])
_maybe_remove(store, 'b')
- self.assertEqual(len(store), 0)
+ assert len(store) == 0
# nonexistence
pytest.raises(KeyError, store.remove, 'a_nonexistent_store')
@@ -2241,19 +2241,19 @@ def test_remove(self):
store['b/foo'] = df
_maybe_remove(store, 'foo')
_maybe_remove(store, 'b/foo')
- self.assertEqual(len(store), 1)
+ assert len(store) == 1
store['a'] = ts
store['b/foo'] = df
_maybe_remove(store, 'b')
- self.assertEqual(len(store), 1)
+ assert len(store) == 1
# __delitem__
store['a'] = ts
store['b'] = df
del store['a']
del store['b']
- self.assertEqual(len(store), 0)
+ assert len(store) == 0
def test_remove_where(self):
@@ -3281,14 +3281,14 @@ def test_select_with_many_inputs(self):
result = store.select('df', 'B=selector')
expected = df[df.B.isin(selector)]
tm.assert_frame_equal(expected, result)
- self.assertEqual(len(result), 100)
+ assert len(result) == 100
# big selector along the index
selector = Index(df.ts[0:100].values)
result = store.select('df', 'ts=selector')
expected = df[df.ts.isin(selector.values)]
tm.assert_frame_equal(expected, result)
- self.assertEqual(len(result), 100)
+ assert len(result) == 100
def test_select_iterator(self):
@@ -3306,7 +3306,7 @@ def test_select_iterator(self):
tm.assert_frame_equal(expected, result)
results = [s for s in store.select('df', chunksize=100)]
- self.assertEqual(len(results), 5)
+ assert len(results) == 5
result = concat(results)
tm.assert_frame_equal(expected, result)
@@ -3331,7 +3331,7 @@ def test_select_iterator(self):
results = [s for s in read_hdf(path, 'df', chunksize=100)]
result = concat(results)
- self.assertEqual(len(results), 5)
+ assert len(results) == 5
tm.assert_frame_equal(result, df)
tm.assert_frame_equal(result, read_hdf(path, 'df'))
@@ -3484,7 +3484,7 @@ def test_select_iterator_non_complete_8014(self):
where = "index > '%s'" % end_dt
results = [s for s in store.select(
'df', where=where, chunksize=chunksize)]
- self.assertEqual(0, len(results))
+ assert 0 == len(results)
def test_select_iterator_many_empty_frames(self):
@@ -3563,8 +3563,8 @@ def test_retain_index_attributes(self):
for attr in ['freq', 'tz', 'name']:
for idx in ['index', 'columns']:
- self.assertEqual(getattr(getattr(df, idx), attr, None),
- getattr(getattr(result, idx), attr, None))
+ assert (getattr(getattr(df, idx), attr, None) ==
+ getattr(getattr(result, idx), attr, None))
# try to append a table with a different frequency
with catch_warnings(record=True):
@@ -3610,7 +3610,7 @@ def test_retain_index_attributes2(self):
df = DataFrame(dict(A=Series(lrange(3), index=idx)))
df.to_hdf(path, 'data', mode='w', append=True)
- self.assertEqual(read_hdf(path, 'data').index.name, 'foo')
+ assert read_hdf(path, 'data').index.name == 'foo'
with catch_warnings(record=True):
@@ -3655,7 +3655,7 @@ def test_frame_select(self):
date = df.index[len(df) // 2]
crit1 = Term('index>=date')
- self.assertEqual(crit1.env.scope['date'], date)
+ assert crit1.env.scope['date'] == date
crit2 = ("columns=['A', 'D']")
crit3 = ('columns=A')
@@ -4481,7 +4481,7 @@ def do_copy(f=None, new_f=None, keys=None,
# check keys
if keys is None:
keys = store.keys()
- self.assertEqual(set(keys), set(tstore.keys()))
+ assert set(keys) == set(tstore.keys())
# check indices & nrows
for k in tstore.keys():
@@ -4489,7 +4489,7 @@ def do_copy(f=None, new_f=None, keys=None,
new_t = tstore.get_storer(k)
orig_t = store.get_storer(k)
- self.assertEqual(orig_t.nrows, new_t.nrows)
+ assert orig_t.nrows == new_t.nrows
# check propindexes
if propindexes:
@@ -4554,7 +4554,7 @@ def test_store_datetime_fractional_secs(self):
dt = datetime.datetime(2012, 1, 2, 3, 4, 5, 123456)
series = Series([0], [dt])
store['a'] = series
- self.assertEqual(store['a'].index[0], dt)
+ assert store['a'].index[0] == dt
def test_tseries_indices_series(self):
@@ -4564,18 +4564,18 @@ def test_tseries_indices_series(self):
store['a'] = ser
result = store['a']
- assert_series_equal(result, ser)
- self.assertEqual(type(result.index), type(ser.index))
- self.assertEqual(result.index.freq, ser.index.freq)
+ tm.assert_series_equal(result, ser)
+ assert result.index.freq == ser.index.freq
+ tm.assert_class_equal(result.index, ser.index, obj="series index")
idx = tm.makePeriodIndex(10)
ser = Series(np.random.randn(len(idx)), idx)
store['a'] = ser
result = store['a']
- assert_series_equal(result, ser)
- self.assertEqual(type(result.index), type(ser.index))
- self.assertEqual(result.index.freq, ser.index.freq)
+ tm.assert_series_equal(result, ser)
+ assert result.index.freq == ser.index.freq
+ tm.assert_class_equal(result.index, ser.index, obj="series index")
def test_tseries_indices_frame(self):
@@ -4586,8 +4586,9 @@ def test_tseries_indices_frame(self):
result = store['a']
assert_frame_equal(result, df)
- self.assertEqual(type(result.index), type(df.index))
- self.assertEqual(result.index.freq, df.index.freq)
+ assert result.index.freq == df.index.freq
+ tm.assert_class_equal(result.index, df.index,
+ obj="dataframe index")
idx = tm.makePeriodIndex(10)
df = DataFrame(np.random.randn(len(idx), 3), idx)
@@ -4595,8 +4596,9 @@ def test_tseries_indices_frame(self):
result = store['a']
assert_frame_equal(result, df)
- self.assertEqual(type(result.index), type(df.index))
- self.assertEqual(result.index.freq, df.index.freq)
+ assert result.index.freq == df.index.freq
+ tm.assert_class_equal(result.index, df.index,
+ obj="dataframe index")
def test_unicode_index(self):
@@ -5394,7 +5396,7 @@ def test_tseries_select_index_column(self):
with ensure_clean_store(self.path) as store:
store.append('frame', frame)
result = store.select_column('frame', 'index')
- self.assertEqual(rng.tz, DatetimeIndex(result.values).tz)
+ assert rng.tz == DatetimeIndex(result.values).tz
# check utc
rng = date_range('1/1/2000', '1/30/2000', tz='UTC')
@@ -5403,7 +5405,7 @@ def test_tseries_select_index_column(self):
with ensure_clean_store(self.path) as store:
store.append('frame', frame)
result = store.select_column('frame', 'index')
- self.assertEqual(rng.tz, result.dt.tz)
+ assert rng.tz == result.dt.tz
# double check non-utc
rng = date_range('1/1/2000', '1/30/2000', tz='US/Eastern')
@@ -5412,7 +5414,7 @@ def test_tseries_select_index_column(self):
with ensure_clean_store(self.path) as store:
store.append('frame', frame)
result = store.select_column('frame', 'index')
- self.assertEqual(rng.tz, result.dt.tz)
+ assert rng.tz == result.dt.tz
def test_timezones_fixed(self):
with ensure_clean_store(self.path) as store:
@@ -5443,7 +5445,7 @@ def test_fixed_offset_tz(self):
store['frame'] = frame
recons = store['frame']
tm.assert_index_equal(recons.index, rng)
- self.assertEqual(rng.tz, recons.index.tz)
+ assert rng.tz == recons.index.tz
def test_store_timezone(self):
# GH2852
diff --git a/pandas/tests/io/test_sql.py b/pandas/tests/io/test_sql.py
index fd883c9c0ff00..52883a41b08c2 100644
--- a/pandas/tests/io/test_sql.py
+++ b/pandas/tests/io/test_sql.py
@@ -405,9 +405,7 @@ def _to_sql_replace(self):
num_entries = len(self.test_frame1)
num_rows = self._count_rows('test_frame1')
- self.assertEqual(
- num_rows, num_entries, "not the same number of rows as entries")
-
+ assert num_rows == num_entries
self.drop_table('test_frame1')
def _to_sql_append(self):
@@ -425,9 +423,7 @@ def _to_sql_append(self):
num_entries = 2 * len(self.test_frame1)
num_rows = self._count_rows('test_frame1')
- self.assertEqual(
- num_rows, num_entries, "not the same number of rows as entries")
-
+ assert num_rows == num_entries
self.drop_table('test_frame1')
def _roundtrip(self):
@@ -454,7 +450,7 @@ def _to_sql_save_index(self):
columns=['A', 'B', 'C'], index=['A'])
self.pandasSQL.to_sql(df, 'test_to_sql_saves_index')
ix_cols = self._get_index_columns('test_to_sql_saves_index')
- self.assertEqual(ix_cols, [['A', ], ])
+ assert ix_cols == [['A', ], ]
def _transaction_test(self):
self.pandasSQL.execute("CREATE TABLE test_trans (A INT, B TEXT)")
@@ -470,13 +466,13 @@ def _transaction_test(self):
# ignore raised exception
pass
res = self.pandasSQL.read_query('SELECT * FROM test_trans')
- self.assertEqual(len(res), 0)
+ assert len(res) == 0
# Make sure when transaction is committed, rows do get inserted
with self.pandasSQL.run_transaction() as trans:
trans.execute(ins_sql)
res2 = self.pandasSQL.read_query('SELECT * FROM test_trans')
- self.assertEqual(len(res2), 1)
+ assert len(res2) == 1
# -----------------------------------------------------------------------------
@@ -544,8 +540,7 @@ def test_to_sql_replace(self):
num_entries = len(self.test_frame1)
num_rows = self._count_rows('test_frame3')
- self.assertEqual(
- num_rows, num_entries, "not the same number of rows as entries")
+ assert num_rows == num_entries
def test_to_sql_append(self):
sql.to_sql(self.test_frame1, 'test_frame4',
@@ -559,8 +554,7 @@ def test_to_sql_append(self):
num_entries = 2 * len(self.test_frame1)
num_rows = self._count_rows('test_frame4')
- self.assertEqual(
- num_rows, num_entries, "not the same number of rows as entries")
+ assert num_rows == num_entries
def test_to_sql_type_mapping(self):
sql.to_sql(self.test_frame3, 'test_frame5', self.conn, index=False)
@@ -663,44 +657,39 @@ def test_to_sql_index_label(self):
# no index name, defaults to 'index'
sql.to_sql(temp_frame, 'test_index_label', self.conn)
frame = sql.read_sql_query('SELECT * FROM test_index_label', self.conn)
- self.assertEqual(frame.columns[0], 'index')
+ assert frame.columns[0] == 'index'
# specifying index_label
sql.to_sql(temp_frame, 'test_index_label', self.conn,
if_exists='replace', index_label='other_label')
frame = sql.read_sql_query('SELECT * FROM test_index_label', self.conn)
- self.assertEqual(frame.columns[0], 'other_label',
- "Specified index_label not written to database")
+ assert frame.columns[0] == "other_label"
# using the index name
temp_frame.index.name = 'index_name'
sql.to_sql(temp_frame, 'test_index_label', self.conn,
if_exists='replace')
frame = sql.read_sql_query('SELECT * FROM test_index_label', self.conn)
- self.assertEqual(frame.columns[0], 'index_name',
- "Index name not written to database")
+ assert frame.columns[0] == "index_name"
# has index name, but specifying index_label
sql.to_sql(temp_frame, 'test_index_label', self.conn,
if_exists='replace', index_label='other_label')
frame = sql.read_sql_query('SELECT * FROM test_index_label', self.conn)
- self.assertEqual(frame.columns[0], 'other_label',
- "Specified index_label not written to database")
+ assert frame.columns[0] == "other_label"
# index name is integer
temp_frame.index.name = 0
sql.to_sql(temp_frame, 'test_index_label', self.conn,
if_exists='replace')
frame = sql.read_sql_query('SELECT * FROM test_index_label', self.conn)
- self.assertEqual(frame.columns[0], '0',
- "Integer index label not written to database")
+ assert frame.columns[0] == "0"
temp_frame.index.name = None
sql.to_sql(temp_frame, 'test_index_label', self.conn,
if_exists='replace', index_label=0)
frame = sql.read_sql_query('SELECT * FROM test_index_label', self.conn)
- self.assertEqual(frame.columns[0], '0',
- "Integer index label not written to database")
+ assert frame.columns[0] == "0"
def test_to_sql_index_label_multiindex(self):
temp_frame = DataFrame({'col1': range(4)},
@@ -710,30 +699,27 @@ def test_to_sql_index_label_multiindex(self):
# no index name, defaults to 'level_0' and 'level_1'
sql.to_sql(temp_frame, 'test_index_label', self.conn)
frame = sql.read_sql_query('SELECT * FROM test_index_label', self.conn)
- self.assertEqual(frame.columns[0], 'level_0')
- self.assertEqual(frame.columns[1], 'level_1')
+ assert frame.columns[0] == 'level_0'
+ assert frame.columns[1] == 'level_1'
# specifying index_label
sql.to_sql(temp_frame, 'test_index_label', self.conn,
if_exists='replace', index_label=['A', 'B'])
frame = sql.read_sql_query('SELECT * FROM test_index_label', self.conn)
- self.assertEqual(frame.columns[:2].tolist(), ['A', 'B'],
- "Specified index_labels not written to database")
+ assert frame.columns[:2].tolist() == ['A', 'B']
# using the index name
temp_frame.index.names = ['A', 'B']
sql.to_sql(temp_frame, 'test_index_label', self.conn,
if_exists='replace')
frame = sql.read_sql_query('SELECT * FROM test_index_label', self.conn)
- self.assertEqual(frame.columns[:2].tolist(), ['A', 'B'],
- "Index names not written to database")
+ assert frame.columns[:2].tolist() == ['A', 'B']
# has index name, but specifying index_label
sql.to_sql(temp_frame, 'test_index_label', self.conn,
if_exists='replace', index_label=['C', 'D'])
frame = sql.read_sql_query('SELECT * FROM test_index_label', self.conn)
- self.assertEqual(frame.columns[:2].tolist(), ['C', 'D'],
- "Specified index_labels not written to database")
+ assert frame.columns[:2].tolist() == ['C', 'D']
# wrong length of index_label
pytest.raises(ValueError, sql.to_sql, temp_frame,
@@ -793,7 +779,7 @@ def test_chunksize_read(self):
for chunk in sql.read_sql_query("select * from test_chunksize",
self.conn, chunksize=5):
res2 = concat([res2, chunk], ignore_index=True)
- self.assertEqual(len(chunk), sizes[i])
+ assert len(chunk) == sizes[i]
i += 1
tm.assert_frame_equal(res1, res2)
@@ -807,7 +793,7 @@ def test_chunksize_read(self):
for chunk in sql.read_sql_table("test_chunksize", self.conn,
chunksize=5):
res3 = concat([res3, chunk], ignore_index=True)
- self.assertEqual(len(chunk), sizes[i])
+ assert len(chunk) == sizes[i]
i += 1
tm.assert_frame_equal(res1, res3)
@@ -856,29 +842,24 @@ def test_read_table_columns(self):
cols = ['A', 'B']
result = sql.read_sql_table('test_frame', self.conn, columns=cols)
- self.assertEqual(result.columns.tolist(), cols,
- "Columns not correctly selected")
+ assert result.columns.tolist() == cols
def test_read_table_index_col(self):
# test columns argument in read_table
sql.to_sql(self.test_frame1, 'test_frame', self.conn)
result = sql.read_sql_table('test_frame', self.conn, index_col="index")
- self.assertEqual(result.index.names, ["index"],
- "index_col not correctly set")
+ assert result.index.names == ["index"]
result = sql.read_sql_table(
'test_frame', self.conn, index_col=["A", "B"])
- self.assertEqual(result.index.names, ["A", "B"],
- "index_col not correctly set")
+ assert result.index.names == ["A", "B"]
result = sql.read_sql_table('test_frame', self.conn,
index_col=["A", "B"],
columns=["C", "D"])
- self.assertEqual(result.index.names, ["A", "B"],
- "index_col not correctly set")
- self.assertEqual(result.columns.tolist(), ["C", "D"],
- "columns not set correctly whith index_col")
+ assert result.index.names == ["A", "B"]
+ assert result.columns.tolist() == ["C", "D"]
def test_read_sql_delegate(self):
iris_frame1 = sql.read_sql_query(
@@ -905,10 +886,11 @@ def test_not_reflect_all_tables(self):
sql.read_sql_table('other_table', self.conn)
sql.read_sql_query('SELECT * FROM other_table', self.conn)
# Verify some things
- self.assertEqual(len(w), 0, "Warning triggered for other table")
+ assert len(w) == 0
def test_warning_case_insensitive_table_name(self):
- # see GH7815.
+ # see gh-7815
+ #
# We can't test that this warning is triggered, as the database
# configuration would have to be altered. But here we test that
# the warning is certainly NOT triggered in a normal case.
@@ -918,8 +900,7 @@ def test_warning_case_insensitive_table_name(self):
# This should not trigger a Warning
self.test_frame1.to_sql('CaseSensitive', self.conn)
# Verify some things
- self.assertEqual(
- len(w), 0, "Warning triggered for writing a table")
+ assert len(w) == 0
def _get_index_columns(self, tbl_name):
from sqlalchemy.engine import reflection
@@ -981,7 +962,7 @@ def test_query_by_text_obj(self):
iris_df = sql.read_sql(name_text, self.conn, params={
'name': 'Iris-versicolor'})
all_names = set(iris_df['Name'])
- self.assertEqual(all_names, set(['Iris-versicolor']))
+ assert all_names == set(['Iris-versicolor'])
def test_query_by_select_obj(self):
# WIP : GH10846
@@ -992,7 +973,7 @@ def test_query_by_select_obj(self):
iris_df = sql.read_sql(name_select, self.conn,
params={'name': 'Iris-setosa'})
all_names = set(iris_df['Name'])
- self.assertEqual(all_names, set(['Iris-setosa']))
+ assert all_names == set(['Iris-setosa'])
class _EngineToConnMixin(object):
@@ -1094,8 +1075,7 @@ def test_sqlite_type_mapping(self):
db = sql.SQLiteDatabase(self.conn)
table = sql.SQLiteTable("test_type", db, frame=df)
schema = table.sql_schema()
- self.assertEqual(self._get_sqlite_column_type(schema, 'time'),
- "TIMESTAMP")
+ assert self._get_sqlite_column_type(schema, 'time') == "TIMESTAMP"
# -----------------------------------------------------------------------------
@@ -1264,24 +1244,22 @@ def check(col):
# "2000-01-01 00:00:00-08:00" should convert to
# "2000-01-01 08:00:00"
- self.assertEqual(col[0], Timestamp('2000-01-01 08:00:00'))
+ assert col[0] == Timestamp('2000-01-01 08:00:00')
# "2000-06-01 00:00:00-07:00" should convert to
# "2000-06-01 07:00:00"
- self.assertEqual(col[1], Timestamp('2000-06-01 07:00:00'))
+ assert col[1] == Timestamp('2000-06-01 07:00:00')
elif is_datetime64tz_dtype(col.dtype):
assert str(col.dt.tz) == 'UTC'
# "2000-01-01 00:00:00-08:00" should convert to
# "2000-01-01 08:00:00"
- self.assertEqual(col[0], Timestamp(
- '2000-01-01 08:00:00', tz='UTC'))
+ assert col[0] == Timestamp('2000-01-01 08:00:00', tz='UTC')
# "2000-06-01 00:00:00-07:00" should convert to
# "2000-06-01 07:00:00"
- self.assertEqual(col[1], Timestamp(
- '2000-06-01 07:00:00', tz='UTC'))
+ assert col[1] == Timestamp('2000-06-01 07:00:00', tz='UTC')
else:
raise AssertionError("DateCol loaded with incorrect type "
@@ -1525,7 +1503,7 @@ def test_dtype(self):
meta.reflect()
sqltype = meta.tables['dtype_test3'].columns['B'].type
assert isinstance(sqltype, sqlalchemy.String)
- self.assertEqual(sqltype.length, 10)
+ assert sqltype.length == 10
# single dtype
df.to_sql('single_dtype_test', self.conn, dtype=sqlalchemy.TEXT)
@@ -1576,15 +1554,14 @@ def test_double_precision(self):
res = sql.read_sql_table('test_dtypes', self.conn)
# check precision of float64
- self.assertEqual(np.round(df['f64'].iloc[0], 14),
- np.round(res['f64'].iloc[0], 14))
+ assert (np.round(df['f64'].iloc[0], 14) ==
+ np.round(res['f64'].iloc[0], 14))
# check sql types
meta = sqlalchemy.schema.MetaData(bind=self.conn)
meta.reflect()
col_dict = meta.tables['test_dtypes'].columns
- self.assertEqual(str(col_dict['f32'].type),
- str(col_dict['f64_as_f32'].type))
+ assert str(col_dict['f32'].type) == str(col_dict['f64_as_f32'].type)
assert isinstance(col_dict['f32'].type, sqltypes.Float)
assert isinstance(col_dict['f64'].type, sqltypes.Float)
assert isinstance(col_dict['i32'].type, sqltypes.Integer)
@@ -1690,7 +1667,7 @@ def test_bigint_warning(self):
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter("always")
sql.read_sql_table('test_bigintwarning', self.conn)
- self.assertEqual(len(w), 0, "Warning triggered for other table")
+ assert len(w) == 0
class _TestMySQLAlchemy(object):
@@ -2002,20 +1979,20 @@ def test_dtype(self):
df.to_sql('dtype_test2', self.conn, dtype={'B': 'STRING'})
# sqlite stores Boolean values as INTEGER
- self.assertEqual(self._get_sqlite_column_type(
- 'dtype_test', 'B'), 'INTEGER')
+ assert self._get_sqlite_column_type(
+ 'dtype_test', 'B') == 'INTEGER'
- self.assertEqual(self._get_sqlite_column_type(
- 'dtype_test2', 'B'), 'STRING')
+ assert self._get_sqlite_column_type(
+ 'dtype_test2', 'B') == 'STRING'
pytest.raises(ValueError, df.to_sql,
'error', self.conn, dtype={'B': bool})
# single dtype
df.to_sql('single_dtype_test', self.conn, dtype='STRING')
- self.assertEqual(
- self._get_sqlite_column_type('single_dtype_test', 'A'), 'STRING')
- self.assertEqual(
- self._get_sqlite_column_type('single_dtype_test', 'B'), 'STRING')
+ assert self._get_sqlite_column_type(
+ 'single_dtype_test', 'A') == 'STRING'
+ assert self._get_sqlite_column_type(
+ 'single_dtype_test', 'B') == 'STRING'
def test_notnull_dtype(self):
if self.flavor == 'mysql':
@@ -2031,11 +2008,10 @@ def test_notnull_dtype(self):
tbl = 'notnull_dtype_test'
df.to_sql(tbl, self.conn)
- self.assertEqual(self._get_sqlite_column_type(tbl, 'Bool'), 'INTEGER')
- self.assertEqual(self._get_sqlite_column_type(
- tbl, 'Date'), 'TIMESTAMP')
- self.assertEqual(self._get_sqlite_column_type(tbl, 'Int'), 'INTEGER')
- self.assertEqual(self._get_sqlite_column_type(tbl, 'Float'), 'REAL')
+ assert self._get_sqlite_column_type(tbl, 'Bool') == 'INTEGER'
+ assert self._get_sqlite_column_type(tbl, 'Date') == 'TIMESTAMP'
+ assert self._get_sqlite_column_type(tbl, 'Int') == 'INTEGER'
+ assert self._get_sqlite_column_type(tbl, 'Float') == 'REAL'
def test_illegal_names(self):
# For sqlite, these should work fine
@@ -2251,7 +2227,7 @@ def test_onecolumn_of_integer(self):
the_sum = sum([my_c0[0]
for my_c0 in con_x.execute("select * from mono_df")])
# it should not fail, and gives 3 ( Issue #3628 )
- self.assertEqual(the_sum, 3)
+ assert the_sum == 3
result = sql.read_sql("select * from mono_df", con_x)
tm.assert_frame_equal(result, mono_df)
@@ -2292,23 +2268,21 @@ def clean_up(test_table_to_drop):
# test if_exists='replace'
sql.to_sql(frame=df_if_exists_1, con=self.conn, name=table_name,
if_exists='replace', index=False)
- self.assertEqual(tquery(sql_select, con=self.conn),
- [(1, 'A'), (2, 'B')])
+ assert tquery(sql_select, con=self.conn) == [(1, 'A'), (2, 'B')]
sql.to_sql(frame=df_if_exists_2, con=self.conn, name=table_name,
if_exists='replace', index=False)
- self.assertEqual(tquery(sql_select, con=self.conn),
- [(3, 'C'), (4, 'D'), (5, 'E')])
+ assert (tquery(sql_select, con=self.conn) ==
+ [(3, 'C'), (4, 'D'), (5, 'E')])
clean_up(table_name)
# test if_exists='append'
sql.to_sql(frame=df_if_exists_1, con=self.conn, name=table_name,
if_exists='fail', index=False)
- self.assertEqual(tquery(sql_select, con=self.conn),
- [(1, 'A'), (2, 'B')])
+ assert tquery(sql_select, con=self.conn) == [(1, 'A'), (2, 'B')]
sql.to_sql(frame=df_if_exists_2, con=self.conn, name=table_name,
if_exists='append', index=False)
- self.assertEqual(tquery(sql_select, con=self.conn),
- [(1, 'A'), (2, 'B'), (3, 'C'), (4, 'D'), (5, 'E')])
+ assert (tquery(sql_select, con=self.conn) ==
+ [(1, 'A'), (2, 'B'), (3, 'C'), (4, 'D'), (5, 'E')])
clean_up(table_name)
@@ -2610,21 +2584,19 @@ def clean_up(test_table_to_drop):
# test if_exists='replace'
sql.to_sql(frame=df_if_exists_1, con=self.conn, name=table_name,
if_exists='replace', index=False)
- self.assertEqual(tquery(sql_select, con=self.conn),
- [(1, 'A'), (2, 'B')])
+ assert tquery(sql_select, con=self.conn) == [(1, 'A'), (2, 'B')]
sql.to_sql(frame=df_if_exists_2, con=self.conn, name=table_name,
if_exists='replace', index=False)
- self.assertEqual(tquery(sql_select, con=self.conn),
- [(3, 'C'), (4, 'D'), (5, 'E')])
+ assert (tquery(sql_select, con=self.conn) ==
+ [(3, 'C'), (4, 'D'), (5, 'E')])
clean_up(table_name)
# test if_exists='append'
sql.to_sql(frame=df_if_exists_1, con=self.conn, name=table_name,
if_exists='fail', index=False)
- self.assertEqual(tquery(sql_select, con=self.conn),
- [(1, 'A'), (2, 'B')])
+ assert tquery(sql_select, con=self.conn) == [(1, 'A'), (2, 'B')]
sql.to_sql(frame=df_if_exists_2, con=self.conn, name=table_name,
if_exists='append', index=False)
- self.assertEqual(tquery(sql_select, con=self.conn),
- [(1, 'A'), (2, 'B'), (3, 'C'), (4, 'D'), (5, 'E')])
+ assert (tquery(sql_select, con=self.conn) ==
+ [(1, 'A'), (2, 'B'), (3, 'C'), (4, 'D'), (5, 'E')])
clean_up(table_name)
diff --git a/pandas/tests/io/test_stata.py b/pandas/tests/io/test_stata.py
index 72023c77e7c88..945f0b009a9da 100644
--- a/pandas/tests/io/test_stata.py
+++ b/pandas/tests/io/test_stata.py
@@ -181,7 +181,7 @@ def test_read_dta2(self):
w = [x for x in w if x.category is UserWarning]
# should get warning for each call to read_dta
- self.assertEqual(len(w), 3)
+ assert len(w) == 3
# buggy test because of the NaT comparison on certain platforms
# Format 113 test fails since it does not support tc and tC formats
@@ -283,7 +283,7 @@ def test_read_dta18(self):
u'Floats': u'float data'}
tm.assert_dict_equal(vl, vl_expected)
- self.assertEqual(rdr.data_label, u'This is a Ünicode data label')
+ assert rdr.data_label == u'This is a Ünicode data label'
def test_read_write_dta5(self):
original = DataFrame([(np.nan, np.nan, np.nan, np.nan, np.nan)],
@@ -351,11 +351,11 @@ def test_encoding(self):
if compat.PY3:
expected = raw.kreis1849[0]
- self.assertEqual(result, expected)
+ assert result == expected
assert isinstance(result, compat.string_types)
else:
expected = raw.kreis1849.str.decode("latin-1")[0]
- self.assertEqual(result, expected)
+ assert result == expected
assert isinstance(result, unicode) # noqa
with tm.ensure_clean() as path:
@@ -377,7 +377,7 @@ def test_read_write_dta11(self):
with warnings.catch_warnings(record=True) as w:
original.to_stata(path, None)
# should get a warning for that format.
- self.assertEqual(len(w), 1)
+ assert len(w) == 1
written_and_read_again = self.read_dta(path)
tm.assert_frame_equal(
@@ -405,7 +405,7 @@ def test_read_write_dta12(self):
with warnings.catch_warnings(record=True) as w:
original.to_stata(path, None)
# should get a warning for that format.
- self.assertEqual(len(w), 1)
+ assert len(w) == 1
written_and_read_again = self.read_dta(path)
tm.assert_frame_equal(
@@ -904,7 +904,7 @@ def test_categorical_warnings_and_errors(self):
with warnings.catch_warnings(record=True) as w:
original.to_stata(path)
# should get a warning for mixed content
- self.assertEqual(len(w), 1)
+ assert len(w) == 1
def test_categorical_with_stata_missing_values(self):
values = [['a' + str(i)] for i in range(120)]
@@ -986,10 +986,10 @@ def test_categorical_ordering(self):
for col in parsed_115:
if not is_categorical_dtype(parsed_115[col]):
continue
- self.assertEqual(True, parsed_115[col].cat.ordered)
- self.assertEqual(True, parsed_117[col].cat.ordered)
- self.assertEqual(False, parsed_115_unordered[col].cat.ordered)
- self.assertEqual(False, parsed_117_unordered[col].cat.ordered)
+ assert parsed_115[col].cat.ordered
+ assert parsed_117[col].cat.ordered
+ assert not parsed_115_unordered[col].cat.ordered
+ assert not parsed_117_unordered[col].cat.ordered
def test_read_chunks_117(self):
files_117 = [self.dta1_117, self.dta2_117, self.dta3_117,
diff --git a/pandas/tests/plotting/common.py b/pandas/tests/plotting/common.py
index 64bcb55cb4e6a..7d0c39dae6e4b 100644
--- a/pandas/tests/plotting/common.py
+++ b/pandas/tests/plotting/common.py
@@ -149,7 +149,7 @@ def check_line(xpl, rsl):
rsdata = rsl.get_xydata()
tm.assert_almost_equal(xpdata, rsdata)
- self.assertEqual(len(xp_lines), len(rs_lines))
+ assert len(xp_lines) == len(rs_lines)
[check_line(xpl, rsl) for xpl, rsl in zip(xp_lines, rs_lines)]
tm.close()
@@ -170,7 +170,7 @@ def _check_visible(self, collections, visible=True):
collections = [collections]
for patch in collections:
- self.assertEqual(patch.get_visible(), visible)
+ assert patch.get_visible() == visible
def _get_colors_mapped(self, series, colors):
unique = series.unique()
@@ -208,7 +208,7 @@ def _check_colors(self, collections, linecolors=None, facecolors=None,
linecolors = self._get_colors_mapped(mapping, linecolors)
linecolors = linecolors[:len(collections)]
- self.assertEqual(len(collections), len(linecolors))
+ assert len(collections) == len(linecolors)
for patch, color in zip(collections, linecolors):
if isinstance(patch, Line2D):
result = patch.get_color()
@@ -220,7 +220,7 @@ def _check_colors(self, collections, linecolors=None, facecolors=None,
result = patch.get_edgecolor()
expected = conv.to_rgba(color)
- self.assertEqual(result, expected)
+ assert result == expected
if facecolors is not None:
@@ -228,7 +228,7 @@ def _check_colors(self, collections, linecolors=None, facecolors=None,
facecolors = self._get_colors_mapped(mapping, facecolors)
facecolors = facecolors[:len(collections)]
- self.assertEqual(len(collections), len(facecolors))
+ assert len(collections) == len(facecolors)
for patch, color in zip(collections, facecolors):
if isinstance(patch, Collection):
# returned as list of np.array
@@ -240,7 +240,7 @@ def _check_colors(self, collections, linecolors=None, facecolors=None,
result = tuple(result)
expected = conv.to_rgba(color)
- self.assertEqual(result, expected)
+ assert result == expected
def _check_text_labels(self, texts, expected):
"""
@@ -254,12 +254,12 @@ def _check_text_labels(self, texts, expected):
expected text label, or its list
"""
if not is_list_like(texts):
- self.assertEqual(texts.get_text(), expected)
+ assert texts.get_text() == expected
else:
labels = [t.get_text() for t in texts]
- self.assertEqual(len(labels), len(expected))
+ assert len(labels) == len(expected)
for l, e in zip(labels, expected):
- self.assertEqual(l, e)
+ assert l == e
def _check_ticks_props(self, axes, xlabelsize=None, xrot=None,
ylabelsize=None, yrot=None):
@@ -325,8 +325,8 @@ def _check_ax_scales(self, axes, xaxis='linear', yaxis='linear'):
"""
axes = self._flatten_visible(axes)
for ax in axes:
- self.assertEqual(ax.xaxis.get_scale(), xaxis)
- self.assertEqual(ax.yaxis.get_scale(), yaxis)
+ assert ax.xaxis.get_scale() == xaxis
+ assert ax.yaxis.get_scale() == yaxis
def _check_axes_shape(self, axes, axes_num=None, layout=None,
figsize=None):
@@ -349,14 +349,14 @@ def _check_axes_shape(self, axes, axes_num=None, layout=None,
visible_axes = self._flatten_visible(axes)
if axes_num is not None:
- self.assertEqual(len(visible_axes), axes_num)
+ assert len(visible_axes) == axes_num
for ax in visible_axes:
# check something drawn on visible axes
assert len(ax.get_children()) > 0
if layout is not None:
result = self._get_axes_layout(_flatten(axes))
- self.assertEqual(result, layout)
+ assert result == layout
tm.assert_numpy_array_equal(
visible_axes[0].figure.get_size_inches(),
@@ -409,8 +409,8 @@ def _check_has_errorbars(self, axes, xerr=0, yerr=0):
xerr_count += 1
if has_yerr:
yerr_count += 1
- self.assertEqual(xerr, xerr_count)
- self.assertEqual(yerr, yerr_count)
+ assert xerr == xerr_count
+ assert yerr == yerr_count
def _check_box_return_type(self, returned, return_type, expected_keys=None,
check_ax_title=True):
@@ -450,23 +450,23 @@ def _check_box_return_type(self, returned, return_type, expected_keys=None,
assert isinstance(returned, Series)
- self.assertEqual(sorted(returned.keys()), sorted(expected_keys))
+ assert sorted(returned.keys()) == sorted(expected_keys)
for key, value in iteritems(returned):
assert isinstance(value, types[return_type])
# check returned dict has correct mapping
if return_type == 'axes':
if check_ax_title:
- self.assertEqual(value.get_title(), key)
+ assert value.get_title() == key
elif return_type == 'both':
if check_ax_title:
- self.assertEqual(value.ax.get_title(), key)
+ assert value.ax.get_title() == key
assert isinstance(value.ax, Axes)
assert isinstance(value.lines, dict)
elif return_type == 'dict':
line = value['medians'][0]
axes = line.axes if self.mpl_ge_1_5_0 else line.get_axes()
if check_ax_title:
- self.assertEqual(axes.get_title(), key)
+ assert axes.get_title() == key
else:
raise AssertionError
diff --git a/pandas/tests/plotting/test_boxplot_method.py b/pandas/tests/plotting/test_boxplot_method.py
index fe6d5e5cf148f..1f70d408767f3 100644
--- a/pandas/tests/plotting/test_boxplot_method.py
+++ b/pandas/tests/plotting/test_boxplot_method.py
@@ -90,7 +90,7 @@ def test_boxplot_legacy(self):
fig, ax = self.plt.subplots()
d = df.boxplot(ax=ax, return_type='dict')
lines = list(itertools.chain.from_iterable(d.values()))
- self.assertEqual(len(ax.get_lines()), len(lines))
+ assert len(ax.get_lines()) == len(lines)
@slow
def test_boxplot_return_type_none(self):
@@ -138,7 +138,7 @@ def _check_ax_limits(col, ax):
height_ax, weight_ax = df.boxplot(['height', 'weight'], by='category')
_check_ax_limits(df['height'], height_ax)
_check_ax_limits(df['weight'], weight_ax)
- self.assertEqual(weight_ax._sharey, height_ax)
+ assert weight_ax._sharey == height_ax
# Two rows, one partial
p = df.boxplot(['height', 'weight', 'age'], by='category')
@@ -148,8 +148,8 @@ def _check_ax_limits(col, ax):
_check_ax_limits(df['height'], height_ax)
_check_ax_limits(df['weight'], weight_ax)
_check_ax_limits(df['age'], age_ax)
- self.assertEqual(weight_ax._sharey, height_ax)
- self.assertEqual(age_ax._sharey, height_ax)
+ assert weight_ax._sharey == height_ax
+ assert age_ax._sharey == height_ax
assert dummy_ax._sharey is None
@slow
@@ -209,13 +209,13 @@ def test_grouped_plot_fignums(self):
gb = df.groupby('gender')
res = gb.plot()
- self.assertEqual(len(self.plt.get_fignums()), 2)
- self.assertEqual(len(res), 2)
+ assert len(self.plt.get_fignums()) == 2
+ assert len(res) == 2
tm.close()
res = gb.boxplot(return_type='axes')
- self.assertEqual(len(self.plt.get_fignums()), 1)
- self.assertEqual(len(res), 2)
+ assert len(self.plt.get_fignums()) == 1
+ assert len(res) == 2
tm.close()
# now works with GH 5610 as gender is excluded
diff --git a/pandas/tests/plotting/test_converter.py b/pandas/tests/plotting/test_converter.py
index 30eb3ef24fe30..e23bc2ef6c563 100644
--- a/pandas/tests/plotting/test_converter.py
+++ b/pandas/tests/plotting/test_converter.py
@@ -29,35 +29,35 @@ def test_convert_accepts_unicode(self):
def test_conversion(self):
rs = self.dtc.convert(['2012-1-1'], None, None)[0]
xp = datetime(2012, 1, 1).toordinal()
- self.assertEqual(rs, xp)
+ assert rs == xp
rs = self.dtc.convert('2012-1-1', None, None)
- self.assertEqual(rs, xp)
+ assert rs == xp
rs = self.dtc.convert(date(2012, 1, 1), None, None)
- self.assertEqual(rs, xp)
+ assert rs == xp
rs = self.dtc.convert(datetime(2012, 1, 1).toordinal(), None, None)
- self.assertEqual(rs, xp)
+ assert rs == xp
rs = self.dtc.convert('2012-1-1', None, None)
- self.assertEqual(rs, xp)
+ assert rs == xp
rs = self.dtc.convert(Timestamp('2012-1-1'), None, None)
- self.assertEqual(rs, xp)
+ assert rs == xp
# also testing datetime64 dtype (GH8614)
rs = self.dtc.convert(np_datetime64_compat('2012-01-01'), None, None)
- self.assertEqual(rs, xp)
+ assert rs == xp
rs = self.dtc.convert(np_datetime64_compat(
'2012-01-01 00:00:00+0000'), None, None)
- self.assertEqual(rs, xp)
+ assert rs == xp
rs = self.dtc.convert(np.array([
np_datetime64_compat('2012-01-01 00:00:00+0000'),
np_datetime64_compat('2012-01-02 00:00:00+0000')]), None, None)
- self.assertEqual(rs[0], xp)
+ assert rs[0] == xp
# we have a tz-aware date (constructed to that when we turn to utc it
# is the same as our sample)
@@ -66,17 +66,17 @@ def test_conversion(self):
.tz_convert('US/Eastern')
)
rs = self.dtc.convert(ts, None, None)
- self.assertEqual(rs, xp)
+ assert rs == xp
rs = self.dtc.convert(ts.to_pydatetime(), None, None)
- self.assertEqual(rs, xp)
+ assert rs == xp
rs = self.dtc.convert(Index([ts - Day(1), ts]), None, None)
- self.assertEqual(rs[1], xp)
+ assert rs[1] == xp
rs = self.dtc.convert(Index([ts - Day(1), ts]).to_pydatetime(),
None, None)
- self.assertEqual(rs[1], xp)
+ assert rs[1] == xp
def test_conversion_float(self):
decimals = 9
@@ -101,7 +101,7 @@ def test_conversion_outofbounds_datetime(self):
tm.assert_numpy_array_equal(rs, xp)
rs = self.dtc.convert(values[0], None, None)
xp = converter.dates.date2num(values[0])
- self.assertEqual(rs, xp)
+ assert rs == xp
values = [datetime(1677, 1, 1, 12), datetime(1677, 1, 2, 12)]
rs = self.dtc.convert(values, None, None)
@@ -109,7 +109,7 @@ def test_conversion_outofbounds_datetime(self):
tm.assert_numpy_array_equal(rs, xp)
rs = self.dtc.convert(values[0], None, None)
xp = converter.dates.date2num(values[0])
- self.assertEqual(rs, xp)
+ assert rs == xp
def test_time_formatter(self):
self.tc(90000)
@@ -165,44 +165,44 @@ def test_convert_accepts_unicode(self):
def test_conversion(self):
rs = self.pc.convert(['2012-1-1'], None, self.axis)[0]
xp = Period('2012-1-1').ordinal
- self.assertEqual(rs, xp)
+ assert rs == xp
rs = self.pc.convert('2012-1-1', None, self.axis)
- self.assertEqual(rs, xp)
+ assert rs == xp
rs = self.pc.convert([date(2012, 1, 1)], None, self.axis)[0]
- self.assertEqual(rs, xp)
+ assert rs == xp
rs = self.pc.convert(date(2012, 1, 1), None, self.axis)
- self.assertEqual(rs, xp)
+ assert rs == xp
rs = self.pc.convert([Timestamp('2012-1-1')], None, self.axis)[0]
- self.assertEqual(rs, xp)
+ assert rs == xp
rs = self.pc.convert(Timestamp('2012-1-1'), None, self.axis)
- self.assertEqual(rs, xp)
+ assert rs == xp
# FIXME
# rs = self.pc.convert(
# np_datetime64_compat('2012-01-01'), None, self.axis)
- # self.assertEqual(rs, xp)
+ # assert rs == xp
#
# rs = self.pc.convert(
# np_datetime64_compat('2012-01-01 00:00:00+0000'),
# None, self.axis)
- # self.assertEqual(rs, xp)
+ # assert rs == xp
#
# rs = self.pc.convert(np.array([
# np_datetime64_compat('2012-01-01 00:00:00+0000'),
# np_datetime64_compat('2012-01-02 00:00:00+0000')]),
# None, self.axis)
- # self.assertEqual(rs[0], xp)
+ # assert rs[0] == xp
def test_integer_passthrough(self):
# GH9012
rs = self.pc.convert([0, 1], None, self.axis)
xp = [0, 1]
- self.assertEqual(rs, xp)
+ assert rs == xp
def test_convert_nested(self):
data = ['2012-1-1', '2012-1-2']
diff --git a/pandas/tests/plotting/test_datetimelike.py b/pandas/tests/plotting/test_datetimelike.py
index 30d67630afa41..ae8faa031174e 100644
--- a/pandas/tests/plotting/test_datetimelike.py
+++ b/pandas/tests/plotting/test_datetimelike.py
@@ -58,7 +58,7 @@ def test_fontsize_set_correctly(self):
df = DataFrame(np.random.randn(10, 9), index=range(10))
ax = df.plot(fontsize=2)
for label in (ax.get_xticklabels() + ax.get_yticklabels()):
- self.assertEqual(label.get_fontsize(), 2)
+ assert label.get_fontsize() == 2
@slow
def test_frame_inferred(self):
@@ -95,7 +95,7 @@ def test_nonnumeric_exclude(self):
df = DataFrame({'A': ["x", "y", "z"], 'B': [1, 2, 3]}, idx)
ax = df.plot() # it works
- self.assertEqual(len(ax.get_lines()), 1) # B was plotted
+ assert len(ax.get_lines()) == 1 # B was plotted
plt.close(plt.gcf())
pytest.raises(TypeError, df['A'].plot)
@@ -124,7 +124,7 @@ def test_tsplot(self):
ax = ts.plot(style='k')
color = (0., 0., 0., 1) if self.mpl_ge_2_0_0 else (0., 0., 0.)
- self.assertEqual(color, ax.get_lines()[0].get_color())
+ assert color == ax.get_lines()[0].get_color()
def test_both_style_and_color(self):
import matplotlib.pyplot as plt # noqa
@@ -146,11 +146,11 @@ def test_high_freq(self):
def test_get_datevalue(self):
from pandas.plotting._converter import get_datevalue
assert get_datevalue(None, 'D') is None
- self.assertEqual(get_datevalue(1987, 'A'), 1987)
- self.assertEqual(get_datevalue(Period(1987, 'A'), 'M'),
- Period('1987-12', 'M').ordinal)
- self.assertEqual(get_datevalue('1/1/1987', 'D'),
- Period('1987-1-1', 'D').ordinal)
+ assert get_datevalue(1987, 'A') == 1987
+ assert (get_datevalue(Period(1987, 'A'), 'M') ==
+ Period('1987-12', 'M').ordinal)
+ assert (get_datevalue('1/1/1987', 'D') ==
+ Period('1987-1-1', 'D').ordinal)
@slow
def test_ts_plot_format_coord(self):
@@ -159,8 +159,7 @@ def check_format_of_first_point(ax, expected_string):
first_x = first_line.get_xdata()[0].ordinal
first_y = first_line.get_ydata()[0]
try:
- self.assertEqual(expected_string,
- ax.format_coord(first_x, first_y))
+ assert expected_string == ax.format_coord(first_x, first_y)
except (ValueError):
pytest.skip("skipping test because issue forming "
"test comparison GH7664")
@@ -261,7 +260,7 @@ def test_uhf(self):
xp = conv._from_ordinal(loc).strftime('%H:%M:%S.%f')
rs = str(label.get_text())
if len(rs):
- self.assertEqual(xp, rs)
+ assert xp == rs
@slow
def test_irreg_hf(self):
@@ -308,10 +307,9 @@ def test_business_freq(self):
import matplotlib.pyplot as plt # noqa
bts = tm.makePeriodSeries()
ax = bts.plot()
- self.assertEqual(ax.get_lines()[0].get_xydata()[0, 0],
- bts.index[0].ordinal)
+ assert ax.get_lines()[0].get_xydata()[0, 0] == bts.index[0].ordinal
idx = ax.get_lines()[0].get_xdata()
- self.assertEqual(PeriodIndex(data=idx).freqstr, 'B')
+ assert PeriodIndex(data=idx).freqstr == 'B'
@slow
def test_business_freq_convert(self):
@@ -321,10 +319,9 @@ def test_business_freq_convert(self):
tm.N = n
ts = bts.to_period('M')
ax = bts.plot()
- self.assertEqual(ax.get_lines()[0].get_xydata()[0, 0],
- ts.index[0].ordinal)
+ assert ax.get_lines()[0].get_xydata()[0, 0] == ts.index[0].ordinal
idx = ax.get_lines()[0].get_xdata()
- self.assertEqual(PeriodIndex(data=idx).freqstr, 'M')
+ assert PeriodIndex(data=idx).freqstr == 'M'
def test_nonzero_base(self):
# GH2571
@@ -350,8 +347,8 @@ def _test(ax):
ax.set_xlim(xlim[0] - 5, xlim[1] + 10)
ax.get_figure().canvas.draw()
result = ax.get_xlim()
- self.assertEqual(result[0], xlim[0] - 5)
- self.assertEqual(result[1], xlim[1] + 10)
+ assert result[0] == xlim[0] - 5
+ assert result[1] == xlim[1] + 10
# string
expected = (Period('1/1/2000', ax.freq),
@@ -359,8 +356,8 @@ def _test(ax):
ax.set_xlim('1/1/2000', '4/1/2000')
ax.get_figure().canvas.draw()
result = ax.get_xlim()
- self.assertEqual(int(result[0]), expected[0].ordinal)
- self.assertEqual(int(result[1]), expected[1].ordinal)
+ assert int(result[0]) == expected[0].ordinal
+ assert int(result[1]) == expected[1].ordinal
# datetim
expected = (Period('1/1/2000', ax.freq),
@@ -368,8 +365,8 @@ def _test(ax):
ax.set_xlim(datetime(2000, 1, 1), datetime(2000, 4, 1))
ax.get_figure().canvas.draw()
result = ax.get_xlim()
- self.assertEqual(int(result[0]), expected[0].ordinal)
- self.assertEqual(int(result[1]), expected[1].ordinal)
+ assert int(result[0]) == expected[0].ordinal
+ assert int(result[1]) == expected[1].ordinal
fig = ax.get_figure()
plt.close(fig)
@@ -390,12 +387,12 @@ def _test(ax):
def test_get_finder(self):
import pandas.plotting._converter as conv
- self.assertEqual(conv.get_finder('B'), conv._daily_finder)
- self.assertEqual(conv.get_finder('D'), conv._daily_finder)
- self.assertEqual(conv.get_finder('M'), conv._monthly_finder)
- self.assertEqual(conv.get_finder('Q'), conv._quarterly_finder)
- self.assertEqual(conv.get_finder('A'), conv._annual_finder)
- self.assertEqual(conv.get_finder('W'), conv._daily_finder)
+ assert conv.get_finder('B') == conv._daily_finder
+ assert conv.get_finder('D') == conv._daily_finder
+ assert conv.get_finder('M') == conv._monthly_finder
+ assert conv.get_finder('Q') == conv._quarterly_finder
+ assert conv.get_finder('A') == conv._annual_finder
+ assert conv.get_finder('W') == conv._daily_finder
@slow
def test_finder_daily(self):
@@ -408,11 +405,11 @@ def test_finder_daily(self):
ax = ser.plot()
xaxis = ax.get_xaxis()
rs = xaxis.get_majorticklocs()[0]
- self.assertEqual(xp, rs)
+ assert xp == rs
vmin, vmax = ax.get_xlim()
ax.set_xlim(vmin + 0.9, vmax)
rs = xaxis.get_majorticklocs()[0]
- self.assertEqual(xp, rs)
+ assert xp == rs
plt.close(ax.get_figure())
@slow
@@ -426,11 +423,11 @@ def test_finder_quarterly(self):
ax = ser.plot()
xaxis = ax.get_xaxis()
rs = xaxis.get_majorticklocs()[0]
- self.assertEqual(rs, xp)
+ assert rs == xp
(vmin, vmax) = ax.get_xlim()
ax.set_xlim(vmin + 0.9, vmax)
rs = xaxis.get_majorticklocs()[0]
- self.assertEqual(xp, rs)
+ assert xp == rs
plt.close(ax.get_figure())
@slow
@@ -444,11 +441,11 @@ def test_finder_monthly(self):
ax = ser.plot()
xaxis = ax.get_xaxis()
rs = xaxis.get_majorticklocs()[0]
- self.assertEqual(rs, xp)
+ assert rs == xp
vmin, vmax = ax.get_xlim()
ax.set_xlim(vmin + 0.9, vmax)
rs = xaxis.get_majorticklocs()[0]
- self.assertEqual(xp, rs)
+ assert xp == rs
plt.close(ax.get_figure())
def test_finder_monthly_long(self):
@@ -458,7 +455,7 @@ def test_finder_monthly_long(self):
xaxis = ax.get_xaxis()
rs = xaxis.get_majorticklocs()[0]
xp = Period('1989Q1', 'M').ordinal
- self.assertEqual(rs, xp)
+ assert rs == xp
@slow
def test_finder_annual(self):
@@ -470,7 +467,7 @@ def test_finder_annual(self):
ax = ser.plot()
xaxis = ax.get_xaxis()
rs = xaxis.get_majorticklocs()[0]
- self.assertEqual(rs, Period(xp[i], freq='A').ordinal)
+ assert rs == Period(xp[i], freq='A').ordinal
plt.close(ax.get_figure())
@slow
@@ -482,7 +479,7 @@ def test_finder_minutely(self):
xaxis = ax.get_xaxis()
rs = xaxis.get_majorticklocs()[0]
xp = Period('1/1/1999', freq='Min').ordinal
- self.assertEqual(rs, xp)
+ assert rs == xp
def test_finder_hourly(self):
nhours = 23
@@ -492,7 +489,7 @@ def test_finder_hourly(self):
xaxis = ax.get_xaxis()
rs = xaxis.get_majorticklocs()[0]
xp = Period('1/1/1999', freq='H').ordinal
- self.assertEqual(rs, xp)
+ assert rs == xp
@slow
def test_gaps(self):
@@ -503,7 +500,7 @@ def test_gaps(self):
ax = ts.plot()
lines = ax.get_lines()
tm._skip_if_mpl_1_5()
- self.assertEqual(len(lines), 1)
+ assert len(lines) == 1
l = lines[0]
data = l.get_xydata()
assert isinstance(data, np.ma.core.MaskedArray)
@@ -517,7 +514,7 @@ def test_gaps(self):
ts[2:5] = np.nan
ax = ts.plot()
lines = ax.get_lines()
- self.assertEqual(len(lines), 1)
+ assert len(lines) == 1
l = lines[0]
data = l.get_xydata()
assert isinstance(data, np.ma.core.MaskedArray)
@@ -531,7 +528,7 @@ def test_gaps(self):
ser[2:5] = np.nan
ax = ser.plot()
lines = ax.get_lines()
- self.assertEqual(len(lines), 1)
+ assert len(lines) == 1
l = lines[0]
data = l.get_xydata()
assert isinstance(data, np.ma.core.MaskedArray)
@@ -548,8 +545,8 @@ def test_gap_upsample(self):
s = Series(np.random.randn(len(idxh)), idxh)
s.plot(secondary_y=True)
lines = ax.get_lines()
- self.assertEqual(len(lines), 1)
- self.assertEqual(len(ax.right_ax.get_lines()), 1)
+ assert len(lines) == 1
+ assert len(ax.right_ax.get_lines()) == 1
l = lines[0]
data = l.get_xydata()
@@ -573,13 +570,13 @@ def test_secondary_y(self):
l = ax.get_lines()[0]
xp = Series(l.get_ydata(), l.get_xdata())
assert_series_equal(ser, xp)
- self.assertEqual(ax.get_yaxis().get_ticks_position(), 'right')
+ assert ax.get_yaxis().get_ticks_position() == 'right'
assert not axes[0].get_yaxis().get_visible()
plt.close(fig)
ax2 = ser2.plot()
- self.assertEqual(ax2.get_yaxis().get_ticks_position(),
- self.default_tick_position)
+ assert (ax2.get_yaxis().get_ticks_position() ==
+ self.default_tick_position)
plt.close(ax2.get_figure())
ax = ser2.plot()
@@ -604,13 +601,13 @@ def test_secondary_y_ts(self):
l = ax.get_lines()[0]
xp = Series(l.get_ydata(), l.get_xdata()).to_timestamp()
assert_series_equal(ser, xp)
- self.assertEqual(ax.get_yaxis().get_ticks_position(), 'right')
+ assert ax.get_yaxis().get_ticks_position() == 'right'
assert not axes[0].get_yaxis().get_visible()
plt.close(fig)
ax2 = ser2.plot()
- self.assertEqual(ax2.get_yaxis().get_ticks_position(),
- self.default_tick_position)
+ assert (ax2.get_yaxis().get_ticks_position() ==
+ self.default_tick_position)
plt.close(ax2.get_figure())
ax = ser2.plot()
@@ -629,7 +626,7 @@ def test_secondary_kde(self):
assert not hasattr(ax, 'right_ax')
fig = ax.get_figure()
axes = fig.get_axes()
- self.assertEqual(axes[1].get_yaxis().get_ticks_position(), 'right')
+ assert axes[1].get_yaxis().get_ticks_position() == 'right'
@slow
def test_secondary_bar(self):
@@ -637,25 +634,25 @@ def test_secondary_bar(self):
ax = ser.plot(secondary_y=True, kind='bar')
fig = ax.get_figure()
axes = fig.get_axes()
- self.assertEqual(axes[1].get_yaxis().get_ticks_position(), 'right')
+ assert axes[1].get_yaxis().get_ticks_position() == 'right'
@slow
def test_secondary_frame(self):
df = DataFrame(np.random.randn(5, 3), columns=['a', 'b', 'c'])
axes = df.plot(secondary_y=['a', 'c'], subplots=True)
- self.assertEqual(axes[0].get_yaxis().get_ticks_position(), 'right')
- self.assertEqual(axes[1].get_yaxis().get_ticks_position(),
- self.default_tick_position)
- self.assertEqual(axes[2].get_yaxis().get_ticks_position(), 'right')
+ assert axes[0].get_yaxis().get_ticks_position() == 'right'
+ assert (axes[1].get_yaxis().get_ticks_position() ==
+ self.default_tick_position)
+ assert axes[2].get_yaxis().get_ticks_position() == 'right'
@slow
def test_secondary_bar_frame(self):
df = DataFrame(np.random.randn(5, 3), columns=['a', 'b', 'c'])
axes = df.plot(kind='bar', secondary_y=['a', 'c'], subplots=True)
- self.assertEqual(axes[0].get_yaxis().get_ticks_position(), 'right')
- self.assertEqual(axes[1].get_yaxis().get_ticks_position(),
- self.default_tick_position)
- self.assertEqual(axes[2].get_yaxis().get_ticks_position(), 'right')
+ assert axes[0].get_yaxis().get_ticks_position() == 'right'
+ assert (axes[1].get_yaxis().get_ticks_position() ==
+ self.default_tick_position)
+ assert axes[2].get_yaxis().get_ticks_position() == 'right'
def test_mixed_freq_regular_first(self):
import matplotlib.pyplot as plt # noqa
@@ -673,8 +670,8 @@ def test_mixed_freq_regular_first(self):
assert idx2.equals(s2.index.to_period('B'))
left, right = ax2.get_xlim()
pidx = s1.index.to_period()
- self.assertEqual(left, pidx[0].ordinal)
- self.assertEqual(right, pidx[-1].ordinal)
+ assert left == pidx[0].ordinal
+ assert right == pidx[-1].ordinal
@slow
def test_mixed_freq_irregular_first(self):
@@ -704,8 +701,8 @@ def test_mixed_freq_regular_first_df(self):
assert idx2.equals(s2.index.to_period('B'))
left, right = ax2.get_xlim()
pidx = s1.index.to_period()
- self.assertEqual(left, pidx[0].ordinal)
- self.assertEqual(right, pidx[-1].ordinal)
+ assert left == pidx[0].ordinal
+ assert right == pidx[-1].ordinal
@slow
def test_mixed_freq_irregular_first_df(self):
@@ -730,7 +727,7 @@ def test_mixed_freq_hf_first(self):
high.plot()
ax = low.plot()
for l in ax.get_lines():
- self.assertEqual(PeriodIndex(data=l.get_xdata()).freq, 'D')
+ assert PeriodIndex(data=l.get_xdata()).freq == 'D'
@slow
def test_mixed_freq_alignment(self):
@@ -743,8 +740,7 @@ def test_mixed_freq_alignment(self):
ax = ts.plot()
ts2.plot(style='r')
- self.assertEqual(ax.lines[0].get_xdata()[0],
- ax.lines[1].get_xdata()[0])
+ assert ax.lines[0].get_xdata()[0] == ax.lines[1].get_xdata()[0]
@slow
def test_mixed_freq_lf_first(self):
@@ -757,9 +753,9 @@ def test_mixed_freq_lf_first(self):
low.plot(legend=True)
ax = high.plot(legend=True)
for l in ax.get_lines():
- self.assertEqual(PeriodIndex(data=l.get_xdata()).freq, 'D')
+ assert PeriodIndex(data=l.get_xdata()).freq == 'D'
leg = ax.get_legend()
- self.assertEqual(len(leg.texts), 2)
+ assert len(leg.texts) == 2
plt.close(ax.get_figure())
idxh = date_range('1/1/1999', periods=240, freq='T')
@@ -769,7 +765,7 @@ def test_mixed_freq_lf_first(self):
low.plot()
ax = high.plot()
for l in ax.get_lines():
- self.assertEqual(PeriodIndex(data=l.get_xdata()).freq, 'T')
+ assert PeriodIndex(data=l.get_xdata()).freq == 'T'
def test_mixed_freq_irreg_period(self):
ts = tm.makeTimeSeries()
@@ -791,10 +787,10 @@ def test_mixed_freq_shared_ax(self):
s1.plot(ax=ax1)
s2.plot(ax=ax2)
- self.assertEqual(ax1.freq, 'M')
- self.assertEqual(ax2.freq, 'M')
- self.assertEqual(ax1.lines[0].get_xydata()[0, 0],
- ax2.lines[0].get_xydata()[0, 0])
+ assert ax1.freq == 'M'
+ assert ax2.freq == 'M'
+ assert (ax1.lines[0].get_xydata()[0, 0] ==
+ ax2.lines[0].get_xydata()[0, 0])
# using twinx
fig, ax1 = self.plt.subplots()
@@ -802,8 +798,8 @@ def test_mixed_freq_shared_ax(self):
s1.plot(ax=ax1)
s2.plot(ax=ax2)
- self.assertEqual(ax1.lines[0].get_xydata()[0, 0],
- ax2.lines[0].get_xydata()[0, 0])
+ assert (ax1.lines[0].get_xydata()[0, 0] ==
+ ax2.lines[0].get_xydata()[0, 0])
# TODO (GH14330, GH14322)
# plotting the irregular first does not yet work
@@ -811,8 +807,8 @@ def test_mixed_freq_shared_ax(self):
# ax2 = ax1.twinx()
# s2.plot(ax=ax1)
# s1.plot(ax=ax2)
- # self.assertEqual(ax1.lines[0].get_xydata()[0, 0],
- # ax2.lines[0].get_xydata()[0, 0])
+ # assert (ax1.lines[0].get_xydata()[0, 0] ==
+ # ax2.lines[0].get_xydata()[0, 0])
@slow
def test_to_weekly_resampling(self):
@@ -823,7 +819,7 @@ def test_to_weekly_resampling(self):
high.plot()
ax = low.plot()
for l in ax.get_lines():
- self.assertEqual(PeriodIndex(data=l.get_xdata()).freq, idxh.freq)
+ assert PeriodIndex(data=l.get_xdata()).freq == idxh.freq
# tsplot
from pandas.tseries.plotting import tsplot
@@ -890,7 +886,7 @@ def test_from_resampling_area_line_mixed(self):
expected_y = np.zeros(len(expected_x), dtype=np.float64)
for i in range(3):
l = ax.lines[i]
- self.assertEqual(PeriodIndex(l.get_xdata()).freq, idxh.freq)
+ assert PeriodIndex(l.get_xdata()).freq == idxh.freq
tm.assert_numpy_array_equal(l.get_xdata(orig=False),
expected_x)
# check stacked values are correct
@@ -951,17 +947,17 @@ def test_mixed_freq_second_millisecond(self):
# high to low
high.plot()
ax = low.plot()
- self.assertEqual(len(ax.get_lines()), 2)
+ assert len(ax.get_lines()) == 2
for l in ax.get_lines():
- self.assertEqual(PeriodIndex(data=l.get_xdata()).freq, 'L')
+ assert PeriodIndex(data=l.get_xdata()).freq == 'L'
tm.close()
# low to high
low.plot()
ax = high.plot()
- self.assertEqual(len(ax.get_lines()), 2)
+ assert len(ax.get_lines()) == 2
for l in ax.get_lines():
- self.assertEqual(PeriodIndex(data=l.get_xdata()).freq, 'L')
+ assert PeriodIndex(data=l.get_xdata()).freq == 'L'
@slow
def test_irreg_dtypes(self):
@@ -995,7 +991,7 @@ def test_time(self):
xp = l.get_text()
if len(xp) > 0:
rs = time(h, m, s).strftime('%H:%M:%S')
- self.assertEqual(xp, rs)
+ assert xp == rs
# change xlim
ax.set_xlim('1:30', '5:00')
@@ -1009,7 +1005,7 @@ def test_time(self):
xp = l.get_text()
if len(xp) > 0:
rs = time(h, m, s).strftime('%H:%M:%S')
- self.assertEqual(xp, rs)
+ assert xp == rs
@slow
def test_time_musec(self):
@@ -1035,7 +1031,7 @@ def test_time_musec(self):
xp = l.get_text()
if len(xp) > 0:
rs = time(h, m, s).strftime('%H:%M:%S.%f')
- self.assertEqual(xp, rs)
+ assert xp == rs
@slow
def test_secondary_upsample(self):
@@ -1046,11 +1042,11 @@ def test_secondary_upsample(self):
low.plot()
ax = high.plot(secondary_y=True)
for l in ax.get_lines():
- self.assertEqual(PeriodIndex(l.get_xdata()).freq, 'D')
+ assert PeriodIndex(l.get_xdata()).freq == 'D'
assert hasattr(ax, 'left_ax')
assert not hasattr(ax, 'right_ax')
for l in ax.left_ax.get_lines():
- self.assertEqual(PeriodIndex(l.get_xdata()).freq, 'D')
+ assert PeriodIndex(l.get_xdata()).freq == 'D'
@slow
def test_secondary_legend(self):
@@ -1063,54 +1059,54 @@ def test_secondary_legend(self):
df = tm.makeTimeDataFrame()
ax = df.plot(secondary_y=['A', 'B'])
leg = ax.get_legend()
- self.assertEqual(len(leg.get_lines()), 4)
- self.assertEqual(leg.get_texts()[0].get_text(), 'A (right)')
- self.assertEqual(leg.get_texts()[1].get_text(), 'B (right)')
- self.assertEqual(leg.get_texts()[2].get_text(), 'C')
- self.assertEqual(leg.get_texts()[3].get_text(), 'D')
+ assert len(leg.get_lines()) == 4
+ assert leg.get_texts()[0].get_text() == 'A (right)'
+ assert leg.get_texts()[1].get_text() == 'B (right)'
+ assert leg.get_texts()[2].get_text() == 'C'
+ assert leg.get_texts()[3].get_text() == 'D'
assert ax.right_ax.get_legend() is None
colors = set()
for line in leg.get_lines():
colors.add(line.get_color())
# TODO: color cycle problems
- self.assertEqual(len(colors), 4)
+ assert len(colors) == 4
plt.clf()
ax = fig.add_subplot(211)
ax = df.plot(secondary_y=['A', 'C'], mark_right=False)
leg = ax.get_legend()
- self.assertEqual(len(leg.get_lines()), 4)
- self.assertEqual(leg.get_texts()[0].get_text(), 'A')
- self.assertEqual(leg.get_texts()[1].get_text(), 'B')
- self.assertEqual(leg.get_texts()[2].get_text(), 'C')
- self.assertEqual(leg.get_texts()[3].get_text(), 'D')
+ assert len(leg.get_lines()) == 4
+ assert leg.get_texts()[0].get_text() == 'A'
+ assert leg.get_texts()[1].get_text() == 'B'
+ assert leg.get_texts()[2].get_text() == 'C'
+ assert leg.get_texts()[3].get_text() == 'D'
plt.clf()
ax = df.plot(kind='bar', secondary_y=['A'])
leg = ax.get_legend()
- self.assertEqual(leg.get_texts()[0].get_text(), 'A (right)')
- self.assertEqual(leg.get_texts()[1].get_text(), 'B')
+ assert leg.get_texts()[0].get_text() == 'A (right)'
+ assert leg.get_texts()[1].get_text() == 'B'
plt.clf()
ax = df.plot(kind='bar', secondary_y=['A'], mark_right=False)
leg = ax.get_legend()
- self.assertEqual(leg.get_texts()[0].get_text(), 'A')
- self.assertEqual(leg.get_texts()[1].get_text(), 'B')
+ assert leg.get_texts()[0].get_text() == 'A'
+ assert leg.get_texts()[1].get_text() == 'B'
plt.clf()
ax = fig.add_subplot(211)
df = tm.makeTimeDataFrame()
ax = df.plot(secondary_y=['C', 'D'])
leg = ax.get_legend()
- self.assertEqual(len(leg.get_lines()), 4)
+ assert len(leg.get_lines()) == 4
assert ax.right_ax.get_legend() is None
colors = set()
for line in leg.get_lines():
colors.add(line.get_color())
# TODO: color cycle problems
- self.assertEqual(len(colors), 4)
+ assert len(colors) == 4
# non-ts
df = tm.makeDataFrame()
@@ -1118,27 +1114,27 @@ def test_secondary_legend(self):
ax = fig.add_subplot(211)
ax = df.plot(secondary_y=['A', 'B'])
leg = ax.get_legend()
- self.assertEqual(len(leg.get_lines()), 4)
+ assert len(leg.get_lines()) == 4
assert ax.right_ax.get_legend() is None
colors = set()
for line in leg.get_lines():
colors.add(line.get_color())
# TODO: color cycle problems
- self.assertEqual(len(colors), 4)
+ assert len(colors) == 4
plt.clf()
ax = fig.add_subplot(211)
ax = df.plot(secondary_y=['C', 'D'])
leg = ax.get_legend()
- self.assertEqual(len(leg.get_lines()), 4)
+ assert len(leg.get_lines()) == 4
assert ax.right_ax.get_legend() is None
colors = set()
for line in leg.get_lines():
colors.add(line.get_color())
# TODO: color cycle problems
- self.assertEqual(len(colors), 4)
+ assert len(colors) == 4
def test_format_date_axis(self):
rng = date_range('1/1/2012', periods=12, freq='M')
@@ -1147,7 +1143,7 @@ def test_format_date_axis(self):
xaxis = ax.get_xaxis()
for l in xaxis.get_ticklabels():
if len(l.get_text()) > 0:
- self.assertEqual(l.get_rotation(), 30)
+ assert l.get_rotation() == 30
@slow
def test_ax_plot(self):
@@ -1195,8 +1191,8 @@ def test_irregular_ts_shared_ax_xlim(self):
# check that axis limits are correct
left, right = ax.get_xlim()
- self.assertEqual(left, ts_irregular.index.min().toordinal())
- self.assertEqual(right, ts_irregular.index.max().toordinal())
+ assert left == ts_irregular.index.min().toordinal()
+ assert right == ts_irregular.index.max().toordinal()
@slow
def test_secondary_y_non_ts_xlim(self):
@@ -1211,7 +1207,7 @@ def test_secondary_y_non_ts_xlim(self):
s2.plot(secondary_y=True, ax=ax)
left_after, right_after = ax.get_xlim()
- self.assertEqual(left_before, left_after)
+ assert left_before == left_after
assert right_before < right_after
@slow
@@ -1227,7 +1223,7 @@ def test_secondary_y_regular_ts_xlim(self):
s2.plot(secondary_y=True, ax=ax)
left_after, right_after = ax.get_xlim()
- self.assertEqual(left_before, left_after)
+ assert left_before == left_after
assert right_before < right_after
@slow
@@ -1242,8 +1238,8 @@ def test_secondary_y_mixed_freq_ts_xlim(self):
left_after, right_after = ax.get_xlim()
# a downsample should not have changed either limit
- self.assertEqual(left_before, left_after)
- self.assertEqual(right_before, right_after)
+ assert left_before == left_after
+ assert right_before == right_after
@slow
def test_secondary_y_irregular_ts_xlim(self):
@@ -1258,8 +1254,8 @@ def test_secondary_y_irregular_ts_xlim(self):
ts_irregular[:5].plot(ax=ax)
left, right = ax.get_xlim()
- self.assertEqual(left, ts_irregular.index.min().toordinal())
- self.assertEqual(right, ts_irregular.index.max().toordinal())
+ assert left == ts_irregular.index.min().toordinal()
+ assert right == ts_irregular.index.max().toordinal()
def test_plot_outofbounds_datetime(self):
# 2579 - checking this does not raise
@@ -1283,9 +1279,9 @@ def test_format_timedelta_ticks_narrow(self):
fig = ax.get_figure()
fig.canvas.draw()
labels = ax.get_xticklabels()
- self.assertEqual(len(labels), len(expected_labels))
+ assert len(labels) == len(expected_labels)
for l, l_expected in zip(labels, expected_labels):
- self.assertEqual(l.get_text(), l_expected)
+ assert l.get_text() == l_expected
def test_format_timedelta_ticks_wide(self):
if is_platform_mac():
@@ -1309,9 +1305,9 @@ def test_format_timedelta_ticks_wide(self):
fig = ax.get_figure()
fig.canvas.draw()
labels = ax.get_xticklabels()
- self.assertEqual(len(labels), len(expected_labels))
+ assert len(labels) == len(expected_labels)
for l, l_expected in zip(labels, expected_labels):
- self.assertEqual(l.get_text(), l_expected)
+ assert l.get_text() == l_expected
def test_timedelta_plot(self):
# test issue #8711
diff --git a/pandas/tests/plotting/test_frame.py b/pandas/tests/plotting/test_frame.py
index c550504063b3e..7297e3548b956 100644
--- a/pandas/tests/plotting/test_frame.py
+++ b/pandas/tests/plotting/test_frame.py
@@ -134,7 +134,7 @@ def test_plot(self):
# passed ax should be used:
fig, ax = self.plt.subplots()
axes = df.plot.bar(subplots=True, ax=ax)
- self.assertEqual(len(axes), 1)
+ assert len(axes) == 1
if self.mpl_ge_1_5_0:
result = ax.axes
else:
@@ -164,10 +164,10 @@ def test_color_and_style_arguments(self):
ax = df.plot(color=['red', 'black'], style=['-', '--'])
# check that the linestyles are correctly set:
linestyle = [line.get_linestyle() for line in ax.lines]
- self.assertEqual(linestyle, ['-', '--'])
+ assert linestyle == ['-', '--']
# check that the colors are correctly set:
color = [line.get_color() for line in ax.lines]
- self.assertEqual(color, ['red', 'black'])
+ assert color == ['red', 'black']
# passing both 'color' and 'style' arguments should not be allowed
# if there is a color symbol in the style strings:
with pytest.raises(ValueError):
@@ -176,7 +176,7 @@ def test_color_and_style_arguments(self):
def test_nonnumeric_exclude(self):
df = DataFrame({'A': ["x", "y", "z"], 'B': [1, 2, 3]})
ax = df.plot()
- self.assertEqual(len(ax.get_lines()), 1) # B was plotted
+ assert len(ax.get_lines()) == 1 # B was plotted
@slow
def test_implicit_label(self):
@@ -190,7 +190,7 @@ def test_donot_overwrite_index_name(self):
df = DataFrame(randn(2, 2), columns=['a', 'b'])
df.index.name = 'NAME'
df.plot(y='b', label='LABEL')
- self.assertEqual(df.index.name, 'NAME')
+ assert df.index.name == 'NAME'
@slow
def test_plot_xy(self):
@@ -303,7 +303,7 @@ def test_subplots(self):
for kind in ['bar', 'barh', 'line', 'area']:
axes = df.plot(kind=kind, subplots=True, sharex=True, legend=True)
self._check_axes_shape(axes, axes_num=3, layout=(3, 1))
- self.assertEqual(axes.shape, (3, ))
+ assert axes.shape == (3, )
for ax, column in zip(axes, df.columns):
self._check_legend_labels(ax,
@@ -379,43 +379,43 @@ def test_subplots_layout(self):
axes = df.plot(subplots=True, layout=(2, 2))
self._check_axes_shape(axes, axes_num=3, layout=(2, 2))
- self.assertEqual(axes.shape, (2, 2))
+ assert axes.shape == (2, 2)
axes = df.plot(subplots=True, layout=(-1, 2))
self._check_axes_shape(axes, axes_num=3, layout=(2, 2))
- self.assertEqual(axes.shape, (2, 2))
+ assert axes.shape == (2, 2)
axes = df.plot(subplots=True, layout=(2, -1))
self._check_axes_shape(axes, axes_num=3, layout=(2, 2))
- self.assertEqual(axes.shape, (2, 2))
+ assert axes.shape == (2, 2)
axes = df.plot(subplots=True, layout=(1, 4))
self._check_axes_shape(axes, axes_num=3, layout=(1, 4))
- self.assertEqual(axes.shape, (1, 4))
+ assert axes.shape == (1, 4)
axes = df.plot(subplots=True, layout=(-1, 4))
self._check_axes_shape(axes, axes_num=3, layout=(1, 4))
- self.assertEqual(axes.shape, (1, 4))
+ assert axes.shape == (1, 4)
axes = df.plot(subplots=True, layout=(4, -1))
self._check_axes_shape(axes, axes_num=3, layout=(4, 1))
- self.assertEqual(axes.shape, (4, 1))
+ assert axes.shape == (4, 1)
with pytest.raises(ValueError):
- axes = df.plot(subplots=True, layout=(1, 1))
+ df.plot(subplots=True, layout=(1, 1))
with pytest.raises(ValueError):
- axes = df.plot(subplots=True, layout=(-1, -1))
+ df.plot(subplots=True, layout=(-1, -1))
# single column
df = DataFrame(np.random.rand(10, 1),
index=list(string.ascii_letters[:10]))
axes = df.plot(subplots=True)
self._check_axes_shape(axes, axes_num=1, layout=(1, 1))
- self.assertEqual(axes.shape, (1, ))
+ assert axes.shape == (1, )
axes = df.plot(subplots=True, layout=(3, 3))
self._check_axes_shape(axes, axes_num=1, layout=(3, 3))
- self.assertEqual(axes.shape, (3, 3))
+ assert axes.shape == (3, 3)
@slow
def test_subplots_warnings(self):
@@ -442,13 +442,13 @@ def test_subplots_multiple_axes(self):
returned = df.plot(subplots=True, ax=axes[0], sharex=False,
sharey=False)
self._check_axes_shape(returned, axes_num=3, layout=(1, 3))
- self.assertEqual(returned.shape, (3, ))
+ assert returned.shape == (3, )
assert returned[0].figure is fig
# draw on second row
returned = df.plot(subplots=True, ax=axes[1], sharex=False,
sharey=False)
self._check_axes_shape(returned, axes_num=3, layout=(1, 3))
- self.assertEqual(returned.shape, (3, ))
+ assert returned.shape == (3, )
assert returned[0].figure is fig
self._check_axes_shape(axes, axes_num=6, layout=(2, 3))
tm.close()
@@ -471,17 +471,17 @@ def test_subplots_multiple_axes(self):
returned = df.plot(subplots=True, ax=axes, layout=(2, 1),
sharex=False, sharey=False)
self._check_axes_shape(returned, axes_num=4, layout=(2, 2))
- self.assertEqual(returned.shape, (4, ))
+ assert returned.shape == (4, )
returned = df.plot(subplots=True, ax=axes, layout=(2, -1),
sharex=False, sharey=False)
self._check_axes_shape(returned, axes_num=4, layout=(2, 2))
- self.assertEqual(returned.shape, (4, ))
+ assert returned.shape == (4, )
returned = df.plot(subplots=True, ax=axes, layout=(-1, 2),
sharex=False, sharey=False)
self._check_axes_shape(returned, axes_num=4, layout=(2, 2))
- self.assertEqual(returned.shape, (4, ))
+ assert returned.shape == (4, )
# single column
fig, axes = self.plt.subplots(1, 1)
@@ -490,7 +490,7 @@ def test_subplots_multiple_axes(self):
axes = df.plot(subplots=True, ax=[axes], sharex=False, sharey=False)
self._check_axes_shape(axes, axes_num=1, layout=(1, 1))
- self.assertEqual(axes.shape, (1, ))
+ assert axes.shape == (1, )
def test_subplots_ts_share_axes(self):
# GH 3964
@@ -540,20 +540,20 @@ def test_subplots_dup_columns(self):
axes = df.plot(subplots=True)
for ax in axes:
self._check_legend_labels(ax, labels=['a'])
- self.assertEqual(len(ax.lines), 1)
+ assert len(ax.lines) == 1
tm.close()
axes = df.plot(subplots=True, secondary_y='a')
for ax in axes:
# (right) is only attached when subplots=False
self._check_legend_labels(ax, labels=['a'])
- self.assertEqual(len(ax.lines), 1)
+ assert len(ax.lines) == 1
tm.close()
ax = df.plot(secondary_y='a')
self._check_legend_labels(ax, labels=['a (right)'] * 5)
- self.assertEqual(len(ax.lines), 0)
- self.assertEqual(len(ax.right_ax.lines), 5)
+ assert len(ax.lines) == 0
+ assert len(ax.right_ax.lines) == 5
def test_negative_log(self):
df = - DataFrame(rand(6, 4),
@@ -651,14 +651,14 @@ def test_line_lim(self):
ax = df.plot()
xmin, xmax = ax.get_xlim()
lines = ax.get_lines()
- self.assertEqual(xmin, lines[0].get_data()[0][0])
- self.assertEqual(xmax, lines[0].get_data()[0][-1])
+ assert xmin == lines[0].get_data()[0][0]
+ assert xmax == lines[0].get_data()[0][-1]
ax = df.plot(secondary_y=True)
xmin, xmax = ax.get_xlim()
lines = ax.get_lines()
- self.assertEqual(xmin, lines[0].get_data()[0][0])
- self.assertEqual(xmax, lines[0].get_data()[0][-1])
+ assert xmin == lines[0].get_data()[0][0]
+ assert xmax == lines[0].get_data()[0][-1]
axes = df.plot(secondary_y=True, subplots=True)
self._check_axes_shape(axes, axes_num=3, layout=(3, 1))
@@ -667,8 +667,8 @@ def test_line_lim(self):
assert not hasattr(ax, 'right_ax')
xmin, xmax = ax.get_xlim()
lines = ax.get_lines()
- self.assertEqual(xmin, lines[0].get_data()[0][0])
- self.assertEqual(xmax, lines[0].get_data()[0][-1])
+ assert xmin == lines[0].get_data()[0][0]
+ assert xmax == lines[0].get_data()[0][-1]
def test_area_lim(self):
df = DataFrame(rand(6, 4), columns=['x', 'y', 'z', 'four'])
@@ -679,13 +679,13 @@ def test_area_lim(self):
xmin, xmax = ax.get_xlim()
ymin, ymax = ax.get_ylim()
lines = ax.get_lines()
- self.assertEqual(xmin, lines[0].get_data()[0][0])
- self.assertEqual(xmax, lines[0].get_data()[0][-1])
- self.assertEqual(ymin, 0)
+ assert xmin == lines[0].get_data()[0][0]
+ assert xmax == lines[0].get_data()[0][-1]
+ assert ymin == 0
ax = _check_plot_works(neg_df.plot.area, stacked=stacked)
ymin, ymax = ax.get_ylim()
- self.assertEqual(ymax, 0)
+ assert ymax == 0
@slow
def test_bar_colors(self):
@@ -730,19 +730,19 @@ def test_bar_linewidth(self):
# regular
ax = df.plot.bar(linewidth=2)
for r in ax.patches:
- self.assertEqual(r.get_linewidth(), 2)
+ assert r.get_linewidth() == 2
# stacked
ax = df.plot.bar(stacked=True, linewidth=2)
for r in ax.patches:
- self.assertEqual(r.get_linewidth(), 2)
+ assert r.get_linewidth() == 2
# subplots
axes = df.plot.bar(linewidth=2, subplots=True)
self._check_axes_shape(axes, axes_num=5, layout=(5, 1))
for ax in axes:
for r in ax.patches:
- self.assertEqual(r.get_linewidth(), 2)
+ assert r.get_linewidth() == 2
@slow
def test_bar_barwidth(self):
@@ -753,34 +753,34 @@ def test_bar_barwidth(self):
# regular
ax = df.plot.bar(width=width)
for r in ax.patches:
- self.assertEqual(r.get_width(), width / len(df.columns))
+ assert r.get_width() == width / len(df.columns)
# stacked
ax = df.plot.bar(stacked=True, width=width)
for r in ax.patches:
- self.assertEqual(r.get_width(), width)
+ assert r.get_width() == width
# horizontal regular
ax = df.plot.barh(width=width)
for r in ax.patches:
- self.assertEqual(r.get_height(), width / len(df.columns))
+ assert r.get_height() == width / len(df.columns)
# horizontal stacked
ax = df.plot.barh(stacked=True, width=width)
for r in ax.patches:
- self.assertEqual(r.get_height(), width)
+ assert r.get_height() == width
# subplots
axes = df.plot.bar(width=width, subplots=True)
for ax in axes:
for r in ax.patches:
- self.assertEqual(r.get_width(), width)
+ assert r.get_width() == width
# horizontal subplots
axes = df.plot.barh(width=width, subplots=True)
for ax in axes:
for r in ax.patches:
- self.assertEqual(r.get_height(), width)
+ assert r.get_height() == width
@slow
def test_bar_barwidth_position(self):
@@ -807,10 +807,10 @@ def test_bar_barwidth_position_int(self):
ax = df.plot.bar(stacked=True, width=w)
ticks = ax.xaxis.get_ticklocs()
tm.assert_numpy_array_equal(ticks, np.array([0, 1, 2, 3, 4]))
- self.assertEqual(ax.get_xlim(), (-0.75, 4.75))
+ assert ax.get_xlim() == (-0.75, 4.75)
# check left-edge of bars
- self.assertEqual(ax.patches[0].get_x(), -0.5)
- self.assertEqual(ax.patches[-1].get_x(), 3.5)
+ assert ax.patches[0].get_x() == -0.5
+ assert ax.patches[-1].get_x() == 3.5
self._check_bar_alignment(df, kind='bar', stacked=True, width=1)
self._check_bar_alignment(df, kind='barh', stacked=False, width=1)
@@ -823,29 +823,29 @@ def test_bar_bottom_left(self):
df = DataFrame(rand(5, 5))
ax = df.plot.bar(stacked=False, bottom=1)
result = [p.get_y() for p in ax.patches]
- self.assertEqual(result, [1] * 25)
+ assert result == [1] * 25
ax = df.plot.bar(stacked=True, bottom=[-1, -2, -3, -4, -5])
result = [p.get_y() for p in ax.patches[:5]]
- self.assertEqual(result, [-1, -2, -3, -4, -5])
+ assert result == [-1, -2, -3, -4, -5]
ax = df.plot.barh(stacked=False, left=np.array([1, 1, 1, 1, 1]))
result = [p.get_x() for p in ax.patches]
- self.assertEqual(result, [1] * 25)
+ assert result == [1] * 25
ax = df.plot.barh(stacked=True, left=[1, 2, 3, 4, 5])
result = [p.get_x() for p in ax.patches[:5]]
- self.assertEqual(result, [1, 2, 3, 4, 5])
+ assert result == [1, 2, 3, 4, 5]
axes = df.plot.bar(subplots=True, bottom=-1)
for ax in axes:
result = [p.get_y() for p in ax.patches]
- self.assertEqual(result, [-1] * 5)
+ assert result == [-1] * 5
axes = df.plot.barh(subplots=True, left=np.array([1, 1, 1, 1, 1]))
for ax in axes:
result = [p.get_x() for p in ax.patches]
- self.assertEqual(result, [1] * 5)
+ assert result == [1] * 5
@slow
def test_bar_nan(self):
@@ -855,15 +855,15 @@ def test_bar_nan(self):
ax = df.plot.bar()
expected = [10, 0, 20, 5, 10, 20, 1, 2, 3]
result = [p.get_height() for p in ax.patches]
- self.assertEqual(result, expected)
+ assert result == expected
ax = df.plot.bar(stacked=True)
result = [p.get_height() for p in ax.patches]
- self.assertEqual(result, expected)
+ assert result == expected
result = [p.get_y() for p in ax.patches]
expected = [0.0, 0.0, 0.0, 10.0, 0.0, 20.0, 15.0, 10.0, 40.0]
- self.assertEqual(result, expected)
+ assert result == expected
@slow
def test_bar_categorical(self):
@@ -880,16 +880,16 @@ def test_bar_categorical(self):
ax = df.plot.bar()
ticks = ax.xaxis.get_ticklocs()
tm.assert_numpy_array_equal(ticks, np.array([0, 1, 2, 3, 4, 5]))
- self.assertEqual(ax.get_xlim(), (-0.5, 5.5))
+ assert ax.get_xlim() == (-0.5, 5.5)
# check left-edge of bars
- self.assertEqual(ax.patches[0].get_x(), -0.25)
- self.assertEqual(ax.patches[-1].get_x(), 5.15)
+ assert ax.patches[0].get_x() == -0.25
+ assert ax.patches[-1].get_x() == 5.15
ax = df.plot.bar(stacked=True)
tm.assert_numpy_array_equal(ticks, np.array([0, 1, 2, 3, 4, 5]))
- self.assertEqual(ax.get_xlim(), (-0.5, 5.5))
- self.assertEqual(ax.patches[0].get_x(), -0.25)
- self.assertEqual(ax.patches[-1].get_x(), 4.75)
+ assert ax.get_xlim() == (-0.5, 5.5)
+ assert ax.patches[0].get_x() == -0.25
+ assert ax.patches[-1].get_x() == 4.75
@slow
def test_plot_scatter(self):
@@ -919,17 +919,17 @@ def test_plot_scatter_with_c(self):
df.plot.scatter(x=0, y=1, c=2)]
for ax in axes:
# default to Greys
- self.assertEqual(ax.collections[0].cmap.name, 'Greys')
+ assert ax.collections[0].cmap.name == 'Greys'
if self.mpl_ge_1_3_1:
# n.b. there appears to be no public method to get the colorbar
# label
- self.assertEqual(ax.collections[0].colorbar._label, 'z')
+ assert ax.collections[0].colorbar._label == 'z'
cm = 'cubehelix'
ax = df.plot.scatter(x='x', y='y', c='z', colormap=cm)
- self.assertEqual(ax.collections[0].cmap.name, cm)
+ assert ax.collections[0].cmap.name == cm
# verify turning off colorbar works
ax = df.plot.scatter(x='x', y='y', c='z', colorbar=False)
@@ -1167,7 +1167,7 @@ def test_boxplot(self):
self._check_text_labels(ax.get_xticklabels(), labels)
tm.assert_numpy_array_equal(ax.xaxis.get_ticklocs(),
np.arange(1, len(numeric_cols) + 1))
- self.assertEqual(len(ax.lines), self.bp_n_objects * len(numeric_cols))
+ assert len(ax.lines) == self.bp_n_objects * len(numeric_cols)
# different warning on py3
if not PY3:
@@ -1178,7 +1178,7 @@ def test_boxplot(self):
self._check_ax_scales(axes, yaxis='log')
for ax, label in zip(axes, labels):
self._check_text_labels(ax.get_xticklabels(), [label])
- self.assertEqual(len(ax.lines), self.bp_n_objects)
+ assert len(ax.lines) == self.bp_n_objects
axes = series.plot.box(rot=40)
self._check_ticks_props(axes, xrot=40, yrot=0)
@@ -1192,7 +1192,7 @@ def test_boxplot(self):
labels = [pprint_thing(c) for c in numeric_cols]
self._check_text_labels(ax.get_xticklabels(), labels)
tm.assert_numpy_array_equal(ax.xaxis.get_ticklocs(), positions)
- self.assertEqual(len(ax.lines), self.bp_n_objects * len(numeric_cols))
+ assert len(ax.lines) == self.bp_n_objects * len(numeric_cols)
@slow
def test_boxplot_vertical(self):
@@ -1204,7 +1204,7 @@ def test_boxplot_vertical(self):
ax = df.plot.box(rot=50, fontsize=8, vert=False)
self._check_ticks_props(ax, xrot=0, yrot=50, ylabelsize=8)
self._check_text_labels(ax.get_yticklabels(), labels)
- self.assertEqual(len(ax.lines), self.bp_n_objects * len(numeric_cols))
+ assert len(ax.lines) == self.bp_n_objects * len(numeric_cols)
# _check_plot_works adds an ax so catch warning. see GH #13188
with tm.assert_produces_warning(UserWarning):
@@ -1214,13 +1214,13 @@ def test_boxplot_vertical(self):
self._check_ax_scales(axes, xaxis='log')
for ax, label in zip(axes, labels):
self._check_text_labels(ax.get_yticklabels(), [label])
- self.assertEqual(len(ax.lines), self.bp_n_objects)
+ assert len(ax.lines) == self.bp_n_objects
positions = np.array([3, 2, 8])
ax = df.plot.box(positions=positions, vert=False)
self._check_text_labels(ax.get_yticklabels(), labels)
tm.assert_numpy_array_equal(ax.yaxis.get_ticklocs(), positions)
- self.assertEqual(len(ax.lines), self.bp_n_objects * len(numeric_cols))
+ assert len(ax.lines) == self.bp_n_objects * len(numeric_cols)
@slow
def test_boxplot_return_type(self):
@@ -1563,16 +1563,16 @@ def test_style_by_column(self):
fig.add_subplot(111)
ax = df.plot(style=markers)
for i, l in enumerate(ax.get_lines()[:len(markers)]):
- self.assertEqual(l.get_marker(), markers[i])
+ assert l.get_marker() == markers[i]
@slow
def test_line_label_none(self):
s = Series([1, 2])
ax = s.plot()
- self.assertEqual(ax.get_legend(), None)
+ assert ax.get_legend() is None
ax = s.plot(legend=True)
- self.assertEqual(ax.get_legend().get_texts()[0].get_text(), 'None')
+ assert ax.get_legend().get_texts()[0].get_text() == 'None'
@slow
@tm.capture_stdout
@@ -1591,7 +1591,7 @@ def test_line_colors(self):
lines2 = ax2.get_lines()
for l1, l2 in zip(ax.get_lines(), lines2):
- self.assertEqual(l1.get_color(), l2.get_color())
+ assert l1.get_color() == l2.get_color()
tm.close()
@@ -1630,7 +1630,7 @@ def test_line_colors(self):
def test_dont_modify_colors(self):
colors = ['r', 'g', 'b']
pd.DataFrame(np.random.rand(10, 2)).plot(color=colors)
- self.assertEqual(len(colors), 3)
+ assert len(colors) == 3
@slow
def test_line_colors_and_styles_subplots(self):
@@ -1768,7 +1768,7 @@ def test_area_colors(self):
linecolors = jet_colors
self._check_colors(handles[:len(jet_colors)], linecolors=linecolors)
for h in handles:
- self.assertEqual(h.get_alpha(), 0.5)
+ assert h.get_alpha() == 0.5
@slow
def test_hist_colors(self):
@@ -2028,13 +2028,13 @@ def test_hexbin_basic(self):
ax = df.plot.hexbin(x='A', y='B', gridsize=10)
# TODO: need better way to test. This just does existence.
- self.assertEqual(len(ax.collections), 1)
+ assert len(ax.collections) == 1
# GH 6951
axes = df.plot.hexbin(x='A', y='B', subplots=True)
# hexbin should have 2 axes in the figure, 1 for plotting and another
# is colorbar
- self.assertEqual(len(axes[0].figure.axes), 2)
+ assert len(axes[0].figure.axes) == 2
# return value is single axes
self._check_axes_shape(axes, axes_num=1, layout=(1, 1))
@@ -2043,10 +2043,10 @@ def test_hexbin_with_c(self):
df = self.hexbin_df
ax = df.plot.hexbin(x='A', y='B', C='C')
- self.assertEqual(len(ax.collections), 1)
+ assert len(ax.collections) == 1
ax = df.plot.hexbin(x='A', y='B', C='C', reduce_C_function=np.std)
- self.assertEqual(len(ax.collections), 1)
+ assert len(ax.collections) == 1
@slow
def test_hexbin_cmap(self):
@@ -2054,11 +2054,11 @@ def test_hexbin_cmap(self):
# Default to BuGn
ax = df.plot.hexbin(x='A', y='B')
- self.assertEqual(ax.collections[0].cmap.name, 'BuGn')
+ assert ax.collections[0].cmap.name == 'BuGn'
cm = 'cubehelix'
ax = df.plot.hexbin(x='A', y='B', colormap=cm)
- self.assertEqual(ax.collections[0].cmap.name, cm)
+ assert ax.collections[0].cmap.name == cm
@slow
def test_no_color_bar(self):
@@ -2072,7 +2072,7 @@ def test_allow_cmap(self):
df = self.hexbin_df
ax = df.plot.hexbin(x='A', y='B', cmap='YlGn')
- self.assertEqual(ax.collections[0].cmap.name, 'YlGn')
+ assert ax.collections[0].cmap.name == 'YlGn'
with pytest.raises(TypeError):
df.plot.hexbin(x='A', y='B', cmap='YlGn', colormap='BuGn')
@@ -2094,11 +2094,11 @@ def test_pie_df(self):
with tm.assert_produces_warning(UserWarning):
axes = _check_plot_works(df.plot.pie,
subplots=True)
- self.assertEqual(len(axes), len(df.columns))
+ assert len(axes) == len(df.columns)
for ax in axes:
self._check_text_labels(ax.texts, df.index)
for ax, ylabel in zip(axes, df.columns):
- self.assertEqual(ax.get_ylabel(), ylabel)
+ assert ax.get_ylabel() == ylabel
labels = ['A', 'B', 'C', 'D', 'E']
color_args = ['r', 'g', 'b', 'c', 'm']
@@ -2106,7 +2106,7 @@ def test_pie_df(self):
axes = _check_plot_works(df.plot.pie,
subplots=True, labels=labels,
colors=color_args)
- self.assertEqual(len(axes), len(df.columns))
+ assert len(axes) == len(df.columns)
for ax in axes:
self._check_text_labels(ax.texts, labels)
@@ -2124,13 +2124,12 @@ def test_pie_df_nan(self):
expected = list(base_expected) # force copy
expected[i] = ''
result = [x.get_text() for x in ax.texts]
- self.assertEqual(result, expected)
+ assert result == expected
# legend labels
# NaN's not included in legend with subplots
# see https://github.com/pandas-dev/pandas/issues/8390
- self.assertEqual([x.get_text() for x in
- ax.get_legend().get_texts()],
- base_expected[:i] + base_expected[i + 1:])
+ assert ([x.get_text() for x in ax.get_legend().get_texts()] ==
+ base_expected[:i] + base_expected[i + 1:])
@slow
def test_errorbar_plot(self):
@@ -2280,13 +2279,10 @@ def test_errorbar_asymmetrical(self):
expected_0_0 = err[0, :, 0] * np.array([-1, 1])
tm.assert_almost_equal(yerr_0_0, expected_0_0)
else:
- self.assertEqual(ax.lines[7].get_ydata()[0],
- data[0, 1] - err[1, 0, 0])
- self.assertEqual(ax.lines[8].get_ydata()[0],
- data[0, 1] + err[1, 1, 0])
-
- self.assertEqual(ax.lines[5].get_xdata()[0], -err[1, 0, 0] / 2)
- self.assertEqual(ax.lines[6].get_xdata()[0], err[1, 1, 0] / 2)
+ assert ax.lines[7].get_ydata()[0] == data[0, 1] - err[1, 0, 0]
+ assert ax.lines[8].get_ydata()[0] == data[0, 1] + err[1, 1, 0]
+ assert ax.lines[5].get_xdata()[0] == -err[1, 0, 0] / 2
+ assert ax.lines[6].get_xdata()[0] == err[1, 1, 0] / 2
with pytest.raises(ValueError):
df.plot(yerr=err.T)
@@ -2362,7 +2358,7 @@ def test_sharex_and_ax(self):
def _check(axes):
for ax in axes:
- self.assertEqual(len(ax.lines), 1)
+ assert len(ax.lines) == 1
self._check_visible(ax.get_yticklabels(), visible=True)
for ax in [axes[0], axes[2]]:
self._check_visible(ax.get_xticklabels(), visible=False)
@@ -2392,7 +2388,7 @@ def _check(axes):
gs.tight_layout(plt.gcf())
for ax in axes:
- self.assertEqual(len(ax.lines), 1)
+ assert len(ax.lines) == 1
self._check_visible(ax.get_yticklabels(), visible=True)
self._check_visible(ax.get_xticklabels(), visible=True)
self._check_visible(ax.get_xticklabels(minor=True), visible=True)
@@ -2414,7 +2410,7 @@ def test_sharey_and_ax(self):
def _check(axes):
for ax in axes:
- self.assertEqual(len(ax.lines), 1)
+ assert len(ax.lines) == 1
self._check_visible(ax.get_xticklabels(), visible=True)
self._check_visible(
ax.get_xticklabels(minor=True), visible=True)
@@ -2444,7 +2440,7 @@ def _check(axes):
gs.tight_layout(plt.gcf())
for ax in axes:
- self.assertEqual(len(ax.lines), 1)
+ assert len(ax.lines) == 1
self._check_visible(ax.get_yticklabels(), visible=True)
self._check_visible(ax.get_xticklabels(), visible=True)
self._check_visible(ax.get_xticklabels(minor=True), visible=True)
@@ -2494,7 +2490,7 @@ def test_df_subplots_patterns_minorticks(self):
fig, axes = plt.subplots(2, 1, sharex=True)
axes = df.plot(subplots=True, ax=axes)
for ax in axes:
- self.assertEqual(len(ax.lines), 1)
+ assert len(ax.lines) == 1
self._check_visible(ax.get_yticklabels(), visible=True)
# xaxis of 1st ax must be hidden
self._check_visible(axes[0].get_xticklabels(), visible=False)
@@ -2507,7 +2503,7 @@ def test_df_subplots_patterns_minorticks(self):
with tm.assert_produces_warning(UserWarning):
axes = df.plot(subplots=True, ax=axes, sharex=True)
for ax in axes:
- self.assertEqual(len(ax.lines), 1)
+ assert len(ax.lines) == 1
self._check_visible(ax.get_yticklabels(), visible=True)
# xaxis of 1st ax must be hidden
self._check_visible(axes[0].get_xticklabels(), visible=False)
@@ -2520,7 +2516,7 @@ def test_df_subplots_patterns_minorticks(self):
fig, axes = plt.subplots(2, 1)
axes = df.plot(subplots=True, ax=axes)
for ax in axes:
- self.assertEqual(len(ax.lines), 1)
+ assert len(ax.lines) == 1
self._check_visible(ax.get_yticklabels(), visible=True)
self._check_visible(ax.get_xticklabels(), visible=True)
self._check_visible(ax.get_xticklabels(minor=True), visible=True)
@@ -2554,9 +2550,9 @@ def _get_horizontal_grid():
for ax1, ax2 in [_get_vertical_grid(), _get_horizontal_grid()]:
ax1 = ts.plot(ax=ax1)
- self.assertEqual(len(ax1.lines), 1)
+ assert len(ax1.lines) == 1
ax2 = df.plot(ax=ax2)
- self.assertEqual(len(ax2.lines), 2)
+ assert len(ax2.lines) == 2
for ax in [ax1, ax2]:
self._check_visible(ax.get_yticklabels(), visible=True)
self._check_visible(ax.get_xticklabels(), visible=True)
@@ -2567,8 +2563,8 @@ def _get_horizontal_grid():
# subplots=True
for ax1, ax2 in [_get_vertical_grid(), _get_horizontal_grid()]:
axes = df.plot(subplots=True, ax=[ax1, ax2])
- self.assertEqual(len(ax1.lines), 1)
- self.assertEqual(len(ax2.lines), 1)
+ assert len(ax1.lines) == 1
+ assert len(ax2.lines) == 1
for ax in axes:
self._check_visible(ax.get_yticklabels(), visible=True)
self._check_visible(ax.get_xticklabels(), visible=True)
@@ -2581,8 +2577,8 @@ def _get_horizontal_grid():
with tm.assert_produces_warning(UserWarning):
axes = df.plot(subplots=True, ax=[ax1, ax2], sharex=True,
sharey=True)
- self.assertEqual(len(axes[0].lines), 1)
- self.assertEqual(len(axes[1].lines), 1)
+ assert len(axes[0].lines) == 1
+ assert len(axes[1].lines) == 1
for ax in [ax1, ax2]:
# yaxis are visible because there is only one column
self._check_visible(ax.get_yticklabels(), visible=True)
@@ -2598,8 +2594,8 @@ def _get_horizontal_grid():
with tm.assert_produces_warning(UserWarning):
axes = df.plot(subplots=True, ax=[ax1, ax2], sharex=True,
sharey=True)
- self.assertEqual(len(axes[0].lines), 1)
- self.assertEqual(len(axes[1].lines), 1)
+ assert len(axes[0].lines) == 1
+ assert len(axes[1].lines) == 1
self._check_visible(axes[0].get_yticklabels(), visible=True)
# yaxis of axes1 (right) are hidden
self._check_visible(axes[1].get_yticklabels(), visible=False)
@@ -2624,7 +2620,7 @@ def _get_boxed_grid():
index=ts.index, columns=list('ABCD'))
axes = df.plot(subplots=True, ax=axes)
for ax in axes:
- self.assertEqual(len(ax.lines), 1)
+ assert len(ax.lines) == 1
# axis are visible because these are not shared
self._check_visible(ax.get_yticklabels(), visible=True)
self._check_visible(ax.get_xticklabels(), visible=True)
@@ -2636,7 +2632,7 @@ def _get_boxed_grid():
with tm.assert_produces_warning(UserWarning):
axes = df.plot(subplots=True, ax=axes, sharex=True, sharey=True)
for ax in axes:
- self.assertEqual(len(ax.lines), 1)
+ assert len(ax.lines) == 1
for ax in [axes[0], axes[2]]: # left column
self._check_visible(ax.get_yticklabels(), visible=True)
for ax in [axes[1], axes[3]]: # right column
@@ -2710,8 +2706,7 @@ def test_passed_bar_colors(self):
color_tuples = [(0.9, 0, 0, 1), (0, 0.9, 0, 1), (0, 0, 0.9, 1)]
colormap = mpl.colors.ListedColormap(color_tuples)
barplot = pd.DataFrame([[1, 2, 3]]).plot(kind="bar", cmap=colormap)
- self.assertEqual(color_tuples, [c.get_facecolor()
- for c in barplot.patches])
+ assert color_tuples == [c.get_facecolor() for c in barplot.patches]
def test_rcParams_bar_colors(self):
import matplotlib as mpl
@@ -2723,8 +2718,7 @@ def test_rcParams_bar_colors(self):
except (AttributeError, KeyError): # mpl 1.4
with mpl.rc_context(rc={'axes.color_cycle': color_tuples}):
barplot = pd.DataFrame([[1, 2, 3]]).plot(kind="bar")
- self.assertEqual(color_tuples, [c.get_facecolor()
- for c in barplot.patches])
+ assert color_tuples == [c.get_facecolor() for c in barplot.patches]
def _generate_4_axes_via_gridspec():
diff --git a/pandas/tests/plotting/test_groupby.py b/pandas/tests/plotting/test_groupby.py
index 93efb3f994c38..121f2f9b75698 100644
--- a/pandas/tests/plotting/test_groupby.py
+++ b/pandas/tests/plotting/test_groupby.py
@@ -68,7 +68,7 @@ def test_plot_kwargs(self):
res = df.groupby('z').plot(kind='scatter', x='x', y='y')
# check that a scatter plot is effectively plotted: the axes should
# contain a PathCollection from the scatter plot (GH11805)
- self.assertEqual(len(res['a'].collections), 1)
+ assert len(res['a'].collections) == 1
res = df.groupby('z').plot.scatter(x='x', y='y')
- self.assertEqual(len(res['a'].collections), 1)
+ assert len(res['a'].collections) == 1
diff --git a/pandas/tests/plotting/test_hist_method.py b/pandas/tests/plotting/test_hist_method.py
index 7002321908ef0..39bab59242c22 100644
--- a/pandas/tests/plotting/test_hist_method.py
+++ b/pandas/tests/plotting/test_hist_method.py
@@ -54,7 +54,7 @@ def test_hist_legacy(self):
def test_hist_bins_legacy(self):
df = DataFrame(np.random.randn(10, 2))
ax = df.hist(bins=2)[0][0]
- self.assertEqual(len(ax.patches), 2)
+ assert len(ax.patches) == 2
@slow
def test_hist_layout(self):
@@ -122,13 +122,13 @@ def test_hist_no_overlap(self):
y.hist()
fig = gcf()
axes = fig.axes if self.mpl_ge_1_5_0 else fig.get_axes()
- self.assertEqual(len(axes), 2)
+ assert len(axes) == 2
@slow
def test_hist_by_no_extra_plots(self):
df = self.hist_df
axes = df.height.hist(by=df.gender) # noqa
- self.assertEqual(len(self.plt.get_fignums()), 1)
+ assert len(self.plt.get_fignums()) == 1
@slow
def test_plot_fails_when_ax_differs_from_figure(self):
@@ -314,8 +314,8 @@ def test_grouped_hist_legacy2(self):
'gender': gender_int})
gb = df_int.groupby('gender')
axes = gb.hist()
- self.assertEqual(len(axes), 2)
- self.assertEqual(len(self.plt.get_fignums()), 2)
+ assert len(axes) == 2
+ assert len(self.plt.get_fignums()) == 2
tm.close()
@slow
diff --git a/pandas/tests/plotting/test_misc.py b/pandas/tests/plotting/test_misc.py
index 9b8569e8680e4..3a9cb309db707 100644
--- a/pandas/tests/plotting/test_misc.py
+++ b/pandas/tests/plotting/test_misc.py
@@ -309,7 +309,7 @@ def test_subplot_titles(self):
# Case len(title) == len(df)
plot = df.plot(subplots=True, title=title)
- self.assertEqual([p.get_title() for p in plot], title)
+ assert [p.get_title() for p in plot] == title
# Case len(title) > len(df)
pytest.raises(ValueError, df.plot, subplots=True,
@@ -325,4 +325,4 @@ def test_subplot_titles(self):
plot = df.drop('SepalWidth', axis=1).plot(subplots=True, layout=(2, 2),
title=title[:-1])
title_list = [ax.get_title() for sublist in plot for ax in sublist]
- self.assertEqual(title_list, title[:3] + [''])
+ assert title_list == title[:3] + ['']
diff --git a/pandas/tests/plotting/test_series.py b/pandas/tests/plotting/test_series.py
index 8ae301a0b7b4c..d1325c7130d04 100644
--- a/pandas/tests/plotting/test_series.py
+++ b/pandas/tests/plotting/test_series.py
@@ -93,36 +93,36 @@ def test_dont_modify_rcParams(self):
key = 'axes.color_cycle'
colors = self.plt.rcParams[key]
Series([1, 2, 3]).plot()
- self.assertEqual(colors, self.plt.rcParams[key])
+ assert colors == self.plt.rcParams[key]
def test_ts_line_lim(self):
ax = self.ts.plot()
xmin, xmax = ax.get_xlim()
lines = ax.get_lines()
- self.assertEqual(xmin, lines[0].get_data(orig=False)[0][0])
- self.assertEqual(xmax, lines[0].get_data(orig=False)[0][-1])
+ assert xmin == lines[0].get_data(orig=False)[0][0]
+ assert xmax == lines[0].get_data(orig=False)[0][-1]
tm.close()
ax = self.ts.plot(secondary_y=True)
xmin, xmax = ax.get_xlim()
lines = ax.get_lines()
- self.assertEqual(xmin, lines[0].get_data(orig=False)[0][0])
- self.assertEqual(xmax, lines[0].get_data(orig=False)[0][-1])
+ assert xmin == lines[0].get_data(orig=False)[0][0]
+ assert xmax == lines[0].get_data(orig=False)[0][-1]
def test_ts_area_lim(self):
ax = self.ts.plot.area(stacked=False)
xmin, xmax = ax.get_xlim()
line = ax.get_lines()[0].get_data(orig=False)[0]
- self.assertEqual(xmin, line[0])
- self.assertEqual(xmax, line[-1])
+ assert xmin == line[0]
+ assert xmax == line[-1]
tm.close()
# GH 7471
ax = self.ts.plot.area(stacked=False, x_compat=True)
xmin, xmax = ax.get_xlim()
line = ax.get_lines()[0].get_data(orig=False)[0]
- self.assertEqual(xmin, line[0])
- self.assertEqual(xmax, line[-1])
+ assert xmin == line[0]
+ assert xmax == line[-1]
tm.close()
tz_ts = self.ts.copy()
@@ -130,15 +130,15 @@ def test_ts_area_lim(self):
ax = tz_ts.plot.area(stacked=False, x_compat=True)
xmin, xmax = ax.get_xlim()
line = ax.get_lines()[0].get_data(orig=False)[0]
- self.assertEqual(xmin, line[0])
- self.assertEqual(xmax, line[-1])
+ assert xmin == line[0]
+ assert xmax == line[-1]
tm.close()
ax = tz_ts.plot.area(stacked=False, secondary_y=True)
xmin, xmax = ax.get_xlim()
line = ax.get_lines()[0].get_data(orig=False)[0]
- self.assertEqual(xmin, line[0])
- self.assertEqual(xmax, line[-1])
+ assert xmin == line[0]
+ assert xmax == line[-1]
def test_label(self):
s = Series([1, 2])
@@ -159,7 +159,7 @@ def test_label(self):
self.plt.close()
        # Add label info, but don't draw
ax = s.plot(legend=False, label='LABEL')
- self.assertEqual(ax.get_legend(), None) # Hasn't been drawn
+ assert ax.get_legend() is None # Hasn't been drawn
ax.legend() # draw it
self._check_legend_labels(ax, labels=['LABEL'])
@@ -190,10 +190,10 @@ def test_line_use_index_false(self):
s.index.name = 'The Index'
ax = s.plot(use_index=False)
label = ax.get_xlabel()
- self.assertEqual(label, '')
+ assert label == ''
ax2 = s.plot.bar(use_index=False)
label2 = ax2.get_xlabel()
- self.assertEqual(label2, '')
+ assert label2 == ''
@slow
def test_bar_log(self):
@@ -255,7 +255,7 @@ def test_irregular_datetime(self):
ax = ser.plot()
xp = datetime(1999, 1, 1).toordinal()
ax.set_xlim('1/1/1999', '1/1/2001')
- self.assertEqual(xp, ax.get_xlim()[0])
+ assert xp == ax.get_xlim()[0]
@slow
def test_pie_series(self):
@@ -265,7 +265,7 @@ def test_pie_series(self):
index=['a', 'b', 'c', 'd', 'e'], name='YLABEL')
ax = _check_plot_works(series.plot.pie)
self._check_text_labels(ax.texts, series.index)
- self.assertEqual(ax.get_ylabel(), 'YLABEL')
+ assert ax.get_ylabel() == 'YLABEL'
# without wedge labels
ax = _check_plot_works(series.plot.pie, labels=None)
@@ -295,7 +295,7 @@ def test_pie_series(self):
expected_texts = list(next(it) for it in itertools.cycle(iters))
self._check_text_labels(ax.texts, expected_texts)
for t in ax.texts:
- self.assertEqual(t.get_fontsize(), 7)
+ assert t.get_fontsize() == 7
# includes negative value
with pytest.raises(ValueError):
@@ -313,13 +313,13 @@ def test_pie_nan(self):
ax = s.plot.pie(legend=True)
expected = ['0', '', '2', '3']
result = [x.get_text() for x in ax.texts]
- self.assertEqual(result, expected)
+ assert result == expected
@slow
def test_hist_df_kwargs(self):
df = DataFrame(np.random.randn(10, 2))
ax = df.plot.hist(bins=5)
- self.assertEqual(len(ax.patches), 10)
+ assert len(ax.patches) == 10
@slow
def test_hist_df_with_nonnumerics(self):
@@ -329,10 +329,10 @@ def test_hist_df_with_nonnumerics(self):
np.random.randn(10, 4), columns=['A', 'B', 'C', 'D'])
df['E'] = ['x', 'y'] * 5
ax = df.plot.hist(bins=5)
- self.assertEqual(len(ax.patches), 20)
+ assert len(ax.patches) == 20
ax = df.plot.hist() # bins=10
- self.assertEqual(len(ax.patches), 40)
+ assert len(ax.patches) == 40
@slow
def test_hist_legacy(self):
@@ -364,7 +364,7 @@ def test_hist_legacy(self):
def test_hist_bins_legacy(self):
df = DataFrame(np.random.randn(10, 2))
ax = df.hist(bins=2)[0][0]
- self.assertEqual(len(ax.patches), 2)
+ assert len(ax.patches) == 2
@slow
def test_hist_layout(self):
@@ -430,7 +430,7 @@ def test_hist_no_overlap(self):
y.hist()
fig = gcf()
axes = fig.axes if self.mpl_ge_1_5_0 else fig.get_axes()
- self.assertEqual(len(axes), 2)
+ assert len(axes) == 2
@slow
def test_hist_secondary_legend(self):
@@ -583,7 +583,7 @@ def test_kde_missing_vals(self):
@slow
def test_hist_kwargs(self):
ax = self.ts.plot.hist(bins=5)
- self.assertEqual(len(ax.patches), 5)
+ assert len(ax.patches) == 5
self._check_text_labels(ax.yaxis.get_label(), 'Frequency')
tm.close()
@@ -599,7 +599,7 @@ def test_hist_kwargs(self):
def test_hist_kde_color(self):
ax = self.ts.plot.hist(logy=True, bins=10, color='b')
self._check_ax_scales(ax, yaxis='log')
- self.assertEqual(len(ax.patches), 10)
+ assert len(ax.patches) == 10
self._check_colors(ax.patches, facecolors=['b'] * 10)
tm._skip_if_no_scipy()
@@ -607,7 +607,7 @@ def test_hist_kde_color(self):
ax = self.ts.plot.kde(logy=True, color='r')
self._check_ax_scales(ax, yaxis='log')
lines = ax.get_lines()
- self.assertEqual(len(lines), 1)
+ assert len(lines) == 1
self._check_colors(lines, ['r'])
@slow
@@ -729,16 +729,16 @@ def test_standard_colors(self):
for c in ['r', 'red', 'green', '#FF0000']:
result = _get_standard_colors(1, color=c)
- self.assertEqual(result, [c])
+ assert result == [c]
result = _get_standard_colors(1, color=[c])
- self.assertEqual(result, [c])
+ assert result == [c]
result = _get_standard_colors(3, color=c)
- self.assertEqual(result, [c] * 3)
+ assert result == [c] * 3
result = _get_standard_colors(3, color=[c])
- self.assertEqual(result, [c] * 3)
+ assert result == [c] * 3
@slow
def test_standard_colors_all(self):
@@ -748,30 +748,30 @@ def test_standard_colors_all(self):
# multiple colors like mediumaquamarine
for c in colors.cnames:
result = _get_standard_colors(num_colors=1, color=c)
- self.assertEqual(result, [c])
+ assert result == [c]
result = _get_standard_colors(num_colors=1, color=[c])
- self.assertEqual(result, [c])
+ assert result == [c]
result = _get_standard_colors(num_colors=3, color=c)
- self.assertEqual(result, [c] * 3)
+ assert result == [c] * 3
result = _get_standard_colors(num_colors=3, color=[c])
- self.assertEqual(result, [c] * 3)
+ assert result == [c] * 3
# single letter colors like k
for c in colors.ColorConverter.colors:
result = _get_standard_colors(num_colors=1, color=c)
- self.assertEqual(result, [c])
+ assert result == [c]
result = _get_standard_colors(num_colors=1, color=[c])
- self.assertEqual(result, [c])
+ assert result == [c]
result = _get_standard_colors(num_colors=3, color=c)
- self.assertEqual(result, [c] * 3)
+ assert result == [c] * 3
result = _get_standard_colors(num_colors=3, color=[c])
- self.assertEqual(result, [c] * 3)
+ assert result == [c] * 3
def test_series_plot_color_kwargs(self):
# GH1890
diff --git a/pandas/tests/reshape/test_concat.py b/pandas/tests/reshape/test_concat.py
index 9854245cf1abd..2d4d0a09060de 100644
--- a/pandas/tests/reshape/test_concat.py
+++ b/pandas/tests/reshape/test_concat.py
@@ -65,14 +65,14 @@ def _check_expected_dtype(self, obj, label):
"""
if isinstance(obj, pd.Index):
if label == 'bool':
- self.assertEqual(obj.dtype, 'object')
+ assert obj.dtype == 'object'
else:
- self.assertEqual(obj.dtype, label)
+ assert obj.dtype == label
elif isinstance(obj, pd.Series):
if label.startswith('period'):
- self.assertEqual(obj.dtype, 'object')
+ assert obj.dtype == 'object'
else:
- self.assertEqual(obj.dtype, label)
+ assert obj.dtype == label
else:
raise ValueError
@@ -814,7 +814,7 @@ def test_append_preserve_index_name(self):
df2 = df2.set_index(['A'])
result = df1.append(df2)
- self.assertEqual(result.index.name, 'A')
+ assert result.index.name == 'A'
def test_append_dtype_coerce(self):
@@ -849,8 +849,8 @@ def test_append_missing_column_proper_upcast(self):
dtype=bool)})
appended = df1.append(df2, ignore_index=True)
- self.assertEqual(appended['A'].dtype, 'f8')
- self.assertEqual(appended['B'].dtype, 'O')
+ assert appended['A'].dtype == 'f8'
+ assert appended['B'].dtype == 'O'
class TestConcatenate(ConcatenateBase):
@@ -934,7 +934,7 @@ def test_concat_keys_specific_levels(self):
tm.assert_index_equal(result.columns.levels[0],
Index(level, name='group_key'))
- self.assertEqual(result.columns.names[0], 'group_key')
+ assert result.columns.names[0] == 'group_key'
def test_concat_dataframe_keys_bug(self):
t1 = DataFrame({
@@ -945,8 +945,7 @@ def test_concat_dataframe_keys_bug(self):
# it works
result = concat([t1, t2], axis=1, keys=['t1', 't2'])
- self.assertEqual(list(result.columns), [('t1', 'value'),
- ('t2', 'value')])
+ assert list(result.columns) == [('t1', 'value'), ('t2', 'value')]
def test_concat_series_partial_columns_names(self):
# GH10698
@@ -1020,10 +1019,10 @@ def test_concat_multiindex_with_keys(self):
columns=Index(['A', 'B', 'C'], name='exp'))
result = concat([frame, frame], keys=[0, 1], names=['iteration'])
- self.assertEqual(result.index.names, ('iteration',) + index.names)
+ assert result.index.names == ('iteration',) + index.names
tm.assert_frame_equal(result.loc[0], frame)
tm.assert_frame_equal(result.loc[1], frame)
- self.assertEqual(result.index.nlevels, 3)
+ assert result.index.nlevels == 3
def test_concat_multiindex_with_tz(self):
# GH 6606
@@ -1088,22 +1087,21 @@ def test_concat_keys_and_levels(self):
names=names + [None])
expected.index = exp_index
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# no names
-
result = concat([df, df2, df, df2],
keys=[('foo', 'one'), ('foo', 'two'),
('baz', 'one'), ('baz', 'two')],
levels=levels)
- self.assertEqual(result.index.names, (None,) * 3)
+ assert result.index.names == (None,) * 3
# no levels
result = concat([df, df2, df, df2],
keys=[('foo', 'one'), ('foo', 'two'),
('baz', 'one'), ('baz', 'two')],
names=['first', 'second'])
- self.assertEqual(result.index.names, ('first', 'second') + (None,))
+ assert result.index.names == ('first', 'second') + (None,)
tm.assert_index_equal(result.index.levels[0],
Index(['baz', 'foo'], name='first'))
@@ -1135,7 +1133,7 @@ def test_concat_rename_index(self):
exp.index.set_names(names, inplace=True)
tm.assert_frame_equal(result, exp)
- self.assertEqual(result.index.names, exp.index.names)
+ assert result.index.names == exp.index.names
def test_crossed_dtypes_weird_corner(self):
columns = ['A', 'B', 'C', 'D']
@@ -1160,7 +1158,7 @@ def test_crossed_dtypes_weird_corner(self):
df2 = DataFrame(np.random.randn(1, 4), index=['b'])
result = concat(
[df, df2], keys=['one', 'two'], names=['first', 'second'])
- self.assertEqual(result.index.names, ('first', 'second'))
+ assert result.index.names == ('first', 'second')
def test_dups_index(self):
# GH 4771
@@ -1442,7 +1440,7 @@ def test_concat_series(self):
result = concat(pieces)
tm.assert_series_equal(result, ts)
- self.assertEqual(result.name, ts.name)
+ assert result.name == ts.name
result = concat(pieces, keys=[0, 1, 2])
expected = ts.copy()
@@ -1549,7 +1547,7 @@ def test_concat_bug_1719(self):
left = concat([ts1, ts2], join='outer', axis=1)
right = concat([ts2, ts1], join='outer', axis=1)
- self.assertEqual(len(left), len(right))
+ assert len(left) == len(right)
def test_concat_bug_2972(self):
ts0 = Series(np.zeros(5))
@@ -1706,8 +1704,7 @@ def test_concat_tz_frame(self):
assert_frame_equal(df2, df3)
def test_concat_tz_series(self):
- # GH 11755
- # tz and no tz
+ # gh-11755: tz and no tz
x = Series(date_range('20151124 08:00',
'20151124 09:00',
freq='1h', tz='UTC'))
@@ -1717,8 +1714,7 @@ def test_concat_tz_series(self):
result = concat([x, y], ignore_index=True)
tm.assert_series_equal(result, expected)
- # GH 11887
- # concat tz and object
+ # gh-11887: concat tz and object
x = Series(date_range('20151124 08:00',
'20151124 09:00',
freq='1h', tz='UTC'))
@@ -1728,10 +1724,8 @@ def test_concat_tz_series(self):
result = concat([x, y], ignore_index=True)
tm.assert_series_equal(result, expected)
- # 12217
- # 12306 fixed I think
-
- # Concat'ing two UTC times
+ # see gh-12217 and gh-12306
+ # Concatenating two UTC times
first = pd.DataFrame([[datetime(2016, 1, 1)]])
first[0] = first[0].dt.tz_localize('UTC')
@@ -1739,9 +1733,9 @@ def test_concat_tz_series(self):
second[0] = second[0].dt.tz_localize('UTC')
result = pd.concat([first, second])
- self.assertEqual(result[0].dtype, 'datetime64[ns, UTC]')
+ assert result[0].dtype == 'datetime64[ns, UTC]'
- # Concat'ing two London times
+ # Concatenating two London times
first = pd.DataFrame([[datetime(2016, 1, 1)]])
first[0] = first[0].dt.tz_localize('Europe/London')
@@ -1749,9 +1743,9 @@ def test_concat_tz_series(self):
second[0] = second[0].dt.tz_localize('Europe/London')
result = pd.concat([first, second])
- self.assertEqual(result[0].dtype, 'datetime64[ns, Europe/London]')
+ assert result[0].dtype == 'datetime64[ns, Europe/London]'
- # Concat'ing 2+1 London times
+ # Concatenating 2+1 London times
first = pd.DataFrame([[datetime(2016, 1, 1)], [datetime(2016, 1, 2)]])
first[0] = first[0].dt.tz_localize('Europe/London')
@@ -1759,7 +1753,7 @@ def test_concat_tz_series(self):
second[0] = second[0].dt.tz_localize('Europe/London')
result = pd.concat([first, second])
- self.assertEqual(result[0].dtype, 'datetime64[ns, Europe/London]')
+ assert result[0].dtype == 'datetime64[ns, Europe/London]'
# Concat'ing 1+2 London times
first = pd.DataFrame([[datetime(2016, 1, 1)]])
@@ -1769,11 +1763,10 @@ def test_concat_tz_series(self):
second[0] = second[0].dt.tz_localize('Europe/London')
result = pd.concat([first, second])
- self.assertEqual(result[0].dtype, 'datetime64[ns, Europe/London]')
+ assert result[0].dtype == 'datetime64[ns, Europe/London]'
def test_concat_tz_series_with_datetimelike(self):
- # GH 12620
- # tz and timedelta
+ # see gh-12620: tz and timedelta
x = [pd.Timestamp('2011-01-01', tz='US/Eastern'),
pd.Timestamp('2011-02-01', tz='US/Eastern')]
y = [pd.Timedelta('1 day'), pd.Timedelta('2 day')]
@@ -1786,16 +1779,18 @@ def test_concat_tz_series_with_datetimelike(self):
tm.assert_series_equal(result, pd.Series(x + y, dtype='object'))
def test_concat_tz_series_tzlocal(self):
- # GH 13583
+ # see gh-13583
tm._skip_if_no_dateutil()
import dateutil
+
x = [pd.Timestamp('2011-01-01', tz=dateutil.tz.tzlocal()),
pd.Timestamp('2011-02-01', tz=dateutil.tz.tzlocal())]
y = [pd.Timestamp('2012-01-01', tz=dateutil.tz.tzlocal()),
pd.Timestamp('2012-02-01', tz=dateutil.tz.tzlocal())]
+
result = concat([pd.Series(x), pd.Series(y)], ignore_index=True)
tm.assert_series_equal(result, pd.Series(x + y))
- self.assertEqual(result.dtype, 'datetime64[ns, tzlocal()]')
+ assert result.dtype == 'datetime64[ns, tzlocal()]'
def test_concat_period_series(self):
x = Series(pd.PeriodIndex(['2015-11-01', '2015-12-01'], freq='D'))
@@ -1803,7 +1798,7 @@ def test_concat_period_series(self):
expected = Series([x[0], x[1], y[0], y[1]], dtype='object')
result = concat([x, y], ignore_index=True)
tm.assert_series_equal(result, expected)
- self.assertEqual(result.dtype, 'object')
+ assert result.dtype == 'object'
# different freq
x = Series(pd.PeriodIndex(['2015-11-01', '2015-12-01'], freq='D'))
@@ -1811,14 +1806,14 @@ def test_concat_period_series(self):
expected = Series([x[0], x[1], y[0], y[1]], dtype='object')
result = concat([x, y], ignore_index=True)
tm.assert_series_equal(result, expected)
- self.assertEqual(result.dtype, 'object')
+ assert result.dtype == 'object'
x = Series(pd.PeriodIndex(['2015-11-01', '2015-12-01'], freq='D'))
y = Series(pd.PeriodIndex(['2015-11-01', '2015-12-01'], freq='M'))
expected = Series([x[0], x[1], y[0], y[1]], dtype='object')
result = concat([x, y], ignore_index=True)
tm.assert_series_equal(result, expected)
- self.assertEqual(result.dtype, 'object')
+ assert result.dtype == 'object'
# non-period
x = Series(pd.PeriodIndex(['2015-11-01', '2015-12-01'], freq='D'))
@@ -1826,14 +1821,14 @@ def test_concat_period_series(self):
expected = Series([x[0], x[1], y[0], y[1]], dtype='object')
result = concat([x, y], ignore_index=True)
tm.assert_series_equal(result, expected)
- self.assertEqual(result.dtype, 'object')
+ assert result.dtype == 'object'
x = Series(pd.PeriodIndex(['2015-11-01', '2015-12-01'], freq='D'))
y = Series(['A', 'B'])
expected = Series([x[0], x[1], y[0], y[1]], dtype='object')
result = concat([x, y], ignore_index=True)
tm.assert_series_equal(result, expected)
- self.assertEqual(result.dtype, 'object')
+ assert result.dtype == 'object'
def test_concat_empty_series(self):
# GH 11082
diff --git a/pandas/tests/reshape/test_hashing.py b/pandas/tests/reshape/test_hashing.py
index f19f6b1374978..85807da33e38d 100644
--- a/pandas/tests/reshape/test_hashing.py
+++ b/pandas/tests/reshape/test_hashing.py
@@ -76,7 +76,7 @@ def test_hash_tuples(self):
tm.assert_numpy_array_equal(result, expected)
result = hash_tuples(tups[0])
- self.assertEqual(result, expected[0])
+ assert result == expected[0]
def test_hash_tuples_err(self):
diff --git a/pandas/tests/reshape/test_join.py b/pandas/tests/reshape/test_join.py
index 1da187788e99d..cda343175fd0a 100644
--- a/pandas/tests/reshape/test_join.py
+++ b/pandas/tests/reshape/test_join.py
@@ -257,7 +257,7 @@ def test_join_with_len0(self):
merged2 = self.target.join(self.source.reindex([]), on='C',
how='inner')
tm.assert_index_equal(merged2.columns, merged.columns)
- self.assertEqual(len(merged2), 0)
+ assert len(merged2) == 0
def test_join_on_inner(self):
df = DataFrame({'key': ['a', 'a', 'd', 'b', 'b', 'c']})
@@ -301,8 +301,8 @@ def test_join_index_mixed(self):
df1 = DataFrame({'A': 1., 'B': 2, 'C': 'foo', 'D': True},
index=np.arange(10),
columns=['A', 'B', 'C', 'D'])
- self.assertEqual(df1['B'].dtype, np.int64)
- self.assertEqual(df1['D'].dtype, np.bool_)
+ assert df1['B'].dtype == np.int64
+ assert df1['D'].dtype == np.bool_
df2 = DataFrame({'A': 1., 'B': 2, 'C': 'foo', 'D': True},
index=np.arange(0, 10, 2),
@@ -374,7 +374,7 @@ def test_join_multiindex(self):
expected = df1.reindex(ex_index).join(df2.reindex(ex_index))
expected.index.names = index1.names
assert_frame_equal(joined, expected)
- self.assertEqual(joined.index.names, index1.names)
+ assert joined.index.names == index1.names
df1 = df1.sort_index(level=1)
df2 = df2.sort_index(level=1)
@@ -385,7 +385,7 @@ def test_join_multiindex(self):
expected.index.names = index1.names
assert_frame_equal(joined, expected)
- self.assertEqual(joined.index.names, index1.names)
+ assert joined.index.names == index1.names
def test_join_inner_multiindex(self):
key1 = ['bar', 'bar', 'bar', 'foo', 'foo', 'baz', 'baz', 'qux',
@@ -445,9 +445,9 @@ def test_join_float64_float32(self):
a = DataFrame(randn(10, 2), columns=['a', 'b'], dtype=np.float64)
b = DataFrame(randn(10, 1), columns=['c'], dtype=np.float32)
joined = a.join(b)
- self.assertEqual(joined.dtypes['a'], 'float64')
- self.assertEqual(joined.dtypes['b'], 'float64')
- self.assertEqual(joined.dtypes['c'], 'float32')
+ assert joined.dtypes['a'] == 'float64'
+ assert joined.dtypes['b'] == 'float64'
+ assert joined.dtypes['c'] == 'float32'
a = np.random.randint(0, 5, 100).astype('int64')
b = np.random.random(100).astype('float64')
@@ -456,10 +456,10 @@ def test_join_float64_float32(self):
xpdf = DataFrame({'a': a, 'b': b, 'c': c})
s = DataFrame(np.random.random(5).astype('float32'), columns=['md'])
rs = df.merge(s, left_on='a', right_index=True)
- self.assertEqual(rs.dtypes['a'], 'int64')
- self.assertEqual(rs.dtypes['b'], 'float64')
- self.assertEqual(rs.dtypes['c'], 'float32')
- self.assertEqual(rs.dtypes['md'], 'float32')
+ assert rs.dtypes['a'] == 'int64'
+ assert rs.dtypes['b'] == 'float64'
+ assert rs.dtypes['c'] == 'float32'
+ assert rs.dtypes['md'] == 'float32'
xp = xpdf.merge(s, left_on='a', right_index=True)
assert_frame_equal(rs, xp)
diff --git a/pandas/tests/reshape/test_merge.py b/pandas/tests/reshape/test_merge.py
index 86580e5a84d92..db0e4631381f1 100644
--- a/pandas/tests/reshape/test_merge.py
+++ b/pandas/tests/reshape/test_merge.py
@@ -127,7 +127,7 @@ def test_index_and_on_parameters_confusion(self):
def test_merge_overlap(self):
merged = merge(self.left, self.left, on='key')
exp_len = (self.left['key'].value_counts() ** 2).sum()
- self.assertEqual(len(merged), exp_len)
+ assert len(merged) == exp_len
assert 'v1_x' in merged
assert 'v1_y' in merged
@@ -202,7 +202,7 @@ def test_merge_join_key_dtype_cast(self):
df1 = DataFrame({'key': [1], 'v1': [10]})
df2 = DataFrame({'key': [2], 'v1': [20]})
df = merge(df1, df2, how='outer')
- self.assertEqual(df['key'].dtype, 'int64')
+ assert df['key'].dtype == 'int64'
df1 = DataFrame({'key': [True], 'v1': [1]})
df2 = DataFrame({'key': [False], 'v1': [0]})
@@ -210,14 +210,14 @@ def test_merge_join_key_dtype_cast(self):
# GH13169
# this really should be bool
- self.assertEqual(df['key'].dtype, 'object')
+ assert df['key'].dtype == 'object'
df1 = DataFrame({'val': [1]})
df2 = DataFrame({'val': [2]})
lkey = np.array([1])
rkey = np.array([2])
df = merge(df1, df2, left_on=lkey, right_on=rkey, how='outer')
- self.assertEqual(df['key_0'].dtype, 'int64')
+ assert df['key_0'].dtype == 'int64'
def test_handle_join_key_pass_array(self):
left = DataFrame({'key': [1, 1, 2, 2, 3],
@@ -499,7 +499,7 @@ def test_other_datetime_unit(self):
df2 = s.astype(dtype).to_frame('days')
            # coerces to datetime64[ns], thus should not be affected
- self.assertEqual(df2['days'].dtype, 'datetime64[ns]')
+ assert df2['days'].dtype == 'datetime64[ns]'
result = df1.merge(df2, left_on='entity_id', right_index=True)
@@ -519,7 +519,7 @@ def test_other_timedelta_unit(self):
'timedelta64[ns]']:
df2 = s.astype(dtype).to_frame('days')
- self.assertEqual(df2['days'].dtype, dtype)
+ assert df2['days'].dtype == dtype
result = df1.merge(df2, left_on='entity_id', right_index=True)
@@ -582,8 +582,8 @@ def test_merge_on_datetime64tz(self):
'key': [1, 2, 3]})
result = pd.merge(left, right, on='key', how='outer')
assert_frame_equal(result, expected)
- self.assertEqual(result['value_x'].dtype, 'datetime64[ns, US/Eastern]')
- self.assertEqual(result['value_y'].dtype, 'datetime64[ns, US/Eastern]')
+ assert result['value_x'].dtype == 'datetime64[ns, US/Eastern]'
+ assert result['value_y'].dtype == 'datetime64[ns, US/Eastern]'
def test_merge_on_periods(self):
left = pd.DataFrame({'key': pd.period_range('20151010', periods=2,
@@ -614,8 +614,8 @@ def test_merge_on_periods(self):
'key': [1, 2, 3]})
result = pd.merge(left, right, on='key', how='outer')
assert_frame_equal(result, expected)
- self.assertEqual(result['value_x'].dtype, 'object')
- self.assertEqual(result['value_y'].dtype, 'object')
+ assert result['value_x'].dtype == 'object'
+ assert result['value_y'].dtype == 'object'
def test_indicator(self):
# PR #10054. xref #7412 and closes #8790.
diff --git a/pandas/tests/reshape/test_pivot.py b/pandas/tests/reshape/test_pivot.py
index 3b3b4fe247b72..df679966e0002 100644
--- a/pandas/tests/reshape/test_pivot.py
+++ b/pandas/tests/reshape/test_pivot.py
@@ -45,14 +45,14 @@ def test_pivot_table(self):
pivot_table(self.data, values='D', index=index)
if len(index) > 1:
- self.assertEqual(table.index.names, tuple(index))
+ assert table.index.names == tuple(index)
else:
- self.assertEqual(table.index.name, index[0])
+ assert table.index.name == index[0]
if len(columns) > 1:
- self.assertEqual(table.columns.names, columns)
+ assert table.columns.names == columns
else:
- self.assertEqual(table.columns.name, columns[0])
+ assert table.columns.name == columns[0]
expected = self.data.groupby(
index + [columns])['D'].agg(np.mean).unstack()
@@ -148,7 +148,7 @@ def test_pivot_dtypes(self):
# can convert dtypes
f = DataFrame({'a': ['cat', 'bat', 'cat', 'bat'], 'v': [
1, 2, 3, 4], 'i': ['a', 'b', 'a', 'b']})
- self.assertEqual(f.dtypes['v'], 'int64')
+ assert f.dtypes['v'] == 'int64'
z = pivot_table(f, values='v', index=['a'], columns=[
'i'], fill_value=0, aggfunc=np.sum)
@@ -159,7 +159,7 @@ def test_pivot_dtypes(self):
# cannot convert dtypes
f = DataFrame({'a': ['cat', 'bat', 'cat', 'bat'], 'v': [
1.5, 2.5, 3.5, 4.5], 'i': ['a', 'b', 'a', 'b']})
- self.assertEqual(f.dtypes['v'], 'float64')
+ assert f.dtypes['v'] == 'float64'
z = pivot_table(f, values='v', index=['a'], columns=[
'i'], fill_value=0, aggfunc=np.mean)
@@ -249,10 +249,10 @@ def test_pivot_index_with_nan(self):
df.loc[1, 'b'] = df.loc[4, 'b'] = nan
pv = df.pivot('a', 'b', 'c')
- self.assertEqual(pv.notnull().values.sum(), len(df))
+ assert pv.notnull().values.sum() == len(df)
for _, row in df.iterrows():
- self.assertEqual(pv.loc[row['a'], row['b']], row['c'])
+ assert pv.loc[row['a'], row['b']] == row['c']
tm.assert_frame_equal(df.pivot('b', 'a', 'c'), pv.T)
@@ -341,7 +341,7 @@ def _check_output(result, values_col, index=['A', 'B'],
expected_col_margins = self.data.groupby(index)[values_col].mean()
tm.assert_series_equal(col_margins, expected_col_margins,
check_names=False)
- self.assertEqual(col_margins.name, margins_col)
+ assert col_margins.name == margins_col
result = result.sort_index()
index_margins = result.loc[(margins_col, '')].iloc[:-1]
@@ -349,11 +349,11 @@ def _check_output(result, values_col, index=['A', 'B'],
expected_ix_margins = self.data.groupby(columns)[values_col].mean()
tm.assert_series_equal(index_margins, expected_ix_margins,
check_names=False)
- self.assertEqual(index_margins.name, (margins_col, ''))
+ assert index_margins.name == (margins_col, '')
grand_total_margins = result.loc[(margins_col, ''), margins_col]
expected_total_margins = self.data[values_col].mean()
- self.assertEqual(grand_total_margins, expected_total_margins)
+ assert grand_total_margins == expected_total_margins
# column specified
result = self.data.pivot_table(values='D', index=['A', 'B'],
@@ -382,7 +382,7 @@ def _check_output(result, values_col, index=['A', 'B'],
aggfunc=np.mean)
for value_col in table.columns:
totals = table.loc[('All', ''), value_col]
- self.assertEqual(totals, self.data[value_col].mean())
+ assert totals == self.data[value_col].mean()
# no rows
rtable = self.data.pivot_table(columns=['AA', 'BB'], margins=True,
@@ -393,7 +393,7 @@ def _check_output(result, values_col, index=['A', 'B'],
aggfunc='mean')
for item in ['DD', 'EE', 'FF']:
totals = table.loc[('All', ''), item]
- self.assertEqual(totals, self.data[item].mean())
+ assert totals == self.data[item].mean()
# issue number #8349: pivot_table with margins and dictionary aggfunc
data = [
@@ -528,21 +528,21 @@ def test_margins_no_values_no_cols(self):
result = self.data[['A', 'B']].pivot_table(
index=['A', 'B'], aggfunc=len, margins=True)
result_list = result.tolist()
- self.assertEqual(sum(result_list[:-1]), result_list[-1])
+ assert sum(result_list[:-1]) == result_list[-1]
def test_margins_no_values_two_rows(self):
# Regression test on pivot table: no values passed but rows are a
# multi-index
result = self.data[['A', 'B', 'C']].pivot_table(
index=['A', 'B'], columns='C', aggfunc=len, margins=True)
- self.assertEqual(result.All.tolist(), [3.0, 1.0, 4.0, 3.0, 11.0])
+ assert result.All.tolist() == [3.0, 1.0, 4.0, 3.0, 11.0]
def test_margins_no_values_one_row_one_col(self):
# Regression test on pivot table: no values passed but row and col
# defined
result = self.data[['A', 'B']].pivot_table(
index='A', columns='B', aggfunc=len, margins=True)
- self.assertEqual(result.All.tolist(), [4.0, 7.0, 11.0])
+ assert result.All.tolist() == [4.0, 7.0, 11.0]
def test_margins_no_values_two_row_two_cols(self):
# Regression test on pivot table: no values passed but rows and cols
@@ -551,10 +551,10 @@ def test_margins_no_values_two_row_two_cols(self):
'e', 'f', 'g', 'h', 'i', 'j', 'k']
result = self.data[['A', 'B', 'C', 'D']].pivot_table(
index=['A', 'B'], columns=['C', 'D'], aggfunc=len, margins=True)
- self.assertEqual(result.All.tolist(), [3.0, 1.0, 4.0, 3.0, 11.0])
+ assert result.All.tolist() == [3.0, 1.0, 4.0, 3.0, 11.0]
def test_pivot_table_with_margins_set_margin_name(self):
- # GH 3335
+ # see gh-3335
for margin_name in ['foo', 'one', 666, None, ['a', 'b']]:
with pytest.raises(ValueError):
# multi-index index
@@ -1037,8 +1037,8 @@ def test_crosstab_ndarray(self):
# assign arbitrary names
result = crosstab(self.df['A'].values, self.df['C'].values)
- self.assertEqual(result.index.name, 'row_0')
- self.assertEqual(result.columns.name, 'col_0')
+ assert result.index.name == 'row_0'
+ assert result.columns.name == 'col_0'
def test_crosstab_margins(self):
a = np.random.randint(0, 7, size=100)
@@ -1050,8 +1050,8 @@ def test_crosstab_margins(self):
result = crosstab(a, [b, c], rownames=['a'], colnames=('b', 'c'),
margins=True)
- self.assertEqual(result.index.names, ('a',))
- self.assertEqual(result.columns.names, ['b', 'c'])
+ assert result.index.names == ('a',)
+ assert result.columns.names == ['b', 'c']
all_cols = result['All', '']
exp_cols = df.groupby(['a']).size().astype('i8')
@@ -1420,7 +1420,7 @@ def test_daily(self):
result = annual[i].dropna()
tm.assert_series_equal(result, subset, check_names=False)
- self.assertEqual(result.name, i)
+ assert result.name == i
# check leap days
leaps = ts[(ts.index.month == 2) & (ts.index.day == 29)]
@@ -1453,7 +1453,7 @@ def test_hourly(self):
result = annual[i].dropna()
tm.assert_series_equal(result, subset, check_names=False)
- self.assertEqual(result.name, i)
+ assert result.name == i
leaps = ts_hourly[(ts_hourly.index.month == 2) & (
ts_hourly.index.day == 29) & (ts_hourly.index.hour == 0)]
@@ -1478,7 +1478,7 @@ def test_monthly(self):
subset.index = [x.year for x in subset.index]
result = annual[i].dropna()
tm.assert_series_equal(result, subset, check_names=False)
- self.assertEqual(result.name, i)
+ assert result.name == i
def test_period_monthly(self):
pass
diff --git a/pandas/tests/reshape/test_reshape.py b/pandas/tests/reshape/test_reshape.py
index 87f16cfaf31ec..87cd0637f1125 100644
--- a/pandas/tests/reshape/test_reshape.py
+++ b/pandas/tests/reshape/test_reshape.py
@@ -35,7 +35,7 @@ def setUp(self):
def test_top_level_method(self):
result = melt(self.df)
- self.assertEqual(result.columns.tolist(), ['variable', 'value'])
+ assert result.columns.tolist() == ['variable', 'value']
def test_method_signatures(self):
tm.assert_frame_equal(self.df.melt(),
@@ -58,19 +58,17 @@ def test_method_signatures(self):
def test_default_col_names(self):
result = self.df.melt()
- self.assertEqual(result.columns.tolist(), ['variable', 'value'])
+ assert result.columns.tolist() == ['variable', 'value']
result1 = self.df.melt(id_vars=['id1'])
- self.assertEqual(result1.columns.tolist(), ['id1', 'variable', 'value'
- ])
+ assert result1.columns.tolist() == ['id1', 'variable', 'value']
result2 = self.df.melt(id_vars=['id1', 'id2'])
- self.assertEqual(result2.columns.tolist(), ['id1', 'id2', 'variable',
- 'value'])
+ assert result2.columns.tolist() == ['id1', 'id2', 'variable', 'value']
def test_value_vars(self):
result3 = self.df.melt(id_vars=['id1', 'id2'], value_vars='A')
- self.assertEqual(len(result3), 10)
+ assert len(result3) == 10
result4 = self.df.melt(id_vars=['id1', 'id2'], value_vars=['A', 'B'])
expected4 = DataFrame({'id1': self.df['id1'].tolist() * 2,
@@ -122,19 +120,17 @@ def test_tuple_vars_fail_with_multiindex(self):
def test_custom_var_name(self):
result5 = self.df.melt(var_name=self.var_name)
- self.assertEqual(result5.columns.tolist(), ['var', 'value'])
+ assert result5.columns.tolist() == ['var', 'value']
result6 = self.df.melt(id_vars=['id1'], var_name=self.var_name)
- self.assertEqual(result6.columns.tolist(), ['id1', 'var', 'value'])
+ assert result6.columns.tolist() == ['id1', 'var', 'value']
result7 = self.df.melt(id_vars=['id1', 'id2'], var_name=self.var_name)
- self.assertEqual(result7.columns.tolist(), ['id1', 'id2', 'var',
- 'value'])
+ assert result7.columns.tolist() == ['id1', 'id2', 'var', 'value']
result8 = self.df.melt(id_vars=['id1', 'id2'], value_vars='A',
var_name=self.var_name)
- self.assertEqual(result8.columns.tolist(), ['id1', 'id2', 'var',
- 'value'])
+ assert result8.columns.tolist() == ['id1', 'id2', 'var', 'value']
result9 = self.df.melt(id_vars=['id1', 'id2'], value_vars=['A', 'B'],
var_name=self.var_name)
@@ -148,20 +144,18 @@ def test_custom_var_name(self):
def test_custom_value_name(self):
result10 = self.df.melt(value_name=self.value_name)
- self.assertEqual(result10.columns.tolist(), ['variable', 'val'])
+ assert result10.columns.tolist() == ['variable', 'val']
result11 = self.df.melt(id_vars=['id1'], value_name=self.value_name)
- self.assertEqual(result11.columns.tolist(), ['id1', 'variable', 'val'])
+ assert result11.columns.tolist() == ['id1', 'variable', 'val']
result12 = self.df.melt(id_vars=['id1', 'id2'],
value_name=self.value_name)
- self.assertEqual(result12.columns.tolist(), ['id1', 'id2', 'variable',
- 'val'])
+ assert result12.columns.tolist() == ['id1', 'id2', 'variable', 'val']
result13 = self.df.melt(id_vars=['id1', 'id2'], value_vars='A',
value_name=self.value_name)
- self.assertEqual(result13.columns.tolist(), ['id1', 'id2', 'variable',
- 'val'])
+ assert result13.columns.tolist() == ['id1', 'id2', 'variable', 'val']
result14 = self.df.melt(id_vars=['id1', 'id2'], value_vars=['A', 'B'],
value_name=self.value_name)
@@ -178,23 +172,21 @@ def test_custom_var_and_value_name(self):
result15 = self.df.melt(var_name=self.var_name,
value_name=self.value_name)
- self.assertEqual(result15.columns.tolist(), ['var', 'val'])
+ assert result15.columns.tolist() == ['var', 'val']
result16 = self.df.melt(id_vars=['id1'], var_name=self.var_name,
value_name=self.value_name)
- self.assertEqual(result16.columns.tolist(), ['id1', 'var', 'val'])
+ assert result16.columns.tolist() == ['id1', 'var', 'val']
result17 = self.df.melt(id_vars=['id1', 'id2'],
var_name=self.var_name,
value_name=self.value_name)
- self.assertEqual(result17.columns.tolist(), ['id1', 'id2', 'var', 'val'
- ])
+ assert result17.columns.tolist() == ['id1', 'id2', 'var', 'val']
result18 = self.df.melt(id_vars=['id1', 'id2'], value_vars='A',
var_name=self.var_name,
value_name=self.value_name)
- self.assertEqual(result18.columns.tolist(), ['id1', 'id2', 'var', 'val'
- ])
+ assert result18.columns.tolist() == ['id1', 'id2', 'var', 'val']
result19 = self.df.melt(id_vars=['id1', 'id2'], value_vars=['A', 'B'],
var_name=self.var_name,
@@ -211,17 +203,17 @@ def test_custom_var_and_value_name(self):
df20 = self.df.copy()
df20.columns.name = 'foo'
result20 = df20.melt()
- self.assertEqual(result20.columns.tolist(), ['foo', 'value'])
+ assert result20.columns.tolist() == ['foo', 'value']
def test_col_level(self):
res1 = self.df1.melt(col_level=0)
res2 = self.df1.melt(col_level='CAP')
- self.assertEqual(res1.columns.tolist(), ['CAP', 'value'])
- self.assertEqual(res2.columns.tolist(), ['CAP', 'value'])
+ assert res1.columns.tolist() == ['CAP', 'value']
+ assert res2.columns.tolist() == ['CAP', 'value']
def test_multiindex(self):
res = self.df1.melt()
- self.assertEqual(res.columns.tolist(), ['CAP', 'low', 'value'])
+ assert res.columns.tolist() == ['CAP', 'low', 'value']
class TestGetDummies(tm.TestCase):
@@ -298,13 +290,13 @@ def test_just_na(self):
res_series_index = get_dummies(just_na_series_index,
sparse=self.sparse)
- self.assertEqual(res_list.empty, True)
- self.assertEqual(res_series.empty, True)
- self.assertEqual(res_series_index.empty, True)
+ assert res_list.empty
+ assert res_series.empty
+ assert res_series_index.empty
- self.assertEqual(res_list.index.tolist(), [0])
- self.assertEqual(res_series.index.tolist(), [0])
- self.assertEqual(res_series_index.index.tolist(), ['A'])
+ assert res_list.index.tolist() == [0]
+ assert res_series.index.tolist() == [0]
+ assert res_series_index.index.tolist() == ['A']
def test_include_na(self):
s = ['a', 'b', np.nan]
@@ -784,7 +776,7 @@ def test_stubs(self):
# TODO: unused?
df_long = pd.wide_to_long(df, stubs, i='id', j='age') # noqa
- self.assertEqual(stubs, ['inc', 'edu'])
+ assert stubs == ['inc', 'edu']
def test_separating_character(self):
# GH14779
diff --git a/pandas/tests/reshape/test_tile.py b/pandas/tests/reshape/test_tile.py
index 923615c93d98b..2291030a2735c 100644
--- a/pandas/tests/reshape/test_tile.py
+++ b/pandas/tests/reshape/test_tile.py
@@ -122,7 +122,7 @@ def test_cut_pass_series_name_to_factor(self):
s = Series(np.random.randn(100), name='foo')
factor = cut(s, 4)
- self.assertEqual(factor.name, 'foo')
+ assert factor.name == 'foo'
def test_label_precision(self):
arr = np.arange(0, 0.73, 0.01)
@@ -158,16 +158,16 @@ def test_inf_handling(self):
ex_uniques = IntervalIndex.from_breaks(bins)
tm.assert_index_equal(result.categories, ex_uniques)
- self.assertEqual(result[5], Interval(4, np.inf))
- self.assertEqual(result[0], Interval(-np.inf, 2))
- self.assertEqual(result_ser[5], Interval(4, np.inf))
- self.assertEqual(result_ser[0], Interval(-np.inf, 2))
+ assert result[5] == Interval(4, np.inf)
+ assert result[0] == Interval(-np.inf, 2)
+ assert result_ser[5] == Interval(4, np.inf)
+ assert result_ser[0] == Interval(-np.inf, 2)
def test_qcut(self):
arr = np.random.randn(1000)
- # we store the bins as Index that have been rounded
- # to comparisions are a bit tricky
+ # We store the bins as an Index that has been rounded,
+ # so comparisons are a bit tricky.
labels, bins = qcut(arr, 4, retbins=True)
ex_bins = quantile(arr, [0, .25, .5, .75, 1.])
result = labels.categories.left.values
@@ -182,7 +182,7 @@ def test_qcut_bounds(self):
arr = np.random.randn(1000)
factor = qcut(arr, 10, labels=False)
- self.assertEqual(len(np.unique(factor)), 10)
+ assert len(np.unique(factor)) == 10
def test_qcut_specify_quantiles(self):
arr = np.random.randn(100)
@@ -253,14 +253,14 @@ def test_round_frac(self):
# #1979, negative numbers
result = tmod._round_frac(-117.9998, precision=3)
- self.assertEqual(result, -118)
+ assert result == -118
result = tmod._round_frac(117.9998, precision=3)
- self.assertEqual(result, 118)
+ assert result == 118
result = tmod._round_frac(117.9998, precision=2)
- self.assertEqual(result, 118)
+ assert result == 118
result = tmod._round_frac(0.000123456, precision=2)
- self.assertEqual(result, 0.00012)
+ assert result == 0.00012
def test_qcut_binning_issues(self):
# #1978, 1979
diff --git a/pandas/tests/scalar/test_interval.py b/pandas/tests/scalar/test_interval.py
index d77deabee58d4..079c41657bec6 100644
--- a/pandas/tests/scalar/test_interval.py
+++ b/pandas/tests/scalar/test_interval.py
@@ -10,20 +10,18 @@ def setUp(self):
self.interval = Interval(0, 1)
def test_properties(self):
- self.assertEqual(self.interval.closed, 'right')
- self.assertEqual(self.interval.left, 0)
- self.assertEqual(self.interval.right, 1)
- self.assertEqual(self.interval.mid, 0.5)
+ assert self.interval.closed == 'right'
+ assert self.interval.left == 0
+ assert self.interval.right == 1
+ assert self.interval.mid == 0.5
def test_repr(self):
- self.assertEqual(repr(self.interval),
- "Interval(0, 1, closed='right')")
- self.assertEqual(str(self.interval), "(0, 1]")
+ assert repr(self.interval) == "Interval(0, 1, closed='right')"
+ assert str(self.interval) == "(0, 1]"
interval_left = Interval(0, 1, closed='left')
- self.assertEqual(repr(interval_left),
- "Interval(0, 1, closed='left')")
- self.assertEqual(str(interval_left), "[0, 1)")
+ assert repr(interval_left) == "Interval(0, 1, closed='left')"
+ assert str(interval_left) == "[0, 1)"
def test_contains(self):
assert 0.5 in self.interval
@@ -41,9 +39,9 @@ def test_contains(self):
assert 1 not in interval
def test_equal(self):
- self.assertEqual(Interval(0, 1), Interval(0, 1, closed='right'))
- self.assertNotEqual(Interval(0, 1), Interval(0, 1, closed='left'))
- self.assertNotEqual(Interval(0, 1), 0)
+ assert Interval(0, 1) == Interval(0, 1, closed='right')
+ assert Interval(0, 1) != Interval(0, 1, closed='left')
+ assert Interval(0, 1) != 0
def test_comparison(self):
with tm.assert_raises_regex(TypeError, 'unorderable types'):
@@ -63,15 +61,15 @@ def test_hash(self):
def test_math_add(self):
expected = Interval(1, 2)
actual = self.interval + 1
- self.assertEqual(expected, actual)
+ assert expected == actual
expected = Interval(1, 2)
actual = 1 + self.interval
- self.assertEqual(expected, actual)
+ assert expected == actual
actual = self.interval
actual += 1
- self.assertEqual(expected, actual)
+ assert expected == actual
with pytest.raises(TypeError):
self.interval + Interval(1, 2)
@@ -82,11 +80,11 @@ def test_math_add(self):
def test_math_sub(self):
expected = Interval(-1, 0)
actual = self.interval - 1
- self.assertEqual(expected, actual)
+ assert expected == actual
actual = self.interval
actual -= 1
- self.assertEqual(expected, actual)
+ assert expected == actual
with pytest.raises(TypeError):
self.interval - Interval(1, 2)
@@ -97,15 +95,15 @@ def test_math_sub(self):
def test_math_mult(self):
expected = Interval(0, 2)
actual = self.interval * 2
- self.assertEqual(expected, actual)
+ assert expected == actual
expected = Interval(0, 2)
actual = 2 * self.interval
- self.assertEqual(expected, actual)
+ assert expected == actual
actual = self.interval
actual *= 2
- self.assertEqual(expected, actual)
+ assert expected == actual
with pytest.raises(TypeError):
self.interval * Interval(1, 2)
@@ -116,11 +114,11 @@ def test_math_mult(self):
def test_math_div(self):
expected = Interval(0, 0.5)
actual = self.interval / 2.0
- self.assertEqual(expected, actual)
+ assert expected == actual
actual = self.interval
actual /= 2.0
- self.assertEqual(expected, actual)
+ assert expected == actual
with pytest.raises(TypeError):
self.interval / Interval(1, 2)
diff --git a/pandas/tests/scalar/test_period.py b/pandas/tests/scalar/test_period.py
index fc0921451c133..00a1fa1b507b6 100644
--- a/pandas/tests/scalar/test_period.py
+++ b/pandas/tests/scalar/test_period.py
@@ -35,18 +35,18 @@ def test_is_leap_year(self):
def test_quarterly_negative_ordinals(self):
p = Period(ordinal=-1, freq='Q-DEC')
- self.assertEqual(p.year, 1969)
- self.assertEqual(p.quarter, 4)
+ assert p.year == 1969
+ assert p.quarter == 4
assert isinstance(p, Period)
p = Period(ordinal=-2, freq='Q-DEC')
- self.assertEqual(p.year, 1969)
- self.assertEqual(p.quarter, 3)
+ assert p.year == 1969
+ assert p.quarter == 3
assert isinstance(p, Period)
p = Period(ordinal=-2, freq='M')
- self.assertEqual(p.year, 1969)
- self.assertEqual(p.month, 11)
+ assert p.year == 1969
+ assert p.month == 11
assert isinstance(p, Period)
def test_period_cons_quarterly(self):
@@ -57,11 +57,11 @@ def test_period_cons_quarterly(self):
assert '1989Q3' in str(exp)
stamp = exp.to_timestamp('D', how='end')
p = Period(stamp, freq=freq)
- self.assertEqual(p, exp)
+ assert p == exp
stamp = exp.to_timestamp('3D', how='end')
p = Period(stamp, freq=freq)
- self.assertEqual(p, exp)
+ assert p == exp
def test_period_cons_annual(self):
# bugs in scikits.timeseries
@@ -70,7 +70,7 @@ def test_period_cons_annual(self):
exp = Period('1989', freq=freq)
stamp = exp.to_timestamp('D', how='end') + timedelta(days=30)
p = Period(stamp, freq=freq)
- self.assertEqual(p, exp + 1)
+ assert p == exp + 1
assert isinstance(p, Period)
def test_period_cons_weekly(self):
@@ -81,13 +81,13 @@ def test_period_cons_weekly(self):
result = Period(daystr, freq=freq)
expected = Period(daystr, freq='D').asfreq(freq)
- self.assertEqual(result, expected)
+ assert result == expected
assert isinstance(result, Period)
def test_period_from_ordinal(self):
p = pd.Period('2011-01', freq='M')
res = pd.Period._from_ordinal(p.ordinal, freq='M')
- self.assertEqual(p, res)
+ assert p == res
assert isinstance(res, Period)
def test_period_cons_nat(self):
@@ -115,23 +115,23 @@ def test_period_cons_nat(self):
def test_period_cons_mult(self):
p1 = Period('2011-01', freq='3M')
p2 = Period('2011-01', freq='M')
- self.assertEqual(p1.ordinal, p2.ordinal)
+ assert p1.ordinal == p2.ordinal
- self.assertEqual(p1.freq, offsets.MonthEnd(3))
- self.assertEqual(p1.freqstr, '3M')
+ assert p1.freq == offsets.MonthEnd(3)
+ assert p1.freqstr == '3M'
- self.assertEqual(p2.freq, offsets.MonthEnd())
- self.assertEqual(p2.freqstr, 'M')
+ assert p2.freq == offsets.MonthEnd()
+ assert p2.freqstr == 'M'
result = p1 + 1
- self.assertEqual(result.ordinal, (p2 + 3).ordinal)
- self.assertEqual(result.freq, p1.freq)
- self.assertEqual(result.freqstr, '3M')
+ assert result.ordinal == (p2 + 3).ordinal
+ assert result.freq == p1.freq
+ assert result.freqstr == '3M'
result = p1 - 1
- self.assertEqual(result.ordinal, (p2 - 3).ordinal)
- self.assertEqual(result.freq, p1.freq)
- self.assertEqual(result.freqstr, '3M')
+ assert result.ordinal == (p2 - 3).ordinal
+ assert result.freq == p1.freq
+ assert result.freqstr == '3M'
msg = ('Frequency must be positive, because it'
' represents span: -3M')
@@ -151,37 +151,37 @@ def test_period_cons_combined(self):
Period(ordinal=1, freq='H'))]
for p1, p2, p3 in p:
- self.assertEqual(p1.ordinal, p3.ordinal)
- self.assertEqual(p2.ordinal, p3.ordinal)
+ assert p1.ordinal == p3.ordinal
+ assert p2.ordinal == p3.ordinal
- self.assertEqual(p1.freq, offsets.Hour(25))
- self.assertEqual(p1.freqstr, '25H')
+ assert p1.freq == offsets.Hour(25)
+ assert p1.freqstr == '25H'
- self.assertEqual(p2.freq, offsets.Hour(25))
- self.assertEqual(p2.freqstr, '25H')
+ assert p2.freq == offsets.Hour(25)
+ assert p2.freqstr == '25H'
- self.assertEqual(p3.freq, offsets.Hour())
- self.assertEqual(p3.freqstr, 'H')
+ assert p3.freq == offsets.Hour()
+ assert p3.freqstr == 'H'
result = p1 + 1
- self.assertEqual(result.ordinal, (p3 + 25).ordinal)
- self.assertEqual(result.freq, p1.freq)
- self.assertEqual(result.freqstr, '25H')
+ assert result.ordinal == (p3 + 25).ordinal
+ assert result.freq == p1.freq
+ assert result.freqstr == '25H'
result = p2 + 1
- self.assertEqual(result.ordinal, (p3 + 25).ordinal)
- self.assertEqual(result.freq, p2.freq)
- self.assertEqual(result.freqstr, '25H')
+ assert result.ordinal == (p3 + 25).ordinal
+ assert result.freq == p2.freq
+ assert result.freqstr == '25H'
result = p1 - 1
- self.assertEqual(result.ordinal, (p3 - 25).ordinal)
- self.assertEqual(result.freq, p1.freq)
- self.assertEqual(result.freqstr, '25H')
+ assert result.ordinal == (p3 - 25).ordinal
+ assert result.freq == p1.freq
+ assert result.freqstr == '25H'
result = p2 - 1
- self.assertEqual(result.ordinal, (p3 - 25).ordinal)
- self.assertEqual(result.freq, p2.freq)
- self.assertEqual(result.freqstr, '25H')
+ assert result.ordinal == (p3 - 25).ordinal
+ assert result.freq == p2.freq
+ assert result.freqstr == '25H'
msg = ('Frequency must be positive, because it'
' represents span: -25H')
@@ -217,33 +217,33 @@ def test_timestamp_tz_arg(self):
exp = Timestamp('1/1/2005', tz='UTC').tz_convert(case)
exp_zone = pytz.timezone(case).normalize(p)
- self.assertEqual(p, exp)
- self.assertEqual(p.tz, exp_zone.tzinfo)
- self.assertEqual(p.tz, exp.tz)
+ assert p == exp
+ assert p.tz == exp_zone.tzinfo
+ assert p.tz == exp.tz
p = Period('1/1/2005', freq='3H').to_timestamp(tz=case)
exp = Timestamp('1/1/2005', tz='UTC').tz_convert(case)
exp_zone = pytz.timezone(case).normalize(p)
- self.assertEqual(p, exp)
- self.assertEqual(p.tz, exp_zone.tzinfo)
- self.assertEqual(p.tz, exp.tz)
+ assert p == exp
+ assert p.tz == exp_zone.tzinfo
+ assert p.tz == exp.tz
p = Period('1/1/2005', freq='A').to_timestamp(freq='A', tz=case)
exp = Timestamp('31/12/2005', tz='UTC').tz_convert(case)
exp_zone = pytz.timezone(case).normalize(p)
- self.assertEqual(p, exp)
- self.assertEqual(p.tz, exp_zone.tzinfo)
- self.assertEqual(p.tz, exp.tz)
+ assert p == exp
+ assert p.tz == exp_zone.tzinfo
+ assert p.tz == exp.tz
p = Period('1/1/2005', freq='A').to_timestamp(freq='3H', tz=case)
exp = Timestamp('1/1/2005', tz='UTC').tz_convert(case)
exp_zone = pytz.timezone(case).normalize(p)
- self.assertEqual(p, exp)
- self.assertEqual(p.tz, exp_zone.tzinfo)
- self.assertEqual(p.tz, exp.tz)
+ assert p == exp
+ assert p.tz == exp_zone.tzinfo
+ assert p.tz == exp.tz
def test_timestamp_tz_arg_dateutil(self):
from pandas._libs.tslib import _dateutil_gettz as gettz
@@ -253,86 +253,86 @@ def test_timestamp_tz_arg_dateutil(self):
p = Period('1/1/2005', freq='M').to_timestamp(
tz=maybe_get_tz(case))
exp = Timestamp('1/1/2005', tz='UTC').tz_convert(case)
- self.assertEqual(p, exp)
- self.assertEqual(p.tz, gettz(case.split('/', 1)[1]))
- self.assertEqual(p.tz, exp.tz)
+ assert p == exp
+ assert p.tz == gettz(case.split('/', 1)[1])
+ assert p.tz == exp.tz
p = Period('1/1/2005',
freq='M').to_timestamp(freq='3H', tz=maybe_get_tz(case))
exp = Timestamp('1/1/2005', tz='UTC').tz_convert(case)
- self.assertEqual(p, exp)
- self.assertEqual(p.tz, gettz(case.split('/', 1)[1]))
- self.assertEqual(p.tz, exp.tz)
+ assert p == exp
+ assert p.tz == gettz(case.split('/', 1)[1])
+ assert p.tz == exp.tz
def test_timestamp_tz_arg_dateutil_from_string(self):
from pandas._libs.tslib import _dateutil_gettz as gettz
p = Period('1/1/2005',
freq='M').to_timestamp(tz='dateutil/Europe/Brussels')
- self.assertEqual(p.tz, gettz('Europe/Brussels'))
+ assert p.tz == gettz('Europe/Brussels')
def test_timestamp_mult(self):
p = pd.Period('2011-01', freq='M')
- self.assertEqual(p.to_timestamp(how='S'), pd.Timestamp('2011-01-01'))
- self.assertEqual(p.to_timestamp(how='E'), pd.Timestamp('2011-01-31'))
+ assert p.to_timestamp(how='S') == pd.Timestamp('2011-01-01')
+ assert p.to_timestamp(how='E') == pd.Timestamp('2011-01-31')
p = pd.Period('2011-01', freq='3M')
- self.assertEqual(p.to_timestamp(how='S'), pd.Timestamp('2011-01-01'))
- self.assertEqual(p.to_timestamp(how='E'), pd.Timestamp('2011-03-31'))
+ assert p.to_timestamp(how='S') == pd.Timestamp('2011-01-01')
+ assert p.to_timestamp(how='E') == pd.Timestamp('2011-03-31')
def test_construction(self):
i1 = Period('1/1/2005', freq='M')
i2 = Period('Jan 2005')
- self.assertEqual(i1, i2)
+ assert i1 == i2
i1 = Period('2005', freq='A')
i2 = Period('2005')
i3 = Period('2005', freq='a')
- self.assertEqual(i1, i2)
- self.assertEqual(i1, i3)
+ assert i1 == i2
+ assert i1 == i3
i4 = Period('2005', freq='M')
i5 = Period('2005', freq='m')
pytest.raises(ValueError, i1.__ne__, i4)
- self.assertEqual(i4, i5)
+ assert i4 == i5
i1 = Period.now('Q')
i2 = Period(datetime.now(), freq='Q')
i3 = Period.now('q')
- self.assertEqual(i1, i2)
- self.assertEqual(i1, i3)
+ assert i1 == i2
+ assert i1 == i3
i1 = Period('1982', freq='min')
i2 = Period('1982', freq='MIN')
- self.assertEqual(i1, i2)
+ assert i1 == i2
i2 = Period('1982', freq=('Min', 1))
- self.assertEqual(i1, i2)
+ assert i1 == i2
i1 = Period(year=2005, month=3, day=1, freq='D')
i2 = Period('3/1/2005', freq='D')
- self.assertEqual(i1, i2)
+ assert i1 == i2
i3 = Period(year=2005, month=3, day=1, freq='d')
- self.assertEqual(i1, i3)
+ assert i1 == i3
i1 = Period('2007-01-01 09:00:00.001')
expected = Period(datetime(2007, 1, 1, 9, 0, 0, 1000), freq='L')
- self.assertEqual(i1, expected)
+ assert i1 == expected
expected = Period(np_datetime64_compat(
'2007-01-01 09:00:00.001Z'), freq='L')
- self.assertEqual(i1, expected)
+ assert i1 == expected
i1 = Period('2007-01-01 09:00:00.00101')
expected = Period(datetime(2007, 1, 1, 9, 0, 0, 1010), freq='U')
- self.assertEqual(i1, expected)
+ assert i1 == expected
expected = Period(np_datetime64_compat('2007-01-01 09:00:00.00101Z'),
freq='U')
- self.assertEqual(i1, expected)
+ assert i1 == expected
pytest.raises(ValueError, Period, ordinal=200701)
@@ -343,157 +343,155 @@ def test_construction_bday(self):
# Biz day construction, roll forward if non-weekday
i1 = Period('3/10/12', freq='B')
i2 = Period('3/10/12', freq='D')
- self.assertEqual(i1, i2.asfreq('B'))
+ assert i1 == i2.asfreq('B')
i2 = Period('3/11/12', freq='D')
- self.assertEqual(i1, i2.asfreq('B'))
+ assert i1 == i2.asfreq('B')
i2 = Period('3/12/12', freq='D')
- self.assertEqual(i1, i2.asfreq('B'))
+ assert i1 == i2.asfreq('B')
i3 = Period('3/10/12', freq='b')
- self.assertEqual(i1, i3)
+ assert i1 == i3
i1 = Period(year=2012, month=3, day=10, freq='B')
i2 = Period('3/12/12', freq='B')
- self.assertEqual(i1, i2)
+ assert i1 == i2
def test_construction_quarter(self):
i1 = Period(year=2005, quarter=1, freq='Q')
i2 = Period('1/1/2005', freq='Q')
- self.assertEqual(i1, i2)
+ assert i1 == i2
i1 = Period(year=2005, quarter=3, freq='Q')
i2 = Period('9/1/2005', freq='Q')
- self.assertEqual(i1, i2)
+ assert i1 == i2
i1 = Period('2005Q1')
i2 = Period(year=2005, quarter=1, freq='Q')
i3 = Period('2005q1')
- self.assertEqual(i1, i2)
- self.assertEqual(i1, i3)
+ assert i1 == i2
+ assert i1 == i3
i1 = Period('05Q1')
- self.assertEqual(i1, i2)
+ assert i1 == i2
lower = Period('05q1')
- self.assertEqual(i1, lower)
+ assert i1 == lower
i1 = Period('1Q2005')
- self.assertEqual(i1, i2)
+ assert i1 == i2
lower = Period('1q2005')
- self.assertEqual(i1, lower)
+ assert i1 == lower
i1 = Period('1Q05')
- self.assertEqual(i1, i2)
+ assert i1 == i2
lower = Period('1q05')
- self.assertEqual(i1, lower)
+ assert i1 == lower
i1 = Period('4Q1984')
- self.assertEqual(i1.year, 1984)
+ assert i1.year == 1984
lower = Period('4q1984')
- self.assertEqual(i1, lower)
+ assert i1 == lower
def test_construction_month(self):
expected = Period('2007-01', freq='M')
i1 = Period('200701', freq='M')
- self.assertEqual(i1, expected)
+ assert i1 == expected
i1 = Period('200701', freq='M')
- self.assertEqual(i1, expected)
+ assert i1 == expected
i1 = Period(200701, freq='M')
- self.assertEqual(i1, expected)
+ assert i1 == expected
i1 = Period(ordinal=200701, freq='M')
- self.assertEqual(i1.year, 18695)
+ assert i1.year == 18695
i1 = Period(datetime(2007, 1, 1), freq='M')
i2 = Period('200701', freq='M')
- self.assertEqual(i1, i2)
+ assert i1 == i2
i1 = Period(date(2007, 1, 1), freq='M')
i2 = Period(datetime(2007, 1, 1), freq='M')
i3 = Period(np.datetime64('2007-01-01'), freq='M')
i4 = Period(np_datetime64_compat('2007-01-01 00:00:00Z'), freq='M')
i5 = Period(np_datetime64_compat('2007-01-01 00:00:00.000Z'), freq='M')
- self.assertEqual(i1, i2)
- self.assertEqual(i1, i3)
- self.assertEqual(i1, i4)
- self.assertEqual(i1, i5)
+ assert i1 == i2
+ assert i1 == i3
+ assert i1 == i4
+ assert i1 == i5
def test_period_constructor_offsets(self):
- self.assertEqual(Period('1/1/2005', freq=offsets.MonthEnd()),
- Period('1/1/2005', freq='M'))
- self.assertEqual(Period('2005', freq=offsets.YearEnd()),
- Period('2005', freq='A'))
- self.assertEqual(Period('2005', freq=offsets.MonthEnd()),
- Period('2005', freq='M'))
- self.assertEqual(Period('3/10/12', freq=offsets.BusinessDay()),
- Period('3/10/12', freq='B'))
- self.assertEqual(Period('3/10/12', freq=offsets.Day()),
- Period('3/10/12', freq='D'))
-
- self.assertEqual(Period(year=2005, quarter=1,
- freq=offsets.QuarterEnd(startingMonth=12)),
- Period(year=2005, quarter=1, freq='Q'))
- self.assertEqual(Period(year=2005, quarter=2,
- freq=offsets.QuarterEnd(startingMonth=12)),
- Period(year=2005, quarter=2, freq='Q'))
-
- self.assertEqual(Period(year=2005, month=3, day=1, freq=offsets.Day()),
- Period(year=2005, month=3, day=1, freq='D'))
- self.assertEqual(Period(year=2012, month=3, day=10,
- freq=offsets.BDay()),
- Period(year=2012, month=3, day=10, freq='B'))
+ assert (Period('1/1/2005', freq=offsets.MonthEnd()) ==
+ Period('1/1/2005', freq='M'))
+ assert (Period('2005', freq=offsets.YearEnd()) ==
+ Period('2005', freq='A'))
+ assert (Period('2005', freq=offsets.MonthEnd()) ==
+ Period('2005', freq='M'))
+ assert (Period('3/10/12', freq=offsets.BusinessDay()) ==
+ Period('3/10/12', freq='B'))
+ assert (Period('3/10/12', freq=offsets.Day()) ==
+ Period('3/10/12', freq='D'))
+
+ assert (Period(year=2005, quarter=1,
+ freq=offsets.QuarterEnd(startingMonth=12)) ==
+ Period(year=2005, quarter=1, freq='Q'))
+ assert (Period(year=2005, quarter=2,
+ freq=offsets.QuarterEnd(startingMonth=12)) ==
+ Period(year=2005, quarter=2, freq='Q'))
+
+ assert (Period(year=2005, month=3, day=1, freq=offsets.Day()) ==
+ Period(year=2005, month=3, day=1, freq='D'))
+ assert (Period(year=2012, month=3, day=10, freq=offsets.BDay()) ==
+ Period(year=2012, month=3, day=10, freq='B'))
expected = Period('2005-03-01', freq='3D')
- self.assertEqual(Period(year=2005, month=3, day=1,
- freq=offsets.Day(3)), expected)
- self.assertEqual(Period(year=2005, month=3, day=1, freq='3D'),
- expected)
+ assert (Period(year=2005, month=3, day=1,
+ freq=offsets.Day(3)) == expected)
+ assert Period(year=2005, month=3, day=1, freq='3D') == expected
- self.assertEqual(Period(year=2012, month=3, day=10,
- freq=offsets.BDay(3)),
- Period(year=2012, month=3, day=10, freq='3B'))
+ assert (Period(year=2012, month=3, day=10,
+ freq=offsets.BDay(3)) ==
+ Period(year=2012, month=3, day=10, freq='3B'))
- self.assertEqual(Period(200701, freq=offsets.MonthEnd()),
- Period(200701, freq='M'))
+ assert (Period(200701, freq=offsets.MonthEnd()) ==
+ Period(200701, freq='M'))
i1 = Period(ordinal=200701, freq=offsets.MonthEnd())
i2 = Period(ordinal=200701, freq='M')
- self.assertEqual(i1, i2)
- self.assertEqual(i1.year, 18695)
- self.assertEqual(i2.year, 18695)
+ assert i1 == i2
+ assert i1.year == 18695
+ assert i2.year == 18695
i1 = Period(datetime(2007, 1, 1), freq='M')
i2 = Period('200701', freq='M')
- self.assertEqual(i1, i2)
+ assert i1 == i2
i1 = Period(date(2007, 1, 1), freq='M')
i2 = Period(datetime(2007, 1, 1), freq='M')
i3 = Period(np.datetime64('2007-01-01'), freq='M')
i4 = Period(np_datetime64_compat('2007-01-01 00:00:00Z'), freq='M')
i5 = Period(np_datetime64_compat('2007-01-01 00:00:00.000Z'), freq='M')
- self.assertEqual(i1, i2)
- self.assertEqual(i1, i3)
- self.assertEqual(i1, i4)
- self.assertEqual(i1, i5)
+ assert i1 == i2
+ assert i1 == i3
+ assert i1 == i4
+ assert i1 == i5
i1 = Period('2007-01-01 09:00:00.001')
expected = Period(datetime(2007, 1, 1, 9, 0, 0, 1000), freq='L')
- self.assertEqual(i1, expected)
+ assert i1 == expected
expected = Period(np_datetime64_compat(
'2007-01-01 09:00:00.001Z'), freq='L')
- self.assertEqual(i1, expected)
+ assert i1 == expected
i1 = Period('2007-01-01 09:00:00.00101')
expected = Period(datetime(2007, 1, 1, 9, 0, 0, 1010), freq='U')
- self.assertEqual(i1, expected)
+ assert i1 == expected
expected = Period(np_datetime64_compat('2007-01-01 09:00:00.00101Z'),
freq='U')
- self.assertEqual(i1, expected)
+ assert i1 == expected
pytest.raises(ValueError, Period, ordinal=200701)
@@ -501,8 +499,8 @@ def test_period_constructor_offsets(self):
def test_freq_str(self):
i1 = Period('1982', freq='Min')
- self.assertEqual(i1.freq, offsets.Minute())
- self.assertEqual(i1.freqstr, 'T')
+ assert i1.freq == offsets.Minute()
+ assert i1.freqstr == 'T'
def test_period_deprecated_freq(self):
cases = {"M": ["MTH", "MONTH", "MONTHLY", "Mth", "month", "monthly"],
@@ -530,17 +528,17 @@ def test_period_deprecated_freq(self):
assert isinstance(p2, Period)
def test_hash(self):
- self.assertEqual(hash(Period('2011-01', freq='M')),
- hash(Period('2011-01', freq='M')))
+ assert (hash(Period('2011-01', freq='M')) ==
+ hash(Period('2011-01', freq='M')))
- self.assertNotEqual(hash(Period('2011-01-01', freq='D')),
- hash(Period('2011-01', freq='M')))
+ assert (hash(Period('2011-01-01', freq='D')) !=
+ hash(Period('2011-01', freq='M')))
- self.assertNotEqual(hash(Period('2011-01', freq='3M')),
- hash(Period('2011-01', freq='2M')))
+ assert (hash(Period('2011-01', freq='3M')) !=
+ hash(Period('2011-01', freq='2M')))
- self.assertNotEqual(hash(Period('2011-01', freq='M')),
- hash(Period('2011-02', freq='M')))
+ assert (hash(Period('2011-01', freq='M')) !=
+ hash(Period('2011-02', freq='M')))
def test_repr(self):
p = Period('Jan-2000')
@@ -556,23 +554,23 @@ def test_repr_nat(self):
def test_millisecond_repr(self):
p = Period('2000-01-01 12:15:02.123')
- self.assertEqual("Period('2000-01-01 12:15:02.123', 'L')", repr(p))
+ assert repr(p) == "Period('2000-01-01 12:15:02.123', 'L')"
def test_microsecond_repr(self):
p = Period('2000-01-01 12:15:02.123567')
- self.assertEqual("Period('2000-01-01 12:15:02.123567', 'U')", repr(p))
+ assert repr(p) == "Period('2000-01-01 12:15:02.123567', 'U')"
def test_strftime(self):
p = Period('2000-1-1 12:34:12', freq='S')
res = p.strftime('%Y-%m-%d %H:%M:%S')
- self.assertEqual(res, '2000-01-01 12:34:12')
+ assert res == '2000-01-01 12:34:12'
assert isinstance(res, text_type) # GH3363
def test_sub_delta(self):
left, right = Period('2011', freq='A'), Period('2007', freq='A')
result = left - right
- self.assertEqual(result, 4)
+ assert result == 4
with pytest.raises(period.IncompatibleFrequency):
left - Period('2007-01', freq='M')
@@ -582,15 +580,15 @@ def test_to_timestamp(self):
start_ts = p.to_timestamp(how='S')
aliases = ['s', 'StarT', 'BEGIn']
for a in aliases:
- self.assertEqual(start_ts, p.to_timestamp('D', how=a))
+ assert start_ts == p.to_timestamp('D', how=a)
# freq with mult should not affect the result
- self.assertEqual(start_ts, p.to_timestamp('3D', how=a))
+ assert start_ts == p.to_timestamp('3D', how=a)
end_ts = p.to_timestamp(how='E')
aliases = ['e', 'end', 'FINIsH']
for a in aliases:
- self.assertEqual(end_ts, p.to_timestamp('D', how=a))
- self.assertEqual(end_ts, p.to_timestamp('3D', how=a))
+ assert end_ts == p.to_timestamp('D', how=a)
+ assert end_ts == p.to_timestamp('3D', how=a)
from_lst = ['A', 'Q', 'M', 'W', 'B', 'D', 'H', 'Min', 'S']
@@ -600,11 +598,11 @@ def _ex(p):
for i, fcode in enumerate(from_lst):
p = Period('1982', freq=fcode)
result = p.to_timestamp().to_period(fcode)
- self.assertEqual(result, p)
+ assert result == p
- self.assertEqual(p.start_time, p.to_timestamp(how='S'))
+ assert p.start_time == p.to_timestamp(how='S')
- self.assertEqual(p.end_time, _ex(p))
+ assert p.end_time == _ex(p)
# Frequency other than daily
@@ -612,42 +610,40 @@ def _ex(p):
result = p.to_timestamp('H', how='end')
expected = datetime(1985, 12, 31, 23)
- self.assertEqual(result, expected)
+ assert result == expected
result = p.to_timestamp('3H', how='end')
- self.assertEqual(result, expected)
+ assert result == expected
result = p.to_timestamp('T', how='end')
expected = datetime(1985, 12, 31, 23, 59)
- self.assertEqual(result, expected)
+ assert result == expected
result = p.to_timestamp('2T', how='end')
- self.assertEqual(result, expected)
+ assert result == expected
result = p.to_timestamp(how='end')
expected = datetime(1985, 12, 31)
- self.assertEqual(result, expected)
+ assert result == expected
expected = datetime(1985, 1, 1)
result = p.to_timestamp('H', how='start')
- self.assertEqual(result, expected)
+ assert result == expected
result = p.to_timestamp('T', how='start')
- self.assertEqual(result, expected)
+ assert result == expected
result = p.to_timestamp('S', how='start')
- self.assertEqual(result, expected)
+ assert result == expected
result = p.to_timestamp('3H', how='start')
- self.assertEqual(result, expected)
+ assert result == expected
result = p.to_timestamp('5S', how='start')
- self.assertEqual(result, expected)
+ assert result == expected
def test_start_time(self):
freq_lst = ['A', 'Q', 'M', 'D', 'H', 'T', 'S']
xp = datetime(2012, 1, 1)
for f in freq_lst:
p = Period('2012', freq=f)
- self.assertEqual(p.start_time, xp)
- self.assertEqual(Period('2012', freq='B').start_time,
- datetime(2012, 1, 2))
- self.assertEqual(Period('2012', freq='W').start_time,
- datetime(2011, 12, 26))
+ assert p.start_time == xp
+ assert Period('2012', freq='B').start_time == datetime(2012, 1, 2)
+ assert Period('2012', freq='W').start_time == datetime(2011, 12, 26)
def test_end_time(self):
p = Period('2012', freq='A')
@@ -656,44 +652,44 @@ def _ex(*args):
return Timestamp(Timestamp(datetime(*args)).value - 1)
xp = _ex(2013, 1, 1)
- self.assertEqual(xp, p.end_time)
+ assert xp == p.end_time
p = Period('2012', freq='Q')
xp = _ex(2012, 4, 1)
- self.assertEqual(xp, p.end_time)
+ assert xp == p.end_time
p = Period('2012', freq='M')
xp = _ex(2012, 2, 1)
- self.assertEqual(xp, p.end_time)
+ assert xp == p.end_time
p = Period('2012', freq='D')
xp = _ex(2012, 1, 2)
- self.assertEqual(xp, p.end_time)
+ assert xp == p.end_time
p = Period('2012', freq='H')
xp = _ex(2012, 1, 1, 1)
- self.assertEqual(xp, p.end_time)
+ assert xp == p.end_time
p = Period('2012', freq='B')
xp = _ex(2012, 1, 3)
- self.assertEqual(xp, p.end_time)
+ assert xp == p.end_time
p = Period('2012', freq='W')
xp = _ex(2012, 1, 2)
- self.assertEqual(xp, p.end_time)
+ assert xp == p.end_time
# Test for GH 11738
p = Period('2012', freq='15D')
xp = _ex(2012, 1, 16)
- self.assertEqual(xp, p.end_time)
+ assert xp == p.end_time
p = Period('2012', freq='1D1H')
xp = _ex(2012, 1, 2, 1)
- self.assertEqual(xp, p.end_time)
+ assert xp == p.end_time
p = Period('2012', freq='1H1D')
xp = _ex(2012, 1, 2, 1)
- self.assertEqual(xp, p.end_time)
+ assert xp == p.end_time
def test_anchor_week_end_time(self):
def _ex(*args):
@@ -701,12 +697,12 @@ def _ex(*args):
p = Period('2013-1-1', 'W-SAT')
xp = _ex(2013, 1, 6)
- self.assertEqual(p.end_time, xp)
+ assert p.end_time == xp
def test_properties_annually(self):
        # Test properties on Periods with annual frequency.
a_date = Period(freq='A', year=2007)
- self.assertEqual(a_date.year, 2007)
+ assert a_date.year == 2007
def test_properties_quarterly(self):
        # Test properties on Periods with quarterly frequency.
@@ -716,50 +712,50 @@ def test_properties_quarterly(self):
#
for x in range(3):
for qd in (qedec_date, qejan_date, qejun_date):
- self.assertEqual((qd + x).qyear, 2007)
- self.assertEqual((qd + x).quarter, x + 1)
+ assert (qd + x).qyear == 2007
+ assert (qd + x).quarter == x + 1
def test_properties_monthly(self):
        # Test properties on Periods with monthly frequency.
m_date = Period(freq='M', year=2007, month=1)
for x in range(11):
m_ival_x = m_date + x
- self.assertEqual(m_ival_x.year, 2007)
+ assert m_ival_x.year == 2007
if 1 <= x + 1 <= 3:
- self.assertEqual(m_ival_x.quarter, 1)
+ assert m_ival_x.quarter == 1
elif 4 <= x + 1 <= 6:
- self.assertEqual(m_ival_x.quarter, 2)
+ assert m_ival_x.quarter == 2
elif 7 <= x + 1 <= 9:
- self.assertEqual(m_ival_x.quarter, 3)
+ assert m_ival_x.quarter == 3
elif 10 <= x + 1 <= 12:
- self.assertEqual(m_ival_x.quarter, 4)
- self.assertEqual(m_ival_x.month, x + 1)
+ assert m_ival_x.quarter == 4
+ assert m_ival_x.month == x + 1
def test_properties_weekly(self):
        # Test properties on Periods with weekly frequency.
w_date = Period(freq='W', year=2007, month=1, day=7)
#
- self.assertEqual(w_date.year, 2007)
- self.assertEqual(w_date.quarter, 1)
- self.assertEqual(w_date.month, 1)
- self.assertEqual(w_date.week, 1)
- self.assertEqual((w_date - 1).week, 52)
- self.assertEqual(w_date.days_in_month, 31)
- self.assertEqual(Period(freq='W', year=2012,
- month=2, day=1).days_in_month, 29)
+ assert w_date.year == 2007
+ assert w_date.quarter == 1
+ assert w_date.month == 1
+ assert w_date.week == 1
+ assert (w_date - 1).week == 52
+ assert w_date.days_in_month == 31
+ assert Period(freq='W', year=2012,
+ month=2, day=1).days_in_month == 29
def test_properties_weekly_legacy(self):
        # Test properties on Periods with weekly (legacy) frequency.
w_date = Period(freq='W', year=2007, month=1, day=7)
- self.assertEqual(w_date.year, 2007)
- self.assertEqual(w_date.quarter, 1)
- self.assertEqual(w_date.month, 1)
- self.assertEqual(w_date.week, 1)
- self.assertEqual((w_date - 1).week, 52)
- self.assertEqual(w_date.days_in_month, 31)
+ assert w_date.year == 2007
+ assert w_date.quarter == 1
+ assert w_date.month == 1
+ assert w_date.week == 1
+ assert (w_date - 1).week == 52
+ assert w_date.days_in_month == 31
exp = Period(freq='W', year=2012, month=2, day=1)
- self.assertEqual(exp.days_in_month, 29)
+ assert exp.days_in_month == 29
msg = pd.tseries.frequencies._INVALID_FREQ_ERROR
with tm.assert_raises_regex(ValueError, msg):
@@ -769,27 +765,27 @@ def test_properties_daily(self):
# Test properties on Periods with daily frequency.
b_date = Period(freq='B', year=2007, month=1, day=1)
#
- self.assertEqual(b_date.year, 2007)
- self.assertEqual(b_date.quarter, 1)
- self.assertEqual(b_date.month, 1)
- self.assertEqual(b_date.day, 1)
- self.assertEqual(b_date.weekday, 0)
- self.assertEqual(b_date.dayofyear, 1)
- self.assertEqual(b_date.days_in_month, 31)
- self.assertEqual(Period(freq='B', year=2012,
- month=2, day=1).days_in_month, 29)
- #
+ assert b_date.year == 2007
+ assert b_date.quarter == 1
+ assert b_date.month == 1
+ assert b_date.day == 1
+ assert b_date.weekday == 0
+ assert b_date.dayofyear == 1
+ assert b_date.days_in_month == 31
+ assert Period(freq='B', year=2012,
+ month=2, day=1).days_in_month == 29
+
d_date = Period(freq='D', year=2007, month=1, day=1)
- #
- self.assertEqual(d_date.year, 2007)
- self.assertEqual(d_date.quarter, 1)
- self.assertEqual(d_date.month, 1)
- self.assertEqual(d_date.day, 1)
- self.assertEqual(d_date.weekday, 0)
- self.assertEqual(d_date.dayofyear, 1)
- self.assertEqual(d_date.days_in_month, 31)
- self.assertEqual(Period(freq='D', year=2012, month=2,
- day=1).days_in_month, 29)
+
+ assert d_date.year == 2007
+ assert d_date.quarter == 1
+ assert d_date.month == 1
+ assert d_date.day == 1
+ assert d_date.weekday == 0
+ assert d_date.dayofyear == 1
+ assert d_date.days_in_month == 31
+ assert Period(freq='D', year=2012, month=2,
+ day=1).days_in_month == 29
def test_properties_hourly(self):
# Test properties on Periods with hourly frequency.
@@ -797,50 +793,50 @@ def test_properties_hourly(self):
h_date2 = Period(freq='2H', year=2007, month=1, day=1, hour=0)
for h_date in [h_date1, h_date2]:
- self.assertEqual(h_date.year, 2007)
- self.assertEqual(h_date.quarter, 1)
- self.assertEqual(h_date.month, 1)
- self.assertEqual(h_date.day, 1)
- self.assertEqual(h_date.weekday, 0)
- self.assertEqual(h_date.dayofyear, 1)
- self.assertEqual(h_date.hour, 0)
- self.assertEqual(h_date.days_in_month, 31)
- self.assertEqual(Period(freq='H', year=2012, month=2, day=1,
- hour=0).days_in_month, 29)
+ assert h_date.year == 2007
+ assert h_date.quarter == 1
+ assert h_date.month == 1
+ assert h_date.day == 1
+ assert h_date.weekday == 0
+ assert h_date.dayofyear == 1
+ assert h_date.hour == 0
+ assert h_date.days_in_month == 31
+ assert Period(freq='H', year=2012, month=2, day=1,
+ hour=0).days_in_month == 29
def test_properties_minutely(self):
# Test properties on Periods with minutely frequency.
t_date = Period(freq='Min', year=2007, month=1, day=1, hour=0,
minute=0)
#
- self.assertEqual(t_date.quarter, 1)
- self.assertEqual(t_date.month, 1)
- self.assertEqual(t_date.day, 1)
- self.assertEqual(t_date.weekday, 0)
- self.assertEqual(t_date.dayofyear, 1)
- self.assertEqual(t_date.hour, 0)
- self.assertEqual(t_date.minute, 0)
- self.assertEqual(t_date.days_in_month, 31)
- self.assertEqual(Period(freq='D', year=2012, month=2, day=1, hour=0,
- minute=0).days_in_month, 29)
+ assert t_date.quarter == 1
+ assert t_date.month == 1
+ assert t_date.day == 1
+ assert t_date.weekday == 0
+ assert t_date.dayofyear == 1
+ assert t_date.hour == 0
+ assert t_date.minute == 0
+ assert t_date.days_in_month == 31
+ assert Period(freq='D', year=2012, month=2, day=1, hour=0,
+ minute=0).days_in_month == 29
def test_properties_secondly(self):
# Test properties on Periods with secondly frequency.
s_date = Period(freq='Min', year=2007, month=1, day=1, hour=0,
minute=0, second=0)
#
- self.assertEqual(s_date.year, 2007)
- self.assertEqual(s_date.quarter, 1)
- self.assertEqual(s_date.month, 1)
- self.assertEqual(s_date.day, 1)
- self.assertEqual(s_date.weekday, 0)
- self.assertEqual(s_date.dayofyear, 1)
- self.assertEqual(s_date.hour, 0)
- self.assertEqual(s_date.minute, 0)
- self.assertEqual(s_date.second, 0)
- self.assertEqual(s_date.days_in_month, 31)
- self.assertEqual(Period(freq='Min', year=2012, month=2, day=1, hour=0,
- minute=0, second=0).days_in_month, 29)
+ assert s_date.year == 2007
+ assert s_date.quarter == 1
+ assert s_date.month == 1
+ assert s_date.day == 1
+ assert s_date.weekday == 0
+ assert s_date.dayofyear == 1
+ assert s_date.hour == 0
+ assert s_date.minute == 0
+ assert s_date.second == 0
+ assert s_date.days_in_month == 31
+ assert Period(freq='Min', year=2012, month=2, day=1, hour=0,
+ minute=0, second=0).days_in_month == 29
def test_pnow(self):
@@ -851,7 +847,7 @@ def test_pnow(self):
def test_constructor_corner(self):
expected = Period('2007-01', freq='2M')
- self.assertEqual(Period(year=2007, month=1, freq='2M'), expected)
+ assert Period(year=2007, month=1, freq='2M') == expected
pytest.raises(ValueError, Period, datetime.now())
pytest.raises(ValueError, Period, datetime.now().date())
@@ -865,29 +861,29 @@ def test_constructor_corner(self):
result = Period(p, freq='A')
exp = Period('2007', freq='A')
- self.assertEqual(result, exp)
+ assert result == exp
def test_constructor_infer_freq(self):
p = Period('2007-01-01')
- self.assertEqual(p.freq, 'D')
+ assert p.freq == 'D'
p = Period('2007-01-01 07')
- self.assertEqual(p.freq, 'H')
+ assert p.freq == 'H'
p = Period('2007-01-01 07:10')
- self.assertEqual(p.freq, 'T')
+ assert p.freq == 'T'
p = Period('2007-01-01 07:10:15')
- self.assertEqual(p.freq, 'S')
+ assert p.freq == 'S'
p = Period('2007-01-01 07:10:15.123')
- self.assertEqual(p.freq, 'L')
+ assert p.freq == 'L'
p = Period('2007-01-01 07:10:15.123000')
- self.assertEqual(p.freq, 'L')
+ assert p.freq == 'L'
p = Period('2007-01-01 07:10:15.123400')
- self.assertEqual(p.freq, 'U')
+ assert p.freq == 'U'
def test_badinput(self):
pytest.raises(ValueError, Period, '-2000', 'A')
@@ -897,22 +893,22 @@ def test_badinput(self):
def test_multiples(self):
result1 = Period('1989', freq='2A')
result2 = Period('1989', freq='A')
- self.assertEqual(result1.ordinal, result2.ordinal)
- self.assertEqual(result1.freqstr, '2A-DEC')
- self.assertEqual(result2.freqstr, 'A-DEC')
- self.assertEqual(result1.freq, offsets.YearEnd(2))
- self.assertEqual(result2.freq, offsets.YearEnd())
+ assert result1.ordinal == result2.ordinal
+ assert result1.freqstr == '2A-DEC'
+ assert result2.freqstr == 'A-DEC'
+ assert result1.freq == offsets.YearEnd(2)
+ assert result2.freq == offsets.YearEnd()
- self.assertEqual((result1 + 1).ordinal, result1.ordinal + 2)
- self.assertEqual((1 + result1).ordinal, result1.ordinal + 2)
- self.assertEqual((result1 - 1).ordinal, result2.ordinal - 2)
- self.assertEqual((-1 + result1).ordinal, result2.ordinal - 2)
+ assert (result1 + 1).ordinal == result1.ordinal + 2
+ assert (1 + result1).ordinal == result1.ordinal + 2
+ assert (result1 - 1).ordinal == result2.ordinal - 2
+ assert (-1 + result1).ordinal == result2.ordinal - 2
def test_round_trip(self):
p = Period('2000Q1')
new_p = tm.round_trip_pickle(p)
- self.assertEqual(new_p, p)
+ assert new_p == p
class TestPeriodField(tm.TestCase):
@@ -935,7 +931,7 @@ def setUp(self):
self.day = Period('2012-01-01', 'D')
def test_equal(self):
- self.assertEqual(self.january1, self.january2)
+ assert self.january1 == self.january2
def test_equal_Raises_Value(self):
with pytest.raises(period.IncompatibleFrequency):
@@ -991,7 +987,7 @@ def test_smaller_Raises_Type(self):
def test_sort(self):
periods = [self.march, self.january1, self.february]
correctPeriods = [self.january1, self.february, self.march]
- self.assertEqual(sorted(periods), correctPeriods)
+ assert sorted(periods) == correctPeriods
def test_period_nat_comp(self):
p_nat = Period('NaT', freq='D')
@@ -1002,12 +998,12 @@ def test_period_nat_comp(self):
# confirm Period('NaT') work identical with Timestamp('NaT')
for left, right in [(p_nat, p), (p, p_nat), (p_nat, p_nat), (nat, t),
(t, nat), (nat, nat)]:
- self.assertEqual(left < right, False)
- self.assertEqual(left > right, False)
- self.assertEqual(left == right, False)
- self.assertEqual(left != right, True)
- self.assertEqual(left <= right, False)
- self.assertEqual(left >= right, False)
+ assert not left < right
+ assert not left > right
+ assert not left == right
+ assert left != right
+ assert not left <= right
+ assert not left >= right
class TestMethods(tm.TestCase):
@@ -1015,8 +1011,8 @@ class TestMethods(tm.TestCase):
def test_add(self):
dt1 = Period(freq='D', year=2008, month=1, day=1)
dt2 = Period(freq='D', year=2008, month=1, day=2)
- self.assertEqual(dt1 + 1, dt2)
- self.assertEqual(1 + dt1, dt2)
+ assert dt1 + 1 == dt2
+ assert 1 + dt1 == dt2
def test_add_pdnat(self):
p = pd.Period('2011-01', freq='M')
@@ -1046,8 +1042,8 @@ def test_sub(self):
dt1 = Period('2011-01-01', freq='D')
dt2 = Period('2011-01-15', freq='D')
- self.assertEqual(dt1 - dt2, -14)
- self.assertEqual(dt2 - dt1, 14)
+ assert dt1 - dt2 == -14
+ assert dt2 - dt1 == 14
msg = r"Input has different freq=M from Period\(freq=D\)"
with tm.assert_raises_regex(period.IncompatibleFrequency, msg):
@@ -1058,8 +1054,8 @@ def test_add_offset(self):
for freq in ['A', '2A', '3A']:
p = Period('2011', freq=freq)
exp = Period('2013', freq=freq)
- self.assertEqual(p + offsets.YearEnd(2), exp)
- self.assertEqual(offsets.YearEnd(2) + p, exp)
+ assert p + offsets.YearEnd(2) == exp
+ assert offsets.YearEnd(2) + p == exp
for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
offsets.Minute(), np.timedelta64(365, 'D'),
@@ -1077,12 +1073,12 @@ def test_add_offset(self):
for freq in ['M', '2M', '3M']:
p = Period('2011-03', freq=freq)
exp = Period('2011-05', freq=freq)
- self.assertEqual(p + offsets.MonthEnd(2), exp)
- self.assertEqual(offsets.MonthEnd(2) + p, exp)
+ assert p + offsets.MonthEnd(2) == exp
+ assert offsets.MonthEnd(2) + p == exp
exp = Period('2012-03', freq=freq)
- self.assertEqual(p + offsets.MonthEnd(12), exp)
- self.assertEqual(offsets.MonthEnd(12) + p, exp)
+ assert p + offsets.MonthEnd(12) == exp
+ assert offsets.MonthEnd(12) + p == exp
for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
offsets.Minute(), np.timedelta64(365, 'D'),
@@ -1102,30 +1098,30 @@ def test_add_offset(self):
p = Period('2011-04-01', freq=freq)
exp = Period('2011-04-06', freq=freq)
- self.assertEqual(p + offsets.Day(5), exp)
- self.assertEqual(offsets.Day(5) + p, exp)
+ assert p + offsets.Day(5) == exp
+ assert offsets.Day(5) + p == exp
exp = Period('2011-04-02', freq=freq)
- self.assertEqual(p + offsets.Hour(24), exp)
- self.assertEqual(offsets.Hour(24) + p, exp)
+ assert p + offsets.Hour(24) == exp
+ assert offsets.Hour(24) + p == exp
exp = Period('2011-04-03', freq=freq)
- self.assertEqual(p + np.timedelta64(2, 'D'), exp)
+ assert p + np.timedelta64(2, 'D') == exp
with pytest.raises(TypeError):
np.timedelta64(2, 'D') + p
exp = Period('2011-04-02', freq=freq)
- self.assertEqual(p + np.timedelta64(3600 * 24, 's'), exp)
+ assert p + np.timedelta64(3600 * 24, 's') == exp
with pytest.raises(TypeError):
np.timedelta64(3600 * 24, 's') + p
exp = Period('2011-03-30', freq=freq)
- self.assertEqual(p + timedelta(-2), exp)
- self.assertEqual(timedelta(-2) + p, exp)
+ assert p + timedelta(-2) == exp
+ assert timedelta(-2) + p == exp
exp = Period('2011-04-03', freq=freq)
- self.assertEqual(p + timedelta(hours=48), exp)
- self.assertEqual(timedelta(hours=48) + p, exp)
+ assert p + timedelta(hours=48) == exp
+ assert timedelta(hours=48) + p == exp
for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
offsets.Minute(), np.timedelta64(4, 'h'),
@@ -1144,30 +1140,30 @@ def test_add_offset(self):
p = Period('2011-04-01 09:00', freq=freq)
exp = Period('2011-04-03 09:00', freq=freq)
- self.assertEqual(p + offsets.Day(2), exp)
- self.assertEqual(offsets.Day(2) + p, exp)
+ assert p + offsets.Day(2) == exp
+ assert offsets.Day(2) + p == exp
exp = Period('2011-04-01 12:00', freq=freq)
- self.assertEqual(p + offsets.Hour(3), exp)
- self.assertEqual(offsets.Hour(3) + p, exp)
+ assert p + offsets.Hour(3) == exp
+ assert offsets.Hour(3) + p == exp
exp = Period('2011-04-01 12:00', freq=freq)
- self.assertEqual(p + np.timedelta64(3, 'h'), exp)
+ assert p + np.timedelta64(3, 'h') == exp
with pytest.raises(TypeError):
np.timedelta64(3, 'h') + p
exp = Period('2011-04-01 10:00', freq=freq)
- self.assertEqual(p + np.timedelta64(3600, 's'), exp)
+ assert p + np.timedelta64(3600, 's') == exp
with pytest.raises(TypeError):
np.timedelta64(3600, 's') + p
exp = Period('2011-04-01 11:00', freq=freq)
- self.assertEqual(p + timedelta(minutes=120), exp)
- self.assertEqual(timedelta(minutes=120) + p, exp)
+ assert p + timedelta(minutes=120) == exp
+ assert timedelta(minutes=120) + p == exp
exp = Period('2011-04-05 12:00', freq=freq)
- self.assertEqual(p + timedelta(days=4, minutes=180), exp)
- self.assertEqual(timedelta(days=4, minutes=180) + p, exp)
+ assert p + timedelta(days=4, minutes=180) == exp
+ assert timedelta(days=4, minutes=180) + p == exp
for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
offsets.Minute(), np.timedelta64(3200, 's'),
@@ -1283,7 +1279,7 @@ def test_sub_offset(self):
# freq is DateOffset
for freq in ['A', '2A', '3A']:
p = Period('2011', freq=freq)
- self.assertEqual(p - offsets.YearEnd(2), Period('2009', freq=freq))
+ assert p - offsets.YearEnd(2) == Period('2009', freq=freq)
for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
offsets.Minute(), np.timedelta64(365, 'D'),
@@ -1293,10 +1289,8 @@ def test_sub_offset(self):
for freq in ['M', '2M', '3M']:
p = Period('2011-03', freq=freq)
- self.assertEqual(p - offsets.MonthEnd(2),
- Period('2011-01', freq=freq))
- self.assertEqual(p - offsets.MonthEnd(12),
- Period('2010-03', freq=freq))
+ assert p - offsets.MonthEnd(2) == Period('2011-01', freq=freq)
+ assert p - offsets.MonthEnd(12) == Period('2010-03', freq=freq)
for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
offsets.Minute(), np.timedelta64(365, 'D'),
@@ -1307,18 +1301,14 @@ def test_sub_offset(self):
# freq is Tick
for freq in ['D', '2D', '3D']:
p = Period('2011-04-01', freq=freq)
- self.assertEqual(p - offsets.Day(5),
- Period('2011-03-27', freq=freq))
- self.assertEqual(p - offsets.Hour(24),
- Period('2011-03-31', freq=freq))
- self.assertEqual(p - np.timedelta64(2, 'D'),
- Period('2011-03-30', freq=freq))
- self.assertEqual(p - np.timedelta64(3600 * 24, 's'),
- Period('2011-03-31', freq=freq))
- self.assertEqual(p - timedelta(-2),
- Period('2011-04-03', freq=freq))
- self.assertEqual(p - timedelta(hours=48),
- Period('2011-03-30', freq=freq))
+ assert p - offsets.Day(5) == Period('2011-03-27', freq=freq)
+ assert p - offsets.Hour(24) == Period('2011-03-31', freq=freq)
+ assert p - np.timedelta64(2, 'D') == Period(
+ '2011-03-30', freq=freq)
+ assert p - np.timedelta64(3600 * 24, 's') == Period(
+ '2011-03-31', freq=freq)
+ assert p - timedelta(-2) == Period('2011-04-03', freq=freq)
+ assert p - timedelta(hours=48) == Period('2011-03-30', freq=freq)
for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
offsets.Minute(), np.timedelta64(4, 'h'),
@@ -1328,18 +1318,16 @@ def test_sub_offset(self):
for freq in ['H', '2H', '3H']:
p = Period('2011-04-01 09:00', freq=freq)
- self.assertEqual(p - offsets.Day(2),
- Period('2011-03-30 09:00', freq=freq))
- self.assertEqual(p - offsets.Hour(3),
- Period('2011-04-01 06:00', freq=freq))
- self.assertEqual(p - np.timedelta64(3, 'h'),
- Period('2011-04-01 06:00', freq=freq))
- self.assertEqual(p - np.timedelta64(3600, 's'),
- Period('2011-04-01 08:00', freq=freq))
- self.assertEqual(p - timedelta(minutes=120),
- Period('2011-04-01 07:00', freq=freq))
- self.assertEqual(p - timedelta(days=4, minutes=180),
- Period('2011-03-28 06:00', freq=freq))
+ assert p - offsets.Day(2) == Period('2011-03-30 09:00', freq=freq)
+ assert p - offsets.Hour(3) == Period('2011-04-01 06:00', freq=freq)
+ assert p - np.timedelta64(3, 'h') == Period(
+ '2011-04-01 06:00', freq=freq)
+ assert p - np.timedelta64(3600, 's') == Period(
+ '2011-04-01 08:00', freq=freq)
+ assert p - timedelta(minutes=120) == Period(
+ '2011-04-01 07:00', freq=freq)
+ assert p - timedelta(days=4, minutes=180) == Period(
+ '2011-03-28 06:00', freq=freq)
for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
offsets.Minute(), np.timedelta64(3200, 's'),
@@ -1407,11 +1395,11 @@ def test_period_ops_offset(self):
p = Period('2011-04-01', freq='D')
result = p + offsets.Day()
exp = pd.Period('2011-04-02', freq='D')
- self.assertEqual(result, exp)
+ assert result == exp
result = p - offsets.Day(2)
exp = pd.Period('2011-03-30', freq='D')
- self.assertEqual(result, exp)
+ assert result == exp
msg = r"Input cannot be converted to Period\(freq=D\)"
with tm.assert_raises_regex(period.IncompatibleFrequency, msg):
diff --git a/pandas/tests/scalar/test_period_asfreq.py b/pandas/tests/scalar/test_period_asfreq.py
index d31eeda5c8e3c..7011cfeef90ae 100644
--- a/pandas/tests/scalar/test_period_asfreq.py
+++ b/pandas/tests/scalar/test_period_asfreq.py
@@ -5,17 +5,17 @@
class TestFreqConversion(tm.TestCase):
- "Test frequency conversion of date objects"
+ """Test frequency conversion of date objects"""
def test_asfreq_corner(self):
val = Period(freq='A', year=2007)
result1 = val.asfreq('5t')
result2 = val.asfreq('t')
expected = Period('2007-12-31 23:59', freq='t')
- self.assertEqual(result1.ordinal, expected.ordinal)
- self.assertEqual(result1.freqstr, '5T')
- self.assertEqual(result2.ordinal, expected.ordinal)
- self.assertEqual(result2.freqstr, 'T')
+ assert result1.ordinal == expected.ordinal
+ assert result1.freqstr == '5T'
+ assert result2.ordinal == expected.ordinal
+ assert result2.freqstr == 'T'
def test_conv_annual(self):
# frequency conversion tests: from Annual Frequency
@@ -55,35 +55,35 @@ def test_conv_annual(self):
ival_ANOV_to_D_end = Period(freq='D', year=2007, month=11, day=30)
ival_ANOV_to_D_start = Period(freq='D', year=2006, month=12, day=1)
- self.assertEqual(ival_A.asfreq('Q', 'S'), ival_A_to_Q_start)
- self.assertEqual(ival_A.asfreq('Q', 'e'), ival_A_to_Q_end)
- self.assertEqual(ival_A.asfreq('M', 's'), ival_A_to_M_start)
- self.assertEqual(ival_A.asfreq('M', 'E'), ival_A_to_M_end)
- self.assertEqual(ival_A.asfreq('W', 'S'), ival_A_to_W_start)
- self.assertEqual(ival_A.asfreq('W', 'E'), ival_A_to_W_end)
- self.assertEqual(ival_A.asfreq('B', 'S'), ival_A_to_B_start)
- self.assertEqual(ival_A.asfreq('B', 'E'), ival_A_to_B_end)
- self.assertEqual(ival_A.asfreq('D', 'S'), ival_A_to_D_start)
- self.assertEqual(ival_A.asfreq('D', 'E'), ival_A_to_D_end)
- self.assertEqual(ival_A.asfreq('H', 'S'), ival_A_to_H_start)
- self.assertEqual(ival_A.asfreq('H', 'E'), ival_A_to_H_end)
- self.assertEqual(ival_A.asfreq('min', 'S'), ival_A_to_T_start)
- self.assertEqual(ival_A.asfreq('min', 'E'), ival_A_to_T_end)
- self.assertEqual(ival_A.asfreq('T', 'S'), ival_A_to_T_start)
- self.assertEqual(ival_A.asfreq('T', 'E'), ival_A_to_T_end)
- self.assertEqual(ival_A.asfreq('S', 'S'), ival_A_to_S_start)
- self.assertEqual(ival_A.asfreq('S', 'E'), ival_A_to_S_end)
-
- self.assertEqual(ival_AJAN.asfreq('D', 'S'), ival_AJAN_to_D_start)
- self.assertEqual(ival_AJAN.asfreq('D', 'E'), ival_AJAN_to_D_end)
-
- self.assertEqual(ival_AJUN.asfreq('D', 'S'), ival_AJUN_to_D_start)
- self.assertEqual(ival_AJUN.asfreq('D', 'E'), ival_AJUN_to_D_end)
-
- self.assertEqual(ival_ANOV.asfreq('D', 'S'), ival_ANOV_to_D_start)
- self.assertEqual(ival_ANOV.asfreq('D', 'E'), ival_ANOV_to_D_end)
-
- self.assertEqual(ival_A.asfreq('A'), ival_A)
+ assert ival_A.asfreq('Q', 'S') == ival_A_to_Q_start
+ assert ival_A.asfreq('Q', 'e') == ival_A_to_Q_end
+ assert ival_A.asfreq('M', 's') == ival_A_to_M_start
+ assert ival_A.asfreq('M', 'E') == ival_A_to_M_end
+ assert ival_A.asfreq('W', 'S') == ival_A_to_W_start
+ assert ival_A.asfreq('W', 'E') == ival_A_to_W_end
+ assert ival_A.asfreq('B', 'S') == ival_A_to_B_start
+ assert ival_A.asfreq('B', 'E') == ival_A_to_B_end
+ assert ival_A.asfreq('D', 'S') == ival_A_to_D_start
+ assert ival_A.asfreq('D', 'E') == ival_A_to_D_end
+ assert ival_A.asfreq('H', 'S') == ival_A_to_H_start
+ assert ival_A.asfreq('H', 'E') == ival_A_to_H_end
+ assert ival_A.asfreq('min', 'S') == ival_A_to_T_start
+ assert ival_A.asfreq('min', 'E') == ival_A_to_T_end
+ assert ival_A.asfreq('T', 'S') == ival_A_to_T_start
+ assert ival_A.asfreq('T', 'E') == ival_A_to_T_end
+ assert ival_A.asfreq('S', 'S') == ival_A_to_S_start
+ assert ival_A.asfreq('S', 'E') == ival_A_to_S_end
+
+ assert ival_AJAN.asfreq('D', 'S') == ival_AJAN_to_D_start
+ assert ival_AJAN.asfreq('D', 'E') == ival_AJAN_to_D_end
+
+ assert ival_AJUN.asfreq('D', 'S') == ival_AJUN_to_D_start
+ assert ival_AJUN.asfreq('D', 'E') == ival_AJUN_to_D_end
+
+ assert ival_ANOV.asfreq('D', 'S') == ival_ANOV_to_D_start
+ assert ival_ANOV.asfreq('D', 'E') == ival_ANOV_to_D_end
+
+ assert ival_A.asfreq('A') == ival_A
def test_conv_quarterly(self):
# frequency conversion tests: from Quarterly Frequency
@@ -120,30 +120,30 @@ def test_conv_quarterly(self):
ival_QEJUN_to_D_start = Period(freq='D', year=2006, month=7, day=1)
ival_QEJUN_to_D_end = Period(freq='D', year=2006, month=9, day=30)
- self.assertEqual(ival_Q.asfreq('A'), ival_Q_to_A)
- self.assertEqual(ival_Q_end_of_year.asfreq('A'), ival_Q_to_A)
-
- self.assertEqual(ival_Q.asfreq('M', 'S'), ival_Q_to_M_start)
- self.assertEqual(ival_Q.asfreq('M', 'E'), ival_Q_to_M_end)
- self.assertEqual(ival_Q.asfreq('W', 'S'), ival_Q_to_W_start)
- self.assertEqual(ival_Q.asfreq('W', 'E'), ival_Q_to_W_end)
- self.assertEqual(ival_Q.asfreq('B', 'S'), ival_Q_to_B_start)
- self.assertEqual(ival_Q.asfreq('B', 'E'), ival_Q_to_B_end)
- self.assertEqual(ival_Q.asfreq('D', 'S'), ival_Q_to_D_start)
- self.assertEqual(ival_Q.asfreq('D', 'E'), ival_Q_to_D_end)
- self.assertEqual(ival_Q.asfreq('H', 'S'), ival_Q_to_H_start)
- self.assertEqual(ival_Q.asfreq('H', 'E'), ival_Q_to_H_end)
- self.assertEqual(ival_Q.asfreq('Min', 'S'), ival_Q_to_T_start)
- self.assertEqual(ival_Q.asfreq('Min', 'E'), ival_Q_to_T_end)
- self.assertEqual(ival_Q.asfreq('S', 'S'), ival_Q_to_S_start)
- self.assertEqual(ival_Q.asfreq('S', 'E'), ival_Q_to_S_end)
-
- self.assertEqual(ival_QEJAN.asfreq('D', 'S'), ival_QEJAN_to_D_start)
- self.assertEqual(ival_QEJAN.asfreq('D', 'E'), ival_QEJAN_to_D_end)
- self.assertEqual(ival_QEJUN.asfreq('D', 'S'), ival_QEJUN_to_D_start)
- self.assertEqual(ival_QEJUN.asfreq('D', 'E'), ival_QEJUN_to_D_end)
-
- self.assertEqual(ival_Q.asfreq('Q'), ival_Q)
+ assert ival_Q.asfreq('A') == ival_Q_to_A
+ assert ival_Q_end_of_year.asfreq('A') == ival_Q_to_A
+
+ assert ival_Q.asfreq('M', 'S') == ival_Q_to_M_start
+ assert ival_Q.asfreq('M', 'E') == ival_Q_to_M_end
+ assert ival_Q.asfreq('W', 'S') == ival_Q_to_W_start
+ assert ival_Q.asfreq('W', 'E') == ival_Q_to_W_end
+ assert ival_Q.asfreq('B', 'S') == ival_Q_to_B_start
+ assert ival_Q.asfreq('B', 'E') == ival_Q_to_B_end
+ assert ival_Q.asfreq('D', 'S') == ival_Q_to_D_start
+ assert ival_Q.asfreq('D', 'E') == ival_Q_to_D_end
+ assert ival_Q.asfreq('H', 'S') == ival_Q_to_H_start
+ assert ival_Q.asfreq('H', 'E') == ival_Q_to_H_end
+ assert ival_Q.asfreq('Min', 'S') == ival_Q_to_T_start
+ assert ival_Q.asfreq('Min', 'E') == ival_Q_to_T_end
+ assert ival_Q.asfreq('S', 'S') == ival_Q_to_S_start
+ assert ival_Q.asfreq('S', 'E') == ival_Q_to_S_end
+
+ assert ival_QEJAN.asfreq('D', 'S') == ival_QEJAN_to_D_start
+ assert ival_QEJAN.asfreq('D', 'E') == ival_QEJAN_to_D_end
+ assert ival_QEJUN.asfreq('D', 'S') == ival_QEJUN_to_D_start
+ assert ival_QEJUN.asfreq('D', 'E') == ival_QEJUN_to_D_end
+
+ assert ival_Q.asfreq('Q') == ival_Q
def test_conv_monthly(self):
# frequency conversion tests: from Monthly Frequency
@@ -170,25 +170,25 @@ def test_conv_monthly(self):
ival_M_to_S_end = Period(freq='S', year=2007, month=1, day=31, hour=23,
minute=59, second=59)
- self.assertEqual(ival_M.asfreq('A'), ival_M_to_A)
- self.assertEqual(ival_M_end_of_year.asfreq('A'), ival_M_to_A)
- self.assertEqual(ival_M.asfreq('Q'), ival_M_to_Q)
- self.assertEqual(ival_M_end_of_quarter.asfreq('Q'), ival_M_to_Q)
-
- self.assertEqual(ival_M.asfreq('W', 'S'), ival_M_to_W_start)
- self.assertEqual(ival_M.asfreq('W', 'E'), ival_M_to_W_end)
- self.assertEqual(ival_M.asfreq('B', 'S'), ival_M_to_B_start)
- self.assertEqual(ival_M.asfreq('B', 'E'), ival_M_to_B_end)
- self.assertEqual(ival_M.asfreq('D', 'S'), ival_M_to_D_start)
- self.assertEqual(ival_M.asfreq('D', 'E'), ival_M_to_D_end)
- self.assertEqual(ival_M.asfreq('H', 'S'), ival_M_to_H_start)
- self.assertEqual(ival_M.asfreq('H', 'E'), ival_M_to_H_end)
- self.assertEqual(ival_M.asfreq('Min', 'S'), ival_M_to_T_start)
- self.assertEqual(ival_M.asfreq('Min', 'E'), ival_M_to_T_end)
- self.assertEqual(ival_M.asfreq('S', 'S'), ival_M_to_S_start)
- self.assertEqual(ival_M.asfreq('S', 'E'), ival_M_to_S_end)
-
- self.assertEqual(ival_M.asfreq('M'), ival_M)
+ assert ival_M.asfreq('A') == ival_M_to_A
+ assert ival_M_end_of_year.asfreq('A') == ival_M_to_A
+ assert ival_M.asfreq('Q') == ival_M_to_Q
+ assert ival_M_end_of_quarter.asfreq('Q') == ival_M_to_Q
+
+ assert ival_M.asfreq('W', 'S') == ival_M_to_W_start
+ assert ival_M.asfreq('W', 'E') == ival_M_to_W_end
+ assert ival_M.asfreq('B', 'S') == ival_M_to_B_start
+ assert ival_M.asfreq('B', 'E') == ival_M_to_B_end
+ assert ival_M.asfreq('D', 'S') == ival_M_to_D_start
+ assert ival_M.asfreq('D', 'E') == ival_M_to_D_end
+ assert ival_M.asfreq('H', 'S') == ival_M_to_H_start
+ assert ival_M.asfreq('H', 'E') == ival_M_to_H_end
+ assert ival_M.asfreq('Min', 'S') == ival_M_to_T_start
+ assert ival_M.asfreq('Min', 'E') == ival_M_to_T_end
+ assert ival_M.asfreq('S', 'S') == ival_M_to_S_start
+ assert ival_M.asfreq('S', 'E') == ival_M_to_S_end
+
+ assert ival_M.asfreq('M') == ival_M
def test_conv_weekly(self):
# frequency conversion tests: from Weekly Frequency
@@ -254,45 +254,44 @@ def test_conv_weekly(self):
ival_W_to_S_end = Period(freq='S', year=2007, month=1, day=7, hour=23,
minute=59, second=59)
- self.assertEqual(ival_W.asfreq('A'), ival_W_to_A)
- self.assertEqual(ival_W_end_of_year.asfreq('A'),
- ival_W_to_A_end_of_year)
- self.assertEqual(ival_W.asfreq('Q'), ival_W_to_Q)
- self.assertEqual(ival_W_end_of_quarter.asfreq('Q'),
- ival_W_to_Q_end_of_quarter)
- self.assertEqual(ival_W.asfreq('M'), ival_W_to_M)
- self.assertEqual(ival_W_end_of_month.asfreq('M'),
- ival_W_to_M_end_of_month)
-
- self.assertEqual(ival_W.asfreq('B', 'S'), ival_W_to_B_start)
- self.assertEqual(ival_W.asfreq('B', 'E'), ival_W_to_B_end)
-
- self.assertEqual(ival_W.asfreq('D', 'S'), ival_W_to_D_start)
- self.assertEqual(ival_W.asfreq('D', 'E'), ival_W_to_D_end)
-
- self.assertEqual(ival_WSUN.asfreq('D', 'S'), ival_WSUN_to_D_start)
- self.assertEqual(ival_WSUN.asfreq('D', 'E'), ival_WSUN_to_D_end)
- self.assertEqual(ival_WSAT.asfreq('D', 'S'), ival_WSAT_to_D_start)
- self.assertEqual(ival_WSAT.asfreq('D', 'E'), ival_WSAT_to_D_end)
- self.assertEqual(ival_WFRI.asfreq('D', 'S'), ival_WFRI_to_D_start)
- self.assertEqual(ival_WFRI.asfreq('D', 'E'), ival_WFRI_to_D_end)
- self.assertEqual(ival_WTHU.asfreq('D', 'S'), ival_WTHU_to_D_start)
- self.assertEqual(ival_WTHU.asfreq('D', 'E'), ival_WTHU_to_D_end)
- self.assertEqual(ival_WWED.asfreq('D', 'S'), ival_WWED_to_D_start)
- self.assertEqual(ival_WWED.asfreq('D', 'E'), ival_WWED_to_D_end)
- self.assertEqual(ival_WTUE.asfreq('D', 'S'), ival_WTUE_to_D_start)
- self.assertEqual(ival_WTUE.asfreq('D', 'E'), ival_WTUE_to_D_end)
- self.assertEqual(ival_WMON.asfreq('D', 'S'), ival_WMON_to_D_start)
- self.assertEqual(ival_WMON.asfreq('D', 'E'), ival_WMON_to_D_end)
-
- self.assertEqual(ival_W.asfreq('H', 'S'), ival_W_to_H_start)
- self.assertEqual(ival_W.asfreq('H', 'E'), ival_W_to_H_end)
- self.assertEqual(ival_W.asfreq('Min', 'S'), ival_W_to_T_start)
- self.assertEqual(ival_W.asfreq('Min', 'E'), ival_W_to_T_end)
- self.assertEqual(ival_W.asfreq('S', 'S'), ival_W_to_S_start)
- self.assertEqual(ival_W.asfreq('S', 'E'), ival_W_to_S_end)
-
- self.assertEqual(ival_W.asfreq('W'), ival_W)
+ assert ival_W.asfreq('A') == ival_W_to_A
+ assert ival_W_end_of_year.asfreq('A') == ival_W_to_A_end_of_year
+
+ assert ival_W.asfreq('Q') == ival_W_to_Q
+ assert ival_W_end_of_quarter.asfreq('Q') == ival_W_to_Q_end_of_quarter
+
+ assert ival_W.asfreq('M') == ival_W_to_M
+ assert ival_W_end_of_month.asfreq('M') == ival_W_to_M_end_of_month
+
+ assert ival_W.asfreq('B', 'S') == ival_W_to_B_start
+ assert ival_W.asfreq('B', 'E') == ival_W_to_B_end
+
+ assert ival_W.asfreq('D', 'S') == ival_W_to_D_start
+ assert ival_W.asfreq('D', 'E') == ival_W_to_D_end
+
+ assert ival_WSUN.asfreq('D', 'S') == ival_WSUN_to_D_start
+ assert ival_WSUN.asfreq('D', 'E') == ival_WSUN_to_D_end
+ assert ival_WSAT.asfreq('D', 'S') == ival_WSAT_to_D_start
+ assert ival_WSAT.asfreq('D', 'E') == ival_WSAT_to_D_end
+ assert ival_WFRI.asfreq('D', 'S') == ival_WFRI_to_D_start
+ assert ival_WFRI.asfreq('D', 'E') == ival_WFRI_to_D_end
+ assert ival_WTHU.asfreq('D', 'S') == ival_WTHU_to_D_start
+ assert ival_WTHU.asfreq('D', 'E') == ival_WTHU_to_D_end
+ assert ival_WWED.asfreq('D', 'S') == ival_WWED_to_D_start
+ assert ival_WWED.asfreq('D', 'E') == ival_WWED_to_D_end
+ assert ival_WTUE.asfreq('D', 'S') == ival_WTUE_to_D_start
+ assert ival_WTUE.asfreq('D', 'E') == ival_WTUE_to_D_end
+ assert ival_WMON.asfreq('D', 'S') == ival_WMON_to_D_start
+ assert ival_WMON.asfreq('D', 'E') == ival_WMON_to_D_end
+
+ assert ival_W.asfreq('H', 'S') == ival_W_to_H_start
+ assert ival_W.asfreq('H', 'E') == ival_W_to_H_end
+ assert ival_W.asfreq('Min', 'S') == ival_W_to_T_start
+ assert ival_W.asfreq('Min', 'E') == ival_W_to_T_end
+ assert ival_W.asfreq('S', 'S') == ival_W_to_S_start
+ assert ival_W.asfreq('S', 'E') == ival_W_to_S_end
+
+ assert ival_W.asfreq('W') == ival_W
msg = pd.tseries.frequencies._INVALID_FREQ_ERROR
with tm.assert_raises_regex(ValueError, msg):
@@ -342,25 +341,25 @@ def test_conv_business(self):
ival_B_to_S_end = Period(freq='S', year=2007, month=1, day=1, hour=23,
minute=59, second=59)
- self.assertEqual(ival_B.asfreq('A'), ival_B_to_A)
- self.assertEqual(ival_B_end_of_year.asfreq('A'), ival_B_to_A)
- self.assertEqual(ival_B.asfreq('Q'), ival_B_to_Q)
- self.assertEqual(ival_B_end_of_quarter.asfreq('Q'), ival_B_to_Q)
- self.assertEqual(ival_B.asfreq('M'), ival_B_to_M)
- self.assertEqual(ival_B_end_of_month.asfreq('M'), ival_B_to_M)
- self.assertEqual(ival_B.asfreq('W'), ival_B_to_W)
- self.assertEqual(ival_B_end_of_week.asfreq('W'), ival_B_to_W)
+ assert ival_B.asfreq('A') == ival_B_to_A
+ assert ival_B_end_of_year.asfreq('A') == ival_B_to_A
+ assert ival_B.asfreq('Q') == ival_B_to_Q
+ assert ival_B_end_of_quarter.asfreq('Q') == ival_B_to_Q
+ assert ival_B.asfreq('M') == ival_B_to_M
+ assert ival_B_end_of_month.asfreq('M') == ival_B_to_M
+ assert ival_B.asfreq('W') == ival_B_to_W
+ assert ival_B_end_of_week.asfreq('W') == ival_B_to_W
- self.assertEqual(ival_B.asfreq('D'), ival_B_to_D)
+ assert ival_B.asfreq('D') == ival_B_to_D
- self.assertEqual(ival_B.asfreq('H', 'S'), ival_B_to_H_start)
- self.assertEqual(ival_B.asfreq('H', 'E'), ival_B_to_H_end)
- self.assertEqual(ival_B.asfreq('Min', 'S'), ival_B_to_T_start)
- self.assertEqual(ival_B.asfreq('Min', 'E'), ival_B_to_T_end)
- self.assertEqual(ival_B.asfreq('S', 'S'), ival_B_to_S_start)
- self.assertEqual(ival_B.asfreq('S', 'E'), ival_B_to_S_end)
+ assert ival_B.asfreq('H', 'S') == ival_B_to_H_start
+ assert ival_B.asfreq('H', 'E') == ival_B_to_H_end
+ assert ival_B.asfreq('Min', 'S') == ival_B_to_T_start
+ assert ival_B.asfreq('Min', 'E') == ival_B_to_T_end
+ assert ival_B.asfreq('S', 'S') == ival_B_to_S_start
+ assert ival_B.asfreq('S', 'E') == ival_B_to_S_end
- self.assertEqual(ival_B.asfreq('B'), ival_B)
+ assert ival_B.asfreq('B') == ival_B
def test_conv_daily(self):
# frequency conversion tests: from Business Frequency"
@@ -405,39 +404,36 @@ def test_conv_daily(self):
ival_D_to_S_end = Period(freq='S', year=2007, month=1, day=1, hour=23,
minute=59, second=59)
- self.assertEqual(ival_D.asfreq('A'), ival_D_to_A)
-
- self.assertEqual(ival_D_end_of_quarter.asfreq('A-JAN'),
- ival_Deoq_to_AJAN)
- self.assertEqual(ival_D_end_of_quarter.asfreq('A-JUN'),
- ival_Deoq_to_AJUN)
- self.assertEqual(ival_D_end_of_quarter.asfreq('A-DEC'),
- ival_Deoq_to_ADEC)
-
- self.assertEqual(ival_D_end_of_year.asfreq('A'), ival_D_to_A)
- self.assertEqual(ival_D_end_of_quarter.asfreq('Q'), ival_D_to_QEDEC)
- self.assertEqual(ival_D.asfreq("Q-JAN"), ival_D_to_QEJAN)
- self.assertEqual(ival_D.asfreq("Q-JUN"), ival_D_to_QEJUN)
- self.assertEqual(ival_D.asfreq("Q-DEC"), ival_D_to_QEDEC)
- self.assertEqual(ival_D.asfreq('M'), ival_D_to_M)
- self.assertEqual(ival_D_end_of_month.asfreq('M'), ival_D_to_M)
- self.assertEqual(ival_D.asfreq('W'), ival_D_to_W)
- self.assertEqual(ival_D_end_of_week.asfreq('W'), ival_D_to_W)
-
- self.assertEqual(ival_D_friday.asfreq('B'), ival_B_friday)
- self.assertEqual(ival_D_saturday.asfreq('B', 'S'), ival_B_friday)
- self.assertEqual(ival_D_saturday.asfreq('B', 'E'), ival_B_monday)
- self.assertEqual(ival_D_sunday.asfreq('B', 'S'), ival_B_friday)
- self.assertEqual(ival_D_sunday.asfreq('B', 'E'), ival_B_monday)
-
- self.assertEqual(ival_D.asfreq('H', 'S'), ival_D_to_H_start)
- self.assertEqual(ival_D.asfreq('H', 'E'), ival_D_to_H_end)
- self.assertEqual(ival_D.asfreq('Min', 'S'), ival_D_to_T_start)
- self.assertEqual(ival_D.asfreq('Min', 'E'), ival_D_to_T_end)
- self.assertEqual(ival_D.asfreq('S', 'S'), ival_D_to_S_start)
- self.assertEqual(ival_D.asfreq('S', 'E'), ival_D_to_S_end)
-
- self.assertEqual(ival_D.asfreq('D'), ival_D)
+ assert ival_D.asfreq('A') == ival_D_to_A
+
+ assert ival_D_end_of_quarter.asfreq('A-JAN') == ival_Deoq_to_AJAN
+ assert ival_D_end_of_quarter.asfreq('A-JUN') == ival_Deoq_to_AJUN
+ assert ival_D_end_of_quarter.asfreq('A-DEC') == ival_Deoq_to_ADEC
+
+ assert ival_D_end_of_year.asfreq('A') == ival_D_to_A
+ assert ival_D_end_of_quarter.asfreq('Q') == ival_D_to_QEDEC
+ assert ival_D.asfreq("Q-JAN") == ival_D_to_QEJAN
+ assert ival_D.asfreq("Q-JUN") == ival_D_to_QEJUN
+ assert ival_D.asfreq("Q-DEC") == ival_D_to_QEDEC
+ assert ival_D.asfreq('M') == ival_D_to_M
+ assert ival_D_end_of_month.asfreq('M') == ival_D_to_M
+ assert ival_D.asfreq('W') == ival_D_to_W
+ assert ival_D_end_of_week.asfreq('W') == ival_D_to_W
+
+ assert ival_D_friday.asfreq('B') == ival_B_friday
+ assert ival_D_saturday.asfreq('B', 'S') == ival_B_friday
+ assert ival_D_saturday.asfreq('B', 'E') == ival_B_monday
+ assert ival_D_sunday.asfreq('B', 'S') == ival_B_friday
+ assert ival_D_sunday.asfreq('B', 'E') == ival_B_monday
+
+ assert ival_D.asfreq('H', 'S') == ival_D_to_H_start
+ assert ival_D.asfreq('H', 'E') == ival_D_to_H_end
+ assert ival_D.asfreq('Min', 'S') == ival_D_to_T_start
+ assert ival_D.asfreq('Min', 'E') == ival_D_to_T_end
+ assert ival_D.asfreq('S', 'S') == ival_D_to_S_start
+ assert ival_D.asfreq('S', 'E') == ival_D_to_S_end
+
+ assert ival_D.asfreq('D') == ival_D
def test_conv_hourly(self):
# frequency conversion tests: from Hourly Frequency"
@@ -472,25 +468,25 @@ def test_conv_hourly(self):
ival_H_to_S_end = Period(freq='S', year=2007, month=1, day=1, hour=0,
minute=59, second=59)
- self.assertEqual(ival_H.asfreq('A'), ival_H_to_A)
- self.assertEqual(ival_H_end_of_year.asfreq('A'), ival_H_to_A)
- self.assertEqual(ival_H.asfreq('Q'), ival_H_to_Q)
- self.assertEqual(ival_H_end_of_quarter.asfreq('Q'), ival_H_to_Q)
- self.assertEqual(ival_H.asfreq('M'), ival_H_to_M)
- self.assertEqual(ival_H_end_of_month.asfreq('M'), ival_H_to_M)
- self.assertEqual(ival_H.asfreq('W'), ival_H_to_W)
- self.assertEqual(ival_H_end_of_week.asfreq('W'), ival_H_to_W)
- self.assertEqual(ival_H.asfreq('D'), ival_H_to_D)
- self.assertEqual(ival_H_end_of_day.asfreq('D'), ival_H_to_D)
- self.assertEqual(ival_H.asfreq('B'), ival_H_to_B)
- self.assertEqual(ival_H_end_of_bus.asfreq('B'), ival_H_to_B)
-
- self.assertEqual(ival_H.asfreq('Min', 'S'), ival_H_to_T_start)
- self.assertEqual(ival_H.asfreq('Min', 'E'), ival_H_to_T_end)
- self.assertEqual(ival_H.asfreq('S', 'S'), ival_H_to_S_start)
- self.assertEqual(ival_H.asfreq('S', 'E'), ival_H_to_S_end)
-
- self.assertEqual(ival_H.asfreq('H'), ival_H)
+ assert ival_H.asfreq('A') == ival_H_to_A
+ assert ival_H_end_of_year.asfreq('A') == ival_H_to_A
+ assert ival_H.asfreq('Q') == ival_H_to_Q
+ assert ival_H_end_of_quarter.asfreq('Q') == ival_H_to_Q
+ assert ival_H.asfreq('M') == ival_H_to_M
+ assert ival_H_end_of_month.asfreq('M') == ival_H_to_M
+ assert ival_H.asfreq('W') == ival_H_to_W
+ assert ival_H_end_of_week.asfreq('W') == ival_H_to_W
+ assert ival_H.asfreq('D') == ival_H_to_D
+ assert ival_H_end_of_day.asfreq('D') == ival_H_to_D
+ assert ival_H.asfreq('B') == ival_H_to_B
+ assert ival_H_end_of_bus.asfreq('B') == ival_H_to_B
+
+ assert ival_H.asfreq('Min', 'S') == ival_H_to_T_start
+ assert ival_H.asfreq('Min', 'E') == ival_H_to_T_end
+ assert ival_H.asfreq('S', 'S') == ival_H_to_S_start
+ assert ival_H.asfreq('S', 'E') == ival_H_to_S_end
+
+ assert ival_H.asfreq('H') == ival_H
def test_conv_minutely(self):
# frequency conversion tests: from Minutely Frequency"
@@ -525,25 +521,25 @@ def test_conv_minutely(self):
ival_T_to_S_end = Period(freq='S', year=2007, month=1, day=1, hour=0,
minute=0, second=59)
- self.assertEqual(ival_T.asfreq('A'), ival_T_to_A)
- self.assertEqual(ival_T_end_of_year.asfreq('A'), ival_T_to_A)
- self.assertEqual(ival_T.asfreq('Q'), ival_T_to_Q)
- self.assertEqual(ival_T_end_of_quarter.asfreq('Q'), ival_T_to_Q)
- self.assertEqual(ival_T.asfreq('M'), ival_T_to_M)
- self.assertEqual(ival_T_end_of_month.asfreq('M'), ival_T_to_M)
- self.assertEqual(ival_T.asfreq('W'), ival_T_to_W)
- self.assertEqual(ival_T_end_of_week.asfreq('W'), ival_T_to_W)
- self.assertEqual(ival_T.asfreq('D'), ival_T_to_D)
- self.assertEqual(ival_T_end_of_day.asfreq('D'), ival_T_to_D)
- self.assertEqual(ival_T.asfreq('B'), ival_T_to_B)
- self.assertEqual(ival_T_end_of_bus.asfreq('B'), ival_T_to_B)
- self.assertEqual(ival_T.asfreq('H'), ival_T_to_H)
- self.assertEqual(ival_T_end_of_hour.asfreq('H'), ival_T_to_H)
-
- self.assertEqual(ival_T.asfreq('S', 'S'), ival_T_to_S_start)
- self.assertEqual(ival_T.asfreq('S', 'E'), ival_T_to_S_end)
-
- self.assertEqual(ival_T.asfreq('Min'), ival_T)
+ assert ival_T.asfreq('A') == ival_T_to_A
+ assert ival_T_end_of_year.asfreq('A') == ival_T_to_A
+ assert ival_T.asfreq('Q') == ival_T_to_Q
+ assert ival_T_end_of_quarter.asfreq('Q') == ival_T_to_Q
+ assert ival_T.asfreq('M') == ival_T_to_M
+ assert ival_T_end_of_month.asfreq('M') == ival_T_to_M
+ assert ival_T.asfreq('W') == ival_T_to_W
+ assert ival_T_end_of_week.asfreq('W') == ival_T_to_W
+ assert ival_T.asfreq('D') == ival_T_to_D
+ assert ival_T_end_of_day.asfreq('D') == ival_T_to_D
+ assert ival_T.asfreq('B') == ival_T_to_B
+ assert ival_T_end_of_bus.asfreq('B') == ival_T_to_B
+ assert ival_T.asfreq('H') == ival_T_to_H
+ assert ival_T_end_of_hour.asfreq('H') == ival_T_to_H
+
+ assert ival_T.asfreq('S', 'S') == ival_T_to_S_start
+ assert ival_T.asfreq('S', 'E') == ival_T_to_S_end
+
+ assert ival_T.asfreq('Min') == ival_T
def test_conv_secondly(self):
# frequency conversion tests: from Secondly Frequency"
@@ -577,24 +573,24 @@ def test_conv_secondly(self):
ival_S_to_T = Period(freq='Min', year=2007, month=1, day=1, hour=0,
minute=0)
- self.assertEqual(ival_S.asfreq('A'), ival_S_to_A)
- self.assertEqual(ival_S_end_of_year.asfreq('A'), ival_S_to_A)
- self.assertEqual(ival_S.asfreq('Q'), ival_S_to_Q)
- self.assertEqual(ival_S_end_of_quarter.asfreq('Q'), ival_S_to_Q)
- self.assertEqual(ival_S.asfreq('M'), ival_S_to_M)
- self.assertEqual(ival_S_end_of_month.asfreq('M'), ival_S_to_M)
- self.assertEqual(ival_S.asfreq('W'), ival_S_to_W)
- self.assertEqual(ival_S_end_of_week.asfreq('W'), ival_S_to_W)
- self.assertEqual(ival_S.asfreq('D'), ival_S_to_D)
- self.assertEqual(ival_S_end_of_day.asfreq('D'), ival_S_to_D)
- self.assertEqual(ival_S.asfreq('B'), ival_S_to_B)
- self.assertEqual(ival_S_end_of_bus.asfreq('B'), ival_S_to_B)
- self.assertEqual(ival_S.asfreq('H'), ival_S_to_H)
- self.assertEqual(ival_S_end_of_hour.asfreq('H'), ival_S_to_H)
- self.assertEqual(ival_S.asfreq('Min'), ival_S_to_T)
- self.assertEqual(ival_S_end_of_minute.asfreq('Min'), ival_S_to_T)
-
- self.assertEqual(ival_S.asfreq('S'), ival_S)
+ assert ival_S.asfreq('A') == ival_S_to_A
+ assert ival_S_end_of_year.asfreq('A') == ival_S_to_A
+ assert ival_S.asfreq('Q') == ival_S_to_Q
+ assert ival_S_end_of_quarter.asfreq('Q') == ival_S_to_Q
+ assert ival_S.asfreq('M') == ival_S_to_M
+ assert ival_S_end_of_month.asfreq('M') == ival_S_to_M
+ assert ival_S.asfreq('W') == ival_S_to_W
+ assert ival_S_end_of_week.asfreq('W') == ival_S_to_W
+ assert ival_S.asfreq('D') == ival_S_to_D
+ assert ival_S_end_of_day.asfreq('D') == ival_S_to_D
+ assert ival_S.asfreq('B') == ival_S_to_B
+ assert ival_S_end_of_bus.asfreq('B') == ival_S_to_B
+ assert ival_S.asfreq('H') == ival_S_to_H
+ assert ival_S_end_of_hour.asfreq('H') == ival_S_to_H
+ assert ival_S.asfreq('Min') == ival_S_to_T
+ assert ival_S_end_of_minute.asfreq('Min') == ival_S_to_T
+
+ assert ival_S.asfreq('S') == ival_S
def test_asfreq_mult(self):
# normal freq to mult freq
@@ -604,17 +600,17 @@ def test_asfreq_mult(self):
result = p.asfreq(freq)
expected = Period('2007', freq='3A')
- self.assertEqual(result, expected)
- self.assertEqual(result.ordinal, expected.ordinal)
- self.assertEqual(result.freq, expected.freq)
+ assert result == expected
+ assert result.ordinal == expected.ordinal
+ assert result.freq == expected.freq
# ordinal will not change
for freq in ['3A', offsets.YearEnd(3)]:
result = p.asfreq(freq, how='S')
expected = Period('2007', freq='3A')
- self.assertEqual(result, expected)
- self.assertEqual(result.ordinal, expected.ordinal)
- self.assertEqual(result.freq, expected.freq)
+ assert result == expected
+ assert result.ordinal == expected.ordinal
+ assert result.freq == expected.freq
# mult freq to normal freq
p = Period(freq='3A', year=2007)
@@ -623,49 +619,49 @@ def test_asfreq_mult(self):
result = p.asfreq(freq)
expected = Period('2009', freq='A')
- self.assertEqual(result, expected)
- self.assertEqual(result.ordinal, expected.ordinal)
- self.assertEqual(result.freq, expected.freq)
+ assert result == expected
+ assert result.ordinal == expected.ordinal
+ assert result.freq == expected.freq
# ordinal will not change
for freq in ['A', offsets.YearEnd()]:
result = p.asfreq(freq, how='S')
expected = Period('2007', freq='A')
- self.assertEqual(result, expected)
- self.assertEqual(result.ordinal, expected.ordinal)
- self.assertEqual(result.freq, expected.freq)
+ assert result == expected
+ assert result.ordinal == expected.ordinal
+ assert result.freq == expected.freq
p = Period(freq='A', year=2007)
for freq in ['2M', offsets.MonthEnd(2)]:
result = p.asfreq(freq)
expected = Period('2007-12', freq='2M')
- self.assertEqual(result, expected)
- self.assertEqual(result.ordinal, expected.ordinal)
- self.assertEqual(result.freq, expected.freq)
+ assert result == expected
+ assert result.ordinal == expected.ordinal
+ assert result.freq == expected.freq
for freq in ['2M', offsets.MonthEnd(2)]:
result = p.asfreq(freq, how='S')
expected = Period('2007-01', freq='2M')
- self.assertEqual(result, expected)
- self.assertEqual(result.ordinal, expected.ordinal)
- self.assertEqual(result.freq, expected.freq)
+ assert result == expected
+ assert result.ordinal == expected.ordinal
+ assert result.freq == expected.freq
p = Period(freq='3A', year=2007)
for freq in ['2M', offsets.MonthEnd(2)]:
result = p.asfreq(freq)
expected = Period('2009-12', freq='2M')
- self.assertEqual(result, expected)
- self.assertEqual(result.ordinal, expected.ordinal)
- self.assertEqual(result.freq, expected.freq)
+ assert result == expected
+ assert result.ordinal == expected.ordinal
+ assert result.freq == expected.freq
for freq in ['2M', offsets.MonthEnd(2)]:
result = p.asfreq(freq, how='S')
expected = Period('2007-01', freq='2M')
- self.assertEqual(result, expected)
- self.assertEqual(result.ordinal, expected.ordinal)
- self.assertEqual(result.freq, expected.freq)
+ assert result == expected
+ assert result.ordinal == expected.ordinal
+ assert result.freq == expected.freq
def test_asfreq_combined(self):
# normal freq to combined freq
@@ -675,9 +671,9 @@ def test_asfreq_combined(self):
expected = Period('2007', freq='25H')
for freq, how in zip(['1D1H', '1H1D'], ['E', 'S']):
result = p.asfreq(freq, how=how)
- self.assertEqual(result, expected)
- self.assertEqual(result.ordinal, expected.ordinal)
- self.assertEqual(result.freq, expected.freq)
+ assert result == expected
+ assert result.ordinal == expected.ordinal
+ assert result.freq == expected.freq
# combined freq to normal freq
p1 = Period(freq='1D1H', year=2007)
@@ -687,29 +683,28 @@ def test_asfreq_combined(self):
result1 = p1.asfreq('H')
result2 = p2.asfreq('H')
expected = Period('2007-01-02', freq='H')
- self.assertEqual(result1, expected)
- self.assertEqual(result1.ordinal, expected.ordinal)
- self.assertEqual(result1.freq, expected.freq)
- self.assertEqual(result2, expected)
- self.assertEqual(result2.ordinal, expected.ordinal)
- self.assertEqual(result2.freq, expected.freq)
+ assert result1 == expected
+ assert result1.ordinal == expected.ordinal
+ assert result1.freq == expected.freq
+ assert result2 == expected
+ assert result2.ordinal == expected.ordinal
+ assert result2.freq == expected.freq
# ordinal will not change
result1 = p1.asfreq('H', how='S')
result2 = p2.asfreq('H', how='S')
expected = Period('2007-01-01', freq='H')
- self.assertEqual(result1, expected)
- self.assertEqual(result1.ordinal, expected.ordinal)
- self.assertEqual(result1.freq, expected.freq)
- self.assertEqual(result2, expected)
- self.assertEqual(result2.ordinal, expected.ordinal)
- self.assertEqual(result2.freq, expected.freq)
+ assert result1 == expected
+ assert result1.ordinal == expected.ordinal
+ assert result1.freq == expected.freq
+ assert result2 == expected
+ assert result2.ordinal == expected.ordinal
+ assert result2.freq == expected.freq
def test_asfreq_MS(self):
initial = Period("2013")
- self.assertEqual(initial.asfreq(freq="M", how="S"),
- Period('2013-01', 'M'))
+ assert initial.asfreq(freq="M", how="S") == Period('2013-01', 'M')
msg = pd.tseries.frequencies._INVALID_FREQ_ERROR
with tm.assert_raises_regex(ValueError, msg):
diff --git a/pandas/tests/scalar/test_timedelta.py b/pandas/tests/scalar/test_timedelta.py
index 9efd180afc2da..faddbcc84109f 100644
--- a/pandas/tests/scalar/test_timedelta.py
+++ b/pandas/tests/scalar/test_timedelta.py
@@ -21,22 +21,20 @@ def setUp(self):
def test_construction(self):
expected = np.timedelta64(10, 'D').astype('m8[ns]').view('i8')
- self.assertEqual(Timedelta(10, unit='d').value, expected)
- self.assertEqual(Timedelta(10.0, unit='d').value, expected)
- self.assertEqual(Timedelta('10 days').value, expected)
- self.assertEqual(Timedelta(days=10).value, expected)
- self.assertEqual(Timedelta(days=10.0).value, expected)
+ assert Timedelta(10, unit='d').value == expected
+ assert Timedelta(10.0, unit='d').value == expected
+ assert Timedelta('10 days').value == expected
+ assert Timedelta(days=10).value == expected
+ assert Timedelta(days=10.0).value == expected
expected += np.timedelta64(10, 's').astype('m8[ns]').view('i8')
- self.assertEqual(Timedelta('10 days 00:00:10').value, expected)
- self.assertEqual(Timedelta(days=10, seconds=10).value, expected)
- self.assertEqual(
- Timedelta(days=10, milliseconds=10 * 1000).value, expected)
- self.assertEqual(
- Timedelta(days=10, microseconds=10 * 1000 * 1000).value, expected)
-
- # test construction with np dtypes
- # GH 8757
+ assert Timedelta('10 days 00:00:10').value == expected
+ assert Timedelta(days=10, seconds=10).value == expected
+ assert Timedelta(days=10, milliseconds=10 * 1000).value == expected
+ assert (Timedelta(days=10, microseconds=10 * 1000 * 1000)
+ .value == expected)
+
+ # gh-8757: test construction with np dtypes
timedelta_kwargs = {'days': 'D',
'seconds': 's',
'microseconds': 'us',
@@ -48,70 +46,64 @@ def test_construction(self):
np.float16]
for npdtype in npdtypes:
for pykwarg, npkwarg in timedelta_kwargs.items():
- expected = np.timedelta64(1,
- npkwarg).astype('m8[ns]').view('i8')
- self.assertEqual(
- Timedelta(**{pykwarg: npdtype(1)}).value, expected)
+ expected = np.timedelta64(1, npkwarg).astype(
+ 'm8[ns]').view('i8')
+ assert Timedelta(**{pykwarg: npdtype(1)}).value == expected
# rounding cases
- self.assertEqual(Timedelta(82739999850000).value, 82739999850000)
+ assert Timedelta(82739999850000).value == 82739999850000
assert ('0 days 22:58:59.999850' in str(Timedelta(82739999850000)))
- self.assertEqual(Timedelta(123072001000000).value, 123072001000000)
+ assert Timedelta(123072001000000).value == 123072001000000
assert ('1 days 10:11:12.001' in str(Timedelta(123072001000000)))
# string conversion with/without leading zero
# GH 9570
- self.assertEqual(Timedelta('0:00:00'), timedelta(hours=0))
- self.assertEqual(Timedelta('00:00:00'), timedelta(hours=0))
- self.assertEqual(Timedelta('-1:00:00'), -timedelta(hours=1))
- self.assertEqual(Timedelta('-01:00:00'), -timedelta(hours=1))
+ assert Timedelta('0:00:00') == timedelta(hours=0)
+ assert Timedelta('00:00:00') == timedelta(hours=0)
+ assert Timedelta('-1:00:00') == -timedelta(hours=1)
+ assert Timedelta('-01:00:00') == -timedelta(hours=1)
# more strings & abbrevs
# GH 8190
- self.assertEqual(Timedelta('1 h'), timedelta(hours=1))
- self.assertEqual(Timedelta('1 hour'), timedelta(hours=1))
- self.assertEqual(Timedelta('1 hr'), timedelta(hours=1))
- self.assertEqual(Timedelta('1 hours'), timedelta(hours=1))
- self.assertEqual(Timedelta('-1 hours'), -timedelta(hours=1))
- self.assertEqual(Timedelta('1 m'), timedelta(minutes=1))
- self.assertEqual(Timedelta('1.5 m'), timedelta(seconds=90))
- self.assertEqual(Timedelta('1 minute'), timedelta(minutes=1))
- self.assertEqual(Timedelta('1 minutes'), timedelta(minutes=1))
- self.assertEqual(Timedelta('1 s'), timedelta(seconds=1))
- self.assertEqual(Timedelta('1 second'), timedelta(seconds=1))
- self.assertEqual(Timedelta('1 seconds'), timedelta(seconds=1))
- self.assertEqual(Timedelta('1 ms'), timedelta(milliseconds=1))
- self.assertEqual(Timedelta('1 milli'), timedelta(milliseconds=1))
- self.assertEqual(Timedelta('1 millisecond'), timedelta(milliseconds=1))
- self.assertEqual(Timedelta('1 us'), timedelta(microseconds=1))
- self.assertEqual(Timedelta('1 micros'), timedelta(microseconds=1))
- self.assertEqual(Timedelta('1 microsecond'), timedelta(microseconds=1))
- self.assertEqual(Timedelta('1.5 microsecond'),
- Timedelta('00:00:00.000001500'))
- self.assertEqual(Timedelta('1 ns'), Timedelta('00:00:00.000000001'))
- self.assertEqual(Timedelta('1 nano'), Timedelta('00:00:00.000000001'))
- self.assertEqual(Timedelta('1 nanosecond'),
- Timedelta('00:00:00.000000001'))
+ assert Timedelta('1 h') == timedelta(hours=1)
+ assert Timedelta('1 hour') == timedelta(hours=1)
+ assert Timedelta('1 hr') == timedelta(hours=1)
+ assert Timedelta('1 hours') == timedelta(hours=1)
+ assert Timedelta('-1 hours') == -timedelta(hours=1)
+ assert Timedelta('1 m') == timedelta(minutes=1)
+ assert Timedelta('1.5 m') == timedelta(seconds=90)
+ assert Timedelta('1 minute') == timedelta(minutes=1)
+ assert Timedelta('1 minutes') == timedelta(minutes=1)
+ assert Timedelta('1 s') == timedelta(seconds=1)
+ assert Timedelta('1 second') == timedelta(seconds=1)
+ assert Timedelta('1 seconds') == timedelta(seconds=1)
+ assert Timedelta('1 ms') == timedelta(milliseconds=1)
+ assert Timedelta('1 milli') == timedelta(milliseconds=1)
+ assert Timedelta('1 millisecond') == timedelta(milliseconds=1)
+ assert Timedelta('1 us') == timedelta(microseconds=1)
+ assert Timedelta('1 micros') == timedelta(microseconds=1)
+ assert Timedelta('1 microsecond') == timedelta(microseconds=1)
+ assert Timedelta('1.5 microsecond') == Timedelta('00:00:00.000001500')
+ assert Timedelta('1 ns') == Timedelta('00:00:00.000000001')
+ assert Timedelta('1 nano') == Timedelta('00:00:00.000000001')
+ assert Timedelta('1 nanosecond') == Timedelta('00:00:00.000000001')
# combos
- self.assertEqual(Timedelta('10 days 1 hour'),
- timedelta(days=10, hours=1))
- self.assertEqual(Timedelta('10 days 1 h'), timedelta(days=10, hours=1))
- self.assertEqual(Timedelta('10 days 1 h 1m 1s'), timedelta(
- days=10, hours=1, minutes=1, seconds=1))
- self.assertEqual(Timedelta('-10 days 1 h 1m 1s'), -
- timedelta(days=10, hours=1, minutes=1, seconds=1))
- self.assertEqual(Timedelta('-10 days 1 h 1m 1s'), -
- timedelta(days=10, hours=1, minutes=1, seconds=1))
- self.assertEqual(Timedelta('-10 days 1 h 1m 1s 3us'), -
- timedelta(days=10, hours=1, minutes=1,
- seconds=1, microseconds=3))
- self.assertEqual(Timedelta('-10 days 1 h 1.5m 1s 3us'), -
- timedelta(days=10, hours=1, minutes=1,
- seconds=31, microseconds=3))
-
- # currently invalid as it has a - on the hhmmdd part (only allowed on
- # the days)
+ assert Timedelta('10 days 1 hour') == timedelta(days=10, hours=1)
+ assert Timedelta('10 days 1 h') == timedelta(days=10, hours=1)
+ assert Timedelta('10 days 1 h 1m 1s') == timedelta(
+ days=10, hours=1, minutes=1, seconds=1)
+ assert Timedelta('-10 days 1 h 1m 1s') == -timedelta(
+ days=10, hours=1, minutes=1, seconds=1)
+ assert Timedelta('-10 days 1 h 1m 1s') == -timedelta(
+ days=10, hours=1, minutes=1, seconds=1)
+ assert Timedelta('-10 days 1 h 1m 1s 3us') == -timedelta(
+ days=10, hours=1, minutes=1, seconds=1, microseconds=3)
+ assert Timedelta('-10 days 1 h 1.5m 1s 3us') == -timedelta(
+ days=10, hours=1, minutes=1, seconds=31, microseconds=3)
+
+ # Currently invalid as it has a - on the hh:mm:ss part
+ # (only allowed on the days)
pytest.raises(ValueError,
lambda: Timedelta('-10 days -1 h 1.5m 1s 3us'))
@@ -139,34 +131,33 @@ def test_construction(self):
'1ns', '-23:59:59.999999999']:
td = Timedelta(v)
- self.assertEqual(Timedelta(td.value), td)
+ assert Timedelta(td.value) == td
# str does not normally display nanos
if not td.nanoseconds:
- self.assertEqual(Timedelta(str(td)), td)
- self.assertEqual(Timedelta(td._repr_base(format='all')), td)
+ assert Timedelta(str(td)) == td
+ assert Timedelta(td._repr_base(format='all')) == td
# floats
expected = np.timedelta64(
10, 's').astype('m8[ns]').view('i8') + np.timedelta64(
500, 'ms').astype('m8[ns]').view('i8')
- self.assertEqual(Timedelta(10.5, unit='s').value, expected)
+ assert Timedelta(10.5, unit='s').value == expected
# offset
- self.assertEqual(to_timedelta(pd.offsets.Hour(2)),
- Timedelta('0 days, 02:00:00'))
- self.assertEqual(Timedelta(pd.offsets.Hour(2)),
- Timedelta('0 days, 02:00:00'))
- self.assertEqual(Timedelta(pd.offsets.Second(2)),
- Timedelta('0 days, 00:00:02'))
-
- # unicode
- # GH 11995
+ assert (to_timedelta(pd.offsets.Hour(2)) ==
+ Timedelta('0 days, 02:00:00'))
+ assert (Timedelta(pd.offsets.Hour(2)) ==
+ Timedelta('0 days, 02:00:00'))
+ assert (Timedelta(pd.offsets.Second(2)) ==
+ Timedelta('0 days, 00:00:02'))
+
+ # gh-11995: unicode
expected = Timedelta('1H')
result = pd.Timedelta(u'1H')
- self.assertEqual(result, expected)
- self.assertEqual(to_timedelta(pd.offsets.Hour(2)),
- Timedelta(u'0 days, 02:00:00'))
+ assert result == expected
+ assert (to_timedelta(pd.offsets.Hour(2)) ==
+ Timedelta(u'0 days, 02:00:00'))
pytest.raises(ValueError, lambda: Timedelta(u'foo bar'))
@@ -176,7 +167,7 @@ def test_overflow_on_construction(self):
pytest.raises(OverflowError, pd.Timedelta, value)
def test_total_seconds_scalar(self):
- # GH 10939
+ # see gh-10939
rng = Timedelta('1 days, 10:11:12.100123456')
expt = 1 * 86400 + 10 * 3600 + 11 * 60 + 12 + 100123456. / 1e9
tm.assert_almost_equal(rng.total_seconds(), expt)
@@ -186,14 +177,14 @@ def test_total_seconds_scalar(self):
def test_repr(self):
- self.assertEqual(repr(Timedelta(10, unit='d')),
- "Timedelta('10 days 00:00:00')")
- self.assertEqual(repr(Timedelta(10, unit='s')),
- "Timedelta('0 days 00:00:10')")
- self.assertEqual(repr(Timedelta(10, unit='ms')),
- "Timedelta('0 days 00:00:00.010000')")
- self.assertEqual(repr(Timedelta(-10, unit='ms')),
- "Timedelta('-1 days +23:59:59.990000')")
+ assert (repr(Timedelta(10, unit='d')) ==
+ "Timedelta('10 days 00:00:00')")
+ assert (repr(Timedelta(10, unit='s')) ==
+ "Timedelta('0 days 00:00:10')")
+ assert (repr(Timedelta(10, unit='ms')) ==
+ "Timedelta('0 days 00:00:00.010000')")
+ assert (repr(Timedelta(-10, unit='ms')) ==
+ "Timedelta('-1 days +23:59:59.990000')")
def test_conversion(self):
@@ -201,14 +192,16 @@ def test_conversion(self):
Timedelta('1 days, 10:11:12.012345')]:
pydt = td.to_pytimedelta()
assert td == Timedelta(pydt)
- self.assertEqual(td, pydt)
+ assert td == pydt
assert (isinstance(pydt, timedelta) and not isinstance(
pydt, Timedelta))
- self.assertEqual(td, np.timedelta64(td.value, 'ns'))
+ assert td == np.timedelta64(td.value, 'ns')
td64 = td.to_timedelta64()
- self.assertEqual(td64, np.timedelta64(td.value, 'ns'))
- self.assertEqual(td, td64)
+
+ assert td64 == np.timedelta64(td.value, 'ns')
+ assert td == td64
+
assert isinstance(td64, np.timedelta64)
# this is NOT equal and cannot be roundtriped (because of the nanos)
@@ -220,20 +213,20 @@ def test_freq_conversion(self):
# truediv
td = Timedelta('1 days 2 hours 3 ns')
result = td / np.timedelta64(1, 'D')
- self.assertEqual(result, td.value / float(86400 * 1e9))
+ assert result == td.value / float(86400 * 1e9)
result = td / np.timedelta64(1, 's')
- self.assertEqual(result, td.value / float(1e9))
+ assert result == td.value / float(1e9)
result = td / np.timedelta64(1, 'ns')
- self.assertEqual(result, td.value)
+ assert result == td.value
# floordiv
td = Timedelta('1 days 2 hours 3 ns')
result = td // np.timedelta64(1, 'D')
- self.assertEqual(result, 1)
+ assert result == 1
result = td // np.timedelta64(1, 's')
- self.assertEqual(result, 93600)
+ assert result == 93600
result = td // np.timedelta64(1, 'ns')
- self.assertEqual(result, td.value)
+ assert result == td.value
def test_fields(self):
def check(value):
@@ -242,10 +235,10 @@ def check(value):
# compat to datetime.timedelta
rng = to_timedelta('1 days, 10:11:12')
- self.assertEqual(rng.days, 1)
- self.assertEqual(rng.seconds, 10 * 3600 + 11 * 60 + 12)
- self.assertEqual(rng.microseconds, 0)
- self.assertEqual(rng.nanoseconds, 0)
+ assert rng.days == 1
+ assert rng.seconds == 10 * 3600 + 11 * 60 + 12
+ assert rng.microseconds == 0
+ assert rng.nanoseconds == 0
pytest.raises(AttributeError, lambda: rng.hours)
pytest.raises(AttributeError, lambda: rng.minutes)
@@ -258,30 +251,30 @@ def check(value):
check(rng.nanoseconds)
td = Timedelta('-1 days, 10:11:12')
- self.assertEqual(abs(td), Timedelta('13:48:48'))
+ assert abs(td) == Timedelta('13:48:48')
assert str(td) == "-1 days +10:11:12"
- self.assertEqual(-td, Timedelta('0 days 13:48:48'))
- self.assertEqual(-Timedelta('-1 days, 10:11:12').value, 49728000000000)
- self.assertEqual(Timedelta('-1 days, 10:11:12').value, -49728000000000)
+ assert -td == Timedelta('0 days 13:48:48')
+ assert -Timedelta('-1 days, 10:11:12').value == 49728000000000
+ assert Timedelta('-1 days, 10:11:12').value == -49728000000000
rng = to_timedelta('-1 days, 10:11:12.100123456')
- self.assertEqual(rng.days, -1)
- self.assertEqual(rng.seconds, 10 * 3600 + 11 * 60 + 12)
- self.assertEqual(rng.microseconds, 100 * 1000 + 123)
- self.assertEqual(rng.nanoseconds, 456)
+ assert rng.days == -1
+ assert rng.seconds == 10 * 3600 + 11 * 60 + 12
+ assert rng.microseconds == 100 * 1000 + 123
+ assert rng.nanoseconds == 456
pytest.raises(AttributeError, lambda: rng.hours)
pytest.raises(AttributeError, lambda: rng.minutes)
pytest.raises(AttributeError, lambda: rng.milliseconds)
# components
tup = pd.to_timedelta(-1, 'us').components
- self.assertEqual(tup.days, -1)
- self.assertEqual(tup.hours, 23)
- self.assertEqual(tup.minutes, 59)
- self.assertEqual(tup.seconds, 59)
- self.assertEqual(tup.milliseconds, 999)
- self.assertEqual(tup.microseconds, 999)
- self.assertEqual(tup.nanoseconds, 0)
+ assert tup.days == -1
+ assert tup.hours == 23
+ assert tup.minutes == 59
+ assert tup.seconds == 59
+ assert tup.milliseconds == 999
+ assert tup.microseconds == 999
+ assert tup.nanoseconds == 0
# GH 10050
check(tup.days)
@@ -293,19 +286,17 @@ def check(value):
check(tup.nanoseconds)
tup = Timedelta('-1 days 1 us').components
- self.assertEqual(tup.days, -2)
- self.assertEqual(tup.hours, 23)
- self.assertEqual(tup.minutes, 59)
- self.assertEqual(tup.seconds, 59)
- self.assertEqual(tup.milliseconds, 999)
- self.assertEqual(tup.microseconds, 999)
- self.assertEqual(tup.nanoseconds, 0)
+ assert tup.days == -2
+ assert tup.hours == 23
+ assert tup.minutes == 59
+ assert tup.seconds == 59
+ assert tup.milliseconds == 999
+ assert tup.microseconds == 999
+ assert tup.nanoseconds == 0
def test_nat_converters(self):
- self.assertEqual(to_timedelta(
- 'nat', box=False).astype('int64'), iNaT)
- self.assertEqual(to_timedelta(
- 'nan', box=False).astype('int64'), iNaT)
+ assert to_timedelta('nat', box=False).astype('int64') == iNaT
+ assert to_timedelta('nan', box=False).astype('int64') == iNaT
def testit(unit, transform):
@@ -319,7 +310,7 @@ def testit(unit, transform):
result = to_timedelta(2, unit=unit)
expected = Timedelta(np.timedelta64(2, transform(unit)).astype(
'timedelta64[ns]'))
- self.assertEqual(result, expected)
+ assert result == expected
# validate all units
# GH 6855
@@ -340,27 +331,22 @@ def testit(unit, transform):
testit('L', lambda x: 'ms')
def test_numeric_conversions(self):
- self.assertEqual(ct(0), np.timedelta64(0, 'ns'))
- self.assertEqual(ct(10), np.timedelta64(10, 'ns'))
- self.assertEqual(ct(10, unit='ns'), np.timedelta64(
- 10, 'ns').astype('m8[ns]'))
-
- self.assertEqual(ct(10, unit='us'), np.timedelta64(
- 10, 'us').astype('m8[ns]'))
- self.assertEqual(ct(10, unit='ms'), np.timedelta64(
- 10, 'ms').astype('m8[ns]'))
- self.assertEqual(ct(10, unit='s'), np.timedelta64(
- 10, 's').astype('m8[ns]'))
- self.assertEqual(ct(10, unit='d'), np.timedelta64(
- 10, 'D').astype('m8[ns]'))
+ assert ct(0) == np.timedelta64(0, 'ns')
+ assert ct(10) == np.timedelta64(10, 'ns')
+ assert ct(10, unit='ns') == np.timedelta64(10, 'ns').astype('m8[ns]')
+
+ assert ct(10, unit='us') == np.timedelta64(10, 'us').astype('m8[ns]')
+ assert ct(10, unit='ms') == np.timedelta64(10, 'ms').astype('m8[ns]')
+ assert ct(10, unit='s') == np.timedelta64(10, 's').astype('m8[ns]')
+ assert ct(10, unit='d') == np.timedelta64(10, 'D').astype('m8[ns]')
def test_timedelta_conversions(self):
- self.assertEqual(ct(timedelta(seconds=1)),
- np.timedelta64(1, 's').astype('m8[ns]'))
- self.assertEqual(ct(timedelta(microseconds=1)),
- np.timedelta64(1, 'us').astype('m8[ns]'))
- self.assertEqual(ct(timedelta(days=1)),
- np.timedelta64(1, 'D').astype('m8[ns]'))
+ assert (ct(timedelta(seconds=1)) ==
+ np.timedelta64(1, 's').astype('m8[ns]'))
+ assert (ct(timedelta(microseconds=1)) ==
+ np.timedelta64(1, 'us').astype('m8[ns]'))
+ assert (ct(timedelta(days=1)) ==
+ np.timedelta64(1, 'D').astype('m8[ns]'))
def test_round(self):
@@ -387,9 +373,9 @@ def test_round(self):
('d', Timedelta('1 days'),
Timedelta('-1 days'))]:
r1 = t1.round(freq)
- self.assertEqual(r1, s1)
+ assert r1 == s1
r2 = t2.round(freq)
- self.assertEqual(r2, s2)
+ assert r2 == s2
# invalid
for freq in ['Y', 'M', 'foobar']:
@@ -465,43 +451,43 @@ def test_short_format_converters(self):
def conv(v):
return v.astype('m8[ns]')
- self.assertEqual(ct('10'), np.timedelta64(10, 'ns'))
- self.assertEqual(ct('10ns'), np.timedelta64(10, 'ns'))
- self.assertEqual(ct('100'), np.timedelta64(100, 'ns'))
- self.assertEqual(ct('100ns'), np.timedelta64(100, 'ns'))
-
- self.assertEqual(ct('1000'), np.timedelta64(1000, 'ns'))
- self.assertEqual(ct('1000ns'), np.timedelta64(1000, 'ns'))
- self.assertEqual(ct('1000NS'), np.timedelta64(1000, 'ns'))
-
- self.assertEqual(ct('10us'), np.timedelta64(10000, 'ns'))
- self.assertEqual(ct('100us'), np.timedelta64(100000, 'ns'))
- self.assertEqual(ct('1000us'), np.timedelta64(1000000, 'ns'))
- self.assertEqual(ct('1000Us'), np.timedelta64(1000000, 'ns'))
- self.assertEqual(ct('1000uS'), np.timedelta64(1000000, 'ns'))
-
- self.assertEqual(ct('1ms'), np.timedelta64(1000000, 'ns'))
- self.assertEqual(ct('10ms'), np.timedelta64(10000000, 'ns'))
- self.assertEqual(ct('100ms'), np.timedelta64(100000000, 'ns'))
- self.assertEqual(ct('1000ms'), np.timedelta64(1000000000, 'ns'))
-
- self.assertEqual(ct('-1s'), -np.timedelta64(1000000000, 'ns'))
- self.assertEqual(ct('1s'), np.timedelta64(1000000000, 'ns'))
- self.assertEqual(ct('10s'), np.timedelta64(10000000000, 'ns'))
- self.assertEqual(ct('100s'), np.timedelta64(100000000000, 'ns'))
- self.assertEqual(ct('1000s'), np.timedelta64(1000000000000, 'ns'))
-
- self.assertEqual(ct('1d'), conv(np.timedelta64(1, 'D')))
- self.assertEqual(ct('-1d'), -conv(np.timedelta64(1, 'D')))
- self.assertEqual(ct('1D'), conv(np.timedelta64(1, 'D')))
- self.assertEqual(ct('10D'), conv(np.timedelta64(10, 'D')))
- self.assertEqual(ct('100D'), conv(np.timedelta64(100, 'D')))
- self.assertEqual(ct('1000D'), conv(np.timedelta64(1000, 'D')))
- self.assertEqual(ct('10000D'), conv(np.timedelta64(10000, 'D')))
+ assert ct('10') == np.timedelta64(10, 'ns')
+ assert ct('10ns') == np.timedelta64(10, 'ns')
+ assert ct('100') == np.timedelta64(100, 'ns')
+ assert ct('100ns') == np.timedelta64(100, 'ns')
+
+ assert ct('1000') == np.timedelta64(1000, 'ns')
+ assert ct('1000ns') == np.timedelta64(1000, 'ns')
+ assert ct('1000NS') == np.timedelta64(1000, 'ns')
+
+ assert ct('10us') == np.timedelta64(10000, 'ns')
+ assert ct('100us') == np.timedelta64(100000, 'ns')
+ assert ct('1000us') == np.timedelta64(1000000, 'ns')
+ assert ct('1000Us') == np.timedelta64(1000000, 'ns')
+ assert ct('1000uS') == np.timedelta64(1000000, 'ns')
+
+ assert ct('1ms') == np.timedelta64(1000000, 'ns')
+ assert ct('10ms') == np.timedelta64(10000000, 'ns')
+ assert ct('100ms') == np.timedelta64(100000000, 'ns')
+ assert ct('1000ms') == np.timedelta64(1000000000, 'ns')
+
+ assert ct('-1s') == -np.timedelta64(1000000000, 'ns')
+ assert ct('1s') == np.timedelta64(1000000000, 'ns')
+ assert ct('10s') == np.timedelta64(10000000000, 'ns')
+ assert ct('100s') == np.timedelta64(100000000000, 'ns')
+ assert ct('1000s') == np.timedelta64(1000000000000, 'ns')
+
+ assert ct('1d') == conv(np.timedelta64(1, 'D'))
+ assert ct('-1d') == -conv(np.timedelta64(1, 'D'))
+ assert ct('1D') == conv(np.timedelta64(1, 'D'))
+ assert ct('10D') == conv(np.timedelta64(10, 'D'))
+ assert ct('100D') == conv(np.timedelta64(100, 'D'))
+ assert ct('1000D') == conv(np.timedelta64(1000, 'D'))
+ assert ct('10000D') == conv(np.timedelta64(10000, 'D'))
# space
- self.assertEqual(ct(' 10000D '), conv(np.timedelta64(10000, 'D')))
- self.assertEqual(ct(' - 10000D '), -conv(np.timedelta64(10000, 'D')))
+ assert ct(' 10000D ') == conv(np.timedelta64(10000, 'D'))
+ assert ct(' - 10000D ') == -conv(np.timedelta64(10000, 'D'))
# invalid
pytest.raises(ValueError, ct, '1foo')
@@ -513,24 +499,22 @@ def conv(v):
d1 = np.timedelta64(1, 'D')
- self.assertEqual(ct('1days'), conv(d1))
- self.assertEqual(ct('1days,'), conv(d1))
- self.assertEqual(ct('- 1days,'), -conv(d1))
-
- self.assertEqual(ct('00:00:01'), conv(np.timedelta64(1, 's')))
- self.assertEqual(ct('06:00:01'), conv(
- np.timedelta64(6 * 3600 + 1, 's')))
- self.assertEqual(ct('06:00:01.0'), conv(
- np.timedelta64(6 * 3600 + 1, 's')))
- self.assertEqual(ct('06:00:01.01'), conv(
- np.timedelta64(1000 * (6 * 3600 + 1) + 10, 'ms')))
-
- self.assertEqual(ct('- 1days, 00:00:01'),
- conv(-d1 + np.timedelta64(1, 's')))
- self.assertEqual(ct('1days, 06:00:01'), conv(
- d1 + np.timedelta64(6 * 3600 + 1, 's')))
- self.assertEqual(ct('1days, 06:00:01.01'), conv(
- d1 + np.timedelta64(1000 * (6 * 3600 + 1) + 10, 'ms')))
+ assert ct('1days') == conv(d1)
+ assert ct('1days,') == conv(d1)
+ assert ct('- 1days,') == -conv(d1)
+
+ assert ct('00:00:01') == conv(np.timedelta64(1, 's'))
+ assert ct('06:00:01') == conv(np.timedelta64(6 * 3600 + 1, 's'))
+ assert ct('06:00:01.0') == conv(np.timedelta64(6 * 3600 + 1, 's'))
+ assert ct('06:00:01.01') == conv(np.timedelta64(
+ 1000 * (6 * 3600 + 1) + 10, 'ms'))
+
+ assert (ct('- 1days, 00:00:01') ==
+ conv(-d1 + np.timedelta64(1, 's')))
+ assert (ct('1days, 06:00:01') ==
+ conv(d1 + np.timedelta64(6 * 3600 + 1, 's')))
+ assert (ct('1days, 06:00:01.01') ==
+ conv(d1 + np.timedelta64(1000 * (6 * 3600 + 1) + 10, 'ms')))
# invalid
pytest.raises(ValueError, ct, '- 1days, 00')
@@ -560,16 +544,16 @@ def test_pickle(self):
v = Timedelta('1 days 10:11:12.0123456')
v_p = tm.round_trip_pickle(v)
- self.assertEqual(v, v_p)
+ assert v == v_p
def test_timedelta_hash_equality(self):
# GH 11129
v = Timedelta(1, 'D')
td = timedelta(days=1)
- self.assertEqual(hash(v), hash(td))
+ assert hash(v) == hash(td)
d = {td: 2}
- self.assertEqual(d[v], 2)
+ assert d[v] == 2
tds = timedelta_range('1 second', periods=20)
assert all(hash(td) == hash(td.to_pytimedelta()) for td in tds)
@@ -662,34 +646,34 @@ def test_isoformat(self):
milliseconds=10, microseconds=10, nanoseconds=12)
expected = 'P6DT0H50M3.010010012S'
result = td.isoformat()
- self.assertEqual(result, expected)
+ assert result == expected
td = Timedelta(days=4, hours=12, minutes=30, seconds=5)
result = td.isoformat()
expected = 'P4DT12H30M5S'
- self.assertEqual(result, expected)
+ assert result == expected
td = Timedelta(nanoseconds=123)
result = td.isoformat()
expected = 'P0DT0H0M0.000000123S'
- self.assertEqual(result, expected)
+ assert result == expected
# trim nano
td = Timedelta(microseconds=10)
result = td.isoformat()
expected = 'P0DT0H0M0.00001S'
- self.assertEqual(result, expected)
+ assert result == expected
# trim micro
td = Timedelta(milliseconds=1)
result = td.isoformat()
expected = 'P0DT0H0M0.001S'
- self.assertEqual(result, expected)
+ assert result == expected
# don't strip every 0
result = Timedelta(minutes=1).isoformat()
expected = 'P0DT0H1M0S'
- self.assertEqual(result, expected)
+ assert result == expected
def test_ops_error_str(self):
# GH 13624
diff --git a/pandas/tests/scalar/test_timestamp.py b/pandas/tests/scalar/test_timestamp.py
index 72b1e4d450b84..8a28a9a4bedd0 100644
--- a/pandas/tests/scalar/test_timestamp.py
+++ b/pandas/tests/scalar/test_timestamp.py
@@ -31,8 +31,8 @@ def test_constructor(self):
# confirm base representation is correct
import calendar
- self.assertEqual(calendar.timegm(base_dt.timetuple()) * 1000000000,
- base_expected)
+ assert (calendar.timegm(base_dt.timetuple()) * 1000000000 ==
+ base_expected)
tests = [(base_str, base_dt, base_expected),
('2014-07-01 10:00', datetime(2014, 7, 1, 10),
@@ -56,32 +56,32 @@ def test_constructor(self):
for date_str, date, expected in tests:
for result in [Timestamp(date_str), Timestamp(date)]:
# only with timestring
- self.assertEqual(result.value, expected)
- self.assertEqual(tslib.pydt_to_i8(result), expected)
+ assert result.value == expected
+ assert tslib.pydt_to_i8(result) == expected
# re-creation shouldn't affect to internal value
result = Timestamp(result)
- self.assertEqual(result.value, expected)
- self.assertEqual(tslib.pydt_to_i8(result), expected)
+ assert result.value == expected
+ assert tslib.pydt_to_i8(result) == expected
# with timezone
for tz, offset in timezones:
for result in [Timestamp(date_str, tz=tz), Timestamp(date,
tz=tz)]:
expected_tz = expected - offset * 3600 * 1000000000
- self.assertEqual(result.value, expected_tz)
- self.assertEqual(tslib.pydt_to_i8(result), expected_tz)
+ assert result.value == expected_tz
+ assert tslib.pydt_to_i8(result) == expected_tz
# should preserve tz
result = Timestamp(result)
- self.assertEqual(result.value, expected_tz)
- self.assertEqual(tslib.pydt_to_i8(result), expected_tz)
+ assert result.value == expected_tz
+ assert tslib.pydt_to_i8(result) == expected_tz
# should convert to UTC
result = Timestamp(result, tz='UTC')
expected_utc = expected - offset * 3600 * 1000000000
- self.assertEqual(result.value, expected_utc)
- self.assertEqual(tslib.pydt_to_i8(result), expected_utc)
+ assert result.value == expected_utc
+ assert tslib.pydt_to_i8(result) == expected_utc
def test_constructor_with_stringoffset(self):
# GH 7833
@@ -91,8 +91,8 @@ def test_constructor_with_stringoffset(self):
# confirm base representation is correct
import calendar
- self.assertEqual(calendar.timegm(base_dt.timetuple()) * 1000000000,
- base_expected)
+ assert (calendar.timegm(base_dt.timetuple()) * 1000000000 ==
+ base_expected)
tests = [(base_str, base_expected),
('2014-07-01 12:00:00+02:00',
@@ -112,64 +112,64 @@ def test_constructor_with_stringoffset(self):
for date_str, expected in tests:
for result in [Timestamp(date_str)]:
# only with timestring
- self.assertEqual(result.value, expected)
- self.assertEqual(tslib.pydt_to_i8(result), expected)
+ assert result.value == expected
+ assert tslib.pydt_to_i8(result) == expected
# re-creation shouldn't affect to internal value
result = Timestamp(result)
- self.assertEqual(result.value, expected)
- self.assertEqual(tslib.pydt_to_i8(result), expected)
+ assert result.value == expected
+ assert tslib.pydt_to_i8(result) == expected
# with timezone
for tz, offset in timezones:
result = Timestamp(date_str, tz=tz)
expected_tz = expected
- self.assertEqual(result.value, expected_tz)
- self.assertEqual(tslib.pydt_to_i8(result), expected_tz)
+ assert result.value == expected_tz
+ assert tslib.pydt_to_i8(result) == expected_tz
# should preserve tz
result = Timestamp(result)
- self.assertEqual(result.value, expected_tz)
- self.assertEqual(tslib.pydt_to_i8(result), expected_tz)
+ assert result.value == expected_tz
+ assert tslib.pydt_to_i8(result) == expected_tz
# should convert to UTC
result = Timestamp(result, tz='UTC')
expected_utc = expected
- self.assertEqual(result.value, expected_utc)
- self.assertEqual(tslib.pydt_to_i8(result), expected_utc)
+ assert result.value == expected_utc
+ assert tslib.pydt_to_i8(result) == expected_utc
# This should be 2013-11-01 05:00 in UTC
# converted to Chicago tz
result = Timestamp('2013-11-01 00:00:00-0500', tz='America/Chicago')
- self.assertEqual(result.value, Timestamp('2013-11-01 05:00').value)
+ assert result.value == Timestamp('2013-11-01 05:00').value
expected = "Timestamp('2013-11-01 00:00:00-0500', tz='America/Chicago')" # noqa
- self.assertEqual(repr(result), expected)
- self.assertEqual(result, eval(repr(result)))
+ assert repr(result) == expected
+ assert result == eval(repr(result))
# This should be 2013-11-01 05:00 in UTC
# converted to Tokyo tz (+09:00)
result = Timestamp('2013-11-01 00:00:00-0500', tz='Asia/Tokyo')
- self.assertEqual(result.value, Timestamp('2013-11-01 05:00').value)
+ assert result.value == Timestamp('2013-11-01 05:00').value
expected = "Timestamp('2013-11-01 14:00:00+0900', tz='Asia/Tokyo')"
- self.assertEqual(repr(result), expected)
- self.assertEqual(result, eval(repr(result)))
+ assert repr(result) == expected
+ assert result == eval(repr(result))
# GH11708
# This should be 2015-11-18 10:00 in UTC
# converted to Asia/Katmandu
result = Timestamp("2015-11-18 15:45:00+05:45", tz="Asia/Katmandu")
- self.assertEqual(result.value, Timestamp("2015-11-18 10:00").value)
+ assert result.value == Timestamp("2015-11-18 10:00").value
expected = "Timestamp('2015-11-18 15:45:00+0545', tz='Asia/Katmandu')"
- self.assertEqual(repr(result), expected)
- self.assertEqual(result, eval(repr(result)))
+ assert repr(result) == expected
+ assert result == eval(repr(result))
# This should be 2015-11-18 10:00 in UTC
# converted to Asia/Kolkata
result = Timestamp("2015-11-18 15:30:00+05:30", tz="Asia/Kolkata")
- self.assertEqual(result.value, Timestamp("2015-11-18 10:00").value)
+ assert result.value == Timestamp("2015-11-18 10:00").value
expected = "Timestamp('2015-11-18 15:30:00+0530', tz='Asia/Kolkata')"
- self.assertEqual(repr(result), expected)
- self.assertEqual(result, eval(repr(result)))
+ assert repr(result) == expected
+ assert result == eval(repr(result))
def test_constructor_invalid(self):
with tm.assert_raises_regex(TypeError, 'Cannot convert input'):
@@ -178,7 +178,7 @@ def test_constructor_invalid(self):
Timestamp(Period('1000-01-01'))
def test_constructor_positional(self):
- # GH 10758
+ # see gh-10758
with pytest.raises(TypeError):
Timestamp(2000, 1)
with pytest.raises(ValueError):
@@ -190,14 +190,11 @@ def test_constructor_positional(self):
with pytest.raises(ValueError):
Timestamp(2000, 1, 32)
- # GH 11630
- self.assertEqual(
- repr(Timestamp(2015, 11, 12)),
- repr(Timestamp('20151112')))
-
- self.assertEqual(
- repr(Timestamp(2015, 11, 12, 1, 2, 3, 999999)),
- repr(Timestamp('2015-11-12 01:02:03.999999')))
+ # see gh-11630
+ assert (repr(Timestamp(2015, 11, 12)) ==
+ repr(Timestamp('20151112')))
+ assert (repr(Timestamp(2015, 11, 12, 1, 2, 3, 999999)) ==
+ repr(Timestamp('2015-11-12 01:02:03.999999')))
def test_constructor_keyword(self):
# GH 10758
@@ -212,37 +209,35 @@ def test_constructor_keyword(self):
with pytest.raises(ValueError):
Timestamp(year=2000, month=1, day=32)
- self.assertEqual(
- repr(Timestamp(year=2015, month=11, day=12)),
- repr(Timestamp('20151112')))
+ assert (repr(Timestamp(year=2015, month=11, day=12)) ==
+ repr(Timestamp('20151112')))
- self.assertEqual(
- repr(Timestamp(year=2015, month=11, day=12,
- hour=1, minute=2, second=3, microsecond=999999)),
- repr(Timestamp('2015-11-12 01:02:03.999999')))
+ assert (repr(Timestamp(year=2015, month=11, day=12, hour=1, minute=2,
+ second=3, microsecond=999999)) ==
+ repr(Timestamp('2015-11-12 01:02:03.999999')))
def test_constructor_fromordinal(self):
base = datetime(2000, 1, 1)
ts = Timestamp.fromordinal(base.toordinal(), freq='D')
- self.assertEqual(base, ts)
- self.assertEqual(ts.freq, 'D')
- self.assertEqual(base.toordinal(), ts.toordinal())
+ assert base == ts
+ assert ts.freq == 'D'
+ assert base.toordinal() == ts.toordinal()
ts = Timestamp.fromordinal(base.toordinal(), tz='US/Eastern')
- self.assertEqual(Timestamp('2000-01-01', tz='US/Eastern'), ts)
- self.assertEqual(base.toordinal(), ts.toordinal())
+ assert Timestamp('2000-01-01', tz='US/Eastern') == ts
+ assert base.toordinal() == ts.toordinal()
def test_constructor_offset_depr(self):
- # GH 12160
+ # see gh-12160
with tm.assert_produces_warning(FutureWarning,
check_stacklevel=False):
ts = Timestamp('2011-01-01', offset='D')
- self.assertEqual(ts.freq, 'D')
+ assert ts.freq == 'D'
with tm.assert_produces_warning(FutureWarning,
check_stacklevel=False):
- self.assertEqual(ts.offset, 'D')
+ assert ts.offset == 'D'
msg = "Can only specify freq or offset, not both"
with tm.assert_raises_regex(TypeError, msg):
@@ -255,9 +250,9 @@ def test_constructor_offset_depr_fromordinal(self):
with tm.assert_produces_warning(FutureWarning,
check_stacklevel=False):
ts = Timestamp.fromordinal(base.toordinal(), offset='D')
- self.assertEqual(Timestamp('2000-01-01'), ts)
- self.assertEqual(ts.freq, 'D')
- self.assertEqual(base.toordinal(), ts.toordinal())
+ assert Timestamp('2000-01-01') == ts
+ assert ts.freq == 'D'
+ assert base.toordinal() == ts.toordinal()
msg = "Can only specify freq or offset, not both"
with tm.assert_raises_regex(TypeError, msg):
@@ -269,14 +264,14 @@ def test_conversion(self):
result = ts.to_pydatetime()
expected = datetime(2000, 1, 1)
- self.assertEqual(result, expected)
- self.assertEqual(type(result), type(expected))
+ assert result == expected
+ assert type(result) == type(expected)
result = ts.to_datetime64()
expected = np.datetime64(ts.value, 'ns')
- self.assertEqual(result, expected)
- self.assertEqual(type(result), type(expected))
- self.assertEqual(result.dtype, expected.dtype)
+ assert result == expected
+ assert type(result) == type(expected)
+ assert result.dtype == expected.dtype
def test_repr(self):
tm._skip_if_no_pytz()
@@ -365,20 +360,20 @@ def test_tz(self):
t = '2014-02-01 09:00'
ts = Timestamp(t)
local = ts.tz_localize('Asia/Tokyo')
- self.assertEqual(local.hour, 9)
- self.assertEqual(local, Timestamp(t, tz='Asia/Tokyo'))
+ assert local.hour == 9
+ assert local == Timestamp(t, tz='Asia/Tokyo')
conv = local.tz_convert('US/Eastern')
- self.assertEqual(conv, Timestamp('2014-01-31 19:00', tz='US/Eastern'))
- self.assertEqual(conv.hour, 19)
+ assert conv == Timestamp('2014-01-31 19:00', tz='US/Eastern')
+ assert conv.hour == 19
# preserves nanosecond
ts = Timestamp(t) + offsets.Nano(5)
local = ts.tz_localize('Asia/Tokyo')
- self.assertEqual(local.hour, 9)
- self.assertEqual(local.nanosecond, 5)
+ assert local.hour == 9
+ assert local.nanosecond == 5
conv = local.tz_convert('US/Eastern')
- self.assertEqual(conv.nanosecond, 5)
- self.assertEqual(conv.hour, 19)
+ assert conv.nanosecond == 5
+ assert conv.hour == 19
def test_tz_localize_ambiguous(self):
@@ -387,8 +382,8 @@ def test_tz_localize_ambiguous(self):
ts_no_dst = ts.tz_localize('US/Eastern', ambiguous=False)
rng = date_range('2014-11-02', periods=3, freq='H', tz='US/Eastern')
- self.assertEqual(rng[1], ts_dst)
- self.assertEqual(rng[2], ts_no_dst)
+ assert rng[1] == ts_dst
+ assert rng[2] == ts_no_dst
pytest.raises(ValueError, ts.tz_localize, 'US/Eastern',
ambiguous='infer')
@@ -431,13 +426,13 @@ def test_tz_localize_roundtrip(self):
'2014-11-01 17:00', '2014-11-05 00:00']:
ts = Timestamp(t)
localized = ts.tz_localize(tz)
- self.assertEqual(localized, Timestamp(t, tz=tz))
+ assert localized == Timestamp(t, tz=tz)
with pytest.raises(TypeError):
localized.tz_localize(tz)
reset = localized.tz_localize(None)
- self.assertEqual(reset, ts)
+ assert reset == ts
assert reset.tzinfo is None
def test_tz_convert_roundtrip(self):
@@ -448,10 +443,9 @@ def test_tz_convert_roundtrip(self):
converted = ts.tz_convert(tz)
reset = converted.tz_convert(None)
- self.assertEqual(reset, Timestamp(t))
+ assert reset == Timestamp(t)
assert reset.tzinfo is None
- self.assertEqual(reset,
- converted.tz_convert('UTC').tz_localize(None))
+ assert reset == converted.tz_convert('UTC').tz_localize(None)
def test_barely_oob_dts(self):
one_us = np.timedelta64(1).astype('timedelta64[us]')
@@ -472,8 +466,7 @@ def test_barely_oob_dts(self):
pytest.raises(ValueError, Timestamp, max_ts_us + one_us)
def test_utc_z_designator(self):
- self.assertEqual(get_timezone(
- Timestamp('2014-11-02 01:00Z').tzinfo), 'UTC')
+ assert get_timezone(Timestamp('2014-11-02 01:00Z').tzinfo) == 'UTC'
def test_now(self):
# #9000
@@ -513,18 +506,20 @@ def test_today(self):
def test_asm8(self):
np.random.seed(7960929)
- ns = [Timestamp.min.value, Timestamp.max.value, 1000, ]
+ ns = [Timestamp.min.value, Timestamp.max.value, 1000]
+
for n in ns:
- self.assertEqual(Timestamp(n).asm8.view('i8'),
- np.datetime64(n, 'ns').view('i8'), n)
- self.assertEqual(Timestamp('nat').asm8.view('i8'),
- np.datetime64('nat', 'ns').view('i8'))
+ assert (Timestamp(n).asm8.view('i8') ==
+ np.datetime64(n, 'ns').view('i8') == n)
+
+ assert (Timestamp('nat').asm8.view('i8') ==
+ np.datetime64('nat', 'ns').view('i8'))
def test_fields(self):
def check(value, equal):
# that we are int/long like
assert isinstance(value, (int, compat.long))
- self.assertEqual(value, equal)
+ assert value == equal
# GH 10050
ts = Timestamp('2015-05-10 09:06:03.000100001')
@@ -587,7 +582,7 @@ def test_pprint(self):
{'w': {'a': Timestamp('2011-01-01 00:00:00')}},
{'w': {'a': Timestamp('2011-01-01 00:00:00')}}],
'foo': 1}"""
- self.assertEqual(result, expected)
+ assert result == expected
def to_datetime_depr(self):
# see gh-8254
@@ -597,7 +592,7 @@ def to_datetime_depr(self):
check_stacklevel=False):
expected = datetime(2011, 1, 1)
result = ts.to_datetime()
- self.assertEqual(result, expected)
+ assert result == expected
def to_pydatetime_nonzero_nano(self):
ts = Timestamp('2011-01-01 9:00:00.123456789')
@@ -607,7 +602,7 @@ def to_pydatetime_nonzero_nano(self):
check_stacklevel=False):
expected = datetime(2011, 1, 1, 9, 0, 0, 123456)
result = ts.to_pydatetime()
- self.assertEqual(result, expected)
+ assert result == expected
def test_round(self):
@@ -615,27 +610,27 @@ def test_round(self):
dt = Timestamp('20130101 09:10:11')
result = dt.round('D')
expected = Timestamp('20130101')
- self.assertEqual(result, expected)
+ assert result == expected
dt = Timestamp('20130101 19:10:11')
result = dt.round('D')
expected = Timestamp('20130102')
- self.assertEqual(result, expected)
+ assert result == expected
dt = Timestamp('20130201 12:00:00')
result = dt.round('D')
expected = Timestamp('20130202')
- self.assertEqual(result, expected)
+ assert result == expected
dt = Timestamp('20130104 12:00:00')
result = dt.round('D')
expected = Timestamp('20130105')
- self.assertEqual(result, expected)
+ assert result == expected
dt = Timestamp('20130104 12:32:00')
result = dt.round('30Min')
expected = Timestamp('20130104 12:30:00')
- self.assertEqual(result, expected)
+ assert result == expected
dti = date_range('20130101 09:10:11', periods=5)
result = dti.round('D')
@@ -646,23 +641,23 @@ def test_round(self):
dt = Timestamp('20130101 09:10:11')
result = dt.floor('D')
expected = Timestamp('20130101')
- self.assertEqual(result, expected)
+ assert result == expected
# ceil
dt = Timestamp('20130101 09:10:11')
result = dt.ceil('D')
expected = Timestamp('20130102')
- self.assertEqual(result, expected)
+ assert result == expected
# round with tz
dt = Timestamp('20130101 09:10:11', tz='US/Eastern')
result = dt.round('D')
expected = Timestamp('20130101', tz='US/Eastern')
- self.assertEqual(result, expected)
+ assert result == expected
dt = Timestamp('20130101 09:10:11', tz='US/Eastern')
result = dt.round('s')
- self.assertEqual(result, dt)
+ assert result == dt
dti = date_range('20130101 09:10:11',
periods=5).tz_localize('UTC').tz_convert('US/Eastern')
@@ -680,19 +675,19 @@ def test_round(self):
# GH 14440 & 15578
result = Timestamp('2016-10-17 12:00:00.0015').round('ms')
expected = Timestamp('2016-10-17 12:00:00.002000')
- self.assertEqual(result, expected)
+ assert result == expected
result = Timestamp('2016-10-17 12:00:00.00149').round('ms')
expected = Timestamp('2016-10-17 12:00:00.001000')
- self.assertEqual(result, expected)
+ assert result == expected
ts = Timestamp('2016-10-17 12:00:00.0015')
for freq in ['us', 'ns']:
- self.assertEqual(ts, ts.round(freq))
+ assert ts == ts.round(freq)
result = Timestamp('2016-10-17 12:00:00.001501031').round('10ns')
expected = Timestamp('2016-10-17 12:00:00.001501030')
- self.assertEqual(result, expected)
+ assert result == expected
with tm.assert_produces_warning():
Timestamp('2016-10-17 12:00:00.001501031').round('1010ns')
@@ -702,7 +697,7 @@ def test_round_misc(self):
def _check_round(freq, expected):
result = stamp.round(freq=freq)
- self.assertEqual(result, expected)
+ assert result == expected
for freq, expected in [('D', Timestamp('2000-01-05 00:00:00')),
('H', Timestamp('2000-01-05 05:00:00')),
@@ -718,8 +713,8 @@ def test_class_ops_pytz(self):
from pytz import timezone
def compare(x, y):
- self.assertEqual(int(Timestamp(x).value / 1e9),
- int(Timestamp(y).value / 1e9))
+ assert (int(Timestamp(x).value / 1e9) ==
+ int(Timestamp(y).value / 1e9))
compare(Timestamp.now(), datetime.now())
compare(Timestamp.now('UTC'), datetime.now(timezone('UTC')))
@@ -741,8 +736,8 @@ def test_class_ops_dateutil(self):
from dateutil.tz import tzutc
def compare(x, y):
- self.assertEqual(int(np.round(Timestamp(x).value / 1e9)),
- int(np.round(Timestamp(y).value / 1e9)))
+ assert (int(np.round(Timestamp(x).value / 1e9)) ==
+ int(np.round(Timestamp(y).value / 1e9)))
compare(Timestamp.now(), datetime.now())
compare(Timestamp.now('UTC'), datetime.now(tzutc()))
@@ -762,37 +757,37 @@ def compare(x, y):
def test_basics_nanos(self):
val = np.int64(946684800000000000).view('M8[ns]')
stamp = Timestamp(val.view('i8') + 500)
- self.assertEqual(stamp.year, 2000)
- self.assertEqual(stamp.month, 1)
- self.assertEqual(stamp.microsecond, 0)
- self.assertEqual(stamp.nanosecond, 500)
+ assert stamp.year == 2000
+ assert stamp.month == 1
+ assert stamp.microsecond == 0
+ assert stamp.nanosecond == 500
# GH 14415
val = np.iinfo(np.int64).min + 80000000000000
stamp = Timestamp(val)
- self.assertEqual(stamp.year, 1677)
- self.assertEqual(stamp.month, 9)
- self.assertEqual(stamp.day, 21)
- self.assertEqual(stamp.microsecond, 145224)
- self.assertEqual(stamp.nanosecond, 192)
+ assert stamp.year == 1677
+ assert stamp.month == 9
+ assert stamp.day == 21
+ assert stamp.microsecond == 145224
+ assert stamp.nanosecond == 192
def test_unit(self):
def check(val, unit=None, h=1, s=1, us=0):
stamp = Timestamp(val, unit=unit)
- self.assertEqual(stamp.year, 2000)
- self.assertEqual(stamp.month, 1)
- self.assertEqual(stamp.day, 1)
- self.assertEqual(stamp.hour, h)
+ assert stamp.year == 2000
+ assert stamp.month == 1
+ assert stamp.day == 1
+ assert stamp.hour == h
if unit != 'D':
- self.assertEqual(stamp.minute, 1)
- self.assertEqual(stamp.second, s)
- self.assertEqual(stamp.microsecond, us)
+ assert stamp.minute == 1
+ assert stamp.second == s
+ assert stamp.microsecond == us
else:
- self.assertEqual(stamp.minute, 0)
- self.assertEqual(stamp.second, 0)
- self.assertEqual(stamp.microsecond, 0)
- self.assertEqual(stamp.nanosecond, 0)
+ assert stamp.minute == 0
+ assert stamp.second == 0
+ assert stamp.microsecond == 0
+ assert stamp.nanosecond == 0
ts = Timestamp('20000101 01:01:01')
val = ts.value
@@ -835,25 +830,25 @@ def test_roundtrip(self):
base = Timestamp('20140101 00:00:00')
result = Timestamp(base.value + Timedelta('5ms').value)
- self.assertEqual(result, Timestamp(str(base) + ".005000"))
- self.assertEqual(result.microsecond, 5000)
+ assert result == Timestamp(str(base) + ".005000")
+ assert result.microsecond == 5000
result = Timestamp(base.value + Timedelta('5us').value)
- self.assertEqual(result, Timestamp(str(base) + ".000005"))
- self.assertEqual(result.microsecond, 5)
+ assert result == Timestamp(str(base) + ".000005")
+ assert result.microsecond == 5
result = Timestamp(base.value + Timedelta('5ns').value)
- self.assertEqual(result, Timestamp(str(base) + ".000000005"))
- self.assertEqual(result.nanosecond, 5)
- self.assertEqual(result.microsecond, 0)
+ assert result == Timestamp(str(base) + ".000000005")
+ assert result.nanosecond == 5
+ assert result.microsecond == 0
result = Timestamp(base.value + Timedelta('6ms 5us').value)
- self.assertEqual(result, Timestamp(str(base) + ".006005"))
- self.assertEqual(result.microsecond, 5 + 6 * 1000)
+ assert result == Timestamp(str(base) + ".006005")
+ assert result.microsecond == 5 + 6 * 1000
result = Timestamp(base.value + Timedelta('200ms 5us').value)
- self.assertEqual(result, Timestamp(str(base) + ".200005"))
- self.assertEqual(result.microsecond, 5 + 200 * 1000)
+ assert result == Timestamp(str(base) + ".200005")
+ assert result.microsecond == 5 + 200 * 1000
def test_comparison(self):
# 5-18-2012 00:00:00.000
@@ -861,7 +856,7 @@ def test_comparison(self):
val = Timestamp(stamp)
- self.assertEqual(val, val)
+ assert val == val
assert not val != val
assert not val < val
assert val <= val
@@ -869,7 +864,7 @@ def test_comparison(self):
assert val >= val
other = datetime(2012, 5, 18)
- self.assertEqual(val, other)
+ assert val == other
assert not val != other
assert not val < other
assert val <= other
@@ -986,26 +981,26 @@ def test_cant_compare_tz_naive_w_aware_dateutil(self):
def test_delta_preserve_nanos(self):
val = Timestamp(long(1337299200000000123))
result = val + timedelta(1)
- self.assertEqual(result.nanosecond, val.nanosecond)
+ assert result.nanosecond == val.nanosecond
def test_frequency_misc(self):
- self.assertEqual(frequencies.get_freq_group('T'),
- frequencies.FreqGroup.FR_MIN)
+ assert (frequencies.get_freq_group('T') ==
+ frequencies.FreqGroup.FR_MIN)
code, stride = frequencies.get_freq_code(offsets.Hour())
- self.assertEqual(code, frequencies.FreqGroup.FR_HR)
+ assert code == frequencies.FreqGroup.FR_HR
code, stride = frequencies.get_freq_code((5, 'T'))
- self.assertEqual(code, frequencies.FreqGroup.FR_MIN)
- self.assertEqual(stride, 5)
+ assert code == frequencies.FreqGroup.FR_MIN
+ assert stride == 5
offset = offsets.Hour()
result = frequencies.to_offset(offset)
- self.assertEqual(result, offset)
+ assert result == offset
result = frequencies.to_offset((5, 'T'))
expected = offsets.Minute(5)
- self.assertEqual(result, expected)
+ assert result == expected
pytest.raises(ValueError, frequencies.get_freq_code, (5, 'baz'))
@@ -1015,12 +1010,12 @@ def test_frequency_misc(self):
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
result = frequencies.get_standard_freq(offsets.Hour())
- self.assertEqual(result, 'H')
+ assert result == 'H'
def test_hash_equivalent(self):
d = {datetime(2011, 1, 1): 5}
stamp = Timestamp(datetime(2011, 1, 1))
- self.assertEqual(d[stamp], 5)
+ assert d[stamp] == 5
def test_timestamp_compare_scalars(self):
# case where ndim == 0
@@ -1041,11 +1036,11 @@ def test_timestamp_compare_scalars(self):
expected = left_f(lhs, rhs)
result = right_f(rhs, lhs)
- self.assertEqual(result, expected)
+ assert result == expected
expected = left_f(rhs, nat)
result = right_f(nat, rhs)
- self.assertEqual(result, expected)
+ assert result == expected
def test_timestamp_compare_series(self):
# make sure we can compare Timestamps on the right AND left hand side
@@ -1108,7 +1103,7 @@ def assert_ns_timedelta(self, modified_timestamp, expected_value):
value = self.timestamp.value
modified_value = modified_timestamp.value
- self.assertEqual(modified_value - value, expected_value)
+ assert modified_value - value == expected_value
def test_timedelta_ns_arithmetic(self):
self.assert_ns_timedelta(self.timestamp + np.timedelta64(-123, 'ns'),
@@ -1131,68 +1126,68 @@ def test_nanosecond_string_parsing(self):
# GH 7878
expected_repr = '2013-05-01 07:15:45.123456789'
expected_value = 1367392545123456789
- self.assertEqual(ts.value, expected_value)
+ assert ts.value == expected_value
assert expected_repr in repr(ts)
ts = Timestamp('2013-05-01 07:15:45.123456789+09:00', tz='Asia/Tokyo')
- self.assertEqual(ts.value, expected_value - 9 * 3600 * 1000000000)
+ assert ts.value == expected_value - 9 * 3600 * 1000000000
assert expected_repr in repr(ts)
ts = Timestamp('2013-05-01 07:15:45.123456789', tz='UTC')
- self.assertEqual(ts.value, expected_value)
+ assert ts.value == expected_value
assert expected_repr in repr(ts)
ts = Timestamp('2013-05-01 07:15:45.123456789', tz='US/Eastern')
- self.assertEqual(ts.value, expected_value + 4 * 3600 * 1000000000)
+ assert ts.value == expected_value + 4 * 3600 * 1000000000
assert expected_repr in repr(ts)
# GH 10041
ts = Timestamp('20130501T071545.123456789')
- self.assertEqual(ts.value, expected_value)
+ assert ts.value == expected_value
assert expected_repr in repr(ts)
def test_nanosecond_timestamp(self):
# GH 7610
expected = 1293840000000000005
t = Timestamp('2011-01-01') + offsets.Nano(5)
- self.assertEqual(repr(t), "Timestamp('2011-01-01 00:00:00.000000005')")
- self.assertEqual(t.value, expected)
- self.assertEqual(t.nanosecond, 5)
+ assert repr(t) == "Timestamp('2011-01-01 00:00:00.000000005')"
+ assert t.value == expected
+ assert t.nanosecond == 5
t = Timestamp(t)
- self.assertEqual(repr(t), "Timestamp('2011-01-01 00:00:00.000000005')")
- self.assertEqual(t.value, expected)
- self.assertEqual(t.nanosecond, 5)
+ assert repr(t) == "Timestamp('2011-01-01 00:00:00.000000005')"
+ assert t.value == expected
+ assert t.nanosecond == 5
t = Timestamp(np_datetime64_compat('2011-01-01 00:00:00.000000005Z'))
- self.assertEqual(repr(t), "Timestamp('2011-01-01 00:00:00.000000005')")
- self.assertEqual(t.value, expected)
- self.assertEqual(t.nanosecond, 5)
+ assert repr(t) == "Timestamp('2011-01-01 00:00:00.000000005')"
+ assert t.value == expected
+ assert t.nanosecond == 5
expected = 1293840000000000010
t = t + offsets.Nano(5)
- self.assertEqual(repr(t), "Timestamp('2011-01-01 00:00:00.000000010')")
- self.assertEqual(t.value, expected)
- self.assertEqual(t.nanosecond, 10)
+ assert repr(t) == "Timestamp('2011-01-01 00:00:00.000000010')"
+ assert t.value == expected
+ assert t.nanosecond == 10
t = Timestamp(t)
- self.assertEqual(repr(t), "Timestamp('2011-01-01 00:00:00.000000010')")
- self.assertEqual(t.value, expected)
- self.assertEqual(t.nanosecond, 10)
+ assert repr(t) == "Timestamp('2011-01-01 00:00:00.000000010')"
+ assert t.value == expected
+ assert t.nanosecond == 10
t = Timestamp(np_datetime64_compat('2011-01-01 00:00:00.000000010Z'))
- self.assertEqual(repr(t), "Timestamp('2011-01-01 00:00:00.000000010')")
- self.assertEqual(t.value, expected)
- self.assertEqual(t.nanosecond, 10)
+ assert repr(t) == "Timestamp('2011-01-01 00:00:00.000000010')"
+ assert t.value == expected
+ assert t.nanosecond == 10
class TestTimestampOps(tm.TestCase):
def test_timestamp_and_datetime(self):
- self.assertEqual((Timestamp(datetime(
- 2013, 10, 13)) - datetime(2013, 10, 12)).days, 1)
- self.assertEqual((datetime(2013, 10, 12) -
- Timestamp(datetime(2013, 10, 13))).days, -1)
+ assert ((Timestamp(datetime(2013, 10, 13)) -
+ datetime(2013, 10, 12)).days == 1)
+ assert ((datetime(2013, 10, 12) -
+ Timestamp(datetime(2013, 10, 13))).days == -1)
def test_timestamp_and_series(self):
timestamp_series = Series(date_range('2014-03-17', periods=2, freq='D',
@@ -1213,42 +1208,36 @@ def test_addition_subtraction_types(self):
timestamp_instance = date_range(datetime_instance, periods=1,
freq='D')[0]
- self.assertEqual(type(timestamp_instance + 1), Timestamp)
- self.assertEqual(type(timestamp_instance - 1), Timestamp)
+ assert type(timestamp_instance + 1) == Timestamp
+ assert type(timestamp_instance - 1) == Timestamp
# Timestamp + datetime not supported, though subtraction is supported
# and yields timedelta more tests in tseries/base/tests/test_base.py
- self.assertEqual(
- type(timestamp_instance - datetime_instance), Timedelta)
- self.assertEqual(
- type(timestamp_instance + timedelta_instance), Timestamp)
- self.assertEqual(
- type(timestamp_instance - timedelta_instance), Timestamp)
+ assert type(timestamp_instance - datetime_instance) == Timedelta
+ assert type(timestamp_instance + timedelta_instance) == Timestamp
+ assert type(timestamp_instance - timedelta_instance) == Timestamp
# Timestamp +/- datetime64 not supported, so not tested (could possibly
# assert error raised?)
timedelta64_instance = np.timedelta64(1, 'D')
- self.assertEqual(
- type(timestamp_instance + timedelta64_instance), Timestamp)
- self.assertEqual(
- type(timestamp_instance - timedelta64_instance), Timestamp)
+ assert type(timestamp_instance + timedelta64_instance) == Timestamp
+ assert type(timestamp_instance - timedelta64_instance) == Timestamp
def test_addition_subtraction_preserve_frequency(self):
timestamp_instance = date_range('2014-03-05', periods=1, freq='D')[0]
timedelta_instance = timedelta(days=1)
original_freq = timestamp_instance.freq
- self.assertEqual((timestamp_instance + 1).freq, original_freq)
- self.assertEqual((timestamp_instance - 1).freq, original_freq)
- self.assertEqual(
- (timestamp_instance + timedelta_instance).freq, original_freq)
- self.assertEqual(
- (timestamp_instance - timedelta_instance).freq, original_freq)
+
+ assert (timestamp_instance + 1).freq == original_freq
+ assert (timestamp_instance - 1).freq == original_freq
+ assert (timestamp_instance + timedelta_instance).freq == original_freq
+ assert (timestamp_instance - timedelta_instance).freq == original_freq
timedelta64_instance = np.timedelta64(1, 'D')
- self.assertEqual(
- (timestamp_instance + timedelta64_instance).freq, original_freq)
- self.assertEqual(
- (timestamp_instance - timedelta64_instance).freq, original_freq)
+ assert (timestamp_instance +
+ timedelta64_instance).freq == original_freq
+ assert (timestamp_instance -
+ timedelta64_instance).freq == original_freq
def test_resolution(self):
@@ -1264,30 +1253,30 @@ def test_resolution(self):
idx = date_range(start='2013-04-01', periods=30, freq=freq,
tz=tz)
result = period.resolution(idx.asi8, idx.tz)
- self.assertEqual(result, expected)
+ assert result == expected
class TestTimestampToJulianDate(tm.TestCase):
def test_compare_1700(self):
r = Timestamp('1700-06-23').to_julian_date()
- self.assertEqual(r, 2342145.5)
+ assert r == 2342145.5
def test_compare_2000(self):
r = Timestamp('2000-04-12').to_julian_date()
- self.assertEqual(r, 2451646.5)
+ assert r == 2451646.5
def test_compare_2100(self):
r = Timestamp('2100-08-12').to_julian_date()
- self.assertEqual(r, 2488292.5)
+ assert r == 2488292.5
def test_compare_hour01(self):
r = Timestamp('2000-08-12T01:00:00').to_julian_date()
- self.assertEqual(r, 2451768.5416666666666666)
+ assert r == 2451768.5416666666666666
def test_compare_hour13(self):
r = Timestamp('2000-08-12T13:00:00').to_julian_date()
- self.assertEqual(r, 2451769.0416666666666666)
+ assert r == 2451769.0416666666666666
class TestTimeSeries(tm.TestCase):
@@ -1298,8 +1287,8 @@ def test_timestamp_to_datetime(self):
stamp = rng[0]
dtval = stamp.to_pydatetime()
- self.assertEqual(stamp, dtval)
- self.assertEqual(stamp.tzinfo, dtval.tzinfo)
+ assert stamp == dtval
+ assert stamp.tzinfo == dtval.tzinfo
def test_timestamp_to_datetime_dateutil(self):
tm._skip_if_no_pytz()
@@ -1307,8 +1296,8 @@ def test_timestamp_to_datetime_dateutil(self):
stamp = rng[0]
dtval = stamp.to_pydatetime()
- self.assertEqual(stamp, dtval)
- self.assertEqual(stamp.tzinfo, dtval.tzinfo)
+ assert stamp == dtval
+ assert stamp.tzinfo == dtval.tzinfo
def test_timestamp_to_datetime_explicit_pytz(self):
tm._skip_if_no_pytz()
@@ -1318,8 +1307,8 @@ def test_timestamp_to_datetime_explicit_pytz(self):
stamp = rng[0]
dtval = stamp.to_pydatetime()
- self.assertEqual(stamp, dtval)
- self.assertEqual(stamp.tzinfo, dtval.tzinfo)
+ assert stamp == dtval
+ assert stamp.tzinfo == dtval.tzinfo
def test_timestamp_to_datetime_explicit_dateutil(self):
tm._skip_if_windows_python_3()
@@ -1329,8 +1318,8 @@ def test_timestamp_to_datetime_explicit_dateutil(self):
stamp = rng[0]
dtval = stamp.to_pydatetime()
- self.assertEqual(stamp, dtval)
- self.assertEqual(stamp.tzinfo, dtval.tzinfo)
+ assert stamp == dtval
+ assert stamp.tzinfo == dtval.tzinfo
def test_timestamp_fields(self):
# extra fields from DatetimeIndex like quarter and week
@@ -1343,16 +1332,16 @@ def test_timestamp_fields(self):
for f in fields:
expected = getattr(idx, f)[-1]
result = getattr(Timestamp(idx[-1]), f)
- self.assertEqual(result, expected)
+ assert result == expected
- self.assertEqual(idx.freq, Timestamp(idx[-1], idx.freq).freq)
- self.assertEqual(idx.freqstr, Timestamp(idx[-1], idx.freq).freqstr)
+ assert idx.freq == Timestamp(idx[-1], idx.freq).freq
+ assert idx.freqstr == Timestamp(idx[-1], idx.freq).freqstr
def test_timestamp_date_out_of_range(self):
pytest.raises(ValueError, Timestamp, '1676-01-01')
pytest.raises(ValueError, Timestamp, '2263-01-01')
- # 1475
+ # see gh-1475
pytest.raises(ValueError, DatetimeIndex, ['1400-01-01'])
pytest.raises(ValueError, DatetimeIndex, [datetime(1400, 1, 1)])
@@ -1371,13 +1360,13 @@ def test_timestamp_from_ordinal(self):
# GH 3042
dt = datetime(2011, 4, 16, 0, 0)
ts = Timestamp.fromordinal(dt.toordinal())
- self.assertEqual(ts.to_pydatetime(), dt)
+ assert ts.to_pydatetime() == dt
# with a tzinfo
stamp = Timestamp('2011-4-16', tz='US/Eastern')
dt_tz = stamp.to_pydatetime()
ts = Timestamp.fromordinal(dt_tz.toordinal(), tz='US/Eastern')
- self.assertEqual(ts.to_pydatetime(), dt_tz)
+ assert ts.to_pydatetime() == dt_tz
def test_timestamp_compare_with_early_datetime(self):
# e.g. datetime.min
@@ -1461,9 +1450,9 @@ def test_dti_slicing(self):
v2 = dti2[1]
v3 = dti2[2]
- self.assertEqual(v1, Timestamp('2/28/2005'))
- self.assertEqual(v2, Timestamp('4/30/2005'))
- self.assertEqual(v3, Timestamp('6/30/2005'))
+ assert v1 == Timestamp('2/28/2005')
+ assert v2 == Timestamp('4/30/2005')
+ assert v3 == Timestamp('6/30/2005')
# don't carry freq through irregular slicing
assert dti2.freq is None
@@ -1473,27 +1462,27 @@ def test_woy_boundary(self):
d = datetime(2013, 12, 31)
result = Timestamp(d).week
expected = 1 # ISO standard
- self.assertEqual(result, expected)
+ assert result == expected
d = datetime(2008, 12, 28)
result = Timestamp(d).week
expected = 52 # ISO standard
- self.assertEqual(result, expected)
+ assert result == expected
d = datetime(2009, 12, 31)
result = Timestamp(d).week
expected = 53 # ISO standard
- self.assertEqual(result, expected)
+ assert result == expected
d = datetime(2010, 1, 1)
result = Timestamp(d).week
expected = 53 # ISO standard
- self.assertEqual(result, expected)
+ assert result == expected
d = datetime(2010, 1, 3)
result = Timestamp(d).week
expected = 53 # ISO standard
- self.assertEqual(result, expected)
+ assert result == expected
result = np.array([Timestamp(datetime(*args)).week
for args in [(2000, 1, 1), (2000, 1, 2), (
@@ -1516,12 +1505,10 @@ def test_to_datetime_bijective(self):
# by going from nanoseconds to microseconds.
exp_warning = None if Timestamp.max.nanosecond == 0 else UserWarning
with tm.assert_produces_warning(exp_warning, check_stacklevel=False):
- self.assertEqual(
- Timestamp(Timestamp.max.to_pydatetime()).value / 1000,
- Timestamp.max.value / 1000)
+ assert (Timestamp(Timestamp.max.to_pydatetime()).value / 1000 ==
+ Timestamp.max.value / 1000)
exp_warning = None if Timestamp.min.nanosecond == 0 else UserWarning
with tm.assert_produces_warning(exp_warning, check_stacklevel=False):
- self.assertEqual(
- Timestamp(Timestamp.min.to_pydatetime()).value / 1000,
- Timestamp.min.value / 1000)
+ assert (Timestamp(Timestamp.min.to_pydatetime()).value / 1000 ==
+ Timestamp.min.value / 1000)
diff --git a/pandas/tests/series/test_alter_axes.py b/pandas/tests/series/test_alter_axes.py
index e0964fea95cc9..33a4cdb6e26c4 100644
--- a/pandas/tests/series/test_alter_axes.py
+++ b/pandas/tests/series/test_alter_axes.py
@@ -38,7 +38,7 @@ def test_setindex(self):
def test_rename(self):
renamer = lambda x: x.strftime('%Y%m%d')
renamed = self.ts.rename(renamer)
- self.assertEqual(renamed.index[0], renamer(self.ts.index[0]))
+ assert renamed.index[0] == renamer(self.ts.index[0])
# dict
rename_dict = dict(zip(self.ts.index, renamed.index))
@@ -55,7 +55,7 @@ def test_rename(self):
index=Index(['a', 'b', 'c', 'd'], name='name'),
dtype='int64')
renamed = renamer.rename({})
- self.assertEqual(renamed.index.name, renamer.index.name)
+ assert renamed.index.name == renamer.index.name
def test_rename_by_series(self):
s = Series(range(5), name='foo')
@@ -68,7 +68,7 @@ def test_rename_set_name(self):
s = Series(range(4), index=list('abcd'))
for name in ['foo', 123, 123., datetime(2001, 11, 11), ('foo',)]:
result = s.rename(name)
- self.assertEqual(result.name, name)
+ assert result.name == name
tm.assert_numpy_array_equal(result.index.values, s.index.values)
assert s.name is None
@@ -76,7 +76,7 @@ def test_rename_set_name_inplace(self):
s = Series(range(3), index=list('abc'))
for name in ['foo', 123, 123., datetime(2001, 11, 11), ('foo',)]:
s.rename(name, inplace=True)
- self.assertEqual(s.name, name)
+ assert s.name == name
exp = np.array(['a', 'b', 'c'], dtype=np.object_)
tm.assert_numpy_array_equal(s.index.values, exp)
@@ -86,14 +86,14 @@ def test_set_name_attribute(self):
s2 = Series([1, 2, 3], name='bar')
for name in [7, 7., 'name', datetime(2001, 1, 1), (1,), u"\u05D0"]:
s.name = name
- self.assertEqual(s.name, name)
+ assert s.name == name
s2.name = name
- self.assertEqual(s2.name, name)
+ assert s2.name == name
def test_set_name(self):
s = Series([1, 2, 3])
s2 = s._set_name('foo')
- self.assertEqual(s2.name, 'foo')
+ assert s2.name == 'foo'
assert s.name is None
assert s is not s2
@@ -102,7 +102,7 @@ def test_rename_inplace(self):
expected = renamer(self.ts.index[0])
self.ts.rename(renamer, inplace=True)
- self.assertEqual(self.ts.index[0], expected)
+ assert self.ts.index[0] == expected
def test_set_index_makes_timeseries(self):
idx = tm.makeDateIndex(10)
@@ -135,7 +135,7 @@ def test_reset_index(self):
[0, 1, 0, 1, 0, 1]])
s = Series(np.random.randn(6), index=index)
rs = s.reset_index(level=1)
- self.assertEqual(len(rs.columns), 2)
+ assert len(rs.columns) == 2
rs = s.reset_index(level=[0, 2], drop=True)
tm.assert_index_equal(rs.index, Index(index.get_level_values(1)))
diff --git a/pandas/tests/series/test_analytics.py b/pandas/tests/series/test_analytics.py
index 233d71cb1d8a5..73515c47388ea 100644
--- a/pandas/tests/series/test_analytics.py
+++ b/pandas/tests/series/test_analytics.py
@@ -32,14 +32,14 @@ class TestSeriesAnalytics(TestData, tm.TestCase):
def test_sum_zero(self):
arr = np.array([])
- self.assertEqual(nanops.nansum(arr), 0)
+ assert nanops.nansum(arr) == 0
arr = np.empty((10, 0))
assert (nanops.nansum(arr, axis=1) == 0).all()
# GH #844
s = Series([], index=[])
- self.assertEqual(s.sum(), 0)
+ assert s.sum() == 0
df = DataFrame(np.empty((10, 0)))
assert (df.sum(1) == 0).all()
@@ -58,19 +58,19 @@ def test_overflow(self):
# no bottleneck
result = s.sum(skipna=False)
- self.assertEqual(int(result), v.sum(dtype='int64'))
+ assert int(result) == v.sum(dtype='int64')
result = s.min(skipna=False)
- self.assertEqual(int(result), 0)
+ assert int(result) == 0
result = s.max(skipna=False)
- self.assertEqual(int(result), v[-1])
+ assert int(result) == v[-1]
# use bottleneck if available
result = s.sum()
- self.assertEqual(int(result), v.sum(dtype='int64'))
+ assert int(result) == v.sum(dtype='int64')
result = s.min()
- self.assertEqual(int(result), 0)
+ assert int(result) == 0
result = s.max()
- self.assertEqual(int(result), v[-1])
+ assert int(result) == v[-1]
for dtype in ['float32', 'float64']:
v = np.arange(5000000, dtype=dtype)
@@ -78,7 +78,7 @@ def test_overflow(self):
# no bottleneck
result = s.sum(skipna=False)
- self.assertEqual(result, v.sum(dtype=dtype))
+ assert result == v.sum(dtype=dtype)
result = s.min(skipna=False)
assert np.allclose(float(result), 0.0)
result = s.max(skipna=False)
@@ -86,7 +86,7 @@ def test_overflow(self):
# use bottleneck if available
result = s.sum()
- self.assertEqual(result, v.sum(dtype=dtype))
+ assert result == v.sum(dtype=dtype)
result = s.min()
assert np.allclose(float(result), 0.0)
result = s.max()
@@ -284,7 +284,7 @@ def test_skew(self):
assert np.isnan(s.skew())
assert np.isnan(df.skew()).all()
else:
- self.assertEqual(0, s.skew())
+ assert 0 == s.skew()
assert (df.skew() == 0).all()
def test_kurt(self):
@@ -310,7 +310,7 @@ def test_kurt(self):
assert np.isnan(s.kurt())
assert np.isnan(df.kurt()).all()
else:
- self.assertEqual(0, s.kurt())
+ assert 0 == s.kurt()
assert (df.kurt() == 0).all()
def test_describe(self):
@@ -341,9 +341,9 @@ def test_argsort(self):
# GH 2967 (introduced bug in 0.11-dev I think)
s = Series([Timestamp('201301%02d' % (i + 1)) for i in range(5)])
- self.assertEqual(s.dtype, 'datetime64[ns]')
+ assert s.dtype == 'datetime64[ns]'
shifted = s.shift(-1)
- self.assertEqual(shifted.dtype, 'datetime64[ns]')
+ assert shifted.dtype == 'datetime64[ns]'
assert isnull(shifted[4])
result = s.argsort()
@@ -520,7 +520,7 @@ def testit():
assert nanops._USE_BOTTLENECK
import bottleneck as bn # noqa
assert bn.__version__ >= LooseVersion('1.0')
- self.assertEqual(f(allna), 0.0)
+ assert f(allna) == 0.0
except:
assert np.isnan(f(allna))
@@ -539,7 +539,7 @@ def testit():
s = Series(bdate_range('1/1/2000', periods=10))
res = f(s)
exp = alternate(s)
- self.assertEqual(res, exp)
+ assert res == exp
# check on string data
if name not in ['sum', 'min', 'max']:
@@ -609,7 +609,7 @@ def test_round(self):
expected = Series(np.round(self.ts.values, 2),
index=self.ts.index, name='ts')
assert_series_equal(result, expected)
- self.assertEqual(result.name, self.ts.name)
+ assert result.name == self.ts.name
def test_numpy_round(self):
# See gh-12600
@@ -651,7 +651,7 @@ def test_all_any(self):
# Alternative types, with implicit 'object' dtype.
s = Series(['abc', True])
- self.assertEqual('abc', s.any()) # 'abc' || True => 'abc'
+ assert 'abc' == s.any() # 'abc' || True => 'abc'
def test_all_any_params(self):
# Check skipna, with implicit 'object' dtype.
@@ -719,7 +719,7 @@ def test_ops_consistency_on_empty(self):
# float
result = Series(dtype=float).sum()
- self.assertEqual(result, 0)
+ assert result == 0
result = Series(dtype=float).mean()
assert isnull(result)
@@ -729,7 +729,7 @@ def test_ops_consistency_on_empty(self):
# timedelta64[ns]
result = Series(dtype='m8[ns]').sum()
- self.assertEqual(result, Timedelta(0))
+ assert result == Timedelta(0)
result = Series(dtype='m8[ns]').mean()
assert result is pd.NaT
@@ -827,11 +827,11 @@ def test_cov(self):
assert isnull(ts1.cov(ts2, min_periods=12))
def test_count(self):
- self.assertEqual(self.ts.count(), len(self.ts))
+ assert self.ts.count() == len(self.ts)
self.ts[::2] = np.NaN
- self.assertEqual(self.ts.count(), np.isfinite(self.ts).sum())
+ assert self.ts.count() == np.isfinite(self.ts).sum()
mi = MultiIndex.from_arrays([list('aabbcc'), [1, 2, 2, nan, 1, 2]])
ts = Series(np.arange(len(mi)), index=mi)
@@ -876,7 +876,7 @@ def test_value_counts_nunique(self):
series[20:500] = np.nan
series[10:20] = 5000
result = series.nunique()
- self.assertEqual(result, 11)
+ assert result == 11
def test_unique(self):
@@ -884,18 +884,18 @@ def test_unique(self):
s = Series([1.2345] * 100)
s[::2] = np.nan
result = s.unique()
- self.assertEqual(len(result), 2)
+ assert len(result) == 2
s = Series([1.2345] * 100, dtype='f4')
s[::2] = np.nan
result = s.unique()
- self.assertEqual(len(result), 2)
+ assert len(result) == 2
# NAs in object arrays #714
s = Series(['foo'] * 100, dtype='O')
s[::2] = np.nan
result = s.unique()
- self.assertEqual(len(result), 2)
+ assert len(result) == 2
# decision about None
s = Series([1, 2, 3, None, None, None], dtype=object)
@@ -953,11 +953,11 @@ def test_drop_duplicates(self):
def test_clip(self):
val = self.ts.median()
- self.assertEqual(self.ts.clip_lower(val).min(), val)
- self.assertEqual(self.ts.clip_upper(val).max(), val)
+ assert self.ts.clip_lower(val).min() == val
+ assert self.ts.clip_upper(val).max() == val
- self.assertEqual(self.ts.clip(lower=val).min(), val)
- self.assertEqual(self.ts.clip(upper=val).max(), val)
+ assert self.ts.clip(lower=val).min() == val
+ assert self.ts.clip(upper=val).max() == val
result = self.ts.clip(-0.5, 0.5)
expected = np.clip(self.ts, -0.5, 0.5)
@@ -974,10 +974,10 @@ def test_clip_types_and_nulls(self):
thresh = s[2]
l = s.clip_lower(thresh)
u = s.clip_upper(thresh)
- self.assertEqual(l[notnull(l)].min(), thresh)
- self.assertEqual(u[notnull(u)].max(), thresh)
- self.assertEqual(list(isnull(s)), list(isnull(l)))
- self.assertEqual(list(isnull(s)), list(isnull(u)))
+ assert l[notnull(l)].min() == thresh
+ assert u[notnull(u)].max() == thresh
+ assert list(isnull(s)) == list(isnull(l))
+ assert list(isnull(s)) == list(isnull(u))
def test_clip_against_series(self):
# GH #6966
@@ -1109,20 +1109,20 @@ def test_timedelta64_analytics(self):
Timestamp('20120101')
result = td.idxmin()
- self.assertEqual(result, 0)
+ assert result == 0
result = td.idxmax()
- self.assertEqual(result, 2)
+ assert result == 2
# GH 2982
# with NaT
td[0] = np.nan
result = td.idxmin()
- self.assertEqual(result, 1)
+ assert result == 1
result = td.idxmax()
- self.assertEqual(result, 2)
+ assert result == 2
# abs
s1 = Series(date_range('20120101', periods=3))
@@ -1139,11 +1139,11 @@ def test_timedelta64_analytics(self):
# max/min
result = td.max()
expected = Timedelta('2 days')
- self.assertEqual(result, expected)
+ assert result == expected
result = td.min()
expected = Timedelta('1 days')
- self.assertEqual(result, expected)
+ assert result == expected
def test_idxmin(self):
# test idxmin
@@ -1153,14 +1153,14 @@ def test_idxmin(self):
self.series[5:15] = np.NaN
# skipna or no
- self.assertEqual(self.series[self.series.idxmin()], self.series.min())
+ assert self.series[self.series.idxmin()] == self.series.min()
assert isnull(self.series.idxmin(skipna=False))
# no NaNs
nona = self.series.dropna()
- self.assertEqual(nona[nona.idxmin()], nona.min())
- self.assertEqual(nona.index.values.tolist().index(nona.idxmin()),
- nona.values.argmin())
+ assert nona[nona.idxmin()] == nona.min()
+ assert (nona.index.values.tolist().index(nona.idxmin()) ==
+ nona.values.argmin())
# all NaNs
allna = self.series * nan
@@ -1170,17 +1170,17 @@ def test_idxmin(self):
from pandas import date_range
s = Series(date_range('20130102', periods=6))
result = s.idxmin()
- self.assertEqual(result, 0)
+ assert result == 0
s[0] = np.nan
result = s.idxmin()
- self.assertEqual(result, 1)
+ assert result == 1
def test_numpy_argmin(self):
# argmin is aliased to idxmin
data = np.random.randint(0, 11, size=10)
result = np.argmin(Series(data))
- self.assertEqual(result, np.argmin(data))
+ assert result == np.argmin(data)
if not _np_version_under1p10:
msg = "the 'out' parameter is not supported"
@@ -1195,14 +1195,14 @@ def test_idxmax(self):
self.series[5:15] = np.NaN
# skipna or no
- self.assertEqual(self.series[self.series.idxmax()], self.series.max())
+ assert self.series[self.series.idxmax()] == self.series.max()
assert isnull(self.series.idxmax(skipna=False))
# no NaNs
nona = self.series.dropna()
- self.assertEqual(nona[nona.idxmax()], nona.max())
- self.assertEqual(nona.index.values.tolist().index(nona.idxmax()),
- nona.values.argmax())
+ assert nona[nona.idxmax()] == nona.max()
+ assert (nona.index.values.tolist().index(nona.idxmax()) ==
+ nona.values.argmax())
# all NaNs
allna = self.series * nan
@@ -1211,32 +1211,32 @@ def test_idxmax(self):
from pandas import date_range
s = Series(date_range('20130102', periods=6))
result = s.idxmax()
- self.assertEqual(result, 5)
+ assert result == 5
s[5] = np.nan
result = s.idxmax()
- self.assertEqual(result, 4)
+ assert result == 4
# Float64Index
# GH 5914
s = pd.Series([1, 2, 3], [1.1, 2.1, 3.1])
result = s.idxmax()
- self.assertEqual(result, 3.1)
+ assert result == 3.1
result = s.idxmin()
- self.assertEqual(result, 1.1)
+ assert result == 1.1
s = pd.Series(s.index, s.index)
result = s.idxmax()
- self.assertEqual(result, 3.1)
+ assert result == 3.1
result = s.idxmin()
- self.assertEqual(result, 1.1)
+ assert result == 1.1
def test_numpy_argmax(self):
# argmax is aliased to idxmax
data = np.random.randint(0, 11, size=10)
result = np.argmax(Series(data))
- self.assertEqual(result, np.argmax(data))
+ assert result == np.argmax(data)
if not _np_version_under1p10:
msg = "the 'out' parameter is not supported"
@@ -1247,11 +1247,11 @@ def test_ptp(self):
N = 1000
arr = np.random.randn(N)
ser = Series(arr)
- self.assertEqual(np.ptp(ser), np.ptp(arr))
+ assert np.ptp(ser) == np.ptp(arr)
# GH11163
s = Series([3, 5, np.nan, -3, 10])
- self.assertEqual(s.ptp(), 13)
+ assert s.ptp() == 13
assert pd.isnull(s.ptp(skipna=False))
mi = pd.MultiIndex.from_product([['a', 'b'], [1, 2, 3]])
@@ -1326,7 +1326,7 @@ def test_searchsorted_numeric_dtypes_scalar(self):
s = Series([1, 2, 90, 1000, 3e9])
r = s.searchsorted(30)
e = 2
- self.assertEqual(r, e)
+ assert r == e
r = s.searchsorted([30])
e = np.array([2], dtype=np.intp)
@@ -1343,7 +1343,7 @@ def test_search_sorted_datetime64_scalar(self):
v = pd.Timestamp('20120102')
r = s.searchsorted(v)
e = 1
- self.assertEqual(r, e)
+ assert r == e
def test_search_sorted_datetime64_list(self):
s = Series(pd.date_range('20120101', periods=10, freq='2D'))
@@ -1417,7 +1417,7 @@ def test_apply_categorical(self):
result = s.apply(lambda x: 'A')
exp = pd.Series(['A'] * 7, name='XX', index=list('abcdefg'))
tm.assert_series_equal(result, exp)
- self.assertEqual(result.dtype, np.object)
+ assert result.dtype == np.object
def test_shift_int(self):
ts = self.ts.astype(int)
diff --git a/pandas/tests/series/test_api.py b/pandas/tests/series/test_api.py
index 7d331f0643b18..5bb463c7a2ebe 100644
--- a/pandas/tests/series/test_api.py
+++ b/pandas/tests/series/test_api.py
@@ -23,11 +23,11 @@ class SharedWithSparse(object):
def test_scalarop_preserve_name(self):
result = self.ts * 2
- self.assertEqual(result.name, self.ts.name)
+ assert result.name == self.ts.name
def test_copy_name(self):
result = self.ts.copy()
- self.assertEqual(result.name, self.ts.name)
+ assert result.name == self.ts.name
def test_copy_index_name_checking(self):
# don't want to be able to modify the index stored elsewhere after
@@ -44,17 +44,17 @@ def test_copy_index_name_checking(self):
def test_append_preserve_name(self):
result = self.ts[:5].append(self.ts[5:])
- self.assertEqual(result.name, self.ts.name)
+ assert result.name == self.ts.name
def test_binop_maybe_preserve_name(self):
# names match, preserve
result = self.ts * self.ts
- self.assertEqual(result.name, self.ts.name)
+ assert result.name == self.ts.name
result = self.ts.mul(self.ts)
- self.assertEqual(result.name, self.ts.name)
+ assert result.name == self.ts.name
result = self.ts * self.ts[:-2]
- self.assertEqual(result.name, self.ts.name)
+ assert result.name == self.ts.name
# names don't match, don't preserve
cp = self.ts.copy()
@@ -70,7 +70,7 @@ def test_binop_maybe_preserve_name(self):
# names match, preserve
s = self.ts.copy()
result = getattr(s, op)(s)
- self.assertEqual(result.name, self.ts.name)
+ assert result.name == self.ts.name
# names don't match, don't preserve
cp = self.ts.copy()
@@ -80,17 +80,17 @@ def test_binop_maybe_preserve_name(self):
def test_combine_first_name(self):
result = self.ts.combine_first(self.ts[:5])
- self.assertEqual(result.name, self.ts.name)
+ assert result.name == self.ts.name
def test_getitem_preserve_name(self):
result = self.ts[self.ts > 0]
- self.assertEqual(result.name, self.ts.name)
+ assert result.name == self.ts.name
result = self.ts[[0, 2, 4]]
- self.assertEqual(result.name, self.ts.name)
+ assert result.name == self.ts.name
result = self.ts[5:10]
- self.assertEqual(result.name, self.ts.name)
+ assert result.name == self.ts.name
def test_pickle(self):
unp_series = self._pickle_roundtrip(self.series)
@@ -107,15 +107,15 @@ def _pickle_roundtrip(self, obj):
def test_argsort_preserve_name(self):
result = self.ts.argsort()
- self.assertEqual(result.name, self.ts.name)
+ assert result.name == self.ts.name
def test_sort_index_name(self):
result = self.ts.sort_index(ascending=False)
- self.assertEqual(result.name, self.ts.name)
+ assert result.name == self.ts.name
def test_to_sparse_pass_name(self):
result = self.ts.to_sparse()
- self.assertEqual(result.name, self.ts.name)
+ assert result.name == self.ts.name
class TestSeriesMisc(TestData, SharedWithSparse, tm.TestCase):
@@ -158,46 +158,47 @@ def test_contains(self):
def test_iter(self):
for i, val in enumerate(self.series):
- self.assertEqual(val, self.series[i])
+ assert val == self.series[i]
for i, val in enumerate(self.ts):
- self.assertEqual(val, self.ts[i])
+ assert val == self.ts[i]
def test_iter_box(self):
vals = [pd.Timestamp('2011-01-01'), pd.Timestamp('2011-01-02')]
s = pd.Series(vals)
- self.assertEqual(s.dtype, 'datetime64[ns]')
+ assert s.dtype == 'datetime64[ns]'
for res, exp in zip(s, vals):
assert isinstance(res, pd.Timestamp)
- self.assertEqual(res, exp)
assert res.tz is None
+ assert res == exp
vals = [pd.Timestamp('2011-01-01', tz='US/Eastern'),
pd.Timestamp('2011-01-02', tz='US/Eastern')]
s = pd.Series(vals)
- self.assertEqual(s.dtype, 'datetime64[ns, US/Eastern]')
+
+ assert s.dtype == 'datetime64[ns, US/Eastern]'
for res, exp in zip(s, vals):
assert isinstance(res, pd.Timestamp)
- self.assertEqual(res, exp)
- self.assertEqual(res.tz, exp.tz)
+ assert res.tz == exp.tz
+ assert res == exp
# timedelta
vals = [pd.Timedelta('1 days'), pd.Timedelta('2 days')]
s = pd.Series(vals)
- self.assertEqual(s.dtype, 'timedelta64[ns]')
+ assert s.dtype == 'timedelta64[ns]'
for res, exp in zip(s, vals):
assert isinstance(res, pd.Timedelta)
- self.assertEqual(res, exp)
+ assert res == exp
# period (object dtype, not boxed)
vals = [pd.Period('2011-01-01', freq='M'),
pd.Period('2011-01-02', freq='M')]
s = pd.Series(vals)
- self.assertEqual(s.dtype, 'object')
+ assert s.dtype == 'object'
for res, exp in zip(s, vals):
assert isinstance(res, pd.Period)
- self.assertEqual(res, exp)
- self.assertEqual(res.freq, 'M')
+ assert res.freq == 'M'
+ assert res == exp
def test_keys(self):
# HACK: By doing this in two stages, we avoid 2to3 wrapping the call
@@ -210,10 +211,10 @@ def test_values(self):
def test_iteritems(self):
for idx, val in compat.iteritems(self.series):
- self.assertEqual(val, self.series[idx])
+ assert val == self.series[idx]
for idx, val in compat.iteritems(self.ts):
- self.assertEqual(val, self.ts[idx])
+ assert val == self.ts[idx]
        # assert is lazy (generators don't define reverse, lists do)
assert not hasattr(self.series.iteritems(), 'reverse')
@@ -274,9 +275,9 @@ def test_copy(self):
def test_axis_alias(self):
s = Series([1, 2, np.nan])
assert_series_equal(s.dropna(axis='rows'), s.dropna(axis='index'))
- self.assertEqual(s.dropna().sum('rows'), 3)
- self.assertEqual(s._get_axis_number('rows'), 0)
- self.assertEqual(s._get_axis_name('rows'), 'index')
+ assert s.dropna().sum('rows') == 3
+ assert s._get_axis_number('rows') == 0
+ assert s._get_axis_name('rows') == 'index'
def test_numpy_unique(self):
# it works!
@@ -293,19 +294,19 @@ def f(x):
result = tsdf.apply(f)
expected = tsdf.max()
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
# .item()
s = Series([1])
result = s.item()
- self.assertEqual(result, 1)
- self.assertEqual(s.item(), s.iloc[0])
+ assert result == 1
+ assert s.item() == s.iloc[0]
# using an ndarray like function
s = Series(np.random.randn(10))
- result = np.ones_like(s)
+ result = Series(np.ones_like(s))
expected = Series(1, index=range(10), dtype='float64')
- # assert_series_equal(result,expected)
+ tm.assert_series_equal(result, expected)
# ravel
s = Series(np.random.randn(10))
@@ -315,21 +316,21 @@ def f(x):
# GH 6658
s = Series([0, 1., -1], index=list('abc'))
result = np.compress(s > 0, s)
- assert_series_equal(result, Series([1.], index=['b']))
+ tm.assert_series_equal(result, Series([1.], index=['b']))
result = np.compress(s < -1, s)
    # result is an empty Index(dtype=object), the same as the original
exp = Series([], dtype='float64', index=Index([], dtype='object'))
- assert_series_equal(result, exp)
+ tm.assert_series_equal(result, exp)
s = Series([0, 1., -1], index=[.1, .2, .3])
result = np.compress(s > 0, s)
- assert_series_equal(result, Series([1.], index=[.2]))
+ tm.assert_series_equal(result, Series([1.], index=[.2]))
result = np.compress(s < -1, s)
    # result is an empty Float64Index, the same as the original
exp = Series([], dtype='float64', index=Index([], dtype='float64'))
- assert_series_equal(result, exp)
+ tm.assert_series_equal(result, exp)
def test_str_attribute(self):
# GH9068
diff --git a/pandas/tests/series/test_apply.py b/pandas/tests/series/test_apply.py
index c764d7b856bb8..089a2c36a5574 100644
--- a/pandas/tests/series/test_apply.py
+++ b/pandas/tests/series/test_apply.py
@@ -61,27 +61,27 @@ def test_apply_dont_convert_dtype(self):
f = lambda x: x if x > 0 else np.nan
result = s.apply(f, convert_dtype=False)
- self.assertEqual(result.dtype, object)
+ assert result.dtype == object
def test_with_string_args(self):
for arg in ['sum', 'mean', 'min', 'max', 'std']:
result = self.ts.apply(arg)
expected = getattr(self.ts, arg)()
- self.assertEqual(result, expected)
+ assert result == expected
def test_apply_args(self):
s = Series(['foo,bar'])
result = s.apply(str.split, args=(',', ))
- self.assertEqual(result[0], ['foo', 'bar'])
+ assert result[0] == ['foo', 'bar']
assert isinstance(result[0], list)
def test_apply_box(self):
# ufunc will not be boxed. Same test cases as the test_map_box
vals = [pd.Timestamp('2011-01-01'), pd.Timestamp('2011-01-02')]
s = pd.Series(vals)
- self.assertEqual(s.dtype, 'datetime64[ns]')
+ assert s.dtype == 'datetime64[ns]'
# boxed value must be Timestamp instance
res = s.apply(lambda x: '{0}_{1}_{2}'.format(x.__class__.__name__,
x.day, x.tz))
@@ -91,7 +91,7 @@ def test_apply_box(self):
vals = [pd.Timestamp('2011-01-01', tz='US/Eastern'),
pd.Timestamp('2011-01-02', tz='US/Eastern')]
s = pd.Series(vals)
- self.assertEqual(s.dtype, 'datetime64[ns, US/Eastern]')
+ assert s.dtype == 'datetime64[ns, US/Eastern]'
res = s.apply(lambda x: '{0}_{1}_{2}'.format(x.__class__.__name__,
x.day, x.tz))
exp = pd.Series(['Timestamp_1_US/Eastern', 'Timestamp_2_US/Eastern'])
@@ -100,7 +100,7 @@ def test_apply_box(self):
# timedelta
vals = [pd.Timedelta('1 days'), pd.Timedelta('2 days')]
s = pd.Series(vals)
- self.assertEqual(s.dtype, 'timedelta64[ns]')
+ assert s.dtype == 'timedelta64[ns]'
res = s.apply(lambda x: '{0}_{1}'.format(x.__class__.__name__, x.days))
exp = pd.Series(['Timedelta_1', 'Timedelta_2'])
tm.assert_series_equal(res, exp)
@@ -109,7 +109,7 @@ def test_apply_box(self):
vals = [pd.Period('2011-01-01', freq='M'),
pd.Period('2011-01-02', freq='M')]
s = pd.Series(vals)
- self.assertEqual(s.dtype, 'object')
+ assert s.dtype == 'object'
res = s.apply(lambda x: '{0}_{1}'.format(x.__class__.__name__,
x.freqstr))
exp = pd.Series(['Period_M', 'Period_M'])
@@ -318,13 +318,13 @@ def test_map(self):
merged = target.map(source)
for k, v in compat.iteritems(merged):
- self.assertEqual(v, source[target[k]])
+ assert v == source[target[k]]
# input could be a dict
merged = target.map(source.to_dict())
for k, v in compat.iteritems(merged):
- self.assertEqual(v, source[target[k]])
+ assert v == source[target[k]]
# function
result = self.ts.map(lambda x: x * 2)
@@ -372,11 +372,11 @@ def test_map_int(self):
left = Series({'a': 1., 'b': 2., 'c': 3., 'd': 4})
right = Series({1: 11, 2: 22, 3: 33})
- self.assertEqual(left.dtype, np.float_)
+ assert left.dtype == np.float_
assert issubclass(right.dtype.type, np.integer)
merged = left.map(right)
- self.assertEqual(merged.dtype, np.float_)
+ assert merged.dtype == np.float_
assert isnull(merged['d'])
assert not isnull(merged['c'])
@@ -389,7 +389,7 @@ def test_map_decimal(self):
from decimal import Decimal
result = self.series.map(lambda x: Decimal(str(x)))
- self.assertEqual(result.dtype, np.object_)
+ assert result.dtype == np.object_
assert isinstance(result[0], Decimal)
def test_map_na_exclusion(self):
@@ -457,7 +457,7 @@ class DictWithoutMissing(dict):
def test_map_box(self):
vals = [pd.Timestamp('2011-01-01'), pd.Timestamp('2011-01-02')]
s = pd.Series(vals)
- self.assertEqual(s.dtype, 'datetime64[ns]')
+ assert s.dtype == 'datetime64[ns]'
# boxed value must be Timestamp instance
res = s.map(lambda x: '{0}_{1}_{2}'.format(x.__class__.__name__,
x.day, x.tz))
@@ -467,7 +467,7 @@ def test_map_box(self):
vals = [pd.Timestamp('2011-01-01', tz='US/Eastern'),
pd.Timestamp('2011-01-02', tz='US/Eastern')]
s = pd.Series(vals)
- self.assertEqual(s.dtype, 'datetime64[ns, US/Eastern]')
+ assert s.dtype == 'datetime64[ns, US/Eastern]'
res = s.map(lambda x: '{0}_{1}_{2}'.format(x.__class__.__name__,
x.day, x.tz))
exp = pd.Series(['Timestamp_1_US/Eastern', 'Timestamp_2_US/Eastern'])
@@ -476,7 +476,7 @@ def test_map_box(self):
# timedelta
vals = [pd.Timedelta('1 days'), pd.Timedelta('2 days')]
s = pd.Series(vals)
- self.assertEqual(s.dtype, 'timedelta64[ns]')
+ assert s.dtype == 'timedelta64[ns]'
res = s.map(lambda x: '{0}_{1}'.format(x.__class__.__name__, x.days))
exp = pd.Series(['Timedelta_1', 'Timedelta_2'])
tm.assert_series_equal(res, exp)
@@ -485,7 +485,7 @@ def test_map_box(self):
vals = [pd.Period('2011-01-01', freq='M'),
pd.Period('2011-01-02', freq='M')]
s = pd.Series(vals)
- self.assertEqual(s.dtype, 'object')
+ assert s.dtype == 'object'
res = s.map(lambda x: '{0}_{1}'.format(x.__class__.__name__,
x.freqstr))
exp = pd.Series(['Period_M', 'Period_M'])
@@ -506,7 +506,7 @@ def test_map_categorical(self):
result = s.map(lambda x: 'A')
exp = pd.Series(['A'] * 7, name='XX', index=list('abcdefg'))
tm.assert_series_equal(result, exp)
- self.assertEqual(result.dtype, np.object)
+ assert result.dtype == np.object
with pytest.raises(NotImplementedError):
s.map(lambda x: x, na_action='ignore')
diff --git a/pandas/tests/series/test_asof.py b/pandas/tests/series/test_asof.py
index 80556a5e5ffdb..a839d571c116c 100644
--- a/pandas/tests/series/test_asof.py
+++ b/pandas/tests/series/test_asof.py
@@ -37,7 +37,7 @@ def test_basic(self):
assert (rs == ts[lb]).all()
val = result[result.index[result.index >= ub][0]]
- self.assertEqual(ts[ub], val)
+ assert ts[ub] == val
def test_scalar(self):
@@ -50,16 +50,16 @@ def test_scalar(self):
val1 = ts.asof(ts.index[7])
val2 = ts.asof(ts.index[19])
- self.assertEqual(val1, ts[4])
- self.assertEqual(val2, ts[14])
+ assert val1 == ts[4]
+ assert val2 == ts[14]
# accepts strings
val1 = ts.asof(str(ts.index[7]))
- self.assertEqual(val1, ts[4])
+ assert val1 == ts[4]
# in there
result = ts.asof(ts.index[3])
- self.assertEqual(result, ts[3])
+ assert result == ts[3]
# no as of value
d = ts.index[0] - offsets.BDay()
@@ -118,15 +118,15 @@ def test_periodindex(self):
val1 = ts.asof(ts.index[7])
val2 = ts.asof(ts.index[19])
- self.assertEqual(val1, ts[4])
- self.assertEqual(val2, ts[14])
+ assert val1 == ts[4]
+ assert val2 == ts[14]
# accepts strings
val1 = ts.asof(str(ts.index[7]))
- self.assertEqual(val1, ts[4])
+ assert val1 == ts[4]
# in there
- self.assertEqual(ts.asof(ts.index[3]), ts[3])
+ assert ts.asof(ts.index[3]) == ts[3]
# no as of value
d = ts.index[0].to_timestamp() - offsets.BDay()
diff --git a/pandas/tests/series/test_combine_concat.py b/pandas/tests/series/test_combine_concat.py
index 6042a8c0a2e9d..1291449ae7ce9 100644
--- a/pandas/tests/series/test_combine_concat.py
+++ b/pandas/tests/series/test_combine_concat.py
@@ -24,9 +24,9 @@ def test_append(self):
appendedSeries = self.series.append(self.objSeries)
for idx, value in compat.iteritems(appendedSeries):
if idx in self.series.index:
- self.assertEqual(value, self.series[idx])
+ assert value == self.series[idx]
elif idx in self.objSeries.index:
- self.assertEqual(value, self.objSeries[idx])
+ assert value == self.objSeries[idx]
else:
self.fail("orphaned index!")
@@ -117,9 +117,9 @@ def test_concat_empty_series_dtypes_roundtrips(self):
'M8[ns]'])
for dtype in dtypes:
- self.assertEqual(pd.concat([Series(dtype=dtype)]).dtype, dtype)
- self.assertEqual(pd.concat([Series(dtype=dtype),
- Series(dtype=dtype)]).dtype, dtype)
+ assert pd.concat([Series(dtype=dtype)]).dtype == dtype
+ assert pd.concat([Series(dtype=dtype),
+ Series(dtype=dtype)]).dtype == dtype
def int_result_type(dtype, dtype2):
typs = set([dtype.kind, dtype2.kind])
@@ -155,55 +155,52 @@ def get_result_type(dtype, dtype2):
expected = get_result_type(dtype, dtype2)
result = pd.concat([Series(dtype=dtype), Series(dtype=dtype2)
]).dtype
- self.assertEqual(result.kind, expected)
+ assert result.kind == expected
def test_concat_empty_series_dtypes(self):
- # bools
- self.assertEqual(pd.concat([Series(dtype=np.bool_),
- Series(dtype=np.int32)]).dtype, np.int32)
- self.assertEqual(pd.concat([Series(dtype=np.bool_),
- Series(dtype=np.float32)]).dtype,
- np.object_)
-
- # datetimelike
- self.assertEqual(pd.concat([Series(dtype='m8[ns]'),
- Series(dtype=np.bool)]).dtype, np.object_)
- self.assertEqual(pd.concat([Series(dtype='m8[ns]'),
- Series(dtype=np.int64)]).dtype, np.object_)
- self.assertEqual(pd.concat([Series(dtype='M8[ns]'),
- Series(dtype=np.bool)]).dtype, np.object_)
- self.assertEqual(pd.concat([Series(dtype='M8[ns]'),
- Series(dtype=np.int64)]).dtype, np.object_)
- self.assertEqual(pd.concat([Series(dtype='M8[ns]'),
- Series(dtype=np.bool_),
- Series(dtype=np.int64)]).dtype, np.object_)
+ # booleans
+ assert pd.concat([Series(dtype=np.bool_),
+ Series(dtype=np.int32)]).dtype == np.int32
+ assert pd.concat([Series(dtype=np.bool_),
+ Series(dtype=np.float32)]).dtype == np.object_
+
+ # datetime-like
+ assert pd.concat([Series(dtype='m8[ns]'),
+ Series(dtype=np.bool)]).dtype == np.object_
+ assert pd.concat([Series(dtype='m8[ns]'),
+ Series(dtype=np.int64)]).dtype == np.object_
+ assert pd.concat([Series(dtype='M8[ns]'),
+ Series(dtype=np.bool)]).dtype == np.object_
+ assert pd.concat([Series(dtype='M8[ns]'),
+ Series(dtype=np.int64)]).dtype == np.object_
+ assert pd.concat([Series(dtype='M8[ns]'),
+ Series(dtype=np.bool_),
+ Series(dtype=np.int64)]).dtype == np.object_
# categorical
- self.assertEqual(pd.concat([Series(dtype='category'),
- Series(dtype='category')]).dtype,
- 'category')
- self.assertEqual(pd.concat([Series(dtype='category'),
- Series(dtype='float64')]).dtype,
- 'float64')
- self.assertEqual(pd.concat([Series(dtype='category'),
- Series(dtype='object')]).dtype, 'object')
+ assert pd.concat([Series(dtype='category'),
+ Series(dtype='category')]).dtype == 'category'
+ assert pd.concat([Series(dtype='category'),
+ Series(dtype='float64')]).dtype == 'float64'
+ assert pd.concat([Series(dtype='category'),
+ Series(dtype='object')]).dtype == 'object'
# sparse
result = pd.concat([Series(dtype='float64').to_sparse(), Series(
dtype='float64').to_sparse()])
- self.assertEqual(result.dtype, np.float64)
- self.assertEqual(result.ftype, 'float64:sparse')
+ assert result.dtype == np.float64
+ assert result.ftype == 'float64:sparse'
result = pd.concat([Series(dtype='float64').to_sparse(), Series(
dtype='float64')])
- self.assertEqual(result.dtype, np.float64)
- self.assertEqual(result.ftype, 'float64:sparse')
+ assert result.dtype == np.float64
+ assert result.ftype == 'float64:sparse'
result = pd.concat([Series(dtype='float64').to_sparse(), Series(
dtype='object')])
- self.assertEqual(result.dtype, np.object_)
- self.assertEqual(result.ftype, 'object:dense')
+ assert result.dtype == np.object_
+ assert result.ftype == 'object:dense'
def test_combine_first_dt64(self):
from pandas.core.tools.datetimes import to_datetime
@@ -245,7 +242,7 @@ def test_append_concat(self):
rng2 = rng.copy()
rng1.name = 'foo'
rng2.name = 'bar'
- self.assertEqual(rng1.append(rng1).name, 'foo')
+ assert rng1.append(rng1).name == 'foo'
assert rng1.append(rng2).name is None
def test_append_concat_tz(self):
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index 966861fe3c1e4..a0a68a332f735 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -58,11 +58,11 @@ def test_constructor(self):
assert tm.equalContents(derived.index, self.ts.index)
# Ensure new index is not created
- self.assertEqual(id(self.ts.index), id(derived.index))
+ assert id(self.ts.index) == id(derived.index)
# Mixed type Series
mixed = Series(['hello', np.NaN], index=[0, 1])
- self.assertEqual(mixed.dtype, np.object_)
+ assert mixed.dtype == np.object_
assert mixed[1] is np.NaN
assert not self.empty.index.is_all_dates
@@ -73,7 +73,7 @@ def test_constructor(self):
mixed.name = 'Series'
rs = Series(mixed).name
xp = 'Series'
- self.assertEqual(rs, xp)
+ assert rs == xp
# raise on MultiIndex GH4187
m = MultiIndex.from_arrays([[1, 2], [3, 4]])
@@ -248,10 +248,10 @@ def test_constructor_corner(self):
def test_constructor_sanitize(self):
s = Series(np.array([1., 1., 8.]), dtype='i8')
- self.assertEqual(s.dtype, np.dtype('i8'))
+ assert s.dtype == np.dtype('i8')
s = Series(np.array([1., 1., np.nan]), copy=True, dtype='i8')
- self.assertEqual(s.dtype, np.dtype('f8'))
+ assert s.dtype == np.dtype('f8')
def test_constructor_copy(self):
# GH15125
@@ -266,15 +266,15 @@ def test_constructor_copy(self):
# changes to origin of copy does not affect the copy
x[0] = 2.
assert not x.equals(y)
- self.assertEqual(x[0], 2.)
- self.assertEqual(y[0], 1.)
+ assert x[0] == 2.
+ assert y[0] == 1.
def test_constructor_pass_none(self):
s = Series(None, index=lrange(5))
- self.assertEqual(s.dtype, np.float64)
+ assert s.dtype == np.float64
s = Series(None, index=lrange(5), dtype=object)
- self.assertEqual(s.dtype, np.object_)
+ assert s.dtype == np.object_
# GH 7431
# inference on the index
@@ -285,12 +285,12 @@ def test_constructor_pass_none(self):
def test_constructor_pass_nan_nat(self):
# GH 13467
exp = Series([np.nan, np.nan], dtype=np.float64)
- self.assertEqual(exp.dtype, np.float64)
+ assert exp.dtype == np.float64
tm.assert_series_equal(Series([np.nan, np.nan]), exp)
tm.assert_series_equal(Series(np.array([np.nan, np.nan])), exp)
exp = Series([pd.NaT, pd.NaT])
- self.assertEqual(exp.dtype, 'datetime64[ns]')
+ assert exp.dtype == 'datetime64[ns]'
tm.assert_series_equal(Series([pd.NaT, pd.NaT]), exp)
tm.assert_series_equal(Series(np.array([pd.NaT, pd.NaT])), exp)
@@ -310,7 +310,7 @@ def test_constructor_dtype_nocast(self):
s2 = Series(s, dtype=np.int64)
s2[1] = 5
- self.assertEqual(s[1], 5)
+ assert s[1] == 5
def test_constructor_datelike_coercion(self):
@@ -318,8 +318,8 @@ def test_constructor_datelike_coercion(self):
        # incorrectly inferring on datetimelike-looking values when object
        # dtype is specified
s = Series([Timestamp('20130101'), 'NOV'], dtype=object)
- self.assertEqual(s.iloc[0], Timestamp('20130101'))
- self.assertEqual(s.iloc[1], 'NOV')
+ assert s.iloc[0] == Timestamp('20130101')
+ assert s.iloc[1] == 'NOV'
assert s.dtype == object
# the dtype was being reset on the slicing and re-inferred to datetime
@@ -361,11 +361,11 @@ def test_constructor_dtype_datetime64(self):
s = Series([datetime(2001, 1, 2, 0, 0), iNaT], dtype='M8[ns]')
assert isnull(s[1])
- self.assertEqual(s.dtype, 'M8[ns]')
+ assert s.dtype == 'M8[ns]'
s = Series([datetime(2001, 1, 2, 0, 0), nan], dtype='M8[ns]')
assert isnull(s[1])
- self.assertEqual(s.dtype, 'M8[ns]')
+ assert s.dtype == 'M8[ns]'
# GH3416
dates = [
@@ -375,10 +375,10 @@ def test_constructor_dtype_datetime64(self):
]
s = Series(dates)
- self.assertEqual(s.dtype, 'M8[ns]')
+ assert s.dtype == 'M8[ns]'
s.iloc[0] = np.nan
- self.assertEqual(s.dtype, 'M8[ns]')
+ assert s.dtype == 'M8[ns]'
# invalid astypes
for t in ['s', 'D', 'us', 'ms']:
@@ -392,15 +392,15 @@ def test_constructor_dtype_datetime64(self):
        # invalid dates can be held as object
result = Series([datetime(2, 1, 1)])
- self.assertEqual(result[0], datetime(2, 1, 1, 0, 0))
+ assert result[0] == datetime(2, 1, 1, 0, 0)
result = Series([datetime(3000, 1, 1)])
- self.assertEqual(result[0], datetime(3000, 1, 1, 0, 0))
+ assert result[0] == datetime(3000, 1, 1, 0, 0)
# don't mix types
result = Series([Timestamp('20130101'), 1], index=['a', 'b'])
- self.assertEqual(result['a'], Timestamp('20130101'))
- self.assertEqual(result['b'], 1)
+ assert result['a'] == Timestamp('20130101')
+ assert result['b'] == 1
# GH6529
# coerce datetime64 non-ns properly
@@ -426,17 +426,17 @@ def test_constructor_dtype_datetime64(self):
dtype=object)
series1 = Series(dates2, dates)
tm.assert_numpy_array_equal(series1.values, dates2)
- self.assertEqual(series1.dtype, object)
+ assert series1.dtype == object
# these will correctly infer a datetime
s = Series([None, pd.NaT, '2013-08-05 15:30:00.000001'])
- self.assertEqual(s.dtype, 'datetime64[ns]')
+ assert s.dtype == 'datetime64[ns]'
s = Series([np.nan, pd.NaT, '2013-08-05 15:30:00.000001'])
- self.assertEqual(s.dtype, 'datetime64[ns]')
+ assert s.dtype == 'datetime64[ns]'
s = Series([pd.NaT, None, '2013-08-05 15:30:00.000001'])
- self.assertEqual(s.dtype, 'datetime64[ns]')
+ assert s.dtype == 'datetime64[ns]'
s = Series([pd.NaT, np.nan, '2013-08-05 15:30:00.000001'])
- self.assertEqual(s.dtype, 'datetime64[ns]')
+ assert s.dtype == 'datetime64[ns]'
# tz-aware (UTC and other tz's)
# GH 8411
@@ -488,11 +488,11 @@ def test_constructor_with_datetime_tz(self):
# indexing
result = s.iloc[0]
- self.assertEqual(result, Timestamp('2013-01-01 00:00:00-0500',
- tz='US/Eastern', freq='D'))
+ assert result == Timestamp('2013-01-01 00:00:00-0500',
+ tz='US/Eastern', freq='D')
result = s[0]
- self.assertEqual(result, Timestamp('2013-01-01 00:00:00-0500',
- tz='US/Eastern', freq='D'))
+ assert result == Timestamp('2013-01-01 00:00:00-0500',
+ tz='US/Eastern', freq='D')
result = s[Series([True, True, False], index=s.index)]
assert_series_equal(result, s[0:2])
@@ -589,7 +589,7 @@ def test_constructor_periodindex(self):
expected = Series(pi.asobject)
assert_series_equal(s, expected)
- self.assertEqual(s.dtype, 'object')
+ assert s.dtype == 'object'
def test_constructor_dict(self):
d = {'a': 0., 'b': 1., 'c': 2.}
@@ -693,12 +693,12 @@ class A(OrderedDict):
def test_constructor_list_of_tuples(self):
data = [(1, 1), (2, 2), (2, 3)]
s = Series(data)
- self.assertEqual(list(s), data)
+ assert list(s) == data
def test_constructor_tuple_of_tuples(self):
data = ((1, 1), (2, 2), (2, 3))
s = Series(data)
- self.assertEqual(tuple(s), data)
+ assert tuple(s) == data
def test_constructor_set(self):
values = set([1, 2, 3, 4, 5])
@@ -714,80 +714,80 @@ def test_fromDict(self):
data = {'a': 0, 'b': '1', 'c': '2', 'd': datetime.now()}
series = Series(data)
- self.assertEqual(series.dtype, np.object_)
+ assert series.dtype == np.object_
data = {'a': 0, 'b': '1', 'c': '2', 'd': '3'}
series = Series(data)
- self.assertEqual(series.dtype, np.object_)
+ assert series.dtype == np.object_
data = {'a': '0', 'b': '1'}
series = Series(data, dtype=float)
- self.assertEqual(series.dtype, np.float64)
+ assert series.dtype == np.float64
def test_fromValue(self):
nans = Series(np.NaN, index=self.ts.index)
- self.assertEqual(nans.dtype, np.float_)
- self.assertEqual(len(nans), len(self.ts))
+ assert nans.dtype == np.float_
+ assert len(nans) == len(self.ts)
strings = Series('foo', index=self.ts.index)
- self.assertEqual(strings.dtype, np.object_)
- self.assertEqual(len(strings), len(self.ts))
+ assert strings.dtype == np.object_
+ assert len(strings) == len(self.ts)
d = datetime.now()
dates = Series(d, index=self.ts.index)
- self.assertEqual(dates.dtype, 'M8[ns]')
- self.assertEqual(len(dates), len(self.ts))
+ assert dates.dtype == 'M8[ns]'
+ assert len(dates) == len(self.ts)
# GH12336
# Test construction of categorical series from value
categorical = Series(0, index=self.ts.index, dtype="category")
expected = Series(0, index=self.ts.index).astype("category")
- self.assertEqual(categorical.dtype, 'category')
- self.assertEqual(len(categorical), len(self.ts))
+ assert categorical.dtype == 'category'
+ assert len(categorical) == len(self.ts)
tm.assert_series_equal(categorical, expected)
def test_constructor_dtype_timedelta64(self):
# basic
td = Series([timedelta(days=i) for i in range(3)])
- self.assertEqual(td.dtype, 'timedelta64[ns]')
+ assert td.dtype == 'timedelta64[ns]'
td = Series([timedelta(days=1)])
- self.assertEqual(td.dtype, 'timedelta64[ns]')
+ assert td.dtype == 'timedelta64[ns]'
td = Series([timedelta(days=1), timedelta(days=2), np.timedelta64(
1, 's')])
- self.assertEqual(td.dtype, 'timedelta64[ns]')
+ assert td.dtype == 'timedelta64[ns]'
# mixed with NaT
td = Series([timedelta(days=1), NaT], dtype='m8[ns]')
- self.assertEqual(td.dtype, 'timedelta64[ns]')
+ assert td.dtype == 'timedelta64[ns]'
td = Series([timedelta(days=1), np.nan], dtype='m8[ns]')
- self.assertEqual(td.dtype, 'timedelta64[ns]')
+ assert td.dtype == 'timedelta64[ns]'
td = Series([np.timedelta64(300000000), pd.NaT], dtype='m8[ns]')
- self.assertEqual(td.dtype, 'timedelta64[ns]')
+ assert td.dtype == 'timedelta64[ns]'
# improved inference
# GH5689
td = Series([np.timedelta64(300000000), NaT])
- self.assertEqual(td.dtype, 'timedelta64[ns]')
+ assert td.dtype == 'timedelta64[ns]'
        # because iNaT is an int, it is not coerced to timedelta
td = Series([np.timedelta64(300000000), iNaT])
- self.assertEqual(td.dtype, 'object')
+ assert td.dtype == 'object'
td = Series([np.timedelta64(300000000), np.nan])
- self.assertEqual(td.dtype, 'timedelta64[ns]')
+ assert td.dtype == 'timedelta64[ns]'
td = Series([pd.NaT, np.timedelta64(300000000)])
- self.assertEqual(td.dtype, 'timedelta64[ns]')
+ assert td.dtype == 'timedelta64[ns]'
td = Series([np.timedelta64(1, 's')])
- self.assertEqual(td.dtype, 'timedelta64[ns]')
+ assert td.dtype == 'timedelta64[ns]'
# these are frequency conversion astypes
# for t in ['s', 'D', 'us', 'ms']:
@@ -807,17 +807,17 @@ def f():
# leave as object here
td = Series([timedelta(days=i) for i in range(3)] + ['foo'])
- self.assertEqual(td.dtype, 'object')
+ assert td.dtype == 'object'
# these will correctly infer a timedelta
s = Series([None, pd.NaT, '1 Day'])
- self.assertEqual(s.dtype, 'timedelta64[ns]')
+ assert s.dtype == 'timedelta64[ns]'
s = Series([np.nan, pd.NaT, '1 Day'])
- self.assertEqual(s.dtype, 'timedelta64[ns]')
+ assert s.dtype == 'timedelta64[ns]'
s = Series([pd.NaT, None, '1 Day'])
- self.assertEqual(s.dtype, 'timedelta64[ns]')
+ assert s.dtype == 'timedelta64[ns]'
s = Series([pd.NaT, np.nan, '1 Day'])
- self.assertEqual(s.dtype, 'timedelta64[ns]')
+ assert s.dtype == 'timedelta64[ns]'
def test_NaT_scalar(self):
series = Series([0, 1000, 2000, iNaT], dtype='M8[ns]')
@@ -838,7 +838,7 @@ def test_constructor_name_hashable(self):
for n in [777, 777., 'name', datetime(2001, 11, 11), (1, ), u"\u05D0"]:
for data in [[1, 2, 3], np.ones(3), {'a': 0, 'b': 1}]:
s = Series(data, name=n)
- self.assertEqual(s.name, n)
+ assert s.name == n
def test_constructor_name_unhashable(self):
for n in [['name_list'], np.ones(2), {1: 2}]:
@@ -847,7 +847,7 @@ def test_constructor_name_unhashable(self):
def test_auto_conversion(self):
series = Series(list(date_range('1/1/2000', periods=10)))
- self.assertEqual(series.dtype, 'M8[ns]')
+ assert series.dtype == 'M8[ns]'
def test_constructor_cant_cast_datetime64(self):
msg = "Cannot cast datetime64 to "
diff --git a/pandas/tests/series/test_datetime_values.py b/pandas/tests/series/test_datetime_values.py
index 13fa3bc782f89..50914eef1abc8 100644
--- a/pandas/tests/series/test_datetime_values.py
+++ b/pandas/tests/series/test_datetime_values.py
@@ -50,7 +50,7 @@ def compare(s, name):
a = getattr(s.dt, prop)
b = get_expected(s, prop)
if not (is_list_like(a) and is_list_like(b)):
- self.assertEqual(a, b)
+ assert a == b
else:
tm.assert_series_equal(a, b)
@@ -79,10 +79,9 @@ def compare(s, name):
tm.assert_series_equal(result, expected)
tz_result = result.dt.tz
- self.assertEqual(str(tz_result), 'US/Eastern')
+ assert str(tz_result) == 'US/Eastern'
freq_result = s.dt.freq
- self.assertEqual(freq_result, DatetimeIndex(s.values,
- freq='infer').freq)
+ assert freq_result == DatetimeIndex(s.values, freq='infer').freq
# let's localize, then convert
result = s.dt.tz_localize('UTC').dt.tz_convert('US/Eastern')
@@ -149,12 +148,11 @@ def compare(s, name):
tm.assert_series_equal(result, expected)
tz_result = result.dt.tz
- self.assertEqual(str(tz_result), 'CET')
+ assert str(tz_result) == 'CET'
freq_result = s.dt.freq
- self.assertEqual(freq_result, DatetimeIndex(s.values,
- freq='infer').freq)
+ assert freq_result == DatetimeIndex(s.values, freq='infer').freq
- # timedeltaindex
+ # timedelta index
cases = [Series(timedelta_range('1 day', periods=5),
index=list('abcde'), name='xxx'),
Series(timedelta_range('1 day 01:23:45', periods=5,
@@ -183,8 +181,7 @@ def compare(s, name):
assert result.dtype == 'float64'
freq_result = s.dt.freq
- self.assertEqual(freq_result, TimedeltaIndex(s.values,
- freq='infer').freq)
+ assert freq_result == TimedeltaIndex(s.values, freq='infer').freq
# both
index = date_range('20130101', periods=3, freq='D')
@@ -218,7 +215,7 @@ def compare(s, name):
getattr(s.dt, prop)
freq_result = s.dt.freq
- self.assertEqual(freq_result, PeriodIndex(s.values).freq)
+ assert freq_result == PeriodIndex(s.values).freq
# test limited display api
def get_dir(s):
@@ -387,7 +384,7 @@ def test_sub_of_datetime_from_TimeSeries(self):
b = datetime(1993, 6, 22, 13, 30)
a = Series([a])
result = to_timedelta(np.abs(a - b))
- self.assertEqual(result.dtype, 'timedelta64[ns]')
+ assert result.dtype == 'timedelta64[ns]'
def test_between(self):
s = Series(bdate_range('1/1/2000', periods=20).asobject)
diff --git a/pandas/tests/series/test_indexing.py b/pandas/tests/series/test_indexing.py
index 954e80facf848..9f5d80411ed17 100644
--- a/pandas/tests/series/test_indexing.py
+++ b/pandas/tests/series/test_indexing.py
@@ -41,7 +41,7 @@ def test_get(self):
result = s.get(25, 0)
expected = 0
- self.assertEqual(result, expected)
+ assert result == expected
s = Series(np.array([43, 48, 60, 48, 50, 51, 50, 45, 57, 48, 56,
45, 51, 39, 55, 43, 54, 52, 51, 54]),
@@ -54,21 +54,21 @@ def test_get(self):
result = s.get(25, 0)
expected = 43
- self.assertEqual(result, expected)
+ assert result == expected
# GH 7407
# with a boolean accessor
df = pd.DataFrame({'i': [0] * 3, 'b': [False] * 3})
vc = df.i.value_counts()
result = vc.get(99, default='Missing')
- self.assertEqual(result, 'Missing')
+ assert result == 'Missing'
vc = df.b.value_counts()
result = vc.get(False, default='Missing')
- self.assertEqual(result, 3)
+ assert result == 3
result = vc.get(True, default='Missing')
- self.assertEqual(result, 'Missing')
+ assert result == 'Missing'
def test_delitem(self):
@@ -137,7 +137,7 @@ def test_pop(self):
k = df.iloc[4]
result = k.pop('B')
- self.assertEqual(result, 4)
+ assert result == 4
expected = Series([0, 0], index=['A', 'C'], name=4)
assert_series_equal(k, expected)
@@ -146,15 +146,14 @@ def test_getitem_get(self):
idx1 = self.series.index[5]
idx2 = self.objSeries.index[5]
- self.assertEqual(self.series[idx1], self.series.get(idx1))
- self.assertEqual(self.objSeries[idx2], self.objSeries.get(idx2))
+ assert self.series[idx1] == self.series.get(idx1)
+ assert self.objSeries[idx2] == self.objSeries.get(idx2)
- self.assertEqual(self.series[idx1], self.series[5])
- self.assertEqual(self.objSeries[idx2], self.objSeries[5])
+ assert self.series[idx1] == self.series[5]
+ assert self.objSeries[idx2] == self.objSeries[5]
- self.assertEqual(
- self.series.get(-1), self.series.get(self.series.index[-1]))
- self.assertEqual(self.series[5], self.series.get(self.series.index[5]))
+ assert self.series.get(-1) == self.series.get(self.series.index[-1])
+ assert self.series[5] == self.series.get(self.series.index[5])
# missing
d = self.ts.index[0] - BDay()
@@ -191,7 +190,7 @@ def test_iloc(self):
def test_iloc_nonunique(self):
s = Series([0, 1, 2], index=[0, 1, 0])
- self.assertEqual(s.iloc[2], 2)
+ assert s.iloc[2] == 2
def test_getitem_regression(self):
s = Series(lrange(5), index=lrange(5))
@@ -218,15 +217,15 @@ def test_getitem_setitem_slice_bug(self):
def test_getitem_int64(self):
idx = np.int64(5)
- self.assertEqual(self.ts[idx], self.ts[5])
+ assert self.ts[idx] == self.ts[5]
def test_getitem_fancy(self):
slice1 = self.series[[1, 2, 3]]
slice2 = self.objSeries[[1, 2, 3]]
- self.assertEqual(self.series.index[2], slice1.index[1])
- self.assertEqual(self.objSeries.index[2], slice2.index[1])
- self.assertEqual(self.series[2], slice1[1])
- self.assertEqual(self.objSeries[2], slice2[1])
+ assert self.series.index[2] == slice1.index[1]
+ assert self.objSeries.index[2] == slice2.index[1]
+ assert self.series[2] == slice1[1]
+ assert self.objSeries[2] == slice2[1]
def test_getitem_boolean(self):
s = self.series
@@ -242,8 +241,8 @@ def test_getitem_boolean_empty(self):
s = Series([], dtype=np.int64)
s.index.name = 'index_name'
s = s[s.isnull()]
- self.assertEqual(s.index.name, 'index_name')
- self.assertEqual(s.dtype, np.int64)
+ assert s.index.name == 'index_name'
+ assert s.dtype == np.int64
# GH5877
# indexing with empty series
@@ -421,7 +420,7 @@ def test_getitem_setitem_datetimeindex(self):
result = ts["1990-01-01 04:00:00"]
expected = ts[4]
- self.assertEqual(result, expected)
+ assert result == expected
result = ts.copy()
result["1990-01-01 04:00:00"] = 0
@@ -446,7 +445,7 @@ def test_getitem_setitem_datetimeindex(self):
# repeat all the above with naive datetimes
result = ts[datetime(1990, 1, 1, 4)]
expected = ts[4]
- self.assertEqual(result, expected)
+ assert result == expected
result = ts.copy()
result[datetime(1990, 1, 1, 4)] = 0
@@ -470,7 +469,7 @@ def test_getitem_setitem_datetimeindex(self):
result = ts[ts.index[4]]
expected = ts[4]
- self.assertEqual(result, expected)
+ assert result == expected
result = ts[ts.index[4:8]]
expected = ts[4:8]
@@ -500,7 +499,7 @@ def test_getitem_setitem_periodindex(self):
result = ts["1990-01-01 04"]
expected = ts[4]
- self.assertEqual(result, expected)
+ assert result == expected
result = ts.copy()
result["1990-01-01 04"] = 0
@@ -525,7 +524,7 @@ def test_getitem_setitem_periodindex(self):
# GH 2782
result = ts[ts.index[4]]
expected = ts[4]
- self.assertEqual(result, expected)
+ assert result == expected
result = ts[ts.index[4:8]]
expected = ts[4:8]
@@ -557,7 +556,7 @@ def test_getitem_setitem_integers(self):
# caused bug without test
s = Series([1, 2, 3], ['a', 'b', 'c'])
- self.assertEqual(s.iloc[0], s['a'])
+ assert s.iloc[0] == s['a']
s.iloc[0] = 5
self.assertAlmostEqual(s['a'], 5)
@@ -573,7 +572,7 @@ def test_getitem_ambiguous_keyerror(self):
def test_getitem_unordered_dup(self):
obj = Series(lrange(5), index=['c', 'a', 'a', 'b', 'b'])
assert is_scalar(obj['c'])
- self.assertEqual(obj['c'], 0)
+ assert obj['c'] == 0
def test_getitem_dups_with_missing(self):
@@ -600,7 +599,7 @@ def test_getitem_callable(self):
# GH 12533
s = pd.Series(4, index=list('ABCD'))
result = s[lambda x: 'A']
- self.assertEqual(result, s.loc['A'])
+ assert result == s.loc['A']
result = s[lambda x: ['A', 'B']]
tm.assert_series_equal(result, s.loc[['A', 'B']])
@@ -687,14 +686,14 @@ def f():
def test_slice_floats2(self):
s = Series(np.random.rand(10), index=np.arange(10, 20, dtype=float))
- self.assertEqual(len(s.loc[12.0:]), 8)
- self.assertEqual(len(s.loc[12.5:]), 7)
+ assert len(s.loc[12.0:]) == 8
+ assert len(s.loc[12.5:]) == 7
i = np.arange(10, 20, dtype=float)
i[2] = 12.2
s.index = i
- self.assertEqual(len(s.loc[12.0:]), 8)
- self.assertEqual(len(s.loc[12.5:]), 7)
+ assert len(s.loc[12.0:]) == 8
+ assert len(s.loc[12.5:]) == 7
def test_slice_float64(self):
@@ -787,23 +786,23 @@ def test_set_value(self):
idx = self.ts.index[10]
res = self.ts.set_value(idx, 0)
assert res is self.ts
- self.assertEqual(self.ts[idx], 0)
+ assert self.ts[idx] == 0
# equiv
s = self.series.copy()
res = s.set_value('foobar', 0)
assert res is s
- self.assertEqual(res.index[-1], 'foobar')
- self.assertEqual(res['foobar'], 0)
+ assert res.index[-1] == 'foobar'
+ assert res['foobar'] == 0
s = self.series.copy()
s.loc['foobar'] = 0
- self.assertEqual(s.index[-1], 'foobar')
- self.assertEqual(s['foobar'], 0)
+ assert s.index[-1] == 'foobar'
+ assert s['foobar'] == 0
def test_setslice(self):
sl = self.ts[5:20]
- self.assertEqual(len(sl), len(sl.index))
+ assert len(sl) == len(sl.index)
assert sl.index.is_unique
def test_basic_getitem_setitem_corner(self):
@@ -853,11 +852,11 @@ def test_basic_getitem_with_labels(self):
index=['a', 'b', 'c'])
expected = Timestamp('2011-01-01', tz='US/Eastern')
result = s.loc['a']
- self.assertEqual(result, expected)
+ assert result == expected
result = s.iloc[0]
- self.assertEqual(result, expected)
+ assert result == expected
result = s['a']
- self.assertEqual(result, expected)
+ assert result == expected
def test_basic_setitem_with_labels(self):
indices = self.ts.index[[5, 10, 15]]
@@ -904,17 +903,17 @@ def test_basic_setitem_with_labels(self):
expected = Timestamp('2011-01-03', tz='US/Eastern')
s2.loc['a'] = expected
result = s2.loc['a']
- self.assertEqual(result, expected)
+ assert result == expected
s2 = s.copy()
s2.iloc[0] = expected
result = s2.iloc[0]
- self.assertEqual(result, expected)
+ assert result == expected
s2 = s.copy()
s2['a'] = expected
result = s2['a']
- self.assertEqual(result, expected)
+ assert result == expected
def test_loc_getitem(self):
inds = self.series.index[[3, 4, 7]]
@@ -932,8 +931,8 @@ def test_loc_getitem(self):
assert_series_equal(self.series.loc[mask], self.series[mask])
# ask for index value
- self.assertEqual(self.ts.loc[d1], self.ts[d1])
- self.assertEqual(self.ts.loc[d2], self.ts[d2])
+ assert self.ts.loc[d1] == self.ts[d1]
+ assert self.ts.loc[d2] == self.ts[d2]
def test_loc_getitem_not_monotonic(self):
d1, d2 = self.ts.index[[5, 15]]
@@ -977,7 +976,7 @@ def test_setitem_with_tz(self):
for tz in ['US/Eastern', 'UTC', 'Asia/Tokyo']:
orig = pd.Series(pd.date_range('2016-01-01', freq='H', periods=3,
tz=tz))
- self.assertEqual(orig.dtype, 'datetime64[ns, {0}]'.format(tz))
+ assert orig.dtype == 'datetime64[ns, {0}]'.format(tz)
# scalar
s = orig.copy()
@@ -998,7 +997,7 @@ def test_setitem_with_tz(self):
# vector
vals = pd.Series([pd.Timestamp('2011-01-01', tz=tz),
pd.Timestamp('2012-01-01', tz=tz)], index=[1, 2])
- self.assertEqual(vals.dtype, 'datetime64[ns, {0}]'.format(tz))
+ assert vals.dtype == 'datetime64[ns, {0}]'.format(tz)
s[[1, 2]] = vals
exp = pd.Series([pd.Timestamp('2016-01-01 00:00', tz=tz),
@@ -1019,7 +1018,7 @@ def test_setitem_with_tz_dst(self):
tz = 'US/Eastern'
orig = pd.Series(pd.date_range('2016-11-06', freq='H', periods=3,
tz=tz))
- self.assertEqual(orig.dtype, 'datetime64[ns, {0}]'.format(tz))
+ assert orig.dtype == 'datetime64[ns, {0}]'.format(tz)
# scalar
s = orig.copy()
@@ -1040,7 +1039,7 @@ def test_setitem_with_tz_dst(self):
# vector
vals = pd.Series([pd.Timestamp('2011-01-01', tz=tz),
pd.Timestamp('2012-01-01', tz=tz)], index=[1, 2])
- self.assertEqual(vals.dtype, 'datetime64[ns, {0}]'.format(tz))
+ assert vals.dtype == 'datetime64[ns, {0}]'.format(tz)
s[[1, 2]] = vals
exp = pd.Series([pd.Timestamp('2016-11-06 00:00', tz=tz),
@@ -1107,7 +1106,7 @@ def test_where(self):
s[mask] = lrange(2, 7)
expected = Series(lrange(2, 7) + lrange(5, 10), dtype=dtype)
assert_series_equal(s, expected)
- self.assertEqual(s.dtype, expected.dtype)
+ assert s.dtype == expected.dtype
# these are allowed operations, but are upcasted
for dtype in [np.int64, np.float64]:
@@ -1117,7 +1116,7 @@ def test_where(self):
s[mask] = values
expected = Series(values + lrange(5, 10), dtype='float64')
assert_series_equal(s, expected)
- self.assertEqual(s.dtype, expected.dtype)
+ assert s.dtype == expected.dtype
# GH 9731
s = Series(np.arange(10), dtype='int64')
@@ -1141,7 +1140,7 @@ def test_where(self):
s[mask] = lrange(2, 7)
expected = Series(lrange(2, 7) + lrange(5, 10), dtype='int64')
assert_series_equal(s, expected)
- self.assertEqual(s.dtype, expected.dtype)
+ assert s.dtype == expected.dtype
s = Series(np.arange(10), dtype='int64')
mask = s > 5
@@ -1506,8 +1505,8 @@ def test_ix_setitem(self):
# set index value
self.series.loc[d1] = 4
self.series.loc[d2] = 6
- self.assertEqual(self.series[d1], 4)
- self.assertEqual(self.series[d2], 6)
+ assert self.series[d1] == 4
+ assert self.series[d2] == 6
def test_where_numeric_with_string(self):
# GH 9280
@@ -1639,7 +1638,7 @@ def test_datetime_indexing(self):
pytest.raises(KeyError, s.__getitem__, stamp)
s[stamp] = 0
- self.assertEqual(s[stamp], 0)
+ assert s[stamp] == 0
# not monotonic
s = Series(len(index), index=index)
@@ -1647,7 +1646,7 @@ def test_datetime_indexing(self):
pytest.raises(KeyError, s.__getitem__, stamp)
s[stamp] = 0
- self.assertEqual(s[stamp], 0)
+ assert s[stamp] == 0
def test_timedelta_assignment(self):
# GH 8209
@@ -1702,7 +1701,7 @@ def test_underlying_data_conversion(self):
df_tmp = df.iloc[ck] # noqa
df["bb"].iloc[0] = .15
- self.assertEqual(df['bb'].iloc[0], 0.15)
+ assert df['bb'].iloc[0] == 0.15
pd.set_option('chained_assignment', 'raise')
# GH 3217
@@ -1788,10 +1787,10 @@ def _check_align(a, b, how='left', fill=None):
assert_series_equal(aa, ea)
assert_series_equal(ab, eb)
- self.assertEqual(aa.name, 'ts')
- self.assertEqual(ea.name, 'ts')
- self.assertEqual(ab.name, 'ts')
- self.assertEqual(eb.name, 'ts')
+ assert aa.name == 'ts'
+ assert ea.name == 'ts'
+ assert ab.name == 'ts'
+ assert eb.name == 'ts'
for kind in JOIN_TYPES:
_check_align(self.ts[2:], self.ts[:-5], how=kind)
@@ -1932,13 +1931,13 @@ def test_reindex(self):
subSeries = self.series.reindex(subIndex)
for idx, val in compat.iteritems(subSeries):
- self.assertEqual(val, self.series[idx])
+ assert val == self.series[idx]
subIndex2 = self.ts.index[10:20]
subTS = self.ts.reindex(subIndex2)
for idx, val in compat.iteritems(subTS):
- self.assertEqual(val, self.ts[idx])
+ assert val == self.ts[idx]
stuffSeries = self.ts.reindex(subIndex)
assert np.isnan(stuffSeries).all()
@@ -1947,7 +1946,7 @@ def test_reindex(self):
nonContigIndex = self.ts.index[::2]
subNonContig = self.ts.reindex(nonContigIndex)
for idx, val in compat.iteritems(subNonContig):
- self.assertEqual(val, self.ts[idx])
+ assert val == self.ts[idx]
# return a copy with the same index here
result = self.ts.reindex()
@@ -2070,11 +2069,11 @@ def test_reindex_int(self):
reindexed_int = int_ts.reindex(self.ts.index)
# if NaNs introduced
- self.assertEqual(reindexed_int.dtype, np.float_)
+ assert reindexed_int.dtype == np.float_
# NO NaNs introduced
reindexed_int = int_ts.reindex(int_ts.index[::2])
- self.assertEqual(reindexed_int.dtype, np.int_)
+ assert reindexed_int.dtype == np.int_
def test_reindex_bool(self):
@@ -2086,11 +2085,11 @@ def test_reindex_bool(self):
reindexed_bool = bool_ts.reindex(self.ts.index)
# if NaNs introduced
- self.assertEqual(reindexed_bool.dtype, np.object_)
+ assert reindexed_bool.dtype == np.object_
# NO NaNs introduced
reindexed_bool = bool_ts.reindex(bool_ts.index[::2])
- self.assertEqual(reindexed_bool.dtype, np.bool_)
+ assert reindexed_bool.dtype == np.bool_
def test_reindex_bool_pad(self):
# fail
@@ -2224,8 +2223,8 @@ def test_multilevel_preserve_name(self):
result = s['foo']
result2 = s.loc['foo']
- self.assertEqual(result.name, s.name)
- self.assertEqual(result2.name, s.name)
+ assert result.name == s.name
+ assert result2.name == s.name
def test_setitem_scalar_into_readonly_backing_data(self):
# GH14359: test that you cannot mutate a read only buffer
@@ -2238,12 +2237,7 @@ def test_setitem_scalar_into_readonly_backing_data(self):
with pytest.raises(ValueError):
series[n] = 1
- self.assertEqual(
- array[n],
- 0,
- msg='even though the ValueError was raised, the underlying'
- ' array was still mutated!',
- )
+ assert array[n] == 0
def test_setitem_slice_into_readonly_backing_data(self):
# GH14359: test that you cannot mutate a read only buffer
@@ -2280,9 +2274,9 @@ def test_index_unique(self):
uniques = self.dups.index.unique()
expected = DatetimeIndex([datetime(2000, 1, 2), datetime(2000, 1, 3),
datetime(2000, 1, 4), datetime(2000, 1, 5)])
- self.assertEqual(uniques.dtype, 'M8[ns]') # sanity
+ assert uniques.dtype == 'M8[ns]' # sanity
tm.assert_index_equal(uniques, expected)
- self.assertEqual(self.dups.index.nunique(), 4)
+ assert self.dups.index.nunique() == 4
# #2563
assert isinstance(uniques, DatetimeIndex)
@@ -2293,22 +2287,22 @@ def test_index_unique(self):
expected = DatetimeIndex(expected, name='foo')
expected = expected.tz_localize('US/Eastern')
assert result.tz is not None
- self.assertEqual(result.name, 'foo')
+ assert result.name == 'foo'
tm.assert_index_equal(result, expected)
# NaT, note this is excluded
arr = [1370745748 + t for t in range(20)] + [tslib.iNaT]
idx = DatetimeIndex(arr * 3)
tm.assert_index_equal(idx.unique(), DatetimeIndex(arr))
- self.assertEqual(idx.nunique(), 20)
- self.assertEqual(idx.nunique(dropna=False), 21)
+ assert idx.nunique() == 20
+ assert idx.nunique(dropna=False) == 21
arr = [Timestamp('2013-06-09 02:42:28') + timedelta(seconds=t)
for t in range(20)] + [NaT]
idx = DatetimeIndex(arr * 3)
tm.assert_index_equal(idx.unique(), DatetimeIndex(arr))
- self.assertEqual(idx.nunique(), 20)
- self.assertEqual(idx.nunique(dropna=False), 21)
+ assert idx.nunique() == 20
+ assert idx.nunique(dropna=False) == 21
def test_index_dupes_contains(self):
d = datetime(2011, 12, 5, 20, 30)
@@ -2339,7 +2333,7 @@ def test_duplicate_dates_indexing(self):
# new index
ts[datetime(2000, 1, 6)] = 0
- self.assertEqual(ts[datetime(2000, 1, 6)], 0)
+ assert ts[datetime(2000, 1, 6)] == 0
def test_range_slice(self):
idx = DatetimeIndex(['1/1/2000', '1/2/2000', '1/2/2000', '1/3/2000',
@@ -2516,11 +2510,11 @@ def test_fancy_getitem(self):
s = Series(np.arange(len(dti)), index=dti)
- self.assertEqual(s[48], 48)
- self.assertEqual(s['1/2/2009'], 48)
- self.assertEqual(s['2009-1-2'], 48)
- self.assertEqual(s[datetime(2009, 1, 2)], 48)
- self.assertEqual(s[lib.Timestamp(datetime(2009, 1, 2))], 48)
+ assert s[48] == 48
+ assert s['1/2/2009'] == 48
+ assert s['2009-1-2'] == 48
+ assert s[datetime(2009, 1, 2)] == 48
+ assert s[lib.Timestamp(datetime(2009, 1, 2))] == 48
pytest.raises(KeyError, s.__getitem__, '2009-1-3')
assert_series_equal(s['3/6/2009':'2009-06-05'],
@@ -2532,9 +2526,9 @@ def test_fancy_setitem(self):
s = Series(np.arange(len(dti)), index=dti)
s[48] = -1
- self.assertEqual(s[48], -1)
+ assert s[48] == -1
s['1/2/2009'] = -2
- self.assertEqual(s[48], -2)
+ assert s[48] == -2
s['1/2/2009':'2009-06-05'] = -3
assert (s[48:54] == -3).all()
@@ -2557,7 +2551,7 @@ def test_dti_reset_index_round_trip(self):
dti = DatetimeIndex(start='1/1/2001', end='6/1/2001', freq='D')
d1 = DataFrame({'v': np.random.rand(len(dti))}, index=dti)
d2 = d1.reset_index()
- self.assertEqual(d2.dtypes[0], np.dtype('M8[ns]'))
+ assert d2.dtypes[0] == np.dtype('M8[ns]')
d3 = d2.set_index('index')
assert_frame_equal(d1, d3, check_names=False)
@@ -2566,8 +2560,8 @@ def test_dti_reset_index_round_trip(self):
df = DataFrame([[stamp, 12.1]], columns=['Date', 'Value'])
df = df.set_index('Date')
- self.assertEqual(df.index[0], stamp)
- self.assertEqual(df.reset_index()['Date'][0], stamp)
+ assert df.index[0] == stamp
+ assert df.reset_index()['Date'][0] == stamp
def test_series_set_value(self):
# #1561
@@ -2584,7 +2578,7 @@ def test_series_set_value(self):
# s = Series(index[:1], index[:1])
# s2 = s.set_value(dates[1], index[1])
- # self.assertEqual(s2.values.dtype, 'M8[ns]')
+ # assert s2.values.dtype == 'M8[ns]'
@slow
def test_slice_locs_indexerror(self):
@@ -2669,9 +2663,9 @@ def test_nat_operations(self):
# GH 8617
s = Series([0, pd.NaT], dtype='m8[ns]')
exp = s[0]
- self.assertEqual(s.median(), exp)
- self.assertEqual(s.min(), exp)
- self.assertEqual(s.max(), exp)
+ assert s.median() == exp
+ assert s.min() == exp
+ assert s.max() == exp
def test_round_nat(self):
# GH14940
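The hunks above all apply the same mechanical conversion: `self.assertEqual(a, b)` becomes `assert a == b` (and similarly for the other unittest helpers). A minimal, hypothetical sketch (not pandas code) of why the two spellings are interchangeable under pytest, whose assertion rewriting restores the detailed failure messages that `assertEqual` used to provide:

```python
import unittest


class OldStyle(unittest.TestCase):
    """unittest idiom: assertion helpers bound to TestCase."""

    def runTest(self):
        s = {'a': 1}
        self.assertEqual(s['a'], 1)


def new_style():
    """pytest idiom: a bare assert; pytest's assertion
    rewriting reports both operands when it fails."""
    s = {'a': 1}
    assert s['a'] == 1


# Both formulations pass (and fail) under the same conditions.
OldStyle().runTest()
new_style()
```

One helper has no direct `==` equivalent: `assertAlmostEqual` (left untouched in these hunks) would instead map to `pytest.approx` or `tm.assert_almost_equal`.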
diff --git a/pandas/tests/series/test_internals.py b/pandas/tests/series/test_internals.py
index 19170c82953ad..31492a4ab214a 100644
--- a/pandas/tests/series/test_internals.py
+++ b/pandas/tests/series/test_internals.py
@@ -116,7 +116,7 @@ def test_convert_objects(self):
# r = s.copy()
# r[0] = np.nan
# result = r.convert_objects(convert_dates=True,convert_numeric=False)
- # self.assertEqual(result.dtype, 'M8[ns]')
+ # assert result.dtype == 'M8[ns]'
# dateutil parses some single letters into today's value as a date
for x in 'abcdefghijklmnopqrstuvwxyz':
@@ -282,7 +282,7 @@ def test_convert(self):
# r = s.copy()
# r[0] = np.nan
# result = r._convert(convert_dates=True,convert_numeric=False)
- # self.assertEqual(result.dtype, 'M8[ns]')
+ # assert result.dtype == 'M8[ns]'
# dateutil parses some single letters into today's value as a date
expected = Series([lib.NaT])
diff --git a/pandas/tests/series/test_io.py b/pandas/tests/series/test_io.py
index 7a9d0390a2cfa..24bb3bbc7fc16 100644
--- a/pandas/tests/series/test_io.py
+++ b/pandas/tests/series/test_io.py
@@ -135,12 +135,12 @@ def test_timeseries_periodindex(self):
prng = period_range('1/1/2011', '1/1/2012', freq='M')
ts = Series(np.random.randn(len(prng)), prng)
new_ts = tm.round_trip_pickle(ts)
- self.assertEqual(new_ts.index.freq, 'M')
+ assert new_ts.index.freq == 'M'
def test_pickle_preserve_name(self):
for n in [777, 777., 'name', datetime(2001, 11, 11), (1, 2)]:
unpickled = self._pickle_roundtrip_name(tm.makeTimeSeries(name=n))
- self.assertEqual(unpickled.name, n)
+ assert unpickled.name == n
def _pickle_roundtrip_name(self, obj):
@@ -178,7 +178,7 @@ def test_tolist(self):
# datetime64
s = Series(self.ts.index)
rs = s.tolist()
- self.assertEqual(self.ts.index[0], rs[0])
+ assert self.ts.index[0] == rs[0]
def test_tolist_np_int(self):
# GH10904
diff --git a/pandas/tests/series/test_missing.py b/pandas/tests/series/test_missing.py
index 251954b5da05e..9937f6a34172e 100644
--- a/pandas/tests/series/test_missing.py
+++ b/pandas/tests/series/test_missing.py
@@ -190,7 +190,7 @@ def test_datetime64_tz_fillna(self):
idx = pd.DatetimeIndex(['2011-01-01 10:00', pd.NaT,
'2011-01-03 10:00', pd.NaT], tz=tz)
s = pd.Series(idx)
- self.assertEqual(s.dtype, 'datetime64[ns, {0}]'.format(tz))
+ assert s.dtype == 'datetime64[ns, {0}]'.format(tz)
tm.assert_series_equal(pd.isnull(s), null_loc)
result = s.fillna(pd.Timestamp('2011-01-02 10:00'))
@@ -485,19 +485,19 @@ def test_timedelta64_nan(self):
td1 = td.copy()
td1[0] = np.nan
assert isnull(td1[0])
- self.assertEqual(td1[0].value, iNaT)
+ assert td1[0].value == iNaT
td1[0] = td[0]
assert not isnull(td1[0])
td1[1] = iNaT
assert isnull(td1[1])
- self.assertEqual(td1[1].value, iNaT)
+ assert td1[1].value == iNaT
td1[1] = td[1]
assert not isnull(td1[1])
td1[2] = NaT
assert isnull(td1[2])
- self.assertEqual(td1[2].value, iNaT)
+ assert td1[2].value == iNaT
td1[2] = td[2]
assert not isnull(td1[2])
@@ -505,7 +505,7 @@ def test_timedelta64_nan(self):
# this doesn't work, not sure numpy even supports it
# result = td[(td>np.timedelta64(timedelta(days=3))) &
# td<np.timedelta64(timedelta(days=7)))] = np.nan
- # self.assertEqual(isnull(result).sum(), 7)
+ # assert isnull(result).sum() == 7
# NumPy limitation =(
@@ -517,9 +517,9 @@ def test_timedelta64_nan(self):
def test_dropna_empty(self):
s = Series([])
- self.assertEqual(len(s.dropna()), 0)
+ assert len(s.dropna()) == 0
s.dropna(inplace=True)
- self.assertEqual(len(s), 0)
+ assert len(s) == 0
# invalid axis
pytest.raises(ValueError, s.dropna, axis=1)
@@ -538,12 +538,12 @@ def test_datetime64_tz_dropna(self):
'2011-01-03 10:00', pd.NaT],
tz='Asia/Tokyo')
s = pd.Series(idx)
- self.assertEqual(s.dtype, 'datetime64[ns, Asia/Tokyo]')
+ assert s.dtype == 'datetime64[ns, Asia/Tokyo]'
result = s.dropna()
expected = Series([Timestamp('2011-01-01 10:00', tz='Asia/Tokyo'),
Timestamp('2011-01-03 10:00', tz='Asia/Tokyo')],
index=[0, 2])
- self.assertEqual(result.dtype, 'datetime64[ns, Asia/Tokyo]')
+ assert result.dtype == 'datetime64[ns, Asia/Tokyo]'
tm.assert_series_equal(result, expected)
def test_dropna_no_nan(self):
@@ -572,7 +572,7 @@ def test_valid(self):
ts[::2] = np.NaN
result = ts.valid()
- self.assertEqual(len(result), ts.count())
+ assert len(result) == ts.count()
tm.assert_series_equal(result, ts[1::2])
tm.assert_series_equal(result, ts[pd.notnull(ts)])
@@ -612,11 +612,11 @@ def test_pad_require_monotonicity(self):
def test_dropna_preserve_name(self):
self.ts[:5] = np.nan
result = self.ts.dropna()
- self.assertEqual(result.name, self.ts.name)
+ assert result.name == self.ts.name
name = self.ts.name
ts = self.ts.copy()
ts.dropna(inplace=True)
- self.assertEqual(ts.name, name)
+ assert ts.name == name
def test_fill_value_when_combine_const(self):
# GH12723
diff --git a/pandas/tests/series/test_operators.py b/pandas/tests/series/test_operators.py
index f48a3474494a4..7c7b98961d960 100644
--- a/pandas/tests/series/test_operators.py
+++ b/pandas/tests/series/test_operators.py
@@ -232,7 +232,7 @@ def check(series, other):
# check the values, name, and index separately
assert_almost_equal(np.asarray(result), expected)
- self.assertEqual(result.name, series.name)
+ assert result.name == series.name
assert_index_equal(result.index, series.index)
check(self.ts, self.ts * 2)
@@ -262,25 +262,25 @@ def test_operators_timedelta64(self):
xp = Series(1e9 * 3600 * 24,
rs.index).astype('int64').astype('timedelta64[ns]')
assert_series_equal(rs, xp)
- self.assertEqual(rs.dtype, 'timedelta64[ns]')
+ assert rs.dtype == 'timedelta64[ns]'
df = DataFrame(dict(A=v1))
td = Series([timedelta(days=i) for i in range(3)])
- self.assertEqual(td.dtype, 'timedelta64[ns]')
+ assert td.dtype == 'timedelta64[ns]'
# series on the rhs
result = df['A'] - df['A'].shift()
- self.assertEqual(result.dtype, 'timedelta64[ns]')
+ assert result.dtype == 'timedelta64[ns]'
result = df['A'] + td
- self.assertEqual(result.dtype, 'M8[ns]')
+ assert result.dtype == 'M8[ns]'
# scalar Timestamp on rhs
maxa = df['A'].max()
assert isinstance(maxa, Timestamp)
resultb = df['A'] - df['A'].max()
- self.assertEqual(resultb.dtype, 'timedelta64[ns]')
+ assert resultb.dtype == 'timedelta64[ns]'
# timestamp on lhs
result = resultb + df['A']
@@ -294,11 +294,11 @@ def test_operators_timedelta64(self):
expected = Series(
[timedelta(days=4017 + i) for i in range(3)], name='A')
assert_series_equal(result, expected)
- self.assertEqual(result.dtype, 'm8[ns]')
+ assert result.dtype == 'm8[ns]'
d = datetime(2001, 1, 1, 3, 4)
resulta = df['A'] - d
- self.assertEqual(resulta.dtype, 'm8[ns]')
+ assert resulta.dtype == 'm8[ns]'
# roundtrip
resultb = resulta + d
@@ -309,19 +309,19 @@ def test_operators_timedelta64(self):
resulta = df['A'] + td
resultb = resulta - td
assert_series_equal(resultb, df['A'])
- self.assertEqual(resultb.dtype, 'M8[ns]')
+ assert resultb.dtype == 'M8[ns]'
# roundtrip
td = timedelta(minutes=5, seconds=3)
resulta = df['A'] + td
resultb = resulta - td
assert_series_equal(df['A'], resultb)
- self.assertEqual(resultb.dtype, 'M8[ns]')
+ assert resultb.dtype == 'M8[ns]'
# inplace
value = rs[2] + np.timedelta64(timedelta(minutes=5, seconds=1))
rs[2] += np.timedelta64(timedelta(minutes=5, seconds=1))
- self.assertEqual(rs[2], value)
+ assert rs[2] == value
def test_operator_series_comparison_zerorank(self):
# GH 13006
@@ -440,7 +440,7 @@ def test_timedelta64_operations_with_timedeltas(self):
result = td1 - td2
expected = Series([timedelta(seconds=0)] * 3) - Series([timedelta(
seconds=1)] * 3)
- self.assertEqual(result.dtype, 'm8[ns]')
+ assert result.dtype == 'm8[ns]'
assert_series_equal(result, expected)
result2 = td2 - td1
@@ -458,7 +458,7 @@ def test_timedelta64_operations_with_timedeltas(self):
result = td1 - td2
expected = Series([timedelta(seconds=0)] * 3) - Series([timedelta(
seconds=1)] * 3)
- self.assertEqual(result.dtype, 'm8[ns]')
+ assert result.dtype == 'm8[ns]'
assert_series_equal(result, expected)
result2 = td2 - td1
@@ -1469,7 +1469,7 @@ def test_operators_corner(self):
assert np.isnan(result).all()
result = empty + Series([], index=Index([]))
- self.assertEqual(len(result), 0)
+ assert len(result) == 0
# TODO: this returned NotImplemented earlier, what to do?
# deltas = Series([timedelta(1)] * 5, index=np.arange(5))
diff --git a/pandas/tests/series/test_period.py b/pandas/tests/series/test_period.py
index 72a85086d4e24..5ea27d605c28a 100644
--- a/pandas/tests/series/test_period.py
+++ b/pandas/tests/series/test_period.py
@@ -17,21 +17,21 @@ def setUp(self):
def test_auto_conversion(self):
series = Series(list(period_range('2000-01-01', periods=10, freq='D')))
- self.assertEqual(series.dtype, 'object')
+ assert series.dtype == 'object'
series = pd.Series([pd.Period('2011-01-01', freq='D'),
pd.Period('2011-02-01', freq='D')])
- self.assertEqual(series.dtype, 'object')
+ assert series.dtype == 'object'
def test_getitem(self):
- self.assertEqual(self.series[1], pd.Period('2000-01-02', freq='D'))
+ assert self.series[1] == pd.Period('2000-01-02', freq='D')
result = self.series[[2, 4]]
exp = pd.Series([pd.Period('2000-01-03', freq='D'),
pd.Period('2000-01-05', freq='D')],
index=[2, 4])
tm.assert_series_equal(result, exp)
- self.assertEqual(result.dtype, 'object')
+ assert result.dtype == 'object'
def test_isnull(self):
# GH 13737
@@ -49,12 +49,12 @@ def test_fillna(self):
exp = Series([pd.Period('2011-01', freq='M'),
pd.Period('2012-01', freq='M')])
tm.assert_series_equal(res, exp)
- self.assertEqual(res.dtype, 'object')
+ assert res.dtype == 'object'
res = s.fillna('XXX')
exp = Series([pd.Period('2011-01', freq='M'), 'XXX'])
tm.assert_series_equal(res, exp)
- self.assertEqual(res.dtype, 'object')
+ assert res.dtype == 'object'
def test_dropna(self):
# GH 13737
diff --git a/pandas/tests/series/test_quantile.py b/pandas/tests/series/test_quantile.py
index 9fb87a914a0ac..6d2cdd046ea7f 100644
--- a/pandas/tests/series/test_quantile.py
+++ b/pandas/tests/series/test_quantile.py
@@ -18,24 +18,24 @@ class TestSeriesQuantile(TestData, tm.TestCase):
def test_quantile(self):
q = self.ts.quantile(0.1)
- self.assertEqual(q, np.percentile(self.ts.valid(), 10))
+ assert q == np.percentile(self.ts.valid(), 10)
q = self.ts.quantile(0.9)
- self.assertEqual(q, np.percentile(self.ts.valid(), 90))
+ assert q == np.percentile(self.ts.valid(), 90)
# object dtype
q = Series(self.ts, dtype=object).quantile(0.9)
- self.assertEqual(q, np.percentile(self.ts.valid(), 90))
+ assert q == np.percentile(self.ts.valid(), 90)
# datetime64[ns] dtype
dts = self.ts.index.to_series()
q = dts.quantile(.2)
- self.assertEqual(q, Timestamp('2000-01-10 19:12:00'))
+ assert q == Timestamp('2000-01-10 19:12:00')
# timedelta64[ns] dtype
tds = dts.diff()
q = tds.quantile(.25)
- self.assertEqual(q, pd.to_timedelta('24:00:00'))
+ assert q == pd.to_timedelta('24:00:00')
# GH7661
result = Series([np.timedelta64('NaT')]).sum()
@@ -71,16 +71,16 @@ def test_quantile_multi(self):
@pytest.mark.skipif(_np_version_under1p9,
reason="Numpy version is under 1.9")
def test_quantile_interpolation(self):
- # GH #10174
+ # see gh-10174
# interpolation = linear (default case)
q = self.ts.quantile(0.1, interpolation='linear')
- self.assertEqual(q, np.percentile(self.ts.valid(), 10))
+ assert q == np.percentile(self.ts.valid(), 10)
q1 = self.ts.quantile(0.1)
- self.assertEqual(q1, np.percentile(self.ts.valid(), 10))
+ assert q1 == np.percentile(self.ts.valid(), 10)
# test with and without interpolation keyword
- self.assertEqual(q, q1)
+ assert q == q1
@pytest.mark.skipif(_np_version_under1p9,
reason="Numpy version is under 1.9")
@@ -89,11 +89,11 @@ def test_quantile_interpolation_dtype(self):
# interpolation = linear (default case)
q = pd.Series([1, 3, 4]).quantile(0.5, interpolation='lower')
- self.assertEqual(q, np.percentile(np.array([1, 3, 4]), 50))
+ assert q == np.percentile(np.array([1, 3, 4]), 50)
assert is_integer(q)
q = pd.Series([1, 3, 4]).quantile(0.5, interpolation='higher')
- self.assertEqual(q, np.percentile(np.array([1, 3, 4]), 50))
+ assert q == np.percentile(np.array([1, 3, 4]), 50)
assert is_integer(q)
@pytest.mark.skipif(not _np_version_under1p9,
@@ -103,19 +103,18 @@ def test_quantile_interpolation_np_lt_1p9(self):
# interpolation = linear (default case)
q = self.ts.quantile(0.1, interpolation='linear')
- self.assertEqual(q, np.percentile(self.ts.valid(), 10))
+ assert q == np.percentile(self.ts.valid(), 10)
q1 = self.ts.quantile(0.1)
- self.assertEqual(q1, np.percentile(self.ts.valid(), 10))
+ assert q1 == np.percentile(self.ts.valid(), 10)
# interpolation other than linear
- expErrMsg = "Interpolation methods other than "
- with tm.assert_raises_regex(ValueError, expErrMsg):
+ msg = "Interpolation methods other than "
+ with tm.assert_raises_regex(ValueError, msg):
self.ts.quantile(0.9, interpolation='nearest')
# object dtype
- with tm.assert_raises_regex(ValueError, expErrMsg):
- q = Series(self.ts, dtype=object).quantile(0.7,
- interpolation='higher')
+ with tm.assert_raises_regex(ValueError, msg):
+ Series(self.ts, dtype=object).quantile(0.7, interpolation='higher')
def test_quantile_nan(self):
@@ -123,7 +122,7 @@ def test_quantile_nan(self):
s = pd.Series([1, 2, 3, 4, np.nan])
result = s.quantile(0.5)
expected = 2.5
- self.assertEqual(result, expected)
+ assert result == expected
# all nan/empty
cases = [Series([]), Series([np.nan, np.nan])]
@@ -159,7 +158,7 @@ def test_quantile_box(self):
for case in cases:
s = pd.Series(case, name='XXX')
res = s.quantile(0.5)
- self.assertEqual(res, case[1])
+ assert res == case[1]
res = s.quantile([0.5])
exp = pd.Series([case[1]], index=[0.5], name='XXX')
diff --git a/pandas/tests/series/test_repr.py b/pandas/tests/series/test_repr.py
index 2decffce0f2fe..8c1d74c5c2c23 100644
--- a/pandas/tests/series/test_repr.py
+++ b/pandas/tests/series/test_repr.py
@@ -34,7 +34,7 @@ def test_multilevel_name_print(self):
"qux one 7", " two 8",
" three 9", "Name: sth, dtype: int64"]
expected = "\n".join(expected)
- self.assertEqual(repr(s), expected)
+ assert repr(s) == expected
def test_name_printing(self):
# Test small Series.
@@ -109,10 +109,10 @@ def test_repr(self):
# with empty series (#4651)
s = Series([], dtype=np.int64, name='foo')
- self.assertEqual(repr(s), 'Series([], Name: foo, dtype: int64)')
+ assert repr(s) == 'Series([], Name: foo, dtype: int64)'
s = Series([], dtype=np.int64, name=None)
- self.assertEqual(repr(s), 'Series([], dtype: int64)')
+ assert repr(s) == 'Series([], dtype: int64)'
def test_tidy_repr(self):
a = Series([u("\u05d0")] * 1000)
diff --git a/pandas/tests/series/test_subclass.py b/pandas/tests/series/test_subclass.py
index 677bf2ee3e557..fe8a5e7658d9c 100644
--- a/pandas/tests/series/test_subclass.py
+++ b/pandas/tests/series/test_subclass.py
@@ -40,29 +40,29 @@ def test_subclass_sparse_slice(self):
s = tm.SubclassedSparseSeries([1, 2, 3, 4, 5])
exp = tm.SubclassedSparseSeries([2, 3, 4], index=[1, 2, 3])
tm.assert_sp_series_equal(s.loc[1:3], exp)
- self.assertEqual(s.loc[1:3].dtype, np.int64)
+ assert s.loc[1:3].dtype == np.int64
exp = tm.SubclassedSparseSeries([2, 3], index=[1, 2])
tm.assert_sp_series_equal(s.iloc[1:3], exp)
- self.assertEqual(s.iloc[1:3].dtype, np.int64)
+ assert s.iloc[1:3].dtype == np.int64
exp = tm.SubclassedSparseSeries([2, 3], index=[1, 2])
tm.assert_sp_series_equal(s[1:3], exp)
- self.assertEqual(s[1:3].dtype, np.int64)
+ assert s[1:3].dtype == np.int64
# float64
s = tm.SubclassedSparseSeries([1., 2., 3., 4., 5.])
exp = tm.SubclassedSparseSeries([2., 3., 4.], index=[1, 2, 3])
tm.assert_sp_series_equal(s.loc[1:3], exp)
- self.assertEqual(s.loc[1:3].dtype, np.float64)
+ assert s.loc[1:3].dtype == np.float64
exp = tm.SubclassedSparseSeries([2., 3.], index=[1, 2])
tm.assert_sp_series_equal(s.iloc[1:3], exp)
- self.assertEqual(s.iloc[1:3].dtype, np.float64)
+ assert s.iloc[1:3].dtype == np.float64
exp = tm.SubclassedSparseSeries([2., 3.], index=[1, 2])
tm.assert_sp_series_equal(s[1:3], exp)
- self.assertEqual(s[1:3].dtype, np.float64)
+ assert s[1:3].dtype == np.float64
def test_subclass_sparse_addition(self):
s1 = tm.SubclassedSparseSeries([1, 3, 5])
diff --git a/pandas/tests/series/test_timeseries.py b/pandas/tests/series/test_timeseries.py
index 1c94bc3db9990..78e5d87636532 100644
--- a/pandas/tests/series/test_timeseries.py
+++ b/pandas/tests/series/test_timeseries.py
@@ -131,25 +131,25 @@ def test_shift_dst(self):
res = s.shift(0)
tm.assert_series_equal(res, s)
- self.assertEqual(res.dtype, 'datetime64[ns, US/Eastern]')
+ assert res.dtype == 'datetime64[ns, US/Eastern]'
res = s.shift(1)
exp_vals = [NaT] + dates.asobject.values.tolist()[:9]
exp = Series(exp_vals)
tm.assert_series_equal(res, exp)
- self.assertEqual(res.dtype, 'datetime64[ns, US/Eastern]')
+ assert res.dtype == 'datetime64[ns, US/Eastern]'
res = s.shift(-2)
exp_vals = dates.asobject.values.tolist()[2:] + [NaT, NaT]
exp = Series(exp_vals)
tm.assert_series_equal(res, exp)
- self.assertEqual(res.dtype, 'datetime64[ns, US/Eastern]')
+ assert res.dtype == 'datetime64[ns, US/Eastern]'
for ex in [10, -10, 20, -20]:
res = s.shift(ex)
exp = Series([NaT] * 10, dtype='datetime64[ns, US/Eastern]')
tm.assert_series_equal(res, exp)
- self.assertEqual(res.dtype, 'datetime64[ns, US/Eastern]')
+ assert res.dtype == 'datetime64[ns, US/Eastern]'
def test_tshift(self):
# PeriodIndex
@@ -280,7 +280,7 @@ def test_diff(self):
s = Series([a, b])
rs = s.diff()
- self.assertEqual(rs[1], 1)
+ assert rs[1] == 1
# neg n
rs = self.ts.diff(-1)
@@ -346,7 +346,7 @@ def test_autocorr(self):
assert np.isnan(corr1)
assert np.isnan(corr2)
else:
- self.assertEqual(corr1, corr2)
+ assert corr1 == corr2
# Choose a random lag between 1 and length of Series - 2
# and compare the result with the Series corr() function
@@ -359,18 +359,18 @@ def test_autocorr(self):
assert np.isnan(corr1)
assert np.isnan(corr2)
else:
- self.assertEqual(corr1, corr2)
+ assert corr1 == corr2
def test_first_last_valid(self):
ts = self.ts.copy()
ts[:5] = np.NaN
index = ts.first_valid_index()
- self.assertEqual(index, ts.index[5])
+ assert index == ts.index[5]
ts[-5:] = np.NaN
index = ts.last_valid_index()
- self.assertEqual(index, ts.index[-6])
+ assert index == ts.index[-6]
ts[:] = np.nan
assert ts.last_valid_index() is None
@@ -498,7 +498,7 @@ def test_series_repr_nat(self):
'2 1970-01-01 00:00:00.000002\n'
'3 NaT\n'
'dtype: datetime64[ns]')
- self.assertEqual(result, expected)
+ assert result == expected
def test_asfreq_keep_index_name(self):
# GH #9854
@@ -506,8 +506,8 @@ def test_asfreq_keep_index_name(self):
index = pd.date_range('20130101', periods=20, name=index_name)
df = pd.DataFrame([x for x in range(20)], columns=['foo'], index=index)
- self.assertEqual(index_name, df.index.name)
- self.assertEqual(index_name, df.asfreq('10D').index.name)
+ assert index_name == df.index.name
+ assert index_name == df.asfreq('10D').index.name
def test_promote_datetime_date(self):
rng = date_range('1/1/2000', periods=20)
@@ -555,11 +555,11 @@ def test_asfreq_normalize(self):
def test_first_subset(self):
ts = _simple_ts('1/1/2000', '1/1/2010', freq='12h')
result = ts.first('10d')
- self.assertEqual(len(result), 20)
+ assert len(result) == 20
ts = _simple_ts('1/1/2000', '1/1/2010')
result = ts.first('10d')
- self.assertEqual(len(result), 10)
+ assert len(result) == 10
result = ts.first('3M')
expected = ts[:'3/31/2000']
@@ -575,11 +575,11 @@ def test_first_subset(self):
def test_last_subset(self):
ts = _simple_ts('1/1/2000', '1/1/2010', freq='12h')
result = ts.last('10d')
- self.assertEqual(len(result), 20)
+ assert len(result) == 20
ts = _simple_ts('1/1/2000', '1/1/2010')
result = ts.last('10d')
- self.assertEqual(len(result), 10)
+ assert len(result) == 10
result = ts.last('21D')
expected = ts['12/12/2009':]
@@ -638,7 +638,7 @@ def test_at_time(self):
rng = date_range('1/1/2012', freq='23Min', periods=384)
ts = Series(np.random.randn(len(rng)), rng)
rs = ts.at_time('16:00')
- self.assertEqual(len(rs), 0)
+ assert len(rs) == 0
def test_between(self):
series = Series(date_range('1/1/2000', periods=10))
@@ -663,7 +663,7 @@ def test_between_time(self):
if not inc_end:
exp_len -= 4
- self.assertEqual(len(filtered), exp_len)
+ assert len(filtered) == exp_len
for rs in filtered.index:
t = rs.time()
if inc_start:
@@ -695,7 +695,7 @@ def test_between_time(self):
if not inc_end:
exp_len -= 4
- self.assertEqual(len(filtered), exp_len)
+ assert len(filtered) == exp_len
for rs in filtered.index:
t = rs.time()
if inc_start:
@@ -736,9 +736,7 @@ def test_between_time_formats(self):
expected_length = 28
for time_string in strings:
- self.assertEqual(len(ts.between_time(*time_string)),
- expected_length,
- "%s - %s" % time_string)
+ assert len(ts.between_time(*time_string)) == expected_length
def test_to_period(self):
from pandas.core.indexes.period import period_range
@@ -817,14 +815,14 @@ def test_asfreq_resample_set_correct_freq(self):
df = df.set_index(pd.to_datetime(df.date))
# testing the settings before calling .asfreq() and .resample()
- self.assertEqual(df.index.freq, None)
- self.assertEqual(df.index.inferred_freq, 'D')
+ assert df.index.freq is None
+ assert df.index.inferred_freq == 'D'
# does .asfreq() set .freq correctly?
- self.assertEqual(df.asfreq('D').index.freq, 'D')
+ assert df.asfreq('D').index.freq == 'D'
# does .resample() set .freq correctly?
- self.assertEqual(df.resample('D').asfreq().index.freq, 'D')
+ assert df.resample('D').asfreq().index.freq == 'D'
def test_pickle(self):
@@ -849,35 +847,35 @@ def test_setops_preserve_freq(self):
rng = date_range('1/1/2000', '1/1/2002', name='idx', tz=tz)
result = rng[:50].union(rng[50:100])
- self.assertEqual(result.name, rng.name)
- self.assertEqual(result.freq, rng.freq)
- self.assertEqual(result.tz, rng.tz)
+ assert result.name == rng.name
+ assert result.freq == rng.freq
+ assert result.tz == rng.tz
result = rng[:50].union(rng[30:100])
- self.assertEqual(result.name, rng.name)
- self.assertEqual(result.freq, rng.freq)
- self.assertEqual(result.tz, rng.tz)
+ assert result.name == rng.name
+ assert result.freq == rng.freq
+ assert result.tz == rng.tz
result = rng[:50].union(rng[60:100])
- self.assertEqual(result.name, rng.name)
+ assert result.name == rng.name
assert result.freq is None
- self.assertEqual(result.tz, rng.tz)
+ assert result.tz == rng.tz
result = rng[:50].intersection(rng[25:75])
- self.assertEqual(result.name, rng.name)
- self.assertEqual(result.freqstr, 'D')
- self.assertEqual(result.tz, rng.tz)
+ assert result.name == rng.name
+ assert result.freqstr == 'D'
+ assert result.tz == rng.tz
nofreq = DatetimeIndex(list(rng[25:75]), name='other')
result = rng[:50].union(nofreq)
assert result.name is None
- self.assertEqual(result.freq, rng.freq)
- self.assertEqual(result.tz, rng.tz)
+ assert result.freq == rng.freq
+ assert result.tz == rng.tz
result = rng[:50].intersection(nofreq)
assert result.name is None
- self.assertEqual(result.freq, rng.freq)
- self.assertEqual(result.tz, rng.tz)
+ assert result.freq == rng.freq
+ assert result.tz == rng.tz
def test_min_max(self):
rng = date_range('1/1/2000', '12/31/2000')
@@ -887,11 +885,11 @@ def test_min_max(self):
the_max = rng2.max()
assert isinstance(the_min, Timestamp)
assert isinstance(the_max, Timestamp)
- self.assertEqual(the_min, rng[0])
- self.assertEqual(the_max, rng[-1])
+ assert the_min == rng[0]
+ assert the_max == rng[-1]
- self.assertEqual(rng.min(), rng[0])
- self.assertEqual(rng.max(), rng[-1])
+ assert rng.min() == rng[0]
+ assert rng.max() == rng[-1]
def test_min_max_series(self):
rng = date_range('1/1/2000', periods=10, freq='4h')
@@ -901,12 +899,12 @@ def test_min_max_series(self):
result = df.TS.max()
exp = Timestamp(df.TS.iat[-1])
assert isinstance(result, Timestamp)
- self.assertEqual(result, exp)
+ assert result == exp
result = df.TS.min()
exp = Timestamp(df.TS.iat[0])
assert isinstance(result, Timestamp)
- self.assertEqual(result, exp)
+ assert result == exp
def test_from_M8_structured(self):
dates = [(datetime(2012, 9, 9, 0, 0), datetime(2012, 9, 8, 15, 10))]
@@ -914,15 +912,15 @@ def test_from_M8_structured(self):
dtype=[('Date', 'M8[us]'), ('Forecasting', 'M8[us]')])
df = DataFrame(arr)
- self.assertEqual(df['Date'][0], dates[0][0])
- self.assertEqual(df['Forecasting'][0], dates[0][1])
+ assert df['Date'][0] == dates[0][0]
+ assert df['Forecasting'][0] == dates[0][1]
s = Series(arr['Date'])
assert s[0], Timestamp
- self.assertEqual(s[0], dates[0][0])
+ assert s[0] == dates[0][0]
s = Series.from_array(arr['Date'], Index([0]))
- self.assertEqual(s[0], dates[0][0])
+ assert s[0] == dates[0][0]
def test_get_level_values_box(self):
from pandas import MultiIndex
diff --git a/pandas/tests/sparse/test_arithmetics.py b/pandas/tests/sparse/test_arithmetics.py
index ae2e152917bbd..468d856ca68ce 100644
--- a/pandas/tests/sparse/test_arithmetics.py
+++ b/pandas/tests/sparse/test_arithmetics.py
@@ -68,7 +68,7 @@ def _check_numeric_ops(self, a, b, a_dense, b_dense):
def _check_bool_result(self, res):
assert isinstance(res, self._klass)
- self.assertEqual(res.dtype, np.bool)
+ assert res.dtype == np.bool
assert isinstance(res.fill_value, bool)
def _check_comparison_ops(self, a, b, a_dense, b_dense):
@@ -274,30 +274,30 @@ def test_int_array(self):
for kind in ['integer', 'block']:
a = self._klass(values, dtype=dtype, kind=kind)
- self.assertEqual(a.dtype, dtype)
+ assert a.dtype == dtype
b = self._klass(rvalues, dtype=dtype, kind=kind)
- self.assertEqual(b.dtype, dtype)
+ assert b.dtype == dtype
self._check_numeric_ops(a, b, values, rvalues)
self._check_numeric_ops(a, b * 0, values, rvalues * 0)
a = self._klass(values, fill_value=0, dtype=dtype, kind=kind)
- self.assertEqual(a.dtype, dtype)
+ assert a.dtype == dtype
b = self._klass(rvalues, dtype=dtype, kind=kind)
- self.assertEqual(b.dtype, dtype)
+ assert b.dtype == dtype
self._check_numeric_ops(a, b, values, rvalues)
a = self._klass(values, fill_value=0, dtype=dtype, kind=kind)
- self.assertEqual(a.dtype, dtype)
+ assert a.dtype == dtype
b = self._klass(rvalues, fill_value=0, dtype=dtype, kind=kind)
- self.assertEqual(b.dtype, dtype)
+ assert b.dtype == dtype
self._check_numeric_ops(a, b, values, rvalues)
a = self._klass(values, fill_value=1, dtype=dtype, kind=kind)
- self.assertEqual(a.dtype, dtype)
+ assert a.dtype == dtype
b = self._klass(rvalues, fill_value=2, dtype=dtype, kind=kind)
- self.assertEqual(b.dtype, dtype)
+ assert b.dtype == dtype
self._check_numeric_ops(a, b, values, rvalues)
def test_int_array_comparison(self):
@@ -364,24 +364,24 @@ def test_mixed_array_float_int(self):
for kind in ['integer', 'block']:
a = self._klass(values, kind=kind)
b = self._klass(rvalues, kind=kind)
- self.assertEqual(b.dtype, rdtype)
+ assert b.dtype == rdtype
self._check_numeric_ops(a, b, values, rvalues)
self._check_numeric_ops(a, b * 0, values, rvalues * 0)
a = self._klass(values, kind=kind, fill_value=0)
b = self._klass(rvalues, kind=kind)
- self.assertEqual(b.dtype, rdtype)
+ assert b.dtype == rdtype
self._check_numeric_ops(a, b, values, rvalues)
a = self._klass(values, kind=kind, fill_value=0)
b = self._klass(rvalues, kind=kind, fill_value=0)
- self.assertEqual(b.dtype, rdtype)
+ assert b.dtype == rdtype
self._check_numeric_ops(a, b, values, rvalues)
a = self._klass(values, kind=kind, fill_value=1)
b = self._klass(rvalues, kind=kind, fill_value=2)
- self.assertEqual(b.dtype, rdtype)
+ assert b.dtype == rdtype
self._check_numeric_ops(a, b, values, rvalues)
def test_mixed_array_comparison(self):
@@ -394,24 +394,24 @@ def test_mixed_array_comparison(self):
for kind in ['integer', 'block']:
a = self._klass(values, kind=kind)
b = self._klass(rvalues, kind=kind)
- self.assertEqual(b.dtype, rdtype)
+ assert b.dtype == rdtype
self._check_comparison_ops(a, b, values, rvalues)
self._check_comparison_ops(a, b * 0, values, rvalues * 0)
a = self._klass(values, kind=kind, fill_value=0)
b = self._klass(rvalues, kind=kind)
- self.assertEqual(b.dtype, rdtype)
+ assert b.dtype == rdtype
self._check_comparison_ops(a, b, values, rvalues)
a = self._klass(values, kind=kind, fill_value=0)
b = self._klass(rvalues, kind=kind, fill_value=0)
- self.assertEqual(b.dtype, rdtype)
+ assert b.dtype == rdtype
self._check_comparison_ops(a, b, values, rvalues)
a = self._klass(values, kind=kind, fill_value=1)
b = self._klass(rvalues, kind=kind, fill_value=2)
- self.assertEqual(b.dtype, rdtype)
+ assert b.dtype == rdtype
self._check_comparison_ops(a, b, values, rvalues)
diff --git a/pandas/tests/sparse/test_array.py b/pandas/tests/sparse/test_array.py
index b8dff5606f979..9a2c958a252af 100644
--- a/pandas/tests/sparse/test_array.py
+++ b/pandas/tests/sparse/test_array.py
@@ -24,48 +24,48 @@ def setUp(self):
def test_constructor_dtype(self):
arr = SparseArray([np.nan, 1, 2, np.nan])
- self.assertEqual(arr.dtype, np.float64)
+ assert arr.dtype == np.float64
assert np.isnan(arr.fill_value)
arr = SparseArray([np.nan, 1, 2, np.nan], fill_value=0)
- self.assertEqual(arr.dtype, np.float64)
- self.assertEqual(arr.fill_value, 0)
+ assert arr.dtype == np.float64
+ assert arr.fill_value == 0
arr = SparseArray([0, 1, 2, 4], dtype=np.float64)
- self.assertEqual(arr.dtype, np.float64)
+ assert arr.dtype == np.float64
assert np.isnan(arr.fill_value)
arr = SparseArray([0, 1, 2, 4], dtype=np.int64)
- self.assertEqual(arr.dtype, np.int64)
- self.assertEqual(arr.fill_value, 0)
+ assert arr.dtype == np.int64
+ assert arr.fill_value == 0
arr = SparseArray([0, 1, 2, 4], fill_value=0, dtype=np.int64)
- self.assertEqual(arr.dtype, np.int64)
- self.assertEqual(arr.fill_value, 0)
+ assert arr.dtype == np.int64
+ assert arr.fill_value == 0
arr = SparseArray([0, 1, 2, 4], dtype=None)
- self.assertEqual(arr.dtype, np.int64)
- self.assertEqual(arr.fill_value, 0)
+ assert arr.dtype == np.int64
+ assert arr.fill_value == 0
arr = SparseArray([0, 1, 2, 4], fill_value=0, dtype=None)
- self.assertEqual(arr.dtype, np.int64)
- self.assertEqual(arr.fill_value, 0)
+ assert arr.dtype == np.int64
+ assert arr.fill_value == 0
def test_constructor_object_dtype(self):
# GH 11856
arr = SparseArray(['A', 'A', np.nan, 'B'], dtype=np.object)
- self.assertEqual(arr.dtype, np.object)
+ assert arr.dtype == np.object
assert np.isnan(arr.fill_value)
arr = SparseArray(['A', 'A', np.nan, 'B'], dtype=np.object,
fill_value='A')
- self.assertEqual(arr.dtype, np.object)
- self.assertEqual(arr.fill_value, 'A')
+ assert arr.dtype == np.object
+ assert arr.fill_value == 'A'
def test_constructor_spindex_dtype(self):
arr = SparseArray(data=[1, 2], sparse_index=IntIndex(4, [1, 2]))
tm.assert_sp_array_equal(arr, SparseArray([np.nan, 1, 2, np.nan]))
- self.assertEqual(arr.dtype, np.float64)
+ assert arr.dtype == np.float64
assert np.isnan(arr.fill_value)
arr = SparseArray(data=[1, 2, 3],
@@ -73,37 +73,37 @@ def test_constructor_spindex_dtype(self):
dtype=np.int64, fill_value=0)
exp = SparseArray([0, 1, 2, 3], dtype=np.int64, fill_value=0)
tm.assert_sp_array_equal(arr, exp)
- self.assertEqual(arr.dtype, np.int64)
- self.assertEqual(arr.fill_value, 0)
+ assert arr.dtype == np.int64
+ assert arr.fill_value == 0
arr = SparseArray(data=[1, 2], sparse_index=IntIndex(4, [1, 2]),
fill_value=0, dtype=np.int64)
exp = SparseArray([0, 1, 2, 0], fill_value=0, dtype=np.int64)
tm.assert_sp_array_equal(arr, exp)
- self.assertEqual(arr.dtype, np.int64)
- self.assertEqual(arr.fill_value, 0)
+ assert arr.dtype == np.int64
+ assert arr.fill_value == 0
arr = SparseArray(data=[1, 2, 3],
sparse_index=IntIndex(4, [1, 2, 3]),
dtype=None, fill_value=0)
exp = SparseArray([0, 1, 2, 3], dtype=None)
tm.assert_sp_array_equal(arr, exp)
- self.assertEqual(arr.dtype, np.int64)
- self.assertEqual(arr.fill_value, 0)
+ assert arr.dtype == np.int64
+ assert arr.fill_value == 0
# scalar input
arr = SparseArray(data=1, sparse_index=IntIndex(1, [0]), dtype=None)
exp = SparseArray([1], dtype=None)
tm.assert_sp_array_equal(arr, exp)
- self.assertEqual(arr.dtype, np.int64)
- self.assertEqual(arr.fill_value, 0)
+ assert arr.dtype == np.int64
+ assert arr.fill_value == 0
arr = SparseArray(data=[1, 2], sparse_index=IntIndex(4, [1, 2]),
fill_value=0, dtype=None)
exp = SparseArray([0, 1, 2, 0], fill_value=0, dtype=None)
tm.assert_sp_array_equal(arr, exp)
- self.assertEqual(arr.dtype, np.int64)
- self.assertEqual(arr.fill_value, 0)
+ assert arr.dtype == np.int64
+ assert arr.fill_value == 0
def test_sparseseries_roundtrip(self):
# GH 13999
@@ -134,17 +134,17 @@ def test_sparseseries_roundtrip(self):
def test_get_item(self):
assert np.isnan(self.arr[1])
- self.assertEqual(self.arr[2], 1)
- self.assertEqual(self.arr[7], 5)
+ assert self.arr[2] == 1
+ assert self.arr[7] == 5
- self.assertEqual(self.zarr[0], 0)
- self.assertEqual(self.zarr[2], 1)
- self.assertEqual(self.zarr[7], 5)
+ assert self.zarr[0] == 0
+ assert self.zarr[2] == 1
+ assert self.zarr[7] == 5
errmsg = re.compile("bounds")
tm.assert_raises_regex(IndexError, errmsg, lambda: self.arr[11])
tm.assert_raises_regex(IndexError, errmsg, lambda: self.arr[-11])
- self.assertEqual(self.arr[-1], self.arr[len(self.arr) - 1])
+ assert self.arr[-1] == self.arr[len(self.arr) - 1]
def test_take(self):
assert np.isnan(self.arr.take(0))
@@ -152,8 +152,8 @@ def test_take(self):
# np.take in < 1.8 doesn't support scalar indexing
if not _np_version_under1p8:
- self.assertEqual(self.arr.take(2), np.take(self.arr_data, 2))
- self.assertEqual(self.arr.take(6), np.take(self.arr_data, 6))
+ assert self.arr.take(2) == np.take(self.arr_data, 2)
+ assert self.arr.take(6) == np.take(self.arr_data, 6)
exp = SparseArray(np.take(self.arr_data, [2, 3]))
tm.assert_sp_array_equal(self.arr.take([2, 3]), exp)
@@ -293,7 +293,7 @@ def test_constructor_from_too_large_array(self):
def test_constructor_from_sparse(self):
res = SparseArray(self.zarr)
- self.assertEqual(res.fill_value, 0)
+ assert res.fill_value == 0
assert_almost_equal(res.sp_values, self.zarr.sp_values)
def test_constructor_copy(self):
@@ -310,27 +310,27 @@ def test_constructor_bool(self):
data = np.array([False, False, True, True, False, False])
arr = SparseArray(data, fill_value=False, dtype=bool)
- self.assertEqual(arr.dtype, bool)
+ assert arr.dtype == bool
tm.assert_numpy_array_equal(arr.sp_values, np.array([True, True]))
tm.assert_numpy_array_equal(arr.sp_values, np.asarray(arr))
tm.assert_numpy_array_equal(arr.sp_index.indices,
np.array([2, 3], np.int32))
for dense in [arr.to_dense(), arr.values]:
- self.assertEqual(dense.dtype, bool)
+ assert dense.dtype == bool
tm.assert_numpy_array_equal(dense, data)
def test_constructor_bool_fill_value(self):
arr = SparseArray([True, False, True], dtype=None)
- self.assertEqual(arr.dtype, np.bool)
+ assert arr.dtype == np.bool
assert not arr.fill_value
arr = SparseArray([True, False, True], dtype=np.bool)
- self.assertEqual(arr.dtype, np.bool)
+ assert arr.dtype == np.bool
assert not arr.fill_value
arr = SparseArray([True, False, True], dtype=np.bool, fill_value=True)
- self.assertEqual(arr.dtype, np.bool)
+ assert arr.dtype == np.bool
assert arr.fill_value
def test_constructor_float32(self):
@@ -338,7 +338,7 @@ def test_constructor_float32(self):
data = np.array([1., np.nan, 3], dtype=np.float32)
arr = SparseArray(data, dtype=np.float32)
- self.assertEqual(arr.dtype, np.float32)
+ assert arr.dtype == np.float32
tm.assert_numpy_array_equal(arr.sp_values,
np.array([1, 3], dtype=np.float32))
tm.assert_numpy_array_equal(arr.sp_values, np.asarray(arr))
@@ -346,7 +346,7 @@ def test_constructor_float32(self):
np.array([0, 2], dtype=np.int32))
for dense in [arr.to_dense(), arr.values]:
- self.assertEqual(dense.dtype, np.float32)
+ assert dense.dtype == np.float32
tm.assert_numpy_array_equal(dense, data)
def test_astype(self):
@@ -375,19 +375,19 @@ def test_astype_all(self):
np.int32, np.int16, np.int8]
for typ in types:
res = arr.astype(typ)
- self.assertEqual(res.dtype, typ)
- self.assertEqual(res.sp_values.dtype, typ)
+ assert res.dtype == typ
+ assert res.sp_values.dtype == typ
tm.assert_numpy_array_equal(res.values, vals.astype(typ))
def test_set_fill_value(self):
arr = SparseArray([1., np.nan, 2.], fill_value=np.nan)
arr.fill_value = 2
- self.assertEqual(arr.fill_value, 2)
+ assert arr.fill_value == 2
arr = SparseArray([1, 0, 2], fill_value=0, dtype=np.int64)
arr.fill_value = 2
- self.assertEqual(arr.fill_value, 2)
+ assert arr.fill_value == 2
# coerces to int
msg = "unable to set fill_value 3\\.1 to int64 dtype"
@@ -621,14 +621,14 @@ def test_fillna(self):
# int dtype shouldn't have missing. No changes.
s = SparseArray([0, 0, 0, 0])
- self.assertEqual(s.dtype, np.int64)
- self.assertEqual(s.fill_value, 0)
+ assert s.dtype == np.int64
+ assert s.fill_value == 0
res = s.fillna(-1)
tm.assert_sp_array_equal(res, s)
s = SparseArray([0, 0, 0, 0], fill_value=0)
- self.assertEqual(s.dtype, np.int64)
- self.assertEqual(s.fill_value, 0)
+ assert s.dtype == np.int64
+ assert s.fill_value == 0
res = s.fillna(-1)
exp = SparseArray([0, 0, 0, 0], fill_value=0)
tm.assert_sp_array_equal(res, exp)
@@ -636,7 +636,7 @@ def test_fillna(self):
# fill_value can be nan if there is no missing hole.
# only fill_value will be changed
s = SparseArray([0, 0, 0, 0], fill_value=np.nan)
- self.assertEqual(s.dtype, np.int64)
+ assert s.dtype == np.int64
assert np.isnan(s.fill_value)
res = s.fillna(-1)
exp = SparseArray([0, 0, 0, 0], fill_value=-1)
@@ -661,26 +661,26 @@ class TestSparseArrayAnalytics(tm.TestCase):
def test_sum(self):
data = np.arange(10).astype(float)
out = SparseArray(data).sum()
- self.assertEqual(out, 45.0)
+ assert out == 45.0
data[5] = np.nan
out = SparseArray(data, fill_value=2).sum()
- self.assertEqual(out, 40.0)
+ assert out == 40.0
out = SparseArray(data, fill_value=np.nan).sum()
- self.assertEqual(out, 40.0)
+ assert out == 40.0
def test_numpy_sum(self):
data = np.arange(10).astype(float)
out = np.sum(SparseArray(data))
- self.assertEqual(out, 45.0)
+ assert out == 45.0
data[5] = np.nan
out = np.sum(SparseArray(data, fill_value=2))
- self.assertEqual(out, 40.0)
+ assert out == 40.0
out = np.sum(SparseArray(data, fill_value=np.nan))
- self.assertEqual(out, 40.0)
+ assert out == 40.0
msg = "the 'dtype' parameter is not supported"
tm.assert_raises_regex(ValueError, msg, np.sum,
@@ -746,20 +746,20 @@ def test_numpy_cumsum(self):
def test_mean(self):
data = np.arange(10).astype(float)
out = SparseArray(data).mean()
- self.assertEqual(out, 4.5)
+ assert out == 4.5
data[5] = np.nan
out = SparseArray(data).mean()
- self.assertEqual(out, 40.0 / 9)
+ assert out == 40.0 / 9
def test_numpy_mean(self):
data = np.arange(10).astype(float)
out = np.mean(SparseArray(data))
- self.assertEqual(out, 4.5)
+ assert out == 4.5
data[5] = np.nan
out = np.mean(SparseArray(data))
- self.assertEqual(out, 40.0 / 9)
+ assert out == 40.0 / 9
msg = "the 'dtype' parameter is not supported"
tm.assert_raises_regex(ValueError, msg, np.mean,
diff --git a/pandas/tests/sparse/test_format.py b/pandas/tests/sparse/test_format.py
index eafb493319e40..74be14ff5cf15 100644
--- a/pandas/tests/sparse/test_format.py
+++ b/pandas/tests/sparse/test_format.py
@@ -27,7 +27,7 @@ def test_sparse_max_row(self):
"4 NaN\ndtype: float64\nBlockIndex\n"
"Block locations: array([0, 3]{0})\n"
"Block lengths: array([1, 1]{0})".format(dfm))
- self.assertEqual(result, exp)
+ assert result == exp
with option_context("display.max_rows", 3):
# GH 10560
@@ -36,7 +36,7 @@ def test_sparse_max_row(self):
"Length: 5, dtype: float64\nBlockIndex\n"
"Block locations: array([0, 3]{0})\n"
"Block lengths: array([1, 1]{0})".format(dfm))
- self.assertEqual(result, exp)
+ assert result == exp
def test_sparse_mi_max_row(self):
idx = pd.MultiIndex.from_tuples([('A', 0), ('A', 1), ('B', 0),
@@ -50,7 +50,7 @@ def test_sparse_mi_max_row(self):
"dtype: float64\nBlockIndex\n"
"Block locations: array([0, 3]{0})\n"
"Block lengths: array([1, 1]{0})".format(dfm))
- self.assertEqual(result, exp)
+ assert result == exp
with option_context("display.max_rows", 3,
"display.show_dimensions", False):
@@ -60,7 +60,7 @@ def test_sparse_mi_max_row(self):
"dtype: float64\nBlockIndex\n"
"Block locations: array([0, 3]{0})\n"
"Block lengths: array([1, 1]{0})".format(dfm))
- self.assertEqual(result, exp)
+ assert result == exp
def test_sparse_bool(self):
# GH 13110
@@ -73,7 +73,7 @@ def test_sparse_bool(self):
"dtype: bool\nBlockIndex\n"
"Block locations: array([0, 3]{0})\n"
"Block lengths: array([1, 1]{0})".format(dtype))
- self.assertEqual(result, exp)
+ assert result == exp
with option_context("display.max_rows", 3):
result = repr(s)
@@ -81,7 +81,7 @@ def test_sparse_bool(self):
"Length: 6, dtype: bool\nBlockIndex\n"
"Block locations: array([0, 3]{0})\n"
"Block lengths: array([1, 1]{0})".format(dtype))
- self.assertEqual(result, exp)
+ assert result == exp
def test_sparse_int(self):
# GH 13110
@@ -93,7 +93,7 @@ def test_sparse_int(self):
"5 0\ndtype: int64\nBlockIndex\n"
"Block locations: array([1, 4]{0})\n"
"Block lengths: array([1, 1]{0})".format(dtype))
- self.assertEqual(result, exp)
+ assert result == exp
with option_context("display.max_rows", 3,
"display.show_dimensions", False):
@@ -102,7 +102,7 @@ def test_sparse_int(self):
"dtype: int64\nBlockIndex\n"
"Block locations: array([1, 4]{0})\n"
"Block lengths: array([1, 1]{0})".format(dtype))
- self.assertEqual(result, exp)
+ assert result == exp
class TestSparseDataFrameFormatting(tm.TestCase):
@@ -114,10 +114,10 @@ def test_sparse_frame(self):
'C': [0, 0, 3, 0, 5],
'D': [np.nan, np.nan, np.nan, 1, 2]})
sparse = df.to_sparse()
- self.assertEqual(repr(sparse), repr(df))
+ assert repr(sparse) == repr(df)
with option_context("display.max_rows", 3):
- self.assertEqual(repr(sparse), repr(df))
+ assert repr(sparse) == repr(df)
def test_sparse_repr_after_set(self):
# GH 15488
diff --git a/pandas/tests/sparse/test_frame.py b/pandas/tests/sparse/test_frame.py
index 6b54dca8e93d5..f2dd2aa79cc6a 100644
--- a/pandas/tests/sparse/test_frame.py
+++ b/pandas/tests/sparse/test_frame.py
@@ -74,15 +74,15 @@ def test_fill_value_when_combine_const(self):
def test_as_matrix(self):
empty = self.empty.as_matrix()
- self.assertEqual(empty.shape, (0, 0))
+ assert empty.shape == (0, 0)
no_cols = SparseDataFrame(index=np.arange(10))
mat = no_cols.as_matrix()
- self.assertEqual(mat.shape, (10, 0))
+ assert mat.shape == (10, 0)
no_index = SparseDataFrame(columns=np.arange(10))
mat = no_index.as_matrix()
- self.assertEqual(mat.shape, (0, 10))
+ assert mat.shape == (0, 10)
def test_copy(self):
cp = self.frame.copy()
@@ -100,7 +100,7 @@ def test_constructor(self):
assert isinstance(self.iframe['A'].sp_index, IntIndex)
# constructed zframe from matrix above
- self.assertEqual(self.zframe['A'].fill_value, 0)
+ assert self.zframe['A'].fill_value == 0
tm.assert_numpy_array_equal(pd.SparseArray([1., 2., 3., 4., 5., 6.]),
self.zframe['A'].values)
tm.assert_numpy_array_equal(np.array([0., 0., 0., 0., 1., 2.,
@@ -160,8 +160,8 @@ def test_constructor_ndarray(self):
# GH 9272
def test_constructor_empty(self):
sp = SparseDataFrame()
- self.assertEqual(len(sp.index), 0)
- self.assertEqual(len(sp.columns), 0)
+ assert len(sp.index) == 0
+ assert len(sp.columns) == 0
def test_constructor_dataframe(self):
dense = self.frame.to_dense()
@@ -201,24 +201,24 @@ def test_constructor_from_series(self):
def test_constructor_preserve_attr(self):
# GH 13866
arr = pd.SparseArray([1, 0, 3, 0], dtype=np.int64, fill_value=0)
- self.assertEqual(arr.dtype, np.int64)
- self.assertEqual(arr.fill_value, 0)
+ assert arr.dtype == np.int64
+ assert arr.fill_value == 0
df = pd.SparseDataFrame({'x': arr})
- self.assertEqual(df['x'].dtype, np.int64)
- self.assertEqual(df['x'].fill_value, 0)
+ assert df['x'].dtype == np.int64
+ assert df['x'].fill_value == 0
s = pd.SparseSeries(arr, name='x')
- self.assertEqual(s.dtype, np.int64)
- self.assertEqual(s.fill_value, 0)
+ assert s.dtype == np.int64
+ assert s.fill_value == 0
df = pd.SparseDataFrame(s)
- self.assertEqual(df['x'].dtype, np.int64)
- self.assertEqual(df['x'].fill_value, 0)
+ assert df['x'].dtype == np.int64
+ assert df['x'].fill_value == 0
df = pd.SparseDataFrame({'x': s})
- self.assertEqual(df['x'].dtype, np.int64)
- self.assertEqual(df['x'].fill_value, 0)
+ assert df['x'].dtype == np.int64
+ assert df['x'].fill_value == 0
def test_constructor_nan_dataframe(self):
# GH 10079
@@ -257,11 +257,11 @@ def test_dtypes(self):
tm.assert_series_equal(result, expected)
def test_shape(self):
- # GH 10452
- self.assertEqual(self.frame.shape, (10, 4))
- self.assertEqual(self.iframe.shape, (10, 4))
- self.assertEqual(self.zframe.shape, (10, 4))
- self.assertEqual(self.fill_frame.shape, (10, 4))
+ # see gh-10452
+ assert self.frame.shape == (10, 4)
+ assert self.iframe.shape == (10, 4)
+ assert self.zframe.shape == (10, 4)
+ assert self.fill_frame.shape == (10, 4)
def test_str(self):
df = DataFrame(np.random.randn(10000, 4))
@@ -300,19 +300,19 @@ def test_dense_to_sparse(self):
df = DataFrame({'A': [0, 0, 0, 1, 2],
'B': [1, 2, 0, 0, 0]}, dtype=float)
sdf = df.to_sparse(fill_value=0)
- self.assertEqual(sdf.default_fill_value, 0)
+ assert sdf.default_fill_value == 0
tm.assert_frame_equal(sdf.to_dense(), df)
def test_density(self):
df = SparseSeries([nan, nan, nan, 0, 1, 2, 3, 4, 5, 6])
- self.assertEqual(df.density, 0.7)
+ assert df.density == 0.7
df = SparseDataFrame({'A': [nan, nan, nan, 0, 1, 2, 3, 4, 5, 6],
'B': [0, 1, 2, nan, nan, nan, 3, 4, 5, 6],
'C': np.arange(10),
'D': [0, 1, 2, 3, 4, 5, nan, nan, nan, nan]})
- self.assertEqual(df.density, 0.75)
+ assert df.density == 0.75
def test_sparse_to_dense(self):
pass
@@ -417,8 +417,8 @@ def test_iloc(self):
# preserve sparse index type. #2251
data = {'A': [0, 1]}
iframe = SparseDataFrame(data, default_kind='integer')
- self.assertEqual(type(iframe['A'].sp_index),
- type(iframe.iloc[:, 0].sp_index))
+ tm.assert_class_equal(iframe['A'].sp_index,
+ iframe.iloc[:, 0].sp_index)
def test_set_value(self):
@@ -484,7 +484,7 @@ def _check_frame(frame, orig):
expected = to_insert.to_dense().reindex(frame.index)
result = frame['E'].to_dense()
tm.assert_series_equal(result, expected, check_names=False)
- self.assertEqual(result.name, 'E')
+ assert result.name == 'E'
# insert Series
frame['F'] = frame['A'].to_dense()
@@ -506,7 +506,7 @@ def _check_frame(frame, orig):
to_sparsify = np.random.randn(N)
to_sparsify[N // 2:] = frame.default_fill_value
frame['I'] = to_sparsify
- self.assertEqual(len(frame['I'].sp_values), N // 2)
+ assert len(frame['I'].sp_values) == N // 2
# insert ndarray wrong size
pytest.raises(Exception, frame.__setitem__, 'foo',
@@ -514,11 +514,11 @@ def _check_frame(frame, orig):
# scalar value
frame['J'] = 5
- self.assertEqual(len(frame['J'].sp_values), N)
+ assert len(frame['J'].sp_values) == N
assert (frame['J'].sp_values == 5).all()
frame['K'] = frame.default_fill_value
- self.assertEqual(len(frame['K'].sp_values), 0)
+ assert len(frame['K'].sp_values) == 0
self._check_all(_check_frame)
@@ -584,7 +584,7 @@ def test_apply(self):
tm.assert_almost_equal(applied.values, np.sqrt(self.frame.values))
applied = self.fill_frame.apply(np.sqrt)
- self.assertEqual(applied['A'].fill_value, np.sqrt(2))
+ assert applied['A'].fill_value == np.sqrt(2)
# agg / broadcast
broadcasted = self.frame.apply(np.sum, broadcast=True)
@@ -607,7 +607,7 @@ def test_apply_nonuq(self):
res = sparse.apply(lambda s: s[0], axis=1)
exp = orig.apply(lambda s: s[0], axis=1)
# dtype must be kept
- self.assertEqual(res.dtype, np.int64)
+ assert res.dtype == np.int64
# ToDo: apply must return subclassed dtype
assert isinstance(res, pd.Series)
tm.assert_series_equal(res.to_dense(), exp)
@@ -629,8 +629,8 @@ def test_astype(self):
dtype=np.int64),
'B': SparseArray([4, 5, 6, 7],
dtype=np.int64)})
- self.assertEqual(sparse['A'].dtype, np.int64)
- self.assertEqual(sparse['B'].dtype, np.int64)
+ assert sparse['A'].dtype == np.int64
+ assert sparse['B'].dtype == np.int64
res = sparse.astype(np.float64)
exp = pd.SparseDataFrame({'A': SparseArray([1., 2., 3., 4.],
@@ -639,16 +639,16 @@ def test_astype(self):
fill_value=0.)},
default_fill_value=np.nan)
tm.assert_sp_frame_equal(res, exp)
- self.assertEqual(res['A'].dtype, np.float64)
- self.assertEqual(res['B'].dtype, np.float64)
+ assert res['A'].dtype == np.float64
+ assert res['B'].dtype == np.float64
sparse = pd.SparseDataFrame({'A': SparseArray([0, 2, 0, 4],
dtype=np.int64),
'B': SparseArray([0, 5, 0, 7],
dtype=np.int64)},
default_fill_value=0)
- self.assertEqual(sparse['A'].dtype, np.int64)
- self.assertEqual(sparse['B'].dtype, np.int64)
+ assert sparse['A'].dtype == np.int64
+ assert sparse['B'].dtype == np.int64
res = sparse.astype(np.float64)
exp = pd.SparseDataFrame({'A': SparseArray([0., 2., 0., 4.],
@@ -657,8 +657,8 @@ def test_astype(self):
fill_value=0.)},
default_fill_value=0.)
tm.assert_sp_frame_equal(res, exp)
- self.assertEqual(res['A'].dtype, np.float64)
- self.assertEqual(res['B'].dtype, np.float64)
+ assert res['A'].dtype == np.float64
+ assert res['B'].dtype == np.float64
def test_astype_bool(self):
sparse = pd.SparseDataFrame({'A': SparseArray([0, 2, 0, 4],
@@ -668,8 +668,8 @@ def test_astype_bool(self):
fill_value=0,
dtype=np.int64)},
default_fill_value=0)
- self.assertEqual(sparse['A'].dtype, np.int64)
- self.assertEqual(sparse['B'].dtype, np.int64)
+ assert sparse['A'].dtype == np.int64
+ assert sparse['B'].dtype == np.int64
res = sparse.astype(bool)
exp = pd.SparseDataFrame({'A': SparseArray([False, True, False, True],
@@ -680,8 +680,8 @@ def test_astype_bool(self):
fill_value=False)},
default_fill_value=False)
tm.assert_sp_frame_equal(res, exp)
- self.assertEqual(res['A'].dtype, np.bool)
- self.assertEqual(res['B'].dtype, np.bool)
+ assert res['A'].dtype == np.bool
+ assert res['B'].dtype == np.bool
def test_fillna(self):
df = self.zframe.reindex(lrange(5))
@@ -1085,14 +1085,14 @@ def test_sparse_pow_issue(self):
r1 = result.take([0], 1)['A']
r2 = result['A']
- self.assertEqual(len(r2.sp_values), len(r1.sp_values))
+ assert len(r2.sp_values) == len(r1.sp_values)
def test_as_blocks(self):
df = SparseDataFrame({'A': [1.1, 3.3], 'B': [nan, -3.9]},
dtype='float64')
df_blocks = df.blocks
- self.assertEqual(list(df_blocks.keys()), ['float64'])
+ assert list(df_blocks.keys()) == ['float64']
tm.assert_frame_equal(df_blocks['float64'], df)
def test_nan_columnname(self):
diff --git a/pandas/tests/sparse/test_indexing.py b/pandas/tests/sparse/test_indexing.py
index 6dd012ad46db9..0fc2211bbeeae 100644
--- a/pandas/tests/sparse/test_indexing.py
+++ b/pandas/tests/sparse/test_indexing.py
@@ -16,9 +16,9 @@ def test_getitem(self):
orig = self.orig
sparse = self.sparse
- self.assertEqual(sparse[0], 1)
+ assert sparse[0] == 1
assert np.isnan(sparse[1])
- self.assertEqual(sparse[3], 3)
+ assert sparse[3] == 3
result = sparse[[1, 3, 4]]
exp = orig[[1, 3, 4]].to_sparse()
@@ -53,23 +53,23 @@ def test_getitem_int_dtype(self):
res = s[::2]
exp = pd.SparseSeries([0, 2, 4, 6], index=[0, 2, 4, 6], name='xxx')
tm.assert_sp_series_equal(res, exp)
- self.assertEqual(res.dtype, np.int64)
+ assert res.dtype == np.int64
s = pd.SparseSeries([0, 1, 2, 3, 4, 5, 6], fill_value=0, name='xxx')
res = s[::2]
exp = pd.SparseSeries([0, 2, 4, 6], index=[0, 2, 4, 6],
fill_value=0, name='xxx')
tm.assert_sp_series_equal(res, exp)
- self.assertEqual(res.dtype, np.int64)
+ assert res.dtype == np.int64
def test_getitem_fill_value(self):
orig = pd.Series([1, np.nan, 0, 3, 0])
sparse = orig.to_sparse(fill_value=0)
- self.assertEqual(sparse[0], 1)
+ assert sparse[0] == 1
assert np.isnan(sparse[1])
- self.assertEqual(sparse[2], 0)
- self.assertEqual(sparse[3], 3)
+ assert sparse[2] == 0
+ assert sparse[3] == 3
result = sparse[[1, 3, 4]]
exp = orig[[1, 3, 4]].to_sparse(fill_value=0)
@@ -113,7 +113,7 @@ def test_loc(self):
orig = self.orig
sparse = self.sparse
- self.assertEqual(sparse.loc[0], 1)
+ assert sparse.loc[0] == 1
assert np.isnan(sparse.loc[1])
result = sparse.loc[[1, 3, 4]]
@@ -145,7 +145,7 @@ def test_loc_index(self):
orig = pd.Series([1, np.nan, np.nan, 3, np.nan], index=list('ABCDE'))
sparse = orig.to_sparse()
- self.assertEqual(sparse.loc['A'], 1)
+ assert sparse.loc['A'] == 1
assert np.isnan(sparse.loc['B'])
result = sparse.loc[['A', 'C', 'D']]
@@ -170,7 +170,7 @@ def test_loc_index_fill_value(self):
orig = pd.Series([1, np.nan, 0, 3, 0], index=list('ABCDE'))
sparse = orig.to_sparse(fill_value=0)
- self.assertEqual(sparse.loc['A'], 1)
+ assert sparse.loc['A'] == 1
assert np.isnan(sparse.loc['B'])
result = sparse.loc[['A', 'C', 'D']]
@@ -209,7 +209,7 @@ def test_iloc(self):
orig = self.orig
sparse = self.sparse
- self.assertEqual(sparse.iloc[3], 3)
+ assert sparse.iloc[3] == 3
assert np.isnan(sparse.iloc[2])
result = sparse.iloc[[1, 3, 4]]
@@ -227,9 +227,9 @@ def test_iloc_fill_value(self):
orig = pd.Series([1, np.nan, 0, 3, 0])
sparse = orig.to_sparse(fill_value=0)
- self.assertEqual(sparse.iloc[3], 3)
+ assert sparse.iloc[3] == 3
assert np.isnan(sparse.iloc[1])
- self.assertEqual(sparse.iloc[4], 0)
+ assert sparse.iloc[4] == 0
result = sparse.iloc[[1, 3, 4]]
exp = orig.iloc[[1, 3, 4]].to_sparse(fill_value=0)
@@ -249,73 +249,73 @@ def test_iloc_slice_fill_value(self):
def test_at(self):
orig = pd.Series([1, np.nan, np.nan, 3, np.nan])
sparse = orig.to_sparse()
- self.assertEqual(sparse.at[0], orig.at[0])
+ assert sparse.at[0] == orig.at[0]
assert np.isnan(sparse.at[1])
assert np.isnan(sparse.at[2])
- self.assertEqual(sparse.at[3], orig.at[3])
+ assert sparse.at[3] == orig.at[3]
assert np.isnan(sparse.at[4])
orig = pd.Series([1, np.nan, np.nan, 3, np.nan],
index=list('abcde'))
sparse = orig.to_sparse()
- self.assertEqual(sparse.at['a'], orig.at['a'])
+ assert sparse.at['a'] == orig.at['a']
assert np.isnan(sparse.at['b'])
assert np.isnan(sparse.at['c'])
- self.assertEqual(sparse.at['d'], orig.at['d'])
+ assert sparse.at['d'] == orig.at['d']
assert np.isnan(sparse.at['e'])
def test_at_fill_value(self):
orig = pd.Series([1, np.nan, 0, 3, 0],
index=list('abcde'))
sparse = orig.to_sparse(fill_value=0)
- self.assertEqual(sparse.at['a'], orig.at['a'])
+ assert sparse.at['a'] == orig.at['a']
assert np.isnan(sparse.at['b'])
- self.assertEqual(sparse.at['c'], orig.at['c'])
- self.assertEqual(sparse.at['d'], orig.at['d'])
- self.assertEqual(sparse.at['e'], orig.at['e'])
+ assert sparse.at['c'] == orig.at['c']
+ assert sparse.at['d'] == orig.at['d']
+ assert sparse.at['e'] == orig.at['e']
def test_iat(self):
orig = self.orig
sparse = self.sparse
- self.assertEqual(sparse.iat[0], orig.iat[0])
+ assert sparse.iat[0] == orig.iat[0]
assert np.isnan(sparse.iat[1])
assert np.isnan(sparse.iat[2])
- self.assertEqual(sparse.iat[3], orig.iat[3])
+ assert sparse.iat[3] == orig.iat[3]
assert np.isnan(sparse.iat[4])
assert np.isnan(sparse.iat[-1])
- self.assertEqual(sparse.iat[-5], orig.iat[-5])
+ assert sparse.iat[-5] == orig.iat[-5]
def test_iat_fill_value(self):
orig = pd.Series([1, np.nan, 0, 3, 0])
sparse = orig.to_sparse()
- self.assertEqual(sparse.iat[0], orig.iat[0])
+ assert sparse.iat[0] == orig.iat[0]
assert np.isnan(sparse.iat[1])
- self.assertEqual(sparse.iat[2], orig.iat[2])
- self.assertEqual(sparse.iat[3], orig.iat[3])
- self.assertEqual(sparse.iat[4], orig.iat[4])
+ assert sparse.iat[2] == orig.iat[2]
+ assert sparse.iat[3] == orig.iat[3]
+ assert sparse.iat[4] == orig.iat[4]
- self.assertEqual(sparse.iat[-1], orig.iat[-1])
- self.assertEqual(sparse.iat[-5], orig.iat[-5])
+ assert sparse.iat[-1] == orig.iat[-1]
+ assert sparse.iat[-5] == orig.iat[-5]
def test_get(self):
s = pd.SparseSeries([1, np.nan, np.nan, 3, np.nan])
- self.assertEqual(s.get(0), 1)
+ assert s.get(0) == 1
assert np.isnan(s.get(1))
assert s.get(5) is None
s = pd.SparseSeries([1, np.nan, 0, 3, 0], index=list('ABCDE'))
- self.assertEqual(s.get('A'), 1)
+ assert s.get('A') == 1
assert np.isnan(s.get('B'))
- self.assertEqual(s.get('C'), 0)
+ assert s.get('C') == 0
assert s.get('XX') is None
s = pd.SparseSeries([1, np.nan, 0, 3, 0], index=list('ABCDE'),
fill_value=0)
- self.assertEqual(s.get('A'), 1)
+ assert s.get('A') == 1
assert np.isnan(s.get('B'))
- self.assertEqual(s.get('C'), 0)
+ assert s.get('C') == 0
assert s.get('XX') is None
def test_take(self):
@@ -457,9 +457,9 @@ def test_getitem_multi(self):
orig = self.orig
sparse = self.sparse
- self.assertEqual(sparse[0], orig[0])
+ assert sparse[0] == orig[0]
assert np.isnan(sparse[1])
- self.assertEqual(sparse[3], orig[3])
+ assert sparse[3] == orig[3]
tm.assert_sp_series_equal(sparse['A'], orig['A'].to_sparse())
tm.assert_sp_series_equal(sparse['B'], orig['B'].to_sparse())
@@ -486,7 +486,7 @@ def test_getitem_multi_tuple(self):
orig = self.orig
sparse = self.sparse
- self.assertEqual(sparse['C', 0], orig['C', 0])
+ assert sparse['C', 0] == orig['C', 0]
assert np.isnan(sparse['A', 1])
assert np.isnan(sparse['B', 0])
@@ -544,7 +544,7 @@ def test_loc_multi_tuple(self):
orig = self.orig
sparse = self.sparse
- self.assertEqual(sparse.loc['C', 0], orig.loc['C', 0])
+ assert sparse.loc['C', 0] == orig.loc['C', 0]
assert np.isnan(sparse.loc['A', 1])
assert np.isnan(sparse.loc['B', 0])
@@ -645,9 +645,9 @@ def test_loc(self):
columns=list('xyz'))
sparse = orig.to_sparse()
- self.assertEqual(sparse.loc[0, 'x'], 1)
+ assert sparse.loc[0, 'x'] == 1
assert np.isnan(sparse.loc[1, 'z'])
- self.assertEqual(sparse.loc[2, 'z'], 4)
+ assert sparse.loc[2, 'z'] == 4
tm.assert_sp_series_equal(sparse.loc[0], orig.loc[0].to_sparse())
tm.assert_sp_series_equal(sparse.loc[1], orig.loc[1].to_sparse())
@@ -702,9 +702,9 @@ def test_loc_index(self):
index=list('abc'), columns=list('xyz'))
sparse = orig.to_sparse()
- self.assertEqual(sparse.loc['a', 'x'], 1)
+ assert sparse.loc['a', 'x'] == 1
assert np.isnan(sparse.loc['b', 'z'])
- self.assertEqual(sparse.loc['c', 'z'], 4)
+ assert sparse.loc['c', 'z'] == 4
tm.assert_sp_series_equal(sparse.loc['a'], orig.loc['a'].to_sparse())
tm.assert_sp_series_equal(sparse.loc['b'], orig.loc['b'].to_sparse())
@@ -762,7 +762,7 @@ def test_iloc(self):
[np.nan, np.nan, 4]])
sparse = orig.to_sparse()
- self.assertEqual(sparse.iloc[1, 1], 3)
+ assert sparse.iloc[1, 1] == 3
assert np.isnan(sparse.iloc[2, 0])
tm.assert_sp_series_equal(sparse.iloc[0], orig.loc[0].to_sparse())
@@ -810,10 +810,10 @@ def test_at(self):
[0, np.nan, 5]],
index=list('ABCD'), columns=list('xyz'))
sparse = orig.to_sparse()
- self.assertEqual(sparse.at['A', 'x'], orig.at['A', 'x'])
+ assert sparse.at['A', 'x'] == orig.at['A', 'x']
assert np.isnan(sparse.at['B', 'z'])
assert np.isnan(sparse.at['C', 'y'])
- self.assertEqual(sparse.at['D', 'x'], orig.at['D', 'x'])
+ assert sparse.at['D', 'x'] == orig.at['D', 'x']
def test_at_fill_value(self):
orig = pd.DataFrame([[1, np.nan, 0],
@@ -822,10 +822,10 @@ def test_at_fill_value(self):
[0, np.nan, 5]],
index=list('ABCD'), columns=list('xyz'))
sparse = orig.to_sparse(fill_value=0)
- self.assertEqual(sparse.at['A', 'x'], orig.at['A', 'x'])
+ assert sparse.at['A', 'x'] == orig.at['A', 'x']
assert np.isnan(sparse.at['B', 'z'])
assert np.isnan(sparse.at['C', 'y'])
- self.assertEqual(sparse.at['D', 'x'], orig.at['D', 'x'])
+ assert sparse.at['D', 'x'] == orig.at['D', 'x']
def test_iat(self):
orig = pd.DataFrame([[1, np.nan, 0],
@@ -834,13 +834,13 @@ def test_iat(self):
[0, np.nan, 5]],
index=list('ABCD'), columns=list('xyz'))
sparse = orig.to_sparse()
- self.assertEqual(sparse.iat[0, 0], orig.iat[0, 0])
+ assert sparse.iat[0, 0] == orig.iat[0, 0]
assert np.isnan(sparse.iat[1, 2])
assert np.isnan(sparse.iat[2, 1])
- self.assertEqual(sparse.iat[2, 0], orig.iat[2, 0])
+ assert sparse.iat[2, 0] == orig.iat[2, 0]
assert np.isnan(sparse.iat[-1, -2])
- self.assertEqual(sparse.iat[-1, -1], orig.iat[-1, -1])
+ assert sparse.iat[-1, -1] == orig.iat[-1, -1]
def test_iat_fill_value(self):
orig = pd.DataFrame([[1, np.nan, 0],
@@ -849,13 +849,13 @@ def test_iat_fill_value(self):
[0, np.nan, 5]],
index=list('ABCD'), columns=list('xyz'))
sparse = orig.to_sparse(fill_value=0)
- self.assertEqual(sparse.iat[0, 0], orig.iat[0, 0])
+ assert sparse.iat[0, 0] == orig.iat[0, 0]
assert np.isnan(sparse.iat[1, 2])
assert np.isnan(sparse.iat[2, 1])
- self.assertEqual(sparse.iat[2, 0], orig.iat[2, 0])
+ assert sparse.iat[2, 0] == orig.iat[2, 0]
assert np.isnan(sparse.iat[-1, -2])
- self.assertEqual(sparse.iat[-1, -1], orig.iat[-1, -1])
+ assert sparse.iat[-1, -1] == orig.iat[-1, -1]
def test_take(self):
orig = pd.DataFrame([[1, np.nan, 0],
@@ -972,7 +972,7 @@ def setUp(self):
def test_frame_basic_dtypes(self):
for _, row in self.sdf.iterrows():
- self.assertEqual(row.dtype, object)
+ assert row.dtype == object
tm.assert_sp_series_equal(self.sdf['string'], self.string_series,
check_names=False)
tm.assert_sp_series_equal(self.sdf['int'], self.int_series,
@@ -1014,13 +1014,14 @@ def test_frame_indexing_multiple(self):
def test_series_indexing_single(self):
for i, idx in enumerate(self.cols):
- self.assertEqual(self.ss.iloc[i], self.ss[idx])
- self.assertEqual(type(self.ss.iloc[i]),
- type(self.ss[idx]))
- self.assertEqual(self.ss['string'], 'a')
- self.assertEqual(self.ss['int'], 1)
- self.assertEqual(self.ss['float'], 1.1)
- self.assertEqual(self.ss['object'], [])
+ assert self.ss.iloc[i] == self.ss[idx]
+ tm.assert_class_equal(self.ss.iloc[i], self.ss[idx],
+ obj="series index")
+
+ assert self.ss['string'] == 'a'
+ assert self.ss['int'] == 1
+ assert self.ss['float'] == 1.1
+ assert self.ss['object'] == []
def test_series_indexing_multiple(self):
tm.assert_sp_series_equal(self.ss.loc[['string', 'int']],
diff --git a/pandas/tests/sparse/test_libsparse.py b/pandas/tests/sparse/test_libsparse.py
index c7e1be968c148..c7207870b22b9 100644
--- a/pandas/tests/sparse/test_libsparse.py
+++ b/pandas/tests/sparse/test_libsparse.py
@@ -244,27 +244,27 @@ class TestSparseIndexCommon(tm.TestCase):
def test_int_internal(self):
idx = _make_index(4, np.array([2, 3], dtype=np.int32), kind='integer')
assert isinstance(idx, IntIndex)
- self.assertEqual(idx.npoints, 2)
+ assert idx.npoints == 2
tm.assert_numpy_array_equal(idx.indices,
np.array([2, 3], dtype=np.int32))
idx = _make_index(4, np.array([], dtype=np.int32), kind='integer')
assert isinstance(idx, IntIndex)
- self.assertEqual(idx.npoints, 0)
+ assert idx.npoints == 0
tm.assert_numpy_array_equal(idx.indices,
np.array([], dtype=np.int32))
idx = _make_index(4, np.array([0, 1, 2, 3], dtype=np.int32),
kind='integer')
assert isinstance(idx, IntIndex)
- self.assertEqual(idx.npoints, 4)
+ assert idx.npoints == 4
tm.assert_numpy_array_equal(idx.indices,
np.array([0, 1, 2, 3], dtype=np.int32))
def test_block_internal(self):
idx = _make_index(4, np.array([2, 3], dtype=np.int32), kind='block')
assert isinstance(idx, BlockIndex)
- self.assertEqual(idx.npoints, 2)
+ assert idx.npoints == 2
tm.assert_numpy_array_equal(idx.blocs,
np.array([2], dtype=np.int32))
tm.assert_numpy_array_equal(idx.blengths,
@@ -272,7 +272,7 @@ def test_block_internal(self):
idx = _make_index(4, np.array([], dtype=np.int32), kind='block')
assert isinstance(idx, BlockIndex)
- self.assertEqual(idx.npoints, 0)
+ assert idx.npoints == 0
tm.assert_numpy_array_equal(idx.blocs,
np.array([], dtype=np.int32))
tm.assert_numpy_array_equal(idx.blengths,
@@ -281,7 +281,7 @@ def test_block_internal(self):
idx = _make_index(4, np.array([0, 1, 2, 3], dtype=np.int32),
kind='block')
assert isinstance(idx, BlockIndex)
- self.assertEqual(idx.npoints, 4)
+ assert idx.npoints == 4
tm.assert_numpy_array_equal(idx.blocs,
np.array([0], dtype=np.int32))
tm.assert_numpy_array_equal(idx.blengths,
@@ -290,7 +290,7 @@ def test_block_internal(self):
idx = _make_index(4, np.array([0, 2, 3], dtype=np.int32),
kind='block')
assert isinstance(idx, BlockIndex)
- self.assertEqual(idx.npoints, 3)
+ assert idx.npoints == 3
tm.assert_numpy_array_equal(idx.blocs,
np.array([0, 2], dtype=np.int32))
tm.assert_numpy_array_equal(idx.blengths,
@@ -299,35 +299,35 @@ def test_block_internal(self):
def test_lookup(self):
for kind in ['integer', 'block']:
idx = _make_index(4, np.array([2, 3], dtype=np.int32), kind=kind)
- self.assertEqual(idx.lookup(-1), -1)
- self.assertEqual(idx.lookup(0), -1)
- self.assertEqual(idx.lookup(1), -1)
- self.assertEqual(idx.lookup(2), 0)
- self.assertEqual(idx.lookup(3), 1)
- self.assertEqual(idx.lookup(4), -1)
+ assert idx.lookup(-1) == -1
+ assert idx.lookup(0) == -1
+ assert idx.lookup(1) == -1
+ assert idx.lookup(2) == 0
+ assert idx.lookup(3) == 1
+ assert idx.lookup(4) == -1
idx = _make_index(4, np.array([], dtype=np.int32), kind=kind)
for i in range(-1, 5):
- self.assertEqual(idx.lookup(i), -1)
+ assert idx.lookup(i) == -1
idx = _make_index(4, np.array([0, 1, 2, 3], dtype=np.int32),
kind=kind)
- self.assertEqual(idx.lookup(-1), -1)
- self.assertEqual(idx.lookup(0), 0)
- self.assertEqual(idx.lookup(1), 1)
- self.assertEqual(idx.lookup(2), 2)
- self.assertEqual(idx.lookup(3), 3)
- self.assertEqual(idx.lookup(4), -1)
+ assert idx.lookup(-1) == -1
+ assert idx.lookup(0) == 0
+ assert idx.lookup(1) == 1
+ assert idx.lookup(2) == 2
+ assert idx.lookup(3) == 3
+ assert idx.lookup(4) == -1
idx = _make_index(4, np.array([0, 2, 3], dtype=np.int32),
kind=kind)
- self.assertEqual(idx.lookup(-1), -1)
- self.assertEqual(idx.lookup(0), 0)
- self.assertEqual(idx.lookup(1), -1)
- self.assertEqual(idx.lookup(2), 1)
- self.assertEqual(idx.lookup(3), 2)
- self.assertEqual(idx.lookup(4), -1)
+ assert idx.lookup(-1) == -1
+ assert idx.lookup(0) == 0
+ assert idx.lookup(1) == -1
+ assert idx.lookup(2) == 1
+ assert idx.lookup(3) == 2
+ assert idx.lookup(4) == -1
def test_lookup_array(self):
for kind in ['integer', 'block']:
@@ -392,7 +392,7 @@ class TestBlockIndex(tm.TestCase):
def test_block_internal(self):
idx = _make_index(4, np.array([2, 3], dtype=np.int32), kind='block')
assert isinstance(idx, BlockIndex)
- self.assertEqual(idx.npoints, 2)
+ assert idx.npoints == 2
tm.assert_numpy_array_equal(idx.blocs,
np.array([2], dtype=np.int32))
tm.assert_numpy_array_equal(idx.blengths,
@@ -400,7 +400,7 @@ def test_block_internal(self):
idx = _make_index(4, np.array([], dtype=np.int32), kind='block')
assert isinstance(idx, BlockIndex)
- self.assertEqual(idx.npoints, 0)
+ assert idx.npoints == 0
tm.assert_numpy_array_equal(idx.blocs,
np.array([], dtype=np.int32))
tm.assert_numpy_array_equal(idx.blengths,
@@ -409,7 +409,7 @@ def test_block_internal(self):
idx = _make_index(4, np.array([0, 1, 2, 3], dtype=np.int32),
kind='block')
assert isinstance(idx, BlockIndex)
- self.assertEqual(idx.npoints, 4)
+ assert idx.npoints == 4
tm.assert_numpy_array_equal(idx.blocs,
np.array([0], dtype=np.int32))
tm.assert_numpy_array_equal(idx.blengths,
@@ -417,7 +417,7 @@ def test_block_internal(self):
idx = _make_index(4, np.array([0, 2, 3], dtype=np.int32), kind='block')
assert isinstance(idx, BlockIndex)
- self.assertEqual(idx.npoints, 3)
+ assert idx.npoints == 3
tm.assert_numpy_array_equal(idx.blocs,
np.array([0, 2], dtype=np.int32))
tm.assert_numpy_array_equal(idx.blengths,
@@ -515,20 +515,20 @@ def test_check_integrity(self):
def test_int_internal(self):
idx = _make_index(4, np.array([2, 3], dtype=np.int32), kind='integer')
assert isinstance(idx, IntIndex)
- self.assertEqual(idx.npoints, 2)
+ assert idx.npoints == 2
tm.assert_numpy_array_equal(idx.indices,
np.array([2, 3], dtype=np.int32))
idx = _make_index(4, np.array([], dtype=np.int32), kind='integer')
assert isinstance(idx, IntIndex)
- self.assertEqual(idx.npoints, 0)
+ assert idx.npoints == 0
tm.assert_numpy_array_equal(idx.indices,
np.array([], dtype=np.int32))
idx = _make_index(4, np.array([0, 1, 2, 3], dtype=np.int32),
kind='integer')
assert isinstance(idx, IntIndex)
- self.assertEqual(idx.npoints, 4)
+ assert idx.npoints == 4
tm.assert_numpy_array_equal(idx.indices,
np.array([0, 1, 2, 3], dtype=np.int32))
@@ -580,7 +580,7 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, elen):
assert rb_index.to_int_index().equals(ri_index)
tm.assert_numpy_array_equal(result_block_vals, result_int_vals)
- self.assertEqual(bfill, ifill)
+ assert bfill == ifill
# check versus Series...
xseries = Series(x, xdindex.indices)
diff --git a/pandas/tests/sparse/test_list.py b/pandas/tests/sparse/test_list.py
index 9f91d73a8228a..941e07a5582b0 100644
--- a/pandas/tests/sparse/test_list.py
+++ b/pandas/tests/sparse/test_list.py
@@ -33,11 +33,11 @@ def test_len(self):
arr = self.na_data
splist = SparseList()
splist.append(arr[:5])
- self.assertEqual(len(splist), 5)
+ assert len(splist) == 5
splist.append(arr[5])
- self.assertEqual(len(splist), 6)
+ assert len(splist) == 6
splist.append(arr[6:])
- self.assertEqual(len(splist), 10)
+ assert len(splist) == 10
def test_append_na(self):
with tm.assert_produces_warning(FutureWarning):
@@ -76,12 +76,12 @@ def test_consolidate(self):
splist.append(arr[6:])
consol = splist.consolidate(inplace=False)
- self.assertEqual(consol.nchunks, 1)
- self.assertEqual(splist.nchunks, 3)
+ assert consol.nchunks == 1
+ assert splist.nchunks == 3
tm.assert_sp_array_equal(consol.to_array(), exp_sparr)
splist.consolidate()
- self.assertEqual(splist.nchunks, 1)
+ assert splist.nchunks == 1
tm.assert_sp_array_equal(splist.to_array(), exp_sparr)
def test_copy(self):
@@ -96,7 +96,7 @@ def test_copy(self):
cp = splist.copy()
cp.append(arr[6:])
- self.assertEqual(splist.nchunks, 2)
+ assert splist.nchunks == 2
tm.assert_sp_array_equal(cp.to_array(), exp_sparr)
def test_getitem(self):
diff --git a/pandas/tests/sparse/test_series.py b/pandas/tests/sparse/test_series.py
index b8c12c2d64277..0f04e1a06900d 100644
--- a/pandas/tests/sparse/test_series.py
+++ b/pandas/tests/sparse/test_series.py
@@ -90,24 +90,24 @@ def setUp(self):
def test_constructor_dtype(self):
arr = SparseSeries([np.nan, 1, 2, np.nan])
- self.assertEqual(arr.dtype, np.float64)
+ assert arr.dtype == np.float64
assert np.isnan(arr.fill_value)
arr = SparseSeries([np.nan, 1, 2, np.nan], fill_value=0)
- self.assertEqual(arr.dtype, np.float64)
- self.assertEqual(arr.fill_value, 0)
+ assert arr.dtype == np.float64
+ assert arr.fill_value == 0
arr = SparseSeries([0, 1, 2, 4], dtype=np.int64, fill_value=np.nan)
- self.assertEqual(arr.dtype, np.int64)
+ assert arr.dtype == np.int64
assert np.isnan(arr.fill_value)
arr = SparseSeries([0, 1, 2, 4], dtype=np.int64)
- self.assertEqual(arr.dtype, np.int64)
- self.assertEqual(arr.fill_value, 0)
+ assert arr.dtype == np.int64
+ assert arr.fill_value == 0
arr = SparseSeries([0, 1, 2, 4], fill_value=0, dtype=np.int64)
- self.assertEqual(arr.dtype, np.int64)
- self.assertEqual(arr.fill_value, 0)
+ assert arr.dtype == np.int64
+ assert arr.fill_value == 0
def test_iteration_and_str(self):
[x for x in self.bseries]
@@ -135,12 +135,12 @@ def test_construct_DataFrame_with_sp_series(self):
def test_constructor_preserve_attr(self):
arr = pd.SparseArray([1, 0, 3, 0], dtype=np.int64, fill_value=0)
- self.assertEqual(arr.dtype, np.int64)
- self.assertEqual(arr.fill_value, 0)
+ assert arr.dtype == np.int64
+ assert arr.fill_value == 0
s = pd.SparseSeries(arr, name='x')
- self.assertEqual(s.dtype, np.int64)
- self.assertEqual(s.fill_value, 0)
+ assert s.dtype == np.int64
+ assert s.fill_value == 0
def test_series_density(self):
# GH2803
@@ -148,7 +148,7 @@ def test_series_density(self):
ts[2:-2] = nan
sts = ts.to_sparse()
density = sts.density # don't die
- self.assertEqual(density, 4 / 10.0)
+ assert density == 4 / 10.0
def test_sparse_to_dense(self):
arr, index = _test_data1()
@@ -203,12 +203,12 @@ def test_dense_to_sparse(self):
iseries = series.to_sparse(kind='integer')
tm.assert_sp_series_equal(bseries, self.bseries)
tm.assert_sp_series_equal(iseries, self.iseries, check_names=False)
- self.assertEqual(iseries.name, self.bseries.name)
+ assert iseries.name == self.bseries.name
- self.assertEqual(len(series), len(bseries))
- self.assertEqual(len(series), len(iseries))
- self.assertEqual(series.shape, bseries.shape)
- self.assertEqual(series.shape, iseries.shape)
+ assert len(series) == len(bseries)
+ assert len(series) == len(iseries)
+ assert series.shape == bseries.shape
+ assert series.shape == iseries.shape
# non-NaN fill value
series = self.zbseries.to_dense()
@@ -216,17 +216,17 @@ def test_dense_to_sparse(self):
ziseries = series.to_sparse(kind='integer', fill_value=0)
tm.assert_sp_series_equal(zbseries, self.zbseries)
tm.assert_sp_series_equal(ziseries, self.ziseries, check_names=False)
- self.assertEqual(ziseries.name, self.zbseries.name)
+ assert ziseries.name == self.zbseries.name
- self.assertEqual(len(series), len(zbseries))
- self.assertEqual(len(series), len(ziseries))
- self.assertEqual(series.shape, zbseries.shape)
- self.assertEqual(series.shape, ziseries.shape)
+ assert len(series) == len(zbseries)
+ assert len(series) == len(ziseries)
+ assert series.shape == zbseries.shape
+ assert series.shape == ziseries.shape
def test_to_dense_preserve_name(self):
assert (self.bseries.name is not None)
result = self.bseries.to_dense()
- self.assertEqual(result.name, self.bseries.name)
+ assert result.name == self.bseries.name
def test_constructor(self):
# test setup guys
@@ -235,7 +235,7 @@ def test_constructor(self):
assert np.isnan(self.iseries.fill_value)
assert isinstance(self.iseries.sp_index, IntIndex)
- self.assertEqual(self.zbseries.fill_value, 0)
+ assert self.zbseries.fill_value == 0
tm.assert_numpy_array_equal(self.zbseries.values.values,
self.bseries.to_dense().fillna(0).values)
@@ -244,13 +244,13 @@ def _check_const(sparse, name):
# use passed series name
result = SparseSeries(sparse)
tm.assert_sp_series_equal(result, sparse)
- self.assertEqual(sparse.name, name)
- self.assertEqual(result.name, name)
+ assert sparse.name == name
+ assert result.name == name
# use passed name
result = SparseSeries(sparse, name='x')
tm.assert_sp_series_equal(result, sparse, check_names=False)
- self.assertEqual(result.name, 'x')
+ assert result.name == 'x'
_check_const(self.bseries, 'bseries')
_check_const(self.iseries, 'iseries')
@@ -271,19 +271,19 @@ def _check_const(sparse, name):
values = np.ones(self.bseries.npoints)
sp = SparseSeries(values, sparse_index=self.bseries.sp_index)
sp.sp_values[:5] = 97
- self.assertEqual(values[0], 97)
+ assert values[0] == 97
- self.assertEqual(len(sp), 20)
- self.assertEqual(sp.shape, (20, ))
+ assert len(sp) == 20
+ assert sp.shape == (20, )
# but can make it copy!
sp = SparseSeries(values, sparse_index=self.bseries.sp_index,
copy=True)
sp.sp_values[:5] = 100
- self.assertEqual(values[0], 97)
+ assert values[0] == 97
- self.assertEqual(len(sp), 20)
- self.assertEqual(sp.shape, (20, ))
+ assert len(sp) == 20
+ assert sp.shape == (20, )
def test_constructor_scalar(self):
data = 5
@@ -294,8 +294,8 @@ def test_constructor_scalar(self):
data = np.nan
sp = SparseSeries(data, np.arange(100))
- self.assertEqual(len(sp), 100)
- self.assertEqual(sp.shape, (100, ))
+ assert len(sp) == 100
+ assert sp.shape == (100, )
def test_constructor_ndarray(self):
pass
@@ -304,14 +304,14 @@ def test_constructor_nonnan(self):
arr = [0, 0, 0, nan, nan]
sp_series = SparseSeries(arr, fill_value=0)
tm.assert_numpy_array_equal(sp_series.values.values, np.array(arr))
- self.assertEqual(len(sp_series), 5)
- self.assertEqual(sp_series.shape, (5, ))
+ assert len(sp_series) == 5
+ assert sp_series.shape == (5, )
- # GH 9272
def test_constructor_empty(self):
+ # see gh-9272
sp = SparseSeries()
- self.assertEqual(len(sp.index), 0)
- self.assertEqual(sp.shape, (0, ))
+ assert len(sp.index) == 0
+ assert sp.shape == (0, )
def test_copy_astype(self):
cop = self.bseries.astype(np.float64)
@@ -342,16 +342,16 @@ def test_copy_astype(self):
assert (self.bseries.sp_values[:5] == 5).all()
def test_shape(self):
- # GH 10452
- self.assertEqual(self.bseries.shape, (20, ))
- self.assertEqual(self.btseries.shape, (20, ))
- self.assertEqual(self.iseries.shape, (20, ))
+ # see gh-10452
+ assert self.bseries.shape == (20, )
+ assert self.btseries.shape == (20, )
+ assert self.iseries.shape == (20, )
- self.assertEqual(self.bseries2.shape, (15, ))
- self.assertEqual(self.iseries2.shape, (15, ))
+ assert self.bseries2.shape == (15, )
+ assert self.iseries2.shape == (15, )
- self.assertEqual(self.zbseries2.shape, (15, ))
- self.assertEqual(self.ziseries2.shape, (15, ))
+ assert self.zbseries2.shape == (15, )
+ assert self.ziseries2.shape == (15, )
def test_astype(self):
with pytest.raises(ValueError):
@@ -365,12 +365,12 @@ def test_astype_all(self):
np.int32, np.int16, np.int8]
for typ in types:
res = s.astype(typ)
- self.assertEqual(res.dtype, typ)
+ assert res.dtype == typ
tm.assert_series_equal(res.to_dense(), orig.astype(typ))
def test_kind(self):
- self.assertEqual(self.bseries.kind, 'block')
- self.assertEqual(self.iseries.kind, 'integer')
+ assert self.bseries.kind == 'block'
+ assert self.iseries.kind == 'integer'
def test_to_frame(self):
# GH 9850
@@ -448,11 +448,11 @@ def test_set_value(self):
idx = self.btseries.index[7]
self.btseries.set_value(idx, 0)
- self.assertEqual(self.btseries[idx], 0)
+ assert self.btseries[idx] == 0
self.iseries.set_value('foobar', 0)
- self.assertEqual(self.iseries.index[-1], 'foobar')
- self.assertEqual(self.iseries['foobar'], 0)
+ assert self.iseries.index[-1] == 'foobar'
+ assert self.iseries['foobar'] == 0
def test_getitem_slice(self):
idx = self.bseries.index
@@ -515,7 +515,7 @@ def test_numpy_take(self):
def test_setitem(self):
self.bseries[5] = 7.
- self.assertEqual(self.bseries[5], 7.)
+ assert self.bseries[5] == 7.
def test_setslice(self):
self.bseries[5:10] = 7.
@@ -592,30 +592,30 @@ def test_abs(self):
expected = SparseSeries([1, 2, 3], name='x')
result = s.abs()
tm.assert_sp_series_equal(result, expected)
- self.assertEqual(result.name, 'x')
+ assert result.name == 'x'
result = abs(s)
tm.assert_sp_series_equal(result, expected)
- self.assertEqual(result.name, 'x')
+ assert result.name == 'x'
result = np.abs(s)
tm.assert_sp_series_equal(result, expected)
- self.assertEqual(result.name, 'x')
+ assert result.name == 'x'
s = SparseSeries([1, -2, 2, -3], fill_value=-2, name='x')
expected = SparseSeries([1, 2, 3], sparse_index=s.sp_index,
fill_value=2, name='x')
result = s.abs()
tm.assert_sp_series_equal(result, expected)
- self.assertEqual(result.name, 'x')
+ assert result.name == 'x'
result = abs(s)
tm.assert_sp_series_equal(result, expected)
- self.assertEqual(result.name, 'x')
+ assert result.name == 'x'
result = np.abs(s)
tm.assert_sp_series_equal(result, expected)
- self.assertEqual(result.name, 'x')
+ assert result.name == 'x'
def test_reindex(self):
def _compare_with_series(sps, new_index):
@@ -729,7 +729,7 @@ def _compare_with_dense(obj, op):
sparse_result = getattr(obj, op)()
series = obj.to_dense()
dense_result = getattr(series, op)()
- self.assertEqual(sparse_result, dense_result)
+ assert sparse_result == dense_result
to_compare = ['count', 'sum', 'mean', 'std', 'var', 'skew']
@@ -1097,8 +1097,8 @@ def _check_results_to_coo(self, results, check):
# or compare directly as difference of sparse
# assert(abs(A - A_result).max() < 1e-12) # max is failing in python
# 2.6
- self.assertEqual(il, il_result)
- self.assertEqual(jl, jl_result)
+ assert il == il_result
+ assert jl == jl_result
def test_concat(self):
val1 = np.array([1, 2, np.nan, np.nan, 0, np.nan])
diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index 1b03c4e86b23f..86d9ab3643cc9 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -263,7 +263,7 @@ def test_factorize_nan(self):
for na_sentinel in (-1, 20):
ids = rizer.factorize(key, sort=True, na_sentinel=na_sentinel)
expected = np.array([0, 1, 0, na_sentinel], dtype='int32')
- self.assertEqual(len(set(key)), len(set(expected)))
+ assert len(set(key)) == len(set(expected))
tm.assert_numpy_array_equal(pd.isnull(key),
expected == na_sentinel)
@@ -275,7 +275,7 @@ def test_factorize_nan(self):
ids = rizer.factorize(key, sort=False, na_sentinel=na_sentinel) # noqa
expected = np.array([2, -1, 0], dtype='int32')
- self.assertEqual(len(set(key)), len(set(expected)))
+ assert len(set(key)) == len(set(expected))
tm.assert_numpy_array_equal(pd.isnull(key), expected == na_sentinel)
def test_complex_sorting(self):
@@ -351,17 +351,17 @@ def test_datetime64_dtype_array_returned(self):
'2015-01-01T00:00:00.000000000+0000'])
result = algos.unique(dt_index)
tm.assert_numpy_array_equal(result, expected)
- self.assertEqual(result.dtype, expected.dtype)
+ assert result.dtype == expected.dtype
s = Series(dt_index)
result = algos.unique(s)
tm.assert_numpy_array_equal(result, expected)
- self.assertEqual(result.dtype, expected.dtype)
+ assert result.dtype == expected.dtype
arr = s.values
result = algos.unique(arr)
tm.assert_numpy_array_equal(result, expected)
- self.assertEqual(result.dtype, expected.dtype)
+ assert result.dtype == expected.dtype
def test_timedelta64_dtype_array_returned(self):
# GH 9431
@@ -370,17 +370,17 @@ def test_timedelta64_dtype_array_returned(self):
td_index = pd.to_timedelta([31200, 45678, 31200, 10000, 45678])
result = algos.unique(td_index)
tm.assert_numpy_array_equal(result, expected)
- self.assertEqual(result.dtype, expected.dtype)
+ assert result.dtype == expected.dtype
s = Series(td_index)
result = algos.unique(s)
tm.assert_numpy_array_equal(result, expected)
- self.assertEqual(result.dtype, expected.dtype)
+ assert result.dtype == expected.dtype
arr = s.values
result = algos.unique(arr)
tm.assert_numpy_array_equal(result, expected)
- self.assertEqual(result.dtype, expected.dtype)
+ assert result.dtype == expected.dtype
def test_uint64_overflow(self):
s = Series([1, 2, 2**63, 2**63], dtype=np.uint64)
@@ -620,13 +620,13 @@ def test_value_counts_bins(self):
def test_value_counts_dtypes(self):
result = algos.value_counts([1, 1.])
- self.assertEqual(len(result), 1)
+ assert len(result) == 1
result = algos.value_counts([1, 1.], bins=1)
- self.assertEqual(len(result), 1)
+ assert len(result) == 1
result = algos.value_counts(Series([1, 1., '1'])) # object
- self.assertEqual(len(result), 2)
+ assert len(result) == 2
pytest.raises(TypeError, lambda s: algos.value_counts(s, bins=1),
['1', 1])
@@ -638,8 +638,8 @@ def test_value_counts_nat(self):
for s in [td, dt]:
vc = algos.value_counts(s)
vc_with_na = algos.value_counts(s, dropna=False)
- self.assertEqual(len(vc), 1)
- self.assertEqual(len(vc_with_na), 2)
+ assert len(vc) == 1
+ assert len(vc_with_na) == 2
exp_dt = Series({Timestamp('2014-01-01 00:00:00'): 1})
tm.assert_series_equal(algos.value_counts(dt), exp_dt)
@@ -1009,7 +1009,7 @@ def test_group_var_constant(self):
self.algo(out, counts, values, labels)
- self.assertEqual(counts[0], 3)
+ assert counts[0] == 3
assert out[0, 0] >= 0
tm.assert_almost_equal(out[0, 0], 0.0)
@@ -1033,7 +1033,7 @@ def test_group_var_large_inputs(self):
self.algo(out, counts, values, labels)
- self.assertEqual(counts[0], 10 ** 6)
+ assert counts[0] == 10 ** 6
tm.assert_almost_equal(out[0, 0], 1.0 / 12, check_less_precise=True)
diff --git a/pandas/tests/test_base.py b/pandas/tests/test_base.py
index cbcc4dc84e6d0..ed0d61cdbbaf9 100644
--- a/pandas/tests/test_base.py
+++ b/pandas/tests/test_base.py
@@ -83,7 +83,7 @@ def test_slicing_maintains_type(self):
def check_result(self, result, expected, klass=None):
klass = klass or self.klass
assert isinstance(result, klass)
- self.assertEqual(result, expected)
+ assert result == expected
class TestPandasDelegate(tm.TestCase):
@@ -219,7 +219,7 @@ def check_ops_properties(self, props, filter=None, ignore_failures=False):
np.ndarray):
tm.assert_numpy_array_equal(result, expected)
else:
- self.assertEqual(result, expected)
+ assert result == expected
# freq raises AttributeError on an Int64Index because its not
# defined we mostly care about Series here anyhow
@@ -337,12 +337,12 @@ def test_ops(self):
expected = pd.Period(ordinal=getattr(o._values, op)(),
freq=o.freq)
try:
- self.assertEqual(result, expected)
+ assert result == expected
except TypeError:
# comparing tz-aware series with np.array results in
# TypeError
expected = expected.astype('M8[ns]').astype('int64')
- self.assertEqual(result.value, expected)
+ assert result.value == expected
def test_nanops(self):
# GH 7261
@@ -350,7 +350,7 @@ def test_nanops(self):
for klass in [Index, Series]:
obj = klass([np.nan, 2.0])
- self.assertEqual(getattr(obj, op)(), 2.0)
+ assert getattr(obj, op)() == 2.0
obj = klass([np.nan])
assert pd.isnull(getattr(obj, op)())
@@ -360,33 +360,33 @@ def test_nanops(self):
obj = klass([pd.NaT, datetime(2011, 11, 1)])
# check DatetimeIndex monotonic path
- self.assertEqual(getattr(obj, op)(), datetime(2011, 11, 1))
+ assert getattr(obj, op)() == datetime(2011, 11, 1)
obj = klass([pd.NaT, datetime(2011, 11, 1), pd.NaT])
# check DatetimeIndex non-monotonic path
- self.assertEqual(getattr(obj, op)(), datetime(2011, 11, 1))
+ assert getattr(obj, op)() == datetime(2011, 11, 1)
# argmin/max
obj = Index(np.arange(5, dtype='int64'))
- self.assertEqual(obj.argmin(), 0)
- self.assertEqual(obj.argmax(), 4)
+ assert obj.argmin() == 0
+ assert obj.argmax() == 4
obj = Index([np.nan, 1, np.nan, 2])
- self.assertEqual(obj.argmin(), 1)
- self.assertEqual(obj.argmax(), 3)
+ assert obj.argmin() == 1
+ assert obj.argmax() == 3
obj = Index([np.nan])
- self.assertEqual(obj.argmin(), -1)
- self.assertEqual(obj.argmax(), -1)
+ assert obj.argmin() == -1
+ assert obj.argmax() == -1
obj = Index([pd.NaT, datetime(2011, 11, 1), datetime(2011, 11, 2),
pd.NaT])
- self.assertEqual(obj.argmin(), 1)
- self.assertEqual(obj.argmax(), 2)
+ assert obj.argmin() == 1
+ assert obj.argmax() == 2
obj = Index([pd.NaT])
- self.assertEqual(obj.argmin(), -1)
- self.assertEqual(obj.argmax(), -1)
+ assert obj.argmin() == -1
+ assert obj.argmax() == -1
def test_value_counts_unique_nunique(self):
for orig in self.objs:
@@ -414,7 +414,7 @@ def test_value_counts_unique_nunique(self):
o = klass(rep, index=idx, name='a')
# check values has the same dtype as the original
- self.assertEqual(o.dtype, orig.dtype)
+ assert o.dtype == orig.dtype
expected_s = Series(range(10, 0, -1), index=expected_index,
dtype='int64', name='a')
@@ -422,7 +422,7 @@ def test_value_counts_unique_nunique(self):
result = o.value_counts()
tm.assert_series_equal(result, expected_s)
assert result.index.name is None
- self.assertEqual(result.name, 'a')
+ assert result.name == 'a'
result = o.unique()
if isinstance(o, Index):
@@ -430,7 +430,7 @@ def test_value_counts_unique_nunique(self):
tm.assert_index_equal(result, orig)
elif is_datetimetz(o):
# datetimetz Series returns array of Timestamp
- self.assertEqual(result[0], orig[0])
+ assert result[0] == orig[0]
for r in result:
assert isinstance(r, pd.Timestamp)
tm.assert_numpy_array_equal(result,
@@ -438,7 +438,7 @@ def test_value_counts_unique_nunique(self):
else:
tm.assert_numpy_array_equal(result, orig.values)
- self.assertEqual(o.nunique(), len(np.unique(o.values)))
+ assert o.nunique() == len(np.unique(o.values))
def test_value_counts_unique_nunique_null(self):
@@ -469,7 +469,7 @@ def test_value_counts_unique_nunique_null(self):
values[0:2] = null_obj
# check values has the same dtype as the original
- self.assertEqual(values.dtype, o.dtype)
+ assert values.dtype == o.dtype
# create repeated values, 'n'th element is repeated by n+1
# times
@@ -490,7 +490,7 @@ def test_value_counts_unique_nunique_null(self):
o.name = 'a'
# check values has the same dtype as the original
- self.assertEqual(o.dtype, orig.dtype)
+ assert o.dtype == orig.dtype
# check values correctly have NaN
nanloc = np.zeros(len(o), dtype=np.bool)
nanloc[:3] = True
@@ -510,11 +510,11 @@ def test_value_counts_unique_nunique_null(self):
result_s_na = o.value_counts(dropna=False)
tm.assert_series_equal(result_s_na, expected_s_na)
assert result_s_na.index.name is None
- self.assertEqual(result_s_na.name, 'a')
+ assert result_s_na.name == 'a'
result_s = o.value_counts()
tm.assert_series_equal(o.value_counts(), expected_s)
assert result_s.index.name is None
- self.assertEqual(result_s.name, 'a')
+ assert result_s.name == 'a'
result = o.unique()
if isinstance(o, Index):
@@ -529,10 +529,10 @@ def test_value_counts_unique_nunique_null(self):
tm.assert_numpy_array_equal(result[1:], values[2:])
assert pd.isnull(result[0])
- self.assertEqual(result.dtype, orig.dtype)
+ assert result.dtype == orig.dtype
- self.assertEqual(o.nunique(), 8)
- self.assertEqual(o.nunique(dropna=False), 9)
+ assert o.nunique() == 8
+ assert o.nunique(dropna=False) == 9
def test_value_counts_inferred(self):
klasses = [Index, Series]
@@ -549,7 +549,7 @@ def test_value_counts_inferred(self):
exp = np.unique(np.array(s_values, dtype=np.object_))
tm.assert_numpy_array_equal(s.unique(), exp)
- self.assertEqual(s.nunique(), 4)
+ assert s.nunique() == 4
# don't sort, have to sort after the fact as not sorting is
# platform-dep
hist = s.value_counts(sort=False).sort_values()
@@ -666,14 +666,14 @@ def test_value_counts_datetime64(self):
else:
tm.assert_numpy_array_equal(s.unique(), expected)
- self.assertEqual(s.nunique(), 3)
+ assert s.nunique() == 3
# with NaT
s = df['dt'].copy()
s = klass([v for v in s.values] + [pd.NaT])
result = s.value_counts()
- self.assertEqual(result.index.dtype, 'datetime64[ns]')
+ assert result.index.dtype == 'datetime64[ns]'
tm.assert_series_equal(result, expected_s)
result = s.value_counts(dropna=False)
@@ -681,7 +681,7 @@ def test_value_counts_datetime64(self):
tm.assert_series_equal(result, expected_s)
unique = s.unique()
- self.assertEqual(unique.dtype, 'datetime64[ns]')
+ assert unique.dtype == 'datetime64[ns]'
# numpy_array_equal cannot compare pd.NaT
if isinstance(s, Index):
@@ -691,8 +691,8 @@ def test_value_counts_datetime64(self):
tm.assert_numpy_array_equal(unique[:3], expected)
assert pd.isnull(unique[3])
- self.assertEqual(s.nunique(), 3)
- self.assertEqual(s.nunique(dropna=False), 4)
+ assert s.nunique() == 3
+ assert s.nunique(dropna=False) == 4
# timedelta64[ns]
td = df.dt - df.dt + timedelta(1)
@@ -931,7 +931,7 @@ def test_fillna(self):
o = klass(values)
# check values has the same dtype as the original
- self.assertEqual(o.dtype, orig.dtype)
+ assert o.dtype == orig.dtype
result = o.fillna(fill_value)
if isinstance(o, Index):
@@ -951,14 +951,12 @@ def test_memory_usage(self):
# if there are objects, only deep will pick them up
assert res_deep > res
else:
- self.assertEqual(res, res_deep)
+ assert res == res_deep
if isinstance(o, Series):
- self.assertEqual(
- (o.memory_usage(index=False) +
- o.index.memory_usage()),
- o.memory_usage(index=True)
- )
+ assert ((o.memory_usage(index=False) +
+ o.index.memory_usage()) ==
+ o.memory_usage(index=True))
# sys.getsizeof will call the .memory_usage with
# deep=True, and add on some GC overhead
diff --git a/pandas/tests/test_categorical.py b/pandas/tests/test_categorical.py
index 708ca92c30cac..515ca8d9cedc5 100644
--- a/pandas/tests/test_categorical.py
+++ b/pandas/tests/test_categorical.py
@@ -35,8 +35,8 @@ def setUp(self):
ordered=True)
def test_getitem(self):
- self.assertEqual(self.factor[0], 'a')
- self.assertEqual(self.factor[-1], 'c')
+ assert self.factor[0] == 'a'
+ assert self.factor[-1] == 'c'
subf = self.factor[[0, 1, 2]]
tm.assert_numpy_array_equal(subf._codes,
@@ -82,9 +82,9 @@ def test_setitem(self):
# int/positional
c = self.factor.copy()
c[0] = 'b'
- self.assertEqual(c[0], 'b')
+ assert c[0] == 'b'
c[-1] = 'a'
- self.assertEqual(c[-1], 'a')
+ assert c[-1] == 'a'
# boolean
c = self.factor.copy()
@@ -110,7 +110,7 @@ def test_setitem_listlike(self):
# we are asserting the code result here
# which maps to the -1000 category
result = c.codes[np.array([100000]).astype(np.int64)]
- self.assertEqual(result, np.array([5], dtype='int8'))
+ tm.assert_numpy_array_equal(result, np.array([5], dtype='int8'))
def test_constructor_unsortable(self):
@@ -665,7 +665,7 @@ def test_print(self):
"Categories (3, object): [a < b < c]"]
expected = "\n".join(expected)
actual = repr(self.factor)
- self.assertEqual(actual, expected)
+ assert actual == expected
def test_big_print(self):
factor = Categorical([0, 1, 2, 0, 1, 2] * 100, ['a', 'b', 'c'],
@@ -676,24 +676,24 @@ def test_big_print(self):
actual = repr(factor)
- self.assertEqual(actual, expected)
+ assert actual == expected
def test_empty_print(self):
factor = Categorical([], ["a", "b", "c"])
expected = ("[], Categories (3, object): [a, b, c]")
# hack because array_repr changed in numpy > 1.6.x
actual = repr(factor)
- self.assertEqual(actual, expected)
+ assert actual == expected
- self.assertEqual(expected, actual)
+ assert expected == actual
factor = Categorical([], ["a", "b", "c"], ordered=True)
expected = ("[], Categories (3, object): [a < b < c]")
actual = repr(factor)
- self.assertEqual(expected, actual)
+ assert expected == actual
factor = Categorical([], [])
expected = ("[], Categories (0, object): []")
- self.assertEqual(expected, repr(factor))
+ assert expected == repr(factor)
def test_print_none_width(self):
# GH10087
@@ -702,7 +702,7 @@ def test_print_none_width(self):
"dtype: category\nCategories (4, int64): [1, 2, 3, 4]")
with option_context("display.width", None):
- self.assertEqual(exp, repr(a))
+ assert exp == repr(a)
def test_unicode_print(self):
if PY3:
@@ -716,7 +716,7 @@ def test_unicode_print(self):
Length: 60
Categories (3, object): [aaaaa, bb, cccc]"""
- self.assertEqual(_rep(c), expected)
+ assert _rep(c) == expected
c = pd.Categorical([u'ああああ', u'いいいいい', u'ううううううう'] * 20)
expected = u"""\
@@ -724,7 +724,7 @@ def test_unicode_print(self):
Length: 60
Categories (3, object): [ああああ, いいいいい, ううううううう]""" # noqa
- self.assertEqual(_rep(c), expected)
+ assert _rep(c) == expected
# unicode option should not affect to Categorical, as it doesn't care
# the repr width
@@ -735,7 +735,7 @@ def test_unicode_print(self):
Length: 60
Categories (3, object): [ああああ, いいいいい, ううううううう]""" # noqa
- self.assertEqual(_rep(c), expected)
+ assert _rep(c) == expected
def test_periodindex(self):
idx1 = PeriodIndex(['2014-01', '2014-01', '2014-02', '2014-02',
@@ -1080,7 +1080,7 @@ def test_remove_unused_categories(self):
tm.assert_index_equal(out.categories, Index(['B', 'D', 'F']))
exp_codes = np.array([2, -1, 1, 0, 1, 2, -1], dtype=np.int8)
tm.assert_numpy_array_equal(out.codes, exp_codes)
- self.assertEqual(out.get_values().tolist(), val)
+ assert out.get_values().tolist() == val
alpha = list('abcdefghijklmnopqrstuvwxyz')
val = np.random.choice(alpha[::2], 10000).astype('object')
@@ -1088,7 +1088,7 @@ def test_remove_unused_categories(self):
cat = pd.Categorical(values=val, categories=alpha)
out = cat.remove_unused_categories()
- self.assertEqual(out.get_values().tolist(), val.tolist())
+ assert out.get_values().tolist() == val.tolist()
def test_nan_handling(self):
@@ -1156,37 +1156,37 @@ def test_min_max(self):
cat = Categorical(["a", "b", "c", "d"], ordered=True)
_min = cat.min()
_max = cat.max()
- self.assertEqual(_min, "a")
- self.assertEqual(_max, "d")
+ assert _min == "a"
+ assert _max == "d"
cat = Categorical(["a", "b", "c", "d"],
categories=['d', 'c', 'b', 'a'], ordered=True)
_min = cat.min()
_max = cat.max()
- self.assertEqual(_min, "d")
- self.assertEqual(_max, "a")
+ assert _min == "d"
+ assert _max == "a"
cat = Categorical([np.nan, "b", "c", np.nan],
categories=['d', 'c', 'b', 'a'], ordered=True)
_min = cat.min()
_max = cat.max()
assert np.isnan(_min)
- self.assertEqual(_max, "b")
+ assert _max == "b"
_min = cat.min(numeric_only=True)
- self.assertEqual(_min, "c")
+ assert _min == "c"
_max = cat.max(numeric_only=True)
- self.assertEqual(_max, "b")
+ assert _max == "b"
cat = Categorical([np.nan, 1, 2, np.nan], categories=[5, 4, 3, 2, 1],
ordered=True)
_min = cat.min()
_max = cat.max()
assert np.isnan(_min)
- self.assertEqual(_max, 1)
+ assert _max == 1
_min = cat.min(numeric_only=True)
- self.assertEqual(_min, 2)
+ assert _min == 2
_max = cat.max(numeric_only=True)
- self.assertEqual(_max, 1)
+ assert _max == 1
def test_unique(self):
# categories are reordered based on value when ordered=False
@@ -1391,7 +1391,7 @@ def test_sort_values_na_position(self):
def test_slicing_directly(self):
cat = Categorical(["a", "b", "c", "d", "a", "b", "c"])
sliced = cat[3]
- self.assertEqual(sliced, "d")
+ assert sliced == "d"
sliced = cat[3:5]
expected = Categorical(["d", "a"], categories=['a', 'b', 'c', 'd'])
tm.assert_numpy_array_equal(sliced._codes, expected._codes)
@@ -1427,7 +1427,7 @@ def test_shift(self):
def test_nbytes(self):
cat = pd.Categorical([1, 2, 3])
exp = cat._codes.nbytes + cat._categories.values.nbytes
- self.assertEqual(cat.nbytes, exp)
+ assert cat.nbytes == exp
def test_memory_usage(self):
cat = pd.Categorical([1, 2, 3])
@@ -1661,8 +1661,8 @@ def test_basic(self):
# test basic creation / coercion of categoricals
s = Series(self.factor, name='A')
- self.assertEqual(s.dtype, 'category')
- self.assertEqual(len(s), len(self.factor))
+ assert s.dtype == 'category'
+ assert len(s) == len(self.factor)
str(s.values)
str(s)
@@ -1672,14 +1672,14 @@ def test_basic(self):
tm.assert_series_equal(result, s)
result = df.iloc[:, 0]
tm.assert_series_equal(result, s)
- self.assertEqual(len(df), len(self.factor))
+ assert len(df) == len(self.factor)
str(df.values)
str(df)
df = DataFrame({'A': s})
result = df['A']
tm.assert_series_equal(result, s)
- self.assertEqual(len(df), len(self.factor))
+ assert len(df) == len(self.factor)
str(df.values)
str(df)
@@ -1689,8 +1689,8 @@ def test_basic(self):
result2 = df['B']
tm.assert_series_equal(result1, s)
tm.assert_series_equal(result2, s, check_names=False)
- self.assertEqual(result2.name, 'B')
- self.assertEqual(len(df), len(self.factor))
+ assert result2.name == 'B'
+ assert len(df) == len(self.factor)
str(df.values)
str(df)
@@ -1703,13 +1703,13 @@ def test_basic(self):
expected = x.iloc[0].person_name
result = x.person_name.iloc[0]
- self.assertEqual(result, expected)
+ assert result == expected
result = x.person_name[0]
- self.assertEqual(result, expected)
+ assert result == expected
result = x.person_name.loc[0]
- self.assertEqual(result, expected)
+ assert result == expected
def test_creation_astype(self):
l = ["a", "b", "c", "a"]
@@ -1976,11 +1976,11 @@ def test_series_delegations(self):
exp_codes = Series([0, 1, 2, 0], dtype='int8')
tm.assert_series_equal(s.cat.codes, exp_codes)
- self.assertEqual(s.cat.ordered, True)
+ assert s.cat.ordered
s = s.cat.as_unordered()
- self.assertEqual(s.cat.ordered, False)
+ assert not s.cat.ordered
s.cat.as_ordered(inplace=True)
- self.assertEqual(s.cat.ordered, True)
+ assert s.cat.ordered
# reorder
s = Series(Categorical(["a", "b", "c", "a"], ordered=True))
@@ -2058,7 +2058,7 @@ def test_describe(self):
# Categoricals should not show up together with numerical columns
result = self.cat.describe()
- self.assertEqual(len(result.columns), 1)
+ assert len(result.columns) == 1
# In a frame, describe() for the cat should be the same as for string
# arrays (count, unique, top, freq)
@@ -2081,75 +2081,75 @@ def test_repr(self):
exp = u("0 1\n1 2\n2 3\n3 4\n" +
"dtype: category\nCategories (4, int64): [1, 2, 3, 4]")
- self.assertEqual(exp, a.__unicode__())
+ assert exp == a.__unicode__()
a = pd.Series(pd.Categorical(["a", "b"] * 25))
exp = u("0 a\n1 b\n" + " ..\n" + "48 a\n49 b\n" +
"Length: 50, dtype: category\nCategories (2, object): [a, b]")
with option_context("display.max_rows", 5):
- self.assertEqual(exp, repr(a))
+ assert exp == repr(a)
levs = list("abcdefghijklmnopqrstuvwxyz")
a = pd.Series(pd.Categorical(
["a", "b"], categories=levs, ordered=True))
exp = u("0 a\n1 b\n" + "dtype: category\n"
"Categories (26, object): [a < b < c < d ... w < x < y < z]")
- self.assertEqual(exp, a.__unicode__())
+ assert exp == a.__unicode__()
def test_categorical_repr(self):
c = pd.Categorical([1, 2, 3])
exp = """[1, 2, 3]
Categories (3, int64): [1, 2, 3]"""
- self.assertEqual(repr(c), exp)
+ assert repr(c) == exp
c = pd.Categorical([1, 2, 3, 1, 2, 3], categories=[1, 2, 3])
exp = """[1, 2, 3, 1, 2, 3]
Categories (3, int64): [1, 2, 3]"""
- self.assertEqual(repr(c), exp)
+ assert repr(c) == exp
c = pd.Categorical([1, 2, 3, 4, 5] * 10)
exp = """[1, 2, 3, 4, 5, ..., 1, 2, 3, 4, 5]
Length: 50
Categories (5, int64): [1, 2, 3, 4, 5]"""
- self.assertEqual(repr(c), exp)
+ assert repr(c) == exp
c = pd.Categorical(np.arange(20))
exp = """[0, 1, 2, 3, 4, ..., 15, 16, 17, 18, 19]
Length: 20
Categories (20, int64): [0, 1, 2, 3, ..., 16, 17, 18, 19]"""
- self.assertEqual(repr(c), exp)
+ assert repr(c) == exp
def test_categorical_repr_ordered(self):
c = pd.Categorical([1, 2, 3], ordered=True)
exp = """[1, 2, 3]
Categories (3, int64): [1 < 2 < 3]"""
- self.assertEqual(repr(c), exp)
+ assert repr(c) == exp
c = pd.Categorical([1, 2, 3, 1, 2, 3], categories=[1, 2, 3],
ordered=True)
exp = """[1, 2, 3, 1, 2, 3]
Categories (3, int64): [1 < 2 < 3]"""
- self.assertEqual(repr(c), exp)
+ assert repr(c) == exp
c = pd.Categorical([1, 2, 3, 4, 5] * 10, ordered=True)
exp = """[1, 2, 3, 4, 5, ..., 1, 2, 3, 4, 5]
Length: 50
Categories (5, int64): [1 < 2 < 3 < 4 < 5]"""
- self.assertEqual(repr(c), exp)
+ assert repr(c) == exp
c = pd.Categorical(np.arange(20), ordered=True)
exp = """[0, 1, 2, 3, 4, ..., 15, 16, 17, 18, 19]
Length: 20
Categories (20, int64): [0 < 1 < 2 < 3 ... 16 < 17 < 18 < 19]"""
- self.assertEqual(repr(c), exp)
+ assert repr(c) == exp
def test_categorical_repr_datetime(self):
idx = pd.date_range('2011-01-01 09:00', freq='H', periods=5)
@@ -2164,7 +2164,7 @@ def test_categorical_repr_datetime(self):
"2011-01-01 10:00:00, 2011-01-01 11:00:00,\n"
" 2011-01-01 12:00:00, "
"2011-01-01 13:00:00]""")
- self.assertEqual(repr(c), exp)
+ assert repr(c) == exp
c = pd.Categorical(idx.append(idx), categories=idx)
exp = (
@@ -2177,7 +2177,7 @@ def test_categorical_repr_datetime(self):
" 2011-01-01 12:00:00, "
"2011-01-01 13:00:00]")
- self.assertEqual(repr(c), exp)
+ assert repr(c) == exp
idx = pd.date_range('2011-01-01 09:00', freq='H', periods=5,
tz='US/Eastern')
@@ -2193,7 +2193,7 @@ def test_categorical_repr_datetime(self):
" "
"2011-01-01 13:00:00-05:00]")
- self.assertEqual(repr(c), exp)
+ assert repr(c) == exp
c = pd.Categorical(idx.append(idx), categories=idx)
exp = (
@@ -2209,7 +2209,7 @@ def test_categorical_repr_datetime(self):
" "
"2011-01-01 13:00:00-05:00]")
- self.assertEqual(repr(c), exp)
+ assert repr(c) == exp
def test_categorical_repr_datetime_ordered(self):
idx = pd.date_range('2011-01-01 09:00', freq='H', periods=5)
@@ -2218,14 +2218,14 @@ def test_categorical_repr_datetime_ordered(self):
Categories (5, datetime64[ns]): [2011-01-01 09:00:00 < 2011-01-01 10:00:00 < 2011-01-01 11:00:00 <
2011-01-01 12:00:00 < 2011-01-01 13:00:00]""" # noqa
- self.assertEqual(repr(c), exp)
+ assert repr(c) == exp
c = pd.Categorical(idx.append(idx), categories=idx, ordered=True)
exp = """[2011-01-01 09:00:00, 2011-01-01 10:00:00, 2011-01-01 11:00:00, 2011-01-01 12:00:00, 2011-01-01 13:00:00, 2011-01-01 09:00:00, 2011-01-01 10:00:00, 2011-01-01 11:00:00, 2011-01-01 12:00:00, 2011-01-01 13:00:00]
Categories (5, datetime64[ns]): [2011-01-01 09:00:00 < 2011-01-01 10:00:00 < 2011-01-01 11:00:00 <
2011-01-01 12:00:00 < 2011-01-01 13:00:00]""" # noqa
- self.assertEqual(repr(c), exp)
+ assert repr(c) == exp
idx = pd.date_range('2011-01-01 09:00', freq='H', periods=5,
tz='US/Eastern')
@@ -2235,7 +2235,7 @@ def test_categorical_repr_datetime_ordered(self):
2011-01-01 11:00:00-05:00 < 2011-01-01 12:00:00-05:00 <
2011-01-01 13:00:00-05:00]""" # noqa
- self.assertEqual(repr(c), exp)
+ assert repr(c) == exp
c = pd.Categorical(idx.append(idx), categories=idx, ordered=True)
exp = """[2011-01-01 09:00:00-05:00, 2011-01-01 10:00:00-05:00, 2011-01-01 11:00:00-05:00, 2011-01-01 12:00:00-05:00, 2011-01-01 13:00:00-05:00, 2011-01-01 09:00:00-05:00, 2011-01-01 10:00:00-05:00, 2011-01-01 11:00:00-05:00, 2011-01-01 12:00:00-05:00, 2011-01-01 13:00:00-05:00]
@@ -2243,7 +2243,7 @@ def test_categorical_repr_datetime_ordered(self):
2011-01-01 11:00:00-05:00 < 2011-01-01 12:00:00-05:00 <
2011-01-01 13:00:00-05:00]""" # noqa
- self.assertEqual(repr(c), exp)
+ assert repr(c) == exp
def test_categorical_repr_period(self):
idx = pd.period_range('2011-01-01 09:00', freq='H', periods=5)
@@ -2252,27 +2252,27 @@ def test_categorical_repr_period(self):
Categories (5, period[H]): [2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00,
2011-01-01 13:00]""" # noqa
- self.assertEqual(repr(c), exp)
+ assert repr(c) == exp
c = pd.Categorical(idx.append(idx), categories=idx)
exp = """[2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00, 2011-01-01 13:00, 2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00, 2011-01-01 13:00]
Categories (5, period[H]): [2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00,
2011-01-01 13:00]""" # noqa
- self.assertEqual(repr(c), exp)
+ assert repr(c) == exp
idx = pd.period_range('2011-01', freq='M', periods=5)
c = pd.Categorical(idx)
exp = """[2011-01, 2011-02, 2011-03, 2011-04, 2011-05]
Categories (5, period[M]): [2011-01, 2011-02, 2011-03, 2011-04, 2011-05]"""
- self.assertEqual(repr(c), exp)
+ assert repr(c) == exp
c = pd.Categorical(idx.append(idx), categories=idx)
exp = """[2011-01, 2011-02, 2011-03, 2011-04, 2011-05, 2011-01, 2011-02, 2011-03, 2011-04, 2011-05]
Categories (5, period[M]): [2011-01, 2011-02, 2011-03, 2011-04, 2011-05]""" # noqa
- self.assertEqual(repr(c), exp)
+ assert repr(c) == exp
def test_categorical_repr_period_ordered(self):
idx = pd.period_range('2011-01-01 09:00', freq='H', periods=5)
@@ -2281,27 +2281,27 @@ def test_categorical_repr_period_ordered(self):
Categories (5, period[H]): [2011-01-01 09:00 < 2011-01-01 10:00 < 2011-01-01 11:00 < 2011-01-01 12:00 <
2011-01-01 13:00]""" # noqa
- self.assertEqual(repr(c), exp)
+ assert repr(c) == exp
c = pd.Categorical(idx.append(idx), categories=idx, ordered=True)
exp = """[2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00, 2011-01-01 13:00, 2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00, 2011-01-01 13:00]
Categories (5, period[H]): [2011-01-01 09:00 < 2011-01-01 10:00 < 2011-01-01 11:00 < 2011-01-01 12:00 <
2011-01-01 13:00]""" # noqa
- self.assertEqual(repr(c), exp)
+ assert repr(c) == exp
idx = pd.period_range('2011-01', freq='M', periods=5)
c = pd.Categorical(idx, ordered=True)
exp = """[2011-01, 2011-02, 2011-03, 2011-04, 2011-05]
Categories (5, period[M]): [2011-01 < 2011-02 < 2011-03 < 2011-04 < 2011-05]"""
- self.assertEqual(repr(c), exp)
+ assert repr(c) == exp
c = pd.Categorical(idx.append(idx), categories=idx, ordered=True)
exp = """[2011-01, 2011-02, 2011-03, 2011-04, 2011-05, 2011-01, 2011-02, 2011-03, 2011-04, 2011-05]
Categories (5, period[M]): [2011-01 < 2011-02 < 2011-03 < 2011-04 < 2011-05]""" # noqa
- self.assertEqual(repr(c), exp)
+ assert repr(c) == exp
def test_categorical_repr_timedelta(self):
idx = pd.timedelta_range('1 days', periods=5)
@@ -2309,13 +2309,13 @@ def test_categorical_repr_timedelta(self):
exp = """[1 days, 2 days, 3 days, 4 days, 5 days]
Categories (5, timedelta64[ns]): [1 days, 2 days, 3 days, 4 days, 5 days]"""
- self.assertEqual(repr(c), exp)
+ assert repr(c) == exp
c = pd.Categorical(idx.append(idx), categories=idx)
exp = """[1 days, 2 days, 3 days, 4 days, 5 days, 1 days, 2 days, 3 days, 4 days, 5 days]
Categories (5, timedelta64[ns]): [1 days, 2 days, 3 days, 4 days, 5 days]""" # noqa
- self.assertEqual(repr(c), exp)
+ assert repr(c) == exp
idx = pd.timedelta_range('1 hours', periods=20)
c = pd.Categorical(idx)
@@ -2325,7 +2325,7 @@ def test_categorical_repr_timedelta(self):
3 days 01:00:00, ..., 16 days 01:00:00, 17 days 01:00:00,
18 days 01:00:00, 19 days 01:00:00]""" # noqa
- self.assertEqual(repr(c), exp)
+ assert repr(c) == exp
c = pd.Categorical(idx.append(idx), categories=idx)
exp = """[0 days 01:00:00, 1 days 01:00:00, 2 days 01:00:00, 3 days 01:00:00, 4 days 01:00:00, ..., 15 days 01:00:00, 16 days 01:00:00, 17 days 01:00:00, 18 days 01:00:00, 19 days 01:00:00]
@@ -2334,7 +2334,7 @@ def test_categorical_repr_timedelta(self):
3 days 01:00:00, ..., 16 days 01:00:00, 17 days 01:00:00,
18 days 01:00:00, 19 days 01:00:00]""" # noqa
- self.assertEqual(repr(c), exp)
+ assert repr(c) == exp
def test_categorical_repr_timedelta_ordered(self):
idx = pd.timedelta_range('1 days', periods=5)
@@ -2342,13 +2342,13 @@ def test_categorical_repr_timedelta_ordered(self):
exp = """[1 days, 2 days, 3 days, 4 days, 5 days]
Categories (5, timedelta64[ns]): [1 days < 2 days < 3 days < 4 days < 5 days]""" # noqa
- self.assertEqual(repr(c), exp)
+ assert repr(c) == exp
c = pd.Categorical(idx.append(idx), categories=idx, ordered=True)
exp = """[1 days, 2 days, 3 days, 4 days, 5 days, 1 days, 2 days, 3 days, 4 days, 5 days]
Categories (5, timedelta64[ns]): [1 days < 2 days < 3 days < 4 days < 5 days]""" # noqa
- self.assertEqual(repr(c), exp)
+ assert repr(c) == exp
idx = pd.timedelta_range('1 hours', periods=20)
c = pd.Categorical(idx, ordered=True)
@@ -2358,7 +2358,7 @@ def test_categorical_repr_timedelta_ordered(self):
3 days 01:00:00 ... 16 days 01:00:00 < 17 days 01:00:00 <
18 days 01:00:00 < 19 days 01:00:00]""" # noqa
- self.assertEqual(repr(c), exp)
+ assert repr(c) == exp
c = pd.Categorical(idx.append(idx), categories=idx, ordered=True)
exp = """[0 days 01:00:00, 1 days 01:00:00, 2 days 01:00:00, 3 days 01:00:00, 4 days 01:00:00, ..., 15 days 01:00:00, 16 days 01:00:00, 17 days 01:00:00, 18 days 01:00:00, 19 days 01:00:00]
@@ -2367,7 +2367,7 @@ def test_categorical_repr_timedelta_ordered(self):
3 days 01:00:00 ... 16 days 01:00:00 < 17 days 01:00:00 <
18 days 01:00:00 < 19 days 01:00:00]""" # noqa
- self.assertEqual(repr(c), exp)
+ assert repr(c) == exp
def test_categorical_series_repr(self):
s = pd.Series(pd.Categorical([1, 2, 3]))
@@ -2377,7 +2377,7 @@ def test_categorical_series_repr(self):
dtype: category
Categories (3, int64): [1, 2, 3]"""
- self.assertEqual(repr(s), exp)
+ assert repr(s) == exp
s = pd.Series(pd.Categorical(np.arange(10)))
exp = """0 0
@@ -2393,7 +2393,7 @@ def test_categorical_series_repr(self):
dtype: category
Categories (10, int64): [0, 1, 2, 3, ..., 6, 7, 8, 9]"""
- self.assertEqual(repr(s), exp)
+ assert repr(s) == exp
def test_categorical_series_repr_ordered(self):
s = pd.Series(pd.Categorical([1, 2, 3], ordered=True))
@@ -2403,7 +2403,7 @@ def test_categorical_series_repr_ordered(self):
dtype: category
Categories (3, int64): [1 < 2 < 3]"""
- self.assertEqual(repr(s), exp)
+ assert repr(s) == exp
s = pd.Series(pd.Categorical(np.arange(10), ordered=True))
exp = """0 0
@@ -2419,7 +2419,7 @@ def test_categorical_series_repr_ordered(self):
dtype: category
Categories (10, int64): [0 < 1 < 2 < 3 ... 6 < 7 < 8 < 9]"""
- self.assertEqual(repr(s), exp)
+ assert repr(s) == exp
def test_categorical_series_repr_datetime(self):
idx = pd.date_range('2011-01-01 09:00', freq='H', periods=5)
@@ -2433,7 +2433,7 @@ def test_categorical_series_repr_datetime(self):
Categories (5, datetime64[ns]): [2011-01-01 09:00:00, 2011-01-01 10:00:00, 2011-01-01 11:00:00,
2011-01-01 12:00:00, 2011-01-01 13:00:00]""" # noqa
- self.assertEqual(repr(s), exp)
+ assert repr(s) == exp
idx = pd.date_range('2011-01-01 09:00', freq='H', periods=5,
tz='US/Eastern')
@@ -2448,7 +2448,7 @@ def test_categorical_series_repr_datetime(self):
2011-01-01 11:00:00-05:00, 2011-01-01 12:00:00-05:00,
2011-01-01 13:00:00-05:00]""" # noqa
- self.assertEqual(repr(s), exp)
+ assert repr(s) == exp
def test_categorical_series_repr_datetime_ordered(self):
idx = pd.date_range('2011-01-01 09:00', freq='H', periods=5)
@@ -2462,7 +2462,7 @@ def test_categorical_series_repr_datetime_ordered(self):
Categories (5, datetime64[ns]): [2011-01-01 09:00:00 < 2011-01-01 10:00:00 < 2011-01-01 11:00:00 <
2011-01-01 12:00:00 < 2011-01-01 13:00:00]""" # noqa
- self.assertEqual(repr(s), exp)
+ assert repr(s) == exp
idx = pd.date_range('2011-01-01 09:00', freq='H', periods=5,
tz='US/Eastern')
@@ -2477,7 +2477,7 @@ def test_categorical_series_repr_datetime_ordered(self):
2011-01-01 11:00:00-05:00 < 2011-01-01 12:00:00-05:00 <
2011-01-01 13:00:00-05:00]""" # noqa
- self.assertEqual(repr(s), exp)
+ assert repr(s) == exp
def test_categorical_series_repr_period(self):
idx = pd.period_range('2011-01-01 09:00', freq='H', periods=5)
@@ -2491,7 +2491,7 @@ def test_categorical_series_repr_period(self):
Categories (5, period[H]): [2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00,
2011-01-01 13:00]""" # noqa
- self.assertEqual(repr(s), exp)
+ assert repr(s) == exp
idx = pd.period_range('2011-01', freq='M', periods=5)
s = pd.Series(pd.Categorical(idx))
@@ -2503,7 +2503,7 @@ def test_categorical_series_repr_period(self):
dtype: category
Categories (5, period[M]): [2011-01, 2011-02, 2011-03, 2011-04, 2011-05]"""
- self.assertEqual(repr(s), exp)
+ assert repr(s) == exp
def test_categorical_series_repr_period_ordered(self):
idx = pd.period_range('2011-01-01 09:00', freq='H', periods=5)
@@ -2517,7 +2517,7 @@ def test_categorical_series_repr_period_ordered(self):
Categories (5, period[H]): [2011-01-01 09:00 < 2011-01-01 10:00 < 2011-01-01 11:00 < 2011-01-01 12:00 <
2011-01-01 13:00]""" # noqa
- self.assertEqual(repr(s), exp)
+ assert repr(s) == exp
idx = pd.period_range('2011-01', freq='M', periods=5)
s = pd.Series(pd.Categorical(idx, ordered=True))
@@ -2529,7 +2529,7 @@ def test_categorical_series_repr_period_ordered(self):
dtype: category
Categories (5, period[M]): [2011-01 < 2011-02 < 2011-03 < 2011-04 < 2011-05]"""
- self.assertEqual(repr(s), exp)
+ assert repr(s) == exp
def test_categorical_series_repr_timedelta(self):
idx = pd.timedelta_range('1 days', periods=5)
@@ -2542,7 +2542,7 @@ def test_categorical_series_repr_timedelta(self):
dtype: category
Categories (5, timedelta64[ns]): [1 days, 2 days, 3 days, 4 days, 5 days]"""
- self.assertEqual(repr(s), exp)
+ assert repr(s) == exp
idx = pd.timedelta_range('1 hours', periods=10)
s = pd.Series(pd.Categorical(idx))
@@ -2561,7 +2561,7 @@ def test_categorical_series_repr_timedelta(self):
3 days 01:00:00, ..., 6 days 01:00:00, 7 days 01:00:00,
8 days 01:00:00, 9 days 01:00:00]""" # noqa
- self.assertEqual(repr(s), exp)
+ assert repr(s) == exp
def test_categorical_series_repr_timedelta_ordered(self):
idx = pd.timedelta_range('1 days', periods=5)
@@ -2574,7 +2574,7 @@ def test_categorical_series_repr_timedelta_ordered(self):
dtype: category
Categories (5, timedelta64[ns]): [1 days < 2 days < 3 days < 4 days < 5 days]""" # noqa
- self.assertEqual(repr(s), exp)
+ assert repr(s) == exp
idx = pd.timedelta_range('1 hours', periods=10)
s = pd.Series(pd.Categorical(idx, ordered=True))
@@ -2593,25 +2593,25 @@ def test_categorical_series_repr_timedelta_ordered(self):
3 days 01:00:00 ... 6 days 01:00:00 < 7 days 01:00:00 <
8 days 01:00:00 < 9 days 01:00:00]""" # noqa
- self.assertEqual(repr(s), exp)
+ assert repr(s) == exp
def test_categorical_index_repr(self):
idx = pd.CategoricalIndex(pd.Categorical([1, 2, 3]))
exp = """CategoricalIndex([1, 2, 3], categories=[1, 2, 3], ordered=False, dtype='category')""" # noqa
- self.assertEqual(repr(idx), exp)
+ assert repr(idx) == exp
i = pd.CategoricalIndex(pd.Categorical(np.arange(10)))
exp = """CategoricalIndex([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], categories=[0, 1, 2, 3, 4, 5, 6, 7, ...], ordered=False, dtype='category')""" # noqa
- self.assertEqual(repr(i), exp)
+ assert repr(i) == exp
def test_categorical_index_repr_ordered(self):
i = pd.CategoricalIndex(pd.Categorical([1, 2, 3], ordered=True))
exp = """CategoricalIndex([1, 2, 3], categories=[1, 2, 3], ordered=True, dtype='category')""" # noqa
- self.assertEqual(repr(i), exp)
+ assert repr(i) == exp
i = pd.CategoricalIndex(pd.Categorical(np.arange(10), ordered=True))
exp = """CategoricalIndex([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], categories=[0, 1, 2, 3, 4, 5, 6, 7, ...], ordered=True, dtype='category')""" # noqa
- self.assertEqual(repr(i), exp)
+ assert repr(i) == exp
def test_categorical_index_repr_datetime(self):
idx = pd.date_range('2011-01-01 09:00', freq='H', periods=5)
@@ -2621,7 +2621,7 @@ def test_categorical_index_repr_datetime(self):
'2011-01-01 13:00:00'],
categories=[2011-01-01 09:00:00, 2011-01-01 10:00:00, 2011-01-01 11:00:00, 2011-01-01 12:00:00, 2011-01-01 13:00:00], ordered=False, dtype='category')""" # noqa
- self.assertEqual(repr(i), exp)
+ assert repr(i) == exp
idx = pd.date_range('2011-01-01 09:00', freq='H', periods=5,
tz='US/Eastern')
@@ -2631,7 +2631,7 @@ def test_categorical_index_repr_datetime(self):
'2011-01-01 13:00:00-05:00'],
categories=[2011-01-01 09:00:00-05:00, 2011-01-01 10:00:00-05:00, 2011-01-01 11:00:00-05:00, 2011-01-01 12:00:00-05:00, 2011-01-01 13:00:00-05:00], ordered=False, dtype='category')""" # noqa
- self.assertEqual(repr(i), exp)
+ assert repr(i) == exp
def test_categorical_index_repr_datetime_ordered(self):
idx = pd.date_range('2011-01-01 09:00', freq='H', periods=5)
@@ -2641,7 +2641,7 @@ def test_categorical_index_repr_datetime_ordered(self):
'2011-01-01 13:00:00'],
categories=[2011-01-01 09:00:00, 2011-01-01 10:00:00, 2011-01-01 11:00:00, 2011-01-01 12:00:00, 2011-01-01 13:00:00], ordered=True, dtype='category')""" # noqa
- self.assertEqual(repr(i), exp)
+ assert repr(i) == exp
idx = pd.date_range('2011-01-01 09:00', freq='H', periods=5,
tz='US/Eastern')
@@ -2651,7 +2651,7 @@ def test_categorical_index_repr_datetime_ordered(self):
'2011-01-01 13:00:00-05:00'],
categories=[2011-01-01 09:00:00-05:00, 2011-01-01 10:00:00-05:00, 2011-01-01 11:00:00-05:00, 2011-01-01 12:00:00-05:00, 2011-01-01 13:00:00-05:00], ordered=True, dtype='category')""" # noqa
- self.assertEqual(repr(i), exp)
+ assert repr(i) == exp
i = pd.CategoricalIndex(pd.Categorical(idx.append(idx), ordered=True))
exp = """CategoricalIndex(['2011-01-01 09:00:00-05:00', '2011-01-01 10:00:00-05:00',
@@ -2661,24 +2661,24 @@ def test_categorical_index_repr_datetime_ordered(self):
'2011-01-01 12:00:00-05:00', '2011-01-01 13:00:00-05:00'],
categories=[2011-01-01 09:00:00-05:00, 2011-01-01 10:00:00-05:00, 2011-01-01 11:00:00-05:00, 2011-01-01 12:00:00-05:00, 2011-01-01 13:00:00-05:00], ordered=True, dtype='category')""" # noqa
- self.assertEqual(repr(i), exp)
+ assert repr(i) == exp
def test_categorical_index_repr_period(self):
# test all length
idx = pd.period_range('2011-01-01 09:00', freq='H', periods=1)
i = pd.CategoricalIndex(pd.Categorical(idx))
exp = """CategoricalIndex(['2011-01-01 09:00'], categories=[2011-01-01 09:00], ordered=False, dtype='category')""" # noqa
- self.assertEqual(repr(i), exp)
+ assert repr(i) == exp
idx = pd.period_range('2011-01-01 09:00', freq='H', periods=2)
i = pd.CategoricalIndex(pd.Categorical(idx))
exp = """CategoricalIndex(['2011-01-01 09:00', '2011-01-01 10:00'], categories=[2011-01-01 09:00, 2011-01-01 10:00], ordered=False, dtype='category')""" # noqa
- self.assertEqual(repr(i), exp)
+ assert repr(i) == exp
idx = pd.period_range('2011-01-01 09:00', freq='H', periods=3)
i = pd.CategoricalIndex(pd.Categorical(idx))
exp = """CategoricalIndex(['2011-01-01 09:00', '2011-01-01 10:00', '2011-01-01 11:00'], categories=[2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00], ordered=False, dtype='category')""" # noqa
- self.assertEqual(repr(i), exp)
+ assert repr(i) == exp
idx = pd.period_range('2011-01-01 09:00', freq='H', periods=5)
i = pd.CategoricalIndex(pd.Categorical(idx))
@@ -2686,7 +2686,7 @@ def test_categorical_index_repr_period(self):
'2011-01-01 12:00', '2011-01-01 13:00'],
categories=[2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00, 2011-01-01 13:00], ordered=False, dtype='category')""" # noqa
- self.assertEqual(repr(i), exp)
+ assert repr(i) == exp
i = pd.CategoricalIndex(pd.Categorical(idx.append(idx)))
exp = """CategoricalIndex(['2011-01-01 09:00', '2011-01-01 10:00', '2011-01-01 11:00',
@@ -2695,12 +2695,12 @@ def test_categorical_index_repr_period(self):
'2011-01-01 13:00'],
categories=[2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00, 2011-01-01 13:00], ordered=False, dtype='category')""" # noqa
- self.assertEqual(repr(i), exp)
+ assert repr(i) == exp
idx = pd.period_range('2011-01', freq='M', periods=5)
i = pd.CategoricalIndex(pd.Categorical(idx))
exp = """CategoricalIndex(['2011-01', '2011-02', '2011-03', '2011-04', '2011-05'], categories=[2011-01, 2011-02, 2011-03, 2011-04, 2011-05], ordered=False, dtype='category')""" # noqa
- self.assertEqual(repr(i), exp)
+ assert repr(i) == exp
def test_categorical_index_repr_period_ordered(self):
idx = pd.period_range('2011-01-01 09:00', freq='H', periods=5)
@@ -2709,18 +2709,18 @@ def test_categorical_index_repr_period_ordered(self):
'2011-01-01 12:00', '2011-01-01 13:00'],
categories=[2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00, 2011-01-01 13:00], ordered=True, dtype='category')""" # noqa
- self.assertEqual(repr(i), exp)
+ assert repr(i) == exp
idx = pd.period_range('2011-01', freq='M', periods=5)
i = pd.CategoricalIndex(pd.Categorical(idx, ordered=True))
exp = """CategoricalIndex(['2011-01', '2011-02', '2011-03', '2011-04', '2011-05'], categories=[2011-01, 2011-02, 2011-03, 2011-04, 2011-05], ordered=True, dtype='category')""" # noqa
- self.assertEqual(repr(i), exp)
+ assert repr(i) == exp
def test_categorical_index_repr_timedelta(self):
idx = pd.timedelta_range('1 days', periods=5)
i = pd.CategoricalIndex(pd.Categorical(idx))
exp = """CategoricalIndex(['1 days', '2 days', '3 days', '4 days', '5 days'], categories=[1 days 00:00:00, 2 days 00:00:00, 3 days 00:00:00, 4 days 00:00:00, 5 days 00:00:00], ordered=False, dtype='category')""" # noqa
- self.assertEqual(repr(i), exp)
+ assert repr(i) == exp
idx = pd.timedelta_range('1 hours', periods=10)
i = pd.CategoricalIndex(pd.Categorical(idx))
@@ -2730,13 +2730,13 @@ def test_categorical_index_repr_timedelta(self):
'9 days 01:00:00'],
categories=[0 days 01:00:00, 1 days 01:00:00, 2 days 01:00:00, 3 days 01:00:00, 4 days 01:00:00, 5 days 01:00:00, 6 days 01:00:00, 7 days 01:00:00, ...], ordered=False, dtype='category')""" # noqa
- self.assertEqual(repr(i), exp)
+ assert repr(i) == exp
def test_categorical_index_repr_timedelta_ordered(self):
idx = pd.timedelta_range('1 days', periods=5)
i = pd.CategoricalIndex(pd.Categorical(idx, ordered=True))
exp = """CategoricalIndex(['1 days', '2 days', '3 days', '4 days', '5 days'], categories=[1 days 00:00:00, 2 days 00:00:00, 3 days 00:00:00, 4 days 00:00:00, 5 days 00:00:00], ordered=True, dtype='category')""" # noqa
- self.assertEqual(repr(i), exp)
+ assert repr(i) == exp
idx = pd.timedelta_range('1 hours', periods=10)
i = pd.CategoricalIndex(pd.Categorical(idx, ordered=True))
@@ -2746,7 +2746,7 @@ def test_categorical_index_repr_timedelta_ordered(self):
'9 days 01:00:00'],
categories=[0 days 01:00:00, 1 days 01:00:00, 2 days 01:00:00, 3 days 01:00:00, 4 days 01:00:00, 5 days 01:00:00, 6 days 01:00:00, 7 days 01:00:00, ...], ordered=True, dtype='category')""" # noqa
- self.assertEqual(repr(i), exp)
+ assert repr(i) == exp
def test_categorical_frame(self):
# normal DataFrame
@@ -2762,7 +2762,7 @@ def test_categorical_frame(self):
4 2011-01-01 13:00:00-05:00 2011-05"""
df = pd.DataFrame({'dt': pd.Categorical(dt), 'p': pd.Categorical(p)})
- self.assertEqual(repr(df), exp)
+ assert repr(df) == exp
def test_info(self):
@@ -2800,15 +2800,15 @@ def test_min_max(self):
cat = Series(Categorical(["a", "b", "c", "d"], ordered=True))
_min = cat.min()
_max = cat.max()
- self.assertEqual(_min, "a")
- self.assertEqual(_max, "d")
+ assert _min == "a"
+ assert _max == "d"
cat = Series(Categorical(["a", "b", "c", "d"], categories=[
'd', 'c', 'b', 'a'], ordered=True))
_min = cat.min()
_max = cat.max()
- self.assertEqual(_min, "d")
- self.assertEqual(_max, "a")
+ assert _min == "d"
+ assert _max == "a"
cat = Series(Categorical(
[np.nan, "b", "c", np.nan], categories=['d', 'c', 'b', 'a'
@@ -2816,14 +2816,14 @@ def test_min_max(self):
_min = cat.min()
_max = cat.max()
assert np.isnan(_min)
- self.assertEqual(_max, "b")
+ assert _max == "b"
cat = Series(Categorical(
[np.nan, 1, 2, np.nan], categories=[5, 4, 3, 2, 1], ordered=True))
_min = cat.min()
_max = cat.max()
assert np.isnan(_min)
- self.assertEqual(_max, 1)
+ assert _max == 1
def test_mode(self):
s = Series(Categorical([1, 1, 2, 4, 5, 5, 5],
@@ -3050,7 +3050,7 @@ def test_count(self):
s = Series(Categorical([np.nan, 1, 2, np.nan],
categories=[5, 4, 3, 2, 1], ordered=True))
result = s.count()
- self.assertEqual(result, 2)
+ assert result == 2
def test_sort_values(self):
@@ -3099,13 +3099,13 @@ def test_sort_values(self):
res = df.sort_values(by=["string"], ascending=False)
exp = np.array(["d", "c", "b", "a"], dtype=np.object_)
tm.assert_numpy_array_equal(res["sort"].values.__array__(), exp)
- self.assertEqual(res["sort"].dtype, "category")
+ assert res["sort"].dtype == "category"
res = df.sort_values(by=["sort"], ascending=False)
exp = df.sort_values(by=["string"], ascending=True)
tm.assert_series_equal(res["values"], exp["values"])
- self.assertEqual(res["sort"].dtype, "category")
- self.assertEqual(res["unsort"].dtype, "category")
+ assert res["sort"].dtype == "category"
+ assert res["unsort"].dtype == "category"
# unordered cat, but we allow this
df.sort_values(by=["unsort"], ascending=False)
@@ -3201,7 +3201,7 @@ def test_slicing_and_getting_ops(self):
# single value
res_val = df.iloc[2, 0]
- self.assertEqual(res_val, exp_val)
+ assert res_val == exp_val
# loc
# frame
@@ -3221,7 +3221,7 @@ def test_slicing_and_getting_ops(self):
# single value
res_val = df.loc["j", "cats"]
- self.assertEqual(res_val, exp_val)
+ assert res_val == exp_val
# ix
# frame
@@ -3242,15 +3242,15 @@ def test_slicing_and_getting_ops(self):
# single value
res_val = df.loc["j", df.columns[0]]
- self.assertEqual(res_val, exp_val)
+ assert res_val == exp_val
# iat
res_val = df.iat[2, 0]
- self.assertEqual(res_val, exp_val)
+ assert res_val == exp_val
# at
res_val = df.at["j", "cats"]
- self.assertEqual(res_val, exp_val)
+ assert res_val == exp_val
# fancy indexing
exp_fancy = df.iloc[[2]]
@@ -3262,7 +3262,7 @@ def test_slicing_and_getting_ops(self):
# get_value
res_val = df.get_value("j", "cats")
- self.assertEqual(res_val, exp_val)
+ assert res_val == exp_val
# i : int, slice, or sequence of integers
res_row = df.iloc[2]
diff --git a/pandas/tests/test_config.py b/pandas/tests/test_config.py
index ad5418f4a4a29..ba055b105dc41 100644
--- a/pandas/tests/test_config.py
+++ b/pandas/tests/test_config.py
@@ -111,9 +111,9 @@ def test_case_insensitive(self):
self.cf.register_option('KanBAN', 1, 'doc')
assert 'doc' in self.cf.describe_option('kanbaN', _print_desc=False)
- self.assertEqual(self.cf.get_option('kanBaN'), 1)
+ assert self.cf.get_option('kanBaN') == 1
self.cf.set_option('KanBan', 2)
- self.assertEqual(self.cf.get_option('kAnBaN'), 2)
+ assert self.cf.get_option('kAnBaN') == 2
# gets of non-existent keys fail
pytest.raises(KeyError, self.cf.get_option, 'no_such_option')
@@ -127,8 +127,8 @@ def test_get_option(self):
self.cf.register_option('b.b', None, 'doc2')
# gets of existing keys succeed
- self.assertEqual(self.cf.get_option('a'), 1)
- self.assertEqual(self.cf.get_option('b.c'), 'hullo')
+ assert self.cf.get_option('a') == 1
+ assert self.cf.get_option('b.c') == 'hullo'
assert self.cf.get_option('b.b') is None
# gets of non-existent keys fail
@@ -139,17 +139,17 @@ def test_set_option(self):
self.cf.register_option('b.c', 'hullo', 'doc2')
self.cf.register_option('b.b', None, 'doc2')
- self.assertEqual(self.cf.get_option('a'), 1)
- self.assertEqual(self.cf.get_option('b.c'), 'hullo')
+ assert self.cf.get_option('a') == 1
+ assert self.cf.get_option('b.c') == 'hullo'
assert self.cf.get_option('b.b') is None
self.cf.set_option('a', 2)
self.cf.set_option('b.c', 'wurld')
self.cf.set_option('b.b', 1.1)
- self.assertEqual(self.cf.get_option('a'), 2)
- self.assertEqual(self.cf.get_option('b.c'), 'wurld')
- self.assertEqual(self.cf.get_option('b.b'), 1.1)
+ assert self.cf.get_option('a') == 2
+ assert self.cf.get_option('b.c') == 'wurld'
+ assert self.cf.get_option('b.b') == 1.1
pytest.raises(KeyError, self.cf.set_option, 'no.such.key', None)
@@ -167,15 +167,15 @@ def test_set_option_multiple(self):
self.cf.register_option('b.c', 'hullo', 'doc2')
self.cf.register_option('b.b', None, 'doc2')
- self.assertEqual(self.cf.get_option('a'), 1)
- self.assertEqual(self.cf.get_option('b.c'), 'hullo')
+ assert self.cf.get_option('a') == 1
+ assert self.cf.get_option('b.c') == 'hullo'
assert self.cf.get_option('b.b') is None
self.cf.set_option('a', '2', 'b.c', None, 'b.b', 10.0)
- self.assertEqual(self.cf.get_option('a'), '2')
+ assert self.cf.get_option('a') == '2'
assert self.cf.get_option('b.c') is None
- self.assertEqual(self.cf.get_option('b.b'), 10.0)
+ assert self.cf.get_option('b.b') == 10.0
def test_validation(self):
self.cf.register_option('a', 1, 'doc', validator=self.cf.is_int)
@@ -203,36 +203,36 @@ def test_reset_option(self):
self.cf.register_option('a', 1, 'doc', validator=self.cf.is_int)
self.cf.register_option('b.c', 'hullo', 'doc2',
validator=self.cf.is_str)
- self.assertEqual(self.cf.get_option('a'), 1)
- self.assertEqual(self.cf.get_option('b.c'), 'hullo')
+ assert self.cf.get_option('a') == 1
+ assert self.cf.get_option('b.c') == 'hullo'
self.cf.set_option('a', 2)
self.cf.set_option('b.c', 'wurld')
- self.assertEqual(self.cf.get_option('a'), 2)
- self.assertEqual(self.cf.get_option('b.c'), 'wurld')
+ assert self.cf.get_option('a') == 2
+ assert self.cf.get_option('b.c') == 'wurld'
self.cf.reset_option('a')
- self.assertEqual(self.cf.get_option('a'), 1)
- self.assertEqual(self.cf.get_option('b.c'), 'wurld')
+ assert self.cf.get_option('a') == 1
+ assert self.cf.get_option('b.c') == 'wurld'
self.cf.reset_option('b.c')
- self.assertEqual(self.cf.get_option('a'), 1)
- self.assertEqual(self.cf.get_option('b.c'), 'hullo')
+ assert self.cf.get_option('a') == 1
+ assert self.cf.get_option('b.c') == 'hullo'
def test_reset_option_all(self):
self.cf.register_option('a', 1, 'doc', validator=self.cf.is_int)
self.cf.register_option('b.c', 'hullo', 'doc2',
validator=self.cf.is_str)
- self.assertEqual(self.cf.get_option('a'), 1)
- self.assertEqual(self.cf.get_option('b.c'), 'hullo')
+ assert self.cf.get_option('a') == 1
+ assert self.cf.get_option('b.c') == 'hullo'
self.cf.set_option('a', 2)
self.cf.set_option('b.c', 'wurld')
- self.assertEqual(self.cf.get_option('a'), 2)
- self.assertEqual(self.cf.get_option('b.c'), 'wurld')
+ assert self.cf.get_option('a') == 2
+ assert self.cf.get_option('b.c') == 'wurld'
self.cf.reset_option("all")
- self.assertEqual(self.cf.get_option('a'), 1)
- self.assertEqual(self.cf.get_option('b.c'), 'hullo')
+ assert self.cf.get_option('a') == 1
+ assert self.cf.get_option('b.c') == 'hullo'
def test_deprecate_option(self):
# we can deprecate non-existent options
@@ -248,7 +248,7 @@ def test_deprecate_option(self):
else:
self.fail("Nonexistent option didn't raise KeyError")
- self.assertEqual(len(w), 1) # should have raised one warning
+ assert len(w) == 1 # should have raised one warning
assert 'deprecated' in str(w[-1]) # we get the default message
self.cf.register_option('a', 1, 'doc', validator=self.cf.is_int)
@@ -260,7 +260,7 @@ def test_deprecate_option(self):
warnings.simplefilter('always')
self.cf.get_option('a')
- self.assertEqual(len(w), 1) # should have raised one warning
+ assert len(w) == 1 # should have raised one warning
assert 'eprecated' in str(w[-1]) # we get the default message
assert 'nifty_ver' in str(w[-1]) # with the removal_ver quoted
@@ -272,51 +272,51 @@ def test_deprecate_option(self):
warnings.simplefilter('always')
self.cf.get_option('b.c')
- self.assertEqual(len(w), 1) # should have raised one warning
+ assert len(w) == 1 # should have raised one warning
assert 'zounds!' in str(w[-1]) # we get the custom message
# test rerouting keys
self.cf.register_option('d.a', 'foo', 'doc2')
self.cf.register_option('d.dep', 'bar', 'doc2')
- self.assertEqual(self.cf.get_option('d.a'), 'foo')
- self.assertEqual(self.cf.get_option('d.dep'), 'bar')
+ assert self.cf.get_option('d.a') == 'foo'
+ assert self.cf.get_option('d.dep') == 'bar'
self.cf.deprecate_option('d.dep', rkey='d.a') # reroute d.dep to d.a
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter('always')
- self.assertEqual(self.cf.get_option('d.dep'), 'foo')
+ assert self.cf.get_option('d.dep') == 'foo'
- self.assertEqual(len(w), 1) # should have raised one warning
+ assert len(w) == 1 # should have raised one warning
assert 'eprecated' in str(w[-1]) # we get the custom message
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter('always')
self.cf.set_option('d.dep', 'baz') # should overwrite "d.a"
- self.assertEqual(len(w), 1) # should have raised one warning
+ assert len(w) == 1 # should have raised one warning
assert 'eprecated' in str(w[-1]) # we get the custom message
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter('always')
- self.assertEqual(self.cf.get_option('d.dep'), 'baz')
+ assert self.cf.get_option('d.dep') == 'baz'
- self.assertEqual(len(w), 1) # should have raised one warning
+ assert len(w) == 1 # should have raised one warning
assert 'eprecated' in str(w[-1]) # we get the custom message
def test_config_prefix(self):
with self.cf.config_prefix("base"):
self.cf.register_option('a', 1, "doc1")
self.cf.register_option('b', 2, "doc2")
- self.assertEqual(self.cf.get_option('a'), 1)
- self.assertEqual(self.cf.get_option('b'), 2)
+ assert self.cf.get_option('a') == 1
+ assert self.cf.get_option('b') == 2
self.cf.set_option('a', 3)
self.cf.set_option('b', 4)
- self.assertEqual(self.cf.get_option('a'), 3)
- self.assertEqual(self.cf.get_option('b'), 4)
+ assert self.cf.get_option('a') == 3
+ assert self.cf.get_option('b') == 4
- self.assertEqual(self.cf.get_option('base.a'), 3)
- self.assertEqual(self.cf.get_option('base.b'), 4)
+ assert self.cf.get_option('base.a') == 3
+ assert self.cf.get_option('base.b') == 4
assert 'doc1' in self.cf.describe_option('base.a', _print_desc=False)
assert 'doc2' in self.cf.describe_option('base.b', _print_desc=False)
@@ -324,8 +324,8 @@ def test_config_prefix(self):
self.cf.reset_option('base.b')
with self.cf.config_prefix("base"):
- self.assertEqual(self.cf.get_option('a'), 1)
- self.assertEqual(self.cf.get_option('b'), 2)
+ assert self.cf.get_option('a') == 1
+ assert self.cf.get_option('b') == 2
def test_callback(self):
k = [None]
@@ -340,21 +340,21 @@ def callback(key):
del k[-1], v[-1]
self.cf.set_option("d.a", "fooz")
- self.assertEqual(k[-1], "d.a")
- self.assertEqual(v[-1], "fooz")
+ assert k[-1] == "d.a"
+ assert v[-1] == "fooz"
del k[-1], v[-1]
self.cf.set_option("d.b", "boo")
- self.assertEqual(k[-1], "d.b")
- self.assertEqual(v[-1], "boo")
+ assert k[-1] == "d.b"
+ assert v[-1] == "boo"
del k[-1], v[-1]
self.cf.reset_option("d.b")
- self.assertEqual(k[-1], "d.b")
+ assert k[-1] == "d.b"
def test_set_ContextManager(self):
def eq(val):
- self.assertEqual(self.cf.get_option("a"), val)
+ assert self.cf.get_option("a") == val
self.cf.register_option('a', 0)
eq(0)
@@ -384,22 +384,22 @@ def f3(key):
self.cf.register_option('c', 0, cb=f3)
options = self.cf.options
- self.assertEqual(options.a, 0)
+ assert options.a == 0
with self.cf.option_context("a", 15):
- self.assertEqual(options.a, 15)
+ assert options.a == 15
options.a = 500
- self.assertEqual(self.cf.get_option("a"), 500)
+ assert self.cf.get_option("a") == 500
self.cf.reset_option("a")
- self.assertEqual(options.a, self.cf.get_option("a", 0))
+ assert options.a == self.cf.get_option("a", 0)
pytest.raises(KeyError, f)
pytest.raises(KeyError, f2)
# make sure callback kicks when using this form of setting
options.c = 1
- self.assertEqual(len(holder), 1)
+ assert len(holder) == 1
def test_option_context_scope(self):
# Ensure that creating a context does not affect the existing
@@ -414,11 +414,11 @@ def test_option_context_scope(self):
# Ensure creating contexts didn't affect the current context.
ctx = self.cf.option_context(option_name, context_value)
- self.assertEqual(self.cf.get_option(option_name), original_value)
+ assert self.cf.get_option(option_name) == original_value
# Ensure the correct value is available inside the context.
with ctx:
- self.assertEqual(self.cf.get_option(option_name), context_value)
+ assert self.cf.get_option(option_name) == context_value
# Ensure the current context is reset
- self.assertEqual(self.cf.get_option(option_name), original_value)
+ assert self.cf.get_option(option_name) == original_value
diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py
index e4ed194b75bcd..5b2057f830102 100644
--- a/pandas/tests/test_multilevel.py
+++ b/pandas/tests/test_multilevel.py
@@ -259,7 +259,7 @@ def test_series_getitem(self):
result = s[2000, 3, 10]
expected = s[49]
- self.assertEqual(result, expected)
+ assert result == expected
# fancy
expected = s.reindex(s.index[49:51])
@@ -404,9 +404,9 @@ def test_frame_setitem_multi_column(self):
sliced_b1 = df['B', '1']
tm.assert_series_equal(sliced_a1, sliced_b1, check_names=False)
tm.assert_series_equal(sliced_a2, sliced_b1, check_names=False)
- self.assertEqual(sliced_a1.name, ('A', '1'))
- self.assertEqual(sliced_a2.name, ('A', '2'))
- self.assertEqual(sliced_b1.name, ('B', '1'))
+ assert sliced_a1.name == ('A', '1')
+ assert sliced_a2.name == ('A', '2')
+ assert sliced_b1.name == ('B', '1')
def test_getitem_tuple_plus_slice(self):
# GH #671
@@ -557,7 +557,7 @@ def test_xs_level0(self):
result = df.xs('a', level=0)
expected = df.xs('a')
- self.assertEqual(len(result), 2)
+ assert len(result) == 2
tm.assert_frame_equal(result, expected)
def test_xs_level_series(self):
@@ -667,19 +667,19 @@ def test_setitem_change_dtype(self):
def test_frame_setitem_ix(self):
self.frame.loc[('bar', 'two'), 'B'] = 5
- self.assertEqual(self.frame.loc[('bar', 'two'), 'B'], 5)
+ assert self.frame.loc[('bar', 'two'), 'B'] == 5
# with integer labels
df = self.frame.copy()
df.columns = lrange(3)
df.loc[('bar', 'two'), 1] = 7
- self.assertEqual(df.loc[('bar', 'two'), 1], 7)
+ assert df.loc[('bar', 'two'), 1] == 7
with catch_warnings(record=True):
df = self.frame.copy()
df.columns = lrange(3)
df.ix[('bar', 'two'), 1] = 7
- self.assertEqual(df.loc[('bar', 'two'), 1], 7)
+ assert df.loc[('bar', 'two'), 1] == 7
def test_fancy_slice_partial(self):
result = self.frame.loc['bar':'baz']
@@ -724,12 +724,11 @@ def test_delevel_infer_dtype(self):
def test_reset_index_with_drop(self):
deleveled = self.ymd.reset_index(drop=True)
- self.assertEqual(len(deleveled.columns), len(self.ymd.columns))
+ assert len(deleveled.columns) == len(self.ymd.columns)
deleveled = self.series.reset_index()
assert isinstance(deleveled, DataFrame)
- self.assertEqual(len(deleveled.columns),
- len(self.series.index.levels) + 1)
+ assert len(deleveled.columns) == len(self.series.index.levels) + 1
deleveled = self.series.reset_index(drop=True)
assert isinstance(deleveled, Series)
@@ -942,7 +941,7 @@ def test_stack_mixed_dtype(self):
result = df['foo'].stack().sort_index()
tm.assert_series_equal(stacked['foo'], result, check_names=False)
assert result.name is None
- self.assertEqual(stacked['bar'].dtype, np.float_)
+ assert stacked['bar'].dtype == np.float_
def test_unstack_bug(self):
df = DataFrame({'state': ['naive', 'naive', 'naive', 'activ', 'activ',
@@ -961,11 +960,11 @@ def test_unstack_bug(self):
def test_stack_unstack_preserve_names(self):
unstacked = self.frame.unstack()
- self.assertEqual(unstacked.index.name, 'first')
- self.assertEqual(unstacked.columns.names, ['exp', 'second'])
+ assert unstacked.index.name == 'first'
+ assert unstacked.columns.names == ['exp', 'second']
restacked = unstacked.stack()
- self.assertEqual(restacked.index.names, self.frame.index.names)
+ assert restacked.index.names == self.frame.index.names
def test_unstack_level_name(self):
result = self.frame.unstack('second')
@@ -986,7 +985,7 @@ def test_stack_unstack_multiple(self):
unstacked = self.ymd.unstack(['year', 'month'])
expected = self.ymd.unstack('year').unstack('month')
tm.assert_frame_equal(unstacked, expected)
- self.assertEqual(unstacked.columns.names, expected.columns.names)
+ assert unstacked.columns.names == expected.columns.names
# series
s = self.ymd['A']
@@ -998,7 +997,7 @@ def test_stack_unstack_multiple(self):
restacked = restacked.sort_index(level=0)
tm.assert_frame_equal(restacked, self.ymd)
- self.assertEqual(restacked.index.names, self.ymd.index.names)
+ assert restacked.index.names == self.ymd.index.names
# GH #451
unstacked = self.ymd.unstack([1, 2])
@@ -1191,7 +1190,7 @@ def test_unstack_unobserved_keys(self):
df = DataFrame(np.random.randn(4, 2), index=index)
result = df.unstack()
- self.assertEqual(len(result.columns), 4)
+ assert len(result.columns) == 4
recons = result.stack()
tm.assert_frame_equal(recons, df)
@@ -1351,12 +1350,12 @@ def test_count(self):
result = series.count(level='b')
expect = self.series.count(level=1)
tm.assert_series_equal(result, expect, check_names=False)
- self.assertEqual(result.index.name, 'b')
+ assert result.index.name == 'b'
result = series.count(level='a')
expect = self.series.count(level=0)
tm.assert_series_equal(result, expect, check_names=False)
- self.assertEqual(result.index.name, 'a')
+ assert result.index.name == 'a'
pytest.raises(KeyError, series.count, 'x')
pytest.raises(KeyError, frame.count, level='x')
@@ -1465,7 +1464,7 @@ def test_groupby_multilevel(self):
# TODO groupby with level_values drops names
tm.assert_frame_equal(result, expected, check_names=False)
- self.assertEqual(result.index.names, self.ymd.index.names[:2])
+ assert result.index.names == self.ymd.index.names[:2]
result2 = self.ymd.groupby(level=self.ymd.index.names[:2]).mean()
tm.assert_frame_equal(result, result2)
@@ -1483,13 +1482,13 @@ def test_multilevel_consolidate(self):
def test_ix_preserve_names(self):
result = self.ymd.loc[2000]
result2 = self.ymd['A'].loc[2000]
- self.assertEqual(result.index.names, self.ymd.index.names[1:])
- self.assertEqual(result2.index.names, self.ymd.index.names[1:])
+ assert result.index.names == self.ymd.index.names[1:]
+ assert result2.index.names == self.ymd.index.names[1:]
result = self.ymd.loc[2000, 2]
result2 = self.ymd['A'].loc[2000, 2]
- self.assertEqual(result.index.name, self.ymd.index.names[2])
- self.assertEqual(result2.index.name, self.ymd.index.names[2])
+ assert result.index.name == self.ymd.index.names[2]
+ assert result2.index.name == self.ymd.index.names[2]
def test_partial_set(self):
# GH #397
@@ -1509,7 +1508,7 @@ def test_partial_set(self):
# this works...for now
df['A'].iloc[14] = 5
- self.assertEqual(df['A'][14], 5)
+ assert df['A'][14] == 5
def test_unstack_preserve_types(self):
# GH #403
@@ -1517,9 +1516,9 @@ def test_unstack_preserve_types(self):
self.ymd['F'] = 2
unstacked = self.ymd.unstack('month')
- self.assertEqual(unstacked['A', 1].dtype, np.float64)
- self.assertEqual(unstacked['E', 1].dtype, np.object_)
- self.assertEqual(unstacked['F', 1].dtype, np.float64)
+ assert unstacked['A', 1].dtype == np.float64
+ assert unstacked['E', 1].dtype == np.object_
+ assert unstacked['F', 1].dtype == np.float64
def test_unstack_group_index_overflow(self):
labels = np.tile(np.arange(500), 2)
@@ -1530,7 +1529,7 @@ def test_unstack_group_index_overflow(self):
s = Series(np.arange(1000), index=index)
result = s.unstack()
- self.assertEqual(result.shape, (500, 2))
+ assert result.shape == (500, 2)
# test roundtrip
stacked = result.stack()
@@ -1542,7 +1541,7 @@ def test_unstack_group_index_overflow(self):
s = Series(np.arange(1000), index=index)
result = s.unstack(0)
- self.assertEqual(result.shape, (500, 2))
+ assert result.shape == (500, 2)
# put it in middle
index = MultiIndex(levels=[level] * 4 + [[0, 1]] + [level] * 4,
@@ -1551,7 +1550,7 @@ def test_unstack_group_index_overflow(self):
s = Series(np.arange(1000), index=index)
result = s.unstack(4)
- self.assertEqual(result.shape, (500, 2))
+ assert result.shape == (500, 2)
def test_getitem_lowerdim_corner(self):
pytest.raises(KeyError, self.frame.loc.__getitem__,
@@ -1559,7 +1558,7 @@ def test_getitem_lowerdim_corner(self):
# in theory should be inserting in a sorted space????
self.frame.loc[('bar', 'three'), 'B'] = 0
- self.assertEqual(self.frame.sort_index().loc[('bar', 'three'), 'B'], 0)
+ assert self.frame.sort_index().loc[('bar', 'three'), 'B'] == 0
# ---------------------------------------------------------------------
# AMBIGUOUS CASES!
@@ -1659,12 +1658,12 @@ def test_mixed_depth_get(self):
result = df['a']
expected = df['a', '', '']
tm.assert_series_equal(result, expected, check_names=False)
- self.assertEqual(result.name, 'a')
+ assert result.name == 'a'
result = df['routine1', 'result1']
expected = df['routine1', 'result1', '']
tm.assert_series_equal(result, expected, check_names=False)
- self.assertEqual(result.name, ('routine1', 'result1'))
+ assert result.name == ('routine1', 'result1')
def test_mixed_depth_insert(self):
arrays = [['a', 'top', 'top', 'routine1', 'routine1', 'routine2'],
@@ -1747,7 +1746,7 @@ def test_mixed_depth_pop(self):
expected = df2.pop(('a', '', ''))
tm.assert_series_equal(expected, result, check_names=False)
tm.assert_frame_equal(df1, df2)
- self.assertEqual(result.name, 'a')
+ assert result.name == 'a'
expected = df1['top']
df1 = df1.drop(['top'], axis=1)
@@ -1845,7 +1844,7 @@ def test_drop_preserve_names(self):
df = DataFrame(np.random.randn(6, 3), index=index)
result = df.drop([(0, 2)])
- self.assertEqual(result.index.names, ('one', 'two'))
+ assert result.index.names == ('one', 'two')
def test_unicode_repr_issues(self):
levels = [Index([u('a/\u03c3'), u('b/\u03c3'), u('c/\u03c3')]),
@@ -1944,9 +1943,9 @@ def test_indexing_over_hashtable_size_cutoff(self):
MultiIndex.from_arrays((["a"] * n, np.arange(n))))
# hai it works!
- self.assertEqual(s[("a", 5)], 5)
- self.assertEqual(s[("a", 6)], 6)
- self.assertEqual(s[("a", 7)], 7)
+ assert s[("a", 5)] == 5
+ assert s[("a", 6)] == 6
+ assert s[("a", 7)] == 7
_index._SIZE_CUTOFF = old_cutoff
@@ -1998,7 +1997,7 @@ def test_duplicate_groupby_issues(self):
s = Series(dt, index=idx)
result = s.groupby(s.index).first()
- self.assertEqual(len(result), 3)
+ assert len(result) == 3
def test_duplicate_mi(self):
# GH 4516
@@ -2353,7 +2352,7 @@ class TestSorted(Base, tm.TestCase):
def test_sort_index_preserve_levels(self):
result = self.frame.sort_index()
- self.assertEqual(result.index.names, self.frame.index.names)
+ assert result.index.names == self.frame.index.names
def test_sorting_repr_8017(self):
@@ -2375,7 +2374,7 @@ def test_sorting_repr_8017(self):
# check that the repr is good
# make sure that we have a correct sparsified repr
# e.g. only 1 header of read
- self.assertEqual(str(df2).splitlines()[0].split(), ['red'])
+ assert str(df2).splitlines()[0].split() == ['red']
# GH 8017
# sorting fails after columns added
@@ -2406,7 +2405,7 @@ def test_sort_index_level(self):
a_sorted = self.frame['A'].sort_index(level=0)
# preserve names
- self.assertEqual(a_sorted.index.names, self.frame.index.names)
+ assert a_sorted.index.names == self.frame.index.names
# inplace
rs = self.frame.copy()
@@ -2469,7 +2468,7 @@ def test_is_lexsorted(self):
index = MultiIndex(levels=levels,
labels=[[0, 0, 1, 0, 1, 1], [0, 1, 0, 2, 2, 1]])
assert not index.is_lexsorted()
- self.assertEqual(index.lexsort_depth, 0)
+ assert index.lexsort_depth == 0
def test_getitem_multilevel_index_tuple_not_sorted(self):
index_columns = list("abc")
diff --git a/pandas/tests/test_nanops.py b/pandas/tests/test_nanops.py
index 92d7f29366c69..35d0198ae06a9 100644
--- a/pandas/tests/test_nanops.py
+++ b/pandas/tests/test_nanops.py
@@ -346,8 +346,8 @@ def test_nanmean_overflow(self):
s = Series(a, index=range(500), dtype=np.int64)
result = s.mean()
np_result = s.values.mean()
- self.assertEqual(result, a)
- self.assertEqual(result, np_result)
+ assert result == a
+ assert result == np_result
assert result.dtype == np.float64
def test_returned_dtype(self):
@@ -746,12 +746,13 @@ class TestEnsureNumeric(tm.TestCase):
def test_numeric_values(self):
# Test integer
- self.assertEqual(nanops._ensure_numeric(1), 1, 'Failed for int')
+ assert nanops._ensure_numeric(1) == 1
+
# Test float
- self.assertEqual(nanops._ensure_numeric(1.1), 1.1, 'Failed for float')
+ assert nanops._ensure_numeric(1.1) == 1.1
+
# Test complex
- self.assertEqual(nanops._ensure_numeric(1 + 2j), 1 + 2j,
- 'Failed for complex')
+ assert nanops._ensure_numeric(1 + 2j) == 1 + 2j
def test_ndarray(self):
# Test numeric ndarray
@@ -887,7 +888,7 @@ def test_nanstd_roundoff(self):
data = Series(766897346 * np.ones(10))
for ddof in range(3):
result = data.std(ddof=ddof)
- self.assertEqual(result, 0.0)
+ assert result == 0.0
@property
def prng(self):
@@ -908,7 +909,7 @@ def test_constant_series(self):
for val in [3075.2, 3075.3, 3075.5]:
data = val * np.ones(300)
skew = nanops.nanskew(data)
- self.assertEqual(skew, 0.0)
+ assert skew == 0.0
def test_all_finite(self):
alpha, beta = 0.3, 0.1
@@ -958,7 +959,7 @@ def test_constant_series(self):
for val in [3075.2, 3075.3, 3075.5]:
data = val * np.ones(300)
kurt = nanops.nankurt(data)
- self.assertEqual(kurt, 0.0)
+ assert kurt == 0.0
def test_all_finite(self):
alpha, beta = 0.3, 0.1
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
index c9894ad9a9acf..a692f6b26c61e 100644
--- a/pandas/tests/test_panel.py
+++ b/pandas/tests/test_panel.py
@@ -222,9 +222,9 @@ def test_set_axis(self):
assert self.panel.minor_axis is new_minor
def test_get_axis_number(self):
- self.assertEqual(self.panel._get_axis_number('items'), 0)
- self.assertEqual(self.panel._get_axis_number('major'), 1)
- self.assertEqual(self.panel._get_axis_number('minor'), 2)
+ assert self.panel._get_axis_number('items') == 0
+ assert self.panel._get_axis_number('major') == 1
+ assert self.panel._get_axis_number('minor') == 2
with tm.assert_raises_regex(ValueError, "No axis named foo"):
self.panel._get_axis_number('foo')
@@ -233,9 +233,9 @@ def test_get_axis_number(self):
self.panel.__ge__(self.panel, axis='foo')
def test_get_axis_name(self):
- self.assertEqual(self.panel._get_axis_name(0), 'items')
- self.assertEqual(self.panel._get_axis_name(1), 'major_axis')
- self.assertEqual(self.panel._get_axis_name(2), 'minor_axis')
+ assert self.panel._get_axis_name(0) == 'items'
+ assert self.panel._get_axis_name(1) == 'major_axis'
+ assert self.panel._get_axis_name(2) == 'minor_axis'
def test_get_plane_axes(self):
# what to do here?
@@ -303,8 +303,7 @@ def test_iteritems(self):
for k, v in self.panel.iteritems():
pass
- self.assertEqual(len(list(self.panel.iteritems())),
- len(self.panel.items))
+ assert len(list(self.panel.iteritems())) == len(self.panel.items)
def test_combineFrame(self):
with catch_warnings(record=True):
@@ -432,8 +431,8 @@ def test_abs(self):
expected = np.abs(s)
assert_series_equal(result, expected)
assert_series_equal(result2, expected)
- self.assertEqual(result.name, 'A')
- self.assertEqual(result2.name, 'A')
+ assert result.name == 'A'
+ assert result2.name == 'A'
class CheckIndexing(object):
@@ -497,16 +496,16 @@ def test_setitem(self):
# scalar
self.panel['ItemG'] = 1
self.panel['ItemE'] = True
- self.assertEqual(self.panel['ItemG'].values.dtype, np.int64)
- self.assertEqual(self.panel['ItemE'].values.dtype, np.bool_)
+ assert self.panel['ItemG'].values.dtype == np.int64
+ assert self.panel['ItemE'].values.dtype == np.bool_
# object dtype
self.panel['ItemQ'] = 'foo'
- self.assertEqual(self.panel['ItemQ'].values.dtype, np.object_)
+ assert self.panel['ItemQ'].values.dtype == np.object_
# boolean dtype
self.panel['ItemP'] = self.panel['ItemA'] > 0
- self.assertEqual(self.panel['ItemP'].values.dtype, np.bool_)
+ assert self.panel['ItemP'].values.dtype == np.bool_
pytest.raises(TypeError, self.panel.__setitem__, 'foo',
self.panel.loc[['ItemP']])
@@ -560,7 +559,7 @@ def test_major_xs(self):
result = xs['ItemA']
assert_series_equal(result, ref.xs(idx), check_names=False)
- self.assertEqual(result.name, 'ItemA')
+ assert result.name == 'ItemA'
# not contained
idx = self.panel.major_axis[0] - BDay()
@@ -570,8 +569,8 @@ def test_major_xs_mixed(self):
with catch_warnings(record=True):
self.panel['ItemD'] = 'foo'
xs = self.panel.major_xs(self.panel.major_axis[0])
- self.assertEqual(xs['ItemA'].dtype, np.float64)
- self.assertEqual(xs['ItemD'].dtype, np.object_)
+ assert xs['ItemA'].dtype == np.float64
+ assert xs['ItemD'].dtype == np.object_
def test_minor_xs(self):
with catch_warnings(record=True):
@@ -590,8 +589,8 @@ def test_minor_xs_mixed(self):
self.panel['ItemD'] = 'foo'
xs = self.panel.minor_xs('D')
- self.assertEqual(xs['ItemA'].dtype, np.float64)
- self.assertEqual(xs['ItemD'].dtype, np.object_)
+ assert xs['ItemA'].dtype == np.float64
+ assert xs['ItemD'].dtype == np.object_
def test_xs(self):
with catch_warnings(record=True):
@@ -985,16 +984,16 @@ def test_constructor_cast(self):
def test_constructor_empty_panel(self):
with catch_warnings(record=True):
empty = Panel()
- self.assertEqual(len(empty.items), 0)
- self.assertEqual(len(empty.major_axis), 0)
- self.assertEqual(len(empty.minor_axis), 0)
+ assert len(empty.items) == 0
+ assert len(empty.major_axis) == 0
+ assert len(empty.minor_axis) == 0
def test_constructor_observe_dtype(self):
with catch_warnings(record=True):
# GH #411
panel = Panel(items=lrange(3), major_axis=lrange(3),
minor_axis=lrange(3), dtype='O')
- self.assertEqual(panel.values.dtype, np.object_)
+ assert panel.values.dtype == np.object_
def test_constructor_dtypes(self):
with catch_warnings(record=True):
@@ -1002,7 +1001,7 @@ def test_constructor_dtypes(self):
def _check_dtype(panel, dtype):
for i in panel.items:
- self.assertEqual(panel[i].values.dtype.name, dtype)
+ assert panel[i].values.dtype.name == dtype
# only nan holding types allowed here
for dtype in ['float64', 'float32', 'object']:
@@ -1173,8 +1172,8 @@ def test_from_dict_mixed_orient(self):
panel = Panel.from_dict(data, orient='minor')
- self.assertEqual(panel['foo'].values.dtype, np.object_)
- self.assertEqual(panel['A'].values.dtype, np.float64)
+ assert panel['foo'].values.dtype == np.object_
+ assert panel['A'].values.dtype == np.float64
def test_constructor_error_msgs(self):
with catch_warnings(record=True):
@@ -1709,7 +1708,7 @@ def test_to_frame(self):
assert_panel_equal(unfiltered.to_panel(), self.panel)
# names
- self.assertEqual(unfiltered.index.names, ('major', 'minor'))
+ assert unfiltered.index.names == ('major', 'minor')
# unsorted, round trip
df = self.panel.to_frame(filter_observations=False)
@@ -1726,8 +1725,8 @@ def test_to_frame(self):
df.columns.name = 'baz'
rdf = df.to_panel().to_frame()
- self.assertEqual(rdf.index.names, df.index.names)
- self.assertEqual(rdf.columns.names, df.columns.names)
+ assert rdf.index.names == df.index.names
+ assert rdf.columns.names == df.columns.names
def test_to_frame_mixed(self):
with catch_warnings(record=True):
@@ -1737,7 +1736,7 @@ def test_to_frame_mixed(self):
lp = panel.to_frame()
wp = lp.to_panel()
- self.assertEqual(wp['bool'].values.dtype, np.bool_)
+ assert wp['bool'].values.dtype == np.bool_
# Previously, this was mutating the underlying
# index and changing its name
assert_frame_equal(wp['bool'], panel['bool'], check_names=False)
@@ -2591,18 +2590,16 @@ def test_axis_dummies(self):
from pandas.core.reshape.reshape import make_axis_dummies
minor_dummies = make_axis_dummies(self.panel, 'minor').astype(np.uint8)
- self.assertEqual(len(minor_dummies.columns),
- len(self.panel.index.levels[1]))
+ assert len(minor_dummies.columns) == len(self.panel.index.levels[1])
major_dummies = make_axis_dummies(self.panel, 'major').astype(np.uint8)
- self.assertEqual(len(major_dummies.columns),
- len(self.panel.index.levels[0]))
+ assert len(major_dummies.columns) == len(self.panel.index.levels[0])
mapping = {'A': 'one', 'B': 'one', 'C': 'two', 'D': 'two'}
transformed = make_axis_dummies(self.panel, 'minor',
transform=mapping.get).astype(np.uint8)
- self.assertEqual(len(transformed.columns), 2)
+ assert len(transformed.columns) == 2
tm.assert_index_equal(transformed.columns, Index(['one', 'two']))
# TODO: test correctness
@@ -2638,12 +2635,12 @@ def test_count(self):
major_count = self.panel.count(level=0)['ItemA']
labels = index.labels[0]
for i, idx in enumerate(index.levels[0]):
- self.assertEqual(major_count[i], (labels == i).sum())
+ assert major_count[i] == (labels == i).sum()
minor_count = self.panel.count(level=1)['ItemA']
labels = index.labels[1]
for i, idx in enumerate(index.levels[1]):
- self.assertEqual(minor_count[i], (labels == i).sum())
+ assert minor_count[i] == (labels == i).sum()
def test_join(self):
with catch_warnings(record=True):
@@ -2652,7 +2649,7 @@ def test_join(self):
joined = lp1.join(lp2)
- self.assertEqual(len(joined.columns), 3)
+ assert len(joined.columns) == 3
pytest.raises(Exception, lp1.join,
self.panel.filter(['ItemB', 'ItemC']))
@@ -2665,11 +2662,11 @@ def test_pivot(self):
np.array(['a', 'b', 'c', 'd', 'e']),
np.array([1, 2, 3, 5, 4.]))
df = pivot(one, two, three)
- self.assertEqual(df['a'][1], 1)
- self.assertEqual(df['b'][2], 2)
- self.assertEqual(df['c'][3], 3)
- self.assertEqual(df['d'][4], 5)
- self.assertEqual(df['e'][5], 4)
+ assert df['a'][1] == 1
+ assert df['b'][2] == 2
+ assert df['c'][3] == 3
+ assert df['d'][4] == 5
+ assert df['e'][5] == 4
assert_frame_equal(df, _slow_pivot(one, two, three))
# weird overlap, TODO: test?
diff --git a/pandas/tests/test_panel4d.py b/pandas/tests/test_panel4d.py
index 05ce239b9c5a3..f2a1414957d44 100644
--- a/pandas/tests/test_panel4d.py
+++ b/pandas/tests/test_panel4d.py
@@ -194,16 +194,16 @@ def test_set_axis(self):
assert self.panel4d.minor_axis is new_minor
def test_get_axis_number(self):
- self.assertEqual(self.panel4d._get_axis_number('labels'), 0)
- self.assertEqual(self.panel4d._get_axis_number('items'), 1)
- self.assertEqual(self.panel4d._get_axis_number('major'), 2)
- self.assertEqual(self.panel4d._get_axis_number('minor'), 3)
+ assert self.panel4d._get_axis_number('labels') == 0
+ assert self.panel4d._get_axis_number('items') == 1
+ assert self.panel4d._get_axis_number('major') == 2
+ assert self.panel4d._get_axis_number('minor') == 3
def test_get_axis_name(self):
- self.assertEqual(self.panel4d._get_axis_name(0), 'labels')
- self.assertEqual(self.panel4d._get_axis_name(1), 'items')
- self.assertEqual(self.panel4d._get_axis_name(2), 'major_axis')
- self.assertEqual(self.panel4d._get_axis_name(3), 'minor_axis')
+ assert self.panel4d._get_axis_name(0) == 'labels'
+ assert self.panel4d._get_axis_name(1) == 'items'
+ assert self.panel4d._get_axis_name(2) == 'major_axis'
+ assert self.panel4d._get_axis_name(3) == 'minor_axis'
def test_arith(self):
with catch_warnings(record=True):
@@ -234,8 +234,8 @@ def test_keys(self):
def test_iteritems(self):
"""Test panel4d.iteritems()"""
- self.assertEqual(len(list(self.panel4d.iteritems())),
- len(self.panel4d.labels))
+ assert (len(list(self.panel4d.iteritems())) ==
+ len(self.panel4d.labels))
def test_combinePanel4d(self):
with catch_warnings(record=True):
@@ -374,16 +374,16 @@ def test_setitem(self):
# scalar
self.panel4d['lG'] = 1
self.panel4d['lE'] = True
- self.assertEqual(self.panel4d['lG'].values.dtype, np.int64)
- self.assertEqual(self.panel4d['lE'].values.dtype, np.bool_)
+ assert self.panel4d['lG'].values.dtype == np.int64
+ assert self.panel4d['lE'].values.dtype == np.bool_
# object dtype
self.panel4d['lQ'] = 'foo'
- self.assertEqual(self.panel4d['lQ'].values.dtype, np.object_)
+ assert self.panel4d['lQ'].values.dtype == np.object_
# boolean dtype
self.panel4d['lP'] = self.panel4d['l1'] > 0
- self.assertEqual(self.panel4d['lP'].values.dtype, np.bool_)
+ assert self.panel4d['lP'].values.dtype == np.bool_
def test_setitem_by_indexer(self):
@@ -484,8 +484,8 @@ def test_major_xs_mixed(self):
self.panel4d['l4'] = 'foo'
with catch_warnings(record=True):
xs = self.panel4d.major_xs(self.panel4d.major_axis[0])
- self.assertEqual(xs['l1']['A'].dtype, np.float64)
- self.assertEqual(xs['l4']['A'].dtype, np.object_)
+ assert xs['l1']['A'].dtype == np.float64
+ assert xs['l4']['A'].dtype == np.object_
def test_minor_xs(self):
ref = self.panel4d['l1']['ItemA']
@@ -504,8 +504,8 @@ def test_minor_xs_mixed(self):
with catch_warnings(record=True):
xs = self.panel4d.minor_xs('D')
- self.assertEqual(xs['l1'].T['ItemA'].dtype, np.float64)
- self.assertEqual(xs['l4'].T['ItemA'].dtype, np.object_)
+ assert xs['l1'].T['ItemA'].dtype == np.float64
+ assert xs['l4'].T['ItemA'].dtype == np.object_
def test_xs(self):
l1 = self.panel4d.xs('l1', axis=0)
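Alongside the assertEqual migration, the panel hunks keep `pytest.raises(TypeError, fn, *args)` for expected errors. A stdlib-only analogue of that assertion, using unittest's context-manager form (a hypothetical helper, not code from the PR):

```python
import unittest

# first_item is an illustrative stand-in for any call expected
# to raise TypeError on bad input
def first_item(seq):
    if not hasattr(seq, '__getitem__'):
        raise TypeError("sequence required")
    return seq[0]

case = unittest.TestCase()
with case.assertRaises(TypeError):
    first_item(12345)  # ints are not indexable, so this raises

assert first_item("abc") == "a"
```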
diff --git a/pandas/tests/test_resample.py b/pandas/tests/test_resample.py
index 37e22f101612b..276e9a12c1993 100644
--- a/pandas/tests/test_resample.py
+++ b/pandas/tests/test_resample.py
@@ -71,12 +71,12 @@ def test_api(self):
r = self.series.resample('H')
result = r.mean()
assert isinstance(result, Series)
- self.assertEqual(len(result), 217)
+ assert len(result) == 217
r = self.series.to_frame().resample('H')
result = r.mean()
assert isinstance(result, DataFrame)
- self.assertEqual(len(result), 217)
+ assert len(result) == 217
def test_api_changes_v018(self):
@@ -186,13 +186,13 @@ def f():
check_stacklevel=False):
result = self.series.resample('H')[0]
expected = self.series.resample('H').mean()[0]
- self.assertEqual(result, expected)
+ assert result == expected
with tm.assert_produces_warning(FutureWarning,
check_stacklevel=False):
result = self.series.resample('H')['2005-01-09 23:00:00']
expected = self.series.resample('H').mean()['2005-01-09 23:00:00']
- self.assertEqual(result, expected)
+ assert result == expected
def test_groupby_resample_api(self):
@@ -254,7 +254,7 @@ def test_getitem(self):
tm.assert_index_equal(r._selected_obj.columns, self.frame.columns)
r = self.frame.resample('H')['B']
- self.assertEqual(r._selected_obj.name, self.frame.columns[1])
+ assert r._selected_obj.name == self.frame.columns[1]
# technically this is allowed
r = self.frame.resample('H')['A', 'B']
@@ -771,7 +771,7 @@ def test_resample_empty_series(self):
expected = s.copy()
expected.index = s.index._shallow_copy(freq=freq)
assert_index_equal(result.index, expected.index)
- self.assertEqual(result.index.freq, expected.index.freq)
+ assert result.index.freq == expected.index.freq
assert_series_equal(result, expected, check_dtype=False)
def test_resample_empty_dataframe(self):
@@ -788,7 +788,7 @@ def test_resample_empty_dataframe(self):
expected = f.copy()
expected.index = f.index._shallow_copy(freq=freq)
assert_index_equal(result.index, expected.index)
- self.assertEqual(result.index.freq, expected.index.freq)
+ assert result.index.freq == expected.index.freq
assert_frame_equal(result, expected, check_dtype=False)
# test size for GH13212 (currently stays as df)
@@ -884,7 +884,7 @@ def test_custom_grouper(self):
for f in funcs:
g._cython_agg_general(f)
- self.assertEqual(g.ngroups, 2593)
+ assert g.ngroups == 2593
assert notnull(g.mean()).all()
# construct expected val
@@ -901,8 +901,8 @@ def test_custom_grouper(self):
index=dti, dtype='float64')
r = df.groupby(b).agg(np.sum)
- self.assertEqual(len(r.columns), 10)
- self.assertEqual(len(r.index), 2593)
+ assert len(r.columns) == 10
+ assert len(r.index) == 2593
def test_resample_basic(self):
rng = date_range('1/1/2000 00:00:00', '1/1/2000 00:13:00', freq='min',
@@ -914,7 +914,7 @@ def test_resample_basic(self):
expected = Series([s[0], s[1:6].mean(), s[6:11].mean(), s[11:].mean()],
index=exp_idx)
assert_series_equal(result, expected)
- self.assertEqual(result.index.name, 'index')
+ assert result.index.name == 'index'
result = s.resample('5min', closed='left', label='right').mean()
@@ -958,7 +958,7 @@ def _ohlc(group):
'5min', closed='right', label='right'), arg)()
expected = s.groupby(grouplist).agg(func)
- self.assertEqual(result.index.name, 'index')
+ assert result.index.name == 'index'
if arg == 'ohlc':
expected = DataFrame(expected.values.tolist())
expected.columns = ['open', 'high', 'low', 'close']
@@ -1116,51 +1116,51 @@ def test_resample_basic_from_daily(self):
# to weekly
result = s.resample('w-sun').last()
- self.assertEqual(len(result), 3)
+ assert len(result) == 3
assert (result.index.dayofweek == [6, 6, 6]).all()
- self.assertEqual(result.iloc[0], s['1/2/2005'])
- self.assertEqual(result.iloc[1], s['1/9/2005'])
- self.assertEqual(result.iloc[2], s.iloc[-1])
+ assert result.iloc[0] == s['1/2/2005']
+ assert result.iloc[1] == s['1/9/2005']
+ assert result.iloc[2] == s.iloc[-1]
result = s.resample('W-MON').last()
- self.assertEqual(len(result), 2)
+ assert len(result) == 2
assert (result.index.dayofweek == [0, 0]).all()
- self.assertEqual(result.iloc[0], s['1/3/2005'])
- self.assertEqual(result.iloc[1], s['1/10/2005'])
+ assert result.iloc[0] == s['1/3/2005']
+ assert result.iloc[1] == s['1/10/2005']
result = s.resample('W-TUE').last()
- self.assertEqual(len(result), 2)
+ assert len(result) == 2
assert (result.index.dayofweek == [1, 1]).all()
- self.assertEqual(result.iloc[0], s['1/4/2005'])
- self.assertEqual(result.iloc[1], s['1/10/2005'])
+ assert result.iloc[0] == s['1/4/2005']
+ assert result.iloc[1] == s['1/10/2005']
result = s.resample('W-WED').last()
- self.assertEqual(len(result), 2)
+ assert len(result) == 2
assert (result.index.dayofweek == [2, 2]).all()
- self.assertEqual(result.iloc[0], s['1/5/2005'])
- self.assertEqual(result.iloc[1], s['1/10/2005'])
+ assert result.iloc[0] == s['1/5/2005']
+ assert result.iloc[1] == s['1/10/2005']
result = s.resample('W-THU').last()
- self.assertEqual(len(result), 2)
+ assert len(result) == 2
assert (result.index.dayofweek == [3, 3]).all()
- self.assertEqual(result.iloc[0], s['1/6/2005'])
- self.assertEqual(result.iloc[1], s['1/10/2005'])
+ assert result.iloc[0] == s['1/6/2005']
+ assert result.iloc[1] == s['1/10/2005']
result = s.resample('W-FRI').last()
- self.assertEqual(len(result), 2)
+ assert len(result) == 2
assert (result.index.dayofweek == [4, 4]).all()
- self.assertEqual(result.iloc[0], s['1/7/2005'])
- self.assertEqual(result.iloc[1], s['1/10/2005'])
+ assert result.iloc[0] == s['1/7/2005']
+ assert result.iloc[1] == s['1/10/2005']
# to biz day
result = s.resample('B').last()
- self.assertEqual(len(result), 7)
+ assert len(result) == 7
assert (result.index.dayofweek == [4, 0, 1, 2, 3, 4, 0]).all()
- self.assertEqual(result.iloc[0], s['1/2/2005'])
- self.assertEqual(result.iloc[1], s['1/3/2005'])
- self.assertEqual(result.iloc[5], s['1/9/2005'])
- self.assertEqual(result.index.name, 'index')
+ assert result.iloc[0] == s['1/2/2005']
+ assert result.iloc[1] == s['1/3/2005']
+ assert result.iloc[5] == s['1/9/2005']
+ assert result.index.name == 'index'
def test_resample_upsampling_picked_but_not_correct(self):
@@ -1169,7 +1169,7 @@ def test_resample_upsampling_picked_but_not_correct(self):
series = Series(1, index=dates)
result = series.resample('D').mean()
- self.assertEqual(result.index[0], dates[0])
+ assert result.index[0] == dates[0]
# GH 5955
# incorrect deciding to upsample when the axis frequency matches the
@@ -1230,7 +1230,7 @@ def test_resample_loffset(self):
loffset=Minute(1)).mean()
assert_series_equal(result, expected)
- self.assertEqual(result.index.freq, Minute(5))
+ assert result.index.freq == Minute(5)
# from daily
dti = DatetimeIndex(start=datetime(2005, 1, 1),
@@ -1240,7 +1240,7 @@ def test_resample_loffset(self):
# to weekly
result = ser.resample('w-sun').last()
expected = ser.resample('w-sun', loffset=-bday).last()
- self.assertEqual(result.index[0] - bday, expected.index[0])
+ assert result.index[0] - bday == expected.index[0]
def test_resample_loffset_count(self):
# GH 12725
@@ -1273,11 +1273,11 @@ def test_resample_upsample(self):
# to minutely, by padding
result = s.resample('Min').pad()
- self.assertEqual(len(result), 12961)
- self.assertEqual(result[0], s[0])
- self.assertEqual(result[-1], s[-1])
+ assert len(result) == 12961
+ assert result[0] == s[0]
+ assert result[-1] == s[-1]
- self.assertEqual(result.index.name, 'index')
+ assert result.index.name == 'index'
def test_resample_how_method(self):
# GH9915
@@ -1320,20 +1320,20 @@ def test_resample_ohlc(self):
expect = s.groupby(grouper).agg(lambda x: x[-1])
result = s.resample('5Min').ohlc()
- self.assertEqual(len(result), len(expect))
- self.assertEqual(len(result.columns), 4)
+ assert len(result) == len(expect)
+ assert len(result.columns) == 4
xs = result.iloc[-2]
- self.assertEqual(xs['open'], s[-6])
- self.assertEqual(xs['high'], s[-6:-1].max())
- self.assertEqual(xs['low'], s[-6:-1].min())
- self.assertEqual(xs['close'], s[-2])
+ assert xs['open'] == s[-6]
+ assert xs['high'] == s[-6:-1].max()
+ assert xs['low'] == s[-6:-1].min()
+ assert xs['close'] == s[-2]
xs = result.iloc[0]
- self.assertEqual(xs['open'], s[0])
- self.assertEqual(xs['high'], s[:5].max())
- self.assertEqual(xs['low'], s[:5].min())
- self.assertEqual(xs['close'], s[4])
+ assert xs['open'] == s[0]
+ assert xs['high'] == s[:5].max()
+ assert xs['low'] == s[:5].min()
+ assert xs['close'] == s[4]
def test_resample_ohlc_result(self):
@@ -1410,9 +1410,9 @@ def test_resample_reresample(self):
s = Series(np.random.rand(len(dti)), dti)
bs = s.resample('B', closed='right', label='right').mean()
result = bs.resample('8H').mean()
- self.assertEqual(len(result), 22)
+ assert len(result) == 22
assert isinstance(result.index.freq, offsets.DateOffset)
- self.assertEqual(result.index.freq, offsets.Hour(8))
+ assert result.index.freq == offsets.Hour(8)
def test_resample_timestamp_to_period(self):
ts = _simple_ts('1/1/1990', '1/1/2000')
@@ -1465,7 +1465,7 @@ def test_downsample_non_unique(self):
result = ts.resample('M').mean()
expected = ts.groupby(lambda x: x.month).mean()
- self.assertEqual(len(result), 2)
+ assert len(result) == 2
assert_almost_equal(result[0], expected[1])
assert_almost_equal(result[1], expected[2])
@@ -1665,10 +1665,10 @@ def test_resample_dtype_preservation(self):
).set_index('date')
result = df.resample('1D').ffill()
- self.assertEqual(result.val.dtype, np.int32)
+ assert result.val.dtype == np.int32
result = df.groupby('group').resample('1D').ffill()
- self.assertEqual(result.val.dtype, np.int32)
+ assert result.val.dtype == np.int32
def test_weekly_resample_buglet(self):
# #1327
@@ -1742,7 +1742,7 @@ def test_resample_anchored_intraday(self):
ts = _simple_ts('2012-04-29 23:00', '2012-04-30 5:00', freq='h')
resampled = ts.resample('M').mean()
- self.assertEqual(len(resampled), 1)
+ assert len(resampled) == 1
def test_resample_anchored_monthstart(self):
ts = _simple_ts('1/1/2000', '12/31/2002')
@@ -1768,13 +1768,11 @@ def test_resample_anchored_multiday(self):
# Ensure left closing works
result = s.resample('2200L').mean()
- self.assertEqual(result.index[-1],
- pd.Timestamp('2014-10-15 23:00:02.000'))
+ assert result.index[-1] == pd.Timestamp('2014-10-15 23:00:02.000')
# Ensure right closing works
result = s.resample('2200L', label='right').mean()
- self.assertEqual(result.index[-1],
- pd.Timestamp('2014-10-15 23:00:04.200'))
+ assert result.index[-1] == pd.Timestamp('2014-10-15 23:00:04.200')
def test_corner_cases(self):
# miscellaneous test coverage
@@ -1789,13 +1787,13 @@ def test_corner_cases(self):
len0pts = _simple_pts('2007-01', '2010-05', freq='M')[:0]
# it works
result = len0pts.resample('A-DEC').mean()
- self.assertEqual(len(result), 0)
+ assert len(result) == 0
# resample to periods
ts = _simple_ts('2000-04-28', '2000-04-30 11:00', freq='h')
result = ts.resample('M', kind='period').mean()
- self.assertEqual(len(result), 1)
- self.assertEqual(result.index[0], Period('2000-04', freq='M'))
+ assert len(result) == 1
+ assert result.index[0] == Period('2000-04', freq='M')
def test_anchored_lowercase_buglet(self):
dates = date_range('4/16/2012 20:00', periods=50000, freq='s')
@@ -1941,7 +1939,7 @@ def test_resample_nunique(self):
g = df.groupby(pd.Grouper(freq='D'))
expected = df.groupby(pd.TimeGrouper('D')).ID.apply(lambda x:
x.nunique())
- self.assertEqual(expected.name, 'ID')
+ assert expected.name == 'ID'
for t in [r, g]:
result = r.ID.nunique()
@@ -2691,8 +2689,8 @@ def test_resample_bms_2752(self):
foo = pd.Series(index=pd.bdate_range('20000101', '20000201'))
res1 = foo.resample("BMS").mean()
res2 = foo.resample("BMS").mean().resample("B").mean()
- self.assertEqual(res1.index[0], Timestamp('20000103'))
- self.assertEqual(res1.index[0], res2.index[0])
+ assert res1.index[0] == Timestamp('20000103')
+ assert res1.index[0] == res2.index[0]
# def test_monthly_convention_span(self):
# rng = period_range('2000-01', periods=3, freq='M')
@@ -2969,11 +2967,11 @@ def test_consistency_with_window(self):
df = self.frame
expected = pd.Int64Index([1, 2, 3], name='A')
result = df.groupby('A').resample('2s').mean()
- self.assertEqual(result.index.nlevels, 2)
+ assert result.index.nlevels == 2
tm.assert_index_equal(result.index.levels[0], expected)
result = df.groupby('A').rolling(20).mean()
- self.assertEqual(result.index.nlevels, 2)
+ assert result.index.nlevels == 2
tm.assert_index_equal(result.index.levels[0], expected)
def test_median_duplicate_columns(self):
@@ -3219,7 +3217,7 @@ def test_aggregate_with_nat(self):
dt_result = getattr(dt_grouped, func)()
assert_series_equal(expected, dt_result)
# GH 9925
- self.assertEqual(dt_result.index.name, 'key')
+ assert dt_result.index.name == 'key'
     # if NaT is included, 'var', 'std', 'mean', 'first', 'last'
     # and 'nth' don't work yet
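The string-method hunks below collapse line-wrapped `assertEqual` calls over list comprehensions (as in `test_ismethods` and `test_casemethods`) into one readable assert per method. A stdlib-only sketch of that pattern, using the same sample values as the test:

```python
# Compare each vectorized string method against a plain list
# comprehension with a single-line assert per method.
values = ['aaa', 'bbb', 'CCC', 'Dddd', 'eEEE']

assert [v.lower() for v in values] == ['aaa', 'bbb', 'ccc', 'dddd', 'eeee']
assert [v.upper() for v in values] == ['AAA', 'BBB', 'CCC', 'DDDD', 'EEEE']
assert [v.isupper() for v in values] == [False, False, True, False, False]
```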
diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index 5b9797ce76a45..412a88e13bb23 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -54,7 +54,7 @@ def test_iter(self):
# desired behavior is to iterate until everything would be nan on the
# next iter so make sure the last element of the iterator was 'l' in
# this case since 'wikitravel' is the longest string
- self.assertEqual(s.dropna().values.item(), 'l')
+ assert s.dropna().values.item() == 'l'
def test_iter_empty(self):
ds = Series([], dtype=object)
@@ -66,8 +66,8 @@ def test_iter_empty(self):
# nothing to iterate over so nothing defined values should remain
# unchanged
- self.assertEqual(i, 100)
- self.assertEqual(s, 1)
+ assert i == 100
+ assert s == 1
def test_iter_single_element(self):
ds = Series(['a'])
@@ -87,8 +87,8 @@ def test_iter_object_try_string(self):
for i, s in enumerate(ds.str):
pass
- self.assertEqual(i, 100)
- self.assertEqual(s, 'h')
+ assert i == 100
+ assert s == 'h'
def test_cat(self):
one = np.array(['a', 'a', 'b', 'b', 'c', NA], dtype=np.object_)
@@ -97,23 +97,23 @@ def test_cat(self):
# single array
result = strings.str_cat(one)
exp = 'aabbc'
- self.assertEqual(result, exp)
+ assert result == exp
result = strings.str_cat(one, na_rep='NA')
exp = 'aabbcNA'
- self.assertEqual(result, exp)
+ assert result == exp
result = strings.str_cat(one, na_rep='-')
exp = 'aabbc-'
- self.assertEqual(result, exp)
+ assert result == exp
result = strings.str_cat(one, sep='_', na_rep='NA')
exp = 'a_a_b_b_c_NA'
- self.assertEqual(result, exp)
+ assert result == exp
result = strings.str_cat(two, sep='-')
exp = 'a-b-d-foo'
- self.assertEqual(result, exp)
+ assert result == exp
# Multiple arrays
result = strings.str_cat(one, [two], na_rep='NA')
@@ -177,7 +177,7 @@ def test_contains(self):
values = ['foo', 'xyz', 'fooommm__foo', 'mmm_']
result = strings.str_contains(values, pat)
expected = np.array([False, False, True, True])
- self.assertEqual(result.dtype, np.bool_)
+ assert result.dtype == np.bool_
tm.assert_numpy_array_equal(result, expected)
# case insensitive using regex
@@ -220,13 +220,13 @@ def test_contains(self):
dtype=np.object_)
result = strings.str_contains(values, pat)
expected = np.array([False, False, True, True])
- self.assertEqual(result.dtype, np.bool_)
+ assert result.dtype == np.bool_
tm.assert_numpy_array_equal(result, expected)
# na
values = Series(['om', 'foo', np.nan])
res = values.str.contains('foo', na="foo")
- self.assertEqual(res.loc[2], "foo")
+ assert res.loc[2] == "foo"
def test_startswith(self):
values = Series(['om', NA, 'foo_nom', 'nom', 'bar_foo', NA, 'foo'])
@@ -381,13 +381,11 @@ def test_swapcase(self):
def test_casemethods(self):
values = ['aaa', 'bbb', 'CCC', 'Dddd', 'eEEE']
s = Series(values)
- self.assertEqual(s.str.lower().tolist(), [v.lower() for v in values])
- self.assertEqual(s.str.upper().tolist(), [v.upper() for v in values])
- self.assertEqual(s.str.title().tolist(), [v.title() for v in values])
- self.assertEqual(s.str.capitalize().tolist(), [
- v.capitalize() for v in values])
- self.assertEqual(s.str.swapcase().tolist(), [
- v.swapcase() for v in values])
+ assert s.str.lower().tolist() == [v.lower() for v in values]
+ assert s.str.upper().tolist() == [v.upper() for v in values]
+ assert s.str.title().tolist() == [v.title() for v in values]
+ assert s.str.capitalize().tolist() == [v.capitalize() for v in values]
+ assert s.str.swapcase().tolist() == [v.swapcase() for v in values]
def test_replace(self):
values = Series(['fooBAD__barBAD', NA])
@@ -668,7 +666,7 @@ def test_extract_expand_False(self):
# single group renames series/index properly
s_or_idx = klass(['A1', 'A2'])
result = s_or_idx.str.extract(r'(?P<uno>A)\d', expand=False)
- self.assertEqual(result.name, 'uno')
+ assert result.name == 'uno'
exp = klass(['A', 'A'], name='uno')
if klass == Series:
@@ -772,7 +770,7 @@ def check_index(index):
r = s.str.extract(r'(?P<sue>[a-z])', expand=False)
e = Series(['a', 'b', 'c'], name='sue')
tm.assert_series_equal(r, e)
- self.assertEqual(r.name, e.name)
+ assert r.name == e.name
def test_extract_expand_True(self):
# Contains tests like those in test_match and some others.
@@ -1220,7 +1218,7 @@ def test_empty_str_methods(self):
# (extract) on empty series
tm.assert_series_equal(empty_str, empty.str.cat(empty))
- self.assertEqual('', empty.str.cat())
+ assert '' == empty.str.cat()
tm.assert_series_equal(empty_str, empty.str.title())
tm.assert_series_equal(empty_int, empty.str.count('a'))
tm.assert_series_equal(empty_bool, empty.str.contains('a'))
@@ -1322,20 +1320,13 @@ def test_ismethods(self):
tm.assert_series_equal(str_s.str.isupper(), Series(upper_e))
tm.assert_series_equal(str_s.str.istitle(), Series(title_e))
- self.assertEqual(str_s.str.isalnum().tolist(), [v.isalnum()
- for v in values])
- self.assertEqual(str_s.str.isalpha().tolist(), [v.isalpha()
- for v in values])
- self.assertEqual(str_s.str.isdigit().tolist(), [v.isdigit()
- for v in values])
- self.assertEqual(str_s.str.isspace().tolist(), [v.isspace()
- for v in values])
- self.assertEqual(str_s.str.islower().tolist(), [v.islower()
- for v in values])
- self.assertEqual(str_s.str.isupper().tolist(), [v.isupper()
- for v in values])
- self.assertEqual(str_s.str.istitle().tolist(), [v.istitle()
- for v in values])
+ assert str_s.str.isalnum().tolist() == [v.isalnum() for v in values]
+ assert str_s.str.isalpha().tolist() == [v.isalpha() for v in values]
+ assert str_s.str.isdigit().tolist() == [v.isdigit() for v in values]
+ assert str_s.str.isspace().tolist() == [v.isspace() for v in values]
+ assert str_s.str.islower().tolist() == [v.islower() for v in values]
+ assert str_s.str.isupper().tolist() == [v.isupper() for v in values]
+ assert str_s.str.istitle().tolist() == [v.istitle() for v in values]
def test_isnumeric(self):
# 0x00bc: ¼ VULGAR FRACTION ONE QUARTER
@@ -1350,10 +1341,8 @@ def test_isnumeric(self):
tm.assert_series_equal(s.str.isdecimal(), Series(decimal_e))
unicodes = [u'A', u'3', u'¼', u'★', u'፸', u'3', u'four']
- self.assertEqual(s.str.isnumeric().tolist(), [
- v.isnumeric() for v in unicodes])
- self.assertEqual(s.str.isdecimal().tolist(), [
- v.isdecimal() for v in unicodes])
+ assert s.str.isnumeric().tolist() == [v.isnumeric() for v in unicodes]
+ assert s.str.isdecimal().tolist() == [v.isdecimal() for v in unicodes]
values = ['A', np.nan, u'¼', u'★', np.nan, u'3', 'four']
s = Series(values)
@@ -1962,9 +1951,9 @@ def test_split_noargs(self):
s = Series(['Wes McKinney', 'Travis Oliphant'])
result = s.str.split()
expected = ['Travis', 'Oliphant']
- self.assertEqual(result[1], expected)
+ assert result[1] == expected
result = s.str.rsplit()
- self.assertEqual(result[1], expected)
+ assert result[1] == expected
def test_split_maxsplit(self):
# re.split 0, str.split -1
@@ -2027,14 +2016,14 @@ def test_split_to_multiindex_expand(self):
result = idx.str.split('_', expand=True)
exp = idx
tm.assert_index_equal(result, exp)
- self.assertEqual(result.nlevels, 1)
+ assert result.nlevels == 1
idx = Index(['some_equal_splits', 'with_no_nans'])
result = idx.str.split('_', expand=True)
exp = MultiIndex.from_tuples([('some', 'equal', 'splits'), (
'with', 'no', 'nans')])
tm.assert_index_equal(result, exp)
- self.assertEqual(result.nlevels, 3)
+ assert result.nlevels == 3
idx = Index(['some_unequal_splits', 'one_of_these_things_is_not'])
result = idx.str.split('_', expand=True)
@@ -2042,7 +2031,7 @@ def test_split_to_multiindex_expand(self):
), ('one', 'of', 'these', 'things',
'is', 'not')])
tm.assert_index_equal(result, exp)
- self.assertEqual(result.nlevels, 6)
+ assert result.nlevels == 6
with tm.assert_raises_regex(ValueError, "expand must be"):
idx.str.split('_', expand="not_a_boolean")
@@ -2081,21 +2070,21 @@ def test_rsplit_to_multiindex_expand(self):
result = idx.str.rsplit('_', expand=True)
exp = idx
tm.assert_index_equal(result, exp)
- self.assertEqual(result.nlevels, 1)
+ assert result.nlevels == 1
idx = Index(['some_equal_splits', 'with_no_nans'])
result = idx.str.rsplit('_', expand=True)
exp = MultiIndex.from_tuples([('some', 'equal', 'splits'), (
'with', 'no', 'nans')])
tm.assert_index_equal(result, exp)
- self.assertEqual(result.nlevels, 3)
+ assert result.nlevels == 3
idx = Index(['some_equal_splits', 'with_no_nans'])
result = idx.str.rsplit('_', expand=True, n=1)
exp = MultiIndex.from_tuples([('some_equal', 'splits'),
('with_no', 'nans')])
tm.assert_index_equal(result, exp)
- self.assertEqual(result.nlevels, 2)
+ assert result.nlevels == 2
def test_split_with_name(self):
# GH 12617
@@ -2184,9 +2173,9 @@ def test_partition_series(self):
# compare to standard lib
values = Series(['A_B_C', 'B_C_D', 'E_F_G', 'EFGHEF'])
result = values.str.partition('_', expand=False).tolist()
- self.assertEqual(result, [v.partition('_') for v in values])
+ assert result == [v.partition('_') for v in values]
result = values.str.rpartition('_', expand=False).tolist()
- self.assertEqual(result, [v.rpartition('_') for v in values])
+ assert result == [v.rpartition('_') for v in values]
def test_partition_index(self):
values = Index(['a_b_c', 'c_d_e', 'f_g_h'])
@@ -2195,25 +2184,25 @@ def test_partition_index(self):
exp = Index(np.array([('a', '_', 'b_c'), ('c', '_', 'd_e'), ('f', '_',
'g_h')]))
tm.assert_index_equal(result, exp)
- self.assertEqual(result.nlevels, 1)
+ assert result.nlevels == 1
result = values.str.rpartition('_', expand=False)
exp = Index(np.array([('a_b', '_', 'c'), ('c_d', '_', 'e'), (
'f_g', '_', 'h')]))
tm.assert_index_equal(result, exp)
- self.assertEqual(result.nlevels, 1)
+ assert result.nlevels == 1
result = values.str.partition('_')
exp = Index([('a', '_', 'b_c'), ('c', '_', 'd_e'), ('f', '_', 'g_h')])
tm.assert_index_equal(result, exp)
assert isinstance(result, MultiIndex)
- self.assertEqual(result.nlevels, 3)
+ assert result.nlevels == 3
result = values.str.rpartition('_')
exp = Index([('a_b', '_', 'c'), ('c_d', '_', 'e'), ('f_g', '_', 'h')])
tm.assert_index_equal(result, exp)
assert isinstance(result, MultiIndex)
- self.assertEqual(result.nlevels, 3)
+ assert result.nlevels == 3
def test_partition_to_dataframe(self):
values = Series(['a_b_c', 'c_d_e', NA, 'f_g_h'])
@@ -2604,20 +2593,20 @@ def test_match_findall_flags(self):
pat = r'([A-Z0-9._%+-]+)@([A-Z0-9.-]+)\.([A-Z]{2,4})'
result = data.str.extract(pat, flags=re.IGNORECASE, expand=True)
- self.assertEqual(result.iloc[0].tolist(), ['dave', 'google', 'com'])
+ assert result.iloc[0].tolist() == ['dave', 'google', 'com']
result = data.str.match(pat, flags=re.IGNORECASE)
- self.assertEqual(result[0], True)
+ assert result[0]
result = data.str.findall(pat, flags=re.IGNORECASE)
- self.assertEqual(result[0][0], ('dave', 'google', 'com'))
+ assert result[0][0] == ('dave', 'google', 'com')
result = data.str.count(pat, flags=re.IGNORECASE)
- self.assertEqual(result[0], 1)
+ assert result[0] == 1
with tm.assert_produces_warning(UserWarning):
result = data.str.contains(pat, flags=re.IGNORECASE)
- self.assertEqual(result[0], True)
+ assert result[0]
def test_encode_decode(self):
base = Series([u('a'), u('b'), u('a\xe4')])
@@ -2685,11 +2674,11 @@ def test_cat_on_filtered_index(self):
str_month = df.month.astype('str')
str_both = str_year.str.cat(str_month, sep=' ')
- self.assertEqual(str_both.loc[1], '2011 2')
+ assert str_both.loc[1] == '2011 2'
str_multiple = str_year.str.cat([str_month, str_month], sep=' ')
- self.assertEqual(str_multiple.loc[1], '2011 2 2')
+ assert str_multiple.loc[1] == '2011 2 2'
def test_str_cat_raises_intuitive_error(self):
# https://github.com/pandas-dev/pandas/issues/11334
@@ -2721,13 +2710,13 @@ def test_index_str_accessor_visibility(self):
idx = Index(values)
assert isinstance(Series(values).str, StringMethods)
assert isinstance(idx.str, StringMethods)
- self.assertEqual(idx.inferred_type, tp)
+ assert idx.inferred_type == tp
for values, tp in cases:
idx = Index(values)
assert isinstance(Series(values).str, StringMethods)
assert isinstance(idx.str, StringMethods)
- self.assertEqual(idx.inferred_type, tp)
+ assert idx.inferred_type == tp
cases = [([1, np.nan], 'floating'),
([datetime(2011, 1, 1)], 'datetime64'),
@@ -2739,11 +2728,11 @@ def test_index_str_accessor_visibility(self):
Series(values).str
with tm.assert_raises_regex(AttributeError, message):
idx.str
- self.assertEqual(idx.inferred_type, tp)
+ assert idx.inferred_type == tp
# MultiIndex has mixed dtype, but not allow to use accessor
idx = MultiIndex.from_tuples([('a', 'b'), ('a', 'b')])
- self.assertEqual(idx.inferred_type, 'mixed')
+ assert idx.inferred_type == 'mixed'
message = 'Can only use .str accessor with Index, not MultiIndex'
with tm.assert_raises_regex(AttributeError, message):
idx.str
diff --git a/pandas/tests/test_take.py b/pandas/tests/test_take.py
index 9fb61998f6c54..617d268be8f67 100644
--- a/pandas/tests/test_take.py
+++ b/pandas/tests/test_take.py
@@ -353,7 +353,7 @@ def test_1d_bool(self):
tm.assert_numpy_array_equal(result, expected)
result = algos.take_1d(arr, [0, 2, -1])
- self.assertEqual(result.dtype, np.object_)
+ assert result.dtype == np.object_
def test_2d_bool(self):
arr = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 1]], dtype=bool)
@@ -367,7 +367,7 @@ def test_2d_bool(self):
tm.assert_numpy_array_equal(result, expected)
result = algos.take_nd(arr, [0, 2, -1])
- self.assertEqual(result.dtype, np.object_)
+ assert result.dtype == np.object_
def test_2d_float32(self):
arr = np.random.randn(4, 3).astype(np.float32)
diff --git a/pandas/tests/test_testing.py b/pandas/tests/test_testing.py
index 80db5eb49c127..2c0cd55205a5a 100644
--- a/pandas/tests/test_testing.py
+++ b/pandas/tests/test_testing.py
@@ -726,8 +726,8 @@ def test_RNGContext(self):
with RNGContext(0):
with RNGContext(1):
- self.assertEqual(np.random.randn(), expected1)
- self.assertEqual(np.random.randn(), expected0)
+ assert np.random.randn() == expected1
+ assert np.random.randn() == expected0
class TestLocale(tm.TestCase):
diff --git a/pandas/tests/test_util.py b/pandas/tests/test_util.py
index 6581e7688a32f..80eb5bb9dfe16 100644
--- a/pandas/tests/test_util.py
+++ b/pandas/tests/test_util.py
@@ -7,6 +7,7 @@
from collections import OrderedDict
import pytest
+from pandas.compat import intern
from pandas.util._move import move_into_mutable_buffer, BadMove, stolenbuf
from pandas.util.decorators import deprecate_kwarg
from pandas.util.validators import (validate_args, validate_kwargs,
@@ -50,19 +51,19 @@ def test_dict_deprecate_kwarg(self):
x = 'yes'
with tm.assert_produces_warning(FutureWarning):
result = self.f2(old=x)
- self.assertEqual(result, True)
+ assert result
def test_missing_deprecate_kwarg(self):
x = 'bogus'
with tm.assert_produces_warning(FutureWarning):
result = self.f2(old=x)
- self.assertEqual(result, 'bogus')
+ assert result == 'bogus'
def test_callable_deprecate_kwarg(self):
x = 5
with tm.assert_produces_warning(FutureWarning):
result = self.f3(old=x)
- self.assertEqual(result, x + 1)
+ assert result == x + 1
with pytest.raises(TypeError):
self.f3(old='hello')
@@ -358,7 +359,7 @@ def test_exactly_one_ref(self):
as_stolen_buf = move_into_mutable_buffer(b[:-3])
# materialize as bytearray to show that it is mutable
- self.assertEqual(bytearray(as_stolen_buf), b'test')
+ assert bytearray(as_stolen_buf) == b'test'
@pytest.mark.skipif(
sys.version_info[0] > 2,
@@ -393,12 +394,7 @@ def ref_capture(ob):
# be the same instance.
move_into_mutable_buffer(ref_capture(intern(make_string()))) # noqa
- self.assertEqual(
- refcount[0],
- 1,
- msg='The BadMove was probably raised for refcount reasons instead'
- ' of interning reasons',
- )
+ assert refcount[0] == 1
def test_numpy_errstate_is_default():
@@ -468,7 +464,7 @@ def test_set_locale(self):
new_lang, new_enc = normalized_locale.split('.')
new_enc = codecs.lookup(enc).name
normalized_locale = new_lang, new_enc
- self.assertEqual(normalized_locale, new_locale)
+ assert normalized_locale == new_locale
current_locale = locale.getlocale()
- self.assertEqual(current_locale, CURRENT_LOCALE)
+ assert current_locale == CURRENT_LOCALE
diff --git a/pandas/tests/test_window.py b/pandas/tests/test_window.py
index 7979e7d77a49d..55be6302036f1 100644
--- a/pandas/tests/test_window.py
+++ b/pandas/tests/test_window.py
@@ -57,7 +57,7 @@ def test_getitem(self):
tm.assert_index_equal(r._selected_obj.columns, self.frame.columns)
r = self.frame.rolling(window=5)[1]
- self.assertEqual(r._selected_obj.name, self.frame.columns[1])
+ assert r._selected_obj.name == self.frame.columns[1]
# technically this is allowed
r = self.frame.rolling(window=5)[1, 3]
@@ -281,8 +281,8 @@ def test_preserve_metadata(self):
s2 = s.rolling(30).sum()
s3 = s.rolling(20).sum()
- self.assertEqual(s2.name, 'foo')
- self.assertEqual(s3.name, 'foo')
+ assert s2.name == 'foo'
+ assert s3.name == 'foo'
def test_how_compat(self):
# in prior versions, we would allow how to be used in the resample
@@ -859,14 +859,14 @@ def test_cmov_window_corner(self):
vals = np.array([])
with catch_warnings(record=True):
rs = mom.rolling_window(vals, 5, 'boxcar', center=True)
- self.assertEqual(len(rs), 0)
+ assert len(rs) == 0
# shorter than window
vals = np.random.randn(5)
with catch_warnings(record=True):
rs = mom.rolling_window(vals, 10, 'boxcar')
assert np.isnan(rs).all()
- self.assertEqual(len(rs), 5)
+ assert len(rs) == 5
def test_cmov_window_frame(self):
# Gh 8238
@@ -1382,7 +1382,7 @@ def get_result(obj, window, min_periods=None, freq=None, center=False):
frame_result = get_result(self.frame, window=50)
assert isinstance(series_result, Series)
- self.assertEqual(type(frame_result), DataFrame)
+ assert type(frame_result) == DataFrame
# check time_rule works
if has_time_rule:
@@ -1689,14 +1689,14 @@ def _check_ew_ndarray(self, func, preserve_nan=False, name=None):
# pass in ints
result2 = func(np.arange(50), span=10)
- self.assertEqual(result2.dtype, np.float_)
+ assert result2.dtype == np.float_
def _check_ew_structures(self, func, name):
series_result = getattr(self.series.ewm(com=10), name)()
assert isinstance(series_result, Series)
frame_result = getattr(self.frame.ewm(com=10), name)()
- self.assertEqual(type(frame_result), DataFrame)
+ assert type(frame_result) == DataFrame
class TestPairwise(object):
@@ -2911,7 +2911,7 @@ def _check_expanding_structures(self, func):
series_result = func(self.series)
assert isinstance(series_result, Series)
frame_result = func(self.frame)
- self.assertEqual(type(frame_result), DataFrame)
+ assert type(frame_result) == DataFrame
def _check_expanding(self, func, static_comp, has_min_periods=True,
has_time_rule=True, preserve_nan=True):
@@ -3031,10 +3031,10 @@ def test_rolling_min_max_numeric_types(self):
# correctness
result = (DataFrame(np.arange(20, dtype=data_type))
.rolling(window=5).max())
- self.assertEqual(result.dtypes[0], np.dtype("f8"))
+ assert result.dtypes[0] == np.dtype("f8")
result = (DataFrame(np.arange(20, dtype=data_type))
.rolling(window=5).min())
- self.assertEqual(result.dtypes[0], np.dtype("f8"))
+ assert result.dtypes[0] == np.dtype("f8")
class TestGrouperGrouping(tm.TestCase):
diff --git a/pandas/tests/tools/test_numeric.py b/pandas/tests/tools/test_numeric.py
index 45b736102aa3d..b298df4f4b5d8 100644
--- a/pandas/tests/tools/test_numeric.py
+++ b/pandas/tests/tools/test_numeric.py
@@ -156,16 +156,16 @@ def test_type_check(self):
to_numeric(df, errors=errors)
def test_scalar(self):
- self.assertEqual(pd.to_numeric(1), 1)
- self.assertEqual(pd.to_numeric(1.1), 1.1)
+ assert pd.to_numeric(1) == 1
+ assert pd.to_numeric(1.1) == 1.1
- self.assertEqual(pd.to_numeric('1'), 1)
- self.assertEqual(pd.to_numeric('1.1'), 1.1)
+ assert pd.to_numeric('1') == 1
+ assert pd.to_numeric('1.1') == 1.1
with pytest.raises(ValueError):
to_numeric('XX', errors='raise')
- self.assertEqual(to_numeric('XX', errors='ignore'), 'XX')
+ assert to_numeric('XX', errors='ignore') == 'XX'
assert np.isnan(to_numeric('XX', errors='coerce'))
def test_numeric_dtypes(self):
diff --git a/pandas/tests/tseries/test_frequencies.py b/pandas/tests/tseries/test_frequencies.py
index 894269aaf451a..a78150e9cf728 100644
--- a/pandas/tests/tseries/test_frequencies.py
+++ b/pandas/tests/tseries/test_frequencies.py
@@ -345,97 +345,92 @@ def _assert_depr(freq, expected, aliases):
class TestFrequencyCode(tm.TestCase):
def test_freq_code(self):
- self.assertEqual(frequencies.get_freq('A'), 1000)
- self.assertEqual(frequencies.get_freq('3A'), 1000)
- self.assertEqual(frequencies.get_freq('-1A'), 1000)
+ assert frequencies.get_freq('A') == 1000
+ assert frequencies.get_freq('3A') == 1000
+ assert frequencies.get_freq('-1A') == 1000
- self.assertEqual(frequencies.get_freq('W'), 4000)
- self.assertEqual(frequencies.get_freq('W-MON'), 4001)
- self.assertEqual(frequencies.get_freq('W-FRI'), 4005)
+ assert frequencies.get_freq('W') == 4000
+ assert frequencies.get_freq('W-MON') == 4001
+ assert frequencies.get_freq('W-FRI') == 4005
for freqstr, code in compat.iteritems(frequencies._period_code_map):
result = frequencies.get_freq(freqstr)
- self.assertEqual(result, code)
+ assert result == code
result = frequencies.get_freq_group(freqstr)
- self.assertEqual(result, code // 1000 * 1000)
+ assert result == code // 1000 * 1000
result = frequencies.get_freq_group(code)
- self.assertEqual(result, code // 1000 * 1000)
+ assert result == code // 1000 * 1000
def test_freq_group(self):
- self.assertEqual(frequencies.get_freq_group('A'), 1000)
- self.assertEqual(frequencies.get_freq_group('3A'), 1000)
- self.assertEqual(frequencies.get_freq_group('-1A'), 1000)
- self.assertEqual(frequencies.get_freq_group('A-JAN'), 1000)
- self.assertEqual(frequencies.get_freq_group('A-MAY'), 1000)
- self.assertEqual(frequencies.get_freq_group(offsets.YearEnd()), 1000)
- self.assertEqual(frequencies.get_freq_group(
- offsets.YearEnd(month=1)), 1000)
- self.assertEqual(frequencies.get_freq_group(
- offsets.YearEnd(month=5)), 1000)
-
- self.assertEqual(frequencies.get_freq_group('W'), 4000)
- self.assertEqual(frequencies.get_freq_group('W-MON'), 4000)
- self.assertEqual(frequencies.get_freq_group('W-FRI'), 4000)
- self.assertEqual(frequencies.get_freq_group(offsets.Week()), 4000)
- self.assertEqual(frequencies.get_freq_group(
- offsets.Week(weekday=1)), 4000)
- self.assertEqual(frequencies.get_freq_group(
- offsets.Week(weekday=5)), 4000)
+ assert frequencies.get_freq_group('A') == 1000
+ assert frequencies.get_freq_group('3A') == 1000
+ assert frequencies.get_freq_group('-1A') == 1000
+ assert frequencies.get_freq_group('A-JAN') == 1000
+ assert frequencies.get_freq_group('A-MAY') == 1000
+ assert frequencies.get_freq_group(offsets.YearEnd()) == 1000
+ assert frequencies.get_freq_group(offsets.YearEnd(month=1)) == 1000
+ assert frequencies.get_freq_group(offsets.YearEnd(month=5)) == 1000
+
+ assert frequencies.get_freq_group('W') == 4000
+ assert frequencies.get_freq_group('W-MON') == 4000
+ assert frequencies.get_freq_group('W-FRI') == 4000
+ assert frequencies.get_freq_group(offsets.Week()) == 4000
+ assert frequencies.get_freq_group(offsets.Week(weekday=1)) == 4000
+ assert frequencies.get_freq_group(offsets.Week(weekday=5)) == 4000
def test_get_to_timestamp_base(self):
tsb = frequencies.get_to_timestamp_base
- self.assertEqual(tsb(frequencies.get_freq_code('D')[0]),
- frequencies.get_freq_code('D')[0])
- self.assertEqual(tsb(frequencies.get_freq_code('W')[0]),
- frequencies.get_freq_code('D')[0])
- self.assertEqual(tsb(frequencies.get_freq_code('M')[0]),
- frequencies.get_freq_code('D')[0])
+ assert (tsb(frequencies.get_freq_code('D')[0]) ==
+ frequencies.get_freq_code('D')[0])
+ assert (tsb(frequencies.get_freq_code('W')[0]) ==
+ frequencies.get_freq_code('D')[0])
+ assert (tsb(frequencies.get_freq_code('M')[0]) ==
+ frequencies.get_freq_code('D')[0])
- self.assertEqual(tsb(frequencies.get_freq_code('S')[0]),
- frequencies.get_freq_code('S')[0])
- self.assertEqual(tsb(frequencies.get_freq_code('T')[0]),
- frequencies.get_freq_code('S')[0])
- self.assertEqual(tsb(frequencies.get_freq_code('H')[0]),
- frequencies.get_freq_code('S')[0])
+ assert (tsb(frequencies.get_freq_code('S')[0]) ==
+ frequencies.get_freq_code('S')[0])
+ assert (tsb(frequencies.get_freq_code('T')[0]) ==
+ frequencies.get_freq_code('S')[0])
+ assert (tsb(frequencies.get_freq_code('H')[0]) ==
+ frequencies.get_freq_code('S')[0])
def test_freq_to_reso(self):
Reso = frequencies.Resolution
- self.assertEqual(Reso.get_str_from_freq('A'), 'year')
- self.assertEqual(Reso.get_str_from_freq('Q'), 'quarter')
- self.assertEqual(Reso.get_str_from_freq('M'), 'month')
- self.assertEqual(Reso.get_str_from_freq('D'), 'day')
- self.assertEqual(Reso.get_str_from_freq('H'), 'hour')
- self.assertEqual(Reso.get_str_from_freq('T'), 'minute')
- self.assertEqual(Reso.get_str_from_freq('S'), 'second')
- self.assertEqual(Reso.get_str_from_freq('L'), 'millisecond')
- self.assertEqual(Reso.get_str_from_freq('U'), 'microsecond')
- self.assertEqual(Reso.get_str_from_freq('N'), 'nanosecond')
+ assert Reso.get_str_from_freq('A') == 'year'
+ assert Reso.get_str_from_freq('Q') == 'quarter'
+ assert Reso.get_str_from_freq('M') == 'month'
+ assert Reso.get_str_from_freq('D') == 'day'
+ assert Reso.get_str_from_freq('H') == 'hour'
+ assert Reso.get_str_from_freq('T') == 'minute'
+ assert Reso.get_str_from_freq('S') == 'second'
+ assert Reso.get_str_from_freq('L') == 'millisecond'
+ assert Reso.get_str_from_freq('U') == 'microsecond'
+ assert Reso.get_str_from_freq('N') == 'nanosecond'
for freq in ['A', 'Q', 'M', 'D', 'H', 'T', 'S', 'L', 'U', 'N']:
# check roundtrip
result = Reso.get_freq(Reso.get_str_from_freq(freq))
- self.assertEqual(freq, result)
+ assert freq == result
for freq in ['D', 'H', 'T', 'S', 'L', 'U']:
result = Reso.get_freq(Reso.get_str(Reso.get_reso_from_freq(freq)))
- self.assertEqual(freq, result)
+ assert freq == result
def test_resolution_bumping(self):
- # GH 14378
+ # see gh-14378
Reso = frequencies.Resolution
- self.assertEqual(Reso.get_stride_from_decimal(1.5, 'T'), (90, 'S'))
- self.assertEqual(Reso.get_stride_from_decimal(62.4, 'T'), (3744, 'S'))
- self.assertEqual(Reso.get_stride_from_decimal(1.04, 'H'), (3744, 'S'))
- self.assertEqual(Reso.get_stride_from_decimal(1, 'D'), (1, 'D'))
- self.assertEqual(Reso.get_stride_from_decimal(0.342931, 'H'),
- (1234551600, 'U'))
- self.assertEqual(Reso.get_stride_from_decimal(1.2345, 'D'),
- (106660800, 'L'))
+ assert Reso.get_stride_from_decimal(1.5, 'T') == (90, 'S')
+ assert Reso.get_stride_from_decimal(62.4, 'T') == (3744, 'S')
+ assert Reso.get_stride_from_decimal(1.04, 'H') == (3744, 'S')
+ assert Reso.get_stride_from_decimal(1, 'D') == (1, 'D')
+ assert (Reso.get_stride_from_decimal(0.342931, 'H') ==
+ (1234551600, 'U'))
+ assert Reso.get_stride_from_decimal(1.2345, 'D') == (106660800, 'L')
with pytest.raises(ValueError):
Reso.get_stride_from_decimal(0.5, 'N')
@@ -445,54 +440,54 @@ def test_resolution_bumping(self):
Reso.get_stride_from_decimal(0.3429324798798269273987982, 'H')
def test_get_freq_code(self):
- # freqstr
- self.assertEqual(frequencies.get_freq_code('A'),
- (frequencies.get_freq('A'), 1))
- self.assertEqual(frequencies.get_freq_code('3D'),
- (frequencies.get_freq('D'), 3))
- self.assertEqual(frequencies.get_freq_code('-2M'),
- (frequencies.get_freq('M'), -2))
+ # frequency str
+ assert (frequencies.get_freq_code('A') ==
+ (frequencies.get_freq('A'), 1))
+ assert (frequencies.get_freq_code('3D') ==
+ (frequencies.get_freq('D'), 3))
+ assert (frequencies.get_freq_code('-2M') ==
+ (frequencies.get_freq('M'), -2))
# tuple
- self.assertEqual(frequencies.get_freq_code(('D', 1)),
- (frequencies.get_freq('D'), 1))
- self.assertEqual(frequencies.get_freq_code(('A', 3)),
- (frequencies.get_freq('A'), 3))
- self.assertEqual(frequencies.get_freq_code(('M', -2)),
- (frequencies.get_freq('M'), -2))
+ assert (frequencies.get_freq_code(('D', 1)) ==
+ (frequencies.get_freq('D'), 1))
+ assert (frequencies.get_freq_code(('A', 3)) ==
+ (frequencies.get_freq('A'), 3))
+ assert (frequencies.get_freq_code(('M', -2)) ==
+ (frequencies.get_freq('M'), -2))
+
# numeric tuple
- self.assertEqual(frequencies.get_freq_code((1000, 1)), (1000, 1))
+ assert frequencies.get_freq_code((1000, 1)) == (1000, 1)
# offsets
- self.assertEqual(frequencies.get_freq_code(offsets.Day()),
- (frequencies.get_freq('D'), 1))
- self.assertEqual(frequencies.get_freq_code(offsets.Day(3)),
- (frequencies.get_freq('D'), 3))
- self.assertEqual(frequencies.get_freq_code(offsets.Day(-2)),
- (frequencies.get_freq('D'), -2))
-
- self.assertEqual(frequencies.get_freq_code(offsets.MonthEnd()),
- (frequencies.get_freq('M'), 1))
- self.assertEqual(frequencies.get_freq_code(offsets.MonthEnd(3)),
- (frequencies.get_freq('M'), 3))
- self.assertEqual(frequencies.get_freq_code(offsets.MonthEnd(-2)),
- (frequencies.get_freq('M'), -2))
-
- self.assertEqual(frequencies.get_freq_code(offsets.Week()),
- (frequencies.get_freq('W'), 1))
- self.assertEqual(frequencies.get_freq_code(offsets.Week(3)),
- (frequencies.get_freq('W'), 3))
- self.assertEqual(frequencies.get_freq_code(offsets.Week(-2)),
- (frequencies.get_freq('W'), -2))
-
- # monday is weekday=0
- self.assertEqual(frequencies.get_freq_code(offsets.Week(weekday=1)),
- (frequencies.get_freq('W-TUE'), 1))
- self.assertEqual(frequencies.get_freq_code(offsets.Week(3, weekday=0)),
- (frequencies.get_freq('W-MON'), 3))
- self.assertEqual(
- frequencies.get_freq_code(offsets.Week(-2, weekday=4)),
- (frequencies.get_freq('W-FRI'), -2))
+ assert (frequencies.get_freq_code(offsets.Day()) ==
+ (frequencies.get_freq('D'), 1))
+ assert (frequencies.get_freq_code(offsets.Day(3)) ==
+ (frequencies.get_freq('D'), 3))
+ assert (frequencies.get_freq_code(offsets.Day(-2)) ==
+ (frequencies.get_freq('D'), -2))
+
+ assert (frequencies.get_freq_code(offsets.MonthEnd()) ==
+ (frequencies.get_freq('M'), 1))
+ assert (frequencies.get_freq_code(offsets.MonthEnd(3)) ==
+ (frequencies.get_freq('M'), 3))
+ assert (frequencies.get_freq_code(offsets.MonthEnd(-2)) ==
+ (frequencies.get_freq('M'), -2))
+
+ assert (frequencies.get_freq_code(offsets.Week()) ==
+ (frequencies.get_freq('W'), 1))
+ assert (frequencies.get_freq_code(offsets.Week(3)) ==
+ (frequencies.get_freq('W'), 3))
+ assert (frequencies.get_freq_code(offsets.Week(-2)) ==
+ (frequencies.get_freq('W'), -2))
+
+ # Monday is weekday=0
+ assert (frequencies.get_freq_code(offsets.Week(weekday=1)) ==
+ (frequencies.get_freq('W-TUE'), 1))
+ assert (frequencies.get_freq_code(offsets.Week(3, weekday=0)) ==
+ (frequencies.get_freq('W-MON'), 3))
+ assert (frequencies.get_freq_code(offsets.Week(-2, weekday=4)) ==
+ (frequencies.get_freq('W-FRI'), -2))
_dti = DatetimeIndex
@@ -510,18 +505,18 @@ def test_raise_if_too_few(self):
def test_business_daily(self):
index = _dti(['12/31/1998', '1/3/1999', '1/4/1999'])
- self.assertEqual(frequencies.infer_freq(index), 'B')
+ assert frequencies.infer_freq(index) == 'B'
def test_day(self):
self._check_tick(timedelta(1), 'D')
def test_day_corner(self):
index = _dti(['1/1/2000', '1/2/2000', '1/3/2000'])
- self.assertEqual(frequencies.infer_freq(index), 'D')
+ assert frequencies.infer_freq(index) == 'D'
def test_non_datetimeindex(self):
dates = to_datetime(['1/1/2000', '1/2/2000', '1/3/2000'])
- self.assertEqual(frequencies.infer_freq(dates), 'D')
+ assert frequencies.infer_freq(dates) == 'D'
def test_hour(self):
self._check_tick(timedelta(hours=1), 'H')
@@ -550,7 +545,7 @@ def _check_tick(self, base_delta, code):
exp_freq = '%d%s' % (i, code)
else:
exp_freq = code
- self.assertEqual(frequencies.infer_freq(index), exp_freq)
+ assert frequencies.infer_freq(index) == exp_freq
index = _dti([b + base_delta * 7] + [b + base_delta * j for j in range(
3)])
@@ -595,7 +590,7 @@ def test_monthly(self):
def test_monthly_ambiguous(self):
rng = _dti(['1/31/2000', '2/29/2000', '3/31/2000'])
- self.assertEqual(rng.inferred_freq, 'M')
+ assert rng.inferred_freq == 'M'
def test_business_monthly(self):
self._check_generated_range('1/1/2000', 'BM')
@@ -617,7 +612,7 @@ def test_business_annual(self):
def test_annual_ambiguous(self):
rng = _dti(['1/31/2000', '1/31/2001', '1/31/2002'])
- self.assertEqual(rng.inferred_freq, 'A-JAN')
+ assert rng.inferred_freq == 'A-JAN'
def _check_generated_range(self, start, freq):
freq = freq.upper()
@@ -625,7 +620,7 @@ def _check_generated_range(self, start, freq):
gen = date_range(start, periods=7, freq=freq)
index = _dti(gen.values)
if not freq.startswith('Q-'):
- self.assertEqual(frequencies.infer_freq(index), gen.freqstr)
+ assert frequencies.infer_freq(index) == gen.freqstr
else:
inf_freq = frequencies.infer_freq(index)
is_dec_range = inf_freq == 'Q-DEC' and gen.freqstr in (
@@ -640,7 +635,7 @@ def _check_generated_range(self, start, freq):
index = _dti(gen.values)
if not freq.startswith('Q-'):
- self.assertEqual(frequencies.infer_freq(index), gen.freqstr)
+ assert frequencies.infer_freq(index) == gen.freqstr
else:
inf_freq = frequencies.infer_freq(index)
is_dec_range = inf_freq == 'Q-DEC' and gen.freqstr in (
@@ -655,15 +650,15 @@ def _check_generated_range(self, start, freq):
def test_infer_freq(self):
rng = period_range('1959Q2', '2009Q3', freq='Q')
rng = Index(rng.to_timestamp('D', how='e').asobject)
- self.assertEqual(rng.inferred_freq, 'Q-DEC')
+ assert rng.inferred_freq == 'Q-DEC'
rng = period_range('1959Q2', '2009Q3', freq='Q-NOV')
rng = Index(rng.to_timestamp('D', how='e').asobject)
- self.assertEqual(rng.inferred_freq, 'Q-NOV')
+ assert rng.inferred_freq == 'Q-NOV'
rng = period_range('1959Q2', '2009Q3', freq='Q-OCT')
rng = Index(rng.to_timestamp('D', how='e').asobject)
- self.assertEqual(rng.inferred_freq, 'Q-OCT')
+ assert rng.inferred_freq == 'Q-OCT'
def test_infer_freq_tz(self):
@@ -683,7 +678,7 @@ def test_infer_freq_tz(self):
'US/Pacific', 'US/Eastern']:
for expected, dates in compat.iteritems(freqs):
idx = DatetimeIndex(dates, tz=tz)
- self.assertEqual(idx.inferred_freq, expected)
+ assert idx.inferred_freq == expected
def test_infer_freq_tz_transition(self):
# Tests for #8772
@@ -699,7 +694,7 @@ def test_infer_freq_tz_transition(self):
for freq in freqs:
idx = date_range(date_pair[0], date_pair[
1], freq=freq, tz=tz)
- self.assertEqual(idx.inferred_freq, freq)
+ assert idx.inferred_freq == freq
index = date_range("2013-11-03", periods=5,
freq="3H").tz_localize("America/Chicago")
@@ -711,21 +706,21 @@ def test_infer_freq_businesshour(self):
['2014-07-01 09:00', '2014-07-01 10:00', '2014-07-01 11:00',
'2014-07-01 12:00', '2014-07-01 13:00', '2014-07-01 14:00'])
# hourly freq in a day must result in 'H'
- self.assertEqual(idx.inferred_freq, 'H')
+ assert idx.inferred_freq == 'H'
idx = DatetimeIndex(
['2014-07-01 09:00', '2014-07-01 10:00', '2014-07-01 11:00',
'2014-07-01 12:00', '2014-07-01 13:00', '2014-07-01 14:00',
'2014-07-01 15:00', '2014-07-01 16:00', '2014-07-02 09:00',
'2014-07-02 10:00', '2014-07-02 11:00'])
- self.assertEqual(idx.inferred_freq, 'BH')
+ assert idx.inferred_freq == 'BH'
idx = DatetimeIndex(
['2014-07-04 09:00', '2014-07-04 10:00', '2014-07-04 11:00',
'2014-07-04 12:00', '2014-07-04 13:00', '2014-07-04 14:00',
'2014-07-04 15:00', '2014-07-04 16:00', '2014-07-07 09:00',
'2014-07-07 10:00', '2014-07-07 11:00'])
- self.assertEqual(idx.inferred_freq, 'BH')
+ assert idx.inferred_freq == 'BH'
idx = DatetimeIndex(
['2014-07-04 09:00', '2014-07-04 10:00', '2014-07-04 11:00',
@@ -736,12 +731,12 @@ def test_infer_freq_businesshour(self):
'2014-07-07 16:00', '2014-07-08 09:00', '2014-07-08 10:00',
'2014-07-08 11:00', '2014-07-08 12:00', '2014-07-08 13:00',
'2014-07-08 14:00', '2014-07-08 15:00', '2014-07-08 16:00'])
- self.assertEqual(idx.inferred_freq, 'BH')
+ assert idx.inferred_freq == 'BH'
def test_not_monotonic(self):
rng = _dti(['1/31/2000', '1/31/2001', '1/31/2002'])
rng = rng[::-1]
- self.assertEqual(rng.inferred_freq, '-1A-JAN')
+ assert rng.inferred_freq == '-1A-JAN'
def test_non_datetimeindex2(self):
rng = _dti(['1/31/2000', '1/31/2001', '1/31/2002'])
@@ -749,7 +744,7 @@ def test_non_datetimeindex2(self):
vals = rng.to_pydatetime()
result = frequencies.infer_freq(vals)
- self.assertEqual(result, rng.inferred_freq)
+ assert result == rng.inferred_freq
def test_invalid_index_types(self):
@@ -771,7 +766,7 @@ def test_string_datetimelike_compat(self):
'2004-04'])
result = frequencies.infer_freq(Index(['2004-01', '2004-02', '2004-03',
'2004-04']))
- self.assertEqual(result, expected)
+ assert result == expected
def test_series(self):
diff --git a/pandas/tests/tseries/test_holiday.py b/pandas/tests/tseries/test_holiday.py
index c87f580582335..109adaaa7e0b0 100644
--- a/pandas/tests/tseries/test_holiday.py
+++ b/pandas/tests/tseries/test_holiday.py
@@ -49,9 +49,9 @@ def test_calendar(self):
Timestamp(self.start_date),
Timestamp(self.end_date))
- self.assertEqual(list(holidays.to_pydatetime()), self.holiday_list)
- self.assertEqual(list(holidays_1.to_pydatetime()), self.holiday_list)
- self.assertEqual(list(holidays_2.to_pydatetime()), self.holiday_list)
+ assert list(holidays.to_pydatetime()) == self.holiday_list
+ assert list(holidays_1.to_pydatetime()) == self.holiday_list
+ assert list(holidays_2.to_pydatetime()) == self.holiday_list
def test_calendar_caching(self):
# Test for issue #9552
@@ -82,8 +82,7 @@ def test_calendar_observance_dates(self):
def test_rule_from_name(self):
USFedCal = get_calendar('USFederalHolidayCalendar')
- self.assertEqual(USFedCal.rule_from_name(
- 'Thanksgiving'), USThanksgivingDay)
+ assert USFedCal.rule_from_name('Thanksgiving') == USThanksgivingDay
class TestHoliday(tm.TestCase):
@@ -93,17 +92,12 @@ def setUp(self):
self.end_date = datetime(2020, 12, 31)
def check_results(self, holiday, start, end, expected):
- self.assertEqual(list(holiday.dates(start, end)), expected)
+ assert list(holiday.dates(start, end)) == expected
+
# Verify that timezone info is preserved.
- self.assertEqual(
- list(
- holiday.dates(
- utc.localize(Timestamp(start)),
- utc.localize(Timestamp(end)),
- )
- ),
- [utc.localize(dt) for dt in expected],
- )
+ assert (list(holiday.dates(utc.localize(Timestamp(start)),
+ utc.localize(Timestamp(end)))) ==
+ [utc.localize(dt) for dt in expected])
def test_usmemorialday(self):
self.check_results(holiday=USMemorialDay,
@@ -234,7 +228,7 @@ def test_holidays_within_dates(self):
for rule, dates in compat.iteritems(holidays):
empty_dates = rule.dates(start_date, end_date)
- self.assertEqual(empty_dates.tolist(), [])
+ assert empty_dates.tolist() == []
if isinstance(dates, tuple):
dates = [dates]
@@ -266,17 +260,15 @@ def test_special_holidays(self):
end_date=datetime(2012, 12, 31),
offset=DateOffset(weekday=MO(1)))
- self.assertEqual(base_date,
- holiday_1.dates(self.start_date, self.end_date))
- self.assertEqual(base_date,
- holiday_2.dates(self.start_date, self.end_date))
+ assert base_date == holiday_1.dates(self.start_date, self.end_date)
+ assert base_date == holiday_2.dates(self.start_date, self.end_date)
def test_get_calendar(self):
class TestCalendar(AbstractHolidayCalendar):
rules = []
calendar = get_calendar('TestCalendar')
- self.assertEqual(TestCalendar, calendar.__class__)
+ assert TestCalendar == calendar.__class__
def test_factory(self):
class_1 = HolidayCalendarFactory('MemorialDay',
@@ -287,9 +279,9 @@ def test_factory(self):
USThanksgivingDay)
class_3 = HolidayCalendarFactory('Combined', class_1, class_2)
- self.assertEqual(len(class_1.rules), 1)
- self.assertEqual(len(class_2.rules), 1)
- self.assertEqual(len(class_3.rules), 2)
+ assert len(class_1.rules) == 1
+ assert len(class_2.rules) == 1
+ assert len(class_3.rules) == 2
class TestObservanceRules(tm.TestCase):
@@ -304,64 +296,65 @@ def setUp(self):
self.tu = datetime(2014, 4, 15)
def test_next_monday(self):
- self.assertEqual(next_monday(self.sa), self.mo)
- self.assertEqual(next_monday(self.su), self.mo)
+ assert next_monday(self.sa) == self.mo
+ assert next_monday(self.su) == self.mo
def test_next_monday_or_tuesday(self):
- self.assertEqual(next_monday_or_tuesday(self.sa), self.mo)
- self.assertEqual(next_monday_or_tuesday(self.su), self.tu)
- self.assertEqual(next_monday_or_tuesday(self.mo), self.tu)
+ assert next_monday_or_tuesday(self.sa) == self.mo
+ assert next_monday_or_tuesday(self.su) == self.tu
+ assert next_monday_or_tuesday(self.mo) == self.tu
def test_previous_friday(self):
- self.assertEqual(previous_friday(self.sa), self.fr)
- self.assertEqual(previous_friday(self.su), self.fr)
+ assert previous_friday(self.sa) == self.fr
+ assert previous_friday(self.su) == self.fr
def test_sunday_to_monday(self):
- self.assertEqual(sunday_to_monday(self.su), self.mo)
+ assert sunday_to_monday(self.su) == self.mo
def test_nearest_workday(self):
- self.assertEqual(nearest_workday(self.sa), self.fr)
- self.assertEqual(nearest_workday(self.su), self.mo)
- self.assertEqual(nearest_workday(self.mo), self.mo)
+ assert nearest_workday(self.sa) == self.fr
+ assert nearest_workday(self.su) == self.mo
+ assert nearest_workday(self.mo) == self.mo
def test_weekend_to_monday(self):
- self.assertEqual(weekend_to_monday(self.sa), self.mo)
- self.assertEqual(weekend_to_monday(self.su), self.mo)
- self.assertEqual(weekend_to_monday(self.mo), self.mo)
+ assert weekend_to_monday(self.sa) == self.mo
+ assert weekend_to_monday(self.su) == self.mo
+ assert weekend_to_monday(self.mo) == self.mo
def test_next_workday(self):
- self.assertEqual(next_workday(self.sa), self.mo)
- self.assertEqual(next_workday(self.su), self.mo)
- self.assertEqual(next_workday(self.mo), self.tu)
+ assert next_workday(self.sa) == self.mo
+ assert next_workday(self.su) == self.mo
+ assert next_workday(self.mo) == self.tu
def test_previous_workday(self):
- self.assertEqual(previous_workday(self.sa), self.fr)
- self.assertEqual(previous_workday(self.su), self.fr)
- self.assertEqual(previous_workday(self.tu), self.mo)
+ assert previous_workday(self.sa) == self.fr
+ assert previous_workday(self.su) == self.fr
+ assert previous_workday(self.tu) == self.mo
def test_before_nearest_workday(self):
- self.assertEqual(before_nearest_workday(self.sa), self.th)
- self.assertEqual(before_nearest_workday(self.su), self.fr)
- self.assertEqual(before_nearest_workday(self.tu), self.mo)
+ assert before_nearest_workday(self.sa) == self.th
+ assert before_nearest_workday(self.su) == self.fr
+ assert before_nearest_workday(self.tu) == self.mo
def test_after_nearest_workday(self):
- self.assertEqual(after_nearest_workday(self.sa), self.mo)
- self.assertEqual(after_nearest_workday(self.su), self.tu)
- self.assertEqual(after_nearest_workday(self.fr), self.mo)
+ assert after_nearest_workday(self.sa) == self.mo
+ assert after_nearest_workday(self.su) == self.tu
+ assert after_nearest_workday(self.fr) == self.mo
class TestFederalHolidayCalendar(tm.TestCase):
- # Test for issue 10278
def test_no_mlk_before_1984(self):
+ # see gh-10278
class MLKCalendar(AbstractHolidayCalendar):
rules = [USMartinLutherKingJr]
holidays = MLKCalendar().holidays(start='1984',
end='1988').to_pydatetime().tolist()
+
# Testing to make sure holiday is not incorrectly observed before 1986
- self.assertEqual(holidays, [datetime(1986, 1, 20, 0, 0), datetime(
- 1987, 1, 19, 0, 0)])
+ assert holidays == [datetime(1986, 1, 20, 0, 0),
+ datetime(1987, 1, 19, 0, 0)]
def test_memorial_day(self):
class MemorialDay(AbstractHolidayCalendar):
@@ -369,23 +362,23 @@ class MemorialDay(AbstractHolidayCalendar):
holidays = MemorialDay().holidays(start='1971',
end='1980').to_pydatetime().tolist()
- # Fixes 5/31 error and checked manually against wikipedia
- self.assertEqual(holidays, [datetime(1971, 5, 31, 0, 0),
- datetime(1972, 5, 29, 0, 0),
- datetime(1973, 5, 28, 0, 0),
- datetime(1974, 5, 27, 0,
- 0), datetime(1975, 5, 26, 0, 0),
- datetime(1976, 5, 31, 0,
- 0), datetime(1977, 5, 30, 0, 0),
- datetime(1978, 5, 29, 0,
- 0), datetime(1979, 5, 28, 0, 0)])
+ # Fixes 5/31 error and checked manually against Wikipedia
+ assert holidays == [datetime(1971, 5, 31, 0, 0),
+ datetime(1972, 5, 29, 0, 0),
+ datetime(1973, 5, 28, 0, 0),
+ datetime(1974, 5, 27, 0, 0),
+ datetime(1975, 5, 26, 0, 0),
+ datetime(1976, 5, 31, 0, 0),
+ datetime(1977, 5, 30, 0, 0),
+ datetime(1978, 5, 29, 0, 0),
+ datetime(1979, 5, 28, 0, 0)]
-class TestHolidayConflictingArguments(tm.TestCase):
- # GH 10217
+class TestHolidayConflictingArguments(tm.TestCase):
def test_both_offset_observance_raises(self):
+ # see gh-10217
with pytest.raises(NotImplementedError):
Holiday("Cyber Monday", month=11, day=1,
offset=[DateOffset(weekday=SA(4))],
diff --git a/pandas/tests/tseries/test_offsets.py b/pandas/tests/tseries/test_offsets.py
index 08f17fc358a47..ce4208a8cea69 100644
--- a/pandas/tests/tseries/test_offsets.py
+++ b/pandas/tests/tseries/test_offsets.py
@@ -155,7 +155,7 @@ def test_apply_out_of_range(self):
t = Timestamp('20080101', tz=tz)
result = t + offset
assert isinstance(result, datetime)
- self.assertEqual(t.tzinfo, result.tzinfo)
+ assert t.tzinfo == result.tzinfo
except (tslib.OutOfBoundsDatetime):
raise
@@ -230,13 +230,13 @@ def test_return_type(self):
def test_offset_n(self):
for offset_klass in self.offset_types:
offset = self._get_offset(offset_klass)
- self.assertEqual(offset.n, 1)
+ assert offset.n == 1
neg_offset = offset * -1
- self.assertEqual(neg_offset.n, -1)
+ assert neg_offset.n == -1
mul_offset = offset * 3
- self.assertEqual(mul_offset.n, 3)
+ assert mul_offset.n == 3
def test_offset_freqstr(self):
for offset_klass in self.offset_types:
@@ -247,7 +247,7 @@ def test_offset_freqstr(self):
"<DateOffset: kwds={'days': 1}>",
'LWOM-SAT', ):
code = get_offset(freqstr)
- self.assertEqual(offset.rule_code, code)
+ assert offset.rule_code == code
def _check_offsetfunc_works(self, offset, funcname, dt, expected,
normalize=False):
@@ -256,11 +256,11 @@ def _check_offsetfunc_works(self, offset, funcname, dt, expected,
result = func(dt)
assert isinstance(result, Timestamp)
- self.assertEqual(result, expected)
+ assert result == expected
result = func(Timestamp(dt))
assert isinstance(result, Timestamp)
- self.assertEqual(result, expected)
+ assert result == expected
# see gh-14101
exp_warning = None
@@ -277,9 +277,9 @@ def _check_offsetfunc_works(self, offset, funcname, dt, expected,
result = func(ts)
assert isinstance(result, Timestamp)
if normalize is False:
- self.assertEqual(result, expected + Nano(5))
+ assert result == expected + Nano(5)
else:
- self.assertEqual(result, expected)
+ assert result == expected
if isinstance(dt, np.datetime64):
# test tz when input is datetime or Timestamp
@@ -295,11 +295,11 @@ def _check_offsetfunc_works(self, offset, funcname, dt, expected,
result = func(dt_tz)
assert isinstance(result, Timestamp)
- self.assertEqual(result, expected_localize)
+ assert result == expected_localize
result = func(Timestamp(dt, tz=tz))
assert isinstance(result, Timestamp)
- self.assertEqual(result, expected_localize)
+ assert result == expected_localize
# see gh-14101
exp_warning = None
@@ -316,9 +316,9 @@ def _check_offsetfunc_works(self, offset, funcname, dt, expected,
result = func(ts)
assert isinstance(result, Timestamp)
if normalize is False:
- self.assertEqual(result, expected_localize + Nano(5))
+ assert result == expected_localize + Nano(5)
else:
- self.assertEqual(result, expected_localize)
+ assert result == expected_localize
def test_apply(self):
sdt = datetime(2011, 1, 1, 9, 0)
@@ -466,14 +466,14 @@ def test_add(self):
result_ts = Timestamp(dt) + offset_s
for result in [result_dt, result_ts]:
assert isinstance(result, Timestamp)
- self.assertEqual(result, expected)
+ assert result == expected
tm._skip_if_no_pytz()
for tz in self.timezones:
expected_localize = expected.tz_localize(tz)
result = Timestamp(dt, tz=tz) + offset_s
assert isinstance(result, Timestamp)
- self.assertEqual(result, expected_localize)
+ assert result == expected_localize
# normalize=True
offset_s = self._get_offset(offset, normalize=True)
@@ -483,13 +483,13 @@ def test_add(self):
result_ts = Timestamp(dt) + offset_s
for result in [result_dt, result_ts]:
assert isinstance(result, Timestamp)
- self.assertEqual(result, expected)
+ assert result == expected
for tz in self.timezones:
expected_localize = expected.tz_localize(tz)
result = Timestamp(dt, tz=tz) + offset_s
assert isinstance(result, Timestamp)
- self.assertEqual(result, expected_localize)
+ assert result == expected_localize
def test_pickle_v0_15_2(self):
offsets = {'DateOffset': DateOffset(years=1),
@@ -558,10 +558,10 @@ def test_different_normalize_equals(self):
offset = BDay()
offset2 = BDay()
offset2.normalize = True
- self.assertEqual(offset, offset2)
+ assert offset == offset2
def test_repr(self):
- self.assertEqual(repr(self.offset), '<BusinessDay>')
+ assert repr(self.offset) == '<BusinessDay>'
assert repr(self.offset2) == '<2 * BusinessDays>'
expected = '<BusinessDay: offset=datetime.timedelta(1)>'
@@ -573,49 +573,49 @@ def test_with_offset(self):
assert (self.d + offset) == datetime(2008, 1, 2, 2)
def testEQ(self):
- self.assertEqual(self.offset2, self.offset2)
+ assert self.offset2 == self.offset2
def test_mul(self):
pass
def test_hash(self):
- self.assertEqual(hash(self.offset2), hash(self.offset2))
+ assert hash(self.offset2) == hash(self.offset2)
def testCall(self):
- self.assertEqual(self.offset2(self.d), datetime(2008, 1, 3))
+ assert self.offset2(self.d) == datetime(2008, 1, 3)
def testRAdd(self):
- self.assertEqual(self.d + self.offset2, self.offset2 + self.d)
+ assert self.d + self.offset2 == self.offset2 + self.d
def testSub(self):
off = self.offset2
pytest.raises(Exception, off.__sub__, self.d)
- self.assertEqual(2 * off - off, off)
+ assert 2 * off - off == off
- self.assertEqual(self.d - self.offset2, self.d + BDay(-2))
+ assert self.d - self.offset2 == self.d + BDay(-2)
def testRSub(self):
- self.assertEqual(self.d - self.offset2, (-self.offset2).apply(self.d))
+ assert self.d - self.offset2 == (-self.offset2).apply(self.d)
def testMult1(self):
- self.assertEqual(self.d + 10 * self.offset, self.d + BDay(10))
+ assert self.d + 10 * self.offset == self.d + BDay(10)
def testMult2(self):
- self.assertEqual(self.d + (-5 * BDay(-10)), self.d + BDay(50))
+ assert self.d + (-5 * BDay(-10)) == self.d + BDay(50)
def testRollback1(self):
- self.assertEqual(BDay(10).rollback(self.d), self.d)
+ assert BDay(10).rollback(self.d) == self.d
def testRollback2(self):
- self.assertEqual(
- BDay(10).rollback(datetime(2008, 1, 5)), datetime(2008, 1, 4))
+ assert (BDay(10).rollback(datetime(2008, 1, 5)) ==
+ datetime(2008, 1, 4))
def testRollforward1(self):
- self.assertEqual(BDay(10).rollforward(self.d), self.d)
+ assert BDay(10).rollforward(self.d) == self.d
def testRollforward2(self):
- self.assertEqual(
- BDay(10).rollforward(datetime(2008, 1, 5)), datetime(2008, 1, 7))
+ assert (BDay(10).rollforward(datetime(2008, 1, 5)) ==
+ datetime(2008, 1, 7))
def test_roll_date_object(self):
offset = BDay()
@@ -623,17 +623,17 @@ def test_roll_date_object(self):
dt = date(2012, 9, 15)
result = offset.rollback(dt)
- self.assertEqual(result, datetime(2012, 9, 14))
+ assert result == datetime(2012, 9, 14)
result = offset.rollforward(dt)
- self.assertEqual(result, datetime(2012, 9, 17))
+ assert result == datetime(2012, 9, 17)
offset = offsets.Day()
result = offset.rollback(dt)
- self.assertEqual(result, datetime(2012, 9, 15))
+ assert result == datetime(2012, 9, 15)
result = offset.rollforward(dt)
- self.assertEqual(result, datetime(2012, 9, 15))
+ assert result == datetime(2012, 9, 15)
def test_onOffset(self):
tests = [(BDay(), datetime(2008, 1, 1), True),
@@ -691,25 +691,25 @@ def test_apply_large_n(self):
dt = datetime(2012, 10, 23)
result = dt + BDay(10)
- self.assertEqual(result, datetime(2012, 11, 6))
+ assert result == datetime(2012, 11, 6)
result = dt + BDay(100) - BDay(100)
- self.assertEqual(result, dt)
+ assert result == dt
off = BDay() * 6
rs = datetime(2012, 1, 1) - off
xp = datetime(2011, 12, 23)
- self.assertEqual(rs, xp)
+ assert rs == xp
st = datetime(2011, 12, 18)
rs = st + off
xp = datetime(2011, 12, 26)
- self.assertEqual(rs, xp)
+ assert rs == xp
off = BDay() * 10
rs = datetime(2014, 1, 5) + off # see #5890
xp = datetime(2014, 1, 17)
- self.assertEqual(rs, xp)
+ assert rs == xp
def test_apply_corner(self):
pytest.raises(TypeError, BDay().apply, BMonthEnd())
@@ -753,34 +753,30 @@ def test_different_normalize_equals(self):
offset = self._offset()
offset2 = self._offset()
offset2.normalize = True
- self.assertEqual(offset, offset2)
+ assert offset == offset2
def test_repr(self):
- self.assertEqual(repr(self.offset1), '<BusinessHour: BH=09:00-17:00>')
- self.assertEqual(repr(self.offset2),
- '<3 * BusinessHours: BH=09:00-17:00>')
- self.assertEqual(repr(self.offset3),
- '<-1 * BusinessHour: BH=09:00-17:00>')
- self.assertEqual(repr(self.offset4),
- '<-4 * BusinessHours: BH=09:00-17:00>')
-
- self.assertEqual(repr(self.offset5), '<BusinessHour: BH=11:00-14:30>')
- self.assertEqual(repr(self.offset6), '<BusinessHour: BH=20:00-05:00>')
- self.assertEqual(repr(self.offset7),
- '<-2 * BusinessHours: BH=21:30-06:30>')
+ assert repr(self.offset1) == '<BusinessHour: BH=09:00-17:00>'
+ assert repr(self.offset2) == '<3 * BusinessHours: BH=09:00-17:00>'
+ assert repr(self.offset3) == '<-1 * BusinessHour: BH=09:00-17:00>'
+ assert repr(self.offset4) == '<-4 * BusinessHours: BH=09:00-17:00>'
+
+ assert repr(self.offset5) == '<BusinessHour: BH=11:00-14:30>'
+ assert repr(self.offset6) == '<BusinessHour: BH=20:00-05:00>'
+ assert repr(self.offset7) == '<-2 * BusinessHours: BH=21:30-06:30>'
def test_with_offset(self):
expected = Timestamp('2014-07-01 13:00')
- self.assertEqual(self.d + BusinessHour() * 3, expected)
- self.assertEqual(self.d + BusinessHour(n=3), expected)
+ assert self.d + BusinessHour() * 3 == expected
+ assert self.d + BusinessHour(n=3) == expected
def testEQ(self):
for offset in [self.offset1, self.offset2, self.offset3, self.offset4]:
- self.assertEqual(offset, offset)
+ assert offset == offset
self.assertNotEqual(BusinessHour(), BusinessHour(-1))
- self.assertEqual(BusinessHour(start='09:00'), BusinessHour())
+ assert BusinessHour(start='09:00') == BusinessHour()
self.assertNotEqual(BusinessHour(start='09:00'),
BusinessHour(start='09:01'))
self.assertNotEqual(BusinessHour(start='09:00', end='17:00'),
@@ -788,90 +784,83 @@ def testEQ(self):
def test_hash(self):
for offset in [self.offset1, self.offset2, self.offset3, self.offset4]:
- self.assertEqual(hash(offset), hash(offset))
+ assert hash(offset) == hash(offset)
def testCall(self):
- self.assertEqual(self.offset1(self.d), datetime(2014, 7, 1, 11))
- self.assertEqual(self.offset2(self.d), datetime(2014, 7, 1, 13))
- self.assertEqual(self.offset3(self.d), datetime(2014, 6, 30, 17))
- self.assertEqual(self.offset4(self.d), datetime(2014, 6, 30, 14))
+ assert self.offset1(self.d) == datetime(2014, 7, 1, 11)
+ assert self.offset2(self.d) == datetime(2014, 7, 1, 13)
+ assert self.offset3(self.d) == datetime(2014, 6, 30, 17)
+ assert self.offset4(self.d) == datetime(2014, 6, 30, 14)
def testRAdd(self):
- self.assertEqual(self.d + self.offset2, self.offset2 + self.d)
+ assert self.d + self.offset2 == self.offset2 + self.d
def testSub(self):
off = self.offset2
pytest.raises(Exception, off.__sub__, self.d)
- self.assertEqual(2 * off - off, off)
+ assert 2 * off - off == off
- self.assertEqual(self.d - self.offset2, self.d + self._offset(-3))
+ assert self.d - self.offset2 == self.d + self._offset(-3)
def testRSub(self):
- self.assertEqual(self.d - self.offset2, (-self.offset2).apply(self.d))
+ assert self.d - self.offset2 == (-self.offset2).apply(self.d)
def testMult1(self):
- self.assertEqual(self.d + 5 * self.offset1, self.d + self._offset(5))
+ assert self.d + 5 * self.offset1 == self.d + self._offset(5)
def testMult2(self):
- self.assertEqual(self.d + (-3 * self._offset(-2)),
- self.d + self._offset(6))
+ assert self.d + (-3 * self._offset(-2)) == self.d + self._offset(6)
def testRollback1(self):
- self.assertEqual(self.offset1.rollback(self.d), self.d)
- self.assertEqual(self.offset2.rollback(self.d), self.d)
- self.assertEqual(self.offset3.rollback(self.d), self.d)
- self.assertEqual(self.offset4.rollback(self.d), self.d)
- self.assertEqual(self.offset5.rollback(self.d),
- datetime(2014, 6, 30, 14, 30))
- self.assertEqual(self.offset6.rollback(
- self.d), datetime(2014, 7, 1, 5, 0))
- self.assertEqual(self.offset7.rollback(
- self.d), datetime(2014, 7, 1, 6, 30))
+ assert self.offset1.rollback(self.d) == self.d
+ assert self.offset2.rollback(self.d) == self.d
+ assert self.offset3.rollback(self.d) == self.d
+ assert self.offset4.rollback(self.d) == self.d
+ assert self.offset5.rollback(self.d) == datetime(2014, 6, 30, 14, 30)
+ assert self.offset6.rollback(self.d) == datetime(2014, 7, 1, 5, 0)
+ assert self.offset7.rollback(self.d) == datetime(2014, 7, 1, 6, 30)
d = datetime(2014, 7, 1, 0)
- self.assertEqual(self.offset1.rollback(d), datetime(2014, 6, 30, 17))
- self.assertEqual(self.offset2.rollback(d), datetime(2014, 6, 30, 17))
- self.assertEqual(self.offset3.rollback(d), datetime(2014, 6, 30, 17))
- self.assertEqual(self.offset4.rollback(d), datetime(2014, 6, 30, 17))
- self.assertEqual(self.offset5.rollback(
- d), datetime(2014, 6, 30, 14, 30))
- self.assertEqual(self.offset6.rollback(d), d)
- self.assertEqual(self.offset7.rollback(d), d)
+ assert self.offset1.rollback(d) == datetime(2014, 6, 30, 17)
+ assert self.offset2.rollback(d) == datetime(2014, 6, 30, 17)
+ assert self.offset3.rollback(d) == datetime(2014, 6, 30, 17)
+ assert self.offset4.rollback(d) == datetime(2014, 6, 30, 17)
+ assert self.offset5.rollback(d) == datetime(2014, 6, 30, 14, 30)
+ assert self.offset6.rollback(d) == d
+ assert self.offset7.rollback(d) == d
- self.assertEqual(self._offset(5).rollback(self.d), self.d)
+ assert self._offset(5).rollback(self.d) == self.d
def testRollback2(self):
- self.assertEqual(self._offset(-3)
- .rollback(datetime(2014, 7, 5, 15, 0)),
- datetime(2014, 7, 4, 17, 0))
+ assert (self._offset(-3).rollback(datetime(2014, 7, 5, 15, 0)) ==
+ datetime(2014, 7, 4, 17, 0))
def testRollforward1(self):
- self.assertEqual(self.offset1.rollforward(self.d), self.d)
- self.assertEqual(self.offset2.rollforward(self.d), self.d)
- self.assertEqual(self.offset3.rollforward(self.d), self.d)
- self.assertEqual(self.offset4.rollforward(self.d), self.d)
- self.assertEqual(self.offset5.rollforward(
- self.d), datetime(2014, 7, 1, 11, 0))
- self.assertEqual(self.offset6.rollforward(
- self.d), datetime(2014, 7, 1, 20, 0))
- self.assertEqual(self.offset7.rollforward(
- self.d), datetime(2014, 7, 1, 21, 30))
+ assert self.offset1.rollforward(self.d) == self.d
+ assert self.offset2.rollforward(self.d) == self.d
+ assert self.offset3.rollforward(self.d) == self.d
+ assert self.offset4.rollforward(self.d) == self.d
+ assert (self.offset5.rollforward(self.d) ==
+ datetime(2014, 7, 1, 11, 0))
+ assert (self.offset6.rollforward(self.d) ==
+ datetime(2014, 7, 1, 20, 0))
+ assert (self.offset7.rollforward(self.d) ==
+ datetime(2014, 7, 1, 21, 30))
d = datetime(2014, 7, 1, 0)
- self.assertEqual(self.offset1.rollforward(d), datetime(2014, 7, 1, 9))
- self.assertEqual(self.offset2.rollforward(d), datetime(2014, 7, 1, 9))
- self.assertEqual(self.offset3.rollforward(d), datetime(2014, 7, 1, 9))
- self.assertEqual(self.offset4.rollforward(d), datetime(2014, 7, 1, 9))
- self.assertEqual(self.offset5.rollforward(d), datetime(2014, 7, 1, 11))
- self.assertEqual(self.offset6.rollforward(d), d)
- self.assertEqual(self.offset7.rollforward(d), d)
+ assert self.offset1.rollforward(d) == datetime(2014, 7, 1, 9)
+ assert self.offset2.rollforward(d) == datetime(2014, 7, 1, 9)
+ assert self.offset3.rollforward(d) == datetime(2014, 7, 1, 9)
+ assert self.offset4.rollforward(d) == datetime(2014, 7, 1, 9)
+ assert self.offset5.rollforward(d) == datetime(2014, 7, 1, 11)
+ assert self.offset6.rollforward(d) == d
+ assert self.offset7.rollforward(d) == d
- self.assertEqual(self._offset(5).rollforward(self.d), self.d)
+ assert self._offset(5).rollforward(self.d) == self.d
def testRollforward2(self):
- self.assertEqual(self._offset(-3)
- .rollforward(datetime(2014, 7, 5, 16, 0)),
- datetime(2014, 7, 7, 9))
+ assert (self._offset(-3).rollforward(datetime(2014, 7, 5, 16, 0)) ==
+ datetime(2014, 7, 7, 9))
def test_roll_date_object(self):
offset = BusinessHour()
@@ -879,10 +868,10 @@ def test_roll_date_object(self):
dt = datetime(2014, 7, 6, 15, 0)
result = offset.rollback(dt)
- self.assertEqual(result, datetime(2014, 7, 4, 17))
+ assert result == datetime(2014, 7, 4, 17)
result = offset.rollforward(dt)
- self.assertEqual(result, datetime(2014, 7, 7, 9))
+ assert result == datetime(2014, 7, 7, 9)
def test_normalize(self):
tests = []
@@ -924,7 +913,7 @@ def test_normalize(self):
for offset, cases in tests:
for dt, expected in compat.iteritems(cases):
- self.assertEqual(offset.apply(dt), expected)
+ assert offset.apply(dt) == expected
def test_onOffset(self):
tests = []
@@ -963,7 +952,7 @@ def test_onOffset(self):
for offset, cases in tests:
for dt, expected in compat.iteritems(cases):
- self.assertEqual(offset.onOffset(dt), expected)
+ assert offset.onOffset(dt) == expected
def test_opening_time(self):
tests = []
@@ -1127,8 +1116,8 @@ def test_opening_time(self):
for _offsets, cases in tests:
for offset in _offsets:
for dt, (exp_next, exp_prev) in compat.iteritems(cases):
- self.assertEqual(offset._next_opening_time(dt), exp_next)
- self.assertEqual(offset._prev_opening_time(dt), exp_prev)
+ assert offset._next_opening_time(dt) == exp_next
+ assert offset._prev_opening_time(dt) == exp_prev
def test_apply(self):
tests = []
@@ -1457,93 +1446,89 @@ def test_different_normalize_equals(self):
offset = self._offset()
offset2 = self._offset()
offset2.normalize = True
- self.assertEqual(offset, offset2)
+ assert offset == offset2
def test_repr(self):
- self.assertEqual(repr(self.offset1),
- '<CustomBusinessHour: CBH=09:00-17:00>')
- self.assertEqual(repr(self.offset2),
- '<CustomBusinessHour: CBH=09:00-17:00>')
+ assert repr(self.offset1) == '<CustomBusinessHour: CBH=09:00-17:00>'
+ assert repr(self.offset2) == '<CustomBusinessHour: CBH=09:00-17:00>'
def test_with_offset(self):
expected = Timestamp('2014-07-01 13:00')
- self.assertEqual(self.d + CustomBusinessHour() * 3, expected)
- self.assertEqual(self.d + CustomBusinessHour(n=3), expected)
+ assert self.d + CustomBusinessHour() * 3 == expected
+ assert self.d + CustomBusinessHour(n=3) == expected
def testEQ(self):
for offset in [self.offset1, self.offset2]:
- self.assertEqual(offset, offset)
+ assert offset == offset
- self.assertNotEqual(CustomBusinessHour(), CustomBusinessHour(-1))
- self.assertEqual(CustomBusinessHour(start='09:00'),
- CustomBusinessHour())
- self.assertNotEqual(CustomBusinessHour(start='09:00'),
- CustomBusinessHour(start='09:01'))
- self.assertNotEqual(CustomBusinessHour(start='09:00', end='17:00'),
- CustomBusinessHour(start='17:00', end='09:01'))
+ assert CustomBusinessHour() != CustomBusinessHour(-1)
+ assert (CustomBusinessHour(start='09:00') ==
+ CustomBusinessHour())
+ assert (CustomBusinessHour(start='09:00') !=
+ CustomBusinessHour(start='09:01'))
+ assert (CustomBusinessHour(start='09:00', end='17:00') !=
+ CustomBusinessHour(start='17:00', end='09:01'))
- self.assertNotEqual(CustomBusinessHour(weekmask='Tue Wed Thu Fri'),
- CustomBusinessHour(weekmask='Mon Tue Wed Thu Fri'))
- self.assertNotEqual(CustomBusinessHour(holidays=['2014-06-27']),
- CustomBusinessHour(holidays=['2014-06-28']))
+ assert (CustomBusinessHour(weekmask='Tue Wed Thu Fri') !=
+ CustomBusinessHour(weekmask='Mon Tue Wed Thu Fri'))
+ assert (CustomBusinessHour(holidays=['2014-06-27']) !=
+ CustomBusinessHour(holidays=['2014-06-28']))
def test_hash(self):
- self.assertEqual(hash(self.offset1), hash(self.offset1))
- self.assertEqual(hash(self.offset2), hash(self.offset2))
+ assert hash(self.offset1) == hash(self.offset1)
+ assert hash(self.offset2) == hash(self.offset2)
def testCall(self):
- self.assertEqual(self.offset1(self.d), datetime(2014, 7, 1, 11))
- self.assertEqual(self.offset2(self.d), datetime(2014, 7, 1, 11))
+ assert self.offset1(self.d) == datetime(2014, 7, 1, 11)
+ assert self.offset2(self.d) == datetime(2014, 7, 1, 11)
def testRAdd(self):
- self.assertEqual(self.d + self.offset2, self.offset2 + self.d)
+ assert self.d + self.offset2 == self.offset2 + self.d
def testSub(self):
off = self.offset2
pytest.raises(Exception, off.__sub__, self.d)
- self.assertEqual(2 * off - off, off)
+ assert 2 * off - off == off
- self.assertEqual(self.d - self.offset2, self.d - (2 * off - off))
+ assert self.d - self.offset2 == self.d - (2 * off - off)
def testRSub(self):
- self.assertEqual(self.d - self.offset2, (-self.offset2).apply(self.d))
+ assert self.d - self.offset2 == (-self.offset2).apply(self.d)
def testMult1(self):
- self.assertEqual(self.d + 5 * self.offset1, self.d + self._offset(5))
+ assert self.d + 5 * self.offset1 == self.d + self._offset(5)
def testMult2(self):
- self.assertEqual(self.d + (-3 * self._offset(-2)),
- self.d + self._offset(6))
+ assert self.d + (-3 * self._offset(-2)) == self.d + self._offset(6)
def testRollback1(self):
- self.assertEqual(self.offset1.rollback(self.d), self.d)
- self.assertEqual(self.offset2.rollback(self.d), self.d)
+ assert self.offset1.rollback(self.d) == self.d
+ assert self.offset2.rollback(self.d) == self.d
d = datetime(2014, 7, 1, 0)
+
# 2014/07/01 is Tuesday, 06/30 is Monday(holiday)
- self.assertEqual(self.offset1.rollback(d), datetime(2014, 6, 27, 17))
+ assert self.offset1.rollback(d) == datetime(2014, 6, 27, 17)
# 2014/6/30 and 2014/6/27 are holidays
- self.assertEqual(self.offset2.rollback(d), datetime(2014, 6, 26, 17))
+ assert self.offset2.rollback(d) == datetime(2014, 6, 26, 17)
def testRollback2(self):
- self.assertEqual(self._offset(-3)
- .rollback(datetime(2014, 7, 5, 15, 0)),
- datetime(2014, 7, 4, 17, 0))
+ assert (self._offset(-3).rollback(datetime(2014, 7, 5, 15, 0)) ==
+ datetime(2014, 7, 4, 17, 0))
def testRollforward1(self):
- self.assertEqual(self.offset1.rollforward(self.d), self.d)
- self.assertEqual(self.offset2.rollforward(self.d), self.d)
+ assert self.offset1.rollforward(self.d) == self.d
+ assert self.offset2.rollforward(self.d) == self.d
d = datetime(2014, 7, 1, 0)
- self.assertEqual(self.offset1.rollforward(d), datetime(2014, 7, 1, 9))
- self.assertEqual(self.offset2.rollforward(d), datetime(2014, 7, 1, 9))
+ assert self.offset1.rollforward(d) == datetime(2014, 7, 1, 9)
+ assert self.offset2.rollforward(d) == datetime(2014, 7, 1, 9)
def testRollforward2(self):
- self.assertEqual(self._offset(-3)
- .rollforward(datetime(2014, 7, 5, 16, 0)),
- datetime(2014, 7, 7, 9))
+ assert (self._offset(-3).rollforward(datetime(2014, 7, 5, 16, 0)) ==
+ datetime(2014, 7, 7, 9))
def test_roll_date_object(self):
offset = BusinessHour()
@@ -1551,10 +1536,10 @@ def test_roll_date_object(self):
dt = datetime(2014, 7, 6, 15, 0)
result = offset.rollback(dt)
- self.assertEqual(result, datetime(2014, 7, 4, 17))
+ assert result == datetime(2014, 7, 4, 17)
result = offset.rollforward(dt)
- self.assertEqual(result, datetime(2014, 7, 7, 9))
+ assert result == datetime(2014, 7, 7, 9)
def test_normalize(self):
tests = []
@@ -1598,7 +1583,7 @@ def test_normalize(self):
for offset, cases in tests:
for dt, expected in compat.iteritems(cases):
- self.assertEqual(offset.apply(dt), expected)
+ assert offset.apply(dt) == expected
def test_onOffset(self):
tests = []
@@ -1614,7 +1599,7 @@ def test_onOffset(self):
for offset, cases in tests:
for dt, expected in compat.iteritems(cases):
- self.assertEqual(offset.onOffset(dt), expected)
+ assert offset.onOffset(dt) == expected
def test_apply(self):
tests = []
@@ -1702,7 +1687,7 @@ def test_different_normalize_equals(self):
offset = CDay()
offset2 = CDay()
offset2.normalize = True
- self.assertEqual(offset, offset2)
+ assert offset == offset2
def test_repr(self):
assert repr(self.offset) == '<CustomBusinessDay>'
@@ -1717,50 +1702,50 @@ def test_with_offset(self):
assert (self.d + offset) == datetime(2008, 1, 2, 2)
def testEQ(self):
- self.assertEqual(self.offset2, self.offset2)
+ assert self.offset2 == self.offset2
def test_mul(self):
pass
def test_hash(self):
- self.assertEqual(hash(self.offset2), hash(self.offset2))
+ assert hash(self.offset2) == hash(self.offset2)
def testCall(self):
- self.assertEqual(self.offset2(self.d), datetime(2008, 1, 3))
- self.assertEqual(self.offset2(self.nd), datetime(2008, 1, 3))
+ assert self.offset2(self.d) == datetime(2008, 1, 3)
+ assert self.offset2(self.nd) == datetime(2008, 1, 3)
def testRAdd(self):
- self.assertEqual(self.d + self.offset2, self.offset2 + self.d)
+ assert self.d + self.offset2 == self.offset2 + self.d
def testSub(self):
off = self.offset2
pytest.raises(Exception, off.__sub__, self.d)
- self.assertEqual(2 * off - off, off)
+ assert 2 * off - off == off
- self.assertEqual(self.d - self.offset2, self.d + CDay(-2))
+ assert self.d - self.offset2 == self.d + CDay(-2)
def testRSub(self):
- self.assertEqual(self.d - self.offset2, (-self.offset2).apply(self.d))
+ assert self.d - self.offset2 == (-self.offset2).apply(self.d)
def testMult1(self):
- self.assertEqual(self.d + 10 * self.offset, self.d + CDay(10))
+ assert self.d + 10 * self.offset == self.d + CDay(10)
def testMult2(self):
- self.assertEqual(self.d + (-5 * CDay(-10)), self.d + CDay(50))
+ assert self.d + (-5 * CDay(-10)) == self.d + CDay(50)
def testRollback1(self):
- self.assertEqual(CDay(10).rollback(self.d), self.d)
+ assert CDay(10).rollback(self.d) == self.d
def testRollback2(self):
- self.assertEqual(
- CDay(10).rollback(datetime(2008, 1, 5)), datetime(2008, 1, 4))
+ assert (CDay(10).rollback(datetime(2008, 1, 5)) ==
+ datetime(2008, 1, 4))
def testRollforward1(self):
- self.assertEqual(CDay(10).rollforward(self.d), self.d)
+ assert CDay(10).rollforward(self.d) == self.d
def testRollforward2(self):
- self.assertEqual(
- CDay(10).rollforward(datetime(2008, 1, 5)), datetime(2008, 1, 7))
+ assert (CDay(10).rollforward(datetime(2008, 1, 5)) ==
+ datetime(2008, 1, 7))
def test_roll_date_object(self):
offset = CDay()
@@ -1768,17 +1753,17 @@ def test_roll_date_object(self):
dt = date(2012, 9, 15)
result = offset.rollback(dt)
- self.assertEqual(result, datetime(2012, 9, 14))
+ assert result == datetime(2012, 9, 14)
result = offset.rollforward(dt)
- self.assertEqual(result, datetime(2012, 9, 17))
+ assert result == datetime(2012, 9, 17)
offset = offsets.Day()
result = offset.rollback(dt)
- self.assertEqual(result, datetime(2012, 9, 15))
+ assert result == datetime(2012, 9, 15)
result = offset.rollforward(dt)
- self.assertEqual(result, datetime(2012, 9, 15))
+ assert result == datetime(2012, 9, 15)
def test_onOffset(self):
tests = [(CDay(), datetime(2008, 1, 1), True),
@@ -1837,20 +1822,20 @@ def test_apply_large_n(self):
dt = datetime(2012, 10, 23)
result = dt + CDay(10)
- self.assertEqual(result, datetime(2012, 11, 6))
+ assert result == datetime(2012, 11, 6)
result = dt + CDay(100) - CDay(100)
- self.assertEqual(result, dt)
+ assert result == dt
off = CDay() * 6
rs = datetime(2012, 1, 1) - off
xp = datetime(2011, 12, 23)
- self.assertEqual(rs, xp)
+ assert rs == xp
st = datetime(2011, 12, 18)
rs = st + off
xp = datetime(2011, 12, 26)
- self.assertEqual(rs, xp)
+ assert rs == xp
def test_apply_corner(self):
pytest.raises(Exception, CDay().apply, BMonthEnd())
@@ -1870,7 +1855,7 @@ def test_holidays(self):
dt = datetime(year, 4, 30)
xp = datetime(year, 5, 2)
rs = dt + tday
- self.assertEqual(rs, xp)
+ assert rs == xp
def test_weekmask(self):
weekmask_saudi = 'Sat Sun Mon Tue Wed' # Thu-Fri Weekend
@@ -1883,13 +1868,13 @@ def test_weekmask(self):
xp_saudi = datetime(2013, 5, 4)
xp_uae = datetime(2013, 5, 2)
xp_egypt = datetime(2013, 5, 2)
- self.assertEqual(xp_saudi, dt + bday_saudi)
- self.assertEqual(xp_uae, dt + bday_uae)
- self.assertEqual(xp_egypt, dt + bday_egypt)
+ assert xp_saudi == dt + bday_saudi
+ assert xp_uae == dt + bday_uae
+ assert xp_egypt == dt + bday_egypt
xp2 = datetime(2013, 5, 5)
- self.assertEqual(xp2, dt + 2 * bday_saudi)
- self.assertEqual(xp2, dt + 2 * bday_uae)
- self.assertEqual(xp2, dt + 2 * bday_egypt)
+ assert xp2 == dt + 2 * bday_saudi
+ assert xp2 == dt + 2 * bday_uae
+ assert xp2 == dt + 2 * bday_egypt
def test_weekmask_and_holidays(self):
weekmask_egypt = 'Sun Mon Tue Wed Thu' # Fri-Sat Weekend
@@ -1898,7 +1883,7 @@ def test_weekmask_and_holidays(self):
bday_egypt = CDay(holidays=holidays, weekmask=weekmask_egypt)
dt = datetime(2013, 4, 30)
xp_egypt = datetime(2013, 5, 5)
- self.assertEqual(xp_egypt, dt + 2 * bday_egypt)
+ assert xp_egypt == dt + 2 * bday_egypt
def test_calendar(self):
calendar = USFederalHolidayCalendar()
@@ -1908,7 +1893,7 @@ def test_calendar(self):
def test_roundtrip_pickle(self):
def _check_roundtrip(obj):
unpickled = tm.round_trip_pickle(obj)
- self.assertEqual(unpickled, obj)
+ assert unpickled == obj
_check_roundtrip(self.offset)
_check_roundtrip(self.offset2)
@@ -1921,7 +1906,7 @@ def test_pickle_compat_0_14_1(self):
cday0_14_1 = read_pickle(os.path.join(pth, 'cday-0.14.1.pickle'))
cday = CDay(holidays=hdays)
- self.assertEqual(cday, cday0_14_1)
+ assert cday == cday0_14_1
class CustomBusinessMonthBase(object):
@@ -1933,33 +1918,32 @@ def setUp(self):
self.offset2 = self._object(2)
def testEQ(self):
- self.assertEqual(self.offset2, self.offset2)
+ assert self.offset2 == self.offset2
def test_mul(self):
pass
def test_hash(self):
- self.assertEqual(hash(self.offset2), hash(self.offset2))
+ assert hash(self.offset2) == hash(self.offset2)
def testRAdd(self):
- self.assertEqual(self.d + self.offset2, self.offset2 + self.d)
+ assert self.d + self.offset2 == self.offset2 + self.d
def testSub(self):
off = self.offset2
pytest.raises(Exception, off.__sub__, self.d)
- self.assertEqual(2 * off - off, off)
+ assert 2 * off - off == off
- self.assertEqual(self.d - self.offset2, self.d + self._object(-2))
+ assert self.d - self.offset2 == self.d + self._object(-2)
def testRSub(self):
- self.assertEqual(self.d - self.offset2, (-self.offset2).apply(self.d))
+ assert self.d - self.offset2 == (-self.offset2).apply(self.d)
def testMult1(self):
- self.assertEqual(self.d + 10 * self.offset, self.d + self._object(10))
+ assert self.d + 10 * self.offset == self.d + self._object(10)
def testMult2(self):
- self.assertEqual(self.d + (-5 * self._object(-10)),
- self.d + self._object(50))
+ assert self.d + (-5 * self._object(-10)) == self.d + self._object(50)
def test_offsets_compare_equal(self):
offset1 = self._object()
@@ -1969,7 +1953,7 @@ def test_offsets_compare_equal(self):
def test_roundtrip_pickle(self):
def _check_roundtrip(obj):
unpickled = tm.round_trip_pickle(obj)
- self.assertEqual(unpickled, obj)
+ assert unpickled == obj
_check_roundtrip(self._object())
_check_roundtrip(self._object(2))
@@ -1984,26 +1968,24 @@ def test_different_normalize_equals(self):
offset = CBMonthEnd()
offset2 = CBMonthEnd()
offset2.normalize = True
- self.assertEqual(offset, offset2)
+ assert offset == offset2
def test_repr(self):
assert repr(self.offset) == '<CustomBusinessMonthEnd>'
assert repr(self.offset2) == '<2 * CustomBusinessMonthEnds>'
def testCall(self):
- self.assertEqual(self.offset2(self.d), datetime(2008, 2, 29))
+ assert self.offset2(self.d) == datetime(2008, 2, 29)
def testRollback1(self):
- self.assertEqual(
- CDay(10).rollback(datetime(2007, 12, 31)), datetime(2007, 12, 31))
+ assert (CDay(10).rollback(datetime(2007, 12, 31)) ==
+ datetime(2007, 12, 31))
def testRollback2(self):
- self.assertEqual(CBMonthEnd(10).rollback(self.d),
- datetime(2007, 12, 31))
+ assert CBMonthEnd(10).rollback(self.d) == datetime(2007, 12, 31)
def testRollforward1(self):
- self.assertEqual(CBMonthEnd(10).rollforward(
- self.d), datetime(2008, 1, 31))
+ assert CBMonthEnd(10).rollforward(self.d) == datetime(2008, 1, 31)
def test_roll_date_object(self):
offset = CBMonthEnd()
@@ -2011,17 +1993,17 @@ def test_roll_date_object(self):
dt = date(2012, 9, 15)
result = offset.rollback(dt)
- self.assertEqual(result, datetime(2012, 8, 31))
+ assert result == datetime(2012, 8, 31)
result = offset.rollforward(dt)
- self.assertEqual(result, datetime(2012, 9, 28))
+ assert result == datetime(2012, 9, 28)
offset = offsets.Day()
result = offset.rollback(dt)
- self.assertEqual(result, datetime(2012, 9, 15))
+ assert result == datetime(2012, 9, 15)
result = offset.rollforward(dt)
- self.assertEqual(result, datetime(2012, 9, 15))
+ assert result == datetime(2012, 9, 15)
def test_onOffset(self):
tests = [(CBMonthEnd(), datetime(2008, 1, 31), True),
@@ -2059,20 +2041,20 @@ def test_apply_large_n(self):
dt = datetime(2012, 10, 23)
result = dt + CBMonthEnd(10)
- self.assertEqual(result, datetime(2013, 7, 31))
+ assert result == datetime(2013, 7, 31)
result = dt + CDay(100) - CDay(100)
- self.assertEqual(result, dt)
+ assert result == dt
off = CBMonthEnd() * 6
rs = datetime(2012, 1, 1) - off
xp = datetime(2011, 7, 29)
- self.assertEqual(rs, xp)
+ assert rs == xp
st = datetime(2011, 12, 18)
rs = st + off
xp = datetime(2012, 5, 31)
- self.assertEqual(rs, xp)
+ assert rs == xp
def test_holidays(self):
# Define a TradingDay offset
@@ -2080,17 +2062,16 @@ def test_holidays(self):
np.datetime64('2012-02-29')]
bm_offset = CBMonthEnd(holidays=holidays)
dt = datetime(2012, 1, 1)
- self.assertEqual(dt + bm_offset, datetime(2012, 1, 30))
- self.assertEqual(dt + 2 * bm_offset, datetime(2012, 2, 27))
+ assert dt + bm_offset == datetime(2012, 1, 30)
+ assert dt + 2 * bm_offset == datetime(2012, 2, 27)
def test_datetimeindex(self):
from pandas.tseries.holiday import USFederalHolidayCalendar
hcal = USFederalHolidayCalendar()
freq = CBMonthEnd(calendar=hcal)
- self.assertEqual(DatetimeIndex(start='20120101', end='20130101',
- freq=freq).tolist()[0],
- datetime(2012, 1, 31))
+ assert (DatetimeIndex(start='20120101', end='20130101',
+ freq=freq).tolist()[0] == datetime(2012, 1, 31))
class TestCustomBusinessMonthBegin(CustomBusinessMonthBase, Base):
@@ -2101,26 +2082,24 @@ def test_different_normalize_equals(self):
offset = CBMonthBegin()
offset2 = CBMonthBegin()
offset2.normalize = True
- self.assertEqual(offset, offset2)
+ assert offset == offset2
def test_repr(self):
assert repr(self.offset) == '<CustomBusinessMonthBegin>'
assert repr(self.offset2) == '<2 * CustomBusinessMonthBegins>'
def testCall(self):
- self.assertEqual(self.offset2(self.d), datetime(2008, 3, 3))
+ assert self.offset2(self.d) == datetime(2008, 3, 3)
def testRollback1(self):
- self.assertEqual(
- CDay(10).rollback(datetime(2007, 12, 31)), datetime(2007, 12, 31))
+ assert (CDay(10).rollback(datetime(2007, 12, 31)) ==
+ datetime(2007, 12, 31))
def testRollback2(self):
- self.assertEqual(CBMonthBegin(10).rollback(self.d),
- datetime(2008, 1, 1))
+ assert CBMonthBegin(10).rollback(self.d) == datetime(2008, 1, 1)
def testRollforward1(self):
- self.assertEqual(CBMonthBegin(10).rollforward(
- self.d), datetime(2008, 1, 1))
+ assert CBMonthBegin(10).rollforward(self.d) == datetime(2008, 1, 1)
def test_roll_date_object(self):
offset = CBMonthBegin()
@@ -2128,17 +2107,17 @@ def test_roll_date_object(self):
dt = date(2012, 9, 15)
result = offset.rollback(dt)
- self.assertEqual(result, datetime(2012, 9, 3))
+ assert result == datetime(2012, 9, 3)
result = offset.rollforward(dt)
- self.assertEqual(result, datetime(2012, 10, 1))
+ assert result == datetime(2012, 10, 1)
offset = offsets.Day()
result = offset.rollback(dt)
- self.assertEqual(result, datetime(2012, 9, 15))
+ assert result == datetime(2012, 9, 15)
result = offset.rollforward(dt)
- self.assertEqual(result, datetime(2012, 9, 15))
+ assert result == datetime(2012, 9, 15)
def test_onOffset(self):
tests = [(CBMonthBegin(), datetime(2008, 1, 1), True),
@@ -2175,20 +2154,21 @@ def test_apply_large_n(self):
dt = datetime(2012, 10, 23)
result = dt + CBMonthBegin(10)
- self.assertEqual(result, datetime(2013, 8, 1))
+ assert result == datetime(2013, 8, 1)
result = dt + CDay(100) - CDay(100)
- self.assertEqual(result, dt)
+ assert result == dt
off = CBMonthBegin() * 6
rs = datetime(2012, 1, 1) - off
xp = datetime(2011, 7, 1)
- self.assertEqual(rs, xp)
+ assert rs == xp
st = datetime(2011, 12, 18)
rs = st + off
+
xp = datetime(2012, 6, 1)
- self.assertEqual(rs, xp)
+ assert rs == xp
def test_holidays(self):
# Define a TradingDay offset
@@ -2196,15 +2176,15 @@ def test_holidays(self):
np.datetime64('2012-03-01')]
bm_offset = CBMonthBegin(holidays=holidays)
dt = datetime(2012, 1, 1)
- self.assertEqual(dt + bm_offset, datetime(2012, 1, 2))
- self.assertEqual(dt + 2 * bm_offset, datetime(2012, 2, 3))
+
+ assert dt + bm_offset == datetime(2012, 1, 2)
+ assert dt + 2 * bm_offset == datetime(2012, 2, 3)
def test_datetimeindex(self):
hcal = USFederalHolidayCalendar()
cbmb = CBMonthBegin(calendar=hcal)
- self.assertEqual(DatetimeIndex(start='20120101', end='20130101',
- freq=cbmb).tolist()[0],
- datetime(2012, 1, 3))
+ assert (DatetimeIndex(start='20120101', end='20130101',
+ freq=cbmb).tolist()[0] == datetime(2012, 1, 3))
def assertOnOffset(offset, date, expected):
@@ -2218,10 +2198,9 @@ class TestWeek(Base):
_offset = Week
def test_repr(self):
- self.assertEqual(repr(Week(weekday=0)), "<Week: weekday=0>")
- self.assertEqual(repr(Week(n=-1, weekday=0)), "<-1 * Week: weekday=0>")
- self.assertEqual(repr(Week(n=-2, weekday=0)),
- "<-2 * Weeks: weekday=0>")
+ assert repr(Week(weekday=0)) == "<Week: weekday=0>"
+ assert repr(Week(n=-1, weekday=0)) == "<-1 * Week: weekday=0>"
+ assert repr(Week(n=-2, weekday=0)) == "<-2 * Weeks: weekday=0>"
def test_corner(self):
pytest.raises(ValueError, Week, weekday=7)
@@ -2303,8 +2282,8 @@ def test_constructor(self):
n=1, week=0, weekday=7)
def test_repr(self):
- self.assertEqual(repr(WeekOfMonth(weekday=1, week=2)),
- "<WeekOfMonth: week=2, weekday=1>")
+ assert (repr(WeekOfMonth(weekday=1, week=2)) ==
+ "<WeekOfMonth: week=2, weekday=1>")
def test_offset(self):
date1 = datetime(2011, 1, 4) # 1st Tuesday of Month
@@ -2354,9 +2333,10 @@ def test_offset(self):
# try subtracting
result = datetime(2011, 2, 1) - WeekOfMonth(week=1, weekday=2)
- self.assertEqual(result, datetime(2011, 1, 12))
+ assert result == datetime(2011, 1, 12)
+
result = datetime(2011, 2, 3) - WeekOfMonth(week=0, weekday=2)
- self.assertEqual(result, datetime(2011, 2, 2))
+ assert result == datetime(2011, 2, 2)
def test_onOffset(self):
test_cases = [
@@ -2370,7 +2350,7 @@ def test_onOffset(self):
for week, weekday, dt, expected in test_cases:
offset = WeekOfMonth(week=week, weekday=weekday)
- self.assertEqual(offset.onOffset(dt), expected)
+ assert offset.onOffset(dt) == expected
class TestLastWeekOfMonth(Base):
@@ -2392,13 +2372,13 @@ def test_offset(self):
offset_sat = LastWeekOfMonth(n=1, weekday=5)
one_day_before = (last_sat + timedelta(days=-1))
- self.assertEqual(one_day_before + offset_sat, last_sat)
+ assert one_day_before + offset_sat == last_sat
one_day_after = (last_sat + timedelta(days=+1))
- self.assertEqual(one_day_after + offset_sat, next_sat)
+ assert one_day_after + offset_sat == next_sat
# Test On that day
- self.assertEqual(last_sat + offset_sat, next_sat)
+ assert last_sat + offset_sat == next_sat
# Thursday
@@ -2407,23 +2387,22 @@ def test_offset(self):
next_thurs = datetime(2013, 2, 28)
one_day_before = last_thurs + timedelta(days=-1)
- self.assertEqual(one_day_before + offset_thur, last_thurs)
+ assert one_day_before + offset_thur == last_thurs
one_day_after = last_thurs + timedelta(days=+1)
- self.assertEqual(one_day_after + offset_thur, next_thurs)
+ assert one_day_after + offset_thur == next_thurs
# Test on that day
- self.assertEqual(last_thurs + offset_thur, next_thurs)
+ assert last_thurs + offset_thur == next_thurs
three_before = last_thurs + timedelta(days=-3)
- self.assertEqual(three_before + offset_thur, last_thurs)
+ assert three_before + offset_thur == last_thurs
two_after = last_thurs + timedelta(days=+2)
- self.assertEqual(two_after + offset_thur, next_thurs)
+ assert two_after + offset_thur == next_thurs
offset_sunday = LastWeekOfMonth(n=1, weekday=WeekDay.SUN)
- self.assertEqual(datetime(2013, 7, 31) +
- offset_sunday, datetime(2013, 8, 25))
+ assert datetime(2013, 7, 31) + offset_sunday == datetime(2013, 8, 25)
def test_onOffset(self):
test_cases = [
@@ -2445,7 +2424,7 @@ def test_onOffset(self):
for weekday, dt, expected in test_cases:
offset = LastWeekOfMonth(weekday=weekday)
- self.assertEqual(offset.onOffset(dt), expected, msg=date)
+ assert offset.onOffset(dt) == expected
class TestBMonthBegin(Base):
@@ -2556,7 +2535,7 @@ def test_normalize(self):
result = dt + BMonthEnd(normalize=True)
expected = dt.replace(hour=0) + BMonthEnd()
- self.assertEqual(result, expected)
+ assert result == expected
def test_onOffset(self):
@@ -2655,23 +2634,22 @@ def test_offset(self):
for base, expected in compat.iteritems(cases):
assertEq(offset, base, expected)
- # def test_day_of_month(self):
- # dt = datetime(2007, 1, 1)
-
- # offset = MonthEnd(day=20)
+ def test_day_of_month(self):
+ dt = datetime(2007, 1, 1)
+ offset = MonthEnd(day=20)
- # result = dt + offset
- # self.assertEqual(result, datetime(2007, 1, 20))
+ result = dt + offset
+ assert result == Timestamp(2007, 1, 31)
- # result = result + offset
- # self.assertEqual(result, datetime(2007, 2, 20))
+ result = result + offset
+ assert result == Timestamp(2007, 2, 28)
def test_normalize(self):
dt = datetime(2007, 1, 1, 3)
result = dt + MonthEnd(normalize=True)
expected = dt.replace(hour=0) + MonthEnd()
- self.assertEqual(result, expected)
+ assert result == expected
def test_onOffset(self):
@@ -3033,12 +3011,12 @@ class TestBQuarterBegin(Base):
_offset = BQuarterBegin
def test_repr(self):
- self.assertEqual(repr(BQuarterBegin()),
- "<BusinessQuarterBegin: startingMonth=3>")
- self.assertEqual(repr(BQuarterBegin(startingMonth=3)),
- "<BusinessQuarterBegin: startingMonth=3>")
- self.assertEqual(repr(BQuarterBegin(startingMonth=1)),
- "<BusinessQuarterBegin: startingMonth=1>")
+ assert (repr(BQuarterBegin()) ==
+ "<BusinessQuarterBegin: startingMonth=3>")
+ assert (repr(BQuarterBegin(startingMonth=3)) ==
+ "<BusinessQuarterBegin: startingMonth=3>")
+ assert (repr(BQuarterBegin(startingMonth=1)) ==
+ "<BusinessQuarterBegin: startingMonth=1>")
def test_isAnchored(self):
assert BQuarterBegin(startingMonth=1).isAnchored()
@@ -3120,19 +3098,19 @@ def test_offset(self):
# corner
offset = BQuarterBegin(n=-1, startingMonth=1)
- self.assertEqual(datetime(2007, 4, 3) + offset, datetime(2007, 4, 2))
+ assert datetime(2007, 4, 3) + offset == datetime(2007, 4, 2)
class TestBQuarterEnd(Base):
_offset = BQuarterEnd
def test_repr(self):
- self.assertEqual(repr(BQuarterEnd()),
- "<BusinessQuarterEnd: startingMonth=3>")
- self.assertEqual(repr(BQuarterEnd(startingMonth=3)),
- "<BusinessQuarterEnd: startingMonth=3>")
- self.assertEqual(repr(BQuarterEnd(startingMonth=1)),
- "<BusinessQuarterEnd: startingMonth=1>")
+ assert (repr(BQuarterEnd()) ==
+ "<BusinessQuarterEnd: startingMonth=3>")
+ assert (repr(BQuarterEnd(startingMonth=3)) ==
+ "<BusinessQuarterEnd: startingMonth=3>")
+ assert (repr(BQuarterEnd(startingMonth=1)) ==
+ "<BusinessQuarterEnd: startingMonth=1>")
def test_isAnchored(self):
assert BQuarterEnd(startingMonth=1).isAnchored()
@@ -3197,7 +3175,7 @@ def test_offset(self):
# corner
offset = BQuarterEnd(n=-1, startingMonth=1)
- self.assertEqual(datetime(2010, 1, 31) + offset, datetime(2010, 1, 29))
+ assert datetime(2010, 1, 31) + offset == datetime(2010, 1, 29)
def test_onOffset(self):
@@ -3334,58 +3312,52 @@ def test_apply(self):
current = data[0]
for datum in data[1:]:
current = current + offset
- self.assertEqual(current, datum)
+ assert current == datum
class TestFY5253NearestEndMonth(Base):
def test_get_target_month_end(self):
- self.assertEqual(makeFY5253NearestEndMonth(startingMonth=8,
- weekday=WeekDay.SAT)
- .get_target_month_end(
- datetime(2013, 1, 1)), datetime(2013, 8, 31))
- self.assertEqual(makeFY5253NearestEndMonth(startingMonth=12,
- weekday=WeekDay.SAT)
- .get_target_month_end(datetime(2013, 1, 1)),
- datetime(2013, 12, 31))
- self.assertEqual(makeFY5253NearestEndMonth(startingMonth=2,
- weekday=WeekDay.SAT)
- .get_target_month_end(datetime(2013, 1, 1)),
- datetime(2013, 2, 28))
+ assert (makeFY5253NearestEndMonth(
+ startingMonth=8, weekday=WeekDay.SAT).get_target_month_end(
+ datetime(2013, 1, 1)) == datetime(2013, 8, 31))
+ assert (makeFY5253NearestEndMonth(
+ startingMonth=12, weekday=WeekDay.SAT).get_target_month_end(
+ datetime(2013, 1, 1)) == datetime(2013, 12, 31))
+ assert (makeFY5253NearestEndMonth(
+ startingMonth=2, weekday=WeekDay.SAT).get_target_month_end(
+ datetime(2013, 1, 1)) == datetime(2013, 2, 28))
def test_get_year_end(self):
- self.assertEqual(makeFY5253NearestEndMonth(startingMonth=8,
- weekday=WeekDay.SAT)
- .get_year_end(datetime(2013, 1, 1)),
- datetime(2013, 8, 31))
- self.assertEqual(makeFY5253NearestEndMonth(startingMonth=8,
- weekday=WeekDay.SUN)
- .get_year_end(datetime(2013, 1, 1)),
- datetime(2013, 9, 1))
- self.assertEqual(makeFY5253NearestEndMonth(startingMonth=8,
- weekday=WeekDay.FRI)
- .get_year_end(datetime(2013, 1, 1)),
- datetime(2013, 8, 30))
+ assert (makeFY5253NearestEndMonth(
+ startingMonth=8, weekday=WeekDay.SAT).get_year_end(
+ datetime(2013, 1, 1)) == datetime(2013, 8, 31))
+ assert (makeFY5253NearestEndMonth(
+ startingMonth=8, weekday=WeekDay.SUN).get_year_end(
+ datetime(2013, 1, 1)) == datetime(2013, 9, 1))
+ assert (makeFY5253NearestEndMonth(
+ startingMonth=8, weekday=WeekDay.FRI).get_year_end(
+ datetime(2013, 1, 1)) == datetime(2013, 8, 30))
offset_n = FY5253(weekday=WeekDay.TUE, startingMonth=12,
variation="nearest")
- self.assertEqual(offset_n.get_year_end(
- datetime(2012, 1, 1)), datetime(2013, 1, 1))
- self.assertEqual(offset_n.get_year_end(
- datetime(2012, 1, 10)), datetime(2013, 1, 1))
-
- self.assertEqual(offset_n.get_year_end(
- datetime(2013, 1, 1)), datetime(2013, 12, 31))
- self.assertEqual(offset_n.get_year_end(
- datetime(2013, 1, 2)), datetime(2013, 12, 31))
- self.assertEqual(offset_n.get_year_end(
- datetime(2013, 1, 3)), datetime(2013, 12, 31))
- self.assertEqual(offset_n.get_year_end(
- datetime(2013, 1, 10)), datetime(2013, 12, 31))
+ assert (offset_n.get_year_end(datetime(2012, 1, 1)) ==
+ datetime(2013, 1, 1))
+ assert (offset_n.get_year_end(datetime(2012, 1, 10)) ==
+ datetime(2013, 1, 1))
+
+ assert (offset_n.get_year_end(datetime(2013, 1, 1)) ==
+ datetime(2013, 12, 31))
+ assert (offset_n.get_year_end(datetime(2013, 1, 2)) ==
+ datetime(2013, 12, 31))
+ assert (offset_n.get_year_end(datetime(2013, 1, 3)) ==
+ datetime(2013, 12, 31))
+ assert (offset_n.get_year_end(datetime(2013, 1, 10)) ==
+ datetime(2013, 12, 31))
JNJ = FY5253(n=1, startingMonth=12, weekday=6, variation="nearest")
- self.assertEqual(JNJ.get_year_end(
- datetime(2006, 1, 1)), datetime(2006, 12, 31))
+ assert (JNJ.get_year_end(datetime(2006, 1, 1)) ==
+ datetime(2006, 12, 31))
def test_onOffset(self):
offset_lom_aug_sat = makeFY5253NearestEndMonth(1, startingMonth=8,
@@ -3500,7 +3472,7 @@ def test_apply(self):
current = data[0]
for datum in data[1:]:
current = current + offset
- self.assertEqual(current, datum)
+ assert current == datum
class TestFY5253LastOfMonthQuarter(Base):
@@ -3517,26 +3489,18 @@ def test_isAnchored(self):
qtr_with_extra_week=4).isAnchored()
def test_equality(self):
- self.assertEqual(makeFY5253LastOfMonthQuarter(startingMonth=1,
- weekday=WeekDay.SAT,
- qtr_with_extra_week=4),
- makeFY5253LastOfMonthQuarter(startingMonth=1,
- weekday=WeekDay.SAT,
- qtr_with_extra_week=4))
- self.assertNotEqual(
- makeFY5253LastOfMonthQuarter(
- startingMonth=1, weekday=WeekDay.SAT,
- qtr_with_extra_week=4),
- makeFY5253LastOfMonthQuarter(
- startingMonth=1, weekday=WeekDay.SUN,
- qtr_with_extra_week=4))
- self.assertNotEqual(
- makeFY5253LastOfMonthQuarter(
- startingMonth=1, weekday=WeekDay.SAT,
- qtr_with_extra_week=4),
- makeFY5253LastOfMonthQuarter(
- startingMonth=2, weekday=WeekDay.SAT,
- qtr_with_extra_week=4))
+ assert (makeFY5253LastOfMonthQuarter(
+ startingMonth=1, weekday=WeekDay.SAT,
+ qtr_with_extra_week=4) == makeFY5253LastOfMonthQuarter(
+ startingMonth=1, weekday=WeekDay.SAT, qtr_with_extra_week=4))
+ assert (makeFY5253LastOfMonthQuarter(
+ startingMonth=1, weekday=WeekDay.SAT,
+ qtr_with_extra_week=4) != makeFY5253LastOfMonthQuarter(
+ startingMonth=1, weekday=WeekDay.SUN, qtr_with_extra_week=4))
+ assert (makeFY5253LastOfMonthQuarter(
+ startingMonth=1, weekday=WeekDay.SAT,
+ qtr_with_extra_week=4) != makeFY5253LastOfMonthQuarter(
+ startingMonth=2, weekday=WeekDay.SAT, qtr_with_extra_week=4))
def test_offset(self):
offset = makeFY5253LastOfMonthQuarter(1, startingMonth=9,
@@ -3705,12 +3669,9 @@ def test_get_weeks(self):
weekday=WeekDay.SAT,
qtr_with_extra_week=4)
- self.assertEqual(sat_dec_1.get_weeks(
- datetime(2011, 4, 2)), [14, 13, 13, 13])
- self.assertEqual(sat_dec_4.get_weeks(
- datetime(2011, 4, 2)), [13, 13, 13, 14])
- self.assertEqual(sat_dec_1.get_weeks(
- datetime(2010, 12, 25)), [13, 13, 13, 13])
+ assert sat_dec_1.get_weeks(datetime(2011, 4, 2)) == [14, 13, 13, 13]
+ assert sat_dec_4.get_weeks(datetime(2011, 4, 2)) == [13, 13, 13, 14]
+ assert sat_dec_1.get_weeks(datetime(2010, 12, 25)) == [13, 13, 13, 13]
class TestFY5253NearestEndMonthQuarter(Base):
@@ -3802,12 +3763,12 @@ def test_offset(self):
class TestQuarterBegin(Base):
def test_repr(self):
- self.assertEqual(repr(QuarterBegin()),
- "<QuarterBegin: startingMonth=3>")
- self.assertEqual(repr(QuarterBegin(startingMonth=3)),
- "<QuarterBegin: startingMonth=3>")
- self.assertEqual(repr(QuarterBegin(startingMonth=1)),
- "<QuarterBegin: startingMonth=1>")
+ assert (repr(QuarterBegin()) ==
+ "<QuarterBegin: startingMonth=3>")
+ assert (repr(QuarterBegin(startingMonth=3)) ==
+ "<QuarterBegin: startingMonth=3>")
+ assert (repr(QuarterBegin(startingMonth=1)) ==
+ "<QuarterBegin: startingMonth=1>")
def test_isAnchored(self):
assert QuarterBegin(startingMonth=1).isAnchored()
@@ -3874,18 +3835,19 @@ def test_offset(self):
# corner
offset = QuarterBegin(n=-1, startingMonth=1)
- self.assertEqual(datetime(2010, 2, 1) + offset, datetime(2010, 1, 1))
+ assert datetime(2010, 2, 1) + offset == datetime(2010, 1, 1)
class TestQuarterEnd(Base):
_offset = QuarterEnd
def test_repr(self):
- self.assertEqual(repr(QuarterEnd()), "<QuarterEnd: startingMonth=3>")
- self.assertEqual(repr(QuarterEnd(startingMonth=3)),
- "<QuarterEnd: startingMonth=3>")
- self.assertEqual(repr(QuarterEnd(startingMonth=1)),
- "<QuarterEnd: startingMonth=1>")
+ assert (repr(QuarterEnd()) ==
+ "<QuarterEnd: startingMonth=3>")
+ assert (repr(QuarterEnd(startingMonth=3)) ==
+ "<QuarterEnd: startingMonth=3>")
+ assert (repr(QuarterEnd(startingMonth=1)) ==
+ "<QuarterEnd: startingMonth=1>")
def test_isAnchored(self):
assert QuarterEnd(startingMonth=1).isAnchored()
@@ -3951,7 +3913,7 @@ def test_offset(self):
# corner
offset = QuarterEnd(n=-1, startingMonth=1)
- self.assertEqual(datetime(2010, 2, 1) + offset, datetime(2010, 1, 31))
+ assert datetime(2010, 2, 1) + offset == datetime(2010, 1, 31)
def test_onOffset(self):
@@ -4173,14 +4135,14 @@ def test_offset(self):
for offset, cases in tests:
for base, expected in compat.iteritems(cases):
- self.assertEqual(base + offset, expected)
+ assert base + offset == expected
def test_roll(self):
offset = BYearEnd(month=6)
date = datetime(2009, 11, 30)
- self.assertEqual(offset.rollforward(date), datetime(2010, 6, 30))
- self.assertEqual(offset.rollback(date), datetime(2009, 6, 30))
+ assert offset.rollforward(date) == datetime(2010, 6, 30)
+ assert offset.rollback(date) == datetime(2009, 6, 30)
def test_onOffset(self):
@@ -4389,7 +4351,7 @@ def test_ticks(self):
offset = kls(3)
result = offset + Timedelta(hours=2)
assert isinstance(result, Timedelta)
- self.assertEqual(result, expected)
+ assert result == expected
def test_Hour(self):
assertEq(Hour(), datetime(2010, 1, 1), datetime(2010, 1, 1, 1))
@@ -4397,8 +4359,8 @@ def test_Hour(self):
assertEq(2 * Hour(), datetime(2010, 1, 1), datetime(2010, 1, 1, 2))
assertEq(-1 * Hour(), datetime(2010, 1, 1, 1), datetime(2010, 1, 1))
- self.assertEqual(Hour(3) + Hour(2), Hour(5))
- self.assertEqual(Hour(3) - Hour(2), Hour())
+ assert Hour(3) + Hour(2) == Hour(5)
+ assert Hour(3) - Hour(2) == Hour()
self.assertNotEqual(Hour(4), Hour(1))
@@ -4410,8 +4372,8 @@ def test_Minute(self):
assertEq(-1 * Minute(), datetime(2010, 1, 1, 0, 1),
datetime(2010, 1, 1))
- self.assertEqual(Minute(3) + Minute(2), Minute(5))
- self.assertEqual(Minute(3) - Minute(2), Minute())
+ assert Minute(3) + Minute(2) == Minute(5)
+ assert Minute(3) - Minute(2) == Minute()
self.assertNotEqual(Minute(5), Minute())
def test_Second(self):
@@ -4423,8 +4385,8 @@ def test_Second(self):
assertEq(-1 * Second(), datetime(2010, 1, 1, 0, 0, 1),
datetime(2010, 1, 1))
- self.assertEqual(Second(3) + Second(2), Second(5))
- self.assertEqual(Second(3) - Second(2), Second())
+ assert Second(3) + Second(2) == Second(5)
+ assert Second(3) - Second(2) == Second()
def test_Millisecond(self):
assertEq(Milli(), datetime(2010, 1, 1),
@@ -4438,8 +4400,8 @@ def test_Millisecond(self):
assertEq(-1 * Milli(), datetime(2010, 1, 1, 0, 0, 0, 1000),
datetime(2010, 1, 1))
- self.assertEqual(Milli(3) + Milli(2), Milli(5))
- self.assertEqual(Milli(3) - Milli(2), Milli())
+ assert Milli(3) + Milli(2) == Milli(5)
+ assert Milli(3) - Milli(2) == Milli()
def test_MillisecondTimestampArithmetic(self):
assertEq(Milli(), Timestamp('2010-01-01'),
@@ -4457,18 +4419,18 @@ def test_Microsecond(self):
assertEq(-1 * Micro(), datetime(2010, 1, 1, 0, 0, 0, 1),
datetime(2010, 1, 1))
- self.assertEqual(Micro(3) + Micro(2), Micro(5))
- self.assertEqual(Micro(3) - Micro(2), Micro())
+ assert Micro(3) + Micro(2) == Micro(5)
+ assert Micro(3) - Micro(2) == Micro()
def test_NanosecondGeneric(self):
timestamp = Timestamp(datetime(2010, 1, 1))
- self.assertEqual(timestamp.nanosecond, 0)
+ assert timestamp.nanosecond == 0
result = timestamp + Nano(10)
- self.assertEqual(result.nanosecond, 10)
+ assert result.nanosecond == 10
reverse_result = Nano(10) + timestamp
- self.assertEqual(reverse_result.nanosecond, 10)
+ assert reverse_result.nanosecond == 10
def test_Nanosecond(self):
timestamp = Timestamp(datetime(2010, 1, 1))
@@ -4477,29 +4439,29 @@ def test_Nanosecond(self):
assertEq(2 * Nano(), timestamp, timestamp + np.timedelta64(2, 'ns'))
assertEq(-1 * Nano(), timestamp + np.timedelta64(1, 'ns'), timestamp)
- self.assertEqual(Nano(3) + Nano(2), Nano(5))
- self.assertEqual(Nano(3) - Nano(2), Nano())
+ assert Nano(3) + Nano(2) == Nano(5)
+ assert Nano(3) - Nano(2) == Nano()
# GH9284
- self.assertEqual(Nano(1) + Nano(10), Nano(11))
- self.assertEqual(Nano(5) + Micro(1), Nano(1005))
- self.assertEqual(Micro(5) + Nano(1), Nano(5001))
+ assert Nano(1) + Nano(10) == Nano(11)
+ assert Nano(5) + Micro(1) == Nano(1005)
+ assert Micro(5) + Nano(1) == Nano(5001)
def test_tick_zero(self):
for t1 in self.ticks:
for t2 in self.ticks:
- self.assertEqual(t1(0), t2(0))
- self.assertEqual(t1(0) + t2(0), t1(0))
+ assert t1(0) == t2(0)
+ assert t1(0) + t2(0) == t1(0)
if t1 is not Nano:
- self.assertEqual(t1(2) + t2(0), t1(2))
+ assert t1(2) + t2(0) == t1(2)
if t1 is Nano:
- self.assertEqual(t1(2) + Nano(0), t1(2))
+ assert t1(2) + Nano(0) == t1(2)
def test_tick_equalities(self):
for t in self.ticks:
- self.assertEqual(t(3), t(3))
- self.assertEqual(t(), t(1))
+ assert t(3) == t(3)
+ assert t() == t(1)
# not equals
self.assertNotEqual(t(3), t(2))
@@ -4507,10 +4469,10 @@ def test_tick_equalities(self):
def test_tick_operators(self):
for t in self.ticks:
- self.assertEqual(t(3) + t(2), t(5))
- self.assertEqual(t(3) - t(2), t(1))
- self.assertEqual(t(800) + t(300), t(1100))
- self.assertEqual(t(1000) - t(5), t(995))
+ assert t(3) + t(2) == t(5)
+ assert t(3) - t(2) == t(1)
+ assert t(800) + t(300) == t(1100)
+ assert t(1000) - t(5) == t(995)
def test_tick_offset(self):
for t in self.ticks:
@@ -4533,25 +4495,22 @@ def test_compare_ticks(self):
class TestOffsetNames(tm.TestCase):
def test_get_offset_name(self):
- self.assertEqual(BDay().freqstr, 'B')
- self.assertEqual(BDay(2).freqstr, '2B')
- self.assertEqual(BMonthEnd().freqstr, 'BM')
- self.assertEqual(Week(weekday=0).freqstr, 'W-MON')
- self.assertEqual(Week(weekday=1).freqstr, 'W-TUE')
- self.assertEqual(Week(weekday=2).freqstr, 'W-WED')
- self.assertEqual(Week(weekday=3).freqstr, 'W-THU')
- self.assertEqual(Week(weekday=4).freqstr, 'W-FRI')
-
- self.assertEqual(LastWeekOfMonth(
- weekday=WeekDay.SUN).freqstr, "LWOM-SUN")
- self.assertEqual(
- makeFY5253LastOfMonthQuarter(weekday=1, startingMonth=3,
- qtr_with_extra_week=4).freqstr,
- "REQ-L-MAR-TUE-4")
- self.assertEqual(
- makeFY5253NearestEndMonthQuarter(weekday=1, startingMonth=3,
- qtr_with_extra_week=3).freqstr,
- "REQ-N-MAR-TUE-3")
+ assert BDay().freqstr == 'B'
+ assert BDay(2).freqstr == '2B'
+ assert BMonthEnd().freqstr == 'BM'
+ assert Week(weekday=0).freqstr == 'W-MON'
+ assert Week(weekday=1).freqstr == 'W-TUE'
+ assert Week(weekday=2).freqstr == 'W-WED'
+ assert Week(weekday=3).freqstr == 'W-THU'
+ assert Week(weekday=4).freqstr == 'W-FRI'
+
+ assert LastWeekOfMonth(weekday=WeekDay.SUN).freqstr == "LWOM-SUN"
+ assert (makeFY5253LastOfMonthQuarter(
+ weekday=1, startingMonth=3,
+ qtr_with_extra_week=4).freqstr == "REQ-L-MAR-TUE-4")
+ assert (makeFY5253NearestEndMonthQuarter(
+ weekday=1, startingMonth=3,
+ qtr_with_extra_week=3).freqstr == "REQ-N-MAR-TUE-3")
def test_get_offset():
@@ -4594,9 +4553,9 @@ class TestParseTimeString(tm.TestCase):
def test_parse_time_string(self):
(date, parsed, reso) = parse_time_string('4Q1984')
(date_lower, parsed_lower, reso_lower) = parse_time_string('4q1984')
- self.assertEqual(date, date_lower)
- self.assertEqual(parsed, parsed_lower)
- self.assertEqual(reso, reso_lower)
+ assert date == date_lower
+ assert parsed == parsed_lower
+ assert reso == reso_lower
def test_parse_time_quarter_w_dash(self):
# https://github.com/pandas-dev/pandas/issue/9688
@@ -4606,9 +4565,9 @@ def test_parse_time_quarter_w_dash(self):
(date_dash, parsed_dash, reso_dash) = parse_time_string(dashed)
(date, parsed, reso) = parse_time_string(normal)
- self.assertEqual(date_dash, date)
- self.assertEqual(parsed_dash, parsed)
- self.assertEqual(reso_dash, reso)
+ assert date_dash == date
+ assert parsed_dash == parsed
+ assert reso_dash == reso
pytest.raises(DateParseError, parse_time_string, "-2Q1992")
pytest.raises(DateParseError, parse_time_string, "2-Q1992")
@@ -4661,22 +4620,22 @@ def test_alias_equality(self):
for k, v in compat.iteritems(_offset_map):
if v is None:
continue
- self.assertEqual(k, v.copy())
+ assert k == v.copy()
def test_rule_code(self):
lst = ['M', 'MS', 'BM', 'BMS', 'D', 'B', 'H', 'T', 'S', 'L', 'U']
for k in lst:
- self.assertEqual(k, get_offset(k).rule_code)
+ assert k == get_offset(k).rule_code
# should be cached - this is kind of an internals test...
assert k in _offset_map
- self.assertEqual(k, (get_offset(k) * 3).rule_code)
+ assert k == (get_offset(k) * 3).rule_code
suffix_lst = ['MON', 'TUE', 'WED', 'THU', 'FRI', 'SAT', 'SUN']
base = 'W'
for v in suffix_lst:
alias = '-'.join([base, v])
- self.assertEqual(alias, get_offset(alias).rule_code)
- self.assertEqual(alias, (get_offset(alias) * 5).rule_code)
+ assert alias == get_offset(alias).rule_code
+ assert alias == (get_offset(alias) * 5).rule_code
suffix_lst = ['JAN', 'FEB', 'MAR', 'APR', 'MAY', 'JUN', 'JUL', 'AUG',
'SEP', 'OCT', 'NOV', 'DEC']
@@ -4684,15 +4643,15 @@ def test_rule_code(self):
for base in base_lst:
for v in suffix_lst:
alias = '-'.join([base, v])
- self.assertEqual(alias, get_offset(alias).rule_code)
- self.assertEqual(alias, (get_offset(alias) * 5).rule_code)
+ assert alias == get_offset(alias).rule_code
+ assert alias == (get_offset(alias) * 5).rule_code
lst = ['M', 'D', 'B', 'H', 'T', 'S', 'L', 'U']
for k in lst:
code, stride = get_freq_code('3' + k)
assert isinstance(code, int)
- self.assertEqual(stride, 3)
- self.assertEqual(k, _get_freq_str(code))
+ assert stride == 3
+ assert k == _get_freq_str(code)
def test_apply_ticks():
@@ -4804,7 +4763,7 @@ def test_str_for_named_is_name(self):
_offset_map.clear()
for name in names:
offset = get_offset(name)
- self.assertEqual(offset.freqstr, name)
+ assert offset.freqstr == name
def get_utc_offset_hours(ts):
@@ -4949,4 +4908,4 @@ def test_all_offset_classes(self):
for offset, test_values in iteritems(tests):
first = Timestamp(test_values[0], tz='US/Eastern') + offset()
second = Timestamp(test_values[1], tz='US/Eastern')
- self.assertEqual(first, second, msg=str(offset))
+ assert first == second
diff --git a/pandas/tests/tseries/test_timezones.py b/pandas/tests/tseries/test_timezones.py
index 2c3aa03e85904..8b6774885c8b7 100644
--- a/pandas/tests/tseries/test_timezones.py
+++ b/pandas/tests/tseries/test_timezones.py
@@ -89,7 +89,7 @@ def test_utc_to_local_no_modify_explicit(self):
# Values are unmodified
tm.assert_numpy_array_equal(rng.asi8, rng_eastern.asi8)
- self.assertEqual(rng_eastern.tz, self.tz('US/Eastern'))
+ assert rng_eastern.tz == self.tz('US/Eastern')
def test_localize_utc_conversion(self):
# Localizing to time zone should:
@@ -129,16 +129,16 @@ def test_timestamp_tz_localize(self):
result = stamp.tz_localize(self.tzstr('US/Eastern'))
expected = Timestamp('3/11/2012 04:00', tz=self.tzstr('US/Eastern'))
- self.assertEqual(result.hour, expected.hour)
- self.assertEqual(result, expected)
+ assert result.hour == expected.hour
+ assert result == expected
def test_timestamp_tz_localize_explicit(self):
stamp = Timestamp('3/11/2012 04:00')
result = stamp.tz_localize(self.tz('US/Eastern'))
expected = Timestamp('3/11/2012 04:00', tz=self.tz('US/Eastern'))
- self.assertEqual(result.hour, expected.hour)
- self.assertEqual(result, expected)
+ assert result.hour == expected.hour
+ assert result == expected
def test_timestamp_constructed_by_date_and_tz(self):
# Fix Issue 2993, Timestamp cannot be constructed by datetime.date
@@ -147,8 +147,8 @@ def test_timestamp_constructed_by_date_and_tz(self):
result = Timestamp(date(2012, 3, 11), tz=self.tzstr('US/Eastern'))
expected = Timestamp('3/11/2012', tz=self.tzstr('US/Eastern'))
- self.assertEqual(result.hour, expected.hour)
- self.assertEqual(result, expected)
+ assert result.hour == expected.hour
+ assert result == expected
def test_timestamp_constructed_by_date_and_tz_explicit(self):
# Fix Issue 2993, Timestamp cannot be constructed by datetime.date
@@ -157,8 +157,8 @@ def test_timestamp_constructed_by_date_and_tz_explicit(self):
result = Timestamp(date(2012, 3, 11), tz=self.tz('US/Eastern'))
expected = Timestamp('3/11/2012', tz=self.tz('US/Eastern'))
- self.assertEqual(result.hour, expected.hour)
- self.assertEqual(result, expected)
+ assert result.hour == expected.hour
+ assert result == expected
def test_timestamp_constructor_near_dst_boundary(self):
# GH 11481 & 15777
@@ -212,7 +212,7 @@ def test_timestamp_to_datetime_tzoffset(self):
tzinfo = tzoffset(None, 7200)
expected = Timestamp('3/11/2012 04:00', tz=tzinfo)
result = Timestamp(expected.to_pydatetime())
- self.assertEqual(expected, result)
+ assert expected == result
def test_timedelta_push_over_dst_boundary(self):
# #1389
@@ -225,7 +225,7 @@ def test_timedelta_push_over_dst_boundary(self):
# spring forward, + "7" hours
expected = Timestamp('3/11/2012 05:00', tz=self.tzstr('US/Eastern'))
- self.assertEqual(result, expected)
+ assert result == expected
def test_timedelta_push_over_dst_boundary_explicit(self):
# #1389
@@ -238,7 +238,7 @@ def test_timedelta_push_over_dst_boundary_explicit(self):
# spring forward, + "7" hours
expected = Timestamp('3/11/2012 05:00', tz=self.tz('US/Eastern'))
- self.assertEqual(result, expected)
+ assert result == expected
def test_tz_localize_dti(self):
dti = DatetimeIndex(start='1/1/2005', end='1/1/2005 0:00:30.256',
@@ -278,31 +278,31 @@ def test_astimezone(self):
utc = Timestamp('3/11/2012 22:00', tz='UTC')
expected = utc.tz_convert(self.tzstr('US/Eastern'))
result = utc.astimezone(self.tzstr('US/Eastern'))
- self.assertEqual(expected, result)
+ assert expected == result
assert isinstance(result, Timestamp)
def test_create_with_tz(self):
stamp = Timestamp('3/11/2012 05:00', tz=self.tzstr('US/Eastern'))
- self.assertEqual(stamp.hour, 5)
+ assert stamp.hour == 5
rng = date_range('3/11/2012 04:00', periods=10, freq='H',
tz=self.tzstr('US/Eastern'))
- self.assertEqual(stamp, rng[1])
+ assert stamp == rng[1]
utc_stamp = Timestamp('3/11/2012 05:00', tz='utc')
assert utc_stamp.tzinfo is pytz.utc
- self.assertEqual(utc_stamp.hour, 5)
+ assert utc_stamp.hour == 5
stamp = Timestamp('3/11/2012 05:00').tz_localize('utc')
- self.assertEqual(utc_stamp.hour, 5)
+ assert utc_stamp.hour == 5
def test_create_with_fixed_tz(self):
off = FixedOffset(420, '+07:00')
start = datetime(2012, 3, 11, 5, 0, 0, tzinfo=off)
end = datetime(2012, 6, 11, 5, 0, 0, tzinfo=off)
rng = date_range(start=start, end=end)
- self.assertEqual(off, rng.tz)
+ assert off == rng.tz
rng2 = date_range(start, periods=len(rng), tz=off)
tm.assert_index_equal(rng, rng2)
@@ -316,10 +316,10 @@ def test_create_with_fixedoffset_noname(self):
start = datetime(2012, 3, 11, 5, 0, 0, tzinfo=off)
end = datetime(2012, 6, 11, 5, 0, 0, tzinfo=off)
rng = date_range(start=start, end=end)
- self.assertEqual(off, rng.tz)
+ assert off == rng.tz
idx = Index([start, end])
- self.assertEqual(off, idx.tz)
+ assert off == idx.tz
def test_date_range_localize(self):
rng = date_range('3/11/2012 03:00', periods=15, freq='H',
@@ -335,9 +335,9 @@ def test_date_range_localize(self):
val = rng[0]
exp = Timestamp('3/11/2012 03:00', tz='US/Eastern')
- self.assertEqual(val.hour, 3)
- self.assertEqual(exp.hour, 3)
- self.assertEqual(val, exp) # same UTC value
+ assert val.hour == 3
+ assert exp.hour == 3
+ assert val == exp # same UTC value
tm.assert_index_equal(rng[:2], rng2)
# Right before the DST transition
@@ -347,15 +347,15 @@ def test_date_range_localize(self):
tz='US/Eastern')
tm.assert_index_equal(rng, rng2)
exp = Timestamp('3/11/2012 00:00', tz='US/Eastern')
- self.assertEqual(exp.hour, 0)
- self.assertEqual(rng[0], exp)
+ assert exp.hour == 0
+ assert rng[0] == exp
exp = Timestamp('3/11/2012 01:00', tz='US/Eastern')
- self.assertEqual(exp.hour, 1)
- self.assertEqual(rng[1], exp)
+ assert exp.hour == 1
+ assert rng[1] == exp
rng = date_range('3/11/2012 00:00', periods=10, freq='H',
tz='US/Eastern')
- self.assertEqual(rng[2].hour, 3)
+ assert rng[2].hour == 3
def test_utc_box_timestamp_and_localize(self):
rng = date_range('3/11/2012', '3/12/2012', freq='H', tz='utc')
@@ -365,8 +365,8 @@ def test_utc_box_timestamp_and_localize(self):
expected = rng[-1].astimezone(tz)
stamp = rng_eastern[-1]
- self.assertEqual(stamp, expected)
- self.assertEqual(stamp.tzinfo, expected.tzinfo)
+ assert stamp == expected
+ assert stamp.tzinfo == expected.tzinfo
# right tzinfo
rng = date_range('3/13/2012', '3/14/2012', freq='H', tz='utc')
@@ -383,7 +383,7 @@ def test_timestamp_tz_convert(self):
conv = idx[0].tz_convert(self.tzstr('US/Pacific'))
expected = idx.tz_convert(self.tzstr('US/Pacific'))[0]
- self.assertEqual(conv, expected)
+ assert conv == expected
def test_pass_dates_localize_to_utc(self):
strdates = ['1/1/2012', '3/1/2012', '4/1/2012']
@@ -393,7 +393,7 @@ def test_pass_dates_localize_to_utc(self):
fromdates = DatetimeIndex(strdates, tz=self.tzstr('US/Eastern'))
- self.assertEqual(conv.tz, fromdates.tz)
+ assert conv.tz == fromdates.tz
tm.assert_numpy_array_equal(conv.values, fromdates.values)
def test_field_access_localize(self):
@@ -560,12 +560,12 @@ def f():
times = date_range("2013-10-26 23:00", "2013-10-27 01:00", freq="H",
tz=tz, ambiguous='infer')
- self.assertEqual(times[0], Timestamp('2013-10-26 23:00', tz=tz,
- freq="H"))
+ assert times[0] == Timestamp('2013-10-26 23:00', tz=tz, freq="H")
+
if dateutil.__version__ != LooseVersion('2.6.0'):
- # GH 14621
- self.assertEqual(times[-1], Timestamp('2013-10-27 01:00:00+0000',
- tz=tz, freq="H"))
+ # see gh-14621
+ assert times[-1] == Timestamp('2013-10-27 01:00:00+0000',
+ tz=tz, freq="H")
def test_ambiguous_nat(self):
tz = self.tz('US/Eastern')
@@ -595,10 +595,10 @@ def f():
pytest.raises(pytz.AmbiguousTimeError, f)
result = t.tz_localize('US/Central', ambiguous=True)
- self.assertEqual(result, expected0)
+ assert result == expected0
result = t.tz_localize('US/Central', ambiguous=False)
- self.assertEqual(result, expected1)
+ assert result == expected1
s = Series([t])
expected0 = Series([expected0])
@@ -674,8 +674,8 @@ def test_take_dont_lose_meta(self):
rng = date_range('1/1/2000', periods=20, tz=self.tzstr('US/Eastern'))
result = rng.take(lrange(5))
- self.assertEqual(result.tz, rng.tz)
- self.assertEqual(result.freq, rng.freq)
+ assert result.tz == rng.tz
+ assert result.freq == rng.freq
def test_index_with_timezone_repr(self):
rng = date_range('4/13/2010', '5/6/2010')
@@ -694,14 +694,14 @@ def test_index_astype_asobject_tzinfos(self):
objs = rng.asobject
for i, x in enumerate(objs):
exval = rng[i]
- self.assertEqual(x, exval)
- self.assertEqual(x.tzinfo, exval.tzinfo)
+ assert x == exval
+ assert x.tzinfo == exval.tzinfo
objs = rng.astype(object)
for i, x in enumerate(objs):
exval = rng[i]
- self.assertEqual(x, exval)
- self.assertEqual(x.tzinfo, exval.tzinfo)
+ assert x == exval
+ assert x.tzinfo == exval.tzinfo
def test_localized_at_time_between_time(self):
from datetime import time
@@ -736,7 +736,7 @@ def test_fixed_offset(self):
datetime(2000, 1, 2, tzinfo=fixed_off),
datetime(2000, 1, 3, tzinfo=fixed_off)]
result = to_datetime(dates)
- self.assertEqual(result.tz, fixed_off)
+ assert result.tz == fixed_off
def test_fixedtz_topydatetime(self):
dates = np.array([datetime(2000, 1, 1, tzinfo=fixed_off),
@@ -796,7 +796,7 @@ def test_frame_no_datetime64_dtype(self):
dr_tz = dr.tz_localize(self.tzstr('US/Eastern'))
e = DataFrame({'A': 'foo', 'B': dr_tz}, index=dr)
tz_expected = DatetimeTZDtype('ns', dr_tz.tzinfo)
- self.assertEqual(e['B'].dtype, tz_expected)
+ assert e['B'].dtype == tz_expected
# GH 2810 (with timezones)
datetimes_naive = [ts.to_pydatetime() for ts in dr]
@@ -830,7 +830,7 @@ def test_shift_localized(self):
dr_tz = dr.tz_localize(self.tzstr('US/Eastern'))
result = dr_tz.shift(1, '10T')
- self.assertEqual(result.tz, dr_tz.tz)
+ assert result.tz == dr_tz.tz
def test_tz_aware_asfreq(self):
dr = date_range('2011-12-01', '2012-07-20', freq='D',
@@ -870,8 +870,8 @@ def test_convert_datetime_list(self):
tz=self.tzstr('US/Eastern'), name='foo')
dr2 = DatetimeIndex(list(dr), name='foo')
tm.assert_index_equal(dr, dr2)
- self.assertEqual(dr.tz, dr2.tz)
- self.assertEqual(dr2.name, 'foo')
+ assert dr.tz == dr2.tz
+ assert dr2.name == 'foo'
def test_frame_from_records_utc(self):
rec = {'datum': 1.5,
@@ -886,7 +886,7 @@ def test_frame_reset_index(self):
roundtripped = df.reset_index().set_index('index')
xp = df.index.tz
rs = roundtripped.index.tz
- self.assertEqual(xp, rs)
+ assert xp == rs
def test_dateutil_tzoffset_support(self):
from dateutil.tz import tzoffset
@@ -896,7 +896,7 @@ def test_dateutil_tzoffset_support(self):
datetime(2012, 5, 11, 12, tzinfo=tzinfo)]
series = Series(data=values, index=index)
- self.assertEqual(series.index.tz, tzinfo)
+ assert series.index.tz == tzinfo
# it works! #2443
repr(series.index[0])
@@ -909,7 +909,7 @@ def test_getitem_pydatetime_tz(self):
tz=self.tzstr('Europe/Berlin'))
time_datetime = self.localize(
self.tz('Europe/Berlin'), datetime(2012, 12, 24, 17, 0))
- self.assertEqual(ts[time_pandas], ts[time_datetime])
+ assert ts[time_pandas] == ts[time_datetime]
def test_index_drop_dont_lose_tz(self):
# #2621
@@ -977,12 +977,12 @@ def test_utc_with_system_utc(self):
# from system utc to real utc
ts = Timestamp('2001-01-05 11:56', tz=maybe_get_tz('dateutil/UTC'))
# check that the time hasn't changed.
- self.assertEqual(ts, ts.tz_convert(dateutil.tz.tzutc()))
+ assert ts == ts.tz_convert(dateutil.tz.tzutc())
# from system utc to real utc
ts = Timestamp('2001-01-05 11:56', tz=maybe_get_tz('dateutil/UTC'))
# check that the time hasn't changed.
- self.assertEqual(ts, ts.tz_convert(dateutil.tz.tzutc()))
+ assert ts == ts.tz_convert(dateutil.tz.tzutc())
def test_tz_convert_hour_overflow_dst(self):
# Regression test for:
@@ -1140,16 +1140,16 @@ def test_tslib_tz_convert_dst(self):
def test_tzlocal(self):
# GH 13583
ts = Timestamp('2011-01-01', tz=dateutil.tz.tzlocal())
- self.assertEqual(ts.tz, dateutil.tz.tzlocal())
+ assert ts.tz == dateutil.tz.tzlocal()
assert "tz='tzlocal()')" in repr(ts)
tz = tslib.maybe_get_tz('tzlocal()')
- self.assertEqual(tz, dateutil.tz.tzlocal())
+ assert tz == dateutil.tz.tzlocal()
# get offset using normal datetime for test
offset = dateutil.tz.tzlocal().utcoffset(datetime(2011, 1, 1))
offset = offset.total_seconds() * 1000000000
- self.assertEqual(ts.value + offset, Timestamp('2011-01-01').value)
+ assert ts.value + offset == Timestamp('2011-01-01').value
def test_tz_localize_tzlocal(self):
# GH 13583
@@ -1208,26 +1208,26 @@ def test_replace(self):
dt = Timestamp('2016-01-01 09:00:00')
result = dt.replace(hour=0)
expected = Timestamp('2016-01-01 00:00:00')
- self.assertEqual(result, expected)
+ assert result == expected
for tz in self.timezones:
dt = Timestamp('2016-01-01 09:00:00', tz=tz)
result = dt.replace(hour=0)
expected = Timestamp('2016-01-01 00:00:00', tz=tz)
- self.assertEqual(result, expected)
+ assert result == expected
# we preserve nanoseconds
dt = Timestamp('2016-01-01 09:00:00.000000123', tz=tz)
result = dt.replace(hour=0)
expected = Timestamp('2016-01-01 00:00:00.000000123', tz=tz)
- self.assertEqual(result, expected)
+ assert result == expected
# test all
dt = Timestamp('2016-01-01 09:00:00.000000123', tz=tz)
result = dt.replace(year=2015, month=2, day=2, hour=0, minute=5,
second=5, microsecond=5, nanosecond=5)
expected = Timestamp('2015-02-02 00:05:05.000005005', tz=tz)
- self.assertEqual(result, expected)
+ assert result == expected
# error
def f():
@@ -1240,7 +1240,7 @@ def f():
# assert conversion to naive is the same as replacing tzinfo with None
dt = Timestamp('2013-11-03 01:59:59.999999-0400', tz='US/Eastern')
- self.assertEqual(dt.tz_localize(None), dt.replace(tzinfo=None))
+ assert dt.tz_localize(None) == dt.replace(tzinfo=None)
def test_ambiguous_compat(self):
# validate that pytz and dateutil are compat for dst
@@ -1254,31 +1254,31 @@ def test_ambiguous_compat(self):
.tz_localize(pytz_zone, ambiguous=0))
result_dateutil = (Timestamp('2013-10-27 01:00:00')
.tz_localize(dateutil_zone, ambiguous=0))
- self.assertEqual(result_pytz.value, result_dateutil.value)
- self.assertEqual(result_pytz.value, 1382835600000000000)
+ assert result_pytz.value == result_dateutil.value
+ assert result_pytz.value == 1382835600000000000
# dateutil 2.6 buggy w.r.t. ambiguous=0
if dateutil.__version__ != LooseVersion('2.6.0'):
- # GH 14621
- # https://github.com/dateutil/dateutil/issues/321
- self.assertEqual(result_pytz.to_pydatetime().tzname(),
- result_dateutil.to_pydatetime().tzname())
- self.assertEqual(str(result_pytz), str(result_dateutil))
+ # see gh-14621
+ # see https://github.com/dateutil/dateutil/issues/321
+ assert (result_pytz.to_pydatetime().tzname() ==
+ result_dateutil.to_pydatetime().tzname())
+ assert str(result_pytz) == str(result_dateutil)
# 1 hour difference
result_pytz = (Timestamp('2013-10-27 01:00:00')
.tz_localize(pytz_zone, ambiguous=1))
result_dateutil = (Timestamp('2013-10-27 01:00:00')
.tz_localize(dateutil_zone, ambiguous=1))
- self.assertEqual(result_pytz.value, result_dateutil.value)
- self.assertEqual(result_pytz.value, 1382832000000000000)
+ assert result_pytz.value == result_dateutil.value
+ assert result_pytz.value == 1382832000000000000
# dateutil < 2.6 is buggy w.r.t. ambiguous timezones
if dateutil.__version__ > LooseVersion('2.5.3'):
- # GH 14621
- self.assertEqual(str(result_pytz), str(result_dateutil))
- self.assertEqual(result_pytz.to_pydatetime().tzname(),
- result_dateutil.to_pydatetime().tzname())
+ # see gh-14621
+ assert str(result_pytz) == str(result_dateutil)
+ assert (result_pytz.to_pydatetime().tzname() ==
+ result_dateutil.to_pydatetime().tzname())
def test_index_equals_with_tz(self):
left = date_range('1/1/2011', periods=100, freq='H', tz='utc')
@@ -1319,17 +1319,17 @@ def test_series_frame_tz_localize(self):
ts = Series(1, index=rng)
result = ts.tz_localize('utc')
- self.assertEqual(result.index.tz.zone, 'UTC')
+ assert result.index.tz.zone == 'UTC'
df = DataFrame({'a': 1}, index=rng)
result = df.tz_localize('utc')
expected = DataFrame({'a': 1}, rng.tz_localize('UTC'))
- self.assertEqual(result.index.tz.zone, 'UTC')
+ assert result.index.tz.zone == 'UTC'
assert_frame_equal(result, expected)
df = df.T
result = df.tz_localize('utc', axis=1)
- self.assertEqual(result.columns.tz.zone, 'UTC')
+ assert result.columns.tz.zone == 'UTC'
assert_frame_equal(result, expected.T)
# Can't localize if already tz-aware
@@ -1343,17 +1343,17 @@ def test_series_frame_tz_convert(self):
ts = Series(1, index=rng)
result = ts.tz_convert('Europe/Berlin')
- self.assertEqual(result.index.tz.zone, 'Europe/Berlin')
+ assert result.index.tz.zone == 'Europe/Berlin'
df = DataFrame({'a': 1}, index=rng)
result = df.tz_convert('Europe/Berlin')
expected = DataFrame({'a': 1}, rng.tz_convert('Europe/Berlin'))
- self.assertEqual(result.index.tz.zone, 'Europe/Berlin')
+ assert result.index.tz.zone == 'Europe/Berlin'
assert_frame_equal(result, expected)
df = df.T
result = df.tz_convert('Europe/Berlin', axis=1)
- self.assertEqual(result.columns.tz.zone, 'Europe/Berlin')
+ assert result.columns.tz.zone == 'Europe/Berlin'
assert_frame_equal(result, expected.T)
# can't convert tz-naive
@@ -1398,11 +1398,11 @@ def test_join_utc_convert(self):
for how in ['inner', 'outer', 'left', 'right']:
result = left.join(left[:-5], how=how)
assert isinstance(result, DatetimeIndex)
- self.assertEqual(result.tz, left.tz)
+ assert result.tz == left.tz
result = left.join(right[:-5], how=how)
assert isinstance(result, DatetimeIndex)
- self.assertEqual(result.tz.zone, 'UTC')
+ assert result.tz.zone == 'UTC'
def test_join_aware(self):
rng = date_range('1/1/2011', periods=10, freq='H')
@@ -1443,30 +1443,30 @@ def test_align_aware(self):
df1 = DataFrame(np.random.randn(len(idx1), 3), idx1)
df2 = DataFrame(np.random.randn(len(idx2), 3), idx2)
new1, new2 = df1.align(df2)
- self.assertEqual(df1.index.tz, new1.index.tz)
- self.assertEqual(df2.index.tz, new2.index.tz)
+ assert df1.index.tz == new1.index.tz
+ assert df2.index.tz == new2.index.tz
# # different timezones convert to UTC
# frame
df1_central = df1.tz_convert('US/Central')
new1, new2 = df1.align(df1_central)
- self.assertEqual(new1.index.tz, pytz.UTC)
- self.assertEqual(new2.index.tz, pytz.UTC)
+ assert new1.index.tz == pytz.UTC
+ assert new2.index.tz == pytz.UTC
# series
new1, new2 = df1[0].align(df1_central[0])
- self.assertEqual(new1.index.tz, pytz.UTC)
- self.assertEqual(new2.index.tz, pytz.UTC)
+ assert new1.index.tz == pytz.UTC
+ assert new2.index.tz == pytz.UTC
# combination
new1, new2 = df1.align(df1_central[0], axis=0)
- self.assertEqual(new1.index.tz, pytz.UTC)
- self.assertEqual(new2.index.tz, pytz.UTC)
+ assert new1.index.tz == pytz.UTC
+ assert new2.index.tz == pytz.UTC
df1[0].align(df1_central, axis=0)
- self.assertEqual(new1.index.tz, pytz.UTC)
- self.assertEqual(new2.index.tz, pytz.UTC)
+ assert new1.index.tz == pytz.UTC
+ assert new2.index.tz == pytz.UTC
def test_append_aware(self):
rng1 = date_range('1/1/2011 01:00', periods=1, freq='H',
@@ -1481,7 +1481,7 @@ def test_append_aware(self):
tz='US/Eastern')
exp = Series([1, 2], index=exp_index)
assert_series_equal(ts_result, exp)
- self.assertEqual(ts_result.index.tz, rng1.tz)
+ assert ts_result.index.tz == rng1.tz
rng1 = date_range('1/1/2011 01:00', periods=1, freq='H', tz='UTC')
rng2 = date_range('1/1/2011 02:00', periods=1, freq='H', tz='UTC')
@@ -1494,7 +1494,7 @@ def test_append_aware(self):
exp = Series([1, 2], index=exp_index)
assert_series_equal(ts_result, exp)
utc = rng1.tz
- self.assertEqual(utc, ts_result.index.tz)
+ assert utc == ts_result.index.tz
# GH 7795
# different tz coerces to object dtype, not UTC
@@ -1525,7 +1525,7 @@ def test_append_dst(self):
tz='US/Eastern')
exp = Series([1, 2, 3, 10, 11, 12], index=exp_index)
assert_series_equal(ts_result, exp)
- self.assertEqual(ts_result.index.tz, rng1.tz)
+ assert ts_result.index.tz == rng1.tz
def test_append_aware_naive(self):
rng1 = date_range('1/1/2011 01:00', periods=1, freq='H')
@@ -1584,7 +1584,7 @@ def test_arith_utc_convert(self):
uts2 = ts2.tz_convert('utc')
expected = uts1 + uts2
- self.assertEqual(result.index.tz, pytz.UTC)
+ assert result.index.tz == pytz.UTC
assert_series_equal(result, expected)
def test_intersection(self):
@@ -1593,9 +1593,9 @@ def test_intersection(self):
left = rng[10:90][::-1]
right = rng[20:80][::-1]
- self.assertEqual(left.tz, rng.tz)
+ assert left.tz == rng.tz
result = left.intersection(right)
- self.assertEqual(result.tz, left.tz)
+ assert result.tz == left.tz
def test_timestamp_equality_different_timezones(self):
utc_range = date_range('1/1/2000', periods=20, tz='UTC')
@@ -1603,9 +1603,9 @@ def test_timestamp_equality_different_timezones(self):
berlin_range = utc_range.tz_convert('Europe/Berlin')
for a, b, c in zip(utc_range, eastern_range, berlin_range):
- self.assertEqual(a, b)
- self.assertEqual(b, c)
- self.assertEqual(a, c)
+ assert a == b
+ assert b == c
+ assert a == c
assert (utc_range == eastern_range).all()
assert (utc_range == berlin_range).all()
@@ -1670,7 +1670,7 @@ def test_normalize_tz_local(self):
def test_tzaware_offset(self):
dates = date_range('2012-11-01', periods=3, tz='US/Pacific')
offset = dates + offsets.Hour(5)
- self.assertEqual(dates[0] + offsets.Hour(5), offset[0])
+ assert dates[0] + offsets.Hour(5) == offset[0]
# GH 6818
for tz in ['UTC', 'US/Pacific', 'Asia/Tokyo']:
| Title is self-explanatory. Massive, massive PR.
Partially addresses #15990. | https://api.github.com/repos/pandas-dev/pandas/pulls/16169 | 2017-04-29T09:07:17Z | 2017-04-29T20:04:09Z | 2017-04-29T20:04:09Z | 2017-04-29T20:23:03Z |
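The bulk of the diff above mechanically rewrites `self.assertEqual(a, b)` into a plain `assert a == b`. A minimal, standalone sketch (not taken from the PR) of the two styles side by side — pytest's assertion rewriting reports both operands on failure, which is why the bare form is preferred:

```python
import unittest


class OldStyle(unittest.TestCase):
    """unittest idiom: assertions are TestCase methods."""

    def test_equal(self):
        self.assertEqual(2 + 2, 4)


def test_equal_pytest_style():
    """pytest idiom: a bare assert; on failure, pytest's assertion
    rewriting shows both operands without any TestCase machinery."""
    assert 2 + 2 == 4


if __name__ == "__main__":
    test_equal_pytest_style()
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(OldStyle)
    outcome = unittest.TextTestRunner(verbosity=0).run(suite)
    print(outcome.wasSuccessful())
```

Note the diff also drops `msg=` arguments (e.g. `self.assertEqual(first, second, msg=str(offset))` becomes `assert first == second`), since pytest's failure output already includes both operands.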
ENH: pandas read_* wildcard #15904 | diff --git a/doc/source/cookbook.rst b/doc/source/cookbook.rst
index 8fa1283ffc924..8466b3d3c3297 100644
--- a/doc/source/cookbook.rst
+++ b/doc/source/cookbook.rst
@@ -910,9 +910,6 @@ The :ref:`CSV <io.read_csv_table>` docs
`appending to a csv
<http://stackoverflow.com/questions/17134942/pandas-dataframe-output-end-of-csv>`__
-`how to read in multiple files, appending to create a single dataframe
-<http://stackoverflow.com/questions/25210819/speeding-up-data-import-function-pandas-and-appending-to-dataframe/25210900#25210900>`__
-
`Reading a csv chunk-by-chunk
<http://stackoverflow.com/questions/11622652/large-persistent-dataframe-in-pandas/12193309#12193309>`__
@@ -943,6 +940,41 @@ using that handle to read.
`Write a multi-row index CSV without writing duplicates
<http://stackoverflow.com/questions/17349574/pandas-write-multiindex-rows-with-to-csv>`__
+.. _cookbook.csv.multiple_files:
+
+Reading multiple files to create a single DataFrame
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The best way to combine multiple files into a single DataFrame is to read the individual files one by one, collect
+the resulting frames in a list, and then combine the frames in the list using :func:`pd.concat`:
+
+.. ipython:: python
+
+ for i in range(3):
+ data = pd.DataFrame(np.random.randn(10, 4))
+ data.to_csv('file_{}.csv'.format(i))
+
+ files = ['file_0.csv', 'file_1.csv', 'file_2.csv']
+ result = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)
+
+You can use the same approach to read all files matching a pattern. Here is an example using ``glob``:
+
+.. ipython:: python
+
+ import glob
+ files = glob.glob('file_*.csv')
+ result = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)
+
+Finally, this strategy will work with the other ``pd.read_*(...)`` functions described in the :ref:`io docs<io>`.
+
+.. ipython:: python
+   :suppress:
+
+   for i in range(3):
+       os.remove('file_{}.csv'.format(i))
+
+Parsing date components in multi-columns
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
Parsing date components in multi-columns is faster with a format
.. code-block:: python
diff --git a/doc/source/io.rst b/doc/source/io.rst
index 2b3d2895333d3..9692766505d7a 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -1439,6 +1439,14 @@ class of the csv module. For this, you have to specify ``sep=None``.
print(open('tmp2.sv').read())
pd.read_csv('tmp2.sv', sep=None, engine='python')
+.. _io.multiple_files:
+
+Reading multiple files to create a single DataFrame
+'''''''''''''''''''''''''''''''''''''''''''''''''''
+
+It's best to use :func:`~pandas.concat` to combine multiple files.
+See the :ref:`cookbook<cookbook.csv.multiple_files>` for an example.
+
.. _io.chunking:
Iterating through files chunk by chunk
| - [x] closes #15904
- [x] tests added / passed (N/A for docs)
- [x] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff``
- [x] whatsnew entry (N/A for docs)
| https://api.github.com/repos/pandas-dev/pandas/pulls/16166 | 2017-04-28T01:46:28Z | 2017-04-30T11:09:46Z | 2017-04-30T11:09:46Z | 2017-04-30T11:10:30Z |
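The cookbook recipe this PR adds can be exercised end to end. The sketch below follows the documented pattern (write a few CSVs, glob them, read each into a frame, combine with `pd.concat`); the temporary directory and cleanup are additions for self-containment, not part of the recipe itself:

```python
import glob
import os
import tempfile

import numpy as np
import pandas as pd

# Write three small CSV files, as in the cookbook example.
tmpdir = tempfile.mkdtemp()
for i in range(3):
    data = pd.DataFrame(np.random.randn(10, 4))
    data.to_csv(os.path.join(tmpdir, 'file_{}.csv'.format(i)), index=False)

# Glob the matching files, read each one, and concatenate into one frame.
files = sorted(glob.glob(os.path.join(tmpdir, 'file_*.csv')))
result = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)
print(result.shape)  # (30, 4)

# Clean up the temporary files.
for f in files:
    os.remove(f)
os.rmdir(tmpdir)
```

`ignore_index=True` discards the per-file row labels so the combined frame gets a fresh `RangeIndex`, which is usually what you want when stacking homogeneous files.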
DEPR: deprecate is_any_int_dtype and is_floating_dtype from pandas.api.types | diff --git a/doc/source/api.rst b/doc/source/api.rst
index ab14c2758ae49..7102258318b5b 100644
--- a/doc/source/api.rst
+++ b/doc/source/api.rst
@@ -1939,3 +1939,61 @@ Data types related functionality
api.types.union_categoricals
api.types.infer_dtype
api.types.pandas_dtype
+
+Dtype introspection
+
+.. autosummary::
+ :toctree: generated/
+
+ api.types.is_bool_dtype
+ api.types.is_categorical_dtype
+ api.types.is_complex_dtype
+ api.types.is_datetime64_any_dtype
+ api.types.is_datetime64_dtype
+ api.types.is_datetime64_ns_dtype
+ api.types.is_datetime64tz_dtype
+ api.types.is_extension_type
+ api.types.is_float_dtype
+ api.types.is_int64_dtype
+ api.types.is_integer_dtype
+ api.types.is_interval_dtype
+ api.types.is_numeric_dtype
+ api.types.is_object_dtype
+ api.types.is_period_dtype
+ api.types.is_signed_integer_dtype
+ api.types.is_string_dtype
+ api.types.is_timedelta64_dtype
+ api.types.is_timedelta64_ns_dtype
+ api.types.is_unsigned_integer_dtype
+ api.types.is_sparse
+
+Iterable introspection
+
+.. autosummary::
+   :toctree: generated/
+
+   api.types.is_dict_like
+   api.types.is_file_like
+   api.types.is_list_like
+   api.types.is_named_tuple
+   api.types.is_iterator
+   api.types.is_sequence
+
+Scalar introspection
+
+.. autosummary::
+ :toctree: generated/
+
+ api.types.is_bool
+ api.types.is_categorical
+ api.types.is_complex
+ api.types.is_datetimetz
+ api.types.is_float
+ api.types.is_hashable
+ api.types.is_integer
+ api.types.is_interval
+ api.types.is_number
+ api.types.is_period
+ api.types.is_re
+ api.types.is_re_compilable
+ api.types.is_scalar
diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
index 86a598183517c..720e4a588034e 100644
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -1520,6 +1520,7 @@ Other Deprecations
* ``pd.match()``, is removed.
* ``pd.groupby()``, replaced by using the ``.groupby()`` method directly on a ``Series/DataFrame``
* ``pd.get_store()``, replaced by a direct call to ``pd.HDFStore(...)``
+- ``is_any_int_dtype`` and ``is_floating_dtype`` are deprecated from ``pandas.api.types`` (:issue:`16042`)
.. _whatsnew_0200.prior_deprecations:
diff --git a/pandas/api/types/__init__.py b/pandas/api/types/__init__.py
index 8bda0c75f8540..438e4afa3f580 100644
--- a/pandas/api/types/__init__.py
+++ b/pandas/api/types/__init__.py
@@ -7,4 +7,3 @@
IntervalDtype)
from pandas.core.dtypes.concat import union_categoricals # noqa
from pandas._libs.lib import infer_dtype # noqa
-del np # noqa
diff --git a/pandas/core/dtypes/api.py b/pandas/core/dtypes/api.py
index 6dbd3dc6b640c..242c62125664c 100644
--- a/pandas/core/dtypes/api.py
+++ b/pandas/core/dtypes/api.py
@@ -1,6 +1,6 @@
# flake8: noqa
-import numpy as np
+import sys
from .common import (pandas_dtype,
is_dtype_equal,
@@ -40,12 +40,10 @@
is_float,
is_complex,
is_number,
- is_any_int_dtype,
is_integer_dtype,
is_int64_dtype,
is_numeric_dtype,
is_float_dtype,
- is_floating_dtype,
is_bool_dtype,
is_complex_dtype,
is_signed_integer_dtype,
@@ -61,3 +59,24 @@
is_hashable,
is_named_tuple,
is_sequence)
+
+
+# deprecated
+m = sys.modules['pandas.core.dtypes.api']
+
+for t in ['is_any_int_dtype', 'is_floating_dtype']:
+
+ def outer(t=t):
+
+ def wrapper(arr_or_dtype):
+ import warnings
+ import pandas
+ warnings.warn("{t} is deprecated and will be "
+ "removed in a future version".format(t=t),
+ FutureWarning, stacklevel=3)
+ return getattr(pandas.core.dtypes.common, t)(arr_or_dtype)
+ return wrapper
+
+ setattr(m, t, outer(t))
+
+del sys, m, t, outer
diff --git a/pandas/tests/api/test_types.py b/pandas/tests/api/test_types.py
index 6b37501045d40..b9198c42e2eff 100644
--- a/pandas/tests/api/test_types.py
+++ b/pandas/tests/api/test_types.py
@@ -15,13 +15,13 @@
class TestTypes(Base, tm.TestCase):
- allowed = ['is_any_int_dtype', 'is_bool', 'is_bool_dtype',
+ allowed = ['is_bool', 'is_bool_dtype',
'is_categorical', 'is_categorical_dtype', 'is_complex',
'is_complex_dtype', 'is_datetime64_any_dtype',
'is_datetime64_dtype', 'is_datetime64_ns_dtype',
'is_datetime64tz_dtype', 'is_datetimetz', 'is_dtype_equal',
'is_extension_type', 'is_float', 'is_float_dtype',
- 'is_floating_dtype', 'is_int64_dtype', 'is_integer',
+ 'is_int64_dtype', 'is_integer',
'is_integer_dtype', 'is_number', 'is_numeric_dtype',
'is_object_dtype', 'is_scalar', 'is_sparse',
'is_string_dtype', 'is_signed_integer_dtype',
@@ -33,12 +33,13 @@ class TestTypes(Base, tm.TestCase):
'is_list_like', 'is_hashable',
'is_named_tuple', 'is_sequence',
'pandas_dtype', 'union_categoricals', 'infer_dtype']
+ deprecated = ['is_any_int_dtype', 'is_floating_dtype']
dtypes = ['CategoricalDtype', 'DatetimeTZDtype',
'PeriodDtype', 'IntervalDtype']
def test_types(self):
- self.check(types, self.allowed + self.dtypes)
+ self.check(types, self.allowed + self.dtypes + self.deprecated)
def check_deprecation(self, fold, fnew):
with tm.assert_produces_warning(DeprecationWarning):
@@ -87,6 +88,13 @@ def test_removed_from_core_common(self):
'ensure_float']:
pytest.raises(AttributeError, lambda: getattr(com, t))
+ def test_deprecated_from_api_types(self):
+
+ for t in ['is_any_int_dtype', 'is_floating_dtype']:
+ with tm.assert_produces_warning(FutureWarning,
+ check_stacklevel=False):
+ getattr(types, t)(1)
+
def test_moved_infer_dtype():
| closes #16042
| https://api.github.com/repos/pandas-dev/pandas/pulls/16163 | 2017-04-27T22:47:38Z | 2017-04-28T10:20:16Z | 2017-04-28T10:20:16Z | 2017-04-28T10:32:01Z |
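The diff above installs the deprecated names by wrapping each function so it warns before delegating to the real implementation. A simplified, self-contained sketch of that pattern — the function name and the stand-in dtype check here are illustrative only, not pandas code:

```python
import warnings


def _real_is_any_int_dtype(arr_or_dtype):
    # Stand-in for the real dtype check (illustrative only).
    return isinstance(arr_or_dtype, int)


def _deprecate(name, func):
    """Return a wrapper that emits a FutureWarning, then delegates."""
    def wrapper(arr_or_dtype):
        warnings.warn("{name} is deprecated and will be removed in a "
                      "future version".format(name=name),
                      FutureWarning, stacklevel=2)
        return func(arr_or_dtype)
    wrapper.__name__ = name
    return wrapper


# The PR does this in a loop over the deprecated names, binding each
# name with a default argument so the closure captures the right one.
is_any_int_dtype = _deprecate('is_any_int_dtype', _real_is_any_int_dtype)

with warnings.catch_warnings(record=True) as w:
    warnings.simplefilter("always")
    assert is_any_int_dtype(1)
    assert issubclass(w[-1].category, FutureWarning)
```

The closure-binding detail (`def outer(t=t)` in the diff) matters because a plain closure over the loop variable would make every wrapper warn with the last name in the list.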
MAINT: Remove self.assertTrue from testing | diff --git a/pandas/tests/computation/test_eval.py b/pandas/tests/computation/test_eval.py
index 52061f7f1e0ae..827a4668ed0bc 100644
--- a/pandas/tests/computation/test_eval.py
+++ b/pandas/tests/computation/test_eval.py
@@ -662,17 +662,17 @@ def test_identical(self):
x = 1
result = pd.eval('x', engine=self.engine, parser=self.parser)
self.assertEqual(result, 1)
- self.assertTrue(is_scalar(result))
+ assert is_scalar(result)
x = 1.5
result = pd.eval('x', engine=self.engine, parser=self.parser)
self.assertEqual(result, 1.5)
- self.assertTrue(is_scalar(result))
+ assert is_scalar(result)
x = False
result = pd.eval('x', engine=self.engine, parser=self.parser)
self.assertEqual(result, False)
- self.assertTrue(is_scalar(result))
+ assert is_scalar(result)
x = np.array([1])
result = pd.eval('x', engine=self.engine, parser=self.parser)
@@ -708,7 +708,7 @@ def test_float_truncation(self):
1000000000.0015]})
cutoff = 1000000000.0006
result = df.query("A < %.4f" % cutoff)
- self.assertTrue(result.empty)
+ assert result.empty
cutoff = 1000000000.0010
result = df.query("A > %.4f" % cutoff)
@@ -1281,7 +1281,7 @@ def f():
df.eval('a = a + b', inplace=True)
result = old_a + df.b
assert_series_equal(result, df.a, check_names=False)
- self.assertTrue(result.name is None)
+ assert result.name is None
f()
@@ -1435,11 +1435,11 @@ def test_simple_in_ops(self):
if self.parser != 'python':
res = pd.eval('1 in [1, 2]', engine=self.engine,
parser=self.parser)
- self.assertTrue(res)
+ assert res
res = pd.eval('2 in (1, 2)', engine=self.engine,
parser=self.parser)
- self.assertTrue(res)
+ assert res
res = pd.eval('3 in (1, 2)', engine=self.engine,
parser=self.parser)
@@ -1447,23 +1447,23 @@ def test_simple_in_ops(self):
res = pd.eval('3 not in (1, 2)', engine=self.engine,
parser=self.parser)
- self.assertTrue(res)
+ assert res
res = pd.eval('[3] not in (1, 2)', engine=self.engine,
parser=self.parser)
- self.assertTrue(res)
+ assert res
res = pd.eval('[3] in ([3], 2)', engine=self.engine,
parser=self.parser)
- self.assertTrue(res)
+ assert res
res = pd.eval('[[3]] in [[[3]], 2]', engine=self.engine,
parser=self.parser)
- self.assertTrue(res)
+ assert res
res = pd.eval('(3,) in [(3,), 2]', engine=self.engine,
parser=self.parser)
- self.assertTrue(res)
+ assert res
res = pd.eval('(3,) not in [(3,), 2]', engine=self.engine,
parser=self.parser)
@@ -1471,7 +1471,7 @@ def test_simple_in_ops(self):
res = pd.eval('[(3,)] in [[(3,)], 2]', engine=self.engine,
parser=self.parser)
- self.assertTrue(res)
+ assert res
else:
with pytest.raises(NotImplementedError):
pd.eval('1 in [1, 2]', engine=self.engine, parser=self.parser)
diff --git a/pandas/tests/dtypes/test_cast.py b/pandas/tests/dtypes/test_cast.py
index bf3668111b9f9..22640729c262f 100644
--- a/pandas/tests/dtypes/test_cast.py
+++ b/pandas/tests/dtypes/test_cast.py
@@ -161,7 +161,7 @@ class TestMaybe(tm.TestCase):
def test_maybe_convert_string_to_array(self):
result = maybe_convert_string_to_object('x')
tm.assert_numpy_array_equal(result, np.array(['x'], dtype=object))
- self.assertTrue(result.dtype == object)
+ assert result.dtype == object
result = maybe_convert_string_to_object(1)
self.assertEqual(result, 1)
@@ -169,19 +169,19 @@ def test_maybe_convert_string_to_array(self):
arr = np.array(['x', 'y'], dtype=str)
result = maybe_convert_string_to_object(arr)
tm.assert_numpy_array_equal(result, np.array(['x', 'y'], dtype=object))
- self.assertTrue(result.dtype == object)
+ assert result.dtype == object
# unicode
arr = np.array(['x', 'y']).astype('U')
result = maybe_convert_string_to_object(arr)
tm.assert_numpy_array_equal(result, np.array(['x', 'y'], dtype=object))
- self.assertTrue(result.dtype == object)
+ assert result.dtype == object
# object
arr = np.array(['x', 2], dtype=object)
result = maybe_convert_string_to_object(arr)
tm.assert_numpy_array_equal(result, np.array(['x', 2], dtype=object))
- self.assertTrue(result.dtype == object)
+ assert result.dtype == object
def test_maybe_convert_scalar(self):
@@ -220,17 +220,17 @@ def test_maybe_convert_objects_copy(self):
values = np.array([1, 2])
out = maybe_convert_objects(values, copy=False)
- self.assertTrue(values is out)
+ assert values is out
out = maybe_convert_objects(values, copy=True)
- self.assertTrue(values is not out)
+ assert values is not out
values = np.array(['apply', 'banana'])
out = maybe_convert_objects(values, copy=False)
- self.assertTrue(values is out)
+ assert values is out
out = maybe_convert_objects(values, copy=True)
- self.assertTrue(values is not out)
+ assert values is not out
class TestCommonTypes(tm.TestCase):
diff --git a/pandas/tests/dtypes/test_dtypes.py b/pandas/tests/dtypes/test_dtypes.py
index 718efc08394b1..b02c846d50c89 100644
--- a/pandas/tests/dtypes/test_dtypes.py
+++ b/pandas/tests/dtypes/test_dtypes.py
@@ -50,45 +50,45 @@ def test_hash_vs_equality(self):
# make sure that we satisfy is semantics
dtype = self.dtype
dtype2 = CategoricalDtype()
- self.assertTrue(dtype == dtype2)
- self.assertTrue(dtype2 == dtype)
- self.assertTrue(dtype is dtype2)
- self.assertTrue(dtype2 is dtype)
- self.assertTrue(hash(dtype) == hash(dtype2))
+ assert dtype == dtype2
+ assert dtype2 == dtype
+ assert dtype is dtype2
+ assert dtype2 is dtype
+ assert hash(dtype) == hash(dtype2)
def test_equality(self):
- self.assertTrue(is_dtype_equal(self.dtype, 'category'))
- self.assertTrue(is_dtype_equal(self.dtype, CategoricalDtype()))
+ assert is_dtype_equal(self.dtype, 'category')
+ assert is_dtype_equal(self.dtype, CategoricalDtype())
assert not is_dtype_equal(self.dtype, 'foo')
def test_construction_from_string(self):
result = CategoricalDtype.construct_from_string('category')
- self.assertTrue(is_dtype_equal(self.dtype, result))
+ assert is_dtype_equal(self.dtype, result)
pytest.raises(
TypeError, lambda: CategoricalDtype.construct_from_string('foo'))
def test_is_dtype(self):
- self.assertTrue(CategoricalDtype.is_dtype(self.dtype))
- self.assertTrue(CategoricalDtype.is_dtype('category'))
- self.assertTrue(CategoricalDtype.is_dtype(CategoricalDtype()))
+ assert CategoricalDtype.is_dtype(self.dtype)
+ assert CategoricalDtype.is_dtype('category')
+ assert CategoricalDtype.is_dtype(CategoricalDtype())
assert not CategoricalDtype.is_dtype('foo')
assert not CategoricalDtype.is_dtype(np.float64)
def test_basic(self):
- self.assertTrue(is_categorical_dtype(self.dtype))
+ assert is_categorical_dtype(self.dtype)
factor = Categorical(['a', 'b', 'b', 'a', 'a', 'c', 'c', 'c'])
s = Series(factor, name='A')
# dtypes
- self.assertTrue(is_categorical_dtype(s.dtype))
- self.assertTrue(is_categorical_dtype(s))
+ assert is_categorical_dtype(s.dtype)
+ assert is_categorical_dtype(s)
assert not is_categorical_dtype(np.dtype('float64'))
- self.assertTrue(is_categorical(s.dtype))
- self.assertTrue(is_categorical(s))
+ assert is_categorical(s.dtype)
+ assert is_categorical(s)
assert not is_categorical(np.dtype('float64'))
assert not is_categorical(1.0)
@@ -103,14 +103,14 @@ def test_hash_vs_equality(self):
dtype = self.dtype
dtype2 = DatetimeTZDtype('ns', 'US/Eastern')
dtype3 = DatetimeTZDtype(dtype2)
- self.assertTrue(dtype == dtype2)
- self.assertTrue(dtype2 == dtype)
- self.assertTrue(dtype3 == dtype)
- self.assertTrue(dtype is dtype2)
- self.assertTrue(dtype2 is dtype)
- self.assertTrue(dtype3 is dtype)
- self.assertTrue(hash(dtype) == hash(dtype2))
- self.assertTrue(hash(dtype) == hash(dtype3))
+ assert dtype == dtype2
+ assert dtype2 == dtype
+ assert dtype3 == dtype
+ assert dtype is dtype2
+ assert dtype2 is dtype
+ assert dtype3 is dtype
+ assert hash(dtype) == hash(dtype2)
+ assert hash(dtype) == hash(dtype3)
def test_construction(self):
pytest.raises(ValueError,
@@ -120,8 +120,8 @@ def test_subclass(self):
a = DatetimeTZDtype('datetime64[ns, US/Eastern]')
b = DatetimeTZDtype('datetime64[ns, CET]')
- self.assertTrue(issubclass(type(a), type(a)))
- self.assertTrue(issubclass(type(a), type(b)))
+ assert issubclass(type(a), type(a))
+ assert issubclass(type(a), type(b))
def test_coerce_to_dtype(self):
self.assertEqual(_coerce_to_dtype('datetime64[ns, US/Eastern]'),
@@ -130,61 +130,58 @@ def test_coerce_to_dtype(self):
DatetimeTZDtype('ns', 'Asia/Tokyo'))
def test_compat(self):
- self.assertTrue(is_datetime64tz_dtype(self.dtype))
- self.assertTrue(is_datetime64tz_dtype('datetime64[ns, US/Eastern]'))
- self.assertTrue(is_datetime64_any_dtype(self.dtype))
- self.assertTrue(is_datetime64_any_dtype('datetime64[ns, US/Eastern]'))
- self.assertTrue(is_datetime64_ns_dtype(self.dtype))
- self.assertTrue(is_datetime64_ns_dtype('datetime64[ns, US/Eastern]'))
+ assert is_datetime64tz_dtype(self.dtype)
+ assert is_datetime64tz_dtype('datetime64[ns, US/Eastern]')
+ assert is_datetime64_any_dtype(self.dtype)
+ assert is_datetime64_any_dtype('datetime64[ns, US/Eastern]')
+ assert is_datetime64_ns_dtype(self.dtype)
+ assert is_datetime64_ns_dtype('datetime64[ns, US/Eastern]')
assert not is_datetime64_dtype(self.dtype)
assert not is_datetime64_dtype('datetime64[ns, US/Eastern]')
def test_construction_from_string(self):
result = DatetimeTZDtype('datetime64[ns, US/Eastern]')
- self.assertTrue(is_dtype_equal(self.dtype, result))
+ assert is_dtype_equal(self.dtype, result)
result = DatetimeTZDtype.construct_from_string(
'datetime64[ns, US/Eastern]')
- self.assertTrue(is_dtype_equal(self.dtype, result))
+ assert is_dtype_equal(self.dtype, result)
pytest.raises(TypeError,
lambda: DatetimeTZDtype.construct_from_string('foo'))
def test_is_dtype(self):
assert not DatetimeTZDtype.is_dtype(None)
- self.assertTrue(DatetimeTZDtype.is_dtype(self.dtype))
- self.assertTrue(DatetimeTZDtype.is_dtype('datetime64[ns, US/Eastern]'))
+ assert DatetimeTZDtype.is_dtype(self.dtype)
+ assert DatetimeTZDtype.is_dtype('datetime64[ns, US/Eastern]')
assert not DatetimeTZDtype.is_dtype('foo')
- self.assertTrue(DatetimeTZDtype.is_dtype(DatetimeTZDtype(
- 'ns', 'US/Pacific')))
+ assert DatetimeTZDtype.is_dtype(DatetimeTZDtype('ns', 'US/Pacific'))
assert not DatetimeTZDtype.is_dtype(np.float64)
def test_equality(self):
- self.assertTrue(is_dtype_equal(self.dtype,
- 'datetime64[ns, US/Eastern]'))
- self.assertTrue(is_dtype_equal(self.dtype, DatetimeTZDtype(
- 'ns', 'US/Eastern')))
+ assert is_dtype_equal(self.dtype, 'datetime64[ns, US/Eastern]')
+ assert is_dtype_equal(self.dtype, DatetimeTZDtype('ns', 'US/Eastern'))
assert not is_dtype_equal(self.dtype, 'foo')
assert not is_dtype_equal(self.dtype, DatetimeTZDtype('ns', 'CET'))
assert not is_dtype_equal(DatetimeTZDtype('ns', 'US/Eastern'),
DatetimeTZDtype('ns', 'US/Pacific'))
# numpy compat
- self.assertTrue(is_dtype_equal(np.dtype("M8[ns]"), "datetime64[ns]"))
+ assert is_dtype_equal(np.dtype("M8[ns]"), "datetime64[ns]")
def test_basic(self):
- self.assertTrue(is_datetime64tz_dtype(self.dtype))
+ assert is_datetime64tz_dtype(self.dtype)
dr = date_range('20130101', periods=3, tz='US/Eastern')
s = Series(dr, name='A')
# dtypes
- self.assertTrue(is_datetime64tz_dtype(s.dtype))
- self.assertTrue(is_datetime64tz_dtype(s))
+ assert is_datetime64tz_dtype(s.dtype)
+ assert is_datetime64tz_dtype(s)
assert not is_datetime64tz_dtype(np.dtype('float64'))
assert not is_datetime64tz_dtype(1.0)
- self.assertTrue(is_datetimetz(s))
- self.assertTrue(is_datetimetz(s.dtype))
+ assert is_datetimetz(s)
+ assert is_datetimetz(s.dtype)
assert not is_datetimetz(np.dtype('float64'))
assert not is_datetimetz(1.0)
@@ -192,11 +189,11 @@ def test_dst(self):
dr1 = date_range('2013-01-01', periods=3, tz='US/Eastern')
s1 = Series(dr1, name='A')
- self.assertTrue(is_datetimetz(s1))
+ assert is_datetimetz(s1)
dr2 = date_range('2013-08-01', periods=3, tz='US/Eastern')
s2 = Series(dr2, name='A')
- self.assertTrue(is_datetimetz(s2))
+ assert is_datetimetz(s2)
self.assertEqual(s1.dtype, s2.dtype)
def test_parser(self):
@@ -226,25 +223,25 @@ def test_construction(self):
for s in ['period[D]', 'Period[D]', 'D']:
dt = PeriodDtype(s)
self.assertEqual(dt.freq, pd.tseries.offsets.Day())
- self.assertTrue(is_period_dtype(dt))
+ assert is_period_dtype(dt)
for s in ['period[3D]', 'Period[3D]', '3D']:
dt = PeriodDtype(s)
self.assertEqual(dt.freq, pd.tseries.offsets.Day(3))
- self.assertTrue(is_period_dtype(dt))
+ assert is_period_dtype(dt)
for s in ['period[26H]', 'Period[26H]', '26H',
'period[1D2H]', 'Period[1D2H]', '1D2H']:
dt = PeriodDtype(s)
self.assertEqual(dt.freq, pd.tseries.offsets.Hour(26))
- self.assertTrue(is_period_dtype(dt))
+ assert is_period_dtype(dt)
def test_subclass(self):
a = PeriodDtype('period[D]')
b = PeriodDtype('period[3D]')
- self.assertTrue(issubclass(type(a), type(a)))
- self.assertTrue(issubclass(type(a), type(b)))
+ assert issubclass(type(a), type(a))
+ assert issubclass(type(a), type(b))
def test_identity(self):
assert PeriodDtype('period[D]') == PeriodDtype('period[D]')
@@ -270,9 +267,9 @@ def test_compat(self):
def test_construction_from_string(self):
result = PeriodDtype('period[D]')
- self.assertTrue(is_dtype_equal(self.dtype, result))
+ assert is_dtype_equal(self.dtype, result)
result = PeriodDtype.construct_from_string('period[D]')
- self.assertTrue(is_dtype_equal(self.dtype, result))
+ assert is_dtype_equal(self.dtype, result)
with pytest.raises(TypeError):
PeriodDtype.construct_from_string('foo')
with pytest.raises(TypeError):
@@ -286,14 +283,14 @@ def test_construction_from_string(self):
PeriodDtype.construct_from_string('datetime64[ns, US/Eastern]')
def test_is_dtype(self):
- self.assertTrue(PeriodDtype.is_dtype(self.dtype))
- self.assertTrue(PeriodDtype.is_dtype('period[D]'))
- self.assertTrue(PeriodDtype.is_dtype('period[3D]'))
- self.assertTrue(PeriodDtype.is_dtype(PeriodDtype('3D')))
- self.assertTrue(PeriodDtype.is_dtype('period[U]'))
- self.assertTrue(PeriodDtype.is_dtype('period[S]'))
- self.assertTrue(PeriodDtype.is_dtype(PeriodDtype('U')))
- self.assertTrue(PeriodDtype.is_dtype(PeriodDtype('S')))
+ assert PeriodDtype.is_dtype(self.dtype)
+ assert PeriodDtype.is_dtype('period[D]')
+ assert PeriodDtype.is_dtype('period[3D]')
+ assert PeriodDtype.is_dtype(PeriodDtype('3D'))
+ assert PeriodDtype.is_dtype('period[U]')
+ assert PeriodDtype.is_dtype('period[S]')
+ assert PeriodDtype.is_dtype(PeriodDtype('U'))
+ assert PeriodDtype.is_dtype(PeriodDtype('S'))
assert not PeriodDtype.is_dtype('D')
assert not PeriodDtype.is_dtype('3D')
@@ -305,22 +302,22 @@ def test_is_dtype(self):
assert not PeriodDtype.is_dtype(np.float64)
def test_equality(self):
- self.assertTrue(is_dtype_equal(self.dtype, 'period[D]'))
- self.assertTrue(is_dtype_equal(self.dtype, PeriodDtype('D')))
- self.assertTrue(is_dtype_equal(self.dtype, PeriodDtype('D')))
- self.assertTrue(is_dtype_equal(PeriodDtype('D'), PeriodDtype('D')))
+ assert is_dtype_equal(self.dtype, 'period[D]')
+ assert is_dtype_equal(self.dtype, PeriodDtype('D'))
+ assert is_dtype_equal(self.dtype, PeriodDtype('D'))
+ assert is_dtype_equal(PeriodDtype('D'), PeriodDtype('D'))
assert not is_dtype_equal(self.dtype, 'D')
assert not is_dtype_equal(PeriodDtype('D'), PeriodDtype('2D'))
def test_basic(self):
- self.assertTrue(is_period_dtype(self.dtype))
+ assert is_period_dtype(self.dtype)
pidx = pd.period_range('2013-01-01 09:00', periods=5, freq='H')
- self.assertTrue(is_period_dtype(pidx.dtype))
- self.assertTrue(is_period_dtype(pidx))
- self.assertTrue(is_period(pidx))
+ assert is_period_dtype(pidx.dtype)
+ assert is_period_dtype(pidx)
+ assert is_period(pidx)
s = Series(pidx, name='A')
# dtypes
@@ -328,7 +325,7 @@ def test_basic(self):
# is_period checks period_arraylike
assert not is_period_dtype(s.dtype)
assert not is_period_dtype(s)
- self.assertTrue(is_period(s))
+ assert is_period(s)
assert not is_period_dtype(np.dtype('float64'))
assert not is_period_dtype(1.0)
@@ -358,33 +355,33 @@ def test_construction(self):
for s in ['interval[int64]', 'Interval[int64]', 'int64']:
i = IntervalDtype(s)
self.assertEqual(i.subtype, np.dtype('int64'))
- self.assertTrue(is_interval_dtype(i))
+ assert is_interval_dtype(i)
def test_construction_generic(self):
# generic
i = IntervalDtype('interval')
assert i.subtype is None
- self.assertTrue(is_interval_dtype(i))
- self.assertTrue(str(i) == 'interval')
+ assert is_interval_dtype(i)
+ assert str(i) == 'interval'
i = IntervalDtype()
assert i.subtype is None
- self.assertTrue(is_interval_dtype(i))
- self.assertTrue(str(i) == 'interval')
+ assert is_interval_dtype(i)
+ assert str(i) == 'interval'
def test_subclass(self):
a = IntervalDtype('interval[int64]')
b = IntervalDtype('interval[int64]')
- self.assertTrue(issubclass(type(a), type(a)))
- self.assertTrue(issubclass(type(a), type(b)))
+ assert issubclass(type(a), type(a))
+ assert issubclass(type(a), type(b))
def test_is_dtype(self):
- self.assertTrue(IntervalDtype.is_dtype(self.dtype))
- self.assertTrue(IntervalDtype.is_dtype('interval'))
- self.assertTrue(IntervalDtype.is_dtype(IntervalDtype('float64')))
- self.assertTrue(IntervalDtype.is_dtype(IntervalDtype('int64')))
- self.assertTrue(IntervalDtype.is_dtype(IntervalDtype(np.int64)))
+ assert IntervalDtype.is_dtype(self.dtype)
+ assert IntervalDtype.is_dtype('interval')
+ assert IntervalDtype.is_dtype(IntervalDtype('float64'))
+ assert IntervalDtype.is_dtype(IntervalDtype('int64'))
+ assert IntervalDtype.is_dtype(IntervalDtype(np.int64))
assert not IntervalDtype.is_dtype('D')
assert not IntervalDtype.is_dtype('3D')
@@ -405,9 +402,9 @@ def test_coerce_to_dtype(self):
def test_construction_from_string(self):
result = IntervalDtype('interval[int64]')
- self.assertTrue(is_dtype_equal(self.dtype, result))
+ assert is_dtype_equal(self.dtype, result)
result = IntervalDtype.construct_from_string('interval[int64]')
- self.assertTrue(is_dtype_equal(self.dtype, result))
+ assert is_dtype_equal(self.dtype, result)
with pytest.raises(TypeError):
IntervalDtype.construct_from_string('foo')
with pytest.raises(TypeError):
@@ -416,23 +413,22 @@ def test_construction_from_string(self):
IntervalDtype.construct_from_string('foo[int64]')
def test_equality(self):
- self.assertTrue(is_dtype_equal(self.dtype, 'interval[int64]'))
- self.assertTrue(is_dtype_equal(self.dtype, IntervalDtype('int64')))
- self.assertTrue(is_dtype_equal(self.dtype, IntervalDtype('int64')))
- self.assertTrue(is_dtype_equal(IntervalDtype('int64'),
- IntervalDtype('int64')))
+ assert is_dtype_equal(self.dtype, 'interval[int64]')
+ assert is_dtype_equal(self.dtype, IntervalDtype('int64'))
+ assert is_dtype_equal(self.dtype, IntervalDtype('int64'))
+ assert is_dtype_equal(IntervalDtype('int64'), IntervalDtype('int64'))
assert not is_dtype_equal(self.dtype, 'int64')
assert not is_dtype_equal(IntervalDtype('int64'),
IntervalDtype('float64'))
def test_basic(self):
- self.assertTrue(is_interval_dtype(self.dtype))
+ assert is_interval_dtype(self.dtype)
ii = IntervalIndex.from_breaks(range(3))
- self.assertTrue(is_interval_dtype(ii.dtype))
- self.assertTrue(is_interval_dtype(ii))
+ assert is_interval_dtype(ii.dtype)
+ assert is_interval_dtype(ii)
s = Series(ii, name='A')
@@ -442,12 +438,11 @@ def test_basic(self):
assert not is_interval_dtype(s)
def test_basic_dtype(self):
- self.assertTrue(is_interval_dtype('interval[int64]'))
- self.assertTrue(is_interval_dtype(IntervalIndex.from_tuples([(0, 1)])))
- self.assertTrue(is_interval_dtype
- (IntervalIndex.from_breaks(np.arange(4))))
- self.assertTrue(is_interval_dtype(
- IntervalIndex.from_breaks(date_range('20130101', periods=3))))
+ assert is_interval_dtype('interval[int64]')
+ assert is_interval_dtype(IntervalIndex.from_tuples([(0, 1)]))
+ assert is_interval_dtype(IntervalIndex.from_breaks(np.arange(4)))
+ assert is_interval_dtype(IntervalIndex.from_breaks(
+ date_range('20130101', periods=3)))
assert not is_interval_dtype('U')
assert not is_interval_dtype('S')
assert not is_interval_dtype('foo')
diff --git a/pandas/tests/dtypes/test_inference.py b/pandas/tests/dtypes/test_inference.py
index 8dcf75e8a1aec..035de283ae500 100644
--- a/pandas/tests/dtypes/test_inference.py
+++ b/pandas/tests/dtypes/test_inference.py
@@ -211,14 +211,14 @@ def test_infer_dtype_bytes(self):
def test_isinf_scalar(self):
# GH 11352
- self.assertTrue(lib.isposinf_scalar(float('inf')))
- self.assertTrue(lib.isposinf_scalar(np.inf))
+ assert lib.isposinf_scalar(float('inf'))
+ assert lib.isposinf_scalar(np.inf)
assert not lib.isposinf_scalar(-np.inf)
assert not lib.isposinf_scalar(1)
assert not lib.isposinf_scalar('a')
- self.assertTrue(lib.isneginf_scalar(float('-inf')))
- self.assertTrue(lib.isneginf_scalar(-np.inf))
+ assert lib.isneginf_scalar(float('-inf'))
+ assert lib.isneginf_scalar(-np.inf)
assert not lib.isneginf_scalar(np.inf)
assert not lib.isneginf_scalar(1)
assert not lib.isneginf_scalar('a')
@@ -275,17 +275,17 @@ def test_maybe_convert_numeric_post_floatify_nan(self):
def test_convert_infs(self):
arr = np.array(['inf', 'inf', 'inf'], dtype='O')
result = lib.maybe_convert_numeric(arr, set(), False)
- self.assertTrue(result.dtype == np.float64)
+ assert result.dtype == np.float64
arr = np.array(['-inf', '-inf', '-inf'], dtype='O')
result = lib.maybe_convert_numeric(arr, set(), False)
- self.assertTrue(result.dtype == np.float64)
+ assert result.dtype == np.float64
def test_scientific_no_exponent(self):
# See PR 12215
arr = np.array(['42E', '2E', '99e', '6e'], dtype='O')
result = lib.maybe_convert_numeric(arr, set(), False, True)
- self.assertTrue(np.all(np.isnan(result)))
+ assert np.all(np.isnan(result))
def test_convert_non_hashable(self):
# GH13324
@@ -637,8 +637,8 @@ def test_infer_dtype_all_nan_nat_like(self):
def test_is_datetimelike_array_all_nan_nat_like(self):
arr = np.array([np.nan, pd.NaT, np.datetime64('nat')])
- self.assertTrue(lib.is_datetime_array(arr))
- self.assertTrue(lib.is_datetime64_array(arr))
+ assert lib.is_datetime_array(arr)
+ assert lib.is_datetime64_array(arr)
assert not lib.is_timedelta_array(arr)
assert not lib.is_timedelta64_array(arr)
assert not lib.is_timedelta_or_timedelta64_array(arr)
@@ -646,9 +646,9 @@ def test_is_datetimelike_array_all_nan_nat_like(self):
arr = np.array([np.nan, pd.NaT, np.timedelta64('nat')])
assert not lib.is_datetime_array(arr)
assert not lib.is_datetime64_array(arr)
- self.assertTrue(lib.is_timedelta_array(arr))
- self.assertTrue(lib.is_timedelta64_array(arr))
- self.assertTrue(lib.is_timedelta_or_timedelta64_array(arr))
+ assert lib.is_timedelta_array(arr)
+ assert lib.is_timedelta64_array(arr)
+ assert lib.is_timedelta_or_timedelta64_array(arr)
arr = np.array([np.nan, pd.NaT, np.datetime64('nat'),
np.timedelta64('nat')])
@@ -659,11 +659,11 @@ def test_is_datetimelike_array_all_nan_nat_like(self):
assert not lib.is_timedelta_or_timedelta64_array(arr)
arr = np.array([np.nan, pd.NaT])
- self.assertTrue(lib.is_datetime_array(arr))
- self.assertTrue(lib.is_datetime64_array(arr))
- self.assertTrue(lib.is_timedelta_array(arr))
- self.assertTrue(lib.is_timedelta64_array(arr))
- self.assertTrue(lib.is_timedelta_or_timedelta64_array(arr))
+ assert lib.is_datetime_array(arr)
+ assert lib.is_datetime64_array(arr)
+ assert lib.is_timedelta_array(arr)
+ assert lib.is_timedelta64_array(arr)
+ assert lib.is_timedelta_or_timedelta64_array(arr)
arr = np.array([np.nan, np.nan], dtype=object)
assert not lib.is_datetime_array(arr)
@@ -719,7 +719,7 @@ def test_to_object_array_width(self):
tm.assert_numpy_array_equal(out, expected)
def test_is_period(self):
- self.assertTrue(lib.is_period(pd.Period('2011-01', freq='M')))
+ assert lib.is_period(pd.Period('2011-01', freq='M'))
assert not lib.is_period(pd.PeriodIndex(['2011-01'], freq='M'))
assert not lib.is_period(pd.Timestamp('2011-01'))
assert not lib.is_period(1)
@@ -748,15 +748,15 @@ class TestNumberScalar(tm.TestCase):
def test_is_number(self):
- self.assertTrue(is_number(True))
- self.assertTrue(is_number(1))
- self.assertTrue(is_number(1.1))
- self.assertTrue(is_number(1 + 3j))
- self.assertTrue(is_number(np.bool(False)))
- self.assertTrue(is_number(np.int64(1)))
- self.assertTrue(is_number(np.float64(1.1)))
- self.assertTrue(is_number(np.complex128(1 + 3j)))
- self.assertTrue(is_number(np.nan))
+ assert is_number(True)
+ assert is_number(1)
+ assert is_number(1.1)
+ assert is_number(1 + 3j)
+ assert is_number(np.bool(False))
+ assert is_number(np.int64(1))
+ assert is_number(np.float64(1.1))
+ assert is_number(np.complex128(1 + 3j))
+ assert is_number(np.nan)
assert not is_number(None)
assert not is_number('x')
@@ -769,12 +769,12 @@ def test_is_number(self):
# questionable
assert not is_number(np.bool_(False))
- self.assertTrue(is_number(np.timedelta64(1, 'D')))
+ assert is_number(np.timedelta64(1, 'D'))
def test_is_bool(self):
- self.assertTrue(is_bool(True))
- self.assertTrue(is_bool(np.bool(False)))
- self.assertTrue(is_bool(np.bool_(False)))
+ assert is_bool(True)
+ assert is_bool(np.bool(False))
+ assert is_bool(np.bool_(False))
assert not is_bool(1)
assert not is_bool(1.1)
@@ -794,8 +794,8 @@ def test_is_bool(self):
assert not is_bool(Timedelta('1 days'))
def test_is_integer(self):
- self.assertTrue(is_integer(1))
- self.assertTrue(is_integer(np.int64(1)))
+ assert is_integer(1)
+ assert is_integer(np.int64(1))
assert not is_integer(True)
assert not is_integer(1.1)
@@ -815,12 +815,12 @@ def test_is_integer(self):
assert not is_integer(Timedelta('1 days'))
# questionable
- self.assertTrue(is_integer(np.timedelta64(1, 'D')))
+ assert is_integer(np.timedelta64(1, 'D'))
def test_is_float(self):
- self.assertTrue(is_float(1.1))
- self.assertTrue(is_float(np.float64(1.1)))
- self.assertTrue(is_float(np.nan))
+ assert is_float(1.1)
+ assert is_float(np.float64(1.1))
+ assert is_float(np.nan)
assert not is_float(True)
assert not is_float(1)
@@ -844,43 +844,43 @@ def test_is_datetime_dtypes(self):
ts = pd.date_range('20130101', periods=3)
tsa = pd.date_range('20130101', periods=3, tz='US/Eastern')
- self.assertTrue(is_datetime64_dtype('datetime64'))
- self.assertTrue(is_datetime64_dtype('datetime64[ns]'))
- self.assertTrue(is_datetime64_dtype(ts))
+ assert is_datetime64_dtype('datetime64')
+ assert is_datetime64_dtype('datetime64[ns]')
+ assert is_datetime64_dtype(ts)
assert not is_datetime64_dtype(tsa)
assert not is_datetime64_ns_dtype('datetime64')
- self.assertTrue(is_datetime64_ns_dtype('datetime64[ns]'))
- self.assertTrue(is_datetime64_ns_dtype(ts))
- self.assertTrue(is_datetime64_ns_dtype(tsa))
+ assert is_datetime64_ns_dtype('datetime64[ns]')
+ assert is_datetime64_ns_dtype(ts)
+ assert is_datetime64_ns_dtype(tsa)
- self.assertTrue(is_datetime64_any_dtype('datetime64'))
- self.assertTrue(is_datetime64_any_dtype('datetime64[ns]'))
- self.assertTrue(is_datetime64_any_dtype(ts))
- self.assertTrue(is_datetime64_any_dtype(tsa))
+ assert is_datetime64_any_dtype('datetime64')
+ assert is_datetime64_any_dtype('datetime64[ns]')
+ assert is_datetime64_any_dtype(ts)
+ assert is_datetime64_any_dtype(tsa)
assert not is_datetime64tz_dtype('datetime64')
assert not is_datetime64tz_dtype('datetime64[ns]')
assert not is_datetime64tz_dtype(ts)
- self.assertTrue(is_datetime64tz_dtype(tsa))
+ assert is_datetime64tz_dtype(tsa)
for tz in ['US/Eastern', 'UTC']:
dtype = 'datetime64[ns, {}]'.format(tz)
assert not is_datetime64_dtype(dtype)
- self.assertTrue(is_datetime64tz_dtype(dtype))
- self.assertTrue(is_datetime64_ns_dtype(dtype))
- self.assertTrue(is_datetime64_any_dtype(dtype))
+ assert is_datetime64tz_dtype(dtype)
+ assert is_datetime64_ns_dtype(dtype)
+ assert is_datetime64_any_dtype(dtype)
def test_is_timedelta(self):
- self.assertTrue(is_timedelta64_dtype('timedelta64'))
- self.assertTrue(is_timedelta64_dtype('timedelta64[ns]'))
+ assert is_timedelta64_dtype('timedelta64')
+ assert is_timedelta64_dtype('timedelta64[ns]')
assert not is_timedelta64_ns_dtype('timedelta64')
- self.assertTrue(is_timedelta64_ns_dtype('timedelta64[ns]'))
+ assert is_timedelta64_ns_dtype('timedelta64[ns]')
tdi = TimedeltaIndex([1e14, 2e14], dtype='timedelta64')
- self.assertTrue(is_timedelta64_dtype(tdi))
- self.assertTrue(is_timedelta64_ns_dtype(tdi))
- self.assertTrue(is_timedelta64_ns_dtype(tdi.astype('timedelta64[ns]')))
+ assert is_timedelta64_dtype(tdi)
+ assert is_timedelta64_ns_dtype(tdi)
+ assert is_timedelta64_ns_dtype(tdi.astype('timedelta64[ns]'))
# Conversion to Int64Index:
assert not is_timedelta64_ns_dtype(tdi.astype('timedelta64'))
@@ -890,19 +890,19 @@ def test_is_timedelta(self):
class Testisscalar(tm.TestCase):
def test_isscalar_builtin_scalars(self):
- self.assertTrue(is_scalar(None))
- self.assertTrue(is_scalar(True))
- self.assertTrue(is_scalar(False))
- self.assertTrue(is_scalar(0.))
- self.assertTrue(is_scalar(np.nan))
- self.assertTrue(is_scalar('foobar'))
- self.assertTrue(is_scalar(b'foobar'))
- self.assertTrue(is_scalar(u('efoobar')))
- self.assertTrue(is_scalar(datetime(2014, 1, 1)))
- self.assertTrue(is_scalar(date(2014, 1, 1)))
- self.assertTrue(is_scalar(time(12, 0)))
- self.assertTrue(is_scalar(timedelta(hours=1)))
- self.assertTrue(is_scalar(pd.NaT))
+ assert is_scalar(None)
+ assert is_scalar(True)
+ assert is_scalar(False)
+ assert is_scalar(0.)
+ assert is_scalar(np.nan)
+ assert is_scalar('foobar')
+ assert is_scalar(b'foobar')
+ assert is_scalar(u('efoobar'))
+ assert is_scalar(datetime(2014, 1, 1))
+ assert is_scalar(date(2014, 1, 1))
+ assert is_scalar(time(12, 0))
+ assert is_scalar(timedelta(hours=1))
+ assert is_scalar(pd.NaT)
def test_isscalar_builtin_nonscalars(self):
assert not is_scalar({})
@@ -914,15 +914,15 @@ def test_isscalar_builtin_nonscalars(self):
assert not is_scalar(Ellipsis)
def test_isscalar_numpy_array_scalars(self):
- self.assertTrue(is_scalar(np.int64(1)))
- self.assertTrue(is_scalar(np.float64(1.)))
- self.assertTrue(is_scalar(np.int32(1)))
- self.assertTrue(is_scalar(np.object_('foobar')))
- self.assertTrue(is_scalar(np.str_('foobar')))
- self.assertTrue(is_scalar(np.unicode_(u('foobar'))))
- self.assertTrue(is_scalar(np.bytes_(b'foobar')))
- self.assertTrue(is_scalar(np.datetime64('2014-01-01')))
- self.assertTrue(is_scalar(np.timedelta64(1, 'h')))
+ assert is_scalar(np.int64(1))
+ assert is_scalar(np.float64(1.))
+ assert is_scalar(np.int32(1))
+ assert is_scalar(np.object_('foobar'))
+ assert is_scalar(np.str_('foobar'))
+ assert is_scalar(np.unicode_(u('foobar')))
+ assert is_scalar(np.bytes_(b'foobar'))
+ assert is_scalar(np.datetime64('2014-01-01'))
+ assert is_scalar(np.timedelta64(1, 'h'))
def test_isscalar_numpy_zerodim_arrays(self):
for zerodim in [np.array(1), np.array('foobar'),
@@ -930,7 +930,7 @@ def test_isscalar_numpy_zerodim_arrays(self):
np.array(np.timedelta64(1, 'h')),
np.array(np.datetime64('NaT'))]:
assert not is_scalar(zerodim)
- self.assertTrue(is_scalar(lib.item_from_zerodim(zerodim)))
+ assert is_scalar(lib.item_from_zerodim(zerodim))
def test_isscalar_numpy_arrays(self):
assert not is_scalar(np.array([]))
@@ -938,9 +938,9 @@ def test_isscalar_numpy_arrays(self):
assert not is_scalar(np.matrix('1; 2'))
def test_isscalar_pandas_scalars(self):
- self.assertTrue(is_scalar(Timestamp('2014-01-01')))
- self.assertTrue(is_scalar(Timedelta(hours=1)))
- self.assertTrue(is_scalar(Period('2014-01-01')))
+ assert is_scalar(Timestamp('2014-01-01'))
+ assert is_scalar(Timedelta(hours=1))
+ assert is_scalar(Period('2014-01-01'))
def test_lisscalar_pandas_containers(self):
assert not is_scalar(Series())
diff --git a/pandas/tests/dtypes/test_missing.py b/pandas/tests/dtypes/test_missing.py
index 3e1a12d439b9a..78396a8d89d91 100644
--- a/pandas/tests/dtypes/test_missing.py
+++ b/pandas/tests/dtypes/test_missing.py
@@ -48,11 +48,11 @@ def test_notnull():
class TestIsNull(tm.TestCase):
def test_0d_array(self):
- self.assertTrue(isnull(np.array(np.nan)))
+ assert isnull(np.array(np.nan))
assert not isnull(np.array(0.0))
assert not isnull(np.array(0))
# test object dtype
- self.assertTrue(isnull(np.array(np.nan, dtype=object)))
+ assert isnull(np.array(np.nan, dtype=object))
assert not isnull(np.array(0.0, dtype=object))
assert not isnull(np.array(0, dtype=object))
@@ -66,9 +66,9 @@ def test_empty_object(self):
def test_isnull(self):
assert not isnull(1.)
- self.assertTrue(isnull(None))
- self.assertTrue(isnull(np.NaN))
- self.assertTrue(float('nan'))
+ assert isnull(None)
+ assert isnull(np.NaN)
+ assert float('nan')
assert not isnull(np.inf)
assert not isnull(-np.inf)
@@ -136,7 +136,7 @@ def test_isnull_numpy_nat(self):
def test_isnull_datetime(self):
assert not isnull(datetime.now())
- self.assertTrue(notnull(datetime.now()))
+ assert notnull(datetime.now())
idx = date_range('1/1/1990', periods=20)
exp = np.ones(len(idx), dtype=bool)
@@ -146,14 +146,14 @@ def test_isnull_datetime(self):
idx[0] = iNaT
idx = DatetimeIndex(idx)
mask = isnull(idx)
- self.assertTrue(mask[0])
+ assert mask[0]
exp = np.array([True] + [False] * (len(idx) - 1), dtype=bool)
tm.assert_numpy_array_equal(mask, exp)
# GH 9129
pidx = idx.to_period(freq='M')
mask = isnull(pidx)
- self.assertTrue(mask[0])
+ assert mask[0]
exp = np.array([True] + [False] * (len(idx) - 1), dtype=bool)
tm.assert_numpy_array_equal(mask, exp)
diff --git a/pandas/tests/frame/test_alter_axes.py b/pandas/tests/frame/test_alter_axes.py
index 0a00d7e018f33..303c8cb6e858a 100644
--- a/pandas/tests/frame/test_alter_axes.py
+++ b/pandas/tests/frame/test_alter_axes.py
@@ -496,7 +496,7 @@ def test_rename_multiindex(self):
def test_rename_nocopy(self):
renamed = self.frame.rename(columns={'C': 'foo'}, copy=False)
renamed['foo'] = 1.
- self.assertTrue((self.frame['C'] == 1.).all())
+ assert (self.frame['C'] == 1.).all()
def test_rename_inplace(self):
self.frame.rename(columns={'C': 'foo'})
@@ -763,15 +763,15 @@ def test_set_index_names(self):
self.assertEqual(df.set_index(df.index).index.names, ['A', 'B'])
# Check that set_index isn't converting a MultiIndex into an Index
- self.assertTrue(isinstance(df.set_index(df.index).index, MultiIndex))
+ assert isinstance(df.set_index(df.index).index, MultiIndex)
# Check actual equality
tm.assert_index_equal(df.set_index(df.index).index, mi)
# Check that [MultiIndex, MultiIndex] yields a MultiIndex rather
# than a pair of tuples
- self.assertTrue(isinstance(df.set_index(
- [df.index, df.index]).index, MultiIndex))
+ assert isinstance(df.set_index(
+ [df.index, df.index]).index, MultiIndex)
# Check equality
tm.assert_index_equal(df.set_index([df.index, df.index]).index, mi2)
diff --git a/pandas/tests/frame/test_analytics.py b/pandas/tests/frame/test_analytics.py
index 6268ccc27c7a6..8f46f055343d4 100644
--- a/pandas/tests/frame/test_analytics.py
+++ b/pandas/tests/frame/test_analytics.py
@@ -81,11 +81,11 @@ def test_corr_nooverlap(self):
'C': [np.nan, np.nan, np.nan, np.nan,
np.nan, np.nan]})
rs = df.corr(meth)
- self.assertTrue(isnull(rs.loc['A', 'B']))
- self.assertTrue(isnull(rs.loc['B', 'A']))
+ assert isnull(rs.loc['A', 'B'])
+ assert isnull(rs.loc['B', 'A'])
self.assertEqual(rs.loc['A', 'A'], 1)
self.assertEqual(rs.loc['B', 'B'], 1)
- self.assertTrue(isnull(rs.loc['C', 'C']))
+ assert isnull(rs.loc['C', 'C'])
def test_corr_constant(self):
tm._skip_if_no_scipy()
@@ -96,7 +96,7 @@ def test_corr_constant(self):
df = DataFrame({'A': [1, 1, 1, np.nan, np.nan, np.nan],
'B': [np.nan, np.nan, np.nan, 1, 1, 1]})
rs = df.corr(meth)
- self.assertTrue(isnull(rs.values).all())
+ assert isnull(rs.values).all()
def test_corr_int(self):
# dtypes other than float64 #1761
@@ -136,7 +136,7 @@ def test_cov(self):
tm.assert_frame_equal(expected, result)
result = self.frame.cov(min_periods=len(self.frame) + 1)
- self.assertTrue(isnull(result.values).all())
+ assert isnull(result.values).all()
# with NAs
frame = self.frame.copy()
@@ -234,7 +234,7 @@ def test_corrwith_matches_corrcoef(self):
c2 = np.corrcoef(df1['a'], df2['a'])[0][1]
tm.assert_almost_equal(c1, c2)
- self.assertTrue(c1 < 1)
+ assert c1 < 1
def test_bool_describe_in_mixed_frame(self):
df = DataFrame({
@@ -710,7 +710,7 @@ def alt(x):
kurt = df.kurt()
kurt2 = df.kurt(level=0).xs('bar')
tm.assert_series_equal(kurt, kurt2, check_names=False)
- self.assertTrue(kurt.name is None)
+ assert kurt.name is None
self.assertEqual(kurt2.name, 'bar')
def _check_stat_op(self, name, alternative, frame=None, has_skipna=True,
@@ -733,7 +733,7 @@ def _check_stat_op(self, name, alternative, frame=None, has_skipna=True,
df['a'] = lrange(len(df))
result = getattr(df, name)()
assert isinstance(result, Series)
- self.assertTrue(len(result))
+ assert len(result)
if has_skipna:
def skipna_wrapper(x):
@@ -796,8 +796,8 @@ def wrapper(x):
r0 = getattr(all_na, name)(axis=0)
r1 = getattr(all_na, name)(axis=1)
if not tm._incompat_bottleneck_version(name):
- self.assertTrue(np.isnan(r0).all())
- self.assertTrue(np.isnan(r1).all())
+ assert np.isnan(r0).all()
+ assert np.isnan(r1).all()
def test_mode(self):
df = pd.DataFrame({"A": [12, 12, 11, 12, 19, 11],
@@ -864,7 +864,7 @@ def test_operators_timedelta64(self):
self.assertEqual(result[1], diffs.loc[0, 'B'])
result = diffs.min(axis=1)
- self.assertTrue((result == diffs.loc[0, 'B']).all())
+ assert (result == diffs.loc[0, 'B']).all()
# max
result = diffs.max()
@@ -872,7 +872,7 @@ def test_operators_timedelta64(self):
self.assertEqual(result[1], diffs.loc[2, 'B'])
result = diffs.max(axis=1)
- self.assertTrue((result == diffs['A']).all())
+ assert (result == diffs['A']).all()
# abs
result = diffs.abs()
@@ -924,8 +924,8 @@ def test_operators_timedelta64(self):
df['off2'] = df['time'] - df['time2']
df._consolidate_inplace()
- self.assertTrue(df['off1'].dtype == 'timedelta64[ns]')
- self.assertTrue(df['off2'].dtype == 'timedelta64[ns]')
+ assert df['off1'].dtype == 'timedelta64[ns]'
+ assert df['off2'].dtype == 'timedelta64[ns]'
def test_sum_corner(self):
axis0 = self.empty.sum(0)
@@ -953,7 +953,7 @@ def test_mean_corner(self):
the_mean = self.mixed_frame.mean(axis=0)
the_sum = self.mixed_frame.sum(axis=0, numeric_only=True)
tm.assert_index_equal(the_sum.index, the_mean.index)
- self.assertTrue(len(the_mean.index) < len(self.mixed_frame.columns))
+ assert len(the_mean.index) < len(self.mixed_frame.columns)
# xs sum mixed type, just want to know it works...
the_mean = self.mixed_frame.mean(axis=1)
@@ -1134,8 +1134,8 @@ def __nonzero__(self):
assert not r0.any()
assert not r1.any()
else:
- self.assertTrue(r0.all())
- self.assertTrue(r1.all())
+ assert r0.all()
+ assert r1.all()
# ----------------------------------------------------------------------
# Isin
@@ -1820,10 +1820,9 @@ def test_dataframe_clip(self):
lb_mask = df.values <= lb
ub_mask = df.values >= ub
mask = ~lb_mask & ~ub_mask
- self.assertTrue((clipped_df.values[lb_mask] == lb).all())
- self.assertTrue((clipped_df.values[ub_mask] == ub).all())
- self.assertTrue((clipped_df.values[mask] ==
- df.values[mask]).all())
+ assert (clipped_df.values[lb_mask] == lb).all()
+ assert (clipped_df.values[ub_mask] == ub).all()
+ assert (clipped_df.values[mask] == df.values[mask]).all()
def test_clip_against_series(self):
# GH #6966
@@ -1884,11 +1883,11 @@ def test_dot(self):
# Check series argument
result = a.dot(b['one'])
tm.assert_series_equal(result, expected['one'], check_names=False)
- self.assertTrue(result.name is None)
+ assert result.name is None
result = a.dot(b1['one'])
tm.assert_series_equal(result, expected['one'], check_names=False)
- self.assertTrue(result.name is None)
+ assert result.name is None
# can pass correct-length arrays
row = a.iloc[0].values
diff --git a/pandas/tests/frame/test_api.py b/pandas/tests/frame/test_api.py
index 7669de17885f8..6b1e9d66d2071 100644
--- a/pandas/tests/frame/test_api.py
+++ b/pandas/tests/frame/test_api.py
@@ -139,7 +139,7 @@ def test_get_agg_axis(self):
pytest.raises(ValueError, self.frame._get_agg_axis, 2)
def test_nonzero(self):
- self.assertTrue(self.empty.empty)
+ assert self.empty.empty
assert not self.frame.empty
assert not self.mixed_frame.empty
@@ -157,7 +157,7 @@ def test_iteritems(self):
self.assertEqual(type(v), Series)
def test_iter(self):
- self.assertTrue(tm.equalContents(list(self.frame), self.frame.columns))
+ assert tm.equalContents(list(self.frame), self.frame.columns)
def test_iterrows(self):
for i, (k, v) in enumerate(self.frame.iterrows()):
@@ -223,7 +223,7 @@ def test_as_matrix(self):
for j, value in enumerate(row):
col = frameCols[j]
if np.isnan(value):
- self.assertTrue(np.isnan(frame[col][i]))
+ assert np.isnan(frame[col][i])
else:
self.assertEqual(value, frame[col][i])
@@ -242,7 +242,7 @@ def test_as_matrix(self):
def test_values(self):
self.frame.values[:, 0] = 5.
- self.assertTrue((self.frame.values[:, 0] == 5).all())
+ assert (self.frame.values[:, 0] == 5).all()
def test_deepcopy(self):
cp = deepcopy(self.frame)
@@ -260,7 +260,7 @@ def test_transpose(self):
for idx, series in compat.iteritems(dft):
for col, value in compat.iteritems(series):
if np.isnan(value):
- self.assertTrue(np.isnan(frame[col][idx]))
+ assert np.isnan(frame[col][idx])
else:
self.assertEqual(value, frame[col][idx])
@@ -276,7 +276,7 @@ def test_transpose_get_view(self):
dft = self.frame.T
dft.values[:, 5:10] = 5
- self.assertTrue((self.frame.values[5:10] == 5).all())
+ assert (self.frame.values[5:10] == 5).all()
def test_swapaxes(self):
df = DataFrame(np.random.randn(10, 5))
@@ -323,15 +323,15 @@ def test_empty_nonzero(self):
df = pd.DataFrame(index=[1], columns=[1])
assert not df.empty
df = DataFrame(index=['a', 'b'], columns=['c', 'd']).dropna()
- self.assertTrue(df.empty)
- self.assertTrue(df.T.empty)
+ assert df.empty
+ assert df.T.empty
empty_frames = [pd.DataFrame(),
pd.DataFrame(index=[1]),
pd.DataFrame(columns=[1]),
pd.DataFrame({1: []})]
for df in empty_frames:
- self.assertTrue(df.empty)
- self.assertTrue(df.T.empty)
+ assert df.empty
+ assert df.T.empty
def test_with_datetimelikes(self):
@@ -352,7 +352,7 @@ def test_inplace_return_self(self):
def _check_f(base, f):
result = f(base)
- self.assertTrue(result is None)
+ assert result is None
# -----DataFrame-----
diff --git a/pandas/tests/frame/test_apply.py b/pandas/tests/frame/test_apply.py
index 9d0f00c6eeffe..0bccca5cecb27 100644
--- a/pandas/tests/frame/test_apply.py
+++ b/pandas/tests/frame/test_apply.py
@@ -61,10 +61,10 @@ def test_apply_mixed_datetimelike(self):
def test_apply_empty(self):
# empty
applied = self.empty.apply(np.sqrt)
- self.assertTrue(applied.empty)
+ assert applied.empty
applied = self.empty.apply(np.mean)
- self.assertTrue(applied.empty)
+ assert applied.empty
no_rows = self.frame[:0]
result = no_rows.apply(lambda x: x.mean())
@@ -125,12 +125,12 @@ def test_apply_broadcast(self):
agged = self.frame.apply(np.mean)
for col, ts in compat.iteritems(broadcasted):
- self.assertTrue((ts == agged[col]).all())
+ assert (ts == agged[col]).all()
broadcasted = self.frame.apply(np.mean, axis=1, broadcast=True)
agged = self.frame.apply(np.mean, axis=1)
for idx in broadcasted.index:
- self.assertTrue((broadcasted.xs(idx) == agged[idx]).all())
+ assert (broadcasted.xs(idx) == agged[idx]).all()
def test_apply_raw(self):
result0 = self.frame.apply(np.mean, raw=True)
@@ -452,7 +452,7 @@ def test_frame_apply_dont_convert_datetime64(self):
df = df.applymap(lambda x: x + BDay())
df = df.applymap(lambda x: x + BDay())
- self.assertTrue(df.x1.dtype == 'M8[ns]')
+ assert df.x1.dtype == 'M8[ns]'
# See gh-12244
def test_apply_non_numpy_dtype(self):
diff --git a/pandas/tests/frame/test_asof.py b/pandas/tests/frame/test_asof.py
index dd03f8f7cb7a9..ba3e239756f51 100644
--- a/pandas/tests/frame/test_asof.py
+++ b/pandas/tests/frame/test_asof.py
@@ -23,17 +23,17 @@ def test_basic(self):
freq='25s')
result = df.asof(dates)
- self.assertTrue(result.notnull().all(1).all())
+ assert result.notnull().all(1).all()
lb = df.index[14]
ub = df.index[30]
dates = list(dates)
result = df.asof(dates)
- self.assertTrue(result.notnull().all(1).all())
+ assert result.notnull().all(1).all()
mask = (result.index >= lb) & (result.index < ub)
rs = result[mask]
- self.assertTrue((rs == 14).all(1).all())
+ assert (rs == 14).all(1).all()
def test_subset(self):
N = 10
diff --git a/pandas/tests/frame/test_axis_select_reindex.py b/pandas/tests/frame/test_axis_select_reindex.py
index 61d0694eea382..2c285c6261415 100644
--- a/pandas/tests/frame/test_axis_select_reindex.py
+++ b/pandas/tests/frame/test_axis_select_reindex.py
@@ -120,7 +120,7 @@ def test_drop_multiindex_not_lexsorted(self):
lexsorted_mi = MultiIndex.from_tuples(
[('a', ''), ('b1', 'c1'), ('b2', 'c2')], names=['b', 'c'])
lexsorted_df = DataFrame([[1, 3, 4]], columns=lexsorted_mi)
- self.assertTrue(lexsorted_df.columns.is_lexsorted())
+ assert lexsorted_df.columns.is_lexsorted()
# define the non-lexsorted version
not_lexsorted_df = DataFrame(columns=['a', 'b', 'c', 'd'],
@@ -172,14 +172,14 @@ def test_reindex(self):
for idx, val in compat.iteritems(newFrame[col]):
if idx in self.frame.index:
if np.isnan(val):
- self.assertTrue(np.isnan(self.frame[col][idx]))
+ assert np.isnan(self.frame[col][idx])
else:
self.assertEqual(val, self.frame[col][idx])
else:
- self.assertTrue(np.isnan(val))
+ assert np.isnan(val)
for col, series in compat.iteritems(newFrame):
- self.assertTrue(tm.equalContents(series.index, newFrame.index))
+ assert tm.equalContents(series.index, newFrame.index)
emptyFrame = self.frame.reindex(Index([]))
self.assertEqual(len(emptyFrame.index), 0)
@@ -190,15 +190,14 @@ def test_reindex(self):
for idx, val in compat.iteritems(nonContigFrame[col]):
if idx in self.frame.index:
if np.isnan(val):
- self.assertTrue(np.isnan(self.frame[col][idx]))
+ assert np.isnan(self.frame[col][idx])
else:
self.assertEqual(val, self.frame[col][idx])
else:
- self.assertTrue(np.isnan(val))
+ assert np.isnan(val)
for col, series in compat.iteritems(nonContigFrame):
- self.assertTrue(tm.equalContents(series.index,
- nonContigFrame.index))
+ assert tm.equalContents(series.index, nonContigFrame.index)
# corner cases
@@ -208,7 +207,7 @@ def test_reindex(self):
# length zero
newFrame = self.frame.reindex([])
- self.assertTrue(newFrame.empty)
+ assert newFrame.empty
self.assertEqual(len(newFrame.columns), len(self.frame.columns))
# length zero with columns reindexed with non-empty index
@@ -355,7 +354,7 @@ def test_reindex_fill_value(self):
# axis=0
result = df.reindex(lrange(15))
- self.assertTrue(np.isnan(result.values[-5:]).all())
+ assert np.isnan(result.values[-5:]).all()
result = df.reindex(lrange(15), fill_value=0)
expected = df.reindex(lrange(15)).fillna(0)
@@ -847,11 +846,11 @@ def test_reindex_boolean(self):
reindexed = frame.reindex(np.arange(10))
self.assertEqual(reindexed.values.dtype, np.object_)
- self.assertTrue(isnull(reindexed[0][1]))
+ assert isnull(reindexed[0][1])
reindexed = frame.reindex(columns=lrange(3))
self.assertEqual(reindexed.values.dtype, np.object_)
- self.assertTrue(isnull(reindexed[1]).all())
+ assert isnull(reindexed[1]).all()
def test_reindex_objects(self):
reindexed = self.mixed_frame.reindex(columns=['foo', 'A', 'B'])
diff --git a/pandas/tests/frame/test_block_internals.py b/pandas/tests/frame/test_block_internals.py
index 37615179a3f26..2a319348aca3f 100644
--- a/pandas/tests/frame/test_block_internals.py
+++ b/pandas/tests/frame/test_block_internals.py
@@ -71,16 +71,16 @@ def test_as_matrix_consolidate(self):
self.frame['E'] = 7.
assert not self.frame._data.is_consolidated()
_ = self.frame.as_matrix() # noqa
- self.assertTrue(self.frame._data.is_consolidated())
+ assert self.frame._data.is_consolidated()
def test_modify_values(self):
self.frame.values[5] = 5
- self.assertTrue((self.frame.values[5] == 5).all())
+ assert (self.frame.values[5] == 5).all()
# unconsolidated
self.frame['E'] = 7.
self.frame.values[6] = 6
- self.assertTrue((self.frame.values[6] == 6).all())
+ assert (self.frame.values[6] == 6).all()
def test_boolean_set_uncons(self):
self.frame['E'] = 7.
@@ -307,12 +307,12 @@ def test_equals_different_blocks(self):
df1 = df0.reset_index()[["A", "B", "C"]]
# this assert verifies that the above operations have
# induced a block rearrangement
- self.assertTrue(df0._data.blocks[0].dtype !=
- df1._data.blocks[0].dtype)
+ assert df0._data.blocks[0].dtype != df1._data.blocks[0].dtype
+
# do the real tests
assert_frame_equal(df0, df1)
- self.assertTrue(df0.equals(df1))
- self.assertTrue(df1.equals(df0))
+ assert df0.equals(df1)
+ assert df1.equals(df0)
def test_copy_blocks(self):
# API/ENH 9607
@@ -340,7 +340,7 @@ def test_no_copy_blocks(self):
_df.loc[:, column] = _df[column] + 1
# make sure we did change the original DataFrame
- self.assertTrue(_df[column].equals(df[column]))
+ assert _df[column].equals(df[column])
def test_copy(self):
cop = self.frame.copy()
@@ -400,7 +400,7 @@ def test_consolidate_datetime64(self):
def test_is_mixed_type(self):
assert not self.frame._is_mixed_type
- self.assertTrue(self.mixed_frame._is_mixed_type)
+ assert self.mixed_frame._is_mixed_type
def test_get_numeric_data(self):
# TODO(wesm): unused?
@@ -507,7 +507,7 @@ def test_stale_cached_series_bug_473(self):
repr(Y)
result = Y.sum() # noqa
exp = Y['g'].sum() # noqa
- self.assertTrue(pd.isnull(Y['g']['c']))
+ assert pd.isnull(Y['g']['c'])
def test_get_X_columns(self):
# numeric and object columns
@@ -542,4 +542,4 @@ def test_strange_column_corruption_issue(self):
first = len(df.loc[pd.isnull(df[myid]), [myid]])
second = len(df.loc[pd.isnull(df[myid]), [myid]])
- self.assertTrue(first == second == 0)
+ assert first == second == 0
diff --git a/pandas/tests/frame/test_combine_concat.py b/pandas/tests/frame/test_combine_concat.py
index 0e4184b07f22e..5452792def1ac 100644
--- a/pandas/tests/frame/test_combine_concat.py
+++ b/pandas/tests/frame/test_combine_concat.py
@@ -464,7 +464,7 @@ def test_combine_first(self):
combined = head.combine_first(tail)
reordered_frame = self.frame.reindex(combined.index)
assert_frame_equal(combined, reordered_frame)
- self.assertTrue(tm.equalContents(combined.columns, self.frame.columns))
+ assert tm.equalContents(combined.columns, self.frame.columns)
assert_series_equal(combined['A'], reordered_frame['A'])
# same index
@@ -478,7 +478,7 @@ def test_combine_first(self):
combined = fcopy.combine_first(fcopy2)
- self.assertTrue((combined['A'] == 1).all())
+ assert (combined['A'] == 1).all()
assert_series_equal(combined['B'], fcopy['B'])
assert_series_equal(combined['C'], fcopy2['C'])
assert_series_equal(combined['D'], fcopy['D'])
@@ -488,12 +488,12 @@ def test_combine_first(self):
head['A'] = 1
combined = head.combine_first(tail)
- self.assertTrue((combined['A'][:10] == 1).all())
+ assert (combined['A'][:10] == 1).all()
# reverse overlap
tail['A'][:10] = 0
combined = tail.combine_first(head)
- self.assertTrue((combined['A'][:10] == 0).all())
+ assert (combined['A'][:10] == 0).all()
# no overlap
f = self.frame[:10]
@@ -510,13 +510,13 @@ def test_combine_first(self):
assert_frame_equal(comb, self.frame)
comb = self.frame.combine_first(DataFrame(index=["faz", "boo"]))
- self.assertTrue("faz" in comb.index)
+ assert "faz" in comb.index
# #2525
df = DataFrame({'a': [1]}, index=[datetime(2012, 1, 1)])
df2 = DataFrame({}, columns=['b'])
result = df.combine_first(df2)
- self.assertTrue('b' in result)
+ assert 'b' in result
def test_combine_first_mixed_bug(self):
idx = Index(['a', 'b', 'c', 'e'])
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index e9a6f03abbe8d..588182eb30336 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -280,12 +280,12 @@ def test_constructor_multi_index(self):
tuples = [(2, 3), (3, 3), (3, 3)]
mi = MultiIndex.from_tuples(tuples)
df = DataFrame(index=mi, columns=mi)
- self.assertTrue(pd.isnull(df).values.ravel().all())
+ assert pd.isnull(df).values.ravel().all()
tuples = [(3, 3), (2, 3), (3, 3)]
mi = MultiIndex.from_tuples(tuples)
df = DataFrame(index=mi, columns=mi)
- self.assertTrue(pd.isnull(df).values.ravel().all())
+ assert pd.isnull(df).values.ravel().all()
def test_constructor_error_msgs(self):
msg = "Empty data passed with indices specified."
@@ -594,7 +594,7 @@ def test_constructor_maskedarray(self):
# what is this even checking??
mat = ma.masked_all((2, 3), dtype=float)
frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2])
- self.assertTrue(np.all(~np.asarray(frame == frame)))
+ assert np.all(~np.asarray(frame == frame))
def test_constructor_maskedarray_nonfloat(self):
# masked int promoted to float
@@ -604,7 +604,7 @@ def test_constructor_maskedarray_nonfloat(self):
self.assertEqual(len(frame.index), 2)
self.assertEqual(len(frame.columns), 3)
- self.assertTrue(np.all(~np.asarray(frame == frame)))
+ assert np.all(~np.asarray(frame == frame))
# cast type
frame = DataFrame(mat, columns=['A', 'B', 'C'],
@@ -626,7 +626,7 @@ def test_constructor_maskedarray_nonfloat(self):
self.assertEqual(len(frame.index), 2)
self.assertEqual(len(frame.columns), 3)
- self.assertTrue(isnull(frame).values.all())
+ assert isnull(frame).values.all()
# cast type
frame = DataFrame(mat, columns=['A', 'B', 'C'],
@@ -648,7 +648,7 @@ def test_constructor_maskedarray_nonfloat(self):
self.assertEqual(len(frame.index), 2)
self.assertEqual(len(frame.columns), 3)
- self.assertTrue(np.all(~np.asarray(frame == frame)))
+ assert np.all(~np.asarray(frame == frame))
# cast type
frame = DataFrame(mat, columns=['A', 'B', 'C'],
@@ -817,7 +817,7 @@ def test_constructor_list_of_lists(self):
# GH #484
l = [[1, 'a'], [2, 'b']]
df = DataFrame(data=l, columns=["num", "str"])
- self.assertTrue(is_integer_dtype(df['num']))
+ assert is_integer_dtype(df['num'])
self.assertEqual(df['str'].dtype, np.object_)
# GH 4851
@@ -1027,7 +1027,7 @@ def test_constructor_mixed_dict_and_Series(self):
data['B'] = Series([4, 3, 2, 1], index=['bar', 'qux', 'baz', 'foo'])
result = DataFrame(data)
- self.assertTrue(result.index.is_monotonic)
+ assert result.index.is_monotonic
# ordering ambiguous, raise exception
with tm.assert_raises_regex(ValueError, 'ambiguous ordering'):
@@ -1344,13 +1344,13 @@ def test_constructor_with_datetimes(self):
# GH 8411
dr = date_range('20130101', periods=3)
df = DataFrame({'value': dr})
- self.assertTrue(df.iat[0, 0].tz is None)
+ assert df.iat[0, 0].tz is None
dr = date_range('20130101', periods=3, tz='UTC')
df = DataFrame({'value': dr})
- self.assertTrue(str(df.iat[0, 0].tz) == 'UTC')
+ assert str(df.iat[0, 0].tz) == 'UTC'
dr = date_range('20130101', periods=3, tz='US/Eastern')
df = DataFrame({'value': dr})
- self.assertTrue(str(df.iat[0, 0].tz) == 'US/Eastern')
+ assert str(df.iat[0, 0].tz) == 'US/Eastern'
# GH 7822
# preserver an index with a tz on dict construction
@@ -1451,14 +1451,14 @@ def test_constructor_for_list_with_dtypes(self):
def test_constructor_frame_copy(self):
cop = DataFrame(self.frame, copy=True)
cop['A'] = 5
- self.assertTrue((cop['A'] == 5).all())
+ assert (cop['A'] == 5).all()
assert not (self.frame['A'] == 5).all()
def test_constructor_ndarray_copy(self):
df = DataFrame(self.frame.values)
self.frame.values[5] = 5
- self.assertTrue((df.values[5] == 5).all())
+ assert (df.values[5] == 5).all()
df = DataFrame(self.frame.values, copy=True)
self.frame.values[6] = 6
@@ -1551,7 +1551,7 @@ def test_from_records_nones(self):
(None, 2, 5, 3)]
df = DataFrame.from_records(tuples, columns=['a', 'b', 'c', 'd'])
- self.assertTrue(np.isnan(df['c'][0]))
+ assert np.isnan(df['c'][0])
def test_from_records_iterator(self):
arr = np.array([(1.0, 1.0, 2, 2), (3.0, 3.0, 4, 4), (5., 5., 6, 6),
@@ -1628,7 +1628,7 @@ def test_from_records_decimal(self):
df = DataFrame.from_records(tuples, columns=['a'], coerce_float=True)
self.assertEqual(df['a'].dtype, np.float64)
- self.assertTrue(np.isnan(df['a'].values[-1]))
+ assert np.isnan(df['a'].values[-1])
def test_from_records_duplicates(self):
result = DataFrame.from_records([(1, 2, 3), (4, 5, 6)],
@@ -1890,7 +1890,7 @@ def test_from_records_len0_with_columns(self):
result = DataFrame.from_records([], index='foo',
columns=['foo', 'bar'])
- self.assertTrue(np.array_equal(result.columns, ['bar']))
+ assert np.array_equal(result.columns, ['bar'])
self.assertEqual(len(result), 0)
self.assertEqual(result.index.name, 'foo')
@@ -1917,8 +1917,8 @@ def test_from_dict(self):
# construction
df = DataFrame({'A': idx, 'B': dr})
- self.assertTrue(df['A'].dtype, 'M8[ns, US/Eastern')
- self.assertTrue(df['A'].name == 'A')
+ assert df['A'].dtype == 'M8[ns, US/Eastern]'
+ assert df['A'].name == 'A'
tm.assert_series_equal(df['A'], Series(idx, name='A'))
tm.assert_series_equal(df['B'], Series(dr, name='B'))
@@ -1951,7 +1951,7 @@ def test_frame_datetime64_mixed_index_ctor_1681(self):
# it works!
d = DataFrame({'A': 'foo', 'B': ts}, index=dr)
- self.assertTrue(d['B'].isnull().all())
+ assert d['B'].isnull().all()
def test_frame_timeseries_to_records(self):
index = date_range('1/1/2000', periods=10)
diff --git a/pandas/tests/frame/test_convert_to.py b/pandas/tests/frame/test_convert_to.py
index 6a49c88f17526..d3a675e3dc1a3 100644
--- a/pandas/tests/frame/test_convert_to.py
+++ b/pandas/tests/frame/test_convert_to.py
@@ -129,8 +129,8 @@ def test_to_records_with_multindex(self):
data = np.zeros((8, 4))
df = DataFrame(data, index=index)
r = df.to_records(index=True)['level_0']
- self.assertTrue('bar' in r)
- self.assertTrue('one' not in r)
+ assert 'bar' in r
+ assert 'one' not in r
def test_to_records_with_Mapping_type(self):
import email
diff --git a/pandas/tests/frame/test_dtypes.py b/pandas/tests/frame/test_dtypes.py
index ed6d72c08fdae..427834b3dbf38 100644
--- a/pandas/tests/frame/test_dtypes.py
+++ b/pandas/tests/frame/test_dtypes.py
@@ -624,9 +624,9 @@ def test_astype_str(self):
tm.assert_frame_equal(result, expected)
result = str(self.tzframe)
- self.assertTrue('0 2013-01-01 2013-01-01 00:00:00-05:00 '
- '2013-01-01 00:00:00+01:00' in result)
- self.assertTrue('1 2013-01-02 '
- 'NaT NaT' in result)
- self.assertTrue('2 2013-01-03 2013-01-03 00:00:00-05:00 '
- '2013-01-03 00:00:00+01:00' in result)
+ assert ('0 2013-01-01 2013-01-01 00:00:00-05:00 '
+ '2013-01-01 00:00:00+01:00') in result
+ assert ('1 2013-01-02 '
+ 'NaT NaT') in result
+ assert ('2 2013-01-03 2013-01-03 00:00:00-05:00 '
+ '2013-01-03 00:00:00+01:00') in result
diff --git a/pandas/tests/frame/test_indexing.py b/pandas/tests/frame/test_indexing.py
index ebc125ae09818..8f6128ad4e525 100644
--- a/pandas/tests/frame/test_indexing.py
+++ b/pandas/tests/frame/test_indexing.py
@@ -391,11 +391,11 @@ def test_getitem_setitem_ix_negative_integers(self):
with catch_warnings(record=True):
self.frame.ix[:, [-1]] = 0
- self.assertTrue((self.frame['D'] == 0).all())
+ assert (self.frame['D'] == 0).all()
df = DataFrame(np.random.randn(8, 4))
with catch_warnings(record=True):
- self.assertTrue(isnull(df.ix[:, [-1]].values).all())
+ assert isnull(df.ix[:, [-1]].values).all()
# #1942
a = DataFrame(randn(20, 2), index=[chr(x + 65) for x in range(20)])
@@ -416,7 +416,7 @@ def test_setattr_column(self):
df = DataFrame({'foobar': 1}, index=lrange(10))
df.foobar = 5
- self.assertTrue((df.foobar == 5).all())
+ assert (df.foobar == 5).all()
def test_setitem(self):
# not sure what else to do here
@@ -441,7 +441,7 @@ def test_setitem(self):
# set ndarray
arr = randn(len(self.frame))
self.frame['col9'] = arr
- self.assertTrue((self.frame['col9'] == arr).all())
+ assert (self.frame['col9'] == arr).all()
self.frame['col7'] = 5
assert((self.frame['col7'] == 5).all())
@@ -460,7 +460,7 @@ def f():
smaller['col10'] = ['1', '2']
pytest.raises(com.SettingWithCopyError, f)
self.assertEqual(smaller['col10'].dtype, np.object_)
- self.assertTrue((smaller['col10'] == ['1', '2']).all())
+ assert (smaller['col10'] == ['1', '2']).all()
# with a dtype
for dtype in ['int32', 'int64', 'float32', 'float64']:
@@ -487,7 +487,7 @@ def test_setitem_always_copy(self):
self.frame['E'] = s
self.frame['E'][5:10] = nan
- self.assertTrue(notnull(s[5:10]).all())
+ assert notnull(s[5:10]).all()
def test_setitem_boolean(self):
df = self.frame.copy()
@@ -552,7 +552,7 @@ def test_setitem_cast(self):
# cast if pass array of course
self.frame['B'] = np.arange(len(self.frame))
- self.assertTrue(issubclass(self.frame['B'].dtype.type, np.integer))
+ assert issubclass(self.frame['B'].dtype.type, np.integer)
self.frame['foo'] = 'bar'
self.frame['foo'] = 0
@@ -795,7 +795,7 @@ def test_getitem_fancy_slice_integers_step(self):
# this is OK
result = df.iloc[:8:2] # noqa
df.iloc[:8:2] = np.nan
- self.assertTrue(isnull(df.iloc[:8:2]).values.all())
+ assert isnull(df.iloc[:8:2]).values.all()
def test_getitem_setitem_integer_slice_keyerrors(self):
df = DataFrame(np.random.randn(10, 5), index=lrange(0, 20, 2))
@@ -803,12 +803,12 @@ def test_getitem_setitem_integer_slice_keyerrors(self):
# this is OK
cp = df.copy()
cp.iloc[4:10] = 0
- self.assertTrue((cp.iloc[4:10] == 0).values.all())
+ assert (cp.iloc[4:10] == 0).values.all()
# so is this
cp = df.copy()
cp.iloc[3:11] = 0
- self.assertTrue((cp.iloc[3:11] == 0).values.all())
+ assert (cp.iloc[3:11] == 0).values.all()
result = df.iloc[2:6]
result2 = df.loc[3:11]
@@ -939,7 +939,7 @@ def test_fancy_getitem_slice_mixed(self):
def f():
sliced['C'] = 4.
pytest.raises(com.SettingWithCopyError, f)
- self.assertTrue((self.frame['C'] == 4).all())
+ assert (self.frame['C'] == 4).all()
def test_fancy_setitem_int_labels(self):
# integer index defers to label-based indexing
@@ -1017,10 +1017,10 @@ def test_setitem_fancy_mixed_2d(self):
with catch_warnings(record=True):
self.mixed_frame.ix[:5, ['C', 'B', 'A']] = 5
result = self.mixed_frame.ix[:5, ['C', 'B', 'A']]
- self.assertTrue((result.values == 5).all())
+ assert (result.values == 5).all()
self.mixed_frame.ix[5] = np.nan
- self.assertTrue(isnull(self.mixed_frame.ix[5]).all())
+ assert isnull(self.mixed_frame.ix[5]).all()
self.mixed_frame.ix[5] = self.mixed_frame.ix[6]
assert_series_equal(self.mixed_frame.ix[5], self.mixed_frame.ix[6],
@@ -1030,7 +1030,7 @@ def test_setitem_fancy_mixed_2d(self):
with catch_warnings(record=True):
df = DataFrame({1: [1., 2., 3.],
2: [3, 4, 5]})
- self.assertTrue(df._is_mixed_type)
+ assert df._is_mixed_type
df.ix[1] = [5, 10]
@@ -1413,7 +1413,7 @@ def test_getitem_setitem_float_labels(self):
df.loc[1:2] = 0
result = df[1:2]
- self.assertTrue((result == 0).all().all())
+ assert (result == 0).all().all()
# #2727
index = Index([1.0, 2.5, 3.5, 4.5, 5.0])
@@ -1437,13 +1437,13 @@ def f():
result = cp.iloc[1.0:5] == 0 # noqa
pytest.raises(TypeError, f)
- self.assertTrue(result.values.all())
- self.assertTrue((cp.iloc[0:1] == df.iloc[0:1]).values.all())
+ assert result.values.all()
+ assert (cp.iloc[0:1] == df.iloc[0:1]).values.all()
cp = df.copy()
cp.iloc[4:5] = 0
- self.assertTrue((cp.iloc[4:5] == 0).values.all())
- self.assertTrue((cp.iloc[0:4] == df.iloc[0:4]).values.all())
+ assert (cp.iloc[4:5] == 0).values.all()
+ assert (cp.iloc[0:4] == df.iloc[0:4]).values.all()
# float slicing
result = df.loc[1.0:5]
@@ -1469,7 +1469,7 @@ def f():
cp = df.copy()
cp.loc[1.0:5.0] = 0
result = cp.loc[1.0:5.0]
- self.assertTrue((result == 0).values.all())
+ assert (result == 0).values.all()
def test_setitem_single_column_mixed(self):
df = DataFrame(randn(5, 3), index=['a', 'b', 'c', 'd', 'e'],
@@ -1492,15 +1492,15 @@ def test_setitem_single_column_mixed_datetime(self):
# set an allowable datetime64 type
df.loc['b', 'timestamp'] = iNaT
- self.assertTrue(isnull(df.loc['b', 'timestamp']))
+ assert isnull(df.loc['b', 'timestamp'])
# allow this syntax
df.loc['c', 'timestamp'] = nan
- self.assertTrue(isnull(df.loc['c', 'timestamp']))
+ assert isnull(df.loc['c', 'timestamp'])
# allow this syntax
df.loc['d', :] = nan
- self.assertTrue(isnull(df.loc['c', :]).all() == False) # noqa
+ assert not isnull(df.loc['c', :]).all()
# as of GH 3216 this will now work!
# try to set with a list like item
@@ -1694,8 +1694,8 @@ def test_set_value_resize(self):
res = self.frame.copy()
res3 = res.set_value('foobar', 'baz', 5)
- self.assertTrue(is_float_dtype(res3['baz']))
- self.assertTrue(isnull(res3['baz'].drop(['foobar'])).all())
+ assert is_float_dtype(res3['baz'])
+ assert isnull(res3['baz'].drop(['foobar'])).all()
pytest.raises(ValueError, res3.set_value, 'foobar', 'baz', 'sam')
def test_set_value_with_index_dtype_change(self):
@@ -1733,15 +1733,14 @@ def test_get_set_value_no_partial_indexing(self):
def test_single_element_ix_dont_upcast(self):
self.frame['E'] = 1
- self.assertTrue(issubclass(self.frame['E'].dtype.type,
- (int, np.integer)))
+ assert issubclass(self.frame['E'].dtype.type, (int, np.integer))
with catch_warnings(record=True):
result = self.frame.ix[self.frame.index[5], 'E']
- self.assertTrue(is_integer(result))
+ assert is_integer(result)
result = self.frame.loc[self.frame.index[5], 'E']
- self.assertTrue(is_integer(result))
+ assert is_integer(result)
# GH 11617
df = pd.DataFrame(dict(a=[1.23]))
@@ -1749,9 +1748,9 @@ def test_single_element_ix_dont_upcast(self):
with catch_warnings(record=True):
result = df.ix[0, "b"]
- self.assertTrue(is_integer(result))
+ assert is_integer(result)
result = df.loc[0, "b"]
- self.assertTrue(is_integer(result))
+ assert is_integer(result)
expected = Series([666], [0], name='b')
with catch_warnings(record=True):
@@ -1812,7 +1811,7 @@ def test_iloc_col(self):
def f():
result[8] = 0.
pytest.raises(com.SettingWithCopyError, f)
- self.assertTrue((df[8] == 0).all())
+ assert (df[8] == 0).all()
# list of integers
result = df.iloc[:, [1, 2, 4, 6]]
@@ -1867,7 +1866,7 @@ def test_iloc_duplicates(self):
def test_iloc_sparse_propegate_fill_value(self):
from pandas.core.sparse.api import SparseDataFrame
df = SparseDataFrame({'A': [999, 1]}, default_fill_value=999)
- self.assertTrue(len(df['A'].sp_values) == len(df.iloc[:, 0].sp_values))
+ assert len(df['A'].sp_values) == len(df.iloc[:, 0].sp_values)
def test_iat(self):
@@ -1934,10 +1933,10 @@ def test_reindex_frame_add_nat(self):
df = DataFrame({'A': np.random.randn(len(rng)), 'B': rng})
result = df.reindex(lrange(15))
- self.assertTrue(np.issubdtype(result['B'].dtype, np.dtype('M8[ns]')))
+ assert np.issubdtype(result['B'].dtype, np.dtype('M8[ns]'))
mask = com.isnull(result)['B']
- self.assertTrue(mask[-5:].all())
+ assert mask[-5:].all()
assert not mask[:-5].any()
def test_set_dataframe_column_ns_dtype(self):
@@ -2178,7 +2177,7 @@ def test_xs(self):
xs = self.frame.xs(idx)
for item, value in compat.iteritems(xs):
if np.isnan(value):
- self.assertTrue(np.isnan(self.frame[item][idx]))
+ assert np.isnan(self.frame[item][idx])
else:
self.assertEqual(value, self.frame[item][idx])
@@ -2204,7 +2203,7 @@ def test_xs(self):
# view is returned if possible
series = self.frame.xs('A', axis=1)
series[:] = 5
- self.assertTrue((expected == 5).all())
+ assert (expected == 5).all()
def test_xs_corner(self):
# pathological mixed-type reordering case
@@ -2254,7 +2253,7 @@ def test_xs_view(self):
index=lrange(4), columns=lrange(5))
dm.xs(2)[:] = 10
- self.assertTrue((dm.xs(2) == 10).all())
+ assert (dm.xs(2) == 10).all()
def test_index_namedtuple(self):
from collections import namedtuple
@@ -2350,7 +2349,7 @@ def _check_get(df, cond, check_dtypes=True):
# dtypes
if check_dtypes:
- self.assertTrue((rs.dtypes == df.dtypes).all())
+ assert (rs.dtypes == df.dtypes).all()
# check getting
for df in [default_frame, self.mixed_frame,
@@ -2399,7 +2398,7 @@ def _check_align(df, cond, other, check_dtypes=True):
# can't check dtype when other is an ndarray
if check_dtypes and not isinstance(other, np.ndarray):
- self.assertTrue((rs.dtypes == df.dtypes).all())
+ assert (rs.dtypes == df.dtypes).all()
for df in [self.mixed_frame, self.mixed_float, self.mixed_int]:
@@ -2939,7 +2938,7 @@ def test_setitem(self):
# are copies)
b1 = df._data.blocks[1]
b2 = df._data.blocks[2]
- self.assertTrue(b1.values.equals(b2.values))
+ assert b1.values.equals(b2.values)
assert id(b1.values.values.base) != id(b2.values.values.base)
# with nan
@@ -2958,7 +2957,7 @@ def test_set_reset(self):
# set/reset
df = DataFrame({'A': [0, 1, 2]}, index=idx)
result = df.reset_index()
- self.assertTrue(result['foo'].dtype, 'M8[ns, US/Eastern')
+ assert result['foo'].dtype == 'M8[ns, US/Eastern]'
df = result.set_index('foo')
tm.assert_index_equal(df.index, idx)
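The conversion applied throughout this patch turns `self.assertTrue(expr)` into a bare `assert expr`. One pitfall, visible in the `test_set_reset` hunk above, is that a two-argument `self.assertTrue(a, b)` call converts naively to `assert a, b`, where `b` becomes the assertion *message* and is never checked. A minimal sketch of the trap, in plain Python with hypothetical helper names:

```python
# unittest style:  self.assertTrue(x)  ->  pytest style:  assert x
# Pitfall: a two-argument assertTrue converted naively becomes
#     assert value, other
# which only checks that `value` is truthy -- `other` is the failure
# message, not a comparison target.

def naive_conversion_passes(value, other):
    # mimics `assert value, other`: `other` is only used as a message
    assert value, other
    return True

def intended_comparison(value, other):
    # what the test author usually meant
    return value == other

# `assert 5, 0` passes even though 5 != 0
assert naive_conversion_passes(5, 0)
assert not intended_comparison(5, 0)
```

This is why the handful of `assert a, b` lines in this diff deserve a second look: they pass for the wrong reason.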
diff --git a/pandas/tests/frame/test_missing.py b/pandas/tests/frame/test_missing.py
index 721cee7f3141b..17f12679ae92e 100644
--- a/pandas/tests/frame/test_missing.py
+++ b/pandas/tests/frame/test_missing.py
@@ -78,7 +78,7 @@ def test_dropIncompleteRows(self):
samesize_frame = frame.dropna(subset=['bar'])
assert_series_equal(frame['foo'], original)
- self.assertTrue((frame['bar'] == 5).all())
+ assert (frame['bar'] == 5).all()
inp_frame2.dropna(subset=['bar'], inplace=True)
tm.assert_index_equal(samesize_frame.index, self.frame.index)
tm.assert_index_equal(inp_frame2.index, self.frame.index)
@@ -187,13 +187,12 @@ def test_fillna(self):
tf.loc[tf.index[-5:], 'A'] = nan
zero_filled = self.tsframe.fillna(0)
- self.assertTrue((zero_filled.loc[zero_filled.index[:5], 'A'] == 0
- ).all())
+ assert (zero_filled.loc[zero_filled.index[:5], 'A'] == 0).all()
padded = self.tsframe.fillna(method='pad')
- self.assertTrue(np.isnan(padded.loc[padded.index[:5], 'A']).all())
- self.assertTrue((padded.loc[padded.index[-5:], 'A'] ==
- padded.loc[padded.index[-5], 'A']).all())
+ assert np.isnan(padded.loc[padded.index[:5], 'A']).all()
+ assert (padded.loc[padded.index[-5:], 'A'] ==
+ padded.loc[padded.index[-5], 'A']).all()
# mixed type
mf = self.mixed_frame
@@ -502,7 +501,7 @@ def test_fill_corner(self):
mf.loc[mf.index[-10:], 'A'] = nan
filled = self.mixed_frame.fillna(value=0)
- self.assertTrue((filled.loc[filled.index[5:20], 'foo'] == 0).all())
+ assert (filled.loc[filled.index[5:20], 'foo'] == 0).all()
del self.mixed_frame['foo']
empty_float = self.frame.reindex(columns=[])
diff --git a/pandas/tests/frame/test_mutate_columns.py b/pandas/tests/frame/test_mutate_columns.py
index d5035f2908528..fbd1b7be3e431 100644
--- a/pandas/tests/frame/test_mutate_columns.py
+++ b/pandas/tests/frame/test_mutate_columns.py
@@ -132,16 +132,16 @@ def test_insert(self):
# new item
df['x'] = df['a'].astype('float32')
result = Series(dict(float64=5, float32=1))
- self.assertTrue((df.get_dtype_counts() == result).all())
+ assert (df.get_dtype_counts() == result).all()
# replacing current (in different block)
df['a'] = df['a'].astype('float32')
result = Series(dict(float64=4, float32=2))
- self.assertTrue((df.get_dtype_counts() == result).all())
+ assert (df.get_dtype_counts() == result).all()
df['y'] = df['a'].astype('int32')
result = Series(dict(float64=4, float32=2, int32=1))
- self.assertTrue((df.get_dtype_counts() == result).all())
+ assert (df.get_dtype_counts() == result).all()
with tm.assert_raises_regex(ValueError, 'already exists'):
df.insert(1, 'a', df['b'])
@@ -222,7 +222,7 @@ def test_pop_non_unique_cols(self):
self.assertEqual(type(res), DataFrame)
self.assertEqual(len(res), 2)
self.assertEqual(len(df.columns), 1)
- self.assertTrue("b" in df.columns)
+ assert "b" in df.columns
assert "a" not in df.columns
self.assertEqual(len(df.index), 2)
diff --git a/pandas/tests/frame/test_nonunique_indexes.py b/pandas/tests/frame/test_nonunique_indexes.py
index 5c141b6a46eec..61dd92fcd1fab 100644
--- a/pandas/tests/frame/test_nonunique_indexes.py
+++ b/pandas/tests/frame/test_nonunique_indexes.py
@@ -151,7 +151,7 @@ def check(result, expected=None):
df = DataFrame([[1, 2.5], [3, 4.5]], index=[1, 2], columns=['x', 'x'])
result = df.values
expected = np.array([[1, 2.5], [3, 4.5]])
- self.assertTrue((result == expected).all().all())
+ assert (result == expected).all().all()
# rename, GH 4403
df4 = DataFrame(
@@ -448,7 +448,7 @@ def test_as_matrix_duplicates(self):
expected = np.array([[1, 2, 'a', 'b'], [1, 2, 'a', 'b']],
dtype=object)
- self.assertTrue(np.array_equal(result, expected))
+ assert np.array_equal(result, expected)
def test_set_value_by_index(self):
# See gh-12344
diff --git a/pandas/tests/frame/test_operators.py b/pandas/tests/frame/test_operators.py
index 7f87666d5ecc4..efe167297627a 100644
--- a/pandas/tests/frame/test_operators.py
+++ b/pandas/tests/frame/test_operators.py
@@ -43,7 +43,7 @@ def test_operators(self):
if not np.isnan(val):
self.assertEqual(val, origVal)
else:
- self.assertTrue(np.isnan(origVal))
+ assert np.isnan(origVal)
for col, series in compat.iteritems(seriesSum):
for idx, val in compat.iteritems(series):
@@ -51,7 +51,7 @@ def test_operators(self):
if not np.isnan(val):
self.assertEqual(val, origVal)
else:
- self.assertTrue(np.isnan(origVal))
+ assert np.isnan(origVal)
added = self.frame2 + self.frame2
expected = self.frame2 * 2
@@ -68,7 +68,7 @@ def test_operators(self):
DataFrame(index=[0], dtype=dtype),
]
for df in frames:
- self.assertTrue((df + df).equals(df))
+ assert (df + df).equals(df)
assert_frame_equal(df + df, df)
def test_ops_np_scalar(self):
@@ -573,7 +573,7 @@ def _check_unaligned_frame(meth, op, df, other):
assert_frame_equal(rs, xp)
# DataFrame
- self.assertTrue(df.eq(df).values.all())
+ assert df.eq(df).values.all()
assert not df.ne(df).values.any()
for op in ['eq', 'ne', 'gt', 'lt', 'ge', 'le']:
f = getattr(df, op)
@@ -636,7 +636,7 @@ def _test_seq(df, idx_ser, col_ser):
rs = df.eq(df)
assert not rs.loc[0, 0]
rs = df.ne(df)
- self.assertTrue(rs.loc[0, 0])
+ assert rs.loc[0, 0]
rs = df.gt(df)
assert not rs.loc[0, 0]
rs = df.lt(df)
@@ -654,7 +654,7 @@ def _test_seq(df, idx_ser, col_ser):
rs = df.gt(df2)
assert not rs.values.any()
rs = df.ne(df2)
- self.assertTrue(rs.values.all())
+ assert rs.values.all()
arr3 = np.array([2j, np.nan, None])
df3 = DataFrame({'a': arr3})
@@ -766,31 +766,30 @@ def test_combineFrame(self):
exp.loc[~exp.index.isin(indexer)] = np.nan
tm.assert_series_equal(added['A'], exp.loc[added['A'].index])
- self.assertTrue(
- np.isnan(added['C'].reindex(frame_copy.index)[:5]).all())
+ assert np.isnan(added['C'].reindex(frame_copy.index)[:5]).all()
# assert(False)
- self.assertTrue(np.isnan(added['D']).all())
+ assert np.isnan(added['D']).all()
self_added = self.frame + self.frame
tm.assert_index_equal(self_added.index, self.frame.index)
added_rev = frame_copy + self.frame
- self.assertTrue(np.isnan(added['D']).all())
- self.assertTrue(np.isnan(added_rev['D']).all())
+ assert np.isnan(added['D']).all()
+ assert np.isnan(added_rev['D']).all()
# corner cases
# empty
plus_empty = self.frame + self.empty
- self.assertTrue(np.isnan(plus_empty.values).all())
+ assert np.isnan(plus_empty.values).all()
empty_plus = self.empty + self.frame
- self.assertTrue(np.isnan(empty_plus.values).all())
+ assert np.isnan(empty_plus.values).all()
empty_empty = self.empty + self.empty
- self.assertTrue(empty_empty.empty)
+ assert empty_empty.empty
# out of order
reverse = self.frame.reindex(columns=self.frame.columns[::-1])
@@ -831,7 +830,7 @@ def test_combineSeries(self):
for key, s in compat.iteritems(self.frame):
assert_series_equal(larger_added[key], s + series[key])
assert 'E' in larger_added
- self.assertTrue(np.isnan(larger_added['E']).all())
+ assert np.isnan(larger_added['E']).all()
# vs mix (upcast) as needed
added = self.mixed_float + series
@@ -866,7 +865,7 @@ def test_combineSeries(self):
if col.name == ts.name:
self.assertEqual(result.name, 'A')
else:
- self.assertTrue(result.name is None)
+ assert result.name is None
smaller_frame = self.tsframe[:-5]
smaller_added = smaller_frame.add(ts, axis='index')
@@ -1045,8 +1044,8 @@ def test_combine_generic(self):
combined = df1.combine(df2, np.add)
combined2 = df2.combine(df1, np.add)
- self.assertTrue(combined['D'].isnull().all())
- self.assertTrue(combined2['D'].isnull().all())
+ assert combined['D'].isnull().all()
+ assert combined2['D'].isnull().all()
chunk = combined.loc[combined.index[:-5], ['A', 'B', 'C']]
chunk2 = combined2.loc[combined2.index[:-5], ['A', 'B', 'C']]
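The dominant pattern in these hunks reduces an elementwise comparison to a single boolean before asserting, e.g. `assert (df == 0).values.all()`. The reduction matters: asserting on the comparison result directly would be wrong, since a non-empty collection of booleans is truthy regardless of its contents. A stdlib-only sketch of the same idea, with the builtin `all()` standing in for `.all()`:

```python
# Elementwise comparison produces a sequence of booleans; asserting on
# that sequence directly is a bug (any non-empty list is truthy), so the
# converted tests reduce it with .all() first.
values = [0, 0, 0, 0]

comparisons = [v == 0 for v in values]   # elementwise, like (df == 0).values
assert all(comparisons)                  # reduced to one bool, like .all()

# The trap the reduction avoids: an all-False list is still truthy.
bad = [v == 99 for v in values]
assert bad            # passes, but only because the list is non-empty
assert not all(bad)   # the reduction gives the intended answer
```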
diff --git a/pandas/tests/frame/test_period.py b/pandas/tests/frame/test_period.py
index 194b6c0e251bc..0ca37de6bf2d4 100644
--- a/pandas/tests/frame/test_period.py
+++ b/pandas/tests/frame/test_period.py
@@ -112,8 +112,8 @@ def _get_with_delta(delta, freq='A-DEC'):
result1 = df.to_timestamp('5t', axis=1)
result2 = df.to_timestamp('t', axis=1)
expected = pd.date_range('2001-01-01', '2009-01-01', freq='AS')
- self.assertTrue(isinstance(result1.columns, DatetimeIndex))
- self.assertTrue(isinstance(result2.columns, DatetimeIndex))
+ assert isinstance(result1.columns, DatetimeIndex)
+ assert isinstance(result2.columns, DatetimeIndex)
tm.assert_numpy_array_equal(result1.columns.asi8, expected.asi8)
tm.assert_numpy_array_equal(result2.columns.asi8, expected.asi8)
# PeriodIndex.to_timestamp always use 'infer'
diff --git a/pandas/tests/frame/test_query_eval.py b/pandas/tests/frame/test_query_eval.py
index 2232205a57326..575906fb5c8b2 100644
--- a/pandas/tests/frame/test_query_eval.py
+++ b/pandas/tests/frame/test_query_eval.py
@@ -157,10 +157,10 @@ def test_eval_resolvers_as_list(self):
df = DataFrame(randn(10, 2), columns=list('ab'))
dict1 = {'a': 1}
dict2 = {'b': 2}
- self.assertTrue(df.eval('a + b', resolvers=[dict1, dict2]) ==
- dict1['a'] + dict2['b'])
- self.assertTrue(pd.eval('a + b', resolvers=[dict1, dict2]) ==
- dict1['a'] + dict2['b'])
+ assert (df.eval('a + b', resolvers=[dict1, dict2]) ==
+ dict1['a'] + dict2['b'])
+ assert (pd.eval('a + b', resolvers=[dict1, dict2]) ==
+ dict1['a'] + dict2['b'])
class TestDataFrameQueryWithMultiIndex(tm.TestCase):
diff --git a/pandas/tests/frame/test_replace.py b/pandas/tests/frame/test_replace.py
index 262734d093d4e..87075e6d6e631 100644
--- a/pandas/tests/frame/test_replace.py
+++ b/pandas/tests/frame/test_replace.py
@@ -781,7 +781,7 @@ def test_replace_dtypes(self):
# bools
df = DataFrame({'bools': [True, False, True]})
result = df.replace(False, True)
- self.assertTrue(result.values.all())
+ assert result.values.all()
# complex blocks
df = DataFrame({'complex': [1j, 2j, 3j]})
diff --git a/pandas/tests/frame/test_repr_info.py b/pandas/tests/frame/test_repr_info.py
index bcb85b6e44d54..dbdbebddcc0b5 100644
--- a/pandas/tests/frame/test_repr_info.py
+++ b/pandas/tests/frame/test_repr_info.py
@@ -79,7 +79,7 @@ def test_repr(self):
def test_repr_dimensions(self):
df = DataFrame([[1, 2, ], [3, 4]])
with option_context('display.show_dimensions', True):
- self.assertTrue("2 rows x 2 columns" in repr(df))
+ assert "2 rows x 2 columns" in repr(df)
with option_context('display.show_dimensions', False):
assert "2 rows x 2 columns" not in repr(df)
@@ -211,7 +211,7 @@ def test_info_wide(self):
io = StringIO()
df.info(buf=io, max_cols=101)
rs = io.getvalue()
- self.assertTrue(len(rs.splitlines()) > 100)
+ assert len(rs.splitlines()) > 100
xp = rs
set_option('display.max_info_columns', 101)
@@ -303,18 +303,18 @@ def test_info_memory_usage(self):
# display memory usage case
df.info(buf=buf, memory_usage=True)
res = buf.getvalue().splitlines()
- self.assertTrue("memory usage: " in res[-1])
+ assert "memory usage: " in res[-1]
# do not display memory usage cas
df.info(buf=buf, memory_usage=False)
res = buf.getvalue().splitlines()
- self.assertTrue("memory usage: " not in res[-1])
+ assert "memory usage: " not in res[-1]
df.info(buf=buf, memory_usage=True)
res = buf.getvalue().splitlines()
# memory usage is a lower bound, so print it as XYZ+ MB
- self.assertTrue(re.match(r"memory usage: [^+]+\+", res[-1]))
+ assert re.match(r"memory usage: [^+]+\+", res[-1])
df.iloc[:, :5].info(buf=buf, memory_usage=True)
res = buf.getvalue().splitlines()
@@ -325,11 +325,11 @@ def test_info_memory_usage(self):
df_with_object_index = pd.DataFrame({'a': [1]}, index=['foo'])
df_with_object_index.info(buf=buf, memory_usage=True)
res = buf.getvalue().splitlines()
- self.assertTrue(re.match(r"memory usage: [^+]+\+", res[-1]))
+ assert re.match(r"memory usage: [^+]+\+", res[-1])
df_with_object_index.info(buf=buf, memory_usage='deep')
res = buf.getvalue().splitlines()
- self.assertTrue(re.match(r"memory usage: [^+]+$", res[-1]))
+ assert re.match(r"memory usage: [^+]+$", res[-1])
self.assertGreater(df_with_object_index.memory_usage(index=True,
deep=True).sum(),
@@ -380,7 +380,7 @@ def test_info_memory_usage(self):
# sys.getsizeof will call the .memory_usage with
# deep=True, and add on some GC overhead
diff = df.memory_usage(deep=True).sum() - sys.getsizeof(df)
- self.assertTrue(abs(diff) < 100)
+ assert abs(diff) < 100
def test_info_memory_usage_qualified(self):
@@ -394,7 +394,7 @@ def test_info_memory_usage_qualified(self):
df = DataFrame(1, columns=list('ab'),
index=list('ABC'))
df.info(buf=buf)
- self.assertTrue('+' in buf.getvalue())
+ assert '+' in buf.getvalue()
buf = StringIO()
df = DataFrame(1, columns=list('ab'),
@@ -408,7 +408,7 @@ def test_info_memory_usage_qualified(self):
index=pd.MultiIndex.from_product(
[range(3), ['foo', 'bar']]))
df.info(buf=buf)
- self.assertTrue('+' in buf.getvalue())
+ assert '+' in buf.getvalue()
def test_info_memory_usage_bug_on_multiindex(self):
# GH 14308
@@ -429,10 +429,10 @@ def memory_usage(f):
unstacked = df.unstack('id')
self.assertEqual(df.values.nbytes, unstacked.values.nbytes)
- self.assertTrue(memory_usage(df) > memory_usage(unstacked))
+ assert memory_usage(df) > memory_usage(unstacked)
# high upper bound
- self.assertTrue(memory_usage(unstacked) - memory_usage(df) < 2000)
+ assert memory_usage(unstacked) - memory_usage(df) < 2000
def test_info_categorical(self):
# GH14298
diff --git a/pandas/tests/frame/test_reshape.py b/pandas/tests/frame/test_reshape.py
index c1905fa0476c4..9c48233ff29cd 100644
--- a/pandas/tests/frame/test_reshape.py
+++ b/pandas/tests/frame/test_reshape.py
@@ -445,7 +445,7 @@ def test_unstack_to_series(self):
# check reversibility
data = self.frame.unstack()
- self.assertTrue(isinstance(data, Series))
+ assert isinstance(data, Series)
undo = data.unstack().T
assert_frame_equal(undo, self.frame)
diff --git a/pandas/tests/frame/test_subclass.py b/pandas/tests/frame/test_subclass.py
index db4f4b909f7cb..ade696885c2e0 100644
--- a/pandas/tests/frame/test_subclass.py
+++ b/pandas/tests/frame/test_subclass.py
@@ -50,26 +50,26 @@ def custom_frame_function(self):
cdf = CustomDataFrame(data)
# Did we get back our own DF class?
- self.assertTrue(isinstance(cdf, CustomDataFrame))
+ assert isinstance(cdf, CustomDataFrame)
# Do we get back our own Series class after selecting a column?
cdf_series = cdf.col1
- self.assertTrue(isinstance(cdf_series, CustomSeries))
+ assert isinstance(cdf_series, CustomSeries)
self.assertEqual(cdf_series.custom_series_function(), 'OK')
# Do we get back our own DF class after slicing row-wise?
cdf_rows = cdf[1:5]
- self.assertTrue(isinstance(cdf_rows, CustomDataFrame))
+ assert isinstance(cdf_rows, CustomDataFrame)
self.assertEqual(cdf_rows.custom_frame_function(), 'OK')
# Make sure sliced part of multi-index frame is custom class
mcol = pd.MultiIndex.from_tuples([('A', 'A'), ('A', 'B')])
cdf_multi = CustomDataFrame([[0, 1], [2, 3]], columns=mcol)
- self.assertTrue(isinstance(cdf_multi['A'], CustomDataFrame))
+ assert isinstance(cdf_multi['A'], CustomDataFrame)
mcol = pd.MultiIndex.from_tuples([('A', ''), ('B', '')])
cdf_multi2 = CustomDataFrame([[0, 1], [2, 3]], columns=mcol)
- self.assertTrue(isinstance(cdf_multi2['A'], CustomSeries))
+ assert isinstance(cdf_multi2['A'], CustomSeries)
def test_dataframe_metadata(self):
df = tm.SubclassedDataFrame({'X': [1, 2, 3], 'Y': [1, 2, 3]},
@@ -142,7 +142,7 @@ class SubclassedPanel(Panel):
index = MultiIndex.from_tuples([(0, 0), (0, 1), (0, 2)])
df = SubclassedFrame({'X': [1, 2, 3], 'Y': [4, 5, 6]}, index=index)
result = df.to_panel()
- self.assertTrue(isinstance(result, SubclassedPanel))
+ assert isinstance(result, SubclassedPanel)
expected = SubclassedPanel([[[1, 2, 3]], [[4, 5, 6]]],
items=['X', 'Y'], major_axis=[0],
minor_axis=[0, 1, 2],
diff --git a/pandas/tests/frame/test_timeseries.py b/pandas/tests/frame/test_timeseries.py
index 66af6aaca6513..910f04f0d63c6 100644
--- a/pandas/tests/frame/test_timeseries.py
+++ b/pandas/tests/frame/test_timeseries.py
@@ -122,14 +122,14 @@ def test_frame_ctor_datetime64_column(self):
dates = np.asarray(rng)
df = DataFrame({'A': np.random.randn(len(rng)), 'B': dates})
- self.assertTrue(np.issubdtype(df['B'].dtype, np.dtype('M8[ns]')))
+ assert np.issubdtype(df['B'].dtype, np.dtype('M8[ns]'))
def test_frame_add_datetime64_column(self):
rng = date_range('1/1/2000 00:00:00', '1/1/2000 1:59:50', freq='10s')
df = DataFrame(index=np.arange(len(rng)))
df['A'] = rng
- self.assertTrue(np.issubdtype(df['A'].dtype, np.dtype('M8[ns]')))
+ assert np.issubdtype(df['A'].dtype, np.dtype('M8[ns]'))
def test_frame_datetime64_pre1900_repr(self):
df = DataFrame({'year': date_range('1/1/1700', periods=50,
@@ -154,7 +154,7 @@ def test_frame_add_datetime64_col_other_units(self):
ex_vals = to_datetime(vals.astype('O')).values
self.assertEqual(df[unit].dtype, ns_dtype)
- self.assertTrue((df[unit].values == ex_vals).all())
+ assert (df[unit].values == ex_vals).all()
# Test insertion into existing datetime64 column
df = DataFrame({'ints': np.arange(n)}, index=np.arange(n))
@@ -169,7 +169,7 @@ def test_frame_add_datetime64_col_other_units(self):
tmp['dates'] = vals
ex_vals = to_datetime(vals.astype('O')).values
- self.assertTrue((tmp['dates'].values == ex_vals).all())
+ assert (tmp['dates'].values == ex_vals).all()
def test_shift(self):
# naive shift
@@ -422,9 +422,9 @@ def test_at_time_frame(self):
rng = date_range('1/1/2000', '1/5/2000', freq='5min')
ts = DataFrame(np.random.randn(len(rng), 2), index=rng)
rs = ts.at_time(rng[1])
- self.assertTrue((rs.index.hour == rng[1].hour).all())
- self.assertTrue((rs.index.minute == rng[1].minute).all())
- self.assertTrue((rs.index.second == rng[1].second).all())
+ assert (rs.index.hour == rng[1].hour).all()
+ assert (rs.index.minute == rng[1].minute).all()
+ assert (rs.index.second == rng[1].second).all()
result = ts.at_time('9:30')
expected = ts.at_time(time(9, 30))
@@ -467,14 +467,14 @@ def test_between_time_frame(self):
for rs in filtered.index:
t = rs.time()
if inc_start:
- self.assertTrue(t >= stime)
+ assert t >= stime
else:
- self.assertTrue(t > stime)
+ assert t > stime
if inc_end:
- self.assertTrue(t <= etime)
+ assert t <= etime
else:
- self.assertTrue(t < etime)
+ assert t < etime
result = ts.between_time('00:00', '01:00')
expected = ts.between_time(stime, etime)
@@ -499,14 +499,14 @@ def test_between_time_frame(self):
for rs in filtered.index:
t = rs.time()
if inc_start:
- self.assertTrue((t >= stime) or (t <= etime))
+ assert (t >= stime) or (t <= etime)
else:
- self.assertTrue((t > stime) or (t <= etime))
+ assert (t > stime) or (t <= etime)
if inc_end:
- self.assertTrue((t <= etime) or (t >= stime))
+ assert (t <= etime) or (t >= stime)
else:
- self.assertTrue((t < etime) or (t >= stime))
+ assert (t < etime) or (t >= stime)
def test_operation_on_NaT(self):
# Both NaT and Timestamp are in DataFrame.
diff --git a/pandas/tests/frame/test_to_csv.py b/pandas/tests/frame/test_to_csv.py
index ffce525434ab5..11c10f1982558 100644
--- a/pandas/tests/frame/test_to_csv.py
+++ b/pandas/tests/frame/test_to_csv.py
@@ -548,7 +548,7 @@ def _make_frame(names=None):
df = _make_frame(True)
df.to_csv(path, tupleize_cols=False, index=False)
result = read_csv(path, header=[0, 1], tupleize_cols=False)
- self.assertTrue(all([x is None for x in result.columns.names]))
+ assert all([x is None for x in result.columns.names])
result.columns.names = df.columns.names
assert_frame_equal(df, result)
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index 0696473d0449f..278682ccb8d45 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -446,8 +446,8 @@ def test_groupby_duplicated_column_errormsg(self):
grouped = df.groupby('B')
c = grouped.count()
- self.assertTrue(c.columns.nlevels == 1)
- self.assertTrue(c.columns.size == 3)
+ assert c.columns.nlevels == 1
+ assert c.columns.size == 3
def test_groupby_dict_mapping(self):
# GH #679
@@ -798,7 +798,7 @@ def test_with_na(self):
assert_series_equal(agged, expected, check_dtype=False)
- # self.assertTrue(issubclass(agged.dtype.type, np.integer))
+ # assert issubclass(agged.dtype.type, np.integer)
# explicity return a float from my function
def f(x):
@@ -808,7 +808,7 @@ def f(x):
expected = Series([4, 2], index=['bar', 'foo'])
assert_series_equal(agged, expected, check_dtype=False)
- self.assertTrue(issubclass(agged.dtype.type, np.dtype(dtype).type))
+ assert issubclass(agged.dtype.type, np.dtype(dtype).type)
def test_indices_concatenation_order(self):
@@ -995,7 +995,7 @@ def test_frame_groupby(self):
for k, v in compat.iteritems(groups):
samething = self.tsframe.index.take(indices[k])
- self.assertTrue((samething == v).all())
+ assert (samething == v).all()
def test_grouping_is_iterable(self):
# this code path isn't used anywhere else
@@ -1637,16 +1637,16 @@ def test_max_min_non_numeric(self):
'ss': 4 * ['mama']})
result = aa.groupby('nn').max()
- self.assertTrue('ss' in result)
+ assert 'ss' in result
result = aa.groupby('nn').max(numeric_only=False)
- self.assertTrue('ss' in result)
+ assert 'ss' in result
result = aa.groupby('nn').min()
- self.assertTrue('ss' in result)
+ assert 'ss' in result
result = aa.groupby('nn').min(numeric_only=False)
- self.assertTrue('ss' in result)
+ assert 'ss' in result
def test_arg_passthru(self):
# make sure that we are passing thru kwargs
@@ -1970,11 +1970,11 @@ def test_apply_series_yield_constant(self):
def test_apply_frame_yield_constant(self):
# GH13568
result = self.df.groupby(['A', 'B']).apply(len)
- self.assertTrue(isinstance(result, Series))
+ assert isinstance(result, Series)
assert result.name is None
result = self.df.groupby(['A', 'B'])[['C', 'D']].apply(len)
- self.assertTrue(isinstance(result, Series))
+ assert isinstance(result, Series)
assert result.name is None
def test_apply_frame_to_series(self):
@@ -2459,7 +2459,7 @@ def f(g):
return g
result = grouped.apply(f)
- self.assertTrue('value3' in result)
+ assert 'value3' in result
def test_groupby_wrong_multi_labels(self):
from pandas import read_csv
@@ -2562,7 +2562,7 @@ def test_cython_grouper_series_bug_noncontig(self):
inds = np.tile(lrange(10), 10)
result = obj.groupby(inds).agg(Series.median)
- self.assertTrue(result.isnull().all())
+ assert result.isnull().all()
def test_series_grouper_noncontig_index(self):
index = Index(tm.rands_array(10, 100))
@@ -3254,7 +3254,7 @@ def test_groupby_multiindex_not_lexsorted(self):
lexsorted_mi = MultiIndex.from_tuples(
[('a', ''), ('b1', 'c1'), ('b2', 'c2')], names=['b', 'c'])
lexsorted_df = DataFrame([[1, 3, 4]], columns=lexsorted_mi)
- self.assertTrue(lexsorted_df.columns.is_lexsorted())
+ assert lexsorted_df.columns.is_lexsorted()
# define the non-lexsorted version
not_lexsorted_df = DataFrame(columns=['a', 'b', 'c', 'd'],
diff --git a/pandas/tests/groupby/test_nth.py b/pandas/tests/groupby/test_nth.py
index bf2f1f1f9cbc5..f583fa7aa7e86 100644
--- a/pandas/tests/groupby/test_nth.py
+++ b/pandas/tests/groupby/test_nth.py
@@ -42,9 +42,9 @@ def test_first_last_nth(self):
grouped['B'].nth(0)
self.df.loc[self.df['A'] == 'foo', 'B'] = np.nan
- self.assertTrue(isnull(grouped['B'].first()['foo']))
- self.assertTrue(isnull(grouped['B'].last()['foo']))
- self.assertTrue(isnull(grouped['B'].nth(0)['foo']))
+ assert isnull(grouped['B'].first()['foo'])
+ assert isnull(grouped['B'].last()['foo'])
+ assert isnull(grouped['B'].nth(0)['foo'])
# v0.14.0 whatsnew
df = DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=['A', 'B'])
@@ -154,7 +154,7 @@ def test_nth(self):
expected = s.groupby(g).first()
expected2 = s.groupby(g).apply(lambda x: x.iloc[0])
assert_series_equal(expected2, expected, check_names=False)
- self.assertTrue(expected.name, 0)
+ assert expected.name
self.assertEqual(expected.name, 1)
# validate first
diff --git a/pandas/tests/groupby/test_timegrouper.py b/pandas/tests/groupby/test_timegrouper.py
index ae0413615f738..db3fdfa605b5b 100644
--- a/pandas/tests/groupby/test_timegrouper.py
+++ b/pandas/tests/groupby/test_timegrouper.py
@@ -80,11 +80,11 @@ def test_groupby_with_timegrouper_methods(self):
for df in [df_original, df_sorted]:
df = df.set_index('Date', drop=False)
g = df.groupby(pd.TimeGrouper('6M'))
- self.assertTrue(g.group_keys)
- self.assertTrue(isinstance(g.grouper, pd.core.groupby.BinGrouper))
+ assert g.group_keys
+ assert isinstance(g.grouper, pd.core.groupby.BinGrouper)
groups = g.groups
- self.assertTrue(isinstance(groups, dict))
- self.assertTrue(len(groups) == 3)
+ assert isinstance(groups, dict)
+ assert len(groups) == 3
def test_timegrouper_with_reg_groups(self):
@@ -528,15 +528,15 @@ def test_groupby_first_datetime64(self):
df = DataFrame([(1, 1351036800000000000), (2, 1351036800000000000)])
df[1] = df[1].view('M8[ns]')
- self.assertTrue(issubclass(df[1].dtype.type, np.datetime64))
+ assert issubclass(df[1].dtype.type, np.datetime64)
result = df.groupby(level=0).first()
got_dt = result[1].dtype
- self.assertTrue(issubclass(got_dt.type, np.datetime64))
+ assert issubclass(got_dt.type, np.datetime64)
result = df[1].groupby(level=0).first()
got_dt = result.dtype
- self.assertTrue(issubclass(got_dt.type, np.datetime64))
+ assert issubclass(got_dt.type, np.datetime64)
def test_groupby_max_datetime64(self):
# GH 5869
diff --git a/pandas/tests/indexes/common.py b/pandas/tests/indexes/common.py
index 23b1de76234c3..d9dccc39f469f 100644
--- a/pandas/tests/indexes/common.py
+++ b/pandas/tests/indexes/common.py
@@ -31,7 +31,7 @@ def setup_indices(self):
def verify_pickle(self, index):
unpickled = tm.round_trip_pickle(index)
- self.assertTrue(index.equals(unpickled))
+ assert index.equals(unpickled)
def test_pickle_compat_construction(self):
# this is testing for pickle compat
@@ -134,8 +134,8 @@ def test_reindex_base(self):
def test_ndarray_compat_properties(self):
idx = self.create_index()
- self.assertTrue(idx.T.equals(idx))
- self.assertTrue(idx.transpose().equals(idx))
+ assert idx.T.equals(idx)
+ assert idx.transpose().equals(idx)
values = idx.values
for prop in self._compat_props:
@@ -155,8 +155,8 @@ def test_str(self):
# test the string repr
idx = self.create_index()
idx.name = 'foo'
- self.assertTrue("'foo'" in str(idx))
- self.assertTrue(idx.__class__.__name__ in str(idx))
+ assert "'foo'" in str(idx)
+ assert idx.__class__.__name__ in str(idx)
def test_dtype_str(self):
for idx in self.indices.values():
@@ -304,7 +304,7 @@ def test_duplicates(self):
continue
idx = self._holder([ind[0]] * 5)
assert not idx.is_unique
- self.assertTrue(idx.has_duplicates)
+ assert idx.has_duplicates
# GH 10115
# preserve names
@@ -325,7 +325,7 @@ def test_get_unique_index(self):
# We test against `idx_unique`, so first we make sure it's unique
# and doesn't contain nans.
- self.assertTrue(idx_unique.is_unique)
+ assert idx_unique.is_unique
try:
assert not idx_unique.hasnans
except NotImplementedError:
@@ -349,7 +349,7 @@ def test_get_unique_index(self):
vals_unique = vals[:2]
idx_nan = ind._shallow_copy(vals)
idx_unique_nan = ind._shallow_copy(vals_unique)
- self.assertTrue(idx_unique_nan.is_unique)
+ assert idx_unique_nan.is_unique
self.assertEqual(idx_nan.dtype, ind.dtype)
self.assertEqual(idx_unique_nan.dtype, ind.dtype)
@@ -390,10 +390,10 @@ def test_memory_usage(self):
# RangeIndex, IntervalIndex
# don't have engines
if not isinstance(index, (RangeIndex, IntervalIndex)):
- self.assertTrue(result2 > result)
+ assert result2 > result
if index.inferred_type == 'object':
- self.assertTrue(result3 > result2)
+ assert result3 > result2
else:
@@ -453,7 +453,7 @@ def test_take(self):
result = ind.take(indexer)
expected = ind[indexer]
- self.assertTrue(result.equals(expected))
+ assert result.equals(expected)
if not isinstance(ind,
(DatetimeIndex, PeriodIndex, TimedeltaIndex)):
@@ -546,7 +546,7 @@ def test_intersection_base(self):
if isinstance(idx, CategoricalIndex):
pass
else:
- self.assertTrue(tm.equalContents(intersect, second))
+ assert tm.equalContents(intersect, second)
# GH 10149
cases = [klass(second.values)
@@ -560,7 +560,7 @@ def test_intersection_base(self):
pass
else:
result = first.intersection(case)
- self.assertTrue(tm.equalContents(result, second))
+ assert tm.equalContents(result, second)
if isinstance(idx, MultiIndex):
msg = "other must be a MultiIndex or a list of tuples"
@@ -573,7 +573,7 @@ def test_union_base(self):
second = idx[:5]
everything = idx
union = first.union(second)
- self.assertTrue(tm.equalContents(union, everything))
+ assert tm.equalContents(union, everything)
# GH 10149
cases = [klass(second.values)
@@ -587,7 +587,7 @@ def test_union_base(self):
pass
else:
result = first.union(case)
- self.assertTrue(tm.equalContents(result, everything))
+ assert tm.equalContents(result, everything)
if isinstance(idx, MultiIndex):
msg = "other must be a MultiIndex or a list of tuples"
@@ -604,7 +604,7 @@ def test_difference_base(self):
if isinstance(idx, CategoricalIndex):
pass
else:
- self.assertTrue(tm.equalContents(result, answer))
+ assert tm.equalContents(result, answer)
# GH 10149
cases = [klass(second.values)
@@ -621,7 +621,7 @@ def test_difference_base(self):
tm.assert_numpy_array_equal(result.asi8, answer.asi8)
else:
result = first.difference(case)
- self.assertTrue(tm.equalContents(result, answer))
+ assert tm.equalContents(result, answer)
if isinstance(idx, MultiIndex):
msg = "other must be a MultiIndex or a list of tuples"
@@ -637,7 +637,7 @@ def test_symmetric_difference(self):
else:
answer = idx[[0, -1]]
result = first.symmetric_difference(second)
- self.assertTrue(tm.equalContents(result, answer))
+ assert tm.equalContents(result, answer)
# GH 10149
cases = [klass(second.values)
@@ -651,7 +651,7 @@ def test_symmetric_difference(self):
pass
else:
result = first.symmetric_difference(case)
- self.assertTrue(tm.equalContents(result, answer))
+ assert tm.equalContents(result, answer)
if isinstance(idx, MultiIndex):
msg = "other must be a MultiIndex or a list of tuples"
@@ -671,7 +671,7 @@ def test_insert_base(self):
continue
# test 0th element
- self.assertTrue(idx[0:4].equals(result.insert(0, idx[0])))
+ assert idx[0:4].equals(result.insert(0, idx[0]))
def test_delete_base(self):
@@ -686,12 +686,12 @@ def test_delete_base(self):
expected = idx[1:]
result = idx.delete(0)
- self.assertTrue(result.equals(expected))
+ assert result.equals(expected)
self.assertEqual(result.name, expected.name)
expected = idx[:-1]
result = idx.delete(-1)
- self.assertTrue(result.equals(expected))
+ assert result.equals(expected)
self.assertEqual(result.name, expected.name)
with pytest.raises((IndexError, ValueError)):
@@ -701,9 +701,9 @@ def test_delete_base(self):
def test_equals(self):
for name, idx in compat.iteritems(self.indices):
- self.assertTrue(idx.equals(idx))
- self.assertTrue(idx.equals(idx.copy()))
- self.assertTrue(idx.equals(idx.astype(object)))
+ assert idx.equals(idx)
+ assert idx.equals(idx.copy())
+ assert idx.equals(idx.astype(object))
assert not idx.equals(list(idx))
assert not idx.equals(np.array(idx))
@@ -711,8 +711,8 @@ def test_equals(self):
# Cannot pass in non-int64 dtype to RangeIndex
if not isinstance(idx, RangeIndex):
same_values = Index(idx, dtype=object)
- self.assertTrue(idx.equals(same_values))
- self.assertTrue(same_values.equals(idx))
+ assert idx.equals(same_values)
+ assert same_values.equals(idx)
if idx.nlevels == 1:
# do not test MultiIndex
@@ -865,7 +865,7 @@ def test_hasnans_isnans(self):
expected = np.array([False] * len(idx), dtype=bool)
expected[1] = True
tm.assert_numpy_array_equal(idx._isnan, expected)
- self.assertTrue(idx.hasnans)
+ assert idx.hasnans
def test_fillna(self):
# GH 11343
@@ -905,7 +905,7 @@ def test_fillna(self):
expected = np.array([False] * len(idx), dtype=bool)
expected[1] = True
tm.assert_numpy_array_equal(idx._isnan, expected)
- self.assertTrue(idx.hasnans)
+ assert idx.hasnans
def test_nulls(self):
# this is really a smoke test for the methods
@@ -936,4 +936,4 @@ def test_empty(self):
# GH 15270
index = self.create_index()
assert not index.empty
- self.assertTrue(index[:0].empty)
+ assert index[:0].empty
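The hunks above mechanically rewrite `self.assertTrue(...)` into bare `assert` statements for pytest. One pitfall of this migration is worth keeping in mind when reviewing such conversions: `self.assertTrue(a, b)` only checks the truthiness of `a` and uses `b` as the failure message, so the *faithful* rewrite is `assert a, b`, but when the original author actually meant `assertEqual(a, b)` the rewrite has to become `assert a == b`. A minimal sketch (the function names here are illustrative, not part of pandas or pytest):

```python
# Sketch: why a mechanical self.assertTrue -> assert rewrite needs care.
# self.assertTrue(a, b) only checks that a is truthy; b is the failure
# *message*. The faithful rewrite is `assert a, b`, but when the author
# actually meant assertEqual(a, b), the rewrite must be `assert a == b`.

def truthiness_check(a, b):
    # mirrors self.assertTrue(a, b): b is never compared to a
    assert a, b
    return True

def equality_check(a, b):
    # what assertEqual(a, b) would have verified
    assert a == b
    return True

# A truthy-but-unequal pair slips straight through the truthiness form:
print(truthiness_check("2001-01-02", "2001-01-01"))  # passes, no comparison
```

This is why conversions like `self.assertTrue(rs, xp)` deserve a second look rather than a purely mechanical translation.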
diff --git a/pandas/tests/indexes/datetimelike.py b/pandas/tests/indexes/datetimelike.py
index 338dba9ef6c4f..114940009377c 100644
--- a/pandas/tests/indexes/datetimelike.py
+++ b/pandas/tests/indexes/datetimelike.py
@@ -17,14 +17,14 @@ def test_str(self):
idx = self.create_index()
idx.name = 'foo'
assert not "length=%s" % len(idx) in str(idx)
- self.assertTrue("'foo'" in str(idx))
- self.assertTrue(idx.__class__.__name__ in str(idx))
+ assert "'foo'" in str(idx)
+ assert idx.__class__.__name__ in str(idx)
if hasattr(idx, 'tz'):
if idx.tz is not None:
- self.assertTrue(idx.tz in str(idx))
+ assert idx.tz in str(idx)
if hasattr(idx, 'freq'):
- self.assertTrue("freq='%s'" % idx.freqstr in str(idx))
+ assert "freq='%s'" % idx.freqstr in str(idx)
def test_view(self):
super(DatetimeLike, self).test_view()
diff --git a/pandas/tests/indexes/datetimes/test_astype.py b/pandas/tests/indexes/datetimes/test_astype.py
index 7e695164db971..35031746efebe 100644
--- a/pandas/tests/indexes/datetimes/test_astype.py
+++ b/pandas/tests/indexes/datetimes/test_astype.py
@@ -105,7 +105,7 @@ def test_astype_datetime64(self):
result = idx.astype('datetime64[ns]', copy=False)
tm.assert_index_equal(result, idx)
- self.assertTrue(result is idx)
+ assert result is idx
idx_tz = DatetimeIndex(['2016-05-16', 'NaT', NaT, np.NaN], tz='EST')
result = idx_tz.astype('datetime64[ns]')
@@ -251,7 +251,7 @@ def test_to_period_tz_explicit_pytz(self):
result = ts.to_period()[0]
expected = ts[0].to_period()
- self.assertTrue(result == expected)
+ assert result == expected
tm.assert_index_equal(ts.to_period(), xp)
ts = date_range('1/1/2000', '4/1/2000', tz=pytz.utc)
@@ -259,7 +259,7 @@ def test_to_period_tz_explicit_pytz(self):
result = ts.to_period()[0]
expected = ts[0].to_period()
- self.assertTrue(result == expected)
+ assert result == expected
tm.assert_index_equal(ts.to_period(), xp)
ts = date_range('1/1/2000', '4/1/2000', tz=tzlocal())
@@ -267,7 +267,7 @@ def test_to_period_tz_explicit_pytz(self):
result = ts.to_period()[0]
expected = ts[0].to_period()
- self.assertTrue(result == expected)
+ assert result == expected
tm.assert_index_equal(ts.to_period(), xp)
def test_to_period_tz_dateutil(self):
@@ -282,7 +282,7 @@ def test_to_period_tz_dateutil(self):
result = ts.to_period()[0]
expected = ts[0].to_period()
- self.assertTrue(result == expected)
+ assert result == expected
tm.assert_index_equal(ts.to_period(), xp)
ts = date_range('1/1/2000', '4/1/2000', tz=dateutil.tz.tzutc())
@@ -290,7 +290,7 @@ def test_to_period_tz_dateutil(self):
result = ts.to_period()[0]
expected = ts[0].to_period()
- self.assertTrue(result == expected)
+ assert result == expected
tm.assert_index_equal(ts.to_period(), xp)
ts = date_range('1/1/2000', '4/1/2000', tz=tzlocal())
@@ -298,7 +298,7 @@ def test_to_period_tz_dateutil(self):
result = ts.to_period()[0]
expected = ts[0].to_period()
- self.assertTrue(result == expected)
+ assert result == expected
tm.assert_index_equal(ts.to_period(), xp)
def test_astype_object(self):
diff --git a/pandas/tests/indexes/datetimes/test_construction.py b/pandas/tests/indexes/datetimes/test_construction.py
index 8ce2085032ca1..098d4755b385c 100644
--- a/pandas/tests/indexes/datetimes/test_construction.py
+++ b/pandas/tests/indexes/datetimes/test_construction.py
@@ -205,7 +205,7 @@ def test_construction_dti_with_mixed_timezones(self):
exp = DatetimeIndex(
[Timestamp('2011-01-01'), Timestamp('2011-01-02')], name='idx')
tm.assert_index_equal(result, exp, exact=True)
- self.assertTrue(isinstance(result, DatetimeIndex))
+ assert isinstance(result, DatetimeIndex)
# same tz results in DatetimeIndex
result = DatetimeIndex([Timestamp('2011-01-01 10:00', tz='Asia/Tokyo'),
@@ -216,7 +216,7 @@ def test_construction_dti_with_mixed_timezones(self):
Timestamp('2011-01-02 10:00')],
tz='Asia/Tokyo', name='idx')
tm.assert_index_equal(result, exp, exact=True)
- self.assertTrue(isinstance(result, DatetimeIndex))
+ assert isinstance(result, DatetimeIndex)
# same tz results in DatetimeIndex (DST)
result = DatetimeIndex([Timestamp('2011-01-01 10:00', tz='US/Eastern'),
@@ -227,7 +227,7 @@ def test_construction_dti_with_mixed_timezones(self):
Timestamp('2011-08-01 10:00')],
tz='US/Eastern', name='idx')
tm.assert_index_equal(result, exp, exact=True)
- self.assertTrue(isinstance(result, DatetimeIndex))
+ assert isinstance(result, DatetimeIndex)
# different tz coerces tz-naive to tz-aware Index(dtype=object)
result = DatetimeIndex([Timestamp('2011-01-01 10:00'),
@@ -237,7 +237,7 @@ def test_construction_dti_with_mixed_timezones(self):
Timestamp('2011-01-02 10:00')],
tz='US/Eastern', name='idx')
tm.assert_index_equal(result, exp, exact=True)
- self.assertTrue(isinstance(result, DatetimeIndex))
+ assert isinstance(result, DatetimeIndex)
# tz mismatch affecting to tz-aware raises TypeError/ValueError
@@ -491,15 +491,15 @@ def test_ctor_str_intraday(self):
def test_is_(self):
dti = DatetimeIndex(start='1/1/2005', end='12/1/2005', freq='M')
- self.assertTrue(dti.is_(dti))
- self.assertTrue(dti.is_(dti.view()))
+ assert dti.is_(dti)
+ assert dti.is_(dti.view())
assert not dti.is_(dti.copy())
def test_index_cast_datetime64_other_units(self):
arr = np.arange(0, 100, 10, dtype=np.int64).view('M8[D]')
idx = Index(arr)
- self.assertTrue((idx.values == tslib.cast_to_nanoseconds(arr)).all())
+ assert (idx.values == tslib.cast_to_nanoseconds(arr)).all()
def test_constructor_int64_nocopy(self):
# #1624
@@ -507,13 +507,13 @@ def test_constructor_int64_nocopy(self):
index = DatetimeIndex(arr)
arr[50:100] = -1
- self.assertTrue((index.asi8[50:100] == -1).all())
+ assert (index.asi8[50:100] == -1).all()
arr = np.arange(1000, dtype=np.int64)
index = DatetimeIndex(arr, copy=True)
arr[50:100] = -1
- self.assertTrue((index.asi8[50:100] != -1).all())
+ assert (index.asi8[50:100] != -1).all()
def test_from_freq_recreate_from_data(self):
freqs = ['M', 'Q', 'A', 'D', 'B', 'BH', 'T', 'S', 'L', 'U', 'H', 'N',
@@ -560,7 +560,7 @@ def test_datetimeindex_constructor_misc(self):
tm.assert_index_equal(idx7, idx8)
for other in [idx2, idx3, idx4, idx5, idx6]:
- self.assertTrue((idx1.values == other.values).all())
+ assert (idx1.values == other.values).all()
sdate = datetime(1999, 12, 25)
edate = datetime(2000, 1, 1)
diff --git a/pandas/tests/indexes/datetimes/test_date_range.py b/pandas/tests/indexes/datetimes/test_date_range.py
index e570313b716cb..6b011ad6db98e 100644
--- a/pandas/tests/indexes/datetimes/test_date_range.py
+++ b/pandas/tests/indexes/datetimes/test_date_range.py
@@ -359,19 +359,19 @@ def test_range_tz_dateutil(self):
end = datetime(2011, 1, 3, tzinfo=tz('US/Eastern'))
dr = date_range(start=start, periods=3)
- self.assertTrue(dr.tz == tz('US/Eastern'))
- self.assertTrue(dr[0] == start)
- self.assertTrue(dr[2] == end)
+ assert dr.tz == tz('US/Eastern')
+ assert dr[0] == start
+ assert dr[2] == end
dr = date_range(end=end, periods=3)
- self.assertTrue(dr.tz == tz('US/Eastern'))
- self.assertTrue(dr[0] == start)
- self.assertTrue(dr[2] == end)
+ assert dr.tz == tz('US/Eastern')
+ assert dr[0] == start
+ assert dr[2] == end
dr = date_range(start=start, end=end)
- self.assertTrue(dr.tz == tz('US/Eastern'))
- self.assertTrue(dr[0] == start)
- self.assertTrue(dr[2] == end)
+ assert dr.tz == tz('US/Eastern')
+ assert dr[0] == start
+ assert dr[2] == end
def test_range_closed(self):
begin = datetime(2011, 1, 1)
diff --git a/pandas/tests/indexes/datetimes/test_datetime.py b/pandas/tests/indexes/datetimes/test_datetime.py
index 7ba9bf53abc4d..83f9119377b19 100644
--- a/pandas/tests/indexes/datetimes/test_datetime.py
+++ b/pandas/tests/indexes/datetimes/test_datetime.py
@@ -451,17 +451,17 @@ def test_sort_values(self):
idx = DatetimeIndex(['2000-01-04', '2000-01-01', '2000-01-02'])
ordered = idx.sort_values()
- self.assertTrue(ordered.is_monotonic)
+ assert ordered.is_monotonic
ordered = idx.sort_values(ascending=False)
- self.assertTrue(ordered[::-1].is_monotonic)
+ assert ordered[::-1].is_monotonic
ordered, dexer = idx.sort_values(return_indexer=True)
- self.assertTrue(ordered.is_monotonic)
+ assert ordered.is_monotonic
tm.assert_numpy_array_equal(dexer, np.array([1, 2, 0], dtype=np.intp))
ordered, dexer = idx.sort_values(return_indexer=True, ascending=False)
- self.assertTrue(ordered[::-1].is_monotonic)
+ assert ordered[::-1].is_monotonic
tm.assert_numpy_array_equal(dexer, np.array([0, 2, 1], dtype=np.intp))
def test_take(self):
@@ -570,15 +570,15 @@ def test_append_numpy_bug_1681(self):
c = DataFrame({'A': 'foo', 'B': dr}, index=dr)
result = a.append(c)
- self.assertTrue((result['B'] == dr).all())
+ assert (result['B'] == dr).all()
def test_isin(self):
index = tm.makeDateIndex(4)
result = index.isin(index)
- self.assertTrue(result.all())
+ assert result.all()
result = index.isin(list(index))
- self.assertTrue(result.all())
+ assert result.all()
assert_almost_equal(index.isin([index[2], 5]),
np.array([False, False, True, False]))
@@ -587,13 +587,13 @@ def test_time(self):
rng = pd.date_range('1/1/2000', freq='12min', periods=10)
result = pd.Index(rng).time
expected = [t.time() for t in rng]
- self.assertTrue((result == expected).all())
+ assert (result == expected).all()
def test_date(self):
rng = pd.date_range('1/1/2000', freq='12H', periods=10)
result = pd.Index(rng).date
expected = [t.date() for t in rng]
- self.assertTrue((result == expected).all())
+ assert (result == expected).all()
def test_does_not_convert_mixed_integer(self):
df = tm.makeCustomDataframe(10, 10,
diff --git a/pandas/tests/indexes/datetimes/test_datetimelike.py b/pandas/tests/indexes/datetimes/test_datetimelike.py
index 3e6fe10223216..0eb565bf0ec55 100644
--- a/pandas/tests/indexes/datetimes/test_datetimelike.py
+++ b/pandas/tests/indexes/datetimes/test_datetimelike.py
@@ -49,13 +49,13 @@ def test_intersection(self):
first = self.index
second = self.index[5:]
intersect = first.intersection(second)
- self.assertTrue(tm.equalContents(intersect, second))
+ assert tm.equalContents(intersect, second)
# GH 10149
cases = [klass(second.values) for klass in [np.array, Series, list]]
for case in cases:
result = first.intersection(case)
- self.assertTrue(tm.equalContents(result, second))
+ assert tm.equalContents(result, second)
third = Index(['a', 'b', 'c'])
result = first.intersection(third)
@@ -67,10 +67,10 @@ def test_union(self):
second = self.index[5:]
everything = self.index
union = first.union(second)
- self.assertTrue(tm.equalContents(union, everything))
+ assert tm.equalContents(union, everything)
# GH 10149
cases = [klass(second.values) for klass in [np.array, Series, list]]
for case in cases:
result = first.union(case)
- self.assertTrue(tm.equalContents(result, everything))
+ assert tm.equalContents(result, everything)
diff --git a/pandas/tests/indexes/datetimes/test_misc.py b/pandas/tests/indexes/datetimes/test_misc.py
index 22e77eebec06b..55165aa39a1a4 100644
--- a/pandas/tests/indexes/datetimes/test_misc.py
+++ b/pandas/tests/indexes/datetimes/test_misc.py
@@ -166,7 +166,7 @@ def test_normalize(self):
"datetime64[ns]"))
tm.assert_index_equal(rng_ns_normalized, expected)
- self.assertTrue(result.is_normalized)
+ assert result.is_normalized
assert not rng.is_normalized
diff --git a/pandas/tests/indexes/datetimes/test_ops.py b/pandas/tests/indexes/datetimes/test_ops.py
index 7e42e5e3db7ef..fa1b2c0d7c68d 100644
--- a/pandas/tests/indexes/datetimes/test_ops.py
+++ b/pandas/tests/indexes/datetimes/test_ops.py
@@ -59,7 +59,7 @@ def test_asobject_tolist(self):
Timestamp('2013-04-30')]
expected = pd.Index(expected_list, dtype=object, name='idx')
result = idx.asobject
- self.assertTrue(isinstance(result, Index))
+ assert isinstance(result, Index)
self.assertEqual(result.dtype, object)
tm.assert_index_equal(result, expected)
@@ -74,7 +74,7 @@ def test_asobject_tolist(self):
Timestamp('2013-04-30', tz='Asia/Tokyo')]
expected = pd.Index(expected_list, dtype=object, name='idx')
result = idx.asobject
- self.assertTrue(isinstance(result, Index))
+ assert isinstance(result, Index)
self.assertEqual(result.dtype, object)
tm.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
@@ -87,7 +87,7 @@ def test_asobject_tolist(self):
Timestamp('2013-01-04')]
expected = pd.Index(expected_list, dtype=object, name='idx')
result = idx.asobject
- self.assertTrue(isinstance(result, Index))
+ assert isinstance(result, Index)
self.assertEqual(result.dtype, object)
tm.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
@@ -98,7 +98,7 @@ def test_minmax(self):
# monotonic
idx1 = pd.DatetimeIndex(['2011-01-01', '2011-01-02',
'2011-01-03'], tz=tz)
- self.assertTrue(idx1.is_monotonic)
+ assert idx1.is_monotonic
# non-monotonic
idx2 = pd.DatetimeIndex(['2011-01-01', pd.NaT, '2011-01-03',
@@ -114,13 +114,13 @@ def test_minmax(self):
for op in ['min', 'max']:
# Return NaT
obj = DatetimeIndex([])
- self.assertTrue(pd.isnull(getattr(obj, op)()))
+ assert pd.isnull(getattr(obj, op)())
obj = DatetimeIndex([pd.NaT])
- self.assertTrue(pd.isnull(getattr(obj, op)()))
+ assert pd.isnull(getattr(obj, op)())
obj = DatetimeIndex([pd.NaT, pd.NaT, pd.NaT])
- self.assertTrue(pd.isnull(getattr(obj, op)()))
+ assert pd.isnull(getattr(obj, op)())
def test_numpy_minmax(self):
dr = pd.date_range(start='2016-01-15', end='2016-01-20')
@@ -886,7 +886,7 @@ def test_nat(self):
for tz in [None, 'US/Eastern', 'UTC']:
idx = pd.DatetimeIndex(['2011-01-01', '2011-01-02'], tz=tz)
- self.assertTrue(idx._can_hold_na)
+ assert idx._can_hold_na
tm.assert_numpy_array_equal(idx._isnan, np.array([False, False]))
assert not idx.hasnans
@@ -894,10 +894,10 @@ def test_nat(self):
np.array([], dtype=np.intp))
idx = pd.DatetimeIndex(['2011-01-01', 'NaT'], tz=tz)
- self.assertTrue(idx._can_hold_na)
+ assert idx._can_hold_na
tm.assert_numpy_array_equal(idx._isnan, np.array([False, True]))
- self.assertTrue(idx.hasnans)
+ assert idx.hasnans
tm.assert_numpy_array_equal(idx._nan_idxs,
np.array([1], dtype=np.intp))
@@ -905,11 +905,11 @@ def test_equals(self):
# GH 13107
for tz in [None, 'UTC', 'US/Eastern', 'Asia/Tokyo']:
idx = pd.DatetimeIndex(['2011-01-01', '2011-01-02', 'NaT'])
- self.assertTrue(idx.equals(idx))
- self.assertTrue(idx.equals(idx.copy()))
- self.assertTrue(idx.equals(idx.asobject))
- self.assertTrue(idx.asobject.equals(idx))
- self.assertTrue(idx.asobject.equals(idx.asobject))
+ assert idx.equals(idx)
+ assert idx.equals(idx.copy())
+ assert idx.equals(idx.asobject)
+ assert idx.asobject.equals(idx)
+ assert idx.asobject.equals(idx.asobject)
assert not idx.equals(list(idx))
assert not idx.equals(pd.Series(idx))
@@ -1118,7 +1118,7 @@ def test_comparison(self):
d = self.rng[10]
comp = self.rng > d
- self.assertTrue(comp[11])
+ assert comp[11]
assert not comp[9]
def test_pickle_unpickle(self):
@@ -1194,18 +1194,18 @@ def test_equals(self):
def test_identical(self):
t1 = self.rng.copy()
t2 = self.rng.copy()
- self.assertTrue(t1.identical(t2))
+ assert t1.identical(t2)
# name
t1 = t1.rename('foo')
- self.assertTrue(t1.equals(t2))
+ assert t1.equals(t2)
assert not t1.identical(t2)
t2 = t2.rename('foo')
- self.assertTrue(t1.identical(t2))
+ assert t1.identical(t2)
# freq
t2v = Index(t2.values)
- self.assertTrue(t1.equals(t2v))
+ assert t1.equals(t2v)
assert not t1.identical(t2v)
@@ -1218,7 +1218,7 @@ def test_comparison(self):
d = self.rng[10]
comp = self.rng > d
- self.assertTrue(comp[11])
+ assert comp[11]
assert not comp[9]
def test_copy(self):
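The `test_identical` / `test_equals` / `test_is_` hunks above exercise three distinct notions of index sameness: `is_` (same underlying object, views included), `equals` (element-wise equality only), and `identical` (equality plus matching metadata such as `name`). A toy sketch of the two weaker tiers (the `ToyIndex` class is illustrative only, not the pandas API):

```python
# Toy illustration of the equals vs identical distinction these tests
# exercise. Names here are hypothetical, not part of pandas.
class ToyIndex:
    def __init__(self, values, name=None):
        self.values = list(values)
        self.name = name

    def equals(self, other):
        # element-wise equality only; metadata is ignored
        return self.values == other.values

    def identical(self, other):
        # equality plus metadata (here just the name) must match
        return self.equals(other) and self.name == other.name

t1 = ToyIndex([1, 2, 3], name='foo')
t2 = ToyIndex([1, 2, 3], name=None)
print(t1.equals(t2))     # True: same elements
print(t1.identical(t2))  # False: names differ
```

This mirrors the pattern in the diff where `t1.rename('foo')` keeps `t1.equals(t2)` true while `t1.identical(t2)` becomes false.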
diff --git a/pandas/tests/indexes/datetimes/test_setops.py b/pandas/tests/indexes/datetimes/test_setops.py
index 84a1adce2c0aa..6612ab844b849 100644
--- a/pandas/tests/indexes/datetimes/test_setops.py
+++ b/pandas/tests/indexes/datetimes/test_setops.py
@@ -196,7 +196,7 @@ def test_join_nonunique(self):
idx2 = to_datetime(['2012-11-06 15:11:09.006507',
'2012-11-06 15:11:09.006507'])
rs = idx1.join(idx2, how='outer')
- self.assertTrue(rs.is_monotonic)
+ assert rs.is_monotonic
class TestBusinessDatetimeIndex(tm.TestCase):
diff --git a/pandas/tests/indexes/datetimes/test_tools.py b/pandas/tests/indexes/datetimes/test_tools.py
index 941c9767e7a3a..4c32f41db207c 100644
--- a/pandas/tests/indexes/datetimes/test_tools.py
+++ b/pandas/tests/indexes/datetimes/test_tools.py
@@ -296,7 +296,7 @@ def test_to_datetime_tz_psycopg2(self):
i = pd.DatetimeIndex([
'2000-01-01 08:00:00+00:00'
], tz=psycopg2.tz.FixedOffsetTimezone(offset=-300, name=None))
- self.assertTrue(is_datetime64_ns_dtype(i))
+ assert is_datetime64_ns_dtype(i)
# tz coercion
result = pd.to_datetime(i, errors='coerce')
@@ -311,11 +311,11 @@ def test_datetime_bool(self):
# GH13176
with pytest.raises(TypeError):
to_datetime(False)
- self.assertTrue(to_datetime(False, errors="coerce") is NaT)
+ assert to_datetime(False, errors="coerce") is NaT
self.assertEqual(to_datetime(False, errors="ignore"), False)
with pytest.raises(TypeError):
to_datetime(True)
- self.assertTrue(to_datetime(True, errors="coerce") is NaT)
+ assert to_datetime(True, errors="coerce") is NaT
self.assertEqual(to_datetime(True, errors="ignore"), True)
with pytest.raises(TypeError):
to_datetime([False, datetime.today()])
@@ -626,7 +626,7 @@ def test_to_datetime_iso8601(self):
def test_to_datetime_default(self):
rs = to_datetime('2001')
xp = datetime(2001, 1, 1)
- self.assertTrue(rs, xp)
+ assert rs == xp
# dayfirst is essentially broken
@@ -684,7 +684,7 @@ def test_to_datetime_types(self):
assert result is NaT
result = to_datetime(['', ''])
- self.assertTrue(isnull(result).all())
+ assert isnull(result).all()
# ints
result = Timestamp(0)
@@ -889,7 +889,7 @@ def test_guess_datetime_format_invalid_inputs(self):
]
for invalid_dt in invalid_dts:
- self.assertTrue(tools._guess_datetime_format(invalid_dt) is None)
+ assert tools._guess_datetime_format(invalid_dt) is None
def test_guess_datetime_format_nopadding(self):
# GH 11142
@@ -926,7 +926,7 @@ def test_guess_datetime_format_for_array(self):
format_for_string_of_nans = tools._guess_datetime_format_for_array(
np.array(
[np.nan, np.nan, np.nan], dtype='O'))
- self.assertTrue(format_for_string_of_nans is None)
+ assert format_for_string_of_nans is None
class TestToDatetimeInferFormat(tm.TestCase):
@@ -993,13 +993,13 @@ class TestDaysInMonth(tm.TestCase):
# tests for issue #10154
def test_day_not_in_month_coerce(self):
- self.assertTrue(isnull(to_datetime('2015-02-29', errors='coerce')))
- self.assertTrue(isnull(to_datetime('2015-02-29', format="%Y-%m-%d",
- errors='coerce')))
- self.assertTrue(isnull(to_datetime('2015-02-32', format="%Y-%m-%d",
- errors='coerce')))
- self.assertTrue(isnull(to_datetime('2015-04-31', format="%Y-%m-%d",
- errors='coerce')))
+ assert isnull(to_datetime('2015-02-29', errors='coerce'))
+ assert isnull(to_datetime('2015-02-29', format="%Y-%m-%d",
+ errors='coerce'))
+ assert isnull(to_datetime('2015-02-32', format="%Y-%m-%d",
+ errors='coerce'))
+ assert isnull(to_datetime('2015-04-31', format="%Y-%m-%d",
+ errors='coerce'))
def test_day_not_in_month_raise(self):
pytest.raises(ValueError, to_datetime, '2015-02-29',
@@ -1037,8 +1037,7 @@ def test_does_not_convert_mixed_integer(self):
'1-1', )
for good_date_string in good_date_strings:
- self.assertTrue(tslib._does_string_look_like_datetime(
- good_date_string))
+ assert tslib._does_string_look_like_datetime(good_date_string)
def test_parsers(self):
@@ -1129,10 +1128,10 @@ def test_parsers(self):
result2 = to_datetime('NaT')
result3 = Timestamp('NaT')
result4 = DatetimeIndex(['NaT'])[0]
- self.assertTrue(result1 is tslib.NaT)
- self.assertTrue(result1 is tslib.NaT)
- self.assertTrue(result1 is tslib.NaT)
- self.assertTrue(result1 is tslib.NaT)
+ assert result1 is tslib.NaT
+ assert result2 is tslib.NaT
+ assert result3 is tslib.NaT
+ assert result4 is tslib.NaT
def test_parsers_quarter_invalid(self):
@@ -1388,7 +1387,7 @@ def test_try_parse_dates(self):
result = lib.try_parse_dates(arr, dayfirst=True)
expected = [parse(d, dayfirst=True) for d in arr]
- self.assertTrue(np.array_equal(result, expected))
+ assert np.array_equal(result, expected)
def test_parsing_valid_dates(self):
arr = np.array(['01-01-2013', '01-02-2013'], dtype=object)
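The `TestDaysInMonth` hunks above assert that impossible dates such as `'2015-02-29'` and `'2015-04-31'` coerce to `NaT` under `errors='coerce'`. The underlying calendar-validity check can be sketched with the standard library alone (the helper name is illustrative, not a pandas function):

```python
from datetime import datetime

def is_valid_date(s, fmt="%Y-%m-%d"):
    # datetime.strptime raises ValueError for impossible dates,
    # e.g. Feb 29 in a non-leap year or day 32 of any month; this is
    # the kind of check that makes errors='coerce' produce NaT.
    try:
        datetime.strptime(s, fmt)
        return True
    except ValueError:
        return False

print(is_valid_date("2015-02-29"))  # False: 2015 is not a leap year
print(is_valid_date("2016-02-29"))  # True: leap year
print(is_valid_date("2015-04-31"))  # False: April has 30 days
```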
diff --git a/pandas/tests/indexes/period/test_construction.py b/pandas/tests/indexes/period/test_construction.py
index 434271cbe22ec..6ab42f14efae6 100644
--- a/pandas/tests/indexes/period/test_construction.py
+++ b/pandas/tests/indexes/period/test_construction.py
@@ -135,15 +135,15 @@ def test_constructor_fromarraylike(self):
result = PeriodIndex(idx, freq=offsets.MonthEnd())
tm.assert_index_equal(result, idx)
- self.assertTrue(result.freq, 'M')
+ assert result.freq == 'M'
result = PeriodIndex(idx, freq='2M')
tm.assert_index_equal(result, idx.asfreq('2M'))
- self.assertTrue(result.freq, '2M')
+ assert result.freq == '2M'
result = PeriodIndex(idx, freq=offsets.MonthEnd(2))
tm.assert_index_equal(result, idx.asfreq('2M'))
- self.assertTrue(result.freq, '2M')
+ assert result.freq == '2M'
result = PeriodIndex(idx, freq='D')
exp = idx.asfreq('D', 'e')
@@ -405,13 +405,13 @@ def test_constructor(self):
end_intv = Period('2006-12-31', '1w')
i2 = PeriodIndex(end=end_intv, periods=10)
self.assertEqual(len(i1), len(i2))
- self.assertTrue((i1 == i2).all())
+ assert (i1 == i2).all()
self.assertEqual(i1.freq, i2.freq)
end_intv = Period('2006-12-31', ('w', 1))
i2 = PeriodIndex(end=end_intv, periods=10)
self.assertEqual(len(i1), len(i2))
- self.assertTrue((i1 == i2).all())
+ assert (i1 == i2).all()
self.assertEqual(i1.freq, i2.freq)
end_intv = Period('2005-05-01', 'B')
@@ -467,7 +467,7 @@ def test_map_with_string_constructor(self):
assert isinstance(res, Index)
# preserve element types
- self.assertTrue(all(isinstance(resi, t) for resi in res))
+ assert all(isinstance(resi, t) for resi in res)
# lastly, values should compare equal
tm.assert_index_equal(res, expected)
diff --git a/pandas/tests/indexes/period/test_indexing.py b/pandas/tests/indexes/period/test_indexing.py
index 7af9e9ae3b14c..cf5f741fb09ed 100644
--- a/pandas/tests/indexes/period/test_indexing.py
+++ b/pandas/tests/indexes/period/test_indexing.py
@@ -81,7 +81,7 @@ def test_getitem_partial(self):
pytest.raises(KeyError, ts.__getitem__, '2006')
result = ts['2008']
- self.assertTrue((result.index.year == 2008).all())
+ assert (result.index.year == 2008).all()
result = ts['2008':'2009']
self.assertEqual(len(result), 24)
diff --git a/pandas/tests/indexes/period/test_ops.py b/pandas/tests/indexes/period/test_ops.py
index f133845f8404a..af377c1b69922 100644
--- a/pandas/tests/indexes/period/test_ops.py
+++ b/pandas/tests/indexes/period/test_ops.py
@@ -37,7 +37,7 @@ def test_asobject_tolist(self):
pd.Period('2013-04-30', freq='M')]
expected = pd.Index(expected_list, dtype=object, name='idx')
result = idx.asobject
- self.assertTrue(isinstance(result, Index))
+ assert isinstance(result, Index)
self.assertEqual(result.dtype, object)
tm.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
@@ -51,7 +51,7 @@ def test_asobject_tolist(self):
pd.Period('2013-01-04', freq='D')]
expected = pd.Index(expected_list, dtype=object, name='idx')
result = idx.asobject
- self.assertTrue(isinstance(result, Index))
+ assert isinstance(result, Index)
self.assertEqual(result.dtype, object)
tm.assert_index_equal(result, expected)
for i in [0, 1, 3]:
@@ -69,7 +69,7 @@ def test_minmax(self):
# monotonic
idx1 = pd.PeriodIndex([pd.NaT, '2011-01-01', '2011-01-02',
'2011-01-03'], freq='D')
- self.assertTrue(idx1.is_monotonic)
+ assert idx1.is_monotonic
# non-monotonic
idx2 = pd.PeriodIndex(['2011-01-01', pd.NaT, '2011-01-03',
@@ -803,7 +803,7 @@ def test_nat(self):
assert pd.PeriodIndex([], freq='M')._na_value is pd.NaT
idx = pd.PeriodIndex(['2011-01-01', '2011-01-02'], freq='D')
- self.assertTrue(idx._can_hold_na)
+ assert idx._can_hold_na
tm.assert_numpy_array_equal(idx._isnan, np.array([False, False]))
assert not idx.hasnans
@@ -811,10 +811,10 @@ def test_nat(self):
np.array([], dtype=np.intp))
idx = pd.PeriodIndex(['2011-01-01', 'NaT'], freq='D')
- self.assertTrue(idx._can_hold_na)
+ assert idx._can_hold_na
tm.assert_numpy_array_equal(idx._isnan, np.array([False, True]))
- self.assertTrue(idx.hasnans)
+ assert idx.hasnans
tm.assert_numpy_array_equal(idx._nan_idxs,
np.array([1], dtype=np.intp))
@@ -823,11 +823,11 @@ def test_equals(self):
for freq in ['D', 'M']:
idx = pd.PeriodIndex(['2011-01-01', '2011-01-02', 'NaT'],
freq=freq)
- self.assertTrue(idx.equals(idx))
- self.assertTrue(idx.equals(idx.copy()))
- self.assertTrue(idx.equals(idx.asobject))
- self.assertTrue(idx.asobject.equals(idx))
- self.assertTrue(idx.asobject.equals(idx.asobject))
+ assert idx.equals(idx)
+ assert idx.equals(idx.copy())
+ assert idx.equals(idx.asobject)
+ assert idx.asobject.equals(idx)
+ assert idx.asobject.equals(idx.asobject)
assert not idx.equals(list(idx))
assert not idx.equals(pd.Series(idx))
diff --git a/pandas/tests/indexes/period/test_period.py b/pandas/tests/indexes/period/test_period.py
index df3f6023a6506..8ee3e9d6707b4 100644
--- a/pandas/tests/indexes/period/test_period.py
+++ b/pandas/tests/indexes/period/test_period.py
@@ -319,13 +319,13 @@ def test_period_index_length(self):
end_intv = Period('2006-12-31', '1w')
i2 = PeriodIndex(end=end_intv, periods=10)
self.assertEqual(len(i1), len(i2))
- self.assertTrue((i1 == i2).all())
+ assert (i1 == i2).all()
self.assertEqual(i1.freq, i2.freq)
end_intv = Period('2006-12-31', ('w', 1))
i2 = PeriodIndex(end=end_intv, periods=10)
self.assertEqual(len(i1), len(i2))
- self.assertTrue((i1 == i2).all())
+ assert (i1 == i2).all()
self.assertEqual(i1.freq, i2.freq)
try:
@@ -511,7 +511,7 @@ def test_comp_period(self):
def test_contains(self):
rng = period_range('2007-01', freq='M', periods=10)
- self.assertTrue(Period('2007-01', freq='M') in rng)
+ assert Period('2007-01', freq='M') in rng
assert not Period('2007-01', freq='D') in rng
assert not Period('2007-01', freq='2M') in rng
@@ -524,10 +524,10 @@ def test_contains_nat(self):
assert np.nan not in idx
idx = pd.PeriodIndex(['2011-01', 'NaT', '2011-02'], freq='M')
- self.assertTrue(pd.NaT in idx)
- self.assertTrue(None in idx)
- self.assertTrue(float('nan') in idx)
- self.assertTrue(np.nan in idx)
+ assert pd.NaT in idx
+ assert None in idx
+ assert float('nan') in idx
+ assert np.nan in idx
def test_periods_number_check(self):
with pytest.raises(ValueError):
@@ -552,7 +552,7 @@ def test_index_duplicate_periods(self):
expected = ts[1:3]
tm.assert_series_equal(result, expected)
result[:] = 1
- self.assertTrue((ts[1:3] == 1).all())
+ assert (ts[1:3] == 1).all()
# not monotonic
idx = PeriodIndex([2000, 2007, 2007, 2009, 2007], freq='A-JUN')
@@ -712,18 +712,18 @@ def test_is_full(self):
assert not index.is_full
index = PeriodIndex([2005, 2006, 2007], freq='A')
- self.assertTrue(index.is_full)
+ assert index.is_full
index = PeriodIndex([2005, 2005, 2007], freq='A')
assert not index.is_full
index = PeriodIndex([2005, 2005, 2006], freq='A')
- self.assertTrue(index.is_full)
+ assert index.is_full
index = PeriodIndex([2006, 2005, 2005], freq='A')
pytest.raises(ValueError, getattr, index, 'is_full')
- self.assertTrue(index[:0].is_full)
+ assert index[:0].is_full
def test_with_multi_index(self):
# #1705
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index 2f07cf3c8270f..8ac1ef3e1911b 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -411,7 +411,7 @@ def test_astype(self):
def test_equals_object(self):
# same
- self.assertTrue(Index(['a', 'b', 'c']).equals(Index(['a', 'b', 'c'])))
+ assert Index(['a', 'b', 'c']).equals(Index(['a', 'b', 'c']))
# different length
assert not Index(['a', 'b', 'c']).equals(Index(['a', 'b']))
@@ -466,14 +466,14 @@ def test_identical(self):
i1 = Index(['a', 'b', 'c'])
i2 = Index(['a', 'b', 'c'])
- self.assertTrue(i1.identical(i2))
+ assert i1.identical(i2)
i1 = i1.rename('foo')
- self.assertTrue(i1.equals(i2))
+ assert i1.equals(i2)
assert not i1.identical(i2)
i2 = i2.rename('foo')
- self.assertTrue(i1.identical(i2))
+ assert i1.identical(i2)
i3 = Index([('a', 'a'), ('a', 'b'), ('b', 'a')])
i4 = Index([('a', 'a'), ('a', 'b'), ('b', 'a')], tupleize_cols=False)
@@ -481,8 +481,8 @@ def test_identical(self):
def test_is_(self):
ind = Index(range(10))
- self.assertTrue(ind.is_(ind))
- self.assertTrue(ind.is_(ind.view().view().view().view()))
+ assert ind.is_(ind)
+ assert ind.is_(ind.view().view().view().view())
assert not ind.is_(Index(range(10)))
assert not ind.is_(ind.copy())
assert not ind.is_(ind.copy(deep=False))
@@ -491,11 +491,11 @@ def test_is_(self):
assert not ind.is_(np.array(range(10)))
# quasi-implementation dependent
- self.assertTrue(ind.is_(ind.view()))
+ assert ind.is_(ind.view())
ind2 = ind.view()
ind2.name = 'bob'
- self.assertTrue(ind.is_(ind2))
- self.assertTrue(ind2.is_(ind))
+ assert ind.is_(ind2)
+ assert ind2.is_(ind)
# doesn't matter if Indices are *actually* views of underlying data,
assert not ind.is_(Index(ind.values))
arr = np.array(range(1, 11))
@@ -506,7 +506,7 @@ def test_is_(self):
def test_asof(self):
d = self.dateIndex[0]
self.assertEqual(self.dateIndex.asof(d), d)
- self.assertTrue(isnull(self.dateIndex.asof(d - timedelta(1))))
+ assert isnull(self.dateIndex.asof(d - timedelta(1)))
d = self.dateIndex[-1]
self.assertEqual(self.dateIndex.asof(d + timedelta(1)), d)
@@ -585,9 +585,9 @@ def test_empty_fancy(self):
for idx in [self.strIndex, self.intIndex, self.floatIndex]:
empty_idx = idx.__class__([])
- self.assertTrue(idx[[]].identical(empty_idx))
- self.assertTrue(idx[empty_iarr].identical(empty_idx))
- self.assertTrue(idx[empty_barr].identical(empty_idx))
+ assert idx[[]].identical(empty_idx)
+ assert idx[empty_iarr].identical(empty_idx)
+ assert idx[empty_barr].identical(empty_idx)
# np.ndarray only accepts ndarray of int & bool dtypes, so should
# Index.
@@ -604,7 +604,7 @@ def test_intersection(self):
first = self.strIndex[:20]
second = self.strIndex[:10]
intersect = first.intersection(second)
- self.assertTrue(tm.equalContents(intersect, second))
+ assert tm.equalContents(intersect, second)
# Corner cases
inter = first.intersection(first)
@@ -671,13 +671,13 @@ def test_union(self):
second = self.strIndex[:10]
everything = self.strIndex[:20]
union = first.union(second)
- self.assertTrue(tm.equalContents(union, everything))
+ assert tm.equalContents(union, everything)
# GH 10149
cases = [klass(second.values) for klass in [np.array, Series, list]]
for case in cases:
result = first.union(case)
- self.assertTrue(tm.equalContents(result, everything))
+ assert tm.equalContents(result, everything)
# Corner cases
union = first.union(first)
@@ -753,8 +753,8 @@ def test_union(self):
else:
appended = np.append(self.strIndex, self.dateIndex.astype('O'))
- self.assertTrue(tm.equalContents(firstCat, appended))
- self.assertTrue(tm.equalContents(secondCat, self.strIndex))
+ assert tm.equalContents(firstCat, appended)
+ assert tm.equalContents(secondCat, self.strIndex)
tm.assert_contains_all(self.strIndex, firstCat)
tm.assert_contains_all(self.strIndex, secondCat)
tm.assert_contains_all(self.dateIndex, firstCat)
@@ -871,7 +871,7 @@ def test_difference(self):
# different names
result = first.difference(second)
- self.assertTrue(tm.equalContents(result, answer))
+ assert tm.equalContents(result, answer)
self.assertEqual(result.name, None)
# same names
@@ -881,7 +881,7 @@ def test_difference(self):
# with empty
result = first.difference([])
- self.assertTrue(tm.equalContents(result, first))
+ assert tm.equalContents(result, first)
self.assertEqual(result.name, first.name)
# with everythin
@@ -895,12 +895,12 @@ def test_symmetric_difference(self):
idx2 = Index([2, 3, 4, 5])
result = idx1.symmetric_difference(idx2)
expected = Index([1, 5])
- self.assertTrue(tm.equalContents(result, expected))
+ assert tm.equalContents(result, expected)
assert result.name is None
# __xor__ syntax
expected = idx1 ^ idx2
- self.assertTrue(tm.equalContents(result, expected))
+ assert tm.equalContents(result, expected)
assert result.name is None
# multiIndex
@@ -908,7 +908,7 @@ def test_symmetric_difference(self):
idx2 = MultiIndex.from_tuples([('foo', 1), ('bar', 3)])
result = idx1.symmetric_difference(idx2)
expected = MultiIndex.from_tuples([('bar', 2), ('baz', 3), ('bar', 3)])
- self.assertTrue(tm.equalContents(result, expected))
+ assert tm.equalContents(result, expected)
# nans:
# GH 13514 change: {nan} - {nan} == {}
@@ -930,30 +930,30 @@ def test_symmetric_difference(self):
idx2 = np.array([2, 3, 4, 5])
expected = Index([1, 5])
result = idx1.symmetric_difference(idx2)
- self.assertTrue(tm.equalContents(result, expected))
+ assert tm.equalContents(result, expected)
self.assertEqual(result.name, 'idx1')
result = idx1.symmetric_difference(idx2, result_name='new_name')
- self.assertTrue(tm.equalContents(result, expected))
+ assert tm.equalContents(result, expected)
self.assertEqual(result.name, 'new_name')
def test_is_numeric(self):
assert not self.dateIndex.is_numeric()
assert not self.strIndex.is_numeric()
- self.assertTrue(self.intIndex.is_numeric())
- self.assertTrue(self.floatIndex.is_numeric())
+ assert self.intIndex.is_numeric()
+ assert self.floatIndex.is_numeric()
assert not self.catIndex.is_numeric()
def test_is_object(self):
- self.assertTrue(self.strIndex.is_object())
- self.assertTrue(self.boolIndex.is_object())
+ assert self.strIndex.is_object()
+ assert self.boolIndex.is_object()
assert not self.catIndex.is_object()
assert not self.intIndex.is_object()
assert not self.dateIndex.is_object()
assert not self.floatIndex.is_object()
def test_is_all_dates(self):
- self.assertTrue(self.dateIndex.is_all_dates)
+ assert self.dateIndex.is_all_dates
assert not self.strIndex.is_all_dates
assert not self.intIndex.is_all_dates
@@ -1475,17 +1475,16 @@ def test_str_attribute(self):
def test_tab_completion(self):
# GH 9910
idx = Index(list('abcd'))
- self.assertTrue('str' in dir(idx))
+ assert 'str' in dir(idx)
idx = Index(range(4))
- self.assertTrue('str' not in dir(idx))
+ assert 'str' not in dir(idx)
def test_indexing_doesnt_change_class(self):
idx = Index([1, 2, 3, 'a', 'b', 'c'])
- self.assertTrue(idx[1:3].identical(pd.Index([2, 3], dtype=np.object_)))
- self.assertTrue(idx[[0, 1]].identical(pd.Index(
- [1, 2], dtype=np.object_)))
+ assert idx[1:3].identical(pd.Index([2, 3], dtype=np.object_))
+ assert idx[[0, 1]].identical(pd.Index([1, 2], dtype=np.object_))
def test_outer_join_sort(self):
left_idx = Index(np.random.permutation(15))
@@ -1876,19 +1875,19 @@ def test_copy_name2(self):
idx = pd.Index([1, 2], name='MyName')
idx1 = idx.copy()
- self.assertTrue(idx.equals(idx1))
+ assert idx.equals(idx1)
self.assertEqual(idx.name, 'MyName')
self.assertEqual(idx1.name, 'MyName')
idx2 = idx.copy(name='NewName')
- self.assertTrue(idx.equals(idx2))
+ assert idx.equals(idx2)
self.assertEqual(idx.name, 'MyName')
self.assertEqual(idx2.name, 'NewName')
idx3 = idx.copy(names=['NewName'])
- self.assertTrue(idx.equals(idx3))
+ assert idx.equals(idx3)
self.assertEqual(idx.name, 'MyName')
self.assertEqual(idx.names, ['MyName'])
self.assertEqual(idx3.name, 'NewName')
@@ -1918,10 +1917,10 @@ def test_union_base(self):
with tm.assert_produces_warning(RuntimeWarning):
# unorderable types
result = first.union(case)
- self.assertTrue(tm.equalContents(result, idx))
+ assert tm.equalContents(result, idx)
else:
result = first.union(case)
- self.assertTrue(tm.equalContents(result, idx))
+ assert tm.equalContents(result, idx)
def test_intersection_base(self):
# (same results for py2 and py3 but sortedness not tested elsewhere)
@@ -1937,7 +1936,7 @@ def test_intersection_base(self):
for klass in [np.array, Series, list]]
for case in cases:
result = first.intersection(case)
- self.assertTrue(tm.equalContents(result, second))
+ assert tm.equalContents(result, second)
def test_difference_base(self):
# (same results for py2 and py3 but sortedness not tested elsewhere)
@@ -2037,8 +2036,8 @@ def test_is_monotonic_na(self):
def test_repr_summary(self):
with cf.option_context('display.max_seq_items', 10):
r = repr(pd.Index(np.arange(1000)))
- self.assertTrue(len(r) < 200)
- self.assertTrue("..." in r)
+ assert len(r) < 200
+ assert "..." in r
def test_int_name_format(self):
index = Index(['a', 'b', 'c'], name=0)
diff --git a/pandas/tests/indexes/test_category.py b/pandas/tests/indexes/test_category.py
index 5c9df55d2b508..7b2d27c9b51a4 100644
--- a/pandas/tests/indexes/test_category.py
+++ b/pandas/tests/indexes/test_category.py
@@ -177,10 +177,10 @@ def test_contains(self):
ci = self.create_index(categories=list('cabdef'))
- self.assertTrue('a' in ci)
- self.assertTrue('z' not in ci)
- self.assertTrue('e' not in ci)
- self.assertTrue(np.nan not in ci)
+ assert 'a' in ci
+ assert 'z' not in ci
+ assert 'e' not in ci
+ assert np.nan not in ci
# assert codes NOT in index
assert 0 not in ci
@@ -188,7 +188,7 @@ def test_contains(self):
ci = CategoricalIndex(
list('aabbca') + [np.nan], categories=list('cabdef'))
- self.assertTrue(np.nan in ci)
+ assert np.nan in ci
def test_min_max(self):
@@ -424,7 +424,7 @@ def test_duplicates(self):
idx = CategoricalIndex([0, 0, 0], name='foo')
assert not idx.is_unique
- self.assertTrue(idx.has_duplicates)
+ assert idx.has_duplicates
expected = CategoricalIndex([0], name='foo')
tm.assert_index_equal(idx.drop_duplicates(), expected)
@@ -537,8 +537,8 @@ def test_identical(self):
ci1 = CategoricalIndex(['a', 'b'], categories=['a', 'b'], ordered=True)
ci2 = CategoricalIndex(['a', 'b'], categories=['a', 'b', 'c'],
ordered=True)
- self.assertTrue(ci1.identical(ci1))
- self.assertTrue(ci1.identical(ci1.copy()))
+ assert ci1.identical(ci1)
+ assert ci1.identical(ci1.copy())
assert not ci1.identical(ci2)
def test_ensure_copied_data(self):
@@ -562,21 +562,21 @@ def test_equals_categorical(self):
ci2 = CategoricalIndex(['a', 'b'], categories=['a', 'b', 'c'],
ordered=True)
- self.assertTrue(ci1.equals(ci1))
+ assert ci1.equals(ci1)
assert not ci1.equals(ci2)
- self.assertTrue(ci1.equals(ci1.astype(object)))
- self.assertTrue(ci1.astype(object).equals(ci1))
+ assert ci1.equals(ci1.astype(object))
+ assert ci1.astype(object).equals(ci1)
- self.assertTrue((ci1 == ci1).all())
+ assert (ci1 == ci1).all()
assert not (ci1 != ci1).all()
assert not (ci1 > ci1).all()
assert not (ci1 < ci1).all()
- self.assertTrue((ci1 <= ci1).all())
- self.assertTrue((ci1 >= ci1).all())
+ assert (ci1 <= ci1).all()
+ assert (ci1 >= ci1).all()
assert not (ci1 == 1).all()
- self.assertTrue((ci1 == Index(['a', 'b'])).all())
- self.assertTrue((ci1 == ci1.values).all())
+ assert (ci1 == Index(['a', 'b'])).all()
+ assert (ci1 == ci1.values).all()
# invalid comparisons
with tm.assert_raises_regex(ValueError, "Lengths must match"):
@@ -593,19 +593,19 @@ def test_equals_categorical(self):
ci = CategoricalIndex(list('aabca'), categories=['c', 'a', 'b'])
assert not ci.equals(list('aabca'))
assert not ci.equals(CategoricalIndex(list('aabca')))
- self.assertTrue(ci.equals(ci.copy()))
+ assert ci.equals(ci.copy())
ci = CategoricalIndex(list('aabca') + [np.nan],
categories=['c', 'a', 'b'])
assert not ci.equals(list('aabca'))
assert not ci.equals(CategoricalIndex(list('aabca')))
- self.assertTrue(ci.equals(ci.copy()))
+ assert ci.equals(ci.copy())
ci = CategoricalIndex(list('aabca') + [np.nan],
categories=['c', 'a', 'b'])
assert not ci.equals(list('aabca') + [np.nan])
assert not ci.equals(CategoricalIndex(list('aabca') + [np.nan]))
- self.assertTrue(ci.equals(ci.copy()))
+ assert ci.equals(ci.copy())
def test_string_categorical_index_repr(self):
# short
diff --git a/pandas/tests/indexes/test_interval.py b/pandas/tests/indexes/test_interval.py
index 2e16e16e0b2c4..815fefa813a9d 100644
--- a/pandas/tests/indexes/test_interval.py
+++ b/pandas/tests/indexes/test_interval.py
@@ -27,28 +27,28 @@ def create_index(self):
def test_constructors(self):
expected = self.index
actual = IntervalIndex.from_breaks(np.arange(3), closed='right')
- self.assertTrue(expected.equals(actual))
+ assert expected.equals(actual)
alternate = IntervalIndex.from_breaks(np.arange(3), closed='left')
assert not expected.equals(alternate)
actual = IntervalIndex.from_intervals([Interval(0, 1), Interval(1, 2)])
- self.assertTrue(expected.equals(actual))
+ assert expected.equals(actual)
actual = IntervalIndex([Interval(0, 1), Interval(1, 2)])
- self.assertTrue(expected.equals(actual))
+ assert expected.equals(actual)
actual = IntervalIndex.from_arrays(np.arange(2), np.arange(2) + 1,
closed='right')
- self.assertTrue(expected.equals(actual))
+ assert expected.equals(actual)
actual = Index([Interval(0, 1), Interval(1, 2)])
assert isinstance(actual, IntervalIndex)
- self.assertTrue(expected.equals(actual))
+ assert expected.equals(actual)
actual = Index(expected)
assert isinstance(actual, IntervalIndex)
- self.assertTrue(expected.equals(actual))
+ assert expected.equals(actual)
def test_constructors_other(self):
@@ -106,8 +106,8 @@ def test_constructors_datetimelike(self):
expected_scalar_type = type(idx[0])
i = result[0]
- self.assertTrue(isinstance(i.left, expected_scalar_type))
- self.assertTrue(isinstance(i.right, expected_scalar_type))
+ assert isinstance(i.left, expected_scalar_type)
+ assert isinstance(i.right, expected_scalar_type)
def test_constructors_error(self):
@@ -158,7 +158,7 @@ def test_with_nans(self):
np.array([True, True]))
index = self.index_with_nan
- self.assertTrue(index.hasnans)
+ assert index.hasnans
tm.assert_numpy_array_equal(index.notnull(),
np.array([True, False, True]))
tm.assert_numpy_array_equal(index.isnull(),
@@ -193,8 +193,8 @@ def test_ensure_copied_data(self):
def test_equals(self):
idx = self.index
- self.assertTrue(idx.equals(idx))
- self.assertTrue(idx.equals(idx.copy()))
+ assert idx.equals(idx)
+ assert idx.equals(idx.copy())
assert not idx.equals(idx.astype(object))
assert not idx.equals(np.array(idx))
@@ -216,11 +216,11 @@ def test_astype(self):
result = idx.astype(object)
tm.assert_index_equal(result, Index(idx.values, dtype='object'))
assert not idx.equals(result)
- self.assertTrue(idx.equals(IntervalIndex.from_intervals(result)))
+ assert idx.equals(IntervalIndex.from_intervals(result))
result = idx.astype('interval')
tm.assert_index_equal(result, idx)
- self.assertTrue(result.equals(idx))
+ assert result.equals(idx)
result = idx.astype('category')
expected = pd.Categorical(idx, ordered=True)
@@ -243,12 +243,12 @@ def test_where_array_like(self):
def test_delete(self):
expected = IntervalIndex.from_breaks([1, 2])
actual = self.index.delete(0)
- self.assertTrue(expected.equals(actual))
+ assert expected.equals(actual)
def test_insert(self):
expected = IntervalIndex.from_breaks(range(4))
actual = self.index.insert(2, Interval(2, 3))
- self.assertTrue(expected.equals(actual))
+ assert expected.equals(actual)
pytest.raises(ValueError, self.index.insert, 0, 1)
pytest.raises(ValueError, self.index.insert, 0,
@@ -256,27 +256,27 @@ def test_insert(self):
def test_take(self):
actual = self.index.take([0, 1])
- self.assertTrue(self.index.equals(actual))
+ assert self.index.equals(actual)
expected = IntervalIndex.from_arrays([0, 0, 1], [1, 1, 2])
actual = self.index.take([0, 0, 1])
- self.assertTrue(expected.equals(actual))
+ assert expected.equals(actual)
def test_monotonic_and_unique(self):
- self.assertTrue(self.index.is_monotonic)
- self.assertTrue(self.index.is_unique)
+ assert self.index.is_monotonic
+ assert self.index.is_unique
idx = IntervalIndex.from_tuples([(0, 1), (0.5, 1.5)])
- self.assertTrue(idx.is_monotonic)
- self.assertTrue(idx.is_unique)
+ assert idx.is_monotonic
+ assert idx.is_unique
idx = IntervalIndex.from_tuples([(0, 1), (2, 3), (1, 2)])
assert not idx.is_monotonic
- self.assertTrue(idx.is_unique)
+ assert idx.is_unique
idx = IntervalIndex.from_tuples([(0, 2), (0, 2)])
assert not idx.is_unique
- self.assertTrue(idx.is_monotonic)
+ assert idx.is_monotonic
@pytest.mark.xfail(reason='not a valid repr as we use interval notation')
def test_repr(self):
@@ -514,10 +514,10 @@ def test_union(self):
other = IntervalIndex.from_arrays([2], [3])
expected = IntervalIndex.from_arrays(range(3), range(1, 4))
actual = self.index.union(other)
- self.assertTrue(expected.equals(actual))
+ assert expected.equals(actual)
actual = other.union(self.index)
- self.assertTrue(expected.equals(actual))
+ assert expected.equals(actual)
tm.assert_index_equal(self.index.union(self.index), self.index)
tm.assert_index_equal(self.index.union(self.index[:1]),
@@ -527,7 +527,7 @@ def test_intersection(self):
other = IntervalIndex.from_breaks([1, 2, 3])
expected = IntervalIndex.from_breaks([1, 2])
actual = self.index.intersection(other)
- self.assertTrue(expected.equals(actual))
+ assert expected.equals(actual)
tm.assert_index_equal(self.index.intersection(self.index),
self.index)
diff --git a/pandas/tests/indexes/test_multi.py b/pandas/tests/indexes/test_multi.py
index 6f6e1f1544219..714e901532ed9 100644
--- a/pandas/tests/indexes/test_multi.py
+++ b/pandas/tests/indexes/test_multi.py
@@ -65,19 +65,19 @@ def test_labels_dtypes(self):
# GH 8456
i = MultiIndex.from_tuples([('A', 1), ('A', 2)])
- self.assertTrue(i.labels[0].dtype == 'int8')
- self.assertTrue(i.labels[1].dtype == 'int8')
+ assert i.labels[0].dtype == 'int8'
+ assert i.labels[1].dtype == 'int8'
i = MultiIndex.from_product([['a'], range(40)])
- self.assertTrue(i.labels[1].dtype == 'int8')
+ assert i.labels[1].dtype == 'int8'
i = MultiIndex.from_product([['a'], range(400)])
- self.assertTrue(i.labels[1].dtype == 'int16')
+ assert i.labels[1].dtype == 'int16'
i = MultiIndex.from_product([['a'], range(40000)])
- self.assertTrue(i.labels[1].dtype == 'int32')
+ assert i.labels[1].dtype == 'int32'
i = pd.MultiIndex.from_product([['a'], range(1000)])
- self.assertTrue((i.labels[0] >= 0).all())
- self.assertTrue((i.labels[1] >= 0).all())
+ assert (i.labels[0] >= 0).all()
+ assert (i.labels[1] >= 0).all()
def test_where(self):
i = MultiIndex.from_tuples([('A', 1), ('A', 2)])
@@ -468,19 +468,19 @@ def test_copy_names(self):
multi_idx = pd.Index([(1, 2), (3, 4)], names=['MyName1', 'MyName2'])
multi_idx1 = multi_idx.copy()
- self.assertTrue(multi_idx.equals(multi_idx1))
+ assert multi_idx.equals(multi_idx1)
self.assertEqual(multi_idx.names, ['MyName1', 'MyName2'])
self.assertEqual(multi_idx1.names, ['MyName1', 'MyName2'])
multi_idx2 = multi_idx.copy(names=['NewName1', 'NewName2'])
- self.assertTrue(multi_idx.equals(multi_idx2))
+ assert multi_idx.equals(multi_idx2)
self.assertEqual(multi_idx.names, ['MyName1', 'MyName2'])
self.assertEqual(multi_idx2.names, ['NewName1', 'NewName2'])
multi_idx3 = multi_idx.copy(name=['NewName1', 'NewName2'])
- self.assertTrue(multi_idx.equals(multi_idx3))
+ assert multi_idx.equals(multi_idx3)
self.assertEqual(multi_idx.names, ['MyName1', 'MyName2'])
self.assertEqual(multi_idx3.names, ['NewName1', 'NewName2'])
@@ -520,7 +520,7 @@ def test_names(self):
def test_reference_duplicate_name(self):
idx = MultiIndex.from_tuples(
[('a', 'b'), ('c', 'd')], names=['x', 'x'])
- self.assertTrue(idx._reference_duplicate_name('x'))
+ assert idx._reference_duplicate_name('x')
idx = MultiIndex.from_tuples(
[('a', 'b'), ('c', 'd')], names=['x', 'y'])
@@ -673,9 +673,8 @@ def test_from_arrays(self):
# infer correctly
result = MultiIndex.from_arrays([[pd.NaT, Timestamp('20130101')],
['a', 'b']])
- self.assertTrue(result.levels[0].equals(Index([Timestamp('20130101')
- ])))
- self.assertTrue(result.levels[1].equals(Index(['a', 'b'])))
+ assert result.levels[0].equals(Index([Timestamp('20130101')]))
+ assert result.levels[1].equals(Index(['a', 'b']))
def test_from_arrays_index_series_datetimetz(self):
idx1 = pd.date_range('2015-01-01 10:00', freq='D', periods=3,
@@ -895,15 +894,15 @@ def test_values_boxed(self):
def test_append(self):
result = self.index[:3].append(self.index[3:])
- self.assertTrue(result.equals(self.index))
+ assert result.equals(self.index)
foos = [self.index[:1], self.index[1:3], self.index[3:]]
result = foos[0].append(foos[1:])
- self.assertTrue(result.equals(self.index))
+ assert result.equals(self.index)
# empty
result = self.index.append([])
- self.assertTrue(result.equals(self.index))
+ assert result.equals(self.index)
def test_append_mixed_dtypes(self):
# GH 13660
@@ -1015,7 +1014,7 @@ def test_legacy_pickle(self):
obj = pd.read_pickle(path)
obj2 = MultiIndex.from_tuples(obj.values)
- self.assertTrue(obj.equals(obj2))
+ assert obj.equals(obj2)
res = obj.get_indexer(obj)
exp = np.arange(len(obj), dtype=np.intp)
@@ -1034,7 +1033,7 @@ def test_legacy_v2_unpickle(self):
obj = pd.read_pickle(path)
obj2 = MultiIndex.from_tuples(obj.values)
- self.assertTrue(obj.equals(obj2))
+ assert obj.equals(obj2)
res = obj.get_indexer(obj)
exp = np.arange(len(obj), dtype=np.intp)
@@ -1055,11 +1054,11 @@ def test_roundtrip_pickle_with_tz(self):
tz='US/Eastern')
], names=['one', 'two', 'three'])
unpickled = tm.round_trip_pickle(index)
- self.assertTrue(index.equal_levels(unpickled))
+ assert index.equal_levels(unpickled)
def test_from_tuples_index_values(self):
result = MultiIndex.from_tuples(self.index)
- self.assertTrue((result.values == self.index.values).all())
+ assert (result.values == self.index.values).all()
def test_contains(self):
assert ('foo', 'two') in self.index
@@ -1077,9 +1076,9 @@ def test_contains_with_nat(self):
pd.date_range('2012-01-01', periods=5)],
labels=[[0, 0, 0, 0, 0, 0], [-1, 0, 1, 2, 3, 4]],
names=[None, 'B'])
- self.assertTrue(('C', pd.Timestamp('2012-01-01')) in mi)
+ assert ('C', pd.Timestamp('2012-01-01')) in mi
for val in mi.values:
- self.assertTrue(val in mi)
+ assert val in mi
def test_is_all_dates(self):
assert not self.index.is_all_dates
@@ -1095,14 +1094,14 @@ def test_getitem(self):
# slice
result = self.index[2:5]
expected = self.index[[2, 3, 4]]
- self.assertTrue(result.equals(expected))
+ assert result.equals(expected)
# boolean
result = self.index[[True, False, True, False, True, True]]
result2 = self.index[np.array([True, False, True, False, True, True])]
expected = self.index[[0, 2, 4, 5]]
- self.assertTrue(result.equals(expected))
- self.assertTrue(result2.equals(expected))
+ assert result.equals(expected)
+ assert result2.equals(expected)
def test_getitem_group_select(self):
sorted_idx, _ = self.index.sortlevel(0)
@@ -1157,7 +1156,7 @@ def test_get_loc_level(self):
expected = slice(1, 2)
exp_index = index[expected].droplevel(0).droplevel(0)
self.assertEqual(loc, expected)
- self.assertTrue(new_index.equals(exp_index))
+ assert new_index.equals(exp_index)
loc, new_index = index.get_loc_level((0, 1, 0))
expected = 1
@@ -1171,7 +1170,7 @@ def test_get_loc_level(self):
result, new_index = index.get_loc_level((2000, slice(None, None)))
expected = slice(None, None)
self.assertEqual(result, expected)
- self.assertTrue(new_index.equals(index.droplevel(0)))
+ assert new_index.equals(index.droplevel(0))
def test_slice_locs(self):
df = tm.makeTimeDataFrame()
@@ -1347,7 +1346,7 @@ def test_get_indexer(self):
assert_almost_equal(r1, rexp1)
r1 = idx1.get_indexer([1, 2, 3])
- self.assertTrue((r1 == [-1, -1, -1]).all())
+ assert (r1 == [-1, -1, -1]).all()
# create index with duplicates
idx1 = Index(lrange(10) + lrange(10))
@@ -1533,41 +1532,41 @@ def test_equals_missing_values(self):
def test_identical(self):
mi = self.index.copy()
mi2 = self.index.copy()
- self.assertTrue(mi.identical(mi2))
+ assert mi.identical(mi2)
mi = mi.set_names(['new1', 'new2'])
- self.assertTrue(mi.equals(mi2))
+ assert mi.equals(mi2)
assert not mi.identical(mi2)
mi2 = mi2.set_names(['new1', 'new2'])
- self.assertTrue(mi.identical(mi2))
+ assert mi.identical(mi2)
mi3 = Index(mi.tolist(), names=mi.names)
mi4 = Index(mi.tolist(), names=mi.names, tupleize_cols=False)
- self.assertTrue(mi.identical(mi3))
+ assert mi.identical(mi3)
assert not mi.identical(mi4)
- self.assertTrue(mi.equals(mi4))
+ assert mi.equals(mi4)
def test_is_(self):
mi = MultiIndex.from_tuples(lzip(range(10), range(10)))
- self.assertTrue(mi.is_(mi))
- self.assertTrue(mi.is_(mi.view()))
- self.assertTrue(mi.is_(mi.view().view().view().view()))
+ assert mi.is_(mi)
+ assert mi.is_(mi.view())
+ assert mi.is_(mi.view().view().view().view())
mi2 = mi.view()
# names are metadata, they don't change id
mi2.names = ["A", "B"]
- self.assertTrue(mi2.is_(mi))
- self.assertTrue(mi.is_(mi2))
+ assert mi2.is_(mi)
+ assert mi.is_(mi2)
- self.assertTrue(mi.is_(mi.set_names(["C", "D"])))
+ assert mi.is_(mi.set_names(["C", "D"]))
mi2 = mi.view()
mi2.set_names(["E", "F"], inplace=True)
- self.assertTrue(mi.is_(mi2))
+ assert mi.is_(mi2)
# levels are inherent properties, they change identity
mi3 = mi2.set_levels([lrange(10), lrange(10)])
assert not mi3.is_(mi2)
# shouldn't change
- self.assertTrue(mi2.is_(mi))
+ assert mi2.is_(mi)
mi4 = mi3.view()
mi4.set_levels([[1 for _ in range(10)], lrange(10)], inplace=True)
assert not mi4.is_(mi3)
@@ -1584,7 +1583,7 @@ def test_union(self):
tups = sorted(self.index.values)
expected = MultiIndex.from_tuples(tups)
- self.assertTrue(the_union.equals(expected))
+ assert the_union.equals(expected)
# corner case, pass self or empty thing:
the_union = self.index.union(self.index)
@@ -1596,7 +1595,7 @@ def test_union(self):
# won't work in python 3
# tuples = self.index.values
# result = self.index[:4] | tuples[4:]
- # self.assertTrue(result.equals(tuples))
+ # assert result.equals(tuples)
# not valid for python 3
# def test_union_with_regular_index(self):
@@ -1607,7 +1606,7 @@ def test_union(self):
# assert 'B' in result
# result2 = self.index.union(other)
- # self.assertTrue(result.equals(result2))
+ # assert result.equals(result2)
def test_intersection(self):
piece1 = self.index[:5][::-1]
@@ -1616,7 +1615,7 @@ def test_intersection(self):
the_int = piece1 & piece2
tups = sorted(self.index[3:5].values)
expected = MultiIndex.from_tuples(tups)
- self.assertTrue(the_int.equals(expected))
+ assert the_int.equals(expected)
# corner case, pass self
the_int = self.index.intersection(self.index)
@@ -1625,12 +1624,12 @@ def test_intersection(self):
# empty intersection: disjoint
empty = self.index[:2] & self.index[2:]
expected = self.index[:0]
- self.assertTrue(empty.equals(expected))
+ assert empty.equals(expected)
# can't do in python 3
# tuples = self.index.values
# result = self.index & tuples
- # self.assertTrue(result.equals(tuples))
+ # assert result.equals(tuples)
def test_sub(self):
@@ -1655,25 +1654,25 @@ def test_difference(self):
names=self.index.names)
assert isinstance(result, MultiIndex)
- self.assertTrue(result.equals(expected))
+ assert result.equals(expected)
self.assertEqual(result.names, self.index.names)
# empty difference: reflexive
result = self.index.difference(self.index)
expected = self.index[:0]
- self.assertTrue(result.equals(expected))
+ assert result.equals(expected)
self.assertEqual(result.names, self.index.names)
# empty difference: superset
result = self.index[-3:].difference(self.index)
expected = self.index[:0]
- self.assertTrue(result.equals(expected))
+ assert result.equals(expected)
self.assertEqual(result.names, self.index.names)
# empty difference: degenerate
result = self.index[:0].difference(self.index)
expected = self.index[:0]
- self.assertTrue(result.equals(expected))
+ assert result.equals(expected)
self.assertEqual(result.names, self.index.names)
# names not the same
@@ -1688,11 +1687,11 @@ def test_difference(self):
# raise Exception called with non-MultiIndex
result = first.difference(first.values)
- self.assertTrue(result.equals(first[:0]))
+ assert result.equals(first[:0])
# name from empty array
result = first.difference([])
- self.assertTrue(first.equals(result))
+ assert first.equals(result)
self.assertEqual(first.names, result.names)
# name from non-empty array
@@ -1728,23 +1727,23 @@ def test_sortlevel(self):
sorted_idx, _ = index.sortlevel(0)
expected = MultiIndex.from_tuples(sorted(tuples))
- self.assertTrue(sorted_idx.equals(expected))
+ assert sorted_idx.equals(expected)
sorted_idx, _ = index.sortlevel(0, ascending=False)
- self.assertTrue(sorted_idx.equals(expected[::-1]))
+ assert sorted_idx.equals(expected[::-1])
sorted_idx, _ = index.sortlevel(1)
by1 = sorted(tuples, key=lambda x: (x[1], x[0]))
expected = MultiIndex.from_tuples(by1)
- self.assertTrue(sorted_idx.equals(expected))
+ assert sorted_idx.equals(expected)
sorted_idx, _ = index.sortlevel(1, ascending=False)
- self.assertTrue(sorted_idx.equals(expected[::-1]))
+ assert sorted_idx.equals(expected[::-1])
def test_sortlevel_not_sort_remaining(self):
mi = MultiIndex.from_tuples([[1, 1, 3], [1, 1, 1]], names=list('ABC'))
sorted_idx, _ = mi.sortlevel('A', sort_remaining=False)
- self.assertTrue(sorted_idx.equals(mi))
+ assert sorted_idx.equals(mi)
def test_sortlevel_deterministic(self):
tuples = [('bar', 'one'), ('foo', 'two'), ('qux', 'two'),
@@ -1754,18 +1753,18 @@ def test_sortlevel_deterministic(self):
sorted_idx, _ = index.sortlevel(0)
expected = MultiIndex.from_tuples(sorted(tuples))
- self.assertTrue(sorted_idx.equals(expected))
+ assert sorted_idx.equals(expected)
sorted_idx, _ = index.sortlevel(0, ascending=False)
- self.assertTrue(sorted_idx.equals(expected[::-1]))
+ assert sorted_idx.equals(expected[::-1])
sorted_idx, _ = index.sortlevel(1)
by1 = sorted(tuples, key=lambda x: (x[1], x[0]))
expected = MultiIndex.from_tuples(by1)
- self.assertTrue(sorted_idx.equals(expected))
+ assert sorted_idx.equals(expected)
sorted_idx, _ = index.sortlevel(1, ascending=False)
- self.assertTrue(sorted_idx.equals(expected[::-1]))
+ assert sorted_idx.equals(expected[::-1])
def test_dims(self):
pass
@@ -1836,7 +1835,7 @@ def test_droplevel_with_names(self):
dropped = index.droplevel('two')
expected = index.droplevel(1)
- self.assertTrue(dropped.equals(expected))
+ assert dropped.equals(expected)
def test_droplevel_multiple(self):
index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index(
@@ -1846,7 +1845,7 @@ def test_droplevel_multiple(self):
dropped = index[:2].droplevel(['three', 'one'])
expected = index[:2].droplevel(2).droplevel(0)
- self.assertTrue(dropped.equals(expected))
+ assert dropped.equals(expected)
def test_drop_not_lexsorted(self):
# GH 12078
@@ -1854,7 +1853,7 @@ def test_drop_not_lexsorted(self):
# define the lexsorted version of the multi-index
tuples = [('a', ''), ('b1', 'c1'), ('b2', 'c2')]
lexsorted_mi = MultiIndex.from_tuples(tuples, names=['b', 'c'])
- self.assertTrue(lexsorted_mi.is_lexsorted())
+ assert lexsorted_mi.is_lexsorted()
# and the not-lexsorted version
df = pd.DataFrame(columns=['a', 'b', 'c', 'd'],
@@ -1873,7 +1872,7 @@ def test_drop_not_lexsorted(self):
def test_insert(self):
# key contained in all levels
new_index = self.index.insert(0, ('bar', 'two'))
- self.assertTrue(new_index.equal_levels(self.index))
+ assert new_index.equal_levels(self.index)
self.assertEqual(new_index[0], ('bar', 'two'))
# key not contained in all levels
@@ -2005,8 +2004,8 @@ def _check_how(other, how):
return_indexers=True)
exp_level = other.join(self.index.levels[1], how=how)
- self.assertTrue(join_index.levels[0].equals(self.index.levels[0]))
- self.assertTrue(join_index.levels[1].equals(exp_level))
+ assert join_index.levels[0].equals(self.index.levels[0])
+ assert join_index.levels[1].equals(exp_level)
# pare down levels
mask = np.array(
@@ -2019,7 +2018,7 @@ def _check_how(other, how):
self.index.join(other, how=how, level='second',
return_indexers=True)
- self.assertTrue(join_index.equals(join_index2))
+ assert join_index.equals(join_index2)
tm.assert_numpy_array_equal(lidx, lidx2)
tm.assert_numpy_array_equal(ridx, ridx2)
tm.assert_numpy_array_equal(join_index2.values, exp_values)
@@ -2102,11 +2101,11 @@ def test_reindex_level(self):
exp_index = self.index.join(idx, level='second', how='right')
exp_index2 = self.index.join(idx, level='second', how='left')
- self.assertTrue(target.equals(exp_index))
+ assert target.equals(exp_index)
exp_indexer = np.array([0, 2, 4])
tm.assert_numpy_array_equal(indexer, exp_indexer, check_dtype=False)
- self.assertTrue(target2.equals(exp_index2))
+ assert target2.equals(exp_index2)
exp_indexer2 = np.array([0, -1, 0, -1, 0, -1])
tm.assert_numpy_array_equal(indexer2, exp_indexer2, check_dtype=False)
@@ -2120,11 +2119,11 @@ def test_reindex_level(self):
def test_duplicates(self):
assert not self.index.has_duplicates
- self.assertTrue(self.index.append(self.index).has_duplicates)
+ assert self.index.append(self.index).has_duplicates
index = MultiIndex(levels=[[0, 1], [0, 1, 2]], labels=[
[0, 0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 0, 1, 2]])
- self.assertTrue(index.has_duplicates)
+ assert index.has_duplicates
# GH 9075
t = [(u('x'), u('out'), u('z'), 5, u('y'), u('in'), u('z'), 169),
@@ -2179,7 +2178,7 @@ def check(nlevels, with_nulls):
values = index.values.tolist()
index = MultiIndex.from_tuples(values + [values[0]])
- self.assertTrue(index.has_duplicates)
+ assert index.has_duplicates
# no overflow
check(4, False)
@@ -2228,7 +2227,7 @@ def test_duplicate_meta_data(self):
index.set_names([None, None]),
index.set_names([None, 'Num']),
index.set_names(['Upper', 'Num']), ]:
- self.assertTrue(idx.has_duplicates)
+ assert idx.has_duplicates
self.assertEqual(idx.drop_duplicates().names, idx.names)
def test_get_unique_index(self):
@@ -2237,7 +2236,7 @@ def test_get_unique_index(self):
for dropna in [False, True]:
result = idx._get_unique_index(dropna=dropna)
- self.assertTrue(result.unique)
+ assert result.unique
tm.assert_index_equal(result, expected)
def test_unique(self):
@@ -2370,7 +2369,7 @@ def test_level_setting_resets_attributes(self):
ind = MultiIndex.from_arrays([
['A', 'A', 'B', 'B', 'B'], [1, 2, 1, 2, 3]
])
- self.assertTrue(ind.is_monotonic)
+ assert ind.is_monotonic
ind.set_levels([['A', 'B', 'A', 'A', 'B'], [2, 1, 3, -2, 5]],
inplace=True)
@@ -2380,8 +2379,8 @@ def test_level_setting_resets_attributes(self):
def test_is_monotonic(self):
i = MultiIndex.from_product([np.arange(10),
np.arange(10)], names=['one', 'two'])
- self.assertTrue(i.is_monotonic)
- self.assertTrue(Index(i.values).is_monotonic)
+ assert i.is_monotonic
+ assert Index(i.values).is_monotonic
i = MultiIndex.from_product([np.arange(10, 0, -1),
np.arange(10)], names=['one', 'two'])
@@ -2412,8 +2411,8 @@ def test_is_monotonic(self):
labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
[0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
names=['first', 'second'])
- self.assertTrue(i.is_monotonic)
- self.assertTrue(Index(i.values).is_monotonic)
+ assert i.is_monotonic
+ assert Index(i.values).is_monotonic
# mixed levels, hits the TypeError
i = MultiIndex(
@@ -2617,7 +2616,7 @@ def test_index_name_retained(self):
def test_equals_operator(self):
# GH9785
- self.assertTrue((self.index == self.index).all())
+ assert (self.index == self.index).all()
def test_large_multiindex_error(self):
# GH12527
diff --git a/pandas/tests/indexes/test_numeric.py b/pandas/tests/indexes/test_numeric.py
index 8b4179dbf2e0e..68a329a7f741f 100644
--- a/pandas/tests/indexes/test_numeric.py
+++ b/pandas/tests/indexes/test_numeric.py
@@ -228,11 +228,11 @@ def test_constructor(self):
# nan handling
result = Float64Index([np.nan, np.nan])
- self.assertTrue(pd.isnull(result.values).all())
+ assert pd.isnull(result.values).all()
result = Float64Index(np.array([np.nan]))
- self.assertTrue(pd.isnull(result.values).all())
+ assert pd.isnull(result.values).all()
result = Index(np.array([np.nan]))
- self.assertTrue(pd.isnull(result.values).all())
+ assert pd.isnull(result.values).all()
def test_constructor_invalid(self):
@@ -260,15 +260,15 @@ def test_constructor_explicit(self):
def test_astype(self):
result = self.float.astype(object)
- self.assertTrue(result.equals(self.float))
- self.assertTrue(self.float.equals(result))
+ assert result.equals(self.float)
+ assert self.float.equals(result)
self.check_is_index(result)
i = self.mixed.copy()
i.name = 'foo'
result = i.astype(object)
- self.assertTrue(result.equals(i))
- self.assertTrue(i.equals(result))
+ assert result.equals(i)
+ assert i.equals(result)
self.check_is_index(result)
# GH 12881
@@ -307,18 +307,18 @@ def test_astype(self):
def test_equals_numeric(self):
i = Float64Index([1.0, 2.0])
- self.assertTrue(i.equals(i))
- self.assertTrue(i.identical(i))
+ assert i.equals(i)
+ assert i.identical(i)
i2 = Float64Index([1.0, 2.0])
- self.assertTrue(i.equals(i2))
+ assert i.equals(i2)
i = Float64Index([1.0, np.nan])
- self.assertTrue(i.equals(i))
- self.assertTrue(i.identical(i))
+ assert i.equals(i)
+ assert i.identical(i)
i2 = Float64Index([1.0, np.nan])
- self.assertTrue(i.equals(i2))
+ assert i.equals(i2)
def test_get_indexer(self):
idx = Float64Index([0.0, 1.0, 2.0])
@@ -363,7 +363,7 @@ def test_get_loc_na(self):
# representable by slice [0:2:2]
# pytest.raises(KeyError, idx.slice_locs, np.nan)
sliced = idx.slice_locs(np.nan)
- self.assertTrue(isinstance(sliced, tuple))
+ assert isinstance(sliced, tuple)
self.assertEqual(sliced, (0, 3))
# not representable by slice
@@ -373,17 +373,17 @@ def test_get_loc_na(self):
def test_contains_nans(self):
i = Float64Index([1.0, 2.0, np.nan])
- self.assertTrue(np.nan in i)
+ assert np.nan in i
def test_contains_not_nans(self):
i = Float64Index([1.0, 2.0, np.nan])
- self.assertTrue(1.0 in i)
+ assert 1.0 in i
def test_doesnt_contain_all_the_things(self):
i = Float64Index([np.nan])
assert not i.isin([0]).item()
assert not i.isin([1]).item()
- self.assertTrue(i.isin([np.nan]).item())
+ assert i.isin([np.nan]).item()
def test_nan_multiple_containment(self):
i = Float64Index([1.0, np.nan])
@@ -463,18 +463,18 @@ def test_view(self):
tm.assert_index_equal(i, self._holder(i_view, name='Foo'))
def test_is_monotonic(self):
- self.assertTrue(self.index.is_monotonic)
- self.assertTrue(self.index.is_monotonic_increasing)
+ assert self.index.is_monotonic
+ assert self.index.is_monotonic_increasing
assert not self.index.is_monotonic_decreasing
index = self._holder([4, 3, 2, 1])
assert not index.is_monotonic
- self.assertTrue(index.is_monotonic_decreasing)
+ assert index.is_monotonic_decreasing
index = self._holder([1])
- self.assertTrue(index.is_monotonic)
- self.assertTrue(index.is_monotonic_increasing)
- self.assertTrue(index.is_monotonic_decreasing)
+ assert index.is_monotonic
+ assert index.is_monotonic_increasing
+ assert index.is_monotonic_decreasing
def test_logical_compat(self):
idx = self.create_index()
@@ -483,7 +483,7 @@ def test_logical_compat(self):
def test_identical(self):
i = Index(self.index.copy())
- self.assertTrue(i.identical(self.index))
+ assert i.identical(self.index)
same_values_different_type = Index(i, dtype=object)
assert not i.identical(same_values_different_type)
@@ -491,11 +491,10 @@ def test_identical(self):
i = self.index.copy(dtype=object)
i = i.rename('foo')
same_values = Index(i, dtype=object)
- self.assertTrue(same_values.identical(i))
+ assert same_values.identical(i)
assert not i.identical(self.index)
- self.assertTrue(Index(same_values, name='foo', dtype=object).identical(
- i))
+ assert Index(same_values, name='foo', dtype=object).identical(i)
assert not self.index.copy(dtype=object).identical(
self.index.copy(dtype=self._dtype))
diff --git a/pandas/tests/indexes/test_range.py b/pandas/tests/indexes/test_range.py
index 0baf6636806f6..49536be1aa57c 100644
--- a/pandas/tests/indexes/test_range.py
+++ b/pandas/tests/indexes/test_range.py
@@ -125,7 +125,7 @@ def test_constructor_same(self):
# pass thru w and w/o copy
index = RangeIndex(1, 5, 2)
result = RangeIndex(index, copy=False)
- self.assertTrue(result.identical(index))
+ assert result.identical(index)
result = RangeIndex(index, copy=True)
tm.assert_index_equal(result, index, exact=True)
@@ -172,16 +172,16 @@ def test_constructor_name(self):
copy = RangeIndex(orig)
copy.name = 'copy'
- self.assertTrue(orig.name, 'original')
- self.assertTrue(copy.name, 'copy')
+ assert orig.name == 'original'
+ assert copy.name == 'copy'
new = Index(copy)
- self.assertTrue(new.name, 'copy')
+ assert new.name == 'copy'
new.name = 'new'
- self.assertTrue(orig.name, 'original')
- self.assertTrue(new.name, 'copy')
- self.assertTrue(new.name, 'new')
+ assert orig.name == 'original'
+ assert copy.name == 'copy'
+ assert new.name == 'new'
def test_numeric_compat2(self):
# validate that we are handling the RangeIndex overrides to numeric ops
@@ -259,8 +259,8 @@ def test_constructor_corner(self):
def test_copy(self):
i = RangeIndex(5, name='Foo')
i_copy = i.copy()
- self.assertTrue(i_copy is not i)
- self.assertTrue(i_copy.identical(i))
+ assert i_copy is not i
+ assert i_copy.identical(i)
self.assertEqual(i_copy._start, 0)
self.assertEqual(i_copy._stop, 5)
self.assertEqual(i_copy._step, 1)
@@ -273,7 +273,7 @@ def test_repr(self):
expected = "RangeIndex(start=0, stop=5, step=1, name='Foo')"
else:
expected = "RangeIndex(start=0, stop=5, step=1, name=u'Foo')"
- self.assertTrue(result, expected)
+ assert result == expected
result = eval(result)
tm.assert_index_equal(result, i, exact=True)
@@ -328,28 +328,28 @@ def test_dtype(self):
self.assertEqual(self.index.dtype, np.int64)
def test_is_monotonic(self):
- self.assertTrue(self.index.is_monotonic)
- self.assertTrue(self.index.is_monotonic_increasing)
+ assert self.index.is_monotonic
+ assert self.index.is_monotonic_increasing
assert not self.index.is_monotonic_decreasing
index = RangeIndex(4, 0, -1)
assert not index.is_monotonic
- self.assertTrue(index.is_monotonic_decreasing)
+ assert index.is_monotonic_decreasing
index = RangeIndex(1, 2)
- self.assertTrue(index.is_monotonic)
- self.assertTrue(index.is_monotonic_increasing)
- self.assertTrue(index.is_monotonic_decreasing)
+ assert index.is_monotonic
+ assert index.is_monotonic_increasing
+ assert index.is_monotonic_decreasing
index = RangeIndex(2, 1)
- self.assertTrue(index.is_monotonic)
- self.assertTrue(index.is_monotonic_increasing)
- self.assertTrue(index.is_monotonic_decreasing)
+ assert index.is_monotonic
+ assert index.is_monotonic_increasing
+ assert index.is_monotonic_decreasing
index = RangeIndex(1, 1)
- self.assertTrue(index.is_monotonic)
- self.assertTrue(index.is_monotonic_increasing)
- self.assertTrue(index.is_monotonic_decreasing)
+ assert index.is_monotonic
+ assert index.is_monotonic_increasing
+ assert index.is_monotonic_decreasing
def test_equals_range(self):
equiv_pairs = [(RangeIndex(0, 9, 2), RangeIndex(0, 10, 2)),
@@ -357,8 +357,8 @@ def test_equals_range(self):
(RangeIndex(1, 2, 3), RangeIndex(1, 3, 4)),
(RangeIndex(0, -9, -2), RangeIndex(0, -10, -2))]
for left, right in equiv_pairs:
- self.assertTrue(left.equals(right))
- self.assertTrue(right.equals(left))
+ assert left.equals(right)
+ assert right.equals(left)
def test_logical_compat(self):
idx = self.create_index()
@@ -367,7 +367,7 @@ def test_logical_compat(self):
def test_identical(self):
i = Index(self.index.copy())
- self.assertTrue(i.identical(self.index))
+ assert i.identical(self.index)
# we don't allow object dtype for RangeIndex
if isinstance(self.index, RangeIndex):
@@ -379,11 +379,10 @@ def test_identical(self):
i = self.index.copy(dtype=object)
i = i.rename('foo')
same_values = Index(i, dtype=object)
- self.assertTrue(same_values.identical(self.index.copy(dtype=object)))
+ assert same_values.identical(self.index.copy(dtype=object))
assert not i.identical(self.index)
- self.assertTrue(Index(same_values, name='foo', dtype=object).identical(
- i))
+ assert Index(same_values, name='foo', dtype=object).identical(i)
assert not self.index.copy(dtype=object).identical(
self.index.copy(dtype='int64'))
@@ -689,7 +688,7 @@ def test_nbytes(self):
# memory savings vs int index
i = RangeIndex(0, 1000)
- self.assertTrue(i.nbytes < i.astype(int).nbytes / 10)
+ assert i.nbytes < i.astype(int).nbytes / 10
# constant memory usage
i2 = RangeIndex(0, 10)
@@ -784,7 +783,7 @@ def test_duplicates(self):
if not len(ind):
continue
idx = self.indices[ind]
- self.assertTrue(idx.is_unique)
+ assert idx.is_unique
assert not idx.has_duplicates
def test_ufunc_compat(self):
diff --git a/pandas/tests/indexes/timedeltas/test_astype.py b/pandas/tests/indexes/timedeltas/test_astype.py
index b17433d3aeb51..6e82f165e4909 100644
--- a/pandas/tests/indexes/timedeltas/test_astype.py
+++ b/pandas/tests/indexes/timedeltas/test_astype.py
@@ -55,7 +55,7 @@ def test_astype_timedelta64(self):
result = idx.astype('timedelta64[ns]', copy=False)
tm.assert_index_equal(result, idx)
- self.assertTrue(result is idx)
+ assert result is idx
def test_astype_raises(self):
# GH 13149, GH 13209
diff --git a/pandas/tests/indexes/timedeltas/test_ops.py b/pandas/tests/indexes/timedeltas/test_ops.py
index 9747902f316a6..feaec50264872 100644
--- a/pandas/tests/indexes/timedeltas/test_ops.py
+++ b/pandas/tests/indexes/timedeltas/test_ops.py
@@ -33,7 +33,7 @@ def test_asobject_tolist(self):
Timedelta('3 days'), Timedelta('4 days')]
expected = pd.Index(expected_list, dtype=object, name='idx')
result = idx.asobject
- self.assertTrue(isinstance(result, Index))
+ assert isinstance(result, Index)
self.assertEqual(result.dtype, object)
tm.assert_index_equal(result, expected)
@@ -46,7 +46,7 @@ def test_asobject_tolist(self):
Timedelta('4 days')]
expected = pd.Index(expected_list, dtype=object, name='idx')
result = idx.asobject
- self.assertTrue(isinstance(result, Index))
+ assert isinstance(result, Index)
self.assertEqual(result.dtype, object)
tm.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
@@ -56,7 +56,7 @@ def test_minmax(self):
# monotonic
idx1 = TimedeltaIndex(['1 days', '2 days', '3 days'])
- self.assertTrue(idx1.is_monotonic)
+ assert idx1.is_monotonic
# non-monotonic
idx2 = TimedeltaIndex(['1 days', np.nan, '3 days', 'NaT'])
@@ -71,13 +71,13 @@ def test_minmax(self):
for op in ['min', 'max']:
# Return NaT
obj = TimedeltaIndex([])
- self.assertTrue(pd.isnull(getattr(obj, op)()))
+ assert pd.isnull(getattr(obj, op)())
obj = TimedeltaIndex([pd.NaT])
- self.assertTrue(pd.isnull(getattr(obj, op)()))
+ assert pd.isnull(getattr(obj, op)())
obj = TimedeltaIndex([pd.NaT, pd.NaT, pd.NaT])
- self.assertTrue(pd.isnull(getattr(obj, op)()))
+ assert pd.isnull(getattr(obj, op)())
def test_numpy_minmax(self):
dr = pd.date_range(start='2016-01-15', end='2016-01-20')
@@ -825,7 +825,7 @@ def test_nat(self):
assert pd.TimedeltaIndex([])._na_value is pd.NaT
idx = pd.TimedeltaIndex(['1 days', '2 days'])
- self.assertTrue(idx._can_hold_na)
+ assert idx._can_hold_na
tm.assert_numpy_array_equal(idx._isnan, np.array([False, False]))
assert not idx.hasnans
@@ -833,21 +833,21 @@ def test_nat(self):
np.array([], dtype=np.intp))
idx = pd.TimedeltaIndex(['1 days', 'NaT'])
- self.assertTrue(idx._can_hold_na)
+ assert idx._can_hold_na
tm.assert_numpy_array_equal(idx._isnan, np.array([False, True]))
- self.assertTrue(idx.hasnans)
+ assert idx.hasnans
tm.assert_numpy_array_equal(idx._nan_idxs,
np.array([1], dtype=np.intp))
def test_equals(self):
# GH 13107
idx = pd.TimedeltaIndex(['1 days', '2 days', 'NaT'])
- self.assertTrue(idx.equals(idx))
- self.assertTrue(idx.equals(idx.copy()))
- self.assertTrue(idx.equals(idx.asobject))
- self.assertTrue(idx.asobject.equals(idx))
- self.assertTrue(idx.asobject.equals(idx.asobject))
+ assert idx.equals(idx)
+ assert idx.equals(idx.copy())
+ assert idx.equals(idx.asobject)
+ assert idx.asobject.equals(idx)
+ assert idx.asobject.equals(idx.asobject)
assert not idx.equals(list(idx))
assert not idx.equals(pd.Series(idx))
@@ -870,18 +870,18 @@ def test_ops(self):
self.assertEqual(-td, Timedelta(-10, unit='d'))
self.assertEqual(+td, Timedelta(10, unit='d'))
self.assertEqual(td - td, Timedelta(0, unit='ns'))
- self.assertTrue((td - pd.NaT) is pd.NaT)
+ assert (td - pd.NaT) is pd.NaT
self.assertEqual(td + td, Timedelta(20, unit='d'))
- self.assertTrue((td + pd.NaT) is pd.NaT)
+ assert (td + pd.NaT) is pd.NaT
self.assertEqual(td * 2, Timedelta(20, unit='d'))
- self.assertTrue((td * pd.NaT) is pd.NaT)
+ assert (td * pd.NaT) is pd.NaT
self.assertEqual(td / 2, Timedelta(5, unit='d'))
self.assertEqual(td // 2, Timedelta(5, unit='d'))
self.assertEqual(abs(td), td)
self.assertEqual(abs(-td), td)
self.assertEqual(td / td, 1)
- self.assertTrue((td / pd.NaT) is np.nan)
- self.assertTrue((td // pd.NaT) is np.nan)
+ assert (td / pd.NaT) is np.nan
+ assert (td // pd.NaT) is np.nan
# invert
self.assertEqual(-td, Timedelta('-10d'))
@@ -995,11 +995,11 @@ class Other:
other = Other()
td = Timedelta('1 day')
- self.assertTrue(td.__add__(other) is NotImplemented)
- self.assertTrue(td.__sub__(other) is NotImplemented)
- self.assertTrue(td.__truediv__(other) is NotImplemented)
- self.assertTrue(td.__mul__(other) is NotImplemented)
- self.assertTrue(td.__floordiv__(other) is NotImplemented)
+ assert td.__add__(other) is NotImplemented
+ assert td.__sub__(other) is NotImplemented
+ assert td.__truediv__(other) is NotImplemented
+ assert td.__mul__(other) is NotImplemented
+ assert td.__floordiv__(other) is NotImplemented
def test_ops_error_str(self):
# GH 13624
diff --git a/pandas/tests/indexes/timedeltas/test_timedelta.py b/pandas/tests/indexes/timedeltas/test_timedelta.py
index c90c61170ca93..8a327d2ecb08f 100644
--- a/pandas/tests/indexes/timedeltas/test_timedelta.py
+++ b/pandas/tests/indexes/timedeltas/test_timedelta.py
@@ -247,10 +247,10 @@ def test_isin(self):
index = tm.makeTimedeltaIndex(4)
result = index.isin(index)
- self.assertTrue(result.all())
+ assert result.all()
result = index.isin(list(index))
- self.assertTrue(result.all())
+ assert result.all()
assert_almost_equal(index.isin([index[2], 5]),
np.array([False, False, True, False]))
@@ -483,7 +483,7 @@ def test_append_numpy_bug_1681(self):
str(c)
result = a.append(c)
- self.assertTrue((result['B'] == td).all())
+ assert (result['B'] == td).all()
def test_fields(self):
rng = timedelta_range('1 days, 10:11:12.100123456', periods=2,
@@ -569,7 +569,7 @@ def test_timedelta(self):
index = date_range('1/1/2000', periods=50, freq='B')
shifted = index + timedelta(1)
back = shifted + timedelta(-1)
- self.assertTrue(tm.equalContents(index, back))
+ assert tm.equalContents(index, back)
self.assertEqual(shifted.freq, index.freq)
self.assertEqual(shifted.freq, back.freq)
diff --git a/pandas/tests/indexes/timedeltas/test_tools.py b/pandas/tests/indexes/timedeltas/test_tools.py
index 12ed8a2e38f92..d69f78bfd73b1 100644
--- a/pandas/tests/indexes/timedeltas/test_tools.py
+++ b/pandas/tests/indexes/timedeltas/test_tools.py
@@ -32,7 +32,7 @@ def conv(v):
self.assertEqual(result.astype('int64'), iNaT)
result = to_timedelta(['', ''])
- self.assertTrue(isnull(result).all())
+ assert isnull(result).all()
# pass thru
result = to_timedelta(np.array([np.timedelta64(1, 's')]))
@@ -122,8 +122,7 @@ def test_to_timedelta_invalid(self):
# time not supported ATM
pytest.raises(ValueError, lambda: to_timedelta(time(second=1)))
- self.assertTrue(to_timedelta(
- time(second=1), errors='coerce') is pd.NaT)
+ assert to_timedelta(time(second=1), errors='coerce') is pd.NaT
pytest.raises(ValueError, lambda: to_timedelta(['foo', 'bar']))
tm.assert_index_equal(TimedeltaIndex([pd.NaT, pd.NaT]),
diff --git a/pandas/tests/indexing/test_floats.py b/pandas/tests/indexing/test_floats.py
index 498604aaac853..4d4ef65b40074 100644
--- a/pandas/tests/indexing/test_floats.py
+++ b/pandas/tests/indexing/test_floats.py
@@ -130,14 +130,14 @@ def f():
s2 = s.copy()
s2.loc[3.0] = 10
- self.assertTrue(s2.index.is_object())
+ assert s2.index.is_object()
for idxr in [lambda x: x.ix,
lambda x: x]:
s2 = s.copy()
with catch_warnings(record=True):
idxr(s2)[3.0] = 0
- self.assertTrue(s2.index.is_object())
+ assert s2.index.is_object()
# fallsback to position selection, series only
s = Series(np.arange(len(i)), index=i)
@@ -239,7 +239,7 @@ def test_scalar_integer(self):
# contains
# coerce to equal int
- self.assertTrue(3.0 in s)
+ assert 3.0 in s
def test_scalar_float(self):
@@ -275,7 +275,7 @@ def f():
pytest.raises(KeyError, lambda: idxr(s)[3.5])
# contains
- self.assertTrue(3.0 in s)
+ assert 3.0 in s
# iloc succeeds with an integer
expected = s.iloc[3]
@@ -440,7 +440,7 @@ def f():
with catch_warnings(record=True):
idxr(sc)[l] = 0
result = idxr(sc)[l].values.ravel()
- self.assertTrue((result == 0).all())
+ assert (result == 0).all()
# positional indexing
def f():
@@ -534,7 +534,7 @@ def f():
with catch_warnings(record=True):
idxr(sc)[l] = 0
result = idxr(sc)[l].values.ravel()
- self.assertTrue((result == 0).all())
+ assert (result == 0).all()
# positional indexing
def f():
@@ -570,7 +570,7 @@ def test_slice_float(self):
with catch_warnings(record=True):
idxr(s2)[l] = 0
result = idxr(s2)[l].values.ravel()
- self.assertTrue((result == 0).all())
+ assert (result == 0).all()
def test_floating_index_doc_example(self):
diff --git a/pandas/tests/indexing/test_iloc.py b/pandas/tests/indexing/test_iloc.py
index 18b169559b2d4..baced46923fd4 100644
--- a/pandas/tests/indexing/test_iloc.py
+++ b/pandas/tests/indexing/test_iloc.py
@@ -191,7 +191,7 @@ def test_iloc_getitem_dups(self):
# cross-sectional indexing
result = df.iloc[0, 0]
- self.assertTrue(isnull(result))
+ assert isnull(result)
result = df.iloc[0, :]
expected = Series([np.nan, 1, 3, 3], index=['A', 'B', 'A', 'B'],
diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py
index d0f089f0804c3..5924dba488043 100644
--- a/pandas/tests/indexing/test_indexing.py
+++ b/pandas/tests/indexing/test_indexing.py
@@ -87,8 +87,8 @@ def test_setitem_dtype_upcast(self):
columns=['foo', 'bar', 'baz'])
tm.assert_frame_equal(left, right)
- self.assertTrue(is_integer_dtype(left['foo']))
- self.assertTrue(is_integer_dtype(left['baz']))
+ assert is_integer_dtype(left['foo'])
+ assert is_integer_dtype(left['baz'])
left = DataFrame(np.arange(6, dtype='int64').reshape(2, 3) / 10.0,
index=list('ab'),
@@ -99,8 +99,8 @@ def test_setitem_dtype_upcast(self):
columns=['foo', 'bar', 'baz'])
tm.assert_frame_equal(left, right)
- self.assertTrue(is_float_dtype(left['foo']))
- self.assertTrue(is_float_dtype(left['baz']))
+ assert is_float_dtype(left['foo'])
+ assert is_float_dtype(left['baz'])
def test_dups_fancy_indexing(self):
@@ -430,7 +430,7 @@ def test_string_slice(self):
# dtype should properly raises KeyError
df = pd.DataFrame([1], pd.Index([pd.Timestamp('2011-01-01')],
dtype=object))
- self.assertTrue(df.index.is_all_dates)
+ assert df.index.is_all_dates
with pytest.raises(KeyError):
df['2011']
@@ -556,15 +556,15 @@ def test_index_type_coercion(self):
for s in [Series(range(5)),
Series(range(5), index=range(1, 6))]:
- self.assertTrue(s.index.is_integer())
+ assert s.index.is_integer()
for indexer in [lambda x: x.ix,
lambda x: x.loc,
lambda x: x]:
s2 = s.copy()
indexer(s2)[0.1] = 0
- self.assertTrue(s2.index.is_floating())
- self.assertTrue(indexer(s2)[0.1] == 0)
+ assert s2.index.is_floating()
+ assert indexer(s2)[0.1] == 0
s2 = s.copy()
indexer(s2)[0.0] = 0
@@ -575,11 +575,11 @@ def test_index_type_coercion(self):
s2 = s.copy()
indexer(s2)['0'] = 0
- self.assertTrue(s2.index.is_object())
+ assert s2.index.is_object()
for s in [Series(range(5), index=np.arange(5.))]:
- self.assertTrue(s.index.is_floating())
+ assert s.index.is_floating()
for idxr in [lambda x: x.ix,
lambda x: x.loc,
@@ -587,8 +587,8 @@ def test_index_type_coercion(self):
s2 = s.copy()
idxr(s2)[0.1] = 0
- self.assertTrue(s2.index.is_floating())
- self.assertTrue(idxr(s2)[0.1] == 0)
+ assert s2.index.is_floating()
+ assert idxr(s2)[0.1] == 0
s2 = s.copy()
idxr(s2)[0.0] = 0
@@ -596,7 +596,7 @@ def test_index_type_coercion(self):
s2 = s.copy()
idxr(s2)['0'] = 0
- self.assertTrue(s2.index.is_object())
+ assert s2.index.is_object()
class TestMisc(Base, tm.TestCase):
@@ -776,7 +776,7 @@ def test_non_reducing_slice(self):
]
for slice_ in slices:
tslice_ = _non_reducing_slice(slice_)
- self.assertTrue(isinstance(df.loc[tslice_], DataFrame))
+ assert isinstance(df.loc[tslice_], DataFrame)
def test_list_slice(self):
# like dataframe getitem
diff --git a/pandas/tests/indexing/test_ix.py b/pandas/tests/indexing/test_ix.py
index c3ce21343b8d1..433b44c952ca1 100644
--- a/pandas/tests/indexing/test_ix.py
+++ b/pandas/tests/indexing/test_ix.py
@@ -84,7 +84,7 @@ def compare(result, expected):
if is_scalar(expected):
self.assertEqual(result, expected)
else:
- self.assertTrue(expected.equals(result))
+ assert expected.equals(result)
# failure cases for .loc, but these work for .ix
df = pd.DataFrame(np.random.randn(5, 4), columns=list('ABCD'))
diff --git a/pandas/tests/indexing/test_loc.py b/pandas/tests/indexing/test_loc.py
index 862a6e6326ddd..b430f458d48b5 100644
--- a/pandas/tests/indexing/test_loc.py
+++ b/pandas/tests/indexing/test_loc.py
@@ -325,8 +325,8 @@ def test_loc_general(self):
# want this to work
result = df.loc[:, "A":"B"].iloc[0:2, :]
- self.assertTrue((result.columns == ['A', 'B']).all())
- self.assertTrue((result.index == ['A', 'B']).all())
+ assert (result.columns == ['A', 'B']).all()
+ assert (result.index == ['A', 'B']).all()
# mixed type
result = DataFrame({'a': [Timestamp('20130101')], 'b': [1]}).iloc[0]
diff --git a/pandas/tests/io/formats/test_eng_formatting.py b/pandas/tests/io/formats/test_eng_formatting.py
index 8eb4ed576fff1..41bb95964b4a2 100644
--- a/pandas/tests/io/formats/test_eng_formatting.py
+++ b/pandas/tests/io/formats/test_eng_formatting.py
@@ -184,7 +184,7 @@ def test_nan(self):
pt = df.pivot_table(values='a', index='b', columns='c')
fmt.set_eng_float_format(accuracy=1)
result = pt.to_string()
- self.assertTrue('NaN' in result)
+ assert 'NaN' in result
tm.reset_display_options()
def test_inf(self):
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index ccc1372495106..6f19a4a126118 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -148,7 +148,7 @@ def test_show_null_counts(self):
def check(null_counts, result):
buf = StringIO()
df.info(buf=buf, null_counts=null_counts)
- self.assertTrue(('non-null' in buf.getvalue()) is result)
+ assert ('non-null' in buf.getvalue()) is result
with option_context('display.max_info_rows', 20,
'display.max_info_columns', 20):
@@ -209,10 +209,10 @@ def test_repr_chop_threshold(self):
def test_repr_obeys_max_seq_limit(self):
with option_context("display.max_seq_items", 2000):
- self.assertTrue(len(printing.pprint_thing(lrange(1000))) > 1000)
+ assert len(printing.pprint_thing(lrange(1000))) > 1000
with option_context("display.max_seq_items", 5):
- self.assertTrue(len(printing.pprint_thing(lrange(1000))) < 100)
+ assert len(printing.pprint_thing(lrange(1000))) < 100
def test_repr_set(self):
self.assertEqual(printing.pprint_thing(set([1])), '{1}')
@@ -235,12 +235,12 @@ def test_repr_should_return_str(self):
index1 = [u("\u03c3"), u("\u03c4"), u("\u03c5"), u("\u03c6")]
cols = [u("\u03c8")]
df = DataFrame(data, columns=cols, index=index1)
- self.assertTrue(type(df.__repr__()) == str) # both py2 / 3
+ assert type(df.__repr__()) == str # both py2 / 3
def test_repr_no_backslash(self):
with option_context('mode.sim_interactive', True):
df = DataFrame(np.random.randn(10, 4))
- self.assertTrue('\\' not in repr(df))
+ assert '\\' not in repr(df)
def test_expand_frame_repr(self):
df_small = DataFrame('hello', [0], [0])
@@ -255,16 +255,16 @@ def test_expand_frame_repr(self):
assert not has_truncated_repr(df_small)
assert not has_expanded_repr(df_small)
assert not has_truncated_repr(df_wide)
- self.assertTrue(has_expanded_repr(df_wide))
- self.assertTrue(has_vertically_truncated_repr(df_tall))
- self.assertTrue(has_expanded_repr(df_tall))
+ assert has_expanded_repr(df_wide)
+ assert has_vertically_truncated_repr(df_tall)
+ assert has_expanded_repr(df_tall)
with option_context('display.expand_frame_repr', False):
assert not has_truncated_repr(df_small)
assert not has_expanded_repr(df_small)
assert not has_horizontally_truncated_repr(df_wide)
assert not has_expanded_repr(df_wide)
- self.assertTrue(has_vertically_truncated_repr(df_tall))
+ assert has_vertically_truncated_repr(df_tall)
assert not has_expanded_repr(df_tall)
def test_repr_non_interactive(self):
@@ -296,7 +296,7 @@ def mkframe(n):
assert not has_expanded_repr(mkframe(4))
assert not has_expanded_repr(mkframe(5))
assert not has_expanded_repr(df6)
- self.assertTrue(has_doubly_truncated_repr(df6))
+ assert has_doubly_truncated_repr(df6)
with option_context('display.max_rows', 20,
'display.max_columns', 10):
@@ -309,7 +309,7 @@ def mkframe(n):
'display.max_columns', 10):
# out vertical bounds can not result in exanded repr
assert not has_expanded_repr(df10)
- self.assertTrue(has_vertically_truncated_repr(df10))
+ assert has_vertically_truncated_repr(df10)
# width=None in terminal, auto detection
with option_context('display.max_columns', 100, 'display.max_rows',
@@ -318,7 +318,7 @@ def mkframe(n):
assert not has_expanded_repr(df)
df = mkframe((term_width // 7) + 2)
printing.pprint_thing(df._repr_fits_horizontal_())
- self.assertTrue(has_expanded_repr(df))
+ assert has_expanded_repr(df)
def test_str_max_colwidth(self):
# GH 7856
@@ -330,15 +330,14 @@ def test_str_max_colwidth(self):
'c': 'stuff',
'd': 1}])
df.set_index(['a', 'b', 'c'])
- self.assertTrue(
- str(df) ==
+ assert str(df) == (
' a b c d\n'
'0 foo bar uncomfortably long line with lots of stuff 1\n'
'1 foo bar stuff 1')
with option_context('max_colwidth', 20):
- self.assertTrue(str(df) == ' a b c d\n'
- '0 foo bar uncomfortably lo... 1\n'
- '1 foo bar stuff 1')
+ assert str(df) == (' a b c d\n'
+ '0 foo bar uncomfortably lo... 1\n'
+ '1 foo bar stuff 1')
def test_auto_detect(self):
term_width, term_height = get_terminal_size()
@@ -350,24 +349,24 @@ def test_auto_detect(self):
with option_context('max_rows', None):
with option_context('max_columns', None):
# Wrap around with None
- self.assertTrue(has_expanded_repr(df))
+ assert has_expanded_repr(df)
with option_context('max_rows', 0):
with option_context('max_columns', 0):
# Truncate with auto detection.
- self.assertTrue(has_horizontally_truncated_repr(df))
+ assert has_horizontally_truncated_repr(df)
index = range(int(term_height * fac))
df = DataFrame(index=index, columns=cols)
with option_context('max_rows', 0):
with option_context('max_columns', None):
# Wrap around with None
- self.assertTrue(has_expanded_repr(df))
+ assert has_expanded_repr(df)
# Truncate vertically
- self.assertTrue(has_vertically_truncated_repr(df))
+ assert has_vertically_truncated_repr(df)
with option_context('max_rows', None):
with option_context('max_columns', 0):
- self.assertTrue(has_horizontally_truncated_repr(df))
+ assert has_horizontally_truncated_repr(df)
def test_to_string_repr_unicode(self):
buf = StringIO()
@@ -732,7 +731,7 @@ def test_to_string_with_col_space(self):
c10 = len(df.to_string(col_space=10).split("\n")[1])
c20 = len(df.to_string(col_space=20).split("\n")[1])
c30 = len(df.to_string(col_space=30).split("\n")[1])
- self.assertTrue(c10 < c20 < c30)
+ assert c10 < c20 < c30
# GH 8230
# col_space wasn't being applied with header=False
@@ -752,23 +751,20 @@ def test_to_string_truncate_indices(self):
df = DataFrame(index=index(h), columns=column(w))
with option_context("display.max_rows", 15):
if h == 20:
- self.assertTrue(
- has_vertically_truncated_repr(df))
+ assert has_vertically_truncated_repr(df)
else:
assert not has_vertically_truncated_repr(
df)
with option_context("display.max_columns", 15):
if w == 20:
- self.assertTrue(
- has_horizontally_truncated_repr(df))
+ assert has_horizontally_truncated_repr(df)
else:
assert not (
has_horizontally_truncated_repr(df))
with option_context("display.max_rows", 15,
"display.max_columns", 15):
if h == 20 and w == 20:
- self.assertTrue(has_doubly_truncated_repr(
- df))
+ assert has_doubly_truncated_repr(df)
else:
assert not has_doubly_truncated_repr(
df)
@@ -778,7 +774,7 @@ def test_to_string_truncate_multilevel(self):
['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
df = DataFrame(index=arrays, columns=arrays)
with option_context("display.max_rows", 7, "display.max_columns", 7):
- self.assertTrue(has_doubly_truncated_repr(df))
+ assert has_doubly_truncated_repr(df)
def test_truncate_with_different_dtypes(self):
@@ -793,7 +789,7 @@ def test_truncate_with_different_dtypes(self):
with pd.option_context('display.max_rows', 8):
result = str(s)
- self.assertTrue('object' in result)
+ assert 'object' in result
# 12045
df = DataFrame({'text': ['some words'] + [None] * 9})
@@ -801,7 +797,7 @@ def test_truncate_with_different_dtypes(self):
with pd.option_context('display.max_rows', 8,
'display.max_columns', 3):
result = str(df)
- self.assertTrue('None' in result)
+ assert 'None' in result
assert 'NaN' not in result
def test_datetimelike_frame(self):
@@ -813,10 +809,10 @@ def test_datetimelike_frame(self):
with option_context("display.max_rows", 5):
result = str(df)
- self.assertTrue('2013-01-01 00:00:00+00:00' in result)
- self.assertTrue('NaT' in result)
- self.assertTrue('...' in result)
- self.assertTrue('[6 rows x 1 columns]' in result)
+ assert '2013-01-01 00:00:00+00:00' in result
+ assert 'NaT' in result
+ assert '...' in result
+ assert '[6 rows x 1 columns]' in result
dts = [pd.Timestamp('2011-01-01', tz='US/Eastern')] * 5 + [pd.NaT] * 5
df = pd.DataFrame({"dt": dts,
@@ -930,7 +926,7 @@ def test_wide_repr(self):
with option_context('display.width', 120):
wider_repr = repr(df)
- self.assertTrue(len(wider_repr) < len(wide_repr))
+ assert len(wider_repr) < len(wide_repr)
reset_option('display.expand_frame_repr')
@@ -956,7 +952,7 @@ def test_wide_repr_named(self):
with option_context('display.width', 150):
wider_repr = repr(df)
- self.assertTrue(len(wider_repr) < len(wide_repr))
+ assert len(wider_repr) < len(wide_repr)
for line in wide_repr.splitlines()[1::13]:
assert 'DataFrame Index' in line
@@ -978,7 +974,7 @@ def test_wide_repr_multiindex(self):
with option_context('display.width', 150):
wider_repr = repr(df)
- self.assertTrue(len(wider_repr) < len(wide_repr))
+ assert len(wider_repr) < len(wide_repr)
for line in wide_repr.splitlines()[1::13]:
assert 'Level 0 Level 1' in line
@@ -1002,7 +998,7 @@ def test_wide_repr_multiindex_cols(self):
with option_context('display.width', 150):
wider_repr = repr(df)
- self.assertTrue(len(wider_repr) < len(wide_repr))
+ assert len(wider_repr) < len(wide_repr)
reset_option('display.expand_frame_repr')
@@ -1018,7 +1014,7 @@ def test_wide_repr_unicode(self):
with option_context('display.width', 150):
wider_repr = repr(df)
- self.assertTrue(len(wider_repr) < len(wide_repr))
+ assert len(wider_repr) < len(wide_repr)
reset_option('display.expand_frame_repr')
@@ -1028,8 +1024,8 @@ def test_wide_repr_wide_long_columns(self):
'b': ['c' * 70, 'd' * 80]})
result = repr(df)
- self.assertTrue('ccccc' in result)
- self.assertTrue('ddddd' in result)
+ assert 'ccccc' in result
+ assert 'ddddd' in result
def test_long_series(self):
n = 1000
@@ -1141,8 +1137,8 @@ def test_to_string(self):
header=None, sep=' ')
tm.assert_series_equal(recons['B'], biggie['B'])
self.assertEqual(recons['A'].count(), biggie['A'].count())
- self.assertTrue((np.abs(recons['A'].dropna() - biggie['A'].dropna()) <
- 0.1).all())
+ assert (np.abs(recons['A'].dropna() -
+ biggie['A'].dropna()) < 0.1).all()
# expected = ['B', 'A']
# self.assertEqual(header, expected)
@@ -1289,7 +1285,7 @@ def test_to_string_ascii_error(self):
def test_to_string_int_formatting(self):
df = DataFrame({'x': [-15, 20, 25, -35]})
- self.assertTrue(issubclass(df['x'].dtype.type, np.integer))
+ assert issubclass(df['x'].dtype.type, np.integer)
output = df.to_string()
expected = (' x\n' '0 -15\n' '1 20\n' '2 25\n' '3 -35')
@@ -1353,8 +1349,8 @@ def test_show_dimensions(self):
with option_context('display.max_rows', 10, 'display.max_columns', 40,
'display.width', 500, 'display.expand_frame_repr',
'info', 'display.show_dimensions', True):
- self.assertTrue('5 rows' in str(df))
- self.assertTrue('5 rows' in df._repr_html_())
+ assert '5 rows' in str(df)
+ assert '5 rows' in df._repr_html_()
with option_context('display.max_rows', 10, 'display.max_columns', 40,
'display.width', 500, 'display.expand_frame_repr',
'info', 'display.show_dimensions', False):
@@ -1363,8 +1359,8 @@ def test_show_dimensions(self):
with option_context('display.max_rows', 2, 'display.max_columns', 2,
'display.width', 500, 'display.expand_frame_repr',
'info', 'display.show_dimensions', 'truncate'):
- self.assertTrue('5 rows' in str(df))
- self.assertTrue('5 rows' in df._repr_html_())
+ assert '5 rows' in str(df)
+ assert '5 rows' in df._repr_html_()
with option_context('display.max_rows', 10, 'display.max_columns', 40,
'display.width', 500, 'display.expand_frame_repr',
'info', 'display.show_dimensions', 'truncate'):
@@ -1384,7 +1380,7 @@ def test_repr_html(self):
df = DataFrame([[1, 2], [3, 4]])
fmt.set_option('display.show_dimensions', True)
- self.assertTrue('2 rows' in df._repr_html_())
+ assert '2 rows' in df._repr_html_()
fmt.set_option('display.show_dimensions', False)
assert '2 rows' not in df._repr_html_()
@@ -1513,7 +1509,7 @@ def test_info_repr_max_cols(self):
with option_context('display.large_repr', 'info',
'display.max_columns', 1,
'display.max_info_columns', 4):
- self.assertTrue(has_non_verbose_info_repr(df))
+ assert has_non_verbose_info_repr(df)
with option_context('display.large_repr', 'info',
'display.max_columns', 1,
@@ -1576,17 +1572,17 @@ def test_float_trim_zeros(self):
if line.startswith('dtype:'):
continue
if _three_digit_exp():
- self.assertTrue(('+010' in line) or skip)
+ assert ('+010' in line) or skip
else:
- self.assertTrue(('+10' in line) or skip)
+ assert ('+10' in line) or skip
skip = False
def test_dict_entries(self):
df = DataFrame({'A': [{'a': 1, 'b': 2}]})
val = df.to_string()
- self.assertTrue("'a': 1" in val)
- self.assertTrue("'b': 2" in val)
+ assert "'a': 1" in val
+ assert "'b': 2" in val
def test_period(self):
# GH 12615
@@ -1662,7 +1658,7 @@ def test_freq_name_separation(self):
index=date_range('1/1/2000', periods=10), name=0)
result = repr(s)
- self.assertTrue('Freq: D, Name: 0' in result)
+ assert 'Freq: D, Name: 0' in result
def test_to_string_mixed(self):
s = Series(['foo', np.nan, -1.23, 4.56])
@@ -1884,17 +1880,17 @@ def test_datetimeindex(self):
index = date_range('20130102', periods=6)
s = Series(1, index=index)
result = s.to_string()
- self.assertTrue('2013-01-02' in result)
+ assert '2013-01-02' in result
# nat in index
s2 = Series(2, index=[Timestamp('20130111'), NaT])
s = s2.append(s)
result = s.to_string()
- self.assertTrue('NaT' in result)
+ assert 'NaT' in result
# nat in summary
result = str(s2.index)
- self.assertTrue('NaT' in result)
+ assert 'NaT' in result
def test_timedelta64(self):
@@ -1909,47 +1905,47 @@ def test_timedelta64(self):
# adding NaTs
y = s - s.shift(1)
result = y.to_string()
- self.assertTrue('1 days' in result)
- self.assertTrue('00:00:00' not in result)
- self.assertTrue('NaT' in result)
+ assert '1 days' in result
+ assert '00:00:00' not in result
+ assert 'NaT' in result
# with frac seconds
o = Series([datetime(2012, 1, 1, microsecond=150)] * 3)
y = s - o
result = y.to_string()
- self.assertTrue('-1 days +23:59:59.999850' in result)
+ assert '-1 days +23:59:59.999850' in result
# rounding?
o = Series([datetime(2012, 1, 1, 1)] * 3)
y = s - o
result = y.to_string()
- self.assertTrue('-1 days +23:00:00' in result)
- self.assertTrue('1 days 23:00:00' in result)
+ assert '-1 days +23:00:00' in result
+ assert '1 days 23:00:00' in result
o = Series([datetime(2012, 1, 1, 1, 1)] * 3)
y = s - o
result = y.to_string()
- self.assertTrue('-1 days +22:59:00' in result)
- self.assertTrue('1 days 22:59:00' in result)
+ assert '-1 days +22:59:00' in result
+ assert '1 days 22:59:00' in result
o = Series([datetime(2012, 1, 1, 1, 1, microsecond=150)] * 3)
y = s - o
result = y.to_string()
- self.assertTrue('-1 days +22:58:59.999850' in result)
- self.assertTrue('0 days 22:58:59.999850' in result)
+ assert '-1 days +22:58:59.999850' in result
+ assert '0 days 22:58:59.999850' in result
# neg time
td = timedelta(minutes=5, seconds=3)
s2 = Series(date_range('2012-1-1', periods=3, freq='D')) + td
y = s - s2
result = y.to_string()
- self.assertTrue('-1 days +23:54:57' in result)
+ assert '-1 days +23:54:57' in result
td = timedelta(microseconds=550)
s2 = Series(date_range('2012-1-1', periods=3, freq='D')) + td
y = s - td
result = y.to_string()
- self.assertTrue('2012-01-01 23:59:59.999450' in result)
+ assert '2012-01-01 23:59:59.999450' in result
# no boxing of the actual elements
td = Series(pd.timedelta_range('1 days', periods=3))
@@ -1961,7 +1957,7 @@ def test_mixed_datetime64(self):
df['B'] = pd.to_datetime(df.B)
result = repr(df.loc[0])
- self.assertTrue('2012-01-01' in result)
+ assert '2012-01-01' in result
def test_period(self):
# GH 12615
@@ -2166,7 +2162,7 @@ class TestFloatArrayFormatter(tm.TestCase):
def test_misc(self):
obj = fmt.FloatArrayFormatter(np.array([], dtype=np.float64))
result = obj.get_result()
- self.assertTrue(len(result) == 0)
+ assert len(result) == 0
def test_format(self):
obj = fmt.FloatArrayFormatter(np.array([12, 0], dtype=np.float64))
@@ -2493,14 +2489,14 @@ class TestDatetimeIndexUnicode(tm.TestCase):
def test_dates(self):
text = str(pd.to_datetime([datetime(2013, 1, 1), datetime(2014, 1, 1)
]))
- self.assertTrue("['2013-01-01'," in text)
- self.assertTrue(", '2014-01-01']" in text)
+ assert "['2013-01-01'," in text
+ assert ", '2014-01-01']" in text
def test_mixed(self):
text = str(pd.to_datetime([datetime(2013, 1, 1), datetime(
2014, 1, 1, 12), datetime(2014, 1, 1)]))
- self.assertTrue("'2013-01-01 00:00:00'," in text)
- self.assertTrue("'2014-01-01 00:00:00']" in text)
+ assert "'2013-01-01 00:00:00'," in text
+ assert "'2014-01-01 00:00:00']" in text
class TestStringRepTimestamp(tm.TestCase):
diff --git a/pandas/tests/io/formats/test_style.py b/pandas/tests/io/formats/test_style.py
index 96bf2b605ffa1..7d8ac6f81c31e 100644
--- a/pandas/tests/io/formats/test_style.py
+++ b/pandas/tests/io/formats/test_style.py
@@ -68,9 +68,9 @@ def test_update_ctx_flatten_multi_traliing_semi(self):
def test_copy(self):
s2 = copy.copy(self.styler)
- self.assertTrue(self.styler is not s2)
- self.assertTrue(self.styler.ctx is s2.ctx) # shallow
- self.assertTrue(self.styler._todo is s2._todo)
+ assert self.styler is not s2
+ assert self.styler.ctx is s2.ctx # shallow
+ assert self.styler._todo is s2._todo
self.styler._update_ctx(self.attrs)
self.styler.highlight_max()
@@ -79,9 +79,9 @@ def test_copy(self):
def test_deepcopy(self):
s2 = copy.deepcopy(self.styler)
- self.assertTrue(self.styler is not s2)
- self.assertTrue(self.styler.ctx is not s2.ctx)
- self.assertTrue(self.styler._todo is not s2._todo)
+ assert self.styler is not s2
+ assert self.styler.ctx is not s2.ctx
+ assert self.styler._todo is not s2._todo
self.styler._update_ctx(self.attrs)
self.styler.highlight_max()
@@ -91,11 +91,11 @@ def test_deepcopy(self):
def test_clear(self):
s = self.df.style.highlight_max()._compute()
- self.assertTrue(len(s.ctx) > 0)
- self.assertTrue(len(s._todo) > 0)
+ assert len(s.ctx) > 0
+ assert len(s._todo) > 0
s.clear()
- self.assertTrue(len(s.ctx) == 0)
- self.assertTrue(len(s._todo) == 0)
+ assert len(s.ctx) == 0
+ assert len(s._todo) == 0
def test_render(self):
df = pd.DataFrame({"A": [0, 1]})
@@ -367,42 +367,42 @@ def test_nonunique_raises(self):
def test_caption(self):
styler = Styler(self.df, caption='foo')
result = styler.render()
- self.assertTrue(all(['caption' in result, 'foo' in result]))
+ assert all(['caption' in result, 'foo' in result])
styler = self.df.style
result = styler.set_caption('baz')
- self.assertTrue(styler is result)
+ assert styler is result
self.assertEqual(styler.caption, 'baz')
def test_uuid(self):
styler = Styler(self.df, uuid='abc123')
result = styler.render()
- self.assertTrue('abc123' in result)
+ assert 'abc123' in result
styler = self.df.style
result = styler.set_uuid('aaa')
- self.assertTrue(result is styler)
+ assert result is styler
self.assertEqual(result.uuid, 'aaa')
def test_table_styles(self):
style = [{'selector': 'th', 'props': [('foo', 'bar')]}]
styler = Styler(self.df, table_styles=style)
result = ' '.join(styler.render().split())
- self.assertTrue('th { foo: bar; }' in result)
+ assert 'th { foo: bar; }' in result
styler = self.df.style
result = styler.set_table_styles(style)
- self.assertTrue(styler is result)
+ assert styler is result
self.assertEqual(styler.table_styles, style)
def test_table_attributes(self):
attributes = 'class="foo" data-bar'
styler = Styler(self.df, table_attributes=attributes)
result = styler.render()
- self.assertTrue('class="foo" data-bar' in result)
+ assert 'class="foo" data-bar' in result
result = self.df.style.set_table_attributes(attributes).render()
- self.assertTrue('class="foo" data-bar' in result)
+ assert 'class="foo" data-bar' in result
def test_precision(self):
with pd.option_context('display.precision', 10):
@@ -412,7 +412,7 @@ def test_precision(self):
self.assertEqual(s.precision, 2)
s2 = s.set_precision(4)
- self.assertTrue(s is s2)
+ assert s is s2
self.assertEqual(s.precision, 4)
def test_apply_none(self):
@@ -485,12 +485,10 @@ def test_display_format(self):
df = pd.DataFrame(np.random.random(size=(2, 2)))
ctx = df.style.format("{:0.1f}")._translate()
- self.assertTrue(all(['display_value' in c for c in row]
- for row in ctx['body']))
- self.assertTrue(all([len(c['display_value']) <= 3 for c in row[1:]]
- for row in ctx['body']))
- self.assertTrue(
- len(ctx['body'][0][1]['display_value'].lstrip('-')) <= 3)
+ assert all(['display_value' in c for c in row] for row in ctx['body'])
+ assert (all([len(c['display_value']) <= 3 for c in row[1:]]
+ for row in ctx['body']))
+ assert len(ctx['body'][0][1]['display_value'].lstrip('-')) <= 3
def test_display_format_raises(self):
df = pd.DataFrame(np.random.randn(2, 2))
@@ -711,7 +709,7 @@ def test_background_gradient(self):
for axis in [0, 1, 'index', 'columns']:
for cmap in [None, 'YlOrRd']:
result = df.style.background_gradient(cmap=cmap)._compute().ctx
- self.assertTrue(all("#" in x[0] for x in result.values()))
+ assert all("#" in x[0] for x in result.values())
self.assertEqual(result[(0, 0)], result[(0, 1)])
self.assertEqual(result[(1, 0)], result[(1, 1)])
diff --git a/pandas/tests/io/formats/test_to_html.py b/pandas/tests/io/formats/test_to_html.py
index a67bb2fd8eb5c..fd9ae0851635a 100644
--- a/pandas/tests/io/formats/test_to_html.py
+++ b/pandas/tests/io/formats/test_to_html.py
@@ -30,10 +30,10 @@ def check_with_width(df, col_space):
# and be very brittle about it.
html = df.to_html(col_space=col_space)
hdrs = [x for x in html.split(r"\n") if re.search(r"<th[>\s]", x)]
- self.assertTrue(len(hdrs) > 0)
+ assert len(hdrs) > 0
for h in hdrs:
- self.assertTrue("min-width" in h)
- self.assertTrue(str(col_space) in h)
+ assert "min-width" in h
+ assert str(col_space) in h
df = DataFrame(np.random.random(size=(1, 3)))
@@ -45,7 +45,7 @@ def test_to_html_with_empty_string_label(self):
data = {'c1': ['a', 'b'], 'c2': ['a', ''], 'data': [1, 2]}
df = DataFrame(data).set_index(['c1', 'c2'])
res = df.to_html()
- self.assertTrue("rowspan" not in res)
+ assert "rowspan" not in res
def test_to_html_unicode(self):
df = DataFrame({u('\u03c3'): np.arange(10.)})
@@ -1403,13 +1403,13 @@ def test_to_html_border_option(self):
df = DataFrame({'A': [1, 2]})
with pd.option_context('html.border', 0):
result = df.to_html()
- self.assertTrue('border="0"' in result)
- self.assertTrue('border="0"' in df._repr_html_())
+ assert 'border="0"' in result
+ assert 'border="0"' in df._repr_html_()
def test_to_html_border_zero(self):
df = DataFrame({'A': [1, 2]})
result = df.to_html(border=0)
- self.assertTrue('border="0"' in result)
+ assert 'border="0"' in result
def test_to_html(self):
# big mixed
diff --git a/pandas/tests/io/json/test_json_table_schema.py b/pandas/tests/io/json/test_json_table_schema.py
index cbb302ad39dd6..4ec13fa667452 100644
--- a/pandas/tests/io/json/test_json_table_schema.py
+++ b/pandas/tests/io/json/test_json_table_schema.py
@@ -41,7 +41,7 @@ def test_build_table_schema(self):
}
self.assertEqual(result, expected)
result = build_table_schema(self.df)
- self.assertTrue("pandas_version" in result)
+ assert "pandas_version" in result
def test_series(self):
s = pd.Series([1, 2, 3], name='foo')
@@ -51,7 +51,7 @@ def test_series(self):
'primaryKey': ['index']}
self.assertEqual(result, expected)
result = build_table_schema(s)
- self.assertTrue('pandas_version' in result)
+ assert 'pandas_version' in result
def tets_series_unnamed(self):
result = build_table_schema(pd.Series([1, 2, 3]), version=False)
@@ -194,7 +194,7 @@ def test_build_series(self):
result = s.to_json(orient='table', date_format='iso')
result = json.loads(result, object_pairs_hook=OrderedDict)
- self.assertTrue("pandas_version" in result['schema'])
+ assert "pandas_version" in result['schema']
result['schema'].pop('pandas_version')
fields = [{'name': 'id', 'type': 'integer'},
@@ -217,7 +217,7 @@ def test_to_json(self):
result = df.to_json(orient='table', date_format='iso')
result = json.loads(result, object_pairs_hook=OrderedDict)
- self.assertTrue("pandas_version" in result['schema'])
+ assert "pandas_version" in result['schema']
result['schema'].pop('pandas_version')
fields = [
diff --git a/pandas/tests/io/json/test_pandas.py b/pandas/tests/io/json/test_pandas.py
index ac9e4f77db6ac..e7a04e12d7fa4 100644
--- a/pandas/tests/io/json/test_pandas.py
+++ b/pandas/tests/io/json/test_pandas.py
@@ -349,38 +349,38 @@ def test_frame_from_json_bad_data(self):
def test_frame_from_json_nones(self):
df = DataFrame([[1, 2], [4, 5, 6]])
unser = read_json(df.to_json())
- self.assertTrue(np.isnan(unser[2][0]))
+ assert np.isnan(unser[2][0])
df = DataFrame([['1', '2'], ['4', '5', '6']])
unser = read_json(df.to_json())
- self.assertTrue(np.isnan(unser[2][0]))
+ assert np.isnan(unser[2][0])
unser = read_json(df.to_json(), dtype=False)
- self.assertTrue(unser[2][0] is None)
+ assert unser[2][0] is None
unser = read_json(df.to_json(), convert_axes=False, dtype=False)
- self.assertTrue(unser['2']['0'] is None)
+ assert unser['2']['0'] is None
unser = read_json(df.to_json(), numpy=False)
- self.assertTrue(np.isnan(unser[2][0]))
+ assert np.isnan(unser[2][0])
unser = read_json(df.to_json(), numpy=False, dtype=False)
- self.assertTrue(unser[2][0] is None)
+ assert unser[2][0] is None
unser = read_json(df.to_json(), numpy=False,
convert_axes=False, dtype=False)
- self.assertTrue(unser['2']['0'] is None)
+ assert unser['2']['0'] is None
# infinities get mapped to nulls which get mapped to NaNs during
# deserialisation
df = DataFrame([[1, 2], [4, 5, 6]])
df.loc[0, 2] = np.inf
unser = read_json(df.to_json())
- self.assertTrue(np.isnan(unser[2][0]))
+ assert np.isnan(unser[2][0])
unser = read_json(df.to_json(), dtype=False)
- self.assertTrue(np.isnan(unser[2][0]))
+ assert np.isnan(unser[2][0])
df.loc[0, 2] = np.NINF
unser = read_json(df.to_json())
- self.assertTrue(np.isnan(unser[2][0]))
+ assert np.isnan(unser[2][0])
unser = read_json(df.to_json(), dtype=False)
- self.assertTrue(np.isnan(unser[2][0]))
+ assert np.isnan(unser[2][0])
@pytest.mark.skipif(is_platform_32bit(),
reason="not compliant on 32-bit, xref #15865")
@@ -427,7 +427,7 @@ def test_frame_empty_mixedtype(self):
# mixed type
df = DataFrame(columns=['jim', 'joe'])
df['joe'] = df['joe'].astype('i8')
- self.assertTrue(df._is_mixed_type)
+ assert df._is_mixed_type
assert_frame_equal(read_json(df.to_json(), dtype=dict(df.dtypes)), df,
check_index_type=False)
@@ -440,7 +440,7 @@ def test_frame_mixedtype_orient(self): # GH10289
df = DataFrame(vals, index=list('abcd'),
columns=['1st', '2nd', '3rd', '4th', '5th'])
- self.assertTrue(df._is_mixed_type)
+ assert df._is_mixed_type
right = df.copy()
for orient in ['split', 'index', 'columns']:
@@ -637,7 +637,7 @@ def test_axis_dates(self):
json = self.ts.to_json()
result = read_json(json, typ='series')
assert_series_equal(result, self.ts, check_names=False)
- self.assertTrue(result.name is None)
+ assert result.name is None
def test_convert_dates(self):
diff --git a/pandas/tests/io/json/test_ujson.py b/pandas/tests/io/json/test_ujson.py
index 037e47bfc2a46..12d5cd14197b8 100644
--- a/pandas/tests/io/json/test_ujson.py
+++ b/pandas/tests/io/json/test_ujson.py
@@ -157,7 +157,7 @@ def test_encodeDoubleTinyExponential(self):
num = -1e-45
self.assertEqual(num, ujson.decode(ujson.encode(num)))
num = -1e-145
- self.assertTrue(np.allclose(num, ujson.decode(ujson.encode(num))))
+ assert np.allclose(num, ujson.decode(ujson.encode(num)))
def test_encodeDictWithUnicodeKeys(self):
input = {u("key1"): u("value1"), u("key1"):
@@ -1189,15 +1189,15 @@ def testArrayNumpyExcept(self):
def testArrayNumpyLabelled(self):
input = {'a': []}
output = ujson.loads(ujson.dumps(input), numpy=True, labelled=True)
- self.assertTrue((np.empty((1, 0)) == output[0]).all())
- self.assertTrue((np.array(['a']) == output[1]).all())
- self.assertTrue(output[2] is None)
+ assert (np.empty((1, 0)) == output[0]).all()
+ assert (np.array(['a']) == output[1]).all()
+ assert output[2] is None
input = [{'a': 42}]
output = ujson.loads(ujson.dumps(input), numpy=True, labelled=True)
- self.assertTrue((np.array([42]) == output[0]).all())
- self.assertTrue(output[1] is None)
- self.assertTrue((np.array([u('a')]) == output[2]).all())
+ assert (np.array([42]) == output[0]).all()
+ assert output[1] is None
+ assert (np.array([u('a')]) == output[2]).all()
# Write out the dump explicitly so there is no dependency on iteration
# order GH10837
@@ -1206,18 +1206,18 @@ def testArrayNumpyLabelled(self):
output = ujson.loads(input_dumps, numpy=True, labelled=True)
expectedvals = np.array(
[42, 31, 24, 99, 2.4, 78], dtype=int).reshape((3, 2))
- self.assertTrue((expectedvals == output[0]).all())
- self.assertTrue(output[1] is None)
- self.assertTrue((np.array([u('a'), 'b']) == output[2]).all())
+ assert (expectedvals == output[0]).all()
+ assert output[1] is None
+ assert (np.array([u('a'), 'b']) == output[2]).all()
input_dumps = ('{"1": {"a": 42, "b":31}, "2": {"a": 24, "c": 99}, '
'"3": {"a": 2.4, "b": 78}}')
output = ujson.loads(input_dumps, numpy=True, labelled=True)
expectedvals = np.array(
[42, 31, 24, 99, 2.4, 78], dtype=int).reshape((3, 2))
- self.assertTrue((expectedvals == output[0]).all())
- self.assertTrue((np.array(['1', '2', '3']) == output[1]).all())
- self.assertTrue((np.array(['a', 'b']) == output[2]).all())
+ assert (expectedvals == output[0]).all()
+ assert (np.array(['1', '2', '3']) == output[1]).all()
+ assert (np.array(['a', 'b']) == output[2]).all()
class PandasJSONTests(TestCase):
@@ -1228,27 +1228,27 @@ def testDataFrame(self):
# column indexed
outp = DataFrame(ujson.decode(ujson.encode(df)))
- self.assertTrue((df == outp).values.all())
+ assert (df == outp).values.all()
tm.assert_index_equal(df.columns, outp.columns)
tm.assert_index_equal(df.index, outp.index)
dec = _clean_dict(ujson.decode(ujson.encode(df, orient="split")))
outp = DataFrame(**dec)
- self.assertTrue((df == outp).values.all())
+ assert (df == outp).values.all()
tm.assert_index_equal(df.columns, outp.columns)
tm.assert_index_equal(df.index, outp.index)
outp = DataFrame(ujson.decode(ujson.encode(df, orient="records")))
outp.index = df.index
- self.assertTrue((df == outp).values.all())
+ assert (df == outp).values.all()
tm.assert_index_equal(df.columns, outp.columns)
outp = DataFrame(ujson.decode(ujson.encode(df, orient="values")))
outp.index = df.index
- self.assertTrue((df.values == outp.values).all())
+ assert (df.values == outp.values).all()
outp = DataFrame(ujson.decode(ujson.encode(df, orient="index")))
- self.assertTrue((df.transpose() == outp).values.all())
+ assert (df.transpose() == outp).values.all()
tm.assert_index_equal(df.transpose().columns, outp.columns)
tm.assert_index_equal(df.transpose().index, outp.index)
@@ -1258,20 +1258,20 @@ def testDataFrameNumpy(self):
# column indexed
outp = DataFrame(ujson.decode(ujson.encode(df), numpy=True))
- self.assertTrue((df == outp).values.all())
+ assert (df == outp).values.all()
tm.assert_index_equal(df.columns, outp.columns)
tm.assert_index_equal(df.index, outp.index)
dec = _clean_dict(ujson.decode(ujson.encode(df, orient="split"),
numpy=True))
outp = DataFrame(**dec)
- self.assertTrue((df == outp).values.all())
+ assert (df == outp).values.all()
tm.assert_index_equal(df.columns, outp.columns)
tm.assert_index_equal(df.index, outp.index)
outp = DataFrame(ujson.decode(ujson.encode(df, orient="index"),
numpy=True))
- self.assertTrue((df.transpose() == outp).values.all())
+ assert (df.transpose() == outp).values.all()
tm.assert_index_equal(df.transpose().columns, outp.columns)
tm.assert_index_equal(df.transpose().index, outp.index)
@@ -1283,27 +1283,23 @@ def testDataFrameNested(self):
exp = {'df1': ujson.decode(ujson.encode(df)),
'df2': ujson.decode(ujson.encode(df))}
- self.assertTrue(ujson.decode(ujson.encode(nested)) == exp)
+ assert ujson.decode(ujson.encode(nested)) == exp
exp = {'df1': ujson.decode(ujson.encode(df, orient="index")),
'df2': ujson.decode(ujson.encode(df, orient="index"))}
- self.assertTrue(ujson.decode(
- ujson.encode(nested, orient="index")) == exp)
+ assert ujson.decode(ujson.encode(nested, orient="index")) == exp
exp = {'df1': ujson.decode(ujson.encode(df, orient="records")),
'df2': ujson.decode(ujson.encode(df, orient="records"))}
- self.assertTrue(ujson.decode(
- ujson.encode(nested, orient="records")) == exp)
+ assert ujson.decode(ujson.encode(nested, orient="records")) == exp
exp = {'df1': ujson.decode(ujson.encode(df, orient="values")),
'df2': ujson.decode(ujson.encode(df, orient="values"))}
- self.assertTrue(ujson.decode(
- ujson.encode(nested, orient="values")) == exp)
+ assert ujson.decode(ujson.encode(nested, orient="values")) == exp
exp = {'df1': ujson.decode(ujson.encode(df, orient="split")),
'df2': ujson.decode(ujson.encode(df, orient="split"))}
- self.assertTrue(ujson.decode(
- ujson.encode(nested, orient="split")) == exp)
+ assert ujson.decode(ujson.encode(nested, orient="split")) == exp
def testDataFrameNumpyLabelled(self):
df = DataFrame([[1, 2, 3], [4, 5, 6]], index=[
@@ -1312,19 +1308,19 @@ def testDataFrameNumpyLabelled(self):
# column indexed
outp = DataFrame(*ujson.decode(ujson.encode(df),
numpy=True, labelled=True))
- self.assertTrue((df.T == outp).values.all())
+ assert (df.T == outp).values.all()
tm.assert_index_equal(df.T.columns, outp.columns)
tm.assert_index_equal(df.T.index, outp.index)
outp = DataFrame(*ujson.decode(ujson.encode(df, orient="records"),
numpy=True, labelled=True))
outp.index = df.index
- self.assertTrue((df == outp).values.all())
+ assert (df == outp).values.all()
tm.assert_index_equal(df.columns, outp.columns)
outp = DataFrame(*ujson.decode(ujson.encode(df, orient="index"),
numpy=True, labelled=True))
- self.assertTrue((df == outp).values.all())
+ assert (df == outp).values.all()
tm.assert_index_equal(df.columns, outp.columns)
tm.assert_index_equal(df.index, outp.index)
@@ -1384,27 +1380,23 @@ def testSeriesNested(self):
exp = {'s1': ujson.decode(ujson.encode(s)),
's2': ujson.decode(ujson.encode(s))}
- self.assertTrue(ujson.decode(ujson.encode(nested)) == exp)
+ assert ujson.decode(ujson.encode(nested)) == exp
exp = {'s1': ujson.decode(ujson.encode(s, orient="split")),
's2': ujson.decode(ujson.encode(s, orient="split"))}
- self.assertTrue(ujson.decode(
- ujson.encode(nested, orient="split")) == exp)
+ assert ujson.decode(ujson.encode(nested, orient="split")) == exp
exp = {'s1': ujson.decode(ujson.encode(s, orient="records")),
's2': ujson.decode(ujson.encode(s, orient="records"))}
- self.assertTrue(ujson.decode(
- ujson.encode(nested, orient="records")) == exp)
+ assert ujson.decode(ujson.encode(nested, orient="records")) == exp
exp = {'s1': ujson.decode(ujson.encode(s, orient="values")),
's2': ujson.decode(ujson.encode(s, orient="values"))}
- self.assertTrue(ujson.decode(
- ujson.encode(nested, orient="values")) == exp)
+ assert ujson.decode(ujson.encode(nested, orient="values")) == exp
exp = {'s1': ujson.decode(ujson.encode(s, orient="index")),
's2': ujson.decode(ujson.encode(s, orient="index"))}
- self.assertTrue(ujson.decode(
- ujson.encode(nested, orient="index")) == exp)
+ assert ujson.decode(ujson.encode(nested, orient="index")) == exp
def testIndex(self):
i = Index([23, 45, 18, 98, 43, 11], name="index")
@@ -1419,13 +1411,13 @@ def testIndex(self):
dec = _clean_dict(ujson.decode(ujson.encode(i, orient="split")))
outp = Index(**dec)
tm.assert_index_equal(i, outp)
- self.assertTrue(i.name == outp.name)
+ assert i.name == outp.name
dec = _clean_dict(ujson.decode(ujson.encode(i, orient="split"),
numpy=True))
outp = Index(**dec)
tm.assert_index_equal(i, outp)
- self.assertTrue(i.name == outp.name)
+ assert i.name == outp.name
outp = Index(ujson.decode(ujson.encode(i, orient="values")),
name='index')
@@ -1634,7 +1626,7 @@ def test_encodeSet(self):
dec = ujson.decode(enc)
for v in dec:
- self.assertTrue(v in s)
+ assert v in s
def _clean_dict(d):
diff --git a/pandas/tests/io/parser/c_parser_only.py b/pandas/tests/io/parser/c_parser_only.py
index 7ce8c61777bc7..ac2aaf1f5e4ed 100644
--- a/pandas/tests/io/parser/c_parser_only.py
+++ b/pandas/tests/io/parser/c_parser_only.py
@@ -154,8 +154,8 @@ def error(val):
# round-trip should match float()
self.assertEqual(roundtrip_val, float(text[2:]))
- self.assertTrue(sum(precise_errors) <= sum(normal_errors))
- self.assertTrue(max(precise_errors) <= max(normal_errors))
+ assert sum(precise_errors) <= sum(normal_errors)
+ assert max(precise_errors) <= max(normal_errors)
def test_pass_dtype_as_recarray(self):
if compat.is_platform_windows() and self.low_memory:
@@ -195,8 +195,8 @@ def test_usecols_dtypes(self):
converters={'a': str},
dtype={'b': int, 'c': float},
)
- self.assertTrue((result.dtypes == [object, np.int, np.float]).all())
- self.assertTrue((result2.dtypes == [object, np.float]).all())
+ assert (result.dtypes == [object, np.int, np.float]).all()
+ assert (result2.dtypes == [object, np.float]).all()
def test_disable_bool_parsing(self):
# #2090
@@ -208,7 +208,7 @@ def test_disable_bool_parsing(self):
No,No,No"""
result = self.read_csv(StringIO(data), dtype=object)
- self.assertTrue((result.dtypes == object).all())
+ assert (result.dtypes == object).all()
result = self.read_csv(StringIO(data), dtype=object, na_filter=False)
self.assertEqual(result['B'][2], '')
@@ -388,7 +388,7 @@ def test_read_nrows_large(self):
df = self.read_csv(StringIO(test_input), sep='\t', nrows=1010)
- self.assertTrue(df.size == 1010 * 10)
+ assert df.size == 1010 * 10
def test_float_precision_round_trip_with_text(self):
# gh-15140 - This should not segfault on Python 2.7+
diff --git a/pandas/tests/io/parser/common.py b/pandas/tests/io/parser/common.py
index afb23f540264e..87235f7580b08 100644
--- a/pandas/tests/io/parser/common.py
+++ b/pandas/tests/io/parser/common.py
@@ -693,7 +693,7 @@ def test_missing_trailing_delimiters(self):
1,3,3,
1,4,5"""
result = self.read_csv(StringIO(data))
- self.assertTrue(result['D'].isnull()[1:].all())
+ assert result['D'].isnull()[1:].all()
def test_skipinitialspace(self):
s = ('"09-Apr-2012", "01:10:18.300", 2456026.548822908, 12849, '
@@ -707,7 +707,7 @@ def test_skipinitialspace(self):
# it's 33 columns
result = self.read_csv(sfile, names=lrange(33), na_values=['-9999.0'],
header=None, skipinitialspace=True)
- self.assertTrue(pd.isnull(result.iloc[0, 29]))
+ assert pd.isnull(result.iloc[0, 29])
def test_utf16_bom_skiprows(self):
# #2298
@@ -794,8 +794,8 @@ def test_escapechar(self):
quotechar='"', encoding='utf-8')
self.assertEqual(result['SEARCH_TERM'][2],
'SLAGBORD, "Bergslagen", IKEA:s 1700-tals serie')
- self.assertTrue(np.array_equal(result.columns,
- ['SEARCH_TERM', 'ACTUAL_URL']))
+ tm.assert_index_equal(result.columns,
+ Index(['SEARCH_TERM', 'ACTUAL_URL']))
def test_int64_min_issues(self):
# #2599
@@ -831,7 +831,7 @@ def test_parse_integers_above_fp_precision(self):
17007000002000192,
17007000002000194]})
- self.assertTrue(np.array_equal(result['Numbers'], expected['Numbers']))
+ assert np.array_equal(result['Numbers'], expected['Numbers'])
def test_chunks_have_consistent_numerical_type(self):
integers = [str(i) for i in range(499999)]
@@ -840,7 +840,7 @@ def test_chunks_have_consistent_numerical_type(self):
with tm.assert_produces_warning(False):
df = self.read_csv(StringIO(data))
# Assert that types were coerced.
- self.assertTrue(type(df.a[0]) is np.float64)
+ assert type(df.a[0]) is np.float64
self.assertEqual(df.a.dtype, np.float)
def test_warn_if_chunks_have_mismatched_type(self):
@@ -862,10 +862,10 @@ def test_integer_overflow_bug(self):
data = "65248E10 11\n55555E55 22\n"
result = self.read_csv(StringIO(data), header=None, sep=' ')
- self.assertTrue(result[0].dtype == np.float64)
+ assert result[0].dtype == np.float64
result = self.read_csv(StringIO(data), header=None, sep=r'\s+')
- self.assertTrue(result[0].dtype == np.float64)
+ assert result[0].dtype == np.float64
def test_catch_too_many_names(self):
# see gh-5156
@@ -953,7 +953,7 @@ def test_int64_overflow(self):
# 13007854817840016671868 > UINT64_MAX, so this
# will overflow and return object as the dtype.
result = self.read_csv(StringIO(data))
- self.assertTrue(result['ID'].dtype == object)
+ assert result['ID'].dtype == object
# 13007854817840016671868 > UINT64_MAX, so attempts
# to cast to either int64 or uint64 will result in
diff --git a/pandas/tests/io/parser/converters.py b/pandas/tests/io/parser/converters.py
index 6cea0f3e7b36c..e10ee016b749a 100644
--- a/pandas/tests/io/parser/converters.py
+++ b/pandas/tests/io/parser/converters.py
@@ -133,7 +133,7 @@ def convert_score(x):
result = self.read_csv(fh, converters={'score': convert_score,
'days': convert_days},
na_values=['', None])
- self.assertTrue(pd.isnull(result['days'][1]))
+ assert pd.isnull(result['days'][1])
fh = StringIO(data)
result2 = self.read_csv(fh, converters={'score': convert_score,
diff --git a/pandas/tests/io/parser/index_col.py b/pandas/tests/io/parser/index_col.py
index 168f6eda46ed1..6283104dffd70 100644
--- a/pandas/tests/io/parser/index_col.py
+++ b/pandas/tests/io/parser/index_col.py
@@ -63,7 +63,7 @@ def test_infer_index_col(self):
baz,7,8,9
"""
data = self.read_csv(StringIO(data))
- self.assertTrue(data.index.equals(Index(['foo', 'bar', 'baz'])))
+ assert data.index.equals(Index(['foo', 'bar', 'baz']))
def test_empty_index_col_scenarios(self):
data = 'x,y,z'
diff --git a/pandas/tests/io/parser/na_values.py b/pandas/tests/io/parser/na_values.py
index cf29dbdfef49d..787fa304f84b2 100644
--- a/pandas/tests/io/parser/na_values.py
+++ b/pandas/tests/io/parser/na_values.py
@@ -249,7 +249,7 @@ def test_na_trailing_columns(self):
result = self.read_csv(StringIO(data))
self.assertEqual(result['Date'][1], '2012-05-12')
- self.assertTrue(result['UnitPrice'].isnull().all())
+ assert result['UnitPrice'].isnull().all()
def test_na_values_scalar(self):
# see gh-12224
diff --git a/pandas/tests/io/parser/parse_dates.py b/pandas/tests/io/parser/parse_dates.py
index 3833fa3d7ff4e..dfccf48b03be3 100644
--- a/pandas/tests/io/parser/parse_dates.py
+++ b/pandas/tests/io/parser/parse_dates.py
@@ -461,7 +461,7 @@ def test_parse_dates_empty_string(self):
data = "Date, test\n2012-01-01, 1\n,2"
result = self.read_csv(StringIO(data), parse_dates=["Date"],
na_filter=False)
- self.assertTrue(result['Date'].isnull()[1])
+ assert result['Date'].isnull()[1]
def test_parse_dates_noconvert_thousands(self):
# see gh-14066
@@ -520,7 +520,7 @@ def test_parse_date_time(self):
datetime(2008, 2, 4, 6, 8, 0)])
result = conv.parse_date_time(dates, times)
- self.assertTrue((result == expected).all())
+ assert (result == expected).all()
data = """\
date, time, a, b
@@ -551,7 +551,7 @@ def test_parse_date_fields(self):
days = np.array([3, 4])
result = conv.parse_date_fields(years, months, days)
expected = np.array([datetime(2007, 1, 3), datetime(2008, 2, 4)])
- self.assertTrue((result == expected).all())
+ assert (result == expected).all()
data = ("year, month, day, a\n 2001 , 01 , 10 , 10.\n"
"2001 , 02 , 1 , 11.")
@@ -575,7 +575,7 @@ def test_datetime_six_col(self):
result = conv.parse_all_fields(years, months, days,
hours, minutes, seconds)
- self.assertTrue((result == expected).all())
+ assert (result == expected).all()
data = """\
year, month, day, hour, minute, second, a, b
diff --git a/pandas/tests/io/parser/test_network.py b/pandas/tests/io/parser/test_network.py
index b9920983856d4..7636563586a8f 100644
--- a/pandas/tests/io/parser/test_network.py
+++ b/pandas/tests/io/parser/test_network.py
@@ -60,14 +60,14 @@ def test_parse_public_s3_bucket(self):
for ext, comp in [('', None), ('.gz', 'gzip'), ('.bz2', 'bz2')]:
df = read_csv('s3://pandas-test/tips.csv' +
ext, compression=comp)
- self.assertTrue(isinstance(df, DataFrame))
+ assert isinstance(df, DataFrame)
assert not df.empty
tm.assert_frame_equal(read_csv(
tm.get_data_path('tips.csv')), df)
# Read public file from bucket with not-public contents
df = read_csv('s3://cant_get_it/tips.csv')
- self.assertTrue(isinstance(df, DataFrame))
+ assert isinstance(df, DataFrame)
assert not df.empty
tm.assert_frame_equal(read_csv(tm.get_data_path('tips.csv')), df)
@@ -75,7 +75,7 @@ def test_parse_public_s3_bucket(self):
def test_parse_public_s3n_bucket(self):
# Read from AWS s3 as "s3n" URL
df = read_csv('s3n://pandas-test/tips.csv', nrows=10)
- self.assertTrue(isinstance(df, DataFrame))
+ assert isinstance(df, DataFrame)
assert not df.empty
tm.assert_frame_equal(read_csv(
tm.get_data_path('tips.csv')).iloc[:10], df)
@@ -84,7 +84,7 @@ def test_parse_public_s3n_bucket(self):
def test_parse_public_s3a_bucket(self):
# Read from AWS s3 as "s3a" URL
df = read_csv('s3a://pandas-test/tips.csv', nrows=10)
- self.assertTrue(isinstance(df, DataFrame))
+ assert isinstance(df, DataFrame)
assert not df.empty
tm.assert_frame_equal(read_csv(
tm.get_data_path('tips.csv')).iloc[:10], df)
@@ -94,7 +94,7 @@ def test_parse_public_s3_bucket_nrows(self):
for ext, comp in [('', None), ('.gz', 'gzip'), ('.bz2', 'bz2')]:
df = read_csv('s3://pandas-test/tips.csv' +
ext, nrows=10, compression=comp)
- self.assertTrue(isinstance(df, DataFrame))
+ assert isinstance(df, DataFrame)
assert not df.empty
tm.assert_frame_equal(read_csv(
tm.get_data_path('tips.csv')).iloc[:10], df)
@@ -112,7 +112,7 @@ def test_parse_public_s3_bucket_chunked(self):
# Read a couple of chunks and make sure we see them
# properly.
df = df_reader.get_chunk()
- self.assertTrue(isinstance(df, DataFrame))
+ assert isinstance(df, DataFrame)
assert not df.empty
true_df = local_tips.iloc[
chunksize * i_chunk: chunksize * (i_chunk + 1)]
@@ -131,7 +131,7 @@ def test_parse_public_s3_bucket_chunked_python(self):
for i_chunk in [0, 1, 2]:
# Read a couple of chunks and make sure we see them properly.
df = df_reader.get_chunk()
- self.assertTrue(isinstance(df, DataFrame))
+ assert isinstance(df, DataFrame)
assert not df.empty
true_df = local_tips.iloc[
chunksize * i_chunk: chunksize * (i_chunk + 1)]
@@ -142,7 +142,7 @@ def test_parse_public_s3_bucket_python(self):
for ext, comp in [('', None), ('.gz', 'gzip'), ('.bz2', 'bz2')]:
df = read_csv('s3://pandas-test/tips.csv' + ext, engine='python',
compression=comp)
- self.assertTrue(isinstance(df, DataFrame))
+ assert isinstance(df, DataFrame)
assert not df.empty
tm.assert_frame_equal(read_csv(
tm.get_data_path('tips.csv')), df)
@@ -152,7 +152,7 @@ def test_infer_s3_compression(self):
for ext in ['', '.gz', '.bz2']:
df = read_csv('s3://pandas-test/tips.csv' + ext,
engine='python', compression='infer')
- self.assertTrue(isinstance(df, DataFrame))
+ assert isinstance(df, DataFrame)
assert not df.empty
tm.assert_frame_equal(read_csv(
tm.get_data_path('tips.csv')), df)
@@ -162,7 +162,7 @@ def test_parse_public_s3_bucket_nrows_python(self):
for ext, comp in [('', None), ('.gz', 'gzip'), ('.bz2', 'bz2')]:
df = read_csv('s3://pandas-test/tips.csv' + ext, engine='python',
nrows=10, compression=comp)
- self.assertTrue(isinstance(df, DataFrame))
+ assert isinstance(df, DataFrame)
assert not df.empty
tm.assert_frame_equal(read_csv(
tm.get_data_path('tips.csv')).iloc[:10], df)
diff --git a/pandas/tests/io/parser/test_read_fwf.py b/pandas/tests/io/parser/test_read_fwf.py
index ffb04c52e8d93..90231e01d0173 100644
--- a/pandas/tests/io/parser/test_read_fwf.py
+++ b/pandas/tests/io/parser/test_read_fwf.py
@@ -166,7 +166,7 @@ def test_fwf_regression(self):
for c in df.columns:
res = df.loc[:, c]
- self.assertTrue(len(res))
+ assert len(res)
def test_fwf_for_uint8(self):
data = """1421302965.213420 PRI=3 PGN=0xef00 DST=0x17 SRC=0x28 04 154 00 00 00 00 00 127
diff --git a/pandas/tests/io/parser/test_textreader.py b/pandas/tests/io/parser/test_textreader.py
index bf1d8d4f3e27c..ad37f828bba6f 100644
--- a/pandas/tests/io/parser/test_textreader.py
+++ b/pandas/tests/io/parser/test_textreader.py
@@ -253,14 +253,14 @@ def _make_reader(**kwds):
self.assertEqual(result[0].dtype, 'S5')
ex_values = np.array(['a', 'aa', 'aaa', 'aaaa', 'aaaaa'], dtype='S5')
- self.assertTrue((result[0] == ex_values).all())
+ assert (result[0] == ex_values).all()
self.assertEqual(result[1].dtype, 'i4')
reader = _make_reader(dtype='S4')
result = reader.read()
self.assertEqual(result[0].dtype, 'S4')
ex_values = np.array(['a', 'aa', 'aaa', 'aaaa', 'aaaa'], dtype='S4')
- self.assertTrue((result[0] == ex_values).all())
+ assert (result[0] == ex_values).all()
self.assertEqual(result[1].dtype, 'S4')
def test_numpy_string_dtype_as_recarray(self):
@@ -279,7 +279,7 @@ def _make_reader(**kwds):
result = reader.read()
self.assertEqual(result['0'].dtype, 'S4')
ex_values = np.array(['a', 'aa', 'aaa', 'aaaa', 'aaaa'], dtype='S4')
- self.assertTrue((result['0'] == ex_values).all())
+ assert (result['0'] == ex_values).all()
self.assertEqual(result['1'].dtype, 'S4')
def test_pass_dtype(self):
@@ -325,8 +325,8 @@ def _make_reader(**kwds):
exp = _make_reader().read()
self.assertEqual(len(result), 2)
- self.assertTrue((result[1] == exp[1]).all())
- self.assertTrue((result[2] == exp[2]).all())
+ assert (result[1] == exp[1]).all()
+ assert (result[2] == exp[2]).all()
def test_cr_delimited(self):
def _test(text, **kwargs):
@@ -392,7 +392,7 @@ def test_empty_csv_input(self):
# GH14867
df = read_csv(StringIO(), chunksize=20, header=None,
names=['a', 'b', 'c'])
- self.assertTrue(isinstance(df, TextFileReader))
+ assert isinstance(df, TextFileReader)
def assert_array_dicts_equal(left, right):
diff --git a/pandas/tests/io/parser/usecols.py b/pandas/tests/io/parser/usecols.py
index db8e5b7653a51..b52106d9e8595 100644
--- a/pandas/tests/io/parser/usecols.py
+++ b/pandas/tests/io/parser/usecols.py
@@ -44,8 +44,8 @@ def test_usecols(self):
exp = self.read_csv(StringIO(data))
self.assertEqual(len(result.columns), 2)
- self.assertTrue((result['b'] == exp['b']).all())
- self.assertTrue((result['c'] == exp['c']).all())
+ assert (result['b'] == exp['b']).all()
+ assert (result['c'] == exp['c']).all()
tm.assert_frame_equal(result, result2)
diff --git a/pandas/tests/io/sas/test_sas7bdat.py b/pandas/tests/io/sas/test_sas7bdat.py
index 69073a90e9669..afd40e7017cff 100644
--- a/pandas/tests/io/sas/test_sas7bdat.py
+++ b/pandas/tests/io/sas/test_sas7bdat.py
@@ -75,7 +75,7 @@ def test_iterator_loop(self):
y = 0
for x in rdr:
y += x.shape[0]
- self.assertTrue(y == rdr.row_count)
+ assert y == rdr.row_count
rdr.close()
def test_iterator_read_too_much(self):
diff --git a/pandas/tests/io/sas/test_xport.py b/pandas/tests/io/sas/test_xport.py
index fe2f7cb4bf4be..2ed7ebbbfce32 100644
--- a/pandas/tests/io/sas/test_xport.py
+++ b/pandas/tests/io/sas/test_xport.py
@@ -40,7 +40,7 @@ def test1_basic(self):
# Test reading beyond end of file
reader = read_sas(self.file01, format="xport", iterator=True)
data = reader.read(num_rows + 100)
- self.assertTrue(data.shape[0] == num_rows)
+ assert data.shape[0] == num_rows
reader.close()
# Test incremental read with `read` method.
@@ -61,7 +61,7 @@ def test1_basic(self):
for x in reader:
m += x.shape[0]
reader.close()
- self.assertTrue(m == num_rows)
+ assert m == num_rows
# Read full file with `read_sas` method
data = read_sas(self.file01)
diff --git a/pandas/tests/io/test_common.py b/pandas/tests/io/test_common.py
index 700915b81dd31..3eee3f619f33d 100644
--- a/pandas/tests/io/test_common.py
+++ b/pandas/tests/io/test_common.py
@@ -39,7 +39,7 @@ def test_expand_user(self):
expanded_name = common._expand_user(filename)
self.assertNotEqual(expanded_name, filename)
- self.assertTrue(isabs(expanded_name))
+ assert isabs(expanded_name)
self.assertEqual(os.path.expanduser(filename), expanded_name)
def test_expand_user_normal_path(self):
@@ -69,7 +69,7 @@ def test_get_filepath_or_buffer_with_path(self):
filename = '~/sometest'
filepath_or_buffer, _, _ = common.get_filepath_or_buffer(filename)
self.assertNotEqual(filepath_or_buffer, filename)
- self.assertTrue(isabs(filepath_or_buffer))
+ assert isabs(filepath_or_buffer)
self.assertEqual(os.path.expanduser(filename), filepath_or_buffer)
def test_get_filepath_or_buffer_with_buffer(self):
@@ -127,7 +127,7 @@ def test_get_attr(self):
attrs.append('__next__')
for attr in attrs:
- self.assertTrue(hasattr(wrapper, attr))
+ assert hasattr(wrapper, attr)
assert not hasattr(wrapper, 'foo')
diff --git a/pandas/tests/io/test_excel.py b/pandas/tests/io/test_excel.py
index 2a3a4992ead71..6092cd4180675 100644
--- a/pandas/tests/io/test_excel.py
+++ b/pandas/tests/io/test_excel.py
@@ -656,7 +656,7 @@ def test_reader_closes_file(self):
# parses okay
read_excel(xlsx, 'Sheet1', index_col=0)
- self.assertTrue(f.closed)
+ assert f.closed
def test_creating_and_reading_multiple_sheets(self):
# Test reading multiple sheets, from a runtime created excel file
@@ -1630,7 +1630,7 @@ def test_to_excel_unicode_filename(self):
# xlsaddrs += ["B1", "D1", "F1"]
# for xlsaddr in xlsaddrs:
# cell = ws.cell(xlsaddr)
- # self.assertTrue(cell.style.font.bold)
+ # assert cell.style.font.bold
# self.assertEqual(openpyxl.style.Border.BORDER_THIN,
# cell.style.borders.top.border_style)
# self.assertEqual(openpyxl.style.Border.BORDER_THIN,
@@ -1643,7 +1643,7 @@ def test_to_excel_unicode_filename(self):
# cell.style.alignment.horizontal)
# mergedcells_addrs = ["C1", "E1", "G1"]
# for maddr in mergedcells_addrs:
- # self.assertTrue(ws.cell(maddr).merged)
+ # assert ws.cell(maddr).merged
# os.remove(filename)
def test_excel_010_hemstring(self):
@@ -1689,15 +1689,15 @@ def roundtrip(df, header=True, parser_hdr=0, index=True):
# no nans
for r in range(len(res.index)):
for c in range(len(res.columns)):
- self.assertTrue(res.iloc[r, c] is not np.nan)
+ assert res.iloc[r, c] is not np.nan
res = roundtrip(DataFrame([0]))
self.assertEqual(res.shape, (1, 1))
- self.assertTrue(res.iloc[0, 0] is not np.nan)
+ assert res.iloc[0, 0] is not np.nan
res = roundtrip(DataFrame([0]), False, None)
self.assertEqual(res.shape, (1, 2))
- self.assertTrue(res.iloc[0, 0] is not np.nan)
+ assert res.iloc[0, 0] is not np.nan
def test_excel_010_hemstring_raises_NotImplementedError(self):
# This test was failing only for j>1 and header=False,
@@ -1908,7 +1908,7 @@ def test_to_excel_styleconverter(self):
"alignment": {"horizontal": "center", "vertical": "top"}}
xlsx_style = _Openpyxl1Writer._convert_to_style(hstyle)
- self.assertTrue(xlsx_style.font.bold)
+ assert xlsx_style.font.bold
self.assertEqual(openpyxl.style.Border.BORDER_THIN,
xlsx_style.borders.top.border_style)
self.assertEqual(openpyxl.style.Border.BORDER_THIN,
@@ -2200,7 +2200,7 @@ def test_to_excel_styleconverter(self):
"alignment": {"horizontal": "center", "vertical": "top"}}
xls_style = _XlwtWriter._convert_to_style(hstyle)
- self.assertTrue(xls_style.font.bold)
+ assert xls_style.font.bold
self.assertEqual(xlwt.Borders.THIN, xls_style.borders.top)
self.assertEqual(xlwt.Borders.THIN, xls_style.borders.right)
self.assertEqual(xlwt.Borders.THIN, xls_style.borders.bottom)
@@ -2332,8 +2332,8 @@ def write_cells(self, *args, **kwargs):
def check_called(func):
func()
- self.assertTrue(len(called_save) >= 1)
- self.assertTrue(len(called_write_cells) >= 1)
+ assert len(called_save) >= 1
+ assert len(called_write_cells) >= 1
del called_save[:]
del called_write_cells[:]
diff --git a/pandas/tests/io/test_html.py b/pandas/tests/io/test_html.py
index cf08754a18527..db6ab236ee793 100644
--- a/pandas/tests/io/test_html.py
+++ b/pandas/tests/io/test_html.py
@@ -361,7 +361,7 @@ def test_negative_skiprows(self):
def test_multiple_matches(self):
url = 'https://docs.python.org/2/'
dfs = self.read_html(url, match='Python')
- self.assertTrue(len(dfs) > 1)
+ assert len(dfs) > 1
@network
def test_python_docs_table(self):
diff --git a/pandas/tests/io/test_packers.py b/pandas/tests/io/test_packers.py
index f8923035b3a63..ae1cadcd41496 100644
--- a/pandas/tests/io/test_packers.py
+++ b/pandas/tests/io/test_packers.py
@@ -163,7 +163,7 @@ def test_numpy_scalar_float(self):
def test_numpy_scalar_complex(self):
x = np.complex64(np.random.rand() + 1j * np.random.rand())
x_rec = self.encode_decode(x)
- self.assertTrue(np.allclose(x, x_rec))
+ assert np.allclose(x, x_rec)
def test_scalar_float(self):
x = np.random.rand()
@@ -173,7 +173,7 @@ def test_scalar_float(self):
def test_scalar_complex(self):
x = np.random.rand() + 1j * np.random.rand()
x_rec = self.encode_decode(x)
- self.assertTrue(np.allclose(x, x_rec))
+ assert np.allclose(x, x_rec)
def test_list_numpy_float(self):
x = [np.float32(np.random.rand()) for i in range(5)]
@@ -192,7 +192,7 @@ def test_list_numpy_float_complex(self):
[np.complex128(np.random.rand() + 1j * np.random.rand())
for i in range(5)]
x_rec = self.encode_decode(x)
- self.assertTrue(np.allclose(x, x_rec))
+ assert np.allclose(x, x_rec)
def test_list_float(self):
x = [np.random.rand() for i in range(5)]
@@ -207,7 +207,7 @@ def test_list_float_complex(self):
x = [np.random.rand() for i in range(5)] + \
[(np.random.rand() + 1j * np.random.rand()) for i in range(5)]
x_rec = self.encode_decode(x)
- self.assertTrue(np.allclose(x, x_rec))
+ assert np.allclose(x, x_rec)
def test_dict_float(self):
x = {'foo': 1.0, 'bar': 2.0}
@@ -247,8 +247,8 @@ def test_numpy_array_float(self):
def test_numpy_array_complex(self):
x = (np.random.rand(5) + 1j * np.random.rand(5)).astype(np.complex128)
x_rec = self.encode_decode(x)
- self.assertTrue(all(map(lambda x, y: x == y, x, x_rec)) and
- x.dtype == x_rec.dtype)
+ assert (all(map(lambda x, y: x == y, x, x_rec)) and
+ x.dtype == x_rec.dtype)
def test_list_mixed(self):
x = [1.0, np.float32(3.5), np.complex128(4.25), u('foo')]
@@ -613,7 +613,7 @@ def _test_compression(self, compress):
assert_frame_equal(value, expected)
# make sure that we can write to the new frames
for block in value._data.blocks:
- self.assertTrue(block.values.flags.writeable)
+ assert block.values.flags.writeable
def test_compression_zlib(self):
if not _ZLIB_INSTALLED:
@@ -662,7 +662,7 @@ def decompress(ob):
# make sure that we can write to the new frames even though
# we needed to copy the data
for block in value._data.blocks:
- self.assertTrue(block.values.flags.writeable)
+ assert block.values.flags.writeable
# mutate the data in some way
block.values[0] += rhs[block.dtype]
@@ -695,14 +695,14 @@ def _test_small_strings_no_warn(self, compress):
empty_unpacked = self.encode_decode(empty, compress=compress)
tm.assert_numpy_array_equal(empty_unpacked, empty)
- self.assertTrue(empty_unpacked.flags.writeable)
+ assert empty_unpacked.flags.writeable
char = np.array([ord(b'a')], dtype='uint8')
with tm.assert_produces_warning(None):
char_unpacked = self.encode_decode(char, compress=compress)
tm.assert_numpy_array_equal(char_unpacked, char)
- self.assertTrue(char_unpacked.flags.writeable)
+ assert char_unpacked.flags.writeable
# if this test fails I am sorry because the interpreter is now in a
# bad state where b'a' points to 98 == ord(b'b').
char_unpacked[0] = ord(b'b')
@@ -732,15 +732,15 @@ def test_readonly_axis_blosc(self):
pytest.skip('no blosc')
df1 = DataFrame({'A': list('abcd')})
df2 = DataFrame(df1, index=[1., 2., 3., 4.])
- self.assertTrue(1 in self.encode_decode(df1['A'], compress='blosc'))
- self.assertTrue(1. in self.encode_decode(df2['A'], compress='blosc'))
+ assert 1 in self.encode_decode(df1['A'], compress='blosc')
+ assert 1. in self.encode_decode(df2['A'], compress='blosc')
def test_readonly_axis_zlib(self):
# GH11880
df1 = DataFrame({'A': list('abcd')})
df2 = DataFrame(df1, index=[1., 2., 3., 4.])
- self.assertTrue(1 in self.encode_decode(df1['A'], compress='zlib'))
- self.assertTrue(1. in self.encode_decode(df2['A'], compress='zlib'))
+ assert 1 in self.encode_decode(df1['A'], compress='zlib')
+ assert 1. in self.encode_decode(df2['A'], compress='zlib')
def test_readonly_axis_blosc_to_sql(self):
# GH11880
diff --git a/pandas/tests/io/test_pytables.py b/pandas/tests/io/test_pytables.py
index 6e7fca9a29e98..ae1b4137c354f 100644
--- a/pandas/tests/io/test_pytables.py
+++ b/pandas/tests/io/test_pytables.py
@@ -338,10 +338,10 @@ def test_api_default_format(self):
pandas.set_option('io.hdf.default_format', 'table')
_maybe_remove(store, 'df')
store.put('df', df)
- self.assertTrue(store.get_storer('df').is_table)
+ assert store.get_storer('df').is_table
_maybe_remove(store, 'df2')
store.append('df2', df)
- self.assertTrue(store.get_storer('df').is_table)
+ assert store.get_storer('df').is_table
pandas.set_option('io.hdf.default_format', None)
@@ -358,10 +358,10 @@ def test_api_default_format(self):
pandas.set_option('io.hdf.default_format', 'table')
df.to_hdf(path, 'df3')
with HDFStore(path) as store:
- self.assertTrue(store.get_storer('df3').is_table)
+ assert store.get_storer('df3').is_table
df.to_hdf(path, 'df4', append=True)
with HDFStore(path) as store:
- self.assertTrue(store.get_storer('df4').is_table)
+ assert store.get_storer('df4').is_table
pandas.set_option('io.hdf.default_format', None)
@@ -376,14 +376,14 @@ def test_keys(self):
store['foo/bar'] = tm.makePanel()
self.assertEqual(len(store), 5)
expected = set(['/a', '/b', '/c', '/d', '/foo/bar'])
- self.assertTrue(set(store.keys()) == expected)
- self.assertTrue(set(store) == expected)
+ assert set(store.keys()) == expected
+ assert set(store) == expected
def test_iter_empty(self):
with ensure_clean_store(self.path) as store:
# GH 12221
- self.assertTrue(list(store) == [])
+ assert list(store) == []
def test_repr(self):
@@ -549,7 +549,7 @@ def test_reopen_handle(self):
# truncation ok here
store.open('w')
- self.assertTrue(store.is_open)
+ assert store.is_open
self.assertEqual(len(store), 0)
store.close()
assert not store.is_open
@@ -559,7 +559,7 @@ def test_reopen_handle(self):
# reopen as read
store.open('r')
- self.assertTrue(store.is_open)
+ assert store.is_open
self.assertEqual(len(store), 1)
self.assertEqual(store._mode, 'r')
store.close()
@@ -567,7 +567,7 @@ def test_reopen_handle(self):
# reopen as append
store.open('a')
- self.assertTrue(store.is_open)
+ assert store.is_open
self.assertEqual(len(store), 1)
self.assertEqual(store._mode, 'a')
store.close()
@@ -575,7 +575,7 @@ def test_reopen_handle(self):
# reopen as append (again)
store.open('a')
- self.assertTrue(store.is_open)
+ assert store.is_open
self.assertEqual(len(store), 1)
self.assertEqual(store._mode, 'a')
store.close()
@@ -1232,7 +1232,7 @@ def test_ndim_indexables(self):
def check_indexers(key, indexers):
for i, idx in enumerate(indexers):
descr = getattr(store.root, key).table.description
- self.assertTrue(getattr(descr, idx)._v_pos == i)
+ assert getattr(descr, idx)._v_pos == i
# append then change (will take existing schema)
indexers = ['items', 'major_axis', 'minor_axis']
@@ -2280,7 +2280,7 @@ def test_remove_where(self):
# deleted number (entire table)
n = store.remove('wp', [])
- self.assertTrue(n == 120)
+ assert n == 120
# non - empty where
_maybe_remove(store, 'wp')
@@ -2300,7 +2300,7 @@ def test_remove_startstop(self):
_maybe_remove(store, 'wp1')
store.put('wp1', wp, format='t')
n = store.remove('wp1', start=32)
- self.assertTrue(n == 120 - 32)
+ assert n == 120 - 32
result = store.select('wp1')
expected = wp.reindex(major_axis=wp.major_axis[:32 // 4])
assert_panel_equal(result, expected)
@@ -2308,7 +2308,7 @@ def test_remove_startstop(self):
_maybe_remove(store, 'wp2')
store.put('wp2', wp, format='t')
n = store.remove('wp2', start=-32)
- self.assertTrue(n == 32)
+ assert n == 32
result = store.select('wp2')
expected = wp.reindex(major_axis=wp.major_axis[:-32 // 4])
assert_panel_equal(result, expected)
@@ -2317,7 +2317,7 @@ def test_remove_startstop(self):
_maybe_remove(store, 'wp3')
store.put('wp3', wp, format='t')
n = store.remove('wp3', stop=32)
- self.assertTrue(n == 32)
+ assert n == 32
result = store.select('wp3')
expected = wp.reindex(major_axis=wp.major_axis[32 // 4:])
assert_panel_equal(result, expected)
@@ -2325,7 +2325,7 @@ def test_remove_startstop(self):
_maybe_remove(store, 'wp4')
store.put('wp4', wp, format='t')
n = store.remove('wp4', stop=-32)
- self.assertTrue(n == 120 - 32)
+ assert n == 120 - 32
result = store.select('wp4')
expected = wp.reindex(major_axis=wp.major_axis[-32 // 4:])
assert_panel_equal(result, expected)
@@ -2334,7 +2334,7 @@ def test_remove_startstop(self):
_maybe_remove(store, 'wp5')
store.put('wp5', wp, format='t')
n = store.remove('wp5', start=16, stop=-16)
- self.assertTrue(n == 120 - 32)
+ assert n == 120 - 32
result = store.select('wp5')
expected = wp.reindex(
major_axis=(wp.major_axis[:16 // 4]
@@ -2344,7 +2344,7 @@ def test_remove_startstop(self):
_maybe_remove(store, 'wp6')
store.put('wp6', wp, format='t')
n = store.remove('wp6', start=16, stop=16)
- self.assertTrue(n == 0)
+ assert n == 0
result = store.select('wp6')
expected = wp.reindex(major_axis=wp.major_axis)
assert_panel_equal(result, expected)
@@ -2358,7 +2358,7 @@ def test_remove_startstop(self):
crit = 'major_axis=date'
store.put('wp7', wp, format='t')
n = store.remove('wp7', where=[crit], stop=80)
- self.assertTrue(n == 28)
+ assert n == 28
result = store.select('wp7')
expected = wp.reindex(major_axis=wp.major_axis.difference(
wp.major_axis[np.arange(0, 20, 3)]))
@@ -2377,7 +2377,7 @@ def test_remove_crit(self):
crit4 = 'major_axis=date4'
store.put('wp3', wp, format='t')
n = store.remove('wp3', where=[crit4])
- self.assertTrue(n == 36)
+ assert n == 36
result = store.select('wp3')
expected = wp.reindex(
@@ -2392,10 +2392,10 @@ def test_remove_crit(self):
crit1 = 'major_axis>date'
crit2 = "minor_axis=['A', 'D']"
n = store.remove('wp', where=[crit1])
- self.assertTrue(n == 56)
+ assert n == 56
n = store.remove('wp', where=[crit2])
- self.assertTrue(n == 32)
+ assert n == 32
result = store['wp']
expected = wp.truncate(after=date).reindex(minor=['B', 'C'])
@@ -2819,7 +2819,7 @@ def test_frame(self):
df['foo'] = np.random.randn(len(df))
store['df'] = df
recons = store['df']
- self.assertTrue(recons._data.is_consolidated())
+ assert recons._data.is_consolidated()
# empty
self._check_roundtrip(df[:0], tm.assert_frame_equal)
@@ -4184,7 +4184,7 @@ def test_start_stop_table(self):
# out of range
result = store.select(
'df', "columns=['A']", start=30, stop=40)
- self.assertTrue(len(result) == 0)
+ assert len(result) == 0
expected = df.loc[30:40, ['A']]
tm.assert_frame_equal(result, expected)
@@ -4495,8 +4495,7 @@ def do_copy(f=None, new_f=None, keys=None,
if propindexes:
for a in orig_t.axes:
if a.is_indexed:
- self.assertTrue(
- new_t[a.name].is_indexed)
+ assert new_t[a.name].is_indexed
finally:
safe_close(store)
@@ -4803,8 +4802,8 @@ def test_duplicate_column_name(self):
other = read_hdf(path, 'df')
tm.assert_frame_equal(df, other)
- self.assertTrue(df.equals(other))
- self.assertTrue(other.equals(df))
+ assert df.equals(other)
+ assert other.equals(df)
def test_round_trip_equals(self):
# GH 9330
@@ -4814,8 +4813,8 @@ def test_round_trip_equals(self):
df.to_hdf(path, 'df', format='table')
other = read_hdf(path, 'df')
tm.assert_frame_equal(df, other)
- self.assertTrue(df.equals(other))
- self.assertTrue(other.equals(df))
+ assert df.equals(other)
+ assert other.equals(df)
def test_preserve_timedeltaindex_type(self):
# GH9635
@@ -4851,7 +4850,7 @@ def test_colums_multiindex_modified(self):
cols2load = list('BCD')
cols2load_original = list(cols2load)
df_loaded = read_hdf(path, 'df', columns=cols2load) # noqa
- self.assertTrue(cols2load_original == cols2load)
+ assert cols2load_original == cols2load
def test_to_hdf_with_object_column_names(self):
# GH9057
@@ -4902,7 +4901,7 @@ def test_read_hdf_open_store(self):
store = HDFStore(path, mode='r')
indirect = read_hdf(store, 'df')
tm.assert_frame_equal(direct, indirect)
- self.assertTrue(store.is_open)
+ assert store.is_open
store.close()
def test_read_hdf_iterator(self):
@@ -4916,7 +4915,7 @@ def test_read_hdf_iterator(self):
df.to_hdf(path, 'df', mode='w', format='t')
direct = read_hdf(path, 'df')
iterator = read_hdf(path, 'df', iterator=True)
- self.assertTrue(isinstance(iterator, TableIterator))
+ assert isinstance(iterator, TableIterator)
indirect = next(iterator.__iter__())
tm.assert_frame_equal(direct, indirect)
iterator.store.close()
@@ -5023,7 +5022,7 @@ def test_query_long_float_literal(self):
cutoff = 1000000000.0006
result = store.select('test', "A < %.4f" % cutoff)
- self.assertTrue(result.empty)
+ assert result.empty
cutoff = 1000000000.0010
result = store.select('test', "A > %.4f" % cutoff)
diff --git a/pandas/tests/io/test_s3.py b/pandas/tests/io/test_s3.py
index cff8eef74a607..36a0304bddfaf 100644
--- a/pandas/tests/io/test_s3.py
+++ b/pandas/tests/io/test_s3.py
@@ -6,5 +6,5 @@
class TestS3URL(tm.TestCase):
def test_is_s3_url(self):
- self.assertTrue(_is_s3_url("s3://pandas/somethingelse.com"))
+ assert _is_s3_url("s3://pandas/somethingelse.com")
assert not _is_s3_url("s4://pandas/somethingelse.com")
diff --git a/pandas/tests/io/test_sql.py b/pandas/tests/io/test_sql.py
index 0930d99ea5c30..fd883c9c0ff00 100644
--- a/pandas/tests/io/test_sql.py
+++ b/pandas/tests/io/test_sql.py
@@ -272,8 +272,7 @@ def _check_iris_loaded_frame(self, iris_frame):
pytype = iris_frame.dtypes[0].type
row = iris_frame.iloc[0]
- self.assertTrue(
- issubclass(pytype, np.floating), 'Loaded frame has incorrect type')
+ assert issubclass(pytype, np.floating)
tm.equalContents(row.values, [5.1, 3.5, 1.4, 0.2, 'Iris-setosa'])
def _load_test1_data(self):
@@ -372,8 +371,7 @@ def _to_sql(self):
self.drop_table('test_frame1')
self.pandasSQL.to_sql(self.test_frame1, 'test_frame1')
- self.assertTrue(self.pandasSQL.has_table(
- 'test_frame1'), 'Table not written to DB')
+ assert self.pandasSQL.has_table('test_frame1')
# Nuke table
self.drop_table('test_frame1')
@@ -387,8 +385,7 @@ def _to_sql_fail(self):
self.pandasSQL.to_sql(
self.test_frame1, 'test_frame1', if_exists='fail')
- self.assertTrue(self.pandasSQL.has_table(
- 'test_frame1'), 'Table not written to DB')
+ assert self.pandasSQL.has_table('test_frame1')
pytest.raises(ValueError, self.pandasSQL.to_sql,
self.test_frame1, 'test_frame1', if_exists='fail')
@@ -403,8 +400,7 @@ def _to_sql_replace(self):
# Add to table again
self.pandasSQL.to_sql(
self.test_frame1, 'test_frame1', if_exists='replace')
- self.assertTrue(self.pandasSQL.has_table(
- 'test_frame1'), 'Table not written to DB')
+ assert self.pandasSQL.has_table('test_frame1')
num_entries = len(self.test_frame1)
num_rows = self._count_rows('test_frame1')
@@ -424,8 +420,7 @@ def _to_sql_append(self):
# Add to table again
self.pandasSQL.to_sql(
self.test_frame1, 'test_frame1', if_exists='append')
- self.assertTrue(self.pandasSQL.has_table(
- 'test_frame1'), 'Table not written to DB')
+ assert self.pandasSQL.has_table('test_frame1')
num_entries = 2 * len(self.test_frame1)
num_rows = self._count_rows('test_frame1')
@@ -528,16 +523,12 @@ def test_read_sql_view(self):
def test_to_sql(self):
sql.to_sql(self.test_frame1, 'test_frame1', self.conn)
- self.assertTrue(
- sql.has_table('test_frame1', self.conn),
- 'Table not written to DB')
+ assert sql.has_table('test_frame1', self.conn)
def test_to_sql_fail(self):
sql.to_sql(self.test_frame1, 'test_frame2',
self.conn, if_exists='fail')
- self.assertTrue(
- sql.has_table('test_frame2', self.conn),
- 'Table not written to DB')
+ assert sql.has_table('test_frame2', self.conn)
pytest.raises(ValueError, sql.to_sql, self.test_frame1,
'test_frame2', self.conn, if_exists='fail')
@@ -548,9 +539,7 @@ def test_to_sql_replace(self):
# Add to table again
sql.to_sql(self.test_frame1, 'test_frame3',
self.conn, if_exists='replace')
- self.assertTrue(
- sql.has_table('test_frame3', self.conn),
- 'Table not written to DB')
+ assert sql.has_table('test_frame3', self.conn)
num_entries = len(self.test_frame1)
num_rows = self._count_rows('test_frame3')
@@ -565,9 +554,7 @@ def test_to_sql_append(self):
# Add to table again
sql.to_sql(self.test_frame1, 'test_frame4',
self.conn, if_exists='append')
- self.assertTrue(
- sql.has_table('test_frame4', self.conn),
- 'Table not written to DB')
+ assert sql.has_table('test_frame4', self.conn)
num_entries = 2 * len(self.test_frame1)
num_rows = self._count_rows('test_frame4')
@@ -629,27 +616,21 @@ def test_date_parsing(self):
df = sql.read_sql_query("SELECT * FROM types_test_data", self.conn,
parse_dates=['DateCol'])
- self.assertTrue(
- issubclass(df.DateCol.dtype.type, np.datetime64),
- "DateCol loaded with incorrect type")
+ assert issubclass(df.DateCol.dtype.type, np.datetime64)
df = sql.read_sql_query("SELECT * FROM types_test_data", self.conn,
parse_dates={'DateCol': '%Y-%m-%d %H:%M:%S'})
- self.assertTrue(
- issubclass(df.DateCol.dtype.type, np.datetime64),
- "DateCol loaded with incorrect type")
+ assert issubclass(df.DateCol.dtype.type, np.datetime64)
df = sql.read_sql_query("SELECT * FROM types_test_data", self.conn,
parse_dates=['IntDateCol'])
- self.assertTrue(issubclass(df.IntDateCol.dtype.type, np.datetime64),
- "IntDateCol loaded with incorrect type")
+ assert issubclass(df.IntDateCol.dtype.type, np.datetime64)
df = sql.read_sql_query("SELECT * FROM types_test_data", self.conn,
parse_dates={'IntDateCol': 's'})
- self.assertTrue(issubclass(df.IntDateCol.dtype.type, np.datetime64),
- "IntDateCol loaded with incorrect type")
+ assert issubclass(df.IntDateCol.dtype.type, np.datetime64)
def test_date_and_index(self):
# Test case where same column appears in parse_date and index_col
@@ -658,11 +639,8 @@ def test_date_and_index(self):
index_col='DateCol',
parse_dates=['DateCol', 'IntDateCol'])
- self.assertTrue(issubclass(df.index.dtype.type, np.datetime64),
- "DateCol loaded with incorrect type")
-
- self.assertTrue(issubclass(df.IntDateCol.dtype.type, np.datetime64),
- "IntDateCol loaded with incorrect type")
+ assert issubclass(df.index.dtype.type, np.datetime64)
+ assert issubclass(df.IntDateCol.dtype.type, np.datetime64)
def test_timedelta(self):
@@ -778,27 +756,27 @@ def test_integer_col_names(self):
def test_get_schema(self):
create_sql = sql.get_schema(self.test_frame1, 'test', con=self.conn)
- self.assertTrue('CREATE' in create_sql)
+ assert 'CREATE' in create_sql
def test_get_schema_dtypes(self):
float_frame = DataFrame({'a': [1.1, 1.2], 'b': [2.1, 2.2]})
dtype = sqlalchemy.Integer if self.mode == 'sqlalchemy' else 'INTEGER'
create_sql = sql.get_schema(float_frame, 'test',
con=self.conn, dtype={'b': dtype})
- self.assertTrue('CREATE' in create_sql)
- self.assertTrue('INTEGER' in create_sql)
+ assert 'CREATE' in create_sql
+ assert 'INTEGER' in create_sql
def test_get_schema_keys(self):
frame = DataFrame({'Col1': [1.1, 1.2], 'Col2': [2.1, 2.2]})
create_sql = sql.get_schema(frame, 'test', con=self.conn, keys='Col1')
constraint_sentence = 'CONSTRAINT test_pk PRIMARY KEY ("Col1")'
- self.assertTrue(constraint_sentence in create_sql)
+ assert constraint_sentence in create_sql
# multiple columns as key (GH10385)
create_sql = sql.get_schema(self.test_frame1, 'test',
con=self.conn, keys=['A', 'B'])
constraint_sentence = 'CONSTRAINT test_pk PRIMARY KEY ("A", "B")'
- self.assertTrue(constraint_sentence in create_sql)
+ assert constraint_sentence in create_sql
def test_chunksize_read(self):
df = DataFrame(np.random.randn(22, 5), columns=list('abcde'))
@@ -957,8 +935,7 @@ def test_sqlalchemy_type_mapping(self):
utc=True)})
db = sql.SQLDatabase(self.conn)
table = sql.SQLTable("test_type", db, frame=df)
- self.assertTrue(isinstance(
- table.table.c['time'].type, sqltypes.DateTime))
+ assert isinstance(table.table.c['time'].type, sqltypes.DateTime)
def test_database_uri_string(self):
@@ -1100,7 +1077,7 @@ def test_safe_names_warning(self):
def test_get_schema2(self):
# without providing a connection object (available for backwards comp)
create_sql = sql.get_schema(self.test_frame1, 'test')
- self.assertTrue('CREATE' in create_sql)
+ assert 'CREATE' in create_sql
def _get_sqlite_column_type(self, schema, column):
@@ -1211,8 +1188,7 @@ def test_create_table(self):
pandasSQL = sql.SQLDatabase(temp_conn)
pandasSQL.to_sql(temp_frame, 'temp_frame')
- self.assertTrue(
- temp_conn.has_table('temp_frame'), 'Table not written to DB')
+ assert temp_conn.has_table('temp_frame')
def test_drop_table(self):
temp_conn = self.connect()
@@ -1223,8 +1199,7 @@ def test_drop_table(self):
pandasSQL = sql.SQLDatabase(temp_conn)
pandasSQL.to_sql(temp_frame, 'temp_frame')
- self.assertTrue(
- temp_conn.has_table('temp_frame'), 'Table not written to DB')
+ assert temp_conn.has_table('temp_frame')
pandasSQL.drop_table('temp_frame')
@@ -1253,19 +1228,14 @@ def test_read_table_absent(self):
def test_default_type_conversion(self):
df = sql.read_sql_table("types_test_data", self.conn)
- self.assertTrue(issubclass(df.FloatCol.dtype.type, np.floating),
- "FloatCol loaded with incorrect type")
- self.assertTrue(issubclass(df.IntCol.dtype.type, np.integer),
- "IntCol loaded with incorrect type")
- self.assertTrue(issubclass(df.BoolCol.dtype.type, np.bool_),
- "BoolCol loaded with incorrect type")
+ assert issubclass(df.FloatCol.dtype.type, np.floating)
+ assert issubclass(df.IntCol.dtype.type, np.integer)
+ assert issubclass(df.BoolCol.dtype.type, np.bool_)
# Int column with NA values stays as float
- self.assertTrue(issubclass(df.IntColWithNull.dtype.type, np.floating),
- "IntColWithNull loaded with incorrect type")
+ assert issubclass(df.IntColWithNull.dtype.type, np.floating)
# Bool column with NA values becomes object
- self.assertTrue(issubclass(df.BoolColWithNull.dtype.type, np.object),
- "BoolColWithNull loaded with incorrect type")
+ assert issubclass(df.BoolColWithNull.dtype.type, np.object)
def test_bigint(self):
# int64 should be converted to BigInteger, GH7433
@@ -1280,8 +1250,7 @@ def test_default_date_load(self):
# IMPORTANT - sqlite has no native date type, so shouldn't parse, but
# MySQL SHOULD be converted.
- self.assertTrue(issubclass(df.DateCol.dtype.type, np.datetime64),
- "DateCol loaded with incorrect type")
+ assert issubclass(df.DateCol.dtype.type, np.datetime64)
def test_datetime_with_timezone(self):
# edge case that converts postgresql datetime with time zone types
@@ -1302,7 +1271,7 @@ def check(col):
self.assertEqual(col[1], Timestamp('2000-06-01 07:00:00'))
elif is_datetime64tz_dtype(col.dtype):
- self.assertTrue(str(col.dt.tz) == 'UTC')
+ assert str(col.dt.tz) == 'UTC'
# "2000-01-01 00:00:00-08:00" should convert to
# "2000-01-01 08:00:00"
@@ -1327,11 +1296,9 @@ def check(col):
# even with the same versions of psycopg2 & sqlalchemy, possibly a
            # PostgreSQL server version difference
col = df.DateColWithTz
- self.assertTrue(is_object_dtype(col.dtype) or
- is_datetime64_dtype(col.dtype) or
- is_datetime64tz_dtype(col.dtype),
- "DateCol loaded with incorrect type -> {0}"
- .format(col.dtype))
+ assert (is_object_dtype(col.dtype) or
+ is_datetime64_dtype(col.dtype) or
+ is_datetime64tz_dtype(col.dtype))
df = pd.read_sql_query("select * from types_test_data",
self.conn, parse_dates=['DateColWithTz'])
@@ -1343,10 +1310,8 @@ def check(col):
self.conn, chunksize=1)),
ignore_index=True)
col = df.DateColWithTz
- self.assertTrue(is_datetime64tz_dtype(col.dtype),
- "DateCol loaded with incorrect type -> {0}"
- .format(col.dtype))
- self.assertTrue(str(col.dt.tz) == 'UTC')
+ assert is_datetime64tz_dtype(col.dtype)
+ assert str(col.dt.tz) == 'UTC'
expected = sql.read_sql_table("types_test_data", self.conn)
tm.assert_series_equal(df.DateColWithTz,
expected.DateColWithTz
@@ -1363,33 +1328,27 @@ def test_date_parsing(self):
df = sql.read_sql_table("types_test_data", self.conn,
parse_dates=['DateCol'])
- self.assertTrue(issubclass(df.DateCol.dtype.type, np.datetime64),
- "DateCol loaded with incorrect type")
+ assert issubclass(df.DateCol.dtype.type, np.datetime64)
df = sql.read_sql_table("types_test_data", self.conn,
parse_dates={'DateCol': '%Y-%m-%d %H:%M:%S'})
- self.assertTrue(issubclass(df.DateCol.dtype.type, np.datetime64),
- "DateCol loaded with incorrect type")
+ assert issubclass(df.DateCol.dtype.type, np.datetime64)
df = sql.read_sql_table("types_test_data", self.conn, parse_dates={
'DateCol': {'format': '%Y-%m-%d %H:%M:%S'}})
- self.assertTrue(issubclass(df.DateCol.dtype.type, np.datetime64),
- "IntDateCol loaded with incorrect type")
+ assert issubclass(df.DateCol.dtype.type, np.datetime64)
df = sql.read_sql_table(
"types_test_data", self.conn, parse_dates=['IntDateCol'])
- self.assertTrue(issubclass(df.IntDateCol.dtype.type, np.datetime64),
- "IntDateCol loaded with incorrect type")
+ assert issubclass(df.IntDateCol.dtype.type, np.datetime64)
df = sql.read_sql_table(
"types_test_data", self.conn, parse_dates={'IntDateCol': 's'})
- self.assertTrue(issubclass(df.IntDateCol.dtype.type, np.datetime64),
- "IntDateCol loaded with incorrect type")
+ assert issubclass(df.IntDateCol.dtype.type, np.datetime64)
df = sql.read_sql_table("types_test_data", self.conn,
parse_dates={'IntDateCol': {'unit': 's'}})
- self.assertTrue(issubclass(df.IntDateCol.dtype.type, np.datetime64),
- "IntDateCol loaded with incorrect type")
+ assert issubclass(df.IntDateCol.dtype.type, np.datetime64)
def test_datetime(self):
df = DataFrame({'A': date_range('2013-01-01 09:00:00', periods=3),
@@ -1405,7 +1364,7 @@ def test_datetime(self):
result = sql.read_sql_query('SELECT * FROM test_datetime', self.conn)
result = result.drop('index', axis=1)
if self.flavor == 'sqlite':
- self.assertTrue(isinstance(result.loc[0, 'A'], string_types))
+ assert isinstance(result.loc[0, 'A'], string_types)
result['A'] = to_datetime(result['A'])
tm.assert_frame_equal(result, df)
else:
@@ -1424,7 +1383,7 @@ def test_datetime_NaT(self):
# with read_sql -> no type information -> sqlite has no native
result = sql.read_sql_query('SELECT * FROM test_datetime', self.conn)
if self.flavor == 'sqlite':
- self.assertTrue(isinstance(result.loc[0, 'A'], string_types))
+ assert isinstance(result.loc[0, 'A'], string_types)
result['A'] = to_datetime(result['A'], errors='coerce')
tm.assert_frame_equal(result, df)
else:
@@ -1557,7 +1516,7 @@ def test_dtype(self):
meta = sqlalchemy.schema.MetaData(bind=self.conn)
meta.reflect()
sqltype = meta.tables['dtype_test2'].columns['B'].type
- self.assertTrue(isinstance(sqltype, sqlalchemy.TEXT))
+ assert isinstance(sqltype, sqlalchemy.TEXT)
pytest.raises(ValueError, df.to_sql,
'error', self.conn, dtype={'B': str})
@@ -1565,7 +1524,7 @@ def test_dtype(self):
df.to_sql('dtype_test3', self.conn, dtype={'B': sqlalchemy.String(10)})
meta.reflect()
sqltype = meta.tables['dtype_test3'].columns['B'].type
- self.assertTrue(isinstance(sqltype, sqlalchemy.String))
+ assert isinstance(sqltype, sqlalchemy.String)
self.assertEqual(sqltype.length, 10)
# single dtype
@@ -1574,8 +1533,8 @@ def test_dtype(self):
meta.reflect()
sqltypea = meta.tables['single_dtype_test'].columns['A'].type
sqltypeb = meta.tables['single_dtype_test'].columns['B'].type
- self.assertTrue(isinstance(sqltypea, sqlalchemy.TEXT))
- self.assertTrue(isinstance(sqltypeb, sqlalchemy.TEXT))
+ assert isinstance(sqltypea, sqlalchemy.TEXT)
+ assert isinstance(sqltypeb, sqlalchemy.TEXT)
def test_notnull_dtype(self):
cols = {'Bool': Series([True, None]),
@@ -1597,10 +1556,10 @@ def test_notnull_dtype(self):
col_dict = meta.tables[tbl].columns
- self.assertTrue(isinstance(col_dict['Bool'].type, my_type))
- self.assertTrue(isinstance(col_dict['Date'].type, sqltypes.DateTime))
- self.assertTrue(isinstance(col_dict['Int'].type, sqltypes.Integer))
- self.assertTrue(isinstance(col_dict['Float'].type, sqltypes.Float))
+ assert isinstance(col_dict['Bool'].type, my_type)
+ assert isinstance(col_dict['Date'].type, sqltypes.DateTime)
+ assert isinstance(col_dict['Int'].type, sqltypes.Integer)
+ assert isinstance(col_dict['Float'].type, sqltypes.Float)
def test_double_precision(self):
V = 1.23456789101112131415
@@ -1626,10 +1585,10 @@ def test_double_precision(self):
col_dict = meta.tables['test_dtypes'].columns
self.assertEqual(str(col_dict['f32'].type),
str(col_dict['f64_as_f32'].type))
- self.assertTrue(isinstance(col_dict['f32'].type, sqltypes.Float))
- self.assertTrue(isinstance(col_dict['f64'].type, sqltypes.Float))
- self.assertTrue(isinstance(col_dict['i32'].type, sqltypes.Integer))
- self.assertTrue(isinstance(col_dict['i64'].type, sqltypes.BigInteger))
+ assert isinstance(col_dict['f32'].type, sqltypes.Float)
+ assert isinstance(col_dict['f64'].type, sqltypes.Float)
+ assert isinstance(col_dict['i32'].type, sqltypes.Integer)
+ assert isinstance(col_dict['i64'].type, sqltypes.BigInteger)
def test_connectable_issue_example(self):
# This tests the example raised in issue
@@ -1705,20 +1664,17 @@ def setup_driver(cls):
def test_default_type_conversion(self):
df = sql.read_sql_table("types_test_data", self.conn)
- self.assertTrue(issubclass(df.FloatCol.dtype.type, np.floating),
- "FloatCol loaded with incorrect type")
- self.assertTrue(issubclass(df.IntCol.dtype.type, np.integer),
- "IntCol loaded with incorrect type")
+ assert issubclass(df.FloatCol.dtype.type, np.floating)
+ assert issubclass(df.IntCol.dtype.type, np.integer)
+
# sqlite has no boolean type, so integer type is returned
- self.assertTrue(issubclass(df.BoolCol.dtype.type, np.integer),
- "BoolCol loaded with incorrect type")
+ assert issubclass(df.BoolCol.dtype.type, np.integer)
# Int column with NA values stays as float
- self.assertTrue(issubclass(df.IntColWithNull.dtype.type, np.floating),
- "IntColWithNull loaded with incorrect type")
+ assert issubclass(df.IntColWithNull.dtype.type, np.floating)
+
# Non-native Bool column with NA values stays as float
- self.assertTrue(issubclass(df.BoolColWithNull.dtype.type, np.floating),
- "BoolColWithNull loaded with incorrect type")
+ assert issubclass(df.BoolColWithNull.dtype.type, np.floating)
def test_default_date_load(self):
df = sql.read_sql_table("types_test_data", self.conn)
@@ -1760,20 +1716,17 @@ def setup_driver(cls):
def test_default_type_conversion(self):
df = sql.read_sql_table("types_test_data", self.conn)
- self.assertTrue(issubclass(df.FloatCol.dtype.type, np.floating),
- "FloatCol loaded with incorrect type")
- self.assertTrue(issubclass(df.IntCol.dtype.type, np.integer),
- "IntCol loaded with incorrect type")
+ assert issubclass(df.FloatCol.dtype.type, np.floating)
+ assert issubclass(df.IntCol.dtype.type, np.integer)
+
# MySQL has no real BOOL type (it's an alias for TINYINT)
- self.assertTrue(issubclass(df.BoolCol.dtype.type, np.integer),
- "BoolCol loaded with incorrect type")
+ assert issubclass(df.BoolCol.dtype.type, np.integer)
# Int column with NA values stays as float
- self.assertTrue(issubclass(df.IntColWithNull.dtype.type, np.floating),
- "IntColWithNull loaded with incorrect type")
+ assert issubclass(df.IntColWithNull.dtype.type, np.floating)
+
# Bool column with NA = int column with NA values => becomes float
- self.assertTrue(issubclass(df.BoolColWithNull.dtype.type, np.floating),
- "BoolColWithNull loaded with incorrect type")
+ assert issubclass(df.BoolColWithNull.dtype.type, np.floating)
def test_read_procedure(self):
# see GH7324. Although it is more an api test, it is added to the
@@ -1979,8 +1932,7 @@ def test_create_and_drop_table(self):
self.pandasSQL.to_sql(temp_frame, 'drop_test_frame')
- self.assertTrue(self.pandasSQL.has_table('drop_test_frame'),
- 'Table not written to DB')
+ assert self.pandasSQL.has_table('drop_test_frame')
self.pandasSQL.drop_table('drop_test_frame')
@@ -2208,12 +2160,12 @@ def test_schema(self):
for l in lines:
tokens = l.split(' ')
if len(tokens) == 2 and tokens[0] == 'A':
- self.assertTrue(tokens[1] == 'DATETIME')
+ assert tokens[1] == 'DATETIME'
frame = tm.makeTimeDataFrame()
create_sql = sql.get_schema(frame, 'test', keys=['A', 'B'])
lines = create_sql.splitlines()
- self.assertTrue('PRIMARY KEY ("A", "B")' in create_sql)
+ assert 'PRIMARY KEY ("A", "B")' in create_sql
cur = self.conn.cursor()
cur.execute(create_sql)
@@ -2514,13 +2466,13 @@ def test_schema(self):
for l in lines:
tokens = l.split(' ')
if len(tokens) == 2 and tokens[0] == 'A':
- self.assertTrue(tokens[1] == 'DATETIME')
+ assert tokens[1] == 'DATETIME'
frame = tm.makeTimeDataFrame()
drop_sql = "DROP TABLE IF EXISTS test"
create_sql = sql.get_schema(frame, 'test', keys=['A', 'B'])
lines = create_sql.splitlines()
- self.assertTrue('PRIMARY KEY (`A`, `B`)' in create_sql)
+ assert 'PRIMARY KEY (`A`, `B`)' in create_sql
cur = self.conn.cursor()
cur.execute(drop_sql)
cur.execute(create_sql)
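The SQL-test hunks above all follow the same mechanical pattern: drop the `self.assertTrue(..., msg)` wrapper and let pytest's assertion rewriting supply the diagnostics, which makes the hand-written failure messages redundant. A minimal sketch of the before/after shape, using a hypothetical `has_table` stand-in for `sql.has_table` (names here are illustrative, not the pandas API):

```python
# Hypothetical in-memory registry standing in for the test database.
tables = {"test_frame1", "test_frame2"}

def has_table(name, registry=tables):
    # mirrors sql.has_table(name, conn): True if the table exists
    return name in registry

# unittest style (the removed lines above):
#     self.assertTrue(has_table('test_frame1'), 'Table not written to DB')
# pytest style (the added lines above) -- on failure, pytest's assert
# rewriting prints the expression and its sub-values, so the custom
# 'Table not written to DB' message carries no extra information:
assert has_table("test_frame1")
assert not has_table("missing_table")
```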
diff --git a/pandas/tests/io/test_stata.py b/pandas/tests/io/test_stata.py
index 9dc2bd589bf9b..72023c77e7c88 100644
--- a/pandas/tests/io/test_stata.py
+++ b/pandas/tests/io/test_stata.py
@@ -647,10 +647,10 @@ def test_variable_labels(self):
keys = ('var1', 'var2', 'var3')
labels = ('label1', 'label2', 'label3')
for k, v in compat.iteritems(sr_115):
- self.assertTrue(k in sr_117)
- self.assertTrue(v == sr_117[k])
- self.assertTrue(k in keys)
- self.assertTrue(v in labels)
+ assert k in sr_117
+ assert v == sr_117[k]
+ assert k in keys
+ assert v in labels
def test_minimal_size_col(self):
str_lens = (1, 100, 244)
@@ -667,8 +667,8 @@ def test_minimal_size_col(self):
variables = sr.varlist
formats = sr.fmtlist
for variable, fmt, typ in zip(variables, formats, typlist):
- self.assertTrue(int(variable[1:]) == int(fmt[1:-1]))
- self.assertTrue(int(variable[1:]) == typ)
+ assert int(variable[1:]) == int(fmt[1:-1])
+ assert int(variable[1:]) == typ
def test_excessively_long_string(self):
str_lens = (1, 244, 500)
@@ -694,21 +694,21 @@ def test_missing_value_generator(self):
offset = valid_range[t][1]
for i in range(0, 27):
val = StataMissingValue(offset + 1 + i)
- self.assertTrue(val.string == expected_values[i])
+ assert val.string == expected_values[i]
# Test extremes for floats
val = StataMissingValue(struct.unpack('<f', b'\x00\x00\x00\x7f')[0])
- self.assertTrue(val.string == '.')
+ assert val.string == '.'
val = StataMissingValue(struct.unpack('<f', b'\x00\xd0\x00\x7f')[0])
- self.assertTrue(val.string == '.z')
+ assert val.string == '.z'
# Test extremes for floats
val = StataMissingValue(struct.unpack(
'<d', b'\x00\x00\x00\x00\x00\x00\xe0\x7f')[0])
- self.assertTrue(val.string == '.')
+ assert val.string == '.'
val = StataMissingValue(struct.unpack(
'<d', b'\x00\x00\x00\x00\x00\x1a\xe0\x7f')[0])
- self.assertTrue(val.string == '.z')
+ assert val.string == '.z'
def test_missing_value_conversion(self):
columns = ['int8_', 'int16_', 'int32_', 'float32_', 'float64_']
@@ -1216,7 +1216,7 @@ def test_repeated_column_labels(self):
# GH 13923
with pytest.raises(ValueError) as cm:
read_stata(self.dta23, convert_categoricals=True)
- tm.assertTrue('wolof' in cm.exception)
+ assert 'wolof' in str(cm.value)
def test_stata_111(self):
# 111 is an old version but still used by current versions of
@@ -1242,14 +1242,14 @@ def test_out_of_range_double(self):
with pytest.raises(ValueError) as cm:
with tm.ensure_clean() as path:
df.to_stata(path)
- tm.assertTrue('ColumnTooBig' in cm.exception)
+ assert 'ColumnTooBig' in str(cm.value)
df.loc[2, 'ColumnTooBig'] = np.inf
with pytest.raises(ValueError) as cm:
with tm.ensure_clean() as path:
df.to_stata(path)
- tm.assertTrue('ColumnTooBig' in cm.exception)
- tm.assertTrue('infinity' in cm.exception)
+ assert 'ColumnTooBig' in str(cm.value)
+ assert 'infinity' in str(cm.value)
def test_out_of_range_float(self):
original = DataFrame({'ColumnOk': [0.0,
@@ -1274,8 +1274,8 @@ def test_out_of_range_float(self):
with pytest.raises(ValueError) as cm:
with tm.ensure_clean() as path:
original.to_stata(path)
- tm.assertTrue('ColumnTooBig' in cm.exception)
- tm.assertTrue('infinity' in cm.exception)
+ assert 'ColumnTooBig' in str(cm.value)
+ assert 'infinity' in str(cm.value)
def test_invalid_encoding(self):
# GH15723, validate encoding
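The exception-message checks in the Stata hunks above rely on `with pytest.raises(ValueError) as cm:`; unlike unittest's context manager, pytest's `ExceptionInfo` exposes the raised exception as `cm.value`, and an `in` check against the exception object itself raises `TypeError`, so the message must be converted with `str()` first. A self-contained sketch of the same pattern using a plain `try`/`except` (the `to_stata_stub` name and message are hypothetical stand-ins, not the pandas API):

```python
def to_stata_stub(path):
    # stand-in for DataFrame.to_stata raising on an out-of-range column,
    # mirroring the tests above
    raise ValueError("Column ColumnTooBig is out of range for Stata")

try:
    to_stata_stub("dummy.dta")
except ValueError as exc:
    # pytest.raises exposes this same object as cm.value; converting to
    # str() is what makes the substring check valid, since `'x' in exc`
    # on an exception object raises TypeError
    message = str(exc)

assert "ColumnTooBig" in message
```

Equivalently, `pytest.raises(ValueError, match="ColumnTooBig")` folds the message check into the context manager.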
diff --git a/pandas/tests/plotting/common.py b/pandas/tests/plotting/common.py
index 35625670f0641..64bcb55cb4e6a 100644
--- a/pandas/tests/plotting/common.py
+++ b/pandas/tests/plotting/common.py
@@ -127,10 +127,10 @@ def _check_legend_labels(self, axes, labels=None, visible=True):
axes = self._flatten_visible(axes)
for ax in axes:
if visible:
- self.assertTrue(ax.get_legend() is not None)
+ assert ax.get_legend() is not None
self._check_text_labels(ax.get_legend().get_texts(), labels)
else:
- self.assertTrue(ax.get_legend() is None)
+ assert ax.get_legend() is None
def _check_data(self, xp, rs):
"""
@@ -352,7 +352,7 @@ def _check_axes_shape(self, axes, axes_num=None, layout=None,
self.assertEqual(len(visible_axes), axes_num)
for ax in visible_axes:
# check something drawn on visible axes
- self.assertTrue(len(ax.get_children()) > 0)
+ assert len(ax.get_children()) > 0
if layout is not None:
result = self._get_axes_layout(_flatten(axes))
@@ -437,7 +437,7 @@ def _check_box_return_type(self, returned, return_type, expected_keys=None,
if return_type is None:
return_type = 'dict'
- self.assertTrue(isinstance(returned, types[return_type]))
+ assert isinstance(returned, types[return_type])
if return_type == 'both':
assert isinstance(returned.ax, Axes)
assert isinstance(returned.lines, dict)
@@ -448,11 +448,11 @@ def _check_box_return_type(self, returned, return_type, expected_keys=None,
assert isinstance(r, Axes)
return
- self.assertTrue(isinstance(returned, Series))
+ assert isinstance(returned, Series)
self.assertEqual(sorted(returned.keys()), sorted(expected_keys))
for key, value in iteritems(returned):
- self.assertTrue(isinstance(value, types[return_type]))
+ assert isinstance(value, types[return_type])
# check returned dict has correct mapping
if return_type == 'axes':
if check_ax_title:
@@ -504,13 +504,13 @@ def is_grid_on():
spndx += 1
mpl.rc('axes', grid=True)
obj.plot(kind=kind, **kws)
- self.assertTrue(is_grid_on())
+ assert is_grid_on()
self.plt.subplot(1, 4 * len(kinds), spndx)
spndx += 1
mpl.rc('axes', grid=False)
obj.plot(kind=kind, grid=True, **kws)
- self.assertTrue(is_grid_on())
+ assert is_grid_on()
def _maybe_unpack_cycler(self, rcParams, field='color'):
"""
diff --git a/pandas/tests/plotting/test_boxplot_method.py b/pandas/tests/plotting/test_boxplot_method.py
index 018cbbe170313..fe6d5e5cf148f 100644
--- a/pandas/tests/plotting/test_boxplot_method.py
+++ b/pandas/tests/plotting/test_boxplot_method.py
@@ -96,7 +96,7 @@ def test_boxplot_legacy(self):
def test_boxplot_return_type_none(self):
# GH 12216; return_type=None & by=None -> axes
result = self.hist_df.boxplot()
- self.assertTrue(isinstance(result, self.plt.Axes))
+ assert isinstance(result, self.plt.Axes)
@slow
def test_boxplot_return_type_legacy(self):
@@ -129,8 +129,8 @@ def test_boxplot_axis_limits(self):
def _check_ax_limits(col, ax):
y_min, y_max = ax.get_ylim()
- self.assertTrue(y_min <= col.min())
- self.assertTrue(y_max >= col.max())
+ assert y_min <= col.min()
+ assert y_max >= col.max()
df = self.hist_df.copy()
df['age'] = np.random.randint(1, 20, df.shape[0])
diff --git a/pandas/tests/plotting/test_datetimelike.py b/pandas/tests/plotting/test_datetimelike.py
index 7534d9363f267..30d67630afa41 100644
--- a/pandas/tests/plotting/test_datetimelike.py
+++ b/pandas/tests/plotting/test_datetimelike.py
@@ -278,8 +278,7 @@ def test_irreg_hf(self):
diffs = Series(ax.get_lines()[0].get_xydata()[:, 0]).diff()
sec = 1. / 24 / 60 / 60
- self.assertTrue((np.fabs(diffs[1:] - [sec, sec * 2, sec]) < 1e-8).all(
- ))
+ assert (np.fabs(diffs[1:] - [sec, sec * 2, sec]) < 1e-8).all()
plt.clf()
fig.add_subplot(111)
@@ -287,7 +286,7 @@ def test_irreg_hf(self):
df2.index = df.index.asobject
ax = df2.plot()
diffs = Series(ax.get_lines()[0].get_xydata()[:, 0]).diff()
- self.assertTrue((np.fabs(diffs[1:] - sec) < 1e-8).all())
+ assert (np.fabs(diffs[1:] - sec) < 1e-8).all()
def test_irregular_datetime64_repr_bug(self):
import matplotlib.pyplot as plt
@@ -509,7 +508,7 @@ def test_gaps(self):
data = l.get_xydata()
assert isinstance(data, np.ma.core.MaskedArray)
mask = data.mask
- self.assertTrue(mask[5:25, 1].all())
+ assert mask[5:25, 1].all()
plt.close(ax.get_figure())
# irregular
@@ -523,7 +522,7 @@ def test_gaps(self):
data = l.get_xydata()
assert isinstance(data, np.ma.core.MaskedArray)
mask = data.mask
- self.assertTrue(mask[2:5, 1].all())
+ assert mask[2:5, 1].all()
plt.close(ax.get_figure())
# non-ts
@@ -537,7 +536,7 @@ def test_gaps(self):
data = l.get_xydata()
assert isinstance(data, np.ma.core.MaskedArray)
mask = data.mask
- self.assertTrue(mask[2:5, 1].all())
+ assert mask[2:5, 1].all()
@slow
def test_gap_upsample(self):
@@ -558,7 +557,7 @@ def test_gap_upsample(self):
assert isinstance(data, np.ma.core.MaskedArray)
mask = data.mask
- self.assertTrue(mask[5:25, 1].all())
+ assert mask[5:25, 1].all()
@slow
def test_secondary_y(self):
@@ -567,7 +566,7 @@ def test_secondary_y(self):
ser = Series(np.random.randn(10))
ser2 = Series(np.random.randn(10))
ax = ser.plot(secondary_y=True)
- self.assertTrue(hasattr(ax, 'left_ax'))
+ assert hasattr(ax, 'left_ax')
assert not hasattr(ax, 'right_ax')
fig = ax.get_figure()
axes = fig.get_axes()
@@ -585,10 +584,10 @@ def test_secondary_y(self):
ax = ser2.plot()
ax2 = ser.plot(secondary_y=True)
- self.assertTrue(ax.get_yaxis().get_visible())
+ assert ax.get_yaxis().get_visible()
assert not hasattr(ax, 'left_ax')
- self.assertTrue(hasattr(ax, 'right_ax'))
- self.assertTrue(hasattr(ax2, 'left_ax'))
+ assert hasattr(ax, 'right_ax')
+ assert hasattr(ax2, 'left_ax')
assert not hasattr(ax2, 'right_ax')
@slow
@@ -598,7 +597,7 @@ def test_secondary_y_ts(self):
ser = Series(np.random.randn(10), idx)
ser2 = Series(np.random.randn(10), idx)
ax = ser.plot(secondary_y=True)
- self.assertTrue(hasattr(ax, 'left_ax'))
+ assert hasattr(ax, 'left_ax')
assert not hasattr(ax, 'right_ax')
fig = ax.get_figure()
axes = fig.get_axes()
@@ -616,7 +615,7 @@ def test_secondary_y_ts(self):
ax = ser2.plot()
ax2 = ser.plot(secondary_y=True)
- self.assertTrue(ax.get_yaxis().get_visible())
+ assert ax.get_yaxis().get_visible()
@slow
def test_secondary_kde(self):
@@ -626,7 +625,7 @@ def test_secondary_kde(self):
import matplotlib.pyplot as plt # noqa
ser = Series(np.random.randn(10))
ax = ser.plot(secondary_y=True, kind='density')
- self.assertTrue(hasattr(ax, 'left_ax'))
+ assert hasattr(ax, 'left_ax')
assert not hasattr(ax, 'right_ax')
fig = ax.get_figure()
axes = fig.get_axes()
@@ -670,8 +669,8 @@ def test_mixed_freq_regular_first(self):
lines = ax2.get_lines()
idx1 = PeriodIndex(lines[0].get_xdata())
idx2 = PeriodIndex(lines[1].get_xdata())
- self.assertTrue(idx1.equals(s1.index.to_period('B')))
- self.assertTrue(idx2.equals(s2.index.to_period('B')))
+ assert idx1.equals(s1.index.to_period('B'))
+ assert idx2.equals(s2.index.to_period('B'))
left, right = ax2.get_xlim()
pidx = s1.index.to_period()
self.assertEqual(left, pidx[0].ordinal)
@@ -701,8 +700,8 @@ def test_mixed_freq_regular_first_df(self):
lines = ax2.get_lines()
idx1 = PeriodIndex(lines[0].get_xdata())
idx2 = PeriodIndex(lines[1].get_xdata())
- self.assertTrue(idx1.equals(s1.index.to_period('B')))
- self.assertTrue(idx2.equals(s2.index.to_period('B')))
+ assert idx1.equals(s1.index.to_period('B'))
+ assert idx2.equals(s2.index.to_period('B'))
left, right = ax2.get_xlim()
pidx = s1.index.to_period()
self.assertEqual(left, pidx[0].ordinal)
@@ -833,7 +832,7 @@ def test_to_weekly_resampling(self):
tsplot(high, plt.Axes.plot)
lines = tsplot(low, plt.Axes.plot)
for l in lines:
- self.assertTrue(PeriodIndex(data=l.get_xdata()).freq, idxh.freq)
+ assert PeriodIndex(data=l.get_xdata()).freq == idxh.freq
@slow
def test_from_weekly_resampling(self):
@@ -848,7 +847,7 @@ def test_from_weekly_resampling(self):
expected_l = np.array([1514, 1519, 1523, 1527, 1531, 1536, 1540, 1544,
1549, 1553, 1558, 1562], dtype=np.float64)
for l in ax.get_lines():
- self.assertTrue(PeriodIndex(data=l.get_xdata()).freq, idxh.freq)
+ assert PeriodIndex(data=l.get_xdata()).freq == idxh.freq
xdata = l.get_xdata(orig=False)
if len(xdata) == 12: # idxl lines
tm.assert_numpy_array_equal(xdata, expected_l)
@@ -863,7 +862,7 @@ def test_from_weekly_resampling(self):
tsplot(low, plt.Axes.plot)
lines = tsplot(high, plt.Axes.plot)
for l in lines:
- self.assertTrue(PeriodIndex(data=l.get_xdata()).freq, idxh.freq)
+ assert PeriodIndex(data=l.get_xdata()).freq == idxh.freq
xdata = l.get_xdata(orig=False)
if len(xdata) == 12: # idxl lines
tm.assert_numpy_array_equal(xdata, expected_l)
@@ -1048,7 +1047,7 @@ def test_secondary_upsample(self):
ax = high.plot(secondary_y=True)
for l in ax.get_lines():
self.assertEqual(PeriodIndex(l.get_xdata()).freq, 'D')
- self.assertTrue(hasattr(ax, 'left_ax'))
+ assert hasattr(ax, 'left_ax')
assert not hasattr(ax, 'right_ax')
for l in ax.left_ax.get_lines():
self.assertEqual(PeriodIndex(l.get_xdata()).freq, 'D')
@@ -1213,7 +1212,7 @@ def test_secondary_y_non_ts_xlim(self):
left_after, right_after = ax.get_xlim()
self.assertEqual(left_before, left_after)
- self.assertTrue(right_before < right_after)
+ assert right_before < right_after
@slow
def test_secondary_y_regular_ts_xlim(self):
@@ -1229,7 +1228,7 @@ def test_secondary_y_regular_ts_xlim(self):
left_after, right_after = ax.get_xlim()
self.assertEqual(left_before, left_after)
- self.assertTrue(right_before < right_after)
+ assert right_before < right_after
@slow
def test_secondary_y_mixed_freq_ts_xlim(self):
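The frequency assertions in the `test_datetimelike.py` hunks above illustrate a conversion pitfall: `self.assertTrue(a, b)` treats `b` as a failure message, and a literal rewrite to `assert a, b` keeps those semantics, so the second value is never compared and the assert passes whenever the first expression is truthy. A sketch with hypothetical stand-in values:

```python
freq_plotted = "W-SUN"   # stands in for PeriodIndex(data=l.get_xdata()).freq
freq_expected = "B"      # stands in for idxh.freq

# `assert expr, msg` tests only the truthiness of expr; the second
# value is merely the message shown on failure. This passes even
# though the two frequencies differ, because a non-empty string is
# always truthy:
assert freq_plotted, freq_expected

# the intended comparison has to be spelled out explicitly:
assert freq_plotted == "W-SUN"
```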
diff --git a/pandas/tests/plotting/test_frame.py b/pandas/tests/plotting/test_frame.py
index c5b43cd1a300b..c550504063b3e 100644
--- a/pandas/tests/plotting/test_frame.py
+++ b/pandas/tests/plotting/test_frame.py
@@ -333,7 +333,7 @@ def test_subplots(self):
axes = df.plot(kind=kind, subplots=True, legend=False)
for ax in axes:
- self.assertTrue(ax.get_legend() is None)
+ assert ax.get_legend() is None
@slow
def test_subplots_timeseries(self):
@@ -663,7 +663,7 @@ def test_line_lim(self):
axes = df.plot(secondary_y=True, subplots=True)
self._check_axes_shape(axes, axes_num=3, layout=(3, 1))
for ax in axes:
- self.assertTrue(hasattr(ax, 'left_ax'))
+ assert hasattr(ax, 'left_ax')
assert not hasattr(ax, 'right_ax')
xmin, xmax = ax.get_xlim()
lines = ax.get_lines()
@@ -955,8 +955,8 @@ def test_plot_scatter_with_c(self):
# identical to the values we supplied, normally we'd be on shaky ground
# comparing floats for equality but here we expect them to be
# identical.
- self.assertTrue(np.array_equal(ax.collections[0].get_facecolor(),
- rgba_array))
+ tm.assert_numpy_array_equal(ax.collections[0].get_facecolor(),
+ rgba_array)
# we don't test the colors of the faces in this next plot because they
# are dependent on the spring colormap, which may change its colors
# later.
@@ -1057,7 +1057,7 @@ def _check_bar_alignment(self, df, kind='bar', stacked=False,
raise ValueError
# Check the ticks locates on integer
- self.assertTrue((axis.get_ticklocs() == np.arange(len(df))).all())
+ assert (axis.get_ticklocs() == np.arange(len(df))).all()
if align == 'center':
# Check whether the bar locates on center
@@ -1511,7 +1511,7 @@ def test_df_legend_labels(self):
self._check_text_labels(ax.xaxis.get_label(), 'a')
ax = df5.plot(y='c', label='LABEL_c', ax=ax)
self._check_legend_labels(ax, labels=['LABEL_b', 'LABEL_c'])
- self.assertTrue(df5.columns.tolist() == ['b', 'c'])
+ assert df5.columns.tolist() == ['b', 'c']
def test_legend_name(self):
multi = DataFrame(randn(4, 4),
@@ -1733,7 +1733,7 @@ def test_area_colors(self):
self._check_colors(linehandles, linecolors=custom_colors)
for h in handles:
- self.assertTrue(h.get_alpha() is None)
+ assert h.get_alpha() is None
tm.close()
ax = df.plot.area(colormap='jet')
@@ -1750,7 +1750,7 @@ def test_area_colors(self):
if not isinstance(x, PolyCollection)]
self._check_colors(linehandles, linecolors=jet_colors)
for h in handles:
- self.assertTrue(h.get_alpha() is None)
+ assert h.get_alpha() is None
tm.close()
# When stacked=False, alpha is set to 0.5
@@ -1974,7 +1974,7 @@ def test_unordered_ts(self):
columns=['test'])
ax = df.plot()
xticks = ax.lines[0].get_xdata()
- self.assertTrue(xticks[0] < xticks[1])
+ assert xticks[0] < xticks[1]
ydata = ax.lines[0].get_ydata()
tm.assert_numpy_array_equal(ydata, np.array([1.0, 2.0, 3.0]))
@@ -2300,9 +2300,9 @@ def test_table(self):
_check_plot_works(df.plot, table=df)
ax = df.plot()
- self.assertTrue(len(ax.tables) == 0)
+ assert len(ax.tables) == 0
plotting.table(ax, df.T)
- self.assertTrue(len(ax.tables) == 1)
+ assert len(ax.tables) == 1
def test_errorbar_scatter(self):
df = DataFrame(
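The scatter-color hunk above also swaps a bare `np.array_equal` truthiness check for `tm.assert_numpy_array_equal`: the former collapses the whole comparison into one bool, so a failing assert reports nothing about which elements differ, while the pandas testing helper raises with a detailed diff. A numpy-only sketch of the data shape involved (values are illustrative):

```python
import numpy as np

# mirrors the expected RGBA facecolors in the scatter test above
rgba_array = np.array([[1.0, 0.0, 0.0, 1.0],
                       [0.0, 1.0, 0.0, 1.0]])
observed = rgba_array.copy()

# np.array_equal reduces the element-wise comparison to a single bool;
# tm.assert_numpy_array_equal (used in the hunk above) instead raises
# an AssertionError naming the mismatching values, which is why the
# test was converted.
assert np.array_equal(observed, rgba_array)
```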
diff --git a/pandas/tests/plotting/test_hist_method.py b/pandas/tests/plotting/test_hist_method.py
index a77c1edd258e3..7002321908ef0 100644
--- a/pandas/tests/plotting/test_hist_method.py
+++ b/pandas/tests/plotting/test_hist_method.py
@@ -394,8 +394,8 @@ def test_axis_share_x(self):
ax1, ax2 = df.hist(column='height', by=df.gender, sharex=True)
# share x
- self.assertTrue(ax1._shared_x_axes.joined(ax1, ax2))
- self.assertTrue(ax2._shared_x_axes.joined(ax1, ax2))
+ assert ax1._shared_x_axes.joined(ax1, ax2)
+ assert ax2._shared_x_axes.joined(ax1, ax2)
# don't share y
assert not ax1._shared_y_axes.joined(ax1, ax2)
@@ -407,8 +407,8 @@ def test_axis_share_y(self):
ax1, ax2 = df.hist(column='height', by=df.gender, sharey=True)
# share y
- self.assertTrue(ax1._shared_y_axes.joined(ax1, ax2))
- self.assertTrue(ax2._shared_y_axes.joined(ax1, ax2))
+ assert ax1._shared_y_axes.joined(ax1, ax2)
+ assert ax2._shared_y_axes.joined(ax1, ax2)
# don't share x
assert not ax1._shared_x_axes.joined(ax1, ax2)
@@ -421,8 +421,8 @@ def test_axis_share_xy(self):
sharey=True)
# share both x and y
- self.assertTrue(ax1._shared_x_axes.joined(ax1, ax2))
- self.assertTrue(ax2._shared_x_axes.joined(ax1, ax2))
+ assert ax1._shared_x_axes.joined(ax1, ax2)
+ assert ax2._shared_x_axes.joined(ax1, ax2)
- self.assertTrue(ax1._shared_y_axes.joined(ax1, ax2))
- self.assertTrue(ax2._shared_y_axes.joined(ax1, ax2))
+ assert ax1._shared_y_axes.joined(ax1, ax2)
+ assert ax2._shared_y_axes.joined(ax1, ax2)
diff --git a/pandas/tests/plotting/test_series.py b/pandas/tests/plotting/test_series.py
index b84e50c4ec827..8ae301a0b7b4c 100644
--- a/pandas/tests/plotting/test_series.py
+++ b/pandas/tests/plotting/test_series.py
@@ -443,8 +443,8 @@ def test_hist_secondary_legend(self):
# both legends are drawn on left ax
# left and right axis must be visible
self._check_legend_labels(ax, labels=['a', 'b (right)'])
- self.assertTrue(ax.get_yaxis().get_visible())
- self.assertTrue(ax.right_ax.get_yaxis().get_visible())
+ assert ax.get_yaxis().get_visible()
+ assert ax.right_ax.get_yaxis().get_visible()
tm.close()
# secondary -> secondary
@@ -455,7 +455,7 @@ def test_hist_secondary_legend(self):
self._check_legend_labels(ax.left_ax,
labels=['a (right)', 'b (right)'])
assert not ax.left_ax.get_yaxis().get_visible()
- self.assertTrue(ax.get_yaxis().get_visible())
+ assert ax.get_yaxis().get_visible()
tm.close()
# secondary -> primary
@@ -465,8 +465,8 @@ def test_hist_secondary_legend(self):
# both legends are drawn on left ax
# left and right axis must be visible
self._check_legend_labels(ax.left_ax, labels=['a (right)', 'b'])
- self.assertTrue(ax.left_ax.get_yaxis().get_visible())
- self.assertTrue(ax.get_yaxis().get_visible())
+ assert ax.left_ax.get_yaxis().get_visible()
+ assert ax.get_yaxis().get_visible()
tm.close()
@slow
@@ -481,8 +481,8 @@ def test_df_series_secondary_legend(self):
# both legends are drawn on left ax
# left and right axis must be visible
self._check_legend_labels(ax, labels=['a', 'b', 'c', 'x (right)'])
- self.assertTrue(ax.get_yaxis().get_visible())
- self.assertTrue(ax.right_ax.get_yaxis().get_visible())
+ assert ax.get_yaxis().get_visible()
+ assert ax.right_ax.get_yaxis().get_visible()
tm.close()
# primary -> secondary (with passing ax)
@@ -491,8 +491,8 @@ def test_df_series_secondary_legend(self):
# both legends are drawn on left ax
# left and right axis must be visible
self._check_legend_labels(ax, labels=['a', 'b', 'c', 'x (right)'])
- self.assertTrue(ax.get_yaxis().get_visible())
- self.assertTrue(ax.right_ax.get_yaxis().get_visible())
+ assert ax.get_yaxis().get_visible()
+ assert ax.right_ax.get_yaxis().get_visible()
tm.close()
# secondary -> secondary (without passing ax)
@@ -503,7 +503,7 @@ def test_df_series_secondary_legend(self):
expected = ['a (right)', 'b (right)', 'c (right)', 'x (right)']
self._check_legend_labels(ax.left_ax, labels=expected)
assert not ax.left_ax.get_yaxis().get_visible()
- self.assertTrue(ax.get_yaxis().get_visible())
+ assert ax.get_yaxis().get_visible()
tm.close()
# secondary -> secondary (with passing ax)
@@ -514,7 +514,7 @@ def test_df_series_secondary_legend(self):
expected = ['a (right)', 'b (right)', 'c (right)', 'x (right)']
self._check_legend_labels(ax.left_ax, expected)
assert not ax.left_ax.get_yaxis().get_visible()
- self.assertTrue(ax.get_yaxis().get_visible())
+ assert ax.get_yaxis().get_visible()
tm.close()
# secondary -> secondary (with passing ax)
@@ -525,7 +525,7 @@ def test_df_series_secondary_legend(self):
expected = ['a', 'b', 'c', 'x (right)']
self._check_legend_labels(ax.left_ax, expected)
assert not ax.left_ax.get_yaxis().get_visible()
- self.assertTrue(ax.get_yaxis().get_visible())
+ assert ax.get_yaxis().get_visible()
tm.close()
@slow
@@ -576,10 +576,9 @@ def test_kde_missing_vals(self):
s = Series(np.random.uniform(size=50))
s[0] = np.nan
axes = _check_plot_works(s.plot.kde)
- # check if the values have any missing values
- # GH14821
- self.assertTrue(any(~np.isnan(axes.lines[0].get_xdata())),
- msg='Missing Values not dropped')
+
+ # gh-14821: check if the values have any missing values
+ assert any(~np.isnan(axes.lines[0].get_xdata()))
@slow
def test_hist_kwargs(self):
diff --git a/pandas/tests/reshape/test_concat.py b/pandas/tests/reshape/test_concat.py
index 2bde4349f6000..9854245cf1abd 100644
--- a/pandas/tests/reshape/test_concat.py
+++ b/pandas/tests/reshape/test_concat.py
@@ -788,8 +788,8 @@ def test_append_different_columns(self):
b = df[5:].loc[:, ['strings', 'ints', 'floats']]
appended = a.append(b)
- self.assertTrue(isnull(appended['strings'][0:4]).all())
- self.assertTrue(isnull(appended['bools'][5:]).all())
+ assert isnull(appended['strings'][0:4]).all()
+ assert isnull(appended['bools'][5:]).all()
def test_append_many(self):
chunks = [self.frame[:5], self.frame[5:10],
@@ -802,8 +802,8 @@ def test_append_many(self):
chunks[-1]['foo'] = 'bar'
result = chunks[0].append(chunks[1:])
tm.assert_frame_equal(result.loc[:, self.frame.columns], self.frame)
- self.assertTrue((result['foo'][15:] == 'bar').all())
- self.assertTrue(result['foo'][:15].isnull().all())
+ assert (result['foo'][15:] == 'bar').all()
+ assert result['foo'][:15].isnull().all()
def test_append_preserve_index_name(self):
# #980
@@ -1479,8 +1479,8 @@ def test_concat_series_axis1(self):
s2.name = None
result = concat([s, s2], axis=1)
- self.assertTrue(np.array_equal(
- result.columns, Index(['A', 0], dtype='object')))
+ tm.assert_index_equal(result.columns,
+ Index(['A', 0], dtype='object'))
# must reindex, #2603
s = Series(randn(3), index=['c', 'a', 'b'], name='A')
@@ -1512,8 +1512,8 @@ def test_concat_datetime64_block(self):
df = DataFrame({'time': rng})
result = concat([df, df])
- self.assertTrue((result.iloc[:10]['time'] == rng).all())
- self.assertTrue((result.iloc[10:]['time'] == rng).all())
+ assert (result.iloc[:10]['time'] == rng).all()
+ assert (result.iloc[10:]['time'] == rng).all()
def test_concat_timedelta64_block(self):
from pandas import to_timedelta
@@ -1523,8 +1523,8 @@ def test_concat_timedelta64_block(self):
df = DataFrame({'time': rng})
result = concat([df, df])
- self.assertTrue((result.iloc[:10]['time'] == rng).all())
- self.assertTrue((result.iloc[10:]['time'] == rng).all())
+ assert (result.iloc[:10]['time'] == rng).all()
+ assert (result.iloc[10:]['time'] == rng).all()
def test_concat_keys_with_none(self):
# #1649
@@ -1593,7 +1593,7 @@ def test_concat_series_axis1_same_names_ignore_index(self):
s2 = Series(randn(len(dates)), index=dates, name='value')
result = concat([s1, s2], axis=1, ignore_index=True)
- self.assertTrue(np.array_equal(result.columns, [0, 1]))
+ assert np.array_equal(result.columns, [0, 1])
def test_concat_iterables(self):
from collections import deque, Iterable
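The `test_concat_series_axis1` hunk also upgrades a `np.array_equal` check to `tm.assert_index_equal`. A hedged sketch of the difference (the exact column labels mirror the test above):

```python
import pandas as pd

# assert_index_equal checks values *and* dtype and raises a descriptive
# AssertionError on mismatch, while np.array_equal silently coerces and
# only returns a bool.
s = pd.Series([1.0, 2.0], name='A')
s2 = pd.Series([3.0, 4.0])  # unnamed -> column label 0
result = pd.concat([s, s2], axis=1)
pd.testing.assert_index_equal(result.columns,
                              pd.Index(['A', 0], dtype='object'))
```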
diff --git a/pandas/tests/reshape/test_hashing.py b/pandas/tests/reshape/test_hashing.py
index 4857d3ac8310b..f19f6b1374978 100644
--- a/pandas/tests/reshape/test_hashing.py
+++ b/pandas/tests/reshape/test_hashing.py
@@ -86,9 +86,9 @@ def test_hash_tuples_err(self):
def test_multiindex_unique(self):
mi = MultiIndex.from_tuples([(118, 472), (236, 118),
(51, 204), (102, 51)])
- self.assertTrue(mi.is_unique)
+ assert mi.is_unique
result = hash_pandas_object(mi)
- self.assertTrue(result.is_unique)
+ assert result.is_unique
def test_multiindex_objects(self):
mi = MultiIndex(levels=[['b', 'd', 'a'], [1, 2, 3]],
@@ -215,7 +215,7 @@ def test_hash_keys(self):
obj = Series(list('abc'))
a = hash_pandas_object(obj, hash_key='9876543210123456')
b = hash_pandas_object(obj, hash_key='9876543210123465')
- self.assertTrue((a != b).all())
+ assert (a != b).all()
def test_invalid_key(self):
# this only matters for object dtypes
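A sketch of the `hash_key` behavior `test_hash_keys` exercises, assuming the public `pandas.util.hash_pandas_object` entry point:

```python
import pandas as pd
from pandas.util import hash_pandas_object

# Two different 16-byte keys must produce entirely different row hashes
# for the same object, which is what the test asserts element-wise.
obj = pd.Series(list('abc'))
a = hash_pandas_object(obj, hash_key='9876543210123456')
b = hash_pandas_object(obj, hash_key='9876543210123465')
assert (a != b).all()
```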
diff --git a/pandas/tests/reshape/test_join.py b/pandas/tests/reshape/test_join.py
index 475b17d9fe792..1da187788e99d 100644
--- a/pandas/tests/reshape/test_join.py
+++ b/pandas/tests/reshape/test_join.py
@@ -190,8 +190,8 @@ def test_join_on(self):
columns=['three'])
joined = df_a.join(df_b, on='one')
joined = joined.join(df_c, on='one')
- self.assertTrue(np.isnan(joined['two']['c']))
- self.assertTrue(np.isnan(joined['three']['c']))
+ assert np.isnan(joined['two']['c'])
+ assert np.isnan(joined['three']['c'])
# merge column not present
pytest.raises(KeyError, target.join, source, on='E')
@@ -252,7 +252,7 @@ def test_join_with_len0(self):
merged = self.target.join(self.source.reindex([]), on='C')
for col in self.source:
assert col in merged
- self.assertTrue(merged[col].isnull().all())
+ assert merged[col].isnull().all()
merged2 = self.target.join(self.source.reindex([]), on='C',
how='inner')
@@ -422,7 +422,7 @@ def test_join_inner_multiindex(self):
expected = expected.drop(['first', 'second'], axis=1)
expected.index = joined.index
- self.assertTrue(joined.index.is_monotonic)
+ assert joined.index.is_monotonic
assert_frame_equal(joined, expected)
# _assert_same_contents(expected, expected2.loc[:, expected.columns])
@@ -437,8 +437,8 @@ def test_join_hierarchical_mixed(self):
# GH 9455, 12219
with tm.assert_produces_warning(UserWarning):
result = merge(new_df, other_df, left_index=True, right_index=True)
- self.assertTrue(('b', 'mean') in result)
- self.assertTrue('b' in result)
+ assert ('b', 'mean') in result
+ assert 'b' in result
def test_join_float64_float32(self):
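The `test_merge_copy` hunk in the next file checks copy semantics; a minimal sketch of the invariant (column names chosen for illustration):

```python
import pandas as pd

# By default merge copies its inputs, so mutating the merged result
# leaves the original frames untouched.
left = pd.DataFrame({'a': [0, 0, 0], 'b': [1, 1, 1]})
right = pd.DataFrame({'c': ['foo'] * 3, 'd': ['bar'] * 3})
merged = pd.merge(left, right, left_index=True, right_index=True)
merged['a'] = 6
assert (left['a'] == 0).all()  # the input frame is unchanged
```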
diff --git a/pandas/tests/reshape/test_merge.py b/pandas/tests/reshape/test_merge.py
index 80056b973a2fc..86580e5a84d92 100644
--- a/pandas/tests/reshape/test_merge.py
+++ b/pandas/tests/reshape/test_merge.py
@@ -162,10 +162,10 @@ def test_merge_copy(self):
right_index=True, copy=True)
merged['a'] = 6
- self.assertTrue((left['a'] == 0).all())
+ assert (left['a'] == 0).all()
merged['d'] = 'peekaboo'
- self.assertTrue((right['d'] == 'bar').all())
+ assert (right['d'] == 'bar').all()
def test_merge_nocopy(self):
left = DataFrame({'a': 0, 'b': 1}, index=lrange(10))
@@ -175,10 +175,10 @@ def test_merge_nocopy(self):
right_index=True, copy=False)
merged['a'] = 6
- self.assertTrue((left['a'] == 6).all())
+ assert (left['a'] == 6).all()
merged['d'] = 'peekaboo'
- self.assertTrue((right['d'] == 'peekaboo').all())
+ assert (right['d'] == 'peekaboo').all()
def test_intelligently_handle_join_key(self):
# #733, be a bit more 1337 about not returning unconsolidated DataFrame
@@ -229,8 +229,8 @@ def test_handle_join_key_pass_array(self):
merged2 = merge(right, left, left_on=key, right_on='key', how='outer')
assert_series_equal(merged['key'], merged2['key'])
- self.assertTrue(merged['key'].notnull().all())
- self.assertTrue(merged2['key'].notnull().all())
+ assert merged['key'].notnull().all()
+ assert merged2['key'].notnull().all()
left = DataFrame({'value': lrange(5)}, columns=['value'])
right = DataFrame({'rvalue': lrange(6)})
@@ -425,7 +425,7 @@ def test_merge_nosort(self):
exp = merge(df, new, on='var3', sort=False)
assert_frame_equal(result, exp)
- self.assertTrue((df.var3.unique() == result.var3.unique()).all())
+ assert (df.var3.unique() == result.var3.unique()).all()
def test_merge_nan_right(self):
df1 = DataFrame({"i1": [0, 1], "i2": [0, 1]})
@@ -671,19 +671,19 @@ def test_indicator(self):
# Check result integrity
test2 = merge(df1, df2, on='col1', how='left', indicator=True)
- self.assertTrue((test2._merge != 'right_only').all())
+ assert (test2._merge != 'right_only').all()
test2 = df1.merge(df2, on='col1', how='left', indicator=True)
- self.assertTrue((test2._merge != 'right_only').all())
+ assert (test2._merge != 'right_only').all()
test3 = merge(df1, df2, on='col1', how='right', indicator=True)
- self.assertTrue((test3._merge != 'left_only').all())
+ assert (test3._merge != 'left_only').all()
test3 = df1.merge(df2, on='col1', how='right', indicator=True)
- self.assertTrue((test3._merge != 'left_only').all())
+ assert (test3._merge != 'left_only').all()
test4 = merge(df1, df2, on='col1', how='inner', indicator=True)
- self.assertTrue((test4._merge == 'both').all())
+ assert (test4._merge == 'both').all()
test4 = df1.merge(df2, on='col1', how='inner', indicator=True)
- self.assertTrue((test4._merge == 'both').all())
+ assert (test4._merge == 'both').all()
# Check if working name in df
for i in ['_right_indicator', '_left_indicator', '_merge']:
@@ -789,7 +789,7 @@ def run_asserts(left, right):
for sort in [False, True]:
res = left.join(right, on=icols, how='left', sort=sort)
- self.assertTrue(len(left) < len(res) + 1)
+ assert len(left) < len(res) + 1
assert not res['4th'].isnull().any()
assert not res['5th'].isnull().any()
@@ -797,7 +797,7 @@ def run_asserts(left, right):
res['4th'], - res['5th'], check_names=False)
result = bind_cols(res.iloc[:, :-2])
tm.assert_series_equal(res['4th'], result, check_names=False)
- self.assertTrue(result.name is None)
+ assert result.name is None
if sort:
tm.assert_frame_equal(
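A sketch of the `indicator=True` invariants that `test_indicator` encodes (toy frames, hypothetical column names):

```python
import pandas as pd

# A left join can never yield rows present only in the right frame,
# and an inner join marks every row as coming from both sides.
df1 = pd.DataFrame({'col1': [0, 1], 'x': [10, 20]})
df2 = pd.DataFrame({'col1': [1, 2], 'y': [30, 40]})
left_join = pd.merge(df1, df2, on='col1', how='left', indicator=True)
inner_join = pd.merge(df1, df2, on='col1', how='inner', indicator=True)
assert (left_join['_merge'] != 'right_only').all()
assert (inner_join['_merge'] == 'both').all()
```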
diff --git a/pandas/tests/reshape/test_merge_asof.py b/pandas/tests/reshape/test_merge_asof.py
index f2aef409324f8..7934b8abf85a8 100644
--- a/pandas/tests/reshape/test_merge_asof.py
+++ b/pandas/tests/reshape/test_merge_asof.py
@@ -539,7 +539,7 @@ def test_non_sorted(self):
by='ticker')
trades = self.trades.sort_values('time')
- self.assertTrue(trades.time.is_monotonic)
+ assert trades.time.is_monotonic
assert not quotes.time.is_monotonic
with pytest.raises(ValueError):
merge_asof(trades, quotes,
@@ -547,8 +547,8 @@ def test_non_sorted(self):
by='ticker')
quotes = self.quotes.sort_values('time')
- self.assertTrue(trades.time.is_monotonic)
- self.assertTrue(quotes.time.is_monotonic)
+ assert trades.time.is_monotonic
+ assert quotes.time.is_monotonic
# ok, though has dupes
merge_asof(trades, self.quotes,
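A sketch of the sortedness requirement `test_non_sorted` checks: `merge_asof` needs both frames sorted on the `on` key, then takes the most recent right-hand row at or before each left key (toy data, assumed column names):

```python
import pandas as pd

left = pd.DataFrame({'t': [1, 5, 10], 'v': [1, 2, 3]})
right = pd.DataFrame({'t': [2, 3, 7], 'w': [10, 20, 30]})
out = pd.merge_asof(left, right, on='t')
assert pd.isna(out['w'].iloc[0])  # no right row at or before t=1
assert out['w'].iloc[2] == 30     # t=7 is the latest at or before t=10
```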
diff --git a/pandas/tests/reshape/test_merge_ordered.py b/pandas/tests/reshape/test_merge_ordered.py
index 77f47ff0a76e9..1f1eee0e9980b 100644
--- a/pandas/tests/reshape/test_merge_ordered.py
+++ b/pandas/tests/reshape/test_merge_ordered.py
@@ -57,7 +57,7 @@ def test_multigroup(self):
assert_frame_equal(result, result2.loc[:, result.columns])
result = merge_ordered(left, self.right, on='key', left_by='group')
- self.assertTrue(result['group'].notnull().all())
+ assert result['group'].notnull().all()
def test_merge_type(self):
class NotADataFrame(DataFrame):
diff --git a/pandas/tests/reshape/test_pivot.py b/pandas/tests/reshape/test_pivot.py
index 416e729944d39..3b3b4fe247b72 100644
--- a/pandas/tests/reshape/test_pivot.py
+++ b/pandas/tests/reshape/test_pivot.py
@@ -514,7 +514,7 @@ def test_pivot_columns_lexsorted(self):
columns=['Index', 'Symbol', 'Year'],
aggfunc='mean')
- self.assertTrue(pivoted.columns.is_monotonic)
+ assert pivoted.columns.is_monotonic
def test_pivot_complex_aggfunc(self):
f = OrderedDict([('D', ['std']), ('E', ['sum'])])
@@ -1491,10 +1491,10 @@ def test_period_weekly(self):
def test_isleapyear_deprecate(self):
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
- self.assertTrue(isleapyear(2000))
+ assert isleapyear(2000)
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
assert not isleapyear(2001)
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
- self.assertTrue(isleapyear(2004))
+ assert isleapyear(2004)
diff --git a/pandas/tests/reshape/test_tile.py b/pandas/tests/reshape/test_tile.py
index 1cc5c5f229bce..923615c93d98b 100644
--- a/pandas/tests/reshape/test_tile.py
+++ b/pandas/tests/reshape/test_tile.py
@@ -171,9 +171,9 @@ def test_qcut(self):
labels, bins = qcut(arr, 4, retbins=True)
ex_bins = quantile(arr, [0, .25, .5, .75, 1.])
result = labels.categories.left.values
- self.assertTrue(np.allclose(result, ex_bins[:-1], atol=1e-2))
+ assert np.allclose(result, ex_bins[:-1], atol=1e-2)
result = labels.categories.right.values
- self.assertTrue(np.allclose(result, ex_bins[1:], atol=1e-2))
+ assert np.allclose(result, ex_bins[1:], atol=1e-2)
ex_levels = cut(arr, ex_bins, include_lowest=True)
tm.assert_categorical_equal(labels, ex_levels)
@@ -236,7 +236,7 @@ def test_qcut_nas(self):
arr[:20] = np.nan
result = qcut(arr, 4)
- self.assertTrue(isnull(result[:20]).all())
+ assert isnull(result[:20]).all()
def test_qcut_index(self):
result = qcut([0, 2], 2)
@@ -274,16 +274,16 @@ def test_qcut_binning_issues(self):
for lev in np.unique(result):
s = lev.left
e = lev.right
- self.assertTrue(s != e)
+ assert s != e
starts.append(float(s))
ends.append(float(e))
for (sp, sn), (ep, en) in zip(zip(starts[:-1], starts[1:]),
zip(ends[:-1], ends[1:])):
- self.assertTrue(sp < sn)
- self.assertTrue(ep < en)
- self.assertTrue(ep <= sn)
+ assert sp < sn
+ assert ep < en
+ assert ep <= sn
def test_cut_return_intervals(self):
s = Series([0, 1, 2, 3, 4, 5, 6, 7, 8])
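The `test_qcut_nas` hunk above asserts NaN propagation through quantile binning; a self-contained sketch of that behavior:

```python
import numpy as np
import pandas as pd

# qcut bins by quantile; positions that were NaN in the input stay
# NaN (uncategorized) in the result.
rng = np.random.RandomState(0)
arr = rng.randn(100).astype('float64')
arr[:20] = np.nan
result = pd.qcut(arr, 4)
assert pd.isnull(result[:20]).all()
```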
diff --git a/pandas/tests/scalar/test_interval.py b/pandas/tests/scalar/test_interval.py
index 526a2916e2924..d77deabee58d4 100644
--- a/pandas/tests/scalar/test_interval.py
+++ b/pandas/tests/scalar/test_interval.py
@@ -49,12 +49,12 @@ def test_comparison(self):
with tm.assert_raises_regex(TypeError, 'unorderable types'):
Interval(0, 1) < 2
- self.assertTrue(Interval(0, 1) < Interval(1, 2))
- self.assertTrue(Interval(0, 1) < Interval(0, 2))
- self.assertTrue(Interval(0, 1) < Interval(0.5, 1.5))
- self.assertTrue(Interval(0, 1) <= Interval(0, 1))
- self.assertTrue(Interval(0, 1) > Interval(-1, 2))
- self.assertTrue(Interval(0, 1) >= Interval(0, 1))
+ assert Interval(0, 1) < Interval(1, 2)
+ assert Interval(0, 1) < Interval(0, 2)
+ assert Interval(0, 1) < Interval(0.5, 1.5)
+ assert Interval(0, 1) <= Interval(0, 1)
+ assert Interval(0, 1) > Interval(-1, 2)
+ assert Interval(0, 1) >= Interval(0, 1)
def test_hash(self):
# should not raise
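A sketch of the ordering rules the `Interval` comparisons above encode: intervals order by left endpoint, then right, and comparing against a bare number raises `TypeError`:

```python
import pandas as pd

assert pd.Interval(0, 1) < pd.Interval(0, 2)
assert pd.Interval(0, 1) <= pd.Interval(0, 1)

# mixing an Interval with a plain scalar is rejected
try:
    pd.Interval(0, 1) < 2
    raised = False
except TypeError:
    raised = True
assert raised
```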
diff --git a/pandas/tests/scalar/test_period.py b/pandas/tests/scalar/test_period.py
index c8f3833c2c964..fc0921451c133 100644
--- a/pandas/tests/scalar/test_period.py
+++ b/pandas/tests/scalar/test_period.py
@@ -21,14 +21,14 @@ def test_is_leap_year(self):
# GH 13727
for freq in ['A', 'M', 'D', 'H']:
p = Period('2000-01-01 00:00:00', freq=freq)
- self.assertTrue(p.is_leap_year)
+ assert p.is_leap_year
assert isinstance(p.is_leap_year, bool)
p = Period('1999-01-01 00:00:00', freq=freq)
assert not p.is_leap_year
p = Period('2004-01-01 00:00:00', freq=freq)
- self.assertTrue(p.is_leap_year)
+ assert p.is_leap_year
p = Period('2100-01-01 00:00:00', freq=freq)
assert not p.is_leap_year
@@ -946,7 +946,7 @@ def test_notEqual(self):
self.assertNotEqual(self.january1, self.february)
def test_greater(self):
- self.assertTrue(self.february > self.january1)
+ assert self.february > self.january1
def test_greater_Raises_Value(self):
with pytest.raises(period.IncompatibleFrequency):
@@ -957,7 +957,7 @@ def test_greater_Raises_Type(self):
self.january1 > 1
def test_greaterEqual(self):
- self.assertTrue(self.january1 >= self.january2)
+ assert self.january1 >= self.january2
def test_greaterEqual_Raises_Value(self):
with pytest.raises(period.IncompatibleFrequency):
@@ -967,7 +967,7 @@ def test_greaterEqual_Raises_Value(self):
print(self.january1 >= 1)
def test_smallerEqual(self):
- self.assertTrue(self.january1 <= self.january2)
+ assert self.january1 <= self.january2
def test_smallerEqual_Raises_Value(self):
with pytest.raises(period.IncompatibleFrequency):
@@ -978,7 +978,7 @@ def test_smallerEqual_Raises_Type(self):
self.january1 <= 1
def test_smaller(self):
- self.assertTrue(self.january1 < self.february)
+ assert self.january1 < self.february
def test_smaller_Raises_Value(self):
with pytest.raises(period.IncompatibleFrequency):
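A minimal sketch of the `Period` ordering tests above: periods of matching frequency compare chronologically, while mismatched frequencies raise `IncompatibleFrequency` rather than comparing silently:

```python
import pandas as pd

jan = pd.Period('2000-01', freq='M')
feb = pd.Period('2000-02', freq='M')
assert feb > jan                            # chronological ordering
assert jan <= pd.Period('2000-01', freq='M')  # reflexive for equal periods
```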
diff --git a/pandas/tests/scalar/test_period_asfreq.py b/pandas/tests/scalar/test_period_asfreq.py
index 84793658a6537..d31eeda5c8e3c 100644
--- a/pandas/tests/scalar/test_period_asfreq.py
+++ b/pandas/tests/scalar/test_period_asfreq.py
@@ -718,4 +718,4 @@ def test_asfreq_MS(self):
with tm.assert_raises_regex(ValueError, msg):
pd.Period('2013-01', 'MS')
- self.assertTrue(_period_code_map.get("MS") is None)
+ assert _period_code_map.get("MS") is None
diff --git a/pandas/tests/scalar/test_timedelta.py b/pandas/tests/scalar/test_timedelta.py
index 788c204ca3eb3..9efd180afc2da 100644
--- a/pandas/tests/scalar/test_timedelta.py
+++ b/pandas/tests/scalar/test_timedelta.py
@@ -55,11 +55,9 @@ def test_construction(self):
# rounding cases
self.assertEqual(Timedelta(82739999850000).value, 82739999850000)
- self.assertTrue('0 days 22:58:59.999850' in str(Timedelta(
- 82739999850000)))
+ assert '0 days 22:58:59.999850' in str(Timedelta(82739999850000))
self.assertEqual(Timedelta(123072001000000).value, 123072001000000)
- self.assertTrue('1 days 10:11:12.001' in str(Timedelta(
- 123072001000000)))
+ assert '1 days 10:11:12.001' in str(Timedelta(123072001000000))
# string conversion with/without leading zero
# GH 9570
@@ -184,7 +182,7 @@ def test_total_seconds_scalar(self):
tm.assert_almost_equal(rng.total_seconds(), expt)
rng = Timedelta(np.nan)
- self.assertTrue(np.isnan(rng.total_seconds()))
+ assert np.isnan(rng.total_seconds())
def test_repr(self):
@@ -202,20 +200,20 @@ def test_conversion(self):
for td in [Timedelta(10, unit='d'),
Timedelta('1 days, 10:11:12.012345')]:
pydt = td.to_pytimedelta()
- self.assertTrue(td == Timedelta(pydt))
+ assert td == Timedelta(pydt)
self.assertEqual(td, pydt)
- self.assertTrue(isinstance(pydt, timedelta) and not isinstance(
+ assert (isinstance(pydt, timedelta) and not isinstance(
pydt, Timedelta))
self.assertEqual(td, np.timedelta64(td.value, 'ns'))
td64 = td.to_timedelta64()
self.assertEqual(td64, np.timedelta64(td.value, 'ns'))
self.assertEqual(td, td64)
- self.assertTrue(isinstance(td64, np.timedelta64))
+ assert isinstance(td64, np.timedelta64)
# this is NOT equal and cannot be roundtripped (because of the nanos)
td = Timedelta('1 days, 10:11:12.012345678')
- self.assertTrue(td != td.to_pytimedelta())
+ assert td != td.to_pytimedelta()
def test_freq_conversion(self):
@@ -240,7 +238,7 @@ def test_freq_conversion(self):
def test_fields(self):
def check(value):
# that we are int/long like
- self.assertTrue(isinstance(value, (int, compat.long)))
+ assert isinstance(value, (int, compat.long))
# compat to datetime.timedelta
rng = to_timedelta('1 days, 10:11:12')
@@ -261,7 +259,7 @@ def check(value):
td = Timedelta('-1 days, 10:11:12')
self.assertEqual(abs(td), Timedelta('13:48:48'))
- self.assertTrue(str(td) == "-1 days +10:11:12")
+ assert str(td) == "-1 days +10:11:12"
self.assertEqual(-td, Timedelta('0 days 13:48:48'))
self.assertEqual(-Timedelta('-1 days, 10:11:12').value, 49728000000000)
self.assertEqual(Timedelta('-1 days, 10:11:12').value, -49728000000000)
@@ -455,13 +453,13 @@ def test_contains(self):
td = to_timedelta([pd.NaT])
for v in [pd.NaT, None, float('nan'), np.nan]:
- self.assertTrue((v in td))
+ assert v in td
def test_identity(self):
td = Timedelta(10, unit='d')
- self.assertTrue(isinstance(td, Timedelta))
- self.assertTrue(isinstance(td, timedelta))
+ assert isinstance(td, Timedelta)
+ assert isinstance(td, timedelta)
def test_short_format_converters(self):
def conv(v):
@@ -547,10 +545,9 @@ def test_overflow(self):
expected = pd.Timedelta((pd.DatetimeIndex((s - s.min())).asi8 / len(s)
).sum())
- # the computation is converted to float so might be some loss of
- # precision
- self.assertTrue(np.allclose(result.value / 1000, expected.value /
- 1000))
+ # the computation is converted to float so
+ # might be some loss of precision
+ assert np.allclose(result.value / 1000, expected.value / 1000)
# sum
pytest.raises(ValueError, lambda: (s - s.min()).sum())
@@ -575,8 +572,7 @@ def test_timedelta_hash_equality(self):
self.assertEqual(d[v], 2)
tds = timedelta_range('1 second', periods=20)
- self.assertTrue(all(hash(td) == hash(td.to_pytimedelta()) for td in
- tds))
+ assert all(hash(td) == hash(td.to_pytimedelta()) for td in tds)
# python timedeltas drop ns resolution
ns_td = Timedelta(1, 'ns')
@@ -659,7 +655,7 @@ def test_components(self):
result = s.dt.components
assert not result.iloc[0].isnull().all()
- self.assertTrue(result.iloc[1].isnull().all())
+ assert result.iloc[1].isnull().all()
def test_isoformat(self):
td = Timedelta(days=6, minutes=50, seconds=3,
@@ -708,4 +704,4 @@ def test_ops_error_str(self):
l > r
assert not l == r
- self.assertTrue(l != r)
+ assert l != r
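The `test_conversion` hunks above hinge on a precision boundary; a sketch: `to_pytimedelta()` keeps microsecond precision, so a microsecond-resolution `Timedelta` round-trips while a nanosecond-resolution one does not:

```python
import pandas as pd

td = pd.Timedelta('1 days 10:11:12.012345')
assert td == pd.Timedelta(td.to_pytimedelta())  # microseconds round-trip

td_ns = pd.Timedelta('1 days 10:11:12.012345678')
assert td_ns != td_ns.to_pytimedelta()  # the 678 ns are dropped
```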
diff --git a/pandas/tests/scalar/test_timestamp.py b/pandas/tests/scalar/test_timestamp.py
index cfc4cf93e720c..72b1e4d450b84 100644
--- a/pandas/tests/scalar/test_timestamp.py
+++ b/pandas/tests/scalar/test_timestamp.py
@@ -438,7 +438,7 @@ def test_tz_localize_roundtrip(self):
reset = localized.tz_localize(None)
self.assertEqual(reset, ts)
- self.assertTrue(reset.tzinfo is None)
+ assert reset.tzinfo is None
def test_tz_convert_roundtrip(self):
for tz in ['UTC', 'Asia/Tokyo', 'US/Eastern', 'dateutil/US/Pacific']:
@@ -449,7 +449,7 @@ def test_tz_convert_roundtrip(self):
reset = converted.tz_convert(None)
self.assertEqual(reset, Timestamp(t))
- self.assertTrue(reset.tzinfo is None)
+ assert reset.tzinfo is None
self.assertEqual(reset,
converted.tz_convert('UTC').tz_localize(None))
@@ -487,11 +487,11 @@ def test_now(self):
# Check that the delta between the times is less than 1s (arbitrarily
# small)
delta = Timedelta(seconds=1)
- self.assertTrue(abs(ts_from_method - ts_from_string) < delta)
- self.assertTrue(abs(ts_datetime - ts_from_method) < delta)
- self.assertTrue(abs(ts_from_method_tz - ts_from_string_tz) < delta)
- self.assertTrue(abs(ts_from_string_tz.tz_localize(None) -
- ts_from_method_tz.tz_localize(None)) < delta)
+ assert abs(ts_from_method - ts_from_string) < delta
+ assert abs(ts_datetime - ts_from_method) < delta
+ assert abs(ts_from_method_tz - ts_from_string_tz) < delta
+ assert (abs(ts_from_string_tz.tz_localize(None) -
+ ts_from_method_tz.tz_localize(None)) < delta)
def test_today(self):
@@ -505,11 +505,11 @@ def test_today(self):
# Check that the delta between the times is less than 1s (arbitrarily
# small)
delta = Timedelta(seconds=1)
- self.assertTrue(abs(ts_from_method - ts_from_string) < delta)
- self.assertTrue(abs(ts_datetime - ts_from_method) < delta)
- self.assertTrue(abs(ts_from_method_tz - ts_from_string_tz) < delta)
- self.assertTrue(abs(ts_from_string_tz.tz_localize(None) -
- ts_from_method_tz.tz_localize(None)) < delta)
+ assert abs(ts_from_method - ts_from_string) < delta
+ assert abs(ts_datetime - ts_from_method) < delta
+ assert abs(ts_from_method_tz - ts_from_string_tz) < delta
+ assert (abs(ts_from_string_tz.tz_localize(None) -
+ ts_from_method_tz.tz_localize(None)) < delta)
def test_asm8(self):
np.random.seed(7960929)
@@ -523,7 +523,7 @@ def test_asm8(self):
def test_fields(self):
def check(value, equal):
# that we are int/long like
- self.assertTrue(isinstance(value, (int, compat.long)))
+ assert isinstance(value, (int, compat.long))
self.assertEqual(value, equal)
# GH 10050
@@ -564,11 +564,11 @@ def check(value, equal):
ts = Timestamp('2014-01-01 00:00:00+01:00')
starts = ['is_month_start', 'is_quarter_start', 'is_year_start']
for start in starts:
- self.assertTrue(getattr(ts, start))
+ assert getattr(ts, start)
ts = Timestamp('2014-12-31 23:59:59+01:00')
ends = ['is_month_end', 'is_year_end', 'is_quarter_end']
for end in ends:
- self.assertTrue(getattr(ts, end))
+ assert getattr(ts, end)
def test_pprint(self):
# GH12622
@@ -864,26 +864,26 @@ def test_comparison(self):
self.assertEqual(val, val)
assert not val != val
assert not val < val
- self.assertTrue(val <= val)
+ assert val <= val
assert not val > val
- self.assertTrue(val >= val)
+ assert val >= val
other = datetime(2012, 5, 18)
self.assertEqual(val, other)
assert not val != other
assert not val < other
- self.assertTrue(val <= other)
+ assert val <= other
assert not val > other
- self.assertTrue(val >= other)
+ assert val >= other
other = Timestamp(stamp + 100)
self.assertNotEqual(val, other)
self.assertNotEqual(val, other)
- self.assertTrue(val < other)
- self.assertTrue(val <= other)
- self.assertTrue(other > val)
- self.assertTrue(other >= val)
+ assert val < other
+ assert val <= other
+ assert other > val
+ assert other >= val
def test_compare_invalid(self):
@@ -898,14 +898,14 @@ def test_compare_invalid(self):
assert not val == np.float64(1)
assert not val == np.int64(1)
- self.assertTrue(val != 'foo')
- self.assertTrue(val != 10.0)
- self.assertTrue(val != 1)
- self.assertTrue(val != long(1))
- self.assertTrue(val != [])
- self.assertTrue(val != {'foo': 1})
- self.assertTrue(val != np.float64(1))
- self.assertTrue(val != np.int64(1))
+ assert val != 'foo'
+ assert val != 10.0
+ assert val != 1
+ assert val != long(1)
+ assert val != []
+ assert val != {'foo': 1}
+ assert val != np.float64(1)
+ assert val != np.int64(1)
# ops testing
df = DataFrame(np.random.randn(5, 2))
@@ -1086,14 +1086,14 @@ def test_is_leap_year(self):
# GH 13727
for tz in [None, 'UTC', 'US/Eastern', 'Asia/Tokyo']:
dt = Timestamp('2000-01-01 00:00:00', tz=tz)
- self.assertTrue(dt.is_leap_year)
+ assert dt.is_leap_year
assert isinstance(dt.is_leap_year, bool)
dt = Timestamp('1999-01-01 00:00:00', tz=tz)
assert not dt.is_leap_year
dt = Timestamp('2004-01-01 00:00:00', tz=tz)
- self.assertTrue(dt.is_leap_year)
+ assert dt.is_leap_year
dt = Timestamp('2100-01-01 00:00:00', tz=tz)
assert not dt.is_leap_year
@@ -1389,10 +1389,10 @@ def test_timestamp_compare_with_early_datetime(self):
self.assertNotEqual(stamp, datetime.min)
self.assertNotEqual(stamp, datetime(1600, 1, 1))
self.assertNotEqual(stamp, datetime(2700, 1, 1))
- self.assertTrue(stamp > datetime(1600, 1, 1))
- self.assertTrue(stamp >= datetime(1600, 1, 1))
- self.assertTrue(stamp < datetime(2700, 1, 1))
- self.assertTrue(stamp <= datetime(2700, 1, 1))
+ assert stamp > datetime(1600, 1, 1)
+ assert stamp >= datetime(1600, 1, 1)
+ assert stamp < datetime(2700, 1, 1)
+ assert stamp <= datetime(2700, 1, 1)
def test_timestamp_equality(self):
@@ -1498,7 +1498,7 @@ def test_woy_boundary(self):
result = np.array([Timestamp(datetime(*args)).week
for args in [(2000, 1, 1), (2000, 1, 2), (
2005, 1, 1), (2005, 1, 2)]])
- self.assertTrue((result == [52, 52, 53, 53]).all())
+ assert (result == [52, 52, 53, 53]).all()
class TestTsUtil(tm.TestCase):
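A sketch of the interoperability `test_comparison` relies on: a `Timestamp` compares transparently against `datetime.datetime`:

```python
from datetime import datetime
import pandas as pd

val = pd.Timestamp(datetime(2012, 5, 18))
other = datetime(2012, 5, 18)
assert val == other
assert val <= other
assert val < datetime(2012, 5, 19)
```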
diff --git a/pandas/tests/series/test_alter_axes.py b/pandas/tests/series/test_alter_axes.py
index 17a270c3a9346..e0964fea95cc9 100644
--- a/pandas/tests/series/test_alter_axes.py
+++ b/pandas/tests/series/test_alter_axes.py
@@ -70,7 +70,7 @@ def test_rename_set_name(self):
result = s.rename(name)
self.assertEqual(result.name, name)
tm.assert_numpy_array_equal(result.index.values, s.index.values)
- self.assertTrue(s.name is None)
+ assert s.name is None
def test_rename_set_name_inplace(self):
s = Series(range(3), index=list('abc'))
@@ -94,8 +94,8 @@ def test_set_name(self):
s = Series([1, 2, 3])
s2 = s._set_name('foo')
self.assertEqual(s2.name, 'foo')
- self.assertTrue(s.name is None)
- self.assertTrue(s is not s2)
+ assert s.name is None
+ assert s is not s2
def test_rename_inplace(self):
renamer = lambda x: x.strftime('%Y%m%d')
@@ -109,7 +109,7 @@ def test_set_index_makes_timeseries(self):
s = Series(lrange(10))
s.index = idx
- self.assertTrue(s.index.is_all_dates)
+ assert s.index.is_all_dates
def test_reset_index(self):
df = tm.makeDataFrame()[:5]
diff --git a/pandas/tests/series/test_analytics.py b/pandas/tests/series/test_analytics.py
index f5bccdd55e944..233d71cb1d8a5 100644
--- a/pandas/tests/series/test_analytics.py
+++ b/pandas/tests/series/test_analytics.py
@@ -35,14 +35,14 @@ def test_sum_zero(self):
self.assertEqual(nanops.nansum(arr), 0)
arr = np.empty((10, 0))
- self.assertTrue((nanops.nansum(arr, axis=1) == 0).all())
+ assert (nanops.nansum(arr, axis=1) == 0).all()
# GH #844
s = Series([], index=[])
self.assertEqual(s.sum(), 0)
df = DataFrame(np.empty((10, 0)))
- self.assertTrue((df.sum(1) == 0).all())
+ assert (df.sum(1) == 0).all()
def test_nansum_buglet(self):
s = Series([1.0, np.nan], index=[0, 1])
@@ -80,17 +80,17 @@ def test_overflow(self):
result = s.sum(skipna=False)
self.assertEqual(result, v.sum(dtype=dtype))
result = s.min(skipna=False)
- self.assertTrue(np.allclose(float(result), 0.0))
+ assert np.allclose(float(result), 0.0)
result = s.max(skipna=False)
- self.assertTrue(np.allclose(float(result), v[-1]))
+ assert np.allclose(float(result), v[-1])
# use bottleneck if available
result = s.sum()
self.assertEqual(result, v.sum(dtype=dtype))
result = s.min()
- self.assertTrue(np.allclose(float(result), 0.0))
+ assert np.allclose(float(result), 0.0)
result = s.max()
- self.assertTrue(np.allclose(float(result), v[-1]))
+ assert np.allclose(float(result), v[-1])
def test_sum(self):
self._check_stat_op('sum', np.sum, check_allna=True)
@@ -104,7 +104,7 @@ def test_sum_inf(self):
s[5:8] = np.inf
s2[5:8] = np.nan
- self.assertTrue(np.isinf(s.sum()))
+ assert np.isinf(s.sum())
arr = np.random.randn(100, 100).astype('f4')
arr[:, 2] = np.inf
@@ -113,7 +113,7 @@ def test_sum_inf(self):
assert_almost_equal(s.sum(), s2.sum())
res = nanops.nansum(arr, axis=1)
- self.assertTrue(np.isinf(res).all())
+ assert np.isinf(res).all()
def test_mean(self):
self._check_stat_op('mean', np.mean)
@@ -248,10 +248,10 @@ def test_var_std(self):
# 1 - element series with ddof=1
s = self.ts.iloc[[0]]
result = s.var(ddof=1)
- self.assertTrue(isnull(result))
+ assert isnull(result)
result = s.std(ddof=1)
- self.assertTrue(isnull(result))
+ assert isnull(result)
def test_sem(self):
alt = lambda x: np.std(x, ddof=1) / np.sqrt(len(x))
@@ -265,7 +265,7 @@ def test_sem(self):
# 1 - element series with ddof=1
s = self.ts.iloc[[0]]
result = s.sem(ddof=1)
- self.assertTrue(isnull(result))
+ assert isnull(result)
def test_skew(self):
tm._skip_if_no_scipy()
@@ -281,11 +281,11 @@ def test_skew(self):
s = Series(np.ones(i))
df = DataFrame(np.ones((i, i)))
if i < min_N:
- self.assertTrue(np.isnan(s.skew()))
- self.assertTrue(np.isnan(df.skew()).all())
+ assert np.isnan(s.skew())
+ assert np.isnan(df.skew()).all()
else:
self.assertEqual(0, s.skew())
- self.assertTrue((df.skew() == 0).all())
+ assert (df.skew() == 0).all()
def test_kurt(self):
tm._skip_if_no_scipy()
@@ -307,11 +307,11 @@ def test_kurt(self):
s = Series(np.ones(i))
df = DataFrame(np.ones((i, i)))
if i < min_N:
- self.assertTrue(np.isnan(s.kurt()))
- self.assertTrue(np.isnan(df.kurt()).all())
+ assert np.isnan(s.kurt())
+ assert np.isnan(df.kurt()).all()
else:
self.assertEqual(0, s.kurt())
- self.assertTrue((df.kurt() == 0).all())
+ assert (df.kurt() == 0).all()
def test_describe(self):
s = Series([0, 1, 2, 3, 4], name='int_data')
@@ -337,14 +337,14 @@ def test_describe(self):
def test_argsort(self):
self._check_accum_op('argsort', check_dtype=False)
argsorted = self.ts.argsort()
- self.assertTrue(issubclass(argsorted.dtype.type, np.integer))
+ assert issubclass(argsorted.dtype.type, np.integer)
# GH 2967 (introduced bug in 0.11-dev I think)
s = Series([Timestamp('201301%02d' % (i + 1)) for i in range(5)])
self.assertEqual(s.dtype, 'datetime64[ns]')
shifted = s.shift(-1)
self.assertEqual(shifted.dtype, 'datetime64[ns]')
- self.assertTrue(isnull(shifted[4]))
+ assert isnull(shifted[4])
result = s.argsort()
expected = Series(lrange(5), dtype='int64')
@@ -503,8 +503,8 @@ def testit():
pytest.raises(TypeError, f, ds)
# skipna or no
- self.assertTrue(notnull(f(self.series)))
- self.assertTrue(isnull(f(self.series, skipna=False)))
+ assert notnull(f(self.series))
+ assert isnull(f(self.series, skipna=False))
# check the result is correct
nona = self.series.dropna()
@@ -517,12 +517,12 @@ def testit():
# xref 9422
# bottleneck >= 1.0 give 0.0 for an allna Series sum
try:
- self.assertTrue(nanops._USE_BOTTLENECK)
+ assert nanops._USE_BOTTLENECK
import bottleneck as bn # noqa
- self.assertTrue(bn.__version__ >= LooseVersion('1.0'))
+ assert bn.__version__ >= LooseVersion('1.0')
self.assertEqual(f(allna), 0.0)
except:
- self.assertTrue(np.isnan(f(allna)))
+ assert np.isnan(f(allna))
# dtype=object with None, it works!
s = Series([1, 2, 3, None, 5])
@@ -647,7 +647,7 @@ def test_all_any(self):
ts = tm.makeTimeSeries()
bool_series = ts > 0
assert not bool_series.all()
- self.assertTrue(bool_series.any())
+ assert bool_series.any()
# Alternative types, with implicit 'object' dtype.
s = Series(['abc', True])
@@ -657,9 +657,9 @@ def test_all_any_params(self):
# Check skipna, with implicit 'object' dtype.
s1 = Series([np.nan, True])
s2 = Series([np.nan, False])
- self.assertTrue(s1.all(skipna=False)) # nan && True => True
- self.assertTrue(s1.all(skipna=True))
- self.assertTrue(np.isnan(s2.any(skipna=False))) # nan || False => nan
+ assert s1.all(skipna=False) # nan && True => True
+ assert s1.all(skipna=True)
+ assert np.isnan(s2.any(skipna=False)) # nan || False => nan
assert not s2.any(skipna=True)
# Check level.
@@ -722,20 +722,20 @@ def test_ops_consistency_on_empty(self):
self.assertEqual(result, 0)
result = Series(dtype=float).mean()
- self.assertTrue(isnull(result))
+ assert isnull(result)
result = Series(dtype=float).median()
- self.assertTrue(isnull(result))
+ assert isnull(result)
# timedelta64[ns]
result = Series(dtype='m8[ns]').sum()
self.assertEqual(result, Timedelta(0))
result = Series(dtype='m8[ns]').mean()
- self.assertTrue(result is pd.NaT)
+ assert result is pd.NaT
result = Series(dtype='m8[ns]').median()
- self.assertTrue(result is pd.NaT)
+ assert result is pd.NaT
def test_corr(self):
tm._skip_if_no_scipy()
@@ -748,19 +748,19 @@ def test_corr(self):
# partial overlap
self.assertAlmostEqual(self.ts[:15].corr(self.ts[5:]), 1)
- self.assertTrue(isnull(self.ts[:15].corr(self.ts[5:], min_periods=12)))
+ assert isnull(self.ts[:15].corr(self.ts[5:], min_periods=12))
ts1 = self.ts[:15].reindex(self.ts.index)
ts2 = self.ts[5:].reindex(self.ts.index)
- self.assertTrue(isnull(ts1.corr(ts2, min_periods=12)))
+ assert isnull(ts1.corr(ts2, min_periods=12))
# No overlap
- self.assertTrue(np.isnan(self.ts[::2].corr(self.ts[1::2])))
+ assert np.isnan(self.ts[::2].corr(self.ts[1::2]))
# all NA
cp = self.ts[:10].copy()
cp[:] = np.nan
- self.assertTrue(isnull(cp.corr(cp)))
+ assert isnull(cp.corr(cp))
A = tm.makeTimeSeries()
B = tm.makeTimeSeries()
@@ -812,19 +812,19 @@ def test_cov(self):
self.ts[5:15].std() ** 2)
# No overlap
- self.assertTrue(np.isnan(self.ts[::2].cov(self.ts[1::2])))
+ assert np.isnan(self.ts[::2].cov(self.ts[1::2]))
# all NA
cp = self.ts[:10].copy()
cp[:] = np.nan
- self.assertTrue(isnull(cp.cov(cp)))
+ assert isnull(cp.cov(cp))
# min_periods
- self.assertTrue(isnull(self.ts[:15].cov(self.ts[5:], min_periods=12)))
+ assert isnull(self.ts[:15].cov(self.ts[5:], min_periods=12))
ts1 = self.ts[:15].reindex(self.ts.index)
ts2 = self.ts[5:].reindex(self.ts.index)
- self.assertTrue(isnull(ts1.cov(ts2, min_periods=12)))
+ assert isnull(ts1.cov(ts2, min_periods=12))
def test_count(self):
self.assertEqual(self.ts.count(), len(self.ts))
@@ -859,7 +859,7 @@ def test_dot(self):
# Check ndarray argument
result = a.dot(b.values)
- self.assertTrue(np.all(result == expected.values))
+ assert np.all(result == expected.values)
assert_almost_equal(a.dot(b['2'].values), expected['2'])
# Check series argument
@@ -1154,7 +1154,7 @@ def test_idxmin(self):
# skipna or no
self.assertEqual(self.series[self.series.idxmin()], self.series.min())
- self.assertTrue(isnull(self.series.idxmin(skipna=False)))
+ assert isnull(self.series.idxmin(skipna=False))
# no NaNs
nona = self.series.dropna()
@@ -1164,7 +1164,7 @@ def test_idxmin(self):
# all NaNs
allna = self.series * nan
- self.assertTrue(isnull(allna.idxmin()))
+ assert isnull(allna.idxmin())
# datetime64[ns]
from pandas import date_range
@@ -1196,7 +1196,7 @@ def test_idxmax(self):
# skipna or no
self.assertEqual(self.series[self.series.idxmax()], self.series.max())
- self.assertTrue(isnull(self.series.idxmax(skipna=False)))
+ assert isnull(self.series.idxmax(skipna=False))
# no NaNs
nona = self.series.dropna()
@@ -1206,7 +1206,7 @@ def test_idxmax(self):
# all NaNs
allna = self.series * nan
- self.assertTrue(isnull(allna.idxmax()))
+ assert isnull(allna.idxmax())
from pandas import date_range
s = Series(date_range('20130102', periods=6))
@@ -1252,7 +1252,7 @@ def test_ptp(self):
# GH11163
s = Series([3, 5, np.nan, -3, 10])
self.assertEqual(s.ptp(), 13)
- self.assertTrue(pd.isnull(s.ptp(skipna=False)))
+ assert pd.isnull(s.ptp(skipna=False))
mi = pd.MultiIndex.from_product([['a', 'b'], [1, 2, 3]])
s = pd.Series([1, np.nan, 7, 3, 5, np.nan], index=mi)
@@ -1364,24 +1364,24 @@ def test_is_unique(self):
s = Series(np.random.randint(0, 10, size=1000))
assert not s.is_unique
s = Series(np.arange(1000))
- self.assertTrue(s.is_unique)
+ assert s.is_unique
def test_is_monotonic(self):
s = Series(np.random.randint(0, 10, size=1000))
assert not s.is_monotonic
s = Series(np.arange(1000))
- self.assertTrue(s.is_monotonic)
- self.assertTrue(s.is_monotonic_increasing)
+ assert s.is_monotonic
+ assert s.is_monotonic_increasing
s = Series(np.arange(1000, 0, -1))
- self.assertTrue(s.is_monotonic_decreasing)
+ assert s.is_monotonic_decreasing
s = Series(pd.date_range('20130101', periods=10))
- self.assertTrue(s.is_monotonic)
- self.assertTrue(s.is_monotonic_increasing)
+ assert s.is_monotonic
+ assert s.is_monotonic_increasing
s = Series(list(reversed(s.tolist())))
assert not s.is_monotonic
- self.assertTrue(s.is_monotonic_decreasing)
+ assert s.is_monotonic_decreasing
def test_sort_index_level(self):
mi = MultiIndex.from_tuples([[1, 1, 3], [1, 1, 1]], names=list('ABC'))
@@ -1433,13 +1433,13 @@ def test_shift_categorical(self):
sp1 = s.shift(1)
assert_index_equal(s.index, sp1.index)
- self.assertTrue(np.all(sp1.values.codes[:1] == -1))
- self.assertTrue(np.all(s.values.codes[:-1] == sp1.values.codes[1:]))
+ assert np.all(sp1.values.codes[:1] == -1)
+ assert np.all(s.values.codes[:-1] == sp1.values.codes[1:])
sn2 = s.shift(-2)
assert_index_equal(s.index, sn2.index)
- self.assertTrue(np.all(sn2.values.codes[-2:] == -1))
- self.assertTrue(np.all(s.values.codes[2:] == sn2.values.codes[:-2]))
+ assert np.all(sn2.values.codes[-2:] == -1)
+ assert np.all(s.values.codes[2:] == sn2.values.codes[:-2])
assert_index_equal(s.values.categories, sp1.values.categories)
assert_index_equal(s.values.categories, sn2.values.categories)
@@ -1452,7 +1452,7 @@ def test_reshape_non_2d(self):
# see gh-4554
with tm.assert_produces_warning(FutureWarning):
x = Series(np.random.random(201), name='x')
- self.assertTrue(x.reshape(x.shape, ) is x)
+ assert x.reshape(x.shape, ) is x
# see gh-2719
with tm.assert_produces_warning(FutureWarning):
diff --git a/pandas/tests/series/test_api.py b/pandas/tests/series/test_api.py
index 5b7ac9bc2b33c..7d331f0643b18 100644
--- a/pandas/tests/series/test_api.py
+++ b/pandas/tests/series/test_api.py
@@ -124,28 +124,28 @@ def test_tab_completion(self):
# GH 9910
s = Series(list('abcd'))
# Series of str values should have .str but not .dt/.cat in __dir__
- self.assertTrue('str' in dir(s))
- self.assertTrue('dt' not in dir(s))
- self.assertTrue('cat' not in dir(s))
+ assert 'str' in dir(s)
+ assert 'dt' not in dir(s)
+ assert 'cat' not in dir(s)
# similiarly for .dt
s = Series(date_range('1/1/2015', periods=5))
- self.assertTrue('dt' in dir(s))
- self.assertTrue('str' not in dir(s))
- self.assertTrue('cat' not in dir(s))
+ assert 'dt' in dir(s)
+ assert 'str' not in dir(s)
+ assert 'cat' not in dir(s)
- # similiarly for .cat, but with the twist that str and dt should be
- # there if the categories are of that type first cat and str
+ # Similarly for .cat, but with the twist that str and dt should be
+ # there if the categories are of that type first cat and str.
s = Series(list('abbcd'), dtype="category")
- self.assertTrue('cat' in dir(s))
- self.assertTrue('str' in dir(s)) # as it is a string categorical
- self.assertTrue('dt' not in dir(s))
+ assert 'cat' in dir(s)
+ assert 'str' in dir(s) # as it is a string categorical
+ assert 'dt' not in dir(s)
# similar to cat and str
s = Series(date_range('1/1/2015', periods=5)).astype("category")
- self.assertTrue('cat' in dir(s))
- self.assertTrue('str' not in dir(s))
- self.assertTrue('dt' in dir(s)) # as it is a datetime categorical
+ assert 'cat' in dir(s)
+ assert 'str' not in dir(s)
+ assert 'dt' in dir(s) # as it is a datetime categorical
def test_not_hashable(self):
s_empty = Series()
@@ -238,12 +238,12 @@ def test_copy(self):
if deep is None or deep is True:
# Did not modify original Series
- self.assertTrue(np.isnan(s2[0]))
+ assert np.isnan(s2[0])
assert not np.isnan(s[0])
else:
# we DID modify the original Series
- self.assertTrue(np.isnan(s2[0]))
- self.assertTrue(np.isnan(s[0]))
+ assert np.isnan(s2[0])
+ assert np.isnan(s[0])
# GH 11794
# copy of tz-aware
diff --git a/pandas/tests/series/test_apply.py b/pandas/tests/series/test_apply.py
index afe46e5dcf480..c764d7b856bb8 100644
--- a/pandas/tests/series/test_apply.py
+++ b/pandas/tests/series/test_apply.py
@@ -373,17 +373,17 @@ def test_map_int(self):
right = Series({1: 11, 2: 22, 3: 33})
self.assertEqual(left.dtype, np.float_)
- self.assertTrue(issubclass(right.dtype.type, np.integer))
+ assert issubclass(right.dtype.type, np.integer)
merged = left.map(right)
self.assertEqual(merged.dtype, np.float_)
- self.assertTrue(isnull(merged['d']))
- self.assertTrue(not isnull(merged['c']))
+ assert isnull(merged['d'])
+ assert not isnull(merged['c'])
def test_map_type_inference(self):
s = Series(lrange(3))
s2 = s.map(lambda x: np.where(x == 0, 0, 1))
- self.assertTrue(issubclass(s2.dtype.type, np.integer))
+ assert issubclass(s2.dtype.type, np.integer)
def test_map_decimal(self):
from decimal import Decimal
diff --git a/pandas/tests/series/test_asof.py b/pandas/tests/series/test_asof.py
index 137390b6427eb..80556a5e5ffdb 100644
--- a/pandas/tests/series/test_asof.py
+++ b/pandas/tests/series/test_asof.py
@@ -23,18 +23,18 @@ def test_basic(self):
dates = date_range('1/1/1990', periods=N * 3, freq='25s')
result = ts.asof(dates)
- self.assertTrue(notnull(result).all())
+ assert notnull(result).all()
lb = ts.index[14]
ub = ts.index[30]
result = ts.asof(list(dates))
- self.assertTrue(notnull(result).all())
+ assert notnull(result).all()
lb = ts.index[14]
ub = ts.index[30]
mask = (result.index >= lb) & (result.index < ub)
rs = result[mask]
- self.assertTrue((rs == ts[lb]).all())
+ assert (rs == ts[lb]).all()
val = result[result.index[result.index >= ub][0]]
self.assertEqual(ts[ub], val)
@@ -63,7 +63,7 @@ def test_scalar(self):
# no as of value
d = ts.index[0] - offsets.BDay()
- self.assertTrue(np.isnan(ts.asof(d)))
+ assert np.isnan(ts.asof(d))
def test_with_nan(self):
# basic asof test
@@ -98,19 +98,19 @@ def test_periodindex(self):
dates = date_range('1/1/1990', periods=N * 3, freq='37min')
result = ts.asof(dates)
- self.assertTrue(notnull(result).all())
+ assert notnull(result).all()
lb = ts.index[14]
ub = ts.index[30]
result = ts.asof(list(dates))
- self.assertTrue(notnull(result).all())
+ assert notnull(result).all()
lb = ts.index[14]
ub = ts.index[30]
pix = PeriodIndex(result.index.values, freq='H')
mask = (pix >= lb) & (pix < ub)
rs = result[mask]
- self.assertTrue((rs == ts[lb]).all())
+ assert (rs == ts[lb]).all()
ts[5:10] = np.nan
ts[15:20] = np.nan
@@ -130,7 +130,7 @@ def test_periodindex(self):
# no as of value
d = ts.index[0].to_timestamp() - offsets.BDay()
- self.assertTrue(isnull(ts.asof(d)))
+ assert isnull(ts.asof(d))
def test_errors(self):
diff --git a/pandas/tests/series/test_combine_concat.py b/pandas/tests/series/test_combine_concat.py
index b4615e5420a81..6042a8c0a2e9d 100644
--- a/pandas/tests/series/test_combine_concat.py
+++ b/pandas/tests/series/test_combine_concat.py
@@ -74,7 +74,7 @@ def test_combine_first(self):
# Holes filled from input
combined = series_copy.combine_first(series)
- self.assertTrue(np.isfinite(combined).all())
+ assert np.isfinite(combined).all()
tm.assert_series_equal(combined[::2], series[::2])
tm.assert_series_equal(combined[1::2], series_copy[1::2])
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index b08653b0001ca..966861fe3c1e4 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -50,13 +50,13 @@ def test_scalar_conversion(self):
assert long(Series([1.])) == 1
def test_constructor(self):
- self.assertTrue(self.ts.index.is_all_dates)
+ assert self.ts.index.is_all_dates
# Pass in Series
derived = Series(self.ts)
- self.assertTrue(derived.index.is_all_dates)
+ assert derived.index.is_all_dates
- self.assertTrue(tm.equalContents(derived.index, self.ts.index))
+ assert tm.equalContents(derived.index, self.ts.index)
# Ensure new index is not created
self.assertEqual(id(self.ts.index), id(derived.index))
@@ -152,11 +152,11 @@ def test_constructor_categorical(self):
ValueError, lambda: Series(pd.Categorical([1, 2, 3]),
dtype='int64'))
cat = Series(pd.Categorical([1, 2, 3]), dtype='category')
- self.assertTrue(is_categorical_dtype(cat))
- self.assertTrue(is_categorical_dtype(cat.dtype))
+ assert is_categorical_dtype(cat)
+ assert is_categorical_dtype(cat.dtype)
s = Series([1, 2, 3], dtype='category')
- self.assertTrue(is_categorical_dtype(s))
- self.assertTrue(is_categorical_dtype(s.dtype))
+ assert is_categorical_dtype(s)
+ assert is_categorical_dtype(s.dtype)
def test_constructor_maskedarray(self):
data = ma.masked_all((3, ), dtype=float)
@@ -320,7 +320,7 @@ def test_constructor_datelike_coercion(self):
s = Series([Timestamp('20130101'), 'NOV'], dtype=object)
self.assertEqual(s.iloc[0], Timestamp('20130101'))
self.assertEqual(s.iloc[1], 'NOV')
- self.assertTrue(s.dtype == object)
+ assert s.dtype == object
# the dtype was being reset on the slicing and re-inferred to datetime
# even thought the blocks are mixed
@@ -334,9 +334,9 @@ def test_constructor_datelike_coercion(self):
'mat': mat}, index=belly)
result = df.loc['3T19']
- self.assertTrue(result.dtype == object)
+ assert result.dtype == object
result = df.loc['216']
- self.assertTrue(result.dtype == object)
+ assert result.dtype == object
def test_constructor_datetimes_with_nulls(self):
# gh-15869
@@ -349,7 +349,7 @@ def test_constructor_datetimes_with_nulls(self):
def test_constructor_dtype_datetime64(self):
s = Series(iNaT, dtype='M8[ns]', index=lrange(5))
- self.assertTrue(isnull(s).all())
+ assert isnull(s).all()
# in theory this should be all nulls, but since
# we are not specifying a dtype is ambiguous
@@ -357,14 +357,14 @@ def test_constructor_dtype_datetime64(self):
assert not isnull(s).all()
s = Series(nan, dtype='M8[ns]', index=lrange(5))
- self.assertTrue(isnull(s).all())
+ assert isnull(s).all()
s = Series([datetime(2001, 1, 2, 0, 0), iNaT], dtype='M8[ns]')
- self.assertTrue(isnull(s[1]))
+ assert isnull(s[1])
self.assertEqual(s.dtype, 'M8[ns]')
s = Series([datetime(2001, 1, 2, 0, 0), nan], dtype='M8[ns]')
- self.assertTrue(isnull(s[1]))
+ assert isnull(s[1])
self.assertEqual(s.dtype, 'M8[ns]')
# GH3416
@@ -441,29 +441,29 @@ def test_constructor_dtype_datetime64(self):
# tz-aware (UTC and other tz's)
# GH 8411
dr = date_range('20130101', periods=3)
- self.assertTrue(Series(dr).iloc[0].tz is None)
+ assert Series(dr).iloc[0].tz is None
dr = date_range('20130101', periods=3, tz='UTC')
- self.assertTrue(str(Series(dr).iloc[0].tz) == 'UTC')
+ assert str(Series(dr).iloc[0].tz) == 'UTC'
dr = date_range('20130101', periods=3, tz='US/Eastern')
- self.assertTrue(str(Series(dr).iloc[0].tz) == 'US/Eastern')
+ assert str(Series(dr).iloc[0].tz) == 'US/Eastern'
# non-convertible
s = Series([1479596223000, -1479590, pd.NaT])
- self.assertTrue(s.dtype == 'object')
- self.assertTrue(s[2] is pd.NaT)
- self.assertTrue('NaT' in str(s))
+ assert s.dtype == 'object'
+ assert s[2] is pd.NaT
+ assert 'NaT' in str(s)
# if we passed a NaT it remains
s = Series([datetime(2010, 1, 1), datetime(2, 1, 1), pd.NaT])
- self.assertTrue(s.dtype == 'object')
- self.assertTrue(s[2] is pd.NaT)
- self.assertTrue('NaT' in str(s))
+ assert s.dtype == 'object'
+ assert s[2] is pd.NaT
+ assert 'NaT' in str(s)
# if we passed a nan it remains
s = Series([datetime(2010, 1, 1), datetime(2, 1, 1), np.nan])
- self.assertTrue(s.dtype == 'object')
- self.assertTrue(s[2] is np.nan)
- self.assertTrue('NaN' in str(s))
+ assert s.dtype == 'object'
+ assert s[2] is np.nan
+ assert 'NaN' in str(s)
def test_constructor_with_datetime_tz(self):
@@ -472,15 +472,15 @@ def test_constructor_with_datetime_tz(self):
dr = date_range('20130101', periods=3, tz='US/Eastern')
s = Series(dr)
- self.assertTrue(s.dtype.name == 'datetime64[ns, US/Eastern]')
- self.assertTrue(s.dtype == 'datetime64[ns, US/Eastern]')
- self.assertTrue(is_datetime64tz_dtype(s.dtype))
- self.assertTrue('datetime64[ns, US/Eastern]' in str(s))
+ assert s.dtype.name == 'datetime64[ns, US/Eastern]'
+ assert s.dtype == 'datetime64[ns, US/Eastern]'
+ assert is_datetime64tz_dtype(s.dtype)
+ assert 'datetime64[ns, US/Eastern]' in str(s)
# export
result = s.values
assert isinstance(result, np.ndarray)
- self.assertTrue(result.dtype == 'datetime64[ns]')
+ assert result.dtype == 'datetime64[ns]'
exp = pd.DatetimeIndex(result)
exp = exp.tz_localize('UTC').tz_convert(tz=s.dt.tz)
@@ -524,16 +524,16 @@ def test_constructor_with_datetime_tz(self):
assert_series_equal(result, expected)
# short str
- self.assertTrue('datetime64[ns, US/Eastern]' in str(s))
+ assert 'datetime64[ns, US/Eastern]' in str(s)
# formatting with NaT
result = s.shift()
- self.assertTrue('datetime64[ns, US/Eastern]' in str(result))
- self.assertTrue('NaT' in str(result))
+ assert 'datetime64[ns, US/Eastern]' in str(result)
+ assert 'NaT' in str(result)
# long str
t = Series(date_range('20130101', periods=1000, tz='US/Eastern'))
- self.assertTrue('datetime64[ns, US/Eastern]' in str(t))
+ assert 'datetime64[ns, US/Eastern]' in str(t)
result = pd.DatetimeIndex(s, freq='infer')
tm.assert_index_equal(result, dr)
@@ -541,13 +541,13 @@ def test_constructor_with_datetime_tz(self):
# inference
s = Series([pd.Timestamp('2013-01-01 13:00:00-0800', tz='US/Pacific'),
pd.Timestamp('2013-01-02 14:00:00-0800', tz='US/Pacific')])
- self.assertTrue(s.dtype == 'datetime64[ns, US/Pacific]')
- self.assertTrue(lib.infer_dtype(s) == 'datetime64')
+ assert s.dtype == 'datetime64[ns, US/Pacific]'
+ assert lib.infer_dtype(s) == 'datetime64'
s = Series([pd.Timestamp('2013-01-01 13:00:00-0800', tz='US/Pacific'),
pd.Timestamp('2013-01-02 14:00:00-0800', tz='US/Eastern')])
- self.assertTrue(s.dtype == 'object')
- self.assertTrue(lib.infer_dtype(s) == 'datetime')
+ assert s.dtype == 'object'
+ assert lib.infer_dtype(s) == 'datetime'
# with all NaT
s = Series(pd.NaT, index=[0, 1], dtype='datetime64[ns, US/Eastern]')
@@ -676,7 +676,7 @@ def test_orderedDict_ctor(self):
import random
data = OrderedDict([('col%s' % i, random.random()) for i in range(12)])
s = pandas.Series(data)
- self.assertTrue(all(s.values == list(data.values())))
+ assert all(s.values == list(data.values()))
def test_orderedDict_subclass_ctor(self):
# GH3283
@@ -688,7 +688,7 @@ class A(OrderedDict):
data = A([('col%s' % i, random.random()) for i in range(12)])
s = pandas.Series(data)
- self.assertTrue(all(s.values == list(data.values())))
+ assert all(s.values == list(data.values()))
def test_constructor_list_of_tuples(self):
data = [(1, 1), (2, 2), (2, 3)]
@@ -710,7 +710,7 @@ def test_fromDict(self):
data = {'a': 0, 'b': 1, 'c': 2, 'd': 3}
series = Series(data)
- self.assertTrue(tm.is_sorted(series.index))
+ assert tm.is_sorted(series.index)
data = {'a': 0, 'b': '1', 'c': '2', 'd': datetime.now()}
series = Series(data)
@@ -823,10 +823,10 @@ def test_NaT_scalar(self):
series = Series([0, 1000, 2000, iNaT], dtype='M8[ns]')
val = series[3]
- self.assertTrue(isnull(val))
+ assert isnull(val)
series[2] = val
- self.assertTrue(isnull(series[2]))
+ assert isnull(series[2])
def test_NaT_cast(self):
# GH10747
diff --git a/pandas/tests/series/test_datetime_values.py b/pandas/tests/series/test_datetime_values.py
index c56a5baac12af..13fa3bc782f89 100644
--- a/pandas/tests/series/test_datetime_values.py
+++ b/pandas/tests/series/test_datetime_values.py
@@ -71,7 +71,7 @@ def compare(s, name):
result = s.dt.to_pydatetime()
assert isinstance(result, np.ndarray)
- self.assertTrue(result.dtype == object)
+ assert result.dtype == object
result = s.dt.tz_localize('US/Eastern')
exp_values = DatetimeIndex(s.values).tz_localize('US/Eastern')
@@ -141,7 +141,7 @@ def compare(s, name):
result = s.dt.to_pydatetime()
assert isinstance(result, np.ndarray)
- self.assertTrue(result.dtype == object)
+ assert result.dtype == object
result = s.dt.tz_convert('CET')
expected = Series(s._values.tz_convert('CET'),
@@ -176,11 +176,11 @@ def compare(s, name):
result = s.dt.to_pytimedelta()
assert isinstance(result, np.ndarray)
- self.assertTrue(result.dtype == object)
+ assert result.dtype == object
result = s.dt.total_seconds()
assert isinstance(result, pd.Series)
- self.assertTrue(result.dtype == 'float64')
+ assert result.dtype == 'float64'
freq_result = s.dt.freq
self.assertEqual(freq_result, TimedeltaIndex(s.values,
diff --git a/pandas/tests/series/test_indexing.py b/pandas/tests/series/test_indexing.py
index 601262df89260..954e80facf848 100644
--- a/pandas/tests/series/test_indexing.py
+++ b/pandas/tests/series/test_indexing.py
@@ -123,7 +123,7 @@ def test_getitem_setitem_ellipsis(self):
assert_series_equal(result, s)
s[...] = 5
- self.assertTrue((result == 5).all())
+ assert (result == 5).all()
def test_getitem_negative_out_of_bounds(self):
s = Series(tm.rands_array(5, 10), index=tm.rands_array(10, 10))
@@ -182,7 +182,7 @@ def test_iloc(self):
# test slice is a view
result[:] = 0
- self.assertTrue((s[1:3] == 0).all())
+ assert (s[1:3] == 0).all()
# list of integers
result = s.iloc[[0, 2, 3, 4, 5]]
@@ -211,10 +211,10 @@ def test_getitem_setitem_slice_bug(self):
s = Series(lrange(10), lrange(10))
s[-12:] = 0
- self.assertTrue((s == 0).all())
+ assert (s == 0).all()
s[:-12] = 5
- self.assertTrue((s == 0).all())
+ assert (s == 0).all()
def test_getitem_int64(self):
idx = np.int64(5)
@@ -335,8 +335,8 @@ def test_getitem_setitem_slice_integers(self):
assert_series_equal(result, expected)
s[:4] = 0
- self.assertTrue((s[:4] == 0).all())
- self.assertTrue(not (s[4:] == 0).any())
+ assert (s[:4] == 0).all()
+ assert not (s[4:] == 0).any()
def test_getitem_setitem_datetime_tz_pytz(self):
tm._skip_if_no_pytz()
@@ -572,7 +572,7 @@ def test_getitem_ambiguous_keyerror(self):
def test_getitem_unordered_dup(self):
obj = Series(lrange(5), index=['c', 'a', 'a', 'b', 'b'])
- self.assertTrue(is_scalar(obj['c']))
+ assert is_scalar(obj['c'])
self.assertEqual(obj['c'], 0)
def test_getitem_dups_with_missing(self):
@@ -725,8 +725,8 @@ def test_setitem(self):
self.ts[self.ts.index[5]] = np.NaN
self.ts[[1, 2, 17]] = np.NaN
self.ts[6] = np.NaN
- self.assertTrue(np.isnan(self.ts[6]))
- self.assertTrue(np.isnan(self.ts[2]))
+ assert np.isnan(self.ts[6])
+ assert np.isnan(self.ts[2])
self.ts[np.isnan(self.ts)] = 5
assert not np.isnan(self.ts[2])
@@ -735,7 +735,7 @@ def test_setitem(self):
index=tm.makeIntIndex(20))
series[::2] = 0
- self.assertTrue((series[::2] == 0).all())
+ assert (series[::2] == 0).all()
# set item that's not contained
s = self.series.copy()
@@ -804,7 +804,7 @@ def test_set_value(self):
def test_setslice(self):
sl = self.ts[5:20]
self.assertEqual(len(sl), len(sl.index))
- self.assertTrue(sl.index.is_unique)
+ assert sl.index.is_unique
def test_basic_getitem_setitem_corner(self):
# invalid tuples, e.g. self.ts[:, None] vs. self.ts[:, 2]
@@ -949,12 +949,12 @@ def test_loc_getitem_setitem_integer_slice_keyerrors(self):
# this is OK
cp = s.copy()
cp.iloc[4:10] = 0
- self.assertTrue((cp.iloc[4:10] == 0).all())
+ assert (cp.iloc[4:10] == 0).all()
# so is this
cp = s.copy()
cp.iloc[3:11] = 0
- self.assertTrue((cp.iloc[3:11] == 0).values.all())
+ assert (cp.iloc[3:11] == 0).values.all()
result = s.iloc[2:6]
result2 = s.loc[3:11]
@@ -1173,7 +1173,7 @@ def f():
s = Series(range(10)).astype(float)
s[8] = None
result = s[8]
- self.assertTrue(isnull(result))
+ assert isnull(result)
s = Series(range(10)).astype(float)
s[s > 8] = None
@@ -1515,24 +1515,24 @@ def test_where_numeric_with_string(self):
w = s.where(s > 1, 'X')
assert not is_integer(w[0])
- self.assertTrue(is_integer(w[1]))
- self.assertTrue(is_integer(w[2]))
- self.assertTrue(isinstance(w[0], str))
- self.assertTrue(w.dtype == 'object')
+ assert is_integer(w[1])
+ assert is_integer(w[2])
+ assert isinstance(w[0], str)
+ assert w.dtype == 'object'
w = s.where(s > 1, ['X', 'Y', 'Z'])
assert not is_integer(w[0])
- self.assertTrue(is_integer(w[1]))
- self.assertTrue(is_integer(w[2]))
- self.assertTrue(isinstance(w[0], str))
- self.assertTrue(w.dtype == 'object')
+ assert is_integer(w[1])
+ assert is_integer(w[2])
+ assert isinstance(w[0], str)
+ assert w.dtype == 'object'
w = s.where(s > 1, np.array(['X', 'Y', 'Z']))
assert not is_integer(w[0])
- self.assertTrue(is_integer(w[1]))
- self.assertTrue(is_integer(w[2]))
- self.assertTrue(isinstance(w[0], str))
- self.assertTrue(w.dtype == 'object')
+ assert is_integer(w[1])
+ assert is_integer(w[2])
+ assert isinstance(w[0], str)
+ assert w.dtype == 'object'
def test_setitem_boolean(self):
mask = self.series > self.series.median()
@@ -1761,7 +1761,7 @@ def test_drop(self):
# GH 8522
s = Series([2, 3], index=[True, False])
- self.assertTrue(s.index.is_object())
+ assert s.index.is_object()
result = s.drop(True)
expected = Series([3], index=[False])
assert_series_equal(result, expected)
@@ -1775,9 +1775,9 @@ def _check_align(a, b, how='left', fill=None):
diff_a = aa.index.difference(join_index)
diff_b = ab.index.difference(join_index)
if len(diff_a) > 0:
- self.assertTrue((aa.reindex(diff_a) == fill).all())
+ assert (aa.reindex(diff_a) == fill).all()
if len(diff_b) > 0:
- self.assertTrue((ab.reindex(diff_b) == fill).all())
+ assert (ab.reindex(diff_b) == fill).all()
ea = a.reindex(join_index)
eb = b.reindex(join_index)
@@ -1857,7 +1857,7 @@ def test_align_nocopy(self):
a = self.ts.copy()
ra, _ = a.align(b, join='left', copy=False)
ra[:5] = 5
- self.assertTrue((a[:5] == 5).all())
+ assert (a[:5] == 5).all()
# do copy
a = self.ts.copy()
@@ -1871,7 +1871,7 @@ def test_align_nocopy(self):
b = self.ts[:5].copy()
_, rb = a.align(b, join='right', copy=False)
rb[:2] = 5
- self.assertTrue((b[:2] == 5).all())
+ assert (b[:2] == 5).all()
def test_align_same_index(self):
a, b = self.ts.align(self.ts, copy=False)
@@ -1921,13 +1921,12 @@ def test_reindex(self):
# __array_interface__ is not defined for older numpies
# and on some pythons
try:
- self.assertTrue(np.may_share_memory(self.series.index,
- identity.index))
- except (AttributeError):
+ assert np.may_share_memory(self.series.index, identity.index)
+ except AttributeError:
pass
- self.assertTrue(identity.index.is_(self.series.index))
- self.assertTrue(identity.index.identical(self.series.index))
+ assert identity.index.is_(self.series.index)
+ assert identity.index.identical(self.series.index)
subIndex = self.series.index[10:20]
subSeries = self.series.reindex(subIndex)
@@ -1942,7 +1941,7 @@ def test_reindex(self):
self.assertEqual(val, self.ts[idx])
stuffSeries = self.ts.reindex(subIndex)
- self.assertTrue(np.isnan(stuffSeries).all())
+ assert np.isnan(stuffSeries).all()
# This is extremely important for the Cython code to not screw up
nonContigIndex = self.ts.index[::2]
@@ -1970,10 +1969,10 @@ def test_reindex_series_add_nat(self):
series = Series(rng)
result = series.reindex(lrange(15))
- self.assertTrue(np.issubdtype(result.dtype, np.dtype('M8[ns]')))
+ assert np.issubdtype(result.dtype, np.dtype('M8[ns]'))
mask = result.isnull()
- self.assertTrue(mask[-5:].all())
+ assert mask[-5:].all()
assert not mask[:-5].any()
def test_reindex_with_datetimes(self):
@@ -2098,7 +2097,7 @@ def test_reindex_bool_pad(self):
ts = self.ts[5:]
bool_ts = Series(np.zeros(len(ts), dtype=bool), index=ts.index)
filled_bool = bool_ts.reindex(self.ts.index, method='pad')
- self.assertTrue(isnull(filled_bool[:5]).all())
+ assert isnull(filled_bool[:5]).all()
def test_reindex_like(self):
other = self.ts[::2]
@@ -2140,7 +2139,7 @@ def test_reindex_fill_value(self):
# don't upcast
result = ints.reindex([1, 2, 3], fill_value=0)
expected = Series([2, 3, 0], index=[1, 2, 3])
- self.assertTrue(issubclass(result.dtype.type, np.integer))
+ assert issubclass(result.dtype.type, np.integer)
assert_series_equal(result, expected)
# -----------------------------------------------------------
@@ -2256,11 +2255,7 @@ def test_setitem_slice_into_readonly_backing_data(self):
with pytest.raises(ValueError):
series[1:3] = 1
- self.assertTrue(
- not array.any(),
- msg='even though the ValueError was raised, the underlying'
- ' array was still mutated!',
- )
+ assert not array.any()
class TestTimeSeriesDuplicates(tm.TestCase):
@@ -2290,14 +2285,14 @@ def test_index_unique(self):
self.assertEqual(self.dups.index.nunique(), 4)
# #2563
- self.assertTrue(isinstance(uniques, DatetimeIndex))
+ assert isinstance(uniques, DatetimeIndex)
dups_local = self.dups.index.tz_localize('US/Eastern')
dups_local.name = 'foo'
result = dups_local.unique()
expected = DatetimeIndex(expected, name='foo')
expected = expected.tz_localize('US/Eastern')
- self.assertTrue(result.tz is not None)
+ assert result.tz is not None
self.assertEqual(result.name, 'foo')
tm.assert_index_equal(result, expected)
@@ -2318,7 +2313,7 @@ def test_index_unique(self):
def test_index_dupes_contains(self):
d = datetime(2011, 12, 5, 20, 30)
ix = DatetimeIndex([d, d])
- self.assertTrue(d in ix)
+ assert d in ix
def test_duplicate_dates_indexing(self):
ts = self.dups
@@ -2401,7 +2396,7 @@ def test_indexing_over_size_cutoff(self):
# it works!
df.loc[timestamp]
- self.assertTrue(len(df.loc[[timestamp]]) > 0)
+ assert len(df.loc[[timestamp]]) > 0
finally:
_index._SIZE_CUTOFF = old_cutoff
@@ -2417,7 +2412,7 @@ def test_indexing_unordered(self):
expected = ts[t]
result = ts2[t]
- self.assertTrue(expected == result)
+ assert expected == result
# GH 3448 (ranges)
def compare(slobj):
@@ -2447,7 +2442,7 @@ def compare(slobj):
result = ts['2005']
for t in result.index:
- self.assertTrue(t.year == 2005)
+ assert t.year == 2005
def test_indexing(self):
@@ -2541,7 +2536,7 @@ def test_fancy_setitem(self):
s['1/2/2009'] = -2
self.assertEqual(s[48], -2)
s['1/2/2009':'2009-06-05'] = -3
- self.assertTrue((s[48:54] == -3).all())
+ assert (s[48:54] == -3).all()
def test_dti_snap(self):
dti = DatetimeIndex(['1/1/2002', '1/2/2002', '1/3/2002', '1/4/2002',
@@ -2550,13 +2545,13 @@ def test_dti_snap(self):
res = dti.snap(freq='W-MON')
exp = date_range('12/31/2001', '1/7/2002', freq='w-mon')
exp = exp.repeat([3, 4])
- self.assertTrue((res == exp).all())
+ assert (res == exp).all()
res = dti.snap(freq='B')
exp = date_range('1/1/2002', '1/7/2002', freq='b')
exp = exp.repeat([1, 1, 1, 2, 2])
- self.assertTrue((res == exp).all())
+ assert (res == exp).all()
def test_dti_reset_index_round_trip(self):
dti = DatetimeIndex(start='1/1/2001', end='6/1/2001', freq='D')
@@ -2642,11 +2637,11 @@ def test_frame_datetime64_duplicated(self):
tst = DataFrame({'symbol': 'AAA', 'date': dates})
result = tst.duplicated(['date', 'symbol'])
- self.assertTrue((-result).all())
+ assert (~result).all()
tst = DataFrame({'date': dates})
result = tst.duplicated()
- self.assertTrue((-result).all())
+ assert (~result).all()
class TestNatIndexing(tm.TestCase):
diff --git a/pandas/tests/series/test_io.py b/pandas/tests/series/test_io.py
index 3df32992a4d74..7a9d0390a2cfa 100644
--- a/pandas/tests/series/test_io.py
+++ b/pandas/tests/series/test_io.py
@@ -24,25 +24,25 @@ def test_from_csv(self):
self.ts.to_csv(path)
ts = Series.from_csv(path)
assert_series_equal(self.ts, ts, check_names=False)
- self.assertTrue(ts.name is None)
- self.assertTrue(ts.index.name is None)
+ assert ts.name is None
+ assert ts.index.name is None
# GH10483
self.ts.to_csv(path, header=True)
ts_h = Series.from_csv(path, header=0)
- self.assertTrue(ts_h.name == 'ts')
+ assert ts_h.name == 'ts'
self.series.to_csv(path)
series = Series.from_csv(path)
assert series.name is None
assert series.index.name is None
assert_series_equal(self.series, series, check_names=False)
- self.assertTrue(series.name is None)
- self.assertTrue(series.index.name is None)
+ assert series.name is None
+ assert series.index.name is None
self.series.to_csv(path, header=True)
series_h = Series.from_csv(path, header=0)
- self.assertTrue(series_h.name == 'series')
+ assert series_h.name == 'series'
outfile = open(path, 'w')
outfile.write('1998-01-01|1.0\n1999-01-01|2.0')
@@ -163,7 +163,7 @@ class SubclassedFrame(DataFrame):
s = SubclassedSeries([1, 2, 3], name='X')
result = s.to_frame()
- self.assertTrue(isinstance(result, SubclassedFrame))
+ assert isinstance(result, SubclassedFrame)
expected = SubclassedFrame({'X': [1, 2, 3]})
assert_frame_equal(result, expected)
diff --git a/pandas/tests/series/test_missing.py b/pandas/tests/series/test_missing.py
index 53c8c518eb3eb..251954b5da05e 100644
--- a/pandas/tests/series/test_missing.py
+++ b/pandas/tests/series/test_missing.py
@@ -484,19 +484,19 @@ def test_timedelta64_nan(self):
# nan ops on timedeltas
td1 = td.copy()
td1[0] = np.nan
- self.assertTrue(isnull(td1[0]))
+ assert isnull(td1[0])
self.assertEqual(td1[0].value, iNaT)
td1[0] = td[0]
assert not isnull(td1[0])
td1[1] = iNaT
- self.assertTrue(isnull(td1[1]))
+ assert isnull(td1[1])
self.assertEqual(td1[1].value, iNaT)
td1[1] = td[1]
assert not isnull(td1[1])
td1[2] = NaT
- self.assertTrue(isnull(td1[2]))
+ assert isnull(td1[2])
self.assertEqual(td1[2].value, iNaT)
td1[2] = td[2]
assert not isnull(td1[2])
@@ -599,7 +599,7 @@ def test_pad_nan(self):
expected = Series([np.nan, 1.0, 1.0, 3.0, 3.0],
['z', 'a', 'b', 'c', 'd'], dtype=float)
assert_series_equal(x[1:], expected[1:])
- self.assertTrue(np.isnan(x[0]), np.isnan(expected[0]))
+ assert np.isnan(x[0]) and np.isnan(expected[0])
def test_pad_require_monotonicity(self):
rng = date_range('1/1/2000', '3/1/2000', freq='B')
diff --git a/pandas/tests/series/test_operators.py b/pandas/tests/series/test_operators.py
index eb840faac05e0..f48a3474494a4 100644
--- a/pandas/tests/series/test_operators.py
+++ b/pandas/tests/series/test_operators.py
@@ -121,7 +121,7 @@ def test_div(self):
result = p['first'] / p['second']
assert_series_equal(result, p['first'].astype('float64'),
check_names=False)
- self.assertTrue(result.name is None)
+ assert result.name is None
assert not np.array_equal(result, p['second'] / p['first'])
# inf signing
@@ -565,11 +565,11 @@ def test_timedelta64_conversions(self):
s = Series(date_range('20130101', periods=3))
result = s.astype(object)
assert isinstance(result.iloc[0], datetime)
- self.assertTrue(result.dtype == np.object_)
+ assert result.dtype == np.object_
result = s1.astype(object)
assert isinstance(result.iloc[0], timedelta)
- self.assertTrue(result.dtype == np.object_)
+ assert result.dtype == np.object_
def test_timedelta64_equal_timedelta_supported_ops(self):
ser = Series([Timestamp('20130301'), Timestamp('20130228 23:00:00'),
@@ -1466,7 +1466,7 @@ def test_operators_corner(self):
empty = Series([], index=Index([]))
result = series + empty
- self.assertTrue(np.isnan(result).all())
+ assert np.isnan(result).all()
result = empty + Series([], index=Index([]))
self.assertEqual(len(result), 0)
@@ -1777,8 +1777,8 @@ def _check_fill(meth, op, a, b, fill_value=0):
def test_ne(self):
ts = Series([3, 4, 5, 6, 7], [3, 4, 5, 6, 7], dtype=float)
expected = [True, True, False, True, True]
- self.assertTrue(tm.equalContents(ts.index != 5, expected))
- self.assertTrue(tm.equalContents(~(ts.index == 5), expected))
+ assert tm.equalContents(ts.index != 5, expected)
+ assert tm.equalContents(~(ts.index == 5), expected)
def test_operators_na_handling(self):
from decimal import Decimal
@@ -1788,8 +1788,8 @@ def test_operators_na_handling(self):
result = s + s.shift(1)
result2 = s.shift(1) + s
- self.assertTrue(isnull(result[0]))
- self.assertTrue(isnull(result2[0]))
+ assert isnull(result[0])
+ assert isnull(result2[0])
s = Series(['foo', 'bar', 'baz', np.nan])
result = 'prefix_' + s
diff --git a/pandas/tests/series/test_period.py b/pandas/tests/series/test_period.py
index fdc12459f8c59..72a85086d4e24 100644
--- a/pandas/tests/series/test_period.py
+++ b/pandas/tests/series/test_period.py
@@ -89,10 +89,10 @@ def test_NaT_scalar(self):
series = Series([0, 1000, 2000, iNaT], dtype='period[D]')
val = series[3]
- self.assertTrue(isnull(val))
+ assert isnull(val)
series[2] = val
- self.assertTrue(isnull(series[2]))
+ assert isnull(series[2])
def test_NaT_cast(self):
result = Series([np.nan]).astype('period[D]')
@@ -109,10 +109,10 @@ def test_set_none_nan(self):
assert self.series[4] is None
self.series[5] = np.nan
- self.assertTrue(np.isnan(self.series[5]))
+ assert np.isnan(self.series[5])
self.series[5:7] = np.nan
- self.assertTrue(np.isnan(self.series[6]))
+ assert np.isnan(self.series[6])
def test_intercept_astype_object(self):
expected = self.series.astype('object')
@@ -121,12 +121,12 @@ def test_intercept_astype_object(self):
'b': np.random.randn(len(self.series))})
result = df.values.squeeze()
- self.assertTrue((result[:, 0] == expected.values).all())
+ assert (result[:, 0] == expected.values).all()
df = DataFrame({'a': self.series, 'b': ['foo'] * len(self.series)})
result = df.values.squeeze()
- self.assertTrue((result[:, 0] == expected.values).all())
+ assert (result[:, 0] == expected.values).all()
def test_comp_series_period_scalar(self):
# GH 13200
diff --git a/pandas/tests/series/test_quantile.py b/pandas/tests/series/test_quantile.py
index 6f9c65e37533d..9fb87a914a0ac 100644
--- a/pandas/tests/series/test_quantile.py
+++ b/pandas/tests/series/test_quantile.py
@@ -39,7 +39,7 @@ def test_quantile(self):
# GH7661
result = Series([np.timedelta64('NaT')]).sum()
- self.assertTrue(result is pd.NaT)
+ assert result is pd.NaT
msg = 'percentiles should all be in the interval \\[0, 1\\]'
for invalid in [-1, 2, [0.5, -1], [0.5, 2]]:
@@ -90,11 +90,11 @@ def test_quantile_interpolation_dtype(self):
# interpolation = linear (default case)
q = pd.Series([1, 3, 4]).quantile(0.5, interpolation='lower')
self.assertEqual(q, np.percentile(np.array([1, 3, 4]), 50))
- self.assertTrue(is_integer(q))
+ assert is_integer(q)
q = pd.Series([1, 3, 4]).quantile(0.5, interpolation='higher')
self.assertEqual(q, np.percentile(np.array([1, 3, 4]), 50))
- self.assertTrue(is_integer(q))
+ assert is_integer(q)
@pytest.mark.skipif(not _np_version_under1p9,
reason="Numpy version is greater 1.9")
@@ -130,7 +130,7 @@ def test_quantile_nan(self):
for s in cases:
res = s.quantile(0.5)
- self.assertTrue(np.isnan(res))
+ assert np.isnan(res)
res = s.quantile([0.5])
tm.assert_series_equal(res, pd.Series([np.nan], index=[0.5]))
@@ -167,12 +167,12 @@ def test_quantile_box(self):
def test_datetime_timedelta_quantiles(self):
# covers #9694
- self.assertTrue(pd.isnull(Series([], dtype='M8[ns]').quantile(.5)))
- self.assertTrue(pd.isnull(Series([], dtype='m8[ns]').quantile(.5)))
+ assert pd.isnull(Series([], dtype='M8[ns]').quantile(.5))
+ assert pd.isnull(Series([], dtype='m8[ns]').quantile(.5))
def test_quantile_nat(self):
res = Series([pd.NaT, pd.NaT]).quantile(0.5)
- self.assertTrue(res is pd.NaT)
+ assert res is pd.NaT
res = Series([pd.NaT, pd.NaT]).quantile([0.5])
tm.assert_series_equal(res, pd.Series([pd.NaT], index=[0.5]))
@@ -183,7 +183,7 @@ def test_quantile_empty(self):
s = Series([], dtype='float64')
res = s.quantile(0.5)
- self.assertTrue(np.isnan(res))
+ assert np.isnan(res)
res = s.quantile([0.5])
exp = Series([np.nan], index=[0.5])
@@ -193,7 +193,7 @@ def test_quantile_empty(self):
s = Series([], dtype='int64')
res = s.quantile(0.5)
- self.assertTrue(np.isnan(res))
+ assert np.isnan(res)
res = s.quantile([0.5])
exp = Series([np.nan], index=[0.5])
@@ -203,7 +203,7 @@ def test_quantile_empty(self):
s = Series([], dtype='datetime64[ns]')
res = s.quantile(0.5)
- self.assertTrue(res is pd.NaT)
+ assert res is pd.NaT
res = s.quantile([0.5])
exp = Series([pd.NaT], index=[0.5])
diff --git a/pandas/tests/series/test_replace.py b/pandas/tests/series/test_replace.py
index ee7b264bde8bc..19a99c8351db8 100644
--- a/pandas/tests/series/test_replace.py
+++ b/pandas/tests/series/test_replace.py
@@ -37,18 +37,18 @@ def test_replace(self):
# replace list with a single value
rs = ser.replace([np.nan, 'foo', 'bar'], -1)
- self.assertTrue((rs[:5] == -1).all())
- self.assertTrue((rs[6:10] == -1).all())
- self.assertTrue((rs[20:30] == -1).all())
- self.assertTrue((pd.isnull(ser[:5])).all())
+ assert (rs[:5] == -1).all()
+ assert (rs[6:10] == -1).all()
+ assert (rs[20:30] == -1).all()
+ assert (pd.isnull(ser[:5])).all()
# replace with different values
rs = ser.replace({np.nan: -1, 'foo': -2, 'bar': -3})
- self.assertTrue((rs[:5] == -1).all())
- self.assertTrue((rs[6:10] == -2).all())
- self.assertTrue((rs[20:30] == -3).all())
- self.assertTrue((pd.isnull(ser[:5])).all())
+ assert (rs[:5] == -1).all()
+ assert (rs[6:10] == -2).all()
+ assert (rs[20:30] == -3).all()
+ assert (pd.isnull(ser[:5])).all()
# replace with different values with 2 lists
rs2 = ser.replace([np.nan, 'foo', 'bar'], [-1, -2, -3])
@@ -57,9 +57,9 @@ def test_replace(self):
# replace inplace
ser.replace([np.nan, 'foo', 'bar'], -1, inplace=True)
- self.assertTrue((ser[:5] == -1).all())
- self.assertTrue((ser[6:10] == -1).all())
- self.assertTrue((ser[20:30] == -1).all())
+ assert (ser[:5] == -1).all()
+ assert (ser[6:10] == -1).all()
+ assert (ser[20:30] == -1).all()
ser = pd.Series([np.nan, 0, np.inf])
tm.assert_series_equal(ser.replace(np.nan, 0), ser.fillna(0))
@@ -200,18 +200,18 @@ def test_replace2(self):
# replace list with a single value
rs = ser.replace([np.nan, 'foo', 'bar'], -1)
- self.assertTrue((rs[:5] == -1).all())
- self.assertTrue((rs[6:10] == -1).all())
- self.assertTrue((rs[20:30] == -1).all())
- self.assertTrue((pd.isnull(ser[:5])).all())
+ assert (rs[:5] == -1).all()
+ assert (rs[6:10] == -1).all()
+ assert (rs[20:30] == -1).all()
+ assert (pd.isnull(ser[:5])).all()
# replace with different values
rs = ser.replace({np.nan: -1, 'foo': -2, 'bar': -3})
- self.assertTrue((rs[:5] == -1).all())
- self.assertTrue((rs[6:10] == -2).all())
- self.assertTrue((rs[20:30] == -3).all())
- self.assertTrue((pd.isnull(ser[:5])).all())
+ assert (rs[:5] == -1).all()
+ assert (rs[6:10] == -2).all()
+ assert (rs[20:30] == -3).all()
+ assert (pd.isnull(ser[:5])).all()
# replace with different values with 2 lists
rs2 = ser.replace([np.nan, 'foo', 'bar'], [-1, -2, -3])
@@ -219,9 +219,9 @@ def test_replace2(self):
# replace inplace
ser.replace([np.nan, 'foo', 'bar'], -1, inplace=True)
- self.assertTrue((ser[:5] == -1).all())
- self.assertTrue((ser[6:10] == -1).all())
- self.assertTrue((ser[20:30] == -1).all())
+ assert (ser[:5] == -1).all()
+ assert (ser[6:10] == -1).all()
+ assert (ser[20:30] == -1).all()
def test_replace_with_empty_dictlike(self):
# GH 15289
diff --git a/pandas/tests/series/test_repr.py b/pandas/tests/series/test_repr.py
index c92a82e287120..2decffce0f2fe 100644
--- a/pandas/tests/series/test_repr.py
+++ b/pandas/tests/series/test_repr.py
@@ -148,7 +148,7 @@ def test_repr_should_return_str(self):
data = [8, 5, 3, 5]
index1 = [u("\u03c3"), u("\u03c4"), u("\u03c5"), u("\u03c6")]
df = Series(data, index=index1)
- self.assertTrue(type(df.__repr__() == str)) # both py2 / 3
+ assert type(df.__repr__()) == str # both py2 / 3
def test_repr_max_rows(self):
# GH 6863
@@ -176,7 +176,7 @@ def test_timeseries_repr_object_dtype(self):
repr(ts)
ts = tm.makeTimeSeries(1000)
- self.assertTrue(repr(ts).splitlines()[-1].startswith('Freq:'))
+ assert repr(ts).splitlines()[-1].startswith('Freq:')
ts2 = ts.iloc[np.random.randint(0, len(ts) - 1, 400)]
repr(ts2).splitlines()[-1]
diff --git a/pandas/tests/series/test_sorting.py b/pandas/tests/series/test_sorting.py
index 6fe18e712a29d..791a7d5db9a26 100644
--- a/pandas/tests/series/test_sorting.py
+++ b/pandas/tests/series/test_sorting.py
@@ -35,12 +35,12 @@ def test_sort_values(self):
vals = ts.values
result = ts.sort_values()
- self.assertTrue(np.isnan(result[-5:]).all())
+ assert np.isnan(result[-5:]).all()
tm.assert_numpy_array_equal(result[:-5].values, np.sort(vals[5:]))
# na_position
result = ts.sort_values(na_position='first')
- self.assertTrue(np.isnan(result[:5]).all())
+ assert np.isnan(result[:5]).all()
tm.assert_numpy_array_equal(result[5:].values, np.sort(vals[5:]))
# something object-type
diff --git a/pandas/tests/series/test_timeseries.py b/pandas/tests/series/test_timeseries.py
index 430be97845fcb..1c94bc3db9990 100644
--- a/pandas/tests/series/test_timeseries.py
+++ b/pandas/tests/series/test_timeseries.py
@@ -343,8 +343,8 @@ def test_autocorr(self):
# corr() with lag needs Series of at least length 2
if len(self.ts) <= 2:
- self.assertTrue(np.isnan(corr1))
- self.assertTrue(np.isnan(corr2))
+ assert np.isnan(corr1)
+ assert np.isnan(corr2)
else:
self.assertEqual(corr1, corr2)
@@ -356,8 +356,8 @@ def test_autocorr(self):
# corr() with lag needs Series of at least length 2
if len(self.ts) <= 2:
- self.assertTrue(np.isnan(corr1))
- self.assertTrue(np.isnan(corr2))
+ assert np.isnan(corr1)
+ assert np.isnan(corr2)
else:
self.assertEqual(corr1, corr2)
@@ -393,7 +393,7 @@ def test_mpl_compat_hack(self):
def test_timeseries_coercion(self):
idx = tm.makeDateIndex(10000)
ser = Series(np.random.randn(len(idx)), idx.astype(object))
- self.assertTrue(ser.index.is_all_dates)
+ assert ser.index.is_all_dates
assert isinstance(ser.index, DatetimeIndex)
def test_empty_series_ops(self):
@@ -487,7 +487,7 @@ def test_series_ctor_datetime64(self):
dates = np.asarray(rng)
series = Series(dates)
- self.assertTrue(np.issubdtype(series.dtype, np.dtype('M8[ns]')))
+ assert np.issubdtype(series.dtype, np.dtype('M8[ns]'))
def test_series_repr_nat(self):
series = Series([0, 1000, 2000, iNaT], dtype='M8[ns]')
@@ -602,9 +602,9 @@ def test_at_time(self):
rng = date_range('1/1/2000', '1/5/2000', freq='5min')
ts = Series(np.random.randn(len(rng)), index=rng)
rs = ts.at_time(rng[1])
- self.assertTrue((rs.index.hour == rng[1].hour).all())
- self.assertTrue((rs.index.minute == rng[1].minute).all())
- self.assertTrue((rs.index.second == rng[1].second).all())
+ assert (rs.index.hour == rng[1].hour).all()
+ assert (rs.index.minute == rng[1].minute).all()
+ assert (rs.index.second == rng[1].second).all()
result = ts.at_time('9:30')
expected = ts.at_time(time(9, 30))
@@ -667,14 +667,14 @@ def test_between_time(self):
for rs in filtered.index:
t = rs.time()
if inc_start:
- self.assertTrue(t >= stime)
+ assert t >= stime
else:
- self.assertTrue(t > stime)
+ assert t > stime
if inc_end:
- self.assertTrue(t <= etime)
+ assert t <= etime
else:
- self.assertTrue(t < etime)
+ assert t < etime
result = ts.between_time('00:00', '01:00')
expected = ts.between_time(stime, etime)
@@ -699,14 +699,14 @@ def test_between_time(self):
for rs in filtered.index:
t = rs.time()
if inc_start:
- self.assertTrue((t >= stime) or (t <= etime))
+ assert (t >= stime) or (t <= etime)
else:
- self.assertTrue((t > stime) or (t <= etime))
+ assert (t > stime) or (t <= etime)
if inc_end:
- self.assertTrue((t <= etime) or (t >= stime))
+ assert (t <= etime) or (t >= stime)
else:
- self.assertTrue((t < etime) or (t >= stime))
+ assert (t < etime) or (t >= stime)
def test_between_time_types(self):
# GH11818
@@ -830,13 +830,13 @@ def test_pickle(self):
# GH4606
p = tm.round_trip_pickle(NaT)
- self.assertTrue(p is NaT)
+ assert p is NaT
idx = pd.to_datetime(['2013-01-01', NaT, '2014-01-06'])
idx_p = tm.round_trip_pickle(idx)
- self.assertTrue(idx_p[0] == idx[0])
- self.assertTrue(idx_p[1] is NaT)
- self.assertTrue(idx_p[2] == idx[2])
+ assert idx_p[0] == idx[0]
+ assert idx_p[1] is NaT
+ assert idx_p[2] == idx[2]
# GH11002
# don't infer freq
@@ -900,12 +900,12 @@ def test_min_max_series(self):
result = df.TS.max()
exp = Timestamp(df.TS.iat[-1])
- self.assertTrue(isinstance(result, Timestamp))
+ assert isinstance(result, Timestamp)
self.assertEqual(result, exp)
result = df.TS.min()
exp = Timestamp(df.TS.iat[0])
- self.assertTrue(isinstance(result, Timestamp))
+ assert isinstance(result, Timestamp)
self.assertEqual(result, exp)
def test_from_M8_structured(self):
@@ -918,7 +918,7 @@ def test_from_M8_structured(self):
self.assertEqual(df['Forecasting'][0], dates[0][1])
s = Series(arr['Date'])
- self.assertTrue(s[0], Timestamp)
+ assert isinstance(s[0], Timestamp)
self.assertEqual(s[0], dates[0][0])
s = Series.from_array(arr['Date'], Index([0]))
@@ -933,4 +933,4 @@ def test_get_level_values_box(self):
index = MultiIndex(levels=levels, labels=labels)
- self.assertTrue(isinstance(index.get_level_values(0)[0], Timestamp))
+ assert isinstance(index.get_level_values(0)[0], Timestamp)
diff --git a/pandas/tests/sparse/test_array.py b/pandas/tests/sparse/test_array.py
index 33df4b5e59bc9..b8dff5606f979 100644
--- a/pandas/tests/sparse/test_array.py
+++ b/pandas/tests/sparse/test_array.py
@@ -25,7 +25,7 @@ def setUp(self):
def test_constructor_dtype(self):
arr = SparseArray([np.nan, 1, 2, np.nan])
self.assertEqual(arr.dtype, np.float64)
- self.assertTrue(np.isnan(arr.fill_value))
+ assert np.isnan(arr.fill_value)
arr = SparseArray([np.nan, 1, 2, np.nan], fill_value=0)
self.assertEqual(arr.dtype, np.float64)
@@ -33,7 +33,7 @@ def test_constructor_dtype(self):
arr = SparseArray([0, 1, 2, 4], dtype=np.float64)
self.assertEqual(arr.dtype, np.float64)
- self.assertTrue(np.isnan(arr.fill_value))
+ assert np.isnan(arr.fill_value)
arr = SparseArray([0, 1, 2, 4], dtype=np.int64)
self.assertEqual(arr.dtype, np.int64)
@@ -55,7 +55,7 @@ def test_constructor_object_dtype(self):
# GH 11856
arr = SparseArray(['A', 'A', np.nan, 'B'], dtype=np.object)
self.assertEqual(arr.dtype, np.object)
- self.assertTrue(np.isnan(arr.fill_value))
+ assert np.isnan(arr.fill_value)
arr = SparseArray(['A', 'A', np.nan, 'B'], dtype=np.object,
fill_value='A')
@@ -66,7 +66,7 @@ def test_constructor_spindex_dtype(self):
arr = SparseArray(data=[1, 2], sparse_index=IntIndex(4, [1, 2]))
tm.assert_sp_array_equal(arr, SparseArray([np.nan, 1, 2, np.nan]))
self.assertEqual(arr.dtype, np.float64)
- self.assertTrue(np.isnan(arr.fill_value))
+ assert np.isnan(arr.fill_value)
arr = SparseArray(data=[1, 2, 3],
sparse_index=IntIndex(4, [1, 2, 3]),
@@ -133,7 +133,7 @@ def test_sparseseries_roundtrip(self):
def test_get_item(self):
- self.assertTrue(np.isnan(self.arr[1]))
+ assert np.isnan(self.arr[1])
self.assertEqual(self.arr[2], 1)
self.assertEqual(self.arr[7], 5)
@@ -147,8 +147,8 @@ def test_get_item(self):
self.assertEqual(self.arr[-1], self.arr[len(self.arr) - 1])
def test_take(self):
- self.assertTrue(np.isnan(self.arr.take(0)))
- self.assertTrue(np.isscalar(self.arr.take(2)))
+ assert np.isnan(self.arr.take(0))
+ assert np.isscalar(self.arr.take(2))
# np.take in < 1.8 doesn't support scalar indexing
if not _np_version_under1p8:
@@ -303,7 +303,7 @@ def test_constructor_copy(self):
not_copy = SparseArray(self.arr)
not_copy.sp_values[:3] = 0
- self.assertTrue((self.arr.sp_values[:3] == 0).all())
+ assert (self.arr.sp_values[:3] == 0).all()
def test_constructor_bool(self):
# GH 10648
@@ -331,7 +331,7 @@ def test_constructor_bool_fill_value(self):
arr = SparseArray([True, False, True], dtype=np.bool, fill_value=True)
self.assertEqual(arr.dtype, np.bool)
- self.assertTrue(arr.fill_value)
+ assert arr.fill_value
def test_constructor_float32(self):
# GH 10648
@@ -400,7 +400,7 @@ def test_set_fill_value(self):
arr = SparseArray([True, False, True], fill_value=False, dtype=np.bool)
arr.fill_value = True
- self.assertTrue(arr.fill_value)
+ assert arr.fill_value
# coerces to bool
msg = "unable to set fill_value 0 to bool dtype"
@@ -637,7 +637,7 @@ def test_fillna(self):
# only fill_value will be changed
s = SparseArray([0, 0, 0, 0], fill_value=np.nan)
self.assertEqual(s.dtype, np.int64)
- self.assertTrue(np.isnan(s.fill_value))
+ assert np.isnan(s.fill_value)
res = s.fillna(-1)
exp = SparseArray([0, 0, 0, 0], fill_value=-1)
tm.assert_sp_array_equal(res, exp)
diff --git a/pandas/tests/sparse/test_frame.py b/pandas/tests/sparse/test_frame.py
index a5080bbd81005..6b54dca8e93d5 100644
--- a/pandas/tests/sparse/test_frame.py
+++ b/pandas/tests/sparse/test_frame.py
@@ -91,7 +91,7 @@ def test_copy(self):
# as of v0.15.0
# this is now identical (but not is_a )
- self.assertTrue(cp.index.identical(self.frame.index))
+ assert cp.index.identical(self.frame.index)
def test_constructor(self):
for col, series in compat.iteritems(self.frame):
@@ -171,7 +171,7 @@ def test_constructor_dataframe(self):
def test_constructor_convert_index_once(self):
arr = np.array([1.5, 2.5, 3.5])
sdf = SparseDataFrame(columns=lrange(4), index=arr)
- self.assertTrue(sdf[0].index is sdf[1].index)
+ assert sdf[0].index is sdf[1].index
def test_constructor_from_series(self):
@@ -290,7 +290,7 @@ def test_dense_to_sparse(self):
'B': [1, 2, nan, nan, nan]})
sdf = df.to_sparse()
assert isinstance(sdf, SparseDataFrame)
- self.assertTrue(np.isnan(sdf.default_fill_value))
+ assert np.isnan(sdf.default_fill_value)
assert isinstance(sdf['A'].sp_index, BlockIndex)
tm.assert_frame_equal(sdf.to_dense(), df)
@@ -385,7 +385,7 @@ def _compare_to_dense(a, b, da, db, op):
def test_op_corners(self):
empty = self.empty + self.empty
- self.assertTrue(empty.empty)
+ assert empty.empty
foo = self.frame + self.empty
assert isinstance(foo.index, DatetimeIndex)
@@ -411,7 +411,7 @@ def test_iloc(self):
# 2227
result = self.frame.iloc[:, 0]
- self.assertTrue(isinstance(result, SparseSeries))
+ assert isinstance(result, SparseSeries)
tm.assert_sp_series_equal(result, self.frame['A'])
# preserve sparse index type. #2251
@@ -515,7 +515,7 @@ def _check_frame(frame, orig):
# scalar value
frame['J'] = 5
self.assertEqual(len(frame['J'].sp_values), N)
- self.assertTrue((frame['J'].sp_values == 5).all())
+ assert (frame['J'].sp_values == 5).all()
frame['K'] = frame.default_fill_value
self.assertEqual(len(frame['K'].sp_values), 0)
@@ -1099,7 +1099,7 @@ def test_nan_columnname(self):
# GH 8822
nan_colname = DataFrame(Series(1.0, index=[0]), columns=[nan])
nan_colname_sparse = nan_colname.to_sparse()
- self.assertTrue(np.isnan(nan_colname_sparse.columns[0]))
+ assert np.isnan(nan_colname_sparse.columns[0])
def test_isnull(self):
# GH 8276
diff --git a/pandas/tests/sparse/test_indexing.py b/pandas/tests/sparse/test_indexing.py
index bfa0a0440761f..6dd012ad46db9 100644
--- a/pandas/tests/sparse/test_indexing.py
+++ b/pandas/tests/sparse/test_indexing.py
@@ -17,7 +17,7 @@ def test_getitem(self):
sparse = self.sparse
self.assertEqual(sparse[0], 1)
- self.assertTrue(np.isnan(sparse[1]))
+ assert np.isnan(sparse[1])
self.assertEqual(sparse[3], 3)
result = sparse[[1, 3, 4]]
@@ -67,7 +67,7 @@ def test_getitem_fill_value(self):
sparse = orig.to_sparse(fill_value=0)
self.assertEqual(sparse[0], 1)
- self.assertTrue(np.isnan(sparse[1]))
+ assert np.isnan(sparse[1])
self.assertEqual(sparse[2], 0)
self.assertEqual(sparse[3], 3)
@@ -114,7 +114,7 @@ def test_loc(self):
sparse = self.sparse
self.assertEqual(sparse.loc[0], 1)
- self.assertTrue(np.isnan(sparse.loc[1]))
+ assert np.isnan(sparse.loc[1])
result = sparse.loc[[1, 3, 4]]
exp = orig.loc[[1, 3, 4]].to_sparse()
@@ -125,7 +125,7 @@ def test_loc(self):
exp = orig.loc[[1, 3, 4, 5]].to_sparse()
tm.assert_sp_series_equal(result, exp)
# padded with NaN
- self.assertTrue(np.isnan(result[-1]))
+ assert np.isnan(result[-1])
# dense array
result = sparse.loc[orig % 2 == 1]
@@ -146,7 +146,7 @@ def test_loc_index(self):
sparse = orig.to_sparse()
self.assertEqual(sparse.loc['A'], 1)
- self.assertTrue(np.isnan(sparse.loc['B']))
+ assert np.isnan(sparse.loc['B'])
result = sparse.loc[['A', 'C', 'D']]
exp = orig.loc[['A', 'C', 'D']].to_sparse()
@@ -171,7 +171,7 @@ def test_loc_index_fill_value(self):
sparse = orig.to_sparse(fill_value=0)
self.assertEqual(sparse.loc['A'], 1)
- self.assertTrue(np.isnan(sparse.loc['B']))
+ assert np.isnan(sparse.loc['B'])
result = sparse.loc[['A', 'C', 'D']]
exp = orig.loc[['A', 'C', 'D']].to_sparse(fill_value=0)
@@ -210,7 +210,7 @@ def test_iloc(self):
sparse = self.sparse
self.assertEqual(sparse.iloc[3], 3)
- self.assertTrue(np.isnan(sparse.iloc[2]))
+ assert np.isnan(sparse.iloc[2])
result = sparse.iloc[[1, 3, 4]]
exp = orig.iloc[[1, 3, 4]].to_sparse()
@@ -228,7 +228,7 @@ def test_iloc_fill_value(self):
sparse = orig.to_sparse(fill_value=0)
self.assertEqual(sparse.iloc[3], 3)
- self.assertTrue(np.isnan(sparse.iloc[1]))
+ assert np.isnan(sparse.iloc[1])
self.assertEqual(sparse.iloc[4], 0)
result = sparse.iloc[[1, 3, 4]]
@@ -250,26 +250,26 @@ def test_at(self):
orig = pd.Series([1, np.nan, np.nan, 3, np.nan])
sparse = orig.to_sparse()
self.assertEqual(sparse.at[0], orig.at[0])
- self.assertTrue(np.isnan(sparse.at[1]))
- self.assertTrue(np.isnan(sparse.at[2]))
+ assert np.isnan(sparse.at[1])
+ assert np.isnan(sparse.at[2])
self.assertEqual(sparse.at[3], orig.at[3])
- self.assertTrue(np.isnan(sparse.at[4]))
+ assert np.isnan(sparse.at[4])
orig = pd.Series([1, np.nan, np.nan, 3, np.nan],
index=list('abcde'))
sparse = orig.to_sparse()
self.assertEqual(sparse.at['a'], orig.at['a'])
- self.assertTrue(np.isnan(sparse.at['b']))
- self.assertTrue(np.isnan(sparse.at['c']))
+ assert np.isnan(sparse.at['b'])
+ assert np.isnan(sparse.at['c'])
self.assertEqual(sparse.at['d'], orig.at['d'])
- self.assertTrue(np.isnan(sparse.at['e']))
+ assert np.isnan(sparse.at['e'])
def test_at_fill_value(self):
orig = pd.Series([1, np.nan, 0, 3, 0],
index=list('abcde'))
sparse = orig.to_sparse(fill_value=0)
self.assertEqual(sparse.at['a'], orig.at['a'])
- self.assertTrue(np.isnan(sparse.at['b']))
+ assert np.isnan(sparse.at['b'])
self.assertEqual(sparse.at['c'], orig.at['c'])
self.assertEqual(sparse.at['d'], orig.at['d'])
self.assertEqual(sparse.at['e'], orig.at['e'])
@@ -279,19 +279,19 @@ def test_iat(self):
sparse = self.sparse
self.assertEqual(sparse.iat[0], orig.iat[0])
- self.assertTrue(np.isnan(sparse.iat[1]))
- self.assertTrue(np.isnan(sparse.iat[2]))
+ assert np.isnan(sparse.iat[1])
+ assert np.isnan(sparse.iat[2])
self.assertEqual(sparse.iat[3], orig.iat[3])
- self.assertTrue(np.isnan(sparse.iat[4]))
+ assert np.isnan(sparse.iat[4])
- self.assertTrue(np.isnan(sparse.iat[-1]))
+ assert np.isnan(sparse.iat[-1])
self.assertEqual(sparse.iat[-5], orig.iat[-5])
def test_iat_fill_value(self):
orig = pd.Series([1, np.nan, 0, 3, 0])
sparse = orig.to_sparse()
self.assertEqual(sparse.iat[0], orig.iat[0])
- self.assertTrue(np.isnan(sparse.iat[1]))
+ assert np.isnan(sparse.iat[1])
self.assertEqual(sparse.iat[2], orig.iat[2])
self.assertEqual(sparse.iat[3], orig.iat[3])
self.assertEqual(sparse.iat[4], orig.iat[4])
@@ -302,19 +302,19 @@ def test_iat_fill_value(self):
def test_get(self):
s = pd.SparseSeries([1, np.nan, np.nan, 3, np.nan])
self.assertEqual(s.get(0), 1)
- self.assertTrue(np.isnan(s.get(1)))
+ assert np.isnan(s.get(1))
assert s.get(5) is None
s = pd.SparseSeries([1, np.nan, 0, 3, 0], index=list('ABCDE'))
self.assertEqual(s.get('A'), 1)
- self.assertTrue(np.isnan(s.get('B')))
+ assert np.isnan(s.get('B'))
self.assertEqual(s.get('C'), 0)
assert s.get('XX') is None
s = pd.SparseSeries([1, np.nan, 0, 3, 0], index=list('ABCDE'),
fill_value=0)
self.assertEqual(s.get('A'), 1)
- self.assertTrue(np.isnan(s.get('B')))
+ assert np.isnan(s.get('B'))
self.assertEqual(s.get('C'), 0)
assert s.get('XX') is None
@@ -458,7 +458,7 @@ def test_getitem_multi(self):
sparse = self.sparse
self.assertEqual(sparse[0], orig[0])
- self.assertTrue(np.isnan(sparse[1]))
+ assert np.isnan(sparse[1])
self.assertEqual(sparse[3], orig[3])
tm.assert_sp_series_equal(sparse['A'], orig['A'].to_sparse())
@@ -487,8 +487,8 @@ def test_getitem_multi_tuple(self):
sparse = self.sparse
self.assertEqual(sparse['C', 0], orig['C', 0])
- self.assertTrue(np.isnan(sparse['A', 1]))
- self.assertTrue(np.isnan(sparse['B', 0]))
+ assert np.isnan(sparse['A', 1])
+ assert np.isnan(sparse['B', 0])
def test_getitems_slice_multi(self):
orig = self.orig
@@ -545,8 +545,8 @@ def test_loc_multi_tuple(self):
sparse = self.sparse
self.assertEqual(sparse.loc['C', 0], orig.loc['C', 0])
- self.assertTrue(np.isnan(sparse.loc['A', 1]))
- self.assertTrue(np.isnan(sparse.loc['B', 0]))
+ assert np.isnan(sparse.loc['A', 1])
+ assert np.isnan(sparse.loc['B', 0])
def test_loc_slice(self):
orig = self.orig
@@ -646,7 +646,7 @@ def test_loc(self):
sparse = orig.to_sparse()
self.assertEqual(sparse.loc[0, 'x'], 1)
- self.assertTrue(np.isnan(sparse.loc[1, 'z']))
+ assert np.isnan(sparse.loc[1, 'z'])
self.assertEqual(sparse.loc[2, 'z'], 4)
tm.assert_sp_series_equal(sparse.loc[0], orig.loc[0].to_sparse())
@@ -703,7 +703,7 @@ def test_loc_index(self):
sparse = orig.to_sparse()
self.assertEqual(sparse.loc['a', 'x'], 1)
- self.assertTrue(np.isnan(sparse.loc['b', 'z']))
+ assert np.isnan(sparse.loc['b', 'z'])
self.assertEqual(sparse.loc['c', 'z'], 4)
tm.assert_sp_series_equal(sparse.loc['a'], orig.loc['a'].to_sparse())
@@ -763,7 +763,7 @@ def test_iloc(self):
sparse = orig.to_sparse()
self.assertEqual(sparse.iloc[1, 1], 3)
- self.assertTrue(np.isnan(sparse.iloc[2, 0]))
+ assert np.isnan(sparse.iloc[2, 0])
tm.assert_sp_series_equal(sparse.iloc[0], orig.loc[0].to_sparse())
tm.assert_sp_series_equal(sparse.iloc[1], orig.loc[1].to_sparse())
@@ -811,8 +811,8 @@ def test_at(self):
index=list('ABCD'), columns=list('xyz'))
sparse = orig.to_sparse()
self.assertEqual(sparse.at['A', 'x'], orig.at['A', 'x'])
- self.assertTrue(np.isnan(sparse.at['B', 'z']))
- self.assertTrue(np.isnan(sparse.at['C', 'y']))
+ assert np.isnan(sparse.at['B', 'z'])
+ assert np.isnan(sparse.at['C', 'y'])
self.assertEqual(sparse.at['D', 'x'], orig.at['D', 'x'])
def test_at_fill_value(self):
@@ -823,8 +823,8 @@ def test_at_fill_value(self):
index=list('ABCD'), columns=list('xyz'))
sparse = orig.to_sparse(fill_value=0)
self.assertEqual(sparse.at['A', 'x'], orig.at['A', 'x'])
- self.assertTrue(np.isnan(sparse.at['B', 'z']))
- self.assertTrue(np.isnan(sparse.at['C', 'y']))
+ assert np.isnan(sparse.at['B', 'z'])
+ assert np.isnan(sparse.at['C', 'y'])
self.assertEqual(sparse.at['D', 'x'], orig.at['D', 'x'])
def test_iat(self):
@@ -835,11 +835,11 @@ def test_iat(self):
index=list('ABCD'), columns=list('xyz'))
sparse = orig.to_sparse()
self.assertEqual(sparse.iat[0, 0], orig.iat[0, 0])
- self.assertTrue(np.isnan(sparse.iat[1, 2]))
- self.assertTrue(np.isnan(sparse.iat[2, 1]))
+ assert np.isnan(sparse.iat[1, 2])
+ assert np.isnan(sparse.iat[2, 1])
self.assertEqual(sparse.iat[2, 0], orig.iat[2, 0])
- self.assertTrue(np.isnan(sparse.iat[-1, -2]))
+ assert np.isnan(sparse.iat[-1, -2])
self.assertEqual(sparse.iat[-1, -1], orig.iat[-1, -1])
def test_iat_fill_value(self):
@@ -850,11 +850,11 @@ def test_iat_fill_value(self):
index=list('ABCD'), columns=list('xyz'))
sparse = orig.to_sparse(fill_value=0)
self.assertEqual(sparse.iat[0, 0], orig.iat[0, 0])
- self.assertTrue(np.isnan(sparse.iat[1, 2]))
- self.assertTrue(np.isnan(sparse.iat[2, 1]))
+ assert np.isnan(sparse.iat[1, 2])
+ assert np.isnan(sparse.iat[2, 1])
self.assertEqual(sparse.iat[2, 0], orig.iat[2, 0])
- self.assertTrue(np.isnan(sparse.iat[-1, -2]))
+ assert np.isnan(sparse.iat[-1, -2])
self.assertEqual(sparse.iat[-1, -1], orig.iat[-1, -1])
def test_take(self):
diff --git a/pandas/tests/sparse/test_libsparse.py b/pandas/tests/sparse/test_libsparse.py
index 55115f45ff740..c7e1be968c148 100644
--- a/pandas/tests/sparse/test_libsparse.py
+++ b/pandas/tests/sparse/test_libsparse.py
@@ -162,25 +162,25 @@ def test_intindex_make_union(self):
b = IntIndex(5, np.array([0, 2], dtype=np.int32))
res = a.make_union(b)
exp = IntIndex(5, np.array([0, 2, 3, 4], np.int32))
- self.assertTrue(res.equals(exp))
+ assert res.equals(exp)
a = IntIndex(5, np.array([], dtype=np.int32))
b = IntIndex(5, np.array([0, 2], dtype=np.int32))
res = a.make_union(b)
exp = IntIndex(5, np.array([0, 2], np.int32))
- self.assertTrue(res.equals(exp))
+ assert res.equals(exp)
a = IntIndex(5, np.array([], dtype=np.int32))
b = IntIndex(5, np.array([], dtype=np.int32))
res = a.make_union(b)
exp = IntIndex(5, np.array([], np.int32))
- self.assertTrue(res.equals(exp))
+ assert res.equals(exp)
a = IntIndex(5, np.array([0, 1, 2, 3, 4], dtype=np.int32))
b = IntIndex(5, np.array([0, 1, 2, 3, 4], dtype=np.int32))
res = a.make_union(b)
exp = IntIndex(5, np.array([0, 1, 2, 3, 4], np.int32))
- self.assertTrue(res.equals(exp))
+ assert res.equals(exp)
a = IntIndex(5, np.array([0, 1], dtype=np.int32))
b = IntIndex(4, np.array([0, 1], dtype=np.int32))
@@ -219,13 +219,13 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, elen):
def test_intersect_empty(self):
xindex = IntIndex(4, np.array([], dtype=np.int32))
yindex = IntIndex(4, np.array([2, 3], dtype=np.int32))
- self.assertTrue(xindex.intersect(yindex).equals(xindex))
- self.assertTrue(yindex.intersect(xindex).equals(xindex))
+ assert xindex.intersect(yindex).equals(xindex)
+ assert yindex.intersect(xindex).equals(xindex)
xindex = xindex.to_block_index()
yindex = yindex.to_block_index()
- self.assertTrue(xindex.intersect(yindex).equals(xindex))
- self.assertTrue(yindex.intersect(xindex).equals(xindex))
+ assert xindex.intersect(yindex).equals(xindex)
+ assert yindex.intersect(xindex).equals(xindex)
def test_intersect_identical(self):
cases = [IntIndex(5, np.array([1, 2], dtype=np.int32)),
@@ -234,9 +234,9 @@ def test_intersect_identical(self):
IntIndex(5, np.array([], dtype=np.int32))]
for case in cases:
- self.assertTrue(case.intersect(case).equals(case))
+ assert case.intersect(case).equals(case)
case = case.to_block_index()
- self.assertTrue(case.intersect(case).equals(case))
+ assert case.intersect(case).equals(case)
class TestSparseIndexCommon(tm.TestCase):
@@ -436,7 +436,7 @@ def test_make_block_boundary(self):
def test_equals(self):
index = BlockIndex(10, [0, 4], [2, 5])
- self.assertTrue(index.equals(index))
+ assert index.equals(index)
assert not index.equals(BlockIndex(10, [0, 4], [2, 6]))
def test_check_integrity(self):
@@ -534,7 +534,7 @@ def test_int_internal(self):
def test_equals(self):
index = IntIndex(10, [0, 1, 2, 3, 4])
- self.assertTrue(index.equals(index))
+ assert index.equals(index)
assert not index.equals(IntIndex(10, [0, 1, 2, 3]))
def test_to_block_index(self):
@@ -547,8 +547,8 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, elen):
xbindex = xindex.to_int_index().to_block_index()
ybindex = yindex.to_int_index().to_block_index()
assert isinstance(xbindex, BlockIndex)
- self.assertTrue(xbindex.equals(xindex))
- self.assertTrue(ybindex.equals(yindex))
+ assert xbindex.equals(xindex)
+ assert ybindex.equals(yindex)
check_cases(_check_case)
@@ -578,7 +578,7 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, elen):
result_int_vals, ri_index, ifill = sparse_op(x, xdindex, xfill, y,
ydindex, yfill)
- self.assertTrue(rb_index.to_int_index().equals(ri_index))
+ assert rb_index.to_int_index().equals(ri_index)
tm.assert_numpy_array_equal(result_block_vals, result_int_vals)
self.assertEqual(bfill, ifill)
diff --git a/pandas/tests/sparse/test_series.py b/pandas/tests/sparse/test_series.py
index e0b0809c756b1..b8c12c2d64277 100644
--- a/pandas/tests/sparse/test_series.py
+++ b/pandas/tests/sparse/test_series.py
@@ -91,7 +91,7 @@ def setUp(self):
def test_constructor_dtype(self):
arr = SparseSeries([np.nan, 1, 2, np.nan])
self.assertEqual(arr.dtype, np.float64)
- self.assertTrue(np.isnan(arr.fill_value))
+ assert np.isnan(arr.fill_value)
arr = SparseSeries([np.nan, 1, 2, np.nan], fill_value=0)
self.assertEqual(arr.dtype, np.float64)
@@ -99,7 +99,7 @@ def test_constructor_dtype(self):
arr = SparseSeries([0, 1, 2, 4], dtype=np.int64, fill_value=np.nan)
self.assertEqual(arr.dtype, np.int64)
- self.assertTrue(np.isnan(arr.fill_value))
+ assert np.isnan(arr.fill_value)
arr = SparseSeries([0, 1, 2, 4], dtype=np.int64)
self.assertEqual(arr.dtype, np.int64)
@@ -230,9 +230,9 @@ def test_to_dense_preserve_name(self):
def test_constructor(self):
# test setup guys
- self.assertTrue(np.isnan(self.bseries.fill_value))
+ assert np.isnan(self.bseries.fill_value)
assert isinstance(self.bseries.sp_index, BlockIndex)
- self.assertTrue(np.isnan(self.iseries.fill_value))
+ assert np.isnan(self.iseries.fill_value)
assert isinstance(self.iseries.sp_index, IntIndex)
self.assertEqual(self.zbseries.fill_value, 0)
@@ -289,8 +289,8 @@ def test_constructor_scalar(self):
data = 5
sp = SparseSeries(data, np.arange(100))
sp = sp.reindex(np.arange(200))
- self.assertTrue((sp.loc[:99] == data).all())
- self.assertTrue(isnull(sp.loc[100:]).all())
+ assert (sp.loc[:99] == data).all()
+ assert isnull(sp.loc[100:]).all()
data = np.nan
sp = SparseSeries(data, np.arange(100))
@@ -805,13 +805,13 @@ def test_fill_value_corner(self):
cop.fill_value = 0
result = self.bseries / cop
- self.assertTrue(np.isnan(result.fill_value))
+ assert np.isnan(result.fill_value)
cop2 = self.zbseries.copy()
cop2.fill_value = 1
result = cop2 / cop
# 1 / 0 is inf
- self.assertTrue(np.isinf(result.fill_value))
+ assert np.isinf(result.fill_value)
def test_fill_value_when_combine_const(self):
# GH12723
diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index 96628322e4ee2..1b03c4e86b23f 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -264,8 +264,8 @@ def test_factorize_nan(self):
ids = rizer.factorize(key, sort=True, na_sentinel=na_sentinel)
expected = np.array([0, 1, 0, na_sentinel], dtype='int32')
self.assertEqual(len(set(key)), len(set(expected)))
- self.assertTrue(np.array_equal(
- pd.isnull(key), expected == na_sentinel))
+ tm.assert_numpy_array_equal(pd.isnull(key),
+ expected == na_sentinel)
# nan still maps to na_sentinel when sort=False
key = np.array([0, np.nan, 1], dtype='O')
@@ -276,8 +276,7 @@ def test_factorize_nan(self):
expected = np.array([2, -1, 0], dtype='int32')
self.assertEqual(len(set(key)), len(set(expected)))
- self.assertTrue(
- np.array_equal(pd.isnull(key), expected == na_sentinel))
+ tm.assert_numpy_array_equal(pd.isnull(key), expected == na_sentinel)
def test_complex_sorting(self):
# gh 12666 - check no segfault
@@ -926,7 +925,7 @@ def test_datetime_likes(self):
def test_unique_index(self):
cases = [pd.Index([1, 2, 3]), pd.RangeIndex(0, 3)]
for case in cases:
- self.assertTrue(case.is_unique)
+ assert case.is_unique
tm.assert_numpy_array_equal(case.duplicated(),
np.array([False, False, False]))
@@ -947,7 +946,7 @@ def test_group_var_generic_1d(self):
expected_counts = counts + 3
self.algo(out, counts, values, labels)
- self.assertTrue(np.allclose(out, expected_out, self.rtol))
+ assert np.allclose(out, expected_out, self.rtol)
tm.assert_numpy_array_equal(counts, expected_counts)
def test_group_var_generic_1d_flat_labels(self):
@@ -963,7 +962,7 @@ def test_group_var_generic_1d_flat_labels(self):
self.algo(out, counts, values, labels)
- self.assertTrue(np.allclose(out, expected_out, self.rtol))
+ assert np.allclose(out, expected_out, self.rtol)
tm.assert_numpy_array_equal(counts, expected_counts)
def test_group_var_generic_2d_all_finite(self):
@@ -978,7 +977,7 @@ def test_group_var_generic_2d_all_finite(self):
expected_counts = counts + 2
self.algo(out, counts, values, labels)
- self.assertTrue(np.allclose(out, expected_out, self.rtol))
+ assert np.allclose(out, expected_out, self.rtol)
tm.assert_numpy_array_equal(counts, expected_counts)
def test_group_var_generic_2d_some_nan(self):
@@ -1011,7 +1010,7 @@ def test_group_var_constant(self):
self.algo(out, counts, values, labels)
self.assertEqual(counts[0], 3)
- self.assertTrue(out[0, 0] >= 0)
+ assert out[0, 0] >= 0
tm.assert_almost_equal(out[0, 0], 0.0)
diff --git a/pandas/tests/test_base.py b/pandas/tests/test_base.py
index e058a62ea3089..cbcc4dc84e6d0 100644
--- a/pandas/tests/test_base.py
+++ b/pandas/tests/test_base.py
@@ -250,13 +250,11 @@ def test_binary_ops_docs(self):
operand2 = 'other'
op = op_map[op_name]
expected_str = ' '.join([operand1, op, operand2])
- self.assertTrue(expected_str in getattr(klass,
- op_name).__doc__)
+ assert expected_str in getattr(klass, op_name).__doc__
# reverse version of the binary ops
expected_str = ' '.join([operand2, op, operand1])
- self.assertTrue(expected_str in getattr(klass, 'r' +
- op_name).__doc__)
+ assert expected_str in getattr(klass, 'r' + op_name).__doc__
class TestIndexOps(Ops):
@@ -282,8 +280,8 @@ def test_none_comparison(self):
# noinspection PyComparisonWithNone
result = o != None # noqa
- self.assertTrue(result.iat[0])
- self.assertTrue(result.iat[1])
+ assert result.iat[0]
+ assert result.iat[1]
result = None == o # noqa
assert not result.iat[0]
@@ -292,8 +290,8 @@ def test_none_comparison(self):
# this fails for numpy < 1.9
# and oddly for *some* platforms
# result = None != o # noqa
- # self.assertTrue(result.iat[0])
- # self.assertTrue(result.iat[1])
+ # assert result.iat[0]
+ # assert result.iat[1]
result = None > o
assert not result.iat[0]
@@ -355,10 +353,10 @@ def test_nanops(self):
self.assertEqual(getattr(obj, op)(), 2.0)
obj = klass([np.nan])
- self.assertTrue(pd.isnull(getattr(obj, op)()))
+ assert pd.isnull(getattr(obj, op)())
obj = klass([])
- self.assertTrue(pd.isnull(getattr(obj, op)()))
+ assert pd.isnull(getattr(obj, op)())
obj = klass([pd.NaT, datetime(2011, 11, 1)])
# check DatetimeIndex monotonic path
@@ -423,12 +421,12 @@ def test_value_counts_unique_nunique(self):
result = o.value_counts()
tm.assert_series_equal(result, expected_s)
- self.assertTrue(result.index.name is None)
+ assert result.index.name is None
self.assertEqual(result.name, 'a')
result = o.unique()
if isinstance(o, Index):
- self.assertTrue(isinstance(result, o.__class__))
+ assert isinstance(result, o.__class__)
tm.assert_index_equal(result, orig)
elif is_datetimetz(o):
# datetimetz Series returns array of Timestamp
@@ -511,11 +509,11 @@ def test_value_counts_unique_nunique_null(self):
result_s_na = o.value_counts(dropna=False)
tm.assert_series_equal(result_s_na, expected_s_na)
- self.assertTrue(result_s_na.index.name is None)
+ assert result_s_na.index.name is None
self.assertEqual(result_s_na.name, 'a')
result_s = o.value_counts()
tm.assert_series_equal(o.value_counts(), expected_s)
- self.assertTrue(result_s.index.name is None)
+ assert result_s.index.name is None
self.assertEqual(result_s.name, 'a')
result = o.unique()
@@ -530,7 +528,7 @@ def test_value_counts_unique_nunique_null(self):
else:
tm.assert_numpy_array_equal(result[1:], values[2:])
- self.assertTrue(pd.isnull(result[0]))
+ assert pd.isnull(result[0])
self.assertEqual(result.dtype, orig.dtype)
self.assertEqual(o.nunique(), 8)
@@ -691,7 +689,7 @@ def test_value_counts_datetime64(self):
tm.assert_index_equal(unique, exp_idx)
else:
tm.assert_numpy_array_equal(unique[:3], expected)
- self.assertTrue(pd.isnull(unique[3]))
+ assert pd.isnull(unique[3])
self.assertEqual(s.nunique(), 3)
self.assertEqual(s.nunique(dropna=False), 4)
@@ -793,7 +791,7 @@ def test_duplicated_drop_duplicates_index(self):
expected = np.array([False] * len(original), dtype=bool)
duplicated = original.duplicated()
tm.assert_numpy_array_equal(duplicated, expected)
- self.assertTrue(duplicated.dtype == bool)
+ assert duplicated.dtype == bool
result = original.drop_duplicates()
tm.assert_index_equal(result, original)
assert result is not original
@@ -807,7 +805,7 @@ def test_duplicated_drop_duplicates_index(self):
dtype=bool)
duplicated = idx.duplicated()
tm.assert_numpy_array_equal(duplicated, expected)
- self.assertTrue(duplicated.dtype == bool)
+ assert duplicated.dtype == bool
tm.assert_index_equal(idx.drop_duplicates(), original)
base = [False] * len(idx)
@@ -817,7 +815,7 @@ def test_duplicated_drop_duplicates_index(self):
duplicated = idx.duplicated(keep='last')
tm.assert_numpy_array_equal(duplicated, expected)
- self.assertTrue(duplicated.dtype == bool)
+ assert duplicated.dtype == bool
result = idx.drop_duplicates(keep='last')
tm.assert_index_equal(result, idx[~expected])
@@ -828,7 +826,7 @@ def test_duplicated_drop_duplicates_index(self):
duplicated = idx.duplicated(keep=False)
tm.assert_numpy_array_equal(duplicated, expected)
- self.assertTrue(duplicated.dtype == bool)
+ assert duplicated.dtype == bool
result = idx.drop_duplicates(keep=False)
tm.assert_index_equal(result, idx[~expected])
@@ -951,7 +949,7 @@ def test_memory_usage(self):
if (is_object_dtype(o) or (isinstance(o, Series) and
is_object_dtype(o.index))):
# if there are objects, only deep will pick them up
- self.assertTrue(res_deep > res)
+ assert res_deep > res
else:
self.assertEqual(res, res_deep)
@@ -965,16 +963,16 @@ def test_memory_usage(self):
# sys.getsizeof will call the .memory_usage with
# deep=True, and add on some GC overhead
diff = res_deep - sys.getsizeof(o)
- self.assertTrue(abs(diff) < 100)
+ assert abs(diff) < 100
def test_searchsorted(self):
# See gh-12238
for o in self.objs:
index = np.searchsorted(o, max(o))
- self.assertTrue(0 <= index <= len(o))
+ assert 0 <= index <= len(o)
index = np.searchsorted(o, max(o), sorter=range(len(o)))
- self.assertTrue(0 <= index <= len(o))
+ assert 0 <= index <= len(o)
def test_validate_bool_args(self):
invalid_values = [1, "True", [1, 2, 3], 5.0]
diff --git a/pandas/tests/test_categorical.py b/pandas/tests/test_categorical.py
index 252b32e264c1b..708ca92c30cac 100644
--- a/pandas/tests/test_categorical.py
+++ b/pandas/tests/test_categorical.py
@@ -140,14 +140,14 @@ def test_is_equal_dtype(self):
c1 = Categorical(list('aabca'), categories=list('abc'), ordered=False)
c2 = Categorical(list('aabca'), categories=list('cab'), ordered=False)
c3 = Categorical(list('aabca'), categories=list('cab'), ordered=True)
- self.assertTrue(c1.is_dtype_equal(c1))
- self.assertTrue(c2.is_dtype_equal(c2))
- self.assertTrue(c3.is_dtype_equal(c3))
+ assert c1.is_dtype_equal(c1)
+ assert c2.is_dtype_equal(c2)
+ assert c3.is_dtype_equal(c3)
assert not c1.is_dtype_equal(c2)
assert not c1.is_dtype_equal(c3)
assert not c1.is_dtype_equal(Index(list('aabca')))
assert not c1.is_dtype_equal(c1.astype(object))
- self.assertTrue(c1.is_dtype_equal(CategoricalIndex(c1)))
+ assert c1.is_dtype_equal(CategoricalIndex(c1))
assert not (c1.is_dtype_equal(
CategoricalIndex(c1, categories=list('cab'))))
assert not c1.is_dtype_equal(CategoricalIndex(c1, ordered=True))
@@ -216,51 +216,51 @@ def f():
# This should result in integer categories, not float!
cat = pd.Categorical([1, 2, 3, np.nan], categories=[1, 2, 3])
- self.assertTrue(is_integer_dtype(cat.categories))
+ assert is_integer_dtype(cat.categories)
# https://github.com/pandas-dev/pandas/issues/3678
cat = pd.Categorical([np.nan, 1, 2, 3])
- self.assertTrue(is_integer_dtype(cat.categories))
+ assert is_integer_dtype(cat.categories)
# this should result in floats
cat = pd.Categorical([np.nan, 1, 2., 3])
- self.assertTrue(is_float_dtype(cat.categories))
+ assert is_float_dtype(cat.categories)
cat = pd.Categorical([np.nan, 1., 2., 3.])
- self.assertTrue(is_float_dtype(cat.categories))
+ assert is_float_dtype(cat.categories)
# This doesn't work -> this would probably need some kind of "remember
# the original type" feature to try to cast the array interface result
# to...
# vals = np.asarray(cat[cat.notnull()])
- # self.assertTrue(is_integer_dtype(vals))
+ # assert is_integer_dtype(vals)
# corner cases
cat = pd.Categorical([1])
- self.assertTrue(len(cat.categories) == 1)
- self.assertTrue(cat.categories[0] == 1)
- self.assertTrue(len(cat.codes) == 1)
- self.assertTrue(cat.codes[0] == 0)
+ assert len(cat.categories) == 1
+ assert cat.categories[0] == 1
+ assert len(cat.codes) == 1
+ assert cat.codes[0] == 0
cat = pd.Categorical(["a"])
- self.assertTrue(len(cat.categories) == 1)
- self.assertTrue(cat.categories[0] == "a")
- self.assertTrue(len(cat.codes) == 1)
- self.assertTrue(cat.codes[0] == 0)
+ assert len(cat.categories) == 1
+ assert cat.categories[0] == "a"
+ assert len(cat.codes) == 1
+ assert cat.codes[0] == 0
# Scalars should be converted to lists
cat = pd.Categorical(1)
- self.assertTrue(len(cat.categories) == 1)
- self.assertTrue(cat.categories[0] == 1)
- self.assertTrue(len(cat.codes) == 1)
- self.assertTrue(cat.codes[0] == 0)
+ assert len(cat.categories) == 1
+ assert cat.categories[0] == 1
+ assert len(cat.codes) == 1
+ assert cat.codes[0] == 0
cat = pd.Categorical([1], categories=1)
- self.assertTrue(len(cat.categories) == 1)
- self.assertTrue(cat.categories[0] == 1)
- self.assertTrue(len(cat.codes) == 1)
- self.assertTrue(cat.codes[0] == 0)
+ assert len(cat.categories) == 1
+ assert cat.categories[0] == 1
+ assert len(cat.codes) == 1
+ assert cat.codes[0] == 0
# Catch old style constructor useage: two arrays, codes + categories
# We can only catch two cases:
@@ -360,7 +360,7 @@ def test_constructor_with_datetimelike(self):
tm.assert_numpy_array_equal(c.codes, exp)
result = repr(c)
- self.assertTrue('NaT' in result)
+ assert 'NaT' in result
def test_constructor_from_index_series_datetimetz(self):
idx = pd.date_range('2015-01-01 10:00', freq='D', periods=3,
@@ -618,7 +618,7 @@ def test_categories_none(self):
def test_describe(self):
# string type
desc = self.factor.describe()
- self.assertTrue(self.factor.ordered)
+ assert self.factor.ordered
exp_index = pd.CategoricalIndex(['a', 'b', 'c'], name='categories',
ordered=self.factor.ordered)
expected = DataFrame({'counts': [3, 2, 3],
@@ -792,7 +792,7 @@ def test_construction_with_ordered(self):
cat = Categorical([0, 1, 2], ordered=False)
assert not cat.ordered
cat = Categorical([0, 1, 2], ordered=True)
- self.assertTrue(cat.ordered)
+ assert cat.ordered
def test_ordered_api(self):
# GH 9347
@@ -807,12 +807,12 @@ def test_ordered_api(self):
cat3 = pd.Categorical(["a", "c", "b"], ordered=True)
tm.assert_index_equal(cat3.categories, Index(['a', 'b', 'c']))
- self.assertTrue(cat3.ordered)
+ assert cat3.ordered
cat4 = pd.Categorical(["a", "c", "b"], categories=['b', 'c', 'a'],
ordered=True)
tm.assert_index_equal(cat4.categories, Index(['b', 'c', 'a']))
- self.assertTrue(cat4.ordered)
+ assert cat4.ordered
def test_set_ordered(self):
@@ -820,16 +820,16 @@ def test_set_ordered(self):
cat2 = cat.as_unordered()
assert not cat2.ordered
cat2 = cat.as_ordered()
- self.assertTrue(cat2.ordered)
+ assert cat2.ordered
cat2.as_unordered(inplace=True)
assert not cat2.ordered
cat2.as_ordered(inplace=True)
- self.assertTrue(cat2.ordered)
+ assert cat2.ordered
- self.assertTrue(cat2.set_ordered(True).ordered)
+ assert cat2.set_ordered(True).ordered
assert not cat2.set_ordered(False).ordered
cat2.set_ordered(True, inplace=True)
- self.assertTrue(cat2.ordered)
+ assert cat2.ordered
cat2.set_ordered(False, inplace=True)
assert not cat2.ordered
@@ -1168,7 +1168,7 @@ def test_min_max(self):
categories=['d', 'c', 'b', 'a'], ordered=True)
_min = cat.min()
_max = cat.max()
- self.assertTrue(np.isnan(_min))
+ assert np.isnan(_min)
self.assertEqual(_max, "b")
_min = cat.min(numeric_only=True)
@@ -1180,7 +1180,7 @@ def test_min_max(self):
ordered=True)
_min = cat.min()
_max = cat.max()
- self.assertTrue(np.isnan(_min))
+ assert np.isnan(_min)
self.assertEqual(_max, 1)
_min = cat.min(numeric_only=True)
@@ -1433,17 +1433,16 @@ def test_memory_usage(self):
cat = pd.Categorical([1, 2, 3])
# .categories is an index, so we include the hashtable
- self.assertTrue(cat.nbytes > 0 and cat.nbytes <= cat.memory_usage())
- self.assertTrue(cat.nbytes > 0 and
- cat.nbytes <= cat.memory_usage(deep=True))
+ assert 0 < cat.nbytes <= cat.memory_usage()
+ assert 0 < cat.nbytes <= cat.memory_usage(deep=True)
cat = pd.Categorical(['foo', 'foo', 'bar'])
- self.assertTrue(cat.memory_usage(deep=True) > cat.nbytes)
+ assert cat.memory_usage(deep=True) > cat.nbytes
# sys.getsizeof will call the .memory_usage with
# deep=True, and add on some GC overhead
diff = cat.memory_usage(deep=True) - sys.getsizeof(cat)
- self.assertTrue(abs(diff) < 100)
+ assert abs(diff) < 100
def test_searchsorted(self):
# https://github.com/pandas-dev/pandas/issues/8420
@@ -1640,23 +1639,23 @@ def test_codes_dtypes(self):
# GH 8453
result = Categorical(['foo', 'bar', 'baz'])
- self.assertTrue(result.codes.dtype == 'int8')
+ assert result.codes.dtype == 'int8'
result = Categorical(['foo%05d' % i for i in range(400)])
- self.assertTrue(result.codes.dtype == 'int16')
+ assert result.codes.dtype == 'int16'
result = Categorical(['foo%05d' % i for i in range(40000)])
- self.assertTrue(result.codes.dtype == 'int32')
+ assert result.codes.dtype == 'int32'
# adding cats
result = Categorical(['foo', 'bar', 'baz'])
- self.assertTrue(result.codes.dtype == 'int8')
+ assert result.codes.dtype == 'int8'
result = result.add_categories(['foo%05d' % i for i in range(400)])
- self.assertTrue(result.codes.dtype == 'int16')
+ assert result.codes.dtype == 'int16'
# removing cats
result = result.remove_categories(['foo%05d' % i for i in range(300)])
- self.assertTrue(result.codes.dtype == 'int8')
+ assert result.codes.dtype == 'int8'
def test_basic(self):
@@ -1893,7 +1892,7 @@ def test_sideeffects_free(self):
# so this WILL change values
cat = Categorical(["a", "b", "c", "a"])
s = pd.Series(cat)
- self.assertTrue(s.values is cat)
+ assert s.values is cat
s.cat.categories = [1, 2, 3]
exp_s = np.array([1, 2, 3, 1], dtype=np.int64)
tm.assert_numpy_array_equal(s.__array__(), exp_s)
@@ -2816,14 +2815,14 @@ def test_min_max(self):
], ordered=True))
_min = cat.min()
_max = cat.max()
- self.assertTrue(np.isnan(_min))
+ assert np.isnan(_min)
self.assertEqual(_max, "b")
cat = Series(Categorical(
[np.nan, 1, 2, np.nan], categories=[5, 4, 3, 2, 1], ordered=True))
_min = cat.min()
_max = cat.max()
- self.assertTrue(np.isnan(_min))
+ assert np.isnan(_min)
self.assertEqual(_max, 1)
def test_mode(self):
@@ -3188,7 +3187,7 @@ def test_slicing_and_getting_ops(self):
# frame
res_df = df.iloc[2:4, :]
tm.assert_frame_equal(res_df, exp_df)
- self.assertTrue(is_categorical_dtype(res_df["cats"]))
+ assert is_categorical_dtype(res_df["cats"])
# row
res_row = df.iloc[2, :]
@@ -3198,7 +3197,7 @@ def test_slicing_and_getting_ops(self):
# col
res_col = df.iloc[:, 0]
tm.assert_series_equal(res_col, exp_col)
- self.assertTrue(is_categorical_dtype(res_col))
+ assert is_categorical_dtype(res_col)
# single value
res_val = df.iloc[2, 0]
@@ -3208,7 +3207,7 @@ def test_slicing_and_getting_ops(self):
# frame
res_df = df.loc["j":"k", :]
tm.assert_frame_equal(res_df, exp_df)
- self.assertTrue(is_categorical_dtype(res_df["cats"]))
+ assert is_categorical_dtype(res_df["cats"])
# row
res_row = df.loc["j", :]
@@ -3218,7 +3217,7 @@ def test_slicing_and_getting_ops(self):
# col
res_col = df.loc[:, "cats"]
tm.assert_series_equal(res_col, exp_col)
- self.assertTrue(is_categorical_dtype(res_col))
+ assert is_categorical_dtype(res_col)
# single value
res_val = df.loc["j", "cats"]
@@ -3229,7 +3228,7 @@ def test_slicing_and_getting_ops(self):
# res_df = df.loc["j":"k",[0,1]] # doesn't work?
res_df = df.loc["j":"k", :]
tm.assert_frame_equal(res_df, exp_df)
- self.assertTrue(is_categorical_dtype(res_df["cats"]))
+ assert is_categorical_dtype(res_df["cats"])
# row
res_row = df.loc["j", :]
@@ -3239,7 +3238,7 @@ def test_slicing_and_getting_ops(self):
# col
res_col = df.loc[:, "cats"]
tm.assert_series_equal(res_col, exp_col)
- self.assertTrue(is_categorical_dtype(res_col))
+ assert is_categorical_dtype(res_col)
# single value
res_val = df.loc["j", df.columns[0]]
@@ -3272,23 +3271,23 @@ def test_slicing_and_getting_ops(self):
res_df = df.iloc[slice(2, 4)]
tm.assert_frame_equal(res_df, exp_df)
- self.assertTrue(is_categorical_dtype(res_df["cats"]))
+ assert is_categorical_dtype(res_df["cats"])
res_df = df.iloc[[2, 3]]
tm.assert_frame_equal(res_df, exp_df)
- self.assertTrue(is_categorical_dtype(res_df["cats"]))
+ assert is_categorical_dtype(res_df["cats"])
res_col = df.iloc[:, 0]
tm.assert_series_equal(res_col, exp_col)
- self.assertTrue(is_categorical_dtype(res_col))
+ assert is_categorical_dtype(res_col)
res_df = df.iloc[:, slice(0, 2)]
tm.assert_frame_equal(res_df, df)
- self.assertTrue(is_categorical_dtype(res_df["cats"]))
+ assert is_categorical_dtype(res_df["cats"])
res_df = df.iloc[:, [0, 1]]
tm.assert_frame_equal(res_df, df)
- self.assertTrue(is_categorical_dtype(res_df["cats"]))
+ assert is_categorical_dtype(res_df["cats"])
def test_slicing_doc_examples(self):
@@ -3784,22 +3783,22 @@ def test_cat_equality(self):
# vs scalar
assert not (a == 'a').all()
- self.assertTrue(((a != 'a') == ~(a == 'a')).all())
+ assert ((a != 'a') == ~(a == 'a')).all()
assert not ('a' == a).all()
- self.assertTrue((a == 'a')[0])
- self.assertTrue(('a' == a)[0])
+ assert (a == 'a')[0]
+ assert ('a' == a)[0]
assert not ('a' != a)[0]
# vs list-like
- self.assertTrue((a == a).all())
+ assert (a == a).all()
assert not (a != a).all()
- self.assertTrue((a == list(a)).all())
- self.assertTrue((a == b).all())
- self.assertTrue((b == a).all())
- self.assertTrue(((~(a == b)) == (a != b)).all())
- self.assertTrue(((~(b == a)) == (b != a)).all())
+ assert (a == list(a)).all()
+ assert (a == b).all()
+ assert (b == a).all()
+ assert ((~(a == b)) == (a != b)).all()
+ assert ((~(b == a)) == (b != a)).all()
assert not (a == c).all()
assert not (c == a).all()
@@ -3807,15 +3806,15 @@ def test_cat_equality(self):
assert not (d == a).all()
# vs a cat-like
- self.assertTrue((a == e).all())
- self.assertTrue((e == a).all())
+ assert (a == e).all()
+ assert (e == a).all()
assert not (a == f).all()
assert not (f == a).all()
- self.assertTrue(((~(a == e) == (a != e)).all()))
- self.assertTrue(((~(e == a) == (e != a)).all()))
- self.assertTrue(((~(a == f) == (a != f)).all()))
- self.assertTrue(((~(f == a) == (f != a)).all()))
+ assert ((~(a == e) == (a != e)).all())
+ assert ((~(e == a) == (e != a)).all())
+ assert ((~(a == f) == (a != f)).all())
+ assert ((~(f == a) == (f != a)).all())
# non-equality is not comparable
pytest.raises(TypeError, lambda: a < b)
diff --git a/pandas/tests/test_config.py b/pandas/tests/test_config.py
index 0e614fdbfe008..ad5418f4a4a29 100644
--- a/pandas/tests/test_config.py
+++ b/pandas/tests/test_config.py
@@ -32,10 +32,10 @@ def tearDown(self):
def test_api(self):
# the pandas object exposes the user API
- self.assertTrue(hasattr(pd, 'get_option'))
- self.assertTrue(hasattr(pd, 'set_option'))
- self.assertTrue(hasattr(pd, 'reset_option'))
- self.assertTrue(hasattr(pd, 'describe_option'))
+ assert hasattr(pd, 'get_option')
+ assert hasattr(pd, 'set_option')
+ assert hasattr(pd, 'reset_option')
+ assert hasattr(pd, 'describe_option')
def test_is_one_of_factory(self):
v = self.cf.is_one_of_factory([None, 12])
@@ -87,43 +87,30 @@ def test_describe_option(self):
pytest.raises(KeyError, self.cf.describe_option, 'no.such.key')
# we can get the description for any key we registered
- self.assertTrue(
- 'doc' in self.cf.describe_option('a', _print_desc=False))
- self.assertTrue(
- 'doc2' in self.cf.describe_option('b', _print_desc=False))
- self.assertTrue(
- 'precated' in self.cf.describe_option('b', _print_desc=False))
-
- self.assertTrue(
- 'doc3' in self.cf.describe_option('c.d.e1', _print_desc=False))
- self.assertTrue(
- 'doc4' in self.cf.describe_option('c.d.e2', _print_desc=False))
+ assert 'doc' in self.cf.describe_option('a', _print_desc=False)
+ assert 'doc2' in self.cf.describe_option('b', _print_desc=False)
+ assert 'precated' in self.cf.describe_option('b', _print_desc=False)
+ assert 'doc3' in self.cf.describe_option('c.d.e1', _print_desc=False)
+ assert 'doc4' in self.cf.describe_option('c.d.e2', _print_desc=False)
# if no doc is specified we get a default message
# saying "description not available"
- self.assertTrue(
- 'vailable' in self.cf.describe_option('f', _print_desc=False))
- self.assertTrue(
- 'vailable' in self.cf.describe_option('g.h', _print_desc=False))
- self.assertTrue(
- 'precated' in self.cf.describe_option('g.h', _print_desc=False))
- self.assertTrue(
- 'k' in self.cf.describe_option('g.h', _print_desc=False))
+ assert 'vailable' in self.cf.describe_option('f', _print_desc=False)
+ assert 'vailable' in self.cf.describe_option('g.h', _print_desc=False)
+ assert 'precated' in self.cf.describe_option('g.h', _print_desc=False)
+ assert 'k' in self.cf.describe_option('g.h', _print_desc=False)
# default is reported
- self.assertTrue(
- 'foo' in self.cf.describe_option('l', _print_desc=False))
+ assert 'foo' in self.cf.describe_option('l', _print_desc=False)
# current value is reported
assert 'bar' not in self.cf.describe_option('l', _print_desc=False)
self.cf.set_option("l", "bar")
- self.assertTrue(
- 'bar' in self.cf.describe_option('l', _print_desc=False))
+ assert 'bar' in self.cf.describe_option('l', _print_desc=False)
def test_case_insensitive(self):
self.cf.register_option('KanBAN', 1, 'doc')
- self.assertTrue(
- 'doc' in self.cf.describe_option('kanbaN', _print_desc=False))
+ assert 'doc' in self.cf.describe_option('kanbaN', _print_desc=False)
self.assertEqual(self.cf.get_option('kanBaN'), 1)
self.cf.set_option('KanBan', 2)
self.assertEqual(self.cf.get_option('kAnBaN'), 2)
@@ -132,7 +119,7 @@ def test_case_insensitive(self):
pytest.raises(KeyError, self.cf.get_option, 'no_such_option')
self.cf.deprecate_option('KanBan')
- self.assertTrue(self.cf._is_deprecated('kAnBaN'))
+ assert self.cf._is_deprecated('kAnBaN')
def test_get_option(self):
self.cf.register_option('a', 1, 'doc')
@@ -142,7 +129,7 @@ def test_get_option(self):
# gets of existing keys succeed
self.assertEqual(self.cf.get_option('a'), 1)
self.assertEqual(self.cf.get_option('b.c'), 'hullo')
- self.assertTrue(self.cf.get_option('b.b') is None)
+ assert self.cf.get_option('b.b') is None
# gets of non-existent keys fail
pytest.raises(KeyError, self.cf.get_option, 'no_such_option')
@@ -154,7 +141,7 @@ def test_set_option(self):
self.assertEqual(self.cf.get_option('a'), 1)
self.assertEqual(self.cf.get_option('b.c'), 'hullo')
- self.assertTrue(self.cf.get_option('b.b') is None)
+ assert self.cf.get_option('b.b') is None
self.cf.set_option('a', 2)
self.cf.set_option('b.c', 'wurld')
@@ -182,12 +169,12 @@ def test_set_option_multiple(self):
self.assertEqual(self.cf.get_option('a'), 1)
self.assertEqual(self.cf.get_option('b.c'), 'hullo')
- self.assertTrue(self.cf.get_option('b.b') is None)
+ assert self.cf.get_option('b.b') is None
self.cf.set_option('a', '2', 'b.c', None, 'b.b', 10.0)
self.assertEqual(self.cf.get_option('a'), '2')
- self.assertTrue(self.cf.get_option('b.c') is None)
+ assert self.cf.get_option('b.c') is None
self.assertEqual(self.cf.get_option('b.b'), 10.0)
def test_validation(self):
@@ -251,7 +238,7 @@ def test_deprecate_option(self):
# we can deprecate non-existent options
self.cf.deprecate_option('foo')
- self.assertTrue(self.cf._is_deprecated('foo'))
+ assert self.cf._is_deprecated('foo')
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter('always')
try:
@@ -262,8 +249,7 @@ def test_deprecate_option(self):
self.fail("Nonexistent option didn't raise KeyError")
self.assertEqual(len(w), 1) # should have raised one warning
- self.assertTrue(
- 'deprecated' in str(w[-1])) # we get the default message
+ assert 'deprecated' in str(w[-1]) # we get the default message
self.cf.register_option('a', 1, 'doc', validator=self.cf.is_int)
self.cf.register_option('b.c', 'hullo', 'doc2')
@@ -275,10 +261,8 @@ def test_deprecate_option(self):
self.cf.get_option('a')
self.assertEqual(len(w), 1) # should have raised one warning
- self.assertTrue(
- 'eprecated' in str(w[-1])) # we get the default message
- self.assertTrue(
- 'nifty_ver' in str(w[-1])) # with the removal_ver quoted
+ assert 'eprecated' in str(w[-1]) # we get the default message
+ assert 'nifty_ver' in str(w[-1]) # with the removal_ver quoted
pytest.raises(
KeyError, self.cf.deprecate_option, 'a') # can't depr. twice
@@ -289,8 +273,7 @@ def test_deprecate_option(self):
self.cf.get_option('b.c')
self.assertEqual(len(w), 1) # should have raised one warning
- self.assertTrue(
- 'zounds!' in str(w[-1])) # we get the custom message
+ assert 'zounds!' in str(w[-1]) # we get the custom message
# test rerouting keys
self.cf.register_option('d.a', 'foo', 'doc2')
@@ -304,24 +287,21 @@ def test_deprecate_option(self):
self.assertEqual(self.cf.get_option('d.dep'), 'foo')
self.assertEqual(len(w), 1) # should have raised one warning
- self.assertTrue(
- 'eprecated' in str(w[-1])) # we get the custom message
+ assert 'eprecated' in str(w[-1]) # we get the custom message
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter('always')
self.cf.set_option('d.dep', 'baz') # should overwrite "d.a"
self.assertEqual(len(w), 1) # should have raised one warning
- self.assertTrue(
- 'eprecated' in str(w[-1])) # we get the custom message
+ assert 'eprecated' in str(w[-1]) # we get the custom message
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter('always')
self.assertEqual(self.cf.get_option('d.dep'), 'baz')
self.assertEqual(len(w), 1) # should have raised one warning
- self.assertTrue(
- 'eprecated' in str(w[-1])) # we get the custom message
+ assert 'eprecated' in str(w[-1]) # we get the custom message
def test_config_prefix(self):
with self.cf.config_prefix("base"):
@@ -337,10 +317,8 @@ def test_config_prefix(self):
self.assertEqual(self.cf.get_option('base.a'), 3)
self.assertEqual(self.cf.get_option('base.b'), 4)
- self.assertTrue(
- 'doc1' in self.cf.describe_option('base.a', _print_desc=False))
- self.assertTrue(
- 'doc2' in self.cf.describe_option('base.b', _print_desc=False))
+ assert 'doc1' in self.cf.describe_option('base.a', _print_desc=False)
+ assert 'doc2' in self.cf.describe_option('base.b', _print_desc=False)
self.cf.reset_option('base.a')
self.cf.reset_option('base.b')
diff --git a/pandas/tests/test_expressions.py b/pandas/tests/test_expressions.py
index 782d2682145d8..ae505a66ad75a 100644
--- a/pandas/tests/test_expressions.py
+++ b/pandas/tests/test_expressions.py
@@ -269,7 +269,7 @@ def test_invalid(self):
# ok, we only check on first part of expression
result = expr._can_use_numexpr(operator.add, '+', self.frame,
self.frame2, 'evaluate')
- self.assertTrue(result)
+ assert result
def test_binary_ops(self):
def testit():
diff --git a/pandas/tests/test_lib.py b/pandas/tests/test_lib.py
index 621f624c41a19..0ac05bae624e5 100644
--- a/pandas/tests/test_lib.py
+++ b/pandas/tests/test_lib.py
@@ -13,15 +13,15 @@ class TestMisc(tm.TestCase):
def test_max_len_string_array(self):
arr = a = np.array(['foo', 'b', np.nan], dtype='object')
- self.assertTrue(lib.max_len_string_array(arr), 3)
+ assert lib.max_len_string_array(arr) == 3
# unicode
arr = a.astype('U').astype(object)
- self.assertTrue(lib.max_len_string_array(arr), 3)
+ assert lib.max_len_string_array(arr) == 3
# bytes for python3
arr = a.astype('S').astype(object)
- self.assertTrue(lib.max_len_string_array(arr), 3)
+ assert lib.max_len_string_array(arr) == 3
# raises
pytest.raises(TypeError,
@@ -139,13 +139,13 @@ def test_maybe_indices_to_slice_both_edges(self):
for step in [1, 2, 4, 5, 8, 9]:
indices = np.arange(0, 9, step, dtype=np.int64)
maybe_slice = lib.maybe_indices_to_slice(indices, len(target))
- self.assertTrue(isinstance(maybe_slice, slice))
+ assert isinstance(maybe_slice, slice)
tm.assert_numpy_array_equal(target[indices], target[maybe_slice])
# reverse
indices = indices[::-1]
maybe_slice = lib.maybe_indices_to_slice(indices, len(target))
- self.assertTrue(isinstance(maybe_slice, slice))
+ assert isinstance(maybe_slice, slice)
tm.assert_numpy_array_equal(target[indices], target[maybe_slice])
# not slice
@@ -189,16 +189,16 @@ def test_maybe_indices_to_slice_middle(self):
def test_maybe_booleans_to_slice(self):
arr = np.array([0, 0, 1, 1, 1, 0, 1], dtype=np.uint8)
result = lib.maybe_booleans_to_slice(arr)
- self.assertTrue(result.dtype == np.bool_)
+ assert result.dtype == np.bool_
result = lib.maybe_booleans_to_slice(arr[:0])
- self.assertTrue(result == slice(0, 0))
+ assert result == slice(0, 0)
def test_get_reverse_indexer(self):
indexer = np.array([-1, -1, 1, 2, 0, -1, 3, 4], dtype=np.int64)
result = lib.get_reverse_indexer(indexer, 5)
expected = np.array([4, 2, 3, 6, 7], dtype=np.int64)
- self.assertTrue(np.array_equal(result, expected))
+ assert np.array_equal(result, expected)
class TestNullObj(tm.TestCase):
diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py
index 668f5b2a5a962..1a4603978ce38 100755
--- a/pandas/tests/test_multilevel.py
+++ b/pandas/tests/test_multilevel.py
@@ -233,7 +233,7 @@ def test_repr_name_coincide(self):
df = DataFrame({'value': [0, 1]}, index=index)
lines = repr(df).split('\n')
- self.assertTrue(lines[2].startswith('a 0 foo'))
+ assert lines[2].startswith('a 0 foo')
def test_getitem_simple(self):
df = self.frame.T
@@ -289,12 +289,12 @@ def test_series_setitem(self):
s = self.ymd['A']
s[2000, 3] = np.nan
- self.assertTrue(isnull(s.values[42:65]).all())
- self.assertTrue(notnull(s.values[:42]).all())
- self.assertTrue(notnull(s.values[65:]).all())
+ assert isnull(s.values[42:65]).all()
+ assert notnull(s.values[:42]).all()
+ assert notnull(s.values[65:]).all()
s[2000, 3, 10] = np.nan
- self.assertTrue(isnull(s[49]))
+ assert isnull(s[49])
def test_series_slice_partial(self):
pass
@@ -333,8 +333,8 @@ def test_frame_getitem_setitem_slice(self):
cp = self.frame.copy()
cp.iloc[:4] = 0
- self.assertTrue((cp.values[:4] == 0).all())
- self.assertTrue((cp.values[4:] != 0).all())
+ assert (cp.values[:4] == 0).all()
+ assert (cp.values[4:] != 0).all()
def test_frame_getitem_setitem_multislice(self):
levels = [['t1', 't2'], ['a', 'b', 'c']]
@@ -393,7 +393,7 @@ def test_frame_setitem_multi_column(self):
# Works, but adds a column instead of updating the two existing ones
df['A'] = 0.0 # Doesn't work
- self.assertTrue((df['A'].values == 0).all())
+ assert (df['A'].values == 0).all()
# it broadcasts
df['B', '1'] = [1, 2, 3]
@@ -616,7 +616,7 @@ def test_getitem_setitem_slice_integers(self):
tm.assert_frame_equal(res, exp)
frame.loc[1:2] = 7
- self.assertTrue((frame.loc[1:2] == 7).values.all())
+ assert (frame.loc[1:2] == 7).values.all()
series = Series(np.random.randn(len(index)), index=index)
@@ -625,7 +625,7 @@ def test_getitem_setitem_slice_integers(self):
tm.assert_series_equal(res, exp)
series.loc[1:2] = 7
- self.assertTrue((series.loc[1:2] == 7).values.all())
+ assert (series.loc[1:2] == 7).values.all()
def test_getitem_int(self):
levels = [[0, 1], [0, 1, 2]]
@@ -719,8 +719,8 @@ def test_delevel_infer_dtype(self):
df = DataFrame(np.random.randn(8, 3), columns=['A', 'B', 'C'],
index=index)
deleveled = df.reset_index()
- self.assertTrue(is_integer_dtype(deleveled['prm1']))
- self.assertTrue(is_float_dtype(deleveled['prm2']))
+ assert is_integer_dtype(deleveled['prm1'])
+ assert is_float_dtype(deleveled['prm2'])
def test_reset_index_with_drop(self):
deleveled = self.ymd.reset_index(drop=True)
@@ -1136,7 +1136,7 @@ def test_stack_dropna(self):
df = df.set_index(['A', 'B'])
stacked = df.unstack().stack(dropna=False)
- self.assertTrue(len(stacked) > len(stacked.dropna()))
+ assert len(stacked) > len(stacked.dropna())
stacked = df.unstack().stack(dropna=True)
tm.assert_frame_equal(stacked, stacked.dropna())
@@ -1215,7 +1215,7 @@ def test_groupby_level_no_obs(self):
grouped = df1.groupby(axis=1, level=0)
result = grouped.sum()
- self.assertTrue((result.columns == ['f2', 'f3']).all())
+ assert (result.columns == ['f2', 'f3']).all()
def test_join(self):
a = self.frame.loc[self.frame.index[:5], ['A']]
@@ -1244,7 +1244,7 @@ def test_swaplevel(self):
back2 = swapped.swaplevel(0)
back3 = swapped.swaplevel(0, 1)
back4 = swapped.swaplevel('second', 'first')
- self.assertTrue(back.index.equals(self.frame.index))
+ assert back.index.equals(self.frame.index)
tm.assert_series_equal(back, back2)
tm.assert_series_equal(back, back3)
tm.assert_series_equal(back, back4)
@@ -1288,7 +1288,7 @@ def test_insert_index(self):
df = self.ymd[:5].T
df[2000, 1, 10] = df[2000, 1, 7]
assert isinstance(df.columns, MultiIndex)
- self.assertTrue((df[2000, 1, 10] == df[2000, 1, 7]).all())
+ assert (df[2000, 1, 10] == df[2000, 1, 7]).all()
def test_alignment(self):
x = Series(data=[1, 2, 3], index=MultiIndex.from_tuples([("A", 1), (
@@ -1314,7 +1314,7 @@ def test_frame_getitem_view(self):
# this works because we are modifying the underlying array
# really a no-no
df['foo'].values[:] = 0
- self.assertTrue((df['foo'].values == 0).all())
+ assert (df['foo'].values == 0).all()
# but not if it's mixed-type
df['foo', 'four'] = 'foo'
@@ -1331,7 +1331,7 @@ def f():
df = f()
except:
pass
- self.assertTrue((df['foo', 'one'] == 0).all())
+ assert (df['foo', 'one'] == 0).all()
def test_count(self):
frame = self.frame.copy()
@@ -1574,7 +1574,7 @@ def test_partial_ix_missing(self):
# need to put in some work here
# self.ymd.loc[2000, 0] = 0
- # self.assertTrue((self.ymd.loc[2000]['A'] == 0).all())
+ # assert (self.ymd.loc[2000]['A'] == 0).all()
# Pretty sure the second (and maybe even the first) is already wrong.
pytest.raises(Exception, self.ymd.loc.__getitem__, (2000, 6))
@@ -1874,7 +1874,7 @@ def test_dataframe_insert_column_all_na(self):
df = DataFrame([[1, 2], [3, 4], [5, 6]], index=mix)
s = Series({(1, 1): 1, (1, 2): 2})
df['new'] = s
- self.assertTrue(df['new'].isnull().all())
+ assert df['new'].isnull().all()
def test_join_segfault(self):
# 1532
@@ -1890,11 +1890,11 @@ def test_set_column_scalar_with_ix(self):
subset = self.frame.index[[1, 4, 5]]
self.frame.loc[subset] = 99
- self.assertTrue((self.frame.loc[subset].values == 99).all())
+ assert (self.frame.loc[subset].values == 99).all()
col = self.frame['B']
col[subset] = 97
- self.assertTrue((self.frame.loc[subset, 'B'] == 97).all())
+ assert (self.frame.loc[subset, 'B'] == 97).all()
def test_frame_dict_constructor_empty_series(self):
s1 = Series([
@@ -1932,7 +1932,7 @@ def test_nonunique_assignment_1750(self):
df.loc[ix, "C"] = '_'
- self.assertTrue((df.xs((1, 1))['C'] == '_').all())
+ assert (df.xs((1, 1))['C'] == '_').all()
def test_indexing_over_hashtable_size_cutoff(self):
n = 10000
@@ -1986,8 +1986,8 @@ def test_tuples_have_na(self):
labels=[[1, 1, 1, 1, -1, 0, 0, 0], [0, 1, 2, 3, 0,
1, 2, 3]])
- self.assertTrue(isnull(index[4][0]))
- self.assertTrue(isnull(index.values[4][0]))
+ assert isnull(index[4][0])
+ assert isnull(index.values[4][0])
def test_duplicate_groupby_issues(self):
idx_tp = [('600809', '20061231'), ('600809', '20070331'),
@@ -2023,21 +2023,21 @@ def test_duplicated_drop_duplicates(self):
[False, False, False, True, False, False], dtype=bool)
duplicated = idx.duplicated()
tm.assert_numpy_array_equal(duplicated, expected)
- self.assertTrue(duplicated.dtype == bool)
+ assert duplicated.dtype == bool
expected = MultiIndex.from_arrays(([1, 2, 3, 2, 3], [1, 1, 1, 2, 2]))
tm.assert_index_equal(idx.drop_duplicates(), expected)
expected = np.array([True, False, False, False, False, False])
duplicated = idx.duplicated(keep='last')
tm.assert_numpy_array_equal(duplicated, expected)
- self.assertTrue(duplicated.dtype == bool)
+ assert duplicated.dtype == bool
expected = MultiIndex.from_arrays(([2, 3, 1, 2, 3], [1, 1, 1, 2, 2]))
tm.assert_index_equal(idx.drop_duplicates(keep='last'), expected)
expected = np.array([True, False, False, True, False, False])
duplicated = idx.duplicated(keep=False)
tm.assert_numpy_array_equal(duplicated, expected)
- self.assertTrue(duplicated.dtype == bool)
+ assert duplicated.dtype == bool
expected = MultiIndex.from_arrays(([2, 3, 2, 3], [1, 1, 2, 2]))
tm.assert_index_equal(idx.drop_duplicates(keep=False), expected)
@@ -2387,7 +2387,7 @@ def test_sort_index_level_large_cardinality(self):
# it works!
result = df.sort_index(level=0)
- self.assertTrue(result.index.lexsort_depth == 3)
+ assert result.index.lexsort_depth == 3
# #2684 (int32)
index = MultiIndex.from_arrays([np.arange(4000)] * 3)
@@ -2395,8 +2395,8 @@ def test_sort_index_level_large_cardinality(self):
# it works!
result = df.sort_index(level=0)
- self.assertTrue((result.dtypes.values == df.dtypes.values).all())
- self.assertTrue(result.index.lexsort_depth == 3)
+ assert (result.dtypes.values == df.dtypes.values).all()
+ assert result.index.lexsort_depth == 3
def test_sort_index_level_by_name(self):
self.frame.index.names = ['first', 'second']
@@ -2426,7 +2426,7 @@ def test_is_lexsorted(self):
index = MultiIndex(levels=levels,
labels=[[0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 1, 2]])
- self.assertTrue(index.is_lexsorted())
+ assert index.is_lexsorted()
index = MultiIndex(levels=levels,
labels=[[0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 2, 1]])
diff --git a/pandas/tests/test_nanops.py b/pandas/tests/test_nanops.py
index a108749db8e6a..dda466a6937dd 100644
--- a/pandas/tests/test_nanops.py
+++ b/pandas/tests/test_nanops.py
@@ -347,7 +347,7 @@ def test_nanmean_overflow(self):
np_result = s.values.mean()
self.assertEqual(result, a)
self.assertEqual(result, np_result)
- self.assertTrue(result.dtype == np.float64)
+ assert result.dtype == np.float64
def test_returned_dtype(self):
@@ -362,15 +362,9 @@ def test_returned_dtype(self):
for method in group_a + group_b:
result = getattr(s, method)()
if is_integer_dtype(dtype) and method in group_a:
- self.assertTrue(
- result.dtype == np.float64,
- "return dtype expected from %s is np.float64, "
- "got %s instead" % (method, result.dtype))
+ assert result.dtype == np.float64
else:
- self.assertTrue(
- result.dtype == dtype,
- "return dtype expected from %s is %s, "
- "got %s instead" % (method, dtype, result.dtype))
+ assert result.dtype == dtype
def test_nanmedian(self):
with warnings.catch_warnings(record=True):
@@ -657,7 +651,7 @@ def check_bool(self, func, value, correct, *args, **kwargs):
try:
res0 = func(value, *args, **kwargs)
if correct:
- self.assertTrue(res0)
+ assert res0
else:
assert not res0
except BaseException as exc:
@@ -736,12 +730,12 @@ def test__isfinite(self):
raise
def test__bn_ok_dtype(self):
- self.assertTrue(nanops._bn_ok_dtype(self.arr_float.dtype, 'test'))
- self.assertTrue(nanops._bn_ok_dtype(self.arr_complex.dtype, 'test'))
- self.assertTrue(nanops._bn_ok_dtype(self.arr_int.dtype, 'test'))
- self.assertTrue(nanops._bn_ok_dtype(self.arr_bool.dtype, 'test'))
- self.assertTrue(nanops._bn_ok_dtype(self.arr_str.dtype, 'test'))
- self.assertTrue(nanops._bn_ok_dtype(self.arr_utf.dtype, 'test'))
+ assert nanops._bn_ok_dtype(self.arr_float.dtype, 'test')
+ assert nanops._bn_ok_dtype(self.arr_complex.dtype, 'test')
+ assert nanops._bn_ok_dtype(self.arr_int.dtype, 'test')
+ assert nanops._bn_ok_dtype(self.arr_bool.dtype, 'test')
+ assert nanops._bn_ok_dtype(self.arr_str.dtype, 'test')
+ assert nanops._bn_ok_dtype(self.arr_utf.dtype, 'test')
assert not nanops._bn_ok_dtype(self.arr_date.dtype, 'test')
assert not nanops._bn_ok_dtype(self.arr_tdelta.dtype, 'test')
assert not nanops._bn_ok_dtype(self.arr_obj.dtype, 'test')
@@ -761,30 +755,24 @@ def test_numeric_values(self):
def test_ndarray(self):
# Test numeric ndarray
values = np.array([1, 2, 3])
- self.assertTrue(np.allclose(nanops._ensure_numeric(values), values),
- 'Failed for numeric ndarray')
+ assert np.allclose(nanops._ensure_numeric(values), values)
# Test object ndarray
o_values = values.astype(object)
- self.assertTrue(np.allclose(nanops._ensure_numeric(o_values), values),
- 'Failed for object ndarray')
+ assert np.allclose(nanops._ensure_numeric(o_values), values)
# Test convertible string ndarray
s_values = np.array(['1', '2', '3'], dtype=object)
- self.assertTrue(np.allclose(nanops._ensure_numeric(s_values), values),
- 'Failed for convertible string ndarray')
+ assert np.allclose(nanops._ensure_numeric(s_values), values)
# Test non-convertible string ndarray
s_values = np.array(['foo', 'bar', 'baz'], dtype=object)
pytest.raises(ValueError, lambda: nanops._ensure_numeric(s_values))
def test_convertable_values(self):
- self.assertTrue(np.allclose(nanops._ensure_numeric('1'), 1.0),
- 'Failed for convertible integer string')
- self.assertTrue(np.allclose(nanops._ensure_numeric('1.1'), 1.1),
- 'Failed for convertible float string')
- self.assertTrue(np.allclose(nanops._ensure_numeric('1+1j'), 1 + 1j),
- 'Failed for convertible complex string')
+ assert np.allclose(nanops._ensure_numeric('1'), 1.0)
+ assert np.allclose(nanops._ensure_numeric('1.1'), 1.1)
+ assert np.allclose(nanops._ensure_numeric('1+1j'), 1 + 1j)
def test_non_convertable_values(self):
pytest.raises(TypeError, lambda: nanops._ensure_numeric('foo'))
@@ -883,14 +871,14 @@ def test_ground_truth(self):
for ddof in range(3):
var = nanops.nanvar(samples, skipna=True, axis=axis, ddof=ddof)
tm.assert_almost_equal(var[:3], variance[axis, ddof])
- self.assertTrue(np.isnan(var[3]))
+ assert np.isnan(var[3])
# Test nanstd.
for axis in range(2):
for ddof in range(3):
std = nanops.nanstd(samples, skipna=True, axis=axis, ddof=ddof)
tm.assert_almost_equal(std[:3], variance[axis, ddof] ** 0.5)
- self.assertTrue(np.isnan(std[3]))
+ assert np.isnan(std[3])
def test_nanstd_roundoff(self):
# Regression test for GH 10242 (test data taken from GH 10489). Ensure
@@ -943,7 +931,7 @@ def test_axis(self):
def test_nans(self):
samples = np.hstack([self.samples, np.nan])
skew = nanops.nanskew(samples, skipna=False)
- self.assertTrue(np.isnan(skew))
+ assert np.isnan(skew)
def test_nans_skipna(self):
samples = np.hstack([self.samples, np.nan])
@@ -993,7 +981,7 @@ def test_axis(self):
def test_nans(self):
samples = np.hstack([self.samples, np.nan])
kurt = nanops.nankurt(samples, skipna=False)
- self.assertTrue(np.isnan(kurt))
+ assert np.isnan(kurt)
def test_nans_skipna(self):
samples = np.hstack([self.samples, np.nan])
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
index 802acc86d3359..c9894ad9a9acf 100644
--- a/pandas/tests/test_panel.py
+++ b/pandas/tests/test_panel.py
@@ -808,7 +808,7 @@ def _check_view(self, indexer, comp):
cp = self.panel.copy()
obj = cp.loc[indexer]
obj.values[:] = 0
- self.assertTrue((obj.values == 0).all())
+ assert (obj.values == 0).all()
comp(cp.loc[indexer].reindex_like(obj), obj)
def test_logical_with_nas(self):
@@ -1047,13 +1047,13 @@ def test_constructor_fails_with_not_3d_input(self):
def test_consolidate(self):
with catch_warnings(record=True):
- self.assertTrue(self.panel._data.is_consolidated())
+ assert self.panel._data.is_consolidated()
self.panel['foo'] = 1.
assert not self.panel._data.is_consolidated()
panel = self.panel._consolidate()
- self.assertTrue(panel._data.is_consolidated())
+ assert panel._data.is_consolidated()
def test_ctor_dict(self):
with catch_warnings(record=True):
@@ -1134,10 +1134,10 @@ def test_ctor_orderedDict(self):
:50] # unique random int keys
d = OrderedDict([(k, mkdf(10, 5)) for k in keys])
p = Panel(d)
- self.assertTrue(list(p.items) == keys)
+ assert list(p.items) == keys
p = Panel.from_dict(d)
- self.assertTrue(list(p.items) == keys)
+ assert list(p.items) == keys
def test_constructor_resize(self):
with catch_warnings(record=True):
@@ -1440,7 +1440,7 @@ def test_reindex(self):
result = self.panel.reindex(
major=self.panel.major_axis, copy=False)
assert_panel_equal(result, self.panel)
- self.assertTrue(result is self.panel)
+ assert result is self.panel
def test_reindex_multi(self):
with catch_warnings(record=True):
@@ -1550,7 +1550,7 @@ def test_sort_index(self):
def test_fillna(self):
with catch_warnings(record=True):
filled = self.panel.fillna(0)
- self.assertTrue(np.isfinite(filled.values).all())
+ assert np.isfinite(filled.values).all()
filled = self.panel.fillna(method='backfill')
assert_frame_equal(filled['ItemA'],
@@ -1695,7 +1695,7 @@ def test_transpose_copy(self):
assert_panel_equal(result, expected)
panel.values[0, 1, 1] = np.nan
- self.assertTrue(notnull(result.values[1, 0, 1]))
+ assert notnull(result.values[1, 0, 1])
def test_to_frame(self):
with catch_warnings(record=True):
@@ -1864,7 +1864,7 @@ def test_to_panel_na_handling(self):
[0, 1, 2, 3, 4, 5, 2, 3, 4, 5]])
panel = df.to_panel()
- self.assertTrue(isnull(panel[0].loc[1, [0, 1]]).all())
+ assert isnull(panel[0].loc[1, [0, 1]]).all()
def test_to_panel_duplicates(self):
# #2441
@@ -2127,8 +2127,8 @@ def test_multiindex_get(self):
f2 = wp.loc['a']
assert_panel_equal(f1, f2)
- self.assertTrue((f1.items == [1, 2]).all())
- self.assertTrue((f2.items == [1, 2]).all())
+ assert (f1.items == [1, 2]).all()
+ assert (f2.items == [1, 2]).all()
ind = MultiIndex.from_tuples([('a', 1), ('a', 2), ('b', 1)],
names=['first', 'second'])
@@ -2140,10 +2140,10 @@ def test_multiindex_blocks(self):
wp = Panel(self.panel._data)
wp.items = ind
f1 = wp['a']
- self.assertTrue((f1.items == [1, 2]).all())
+ assert (f1.items == [1, 2]).all()
f1 = wp[('b', 1)]
- self.assertTrue((f1.columns == ['A', 'B', 'C', 'D']).all())
+ assert (f1.columns == ['A', 'B', 'C', 'D']).all()
def test_repr_empty(self):
with catch_warnings(record=True):
@@ -2165,7 +2165,7 @@ def test_rename(self):
# don't copy
renamed_nocopy = self.panel.rename_axis(mapper, axis=0, copy=False)
renamed_nocopy['foo'] = 3.
- self.assertTrue((self.panel['ItemA'].values == 3).all())
+ assert (self.panel['ItemA'].values == 3).all()
def test_get_attr(self):
assert_frame_equal(self.panel['ItemA'], self.panel.ItemA)
@@ -2413,18 +2413,18 @@ def test_update_raise(self):
**{'raise_conflict': True})
def test_all_any(self):
- self.assertTrue((self.panel.all(axis=0).values == nanall(
- self.panel, axis=0)).all())
- self.assertTrue((self.panel.all(axis=1).values == nanall(
- self.panel, axis=1).T).all())
- self.assertTrue((self.panel.all(axis=2).values == nanall(
- self.panel, axis=2).T).all())
- self.assertTrue((self.panel.any(axis=0).values == nanany(
- self.panel, axis=0)).all())
- self.assertTrue((self.panel.any(axis=1).values == nanany(
- self.panel, axis=1).T).all())
- self.assertTrue((self.panel.any(axis=2).values == nanany(
- self.panel, axis=2).T).all())
+ assert (self.panel.all(axis=0).values == nanall(
+ self.panel, axis=0)).all()
+ assert (self.panel.all(axis=1).values == nanall(
+ self.panel, axis=1).T).all()
+ assert (self.panel.all(axis=2).values == nanall(
+ self.panel, axis=2).T).all()
+ assert (self.panel.any(axis=0).values == nanany(
+ self.panel, axis=0)).all()
+ assert (self.panel.any(axis=1).values == nanany(
+ self.panel, axis=1).T).all()
+ assert (self.panel.any(axis=2).values == nanany(
+ self.panel, axis=2).T).all()
def test_all_any_unhandled(self):
pytest.raises(NotImplementedError, self.panel.all, bool_only=True)
@@ -2532,10 +2532,10 @@ def is_sorted(arr):
return (arr[1:] > arr[:-1]).any()
sorted_minor = self.panel.sort_index(level=1)
- self.assertTrue(is_sorted(sorted_minor.index.labels[1]))
+ assert is_sorted(sorted_minor.index.labels[1])
sorted_major = sorted_minor.sort_index(level=0)
- self.assertTrue(is_sorted(sorted_major.index.labels[0]))
+ assert is_sorted(sorted_major.index.labels[0])
def test_to_string(self):
buf = StringIO()
diff --git a/pandas/tests/test_panel4d.py b/pandas/tests/test_panel4d.py
index 5b4f09009c9db..05ce239b9c5a3 100644
--- a/pandas/tests/test_panel4d.py
+++ b/pandas/tests/test_panel4d.py
@@ -402,23 +402,23 @@ def func():
df = panel4dc.iloc[0, 0]
df.iloc[:] = 1
panel4dc.iloc[0, 0] = df
- self.assertTrue((panel4dc.iloc[0, 0].values == 1).all())
+ assert (panel4dc.iloc[0, 0].values == 1).all()
# Series
panel4dc = self.panel4d.copy()
s = panel4dc.iloc[0, 0, :, 0]
s.iloc[:] = 1
panel4dc.iloc[0, 0, :, 0] = s
- self.assertTrue((panel4dc.iloc[0, 0, :, 0].values == 1).all())
+ assert (panel4dc.iloc[0, 0, :, 0].values == 1).all()
# scalar
panel4dc = self.panel4d.copy()
panel4dc.iloc[0] = 1
panel4dc.iloc[1] = True
panel4dc.iloc[2] = 'foo'
- self.assertTrue((panel4dc.iloc[0].values == 1).all())
- self.assertTrue(panel4dc.iloc[1].values.all())
- self.assertTrue((panel4dc.iloc[2].values == 'foo').all())
+ assert (panel4dc.iloc[0].values == 1).all()
+ assert panel4dc.iloc[1].values.all()
+ assert (panel4dc.iloc[2].values == 'foo').all()
def test_setitem_by_indexer_mixed_type(self):
@@ -431,9 +431,9 @@ def test_setitem_by_indexer_mixed_type(self):
panel4dc.iloc[0] = 1
panel4dc.iloc[1] = True
panel4dc.iloc[2] = 'foo'
- self.assertTrue((panel4dc.iloc[0].values == 1).all())
- self.assertTrue(panel4dc.iloc[1].values.all())
- self.assertTrue((panel4dc.iloc[2].values == 'foo').all())
+ assert (panel4dc.iloc[0].values == 1).all()
+ assert panel4dc.iloc[1].values.all()
+ assert (panel4dc.iloc[2].values == 'foo').all()
def test_comparisons(self):
with catch_warnings(record=True):
@@ -681,13 +681,13 @@ def test_constructor_cast(self):
def test_consolidate(self):
with catch_warnings(record=True):
- self.assertTrue(self.panel4d._data.is_consolidated())
+ assert self.panel4d._data.is_consolidated()
self.panel4d['foo'] = 1.
assert not self.panel4d._data.is_consolidated()
panel4d = self.panel4d._consolidate()
- self.assertTrue(panel4d._data.is_consolidated())
+ assert panel4d._data.is_consolidated()
def test_ctor_dict(self):
with catch_warnings(record=True):
@@ -819,7 +819,7 @@ def test_reindex(self):
result = self.panel4d.reindex(
major=self.panel4d.major_axis, copy=False)
assert_panel4d_equal(result, self.panel4d)
- self.assertTrue(result is self.panel4d)
+ assert result is self.panel4d
def test_not_hashable(self):
with catch_warnings(record=True):
@@ -859,7 +859,7 @@ def test_fillna(self):
with catch_warnings(record=True):
assert not np.isfinite(self.panel4d.values).all()
filled = self.panel4d.fillna(0)
- self.assertTrue(np.isfinite(filled.values).all())
+ assert np.isfinite(filled.values).all()
pytest.raises(NotImplementedError,
self.panel4d.fillna, method='pad')
@@ -949,7 +949,7 @@ def test_rename(self):
axis=0,
copy=False)
renamed_nocopy['foo'] = 3.
- self.assertTrue((self.panel4d['l1'].values == 3).all())
+ assert (self.panel4d['l1'].values == 3).all()
def test_get_attr(self):
assert_panel_equal(self.panel4d['l1'], self.panel4d.l1)
diff --git a/pandas/tests/test_resample.py b/pandas/tests/test_resample.py
index 42a6a2a784a0e..37e22f101612b 100644
--- a/pandas/tests/test_resample.py
+++ b/pandas/tests/test_resample.py
@@ -63,9 +63,8 @@ def setUp(self):
def test_str(self):
r = self.series.resample('H')
- self.assertTrue(
- 'DatetimeIndexResampler [freq=<Hour>, axis=0, closed=left, '
- 'label=left, convention=start, base=0]' in str(r))
+ assert ('DatetimeIndexResampler [freq=<Hour>, axis=0, closed=left, '
+ 'label=left, convention=start, base=0]' in str(r))
def test_api(self):
@@ -133,10 +132,10 @@ def f():
tm.assert_numpy_array_equal(np.array(r), np.array(r.mean()))
# masquerade as Series/DataFrame as needed for API compat
- self.assertTrue(isinstance(self.series.resample('H'), ABCSeries))
+ assert isinstance(self.series.resample('H'), ABCSeries)
assert not isinstance(self.frame.resample('H'), ABCSeries)
assert not isinstance(self.series.resample('H'), ABCDataFrame)
- self.assertTrue(isinstance(self.frame.resample('H'), ABCDataFrame))
+ assert isinstance(self.frame.resample('H'), ABCDataFrame)
# bin numeric ops
for op in ['__add__', '__mul__', '__truediv__', '__div__', '__sub__']:
@@ -886,7 +885,7 @@ def test_custom_grouper(self):
g._cython_agg_general(f)
self.assertEqual(g.ngroups, 2593)
- self.assertTrue(notnull(g.mean()).all())
+ assert notnull(g.mean()).all()
# construct expected val
arr = [1] + [5] * 2592
@@ -1118,47 +1117,46 @@ def test_resample_basic_from_daily(self):
result = s.resample('w-sun').last()
self.assertEqual(len(result), 3)
- self.assertTrue((result.index.dayofweek == [6, 6, 6]).all())
+ assert (result.index.dayofweek == [6, 6, 6]).all()
self.assertEqual(result.iloc[0], s['1/2/2005'])
self.assertEqual(result.iloc[1], s['1/9/2005'])
self.assertEqual(result.iloc[2], s.iloc[-1])
result = s.resample('W-MON').last()
self.assertEqual(len(result), 2)
- self.assertTrue((result.index.dayofweek == [0, 0]).all())
+ assert (result.index.dayofweek == [0, 0]).all()
self.assertEqual(result.iloc[0], s['1/3/2005'])
self.assertEqual(result.iloc[1], s['1/10/2005'])
result = s.resample('W-TUE').last()
self.assertEqual(len(result), 2)
- self.assertTrue((result.index.dayofweek == [1, 1]).all())
+ assert (result.index.dayofweek == [1, 1]).all()
self.assertEqual(result.iloc[0], s['1/4/2005'])
self.assertEqual(result.iloc[1], s['1/10/2005'])
result = s.resample('W-WED').last()
self.assertEqual(len(result), 2)
- self.assertTrue((result.index.dayofweek == [2, 2]).all())
+ assert (result.index.dayofweek == [2, 2]).all()
self.assertEqual(result.iloc[0], s['1/5/2005'])
self.assertEqual(result.iloc[1], s['1/10/2005'])
result = s.resample('W-THU').last()
self.assertEqual(len(result), 2)
- self.assertTrue((result.index.dayofweek == [3, 3]).all())
+ assert (result.index.dayofweek == [3, 3]).all()
self.assertEqual(result.iloc[0], s['1/6/2005'])
self.assertEqual(result.iloc[1], s['1/10/2005'])
result = s.resample('W-FRI').last()
self.assertEqual(len(result), 2)
- self.assertTrue((result.index.dayofweek == [4, 4]).all())
+ assert (result.index.dayofweek == [4, 4]).all()
self.assertEqual(result.iloc[0], s['1/7/2005'])
self.assertEqual(result.iloc[1], s['1/10/2005'])
# to biz day
result = s.resample('B').last()
self.assertEqual(len(result), 7)
- self.assertTrue((result.index.dayofweek == [
- 4, 0, 1, 2, 3, 4, 0
- ]).all())
+ assert (result.index.dayofweek == [4, 0, 1, 2, 3, 4, 0]).all()
+
self.assertEqual(result.iloc[0], s['1/2/2005'])
self.assertEqual(result.iloc[1], s['1/3/2005'])
self.assertEqual(result.iloc[5], s['1/9/2005'])
@@ -1451,13 +1449,13 @@ def _ohlc(group):
resampled = ts.resample('5min', closed='right',
label='right').ohlc()
- self.assertTrue((resampled.loc['1/1/2000 00:00'] == ts[0]).all())
+ assert (resampled.loc['1/1/2000 00:00'] == ts[0]).all()
exp = _ohlc(ts[1:31])
- self.assertTrue((resampled.loc['1/1/2000 00:05'] == exp).all())
+ assert (resampled.loc['1/1/2000 00:05'] == exp).all()
exp = _ohlc(ts['1/1/2000 5:55:01':])
- self.assertTrue((resampled.loc['1/1/2000 6:00:00'] == exp).all())
+ assert (resampled.loc['1/1/2000 6:00:00'] == exp).all()
def test_downsample_non_unique(self):
rng = date_range('1/1/2000', '2/29/2000')
@@ -2588,7 +2586,7 @@ def test_resample_weekly_all_na(self):
result = ts.resample('W-THU').asfreq()
- self.assertTrue(result.isnull().all())
+ assert result.isnull().all()
result = ts.resample('W-THU').asfreq().ffill()[:-1]
expected = ts.asfreq('W-THU').ffill()
diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index 45e8aa3a367db..5b9797ce76a45 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -49,8 +49,7 @@ def test_iter(self):
for el in s:
# each element of the series is either a basestring/str or nan
- self.assertTrue(isinstance(el, compat.string_types) or
- isnull(el))
+ assert isinstance(el, compat.string_types) or isnull(el)
# desired behavior is to iterate until everything would be nan on the
# next iter so make sure the last element of the iterator was 'l' in
@@ -2114,12 +2113,12 @@ def test_split_with_name(self):
idx = Index(['a,b', 'c,d'], name='xxx')
res = idx.str.split(',')
exp = Index([['a', 'b'], ['c', 'd']], name='xxx')
- self.assertTrue(res.nlevels, 1)
+ assert res.nlevels == 1
tm.assert_index_equal(res, exp)
res = idx.str.split(',', expand=True)
exp = MultiIndex.from_tuples([('a', 'b'), ('c', 'd')])
- self.assertTrue(res.nlevels, 2)
+ assert res.nlevels == 2
tm.assert_index_equal(res, exp)
def test_partition_series(self):
@@ -2207,13 +2206,13 @@ def test_partition_index(self):
result = values.str.partition('_')
exp = Index([('a', '_', 'b_c'), ('c', '_', 'd_e'), ('f', '_', 'g_h')])
tm.assert_index_equal(result, exp)
- self.assertTrue(isinstance(result, MultiIndex))
+ assert isinstance(result, MultiIndex)
self.assertEqual(result.nlevels, 3)
result = values.str.rpartition('_')
exp = Index([('a_b', '_', 'c'), ('c_d', '_', 'e'), ('f_g', '_', 'h')])
tm.assert_index_equal(result, exp)
- self.assertTrue(isinstance(result, MultiIndex))
+ assert isinstance(result, MultiIndex)
self.assertEqual(result.nlevels, 3)
def test_partition_to_dataframe(self):
@@ -2259,13 +2258,13 @@ def test_partition_with_name(self):
idx = Index(['a,b', 'c,d'], name='xxx')
res = idx.str.partition(',')
exp = MultiIndex.from_tuples([('a', ',', 'b'), ('c', ',', 'd')])
- self.assertTrue(res.nlevels, 3)
+ assert res.nlevels == 3
tm.assert_index_equal(res, exp)
# should preserve name
res = idx.str.partition(',', expand=False)
exp = Index(np.array([('a', ',', 'b'), ('c', ',', 'd')]), name='xxx')
- self.assertTrue(res.nlevels, 1)
+ assert res.nlevels == 1
tm.assert_index_equal(res, exp)
def test_pipe_failures(self):
@@ -2720,14 +2719,14 @@ def test_index_str_accessor_visibility(self):
(['aa', datetime(2011, 1, 1)], 'mixed')]
for values, tp in cases:
idx = Index(values)
- self.assertTrue(isinstance(Series(values).str, StringMethods))
- self.assertTrue(isinstance(idx.str, StringMethods))
+ assert isinstance(Series(values).str, StringMethods)
+ assert isinstance(idx.str, StringMethods)
self.assertEqual(idx.inferred_type, tp)
for values, tp in cases:
idx = Index(values)
- self.assertTrue(isinstance(Series(values).str, StringMethods))
- self.assertTrue(isinstance(idx.str, StringMethods))
+ assert isinstance(Series(values).str, StringMethods)
+ assert isinstance(idx.str, StringMethods)
self.assertEqual(idx.inferred_type, tp)
cases = [([1, np.nan], 'floating'),
diff --git a/pandas/tests/test_testing.py b/pandas/tests/test_testing.py
index 45994fd400912..80db5eb49c127 100644
--- a/pandas/tests/test_testing.py
+++ b/pandas/tests/test_testing.py
@@ -739,4 +739,4 @@ def test_locale(self):
# GH9744
locales = tm.get_locales()
- self.assertTrue(len(locales) >= 1)
+ assert len(locales) >= 1
diff --git a/pandas/tests/test_window.py b/pandas/tests/test_window.py
index 13d471f368693..7979e7d77a49d 100644
--- a/pandas/tests/test_window.py
+++ b/pandas/tests/test_window.py
@@ -853,7 +853,7 @@ def test_cmov_window_corner(self):
vals.fill(np.nan)
with catch_warnings(record=True):
rs = mom.rolling_window(vals, 5, 'boxcar', center=True)
- self.assertTrue(np.isnan(rs).all())
+ assert np.isnan(rs).all()
# empty
vals = np.array([])
@@ -865,7 +865,7 @@ def test_cmov_window_corner(self):
vals = np.random.randn(5)
with catch_warnings(record=True):
rs = mom.rolling_window(vals, 10, 'boxcar')
- self.assertTrue(np.isnan(rs).all())
+ assert np.isnan(rs).all()
self.assertEqual(len(rs), 5)
def test_cmov_window_frame(self):
@@ -1144,7 +1144,7 @@ def test_rolling_apply_out_of_bounds(self):
# it works!
with catch_warnings(record=True):
result = mom.rolling_apply(arr, 10, np.sum)
- self.assertTrue(isnull(result).all())
+ assert isnull(result).all()
with catch_warnings(record=True):
result = mom.rolling_apply(arr, 10, np.sum, min_periods=1)
@@ -1172,7 +1172,7 @@ def test_rolling_std_1obs(self):
with catch_warnings(record=True):
result = mom.rolling_std(np.array([np.nan, np.nan, 3., 4., 5.]),
3, min_periods=2)
- self.assertTrue(np.isnan(result[2]))
+ assert np.isnan(result[2])
def test_rolling_std_neg_sqrt(self):
# unit test from Bottleneck
@@ -1184,11 +1184,11 @@ def test_rolling_std_neg_sqrt(self):
0.00028718669878572767])
with catch_warnings(record=True):
b = mom.rolling_std(a, window=3)
- self.assertTrue(np.isfinite(b[2:]).all())
+ assert np.isfinite(b[2:]).all()
with catch_warnings(record=True):
b = mom.ewmstd(a, span=3)
- self.assertTrue(np.isfinite(b[2:]).all())
+ assert np.isfinite(b[2:]).all()
def test_rolling_var(self):
self._check_moment_func(mom.rolling_var, lambda x: np.var(x, ddof=1),
@@ -1226,25 +1226,25 @@ def test_fperr_robustness(self):
with catch_warnings(record=True):
result = mom.rolling_sum(arr, 2)
- self.assertTrue((result[1:] >= 0).all())
+ assert (result[1:] >= 0).all()
with catch_warnings(record=True):
result = mom.rolling_mean(arr, 2)
- self.assertTrue((result[1:] >= 0).all())
+ assert (result[1:] >= 0).all()
with catch_warnings(record=True):
result = mom.rolling_var(arr, 2)
- self.assertTrue((result[1:] >= 0).all())
+ assert (result[1:] >= 0).all()
# #2527, ugh
arr = np.array([0.00012456, 0.0003, 0])
with catch_warnings(record=True):
result = mom.rolling_mean(arr, 1)
- self.assertTrue(result[-1] >= 0)
+ assert result[-1] >= 0
with catch_warnings(record=True):
result = mom.rolling_mean(-arr, 1)
- self.assertTrue(result[-1] <= 0)
+ assert result[-1] <= 0
def _check_moment_func(self, f, static_comp, name=None, window=50,
has_min_periods=True, has_center=True,
@@ -1297,16 +1297,16 @@ def get_result(arr, window, min_periods=None, center=False):
# min_periods is working correctly
result = get_result(arr, 20, min_periods=15)
- self.assertTrue(np.isnan(result[23]))
+ assert np.isnan(result[23])
assert not np.isnan(result[24])
assert not np.isnan(result[-6])
- self.assertTrue(np.isnan(result[-5]))
+ assert np.isnan(result[-5])
arr2 = randn(20)
result = get_result(arr2, 10, min_periods=5)
- self.assertTrue(isnull(result[3]))
- self.assertTrue(notnull(result[4]))
+ assert isnull(result[3])
+ assert notnull(result[4])
# min_periods=0
result0 = get_result(arr, 20, min_periods=0)
@@ -1344,8 +1344,8 @@ def get_result(arr, window, min_periods=None, center=False):
expected = get_result(self.arr, len(self.arr),
min_periods=minp)
nan_mask = np.isnan(result)
- self.assertTrue(np.array_equal(nan_mask, np.isnan(
- expected)))
+ tm.assert_numpy_array_equal(nan_mask, np.isnan(expected))
+
nan_mask = ~nan_mask
tm.assert_almost_equal(result[nan_mask],
expected[nan_mask])
@@ -1353,7 +1353,8 @@ def get_result(arr, window, min_periods=None, center=False):
result = get_result(self.arr, len(self.arr) + 1)
expected = get_result(self.arr, len(self.arr))
nan_mask = np.isnan(result)
- self.assertTrue(np.array_equal(nan_mask, np.isnan(expected)))
+ tm.assert_numpy_array_equal(nan_mask, np.isnan(expected))
+
nan_mask = ~nan_mask
tm.assert_almost_equal(result[nan_mask], expected[nan_mask])
@@ -1459,7 +1460,7 @@ def test_ewma(self):
arr[5] = 1
with catch_warnings(record=True):
result = mom.ewma(arr, span=100, adjust=False).sum()
- self.assertTrue(np.abs(result - 1) < 1e-2)
+ assert np.abs(result - 1) < 1e-2
s = Series([1.0, 2.0, 4.0, 8.0])
@@ -1659,18 +1660,18 @@ def _check_ew_ndarray(self, func, preserve_nan=False, name=None):
# check min_periods
# GH 7898
result = func(s, 50, min_periods=2)
- self.assertTrue(np.isnan(result.values[:11]).all())
+ assert np.isnan(result.values[:11]).all()
assert not np.isnan(result.values[11:]).any()
for min_periods in (0, 1):
result = func(s, 50, min_periods=min_periods)
if func == mom.ewma:
- self.assertTrue(np.isnan(result.values[:10]).all())
+ assert np.isnan(result.values[:10]).all()
assert not np.isnan(result.values[10:]).any()
else:
# ewmstd, ewmvol, ewmvar (with bias=False) require at least two
# values
- self.assertTrue(np.isnan(result.values[:11]).all())
+ assert np.isnan(result.values[:11]).all()
assert not np.isnan(result.values[11:]).any()
# check series of length 0
@@ -1980,7 +1981,8 @@ def _non_null_values(x):
# check that correlation of a series with itself is either 1 or NaN
corr_x_x = corr(x, x)
- # self.assertTrue(_non_null_values(corr_x_x).issubset(set([1.]))) #
+
+ # assert _non_null_values(corr_x_x).issubset(set([1.]))
# restore once rolling_cov(x, x) is identically equal to var(x)
if is_constant:
@@ -2406,16 +2408,15 @@ def test_corr_sanity(self):
[0.84780328, 0.33394331], [0.78369152, 0.63919667]]))
res = df[0].rolling(5, center=True).corr(df[1])
- self.assertTrue(all([np.abs(np.nan_to_num(x)) <= 1 for x in res]))
+ assert all([np.abs(np.nan_to_num(x)) <= 1 for x in res])
# and some fuzzing
- for i in range(10):
+ for _ in range(10):
df = DataFrame(np.random.rand(30, 2))
res = df[0].rolling(5, center=True).corr(df[1])
try:
- self.assertTrue(all([np.abs(np.nan_to_num(x)) <= 1 for x in res
- ]))
- except:
+ assert all([np.abs(np.nan_to_num(x)) <= 1 for x in res])
+ except AssertionError:
print(res)
def test_flex_binary_frame(self):
@@ -2465,7 +2466,7 @@ def func(A, B, com, **kwargs):
B[-10:] = np.NaN
result = func(A, B, 20, min_periods=5)
- self.assertTrue(np.isnan(result.values[:14]).all())
+ assert np.isnan(result.values[:14]).all()
assert not np.isnan(result.values[14:]).any()
# GH 7898
@@ -2473,7 +2474,7 @@ def func(A, B, com, **kwargs):
result = func(A, B, 20, min_periods=min_periods)
# binary functions (ewmcov, ewmcorr) with bias=False require at
# least two values
- self.assertTrue(np.isnan(result.values[:11]).all())
+ assert np.isnan(result.values[:11]).all()
assert not np.isnan(result.values[11:]).any()
# check series of length 0
@@ -2890,13 +2891,13 @@ def _check_expanding_ndarray(self, func, static_comp, has_min_periods=True,
# min_periods is working correctly
result = func(arr, min_periods=15)
- self.assertTrue(np.isnan(result[13]))
+ assert np.isnan(result[13])
assert not np.isnan(result[14])
arr2 = randn(20)
result = func(arr2, min_periods=5)
- self.assertTrue(isnull(result[3]))
- self.assertTrue(notnull(result[4]))
+ assert isnull(result[3])
+ assert notnull(result[4])
# min_periods=0
result0 = func(arr, min_periods=0)
@@ -3052,7 +3053,7 @@ def f():
g = self.frame.groupby('A')
assert not g.mutated
g = self.frame.groupby('A', mutated=True)
- self.assertTrue(g.mutated)
+ assert g.mutated
def test_getitem(self):
g = self.frame.groupby('A')
@@ -3268,11 +3269,11 @@ def test_monotonic_on(self):
freq='s'),
'B': range(5)})
- self.assertTrue(df.A.is_monotonic)
+ assert df.A.is_monotonic
df.rolling('2s', on='A').sum()
df = df.set_index('A')
- self.assertTrue(df.index.is_monotonic)
+ assert df.index.is_monotonic
df.rolling('2s').sum()
# non-monotonic
@@ -3666,11 +3667,11 @@ def test_perf_min(self):
freq='s'))
expected = dfp.rolling(2, min_periods=1).min()
result = dfp.rolling('2s').min()
- self.assertTrue(((result - expected) < 0.01).all().bool())
+ assert ((result - expected) < 0.01).all().bool()
expected = dfp.rolling(200, min_periods=1).min()
result = dfp.rolling('200s').min()
- self.assertTrue(((result - expected) < 0.01).all().bool())
+ assert ((result - expected) < 0.01).all().bool()
def test_ragged_max(self):
diff --git a/pandas/tests/tools/test_numeric.py b/pandas/tests/tools/test_numeric.py
index 290c03af3be4b..45b736102aa3d 100644
--- a/pandas/tests/tools/test_numeric.py
+++ b/pandas/tests/tools/test_numeric.py
@@ -166,7 +166,7 @@ def test_scalar(self):
to_numeric('XX', errors='raise')
self.assertEqual(to_numeric('XX', errors='ignore'), 'XX')
- self.assertTrue(np.isnan(to_numeric('XX', errors='coerce')))
+ assert np.isnan(to_numeric('XX', errors='coerce'))
def test_numeric_dtypes(self):
idx = pd.Index([1, 2, 3], name='xxx')
diff --git a/pandas/tests/tseries/test_frequencies.py b/pandas/tests/tseries/test_frequencies.py
index af544d10a737c..894269aaf451a 100644
--- a/pandas/tests/tseries/test_frequencies.py
+++ b/pandas/tests/tseries/test_frequencies.py
@@ -628,25 +628,29 @@ def _check_generated_range(self, start, freq):
self.assertEqual(frequencies.infer_freq(index), gen.freqstr)
else:
inf_freq = frequencies.infer_freq(index)
- self.assertTrue((inf_freq == 'Q-DEC' and gen.freqstr in (
- 'Q', 'Q-DEC', 'Q-SEP', 'Q-JUN', 'Q-MAR')) or (
- inf_freq == 'Q-NOV' and gen.freqstr in (
- 'Q-NOV', 'Q-AUG', 'Q-MAY', 'Q-FEB')) or (
- inf_freq == 'Q-OCT' and gen.freqstr in (
- 'Q-OCT', 'Q-JUL', 'Q-APR', 'Q-JAN')))
+ is_dec_range = inf_freq == 'Q-DEC' and gen.freqstr in (
+ 'Q', 'Q-DEC', 'Q-SEP', 'Q-JUN', 'Q-MAR')
+ is_nov_range = inf_freq == 'Q-NOV' and gen.freqstr in (
+ 'Q-NOV', 'Q-AUG', 'Q-MAY', 'Q-FEB')
+ is_oct_range = inf_freq == 'Q-OCT' and gen.freqstr in (
+ 'Q-OCT', 'Q-JUL', 'Q-APR', 'Q-JAN')
+ assert is_dec_range or is_nov_range or is_oct_range
gen = date_range(start, periods=5, freq=freq)
index = _dti(gen.values)
+
if not freq.startswith('Q-'):
self.assertEqual(frequencies.infer_freq(index), gen.freqstr)
else:
inf_freq = frequencies.infer_freq(index)
- self.assertTrue((inf_freq == 'Q-DEC' and gen.freqstr in (
- 'Q', 'Q-DEC', 'Q-SEP', 'Q-JUN', 'Q-MAR')) or (
- inf_freq == 'Q-NOV' and gen.freqstr in (
- 'Q-NOV', 'Q-AUG', 'Q-MAY', 'Q-FEB')) or (
- inf_freq == 'Q-OCT' and gen.freqstr in (
- 'Q-OCT', 'Q-JUL', 'Q-APR', 'Q-JAN')))
+ is_dec_range = inf_freq == 'Q-DEC' and gen.freqstr in (
+ 'Q', 'Q-DEC', 'Q-SEP', 'Q-JUN', 'Q-MAR')
+ is_nov_range = inf_freq == 'Q-NOV' and gen.freqstr in (
+ 'Q-NOV', 'Q-AUG', 'Q-MAY', 'Q-FEB')
+ is_oct_range = inf_freq == 'Q-OCT' and gen.freqstr in (
+ 'Q-OCT', 'Q-JUL', 'Q-APR', 'Q-JAN')
+
+ assert is_dec_range or is_nov_range or is_oct_range
def test_infer_freq(self):
rng = period_range('1959Q2', '2009Q3', freq='Q')
diff --git a/pandas/tests/tseries/test_offsets.py b/pandas/tests/tseries/test_offsets.py
index 1332be2567b56..08f17fc358a47 100644
--- a/pandas/tests/tseries/test_offsets.py
+++ b/pandas/tests/tseries/test_offsets.py
@@ -221,11 +221,11 @@ def test_return_type(self):
assert isinstance(result, Timestamp)
# make sure that we are returning NaT
- self.assertTrue(NaT + offset is NaT)
- self.assertTrue(offset + NaT is NaT)
+ assert NaT + offset is NaT
+ assert offset + NaT is NaT
- self.assertTrue(NaT - offset is NaT)
- self.assertTrue((-offset).apply(NaT) is NaT)
+ assert NaT - offset is NaT
+ assert (-offset).apply(NaT) is NaT
def test_offset_n(self):
for offset_klass in self.offset_types:
@@ -255,11 +255,11 @@ def _check_offsetfunc_works(self, offset, funcname, dt, expected,
func = getattr(offset_s, funcname)
result = func(dt)
- self.assertTrue(isinstance(result, Timestamp))
+ assert isinstance(result, Timestamp)
self.assertEqual(result, expected)
result = func(Timestamp(dt))
- self.assertTrue(isinstance(result, Timestamp))
+ assert isinstance(result, Timestamp)
self.assertEqual(result, expected)
# see gh-14101
@@ -275,7 +275,7 @@ def _check_offsetfunc_works(self, offset, funcname, dt, expected,
with tm.assert_produces_warning(exp_warning,
check_stacklevel=False):
result = func(ts)
- self.assertTrue(isinstance(result, Timestamp))
+ assert isinstance(result, Timestamp)
if normalize is False:
self.assertEqual(result, expected + Nano(5))
else:
@@ -294,11 +294,11 @@ def _check_offsetfunc_works(self, offset, funcname, dt, expected,
dt_tz = tslib._localize_pydatetime(dt, tz_obj)
result = func(dt_tz)
- self.assertTrue(isinstance(result, Timestamp))
+ assert isinstance(result, Timestamp)
self.assertEqual(result, expected_localize)
result = func(Timestamp(dt, tz=tz))
- self.assertTrue(isinstance(result, Timestamp))
+ assert isinstance(result, Timestamp)
self.assertEqual(result, expected_localize)
# see gh-14101
@@ -314,7 +314,7 @@ def _check_offsetfunc_works(self, offset, funcname, dt, expected,
with tm.assert_produces_warning(exp_warning,
check_stacklevel=False):
result = func(ts)
- self.assertTrue(isinstance(result, Timestamp))
+ assert isinstance(result, Timestamp)
if normalize is False:
self.assertEqual(result, expected_localize + Nano(5))
else:
@@ -442,7 +442,7 @@ def test_onOffset(self):
for offset in self.offset_types:
dt = self.expecteds[offset.__name__]
offset_s = self._get_offset(offset)
- self.assertTrue(offset_s.onOffset(dt))
+ assert offset_s.onOffset(dt)
# when normalize=True, onOffset checks time is 00:00:00
offset_n = self._get_offset(offset, normalize=True)
@@ -453,7 +453,7 @@ def test_onOffset(self):
# cannot be in business hour range
continue
date = datetime(dt.year, dt.month, dt.day)
- self.assertTrue(offset_n.onOffset(date))
+ assert offset_n.onOffset(date)
def test_add(self):
dt = datetime(2011, 1, 1, 9, 0)
@@ -465,14 +465,14 @@ def test_add(self):
result_dt = dt + offset_s
result_ts = Timestamp(dt) + offset_s
for result in [result_dt, result_ts]:
- self.assertTrue(isinstance(result, Timestamp))
+ assert isinstance(result, Timestamp)
self.assertEqual(result, expected)
tm._skip_if_no_pytz()
for tz in self.timezones:
expected_localize = expected.tz_localize(tz)
result = Timestamp(dt, tz=tz) + offset_s
- self.assertTrue(isinstance(result, Timestamp))
+ assert isinstance(result, Timestamp)
self.assertEqual(result, expected_localize)
# normalize=True
@@ -482,13 +482,13 @@ def test_add(self):
result_dt = dt + offset_s
result_ts = Timestamp(dt) + offset_s
for result in [result_dt, result_ts]:
- self.assertTrue(isinstance(result, Timestamp))
+ assert isinstance(result, Timestamp)
self.assertEqual(result, expected)
for tz in self.timezones:
expected_localize = expected.tz_localize(tz)
result = Timestamp(dt, tz=tz) + offset_s
- self.assertTrue(isinstance(result, Timestamp))
+ assert isinstance(result, Timestamp)
self.assertEqual(result, expected_localize)
def test_pickle_v0_15_2(self):
@@ -2229,7 +2229,7 @@ def test_corner(self):
ValueError, "Day must be", Week, weekday=-1)
def test_isAnchored(self):
- self.assertTrue(Week(weekday=0).isAnchored())
+ assert Week(weekday=0).isAnchored()
assert not Week().isAnchored()
assert not Week(2, weekday=2).isAnchored()
assert not Week(2).isAnchored()
@@ -3041,8 +3041,8 @@ def test_repr(self):
"<BusinessQuarterBegin: startingMonth=1>")
def test_isAnchored(self):
- self.assertTrue(BQuarterBegin(startingMonth=1).isAnchored())
- self.assertTrue(BQuarterBegin().isAnchored())
+ assert BQuarterBegin(startingMonth=1).isAnchored()
+ assert BQuarterBegin().isAnchored()
assert not BQuarterBegin(2, startingMonth=1).isAnchored()
def test_offset(self):
@@ -3135,8 +3135,8 @@ def test_repr(self):
"<BusinessQuarterEnd: startingMonth=1>")
def test_isAnchored(self):
- self.assertTrue(BQuarterEnd(startingMonth=1).isAnchored())
- self.assertTrue(BQuarterEnd().isAnchored())
+ assert BQuarterEnd(startingMonth=1).isAnchored()
+ assert BQuarterEnd().isAnchored()
assert not BQuarterEnd(2, startingMonth=1).isAnchored()
def test_offset(self):
@@ -3506,12 +3506,12 @@ def test_apply(self):
class TestFY5253LastOfMonthQuarter(Base):
def test_isAnchored(self):
- self.assertTrue(
- makeFY5253LastOfMonthQuarter(startingMonth=1, weekday=WeekDay.SAT,
- qtr_with_extra_week=4).isAnchored())
- self.assertTrue(
- makeFY5253LastOfMonthQuarter(weekday=WeekDay.SAT, startingMonth=3,
- qtr_with_extra_week=4).isAnchored())
+ assert makeFY5253LastOfMonthQuarter(
+ startingMonth=1, weekday=WeekDay.SAT,
+ qtr_with_extra_week=4).isAnchored()
+ assert makeFY5253LastOfMonthQuarter(
+ weekday=WeekDay.SAT, startingMonth=3,
+ qtr_with_extra_week=4).isAnchored()
assert not makeFY5253LastOfMonthQuarter(
2, startingMonth=1, weekday=WeekDay.SAT,
qtr_with_extra_week=4).isAnchored()
@@ -3662,18 +3662,14 @@ def test_onOffset(self):
def test_year_has_extra_week(self):
# End of long Q1
- self.assertTrue(
- makeFY5253LastOfMonthQuarter(1, startingMonth=12,
- weekday=WeekDay.SAT,
- qtr_with_extra_week=1)
- .year_has_extra_week(datetime(2011, 4, 2)))
+ assert makeFY5253LastOfMonthQuarter(
+ 1, startingMonth=12, weekday=WeekDay.SAT,
+ qtr_with_extra_week=1).year_has_extra_week(datetime(2011, 4, 2))
# Start of long Q1
- self.assertTrue(
- makeFY5253LastOfMonthQuarter(
- 1, startingMonth=12, weekday=WeekDay.SAT,
- qtr_with_extra_week=1)
- .year_has_extra_week(datetime(2010, 12, 26)))
+ assert makeFY5253LastOfMonthQuarter(
+ 1, startingMonth=12, weekday=WeekDay.SAT,
+ qtr_with_extra_week=1).year_has_extra_week(datetime(2010, 12, 26))
# End of year before year with long Q1
assert not makeFY5253LastOfMonthQuarter(
@@ -3689,23 +3685,17 @@ def test_year_has_extra_week(self):
datetime(year, 4, 2))
# Other long years
- self.assertTrue(
- makeFY5253LastOfMonthQuarter(
- 1, startingMonth=12, weekday=WeekDay.SAT,
- qtr_with_extra_week=1)
- .year_has_extra_week(datetime(2005, 4, 2)))
+ assert makeFY5253LastOfMonthQuarter(
+ 1, startingMonth=12, weekday=WeekDay.SAT,
+ qtr_with_extra_week=1).year_has_extra_week(datetime(2005, 4, 2))
- self.assertTrue(
- makeFY5253LastOfMonthQuarter(
- 1, startingMonth=12, weekday=WeekDay.SAT,
- qtr_with_extra_week=1)
- .year_has_extra_week(datetime(2000, 4, 2)))
+ assert makeFY5253LastOfMonthQuarter(
+ 1, startingMonth=12, weekday=WeekDay.SAT,
+ qtr_with_extra_week=1).year_has_extra_week(datetime(2000, 4, 2))
- self.assertTrue(
- makeFY5253LastOfMonthQuarter(
- 1, startingMonth=12, weekday=WeekDay.SAT,
- qtr_with_extra_week=1)
- .year_has_extra_week(datetime(1994, 4, 2)))
+ assert makeFY5253LastOfMonthQuarter(
+ 1, startingMonth=12, weekday=WeekDay.SAT,
+ qtr_with_extra_week=1).year_has_extra_week(datetime(1994, 4, 2))
def test_get_weeks(self):
sat_dec_1 = makeFY5253LastOfMonthQuarter(1, startingMonth=12,
@@ -3820,8 +3810,8 @@ def test_repr(self):
"<QuarterBegin: startingMonth=1>")
def test_isAnchored(self):
- self.assertTrue(QuarterBegin(startingMonth=1).isAnchored())
- self.assertTrue(QuarterBegin().isAnchored())
+ assert QuarterBegin(startingMonth=1).isAnchored()
+ assert QuarterBegin().isAnchored()
assert not QuarterBegin(2, startingMonth=1).isAnchored()
def test_offset(self):
@@ -3898,8 +3888,8 @@ def test_repr(self):
"<QuarterEnd: startingMonth=1>")
def test_isAnchored(self):
- self.assertTrue(QuarterEnd(startingMonth=1).isAnchored())
- self.assertTrue(QuarterEnd().isAnchored())
+ assert QuarterEnd(startingMonth=1).isAnchored()
+ assert QuarterEnd().isAnchored()
assert not QuarterEnd(2, startingMonth=1).isAnchored()
def test_offset(self):
@@ -4398,7 +4388,7 @@ def test_ticks(self):
for kls, expected in offsets:
offset = kls(3)
result = offset + Timedelta(hours=2)
- self.assertTrue(isinstance(result, Timedelta))
+ assert isinstance(result, Timedelta)
self.assertEqual(result, expected)
def test_Hour(self):
@@ -4532,12 +4522,12 @@ def test_compare_ticks(self):
four = kls(4)
for _ in range(10):
- self.assertTrue(three < kls(4))
- self.assertTrue(kls(3) < four)
- self.assertTrue(four > kls(3))
- self.assertTrue(kls(4) > three)
- self.assertTrue(kls(3) == kls(3))
- self.assertTrue(kls(3) != kls(4))
+ assert three < kls(4)
+ assert kls(3) < four
+ assert four > kls(3)
+ assert kls(4) > three
+ assert kls(3) == kls(3)
+ assert kls(3) != kls(4)
class TestOffsetNames(tm.TestCase):
@@ -4700,7 +4690,7 @@ def test_rule_code(self):
lst = ['M', 'D', 'B', 'H', 'T', 'S', 'L', 'U']
for k in lst:
code, stride = get_freq_code('3' + k)
- self.assertTrue(isinstance(code, int))
+ assert isinstance(code, int)
self.assertEqual(stride, 3)
self.assertEqual(k, _get_freq_str(code))
@@ -4758,11 +4748,11 @@ def run_X_index_creation(self, cls):
assert not inst1._should_cache(), cls
return
- self.assertTrue(inst1._should_cache(), cls)
+ assert inst1._should_cache(), cls
DatetimeIndex(start=datetime(2013, 1, 31), end=datetime(2013, 3, 31),
freq=inst1, normalize=True)
- self.assertTrue(cls() in _daterange_cache, cls)
+ assert cls() in _daterange_cache, cls
def test_should_cache_month_end(self):
assert not MonthEnd()._should_cache()
@@ -4859,34 +4849,34 @@ def _test_offset(self, offset_name, offset_n, tstart, expected_utc_offset):
t = tstart + offset
if expected_utc_offset is not None:
- self.assertTrue(get_utc_offset_hours(t) == expected_utc_offset)
+ assert get_utc_offset_hours(t) == expected_utc_offset
if offset_name == 'weeks':
# dates should match
- self.assertTrue(t.date() == timedelta(days=7 * offset.kwds[
- 'weeks']) + tstart.date())
+ assert t.date() == timedelta(days=7 * offset.kwds[
+ 'weeks']) + tstart.date()
# expect the same day of week, hour of day, minute, second, ...
- self.assertTrue(t.dayofweek == tstart.dayofweek and t.hour ==
- tstart.hour and t.minute == tstart.minute and
- t.second == tstart.second)
+ assert (t.dayofweek == tstart.dayofweek and
+ t.hour == tstart.hour and
+ t.minute == tstart.minute and
+ t.second == tstart.second)
elif offset_name == 'days':
# dates should match
- self.assertTrue(timedelta(offset.kwds['days']) + tstart.date() ==
- t.date())
+ assert timedelta(offset.kwds['days']) + tstart.date() == t.date()
# expect the same hour of day, minute, second, ...
- self.assertTrue(t.hour == tstart.hour and
- t.minute == tstart.minute and
- t.second == tstart.second)
+ assert (t.hour == tstart.hour and
+ t.minute == tstart.minute and
+ t.second == tstart.second)
elif offset_name in self.valid_date_offsets_singular:
# expect the singular offset value to match between tstart and t
datepart_offset = getattr(t, offset_name
if offset_name != 'weekday' else
'dayofweek')
- self.assertTrue(datepart_offset == offset.kwds[offset_name])
+ assert datepart_offset == offset.kwds[offset_name]
else:
# the offset should be the same as if it was done in UTC
- self.assertTrue(t == (tstart.tz_convert('UTC') + offset
- ).tz_convert('US/Pacific'))
+ assert (t == (tstart.tz_convert('UTC') + offset)
+ .tz_convert('US/Pacific'))
def _make_timestamp(self, string, hrs_offset, tz):
if hrs_offset >= 0:
diff --git a/pandas/tests/tseries/test_timezones.py b/pandas/tests/tseries/test_timezones.py
index 65db858a6ccf1..2c3aa03e85904 100644
--- a/pandas/tests/tseries/test_timezones.py
+++ b/pandas/tests/tseries/test_timezones.py
@@ -78,9 +78,9 @@ def test_utc_to_local_no_modify(self):
rng_eastern = rng.tz_convert(self.tzstr('US/Eastern'))
# Values are unmodified
- self.assertTrue(np.array_equal(rng.asi8, rng_eastern.asi8))
+ assert np.array_equal(rng.asi8, rng_eastern.asi8)
- self.assertTrue(self.cmptz(rng_eastern.tz, self.tz('US/Eastern')))
+ assert self.cmptz(rng_eastern.tz, self.tz('US/Eastern'))
def test_utc_to_local_no_modify_explicit(self):
rng = date_range('3/11/2012', '3/12/2012', freq='H', tz='utc')
@@ -116,7 +116,7 @@ def test_localize_utc_conversion_explicit(self):
rng = date_range('3/10/2012', '3/11/2012', freq='30T')
converted = rng.tz_localize(self.tz('US/Eastern'))
expected_naive = rng + offsets.Hour(5)
- self.assertTrue(np.array_equal(converted.asi8, expected_naive.asi8))
+ assert np.array_equal(converted.asi8, expected_naive.asi8)
# DST ambiguity, this should fail
rng = date_range('3/11/2012', '3/12/2012', freq='30T')
@@ -269,10 +269,10 @@ def test_tz_localize_empty_series(self):
ts = Series()
ts2 = ts.tz_localize('utc')
- self.assertTrue(ts2.index.tz == pytz.utc)
+ assert ts2.index.tz == pytz.utc
ts2 = ts.tz_localize(self.tzstr('US/Eastern'))
- self.assertTrue(self.cmptz(ts2.index.tz, self.tz('US/Eastern')))
+ assert self.cmptz(ts2.index.tz, self.tz('US/Eastern'))
def test_astimezone(self):
utc = Timestamp('3/11/2012 22:00', tz='UTC')
@@ -309,7 +309,7 @@ def test_create_with_fixed_tz(self):
rng3 = date_range('3/11/2012 05:00:00+07:00',
'6/11/2012 05:00:00+07:00')
- self.assertTrue((rng.values == rng3.values).all())
+ assert (rng.values == rng3.values).all()
def test_create_with_fixedoffset_noname(self):
off = fixed_off_no_name
@@ -373,8 +373,8 @@ def test_utc_box_timestamp_and_localize(self):
rng_eastern = rng.tz_convert(self.tzstr('US/Eastern'))
# test not valid for dateutil timezones.
# assert 'EDT' in repr(rng_eastern[0].tzinfo)
- self.assertTrue('EDT' in repr(rng_eastern[0].tzinfo) or 'tzfile' in
- repr(rng_eastern[0].tzinfo))
+ assert ('EDT' in repr(rng_eastern[0].tzinfo) or
+ 'tzfile' in repr(rng_eastern[0].tzinfo))
def test_timestamp_tz_convert(self):
strdates = ['1/1/2012', '3/1/2012', '4/1/2012']
@@ -399,7 +399,7 @@ def test_pass_dates_localize_to_utc(self):
def test_field_access_localize(self):
strdates = ['1/1/2012', '3/1/2012', '4/1/2012']
rng = DatetimeIndex(strdates, tz=self.tzstr('US/Eastern'))
- self.assertTrue((rng.hour == 0).all())
+ assert (rng.hour == 0).all()
# a more unusual time zone, #1946
dr = date_range('2011-10-02 00:00', freq='h', periods=10,
@@ -715,14 +715,14 @@ def test_localized_at_time_between_time(self):
expected = ts.at_time(time(10, 0)).tz_localize(self.tzstr(
'US/Eastern'))
assert_series_equal(result, expected)
- self.assertTrue(self.cmptz(result.index.tz, self.tz('US/Eastern')))
+ assert self.cmptz(result.index.tz, self.tz('US/Eastern'))
t1, t2 = time(10, 0), time(11, 0)
result = ts_local.between_time(t1, t2)
expected = ts.between_time(t1,
t2).tz_localize(self.tzstr('US/Eastern'))
assert_series_equal(result, expected)
- self.assertTrue(self.cmptz(result.index.tz, self.tz('US/Eastern')))
+ assert self.cmptz(result.index.tz, self.tz('US/Eastern'))
def test_string_index_alias_tz_aware(self):
rng = date_range('1/1/2000', periods=10, tz=self.tzstr('US/Eastern'))
@@ -757,7 +757,7 @@ def test_convert_tz_aware_datetime_datetime(self):
dates_aware = [self.localize(tz, x) for x in dates]
result = to_datetime(dates_aware)
- self.assertTrue(self.cmptz(result.tz, self.tz('US/Eastern')))
+ assert self.cmptz(result.tz, self.tz('US/Eastern'))
converted = to_datetime(dates_aware, utc=True)
ex_vals = np.array([Timestamp(x).value for x in dates_aware])
@@ -851,7 +851,7 @@ def test_tzaware_datetime_to_index(self):
d = [datetime(2012, 8, 19, tzinfo=self.tz('US/Eastern'))]
index = DatetimeIndex(d)
- self.assertTrue(self.cmptz(index.tz, self.tz('US/Eastern')))
+ assert self.cmptz(index.tz, self.tz('US/Eastern'))
def test_date_range_span_dst_transition(self):
# #1778
@@ -860,10 +860,10 @@ def test_date_range_span_dst_transition(self):
dr = date_range('03/06/2012 00:00', periods=200, freq='W-FRI',
tz='US/Eastern')
- self.assertTrue((dr.hour == 0).all())
+ assert (dr.hour == 0).all()
dr = date_range('2012-11-02', periods=10, tz=self.tzstr('US/Eastern'))
- self.assertTrue((dr.hour == 0).all())
+ assert (dr.hour == 0).all()
def test_convert_datetime_list(self):
dr = date_range('2012-06-02', periods=10,
@@ -916,7 +916,7 @@ def test_index_drop_dont_lose_tz(self):
ind = date_range("2012-12-01", periods=10, tz="utc")
ind = ind.drop(ind[-1])
- self.assertTrue(ind.tz is not None)
+ assert ind.tz is not None
def test_datetimeindex_tz(self):
""" Test different DatetimeIndex constructions with timezone
@@ -938,8 +938,8 @@ def test_datetimeindex_tz_nat(self):
idx = to_datetime([Timestamp("2013-1-1", tz=self.tzstr('US/Eastern')),
NaT])
- self.assertTrue(isnull(idx[1]))
- self.assertTrue(idx[0].tzinfo is not None)
+ assert isnull(idx[1])
+ assert idx[0].tzinfo is not None
class TestTimeZoneSupportDateutil(TestTimeZoneSupportPytz):
@@ -1141,7 +1141,7 @@ def test_tzlocal(self):
# GH 13583
ts = Timestamp('2011-01-01', tz=dateutil.tz.tzlocal())
self.assertEqual(ts.tz, dateutil.tz.tzlocal())
- self.assertTrue("tz='tzlocal()')" in repr(ts))
+ assert "tz='tzlocal()')" in repr(ts)
tz = tslib.maybe_get_tz('tzlocal()')
self.assertEqual(tz, dateutil.tz.tzlocal())
@@ -1311,7 +1311,7 @@ def test_tz_localize_roundtrip(self):
reset = localized.tz_localize(None)
tm.assert_index_equal(reset, idx)
- self.assertTrue(reset.tzinfo is None)
+ assert reset.tzinfo is None
def test_series_frame_tz_localize(self):
@@ -1385,7 +1385,7 @@ def test_tz_convert_roundtrip(self):
converted = idx.tz_convert(tz)
reset = converted.tz_convert(None)
tm.assert_index_equal(reset, expected)
- self.assertTrue(reset.tzinfo is None)
+ assert reset.tzinfo is None
tm.assert_index_equal(reset, converted.tz_convert(
'UTC').tz_localize(None))
@@ -1425,7 +1425,7 @@ def test_join_aware(self):
ex_index = test1.index.union(test2.index)
tm.assert_index_equal(result.index, ex_index)
- self.assertTrue(result.index.tz.zone == 'US/Central')
+ assert result.index.tz.zone == 'US/Central'
# non-overlapping
rng = date_range("2012-11-15 00:00:00", periods=6, freq="H",
@@ -1435,7 +1435,7 @@ def test_join_aware(self):
tz="US/Eastern")
result = rng.union(rng2)
- self.assertTrue(result.tz.zone == 'UTC')
+ assert result.tz.zone == 'UTC'
def test_align_aware(self):
idx1 = date_range('2001', periods=5, freq='H', tz='US/Eastern')
@@ -1535,8 +1535,8 @@ def test_append_aware_naive(self):
ts2 = Series(np.random.randn(len(rng2)), index=rng2)
ts_result = ts1.append(ts2)
- self.assertTrue(ts_result.index.equals(ts1.index.asobject.append(
- ts2.index.asobject)))
+ assert ts_result.index.equals(ts1.index.asobject.append(
+ ts2.index.asobject))
# mixed
rng1 = date_range('1/1/2011 01:00', periods=1, freq='H')
@@ -1544,8 +1544,8 @@ def test_append_aware_naive(self):
ts1 = Series(np.random.randn(len(rng1)), index=rng1)
ts2 = Series(np.random.randn(len(rng2)), index=rng2)
ts_result = ts1.append(ts2)
- self.assertTrue(ts_result.index.equals(ts1.index.asobject.append(
- ts2.index)))
+ assert ts_result.index.equals(ts1.index.asobject.append(
+ ts2.index))
def test_equal_join_ensure_utc(self):
rng = date_range('1/1/2011', periods=10, freq='H', tz='US/Eastern')
@@ -1607,9 +1607,9 @@ def test_timestamp_equality_different_timezones(self):
self.assertEqual(b, c)
self.assertEqual(a, c)
- self.assertTrue((utc_range == eastern_range).all())
- self.assertTrue((utc_range == berlin_range).all())
- self.assertTrue((berlin_range == eastern_range).all())
+ assert (utc_range == eastern_range).all()
+ assert (utc_range == berlin_range).all()
+ assert (berlin_range == eastern_range).all()
def test_datetimeindex_tz(self):
rng = date_range('03/12/2012 00:00', periods=10, freq='W-FRI',
@@ -1626,7 +1626,7 @@ def test_normalize_tz(self):
tz='US/Eastern')
tm.assert_index_equal(result, expected)
- self.assertTrue(result.is_normalized)
+ assert result.is_normalized
assert not rng.is_normalized
rng = date_range('1/1/2000 9:30', periods=10, freq='D', tz='UTC')
@@ -1635,7 +1635,7 @@ def test_normalize_tz(self):
expected = date_range('1/1/2000', periods=10, freq='D', tz='UTC')
tm.assert_index_equal(result, expected)
- self.assertTrue(result.is_normalized)
+ assert result.is_normalized
assert not rng.is_normalized
from dateutil.tz import tzlocal
@@ -1644,7 +1644,7 @@ def test_normalize_tz(self):
expected = date_range('1/1/2000', periods=10, freq='D', tz=tzlocal())
tm.assert_index_equal(result, expected)
- self.assertTrue(result.is_normalized)
+ assert result.is_normalized
assert not rng.is_normalized
def test_normalize_tz_local(self):
@@ -1664,7 +1664,7 @@ def test_normalize_tz_local(self):
tz=tzlocal())
tm.assert_index_equal(result, expected)
- self.assertTrue(result.is_normalized)
+ assert result.is_normalized
assert not rng.is_normalized
def test_tzaware_offset(self):
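The change applied throughout the hunks above is mechanical: unittest-style `self.assertTrue(expr)` becomes a bare `assert expr`, which pytest's assertion rewriting can introspect to show the failing intermediate values. A minimal illustration with a hypothetical test (not from the pandas suite):

```python
# Hypothetical example of the conversion pattern in this diff:
# unittest's self.assertTrue(expr) becomes a bare `assert expr`,
# which pytest rewrites so failures display the evaluated values.
def check_all_hours_zero(hours):
    # before: self.assertTrue((rng.hour == 0).all())
    # after:  assert (rng.hour == 0).all()
    assert all(h == 0 for h in hours)
    return True
```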
| Title is self-explanatory.
Partially addresses #15990. | https://api.github.com/repos/pandas-dev/pandas/pulls/16158 | 2017-04-27T08:12:39Z | 2017-04-27T12:17:48Z | 2017-04-27T12:17:48Z | 2017-04-29T09:12:35Z |
DEPR: allow options for using bottleneck/numexpr | diff --git a/doc/source/basics.rst b/doc/source/basics.rst
index 5789f39266927..7a056203ed447 100644
--- a/doc/source/basics.rst
+++ b/doc/source/basics.rst
@@ -93,7 +93,7 @@ Accelerated operations
----------------------
pandas has support for accelerating certain types of binary numerical and boolean operations using
-the ``numexpr`` library (starting in 0.11.0) and the ``bottleneck`` libraries.
+the ``numexpr`` library and the ``bottleneck`` libraries.
These libraries are especially useful when dealing with large data sets, and provide large
speedups. ``numexpr`` uses smart chunking, caching, and multiple cores. ``bottleneck`` is
@@ -114,6 +114,15 @@ Here is a sample (using 100 column x 100,000 row ``DataFrames``):
You are highly encouraged to install both libraries. See the section
:ref:`Recommended Dependencies <install.recommended_dependencies>` for more installation info.
+Both are enabled for use by default; you can control this by setting the options:
+
+.. versionadded:: 0.20.0
+
+.. code-block:: python
+
+ pd.set_option('compute.use_bottleneck', False)
+ pd.set_option('compute.use_numexpr', False)
+
.. _basics.binop:
Flexible binary operations
diff --git a/doc/source/options.rst b/doc/source/options.rst
index 1b219f640cc87..5f6bf2fbb9662 100644
--- a/doc/source/options.rst
+++ b/doc/source/options.rst
@@ -425,6 +425,10 @@ mode.use_inf_as_null False True means treat None, NaN, -IN
INF as null (old way), False means
None and NaN are null, but INF, -INF
are not null (new way).
+compute.use_bottleneck True Use the bottleneck library to accelerate
+ computation if it is installed
+compute.use_numexpr True Use the numexpr library to accelerate
+ computation if it is installed
=================================== ============ ==================================
@@ -538,4 +542,4 @@ Only ``'display.max_rows'`` are serialized and published.
.. ipython:: python
:suppress:
- pd.reset_option('display.html.table_schema')
\ No newline at end of file
+ pd.reset_option('display.html.table_schema')
diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
index 025ac7673622b..efa3af79aee2d 100644
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -521,6 +521,7 @@ Other Enhancements
- The ``display.show_dimensions`` option can now also be used to specify
whether the length of a ``Series`` should be shown in its repr (:issue:`7117`).
- ``parallel_coordinates()`` has gained a ``sort_labels`` keyword arg that sorts class labels and the colours assigned to them (:issue:`15908`)
+- Options added to allow one to turn on/off using ``bottleneck`` and ``numexpr``, see :ref:`here <basics.accelerate>` (:issue:`16157`)
.. _ISO 8601 duration: https://en.wikipedia.org/wiki/ISO_8601#Durations
@@ -1217,7 +1218,7 @@ If indicated, a deprecation warning will be issued if you reference theses modul
"pandas.lib", "pandas._libs.lib", "X"
"pandas.tslib", "pandas._libs.tslib", "X"
- "pandas.computation", "pandas.core.computation", ""
+ "pandas.computation", "pandas.core.computation", "X"
"pandas.msgpack", "pandas.io.msgpack", ""
"pandas.index", "pandas._libs.index", ""
"pandas.algos", "pandas._libs.algos", ""
diff --git a/pandas/computation/__init__.py b/pandas/computation/__init__.py
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/pandas/computation/expressions.py b/pandas/computation/expressions.py
new file mode 100644
index 0000000000000..f46487cfa1b79
--- /dev/null
+++ b/pandas/computation/expressions.py
@@ -0,0 +1,11 @@
+import warnings
+
+
+def set_use_numexpr(v=True):
+ warnings.warn("pandas.computation.expressions.set_use_numexpr is "
+ "deprecated and will be removed in a future version.\n"
+ "you can toggle usage of numexpr via "
+ "pandas.get_option('compute.use_numexpr')",
+ FutureWarning, stacklevel=2)
+ from pandas import set_option
+ set_option('compute.use_numexpr', v)
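The shim above follows a standard deprecation pattern: a module is kept at the old import path whose only job is to emit a `FutureWarning` and delegate to the new location. A generic stdlib-only sketch of the same pattern (names here are illustrative, not pandas API):

```python
import warnings


def _set_flag_new(value):
    # stand-in for the relocated implementation
    return value


def set_flag(value):
    # shim left at the old import location: warn, then delegate
    warnings.warn("set_flag is deprecated and will be removed in a "
                  "future version; use _set_flag_new",
                  FutureWarning, stacklevel=2)
    return _set_flag_new(value)
```

Callers importing from the old path keep working but see the warning, giving them a release cycle to migrate.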
diff --git a/pandas/core/computation/expressions.py b/pandas/core/computation/expressions.py
index 4eeefb183001e..83d02af65cc85 100644
--- a/pandas/core/computation/expressions.py
+++ b/pandas/core/computation/expressions.py
@@ -10,6 +10,7 @@
import numpy as np
from pandas.core.common import _values_from_object
from pandas.core.computation import _NUMEXPR_INSTALLED
+from pandas.core.config import get_option
if _NUMEXPR_INSTALLED:
import numexpr as ne
@@ -156,7 +157,7 @@ def _where_numexpr(cond, a, b, raise_on_error=False):
# turn myself on
-set_use_numexpr(True)
+set_use_numexpr(get_option('compute.use_numexpr'))
def _has_bool_dtype(x):
diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py
index f8cbdffa27bb4..70ebb170cb763 100644
--- a/pandas/core/config_init.py
+++ b/pandas/core/config_init.py
@@ -15,8 +15,41 @@
from pandas.core.config import (is_int, is_bool, is_text, is_instance_factory,
is_one_of_factory, get_default_val,
is_callable)
-from pandas.io.formats.format import detect_console_encoding
+from pandas.io.formats.console import detect_console_encoding
+# compute
+
+use_bottleneck_doc = """
+: bool
+ Use the bottleneck library to accelerate computation if it is installed,
+ the default is True
+ Valid values: False, True
+"""
+
+
+def use_bottleneck_cb(key):
+ from pandas.core import nanops
+ nanops.set_use_bottleneck(cf.get_option(key))
+
+
+use_numexpr_doc = """
+: bool
+ Use the numexpr library to accelerate computation if it is installed,
+ the default is True
+ Valid values: False, True
+"""
+
+
+def use_numexpr_cb(key):
+ from pandas.core.computation import expressions
+ expressions.set_use_numexpr(cf.get_option(key))
+
+
+with cf.config_prefix('compute'):
+ cf.register_option('use_bottleneck', True, use_bottleneck_doc,
+ validator=is_bool, cb=use_bottleneck_cb)
+ cf.register_option('use_numexpr', True, use_numexpr_doc,
+ validator=is_bool, cb=use_numexpr_cb)
#
# options from the "display" namespace
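`register_option` ties together a default value, a validator, and a change callback for each key; the callback is what lets setting `compute.use_bottleneck` flip state in `nanops`. A self-contained sketch of that mechanism (not the actual `pandas.core.config` implementation):

```python
# Minimal option registry with validator + change-callback, modeled
# on the register_option(..., validator=..., cb=...) calls above.
_options = {}     # key -> current value
_validators = {}  # key -> callable returning bool
_callbacks = {}   # key -> callable fired after a change


def register_option(key, default, validator=None, cb=None):
    _options[key] = default
    if validator is not None:
        _validators[key] = validator
    if cb is not None:
        _callbacks[key] = cb


def get_option(key):
    return _options[key]


def set_option(key, value):
    validator = _validators.get(key)
    if validator is not None and not validator(value):
        raise ValueError("invalid value for %s: %r" % (key, value))
    _options[key] = value
    cb = _callbacks.get(key)
    if cb is not None:
        cb(key)  # e.g. toggles a module-level flag elsewhere
```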
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 983a6ef3e045a..06bd8f8fc51bc 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -91,6 +91,7 @@
import pandas.core.nanops as nanops
import pandas.core.ops as ops
import pandas.io.formats.format as fmt
+import pandas.io.formats.console as console
from pandas.io.formats.printing import pprint_thing
import pandas.plotting._core as gfx
@@ -513,7 +514,7 @@ def _repr_fits_horizontal_(self, ignore_width=False):
GH3541, GH3573
"""
- width, height = fmt.get_console_size()
+ width, height = console.get_console_size()
max_columns = get_option("display.max_columns")
nb_columns = len(self.columns)
@@ -577,7 +578,7 @@ def __unicode__(self):
max_cols = get_option("display.max_columns")
show_dimensions = get_option("display.show_dimensions")
if get_option("display.expand_frame_repr"):
- width, _ = fmt.get_console_size()
+ width, _ = console.get_console_size()
else:
width = None
self.to_string(buf=buf, max_rows=max_rows, max_cols=max_cols,
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 04458d684d795..4345c74664bf5 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -837,7 +837,8 @@ def _format_data(self):
"""
Return the formatted data as a unicode string
"""
- from pandas.io.formats.format import get_console_size, _get_adjustment
+ from pandas.io.formats.console import get_console_size
+ from pandas.io.formats.format import _get_adjustment
display_width, _ = get_console_size()
if display_width is None:
display_width = get_option('display.width') or 80
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index e9be43b184537..1d64f87b15761 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -1,14 +1,8 @@
import itertools
import functools
-import numpy as np
import operator
-try:
- import bottleneck as bn
- _USE_BOTTLENECK = True
-except ImportError: # pragma: no cover
- _USE_BOTTLENECK = False
-
+import numpy as np
from pandas import compat
from pandas._libs import tslib, algos, lib
from pandas.core.dtypes.common import (
@@ -23,9 +17,27 @@
is_int_or_datetime_dtype, is_any_int_dtype)
from pandas.core.dtypes.cast import _int64_max, maybe_upcast_putmask
from pandas.core.dtypes.missing import isnull, notnull
-
+from pandas.core.config import get_option
from pandas.core.common import _values_from_object
+try:
+ import bottleneck as bn
+ _BOTTLENECK_INSTALLED = True
+except ImportError: # pragma: no cover
+ _BOTTLENECK_INSTALLED = False
+
+_USE_BOTTLENECK = False
+
+
+def set_use_bottleneck(v=True):
+ # set/unset to use bottleneck
+ global _USE_BOTTLENECK
+ if _BOTTLENECK_INSTALLED:
+ _USE_BOTTLENECK = v
+
+
+set_use_bottleneck(get_option('compute.use_bottleneck'))
+
class disallow(object):
diff --git a/pandas/io/formats/console.py b/pandas/io/formats/console.py
new file mode 100644
index 0000000000000..0e46b0073a53d
--- /dev/null
+++ b/pandas/io/formats/console.py
@@ -0,0 +1,84 @@
+"""
+Internal module for console introspection
+"""
+
+import sys
+import locale
+from pandas.util.terminal import get_terminal_size
+
+# -----------------------------------------------------------------------------
+# Global formatting options
+_initial_defencoding = None
+
+
+def detect_console_encoding():
+ """
+ Try to find the most capable encoding supported by the console.
+ slightly modified from the way IPython handles the same issue.
+ """
+ global _initial_defencoding
+
+ encoding = None
+ try:
+ encoding = sys.stdout.encoding or sys.stdin.encoding
+ except AttributeError:
+ pass
+
+ # try again for something better
+ if not encoding or 'ascii' in encoding.lower():
+ try:
+ encoding = locale.getpreferredencoding()
+ except Exception:
+ pass
+
+ # when all else fails. this will usually be "ascii"
+ if not encoding or 'ascii' in encoding.lower():
+ encoding = sys.getdefaultencoding()
+
+ # GH3360, save the reported defencoding at import time
+ # MPL backends may change it. Make available for debugging.
+ if not _initial_defencoding:
+ _initial_defencoding = sys.getdefaultencoding()
+
+ return encoding
+
+
+def get_console_size():
+ """Return console size as tuple = (width, height).
+
+ Returns (None,None) in non-interactive session.
+ """
+ from pandas import get_option
+ from pandas.core import common as com
+
+ display_width = get_option('display.width')
+ # deprecated.
+ display_height = get_option('display.height', silent=True)
+
+ # Consider
+ # interactive shell terminal, can detect term size
+ # interactive non-shell terminal (ipnb/ipqtconsole), cannot detect term
+ # size non-interactive script, should disregard term size
+
+ # in addition
+ # width,height have default values, but setting to 'None' signals
+ # should use Auto-Detection, But only in interactive shell-terminal.
+ # Simple. yeah.
+
+ if com.in_interactive_session():
+ if com.in_ipython_frontend():
+ # sane defaults for interactive non-shell terminal
+ # match default for width,height in config_init
+ from pandas.core.config import get_default_val
+ terminal_width = get_default_val('display.width')
+ terminal_height = get_default_val('display.height')
+ else:
+ # pure terminal
+ terminal_width, terminal_height = get_terminal_size()
+ else:
+ terminal_width, terminal_height = None, None
+
+ # Note if the User sets width/Height to None (auto-detection)
+ # and we're in a script (non-inter), this will return (None,None)
+ # caller needs to deal.
+ return (display_width or terminal_width, display_height or terminal_height)
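`detect_console_encoding` above tries progressively weaker sources until one yields a usable (non-ASCII-only) answer. The same fallback chain as a standalone function:

```python
import locale
import sys


def detect_encoding():
    # same fallback chain as the console helper above:
    # stdout/stdin -> locale preferred encoding -> interpreter default
    enc = None
    try:
        enc = sys.stdout.encoding or sys.stdin.encoding
    except AttributeError:
        # stdout may be replaced by an object without .encoding
        pass

    if not enc or 'ascii' in enc.lower():
        try:
            enc = locale.getpreferredencoding()
        except Exception:
            pass

    if not enc or 'ascii' in enc.lower():
        enc = sys.getdefaultencoding()

    return enc
```

The `getdefaultencoding()` fallback guarantees a string is always returned, even in captured-output or non-interactive sessions.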
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 1a9b3526a7503..43b0b5fbeee90 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -8,7 +8,6 @@
from distutils.version import LooseVersion
# pylint: disable=W0141
-import sys
from textwrap import dedent
from pandas.core.dtypes.missing import isnull, notnull
@@ -2290,82 +2289,6 @@ def _has_names(index):
return index.name is not None
-# -----------------------------------------------------------------------------
-# Global formatting options
-_initial_defencoding = None
-
-
-def detect_console_encoding():
- """
- Try to find the most capable encoding supported by the console.
- slighly modified from the way IPython handles the same issue.
- """
- import locale
- global _initial_defencoding
-
- encoding = None
- try:
- encoding = sys.stdout.encoding or sys.stdin.encoding
- except AttributeError:
- pass
-
- # try again for something better
- if not encoding or 'ascii' in encoding.lower():
- try:
- encoding = locale.getpreferredencoding()
- except Exception:
- pass
-
- # when all else fails. this will usually be "ascii"
- if not encoding or 'ascii' in encoding.lower():
- encoding = sys.getdefaultencoding()
-
- # GH3360, save the reported defencoding at import time
- # MPL backends may change it. Make available for debugging.
- if not _initial_defencoding:
- _initial_defencoding = sys.getdefaultencoding()
-
- return encoding
-
-
-def get_console_size():
- """Return console size as tuple = (width, height).
-
- Returns (None,None) in non-interactive session.
- """
- display_width = get_option('display.width')
- # deprecated.
- display_height = get_option('display.height', silent=True)
-
- # Consider
- # interactive shell terminal, can detect term size
- # interactive non-shell terminal (ipnb/ipqtconsole), cannot detect term
- # size non-interactive script, should disregard term size
-
- # in addition
- # width,height have default values, but setting to 'None' signals
- # should use Auto-Detection, But only in interactive shell-terminal.
- # Simple. yeah.
-
- if com.in_interactive_session():
- if com.in_ipython_frontend():
- # sane defaults for interactive non-shell terminal
- # match default for width,height in config_init
- from pandas.core.config import get_default_val
- terminal_width = get_default_val('display.width')
- terminal_height = get_default_val('display.height')
- else:
- # pure terminal
- terminal_width, terminal_height = get_terminal_size()
- else:
- terminal_width, terminal_height = None, None
-
- # Note if the User sets width/Height to None (auto-detection)
- # and we're in a script (non-inter), this will return (None,None)
- # caller needs to deal.
- return (display_width or terminal_width, display_height or terminal_height)
-
-
class EngFormatter(object):
"""
Formats float values according to engineering format.
diff --git a/pandas/tests/api/test_api.py b/pandas/tests/api/test_api.py
index 026a36fd9f4f9..4678db4a52c5a 100644
--- a/pandas/tests/api/test_api.py
+++ b/pandas/tests/api/test_api.py
@@ -217,3 +217,17 @@ class TestTSLib(tm.TestCase):
def test_deprecation_access_func(self):
with catch_warnings(record=True):
pd.tslib.Timestamp('20160101')
+
+
+class TestTypes(tm.TestCase):
+
+ def test_deprecation_access_func(self):
+ with tm.assert_produces_warning(
+ FutureWarning, check_stacklevel=False):
+ from pandas.types.concat import union_categoricals
+ c1 = pd.Categorical(list('aabc'))
+ c2 = pd.Categorical(list('abcd'))
+ union_categoricals(
+ [c1, c2],
+ sort_categories=True,
+ ignore_order=True)
diff --git a/pandas/tests/test_nanops.py b/pandas/tests/test_nanops.py
index a108749db8e6a..212291608479f 100644
--- a/pandas/tests/test_nanops.py
+++ b/pandas/tests/test_nanops.py
@@ -4,9 +4,10 @@
from functools import partial
import pytest
-
import warnings
import numpy as np
+
+import pandas as pd
from pandas import Series, isnull, _np_version_under1p9
from pandas.core.dtypes.common import is_integer_dtype
import pandas.core.nanops as nanops
@@ -1003,3 +1004,16 @@ def test_nans_skipna(self):
@property
def prng(self):
return np.random.RandomState(1234)
+
+
+def test_use_bottleneck():
+
+ if nanops._BOTTLENECK_INSTALLED:
+
+ # save the current setting so it can be restored afterwards
+ use_bn = pd.get_option('use_bottleneck')
+
+ pd.set_option('use_bottleneck', True)
+ assert pd.get_option('use_bottleneck')
+
+ pd.set_option('use_bottleneck', False)
+ assert not pd.get_option('use_bottleneck')
+
+ pd.set_option('use_bottleneck', use_bn)
diff --git a/pandas/types/__init__.py b/pandas/types/__init__.py
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/pandas/types/concat.py b/pandas/types/concat.py
new file mode 100644
index 0000000000000..477156b38d56d
--- /dev/null
+++ b/pandas/types/concat.py
@@ -0,0 +1,11 @@
+import warnings
+
+
+def union_categoricals(to_union, sort_categories=False, ignore_order=False):
+ warnings.warn("pandas.types.concat.union_categoricals is "
+ "deprecated and will be removed in a future version.\n"
+ "use pandas.api.types.union_categoricals",
+ FutureWarning, stacklevel=2)
+ from pandas.api.types import union_categoricals
+ return union_categoricals(
+ to_union, sort_categories=sort_categories, ignore_order=ignore_order)
diff --git a/setup.py b/setup.py
index 5647e18aa227c..6f3ddbe2ad9d0 100755
--- a/setup.py
+++ b/setup.py
@@ -645,6 +645,7 @@ def pxd(name):
'pandas.core.reshape',
'pandas.core.sparse',
'pandas.core.tools',
+ 'pandas.computation',
'pandas.errors',
'pandas.io',
'pandas.io.json',
@@ -654,6 +655,7 @@ def pxd(name):
'pandas._libs',
'pandas.plotting',
'pandas.stats',
+ 'pandas.types',
'pandas.util',
'pandas.tests',
'pandas.tests.api',
| deprecate pd.computation.expressions.set_use_numexpr()
supersedes #16140
| https://api.github.com/repos/pandas-dev/pandas/pulls/16157 | 2017-04-27T00:44:16Z | 2017-04-27T21:28:35Z | 2017-04-27T21:28:35Z | 2017-04-27T21:29:56Z |
BUG: Fix some PeriodIndex resampling issues | diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index 4a3122a78b234..eafe8d08aafaa 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -171,6 +171,82 @@ Other Enhancements
Backwards incompatible API changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+.. _whatsnew_0210.api_breaking.period_index_resampling:
+
+``PeriodIndex`` resampling
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In previous versions of pandas, resampling a ``Series``/``DataFrame`` indexed by a ``PeriodIndex`` returned a ``DatetimeIndex`` in some cases (:issue:`12884`). Resampling to a multiplied frequency now returns a ``PeriodIndex`` (:issue:`15944`). As a minor enhancement, resampling a ``PeriodIndex`` can now handle ``NaT`` values (:issue:`13224`)
+
+Previous Behavior:
+
+.. code-block:: ipython
+
+ In [1]: pi = pd.period_range('2017-01', periods=12, freq='M')
+
+ In [2]: s = pd.Series(np.arange(12), index=pi)
+
+ In [3]: resampled = s.resample('2Q').mean()
+
+ In [4]: resampled
+ Out[4]:
+ 2017-03-31 1.0
+ 2017-09-30 5.5
+ 2018-03-31 10.0
+ Freq: 2Q-DEC, dtype: float64
+
+ In [5]: resampled.index
+ Out[5]: DatetimeIndex(['2017-03-31', '2017-09-30', '2018-03-31'], dtype='datetime64[ns]', freq='2Q-DEC')
+
+New Behavior:
+
+.. ipython:: python
+
+ pi = pd.period_range('2017-01', periods=12, freq='M')
+
+ s = pd.Series(np.arange(12), index=pi)
+
+ resampled = s.resample('2Q').mean()
+
+ resampled
+
+ resampled.index
+
+
+Upsampling and calling ``.ohlc()`` previously returned a ``Series``, basically identical to calling ``.asfreq()``. OHLC upsampling now returns a DataFrame with columns ``open``, ``high``, ``low`` and ``close`` (:issue:`13083`). This is consistent with downsampling and ``DatetimeIndex`` behavior.
+
+Previous Behavior:
+
+.. code-block:: ipython
+
+ In [1]: pi = pd.PeriodIndex(start='2000-01-01', freq='D', periods=10)
+
+ In [2]: s = pd.Series(np.arange(10), index=pi)
+
+ In [3]: s.resample('H').ohlc()
+ Out[3]:
+ 2000-01-01 00:00 0.0
+ ...
+ 2000-01-10 23:00 NaN
+ Freq: H, Length: 240, dtype: float64
+
+ In [4]: s.resample('M').ohlc()
+ Out[4]:
+ open high low close
+ 2000-01 0 9 0 9
+
+New Behavior:
+
+.. ipython:: python
+
+ pi = pd.PeriodIndex(start='2000-01-01', freq='D', periods=10)
+
+ s = pd.Series(np.arange(10), index=pi)
+
+ s.resample('H').ohlc()
+
+ s.resample('M').ohlc()
+
.. _whatsnew_0210.api_breaking.deps:
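The whatsnew entry above notes that `.ohlc()` now reduces each resample bin to four values even when upsampling. Per bin, the reduction itself is just first/max/min/last; a plain-Python sketch of what each bin produces:

```python
def ohlc(values):
    # the per-bin reduction behind .ohlc(): open = first value,
    # high = max, low = min, close = last value
    return {'open': values[0], 'high': max(values),
            'low': min(values), 'close': values[-1]}
```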
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index 01c7e875b8ecc..083fbcaaabe46 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -14,7 +14,7 @@
from pandas.core.indexes.datetimes import DatetimeIndex, date_range
from pandas.core.indexes.timedeltas import TimedeltaIndex
from pandas.tseries.offsets import DateOffset, Tick, Day, _delta_to_nanoseconds
-from pandas.core.indexes.period import PeriodIndex, period_range
+from pandas.core.indexes.period import PeriodIndex
import pandas.core.common as com
import pandas.core.algorithms as algos
from pandas.core.dtypes.generic import ABCDataFrame, ABCSeries
@@ -834,53 +834,32 @@ class PeriodIndexResampler(DatetimeIndexResampler):
def _resampler_for_grouping(self):
return PeriodIndexResamplerGroupby
+ def _get_binner_for_time(self):
+ if self.kind == 'timestamp':
+ return super(PeriodIndexResampler, self)._get_binner_for_time()
+ return self.groupby._get_period_bins(self.ax)
+
def _convert_obj(self, obj):
obj = super(PeriodIndexResampler, self)._convert_obj(obj)
- offset = to_offset(self.freq)
- if offset.n > 1:
- if self.kind == 'period': # pragma: no cover
- print('Warning: multiple of frequency -> timestamps')
-
- # Cannot have multiple of periods, convert to timestamp
+ if self._from_selection:
+ # see GH 14008, GH 12871
+ msg = ("Resampling from level= or on= selection"
+ " with a PeriodIndex is not currently supported,"
+ " use .set_index(...) to explicitly set index")
+ raise NotImplementedError(msg)
+
+ if self.loffset is not None:
+ # Cannot apply loffset/timedelta to PeriodIndex -> convert to
+ # timestamps
self.kind = 'timestamp'
# convert to timestamp
- if not (self.kind is None or self.kind == 'period'):
- if self._from_selection:
- # see GH 14008, GH 12871
- msg = ("Resampling from level= or on= selection"
- " with a PeriodIndex is not currently supported,"
- " use .set_index(...) to explicitly set index")
- raise NotImplementedError(msg)
- else:
- obj = obj.to_timestamp(how=self.convention)
+ if self.kind == 'timestamp':
+ obj = obj.to_timestamp(how=self.convention)
return obj
- def aggregate(self, arg, *args, **kwargs):
- result, how = self._aggregate(arg, *args, **kwargs)
- if result is None:
- result = self._downsample(arg, *args, **kwargs)
-
- result = self._apply_loffset(result)
- return result
-
- agg = aggregate
-
- def _get_new_index(self):
- """ return our new index """
- ax = self.ax
-
- if len(ax) == 0:
- values = []
- else:
- start = ax[0].asfreq(self.freq, how=self.convention)
- end = ax[-1].asfreq(self.freq, how='end')
- values = period_range(start, end, freq=self.freq).asi8
-
- return ax._shallow_copy(values, freq=self.freq)
-
def _downsample(self, how, **kwargs):
"""
Downsample the cython defined function
@@ -898,22 +877,17 @@ def _downsample(self, how, **kwargs):
how = self._is_cython_func(how) or how
ax = self.ax
- new_index = self._get_new_index()
-
- # Start vs. end of period
- memb = ax.asfreq(self.freq, how=self.convention)
-
if is_subperiod(ax.freq, self.freq):
# Downsampling
- if len(new_index) == 0:
- bins = []
- else:
- i8 = memb.asi8
- rng = np.arange(i8[0], i8[-1] + 1)
- bins = memb.searchsorted(rng, side='right')
- grouper = BinGrouper(bins, new_index)
- return self._groupby_and_aggregate(how, grouper=grouper)
+ return self._groupby_and_aggregate(how, grouper=self.grouper)
elif is_superperiod(ax.freq, self.freq):
+ if how == 'ohlc':
+ # GH #13083
+ # upsampling to subperiods is handled as an asfreq, which works
+ # for pure aggregating/reducing methods
+ # OHLC reduces along the time dimension, but creates multiple
+ # values for each period -> handle by _groupby_and_aggregate()
+ return self._groupby_and_aggregate(how, grouper=self.grouper)
return self.asfreq()
elif ax.freq == self.freq:
return self.asfreq()
@@ -936,19 +910,16 @@ def _upsample(self, method, limit=None, fill_value=None):
.fillna
"""
- if self._from_selection:
- raise ValueError("Upsampling from level= or on= selection"
- " is not supported, use .set_index(...)"
- " to explicitly set index to"
- " datetime-like")
+
# we may need to actually resample as if we are timestamps
if self.kind == 'timestamp':
return super(PeriodIndexResampler, self)._upsample(
method, limit=limit, fill_value=fill_value)
+ self._set_binner()
ax = self.ax
obj = self.obj
- new_index = self._get_new_index()
+ new_index = self.binner
# Start vs. end of period
memb = ax.asfreq(self.freq, how=self.convention)
@@ -1293,6 +1264,51 @@ def _get_time_period_bins(self, ax):
return binner, bins, labels
+ def _get_period_bins(self, ax):
+ if not isinstance(ax, PeriodIndex):
+ raise TypeError('axis must be a PeriodIndex, but got '
+ 'an instance of %r' % type(ax).__name__)
+
+ memb = ax.asfreq(self.freq, how=self.convention)
+
+ # NaT handling as in pandas._lib.lib.generate_bins_dt64()
+ nat_count = 0
+ if memb.hasnans:
+ nat_count = np.sum(memb._isnan)
+ memb = memb[~memb._isnan]
+
+ # if index contains no valid (non-NaT) values, return empty index
+ if not len(memb):
+ binner = labels = PeriodIndex(
+ data=[], freq=self.freq, name=ax.name)
+ return binner, [], labels
+
+ start = ax.min().asfreq(self.freq, how=self.convention)
+ end = ax.max().asfreq(self.freq, how='end')
+
+ labels = binner = PeriodIndex(start=start, end=end,
+ freq=self.freq, name=ax.name)
+
+ i8 = memb.asi8
+ freq_mult = self.freq.n
+
+ # when upsampling to subperiods, we need to generate enough bins
+ expected_bins_count = len(binner) * freq_mult
+ i8_extend = expected_bins_count - (i8[-1] - i8[0])
+ rng = np.arange(i8[0], i8[-1] + i8_extend, freq_mult)
+ rng += freq_mult
+ bins = memb.searchsorted(rng, side='left')
+
+ if nat_count > 0:
+ # NaT handling as in pandas._lib.lib.generate_bins_dt64()
+ # shift bins by the number of NaT
+ bins += nat_count
+ bins = np.insert(bins, 0, nat_count)
+ binner = binner.insert(0, tslib.NaT)
+ labels = labels.insert(0, tslib.NaT)
+
+ return binner, bins, labels
+
def _take_new_index(obj, indexer, new_index, axis=0):
from pandas.core.api import Series, DataFrame
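The bin computation in `_get_period_bins` above reduces to cutting sorted integer period ordinals at multiples of `freq.n`. A toy version using `bisect_left` in place of `searchsorted(..., side='left')`, assuming NaT values have already been dropped as in the hunk:

```python
from bisect import bisect_left


def period_bin_edges(ordinals, freq_mult):
    """Toy analogue of the searchsorted binning above: cut sorted
    integer period ordinals (NaT already dropped) at multiples of
    freq.n, returning one cut position per resampled label."""
    first, last = ordinals[0], ordinals[-1]
    edges = []
    right = first + freq_mult
    # extend one step past the last ordinal so the final bin is
    # closed, mirroring the i8_extend adjustment in _get_period_bins
    while right <= last + freq_mult:
        edges.append(bisect_left(ordinals, right))
        right += freq_mult
    return edges
```

With six monthly ordinals and `freq_mult=2` (i.e. resampling to a bimonthly frequency), consecutive edge pairs delimit bins of two periods each.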
diff --git a/pandas/tests/test_resample.py b/pandas/tests/test_resample.py
index 7449beb8f97df..cd15203eccd82 100644
--- a/pandas/tests/test_resample.py
+++ b/pandas/tests/test_resample.py
@@ -18,7 +18,7 @@
from pandas.core.dtypes.generic import ABCSeries, ABCDataFrame
from pandas.compat import range, lrange, zip, product, OrderedDict
-from pandas.core.base import SpecificationError
+from pandas.core.base import SpecificationError, AbstractMethodError
from pandas.errors import UnsupportedFunctionCall
from pandas.core.groupby import DataError
from pandas.tseries.frequencies import MONTHS, DAYS
@@ -698,35 +698,58 @@ def create_index(self, *args, **kwargs):
factory = self._index_factory()
return factory(*args, **kwargs)
- def test_asfreq_downsample(self):
- s = self.create_series()
-
- result = s.resample('2D').asfreq()
- expected = s.reindex(s.index.take(np.arange(0, len(s.index), 2)))
- expected.index.freq = to_offset('2D')
- assert_series_equal(result, expected)
-
- frame = s.to_frame('value')
- result = frame.resample('2D').asfreq()
- expected = frame.reindex(
- frame.index.take(np.arange(0, len(frame.index), 2)))
- expected.index.freq = to_offset('2D')
- assert_frame_equal(result, expected)
-
- def test_asfreq_upsample(self):
- s = self.create_series()
-
- result = s.resample('1H').asfreq()
- new_index = self.create_index(s.index[0], s.index[-1], freq='1H')
- expected = s.reindex(new_index)
- assert_series_equal(result, expected)
-
- frame = s.to_frame('value')
- result = frame.resample('1H').asfreq()
- new_index = self.create_index(frame.index[0],
- frame.index[-1], freq='1H')
- expected = frame.reindex(new_index)
- assert_frame_equal(result, expected)
+ @pytest.fixture
+ def _index_start(self):
+ return datetime(2005, 1, 1)
+
+ @pytest.fixture
+ def _index_end(self):
+ return datetime(2005, 1, 10)
+
+ @pytest.fixture
+ def _index_freq(self):
+ return 'D'
+
+ @pytest.fixture
+ def index(self, _index_start, _index_end, _index_freq):
+ return self.create_index(_index_start, _index_end, freq=_index_freq)
+
+ @pytest.fixture
+ def _series_name(self):
+ raise AbstractMethodError(self)
+
+ @pytest.fixture
+ def _static_values(self, index):
+ return np.arange(len(index))
+
+ @pytest.fixture
+ def series(self, index, _series_name, _static_values):
+ return Series(_static_values, index=index, name=_series_name)
+
+ @pytest.fixture
+ def frame(self, index, _static_values):
+ return DataFrame({'value': _static_values}, index=index)
+
+ @pytest.fixture(params=[Series, DataFrame])
+ def series_and_frame(self, request, index, _series_name, _static_values):
+ if request.param == Series:
+ return Series(_static_values, index=index, name=_series_name)
+ if request.param == DataFrame:
+ return DataFrame({'value': _static_values}, index=index)
+
+ @pytest.mark.parametrize('freq', ['2D', '1H'])
+ def test_asfreq(self, series_and_frame, freq):
+ obj = series_and_frame
+
+ result = obj.resample(freq).asfreq()
+ if freq == '2D':
+ new_index = obj.index.take(np.arange(0, len(obj.index), 2))
+ new_index.freq = to_offset('2D')
+ else:
+ new_index = self.create_index(obj.index[0], obj.index[-1],
+ freq=freq)
+ expected = obj.reindex(new_index)
+ assert_almost_equal(result, expected)
def test_asfreq_fill_value(self):
# test for fill value during resampling, issue 3715
@@ -824,7 +847,7 @@ def test_resample_loffset_arg_type(self):
periods=len(df.index) / 2,
freq='2D')
- # loffset coreces PeriodIndex to DateTimeIndex
+ # loffset coerces PeriodIndex to DateTimeIndex
if isinstance(expected_index, PeriodIndex):
expected_index = expected_index.to_timestamp()
@@ -866,6 +889,10 @@ def test_apply_to_empty_series(self):
class TestDatetimeIndex(Base):
_index_factory = lambda x: date_range
+ @pytest.fixture
+ def _series_name(self):
+ return 'dti'
+
def setup_method(self, method):
dti = DatetimeIndex(start=datetime(2005, 1, 1),
end=datetime(2005, 1, 10), freq='Min')
@@ -2214,57 +2241,35 @@ def test_resample_datetime_values(self):
class TestPeriodIndex(Base):
_index_factory = lambda x: period_range
+ @pytest.fixture
+ def _series_name(self):
+ return 'pi'
+
def create_series(self):
+ # TODO: replace calls to .create_series() by injecting the series
+ # fixture
i = period_range(datetime(2005, 1, 1),
datetime(2005, 1, 10), freq='D')
return Series(np.arange(len(i)), index=i, name='pi')
- def test_asfreq_downsample(self):
-
- # series
- s = self.create_series()
- expected = s.reindex(s.index.take(np.arange(0, len(s.index), 2)))
- expected.index = expected.index.to_timestamp()
- expected.index.freq = to_offset('2D')
-
- # this is a bug, this *should* return a PeriodIndex
- # directly
- # GH 12884
- result = s.resample('2D').asfreq()
- assert_series_equal(result, expected)
-
- # frame
- frame = s.to_frame('value')
- expected = frame.reindex(
- frame.index.take(np.arange(0, len(frame.index), 2)))
- expected.index = expected.index.to_timestamp()
- expected.index.freq = to_offset('2D')
- result = frame.resample('2D').asfreq()
- assert_frame_equal(result, expected)
-
- def test_asfreq_upsample(self):
-
- # this is a bug, this *should* return a PeriodIndex
- # directly
- # GH 12884
- s = self.create_series()
- new_index = date_range(s.index[0].to_timestamp(how='start'),
- (s.index[-1] + 1).to_timestamp(how='start'),
- freq='1H',
- closed='left')
- expected = s.to_timestamp().reindex(new_index).to_period()
- result = s.resample('1H').asfreq()
- assert_series_equal(result, expected)
-
- frame = s.to_frame('value')
- new_index = date_range(frame.index[0].to_timestamp(how='start'),
- (frame.index[-1] + 1).to_timestamp(how='start'),
- freq='1H',
- closed='left')
- expected = frame.to_timestamp().reindex(new_index).to_period()
- result = frame.resample('1H').asfreq()
- assert_frame_equal(result, expected)
+ @pytest.mark.parametrize('freq', ['2D', '1H', '2H'])
+ @pytest.mark.parametrize('kind', ['period', None, 'timestamp'])
+ def test_asfreq(self, series_and_frame, freq, kind):
+ # GH 12884, 15944
+ # make sure .asfreq() returns PeriodIndex (except kind='timestamp')
+
+ obj = series_and_frame
+ if kind == 'timestamp':
+ expected = obj.to_timestamp().resample(freq).asfreq()
+ else:
+ start = obj.index[0].to_timestamp(how='start')
+ end = (obj.index[-1] + 1).to_timestamp(how='start')
+ new_index = date_range(start=start, end=end, freq=freq,
+ closed='left')
+ expected = obj.to_timestamp().reindex(new_index).to_period(freq)
+ result = obj.resample(freq, kind=kind).asfreq()
+ assert_almost_equal(result, expected)
def test_asfreq_fill_value(self):
# test for fill value during resampling, issue 3715
@@ -2285,8 +2290,9 @@ def test_asfreq_fill_value(self):
result = frame.resample('1H', kind='timestamp').asfreq(fill_value=3.0)
assert_frame_equal(result, expected)
- def test_selection(self):
- index = self.create_series().index
+ @pytest.mark.parametrize('freq', ['H', '12H', '2D', 'W'])
+ @pytest.mark.parametrize('kind', [None, 'period', 'timestamp'])
+ def test_selection(self, index, freq, kind):
# This is a bug, these should be implemented
# GH 14008
df = pd.DataFrame({'date': index,
@@ -2294,12 +2300,10 @@ def test_selection(self):
index=pd.MultiIndex.from_arrays([
np.arange(len(index), dtype=np.int64),
index], names=['v', 'd']))
-
with pytest.raises(NotImplementedError):
- df.resample('2D', on='date')
-
+ df.resample(freq, on='date', kind=kind)
with pytest.raises(NotImplementedError):
- df.resample('2D', level='d')
+ df.resample(freq, level='d', kind=kind)
def test_annual_upsample_D_s_f(self):
self._check_annual_upsample_cases('D', 'start', 'ffill')
@@ -2366,15 +2370,14 @@ def test_not_subperiod(self):
pytest.raises(ValueError, lambda: ts.resample('M').mean())
pytest.raises(ValueError, lambda: ts.resample('w-thu').mean())
- def test_basic_upsample(self):
+ @pytest.mark.parametrize('freq', ['D', '2D'])
+ def test_basic_upsample(self, freq):
ts = _simple_pts('1/1/1990', '6/30/1995', freq='M')
result = ts.resample('a-dec').mean()
- resampled = result.resample('D', convention='end').ffill()
-
- expected = result.to_timestamp('D', how='end')
- expected = expected.asfreq('D', 'ffill').to_period()
-
+ resampled = result.resample(freq, convention='end').ffill()
+ expected = result.to_timestamp(freq, how='end')
+ expected = expected.asfreq(freq, 'ffill').to_period(freq)
assert_series_equal(resampled, expected)
def test_upsample_with_limit(self):
@@ -2440,16 +2443,15 @@ def test_resample_basic(self):
result2 = s.resample('T', kind='period').mean()
assert_series_equal(result2, expected)
- def test_resample_count(self):
-
+ @pytest.mark.parametrize('freq,expected_vals', [('M', [31, 29, 31, 9]),
+ ('2M', [31 + 29, 31 + 9])])
+ def test_resample_count(self, freq, expected_vals):
# GH12774
- series = pd.Series(1, index=pd.period_range(start='2000',
- periods=100))
- result = series.resample('M').count()
-
- expected_index = pd.period_range(start='2000', freq='M', periods=4)
- expected = pd.Series([31, 29, 31, 9], index=expected_index)
-
+ series = pd.Series(1, index=pd.period_range(start='2000', periods=100))
+ result = series.resample(freq).count()
+ expected_index = pd.period_range(start='2000', freq=freq,
+ periods=len(expected_vals))
+ expected = pd.Series(expected_vals, index=expected_index)
assert_series_equal(result, expected)
def test_resample_same_freq(self):
@@ -2587,12 +2589,15 @@ def test_cant_fill_missing_dups(self):
s = Series(np.random.randn(5), index=rng)
pytest.raises(Exception, lambda: s.resample('A').ffill())
- def test_resample_5minute(self):
+ @pytest.mark.parametrize('freq', ['5min'])
+ @pytest.mark.parametrize('kind', ['period', None, 'timestamp'])
+ def test_resample_5minute(self, freq, kind):
rng = period_range('1/1/2000', '1/5/2000', freq='T')
ts = Series(np.random.randn(len(rng)), index=rng)
-
- result = ts.resample('5min').mean()
- expected = ts.to_timestamp().resample('5min').mean()
+ expected = ts.to_timestamp().resample(freq).mean()
+ if kind != 'timestamp':
+ expected = expected.to_period(freq)
+ result = ts.resample(freq, kind=kind).mean()
assert_series_equal(result, expected)
def test_upsample_daily_business_daily(self):
@@ -2812,18 +2817,96 @@ def test_evenly_divisible_with_no_extra_bins(self):
result = df.resample('7D').sum()
assert_frame_equal(result, expected)
- def test_apply_to_empty_series(self):
- # GH 14313
- series = self.create_series()[:0]
+ @pytest.mark.parametrize('kind', ['period', None, 'timestamp'])
+ @pytest.mark.parametrize('agg_arg', ['mean', {'value': 'mean'}, ['mean']])
+ def test_loffset_returns_datetimeindex(self, frame, kind, agg_arg):
+ # make sure passing loffset returns DatetimeIndex in all cases
+ # basic method taken from Base.test_resample_loffset_arg_type()
+ df = frame
+ expected_means = [df.values[i:i + 2].mean()
+ for i in range(0, len(df.values), 2)]
+ expected_index = self.create_index(df.index[0],
+ periods=len(df.index) / 2,
+ freq='2D')
- for freq in ['M', 'D', 'H']:
- with pytest.raises(TypeError):
- series.resample(freq).apply(lambda x: 1)
+ # loffset coerces PeriodIndex to DateTimeIndex
+ expected_index = expected_index.to_timestamp()
+ expected_index += timedelta(hours=2)
+ expected = DataFrame({'value': expected_means}, index=expected_index)
+
+ result_agg = df.resample('2D', loffset='2H', kind=kind).agg(agg_arg)
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ result_how = df.resample('2D', how=agg_arg, loffset='2H',
+ kind=kind)
+ if isinstance(agg_arg, list):
+ expected.columns = pd.MultiIndex.from_tuples([('value', 'mean')])
+ assert_frame_equal(result_agg, expected)
+ assert_frame_equal(result_how, expected)
+
+ @pytest.mark.parametrize('freq, period_mult', [('H', 24), ('12H', 2)])
+ @pytest.mark.parametrize('kind', [None, 'period'])
+ def test_upsampling_ohlc(self, freq, period_mult, kind):
+ # GH 13083
+ pi = PeriodIndex(start='2000', freq='D', periods=10)
+ s = Series(range(len(pi)), index=pi)
+ expected = s.to_timestamp().resample(freq).ohlc().to_period(freq)
+
+ # timestamp-based resampling doesn't include all sub-periods
+ # of the last original period, so extend accordingly:
+ new_index = PeriodIndex(start='2000', freq=freq,
+ periods=period_mult * len(pi))
+ expected = expected.reindex(new_index)
+ result = s.resample(freq, kind=kind).ohlc()
+ assert_frame_equal(result, expected)
+
+ @pytest.mark.parametrize('periods, values',
+ [([pd.NaT, '1970-01-01 00:00:00', pd.NaT,
+ '1970-01-01 00:00:02', '1970-01-01 00:00:03'],
+ [2, 3, 5, 7, 11]),
+ ([pd.NaT, pd.NaT, '1970-01-01 00:00:00', pd.NaT,
+ pd.NaT, pd.NaT, '1970-01-01 00:00:02',
+ '1970-01-01 00:00:03', pd.NaT, pd.NaT],
+ [1, 2, 3, 5, 6, 8, 7, 11, 12, 13])])
+ @pytest.mark.parametrize('freq, expected_values',
+ [('1s', [3, np.NaN, 7, 11]),
+ ('2s', [3, int((7 + 11) / 2)]),
+ ('3s', [int((3 + 7) / 2), 11])])
+ def test_resample_with_nat(self, periods, values, freq, expected_values):
+ # GH 13224
+ index = PeriodIndex(periods, freq='S')
+ frame = DataFrame(values, index=index)
+
+ expected_index = period_range('1970-01-01 00:00:00',
+ periods=len(expected_values), freq=freq)
+ expected = DataFrame(expected_values, index=expected_index)
+ result = frame.resample(freq).mean()
+ assert_frame_equal(result, expected)
+
+ def test_resample_with_only_nat(self):
+ # GH 13224
+ pi = PeriodIndex([pd.NaT] * 3, freq='S')
+ frame = DataFrame([2, 3, 5], index=pi)
+ expected_index = PeriodIndex(data=[], freq=pi.freq)
+ expected = DataFrame([], index=expected_index)
+ result = frame.resample('1s').mean()
+ assert_frame_equal(result, expected)
class TestTimedeltaIndex(Base):
_index_factory = lambda x: timedelta_range
+ @pytest.fixture
+ def _index_start(self):
+ return '1 day'
+
+ @pytest.fixture
+ def _index_end(self):
+ return '10 day'
+
+ @pytest.fixture
+ def _series_name(self):
+ return 'tdi'
+
def create_series(self):
i = timedelta_range('1 day',
'10 day', freq='D')
@@ -3167,13 +3250,6 @@ def test_fails_on_no_datetime_index(self):
"instance of %r" % name):
df.groupby(TimeGrouper('D'))
- # PeriodIndex gives a specific error message
- df = DataFrame({'a': np.random.randn(n)}, index=tm.makePeriodIndex(n))
- with tm.assert_raises_regex(TypeError,
- "axis must be a DatetimeIndex, but "
- "got an instance of 'PeriodIndex'"):
- df.groupby(TimeGrouper('D'))
-
def test_aaa_group_order(self):
# GH 12840
# check TimeGrouper perform stable sorts
| closes #15944
xref partially #12884
closes #13083
closes #13224
This PR addresses some of the issues related to PeriodIndex resampling.
As I'm new to the pandas codebase, I appreciate any advice. | https://api.github.com/repos/pandas-dev/pandas/pulls/16153 | 2017-04-26T20:46:01Z | 2017-10-01T14:55:33Z | 2017-10-01T14:55:32Z | 2017-10-06T17:44:22Z |
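The PR above parametrizes `test_asfreq` so that `.resample(freq).asfreq()` on a `PeriodIndex`-backed object is checked to return a `PeriodIndex` (GH 12884). A minimal sketch of the behavior being tested, mirroring the test fixtures (assumes a pandas version that includes this change; newer versions may emit deprecation warnings for `PeriodIndex` resampling):

```python
import numpy as np
import pandas as pd

# Build a small daily PeriodIndex series, mirroring the test fixtures
# (_index_start=2005-01-01, _index_end=2005-01-10, freq='D').
idx = pd.period_range("2005-01-01", "2005-01-10", freq="D")
s = pd.Series(np.arange(len(idx)), index=idx, name="pi")

# Downsample to 2D: per GH 12884, .asfreq() should keep a PeriodIndex
# rather than silently coercing to a DatetimeIndex.
result = s.resample("2D").asfreq()
assert isinstance(result.index, pd.PeriodIndex)

# 10 daily periods collapse into 5 two-day bins.
assert len(result) == 5
```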
CLN: remove unused TimeGrouper._get_binner_for_resample() method | diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index 203ae0cb17e02..1685a5d75245d 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -1099,23 +1099,6 @@ def _get_grouper(self, obj):
r._set_binner()
return r.binner, r.grouper, r.obj
- def _get_binner_for_resample(self, kind=None):
- # create the BinGrouper
- # assume that self.set_grouper(obj) has already been called
-
- ax = self.ax
- if kind is None:
- kind = self.kind
- if kind is None or kind == 'timestamp':
- self.binner, bins, binlabels = self._get_time_bins(ax)
- elif kind == 'timedelta':
- self.binner, bins, binlabels = self._get_time_delta_bins(ax)
- else:
- self.binner, bins, binlabels = self._get_time_period_bins(ax)
-
- self.grouper = BinGrouper(bins, binlabels)
- return self.binner, self.grouper, self.obj
-
def _get_binner_for_grouping(self, obj):
# return an ordering of the transformed group labels,
# suitable for multi-grouping, e.g the labels for
| - [x] tests passed
- [x] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff``
- [ ] whatsnew entry
The method appears to be unused and is not covered by any tests -> remove it?
| https://api.github.com/repos/pandas-dev/pandas/pulls/16152 | 2017-04-26T20:36:47Z | 2017-04-27T01:32:13Z | 2017-04-27T01:32:13Z | 2017-04-27T01:33:27Z |
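The deleted `_get_binner_for_resample` helper simply dispatched on the `kind` keyword to pick a binning strategy. A standalone sketch of that dispatch logic, with hypothetical names standing in for the real binner methods:

```python
# Hypothetical sketch of the kind-based dispatch performed by the removed
# helper: choose a binning strategy from the requested `kind`, falling back
# to a default when none is given.
def pick_binner(kind=None, default_kind=None):
    kind = kind if kind is not None else default_kind
    if kind is None or kind == "timestamp":
        return "time_bins"          # stood for self._get_time_bins(ax)
    elif kind == "timedelta":
        return "time_delta_bins"    # stood for self._get_time_delta_bins(ax)
    return "time_period_bins"       # stood for self._get_time_period_bins(ax)

assert pick_binner() == "time_bins"
assert pick_binner("timedelta") == "time_delta_bins"
assert pick_binner("period") == "time_period_bins"
```

Since the same selection is already reachable through `_get_resampler()` and the `Resampler` subclasses, the duplicate path could be dropped without losing behavior.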
MAINT: Remove self.assertFalse from testing | diff --git a/pandas/tests/computation/test_eval.py b/pandas/tests/computation/test_eval.py
index cc14282934f16..52061f7f1e0ae 100644
--- a/pandas/tests/computation/test_eval.py
+++ b/pandas/tests/computation/test_eval.py
@@ -1443,7 +1443,7 @@ def test_simple_in_ops(self):
res = pd.eval('3 in (1, 2)', engine=self.engine,
parser=self.parser)
- self.assertFalse(res)
+ assert not res
res = pd.eval('3 not in (1, 2)', engine=self.engine,
parser=self.parser)
@@ -1467,7 +1467,7 @@ def test_simple_in_ops(self):
res = pd.eval('(3,) not in [(3,), 2]', engine=self.engine,
parser=self.parser)
- self.assertFalse(res)
+ assert not res
res = pd.eval('[(3,)] in [[(3,)], 2]', engine=self.engine,
parser=self.parser)
diff --git a/pandas/tests/dtypes/test_dtypes.py b/pandas/tests/dtypes/test_dtypes.py
index e3bae3675a9e4..718efc08394b1 100644
--- a/pandas/tests/dtypes/test_dtypes.py
+++ b/pandas/tests/dtypes/test_dtypes.py
@@ -59,7 +59,7 @@ def test_hash_vs_equality(self):
def test_equality(self):
self.assertTrue(is_dtype_equal(self.dtype, 'category'))
self.assertTrue(is_dtype_equal(self.dtype, CategoricalDtype()))
- self.assertFalse(is_dtype_equal(self.dtype, 'foo'))
+ assert not is_dtype_equal(self.dtype, 'foo')
def test_construction_from_string(self):
result = CategoricalDtype.construct_from_string('category')
@@ -71,8 +71,8 @@ def test_is_dtype(self):
self.assertTrue(CategoricalDtype.is_dtype(self.dtype))
self.assertTrue(CategoricalDtype.is_dtype('category'))
self.assertTrue(CategoricalDtype.is_dtype(CategoricalDtype()))
- self.assertFalse(CategoricalDtype.is_dtype('foo'))
- self.assertFalse(CategoricalDtype.is_dtype(np.float64))
+ assert not CategoricalDtype.is_dtype('foo')
+ assert not CategoricalDtype.is_dtype(np.float64)
def test_basic(self):
@@ -85,12 +85,12 @@ def test_basic(self):
# dtypes
self.assertTrue(is_categorical_dtype(s.dtype))
self.assertTrue(is_categorical_dtype(s))
- self.assertFalse(is_categorical_dtype(np.dtype('float64')))
+ assert not is_categorical_dtype(np.dtype('float64'))
self.assertTrue(is_categorical(s.dtype))
self.assertTrue(is_categorical(s))
- self.assertFalse(is_categorical(np.dtype('float64')))
- self.assertFalse(is_categorical(1.0))
+ assert not is_categorical(np.dtype('float64'))
+ assert not is_categorical(1.0)
class TestDatetimeTZDtype(Base, tm.TestCase):
@@ -136,8 +136,8 @@ def test_compat(self):
self.assertTrue(is_datetime64_any_dtype('datetime64[ns, US/Eastern]'))
self.assertTrue(is_datetime64_ns_dtype(self.dtype))
self.assertTrue(is_datetime64_ns_dtype('datetime64[ns, US/Eastern]'))
- self.assertFalse(is_datetime64_dtype(self.dtype))
- self.assertFalse(is_datetime64_dtype('datetime64[ns, US/Eastern]'))
+ assert not is_datetime64_dtype(self.dtype)
+ assert not is_datetime64_dtype('datetime64[ns, US/Eastern]')
def test_construction_from_string(self):
result = DatetimeTZDtype('datetime64[ns, US/Eastern]')
@@ -149,25 +149,23 @@ def test_construction_from_string(self):
lambda: DatetimeTZDtype.construct_from_string('foo'))
def test_is_dtype(self):
- self.assertFalse(DatetimeTZDtype.is_dtype(None))
+ assert not DatetimeTZDtype.is_dtype(None)
self.assertTrue(DatetimeTZDtype.is_dtype(self.dtype))
self.assertTrue(DatetimeTZDtype.is_dtype('datetime64[ns, US/Eastern]'))
- self.assertFalse(DatetimeTZDtype.is_dtype('foo'))
+ assert not DatetimeTZDtype.is_dtype('foo')
self.assertTrue(DatetimeTZDtype.is_dtype(DatetimeTZDtype(
'ns', 'US/Pacific')))
- self.assertFalse(DatetimeTZDtype.is_dtype(np.float64))
+ assert not DatetimeTZDtype.is_dtype(np.float64)
def test_equality(self):
self.assertTrue(is_dtype_equal(self.dtype,
'datetime64[ns, US/Eastern]'))
self.assertTrue(is_dtype_equal(self.dtype, DatetimeTZDtype(
'ns', 'US/Eastern')))
- self.assertFalse(is_dtype_equal(self.dtype, 'foo'))
- self.assertFalse(is_dtype_equal(self.dtype, DatetimeTZDtype('ns',
- 'CET')))
- self.assertFalse(is_dtype_equal(
- DatetimeTZDtype('ns', 'US/Eastern'), DatetimeTZDtype(
- 'ns', 'US/Pacific')))
+ assert not is_dtype_equal(self.dtype, 'foo')
+ assert not is_dtype_equal(self.dtype, DatetimeTZDtype('ns', 'CET'))
+ assert not is_dtype_equal(DatetimeTZDtype('ns', 'US/Eastern'),
+ DatetimeTZDtype('ns', 'US/Pacific'))
# numpy compat
self.assertTrue(is_dtype_equal(np.dtype("M8[ns]"), "datetime64[ns]"))
@@ -182,13 +180,13 @@ def test_basic(self):
# dtypes
self.assertTrue(is_datetime64tz_dtype(s.dtype))
self.assertTrue(is_datetime64tz_dtype(s))
- self.assertFalse(is_datetime64tz_dtype(np.dtype('float64')))
- self.assertFalse(is_datetime64tz_dtype(1.0))
+ assert not is_datetime64tz_dtype(np.dtype('float64'))
+ assert not is_datetime64tz_dtype(1.0)
self.assertTrue(is_datetimetz(s))
self.assertTrue(is_datetimetz(s.dtype))
- self.assertFalse(is_datetimetz(np.dtype('float64')))
- self.assertFalse(is_datetimetz(1.0))
+ assert not is_datetimetz(np.dtype('float64'))
+ assert not is_datetimetz(1.0)
def test_dst(self):
@@ -265,10 +263,10 @@ def test_coerce_to_dtype(self):
PeriodDtype('period[3M]'))
def test_compat(self):
- self.assertFalse(is_datetime64_ns_dtype(self.dtype))
- self.assertFalse(is_datetime64_ns_dtype('period[D]'))
- self.assertFalse(is_datetime64_dtype(self.dtype))
- self.assertFalse(is_datetime64_dtype('period[D]'))
+ assert not is_datetime64_ns_dtype(self.dtype)
+ assert not is_datetime64_ns_dtype('period[D]')
+ assert not is_datetime64_dtype(self.dtype)
+ assert not is_datetime64_dtype('period[D]')
def test_construction_from_string(self):
result = PeriodDtype('period[D]')
@@ -297,14 +295,14 @@ def test_is_dtype(self):
self.assertTrue(PeriodDtype.is_dtype(PeriodDtype('U')))
self.assertTrue(PeriodDtype.is_dtype(PeriodDtype('S')))
- self.assertFalse(PeriodDtype.is_dtype('D'))
- self.assertFalse(PeriodDtype.is_dtype('3D'))
- self.assertFalse(PeriodDtype.is_dtype('U'))
- self.assertFalse(PeriodDtype.is_dtype('S'))
- self.assertFalse(PeriodDtype.is_dtype('foo'))
- self.assertFalse(PeriodDtype.is_dtype(np.object_))
- self.assertFalse(PeriodDtype.is_dtype(np.int64))
- self.assertFalse(PeriodDtype.is_dtype(np.float64))
+ assert not PeriodDtype.is_dtype('D')
+ assert not PeriodDtype.is_dtype('3D')
+ assert not PeriodDtype.is_dtype('U')
+ assert not PeriodDtype.is_dtype('S')
+ assert not PeriodDtype.is_dtype('foo')
+ assert not PeriodDtype.is_dtype(np.object_)
+ assert not PeriodDtype.is_dtype(np.int64)
+ assert not PeriodDtype.is_dtype(np.float64)
def test_equality(self):
self.assertTrue(is_dtype_equal(self.dtype, 'period[D]'))
@@ -312,8 +310,8 @@ def test_equality(self):
self.assertTrue(is_dtype_equal(self.dtype, PeriodDtype('D')))
self.assertTrue(is_dtype_equal(PeriodDtype('D'), PeriodDtype('D')))
- self.assertFalse(is_dtype_equal(self.dtype, 'D'))
- self.assertFalse(is_dtype_equal(PeriodDtype('D'), PeriodDtype('2D')))
+ assert not is_dtype_equal(self.dtype, 'D')
+ assert not is_dtype_equal(PeriodDtype('D'), PeriodDtype('2D'))
def test_basic(self):
self.assertTrue(is_period_dtype(self.dtype))
@@ -328,14 +326,14 @@ def test_basic(self):
# dtypes
# series results in object dtype currently,
# is_period checks period_arraylike
- self.assertFalse(is_period_dtype(s.dtype))
- self.assertFalse(is_period_dtype(s))
+ assert not is_period_dtype(s.dtype)
+ assert not is_period_dtype(s)
self.assertTrue(is_period(s))
- self.assertFalse(is_period_dtype(np.dtype('float64')))
- self.assertFalse(is_period_dtype(1.0))
- self.assertFalse(is_period(np.dtype('float64')))
- self.assertFalse(is_period(1.0))
+ assert not is_period_dtype(np.dtype('float64'))
+ assert not is_period_dtype(1.0)
+ assert not is_period(np.dtype('float64'))
+ assert not is_period(1.0)
def test_empty(self):
dt = PeriodDtype()
@@ -344,7 +342,7 @@ def test_empty(self):
def test_not_string(self):
# though PeriodDtype has object kind, it cannot be string
- self.assertFalse(is_string_dtype(PeriodDtype('D')))
+ assert not is_string_dtype(PeriodDtype('D'))
class TestIntervalDtype(Base, tm.TestCase):
@@ -388,14 +386,14 @@ def test_is_dtype(self):
self.assertTrue(IntervalDtype.is_dtype(IntervalDtype('int64')))
self.assertTrue(IntervalDtype.is_dtype(IntervalDtype(np.int64)))
- self.assertFalse(IntervalDtype.is_dtype('D'))
- self.assertFalse(IntervalDtype.is_dtype('3D'))
- self.assertFalse(IntervalDtype.is_dtype('U'))
- self.assertFalse(IntervalDtype.is_dtype('S'))
- self.assertFalse(IntervalDtype.is_dtype('foo'))
- self.assertFalse(IntervalDtype.is_dtype(np.object_))
- self.assertFalse(IntervalDtype.is_dtype(np.int64))
- self.assertFalse(IntervalDtype.is_dtype(np.float64))
+ assert not IntervalDtype.is_dtype('D')
+ assert not IntervalDtype.is_dtype('3D')
+ assert not IntervalDtype.is_dtype('U')
+ assert not IntervalDtype.is_dtype('S')
+ assert not IntervalDtype.is_dtype('foo')
+ assert not IntervalDtype.is_dtype(np.object_)
+ assert not IntervalDtype.is_dtype(np.int64)
+ assert not IntervalDtype.is_dtype(np.float64)
def test_identity(self):
self.assertEqual(IntervalDtype('interval[int64]'),
@@ -424,9 +422,9 @@ def test_equality(self):
self.assertTrue(is_dtype_equal(IntervalDtype('int64'),
IntervalDtype('int64')))
- self.assertFalse(is_dtype_equal(self.dtype, 'int64'))
- self.assertFalse(is_dtype_equal(IntervalDtype('int64'),
- IntervalDtype('float64')))
+ assert not is_dtype_equal(self.dtype, 'int64')
+ assert not is_dtype_equal(IntervalDtype('int64'),
+ IntervalDtype('float64'))
def test_basic(self):
self.assertTrue(is_interval_dtype(self.dtype))
@@ -440,8 +438,8 @@ def test_basic(self):
# dtypes
# series results in object dtype currently,
- self.assertFalse(is_interval_dtype(s.dtype))
- self.assertFalse(is_interval_dtype(s))
+ assert not is_interval_dtype(s.dtype)
+ assert not is_interval_dtype(s)
def test_basic_dtype(self):
self.assertTrue(is_interval_dtype('interval[int64]'))
@@ -450,9 +448,9 @@ def test_basic_dtype(self):
(IntervalIndex.from_breaks(np.arange(4))))
self.assertTrue(is_interval_dtype(
IntervalIndex.from_breaks(date_range('20130101', periods=3))))
- self.assertFalse(is_interval_dtype('U'))
- self.assertFalse(is_interval_dtype('S'))
- self.assertFalse(is_interval_dtype('foo'))
- self.assertFalse(is_interval_dtype(np.object_))
- self.assertFalse(is_interval_dtype(np.int64))
- self.assertFalse(is_interval_dtype(np.float64))
+ assert not is_interval_dtype('U')
+ assert not is_interval_dtype('S')
+ assert not is_interval_dtype('foo')
+ assert not is_interval_dtype(np.object_)
+ assert not is_interval_dtype(np.int64)
+ assert not is_interval_dtype(np.float64)
diff --git a/pandas/tests/dtypes/test_inference.py b/pandas/tests/dtypes/test_inference.py
index 35720b32d756c..8dcf75e8a1aec 100644
--- a/pandas/tests/dtypes/test_inference.py
+++ b/pandas/tests/dtypes/test_inference.py
@@ -213,15 +213,15 @@ def test_isinf_scalar(self):
# GH 11352
self.assertTrue(lib.isposinf_scalar(float('inf')))
self.assertTrue(lib.isposinf_scalar(np.inf))
- self.assertFalse(lib.isposinf_scalar(-np.inf))
- self.assertFalse(lib.isposinf_scalar(1))
- self.assertFalse(lib.isposinf_scalar('a'))
+ assert not lib.isposinf_scalar(-np.inf)
+ assert not lib.isposinf_scalar(1)
+ assert not lib.isposinf_scalar('a')
self.assertTrue(lib.isneginf_scalar(float('-inf')))
self.assertTrue(lib.isneginf_scalar(-np.inf))
- self.assertFalse(lib.isneginf_scalar(np.inf))
- self.assertFalse(lib.isneginf_scalar(1))
- self.assertFalse(lib.isneginf_scalar('a'))
+ assert not lib.isneginf_scalar(np.inf)
+ assert not lib.isneginf_scalar(1)
+ assert not lib.isneginf_scalar('a')
def test_maybe_convert_numeric_infinities(self):
# see gh-13274
@@ -639,24 +639,24 @@ def test_is_datetimelike_array_all_nan_nat_like(self):
arr = np.array([np.nan, pd.NaT, np.datetime64('nat')])
self.assertTrue(lib.is_datetime_array(arr))
self.assertTrue(lib.is_datetime64_array(arr))
- self.assertFalse(lib.is_timedelta_array(arr))
- self.assertFalse(lib.is_timedelta64_array(arr))
- self.assertFalse(lib.is_timedelta_or_timedelta64_array(arr))
+ assert not lib.is_timedelta_array(arr)
+ assert not lib.is_timedelta64_array(arr)
+ assert not lib.is_timedelta_or_timedelta64_array(arr)
arr = np.array([np.nan, pd.NaT, np.timedelta64('nat')])
- self.assertFalse(lib.is_datetime_array(arr))
- self.assertFalse(lib.is_datetime64_array(arr))
+ assert not lib.is_datetime_array(arr)
+ assert not lib.is_datetime64_array(arr)
self.assertTrue(lib.is_timedelta_array(arr))
self.assertTrue(lib.is_timedelta64_array(arr))
self.assertTrue(lib.is_timedelta_or_timedelta64_array(arr))
arr = np.array([np.nan, pd.NaT, np.datetime64('nat'),
np.timedelta64('nat')])
- self.assertFalse(lib.is_datetime_array(arr))
- self.assertFalse(lib.is_datetime64_array(arr))
- self.assertFalse(lib.is_timedelta_array(arr))
- self.assertFalse(lib.is_timedelta64_array(arr))
- self.assertFalse(lib.is_timedelta_or_timedelta64_array(arr))
+ assert not lib.is_datetime_array(arr)
+ assert not lib.is_datetime64_array(arr)
+ assert not lib.is_timedelta_array(arr)
+ assert not lib.is_timedelta64_array(arr)
+ assert not lib.is_timedelta_or_timedelta64_array(arr)
arr = np.array([np.nan, pd.NaT])
self.assertTrue(lib.is_datetime_array(arr))
@@ -666,11 +666,11 @@ def test_is_datetimelike_array_all_nan_nat_like(self):
self.assertTrue(lib.is_timedelta_or_timedelta64_array(arr))
arr = np.array([np.nan, np.nan], dtype=object)
- self.assertFalse(lib.is_datetime_array(arr))
- self.assertFalse(lib.is_datetime64_array(arr))
- self.assertFalse(lib.is_timedelta_array(arr))
- self.assertFalse(lib.is_timedelta64_array(arr))
- self.assertFalse(lib.is_timedelta_or_timedelta64_array(arr))
+ assert not lib.is_datetime_array(arr)
+ assert not lib.is_datetime64_array(arr)
+ assert not lib.is_timedelta_array(arr)
+ assert not lib.is_timedelta64_array(arr)
+ assert not lib.is_timedelta_or_timedelta64_array(arr)
def test_date(self):
@@ -720,10 +720,10 @@ def test_to_object_array_width(self):
def test_is_period(self):
self.assertTrue(lib.is_period(pd.Period('2011-01', freq='M')))
- self.assertFalse(lib.is_period(pd.PeriodIndex(['2011-01'], freq='M')))
- self.assertFalse(lib.is_period(pd.Timestamp('2011-01')))
- self.assertFalse(lib.is_period(1))
- self.assertFalse(lib.is_period(np.nan))
+ assert not lib.is_period(pd.PeriodIndex(['2011-01'], freq='M'))
+ assert not lib.is_period(pd.Timestamp('2011-01'))
+ assert not lib.is_period(1)
+ assert not lib.is_period(np.nan)
def test_categorical(self):
@@ -758,18 +758,17 @@ def test_is_number(self):
self.assertTrue(is_number(np.complex128(1 + 3j)))
self.assertTrue(is_number(np.nan))
- self.assertFalse(is_number(None))
- self.assertFalse(is_number('x'))
- self.assertFalse(is_number(datetime(2011, 1, 1)))
- self.assertFalse(is_number(np.datetime64('2011-01-01')))
- self.assertFalse(is_number(Timestamp('2011-01-01')))
- self.assertFalse(is_number(Timestamp('2011-01-01',
- tz='US/Eastern')))
- self.assertFalse(is_number(timedelta(1000)))
- self.assertFalse(is_number(Timedelta('1 days')))
+ assert not is_number(None)
+ assert not is_number('x')
+ assert not is_number(datetime(2011, 1, 1))
+ assert not is_number(np.datetime64('2011-01-01'))
+ assert not is_number(Timestamp('2011-01-01'))
+ assert not is_number(Timestamp('2011-01-01', tz='US/Eastern'))
+ assert not is_number(timedelta(1000))
+ assert not is_number(Timedelta('1 days'))
# questionable
- self.assertFalse(is_number(np.bool_(False)))
+ assert not is_number(np.bool_(False))
self.assertTrue(is_number(np.timedelta64(1, 'D')))
def test_is_bool(self):
@@ -777,45 +776,43 @@ def test_is_bool(self):
self.assertTrue(is_bool(np.bool(False)))
self.assertTrue(is_bool(np.bool_(False)))
- self.assertFalse(is_bool(1))
- self.assertFalse(is_bool(1.1))
- self.assertFalse(is_bool(1 + 3j))
- self.assertFalse(is_bool(np.int64(1)))
- self.assertFalse(is_bool(np.float64(1.1)))
- self.assertFalse(is_bool(np.complex128(1 + 3j)))
- self.assertFalse(is_bool(np.nan))
- self.assertFalse(is_bool(None))
- self.assertFalse(is_bool('x'))
- self.assertFalse(is_bool(datetime(2011, 1, 1)))
- self.assertFalse(is_bool(np.datetime64('2011-01-01')))
- self.assertFalse(is_bool(Timestamp('2011-01-01')))
- self.assertFalse(is_bool(Timestamp('2011-01-01',
- tz='US/Eastern')))
- self.assertFalse(is_bool(timedelta(1000)))
- self.assertFalse(is_bool(np.timedelta64(1, 'D')))
- self.assertFalse(is_bool(Timedelta('1 days')))
+ assert not is_bool(1)
+ assert not is_bool(1.1)
+ assert not is_bool(1 + 3j)
+ assert not is_bool(np.int64(1))
+ assert not is_bool(np.float64(1.1))
+ assert not is_bool(np.complex128(1 + 3j))
+ assert not is_bool(np.nan)
+ assert not is_bool(None)
+ assert not is_bool('x')
+ assert not is_bool(datetime(2011, 1, 1))
+ assert not is_bool(np.datetime64('2011-01-01'))
+ assert not is_bool(Timestamp('2011-01-01'))
+ assert not is_bool(Timestamp('2011-01-01', tz='US/Eastern'))
+ assert not is_bool(timedelta(1000))
+ assert not is_bool(np.timedelta64(1, 'D'))
+ assert not is_bool(Timedelta('1 days'))
def test_is_integer(self):
self.assertTrue(is_integer(1))
self.assertTrue(is_integer(np.int64(1)))
- self.assertFalse(is_integer(True))
- self.assertFalse(is_integer(1.1))
- self.assertFalse(is_integer(1 + 3j))
- self.assertFalse(is_integer(np.bool(False)))
- self.assertFalse(is_integer(np.bool_(False)))
- self.assertFalse(is_integer(np.float64(1.1)))
- self.assertFalse(is_integer(np.complex128(1 + 3j)))
- self.assertFalse(is_integer(np.nan))
- self.assertFalse(is_integer(None))
- self.assertFalse(is_integer('x'))
- self.assertFalse(is_integer(datetime(2011, 1, 1)))
- self.assertFalse(is_integer(np.datetime64('2011-01-01')))
- self.assertFalse(is_integer(Timestamp('2011-01-01')))
- self.assertFalse(is_integer(Timestamp('2011-01-01',
- tz='US/Eastern')))
- self.assertFalse(is_integer(timedelta(1000)))
- self.assertFalse(is_integer(Timedelta('1 days')))
+ assert not is_integer(True)
+ assert not is_integer(1.1)
+ assert not is_integer(1 + 3j)
+ assert not is_integer(np.bool(False))
+ assert not is_integer(np.bool_(False))
+ assert not is_integer(np.float64(1.1))
+ assert not is_integer(np.complex128(1 + 3j))
+ assert not is_integer(np.nan)
+ assert not is_integer(None)
+ assert not is_integer('x')
+ assert not is_integer(datetime(2011, 1, 1))
+ assert not is_integer(np.datetime64('2011-01-01'))
+ assert not is_integer(Timestamp('2011-01-01'))
+ assert not is_integer(Timestamp('2011-01-01', tz='US/Eastern'))
+ assert not is_integer(timedelta(1000))
+ assert not is_integer(Timedelta('1 days'))
# questionable
self.assertTrue(is_integer(np.timedelta64(1, 'D')))
@@ -825,23 +822,22 @@ def test_is_float(self):
self.assertTrue(is_float(np.float64(1.1)))
self.assertTrue(is_float(np.nan))
- self.assertFalse(is_float(True))
- self.assertFalse(is_float(1))
- self.assertFalse(is_float(1 + 3j))
- self.assertFalse(is_float(np.bool(False)))
- self.assertFalse(is_float(np.bool_(False)))
- self.assertFalse(is_float(np.int64(1)))
- self.assertFalse(is_float(np.complex128(1 + 3j)))
- self.assertFalse(is_float(None))
- self.assertFalse(is_float('x'))
- self.assertFalse(is_float(datetime(2011, 1, 1)))
- self.assertFalse(is_float(np.datetime64('2011-01-01')))
- self.assertFalse(is_float(Timestamp('2011-01-01')))
- self.assertFalse(is_float(Timestamp('2011-01-01',
- tz='US/Eastern')))
- self.assertFalse(is_float(timedelta(1000)))
- self.assertFalse(is_float(np.timedelta64(1, 'D')))
- self.assertFalse(is_float(Timedelta('1 days')))
+ assert not is_float(True)
+ assert not is_float(1)
+ assert not is_float(1 + 3j)
+ assert not is_float(np.bool(False))
+ assert not is_float(np.bool_(False))
+ assert not is_float(np.int64(1))
+ assert not is_float(np.complex128(1 + 3j))
+ assert not is_float(None)
+ assert not is_float('x')
+ assert not is_float(datetime(2011, 1, 1))
+ assert not is_float(np.datetime64('2011-01-01'))
+ assert not is_float(Timestamp('2011-01-01'))
+ assert not is_float(Timestamp('2011-01-01', tz='US/Eastern'))
+ assert not is_float(timedelta(1000))
+ assert not is_float(np.timedelta64(1, 'D'))
+ assert not is_float(Timedelta('1 days'))
def test_is_datetime_dtypes(self):
@@ -851,9 +847,9 @@ def test_is_datetime_dtypes(self):
self.assertTrue(is_datetime64_dtype('datetime64'))
self.assertTrue(is_datetime64_dtype('datetime64[ns]'))
self.assertTrue(is_datetime64_dtype(ts))
- self.assertFalse(is_datetime64_dtype(tsa))
+ assert not is_datetime64_dtype(tsa)
- self.assertFalse(is_datetime64_ns_dtype('datetime64'))
+ assert not is_datetime64_ns_dtype('datetime64')
self.assertTrue(is_datetime64_ns_dtype('datetime64[ns]'))
self.assertTrue(is_datetime64_ns_dtype(ts))
self.assertTrue(is_datetime64_ns_dtype(tsa))
@@ -863,14 +859,14 @@ def test_is_datetime_dtypes(self):
self.assertTrue(is_datetime64_any_dtype(ts))
self.assertTrue(is_datetime64_any_dtype(tsa))
- self.assertFalse(is_datetime64tz_dtype('datetime64'))
- self.assertFalse(is_datetime64tz_dtype('datetime64[ns]'))
- self.assertFalse(is_datetime64tz_dtype(ts))
+ assert not is_datetime64tz_dtype('datetime64')
+ assert not is_datetime64tz_dtype('datetime64[ns]')
+ assert not is_datetime64tz_dtype(ts)
self.assertTrue(is_datetime64tz_dtype(tsa))
for tz in ['US/Eastern', 'UTC']:
dtype = 'datetime64[ns, {}]'.format(tz)
- self.assertFalse(is_datetime64_dtype(dtype))
+ assert not is_datetime64_dtype(dtype)
self.assertTrue(is_datetime64tz_dtype(dtype))
self.assertTrue(is_datetime64_ns_dtype(dtype))
self.assertTrue(is_datetime64_any_dtype(dtype))
@@ -878,7 +874,7 @@ def test_is_datetime_dtypes(self):
def test_is_timedelta(self):
self.assertTrue(is_timedelta64_dtype('timedelta64'))
self.assertTrue(is_timedelta64_dtype('timedelta64[ns]'))
- self.assertFalse(is_timedelta64_ns_dtype('timedelta64'))
+ assert not is_timedelta64_ns_dtype('timedelta64')
self.assertTrue(is_timedelta64_ns_dtype('timedelta64[ns]'))
tdi = TimedeltaIndex([1e14, 2e14], dtype='timedelta64')
@@ -887,8 +883,8 @@ def test_is_timedelta(self):
self.assertTrue(is_timedelta64_ns_dtype(tdi.astype('timedelta64[ns]')))
# Conversion to Int64Index:
- self.assertFalse(is_timedelta64_ns_dtype(tdi.astype('timedelta64')))
- self.assertFalse(is_timedelta64_ns_dtype(tdi.astype('timedelta64[h]')))
+ assert not is_timedelta64_ns_dtype(tdi.astype('timedelta64'))
+ assert not is_timedelta64_ns_dtype(tdi.astype('timedelta64[h]'))
class Testisscalar(tm.TestCase):
@@ -909,13 +905,13 @@ def test_isscalar_builtin_scalars(self):
self.assertTrue(is_scalar(pd.NaT))
def test_isscalar_builtin_nonscalars(self):
- self.assertFalse(is_scalar({}))
- self.assertFalse(is_scalar([]))
- self.assertFalse(is_scalar([1]))
- self.assertFalse(is_scalar(()))
- self.assertFalse(is_scalar((1, )))
- self.assertFalse(is_scalar(slice(None)))
- self.assertFalse(is_scalar(Ellipsis))
+ assert not is_scalar({})
+ assert not is_scalar([])
+ assert not is_scalar([1])
+ assert not is_scalar(())
+ assert not is_scalar((1, ))
+ assert not is_scalar(slice(None))
+ assert not is_scalar(Ellipsis)
def test_isscalar_numpy_array_scalars(self):
self.assertTrue(is_scalar(np.int64(1)))
@@ -933,13 +929,13 @@ def test_isscalar_numpy_zerodim_arrays(self):
np.array(np.datetime64('2014-01-01')),
np.array(np.timedelta64(1, 'h')),
np.array(np.datetime64('NaT'))]:
- self.assertFalse(is_scalar(zerodim))
+ assert not is_scalar(zerodim)
self.assertTrue(is_scalar(lib.item_from_zerodim(zerodim)))
def test_isscalar_numpy_arrays(self):
- self.assertFalse(is_scalar(np.array([])))
- self.assertFalse(is_scalar(np.array([[]])))
- self.assertFalse(is_scalar(np.matrix('1; 2')))
+ assert not is_scalar(np.array([]))
+ assert not is_scalar(np.array([[]]))
+ assert not is_scalar(np.matrix('1; 2'))
def test_isscalar_pandas_scalars(self):
self.assertTrue(is_scalar(Timestamp('2014-01-01')))
@@ -947,15 +943,15 @@ def test_isscalar_pandas_scalars(self):
self.assertTrue(is_scalar(Period('2014-01-01')))
def test_lisscalar_pandas_containers(self):
- self.assertFalse(is_scalar(Series()))
- self.assertFalse(is_scalar(Series([1])))
- self.assertFalse(is_scalar(DataFrame()))
- self.assertFalse(is_scalar(DataFrame([[1]])))
+ assert not is_scalar(Series())
+ assert not is_scalar(Series([1]))
+ assert not is_scalar(DataFrame())
+ assert not is_scalar(DataFrame([[1]]))
with catch_warnings(record=True):
- self.assertFalse(is_scalar(Panel()))
- self.assertFalse(is_scalar(Panel([[[1]]])))
- self.assertFalse(is_scalar(Index([])))
- self.assertFalse(is_scalar(Index([1])))
+ assert not is_scalar(Panel())
+ assert not is_scalar(Panel([[[1]]]))
+ assert not is_scalar(Index([]))
+ assert not is_scalar(Index([1]))
def test_datetimeindex_from_empty_datetime64_array():
diff --git a/pandas/tests/dtypes/test_missing.py b/pandas/tests/dtypes/test_missing.py
index c03ba2b7daf50..3e1a12d439b9a 100644
--- a/pandas/tests/dtypes/test_missing.py
+++ b/pandas/tests/dtypes/test_missing.py
@@ -49,12 +49,12 @@ class TestIsNull(tm.TestCase):
def test_0d_array(self):
self.assertTrue(isnull(np.array(np.nan)))
- self.assertFalse(isnull(np.array(0.0)))
- self.assertFalse(isnull(np.array(0)))
+ assert not isnull(np.array(0.0))
+ assert not isnull(np.array(0))
# test object dtype
self.assertTrue(isnull(np.array(np.nan, dtype=object)))
- self.assertFalse(isnull(np.array(0.0, dtype=object)))
- self.assertFalse(isnull(np.array(0, dtype=object)))
+ assert not isnull(np.array(0.0, dtype=object))
+ assert not isnull(np.array(0, dtype=object))
def test_empty_object(self):
@@ -65,12 +65,12 @@ def test_empty_object(self):
tm.assert_numpy_array_equal(result, expected)
def test_isnull(self):
- self.assertFalse(isnull(1.))
+ assert not isnull(1.)
self.assertTrue(isnull(None))
self.assertTrue(isnull(np.NaN))
self.assertTrue(float('nan'))
- self.assertFalse(isnull(np.inf))
- self.assertFalse(isnull(-np.inf))
+ assert not isnull(np.inf)
+ assert not isnull(-np.inf)
# series
for s in [tm.makeFloatSeries(), tm.makeStringSeries(),
@@ -135,7 +135,7 @@ def test_isnull_numpy_nat(self):
tm.assert_numpy_array_equal(result, expected)
def test_isnull_datetime(self):
- self.assertFalse(isnull(datetime.now()))
+ assert not isnull(datetime.now())
self.assertTrue(notnull(datetime.now()))
idx = date_range('1/1/1990', periods=20)
diff --git a/pandas/tests/frame/test_analytics.py b/pandas/tests/frame/test_analytics.py
index 45d93c187e0b7..6268ccc27c7a6 100644
--- a/pandas/tests/frame/test_analytics.py
+++ b/pandas/tests/frame/test_analytics.py
@@ -558,11 +558,11 @@ def test_var_std(self):
arr = np.repeat(np.random.random((1, 1000)), 1000, 0)
result = nanops.nanvar(arr, axis=0)
- self.assertFalse((result < 0).any())
+ assert not (result < 0).any()
if nanops._USE_BOTTLENECK:
nanops._USE_BOTTLENECK = False
result = nanops.nanvar(arr, axis=0)
- self.assertFalse((result < 0).any())
+ assert not (result < 0).any()
nanops._USE_BOTTLENECK = True
def test_numeric_only_flag(self):
@@ -671,11 +671,11 @@ def test_sem(self):
arr = np.repeat(np.random.random((1, 1000)), 1000, 0)
result = nanops.nansem(arr, axis=0)
- self.assertFalse((result < 0).any())
+ assert not (result < 0).any()
if nanops._USE_BOTTLENECK:
nanops._USE_BOTTLENECK = False
result = nanops.nansem(arr, axis=0)
- self.assertFalse((result < 0).any())
+ assert not (result < 0).any()
nanops._USE_BOTTLENECK = True
def test_skew(self):
@@ -1131,8 +1131,8 @@ def __nonzero__(self):
r0 = getattr(all_na, name)(axis=0)
r1 = getattr(all_na, name)(axis=1)
if name == 'any':
- self.assertFalse(r0.any())
- self.assertFalse(r1.any())
+ assert not r0.any()
+ assert not r1.any()
else:
self.assertTrue(r0.all())
self.assertTrue(r1.all())
@@ -1801,13 +1801,13 @@ def test_clip(self):
median = self.frame.median().median()
capped = self.frame.clip_upper(median)
- self.assertFalse((capped.values > median).any())
+ assert not (capped.values > median).any()
floored = self.frame.clip_lower(median)
- self.assertFalse((floored.values < median).any())
+ assert not (floored.values < median).any()
double = self.frame.clip(upper=median, lower=median)
- self.assertFalse((double.values != median).any())
+ assert not (double.values != median).any()
def test_dataframe_clip(self):
# GH #2747
diff --git a/pandas/tests/frame/test_api.py b/pandas/tests/frame/test_api.py
index bd4abd6fcd822..7669de17885f8 100644
--- a/pandas/tests/frame/test_api.py
+++ b/pandas/tests/frame/test_api.py
@@ -141,15 +141,15 @@ def test_get_agg_axis(self):
def test_nonzero(self):
self.assertTrue(self.empty.empty)
- self.assertFalse(self.frame.empty)
- self.assertFalse(self.mixed_frame.empty)
+ assert not self.frame.empty
+ assert not self.mixed_frame.empty
# corner case
df = DataFrame({'A': [1., 2., 3.],
'B': ['a', 'b', 'c']},
index=np.arange(3))
del df['A']
- self.assertFalse(df.empty)
+ assert not df.empty
def test_iteritems(self):
df = DataFrame([[1, 2, 3], [4, 5, 6]], columns=['a', 'a', 'b'])
@@ -208,7 +208,7 @@ def test_itertuples(self):
df3 = DataFrame(dict(('f' + str(i), [i]) for i in range(1024)))
# will raise SyntaxError if trying to create namedtuple
tup3 = next(df3.itertuples())
- self.assertFalse(hasattr(tup3, '_fields'))
+ assert not hasattr(tup3, '_fields')
assert isinstance(tup3, tuple)
def test_len(self):
@@ -319,9 +319,9 @@ def test_series_put_names(self):
def test_empty_nonzero(self):
df = DataFrame([1, 2, 3])
- self.assertFalse(df.empty)
+ assert not df.empty
df = pd.DataFrame(index=[1], columns=[1])
- self.assertFalse(df.empty)
+ assert not df.empty
df = DataFrame(index=['a', 'b'], columns=['c', 'd']).dropna()
self.assertTrue(df.empty)
self.assertTrue(df.T.empty)
diff --git a/pandas/tests/frame/test_axis_select_reindex.py b/pandas/tests/frame/test_axis_select_reindex.py
index b8be7c19203fa..61d0694eea382 100644
--- a/pandas/tests/frame/test_axis_select_reindex.py
+++ b/pandas/tests/frame/test_axis_select_reindex.py
@@ -129,7 +129,7 @@ def test_drop_multiindex_not_lexsorted(self):
not_lexsorted_df = not_lexsorted_df.pivot_table(
index='a', columns=['b', 'c'], values='d')
not_lexsorted_df = not_lexsorted_df.reset_index()
- self.assertFalse(not_lexsorted_df.columns.is_lexsorted())
+ assert not not_lexsorted_df.columns.is_lexsorted()
# compare the results
tm.assert_frame_equal(lexsorted_df, not_lexsorted_df)
@@ -224,7 +224,7 @@ def test_reindex(self):
# copy with no axes
result = self.frame.reindex()
assert_frame_equal(result, self.frame)
- self.assertFalse(result is self.frame)
+ assert result is not self.frame
def test_reindex_nan(self):
df = pd.DataFrame([[1, 2], [3, 5], [7, 11], [9, 23]],
diff --git a/pandas/tests/frame/test_block_internals.py b/pandas/tests/frame/test_block_internals.py
index 5e85b890be569..37615179a3f26 100644
--- a/pandas/tests/frame/test_block_internals.py
+++ b/pandas/tests/frame/test_block_internals.py
@@ -69,7 +69,7 @@ def test_consolidate_inplace(self):
def test_as_matrix_consolidate(self):
self.frame['E'] = 7.
- self.assertFalse(self.frame._data.is_consolidated())
+ assert not self.frame._data.is_consolidated()
_ = self.frame.as_matrix() # noqa
self.assertTrue(self.frame._data.is_consolidated())
@@ -326,7 +326,7 @@ def test_copy_blocks(self):
_df.loc[:, column] = _df[column] + 1
# make sure we did not change the original DataFrame
- self.assertFalse(_df[column].equals(df[column]))
+ assert not _df[column].equals(df[column])
def test_no_copy_blocks(self):
# API/ENH 9607
@@ -399,7 +399,7 @@ def test_consolidate_datetime64(self):
tm.assert_index_equal(pd.DatetimeIndex(df.ending), ser_ending.index)
def test_is_mixed_type(self):
- self.assertFalse(self.frame._is_mixed_type)
+ assert not self.frame._is_mixed_type
self.assertTrue(self.mixed_frame._is_mixed_type)
def test_get_numeric_data(self):
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index db0293b71c3a3..e9a6f03abbe8d 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -1452,7 +1452,7 @@ def test_constructor_frame_copy(self):
cop = DataFrame(self.frame, copy=True)
cop['A'] = 5
self.assertTrue((cop['A'] == 5).all())
- self.assertFalse((self.frame['A'] == 5).all())
+ assert not (self.frame['A'] == 5).all()
def test_constructor_ndarray_copy(self):
df = DataFrame(self.frame.values)
@@ -1462,7 +1462,7 @@ def test_constructor_ndarray_copy(self):
df = DataFrame(self.frame.values, copy=True)
self.frame.values[6] = 6
- self.assertFalse((df.values[6] == 6).all())
+ assert not (df.values[6] == 6).all()
def test_constructor_series_copy(self):
series = self.frame._series
@@ -1470,7 +1470,7 @@ def test_constructor_series_copy(self):
df = DataFrame({'A': series['A']})
df['A'][:] = 5
- self.assertFalse((series['A'] == 5).all())
+ assert not (series['A'] == 5).all()
def test_constructor_with_nas(self):
# GH 5016
@@ -1512,7 +1512,7 @@ def test_constructor_lists_to_object_dtype(self):
# from #1074
d = DataFrame({'a': [np.nan, False]})
self.assertEqual(d['a'].dtype, np.object_)
- self.assertFalse(d['a'][1])
+ assert not d['a'][1]
def test_from_records_to_records(self):
# from numpy documentation
diff --git a/pandas/tests/frame/test_indexing.py b/pandas/tests/frame/test_indexing.py
index be4e69fe99a4e..ebc125ae09818 100644
--- a/pandas/tests/frame/test_indexing.py
+++ b/pandas/tests/frame/test_indexing.py
@@ -1938,7 +1938,7 @@ def test_reindex_frame_add_nat(self):
mask = com.isnull(result)['B']
self.assertTrue(mask[-5:].all())
- self.assertFalse(mask[:-5].any())
+ assert not mask[:-5].any()
def test_set_dataframe_column_ns_dtype(self):
x = DataFrame([datetime.now(), datetime.now()])
@@ -2940,8 +2940,7 @@ def test_setitem(self):
b1 = df._data.blocks[1]
b2 = df._data.blocks[2]
self.assertTrue(b1.values.equals(b2.values))
- self.assertFalse(id(b1.values.values.base) ==
- id(b2.values.values.base))
+ assert id(b1.values.values.base) != id(b2.values.values.base)
# with nan
df2 = df.copy()
diff --git a/pandas/tests/frame/test_mutate_columns.py b/pandas/tests/frame/test_mutate_columns.py
index b82a549bae3a0..d5035f2908528 100644
--- a/pandas/tests/frame/test_mutate_columns.py
+++ b/pandas/tests/frame/test_mutate_columns.py
@@ -223,7 +223,7 @@ def test_pop_non_unique_cols(self):
self.assertEqual(len(res), 2)
self.assertEqual(len(df.columns), 1)
self.assertTrue("b" in df.columns)
- self.assertFalse("a" in df.columns)
+ assert "a" not in df.columns
self.assertEqual(len(df.index), 2)
def test_insert_column_bug_4032(self):
diff --git a/pandas/tests/frame/test_operators.py b/pandas/tests/frame/test_operators.py
index d90e859509454..7f87666d5ecc4 100644
--- a/pandas/tests/frame/test_operators.py
+++ b/pandas/tests/frame/test_operators.py
@@ -236,7 +236,7 @@ def test_modulo(self):
s = p[0]
res = s % p
res2 = p % s
- self.assertFalse(np.array_equal(res.fillna(0), res2.fillna(0)))
+ assert not np.array_equal(res.fillna(0), res2.fillna(0))
def test_div(self):
@@ -270,7 +270,7 @@ def test_div(self):
s = p[0]
res = s / p
res2 = p / s
- self.assertFalse(np.array_equal(res.fillna(0), res2.fillna(0)))
+ assert not np.array_equal(res.fillna(0), res2.fillna(0))
def test_logical_operators(self):
@@ -574,7 +574,7 @@ def _check_unaligned_frame(meth, op, df, other):
# DataFrame
self.assertTrue(df.eq(df).values.all())
- self.assertFalse(df.ne(df).values.any())
+ assert not df.ne(df).values.any()
for op in ['eq', 'ne', 'gt', 'lt', 'ge', 'le']:
f = getattr(df, op)
o = getattr(operator, op)
@@ -634,17 +634,17 @@ def _test_seq(df, idx_ser, col_ser):
# NA
df.loc[0, 0] = np.nan
rs = df.eq(df)
- self.assertFalse(rs.loc[0, 0])
+ assert not rs.loc[0, 0]
rs = df.ne(df)
self.assertTrue(rs.loc[0, 0])
rs = df.gt(df)
- self.assertFalse(rs.loc[0, 0])
+ assert not rs.loc[0, 0]
rs = df.lt(df)
- self.assertFalse(rs.loc[0, 0])
+ assert not rs.loc[0, 0]
rs = df.ge(df)
- self.assertFalse(rs.loc[0, 0])
+ assert not rs.loc[0, 0]
rs = df.le(df)
- self.assertFalse(rs.loc[0, 0])
+ assert not rs.loc[0, 0]
# complex
arr = np.array([np.nan, 1, 6, np.nan])
@@ -652,14 +652,14 @@ def _test_seq(df, idx_ser, col_ser):
df = DataFrame({'a': arr})
df2 = DataFrame({'a': arr2})
rs = df.gt(df2)
- self.assertFalse(rs.values.any())
+ assert not rs.values.any()
rs = df.ne(df2)
self.assertTrue(rs.values.all())
arr3 = np.array([2j, np.nan, None])
df3 = DataFrame({'a': arr3})
rs = df3.gt(2j)
- self.assertFalse(rs.values.any())
+ assert not rs.values.any()
# corner, dtype=object
df1 = DataFrame({'col': ['foo', np.nan, 'bar']})
@@ -1021,7 +1021,7 @@ def test_boolean_comparison(self):
assert_numpy_array_equal(result, expected.values)
pytest.raises(ValueError, lambda: df == b_c)
- self.assertFalse(np.array_equal(df.values, b_c))
+ assert not np.array_equal(df.values, b_c)
# with alignment
df = DataFrame(np.arange(6).reshape((3, 2)),
diff --git a/pandas/tests/frame/test_repr_info.py b/pandas/tests/frame/test_repr_info.py
index 630fa5ad57fad..bcb85b6e44d54 100644
--- a/pandas/tests/frame/test_repr_info.py
+++ b/pandas/tests/frame/test_repr_info.py
@@ -72,9 +72,9 @@ def test_repr(self):
self.empty.info(buf=buf)
df = DataFrame(["a\n\r\tb"], columns=["a\n\r\td"], index=["a\n\r\tf"])
- self.assertFalse("\t" in repr(df))
- self.assertFalse("\r" in repr(df))
- self.assertFalse("a\n" in repr(df))
+ assert "\t" not in repr(df)
+ assert "\r" not in repr(df)
+ assert "a\n" not in repr(df)
def test_repr_dimensions(self):
df = DataFrame([[1, 2, ], [3, 4]])
@@ -82,10 +82,10 @@ def test_repr_dimensions(self):
self.assertTrue("2 rows x 2 columns" in repr(df))
with option_context('display.show_dimensions', False):
- self.assertFalse("2 rows x 2 columns" in repr(df))
+ assert "2 rows x 2 columns" not in repr(df)
with option_context('display.show_dimensions', 'truncate'):
- self.assertFalse("2 rows x 2 columns" in repr(df))
+ assert "2 rows x 2 columns" not in repr(df)
@tm.slow
def test_repr_big(self):
@@ -320,7 +320,7 @@ def test_info_memory_usage(self):
res = buf.getvalue().splitlines()
# excluded column with object dtype, so estimate is accurate
- self.assertFalse(re.match(r"memory usage: [^+]+\+", res[-1]))
+ assert not re.match(r"memory usage: [^+]+\+", res[-1])
df_with_object_index = pd.DataFrame({'a': [1]}, index=['foo'])
df_with_object_index.info(buf=buf, memory_usage=True)
@@ -388,7 +388,7 @@ def test_info_memory_usage_qualified(self):
df = DataFrame(1, columns=list('ab'),
index=[1, 2, 3])
df.info(buf=buf)
- self.assertFalse('+' in buf.getvalue())
+ assert '+' not in buf.getvalue()
buf = StringIO()
df = DataFrame(1, columns=list('ab'),
@@ -401,7 +401,7 @@ def test_info_memory_usage_qualified(self):
index=pd.MultiIndex.from_product(
[range(3), range(3)]))
df.info(buf=buf)
- self.assertFalse('+' in buf.getvalue())
+ assert '+' not in buf.getvalue()
buf = StringIO()
df = DataFrame(1, columns=list('ab'),
diff --git a/pandas/tests/frame/test_timeseries.py b/pandas/tests/frame/test_timeseries.py
index 7a5afa178208a..66af6aaca6513 100644
--- a/pandas/tests/frame/test_timeseries.py
+++ b/pandas/tests/frame/test_timeseries.py
@@ -350,7 +350,7 @@ def test_truncate_copy(self):
index = self.tsframe.index
truncated = self.tsframe.truncate(index[5], index[10])
truncated.values[:] = 5.
- self.assertFalse((self.tsframe.values[5:11] == 5).any())
+ assert not (self.tsframe.values[5:11] == 5).any()
def test_asfreq(self):
offset_monthly = self.tsframe.asfreq(offsets.BMonthEnd())
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index 177c2345ea143..0696473d0449f 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -330,7 +330,7 @@ def test_grouper_column_index_level_precedence(self):
expected = df_multi_both.groupby([pd.Grouper(key='inner')]).mean()
assert_frame_equal(result, expected)
not_expected = df_multi_both.groupby(pd.Grouper(level='inner')).mean()
- self.assertFalse(result.index.equals(not_expected.index))
+ assert not result.index.equals(not_expected.index)
# Group single Index by single key
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
@@ -339,7 +339,7 @@ def test_grouper_column_index_level_precedence(self):
expected = df_single_both.groupby([pd.Grouper(key='inner')]).mean()
assert_frame_equal(result, expected)
not_expected = df_single_both.groupby(pd.Grouper(level='inner')).mean()
- self.assertFalse(result.index.equals(not_expected.index))
+ assert not result.index.equals(not_expected.index)
# Group MultiIndex by single key list
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
@@ -348,7 +348,7 @@ def test_grouper_column_index_level_precedence(self):
expected = df_multi_both.groupby([pd.Grouper(key='inner')]).mean()
assert_frame_equal(result, expected)
not_expected = df_multi_both.groupby(pd.Grouper(level='inner')).mean()
- self.assertFalse(result.index.equals(not_expected.index))
+ assert not result.index.equals(not_expected.index)
# Group single Index by single key list
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
@@ -357,7 +357,7 @@ def test_grouper_column_index_level_precedence(self):
expected = df_single_both.groupby([pd.Grouper(key='inner')]).mean()
assert_frame_equal(result, expected)
not_expected = df_single_both.groupby(pd.Grouper(level='inner')).mean()
- self.assertFalse(result.index.equals(not_expected.index))
+ assert not result.index.equals(not_expected.index)
# Group MultiIndex by two keys (1)
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
@@ -369,7 +369,7 @@ def test_grouper_column_index_level_precedence(self):
not_expected = df_multi_both.groupby(['B',
pd.Grouper(level='inner')
]).mean()
- self.assertFalse(result.index.equals(not_expected.index))
+ assert not result.index.equals(not_expected.index)
# Group MultiIndex by two keys (2)
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
@@ -380,7 +380,7 @@ def test_grouper_column_index_level_precedence(self):
assert_frame_equal(result, expected)
not_expected = df_multi_both.groupby([pd.Grouper(level='inner'),
'B']).mean()
- self.assertFalse(result.index.equals(not_expected.index))
+ assert not result.index.equals(not_expected.index)
# Group single Index by two keys (1)
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
@@ -392,7 +392,7 @@ def test_grouper_column_index_level_precedence(self):
not_expected = df_single_both.groupby(['B',
pd.Grouper(level='inner')
]).mean()
- self.assertFalse(result.index.equals(not_expected.index))
+ assert not result.index.equals(not_expected.index)
# Group single Index by two keys (2)
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
@@ -403,7 +403,7 @@ def test_grouper_column_index_level_precedence(self):
assert_frame_equal(result, expected)
not_expected = df_single_both.groupby([pd.Grouper(level='inner'),
'B']).mean()
- self.assertFalse(result.index.equals(not_expected.index))
+ assert not result.index.equals(not_expected.index)
def test_grouper_getting_correct_binner(self):
@@ -2626,7 +2626,7 @@ def f(g):
group_keys = grouper._get_group_keys()
values, mutated = splitter.fast_apply(f, group_keys)
- self.assertFalse(mutated)
+ assert not mutated
def test_apply_with_mixed_dtype(self):
# GH3480, apply with mixed dtype on axis=1 breaks in 0.11
@@ -3263,7 +3263,7 @@ def test_groupby_multiindex_not_lexsorted(self):
not_lexsorted_df = not_lexsorted_df.pivot_table(
index='a', columns=['b', 'c'], values='d')
not_lexsorted_df = not_lexsorted_df.reset_index()
- self.assertFalse(not_lexsorted_df.columns.is_lexsorted())
+ assert not not_lexsorted_df.columns.is_lexsorted()
# compare the results
tm.assert_frame_equal(lexsorted_df, not_lexsorted_df)
@@ -3278,7 +3278,7 @@ def test_groupby_multiindex_not_lexsorted(self):
df = DataFrame({'x': ['a', 'a', 'b', 'a'],
'y': [1, 1, 2, 2],
'z': [1, 2, 3, 4]}).set_index(['x', 'y'])
- self.assertFalse(df.index.is_lexsorted())
+ assert not df.index.is_lexsorted()
for level in [0, 1, [0, 1]]:
for sort in [False, True]:
@@ -3595,7 +3595,7 @@ def test_max_nan_bug(self):
r = gb[['File']].max()
e = gb['File'].max().to_frame()
tm.assert_frame_equal(r, e)
- self.assertFalse(r['File'].isnull().any())
+ assert not r['File'].isnull().any()
def test_nlargest(self):
a = Series([1, 3, 5, 7, 2, 9, 0, 4, 6, 10])
diff --git a/pandas/tests/indexes/common.py b/pandas/tests/indexes/common.py
index 56a9af73e904a..23b1de76234c3 100644
--- a/pandas/tests/indexes/common.py
+++ b/pandas/tests/indexes/common.py
@@ -170,7 +170,7 @@ def test_repr_max_seq_item_setting(self):
idx = idx.repeat(50)
with pd.option_context("display.max_seq_items", None):
repr(idx)
- self.assertFalse('...' in str(idx))
+ assert '...' not in str(idx)
def test_wrong_number_names(self):
def testit(ind):
@@ -303,7 +303,7 @@ def test_duplicates(self):
if isinstance(ind, MultiIndex):
continue
idx = self._holder([ind[0]] * 5)
- self.assertFalse(idx.is_unique)
+ assert not idx.is_unique
self.assertTrue(idx.has_duplicates)
# GH 10115
@@ -327,7 +327,7 @@ def test_get_unique_index(self):
# and doesn't contain nans.
self.assertTrue(idx_unique.is_unique)
try:
- self.assertFalse(idx_unique.hasnans)
+ assert not idx_unique.hasnans
except NotImplementedError:
pass
@@ -705,8 +705,8 @@ def test_equals(self):
self.assertTrue(idx.equals(idx.copy()))
self.assertTrue(idx.equals(idx.astype(object)))
- self.assertFalse(idx.equals(list(idx)))
- self.assertFalse(idx.equals(np.array(idx)))
+ assert not idx.equals(list(idx))
+ assert not idx.equals(np.array(idx))
# Cannot pass in non-int64 dtype to RangeIndex
if not isinstance(idx, RangeIndex):
@@ -716,7 +716,7 @@ def test_equals(self):
if idx.nlevels == 1:
# do not test MultiIndex
- self.assertFalse(idx.equals(pd.Series(idx)))
+ assert not idx.equals(pd.Series(idx))
def test_equals_op(self):
# GH9947, GH10637
@@ -843,7 +843,7 @@ def test_hasnans_isnans(self):
# cases in indices doesn't include NaN
expected = np.array([False] * len(idx), dtype=bool)
tm.assert_numpy_array_equal(idx._isnan, expected)
- self.assertFalse(idx.hasnans)
+ assert not idx.hasnans
idx = index.copy()
values = idx.values
@@ -881,7 +881,7 @@ def test_fillna(self):
idx = index.copy()
result = idx.fillna(idx[0])
tm.assert_index_equal(result, idx)
- self.assertFalse(result is idx)
+ assert result is not idx
msg = "'value' must be a scalar, passed: "
with tm.assert_raises_regex(TypeError, msg):
@@ -935,5 +935,5 @@ def test_nulls(self):
def test_empty(self):
# GH 15270
index = self.create_index()
- self.assertFalse(index.empty)
+ assert not index.empty
self.assertTrue(index[:0].empty)
diff --git a/pandas/tests/indexes/datetimelike.py b/pandas/tests/indexes/datetimelike.py
index 470c0c2aad01a..338dba9ef6c4f 100644
--- a/pandas/tests/indexes/datetimelike.py
+++ b/pandas/tests/indexes/datetimelike.py
@@ -16,7 +16,7 @@ def test_str(self):
# test the string repr
idx = self.create_index()
idx.name = 'foo'
- self.assertFalse("length=%s" % len(idx) in str(idx))
+ assert "length=%s" % len(idx) not in str(idx)
self.assertTrue("'foo'" in str(idx))
self.assertTrue(idx.__class__.__name__ in str(idx))
diff --git a/pandas/tests/indexes/datetimes/test_astype.py b/pandas/tests/indexes/datetimes/test_astype.py
index 755944d342ed4..7e695164db971 100644
--- a/pandas/tests/indexes/datetimes/test_astype.py
+++ b/pandas/tests/indexes/datetimes/test_astype.py
@@ -101,7 +101,7 @@ def test_astype_datetime64(self):
result = idx.astype('datetime64[ns]')
tm.assert_index_equal(result, idx)
- self.assertFalse(result is idx)
+ assert result is not idx
result = idx.astype('datetime64[ns]', copy=False)
tm.assert_index_equal(result, idx)
diff --git a/pandas/tests/indexes/datetimes/test_construction.py b/pandas/tests/indexes/datetimes/test_construction.py
index ea9f7c65fb49b..8ce2085032ca1 100644
--- a/pandas/tests/indexes/datetimes/test_construction.py
+++ b/pandas/tests/indexes/datetimes/test_construction.py
@@ -493,7 +493,7 @@ def test_is_(self):
dti = DatetimeIndex(start='1/1/2005', end='12/1/2005', freq='M')
self.assertTrue(dti.is_(dti))
self.assertTrue(dti.is_(dti.view()))
- self.assertFalse(dti.is_(dti.copy()))
+ assert not dti.is_(dti.copy())
def test_index_cast_datetime64_other_units(self):
arr = np.arange(0, 100, 10, dtype=np.int64).view('M8[D]')
diff --git a/pandas/tests/indexes/datetimes/test_datetime.py b/pandas/tests/indexes/datetimes/test_datetime.py
index 8a4cff2974b0d..7ba9bf53abc4d 100644
--- a/pandas/tests/indexes/datetimes/test_datetime.py
+++ b/pandas/tests/indexes/datetimes/test_datetime.py
@@ -399,10 +399,10 @@ def test_misc_coverage(self):
assert isinstance(list(result.values())[0][0], Timestamp)
idx = DatetimeIndex(['2000-01-03', '2000-01-01', '2000-01-02'])
- self.assertFalse(idx.equals(list(idx)))
+ assert not idx.equals(list(idx))
non_datetime = Index(list('abc'))
- self.assertFalse(idx.equals(list(non_datetime)))
+ assert not idx.equals(list(non_datetime))
def test_string_index_series_name_converted(self):
# #1644
diff --git a/pandas/tests/indexes/datetimes/test_misc.py b/pandas/tests/indexes/datetimes/test_misc.py
index 4c7235fea63e8..22e77eebec06b 100644
--- a/pandas/tests/indexes/datetimes/test_misc.py
+++ b/pandas/tests/indexes/datetimes/test_misc.py
@@ -167,7 +167,7 @@ def test_normalize(self):
tm.assert_index_equal(rng_ns_normalized, expected)
self.assertTrue(result.is_normalized)
- self.assertFalse(rng.is_normalized)
+ assert not rng.is_normalized
class TestDatetime64(tm.TestCase):
diff --git a/pandas/tests/indexes/datetimes/test_ops.py b/pandas/tests/indexes/datetimes/test_ops.py
index 020bb0e27d9de..7e42e5e3db7ef 100644
--- a/pandas/tests/indexes/datetimes/test_ops.py
+++ b/pandas/tests/indexes/datetimes/test_ops.py
@@ -103,7 +103,7 @@ def test_minmax(self):
# non-monotonic
idx2 = pd.DatetimeIndex(['2011-01-01', pd.NaT, '2011-01-03',
'2011-01-02', pd.NaT], tz=tz)
- self.assertFalse(idx2.is_monotonic)
+ assert not idx2.is_monotonic
for idx in [idx1, idx2]:
self.assertEqual(idx.min(), Timestamp('2011-01-01', tz=tz))
@@ -889,7 +889,7 @@ def test_nat(self):
self.assertTrue(idx._can_hold_na)
tm.assert_numpy_array_equal(idx._isnan, np.array([False, False]))
- self.assertFalse(idx.hasnans)
+ assert not idx.hasnans
tm.assert_numpy_array_equal(idx._nan_idxs,
np.array([], dtype=np.intp))
@@ -910,27 +910,27 @@ def test_equals(self):
self.assertTrue(idx.equals(idx.asobject))
self.assertTrue(idx.asobject.equals(idx))
self.assertTrue(idx.asobject.equals(idx.asobject))
- self.assertFalse(idx.equals(list(idx)))
- self.assertFalse(idx.equals(pd.Series(idx)))
+ assert not idx.equals(list(idx))
+ assert not idx.equals(pd.Series(idx))
idx2 = pd.DatetimeIndex(['2011-01-01', '2011-01-02', 'NaT'],
tz='US/Pacific')
- self.assertFalse(idx.equals(idx2))
- self.assertFalse(idx.equals(idx2.copy()))
- self.assertFalse(idx.equals(idx2.asobject))
- self.assertFalse(idx.asobject.equals(idx2))
- self.assertFalse(idx.equals(list(idx2)))
- self.assertFalse(idx.equals(pd.Series(idx2)))
+ assert not idx.equals(idx2)
+ assert not idx.equals(idx2.copy())
+ assert not idx.equals(idx2.asobject)
+ assert not idx.asobject.equals(idx2)
+ assert not idx.equals(list(idx2))
+ assert not idx.equals(pd.Series(idx2))
# same internal, different tz
idx3 = pd.DatetimeIndex._simple_new(idx.asi8, tz='US/Pacific')
tm.assert_numpy_array_equal(idx.asi8, idx3.asi8)
- self.assertFalse(idx.equals(idx3))
- self.assertFalse(idx.equals(idx3.copy()))
- self.assertFalse(idx.equals(idx3.asobject))
- self.assertFalse(idx.asobject.equals(idx3))
- self.assertFalse(idx.equals(list(idx3)))
- self.assertFalse(idx.equals(pd.Series(idx3)))
+ assert not idx.equals(idx3)
+ assert not idx.equals(idx3.copy())
+ assert not idx.equals(idx3.asobject)
+ assert not idx.asobject.equals(idx3)
+ assert not idx.equals(list(idx3))
+ assert not idx.equals(pd.Series(idx3))
class TestDateTimeIndexToJulianDate(tm.TestCase):
@@ -1119,7 +1119,7 @@ def test_comparison(self):
comp = self.rng > d
self.assertTrue(comp[11])
- self.assertFalse(comp[9])
+ assert not comp[9]
def test_pickle_unpickle(self):
unpickled = tm.round_trip_pickle(self.rng)
@@ -1189,7 +1189,7 @@ def test_summary_dateutil(self):
bdate_range('1/1/2005', '1/1/2009', tz=dateutil.tz.tzutc()).summary()
def test_equals(self):
- self.assertFalse(self.rng.equals(list(self.rng)))
+ assert not self.rng.equals(list(self.rng))
def test_identical(self):
t1 = self.rng.copy()
@@ -1199,14 +1199,14 @@ def test_identical(self):
# name
t1 = t1.rename('foo')
self.assertTrue(t1.equals(t2))
- self.assertFalse(t1.identical(t2))
+ assert not t1.identical(t2)
t2 = t2.rename('foo')
self.assertTrue(t1.identical(t2))
# freq
t2v = Index(t2.values)
self.assertTrue(t1.equals(t2v))
- self.assertFalse(t1.identical(t2v))
+ assert not t1.identical(t2v)
class TestCustomDatetimeIndex(tm.TestCase):
@@ -1219,7 +1219,7 @@ def test_comparison(self):
comp = self.rng > d
self.assertTrue(comp[11])
- self.assertFalse(comp[9])
+ assert not comp[9]
def test_copy(self):
cp = self.rng.copy()
@@ -1291,4 +1291,4 @@ def test_summary_dateutil(self):
cdate_range('1/1/2005', '1/1/2009', tz=dateutil.tz.tzutc()).summary()
def test_equals(self):
- self.assertFalse(self.rng.equals(list(self.rng)))
+ assert not self.rng.equals(list(self.rng))
diff --git a/pandas/tests/indexes/datetimes/test_tools.py b/pandas/tests/indexes/datetimes/test_tools.py
index 715825417cd31..941c9767e7a3a 100644
--- a/pandas/tests/indexes/datetimes/test_tools.py
+++ b/pandas/tests/indexes/datetimes/test_tools.py
@@ -1027,8 +1027,7 @@ def test_does_not_convert_mixed_integer(self):
bad_date_strings = ('-50000', '999', '123.1234', 'm', 'T')
for bad_date_string in bad_date_strings:
- self.assertFalse(tslib._does_string_look_like_datetime(
- bad_date_string))
+ assert not tslib._does_string_look_like_datetime(bad_date_string)
good_date_strings = ('2012-01-01',
'01/01/2012',
diff --git a/pandas/tests/indexes/period/test_ops.py b/pandas/tests/indexes/period/test_ops.py
index 70c0879a0871a..f133845f8404a 100644
--- a/pandas/tests/indexes/period/test_ops.py
+++ b/pandas/tests/indexes/period/test_ops.py
@@ -74,7 +74,7 @@ def test_minmax(self):
# non-monotonic
idx2 = pd.PeriodIndex(['2011-01-01', pd.NaT, '2011-01-03',
'2011-01-02', pd.NaT], freq='D')
- self.assertFalse(idx2.is_monotonic)
+ assert not idx2.is_monotonic
for idx in [idx1, idx2]:
self.assertEqual(idx.min(), pd.Period('2011-01-01', freq='D'))
@@ -806,7 +806,7 @@ def test_nat(self):
self.assertTrue(idx._can_hold_na)
tm.assert_numpy_array_equal(idx._isnan, np.array([False, False]))
- self.assertFalse(idx.hasnans)
+ assert not idx.hasnans
tm.assert_numpy_array_equal(idx._nan_idxs,
np.array([], dtype=np.intp))
@@ -828,27 +828,27 @@ def test_equals(self):
self.assertTrue(idx.equals(idx.asobject))
self.assertTrue(idx.asobject.equals(idx))
self.assertTrue(idx.asobject.equals(idx.asobject))
- self.assertFalse(idx.equals(list(idx)))
- self.assertFalse(idx.equals(pd.Series(idx)))
+ assert not idx.equals(list(idx))
+ assert not idx.equals(pd.Series(idx))
idx2 = pd.PeriodIndex(['2011-01-01', '2011-01-02', 'NaT'],
freq='H')
- self.assertFalse(idx.equals(idx2))
- self.assertFalse(idx.equals(idx2.copy()))
- self.assertFalse(idx.equals(idx2.asobject))
- self.assertFalse(idx.asobject.equals(idx2))
- self.assertFalse(idx.equals(list(idx2)))
- self.assertFalse(idx.equals(pd.Series(idx2)))
+ assert not idx.equals(idx2)
+ assert not idx.equals(idx2.copy())
+ assert not idx.equals(idx2.asobject)
+ assert not idx.asobject.equals(idx2)
+ assert not idx.equals(list(idx2))
+ assert not idx.equals(pd.Series(idx2))
# same internal, different tz
idx3 = pd.PeriodIndex._simple_new(idx.asi8, freq='H')
tm.assert_numpy_array_equal(idx.asi8, idx3.asi8)
- self.assertFalse(idx.equals(idx3))
- self.assertFalse(idx.equals(idx3.copy()))
- self.assertFalse(idx.equals(idx3.asobject))
- self.assertFalse(idx.asobject.equals(idx3))
- self.assertFalse(idx.equals(list(idx3)))
- self.assertFalse(idx.equals(pd.Series(idx3)))
+ assert not idx.equals(idx3)
+ assert not idx.equals(idx3.copy())
+ assert not idx.equals(idx3.asobject)
+ assert not idx.asobject.equals(idx3)
+ assert not idx.equals(list(idx3))
+ assert not idx.equals(pd.Series(idx3))
class TestPeriodIndexSeriesMethods(tm.TestCase):
diff --git a/pandas/tests/indexes/period/test_period.py b/pandas/tests/indexes/period/test_period.py
index e563f683bf8ca..df3f6023a6506 100644
--- a/pandas/tests/indexes/period/test_period.py
+++ b/pandas/tests/indexes/period/test_period.py
@@ -512,16 +512,16 @@ def test_contains(self):
rng = period_range('2007-01', freq='M', periods=10)
self.assertTrue(Period('2007-01', freq='M') in rng)
- self.assertFalse(Period('2007-01', freq='D') in rng)
- self.assertFalse(Period('2007-01', freq='2M') in rng)
+ assert Period('2007-01', freq='D') not in rng
+ assert Period('2007-01', freq='2M') not in rng
def test_contains_nat(self):
- # GH13582
+ # see gh-13582
idx = period_range('2007-01', freq='M', periods=10)
- self.assertFalse(pd.NaT in idx)
- self.assertFalse(None in idx)
- self.assertFalse(float('nan') in idx)
- self.assertFalse(np.nan in idx)
+ assert pd.NaT not in idx
+ assert None not in idx
+ assert float('nan') not in idx
+ assert np.nan not in idx
idx = pd.PeriodIndex(['2011-01', 'NaT', '2011-02'], freq='M')
self.assertTrue(pd.NaT in idx)
@@ -709,13 +709,13 @@ def test_iteration(self):
def test_is_full(self):
index = PeriodIndex([2005, 2007, 2009], freq='A')
- self.assertFalse(index.is_full)
+ assert not index.is_full
index = PeriodIndex([2005, 2006, 2007], freq='A')
self.assertTrue(index.is_full)
index = PeriodIndex([2005, 2005, 2007], freq='A')
- self.assertFalse(index.is_full)
+ assert not index.is_full
index = PeriodIndex([2005, 2005, 2006], freq='A')
self.assertTrue(index.is_full)
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index caf2dde249600..2f07cf3c8270f 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -414,13 +414,13 @@ def test_equals_object(self):
self.assertTrue(Index(['a', 'b', 'c']).equals(Index(['a', 'b', 'c'])))
# different length
- self.assertFalse(Index(['a', 'b', 'c']).equals(Index(['a', 'b'])))
+ assert not Index(['a', 'b', 'c']).equals(Index(['a', 'b']))
# same length, different values
- self.assertFalse(Index(['a', 'b', 'c']).equals(Index(['a', 'b', 'd'])))
+ assert not Index(['a', 'b', 'c']).equals(Index(['a', 'b', 'd']))
# Must also be an Index
- self.assertFalse(Index(['a', 'b', 'c']).equals(['a', 'b', 'c']))
+ assert not Index(['a', 'b', 'c']).equals(['a', 'b', 'c'])
def test_insert(self):
@@ -470,25 +470,25 @@ def test_identical(self):
i1 = i1.rename('foo')
self.assertTrue(i1.equals(i2))
- self.assertFalse(i1.identical(i2))
+ assert not i1.identical(i2)
i2 = i2.rename('foo')
self.assertTrue(i1.identical(i2))
i3 = Index([('a', 'a'), ('a', 'b'), ('b', 'a')])
i4 = Index([('a', 'a'), ('a', 'b'), ('b', 'a')], tupleize_cols=False)
- self.assertFalse(i3.identical(i4))
+ assert not i3.identical(i4)
def test_is_(self):
ind = Index(range(10))
self.assertTrue(ind.is_(ind))
self.assertTrue(ind.is_(ind.view().view().view().view()))
- self.assertFalse(ind.is_(Index(range(10))))
- self.assertFalse(ind.is_(ind.copy()))
- self.assertFalse(ind.is_(ind.copy(deep=False)))
- self.assertFalse(ind.is_(ind[:]))
- self.assertFalse(ind.is_(ind.view(np.ndarray).view(Index)))
- self.assertFalse(ind.is_(np.array(range(10))))
+ assert not ind.is_(Index(range(10)))
+ assert not ind.is_(ind.copy())
+ assert not ind.is_(ind.copy(deep=False))
+ assert not ind.is_(ind[:])
+ assert not ind.is_(ind.view(np.ndarray).view(Index))
+ assert not ind.is_(np.array(range(10)))
# quasi-implementation dependent
self.assertTrue(ind.is_(ind.view()))
@@ -497,11 +497,11 @@ def test_is_(self):
self.assertTrue(ind.is_(ind2))
self.assertTrue(ind2.is_(ind))
# doesn't matter if Indices are *actually* views of underlying data,
- self.assertFalse(ind.is_(Index(ind.values)))
+ assert not ind.is_(Index(ind.values))
arr = np.array(range(1, 11))
ind1 = Index(arr, copy=False)
ind2 = Index(arr, copy=False)
- self.assertFalse(ind1.is_(ind2))
+ assert not ind1.is_(ind2)
def test_asof(self):
d = self.dateIndex[0]
@@ -519,7 +519,7 @@ def test_asof_datetime_partial(self):
expected = Timestamp('2010-02-28')
result = idx.asof('2010-02')
self.assertEqual(result, expected)
- self.assertFalse(isinstance(result, Index))
+ assert not isinstance(result, Index)
def test_nanosecond_index_access(self):
s = Series([Timestamp('20130101')]).values.view('i8')[0]
@@ -938,24 +938,24 @@ def test_symmetric_difference(self):
self.assertEqual(result.name, 'new_name')
def test_is_numeric(self):
- self.assertFalse(self.dateIndex.is_numeric())
- self.assertFalse(self.strIndex.is_numeric())
+ assert not self.dateIndex.is_numeric()
+ assert not self.strIndex.is_numeric()
self.assertTrue(self.intIndex.is_numeric())
self.assertTrue(self.floatIndex.is_numeric())
- self.assertFalse(self.catIndex.is_numeric())
+ assert not self.catIndex.is_numeric()
def test_is_object(self):
self.assertTrue(self.strIndex.is_object())
self.assertTrue(self.boolIndex.is_object())
- self.assertFalse(self.catIndex.is_object())
- self.assertFalse(self.intIndex.is_object())
- self.assertFalse(self.dateIndex.is_object())
- self.assertFalse(self.floatIndex.is_object())
+ assert not self.catIndex.is_object()
+ assert not self.intIndex.is_object()
+ assert not self.dateIndex.is_object()
+ assert not self.floatIndex.is_object()
def test_is_all_dates(self):
self.assertTrue(self.dateIndex.is_all_dates)
- self.assertFalse(self.strIndex.is_all_dates)
- self.assertFalse(self.intIndex.is_all_dates)
+ assert not self.strIndex.is_all_dates
+ assert not self.intIndex.is_all_dates
def test_summary(self):
self._check_method_works(Index.summary)
@@ -1331,8 +1331,8 @@ def test_tuple_union_bug(self):
def test_is_monotonic_incomparable(self):
index = Index([5, datetime.now(), 7])
- self.assertFalse(index.is_monotonic)
- self.assertFalse(index.is_monotonic_decreasing)
+ assert not index.is_monotonic
+ assert not index.is_monotonic_decreasing
def test_get_set_value(self):
values = np.random.randn(100)
@@ -2031,8 +2031,8 @@ def test_is_monotonic_na(self):
pd.to_datetime(['2000-01-01', 'NaT', '2000-01-02']),
pd.to_timedelta(['1 day', 'NaT']), ]
for index in examples:
- self.assertFalse(index.is_monotonic_increasing)
- self.assertFalse(index.is_monotonic_decreasing)
+ assert not index.is_monotonic_increasing
+ assert not index.is_monotonic_decreasing
def test_repr_summary(self):
with cf.option_context('display.max_seq_items', 10):
diff --git a/pandas/tests/indexes/test_category.py b/pandas/tests/indexes/test_category.py
index 5dcd45e8c85b0..5c9df55d2b508 100644
--- a/pandas/tests/indexes/test_category.py
+++ b/pandas/tests/indexes/test_category.py
@@ -183,8 +183,8 @@ def test_contains(self):
self.assertTrue(np.nan not in ci)
# assert codes NOT in index
- self.assertFalse(0 in ci)
- self.assertFalse(1 in ci)
+ assert 0 not in ci
+ assert 1 not in ci
ci = CategoricalIndex(
list('aabbca') + [np.nan], categories=list('cabdef'))
@@ -423,7 +423,7 @@ def test_reindex_dtype(self):
def test_duplicates(self):
idx = CategoricalIndex([0, 0, 0], name='foo')
- self.assertFalse(idx.is_unique)
+ assert not idx.is_unique
self.assertTrue(idx.has_duplicates)
expected = CategoricalIndex([0], name='foo')
@@ -539,7 +539,7 @@ def test_identical(self):
ordered=True)
self.assertTrue(ci1.identical(ci1))
self.assertTrue(ci1.identical(ci1.copy()))
- self.assertFalse(ci1.identical(ci2))
+ assert not ci1.identical(ci2)
def test_ensure_copied_data(self):
# gh-12309: Check the "copy" argument of each
@@ -563,18 +563,18 @@ def test_equals_categorical(self):
ordered=True)
self.assertTrue(ci1.equals(ci1))
- self.assertFalse(ci1.equals(ci2))
+ assert not ci1.equals(ci2)
self.assertTrue(ci1.equals(ci1.astype(object)))
self.assertTrue(ci1.astype(object).equals(ci1))
self.assertTrue((ci1 == ci1).all())
- self.assertFalse((ci1 != ci1).all())
- self.assertFalse((ci1 > ci1).all())
- self.assertFalse((ci1 < ci1).all())
+ assert not (ci1 != ci1).all()
+ assert not (ci1 > ci1).all()
+ assert not (ci1 < ci1).all()
self.assertTrue((ci1 <= ci1).all())
self.assertTrue((ci1 >= ci1).all())
- self.assertFalse((ci1 == 1).all())
+ assert not (ci1 == 1).all()
self.assertTrue((ci1 == Index(['a', 'b'])).all())
self.assertTrue((ci1 == ci1.values).all())
@@ -591,20 +591,20 @@ def test_equals_categorical(self):
# tests
# make sure that we are testing for category inclusion properly
ci = CategoricalIndex(list('aabca'), categories=['c', 'a', 'b'])
- self.assertFalse(ci.equals(list('aabca')))
- self.assertFalse(ci.equals(CategoricalIndex(list('aabca'))))
+ assert not ci.equals(list('aabca'))
+ assert not ci.equals(CategoricalIndex(list('aabca')))
self.assertTrue(ci.equals(ci.copy()))
ci = CategoricalIndex(list('aabca') + [np.nan],
categories=['c', 'a', 'b'])
- self.assertFalse(ci.equals(list('aabca')))
- self.assertFalse(ci.equals(CategoricalIndex(list('aabca'))))
+ assert not ci.equals(list('aabca'))
+ assert not ci.equals(CategoricalIndex(list('aabca')))
self.assertTrue(ci.equals(ci.copy()))
ci = CategoricalIndex(list('aabca') + [np.nan],
categories=['c', 'a', 'b'])
- self.assertFalse(ci.equals(list('aabca') + [np.nan]))
- self.assertFalse(ci.equals(CategoricalIndex(list('aabca') + [np.nan])))
+ assert not ci.equals(list('aabca') + [np.nan])
+ assert not ci.equals(CategoricalIndex(list('aabca') + [np.nan]))
self.assertTrue(ci.equals(ci.copy()))
def test_string_categorical_index_repr(self):
diff --git a/pandas/tests/indexes/test_interval.py b/pandas/tests/indexes/test_interval.py
index ec56791a6ec67..2e16e16e0b2c4 100644
--- a/pandas/tests/indexes/test_interval.py
+++ b/pandas/tests/indexes/test_interval.py
@@ -30,7 +30,7 @@ def test_constructors(self):
self.assertTrue(expected.equals(actual))
alternate = IntervalIndex.from_breaks(np.arange(3), closed='left')
- self.assertFalse(expected.equals(alternate))
+ assert not expected.equals(alternate)
actual = IntervalIndex.from_intervals([Interval(0, 1), Interval(1, 2)])
self.assertTrue(expected.equals(actual))
@@ -151,7 +151,7 @@ def test_properties(self):
def test_with_nans(self):
index = self.index
- self.assertFalse(index.hasnans)
+ assert not index.hasnans
tm.assert_numpy_array_equal(index.isnull(),
np.array([False, False]))
tm.assert_numpy_array_equal(index.notnull(),
@@ -196,14 +196,13 @@ def test_equals(self):
self.assertTrue(idx.equals(idx))
self.assertTrue(idx.equals(idx.copy()))
- self.assertFalse(idx.equals(idx.astype(object)))
- self.assertFalse(idx.equals(np.array(idx)))
- self.assertFalse(idx.equals(list(idx)))
+ assert not idx.equals(idx.astype(object))
+ assert not idx.equals(np.array(idx))
+ assert not idx.equals(list(idx))
- self.assertFalse(idx.equals([1, 2]))
- self.assertFalse(idx.equals(np.array([1, 2])))
- self.assertFalse(idx.equals(
- pd.date_range('20130101', periods=2)))
+ assert not idx.equals([1, 2])
+ assert not idx.equals(np.array([1, 2]))
+ assert not idx.equals(pd.date_range('20130101', periods=2))
def test_astype(self):
@@ -216,7 +215,7 @@ def test_astype(self):
result = idx.astype(object)
tm.assert_index_equal(result, Index(idx.values, dtype='object'))
- self.assertFalse(idx.equals(result))
+ assert not idx.equals(result)
self.assertTrue(idx.equals(IntervalIndex.from_intervals(result)))
result = idx.astype('interval')
@@ -272,11 +271,11 @@ def test_monotonic_and_unique(self):
self.assertTrue(idx.is_unique)
idx = IntervalIndex.from_tuples([(0, 1), (2, 3), (1, 2)])
- self.assertFalse(idx.is_monotonic)
+ assert not idx.is_monotonic
self.assertTrue(idx.is_unique)
idx = IntervalIndex.from_tuples([(0, 2), (0, 2)])
- self.assertFalse(idx.is_unique)
+ assert not idx.is_unique
self.assertTrue(idx.is_monotonic)
@pytest.mark.xfail(reason='not a valid repr as we use interval notation')
diff --git a/pandas/tests/indexes/test_multi.py b/pandas/tests/indexes/test_multi.py
index ab403cf56e033..6f6e1f1544219 100644
--- a/pandas/tests/indexes/test_multi.py
+++ b/pandas/tests/indexes/test_multi.py
@@ -524,7 +524,7 @@ def test_reference_duplicate_name(self):
idx = MultiIndex.from_tuples(
[('a', 'b'), ('c', 'd')], names=['x', 'y'])
- self.assertFalse(idx._reference_duplicate_name('x'))
+ assert not idx._reference_duplicate_name('x')
def test_astype(self):
expected = self.index.copy()
@@ -1082,11 +1082,11 @@ def test_contains_with_nat(self):
self.assertTrue(val in mi)
def test_is_all_dates(self):
- self.assertFalse(self.index.is_all_dates)
+ assert not self.index.is_all_dates
def test_is_numeric(self):
# MultiIndex is never numeric
- self.assertFalse(self.index.is_numeric())
+ assert not self.index.is_numeric()
def test_getitem(self):
# scalar
@@ -1280,7 +1280,7 @@ def test_consistency(self):
index = MultiIndex(levels=[major_axis, minor_axis],
labels=[major_labels, minor_labels])
- self.assertFalse(index.is_unique)
+ assert not index.is_unique
def test_truncate(self):
major_axis = Index(lrange(4))
@@ -1526,9 +1526,9 @@ def test_equals_missing_values(self):
i = pd.MultiIndex.from_tuples([(0, pd.NaT),
(0, pd.Timestamp('20130101'))])
result = i[0:1].equals(i[0])
- self.assertFalse(result)
+ assert not result
result = i[1:2].equals(i[1])
- self.assertFalse(result)
+ assert not result
def test_identical(self):
mi = self.index.copy()
@@ -1537,7 +1537,7 @@ def test_identical(self):
mi = mi.set_names(['new1', 'new2'])
self.assertTrue(mi.equals(mi2))
- self.assertFalse(mi.identical(mi2))
+ assert not mi.identical(mi2)
mi2 = mi2.set_names(['new1', 'new2'])
self.assertTrue(mi.identical(mi2))
@@ -1545,7 +1545,7 @@ def test_identical(self):
mi3 = Index(mi.tolist(), names=mi.names)
mi4 = Index(mi.tolist(), names=mi.names, tupleize_cols=False)
self.assertTrue(mi.identical(mi3))
- self.assertFalse(mi.identical(mi4))
+ assert not mi.identical(mi4)
self.assertTrue(mi.equals(mi4))
def test_is_(self):
@@ -1565,15 +1565,15 @@ def test_is_(self):
self.assertTrue(mi.is_(mi2))
# levels are inherent properties, they change identity
mi3 = mi2.set_levels([lrange(10), lrange(10)])
- self.assertFalse(mi3.is_(mi2))
+ assert not mi3.is_(mi2)
# shouldn't change
self.assertTrue(mi2.is_(mi))
mi4 = mi3.view()
mi4.set_levels([[1 for _ in range(10)], lrange(10)], inplace=True)
- self.assertFalse(mi4.is_(mi3))
+ assert not mi4.is_(mi3)
mi5 = mi.view()
mi5.set_levels(mi5.levels, inplace=True)
- self.assertFalse(mi5.is_(mi))
+ assert not mi5.is_(mi)
def test_union(self):
piece1 = self.index[:5][::-1]
@@ -1862,7 +1862,7 @@ def test_drop_not_lexsorted(self):
df = df.pivot_table(index='a', columns=['b', 'c'], values='d')
df = df.reset_index()
not_lexsorted_mi = df.columns
- self.assertFalse(not_lexsorted_mi.is_lexsorted())
+ assert not not_lexsorted_mi.is_lexsorted()
# compare the results
tm.assert_index_equal(lexsorted_mi, not_lexsorted_mi)
@@ -2119,7 +2119,7 @@ def test_reindex_level(self):
level='first')
def test_duplicates(self):
- self.assertFalse(self.index.has_duplicates)
+ assert not self.index.has_duplicates
self.assertTrue(self.index.append(self.index).has_duplicates)
index = MultiIndex(levels=[[0, 1], [0, 1, 2]], labels=[
@@ -2147,7 +2147,7 @@ def test_duplicates(self):
(u('x'), u('out'), u('z'), 12, u('y'), u('in'), u('z'), 144)]
index = pd.MultiIndex.from_tuples(t)
- self.assertFalse(index.has_duplicates)
+ assert not index.has_duplicates
# handle int64 overflow if possible
def check(nlevels, with_nulls):
@@ -2168,7 +2168,7 @@ def check(nlevels, with_nulls):
# no dups
index = MultiIndex(levels=levels, labels=labels)
- self.assertFalse(index.has_duplicates)
+ assert not index.has_duplicates
# with a dup
if with_nulls:
@@ -2203,7 +2203,7 @@ def check(nlevels, with_nulls):
# GH5873
for a in [101, 102]:
mi = MultiIndex.from_arrays([[101, a], [3.5, np.nan]])
- self.assertFalse(mi.has_duplicates)
+ assert not mi.has_duplicates
self.assertEqual(mi.get_duplicates(), [])
tm.assert_numpy_array_equal(mi.duplicated(), np.zeros(
2, dtype='bool'))
@@ -2215,7 +2215,7 @@ def check(nlevels, with_nulls):
mi = MultiIndex(levels=[list('abcde')[:n], list('WXYZ')[:m]],
labels=np.random.permutation(list(lab)).T)
self.assertEqual(len(mi), (n + 1) * (m + 1))
- self.assertFalse(mi.has_duplicates)
+ assert not mi.has_duplicates
self.assertEqual(mi.get_duplicates(), [])
tm.assert_numpy_array_equal(mi.duplicated(), np.zeros(
len(mi), dtype='bool'))
@@ -2281,8 +2281,7 @@ def test_repr_with_unicode_data(self):
with pd.core.config.option_context("display.encoding", 'UTF-8'):
d = {"a": [u("\u05d0"), 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]}
index = pd.DataFrame(d).set_index(["a", "b"]).index
- self.assertFalse("\\u" in repr(index)
- ) # we don't want unicode-escaped
+ assert "\\u" not in repr(index) # we don't want unicode-escaped
def test_repr_roundtrip(self):
@@ -2376,7 +2375,7 @@ def test_level_setting_resets_attributes(self):
inplace=True)
# if this fails, probably didn't reset the cache correctly.
- self.assertFalse(ind.is_monotonic)
+ assert not ind.is_monotonic
def test_is_monotonic(self):
i = MultiIndex.from_product([np.arange(10),
@@ -2386,18 +2385,18 @@ def test_is_monotonic(self):
i = MultiIndex.from_product([np.arange(10, 0, -1),
np.arange(10)], names=['one', 'two'])
- self.assertFalse(i.is_monotonic)
- self.assertFalse(Index(i.values).is_monotonic)
+ assert not i.is_monotonic
+ assert not Index(i.values).is_monotonic
i = MultiIndex.from_product([np.arange(10),
np.arange(10, 0, -1)],
names=['one', 'two'])
- self.assertFalse(i.is_monotonic)
- self.assertFalse(Index(i.values).is_monotonic)
+ assert not i.is_monotonic
+ assert not Index(i.values).is_monotonic
i = MultiIndex.from_product([[1.0, np.nan, 2.0], ['a', 'b', 'c']])
- self.assertFalse(i.is_monotonic)
- self.assertFalse(Index(i.values).is_monotonic)
+ assert not i.is_monotonic
+ assert not Index(i.values).is_monotonic
# string ordering
i = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'],
@@ -2405,8 +2404,8 @@ def test_is_monotonic(self):
labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
[0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
names=['first', 'second'])
- self.assertFalse(i.is_monotonic)
- self.assertFalse(Index(i.values).is_monotonic)
+ assert not i.is_monotonic
+ assert not Index(i.values).is_monotonic
i = MultiIndex(levels=[['bar', 'baz', 'foo', 'qux'],
['mom', 'next', 'zenith']],
@@ -2424,7 +2423,7 @@ def test_is_monotonic(self):
labels=[[0, 1, 1, 2, 2, 2, 3], [4, 2, 0, 0, 1, 3, -1]],
names=['household_id', 'asset_id'])
- self.assertFalse(i.is_monotonic)
+ assert not i.is_monotonic
def test_reconstruct_sort(self):
diff --git a/pandas/tests/indexes/test_numeric.py b/pandas/tests/indexes/test_numeric.py
index 8a46da37572ff..8b4179dbf2e0e 100644
--- a/pandas/tests/indexes/test_numeric.py
+++ b/pandas/tests/indexes/test_numeric.py
@@ -381,8 +381,8 @@ def test_contains_not_nans(self):
def test_doesnt_contain_all_the_things(self):
i = Float64Index([np.nan])
- self.assertFalse(i.isin([0]).item())
- self.assertFalse(i.isin([1]).item())
+ assert not i.isin([0]).item()
+ assert not i.isin([1]).item()
self.assertTrue(i.isin([np.nan]).item())
def test_nan_multiple_containment(self):
@@ -465,10 +465,10 @@ def test_view(self):
def test_is_monotonic(self):
self.assertTrue(self.index.is_monotonic)
self.assertTrue(self.index.is_monotonic_increasing)
- self.assertFalse(self.index.is_monotonic_decreasing)
+ assert not self.index.is_monotonic_decreasing
index = self._holder([4, 3, 2, 1])
- self.assertFalse(index.is_monotonic)
+ assert not index.is_monotonic
self.assertTrue(index.is_monotonic_decreasing)
index = self._holder([1])
@@ -486,19 +486,19 @@ def test_identical(self):
self.assertTrue(i.identical(self.index))
same_values_different_type = Index(i, dtype=object)
- self.assertFalse(i.identical(same_values_different_type))
+ assert not i.identical(same_values_different_type)
i = self.index.copy(dtype=object)
i = i.rename('foo')
same_values = Index(i, dtype=object)
self.assertTrue(same_values.identical(i))
- self.assertFalse(i.identical(self.index))
+ assert not i.identical(self.index)
self.assertTrue(Index(same_values, name='foo', dtype=object).identical(
i))
- self.assertFalse(self.index.copy(dtype=object)
- .identical(self.index.copy(dtype=self._dtype)))
+ assert not self.index.copy(dtype=object).identical(
+ self.index.copy(dtype=self._dtype))
def test_join_non_unique(self):
left = Index([4, 4, 3, 3])
diff --git a/pandas/tests/indexes/test_range.py b/pandas/tests/indexes/test_range.py
index c3ffb32c36e3b..0baf6636806f6 100644
--- a/pandas/tests/indexes/test_range.py
+++ b/pandas/tests/indexes/test_range.py
@@ -330,10 +330,10 @@ def test_dtype(self):
def test_is_monotonic(self):
self.assertTrue(self.index.is_monotonic)
self.assertTrue(self.index.is_monotonic_increasing)
- self.assertFalse(self.index.is_monotonic_decreasing)
+ assert not self.index.is_monotonic_decreasing
index = RangeIndex(4, 0, -1)
- self.assertFalse(index.is_monotonic)
+ assert not index.is_monotonic
self.assertTrue(index.is_monotonic_decreasing)
index = RangeIndex(1, 2)
@@ -374,19 +374,19 @@ def test_identical(self):
return
same_values_different_type = Index(i, dtype=object)
- self.assertFalse(i.identical(same_values_different_type))
+ assert not i.identical(same_values_different_type)
i = self.index.copy(dtype=object)
i = i.rename('foo')
same_values = Index(i, dtype=object)
self.assertTrue(same_values.identical(self.index.copy(dtype=object)))
- self.assertFalse(i.identical(self.index))
+ assert not i.identical(self.index)
self.assertTrue(Index(same_values, name='foo', dtype=object).identical(
i))
- self.assertFalse(self.index.copy(dtype=object)
- .identical(self.index.copy(dtype='int64')))
+ assert not self.index.copy(dtype=object).identical(
+ self.index.copy(dtype='int64'))
def test_get_indexer(self):
target = RangeIndex(10)
@@ -423,7 +423,7 @@ def test_join_outer(self):
5, 4, 3, 2, 1, 0], dtype=np.intp)
assert isinstance(res, Int64Index)
- self.assertFalse(isinstance(res, RangeIndex))
+ assert not isinstance(res, RangeIndex)
tm.assert_index_equal(res, eres)
tm.assert_numpy_array_equal(lidx, elidx)
tm.assert_numpy_array_equal(ridx, eridx)
@@ -437,7 +437,7 @@ def test_join_outer(self):
tm.assert_index_equal(res, noidx_res)
assert isinstance(res, Int64Index)
- self.assertFalse(isinstance(res, RangeIndex))
+ assert not isinstance(res, RangeIndex)
tm.assert_index_equal(res, eres)
tm.assert_numpy_array_equal(lidx, elidx)
tm.assert_numpy_array_equal(ridx, eridx)
@@ -785,7 +785,7 @@ def test_duplicates(self):
continue
idx = self.indices[ind]
self.assertTrue(idx.is_unique)
- self.assertFalse(idx.has_duplicates)
+ assert not idx.has_duplicates
def test_ufunc_compat(self):
idx = RangeIndex(5)
diff --git a/pandas/tests/indexes/timedeltas/test_astype.py b/pandas/tests/indexes/timedeltas/test_astype.py
index d269cddcbb5c8..b17433d3aeb51 100644
--- a/pandas/tests/indexes/timedeltas/test_astype.py
+++ b/pandas/tests/indexes/timedeltas/test_astype.py
@@ -51,7 +51,7 @@ def test_astype_timedelta64(self):
result = idx.astype('timedelta64[ns]')
tm.assert_index_equal(result, idx)
- self.assertFalse(result is idx)
+ assert result is not idx
result = idx.astype('timedelta64[ns]', copy=False)
tm.assert_index_equal(result, idx)
diff --git a/pandas/tests/indexes/timedeltas/test_ops.py b/pandas/tests/indexes/timedeltas/test_ops.py
index c3cc05271e978..9747902f316a6 100644
--- a/pandas/tests/indexes/timedeltas/test_ops.py
+++ b/pandas/tests/indexes/timedeltas/test_ops.py
@@ -60,7 +60,7 @@ def test_minmax(self):
# non-monotonic
idx2 = TimedeltaIndex(['1 days', np.nan, '3 days', 'NaT'])
- self.assertFalse(idx2.is_monotonic)
+ assert not idx2.is_monotonic
for idx in [idx1, idx2]:
self.assertEqual(idx.min(), Timedelta('1 days')),
@@ -828,7 +828,7 @@ def test_nat(self):
self.assertTrue(idx._can_hold_na)
tm.assert_numpy_array_equal(idx._isnan, np.array([False, False]))
- self.assertFalse(idx.hasnans)
+ assert not idx.hasnans
tm.assert_numpy_array_equal(idx._nan_idxs,
np.array([], dtype=np.intp))
@@ -848,17 +848,17 @@ def test_equals(self):
self.assertTrue(idx.equals(idx.asobject))
self.assertTrue(idx.asobject.equals(idx))
self.assertTrue(idx.asobject.equals(idx.asobject))
- self.assertFalse(idx.equals(list(idx)))
- self.assertFalse(idx.equals(pd.Series(idx)))
+ assert not idx.equals(list(idx))
+ assert not idx.equals(pd.Series(idx))
idx2 = pd.TimedeltaIndex(['2 days', '1 days', 'NaT'])
- self.assertFalse(idx.equals(idx2))
- self.assertFalse(idx.equals(idx2.copy()))
- self.assertFalse(idx.equals(idx2.asobject))
- self.assertFalse(idx.asobject.equals(idx2))
- self.assertFalse(idx.asobject.equals(idx2.asobject))
- self.assertFalse(idx.equals(list(idx2)))
- self.assertFalse(idx.equals(pd.Series(idx2)))
+ assert not idx.equals(idx2)
+ assert not idx.equals(idx2.copy())
+ assert not idx.equals(idx2.asobject)
+ assert not idx.asobject.equals(idx2)
+ assert not idx.asobject.equals(idx2.asobject)
+ assert not idx.equals(list(idx2))
+ assert not idx.equals(pd.Series(idx2))
class TestTimedeltas(tm.TestCase):
diff --git a/pandas/tests/indexes/timedeltas/test_timedelta.py b/pandas/tests/indexes/timedeltas/test_timedelta.py
index b5bdf031180ec..c90c61170ca93 100644
--- a/pandas/tests/indexes/timedeltas/test_timedelta.py
+++ b/pandas/tests/indexes/timedeltas/test_timedelta.py
@@ -346,10 +346,10 @@ def test_misc_coverage(self):
assert isinstance(list(result.values())[0][0], Timedelta)
idx = TimedeltaIndex(['3d', '1d', '2d'])
- self.assertFalse(idx.equals(list(idx)))
+ assert not idx.equals(list(idx))
non_td = Index(list('abc'))
- self.assertFalse(idx.equals(list(non_td)))
+ assert not idx.equals(list(non_td))
def test_map(self):
diff --git a/pandas/tests/indexing/test_floats.py b/pandas/tests/indexing/test_floats.py
index bdee41acbc8fd..498604aaac853 100644
--- a/pandas/tests/indexing/test_floats.py
+++ b/pandas/tests/indexing/test_floats.py
@@ -102,7 +102,7 @@ def f():
pytest.raises(error, f)
# contains
- self.assertFalse(3.0 in s)
+ assert 3.0 not in s
# setting with a float fails with iloc
def f():
diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py
index f8a7c57ad5061..d0f089f0804c3 100644
--- a/pandas/tests/indexing/test_indexing.py
+++ b/pandas/tests/indexing/test_indexing.py
@@ -438,7 +438,7 @@ def test_string_slice(self):
df.loc['2011', 0]
df = pd.DataFrame()
- self.assertFalse(df.index.is_all_dates)
+ assert not df.index.is_all_dates
with pytest.raises(KeyError):
df['2011']
diff --git a/pandas/tests/indexing/test_loc.py b/pandas/tests/indexing/test_loc.py
index b2a5e6147cd28..862a6e6326ddd 100644
--- a/pandas/tests/indexing/test_loc.py
+++ b/pandas/tests/indexing/test_loc.py
@@ -588,7 +588,7 @@ def gen_expected(df, mask):
df.take(mask[1:], convert=False)])
df = gen_test(900, 100)
- self.assertFalse(df.index.is_unique)
+ assert not df.index.is_unique
mask = np.arange(100)
result = df.loc[mask]
@@ -596,7 +596,7 @@ def gen_expected(df, mask):
tm.assert_frame_equal(result, expected)
df = gen_test(900000, 100000)
- self.assertFalse(df.index.is_unique)
+ assert not df.index.is_unique
mask = np.arange(100000)
result = df.loc[mask]
diff --git a/pandas/tests/indexing/test_multiindex.py b/pandas/tests/indexing/test_multiindex.py
index a85c6bb446140..dbd0f5a9e6e1c 100644
--- a/pandas/tests/indexing/test_multiindex.py
+++ b/pandas/tests/indexing/test_multiindex.py
@@ -816,7 +816,7 @@ def test_multiindex_slicers_non_unique(self):
C=[1, 2, 1, 3],
D=[1, 2, 3, 4]))
.set_index(['A', 'B', 'C']).sort_index())
- self.assertFalse(df.index.is_unique)
+ assert not df.index.is_unique
expected = (DataFrame(dict(A=['foo', 'foo'], B=['a', 'a'],
C=[1, 1], D=[1, 3]))
.set_index(['A', 'B', 'C']).sort_index())
@@ -832,12 +832,12 @@ def test_multiindex_slicers_non_unique(self):
C=[1, 2, 1, 2],
D=[1, 2, 3, 4]))
.set_index(['A', 'B', 'C']).sort_index())
- self.assertFalse(df.index.is_unique)
+ assert not df.index.is_unique
expected = (DataFrame(dict(A=['foo', 'foo'], B=['a', 'a'],
C=[1, 1], D=[1, 3]))
.set_index(['A', 'B', 'C']).sort_index())
result = df.loc[(slice(None), slice(None), 1), :]
- self.assertFalse(result.index.is_unique)
+ assert not result.index.is_unique
tm.assert_frame_equal(result, expected)
# GH12896
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index 35a71efbbf5ba..ccc1372495106 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -252,20 +252,20 @@ def test_expand_frame_repr(self):
'display.max_rows', 20,
'display.show_dimensions', True):
with option_context('display.expand_frame_repr', True):
- self.assertFalse(has_truncated_repr(df_small))
- self.assertFalse(has_expanded_repr(df_small))
- self.assertFalse(has_truncated_repr(df_wide))
+ assert not has_truncated_repr(df_small)
+ assert not has_expanded_repr(df_small)
+ assert not has_truncated_repr(df_wide)
self.assertTrue(has_expanded_repr(df_wide))
self.assertTrue(has_vertically_truncated_repr(df_tall))
self.assertTrue(has_expanded_repr(df_tall))
with option_context('display.expand_frame_repr', False):
- self.assertFalse(has_truncated_repr(df_small))
- self.assertFalse(has_expanded_repr(df_small))
- self.assertFalse(has_horizontally_truncated_repr(df_wide))
- self.assertFalse(has_expanded_repr(df_wide))
+ assert not has_truncated_repr(df_small)
+ assert not has_expanded_repr(df_small)
+ assert not has_horizontally_truncated_repr(df_wide)
+ assert not has_expanded_repr(df_wide)
self.assertTrue(has_vertically_truncated_repr(df_tall))
- self.assertFalse(has_expanded_repr(df_tall))
+ assert not has_expanded_repr(df_tall)
def test_repr_non_interactive(self):
# in non interactive mode, there can be no dependency on the
@@ -274,8 +274,8 @@ def test_repr_non_interactive(self):
with option_context('mode.sim_interactive', False, 'display.width', 0,
'display.height', 0, 'display.max_rows', 5000):
- self.assertFalse(has_truncated_repr(df))
- self.assertFalse(has_expanded_repr(df))
+ assert not has_truncated_repr(df)
+ assert not has_expanded_repr(df)
def test_repr_max_columns_max_rows(self):
term_width, term_height = get_terminal_size()
@@ -293,29 +293,29 @@ def mkframe(n):
with option_context('display.width', term_width * 2):
with option_context('display.max_rows', 5,
'display.max_columns', 5):
- self.assertFalse(has_expanded_repr(mkframe(4)))
- self.assertFalse(has_expanded_repr(mkframe(5)))
- self.assertFalse(has_expanded_repr(df6))
+ assert not has_expanded_repr(mkframe(4))
+ assert not has_expanded_repr(mkframe(5))
+ assert not has_expanded_repr(df6)
self.assertTrue(has_doubly_truncated_repr(df6))
with option_context('display.max_rows', 20,
'display.max_columns', 10):
# Out of max_columns boundary, but no extending
# since not exceeding width
- self.assertFalse(has_expanded_repr(df6))
- self.assertFalse(has_truncated_repr(df6))
+ assert not has_expanded_repr(df6)
+ assert not has_truncated_repr(df6)
with option_context('display.max_rows', 9,
'display.max_columns', 10):
# out of vertical bounds cannot result in expanded repr
- self.assertFalse(has_expanded_repr(df10))
+ assert not has_expanded_repr(df10)
self.assertTrue(has_vertically_truncated_repr(df10))
# width=None in terminal, auto detection
with option_context('display.max_columns', 100, 'display.max_rows',
term_width * 20, 'display.width', None):
df = mkframe((term_width // 7) - 2)
- self.assertFalse(has_expanded_repr(df))
+ assert not has_expanded_repr(df)
df = mkframe((term_width // 7) + 2)
printing.pprint_thing(df._repr_fits_horizontal_())
self.assertTrue(has_expanded_repr(df))
@@ -755,14 +755,14 @@ def test_to_string_truncate_indices(self):
self.assertTrue(
has_vertically_truncated_repr(df))
else:
- self.assertFalse(
- has_vertically_truncated_repr(df))
+ assert not has_vertically_truncated_repr(
+ df)
with option_context("display.max_columns", 15):
if w == 20:
self.assertTrue(
has_horizontally_truncated_repr(df))
else:
- self.assertFalse(
+ assert not (
has_horizontally_truncated_repr(df))
with option_context("display.max_rows", 15,
"display.max_columns", 15):
@@ -770,8 +770,8 @@ def test_to_string_truncate_indices(self):
self.assertTrue(has_doubly_truncated_repr(
df))
else:
- self.assertFalse(has_doubly_truncated_repr(
- df))
+ assert not has_doubly_truncated_repr(
+ df)
def test_to_string_truncate_multilevel(self):
arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
@@ -802,7 +802,7 @@ def test_truncate_with_different_dtypes(self):
'display.max_columns', 3):
result = str(df)
self.assertTrue('None' in result)
- self.assertFalse('NaN' in result)
+ assert 'NaN' not in result
def test_datetimelike_frame(self):
@@ -1358,8 +1358,8 @@ def test_show_dimensions(self):
with option_context('display.max_rows', 10, 'display.max_columns', 40,
'display.width', 500, 'display.expand_frame_repr',
'info', 'display.show_dimensions', False):
- self.assertFalse('5 rows' in str(df))
- self.assertFalse('5 rows' in df._repr_html_())
+ assert '5 rows' not in str(df)
+ assert '5 rows' not in df._repr_html_()
with option_context('display.max_rows', 2, 'display.max_columns', 2,
'display.width', 500, 'display.expand_frame_repr',
'info', 'display.show_dimensions', 'truncate'):
@@ -1368,8 +1368,8 @@ def test_show_dimensions(self):
with option_context('display.max_rows', 10, 'display.max_columns', 40,
'display.width', 500, 'display.expand_frame_repr',
'info', 'display.show_dimensions', 'truncate'):
- self.assertFalse('5 rows' in str(df))
- self.assertFalse('5 rows' in df._repr_html_())
+ assert '5 rows' not in str(df)
+ assert '5 rows' not in df._repr_html_()
def test_repr_html(self):
self.frame._repr_html_()
@@ -1386,7 +1386,7 @@ def test_repr_html(self):
fmt.set_option('display.show_dimensions', True)
self.assertTrue('2 rows' in df._repr_html_())
fmt.set_option('display.show_dimensions', False)
- self.assertFalse('2 rows' in df._repr_html_())
+ assert '2 rows' not in df._repr_html_()
tm.reset_display_options()
@@ -1518,7 +1518,7 @@ def test_info_repr_max_cols(self):
with option_context('display.large_repr', 'info',
'display.max_columns', 1,
'display.max_info_columns', 5):
- self.assertFalse(has_non_verbose_info_repr(df))
+ assert not has_non_verbose_info_repr(df)
# test verbose overrides
# fmt.set_option('display.max_info_columns', 4) # exceeded
diff --git a/pandas/tests/io/formats/test_to_html.py b/pandas/tests/io/formats/test_to_html.py
index 28c6a0e95e0f1..a67bb2fd8eb5c 100644
--- a/pandas/tests/io/formats/test_to_html.py
+++ b/pandas/tests/io/formats/test_to_html.py
@@ -1461,7 +1461,7 @@ def test_to_html_filename(self):
def test_to_html_with_no_bold(self):
x = DataFrame({'x': np.random.randn(5)})
ashtml = x.to_html(bold_rows=False)
- self.assertFalse('<strong' in ashtml[ashtml.find("</thead>")])
+ assert '<strong' not in ashtml[ashtml.find("</thead>")]
def test_to_html_columns_arg(self):
frame = DataFrame(tm.getSeriesData())
diff --git a/pandas/tests/io/json/test_pandas.py b/pandas/tests/io/json/test_pandas.py
index 0dfae0fb88bf6..ac9e4f77db6ac 100644
--- a/pandas/tests/io/json/test_pandas.py
+++ b/pandas/tests/io/json/test_pandas.py
@@ -415,7 +415,7 @@ def test_frame_to_json_except(self):
def test_frame_empty(self):
df = DataFrame(columns=['jim', 'joe'])
- self.assertFalse(df._is_mixed_type)
+ assert not df._is_mixed_type
assert_frame_equal(read_json(df.to_json(), dtype=dict(df.dtypes)), df,
check_index_type=False)
# GH 7445
diff --git a/pandas/tests/io/parser/common.py b/pandas/tests/io/parser/common.py
index 9abd3c5bfe993..afb23f540264e 100644
--- a/pandas/tests/io/parser/common.py
+++ b/pandas/tests/io/parser/common.py
@@ -113,7 +113,7 @@ def test_squeeze_no_view(self):
# Series should not be a view
data = """time,data\n0,10\n1,11\n2,12\n4,14\n5,15\n3,13"""
result = self.read_csv(StringIO(data), index_col='time', squeeze=True)
- self.assertFalse(result._is_view)
+ assert not result._is_view
def test_malformed(self):
# see gh-6607
@@ -1656,11 +1656,11 @@ def test_file_handles(self):
fh = StringIO('a,b\n1,2')
self.read_csv(fh)
- self.assertFalse(fh.closed)
+ assert not fh.closed
with open(self.csv1, 'r') as f:
self.read_csv(f)
- self.assertFalse(f.closed)
+ assert not f.closed
# mmap not working with python engine
if self.engine != 'python':
@@ -1671,7 +1671,7 @@ def test_file_handles(self):
self.read_csv(m)
# closed attribute new in python 3.2
if PY3:
- self.assertFalse(m.closed)
+ assert not m.closed
m.close()
def test_invalid_file_buffer(self):
diff --git a/pandas/tests/io/parser/test_network.py b/pandas/tests/io/parser/test_network.py
index 4a8d2e997ee06..b9920983856d4 100644
--- a/pandas/tests/io/parser/test_network.py
+++ b/pandas/tests/io/parser/test_network.py
@@ -61,14 +61,14 @@ def test_parse_public_s3_bucket(self):
df = read_csv('s3://pandas-test/tips.csv' +
ext, compression=comp)
self.assertTrue(isinstance(df, DataFrame))
- self.assertFalse(df.empty)
+ assert not df.empty
tm.assert_frame_equal(read_csv(
tm.get_data_path('tips.csv')), df)
# Read public file from bucket with not-public contents
df = read_csv('s3://cant_get_it/tips.csv')
self.assertTrue(isinstance(df, DataFrame))
- self.assertFalse(df.empty)
+ assert not df.empty
tm.assert_frame_equal(read_csv(tm.get_data_path('tips.csv')), df)
@tm.network
@@ -76,7 +76,7 @@ def test_parse_public_s3n_bucket(self):
# Read from AWS s3 as "s3n" URL
df = read_csv('s3n://pandas-test/tips.csv', nrows=10)
self.assertTrue(isinstance(df, DataFrame))
- self.assertFalse(df.empty)
+ assert not df.empty
tm.assert_frame_equal(read_csv(
tm.get_data_path('tips.csv')).iloc[:10], df)
@@ -85,7 +85,7 @@ def test_parse_public_s3a_bucket(self):
# Read from AWS s3 as "s3a" URL
df = read_csv('s3a://pandas-test/tips.csv', nrows=10)
self.assertTrue(isinstance(df, DataFrame))
- self.assertFalse(df.empty)
+ assert not df.empty
tm.assert_frame_equal(read_csv(
tm.get_data_path('tips.csv')).iloc[:10], df)
@@ -95,7 +95,7 @@ def test_parse_public_s3_bucket_nrows(self):
df = read_csv('s3://pandas-test/tips.csv' +
ext, nrows=10, compression=comp)
self.assertTrue(isinstance(df, DataFrame))
- self.assertFalse(df.empty)
+ assert not df.empty
tm.assert_frame_equal(read_csv(
tm.get_data_path('tips.csv')).iloc[:10], df)
@@ -113,7 +113,7 @@ def test_parse_public_s3_bucket_chunked(self):
# properly.
df = df_reader.get_chunk()
self.assertTrue(isinstance(df, DataFrame))
- self.assertFalse(df.empty)
+ assert not df.empty
true_df = local_tips.iloc[
chunksize * i_chunk: chunksize * (i_chunk + 1)]
tm.assert_frame_equal(true_df, df)
@@ -132,7 +132,7 @@ def test_parse_public_s3_bucket_chunked_python(self):
# Read a couple of chunks and make sure we see them properly.
df = df_reader.get_chunk()
self.assertTrue(isinstance(df, DataFrame))
- self.assertFalse(df.empty)
+ assert not df.empty
true_df = local_tips.iloc[
chunksize * i_chunk: chunksize * (i_chunk + 1)]
tm.assert_frame_equal(true_df, df)
@@ -143,7 +143,7 @@ def test_parse_public_s3_bucket_python(self):
df = read_csv('s3://pandas-test/tips.csv' + ext, engine='python',
compression=comp)
self.assertTrue(isinstance(df, DataFrame))
- self.assertFalse(df.empty)
+ assert not df.empty
tm.assert_frame_equal(read_csv(
tm.get_data_path('tips.csv')), df)
@@ -153,7 +153,7 @@ def test_infer_s3_compression(self):
df = read_csv('s3://pandas-test/tips.csv' + ext,
engine='python', compression='infer')
self.assertTrue(isinstance(df, DataFrame))
- self.assertFalse(df.empty)
+ assert not df.empty
tm.assert_frame_equal(read_csv(
tm.get_data_path('tips.csv')), df)
@@ -163,7 +163,7 @@ def test_parse_public_s3_bucket_nrows_python(self):
df = read_csv('s3://pandas-test/tips.csv' + ext, engine='python',
nrows=10, compression=comp)
self.assertTrue(isinstance(df, DataFrame))
- self.assertFalse(df.empty)
+ assert not df.empty
tm.assert_frame_equal(read_csv(
tm.get_data_path('tips.csv')).iloc[:10], df)
diff --git a/pandas/tests/io/test_common.py b/pandas/tests/io/test_common.py
index 82819b94413b4..700915b81dd31 100644
--- a/pandas/tests/io/test_common.py
+++ b/pandas/tests/io/test_common.py
@@ -129,7 +129,7 @@ def test_get_attr(self):
for attr in attrs:
self.assertTrue(hasattr(wrapper, attr))
- self.assertFalse(hasattr(wrapper, 'foo'))
+ assert not hasattr(wrapper, 'foo')
def test_next(self):
with open(self.mmap_file, 'r') as target:
diff --git a/pandas/tests/io/test_html.py b/pandas/tests/io/test_html.py
index 5a30ff2afe7e5..cf08754a18527 100644
--- a/pandas/tests/io/test_html.py
+++ b/pandas/tests/io/test_html.py
@@ -168,7 +168,7 @@ def test_banklist_no_match(self):
def test_spam_header(self):
df = self.read_html(self.spam_data, '.*Water.*', header=1)[0]
self.assertEqual(df.columns[0], 'Proximates')
- self.assertFalse(df.empty)
+ assert not df.empty
def test_skiprows_int(self):
df1 = self.read_html(self.spam_data, '.*Water.*', skiprows=1)
@@ -378,7 +378,7 @@ def test_thousands_macau_stats(self):
attrs={'class': 'style1'})
df = dfs[all_non_nan_table_index]
- self.assertFalse(any(s.isnull().any() for _, s in df.iteritems()))
+ assert not any(s.isnull().any() for _, s in df.iteritems())
@tm.slow
def test_thousands_macau_index_col(self):
@@ -387,7 +387,7 @@ def test_thousands_macau_index_col(self):
dfs = self.read_html(macau_data, index_col=0, header=0)
df = dfs[all_non_nan_table_index]
- self.assertFalse(any(s.isnull().any() for _, s in df.iteritems()))
+ assert not any(s.isnull().any() for _, s in df.iteritems())
def test_empty_tables(self):
"""
diff --git a/pandas/tests/io/test_pytables.py b/pandas/tests/io/test_pytables.py
index 1b656e7b1b004..6e7fca9a29e98 100644
--- a/pandas/tests/io/test_pytables.py
+++ b/pandas/tests/io/test_pytables.py
@@ -332,7 +332,7 @@ def test_api_default_format(self):
pandas.set_option('io.hdf.default_format', 'fixed')
_maybe_remove(store, 'df')
store.put('df', df)
- self.assertFalse(store.get_storer('df').is_table)
+ assert not store.get_storer('df').is_table
pytest.raises(ValueError, store.append, 'df2', df)
pandas.set_option('io.hdf.default_format', 'table')
@@ -352,7 +352,7 @@ def test_api_default_format(self):
pandas.set_option('io.hdf.default_format', 'fixed')
df.to_hdf(path, 'df')
with HDFStore(path) as store:
- self.assertFalse(store.get_storer('df').is_table)
+ assert not store.get_storer('df').is_table
pytest.raises(ValueError, df.to_hdf, path, 'df2', append=True)
pandas.set_option('io.hdf.default_format', 'table')
@@ -545,14 +545,14 @@ def test_reopen_handle(self):
# invalid mode change
pytest.raises(PossibleDataLossError, store.open, 'w')
store.close()
- self.assertFalse(store.is_open)
+ assert not store.is_open
# truncation ok here
store.open('w')
self.assertTrue(store.is_open)
self.assertEqual(len(store), 0)
store.close()
- self.assertFalse(store.is_open)
+ assert not store.is_open
store = HDFStore(path, mode='a')
store['a'] = tm.makeTimeSeries()
@@ -563,7 +563,7 @@ def test_reopen_handle(self):
self.assertEqual(len(store), 1)
self.assertEqual(store._mode, 'r')
store.close()
- self.assertFalse(store.is_open)
+ assert not store.is_open
# reopen as append
store.open('a')
@@ -571,7 +571,7 @@ def test_reopen_handle(self):
self.assertEqual(len(store), 1)
self.assertEqual(store._mode, 'a')
store.close()
- self.assertFalse(store.is_open)
+ assert not store.is_open
# reopen as append (again)
store.open('a')
@@ -579,7 +579,7 @@ def test_reopen_handle(self):
self.assertEqual(len(store), 1)
self.assertEqual(store._mode, 'a')
store.close()
- self.assertFalse(store.is_open)
+ assert not store.is_open
def test_open_args(self):
@@ -599,7 +599,7 @@ def test_open_args(self):
store.close()
# the file should not have actually been written
- self.assertFalse(os.path.exists(path))
+ assert not os.path.exists(path)
def test_flush(self):
diff --git a/pandas/tests/io/test_s3.py b/pandas/tests/io/test_s3.py
index 2983fa647445c..cff8eef74a607 100644
--- a/pandas/tests/io/test_s3.py
+++ b/pandas/tests/io/test_s3.py
@@ -7,4 +7,4 @@ class TestS3URL(tm.TestCase):
def test_is_s3_url(self):
self.assertTrue(_is_s3_url("s3://pandas/somethingelse.com"))
- self.assertFalse(_is_s3_url("s4://pandas/somethingelse.com"))
+ assert not _is_s3_url("s4://pandas/somethingelse.com")
diff --git a/pandas/tests/io/test_sql.py b/pandas/tests/io/test_sql.py
index 36ff3bdbb24b5..0930d99ea5c30 100644
--- a/pandas/tests/io/test_sql.py
+++ b/pandas/tests/io/test_sql.py
@@ -625,9 +625,7 @@ def test_date_parsing(self):
# Test date parsing in read_sql
# No Parsing
df = sql.read_sql_query("SELECT * FROM types_test_data", self.conn)
- self.assertFalse(
- issubclass(df.DateCol.dtype.type, np.datetime64),
- "DateCol loaded with incorrect type")
+ assert not issubclass(df.DateCol.dtype.type, np.datetime64)
df = sql.read_sql_query("SELECT * FROM types_test_data", self.conn,
parse_dates=['DateCol'])
@@ -1230,8 +1228,7 @@ def test_drop_table(self):
pandasSQL.drop_table('temp_frame')
- self.assertFalse(
- temp_conn.has_table('temp_frame'), 'Table not deleted from DB')
+ assert not temp_conn.has_table('temp_frame')
def test_roundtrip(self):
self._roundtrip()
@@ -1727,8 +1724,7 @@ def test_default_date_load(self):
df = sql.read_sql_table("types_test_data", self.conn)
# IMPORTANT - sqlite has no native date type, so shouldn't parse, but
- self.assertFalse(issubclass(df.DateCol.dtype.type, np.datetime64),
- "DateCol loaded with incorrect type")
+ assert not issubclass(df.DateCol.dtype.type, np.datetime64)
def test_bigint_warning(self):
# test no warning for BIGINT (to support int64) is raised (GH7433)
@@ -1988,8 +1984,7 @@ def test_create_and_drop_table(self):
self.pandasSQL.drop_table('drop_test_frame')
- self.assertFalse(self.pandasSQL.has_table('drop_test_frame'),
- 'Table not deleted from DB')
+ assert not self.pandasSQL.has_table('drop_test_frame')
def test_roundtrip(self):
self._roundtrip()
diff --git a/pandas/tests/plotting/common.py b/pandas/tests/plotting/common.py
index 22f471a01b9d2..35625670f0641 100644
--- a/pandas/tests/plotting/common.py
+++ b/pandas/tests/plotting/common.py
@@ -491,13 +491,13 @@ def is_grid_on():
spndx += 1
mpl.rc('axes', grid=False)
obj.plot(kind=kind, **kws)
- self.assertFalse(is_grid_on())
+ assert not is_grid_on()
self.plt.subplot(1, 4 * len(kinds), spndx)
spndx += 1
mpl.rc('axes', grid=True)
obj.plot(kind=kind, grid=False, **kws)
- self.assertFalse(is_grid_on())
+ assert not is_grid_on()
if kind != 'pie':
self.plt.subplot(1, 4 * len(kinds), spndx)
diff --git a/pandas/tests/plotting/test_datetimelike.py b/pandas/tests/plotting/test_datetimelike.py
index f0a56592158d3..7534d9363f267 100644
--- a/pandas/tests/plotting/test_datetimelike.py
+++ b/pandas/tests/plotting/test_datetimelike.py
@@ -223,7 +223,7 @@ def test_fake_inferred_business(self):
ts = Series(lrange(len(rng)), rng)
ts = ts[:3].append(ts[5:])
ax = ts.plot()
- self.assertFalse(hasattr(ax, 'freq'))
+ assert not hasattr(ax, 'freq')
@slow
def test_plot_offset_freq(self):
@@ -334,7 +334,7 @@ def test_nonzero_base(self):
df = DataFrame(np.arange(24), index=idx)
ax = df.plot()
rs = ax.get_lines()[0].get_xdata()
- self.assertFalse(Index(rs).is_normalized)
+ assert not Index(rs).is_normalized
def test_dataframe(self):
bts = DataFrame({'a': tm.makeTimeSeries()})
@@ -568,14 +568,14 @@ def test_secondary_y(self):
ser2 = Series(np.random.randn(10))
ax = ser.plot(secondary_y=True)
self.assertTrue(hasattr(ax, 'left_ax'))
- self.assertFalse(hasattr(ax, 'right_ax'))
+ assert not hasattr(ax, 'right_ax')
fig = ax.get_figure()
axes = fig.get_axes()
l = ax.get_lines()[0]
xp = Series(l.get_ydata(), l.get_xdata())
assert_series_equal(ser, xp)
self.assertEqual(ax.get_yaxis().get_ticks_position(), 'right')
- self.assertFalse(axes[0].get_yaxis().get_visible())
+ assert not axes[0].get_yaxis().get_visible()
plt.close(fig)
ax2 = ser2.plot()
@@ -586,10 +586,10 @@ def test_secondary_y(self):
ax = ser2.plot()
ax2 = ser.plot(secondary_y=True)
self.assertTrue(ax.get_yaxis().get_visible())
- self.assertFalse(hasattr(ax, 'left_ax'))
+ assert not hasattr(ax, 'left_ax')
self.assertTrue(hasattr(ax, 'right_ax'))
self.assertTrue(hasattr(ax2, 'left_ax'))
- self.assertFalse(hasattr(ax2, 'right_ax'))
+ assert not hasattr(ax2, 'right_ax')
@slow
def test_secondary_y_ts(self):
@@ -599,14 +599,14 @@ def test_secondary_y_ts(self):
ser2 = Series(np.random.randn(10), idx)
ax = ser.plot(secondary_y=True)
self.assertTrue(hasattr(ax, 'left_ax'))
- self.assertFalse(hasattr(ax, 'right_ax'))
+ assert not hasattr(ax, 'right_ax')
fig = ax.get_figure()
axes = fig.get_axes()
l = ax.get_lines()[0]
xp = Series(l.get_ydata(), l.get_xdata()).to_timestamp()
assert_series_equal(ser, xp)
self.assertEqual(ax.get_yaxis().get_ticks_position(), 'right')
- self.assertFalse(axes[0].get_yaxis().get_visible())
+ assert not axes[0].get_yaxis().get_visible()
plt.close(fig)
ax2 = ser2.plot()
@@ -627,7 +627,7 @@ def test_secondary_kde(self):
ser = Series(np.random.randn(10))
ax = ser.plot(secondary_y=True, kind='density')
self.assertTrue(hasattr(ax, 'left_ax'))
- self.assertFalse(hasattr(ax, 'right_ax'))
+ assert not hasattr(ax, 'right_ax')
fig = ax.get_figure()
axes = fig.get_axes()
self.assertEqual(axes[1].get_yaxis().get_ticks_position(), 'right')
@@ -684,7 +684,7 @@ def test_mixed_freq_irregular_first(self):
s2 = s1[[0, 5, 10, 11, 12, 13, 14, 15]]
s2.plot(style='g')
ax = s1.plot()
- self.assertFalse(hasattr(ax, 'freq'))
+ assert not hasattr(ax, 'freq')
lines = ax.get_lines()
x1 = lines[0].get_xdata()
tm.assert_numpy_array_equal(x1, s2.index.asobject.values)
@@ -716,7 +716,7 @@ def test_mixed_freq_irregular_first_df(self):
s2 = s1.iloc[[0, 5, 10, 11, 12, 13, 14, 15], :]
ax = s2.plot(style='g')
ax = s1.plot(ax=ax)
- self.assertFalse(hasattr(ax, 'freq'))
+ assert not hasattr(ax, 'freq')
lines = ax.get_lines()
x1 = lines[0].get_xdata()
tm.assert_numpy_array_equal(x1, s2.index.asobject.values)
@@ -1049,7 +1049,7 @@ def test_secondary_upsample(self):
for l in ax.get_lines():
self.assertEqual(PeriodIndex(l.get_xdata()).freq, 'D')
self.assertTrue(hasattr(ax, 'left_ax'))
- self.assertFalse(hasattr(ax, 'right_ax'))
+ assert not hasattr(ax, 'right_ax')
for l in ax.left_ax.get_lines():
self.assertEqual(PeriodIndex(l.get_xdata()).freq, 'D')
diff --git a/pandas/tests/plotting/test_frame.py b/pandas/tests/plotting/test_frame.py
index c72bce28b5862..c5b43cd1a300b 100644
--- a/pandas/tests/plotting/test_frame.py
+++ b/pandas/tests/plotting/test_frame.py
@@ -664,7 +664,7 @@ def test_line_lim(self):
self._check_axes_shape(axes, axes_num=3, layout=(3, 1))
for ax in axes:
self.assertTrue(hasattr(ax, 'left_ax'))
- self.assertFalse(hasattr(ax, 'right_ax'))
+ assert not hasattr(ax, 'right_ax')
xmin, xmax = ax.get_xlim()
lines = ax.get_lines()
self.assertEqual(xmin, lines[0].get_data()[0][0])
diff --git a/pandas/tests/plotting/test_hist_method.py b/pandas/tests/plotting/test_hist_method.py
index 79d5f74e6ea06..a77c1edd258e3 100644
--- a/pandas/tests/plotting/test_hist_method.py
+++ b/pandas/tests/plotting/test_hist_method.py
@@ -154,7 +154,7 @@ def test_hist_df_legacy(self):
with tm.assert_produces_warning(UserWarning):
axes = _check_plot_works(df.hist, grid=False)
self._check_axes_shape(axes, axes_num=3, layout=(2, 2))
- self.assertFalse(axes[1, 1].get_visible())
+ assert not axes[1, 1].get_visible()
df = DataFrame(randn(100, 1))
_check_plot_works(df.hist)
@@ -398,8 +398,8 @@ def test_axis_share_x(self):
self.assertTrue(ax2._shared_x_axes.joined(ax1, ax2))
# don't share y
- self.assertFalse(ax1._shared_y_axes.joined(ax1, ax2))
- self.assertFalse(ax2._shared_y_axes.joined(ax1, ax2))
+ assert not ax1._shared_y_axes.joined(ax1, ax2)
+ assert not ax2._shared_y_axes.joined(ax1, ax2)
@slow
def test_axis_share_y(self):
@@ -411,8 +411,8 @@ def test_axis_share_y(self):
self.assertTrue(ax2._shared_y_axes.joined(ax1, ax2))
# don't share x
- self.assertFalse(ax1._shared_x_axes.joined(ax1, ax2))
- self.assertFalse(ax2._shared_x_axes.joined(ax1, ax2))
+ assert not ax1._shared_x_axes.joined(ax1, ax2)
+ assert not ax2._shared_x_axes.joined(ax1, ax2)
@slow
def test_axis_share_xy(self):
diff --git a/pandas/tests/plotting/test_series.py b/pandas/tests/plotting/test_series.py
index 38ce5f44b812f..b84e50c4ec827 100644
--- a/pandas/tests/plotting/test_series.py
+++ b/pandas/tests/plotting/test_series.py
@@ -454,7 +454,7 @@ def test_hist_secondary_legend(self):
# left axis must be invisible, right axis must be visible
self._check_legend_labels(ax.left_ax,
labels=['a (right)', 'b (right)'])
- self.assertFalse(ax.left_ax.get_yaxis().get_visible())
+ assert not ax.left_ax.get_yaxis().get_visible()
self.assertTrue(ax.get_yaxis().get_visible())
tm.close()
@@ -502,7 +502,7 @@ def test_df_series_secondary_legend(self):
# left axis must be invisible and right axis must be visible
expected = ['a (right)', 'b (right)', 'c (right)', 'x (right)']
self._check_legend_labels(ax.left_ax, labels=expected)
- self.assertFalse(ax.left_ax.get_yaxis().get_visible())
+ assert not ax.left_ax.get_yaxis().get_visible()
self.assertTrue(ax.get_yaxis().get_visible())
tm.close()
@@ -513,7 +513,7 @@ def test_df_series_secondary_legend(self):
# left axis must be invisible and right axis must be visible
expected = ['a (right)', 'b (right)', 'c (right)', 'x (right)']
self._check_legend_labels(ax.left_ax, expected)
- self.assertFalse(ax.left_ax.get_yaxis().get_visible())
+ assert not ax.left_ax.get_yaxis().get_visible()
self.assertTrue(ax.get_yaxis().get_visible())
tm.close()
@@ -524,7 +524,7 @@ def test_df_series_secondary_legend(self):
# left axis must be invisible and right axis must be visible
expected = ['a', 'b', 'c', 'x (right)']
self._check_legend_labels(ax.left_ax, expected)
- self.assertFalse(ax.left_ax.get_yaxis().get_visible())
+ assert not ax.left_ax.get_yaxis().get_visible()
self.assertTrue(ax.get_yaxis().get_visible())
tm.close()
diff --git a/pandas/tests/reshape/test_hashing.py b/pandas/tests/reshape/test_hashing.py
index cba70bba6823f..4857d3ac8310b 100644
--- a/pandas/tests/reshape/test_hashing.py
+++ b/pandas/tests/reshape/test_hashing.py
@@ -67,7 +67,7 @@ def check_not_equal_with_index(self, obj):
a = hash_pandas_object(obj, index=True)
b = hash_pandas_object(obj, index=False)
if len(obj):
- self.assertFalse((a == b).all())
+ assert not (a == b).all()
def test_hash_tuples(self):
tups = [(1, 'one'), (1, 'two'), (2, 'one')]
@@ -240,13 +240,13 @@ def test_same_len_hash_collisions(self):
length = 2**(l + 8) + 1
s = tm.rands_array(length, 2)
result = hash_array(s, 'utf8')
- self.assertFalse(result[0] == result[1])
+ assert not result[0] == result[1]
for l in range(8):
length = 2**(l + 8)
s = tm.rands_array(length, 2)
result = hash_array(s, 'utf8')
- self.assertFalse(result[0] == result[1])
+ assert not result[0] == result[1]
def test_hash_collisions(self):
diff --git a/pandas/tests/reshape/test_merge.py b/pandas/tests/reshape/test_merge.py
index 73d0346546b97..80056b973a2fc 100644
--- a/pandas/tests/reshape/test_merge.py
+++ b/pandas/tests/reshape/test_merge.py
@@ -790,8 +790,8 @@ def run_asserts(left, right):
res = left.join(right, on=icols, how='left', sort=sort)
self.assertTrue(len(left) < len(res) + 1)
- self.assertFalse(res['4th'].isnull().any())
- self.assertFalse(res['5th'].isnull().any())
+ assert not res['4th'].isnull().any()
+ assert not res['5th'].isnull().any()
tm.assert_series_equal(
res['4th'], - res['5th'], check_names=False)
diff --git a/pandas/tests/reshape/test_merge_asof.py b/pandas/tests/reshape/test_merge_asof.py
index 0b5b580563741..f2aef409324f8 100644
--- a/pandas/tests/reshape/test_merge_asof.py
+++ b/pandas/tests/reshape/test_merge_asof.py
@@ -531,8 +531,8 @@ def test_non_sorted(self):
quotes = self.quotes.sort_values('time', ascending=False)
# we require that we are already sorted on time & quotes
- self.assertFalse(trades.time.is_monotonic)
- self.assertFalse(quotes.time.is_monotonic)
+ assert not trades.time.is_monotonic
+ assert not quotes.time.is_monotonic
with pytest.raises(ValueError):
merge_asof(trades, quotes,
on='time',
@@ -540,7 +540,7 @@ def test_non_sorted(self):
trades = self.trades.sort_values('time')
self.assertTrue(trades.time.is_monotonic)
- self.assertFalse(quotes.time.is_monotonic)
+ assert not quotes.time.is_monotonic
with pytest.raises(ValueError):
merge_asof(trades, quotes,
on='time',
diff --git a/pandas/tests/reshape/test_pivot.py b/pandas/tests/reshape/test_pivot.py
index f15616a16678f..416e729944d39 100644
--- a/pandas/tests/reshape/test_pivot.py
+++ b/pandas/tests/reshape/test_pivot.py
@@ -1494,7 +1494,7 @@ def test_isleapyear_deprecate(self):
self.assertTrue(isleapyear(2000))
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
- self.assertFalse(isleapyear(2001))
+ assert not isleapyear(2001)
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
self.assertTrue(isleapyear(2004))
diff --git a/pandas/tests/scalar/test_period.py b/pandas/tests/scalar/test_period.py
index b5c2439524e34..c8f3833c2c964 100644
--- a/pandas/tests/scalar/test_period.py
+++ b/pandas/tests/scalar/test_period.py
@@ -25,13 +25,13 @@ def test_is_leap_year(self):
assert isinstance(p.is_leap_year, bool)
p = Period('1999-01-01 00:00:00', freq=freq)
- self.assertFalse(p.is_leap_year)
+ assert not p.is_leap_year
p = Period('2004-01-01 00:00:00', freq=freq)
self.assertTrue(p.is_leap_year)
p = Period('2100-01-01 00:00:00', freq=freq)
- self.assertFalse(p.is_leap_year)
+ assert not p.is_leap_year
def test_quarterly_negative_ordinals(self):
p = Period(ordinal=-1, freq='Q-DEC')
diff --git a/pandas/tests/scalar/test_timedelta.py b/pandas/tests/scalar/test_timedelta.py
index 86b02d20b6996..788c204ca3eb3 100644
--- a/pandas/tests/scalar/test_timedelta.py
+++ b/pandas/tests/scalar/test_timedelta.py
@@ -451,7 +451,7 @@ def test_contains(self):
# GH 13603
td = to_timedelta(range(5), unit='d') + pd.offsets.Hour(1)
for v in [pd.NaT, None, float('nan'), np.nan]:
- self.assertFalse((v in td))
+ assert v not in td
td = to_timedelta([pd.NaT])
for v in [pd.NaT, None, float('nan'), np.nan]:
@@ -658,7 +658,7 @@ def test_components(self):
s[1] = np.nan
result = s.dt.components
- self.assertFalse(result.iloc[0].isnull().all())
+ assert not result.iloc[0].isnull().all()
self.assertTrue(result.iloc[1].isnull().all())
def test_isoformat(self):
@@ -707,5 +707,5 @@ def test_ops_error_str(self):
with pytest.raises(TypeError):
l > r
- self.assertFalse(l == r)
+ assert not l == r
self.assertTrue(l != r)
diff --git a/pandas/tests/scalar/test_timestamp.py b/pandas/tests/scalar/test_timestamp.py
index bad0b697eef6c..cfc4cf93e720c 100644
--- a/pandas/tests/scalar/test_timestamp.py
+++ b/pandas/tests/scalar/test_timestamp.py
@@ -862,18 +862,18 @@ def test_comparison(self):
val = Timestamp(stamp)
self.assertEqual(val, val)
- self.assertFalse(val != val)
- self.assertFalse(val < val)
+ assert not val != val
+ assert not val < val
self.assertTrue(val <= val)
- self.assertFalse(val > val)
+ assert not val > val
self.assertTrue(val >= val)
other = datetime(2012, 5, 18)
self.assertEqual(val, other)
- self.assertFalse(val != other)
- self.assertFalse(val < other)
+ assert not val != other
+ assert not val < other
self.assertTrue(val <= other)
- self.assertFalse(val > other)
+ assert not val > other
self.assertTrue(val >= other)
other = Timestamp(stamp + 100)
@@ -889,14 +889,14 @@ def test_compare_invalid(self):
# GH 8058
val = Timestamp('20130101 12:01:02')
- self.assertFalse(val == 'foo')
- self.assertFalse(val == 10.0)
- self.assertFalse(val == 1)
- self.assertFalse(val == long(1))
- self.assertFalse(val == [])
- self.assertFalse(val == {'foo': 1})
- self.assertFalse(val == np.float64(1))
- self.assertFalse(val == np.int64(1))
+ assert not val == 'foo'
+ assert not val == 10.0
+ assert not val == 1
+ assert not val == long(1)
+ assert not val == []
+ assert not val == {'foo': 1}
+ assert not val == np.float64(1)
+ assert not val == np.int64(1)
self.assertTrue(val != 'foo')
self.assertTrue(val != 10.0)
@@ -933,8 +933,8 @@ def test_cant_compare_tz_naive_w_aware(self):
pytest.raises(Exception, a.__eq__, b.to_pydatetime())
pytest.raises(Exception, a.to_pydatetime().__eq__, b)
else:
- self.assertFalse(a == b.to_pydatetime())
- self.assertFalse(a.to_pydatetime() == b)
+ assert not a == b.to_pydatetime()
+ assert not a.to_pydatetime() == b
def test_cant_compare_tz_naive_w_aware_explicit_pytz(self):
tm._skip_if_no_pytz()
@@ -956,8 +956,8 @@ def test_cant_compare_tz_naive_w_aware_explicit_pytz(self):
pytest.raises(Exception, a.__eq__, b.to_pydatetime())
pytest.raises(Exception, a.to_pydatetime().__eq__, b)
else:
- self.assertFalse(a == b.to_pydatetime())
- self.assertFalse(a.to_pydatetime() == b)
+ assert not a == b.to_pydatetime()
+ assert not a.to_pydatetime() == b
def test_cant_compare_tz_naive_w_aware_dateutil(self):
tm._skip_if_no_dateutil()
@@ -980,8 +980,8 @@ def test_cant_compare_tz_naive_w_aware_dateutil(self):
pytest.raises(Exception, a.__eq__, b.to_pydatetime())
pytest.raises(Exception, a.to_pydatetime().__eq__, b)
else:
- self.assertFalse(a == b.to_pydatetime())
- self.assertFalse(a.to_pydatetime() == b)
+ assert not a == b.to_pydatetime()
+ assert not a.to_pydatetime() == b
def test_delta_preserve_nanos(self):
val = Timestamp(long(1337299200000000123))
@@ -1090,13 +1090,13 @@ def test_is_leap_year(self):
assert isinstance(dt.is_leap_year, bool)
dt = Timestamp('1999-01-01 00:00:00', tz=tz)
- self.assertFalse(dt.is_leap_year)
+ assert not dt.is_leap_year
dt = Timestamp('2004-01-01 00:00:00', tz=tz)
self.assertTrue(dt.is_leap_year)
dt = Timestamp('2100-01-01 00:00:00', tz=tz)
- self.assertFalse(dt.is_leap_year)
+ assert not dt.is_leap_year
class TestTimestampNsOperations(tm.TestCase):
@@ -1383,9 +1383,9 @@ def test_timestamp_compare_with_early_datetime(self):
# e.g. datetime.min
stamp = Timestamp('2012-01-01')
- self.assertFalse(stamp == datetime.min)
- self.assertFalse(stamp == datetime(1600, 1, 1))
- self.assertFalse(stamp == datetime(2700, 1, 1))
+ assert not stamp == datetime.min
+ assert not stamp == datetime(1600, 1, 1)
+ assert not stamp == datetime(2700, 1, 1)
self.assertNotEqual(stamp, datetime.min)
self.assertNotEqual(stamp, datetime(1600, 1, 1))
self.assertNotEqual(stamp, datetime(2700, 1, 1))
diff --git a/pandas/tests/series/test_analytics.py b/pandas/tests/series/test_analytics.py
index eb8a83bb85847..f5bccdd55e944 100644
--- a/pandas/tests/series/test_analytics.py
+++ b/pandas/tests/series/test_analytics.py
@@ -646,7 +646,7 @@ def test_prod_numpy16_bug(self):
def test_all_any(self):
ts = tm.makeTimeSeries()
bool_series = ts > 0
- self.assertFalse(bool_series.all())
+ assert not bool_series.all()
self.assertTrue(bool_series.any())
# Alternative types, with implicit 'object' dtype.
@@ -660,7 +660,7 @@ def test_all_any_params(self):
self.assertTrue(s1.all(skipna=False)) # nan && True => True
self.assertTrue(s1.all(skipna=True))
self.assertTrue(np.isnan(s2.any(skipna=False))) # nan || False => nan
- self.assertFalse(s2.any(skipna=True))
+ assert not s2.any(skipna=True)
# Check level.
s = pd.Series([False, False, True, True, False, True],
@@ -699,7 +699,7 @@ def test_modulo(self):
p = p.astype('float64')
result = p['first'] % p['second']
result2 = p['second'] % p['first']
- self.assertFalse(np.array_equal(result, result2))
+ assert not np.array_equal(result, result2)
# GH 9144
s = Series([0, 1])
@@ -1362,14 +1362,14 @@ def test_searchsorted_sorter(self):
def test_is_unique(self):
# GH11946
s = Series(np.random.randint(0, 10, size=1000))
- self.assertFalse(s.is_unique)
+ assert not s.is_unique
s = Series(np.arange(1000))
self.assertTrue(s.is_unique)
def test_is_monotonic(self):
s = Series(np.random.randint(0, 10, size=1000))
- self.assertFalse(s.is_monotonic)
+ assert not s.is_monotonic
s = Series(np.arange(1000))
self.assertTrue(s.is_monotonic)
self.assertTrue(s.is_monotonic_increasing)
@@ -1380,7 +1380,7 @@ def test_is_monotonic(self):
self.assertTrue(s.is_monotonic)
self.assertTrue(s.is_monotonic_increasing)
s = Series(list(reversed(s.tolist())))
- self.assertFalse(s.is_monotonic)
+ assert not s.is_monotonic
self.assertTrue(s.is_monotonic_decreasing)
def test_sort_index_level(self):
diff --git a/pandas/tests/series/test_api.py b/pandas/tests/series/test_api.py
index 397058c4bb8ce..5b7ac9bc2b33c 100644
--- a/pandas/tests/series/test_api.py
+++ b/pandas/tests/series/test_api.py
@@ -216,7 +216,7 @@ def test_iteritems(self):
self.assertEqual(val, self.ts[idx])
# assert is lazy (genrators don't define reverse, lists do)
- self.assertFalse(hasattr(self.series.iteritems(), 'reverse'))
+ assert not hasattr(self.series.iteritems(), 'reverse')
def test_raise_on_info(self):
s = Series(np.random.randn(10))
@@ -239,7 +239,7 @@ def test_copy(self):
if deep is None or deep is True:
# Did not modify original Series
self.assertTrue(np.isnan(s2[0]))
- self.assertFalse(np.isnan(s[0]))
+ assert not np.isnan(s[0])
else:
# we DID modify the original Series
self.assertTrue(np.isnan(s2[0]))
diff --git a/pandas/tests/series/test_asof.py b/pandas/tests/series/test_asof.py
index 9c1e4626e1736..137390b6427eb 100644
--- a/pandas/tests/series/test_asof.py
+++ b/pandas/tests/series/test_asof.py
@@ -140,7 +140,7 @@ def test_errors(self):
Timestamp('20130102')])
# non-monotonic
- self.assertFalse(s.index.is_monotonic)
+ assert not s.index.is_monotonic
with pytest.raises(ValueError):
s.asof(s.index[0])
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index a870667ff3f96..b08653b0001ca 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -65,8 +65,8 @@ def test_constructor(self):
self.assertEqual(mixed.dtype, np.object_)
assert mixed[1] is np.NaN
- self.assertFalse(self.empty.index.is_all_dates)
- self.assertFalse(Series({}).index.is_all_dates)
+ assert not self.empty.index.is_all_dates
+ assert not Series({}).index.is_all_dates
pytest.raises(Exception, Series, np.random.randn(3, 3),
index=np.arange(3))
@@ -265,7 +265,7 @@ def test_constructor_copy(self):
# changes to origin of copy does not affect the copy
x[0] = 2.
- self.assertFalse(x.equals(y))
+ assert not x.equals(y)
self.assertEqual(x[0], 2.)
self.assertEqual(y[0], 1.)
@@ -354,7 +354,7 @@ def test_constructor_dtype_datetime64(self):
# in theory this should be all nulls, but since
# we are not specifying a dtype is ambiguous
s = Series(iNaT, index=lrange(5))
- self.assertFalse(isnull(s).all())
+ assert not isnull(s).all()
s = Series(nan, dtype='M8[ns]', index=lrange(5))
self.assertTrue(isnull(s).all())
diff --git a/pandas/tests/series/test_datetime_values.py b/pandas/tests/series/test_datetime_values.py
index 74a4e37f0923a..c56a5baac12af 100644
--- a/pandas/tests/series/test_datetime_values.py
+++ b/pandas/tests/series/test_datetime_values.py
@@ -378,7 +378,7 @@ def test_dt_accessor_api(self):
with tm.assert_raises_regex(AttributeError,
"only use .dt accessor"):
s.dt
- self.assertFalse(hasattr(s, 'dt'))
+ assert not hasattr(s, 'dt')
def test_sub_of_datetime_from_TimeSeries(self):
from pandas.core.tools.timedeltas import to_timedelta
diff --git a/pandas/tests/series/test_indexing.py b/pandas/tests/series/test_indexing.py
index 6907cc194f0f0..601262df89260 100644
--- a/pandas/tests/series/test_indexing.py
+++ b/pandas/tests/series/test_indexing.py
@@ -728,7 +728,7 @@ def test_setitem(self):
self.assertTrue(np.isnan(self.ts[6]))
self.assertTrue(np.isnan(self.ts[2]))
self.ts[np.isnan(self.ts)] = 5
- self.assertFalse(np.isnan(self.ts[2]))
+ assert not np.isnan(self.ts[2])
# caught this bug when writing tests
series = Series(tm.makeIntIndex(20).astype(float),
@@ -1514,21 +1514,21 @@ def test_where_numeric_with_string(self):
s = pd.Series([1, 2, 3])
w = s.where(s > 1, 'X')
- self.assertFalse(is_integer(w[0]))
+ assert not is_integer(w[0])
self.assertTrue(is_integer(w[1]))
self.assertTrue(is_integer(w[2]))
self.assertTrue(isinstance(w[0], str))
self.assertTrue(w.dtype == 'object')
w = s.where(s > 1, ['X', 'Y', 'Z'])
- self.assertFalse(is_integer(w[0]))
+ assert not is_integer(w[0])
self.assertTrue(is_integer(w[1]))
self.assertTrue(is_integer(w[2]))
self.assertTrue(isinstance(w[0], str))
self.assertTrue(w.dtype == 'object')
w = s.where(s > 1, np.array(['X', 'Y', 'Z']))
- self.assertFalse(is_integer(w[0]))
+ assert not is_integer(w[0])
self.assertTrue(is_integer(w[1]))
self.assertTrue(is_integer(w[2]))
self.assertTrue(isinstance(w[0], str))
@@ -1716,7 +1716,7 @@ def test_underlying_data_conversion(self):
def test_preserveRefs(self):
seq = self.ts[[5, 10, 15]]
seq[1] = np.NaN
- self.assertFalse(np.isnan(self.ts[10]))
+ assert not np.isnan(self.ts[10])
def test_drop(self):
@@ -1851,7 +1851,7 @@ def test_align_nocopy(self):
a = self.ts.copy()
ra, _ = a.align(b, join='left')
ra[:5] = 5
- self.assertFalse((a[:5] == 5).any())
+ assert not (a[:5] == 5).any()
# do not copy
a = self.ts.copy()
@@ -1864,7 +1864,7 @@ def test_align_nocopy(self):
b = self.ts[:5].copy()
_, rb = a.align(b, join='right')
rb[:3] = 5
- self.assertFalse((b[:3] == 5).any())
+ assert not (b[:3] == 5).any()
# do not copy
a = self.ts.copy()
@@ -1952,7 +1952,7 @@ def test_reindex(self):
# return a copy the same index here
result = self.ts.reindex()
- self.assertFalse((result is self.ts))
+ assert result is not self.ts
def test_reindex_nan(self):
ts = Series([2, 3, 5, 7], index=[1, 4, nan, 8])
@@ -1974,7 +1974,7 @@ def test_reindex_series_add_nat(self):
mask = result.isnull()
self.assertTrue(mask[-5:].all())
- self.assertFalse(mask[:-5].any())
+ assert not mask[:-5].any()
def test_reindex_with_datetimes(self):
rng = date_range('1/1/2000', periods=20)
@@ -2279,7 +2279,7 @@ def test_constructor(self):
assert isinstance(self.dups.index, DatetimeIndex)
def test_is_unique_monotonic(self):
- self.assertFalse(self.dups.index.is_unique)
+ assert not self.dups.index.is_unique
def test_index_unique(self):
uniques = self.dups.index.unique()
diff --git a/pandas/tests/series/test_missing.py b/pandas/tests/series/test_missing.py
index e7c1b22216dcb..53c8c518eb3eb 100644
--- a/pandas/tests/series/test_missing.py
+++ b/pandas/tests/series/test_missing.py
@@ -487,19 +487,19 @@ def test_timedelta64_nan(self):
self.assertTrue(isnull(td1[0]))
self.assertEqual(td1[0].value, iNaT)
td1[0] = td[0]
- self.assertFalse(isnull(td1[0]))
+ assert not isnull(td1[0])
td1[1] = iNaT
self.assertTrue(isnull(td1[1]))
self.assertEqual(td1[1].value, iNaT)
td1[1] = td[1]
- self.assertFalse(isnull(td1[1]))
+ assert not isnull(td1[1])
td1[2] = NaT
self.assertTrue(isnull(td1[2]))
self.assertEqual(td1[2].value, iNaT)
td1[2] = td[2]
- self.assertFalse(isnull(td1[2]))
+ assert not isnull(td1[2])
# boolean setting
# this doesn't work, not sure numpy even supports it
@@ -552,7 +552,7 @@ def test_dropna_no_nan(self):
result = s.dropna()
tm.assert_series_equal(result, s)
- self.assertFalse(result is s)
+ assert result is not s
s2 = s.copy()
s2.dropna(inplace=True)
diff --git a/pandas/tests/series/test_operators.py b/pandas/tests/series/test_operators.py
index 89ed7975e8017..eb840faac05e0 100644
--- a/pandas/tests/series/test_operators.py
+++ b/pandas/tests/series/test_operators.py
@@ -122,7 +122,7 @@ def test_div(self):
assert_series_equal(result, p['first'].astype('float64'),
check_names=False)
self.assertTrue(result.name is None)
- self.assertFalse(np.array_equal(result, p['second'] / p['first']))
+ assert not np.array_equal(result, p['second'] / p['first'])
# inf signing
s = Series([np.nan, 1., -1.])
diff --git a/pandas/tests/series/test_repr.py b/pandas/tests/series/test_repr.py
index b4ad90f6f35af..c92a82e287120 100644
--- a/pandas/tests/series/test_repr.py
+++ b/pandas/tests/series/test_repr.py
@@ -103,9 +103,9 @@ def test_repr(self):
assert "Name: 0" in rep_str
ser = Series(["a\n\r\tb"], name="a\n\r\td", index=["a\n\r\tf"])
- self.assertFalse("\t" in repr(ser))
- self.assertFalse("\r" in repr(ser))
- self.assertFalse("a\n" in repr(ser))
+ assert "\t" not in repr(ser)
+ assert "\r" not in repr(ser)
+ assert "a\n" not in repr(ser)
# with empty series (#4651)
s = Series([], dtype=np.int64, name='foo')
diff --git a/pandas/tests/sparse/test_array.py b/pandas/tests/sparse/test_array.py
index bb6ff7a0c728f..33df4b5e59bc9 100644
--- a/pandas/tests/sparse/test_array.py
+++ b/pandas/tests/sparse/test_array.py
@@ -299,7 +299,7 @@ def test_constructor_from_sparse(self):
def test_constructor_copy(self):
cp = SparseArray(self.arr, copy=True)
cp.sp_values[:3] = 0
- self.assertFalse((self.arr.sp_values[:3] == 0).any())
+ assert not (self.arr.sp_values[:3] == 0).any()
not_copy = SparseArray(self.arr)
not_copy.sp_values[:3] = 0
@@ -323,11 +323,11 @@ def test_constructor_bool(self):
def test_constructor_bool_fill_value(self):
arr = SparseArray([True, False, True], dtype=None)
self.assertEqual(arr.dtype, np.bool)
- self.assertFalse(arr.fill_value)
+ assert not arr.fill_value
arr = SparseArray([True, False, True], dtype=np.bool)
self.assertEqual(arr.dtype, np.bool)
- self.assertFalse(arr.fill_value)
+ assert not arr.fill_value
arr = SparseArray([True, False, True], dtype=np.bool, fill_value=True)
self.assertEqual(arr.dtype, np.bool)
@@ -352,7 +352,7 @@ def test_constructor_float32(self):
def test_astype(self):
res = self.arr.astype('f8')
res.sp_values[:3] = 27
- self.assertFalse((self.arr.sp_values[:3] == 27).any())
+ assert not (self.arr.sp_values[:3] == 27).any()
msg = "unable to coerce current fill_value nan to int64 dtype"
with tm.assert_raises_regex(ValueError, msg):
diff --git a/pandas/tests/sparse/test_libsparse.py b/pandas/tests/sparse/test_libsparse.py
index 63ed11845a896..55115f45ff740 100644
--- a/pandas/tests/sparse/test_libsparse.py
+++ b/pandas/tests/sparse/test_libsparse.py
@@ -437,7 +437,7 @@ def test_equals(self):
index = BlockIndex(10, [0, 4], [2, 5])
self.assertTrue(index.equals(index))
- self.assertFalse(index.equals(BlockIndex(10, [0, 4], [2, 6])))
+ assert not index.equals(BlockIndex(10, [0, 4], [2, 6]))
def test_check_integrity(self):
locs = []
@@ -535,7 +535,7 @@ def test_int_internal(self):
def test_equals(self):
index = IntIndex(10, [0, 1, 2, 3, 4])
self.assertTrue(index.equals(index))
- self.assertFalse(index.equals(IntIndex(10, [0, 1, 2, 3])))
+ assert not index.equals(IntIndex(10, [0, 1, 2, 3]))
def test_to_block_index(self):
diff --git a/pandas/tests/test_base.py b/pandas/tests/test_base.py
index e4f39197421a0..e058a62ea3089 100644
--- a/pandas/tests/test_base.py
+++ b/pandas/tests/test_base.py
@@ -277,8 +277,8 @@ def test_none_comparison(self):
# noinspection PyComparisonWithNone
result = o == None # noqa
- self.assertFalse(result.iat[0])
- self.assertFalse(result.iat[1])
+ assert not result.iat[0]
+ assert not result.iat[1]
# noinspection PyComparisonWithNone
result = o != None # noqa
@@ -286,8 +286,8 @@ def test_none_comparison(self):
self.assertTrue(result.iat[1])
result = None == o # noqa
- self.assertFalse(result.iat[0])
- self.assertFalse(result.iat[1])
+ assert not result.iat[0]
+ assert not result.iat[1]
# this fails for numpy < 1.9
# and oddly for *some* platforms
@@ -296,12 +296,12 @@ def test_none_comparison(self):
# self.assertTrue(result.iat[1])
result = None > o
- self.assertFalse(result.iat[0])
- self.assertFalse(result.iat[1])
+ assert not result.iat[0]
+ assert not result.iat[1]
result = o < None
- self.assertFalse(result.iat[0])
- self.assertFalse(result.iat[1])
+ assert not result.iat[0]
+ assert not result.iat[1]
def test_ndarray_compat_properties(self):
@@ -796,10 +796,10 @@ def test_duplicated_drop_duplicates_index(self):
self.assertTrue(duplicated.dtype == bool)
result = original.drop_duplicates()
tm.assert_index_equal(result, original)
- self.assertFalse(result is original)
+ assert result is not original
# has_duplicates
- self.assertFalse(original.has_duplicates)
+ assert not original.has_duplicates
# create repeated values, 3rd and 5th values are duplicated
idx = original[list(range(len(original))) + [5, 3]]
@@ -843,7 +843,7 @@ def test_duplicated_drop_duplicates_index(self):
tm.assert_series_equal(original.duplicated(), expected)
result = original.drop_duplicates()
tm.assert_series_equal(result, original)
- self.assertFalse(result is original)
+ assert result is not original
idx = original.index[list(range(len(original))) + [5, 3]]
values = original._values[list(range(len(original))) + [5, 3]]
@@ -907,7 +907,7 @@ def test_fillna(self):
else:
tm.assert_series_equal(o, result)
# check shallow_copied
- self.assertFalse(o is result)
+ assert o is not result
for null_obj in [np.nan, None]:
for orig in self.objs:
@@ -941,7 +941,7 @@ def test_fillna(self):
else:
tm.assert_series_equal(result, expected)
# check shallow_copied
- self.assertFalse(o is result)
+ assert o is not result
def test_memory_usage(self):
for o in self.objs:
diff --git a/pandas/tests/test_categorical.py b/pandas/tests/test_categorical.py
index bbcd42b147654..252b32e264c1b 100644
--- a/pandas/tests/test_categorical.py
+++ b/pandas/tests/test_categorical.py
@@ -117,7 +117,7 @@ def test_constructor_unsortable(self):
# it works!
arr = np.array([1, 2, 3, datetime.now()], dtype='O')
factor = Categorical(arr, ordered=False)
- self.assertFalse(factor.ordered)
+ assert not factor.ordered
# this however will raise as cannot be sorted
pytest.raises(
@@ -143,14 +143,14 @@ def test_is_equal_dtype(self):
self.assertTrue(c1.is_dtype_equal(c1))
self.assertTrue(c2.is_dtype_equal(c2))
self.assertTrue(c3.is_dtype_equal(c3))
- self.assertFalse(c1.is_dtype_equal(c2))
- self.assertFalse(c1.is_dtype_equal(c3))
- self.assertFalse(c1.is_dtype_equal(Index(list('aabca'))))
- self.assertFalse(c1.is_dtype_equal(c1.astype(object)))
+ assert not c1.is_dtype_equal(c2)
+ assert not c1.is_dtype_equal(c3)
+ assert not c1.is_dtype_equal(Index(list('aabca')))
+ assert not c1.is_dtype_equal(c1.astype(object))
self.assertTrue(c1.is_dtype_equal(CategoricalIndex(c1)))
- self.assertFalse(c1.is_dtype_equal(
+ assert not (c1.is_dtype_equal(
CategoricalIndex(c1, categories=list('cab'))))
- self.assertFalse(c1.is_dtype_equal(CategoricalIndex(c1, ordered=True)))
+ assert not c1.is_dtype_equal(CategoricalIndex(c1, ordered=True))
def test_constructor(self):
@@ -175,7 +175,7 @@ def f():
# The default should be unordered
c1 = Categorical(["a", "b", "c", "a"])
- self.assertFalse(c1.ordered)
+ assert not c1.ordered
# Categorical as input
c1 = Categorical(["a", "b", "c", "a"])
@@ -534,7 +534,7 @@ def f():
# Only categories with same ordering information can be compared
cat_unorderd = cat.set_ordered(False)
- self.assertFalse((cat > cat).any())
+ assert not (cat > cat).any()
def f():
cat > cat_unorderd
@@ -788,9 +788,9 @@ def f():
def test_construction_with_ordered(self):
# GH 9347, 9190
cat = Categorical([0, 1, 2])
- self.assertFalse(cat.ordered)
+ assert not cat.ordered
cat = Categorical([0, 1, 2], ordered=False)
- self.assertFalse(cat.ordered)
+ assert not cat.ordered
cat = Categorical([0, 1, 2], ordered=True)
self.assertTrue(cat.ordered)
@@ -798,12 +798,12 @@ def test_ordered_api(self):
# GH 9347
cat1 = pd.Categorical(["a", "c", "b"], ordered=False)
tm.assert_index_equal(cat1.categories, Index(['a', 'b', 'c']))
- self.assertFalse(cat1.ordered)
+ assert not cat1.ordered
cat2 = pd.Categorical(["a", "c", "b"], categories=['b', 'c', 'a'],
ordered=False)
tm.assert_index_equal(cat2.categories, Index(['b', 'c', 'a']))
- self.assertFalse(cat2.ordered)
+ assert not cat2.ordered
cat3 = pd.Categorical(["a", "c", "b"], ordered=True)
tm.assert_index_equal(cat3.categories, Index(['a', 'b', 'c']))
@@ -818,20 +818,20 @@ def test_set_ordered(self):
cat = Categorical(["a", "b", "c", "a"], ordered=True)
cat2 = cat.as_unordered()
- self.assertFalse(cat2.ordered)
+ assert not cat2.ordered
cat2 = cat.as_ordered()
self.assertTrue(cat2.ordered)
cat2.as_unordered(inplace=True)
- self.assertFalse(cat2.ordered)
+ assert not cat2.ordered
cat2.as_ordered(inplace=True)
self.assertTrue(cat2.ordered)
self.assertTrue(cat2.set_ordered(True).ordered)
- self.assertFalse(cat2.set_ordered(False).ordered)
+ assert not cat2.set_ordered(False).ordered
cat2.set_ordered(True, inplace=True)
self.assertTrue(cat2.ordered)
cat2.set_ordered(False, inplace=True)
- self.assertFalse(cat2.ordered)
+ assert not cat2.ordered
# removed in 0.19.0
msg = "can\'t set attribute"
@@ -1876,7 +1876,7 @@ def test_sideeffects_free(self):
# other one, IF you specify copy!
cat = Categorical(["a", "b", "c", "a"])
s = pd.Series(cat, copy=True)
- self.assertFalse(s.cat is cat)
+ assert s.cat is not cat
s.cat.categories = [1, 2, 3]
exp_s = np.array([1, 2, 3, 1], dtype=np.int64)
exp_cat = np.array(["a", "b", "c", "a"], dtype=np.object_)
@@ -3783,17 +3783,17 @@ def test_cat_equality(self):
f = Categorical(list('acb'))
# vs scalar
- self.assertFalse((a == 'a').all())
+ assert not (a == 'a').all()
self.assertTrue(((a != 'a') == ~(a == 'a')).all())
- self.assertFalse(('a' == a).all())
+ assert not ('a' == a).all()
self.assertTrue((a == 'a')[0])
self.assertTrue(('a' == a)[0])
- self.assertFalse(('a' != a)[0])
+ assert not ('a' != a)[0]
# vs list-like
self.assertTrue((a == a).all())
- self.assertFalse((a != a).all())
+ assert not (a != a).all()
self.assertTrue((a == list(a)).all())
self.assertTrue((a == b).all())
@@ -3801,16 +3801,16 @@ def test_cat_equality(self):
self.assertTrue(((~(a == b)) == (a != b)).all())
self.assertTrue(((~(b == a)) == (b != a)).all())
- self.assertFalse((a == c).all())
- self.assertFalse((c == a).all())
- self.assertFalse((a == d).all())
- self.assertFalse((d == a).all())
+ assert not (a == c).all()
+ assert not (c == a).all()
+ assert not (a == d).all()
+ assert not (d == a).all()
# vs a cat-like
self.assertTrue((a == e).all())
self.assertTrue((e == a).all())
- self.assertFalse((a == f).all())
- self.assertFalse((f == a).all())
+ assert not (a == f).all()
+ assert not (f == a).all()
self.assertTrue(((~(a == e) == (a != e)).all()))
self.assertTrue(((~(e == a) == (e != a)).all()))
@@ -4226,7 +4226,7 @@ def test_cat_accessor_api(self):
with tm.assert_raises_regex(AttributeError,
"only use .cat accessor"):
invalid.cat
- self.assertFalse(hasattr(invalid, 'cat'))
+ assert not hasattr(invalid, 'cat')
def test_cat_accessor_no_new_attributes(self):
# https://github.com/pandas-dev/pandas/issues/10673
@@ -4309,7 +4309,7 @@ def test_str_accessor_api_for_categorical(self):
"Can only use .str "
"accessor with string"):
invalid.str
- self.assertFalse(hasattr(invalid, 'str'))
+ assert not hasattr(invalid, 'str')
def test_dt_accessor_api_for_categorical(self):
# https://github.com/pandas-dev/pandas/issues/10661
@@ -4390,7 +4390,7 @@ def test_dt_accessor_api_for_categorical(self):
with tm.assert_raises_regex(
AttributeError, "Can only use .dt accessor with datetimelike"):
invalid.dt
- self.assertFalse(hasattr(invalid, 'str'))
+ assert not hasattr(invalid, 'str')
def test_concat_categorical(self):
# See GH 10177
diff --git a/pandas/tests/test_config.py b/pandas/tests/test_config.py
index f260895e74dda..0e614fdbfe008 100644
--- a/pandas/tests/test_config.py
+++ b/pandas/tests/test_config.py
@@ -114,8 +114,7 @@ def test_describe_option(self):
self.assertTrue(
'foo' in self.cf.describe_option('l', _print_desc=False))
# current value is reported
- self.assertFalse(
- 'bar' in self.cf.describe_option('l', _print_desc=False))
+ assert 'bar' not in self.cf.describe_option('l', _print_desc=False)
self.cf.set_option("l", "bar")
self.assertTrue(
'bar' in self.cf.describe_option('l', _print_desc=False))
diff --git a/pandas/tests/test_expressions.py b/pandas/tests/test_expressions.py
index 14e08411fa106..782d2682145d8 100644
--- a/pandas/tests/test_expressions.py
+++ b/pandas/tests/test_expressions.py
@@ -254,17 +254,17 @@ def test_invalid(self):
# no op
result = expr._can_use_numexpr(operator.add, None, self.frame,
self.frame, 'evaluate')
- self.assertFalse(result)
+ assert not result
# mixed
result = expr._can_use_numexpr(operator.add, '+', self.mixed,
self.frame, 'evaluate')
- self.assertFalse(result)
+ assert not result
# min elements
result = expr._can_use_numexpr(operator.add, '+', self.frame2,
self.frame2, 'evaluate')
- self.assertFalse(result)
+ assert not result
# ok, we only check on first part of expression
result = expr._can_use_numexpr(operator.add, '+', self.frame,
@@ -308,7 +308,7 @@ def testit():
result = expr._can_use_numexpr(op, op_str, f2, f2,
'evaluate')
- self.assertFalse(result)
+ assert not result
expr.set_use_numexpr(False)
testit()
@@ -349,7 +349,7 @@ def testit():
result = expr._can_use_numexpr(op, op_str, f21, f22,
'evaluate')
- self.assertFalse(result)
+ assert not result
expr.set_use_numexpr(False)
testit()
diff --git a/pandas/tests/test_lib.py b/pandas/tests/test_lib.py
index 5c3e6adb48808..621f624c41a19 100644
--- a/pandas/tests/test_lib.py
+++ b/pandas/tests/test_lib.py
@@ -152,7 +152,7 @@ def test_maybe_indices_to_slice_both_edges(self):
for case in [[4, 2, 0, -2], [2, 2, 1, 0], [0, 1, 2, 1]]:
indices = np.array(case, dtype=np.int64)
maybe_slice = lib.maybe_indices_to_slice(indices, len(target))
- self.assertFalse(isinstance(maybe_slice, slice))
+ assert not isinstance(maybe_slice, slice)
tm.assert_numpy_array_equal(maybe_slice, indices)
tm.assert_numpy_array_equal(target[indices], target[maybe_slice])
diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py
index f350ef4351585..668f5b2a5a962 100755
--- a/pandas/tests/test_multilevel.py
+++ b/pandas/tests/test_multilevel.py
@@ -876,7 +876,7 @@ def test_stack(self):
# GH10417
def check(left, right):
tm.assert_series_equal(left, right)
- self.assertFalse(left.index.is_unique)
+ assert not left.index.is_unique
li, ri = left.index, right.index
tm.assert_index_equal(li, ri)
@@ -1225,7 +1225,7 @@ def test_join(self):
expected = self.frame.copy()
expected.values[np.isnan(joined.values)] = np.nan
- self.assertFalse(np.isnan(joined.values).all())
+ assert not np.isnan(joined.values).all()
# TODO what should join do with names ?
tm.assert_frame_equal(joined, expected, check_names=False)
@@ -1235,7 +1235,7 @@ def test_swaplevel(self):
swapped2 = self.frame['A'].swaplevel(0)
swapped3 = self.frame['A'].swaplevel(0, 1)
swapped4 = self.frame['A'].swaplevel('first', 'second')
- self.assertFalse(swapped.index.equals(self.frame.index))
+ assert not swapped.index.equals(self.frame.index)
tm.assert_series_equal(swapped, swapped2)
tm.assert_series_equal(swapped, swapped3)
tm.assert_series_equal(swapped, swapped4)
@@ -1831,7 +1831,7 @@ def test_drop_level_nonunique_datetime(self):
df['tstamp'] = idxdt
df = df.set_index('tstamp', append=True)
ts = pd.Timestamp('201603231600')
- self.assertFalse(df.index.is_unique)
+ assert not df.index.is_unique
result = df.drop(ts, level='tstamp')
expected = df.loc[idx != 4]
@@ -2430,11 +2430,11 @@ def test_is_lexsorted(self):
index = MultiIndex(levels=levels,
labels=[[0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 2, 1]])
- self.assertFalse(index.is_lexsorted())
+ assert not index.is_lexsorted()
index = MultiIndex(levels=levels,
labels=[[0, 0, 1, 0, 1, 1], [0, 1, 0, 2, 2, 1]])
- self.assertFalse(index.is_lexsorted())
+ assert not index.is_lexsorted()
self.assertEqual(index.lexsort_depth, 0)
def test_getitem_multilevel_index_tuple_not_sorted(self):
diff --git a/pandas/tests/test_nanops.py b/pandas/tests/test_nanops.py
index 1aad2f5224c0d..a108749db8e6a 100644
--- a/pandas/tests/test_nanops.py
+++ b/pandas/tests/test_nanops.py
@@ -659,7 +659,7 @@ def check_bool(self, func, value, correct, *args, **kwargs):
if correct:
self.assertTrue(res0)
else:
- self.assertFalse(res0)
+ assert not res0
except BaseException as exc:
exc.args += ('dim: %s' % getattr(value, 'ndim', value), )
raise
@@ -742,9 +742,9 @@ def test__bn_ok_dtype(self):
self.assertTrue(nanops._bn_ok_dtype(self.arr_bool.dtype, 'test'))
self.assertTrue(nanops._bn_ok_dtype(self.arr_str.dtype, 'test'))
self.assertTrue(nanops._bn_ok_dtype(self.arr_utf.dtype, 'test'))
- self.assertFalse(nanops._bn_ok_dtype(self.arr_date.dtype, 'test'))
- self.assertFalse(nanops._bn_ok_dtype(self.arr_tdelta.dtype, 'test'))
- self.assertFalse(nanops._bn_ok_dtype(self.arr_obj.dtype, 'test'))
+ assert not nanops._bn_ok_dtype(self.arr_date.dtype, 'test')
+ assert not nanops._bn_ok_dtype(self.arr_tdelta.dtype, 'test')
+ assert not nanops._bn_ok_dtype(self.arr_obj.dtype, 'test')
class TestEnsureNumeric(tm.TestCase):
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
index 322ea32a93562..802acc86d3359 100644
--- a/pandas/tests/test_panel.py
+++ b/pandas/tests/test_panel.py
@@ -1050,7 +1050,7 @@ def test_consolidate(self):
self.assertTrue(self.panel._data.is_consolidated())
self.panel['foo'] = 1.
- self.assertFalse(self.panel._data.is_consolidated())
+ assert not self.panel._data.is_consolidated()
panel = self.panel._consolidate()
self.assertTrue(panel._data.is_consolidated())
@@ -1425,7 +1425,7 @@ def test_reindex(self):
# this ok
result = self.panel.reindex()
assert_panel_equal(result, self.panel)
- self.assertFalse(result is self.panel)
+ assert result is not self.panel
# with filling
smaller_major = self.panel.major_axis[::5]
diff --git a/pandas/tests/test_panel4d.py b/pandas/tests/test_panel4d.py
index 05b42cdf00e94..5b4f09009c9db 100644
--- a/pandas/tests/test_panel4d.py
+++ b/pandas/tests/test_panel4d.py
@@ -684,7 +684,7 @@ def test_consolidate(self):
self.assertTrue(self.panel4d._data.is_consolidated())
self.panel4d['foo'] = 1.
- self.assertFalse(self.panel4d._data.is_consolidated())
+ assert not self.panel4d._data.is_consolidated()
panel4d = self.panel4d._consolidate()
self.assertTrue(panel4d._data.is_consolidated())
@@ -803,7 +803,7 @@ def test_reindex(self):
# don't necessarily copy
result = self.panel4d.reindex()
assert_panel4d_equal(result, self.panel4d)
- self.assertFalse(result is self.panel4d)
+ assert result is not self.panel4d
# with filling
smaller_major = self.panel4d.major_axis[::5]
@@ -857,7 +857,7 @@ def test_sort_index(self):
def test_fillna(self):
with catch_warnings(record=True):
- self.assertFalse(np.isfinite(self.panel4d.values).all())
+ assert not np.isfinite(self.panel4d.values).all()
filled = self.panel4d.fillna(0)
self.assertTrue(np.isfinite(filled.values).all())
diff --git a/pandas/tests/test_resample.py b/pandas/tests/test_resample.py
index f5309a985a499..42a6a2a784a0e 100644
--- a/pandas/tests/test_resample.py
+++ b/pandas/tests/test_resample.py
@@ -134,8 +134,8 @@ def f():
# masquerade as Series/DataFrame as needed for API compat
self.assertTrue(isinstance(self.series.resample('H'), ABCSeries))
- self.assertFalse(isinstance(self.frame.resample('H'), ABCSeries))
- self.assertFalse(isinstance(self.series.resample('H'), ABCDataFrame))
+ assert not isinstance(self.frame.resample('H'), ABCSeries)
+ assert not isinstance(self.series.resample('H'), ABCDataFrame)
self.assertTrue(isinstance(self.frame.resample('H'), ABCDataFrame))
# bin numeric ops
diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index db0c2fdc80fd2..45e8aa3a367db 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -32,7 +32,7 @@ def test_api(self):
with tm.assert_raises_regex(AttributeError,
"only use .str accessor"):
invalid.str
- self.assertFalse(hasattr(invalid, 'str'))
+ assert not hasattr(invalid, 'str')
def test_iter(self):
# GH3638
@@ -76,7 +76,7 @@ def test_iter_single_element(self):
for i, s in enumerate(ds.str):
pass
- self.assertFalse(i)
+ assert not i
assert_series_equal(ds, s)
def test_iter_object_try_string(self):
diff --git a/pandas/tests/test_window.py b/pandas/tests/test_window.py
index adfecc90129e9..13d471f368693 100644
--- a/pandas/tests/test_window.py
+++ b/pandas/tests/test_window.py
@@ -1298,9 +1298,9 @@ def get_result(arr, window, min_periods=None, center=False):
# min_periods is working correctly
result = get_result(arr, 20, min_periods=15)
self.assertTrue(np.isnan(result[23]))
- self.assertFalse(np.isnan(result[24]))
+ assert not np.isnan(result[24])
- self.assertFalse(np.isnan(result[-6]))
+ assert not np.isnan(result[-6])
self.assertTrue(np.isnan(result[-5]))
arr2 = randn(20)
@@ -1660,18 +1660,18 @@ def _check_ew_ndarray(self, func, preserve_nan=False, name=None):
# GH 7898
result = func(s, 50, min_periods=2)
self.assertTrue(np.isnan(result.values[:11]).all())
- self.assertFalse(np.isnan(result.values[11:]).any())
+ assert not np.isnan(result.values[11:]).any()
for min_periods in (0, 1):
result = func(s, 50, min_periods=min_periods)
if func == mom.ewma:
self.assertTrue(np.isnan(result.values[:10]).all())
- self.assertFalse(np.isnan(result.values[10:]).any())
+ assert not np.isnan(result.values[10:]).any()
else:
# ewmstd, ewmvol, ewmvar (with bias=False) require at least two
# values
self.assertTrue(np.isnan(result.values[:11]).all())
- self.assertFalse(np.isnan(result.values[11:]).any())
+ assert not np.isnan(result.values[11:]).any()
# check series of length 0
result = func(Series([]), 50, min_periods=min_periods)
@@ -2010,11 +2010,11 @@ def _non_null_values(x):
# check that var(x), std(x), and cov(x) are all >= 0
var_x = var(x)
std_x = std(x)
- self.assertFalse((var_x < 0).any().any())
- self.assertFalse((std_x < 0).any().any())
+ assert not (var_x < 0).any().any()
+ assert not (std_x < 0).any().any()
if cov:
cov_x_x = cov(x, x)
- self.assertFalse((cov_x_x < 0).any().any())
+ assert not (cov_x_x < 0).any().any()
# check that var(x) == cov(x, x)
assert_equal(var_x, cov_x_x)
@@ -2029,7 +2029,7 @@ def _non_null_values(x):
if is_constant:
# check that variance of constant series is identically 0
- self.assertFalse((var_x > 0).any().any())
+ assert not (var_x > 0).any().any()
expected = x * np.nan
expected[count_x >= max(min_periods, 1)] = 0.
if var is var_unbiased:
@@ -2466,7 +2466,7 @@ def func(A, B, com, **kwargs):
result = func(A, B, 20, min_periods=5)
self.assertTrue(np.isnan(result.values[:14]).all())
- self.assertFalse(np.isnan(result.values[14:]).any())
+ assert not np.isnan(result.values[14:]).any()
# GH 7898
for min_periods in (0, 1, 2):
@@ -2474,7 +2474,7 @@ def func(A, B, com, **kwargs):
# binary functions (ewmcov, ewmcorr) with bias=False require at
# least two values
self.assertTrue(np.isnan(result.values[:11]).all())
- self.assertFalse(np.isnan(result.values[11:]).any())
+ assert not np.isnan(result.values[11:]).any()
# check series of length 0
result = func(Series([]), Series([]), 50, min_periods=min_periods)
@@ -2891,7 +2891,7 @@ def _check_expanding_ndarray(self, func, static_comp, has_min_periods=True,
# min_periods is working correctly
result = func(arr, min_periods=15)
self.assertTrue(np.isnan(result[13]))
- self.assertFalse(np.isnan(result[14]))
+ assert not np.isnan(result[14])
arr2 = randn(20)
result = func(arr2, min_periods=5)
@@ -3050,7 +3050,7 @@ def f():
pytest.raises(TypeError, f)
g = self.frame.groupby('A')
- self.assertFalse(g.mutated)
+ assert not g.mutated
g = self.frame.groupby('A', mutated=True)
self.assertTrue(g.mutated)
@@ -3277,7 +3277,7 @@ def test_monotonic_on(self):
# non-monotonic
df.index = reversed(df.index.tolist())
- self.assertFalse(df.index.is_monotonic)
+ assert not df.index.is_monotonic
with pytest.raises(ValueError):
df.rolling('2s').sum()
diff --git a/pandas/tests/tseries/test_offsets.py b/pandas/tests/tseries/test_offsets.py
index cb3fc3b60226f..1332be2567b56 100644
--- a/pandas/tests/tseries/test_offsets.py
+++ b/pandas/tests/tseries/test_offsets.py
@@ -446,7 +446,7 @@ def test_onOffset(self):
# when normalize=True, onOffset checks time is 00:00:00
offset_n = self._get_offset(offset, normalize=True)
- self.assertFalse(offset_n.onOffset(dt))
+ assert not offset_n.onOffset(dt)
if offset in (BusinessHour, CustomBusinessHour):
# In default BusinessHour (9:00-17:00), normalized time
@@ -718,7 +718,7 @@ def test_offsets_compare_equal(self):
# root cause of #456
offset1 = BDay()
offset2 = BDay()
- self.assertFalse(offset1 != offset2)
+ assert not offset1 != offset2
class TestBusinessHour(Base):
@@ -1389,7 +1389,7 @@ def test_offsets_compare_equal(self):
# root cause of #456
offset1 = self._offset()
offset2 = self._offset()
- self.assertFalse(offset1 != offset2)
+ assert not offset1 != offset2
def test_datetimeindex(self):
idx1 = DatetimeIndex(start='2014-07-04 15:00', end='2014-07-08 10:00',
@@ -1859,7 +1859,7 @@ def test_offsets_compare_equal(self):
# root cause of #456
offset1 = CDay()
offset2 = CDay()
- self.assertFalse(offset1 != offset2)
+ assert not offset1 != offset2
def test_holidays(self):
# Define a TradingDay offset
@@ -1964,7 +1964,7 @@ def testMult2(self):
def test_offsets_compare_equal(self):
offset1 = self._object()
offset2 = self._object()
- self.assertFalse(offset1 != offset2)
+ assert not offset1 != offset2
def test_roundtrip_pickle(self):
def _check_roundtrip(obj):
@@ -2230,9 +2230,9 @@ def test_corner(self):
def test_isAnchored(self):
self.assertTrue(Week(weekday=0).isAnchored())
- self.assertFalse(Week().isAnchored())
- self.assertFalse(Week(2, weekday=2).isAnchored())
- self.assertFalse(Week(2).isAnchored())
+ assert not Week().isAnchored()
+ assert not Week(2, weekday=2).isAnchored()
+ assert not Week(2).isAnchored()
def test_offset(self):
tests = []
@@ -2284,7 +2284,7 @@ def test_offsets_compare_equal(self):
# root cause of #456
offset1 = Week()
offset2 = Week()
- self.assertFalse(offset1 != offset2)
+ assert not offset1 != offset2
class TestWeekOfMonth(Base):
@@ -2507,7 +2507,7 @@ def test_offsets_compare_equal(self):
# root cause of #456
offset1 = BMonthBegin()
offset2 = BMonthBegin()
- self.assertFalse(offset1 != offset2)
+ assert not offset1 != offset2
class TestBMonthEnd(Base):
@@ -2570,7 +2570,7 @@ def test_offsets_compare_equal(self):
# root cause of #456
offset1 = BMonthEnd()
offset2 = BMonthEnd()
- self.assertFalse(offset1 != offset2)
+ assert not offset1 != offset2
class TestMonthBegin(Base):
@@ -3043,7 +3043,7 @@ def test_repr(self):
def test_isAnchored(self):
self.assertTrue(BQuarterBegin(startingMonth=1).isAnchored())
self.assertTrue(BQuarterBegin().isAnchored())
- self.assertFalse(BQuarterBegin(2, startingMonth=1).isAnchored())
+ assert not BQuarterBegin(2, startingMonth=1).isAnchored()
def test_offset(self):
tests = []
@@ -3137,7 +3137,7 @@ def test_repr(self):
def test_isAnchored(self):
self.assertTrue(BQuarterEnd(startingMonth=1).isAnchored())
self.assertTrue(BQuarterEnd().isAnchored())
- self.assertFalse(BQuarterEnd(2, startingMonth=1).isAnchored())
+ assert not BQuarterEnd(2, startingMonth=1).isAnchored()
def test_offset(self):
tests = []
@@ -3512,9 +3512,9 @@ def test_isAnchored(self):
self.assertTrue(
makeFY5253LastOfMonthQuarter(weekday=WeekDay.SAT, startingMonth=3,
qtr_with_extra_week=4).isAnchored())
- self.assertFalse(makeFY5253LastOfMonthQuarter(
+ assert not makeFY5253LastOfMonthQuarter(
2, startingMonth=1, weekday=WeekDay.SAT,
- qtr_with_extra_week=4).isAnchored())
+ qtr_with_extra_week=4).isAnchored()
def test_equality(self):
self.assertEqual(makeFY5253LastOfMonthQuarter(startingMonth=1,
@@ -3676,20 +3676,17 @@ def test_year_has_extra_week(self):
.year_has_extra_week(datetime(2010, 12, 26)))
# End of year before year with long Q1
- self.assertFalse(
- makeFY5253LastOfMonthQuarter(
- 1, startingMonth=12, weekday=WeekDay.SAT,
- qtr_with_extra_week=1)
- .year_has_extra_week(datetime(2010, 12, 25)))
+ assert not makeFY5253LastOfMonthQuarter(
+ 1, startingMonth=12, weekday=WeekDay.SAT,
+ qtr_with_extra_week=1).year_has_extra_week(datetime(2010, 12, 25))
for year in [x
for x in range(1994, 2011 + 1)
if x not in [2011, 2005, 2000, 1994]]:
- self.assertFalse(
- makeFY5253LastOfMonthQuarter(
- 1, startingMonth=12, weekday=WeekDay.SAT,
- qtr_with_extra_week=1)
- .year_has_extra_week(datetime(year, 4, 2)))
+ assert not makeFY5253LastOfMonthQuarter(
+ 1, startingMonth=12, weekday=WeekDay.SAT,
+ qtr_with_extra_week=1).year_has_extra_week(
+ datetime(year, 4, 2))
# Other long years
self.assertTrue(
@@ -3825,7 +3822,7 @@ def test_repr(self):
def test_isAnchored(self):
self.assertTrue(QuarterBegin(startingMonth=1).isAnchored())
self.assertTrue(QuarterBegin().isAnchored())
- self.assertFalse(QuarterBegin(2, startingMonth=1).isAnchored())
+ assert not QuarterBegin(2, startingMonth=1).isAnchored()
def test_offset(self):
tests = []
@@ -3903,7 +3900,7 @@ def test_repr(self):
def test_isAnchored(self):
self.assertTrue(QuarterEnd(startingMonth=1).isAnchored())
self.assertTrue(QuarterEnd().isAnchored())
- self.assertFalse(QuarterEnd(2, startingMonth=1).isAnchored())
+ assert not QuarterEnd(2, startingMonth=1).isAnchored()
def test_offset(self):
tests = []
@@ -4527,7 +4524,7 @@ def test_tick_operators(self):
def test_tick_offset(self):
for t in self.ticks:
- self.assertFalse(t().isAnchored())
+ assert not t().isAnchored()
def test_compare_ticks(self):
for kls in self.ticks:
@@ -4758,7 +4755,7 @@ def setUp(self):
def run_X_index_creation(self, cls):
inst1 = cls()
if not inst1.isAnchored():
- self.assertFalse(inst1._should_cache(), cls)
+ assert not inst1._should_cache(), cls
return
self.assertTrue(inst1._should_cache(), cls)
@@ -4768,13 +4765,13 @@ def run_X_index_creation(self, cls):
self.assertTrue(cls() in _daterange_cache, cls)
def test_should_cache_month_end(self):
- self.assertFalse(MonthEnd()._should_cache())
+ assert not MonthEnd()._should_cache()
def test_should_cache_bmonth_end(self):
- self.assertFalse(BusinessMonthEnd()._should_cache())
+ assert not BusinessMonthEnd()._should_cache()
def test_should_cache_week_month(self):
- self.assertFalse(WeekOfMonth(weekday=1, week=2)._should_cache())
+ assert not WeekOfMonth(weekday=1, week=2)._should_cache()
def test_all_cacheableoffsets(self):
for subclass in get_all_subclasses(CacheableOffset):
@@ -4786,19 +4783,19 @@ def test_all_cacheableoffsets(self):
def test_month_end_index_creation(self):
DatetimeIndex(start=datetime(2013, 1, 31), end=datetime(2013, 3, 31),
freq=MonthEnd(), normalize=True)
- self.assertFalse(MonthEnd() in _daterange_cache)
+ assert not MonthEnd() in _daterange_cache
def test_bmonth_end_index_creation(self):
DatetimeIndex(start=datetime(2013, 1, 31), end=datetime(2013, 3, 29),
freq=BusinessMonthEnd(), normalize=True)
- self.assertFalse(BusinessMonthEnd() in _daterange_cache)
+ assert not BusinessMonthEnd() in _daterange_cache
def test_week_of_month_index_creation(self):
inst1 = WeekOfMonth(weekday=1, week=2)
DatetimeIndex(start=datetime(2013, 1, 31), end=datetime(2013, 3, 29),
freq=inst1, normalize=True)
inst2 = WeekOfMonth(weekday=1, week=2)
- self.assertFalse(inst2 in _daterange_cache)
+ assert inst2 not in _daterange_cache
class TestReprNames(tm.TestCase):
diff --git a/pandas/tests/tseries/test_timezones.py b/pandas/tests/tseries/test_timezones.py
index 807d6866cbf74..65db858a6ccf1 100644
--- a/pandas/tests/tseries/test_timezones.py
+++ b/pandas/tests/tseries/test_timezones.py
@@ -1284,7 +1284,7 @@ def test_index_equals_with_tz(self):
left = date_range('1/1/2011', periods=100, freq='H', tz='utc')
right = date_range('1/1/2011', periods=100, freq='H', tz='US/Eastern')
- self.assertFalse(left.equals(right))
+ assert not left.equals(right)
def test_tz_localize_naive(self):
rng = date_range('1/1/2011', periods=100, freq='H')
@@ -1627,7 +1627,7 @@ def test_normalize_tz(self):
tm.assert_index_equal(result, expected)
self.assertTrue(result.is_normalized)
- self.assertFalse(rng.is_normalized)
+ assert not rng.is_normalized
rng = date_range('1/1/2000 9:30', periods=10, freq='D', tz='UTC')
@@ -1636,7 +1636,7 @@ def test_normalize_tz(self):
tm.assert_index_equal(result, expected)
self.assertTrue(result.is_normalized)
- self.assertFalse(rng.is_normalized)
+ assert not rng.is_normalized
from dateutil.tz import tzlocal
rng = date_range('1/1/2000 9:30', periods=10, freq='D', tz=tzlocal())
@@ -1645,7 +1645,7 @@ def test_normalize_tz(self):
tm.assert_index_equal(result, expected)
self.assertTrue(result.is_normalized)
- self.assertFalse(rng.is_normalized)
+ assert not rng.is_normalized
def test_normalize_tz_local(self):
# GH 13459
@@ -1665,7 +1665,7 @@ def test_normalize_tz_local(self):
tm.assert_index_equal(result, expected)
self.assertTrue(result.is_normalized)
- self.assertFalse(rng.is_normalized)
+ assert not rng.is_normalized
def test_tzaware_offset(self):
dates = date_range('2012-11-01', periods=3, tz='US/Pacific')
| Title is self-explanatory.
Partially addresses #15990. | https://api.github.com/repos/pandas-dev/pandas/pulls/16151 | 2017-04-26T19:55:56Z | 2017-04-27T00:42:34Z | 2017-04-27T00:42:34Z | 2017-04-27T00:43:23Z |
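The mechanical rewrite in this PR (`self.assertFalse(x)` → `assert not x`, `result is self.panel` → `assert result is not self.panel`) relies on pytest rewriting plain `assert` statements to produce detailed failure messages. A minimal sketch of the pattern — the `is_consolidated` helper below is a toy stand-in for illustration, not the pandas `Panel._data.is_consolidated()` internals:

```python
def is_consolidated(blocks):
    # toy stand-in: treat a block store as "consolidated" when
    # no two blocks share the same dtype
    dtypes = [b["dtype"] for b in blocks]
    return len(dtypes) == len(set(dtypes))

blocks = [{"dtype": "float64"}, {"dtype": "float64"}]
assert not is_consolidated(blocks)            # was: self.assertFalse(...)
assert is_consolidated([{"dtype": "int64"}])  # was: self.assertTrue(...)
```

Under pytest, a failing bare `assert` still shows the evaluated sub-expressions, which is why the unittest-style wrappers can be dropped without losing diagnostics.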
API: Relax is-file-like conditions | diff --git a/pandas/core/dtypes/inference.py b/pandas/core/dtypes/inference.py
index 66f4d87aa8e33..a5316a83612cb 100644
--- a/pandas/core/dtypes/inference.py
+++ b/pandas/core/dtypes/inference.py
@@ -142,12 +142,8 @@ def is_file_like(obj):
Check if the object is a file-like object.
For objects to be considered file-like, they must
- be an iterator AND have the following four methods:
-
- 1) read
- 2) write
- 3) seek
- 4) tell
+ be an iterator AND have either a `read` and/or `write`
+ method as an attribute.
Note: file-like objects must be iterable, but
iterable objects need not be file-like.
@@ -172,11 +168,8 @@ def is_file_like(obj):
False
"""
- file_attrs = ('read', 'write', 'seek', 'tell')
-
- for attr in file_attrs:
- if not hasattr(obj, attr):
- return False
+ if not (hasattr(obj, 'read') or hasattr(obj, 'write')):
+ return False
if not is_iterator(obj):
return False
diff --git a/pandas/tests/dtypes/test_inference.py b/pandas/tests/dtypes/test_inference.py
index 8dcf75e8a1aec..1d3a956829a3c 100644
--- a/pandas/tests/dtypes/test_inference.py
+++ b/pandas/tests/dtypes/test_inference.py
@@ -100,11 +100,41 @@ def test_is_dict_like():
def test_is_file_like():
+ class MockFile(object):
+ pass
+
is_file = inference.is_file_like
data = StringIO("data")
assert is_file(data)
+ # No read / write attributes
+ # No iterator attributes
+ m = MockFile()
+ assert not is_file(m)
+
+ MockFile.write = lambda self: 0
+
+ # Write attribute but not an iterator
+ m = MockFile()
+ assert not is_file(m)
+
+ MockFile.__iter__ = lambda self: self
+ MockFile.__next__ = lambda self: 0
+ MockFile.next = MockFile.__next__
+
+ # Valid write-only file
+ m = MockFile()
+ assert is_file(m)
+
+ del MockFile.write
+ MockFile.read = lambda self: 0
+
+ # Valid read-only file
+ m = MockFile()
+ assert is_file(m)
+
+ # Iterator but no read / write attributes
data = [1, 2, 3]
assert not is_file(data)
diff --git a/pandas/tests/io/parser/common.py b/pandas/tests/io/parser/common.py
index afb23f540264e..e3df02a948080 100644
--- a/pandas/tests/io/parser/common.py
+++ b/pandas/tests/io/parser/common.py
@@ -1685,6 +1685,26 @@ class InvalidBuffer(object):
with tm.assert_raises_regex(ValueError, msg):
self.read_csv(InvalidBuffer())
+ # gh-16135: we want to ensure that "tell" and "seek"
+ # aren't actually being used when we call `read_csv`
+ #
+ # Thus, while the object may look "invalid" (these
+ # methods are attributes of the `StringIO` class),
+ # it is still a valid file-object for our purposes.
+ class NoSeekTellBuffer(StringIO):
+ def tell(self):
+ raise AttributeError("No tell method")
+
+ def seek(self, pos, whence=0):
+ raise AttributeError("No seek method")
+
+ data = "a\n1"
+
+ expected = pd.DataFrame({"a": [1]})
+ result = self.read_csv(NoSeekTellBuffer(data))
+
+ tm.assert_frame_equal(result, expected)
+
if PY3:
from unittest import mock
diff --git a/pandas/tests/io/parser/test_network.py b/pandas/tests/io/parser/test_network.py
index b9920983856d4..e3a1b42fd4d45 100644
--- a/pandas/tests/io/parser/test_network.py
+++ b/pandas/tests/io/parser/test_network.py
@@ -176,3 +176,22 @@ def test_s3_fails(self):
# It's irrelevant here that this isn't actually a table.
with pytest.raises(IOError):
read_csv('s3://cant_get_it/')
+
+ @tm.network
+ def boto3_client_s3(self):
+ # see gh-16135
+
+ # boto3 is a dependency of s3fs
+ import boto3
+ client = boto3.client("s3")
+
+ key = "/tips.csv"
+ bucket = "pandas-test"
+ s3_object = client.get_object(Bucket=bucket, Key=key)
+
+ result = read_csv(s3_object["Body"])
+ assert isinstance(result, DataFrame)
+ assert not result.empty
+
+ expected = read_csv(tm.get_data_path('tips.csv'))
+ tm.assert_frame_equal(result, expected)
| Previously, we required that all file-like objects have "read," "write," "seek," and "tell" methods,
but that was too strict (e.g. for read-only buffers). This commit relaxes those requirements: an object now needs EITHER a "read" or a "write" attribute (and must still be an iterator).
Closes #16135. | https://api.github.com/repos/pandas-dev/pandas/pulls/16150 | 2017-04-26T17:55:19Z | 2017-04-27T09:59:57Z | 2017-04-27T09:59:57Z | 2017-04-27T13:41:58Z |
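The relaxed check can be sketched as follows. This is a simplified stand-in, not the real implementation (which lives in `pandas.core.dtypes.inference.is_file_like` and uses pandas' own `is_iterator` helper):

```python
from io import StringIO

def is_file_like(obj):
    # relaxed condition: EITHER a read or a write attribute,
    # instead of requiring read, write, seek, AND tell
    if not (hasattr(obj, "read") or hasattr(obj, "write")):
        return False
    # still must be an iterator (simplified stand-in for is_iterator)
    if not hasattr(obj, "__next__"):
        return False
    return True

assert is_file_like(StringIO("data"))  # readable buffer passes
assert not is_file_like([1, 2, 3])     # iterable, but not file-like
```

This is what lets read-only objects such as `boto3` S3 response bodies — which lack usable `seek`/`tell` — be passed to `read_csv`.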
Support more styles for xlsxwriter | diff --git a/doc/source/style.ipynb b/doc/source/style.ipynb
index 1d6ce163cf977..a78595beabf1d 100644
--- a/doc/source/style.ipynb
+++ b/doc/source/style.ipynb
@@ -935,7 +935,7 @@
"\n",
"<span style=\"color: red\">*Experimental: This is a new feature and still under development. We'll be adding features and possibly making breaking changes in future releases. We'd love to hear your feedback.*</span>\n",
"\n",
- "Some support is available for exporting styled `DataFrames` to Excel worksheets using the `OpenPyXL` engine. CSS2.2 properties handled include:\n",
+ "Some support is available for exporting styled `DataFrames` to Excel worksheets using the `OpenPyXL` or `XlsxWriter` engines. CSS2.2 properties handled include:\n",
"\n",
"- `background-color`\n",
"- `border-style`, `border-width`, `border-color` and their {`top`, `right`, `bottom`, `left` variants}\n",
diff --git a/doc/source/whatsnew/v0.22.0.txt b/doc/source/whatsnew/v0.22.0.txt
index cbd094ec4ef49..a4c7fcb3d29e5 100644
--- a/doc/source/whatsnew/v0.22.0.txt
+++ b/doc/source/whatsnew/v0.22.0.txt
@@ -22,7 +22,7 @@ New features
Other Enhancements
^^^^^^^^^^^^^^^^^^
--
+- Better support for ``DataFrame.style.to_excel()`` output with the ``xlsxwriter`` engine. (:issue:`16149`)
-
-
diff --git a/pandas/io/excel.py b/pandas/io/excel.py
index c8d0e42a022ba..fec916dc52d20 100644
--- a/pandas/io/excel.py
+++ b/pandas/io/excel.py
@@ -1578,6 +1578,149 @@ def _convert_to_style(cls, style_dict, num_format_str=None):
register_writer(_XlwtWriter)
+class _XlsxStyler(object):
+ # Map from openpyxl-oriented styles to flatter xlsxwriter representation
+ # Ordering necessary for both determinism and because some are keyed by
+ # prefixes of others.
+ STYLE_MAPPING = {
+ 'font': [
+ (('name',), 'font_name'),
+ (('sz',), 'font_size'),
+ (('size',), 'font_size'),
+ (('color', 'rgb',), 'font_color'),
+ (('color',), 'font_color'),
+ (('b',), 'bold'),
+ (('bold',), 'bold'),
+ (('i',), 'italic'),
+ (('italic',), 'italic'),
+ (('u',), 'underline'),
+ (('underline',), 'underline'),
+ (('strike',), 'font_strikeout'),
+ (('vertAlign',), 'font_script'),
+ (('vertalign',), 'font_script'),
+ ],
+ 'number_format': [
+ (('format_code',), 'num_format'),
+ ((), 'num_format',),
+ ],
+ 'protection': [
+ (('locked',), 'locked'),
+ (('hidden',), 'hidden'),
+ ],
+ 'alignment': [
+ (('horizontal',), 'align'),
+ (('vertical',), 'valign'),
+ (('text_rotation',), 'rotation'),
+ (('wrap_text',), 'text_wrap'),
+ (('indent',), 'indent'),
+ (('shrink_to_fit',), 'shrink'),
+ ],
+ 'fill': [
+ (('patternType',), 'pattern'),
+ (('patterntype',), 'pattern'),
+ (('fill_type',), 'pattern'),
+ (('start_color', 'rgb',), 'fg_color'),
+ (('fgColor', 'rgb',), 'fg_color'),
+ (('fgcolor', 'rgb',), 'fg_color'),
+ (('start_color',), 'fg_color'),
+ (('fgColor',), 'fg_color'),
+ (('fgcolor',), 'fg_color'),
+ (('end_color', 'rgb',), 'bg_color'),
+ (('bgColor', 'rgb',), 'bg_color'),
+ (('bgcolor', 'rgb',), 'bg_color'),
+ (('end_color',), 'bg_color'),
+ (('bgColor',), 'bg_color'),
+ (('bgcolor',), 'bg_color'),
+ ],
+ 'border': [
+ (('color', 'rgb',), 'border_color'),
+ (('color',), 'border_color'),
+ (('style',), 'border'),
+ (('top', 'color', 'rgb',), 'top_color'),
+ (('top', 'color',), 'top_color'),
+ (('top', 'style',), 'top'),
+ (('top',), 'top'),
+ (('right', 'color', 'rgb',), 'right_color'),
+ (('right', 'color',), 'right_color'),
+ (('right', 'style',), 'right'),
+ (('right',), 'right'),
+ (('bottom', 'color', 'rgb',), 'bottom_color'),
+ (('bottom', 'color',), 'bottom_color'),
+ (('bottom', 'style',), 'bottom'),
+ (('bottom',), 'bottom'),
+ (('left', 'color', 'rgb',), 'left_color'),
+ (('left', 'color',), 'left_color'),
+ (('left', 'style',), 'left'),
+ (('left',), 'left'),
+ ],
+ }
+
+ @classmethod
+ def convert(cls, style_dict, num_format_str=None):
+ """
+ converts a style_dict to an xlsxwriter format dict
+
+ Parameters
+ ----------
+ style_dict: style dictionary to convert
+ num_format_str: optional number format string
+ """
+
+ # Create a XlsxWriter format object.
+ props = {}
+
+ if num_format_str is not None:
+ props['num_format'] = num_format_str
+
+ if style_dict is None:
+ return props
+
+ if 'borders' in style_dict:
+ style_dict = style_dict.copy()
+ style_dict['border'] = style_dict.pop('borders')
+
+ for style_group_key, style_group in style_dict.items():
+ for src, dst in cls.STYLE_MAPPING.get(style_group_key, []):
+ # src is a sequence of keys into a nested dict
+ # dst is a flat key
+ if dst in props:
+ continue
+ v = style_group
+ for k in src:
+ try:
+ v = v[k]
+ except (KeyError, TypeError):
+ break
+ else:
+ props[dst] = v
+
+ if isinstance(props.get('pattern'), string_types):
+ # TODO: support other fill patterns
+ props['pattern'] = 0 if props['pattern'] == 'none' else 1
+
+ for k in ['border', 'top', 'right', 'bottom', 'left']:
+ if isinstance(props.get(k), string_types):
+ try:
+ props[k] = ['none', 'thin', 'medium', 'dashed', 'dotted',
+ 'thick', 'double', 'hair', 'mediumDashed',
+ 'dashDot', 'mediumDashDot', 'dashDotDot',
+ 'mediumDashDotDot', 'slantDashDot'].\
+ index(props[k])
+ except ValueError:
+ props[k] = 2
+
+ if isinstance(props.get('font_script'), string_types):
+ props['font_script'] = ['baseline', 'superscript', 'subscript'].\
+ index(props['font_script'])
+
+ if isinstance(props.get('underline'), string_types):
+ props['underline'] = {'none': 0, 'single': 1, 'double': 2,
+ 'singleAccounting': 33,
+ 'doubleAccounting': 34}[props['underline']]
+
+ return props
+
+
class _XlsxWriter(ExcelWriter):
engine = 'xlsxwriter'
supported_extensions = ('.xlsx',)
@@ -1612,7 +1755,7 @@ def write_cells(self, cells, sheet_name=None, startrow=0, startcol=0,
wks = self.book.add_worksheet(sheet_name)
self.sheets[sheet_name] = wks
- style_dict = {}
+ style_dict = {'null': None}
if _validate_freeze_panes(freeze_panes):
wks.freeze_panes(*(freeze_panes))
@@ -1633,7 +1776,8 @@ def write_cells(self, cells, sheet_name=None, startrow=0, startcol=0,
if stylekey in style_dict:
style = style_dict[stylekey]
else:
- style = self._convert_to_style(cell.style, num_format_str)
+ style = self.book.add_format(
+ _XlsxStyler.convert(cell.style, num_format_str))
style_dict[stylekey] = style
if cell.mergestart is not None and cell.mergeend is not None:
@@ -1647,49 +1791,5 @@ def write_cells(self, cells, sheet_name=None, startrow=0, startcol=0,
startcol + cell.col,
val, style)
- def _convert_to_style(self, style_dict, num_format_str=None):
- """
- converts a style_dict to an xlsxwriter format object
- Parameters
- ----------
- style_dict: style dictionary to convert
- num_format_str: optional number format string
- """
-
- # If there is no formatting we don't create a format object.
- if num_format_str is None and style_dict is None:
- return None
-
- # Create a XlsxWriter format object.
- xl_format = self.book.add_format()
-
- if num_format_str is not None:
- xl_format.set_num_format(num_format_str)
-
- if style_dict is None:
- return xl_format
-
- # Map the cell font to XlsxWriter font properties.
- if style_dict.get('font'):
- font = style_dict['font']
- if font.get('bold'):
- xl_format.set_bold()
-
- # Map the alignment to XlsxWriter alignment properties.
- alignment = style_dict.get('alignment')
- if alignment:
- if (alignment.get('horizontal') and
- alignment['horizontal'] == 'center'):
- xl_format.set_align('center')
- if (alignment.get('vertical') and
- alignment['vertical'] == 'top'):
- xl_format.set_align('top')
-
- # Map the cell borders to XlsxWriter border properties.
- if style_dict.get('borders'):
- xl_format.set_border()
-
- return xl_format
-
register_writer(_XlsxWriter)
diff --git a/pandas/tests/io/test_excel.py b/pandas/tests/io/test_excel.py
index 7af8bd12ca805..d33136a86faad 100644
--- a/pandas/tests/io/test_excel.py
+++ b/pandas/tests/io/test_excel.py
@@ -2476,88 +2476,103 @@ def custom_converter(css):
styled.to_excel(writer, sheet_name='styled')
ExcelFormatter(styled, style_converter=custom_converter).write(
writer, sheet_name='custom')
+ writer.save()
- # For engines other than openpyxl 2, we only smoke test
- if engine != 'openpyxl':
- return
- if not openpyxl_compat.is_compat(major_ver=2):
- pytest.skip('incompatible openpyxl version')
-
- # (1) compare DataFrame.to_excel and Styler.to_excel when unstyled
- n_cells = 0
- for col1, col2 in zip(writer.sheets['frame'].columns,
- writer.sheets['unstyled'].columns):
- assert len(col1) == len(col2)
- for cell1, cell2 in zip(col1, col2):
- assert cell1.value == cell2.value
- assert_equal_style(cell1, cell2)
- n_cells += 1
-
- # ensure iteration actually happened:
- assert n_cells == (10 + 1) * (3 + 1)
-
- # (2) check styling with default converter
- n_cells = 0
- for col1, col2 in zip(writer.sheets['frame'].columns,
- writer.sheets['styled'].columns):
- assert len(col1) == len(col2)
- for cell1, cell2 in zip(col1, col2):
- ref = '%s%d' % (cell2.column, cell2.row)
- # XXX: this isn't as strong a test as ideal; we should
- # differences are exclusive
- if ref == 'B2':
- assert not cell1.font.bold
- assert cell2.font.bold
- elif ref == 'C3':
- assert cell1.font.color.rgb != cell2.font.color.rgb
- assert cell2.font.color.rgb == '000000FF'
- elif ref == 'D4':
- assert cell1.font.underline != cell2.font.underline
- assert cell2.font.underline == 'single'
- elif ref == 'B5':
- assert not cell1.border.left.style
- assert (cell2.border.top.style ==
- cell2.border.right.style ==
- cell2.border.bottom.style ==
- cell2.border.left.style ==
- 'medium')
- elif ref == 'C6':
- assert not cell1.font.italic
- assert cell2.font.italic
- elif ref == 'D7':
- assert (cell1.alignment.horizontal !=
- cell2.alignment.horizontal)
- assert cell2.alignment.horizontal == 'right'
- elif ref == 'B8':
- assert cell1.fill.fgColor.rgb != cell2.fill.fgColor.rgb
- assert cell1.fill.patternType != cell2.fill.patternType
- assert cell2.fill.fgColor.rgb == '00FF0000'
- assert cell2.fill.patternType == 'solid'
- else:
- assert_equal_style(cell1, cell2)
-
- assert cell1.value == cell2.value
- n_cells += 1
-
- assert n_cells == (10 + 1) * (3 + 1)
-
- # (3) check styling with custom converter
- n_cells = 0
- for col1, col2 in zip(writer.sheets['frame'].columns,
- writer.sheets['custom'].columns):
- assert len(col1) == len(col2)
- for cell1, cell2 in zip(col1, col2):
- ref = '%s%d' % (cell2.column, cell2.row)
- if ref in ('B2', 'C3', 'D4', 'B5', 'C6', 'D7', 'B8'):
- assert not cell1.font.bold
- assert cell2.font.bold
- else:
- assert_equal_style(cell1, cell2)
+ if engine not in ('openpyxl', 'xlsxwriter'):
+ # For other engines, we only smoke test
+ return
+ openpyxl = pytest.importorskip('openpyxl')
+ if not openpyxl_compat.is_compat(major_ver=2):
+ pytest.skip('incompatible openpyxl version')
- assert cell1.value == cell2.value
- n_cells += 1
+ wb = openpyxl.load_workbook(path)
- assert n_cells == (10 + 1) * (3 + 1)
+ # (1) compare DataFrame.to_excel and Styler.to_excel when unstyled
+ n_cells = 0
+ for col1, col2 in zip(wb['frame'].columns,
+ wb['unstyled'].columns):
+ assert len(col1) == len(col2)
+ for cell1, cell2 in zip(col1, col2):
+ assert cell1.value == cell2.value
+ assert_equal_style(cell1, cell2)
+ n_cells += 1
+
+ # ensure iteration actually happened:
+ assert n_cells == (10 + 1) * (3 + 1)
+
+ # (2) check styling with default converter
+
+ # XXX: openpyxl (as at 2.4) prefixes colors with 00, xlsxwriter with FF
+ alpha = '00' if engine == 'openpyxl' else 'FF'
+
+ n_cells = 0
+ for col1, col2 in zip(wb['frame'].columns,
+ wb['styled'].columns):
+ assert len(col1) == len(col2)
+ for cell1, cell2 in zip(col1, col2):
+ ref = '%s%d' % (cell2.column, cell2.row)
+ # XXX: this isn't as strong a test as ideal; we should
+ # confirm that differences are exclusive
+ if ref == 'B2':
+ assert not cell1.font.bold
+ assert cell2.font.bold
+ elif ref == 'C3':
+ assert cell1.font.color.rgb != cell2.font.color.rgb
+ assert cell2.font.color.rgb == alpha + '0000FF'
+ elif ref == 'D4':
+ # This fails with engine=xlsxwriter due to
+ # https://bitbucket.org/openpyxl/openpyxl/issues/800
+ if engine == 'xlsxwriter' \
+ and (LooseVersion(openpyxl.__version__) <
+ LooseVersion('2.4.6')):
+ pass
+ else:
+ assert cell1.font.underline != cell2.font.underline
+ assert cell2.font.underline == 'single'
+ elif ref == 'B5':
+ assert not cell1.border.left.style
+ assert (cell2.border.top.style ==
+ cell2.border.right.style ==
+ cell2.border.bottom.style ==
+ cell2.border.left.style ==
+ 'medium')
+ elif ref == 'C6':
+ assert not cell1.font.italic
+ assert cell2.font.italic
+ elif ref == 'D7':
+ assert (cell1.alignment.horizontal !=
+ cell2.alignment.horizontal)
+ assert cell2.alignment.horizontal == 'right'
+ elif ref == 'B8':
+ assert cell1.fill.fgColor.rgb != cell2.fill.fgColor.rgb
+ assert cell1.fill.patternType != cell2.fill.patternType
+ assert cell2.fill.fgColor.rgb == alpha + 'FF0000'
+ assert cell2.fill.patternType == 'solid'
+ else:
+ assert_equal_style(cell1, cell2)
+
+ assert cell1.value == cell2.value
+ n_cells += 1
+
+ assert n_cells == (10 + 1) * (3 + 1)
+
+ # (3) check styling with custom converter
+ n_cells = 0
+ for col1, col2 in zip(wb['frame'].columns,
+ wb['custom'].columns):
+ assert len(col1) == len(col2)
+ for cell1, cell2 in zip(col1, col2):
+ ref = '%s%d' % (cell2.column, cell2.row)
+ if ref in ('B2', 'C3', 'D4', 'B5', 'C6', 'D7', 'B8'):
+ assert not cell1.font.bold
+ assert cell2.font.bold
+ else:
+ assert_equal_style(cell1, cell2)
+
+ assert cell1.value == cell2.value
+ n_cells += 1
+
+ assert n_cells == (10 + 1) * (3 + 1)
class TestFSPath(object):
| I was surprised to find that despite the interchangeable representation of Excel styles, xlsxwriter did not have good style support.
I've not added direct tests for this functionality, but some of it is exercised through `test_styler_to_excel`.
- [x] ~~closes #xxxx~~
- [x] tests added / passed
- [x] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff``
- [x] whatsnew entry | https://api.github.com/repos/pandas-dev/pandas/pulls/16149 | 2017-04-26T16:45:44Z | 2017-10-31T00:34:37Z | 2017-10-31T00:34:37Z | 2017-10-31T00:34:48Z |
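The core of the new `_XlsxStyler.convert` is a nested-key flattening: each `(src_path, dst_key)` pair walks the openpyxl-style nested dict and, if the path resolves, emits a flat xlsxwriter property, with the first matching path winning. A minimal sketch using an illustrative subset of the mapping (not the full table from the PR):

```python
# illustrative subset of the (nested path -> flat key) mapping
STYLE_MAPPING = {
    "font": [(("color", "rgb"), "font_color"),
             (("color",), "font_color"),
             (("bold",), "bold")],
}

def convert(style_dict):
    props = {}
    for group_key, group in style_dict.items():
        for src, dst in STYLE_MAPPING.get(group_key, []):
            if dst in props:  # first match wins (ordering matters)
                continue
            v = group
            for k in src:
                try:
                    v = v[k]  # walk one level of the nested dict
                except (KeyError, TypeError):
                    break     # path does not resolve; try next mapping
            else:
                props[dst] = v
    return props

print(convert({"font": {"color": {"rgb": "FF0000"}, "bold": True}}))
# -> {'font_color': 'FF0000', 'bold': True}
```

Because `(('color', 'rgb'),)` is tried before `(('color',),)`, a nested `{"color": {"rgb": ...}}` and a flat `{"color": ...}` both resolve to `font_color`, which is why the mapping's ordering is load-bearing.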
MAINT: Remove vestigial assertRaisesRegexp | diff --git a/pandas/tests/dtypes/test_common.py b/pandas/tests/dtypes/test_common.py
index c4ef5e48b4db9..2aad1b6baaac0 100644
--- a/pandas/tests/dtypes/test_common.py
+++ b/pandas/tests/dtypes/test_common.py
@@ -20,7 +20,7 @@ def test_invalid_dtype_error(self):
msg = 'not understood'
invalid_list = [pd.Timestamp, 'pd.Timestamp', list]
for dtype in invalid_list:
- with tm.assertRaisesRegexp(TypeError, msg):
+ with tm.assert_raises_regex(TypeError, msg):
pandas_dtype(dtype)
valid_list = [object, 'float64', np.object_, np.dtype('object'), 'O',
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index c461556644275..a870667ff3f96 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -35,7 +35,7 @@ def test_invalid_dtype(self):
msg = 'not understood'
invalid_list = [pd.Timestamp, 'pd.Timestamp', list]
for dtype in invalid_list:
- with tm.assertRaisesRegexp(TypeError, msg):
+ with tm.assert_raises_regex(TypeError, msg):
Series([], name='time', dtype=dtype)
def test_scalar_conversion(self):
diff --git a/pandas/tests/test_base.py b/pandas/tests/test_base.py
index 86343e441f49a..e4f39197421a0 100644
--- a/pandas/tests/test_base.py
+++ b/pandas/tests/test_base.py
@@ -45,7 +45,7 @@ class CheckImmutable(object):
mutable_regex = re.compile('does not support mutable operations')
def check_mutable_error(self, *args, **kwargs):
- # Pass whatever function you normally would to assertRaisesRegexp
+ # Pass whatever function you normally would to assert_raises_regex
# (after the Exception kind).
tm.assert_raises_regex(
TypeError, self.mutable_regex, *args, **kwargs)
| Removes remaining `assertRaisesRegexp` before #16119 got merged.
This must be merged ASAP because `master` will fail otherwise.
@jreback
@jorisvandenbossche | https://api.github.com/repos/pandas-dev/pandas/pulls/16148 | 2017-04-26T15:49:36Z | 2017-04-26T16:26:33Z | 2017-04-26T16:26:33Z | 2017-04-26T19:20:36Z |
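The rename above swaps the old `assertRaisesRegexp` name for `assert_raises_regex` at every call site. The pandas helper lives in `pandas.util.testing`; as a rough illustration of what such a helper does, here is a stdlib-only sketch (the function body is an assumption, not pandas' actual implementation):

```python
import re
from contextlib import contextmanager


@contextmanager
def assert_raises_regex(exc_type, pattern):
    """Assert the block raises `exc_type` with a message matching `pattern`."""
    try:
        yield
    except exc_type as err:
        if not re.search(pattern, str(err)):
            # right exception type, wrong message
            raise AssertionError(
                f"{exc_type.__name__} raised, but {str(err)!r} "
                f"does not match pattern {pattern!r}"
            )
    else:
        # the block completed without raising at all
        raise AssertionError(f"{exc_type.__name__} was not raised")
```

Usage mirrors the diffs above, e.g. `with assert_raises_regex(TypeError, 'not understood'): pandas_dtype(dtype)`.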
DEPR: provide deprecations and exposure for NaTType | diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
index 025ac7673622b..b6feb5cf8cedd 100644
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -1246,7 +1246,7 @@ these are now the public subpackages.
- The function :func:`~pandas.api.types.union_categoricals` is now importable from ``pandas.api.types``, formerly from ``pandas.types.concat`` (:issue:`15998`)
-
+- The type import ``pandas.tslib.NaTType`` is deprecated and can be replaced by using ``type(pandas.NaT)`` (:issue:`16146`)
.. _whatsnew_0200.privacy.errors:
diff --git a/pandas/__init__.py b/pandas/__init__.py
index 43fa362b66ed5..20c7e0d9d5993 100644
--- a/pandas/__init__.py
+++ b/pandas/__init__.py
@@ -77,6 +77,7 @@
moved={'Timestamp': 'pandas.Timestamp',
'Timedelta': 'pandas.Timedelta',
'NaT': 'pandas.NaT',
+ 'NaTType': 'type(pandas.NaT)',
'OutOfBoundsDatetime': 'pandas.errors.OutOfBoundsDatetime'})
# use the closest tagged version if possible
diff --git a/pandas/tslib.py b/pandas/tslib.py
index f7d99538c2ea2..c960a4eaf59ad 100644
--- a/pandas/tslib.py
+++ b/pandas/tslib.py
@@ -4,4 +4,4 @@
warnings.warn("The pandas.tslib module is deprecated and will be "
"removed in a future version.", FutureWarning, stacklevel=2)
from pandas._libs.tslib import (Timestamp, Timedelta,
- NaT, OutOfBoundsDatetime)
+ NaT, NaTType, OutOfBoundsDatetime)
| xref #16137 | https://api.github.com/repos/pandas-dev/pandas/pulls/16146 | 2017-04-26T13:43:03Z | 2017-04-27T21:27:54Z | 2017-04-27T21:27:54Z | 2017-04-27T22:12:14Z |
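The `moved` dict in `pandas/__init__.py` drives attribute-level deprecation warnings for the old `pandas.tslib` names. A minimal sketch of that pattern, using a `ModuleType` subclass with `__getattr__` (the class and `_NaT` stand-in are hypothetical, not pandas' actual machinery):

```python
import warnings
from types import ModuleType


class DeprecatedModule(ModuleType):
    """Module wrapper that warns when a moved attribute is accessed."""

    def __init__(self, name, moved):
        super().__init__(name)
        # attr name -> (replacement hint, actual object)
        self._moved = moved

    def __getattr__(self, item):
        # called only when normal attribute lookup fails
        if item in self._moved:
            hint, value = self._moved[item]
            warnings.warn(
                f"{self.__name__}.{item} is deprecated; use {hint} instead.",
                FutureWarning,
                stacklevel=2,
            )
            return value
        raise AttributeError(f"module {self.__name__!r} has no attribute {item!r}")


class _NaT:  # hypothetical stand-in for pandas.NaT
    pass


NaT = _NaT()
shim = DeprecatedModule("tslib_shim", {"NaTType": ("type(pandas.NaT)", type(NaT))})
```

Accessing `shim.NaTType` then emits a `FutureWarning` and forwards to `type(NaT)`, which is exactly the replacement the whatsnew entry recommends.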
DOC: Fix table styling in main docs | diff --git a/doc/source/themes/nature_with_gtoc/static/nature.css_t b/doc/source/themes/nature_with_gtoc/static/nature.css_t
index 1adaaf58d79c5..b61068ee28bef 100644
--- a/doc/source/themes/nature_with_gtoc/static/nature.css_t
+++ b/doc/source/themes/nature_with_gtoc/static/nature.css_t
@@ -315,7 +315,6 @@ thead {
vertical-align: bottom;
}
tr, th, td {
- text-align: right;
vertical-align: middle;
padding: 0.5em 0.5em;
line-height: normal;
@@ -326,6 +325,9 @@ tr, th, td {
th {
font-weight: bold;
}
+th.col_heading {
+ text-align: right;
+}
tbody tr:nth-child(odd) {
background: #f5f5f5;
}
| When switching to nbsphinx, I modified the site's CSS so
that the converted notebook looks decent. This had some
unfortunate side effects on tables elsewhere in the documentation.
This change fixes the headers to be left-aligned in the main site,
and right-aligned for the tables generated by `df.style` in the
nbsphinx-converted notebook.
xref https://github.com/pandas-dev/pandas/pull/15581/
Before:
<img width="801" alt="screen shot 2017-04-26 at 7 11 19 am" src="https://cloud.githubusercontent.com/assets/1312546/25437376/c568c4b2-2a5b-11e7-9bb0-f524d87d3813.png">
After:

| https://api.github.com/repos/pandas-dev/pandas/pulls/16145 | 2017-04-26T13:39:02Z | 2017-04-26T16:11:05Z | 2017-04-26T16:11:05Z | 2017-05-01T15:50:17Z |
DOC: fix some typos | diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
index c9c22de9141fe..0b66b90afec67 100644
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -1241,7 +1241,7 @@ If indicated, a deprecation warning will be issued if you reference theses modul
Some new subpackages are created with public functionality that is not directly
exposed in the top-level namespace: ``pandas.errors``, ``pandas.plotting`` and
``pandas.testing`` (more details below). Together with ``pandas.api.types`` and
-certain functions in the ``pandas.io`` and ``pandas.tseries`` submodules,
+certain functions in the ``pandas.io`` and ``pandas.tseries`` submodules,
these are now the public subpackages.
@@ -1276,7 +1276,7 @@ The following are now part of this API:
``pandas.testing``
^^^^^^^^^^^^^^^^^^
-We are adding a standard module that exposes the public testing functions in ``pandas.testing`` (:issue:`9895`. Those functions can be used when writing tests for functionality using pandas objects.
+We are adding a standard module that exposes the public testing functions in ``pandas.testing`` (:issue:`9895`). Those functions can be used when writing tests for functionality using pandas objects.
The following testing functions are now part of this API:
@@ -1295,13 +1295,14 @@ A new public ``pandas.plotting`` module has been added that holds plotting funct
.. _whatsnew_0200.privacy.development:
-Other Developement Changes
-^^^^^^^^^^^^^^^^^^^^^^^^^^
+Other Development Changes
+^^^^^^^^^^^^^^^^^^^^^^^^^
- Building pandas for development now requires ``cython >= 0.23`` (:issue:`14831`)
- Require at least 0.23 version of cython to avoid problems with character encodings (:issue:`14699`)
-- Reorganization of timeseries tests (:issue:`14854`)
-- Reorganization of date converter tests (:issue:`15707`)
+- Switched the test framework to use `pytest <http://doc.pytest.org/en/latest>`__ (:issue:`13097`)
+- Reorganization of tests directory layout (:issue:`14854`, :issue:`15707`).
+
.. _whatsnew_0200.deprecations:
diff --git a/pandas/errors/__init__.py b/pandas/errors/__init__.py
index 8540d8776fbaa..9b6c9c5be319c 100644
--- a/pandas/errors/__init__.py
+++ b/pandas/errors/__init__.py
@@ -59,7 +59,7 @@ class ParserWarning(Warning):
class UnserializableWarning(Warning):
"""
- Warnng that is raised when a DataFrame cannot be serialzed.
+ Warning that is raised when a DataFrame cannot be serialized.
.. versionadded:: 0.20.0
"""
| Some corrections based on the comments of @crayxt | https://api.github.com/repos/pandas-dev/pandas/pulls/16144 | 2017-04-26T09:11:56Z | 2017-04-26T09:12:03Z | 2017-04-26T09:12:03Z | 2017-04-26T09:12:41Z |
ENH 14194: add style option for hiding index and columns | diff --git a/doc/source/style.ipynb b/doc/source/style.ipynb
index a78595beabf1d..20f7c2a93b9e6 100644
--- a/doc/source/style.ipynb
+++ b/doc/source/style.ipynb
@@ -674,13 +674,14 @@
"- precision\n",
"- captions\n",
"- table-wide styles\n",
+ "- hiding the index or columns\n",
"\n",
"Each of these can be specified in two ways:\n",
"\n",
"- A keyword argument to `Styler.__init__`\n",
- "- A call to one of the `.set_` methods, e.g. `.set_caption`\n",
+ "- A call to one of the `.set_` or `.hide_` methods, e.g. `.set_caption` or `.hide_columns`\n",
"\n",
- "The best method to use depends on the context. Use the `Styler` constructor when building many styled DataFrames that should all share the same properties. For interactive use, the`.set_` methods are more convenient."
+ "The best method to use depends on the context. Use the `Styler` constructor when building many styled DataFrames that should all share the same properties. For interactive use, the `.set_` and `.hide_` methods are more convenient."
]
},
{
@@ -814,6 +815,38 @@
"We hope to collect some useful ones either in pandas, or preferable in a new package that [builds on top](#Extensibility) the tools here."
]
},
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Hiding the Index or Columns"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The index can be hidden from rendering by calling `Styler.hide_index`. Columns can be hidden from rendering by calling `Styler.hide_columns` and passing in the name of a column, or a slice of columns."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "df.style.hide_index()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "df.style.hide_columns(['C','D'])"
+ ]
+ },
{
"cell_type": "markdown",
"metadata": {},
@@ -875,7 +908,9 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "collapsed": true
+ },
"outputs": [],
"source": [
"from IPython.html import widgets\n",
@@ -911,7 +946,9 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "collapsed": true
+ },
"outputs": [],
"source": [
"np.random.seed(25)\n",
@@ -1010,7 +1047,9 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "collapsed": true
+ },
"outputs": [],
"source": [
"%mkdir templates"
@@ -1027,7 +1066,9 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "collapsed": true
+ },
"outputs": [],
"source": [
"%%file templates/myhtml.tpl\n",
@@ -1078,7 +1119,9 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "collapsed": true
+ },
"outputs": [],
"source": [
"MyStyler(df)"
@@ -1094,7 +1137,9 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "collapsed": true
+ },
"outputs": [],
"source": [
"HTML(MyStyler(df).render(table_title=\"Extending Example\"))"
@@ -1110,7 +1155,9 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "collapsed": true
+ },
"outputs": [],
"source": [
"EasyStyler = Styler.from_custom_template(\"templates\", \"myhtml.tpl\")\n",
@@ -1127,7 +1174,9 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "collapsed": true
+ },
"outputs": [],
"source": [
"with open(\"template_structure.html\") as f:\n",
@@ -1147,6 +1196,7 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
+ "collapsed": true,
"nbsphinx": "hidden"
},
"outputs": [],
@@ -1163,7 +1213,7 @@
],
"metadata": {
"kernelspec": {
- "display_name": "Python 3",
+ "display_name": "Python [default]",
"language": "python",
"name": "python3"
},
@@ -1177,7 +1227,14 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.1"
+ "version": "3.5.3"
+ },
+ "widgets": {
+ "application/vnd.jupyter.widget-state+json": {
+ "state": {},
+ "version_major": 1,
+ "version_minor": 0
+ }
}
},
"nbformat": 4,
diff --git a/doc/source/whatsnew/v0.22.0.txt b/doc/source/whatsnew/v0.22.0.txt
index 8afdd1b2e22b3..a583dde29c9ed 100644
--- a/doc/source/whatsnew/v0.22.0.txt
+++ b/doc/source/whatsnew/v0.22.0.txt
@@ -24,7 +24,8 @@ Other Enhancements
- Better support for :func:`Dataframe.style.to_excel` output with the ``xlsxwriter`` engine. (:issue:`16149`)
- :func:`pandas.tseries.frequencies.to_offset` now accepts leading '+' signs e.g. '+1h'. (:issue:`18171`)
--
+- :class:`pandas.io.formats.style.Styler` now has method ``hide_index()`` to determine whether the index will be rendered in output (:issue:`14194`)
+- :class:`pandas.io.formats.style.Styler` now has method ``hide_columns()`` to determine whether columns will be hidden in output (:issue:`14194`)
.. _whatsnew_0220.api_breaking:
diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index 776669d6d28db..4fab8c5c3bde5 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -133,6 +133,9 @@ def __init__(self, data, precision=None, table_styles=None, uuid=None,
precision = get_option('display.precision')
self.precision = precision
self.table_attributes = table_attributes
+ self.hidden_index = False
+ self.hidden_columns = []
+
# display_funcs maps (row, col) -> formatting function
def default_display_func(x):
@@ -180,6 +183,8 @@ def _translate(self):
caption = self.caption
ctx = self.ctx
precision = self.precision
+ hidden_index = self.hidden_index
+ hidden_columns = self.hidden_columns
uuid = self.uuid or str(uuid1()).replace("-", "_")
ROW_HEADING_CLASS = "row_heading"
COL_HEADING_CLASS = "col_heading"
@@ -194,7 +199,7 @@ def format_attr(pair):
# for sparsifying a MultiIndex
idx_lengths = _get_level_lengths(self.index)
- col_lengths = _get_level_lengths(self.columns)
+ col_lengths = _get_level_lengths(self.columns, hidden_columns)
cell_context = dict()
@@ -217,7 +222,7 @@ def format_attr(pair):
row_es = [{"type": "th",
"value": BLANK_VALUE,
"display_value": BLANK_VALUE,
- "is_visible": True,
+ "is_visible": not hidden_index,
"class": " ".join([BLANK_CLASS])}] * (n_rlvls - 1)
# ... except maybe the last for columns.names
@@ -229,7 +234,7 @@ def format_attr(pair):
"value": name,
"display_value": name,
"class": " ".join(cs),
- "is_visible": True})
+ "is_visible": not hidden_index})
if clabels:
for c, value in enumerate(clabels[r]):
@@ -252,7 +257,8 @@ def format_attr(pair):
row_es.append(es)
head.append(row_es)
- if self.data.index.names and _any_not_none(*self.data.index.names):
+ if (self.data.index.names and _any_not_none(*self.data.index.names) and
+ not hidden_index):
index_header_row = []
for c, name in enumerate(self.data.index.names):
@@ -266,7 +272,7 @@ def format_attr(pair):
[{"type": "th",
"value": BLANK_VALUE,
"class": " ".join([BLANK_CLASS])
- }] * len(clabels[0]))
+ }] * (len(clabels[0]) - len(hidden_columns)))
head.append(index_header_row)
@@ -278,7 +284,8 @@ def format_attr(pair):
"row{row}".format(row=r)]
es = {
"type": "th",
- "is_visible": _is_visible(r, c, idx_lengths),
+ "is_visible": (_is_visible(r, c, idx_lengths) and
+ not hidden_index),
"value": value,
"display_value": value,
"id": "_".join(rid[1:]),
@@ -302,7 +309,8 @@ def format_attr(pair):
"value": value,
"class": " ".join(cs),
"id": "_".join(cs[1:]),
- "display_value": formatter(value)
+ "display_value": formatter(value),
+ "is_visible": (c not in hidden_columns)
})
props = []
for x in ctx[r, c]:
@@ -742,7 +750,7 @@ def set_uuid(self, uuid):
def set_caption(self, caption):
"""
- Se the caption on a Styler
+ Set the caption on a Styler
Parameters
----------
@@ -784,6 +792,40 @@ def set_table_styles(self, table_styles):
self.table_styles = table_styles
return self
+ def hide_index(self):
+ """
+ Hide any indices from rendering.
+
+ .. versionadded:: 0.22.0
+
+ Returns
+ -------
+ self : Styler
+ """
+ self.hidden_index = True
+ return self
+
+ def hide_columns(self, subset):
+ """
+ Hide columns from rendering.
+
+ .. versionadded:: 0.22.0
+
+ Parameters
+ ----------
+ subset: IndexSlice
+ An argument to ``DataFrame.loc`` that identifies which columns
+ are hidden.
+
+ Returns
+ -------
+ self : Styler
+ """
+ subset = _non_reducing_slice(subset)
+ hidden_df = self.data.loc[subset]
+ self.hidden_columns = self.columns.get_indexer_for(hidden_df.columns)
+ return self
+
# -----------------------------------------------------------------------
# A collection of "builtin" styles
# -----------------------------------------------------------------------
@@ -1158,31 +1200,48 @@ def _is_visible(idx_row, idx_col, lengths):
return (idx_col, idx_row) in lengths
-def _get_level_lengths(index):
+def _get_level_lengths(index, hidden_elements=None):
"""
Given an index, find the level length for each element.
+ Optional argument is a list of index positions which
+ should not be visible.
Result is a dictionary of (level, initial_position): span
"""
sentinel = sentinel_factory()
levels = index.format(sparsify=sentinel, adjoin=False, names=False)
- if index.nlevels == 1:
- return {(0, i): 1 for i, value in enumerate(levels)}
+ if hidden_elements is None:
+ hidden_elements = []
lengths = {}
+ if index.nlevels == 1:
+ for i, value in enumerate(levels):
+ if(i not in hidden_elements):
+ lengths[(0, i)] = 1
+ return lengths
for i, lvl in enumerate(levels):
for j, row in enumerate(lvl):
if not get_option('display.multi_sparse'):
lengths[(i, j)] = 1
- elif row != sentinel:
+ elif (row != sentinel) and (j not in hidden_elements):
last_label = j
lengths[(i, last_label)] = 1
- else:
+ elif (row != sentinel):
+ # even if it's hidden, keep track of it in case
+ # length > 1 and later elements are visible
+ last_label = j
+ lengths[(i, last_label)] = 0
+ elif(j not in hidden_elements):
lengths[(i, last_label)] += 1
- return lengths
+ non_zero_lengths = {}
+ for element, length in lengths.items():
+ if(length >= 1):
+ non_zero_lengths[element] = length
+
+ return non_zero_lengths
def _maybe_wrap_formatter(formatter):
diff --git a/pandas/tests/io/formats/test_style.py b/pandas/tests/io/formats/test_style.py
index 811381e4cbd2a..62f1f0c39ce8b 100644
--- a/pandas/tests/io/formats/test_style.py
+++ b/pandas/tests/io/formats/test_style.py
@@ -891,6 +891,120 @@ def test_mi_sparse_column_names(self):
]
assert head == expected
+ def test_hide_single_index(self):
+ # GH 14194
+ # single unnamed index
+ ctx = self.df.style._translate()
+ assert ctx['body'][0][0]['is_visible']
+ assert ctx['head'][0][0]['is_visible']
+ ctx2 = self.df.style.hide_index()._translate()
+ assert not ctx2['body'][0][0]['is_visible']
+ assert not ctx2['head'][0][0]['is_visible']
+
+ # single named index
+ ctx3 = self.df.set_index('A').style._translate()
+ assert ctx3['body'][0][0]['is_visible']
+ assert len(ctx3['head']) == 2 # 2 header levels
+ assert ctx3['head'][0][0]['is_visible']
+
+ ctx4 = self.df.set_index('A').style.hide_index()._translate()
+ assert not ctx4['body'][0][0]['is_visible']
+ assert len(ctx4['head']) == 1 # only 1 header level
+ assert not ctx4['head'][0][0]['is_visible']
+
+ def test_hide_multiindex(self):
+ # GH 14194
+ df = pd.DataFrame({'A': [1, 2]}, index=pd.MultiIndex.from_arrays(
+ [['a', 'a'], [0, 1]],
+ names=['idx_level_0', 'idx_level_1'])
+ )
+ ctx1 = df.style._translate()
+ # tests for 'a' and '0'
+ assert ctx1['body'][0][0]['is_visible']
+ assert ctx1['body'][0][1]['is_visible']
+ # check for blank header rows
+ assert ctx1['head'][0][0]['is_visible']
+ assert ctx1['head'][0][1]['is_visible']
+
+ ctx2 = df.style.hide_index()._translate()
+ # tests for 'a' and '0'
+ assert not ctx2['body'][0][0]['is_visible']
+ assert not ctx2['body'][0][1]['is_visible']
+ # check for blank header rows
+ assert not ctx2['head'][0][0]['is_visible']
+ assert not ctx2['head'][0][1]['is_visible']
+
+ def test_hide_columns_single_level(self):
+ # GH 14194
+ # test hiding single column
+ ctx = self.df.style._translate()
+ assert ctx['head'][0][1]['is_visible']
+ assert ctx['head'][0][1]['display_value'] == 'A'
+ assert ctx['head'][0][2]['is_visible']
+ assert ctx['head'][0][2]['display_value'] == 'B'
+ assert ctx['body'][0][1]['is_visible'] # col A, row 1
+ assert ctx['body'][1][2]['is_visible'] # col B, row 1
+
+ ctx = self.df.style.hide_columns('A')._translate()
+ assert not ctx['head'][0][1]['is_visible']
+ assert not ctx['body'][0][1]['is_visible'] # col A, row 1
+ assert ctx['body'][1][2]['is_visible'] # col B, row 1
+
+ # test hiding multiple columns
+ ctx = self.df.style.hide_columns(['A', 'B'])._translate()
+ assert not ctx['head'][0][1]['is_visible']
+ assert not ctx['head'][0][2]['is_visible']
+ assert not ctx['body'][0][1]['is_visible'] # col A, row 1
+ assert not ctx['body'][1][2]['is_visible'] # col B, row 1
+
+ def test_hide_columns_mult_levels(self):
+ # GH 14194
+ # setup dataframe with multiple column levels and indices
+ i1 = pd.MultiIndex.from_arrays([['a', 'a'], [0, 1]],
+ names=['idx_level_0',
+ 'idx_level_1'])
+ i2 = pd.MultiIndex.from_arrays([['b', 'b'], [0, 1]],
+ names=['col_level_0',
+ 'col_level_1'])
+ df = pd.DataFrame([[1, 2], [3, 4]], index=i1, columns=i2)
+ ctx = df.style._translate()
+ # column headers
+ assert ctx['head'][0][2]['is_visible']
+ assert ctx['head'][1][2]['is_visible']
+ assert ctx['head'][1][3]['display_value'] == 1
+ # indices
+ assert ctx['body'][0][0]['is_visible']
+ # data
+ assert ctx['body'][1][2]['is_visible']
+ assert ctx['body'][1][2]['display_value'] == 3
+ assert ctx['body'][1][3]['is_visible']
+ assert ctx['body'][1][3]['display_value'] == 4
+
+ # hide top column level, which hides both columns
+ ctx = df.style.hide_columns('b')._translate()
+ assert not ctx['head'][0][2]['is_visible'] # b
+ assert not ctx['head'][1][2]['is_visible'] # 0
+ assert not ctx['body'][1][2]['is_visible'] # 3
+ assert ctx['body'][0][0]['is_visible'] # index
+
+ # hide first column only
+ ctx = df.style.hide_columns([('b', 0)])._translate()
+ assert ctx['head'][0][2]['is_visible'] # b
+ assert not ctx['head'][1][2]['is_visible'] # 0
+ assert not ctx['body'][1][2]['is_visible'] # 3
+ assert ctx['body'][1][3]['is_visible']
+ assert ctx['body'][1][3]['display_value'] == 4
+
+ # hide second column and index
+ ctx = df.style.hide_columns([('b', 1)]).hide_index()._translate()
+ assert not ctx['body'][0][0]['is_visible'] # index
+ assert ctx['head'][0][2]['is_visible'] # b
+ assert ctx['head'][1][2]['is_visible'] # 0
+ assert not ctx['head'][1][3]['is_visible'] # 1
+ assert not ctx['body'][1][3]['is_visible'] # 4
+ assert ctx['body'][1][2]['is_visible']
+ assert ctx['body'][1][2]['display_value'] == 3
+
class TestStylerMatplotlibDep(object):
| - [x] closes #14194
- [x] tests added / passed
- [x] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff``
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/16141 | 2017-04-26T04:56:04Z | 2017-11-19T16:15:35Z | 2017-11-19T16:15:34Z | 2017-11-19T16:15:39Z |
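The trickiest part of the diff above is `_get_level_lengths`, which computes the colspan for each visible label of a (possibly sparsified) index level while skipping hidden positions. A simplified, stdlib-only sketch of the same bookkeeping, using `None` in place of the sparsify sentinel (the function name and interface are illustrative, not pandas' actual API):

```python
def level_lengths(labels, hidden=()):
    """Compute {start_position: span} for one sparsified level.

    `labels` uses None where a label is elided (it continues the previous
    label); positions listed in `hidden` contribute no width.
    """
    hidden = set(hidden)
    lengths = {}
    last = None
    for j, label in enumerate(labels):
        if label is not None:
            # a new group starts here; a hidden start still anchors the
            # group with zero width, in case later members are visible
            last = j
            lengths[j] = 0 if j in hidden else 1
        elif j not in hidden:
            # visible continuation extends the current group's span
            lengths[last] += 1
    # drop groups whose every position was hidden
    return {start: span for start, span in lengths.items() if span >= 1}
```

For example, `level_lengths(['a', None, 'b'])` gives `{0: 2, 2: 1}`, while hiding position 0 of `['a', None]` still yields a width-1 span anchored at 0, matching the "keep track of it in case later elements are visible" branch in the diff.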
DEPS: sync fastparquet version | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index 2c09ef90375e9..fad734a0e39ad 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -300,7 +300,7 @@ Optional libraries below the lowest tested version may still work, but are not c
+=================+=================+=========+
| beautifulsoup4 | 4.6.0 | |
+-----------------+-----------------+---------+
-| fastparquet | 0.3.2 | |
+| fastparquet | 0.4.0 | X |
+-----------------+-----------------+---------+
| fsspec | 0.7.4 | |
+-----------------+-----------------+---------+
diff --git a/pandas/compat/_optional.py b/pandas/compat/_optional.py
index eb2b4caddb7a6..a26da75d921ef 100644
--- a/pandas/compat/_optional.py
+++ b/pandas/compat/_optional.py
@@ -11,7 +11,7 @@
"bs4": "4.6.0",
"bottleneck": "1.2.1",
"fsspec": "0.7.4",
- "fastparquet": "0.3.2",
+ "fastparquet": "0.4.0",
"gcsfs": "0.6.0",
"lxml.etree": "4.3.0",
"matplotlib": "2.2.3",
diff --git a/pandas/tests/io/test_parquet.py b/pandas/tests/io/test_parquet.py
index 75c0de0a43bb9..3ef77d2fbacd0 100644
--- a/pandas/tests/io/test_parquet.py
+++ b/pandas/tests/io/test_parquet.py
@@ -916,7 +916,6 @@ def test_filter_row_groups(self, pa):
class TestParquetFastParquet(Base):
- @td.skip_if_no("fastparquet", min_version="0.3.2")
def test_basic(self, fp, df_full):
df = df_full
| Follow-up to #38344.
| https://api.github.com/repos/pandas-dev/pandas/pulls/40424 | 2021-03-14T05:19:36Z | 2021-03-15T16:59:52Z | 2021-03-15T16:59:52Z | 2021-03-18T16:40:07Z |
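The `VERSIONS` dict in `pandas/compat/_optional.py` is what enforces these minimums at import time (at this point in history via `distutils.version.LooseVersion`). A simplified sketch of that check, with a deliberately naive version parser that only handles plain dotted releases like `0.4.0` (pre-release suffixes are out of scope here):

```python
VERSIONS = {"fastparquet": "0.4.0"}  # minimum supported versions


def _parse(version):
    """Tiny version parser: '0.4.0' -> (0, 4, 0). Numeric parts only."""
    return tuple(int(part) for part in version.split(".") if part.isdigit())


def check_min_version(name, installed_version):
    """Raise ImportError if `installed_version` is below the pinned minimum."""
    minimum = VERSIONS.get(name)
    if minimum is not None and _parse(installed_version) < _parse(minimum):
        raise ImportError(
            f"pandas requires {name}>={minimum}; found {installed_version}"
        )
```

With the bump in this PR, `check_min_version("fastparquet", "0.3.2")` would now fail, which is why the dedicated `min_version="0.3.2"` skip in `test_basic` becomes redundant and is removed.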
TYP: fix type-ignores in core | diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index a888bfabd6f80..15f54c11be0a0 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -1867,7 +1867,7 @@ def _sort_mixed(values):
return np.concatenate([nums, np.asarray(strs, dtype=object)])
-def _sort_tuples(values: np.ndarray):
+def _sort_tuples(values: np.ndarray) -> np.ndarray:
"""
Convert array of tuples (1d) to array or array (2d).
We need to keep the columns separately as they contain different types and
diff --git a/pandas/core/arraylike.py b/pandas/core/arraylike.py
index 588fe8adc7241..b110d62b606d9 100644
--- a/pandas/core/arraylike.py
+++ b/pandas/core/arraylike.py
@@ -5,10 +5,7 @@
ExtensionArray
"""
import operator
-from typing import (
- Any,
- Callable,
-)
+from typing import Any
import warnings
import numpy as np
@@ -172,7 +169,7 @@ def _is_aligned(frame, other):
return frame.columns.equals(other.index)
-def _maybe_fallback(ufunc: Callable, method: str, *inputs: Any, **kwargs: Any):
+def _maybe_fallback(ufunc: np.ufunc, method: str, *inputs: Any, **kwargs: Any):
"""
In the future DataFrame, inputs to ufuncs will be aligned before applying
the ufunc, but for now we ignore the index but raise a warning if behaviour
diff --git a/pandas/core/arrays/boolean.py b/pandas/core/arrays/boolean.py
index 4258279e37551..5455b0b92a179 100644
--- a/pandas/core/arrays/boolean.py
+++ b/pandas/core/arrays/boolean.py
@@ -331,7 +331,7 @@ def map_string(s):
_HANDLED_TYPES = (np.ndarray, numbers.Number, bool, np.bool_)
- def __array_ufunc__(self, ufunc, method: str, *inputs, **kwargs):
+ def __array_ufunc__(self, ufunc: np.ufunc, method: str, *inputs, **kwargs):
# For BooleanArray inputs, we apply the ufunc to ._data
# and mask the result.
if method == "reduce":
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 0062ed01e957a..eb4bbb14a0135 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -1388,7 +1388,7 @@ def __array__(self, dtype: Optional[NpDtype] = None) -> np.ndarray:
# ndarray.
return np.asarray(ret)
- def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
+ def __array_ufunc__(self, ufunc: np.ufunc, method: str, *inputs, **kwargs):
# for binary ops, use our custom dunder methods
result = ops.maybe_dispatch_ufunc_to_dunder_op(
self, ufunc, method, *inputs, **kwargs
@@ -2429,7 +2429,7 @@ def replace(self, to_replace, value, inplace: bool = False):
# ------------------------------------------------------------------------
# String methods interface
- def _str_map(self, f, na_value=np.nan, dtype=np.dtype(object)):
+ def _str_map(self, f, na_value=np.nan, dtype=np.dtype("object")):
# Optimization to apply the callable `f` to the categories once
# and rebuild the result by `take`ing from the result with the codes.
# Returns the same type as the object-dtype implementation though.
diff --git a/pandas/core/arrays/numeric.py b/pandas/core/arrays/numeric.py
index f06099a642833..a5ead2485801b 100644
--- a/pandas/core/arrays/numeric.py
+++ b/pandas/core/arrays/numeric.py
@@ -152,7 +152,7 @@ def _arith_method(self, other, op):
_HANDLED_TYPES = (np.ndarray, numbers.Number)
- def __array_ufunc__(self, ufunc, method: str, *inputs, **kwargs):
+ def __array_ufunc__(self, ufunc: np.ufunc, method: str, *inputs, **kwargs):
# For NumericArray inputs, we apply the ufunc to ._data
# and mask the result.
if method == "reduce":
diff --git a/pandas/core/arrays/numpy_.py b/pandas/core/arrays/numpy_.py
index 5ef3c24726924..89988349132e6 100644
--- a/pandas/core/arrays/numpy_.py
+++ b/pandas/core/arrays/numpy_.py
@@ -137,7 +137,7 @@ def __array__(self, dtype: Optional[NpDtype] = None) -> np.ndarray:
_HANDLED_TYPES = (np.ndarray, numbers.Number)
- def __array_ufunc__(self, ufunc, method: str, *inputs, **kwargs):
+ def __array_ufunc__(self, ufunc: np.ufunc, method: str, *inputs, **kwargs):
# Lightly modified version of
# https://numpy.org/doc/stable/reference/generated/numpy.lib.mixins.NDArrayOperatorsMixin.html
# The primary modification is not boxing scalar return values
diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py
index c798870e4126a..f5dc95590c963 100644
--- a/pandas/core/arrays/sparse/array.py
+++ b/pandas/core/arrays/sparse/array.py
@@ -1396,7 +1396,7 @@ def mean(self, axis=0, *args, **kwargs):
_HANDLED_TYPES = (np.ndarray, numbers.Number)
- def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
+ def __array_ufunc__(self, ufunc: np.ufunc, method: str, *inputs, **kwargs):
out = kwargs.get("out", ())
for x in inputs + out:
diff --git a/pandas/core/base.py b/pandas/core/base.py
index 56ec2597314b2..f30430dd394ca 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -23,6 +23,7 @@
Dtype,
DtypeObj,
IndexLabel,
+ Shape,
)
from pandas.compat import PYPY
from pandas.compat.numpy import function as nv
@@ -389,7 +390,7 @@ def transpose(self: _T, *args, **kwargs) -> _T:
)
@property
- def shape(self):
+ def shape(self) -> Shape:
"""
Return a tuple of the shape of the underlying data.
"""
@@ -511,7 +512,7 @@ def to_numpy(
copy: bool = False,
na_value=lib.no_default,
**kwargs,
- ):
+ ) -> np.ndarray:
"""
A NumPy ndarray representing the values in this Series or Index.
@@ -852,7 +853,7 @@ def __iter__(self):
return map(self._values.item, range(self._values.size))
@cache_readonly
- def hasnans(self):
+ def hasnans(self) -> bool:
"""
Return if I have any nans; enables various perf speedups.
"""
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index ce91276bc6cf4..be535495de8d0 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -689,15 +689,14 @@ def _maybe_promote(dtype: np.dtype, fill_value=np.nan):
if fv.tz is None:
return dtype, fv.asm8
- # error: Value of type variable "_DTypeScalar" of "dtype" cannot be "object"
- return np.dtype(object), fill_value # type: ignore[type-var]
+ return np.dtype("object"), fill_value
elif issubclass(dtype.type, np.timedelta64):
inferred, fv = infer_dtype_from_scalar(fill_value, pandas_dtype=True)
if inferred == dtype:
return dtype, fv
- # error: Value of type variable "_DTypeScalar" of "dtype" cannot be "object"
- return np.dtype(object), fill_value # type: ignore[type-var]
+
+ return np.dtype("object"), fill_value
elif is_float(fill_value):
if issubclass(dtype.type, np.bool_):
diff --git a/pandas/core/dtypes/missing.py b/pandas/core/dtypes/missing.py
index de981c39228ae..d9fbc3ed122fa 100644
--- a/pandas/core/dtypes/missing.py
+++ b/pandas/core/dtypes/missing.py
@@ -648,8 +648,7 @@ def is_valid_na_for_dtype(obj, dtype: DtypeObj) -> bool:
# Numeric
return obj is not NaT and not isinstance(obj, (np.datetime64, np.timedelta64))
- # error: Value of type variable "_DTypeScalar" of "dtype" cannot be "object"
- elif dtype == np.dtype(object): # type: ignore[type-var]
+ elif dtype == np.dtype("object"):
# This is needed for Categorical, but is kind of weird
return True
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index c048088ec2350..23c61773daf5a 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -123,6 +123,7 @@
is_sequence,
pandas_dtype,
)
+from pandas.core.dtypes.dtypes import ExtensionDtype
from pandas.core.dtypes.missing import (
isna,
notna,
@@ -584,25 +585,17 @@ def __init__(
)
elif isinstance(data, dict):
- # error: Argument "dtype" to "dict_to_mgr" has incompatible type
- # "Union[ExtensionDtype, str, dtype[Any], Type[object], None]"; expected
- # "Union[dtype[Any], ExtensionDtype, None]"
- mgr = dict_to_mgr(
- data, index, columns, dtype=dtype, typ=manager # type: ignore[arg-type]
- )
+ mgr = dict_to_mgr(data, index, columns, dtype=dtype, typ=manager)
elif isinstance(data, ma.MaskedArray):
import numpy.ma.mrecords as mrecords
# masked recarray
if isinstance(data, mrecords.MaskedRecords):
- # error: Argument 4 to "rec_array_to_mgr" has incompatible type
- # "Union[ExtensionDtype, str, dtype[Any], Type[object], None]"; expected
- # "Union[dtype[Any], ExtensionDtype, None]"
mgr = rec_array_to_mgr(
data,
index,
columns,
- dtype, # type: ignore[arg-type]
+ dtype,
copy,
typ=manager,
)
@@ -611,13 +604,10 @@ def __init__(
else:
data = sanitize_masked_array(data)
mgr = ndarray_to_mgr(
- # error: Argument "dtype" to "ndarray_to_mgr" has incompatible type
- # "Union[ExtensionDtype, str, dtype[Any], Type[object], None]";
- # expected "Union[dtype[Any], ExtensionDtype, None]"
data,
index,
columns,
- dtype=dtype, # type: ignore[arg-type]
+ dtype=dtype,
copy=copy,
typ=manager,
)
@@ -626,14 +616,11 @@ def __init__(
if data.dtype.names:
# i.e. numpy structured array
- # error: Argument 4 to "rec_array_to_mgr" has incompatible type
- # "Union[ExtensionDtype, str, dtype[Any], Type[object], None]"; expected
- # "Union[dtype[Any], ExtensionDtype, None]"
mgr = rec_array_to_mgr(
data,
index,
columns,
- dtype, # type: ignore[arg-type]
+ dtype,
copy,
typ=manager,
)
@@ -642,24 +629,18 @@ def __init__(
mgr = dict_to_mgr(
# error: Item "ndarray" of "Union[ndarray, Series, Index]" has no
# attribute "name"
- # error: Argument "dtype" to "dict_to_mgr" has incompatible type
- # "Union[ExtensionDtype, str, dtype[Any], Type[object], None]";
- # expected "Union[dtype[Any], ExtensionDtype, None]"
{data.name: data}, # type: ignore[union-attr]
index,
columns,
- dtype=dtype, # type: ignore[arg-type]
+ dtype=dtype,
typ=manager,
)
else:
mgr = ndarray_to_mgr(
- # error: Argument "dtype" to "ndarray_to_mgr" has incompatible type
- # "Union[ExtensionDtype, str, dtype[Any], Type[object], None]";
- # expected "Union[dtype[Any], ExtensionDtype, None]"
data,
index,
columns,
- dtype=dtype, # type: ignore[arg-type]
+ dtype=dtype,
copy=copy,
typ=manager,
)
@@ -680,46 +661,34 @@ def __init__(
arrays, columns, index = nested_data_to_arrays(
# error: Argument 3 to "nested_data_to_arrays" has incompatible
# type "Optional[Collection[Any]]"; expected "Optional[Index]"
- # error: Argument 4 to "nested_data_to_arrays" has incompatible
- # type "Union[ExtensionDtype, str, dtype[Any], Type[object],
- # None]"; expected "Union[dtype[Any], ExtensionDtype, None]"
data,
columns,
index, # type: ignore[arg-type]
- dtype, # type: ignore[arg-type]
+ dtype,
)
mgr = arrays_to_mgr(
- # error: Argument "dtype" to "arrays_to_mgr" has incompatible
- # type "Union[ExtensionDtype, str, dtype[Any], Type[object],
- # None]"; expected "Union[dtype[Any], ExtensionDtype, None]"
arrays,
columns,
index,
columns,
- dtype=dtype, # type: ignore[arg-type]
+ dtype=dtype,
typ=manager,
)
else:
mgr = ndarray_to_mgr(
- # error: Argument "dtype" to "ndarray_to_mgr" has incompatible
- # type "Union[ExtensionDtype, str, dtype[Any], Type[object],
- # None]"; expected "Union[dtype[Any], ExtensionDtype, None]"
data,
index,
columns,
- dtype=dtype, # type: ignore[arg-type]
+ dtype=dtype,
copy=copy,
typ=manager,
)
else:
- # error: Argument "dtype" to "dict_to_mgr" has incompatible type
- # "Union[ExtensionDtype, str, dtype[Any], Type[object], None]"; expected
- # "Union[dtype[Any], ExtensionDtype, None]"
mgr = dict_to_mgr(
{},
index,
columns,
- dtype=dtype, # type: ignore[arg-type]
+ dtype=dtype,
typ=manager,
)
# For data is scalar
@@ -731,16 +700,11 @@ def __init__(
dtype, _ = infer_dtype_from_scalar(data, pandas_dtype=True)
# For data is a scalar extension dtype
- if is_extension_array_dtype(dtype):
+ if isinstance(dtype, ExtensionDtype):
# TODO(EA2D): special case not needed with 2D EAs
values = [
- # error: Argument 3 to "construct_1d_arraylike_from_scalar"
- # has incompatible type "Union[ExtensionDtype, str, dtype,
- # Type[object]]"; expected "Union[dtype, ExtensionDtype]"
- construct_1d_arraylike_from_scalar(
- data, len(index), dtype # type: ignore[arg-type]
- )
+ construct_1d_arraylike_from_scalar(data, len(index), dtype)
for _ in range(len(columns))
]
mgr = arrays_to_mgr(
@@ -750,13 +714,10 @@ def __init__(
# error: Incompatible types in assignment (expression has type
# "ndarray", variable has type "List[ExtensionArray]")
values = construct_2d_arraylike_from_scalar( # type: ignore[assignment]
- # error: Argument 4 to "construct_2d_arraylike_from_scalar" has
- # incompatible type "Union[ExtensionDtype, str, dtype[Any],
- # Type[object]]"; expected "dtype[Any]"
data,
len(index),
len(columns),
- dtype, # type: ignore[arg-type]
+ dtype,
copy,
)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 67533259ae0c2..050550f8add50 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -44,6 +44,7 @@
CompressionOptions,
Dtype,
DtypeArg,
+ DtypeObj,
FilePathOrBuffer,
FrameOrSeries,
IndexKeyFunc,
@@ -411,7 +412,7 @@ def set_flags(
@final
@classmethod
- def _validate_dtype(cls, dtype):
+ def _validate_dtype(cls, dtype) -> Optional[DtypeObj]:
""" validate the passed dtype """
if dtype is not None:
dtype = pandas_dtype(dtype)
@@ -1995,13 +1996,9 @@ def __array_wrap__(
)
def __array_ufunc__(
- self, ufunc: Callable, method: str, *inputs: Any, **kwargs: Any
+ self, ufunc: np.ufunc, method: str, *inputs: Any, **kwargs: Any
):
- # error: Argument 2 to "array_ufunc" has incompatible type "Callable[..., Any]";
- # expected "ufunc"
- return arraylike.array_ufunc(
- self, ufunc, method, *inputs, **kwargs # type: ignore[arg-type]
- )
+ return arraylike.array_ufunc(self, ufunc, method, *inputs, **kwargs)
# ideally we would define this to avoid the getattr checks, but
# is slower
@@ -4900,7 +4897,6 @@ def _reindex_axes(
return obj
- @final
def _needs_reindex_multi(self, axes, method, level) -> bool_t:
"""Check if we do need a multi reindex."""
return (
@@ -6998,10 +6994,7 @@ def interpolate(
f"`limit_direction` must be 'backward' for method `{method}`"
)
- # error: Value of type variable "_DTypeScalar" of "dtype" cannot be "object"
- if obj.ndim == 2 and np.all(
- obj.dtypes == np.dtype(object) # type: ignore[type-var]
- ):
+ if obj.ndim == 2 and np.all(obj.dtypes == np.dtype("object")):
raise TypeError(
"Cannot interpolate with all object-dtype columns "
"in the DataFrame. Try setting at least one "
@@ -8488,15 +8481,12 @@ def ranker(data):
na_option=na_option,
pct=pct,
)
- # error: Incompatible types in assignment (expression has type
- # "FrameOrSeries", variable has type "ndarray")
# error: Argument 1 to "NDFrame" has incompatible type "ndarray"; expected
# "Union[ArrayManager, BlockManager]"
- ranks = self._constructor( # type: ignore[assignment]
+ ranks_obj = self._constructor(
ranks, **data._construct_axes_dict() # type: ignore[arg-type]
)
- # error: "ndarray" has no attribute "__finalize__"
- return ranks.__finalize__(self, method="rank") # type: ignore[attr-defined]
+ return ranks_obj.__finalize__(self, method="rank")
# if numeric_only is None, and we can't get anything, we try with
# numeric_only=True
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 26d25645b02c6..2597724b3d948 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -4648,7 +4648,7 @@ def _concat(self, to_concat: List[Index], name: Hashable) -> Index:
result = concat_compat(to_concat_vals)
return Index(result, name=name)
- def putmask(self, mask, value):
+ def putmask(self, mask, value) -> Index:
"""
Return a new Index of the values set with the mask.
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index 62941a23c6459..5cdf4c1ecef55 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -15,6 +15,7 @@
from pandas._typing import (
ArrayLike,
Dtype,
+ DtypeObj,
)
from pandas.util._decorators import (
Appender,
@@ -538,7 +539,7 @@ def _maybe_cast_slice_bound(self, label, side: str, kind):
# --------------------------------------------------------------------
- def _is_comparable_dtype(self, dtype):
+ def _is_comparable_dtype(self, dtype: DtypeObj) -> bool:
return self.categories._is_comparable_dtype(dtype)
def take_nd(self, *args, **kwargs):
diff --git a/pandas/core/indexes/extension.py b/pandas/core/indexes/extension.py
index 4c15e9df534ba..f714da0d0e303 100644
--- a/pandas/core/indexes/extension.py
+++ b/pandas/core/indexes/extension.py
@@ -430,7 +430,7 @@ def insert(self: _T, loc: int, item) -> _T:
new_arr = arr._from_backing_data(new_vals)
return type(self)._simple_new(new_arr, name=self.name)
- def putmask(self, mask, value):
+ def putmask(self, mask, value) -> Index:
res_values = self._data.copy()
try:
res_values.putmask(mask, value)
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index 86ff95a588217..eea66a481a72f 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -836,14 +836,14 @@ def right(self) -> Index:
return Index(self._data.right, copy=False)
@cache_readonly
- def mid(self):
+ def mid(self) -> Index:
return Index(self._data.mid, copy=False)
@property
- def length(self):
+ def length(self) -> Index:
return Index(self._data.length, copy=False)
- def putmask(self, mask, value):
+ def putmask(self, mask, value) -> Index:
mask, noop = validate_putmask(self._data, mask)
if noop:
return self.copy()
@@ -891,7 +891,7 @@ def _format_native_types(self, na_rep="NaN", quoting=None, **kwargs):
# GH 28210: use base method but with different default na_rep
return super()._format_native_types(na_rep=na_rep, quoting=quoting, **kwargs)
- def _format_data(self, name=None):
+ def _format_data(self, name=None) -> str:
# TODO: integrate with categorical and make generic
# name argument is unused here; just for compat with base / categorical
diff --git a/pandas/core/indexes/range.py b/pandas/core/indexes/range.py
index 05bb32dad6cab..e446786802239 100644
--- a/pandas/core/indexes/range.py
+++ b/pandas/core/indexes/range.py
@@ -188,7 +188,7 @@ def _constructor(self) -> Type[Int64Index]:
return Int64Index
@cache_readonly
- def _data(self):
+ def _data(self) -> np.ndarray:
"""
An int array that for performance reasons is created only when needed.
@@ -201,7 +201,7 @@ def _cached_int64index(self) -> Int64Index:
return Int64Index._simple_new(self._data, name=self.name)
@property
- def _int64index(self):
+ def _int64index(self) -> Int64Index:
# wrap _cached_int64index so we can be sure its name matches self.name
res = self._cached_int64index
res._name = self._name
@@ -425,13 +425,15 @@ def _get_indexer(self, target: Index, method=None, limit=None, tolerance=None):
# --------------------------------------------------------------------
- def repeat(self, repeats, axis=None):
+ def repeat(self, repeats, axis=None) -> Int64Index:
return self._int64index.repeat(repeats, axis=axis)
- def delete(self, loc):
+ def delete(self, loc) -> Int64Index:
return self._int64index.delete(loc)
- def take(self, indices, axis=0, allow_fill=True, fill_value=None, **kwargs):
+ def take(
+ self, indices, axis=0, allow_fill=True, fill_value=None, **kwargs
+ ) -> Int64Index:
with rewrite_exception("Int64Index", type(self).__name__):
return self._int64index.take(
indices,
diff --git a/pandas/core/missing.py b/pandas/core/missing.py
index 48b2084319292..c2193056cc974 100644
--- a/pandas/core/missing.py
+++ b/pandas/core/missing.py
@@ -679,7 +679,7 @@ def interpolate_2d(
return result
-def _fillna_prep(values, mask=None):
+def _fillna_prep(values, mask: Optional[np.ndarray] = None) -> np.ndarray:
# boilerplate for _pad_1d, _backfill_1d, _pad_2d, _backfill_2d
if mask is None:
@@ -717,9 +717,7 @@ def _pad_1d(
) -> tuple[np.ndarray, np.ndarray]:
mask = _fillna_prep(values, mask)
algos.pad_inplace(values, mask, limit=limit)
- # error: Incompatible return value type (got "Tuple[ndarray, Optional[ndarray]]",
- # expected "Tuple[ndarray, ndarray]")
- return values, mask # type: ignore[return-value]
+ return values, mask
@_datetimelike_compat
@@ -730,9 +728,7 @@ def _backfill_1d(
) -> tuple[np.ndarray, np.ndarray]:
mask = _fillna_prep(values, mask)
algos.backfill_inplace(values, mask, limit=limit)
- # error: Incompatible return value type (got "Tuple[ndarray, Optional[ndarray]]",
- # expected "Tuple[ndarray, ndarray]")
- return values, mask # type: ignore[return-value]
+ return values, mask
@_datetimelike_compat
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 9feec7acae4c6..1f5a42265ec98 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -267,7 +267,9 @@ class Series(base.IndexOpsMixin, generic.NDFrame):
)
# Override cache_readonly bc Series is mutable
- hasnans = property(
+ # error: Incompatible types in assignment (expression has type "property",
+ # base class "IndexOpsMixin" defined the type as "Callable[[IndexOpsMixin], bool]")
+ hasnans = property( # type: ignore[assignment]
base.IndexOpsMixin.hasnans.func, doc=base.IndexOpsMixin.hasnans.__doc__
)
__hash__ = generic.NDFrame.__hash__
@@ -404,12 +406,7 @@ def __init__(
elif copy:
data = data.copy()
else:
- # error: Argument 3 to "sanitize_array" has incompatible type
- # "Union[ExtensionDtype, str, dtype[Any], Type[object], None]"; expected
- # "Union[dtype[Any], ExtensionDtype, None]"
- data = sanitize_array(
- data, index, dtype, copy # type: ignore[arg-type]
- )
+ data = sanitize_array(data, index, dtype, copy)
manager = get_option("mode.data_manager")
if manager == "block":
@@ -4231,7 +4228,7 @@ def _reindex_indexer(self, new_index, indexer, copy):
)
return self._constructor(new_values, index=new_index)
- def _needs_reindex_multi(self, axes, method, level):
+ def _needs_reindex_multi(self, axes, method, level) -> bool:
"""
Check if we do need a multi reindex; this is for compat with
higher dims.
diff --git a/pandas/tests/extension/decimal/array.py b/pandas/tests/extension/decimal/array.py
index 58e5dc34d59d5..366b24e328642 100644
--- a/pandas/tests/extension/decimal/array.py
+++ b/pandas/tests/extension/decimal/array.py
@@ -110,7 +110,7 @@ def to_numpy(
result = np.asarray([round(x, decimals) for x in result])
return result
- def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
+ def __array_ufunc__(self, ufunc: np.ufunc, method: str, *inputs, **kwargs):
#
if not all(
isinstance(t, self._HANDLED_TYPES + (DecimalArray,)) for t in inputs
| Also added some missing annotations I saw along the way. | https://api.github.com/repos/pandas-dev/pandas/pulls/40423 | 2021-03-14T00:28:27Z | 2021-03-14T09:58:33Z | 2021-03-14T09:58:33Z | 2021-03-14T16:24:11Z |
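The diff above repeatedly swaps `np.dtype(object)` for `np.dtype("object")` so the removed `# type: ignore[type-var]` comments are no longer needed: the class form trips mypy's overloaded `np.dtype` stubs, while the string form does not. At runtime the two spellings are equivalent. A minimal sketch of that equivalence (not part of the PR itself):

```python
import numpy as np

# Both spellings construct the same dtype at runtime; the string form merely
# sidesteps the mypy "_DTypeScalar" type-var complaint against the class form.
dt_from_class = np.dtype(object)
dt_from_string = np.dtype("object")

assert dt_from_class == dt_from_string
assert dt_from_string.kind == "O"  # 'O' is numpy's kind code for Python-object dtypes
```

Because the change is purely about static typing, no behavior in the touched pandas code paths changes.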
ENH: Styler.to_latex(): conditional styling with native latex format | diff --git a/doc/source/_static/style/latex_1.png b/doc/source/_static/style/latex_1.png
new file mode 100644
index 0000000000000..8b901878a0ec9
Binary files /dev/null and b/doc/source/_static/style/latex_1.png differ
diff --git a/doc/source/_static/style/latex_2.png b/doc/source/_static/style/latex_2.png
new file mode 100644
index 0000000000000..7d6baa681575e
Binary files /dev/null and b/doc/source/_static/style/latex_2.png differ
diff --git a/doc/source/reference/style.rst b/doc/source/reference/style.rst
index 8c443f3ae9bb6..6a075ad702bde 100644
--- a/doc/source/reference/style.rst
+++ b/doc/source/reference/style.rst
@@ -24,6 +24,7 @@ Styler properties
Styler.env
Styler.template_html
+ Styler.template_latex
Styler.loader
Style application
@@ -66,3 +67,4 @@ Style export and import
Styler.export
Styler.use
Styler.to_excel
+ Styler.to_latex
diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index d357e4a633347..2f2e8aed6fdb8 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -141,6 +141,9 @@ properly format HTML and eliminate some inconsistencies (:issue:`39942` :issue:`
:class:`.Styler` has also been compatible with non-unique index or columns, at least for as many features as are fully compatible, others made only partially compatible (:issue:`41269`).
One also has greater control of the display through separate sparsification of the index or columns, using the new 'styler' options context (:issue:`41142`).
+We have added an extension to allow LaTeX styling as an alternative to CSS styling and a method :meth:`.Styler.to_latex`
+which renders the necessary LaTeX format including built-up styles.
+
Documentation has also seen major revisions in light of new features (:issue:`39720` :issue:`39317` :issue:`40493`)
.. _whatsnew_130.dataframe_honors_copy_with_dict:
diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index 56e34d9500f31..977a3a24f0844 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -21,6 +21,7 @@
from pandas._typing import (
Axis,
+ FilePathOrBuffer,
FrameOrSeries,
FrameOrSeriesUnion,
IndexLabel,
@@ -30,6 +31,7 @@
from pandas.util._decorators import doc
import pandas as pd
+from pandas import RangeIndex
from pandas.api.types import is_list_like
from pandas.core import generic
import pandas.core.common as com
@@ -39,6 +41,8 @@
)
from pandas.core.generic import NDFrame
+from pandas.io.formats.format import save_to_buffer
+
jinja2 = import_optional_dependency("jinja2", extra="DataFrame.style requires jinja2.")
from pandas.io.formats.style_render import (
@@ -403,6 +407,338 @@ def to_excel(
engine=engine,
)
+ def to_latex(
+ self,
+ buf: FilePathOrBuffer[str] | None = None,
+ *,
+ column_format: str | None = None,
+ position: str | None = None,
+ position_float: str | None = None,
+ hrules: bool = False,
+ label: str | None = None,
+ caption: str | None = None,
+ sparse_index: bool | None = None,
+ sparse_columns: bool | None = None,
+ multirow_align: str = "c",
+ multicol_align: str = "r",
+ siunitx: bool = False,
+ encoding: str | None = None,
+ ):
+ r"""
+ Write Styler to a file, buffer or string in LaTeX format.
+
+ .. versionadded:: 1.3.0
+
+ Parameters
+ ----------
+ buf : str, Path, or StringIO-like, optional, default None
+ Buffer to write to. If ``None``, the output is returned as a string.
+ column_format : str, optional
+ The LaTeX column specification placed in location:
+
+ \\begin{tabular}{<column_format>}
+
+ Defaults to 'l' for index and
+ non-numeric data columns, and, for numeric data columns,
+ to 'r' by default, or 'S' if ``siunitx`` is ``True``.
+ position : str, optional
+ The LaTeX positional argument (e.g. 'h!') for tables, placed in location:
+
+ \\begin{table}[<position>]
+ position_float : {"centering", "raggedleft", "raggedright"}, optional
+ The LaTeX float command placed in location:
+
+ \\begin{table}[<position>]
+
+ \\<position_float>
+ hrules : bool, default False
+ Set to `True` to add \\toprule, \\midrule and \\bottomrule from the
+ {booktabs} LaTeX package.
+ label : str, optional
+ The LaTeX label included as: \\label{<label>}.
+ This is used with \\ref{<label>} in the main .tex file.
+ caption : str, optional
+ The LaTeX table caption included as: \\caption{<caption>}.
+ sparse_index : bool, optional
+ Whether to sparsify the display of a hierarchical index. Setting to False
+ will display each explicit level element in a hierarchical key for each row.
+ Defaults to ``pandas.options.styler.sparse.index`` value.
+ sparse_columns : bool, optional
+ Whether to sparsify the display of a hierarchical index. Setting to False
+ will display each explicit level element in a hierarchical key for each column.
+ Defaults to ``pandas.options.styler.sparse.columns`` value.
+ multirow_align : {"c", "t", "b"}
+ If sparsifying hierarchical MultiIndexes whether to align text centrally,
+ at the top or bottom.
+ multicol_align : {"r", "c", "l"}
+ If sparsifying hierarchical MultiIndex columns whether to align text at
+ the left, centrally, or at the right.
+ siunitx : bool, default False
+ Set to ``True`` to structure LaTeX compatible with the {siunitx} package.
+ encoding : str, default "utf-8"
+ Character encoding setting.
+
+ Returns
+ -------
+ str or None
+ If `buf` is None, returns the result as a string. Otherwise returns `None`.
+
+ See Also
+ --------
+ Styler.format: Format the text display value of cells.
+
+ Notes
+ -----
+ **Latex Packages**
+
+ For the following features we recommend the following LaTeX inclusions:
+
+ ===================== ==========================================================
+ Feature Inclusion
+ ===================== ==========================================================
+ sparse columns none: included within default {tabular} environment
+ sparse rows \\usepackage{multirow}
+ hrules \\usepackage{booktabs}
+ colors \\usepackage[table]{xcolor}
+ siunitx \\usepackage{siunitx}
+ bold (with siunitx) | \\usepackage{etoolbox}
+ | \\robustify\\bfseries
+ | \\sisetup{detect-all = true} *(within {document})*
+ italic (with siunitx) | \\usepackage{etoolbox}
+ | \\robustify\\itshape
+ | \\sisetup{detect-all = true} *(within {document})*
+ ===================== ==========================================================
+
+ **Cell Styles**
+
+ LaTeX styling can only be rendered if the accompanying styling functions have
+ been constructed with appropriate LaTeX commands. All styling
+ functionality is built around the concept of a CSS ``(<attribute>, <value>)``
+ pair (see `Table Visualization <../../user_guide/style.ipynb>`_), and this
+ should be replaced by a LaTeX
+ ``(<command>, <options>)`` approach. Each cell will be styled individually
+ using nested LaTeX commands with their accompanied options.
+
+ For example the following code will highlight and bold a cell in HTML-CSS:
+
+ >>> df = pd.DataFrame([[1,2], [3,4]])
+ >>> s = df.style.highlight_max(axis=None,
+ ... props='background-color:red; font-weight:bold;')
+ >>> s.render()
+
+ The equivalent using LaTeX only commands is the following:
+
+ >>> s = df.style.highlight_max(axis=None,
+ ... props='cellcolor:{red}; bfseries: ;')
+ >>> s.to_latex()
+
+ Internally these structured LaTeX ``(<command>, <options>)`` pairs
+ are translated to the
+ ``display_value`` with the default structure:
+ ``\<command><options> <display_value>``.
+ Where there are multiple commands the latter is nested recursively, so that
+ the above example highlighed cell is rendered as
+ ``\cellcolor{red} \bfseries 4``.
+
+ Occasionally this format does not suit the applied command, or
+ combination of LaTeX packages that is in use, so additional flags can be
+ added to the ``<options>``, within the tuple, to result in different
+ positions of required braces (the **default** being the same as ``--nowrap``):
+
+ =================================== ============================================
+ Tuple Format Output Structure
+ =================================== ============================================
+ (<command>,<options>) \\<command><options> <display_value>
+ (<command>,<options> ``--nowrap``) \\<command><options> <display_value>
+ (<command>,<options> ``--rwrap``) \\<command><options>{<display_value>}
+ (<command>,<options> ``--wrap``) {\\<command><options> <display_value>}
+ (<command>,<options> ``--lwrap``) {\\<command><options>} <display_value>
+ (<command>,<options> ``--dwrap``) {\\<command><options>}{<display_value>}
+ =================================== ============================================
+
+ For example the `textbf` command for font-weight
+ should always be used with `--rwrap` so ``('textbf', '--rwrap')`` will render a
+ working cell, wrapped with braces, as ``\textbf{<display_value>}``.
+
+ A more comprehensive example is as follows:
+
+ >>> df = pd.DataFrame([[1, 2.2, "dogs"], [3, 4.4, "cats"], [2, 6.6, "cows"]],
+ ... index=["ix1", "ix2", "ix3"],
+ ... columns=["Integers", "Floats", "Strings"])
+ >>> s = df.style.highlight_max(
+ ... props='cellcolor:[HTML]{FFFF00}; color:{red};'
+ ... 'textit:--rwrap; textbf:--rwrap;'
+ ... )
+ >>> s.to_latex()
+
+ .. figure:: ../../_static/style/latex_1.png
+
+ **Table Styles**
+
+ Internally Styler uses its ``table_styles`` object to parse the
+ ``column_format``, ``position``, ``position_float``, and ``label``
+ input arguments. These arguments are added to table styles in the format:
+
+ .. code-block:: python
+
+ set_table_styles([
+ {"selector": "column_format", "props": f":{column_format};"},
+ {"selector": "position", "props": f":{position};"},
+ {"selector": "position_float", "props": f":{position_float};"},
+ {"selector": "label", "props": f":{{{label.replace(':','§')}}};"}
+ ], overwrite=False)
+
+ An exception is made for the ``hrules`` argument which, in fact, controls all three
+ commands: ``toprule``, ``bottomrule`` and ``midrule`` simultaneously. Instead of
+ setting ``hrules`` to ``True``, it is also possible to set each
+ individual rule definition, by manually setting the ``table_styles``,
+ for example below we set a regular ``toprule``, set an ``hline`` for
+ ``bottomrule`` and exclude the ``midrule``:
+
+ .. code-block:: python
+
+ set_table_styles([
+ {'selector': 'toprule', 'props': ':toprule;'},
+ {'selector': 'bottomrule', 'props': ':hline;'},
+ ], overwrite=False)
+
+ If other ``commands`` are added to table styles they will be detected, and
+ positioned immediately above the '\\begin{tabular}' command. For example to
+ add odd and even row coloring, from the {colortbl} package, in format
+ ``\rowcolors{1}{pink}{red}``, use:
+
+ .. code-block:: python
+
+ set_table_styles([
+ {'selector': 'rowcolors', 'props': ':{1}{pink}{red};'}
+ ], overwrite=False)
+
+ A more comprehensive example using these arguments is as follows:
+
+ >>> df.columns = pd.MultiIndex.from_tuples([
+ ... ("Numeric", "Integers"),
+ ... ("Numeric", "Floats"),
+ ... ("Non-Numeric", "Strings")
+ ... ])
+ >>> df.index = pd.MultiIndex.from_tuples([
+ ... ("L0", "ix1"), ("L0", "ix2"), ("L1", "ix3")
+ ... ])
+ >>> s = df.style.highlight_max(
+ ... props='cellcolor:[HTML]{FFFF00}; color:{red}; itshape:; bfseries:;'
+ ... )
+ >>> s.to_latex(
+ ... column_format="rrrrr", position="h", position_float="centering",
+ ... hrules=True, label="table:5", caption="Styled LaTeX Table",
+ ... multirow_align="t", multicol_align="r"
+ ... )
+
+ .. figure:: ../../_static/style/latex_2.png
+
+ **Formatting**
+
+ To format values :meth:`Styler.format` should be used prior to calling
+ `Styler.to_latex`, as well as other methods such as :meth:`Styler.hide_index`
+ or :meth:`Styler.hide_columns`, for example:
+
+ >>> s.clear()
+ >>> s.table_styles = []
+ >>> s.caption = None
+ >>> s.format({
+ ... ("Numeric", "Integers"): '\${}',
+ ... ("Numeric", "Floats"): '{:.3f}',
+ ... ("Non-Numeric", "Strings"): str.upper
+ ... })
+ >>> s.to_latex()
+ \begin{tabular}{llrrl}
+ {} & {} & \multicolumn{2}{r}{Numeric} & {Non-Numeric} \\
+ {} & {} & {Integers} & {Floats} & {Strings} \\
+ \multirow[c]{2}{*}{L0} & ix1 & \$1 & 2.200 & DOGS \\
+ & ix2 & \$3 & 4.400 & CATS \\
+ L1 & ix3 & \$2 & 6.600 & COWS \\
+ \end{tabular}
+ """
+ table_selectors = (
+ [style["selector"] for style in self.table_styles]
+ if self.table_styles is not None
+ else []
+ )
+
+ if column_format is not None:
+ # add more recent setting to table_styles
+ self.set_table_styles(
+ [{"selector": "column_format", "props": f":{column_format}"}],
+ overwrite=False,
+ )
+ elif "column_format" in table_selectors:
+ pass # adopt what has been previously set in table_styles
+ else:
+ # create a default: set float, complex, int cols to 'r' ('S'), index to 'l'
+ _original_columns = self.data.columns
+ self.data.columns = RangeIndex(stop=len(self.data.columns))
+ numeric_cols = self.data._get_numeric_data().columns.to_list()
+ self.data.columns = _original_columns
+ column_format = "" if self.hidden_index else "l" * self.data.index.nlevels
+ for ci, _ in enumerate(self.data.columns):
+ if ci not in self.hidden_columns:
+ column_format += (
+ ("r" if not siunitx else "S") if ci in numeric_cols else "l"
+ )
+ self.set_table_styles(
+ [{"selector": "column_format", "props": f":{column_format}"}],
+ overwrite=False,
+ )
+
+ if position:
+ self.set_table_styles(
+ [{"selector": "position", "props": f":{position}"}],
+ overwrite=False,
+ )
+
+ if position_float:
+ if position_float not in ["raggedright", "raggedleft", "centering"]:
+ raise ValueError(
+ f"`position_float` should be one of "
+ f"'raggedright', 'raggedleft', 'centering', "
+ f"got: '{position_float}'"
+ )
+ self.set_table_styles(
+ [{"selector": "position_float", "props": f":{position_float}"}],
+ overwrite=False,
+ )
+
+ if hrules:
+ self.set_table_styles(
+ [
+ {"selector": "toprule", "props": ":toprule"},
+ {"selector": "midrule", "props": ":midrule"},
+ {"selector": "bottomrule", "props": ":bottomrule"},
+ ],
+ overwrite=False,
+ )
+
+ if label:
+ self.set_table_styles(
+ [{"selector": "label", "props": f":{{{label.replace(':', '§')}}}"}],
+ overwrite=False,
+ )
+
+ if caption:
+ self.set_caption(caption)
+
+ if sparse_index is None:
+ sparse_index = get_option("styler.sparse.index")
+ if sparse_columns is None:
+ sparse_columns = get_option("styler.sparse.columns")
+
+ latex = self._render_latex(
+ sparse_index=sparse_index,
+ sparse_columns=sparse_columns,
+ multirow_align=multirow_align,
+ multicol_align=multicol_align,
+ )
+
+ return save_to_buffer(latex, buf=buf, encoding=encoding)
+
def set_td_classes(self, classes: DataFrame) -> Styler:
"""
Set the DataFrame of strings added to the ``class`` attribute of ``<td>``
diff --git a/pandas/io/formats/style_render.py b/pandas/io/formats/style_render.py
index 9d149008dcb88..ce328f00cf794 100644
--- a/pandas/io/formats/style_render.py
+++ b/pandas/io/formats/style_render.py
@@ -66,6 +66,7 @@ class StylerRenderer:
loader = jinja2.PackageLoader("pandas", "io/formats/templates")
env = jinja2.Environment(loader=loader, trim_blocks=True)
template_html = env.get_template("html.tpl")
+ template_latex = env.get_template("latex.tpl")
def __init__(
self,
@@ -118,6 +119,23 @@ def _render_html(self, sparse_index: bool, sparse_columns: bool, **kwargs) -> st
d.update(kwargs)
return self.template_html.render(**d)
+ def _render_latex(self, sparse_index: bool, sparse_columns: bool, **kwargs) -> str:
+ """
+ Render a Styler in latex format
+ """
+ self._compute()
+
+ d = self._translate(sparse_index, sparse_columns, blank="")
+ self._translate_latex(d)
+
+ self.template_latex.globals["parse_wrap"] = _parse_latex_table_wrapping
+ self.template_latex.globals["parse_table"] = _parse_latex_table_styles
+ self.template_latex.globals["parse_cell"] = _parse_latex_cell_styles
+ self.template_latex.globals["parse_header"] = _parse_latex_header_span
+
+ d.update(kwargs)
+ return self.template_latex.render(**d)
+
def _compute(self):
"""
Execute the style functions built up in `self._todo`.
@@ -133,7 +151,7 @@ def _compute(self):
r = func(self)(*args, **kwargs)
return r
- def _translate(self, sparse_index: bool, sparse_cols: bool):
+ def _translate(self, sparse_index: bool, sparse_cols: bool, blank: str = " "):
"""
Process Styler data and settings into a dict for template rendering.
@@ -161,7 +179,7 @@ def _translate(self, sparse_index: bool, sparse_cols: bool):
DATA_CLASS = "data"
BLANK_CLASS = "blank"
- BLANK_VALUE = " "
+ BLANK_VALUE = blank
# construct render dict
d = {
@@ -395,6 +413,42 @@ def _translate_body(
body.append(index_headers + data)
return body
+ def _translate_latex(self, d: dict) -> None:
+ r"""
+ Post-process the default render dict for the LaTeX template format.
+
+ Processing items included are:
+ - Remove hidden columns from the non-headers part of the body.
+ - Place cellstyles directly in td cells rather than use cellstyle_map.
+ - Remove hidden indexes or reinsert missing th elements if part of multiindex
+ or multirow sparsification (so that \multirow and \multicol work correctly).
+ """
+ d["head"] = [[col for col in row if col["is_visible"]] for row in d["head"]]
+ body = []
+ for r, row in enumerate(d["body"]):
+ if self.hidden_index:
+ row_body_headers = []
+ else:
+ row_body_headers = [
+ {
+ **col,
+ "display_value": col["display_value"]
+ if col["is_visible"]
+ else "",
+ }
+ for col in row
+ if col["type"] == "th"
+ ]
+
+ row_body_cells = [
+ {**col, "cellstyle": self.ctx[r, c - self.data.index.nlevels]}
+ for c, col in enumerate(row)
+ if (col["is_visible"] and col["type"] == "td")
+ ]
+
+ body.append(row_body_headers + row_body_cells)
+ d["body"] = body
+
def format(
self,
formatter: ExtFormatter | None = None,
@@ -996,3 +1050,140 @@ def _translate(self, styler_data: FrameOrSeriesUnion, uuid: str, d: dict):
d["table_styles"].extend(self.table_styles)
return d
+
+
+def _parse_latex_table_wrapping(table_styles: CSSStyles, caption: str | None) -> bool:
+ """
+ Indicate whether LaTeX {tabular} should be wrapped with a {table} environment.
+
+ Parses the `table_styles` and detects any selectors which must be included outside
+ of {tabular}, in which case wrapping must occur and True is returned; a caption
+ likewise requires wrapping.
+ """
+ IGNORED_WRAPPERS = ["toprule", "midrule", "bottomrule", "column_format"]
+ # ignored selectors are included with {tabular} so do not need wrapping
+ return (
+ table_styles is not None
+ and any(d["selector"] not in IGNORED_WRAPPERS for d in table_styles)
+ ) or caption is not None
+
+
+def _parse_latex_table_styles(table_styles: CSSStyles, selector: str) -> str | None:
+ """
+ Return the first 'props' 'value' from ``table_styles`` identified by ``selector``.
+
+ Examples
+ --------
+ >>> table_styles = [{'selector': 'foo', 'props': [('attr','value')]},
+ ... {'selector': 'bar', 'props': [('attr', 'overwritten')]},
+ ... {'selector': 'bar', 'props': [('a1', 'baz'), ('a2', 'ignore')]}]
+ >>> _parse_latex_table_styles(table_styles, selector='bar')
+ 'baz'
+
+ Notes
+ -----
+ The replacement of "§" with ":" works around the CSS problem that ":" has structural
+ significance and cannot appear in a props value, yet is often required in LaTeX labels.
+ """
+ for style in table_styles[::-1]: # in reverse for most recently applied style
+ if style["selector"] == selector:
+ return str(style["props"][0][1]).replace("§", ":")
+ return None
+
+
+def _parse_latex_cell_styles(latex_styles: CSSList, display_value: str) -> str:
+ r"""
+ Build the ``display_value`` string by nesting LaTeX commands from ``latex_styles`` around it.
+
+ This method builds a recursive latex chain of commands based on the
+ CSSList input, nested around ``display_value``.
+
+ If a CSS style is given as ('<command>', '<options>') this is translated to
+ '\<command><options>{display_value}', and this value is treated as the
+ display value for the next iteration.
+
+ The most recent style forms the inner component, for example for styles:
+ `[('c1', 'o1'), ('c2', 'o2')]` this returns: `\c1o1{\c2o2{display_value}}`
+
+ Sometimes latex commands have to be wrapped with curly braces in different ways,
+ so parsing flags identify the different behaviours:
+
+ - `--rwrap` : `\<command><options>{<display_value>}`
+ - `--wrap` : `{\<command><options> <display_value>}`
+ - `--nowrap` : `\<command><options> <display_value>`
+ - `--lwrap` : `{\<command><options>} <display_value>`
+ - `--dwrap` : `{\<command><options>}{<display_value>}`
+
+ For example for styles:
+ `[('c1', 'o1--wrap'), ('c2', 'o2')]` this returns: `{\c1o1 \c2o2{display_value}}`
+ """
+ for (command, options) in latex_styles[::-1]: # in reverse for most recent style
+ formatter = {
+ "--wrap": f"{{\\{command}--to_parse {display_value}}}",
+ "--nowrap": f"\\{command}--to_parse {display_value}",
+ "--lwrap": f"{{\\{command}--to_parse}} {display_value}",
+ "--rwrap": f"\\{command}--to_parse{{{display_value}}}",
+ "--dwrap": f"{{\\{command}--to_parse}}{{{display_value}}}",
+ }
+ display_value = f"\\{command}{options} {display_value}"
+ for arg in ["--nowrap", "--wrap", "--lwrap", "--rwrap", "--dwrap"]:
+ if arg in str(options):
+ display_value = formatter[arg].replace(
+ "--to_parse", _parse_latex_options_strip(value=options, arg=arg)
+ )
+ break # only ever one purposeful entry
+ return display_value
+
+
+def _parse_latex_header_span(
+ cell: dict[str, Any], multirow_align: str, multicol_align: str, wrap: bool = False
+) -> str:
+ r"""
+ Refactor the cell `display_value` if a 'colspan' or 'rowspan' attribute is present.
+
+ 'rowspan' and 'colspan' do not occur simultaneously. If they are detected then
+ the `display_value` is altered to a LaTeX `multirow` or `multicol` command
+ respectively, with the appropriate cell-span.
+
+ ``wrap`` is used to enclose the `display_value` in braces, which is needed for
+ column headers when using the siunitx package.
+
+ Requires the package {multirow}, whereas multicol support is usually built in
+ to the {tabular} environment.
+
+ Examples
+ --------
+ >>> cell = {'display_value': 'text', 'attributes': 'colspan="3"'}
+ >>> _parse_latex_header_span(cell, 't', 'c')
+ '\multicolumn{3}{c}{text}'
+ """
+ if "attributes" in cell:
+ attrs = cell["attributes"]
+ if 'colspan="' in attrs:
+ colspan = attrs[attrs.find('colspan="') + 9 :] # len('colspan="') = 9
+ colspan = int(colspan[: colspan.find('"')])
+ return (
+ f"\\multicolumn{{{colspan}}}{{{multicol_align}}}"
+ f"{{{cell['display_value']}}}"
+ )
+ elif 'rowspan="' in attrs:
+ rowspan = attrs[attrs.find('rowspan="') + 9 :]
+ rowspan = int(rowspan[: rowspan.find('"')])
+ return (
+ f"\\multirow[{multirow_align}]{{{rowspan}}}{{*}}"
+ f"{{{cell['display_value']}}}"
+ )
+ if wrap:
+ return f"{{{cell['display_value']}}}"
+ else:
+ return cell["display_value"]
+
+
+def _parse_latex_options_strip(value: str | int | float, arg: str) -> str:
+ """
+ Strip a css_value which may have latex wrapping arguments, css comment identifiers,
+ and whitespace, to a valid string for latex options parsing.
+
+ For example: 'red /* --wrap */ ' --> 'red'
+ """
+ return str(value).replace(arg, "").replace("/*", "").replace("*/", "").strip()
diff --git a/pandas/io/formats/templates/latex.tpl b/pandas/io/formats/templates/latex.tpl
new file mode 100644
index 0000000000000..e5db6ad8ca7f8
--- /dev/null
+++ b/pandas/io/formats/templates/latex.tpl
@@ -0,0 +1,49 @@
+{% if parse_wrap(table_styles, caption) %}
+\begin{table}
+{%- set position = parse_table(table_styles, 'position') %}
+{%- if position is not none %}
+[{{position}}]
+{%- endif %}
+
+{% set position_float = parse_table(table_styles, 'position_float') %}
+{% if position_float is not none %}
+\{{position_float}}
+{% endif %}
+{% if caption %}
+\caption{% raw %}{{% endraw %}{{caption}}{% raw %}}{% endraw %}
+
+{% endif %}
+{% for style in table_styles %}
+{% if style['selector'] not in ['position', 'position_float', 'caption', 'toprule', 'midrule', 'bottomrule', 'column_format'] %}
+\{{style['selector']}}{{parse_table(table_styles, style['selector'])}}
+{% endif %}
+{% endfor %}
+{% endif %}
+\begin{tabular}
+{%- set column_format = parse_table(table_styles, 'column_format') %}
+{% raw %}{{% endraw %}{{column_format}}{% raw %}}{% endraw %}
+
+{% set toprule = parse_table(table_styles, 'toprule') %}
+{% if toprule is not none %}
+\{{toprule}}
+{% endif %}
+{% for row in head %}
+{% for c in row %}{%- if not loop.first %} & {% endif %}{{parse_header(c, multirow_align, multicol_align, True)}}{% endfor %} \\
+{% endfor %}
+{% set midrule = parse_table(table_styles, 'midrule') %}
+{% if midrule is not none %}
+\{{midrule}}
+{% endif %}
+{% for row in body %}
+{% for c in row %}{% if not loop.first %} & {% endif %}
+ {%- if c.type == 'th' %}{{parse_header(c, multirow_align, multicol_align)}}{% else %}{{parse_cell(c.cellstyle, c.display_value)}}{% endif %}
+{%- endfor %} \\
+{% endfor %}
+{% set bottomrule = parse_table(table_styles, 'bottomrule') %}
+{% if bottomrule is not none %}
+\{{bottomrule}}
+{% endif %}
+\end{tabular}
+{% if parse_wrap(table_styles, caption) %}
+\end{table}
+{% endif %}
diff --git a/pandas/tests/io/formats/style/test_non_unique.py b/pandas/tests/io/formats/style/test_non_unique.py
index 9cbc2db87fbde..fc04169091c09 100644
--- a/pandas/tests/io/formats/style/test_non_unique.py
+++ b/pandas/tests/io/formats/style/test_non_unique.py
@@ -1,3 +1,5 @@
+from textwrap import dedent
+
import pytest
from pandas import (
@@ -122,3 +124,17 @@ def test_hide_columns_non_unique(styler):
assert ctx["body"][0][1]["is_visible"] is True
assert ctx["body"][0][2]["is_visible"] is False
assert ctx["body"][0][3]["is_visible"] is False
+
+
+def test_latex_non_unique(styler):
+ result = styler.to_latex()
+ assert result == dedent(
+ """\
+ \\begin{tabular}{lrrr}
+ {} & {c} & {d} & {d} \\\\
+ i & 1.000000 & 2.000000 & 3.000000 \\\\
+ j & 4.000000 & 5.000000 & 6.000000 \\\\
+ j & 7.000000 & 8.000000 & 9.000000 \\\\
+ \\end{tabular}
+ """
+ )
diff --git a/pandas/tests/io/formats/style/test_to_latex.py b/pandas/tests/io/formats/style/test_to_latex.py
new file mode 100644
index 0000000000000..5945502a4c90c
--- /dev/null
+++ b/pandas/tests/io/formats/style/test_to_latex.py
@@ -0,0 +1,440 @@
+from textwrap import dedent
+
+import pytest
+
+from pandas import (
+ DataFrame,
+ MultiIndex,
+ option_context,
+)
+
+pytest.importorskip("jinja2")
+from pandas.io.formats.style import Styler
+from pandas.io.formats.style_render import (
+ _parse_latex_cell_styles,
+ _parse_latex_header_span,
+ _parse_latex_table_styles,
+ _parse_latex_table_wrapping,
+)
+
+
+@pytest.fixture
+def df():
+ return DataFrame({"A": [0, 1], "B": [-0.61, -1.22], "C": ["ab", "cd"]})
+
+
+@pytest.fixture
+def styler(df):
+ return Styler(df, uuid_len=0, precision=2)
+
+
+def test_minimal_latex_tabular(styler):
+ expected = dedent(
+ """\
+ \\begin{tabular}{lrrl}
+ {} & {A} & {B} & {C} \\\\
+ 0 & 0 & -0.61 & ab \\\\
+ 1 & 1 & -1.22 & cd \\\\
+ \\end{tabular}
+ """
+ )
+ assert styler.to_latex() == expected
+
+
+def test_tabular_hrules(styler):
+ expected = dedent(
+ """\
+ \\begin{tabular}{lrrl}
+ \\toprule
+ {} & {A} & {B} & {C} \\\\
+ \\midrule
+ 0 & 0 & -0.61 & ab \\\\
+ 1 & 1 & -1.22 & cd \\\\
+ \\bottomrule
+ \\end{tabular}
+ """
+ )
+ assert styler.to_latex(hrules=True) == expected
+
+
+def test_tabular_custom_hrules(styler):
+ styler.set_table_styles(
+ [
+ {"selector": "toprule", "props": ":hline"},
+ {"selector": "bottomrule", "props": ":otherline"},
+ ]
+ ) # no midrule
+ expected = dedent(
+ """\
+ \\begin{tabular}{lrrl}
+ \\hline
+ {} & {A} & {B} & {C} \\\\
+ 0 & 0 & -0.61 & ab \\\\
+ 1 & 1 & -1.22 & cd \\\\
+ \\otherline
+ \\end{tabular}
+ """
+ )
+ assert styler.to_latex() == expected
+
+
+def test_column_format(styler):
+ # default setting is already tested in `test_minimal_latex_tabular`
+ styler.set_table_styles([{"selector": "column_format", "props": ":cccc"}])
+
+ assert "\\begin{tabular}{rrrr}" in styler.to_latex(column_format="rrrr")
+ styler.set_table_styles([{"selector": "column_format", "props": ":r|r|cc"}])
+ assert "\\begin{tabular}{r|r|cc}" in styler.to_latex()
+
+
+def test_siunitx_cols(styler):
+ expected = dedent(
+ """\
+ \\begin{tabular}{lSSl}
+ {} & {A} & {B} & {C} \\\\
+ 0 & 0 & -0.61 & ab \\\\
+ 1 & 1 & -1.22 & cd \\\\
+ \\end{tabular}
+ """
+ )
+ assert styler.to_latex(siunitx=True) == expected
+
+
+def test_position(styler):
+ assert "\\begin{table}[h!]" in styler.to_latex(position="h!")
+ assert "\\end{table}" in styler.to_latex(position="h!")
+ styler.set_table_styles([{"selector": "position", "props": ":b!"}])
+ assert "\\begin{table}[b!]" in styler.to_latex()
+ assert "\\end{table}" in styler.to_latex()
+
+
+def test_label(styler):
+ assert "\\label{text}" in styler.to_latex(label="text")
+ styler.set_table_styles([{"selector": "label", "props": ":{more §text}"}])
+ assert "\\label{more :text}" in styler.to_latex()
+
+
+def test_position_float_raises(styler):
+ msg = "`position_float` should be one of 'raggedright', 'raggedleft', 'centering',"
+ with pytest.raises(ValueError, match=msg):
+ styler.to_latex(position_float="bad_string")
+
+
+@pytest.mark.parametrize("label", [(None, ""), ("text", "\\label{text}")])
+@pytest.mark.parametrize("position", [(None, ""), ("h!", "{table}[h!]")])
+@pytest.mark.parametrize("caption", [(None, ""), ("text", "\\caption{text}")])
+@pytest.mark.parametrize("column_format", [(None, ""), ("rcrl", "{tabular}{rcrl}")])
+@pytest.mark.parametrize("position_float", [(None, ""), ("centering", "\\centering")])
+def test_kwargs_combinations(
+ styler, label, position, caption, column_format, position_float
+):
+ result = styler.to_latex(
+ label=label[0],
+ position=position[0],
+ caption=caption[0],
+ column_format=column_format[0],
+ position_float=position_float[0],
+ )
+ assert label[1] in result
+ assert position[1] in result
+ assert caption[1] in result
+ assert column_format[1] in result
+ assert position_float[1] in result
+
+
+def test_custom_table_styles(styler):
+ styler.set_table_styles(
+ [
+ {"selector": "mycommand", "props": ":{myoptions}"},
+ {"selector": "mycommand2", "props": ":{myoptions2}"},
+ ]
+ )
+ expected = dedent(
+ """\
+ \\begin{table}
+ \\mycommand{myoptions}
+ \\mycommand2{myoptions2}
+ """
+ )
+ assert expected in styler.to_latex()
+
+
+def test_cell_styling(styler):
+ styler.highlight_max(props="itshape:;Huge:--wrap;")
+ expected = dedent(
+ """\
+ \\begin{tabular}{lrrl}
+ {} & {A} & {B} & {C} \\\\
+ 0 & 0 & \\itshape {\\Huge -0.61} & ab \\\\
+ 1 & \\itshape {\\Huge 1} & -1.22 & \\itshape {\\Huge cd} \\\\
+ \\end{tabular}
+ """
+ )
+ assert expected == styler.to_latex()
+
+
+def test_multiindex_columns(df):
+ cidx = MultiIndex.from_tuples([("A", "a"), ("A", "b"), ("B", "c")])
+ df.columns = cidx
+ expected = dedent(
+ """\
+ \\begin{tabular}{lrrl}
+ {} & \\multicolumn{2}{r}{A} & {B} \\\\
+ {} & {a} & {b} & {c} \\\\
+ 0 & 0 & -0.61 & ab \\\\
+ 1 & 1 & -1.22 & cd \\\\
+ \\end{tabular}
+ """
+ )
+ s = df.style.format(precision=2)
+ assert expected == s.to_latex()
+
+ # non-sparse
+ expected = dedent(
+ """\
+ \\begin{tabular}{lrrl}
+ {} & {A} & {A} & {B} \\\\
+ {} & {a} & {b} & {c} \\\\
+ 0 & 0 & -0.61 & ab \\\\
+ 1 & 1 & -1.22 & cd \\\\
+ \\end{tabular}
+ """
+ )
+ s = df.style.format(precision=2)
+ assert expected == s.to_latex(sparse_columns=False)
+
+
+def test_multiindex_row(df):
+ ridx = MultiIndex.from_tuples([("A", "a"), ("A", "b"), ("B", "c")])
+ df.loc[2, :] = [2, -2.22, "de"]
+ df = df.astype({"A": int})
+ df.index = ridx
+ expected = dedent(
+ """\
+ \\begin{tabular}{llrrl}
+ {} & {} & {A} & {B} & {C} \\\\
+ \\multirow[c]{2}{*}{A} & a & 0 & -0.61 & ab \\\\
+ & b & 1 & -1.22 & cd \\\\
+ B & c & 2 & -2.22 & de \\\\
+ \\end{tabular}
+ """
+ )
+ s = df.style.format(precision=2)
+ assert expected == s.to_latex()
+
+ # non-sparse
+ expected = dedent(
+ """\
+ \\begin{tabular}{llrrl}
+ {} & {} & {A} & {B} & {C} \\\\
+ A & a & 0 & -0.61 & ab \\\\
+ A & b & 1 & -1.22 & cd \\\\
+ B & c & 2 & -2.22 & de \\\\
+ \\end{tabular}
+ """
+ )
+ assert expected == s.to_latex(sparse_index=False)
+
+
+def test_multiindex_row_and_col(df):
+ cidx = MultiIndex.from_tuples([("Z", "a"), ("Z", "b"), ("Y", "c")])
+ ridx = MultiIndex.from_tuples([("A", "a"), ("A", "b"), ("B", "c")])
+ df.loc[2, :] = [2, -2.22, "de"]
+ df = df.astype({"A": int})
+ df.index, df.columns = ridx, cidx
+ expected = dedent(
+ """\
+ \\begin{tabular}{llrrl}
+ {} & {} & \\multicolumn{2}{l}{Z} & {Y} \\\\
+ {} & {} & {a} & {b} & {c} \\\\
+ \\multirow[b]{2}{*}{A} & a & 0 & -0.61 & ab \\\\
+ & b & 1 & -1.22 & cd \\\\
+ B & c & 2 & -2.22 & de \\\\
+ \\end{tabular}
+ """
+ )
+ s = df.style.format(precision=2)
+ assert s.to_latex(multirow_align="b", multicol_align="l") == expected
+
+ # non-sparse
+ expected = dedent(
+ """\
+ \\begin{tabular}{llrrl}
+ {} & {} & {Z} & {Z} & {Y} \\\\
+ {} & {} & {a} & {b} & {c} \\\\
+ A & a & 0 & -0.61 & ab \\\\
+ A & b & 1 & -1.22 & cd \\\\
+ B & c & 2 & -2.22 & de \\\\
+ \\end{tabular}
+ """
+ )
+ assert s.to_latex(sparse_index=False, sparse_columns=False) == expected
+
+
+def test_multiindex_columns_hidden():
+ df = DataFrame([[1, 2, 3, 4]])
+ df.columns = MultiIndex.from_tuples([("A", 1), ("A", 2), ("A", 3), ("B", 1)])
+ s = df.style
+ assert "{tabular}{lrrrr}" in s.to_latex()
+ s.set_table_styles([]) # reset the position command
+ s.hide_columns([("A", 2)])
+ assert "{tabular}{lrrr}" in s.to_latex()
+
+
+def test_sparse_options(df):
+ cidx = MultiIndex.from_tuples([("Z", "a"), ("Z", "b"), ("Y", "c")])
+ ridx = MultiIndex.from_tuples([("A", "a"), ("A", "b"), ("B", "c")])
+ df.loc[2, :] = [2, -2.22, "de"]
+ df.index, df.columns = ridx, cidx
+ s = df.style
+
+ latex1 = s.to_latex()
+
+ with option_context("styler.sparse.index", True):
+ latex2 = s.to_latex()
+ assert latex1 == latex2
+
+ with option_context("styler.sparse.index", False):
+ latex2 = s.to_latex()
+ assert latex1 != latex2
+
+ with option_context("styler.sparse.columns", True):
+ latex2 = s.to_latex()
+ assert latex1 == latex2
+
+ with option_context("styler.sparse.columns", False):
+ latex2 = s.to_latex()
+ assert latex1 != latex2
+
+
+def test_hidden_index(styler):
+ styler.hide_index()
+ expected = dedent(
+ """\
+ \\begin{tabular}{rrl}
+ {A} & {B} & {C} \\\\
+ 0 & -0.61 & ab \\\\
+ 1 & -1.22 & cd \\\\
+ \\end{tabular}
+ """
+ )
+ assert styler.to_latex() == expected
+
+
+def test_comprehensive(df):
+ # test as many low level features simultaneously as possible
+ cidx = MultiIndex.from_tuples([("Z", "a"), ("Z", "b"), ("Y", "c")])
+ ridx = MultiIndex.from_tuples([("A", "a"), ("A", "b"), ("B", "c")])
+ df.loc[2, :] = [2, -2.22, "de"]
+ df = df.astype({"A": int})
+ df.index, df.columns = ridx, cidx
+ s = df.style
+ s.set_caption("mycap")
+ s.set_table_styles(
+ [
+ {"selector": "label", "props": ":{fig§item}"},
+ {"selector": "position", "props": ":h!"},
+ {"selector": "position_float", "props": ":centering"},
+ {"selector": "column_format", "props": ":rlrlr"},
+ {"selector": "toprule", "props": ":toprule"},
+ {"selector": "midrule", "props": ":midrule"},
+ {"selector": "bottomrule", "props": ":bottomrule"},
+ {"selector": "rowcolors", "props": ":{3}{pink}{}"}, # custom command
+ ]
+ )
+ s.highlight_max(axis=0, props="textbf:--rwrap;cellcolor:[rgb]{1,1,0.6}--rwrap")
+ s.highlight_max(axis=None, props="Huge:--wrap;", subset=[("Z", "a"), ("Z", "b")])
+
+ expected = (
+ """\
+\\begin{table}[h!]
+\\centering
+\\caption{mycap}
+\\label{fig:item}
+\\rowcolors{3}{pink}{}
+\\begin{tabular}{rlrlr}
+\\toprule
+{} & {} & \\multicolumn{2}{r}{Z} & {Y} \\\\
+{} & {} & {a} & {b} & {c} \\\\
+\\midrule
+\\multirow[c]{2}{*}{A} & a & 0 & \\textbf{\\cellcolor[rgb]{1,1,0.6}{-0.61}} & ab \\\\
+ & b & 1 & -1.22 & cd \\\\
+B & c & \\textbf{\\cellcolor[rgb]{1,1,0.6}{{\\Huge 2}}} & -2.22 & """
+ """\
+\\textbf{\\cellcolor[rgb]{1,1,0.6}{de}} \\\\
+\\bottomrule
+\\end{tabular}
+\\end{table}
+"""
+ )
+ assert s.format(precision=2).to_latex() == expected
+
+
+def test_parse_latex_table_styles(styler):
+ styler.set_table_styles(
+ [
+ {"selector": "foo", "props": [("attr", "value")]},
+ {"selector": "bar", "props": [("attr", "overwritten")]},
+ {"selector": "bar", "props": [("attr", "baz"), ("attr2", "ignored")]},
+ {"selector": "label", "props": [("", "{fig§item}")]},
+ ]
+ )
+ assert _parse_latex_table_styles(styler.table_styles, "bar") == "baz"
+
+ # test '§' replaced by ':' [for CSS compatibility]
+ assert _parse_latex_table_styles(styler.table_styles, "label") == "{fig:item}"
+
+
+def test_parse_latex_cell_styles_basic(): # test nesting
+ cell_style = [("itshape", "--rwrap"), ("cellcolor", "[rgb]{0,1,1}--rwrap")]
+ expected = "\\itshape{\\cellcolor[rgb]{0,1,1}{text}}"
+ assert _parse_latex_cell_styles(cell_style, "text") == expected
+
+
+@pytest.mark.parametrize(
+ "wrap_arg, expected",
+ [ # test wrapping
+ ("", "\\<command><options> <display_value>"),
+ ("--wrap", "{\\<command><options> <display_value>}"),
+ ("--nowrap", "\\<command><options> <display_value>"),
+ ("--lwrap", "{\\<command><options>} <display_value>"),
+ ("--dwrap", "{\\<command><options>}{<display_value>}"),
+ ("--rwrap", "\\<command><options>{<display_value>}"),
+ ],
+)
+def test_parse_latex_cell_styles_braces(wrap_arg, expected):
+ cell_style = [("<command>", f"<options>{wrap_arg}")]
+ assert _parse_latex_cell_styles(cell_style, "<display_value>") == expected
+
+
+def test_parse_latex_header_span():
+ cell = {"attributes": 'colspan="3"', "display_value": "text"}
+ expected = "\\multicolumn{3}{Y}{text}"
+ assert _parse_latex_header_span(cell, "X", "Y") == expected
+
+ cell = {"attributes": 'rowspan="5"', "display_value": "text"}
+ expected = "\\multirow[X]{5}{*}{text}"
+ assert _parse_latex_header_span(cell, "X", "Y") == expected
+
+ cell = {"display_value": "text"}
+ assert _parse_latex_header_span(cell, "X", "Y") == "text"
+
+
+def test_parse_latex_table_wrapping(styler):
+ styler.set_table_styles(
+ [
+ {"selector": "toprule", "props": ":value"},
+ {"selector": "bottomrule", "props": ":value"},
+ {"selector": "midrule", "props": ":value"},
+ {"selector": "column_format", "props": ":value"},
+ ]
+ )
+ assert _parse_latex_table_wrapping(styler.table_styles, styler.caption) is False
+ assert _parse_latex_table_wrapping(styler.table_styles, "some caption") is True
+ styler.set_table_styles(
+ [
+ {"selector": "not-ignored", "props": ":value"},
+ ],
+ overwrite=False,
+ )
+ assert _parse_latex_table_wrapping(styler.table_styles, None) is True
| ## Enhancing Styler to allow LaTeX Input
- [x] closes #21673
also indirectly (by providing an alternative to DataFrame.to_latex):
- [x] closes #38328 (enh: adding highlighting options to to_latex function)
- [x] closes #28291 (enh: add option to format \hline in to_latex)
- [x] closes #26278 (easier digit format in to_latex) (although this seems to have been separately addressed)
- [x] closes #26111 (to_latex adds extra empty header row when index has a name even if index=False)
- [x] closes #20008 (to_latex shifts column names if MultiIndex contains empty label)
- [x] closes #13203 (enhancement: add optional row/col parameter to to_latex formatter)
- [x] closes #41388 (text truncates in to_latex)
- [x] closes #6491 (text truncates in to_latex)
- [x] closes #41636 (bug in latex formatting decimal of complex numbers)
This PR leverages the conditional rendering mechanics and unit tests in `Styler` to create a pure LaTeX version, i.e. a Styler that instead of CSS (attribute, value) tuples has LaTeX (command, options) tuples and renders directly to LaTeX with nested cell styles.
An extension to this is provided in another PR which converts an HTML-CSS Styler to a LaTeX Styler, and then renders in LaTeX.
### How is this achieved?
#### Internally:
- [x] a new jinja2 template is created for generating `latex` output.
- [x] a new `Styler._render_latex()` method is added to invoke the new template.
- [x] a `Styler._translate_latex` method is added to make structural changes to the usual render dict `d` to make it suitable for latex template.
- [x] parsing functions are added which do simple tasks to facilitate the jinja2 template's operation, including sparsifying multiindexes
#### For Users:
- [x] a new `Styler.to_latex()` method is introduced to give the user control.
## Comparison with DataFrame.to_latex()
### Input Arguments
- Replicating the **io** arguments: ``buf``, ``encoding``
- Replicating the **LaTeX** arguments: ``position``, ``caption``, ``label``, ``sparsify``, ``column_format``
- Adding additional **LaTeX** arguments: ``position_float``, ``hrules``
- Removing all of the **formatting** arguments: ``na_rep``, ``formatters``, ``float_format``, ``escape``, ``decimal``
- The pattern ``styler.format(...).to_latex(...)`` fully replicates the functionality, and can be intermediately viewed in a Notebook.
- The pattern ``styler.format(..)...format(..).to_latex(...)`` performs better formatting than a single ``to_latex(..)`` with formatting options could replicate.
- For code maintenance it is better to maintain only the ``.format`` method downstream, rather than duplicating formatting logic in ``.to_latex``.
- Removing the pseudo-formatting arguments: ``columns``, ``index``, ``index_names``, ``header``
- The pattern ``styler.hide_index().hide_columns()`` can replicate the first three; the last seemed a bit unnecessary.
- Modifying the multiindex args:
- Now we have <s>``sparsify {bool}``</s> ``sparse_index`` and ``sparse_columns`` and ``multicol_align {str}`` and ``multirow_align {str}``.
- Previously the options were ``sparsify {bool}``, ``multicolumn {bool}``, ``multirow {bool}`` and ``multicolumn_format {str}``
- Additional functionality not coded (yet?): ``bold_rows`` (maybe ``bold_labels``), and ``longtable``.
### Performance
Suppose a user will never want to create a LaTeX table bigger than (200, 20), so we test that size:
| Method | Time |
|---------|------|
| df.to_latex() | 151ms |
| df.style.to_latex() | 43ms |
| df.to_latex(float_format="{:.2f}".format) | 65ms |
| <s>df.style.format("{:.2f}".format).to_latex()</s> | <s>61ms</s> |
| df.style.format("{:.2f}".format).to_latex() | 45ms |
(this was improved by #41269)
Styler is broadly unaffected by adding formatting methods, although `DataFrame.to_latex` is, oddly, much faster with a `float_format` than without one. In any case, the conclusion seems to be that for frames of this size and smaller, performance is broadly the same, if not a bit better with Styler.
### Docs
Here are the rendered versions of the to_latex documentation
[docs_to_latex.zip](https://github.com/pandas-dev/pandas/files/6292495/docs_to_latex.zip)
| https://api.github.com/repos/pandas-dev/pandas/pulls/40422 | 2021-03-13T20:49:09Z | 2021-05-24T13:44:07Z | 2021-05-24T13:44:05Z | 2021-05-26T20:02:47Z |
DOC: Update backticks missing in pandas.DataFrame.query GH40375 | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index c048088ec2350..5fe1e44379cca 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3784,7 +3784,7 @@ def _box_col_values(self, values, loc: int) -> Series:
# Unsorted
def query(self, expr: str, inplace: bool = False, **kwargs):
- r"""
+ """
Query the columns of a DataFrame with a boolean expression.
Parameters
@@ -3800,7 +3800,7 @@ def query(self, expr: str, inplace: bool = False, **kwargs):
by surrounding them in backticks. Thus, column names containing spaces
or punctuations (besides underscores) or starting with digits must be
surrounded by backticks. (For example, a column named "Area (cm^2)" would
- be referenced as \`Area (cm^2)\`). Column names which are Python keywords
+ be referenced as ```Area (cm^2)```). Column names which are Python keywords
(like "list", "for", "import", etc) cannot be used.
For example, if one of your columns is called ``a a`` and you want
| - [ ] closes #40375
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [ ] whatsnew entry
A new suggestion was made to include code styling.
Beginner's question: what is the procedure when my pull request (in this case #40400) is merged and closes an issue, after which a new suggestion for further improvement is made in the closed issue? Can I just implement the new suggestion and make a new pull request like this, or is the procedure different?
| https://api.github.com/repos/pandas-dev/pandas/pulls/40419 | 2021-03-13T16:43:50Z | 2021-03-14T23:43:56Z | 2021-03-14T23:43:56Z | 2021-03-15T05:22:19Z |
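The backtick quoting that this docstring change documents can be exercised directly (a small illustrative frame; extended backtick support for punctuation in column names assumes pandas >= 1.0):

```python
import pandas as pd

# A column name with spaces and punctuation must be backtick-quoted in query()
df = pd.DataFrame({"Area (cm^2)": [1.0, 2.5, 4.0], "height": [3, 4, 5]})

result = df.query("`Area (cm^2)` > 1.5")
print(result)
```

Column names that are Python keywords still cannot be used, even with backticks.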
CI: flaky zip test | diff --git a/pandas/tests/io/test_gcs.py b/pandas/tests/io/test_gcs.py
index 3000aeea90a0f..356c82293359a 100644
--- a/pandas/tests/io/test_gcs.py
+++ b/pandas/tests/io/test_gcs.py
@@ -97,6 +97,17 @@ def test_to_read_gcs(gcs_buffer, format):
tm.assert_frame_equal(df1, df2)
+def assert_equal_zip_safe(result: bytes, expected: bytes):
+ """
+ We would like to assert these are equal, but the 11th byte is a last-modified
+ timestamp, which in some builds is off-by-one, so we check around that.
+
+ See https://en.wikipedia.org/wiki/ZIP_(file_format)#File_headers
+ """
+ assert result[:10] == expected[:10]
+ assert result[11:] == expected[11:]
+
+
@td.skip_if_no("gcsfs")
@pytest.mark.parametrize("encoding", ["utf-8", "cp1251"])
def test_to_csv_compression_encoding_gcs(gcs_buffer, compression_only, encoding):
@@ -121,7 +132,10 @@ def test_to_csv_compression_encoding_gcs(gcs_buffer, compression_only, encoding)
# write compressed file with explicit compression
path_gcs = "gs://test/test.csv"
df.to_csv(path_gcs, compression=compression, encoding=encoding)
- assert gcs_buffer.getvalue() == buffer.getvalue()
+ res = gcs_buffer.getvalue()
+ expected = buffer.getvalue()
+ assert_equal_zip_safe(res, expected)
+
read_df = read_csv(
path_gcs, index_col=0, compression=compression_only, encoding=encoding
)
@@ -136,11 +150,7 @@ def test_to_csv_compression_encoding_gcs(gcs_buffer, compression_only, encoding)
res = gcs_buffer.getvalue()
expected = buffer.getvalue()
- # We would like to assert these are equal, but the 11th byte is a last-modified
- # timestamp, which in some builds is off-by-one, so we check around that
- # See https://en.wikipedia.org/wiki/ZIP_(file_format)#File_headers
- assert res[:10] == expected[:10]
- assert res[11:] == expected[11:]
+ assert_equal_zip_safe(res, expected)
read_df = read_csv(path_gcs, index_col=0, compression="infer", encoding=encoding)
tm.assert_frame_equal(df, read_df)
| We catch the flaky byte in one place, this catches it in one more | https://api.github.com/repos/pandas-dev/pandas/pulls/40417 | 2021-03-13T15:57:12Z | 2021-03-14T23:33:31Z | 2021-03-14T23:33:31Z | 2021-03-15T00:00:07Z |
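Why byte 10 is the flaky one: the ZIP local file header has a fixed layout in which the two-byte last-modified time sits at offset 10, followed by the date at offset 12. A standard-library sketch of reading that header:

```python
import io
import struct
import zipfile

# Build a small zip archive in memory
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("data.csv", b"a,b\n1,2\n")
raw = buf.getvalue()

# Local file header: signature (4 bytes), version (2), flags (2), method (2),
# mod time (2, at offset 10), mod date (2, at offset 12), ...
sig, _ver, _flags, _method, mod_time, mod_date = struct.unpack_from("<IHHHHH", raw, 0)
assert sig == 0x04034B50  # the "PK\x03\x04" magic
```

Two archives of identical content written a moment apart can therefore differ only in those timestamp bytes, which is exactly what the test skips over.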
BUG: Do not attempt to cache unhashable values in to_datetime (#39756) | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index 56a5412d4ecfc..5bb39bc75d6ed 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -592,6 +592,7 @@ Reshaping
- Bug in :meth:`DataFrame.pivot_table` returning a ``MultiIndex`` for a single value when operating on and empty ``DataFrame`` (:issue:`13483`)
- Allow :class:`Index` to be passed to the :func:`numpy.all` function (:issue:`40180`)
- Bug in :meth:`DataFrame.stack` not preserving ``CategoricalDtype`` in a ``MultiIndex`` (:issue:`36991`)
+- Bug in :func:`to_datetime` raising error when input sequence contains unhashable items (:issue:`39756`)
Sparse
^^^^^^
diff --git a/pandas/core/tools/datetimes.py b/pandas/core/tools/datetimes.py
index 9822356d11d7c..67e7792b10330 100644
--- a/pandas/core/tools/datetimes.py
+++ b/pandas/core/tools/datetimes.py
@@ -147,7 +147,11 @@ def should_cache(
assert 0 < unique_share < 1, "unique_share must be in next bounds: (0; 1)"
- unique_elements = set(islice(arg, check_count))
+ try:
+ # We can't cache if the items are not hashable.
+ unique_elements = set(islice(arg, check_count))
+ except TypeError:
+ return False
if len(unique_elements) > check_count * unique_share:
do_caching = False
return do_caching
diff --git a/pandas/tests/tools/test_to_datetime.py b/pandas/tests/tools/test_to_datetime.py
index 999a04a81406e..91f6c100419b6 100644
--- a/pandas/tests/tools/test_to_datetime.py
+++ b/pandas/tests/tools/test_to_datetime.py
@@ -1651,6 +1651,12 @@ def test_to_datetime_unprocessable_input(self, cache):
with pytest.raises(TypeError, match=msg):
to_datetime([1, "1"], errors="raise", cache=cache)
+ @pytest.mark.parametrize("cache", [True, False])
+ def test_to_datetime_unhashable_input(self, cache):
+ series = Series([["a"]] * 100)
+ result = to_datetime(series, errors="ignore", cache=cache)
+ tm.assert_series_equal(series, result)
+
def test_to_datetime_other_datetime64_units(self):
# 5/25/2012
scalar = np.int64(1337904000000000).view("M8[us]")
| - [x] closes #39756
- [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/40414 | 2021-03-13T07:34:37Z | 2021-03-14T23:43:26Z | 2021-03-14T23:43:26Z | 2021-03-16T01:06:01Z |
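The fix above can be sketched standalone: building a `set` raises `TypeError` for unhashable items (such as lists), so caching is simply declined. The function name and thresholds here are illustrative, not the pandas internals:

```python
from itertools import islice

def should_cache_sketch(arg, check_count=50, unique_share=0.7):
    """Decide whether conversion of `arg` would benefit from a value cache."""
    try:
        # set() raises TypeError if the first check_count items are unhashable
        unique_elements = set(islice(arg, check_count))
    except TypeError:
        return False
    # Cache only when there are few enough unique values to be worth it
    return len(unique_elements) <= check_count * unique_share

print(should_cache_sketch(["2020-01-01"] * 100))  # → True: one unique value
print(should_cache_sketch([["a"]] * 100))         # → False: lists are unhashable
```

With caching declined, the unhashable input simply flows through element-wise conversion, which is what the new test asserts.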
DEPR: error_bad_lines and warn_bad_lines for read_csv | diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index 8ca23c68657a1..b4e35d1f22840 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -344,16 +344,33 @@ dialect : str or :class:`python:csv.Dialect` instance, default ``None``
Error handling
++++++++++++++
-error_bad_lines : boolean, default ``True``
+error_bad_lines : boolean, default ``None``
Lines with too many fields (e.g. a csv line with too many commas) will by
default cause an exception to be raised, and no ``DataFrame`` will be
returned. If ``False``, then these "bad lines" will be dropped from the
``DataFrame`` that is returned. See :ref:`bad lines <io.bad_lines>`
below.
-warn_bad_lines : boolean, default ``True``
+
+ .. deprecated:: 1.3
+ The ``on_bad_lines`` parameter should be used instead to specify behavior upon
+ encountering a bad line.
+warn_bad_lines : boolean, default ``None``
If error_bad_lines is ``False``, and warn_bad_lines is ``True``, a warning for
each "bad line" will be output.
+ .. deprecated:: 1.3
+ The ``on_bad_lines`` parameter should be used instead to specify behavior upon
+ encountering a bad line.
+on_bad_lines : {'error', 'warn', 'skip'}, default 'error'
+ Specifies what to do upon encountering a bad line (a line with too many fields).
+ Allowed values are:
+
+ - 'error', raise a ParserError when a bad line is encountered.
+ - 'warn', print a warning when a bad line is encountered and skip that line.
+ - 'skip', skip bad lines without raising or warning when they are encountered.
+
+ .. versionadded:: 1.3
+
.. _io.dtypes:
Specifying column data types
@@ -1245,7 +1262,7 @@ You can elect to skip bad lines:
.. code-block:: ipython
- In [29]: pd.read_csv(StringIO(data), error_bad_lines=False)
+ In [29]: pd.read_csv(StringIO(data), on_bad_lines="warn")
Skipping line 3: expected 3 fields, saw 4
Out[29]:
diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index 8a3d6cf63d4f1..76784fbef7a0d 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -669,6 +669,7 @@ Deprecations
- Deprecated casting ``datetime.date`` objects to ``datetime64`` when used as ``fill_value`` in :meth:`DataFrame.unstack`, :meth:`DataFrame.shift`, :meth:`Series.shift`, and :meth:`DataFrame.reindex`, pass ``pd.Timestamp(dateobj)`` instead (:issue:`39767`)
- Deprecated :meth:`.Styler.set_na_rep` and :meth:`.Styler.set_precision` in favour of :meth:`.Styler.format` with ``na_rep`` and ``precision`` as existing and new input arguments respectively (:issue:`40134`, :issue:`40425`)
- Deprecated allowing partial failure in :meth:`Series.transform` and :meth:`DataFrame.transform` when ``func`` is list-like or dict-like and raises anything but ``TypeError``; ``func`` raising anything but a ``TypeError`` will raise in a future version (:issue:`40211`)
+- Deprecated arguments ``error_bad_lines`` and ``warn_bad_lines`` in :meth:`read_csv` and :meth:`read_table` in favor of argument ``on_bad_lines`` (:issue:`15122`)
- Deprecated support for ``np.ma.mrecords.MaskedRecords`` in the :class:`DataFrame` constructor, pass ``{name: data[name] for name in data.dtype.names}`` instead (:issue:`40363`)
- Deprecated using :func:`merge` or :func:`join` on a different number of levels (:issue:`34862`)
- Deprecated the use of ``**kwargs`` in :class:`.ExcelWriter`; use the keyword argument ``engine_kwargs`` instead (:issue:`40430`)
diff --git a/pandas/_libs/parsers.pyx b/pandas/_libs/parsers.pyx
index b2d548e04eab4..7d7074988e5f0 100644
--- a/pandas/_libs/parsers.pyx
+++ b/pandas/_libs/parsers.pyx
@@ -146,6 +146,11 @@ cdef extern from "parser/tokenizer.h":
enum: ERROR_OVERFLOW
+ ctypedef enum BadLineHandleMethod:
+ ERROR,
+ WARN,
+ SKIP
+
ctypedef void* (*io_callback)(void *src, size_t nbytes, size_t *bytes_read,
int *status, const char *encoding_errors)
ctypedef int (*io_cleanup)(void *src)
@@ -198,8 +203,7 @@ cdef extern from "parser/tokenizer.h":
int usecols
int expected_fields
- int error_bad_lines
- int warn_bad_lines
+ BadLineHandleMethod on_bad_lines
# floating point options
char decimal
@@ -351,8 +355,7 @@ cdef class TextReader:
thousands=None, # bytes | str
dtype=None,
usecols=None,
- bint error_bad_lines=True,
- bint warn_bad_lines=True,
+ on_bad_lines = ERROR,
bint na_filter=True,
na_values=None,
na_fvalues=None,
@@ -435,9 +438,7 @@ cdef class TextReader:
raise ValueError('Only length-1 comment characters supported')
self.parser.commentchar = ord(comment)
- # error handling of bad lines
- self.parser.error_bad_lines = int(error_bad_lines)
- self.parser.warn_bad_lines = int(warn_bad_lines)
+ self.parser.on_bad_lines = on_bad_lines
self.skiprows = skiprows
if skiprows is not None:
@@ -454,8 +455,7 @@ cdef class TextReader:
# XXX
if skipfooter > 0:
- self.parser.error_bad_lines = 0
- self.parser.warn_bad_lines = 0
+ self.parser.on_bad_lines = SKIP
self.delimiter = delimiter
@@ -570,9 +570,6 @@ cdef class TextReader:
kh_destroy_str_starts(self.false_set)
self.false_set = NULL
- def set_error_bad_lines(self, int status) -> None:
- self.parser.error_bad_lines = status
-
def _set_quoting(self, quote_char: str | bytes | None, quoting: int):
if not isinstance(quoting, int):
raise TypeError('"quoting" must be an integer')
diff --git a/pandas/_libs/src/parser/tokenizer.c b/pandas/_libs/src/parser/tokenizer.c
index 49eb1e7855098..49797eea59ddc 100644
--- a/pandas/_libs/src/parser/tokenizer.c
+++ b/pandas/_libs/src/parser/tokenizer.c
@@ -93,8 +93,7 @@ void parser_set_default_options(parser_t *self) {
self->allow_embedded_newline = 1;
self->expected_fields = -1;
- self->error_bad_lines = 0;
- self->warn_bad_lines = 0;
+ self->on_bad_lines = ERROR;
self->commentchar = '#';
self->thousands = '\0';
@@ -457,7 +456,7 @@ static int end_line(parser_t *self) {
self->line_fields[self->lines] = 0;
// file_lines is now the actual file line number (starting at 1)
- if (self->error_bad_lines) {
+ if (self->on_bad_lines == ERROR) {
self->error_msg = malloc(bufsize);
snprintf(self->error_msg, bufsize,
"Expected %d fields in line %" PRIu64 ", saw %" PRId64 "\n",
@@ -468,7 +467,7 @@ static int end_line(parser_t *self) {
return -1;
} else {
// simply skip bad lines
- if (self->warn_bad_lines) {
+ if (self->on_bad_lines == WARN) {
// pass up error message
msg = malloc(bufsize);
snprintf(msg, bufsize,
diff --git a/pandas/_libs/src/parser/tokenizer.h b/pandas/_libs/src/parser/tokenizer.h
index f69fee4993d34..623d3690f252a 100644
--- a/pandas/_libs/src/parser/tokenizer.h
+++ b/pandas/_libs/src/parser/tokenizer.h
@@ -84,6 +84,12 @@ typedef enum {
QUOTE_NONE
} QuoteStyle;
+typedef enum {
+ ERROR,
+ WARN,
+ SKIP
+} BadLineHandleMethod;
+
typedef void *(*io_callback)(void *src, size_t nbytes, size_t *bytes_read,
int *status, const char *encoding_errors);
typedef int (*io_cleanup)(void *src);
@@ -136,8 +142,7 @@ typedef struct parser_t {
int usecols; // Boolean: 1: usecols provided, 0: none provided
int expected_fields;
- int error_bad_lines;
- int warn_bad_lines;
+ BadLineHandleMethod on_bad_lines;
// floating point options
char decimal;
diff --git a/pandas/io/parsers/base_parser.py b/pandas/io/parsers/base_parser.py
index 02084f91d9966..2a86ff13a2edc 100644
--- a/pandas/io/parsers/base_parser.py
+++ b/pandas/io/parsers/base_parser.py
@@ -3,6 +3,7 @@
from collections import defaultdict
import csv
import datetime
+from enum import Enum
import itertools
from typing import (
Any,
@@ -108,10 +109,16 @@
"infer_datetime_format": False,
"skip_blank_lines": True,
"encoding_errors": "strict",
+ "on_bad_lines": "error",
}
class ParserBase:
+ class BadLineHandleMethod(Enum):
+ ERROR = 0
+ WARN = 1
+ SKIP = 2
+
_implicit_index: bool = False
_first_chunk: bool
@@ -203,9 +210,13 @@ def __init__(self, kwds):
self.handles: IOHandles | None = None
+ # Fall back to error to pass a sketchy test (test_override_set_noconvert_columns)
+ # Normally, this arg would get pre-processed earlier on
+ self.on_bad_lines = kwds.get("on_bad_lines", self.BadLineHandleMethod.ERROR)
+
def _open_handles(self, src: FilePathOrBuffer, kwds: dict[str, Any]) -> None:
"""
- Let the readers open IOHanldes after they are done with their potential raises.
+ Let the readers open IOHandles after they are done with their potential raises.
"""
self.handles = get_handle(
src,
diff --git a/pandas/io/parsers/c_parser_wrapper.py b/pandas/io/parsers/c_parser_wrapper.py
index 7a0e704d2fbc4..5c1f8f94a72da 100644
--- a/pandas/io/parsers/c_parser_wrapper.py
+++ b/pandas/io/parsers/c_parser_wrapper.py
@@ -50,7 +50,18 @@ def __init__(self, src: FilePathOrBuffer, **kwds):
# open handles
self._open_handles(src, kwds)
assert self.handles is not None
- for key in ("storage_options", "encoding", "memory_map", "compression"):
+
+ # Have to pass int, would break tests using TextReader directly otherwise :(
+ kwds["on_bad_lines"] = self.on_bad_lines.value
+
+ for key in (
+ "storage_options",
+ "encoding",
+ "memory_map",
+ "compression",
+ "error_bad_lines",
+ "warn_bad_lines",
+ ):
kwds.pop(key, None)
kwds["dtype"] = ensure_dtype_objs(kwds.get("dtype", None))
@@ -206,9 +217,6 @@ def _set_noconvert_columns(self):
for col in noconvert_columns:
self._reader.set_noconvert(col)
- def set_error_bad_lines(self, status):
- self._reader.set_error_bad_lines(int(status))
-
def read(self, nrows=None):
try:
if self.low_memory:
diff --git a/pandas/io/parsers/python_parser.py b/pandas/io/parsers/python_parser.py
index 8f6d95f001d91..3635d5b32faf4 100644
--- a/pandas/io/parsers/python_parser.py
+++ b/pandas/io/parsers/python_parser.py
@@ -74,9 +74,6 @@ def __init__(self, f: Union[FilePathOrBuffer, list], **kwds):
self.quoting = kwds["quoting"]
self.skip_blank_lines = kwds["skip_blank_lines"]
- self.warn_bad_lines = kwds["warn_bad_lines"]
- self.error_bad_lines = kwds["error_bad_lines"]
-
self.names_passed = kwds["names"] or None
self.has_index_names = False
@@ -707,10 +704,11 @@ def _next_line(self):
def _alert_malformed(self, msg, row_num):
"""
- Alert a user about a malformed row.
+ Alert a user about a malformed row, depending on value of
+ `self.on_bad_lines` enum.
- If `self.error_bad_lines` is True, the alert will be `ParserError`.
- If `self.warn_bad_lines` is True, the alert will be printed out.
+ If `self.on_bad_lines` is ERROR, the alert will be `ParserError`.
+ If `self.on_bad_lines` is WARN, the alert will be printed out.
Parameters
----------
@@ -719,9 +717,9 @@ def _alert_malformed(self, msg, row_num):
Because this row number is displayed, we 1-index,
even though we 0-index internally.
"""
- if self.error_bad_lines:
+ if self.on_bad_lines == self.BadLineHandleMethod.ERROR:
raise ParserError(msg)
- elif self.warn_bad_lines:
+ elif self.on_bad_lines == self.BadLineHandleMethod.WARN:
base = f"Skipping line {row_num}: "
sys.stderr.write(base + msg + "\n")
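The rewritten `_alert_malformed` dispatches on a single enum value instead of two booleans. A standalone sketch of that dispatch, using `ValueError` as a stand-in for `pandas.errors.ParserError` and a hypothetical free-function name:

```python
import sys
from enum import Enum

class BadLineHandleMethod(Enum):
    ERROR = 0
    WARN = 1
    SKIP = 2

def alert_malformed(on_bad_lines, msg, row_num):
    """Report a malformed row according to the chosen policy.

    ERROR raises (ValueError stands in for ParserError here), WARN writes
    the skip notice to stderr, and SKIP stays silent.
    """
    if on_bad_lines == BadLineHandleMethod.ERROR:
        raise ValueError(msg)
    elif on_bad_lines == BadLineHandleMethod.WARN:
        sys.stderr.write(f"Skipping line {row_num}: {msg}\n")
```

The enum makes the invalid "error and warn at the same time" combination unrepresentable, which the old pair of booleans allowed.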
@@ -742,7 +740,10 @@ def _next_iter_line(self, row_num):
assert self.data is not None
return next(self.data)
except csv.Error as e:
- if self.warn_bad_lines or self.error_bad_lines:
+ if (
+ self.on_bad_lines == self.BadLineHandleMethod.ERROR
+ or self.on_bad_lines == self.BadLineHandleMethod.WARN
+ ):
msg = str(e)
if "NULL byte" in msg or "line contains NUL" in msg:
@@ -947,11 +948,14 @@ def _rows_to_cols(self, content):
actual_len = len(l)
if actual_len > col_len:
- if self.error_bad_lines or self.warn_bad_lines:
+ if (
+ self.on_bad_lines == self.BadLineHandleMethod.ERROR
+ or self.on_bad_lines == self.BadLineHandleMethod.WARN
+ ):
row_num = self.pos - (content_len - i + footers)
bad_lines.append((row_num, actual_len))
- if self.error_bad_lines:
+ if self.on_bad_lines == self.BadLineHandleMethod.ERROR:
break
else:
content.append(l)
diff --git a/pandas/io/parsers/readers.py b/pandas/io/parsers/readers.py
index d957a669351c1..8bf1ab1260b8e 100644
--- a/pandas/io/parsers/readers.py
+++ b/pandas/io/parsers/readers.py
@@ -34,6 +34,7 @@
Appender,
deprecate_nonkeyword_arguments,
)
+from pandas.util._validators import validate_bool_kwarg
from pandas.core.dtypes.common import (
is_file_like,
@@ -324,14 +325,32 @@
`skipinitialspace`, `quotechar`, and `quoting`. If it is necessary to
override values, a ParserWarning will be issued. See csv.Dialect
documentation for more details.
-error_bad_lines : bool, default True
+error_bad_lines : bool, default ``None``
Lines with too many fields (e.g. a csv line with too many commas) will by
default cause an exception to be raised, and no DataFrame will be returned.
If False, then these "bad lines" will be dropped from the DataFrame that is
returned.
-warn_bad_lines : bool, default True
+
+ .. deprecated:: 1.3
+ The ``on_bad_lines`` parameter should be used instead to specify behavior upon
+ encountering a bad line.
+warn_bad_lines : bool, default ``None``
If error_bad_lines is False, and warn_bad_lines is True, a warning for each
"bad line" will be output.
+
+ .. deprecated:: 1.3
+ The ``on_bad_lines`` parameter should be used instead to specify behavior upon
+ encountering a bad line.
+on_bad_lines : {{'error', 'warn', 'skip'}}, default 'error'
+ Specifies what to do upon encountering a bad line (a line with too many fields).
+ Allowed values are:
+
+ - 'error', raise an Exception when a bad line is encountered.
+ - 'warn', raise a warning when a bad line is encountered and skip that line.
+ - 'skip', skip bad lines without raising or warning when they are encountered.
+
+ .. versionadded:: 1.3
+
delim_whitespace : bool, default False
Specifies whether or not whitespace (e.g. ``' '`` or ``'\t'``) will be
used as the sep. Equivalent to setting ``sep='\\s+'``. If this option
@@ -384,8 +403,8 @@
"na_filter": True,
"low_memory": True,
"memory_map": False,
- "error_bad_lines": True,
- "warn_bad_lines": True,
+ "error_bad_lines": None,
+ "warn_bad_lines": None,
"float_precision": None,
}
@@ -394,8 +413,8 @@
_c_unsupported = {"skipfooter"}
_python_unsupported = {"low_memory", "float_precision"}
-_deprecated_defaults: Dict[str, Any] = {}
-_deprecated_args: Set[str] = set()
+_deprecated_defaults: Dict[str, Any] = {"error_bad_lines": None, "warn_bad_lines": None}
+_deprecated_args: Set[str] = {"error_bad_lines", "warn_bad_lines"}
def validate_integer(name, val, min_val=0):
@@ -538,8 +557,11 @@ def read_csv(
encoding_errors: Optional[str] = "strict",
dialect=None,
# Error Handling
- error_bad_lines=True,
- warn_bad_lines=True,
+ error_bad_lines=None,
+ warn_bad_lines=None,
+ # TODO (2.0): set on_bad_lines to "error".
+ # See _refine_defaults_read comment for why we do this.
+ on_bad_lines=None,
# Internal
delim_whitespace=False,
low_memory=_c_parser_defaults["low_memory"],
@@ -558,6 +580,9 @@ def read_csv(
delim_whitespace,
engine,
sep,
+ error_bad_lines,
+ warn_bad_lines,
+ on_bad_lines,
names,
prefix,
defaults={"delimiter": ","},
@@ -626,8 +651,11 @@ def read_table(
encoding=None,
dialect=None,
# Error Handling
- error_bad_lines=True,
- warn_bad_lines=True,
+ error_bad_lines=None,
+ warn_bad_lines=None,
+ # TODO (2.0): set on_bad_lines to "error".
+ # See _refine_defaults_read comment for why we do this.
+ on_bad_lines=None,
encoding_errors: Optional[str] = "strict",
# Internal
delim_whitespace=False,
@@ -646,6 +674,9 @@ def read_table(
delim_whitespace,
engine,
sep,
+ error_bad_lines,
+ warn_bad_lines,
+ on_bad_lines,
names,
prefix,
defaults={"delimiter": "\t"},
@@ -947,7 +978,7 @@ def _clean_options(self, options, engine):
f"The {arg} argument has been deprecated and will be "
"removed in a future version.\n\n"
)
- warnings.warn(msg, FutureWarning, stacklevel=2)
+ warnings.warn(msg, FutureWarning, stacklevel=7)
else:
result[arg] = parser_default
@@ -1195,6 +1226,9 @@ def _refine_defaults_read(
delim_whitespace: bool,
engine: str,
sep: Union[str, object],
+ error_bad_lines: Optional[bool],
+ warn_bad_lines: Optional[bool],
+ on_bad_lines: Optional[str],
names: Union[Optional[ArrayLike], object],
prefix: Union[Optional[str], object],
defaults: Dict[str, Any],
@@ -1222,6 +1256,12 @@ def _refine_defaults_read(
sep : str or object
A delimiter provided by the user (str) or a sentinel value, i.e.
pandas._libs.lib.no_default.
+ error_bad_lines : bool or None
+ Whether to error on a bad line or not.
+ warn_bad_lines : bool or None
+ Whether to warn on a bad line or not.
+ on_bad_lines : str or None
+ An option for handling bad lines or a sentinel value (``None``).
names : array-like, optional
List of column names to use. If the file contains a header row,
then you should explicitly pass ``header=0`` to override the column names.
@@ -1238,8 +1278,11 @@ def _refine_defaults_read(
Raises
------
- ValueError : If a delimiter was specified with ``sep`` (or ``delimiter``) and
+ ValueError :
+ If a delimiter was specified with ``sep`` (or ``delimiter``) and
``delim_whitespace=True``.
+ If on_bad_lines is specified (not ``None``) and ``error_bad_lines``/
+ ``warn_bad_lines`` is True.
"""
# fix types for sep, delimiter to Union(str, Any)
delim_default = defaults["delimiter"]
@@ -1292,6 +1335,48 @@ def _refine_defaults_read(
kwds["engine"] = "c"
kwds["engine_specified"] = False
+ # Ensure that on_bad_lines and error_bad_lines/warn_bad_lines
+ # aren't specified at the same time. If so, raise. Otherwise,
+ # alias on_bad_lines to "error" if error/warn_bad_lines not set
+ # and on_bad_lines is not set. on_bad_lines is defaulted to None
+ # so we can tell if it is set (this is why this hack exists).
+ if on_bad_lines is not None:
+ if error_bad_lines is not None or warn_bad_lines is not None:
+ raise ValueError(
+ "Both on_bad_lines and error_bad_lines/warn_bad_lines are set. "
+ "Please only set on_bad_lines."
+ )
+ if on_bad_lines == "error":
+ kwds["on_bad_lines"] = ParserBase.BadLineHandleMethod.ERROR
+ elif on_bad_lines == "warn":
+ kwds["on_bad_lines"] = ParserBase.BadLineHandleMethod.WARN
+ elif on_bad_lines == "skip":
+ kwds["on_bad_lines"] = ParserBase.BadLineHandleMethod.SKIP
+ else:
+ raise ValueError(f"Argument {on_bad_lines} is invalid for on_bad_lines")
+ else:
+ if error_bad_lines is not None:
+ # Must check is_bool, because other values (e.g. non-empty lists) evaluate to true
+ validate_bool_kwarg(error_bad_lines, "error_bad_lines")
+ if error_bad_lines:
+ kwds["on_bad_lines"] = ParserBase.BadLineHandleMethod.ERROR
+ else:
+ if warn_bad_lines is not None:
+ # This is the case where error_bad_lines is False
+ # We can only warn/skip if error_bad_lines is False
+ # None doesn't work because of backwards-compatibility reasons
+ validate_bool_kwarg(warn_bad_lines, "warn_bad_lines")
+ if warn_bad_lines:
+ kwds["on_bad_lines"] = ParserBase.BadLineHandleMethod.WARN
+ else:
+ kwds["on_bad_lines"] = ParserBase.BadLineHandleMethod.SKIP
+ else:
+ # Backwards compat, when only error_bad_lines = false, we warn
+ kwds["on_bad_lines"] = ParserBase.BadLineHandleMethod.WARN
+ else:
+ # Everything None -> Error
+ kwds["on_bad_lines"] = ParserBase.BadLineHandleMethod.ERROR
+
return kwds
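The precedence rules spelled out in the comment block above can be isolated into a small pure function. This is a hypothetical standalone helper, not the pandas `_refine_defaults_read` itself (and it omits the `validate_bool_kwarg` type check), but it implements the same mapping: the new keyword wins, mixing old and new raises, and `error_bad_lines=False` alone keeps its backwards-compatible "warn" meaning:

```python
from enum import Enum

class BadLineHandleMethod(Enum):
    ERROR = 0
    WARN = 1
    SKIP = 2

def resolve_on_bad_lines(error_bad_lines=None, warn_bad_lines=None, on_bad_lines=None):
    """Map the legacy boolean pair and the new keyword onto one enum value."""
    if on_bad_lines is not None:
        if error_bad_lines is not None or warn_bad_lines is not None:
            raise ValueError(
                "Both on_bad_lines and error_bad_lines/warn_bad_lines are set. "
                "Please only set on_bad_lines."
            )
        try:
            return BadLineHandleMethod[on_bad_lines.upper()]
        except KeyError:
            raise ValueError(
                f"Argument {on_bad_lines} is invalid for on_bad_lines"
            ) from None
    if error_bad_lines is None:
        # Everything None -> error; warn_bad_lines alone is ignored, as in the patch.
        return BadLineHandleMethod.ERROR
    if error_bad_lines:
        return BadLineHandleMethod.ERROR
    if warn_bad_lines is None or warn_bad_lines:
        # Backwards compat: error_bad_lines=False by itself means warn.
        return BadLineHandleMethod.WARN
    return BadLineHandleMethod.SKIP
```

Defaulting `on_bad_lines` to `None` rather than `"error"` is what lets the function distinguish "user asked for the new behavior" from "user said nothing", which is why the TODO notes it should flip to `"error"` in 2.0 once the legacy arguments are gone.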
diff --git a/pandas/tests/io/parser/common/test_common_basic.py b/pandas/tests/io/parser/common/test_common_basic.py
index eba5e52516b4c..97b3be1306cd5 100644
--- a/pandas/tests/io/parser/common/test_common_basic.py
+++ b/pandas/tests/io/parser/common/test_common_basic.py
@@ -803,6 +803,19 @@ def test_encoding_surrogatepass(all_parsers):
parser.read_csv(path)
+@pytest.mark.parametrize("on_bad_lines", ["error", "warn"])
+def test_deprecated_bad_lines_warns(all_parsers, csv1, on_bad_lines):
+ # GH 15122
+ parser = all_parsers
+ kwds = {f"{on_bad_lines}_bad_lines": False}
+ with tm.assert_produces_warning(
+ FutureWarning,
+ match=f"The {on_bad_lines}_bad_lines argument has been deprecated "
+ "and will be removed in a future version.\n\n",
+ ):
+ parser.read_csv(csv1, **kwds)
+
+
def test_malformed_second_line(all_parsers):
# see GH14782
parser = all_parsers
diff --git a/pandas/tests/io/parser/common/test_read_errors.py b/pandas/tests/io/parser/common/test_read_errors.py
index 4e3d99af685ec..f5438ea3f0296 100644
--- a/pandas/tests/io/parser/common/test_read_errors.py
+++ b/pandas/tests/io/parser/common/test_read_errors.py
@@ -140,27 +140,37 @@ def test_unexpected_keyword_parameter_exception(all_parsers):
parser.read_table("foo.tsv", foo=1)
-def test_suppress_error_output(all_parsers, capsys):
+@pytest.mark.parametrize(
+ "kwargs",
+ [
+ pytest.param(
+ {"error_bad_lines": False, "warn_bad_lines": False},
+ marks=pytest.mark.filterwarnings("ignore"),
+ ),
+ {"on_bad_lines": "skip"},
+ ],
+)
+def test_suppress_error_output(all_parsers, capsys, kwargs):
# see gh-15925
parser = all_parsers
data = "a\n1\n1,2,3\n4\n5,6,7"
expected = DataFrame({"a": [1, 4]})
- result = parser.read_csv(
- StringIO(data), error_bad_lines=False, warn_bad_lines=False
- )
+ result = parser.read_csv(StringIO(data), **kwargs)
tm.assert_frame_equal(result, expected)
captured = capsys.readouterr()
assert captured.err == ""
+@pytest.mark.filterwarnings("ignore")
@pytest.mark.parametrize(
"kwargs",
[{}, {"error_bad_lines": True}], # Default is True. # Explicitly pass in.
)
@pytest.mark.parametrize(
- "warn_kwargs", [{}, {"warn_bad_lines": True}, {"warn_bad_lines": False}]
+ "warn_kwargs",
+ [{}, {"warn_bad_lines": True}, {"warn_bad_lines": False}],
)
def test_error_bad_lines(all_parsers, kwargs, warn_kwargs):
# see gh-15925
@@ -173,13 +183,23 @@ def test_error_bad_lines(all_parsers, kwargs, warn_kwargs):
parser.read_csv(StringIO(data), **kwargs)
-def test_warn_bad_lines(all_parsers, capsys):
+@pytest.mark.parametrize(
+ "kwargs",
+ [
+ pytest.param(
+ {"error_bad_lines": False, "warn_bad_lines": True},
+ marks=pytest.mark.filterwarnings("ignore"),
+ ),
+ {"on_bad_lines": "warn"},
+ ],
+)
+def test_warn_bad_lines(all_parsers, capsys, kwargs):
# see gh-15925
parser = all_parsers
data = "a\n1\n1,2,3\n4\n5,6,7"
expected = DataFrame({"a": [1, 4]})
- result = parser.read_csv(StringIO(data), error_bad_lines=False, warn_bad_lines=True)
+ result = parser.read_csv(StringIO(data), **kwargs)
tm.assert_frame_equal(result, expected)
captured = capsys.readouterr()
@@ -234,3 +254,24 @@ def test_open_file(all_parsers):
with pytest.raises(csv.Error, match="Could not determine delimiter"):
parser.read_csv(file, sep=None, encoding_errors="replace")
assert len(record) == 0, record[0].message
+
+
+def test_invalid_on_bad_line(all_parsers):
+ parser = all_parsers
+ data = "a\n1\n1,2,3\n4\n5,6,7"
+ with pytest.raises(ValueError, match="Argument abc is invalid for on_bad_lines"):
+ parser.read_csv(StringIO(data), on_bad_lines="abc")
+
+
+@pytest.mark.parametrize("error_bad_lines", [True, False])
+@pytest.mark.parametrize("warn_bad_lines", [True, False])
+def test_conflict_on_bad_line(all_parsers, error_bad_lines, warn_bad_lines):
+ parser = all_parsers
+ data = "a\n1\n1,2,3\n4\n5,6,7"
+ kwds = {"error_bad_lines": error_bad_lines, "warn_bad_lines": warn_bad_lines}
+ with pytest.raises(
+ ValueError,
+ match="Both on_bad_lines and error_bad_lines/warn_bad_lines are set. "
+ "Please only set on_bad_lines.",
+ ):
+ parser.read_csv(StringIO(data), on_bad_lines="error", **kwds)
diff --git a/pandas/tests/io/parser/test_c_parser_only.py b/pandas/tests/io/parser/test_c_parser_only.py
index 5d1fa426ff24c..160e00f5fb930 100644
--- a/pandas/tests/io/parser/test_c_parser_only.py
+++ b/pandas/tests/io/parser/test_c_parser_only.py
@@ -498,7 +498,7 @@ def test_comment_whitespace_delimited(c_parser_only, capsys):
header=None,
delimiter="\\s+",
skiprows=0,
- error_bad_lines=False,
+ on_bad_lines="warn",
)
captured = capsys.readouterr()
# skipped lines 2, 3, 4, 9
diff --git a/pandas/tests/io/parser/test_python_parser_only.py b/pandas/tests/io/parser/test_python_parser_only.py
index cf6866946ab76..f62c9fd1349bf 100644
--- a/pandas/tests/io/parser/test_python_parser_only.py
+++ b/pandas/tests/io/parser/test_python_parser_only.py
@@ -276,9 +276,7 @@ def test_none_delimiter(python_parser_only, capsys):
# We expect the third line in the data to be
# skipped because it is malformed, but we do
# not expect any errors to occur.
- result = parser.read_csv(
- StringIO(data), header=0, sep=None, warn_bad_lines=True, error_bad_lines=False
- )
+ result = parser.read_csv(StringIO(data), header=0, sep=None, on_bad_lines="warn")
tm.assert_frame_equal(result, expected)
captured = capsys.readouterr()
diff --git a/pandas/tests/io/parser/test_textreader.py b/pandas/tests/io/parser/test_textreader.py
index 7f84c5e378d16..d594bf8a75d49 100644
--- a/pandas/tests/io/parser/test_textreader.py
+++ b/pandas/tests/io/parser/test_textreader.py
@@ -140,11 +140,7 @@ def test_skip_bad_lines(self, capsys):
reader.read()
reader = TextReader(
- StringIO(data),
- delimiter=":",
- header=None,
- error_bad_lines=False,
- warn_bad_lines=False,
+ StringIO(data), delimiter=":", header=None, on_bad_lines=2 # Skip
)
result = reader.read()
expected = {
@@ -155,11 +151,7 @@ def test_skip_bad_lines(self, capsys):
assert_array_dicts_equal(result, expected)
reader = TextReader(
- StringIO(data),
- delimiter=":",
- header=None,
- error_bad_lines=False,
- warn_bad_lines=True,
+ StringIO(data), delimiter=":", header=None, on_bad_lines=1 # Warn
)
reader.read()
captured = capsys.readouterr()
| - [x] closes #15122
- [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [x] whatsnew entry
Summary of contents
- Adds a new on_bad_lines parameter (I found on_bad_lines a more explicit name than bad_lines)
- This defaults to None at first in order to preserve compatibility; it should be changed to "error" in 2.0, after
error_bad_lines and warn_bad_lines are removed.
- Cleans up some C/Python parser code (adds an enum instead of using two variables in C, and uses only on_bad_lines in the Python parser) | https://api.github.com/repos/pandas-dev/pandas/pulls/40413 | 2021-03-13T04:56:48Z | 2021-05-28T16:58:22Z | 2021-05-28T16:58:21Z | 2021-05-28T22:31:39Z
TYP: fix ignores | diff --git a/pandas/_libs/parsers.pyx b/pandas/_libs/parsers.pyx
index 493333fded6dd..baf5633db0cb3 100644
--- a/pandas/_libs/parsers.pyx
+++ b/pandas/_libs/parsers.pyx
@@ -337,7 +337,7 @@ cdef class TextReader:
object skiprows
object dtype
object usecols
- list dtype_cast_order
+ list dtype_cast_order # list[np.dtype]
set unnamed_cols
set noconvert
diff --git a/pandas/_testing/asserters.py b/pandas/_testing/asserters.py
index 731b55464c11b..2adc70438cce7 100644
--- a/pandas/_testing/asserters.py
+++ b/pandas/_testing/asserters.py
@@ -976,8 +976,8 @@ def assert_series_equal(
left_values = left._values
right_values = right._values
# Only check exact if dtype is numeric
- if is_extension_array_dtype(left_values) and is_extension_array_dtype(
- right_values
+ if isinstance(left_values, ExtensionArray) and isinstance(
+ right_values, ExtensionArray
):
assert_extension_array_equal(
left_values,
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 0fa02d54b5b78..a888bfabd6f80 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -235,41 +235,26 @@ def _reconstruct_data(
# Catch DatetimeArray/TimedeltaArray
return values
- if is_extension_array_dtype(dtype):
- # error: Item "dtype[Any]" of "Union[dtype[Any], ExtensionDtype]" has no
- # attribute "construct_array_type"
- cls = dtype.construct_array_type() # type: ignore[union-attr]
+ if not isinstance(dtype, np.dtype):
+ # i.e. ExtensionDtype
+ cls = dtype.construct_array_type()
if isinstance(values, cls) and values.dtype == dtype:
return values
values = cls._from_sequence(values)
elif is_bool_dtype(dtype):
- # error: Argument 1 to "astype" of "_ArrayOrScalarCommon" has
- # incompatible type "Union[dtype, ExtensionDtype]"; expected
- # "Union[dtype, None, type, _SupportsDtype, str, Tuple[Any, int],
- # Tuple[Any, Union[int, Sequence[int]]], List[Any], _DtypeDict,
- # Tuple[Any, Any]]"
- values = values.astype(dtype, copy=False) # type: ignore[arg-type]
+ values = values.astype(dtype, copy=False)
# we only support object dtypes bool Index
if isinstance(original, ABCIndex):
values = values.astype(object, copy=False)
elif dtype is not None:
if is_datetime64_dtype(dtype):
- # error: Incompatible types in assignment (expression has type
- # "str", variable has type "Union[dtype, ExtensionDtype]")
- dtype = "datetime64[ns]" # type: ignore[assignment]
+ dtype = np.dtype("datetime64[ns]")
elif is_timedelta64_dtype(dtype):
- # error: Incompatible types in assignment (expression has type
- # "str", variable has type "Union[dtype, ExtensionDtype]")
- dtype = "timedelta64[ns]" # type: ignore[assignment]
+ dtype = np.dtype("timedelta64[ns]")
- # error: Argument 1 to "astype" of "_ArrayOrScalarCommon" has
- # incompatible type "Union[dtype, ExtensionDtype]"; expected
- # "Union[dtype, None, type, _SupportsDtype, str, Tuple[Any, int],
- # Tuple[Any, Union[int, Sequence[int]]], List[Any], _DtypeDict,
- # Tuple[Any, Any]]"
- values = values.astype(dtype, copy=False) # type: ignore[arg-type]
+ values = values.astype(dtype, copy=False)
return values
@@ -772,7 +757,8 @@ def factorize(
uniques = Index(uniques)
return codes, uniques
- if is_extension_array_dtype(values.dtype):
+ if not isinstance(values.dtype, np.dtype):
+ # i.e. ExtensionDtype
codes, uniques = values.factorize(na_sentinel=na_sentinel)
dtype = original.dtype
else:
@@ -1662,7 +1648,8 @@ def diff(arr, n: int, axis: int = 0, stacklevel=3):
arr = arr.to_numpy()
dtype = arr.dtype
- if is_extension_array_dtype(dtype):
+ if not isinstance(dtype, np.dtype):
+ # i.e. ExtensionDtype
if hasattr(arr, f"__{op.__name__}__"):
if axis != 0:
raise ValueError(f"cannot diff {type(arr).__name__} on axis={axis}")
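The pattern in these hunks, replacing the boolean predicate `is_extension_array_dtype(dtype)` with `not isinstance(dtype, np.dtype)`, works because `isinstance` narrows a union type for mypy, while an opaque predicate does not, which is what made the `type: ignore[union-attr]` comments necessary. A toy illustration with hypothetical stand-in classes (not the real numpy/pandas types):

```python
from typing import Union

class NumpyDtypeStandIn:
    """Stand-in for np.dtype: has no construct_array_type()."""

class ExtensionDtypeStandIn:
    """Stand-in for pandas ExtensionDtype."""

    def construct_array_type(self) -> str:
        return "ExtensionArray"

def has_array_type(dtype: Union[NumpyDtypeStandIn, ExtensionDtypeStandIn]) -> bool:
    if not isinstance(dtype, NumpyDtypeStandIn):
        # i.e. ExtensionDtypeStandIn: after the isinstance check, a type
        # checker knows the attribute exists, so no ignore comment is needed.
        return dtype.construct_array_type() == "ExtensionArray"
    return False
```

A bare `is_extension_array_dtype(dtype)` call in place of the `isinstance` check would leave `dtype` typed as the full union in both branches, and the attribute access would be flagged again.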
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 0bf5e05786d4d..0062ed01e957a 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -66,7 +66,10 @@
needs_i8_conversion,
pandas_dtype,
)
-from pandas.core.dtypes.dtypes import CategoricalDtype
+from pandas.core.dtypes.dtypes import (
+ CategoricalDtype,
+ ExtensionDtype,
+)
from pandas.core.dtypes.generic import (
ABCIndex,
ABCSeries,
@@ -504,7 +507,7 @@ def astype(self, dtype: Dtype, copy: bool = True) -> ArrayLike:
result = self._set_dtype(dtype)
# TODO: consolidate with ndarray case?
- elif is_extension_array_dtype(dtype):
+ elif isinstance(dtype, ExtensionDtype):
result = pd_array(self, dtype=dtype, copy=copy)
elif is_integer_dtype(dtype) and self.isna().any():
@@ -515,13 +518,7 @@ def astype(self, dtype: Dtype, copy: bool = True) -> ArrayLike:
# variable has type "Categorical")
result = np.array( # type: ignore[assignment]
self,
- # error: Argument "dtype" to "array" has incompatible type
- # "Union[ExtensionDtype, str, dtype[Any], Type[str], Type[float],
- # Type[int], Type[complex], Type[bool], Type[object]]"; expected
- # "Union[dtype[Any], None, type, _SupportsDType, str, Union[Tuple[Any,
- # int], Tuple[Any, Union[int, Sequence[int]]], List[Any], _DTypeDict,
- # Tuple[Any, Any]]]"
- dtype=dtype, # type: ignore[arg-type]
+ dtype=dtype,
copy=copy,
)
@@ -529,14 +526,7 @@ def astype(self, dtype: Dtype, copy: bool = True) -> ArrayLike:
# GH8628 (PERF): astype category codes instead of astyping array
try:
new_cats = np.asarray(self.categories)
- # error: Argument "dtype" to "astype" of "_ArrayOrScalarCommon" has
- # incompatible type "Union[ExtensionDtype, dtype[Any]]"; expected
- # "Union[dtype[Any], None, type, _SupportsDType, str, Union[Tuple[Any,
- # int], Tuple[Any, Union[int, Sequence[int]]], List[Any], _DTypeDict,
- # Tuple[Any, Any]]]"
- new_cats = new_cats.astype(
- dtype=dtype, copy=copy # type: ignore[arg-type]
- )
+ new_cats = new_cats.astype(dtype=dtype, copy=copy)
except (
TypeError, # downstream error msg for CategoricalIndex is misleading
ValueError,
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index bd5cc04659a06..42299aaf46a48 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -878,12 +878,11 @@ def _isnan(self) -> np.ndarray:
return self.asi8 == iNaT
@property # NB: override with cache_readonly in immutable subclasses
- def _hasnans(self) -> np.ndarray:
+ def _hasnans(self) -> bool:
"""
return if I have any nans; enables various perf speedups
"""
- # error: Incompatible return value type (got "bool", expected "ndarray")
- return bool(self._isnan.any()) # type: ignore[return-value]
+ return bool(self._isnan.any())
def _maybe_mask_results(
self, result: np.ndarray, fill_value=iNaT, convert=None
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 8b67b98b32f7f..26d25645b02c6 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -191,8 +191,7 @@
str_t = str
-# error: Value of type variable "_DTypeScalar" of "dtype" cannot be "object"
-_o_dtype = np.dtype(object) # type: ignore[type-var]
+_o_dtype = np.dtype("object")
_Identity = NewType("_Identity", object)
@@ -417,11 +416,7 @@ def __new__(
# maybe coerce to a sub-class
arr = data
else:
- # error: Argument "dtype" to "asarray_tuplesafe" has incompatible type
- # "Type[object]"; expected "Union[str, dtype[Any], None]"
- arr = com.asarray_tuplesafe(
- data, dtype=object # type: ignore[arg-type]
- )
+ arr = com.asarray_tuplesafe(data, dtype=np.dtype("object"))
if dtype is None:
arr = _maybe_cast_data_without_dtype(arr)
@@ -456,9 +451,7 @@ def __new__(
)
# other iterable of some kind
- # error: Argument "dtype" to "asarray_tuplesafe" has incompatible type
- # "Type[object]"; expected "Union[str, dtype[Any], None]"
- subarr = com.asarray_tuplesafe(data, dtype=object) # type: ignore[arg-type]
+ subarr = com.asarray_tuplesafe(data, dtype=np.dtype("object"))
return Index(subarr, dtype=dtype, copy=copy, name=name, **kwargs)
@classmethod
@@ -2902,16 +2895,10 @@ def union(self, other, sort=None):
# <T> | <T> -> T
# <T> | <U> -> object
if not (is_integer_dtype(self.dtype) and is_integer_dtype(other.dtype)):
- # error: Incompatible types in assignment (expression has type
- # "str", variable has type "Union[dtype[Any], ExtensionDtype]")
- dtype = "float64" # type: ignore[assignment]
+ dtype = np.dtype("float64")
else:
# one is int64 other is uint64
-
- # error: Incompatible types in assignment (expression has type
- # "Type[object]", variable has type "Union[dtype[Any],
- # ExtensionDtype]")
- dtype = object # type: ignore[assignment]
+ dtype = np.dtype("object")
left = self.astype(dtype, copy=False)
right = other.astype(dtype, copy=False)
@@ -3906,6 +3893,9 @@ def join(
self_is_mi = isinstance(self, ABCMultiIndex)
other_is_mi = isinstance(other, ABCMultiIndex)
+ lindexer: Optional[np.ndarray]
+ rindexer: Optional[np.ndarray]
+
# try to figure out the join level
# GH3662
if level is None and (self_is_mi or other_is_mi):
@@ -4003,15 +3993,11 @@ def join(
if return_indexers:
if join_index is self:
- # error: Incompatible types in assignment (expression has type "None",
- # variable has type "ndarray")
- lindexer = None # type: ignore[assignment]
+ lindexer = None
else:
lindexer = self.get_indexer(join_index)
if join_index is other:
- # error: Incompatible types in assignment (expression has type "None",
- # variable has type "ndarray")
- rindexer = None # type: ignore[assignment]
+ rindexer = None
else:
rindexer = other.get_indexer(join_index)
return join_index, lindexer, rindexer
@@ -4114,15 +4100,11 @@ def _join_non_unique(self, other, how="left", return_indexers=False):
left_idx = ensure_platform_int(left_idx)
right_idx = ensure_platform_int(right_idx)
- join_index = np.asarray(lvalues.take(left_idx))
+ join_array = np.asarray(lvalues.take(left_idx))
mask = left_idx == -1
- np.putmask(join_index, mask, rvalues.take(right_idx))
+ np.putmask(join_array, mask, rvalues.take(right_idx))
- # error: Incompatible types in assignment (expression has type "Index", variable
- # has type "ndarray")
- join_index = self._wrap_joined_index(
- join_index, other # type: ignore[assignment]
- )
+ join_index = self._wrap_joined_index(join_array, other)
if return_indexers:
return join_index, left_idx, right_idx
@@ -4286,6 +4268,9 @@ def _join_monotonic(self, other, how="left", return_indexers=False):
sv = self._get_engine_target()
ov = other._get_engine_target()
+ ridx: Optional[np.ndarray]
+ lidx: Optional[np.ndarray]
+
if self.is_unique and other.is_unique:
# We can perform much better than the general case
if how == "left":
@@ -4295,61 +4280,24 @@ def _join_monotonic(self, other, how="left", return_indexers=False):
elif how == "right":
join_index = other
lidx = self._left_indexer_unique(ov, sv)
- # error: Incompatible types in assignment (expression has type "None",
- # variable has type "ndarray")
- ridx = None # type: ignore[assignment]
+ ridx = None
elif how == "inner":
- # error: Incompatible types in assignment (expression has type
- # "ndarray", variable has type "Index")
- join_index, lidx, ridx = self._inner_indexer( # type:ignore[assignment]
- sv, ov
- )
- # error: Argument 1 to "_wrap_joined_index" of "Index" has incompatible
- # type "Index"; expected "ndarray"
- join_index = self._wrap_joined_index(
- join_index, other # type: ignore[arg-type]
- )
+ join_array, lidx, ridx = self._inner_indexer(sv, ov)
+ join_index = self._wrap_joined_index(join_array, other)
elif how == "outer":
- # error: Incompatible types in assignment (expression has type
- # "ndarray", variable has type "Index")
- join_index, lidx, ridx = self._outer_indexer( # type:ignore[assignment]
- sv, ov
- )
- # error: Argument 1 to "_wrap_joined_index" of "Index" has incompatible
- # type "Index"; expected "ndarray"
- join_index = self._wrap_joined_index(
- join_index, other # type: ignore[arg-type]
- )
+ join_array, lidx, ridx = self._outer_indexer(sv, ov)
+ join_index = self._wrap_joined_index(join_array, other)
else:
if how == "left":
- # error: Incompatible types in assignment (expression has type
- # "ndarray", variable has type "Index")
- join_index, lidx, ridx = self._left_indexer( # type: ignore[assignment]
- sv, ov
- )
+ join_array, lidx, ridx = self._left_indexer(sv, ov)
elif how == "right":
- # error: Incompatible types in assignment (expression has type
- # "ndarray", variable has type "Index")
- join_index, ridx, lidx = self._left_indexer( # type: ignore[assignment]
- ov, sv
- )
+ join_array, ridx, lidx = self._left_indexer(ov, sv)
elif how == "inner":
- # error: Incompatible types in assignment (expression has type
- # "ndarray", variable has type "Index")
- join_index, lidx, ridx = self._inner_indexer( # type:ignore[assignment]
- sv, ov
- )
+ join_array, lidx, ridx = self._inner_indexer(sv, ov)
elif how == "outer":
- # error: Incompatible types in assignment (expression has type
- # "ndarray", variable has type "Index")
- join_index, lidx, ridx = self._outer_indexer( # type:ignore[assignment]
- sv, ov
- )
- # error: Argument 1 to "_wrap_joined_index" of "Index" has incompatible type
- # "Index"; expected "ndarray"
- join_index = self._wrap_joined_index(
- join_index, other # type: ignore[arg-type]
- )
+ join_array, lidx, ridx = self._outer_indexer(sv, ov)
+
+ join_index = self._wrap_joined_index(join_array, other)
if return_indexers:
lidx = None if lidx is None else ensure_platform_int(lidx)
@@ -6481,12 +6429,8 @@ def _maybe_cast_data_without_dtype(subarr):
pass
elif inferred.startswith("timedelta"):
- # error: Incompatible types in assignment (expression has type
- # "TimedeltaArray", variable has type "ndarray")
- data = TimedeltaArray._from_sequence( # type: ignore[assignment]
- subarr, copy=False
- )
- return data
+ tda = TimedeltaArray._from_sequence(subarr, copy=False)
+ return tda
elif inferred == "period":
try:
data = PeriodArray._from_sequence(subarr)
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 1bcddee4d726e..b1a552cff2274 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -118,9 +118,7 @@
from pandas.core.arrays._mixins import NDArrayBackedExtensionArray
# comparison is faster than is_object_dtype
-
-# error: Value of type variable "_DTypeScalar" of "dtype" cannot be "object"
-_dtype_obj = np.dtype(object) # type: ignore[type-var]
+_dtype_obj = np.dtype("object")
class Block(PandasObject):
@@ -1598,14 +1596,9 @@ def to_native_types(self, na_rep="nan", quoting=None, **kwargs):
values = self.values
mask = isna(values)
- # error: Incompatible types in assignment (expression has type "ndarray",
- # variable has type "ExtensionArray")
- values = np.asarray(values.astype(object)) # type: ignore[assignment]
- values[mask] = na_rep
-
- # TODO(EA2D): reshape not needed with 2D EAs
- # we are expected to return a 2-d ndarray
- return self.make_block(values)
+ new_values = np.asarray(values.astype(object))
+ new_values[mask] = na_rep
+ return self.make_block(new_values)
def take_nd(
self, indexer, axis: int = 0, new_mgr_locs=None, fill_value=lib.no_default
diff --git a/pandas/core/internals/concat.py b/pandas/core/internals/concat.py
index e2949eb227fbf..b82ab807562f4 100644
--- a/pandas/core/internals/concat.py
+++ b/pandas/core/internals/concat.py
@@ -31,6 +31,7 @@
is_sparse,
)
from pandas.core.dtypes.concat import concat_compat
+from pandas.core.dtypes.dtypes import ExtensionDtype
from pandas.core.dtypes.missing import (
is_valid_na_for_dtype,
isna_all,
@@ -331,9 +332,7 @@ def get_reindexed_values(self, empty_dtype: DtypeObj, upcasted_na) -> ArrayLike:
if self.is_valid_na_for(empty_dtype):
blk_dtype = getattr(self.block, "dtype", None)
- # error: Value of type variable "_DTypeScalar" of "dtype" cannot be
- # "object"
- if blk_dtype == np.dtype(object): # type: ignore[type-var]
+ if blk_dtype == np.dtype("object"):
# we want to avoid filling with np.nan if we are
# using None; we already know that we are all
# nulls
@@ -347,10 +346,8 @@ def get_reindexed_values(self, empty_dtype: DtypeObj, upcasted_na) -> ArrayLike:
return DatetimeArray(i8values, dtype=empty_dtype)
elif is_extension_array_dtype(blk_dtype):
pass
- elif is_extension_array_dtype(empty_dtype):
- # error: Item "dtype[Any]" of "Union[dtype[Any], ExtensionDtype]"
- # has no attribute "construct_array_type"
- cls = empty_dtype.construct_array_type() # type: ignore[union-attr]
+ elif isinstance(empty_dtype, ExtensionDtype):
+ cls = empty_dtype.construct_array_type()
missing_arr = cls._from_sequence([], dtype=empty_dtype)
ncols, nrows = self.shape
assert ncols == 1, ncols
@@ -362,14 +359,7 @@ def get_reindexed_values(self, empty_dtype: DtypeObj, upcasted_na) -> ArrayLike:
# NB: we should never get here with empty_dtype integer or bool;
# if we did, the missing_arr.fill would cast to gibberish
- # error: Argument "dtype" to "empty" has incompatible type
- # "Union[dtype[Any], ExtensionDtype]"; expected "Union[dtype[Any],
- # None, type, _SupportsDType, str, Union[Tuple[Any, int], Tuple[Any,
- # Union[int, Sequence[int]]], List[Any], _DTypeDict, Tuple[Any,
- # Any]]]"
- missing_arr = np.empty(
- self.shape, dtype=empty_dtype # type: ignore[arg-type]
- )
+ missing_arr = np.empty(self.shape, dtype=empty_dtype)
missing_arr.fill(fill_value)
return missing_arr
@@ -449,10 +439,8 @@ def _dtype_to_na_value(dtype: DtypeObj, has_none_blocks: bool):
"""
Find the NA value to go with this dtype.
"""
- if is_extension_array_dtype(dtype):
- # error: Item "dtype[Any]" of "Union[dtype[Any], ExtensionDtype]" has no
- # attribute "na_value"
- return dtype.na_value # type: ignore[union-attr]
+ if isinstance(dtype, ExtensionDtype):
+ return dtype.na_value
elif dtype.kind in ["m", "M"]:
return dtype.type("NaT")
elif dtype.kind in ["f", "c"]:
diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py
index 63a437a91f6e4..93aade8d58a71 100644
--- a/pandas/core/internals/construction.py
+++ b/pandas/core/internals/construction.py
@@ -45,6 +45,7 @@
is_named_tuple,
is_object_dtype,
)
+from pandas.core.dtypes.dtypes import ExtensionDtype
from pandas.core.dtypes.generic import (
ABCDataFrame,
ABCDatetimeIndex,
@@ -249,7 +250,7 @@ def ndarray_to_mgr(
if not len(values) and columns is not None and len(columns):
values = np.empty((0, 1), dtype=object)
- if is_extension_array_dtype(values) or is_extension_array_dtype(dtype):
+ if is_extension_array_dtype(values) or isinstance(dtype, ExtensionDtype):
# GH#19157
if isinstance(values, np.ndarray) and values.ndim > 1:
@@ -365,19 +366,10 @@ def dict_to_mgr(
# no obvious "empty" int column
if missing.any() and not is_integer_dtype(dtype):
if dtype is None or (
- not is_extension_array_dtype(dtype)
- # error: Argument 1 to "issubdtype" has incompatible type
- # "Union[dtype, ExtensionDtype]"; expected "Union[dtype, None,
- # type, _SupportsDtype, str, Tuple[Any, int], Tuple[Any,
- # Union[int, Sequence[int]]], List[Any], _DtypeDict, Tuple[Any,
- # Any]]"
- and np.issubdtype(dtype, np.flexible) # type: ignore[arg-type]
+ isinstance(dtype, np.dtype) and np.issubdtype(dtype, np.flexible)
):
# GH#1783
-
- # error: Value of type variable "_DTypeScalar" of "dtype" cannot be
- # "object"
- nan_dtype = np.dtype(object) # type: ignore[type-var]
+ nan_dtype = np.dtype("object")
else:
# error: Incompatible types in assignment (expression has type
# "Union[dtype, ExtensionDtype]", variable has type "dtype")
@@ -682,13 +674,11 @@ def to_arrays(
if not len(data):
if isinstance(data, np.ndarray):
- # error: Incompatible types in assignment (expression has type
- # "Optional[Tuple[str, ...]]", variable has type "Optional[Index]")
- columns = data.dtype.names # type: ignore[assignment]
- if columns is not None:
+ if data.dtype.names is not None:
# i.e. numpy structured array
+ columns = ensure_index(data.dtype.names)
arrays = [data[name] for name in columns]
- return arrays, ensure_index(columns)
+ return arrays, columns
return [], ensure_index([])
elif isinstance(data[0], Categorical):
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index f4de822262cf4..c01bf3931b27a 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -73,6 +73,7 @@
)
from pandas.core import groupby
import pandas.core.algorithms as algos
+from pandas.core.arrays import ExtensionArray
import pandas.core.common as com
from pandas.core.construction import extract_array
from pandas.core.frame import _merge_doc
@@ -2083,12 +2084,10 @@ def _factorize_keys(
lk = ensure_int64(lk.codes)
rk = ensure_int64(rk.codes)
- elif is_extension_array_dtype(lk.dtype) and is_dtype_equal(lk.dtype, rk.dtype):
+ elif isinstance(lk, ExtensionArray) and is_dtype_equal(lk.dtype, rk.dtype):
# error: Incompatible types in assignment (expression has type "ndarray",
# variable has type "ExtensionArray")
- # error: Item "ndarray" of "Union[Any, ndarray]" has no attribute
- # "_values_for_factorize"
- lk, _ = lk._values_for_factorize() # type: ignore[union-attr,assignment]
+ lk, _ = lk._values_for_factorize()
# error: Incompatible types in assignment (expression has type
# "ndarray", variable has type "ExtensionArray")
diff --git a/pandas/io/parsers/base_parser.py b/pandas/io/parsers/base_parser.py
index 8cfbae3cafc18..a011a789bf17c 100644
--- a/pandas/io/parsers/base_parser.py
+++ b/pandas/io/parsers/base_parser.py
@@ -725,9 +725,7 @@ def _cast_types(self, values, cast_type, column):
# c-parser which parses all categories
# as strings
- # error: Argument 2 to "astype_nansafe" has incompatible type
- # "Type[str]"; expected "Union[dtype[Any], ExtensionDtype]"
- values = astype_nansafe(values, str) # type: ignore[arg-type]
+ values = astype_nansafe(values, np.dtype(str))
cats = Index(values).unique().dropna()
values = Categorical._from_inferred_categories(
| cc @simonjayhawkins
Locally when I run mypy on master I get 427 errors, mostly "unused 'type: ignore' comment". Same result when I switch from py39 to py38, or from OSX to Ubuntu. mypy==0.812 in all cases. Any guesses what config the CI has different? | https://api.github.com/repos/pandas-dev/pandas/pulls/40412 | 2021-03-13T04:16:37Z | 2021-03-13T13:34:11Z | 2021-03-13T13:34:11Z | 2021-03-13T15:05:20Z |
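Most of the hunks above swap `np.dtype(object)` (which the then-current numpy type stubs rejected with a `type-var` error) for `np.dtype("object")`. At runtime the two spellings are interchangeable — a minimal sketch, assuming only numpy is installed:

```python
import numpy as np

# np.dtype accepts either the Python builtin type or its string alias;
# both resolve to the same object dtype. The difference only matters to
# the static type stubs, which is why switching to the string spelling
# lets the "type: ignore[type-var]" comments be dropped.
a = np.dtype(object)
b = np.dtype("object")
assert a == b
assert a.kind == "O"  # 'O' is numpy's dtype-kind code for Python objects
```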
BUG: Index constructor silently ignoring dtype | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index 56a5412d4ecfc..c977585ef2942 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -455,7 +455,7 @@ Conversion
- Bug in :meth:`Series.view` and :meth:`Index.view` when converting between datetime-like (``datetime64[ns]``, ``datetime64[ns, tz]``, ``timedelta64``, ``period``) dtypes (:issue:`39788`)
- Bug in creating a :class:`DataFrame` from an empty ``np.recarray`` not retaining the original dtypes (:issue:`40121`)
- Bug in :class:`DataFrame` failing to raise ``TypeError`` when constructing from a ``frozenset`` (:issue:`40163`)
--
+- Bug in :class:`Index` construction silently ignoring a passed ``dtype`` when the data cannot be cast to that dtype (:issue:`21311`)
Strings
^^^^^^^
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 8b67b98b32f7f..edc740b425c56 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -66,7 +66,6 @@
can_hold_element,
find_common_type,
infer_dtype_from,
- maybe_cast_to_integer_array,
validate_numeric_casting,
)
from pandas.core.dtypes.common import (
@@ -144,6 +143,7 @@
from pandas.core.construction import (
ensure_wrapped_if_datetimelike,
extract_array,
+ sanitize_array,
)
from pandas.core.indexers import deprecate_ndim_indexing
from pandas.core.indexes.frozen import FrozenList
@@ -399,18 +399,17 @@ def __new__(
# index-like
elif isinstance(data, (np.ndarray, Index, ABCSeries)):
+ if isinstance(data, ABCMultiIndex):
+ data = data._values
+
if dtype is not None:
# we need to avoid having numpy coerce
# things that look like ints/floats to ints unless
# they are actually ints, e.g. '0' and 0.0
# should not be coerced
# GH 11836
+ data = sanitize_array(data, None, dtype=dtype, copy=copy)
- # error: Argument 1 to "_maybe_cast_with_dtype" has incompatible type
- # "Union[ndarray, Index, Series]"; expected "ndarray"
- data = _maybe_cast_with_dtype(
- data, dtype, copy # type: ignore[arg-type]
- )
dtype = data.dtype
if data.dtype.kind in ["i", "u", "f"]:
@@ -6366,56 +6365,6 @@ def maybe_extract_name(name, obj, cls) -> Hashable:
return name
-def _maybe_cast_with_dtype(data: np.ndarray, dtype: np.dtype, copy: bool) -> np.ndarray:
- """
- If a dtype is passed, cast to the closest matching dtype that is supported
- by Index.
-
- Parameters
- ----------
- data : np.ndarray
- dtype : np.dtype
- copy : bool
-
- Returns
- -------
- np.ndarray
- """
- # we need to avoid having numpy coerce
- # things that look like ints/floats to ints unless
- # they are actually ints, e.g. '0' and 0.0
- # should not be coerced
- # GH 11836
- if is_integer_dtype(dtype):
- inferred = lib.infer_dtype(data, skipna=False)
- if inferred == "integer":
- data = maybe_cast_to_integer_array(data, dtype, copy=copy)
- elif inferred in ["floating", "mixed-integer-float"]:
- if isna(data).any():
- raise ValueError("cannot convert float NaN to integer")
-
- if inferred == "mixed-integer-float":
- data = maybe_cast_to_integer_array(data, dtype)
-
- # If we are actually all equal to integers,
- # then coerce to integer.
- try:
- data = _try_convert_to_int_array(data, copy, dtype)
- except ValueError:
- data = np.array(data, dtype=np.float64, copy=copy)
-
- elif inferred != "string":
- data = data.astype(dtype)
- elif is_float_dtype(dtype):
- inferred = lib.infer_dtype(data, skipna=False)
- if inferred != "string":
- data = data.astype(dtype)
- else:
- data = np.array(data, dtype=dtype, copy=copy)
-
- return data
-
-
def _maybe_cast_data_without_dtype(subarr):
"""
If we have an arraylike input but no passed dtype, try to infer
diff --git a/pandas/tests/indexes/base_class/test_constructors.py b/pandas/tests/indexes/base_class/test_constructors.py
index 0c4f9c6d759b9..bc894579340ab 100644
--- a/pandas/tests/indexes/base_class/test_constructors.py
+++ b/pandas/tests/indexes/base_class/test_constructors.py
@@ -36,7 +36,6 @@ def test_constructor_wrong_kwargs(self):
with tm.assert_produces_warning(FutureWarning):
Index([], foo="bar")
- @pytest.mark.xfail(reason="see GH#21311: Index doesn't enforce dtype argument")
def test_constructor_cast(self):
msg = "could not convert string to float"
with pytest.raises(ValueError, match=msg):
| - [x] closes #21311
- [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [x] whatsnew entry
We can go a step further and re-use sanitize_array for the `dtype is None` case, but that ever-so-slightly changes our datetime-inference behavior, so we'd need to either deprecate or call it a bugfix. | https://api.github.com/repos/pandas-dev/pandas/pulls/40411 | 2021-03-13T00:16:12Z | 2021-03-14T23:45:27Z | 2021-03-14T23:45:27Z | 2021-03-15T03:18:11Z |
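The behavior change is easiest to see from the test this PR un-xfails: a dtype that the data cannot be cast to now raises instead of being silently ignored. A sketch of that behavior, assuming pandas >= 1.3 (where this fix landed):

```python
import pandas as pd

# Before this fix, Index(["a", "b"], dtype="float64") silently dropped the
# requested dtype and returned an object-dtype Index; afterwards the failed
# cast surfaces as a ValueError ("could not convert string to float"),
# the same message the un-xfailed test matches on.
try:
    pd.Index(["a", "b"], dtype="float64")
    raised = False
except ValueError:
    raised = True
assert raised
```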
DOC: Fix window pairwise function other not taking ndarray | diff --git a/pandas/core/window/ewm.py b/pandas/core/window/ewm.py
index 4a6ed109a88a3..e35ff5afca66e 100644
--- a/pandas/core/window/ewm.py
+++ b/pandas/core/window/ewm.py
@@ -447,7 +447,7 @@ def var_func(values, begin, end, min_periods):
create_section_header("Parameters"),
dedent(
"""
- other : Series, DataFrame, or ndarray, optional
+ other : Series or DataFrame, optional
If not supplied then will default to self and produce pairwise
output.
pairwise : bool, default None
@@ -514,7 +514,7 @@ def cov_func(x, y):
create_section_header("Parameters"),
dedent(
"""
- other : Series, DataFrame, or ndarray, optional
+ other : Series or DataFrame, optional
If not supplied then will default to self and produce pairwise
output.
pairwise : bool, default None
diff --git a/pandas/core/window/expanding.py b/pandas/core/window/expanding.py
index 77f8486522626..ac1ebfd4b0825 100644
--- a/pandas/core/window/expanding.py
+++ b/pandas/core/window/expanding.py
@@ -560,7 +560,7 @@ def quantile(
create_section_header("Parameters"),
dedent(
"""
- other : Series, DataFrame, or ndarray, optional
+ other : Series or DataFrame, optional
If not supplied then will default to self and produce pairwise
output.
pairwise : bool, default None
@@ -598,7 +598,7 @@ def cov(
create_section_header("Parameters"),
dedent(
"""
- other : Series, DataFrame, or ndarray, optional
+ other : Series or DataFrame, optional
If not supplied then will default to self and produce pairwise
output.
pairwise : bool, default None
diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index 84c05a0563f04..6db86b940737e 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -2111,7 +2111,7 @@ def quantile(self, quantile: float, interpolation: str = "linear", **kwargs):
create_section_header("Parameters"),
dedent(
"""
- other : Series, DataFrame, or ndarray, optional
+ other : Series or DataFrame, optional
If not supplied then will default to self and produce pairwise
output.
pairwise : bool, default None
@@ -2149,7 +2149,7 @@ def cov(
create_section_header("Parameters"),
dedent(
"""
- other : Series, DataFrame, or ndarray, optional
+ other : Series or DataFrame, optional
If not supplied then will default to self and produce pairwise
output.
pairwise : bool, default None
| - [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
xref: https://github.com/pandas-dev/pandas/pull/40392#discussion_r593013628 | https://api.github.com/repos/pandas-dev/pandas/pulls/40410 | 2021-03-13T00:02:33Z | 2021-03-13T18:55:03Z | 2021-03-13T18:55:03Z | 2021-03-13T18:55:06Z |
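The corrected parameter lines match how the pairwise methods actually behave: `other` must be a Series or DataFrame (pairwise output is produced when it is omitted), not a bare ndarray. A small usage sketch of the covariance case — values chosen for illustration, not taken from the PR:

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0])
other = pd.Series([4.0, 3.0, 2.0, 1.0])

# `other` is a Series here, matching the documented types; a raw ndarray
# is not accepted, which is exactly what the docstring fix reflects.
result = s.rolling(window=2).cov(other)

# Each 2-element window moves +1 in s and -1 in other, so every complete
# window's sample covariance is -0.5; the first entry is NaN because the
# window is incomplete.
```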
CLN: docstrings, annotations, raising corner cases | diff --git a/pandas/_libs/index.pyx b/pandas/_libs/index.pyx
index cb7b9f990a98e..f6f36f6ad523b 100644
--- a/pandas/_libs/index.pyx
+++ b/pandas/_libs/index.pyx
@@ -1,5 +1,7 @@
import warnings
+cimport cython
+
import numpy as np
cimport numpy as cnp
@@ -47,6 +49,7 @@ cdef inline bint is_definitely_invalid_key(object val):
_SIZE_CUTOFF = 1_000_000
+@cython.freelist(32)
cdef class IndexEngine:
cdef readonly:
diff --git a/pandas/_libs/missing.pyx b/pandas/_libs/missing.pyx
index d2f47c9d25496..bd749d6eca18e 100644
--- a/pandas/_libs/missing.pyx
+++ b/pandas/_libs/missing.pyx
@@ -104,6 +104,7 @@ cpdef bint checknull(object val):
- np.datetime64 representation of NaT
- np.timedelta64 representation of NaT
- NA
+ - Decimal("NaN")
Parameters
----------
@@ -143,6 +144,8 @@ cpdef bint checknull_old(object val):
- NaT
- np.datetime64 representation of NaT
- np.timedelta64 representation of NaT
+ - NA
+ - Decimal("NaN")
Parameters
----------
@@ -175,6 +178,8 @@ cpdef ndarray[uint8_t] isnaobj(ndarray arr):
- NaT
- np.datetime64 representation of NaT
- np.timedelta64 representation of NaT
+ - NA
+ - Decimal("NaN")
Parameters
----------
@@ -211,6 +216,7 @@ def isnaobj_old(arr: ndarray) -> ndarray:
- NEGINF
- NaT
- NA
+ - Decimal("NaN")
Parameters
----------
@@ -249,6 +255,8 @@ def isnaobj2d(arr: ndarray) -> ndarray:
- NaT
- np.datetime64 representation of NaT
- np.timedelta64 representation of NaT
+ - NA
+ - Decimal("NaN")
Parameters
----------
@@ -293,6 +301,8 @@ def isnaobj2d_old(arr: ndarray) -> ndarray:
- NaT
- np.datetime64 representation of NaT
- np.timedelta64 representation of NaT
+ - NA
+ - Decimal("NaN")
Parameters
----------
diff --git a/pandas/compat/pickle_compat.py b/pandas/compat/pickle_compat.py
index 9d48035213126..25ebd3d3ddc62 100644
--- a/pandas/compat/pickle_compat.py
+++ b/pandas/compat/pickle_compat.py
@@ -49,7 +49,7 @@ def load_reduce(self):
return
except TypeError:
pass
- elif args and issubclass(args[0], BaseOffset):
+ elif args and isinstance(args[0], type) and issubclass(args[0], BaseOffset):
# TypeError: object.__new__(Day) is not safe, use Day.__new__()
cls = args[0]
stack[-1] = cls.__new__(*args)
diff --git a/pandas/core/array_algos/take.py b/pandas/core/array_algos/take.py
index 110b47a11c3a9..a8d0a7cbfd17a 100644
--- a/pandas/core/array_algos/take.py
+++ b/pandas/core/array_algos/take.py
@@ -256,7 +256,9 @@ def take_2d_multi(
@functools.lru_cache(maxsize=128)
-def _get_take_nd_function_cached(ndim, arr_dtype, out_dtype, axis):
+def _get_take_nd_function_cached(
+ ndim: int, arr_dtype: np.dtype, out_dtype: np.dtype, axis: int
+):
"""
Part of _get_take_nd_function below that doesn't need `mask_info` and thus
can be cached (mask_info potentially contains a numpy ndarray which is not
@@ -289,7 +291,7 @@ def _get_take_nd_function_cached(ndim, arr_dtype, out_dtype, axis):
def _get_take_nd_function(
- ndim: int, arr_dtype, out_dtype, axis: int = 0, mask_info=None
+ ndim: int, arr_dtype: np.dtype, out_dtype: np.dtype, axis: int = 0, mask_info=None
):
"""
Get the appropriate "take" implementation for the given dimension, axis
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 965eb7f68e164..12b343ab5d895 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -657,7 +657,7 @@ def _shallow_copy(self: _IndexT, values, name: Hashable = no_default) -> _IndexT
values : the values to create the new Index, optional
name : Label, defaults to self.name
"""
- name = self.name if name is no_default else name
+ name = self._name if name is no_default else name
return self._simple_new(values, name=name)
@@ -665,7 +665,7 @@ def _view(self: _IndexT) -> _IndexT:
"""
fastpath to make a shallow copy, i.e. new object with same data.
"""
- result = self._simple_new(self._values, name=self.name)
+ result = self._simple_new(self._values, name=self._name)
result._cache = self._cache
return result
@@ -4569,7 +4569,7 @@ def __getitem__(self, key):
# pessimization of basic indexing.
result = getitem(key)
# Going through simple_new for performance.
- return type(self)._simple_new(result, name=self.name)
+ return type(self)._simple_new(result, name=self._name)
if com.is_bool_indexer(key):
key = np.asarray(key, dtype=bool)
@@ -4585,7 +4585,7 @@ def __getitem__(self, key):
return result
# NB: Using _constructor._simple_new would break if MultiIndex
# didn't override __getitem__
- return self._constructor._simple_new(result, name=self.name)
+ return self._constructor._simple_new(result, name=self._name)
else:
return result
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index 5cdf4c1ecef55..f8390308b18f4 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -240,7 +240,7 @@ def _shallow_copy(
values: Categorical,
name: Hashable = no_default,
):
- name = self.name if name is no_default else name
+ name = self._name if name is no_default else name
if values is not None:
# In tests we only get here with Categorical objects that
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 0e32e5c5d2762..31ad8b7d8a295 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -645,7 +645,7 @@ class DatetimeTimedeltaMixin(DatetimeIndexOpsMixin):
def _with_freq(self, freq):
arr = self._data._with_freq(freq)
- return type(self)._simple_new(arr, name=self.name)
+ return type(self)._simple_new(arr, name=self._name)
@property
def _has_complex_internals(self) -> bool:
diff --git a/pandas/core/indexes/extension.py b/pandas/core/indexes/extension.py
index f714da0d0e303..02fb6c6beb391 100644
--- a/pandas/core/indexes/extension.py
+++ b/pandas/core/indexes/extension.py
@@ -250,7 +250,7 @@ def __getitem__(self, key):
result = self._data[key]
if isinstance(result, type(self._data)):
if result.ndim == 1:
- return type(self)(result, name=self.name)
+ return type(self)(result, name=self._name)
# Unpack to ndarray for MPL compat
result = result._ndarray
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 7bb3dc5ab4545..0d89e75c097c1 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -1718,7 +1718,7 @@ def unique(self, level=None):
level = self._get_level_number(level)
return self._get_level_values(level=level, unique=True)
- def to_frame(self, index=True, name=None) -> DataFrame:
+ def to_frame(self, index: bool = True, name=None) -> DataFrame:
"""
Create a DataFrame with the levels of the MultiIndex as columns.
@@ -2123,7 +2123,12 @@ def _getitem_slice(self: MultiIndex, slobj: slice) -> MultiIndex:
@Appender(_index_shared_docs["take"] % _index_doc_kwargs)
def take(
- self: MultiIndex, indices, axis=0, allow_fill=True, fill_value=None, **kwargs
+ self: MultiIndex,
+ indices,
+ axis: int = 0,
+ allow_fill: bool = True,
+ fill_value=None,
+ **kwargs,
) -> MultiIndex:
nv.validate_take((), kwargs)
indices = ensure_platform_int(indices)
@@ -3647,7 +3652,7 @@ def _intersection(self, other, sort=False) -> MultiIndex:
zip(*uniq_tuples), sortorder=0, names=result_names
)
- def _difference(self, other, sort):
+ def _difference(self, other, sort) -> MultiIndex:
other, result_names = self._convert_can_do_setop(other)
this = self._get_unique_index()
@@ -3705,7 +3710,7 @@ def symmetric_difference(self, other, result_name=None, sort=None):
# --------------------------------------------------------------------
@doc(Index.astype)
- def astype(self, dtype, copy=True):
+ def astype(self, dtype, copy: bool = True):
dtype = pandas_dtype(dtype)
if is_categorical_dtype(dtype):
msg = "> 1 ndim Categorical are not supported at this time"
diff --git a/pandas/core/indexes/numeric.py b/pandas/core/indexes/numeric.py
index b6f476d864011..b3e4abc6c4040 100644
--- a/pandas/core/indexes/numeric.py
+++ b/pandas/core/indexes/numeric.py
@@ -125,7 +125,7 @@ def _maybe_cast_slice_bound(self, label, side: str, kind):
@doc(Index._shallow_copy)
def _shallow_copy(self, values, name: Hashable = lib.no_default):
if not self._can_hold_na and values.dtype.kind == "f":
- name = self.name if name is lib.no_default else name
+ name = self._name if name is lib.no_default else name
# Ensure we are not returning an Int64Index with float data:
return Float64Index._simple_new(values, name=name)
return super()._shallow_copy(values=values, name=name)
diff --git a/pandas/core/indexes/range.py b/pandas/core/indexes/range.py
index e446786802239..cdf2f338529be 100644
--- a/pandas/core/indexes/range.py
+++ b/pandas/core/indexes/range.py
@@ -459,7 +459,7 @@ def _shallow_copy(self, values, name: Hashable = no_default):
return Int64Index._simple_new(values, name=name)
def _view(self: RangeIndex) -> RangeIndex:
- result = type(self)._simple_new(self._range, name=self.name)
+ result = type(self)._simple_new(self._range, name=self._name)
result._cache = self._cache
return result
@@ -810,7 +810,7 @@ def __getitem__(self, key):
"""
if isinstance(key, slice):
new_range = self._range[key]
- return self._simple_new(new_range, name=self.name)
+ return self._simple_new(new_range, name=self._name)
elif is_integer(key):
new_key = int(key)
try:
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 83a7c224060a8..d87df9d224bce 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -260,7 +260,7 @@ def make_block(self, values, placement=None) -> Block:
not specified
"""
if placement is None:
- placement = self.mgr_locs
+ placement = self._mgr_locs
if self.is_extension:
values = ensure_block_shape(values, ndim=self.ndim)
@@ -272,8 +272,7 @@ def make_block(self, values, placement=None) -> Block:
def make_block_same_class(self, values, placement=None) -> Block:
""" Wrap given values in a block of same type as self. """
if placement is None:
- placement = self.mgr_locs
- # TODO: perf by not going through new_block
+ placement = self._mgr_locs
# We assume maybe_coerce_values has already been called
return type(self)(values, placement=placement, ndim=self.ndim)
@@ -318,7 +317,7 @@ def getitem_block(self, slicer, new_mgr_locs=None) -> Block:
"""
if new_mgr_locs is None:
axis0_slicer = slicer[0] if isinstance(slicer, tuple) else slicer
- new_mgr_locs = self.mgr_locs[axis0_slicer]
+ new_mgr_locs = self._mgr_locs[axis0_slicer]
elif not isinstance(new_mgr_locs, BlockPlacement):
new_mgr_locs = BlockPlacement(new_mgr_locs)
@@ -358,7 +357,7 @@ def delete(self, loc) -> None:
Delete given loc(-s) from block in-place.
"""
self.values = np.delete(self.values, loc, 0)
- self.mgr_locs = self.mgr_locs.delete(loc)
+ self.mgr_locs = self._mgr_locs.delete(loc)
@final
def apply(self, func, **kwargs) -> List[Block]:
@@ -399,7 +398,7 @@ def _split_op_result(self, result) -> List[Block]:
# TODO(EA2D): unnecessary with 2D EAs
# if we get a 2D ExtensionArray, we need to split it into 1D pieces
nbs = []
- for i, loc in enumerate(self.mgr_locs):
+ for i, loc in enumerate(self._mgr_locs):
vals = result[i]
block = self.make_block(values=vals, placement=loc)
nbs.append(block)
@@ -462,7 +461,7 @@ def _split(self) -> List[Block]:
assert self.ndim == 2
new_blocks = []
- for i, ref_loc in enumerate(self.mgr_locs):
+ for i, ref_loc in enumerate(self._mgr_locs):
vals = self.values[slice(i, i + 1)]
nb = self.make_block(vals, BlockPlacement(ref_loc))
@@ -512,12 +511,12 @@ def make_a_block(nv, ref_loc):
nv = f(mask, new_values, None)
else:
nv = new_values if inplace else new_values.copy()
- block = make_a_block(nv, self.mgr_locs)
+ block = make_a_block(nv, self._mgr_locs)
return [block]
# ndim > 1
new_blocks = []
- for i, ref_loc in enumerate(self.mgr_locs):
+ for i, ref_loc in enumerate(self._mgr_locs):
m = mask[i]
v = new_values[i]
@@ -1254,7 +1253,7 @@ def take_nd(
# this assertion
assert not (axis == 0 and new_mgr_locs is None)
if new_mgr_locs is None:
- new_mgr_locs = self.mgr_locs
+ new_mgr_locs = self._mgr_locs
if not is_dtype_equal(new_values.dtype, self.dtype):
return self.make_block(new_values, new_mgr_locs)
@@ -1362,7 +1361,7 @@ def where(self, other, cond, errors="raise", axis: int = 0) -> List[Block]:
result = cast(np.ndarray, result) # EABlock overrides where
taken = result.take(m.nonzero()[0], axis=axis)
r = maybe_downcast_numeric(taken, self.dtype)
- nb = self.make_block(r.T, placement=self.mgr_locs[m])
+ nb = self.make_block(r.T, placement=self._mgr_locs[m])
result_blocks.append(nb)
return result_blocks
@@ -1423,7 +1422,7 @@ def quantile(
result = quantile_compat(self.values, qs, interpolation, axis)
- return new_block(result, placement=self.mgr_locs, ndim=2)
+ return new_block(result, placement=self._mgr_locs, ndim=2)
class ExtensionBlock(Block):
@@ -1449,7 +1448,7 @@ def shape(self) -> Shape:
# TODO(EA2D): override unnecessary with 2D EAs
if self.ndim == 1:
return (len(self.values),)
- return len(self.mgr_locs), len(self.values)
+ return len(self._mgr_locs), len(self.values)
def iget(self, col):
@@ -1594,7 +1593,7 @@ def take_nd(
# this assertion
assert not (self.ndim == 1 and new_mgr_locs is None)
if new_mgr_locs is None:
- new_mgr_locs = self.mgr_locs
+ new_mgr_locs = self._mgr_locs
return self.make_block_same_class(new_values, new_mgr_locs)
@@ -1630,7 +1629,7 @@ def _slice(self, slicer):
)
# GH#32959 only full-slicers along fake-dim0 are valid
# TODO(EA2D): won't be necessary with 2D EAs
- new_locs = self.mgr_locs[first]
+ new_locs = self._mgr_locs[first]
if len(new_locs):
# effectively slice(None)
slicer = slicer[1]
@@ -1741,9 +1740,10 @@ def _unstack(self, unstacker, fill_value, new_placement):
# TODO: in all tests we have mask.all(); can we rely on that?
blocks = [
+ # TODO: could cast to object depending on fill_value?
self.make_block_same_class(
self.values.take(indices, allow_fill=True, fill_value=fill_value),
- [place],
+ BlockPlacement(place),
)
for indices, place in zip(new_values.T, new_placement)
]
diff --git a/pandas/util/_exceptions.py b/pandas/util/_exceptions.py
index c31c421ee1445..e70c185628f71 100644
--- a/pandas/util/_exceptions.py
+++ b/pandas/util/_exceptions.py
@@ -11,6 +11,8 @@ def rewrite_exception(old_name: str, new_name: str):
try:
yield
except Exception as err:
+ if not err.args:
+ raise
msg = str(err.args[0])
msg = msg.replace(old_name, new_name)
args: Tuple[str, ...] = (msg,)
| PERF: lookup ._name and ._mgr_locs instead of .name and .mgr_locs. The property lookups add up, e.g. mgr_locs is just shy of 2% of the benchmark discussed https://github.com/pandas-dev/pandas/pull/40171#issuecomment-790219422 | https://api.github.com/repos/pandas-dev/pandas/pulls/40409 | 2021-03-13T00:01:06Z | 2021-03-16T13:48:15Z | 2021-03-16T13:48:15Z | 2021-03-16T14:58:01Z |
DOC: remove pin for pydata-sphinx-theme + update for latest release | diff --git a/doc/source/_static/css/pandas.css b/doc/source/_static/css/pandas.css
index 403d182e3d3e5..87357fd8ae716 100644
--- a/doc/source/_static/css/pandas.css
+++ b/doc/source/_static/css/pandas.css
@@ -2,7 +2,7 @@
:root {
/* Use softer blue from bootstrap's default info color */
- --color-info: 23, 162, 184;
+ --pst-color-info: 23, 162, 184;
}
/* Getting started index page */
diff --git a/environment.yml b/environment.yml
index ebf22bbf067a6..1259d0dd4ae44 100644
--- a/environment.yml
+++ b/environment.yml
@@ -113,5 +113,5 @@ dependencies:
- tabulate>=0.8.3 # DataFrame.to_markdown
- natsort # DataFrame.sort_values
- pip:
- - git+https://github.com/pandas-dev/pydata-sphinx-theme.git@2488b7defbd3d753dd5fcfc890fc4a7e79d25103
+ - git+https://github.com/pydata/pydata-sphinx-theme.git@master
- numpydoc < 1.2 # 2021-02-09 1.2dev breaking CI
diff --git a/requirements-dev.txt b/requirements-dev.txt
index f60e3bf0daea7..1817d79f96139 100644
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -76,5 +76,5 @@ cftime
pyreadstat
tabulate>=0.8.3
natsort
-git+https://github.com/pandas-dev/pydata-sphinx-theme.git@2488b7defbd3d753dd5fcfc890fc4a7e79d25103
+git+https://github.com/pydata/pydata-sphinx-theme.git@master
numpydoc < 1.2
| No longer pinning to a specific commit, since the bug with the mobile dropdown has been fixed; also update the CSS variable name for changes in the latest release.
| https://api.github.com/repos/pandas-dev/pandas/pulls/40407 | 2021-03-12T20:00:01Z | 2021-03-12T22:06:19Z | 2021-03-12T22:06:19Z | 2021-04-06T16:35:43Z |
CLN: unreachable code in algos.diff | diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index c3705fada724a..0fa02d54b5b78 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -1634,10 +1634,10 @@ def diff(arr, n: int, axis: int = 0, stacklevel=3):
Parameters
----------
- arr : ndarray
+ arr : ndarray or ExtensionArray
n : int
number of periods
- axis : int
+ axis : {0, 1}
axis to shift on
stacklevel : int
The stacklevel for the lost dtype warning.
@@ -1651,7 +1651,8 @@ def diff(arr, n: int, axis: int = 0, stacklevel=3):
na = np.nan
dtype = arr.dtype
- if dtype.kind == "b":
+ is_bool = is_bool_dtype(dtype)
+ if is_bool:
op = operator.xor
else:
op = operator.sub
@@ -1677,17 +1678,15 @@ def diff(arr, n: int, axis: int = 0, stacklevel=3):
dtype = arr.dtype
is_timedelta = False
- is_bool = False
if needs_i8_conversion(arr.dtype):
dtype = np.int64
arr = arr.view("i8")
na = iNaT
is_timedelta = True
- elif is_bool_dtype(dtype):
+ elif is_bool:
# We have to cast in order to be able to hold np.nan
dtype = np.object_
- is_bool = True
elif is_integer_dtype(dtype):
# We have to cast in order to be able to hold np.nan
@@ -1708,45 +1707,26 @@ def diff(arr, n: int, axis: int = 0, stacklevel=3):
dtype = np.dtype(dtype)
out_arr = np.empty(arr.shape, dtype=dtype)
- na_indexer = [slice(None)] * arr.ndim
+ na_indexer = [slice(None)] * 2
na_indexer[axis] = slice(None, n) if n >= 0 else slice(n, None)
out_arr[tuple(na_indexer)] = na
- if arr.ndim == 2 and arr.dtype.name in _diff_special:
+ if arr.dtype.name in _diff_special:
# TODO: can diff_2d dtype specialization troubles be fixed by defining
# out_arr inside diff_2d?
algos.diff_2d(arr, out_arr, n, axis, datetimelike=is_timedelta)
else:
# To keep mypy happy, _res_indexer is a list while res_indexer is
# a tuple, ditto for lag_indexer.
- _res_indexer = [slice(None)] * arr.ndim
+ _res_indexer = [slice(None)] * 2
_res_indexer[axis] = slice(n, None) if n >= 0 else slice(None, n)
res_indexer = tuple(_res_indexer)
- _lag_indexer = [slice(None)] * arr.ndim
+ _lag_indexer = [slice(None)] * 2
_lag_indexer[axis] = slice(None, -n) if n > 0 else slice(-n, None)
lag_indexer = tuple(_lag_indexer)
- # need to make sure that we account for na for datelike/timedelta
- # we don't actually want to subtract these i8 numbers
- if is_timedelta:
- res = arr[res_indexer]
- lag = arr[lag_indexer]
-
- mask = (arr[res_indexer] == na) | (arr[lag_indexer] == na)
- if mask.any():
- res = res.copy()
- res[mask] = 0
- lag = lag.copy()
- lag[mask] = 0
-
- result = res - lag
- result[mask] = na
- out_arr[res_indexer] = result
- elif is_bool:
- out_arr[res_indexer] = arr[res_indexer] ^ arr[lag_indexer]
- else:
- out_arr[res_indexer] = arr[res_indexer] - arr[lag_indexer]
+ out_arr[res_indexer] = op(arr[res_indexer], arr[lag_indexer])
if is_timedelta:
out_arr = out_arr.view("timedelta64[ns]")
| it looks like some code is unreachable following #37140 (assuming 1d or 2d only) cc @jbrockmendel | https://api.github.com/repos/pandas-dev/pandas/pulls/40406 | 2021-03-12T19:05:06Z | 2021-03-12T22:07:21Z | 2021-03-12T22:07:21Z | 2021-03-13T09:46:14Z |
REF: consistent arguments for create_block_manager_from_blocks | diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py
index 63a437a91f6e4..5ec4f8623aa0f 100644
--- a/pandas/core/internals/construction.py
+++ b/pandas/core/internals/construction.py
@@ -296,6 +296,8 @@ def ndarray_to_mgr(
)
values = values.T
+ _check_values_indices_shape_match(values, index, columns)
+
# if we don't have a dtype specified, then try to convert objects
# on the entire block; this is to convert if we have datetimelike's
# embedded in an object type
@@ -317,15 +319,37 @@ def ndarray_to_mgr(
else:
datelike_vals = maybe_infer_to_datetimelike(values)
datelike_vals = maybe_squeeze_dt64tz(datelike_vals)
- block_values = [datelike_vals]
+ nb = new_block(datelike_vals, placement=slice(len(columns)), ndim=2)
+ block_values = [nb]
else:
- # error: List item 0 has incompatible type "Union[ExtensionArray, ndarray]";
- # expected "Block"
- block_values = [maybe_squeeze_dt64tz(values)] # type: ignore[list-item]
+ new_values = maybe_squeeze_dt64tz(values)
+ nb = new_block(new_values, placement=slice(len(columns)), ndim=2)
+ block_values = [nb]
+
+ if len(columns) == 0:
+ block_values = []
return create_block_manager_from_blocks(block_values, [columns, index])
+def _check_values_indices_shape_match(
+ values: np.ndarray, index: Index, columns: Index
+) -> None:
+ """
+ Check that the shape implied by our axes matches the actual shape of the
+ data.
+ """
+ if values.shape[0] != len(columns):
+ # Could let this raise in Block constructor, but we get a more
+ # helpful exception message this way.
+ if values.shape[1] == 0:
+ raise ValueError("Empty data passed with indices specified.")
+
+ passed = values.T.shape
+ implied = (len(index), len(columns))
+ raise ValueError(f"Shape of passed values is {passed}, indices imply {implied}")
+
+
def maybe_squeeze_dt64tz(dta: ArrayLike) -> ArrayLike:
"""
If we have a tzaware DatetimeArray with shape (1, N), squeeze to (N,)
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 6bd3e37ae101e..1bbd253fc56c5 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -1726,30 +1726,19 @@ def set_values(self, values: ArrayLike):
# Constructor Helpers
-def create_block_manager_from_blocks(blocks, axes: List[Index]) -> BlockManager:
+def create_block_manager_from_blocks(
+ blocks: List[Block], axes: List[Index]
+) -> BlockManager:
try:
- if len(blocks) == 1 and not isinstance(blocks[0], Block):
- # if blocks[0] is of length 0, return empty blocks
- if not len(blocks[0]):
- blocks = []
- else:
- # It's OK if a single block is passed as values, its placement
- # is basically "all items", but if there're many, don't bother
- # converting, it's an error anyway.
- blocks = [
- new_block(
- values=blocks[0], placement=slice(0, len(axes[0])), ndim=2
- )
- ]
-
mgr = BlockManager(blocks, axes)
- mgr._consolidate_inplace()
- return mgr
- except ValueError as e:
- blocks = [getattr(b, "values", b) for b in blocks]
- tot_items = sum(b.shape[0] for b in blocks)
- raise construction_error(tot_items, blocks[0].shape[1:], axes, e)
+ except ValueError as err:
+ arrays = [blk.values for blk in blocks]
+ tot_items = sum(arr.shape[0] for arr in arrays)
+ raise construction_error(tot_items, arrays[0].shape[1:], axes, err)
+
+ mgr._consolidate_inplace()
+ return mgr
# We define this here so we can override it in tests.extension.test_numpy
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [ ] whatsnew entry
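For illustration, the shape check factored out here as `_check_values_indices_shape_match` surfaces at the public `DataFrame` constructor; a minimal sketch of the error it guards (the message text matches the f-string in the diff above):

```python
import numpy as np
import pandas as pd

# Passing a (3, 2) ndarray but naming three columns trips the shape
# check before any Block is constructed:
try:
    pd.DataFrame(np.ones((3, 2)), index=[0, 1, 2], columns=["a", "b", "c"])
except ValueError as err:
    print(err)  # Shape of passed values is (3, 2), indices imply (3, 3)
```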
We do expose create_block_manager_from_blocks in core.internals, so could put the old version in internals.api and continue exposing that. Not sure if any downstream libraries are actually using it. | https://api.github.com/repos/pandas-dev/pandas/pulls/40403 | 2021-03-12T17:37:45Z | 2021-03-16T13:51:26Z | 2021-03-16T13:51:26Z | 2021-03-16T14:54:22Z |
BUG: fix convert_dtypes to handle empty df | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index 56a5412d4ecfc..190bf03203903 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -625,6 +625,7 @@ Other
- Bug in :class:`Styler` where multiple elements in CSS-selectors were not correctly added to ``table_styles`` (:issue:`39942`)
- Bug in :meth:`DataFrame.equals`, :meth:`Series.equals`, :meth:`Index.equals` with object-dtype containing ``np.datetime64("NaT")`` or ``np.timedelta64("NaT")`` (:issue:`39650`)
- Bug in :func:`pandas.util.show_versions` where console JSON output was not proper JSON (:issue:`39701`)
+- Bug in :meth:`DataFrame.convert_dtypes` incorrectly raised ValueError when called on an empty DataFrame (:issue:`40393`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 67533259ae0c2..7edd501d27215 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -6322,7 +6322,10 @@ def convert_dtypes(
)
for col_name, col in self.items()
]
- return concat(results, axis=1, copy=False)
+ if len(results) > 0:
+ return concat(results, axis=1, copy=False)
+ else:
+ return self.copy()
# ----------------------------------------------------------------------
# Filling NA's
diff --git a/pandas/tests/frame/methods/test_convert_dtypes.py b/pandas/tests/frame/methods/test_convert_dtypes.py
index cb0da59bc1afa..dd7bf0aada449 100644
--- a/pandas/tests/frame/methods/test_convert_dtypes.py
+++ b/pandas/tests/frame/methods/test_convert_dtypes.py
@@ -26,3 +26,8 @@ def test_convert_dtypes(self, convert_integer, expected):
}
)
tm.assert_frame_equal(result, expected)
+
+ def test_convert_empty(self):
+ # Empty DataFrame can pass convert_dtypes, see GH#40393
+ empty_df = pd.DataFrame()
+ tm.assert_frame_equal(empty_df, empty_df.convert_dtypes())
| Added a conditional to check that the DataFrame has at least one column, and return the original (empty) DataFrame if it does not.
- [x] closes #40393
- [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [x] whatsnew entry
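A minimal reproduction of the fixed behavior (mirroring the test added in this PR):

```python
import pandas as pd

# An empty DataFrame yields no per-column results, so concat() used to
# raise "No objects to concatenate"; with the fix a copy is returned.
empty_df = pd.DataFrame()
result = empty_df.convert_dtypes()

assert result.empty and result is not empty_df
```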
| https://api.github.com/repos/pandas-dev/pandas/pulls/40402 | 2021-03-12T16:29:49Z | 2021-03-15T14:00:57Z | 2021-03-15T14:00:57Z | 2021-03-15T14:35:58Z |
CLN: remove unreached boolean-mask case from _preprocess_slice_or_indexer | diff --git a/pandas/core/internals/concat.py b/pandas/core/internals/concat.py
index e2949eb227fbf..ee684d40cd95f 100644
--- a/pandas/core/internals/concat.py
+++ b/pandas/core/internals/concat.py
@@ -52,7 +52,7 @@
from pandas import Index
-def concatenate_array_managers(
+def _concatenate_array_managers(
mgrs_indexers, axes: List[Index], concat_axis: int, copy: bool
) -> Manager:
"""
@@ -110,7 +110,7 @@ def concatenate_managers(
"""
# TODO(ArrayManager) this assumes that all managers are of the same type
if isinstance(mgrs_indexers[0][0], ArrayManager):
- return concatenate_array_managers(mgrs_indexers, axes, concat_axis, copy)
+ return _concatenate_array_managers(mgrs_indexers, axes, concat_axis, copy)
concat_plans = [
_get_mgr_concatenation_plan(mgr, indexers) for mgr, indexers in mgrs_indexers
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 9c21fcf957ecd..7bbf341b844b8 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -35,6 +35,7 @@
from pandas.core.dtypes.cast import infer_dtype_from_scalar
from pandas.core.dtypes.common import (
DT64NS_DTYPE,
+ ensure_int64,
is_dtype_equal,
is_extension_array_dtype,
is_list_like,
@@ -1291,7 +1292,7 @@ def insert(
def reindex_indexer(
self: T,
- new_axis,
+ new_axis: Index,
indexer,
axis: int,
fill_value=None,
@@ -1357,7 +1358,10 @@ def reindex_indexer(
return type(self).from_blocks(new_blocks, new_axes)
def _slice_take_blocks_ax0(
- self, slice_or_indexer, fill_value=lib.no_default, only_slice: bool = False
+ self,
+ slice_or_indexer: Union[slice, np.ndarray],
+ fill_value=lib.no_default,
+ only_slice: bool = False,
) -> List[Block]:
"""
Slice/take blocks along axis=0.
@@ -1366,7 +1370,7 @@ def _slice_take_blocks_ax0(
Parameters
----------
- slice_or_indexer : slice, ndarray[bool], or list-like of ints
+ slice_or_indexer : slice or np.ndarray[int64]
fill_value : scalar, default lib.no_default
only_slice : bool, default False
If True, we always return views on existing arrays, never copies.
@@ -1385,12 +1389,11 @@ def _slice_take_blocks_ax0(
if self.is_single_block:
blk = self.blocks[0]
- if sl_type in ("slice", "mask"):
+ if sl_type == "slice":
# GH#32959 EABlock would fail since we can't make 0-width
# TODO(EA2D): special casing unnecessary with 2D EAs
if sllen == 0:
return []
- # TODO: tests all have isinstance(slobj, slice), other possibilities?
return [blk.getitem_block(slobj, new_mgr_locs=slice(0, sllen))]
elif not allow_fill or self.ndim == 1:
if allow_fill and fill_value is None:
@@ -1416,7 +1419,7 @@ def _slice_take_blocks_ax0(
)
]
- if sl_type in ("slice", "mask"):
+ if sl_type == "slice":
blknos = self.blknos[slobj]
blklocs = self.blklocs[slobj]
else:
@@ -1658,9 +1661,6 @@ def get_slice(self, slobj: slice, axis: int = 0) -> SingleBlockManager:
blk = self._block
array = blk._slice(slobj)
- if array.ndim > blk.values.ndim:
- # This will be caught by Series._get_values
- raise ValueError("dimension-expanding indexing not allowed")
block = blk.make_block_same_class(array, placement=slice(0, len(array)))
new_index = self.index._getitem_slice(slobj)
return type(self)(block, new_index)
@@ -1975,10 +1975,6 @@ def _merge_blocks(
if can_consolidate:
- if dtype is None:
- if len({b.dtype for b in blocks}) != 1:
- raise AssertionError("_merge_blocks are invalid!")
-
# TODO: optimization potential in case all mgrs contain slices and
# combination of those slices is a slice, too.
new_mgr_locs = np.concatenate([b.mgr_locs.as_array for b in blocks])
@@ -2005,20 +2001,25 @@ def _fast_count_smallints(arr: np.ndarray) -> np.ndarray:
return np.c_[nz, counts[nz]]
-def _preprocess_slice_or_indexer(slice_or_indexer, length: int, allow_fill: bool):
+def _preprocess_slice_or_indexer(
+ slice_or_indexer: Union[slice, np.ndarray], length: int, allow_fill: bool
+):
if isinstance(slice_or_indexer, slice):
return (
"slice",
slice_or_indexer,
libinternals.slice_len(slice_or_indexer, length),
)
- elif (
- isinstance(slice_or_indexer, np.ndarray) and slice_or_indexer.dtype == np.bool_
- ):
- return "mask", slice_or_indexer, slice_or_indexer.sum()
else:
+ if (
+ not isinstance(slice_or_indexer, np.ndarray)
+ or slice_or_indexer.dtype.kind != "i"
+ ):
+ dtype = getattr(slice_or_indexer, "dtype", None)
+ raise TypeError(type(slice_or_indexer), dtype)
+
# TODO: np.intp?
- indexer = np.asanyarray(slice_or_indexer, dtype=np.int64)
+ indexer = ensure_int64(slice_or_indexer)
if not allow_fill:
indexer = maybe_convert_indices(indexer, length)
return "fancy", indexer, len(indexer)
diff --git a/pandas/tests/internals/test_internals.py b/pandas/tests/internals/test_internals.py
index 3c37d827c0778..1728c31ebf767 100644
--- a/pandas/tests/internals/test_internals.py
+++ b/pandas/tests/internals/test_internals.py
@@ -931,7 +931,9 @@ def assert_reindex_indexer_is_ok(mgr, axis, new_labels, indexer, fill_value):
tm.assert_index_equal(reindexed.axes[axis], new_labels)
for ax in range(mgr.ndim):
- assert_reindex_indexer_is_ok(mgr, ax, Index([]), [], fill_value)
+ assert_reindex_indexer_is_ok(
+ mgr, ax, Index([]), np.array([], dtype=np.intp), fill_value
+ )
assert_reindex_indexer_is_ok(
mgr, ax, mgr.axes[ax], np.arange(mgr.shape[ax]), fill_value
)
@@ -949,22 +951,26 @@ def assert_reindex_indexer_is_ok(mgr, axis, new_labels, indexer, fill_value):
mgr, ax, mgr.axes[ax], np.arange(mgr.shape[ax])[::-1], fill_value
)
assert_reindex_indexer_is_ok(
- mgr, ax, Index(["foo", "bar", "baz"]), [0, 0, 0], fill_value
+ mgr, ax, Index(["foo", "bar", "baz"]), np.array([0, 0, 0]), fill_value
)
assert_reindex_indexer_is_ok(
- mgr, ax, Index(["foo", "bar", "baz"]), [-1, 0, -1], fill_value
+ mgr, ax, Index(["foo", "bar", "baz"]), np.array([-1, 0, -1]), fill_value
)
assert_reindex_indexer_is_ok(
mgr,
ax,
Index(["foo", mgr.axes[ax][0], "baz"]),
- [-1, -1, -1],
+ np.array([-1, -1, -1]),
fill_value,
)
if mgr.shape[ax] >= 3:
assert_reindex_indexer_is_ok(
- mgr, ax, Index(["foo", "bar", "baz"]), [0, 1, 2], fill_value
+ mgr,
+ ax,
+ Index(["foo", "bar", "baz"]),
+ np.array([0, 1, 2]),
+ fill_value,
)
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/40401 | 2021-03-12T15:42:42Z | 2021-03-16T13:49:08Z | 2021-03-16T13:49:07Z | 2021-03-16T14:56:16Z |
DOC: Backticks missing in pandas.DataFrame.query | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 98abe8eaffca8..e42aaf78efba6 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3784,7 +3784,7 @@ def _box_col_values(self, values, loc: int) -> Series:
# Unsorted
def query(self, expr: str, inplace: bool = False, **kwargs):
- """
+ r"""
Query the columns of a DataFrame with a boolean expression.
Parameters
@@ -3799,8 +3799,8 @@ def query(self, expr: str, inplace: bool = False, **kwargs):
You can refer to column names that are not valid Python variable names
by surrounding them in backticks. Thus, column names containing spaces
or punctuations (besides underscores) or starting with digits must be
- surrounded by backticks. (For example, a column named "Area (cm^2) would
- be referenced as `Area (cm^2)`). Column names which are Python keywords
+ surrounded by backticks. (For example, a column named "Area (cm^2)" would
+ be referenced as \`Area (cm^2)\`). Column names which are Python keywords
(like "list", "for", "import", etc) cannot be used.
For example, if one of your columns is called ``a a`` and you want
| - [ ] closes #40375
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [ ] whatsnew entry
I also noticed that in the same sentence double quotes were missing, so I added those as well. | https://api.github.com/repos/pandas-dev/pandas/pulls/40400 | 2021-03-12T14:49:42Z | 2021-03-12T22:06:45Z | 2021-03-12T22:06:45Z | 2021-03-15T05:22:40Z |
CLN: _maybe_upcast_for_op doesn't need to check timedelta ndarray | diff --git a/pandas/core/ops/array_ops.py b/pandas/core/ops/array_ops.py
index 9153eb25032e7..6f6972c34f0a9 100644
--- a/pandas/core/ops/array_ops.py
+++ b/pandas/core/ops/array_ops.py
@@ -195,6 +195,8 @@ def arithmetic_op(left: ArrayLike, right: Any, op):
# NB: We assume that extract_array has already been called
# on `left` and `right`.
+ # We need to special-case datetime64/timedelta64 dtypes (e.g. because numpy
+ # casts integer dtypes to timedelta64 when operating with timedelta64 - GH#22390)
lvalues = ensure_wrapped_if_datetimelike(left)
rvalues = ensure_wrapped_if_datetimelike(right)
rvalues = _maybe_upcast_for_op(rvalues, lvalues.shape)
@@ -439,11 +441,6 @@ def _maybe_upcast_for_op(obj, shape: Shape):
Be careful to call this *after* determining the `name` attribute to be
attached to the result of the arithmetic operation.
"""
- from pandas.core.arrays import (
- DatetimeArray,
- TimedeltaArray,
- )
-
if type(obj) is timedelta:
# GH#22390 cast up to Timedelta to rely on Timedelta
# implementation; otherwise operation against numeric-dtype
@@ -453,6 +450,8 @@ def _maybe_upcast_for_op(obj, shape: Shape):
# GH#28080 numpy casts integer-dtype to datetime64 when doing
# array[int] + datetime64, which we do not allow
if isna(obj):
+ from pandas.core.arrays import DatetimeArray
+
# Avoid possible ambiguities with pd.NaT
obj = obj.astype("datetime64[ns]")
right = np.broadcast_to(obj, shape)
@@ -462,6 +461,8 @@ def _maybe_upcast_for_op(obj, shape: Shape):
elif isinstance(obj, np.timedelta64):
if isna(obj):
+ from pandas.core.arrays import TimedeltaArray
+
# wrapping timedelta64("NaT") in Timedelta returns NaT,
# which would incorrectly be treated as a datetime-NaT, so
# we broadcast and wrap in a TimedeltaArray
@@ -474,9 +475,4 @@ def _maybe_upcast_for_op(obj, shape: Shape):
# np.timedelta64(3, 'D') / 2 == np.timedelta64(1, 'D')
return Timedelta(obj)
- elif isinstance(obj, np.ndarray) and obj.dtype.kind == "m":
- # GH#22390 Unfortunately we need to special-case right-hand
- # timedelta64 dtypes because numpy casts integer dtypes to
- # timedelta64 when operating with timedelta64
- return TimedeltaArray._from_sequence(obj)
return obj
| The `elif isinstance(obj, np.ndarray) and obj.dtype.kind == "m":` branch is never reached, because just before calling `_maybe_upcast_for_op` we already did an `ensure_wrapped_if_datetimelike`, which basically does the same thing (codecov also confirms it's not covered). So that check is removed.
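A sketch of the case the wrapping handles, at the public level (GH#22390: numpy would otherwise cast the integer operand to timedelta64):

```python
import numpy as np
import pandas as pd

ser = pd.Series([1, 2, 3])
tdarr = pd.to_timedelta(["1 day", "2 days", "3 days"]).to_numpy()

# The right-hand m8[ns] ndarray gets wrapped (via ensure_wrapped_if_datetimelike)
# so the op is handled by TimedeltaArray rather than raw numpy casting:
result = ser * tdarr
assert result.iloc[1] == pd.Timedelta("4 days")
```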
In addition, moving the Datetime/TimedeltaArray imports inline within the `if/elif` checks gives a little less overhead for the non-datetimelike cases. | https://api.github.com/repos/pandas-dev/pandas/pulls/40399 | 2021-03-12T14:05:39Z | 2021-03-17T07:22:07Z | 2021-03-17T07:22:07Z | 2021-03-17T07:22:13Z |
TYP: Arraylike alias change follow up | diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index 42299aaf46a48..0900688e04374 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -123,6 +123,8 @@
from pandas.tseries import frequencies
if TYPE_CHECKING:
+ from typing import Literal
+
from pandas.core.arrays import (
DatetimeArray,
TimedeltaArray,
@@ -458,6 +460,14 @@ def astype(self, dtype, copy=True):
def view(self: DatetimeLikeArrayT) -> DatetimeLikeArrayT:
...
+ @overload
+ def view(self, dtype: Literal["M8[ns]"]) -> DatetimeArray:
+ ...
+
+ @overload
+ def view(self, dtype: Literal["m8[ns]"]) -> TimedeltaArray:
+ ...
+
@overload
def view(self, dtype: Optional[Dtype] = ...) -> ArrayLike:
...
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index a39182d61a8fb..d91522a9e1bb6 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -643,11 +643,7 @@ def fillna(self, value=None, method=None, limit=None) -> PeriodArray:
if method is not None:
# view as dt64 so we get treated as timelike in core.missing
dta = self.view("M8[ns]")
- # error: Item "ndarray" of "Union[ExtensionArray, ndarray]" has no attribute
- # "fillna"
- result = dta.fillna( # type: ignore[union-attr]
- value=value, method=method, limit=limit
- )
+ result = dta.fillna(value=value, method=method, limit=limit)
return result.view(self.dtype)
return super().fillna(value=value, method=method, limit=limit)
diff --git a/pandas/core/arrays/sparse/dtype.py b/pandas/core/arrays/sparse/dtype.py
index 8e55eb5f3d358..d2d05577d14df 100644
--- a/pandas/core/arrays/sparse/dtype.py
+++ b/pandas/core/arrays/sparse/dtype.py
@@ -27,7 +27,6 @@
from pandas.core.dtypes.cast import astype_nansafe
from pandas.core.dtypes.common import (
is_bool_dtype,
- is_extension_array_dtype,
is_object_dtype,
is_scalar,
is_string_dtype,
@@ -339,14 +338,10 @@ def update_dtype(self, dtype):
dtype = pandas_dtype(dtype)
if not isinstance(dtype, cls):
- if is_extension_array_dtype(dtype):
+ if not isinstance(dtype, np.dtype):
raise TypeError("sparse arrays of extension dtypes not supported")
- # error: Item "ExtensionArray" of "Union[ExtensionArray, ndarray]" has no
- # attribute "item"
- fill_value = astype_nansafe( # type: ignore[union-attr]
- np.array(self.fill_value), dtype
- ).item()
+ fill_value = astype_nansafe(np.array(self.fill_value), dtype).item()
dtype = cls(dtype, fill_value=fill_value)
return dtype
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 44650500e0f65..ce91276bc6cf4 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -25,6 +25,7 @@
Type,
Union,
cast,
+ overload,
)
import warnings
@@ -107,6 +108,8 @@
)
if TYPE_CHECKING:
+ from typing import Literal
+
from pandas import Series
from pandas.core.arrays import (
DatetimeArray,
@@ -1164,6 +1167,20 @@ def astype_td64_unit_conversion(
return result
+@overload
+def astype_nansafe(
+ arr: np.ndarray, dtype: np.dtype, copy: bool = ..., skipna: bool = ...
+) -> np.ndarray:
+ ...
+
+
+@overload
+def astype_nansafe(
+ arr: np.ndarray, dtype: ExtensionDtype, copy: bool = ..., skipna: bool = ...
+) -> ExtensionArray:
+ ...
+
+
def astype_nansafe(
arr: np.ndarray, dtype: DtypeObj, copy: bool = True, skipna: bool = False
) -> ArrayLike:
@@ -1190,14 +1207,10 @@ def astype_nansafe(
flags = arr.flags
flat = arr.ravel("K")
result = astype_nansafe(flat, dtype, copy=copy, skipna=skipna)
- order = "F" if flags.f_contiguous else "C"
+ order: Literal["C", "F"] = "F" if flags.f_contiguous else "C"
# error: Item "ExtensionArray" of "Union[ExtensionArray, ndarray]" has no
# attribute "reshape"
- # error: No overload variant of "reshape" of "_ArrayOrScalarCommon" matches
- # argument types "Tuple[int, ...]", "str"
- return result.reshape( # type: ignore[union-attr,call-overload]
- arr.shape, order=order
- )
+ return result.reshape(arr.shape, order=order) # type: ignore[union-attr]
# We get here with 0-dim from sparse
arr = np.atleast_1d(arr)
| Follow-up to #40379: remove ignores that either changed or were newly added with the change of the `ArrayLike` alias to a `Union`.
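The `view` overload pattern from the diff, reduced to a self-contained sketch (the classes are stand-ins, not the real pandas types):

```python
from typing import Literal, overload

class DatetimeArray:  # stand-in for pandas' DatetimeArray
    pass

class TimedeltaArray:  # stand-in for pandas' TimedeltaArray
    pass

class DatetimeLikeArray:
    # @overload + Literal lets mypy narrow the return type from the dtype
    # string, so call sites no longer need Union[...] "type: ignore"s.
    @overload
    def view(self, dtype: Literal["M8[ns]"]) -> DatetimeArray: ...
    @overload
    def view(self, dtype: Literal["m8[ns]"]) -> TimedeltaArray: ...
    def view(self, dtype):
        return DatetimeArray() if dtype == "M8[ns]" else TimedeltaArray()
```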
| https://api.github.com/repos/pandas-dev/pandas/pulls/40398 | 2021-03-12T12:19:05Z | 2021-03-13T15:42:42Z | 2021-03-13T15:42:42Z | 2021-03-13T15:42:48Z |
REF/PERF: move np.errstate out of core array_ops up to higher level | diff --git a/pandas/core/arrays/numpy_.py b/pandas/core/arrays/numpy_.py
index 89988349132e6..45656459792ba 100644
--- a/pandas/core/arrays/numpy_.py
+++ b/pandas/core/arrays/numpy_.py
@@ -376,7 +376,8 @@ def _cmp_method(self, other, op):
other = other._ndarray
pd_op = ops.get_array_op(op)
- result = pd_op(self._ndarray, other)
+ with np.errstate(all="ignore"):
+ result = pd_op(self._ndarray, other)
if op is divmod or op is ops.rdivmod:
a, b = result
diff --git a/pandas/core/computation/expressions.py b/pandas/core/computation/expressions.py
index 05736578b6337..0dbe5e8d83741 100644
--- a/pandas/core/computation/expressions.py
+++ b/pandas/core/computation/expressions.py
@@ -71,8 +71,7 @@ def _evaluate_standard(op, op_str, a, b):
"""
if _TEST_MODE:
_store_test_result(False)
- with np.errstate(all="ignore"):
- return op(a, b)
+ return op(a, b)
def _can_use_numexpr(op, op_str, a, b, dtype_check):
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 2374cc0b6a8fa..b1f0ad8eda2aa 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -6643,7 +6643,8 @@ def _dispatch_frame_op(self, right, func: Callable, axis: Optional[int] = None):
right = lib.item_from_zerodim(right)
if not is_list_like(right):
# i.e. scalar, faster than checking np.ndim(right) == 0
- bm = self._mgr.apply(array_op, right=right)
+ with np.errstate(all="ignore"):
+ bm = self._mgr.apply(array_op, right=right)
return type(self)(bm)
elif isinstance(right, DataFrame):
@@ -6654,16 +6655,17 @@ def _dispatch_frame_op(self, right, func: Callable, axis: Optional[int] = None):
# _frame_arith_method_with_reindex
# TODO operate_blockwise expects a manager of the same type
- bm = self._mgr.operate_blockwise(
- # error: Argument 1 to "operate_blockwise" of "ArrayManager" has
- # incompatible type "Union[ArrayManager, BlockManager]"; expected
- # "ArrayManager"
- # error: Argument 1 to "operate_blockwise" of "BlockManager" has
- # incompatible type "Union[ArrayManager, BlockManager]"; expected
- # "BlockManager"
- right._mgr, # type: ignore[arg-type]
- array_op,
- )
+ with np.errstate(all="ignore"):
+ bm = self._mgr.operate_blockwise(
+ # error: Argument 1 to "operate_blockwise" of "ArrayManager" has
+ # incompatible type "Union[ArrayManager, BlockManager]"; expected
+ # "ArrayManager"
+ # error: Argument 1 to "operate_blockwise" of "BlockManager" has
+ # incompatible type "Union[ArrayManager, BlockManager]"; expected
+ # "BlockManager"
+ right._mgr, # type: ignore[arg-type]
+ array_op,
+ )
return type(self)(bm)
elif isinstance(right, Series) and axis == 1:
@@ -6674,16 +6676,18 @@ def _dispatch_frame_op(self, right, func: Callable, axis: Optional[int] = None):
# maybe_align_as_frame ensures we do not have an ndarray here
assert not isinstance(right, np.ndarray)
- arrays = [
- array_op(_left, _right)
- for _left, _right in zip(self._iter_column_arrays(), right)
- ]
+ with np.errstate(all="ignore"):
+ arrays = [
+ array_op(_left, _right)
+ for _left, _right in zip(self._iter_column_arrays(), right)
+ ]
elif isinstance(right, Series):
assert right.index.equals(self.index) # Handle other cases later
right = right._values
- arrays = [array_op(left, right) for left in self._iter_column_arrays()]
+ with np.errstate(all="ignore"):
+ arrays = [array_op(left, right) for left in self._iter_column_arrays()]
else:
# Remaining cases have less-obvious dispatch rules
diff --git a/pandas/core/ops/array_ops.py b/pandas/core/ops/array_ops.py
index 6f6972c34f0a9..04737d91c0d4e 100644
--- a/pandas/core/ops/array_ops.py
+++ b/pandas/core/ops/array_ops.py
@@ -106,8 +106,7 @@ def _masked_arith_op(x: np.ndarray, y, op):
# See GH#5284, GH#5035, GH#19448 for historical reference
if mask.any():
- with np.errstate(all="ignore"):
- result[mask] = op(xrav[mask], yrav[mask])
+ result[mask] = op(xrav[mask], yrav[mask])
else:
if not is_scalar(y):
@@ -126,8 +125,7 @@ def _masked_arith_op(x: np.ndarray, y, op):
mask = np.where(y == 1, False, mask)
if mask.any():
- with np.errstate(all="ignore"):
- result[mask] = op(xrav[mask], y)
+ result[mask] = op(xrav[mask], y)
result = maybe_upcast_putmask(result, ~mask)
result = result.reshape(x.shape) # 2D compat
@@ -179,6 +177,9 @@ def arithmetic_op(left: ArrayLike, right: Any, op):
"""
Evaluate an arithmetic operation `+`, `-`, `*`, `/`, `//`, `%`, `**`, ...
+ Note: the caller is responsible for ensuring that numpy warnings are
+ suppressed (with np.errstate(all="ignore")) if needed.
+
Parameters
----------
left : np.ndarray or ExtensionArray
@@ -206,8 +207,7 @@ def arithmetic_op(left: ArrayLike, right: Any, op):
res_values = op(lvalues, rvalues)
else:
- with np.errstate(all="ignore"):
- res_values = _na_arithmetic_op(lvalues, rvalues, op)
+ res_values = _na_arithmetic_op(lvalues, rvalues, op)
return res_values
@@ -216,6 +216,9 @@ def comparison_op(left: ArrayLike, right: Any, op) -> ArrayLike:
"""
Evaluate a comparison operation `=`, `!=`, `>=`, `>`, `<=`, or `<`.
+ Note: the caller is responsible for ensuring that numpy warnings are
+ suppressed (with np.errstate(all="ignore")) if needed.
+
Parameters
----------
left : np.ndarray or ExtensionArray
@@ -267,8 +270,7 @@ def comparison_op(left: ArrayLike, right: Any, op) -> ArrayLike:
with warnings.catch_warnings():
# suppress warnings from numpy about element-wise comparison
warnings.simplefilter("ignore", DeprecationWarning)
- with np.errstate(all="ignore"):
- res_values = _na_arithmetic_op(lvalues, rvalues, op, is_cmp=True)
+ res_values = _na_arithmetic_op(lvalues, rvalues, op, is_cmp=True)
return res_values
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 662c7abb33e33..83eb4c38bc163 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -5087,7 +5087,8 @@ def _cmp_method(self, other, op):
lvalues = self._values
rvalues = extract_array(other, extract_numpy=True)
- res_values = ops.comparison_op(lvalues, rvalues, op)
+ with np.errstate(all="ignore"):
+ res_values = ops.comparison_op(lvalues, rvalues, op)
return self._construct_result(res_values, name=res_name)
@@ -5107,7 +5108,8 @@ def _arith_method(self, other, op):
lvalues = self._values
rvalues = extract_array(other, extract_numpy=True)
- result = ops.arithmetic_op(lvalues, rvalues, op)
+ with np.errstate(all="ignore"):
+ result = ops.arithmetic_op(lvalues, rvalues, op)
return self._construct_result(result, name=res_name)
| Currently, we suppress numpy warnings with `np.errstate(all="ignore")` inside the `evaluate` function in `core.computation.expressions`, which is only used in `core.ops.array_ops._na_arithmetic_op`, which in turn is only used by `core.ops.array_ops` `arithmetic_op()` and `comparison_op()` (where we actually called `np.errstate` again, duplicatively).
So, in summary, we suppress the warnings at the level of the "array op". For the ArrayManager, we call this array op many times for each column, and repeatedly calling `np.errstate(all="ignore")` gives a big overhead. Luckily, it is easy to suppress the warnings once at a higher level, at the DataFrame/Series level, where those array ops are called.
That's what this PR is doing: removing `np.errstate(all="ignore")` in the actual array ops, and adding it in all places where we currently call the array ops.
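A minimal sketch of the pattern (toy helper, not the pandas internals): the `errstate` context manager is entered once around the per-column loop instead of once inside every array op, so the `seterr` bookkeeping happens a single time.

```python
import numpy as np

def divide_columns_suppressed(columns, divisor):
    # Enter np.errstate once around the whole column loop, rather
    # than re-entering it per column inside each array op.
    with np.errstate(all="ignore"):
        return [col / divisor for col in columns]

cols = [np.array([1.0, 2.0]), np.array([0.0, 3.0])]
# divisor 0.0 would normally emit divide/invalid warnings;
# the single surrounding errstate suppresses all of them.
results = divide_columns_suppressed(cols, 0.0)
```

The correctness is unchanged (the same inf/nan results come back); only the number of `np.errstate` enter/exit calls drops, which is where the ArrayManager overhead came from.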
With the benchmark case of an arithmetic op with two dataframes, this gives a considerable improvement:
```python
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.randn(1000, 1000))
df_am = df._as_manager("array")
```
```
In [2]: %timeit df_am + df_am
18.1 ms ± 1.04 ms per loop (mean ± std. dev. of 7 runs, 100 loops each) <-- master
8.57 ms ± 167 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) <-- PR
```
For BM it doesn't matter for this case, since that's a single block and `np.errstate` would be called only once anyway. But for certain cases of `df+s` ops where we potentially also work column-wise for BM, it should benefit there as well. | https://api.github.com/repos/pandas-dev/pandas/pulls/40396 | 2021-03-12T10:40:11Z | 2021-03-17T21:10:35Z | 2021-03-17T21:10:35Z | 2021-03-18T14:34:37Z |
CI run coverage on multiple builds | diff --git a/.github/workflows/database.yml b/.github/workflows/database.yml
index b34373b82af1a..a30dbc048c03d 100644
--- a/.github/workflows/database.yml
+++ b/.github/workflows/database.yml
@@ -92,6 +92,13 @@ jobs:
- name: Print skipped tests
run: python ci/print_skipped.py
+ - name: Upload coverage to Codecov
+ uses: codecov/codecov-action@v1
+ with:
+ flags: unittests
+ name: codecov-pandas
+ fail_ci_if_error: false
+
Linux_py37_cov:
runs-on: ubuntu-latest
defaults:
@@ -174,7 +181,6 @@ jobs:
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v1
with:
- files: /tmp/test_coverage.xml
flags: unittests
name: codecov-pandas
fail_ci_if_error: true
diff --git a/.github/workflows/posix.yml b/.github/workflows/posix.yml
new file mode 100644
index 0000000000000..34e6c2c9d94ce
--- /dev/null
+++ b/.github/workflows/posix.yml
@@ -0,0 +1,93 @@
+name: Posix
+
+on:
+ push:
+ branches: [master]
+ pull_request:
+ branches:
+ - master
+ - 1.2.x
+
+env:
+ PYTEST_WORKERS: "auto"
+ PANDAS_CI: 1
+
+jobs:
+ pytest:
+ runs-on: ubuntu-latest
+ defaults:
+ run:
+ shell: bash -l {0}
+ strategy:
+ matrix:
+ settings: [
+ [actions-37-minimum_versions.yaml, "not slow and not network and not clipboard", "", "", "", "", ""],
+ [actions-37.yaml, "not slow and not network and not clipboard", "", "", "", "", ""],
+ [actions-37-locale_slow.yaml, "slow", "language-pack-it xsel", "it_IT.utf8", "it_IT.utf8", "", ""],
+ [actions-37-slow.yaml, "slow", "", "", "", "", ""],
+ [actions-38.yaml, "not slow and not network and not clipboard", "", "", "", "", ""],
+ [actions-38-slow.yaml, "slow", "", "", "", "", ""],
+ [actions-38-locale.yaml, "not slow and not network", "language-pack-zh-hans xsel", "zh_CN.utf8", "zh_CN.utf8", "", ""],
+ [actions-38-numpydev.yaml, "not slow and not network", "xsel", "", "", "deprecate", "-W error"],
+ [actions-39.yaml, "not slow and not network and not clipboard", "", "", "", "", ""]
+ ]
+ fail-fast: false
+ env:
+ COVERAGE: true
+ ENV_FILE: ci/deps/${{ matrix.settings[0] }}
+ PATTERN: ${{ matrix.settings[1] }}
+ EXTRA_APT: ${{ matrix.settings[2] }}
+ LANG: ${{ matrix.settings[3] }}
+ LC_ALL: ${{ matrix.settings[4] }}
+ PANDAS_TESTING_MODE: ${{ matrix.settings[5] }}
+ TEST_ARGS: ${{ matrix.settings[6] }}
+
+ steps:
+ - name: Checkout
+ uses: actions/checkout@v1
+
+ - name: Cache conda
+ uses: actions/cache@v1
+ env:
+ CACHE_NUMBER: 0
+ with:
+ path: ~/conda_pkgs_dir
+ key: ${{ runner.os }}-conda-${{ env.CACHE_NUMBER }}-${{
+ hashFiles('${{ env.ENV_FILE }}') }}
+
+ - name: Extra installs
+ run: sudo apt-get update && sudo apt-get install -y libc6-dev-i386 ${{ env.EXTRA_APT }}
+
+ - uses: conda-incubator/setup-miniconda@v2
+ with:
+ activate-environment: pandas-dev
+ channel-priority: flexible
+ environment-file: ${{ env.ENV_FILE }}
+ use-only-tar-bz2: true
+
+ - name: Build Pandas
+ uses: ./.github/actions/build_pandas
+
+ - name: Test
+ run: ci/run_tests.sh
+ if: always()
+
+ - name: Build Version
+ run: pushd /tmp && python -c "import pandas; pandas.show_versions();" && popd
+
+ - name: Publish test results
+ uses: actions/upload-artifact@master
+ with:
+ name: Test results
+ path: test-data.xml
+ if: failure()
+
+ - name: Print skipped tests
+ run: python ci/print_skipped.py
+
+ - name: Upload coverage to Codecov
+ uses: codecov/codecov-action@v1
+ with:
+ flags: unittests
+ name: codecov-pandas
+ fail_ci_if_error: false
diff --git a/azure-pipelines.yml b/azure-pipelines.yml
index 464bad7884362..56da4e87f2709 100644
--- a/azure-pipelines.yml
+++ b/azure-pipelines.yml
@@ -17,11 +17,6 @@ jobs:
name: macOS
vmImage: macOS-10.14
-- template: ci/azure/posix.yml
- parameters:
- name: Linux
- vmImage: ubuntu-16.04
-
- template: ci/azure/windows.yml
parameters:
name: Windows
diff --git a/ci/azure/posix.yml b/ci/azure/posix.yml
index 4cb4eaf95f6f5..2caacf3a07290 100644
--- a/ci/azure/posix.yml
+++ b/ci/azure/posix.yml
@@ -14,71 +14,7 @@ jobs:
CONDA_PY: "37"
PATTERN: "not slow and not network"
- ${{ if eq(parameters.name, 'Linux') }}:
- py37_minimum_versions:
- ENV_FILE: ci/deps/azure-37-minimum_versions.yaml
- CONDA_PY: "37"
- PATTERN: "not slow and not network and not clipboard"
-
- py37:
- ENV_FILE: ci/deps/azure-37.yaml
- CONDA_PY: "37"
- PATTERN: "not slow and not network and not clipboard"
-
- py37_locale_slow:
- ENV_FILE: ci/deps/azure-37-locale_slow.yaml
- CONDA_PY: "37"
- PATTERN: "slow"
- LANG: "it_IT.utf8"
- LC_ALL: "it_IT.utf8"
- EXTRA_APT: "language-pack-it xsel"
-
- py37_slow:
- ENV_FILE: ci/deps/azure-37-slow.yaml
- CONDA_PY: "37"
- PATTERN: "slow"
-
- py38:
- ENV_FILE: ci/deps/azure-38.yaml
- CONDA_PY: "38"
- PATTERN: "not slow and not network and not clipboard"
-
- py38_slow:
- ENV_FILE: ci/deps/azure-38-slow.yaml
- CONDA_PY: "38"
- PATTERN: "slow"
-
- py38_locale:
- ENV_FILE: ci/deps/azure-38-locale.yaml
- CONDA_PY: "38"
- PATTERN: "not slow and not network"
- # pandas does not use the language (zh_CN), but should support different encodings (utf8)
- # we should test with encodings different than utf8, but doesn't seem like Ubuntu supports any
- LANG: "zh_CN.utf8"
- LC_ALL: "zh_CN.utf8"
- EXTRA_APT: "language-pack-zh-hans xsel"
-
- py38_np_dev:
- ENV_FILE: ci/deps/azure-38-numpydev.yaml
- CONDA_PY: "38"
- PATTERN: "not slow and not network"
- TEST_ARGS: "-W error"
- PANDAS_TESTING_MODE: "deprecate"
- EXTRA_APT: "xsel"
-
- py39:
- ENV_FILE: ci/deps/azure-39.yaml
- CONDA_PY: "39"
- PATTERN: "not slow and not network and not clipboard"
-
steps:
- - script: |
- if [ "$(uname)" == "Linux" ]; then
- sudo apt-get update
- sudo apt-get install -y libc6-dev-i386 $EXTRA_APT
- fi
- displayName: 'Install extra packages'
-
- script: echo '##vso[task.prependpath]$(HOME)/miniconda3/bin'
displayName: 'Set conda path'
diff --git a/ci/deps/azure-37-locale_slow.yaml b/ci/deps/actions-37-locale_slow.yaml
similarity index 94%
rename from ci/deps/azure-37-locale_slow.yaml
rename to ci/deps/actions-37-locale_slow.yaml
index 0c47b1a72774f..d9ad1f538908e 100644
--- a/ci/deps/azure-37-locale_slow.yaml
+++ b/ci/deps/actions-37-locale_slow.yaml
@@ -8,9 +8,9 @@ dependencies:
# tools
- cython>=0.29.21
- pytest>=5.0.1
+ - pytest-cov
- pytest-xdist>=1.21
- hypothesis>=3.58.0
- - pytest-azurepipelines
# pandas dependencies
- beautifulsoup4=4.6.0
diff --git a/ci/deps/azure-37-minimum_versions.yaml b/ci/deps/actions-37-minimum_versions.yaml
similarity index 95%
rename from ci/deps/azure-37-minimum_versions.yaml
rename to ci/deps/actions-37-minimum_versions.yaml
index 9cc158b76cd41..e14e51a36be31 100644
--- a/ci/deps/azure-37-minimum_versions.yaml
+++ b/ci/deps/actions-37-minimum_versions.yaml
@@ -7,9 +7,9 @@ dependencies:
# tools
- cython=0.29.21
- pytest=5.0.1
+ - pytest-cov
- pytest-xdist>=1.21
- hypothesis>=3.58.0
- - pytest-azurepipelines
- psutil
# pandas dependencies
diff --git a/ci/deps/azure-37-slow.yaml b/ci/deps/actions-37-slow.yaml
similarity index 95%
rename from ci/deps/azure-37-slow.yaml
rename to ci/deps/actions-37-slow.yaml
index 5d097e397992c..573ff7f02c162 100644
--- a/ci/deps/azure-37-slow.yaml
+++ b/ci/deps/actions-37-slow.yaml
@@ -8,9 +8,9 @@ dependencies:
# tools
- cython>=0.29.21
- pytest>=5.0.1
+ - pytest-cov
- pytest-xdist>=1.21
- hypothesis>=3.58.0
- - pytest-azurepipelines
# pandas dependencies
- beautifulsoup4
diff --git a/ci/deps/azure-37.yaml b/ci/deps/actions-37.yaml
similarity index 93%
rename from ci/deps/azure-37.yaml
rename to ci/deps/actions-37.yaml
index 4fe3de161960c..61f431256dd4a 100644
--- a/ci/deps/azure-37.yaml
+++ b/ci/deps/actions-37.yaml
@@ -8,9 +8,9 @@ dependencies:
# tools
- cython>=0.29.21
- pytest>=5.0.1
+ - pytest-cov
- pytest-xdist>=1.21
- hypothesis>=3.58.0
- - pytest-azurepipelines
# pandas dependencies
- botocore>=1.11
diff --git a/ci/deps/azure-38-locale.yaml b/ci/deps/actions-38-locale.yaml
similarity index 95%
rename from ci/deps/azure-38-locale.yaml
rename to ci/deps/actions-38-locale.yaml
index 26297a3066fa5..629804c71e726 100644
--- a/ci/deps/azure-38-locale.yaml
+++ b/ci/deps/actions-38-locale.yaml
@@ -7,10 +7,10 @@ dependencies:
# tools
- cython>=0.29.21
- pytest>=5.0.1
+ - pytest-cov
- pytest-xdist>=1.21
- pytest-asyncio>=0.12.0
- hypothesis>=3.58.0
- - pytest-azurepipelines
# pandas dependencies
- beautifulsoup4
diff --git a/ci/deps/azure-38-numpydev.yaml b/ci/deps/actions-38-numpydev.yaml
similarity index 94%
rename from ci/deps/azure-38-numpydev.yaml
rename to ci/deps/actions-38-numpydev.yaml
index f11a3bcb28ab2..e7ee6ccfd7bac 100644
--- a/ci/deps/azure-38-numpydev.yaml
+++ b/ci/deps/actions-38-numpydev.yaml
@@ -6,9 +6,9 @@ dependencies:
# tools
- pytest>=5.0.1
+ - pytest-cov
- pytest-xdist>=1.21
- hypothesis>=3.58.0
- - pytest-azurepipelines
# pandas dependencies
- pytz
diff --git a/ci/deps/azure-38-slow.yaml b/ci/deps/actions-38-slow.yaml
similarity index 97%
rename from ci/deps/azure-38-slow.yaml
rename to ci/deps/actions-38-slow.yaml
index 0a4107917f01a..2106f48755560 100644
--- a/ci/deps/azure-38-slow.yaml
+++ b/ci/deps/actions-38-slow.yaml
@@ -7,6 +7,7 @@ dependencies:
# tools
- cython>=0.29.21
- pytest>=5.0.1
+ - pytest-cov
- pytest-xdist>=1.21
- hypothesis>=3.58.0
diff --git a/ci/deps/azure-38.yaml b/ci/deps/actions-38.yaml
similarity index 91%
rename from ci/deps/azure-38.yaml
rename to ci/deps/actions-38.yaml
index 89e8d28a139b7..e2660d07c3558 100644
--- a/ci/deps/azure-38.yaml
+++ b/ci/deps/actions-38.yaml
@@ -8,9 +8,9 @@ dependencies:
# tools
- cython>=0.29.21
- pytest>=5.0.1
+ - pytest-cov
- pytest-xdist>=1.21
- hypothesis>=3.58.0
- - pytest-azurepipelines
# pandas dependencies
- numpy
diff --git a/ci/deps/azure-39.yaml b/ci/deps/actions-39.yaml
similarity index 92%
rename from ci/deps/azure-39.yaml
rename to ci/deps/actions-39.yaml
index c4c84e73fa684..36e8bf528fc3e 100644
--- a/ci/deps/azure-39.yaml
+++ b/ci/deps/actions-39.yaml
@@ -7,9 +7,9 @@ dependencies:
# tools
- cython>=0.29.21
- pytest>=5.0.1
+ - pytest-cov
- pytest-xdist>=1.21
- hypothesis>=3.58.0
- - pytest-azurepipelines
# pandas dependencies
- numpy
diff --git a/ci/run_tests.sh b/ci/run_tests.sh
index 593939431d5eb..ec4c87e8c91b0 100755
--- a/ci/run_tests.sh
+++ b/ci/run_tests.sh
@@ -10,8 +10,7 @@ if [[ "not network" == *"$PATTERN"* ]]; then
fi
if [ "$COVERAGE" ]; then
- COVERAGE_FNAME="/tmp/test_coverage.xml"
- COVERAGE="-s --cov=pandas --cov-report=xml:$COVERAGE_FNAME"
+ COVERAGE="-s --cov=pandas --cov-report=xml"
fi
# If no X server is found, we use xvfb to emulate it
@@ -30,9 +29,3 @@ fi
echo $PYTEST_CMD
sh -c "$PYTEST_CMD"
-
-if [[ "$COVERAGE" && $? == 0 && "$TRAVIS_BRANCH" == "master" ]]; then
- echo "uploading coverage"
- echo "bash <(curl -s https://codecov.io/bash) -Z -c -f $COVERAGE_FNAME"
- bash <(curl -s https://codecov.io/bash) -Z -c -f $COVERAGE_FNAME
-fi
diff --git a/codecov.yml b/codecov.yml
index 6dd1e33a7a671..893e40db004a6 100644
--- a/codecov.yml
+++ b/codecov.yml
@@ -1,6 +1,7 @@
codecov:
branch: master
-
+ notify:
+ after_n_builds: 10
comment: false
coverage:
diff --git a/pandas/plotting/_matplotlib/misc.py b/pandas/plotting/_matplotlib/misc.py
index 3d5f4af72db6c..eab5474fce541 100644
--- a/pandas/plotting/_matplotlib/misc.py
+++ b/pandas/plotting/_matplotlib/misc.py
@@ -246,7 +246,7 @@ def f(t):
# appropriately. Take a copy of amplitudes as otherwise numpy
# deletes the element from amplitudes itself.
coeffs = np.delete(np.copy(amplitudes), 0)
- coeffs.resize(int((coeffs.size + 1) / 2), 2)
+ coeffs = np.resize(coeffs, (int((coeffs.size + 1) / 2), 2))
# Generate the harmonics and arguments for the sin and cos
# functions.
| - [x] closes #39758
Moving the posix builds to GitHub Actions and uploading coverage on every build, as the reports are merged by default by Codecov
----
Trying to get this in so that the codecov check in #40078 will pass | https://api.github.com/repos/pandas-dev/pandas/pulls/40394 | 2021-03-12T10:17:53Z | 2021-03-17T21:08:52Z | 2021-03-17T21:08:51Z | 2021-03-17T21:09:44Z |
CI/TYP: Window typing followup | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index d2b63c42d777b..67533259ae0c2 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -11014,7 +11014,9 @@ def ewm(
times: Optional[Union[str, np.ndarray, FrameOrSeries]] = None,
) -> ExponentialMovingWindow:
axis = self._get_axis_number(axis)
- return ExponentialMovingWindow(
+ # error: Value of type variable "FrameOrSeries" of "ExponentialMovingWindow"
+ # cannot be "object"
+ return ExponentialMovingWindow( # type: ignore[type-var]
self,
com=com,
span=span,
diff --git a/pandas/core/window/expanding.py b/pandas/core/window/expanding.py
index 8b7182458dd1f..77f8486522626 100644
--- a/pandas/core/window/expanding.py
+++ b/pandas/core/window/expanding.py
@@ -5,11 +5,8 @@
Dict,
Optional,
Tuple,
- Union,
)
-import numpy as np
-
from pandas._typing import (
Axis,
FrameOrSeries,
@@ -589,7 +586,7 @@ def quantile(
)
def cov(
self,
- other: Optional[Union[np.ndarray, FrameOrSeriesUnion]] = None,
+ other: Optional[FrameOrSeriesUnion] = None,
pairwise: Optional[bool] = None,
ddof: int = 1,
**kwargs,
@@ -654,7 +651,7 @@ def cov(
)
def corr(
self,
- other: Optional[Union[np.ndarray, FrameOrSeriesUnion]] = None,
+ other: Optional[FrameOrSeriesUnion] = None,
pairwise: Optional[bool] = None,
ddof: int = 1,
**kwargs,
diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index 37b043137858c..84c05a0563f04 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -1354,7 +1354,7 @@ def quantile(self, quantile: float, interpolation: str = "linear", **kwargs):
def cov(
self,
- other: Optional[Union[np.ndarray, FrameOrSeriesUnion]] = None,
+ other: Optional[FrameOrSeriesUnion] = None,
pairwise: Optional[bool] = None,
ddof: int = 1,
**kwargs,
@@ -1392,7 +1392,7 @@ def cov_func(x, y):
def corr(
self,
- other: Optional[Union[np.ndarray, FrameOrSeriesUnion]] = None,
+ other: Optional[FrameOrSeriesUnion] = None,
pairwise: Optional[bool] = None,
ddof: int = 1,
**kwargs,
@@ -2137,7 +2137,7 @@ def quantile(self, quantile: float, interpolation: str = "linear", **kwargs):
)
def cov(
self,
- other: Optional[Union[np.ndarray, FrameOrSeriesUnion]] = None,
+ other: Optional[FrameOrSeriesUnion] = None,
pairwise: Optional[bool] = None,
ddof: int = 1,
**kwargs,
@@ -2262,7 +2262,7 @@ def cov(
)
def corr(
self,
- other: Optional[Union[np.ndarray, FrameOrSeriesUnion]] = None,
+ other: Optional[FrameOrSeriesUnion] = None,
pairwise: Optional[bool] = None,
ddof: int = 1,
**kwargs,
| - [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
I believe https://github.com/pandas-dev/pandas/pull/40293 may have caused the current typing failures in the CI | https://api.github.com/repos/pandas-dev/pandas/pulls/40392 | 2021-03-12T05:53:55Z | 2021-03-12T09:03:41Z | 2021-03-12T09:03:41Z | 2021-03-12T18:41:13Z |
PERF: avoid double-verify of take indices | diff --git a/pandas/core/indexers.py b/pandas/core/indexers.py
index 86d6b772fe2e4..d0a53ec80ce1a 100644
--- a/pandas/core/indexers.py
+++ b/pandas/core/indexers.py
@@ -235,7 +235,7 @@ def validate_indices(indices: np.ndarray, n: int) -> None:
# Indexer Conversion
-def maybe_convert_indices(indices, n: int):
+def maybe_convert_indices(indices, n: int, verify: bool = True):
"""
Attempt to convert indices into valid, positive indices.
@@ -248,6 +248,8 @@ def maybe_convert_indices(indices, n: int):
Array of indices that we are to convert.
n : int
Number of elements in the array that we are indexing.
+ verify : bool, default True
+ Check that all entries are between 0 and n - 1, inclusive.
Returns
-------
@@ -273,9 +275,10 @@ def maybe_convert_indices(indices, n: int):
indices = indices.copy()
indices[mask] += n
- mask = (indices >= n) | (indices < 0)
- if mask.any():
- raise IndexError("indices are out-of-bounds")
+ if verify:
+ mask = (indices >= n) | (indices < 0)
+ if mask.any():
+ raise IndexError("indices are out-of-bounds")
return indices
diff --git a/pandas/core/internals/array_manager.py b/pandas/core/internals/array_manager.py
index 5987fdf956b68..f8df140e6bf9e 100644
--- a/pandas/core/internals/array_manager.py
+++ b/pandas/core/internals/array_manager.py
@@ -1021,7 +1021,7 @@ def _reindex_indexer(
return type(self)(new_arrays, new_axes, verify_integrity=False)
- def take(self, indexer, axis: int = 1, verify: bool = True, convert: bool = True):
+ def take(self: T, indexer, axis: int = 1, verify: bool = True) -> T:
"""
Take items along any axis.
"""
@@ -1034,12 +1034,7 @@ def take(self, indexer, axis: int = 1, verify: bool = True, convert: bool = True
)
n = self.shape_proper[axis]
- if convert:
- indexer = maybe_convert_indices(indexer, n)
-
- if verify:
- if ((indexer == -1) | (indexer >= n)).any():
- raise Exception("Indices must be nonzero and less than the axis length")
+ indexer = maybe_convert_indices(indexer, n, verify=verify)
new_labels = self._axes[axis].take(indexer)
return self._reindex_indexer(
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 6bd3e37ae101e..9c21fcf957ecd 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -1491,10 +1491,21 @@ def _make_na_block(self, placement, fill_value=None):
block_values.fill(fill_value)
return new_block(block_values, placement=placement, ndim=block_values.ndim)
- def take(self, indexer, axis: int = 1, verify: bool = True, convert: bool = True):
+ def take(self: T, indexer, axis: int = 1, verify: bool = True) -> T:
"""
Take items along any axis.
+
+ indexer : np.ndarray or slice
+ axis : int, default 1
+ verify : bool, default True
+ Check that all entries are between 0 and len(self) - 1, inclusive.
+ Pass verify=False if this check has been done by the caller.
+
+ Returns
+ -------
+ BlockManager
"""
+ # We have 6 tests that get here with a slice
indexer = (
np.arange(indexer.start, indexer.stop, indexer.step, dtype="int64")
if isinstance(indexer, slice)
@@ -1502,12 +1513,7 @@ def take(self, indexer, axis: int = 1, verify: bool = True, convert: bool = True
)
n = self.shape[axis]
- if convert:
- indexer = maybe_convert_indices(indexer, n)
-
- if verify:
- if ((indexer == -1) | (indexer >= n)).any():
- raise Exception("Indices must be nonzero and less than the axis length")
+ indexer = maybe_convert_indices(indexer, n, verify=verify)
new_labels = self.axes[axis].take(indexer)
return self.reindex_indexer(
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/40391 | 2021-03-12T04:42:40Z | 2021-03-12T19:41:24Z | 2021-03-12T19:41:24Z | 2021-03-12T20:35:49Z |
TYP: fix ignores | diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index be8d641169b10..979c7aa990184 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -46,6 +46,7 @@ class providing the base-class of operations.
)
import pandas._libs.groupby as libgroupby
from pandas._typing import (
+ ArrayLike,
F,
FrameOrSeries,
FrameOrSeriesUnion,
@@ -68,7 +69,6 @@ class providing the base-class of operations.
ensure_float,
is_bool_dtype,
is_datetime64_dtype,
- is_extension_array_dtype,
is_integer_dtype,
is_numeric_dtype,
is_object_dtype,
@@ -85,6 +85,7 @@ class providing the base-class of operations.
from pandas.core.arrays import (
Categorical,
DatetimeArray,
+ ExtensionArray,
)
from pandas.core.base import (
DataError,
@@ -2265,37 +2266,31 @@ def quantile(self, q=0.5, interpolation: str = "linear"):
"""
from pandas import concat
- def pre_processor(vals: np.ndarray) -> Tuple[np.ndarray, Optional[Type]]:
+ def pre_processor(vals: ArrayLike) -> Tuple[np.ndarray, Optional[np.dtype]]:
if is_object_dtype(vals):
raise TypeError(
"'quantile' cannot be performed against 'object' dtypes!"
)
- inference = None
+ inference: Optional[np.dtype] = None
if is_integer_dtype(vals.dtype):
- if is_extension_array_dtype(vals.dtype):
- # error: "ndarray" has no attribute "to_numpy"
- vals = vals.to_numpy( # type: ignore[attr-defined]
- dtype=float, na_value=np.nan
- )
- inference = np.int64
- elif is_bool_dtype(vals.dtype) and is_extension_array_dtype(vals.dtype):
- # error: "ndarray" has no attribute "to_numpy"
- vals = vals.to_numpy( # type: ignore[attr-defined]
- dtype=float, na_value=np.nan
- )
+ if isinstance(vals, ExtensionArray):
+ out = vals.to_numpy(dtype=float, na_value=np.nan)
+ else:
+ out = vals
+ inference = np.dtype(np.int64)
+ elif is_bool_dtype(vals.dtype) and isinstance(vals, ExtensionArray):
+ out = vals.to_numpy(dtype=float, na_value=np.nan)
elif is_datetime64_dtype(vals.dtype):
- # error: Incompatible types in assignment (expression has type
- # "str", variable has type "Optional[Type[int64]]")
- inference = "datetime64[ns]" # type: ignore[assignment]
- vals = np.asarray(vals).astype(float)
+ inference = np.dtype("datetime64[ns]")
+ out = np.asarray(vals).astype(float)
elif is_timedelta64_dtype(vals.dtype):
- # error: Incompatible types in assignment (expression has type "str",
- # variable has type "Optional[Type[signedinteger[Any]]]")
- inference = "timedelta64[ns]" # type: ignore[assignment]
- vals = np.asarray(vals).astype(float)
+ inference = np.dtype("timedelta64[ns]")
+ out = np.asarray(vals).astype(float)
+ else:
+ out = np.asarray(vals)
- return vals, inference
+ return out, inference
def post_processor(vals: np.ndarray, inference: Optional[Type]) -> np.ndarray:
if inference:
diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py
index 6495a4d26da3a..e505359987eb3 100644
--- a/pandas/core/groupby/ops.py
+++ b/pandas/core/groupby/ops.py
@@ -65,6 +65,7 @@
is_timedelta64_dtype,
needs_i8_conversion,
)
+from pandas.core.dtypes.dtypes import ExtensionDtype
from pandas.core.dtypes.generic import ABCCategoricalIndex
from pandas.core.dtypes.missing import (
isna,
@@ -522,7 +523,7 @@ def _disallow_invalid_ops(self, values: ArrayLike, how: str):
@final
def _ea_wrap_cython_operation(
self, kind: str, values, how: str, axis: int, min_count: int = -1, **kwargs
- ) -> Tuple[np.ndarray, Optional[List[str]]]:
+ ) -> np.ndarray:
"""
If we have an ExtensionArray, unwrap, call _cython_operation, and
re-wrap if appropriate.
@@ -539,10 +540,7 @@ def _ea_wrap_cython_operation(
)
if how in ["rank"]:
# preserve float64 dtype
-
- # error: Incompatible return value type (got "ndarray", expected
- # "Tuple[ndarray, Optional[List[str]]]")
- return res_values # type: ignore[return-value]
+ return res_values
res_values = res_values.astype("i8", copy=False)
result = type(orig_values)(res_values, dtype=orig_values.dtype)
@@ -555,14 +553,11 @@ def _ea_wrap_cython_operation(
kind, values, how, axis, min_count, **kwargs
)
dtype = maybe_cast_result_dtype(orig_values.dtype, how)
- if is_extension_array_dtype(dtype):
- # error: Item "dtype[Any]" of "Union[dtype[Any], ExtensionDtype]" has no
- # attribute "construct_array_type"
- cls = dtype.construct_array_type() # type: ignore[union-attr]
+ if isinstance(dtype, ExtensionDtype):
+ cls = dtype.construct_array_type()
return cls._from_sequence(res_values, dtype=dtype)
- # error: Incompatible return value type (got "ndarray", expected
- # "Tuple[ndarray, Optional[List[str]]]")
- return res_values # type: ignore[return-value]
+
+ return res_values
elif is_float_dtype(values.dtype):
# FloatingArray
@@ -599,9 +594,7 @@ def _cython_operation(
self._disallow_invalid_ops(values, how)
if is_extension_array_dtype(values.dtype):
- # error: Incompatible return value type (got "Tuple[ndarray,
- # Optional[List[str]]]", expected "ndarray")
- return self._ea_wrap_cython_operation( # type: ignore[return-value]
+ return self._ea_wrap_cython_operation(
kind, values, how, axis, min_count, **kwargs
)
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index b001139bef6c5..8b67b98b32f7f 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -3876,7 +3876,14 @@ def _reindex_non_unique(self, target):
# --------------------------------------------------------------------
# Join Methods
- def join(self, other, how="left", level=None, return_indexers=False, sort=False):
+ def join(
+ self,
+ other,
+ how: str_t = "left",
+ level=None,
+ return_indexers: bool = False,
+ sort: bool = False,
+ ):
"""
Compute join_index and indexers to conform data
structures to the new index.
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 96459970a9b57..0e32e5c5d2762 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -827,7 +827,12 @@ def _union(self, other, sort):
_join_precedence = 10
def join(
- self, other, how: str = "left", level=None, return_indexers=False, sort=False
+ self,
+ other,
+ how: str = "left",
+ level=None,
+ return_indexers: bool = False,
+ sort: bool = False,
):
"""
See Index.join
diff --git a/pandas/core/reshape/reshape.py b/pandas/core/reshape/reshape.py
index 13119b9997002..003353856eac8 100644
--- a/pandas/core/reshape/reshape.py
+++ b/pandas/core/reshape/reshape.py
@@ -2,9 +2,11 @@
import itertools
from typing import (
+ TYPE_CHECKING,
List,
Optional,
Union,
+ cast,
)
import numpy as np
@@ -44,6 +46,9 @@
get_group_index_sorter,
)
+if TYPE_CHECKING:
+ from pandas.core.arrays import ExtensionArray
+
class _Unstacker:
"""
@@ -942,11 +947,11 @@ def _get_dummies_1d(
data,
prefix,
prefix_sep="_",
- dummy_na=False,
- sparse=False,
- drop_first=False,
+ dummy_na: bool = False,
+ sparse: bool = False,
+ drop_first: bool = False,
dtype: Optional[Dtype] = None,
-):
+) -> DataFrame:
from pandas.core.reshape.concat import concat
# Series avoids inconsistent NaN handling
@@ -1029,6 +1034,8 @@ def get_empty_frame(data) -> DataFrame:
sparse_series.append(Series(data=sarr, index=index, name=col))
out = concat(sparse_series, axis=1, copy=False)
+ # TODO: overload concat with Literal for axis
+ out = cast(DataFrame, out)
return out
else:
@@ -1045,7 +1052,9 @@ def get_empty_frame(data) -> DataFrame:
return DataFrame(dummy_mat, index=index, columns=dummy_cols)
-def _reorder_for_extension_array_stack(arr, n_rows: int, n_columns: int):
+def _reorder_for_extension_array_stack(
+ arr: ExtensionArray, n_rows: int, n_columns: int
+) -> ExtensionArray:
"""
Re-orders the values when stacking multiple extension-arrays.
diff --git a/pandas/core/sorting.py b/pandas/core/sorting.py
index ba81866602361..720643d3d98aa 100644
--- a/pandas/core/sorting.py
+++ b/pandas/core/sorting.py
@@ -43,7 +43,6 @@
_INT64_MAX = np.iinfo(np.int64).max
-# error: Function "numpy.array" is not valid as a type
def get_indexer_indexer(
target: Index,
level: Union[str, int, List[str], List[int]],
@@ -52,7 +51,7 @@ def get_indexer_indexer(
na_position: str,
sort_remaining: bool,
key: IndexKeyFunc,
-) -> Optional[np.array]: # type: ignore[valid-type]
+) -> Optional[np.ndarray]:
"""
Helper method that return the indexer according to input parameters for
the sort_index method of DataFrame and Series.
diff --git a/pandas/core/tools/datetimes.py b/pandas/core/tools/datetimes.py
index 1e71069e5be4d..9822356d11d7c 100644
--- a/pandas/core/tools/datetimes.py
+++ b/pandas/core/tools/datetimes.py
@@ -534,25 +534,19 @@ def _to_datetime_with_unit(arg, unit, name, tz, errors: Optional[str]) -> Index:
# GH#30050 pass an ndarray to tslib.array_with_unit_to_datetime
# because it expects an ndarray argument
if isinstance(arg, IntegerArray):
- result = arg.astype(f"datetime64[{unit}]")
+ arr = arg.astype(f"datetime64[{unit}]")
tz_parsed = None
else:
- result, tz_parsed = tslib.array_with_unit_to_datetime(arg, unit, errors=errors)
+ arr, tz_parsed = tslib.array_with_unit_to_datetime(arg, unit, errors=errors)
if errors == "ignore":
# Index constructor _may_ infer to DatetimeIndex
-
- # error: Incompatible types in assignment (expression has type "Index", variable
- # has type "ExtensionArray")
- result = Index(result, name=name) # type: ignore[assignment]
+ result = Index(arr, name=name)
else:
- # error: Incompatible types in assignment (expression has type "DatetimeIndex",
- # variable has type "ExtensionArray")
- result = DatetimeIndex(result, name=name) # type: ignore[assignment]
+ result = DatetimeIndex(arr, name=name)
if not isinstance(result, DatetimeIndex):
- # error: Incompatible return value type (got "ExtensionArray", expected "Index")
- return result # type: ignore[return-value]
+ return result
# GH#23758: We may still need to localize the result with tz
# GH#25546: Apply tz_parsed first (from arg), then tz (from caller)
diff --git a/pandas/core/tools/numeric.py b/pandas/core/tools/numeric.py
index 31ab78e59a556..b7116ee95949b 100644
--- a/pandas/core/tools/numeric.py
+++ b/pandas/core/tools/numeric.py
@@ -1,3 +1,5 @@
+from typing import Optional
+
import numpy as np
from pandas._libs import lib
@@ -164,13 +166,10 @@ def to_numeric(arg, errors="raise", downcast=None):
# GH33013: for IntegerArray & FloatingArray extract non-null values for casting
# save mask to reconstruct the full array after casting
+ mask: Optional[np.ndarray] = None
if isinstance(values, NumericArray):
mask = values._mask
values = values._data[~mask]
- else:
- # error: Incompatible types in assignment (expression has type "None", variable
- # has type "ndarray")
- mask = None # type: ignore[assignment]
values_dtype = getattr(values, "dtype", None)
if is_numeric_dtype(values_dtype):
diff --git a/pandas/core/tools/timedeltas.py b/pandas/core/tools/timedeltas.py
index a8378e91f9375..047cec6501627 100644
--- a/pandas/core/tools/timedeltas.py
+++ b/pandas/core/tools/timedeltas.py
@@ -165,7 +165,7 @@ def _convert_listlike(arg, unit=None, errors="raise", name=None):
arg = np.array(list(arg), dtype=object)
try:
- value = sequence_to_td64ns(arg, unit=unit, errors=errors, copy=False)[0]
+ td64arr = sequence_to_td64ns(arg, unit=unit, errors=errors, copy=False)[0]
except ValueError:
if errors == "ignore":
return arg
@@ -181,7 +181,5 @@ def _convert_listlike(arg, unit=None, errors="raise", name=None):
from pandas import TimedeltaIndex
- # error: Incompatible types in assignment (expression has type "TimedeltaIndex",
- # variable has type "ndarray")
- value = TimedeltaIndex(value, unit="ns", name=name) # type: ignore[assignment]
+ value = TimedeltaIndex(td64arr, unit="ns", name=name)
return value
| I get a bunch of noise in my local mypy output, so will have to see what the CI says | https://api.github.com/repos/pandas-dev/pandas/pulls/40389 | 2021-03-12T03:30:43Z | 2021-03-12T22:33:24Z | 2021-03-12T22:33:24Z | 2021-03-12T22:42:08Z |
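The pattern behind most of the removed `# type: ignore[union-attr]` comments in the PR above is `isinstance` narrowing: once a checker sees the check, it knows which branch of the `Union` it is on. A minimal self-contained sketch of that idea (the class here is a stand-in, not pandas' real `ExtensionDtype`):

```python
from typing import Optional, Union

import numpy as np


class FakeExtensionDtype:
    """Stand-in for pandas' ExtensionDtype (an assumption, not the real class)."""

    def construct_array_type(self) -> type:
        return list


def array_type_for(dtype: Union[np.dtype, FakeExtensionDtype]) -> Optional[type]:
    # isinstance() narrows the Union, so the checker knows that
    # construct_array_type() exists on this branch -- no
    # `# type: ignore[union-attr]` needed.
    if isinstance(dtype, FakeExtensionDtype):
        return dtype.construct_array_type()
    return None
```

The same narrowing is why the diff replaces `is_extension_array_dtype(dtype)` (a plain function mypy cannot see through) with `isinstance(dtype, ExtensionDtype)`.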
REF/PERF: do maybe_coerce_values before Block.__init__ | diff --git a/pandas/core/internals/api.py b/pandas/core/internals/api.py
index be0828f5303b8..26d0242d81cf2 100644
--- a/pandas/core/internals/api.py
+++ b/pandas/core/internals/api.py
@@ -22,6 +22,7 @@
check_ndim,
extract_pandas_array,
get_block_type,
+ maybe_coerce_values,
)
@@ -58,6 +59,7 @@ def make_block(
ndim = _maybe_infer_ndim(values, placement, ndim)
check_ndim(values, placement, ndim)
+ values = maybe_coerce_values(values)
return klass(values, ndim=ndim, placement=placement)
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index ab23c67b52bcd..f70199df40f52 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -160,28 +160,14 @@ def __init__(self, values, placement, ndim: int):
Parameters
----------
values : np.ndarray or ExtensionArray
+ We assume maybe_coerce_values has already been called.
placement : BlockPlacement (or castable)
ndim : int
1 for SingleBlockManager/Series, 2 for BlockManager/DataFrame
"""
self.ndim = ndim
self.mgr_locs = placement
- self.values = self._maybe_coerce_values(values)
-
- @classmethod
- def _maybe_coerce_values(cls, values):
- """
- Ensure we have correctly-typed values.
-
- Parameters
- ----------
- values : np.ndarray or ExtensionArray
-
- Returns
- -------
- np.ndarray or ExtensionArray
- """
- return values
+ self.values = values
@property
def _holder(self):
@@ -280,6 +266,8 @@ def make_block(self, values, placement=None) -> Block:
if self.is_extension:
values = ensure_block_shape(values, ndim=self.ndim)
+ # TODO: perf by not going through new_block
+ # We assume maybe_coerce_values has already been called
return new_block(values, placement=placement, ndim=self.ndim)
@final
@@ -287,6 +275,8 @@ def make_block_same_class(self, values, placement=None) -> Block:
""" Wrap given values in a block of same type as self. """
if placement is None:
placement = self.mgr_locs
+ # TODO: perf by not going through new_block
+ # We assume maybe_coerce_values has already been called
return type(self)(values, placement=placement, ndim=self.ndim)
@final
@@ -418,6 +408,7 @@ def _split_op_result(self, result) -> List[Block]:
return nbs
if not isinstance(result, Block):
+ result = maybe_coerce_values(result)
result = self.make_block(result)
return [result]
@@ -629,6 +620,7 @@ def astype(self, dtype, copy: bool = False, errors: str = "raise"):
values, dtype, copy=copy, errors=errors # type: ignore[type-var]
)
+ new_values = maybe_coerce_values(new_values)
newb = self.make_block(new_values)
if newb.shape != self.shape:
raise TypeError(
@@ -687,6 +679,7 @@ def to_native_types(self, na_rep="nan", quoting=None, **kwargs):
values = np.array(values, dtype="object")
values[mask] = na_rep
+ values = values.astype(object, copy=False)
return self.make_block(values)
# block actions #
@@ -1540,24 +1533,6 @@ def putmask(self, mask, new) -> List[Block]:
new_values[mask] = new
return [self.make_block(values=new_values)]
- @classmethod
- def _maybe_coerce_values(cls, values):
- """
- Unbox to an extension array.
-
- This will unbox an ExtensionArray stored in an Index or Series.
- ExtensionArrays pass through. No dtype coercion is done.
-
- Parameters
- ----------
- values : np.ndarray or ExtensionArray
-
- Returns
- -------
- ExtensionArray
- """
- return extract_array(values)
-
@property
def _holder(self):
# For extension blocks, the holder is values-dependent.
@@ -1891,6 +1866,7 @@ def to_native_types(
values = np.array(values, dtype="object")
values[mask] = na_rep
+ values = values.astype(object, copy=False)
return self.make_block(values)
from pandas.io.formats.format import FloatArrayFormatter
@@ -1904,6 +1880,7 @@ def to_native_types(
fixed_width=False,
)
res = formatter.get_result_as_array()
+ res = res.astype(object, copy=False)
return self.make_block(res)
@@ -1957,6 +1934,7 @@ def where(self, other, cond, errors="raise", axis: int = 0) -> List[Block]:
# TODO(EA2D): reshape not needed with 2D EAs
res_values = res_values.reshape(self.values.shape)
+ res_values = maybe_coerce_values(res_values)
nb = self.make_block_same_class(res_values)
return [nb]
@@ -1984,12 +1962,14 @@ def diff(self, n: int, axis: int = 0) -> List[Block]:
values = self.array_values().reshape(self.shape)
new_values = values - values.shift(n, axis=axis)
+ new_values = maybe_coerce_values(new_values)
return [self.make_block(new_values)]
def shift(self, periods: int, axis: int = 0, fill_value: Any = None) -> List[Block]:
# TODO(EA2D) this is unnecessary if these blocks are backed by 2D EAs
values = self.array_values().reshape(self.shape)
new_values = values.shift(periods, fill_value=fill_value, axis=axis)
+ new_values = maybe_coerce_values(new_values)
return [self.make_block_same_class(new_values)]
def fillna(
@@ -2005,6 +1985,7 @@ def fillna(
values = self.array_values()
values = values if inplace else values.copy()
new_values = values.fillna(value=value, limit=limit)
+ new_values = maybe_coerce_values(new_values)
return [self.make_block_same_class(values=new_values)]
@@ -2014,30 +1995,6 @@ class DatetimeLikeBlockMixin(NDArrayBackedExtensionBlock):
is_numeric = False
_can_hold_na = True
- @classmethod
- def _maybe_coerce_values(cls, values):
- """
- Input validation for values passed to __init__. Ensure that
- we have nanosecond datetime64/timedelta64, coercing if necessary.
-
- Parameters
- ----------
- values : np.ndarray or ExtensionArray
- Must be convertible to datetime64/timedelta64
-
- Returns
- -------
- values : ndarray[datetime64ns/timedelta64ns]
- """
- values = extract_array(values, extract_numpy=True)
- if isinstance(values, np.ndarray):
- values = sanitize_to_nanoseconds(values)
- elif isinstance(values.dtype, np.dtype):
- # i.e. not datetime64tz
- values = values._data
-
- return values
-
def array_values(self):
return ensure_wrapped_if_datetimelike(self.values)
@@ -2054,6 +2011,7 @@ def to_native_types(self, na_rep="NaT", **kwargs):
arr = self.array_values()
result = arr._format_native_types(na_rep=na_rep, **kwargs)
+ result = result.astype(object, copy=False)
return self.make_block(result)
@@ -2111,12 +2069,6 @@ class ObjectBlock(Block):
is_object = True
_can_hold_na = True
- @classmethod
- def _maybe_coerce_values(cls, values):
- if issubclass(values.dtype.type, str):
- values = np.array(values, dtype=object)
- return values
-
@property
def is_bool(self):
"""
@@ -2242,6 +2194,38 @@ def replace(
# Constructor Helpers
+def maybe_coerce_values(values) -> ArrayLike:
+ """
+ Input validation for values passed to __init__. Ensure that
+ any datetime64/timedelta64 dtypes are in nanoseconds. Ensure
+ that we do not have string dtypes.
+
+ Parameters
+ ----------
+ values : np.ndarray or ExtensionArray
+
+ Returns
+ -------
+ values : np.ndarray or ExtensionArray
+ """
+
+ # Note: the only test that needs extract_array here is one where we
+ # pass PandasDtype to Series.astype, then need to extract PandasArray here.
+ values = extract_array(values, extract_numpy=True)
+
+ if isinstance(values, np.ndarray):
+ values = sanitize_to_nanoseconds(values)
+
+ if issubclass(values.dtype.type, str):
+ values = np.array(values, dtype=object)
+
+ elif isinstance(values.dtype, np.dtype):
+ # i.e. not datetime64tz, extract DTA/TDA -> ndarray
+ values = values._data
+
+ return values
+
+
def get_block_type(values, dtype: Optional[Dtype] = None):
"""
Find the appropriate Block subclass to use for the given values and dtype.
@@ -2300,6 +2284,7 @@ def new_block(values, placement, *, ndim: int, klass=None) -> Block:
if klass is None:
klass = get_block_type(values, values.dtype)
+ values = maybe_coerce_values(values)
return klass(values, ndim=ndim, placement=placement)
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 48bb6d9bf247b..e51ba08b8cf34 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -72,6 +72,7 @@
ensure_block_shape,
extend_blocks,
get_block_type,
+ maybe_coerce_values,
new_block,
)
from pandas.core.internals.ops import (
@@ -1057,6 +1058,7 @@ def iget(self, i: int) -> SingleBlockManager:
values = block.iget(self.blklocs[i])
# shortcut for select a single-dim from a 2-dim BM
+ values = maybe_coerce_values(values)
nb = type(block)(values, placement=slice(0, len(values)), ndim=1)
return SingleBlockManager(nb, self.axes[1])
@@ -1650,6 +1652,7 @@ def getitem_mgr(self, indexer) -> SingleBlockManager:
if array.ndim > blk.values.ndim:
# This will be caught by Series._get_values
raise ValueError("dimension-expanding indexing not allowed")
+
block = blk.make_block_same_class(array, placement=slice(0, len(array)))
return type(self)(block, self.index[indexer])
| https://api.github.com/repos/pandas-dev/pandas/pulls/40385 | 2021-03-11T23:41:22Z | 2021-03-14T23:42:21Z | 2021-03-14T23:42:21Z | 2021-03-15T00:00:53Z | |
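The core of the refactor above is moving coercion out of `Block.__init__` into one module-level helper called at every construction site, so the constructor can assume already-valid values. A simplified sketch of that shape (the helper body here is reduced to two illustrative coercions, not pandas' full logic):

```python
import numpy as np


def maybe_coerce_values(values: np.ndarray) -> np.ndarray:
    # Simplified stand-in for the real helper: no string dtypes,
    # and datetime64 must be nanosecond resolution.
    if issubclass(values.dtype.type, str):
        values = np.array(values, dtype=object)
    elif values.dtype.kind == "M" and values.dtype != "datetime64[ns]":
        values = values.astype("datetime64[ns]")
    return values


class Block:
    def __init__(self, values: np.ndarray):
        # We assume maybe_coerce_values has already been called.
        self.values = values


def new_block(values: np.ndarray) -> Block:
    # Coercion happens once, at the construction site.
    return Block(maybe_coerce_values(values))
```

Paying the coercion cost at construction sites (rather than in per-subclass `_maybe_coerce_values` classmethods) is what makes this a PERF change: internal call sites that already hold validated values can build blocks without re-validating.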
Fix frame_or_series.asfreq() dropping rows on unordered indices | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index 16f76651a65aa..aae808c6c2144 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -630,6 +630,7 @@ Groupby/resample/rolling
- Bug in :class:`core.window.ewm.ExponentialMovingWindow` when calling ``__getitem__`` would incorrectly raise a ``ValueError`` when providing ``times`` (:issue:`40164`)
- Bug in :class:`core.window.ewm.ExponentialMovingWindow` when calling ``__getitem__`` would not retain ``com``, ``span``, ``alpha`` or ``halflife`` attributes (:issue:`40164`)
- :class:`core.window.ewm.ExponentialMovingWindow` now raises a ``NotImplementedError`` when specifying ``times`` with ``adjust=False`` due to an incorrect calculation (:issue:`40098`)
+- Bug in :meth:`Series.asfreq` and :meth:`DataFrame.asfreq` dropping rows when the index is not sorted (:issue:`39805`)
Reshaping
^^^^^^^^^
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index 2308f9edb4328..abfd6932d7b21 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -1986,7 +1986,7 @@ def asfreq(obj, freq, method=None, how=None, normalize=False, fill_value=None):
new_obj.index = _asfreq_compat(obj.index, freq)
else:
- dti = date_range(obj.index[0], obj.index[-1], freq=freq)
+ dti = date_range(obj.index.min(), obj.index.max(), freq=freq)
dti.name = obj.index.name
new_obj = obj.reindex(dti, method=method, fill_value=fill_value)
if normalize:
diff --git a/pandas/tests/frame/methods/test_asfreq.py b/pandas/tests/frame/methods/test_asfreq.py
index 8a32841466b18..0d28af5ed7be9 100644
--- a/pandas/tests/frame/methods/test_asfreq.py
+++ b/pandas/tests/frame/methods/test_asfreq.py
@@ -91,3 +91,15 @@ def test_asfreq_with_date_object_index(self, frame_or_series):
result = ts2.asfreq("4H", method="ffill")
expected = ts.asfreq("4H", method="ffill")
tm.assert_equal(result, expected)
+
+ def test_asfreq_with_unsorted_index(self, frame_or_series):
+ # GH#39805
+ # Test that rows are not dropped when the datetime index is out of order
+ index = to_datetime(["2021-01-04", "2021-01-02", "2021-01-03", "2021-01-01"])
+ result = frame_or_series(range(4), index=index)
+
+ expected = result.reindex(sorted(index))
+ expected.index = expected.index._with_freq("infer")
+
+ result = result.asfreq("D")
+ tm.assert_equal(result, expected)
| - [x] closes #39805
- [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [x] whatsnew entry
This prevents `frame_or_series.asfreq()` from dropping rows/items if the index is not sorted.
This seemed like a relatively easy fix, and it should probably be done sooner rather than later. However, there wasn't much discussion in the issue, so I still have some questions:
1. How should we respond to unsorted indices? For now, I went ahead with option a.
   a. Silently "sort" the index before reindexing.
   b. Raise a warning before "sorting".
   c. Raise an error.
2. If it should be in the whatsnew, which version would it be added to?
3. I was thinking of converting the `asfreq` tests from class-based to functions. Can I do that in this PR or should I do that in another one? | https://api.github.com/repos/pandas-dev/pandas/pulls/40384 | 2021-03-11T23:38:48Z | 2021-03-23T17:01:39Z | 2021-03-23T17:01:39Z | 2021-03-23T23:52:13Z |
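The one-line fix can be seen directly on the unsorted index from the test above: `index[0]`/`index[-1]` are positional, not the span's endpoints, so the old target range comes out empty (a sketch; only `pd.date_range` behavior is assumed):

```python
import pandas as pd

# The test's unsorted index: index[0]/index[-1] are not the earliest/latest.
index = pd.to_datetime(["2021-01-04", "2021-01-02", "2021-01-03", "2021-01-01"])

# Old behavior's target: start (Jan 4) is after end (Jan 1) -> empty range,
# so reindexing against it dropped every row.
old_target = pd.date_range(index[0], index[-1], freq="D")

# Fixed target: min()/max() cover the full span regardless of index order.
new_target = pd.date_range(index.min(), index.max(), freq="D")
```

Reindexing against `new_target` is what produces the "silently sorted" result described in option a above.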
REF: share Categorical.fillna with NDArrayBackedExtensionArray | diff --git a/pandas/core/arrays/_mixins.py b/pandas/core/arrays/_mixins.py
index d54d1855ac2f8..2ab6b85fab074 100644
--- a/pandas/core/arrays/_mixins.py
+++ b/pandas/core/arrays/_mixins.py
@@ -271,7 +271,9 @@ def __getitem__(
def fillna(
self: NDArrayBackedExtensionArrayT, value=None, method=None, limit=None
) -> NDArrayBackedExtensionArrayT:
- value, method = validate_fillna_kwargs(value, method)
+ value, method = validate_fillna_kwargs(
+ value, method, validate_scalar_dict_value=False
+ )
mask = self.isna()
value = missing.check_value_size(value, mask, len(self))
@@ -291,6 +293,10 @@ def fillna(
new_values = self.copy()
new_values[mask] = value
else:
+ # We validate the fill_value even if there is nothing to fill
+ if value is not None:
+ self._validate_setitem_value(value)
+
new_values = self.copy()
return new_values
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 8c242e3800e48..a95caf74442c2 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -40,10 +40,7 @@
cache_readonly,
deprecate_kwarg,
)
-from pandas.util._validators import (
- validate_bool_kwarg,
- validate_fillna_kwargs,
-)
+from pandas.util._validators import validate_bool_kwarg
from pandas.core.dtypes.cast import (
coerce_indexer_dtype,
@@ -102,7 +99,6 @@
sanitize_array,
)
from pandas.core.indexers import deprecate_ndim_indexing
-from pandas.core.missing import interpolate_2d
from pandas.core.ops.common import unpack_zerodim_and_defer
from pandas.core.sorting import nargsort
from pandas.core.strings.object_array import ObjectStringArrayMixin
@@ -1730,67 +1726,6 @@ def to_dense(self):
)
return np.asarray(self)
- def fillna(self, value=None, method=None, limit=None):
- """
- Fill NA/NaN values using the specified method.
-
- Parameters
- ----------
- value : scalar, dict, Series
- If a scalar value is passed it is used to fill all missing values.
- Alternatively, a Series or dict can be used to fill in different
- values for each index. The value should not be a list. The
- value(s) passed should either be in the categories or should be
- NaN.
- method : {'backfill', 'bfill', 'pad', 'ffill', None}, default None
- Method to use for filling holes in reindexed Series
- pad / ffill: propagate last valid observation forward to next valid
- backfill / bfill: use NEXT valid observation to fill gap
- limit : int, default None
- (Not implemented yet for Categorical!)
- If method is specified, this is the maximum number of consecutive
- NaN values to forward/backward fill. In other words, if there is
- a gap with more than this number of consecutive NaNs, it will only
- be partially filled. If method is not specified, this is the
- maximum number of entries along the entire axis where NaNs will be
- filled.
-
- Returns
- -------
- filled : Categorical with NA/NaN filled
- """
- value, method = validate_fillna_kwargs(
- value, method, validate_scalar_dict_value=False
- )
- value = extract_array(value, extract_numpy=True)
-
- if value is None:
- value = np.nan
- if limit is not None:
- raise NotImplementedError(
- "specifying a limit for fillna has not been implemented yet"
- )
-
- if method is not None:
- # pad / bfill
-
- # TODO: dispatch when self.categories is EA-dtype
- values = np.asarray(self).reshape(-1, len(self))
- values = interpolate_2d(values, method, 0, None).astype(
- self.categories.dtype
- )[0]
- codes = _get_codes_for_values(values, self.categories)
-
- else:
- # We copy even if there is nothing to fill
- codes = self._ndarray.copy()
- mask = self.isna()
-
- new_codes = self._validate_setitem_value(value)
- np.putmask(codes, mask, new_codes)
-
- return self._from_backing_data(codes)
-
# ------------------------------------------------------------------
# NDArrayBackedExtensionArray compat
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index 869836a3da70c..1583144702c72 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -369,7 +369,15 @@ def __contains__(self, key: Any) -> bool:
@doc(Index.fillna)
def fillna(self, value, downcast=None):
value = self._require_scalar(value)
- cat = self._data.fillna(value)
+ try:
+ cat = self._data.fillna(value)
+ except (ValueError, TypeError):
+ # invalid fill_value
+ if not self.isna().any():
+ # nothing to fill, we can get away without casting
+ return self.copy()
+ return self.astype(object).fillna(value, downcast=downcast)
+
return type(self)._simple_new(cat, name=self.name)
@doc(Index.unique)
diff --git a/pandas/tests/extension/test_numpy.py b/pandas/tests/extension/test_numpy.py
index 718ef087e47d3..bc1f499a70aa0 100644
--- a/pandas/tests/extension/test_numpy.py
+++ b/pandas/tests/extension/test_numpy.py
@@ -304,16 +304,6 @@ class TestBooleanReduce(BaseNumPyTests, base.BaseBooleanReduceTests):
class TestMissing(BaseNumPyTests, base.BaseMissingTests):
- @skip_nested
- def test_fillna_scalar(self, data_missing):
- # Non-scalar "scalar" values.
- super().test_fillna_scalar(data_missing)
-
- @skip_nested
- def test_fillna_no_op_returns_copy(self, data):
- # Non-scalar "scalar" values.
- super().test_fillna_no_op_returns_copy(data)
-
@skip_nested
def test_fillna_series(self, data_missing):
# Non-scalar "scalar" values.
diff --git a/pandas/tests/indexes/categorical/test_fillna.py b/pandas/tests/indexes/categorical/test_fillna.py
index c8fc55c29054e..817e996f49162 100644
--- a/pandas/tests/indexes/categorical/test_fillna.py
+++ b/pandas/tests/indexes/categorical/test_fillna.py
@@ -13,10 +13,16 @@ def test_fillna_categorical(self):
exp = CategoricalIndex([1.0, 1.0, 3.0, 1.0], name="x")
tm.assert_index_equal(idx.fillna(1.0), exp)
- # fill by value not in categories raises ValueError
+ cat = idx._data
+
+ # fill by value not in categories raises ValueError on EA, casts on CI
msg = "Cannot setitem on a Categorical with a new category"
with pytest.raises(ValueError, match=msg):
- idx.fillna(2.0)
+ cat.fillna(2.0)
+
+ result = idx.fillna(2.0)
+ expected = idx.astype(object).fillna(2.0)
+ tm.assert_index_equal(result, expected)
def test_fillna_copies_with_no_nas(self):
# Nothing to fill, should still get a copy
@@ -37,8 +43,9 @@ def test_fillna_validates_with_no_nas(self):
cat = ci._data
msg = "Cannot setitem on a Categorical with a new category"
- with pytest.raises(ValueError, match=msg):
- ci.fillna(False)
+ res = ci.fillna(False)
+ # nothing to fill, so we dont cast
+ tm.assert_index_equal(res, ci)
# Same check directly on the Categorical
with pytest.raises(ValueError, match=msg):
diff --git a/pandas/tests/series/methods/test_fillna.py b/pandas/tests/series/methods/test_fillna.py
index 5b3a6c13af467..cf6b357d0a418 100644
--- a/pandas/tests/series/methods/test_fillna.py
+++ b/pandas/tests/series/methods/test_fillna.py
@@ -671,13 +671,15 @@ def test_fillna_categorical_with_new_categories(self, fill_value, expected_outpu
def test_fillna_categorical_raises(self):
data = ["a", np.nan, "b", np.nan, np.nan]
ser = Series(Categorical(data, categories=["a", "b"]))
+ cat = ser._values
msg = "Cannot setitem on a Categorical with a new category"
with pytest.raises(ValueError, match=msg):
ser.fillna("d")
- with pytest.raises(ValueError, match=msg):
- ser.fillna(Series("d"))
+ msg2 = "Length of 'value' does not match."
+ with pytest.raises(ValueError, match=msg2):
+ cat.fillna(Series("d"))
with pytest.raises(ValueError, match=msg):
ser.fillna({1: "d", 3: "a"})
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/40383 | 2021-03-11T23:05:10Z | 2021-03-15T00:03:14Z | 2021-03-15T00:03:14Z | 2021-03-15T00:43:16Z |
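The sharing that the diff above achieves can be sketched with toy classes: the base class owns `fillna` once, written against a per-subclass validation hook, so `Categorical` no longer needs its own copy (assumption: these are simplified stand-ins, not pandas' real `NDArrayBackedExtensionArray`/`Categorical`):

```python
import numpy as np


class BackedArray:
    """Toy base class: fillna implemented once against a validation hook."""

    def __init__(self, ndarray: np.ndarray):
        self._ndarray = ndarray

    def _validate_setitem_value(self, value):
        return value  # subclasses override to reject invalid fill values

    def fillna(self, value):
        mask = np.isnan(self._ndarray)
        new_values = self._ndarray.copy()
        if mask.any():
            new_values[mask] = self._validate_setitem_value(value)
        else:
            # validate the fill value even when there is nothing to fill
            self._validate_setitem_value(value)
        return type(self)(new_values)


class CategoricalLike(BackedArray):
    """Toy subclass: only the hook differs; the fillna body is inherited."""

    categories = (1.0, 3.0)

    def _validate_setitem_value(self, value):
        if value not in self.categories:
            raise ValueError("Cannot setitem with a new category")
        return value
```

The `else` branch mirrors the behavior change tested above: an out-of-category fill value is rejected at the array level even when there are no NAs, while the Index layer catches that error and falls back to object dtype.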
Docs - Grammar fix #40377 | diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index e5010da5ccac6..be8d641169b10 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -328,7 +328,7 @@ class providing the base-class of operations.
engine : str, default None
* ``'cython'`` : Runs the function through C-extensions from cython.
* ``'numba'`` : Runs the function through JIT compiled code from numba.
- * ``None`` : Defaults to ``'cython'`` or globally setting ``compute.use_numba``
+ * ``None`` : Defaults to ``'cython'`` or the global setting ``compute.use_numba``
.. versionadded:: 1.1.0
engine_kwargs : dict, default None
| Reference Issue:
DOC: Slightly broken grammar for `groupby.apply` with fix [#40377](https://github.com/pandas-dev/pandas/issues/40377)
Made the grammar change as raised in the issue. | https://api.github.com/repos/pandas-dev/pandas/pulls/40381 | 2021-03-11T19:31:58Z | 2021-03-11T20:34:28Z | 2021-03-11T20:34:28Z | 2021-03-11T20:35:45Z |
TYP: change ArrayLike/AnyArrayLike alias to Union | diff --git a/pandas/_typing.py b/pandas/_typing.py
index e464f2a021ef6..3e584774e539a 100644
--- a/pandas/_typing.py
+++ b/pandas/_typing.py
@@ -47,7 +47,7 @@
from pandas.core.dtypes.dtypes import ExtensionDtype
from pandas import Interval
- from pandas.core.arrays.base import ExtensionArray # noqa: F401
+ from pandas.core.arrays.base import ExtensionArray
from pandas.core.frame import DataFrame
from pandas.core.generic import NDFrame # noqa: F401
from pandas.core.groupby.generic import (
@@ -74,8 +74,8 @@
# array-like
-AnyArrayLike = TypeVar("AnyArrayLike", "ExtensionArray", "Index", "Series", np.ndarray)
-ArrayLike = TypeVar("ArrayLike", "ExtensionArray", np.ndarray)
+ArrayLike = Union["ExtensionArray", np.ndarray]
+AnyArrayLike = Union[ArrayLike, "Index", "Series"]
# scalars
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 57e57f48fdfe5..c3705fada724a 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -176,9 +176,7 @@ def _ensure_data(values: ArrayLike) -> Tuple[np.ndarray, DtypeObj]:
elif is_timedelta64_dtype(values.dtype):
from pandas import TimedeltaIndex
- # error: Incompatible types in assignment (expression has type
- # "TimedeltaArray", variable has type "ndarray")
- values = TimedeltaIndex(values)._data # type: ignore[assignment]
+ values = TimedeltaIndex(values)._data
else:
# Datetime
if values.ndim > 1 and is_datetime64_ns_dtype(values.dtype):
@@ -194,22 +192,13 @@ def _ensure_data(values: ArrayLike) -> Tuple[np.ndarray, DtypeObj]:
from pandas import DatetimeIndex
- # Incompatible types in assignment (expression has type "DatetimeArray",
- # variable has type "ndarray")
- values = DatetimeIndex(values)._data # type: ignore[assignment]
+ values = DatetimeIndex(values)._data
dtype = values.dtype
- # error: Item "ndarray" of "Union[PeriodArray, Any, ndarray]" has no attribute
- # "asi8"
- return values.asi8, dtype # type: ignore[union-attr]
+ return values.asi8, dtype
elif is_categorical_dtype(values.dtype):
- # error: Incompatible types in assignment (expression has type "Categorical",
- # variable has type "ndarray")
- values = cast("Categorical", values) # type: ignore[assignment]
- # error: Incompatible types in assignment (expression has type "ndarray",
- # variable has type "ExtensionArray")
- # error: Item "ndarray" of "Union[Any, ndarray]" has no attribute "codes"
- values = values.codes # type: ignore[assignment,union-attr]
+ values = cast("Categorical", values)
+ values = values.codes
dtype = pandas_dtype("category")
# we are actually coercing to int64
@@ -222,10 +211,7 @@ def _ensure_data(values: ArrayLike) -> Tuple[np.ndarray, DtypeObj]:
return values, dtype # type: ignore[return-value]
# we have failed, return object
-
- # error: Incompatible types in assignment (expression has type "ndarray", variable
- # has type "ExtensionArray")
- values = np.asarray(values, dtype=object) # type: ignore[assignment]
+ values = np.asarray(values, dtype=object)
return ensure_object(values), np.dtype("object")
@@ -335,9 +321,7 @@ def _get_values_for_rank(values: ArrayLike):
if is_categorical_dtype(values):
values = cast("Categorical", values)._values_for_rank()
- # error: Incompatible types in assignment (expression has type "ndarray", variable
- # has type "ExtensionArray")
- values, _ = _ensure_data(values) # type: ignore[assignment]
+ values, _ = _ensure_data(values)
return values
@@ -503,42 +487,15 @@ def isin(comps: AnyArrayLike, values: AnyArrayLike) -> np.ndarray:
)
if not isinstance(values, (ABCIndex, ABCSeries, ABCExtensionArray, np.ndarray)):
- # error: Incompatible types in assignment (expression has type "ExtensionArray",
- # variable has type "Index")
- # error: Incompatible types in assignment (expression has type "ExtensionArray",
- # variable has type "Series")
- # error: Incompatible types in assignment (expression has type "ExtensionArray",
- # variable has type "ndarray")
- values = _ensure_arraylike(list(values)) # type: ignore[assignment]
+ values = _ensure_arraylike(list(values))
elif isinstance(values, ABCMultiIndex):
# Avoid raising in extract_array
-
- # error: Incompatible types in assignment (expression has type "ndarray",
- # variable has type "ExtensionArray")
- # error: Incompatible types in assignment (expression has type "ndarray",
- # variable has type "Index")
- # error: Incompatible types in assignment (expression has type "ndarray",
- # variable has type "Series")
- values = np.array(values) # type: ignore[assignment]
+ values = np.array(values)
else:
- # error: Incompatible types in assignment (expression has type "Union[Any,
- # ExtensionArray]", variable has type "Index")
- # error: Incompatible types in assignment (expression has type "Union[Any,
- # ExtensionArray]", variable has type "Series")
- values = extract_array(values, extract_numpy=True) # type: ignore[assignment]
-
- # error: Incompatible types in assignment (expression has type "ExtensionArray",
- # variable has type "Index")
- # error: Incompatible types in assignment (expression has type "ExtensionArray",
- # variable has type "Series")
- # error: Incompatible types in assignment (expression has type "ExtensionArray",
- # variable has type "ndarray")
- comps = _ensure_arraylike(comps) # type: ignore[assignment]
- # error: Incompatible types in assignment (expression has type "Union[Any,
- # ExtensionArray]", variable has type "Index")
- # error: Incompatible types in assignment (expression has type "Union[Any,
- # ExtensionArray]", variable has type "Series")
- comps = extract_array(comps, extract_numpy=True) # type: ignore[assignment]
+ values = extract_array(values, extract_numpy=True)
+
+ comps = _ensure_arraylike(comps)
+ comps = extract_array(comps, extract_numpy=True)
if is_extension_array_dtype(comps.dtype):
# error: Incompatible return value type (got "Series", expected "ndarray")
# error: Item "ndarray" of "Union[Any, ndarray]" has no attribute "isin"
@@ -1000,9 +957,7 @@ def duplicated(values: ArrayLike, keep: Union[str, bool] = "first") -> np.ndarra
-------
duplicated : ndarray
"""
- # error: Incompatible types in assignment (expression has type "ndarray", variable
- # has type "ExtensionArray")
- values, _ = _ensure_data(values) # type: ignore[assignment]
+ values, _ = _ensure_data(values)
ndtype = values.dtype.name
f = getattr(htable, f"duplicated_{ndtype}")
return f(values, keep=keep)
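Many of the hunks above drop `# type: ignore[assignment]` comments that were only needed because `typing.cast` had already narrowed the value for the checker. A minimal stdlib-only sketch of how `cast` behaves (hypothetical `first_upper`, not pandas code):

```python
from typing import cast

def first_upper(values: object) -> str:
    # cast() is a no-op at runtime: it returns `values` unchanged and
    # only tells the type checker to treat it as list[str] from here on.
    strings = cast("list[str]", values)
    return strings[0].upper()

print(first_upper(["abc", "xyz"]))  # prints ABC
```

Because `cast` has no runtime effect, removing a now-redundant ignore comment next to it cannot change behavior, only what mypy reports.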
diff --git a/pandas/core/array_algos/putmask.py b/pandas/core/array_algos/putmask.py
index b552a1be4c36e..3daf1b3ae3902 100644
--- a/pandas/core/array_algos/putmask.py
+++ b/pandas/core/array_algos/putmask.py
@@ -191,16 +191,10 @@ def extract_bool_array(mask: ArrayLike) -> np.ndarray:
# We could have BooleanArray, Sparse[bool], ...
# Except for BooleanArray, this is equivalent to just
# np.asarray(mask, dtype=bool)
- # error: Incompatible types in assignment (expression has type "ndarray",
- # variable has type "ExtensionArray")
- mask = mask.to_numpy(dtype=bool, na_value=False) # type: ignore[assignment]
+ mask = mask.to_numpy(dtype=bool, na_value=False)
-
- # error: Incompatible types in assignment (expression has type "ndarray", variable
- # has type "ExtensionArray")
- mask = np.asarray(mask, dtype=bool) # type: ignore[assignment]
- # error: Incompatible return value type (got "ExtensionArray", expected "ndarray")
- return mask # type: ignore[return-value]
+ mask = np.asarray(mask, dtype=bool)
+ return mask
def setitem_datetimelike_compat(values: np.ndarray, num_set: int, other):
diff --git a/pandas/core/array_algos/quantile.py b/pandas/core/array_algos/quantile.py
index 501d3308b7d8b..f140ee08aef05 100644
--- a/pandas/core/array_algos/quantile.py
+++ b/pandas/core/array_algos/quantile.py
@@ -40,10 +40,9 @@ def quantile_compat(values: ArrayLike, qs, interpolation: str, axis: int) -> Arr
if isinstance(values, np.ndarray):
fill_value = na_value_for_dtype(values.dtype, compat=False)
mask = isna(values)
- result = quantile_with_mask(values, mask, fill_value, qs, interpolation, axis)
+ return quantile_with_mask(values, mask, fill_value, qs, interpolation, axis)
else:
- result = quantile_ea_compat(values, qs, interpolation, axis)
- return result
+ return quantile_ea_compat(values, qs, interpolation, axis)
def quantile_with_mask(
diff --git a/pandas/core/array_algos/replace.py b/pandas/core/array_algos/replace.py
index b0c0799750859..0bdfc8fdb95a5 100644
--- a/pandas/core/array_algos/replace.py
+++ b/pandas/core/array_algos/replace.py
@@ -95,9 +95,7 @@ def _check_comparison_types(
if is_numeric_v_string_like(a, b):
# GH#29553 avoid deprecation warnings from numpy
- # error: Incompatible return value type (got "ndarray", expected
- # "Union[ExtensionArray, bool]")
- return np.zeros(a.shape, dtype=bool) # type: ignore[return-value]
+ return np.zeros(a.shape, dtype=bool)
elif is_datetimelike_v_numeric(a, b):
# GH#29553 avoid deprecation warnings from numpy
diff --git a/pandas/core/array_algos/take.py b/pandas/core/array_algos/take.py
index 7eed31663f1cb..110b47a11c3a9 100644
--- a/pandas/core/array_algos/take.py
+++ b/pandas/core/array_algos/take.py
@@ -1,7 +1,11 @@
from __future__ import annotations
import functools
-from typing import Optional
+from typing import (
+ TYPE_CHECKING,
+ Optional,
+ overload,
+)
import numpy as np
@@ -20,6 +24,33 @@
from pandas.core.construction import ensure_wrapped_if_datetimelike
+if TYPE_CHECKING:
+ from pandas.core.arrays.base import ExtensionArray
+
+
+@overload
+def take_nd(
+ arr: np.ndarray,
+ indexer,
+ axis: int = ...,
+ out: Optional[np.ndarray] = ...,
+ fill_value=...,
+ allow_fill: bool = ...,
+) -> np.ndarray:
+ ...
+
+
+@overload
+def take_nd(
+ arr: ExtensionArray,
+ indexer,
+ axis: int = ...,
+ out: Optional[np.ndarray] = ...,
+ fill_value=...,
+ allow_fill: bool = ...,
+) -> ArrayLike:
+ ...
+
def take_nd(
arr: ArrayLike,
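The `take_nd` overloads added above let the checker map an `np.ndarray` argument to an `np.ndarray` result while `ExtensionArray` inputs keep the wider `ArrayLike` return, which is what makes many downstream ignores removable. A self-contained sketch of the same `typing.overload` pattern (hypothetical `double`, not the pandas signature):

```python
from typing import Union, overload

@overload
def double(x: int) -> int:
    ...

@overload
def double(x: str) -> str:
    ...

def double(x: Union[int, str]) -> Union[int, str]:
    # Single runtime implementation; the @overload stubs above carry no
    # code and exist only so the checker can narrow the return type per
    # argument type (double(1) -> int, double("a") -> str).
    if isinstance(x, int):
        return x * 2
    return x + x
```

At runtime only the final definition exists; the stub bodies are never called.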
diff --git a/pandas/core/arrays/_mixins.py b/pandas/core/arrays/_mixins.py
index 8beafe3fe4578..f56f9b2735f88 100644
--- a/pandas/core/arrays/_mixins.py
+++ b/pandas/core/arrays/_mixins.py
@@ -291,8 +291,7 @@ def fillna(
value, mask, len(self) # type: ignore[arg-type]
)
- # error: "ExtensionArray" has no attribute "any"
- if mask.any(): # type: ignore[attr-defined]
+ if mask.any():
if method is not None:
# TODO: check value is None
# (for now) when self.ndim == 2, we assume axis=0
diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index 99838602eeb63..68909e30650c7 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -10,6 +10,7 @@
import operator
from typing import (
+ TYPE_CHECKING,
Any,
Callable,
Dict,
@@ -71,6 +72,16 @@
nargsort,
)
+if TYPE_CHECKING:
+
+ class ExtensionArraySupportsAnyAll("ExtensionArray"):
+ def any(self, *, skipna: bool = True) -> bool:
+ pass
+
+ def all(self, *, skipna: bool = True) -> bool:
+ pass
+
+
_extension_array_shared_docs: Dict[str, str] = {}
ExtensionArrayT = TypeVar("ExtensionArrayT", bound="ExtensionArray")
@@ -380,7 +391,7 @@ def __iter__(self):
for i in range(len(self)):
yield self[i]
- def __contains__(self, item) -> bool:
+ def __contains__(self, item) -> Union[bool, np.bool_]:
"""
Return for `item in self`.
"""
@@ -391,8 +402,7 @@ def __contains__(self, item) -> bool:
if not self._can_hold_na:
return False
elif item is self.dtype.na_value or isinstance(item, self.dtype.type):
- # error: "ExtensionArray" has no attribute "any"
- return self.isna().any() # type: ignore[attr-defined]
+ return self.isna().any()
else:
return False
else:
@@ -543,7 +553,7 @@ def astype(self, dtype, copy=True):
return np.array(self, dtype=dtype, copy=copy)
- def isna(self) -> ArrayLike:
+ def isna(self) -> Union[np.ndarray, ExtensionArraySupportsAnyAll]:
"""
A 1-D array indicating if each value is missing.
@@ -648,8 +658,7 @@ def argmin(self, skipna: bool = True) -> int:
ExtensionArray.argmax
"""
validate_bool_kwarg(skipna, "skipna")
- # error: "ExtensionArray" has no attribute "any"
- if not skipna and self.isna().any(): # type: ignore[attr-defined]
+ if not skipna and self.isna().any():
raise NotImplementedError
return nargminmax(self, "argmin")
@@ -673,8 +682,7 @@ def argmax(self, skipna: bool = True) -> int:
ExtensionArray.argmin
"""
validate_bool_kwarg(skipna, "skipna")
- # error: "ExtensionArray" has no attribute "any"
- if not skipna and self.isna().any(): # type: ignore[attr-defined]
+ if not skipna and self.isna().any():
raise NotImplementedError
return nargminmax(self, "argmax")
@@ -714,8 +722,7 @@ def fillna(self, value=None, method=None, limit=None):
value, mask, len(self) # type: ignore[arg-type]
)
- # error: "ExtensionArray" has no attribute "any"
- if mask.any(): # type: ignore[attr-defined]
+ if mask.any():
if method is not None:
func = missing.get_fill_func(method)
new_values, _ = func(self.astype(object), limit=limit, mask=mask)
@@ -1156,9 +1163,7 @@ def view(self, dtype: Optional[Dtype] = None) -> ArrayLike:
# giving a view with the same dtype as self.
if dtype is not None:
raise NotImplementedError(dtype)
- # error: Incompatible return value type (got "Union[ExtensionArray, Any]",
- # expected "ndarray")
- return self[:] # type: ignore[return-value]
+ return self[:]
# ------------------------------------------------------------------------
# Printing
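The `ExtensionArraySupportsAnyAll` class added above is defined inside `if TYPE_CHECKING:`, so it exists only for the type checker, and its string base class is a forward reference to `ExtensionArray`, which is defined later in the same module. A runnable sketch of that pattern (hypothetical names, assuming only the stdlib):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Seen only by the type checker; "Base" is a forward reference to
    # the class defined below, so a string literal works as the base.
    class SupportsAny("Base"):
        def any(self) -> bool:
            ...

class Base:
    pass

# TYPE_CHECKING is False at runtime, so the class body never executed:
runtime_defined = "SupportsAny" in globals()
```

This lets `isna()` advertise a return type that has `.any()`/`.all()` without creating any runtime class.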
diff --git a/pandas/core/arrays/boolean.py b/pandas/core/arrays/boolean.py
index a84b33d3da9af..4258279e37551 100644
--- a/pandas/core/arrays/boolean.py
+++ b/pandas/core/arrays/boolean.py
@@ -406,18 +406,14 @@ def astype(self, dtype, copy: bool = True) -> ArrayLike:
dtype = pandas_dtype(dtype)
if isinstance(dtype, ExtensionDtype):
- # error: Incompatible return value type (got "ExtensionArray", expected
- # "ndarray")
- return super().astype(dtype, copy) # type: ignore[return-value]
+ return super().astype(dtype, copy)
if is_bool_dtype(dtype):
# astype_nansafe converts np.nan to True
if self._hasna:
raise ValueError("cannot convert float NaN to bool")
else:
- # error: Incompatible return value type (got "ndarray", expected
- # "ExtensionArray")
- return self._data.astype(dtype, copy=copy) # type: ignore[return-value]
+ return self._data.astype(dtype, copy=copy)
# for integer, error if there are missing values
if is_integer_dtype(dtype) and self._hasna:
@@ -429,12 +425,7 @@ def astype(self, dtype, copy: bool = True) -> ArrayLike:
if is_float_dtype(dtype):
na_value = np.nan
# coerce
-
- # error: Incompatible return value type (got "ndarray", expected
- # "ExtensionArray")
- return self.to_numpy( # type: ignore[return-value]
- dtype=dtype, na_value=na_value, copy=False
- )
+ return self.to_numpy(dtype=dtype, na_value=na_value, copy=False)
def _values_for_argsort(self) -> np.ndarray:
"""
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 8588bc9aa94ec..0bf5e05786d4d 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -550,8 +550,7 @@ def astype(self, dtype: Dtype, copy: bool = True) -> ArrayLike:
new_cats, libalgos.ensure_platform_int(self._codes)
)
- # error: Incompatible return value type (got "Categorical", expected "ndarray")
- return result # type: ignore[return-value]
+ return result
@cache_readonly
def itemsize(self) -> int:
@@ -2659,8 +2658,9 @@ def _get_codes_for_values(values, categories: Index) -> np.ndarray:
# Only hit here when we've already coerced to object dtypee.
hash_klass, vals = get_data_algo(values)
- # error: Value of type variable "ArrayLike" of "get_data_algo" cannot be "Index"
- _, cats = get_data_algo(categories) # type: ignore[type-var]
+ # pandas/core/arrays/categorical.py:2661: error: Argument 1 to "get_data_algo" has
+ # incompatible type "Index"; expected "Union[ExtensionArray, ndarray]" [arg-type]
+ _, cats = get_data_algo(categories) # type: ignore[arg-type]
t = hash_klass(len(cats))
t.map_locations(cats)
return coerce_indexer_dtype(t.lookup(vals), cats)
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index c2ac7517ecba3..bd5cc04659a06 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -16,6 +16,7 @@
TypeVar,
Union,
cast,
+ overload,
)
import warnings
@@ -453,43 +454,38 @@ def astype(self, dtype, copy=True):
else:
return np.asarray(self, dtype=dtype)
+ @overload
+ def view(self: DatetimeLikeArrayT) -> DatetimeLikeArrayT:
+ ...
+
+ @overload
+ def view(self, dtype: Optional[Dtype] = ...) -> ArrayLike:
+ ...
+
def view(self, dtype: Optional[Dtype] = None) -> ArrayLike:
# We handle datetime64, datetime64tz, timedelta64, and period
# dtypes here. Everything else we pass through to the underlying
# ndarray.
if dtype is None or dtype is self.dtype:
- # error: Incompatible return value type (got "DatetimeLikeArrayMixin",
- # expected "ndarray")
- return type(self)( # type: ignore[return-value]
- self._ndarray, dtype=self.dtype
- )
+ return type(self)(self._ndarray, dtype=self.dtype)
if isinstance(dtype, type):
# we sometimes pass non-dtype objects, e.g np.ndarray;
# pass those through to the underlying ndarray
-
- # error: Incompatible return value type (got "ndarray", expected
- # "ExtensionArray")
- return self._ndarray.view(dtype) # type: ignore[return-value]
+ return self._ndarray.view(dtype)
dtype = pandas_dtype(dtype)
if isinstance(dtype, (PeriodDtype, DatetimeTZDtype)):
cls = dtype.construct_array_type()
- # error: Incompatible return value type (got "Union[PeriodArray,
- # DatetimeArray]", expected "ndarray")
- return cls(self.asi8, dtype=dtype) # type: ignore[return-value]
+ return cls(self.asi8, dtype=dtype)
elif dtype == "M8[ns]":
from pandas.core.arrays import DatetimeArray
- # error: Incompatible return value type (got "DatetimeArray", expected
- # "ndarray")
- return DatetimeArray(self.asi8, dtype=dtype) # type: ignore[return-value]
+ return DatetimeArray(self.asi8, dtype=dtype)
elif dtype == "m8[ns]":
from pandas.core.arrays import TimedeltaArray
- # error: Incompatible return value type (got "TimedeltaArray", expected
- # "ndarray")
- return TimedeltaArray(self.asi8, dtype=dtype) # type: ignore[return-value]
+ return TimedeltaArray(self.asi8, dtype=dtype)
# error: Incompatible return value type (got "ndarray", expected
# "ExtensionArray")
# error: Argument "dtype" to "view" of "_ArrayOrScalarCommon" has incompatible
@@ -871,9 +867,7 @@ def isin(self, values) -> np.ndarray:
# ------------------------------------------------------------------
# Null Handling
- # error: Return type "ndarray" of "isna" incompatible with return type "ArrayLike"
- # in supertype "ExtensionArray"
- def isna(self) -> np.ndarray: # type: ignore[override]
+ def isna(self) -> np.ndarray:
return self._isnan
@property # NB: override with cache_readonly in immutable subclasses
@@ -1789,8 +1783,7 @@ def _with_freq(self, freq):
freq = to_offset(self.inferred_freq)
arr = self.view()
- # error: "ExtensionArray" has no attribute "_freq"
- arr._freq = freq # type: ignore[attr-defined]
+ arr._freq = freq
return arr
# --------------------------------------------------------------
diff --git a/pandas/core/arrays/floating.py b/pandas/core/arrays/floating.py
index bbe2f23421fcf..fdd358a1b3856 100644
--- a/pandas/core/arrays/floating.py
+++ b/pandas/core/arrays/floating.py
@@ -305,9 +305,7 @@ def astype(self, dtype, copy: bool = True) -> ArrayLike:
dtype = pandas_dtype(dtype)
if isinstance(dtype, ExtensionDtype):
- # error: Incompatible return value type (got "ExtensionArray", expected
- # "ndarray")
- return super().astype(dtype, copy=copy) # type: ignore[return-value]
+ return super().astype(dtype, copy=copy)
# coerce
if is_float_dtype(dtype):
@@ -323,9 +321,7 @@ def astype(self, dtype, copy: bool = True) -> ArrayLike:
# error: Argument 2 to "to_numpy" of "BaseMaskedArray" has incompatible
# type "**Dict[str, float]"; expected "bool"
data = self.to_numpy(dtype=dtype, **kwargs) # type: ignore[arg-type]
- # error: Incompatible return value type (got "ExtensionArray", expected
- # "ndarray")
- return astype_nansafe(data, dtype, copy=False) # type: ignore[return-value]
+ return astype_nansafe(data, dtype, copy=False)
def _values_for_argsort(self) -> np.ndarray:
return self._data
diff --git a/pandas/core/arrays/integer.py b/pandas/core/arrays/integer.py
index b2308233a6272..ae44acf06591f 100644
--- a/pandas/core/arrays/integer.py
+++ b/pandas/core/arrays/integer.py
@@ -369,9 +369,7 @@ def astype(self, dtype, copy: bool = True) -> ArrayLike:
dtype = pandas_dtype(dtype)
if isinstance(dtype, ExtensionDtype):
- # error: Incompatible return value type (got "ExtensionArray", expected
- # "ndarray")
- return super().astype(dtype, copy=copy) # type: ignore[return-value]
+ return super().astype(dtype, copy=copy)
# coerce
if is_float_dtype(dtype):
@@ -384,11 +382,7 @@ def astype(self, dtype, copy: bool = True) -> ArrayLike:
else:
na_value = lib.no_default
- # error: Incompatible return value type (got "ndarray", expected
- # "ExtensionArray")
- return self.to_numpy( # type: ignore[return-value]
- dtype=dtype, na_value=na_value, copy=False
- )
+ return self.to_numpy(dtype=dtype, na_value=na_value, copy=False)
def _values_for_argsort(self) -> np.ndarray:
"""
diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index 7ccdad11761ab..1a626f327b97c 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -911,9 +911,7 @@ def copy(self: IntervalArrayT) -> IntervalArrayT:
# TODO: Could skip verify_integrity here.
return type(self).from_arrays(left, right, closed=closed)
- # error: Return type "ndarray" of "isna" incompatible with return type
- # "ArrayLike" in supertype "ExtensionArray"
- def isna(self) -> np.ndarray: # type: ignore[override]
+ def isna(self) -> np.ndarray:
return isna(self._left)
def shift(
@@ -1573,9 +1571,7 @@ def isin(self, values) -> np.ndarray:
if is_dtype_equal(self.dtype, values.dtype):
# GH#38353 instead of casting to object, operating on a
# complex128 ndarray is much more performant.
-
- # error: "ArrayLike" has no attribute "view"
- left = self._combined.view("complex128") # type:ignore[attr-defined]
+ left = self._combined.view("complex128")
right = values._combined.view("complex128")
return np.in1d(left, right)
@@ -1618,10 +1614,7 @@ def _maybe_convert_platform_interval(values) -> ArrayLike:
# GH 19016
# empty lists/tuples get object dtype by default, but this is
# prohibited for IntervalArray, so coerce to integer instead
-
- # error: Incompatible return value type (got "ndarray", expected
- # "ExtensionArray")
- return np.array([], dtype=np.int64) # type: ignore[return-value]
+ return np.array([], dtype=np.int64)
elif not is_list_like(values) or isinstance(values, ABCDataFrame):
# This will raise later, but we avoid passing to maybe_convert_platform
return values
@@ -1633,5 +1626,4 @@ def _maybe_convert_platform_interval(values) -> ArrayLike:
else:
values = extract_array(values, extract_numpy=True)
- # error: Incompatible return value type (got "ExtensionArray", expected "ndarray")
- return maybe_convert_platform(values) # type: ignore[return-value]
+ return maybe_convert_platform(values)
diff --git a/pandas/core/arrays/masked.py b/pandas/core/arrays/masked.py
index ac0ac2bb21d62..31d58d9d89d49 100644
--- a/pandas/core/arrays/masked.py
+++ b/pandas/core/arrays/masked.py
@@ -310,12 +310,8 @@ def astype(self, dtype: Dtype, copy: bool = True) -> ArrayLike:
if is_dtype_equal(dtype, self.dtype):
if copy:
- # error: Incompatible return value type (got "BaseMaskedArray", expected
- # "ndarray")
- return self.copy() # type: ignore[return-value]
- # error: Incompatible return value type (got "BaseMaskedArray", expected
- # "ndarray")
- return self # type: ignore[return-value]
+ return self.copy()
+ return self
# if we are astyping to another nullable masked dtype, we can fastpath
if isinstance(dtype, BaseMaskedDtype):
@@ -325,9 +321,7 @@ def astype(self, dtype: Dtype, copy: bool = True) -> ArrayLike:
# not directly depending on the `copy` keyword
mask = self._mask if data is self._data else self._mask.copy()
cls = dtype.construct_array_type()
- # error: Incompatible return value type (got "BaseMaskedArray", expected
- # "ndarray")
- return cls(data, mask, copy=False) # type: ignore[return-value]
+ return cls(data, mask, copy=False)
if isinstance(dtype, ExtensionDtype):
eacls = dtype.construct_array_type()
@@ -361,9 +355,7 @@ def _hasna(self) -> bool:
# error: Incompatible return value type (got "bool_", expected "bool")
return self._mask.any() # type: ignore[return-value]
- # error: Return type "ndarray" of "isna" incompatible with return type
- # "ArrayLike" in supertype "ExtensionArray"
- def isna(self) -> np.ndarray: # type: ignore[override]
+ def isna(self) -> np.ndarray:
return self._mask
@property
diff --git a/pandas/core/arrays/numpy_.py b/pandas/core/arrays/numpy_.py
index bef047c29413b..5ef3c24726924 100644
--- a/pandas/core/arrays/numpy_.py
+++ b/pandas/core/arrays/numpy_.py
@@ -190,9 +190,7 @@ def __array_ufunc__(self, ufunc, method: str, *inputs, **kwargs):
# ------------------------------------------------------------------------
# Pandas ExtensionArray Interface
- # error: Return type "ndarray" of "isna" incompatible with return type
- # "ArrayLike" in supertype "ExtensionArray"
- def isna(self) -> np.ndarray: # type: ignore[override]
+ def isna(self) -> np.ndarray:
return isna(self._ndarray)
def _validate_fill_value(self, fill_value):
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index d91522a9e1bb6..a39182d61a8fb 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -643,7 +643,11 @@ def fillna(self, value=None, method=None, limit=None) -> PeriodArray:
if method is not None:
# view as dt64 so we get treated as timelike in core.missing
dta = self.view("M8[ns]")
- result = dta.fillna(value=value, method=method, limit=limit)
+ # error: Item "ndarray" of "Union[ExtensionArray, ndarray]" has no attribute
+ # "fillna"
+ result = dta.fillna( # type: ignore[union-attr]
+ value=value, method=method, limit=limit
+ )
return result.view(self.dtype)
return super().fillna(value=value, method=method, limit=limit)
diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py
index 088a1165e4df0..c798870e4126a 100644
--- a/pandas/core/arrays/sparse/array.py
+++ b/pandas/core/arrays/sparse/array.py
@@ -1145,9 +1145,7 @@ def astype(self, dtype: Optional[Dtype] = None, copy=True):
# TODO copy=False is broken for astype_nansafe with int -> float, so cannot
# passthrough copy keyword: https://github.com/pandas-dev/pandas/issues/34456
sp_values = astype_nansafe(self.sp_values, subtype, copy=True)
- # error: Non-overlapping identity check (left operand type: "ExtensionArray",
- # right operand type: "ndarray")
- if sp_values is self.sp_values and copy: # type: ignore[comparison-overlap]
+ if sp_values is self.sp_values and copy:
sp_values = sp_values.copy()
# error: Argument 1 to "_simple_new" of "SparseArray" has incompatible type
diff --git a/pandas/core/arrays/sparse/dtype.py b/pandas/core/arrays/sparse/dtype.py
index 9e61675002e64..8e55eb5f3d358 100644
--- a/pandas/core/arrays/sparse/dtype.py
+++ b/pandas/core/arrays/sparse/dtype.py
@@ -342,10 +342,11 @@ def update_dtype(self, dtype):
if is_extension_array_dtype(dtype):
raise TypeError("sparse arrays of extension dtypes not supported")
- # error: "ExtensionArray" has no attribute "item"
- fill_value = astype_nansafe(
+ # error: Item "ExtensionArray" of "Union[ExtensionArray, ndarray]" has no
+ # attribute "item"
+ fill_value = astype_nansafe( # type: ignore[union-attr]
np.array(self.fill_value), dtype
- ).item() # type: ignore[attr-defined]
+ ).item()
dtype = cls(dtype, fill_value=fill_value)
return dtype
diff --git a/pandas/core/arrays/string_arrow.py b/pandas/core/arrays/string_arrow.py
index efdc18cd071b5..6f7badd3c2cd2 100644
--- a/pandas/core/arrays/string_arrow.py
+++ b/pandas/core/arrays/string_arrow.py
@@ -434,9 +434,7 @@ def nbytes(self) -> int:
"""
return self._data.nbytes
- # error: Return type "ndarray" of "isna" incompatible with return type "ArrayLike"
- # in supertype "ExtensionArray"
- def isna(self) -> np.ndarray: # type: ignore[override]
+ def isna(self) -> np.ndarray:
"""
Boolean NumPy array indicating if each value is missing.
diff --git a/pandas/core/base.py b/pandas/core/base.py
index 1943aafc7c760..56ec2597314b2 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -735,8 +735,7 @@ def argmax(self, axis=None, skipna: bool = True, *args, **kwargs) -> int:
skipna = nv.validate_argmax_with_skipna(skipna, args, kwargs)
if isinstance(delegate, ExtensionArray):
- # error: "ExtensionArray" has no attribute "any"
- if not skipna and delegate.isna().any(): # type: ignore[attr-defined]
+ if not skipna and delegate.isna().any():
return -1
else:
return delegate.argmax()
@@ -798,8 +797,7 @@ def argmin(self, axis=None, skipna=True, *args, **kwargs) -> int:
skipna = nv.validate_argmin_with_skipna(skipna, args, kwargs)
if isinstance(delegate, ExtensionArray):
- # error: "ExtensionArray" has no attribute "any"
- if not skipna and delegate.isna().any(): # type: ignore[attr-defined]
+ if not skipna and delegate.isna().any():
return -1
else:
return delegate.argmin()
@@ -1333,6 +1331,4 @@ def drop_duplicates(self, keep="first"):
return self[~duplicated] # type: ignore[index]
def duplicated(self, keep: Union[str, bool] = "first") -> np.ndarray:
- # error: Value of type variable "ArrayLike" of "duplicated" cannot be
- # "Union[ExtensionArray, ndarray]"
- return duplicated(self._values, keep=keep) # type: ignore[type-var]
+ return duplicated(self._values, keep=keep)
diff --git a/pandas/core/common.py b/pandas/core/common.py
index 83848e0532253..6790a3e54192a 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -500,13 +500,7 @@ def convert_to_list_like(
inputs are returned unmodified whereas others are converted to list.
"""
if isinstance(values, (list, np.ndarray, ABCIndex, ABCSeries, ABCExtensionArray)):
- # error: Incompatible return value type (got "Union[Any, List[Any], Index,
- # Series, ExtensionArray]", expected "Union[List[Any], ExtensionArray]")
- # error: Incompatible return value type (got "Union[Any, List[Any], Index,
- # Series, ExtensionArray]", expected "Union[List[Any], Index]")
- # error: Incompatible return value type (got "Union[Any, List[Any], Index,
- # Series, ExtensionArray]", expected "Union[List[Any], Series]")
- return values # type: ignore[return-value]
+ return values
elif isinstance(values, abc.Iterable) and not isinstance(values, str):
return list(values)
diff --git a/pandas/core/computation/pytables.py b/pandas/core/computation/pytables.py
index 5e7fdb8dc9c7d..b9de4809c96fa 100644
--- a/pandas/core/computation/pytables.py
+++ b/pandas/core/computation/pytables.py
@@ -231,7 +231,11 @@ def stringify(value):
if v not in metadata:
result = -1
else:
- result = metadata.searchsorted(v, side="left")
+ # error: Incompatible types in assignment (expression has type
+ # "Union[Any, ndarray]", variable has type "int")
+ result = metadata.searchsorted( # type: ignore[assignment]
+ v, side="left"
+ )
return TermValue(result, result, "integer")
elif kind == "integer":
v = int(float(v))
diff --git a/pandas/core/construction.py b/pandas/core/construction.py
index 46f32ee401603..78a7f1890b5de 100644
--- a/pandas/core/construction.py
+++ b/pandas/core/construction.py
@@ -306,17 +306,7 @@ def array(
# Note: we exclude np.ndarray here, will do type inference on it
dtype = data.dtype
- # error: Value of type variable "AnyArrayLike" of "extract_array" cannot be
- # "Union[Sequence[object], ExtensionArray]"
- # error: Value of type variable "AnyArrayLike" of "extract_array" cannot be
- # "Union[Sequence[object], Index]"
- # error: Incompatible types in assignment (expression has type "ExtensionArray",
- # variable has type "Union[Sequence[object], Index]")
- # error: Incompatible types in assignment (expression has type "ExtensionArray",
- # variable has type "Union[Sequence[object], Series]")
- # error: Incompatible types in assignment (expression has type "ExtensionArray",
- # variable has type "Union[Sequence[object], ndarray]")
- data = extract_array(data, extract_numpy=True) # type: ignore[type-var,assignment]
+ data = extract_array(data, extract_numpy=True)
# this returns None for not-found dtypes.
if isinstance(dtype, str):
@@ -510,9 +500,7 @@ def sanitize_array(
try:
subarr = _try_cast(data, dtype, copy, True)
except ValueError:
- # error: Incompatible types in assignment (expression has type
- # "ndarray", variable has type "ExtensionArray")
- subarr = np.array(data, copy=copy) # type: ignore[assignment]
+ subarr = np.array(data, copy=copy)
else:
# we will try to copy by-definition here
subarr = _try_cast(data, dtype, copy, raise_cast_failure)
@@ -525,9 +513,7 @@ def sanitize_array(
subarr = subarr.astype(dtype, copy=copy)
elif copy:
subarr = subarr.copy()
- # error: Incompatible return value type (got "ExtensionArray", expected
- # "ndarray")
- return subarr # type: ignore[return-value]
+ return subarr
elif isinstance(data, (list, tuple, abc.Set, abc.ValuesView)) and len(data) > 0:
# TODO: deque, array.array
@@ -564,11 +550,9 @@ def sanitize_array(
subarr = _sanitize_ndim(subarr, data, dtype, index)
if not (is_extension_array_dtype(subarr.dtype) or is_extension_array_dtype(dtype)):
- # error: Incompatible types in assignment (expression has type "ndarray",
- # variable has type "ExtensionArray")
# error: Argument 1 to "_sanitize_str_dtypes" has incompatible type
# "ExtensionArray"; expected "ndarray"
- subarr = _sanitize_str_dtypes( # type: ignore[assignment]
+ subarr = _sanitize_str_dtypes(
subarr, data, dtype, copy # type: ignore[arg-type]
)
@@ -579,8 +563,7 @@ def sanitize_array(
subarr = array(subarr)
subarr = extract_array(subarr, extract_numpy=True)
- # error: Incompatible return value type (got "ExtensionArray", expected "ndarray")
- return subarr # type: ignore[return-value]
+ return subarr
def _sanitize_ndim(
@@ -602,24 +585,16 @@ def _sanitize_ndim(
if is_object_dtype(dtype) and isinstance(dtype, ExtensionDtype):
# i.e. PandasDtype("O")
- # error: Incompatible types in assignment (expression has type "ndarray",
- # variable has type "ExtensionArray")
# error: Argument "dtype" to "asarray_tuplesafe" has incompatible type
# "Type[object]"; expected "Union[str, dtype[Any], None]"
- result = com.asarray_tuplesafe( # type: ignore[assignment]
- data, dtype=object # type: ignore[arg-type]
- )
+ result = com.asarray_tuplesafe(data, dtype=object) # type: ignore[arg-type]
cls = dtype.construct_array_type()
result = cls._from_sequence(result, dtype=dtype)
else:
- # error: Incompatible types in assignment (expression has type "ndarray",
- # variable has type "ExtensionArray")
# error: Argument "dtype" to "asarray_tuplesafe" has incompatible type
# "Union[dtype[Any], ExtensionDtype, None]"; expected "Union[str,
# dtype[Any], None]"
- result = com.asarray_tuplesafe( # type: ignore[assignment]
- data, dtype=dtype # type: ignore[arg-type]
- )
+ result = com.asarray_tuplesafe(data, dtype=dtype) # type: ignore[arg-type]
return result
@@ -689,9 +664,7 @@ def _try_cast(
and not copy
and dtype is None
):
- # error: Incompatible return value type (got "ndarray", expected
- # "ExtensionArray")
- return arr # type: ignore[return-value]
+ return arr
if isinstance(dtype, ExtensionDtype) and (dtype.kind != "M" or is_sparse(dtype)):
# create an extension array from its dtype
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index c5d672b207369..44650500e0f65 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -136,8 +136,7 @@ def maybe_convert_platform(
if arr.dtype == object:
arr = lib.maybe_convert_objects(arr)
- # error: Incompatible return value type (got "ndarray", expected "ExtensionArray")
- return arr # type: ignore[return-value]
+ return arr
def is_nested_object(obj) -> bool:
@@ -939,9 +938,7 @@ def infer_dtype_from_array(
(dtype('O'), [1, '1'])
"""
if isinstance(arr, np.ndarray):
- # error: Incompatible return value type (got "Tuple[dtype, ndarray]", expected
- # "Tuple[Union[dtype, ExtensionDtype], ExtensionArray]")
- return arr.dtype, arr # type: ignore[return-value]
+ return arr.dtype, arr
if not is_list_like(arr):
raise TypeError("'arr' must be list-like")
@@ -950,9 +947,7 @@ def infer_dtype_from_array(
return arr.dtype, arr
elif isinstance(arr, ABCSeries):
- # error: Incompatible return value type (got "Tuple[Any, ndarray]", expected
- # "Tuple[Union[dtype, ExtensionDtype], ExtensionArray]")
- return arr.dtype, np.asarray(arr) # type: ignore[return-value]
+ return arr.dtype, np.asarray(arr)
# don't force numpy coerce with nan's
inferred = lib.infer_dtype(arr, skipna=False)
@@ -1067,18 +1062,14 @@ def astype_dt64_to_dt64tz(
from pandas.core.construction import ensure_wrapped_if_datetimelike
values = ensure_wrapped_if_datetimelike(values)
- # error: Incompatible types in assignment (expression has type "DatetimeArray",
- # variable has type "ndarray")
- values = cast("DatetimeArray", values) # type: ignore[assignment]
+ values = cast("DatetimeArray", values)
aware = isinstance(dtype, DatetimeTZDtype)
if via_utc:
# Series.astype behavior
# caller is responsible for checking this
-
- # error: "ndarray" has no attribute "tz"
- assert values.tz is None and aware # type: ignore[attr-defined]
+ assert values.tz is None and aware
dtype = cast(DatetimeTZDtype, dtype)
if copy:
@@ -1096,17 +1087,11 @@ def astype_dt64_to_dt64tz(
# FIXME: GH#33401 this doesn't match DatetimeArray.astype, which
# goes through the `not via_utc` path
-
- # error: "ndarray" has no attribute "tz_localize"
- return values.tz_localize("UTC").tz_convert( # type: ignore[attr-defined]
- dtype.tz
- )
+ return values.tz_localize("UTC").tz_convert(dtype.tz)
else:
# DatetimeArray/DatetimeIndex.astype behavior
-
- # error: "ndarray" has no attribute "tz"
- if values.tz is None and aware: # type: ignore[attr-defined]
+ if values.tz is None and aware:
dtype = cast(DatetimeTZDtype, dtype)
level = find_stack_level()
warnings.warn(
@@ -1117,20 +1102,17 @@ def astype_dt64_to_dt64tz(
stacklevel=level,
)
- # error: "ndarray" has no attribute "tz_localize"
- return values.tz_localize(dtype.tz) # type: ignore[attr-defined]
+ return values.tz_localize(dtype.tz)
elif aware:
# GH#18951: datetime64_tz dtype but not equal means different tz
dtype = cast(DatetimeTZDtype, dtype)
- # error: "ndarray" has no attribute "tz_convert"
- result = values.tz_convert(dtype.tz) # type: ignore[attr-defined]
+ result = values.tz_convert(dtype.tz)
if copy:
result = result.copy()
return result
- # error: "ndarray" has no attribute "tz"
- elif values.tz is not None: # type: ignore[attr-defined]
+ elif values.tz is not None:
level = find_stack_level()
warnings.warn(
"Using .astype to convert from timezone-aware dtype to "
@@ -1141,10 +1123,7 @@ def astype_dt64_to_dt64tz(
stacklevel=level,
)
- # error: "ndarray" has no attribute "tz_convert"
- result = values.tz_convert("UTC").tz_localize( # type: ignore[attr-defined]
- None
- )
+ result = values.tz_convert("UTC").tz_localize(None)
if copy:
result = result.copy()
return result
@@ -1212,8 +1191,13 @@ def astype_nansafe(
flat = arr.ravel("K")
result = astype_nansafe(flat, dtype, copy=copy, skipna=skipna)
order = "F" if flags.f_contiguous else "C"
- # error: "ExtensionArray" has no attribute "reshape"; maybe "shape"?
- return result.reshape(arr.shape, order=order) # type: ignore[attr-defined]
+ # error: Item "ExtensionArray" of "Union[ExtensionArray, ndarray]" has no
+ # attribute "reshape"
+ # error: No overload variant of "reshape" of "_ArrayOrScalarCommon" matches
+ # argument types "Tuple[int, ...]", "str"
+ return result.reshape( # type: ignore[union-attr,call-overload]
+ arr.shape, order=order
+ )
# We get here with 0-dim from sparse
arr = np.atleast_1d(arr)
@@ -1231,9 +1215,7 @@ def astype_nansafe(
from pandas.core.construction import ensure_wrapped_if_datetimelike
arr = ensure_wrapped_if_datetimelike(arr)
- # error: Incompatible return value type (got "ndarray", expected
- # "ExtensionArray")
- return arr.astype(dtype, copy=copy) # type: ignore[return-value]
+ return arr.astype(dtype, copy=copy)
if issubclass(dtype.type, str):
return lib.ensure_string_array(arr, skipna=skipna, convert_na_value=False)
@@ -1250,15 +1232,11 @@ def astype_nansafe(
)
if isna(arr).any():
raise ValueError("Cannot convert NaT values to integer")
- # error: Incompatible return value type (got "ndarray", expected
- # "ExtensionArray")
- return arr.view(dtype) # type: ignore[return-value]
+ return arr.view(dtype)
# allow frequency conversions
if dtype.kind == "M":
- # error: Incompatible return value type (got "ndarray", expected
- # "ExtensionArray")
- return arr.astype(dtype) # type: ignore[return-value]
+ return arr.astype(dtype)
raise TypeError(f"cannot astype a datetimelike from [{arr.dtype}] to [{dtype}]")
@@ -1274,16 +1252,10 @@ def astype_nansafe(
)
if isna(arr).any():
raise ValueError("Cannot convert NaT values to integer")
- # error: Incompatible return value type (got "ndarray", expected
- # "ExtensionArray")
- return arr.view(dtype) # type: ignore[return-value]
+ return arr.view(dtype)
elif dtype.kind == "m":
- # error: Incompatible return value type (got "ndarray", expected
- # "ExtensionArray")
- return astype_td64_unit_conversion( # type: ignore[return-value]
- arr, dtype, copy=copy
- )
+ return astype_td64_unit_conversion(arr, dtype, copy=copy)
raise TypeError(f"cannot astype a timedelta from [{arr.dtype}] to [{dtype}]")
@@ -1304,9 +1276,7 @@ def astype_nansafe(
elif is_datetime64_dtype(dtype):
from pandas import to_datetime
- # error: Incompatible return value type (got "ExtensionArray", expected
- # "ndarray")
- return astype_nansafe( # type: ignore[return-value]
+ return astype_nansafe(
# error: No overload variant of "to_datetime" matches argument type
# "ndarray"
to_datetime(arr).values, # type: ignore[call-overload]
@@ -1316,11 +1286,7 @@ def astype_nansafe(
elif is_timedelta64_dtype(dtype):
from pandas import to_timedelta
- # error: Incompatible return value type (got "ExtensionArray", expected
- # "ndarray")
- return astype_nansafe( # type: ignore[return-value]
- to_timedelta(arr)._values, dtype, copy=copy
- )
+ return astype_nansafe(to_timedelta(arr)._values, dtype, copy=copy)
if dtype.name in ("datetime64", "timedelta64"):
msg = (
@@ -1331,13 +1297,9 @@ def astype_nansafe(
if copy or is_object_dtype(arr.dtype) or is_object_dtype(dtype):
# Explicit copy, or required since NumPy can't view from / to object.
-        # error: Incompatible return value type (got "ndarray", expected
-        # "ExtensionArray")
-        return arr.astype(dtype, copy=True) # type: ignore[return-value]
+        return arr.astype(dtype, copy=True)
-
-        # error: Incompatible return value type (got "ndarray", expected "ExtensionArray")
-        return arr.astype(dtype, copy=copy) # type: ignore[return-value]
+        return arr.astype(dtype, copy=copy)
def astype_array(values: ArrayLike, dtype: DtypeObj, copy: bool = False) -> ArrayLike:
@@ -1366,11 +1328,7 @@ def astype_array(values: ArrayLike, dtype: DtypeObj, copy: bool = False) -> Arra
raise TypeError(msg)
if is_datetime64tz_dtype(dtype) and is_datetime64_dtype(values.dtype):
- # error: Incompatible return value type (got "DatetimeArray", expected
- # "ndarray")
- return astype_dt64_to_dt64tz( # type: ignore[return-value]
- values, dtype, copy, via_utc=True
- )
+ return astype_dt64_to_dt64tz(values, dtype, copy, via_utc=True)
if is_dtype_equal(values.dtype, dtype):
if copy:
@@ -1381,19 +1339,13 @@ def astype_array(values: ArrayLike, dtype: DtypeObj, copy: bool = False) -> Arra
values = values.astype(dtype, copy=copy)
else:
- # error: Incompatible types in assignment (expression has type "ExtensionArray",
- # variable has type "ndarray")
# error: Argument 1 to "astype_nansafe" has incompatible type "ExtensionArray";
# expected "ndarray"
- values = astype_nansafe( # type: ignore[assignment]
- values, dtype, copy=copy # type: ignore[arg-type]
- )
+ values = astype_nansafe(values, dtype, copy=copy) # type: ignore[arg-type]
# in pandas we don't store numpy str dtypes, so convert to object
if isinstance(dtype, np.dtype) and issubclass(values.dtype.type, str):
- # error: Incompatible types in assignment (expression has type "ndarray",
- # variable has type "ExtensionArray")
- values = np.array(values, dtype=object) # type: ignore[assignment]
+ values = np.array(values, dtype=object)
return values
@@ -1494,9 +1446,7 @@ def soft_convert_objects(
values, convert_datetime=datetime, convert_timedelta=timedelta
)
except (OutOfBoundsDatetime, ValueError):
- # error: Incompatible return value type (got "ndarray", expected
- # "ExtensionArray")
- return values # type: ignore[return-value]
+ return values
if numeric and is_object_dtype(values.dtype):
converted = lib.maybe_convert_numeric(values, set(), coerce_numeric=True)
@@ -1505,8 +1455,7 @@ def soft_convert_objects(
values = converted if not isna(converted).all() else values
values = values.copy() if copy else values
- # error: Incompatible return value type (got "ndarray", expected "ExtensionArray")
- return values # type: ignore[return-value]
+ return values
def convert_dtypes(
@@ -1657,20 +1606,12 @@ def try_datetime(v: np.ndarray) -> ArrayLike:
dta = sequence_to_datetimes(v, require_iso8601=True, allow_object=True)
except (ValueError, TypeError):
# e.g. <class 'numpy.timedelta64'> is not convertible to datetime
-
- # error: Incompatible return value type (got "ndarray", expected
- # "ExtensionArray")
- return v.reshape(shape) # type: ignore[return-value]
+ return v.reshape(shape)
else:
# GH#19761 we may have mixed timezones, in which cast 'dta' is
# an ndarray[object]. Only 1 test
# relies on this behavior, see GH#40111
-
- # error: Incompatible return value type (got "Union[ndarray,
- # DatetimeArray]", expected "ExtensionArray")
- # error: Incompatible return value type (got "Union[ndarray,
- # DatetimeArray]", expected "ndarray")
- return dta.reshape(shape) # type: ignore[return-value]
+ return dta.reshape(shape)
def try_timedelta(v: np.ndarray) -> np.ndarray:
# safe coerce to timedelta64
diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py
index 68c8d35810b7e..7a2d6468f1b63 100644
--- a/pandas/core/dtypes/common.py
+++ b/pandas/core/dtypes/common.py
@@ -165,8 +165,9 @@ def ensure_int_or_float(arr: ArrayLike, copy: bool = False) -> np.ndarray:
return arr.astype("uint64", copy=copy, casting="safe") # type: ignore[call-arg]
except TypeError:
if is_extension_array_dtype(arr.dtype):
- # error: "ndarray" has no attribute "to_numpy"
- return arr.to_numpy( # type: ignore[attr-defined]
+ # pandas/core/dtypes/common.py:168: error: Item "ndarray" of
+ # "Union[ExtensionArray, ndarray]" has no attribute "to_numpy" [union-attr]
+ return arr.to_numpy( # type: ignore[union-attr]
dtype="float64", na_value=np.nan
)
return arr.astype("float64", copy=copy)
diff --git a/pandas/core/dtypes/concat.py b/pandas/core/dtypes/concat.py
index 06fc1918b5ecf..614a637f2d904 100644
--- a/pandas/core/dtypes/concat.py
+++ b/pandas/core/dtypes/concat.py
@@ -51,12 +51,8 @@ def _cast_to_common_type(arr: ArrayLike, dtype: DtypeObj) -> ArrayLike:
# problem case: SparseArray.astype(dtype) doesn't follow the specified
# dtype exactly, but converts this to Sparse[dtype] -> first manually
# convert to dense array
-
- # error: Incompatible types in assignment (expression has type
- # "SparseArray", variable has type "ndarray")
- arr = cast(SparseArray, arr) # type: ignore[assignment]
- # error: "ndarray" has no attribute "to_dense"
- return arr.to_dense().astype(dtype, copy=False) # type: ignore[attr-defined]
+ arr = cast(SparseArray, arr)
+ return arr.to_dense().astype(dtype, copy=False)
if (
isinstance(arr, np.ndarray)
diff --git a/pandas/core/dtypes/missing.py b/pandas/core/dtypes/missing.py
index 286272b165fb9..de981c39228ae 100644
--- a/pandas/core/dtypes/missing.py
+++ b/pandas/core/dtypes/missing.py
@@ -164,13 +164,9 @@ def _isna(obj, inf_as_na: bool = False):
elif isinstance(obj, type):
return False
elif isinstance(obj, (np.ndarray, ABCExtensionArray)):
- # error: Value of type variable "ArrayLike" of "_isna_array" cannot be
- # "Union[ndarray, ExtensionArray]"
- return _isna_array(obj, inf_as_na=inf_as_na) # type: ignore[type-var]
+ return _isna_array(obj, inf_as_na=inf_as_na)
elif isinstance(obj, (ABCSeries, ABCIndex)):
- # error: Value of type variable "ArrayLike" of "_isna_array" cannot be
- # "Union[Any, ExtensionArray, ndarray]"
- result = _isna_array(obj._values, inf_as_na=inf_as_na) # type: ignore[type-var]
+ result = _isna_array(obj._values, inf_as_na=inf_as_na)
# box
if isinstance(obj, ABCSeries):
result = obj._constructor(
@@ -238,13 +234,15 @@ def _isna_array(values: ArrayLike, inf_as_na: bool = False):
if is_extension_array_dtype(dtype):
if inf_as_na and is_categorical_dtype(dtype):
- # error: "ndarray" has no attribute "to_numpy"
+ # error: Item "ndarray" of "Union[ExtensionArray, ndarray]" has no attribute
+ # "to_numpy"
result = libmissing.isnaobj_old(
- values.to_numpy() # type: ignore[attr-defined]
+ values.to_numpy() # type: ignore[union-attr]
)
else:
- # error: "ndarray" has no attribute "isna"
- result = values.isna() # type: ignore[attr-defined]
+ # error: Item "ndarray" of "Union[ExtensionArray, ndarray]" has no attribute
+ # "isna"
+ result = values.isna() # type: ignore[union-attr]
elif is_string_dtype(dtype):
# error: Argument 1 to "_isna_string_dtype" has incompatible type
# "ExtensionArray"; expected "ndarray"
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index de28c04ca0793..98abe8eaffca8 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -673,9 +673,10 @@ def __init__(
data = dataclasses_to_dicts(data)
if treat_as_nested(data):
if columns is not None:
- # error: Value of type variable "AnyArrayLike" of "ensure_index"
- # cannot be "Collection[Any]"
- columns = ensure_index(columns) # type: ignore[type-var]
+ # error: Argument 1 to "ensure_index" has incompatible type
+ # "Collection[Any]"; expected "Union[Union[Union[ExtensionArray,
+ # ndarray], Index, Series], Sequence[Any]]"
+ columns = ensure_index(columns) # type: ignore[arg-type]
arrays, columns, index = nested_data_to_arrays(
# error: Argument 3 to "nested_data_to_arrays" has incompatible
# type "Optional[Collection[Any]]"; expected "Optional[Index]"
@@ -1344,11 +1345,7 @@ def dot(self, other: Series) -> Series:
def dot(self, other: Union[DataFrame, Index, ArrayLike]) -> DataFrame:
...
- # error: Overloaded function implementation cannot satisfy signature 2 due to
- # inconsistencies in how they use type variables
- def dot( # type: ignore[misc]
- self, other: Union[AnyArrayLike, FrameOrSeriesUnion]
- ) -> FrameOrSeriesUnion:
+ def dot(self, other: Union[AnyArrayLike, FrameOrSeriesUnion]) -> FrameOrSeriesUnion:
"""
Compute the matrix multiplication between the DataFrame and other.
@@ -3390,9 +3387,7 @@ def _get_column_array(self, i: int) -> ArrayLike:
Get the values of the i'th column (ndarray or ExtensionArray, as stored
in the Block)
"""
- # error: Incompatible return value type (got "ExtensionArray", expected
- # "ndarray")
- return self._mgr.iget_values(i) # type: ignore[return-value]
+ return self._mgr.iget_values(i)
def _iter_column_arrays(self) -> Iterator[ArrayLike]:
"""
@@ -3400,9 +3395,7 @@ def _iter_column_arrays(self) -> Iterator[ArrayLike]:
This returns the values as stored in the Block (ndarray or ExtensionArray).
"""
for i in range(len(self.columns)):
- # error: Incompatible types in "yield" (actual type
- # "ExtensionArray", expected type "ndarray")
- yield self._get_column_array(i) # type: ignore[misc]
+ yield self._get_column_array(i)
def __getitem__(self, key):
key = lib.item_from_zerodim(key)
@@ -10168,9 +10161,7 @@ def _reindex_for_setitem(value: FrameOrSeriesUnion, index: Index) -> ArrayLike:
# reindex if necessary
if value.index.equals(index) or not len(index):
- # error: Incompatible return value type (got "Union[ndarray, Any]", expected
- # "ExtensionArray")
- return value._values.copy() # type: ignore[return-value]
+ return value._values.copy()
# GH#4107
try:
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 50d04135c9300..b407212fe6a50 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -1174,20 +1174,11 @@ def py_fallback(values: ArrayLike) -> ArrayLike:
# We've split an object block! Everything we've assumed
# about a single block input returning a single block output
# is a lie. See eg GH-39329
-
- # error: Incompatible return value type (got "ndarray", expected
- # "ExtensionArray")
- return mgr.as_array() # type: ignore[return-value]
+ return mgr.as_array()
else:
# We are a single block from a BlockManager
# or one array from SingleArrayManager
-
- # error: Incompatible return value type (got "Union[ndarray,
- # ExtensionArray, ArrayLike]", expected "ExtensionArray")
- # error: Incompatible return value type (got "Union[ndarray,
- # ExtensionArray, ArrayLike]", expected
- # "ndarray")
- return arrays[0] # type: ignore[return-value]
+ return arrays[0]
def array_func(values: ArrayLike) -> ArrayLike:
diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py
index 2d7547ff75ca4..6495a4d26da3a 100644
--- a/pandas/core/groupby/ops.py
+++ b/pandas/core/groupby/ops.py
@@ -791,7 +791,11 @@ def _aggregate_series_pure_python(self, obj: Series, func: F):
result[label] = res
result = lib.maybe_convert_objects(result, try_float=False)
- result = maybe_cast_result(result, obj, numeric_only=True)
+ # error: Incompatible types in assignment (expression has type
+ # "Union[ExtensionArray, ndarray]", variable has type "ndarray")
+ result = maybe_cast_result( # type: ignore[assignment]
+ result, obj, numeric_only=True
+ )
return result, counts
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 9543b11ad4de1..b001139bef6c5 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -2993,11 +2993,7 @@ def _union(self, other: Index, sort):
missing = algos.unique1d(self.get_indexer_non_unique(other)[1])
if len(missing) > 0:
- # error: Value of type variable "ArrayLike" of "take_nd" cannot be
- # "Union[ExtensionArray, ndarray]"
- other_diff = algos.take_nd(
- rvals, missing, allow_fill=False # type: ignore[type-var]
- )
+ other_diff = algos.take_nd(rvals, missing, allow_fill=False)
result = concat_compat((lvals, other_diff))
else:
# error: Incompatible types in assignment (expression has type
@@ -4389,11 +4385,7 @@ def values(self) -> ArrayLike:
Index.array : Reference to the underlying data.
Index.to_numpy : A NumPy array representing the underlying data.
"""
- # error: Incompatible return value type (got "Union[ExtensionArray, ndarray]",
- # expected "ExtensionArray")
- # error: Incompatible return value type (got "Union[ExtensionArray, ndarray]",
- # expected "ndarray")
- return self._data # type: ignore[return-value]
+ return self._data
@cache_readonly
@doc(IndexOpsMixin.array)
@@ -4714,9 +4706,7 @@ def putmask(self, mask, value):
numpy.ndarray.putmask : Changes elements of an array
based on conditional and input values.
"""
- # error: Value of type variable "ArrayLike" of "validate_putmask" cannot be
- # "Union[ExtensionArray, ndarray]"
- mask, noop = validate_putmask(self._values, mask) # type: ignore[type-var]
+ mask, noop = validate_putmask(self._values, mask)
if noop:
return self.copy()
@@ -5600,9 +5590,7 @@ def isin(self, values, level=None):
"""
if level is not None:
self._validate_index_level(level)
- # error: Value of type variable "AnyArrayLike" of "isin" cannot be
- # "Union[ExtensionArray, ndarray]"
- return algos.isin(self._values, values) # type: ignore[type-var]
+ return algos.isin(self._values, values)
def _get_string_slice(self, key: str_t):
# this is for partial string indexing,
@@ -6023,11 +6011,7 @@ def _cmp_method(self, other, op):
else:
with np.errstate(all="ignore"):
- # error: Value of type variable "ArrayLike" of "comparison_op" cannot be
- # "Union[ExtensionArray, ndarray]"
- result = ops.comparison_op(
- self._values, other, op # type: ignore[type-var]
- )
+ result = ops.comparison_op(self._values, other, op)
return result
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index a38ef55614638..62941a23c6459 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -489,9 +489,7 @@ def _get_indexer(
if self.equals(target):
return np.arange(len(self), dtype="intp")
- # error: Value of type variable "ArrayLike" of "_get_indexer_non_unique" of
- # "CategoricalIndex" cannot be "Union[ExtensionArray, ndarray]"
- return self._get_indexer_non_unique(target._values)[0] # type: ignore[type-var]
+ return self._get_indexer_non_unique(target._values)[0]
@Appender(_index_shared_docs["get_indexer_non_unique"] % _index_doc_kwargs)
def get_indexer_non_unique(self, target):
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 793dd041fbf6f..96459970a9b57 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -136,9 +136,7 @@ def _is_all_dates(self) -> bool:
# Abstract data attributes
@property
- # error: Return type "ndarray" of "values" incompatible with return type "ArrayLike"
- # in supertype "Index"
- def values(self) -> np.ndarray: # type: ignore[override]
+ def values(self) -> np.ndarray:
# Note: PeriodArray overrides this to return an ndarray of objects.
return self._data._ndarray
@@ -530,10 +528,8 @@ def shift(self: _T, periods: int = 1, freq=None) -> _T:
PeriodIndex.shift : Shift values of PeriodIndex.
"""
arr = self._data.view()
- # error: "ExtensionArray" has no attribute "_freq"
- arr._freq = self.freq # type: ignore[attr-defined]
- # error: "ExtensionArray" has no attribute "_time_shift"
- result = arr._time_shift(periods, freq=freq) # type: ignore[attr-defined]
+ arr._freq = self.freq
+ result = arr._time_shift(periods, freq=freq)
return type(self)(result, name=self.name)
# --------------------------------------------------------------------
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index 58c5b23d12a35..86ff95a588217 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -1223,7 +1223,12 @@ def interval_range(
breaks = np.linspace(start, end, periods)
if all(is_integer(x) for x in com.not_none(start, end, freq)):
# np.linspace always produces float output
- breaks = maybe_downcast_numeric(breaks, np.dtype("int64"))
+
+ # error: Incompatible types in assignment (expression has type
+ # "Union[ExtensionArray, ndarray]", variable has type "ndarray")
+ breaks = maybe_downcast_numeric( # type: ignore[assignment]
+ breaks, np.dtype("int64")
+ )
else:
# delegate to the appropriate range function
if isinstance(endpoint, Timestamp):
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 3b538b948ae81..7bb3dc5ab4545 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -720,9 +720,7 @@ def _values(self) -> np.ndarray:
return arr
@property
- # error: Return type "ndarray" of "values" incompatible with return type "ArrayLike"
- # in supertype "Index"
- def values(self) -> np.ndarray: # type: ignore[override]
+ def values(self) -> np.ndarray:
return self._values
@property
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index b15912e4c477b..0c5dbec2094e5 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -275,9 +275,7 @@ def __new__(
# Data
@property
- # error: Return type "ndarray" of "values" incompatible with return type "ArrayLike"
- # in supertype "Index"
- def values(self) -> np.ndarray: # type: ignore[override]
+ def values(self) -> np.ndarray:
return np.asarray(self, dtype=object)
def _maybe_convert_timedelta(self, other):
diff --git a/pandas/core/internals/array_manager.py b/pandas/core/internals/array_manager.py
index 4b60cec55a2ba..6134325d249c2 100644
--- a/pandas/core/internals/array_manager.py
+++ b/pandas/core/internals/array_manager.py
@@ -503,16 +503,9 @@ def quantile(
interpolation="linear",
) -> ArrayManager:
- # error: Value of type variable "ArrayLike" of "ensure_block_shape" cannot be
- # "Union[ndarray, ExtensionArray]"
- arrs = [ensure_block_shape(x, 2) for x in self.arrays] # type: ignore[type-var]
+ arrs = [ensure_block_shape(x, 2) for x in self.arrays]
assert axis == 1
- # error: Value of type variable "ArrayLike" of "quantile_compat" cannot be
- # "object"
- new_arrs = [
- quantile_compat(x, qs, interpolation, axis=axis) # type: ignore[type-var]
- for x in arrs
- ]
+ new_arrs = [quantile_compat(x, qs, interpolation, axis=axis) for x in arrs]
for i, arr in enumerate(new_arrs):
if arr.ndim == 2:
assert arr.shape[0] == 1, arr.shape
@@ -836,11 +829,7 @@ def iget_values(self, i: int) -> ArrayLike:
"""
Return the data for column i as the values (ndarray or ExtensionArray).
"""
- # error: Incompatible return value type (got "Union[ndarray, ExtensionArray]",
- # expected "ExtensionArray")
- # error: Incompatible return value type (got "Union[ndarray, ExtensionArray]",
- # expected "ndarray")
- return self.arrays[i] # type: ignore[return-value]
+ return self.arrays[i]
def idelete(self, indexer):
"""
@@ -1019,9 +1008,7 @@ def _reindex_indexer(
else:
validate_indices(indexer, len(self._axes[0]))
new_arrays = [
- # error: Value of type variable "ArrayLike" of "take_1d" cannot be
- # "Union[ndarray, ExtensionArray]" [type-var]
- take_1d( # type: ignore[type-var]
+ take_1d(
arr,
indexer,
allow_fill=True,
@@ -1080,9 +1067,7 @@ def _equal_values(self, other) -> bool:
assuming shape and indexes have already been checked.
"""
for left, right in zip(self.arrays, other.arrays):
- # error: Value of type variable "ArrayLike" of "array_equals" cannot be
- # "Union[Any, ndarray, ExtensionArray]"
- if not array_equals(left, right): # type: ignore[type-var]
+ if not array_equals(left, right):
return False
else:
return True
@@ -1109,9 +1094,7 @@ def unstack(self, unstacker, fill_value) -> ArrayManager:
new_arrays = []
for arr in self.arrays:
for i in range(unstacker.full_shape[1]):
- # error: Value of type variable "ArrayLike" of "take_1d" cannot be
- # "Union[ndarray, ExtensionArray]" [type-var]
- new_arr = take_1d( # type: ignore[type-var]
+ new_arr = take_1d(
arr, new_indexer2D[:, i], allow_fill=True, fill_value=fill_value
)
new_arrays.append(new_arr)
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index ab23c67b52bcd..1bcddee4d726e 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -432,9 +432,7 @@ def fillna(
inplace = validate_bool_kwarg(inplace, "inplace")
mask = isna(self.values)
- # error: Value of type variable "ArrayLike" of "validate_putmask" cannot be
- # "Union[ndarray, ExtensionArray]"
- mask, noop = validate_putmask(self.values, mask) # type: ignore[type-var]
+ mask, noop = validate_putmask(self.values, mask)
if limit is not None:
limit = libalgos.validate_limit(None, limit=limit)
@@ -577,9 +575,7 @@ def downcast(self, dtypes=None) -> List[Block]:
if dtypes is None:
dtypes = "infer"
- # error: Value of type variable "ArrayLike" of "maybe_downcast_to_dtype"
- # cannot be "Union[ndarray, ExtensionArray]"
- nv = maybe_downcast_to_dtype(values, dtypes) # type: ignore[type-var]
+ nv = maybe_downcast_to_dtype(values, dtypes)
return [self.make_block(nv)]
# ndim > 1
@@ -623,11 +619,7 @@ def astype(self, dtype, copy: bool = False, errors: str = "raise"):
if values.dtype.kind in ["m", "M"]:
values = self.array_values()
- # error: Value of type variable "ArrayLike" of "astype_array_safe" cannot be
- # "Union[ndarray, ExtensionArray]"
- new_values = astype_array_safe(
- values, dtype, copy=copy, errors=errors # type: ignore[type-var]
- )
+ new_values = astype_array_safe(values, dtype, copy=copy, errors=errors)
newb = self.make_block(new_values)
if newb.shape != self.shape:
@@ -724,9 +716,7 @@ def replace(
values = self.values
- # error: Value of type variable "ArrayLike" of "mask_missing" cannot be
- # "Union[ndarray, ExtensionArray]"
- mask = missing.mask_missing(values, to_replace) # type: ignore[type-var]
+ mask = missing.mask_missing(values, to_replace)
if not mask.any():
# Note: we get here with test_replace_extension_other incorrectly
# bc _can_hold_element is incorrect.
@@ -753,9 +743,7 @@ def replace(
)
blk = self if inplace else self.copy()
- # error: Value of type variable "ArrayLike" of "putmask_inplace" cannot be
- # "Union[ndarray, ExtensionArray]"
- putmask_inplace(blk.values, mask, value) # type: ignore[type-var]
+ putmask_inplace(blk.values, mask, value)
blocks = blk.convert(numeric=False, copy=False)
return blocks
@@ -796,9 +784,7 @@ def _replace_regex(
rx = re.compile(to_replace)
new_values = self.values if inplace else self.values.copy()
- # error: Value of type variable "ArrayLike" of "replace_regex" cannot be
- # "Union[ndarray, ExtensionArray]"
- replace_regex(new_values, rx, value, mask) # type: ignore[type-var]
+ replace_regex(new_values, rx, value, mask)
block = self.make_block(new_values)
return [block]
@@ -835,26 +821,17 @@ def _replace_list(
# in order to avoid repeating the same computations
mask = ~isna(self.values)
masks = [
- # error: Value of type variable "ArrayLike" of "compare_or_regex_search"
- # cannot be "Union[ndarray, ExtensionArray]"
- compare_or_regex_search( # type: ignore[type-var]
- self.values, s[0], regex=regex, mask=mask
- )
+ compare_or_regex_search(self.values, s[0], regex=regex, mask=mask)
for s in pairs
]
else:
# GH#38086 faster if we know we dont need to check for regex
-            # error: Value of type variable "ArrayLike" of "mask_missing" cannot be
-            # "Union[ndarray, ExtensionArray]"
-            masks = [
-                missing.mask_missing(self.values, s[0]) # type: ignore[type-var]
-                for s in pairs
-            ]
+            masks = [missing.mask_missing(self.values, s[0]) for s in pairs]
-
- # error: Value of type variable "ArrayLike" of "extract_bool_array" cannot be
- # "Union[ndarray, ExtensionArray, bool]"
- masks = [extract_bool_array(x) for x in masks] # type: ignore[type-var]
+ # error: Argument 1 to "extract_bool_array" has incompatible type
+ # "Union[ExtensionArray, ndarray, bool]"; expected "Union[ExtensionArray,
+ # ndarray]"
+ masks = [extract_bool_array(x) for x in masks] # type: ignore[arg-type]
rb = [self if inplace else self.copy()]
for i, (src, dest) in enumerate(pairs):
@@ -912,9 +889,7 @@ def _replace_coerce(
nb = self.coerce_to_target_dtype(value)
if nb is self and not inplace:
nb = nb.copy()
- # error: Value of type variable "ArrayLike" of "putmask_inplace" cannot
- # be "Union[ndarray, ExtensionArray]"
- putmask_inplace(nb.values, mask, value) # type: ignore[type-var]
+ putmask_inplace(nb.values, mask, value)
return [nb]
else:
regex = should_use_regex(regex, to_replace)
@@ -987,9 +962,7 @@ def setitem(self, indexer, value):
# length checking
check_setitem_lengths(indexer, value, values)
- # error: Value of type variable "ArrayLike" of "is_exact_shape_match" cannot be
- # "Union[Any, ndarray, ExtensionArray]"
- exact_match = is_exact_shape_match(values, arr_value) # type: ignore[type-var]
+ exact_match = is_exact_shape_match(values, arr_value)
if is_empty_indexer(indexer, arr_value):
# GH#8669 empty indexers
@@ -1057,9 +1030,7 @@ def putmask(self, mask, new) -> List[Block]:
List[Block]
"""
orig_mask = mask
- # error: Value of type variable "ArrayLike" of "validate_putmask" cannot be
- # "Union[ndarray, ExtensionArray]"
- mask, noop = validate_putmask(self.values.T, mask) # type: ignore[type-var]
+ mask, noop = validate_putmask(self.values.T, mask)
assert not isinstance(new, (ABCIndex, ABCSeries, ABCDataFrame))
# if we are passed a scalar None, convert it here
@@ -1284,9 +1255,7 @@ def take_nd(
else:
allow_fill = True
- # error: Value of type variable "ArrayLike" of "take_nd" cannot be
- # "Union[ndarray, ExtensionArray]"
- new_values = algos.take_nd( # type: ignore[type-var]
+ new_values = algos.take_nd(
values, indexer, axis=axis, allow_fill=allow_fill, fill_value=fill_value
)
@@ -1350,9 +1319,7 @@ def where(self, other, cond, errors="raise", axis: int = 0) -> List[Block]:
if transpose:
values = values.T
- # error: Value of type variable "ArrayLike" of "validate_putmask" cannot be
- # "Union[ndarray, ExtensionArray]"
- icond, noop = validate_putmask(values, ~cond) # type: ignore[type-var]
+ icond, noop = validate_putmask(values, ~cond)
if is_valid_na_for_dtype(other, self.dtype) and not self.is_object:
other = self.fill_value
@@ -1463,11 +1430,7 @@ def quantile(
assert axis == 1 # only ever called this way
assert is_list_like(qs) # caller is responsible for this
- # error: Value of type variable "ArrayLike" of "quantile_compat" cannot be
- # "Union[ndarray, ExtensionArray]"
- result = quantile_compat( # type: ignore[type-var]
- self.values, qs, interpolation, axis
- )
+ result = quantile_compat(self.values, qs, interpolation, axis)
return new_block(result, placement=self.mgr_locs, ndim=2)
@@ -2353,15 +2316,10 @@ def extract_pandas_array(
"""
# For now, blocks should be backed by ndarrays when possible.
if isinstance(values, ABCPandasArray):
- # error: Incompatible types in assignment (expression has type "ndarray",
- # variable has type "ExtensionArray")
- values = values.to_numpy() # type: ignore[assignment]
+ values = values.to_numpy()
if ndim and ndim > 1:
# TODO(EA2D): special case not needed with 2D EAs
-
- # error: No overload variant of "atleast_2d" matches argument type
- # "PandasArray"
- values = np.atleast_2d(values) # type: ignore[call-overload]
+ values = np.atleast_2d(values)
if isinstance(dtype, PandasDtype):
dtype = dtype.numpy_dtype
@@ -2397,8 +2355,5 @@ def ensure_block_shape(values: ArrayLike, ndim: int = 1) -> ArrayLike:
# TODO(EA2D): https://github.com/pandas-dev/pandas/issues/23023
# block.shape is incorrect for "2D" ExtensionArrays
# We can't, and don't need to, reshape.
-
- # error: Incompatible types in assignment (expression has type "ndarray",
- # variable has type "ExtensionArray")
- values = np.asarray(values).reshape(1, -1) # type: ignore[assignment]
+ values = np.asarray(values).reshape(1, -1)
return values
diff --git a/pandas/core/internals/concat.py b/pandas/core/internals/concat.py
index 64777ef31ac6e..e2949eb227fbf 100644
--- a/pandas/core/internals/concat.py
+++ b/pandas/core/internals/concat.py
@@ -344,11 +344,7 @@ def get_reindexed_values(self, empty_dtype: DtypeObj, upcasted_na) -> ArrayLike:
if is_datetime64tz_dtype(empty_dtype):
# TODO(EA2D): special case unneeded with 2D EAs
i8values = np.full(self.shape[1], fill_value.value)
- # error: Incompatible return value type (got "DatetimeArray",
- # expected "ndarray")
- return DatetimeArray( # type: ignore[return-value]
- i8values, dtype=empty_dtype
- )
+ return DatetimeArray(i8values, dtype=empty_dtype)
elif is_extension_array_dtype(blk_dtype):
pass
elif is_extension_array_dtype(empty_dtype):
@@ -439,21 +435,14 @@ def _concatenate_join_units(
elif any(isinstance(t, ExtensionArray) for t in to_concat):
# concatting with at least one EA means we are concatting a single column
# the non-EA values are 2D arrays with shape (1, n)
-
- # error: Invalid index type "Tuple[int, slice]" for "ExtensionArray"; expected
- # type "Union[int, slice, ndarray]"
- to_concat = [
- t if isinstance(t, ExtensionArray) else t[0, :] # type: ignore[index]
- for t in to_concat
- ]
+ to_concat = [t if isinstance(t, ExtensionArray) else t[0, :] for t in to_concat]
concat_values = concat_compat(to_concat, axis=0, ea_compat_axis=True)
concat_values = ensure_block_shape(concat_values, 2)
else:
concat_values = concat_compat(to_concat, axis=concat_axis)
- # error: Incompatible return value type (got "ExtensionArray", expected "ndarray")
- return concat_values # type: ignore[return-value]
+ return concat_values
def _dtype_to_na_value(dtype: DtypeObj, has_none_blocks: bool):
diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py
index 0ea8c3eb994a3..63a437a91f6e4 100644
--- a/pandas/core/internals/construction.py
+++ b/pandas/core/internals/construction.py
@@ -319,7 +319,9 @@ def ndarray_to_mgr(
datelike_vals = maybe_squeeze_dt64tz(datelike_vals)
block_values = [datelike_vals]
else:
- block_values = [maybe_squeeze_dt64tz(values)]
+ # error: List item 0 has incompatible type "Union[ExtensionArray, ndarray]";
+ # expected "Block"
+ block_values = [maybe_squeeze_dt64tz(values)] # type: ignore[list-item]
return create_block_manager_from_blocks(block_values, [columns, index])
@@ -574,9 +576,10 @@ def extract_index(data) -> Index:
else:
index = ibase.default_index(lengths[0])
- # error: Value of type variable "AnyArrayLike" of "ensure_index" cannot be
- # "Optional[Index]"
- return ensure_index(index) # type: ignore[type-var]
+ # error: Argument 1 to "ensure_index" has incompatible type "Optional[Index]";
+ # expected "Union[Union[Union[ExtensionArray, ndarray], Index, Series],
+ # Sequence[Any]]"
+ return ensure_index(index) # type: ignore[arg-type]
def reorder_arrays(
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 48bb6d9bf247b..6bd3e37ae101e 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -314,11 +314,7 @@ def arrays(self) -> List[ArrayLike]:
Not to be used in actual code, and return value is not the same as the
ArrayManager method (list of 1D arrays vs iterator of 2D ndarrays / 1D EAs).
"""
- # error: List comprehension has incompatible type List[Union[ndarray,
- # ExtensionArray]]; expected List[ExtensionArray]
- # error: List comprehension has incompatible type List[Union[ndarray,
- # ExtensionArray]]; expected List[ndarray]
- return [blk.values for blk in self.blocks] # type: ignore[misc]
+ return [blk.values for blk in self.blocks]
def __getstate__(self):
block_values = [b.values for b in self.blocks]
@@ -1022,9 +1018,7 @@ def fast_xs(self, loc: int) -> ArrayLike:
if isinstance(dtype, ExtensionDtype):
result = dtype.construct_array_type()._from_sequence(result, dtype=dtype)
- # error: Incompatible return value type (got "ndarray", expected
- # "ExtensionArray")
- return result # type: ignore[return-value]
+ return result
def consolidate(self) -> BlockManager:
"""
@@ -1535,9 +1529,7 @@ def _equal_values(self: T, other: T) -> bool:
return False
left = self.blocks[0].values
right = other.blocks[0].values
- # error: Value of type variable "ArrayLike" of "array_equals" cannot be
- # "Union[ndarray, ExtensionArray]"
- return array_equals(left, right) # type: ignore[type-var]
+ return array_equals(left, right)
return blockwise_all(self, other, array_equals)
diff --git a/pandas/core/internals/ops.py b/pandas/core/internals/ops.py
index 103092ba37b70..88e70723517e3 100644
--- a/pandas/core/internals/ops.py
+++ b/pandas/core/internals/ops.py
@@ -131,12 +131,7 @@ def _get_same_shape_values(
# ExtensionArray]"; expected type "Union[int, slice, ndarray]"
rvals = rvals[0, :] # type: ignore[index]
- # error: Incompatible return value type (got "Tuple[Union[ndarray, ExtensionArray],
- # Union[ndarray, ExtensionArray]]", expected "Tuple[ExtensionArray,
- # ExtensionArray]")
- # error: Incompatible return value type (got "Tuple[Union[ndarray, ExtensionArray],
- # Union[ndarray, ExtensionArray]]", expected "Tuple[ndarray, ndarray]")
- return lvals, rvals # type: ignore[return-value]
+ return lvals, rvals
def blockwise_all(left: BlockManager, right: BlockManager, op) -> bool:
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index f17569d114389..a4d79284c45fd 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -302,7 +302,9 @@ def _get_values(
# with scalar fill_value. This guarantee is important for the
# np.where call below
assert is_scalar(fill_value)
- values = extract_array(values, extract_numpy=True)
+ # error: Incompatible types in assignment (expression has type "Union[Any,
+ # Union[ExtensionArray, ndarray]]", variable has type "ndarray")
+ values = extract_array(values, extract_numpy=True) # type: ignore[assignment]
mask = _maybe_get_mask(values, skipna, mask)
@@ -1161,7 +1163,9 @@ def nanskew(
>>> nanops.nanskew(s)
1.7320508075688787
"""
- values = extract_array(values, extract_numpy=True)
+ # error: Incompatible types in assignment (expression has type "Union[Any,
+ # Union[ExtensionArray, ndarray]]", variable has type "ndarray")
+ values = extract_array(values, extract_numpy=True) # type: ignore[assignment]
mask = _maybe_get_mask(values, skipna, mask)
if not is_float_dtype(values.dtype):
values = values.astype("f8")
@@ -1246,7 +1250,9 @@ def nankurt(
>>> nanops.nankurt(s)
-1.2892561983471076
"""
- values = extract_array(values, extract_numpy=True)
+ # error: Incompatible types in assignment (expression has type "Union[Any,
+ # Union[ExtensionArray, ndarray]]", variable has type "ndarray")
+ values = extract_array(values, extract_numpy=True) # type: ignore[assignment]
mask = _maybe_get_mask(values, skipna, mask)
if not is_float_dtype(values.dtype):
values = values.astype("f8")
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index a048217d6b1f0..f4de822262cf4 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -1847,10 +1847,12 @@ def _get_join_indexers(self):
def flip(xs) -> np.ndarray:
""" unlike np.transpose, this returns an array of tuples """
+ # error: Item "ndarray" of "Union[Any, Union[ExtensionArray, ndarray]]" has
+ # no attribute "_values_for_argsort"
xs = [
x
if not is_extension_array_dtype(x)
- else extract_array(x)._values_for_argsort()
+ else extract_array(x)._values_for_argsort() # type: ignore[union-attr]
for x in xs
]
labels = list(string.ascii_lowercase[: len(xs)])
@@ -2064,13 +2066,8 @@ def _factorize_keys(
if is_datetime64tz_dtype(lk.dtype) and is_datetime64tz_dtype(rk.dtype):
# Extract the ndarray (UTC-localized) values
# Note: we dont need the dtypes to match, as these can still be compared
-
- # error: Incompatible types in assignment (expression has type "ndarray",
- # variable has type "ExtensionArray")
- lk = cast("DatetimeArray", lk)._ndarray # type: ignore[assignment]
- # error: Incompatible types in assignment (expression has type "ndarray",
- # variable has type "ExtensionArray")
- rk = cast("DatetimeArray", rk)._ndarray # type: ignore[assignment]
+ lk = cast("DatetimeArray", lk)._ndarray
+ rk = cast("DatetimeArray", rk)._ndarray
elif (
is_categorical_dtype(lk.dtype)
@@ -2081,13 +2078,10 @@ def _factorize_keys(
assert isinstance(rk, Categorical)
# Cast rk to encoding so we can compare codes with lk
- # error: <nothing> has no attribute "_encode_with_my_categories"
- rk = lk._encode_with_my_categories(rk) # type: ignore[attr-defined]
+ rk = lk._encode_with_my_categories(rk)
- # error: <nothing> has no attribute "codes"
- lk = ensure_int64(lk.codes) # type: ignore[attr-defined]
- # error: "ndarray" has no attribute "codes"
- rk = ensure_int64(rk.codes) # type: ignore[attr-defined]
+ lk = ensure_int64(lk.codes)
+ rk = ensure_int64(rk.codes)
elif is_extension_array_dtype(lk.dtype) and is_dtype_equal(lk.dtype, rk.dtype):
# error: Incompatible types in assignment (expression has type "ndarray",
diff --git a/pandas/core/series.py b/pandas/core/series.py
index b92ada9537bd4..9feec7acae4c6 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -460,11 +460,14 @@ def _init_dict(self, data, index=None, dtype: Optional[Dtype] = None):
# Input is now list-like, so rely on "standard" construction:
# TODO: passing np.float64 to not break anything yet. See GH-17261
-
- # error: Value of type variable "ArrayLike" of
- # "create_series_with_explicit_dtype" cannot be "Tuple[Any, ...]"
- s = create_series_with_explicit_dtype( # type: ignore[type-var]
- values, index=keys, dtype=dtype, dtype_if_empty=np.float64
+ s = create_series_with_explicit_dtype(
+ # error: Argument "index" to "create_series_with_explicit_dtype" has
+ # incompatible type "Tuple[Any, ...]"; expected "Union[ExtensionArray,
+ # ndarray, Index, None]"
+ values,
+ index=keys, # type: ignore[arg-type]
+ dtype=dtype,
+ dtype_if_empty=np.float64,
)
# Now we just make sure the order is respected, if any
@@ -3003,10 +3006,13 @@ def combine(self, other, func, fill_value=None) -> Series:
# The function can return something of any type, so check
# if the type is compatible with the calling EA.
- # error: Value of type variable "ArrayLike" of
- # "maybe_cast_to_extension_array" cannot be "List[Any]"
- new_values = maybe_cast_to_extension_array(
- type(self._values), new_values # type: ignore[type-var]
+ # error: Incompatible types in assignment (expression has type
+ # "Union[ExtensionArray, ndarray]", variable has type "List[Any]")
+ new_values = maybe_cast_to_extension_array( # type: ignore[assignment]
+ # error: Argument 2 to "maybe_cast_to_extension_array" has incompatible
+ # type "List[Any]"; expected "Union[ExtensionArray, ndarray]"
+ type(self._values),
+ new_values, # type: ignore[arg-type]
)
return self._constructor(new_values, index=new_index, name=new_name)
diff --git a/pandas/core/tools/datetimes.py b/pandas/core/tools/datetimes.py
index f7bb3083b91a9..1e71069e5be4d 100644
--- a/pandas/core/tools/datetimes.py
+++ b/pandas/core/tools/datetimes.py
@@ -245,9 +245,9 @@ def _convert_and_box_cache(
from pandas import Series
result = Series(arg).map(cache_array)
- # error: Value of type variable "ArrayLike" of "_box_as_indexlike" cannot
- # be "Series"
- return _box_as_indexlike(result, utc=None, name=name) # type: ignore[type-var]
+ # error: Argument 1 to "_box_as_indexlike" has incompatible type "Series"; expected
+ # "Union[ExtensionArray, ndarray]"
+ return _box_as_indexlike(result, utc=None, name=name) # type: ignore[arg-type]
def _return_parsed_timezone_results(result: np.ndarray, timezones, tz, name) -> Index:
@@ -1081,9 +1081,9 @@ def calc_with_mask(carg, mask):
# string with NaN-like
try:
- # error: Value of type variable "AnyArrayLike" of "isin" cannot be
- # "Iterable[Any]"
- mask = ~algorithms.isin(arg, list(nat_strings)) # type: ignore[type-var]
+ # error: Argument 2 to "isin" has incompatible type "List[Any]"; expected
+ # "Union[Union[ExtensionArray, ndarray], Index, Series]"
+ mask = ~algorithms.isin(arg, list(nat_strings)) # type: ignore[arg-type]
return calc_with_mask(arg, mask)
except (ValueError, OverflowError, TypeError):
pass
diff --git a/pandas/core/util/hashing.py b/pandas/core/util/hashing.py
index 7d314d6a6fa1a..5e45d36e188a2 100644
--- a/pandas/core/util/hashing.py
+++ b/pandas/core/util/hashing.py
@@ -116,11 +116,9 @@ def hash_pandas_object(
return Series(hash_tuples(obj, encoding, hash_key), dtype="uint64", copy=False)
elif isinstance(obj, ABCIndex):
- # error: Value of type variable "ArrayLike" of "hash_array" cannot be
- # "Union[ExtensionArray, ndarray]"
- h = hash_array( # type: ignore[type-var]
- obj._values, encoding, hash_key, categorize
- ).astype("uint64", copy=False)
+ h = hash_array(obj._values, encoding, hash_key, categorize).astype(
+ "uint64", copy=False
+ )
# error: Incompatible types in assignment (expression has type "Series",
# variable has type "ndarray")
h = Series(h, index=obj, dtype="uint64", copy=False) # type: ignore[assignment]
@@ -297,17 +295,13 @@ def hash_array(
# hash values. (This check is above the complex check so that we don't ask
# numpy if categorical is a subdtype of complex, as it will choke).
if is_categorical_dtype(dtype):
- # error: Incompatible types in assignment (expression has type "Categorical",
- # variable has type "ndarray")
- vals = cast("Categorical", vals) # type: ignore[assignment]
- # error: Argument 1 to "_hash_categorical" has incompatible type "ndarray";
- # expected "Categorical"
- return _hash_categorical(vals, encoding, hash_key) # type: ignore[arg-type]
+ vals = cast("Categorical", vals)
+ return _hash_categorical(vals, encoding, hash_key)
elif is_extension_array_dtype(dtype):
- # error: Incompatible types in assignment (expression has type "ndarray",
- # variable has type "ExtensionArray")
- # error: "ndarray" has no attribute "_values_for_factorize"
- vals, _ = vals._values_for_factorize() # type: ignore[assignment,attr-defined]
+ # pandas/core/util/hashing.py:301: error: Item "ndarray" of
+ # "Union[ExtensionArray, ndarray]" has no attribute "_values_for_factorize"
+ # [union-attr]
+ vals, _ = vals._values_for_factorize() # type: ignore[union-attr]
# error: Argument 1 to "_hash_ndarray" has incompatible type "ExtensionArray";
# expected "ndarray"
diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index 17d05e81b82bb..52610478372dd 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -424,11 +424,7 @@ def hfunc(bvalues: ArrayLike) -> ArrayLike:
return getattr(res_values, "T", res_values)
def hfunc2d(values: ArrayLike) -> ArrayLike:
- # error: Incompatible types in assignment (expression has type "ndarray",
- # variable has type "ExtensionArray")
- # error: Argument 1 to "_prep_values" of "BaseWindow" has incompatible type
- # "ExtensionArray"; expected "Optional[ndarray]"
- values = self._prep_values(values) # type: ignore[assignment,arg-type]
+ values = self._prep_values(values)
return homogeneous_func(values)
if isinstance(mgr, ArrayManager) and self.axis == 1:
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index f54481f527d93..a768ec8ad4eb3 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -1561,7 +1561,9 @@ def _format_strings(self) -> List[str]:
formatter = self.formatter
if formatter is None:
- formatter = values._formatter(boxed=True)
+ # error: Item "ndarray" of "Union[Any, Union[ExtensionArray, ndarray]]" has
+ # no attribute "_formatter"
+ formatter = values._formatter(boxed=True) # type: ignore[union-attr]
if is_categorical_dtype(values.dtype):
# Categorical is special for now, so that we can preserve tzinfo
diff --git a/pandas/io/parsers/base_parser.py b/pandas/io/parsers/base_parser.py
index 4539ceabbb92f..8cfbae3cafc18 100644
--- a/pandas/io/parsers/base_parser.py
+++ b/pandas/io/parsers/base_parser.py
@@ -531,7 +531,11 @@ def _convert_to_ndarrays(
try:
values = lib.map_infer(values, conv_f)
except ValueError:
- mask = algorithms.isin(values, list(na_values)).view(np.uint8)
+ # error: Argument 2 to "isin" has incompatible type "List[Any]";
+ # expected "Union[Union[ExtensionArray, ndarray], Index, Series]"
+ mask = algorithms.isin(
+ values, list(na_values) # type: ignore[arg-type]
+ ).view(np.uint8)
values = lib.map_infer_mask(values, conv_f, mask)
cvals, na_count = self._infer_types(
@@ -657,7 +661,9 @@ def _infer_types(self, values, na_values, try_num_bool=True):
"""
na_count = 0
if issubclass(values.dtype.type, (np.number, np.bool_)):
- mask = algorithms.isin(values, list(na_values))
+ # error: Argument 2 to "isin" has incompatible type "List[Any]"; expected
+ # "Union[Union[ExtensionArray, ndarray], Index, Series]"
+ mask = algorithms.isin(values, list(na_values)) # type: ignore[arg-type]
# error: Incompatible types in assignment (expression has type
# "number[Any]", variable has type "int")
na_count = mask.sum() # type: ignore[assignment]
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 24bd2da6cc12e..02a723902271e 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -2362,8 +2362,9 @@ def _get_atom(cls, values: ArrayLike) -> Col:
Get an appropriately typed and shaped pytables.Col object for values.
"""
dtype = values.dtype
- # error: "ExtensionDtype" has no attribute "itemsize"
- itemsize = dtype.itemsize # type: ignore[attr-defined]
+ # error: Item "ExtensionDtype" of "Union[ExtensionDtype, dtype[Any]]" has no
+ # attribute "itemsize"
+ itemsize = dtype.itemsize # type: ignore[union-attr]
shape = values.shape
if values.ndim == 1:
@@ -4845,9 +4846,9 @@ def _convert_index(name: str, index: Index, encoding: str, errors: str) -> Index
assert isinstance(name, str)
index_name = index.name
- # error: Value of type variable "ArrayLike" of "_get_data_and_dtype_name"
- # cannot be "Index"
- converted, dtype_name = _get_data_and_dtype_name(index) # type: ignore[type-var]
+ # error: Argument 1 to "_get_data_and_dtype_name" has incompatible type "Index";
+ # expected "Union[ExtensionArray, ndarray]"
+ converted, dtype_name = _get_data_and_dtype_name(index) # type: ignore[arg-type]
kind = _dtype_to_kind(dtype_name)
atom = DataIndexableCol._get_atom(converted)
@@ -5169,26 +5170,20 @@ def _get_data_and_dtype_name(data: ArrayLike):
Convert the passed data into a storable form and a dtype string.
"""
if isinstance(data, Categorical):
- # error: Incompatible types in assignment (expression has type
- # "ndarray", variable has type "ExtensionArray")
- data = data.codes # type: ignore[assignment]
+ data = data.codes
# For datetime64tz we need to drop the TZ in tests TODO: why?
dtype_name = data.dtype.name.split("[")[0]
if data.dtype.kind in ["m", "M"]:
- # error: Incompatible types in assignment (expression has type "ndarray",
- # variable has type "ExtensionArray")
- data = np.asarray(data.view("i8")) # type: ignore[assignment]
+ data = np.asarray(data.view("i8"))
# TODO: we used to reshape for the dt64tz case, but no longer
# doing that doesn't seem to break anything. why?
elif isinstance(data, PeriodIndex):
data = data.asi8
- # error: Incompatible types in assignment (expression has type "ndarray", variable
- # has type "ExtensionArray")
- data = np.asarray(data) # type: ignore[assignment]
+ data = np.asarray(data)
return data, dtype_name
diff --git a/pandas/tests/extension/decimal/array.py b/pandas/tests/extension/decimal/array.py
index 9e1c517704743..58e5dc34d59d5 100644
--- a/pandas/tests/extension/decimal/array.py
+++ b/pandas/tests/extension/decimal/array.py
@@ -4,7 +4,10 @@
import numbers
import random
import sys
-from typing import Type
+from typing import (
+ Type,
+ Union,
+)
import numpy as np
@@ -173,7 +176,7 @@ def __setitem__(self, key, value):
def __len__(self) -> int:
return len(self._data)
- def __contains__(self, item) -> bool:
+ def __contains__(self, item) -> Union[bool, np.bool_]:
if not isinstance(item, decimal.Decimal):
return False
elif item.is_nan():
| follow-on from #36092
The first commit is the change in the alias.
Although this does get rid of many of the false positives, as well as removing ignores, it also results in some changes to the mypy error codes and some new errors where we might need a typevar (or, more likely, an overload)
The subsequent commits were an attempt to ensure that this PR only removed ignores.
The PR is starting to get a bit difficult to review imo, so I have stopped for now.
I could break off the first commit and then submit the subsequent commits as separate PRs if it makes review easier. | https://api.github.com/repos/pandas-dev/pandas/pulls/40379 | 2021-03-11T17:05:05Z | 2021-03-12T02:33:29Z | 2021-03-12T02:33:29Z | 2021-03-12T11:17:01Z |
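The error-code shift the body describes can be seen in miniature. Below is a hedged sketch (not pandas code; `ndarray` and `ExtensionArray` are stand-in classes) of why swapping a value-constrained `TypeVar` for a plain `Union` alias changes which mypy codes fire, matching the removed `[type-var]` comments and the new `[arg-type]`/`[union-attr]`/`[assignment]` comments in the diff above:

```python
from typing import TypeVar, Union


class ExtensionArray:
    """Stand-in for pandas' ExtensionArray."""


class ndarray:
    """Stand-in for numpy.ndarray."""


# Old style: a value-constrained TypeVar.  Passing a value typed as the
# *union* triggers exactly the comments being removed in this diff, e.g.
#   error: Value of type variable "ArrayLikeT" of "f" cannot be
#   "Union[ndarray, ExtensionArray]"  [type-var]
ArrayLikeT = TypeVar("ArrayLikeT", ndarray, ExtensionArray)


def f(values: ArrayLikeT) -> ArrayLikeT:
    return values


# New style: a plain Union alias.  The same call now type-checks, and the
# remaining genuine mismatches surface as [arg-type], [union-attr], or
# [assignment] instead of [type-var].
ArrayLike = Union[ndarray, ExtensionArray]


def g(values: ArrayLike) -> ArrayLike:
    return values


def caller(x: ArrayLike) -> ArrayLike:
    f(x)         # mypy, old style: [type-var] -- union can't bind the TypeVar
    return g(x)  # fine under the Union alias
```

The trade-off noted in the body: with the alias, a function can no longer promise "returns the same array type it was given", which is where a typevar or an `@overload` pair would still be needed.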
STYLE: Extending codespell to pandas/tests/ part3 38802 | diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 3b788cc2df227..71dedfaee8c04 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -16,7 +16,6 @@ repos:
- id: codespell
types_or: [python, rst, markdown]
files: ^(pandas|doc)/
- exclude: ^pandas/tests/
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v3.4.0
hooks:
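The hook change above drops the `exclude` line, so the `codespell` pre-commit hook now scans `pandas/tests/` as well. Codespell works from a dictionary of known misspellings; a minimal sketch of that technique (the dictionary entries below are illustrative, pulled from typos fixed in this very diff, and the case handling is simplified):

```python
import re

# Toy misspelling dictionary -- real codespell ships a far larger one.
TYPO_MAP = {
    "gettitem": "getitem",
    "succesful": "successful",
    "doesnt": "doesn't",
    "overriden": "overridden",
    "orgininal": "original",
    "dran": "drawn",
}

_WORD = re.compile(r"[A-Za-z']+")


def correct_line(line: str) -> str:
    """Replace each known misspelling in a line, leaving other tokens alone."""
    return _WORD.sub(
        lambda m: TYPO_MAP.get(m.group(0).lower(), m.group(0)), line
    )
```

With the config change applied, the same check can presumably be run locally via `pre-commit run codespell --all-files`.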
diff --git a/pandas/tests/indexing/multiindex/test_loc.py b/pandas/tests/indexing/multiindex/test_loc.py
index 78c704f2e43bb..96d2c246dd0ee 100644
--- a/pandas/tests/indexing/multiindex/test_loc.py
+++ b/pandas/tests/indexing/multiindex/test_loc.py
@@ -264,7 +264,7 @@ def test_loc_multiindex_incomplete(self):
tm.assert_series_equal(result, expected)
# GH 7400
- # multiindexer gettitem with list of indexers skips wrong element
+ # multiindexer getitem with list of indexers skips wrong element
s = Series(
np.arange(15, dtype="int64"),
MultiIndex.from_product([range(5), ["a", "b", "c"]]),
@@ -385,7 +385,7 @@ def test_multiindex_setitem_columns_enlarging(self, indexer, exp_value):
[
([], []), # empty ok
(["A"], slice(3)),
- (["A", "D"], []), # "D" isnt present -> raise
+ (["A", "D"], []), # "D" isn't present -> raise
(["D", "E"], []), # no values found -> raise
(["D"], []), # same, with single item list: GH 27148
(pd.IndexSlice[:, ["foo"]], slice(2, None, 3)),
@@ -531,7 +531,7 @@ def test_loc_period_string_indexing():
# GH 9892
a = pd.period_range("2013Q1", "2013Q4", freq="Q")
i = (1111, 2222, 3333)
- idx = MultiIndex.from_product((a, i), names=("Periode", "CVR"))
+ idx = MultiIndex.from_product((a, i), names=("Period", "CVR"))
df = DataFrame(
index=idx,
columns=(
@@ -552,7 +552,7 @@ def test_loc_period_string_indexing():
dtype=object,
name="OMS",
index=MultiIndex.from_tuples(
- [(pd.Period("2013Q1"), 1111)], names=["Periode", "CVR"]
+ [(pd.Period("2013Q1"), 1111)], names=["Period", "CVR"]
),
)
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/indexing/test_loc.py b/pandas/tests/indexing/test_loc.py
index 9dbce283d2a8f..bec442b7f48ac 100644
--- a/pandas/tests/indexing/test_loc.py
+++ b/pandas/tests/indexing/test_loc.py
@@ -1072,7 +1072,7 @@ def test_loc_setitem_str_to_small_float_conversion_type(self):
tm.assert_frame_equal(result, expected)
# assigning with loc/iloc attempts to set the values inplace, which
- # in this case is succesful
+ # in this case is successful
result.loc[result.index, "A"] = [float(x) for x in col_data]
expected = DataFrame(col_data, columns=["A"], dtype=float).astype(object)
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/internals/test_internals.py b/pandas/tests/internals/test_internals.py
index 2f5764ab5bd77..3c37d827c0778 100644
--- a/pandas/tests/internals/test_internals.py
+++ b/pandas/tests/internals/test_internals.py
@@ -829,8 +829,8 @@ def assert_slice_ok(mgr, axis, slobj):
elif mgr.ndim == 1 and axis == 0:
sliced = mgr.getitem_mgr(slobj)
else:
- # BlockManager doesnt support non-slice, SingleBlockManager
- # doesnt support axis > 0
+ # BlockManager doesn't support non-slice, SingleBlockManager
+ # doesn't support axis > 0
return
mat_slobj = (slice(None),) * axis + (slobj,)
diff --git a/pandas/tests/io/pytables/test_append.py b/pandas/tests/io/pytables/test_append.py
index 8c324d73a7e54..5a7d571e3a701 100644
--- a/pandas/tests/io/pytables/test_append.py
+++ b/pandas/tests/io/pytables/test_append.py
@@ -281,7 +281,7 @@ def test_append_frame_column_oriented(setup_path):
# column oriented
df = tm.makeTimeDataFrame()
- df.index = df.index._with_freq(None) # freq doesnt round-trip
+ df.index = df.index._with_freq(None) # freq doesn't round-trip
_maybe_remove(store, "df1")
store.append("df1", df.iloc[:, :2], axes=["columns"])
@@ -331,7 +331,7 @@ def test_append_with_different_block_ordering(setup_path):
store.append("df", df)
# test a different ordering but with more fields (like invalid
- # combinate)
+ # combinations)
with ensure_clean_store(setup_path) as store:
df = DataFrame(np.random.randn(10, 2), columns=list("AB"), dtype="float64")
diff --git a/pandas/tests/io/pytables/test_round_trip.py b/pandas/tests/io/pytables/test_round_trip.py
index 03d3d838a936c..97edc3cdffdf7 100644
--- a/pandas/tests/io/pytables/test_round_trip.py
+++ b/pandas/tests/io/pytables/test_round_trip.py
@@ -350,7 +350,7 @@ def test_timeseries_preepoch(setup_path):
try:
_check_roundtrip(ts, tm.assert_series_equal, path=setup_path)
except OverflowError:
- pytest.skip("known failer on some windows platforms")
+ pytest.skip("known failure on some windows platforms")
@pytest.mark.parametrize(
diff --git a/pandas/tests/io/pytables/test_select.py b/pandas/tests/io/pytables/test_select.py
index 8ad5dbc049380..0d6ee7d6efb85 100644
--- a/pandas/tests/io/pytables/test_select.py
+++ b/pandas/tests/io/pytables/test_select.py
@@ -663,13 +663,13 @@ def test_frame_select_complex(setup_path):
def test_frame_select_complex2(setup_path):
- with ensure_clean_path(["parms.hdf", "hist.hdf"]) as paths:
+ with ensure_clean_path(["params.hdf", "hist.hdf"]) as paths:
pp, hh = paths
# use non-trivial selection criteria
- parms = DataFrame({"A": [1, 1, 2, 2, 3]})
- parms.to_hdf(pp, "df", mode="w", format="table", data_columns=["A"])
+ params = DataFrame({"A": [1, 1, 2, 2, 3]})
+ params.to_hdf(pp, "df", mode="w", format="table", data_columns=["A"])
selection = read_hdf(pp, "df", where="A=[2,3]")
hist = DataFrame(
diff --git a/pandas/tests/io/pytables/test_timezones.py b/pandas/tests/io/pytables/test_timezones.py
index 0532ddd17cd19..4aa6f94ca38e9 100644
--- a/pandas/tests/io/pytables/test_timezones.py
+++ b/pandas/tests/io/pytables/test_timezones.py
@@ -137,7 +137,7 @@ def test_append_with_timezones_as_index(setup_path, gettz):
# GH#4098 example
dti = date_range("2000-1-1", periods=3, freq="H", tz=gettz("US/Eastern"))
- dti = dti._with_freq(None) # freq doesnt round-trip
+ dti = dti._with_freq(None) # freq doesn't round-trip
df = DataFrame({"A": Series(range(3), index=dti)})
@@ -217,7 +217,7 @@ def test_timezones_fixed_format_frame_non_empty(setup_path):
# index
rng = date_range("1/1/2000", "1/30/2000", tz="US/Eastern")
- rng = rng._with_freq(None) # freq doesnt round-trip
+ rng = rng._with_freq(None) # freq doesn't round-trip
df = DataFrame(np.random.randn(len(rng), 4), index=rng)
store["df"] = df
result = store["df"]
@@ -334,7 +334,7 @@ def test_dst_transitions(setup_path):
freq="H",
ambiguous="infer",
)
- times = times._with_freq(None) # freq doesnt round-trip
+ times = times._with_freq(None) # freq doesn't round-trip
for i in [times, times + pd.Timedelta("10min")]:
_maybe_remove(store, "df")
diff --git a/pandas/tests/io/xml/test_to_xml.py b/pandas/tests/io/xml/test_to_xml.py
index 97793ce8f65b8..89a3d4f2ae083 100644
--- a/pandas/tests/io/xml/test_to_xml.py
+++ b/pandas/tests/io/xml/test_to_xml.py
@@ -411,12 +411,12 @@ def test_attrs_cols_prefix(datapath, parser):
def test_attrs_unknown_column(parser):
with pytest.raises(KeyError, match=("no valid column")):
- geom_df.to_xml(attr_cols=["shape", "degreees", "sides"], parser=parser)
+ geom_df.to_xml(attr_cols=["shape", "degree", "sides"], parser=parser)
def test_attrs_wrong_type(parser):
with pytest.raises(TypeError, match=("is not a valid type for attr_cols")):
- geom_df.to_xml(attr_cols='"shape", "degreees", "sides"', parser=parser)
+ geom_df.to_xml(attr_cols='"shape", "degree", "sides"', parser=parser)
# ELEM_COLS
@@ -453,12 +453,12 @@ def test_elems_cols_nan_output(datapath, parser):
def test_elems_unknown_column(parser):
with pytest.raises(KeyError, match=("no valid column")):
- geom_df.to_xml(elem_cols=["shape", "degreees", "sides"], parser=parser)
+ geom_df.to_xml(elem_cols=["shape", "degree", "sides"], parser=parser)
def test_elems_wrong_type(parser):
with pytest.raises(TypeError, match=("is not a valid type for elem_cols")):
- geom_df.to_xml(elem_cols='"shape", "degreees", "sides"', parser=parser)
+ geom_df.to_xml(elem_cols='"shape", "degree", "sides"', parser=parser)
def test_elems_and_attrs_cols(datapath, parser):
diff --git a/pandas/tests/libs/test_hashtable.py b/pandas/tests/libs/test_hashtable.py
index a28e2f22560eb..04a8aeefbfcd6 100644
--- a/pandas/tests/libs/test_hashtable.py
+++ b/pandas/tests/libs/test_hashtable.py
@@ -170,7 +170,7 @@ def test_no_reallocation(self, table_type, dtype):
n_buckets_start = preallocated_table.get_state()["n_buckets"]
preallocated_table.map_locations(keys)
n_buckets_end = preallocated_table.get_state()["n_buckets"]
- # orgininal number of buckets was enough:
+ # original number of buckets was enough:
assert n_buckets_start == n_buckets_end
# check with clean table (not too much preallocated)
clean_table = table_type()
@@ -219,7 +219,7 @@ def test_no_reallocation_StringHashTable():
n_buckets_start = preallocated_table.get_state()["n_buckets"]
preallocated_table.map_locations(keys)
n_buckets_end = preallocated_table.get_state()["n_buckets"]
- # orgininal number of buckets was enough:
+ # original number of buckets was enough:
assert n_buckets_start == n_buckets_end
# check with clean table (not too much preallocated)
clean_table = ht.StringHashTable()
diff --git a/pandas/tests/plotting/frame/test_frame.py b/pandas/tests/plotting/frame/test_frame.py
index 3c53a0ed2500c..bed60be169e57 100644
--- a/pandas/tests/plotting/frame/test_frame.py
+++ b/pandas/tests/plotting/frame/test_frame.py
@@ -2208,7 +2208,7 @@ def test_xlabel_ylabel_dataframe_single_plot(
assert ax.get_xlabel() == old_label
assert ax.get_ylabel() == ""
- # old xlabel will be overriden and assigned ylabel will be used as ylabel
+ # old xlabel will be overridden and assigned ylabel will be used as ylabel
ax = df.plot(kind=kind, ylabel=new_label, xlabel=new_label)
assert ax.get_ylabel() == str(new_label)
assert ax.get_xlabel() == str(new_label)
diff --git a/pandas/tests/plotting/frame/test_frame_subplots.py b/pandas/tests/plotting/frame/test_frame_subplots.py
index 0e25fb5f4c01f..fa4a132001be5 100644
--- a/pandas/tests/plotting/frame/test_frame_subplots.py
+++ b/pandas/tests/plotting/frame/test_frame_subplots.py
@@ -522,7 +522,7 @@ def test_xlabel_ylabel_dataframe_subplots(
assert all(ax.get_ylabel() == "" for ax in axes)
assert all(ax.get_xlabel() == old_label for ax in axes)
- # old xlabel will be overriden and assigned ylabel will be used as ylabel
+ # old xlabel will be overridden and assigned ylabel will be used as ylabel
axes = df.plot(kind=kind, ylabel=new_label, xlabel=new_label, subplots=True)
assert all(ax.get_ylabel() == str(new_label) for ax in axes)
assert all(ax.get_xlabel() == str(new_label) for ax in axes)
diff --git a/pandas/tests/plotting/test_hist_method.py b/pandas/tests/plotting/test_hist_method.py
index a6e3ba71e94ab..96fdcebc9b8f7 100644
--- a/pandas/tests/plotting/test_hist_method.py
+++ b/pandas/tests/plotting/test_hist_method.py
@@ -533,7 +533,7 @@ def test_hist_secondary_legend(self):
_, ax = self.plt.subplots()
ax = df["a"].plot.hist(legend=True, ax=ax)
df["b"].plot.hist(ax=ax, legend=True, secondary_y=True)
- # both legends are dran on left ax
+ # both legends are drawn on left ax
# left and right axis must be visible
self._check_legend_labels(ax, labels=["a", "b (right)"])
assert ax.get_yaxis().get_visible()
diff --git a/pandas/tests/plotting/test_series.py b/pandas/tests/plotting/test_series.py
index 59b0cc99d94fb..812aae8d97151 100644
--- a/pandas/tests/plotting/test_series.py
+++ b/pandas/tests/plotting/test_series.py
@@ -381,7 +381,7 @@ def test_df_series_secondary_legend(self):
_, ax = self.plt.subplots()
ax = df.plot(ax=ax)
s.plot(legend=True, secondary_y=True, ax=ax)
- # both legends are dran on left ax
+ # both legends are drawn on left ax
# left and right axis must be visible
self._check_legend_labels(ax, labels=["a", "b", "c", "x (right)"])
assert ax.get_yaxis().get_visible()
@@ -392,7 +392,7 @@ def test_df_series_secondary_legend(self):
_, ax = self.plt.subplots()
ax = df.plot(ax=ax)
s.plot(ax=ax, legend=True, secondary_y=True)
- # both legends are dran on left ax
+ # both legends are drawn on left ax
# left and right axis must be visible
self._check_legend_labels(ax, labels=["a", "b", "c", "x (right)"])
assert ax.get_yaxis().get_visible()
@@ -403,7 +403,7 @@ def test_df_series_secondary_legend(self):
_, ax = self.plt.subplots()
ax = df.plot(secondary_y=True, ax=ax)
s.plot(legend=True, secondary_y=True, ax=ax)
- # both legends are dran on left ax
+ # both legends are drawn on left ax
# left axis must be invisible and right axis must be visible
expected = ["a (right)", "b (right)", "c (right)", "x (right)"]
self._check_legend_labels(ax.left_ax, labels=expected)
@@ -415,7 +415,7 @@ def test_df_series_secondary_legend(self):
_, ax = self.plt.subplots()
ax = df.plot(secondary_y=True, ax=ax)
s.plot(ax=ax, legend=True, secondary_y=True)
- # both legends are dran on left ax
+ # both legends are drawn on left ax
# left axis must be invisible and right axis must be visible
expected = ["a (right)", "b (right)", "c (right)", "x (right)"]
self._check_legend_labels(ax.left_ax, expected)
@@ -427,7 +427,7 @@ def test_df_series_secondary_legend(self):
_, ax = self.plt.subplots()
ax = df.plot(secondary_y=True, mark_right=False, ax=ax)
s.plot(ax=ax, legend=True, secondary_y=True)
- # both legends are dran on left ax
+ # both legends are drawn on left ax
# left axis must be invisible and right axis must be visible
expected = ["a", "b", "c", "x (right)"]
self._check_legend_labels(ax.left_ax, expected)
@@ -798,7 +798,7 @@ def test_xlabel_ylabel_series(self, kind, index_name, old_label, new_label):
assert ax.get_ylabel() == ""
assert ax.get_xlabel() == old_label
- # old xlabel will be overriden and assigned ylabel will be used as ylabel
+ # old xlabel will be overridden and assigned ylabel will be used as ylabel
ax = ser.plot(kind=kind, ylabel=new_label, xlabel=new_label)
assert ax.get_ylabel() == new_label
assert ax.get_xlabel() == new_label
diff --git a/pandas/tests/reshape/concat/test_datetimes.py b/pandas/tests/reshape/concat/test_datetimes.py
index 332c3c8f30562..2b8233388d328 100644
--- a/pandas/tests/reshape/concat/test_datetimes.py
+++ b/pandas/tests/reshape/concat/test_datetimes.py
@@ -457,7 +457,7 @@ def test_concat_tz_not_aligned(self):
)
def test_concat_tz_NaT(self, t1):
# GH#22796
- # Concating tz-aware multicolumn DataFrames
+ # Concatenating tz-aware multicolumn DataFrames
ts1 = Timestamp(t1, tz="UTC")
ts2 = Timestamp("2015-01-01", tz="UTC")
ts3 = Timestamp("2015-01-01", tz="UTC")
diff --git a/pandas/tests/reshape/merge/test_merge_ordered.py b/pandas/tests/reshape/merge/test_merge_ordered.py
index 4a4af789d540b..0268801c66e1d 100644
--- a/pandas/tests/reshape/merge/test_merge_ordered.py
+++ b/pandas/tests/reshape/merge/test_merge_ordered.py
@@ -183,19 +183,19 @@ def test_list_type_by(self, left, right, on, left_by, right_by, expected):
def test_left_by_length_equals_to_right_shape0(self):
# GH 38166
- left = DataFrame([["g", "h", 1], ["g", "h", 3]], columns=list("GHT"))
- right = DataFrame([[2, 1]], columns=list("TE"))
- result = merge_ordered(left, right, on="T", left_by=["G", "H"])
+ left = DataFrame([["g", "h", 1], ["g", "h", 3]], columns=list("GHE"))
+ right = DataFrame([[2, 1]], columns=list("ET"))
+ result = merge_ordered(left, right, on="E", left_by=["G", "H"])
expected = DataFrame(
- {"G": ["g"] * 3, "H": ["h"] * 3, "T": [1, 2, 3], "E": [np.nan, 1.0, np.nan]}
+ {"G": ["g"] * 3, "H": ["h"] * 3, "E": [1, 2, 3], "T": [np.nan, 1.0, np.nan]}
)
tm.assert_frame_equal(result, expected)
def test_elements_not_in_by_but_in_df(self):
# GH 38167
- left = DataFrame([["g", "h", 1], ["g", "h", 3]], columns=list("GHT"))
- right = DataFrame([[2, 1]], columns=list("TE"))
+ left = DataFrame([["g", "h", 1], ["g", "h", 3]], columns=list("GHE"))
+ right = DataFrame([[2, 1]], columns=list("ET"))
msg = r"\{'h'\} not found in left columns"
with pytest.raises(KeyError, match=msg):
- merge_ordered(left, right, on="T", left_by=["G", "h"])
+ merge_ordered(left, right, on="E", left_by=["G", "h"])
diff --git a/pandas/tests/scalar/test_nat.py b/pandas/tests/scalar/test_nat.py
index 9ccdd0261de0e..96aea4da9fac5 100644
--- a/pandas/tests/scalar/test_nat.py
+++ b/pandas/tests/scalar/test_nat.py
@@ -527,7 +527,7 @@ def test_to_numpy_alias():
pytest.param(
Timedelta(0).to_timedelta64(),
marks=pytest.mark.xfail(
- reason="td64 doesnt return NotImplemented, see numpy#17017"
+ reason="td64 doesn't return NotImplemented, see numpy#17017"
),
),
Timestamp(0),
@@ -535,7 +535,7 @@ def test_to_numpy_alias():
pytest.param(
Timestamp(0).to_datetime64(),
marks=pytest.mark.xfail(
- reason="dt64 doesnt return NotImplemented, see numpy#17017"
+ reason="dt64 doesn't return NotImplemented, see numpy#17017"
),
),
Timestamp(0).tz_localize("UTC"),
diff --git a/pandas/tests/series/methods/test_convert_dtypes.py b/pandas/tests/series/methods/test_convert_dtypes.py
index f7b49c187c794..b68c9c9b0e529 100644
--- a/pandas/tests/series/methods/test_convert_dtypes.py
+++ b/pandas/tests/series/methods/test_convert_dtypes.py
@@ -12,7 +12,7 @@
# test Series, the default dtype for the expected result (which is valid
# for most cases), and the specific cases where the result deviates from
# this default. Those overrides are defined as a dict with (keyword, val) as
-# dictionary key. In case of multiple items, the last override takes precendence.
+# dictionary key. In case of multiple items, the last override takes precedence.
test_cases = [
(
# data
diff --git a/pandas/tests/series/methods/test_nlargest.py b/pandas/tests/series/methods/test_nlargest.py
index b1aa09f387a13..3af06145b9fcd 100644
--- a/pandas/tests/series/methods/test_nlargest.py
+++ b/pandas/tests/series/methods/test_nlargest.py
@@ -98,7 +98,7 @@ class TestSeriesNLargestNSmallest:
)
def test_nlargest_error(self, r):
dt = r.dtype
- msg = f"Cannot use method 'n(larg|small)est' with dtype {dt}"
+ msg = f"Cannot use method 'n(largest|smallest)' with dtype {dt}"
args = 2, len(r), 0, -1
methods = r.nlargest, r.nsmallest
for method, arg in product(methods, args):
diff --git a/pandas/tests/series/methods/test_to_csv.py b/pandas/tests/series/methods/test_to_csv.py
index a22e125e68cba..9684546112078 100644
--- a/pandas/tests/series/methods/test_to_csv.py
+++ b/pandas/tests/series/methods/test_to_csv.py
@@ -25,7 +25,7 @@ def read_csv(self, path, **kwargs):
return out
def test_from_csv(self, datetime_series, string_series):
- # freq doesnt round-trip
+ # freq doesn't round-trip
datetime_series.index = datetime_series.index._with_freq(None)
with tm.ensure_clean() as path:
diff --git a/pandas/tests/strings/test_cat.py b/pandas/tests/strings/test_cat.py
index cdaccf0dad8e6..05883248592ca 100644
--- a/pandas/tests/strings/test_cat.py
+++ b/pandas/tests/strings/test_cat.py
@@ -29,7 +29,7 @@ def test_str_cat_name(index_or_series, other):
def test_str_cat(index_or_series):
box = index_or_series
# test_cat above tests "str_cat" from ndarray;
- # here testing "str.cat" from Series/Indext to ndarray/list
+ # here testing "str.cat" from Series/Index to ndarray/list
s = box(["a", "a", "b", "b", "c", np.nan])
# single array
diff --git a/pandas/tests/tools/test_to_datetime.py b/pandas/tests/tools/test_to_datetime.py
index 13768a2cd7a61..999a04a81406e 100644
--- a/pandas/tests/tools/test_to_datetime.py
+++ b/pandas/tests/tools/test_to_datetime.py
@@ -515,7 +515,7 @@ def test_to_datetime_YYYYMMDD(self):
assert actual == datetime(2008, 1, 15)
def test_to_datetime_unparseable_ignore(self):
- # unparseable
+ # unparsable
s = "Month 1, 1999"
assert to_datetime(s, errors="ignore") == s
@@ -2469,7 +2469,7 @@ def test_empty_string_datetime_coerce__format():
with pytest.raises(ValueError, match="does not match format"):
result = to_datetime(td, format=format, errors="raise")
- # don't raise an expection in case no format is given
+ # don't raise an exception in case no format is given
result = to_datetime(td, errors="raise")
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/tseries/offsets/test_offsets.py b/pandas/tests/tseries/offsets/test_offsets.py
index d36bea72908a3..3eb3892279832 100644
--- a/pandas/tests/tseries/offsets/test_offsets.py
+++ b/pandas/tests/tseries/offsets/test_offsets.py
@@ -64,7 +64,7 @@
class TestCommon(Base):
- # exected value created by Base._get_offset
+ # expected value created by Base._get_offset
# are applied to 2011/01/01 09:00 (Saturday)
# used for .apply and .rollforward
expecteds = {
diff --git a/pandas/tests/tslibs/test_fields.py b/pandas/tests/tslibs/test_fields.py
index a45fcab56759f..e5fe998923f8d 100644
--- a/pandas/tests/tslibs/test_fields.py
+++ b/pandas/tests/tslibs/test_fields.py
@@ -7,7 +7,7 @@
def test_fields_readonly():
# https://github.com/vaexio/vaex/issues/357
- # fields functions should't raise when we pass read-only data
+ # fields functions shouldn't raise when we pass read-only data
dtindex = np.arange(5, dtype=np.int64) * 10 ** 9 * 3600 * 24 * 32
dtindex.flags.writeable = False
| - [x] closes #38802
- [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/40372 | 2021-03-11T13:11:55Z | 2021-03-11T19:03:30Z | 2021-03-11T19:03:30Z | 2021-03-11T20:20:21Z |
CI: mypy fixup | diff --git a/pandas/core/internals/api.py b/pandas/core/internals/api.py
index e6ea2c642650d..be0828f5303b8 100644
--- a/pandas/core/internals/api.py
+++ b/pandas/core/internals/api.py
@@ -39,7 +39,11 @@ def make_block(
- Block.make_block_same_class
- Block.__init__
"""
- values, dtype = extract_pandas_array(values, dtype, ndim)
+ # error: Argument 2 to "extract_pandas_array" has incompatible type
+ # "Union[ExtensionDtype, str, dtype[Any], Type[str], Type[float], Type[int],
+ # Type[complex], Type[bool], Type[object], None]"; expected "Union[dtype[Any],
+ # ExtensionDtype, None]"
+ values, dtype = extract_pandas_array(values, dtype, ndim) # type: ignore[arg-type]
if klass is None:
dtype = dtype or values.dtype
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 3d2279f0b309b..ab23c67b52bcd 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -2353,10 +2353,15 @@ def extract_pandas_array(
"""
# For now, blocks should be backed by ndarrays when possible.
if isinstance(values, ABCPandasArray):
- values = values.to_numpy()
+ # error: Incompatible types in assignment (expression has type "ndarray",
+ # variable has type "ExtensionArray")
+ values = values.to_numpy() # type: ignore[assignment]
if ndim and ndim > 1:
# TODO(EA2D): special case not needed with 2D EAs
- values = np.atleast_2d(values)
+
+ # error: No overload variant of "atleast_2d" matches argument type
+ # "PandasArray"
+ values = np.atleast_2d(values) # type: ignore[call-overload]
if isinstance(dtype, PandasDtype):
dtype = dtype.numpy_dtype
| https://api.github.com/repos/pandas-dev/pandas/pulls/40370 | 2021-03-11T10:13:34Z | 2021-03-11T11:11:54Z | 2021-03-11T11:11:54Z | 2021-03-11T11:14:53Z | |
INT: Use Index._getitem_slice in ArrayManager | diff --git a/pandas/core/internals/array_manager.py b/pandas/core/internals/array_manager.py
index 4b60cec55a2ba..e7431bee50374 100644
--- a/pandas/core/internals/array_manager.py
+++ b/pandas/core/internals/array_manager.py
@@ -793,12 +793,10 @@ def get_slice(self, slobj: slice, axis: int = 0) -> ArrayManager:
arrays = self.arrays[slobj]
new_axes = list(self._axes)
- new_axes[axis] = new_axes[axis][slobj]
+ new_axes[axis] = new_axes[axis]._getitem_slice(slobj)
return type(self)(arrays, new_axes, verify_integrity=False)
- getitem_mgr = get_slice
-
def fast_xs(self, loc: int) -> ArrayLike:
"""
Return the array corresponding to `frame.iloc[loc]`.
@@ -1235,7 +1233,12 @@ def get_slice(self, slobj: slice, axis: int = 0) -> SingleArrayManager:
raise IndexError("Requested axis not found in manager")
new_array = self.array[slobj]
- new_index = self.index[slobj]
+ new_index = self.index._getitem_slice(slobj)
+ return type(self)([new_array], [new_index], verify_integrity=False)
+
+ def getitem_mgr(self, indexer) -> SingleArrayManager:
+ new_array = self.array[indexer]
+ new_index = self.index[indexer]
return type(self)([new_array], [new_index])
def apply(self, func, **kwargs):
| Follow-up on https://github.com/pandas-dev/pandas/pull/40262 to actually use it in ArrayManager as well | https://api.github.com/repos/pandas-dev/pandas/pulls/40369 | 2021-03-11T09:54:03Z | 2021-03-12T07:22:09Z | 2021-03-12T07:22:09Z | 2021-03-12T07:22:13Z |
CLN: reorganise/group variables for readability | diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index cc5f3164385cb..609472bb2629f 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -350,12 +350,6 @@ def _translate(self):
Convert the DataFrame in `self.data` and the attrs from `_build_styles`
into a dictionary of {head, body, uuid, cellstyle}.
"""
- table_styles = self.table_styles or []
- caption = self.caption
- ctx = self.ctx
- hidden_index = self.hidden_index
- hidden_columns = self.hidden_columns
- uuid = self.uuid
ROW_HEADING_CLASS = "row_heading"
COL_HEADING_CLASS = "col_heading"
INDEX_NAME_CLASS = "index_name"
@@ -364,12 +358,22 @@ def _translate(self):
BLANK_CLASS = "blank"
BLANK_VALUE = ""
+ # mapping variables
+ ctx = self.ctx # td css styles from apply() and applymap()
+ cell_context = self.cell_context # td css classes from set_td_classes()
+ cellstyle_map: DefaultDict[Tuple[CSSPair, ...], List[str]] = defaultdict(list)
+
+ # copied attributes
+ table_styles = self.table_styles or []
+ caption = self.caption
+ hidden_index = self.hidden_index
+ hidden_columns = self.hidden_columns
+ uuid = self.uuid
+
# for sparsifying a MultiIndex
idx_lengths = _get_level_lengths(self.index)
col_lengths = _get_level_lengths(self.columns, hidden_columns)
- cell_context = self.cell_context
-
n_rlvls = self.data.index.nlevels
n_clvls = self.data.columns.nlevels
rlabels = self.data.index.tolist()
@@ -381,10 +385,7 @@ def _translate(self):
clabels = [[x] for x in clabels]
clabels = list(zip(*clabels))
- cellstyle_map: DefaultDict[Tuple[CSSPair, ...], List[str]] = defaultdict(list)
-
head = []
-
for r in range(n_clvls):
# Blank for Index columns...
row_es = [
| There have been a couple of comments in recent issues about the complexity of `_translate` and its readability.
Small commit here to group the setup variables by type and function, and to add some comments.
| https://api.github.com/repos/pandas-dev/pandas/pulls/40368 | 2021-03-11T08:36:27Z | 2021-03-15T00:04:19Z | 2021-03-15T00:04:19Z | 2021-03-15T14:30:16Z |
BUG: Post-merge fixup, raise on dimension-expanding | diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 447148b4ef0b7..f28a87d519f5f 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -1659,6 +1659,9 @@ def getitem_mgr(self, indexer) -> SingleBlockManager:
# similar to get_slice, but not restricted to slice indexer
blk = self._block
array = blk._slice(indexer)
+ if array.ndim > blk.values.ndim:
+ # This will be caught by Series._get_values
+ raise ValueError("dimension-expanding indexing not allowed")
block = blk.make_block_same_class(array, placement=slice(0, len(array)))
return type(self)(block, self.index[indexer])
| Looks like a couple of individually-correct things got merged that together are causing failures on master. This should fix it. | https://api.github.com/repos/pandas-dev/pandas/pulls/40365 | 2021-03-11T04:14:35Z | 2021-03-11T09:38:49Z | 2021-03-11T09:38:49Z | 2021-03-11T15:08:16Z |
BUG: better exception on invalid slicing on CategoricalIndex | diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index 62941a23c6459..73258e09522a2 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -529,13 +529,6 @@ def _convert_list_indexer(self, keyarr):
return self.get_indexer_for(keyarr)
- @doc(Index._maybe_cast_slice_bound)
- def _maybe_cast_slice_bound(self, label, side: str, kind):
- if kind == "loc":
- return label
-
- return super()._maybe_cast_slice_bound(label, side, kind)
-
# --------------------------------------------------------------------
def _is_comparable_dtype(self, dtype):
diff --git a/pandas/tests/indexing/test_categorical.py b/pandas/tests/indexing/test_categorical.py
index f104587ebbded..11943d353e8c8 100644
--- a/pandas/tests/indexing/test_categorical.py
+++ b/pandas/tests/indexing/test_categorical.py
@@ -469,7 +469,11 @@ def test_ix_categorical_index_non_unique(self):
def test_loc_slice(self):
# GH9748
- with pytest.raises(KeyError, match="1"):
+ msg = (
+ "cannot do slice indexing on CategoricalIndex with these "
+ r"indexers \[1\] of type int"
+ )
+ with pytest.raises(TypeError, match=msg):
self.df.loc[1:5]
result = self.df.loc["b":"c"]
After fixing the CategoricalIndex behavior (first commit), the "kind" keyword is no longer needed in _maybe_cast_slice_bound, so I removed it.
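A minimal sketch of the behavior change (my own example, not from the PR; it mirrors the fixture used in the updated `test_loc_slice` and assumes a pandas build with this change merged):

```python
import pandas as pd

# same shape as the fixture used in test_loc_slice
df = pd.DataFrame(
    {"A": range(6)},
    index=pd.CategoricalIndex(list("aabbca"), categories=list("cab")),
)

# label-based slicing with category labels still works:
# "b" occupies positions 2-3 and "c" position 4, so this selects 3 rows
assert len(df.loc["b":"c"]) == 3

# slicing with a label of the wrong type now raises an informative
# TypeError instead of a bare KeyError("1")
try:
    df.loc[1:5]
except TypeError:
    pass
else:
    raise AssertionError("expected TypeError")
```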
After this, the keyword is no longer needed in get_slice_bound, slice_locs, or slice_indexer either, but those are technically public, so I didn't rip them out. | https://api.github.com/repos/pandas-dev/pandas/pulls/40364 | 2021-03-11T04:04:55Z | 2021-03-20T01:25:12Z | 2021-03-20T01:25:12Z | 2021-03-20T02:01:27Z |
DEPR: DataFrame(MaskedRecords) | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index 56a5412d4ecfc..fceca59f0f30d 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -369,6 +369,7 @@ Deprecations
- Deprecated casting ``datetime.date`` objects to ``datetime64`` when used as ``fill_value`` in :meth:`DataFrame.unstack`, :meth:`DataFrame.shift`, :meth:`Series.shift`, and :meth:`DataFrame.reindex`, pass ``pd.Timestamp(dateobj)`` instead (:issue:`39767`)
- Deprecated :meth:`.Styler.set_na_rep` and :meth:`.Styler.set_precision` in favour of :meth:`.Styler.format` with ``na_rep`` and ``precision`` as existing and new input arguments respectively (:issue:`40134`)
- Deprecated allowing partial failure in :meth:`Series.transform` and :meth:`DataFrame.transform` when ``func`` is list-like or dict-like; will raise if any function fails on a column in a future version (:issue:`40211`)
+- Deprecated support for ``np.ma.mrecords.MaskedRecords`` in the :class:`DataFrame` constructor, pass ``{name: data[name] for name in data.dtype.names}`` instead (:issue:`40363`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index de28c04ca0793..2759301ac88c9 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -606,6 +606,13 @@ def __init__(
copy,
typ=manager,
)
+ warnings.warn(
+ "Support for MaskedRecords is deprecated and will be "
+ "removed in a future version. Pass "
+ "{name: data[name] for name in data.dtype.names} instead.",
+ FutureWarning,
+ stacklevel=2,
+ )
# a masked array
else:
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index 285d6286931af..b76a44b3c86be 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -993,10 +993,19 @@ def test_constructor_maskedrecarray_dtype(self):
np.ma.zeros(5, dtype=[("date", "<f8"), ("price", "<f8")]), mask=[False] * 5
)
data = data.view(mrecords.mrecarray)
- result = DataFrame(data, dtype=int)
+
+ with tm.assert_produces_warning(FutureWarning):
+ # Support for MaskedRecords deprecated
+ result = DataFrame(data, dtype=int)
+
expected = DataFrame(np.zeros((5, 2), dtype=int), columns=["date", "price"])
tm.assert_frame_equal(result, expected)
+ # GH#40363 check that the alternative suggested in the deprecation
+ # warning behaves as expected
+ alt = DataFrame({name: data[name] for name in data.dtype.names}, dtype=int)
+ tm.assert_frame_equal(result, alt)
+
def test_constructor_mrecarray(self):
# Ensure mrecarray produces frame identical to dict of masked arrays
# from GH3479
@@ -1024,18 +1033,24 @@ def test_constructor_mrecarray(self):
# fill the comb
comb = {k: (v.filled() if hasattr(v, "filled") else v) for k, v in comb}
+ with tm.assert_produces_warning(FutureWarning):
+ # Support for MaskedRecords deprecated
+ result = DataFrame(mrecs)
expected = DataFrame(comb, columns=names)
- result = DataFrame(mrecs)
assert_fr_equal(result, expected)
# specify columns
+ with tm.assert_produces_warning(FutureWarning):
+ # Support for MaskedRecords deprecated
+ result = DataFrame(mrecs, columns=names[::-1])
expected = DataFrame(comb, columns=names[::-1])
- result = DataFrame(mrecs, columns=names[::-1])
assert_fr_equal(result, expected)
# specify index
+ with tm.assert_produces_warning(FutureWarning):
+ # Support for MaskedRecords deprecated
+ result = DataFrame(mrecs, index=[1, 2])
expected = DataFrame(comb, columns=names, index=[1, 2])
- result = DataFrame(mrecs, index=[1, 2])
assert_fr_equal(result, expected)
def test_constructor_corner_shape(self):
| - [x] closes #38399
- [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/40363 | 2021-03-11T03:33:28Z | 2021-03-15T00:09:58Z | 2021-03-15T00:09:58Z | 2021-03-15T00:55:16Z |
PERF/REF: require BlockPlacement in Block.__init__ | diff --git a/pandas/core/arrays/timedeltas.py b/pandas/core/arrays/timedeltas.py
index f86d8755acb22..f3889ff360aa8 100644
--- a/pandas/core/arrays/timedeltas.py
+++ b/pandas/core/arrays/timedeltas.py
@@ -42,7 +42,6 @@
from pandas.core.dtypes.common import (
DT64NS_DTYPE,
TD64NS_DTYPE,
- is_categorical_dtype,
is_dtype_equal,
is_float_dtype,
is_integer_dtype,
@@ -53,7 +52,10 @@
pandas_dtype,
)
from pandas.core.dtypes.dtypes import DatetimeTZDtype
-from pandas.core.dtypes.generic import ABCMultiIndex
+from pandas.core.dtypes.generic import (
+ ABCCategorical,
+ ABCMultiIndex,
+)
from pandas.core.dtypes.missing import isna
from pandas.core import nanops
@@ -970,7 +972,7 @@ def sequence_to_td64ns(
elif not isinstance(data, (np.ndarray, ExtensionArray)):
# GH#24539 e.g. xarray, dask object
data = np.asarray(data)
- elif is_categorical_dtype(data.dtype):
+ elif isinstance(data, ABCCategorical):
data = data.categories.take(data.codes, fill_value=NaT)._values
copy = False
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index ac16d2609eb13..3fd1ebaca19f0 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -173,7 +173,7 @@ def _simple_new(
obj._mgr_locs = placement
return obj
- def __init__(self, values, placement, ndim: int):
+ def __init__(self, values, placement: BlockPlacement, ndim: int):
"""
Parameters
----------
@@ -183,8 +183,10 @@ def __init__(self, values, placement, ndim: int):
ndim : int
1 for SingleBlockManager/Series, 2 for BlockManager/DataFrame
"""
+ assert isinstance(ndim, int)
+ assert isinstance(placement, BlockPlacement)
self.ndim = ndim
- self.mgr_locs = placement
+ self._mgr_locs = placement
self.values = values
@property
@@ -263,14 +265,12 @@ def fill_value(self):
return np.nan
@property
- def mgr_locs(self):
+ def mgr_locs(self) -> BlockPlacement:
return self._mgr_locs
@mgr_locs.setter
- def mgr_locs(self, new_mgr_locs):
- if not isinstance(new_mgr_locs, libinternals.BlockPlacement):
- new_mgr_locs = libinternals.BlockPlacement(new_mgr_locs)
-
+ def mgr_locs(self, new_mgr_locs: BlockPlacement):
+ assert isinstance(new_mgr_locs, BlockPlacement)
self._mgr_locs = new_mgr_locs
@final
@@ -289,7 +289,9 @@ def make_block(self, values, placement=None) -> Block:
return new_block(values, placement=placement, ndim=self.ndim)
@final
- def make_block_same_class(self, values, placement=None) -> Block:
+ def make_block_same_class(
+ self, values, placement: Optional[BlockPlacement] = None
+ ) -> Block:
""" Wrap given values in a block of same type as self. """
if placement is None:
placement = self._mgr_locs
@@ -1221,7 +1223,11 @@ def func(yvalues: np.ndarray) -> np.ndarray:
return self._maybe_downcast(blocks, downcast)
def take_nd(
- self, indexer, axis: int, new_mgr_locs=None, fill_value=lib.no_default
+ self,
+ indexer,
+ axis: int,
+ new_mgr_locs: Optional[BlockPlacement] = None,
+ fill_value=lib.no_default,
) -> Block:
"""
Take values according to indexer and return them as a block.bb
@@ -1569,7 +1575,11 @@ def to_native_types(self, na_rep="nan", quoting=None, **kwargs):
return self.make_block(new_values)
def take_nd(
- self, indexer, axis: int = 0, new_mgr_locs=None, fill_value=lib.no_default
+ self,
+ indexer,
+ axis: int = 0,
+ new_mgr_locs: Optional[BlockPlacement] = None,
+ fill_value=lib.no_default,
) -> Block:
"""
Take values according to indexer and return them as a block.
@@ -2258,8 +2268,8 @@ def check_ndim(values, placement: BlockPlacement, ndim: int):
def extract_pandas_array(
- values: ArrayLike, dtype: Optional[DtypeObj], ndim: int
-) -> Tuple[ArrayLike, Optional[DtypeObj]]:
+ values: Union[np.ndarray, ExtensionArray], dtype: Optional[DtypeObj], ndim: int
+) -> Tuple[Union[np.ndarray, ExtensionArray], Optional[DtypeObj]]:
"""
Ensure that we don't allow PandasArray / PandasDtype in internals.
"""
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 0e502c08cb8f2..ea264da4c7b5f 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -240,7 +240,8 @@ def make_empty(self: T, axes=None) -> T:
assert isinstance(self, SingleBlockManager) # for mypy
blk = self.blocks[0]
arr = blk.values[:0]
- nb = blk.make_block_same_class(arr, placement=slice(0, 0))
+ bp = BlockPlacement(slice(0, 0))
+ nb = blk.make_block_same_class(arr, placement=bp)
blocks = [nb]
else:
blocks = []
@@ -786,7 +787,7 @@ def _combine(
new_blocks: List[Block] = []
for b in blocks:
b = b.copy(deep=copy)
- b.mgr_locs = inv_indexer[b.mgr_locs.indexer]
+ b.mgr_locs = BlockPlacement(inv_indexer[b.mgr_locs.indexer])
new_blocks.append(b)
axes = list(self.axes)
@@ -1053,8 +1054,9 @@ def iget(self, i: int) -> SingleBlockManager:
values = block.iget(self.blklocs[i])
# shortcut for select a single-dim from a 2-dim BM
+ bp = BlockPlacement(slice(0, len(values)))
values = maybe_coerce_values(values)
- nb = type(block)(values, placement=slice(0, len(values)), ndim=1)
+ nb = type(block)(values, placement=bp, ndim=1)
return SingleBlockManager(nb, self.axes[1])
def iget_values(self, i: int) -> ArrayLike:
@@ -1266,7 +1268,7 @@ def insert(
else:
new_mgr_locs = blk.mgr_locs.as_array.copy()
new_mgr_locs[new_mgr_locs >= loc] += 1
- blk.mgr_locs = new_mgr_locs
+ blk.mgr_locs = BlockPlacement(new_mgr_locs)
# Accessing public blklocs ensures the public versions are initialized
if loc == self.blklocs.shape[0]:
@@ -1415,11 +1417,12 @@ def _slice_take_blocks_ax0(
# all(np.shares_memory(nb.values, blk.values) for nb in blocks)
return blocks
else:
+ bp = BlockPlacement(slice(0, sllen))
return [
blk.take_nd(
slobj,
axis=0,
- new_mgr_locs=slice(0, sllen),
+ new_mgr_locs=bp,
fill_value=fill_value,
)
]
@@ -1456,7 +1459,7 @@ def _slice_take_blocks_ax0(
# item.
for mgr_loc in mgr_locs:
newblk = blk.copy(deep=False)
- newblk.mgr_locs = slice(mgr_loc, mgr_loc + 1)
+ newblk.mgr_locs = BlockPlacement(slice(mgr_loc, mgr_loc + 1))
blocks.append(newblk)
else:
@@ -1655,12 +1658,15 @@ def getitem_mgr(self, indexer) -> SingleBlockManager:
# similar to get_slice, but not restricted to slice indexer
blk = self._block
array = blk._slice(indexer)
- if array.ndim > blk.values.ndim:
+ if array.ndim > 1:
# This will be caught by Series._get_values
raise ValueError("dimension-expanding indexing not allowed")
- block = blk.make_block_same_class(array, placement=slice(0, len(array)))
- return type(self)(block, self.index[indexer])
+ bp = BlockPlacement(slice(0, len(array)))
+ block = blk.make_block_same_class(array, placement=bp)
+
+ new_idx = self.index[indexer]
+ return type(self)(block, new_idx)
def get_slice(self, slobj: slice, axis: int = 0) -> SingleBlockManager:
assert isinstance(slobj, slice), type(slobj)
@@ -1669,7 +1675,8 @@ def get_slice(self, slobj: slice, axis: int = 0) -> SingleBlockManager:
blk = self._block
array = blk._slice(slobj)
- block = blk.make_block_same_class(array, placement=slice(0, len(array)))
+ bp = BlockPlacement(slice(0, len(array)))
+ block = blk.make_block_same_class(array, placement=bp)
new_index = self.index._getitem_slice(slobj)
return type(self)(block, new_index)
@@ -1733,7 +1740,7 @@ def set_values(self, values: ArrayLike):
valid for the current Block/SingleBlockManager (length, dtype, etc).
"""
self.blocks[0].values = values
- self.blocks[0]._mgr_locs = libinternals.BlockPlacement(slice(len(values)))
+ self.blocks[0]._mgr_locs = BlockPlacement(slice(len(values)))
# --------------------------------------------------------------------
@@ -1985,7 +1992,8 @@ def _merge_blocks(
new_values = new_values[argsort]
new_mgr_locs = new_mgr_locs[argsort]
- return [new_block(new_values, placement=new_mgr_locs, ndim=2)]
+ bp = BlockPlacement(new_mgr_locs)
+ return [new_block(new_values, placement=bp, ndim=2)]
# can't consolidate --> no merge
return blocks
diff --git a/pandas/core/internals/ops.py b/pandas/core/internals/ops.py
index 88e70723517e3..df5cd66060659 100644
--- a/pandas/core/internals/ops.py
+++ b/pandas/core/internals/ops.py
@@ -87,7 +87,7 @@ def _reset_block_mgr_locs(nbs: List[Block], locs):
Reset mgr_locs to correspond to our original DataFrame.
"""
for nb in nbs:
- nblocs = locs.as_array[nb.mgr_locs.indexer]
+ nblocs = locs[nb.mgr_locs.indexer]
nb.mgr_locs = nblocs
# Assertions are disabled for performance, but should hold:
# assert len(nblocs) == nb.shape[0], (len(nblocs), nb.shape)
diff --git a/pandas/tests/extension/test_external_block.py b/pandas/tests/extension/test_external_block.py
index 693d0645c9519..9360294f5a3f7 100644
--- a/pandas/tests/extension/test_external_block.py
+++ b/pandas/tests/extension/test_external_block.py
@@ -1,6 +1,8 @@
import numpy as np
import pytest
+from pandas._libs.internals import BlockPlacement
+
import pandas as pd
from pandas.core.internals import BlockManager
from pandas.core.internals.blocks import ExtensionBlock
@@ -17,7 +19,8 @@ def df():
df1 = pd.DataFrame({"a": [1, 2, 3]})
blocks = df1._mgr.blocks
values = np.arange(3, dtype="int64")
- custom_block = CustomBlock(values, placement=slice(1, 2), ndim=2)
+ bp = BlockPlacement(slice(1, 2))
+ custom_block = CustomBlock(values, placement=bp, ndim=2)
blocks = blocks + (custom_block,)
block_manager = BlockManager(blocks, [pd.Index(["a", "b"]), df1.index])
return pd.DataFrame(block_manager)
diff --git a/pandas/tests/internals/test_internals.py b/pandas/tests/internals/test_internals.py
index a8c9a7a22ecdc..ba85ff1a044d6 100644
--- a/pandas/tests/internals/test_internals.py
+++ b/pandas/tests/internals/test_internals.py
@@ -327,8 +327,8 @@ def test_duplicate_ref_loc_failure(self):
axes, blocks = tmp_mgr.axes, tmp_mgr.blocks
- blocks[0].mgr_locs = np.array([0])
- blocks[1].mgr_locs = np.array([0])
+ blocks[0].mgr_locs = BlockPlacement(np.array([0]))
+ blocks[1].mgr_locs = BlockPlacement(np.array([0]))
# test trying to create block manager with overlapping ref locs
@@ -338,8 +338,8 @@ def test_duplicate_ref_loc_failure(self):
mgr = BlockManager(blocks, axes)
mgr._rebuild_blknos_and_blklocs()
- blocks[0].mgr_locs = np.array([0])
- blocks[1].mgr_locs = np.array([1])
+ blocks[0].mgr_locs = BlockPlacement(np.array([0]))
+ blocks[1].mgr_locs = BlockPlacement(np.array([1]))
mgr = BlockManager(blocks, axes)
mgr.iget(1)
| https://api.github.com/repos/pandas-dev/pandas/pulls/40361 | 2021-03-10T23:00:43Z | 2021-03-16T19:52:45Z | 2021-03-16T19:52:45Z | 2021-03-16T20:08:13Z | |
BUG: IntervalArray.insert cast on failure | diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index f192a34514390..3ee550620e73a 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -1483,6 +1483,28 @@ def putmask(self, mask: np.ndarray, value) -> None:
self._left.putmask(mask, value_left)
self._right.putmask(mask, value_right)
+ def insert(self: IntervalArrayT, loc: int, item: Interval) -> IntervalArrayT:
+ """
+ Return a new IntervalArray inserting new item at location. Follows
+ Python list.append semantics for negative values. Only Interval
+ objects and NA can be inserted into an IntervalIndex
+
+ Parameters
+ ----------
+ loc : int
+ item : Interval
+
+ Returns
+ -------
+ IntervalArray
+ """
+ left_insert, right_insert = self._validate_scalar(item)
+
+ new_left = self.left.insert(loc, left_insert)
+ new_right = self.right.insert(loc, right_insert)
+
+ return self._shallow_copy(new_left, new_right)
+
def delete(self: IntervalArrayT, loc) -> IntervalArrayT:
if isinstance(self._left, np.ndarray):
new_left = np.delete(self._left, loc)
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index ad512b8393166..10fdc642ba7ce 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -873,11 +873,14 @@ def insert(self, loc, item):
-------
IntervalIndex
"""
- left_insert, right_insert = self._data._validate_scalar(item)
+ try:
+ result = self._data.insert(loc, item)
+ except (ValueError, TypeError):
+ # e.g trying to insert a string
+ dtype, _ = infer_dtype_from_scalar(item, pandas_dtype=True)
+ dtype = find_common_type([self.dtype, dtype])
+ return self.astype(dtype).insert(loc, item)
- new_left = self.left.insert(loc, left_insert)
- new_right = self.right.insert(loc, right_insert)
- result = self._data._shallow_copy(new_left, new_right)
return type(self)._simple_new(result, name=self.name)
# --------------------------------------------------------------------
diff --git a/pandas/tests/indexes/interval/test_interval.py b/pandas/tests/indexes/interval/test_interval.py
index 02ef3cb0e2afb..cd61fcaa835a4 100644
--- a/pandas/tests/indexes/interval/test_interval.py
+++ b/pandas/tests/indexes/interval/test_interval.py
@@ -194,17 +194,24 @@ def test_insert(self, data):
tm.assert_index_equal(result, expected)
# invalid type
+ res = data.insert(1, "foo")
+ expected = data.astype(object).insert(1, "foo")
+ tm.assert_index_equal(res, expected)
+
msg = "can only insert Interval objects and NA into an IntervalArray"
with pytest.raises(TypeError, match=msg):
- data.insert(1, "foo")
+ data._data.insert(1, "foo")
# invalid closed
msg = "'value.closed' is 'left', expected 'right'."
for closed in {"left", "right", "both", "neither"} - {item.closed}:
msg = f"'value.closed' is '{closed}', expected '{item.closed}'."
+ bad_item = Interval(item.left, item.right, closed=closed)
+ res = data.insert(1, bad_item)
+ expected = data.astype(object).insert(1, bad_item)
+ tm.assert_index_equal(res, expected)
with pytest.raises(ValueError, match=msg):
- bad_item = Interval(item.left, item.right, closed=closed)
- data.insert(1, bad_item)
+ data._data.insert(1, bad_item)
# GH 18295 (test missing)
na_idx = IntervalIndex([np.nan], closed=data.closed)
@@ -214,13 +221,15 @@ def test_insert(self, data):
tm.assert_index_equal(result, expected)
if data.left.dtype.kind not in ["m", "M"]:
- # trying to insert pd.NaT into a numeric-dtyped Index should cast/raise
+ # trying to insert pd.NaT into a numeric-dtyped Index should cast
+ expected = data.astype(object).insert(1, pd.NaT)
+
msg = "can only insert Interval objects and NA into an IntervalArray"
with pytest.raises(TypeError, match=msg):
- result = data.insert(1, pd.NaT)
- else:
- result = data.insert(1, pd.NaT)
- tm.assert_index_equal(result, expected)
+ data._data.insert(1, pd.NaT)
+
+ result = data.insert(1, pd.NaT)
+ tm.assert_index_equal(result, expected)
def test_is_unique_interval(self, closed):
"""
diff --git a/pandas/tests/indexing/test_loc.py b/pandas/tests/indexing/test_loc.py
index 9dbce283d2a8f..c7e9b3eb5b852 100644
--- a/pandas/tests/indexing/test_loc.py
+++ b/pandas/tests/indexing/test_loc.py
@@ -23,7 +23,6 @@
DatetimeIndex,
Index,
IndexSlice,
- IntervalIndex,
MultiIndex,
Period,
Series,
@@ -1680,9 +1679,6 @@ def test_loc_setitem_with_expansion_nonunique_index(self, index, request):
# GH#40096
if not len(index):
return
- if isinstance(index, IntervalIndex):
- mark = pytest.mark.xfail(reason="IntervalIndex raises")
- request.node.add_marker(mark)
index = index.repeat(2) # ensure non-unique
N = len(index)
| - [ ] closes #xxxx
- [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [ ] whatsnew entry
We fixed this for DTI/TDI/PI in #39068. We also did CategoricalIndex recently; not sure where, off the top of my head. | https://api.github.com/repos/pandas-dev/pandas/pulls/40359 | 2021-03-10T21:40:39Z | 2021-03-15T00:52:27Z | 2021-03-15T00:52:27Z | 2021-03-15T00:57:46Z |
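The try/except-and-cast pattern this PR introduces in `IntervalIndex.insert` can be sketched in plain Python. Note this is an illustrative sketch only: `strict_insert` and the string cast are stand-ins for `IntervalArray.insert` and the object-dtype fallback, not pandas API.

```python
def strict_insert(values, loc, item):
    # Stand-in for IntervalArray.insert: accept only items of the same
    # kind as the existing elements, raise otherwise.
    if values and type(item) is not type(values[0]):
        raise TypeError("can only insert matching items")
    out = list(values)
    out.insert(loc, item)
    return out


def insert_with_cast(values, loc, item):
    # The Index-level fallback the PR adds: try the strict path, and on
    # TypeError/ValueError cast everything to a common representation
    # (object dtype in pandas, plain strings here) before retrying.
    try:
        return strict_insert(values, loc, item)
    except (TypeError, ValueError):
        return strict_insert([str(v) for v in values], loc, str(item))


print(insert_with_cast([1, 2, 3], 1, "foo"))  # -> ['1', 'foo', '2', '3']
```

This mirrors the behavior change tested above: inserting a string into an interval-dtyped index now casts to object instead of raising, while the array-level `_data.insert` stays strict.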
DOC: Proper alignment of column names #40355 | diff --git a/pandas/core/strings/accessor.py b/pandas/core/strings/accessor.py
index 32a99c0a020b2..7ac2e5a988cd8 100644
--- a/pandas/core/strings/accessor.py
+++ b/pandas/core/strings/accessor.py
@@ -1913,13 +1913,13 @@ def get_dummies(self, sep="|"):
Examples
--------
>>> pd.Series(['a|b', 'a', 'a|c']).str.get_dummies()
- a b c
+ a b c
0 1 1 0
1 1 0 0
2 1 0 1
>>> pd.Series(['a|b', np.nan, 'a|c']).str.get_dummies()
- a b c
+ a b c
0 1 1 0
1 0 0 0
2 1 0 1
| - [x] closes #40355
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/40358 | 2021-03-10T20:23:23Z | 2021-03-29T03:46:50Z | 2021-03-29T03:46:49Z | 2021-05-03T01:12:28Z |
BUG: copying from Jupyter table to excel misaligns structure | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index 56a5412d4ecfc..4d5e2efa8c763 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -623,6 +623,7 @@ Other
- Bug in :class:`Styler` where rendered HTML was missing a column class identifier for certain header cells (:issue:`39716`)
- Bug in :meth:`Styler.background_gradient` where text-color was not determined correctly (:issue:`39888`)
- Bug in :class:`Styler` where multiple elements in CSS-selectors were not correctly added to ``table_styles`` (:issue:`39942`)
+- Bug in :class:`.Styler` where copying from Jupyter dropped top left cell and misaligned headers (:issue:`12147`)
- Bug in :meth:`DataFrame.equals`, :meth:`Series.equals`, :meth:`Index.equals` with object-dtype containing ``np.datetime64("NaT")`` or ``np.timedelta64("NaT")`` (:issue:`39650`)
- Bug in :func:`pandas.util.show_versions` where console JSON output was not proper JSON (:issue:`39701`)
diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index cc5f3164385cb..bd112d6ac3329 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -362,7 +362,7 @@ def _translate(self):
DATA_CLASS = "data"
BLANK_CLASS = "blank"
- BLANK_VALUE = ""
+ BLANK_VALUE = " "
# for sparsifying a MultiIndex
idx_lengths = _get_level_lengths(self.index)
diff --git a/pandas/tests/io/formats/style/test_style.py b/pandas/tests/io/formats/style/test_style.py
index 977b92e217868..a377074e5484e 100644
--- a/pandas/tests/io/formats/style/test_style.py
+++ b/pandas/tests/io/formats/style/test_style.py
@@ -38,6 +38,7 @@ def h(x, foo="bar"):
{"f": [1.0, 2.0], "o": ["a", "b"], "c": pd.Categorical(["a", "b"])}
),
]
+ self.blank_value = " "
def test_init_non_pandas(self):
msg = "``data`` must be a Series or DataFrame"
@@ -255,9 +256,9 @@ def test_empty_index_name_doesnt_display(self):
{
"class": "blank level0",
"type": "th",
- "value": "",
+ "value": self.blank_value,
"is_visible": True,
- "display_value": "",
+ "display_value": self.blank_value,
},
{
"class": "col_heading level0 col0",
@@ -295,8 +296,8 @@ def test_index_name(self):
{
"class": "blank level0",
"type": "th",
- "value": "",
- "display_value": "",
+ "value": self.blank_value,
+ "display_value": self.blank_value,
"is_visible": True,
},
{
@@ -316,8 +317,8 @@ def test_index_name(self):
],
[
{"class": "index_name level0", "type": "th", "value": "A"},
- {"class": "blank col0", "type": "th", "value": ""},
- {"class": "blank col1", "type": "th", "value": ""},
+ {"class": "blank col0", "type": "th", "value": self.blank_value},
+ {"class": "blank col1", "type": "th", "value": self.blank_value},
],
]
@@ -333,15 +334,15 @@ def test_multiindex_name(self):
{
"class": "blank",
"type": "th",
- "value": "",
- "display_value": "",
+ "value": self.blank_value,
+ "display_value": self.blank_value,
"is_visible": True,
},
{
"class": "blank level0",
"type": "th",
- "value": "",
- "display_value": "",
+ "value": self.blank_value,
+ "display_value": self.blank_value,
"is_visible": True,
},
{
@@ -355,7 +356,7 @@ def test_multiindex_name(self):
[
{"class": "index_name level0", "type": "th", "value": "A"},
{"class": "index_name level1", "type": "th", "value": "B"},
- {"class": "blank col0", "type": "th", "value": ""},
+ {"class": "blank col0", "type": "th", "value": self.blank_value},
],
]
@@ -970,16 +971,16 @@ def test_mi_sparse(self):
{
"type": "th",
"class": "blank",
- "value": "",
+ "value": self.blank_value,
"is_visible": True,
- "display_value": "",
+ "display_value": self.blank_value,
},
{
"type": "th",
"class": "blank level0",
- "value": "",
+ "value": self.blank_value,
"is_visible": True,
- "display_value": "",
+ "display_value": self.blank_value,
},
{
"type": "th",
@@ -1013,7 +1014,7 @@ def test_mi_sparse_index_names(self):
expected = [
{"class": "index_name level0", "value": "idx_level_0", "type": "th"},
{"class": "index_name level1", "value": "idx_level_1", "type": "th"},
- {"class": "blank col0", "value": "", "type": "th"},
+ {"class": "blank col0", "value": self.blank_value, "type": "th"},
]
assert head == expected
@@ -1034,8 +1035,8 @@ def test_mi_sparse_column_names(self):
expected = [
{
"class": "blank",
- "value": "",
- "display_value": "",
+ "value": self.blank_value,
+ "display_value": self.blank_value,
"type": "th",
"is_visible": True,
},
@@ -1343,7 +1344,7 @@ def test_w3_html_format(self):
<caption>A comprehensive test</caption>
<thead>
<tr>
- <th class="blank level0" ></th>
+ <th class="blank level0" > </th>
<th class="col_heading level0 col0" >A</th>
</tr>
</thead>
| - [x] closes #12147
When copying a table from Jupyter to Excel, the column header row is shifted to the left because the blank top-left cell has no value to copy.

this PR puts a non-breaking space in the empty cell so that structure is preserved.

My Excel above is quite old, but this effect is worse on more recent Excel versions, since I believe they merge cells that have a rowspan (in the MultiIndex column headers) or a colspan (in the MultiIndex index labels). With columns merging in the wrong place it is a bit of a nuisance. | https://api.github.com/repos/pandas-dev/pandas/pulls/40356 | 2021-03-10T19:05:37Z | 2021-03-21T21:08:13Z | 2021-03-21T21:08:13Z | 2021-03-21T22:50:06Z |
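A minimal illustration of the change: the leading blank header cell is rendered with a non-breaking space instead of an empty string, so spreadsheet programs paste it as a real cell and keep the header row aligned. The `render_header` helper below is a simplified sketch, not the actual `Styler` rendering code:

```python
def render_header(columns, blank_value="&nbsp;"):
    # Simplified sketch of the header row Styler emits: the blank cell
    # above the index carries &nbsp; (GH#12147) rather than "", so that
    # copying the HTML table into Excel does not drop the cell.
    cells = [f'<th class="blank level0" >{blank_value}</th>']
    cells += [
        f'<th class="col_heading level0 col{i}" >{col}</th>'
        for i, col in enumerate(columns)
    ]
    return "<tr>" + "".join(cells) + "</tr>"


print(render_header(["A", "B"]))
```

With `blank_value=""` the first `<th>` is empty and pasting into a spreadsheet collapses it, shifting the column labels one cell to the left relative to the data.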
Fix formatting in extractall example outputs | diff --git a/pandas/core/strings/accessor.py b/pandas/core/strings/accessor.py
index 7d6a2bf1d776d..4195b011b6cfb 100644
--- a/pandas/core/strings/accessor.py
+++ b/pandas/core/strings/accessor.py
@@ -2350,7 +2350,7 @@ def extractall(self, pat, flags=0):
0
match
A 0 1
- 1 2
+ 1 2
B 0 1
Capture group names are used for column names of the result.
@@ -2359,7 +2359,7 @@ def extractall(self, pat, flags=0):
digit
match
A 0 1
- 1 2
+ 1 2
B 0 1
A pattern with two groups will return a DataFrame with two columns.
@@ -2368,7 +2368,7 @@ def extractall(self, pat, flags=0):
letter digit
match
A 0 a 1
- 1 a 2
+ 1 a 2
B 0 b 1
Optional groups that do not match are NaN in the result.
@@ -2377,7 +2377,7 @@ def extractall(self, pat, flags=0):
letter digit
match
A 0 a 1
- 1 a 2
+ 1 a 2
B 0 b 1
C 0 NaN 1
"""
| I noticed that these doctests lost some formatting in a refactor. It's a little surprising the tests were still passing; presumably doctest is configured to ignore leading whitespace? | https://api.github.com/repos/pandas-dev/pandas/pulls/40354 | 2021-03-10T17:21:53Z | 2021-03-11T02:18:24Z | 2021-03-11T02:18:24Z | 2021-03-11T16:14:39Z |
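The doctest behavior the author guesses at can be checked directly with the standard library. The assumption here is that pandas runs its doctests with `NORMALIZE_WHITESPACE` enabled, which would collapse runs of whitespace and mask the misaligned indentation:

```python
import doctest

checker = doctest.OutputChecker()
want = "   match\nA 0      1\n        1      2\n"  # mis-indented expected output
got = "   match\nA 0      1\n    1      2\n"       # what actually prints

# A strict comparison fails on the extra leading spaces...
assert not checker.check_output(want, got, optionflags=0)

# ...but NORMALIZE_WHITESPACE treats all whitespace runs as equal, so
# the misaligned doctests in this PR would still have passed.
assert checker.check_output(want, got, optionflags=doctest.NORMALIZE_WHITESPACE)
print("whitespace normalization masks the misalignment")
```

That makes the fix purely cosmetic for the test runner, but it still matters for readers of the rendered docstrings.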
PERF: repeated slicing along index in groupby | diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py
index f5dc95590c963..aa16dc9a22f4e 100644
--- a/pandas/core/arrays/sparse/array.py
+++ b/pandas/core/arrays/sparse/array.py
@@ -817,6 +817,11 @@ def value_counts(self, dropna: bool = True):
def __getitem__(self, key):
if isinstance(key, tuple):
+ if len(key) > 1:
+ if key[0] is Ellipsis:
+ key = key[1:]
+ elif key[-1] is Ellipsis:
+ key = key[:-1]
if len(key) > 1:
raise IndexError("too many indices for array.")
key = key[0]
diff --git a/pandas/core/arrays/string_arrow.py b/pandas/core/arrays/string_arrow.py
index 6f7badd3c2cd2..75d9fcd3b4965 100644
--- a/pandas/core/arrays/string_arrow.py
+++ b/pandas/core/arrays/string_arrow.py
@@ -354,6 +354,15 @@ def __getitem__(self, item: Any) -> Any:
"Only integers, slices and integer or "
"boolean arrays are valid indices."
)
+ elif isinstance(item, tuple):
+ # possibly unpack arr[..., n] to arr[n]
+ if len(item) == 1:
+ item = item[0]
+ elif len(item) == 2:
+ if item[0] is Ellipsis:
+ item = item[1]
+ elif item[1] is Ellipsis:
+ item = item[0]
# We are not an array indexer, so maybe e.g. a slice or integer
# indexer. We dispatch to pyarrow.
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index d87df9d224bce..79339c74ca4b9 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -309,18 +309,41 @@ def _slice(self, slicer):
return self.values[slicer]
@final
- def getitem_block(self, slicer, new_mgr_locs=None) -> Block:
+ def getitem_block(self, slicer) -> Block:
"""
Perform __getitem__-like, return result as block.
Only supports slices that preserve dimensionality.
"""
- if new_mgr_locs is None:
- axis0_slicer = slicer[0] if isinstance(slicer, tuple) else slicer
- new_mgr_locs = self._mgr_locs[axis0_slicer]
- elif not isinstance(new_mgr_locs, BlockPlacement):
- new_mgr_locs = BlockPlacement(new_mgr_locs)
+ axis0_slicer = slicer[0] if isinstance(slicer, tuple) else slicer
+ new_mgr_locs = self._mgr_locs[axis0_slicer]
+
+ new_values = self._slice(slicer)
+
+ if new_values.ndim != self.values.ndim:
+ raise ValueError("Only same dim slicing is allowed")
+
+ return type(self)._simple_new(new_values, new_mgr_locs, self.ndim)
+ @final
+ def getitem_block_index(self, slicer: slice) -> Block:
+ """
+ Perform __getitem__-like specialized to slicing along index.
+
+ Assumes self.ndim == 2
+ """
+ # error: Invalid index type "Tuple[ellipsis, slice]" for
+ # "Union[ndarray, ExtensionArray]"; expected type "Union[int, slice, ndarray]"
+ new_values = self.values[..., slicer] # type: ignore[index]
+ return type(self)._simple_new(new_values, self._mgr_locs, ndim=self.ndim)
+
+ @final
+ def getitem_block_columns(self, slicer, new_mgr_locs: BlockPlacement) -> Block:
+ """
+ Perform __getitem__-like, return result as block.
+
+ Only supports slices that preserve dimensionality.
+ """
new_values = self._slice(slicer)
if new_values.ndim != self.values.ndim:
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index a12ed69cf0025..0e502c08cb8f2 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -23,6 +23,7 @@
internals as libinternals,
lib,
)
+from pandas._libs.internals import BlockPlacement
from pandas._typing import (
ArrayLike,
Dtype,
@@ -801,8 +802,7 @@ def get_slice(self, slobj: slice, axis: int = 0) -> BlockManager:
if axis == 0:
new_blocks = self._slice_take_blocks_ax0(slobj)
elif axis == 1:
- slicer = (slice(None), slobj)
- new_blocks = [blk.getitem_block(slicer) for blk in self.blocks]
+ new_blocks = [blk.getitem_block_index(slobj) for blk in self.blocks]
else:
raise IndexError("Requested axis not found in manager")
@@ -1396,7 +1396,8 @@ def _slice_take_blocks_ax0(
# TODO(EA2D): special casing unnecessary with 2D EAs
if sllen == 0:
return []
- return [blk.getitem_block(slobj, new_mgr_locs=slice(0, sllen))]
+ bp = BlockPlacement(slice(0, sllen))
+ return [blk.getitem_block_columns(slobj, new_mgr_locs=bp)]
elif not allow_fill or self.ndim == 1:
if allow_fill and fill_value is None:
fill_value = blk.fill_value
@@ -1405,7 +1406,9 @@ def _slice_take_blocks_ax0(
# GH#33597 slice instead of take, so we get
# views instead of copies
blocks = [
- blk.getitem_block(slice(ml, ml + 1), new_mgr_locs=i)
+ blk.getitem_block_columns(
+ slice(ml, ml + 1), new_mgr_locs=BlockPlacement(i)
+ )
for i, ml in enumerate(slobj)
]
# We have
@@ -1465,13 +1468,15 @@ def _slice_take_blocks_ax0(
taker = lib.maybe_indices_to_slice(taker, max_len)
if isinstance(taker, slice):
- nb = blk.getitem_block(taker, new_mgr_locs=mgr_locs)
+ nb = blk.getitem_block_columns(taker, new_mgr_locs=mgr_locs)
blocks.append(nb)
elif only_slice:
# GH#33597 slice instead of take, so we get
# views instead of copies
for i, ml in zip(taker, mgr_locs):
- nb = blk.getitem_block(slice(i, i + 1), new_mgr_locs=ml)
+ slc = slice(i, i + 1)
+ bp = BlockPlacement(ml)
+ nb = blk.getitem_block_columns(slc, new_mgr_locs=bp)
# We have np.shares_memory(nb.values, blk.values)
blocks.append(nb)
else:
diff --git a/pandas/tests/extension/base/getitem.py b/pandas/tests/extension/base/getitem.py
index a7b99c2e09e88..971da37c105bd 100644
--- a/pandas/tests/extension/base/getitem.py
+++ b/pandas/tests/extension/base/getitem.py
@@ -245,6 +245,26 @@ def test_getitem_slice(self, data):
result = data[slice(1)] # scalar
assert isinstance(result, type(data))
+ def test_getitem_ellipsis_and_slice(self, data):
+ # GH#40353 this is called from getitem_block_index
+ result = data[..., :]
+ self.assert_extension_array_equal(result, data)
+
+ result = data[:, ...]
+ self.assert_extension_array_equal(result, data)
+
+ result = data[..., :3]
+ self.assert_extension_array_equal(result, data[:3])
+
+ result = data[:3, ...]
+ self.assert_extension_array_equal(result, data[:3])
+
+ result = data[..., ::2]
+ self.assert_extension_array_equal(result, data[::2])
+
+ result = data[::2, ...]
+ self.assert_extension_array_equal(result, data[::2])
+
def test_get(self, data):
# GH 20882
s = pd.Series(data, index=[2 * i for i in range(len(data))])
diff --git a/pandas/tests/extension/json/array.py b/pandas/tests/extension/json/array.py
index ca593da6d97bc..a4fedd9a4c5da 100644
--- a/pandas/tests/extension/json/array.py
+++ b/pandas/tests/extension/json/array.py
@@ -83,6 +83,16 @@ def _from_factorized(cls, values, original):
return cls([UserDict(x) for x in values if x != ()])
def __getitem__(self, item):
+ if isinstance(item, tuple):
+ if len(item) > 1:
+ if item[0] is Ellipsis:
+ item = item[1:]
+ elif item[-1] is Ellipsis:
+ item = item[:-1]
+ if len(item) > 1:
+ raise IndexError("too many indices for array.")
+ item = item[0]
+
if isinstance(item, numbers.Integral):
return self.data[item]
elif isinstance(item, slice) and item == slice(None):
diff --git a/pandas/tests/internals/test_internals.py b/pandas/tests/internals/test_internals.py
index 1728c31ebf767..a8c9a7a22ecdc 100644
--- a/pandas/tests/internals/test_internals.py
+++ b/pandas/tests/internals/test_internals.py
@@ -848,22 +848,27 @@ def assert_slice_ok(mgr, axis, slobj):
assert_slice_ok(mgr, ax, slice(1, 4))
assert_slice_ok(mgr, ax, slice(3, 0, -2))
- # boolean mask
- assert_slice_ok(mgr, ax, np.array([], dtype=np.bool_))
- assert_slice_ok(mgr, ax, np.ones(mgr.shape[ax], dtype=np.bool_))
- assert_slice_ok(mgr, ax, np.zeros(mgr.shape[ax], dtype=np.bool_))
-
- if mgr.shape[ax] >= 3:
- assert_slice_ok(mgr, ax, np.arange(mgr.shape[ax]) % 3 == 0)
- assert_slice_ok(mgr, ax, np.array([True, True, False], dtype=np.bool_))
+ if mgr.ndim < 2:
+ # 2D only support slice objects
+
+ # boolean mask
+ assert_slice_ok(mgr, ax, np.array([], dtype=np.bool_))
+ assert_slice_ok(mgr, ax, np.ones(mgr.shape[ax], dtype=np.bool_))
+ assert_slice_ok(mgr, ax, np.zeros(mgr.shape[ax], dtype=np.bool_))
+
+ if mgr.shape[ax] >= 3:
+ assert_slice_ok(mgr, ax, np.arange(mgr.shape[ax]) % 3 == 0)
+ assert_slice_ok(
+ mgr, ax, np.array([True, True, False], dtype=np.bool_)
+ )
- # fancy indexer
- assert_slice_ok(mgr, ax, [])
- assert_slice_ok(mgr, ax, list(range(mgr.shape[ax])))
+ # fancy indexer
+ assert_slice_ok(mgr, ax, [])
+ assert_slice_ok(mgr, ax, list(range(mgr.shape[ax])))
- if mgr.shape[ax] >= 3:
- assert_slice_ok(mgr, ax, [0, 1, 2])
- assert_slice_ok(mgr, ax, [-1, -2, -3])
+ if mgr.shape[ax] >= 3:
+ assert_slice_ok(mgr, ax, [0, 1, 2])
+ assert_slice_ok(mgr, ax, [-1, -2, -3])
@pytest.mark.parametrize("mgr", MANAGERS)
def test_take(self, mgr):
| This cuts another 25% off of the benchmark discussed https://github.com/pandas-dev/pandas/pull/40171#issuecomment-790219422, getting us to within 3x of the cython-path performance. | https://api.github.com/repos/pandas-dev/pandas/pulls/40353 | 2021-03-10T16:45:51Z | 2021-03-16T16:55:42Z | 2021-03-16T16:55:42Z | 2021-03-16T17:05:03Z |
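The recurring pattern in this diff — dropping a leading or trailing `Ellipsis` from a tuple key so that `arr[..., n]` behaves like `arr[n]` — can be factored as a small helper. This helper is illustrative; the diff inlines the logic separately in each `__getitem__`:

```python
def unpack_ellipsis(key):
    # Drop a leading/trailing Ellipsis from a tuple key, then unwrap a
    # length-1 tuple, mirroring the checks added to SparseArray and the
    # JSON test array's __getitem__ in the diff above.
    if isinstance(key, tuple):
        if len(key) > 1:
            if key[0] is Ellipsis:
                key = key[1:]
            elif key[-1] is Ellipsis:
                key = key[:-1]
        if len(key) > 1:
            raise IndexError("too many indices for array.")
        key = key[0]
    return key


print(unpack_ellipsis((Ellipsis, slice(None, 3))))  # -> slice(None, 3, None)
```

This is what lets the new `getitem_block_index` path index an ExtensionArray with `values[..., slicer]` without each EA needing full multidimensional indexing support.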
[ArrayManager] TST: Enable extension tests | diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 92c10df3e4e97..d54d29dcfacbc 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -175,6 +175,7 @@ jobs:
pytest pandas/tests/computation/
pytest pandas/tests/config/
pytest pandas/tests/dtypes/
+ pytest pandas/tests/extension/
pytest pandas/tests/generic/
pytest pandas/tests/indexes/
pytest pandas/tests/io/test_* -m "not slow and not clipboard"
diff --git a/pandas/core/internals/array_manager.py b/pandas/core/internals/array_manager.py
index 2177839eb34ce..4f196043b3129 100644
--- a/pandas/core/internals/array_manager.py
+++ b/pandas/core/internals/array_manager.py
@@ -684,7 +684,10 @@ def get_numeric_data(self, copy: bool = False) -> ArrayManager:
copy : bool, default False
Whether to copy the blocks
"""
- return self._get_data_subset(lambda arr: is_numeric_dtype(arr.dtype))
+ return self._get_data_subset(
+ lambda arr: is_numeric_dtype(arr.dtype)
+ or getattr(arr.dtype, "_is_numeric", False)
+ )
def copy(self: T, deep=True) -> T:
"""
diff --git a/pandas/tests/extension/base/casting.py b/pandas/tests/extension/base/casting.py
index 0b79a5368a542..7c5ef5b3b27d3 100644
--- a/pandas/tests/extension/base/casting.py
+++ b/pandas/tests/extension/base/casting.py
@@ -12,14 +12,21 @@ class BaseCastingTests(BaseExtensionTests):
def test_astype_object_series(self, all_data):
ser = pd.Series(all_data, name="A")
result = ser.astype(object)
- assert isinstance(result._mgr.blocks[0], ObjectBlock)
+ assert result.dtype == np.dtype(object)
+ if hasattr(result._mgr, "blocks"):
+ assert isinstance(result._mgr.blocks[0], ObjectBlock)
+ assert isinstance(result._mgr.array, np.ndarray)
+ assert result._mgr.array.dtype == np.dtype(object)
def test_astype_object_frame(self, all_data):
df = pd.DataFrame({"A": all_data})
result = df.astype(object)
- blk = result._data.blocks[0]
- assert isinstance(blk, ObjectBlock), type(blk)
+ if hasattr(result._mgr, "blocks"):
+ blk = result._data.blocks[0]
+ assert isinstance(blk, ObjectBlock), type(blk)
+ assert isinstance(result._mgr.arrays[0], np.ndarray)
+ assert result._mgr.arrays[0].dtype == np.dtype(object)
# FIXME: these currently fail; dont leave commented-out
# check that we can compare the dtypes
diff --git a/pandas/tests/extension/base/constructors.py b/pandas/tests/extension/base/constructors.py
index 6f0d8d16a0224..e2323620daa0e 100644
--- a/pandas/tests/extension/base/constructors.py
+++ b/pandas/tests/extension/base/constructors.py
@@ -2,6 +2,7 @@
import pytest
import pandas as pd
+from pandas.api.extensions import ExtensionArray
from pandas.core.internals import ExtensionBlock
from pandas.tests.extension.base.base import BaseExtensionTests
@@ -24,13 +25,15 @@ def test_series_constructor(self, data):
result = pd.Series(data)
assert result.dtype == data.dtype
assert len(result) == len(data)
- assert isinstance(result._mgr.blocks[0], ExtensionBlock)
- assert result._mgr.blocks[0].values is data
+ if hasattr(result._mgr, "blocks"):
+ assert isinstance(result._mgr.blocks[0], ExtensionBlock)
+ assert result._mgr.array is data
# Series[EA] is unboxed / boxed correctly
result2 = pd.Series(result)
assert result2.dtype == data.dtype
- assert isinstance(result2._mgr.blocks[0], ExtensionBlock)
+ if hasattr(result._mgr, "blocks"):
+ assert isinstance(result2._mgr.blocks[0], ExtensionBlock)
def test_series_constructor_no_data_with_index(self, dtype, na_value):
result = pd.Series(index=[1, 2, 3], dtype=dtype)
@@ -64,13 +67,17 @@ def test_dataframe_constructor_from_dict(self, data, from_series):
result = pd.DataFrame({"A": data})
assert result.dtypes["A"] == data.dtype
assert result.shape == (len(data), 1)
- assert isinstance(result._mgr.blocks[0], ExtensionBlock)
+ if hasattr(result._mgr, "blocks"):
+ assert isinstance(result._mgr.blocks[0], ExtensionBlock)
+ assert isinstance(result._mgr.arrays[0], ExtensionArray)
def test_dataframe_from_series(self, data):
result = pd.DataFrame(pd.Series(data))
assert result.dtypes[0] == data.dtype
assert result.shape == (len(data), 1)
- assert isinstance(result._mgr.blocks[0], ExtensionBlock)
+ if hasattr(result._mgr, "blocks"):
+ assert isinstance(result._mgr.blocks[0], ExtensionBlock)
+ assert isinstance(result._mgr.arrays[0], ExtensionArray)
def test_series_given_mismatched_index_raises(self, data):
msg = r"Length of values \(3\) does not match length of index \(5\)"
diff --git a/pandas/tests/extension/base/getitem.py b/pandas/tests/extension/base/getitem.py
index 971da37c105bd..96833a2e49fa1 100644
--- a/pandas/tests/extension/base/getitem.py
+++ b/pandas/tests/extension/base/getitem.py
@@ -408,7 +408,10 @@ def test_loc_len1(self, data):
# see GH-27785 take_nd with indexer of len 1 resulting in wrong ndim
df = pd.DataFrame({"A": data})
res = df.loc[[0], "A"]
- assert res._mgr._block.ndim == 1
+ assert res.ndim == 1
+ assert res._mgr.arrays[0].ndim == 1
+ if hasattr(res._mgr, "blocks"):
+ assert res._mgr._block.ndim == 1
def test_item(self, data):
# https://github.com/pandas-dev/pandas/pull/30175
diff --git a/pandas/tests/extension/base/interface.py b/pandas/tests/extension/base/interface.py
index 5bf26e2ca476e..f51f9f732bace 100644
--- a/pandas/tests/extension/base/interface.py
+++ b/pandas/tests/extension/base/interface.py
@@ -81,7 +81,8 @@ def test_no_values_attribute(self, data):
def test_is_numeric_honored(self, data):
result = pd.Series(data)
- assert result._mgr.blocks[0].is_numeric is data.dtype._is_numeric
+ if hasattr(result._mgr, "blocks"):
+ assert result._mgr.blocks[0].is_numeric is data.dtype._is_numeric
def test_isna_extension_array(self, data_missing):
# If your `isna` returns an ExtensionArray, you must also implement
diff --git a/pandas/tests/extension/base/reshaping.py b/pandas/tests/extension/base/reshaping.py
index 18f6084f989dc..5a2d928eea744 100644
--- a/pandas/tests/extension/base/reshaping.py
+++ b/pandas/tests/extension/base/reshaping.py
@@ -3,7 +3,10 @@
import numpy as np
import pytest
+import pandas.util._test_decorators as td
+
import pandas as pd
+from pandas.api.extensions import ExtensionArray
from pandas.core.internals import ExtensionBlock
from pandas.tests.extension.base.base import BaseExtensionTests
@@ -26,7 +29,9 @@ def test_concat(self, data, in_frame):
dtype = result.dtype
assert dtype == data.dtype
- assert isinstance(result._mgr.blocks[0], ExtensionBlock)
+ if hasattr(result._mgr, "blocks"):
+ assert isinstance(result._mgr.blocks[0], ExtensionBlock)
+ assert isinstance(result._mgr.arrays[0], ExtensionArray)
@pytest.mark.parametrize("in_frame", [True, False])
def test_concat_all_na_block(self, data_missing, in_frame):
@@ -106,6 +111,7 @@ def test_concat_extension_arrays_copy_false(self, data, na_value):
result = pd.concat([df1, df2], axis=1, copy=False)
self.assert_frame_equal(result, expected)
+ @td.skip_array_manager_not_yet_implemented # TODO(ArrayManager) concat reindex
def test_concat_with_reindex(self, data):
# GH-33027
a = pd.DataFrame({"a": data[:5]})
diff --git a/pandas/tests/extension/test_external_block.py b/pandas/tests/extension/test_external_block.py
index 9360294f5a3f7..ee46d13055010 100644
--- a/pandas/tests/extension/test_external_block.py
+++ b/pandas/tests/extension/test_external_block.py
@@ -2,11 +2,14 @@
import pytest
from pandas._libs.internals import BlockPlacement
+import pandas.util._test_decorators as td
import pandas as pd
from pandas.core.internals import BlockManager
from pandas.core.internals.blocks import ExtensionBlock
+pytestmark = td.skip_array_manager_invalid_test
+
class CustomBlock(ExtensionBlock):
diff --git a/pandas/tests/extension/test_numpy.py b/pandas/tests/extension/test_numpy.py
index bc1f499a70aa0..051871513a14e 100644
--- a/pandas/tests/extension/test_numpy.py
+++ b/pandas/tests/extension/test_numpy.py
@@ -16,6 +16,8 @@
import numpy as np
import pytest
+import pandas.util._test_decorators as td
+
from pandas.core.dtypes.dtypes import (
ExtensionDtype,
PandasDtype,
@@ -28,6 +30,9 @@
from pandas.core.internals import managers
from pandas.tests.extension import base
+# TODO(ArrayManager) PandasArray
+pytestmark = td.skip_array_manager_not_yet_implemented
+
def _extract_array_patched(obj):
if isinstance(obj, (pd.Index, pd.Series)):
diff --git a/pandas/tests/extension/test_sparse.py b/pandas/tests/extension/test_sparse.py
index b8e042f0599f7..0613c727dec98 100644
--- a/pandas/tests/extension/test_sparse.py
+++ b/pandas/tests/extension/test_sparse.py
@@ -290,7 +290,8 @@ def test_fillna_copy_frame(self, data_missing):
filled_val = df.iloc[0, 0]
result = df.fillna(filled_val)
- assert df.values.base is not result.values.base
+ if hasattr(df._mgr, "blocks"):
+ assert df.values.base is not result.values.base
assert df.A._values.to_dense() is arr.to_dense()
def test_fillna_copy_series(self, data_missing):
@@ -362,18 +363,19 @@ def test_equals(self, data, na_value, as_series, box):
class TestCasting(BaseSparseTests, base.BaseCastingTests):
def test_astype_object_series(self, all_data):
# Unlike the base class, we do not expect the resulting Block
- # to be ObjectBlock
+ # to be ObjectBlock / resulting array to be np.dtype("object")
ser = pd.Series(all_data, name="A")
result = ser.astype(object)
- assert is_object_dtype(result._data.blocks[0].dtype)
+ assert is_object_dtype(result.dtype)
+ assert is_object_dtype(result._mgr.array.dtype)
def test_astype_object_frame(self, all_data):
# Unlike the base class, we do not expect the resulting Block
- # to be ObjectBlock
+ # to be ObjectBlock / resulting array to be np.dtype("object")
df = pd.DataFrame({"A": all_data})
result = df.astype(object)
- assert is_object_dtype(result._data.blocks[0].dtype)
+ assert is_object_dtype(result._mgr.arrays[0].dtype)
# FIXME: these currently fail; dont leave commented-out
# check that we can compare the dtypes
| xref #39146
| https://api.github.com/repos/pandas-dev/pandas/pulls/40348 | 2021-03-10T09:10:42Z | 2021-03-18T07:40:02Z | 2021-03-18T07:40:01Z | 2021-03-18T07:40:05Z |