| title | diff | body | url | created_at | closed_at | merged_at | updated_at |
|---|---|---|---|---|---|---|---|
BUG: Fixed getattr for frame with column sparse | diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
index a9a0d89ed01aa..5b4761c3bc6c5 100755
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -1004,7 +1004,7 @@ Reshaping
Sparse
^^^^^^
- Bug in :class:`SparseDataFrame` arithmetic operations incorrectly casting inputs to float (:issue:`28107`)
--
+- Bug in ``DataFrame.sparse`` returning a ``Series`` when there was a column named ``sparse`` rather than the accessor (:issue:`30758`)
-
ExtensionArray
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 6f83d7ee11d1f..538d0feade96f 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -402,7 +402,7 @@ def _constructor(self) -> Type["DataFrame"]:
_constructor_sliced: Type[Series] = Series
_deprecations: FrozenSet[str] = NDFrame._deprecations | frozenset([])
- _accessors: Set[str] = set()
+ _accessors: Set[str] = {"sparse"}
@property
def _constructor_expanddim(self):
diff --git a/pandas/tests/arrays/sparse/test_accessor.py b/pandas/tests/arrays/sparse/test_accessor.py
index 4615eca837393..d8a1831cd61ec 100644
--- a/pandas/tests/arrays/sparse/test_accessor.py
+++ b/pandas/tests/arrays/sparse/test_accessor.py
@@ -116,3 +116,8 @@ def test_series_from_coo_incorrect_format_raises(self):
TypeError, match="Expected coo_matrix. Got csr_matrix instead."
):
pd.Series.sparse.from_coo(m)
+
+ def test_with_column_named_sparse(self):
+ # https://github.com/pandas-dev/pandas/issues/30758
+ df = pd.DataFrame({"sparse": pd.arrays.SparseArray([1, 2])})
+ assert isinstance(df.sparse, pd.core.arrays.sparse.accessor.SparseFrameAccessor)
| Closes https://github.com/pandas-dev/pandas/issues/30758 | https://api.github.com/repos/pandas-dev/pandas/pulls/30759 | 2020-01-06T21:44:39Z | 2020-01-06T23:52:02Z | 2020-01-06T23:52:02Z | 2020-01-06T23:52:05Z |
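For context, here is a rough sketch of why registering the accessor name works. The class names below (`Frame`, `SparseAccessor`) are illustrative stand-ins, not the actual pandas internals: accessors live on the class as cached descriptors, and `__getattr__` — which serves attribute-style column access — only runs after normal attribute lookup fails, so an accessor name has to be excluded from the column fallback:

```python
class CachedAccessor:
    """Minimal cached-accessor descriptor, loosely modeled on pandas' internal one."""

    def __init__(self, name, accessor_cls):
        self._name = name
        self._accessor_cls = accessor_cls

    def __get__(self, obj, cls):
        if obj is None:
            return self._accessor_cls  # class-level access returns the accessor type
        accessor = self._accessor_cls(obj)
        object.__setattr__(obj, self._name, accessor)  # cache on the instance
        return accessor


class SparseAccessor:
    def __init__(self, frame):
        self.frame = frame


class Frame:
    _accessors = {"sparse"}  # accessor names must never resolve as columns
    sparse = CachedAccessor("sparse", SparseAccessor)

    def __init__(self, columns):
        self.columns = columns

    def __getattr__(self, name):
        # Only reached when normal lookup (including descriptors) fails,
        # so a registered accessor always wins over a same-named column.
        if name in self.columns and name not in self._accessors:
            return f"column {name!r}"
        raise AttributeError(name)


df = Frame(columns=["sparse", "a"])
```

With this setup `df.sparse` resolves to the accessor even though a column named `"sparse"` exists, while `df.a` still falls through to column lookup.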
BUG: TDI.insert with empty TDI raising IndexError | diff --git a/pandas/tests/indexes/timedeltas/test_indexing.py b/pandas/tests/indexes/timedeltas/test_indexing.py
index 36105477ba9ee..e8665ee1a3555 100644
--- a/pandas/tests/indexes/timedeltas/test_indexing.py
+++ b/pandas/tests/indexes/timedeltas/test_indexing.py
@@ -173,6 +173,15 @@ def test_take_fill_value(self):
class TestTimedeltaIndex:
+ def test_insert_empty(self):
+ # Corner case: inserting with length zero doesn't raise IndexError
+ idx = timedelta_range("1 Day", periods=3)
+ td = idx[0]
+
+ idx[:0].insert(0, td)
+ idx[:0].insert(1, td)
+ idx[:0].insert(-1, td)
+
def test_insert(self):
idx = TimedeltaIndex(["4day", "1day", "2day"], name="idx")
| This started out as a cosmetic-only branch and ended up finding a broken corner case. The relevant change is in timedeltas L416 where `if self.freq is not None:` is now `if self.size and self.freq is not None:`
Using _check_compatible_with causes us to raise TypeError instead of ValueError in a couple of the DatetimeIndex.insert cases. | https://api.github.com/repos/pandas-dev/pandas/pulls/30757 | 2020-01-06T21:26:35Z | 2020-01-09T13:18:27Z | 2020-01-09T13:18:27Z | 2020-01-09T16:26:05Z |
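Assuming a pandas build that includes this fix, the corner case from the new test can be exercised directly — inserting into a zero-length slice of a `TimedeltaIndex` no longer raises `IndexError`:

```python
import pandas as pd

idx = pd.timedelta_range("1 Day", periods=3)
td = idx[0]

empty = idx[:0]              # empty TimedeltaIndex
result = empty.insert(0, td) # previously hit IndexError in the freq-check path
```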
TST: Use datapath fixture | diff --git a/pandas/tests/io/test_pickle.py b/pandas/tests/io/test_pickle.py
index 42f904b47a6ee..ccd77f47b5e5e 100644
--- a/pandas/tests/io/test_pickle.py
+++ b/pandas/tests/io/test_pickle.py
@@ -381,10 +381,10 @@ def test_read(self, protocol, get_random_path):
tm.assert_frame_equal(df, df2)
-def test_unicode_decode_error():
+def test_unicode_decode_error(datapath):
# pickle file written with py27, should be readable without raising
# UnicodeDecodeError, see GH#28645
- path = os.path.join(os.path.dirname(__file__), "data", "pickle", "test_py27.pkl")
+ path = datapath("io", "data", "pickle", "test_py27.pkl")
df = pd.read_pickle(path)
# just test the columns are correct since the values are random
| This was failing the wheel build.
https://travis-ci.org/MacPython/pandas-wheels/jobs/633451994.
I tried briefly to write a code check for this, but didn't succeed. | https://api.github.com/repos/pandas-dev/pandas/pulls/30756 | 2020-01-06T21:21:37Z | 2020-01-06T22:26:25Z | 2020-01-06T22:26:25Z | 2020-01-06T22:26:26Z |
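The point of `datapath` is to resolve test data relative to the test-suite root rather than `__file__`, which breaks when tests run from an installed wheel. A rough stdlib stand-in (the real pytest fixture lives in pandas' conftest and does more, e.g. handling missing data files):

```python
import os.path


def make_datapath(base_dir):
    """Build a datapath-style helper rooted at base_dir.

    A simplified stand-in for pandas' ``datapath`` pytest fixture,
    which roots paths at ``pandas/tests``.
    """
    def datapath(*parts):
        return os.path.join(base_dir, *parts)
    return datapath


datapath = make_datapath(os.path.join("pandas", "tests"))
path = datapath("io", "data", "pickle", "test_py27.pkl")
```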
CI: Unify code_checks whitespace checking | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index e6a761b91f353..7b223a553e114 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -102,9 +102,17 @@ if [[ -z "$CHECK" || "$CHECK" == "lint" ]]; then
MSG='Check for use of not concatenated strings' ; echo $MSG
if [[ "$GITHUB_ACTIONS" == "true" ]]; then
- $BASE_DIR/scripts/validate_string_concatenation.py --format="[error]{source_path}:{line_number}:{msg}" .
+ $BASE_DIR/scripts/validate_string_concatenation.py --validation-type="strings_to_concatenate" --format="##[error]{source_path}:{line_number}:{msg}" .
else
- $BASE_DIR/scripts/validate_string_concatenation.py .
+ $BASE_DIR/scripts/validate_string_concatenation.py --validation-type="strings_to_concatenate" .
+ fi
+ RET=$(($RET + $?)) ; echo $MSG "DONE"
+
+ MSG='Check for strings with wrong placed spaces' ; echo $MSG
+ if [[ "$GITHUB_ACTIONS" == "true" ]]; then
+ $BASE_DIR/scripts/validate_string_concatenation.py --validation-type="strings_with_wrong_placed_whitespace" --format="##[error]{source_path}:{line_number}:{msg}" .
+ else
+ $BASE_DIR/scripts/validate_string_concatenation.py --validation-type="strings_with_wrong_placed_whitespace" .
fi
RET=$(($RET + $?)) ; echo $MSG "DONE"
diff --git a/scripts/tests/test_validate_unwanted_patterns.py b/scripts/tests/test_validate_unwanted_patterns.py
new file mode 100644
index 0000000000000..6ad5ab44a29d6
--- /dev/null
+++ b/scripts/tests/test_validate_unwanted_patterns.py
@@ -0,0 +1,422 @@
+import io
+
+import pytest
+
+# TODO: change this import to "import validate_unwanted_patterns"
+# when renaming "scripts/validate_string_concatenation.py" to
+# "scripts/validate_unwanted_patterns.py"
+import validate_string_concatenation as validate_unwanted_patterns
+
+
+class TestBarePytestRaises:
+ @pytest.mark.parametrize(
+ "data",
+ [
+ (
+ """
+ with pytest.raises(ValueError, match="foo"):
+ pass
+ """
+ ),
+ (
+ """
+ # with pytest.raises(ValueError, match="foo"):
+ # pass
+ """
+ ),
+ (
+ """
+ # with pytest.raises(ValueError):
+ # pass
+ """
+ ),
+ (
+ """
+ with pytest.raises(
+ ValueError,
+ match="foo"
+ ):
+ pass
+ """
+ ),
+ ],
+ )
+ def test_pytest_raises(self, data):
+ fd = io.StringIO(data.strip())
+ result = list(validate_unwanted_patterns.bare_pytest_raises(fd))
+ assert result == []
+
+ @pytest.mark.parametrize(
+ "data, expected",
+ [
+ (
+ (
+ """
+ with pytest.raises(ValueError):
+ pass
+ """
+ ),
+ [
+ (
+ 1,
+ (
+ "Bare pytests raise have been found. "
+ "Please pass in the argument 'match' "
+ "as well the exception."
+ ),
+ ),
+ ],
+ ),
+ (
+ (
+ """
+ with pytest.raises(ValueError, match="foo"):
+ with pytest.raises(ValueError):
+ pass
+ pass
+ """
+ ),
+ [
+ (
+ 2,
+ (
+ "Bare pytests raise have been found. "
+ "Please pass in the argument 'match' "
+ "as well the exception."
+ ),
+ ),
+ ],
+ ),
+ (
+ (
+ """
+ with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match="foo"):
+ pass
+ pass
+ """
+ ),
+ [
+ (
+ 1,
+ (
+ "Bare pytests raise have been found. "
+ "Please pass in the argument 'match' "
+ "as well the exception."
+ ),
+ ),
+ ],
+ ),
+ (
+ (
+ """
+ with pytest.raises(
+ ValueError
+ ):
+ pass
+ """
+ ),
+ [
+ (
+ 1,
+ (
+ "Bare pytests raise have been found. "
+ "Please pass in the argument 'match' "
+ "as well the exception."
+ ),
+ ),
+ ],
+ ),
+ (
+ (
+ """
+ with pytest.raises(
+ ValueError,
+ # match = "foo"
+ ):
+ pass
+ """
+ ),
+ [
+ (
+ 1,
+ (
+ "Bare pytests raise have been found. "
+ "Please pass in the argument 'match' "
+ "as well the exception."
+ ),
+ ),
+ ],
+ ),
+ ],
+ )
+ def test_pytest_raises_raises(self, data, expected):
+ fd = io.StringIO(data.strip())
+ result = list(validate_unwanted_patterns.bare_pytest_raises(fd))
+ assert result == expected
+
+
+@pytest.mark.parametrize(
+ "data, expected",
+ [
+ (
+ 'msg = ("bar " "baz")',
+ [
+ (
+ 1,
+ (
+ "String unnecessarily split in two by black. "
+ "Please merge them manually."
+ ),
+ )
+ ],
+ ),
+ (
+ 'msg = ("foo " "bar " "baz")',
+ [
+ (
+ 1,
+ (
+ "String unnecessarily split in two by black. "
+ "Please merge them manually."
+ ),
+ ),
+ (
+ 1,
+ (
+ "String unnecessarily split in two by black. "
+ "Please merge them manually."
+ ),
+ ),
+ ],
+ ),
+ ],
+)
+def test_strings_to_concatenate(data, expected):
+ fd = io.StringIO(data.strip())
+ result = list(validate_unwanted_patterns.strings_to_concatenate(fd))
+ assert result == expected
+
+
+class TestStringsWithWrongPlacedWhitespace:
+ @pytest.mark.parametrize(
+ "data",
+ [
+ (
+ """
+ msg = (
+ "foo\n"
+ " bar"
+ )
+ """
+ ),
+ (
+ """
+ msg = (
+ "foo"
+ " bar"
+ "baz"
+ )
+ """
+ ),
+ (
+ """
+ msg = (
+ f"foo"
+ " bar"
+ )
+ """
+ ),
+ (
+ """
+ msg = (
+ "foo"
+ f" bar"
+ )
+ """
+ ),
+ (
+ """
+ msg = (
+ "foo"
+ rf" bar"
+ )
+ """
+ ),
+ ],
+ )
+ def test_strings_with_wrong_placed_whitespace(self, data):
+ fd = io.StringIO(data.strip())
+ result = list(
+ validate_unwanted_patterns.strings_with_wrong_placed_whitespace(fd)
+ )
+ assert result == []
+
+ @pytest.mark.parametrize(
+ "data, expected",
+ [
+ (
+ (
+ """
+ msg = (
+ "foo"
+ " bar"
+ )
+ """
+ ),
+ [
+ (
+ 3,
+ (
+ "String has a space at the beginning instead "
+ "of the end of the previous string."
+ ),
+ )
+ ],
+ ),
+ (
+ (
+ """
+ msg = (
+ f"foo"
+ " bar"
+ )
+ """
+ ),
+ [
+ (
+ 3,
+ (
+ "String has a space at the beginning instead "
+ "of the end of the previous string."
+ ),
+ )
+ ],
+ ),
+ (
+ (
+ """
+ msg = (
+ "foo"
+ f" bar"
+ )
+ """
+ ),
+ [
+ (
+ 3,
+ (
+ "String has a space at the beginning instead "
+ "of the end of the previous string."
+ ),
+ )
+ ],
+ ),
+ (
+ (
+ """
+ msg = (
+ f"foo"
+ f" bar"
+ )
+ """
+ ),
+ [
+ (
+ 3,
+ (
+ "String has a space at the beginning instead "
+ "of the end of the previous string."
+ ),
+ )
+ ],
+ ),
+ (
+ (
+ """
+ msg = (
+ "foo"
+ rf" bar"
+ " baz"
+ )
+ """
+ ),
+ [
+ (
+ 3,
+ (
+ "String has a space at the beginning instead "
+ "of the end of the previous string."
+ ),
+ ),
+ (
+ 4,
+ (
+ "String has a space at the beginning instead "
+ "of the end of the previous string."
+ ),
+ ),
+ ],
+ ),
+ (
+ (
+ """
+ msg = (
+ "foo"
+ " bar"
+ rf" baz"
+ )
+ """
+ ),
+ [
+ (
+ 3,
+ (
+ "String has a space at the beginning instead "
+ "of the end of the previous string."
+ ),
+ ),
+ (
+ 4,
+ (
+ "String has a space at the beginning instead "
+ "of the end of the previous string."
+ ),
+ ),
+ ],
+ ),
+ (
+ (
+ """
+ msg = (
+ "foo"
+ rf" bar"
+ rf" baz"
+ )
+ """
+ ),
+ [
+ (
+ 3,
+ (
+ "String has a space at the beginning instead "
+ "of the end of the previous string."
+ ),
+ ),
+ (
+ 4,
+ (
+ "String has a space at the beginning instead "
+ "of the end of the previous string."
+ ),
+ ),
+ ],
+ ),
+ ],
+ )
+ def test_strings_with_wrong_placed_whitespace_raises(self, data, expected):
+ fd = io.StringIO(data.strip())
+ result = list(
+ validate_unwanted_patterns.strings_with_wrong_placed_whitespace(fd)
+ )
+ assert result == expected
diff --git a/scripts/validate_string_concatenation.py b/scripts/validate_string_concatenation.py
index c5f257c641b25..c4be85ffe7306 100755
--- a/scripts/validate_string_concatenation.py
+++ b/scripts/validate_string_concatenation.py
@@ -1,24 +1,13 @@
#!/usr/bin/env python3
"""
-GH #30454
+Unwanted patterns test cases.
-Check where there is a string that needs to be concatenated.
+The reason this file exists despite the fact that we already have
+`ci/code_checks.sh`,
+(see https://github.com/pandas-dev/pandas/blob/master/ci/code_checks.sh)
-This is necessary after black formatting,
-where for example black transforms this:
-
->>> foo = (
-... "bar "
-... "baz"
-... )
-
-into this:
-
->>> foo = ("bar " "baz")
-
-Black is not considering this as an
-issue (see issue https://github.com/psf/black/issues/1051),
-so we are checking it here.
+is that some of the test cases are more complex or impossible to validate via regex.
+So this file is somewhat an extension of `ci/code_checks.sh`
"""
import argparse
@@ -26,26 +15,288 @@
import sys
import token
import tokenize
-from typing import Generator, List, Tuple
+from typing import IO, Callable, Iterable, List, Tuple
+
+FILE_EXTENSIONS_TO_CHECK: Tuple[str, ...] = (".py", ".pyx", ".pxi.ini", ".pxd")
+
+
+def _get_literal_string_prefix_len(token_string: str) -> int:
+ """
+ Getting the length of the literal string prefix.
+
+ Parameters
+ ----------
+ token_string : str
+ String to check.
+
+ Returns
+ -------
+ int
+ Length of the literal string prefix.
+
+ Examples
+ --------
+ >>> example_string = "'Hello world'"
+ >>> _get_literal_string_prefix_len(example_string)
+ 0
+ >>> example_string = "r'Hello world'"
+ >>> _get_literal_string_prefix_len(example_string)
+ 1
+ """
+ try:
+ return min(
+ token_string.find(quote)
+ for quote in (r"'", r'"')
+ if token_string.find(quote) >= 0
+ )
+ except ValueError:
+ return 0
+
+
+def bare_pytest_raises(file_obj: IO[str]) -> Iterable[Tuple[int, str]]:
+ """
+ Test Case for bare pytest raises.
+
+ For example, this is wrong:
+
+ >>> with pytest.raises(ValueError):
+ ... # Some code that raises ValueError
+
+ And this is what we want instead:
+
+ >>> with pytest.raises(ValueError, match="foo"):
+ ... # Some code that raises ValueError
+
+ Parameters
+ ----------
+ file_obj : IO
+ File-like object containing the Python code to validate.
+
+ Yields
+ ------
+ line_number : int
+ Line number of the bare pytest raise.
+ msg : str
+ Explanation of the error.
+
+ Notes
+ -----
+ GH #23922
+ """
+ tokens: List = list(tokenize.generate_tokens(file_obj.readline))
+
+ for counter, current_token in enumerate(tokens, start=1):
+ if not (current_token.type == token.NAME and current_token.string == "raises"):
+ continue
+ for next_token in tokens[counter:]:
+ if next_token.type == token.NAME and next_token.string == "match":
+ break
+ # token.NEWLINE refers to the end of a logical line
+ # unlike token.NL or "\n" which represents a newline
+ if next_token.type == token.NEWLINE:
+ yield (
+ current_token.start[0],
+ "Bare pytests raise have been found. "
+ "Please pass in the argument 'match' as well the exception.",
+ )
+ break
+
+
+def strings_to_concatenate(file_obj: IO[str]) -> Iterable[Tuple[int, str]]:
+ """
+ This test case is necessary because 'Black' (https://github.com/psf/black)
+ reformats strings that were split over multiple lines.
+
+ For example, when this:
+
+ >>> foo = (
+ ... "bar "
+ ... "baz"
+ ... )
+
+ becomes this:
+
+ >>> foo = ("bar " "baz")
+
+ 'Black' is not considering this as an
+ issue (see https://github.com/psf/black/issues/1051),
+ so we are checking it here instead.
+
+ Parameters
+ ----------
+ file_obj : IO
+ File-like object containing the Python code to validate.
+
+ Yields
+ ------
+ line_number : int
+ Line number of unconcatenated string.
+ msg : str
+ Explanation of the error.
+
+ Notes
+ -----
+ GH #30454
+ """
+ tokens: List = list(tokenize.generate_tokens(file_obj.readline))
+
+ for current_token, next_token in zip(tokens, tokens[1:]):
+ if current_token.type == next_token.type == token.STRING:
+ yield (
+ current_token.start[0],
+ (
+ "String unnecessarily split in two by black. "
+ "Please merge them manually."
+ ),
+ )
+
-FILE_EXTENSIONS_TO_CHECK = (".py", ".pyx", ".pyx.ini", ".pxd")
+def strings_with_wrong_placed_whitespace(
+ file_obj: IO[str],
+) -> Iterable[Tuple[int, str]]:
+ """
+ Test case for leading spaces in concatenated strings.
+
+ For example:
+
+ >>> rule = (
+ ... "We want the space at the end of the line, "
+ ... "not at the beginning"
+ ... )
+
+ Instead of:
+
+ >>> rule = (
+ ... "We want the space at the end of the line,"
+ ... " not at the beginning"
+ ... )
+
+ Parameters
+ ----------
+ file_obj : IO
+ File-like object containing the Python code to validate.
+
+ Yields
+ ------
+ line_number : int
+ Line number of the string with misplaced whitespace.
+ msg : str
+ Explanation of the error.
+ """
+
+ def has_wrong_whitespace(first_line: str, second_line: str) -> bool:
+ """
+ Check whether the two lines match the unwanted pattern.
+
+ Parameters
+ ----------
+ first_line : str
+ First line to check.
+ second_line : str
+ Second line to check.
+
+ Returns
+ -------
+ bool
+ True if the two received strings match an unwanted pattern.
+
+ Notes
+ -----
+ The unwanted pattern that we are trying to catch is if the spaces in
+ a string that is concatenated over multiple lines are placed at the
+ end of each string, unless this string is ending with a
+ newline character (\n).
+
+ For example, this is bad:
+
+ >>> rule = (
+ ... "We want the space at the end of the line,"
+ ... " not at the beginning"
+ ... )
+
+ And what we want is:
+
+ >>> rule = (
+ ... "We want the space at the end of the line, "
+ ... "not at the beginning"
+ ... )
+
+ And if the string is ending with a new line character (\n) we
+ do not want any trailing whitespaces after it.
+
+ For example, this is bad:
+
+ >>> rule = (
+ ... "We want the space at the begging of "
+ ... "the line if the previous line is ending with a \n "
+ ... "not at the end, like always"
+ ... )
+
+ And what we do want is:
+
+ >>> rule = (
+ ... "We want the space at the begging of "
+ ... "the line if the previous line is ending with a \n"
+ ... " not at the end, like always"
+ ... )
+ """
+ if first_line.endswith(r"\n"):
+ return False
+ elif first_line.startswith("  ") or second_line.startswith("  "):
+ return False
+ elif first_line.endswith("  ") or second_line.endswith("  "):
+ return False
+ elif (not first_line.endswith(" ")) and second_line.startswith(" "):
+ return True
+ return False
+
+ tokens: List = list(tokenize.generate_tokens(file_obj.readline))
+
+ for first_token, second_token, third_token in zip(tokens, tokens[1:], tokens[2:]):
+ # Check whether we are in a block of concatenated strings
+ if (
+ first_token.type == third_token.type == token.STRING
+ and second_token.type == token.NL
+ ):
+ # Strip the quotes and the string literal prefix
+ first_string: str = first_token.string[
+ _get_literal_string_prefix_len(first_token.string) + 1 : -1
+ ]
+ second_string: str = third_token.string[
+ _get_literal_string_prefix_len(third_token.string) + 1 : -1
+ ]
+
+ if has_wrong_whitespace(first_string, second_string):
+ yield (
+ third_token.start[0],
+ (
+ "String has a space at the beginning instead "
+ "of the end of the previous string."
+ ),
+ )
-def main(source_path: str, output_format: str) -> bool:
+def main(
+ function: Callable[[IO[str]], Iterable[Tuple[int, str]]],
+ source_path: str,
+ output_format: str,
+) -> bool:
"""
Main entry point of the script.
Parameters
----------
+ function : Callable
+ Function to execute for the specified validation type.
source_path : str
Source path representing path to a file/directory.
output_format : str
- Output format of the script.
+ Output format of the error message.
Returns
-------
bool
- True if found any strings that needs to be concatenated.
+ True if any patterns related to the given function are found.
Raises
------
@@ -53,66 +304,50 @@ def main(source_path: str, output_format: str) -> bool:
If the `source_path` is not pointing to existing file/directory.
"""
if not os.path.exists(source_path):
- raise ValueError(
- "Please enter a valid path, pointing to a valid file/directory."
- )
+ raise ValueError("Please enter a valid path, pointing to a file/directory.")
is_failed: bool = False
-
- msg = "String unnecessarily split in two by black. Please merge them manually."
+ file_path: str = ""
if os.path.isfile(source_path):
- for source_path, line_number in strings_to_concatenate(source_path):
- is_failed = True
- print(
- output_format.format(
- source_path=source_path, line_number=line_number, msg=msg
+ file_path = source_path
+ with open(file_path, "r") as file_obj:
+ for line_number, msg in function(file_obj):
+ is_failed = True
+ print(
+ output_format.format(
+ source_path=file_path, line_number=line_number, msg=msg
+ )
)
- )
for subdir, _, files in os.walk(source_path):
for file_name in files:
- if any(
+ if not any(
file_name.endswith(extension) for extension in FILE_EXTENSIONS_TO_CHECK
):
- for source_path, line_number in strings_to_concatenate(
- os.path.join(subdir, file_name)
- ):
+ continue
+
+ file_path = os.path.join(subdir, file_name)
+ with open(file_path, "r") as file_obj:
+ for line_number, msg in function(file_obj):
is_failed = True
print(
output_format.format(
- source_path=source_path, line_number=line_number, msg=msg
+ source_path=file_path, line_number=line_number, msg=msg
)
)
- return is_failed
-
-
-def strings_to_concatenate(source_path: str) -> Generator[Tuple[str, int], None, None]:
- """
- Yielding the strings that needs to be concatenated in a given file.
-
- Parameters
- ----------
- source_path : str
- File path pointing to a single file.
- Yields
- ------
- source_path : str
- Source file path.
- line_number : int
- Line number of unconcatenated string.
- """
- with open(source_path, "r") as file_name:
- tokens: List = list(tokenize.generate_tokens(file_name.readline))
-
- for current_token, next_token in zip(tokens, tokens[1:]):
- if current_token[0] == next_token[0] == token.STRING:
- yield source_path, current_token[2][0]
+ return is_failed
if __name__ == "__main__":
- parser = argparse.ArgumentParser(description="Validate concatenated strings")
+ available_validation_types: List[str] = [
+ "bare_pytest_raises",
+ "strings_to_concatenate",
+ "strings_with_wrong_placed_whitespace",
+ ]
+
+ parser = argparse.ArgumentParser(description="Unwanted patterns checker.")
parser.add_argument(
"path", nargs="?", default=".", help="Source path of file/directory to check."
@@ -120,10 +355,23 @@ def strings_to_concatenate(source_path: str) -> Generator[Tuple[str, int], None,
parser.add_argument(
"--format",
"-f",
- default="{source_path}:{line_number}:{msg}",
- help="Output format of the unconcatenated strings.",
+ default="{source_path}:{line_number}:{msg}.",
+ help="Output format of the error message.",
+ )
+ parser.add_argument(
+ "--validation-type",
+ "-vt",
+ choices=available_validation_types,
+ required=True,
+ help="Validation test case to check.",
)
args = parser.parse_args()
- sys.exit(main(source_path=args.path, output_format=args.format))
+ sys.exit(
+ main(
+ function=globals().get(args.validation_type), # type: ignore
+ source_path=args.path,
+ output_format=args.format,
+ )
+ )
| - [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
Unify test cases of #30467 #30708 #30737 | https://api.github.com/repos/pandas-dev/pandas/pulls/30755 | 2020-01-06T21:20:51Z | 2020-03-23T10:31:11Z | 2020-03-23T10:31:10Z | 2020-03-23T12:06:19Z |
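The heart of the new checker is a small `tokenize` pass. This stdlib-only sketch, condensed from the `strings_to_concatenate` function in the diff above, shows the adjacent-STRING-token detection:

```python
import io
import token
import tokenize


def strings_to_concatenate(file_obj):
    """Yield (line_number, msg) for adjacent STRING tokens, i.e. string
    literals that Python will implicitly concatenate.

    Condensed from the checker in the diff above.
    """
    tokens = list(tokenize.generate_tokens(file_obj.readline))
    for current_token, next_token in zip(tokens, tokens[1:]):
        if current_token.type == next_token.type == token.STRING:
            yield current_token.start[0], "String unnecessarily split in two by black."


bad = list(strings_to_concatenate(io.StringIO('msg = ("bar " "baz")\n')))
ok = list(strings_to_concatenate(io.StringIO('msg = "bar baz"\n')))
```

Running it over `("bar " "baz")` flags line 1, while a single merged literal passes clean.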
BUG: DTI/TDI .insert accepting incorrectly-dtyped NaT | diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index 40d3823c9700b..c2edfd53e1207 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -11,7 +11,7 @@
from pandas.core.dtypes.common import _NS_DTYPE, is_float, is_integer, is_scalar
from pandas.core.dtypes.dtypes import DatetimeTZDtype
-from pandas.core.dtypes.missing import isna
+from pandas.core.dtypes.missing import is_valid_nat_for_dtype, isna
from pandas.core.accessor import delegate_names
from pandas.core.arrays.datetimes import (
@@ -922,9 +922,14 @@ def insert(self, loc, item):
-------
new_index : Index
"""
- if is_scalar(item) and isna(item):
+ if is_valid_nat_for_dtype(item, self.dtype):
# GH 18295
item = self._na_value
+ elif is_scalar(item) and isna(item):
+ # i.e. timedelta64("NaT")
+ raise TypeError(
+ f"cannot insert {type(self).__name__} with incompatible label"
+ )
freq = None
diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index ee6e5b984ae7b..59fc53a17590c 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -16,7 +16,7 @@
is_timedelta64_ns_dtype,
pandas_dtype,
)
-from pandas.core.dtypes.missing import isna
+from pandas.core.dtypes.missing import is_valid_nat_for_dtype, isna
from pandas.core.accessor import delegate_names
from pandas.core.arrays import datetimelike as dtl
@@ -397,15 +397,16 @@ def insert(self, loc, item):
new_index : Index
"""
# try to convert if possible
- if _is_convertible_to_td(item):
- try:
- item = Timedelta(item)
- except ValueError:
- # e.g. str that can't be parsed to timedelta
- pass
- elif is_scalar(item) and isna(item):
+ if isinstance(item, self._data._recognized_scalars):
+ item = Timedelta(item)
+ elif is_valid_nat_for_dtype(item, self.dtype):
# GH 18295
item = self._na_value
+ elif is_scalar(item) and isna(item):
+ # i.e. datetime64("NaT")
+ raise TypeError(
+ f"cannot insert {type(self).__name__} with incompatible label"
+ )
freq = None
if isinstance(item, Timedelta) or (is_scalar(item) and isna(item)):
diff --git a/pandas/tests/indexes/datetimes/test_indexing.py b/pandas/tests/indexes/datetimes/test_indexing.py
index ef0d2cd2e48cc..210b28aa0c393 100644
--- a/pandas/tests/indexes/datetimes/test_indexing.py
+++ b/pandas/tests/indexes/datetimes/test_indexing.py
@@ -317,7 +317,9 @@ def test_take_fill_value_with_timezone(self):
class TestDatetimeIndex:
- @pytest.mark.parametrize("null", [None, np.nan, pd.NaT])
+ @pytest.mark.parametrize(
+ "null", [None, np.nan, np.datetime64("NaT"), pd.NaT, pd.NA]
+ )
@pytest.mark.parametrize("tz", [None, "UTC", "US/Eastern"])
def test_insert_nat(self, tz, null):
# GH#16537, GH#18295 (test missing)
@@ -326,6 +328,12 @@ def test_insert_nat(self, tz, null):
res = idx.insert(0, null)
tm.assert_index_equal(res, expected)
+ @pytest.mark.parametrize("tz", [None, "UTC", "US/Eastern"])
+ def test_insert_invalid_na(self, tz):
+ idx = pd.DatetimeIndex(["2017-01-01"], tz=tz)
+ with pytest.raises(TypeError, match="incompatible label"):
+ idx.insert(0, np.timedelta64("NaT"))
+
def test_insert(self):
idx = DatetimeIndex(["2000-01-04", "2000-01-01", "2000-01-02"], name="idx")
diff --git a/pandas/tests/indexes/timedeltas/test_indexing.py b/pandas/tests/indexes/timedeltas/test_indexing.py
index 0114dfef548de..b70a3d17a10ab 100644
--- a/pandas/tests/indexes/timedeltas/test_indexing.py
+++ b/pandas/tests/indexes/timedeltas/test_indexing.py
@@ -219,11 +219,29 @@ def test_insert(self):
assert result.name == expected.name
assert result.freq == expected.freq
+ @pytest.mark.parametrize(
+ "null", [None, np.nan, np.timedelta64("NaT"), pd.NaT, pd.NA]
+ )
+ def test_insert_nat(self, null):
# GH 18295 (test missing)
+ idx = timedelta_range("1day", "3day")
+ result = idx.insert(1, null)
expected = TimedeltaIndex(["1day", pd.NaT, "2day", "3day"])
- for na in (np.nan, pd.NaT, None):
- result = timedelta_range("1day", "3day").insert(1, na)
- tm.assert_index_equal(result, expected)
+ tm.assert_index_equal(result, expected)
+
+ def test_insert_invalid_na(self):
+ idx = TimedeltaIndex(["4day", "1day", "2day"], name="idx")
+ with pytest.raises(TypeError, match="incompatible label"):
+ idx.insert(0, np.datetime64("NaT"))
+
+ def test_insert_dont_cast_strings(self):
+ # To match DatetimeIndex and PeriodIndex behavior, dont try to
+ # parse strings to Timedelta
+ idx = timedelta_range("1day", "3day")
+
+ result = idx.insert(0, "1 Day")
+ assert result.dtype == object
+ assert result[0] == "1 Day"
def test_delete(self):
idx = timedelta_range(start="1 Days", periods=5, freq="D", name="idx")
| Also fixes TDI.insert trying to parse strings to Timedelta, which neither DTI nor PI do. | https://api.github.com/repos/pandas-dev/pandas/pulls/30754 | 2020-01-06T21:19:39Z | 2020-01-06T23:58:36Z | 2020-01-06T23:58:36Z | 2020-01-07T01:41:30Z |
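A quick sketch of the compatible half of the new behavior. The incompatible case (e.g. inserting `np.timedelta64("NaT")` into a `DatetimeIndex`) raised `TypeError` in this PR, but later pandas releases may instead fall back to object dtype, so only the dtype-matching case is shown here:

```python
import numpy as np
import pandas as pd

dti = pd.DatetimeIndex(["2017-01-01"])

# A NaT compatible with the index's dtype keeps the datetime64 dtype.
res = dti.insert(0, np.datetime64("NaT"))
```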
Fix: Warning(Attempt to set value to copy of a slice) | diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index ea59a6a49e649..de3a5e4749538 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -962,7 +962,7 @@ def setter(item, v):
s._maybe_update_cacher(clear=True)
# reset the sliced object if unique
- self.obj[item] = s
+ self.obj.loc[:, item] = s
# we need an iterable, with a ndim of at least 1
# eg. don't pass through np.array(0)
| I was getting a warning emitted from this line of code. Here is the fix I used to get rid of it, and here is the code that triggered it on my setup (Anaconda3, Windows 10):
```
import pandas as pd
ts_columns = []
# df is a pre-existing DataFrame from my session
for col in df.columns:
    if isinstance(df[col].dtype, pd.Timestamp):
        ts_columns.append(col)
```
- [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/30753 | 2020-01-06T20:27:00Z | 2020-01-06T23:36:18Z | null | 2023-05-11T01:19:25Z |
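A minimal illustration of the distinction this patch relies on (assuming any reasonably recent pandas): writing through `.loc[:, col]` assigns into the original frame's existing column, whereas plain `__setitem__` on a sliced object can be flagged as writing to a copy:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3.0, 4.0]})

# Writing through .loc targets the original frame's column directly,
# so no "value set on a copy of a slice" ambiguity arises.
df.loc[:, "a"] = [10, 20]
```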
REF: share _validate_fill_value | diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index b9a6daf7c630a..b0985332092ae 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -591,7 +591,17 @@ def _validate_fill_value(self, fill_value):
------
ValueError
"""
- raise AbstractMethodError(self)
+ if isna(fill_value):
+ fill_value = iNaT
+ elif isinstance(fill_value, self._recognized_scalars):
+ self._check_compatible_with(fill_value)
+ fill_value = self._scalar_type(fill_value)
+ fill_value = self._unbox_scalar(fill_value)
+ else:
+ raise ValueError(
+ f"'fill_value' should be a {self._scalar_type}. Got '{fill_value}'."
+ )
+ return fill_value
def take(self, indices, allow_fill=False, fill_value=None):
if allow_fill:
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index 06d280a3dc25b..267780219c74e 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -20,7 +20,6 @@
)
import pandas.compat as compat
from pandas.errors import PerformanceWarning
-from pandas.util._decorators import Appender
from pandas.core.dtypes.common import (
_INT64_DTYPE,
@@ -700,20 +699,6 @@ def astype(self, dtype, copy=True):
return self.to_period(freq=dtype.freq)
return dtl.DatetimeLikeArrayMixin.astype(self, dtype, copy)
- # ----------------------------------------------------------------
- # ExtensionArray Interface
-
- @Appender(dtl.DatetimeLikeArrayMixin._validate_fill_value.__doc__)
- def _validate_fill_value(self, fill_value):
- if isna(fill_value):
- fill_value = iNaT
- elif isinstance(fill_value, (datetime, np.datetime64)):
- self._assert_tzawareness_compat(fill_value)
- fill_value = Timestamp(fill_value).value
- else:
- raise ValueError(f"'fill_value' should be a Timestamp. Got '{fill_value}'.")
- return fill_value
-
# -----------------------------------------------------------------
# Rendering Methods
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index e7e1c84b1c070..2b92d6f1cdbe3 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -21,7 +21,7 @@
)
from pandas._libs.tslibs.timedeltas import Timedelta, delta_to_nanoseconds
import pandas.compat as compat
-from pandas.util._decorators import Appender, cache_readonly
+from pandas.util._decorators import cache_readonly
from pandas.core.dtypes.common import (
_TD_DTYPE,
@@ -505,17 +505,6 @@ def to_timestamp(self, freq=None, how="start"):
# --------------------------------------------------------------------
# Array-like / EA-Interface Methods
- @Appender(dtl.DatetimeLikeArrayMixin._validate_fill_value.__doc__)
- def _validate_fill_value(self, fill_value):
- if isna(fill_value):
- fill_value = iNaT
- elif isinstance(fill_value, Period):
- self._check_compatible_with(fill_value)
- fill_value = fill_value.ordinal
- else:
- raise ValueError(f"'fill_value' should be a Period. Got '{fill_value}'.")
- return fill_value
-
def _values_for_argsort(self):
return self._data
diff --git a/pandas/core/arrays/timedeltas.py b/pandas/core/arrays/timedeltas.py
index ddf3af538874e..616f7b63ab25c 100644
--- a/pandas/core/arrays/timedeltas.py
+++ b/pandas/core/arrays/timedeltas.py
@@ -13,7 +13,6 @@
)
import pandas.compat as compat
from pandas.compat.numpy import function as nv
-from pandas.util._decorators import Appender
from pandas.core.dtypes.common import (
_NS_DTYPE,
@@ -367,16 +366,6 @@ def _maybe_clear_freq(self):
# ----------------------------------------------------------------
# Array-Like / EA-Interface Methods
- @Appender(dtl.DatetimeLikeArrayMixin._validate_fill_value.__doc__)
- def _validate_fill_value(self, fill_value):
- if isna(fill_value):
- fill_value = iNaT
- elif isinstance(fill_value, (timedelta, np.timedelta64, Tick)):
- fill_value = Timedelta(fill_value).value
- else:
- raise ValueError(f"'fill_value' should be a Timedelta. Got '{fill_value}'.")
- return fill_value
-
def astype(self, dtype, copy=True):
# We handle
# --> timedelta64[ns]
| Made feasible by #30721. | https://api.github.com/repos/pandas-dev/pandas/pulls/30752 | 2020-01-06T19:09:07Z | 2020-01-06T19:49:35Z | 2020-01-06T19:49:35Z | 2020-01-06T19:51:00Z |
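The diff above deletes the per-subclass `_validate_fill_value` overrides from the datetime, period, and timedelta arrays, relying on a shared base-class version instead. A minimal, hypothetical sketch of that pattern is below (class names and the `None`-for-NA shortcut are invented for illustration; this is not the actual pandas internals):

```python
import datetime

# Each deleted override did the same three things: NA -> iNaT sentinel,
# recognized scalar -> int64 via unboxing, anything else -> ValueError.
# One base-class method plus a small subclass hook can express that once.
iNaT = -9223372036854775808  # numpy's NaT sentinel as a plain int

class DatetimeLikeSketch:
    _recognized_scalars = ()   # e.g. (datetime.datetime, np.datetime64)
    _scalar_name = "scalar"

    def _unbox_scalar(self, value):
        # subclass hook: convert a recognized scalar to its int64 form
        raise NotImplementedError

    def _validate_fill_value(self, fill_value):
        if fill_value is None:  # stand-in for pandas' isna(fill_value)
            return iNaT
        if isinstance(fill_value, self._recognized_scalars):
            return self._unbox_scalar(fill_value)
        raise ValueError(
            f"'fill_value' should be a {self._scalar_name}. Got '{fill_value}'."
        )

class TimedeltaSketch(DatetimeLikeSketch):
    _recognized_scalars = (datetime.timedelta,)
    _scalar_name = "Timedelta"

    def _unbox_scalar(self, value):
        return int(value.total_seconds() * 1_000_000_000)  # nanoseconds
```

Only the `_recognized_scalars` tuple, the scalar name used in the error message, and the unboxing hook differ per subclass, which is what makes the shared base-class implementation feasible.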
REF: share comparison methods for DTA/TDA/PA | diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index b0985332092ae..0fadf3a05a1d3 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -11,6 +11,7 @@
from pandas._libs.tslibs.timedeltas import Timedelta, delta_to_nanoseconds
from pandas._libs.tslibs.timestamps import RoundTo, round_nsint64
from pandas._typing import DatetimeLikeScalar
+from pandas.compat import set_function_name
from pandas.compat.numpy import function as nv
from pandas.errors import AbstractMethodError, NullFrequencyError, PerformanceWarning
from pandas.util._decorators import Appender, Substitution
@@ -37,12 +38,12 @@
from pandas.core.dtypes.inference import is_array_like
from pandas.core.dtypes.missing import is_valid_nat_for_dtype, isna
-from pandas.core import missing, nanops
+from pandas.core import missing, nanops, ops
from pandas.core.algorithms import checked_add_with_arr, take, unique1d, value_counts
import pandas.core.common as com
from pandas.core.indexers import check_bool_array_indexer
from pandas.core.ops.common import unpack_zerodim_and_defer
-from pandas.core.ops.invalid import make_invalid_op
+from pandas.core.ops.invalid import invalid_comparison, make_invalid_op
from pandas.tseries import frequencies
from pandas.tseries.offsets import DateOffset, Tick
@@ -50,6 +51,81 @@
from .base import ExtensionArray, ExtensionOpsMixin
+def _datetimelike_array_cmp(cls, op):
+ """
+ Wrap comparison operations to convert Timestamp/Timedelta/Period-like to
+ boxed scalars/arrays.
+ """
+ opname = f"__{op.__name__}__"
+ nat_result = opname == "__ne__"
+
+ @unpack_zerodim_and_defer(opname)
+ def wrapper(self, other):
+
+ if isinstance(other, str):
+ try:
+ # GH#18435 strings get a pass from tzawareness compat
+ other = self._scalar_from_string(other)
+ except ValueError:
+ # failed to parse as Timestamp/Timedelta/Period
+ return invalid_comparison(self, other, op)
+
+ if isinstance(other, self._recognized_scalars) or other is NaT:
+ other = self._scalar_type(other)
+ self._check_compatible_with(other)
+
+ other_i8 = self._unbox_scalar(other)
+
+ result = op(self.view("i8"), other_i8)
+ if isna(other):
+ result.fill(nat_result)
+
+ elif not is_list_like(other):
+ return invalid_comparison(self, other, op)
+
+ elif len(other) != len(self):
+ raise ValueError("Lengths must match")
+
+ else:
+ if isinstance(other, list):
+ # TODO: could use pd.Index to do inference?
+ other = np.array(other)
+
+ if not isinstance(other, (np.ndarray, type(self))):
+ return invalid_comparison(self, other, op)
+
+ if is_object_dtype(other):
+ # We have to use comp_method_OBJECT_ARRAY instead of numpy
+ # comparison otherwise it would fail to raise when
+ # comparing tz-aware and tz-naive
+ with np.errstate(all="ignore"):
+ result = ops.comp_method_OBJECT_ARRAY(
+ op, self.astype(object), other
+ )
+ o_mask = isna(other)
+
+ elif not type(self)._is_recognized_dtype(other.dtype):
+ return invalid_comparison(self, other, op)
+
+ else:
+ # For PeriodDType this casting is unnecessary
+ other = type(self)._from_sequence(other)
+ self._check_compatible_with(other)
+
+ result = op(self.view("i8"), other.view("i8"))
+ o_mask = other._isnan
+
+ if o_mask.any():
+ result[o_mask] = nat_result
+
+ if self._hasnans:
+ result[self._isnan] = nat_result
+
+ return result
+
+ return set_function_name(wrapper, opname, cls)
+
+
class AttributesMixin:
_data: np.ndarray
@@ -934,6 +1010,7 @@ def _is_unique(self):
# ------------------------------------------------------------------
# Arithmetic Methods
+ _create_comparison_method = classmethod(_datetimelike_array_cmp)
# pow is invalid for all three subclasses; TimedeltaArray will override
# the multiplication and division ops
@@ -1485,6 +1562,8 @@ def mean(self, skipna=True):
return self._box_func(result)
+DatetimeLikeArrayMixin._add_comparison_ops()
+
# -------------------------------------------------------------------
# Shared Constructor Helpers
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index 267780219c74e..d9c4b27da8698 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -18,7 +18,6 @@
timezones,
tzconversion,
)
-import pandas.compat as compat
from pandas.errors import PerformanceWarning
from pandas.core.dtypes.common import (
@@ -32,7 +31,6 @@
is_dtype_equal,
is_extension_array_dtype,
is_float_dtype,
- is_list_like,
is_object_dtype,
is_period_dtype,
is_string_dtype,
@@ -43,13 +41,10 @@
from pandas.core.dtypes.generic import ABCIndexClass, ABCPandasArray, ABCSeries
from pandas.core.dtypes.missing import isna
-from pandas.core import ops
from pandas.core.algorithms import checked_add_with_arr
from pandas.core.arrays import datetimelike as dtl
from pandas.core.arrays._ranges import generate_regular_range
import pandas.core.common as com
-from pandas.core.ops.common import unpack_zerodim_and_defer
-from pandas.core.ops.invalid import invalid_comparison
from pandas.tseries.frequencies import get_period_alias, to_offset
from pandas.tseries.offsets import Day, Tick
@@ -131,81 +126,6 @@ def f(self):
return property(f)
-def _dt_array_cmp(cls, op):
- """
- Wrap comparison operations to convert datetime-like to datetime64
- """
- opname = f"__{op.__name__}__"
- nat_result = opname == "__ne__"
-
- @unpack_zerodim_and_defer(opname)
- def wrapper(self, other):
-
- if isinstance(other, str):
- try:
- # GH#18435 strings get a pass from tzawareness compat
- other = self._scalar_from_string(other)
- except ValueError:
- # string that cannot be parsed to Timestamp
- return invalid_comparison(self, other, op)
-
- if isinstance(other, self._recognized_scalars) or other is NaT:
- other = self._scalar_type(other)
- self._assert_tzawareness_compat(other)
-
- other_i8 = other.value
-
- result = op(self.view("i8"), other_i8)
- if isna(other):
- result.fill(nat_result)
-
- elif not is_list_like(other):
- return invalid_comparison(self, other, op)
-
- elif len(other) != len(self):
- raise ValueError("Lengths must match")
-
- else:
- if isinstance(other, list):
- other = np.array(other)
-
- if not isinstance(other, (np.ndarray, cls)):
- # Following Timestamp convention, __eq__ is all-False
- # and __ne__ is all True, others raise TypeError.
- return invalid_comparison(self, other, op)
-
- if is_object_dtype(other):
- # We have to use comp_method_OBJECT_ARRAY instead of numpy
- # comparison otherwise it would fail to raise when
- # comparing tz-aware and tz-naive
- with np.errstate(all="ignore"):
- result = ops.comp_method_OBJECT_ARRAY(
- op, self.astype(object), other
- )
- o_mask = isna(other)
-
- elif not cls._is_recognized_dtype(other.dtype):
- # e.g. is_timedelta64_dtype(other)
- return invalid_comparison(self, other, op)
-
- else:
- self._assert_tzawareness_compat(other)
- other = type(self)._from_sequence(other)
-
- result = op(self.view("i8"), other.view("i8"))
- o_mask = other._isnan
-
- if o_mask.any():
- result[o_mask] = nat_result
-
- if self._hasnans:
- result[self._isnan] = nat_result
-
- return result
-
- return compat.set_function_name(wrapper, opname, cls)
-
-
class DatetimeArray(dtl.DatetimeLikeArrayMixin, dtl.TimelikeOps, dtl.DatelikeOps):
"""
Pandas ExtensionArray for tz-naive or tz-aware datetime data.
@@ -324,7 +244,7 @@ def __init__(self, values, dtype=_NS_DTYPE, freq=None, copy=False):
raise TypeError(msg)
elif values.tz:
dtype = values.dtype
- # freq = validate_values_freq(values, freq)
+
if freq is None:
freq = values.freq
values = values._data
@@ -714,8 +634,6 @@ def _format_native_types(self, na_rep="NaT", date_format=None, **kwargs):
# -----------------------------------------------------------------
# Comparison Methods
- _create_comparison_method = classmethod(_dt_array_cmp)
-
def _has_same_tz(self, other):
zzone = self._timezone
@@ -1767,9 +1685,6 @@ def to_julian_date(self):
)
-DatetimeArray._add_comparison_ops()
-
-
# -------------------------------------------------------------------
# Constructor Helpers
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index 2b92d6f1cdbe3..d1c574adeb236 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -20,7 +20,6 @@
period_asfreq_arr,
)
from pandas._libs.tslibs.timedeltas import Timedelta, delta_to_nanoseconds
-import pandas.compat as compat
from pandas.util._decorators import cache_readonly
from pandas.core.dtypes.common import (
@@ -28,8 +27,6 @@
ensure_object,
is_datetime64_dtype,
is_float_dtype,
- is_list_like,
- is_object_dtype,
is_period_dtype,
pandas_dtype,
)
@@ -42,12 +39,9 @@
)
from pandas.core.dtypes.missing import isna, notna
-from pandas.core import ops
import pandas.core.algorithms as algos
from pandas.core.arrays import datetimelike as dtl
import pandas.core.common as com
-from pandas.core.ops.common import unpack_zerodim_and_defer
-from pandas.core.ops.invalid import invalid_comparison
from pandas.tseries import frequencies
from pandas.tseries.offsets import DateOffset, Tick, _delta_to_tick
@@ -64,77 +58,6 @@ def f(self):
return property(f)
-def _period_array_cmp(cls, op):
- """
- Wrap comparison operations to convert Period-like to PeriodDtype
- """
- opname = f"__{op.__name__}__"
- nat_result = opname == "__ne__"
-
- @unpack_zerodim_and_defer(opname)
- def wrapper(self, other):
-
- if isinstance(other, str):
- try:
- other = self._scalar_from_string(other)
- except ValueError:
- # string that can't be parsed as Period
- return invalid_comparison(self, other, op)
-
- if isinstance(other, self._recognized_scalars) or other is NaT:
- other = self._scalar_type(other)
- self._check_compatible_with(other)
-
- other_i8 = self._unbox_scalar(other)
-
- result = op(self.view("i8"), other_i8)
- if isna(other):
- result.fill(nat_result)
-
- elif not is_list_like(other):
- return invalid_comparison(self, other, op)
-
- elif len(other) != len(self):
- raise ValueError("Lengths must match")
-
- else:
- if isinstance(other, list):
- # TODO: could use pd.Index to do inference?
- other = np.array(other)
-
- if not isinstance(other, (np.ndarray, cls)):
- return invalid_comparison(self, other, op)
-
- if is_object_dtype(other):
- with np.errstate(all="ignore"):
- result = ops.comp_method_OBJECT_ARRAY(
- op, self.astype(object), other
- )
- o_mask = isna(other)
-
- elif not cls._is_recognized_dtype(other.dtype):
- # e.g. is_timedelta64_dtype(other)
- return invalid_comparison(self, other, op)
-
- else:
- assert isinstance(other, cls), type(other)
-
- self._check_compatible_with(other)
-
- result = op(self.view("i8"), other.view("i8"))
- o_mask = other._isnan
-
- if o_mask.any():
- result[o_mask] = nat_result
-
- if self._hasnans:
- result[self._isnan] = nat_result
-
- return result
-
- return compat.set_function_name(wrapper, opname, cls)
-
-
class PeriodArray(dtl.DatetimeLikeArrayMixin, dtl.DatelikeOps):
"""
Pandas ExtensionArray for storing Period data.
@@ -639,7 +562,6 @@ def astype(self, dtype, copy=True):
# ------------------------------------------------------------------
# Arithmetic Methods
- _create_comparison_method = classmethod(_period_array_cmp)
def _sub_datelike(self, other):
assert other is not NaT
@@ -810,9 +732,6 @@ def _check_timedeltalike_freq_compat(self, other):
raise raise_on_incompatible(self, other)
-PeriodArray._add_comparison_ops()
-
-
def raise_on_incompatible(left, right):
"""
Helper function to render a consistent error message when raising
diff --git a/pandas/core/arrays/timedeltas.py b/pandas/core/arrays/timedeltas.py
index 616f7b63ab25c..fc92521c97dae 100644
--- a/pandas/core/arrays/timedeltas.py
+++ b/pandas/core/arrays/timedeltas.py
@@ -11,7 +11,6 @@
parse_timedelta_unit,
precision_from_unit,
)
-import pandas.compat as compat
from pandas.compat.numpy import function as nv
from pandas.core.dtypes.common import (
@@ -20,7 +19,6 @@
is_dtype_equal,
is_float_dtype,
is_integer_dtype,
- is_list_like,
is_object_dtype,
is_scalar,
is_string_dtype,
@@ -37,11 +35,9 @@
)
from pandas.core.dtypes.missing import isna
-from pandas.core import nanops, ops
+from pandas.core import nanops
from pandas.core.algorithms import checked_add_with_arr
import pandas.core.common as com
-from pandas.core.ops.common import unpack_zerodim_and_defer
-from pandas.core.ops.invalid import invalid_comparison
from pandas.tseries.frequencies import to_offset
from pandas.tseries.offsets import Tick
@@ -71,76 +67,6 @@ def f(self):
return property(f)
-def _td_array_cmp(cls, op):
- """
- Wrap comparison operations to convert timedelta-like to timedelta64
- """
- opname = f"__{op.__name__}__"
- nat_result = opname == "__ne__"
-
- @unpack_zerodim_and_defer(opname)
- def wrapper(self, other):
-
- if isinstance(other, str):
- try:
- other = self._scalar_from_string(other)
- except ValueError:
- # failed to parse as timedelta
- return invalid_comparison(self, other, op)
-
- if isinstance(other, self._recognized_scalars) or other is NaT:
- other = self._scalar_type(other)
- self._check_compatible_with(other)
-
- other_i8 = self._unbox_scalar(other)
-
- result = op(self.view("i8"), other_i8)
- if isna(other):
- result.fill(nat_result)
-
- elif not is_list_like(other):
- return invalid_comparison(self, other, op)
-
- elif len(other) != len(self):
- raise ValueError("Lengths must match")
-
- else:
- if isinstance(other, list):
- other = np.array(other)
-
- if not isinstance(other, (np.ndarray, cls)):
- return invalid_comparison(self, other, op)
-
- if is_object_dtype(other):
- with np.errstate(all="ignore"):
- result = ops.comp_method_OBJECT_ARRAY(
- op, self.astype(object), other
- )
- o_mask = isna(other)
-
- elif not cls._is_recognized_dtype(other.dtype):
- # e.g. other is datetimearray
- return invalid_comparison(self, other, op)
-
- else:
- other = type(self)._from_sequence(other)
-
- self._check_compatible_with(other)
-
- result = op(self.view("i8"), other.view("i8"))
- o_mask = other._isnan
-
- if o_mask.any():
- result[o_mask] = nat_result
-
- if self._hasnans:
- result[self._isnan] = nat_result
-
- return result
-
- return compat.set_function_name(wrapper, opname, cls)
-
-
class TimedeltaArray(dtl.DatetimeLikeArrayMixin, dtl.TimelikeOps):
"""
Pandas ExtensionArray for timedelta data.
@@ -468,8 +394,6 @@ def _format_native_types(self, na_rep="NaT", date_format=None, **kwargs):
# ----------------------------------------------------------------
# Arithmetic Methods
- _create_comparison_method = classmethod(_td_array_cmp)
-
def _add_offset(self, other):
assert not isinstance(other, Tick)
raise TypeError(
@@ -965,9 +889,6 @@ def f(x):
return result
-TimedeltaArray._add_comparison_ops()
-
-
# ---------------------------------------------------------------------
# Constructor Helpers
| https://api.github.com/repos/pandas-dev/pandas/pulls/30751 | 2020-01-06T18:55:08Z | 2020-01-06T23:53:41Z | 2020-01-06T23:53:41Z | 2020-01-07T01:56:31Z | |
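The refactor above replaces the three near-identical `_dt_array_cmp` / `_td_array_cmp` / `_period_array_cmp` factories with a single `_datetimelike_array_cmp` on the mixin. A heavily simplified sketch of that factory pattern follows (names are invented; the real wrapper additionally unboxes scalars to `i8` views and patches NaT positions):

```python
import operator

# One function builds a comparison wrapper per operator, and a classmethod
# installs the wrappers, so each array subclass no longer carries its own
# copy of the comparison machinery.
def _make_cmp(cls, op):
    opname = f"__{op.__name__}__"

    def wrapper(self, other):
        # elementwise comparison; the pandas version works on i8 views
        return [op(a, b) for a, b in zip(self.values, other.values)]

    wrapper.__name__ = opname  # stand-in for compat.set_function_name
    return wrapper

class ArraySketch:
    def __init__(self, values):
        self.values = values

    @classmethod
    def _add_comparison_ops(cls):
        for op in (operator.eq, operator.ne, operator.lt, operator.gt):
            setattr(cls, f"__{op.__name__}__", _make_cmp(cls, op))

ArraySketch._add_comparison_ops()
```

Because Python looks up dunder methods on the type, installing the wrappers with `setattr` at class level (mirroring `DatetimeLikeArrayMixin._add_comparison_ops()` in the diff) makes `a == b`, `a < b`, etc. dispatch to the generated wrappers.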
Fix PR08 errors | diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index d9e68b64f526d..1415d1e0a1226 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -2673,7 +2673,7 @@ def get_loc_level(self, key, level=0, drop_level: bool = True):
key : label or sequence of labels
level : int/level name or list thereof, optional
drop_level : bool, default True
- if ``False``, the resulting index will not drop any level.
+ If ``False``, the resulting index will not drop any level.
Returns
-------
diff --git a/pandas/tseries/frequencies.py b/pandas/tseries/frequencies.py
index ac64a875ca0aa..e2d007cd2d7f8 100644
--- a/pandas/tseries/frequencies.py
+++ b/pandas/tseries/frequencies.py
@@ -244,7 +244,7 @@ def infer_freq(index, warn: bool = True) -> Optional[str]:
Parameters
----------
index : DatetimeIndex or TimedeltaIndex
- if passed a Series will use the values of the series (NOT THE INDEX).
+ If passed a Series will use the values of the series (NOT THE INDEX).
warn : bool, default True
Returns
| Fixes PR08 errors. When I ran the script, a lot of them seemed to be false positives; these are the ones I'm pretty sure should be fixed:
```
pandas.infer_freq: Parameter "index" description should start with a capital letter
pandas.MultiIndex.get_loc_level: Parameter "drop_level" description should start with a capital letter
```
Related to #27977 cc @datapythonista
- [ ] closes #xxxx
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/30749 | 2020-01-06T18:22:38Z | 2020-01-06T18:59:43Z | 2020-01-06T18:59:43Z | 2020-01-06T18:59:49Z |
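The PR08 check referenced above flags parameter descriptions that do not start with a capital letter. A toy version of such a check is sketched below (the real validator lives in `scripts/validate_docstrings.py` / numpydoc; this regex-based approximation is an assumption for illustration only):

```python
import re

# Match "name : type" lines in a numpydoc Parameters section and capture
# the first non-whitespace character of the indented description line.
PARAM_RE = re.compile(r"^(\w+) : .*\n\s+(\S)", re.MULTILINE)

def pr08_violations(docstring):
    """Return parameter names whose description starts lowercase."""
    return [
        name
        for name, first_char in PARAM_RE.findall(docstring)
        if first_char.islower()
    ]

# The `index` entry below mirrors the docstring fixed in the diff above.
doc = """
Parameters
----------
index : DatetimeIndex or TimedeltaIndex
    if passed a Series will use the values of the series.
warn : bool, default True
    Whether to warn.
"""
```

Running `pr08_violations(doc)` flags `index` but not `warn`, matching the two errors quoted in the PR body.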
PERF: Categorical getitem perf | diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 7386c9d0ef1de..524e3fcf309cb 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -39,7 +39,7 @@
)
from pandas.core.dtypes.dtypes import CategoricalDtype
from pandas.core.dtypes.generic import ABCIndexClass, ABCSeries
-from pandas.core.dtypes.inference import is_hashable
+from pandas.core.dtypes.inference import is_array_like, is_hashable
from pandas.core.dtypes.missing import isna, notna
from pandas.core import ops
@@ -1998,7 +1998,10 @@ def __getitem__(self, key):
else:
return self.categories[i]
- elif com.is_bool_indexer(key):
+ if is_list_like(key) and not is_array_like(key):
+ key = np.asarray(key)
+
+ if com.is_bool_indexer(key):
key = check_bool_array_indexer(self, key)
return self._constructor(
| Convert to an array earlier on.
Closes https://github.com/pandas-dev/pandas/issues/30744 | https://api.github.com/repos/pandas-dev/pandas/pulls/30747 | 2020-01-06T17:42:48Z | 2020-01-06T19:28:11Z | 2020-01-06T19:28:11Z | 2020-01-06T19:29:39Z |
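The perf fix above converts list-like (but not array-like) keys to an ndarray once, before the boolean-indexer check, instead of re-inspecting the raw key in each branch. A pure-Python sketch of that shape of `__getitem__` (function and variable names invented; `list(key)` stands in for `np.asarray(key)`):

```python
def take_sketch(codes, key):
    if isinstance(key, int):                 # scalar lookup fast path
        return codes[key]
    if not isinstance(key, (list, tuple)):   # list-like but not array-like
        key = list(key)                      # normalize once, up front
    if key and all(isinstance(k, bool) for k in key):
        # mirrors check_bool_array_indexer: mask must match the length
        if len(key) != len(codes):
            raise IndexError("Boolean index has wrong length")
        return [c for c, m in zip(codes, key) if m]
    return [codes[k] for k in key]           # positional take
```

With the key normalized first, the boolean-mask check and the positional take both see the same canonical form, which is the idea behind moving the `np.asarray` conversion ahead of `com.is_bool_indexer` in the diff.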
CI: Using docstring validator from numpydoc | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index a90774d2e8ff1..5be34eee69a91 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -311,8 +311,8 @@ fi
### DOCSTRINGS ###
if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
- MSG='Validate docstrings (GL03, GL04, GL05, GL06, GL07, GL09, GL10, SS04, SS05, PR03, PR04, PR05, PR10, EX04, RT01, RT04, RT05, SA01, SA02, SA03, SA05)' ; echo $MSG
- $BASE_DIR/scripts/validate_docstrings.py --format=azure --errors=GL03,GL04,GL05,GL06,GL07,GL09,GL10,SS04,SS05,PR03,PR04,PR05,PR10,EX04,RT01,RT04,RT05,SA01,SA02,SA03,SA05
+ MSG='Validate docstrings (GL03, GL04, GL05, GL06, GL07, GL09, GL10, SS04, SS05, PR03, PR04, PR05, PR10, EX04, RT01, RT04, RT05, SA02, SA03, SA05)' ; echo $MSG
+ $BASE_DIR/scripts/validate_docstrings.py --format=actions --errors=GL03,GL04,GL05,GL06,GL07,GL09,GL10,SS04,SS05,PR03,PR04,PR05,PR10,EX04,RT01,RT04,RT05,SA02,SA03,SA05
RET=$(($RET + $?)) ; echo $MSG "DONE"
fi
diff --git a/environment.yml b/environment.yml
index e244350a0bea0..5f1184e921119 100644
--- a/environment.yml
+++ b/environment.yml
@@ -27,7 +27,6 @@ dependencies:
# documentation
- gitpython # obtain contributors from git for whatsnew
- sphinx
- - numpydoc>=0.9.0
# documentation (jupyter notebooks)
- nbconvert>=5.4.1
@@ -105,3 +104,4 @@ dependencies:
- tabulate>=0.8.3 # DataFrame.to_markdown
- pip:
- git+https://github.com/pandas-dev/pandas-sphinx-theme.git@master
+ - git+https://github.com/numpy/numpydoc
diff --git a/requirements-dev.txt b/requirements-dev.txt
index f4f5fed82662c..c9e4b6a1e3b1e 100644
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -16,7 +16,6 @@ mypy==0.730
pycodestyle
gitpython
sphinx
-numpydoc>=0.9.0
nbconvert>=5.4.1
nbsphinx
pandoc
@@ -70,4 +69,5 @@ sqlalchemy
xarray
pyreadstat
tabulate>=0.8.3
-git+https://github.com/pandas-dev/pandas-sphinx-theme.git@master
\ No newline at end of file
+git+https://github.com/pandas-dev/pandas-sphinx-theme.git@master
+git+https://github.com/numpy/numpydoc
\ No newline at end of file
diff --git a/scripts/tests/test_validate_docstrings.py b/scripts/tests/test_validate_docstrings.py
index a1bccb1dd1629..b11de0c4ad860 100644
--- a/scripts/tests/test_validate_docstrings.py
+++ b/scripts/tests/test_validate_docstrings.py
@@ -1,819 +1,52 @@
-import functools
import io
-import random
-import string
import textwrap
-import numpy as np
import pytest
import validate_docstrings
-import pandas as pd
-validate_one = validate_docstrings.validate_one
-
-
-class GoodDocStrings:
- """
- Collection of good doc strings.
-
- This class contains a lot of docstrings that should pass the validation
- script without any errors.
- """
-
- def plot(self, kind, color="blue", **kwargs):
- """
- Generate a plot.
-
- Render the data in the Series as a matplotlib plot of the
- specified kind.
-
- Parameters
- ----------
- kind : str
- Kind of matplotlib plot.
- color : str, default 'blue'
- Color name or rgb code.
- **kwargs
- These parameters will be passed to the matplotlib plotting
- function.
- """
- pass
-
- def swap(self, arr, i, j, *args, **kwargs):
- """
- Swap two indicies on an array.
-
- Parameters
- ----------
- arr : list
- The list having indexes swapped.
- i, j : int
- The indexes being swapped.
- *args, **kwargs
- Extraneous parameters are being permitted.
- """
- pass
-
- def sample(self):
- """
- Generate and return a random number.
-
- The value is sampled from a continuous uniform distribution between
- 0 and 1.
-
- Returns
- -------
- float
- Random number generated.
- """
- return random.random()
-
- @functools.lru_cache(None)
- def decorated_sample(self, max):
- """
- Generate and return a random integer between 0 and max.
-
- Parameters
- ----------
- max : int
- The maximum value of the random number.
-
- Returns
- -------
- int
- Random number generated.
- """
- return random.randint(0, max)
-
- def random_letters(self):
- """
- Generate and return a sequence of random letters.
-
- The length of the returned string is also random, and is also
- returned.
-
- Returns
- -------
- length : int
- Length of the returned string.
- letters : str
- String of random letters.
- """
- length = random.randint(1, 10)
- letters = "".join(random.sample(string.ascii_lowercase, length))
- return length, letters
-
- def sample_values(self):
- """
- Generate an infinite sequence of random numbers.
-
- The values are sampled from a continuous uniform distribution between
- 0 and 1.
-
- Yields
- ------
- float
- Random number generated.
- """
- while True:
- yield random.random()
-
- def head(self):
- """
- Return the first 5 elements of the Series.
-
- This function is mainly useful to preview the values of the
- Series without displaying the whole of it.
-
- Returns
- -------
- Series
- Subset of the original series with the 5 first values.
-
- See Also
- --------
- Series.tail : Return the last 5 elements of the Series.
- Series.iloc : Return a slice of the elements in the Series,
- which can also be used to return the first or last n.
- """
- return self.iloc[:5]
-
- def head1(self, n=5):
- """
- Return the first elements of the Series.
-
- This function is mainly useful to preview the values of the
- Series without displaying the whole of it.
-
- Parameters
- ----------
- n : int
- Number of values to return.
-
- Returns
- -------
- Series
- Subset of the original series with the n first values.
-
- See Also
- --------
- tail : Return the last n elements of the Series.
-
- Examples
- --------
- >>> s = pd.Series(['Ant', 'Bear', 'Cow', 'Dog', 'Falcon'])
- >>> s.head()
- 0 Ant
- 1 Bear
- 2 Cow
- 3 Dog
- 4 Falcon
- dtype: object
-
- With the `n` parameter, we can change the number of returned rows:
-
- >>> s.head(n=3)
- 0 Ant
- 1 Bear
- 2 Cow
- dtype: object
- """
- return self.iloc[:n]
-
- def contains(self, pat, case=True, na=np.nan):
- """
- Return whether each value contains `pat`.
-
- In this case, we are illustrating how to use sections, even
- if the example is simple enough and does not require them.
-
- Parameters
- ----------
- pat : str
- Pattern to check for within each element.
- case : bool, default True
- Whether check should be done with case sensitivity.
- na : object, default np.nan
- Fill value for missing data.
-
- Examples
- --------
- >>> s = pd.Series(['Antelope', 'Lion', 'Zebra', np.nan])
- >>> s.str.contains(pat='a')
- 0 False
- 1 False
- 2 True
- 3 NaN
- dtype: object
-
- **Case sensitivity**
-
- With `case_sensitive` set to `False` we can match `a` with both
- `a` and `A`:
-
- >>> s.str.contains(pat='a', case=False)
- 0 True
- 1 False
- 2 True
- 3 NaN
- dtype: object
-
- **Missing values**
-
- We can fill missing values in the output using the `na` parameter:
-
- >>> s.str.contains(pat='a', na=False)
- 0 False
- 1 False
- 2 True
- 3 False
- dtype: bool
- """
- pass
-
- def mode(self, axis, numeric_only):
- """
- Ensure reST directives don't affect checks for leading periods.
-
- Parameters
- ----------
- axis : str
- Sentence ending in period, followed by single directive.
-
- .. versionchanged:: 0.1.2
-
- numeric_only : bool
- Sentence ending in period, followed by multiple directives.
-
- .. versionadded:: 0.1.2
- .. deprecated:: 0.00.0
- A multiline description,
- which spans another line.
- """
- pass
-
- def good_imports(self):
- """
- Ensure import other than numpy and pandas are fine.
-
- Examples
- --------
- This example does not import pandas or import numpy.
- >>> import datetime
- >>> datetime.MAXYEAR
- 9999
- """
- pass
-
- def no_returns(self):
- """
- Say hello and have no returns.
- """
- pass
-
- def empty_returns(self):
- """
- Say hello and always return None.
-
- Since this function never returns a value, this
- docstring doesn't need a return section.
- """
-
- def say_hello():
- return "Hello World!"
-
- say_hello()
- if True:
- return
- else:
- return None
-
- def multiple_variables_on_one_line(self, matrix, a, b, i, j):
- """
- Swap two values in a matrix.
-
- Parameters
- ----------
- matrix : list of list
- A double list that represents a matrix.
- a, b : int
- The indicies of the first value.
- i, j : int
- The indicies of the second value.
- """
- pass
-
-
-class BadGenericDocStrings:
- """Everything here has a bad docstring
- """
-
- def func(self):
-
- """Some function.
-
- With several mistakes in the docstring.
-
- It has a blank like after the signature `def func():`.
-
- The text 'Some function' should go in the line after the
- opening quotes of the docstring, not in the same line.
-
- There is a blank line between the docstring and the first line
- of code `foo = 1`.
-
- The closing quotes should be in the next line, not in this one."""
-
- foo = 1
- bar = 2
- return foo + bar
-
- def astype(self, dtype):
- """
- Casts Series type.
-
- Verb in third-person of the present simple, should be infinitive.
- """
- pass
-
- def astype1(self, dtype):
- """
- Method to cast Series type.
-
- Does not start with verb.
- """
- pass
-
- def astype2(self, dtype):
- """
- Cast Series type
-
- Missing dot at the end.
- """
- pass
-
- def astype3(self, dtype):
- """
- Cast Series type from its current type to the new type defined in
- the parameter dtype.
-
- Summary is too verbose and doesn't fit in a single line.
- """
- pass
-
- def two_linebreaks_between_sections(self, foo):
- """
- Test linebreaks message GL03.
-
- Note 2 blank lines before parameters section.
-
-
- Parameters
- ----------
- foo : str
- Description of foo parameter.
- """
- pass
-
- def linebreak_at_end_of_docstring(self, foo):
- """
- Test linebreaks message GL03.
-
- Note extra blank line at end of docstring.
-
- Parameters
- ----------
- foo : str
- Description of foo parameter.
-
- """
- pass
-
- def plot(self, kind, **kwargs):
- """
- Generate a plot.
-
- Render the data in the Series as a matplotlib plot of the
- specified kind.
-
- Note the blank line between the parameters title and the first
- parameter. Also, note that after the name of the parameter `kind`
- and before the colon, a space is missing.
-
- Also, note that the parameter descriptions do not start with a
- capital letter, and do not finish with a dot.
-
- Finally, the `**kwargs` parameter is missing.
-
- Parameters
- ----------
-
- kind: str
- kind of matplotlib plot
- """
- pass
-
- def method(self, foo=None, bar=None):
- """
- A sample DataFrame method.
-
- Do not import numpy and pandas.
-
- Try to use meaningful data, when it makes the example easier
- to understand.
-
- Try to avoid positional arguments like in `df.method(1)`. They
- can be alright if previously defined with a meaningful name,
- like in `present_value(interest_rate)`, but avoid them otherwise.
-
- When presenting the behavior with different parameters, do not place
- all the calls one next to the other. Instead, add a short sentence
- explaining what the example shows.
-
- Examples
- --------
- >>> import numpy as np
- >>> import pandas as pd
- >>> df = pd.DataFrame(np.ones((3, 3)),
- ... columns=('a', 'b', 'c'))
- >>> df.all(1)
- 0 True
- 1 True
- 2 True
- dtype: bool
- >>> df.all(bool_only=True)
- Series([], dtype: bool)
- """
- pass
-
- def private_classes(self):
- """
- This mentions NDFrame, which is not correct.
- """
-
- def unknown_section(self):
- """
- This section has an unknown section title.
-
- Unknown Section
- ---------------
- This should raise an error in the validation.
- """
-
- def sections_in_wrong_order(self):
- """
- This docstring has the sections in the wrong order.
-
- Parameters
- ----------
- name : str
- This section is in the right position.
-
- Examples
- --------
- >>> print('So far Examples is good, as it goes before Parameters')
- So far Examples is good, as it goes before Parameters
-
- See Also
- --------
- function : This should generate an error, as See Also needs to go
- before Examples.
- """
-
- def deprecation_in_wrong_order(self):
- """
- This docstring has the deprecation warning in the wrong order.
-
- This is the extended summary. The correct order should be
- summary, deprecation warning, extended summary.
-
- .. deprecated:: 1.0
- This should generate an error as it needs to go before
- extended summary.
- """
-
- def method_wo_docstrings(self):
- pass
-
- def directives_without_two_colons(self, first, second):
- """
- Ensure reST directives have trailing colons.
-
- Parameters
- ----------
- first : str
- Sentence ending in period, followed by single directive w/o colons.
-
- .. versionchanged 0.1.2
-
- second : bool
- Sentence ending in period, followed by multiple directives w/o
- colons.
-
- .. versionadded 0.1.2
- .. deprecated 0.00.0
-
- """
- pass
-
-
-class BadSummaries:
- def wrong_line(self):
- """Exists on the wrong line"""
- pass
-
- def no_punctuation(self):
- """
- Has the right line but forgets punctuation
- """
- pass
-
- def no_capitalization(self):
- """
- provides a lowercase summary.
- """
- pass
-
- def no_infinitive(self):
- """
- Started with a verb that is not infinitive.
- """
-
- def multi_line(self):
- """
- Extends beyond one line
- which is not correct.
- """
-
- def two_paragraph_multi_line(self):
- """
- Extends beyond one line
- which is not correct.
-
- Extends beyond one line, which in itself is correct but the
- previous short summary should still be an issue.
- """
-
-
-class BadParameters:
- """
- Everything here has a problem with its Parameters section.
- """
-
- def missing_params(self, kind, **kwargs):
- """
- Lacks kwargs in Parameters.
-
- Parameters
- ----------
- kind : str
- Foo bar baz.
- """
-
- def bad_colon_spacing(self, kind):
- """
- Has bad spacing in the type line.
-
- Parameters
- ----------
- kind: str
- Needs a space after kind.
- """
-
- def no_description_period(self, kind):
- """
- Forgets to add a period to the description.
-
- Parameters
- ----------
- kind : str
- Doesn't end with a dot
- """
-
- def no_description_period_with_directive(self, kind):
- """
- Forgets to add a period, and also includes a directive.
-
- Parameters
- ----------
- kind : str
- Doesn't end with a dot
-
- .. versionadded:: 0.00.0
- """
-
- def no_description_period_with_directives(self, kind):
- """
- Forgets to add a period, and also includes multiple directives.
-
- Parameters
- ----------
- kind : str
- Doesn't end with a dot
-
- .. versionchanged:: 0.00.0
- .. deprecated:: 0.00.0
- """
-
- def parameter_capitalization(self, kind):
- """
- Forgets to capitalize the description.
-
- Parameters
- ----------
- kind : str
- this is not capitalized.
- """
-
- def blank_lines(self, kind):
- """
- Adds a blank line after the section header.
-
- Parameters
- ----------
-
- kind : str
- Foo bar baz.
- """
- pass
-
- def integer_parameter(self, kind):
- """
- Uses integer instead of int.
-
- Parameters
- ----------
- kind : integer
- Foo bar baz.
- """
- pass
-
- def string_parameter(self, kind):
- """
- Uses string instead of str.
-
- Parameters
- ----------
- kind : string
- Foo bar baz.
- """
- pass
-
- def boolean_parameter(self, kind):
- """
- Uses boolean instead of bool.
-
- Parameters
- ----------
- kind : boolean
- Foo bar baz.
- """
- pass
-
- def list_incorrect_parameter_type(self, kind):
- """
- Uses list of boolean instead of list of bool.
-
- Parameters
- ----------
- kind : list of boolean, integer, float or string
- Foo bar baz.
- """
- pass
-
- def bad_parameter_spacing(self, a, b):
- """
- The parameters on the same line have an extra space between them.
-
- Parameters
- ----------
- a, b : int
- Foo bar baz.
- """
- pass
-
-
-class BadReturns:
- def return_not_documented(self):
- """
- Lacks section for Returns
- """
- return "Hello world!"
-
- def yield_not_documented(self):
- """
- Lacks section for Yields
- """
- yield "Hello world!"
-
- def no_type(self):
- """
- Returns documented but without type.
-
- Returns
- -------
- Some value.
- """
- return "Hello world!"
-
- def no_description(self):
- """
- Provides type but no description.
-
- Returns
- -------
- str
- """
- return "Hello world!"
-
- def no_punctuation(self):
- """
- Provides type and description but no period.
-
- Returns
- -------
- str
- A nice greeting
- """
- return "Hello world!"
-
- def named_single_return(self):
- """
- Provides name but returns only one value.
-
- Returns
- -------
- s : str
- A nice greeting.
- """
- return "Hello world!"
-
- def no_capitalization(self):
- """
- Forgets capitalization in return values description.
-
- Returns
- -------
- foo : str
- The first returned string.
- bar : str
- the second returned string.
- """
- return "Hello", "World!"
+class BadDocstrings:
+ """Everything here has a bad docstring
+ """
- def no_period_multi(self):
+ def private_classes(self):
"""
- Forgets period in return values description.
-
- Returns
- -------
- foo : str
- The first returned string
- bar : str
- The second returned string.
+ This mentions NDFrame, which is not correct.
"""
- return "Hello", "World!"
-
-class BadSeeAlso:
- def desc_no_period(self):
+ def prefix_pandas(self):
"""
- Return the first 5 elements of the Series.
+ Have `pandas` prefix in See Also section.
See Also
--------
- Series.tail : Return the last 5 elements of the Series.
- Series.iloc : Return a slice of the elements in the Series,
- which can also be used to return the first or last n
+ pandas.Series.rename : Alter Series index labels or name.
+ DataFrame.head : The first `n` rows of the caller object.
"""
pass
- def desc_first_letter_lowercase(self):
- """
- Return the first 5 elements of the Series.
-
- See Also
- --------
- Series.tail : return the last 5 elements of the Series.
- Series.iloc : Return a slice of the elements in the Series,
- which can also be used to return the first or last n.
+ def redundant_import(self, foo=None, bar=None):
"""
- pass
+ A sample DataFrame method.
- def prefix_pandas(self):
- """
- Have `pandas` prefix in See Also section.
+ Should not import numpy and pandas.
- See Also
+ Examples
--------
- pandas.Series.rename : Alter Series index labels or name.
- DataFrame.head : The first `n` rows of the caller object.
+ >>> import numpy as np
+ >>> import pandas as pd
+ >>> df = pd.DataFrame(np.ones((3, 3)),
+ ... columns=('a', 'b', 'c'))
+ >>> df.all(1)
+ 0 True
+ 1 True
+ 2 True
+ dtype: bool
+ >>> df.all(bool_only=True)
+ Series([], dtype: bool)
"""
pass
-
-class BadExamples:
def unused_import(self):
"""
Examples
@@ -877,59 +110,9 @@ def _import_path(self, klass=None, func=None):
return base_path
- def test_good_class(self, capsys):
- errors = validate_one(self._import_path(klass="GoodDocStrings"))["errors"]
- assert isinstance(errors, list)
- assert not errors
-
- @pytest.mark.parametrize(
- "func",
- [
- "plot",
- "swap",
- "sample",
- "decorated_sample",
- "random_letters",
- "sample_values",
- "head",
- "head1",
- "contains",
- "mode",
- "good_imports",
- "no_returns",
- "empty_returns",
- "multiple_variables_on_one_line",
- ],
- )
- def test_good_functions(self, capsys, func):
- errors = validate_one(self._import_path(klass="GoodDocStrings", func=func))[
- "errors"
- ]
- assert isinstance(errors, list)
- assert not errors
-
def test_bad_class(self, capsys):
- errors = validate_one(self._import_path(klass="BadGenericDocStrings"))["errors"]
- assert isinstance(errors, list)
- assert errors
-
- @pytest.mark.parametrize(
- "func",
- [
- "func",
- "astype",
- "astype1",
- "astype2",
- "astype3",
- "plot",
- "method",
- "private_classes",
- "directives_without_two_colons",
- ],
- )
- def test_bad_generic_functions(self, capsys, func):
- errors = validate_one(
- self._import_path(klass="BadGenericDocStrings", func=func) # noqa:F821
+ errors = validate_docstrings.pandas_validate(
+ self._import_path(klass="BadDocstrings")
)["errors"]
assert isinstance(errors, list)
assert errors
@@ -937,9 +120,8 @@ def test_bad_generic_functions(self, capsys, func):
@pytest.mark.parametrize(
"klass,func,msgs",
[
- # See Also tests
(
- "BadGenericDocStrings",
+ "BadDocstrings",
"private_classes",
(
"Private classes (NDFrame) should not be mentioned in public "
@@ -947,200 +129,31 @@ def test_bad_generic_functions(self, capsys, func):
),
),
(
- "BadGenericDocStrings",
- "unknown_section",
- ('Found unknown section "Unknown Section".',),
- ),
- (
- "BadGenericDocStrings",
- "sections_in_wrong_order",
- (
- "Sections are in the wrong order. Correct order is: Parameters, "
- "See Also, Examples",
- ),
- ),
- (
- "BadGenericDocStrings",
- "deprecation_in_wrong_order",
- ("Deprecation warning should precede extended summary",),
- ),
- (
- "BadGenericDocStrings",
- "directives_without_two_colons",
- (
- "reST directives ['versionchanged', 'versionadded', "
- "'deprecated'] must be followed by two colons",
- ),
- ),
- (
- "BadSeeAlso",
- "desc_no_period",
- ('Missing period at end of description for See Also "Series.iloc"',),
- ),
- (
- "BadSeeAlso",
- "desc_first_letter_lowercase",
- ('should be capitalized for See Also "Series.tail"',),
- ),
- # Summary tests
- (
- "BadSummaries",
- "wrong_line",
- ("should start in the line immediately after the opening quotes",),
- ),
- ("BadSummaries", "no_punctuation", ("Summary does not end with a period",)),
- (
- "BadSummaries",
- "no_capitalization",
- ("Summary does not start with a capital letter",),
- ),
- (
- "BadSummaries",
- "no_capitalization",
- ("Summary must start with infinitive verb",),
- ),
- ("BadSummaries", "multi_line", ("Summary should fit in a single line",)),
- (
- "BadSummaries",
- "two_paragraph_multi_line",
- ("Summary should fit in a single line",),
- ),
- # Parameters tests
- (
- "BadParameters",
- "missing_params",
- ("Parameters {**kwargs} not documented",),
- ),
- (
- "BadParameters",
- "bad_colon_spacing",
- (
- 'Parameter "kind" requires a space before the colon '
- "separating the parameter name and type",
- ),
- ),
- (
- "BadParameters",
- "no_description_period",
- ('Parameter "kind" description should finish with "."',),
- ),
- (
- "BadParameters",
- "no_description_period_with_directive",
- ('Parameter "kind" description should finish with "."',),
- ),
- (
- "BadParameters",
- "parameter_capitalization",
- ('Parameter "kind" description should start with a capital letter',),
- ),
- (
- "BadParameters",
- "integer_parameter",
- ('Parameter "kind" type should use "int" instead of "integer"',),
- ),
- (
- "BadParameters",
- "string_parameter",
- ('Parameter "kind" type should use "str" instead of "string"',),
- ),
- (
- "BadParameters",
- "boolean_parameter",
- ('Parameter "kind" type should use "bool" instead of "boolean"',),
- ),
- (
- "BadParameters",
- "list_incorrect_parameter_type",
- ('Parameter "kind" type should use "bool" instead of "boolean"',),
- ),
- (
- "BadParameters",
- "list_incorrect_parameter_type",
- ('Parameter "kind" type should use "int" instead of "integer"',),
- ),
- (
- "BadParameters",
- "list_incorrect_parameter_type",
- ('Parameter "kind" type should use "str" instead of "string"',),
- ),
- (
- "BadParameters",
- "bad_parameter_spacing",
- ("Parameters {b} not documented", "Unknown parameters { b}"),
- ),
- pytest.param(
- "BadParameters",
- "blank_lines",
- ("No error yet?",),
- marks=pytest.mark.xfail,
- ),
- # Returns tests
- ("BadReturns", "return_not_documented", ("No Returns section found",)),
- ("BadReturns", "yield_not_documented", ("No Yields section found",)),
- pytest.param("BadReturns", "no_type", ("foo",), marks=pytest.mark.xfail),
- ("BadReturns", "no_description", ("Return value has no description",)),
- (
- "BadReturns",
- "no_punctuation",
- ('Return value description should finish with "."',),
- ),
- (
- "BadReturns",
- "named_single_return",
+ "BadDocstrings",
+ "prefix_pandas",
(
- "The first line of the Returns section should contain only the "
- "type, unless multiple values are being returned",
+ "pandas.Series.rename in `See Also` section "
+ "does not need `pandas` prefix",
),
),
- (
- "BadReturns",
- "no_capitalization",
- ("Return value description should start with a capital letter",),
- ),
- (
- "BadReturns",
- "no_period_multi",
- ('Return value description should finish with "."',),
- ),
# Examples tests
(
- "BadGenericDocStrings",
- "method",
+ "BadDocstrings",
+ "redundant_import",
("Do not import numpy, as it is imported automatically",),
),
(
- "BadGenericDocStrings",
- "method",
+ "BadDocstrings",
+ "redundant_import",
("Do not import pandas, as it is imported automatically",),
),
(
- "BadGenericDocStrings",
- "method_wo_docstrings",
- ("The object does not have a docstring",),
- ),
- # See Also tests
- (
- "BadSeeAlso",
- "prefix_pandas",
- (
- "pandas.Series.rename in `See Also` section "
- "does not need `pandas` prefix",
- ),
- ),
- # Examples tests
- (
- "BadExamples",
+ "BadDocstrings",
"unused_import",
("flake8 error: F401 'pandas as pdf' imported but unused",),
),
(
- "BadExamples",
- "indentation_is_not_a_multiple_of_four",
- ("flake8 error: E111 indentation is not a multiple of four",),
- ),
- (
- "BadExamples",
+ "BadDocstrings",
"missing_whitespace_around_arithmetic_operator",
(
"flake8 error: "
@@ -1148,39 +161,28 @@ def test_bad_generic_functions(self, capsys, func):
),
),
(
- "BadExamples",
- "missing_whitespace_after_comma",
- ("flake8 error: E231 missing whitespace after ',' (3 times)",),
- ),
- (
- "BadGenericDocStrings",
- "two_linebreaks_between_sections",
- (
- "Double line break found; please use only one blank line to "
- "separate sections or paragraphs, and do not leave blank lines "
- "at the end of docstrings",
- ),
+ "BadDocstrings",
+ "indentation_is_not_a_multiple_of_four",
+ ("flake8 error: E111 indentation is not a multiple of four",),
),
(
- "BadGenericDocStrings",
- "linebreak_at_end_of_docstring",
- (
- "Double line break found; please use only one blank line to "
- "separate sections or paragraphs, and do not leave blank lines "
- "at the end of docstrings",
- ),
+ "BadDocstrings",
+ "missing_whitespace_after_comma",
+ ("flake8 error: E231 missing whitespace after ',' (3 times)",),
),
],
)
def test_bad_docstrings(self, capsys, klass, func, msgs):
- result = validate_one(self._import_path(klass=klass, func=func))
+ result = validate_docstrings.pandas_validate(
+ self._import_path(klass=klass, func=func)
+ )
for msg in msgs:
assert msg in " ".join(err[1] for err in result["errors"])
def test_validate_all_ignore_deprecated(self, monkeypatch):
monkeypatch.setattr(
validate_docstrings,
- "validate_one",
+ "pandas_validate",
lambda func_name: {
"docstring": "docstring1",
"errors": [
@@ -1285,50 +287,22 @@ def test_item_subsection(self, idx, subsection):
assert result[idx][3] == subsection
-class TestDocstringClass:
- @pytest.mark.parametrize(
- "name, expected_obj",
- [
- ("pandas.isnull", pd.isnull),
- ("pandas.DataFrame", pd.DataFrame),
- ("pandas.Series.sum", pd.Series.sum),
- ],
- )
- def test_resolves_class_name(self, name, expected_obj):
- d = validate_docstrings.Docstring(name)
- assert d.obj is expected_obj
-
- @pytest.mark.parametrize("invalid_name", ["panda", "panda.DataFrame"])
- def test_raises_for_invalid_module_name(self, invalid_name):
- msg = f'No module can be imported from "{invalid_name}"'
- with pytest.raises(ImportError, match=msg):
- validate_docstrings.Docstring(invalid_name)
-
- @pytest.mark.parametrize(
- "invalid_name", ["pandas.BadClassName", "pandas.Series.bad_method_name"]
- )
- def test_raises_for_invalid_attribute_name(self, invalid_name):
- name_components = invalid_name.split(".")
- obj_name, invalid_attr_name = name_components[-2], name_components[-1]
- msg = f"'{obj_name}' has no attribute '{invalid_attr_name}'"
- with pytest.raises(AttributeError, match=msg):
- validate_docstrings.Docstring(invalid_name)
-
+class TestPandasDocstringClass:
@pytest.mark.parametrize(
"name", ["pandas.Series.str.isdecimal", "pandas.Series.str.islower"]
)
def test_encode_content_write_to_file(self, name):
# GH25466
- docstr = validate_docstrings.Docstring(name).validate_pep8()
+ docstr = validate_docstrings.PandasDocstring(name).validate_pep8()
# the list of pep8 errors should be empty
assert not list(docstr)
class TestMainFunction:
- def test_exit_status_for_validate_one(self, monkeypatch):
+ def test_exit_status_for_main(self, monkeypatch):
monkeypatch.setattr(
validate_docstrings,
- "validate_one",
+ "pandas_validate",
lambda func_name: {
"docstring": "docstring1",
"errors": [
@@ -1336,8 +310,7 @@ def test_exit_status_for_validate_one(self, monkeypatch):
("ER02", "err desc"),
("ER03", "err desc"),
],
- "warnings": [],
- "examples_errors": "",
+ "examples_errs": "",
},
)
exit_status = validate_docstrings.main(
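The `test_exit_status_for_main` hunk above stubs out `pandas_validate` so that `main`'s exit status can be checked without running real validation. A hypothetical, simplified sketch of that stubbing pattern (the module object and `main()` below are stand-ins, not the real `validate_docstrings` implementation):

```python
import types

# Stand-in for the validate_docstrings module being patched in the test.
validate_docstrings = types.SimpleNamespace()

def main(func_name):
    # Minimal main(): the exit status is the number of errors found,
    # mirroring how the test expects one exit-status unit per error.
    result = validate_docstrings.pandas_validate(func_name)
    return len(result["errors"])

# Equivalent of monkeypatch.setattr(validate_docstrings, "pandas_validate", ...):
# replace the validator with a stub returning a fixed result.
validate_docstrings.pandas_validate = lambda func_name: {
    "docstring": "docstring1",
    "errors": [("ER01", "err desc"), ("ER02", "err desc"), ("ER03", "err desc")],
    "examples_errs": "",
}

print(main("pandas.DataFrame.head"))  # 3
```

In the real test, pytest's `monkeypatch` fixture performs the same substitution and automatically undoes it after the test finishes.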
diff --git a/scripts/validate_docstrings.py b/scripts/validate_docstrings.py
index bcf3fd5d276f5..079e9a16cfd13 100755
--- a/scripts/validate_docstrings.py
+++ b/scripts/validate_docstrings.py
@@ -14,19 +14,14 @@
$ ./validate_docstrings.py pandas.DataFrame.head
"""
import argparse
-import ast
import doctest
-import functools
import glob
import importlib
-import inspect
import json
import os
-import pydoc
-import re
import sys
import tempfile
-import textwrap
+from typing import List, Optional
import flake8.main.application
@@ -52,87 +47,15 @@
import pandas # noqa: E402 isort:skip
sys.path.insert(1, os.path.join(BASE_PATH, "doc", "sphinxext"))
-from numpydoc.docscrape import NumpyDocString # noqa: E402 isort:skip
-from pandas.io.formats.printing import pprint_thing # noqa: E402 isort:skip
+from numpydoc.validate import validate, Docstring # noqa: E402 isort:skip
PRIVATE_CLASSES = ["NDFrame", "IndexOpsMixin"]
-DIRECTIVES = ["versionadded", "versionchanged", "deprecated"]
-DIRECTIVE_PATTERN = re.compile(rf"^\s*\.\. ({'|'.join(DIRECTIVES)})(?!::)", re.I | re.M)
-ALLOWED_SECTIONS = [
- "Parameters",
- "Attributes",
- "Methods",
- "Returns",
- "Yields",
- "Other Parameters",
- "Raises",
- "Warns",
- "See Also",
- "Notes",
- "References",
- "Examples",
-]
ERROR_MSGS = {
- "GL01": "Docstring text (summary) should start in the line immediately "
- "after the opening quotes (not in the same line, or leaving a "
- "blank line in between)",
- "GL02": "Closing quotes should be placed in the line after the last text "
- "in the docstring (do not close the quotes in the same line as "
- "the text, or leave a blank line between the last text and the "
- "quotes)",
- "GL03": "Double line break found; please use only one blank line to "
- "separate sections or paragraphs, and do not leave blank lines "
- "at the end of docstrings",
"GL04": "Private classes ({mentioned_private_classes}) should not be "
"mentioned in public docstrings",
- "GL05": 'Tabs found at the start of line "{line_with_tabs}", please use '
- "whitespace only",
- "GL06": 'Found unknown section "{section}". Allowed sections are: '
- "{allowed_sections}",
- "GL07": "Sections are in the wrong order. Correct order is: {correct_sections}",
- "GL08": "The object does not have a docstring",
- "GL09": "Deprecation warning should precede extended summary",
- "GL10": "reST directives {directives} must be followed by two colons",
- "SS01": "No summary found (a short summary in a single line should be "
- "present at the beginning of the docstring)",
- "SS02": "Summary does not start with a capital letter",
- "SS03": "Summary does not end with a period",
- "SS04": "Summary contains heading whitespaces",
- "SS05": "Summary must start with infinitive verb, not third person "
- '(e.g. use "Generate" instead of "Generates")',
- "SS06": "Summary should fit in a single line",
- "ES01": "No extended summary found",
- "PR01": "Parameters {missing_params} not documented",
- "PR02": "Unknown parameters {unknown_params}",
- "PR03": "Wrong parameters order. Actual: {actual_params}. "
- "Documented: {documented_params}",
- "PR04": 'Parameter "{param_name}" has no type',
- "PR05": 'Parameter "{param_name}" type should not finish with "."',
- "PR06": 'Parameter "{param_name}" type should use "{right_type}" instead '
- 'of "{wrong_type}"',
- "PR07": 'Parameter "{param_name}" has no description',
- "PR08": 'Parameter "{param_name}" description should start with a '
- "capital letter",
- "PR09": 'Parameter "{param_name}" description should finish with "."',
- "PR10": 'Parameter "{param_name}" requires a space before the colon '
- "separating the parameter name and type",
- "RT01": "No Returns section found",
- "RT02": "The first line of the Returns section should contain only the "
- "type, unless multiple values are being returned",
- "RT03": "Return value has no description",
- "RT04": "Return value description should start with a capital letter",
- "RT05": 'Return value description should finish with "."',
- "YD01": "No Yields section found",
- "SA01": "See Also section not found",
- "SA02": "Missing period at end of description for See Also "
- '"{reference_name}" reference',
- "SA03": "Description should be capitalized for See Also "
- '"{reference_name}" reference',
- "SA04": 'Missing description for See Also "{reference_name}" reference',
"SA05": "{reference_name} in `See Also` section does not need `pandas` "
"prefix, use {right_reference} instead.",
- "EX01": "No examples section found",
"EX02": "Examples do not pass tests:\n{doctest_log}",
"EX03": "flake8 error: {error_code} {error_message}{times_happening}",
"EX04": "Do not import {imported_library}, as it is imported "
@@ -140,29 +63,10 @@
}
-def error(code, **kwargs):
+def pandas_error(code, **kwargs):
"""
- Return a tuple with the error code and the message with variables replaced.
-
- This is syntactic sugar so instead of:
- - `('EX02', ERROR_MSGS['EX02'].format(doctest_log=log))`
-
- We can simply use:
- - `error('EX02', doctest_log=log)`
-
- Parameters
- ----------
- code : str
- Error code.
- **kwargs
- Values for the variables in the error messages
-
- Returns
- -------
- code : str
- Error code.
- message : str
- Error message with variables replaced.
+ Copy of the numpydoc error function, since ERROR_MSGS can't be updated
+ with our custom errors yet.
"""
return (code, ERROR_MSGS[code].format(**kwargs))
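The `pandas_error` helper kept in this hunk is just template lookup plus `str.format`. A self-contained sketch, reproducing only one `ERROR_MSGS` entry from the diff for illustration:

```python
# One entry of the ERROR_MSGS table retained in the diff above.
ERROR_MSGS = {
    "GL04": "Private classes ({mentioned_private_classes}) should not be "
    "mentioned in public docstrings",
}

def pandas_error(code, **kwargs):
    # Look up the message template for the code and fill in its variables,
    # returning the (code, message) tuple the validators collect.
    return (code, ERROR_MSGS[code].format(**kwargs))

code, msg = pandas_error("GL04", mentioned_private_classes="NDFrame")
print(code, msg)
```

This is the "syntactic sugar" the removed docstring described: `pandas_error("EX02", doctest_log=log)` instead of spelling out the `.format()` call at each error site.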
@@ -239,347 +143,7 @@ def get_api_items(api_doc_fd):
previous_line = line
-class Docstring:
- def __init__(self, name):
- self.name = name
- obj = self._load_obj(name)
- self.obj = obj
- self.code_obj = self._to_original_callable(obj)
- self.raw_doc = obj.__doc__ or ""
- self.clean_doc = pydoc.getdoc(obj)
- self.doc = NumpyDocString(self.clean_doc)
-
- def __len__(self) -> int:
- return len(self.raw_doc)
-
- @staticmethod
- def _load_obj(name):
- """
- Import Python object from its name as string.
-
- Parameters
- ----------
- name : str
- Object name to import (e.g. pandas.Series.str.upper)
-
- Returns
- -------
- object
- Python object that can be a class, method, function...
-
- Examples
- --------
- >>> Docstring._load_obj('pandas.Series')
- <class 'pandas.core.series.Series'>
- """
- for maxsplit in range(1, name.count(".") + 1):
- # TODO when py3 only replace by: module, *func_parts = ...
- func_name_split = name.rsplit(".", maxsplit)
- module = func_name_split[0]
- func_parts = func_name_split[1:]
- try:
- obj = importlib.import_module(module)
- except ImportError:
- pass
- else:
- continue
-
- if "obj" not in locals():
- raise ImportError(f'No module can be imported from "{name}"')
-
- for part in func_parts:
- obj = getattr(obj, part)
- return obj
-
- @staticmethod
- def _to_original_callable(obj):
- """
- Find the Python object that contains the source code of the object.
-
- This is useful to find the place in the source code (file and line
- number) where a docstring is defined. It does not currently work for
- all cases, but it should help find some (properties...).
- """
- while True:
- if inspect.isfunction(obj) or inspect.isclass(obj):
- f = inspect.getfile(obj)
- if f.startswith("<") and f.endswith(">"):
- return None
- return obj
- if inspect.ismethod(obj):
- obj = obj.__func__
- elif isinstance(obj, functools.partial):
- obj = obj.func
- elif isinstance(obj, property):
- obj = obj.fget
- else:
- return None
-
- @property
- def type(self):
- return type(self.obj).__name__
-
- @property
- def is_function_or_method(self):
- # TODO(py27): remove ismethod
- return inspect.isfunction(self.obj) or inspect.ismethod(self.obj)
-
- @property
- def source_file_name(self):
- """
- File name where the object is implemented (e.g. pandas/core/frame.py).
- """
- try:
- fname = inspect.getsourcefile(self.code_obj)
- except TypeError:
- # In some cases the object is something complex like a cython
-            # object that can't be easily introspected. And it's better to
- # return the source code file of the object as None, than crash
- pass
- else:
- if fname:
- fname = os.path.relpath(fname, BASE_PATH)
- return fname
-
- @property
- def source_file_def_line(self):
- """
- Number of line where the object is defined in its file.
- """
- try:
- return inspect.getsourcelines(self.code_obj)[-1]
- except (OSError, TypeError):
- # In some cases the object is something complex like a cython
-            # object that can't be easily introspected. And it's better to
- # return the line number as None, than crash
- pass
-
- @property
- def github_url(self):
- url = "https://github.com/pandas-dev/pandas/blob/master/"
- url += f"{self.source_file_name}#L{self.source_file_def_line}"
- return url
-
- @property
- def start_blank_lines(self):
- i = None
- if self.raw_doc:
- for i, row in enumerate(self.raw_doc.split("\n")):
- if row.strip():
- break
- return i
-
- @property
- def end_blank_lines(self):
- i = None
- if self.raw_doc:
- for i, row in enumerate(reversed(self.raw_doc.split("\n"))):
- if row.strip():
- break
- return i
-
- @property
- def double_blank_lines(self):
- prev = True
- for row in self.raw_doc.split("\n"):
- if not prev and not row.strip():
- return True
- prev = row.strip()
- return False
-
- @property
- def section_titles(self):
- sections = []
- self.doc._doc.reset()
- while not self.doc._doc.eof():
- content = self.doc._read_to_next_section()
- if (
- len(content) > 1
- and len(content[0]) == len(content[1])
- and set(content[1]) == {"-"}
- ):
- sections.append(content[0])
- return sections
-
- @property
- def summary(self):
- return " ".join(self.doc["Summary"])
-
- @property
- def num_summary_lines(self):
- return len(self.doc["Summary"])
-
- @property
- def extended_summary(self):
- if not self.doc["Extended Summary"] and len(self.doc["Summary"]) > 1:
- return " ".join(self.doc["Summary"])
- return " ".join(self.doc["Extended Summary"])
-
- @property
- def needs_summary(self):
- return not (bool(self.summary) and bool(self.extended_summary))
-
- @property
- def doc_parameters(self):
- parameters = {}
- for names, type_, desc in self.doc["Parameters"]:
- for name in names.split(", "):
- parameters[name] = (type_, "".join(desc))
- return parameters
-
- @property
- def signature_parameters(self):
- def add_stars(param_name: str, info: inspect.Parameter):
- """
- Add stars to *args and **kwargs parameters
- """
- if info.kind == inspect.Parameter.VAR_POSITIONAL:
- return f"*{param_name}"
- elif info.kind == inspect.Parameter.VAR_KEYWORD:
- return f"**{param_name}"
- else:
- return param_name
-
- if inspect.isclass(self.obj):
- if hasattr(self.obj, "_accessors") and (
- self.name.split(".")[-1] in self.obj._accessors
- ):
- # accessor classes have a signature but don't want to show this
- return tuple()
- try:
- sig = inspect.signature(self.obj)
- except (TypeError, ValueError):
- # Some objects, mainly in C extensions do not support introspection
- # of the signature
- return tuple()
-
- params = tuple(
- add_stars(parameter, sig.parameters[parameter])
- for parameter in sig.parameters
- )
- if params and params[0] in ("self", "cls"):
- return params[1:]
- return params
-
- @property
- def parameter_mismatches(self):
- errs = []
- signature_params = self.signature_parameters
- doc_params = tuple(self.doc_parameters)
- missing = set(signature_params) - set(doc_params)
- if missing:
- errs.append(error("PR01", missing_params=pprint_thing(missing)))
- extra = set(doc_params) - set(signature_params)
- if extra:
- errs.append(error("PR02", unknown_params=pprint_thing(extra)))
- if (
- not missing
- and not extra
- and signature_params != doc_params
- and not (not signature_params and not doc_params)
- ):
- errs.append(
- error(
- "PR03", actual_params=signature_params, documented_params=doc_params
- )
- )
-
- return errs
-
- @property
- def correct_parameters(self):
- return not bool(self.parameter_mismatches)
-
- @property
- def directives_without_two_colons(self):
- return DIRECTIVE_PATTERN.findall(self.raw_doc)
-
- def parameter_type(self, param):
- return self.doc_parameters[param][0]
-
- def parameter_desc(self, param):
- desc = self.doc_parameters[param][1]
- # Find and strip out any sphinx directives
- for directive in DIRECTIVES:
- full_directive = f".. {directive}"
- if full_directive in desc:
- # Only retain any description before the directive
- desc = desc[: desc.index(full_directive)]
- return desc
-
- @property
- def see_also(self):
- result = {}
- for funcs, desc in self.doc["See Also"]:
- for func, _ in funcs:
- result[func] = "".join(desc)
-
- return result
-
- @property
- def examples(self):
- return self.doc["Examples"]
-
- @property
- def returns(self):
- return self.doc["Returns"]
-
- @property
- def yields(self):
- return self.doc["Yields"]
-
- @property
- def method_source(self):
- try:
- source = inspect.getsource(self.obj)
- except TypeError:
- return ""
- return textwrap.dedent(source)
-
- @property
- def method_returns_something(self):
- """
- Check if the docstrings method can return something.
-
- Bare returns, returns valued None and returns from nested functions are
-        disregarded.
-
- Returns
- -------
- bool
- Whether the docstrings method can return something.
- """
-
- def get_returns_not_on_nested_functions(node):
- returns = [node] if isinstance(node, ast.Return) else []
- for child in ast.iter_child_nodes(node):
- # Ignore nested functions and its subtrees.
- if not isinstance(child, ast.FunctionDef):
- child_returns = get_returns_not_on_nested_functions(child)
- returns.extend(child_returns)
- return returns
-
- tree = ast.parse(self.method_source).body
- if tree:
- returns = get_returns_not_on_nested_functions(tree[0])
- return_values = [r.value for r in returns]
- # Replace NameConstant nodes valued None for None.
- for i, v in enumerate(return_values):
- if isinstance(v, ast.NameConstant) and v.value is None:
- return_values[i] = None
- return any(return_values)
- else:
- return False
-
- @property
- def first_line_ends_in_dot(self):
- if self.doc:
- return self.doc.split("\n")[0][-1] == "."
-
- @property
- def deprecated(self):
- return ".. deprecated:: " in (self.summary + self.extended_summary)
-
+class PandasDocstring(Docstring):
@property
def mentioned_private_classes(self):
return [klass for klass in PRIVATE_CLASSES if klass in self.raw_doc]
@@ -632,237 +196,66 @@ def validate_pep8(self):
yield from application.guide.stats.statistics_for("")
-def get_validation_data(doc):
+def pandas_validate(func_name: str):
"""
- Validate the docstring.
+ Call the numpydoc validation, and add the errors specific to pandas.
Parameters
----------
- doc : Docstring
- A Docstring object with the given function name.
+ func_name : str
+ Name of the object of the docstring to validate.
Returns
-------
- tuple
- errors : list of tuple
- Errors occurred during validation.
- warnings : list of tuple
- Warnings occurred during validation.
- examples_errs : str
- Examples usage displayed along the error, otherwise empty string.
-
- Notes
- -----
- The errors codes are defined as:
- - First two characters: Section where the error happens:
- * GL: Global (no section, like section ordering errors)
- * SS: Short summary
- * ES: Extended summary
- * PR: Parameters
- * RT: Returns
- * YD: Yields
- * RS: Raises
- * WN: Warns
- * SA: See Also
- * NT: Notes
- * RF: References
- * EX: Examples
- - Last two characters: Numeric error code inside the section
-
- For example, EX02 is the second codified error in the Examples section
- (which in this case is assigned to examples that do not pass the tests).
-
- The error codes, their corresponding error messages, and the details on how
- they are validated, are not documented more than in the source code of this
- function.
+ dict
+ Information about the docstring and the errors found.
"""
+ doc = PandasDocstring(func_name)
+ result = validate(func_name)
- errs = []
- wrns = []
- if not doc.raw_doc:
- errs.append(error("GL08"))
- return errs, wrns, ""
-
- if doc.start_blank_lines != 1:
- errs.append(error("GL01"))
- if doc.end_blank_lines != 1:
- errs.append(error("GL02"))
- if doc.double_blank_lines:
- errs.append(error("GL03"))
mentioned_errs = doc.mentioned_private_classes
if mentioned_errs:
- errs.append(error("GL04", mentioned_private_classes=", ".join(mentioned_errs)))
- for line in doc.raw_doc.splitlines():
- if re.match("^ *\t", line):
- errs.append(error("GL05", line_with_tabs=line.lstrip()))
-
- unexpected_sections = [
- section for section in doc.section_titles if section not in ALLOWED_SECTIONS
- ]
- for section in unexpected_sections:
- errs.append(
- error("GL06", section=section, allowed_sections=", ".join(ALLOWED_SECTIONS))
+ result["errors"].append(
+ pandas_error("GL04", mentioned_private_classes=", ".join(mentioned_errs))
)
- correct_order = [
- section for section in ALLOWED_SECTIONS if section in doc.section_titles
- ]
- if correct_order != doc.section_titles:
- errs.append(error("GL07", correct_sections=", ".join(correct_order)))
-
- if doc.deprecated and not doc.extended_summary.startswith(".. deprecated:: "):
- errs.append(error("GL09"))
-
- directives_without_two_colons = doc.directives_without_two_colons
- if directives_without_two_colons:
- errs.append(error("GL10", directives=directives_without_two_colons))
-
- if not doc.summary:
- errs.append(error("SS01"))
- else:
- if not doc.summary[0].isupper():
- errs.append(error("SS02"))
- if doc.summary[-1] != ".":
- errs.append(error("SS03"))
- if doc.summary != doc.summary.lstrip():
- errs.append(error("SS04"))
- elif doc.is_function_or_method and doc.summary.split(" ")[0][-1] == "s":
- errs.append(error("SS05"))
- if doc.num_summary_lines > 1:
- errs.append(error("SS06"))
-
- if not doc.extended_summary:
- wrns.append(("ES01", "No extended summary found"))
-
- # PR01: Parameters not documented
- # PR02: Unknown parameters
- # PR03: Wrong parameters order
- errs += doc.parameter_mismatches
-
- for param in doc.doc_parameters:
- if not param.startswith("*"): # Check can ignore var / kwargs
- if not doc.parameter_type(param):
- if ":" in param:
- errs.append(error("PR10", param_name=param.split(":")[0]))
- else:
- errs.append(error("PR04", param_name=param))
- else:
- if doc.parameter_type(param)[-1] == ".":
- errs.append(error("PR05", param_name=param))
- common_type_errors = [
- ("integer", "int"),
- ("boolean", "bool"),
- ("string", "str"),
- ]
- for wrong_type, right_type in common_type_errors:
- if wrong_type in doc.parameter_type(param):
- errs.append(
- error(
- "PR06",
- param_name=param,
- right_type=right_type,
- wrong_type=wrong_type,
- )
- )
- if not doc.parameter_desc(param):
- errs.append(error("PR07", param_name=param))
- else:
- if not doc.parameter_desc(param)[0].isupper():
- errs.append(error("PR08", param_name=param))
- if doc.parameter_desc(param)[-1] != ".":
- errs.append(error("PR09", param_name=param))
-
- if doc.is_function_or_method:
- if not doc.returns:
- if doc.method_returns_something:
- errs.append(error("RT01"))
- else:
- if len(doc.returns) == 1 and doc.returns[0].name:
- errs.append(error("RT02"))
- for name_or_type, type_, desc in doc.returns:
- if not desc:
- errs.append(error("RT03"))
- else:
- desc = " ".join(desc)
- if not desc[0].isupper():
- errs.append(error("RT04"))
- if not desc.endswith("."):
- errs.append(error("RT05"))
-
- if not doc.yields and "yield" in doc.method_source:
- errs.append(error("YD01"))
-
- if not doc.see_also:
- wrns.append(error("SA01"))
- else:
+ if doc.see_also:
for rel_name, rel_desc in doc.see_also.items():
- if rel_desc:
- if not rel_desc.endswith("."):
- errs.append(error("SA02", reference_name=rel_name))
- if not rel_desc[0].isupper():
- errs.append(error("SA03", reference_name=rel_name))
- else:
- errs.append(error("SA04", reference_name=rel_name))
if rel_name.startswith("pandas."):
- errs.append(
- error(
+ result["errors"].append(
+ pandas_error(
"SA05",
reference_name=rel_name,
right_reference=rel_name[len("pandas.") :],
)
)
- examples_errs = ""
- if not doc.examples:
- wrns.append(error("EX01"))
- else:
- examples_errs = doc.examples_errors
- if examples_errs:
- errs.append(error("EX02", doctest_log=examples_errs))
+ result["examples_errs"] = ""
+ if doc.examples:
+ result["examples_errs"] = doc.examples_errors
+ if result["examples_errs"]:
+ result["errors"].append(
+ pandas_error("EX02", doctest_log=result["examples_errs"])
+ )
for err in doc.validate_pep8():
- errs.append(
- error(
+ result["errors"].append(
+ pandas_error(
"EX03",
error_code=err.error_code,
error_message=err.message,
- times_happening=f" ({err.count} times)" if err.count > 1 else "",
+ times_happening=" ({} times)".format(err.count)
+ if err.count > 1
+ else "",
)
)
examples_source_code = "".join(doc.examples_source_code)
for wrong_import in ("numpy", "pandas"):
- if f"import {wrong_import}" in examples_source_code:
- errs.append(error("EX04", imported_library=wrong_import))
- return errs, wrns, examples_errs
-
-
-def validate_one(func_name):
- """
- Validate the docstring for the given func_name
-
- Parameters
- ----------
- func_name : function
- Function whose docstring will be evaluated (e.g. pandas.read_csv).
+ if "import {}".format(wrong_import) in examples_source_code:
+ result["errors"].append(
+ pandas_error("EX04", imported_library=wrong_import)
+ )
- Returns
- -------
- dict
- A dictionary containing all the information obtained from validating
- the docstring.
- """
- doc = Docstring(func_name)
- errs, wrns, examples_errs = get_validation_data(doc)
- return {
- "type": doc.type,
- "docstring": doc.clean_doc,
- "deprecated": doc.deprecated,
- "file": doc.source_file_name,
- "file_line": doc.source_file_def_line,
- "github_link": doc.github_url,
- "errors": errs,
- "warnings": wrns,
- "examples_errors": examples_errs,
- }
+ return result
def validate_all(prefix, ignore_deprecated=False):
@@ -887,16 +280,16 @@ def validate_all(prefix, ignore_deprecated=False):
result = {}
seen = {}
- # functions from the API docs
api_doc_fnames = os.path.join(BASE_PATH, "doc", "source", "reference", "*.rst")
api_items = []
for api_doc_fname in glob.glob(api_doc_fnames):
with open(api_doc_fname) as f:
api_items += list(get_api_items(f))
+
for func_name, func_obj, section, subsection in api_items:
if prefix and not func_name.startswith(prefix):
continue
- doc_info = validate_one(func_name)
+ doc_info = pandas_validate(func_name)
if ignore_deprecated and doc_info["deprecated"]:
continue
result[func_name] = doc_info
@@ -914,100 +307,86 @@ def validate_all(prefix, ignore_deprecated=False):
seen[shared_code_key] = func_name
- # functions from introspecting Series and DataFrame
- api_item_names = set(list(zip(*api_items))[0])
- for class_ in (pandas.Series, pandas.DataFrame):
- for member in inspect.getmembers(class_):
- func_name = f"pandas.{class_.__name__}.{member[0]}"
- if not member[0].startswith("_") and func_name not in api_item_names:
- if prefix and not func_name.startswith(prefix):
- continue
- doc_info = validate_one(func_name)
- if ignore_deprecated and doc_info["deprecated"]:
- continue
- result[func_name] = doc_info
- result[func_name]["in_api"] = False
-
return result
-def main(func_name, prefix, errors, output_format, ignore_deprecated):
+def print_validate_all_results(
+ prefix: str,
+ errors: Optional[List[str]],
+ output_format: str,
+ ignore_deprecated: bool,
+):
+ if output_format not in ("default", "json", "actions"):
+ raise ValueError(f'Unknown output_format "{output_format}"')
+
+ result = validate_all(prefix, ignore_deprecated)
+
+ if output_format == "json":
+ sys.stdout.write(json.dumps(result))
+ return 0
+
+ prefix = "##[error]" if output_format == "actions" else ""
+ exit_status = 0
+ for name, res in result.items():
+ for err_code, err_desc in res["errors"]:
+ if errors and err_code not in errors:
+ continue
+ sys.stdout.write(
+ f'{prefix}{res["file"]}:{res["file_line"]}:'
+ f"{err_code}:{name}:{err_desc}\n"
+ )
+ exit_status += 1
+
+ return exit_status
+
+
+def print_validate_one_results(func_name: str):
def header(title, width=80, char="#"):
full_line = char * width
side_len = (width - len(title) - 2) // 2
adj = "" if len(title) % 2 == 0 else " "
- title_line = f"{char * side_len} {title}{adj} {char * side_len}"
+ title_line = "{side} {title}{adj} {side}".format(
+ side=char * side_len, title=title, adj=adj
+ )
return f"\n{full_line}\n{title_line}\n{full_line}\n\n"
- exit_status = 0
- if func_name is None:
- result = validate_all(prefix, ignore_deprecated)
-
- if output_format == "json":
- output = json.dumps(result)
- else:
- if output_format == "default":
- output_format = "{text}\n"
- elif output_format == "azure":
- output_format = (
- "##vso[task.logissue type=error;"
- "sourcepath={path};"
- "linenumber={row};"
- "code={code};"
- "]{text}\n"
- )
- else:
- raise ValueError(f'Unknown output_format "{output_format}"')
-
- output = ""
- for name, res in result.items():
- for err_code, err_desc in res["errors"]:
- # The script would be faster if instead of filtering the
- # errors after validating them, it didn't validate them
- # initially. But that would complicate the code too much
- if errors and err_code not in errors:
- continue
- exit_status += 1
- output += output_format.format(
- path=res["file"],
- row=res["file_line"],
- code=err_code,
- text=f"{name}: {err_desc}",
- )
+ result = pandas_validate(func_name)
- sys.stdout.write(output)
+ sys.stderr.write(header(f"Docstring ({func_name})"))
+ sys.stderr.write(f"{result['docstring']}\n")
- else:
- result = validate_one(func_name)
- sys.stderr.write(header(f"Docstring ({func_name})"))
- sys.stderr.write(f"{result['docstring']}\n")
- sys.stderr.write(header("Validation"))
- if result["errors"]:
- sys.stderr.write(f"{len(result['errors'])} Errors found:\n")
- for err_code, err_desc in result["errors"]:
- # Failing examples are printed at the end
- if err_code == "EX02":
- sys.stderr.write("\tExamples do not pass tests\n")
- continue
- sys.stderr.write(f"\t{err_desc}\n")
- if result["warnings"]:
- sys.stderr.write(f"{len(result['warnings'])} Warnings found:\n")
- for wrn_code, wrn_desc in result["warnings"]:
- sys.stderr.write(f"\t{wrn_desc}\n")
-
- if not result["errors"]:
- sys.stderr.write(f'Docstring for "{func_name}" correct. :)\n')
-
- if result["examples_errors"]:
- sys.stderr.write(header("Doctests"))
- sys.stderr.write(result["examples_errors"])
+ sys.stderr.write(header("Validation"))
+ if result["errors"]:
+ sys.stderr.write(f'{len(result["errors"])} Errors found:\n')
+ for err_code, err_desc in result["errors"]:
+ if err_code == "EX02": # Failing examples are printed at the end
+ sys.stderr.write("\tExamples do not pass tests\n")
+ continue
+ sys.stderr.write(f"\t{err_desc}\n")
+ else:
+ sys.stderr.write(f'Docstring for "{func_name}" correct. :)\n')
- return exit_status
+ if result["examples_errs"]:
+ sys.stderr.write(header("Doctests"))
+ sys.stderr.write(result["examples_errs"])
+
+
+def main(func_name, prefix, errors, output_format, ignore_deprecated):
+ """
+ Main entry point. Call the validation for one or for all docstrings.
+ """
+ if func_name is None:
+ return print_validate_all_results(
+ prefix, errors, output_format, ignore_deprecated
+ )
+ else:
+ print_validate_one_results(func_name)
+ return 0
if __name__ == "__main__":
- format_opts = "default", "json", "azure"
+ format_opts = "default", "json", "actions"
func_help = (
"function or method to validate (e.g. pandas.DataFrame.head) "
"if not provided, all docstrings are validated and returned "
@@ -1020,16 +399,16 @@ def header(title, width=80, char="#"):
default="default",
choices=format_opts,
help="format of the output when validating "
- "multiple docstrings (ignored when validating one)."
- f"It can be {str(format_opts)[1:-1]}",
+ "multiple docstrings (ignored when validating one). "
+ f"It can be {str(format_opts)[1:-1]}",
)
argparser.add_argument(
"--prefix",
default=None,
help="pattern for the "
"docstring names, in order to decide which ones "
- 'will be validated. A prefix "pandas.Series.str.'
- "will make the script validate all the docstrings"
+ 'will be validated. A prefix "pandas.Series.str."'
+ "will make the script validate all the docstrings "
"of methods starting by this pattern. It is "
"ignored if parameter function is provided",
)
| - [X] xref #28822
- [x] tests added / passed
- [X] passes `black pandas`
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
We moved our script to numpydoc, and it has already received some improvements there. The numpydoc script covers roughly 80% of our validation, so this PR calls the numpydoc validation first and then runs our custom validation (checks that, for various reasons, were not moved to numpydoc). | https://api.github.com/repos/pandas-dev/pandas/pulls/30746 | 2020-01-06T17:32:37Z | 2020-01-16T02:01:22Z | 2020-01-16T02:01:22Z | 2020-01-16T02:01:23Z |
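The refactored `print_validate_all_results` in the diff above is, at its core, a filter-and-count loop over validation results. A minimal sketch of that loop, with a stubbed result dict standing in for a real `validate_all` call (the stub contents are illustrative, not real pandas output):

```python
# Sketch of the error-filtering loop in print_validate_all_results above.
# The `stub` mapping stands in for what validate_all() would return.
def report(result, errors=None, output_format="default"):
    # GitHub Actions turns stdout lines prefixed with "##[error]" into
    # annotations, which is what the "actions" output format is for.
    prefix = "##[error]" if output_format == "actions" else ""
    exit_status = 0
    lines = []
    for name, res in result.items():
        for err_code, err_desc in res["errors"]:
            if errors and err_code not in errors:
                continue  # report only the requested error codes
            lines.append(
                f'{prefix}{res["file"]}:{res["file_line"]}:{err_code}:{name}:{err_desc}'
            )
            exit_status += 1
    return exit_status, lines

stub = {
    "pandas.DataFrame.head": {
        "file": "frame.py",
        "file_line": 10,
        "errors": [
            ("GL08", "The object does not have a docstring"),
            ("SA01", "See Also section not found"),
        ],
    }
}
status, out = report(stub, errors=["GL08"], output_format="actions")
print(status, out)
```

Filtering after validating keeps the loop simple at the cost of validating everything, the same trade-off a comment removed in the diff above points out.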
DEPR/REGR: Fix pandas.util.testing deprecation | diff --git a/pandas/tests/api/test_api.py b/pandas/tests/api/test_api.py
index bdb4e813023b6..72e2413ce87d9 100644
--- a/pandas/tests/api/test_api.py
+++ b/pandas/tests/api/test_api.py
@@ -1,3 +1,4 @@
+import sys
from typing import List
import pandas as pd
@@ -288,14 +289,20 @@ def test_testing(self):
self.check(testing, self.funcs)
def test_util_testing_deprecated(self):
- s = pd.Series([], dtype="object")
- with tm.assert_produces_warning(FutureWarning) as m:
- import pandas.util.testing as tm2
+ # avoid cache state affecting the test
+ sys.modules.pop("pandas.util.testing", None)
- tm2.assert_series_equal(s, s)
+ with tm.assert_produces_warning(FutureWarning) as m:
+ import pandas.util.testing # noqa: F401
- assert "pandas.testing.assert_series_equal" in str(m[0].message)
+ assert "pandas.util.testing is deprecated" in str(m[0].message)
+ assert "pandas.testing instead" in str(m[0].message)
+ def test_util_testing_deprecated_direct(self):
+ # avoid cache state affecting the test
+ sys.modules.pop("pandas.util.testing", None)
with tm.assert_produces_warning(FutureWarning) as m:
- tm2.DataFrame
- assert "removed" in str(m[0].message)
+ from pandas.util.testing import assert_series_equal # noqa: F401
+
+ assert "pandas.util.testing is deprecated" in str(m[0].message)
+ assert "pandas.testing instead" in str(m[0].message)
diff --git a/pandas/util/__init__.py b/pandas/util/__init__.py
index 231a23e247650..d906c0371d207 100644
--- a/pandas/util/__init__.py
+++ b/pandas/util/__init__.py
@@ -1,4 +1,3 @@
from pandas.util._decorators import Appender, Substitution, cache_readonly # noqa
from pandas.core.util.hashing import hash_array, hash_pandas_object # noqa
-from pandas.util.testing import testing # noqa: F401
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
new file mode 100644
index 0000000000000..af9fe4846b27d
--- /dev/null
+++ b/pandas/util/testing.py
@@ -0,0 +1,12 @@
+import warnings
+
+from pandas._testing import * # noqa
+
+warnings.warn(
+ (
+ "pandas.util.testing is deprecated. Use the functions in the "
+ "public API at pandas.testing instead."
+ ),
+ FutureWarning,
+ stacklevel=2,
+)
diff --git a/pandas/util/testing/__init__.py b/pandas/util/testing/__init__.py
deleted file mode 100644
index 02cbd19a9a888..0000000000000
--- a/pandas/util/testing/__init__.py
+++ /dev/null
@@ -1,144 +0,0 @@
-from pandas.util._depr_module import _DeprecatedModule
-
-_removals = [
- "Categorical",
- "CategoricalIndex",
- "Counter",
- "DataFrame",
- "DatetimeArray",
- "DatetimeIndex",
- "ExtensionArray",
- "FrameOrSeries",
- "Index",
- "IntervalArray",
- "IntervalIndex",
- "K",
- "List",
- "MultiIndex",
- "N",
- "Optional",
- "PeriodArray",
- "RANDS_CHARS",
- "RANDU_CHARS",
- "RNGContext",
- "RangeIndex",
- "Series",
- "SubclassedCategorical",
- "SubclassedDataFrame",
- "SubclassedSeries",
- "TimedeltaArray",
- "Union",
- "all_index_generator",
- "all_timeseries_index_generator",
- "array_equivalent",
- "assert_attr_equal",
- "assert_class_equal",
- "assert_contains_all",
- "assert_copy",
- "assert_dict_equal",
- "assert_is_sorted",
- "assert_is_valid_plot_return_object",
- "assert_produces_warning",
- "bdate_range",
- "box_expected",
- "bz2",
- "can_connect",
- "can_set_locale",
- "cast",
- "close",
- "contextmanager",
- "convert_rows_list_to_csv_str",
- "datetime",
- "decompress_file",
- "ensure_clean",
- "ensure_clean_dir",
- "ensure_safe_environment_variables",
- "equalContents",
- "getCols",
- "getMixedTypeDict",
- "getPeriodData",
- "getSeriesData",
- "getTimeSeriesData",
- "get_locales",
- "gzip",
- "index_subclass_makers_generator",
- "is_bool",
- "is_categorical_dtype",
- "is_datetime64_dtype",
- "is_datetime64tz_dtype",
- "is_extension_array_dtype",
- "is_interval_dtype",
- "is_list_like",
- "is_number",
- "is_period_dtype",
- "is_sequence",
- "is_timedelta64_dtype",
- "isiterable",
- "lzma",
- "makeBoolIndex",
- "makeCategoricalIndex",
- "makeCustomDataframe",
- "makeCustomIndex",
- "makeDataFrame",
- "makeDateIndex",
- "makeFloatIndex",
- "makeFloatSeries",
- "makeIntIndex",
- "makeIntervalIndex",
- "makeMissingCustomDataframe",
- "makeMissingDataframe",
- "makeMixedDataFrame",
- "makeMultiIndex",
- "makeObjectSeries",
- "makePeriodFrame",
- "makePeriodIndex",
- "makePeriodSeries",
- "makeRangeIndex",
- "makeStringIndex",
- "makeStringSeries",
- "makeTimeDataFrame",
- "makeTimeSeries",
- "makeTimedeltaIndex",
- "makeUIntIndex",
- "makeUnicodeIndex",
- "needs_i8_conversion",
- "network",
- "np",
- "optional_args",
- "os",
- "pd",
- "period_array",
- "pprint_thing",
- "raise_assert_detail",
- "rand",
- "randbool",
- "randn",
- "rands",
- "rands_array",
- "randu",
- "randu_array",
- "reset_display_options",
- "reset_testing_mode",
- "rmtree",
- "round_trip_localpath",
- "round_trip_pathlib",
- "round_trip_pickle",
- "set_locale",
- "set_testing_mode",
- "set_timezone",
- "string",
- "take_1d",
- "tempfile",
- "test_parallel",
- "to_array",
- "urlopen",
- "use_numexpr",
- "warnings",
- "with_connectivity_check",
- "with_csv_dialect",
- "wraps",
- "write_to_compressed",
- "zipfile",
-]
-
-testing = _DeprecatedModule("pandas._testing", "pandas.testing", _removals)
| Closes https://github.com/pandas-dev/pandas/issues/30735
This avoids using _DeprecatedModule, which doesn't work for
direct imports from a module. Sorry for the importlib magic, but
I think this is the correct way to do things.
cc @jorisvandenbossche. | https://api.github.com/repos/pandas-dev/pandas/pulls/30745 | 2020-01-06T16:50:02Z | 2020-01-07T00:01:14Z | 2020-01-07T00:01:14Z | 2020-01-07T00:01:18Z |
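The replacement `pandas/util/testing.py` above is just a shim: re-export everything from `pandas._testing`, then warn at import time. A rough stand-in for that pattern, with the module body wrapped in a function so it can be exercised here (the wrapper is illustrative; in the real shim the `warnings.warn` call runs at module top level):

```python
import warnings

def _import_shim():
    # Stand-in for the module body of pandas/util/testing.py above:
    # the real shim does `from pandas._testing import *` and then runs
    # this warning once, at import time.
    warnings.warn(
        "pandas.util.testing is deprecated. Use the functions in the "
        "public API at pandas.testing instead.",
        FutureWarning,
        stacklevel=2,
    )

# Exercise the shim the way the regression test above does.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    _import_shim()
print(len(caught), caught[0].category.__name__)
```

Warning at module import is what makes this work for `from pandas.util.testing import assert_series_equal` as well as `import pandas.util.testing`, which the `_DeprecatedModule` attribute hook could not catch.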
ENH: Support multi row inserts in to_sql when using the sqlite fallback | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index 8cbc95f0349cf..4c14f9735157b 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -19,6 +19,7 @@ Other enhancements
^^^^^^^^^^^^^^^^^^
- :class:`Styler` may now render CSS more efficiently where multiple cells have the same styling (:issue:`30876`)
+- When writing directly to a sqlite connection :func:`to_sql` now supports the ``multi`` method (:issue:`29921`)
-
-
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index 58fed0d18dd4a..d6cf0274bac70 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -1440,7 +1440,7 @@ def _execute_create(self):
for stmt in self.table:
conn.execute(stmt)
- def insert_statement(self):
+ def insert_statement(self, *, num_rows):
names = list(map(str, self.frame.columns))
wld = "?" # wildcard char
escape = _get_valid_sqlite_name
@@ -1451,15 +1451,22 @@ def insert_statement(self):
bracketed_names = [escape(column) for column in names]
col_names = ",".join(bracketed_names)
- wildcards = ",".join([wld] * len(names))
+
+ row_wildcards = ",".join([wld] * len(names))
+ wildcards = ",".join(f"({row_wildcards})" for _ in range(num_rows))
insert_statement = (
- f"INSERT INTO {escape(self.name)} ({col_names}) VALUES ({wildcards})"
+ f"INSERT INTO {escape(self.name)} ({col_names}) VALUES {wildcards}"
)
return insert_statement
def _execute_insert(self, conn, keys, data_iter):
data_list = list(data_iter)
- conn.executemany(self.insert_statement(), data_list)
+ conn.executemany(self.insert_statement(num_rows=1), data_list)
+
+ def _execute_insert_multi(self, conn, keys, data_iter):
+ data_list = list(data_iter)
+ flattened_data = [x for row in data_list for x in row]
+ conn.execute(self.insert_statement(num_rows=len(data_list)), flattened_data)
def _create_table_setup(self):
"""
diff --git a/pandas/tests/io/test_sql.py b/pandas/tests/io/test_sql.py
index 45b3e839a08d1..0ad9f2c1e941f 100644
--- a/pandas/tests/io/test_sql.py
+++ b/pandas/tests/io/test_sql.py
@@ -2148,6 +2148,10 @@ def test_to_sql_replace(self):
def test_to_sql_append(self):
self._to_sql_append()
+ def test_to_sql_method_multi(self):
+ # GH 29921
+ self._to_sql(method="multi")
+
def test_create_and_drop_table(self):
temp_frame = DataFrame(
{"one": [1.0, 2.0, 3.0, 4.0], "two": [4.0, 3.0, 2.0, 1.0]}
| Currently we do not support multi-row inserts into SQLite databases
when `to_sql` is passed `method="multi"`, despite the documentation
suggesting that this is supported.
Adding support is straightforward: it only requires implementing
a single method on the SQLiteTable class, and this PR does just that.
- [x] closes #29921
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/30743 | 2020-01-06T16:17:46Z | 2020-02-11T23:01:02Z | 2020-02-11T23:01:01Z | 2020-04-18T18:00:34Z |
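The `insert_statement(num_rows=...)` change above reduces to building one `VALUES (?,?),(?,?),...` clause and flattening the row tuples into a single parameter list. A minimal sketch against an in-memory SQLite database (the helper below is illustrative, not the pandas internals):

```python
import sqlite3

def insert_statement(table, columns, num_rows):
    # Mirror of the wildcard construction in SQLiteTable.insert_statement:
    # one "(?,...,?)" group per row, joined into a single VALUES clause.
    col_names = ",".join(f'"{c}"' for c in columns)
    row_wildcards = ",".join("?" * len(columns))
    wildcards = ",".join(f"({row_wildcards})" for _ in range(num_rows))
    return f'INSERT INTO "{table}" ({col_names}) VALUES {wildcards}'

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE "t" ("a" INTEGER, "b" INTEGER)')

rows = [(1, 10), (2, 20), (3, 30)]
# As in _execute_insert_multi above: flatten the rows so one execute()
# call can bind every parameter in order.
flattened = [value for row in rows for value in row]
conn.execute(insert_statement("t", ["a", "b"], num_rows=len(rows)), flattened)

print(insert_statement("t", ["a", "b"], num_rows=2))
print(conn.execute('SELECT COUNT(*) FROM "t"').fetchone()[0])
```

Note that SQLite caps the number of bound parameters per statement (historically 999), so chunked inserts still matter for large frames.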
DOC: Capitalize the 'p' in 'pandas code style guide' | diff --git a/doc/source/development/code_style.rst b/doc/source/development/code_style.rst
index 2fc2f1fb6ee8d..1e9128ee75148 100644
--- a/doc/source/development/code_style.rst
+++ b/doc/source/development/code_style.rst
@@ -3,7 +3,7 @@
{{ header }}
=======================
-pandas code style guide
+Pandas code style guide
=======================
.. contents:: Table of contents:
| - [ ] closes #xxxx
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/30739 | 2020-01-06T14:00:34Z | 2020-01-06T19:01:26Z | null | 2020-01-07T14:39:40Z |
CI: Disallow bare pytest raise | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index a90774d2e8ff1..13d2416c61b8a 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -108,6 +108,14 @@ if [[ -z "$CHECK" || "$CHECK" == "lint" ]]; then
fi
RET=$(($RET + $?)) ; echo $MSG "DONE"
+ MSG='Check for use of bare pytest raise' ; echo $MSG
+ if [[ "$GITHUB_ACTIONS" == "true" ]]; then
+ $BASE_DIR/scripts/validate_bare_pytest_raise.py --format="[error]{source_path}:{line_number}:{msg}" pandas/tests/
+ else
+ $BASE_DIR/scripts/validate_bare_pytest_raise.py pandas/tests/
+ fi
+ RET=$(($RET + $?)) ; echo $MSG "DONE"
+
echo "isort --version-number"
isort --version-number
diff --git a/scripts/validate_bare_pytest_raise.py b/scripts/validate_bare_pytest_raise.py
new file mode 100755
index 0000000000000..e17418817a20b
--- /dev/null
+++ b/scripts/validate_bare_pytest_raise.py
@@ -0,0 +1,130 @@
+#!/usr/bin/env python
+"""
+GH #23922
+
+Check for the use of bare pytest raise.
+
+For example:
+
+>>> with pytest.raise(ValueError):
+... # Some code that raises ValueError
+
+Instead of:
+
+>>> with pytest.raise(ValueError, match="foo"):
+... # Some code that raises ValueError
+"""
+
+import argparse
+import os
+import sys
+import token
+import tokenize
+from typing import Generator, List, Tuple
+
+FILE_EXTENSIONS_TO_CHECK = ".py"
+
+
+def main(source_path: str, output_format: str) -> bool:
+ """
+ Main entry point of the script.
+
+ Parameters
+ ----------
+ source_path : str
+ Source path representing path to a file/directory.
+ output_format : str
+ Output format of the script.
+
+ Returns
+ -------
+ bool
+ True if found any bare pytest raises.
+
+ Raises
+ ------
+ ValueError
+ If the `source_path` is not pointing to existing file/directory.
+ """
+ if not os.path.exists(source_path):
+ raise ValueError(
+ "Please enter a valid path, pointing to a valid file/directory."
+ )
+
+ is_failed: bool = False
+
+ msg = "Bare pytest raise has been found."
+
+ if os.path.isfile(source_path):
+ for source_path, line_number in bare_pytest_raise(source_path):
+ is_failed = True
+ print(
+ output_format.format(
+ source_path=source_path, line_number=line_number, msg=msg
+ )
+ )
+
+ for subdir, _, files in os.walk(source_path):
+ for file_name in files:
+ if any(
+ file_name.endswith(extension) for extension in FILE_EXTENSIONS_TO_CHECK
+ ):
+ for source_path, line_number in bare_pytest_raise(
+ os.path.join(subdir, file_name)
+ ):
+ is_failed = True
+ print(
+ output_format.format(
+ source_path=source_path, line_number=line_number, msg=msg
+ )
+ )
+ return is_failed
+
+
+def bare_pytest_raise(source_path: str) -> Generator[Tuple[str, int], None, None]:
+ """
+ Yielding the files and line numbers of files with bare pytest raise.
+
+ Parameters
+ ----------
+ source_path : str
+ File path pointing to a single file.
+
+ Yields
+ ------
+ source_path : str
+ Source file path.
+ line_number : int
+ Line number of bare pytests raise.
+ """
+ with open(source_path, "r") as file_name:
+ tokens: List = list(tokenize.generate_tokens(file_name.readline))
+
+ for counter, current_token in enumerate(tokens, start=1):
+ if current_token[0] == token.NAME and current_token[1] == "raises":
+ for next_token in tokens[counter:]:
+ if next_token[0] == token.NAME and next_token[1] == "match":
+ break
+ if next_token[0] == token.NEWLINE:
+ yield source_path, current_token[2][0]
+ break
+
+
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser(
+ description="Validate there's no use of bare pytest raise"
+ )
+
+ parser.add_argument(
+ "path", nargs="?", default=".", help="Source path of file/directory to check."
+ )
+ parser.add_argument(
+ "--format",
+ "-f",
+ default="{source_path}:{line_number}:{msg}",
+ help="Output format of the error message.",
+ )
+
+ args = parser.parse_args()
+
+ sys.exit(main(source_path=args.path, output_format=args.format))
| - [x] closes #23922
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/30737 | 2020-01-06T13:32:31Z | 2020-01-06T21:37:40Z | null | 2020-01-08T20:31:01Z |
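The validation script above works purely on the token stream: find a `raises` NAME token, then scan forward for a `match` NAME before the logical line ends (a NEWLINE token). A compact sketch of that scan over an in-memory source string (the function name here is illustrative):

```python
import io
import token
import tokenize

def bare_raises_lines(source):
    # Yield line numbers of pytest.raises(...) calls whose logical line
    # ends (NEWLINE token) before a `match` keyword argument appears.
    tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))
    for counter, current in enumerate(tokens, start=1):
        if current.type == token.NAME and current.string == "raises":
            for nxt in tokens[counter:]:
                if nxt.type == token.NAME and nxt.string == "match":
                    break
                if nxt.type == token.NEWLINE:
                    yield current.start[0]
                    break

src = (
    "with pytest.raises(ValueError):\n"
    "    pass\n"
    'with pytest.raises(ValueError, match="foo"):\n'
    "    pass\n"
)
print(list(bare_raises_lines(src)))
```

Because physical line breaks inside parentheses produce NL rather than NEWLINE tokens, a `match=` argument split across a continuation line is still found before the scan stops.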
TYP: _config/config.py && core/{apply,construction}.py | diff --git a/pandas/_config/config.py b/pandas/_config/config.py
index 42df8a84a8c77..cacd6f5454de7 100644
--- a/pandas/_config/config.py
+++ b/pandas/_config/config.py
@@ -51,7 +51,18 @@
from collections import namedtuple
from contextlib import contextmanager
import re
-from typing import Any, Dict, Iterable, List
+from typing import (
+ Any,
+ Callable,
+ Dict,
+ Iterable,
+ List,
+ Optional,
+ Tuple,
+ Type,
+ TypeVar,
+ cast,
+)
import warnings
DeprecatedOption = namedtuple("DeprecatedOption", "key msg rkey removal_ver")
@@ -80,7 +91,7 @@ class OptionError(AttributeError, KeyError):
# User API
-def _get_single_key(pat, silent):
+def _get_single_key(pat: str, silent: bool) -> str:
keys = _select_options(pat)
if len(keys) == 0:
if not silent:
@@ -98,7 +109,7 @@ def _get_single_key(pat, silent):
return key
-def _get_option(pat, silent=False):
+def _get_option(pat: str, silent: bool = False):
key = _get_single_key(pat, silent)
# walk the nested dict
@@ -106,7 +117,7 @@ def _get_option(pat, silent=False):
return root[k]
-def _set_option(*args, **kwargs):
+def _set_option(*args, **kwargs) -> None:
# must at least 1 arg deal with constraints later
nargs = len(args)
if not nargs or nargs % 2 != 0:
@@ -138,7 +149,7 @@ def _set_option(*args, **kwargs):
o.cb(key)
-def _describe_option(pat="", _print_desc=True):
+def _describe_option(pat: str = "", _print_desc: bool = True):
keys = _select_options(pat)
if len(keys) == 0:
@@ -154,7 +165,7 @@ def _describe_option(pat="", _print_desc=True):
return s
-def _reset_option(pat, silent=False):
+def _reset_option(pat: str, silent: bool = False) -> None:
keys = _select_options(pat)
@@ -172,7 +183,7 @@ def _reset_option(pat, silent=False):
_set_option(k, _registered_options[k].defval, silent=silent)
-def get_default_val(pat):
+def get_default_val(pat: str):
key = _get_single_key(pat, silent=True)
return _get_registered_option(key).defval
@@ -180,11 +191,11 @@ def get_default_val(pat):
class DictWrapper:
""" provide attribute-style access to a nested dict"""
- def __init__(self, d, prefix=""):
+ def __init__(self, d: Dict[str, Any], prefix: str = ""):
object.__setattr__(self, "d", d)
object.__setattr__(self, "prefix", prefix)
- def __setattr__(self, key, val):
+ def __setattr__(self, key: str, val: Any) -> None:
prefix = object.__getattribute__(self, "prefix")
if prefix:
prefix += "."
@@ -210,7 +221,7 @@ def __getattr__(self, key: str):
else:
return _get_option(prefix)
- def __dir__(self):
+ def __dir__(self) -> Iterable[str]:
return list(self.d.keys())
@@ -411,23 +422,31 @@ def __exit__(self, *args):
_set_option(pat, val, silent=True)
-def register_option(key: str, defval: object, doc="", validator=None, cb=None):
- """Register an option in the package-wide pandas config object
+def register_option(
+ key: str,
+ defval: object,
+ doc: str = "",
+ validator: Optional[Callable[[Any], Any]] = None,
+ cb: Optional[Callable[[str], Any]] = None,
+) -> None:
+ """
+ Register an option in the package-wide pandas config object
Parameters
----------
- key - a fully-qualified key, e.g. "x.y.option - z".
- defval - the default value of the option
- doc - a string description of the option
- validator - a function of a single argument, should raise `ValueError` if
- called with a value which is not a legal value for the option.
- cb - a function of a single argument "key", which is called
- immediately after an option value is set/reset. key is
- the full name of the option.
-
- Returns
- -------
- Nothing.
+ key : str
+ Fully-qualified key, e.g. "x.y.option - z".
+ defval : object
+ Default value of the option.
+ doc : str
+ Description of the option.
+ validator : Callable, optional
+ Function of a single argument, should raise `ValueError` if
+ called with a value which is not a legal value for the option.
+ cb
+ a function of a single argument "key", which is called
+ immediately after an option value is set/reset. key is
+ the full name of the option.
Raises
------
@@ -480,7 +499,9 @@ def register_option(key: str, defval: object, doc="", validator=None, cb=None):
)
-def deprecate_option(key, msg=None, rkey=None, removal_ver=None):
+def deprecate_option(
+ key: str, msg: Optional[str] = None, rkey: Optional[str] = None, removal_ver=None
+) -> None:
"""
Mark option `key` as deprecated, if code attempts to access this option,
a warning will be produced, using `msg` if given, or a default message
@@ -493,32 +514,27 @@ def deprecate_option(key, msg=None, rkey=None, removal_ver=None):
Parameters
----------
- key - the name of the option to be deprecated. must be a fully-qualified
- option name (e.g "x.y.z.rkey").
-
- msg - (Optional) a warning message to output when the key is referenced.
- if no message is given a default message will be emitted.
-
- rkey - (Optional) the name of an option to reroute access to.
- If specified, any referenced `key` will be re-routed to `rkey`
- including set/get/reset.
- rkey must be a fully-qualified option name (e.g "x.y.z.rkey").
- used by the default message if no `msg` is specified.
-
- removal_ver - (Optional) specifies the version in which this option will
- be removed. used by the default message if no `msg`
- is specified.
-
- Returns
- -------
- Nothing
+ key : str
+ Name of the option to be deprecated.
+ must be a fully-qualified option name (e.g "x.y.z.rkey").
+ msg : str, optional
+ Warning message to output when the key is referenced.
+ if no message is given a default message will be emitted.
+ rkey : str, optional
+ Name of an option to reroute access to.
+ If specified, any referenced `key` will be
+ re-routed to `rkey` including set/get/reset.
+ rkey must be a fully-qualified option name (e.g "x.y.z.rkey").
+ used by the default message if no `msg` is specified.
+ removal_ver : optional
+ Specifies the version in which this option will
+ be removed. used by the default message if no `msg` is specified.
Raises
------
- OptionError - if key has already been deprecated.
-
+ OptionError
+ If the specified key has already been deprecated.
"""
-
key = key.lower()
if key in _deprecated_options:
@@ -531,7 +547,7 @@ def deprecate_option(key, msg=None, rkey=None, removal_ver=None):
# functions internal to the module
-def _select_options(pat):
+def _select_options(pat: str) -> List[str]:
"""returns a list of keys matching `pat`
if pat=="all", returns all registered options
@@ -549,7 +565,7 @@ def _select_options(pat):
return [k for k in keys if re.search(pat, k, re.I)]
-def _get_root(key):
+def _get_root(key: str) -> Tuple[Dict[str, Any], str]:
path = key.split(".")
cursor = _global_config
for p in path[:-1]:
@@ -557,14 +573,14 @@ def _get_root(key):
return cursor, path[-1]
-def _is_deprecated(key):
+def _is_deprecated(key: str) -> bool:
""" Returns True if the given option has been deprecated """
key = key.lower()
return key in _deprecated_options
-def _get_deprecated_option(key):
+def _get_deprecated_option(key: str):
"""
Retrieves the metadata for a deprecated option, if `key` is deprecated.
@@ -581,7 +597,7 @@ def _get_deprecated_option(key):
return d
-def _get_registered_option(key):
+def _get_registered_option(key: str):
"""
Retrieves the option metadata if `key` is a registered option.
@@ -592,7 +608,7 @@ def _get_registered_option(key):
return _registered_options.get(key)
-def _translate_key(key):
+def _translate_key(key: str) -> str:
"""
if key id deprecated and a replacement key defined, will return the
replacement key, otherwise returns `key` as - is
@@ -605,7 +621,7 @@ def _translate_key(key):
return key
-def _warn_if_deprecated(key):
+def _warn_if_deprecated(key: str) -> bool:
"""
Checks if `key` is a deprecated option and if so, prints a warning.
@@ -633,7 +649,7 @@ def _warn_if_deprecated(key):
return False
-def _build_option_description(k):
+def _build_option_description(k: str) -> str:
""" Builds a formatted description of a registered option and prints it """
o = _get_registered_option(k)
@@ -658,7 +674,7 @@ def _build_option_description(k):
return s
-def pp_options_list(keys, width=80, _print=False):
+def pp_options_list(keys: Iterable[str], width=80, _print: bool = False):
""" Builds a concise listing of available options, grouped by prefix """
from textwrap import wrap
@@ -696,6 +712,9 @@ def pp(name: str, ks: Iterable[str]) -> List[str]:
#
# helpers
+FuncType = Callable[..., Any]
+F = TypeVar("F", bound=FuncType)
+
@contextmanager
def config_prefix(prefix):
@@ -727,12 +746,12 @@ def config_prefix(prefix):
global register_option, get_option, set_option, reset_option
- def wrap(func):
- def inner(key, *args, **kwds):
+ def wrap(func: F) -> F:
+ def inner(key: str, *args, **kwds):
pkey = f"{prefix}.{key}"
return func(pkey, *args, **kwds)
- return inner
+ return cast(F, inner)
_register_option = register_option
_get_option = get_option
@@ -750,7 +769,7 @@ def inner(key, *args, **kwds):
# arg in register_option
-def is_type_factory(_type):
+def is_type_factory(_type: Type[Any]) -> Callable[[Any], None]:
"""
Parameters
@@ -764,14 +783,14 @@ def is_type_factory(_type):
"""
- def inner(x):
+ def inner(x) -> None:
if type(x) != _type:
raise ValueError(f"Value must have type '{_type}'")
return inner
-def is_instance_factory(_type):
+def is_instance_factory(_type) -> Callable[[Any], None]:
"""
Parameters
@@ -791,19 +810,19 @@ def is_instance_factory(_type):
else:
type_repr = f"'{_type}'"
- def inner(x):
+ def inner(x) -> None:
if not isinstance(x, _type):
raise ValueError(f"Value must be an instance of {type_repr}")
return inner
-def is_one_of_factory(legal_values):
+def is_one_of_factory(legal_values) -> Callable[[Any], None]:
callables = [c for c in legal_values if callable(c)]
legal_values = [c for c in legal_values if not callable(c)]
- def inner(x):
+ def inner(x) -> None:
if x not in legal_values:
if not any(c(x) for c in callables):
@@ -817,7 +836,7 @@ def inner(x):
return inner
-def is_nonnegative_int(value):
+def is_nonnegative_int(value: Optional[int]) -> None:
"""
Verify that value is None or a positive int.
@@ -852,7 +871,7 @@ def is_nonnegative_int(value):
is_text = is_instance_factory((str, bytes))
-def is_callable(obj):
+def is_callable(obj) -> bool:
"""
Parameters
diff --git a/pandas/core/apply.py b/pandas/core/apply.py
index 14a3c3c008e92..ca1be3154757a 100644
--- a/pandas/core/apply.py
+++ b/pandas/core/apply.py
@@ -1,10 +1,11 @@
import abc
import inspect
-from typing import TYPE_CHECKING, Any, Dict, Iterator, Tuple, Type, Union
+from typing import TYPE_CHECKING, Any, Dict, Iterator, Optional, Tuple, Type, Union
import numpy as np
from pandas._libs import reduction as libreduction
+from pandas._typing import Axis
from pandas.util._decorators import cache_readonly
from pandas.core.dtypes.common import (
@@ -26,9 +27,9 @@
def frame_apply(
obj: "DataFrame",
func,
- axis=0,
+ axis: Axis = 0,
raw: bool = False,
- result_type=None,
+ result_type: Optional[str] = None,
ignore_failures: bool = False,
args=None,
kwds=None,
@@ -87,7 +88,7 @@ def __init__(
obj: "DataFrame",
func,
raw: bool,
- result_type,
+ result_type: Optional[str],
ignore_failures: bool,
args,
kwds,
diff --git a/pandas/core/construction.py b/pandas/core/construction.py
index 203ef3ec75c8f..f947a1fda49f1 100644
--- a/pandas/core/construction.py
+++ b/pandas/core/construction.py
@@ -334,7 +334,7 @@ def array(
return result
-def extract_array(obj, extract_numpy=False):
+def extract_array(obj, extract_numpy: bool = False):
"""
Extract the ndarray or ExtensionArray from a Series or Index.
| - [ ] closes #xxxx
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/30734 | 2020-01-06T13:10:35Z | 2020-01-16T20:30:54Z | 2020-01-16T20:30:54Z | 2020-01-16T23:41:26Z |
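Several of the helpers typed in this diff (`is_type_factory`, `is_instance_factory`, `is_one_of_factory`) are closure-based validator factories. A minimal self-contained sketch of the pattern, mirroring the code shown above (illustrative only, not the `pandas._config` module itself):

```python
from typing import Any, Callable, Type


def is_type_factory(_type: Type[Any]) -> Callable[[Any], None]:
    # Return a validator that raises unless the value's type matches exactly.
    # Note this is type identity, not isinstance(): a bool would not pass
    # a validator built for int, because type(True) is bool.
    def inner(x: Any) -> None:
        if type(x) != _type:
            raise ValueError(f"Value must have type '{_type}'")

    return inner


# Usage: the returned callable is suitable as a `validator` argument.
is_int = is_type_factory(int)
is_int(3)  # passes silently; a non-int raises ValueError
```

The annotations added in the PR (`Callable[[Any], None]`) encode exactly this shape: a validator takes any value and either returns `None` or raises.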
Fix read_json category dtype | diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
index 7f2aab569ab71..8d44a54fbd988 100644
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -12,7 +12,12 @@
from pandas._typing import JSONSerializable
from pandas.errors import AbstractMethodError
-from pandas.core.dtypes.common import ensure_str, is_period_dtype
+from pandas.core.dtypes.common import (
+ pandas_dtype,
+ ensure_str,
+ is_period_dtype,
+ is_categorical_dtype,
+)
from pandas import DataFrame, MultiIndex, Series, isna, to_datetime
from pandas.core.construction import create_series_with_explicit_dtype
@@ -892,7 +897,8 @@ def _try_convert_data(self, name, data, use_dtypes=True, convert_dates=True):
)
if dtype is not None:
try:
- dtype = np.dtype(dtype)
+ if not is_categorical_dtype(dtype):
+ dtype = pandas_dtype(dtype)
return data.astype(dtype), True
except (TypeError, ValueError):
return data, False
diff --git a/pandas/tests/io/json/test_pandas.py b/pandas/tests/io/json/test_pandas.py
index 09d8a1d3f10ea..686a0ce99f9aa 100644
--- a/pandas/tests/io/json/test_pandas.py
+++ b/pandas/tests/io/json/test_pandas.py
@@ -8,6 +8,7 @@
import pytest
from pandas.compat import is_platform_32bit, is_platform_windows
+from pandas.core.dtypes.common import is_categorical
import pandas.util._test_decorators as td
import pandas as pd
@@ -1197,6 +1198,17 @@ def test_read_local_jsonl(self):
expected = DataFrame([[1, 2], [1, 2]], columns=["a", "b"])
tm.assert_frame_equal(result, expected)
+ def test_read_json_category_dtype(self):
+ json = (
+ '{"a": 0, "b": "A"}\n'
+ '{"a": 1, "b": "B"}\n'
+ '{"a": 2, "b": "A"}\n'
+ '{"a": 3, "b": "B"}\n'
+ )
+ json = StringIO(json)
+ result = read_json(json, lines=True, dtype={"b": "category"})
+ assert is_categorical(result["b"])
+
def test_read_jsonl_unicode_chars(self):
# GH15132: non-ascii unicode characters
# \u201d == RIGHT DOUBLE QUOTATION MARK
| - [X] closes #21892
- [ ] tests added / passed
- [X] passes `black pandas`
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] Add support for "category" dtype in read_json
| https://api.github.com/repos/pandas-dev/pandas/pulls/30728 | 2020-01-06T10:49:08Z | 2020-05-08T16:44:34Z | null | 2020-05-08T16:44:35Z |
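The one-line fix above works because `np.dtype` only understands numpy dtype names, while `pandas_dtype` also resolves extension names such as `"category"`. A minimal pure-Python sketch of that dispatch (the tiny dtype sets and the `resolve_dtype` name are illustrative, not pandas APIs):

```python
# Illustrative only: pandas registers many more extension dtypes than this.
EXTENSION_DTYPES = {"category"}
NUMPY_DTYPE_NAMES = {"int64", "float64", "bool", "object"}


def resolve_dtype(dtype: str):
    # Route extension dtype names around the numpy-only path; np.dtype()
    # raises TypeError on names like "category", which is the failure
    # the PR's `is_categorical_dtype` check sidesteps.
    if dtype in EXTENSION_DTYPES:
        return ("extension", dtype)
    if dtype in NUMPY_DTYPE_NAMES:
        return ("numpy", dtype)
    raise TypeError(f"data type {dtype!r} not understood")
```

With this routing, `dtype={"b": "category"}` reaches `astype` intact instead of being rejected by the numpy dtype constructor.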
DOC: fix see also in docstring of check_bool_array_indexer | diff --git a/pandas/core/indexers.py b/pandas/core/indexers.py
index 0f932f7b849e3..4d45769d2fea9 100644
--- a/pandas/core/indexers.py
+++ b/pandas/core/indexers.py
@@ -274,14 +274,14 @@ def check_bool_array_indexer(array: AnyArrayLike, mask: AnyArrayLike) -> np.ndar
See Also
--------
- api.extensions.is_bool_indexer : Check if `key` is a boolean indexer.
+ api.types.is_bool_dtype : Check if `key` is of boolean dtype.
Examples
--------
A boolean ndarray is returned when the arguments are all valid.
>>> mask = pd.array([True, False])
- >>> arr = pd.Series([1, 2])
+ >>> arr = pd.array([1, 2])
>>> pd.api.extensions.check_bool_array_indexer(arr, mask)
array([ True, False])
| xref https://github.com/pandas-dev/pandas/pull/30308#pullrequestreview-338522525 | https://api.github.com/repos/pandas-dev/pandas/pulls/30725 | 2020-01-06T08:55:41Z | 2020-01-06T11:55:03Z | 2020-01-06T11:55:03Z | 2020-01-06T12:02:17Z |
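For context on the function whose docstring is fixed here: `check_bool_array_indexer` validates a boolean mask against the array being indexed. A rough pure-Python sketch of that contract (simplified — the real function also handles NA values in `BooleanArray` masks and returns an ndarray):

```python
def check_bool_mask(array, mask):
    # Sketch of the validation contract: the mask must be the same length
    # as the array and contain only booleans; returns a plain list of bools.
    if len(mask) != len(array):
        raise IndexError(
            f"Item wrong length {len(mask)} instead of {len(array)}."
        )
    if not all(isinstance(m, bool) for m in mask):
        raise TypeError("mask must contain only boolean values")
    return [bool(m) for m in mask]
```

The docstring example change (`pd.Series` → `pd.array`) reflects that the function operates on array-likes, not specifically on `Series`.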
annotations | diff --git a/pandas/_libs/interval.pyx b/pandas/_libs/interval.pyx
index 4293108ea7ec2..1166768472449 100644
--- a/pandas/_libs/interval.pyx
+++ b/pandas/_libs/interval.pyx
@@ -326,7 +326,7 @@ cdef class Interval(IntervalMixin):
def __hash__(self):
return hash((self.left, self.right, self.closed))
- def __contains__(self, key):
+ def __contains__(self, key) -> bool:
if _interval_like(key):
raise TypeError("__contains__ not defined for two intervals")
return ((self.left < key if self.open_left else self.left <= key) and
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 136c7fa32a6e7..7386c9d0ef1de 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -1874,7 +1874,7 @@ def __iter__(self):
"""
return iter(self._internal_get_values().tolist())
- def __contains__(self, key):
+ def __contains__(self, key) -> bool:
"""
Returns True if `key` is in this Categorical.
"""
@@ -1884,7 +1884,7 @@ def __contains__(self, key):
return contains(self, key, container=self._codes)
- def _tidy_repr(self, max_vals=10, footer=True):
+ def _tidy_repr(self, max_vals=10, footer=True) -> str:
""" a short repr displaying only max_vals and an optional (but default
footer)
"""
@@ -1921,7 +1921,7 @@ def _repr_categories(self):
category_strs = [x.strip() for x in category_strs]
return category_strs
- def _repr_categories_info(self):
+ def _repr_categories_info(self) -> str:
"""
Returns a string representation of the footer.
"""
@@ -1951,11 +1951,11 @@ def _repr_categories_info(self):
# replace to simple save space by
return levheader + "[" + levstring.replace(" < ... < ", " ... ") + "]"
- def _repr_footer(self):
+ def _repr_footer(self) -> str:
info = self._repr_categories_info()
return f"Length: {len(self)}\n{info}"
- def _get_repr(self, length=True, na_rep="NaN", footer=True):
+ def _get_repr(self, length=True, na_rep="NaN", footer=True) -> str:
from pandas.io.formats import format as fmt
formatter = fmt.CategoricalFormatter(
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 0c5f451c4f07b..84504e2103860 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -147,7 +147,7 @@ def __array_wrap__(self, result, context=None):
# ------------------------------------------------------------------------
- def equals(self, other):
+ def equals(self, other) -> bool:
"""
Determines if two Index objects contain the same elements.
"""
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 0d2e2fbfd8ddd..d4425764f4dbd 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -738,16 +738,16 @@ def combine(self, blocks, copy=True):
return type(self)(new_blocks, axes, do_integrity_check=False)
- def get_slice(self, slobj, axis=0):
+ def get_slice(self, slobj: slice, axis: int = 0):
if axis >= self.ndim:
raise IndexError("Requested axis not found in manager")
if axis == 0:
new_blocks = self._slice_take_blocks_ax0(slobj)
else:
- slicer = [slice(None)] * (axis + 1)
- slicer[axis] = slobj
- slicer = tuple(slicer)
+ _slicer = [slice(None)] * (axis + 1)
+ _slicer[axis] = slobj
+ slicer = tuple(_slicer)
new_blocks = [blk.getitem_block(slicer) for blk in self.blocks]
new_axes = list(self.axes)
@@ -757,11 +757,11 @@ def get_slice(self, slobj, axis=0):
bm._consolidate_inplace()
return bm
- def __contains__(self, item):
+ def __contains__(self, item) -> bool:
return item in self.items
@property
- def nblocks(self):
+ def nblocks(self) -> int:
return len(self.blocks)
def copy(self, deep=True):
diff --git a/pandas/plotting/_misc.py b/pandas/plotting/_misc.py
index 7f208436ddc4a..ccd42d3940431 100644
--- a/pandas/plotting/_misc.py
+++ b/pandas/plotting/_misc.py
@@ -453,7 +453,7 @@ def __delitem__(self, key):
raise ValueError(f"Cannot remove default parameter {key}")
return super().__delitem__(key)
- def __contains__(self, key):
+ def __contains__(self, key) -> bool:
key = self._get_canonical_key(key)
return super().__contains__(key)
diff --git a/pandas/tests/dtypes/test_inference.py b/pandas/tests/dtypes/test_inference.py
index d5ca35dce1c85..d022b0e97877a 100644
--- a/pandas/tests/dtypes/test_inference.py
+++ b/pandas/tests/dtypes/test_inference.py
@@ -240,7 +240,7 @@ def __getitem__(self, key):
if has_contains:
- def __contains__(self, key):
+ def __contains__(self, key) -> bool:
return self.d.__contains__(key)
d = DictLike({1: 2})
diff --git a/pandas/tests/frame/methods/test_to_records.py b/pandas/tests/frame/methods/test_to_records.py
index 8099c881c987e..54a3affdc3024 100644
--- a/pandas/tests/frame/methods/test_to_records.py
+++ b/pandas/tests/frame/methods/test_to_records.py
@@ -326,7 +326,7 @@ def __init__(self, **kwargs):
def __getitem__(self, key):
return self.d.__getitem__(key)
- def __contains__(self, key):
+ def __contains__(self, key) -> bool:
return key in self.d
def keys(self):
| grepped for `__contains__`, annotated those where possible and a few things around it | https://api.github.com/repos/pandas-dev/pandas/pulls/30724 | 2020-01-06T02:59:02Z | 2020-01-06T13:34:17Z | 2020-01-06T13:34:16Z | 2020-01-06T15:26:17Z |
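The `-> bool` annotations added above document a contract rather than enforce one: the `in` operator calls `__contains__` and truth-coerces whatever comes back. A small self-contained example mirroring the `DictLike` test classes touched in this diff:

```python
class DictLike:
    def __init__(self, d):
        self.d = d

    def __contains__(self, key) -> bool:
        # `in` invokes __contains__ and coerces the result to bool, so the
        # annotation records the intended return type for type checkers.
        return key in self.d


dl = DictLike({1: 2})
assert 1 in dl and 3 not in dl
```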
REF: Create test_encoding file for CSV | diff --git a/pandas/tests/io/parser/test_common.py b/pandas/tests/io/parser/test_common.py
index 0e408df625ccd..4c02a37b66455 100644
--- a/pandas/tests/io/parser/test_common.py
+++ b/pandas/tests/io/parser/test_common.py
@@ -5,7 +5,7 @@
import codecs
import csv
from datetime import datetime
-from io import BytesIO, StringIO
+from io import StringIO
import os
import platform
from tempfile import TemporaryFile
@@ -69,17 +69,6 @@ def _set_noconvert_columns(self):
tm.assert_frame_equal(result, expected)
-def test_bytes_io_input(all_parsers):
- encoding = "cp1255"
- parser = all_parsers
-
- data = BytesIO("שלום:1234\n562:123".encode(encoding))
- result = parser.read_csv(data, sep=":", encoding=encoding)
-
- expected = DataFrame([[562, 123]], columns=["שלום", "1234"])
- tm.assert_frame_equal(result, expected)
-
-
def test_empty_decimal_marker(all_parsers):
data = """A|B|C
1|2,334|5
@@ -316,15 +305,6 @@ def test_read_csv_no_index_name(all_parsers, csv_dir_path):
tm.assert_frame_equal(result, expected)
-def test_read_csv_unicode(all_parsers):
- parser = all_parsers
- data = BytesIO("\u0141aski, Jan;1".encode("utf-8"))
-
- result = parser.read_csv(data, sep=";", encoding="utf-8", header=None)
- expected = DataFrame([["\u0141aski, Jan", 1]])
- tm.assert_frame_equal(result, expected)
-
-
def test_read_csv_wrong_num_columns(all_parsers):
# Too few columns.
data = """A,B,C,D,E,F
@@ -1064,59 +1044,6 @@ def test_skip_initial_space(all_parsers):
tm.assert_frame_equal(result, expected)
-@pytest.mark.parametrize("sep", [",", "\t"])
-@pytest.mark.parametrize("encoding", ["utf-16", "utf-16le", "utf-16be"])
-def test_utf16_bom_skiprows(all_parsers, sep, encoding):
- # see gh-2298
- parser = all_parsers
- data = """skip this
-skip this too
-A,B,C
-1,2,3
-4,5,6""".replace(
- ",", sep
- )
- path = "__{}__.csv".format(tm.rands(10))
- kwargs = dict(sep=sep, skiprows=2)
- utf8 = "utf-8"
-
- with tm.ensure_clean(path) as path:
- from io import TextIOWrapper
-
- bytes_data = data.encode(encoding)
-
- with open(path, "wb") as f:
- f.write(bytes_data)
-
- bytes_buffer = BytesIO(data.encode(utf8))
- bytes_buffer = TextIOWrapper(bytes_buffer, encoding=utf8)
-
- result = parser.read_csv(path, encoding=encoding, **kwargs)
- expected = parser.read_csv(bytes_buffer, encoding=utf8, **kwargs)
-
- bytes_buffer.close()
- tm.assert_frame_equal(result, expected)
-
-
-def test_utf16_example(all_parsers, csv_dir_path):
- path = os.path.join(csv_dir_path, "utf16_ex.txt")
- parser = all_parsers
- result = parser.read_csv(path, encoding="utf-16", sep="\t")
- assert len(result) == 50
-
-
-def test_unicode_encoding(all_parsers, csv_dir_path):
- path = os.path.join(csv_dir_path, "unicode_series.csv")
- parser = all_parsers
-
- result = parser.read_csv(path, header=None, encoding="latin-1")
- result = result.set_index(0)
- got = result[1][1632]
-
- expected = "\xc1 k\xf6ldum klaka (Cold Fever) (1994)"
- assert got == expected
-
-
def test_trailing_delimiters(all_parsers):
# see gh-2442
data = """A,B,C
@@ -1915,39 +1842,6 @@ def test_null_byte_char(all_parsers):
parser.read_csv(StringIO(data), names=names)
-@pytest.mark.parametrize(
- "data,kwargs,expected",
- [
- # Basic test
- ("a\n1", dict(), DataFrame({"a": [1]})),
- # "Regular" quoting
- ('"a"\n1', dict(quotechar='"'), DataFrame({"a": [1]})),
- # Test in a data row instead of header
- ("b\n1", dict(names=["a"]), DataFrame({"a": ["b", "1"]})),
- # Test in empty data row with skipping
- ("\n1", dict(names=["a"], skip_blank_lines=True), DataFrame({"a": [1]})),
- # Test in empty data row without skipping
- (
- "\n1",
- dict(names=["a"], skip_blank_lines=False),
- DataFrame({"a": [np.nan, 1]}),
- ),
- ],
-)
-def test_utf8_bom(all_parsers, data, kwargs, expected):
- # see gh-4793
- parser = all_parsers
- bom = "\ufeff"
- utf8 = "utf-8"
-
- def _encode_data_with_bom(_data):
- bom_data = (bom + _data).encode(utf8)
- return BytesIO(bom_data)
-
- result = parser.read_csv(_encode_data_with_bom(data), encoding=utf8, **kwargs)
- tm.assert_frame_equal(result, expected)
-
-
def test_temporary_file(all_parsers):
# see gh-13398
parser = all_parsers
@@ -1965,20 +1859,6 @@ def test_temporary_file(all_parsers):
tm.assert_frame_equal(result, expected)
-@pytest.mark.parametrize("byte", [8, 16])
-@pytest.mark.parametrize("fmt", ["utf-{0}", "utf_{0}", "UTF-{0}", "UTF_{0}"])
-def test_read_csv_utf_aliases(all_parsers, byte, fmt):
- # see gh-13549
- expected = DataFrame({"mb_num": [4.8], "multibyte": ["test"]})
- parser = all_parsers
-
- encoding = fmt.format(byte)
- data = "mb_num,multibyte\n4.8,test".encode(encoding)
-
- result = parser.read_csv(BytesIO(data), encoding=encoding)
- tm.assert_frame_equal(result, expected)
-
-
def test_internal_eof_byte(all_parsers):
# see gh-5500
parser = all_parsers
@@ -2038,30 +1918,6 @@ def test_file_handles_with_open(all_parsers, csv1):
assert not f.closed
-@pytest.mark.parametrize(
- "fname,encoding",
- [
- ("test1.csv", "utf-8"),
- ("unicode_series.csv", "latin-1"),
- ("sauron.SHIFT_JIS.csv", "shiftjis"),
- ],
-)
-def test_binary_mode_file_buffers(all_parsers, csv_dir_path, fname, encoding):
- # gh-23779: Python csv engine shouldn't error on files opened in binary.
- parser = all_parsers
-
- fpath = os.path.join(csv_dir_path, fname)
- expected = parser.read_csv(fpath, encoding=encoding)
-
- with open(fpath, mode="r", encoding=encoding) as fa:
- result = parser.read_csv(fa)
- tm.assert_frame_equal(expected, result)
-
- with open(fpath, mode="rb") as fb:
- result = parser.read_csv(fb, encoding=encoding)
- tm.assert_frame_equal(expected, result)
-
-
def test_invalid_file_buffer_class(all_parsers):
# see gh-15337
class InvalidBuffer:
diff --git a/pandas/tests/io/parser/test_encoding.py b/pandas/tests/io/parser/test_encoding.py
new file mode 100644
index 0000000000000..2540dd9b19fce
--- /dev/null
+++ b/pandas/tests/io/parser/test_encoding.py
@@ -0,0 +1,157 @@
+"""
+Tests encoding functionality during parsing
+for all of the parsers defined in parsers.py
+"""
+
+from io import BytesIO
+import os
+
+import numpy as np
+import pytest
+
+from pandas import DataFrame
+import pandas._testing as tm
+
+
+def test_bytes_io_input(all_parsers):
+ encoding = "cp1255"
+ parser = all_parsers
+
+ data = BytesIO("שלום:1234\n562:123".encode(encoding))
+ result = parser.read_csv(data, sep=":", encoding=encoding)
+
+ expected = DataFrame([[562, 123]], columns=["שלום", "1234"])
+ tm.assert_frame_equal(result, expected)
+
+
+def test_read_csv_unicode(all_parsers):
+ parser = all_parsers
+ data = BytesIO("\u0141aski, Jan;1".encode("utf-8"))
+
+ result = parser.read_csv(data, sep=";", encoding="utf-8", header=None)
+ expected = DataFrame([["\u0141aski, Jan", 1]])
+ tm.assert_frame_equal(result, expected)
+
+
+@pytest.mark.parametrize("sep", [",", "\t"])
+@pytest.mark.parametrize("encoding", ["utf-16", "utf-16le", "utf-16be"])
+def test_utf16_bom_skiprows(all_parsers, sep, encoding):
+ # see gh-2298
+ parser = all_parsers
+ data = """skip this
+skip this too
+A,B,C
+1,2,3
+4,5,6""".replace(
+ ",", sep
+ )
+ path = "__{}__.csv".format(tm.rands(10))
+ kwargs = dict(sep=sep, skiprows=2)
+ utf8 = "utf-8"
+
+ with tm.ensure_clean(path) as path:
+ from io import TextIOWrapper
+
+ bytes_data = data.encode(encoding)
+
+ with open(path, "wb") as f:
+ f.write(bytes_data)
+
+ bytes_buffer = BytesIO(data.encode(utf8))
+ bytes_buffer = TextIOWrapper(bytes_buffer, encoding=utf8)
+
+ result = parser.read_csv(path, encoding=encoding, **kwargs)
+ expected = parser.read_csv(bytes_buffer, encoding=utf8, **kwargs)
+
+ bytes_buffer.close()
+ tm.assert_frame_equal(result, expected)
+
+
+def test_utf16_example(all_parsers, csv_dir_path):
+ path = os.path.join(csv_dir_path, "utf16_ex.txt")
+ parser = all_parsers
+ result = parser.read_csv(path, encoding="utf-16", sep="\t")
+ assert len(result) == 50
+
+
+def test_unicode_encoding(all_parsers, csv_dir_path):
+ path = os.path.join(csv_dir_path, "unicode_series.csv")
+ parser = all_parsers
+
+ result = parser.read_csv(path, header=None, encoding="latin-1")
+ result = result.set_index(0)
+ got = result[1][1632]
+
+ expected = "\xc1 k\xf6ldum klaka (Cold Fever) (1994)"
+ assert got == expected
+
+
+@pytest.mark.parametrize(
+ "data,kwargs,expected",
+ [
+ # Basic test
+ ("a\n1", dict(), DataFrame({"a": [1]})),
+ # "Regular" quoting
+ ('"a"\n1', dict(quotechar='"'), DataFrame({"a": [1]})),
+ # Test in a data row instead of header
+ ("b\n1", dict(names=["a"]), DataFrame({"a": ["b", "1"]})),
+ # Test in empty data row with skipping
+ ("\n1", dict(names=["a"], skip_blank_lines=True), DataFrame({"a": [1]})),
+ # Test in empty data row without skipping
+ (
+ "\n1",
+ dict(names=["a"], skip_blank_lines=False),
+ DataFrame({"a": [np.nan, 1]}),
+ ),
+ ],
+)
+def test_utf8_bom(all_parsers, data, kwargs, expected):
+ # see gh-4793
+ parser = all_parsers
+ bom = "\ufeff"
+ utf8 = "utf-8"
+
+ def _encode_data_with_bom(_data):
+ bom_data = (bom + _data).encode(utf8)
+ return BytesIO(bom_data)
+
+ result = parser.read_csv(_encode_data_with_bom(data), encoding=utf8, **kwargs)
+ tm.assert_frame_equal(result, expected)
+
+
+@pytest.mark.parametrize("byte", [8, 16])
+@pytest.mark.parametrize("fmt", ["utf-{0}", "utf_{0}", "UTF-{0}", "UTF_{0}"])
+def test_read_csv_utf_aliases(all_parsers, byte, fmt):
+ # see gh-13549
+ expected = DataFrame({"mb_num": [4.8], "multibyte": ["test"]})
+ parser = all_parsers
+
+ encoding = fmt.format(byte)
+ data = "mb_num,multibyte\n4.8,test".encode(encoding)
+
+ result = parser.read_csv(BytesIO(data), encoding=encoding)
+ tm.assert_frame_equal(result, expected)
+
+
+@pytest.mark.parametrize(
+ "fname,encoding",
+ [
+ ("test1.csv", "utf-8"),
+ ("unicode_series.csv", "latin-1"),
+ ("sauron.SHIFT_JIS.csv", "shiftjis"),
+ ],
+)
+def test_binary_mode_file_buffers(all_parsers, csv_dir_path, fname, encoding):
+ # gh-23779: Python csv engine shouldn't error on files opened in binary.
+ parser = all_parsers
+
+ fpath = os.path.join(csv_dir_path, fname)
+ expected = parser.read_csv(fpath, encoding=encoding)
+
+ with open(fpath, mode="r", encoding=encoding) as fa:
+ result = parser.read_csv(fa)
+ tm.assert_frame_equal(expected, result)
+
+ with open(fpath, mode="rb") as fb:
+ result = parser.read_csv(fb, encoding=encoding)
+ tm.assert_frame_equal(expected, result)
| This is 99.99% copy and paste | https://api.github.com/repos/pandas-dev/pandas/pulls/30723 | 2020-01-06T01:44:32Z | 2020-01-06T13:24:41Z | 2020-01-06T13:24:41Z | 2020-01-06T13:24:47Z |
BUG: PeriodArray comparisons inconsistent with Period comparisons | diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
index b2b6fe393f069..a9a0d89ed01aa 100755
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -837,6 +837,7 @@ Datetimelike
- Bug in :class:`DatetimeArray`, :class:`TimedeltaArray`, and :class:`PeriodArray` where inplace addition and subtraction did not actually operate inplace (:issue:`24115`)
- Bug in :func:`pandas.to_datetime` when called with ``Series`` storing ``IntegerArray`` raising ``TypeError`` instead of returning ``Series`` (:issue:`30050`)
- Bug in :func:`date_range` with custom business hours as ``freq`` and given number of ``periods`` (:issue:`30593`)
+- Bug in :class:`PeriodIndex` comparisons incorrectly casting integers to :class:`Period` objects, inconsistent with the :class:`Period` comparison behavior (:issue:`30722`)
Timedelta
^^^^^^^^^
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index 7aa6f8e8aa090..97e5b87dafeac 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -73,7 +73,6 @@ def _period_array_cmp(cls, op):
@unpack_zerodim_and_defer(opname)
def wrapper(self, other):
- ordinal_op = getattr(self.asi8, opname)
if isinstance(other, str):
try:
@@ -81,11 +80,6 @@ def wrapper(self, other):
except ValueError:
# string that can't be parsed as Period
return invalid_comparison(self, other, op)
- elif isinstance(other, int):
- # TODO: sure we want to allow this? we dont for DTA/TDA
- # 2 tests rely on this
- other = Period(other, freq=self.freq)
- result = ordinal_op(other.ordinal)
if isinstance(other, self._recognized_scalars) or other is NaT:
other = self._scalar_type(other)
diff --git a/pandas/tests/arithmetic/test_period.py b/pandas/tests/arithmetic/test_period.py
index 03aa9acd6b917..abb667260f094 100644
--- a/pandas/tests/arithmetic/test_period.py
+++ b/pandas/tests/arithmetic/test_period.py
@@ -127,7 +127,7 @@ def test_compare_object_dtype(self, box_with_array, other_box):
class TestPeriodIndexComparisons:
# TODO: parameterize over boxes
- @pytest.mark.parametrize("other", ["2017", 2017])
+ @pytest.mark.parametrize("other", ["2017", pd.Period("2017", freq="D")])
def test_eq(self, other):
idx = PeriodIndex(["2017", "2017", "2018"], freq="D")
expected = np.array([True, True, False])
@@ -135,6 +135,34 @@ def test_eq(self, other):
tm.assert_numpy_array_equal(result, expected)
+ @pytest.mark.parametrize(
+ "other",
+ [
+ 2017,
+ [2017, 2017, 2017],
+ np.array([2017, 2017, 2017]),
+ np.array([2017, 2017, 2017], dtype=object),
+ pd.Index([2017, 2017, 2017]),
+ ],
+ )
+ def test_eq_integer_disallowed(self, other):
+ # match Period semantics by not treating integers as Periods
+
+ idx = PeriodIndex(["2017", "2017", "2018"], freq="D")
+ expected = np.array([False, False, False])
+ result = idx == other
+
+ tm.assert_numpy_array_equal(result, expected)
+
+ with pytest.raises(TypeError):
+ idx < other
+ with pytest.raises(TypeError):
+ idx > other
+ with pytest.raises(TypeError):
+ idx <= other
+ with pytest.raises(TypeError):
+ idx >= other
+
def test_pi_cmp_period(self):
idx = period_range("2007-01", periods=20, freq="M")
diff --git a/pandas/tests/indexes/period/test_period.py b/pandas/tests/indexes/period/test_period.py
index af5aa54c60476..3276fea4dd575 100644
--- a/pandas/tests/indexes/period/test_period.py
+++ b/pandas/tests/indexes/period/test_period.py
@@ -462,7 +462,7 @@ def test_index_duplicate_periods(self):
ts = Series(np.random.randn(len(idx)), index=idx)
result = ts[2007]
- expected = ts[idx == 2007]
+ expected = ts[idx == "2007"]
tm.assert_series_equal(result, expected)
def test_index_unique(self):
| The second of two non-cosmetic changes mentioned in #30720. | https://api.github.com/repos/pandas-dev/pandas/pulls/30722 | 2020-01-06T01:20:48Z | 2020-01-06T17:48:37Z | 2020-01-06T17:48:37Z | 2020-01-06T17:53:25Z |
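The semantics this PR aligns on (matching scalar `Period`) are: equality against an incompatible type is elementwise False, inequality elementwise True, and ordered comparisons raise `TypeError`. A pure-Python sketch of that `invalid_comparison` convention (illustrative, not the pandas implementation):

```python
def invalid_comparison(left, other, opname):
    # Elementwise semantics for comparing against an incompatible type:
    # == is all-False, != is all-True, and <, <=, >, >= raise, which is
    # why test_eq_integer_disallowed expects np.array([False, False, False])
    # for `idx == 2017` but pytest.raises(TypeError) for the ordered ops.
    if opname == "eq":
        return [False] * len(left)
    if opname == "ne":
        return [True] * len(left)
    raise TypeError(
        f"Invalid comparison between {type(left).__name__} "
        f"and {type(other).__name__}"
    )
```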
Make DTA _check_compatible_with less strict by default | diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index 2bdd9acaeb70f..b9a6daf7c630a 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -109,7 +109,7 @@ def _unbox_scalar(self, value: Union[Period, Timestamp, Timedelta, NaTType]) ->
raise AbstractMethodError(self)
def _check_compatible_with(
- self, other: Union[Period, Timestamp, Timedelta, NaTType]
+ self, other: Union[Period, Timestamp, Timedelta, NaTType], setitem: bool = False
) -> None:
"""
Verify that `self` and `other` are compatible.
@@ -123,6 +123,9 @@ def _check_compatible_with(
Parameters
----------
other
+ setitem : bool, default False
+ For __setitem__ we may have stricter compatiblity resrictions than
+ for comparisons.
Raises
------
@@ -500,10 +503,10 @@ def __setitem__(
return
value = type(self)._from_sequence(value, dtype=self.dtype)
- self._check_compatible_with(value)
+ self._check_compatible_with(value, setitem=True)
value = value.asi8
elif isinstance(value, self._scalar_type):
- self._check_compatible_with(value)
+ self._check_compatible_with(value, setitem=True)
value = self._unbox_scalar(value)
elif is_valid_nat_for_dtype(value, self.dtype):
value = iNaT
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index 6aa910c0f6ab8..9c88f4f8aa905 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -554,11 +554,14 @@ def _unbox_scalar(self, value):
def _scalar_from_string(self, value):
return Timestamp(value, tz=self.tz)
- def _check_compatible_with(self, other):
+ def _check_compatible_with(self, other, setitem: bool = False):
if other is NaT:
return
- if not timezones.tz_compare(self.tz, other.tz):
- raise ValueError(f"Timezones don't match. '{self.tz} != {other.tz}'")
+ self._assert_tzawareness_compat(other)
+ if setitem:
+ # Stricter check for setitem vs comparison methods
+ if not timezones.tz_compare(self.tz, other.tz):
+ raise ValueError(f"Timezones don't match. '{self.tz} != {other.tz}'")
def _maybe_clear_freq(self):
self._freq = None
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index 7aa6f8e8aa090..a0e806fb8f04c 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -341,7 +341,7 @@ def _unbox_scalar(self, value: Union[Period, NaTType]) -> int:
def _scalar_from_string(self, value: str) -> Period:
return Period(value, freq=self.freq)
- def _check_compatible_with(self, other):
+ def _check_compatible_with(self, other, setitem: bool = False):
if other is NaT:
return
if self.freqstr != other.freqstr:
diff --git a/pandas/core/arrays/timedeltas.py b/pandas/core/arrays/timedeltas.py
index 953a242380311..39ef9414564e7 100644
--- a/pandas/core/arrays/timedeltas.py
+++ b/pandas/core/arrays/timedeltas.py
@@ -357,7 +357,7 @@ def _unbox_scalar(self, value):
def _scalar_from_string(self, value):
return Timedelta(value)
- def _check_compatible_with(self, other):
+ def _check_compatible_with(self, other, setitem: bool = False):
# we don't have anything to validate.
pass
diff --git a/pandas/tests/arrays/test_datetimes.py b/pandas/tests/arrays/test_datetimes.py
index bca629ae32270..b5f9c8957c2b8 100644
--- a/pandas/tests/arrays/test_datetimes.py
+++ b/pandas/tests/arrays/test_datetimes.py
@@ -173,7 +173,7 @@ def test_tz_setter_raises(self):
def test_setitem_different_tz_raises(self):
data = np.array([1, 2, 3], dtype="M8[ns]")
arr = DatetimeArray(data, copy=False, dtype=DatetimeTZDtype(tz="US/Central"))
- with pytest.raises(ValueError, match="None"):
+ with pytest.raises(TypeError, match="Cannot compare tz-naive and tz-aware"):
arr[0] = pd.Timestamp("2000")
with pytest.raises(ValueError, match="US/Central"):
| One of the two non-cosmetic things mentioned in #30720.
There are a bunch of places where DTA or DTI do a compatibility check for tz-awareness, but do not require the same tz. This check is analogous to `PeriodArray._check_compatible_with` and `TimedeltaArray._check_compatible_with`, so this adds a kwarg to `_check_compatible_with` so that we can use `_check_compatible_with` in all the relevant places and subsequently de-duplicate a bunch of code.
In addition to the comparisons, this is going to be relevant for searchsorted and insert, where we have slightly different behavior in a bunch of EA/Index subclasses. | https://api.github.com/repos/pandas-dev/pandas/pulls/30721 | 2020-01-06T01:14:01Z | 2020-01-06T17:36:32Z | 2020-01-06T17:36:32Z | 2020-01-06T17:47:48Z |
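The behaviour split this diff encodes — comparisons only require matching tz-awareness (`TypeError` otherwise), while `setitem` additionally requires the exact same tz (`ValueError`, per the updated test) — can be sketched without pandas. Everything below is an illustrative stand-in, not the real `DatetimeArray._check_compatible_with`:

```python
def check_compatible_with(self_tz, other_tz, setitem=False):
    """Toy version of the check; tz values are tz names or None (tz-naive)."""
    # Comparison methods only need tz-awareness to match
    # (mirrors _assert_tzawareness_compat in the diff).
    if (self_tz is None) != (other_tz is None):
        raise TypeError("Cannot compare tz-naive and tz-aware timestamps")
    # setitem is stricter: the timezones themselves must match.
    if setitem and self_tz != other_tz:
        raise ValueError(f"Timezones don't match. '{self_tz} != {other_tz}'")
```

With this shape, `arr == other_tz_arr` can still return elementwise results for mixed timezones, while `arr[0] = other_tz_scalar` fails loudly.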
REF: cosmetic differences between DTA/TDA/PA comparison methods | diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index cc54fb5e5af13..6aa910c0f6ab8 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -26,12 +26,14 @@
_INT64_DTYPE,
_NS_DTYPE,
is_categorical_dtype,
+ is_datetime64_any_dtype,
is_datetime64_dtype,
is_datetime64_ns_dtype,
is_datetime64tz_dtype,
is_dtype_equal,
is_extension_array_dtype,
is_float_dtype,
+ is_list_like,
is_object_dtype,
is_period_dtype,
is_string_dtype,
@@ -148,17 +150,22 @@ def wrapper(self, other):
# string that cannot be parsed to Timestamp
return invalid_comparison(self, other, op)
- if isinstance(other, (datetime, np.datetime64)):
- other = Timestamp(other)
+ if isinstance(other, self._recognized_scalars) or other is NaT:
+ other = self._scalar_type(other)
self._assert_tzawareness_compat(other)
- result = op(self.asi8, other.value)
+ other_i8 = other.value
+
+ result = op(self.view("i8"), other_i8)
if isna(other):
result.fill(nat_result)
- elif lib.is_scalar(other) or np.ndim(other) == 0:
+
+ elif not is_list_like(other):
return invalid_comparison(self, other, op)
+
elif len(other) != len(self):
raise ValueError("Lengths must match")
+
else:
if isinstance(other, list):
other = np.array(other)
@@ -178,7 +185,7 @@ def wrapper(self, other):
)
o_mask = isna(other)
- elif not (is_datetime64_dtype(other) or is_datetime64tz_dtype(other)):
+ elif not cls._is_recognized_dtype(other.dtype):
# e.g. is_timedelta64_dtype(other)
return invalid_comparison(self, other, op)
@@ -239,6 +246,8 @@ class DatetimeArray(dtl.DatetimeLikeArrayMixin, dtl.TimelikeOps, dtl.DatelikeOps
_typ = "datetimearray"
_scalar_type = Timestamp
+ _recognized_scalars = (datetime, np.datetime64)
+ _is_recognized_dtype = is_datetime64_any_dtype
# define my properties & methods for delegation
_bool_ops = [
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index 6b9c7f4e1eb38..7aa6f8e8aa090 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -75,9 +75,6 @@ def _period_array_cmp(cls, op):
def wrapper(self, other):
ordinal_op = getattr(self.asi8, opname)
- if is_list_like(other) and len(other) != len(self):
- raise ValueError("Lengths must match")
-
if isinstance(other, str):
try:
other = self._scalar_from_string(other)
@@ -90,18 +87,22 @@ def wrapper(self, other):
other = Period(other, freq=self.freq)
result = ordinal_op(other.ordinal)
- if isinstance(other, Period):
+ if isinstance(other, self._recognized_scalars) or other is NaT:
+ other = self._scalar_type(other)
self._check_compatible_with(other)
- result = ordinal_op(other.ordinal)
+ other_i8 = self._unbox_scalar(other)
- elif other is NaT:
- result = np.empty(len(self.asi8), dtype=bool)
- result.fill(nat_result)
+ result = op(self.view("i8"), other_i8)
+ if isna(other):
+ result.fill(nat_result)
elif not is_list_like(other):
return invalid_comparison(self, other, op)
+ elif len(other) != len(self):
+ raise ValueError("Lengths must match")
+
else:
if isinstance(other, list):
# TODO: could use pd.Index to do inference?
@@ -117,7 +118,7 @@ def wrapper(self, other):
)
o_mask = isna(other)
- elif not is_period_dtype(other):
+ elif not cls._is_recognized_dtype(other.dtype):
# e.g. is_timedelta64_dtype(other)
return invalid_comparison(self, other, op)
@@ -126,7 +127,7 @@ def wrapper(self, other):
self._check_compatible_with(other)
- result = ordinal_op(other.asi8)
+ result = op(self.view("i8"), other.view("i8"))
o_mask = other._isnan
if o_mask.any():
@@ -195,6 +196,8 @@ class PeriodArray(dtl.DatetimeLikeArrayMixin, dtl.DatelikeOps):
__array_priority__ = 1000
_typ = "periodarray" # ABCPeriodArray
_scalar_type = Period
+ _recognized_scalars = (Period,)
+ _is_recognized_dtype = is_period_dtype
# Names others delegate to us
_other_ops: List[str] = []
diff --git a/pandas/core/arrays/timedeltas.py b/pandas/core/arrays/timedeltas.py
index 1874517f0f2e4..953a242380311 100644
--- a/pandas/core/arrays/timedeltas.py
+++ b/pandas/core/arrays/timedeltas.py
@@ -89,10 +89,13 @@ def wrapper(self, other):
# failed to parse as timedelta
return invalid_comparison(self, other, op)
- if _is_convertible_to_td(other) or other is NaT:
- other = Timedelta(other)
+ if isinstance(other, self._recognized_scalars) or other is NaT:
+ other = self._scalar_type(other)
+ self._check_compatible_with(other)
+
+ other_i8 = self._unbox_scalar(other)
- result = op(self.view("i8"), other.value)
+ result = op(self.view("i8"), other_i8)
if isna(other):
result.fill(nat_result)
@@ -116,13 +119,15 @@ def wrapper(self, other):
)
o_mask = isna(other)
- elif not is_timedelta64_dtype(other):
+ elif not cls._is_recognized_dtype(other.dtype):
# e.g. other is datetimearray
return invalid_comparison(self, other, op)
else:
other = type(self)._from_sequence(other)
+ self._check_compatible_with(other)
+
result = op(self.view("i8"), other.view("i8"))
o_mask = other._isnan
@@ -172,6 +177,9 @@ class TimedeltaArray(dtl.DatetimeLikeArrayMixin, dtl.TimelikeOps):
_typ = "timedeltaarray"
_scalar_type = Timedelta
+ _recognized_scalars = (timedelta, np.timedelta64, Tick)
+ _is_recognized_dtype = is_timedelta64_dtype
+
__array_priority__ = 1000
# define my properties & methods for delegation
_other_ops: List[str] = []
| There are 2 non-cosmetic differences remaining between these methods, which will be the subjects of upcoming PRs. Once those are addressed, we'll be able to de-duplicate the methods completely. | https://api.github.com/repos/pandas-dev/pandas/pulls/30720 | 2020-01-06T01:04:23Z | 2020-01-06T13:25:32Z | 2020-01-06T13:25:32Z | 2020-01-06T15:27:19Z |
Consistent json error | diff --git a/pandas/_libs/src/ujson/python/objToJSON.c b/pandas/_libs/src/ujson/python/objToJSON.c
index c413a16f8d5f0..edee06117817d 100644
--- a/pandas/_libs/src/ujson/python/objToJSON.c
+++ b/pandas/_libs/src/ujson/python/objToJSON.c
@@ -156,24 +156,72 @@ void *initObjToJSON(void) {
(PyTypeObject *)PyObject_GetAttrString(mod_decimal, "Decimal");
Py_DECREF(mod_decimal);
+ if (type_decimal == NULL) {
+ return NULL;
+ }
+
PyDateTime_IMPORT;
mod_pandas = PyImport_ImportModule("pandas");
- if (mod_pandas) {
- cls_dataframe =
- (PyTypeObject *)PyObject_GetAttrString(mod_pandas, "DataFrame");
- cls_index = (PyTypeObject *)PyObject_GetAttrString(mod_pandas, "Index");
- cls_series =
- (PyTypeObject *)PyObject_GetAttrString(mod_pandas, "Series");
- cls_timedelta = PyObject_GetAttrString(mod_pandas, "Timedelta");
+ if (mod_pandas == NULL) {
+ Py_DECREF(type_decimal);
+ return NULL;
+ }
+
+ cls_dataframe =
+ (PyTypeObject *)PyObject_GetAttrString(mod_pandas, "DataFrame");
+ if (cls_dataframe == NULL) {
+ Py_DECREF(type_decimal);
+ Py_DECREF(mod_pandas);
+ return NULL;
+ }
+
+ cls_index = (PyTypeObject *)PyObject_GetAttrString(mod_pandas, "Index");
+ if (cls_index == NULL) {
+ Py_DECREF(cls_dataframe);
+ Py_DECREF(type_decimal);
+ Py_DECREF(mod_pandas);
+ return NULL;
+ }
+
+ cls_series = (PyTypeObject *)PyObject_GetAttrString(mod_pandas, "Series");
+ if (cls_series == NULL) {
+ Py_DECREF(cls_index);
+ Py_DECREF(cls_dataframe);
+ Py_DECREF(type_decimal);
+ Py_DECREF(mod_pandas);
+ return NULL;
+ }
+
+ cls_timedelta = PyObject_GetAttrString(mod_pandas, "Timedelta");
+ if (cls_timedelta == NULL) {
+ Py_DECREF(cls_series);
+ Py_DECREF(cls_index);
+ Py_DECREF(cls_dataframe);
+ Py_DECREF(type_decimal);
Py_DECREF(mod_pandas);
+ return NULL;
}
+ Py_DECREF(mod_pandas);
+
mod_nattype = PyImport_ImportModule("pandas._libs.tslibs.nattype");
- if (mod_nattype) {
- cls_nat =
- (PyTypeObject *)PyObject_GetAttrString(mod_nattype, "NaTType");
- Py_DECREF(mod_nattype);
+ if (mod_nattype == NULL) {
+ Py_DECREF(cls_series);
+ Py_DECREF(cls_index);
+ Py_DECREF(cls_dataframe);
+ Py_DECREF(type_decimal);
+ return NULL;
+ }
+
+ cls_nat = (PyTypeObject *)PyObject_GetAttrString(mod_nattype, "NaTType");
+ Py_DECREF(mod_nattype);
+ if (cls_nat == NULL) {
+ Py_DECREF(cls_series);
+ Py_DECREF(cls_index);
+ Py_DECREF(cls_dataframe);
+ Py_DECREF(type_decimal);
+ return NULL;
}
/* Initialise numpy API */
@@ -184,7 +232,7 @@ static TypeContext *createTypeContext(void) {
TypeContext *pc;
pc = PyObject_Malloc(sizeof(TypeContext));
- if (!pc) {
+ if (pc == NULL) {
PyErr_NoMemory();
return NULL;
}
@@ -215,7 +263,7 @@ static TypeContext *createTypeContext(void) {
*
* Scales an integer value representing time in nanoseconds to provided unit.
*
- * Mutates the provided value directly. Returns 0 on success, non-zero on error.
+ * Mutates the provided value directly. Returns 0 on success, -1 on error.
*/
static int scaleNanosecToUnit(npy_int64 *value, NPY_DATETIMEUNIT unit) {
switch (unit) {
@@ -231,9 +279,9 @@ static int scaleNanosecToUnit(npy_int64 *value, NPY_DATETIMEUNIT unit) {
*value /= 1000000000LL;
break;
default:
+ PyErr_Format(PyExc_ValueError, "Unknown NPY_DATETIMEUNIT '%d'", unit);
return -1;
}
-
return 0;
}
@@ -299,8 +347,8 @@ static PyObject *get_sub_attr(PyObject *obj, char *attr, char *subAttr) {
PyObject *tmp = PyObject_GetAttrString(obj, attr);
PyObject *ret;
- if (tmp == 0) {
- return 0;
+ if (tmp == NULL) {
+ return NULL;
}
ret = PyObject_GetAttrString(tmp, subAttr);
Py_DECREF(tmp);
@@ -310,38 +358,40 @@ static PyObject *get_sub_attr(PyObject *obj, char *attr, char *subAttr) {
static int is_simple_frame(PyObject *obj) {
PyObject *check = get_sub_attr(obj, "_data", "is_mixed_type");
- int ret = (check == Py_False);
-
- if (!check) {
+ if (check == NULL) {
+ PyErr_Clear();
return 0;
}
Py_DECREF(check);
+ int ret = (check == Py_False);
return ret;
}
+/* Returns -1 on failure */
static Py_ssize_t get_attr_length(PyObject *obj, char *attr) {
PyObject *tmp = PyObject_GetAttrString(obj, attr);
Py_ssize_t ret;
- if (tmp == 0) {
- return 0;
+ if (tmp == NULL) {
+ return -1;
}
ret = PyObject_Length(tmp);
Py_DECREF(tmp);
- if (ret == -1) {
- return 0;
- }
-
return ret;
}
+/* Returns -1 on failure; to disambiguate check PyErr_Occurred */
static npy_int64 get_long_attr(PyObject *o, const char *attr) {
npy_int64 long_val;
PyObject *value = PyObject_GetAttrString(o, attr);
- long_val =
- (PyLong_Check(value) ? PyLong_AsLongLong(value) : PyLong_AsLong(value));
+
+ if (value == NULL) {
+ return -1;
+ }
+
+ long_val = PyLong_AsLongLong(value);
Py_DECREF(value);
return long_val;
}
@@ -358,8 +408,8 @@ static PyObject *get_item(PyObject *obj, Py_ssize_t i) {
PyObject *tmp = PyLong_FromSsize_t(i);
PyObject *ret;
- if (tmp == 0) {
- return 0;
+ if (tmp == NULL) {
+ return NULL;
}
ret = PyObject_GetItem(obj, tmp);
Py_DECREF(tmp);
@@ -395,10 +445,9 @@ static char *int64ToIso(int64_t value, NPY_DATETIMEUNIT base, size_t *len) {
}
ret_code = make_iso_8601_datetime(&dts, result, *len, base);
- if (ret_code != 0) {
- PyErr_SetString(PyExc_ValueError,
- "Could not convert datetime value to string");
+ if (ret_code == -1) {
PyObject_Free(result);
+ return NULL;
}
// Note that get_datetime_iso_8601_strlen just gives a generic size
@@ -415,7 +464,9 @@ static char *NpyDateTimeToIsoCallback(JSOBJ Py_UNUSED(unused),
}
static npy_datetime NpyDateTimeToEpoch(npy_datetime dt, NPY_DATETIMEUNIT base) {
- scaleNanosecToUnit(&dt, base);
+ if (scaleNanosecToUnit(&dt, base) == -1) {
+ return -1;
+ }
return dt;
}
@@ -426,22 +477,20 @@ static char *PyDateTimeToIso(PyDateTime_Date *obj, NPY_DATETIMEUNIT base,
int ret;
ret = convert_pydatetime_to_datetimestruct(obj, &dts);
- if (ret != 0) {
- if (!PyErr_Occurred()) {
- PyErr_SetString(PyExc_ValueError,
- "Could not convert PyDateTime to numpy datetime");
- }
+ if (ret == -1) {
return NULL;
}
*len = (size_t)get_datetime_iso_8601_strlen(0, base);
+
char *result = PyObject_Malloc(*len);
- ret = make_iso_8601_datetime(&dts, result, *len, base);
+ if (result == NULL) {
+ PyErr_NoMemory();
+ return NULL;
+ }
- if (ret != 0) {
- PRINTMARK();
- PyErr_SetString(PyExc_ValueError,
- "Could not convert datetime value to string");
+ ret = make_iso_8601_datetime(&dts, result, *len, base);
+ if (ret == -1) {
PyObject_Free(result);
return NULL;
}
@@ -465,27 +514,33 @@ static char *PyDateTimeToIsoCallback(JSOBJ obj, JSONTypeContext *tc,
return PyDateTimeToIso(obj, base, len);
}
+/* Returns -1 on failure along with PyErr being set */
static npy_datetime PyDateTimeToEpoch(PyObject *obj, NPY_DATETIMEUNIT base) {
npy_datetimestruct dts;
int ret;
- if (!PyDateTime_Check(obj)) {
- // TODO: raise TypeError
+ // TODO: we actually pass datetime.date objects through here, though
+ // relevant functions suggest datetime.datetime at least
+ if (!PyDate_Check(obj)) {
+ PyErr_SetString(PyExc_TypeError, "Expected datetime object");
+ return -1;
}
PyDateTime_Date *dt = (PyDateTime_Date *)obj;
ret = convert_pydatetime_to_datetimestruct(dt, &dts);
- if (ret != 0) {
+ if (ret == -1) {
if (!PyErr_Occurred()) {
- PyErr_SetString(PyExc_ValueError,
- "Could not convert PyDateTime to numpy datetime");
+ PyErr_Format(PyExc_ValueError,
+ "Could not convert %R to npy_datetime", obj);
+ return -1;
}
- // TODO: is setting errMsg required?
- //((JSONObjectEncoder *)tc->encoder)->errorMsg = "";
- // return NULL;
}
npy_datetime npy_dt = npy_datetimestruct_to_datetime(NPY_FR_ns, &dts);
+ if ((npy_dt == -1) && (PyErr_Occurred())) {
+ return -1;
+ }
+
return NpyDateTimeToEpoch(npy_dt, base);
}
@@ -848,7 +903,8 @@ void PdBlock_iterBegin(JSOBJ _obj, JSONTypeContext *tc) {
blkCtxt->transpose = GET_TC(tc)->transpose;
blkCtxt->ncols = get_attr_length(obj, "columns");
- if (blkCtxt->ncols == 0) {
+ if (blkCtxt->ncols == -1) {
+ // TODO: ensure error is propagated
blkCtxt->npyCtxts = NULL;
blkCtxt->cindices = NULL;
@@ -1578,9 +1634,8 @@ char **NpyArr_encodeLabels(PyArrayObject *labels, PyObjectEncoder *enc,
if (enc->datetimeIso) {
cLabel = int64ToIso(longVal, base, &len);
} else {
- if (!scaleNanosecToUnit(&longVal, base)) {
- // TODO: This gets hit but somehow doesn't cause errors
- // need to clean up (elsewhere in module as well)
+ if (scaleNanosecToUnit(&longVal, base) == -1) {
+ // TODO: error handler
}
cLabel = PyObject_Malloc(21); // 21 chars for int64
sprintf(cLabel, "%" NPY_INT64_FMT, longVal);
@@ -1818,13 +1873,18 @@ void Object_beginTypeContext(JSOBJ _obj, JSONTypeContext *tc) {
if (PyObject_HasAttrString(obj, "value")) {
PRINTMARK();
value = get_long_attr(obj, "value");
+
+ if ((value == -1) && PyErr_Occurred()) {
+ // TODO: Add error handler
+ }
} else {
PRINTMARK();
+ // total_seconds returns float, so potentially lossy
value = total_seconds(obj) * 1000000000LL; // nanoseconds per second
}
unit = ((PyObjectEncoder *)tc->encoder)->datetimeUnit;
- if (scaleNanosecToUnit(&value, unit) != 0) {
+ if (scaleNanosecToUnit(&value, unit) == -1) {
// TODO: Add some kind of error handling here
}
diff --git a/pandas/_libs/tslibs/src/datetime/np_datetime.c b/pandas/_libs/tslibs/src/datetime/np_datetime.c
index a8a47e2e90f93..e7a37fccf0c17 100644
--- a/pandas/_libs/tslibs/src/datetime/np_datetime.c
+++ b/pandas/_libs/tslibs/src/datetime/np_datetime.c
@@ -318,8 +318,7 @@ int cmp_npy_datetimestruct(const npy_datetimestruct *a,
* datetime duck typing. The tzinfo time zone conversion would require
* this style of access anyway.
*
- * Returns -1 on error, 0 on success, and 1 (with no error set)
- * if obj doesn't have the needed date or datetime attributes.
+ * Returns -1 on error, 0 on success
*/
int convert_pydatetime_to_datetimestruct(PyDateTime_Date *dtobj,
npy_datetimestruct *out) {
| This will take a few passes to do comprehensively, but trying to clean up error handling in the extension module. Right now failure points are allowed to pass silently, which can lead to segfaults or surprising behaviour when debugging.
Trying to push closer towards the CPython error handling conventions:
https://docs.python.org/3/c-api/exceptions.html
So basically:
- Explicitly check for NULL from most C API functions except integer returning functions (where they return -1 and set PyErr) AND
- Don't set error messages from failing functions, as they should have already set it | https://api.github.com/repos/pandas-dev/pandas/pulls/30719 | 2020-01-06T00:47:09Z | 2020-01-13T23:16:53Z | null | 2023-04-12T20:16:01Z |
TYP: type up parts of frame.py | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index ba0c0e7d66b1d..3b012b2ff3736 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -16,6 +16,7 @@
from textwrap import dedent
from typing import (
IO,
+ TYPE_CHECKING,
Any,
FrozenSet,
Hashable,
@@ -127,6 +128,9 @@
from pandas.io.formats.printing import pprint_thing
import pandas.plotting
+if TYPE_CHECKING:
+ from pandas.io.formats.style import Styler
+
# ---------------------------------------------------------------------
# Docstring templates
@@ -818,7 +822,7 @@ def to_string(
# ----------------------------------------------------------------------
@property
- def style(self):
+ def style(self) -> "Styler":
"""
Returns a Styler object.
@@ -893,10 +897,10 @@ def items(self) -> Iterable[Tuple[Optional[Hashable], Series]]:
yield k, self._ixs(i, axis=1)
@Appender(_shared_docs["items"])
- def iteritems(self):
+ def iteritems(self) -> Iterable[Tuple[Optional[Hashable], Series]]:
yield from self.items()
- def iterrows(self):
+ def iterrows(self) -> Iterable[Tuple[Optional[Hashable], Series]]:
"""
Iterate over DataFrame rows as (index, Series) pairs.
@@ -1162,7 +1166,7 @@ def __rmatmul__(self, other):
# IO methods (to / from other formats)
@classmethod
- def from_dict(cls, data, orient="columns", dtype=None, columns=None):
+ def from_dict(cls, data, orient="columns", dtype=None, columns=None) -> "DataFrame":
"""
Construct DataFrame from dict of array-like or dicts.
@@ -1242,7 +1246,7 @@ def from_dict(cls, data, orient="columns", dtype=None, columns=None):
return cls(data, index=index, columns=columns, dtype=dtype)
- def to_numpy(self, dtype=None, copy=False):
+ def to_numpy(self, dtype=None, copy=False) -> np.ndarray:
"""
Convert the DataFrame to a NumPy array.
@@ -1446,7 +1450,7 @@ def to_gbq(
location=None,
progress_bar=True,
credentials=None,
- ):
+ ) -> None:
"""
Write a DataFrame to a Google BigQuery table.
@@ -1551,7 +1555,7 @@ def from_records(
columns=None,
coerce_float=False,
nrows=None,
- ):
+ ) -> "DataFrame":
"""
Convert structured or record ndarray to DataFrame.
@@ -1673,7 +1677,9 @@ def from_records(
return cls(mgr)
- def to_records(self, index=True, column_dtypes=None, index_dtypes=None):
+ def to_records(
+ self, index=True, column_dtypes=None, index_dtypes=None
+ ) -> np.recarray:
"""
Convert DataFrame to a NumPy record array.
@@ -1838,7 +1844,7 @@ def to_records(self, index=True, column_dtypes=None, index_dtypes=None):
return np.rec.fromarrays(arrays, dtype={"names": names, "formats": formats})
@classmethod
- def _from_arrays(cls, arrays, columns, index, dtype=None):
+ def _from_arrays(cls, arrays, columns, index, dtype=None) -> "DataFrame":
mgr = arrays_to_mgr(arrays, columns, index, columns, dtype=dtype)
return cls(mgr)
@@ -1962,7 +1968,7 @@ def to_stata(
writer.write_file()
@deprecate_kwarg(old_arg_name="fname", new_arg_name="path")
- def to_feather(self, path):
+ def to_feather(self, path) -> None:
"""
Write out the binary feather-format for DataFrames.
@@ -2014,7 +2020,7 @@ def to_parquet(
index=None,
partition_cols=None,
**kwargs,
- ):
+ ) -> None:
"""
Write a DataFrame to the binary parquet format.
@@ -2205,7 +2211,7 @@ def to_html(
def info(
self, verbose=None, buf=None, max_cols=None, memory_usage=None, null_counts=None
- ):
+ ) -> None:
"""
Print a concise summary of a DataFrame.
@@ -2480,7 +2486,7 @@ def _sizeof_fmt(num, size_qualifier):
lines.append(f"memory usage: {_sizeof_fmt(mem_usage, size_qualifier)}\n")
fmt.buffer_put_lines(buf, lines)
- def memory_usage(self, index=True, deep=False):
+ def memory_usage(self, index=True, deep=False) -> Series:
"""
Return the memory usage of each column in bytes.
@@ -2574,7 +2580,7 @@ def memory_usage(self, index=True, deep=False):
)
return result
- def transpose(self, *args, copy: bool = False):
+ def transpose(self, *args, copy: bool = False) -> "DataFrame":
"""
Transpose index and columns.
@@ -3324,7 +3330,7 @@ def eval(self, expr, inplace=False, **kwargs):
return _eval(expr, inplace=inplace, **kwargs)
- def select_dtypes(self, include=None, exclude=None):
+ def select_dtypes(self, include=None, exclude=None) -> "DataFrame":
"""
Return a subset of the DataFrame's columns based on the column dtypes.
@@ -3454,7 +3460,7 @@ def extract_unique_dtypes_from_dtypes_set(
return self.iloc[:, keep_these.values]
- def insert(self, loc, column, value, allow_duplicates=False):
+ def insert(self, loc, column, value, allow_duplicates=False) -> None:
"""
Insert column into DataFrame at specified location.
@@ -3474,7 +3480,7 @@ def insert(self, loc, column, value, allow_duplicates=False):
value = self._sanitize_column(column, value, broadcast=False)
self._data.insert(loc, column, value, allow_duplicates=allow_duplicates)
- def assign(self, **kwargs):
+ def assign(self, **kwargs) -> "DataFrame":
r"""
Assign new columns to a DataFrame.
@@ -3657,7 +3663,7 @@ def _series(self):
for idx, item in enumerate(self.columns)
}
- def lookup(self, row_labels, col_labels):
+ def lookup(self, row_labels, col_labels) -> np.ndarray:
"""
Label-based "fancy indexing" function for DataFrame.
@@ -3765,7 +3771,7 @@ def _reindex_columns(
allow_dups=False,
)
- def _reindex_multi(self, axes, copy, fill_value):
+ def _reindex_multi(self, axes, copy, fill_value) -> "DataFrame":
"""
We are guaranteed non-Nones in the axes.
"""
@@ -3799,7 +3805,7 @@ def align(
limit=None,
fill_axis=0,
broadcast_axis=None,
- ):
+ ) -> "DataFrame":
return super().align(
other,
join=join,
@@ -3826,13 +3832,13 @@ def align(
("tolerance", None),
],
)
- def reindex(self, *args, **kwargs):
+ def reindex(self, *args, **kwargs) -> "DataFrame":
axes = validate_axis_style_args(self, args, kwargs, "labels", "reindex")
kwargs.update(axes)
# Pop these, since the values are in `kwargs` under different names
kwargs.pop("axis", None)
kwargs.pop("labels", None)
- return super().reindex(**kwargs)
+ return self._ensure_type(super().reindex(**kwargs))
def drop(
self,
@@ -4136,9 +4142,9 @@ def replace(
)
@Appender(_shared_docs["shift"] % _shared_doc_kwargs)
- def shift(self, periods=1, freq=None, axis=0, fill_value=None):
- return super().shift(
- periods=periods, freq=freq, axis=axis, fill_value=fill_value
+ def shift(self, periods=1, freq=None, axis=0, fill_value=None) -> "DataFrame":
+ return self._ensure_type(
+ super().shift(periods=periods, freq=freq, axis=axis, fill_value=fill_value)
)
def set_index(
@@ -4243,7 +4249,7 @@ def set_index(
"one-dimensional arrays."
)
- missing = []
+ missing: List[Optional[Hashable]] = []
for col in keys:
if isinstance(
col, (ABCIndexClass, ABCSeries, np.ndarray, list, abc.Iterator)
@@ -4280,7 +4286,7 @@ def set_index(
else:
arrays.append(self.index)
- to_remove = []
+ to_remove: List[Optional[Hashable]] = []
for col in keys:
if isinstance(col, ABCMultiIndex):
for n in range(col.nlevels):
@@ -4576,19 +4582,19 @@ def _maybe_casted_values(index, labels=None):
# Reindex-based selection methods
@Appender(_shared_docs["isna"] % _shared_doc_kwargs)
- def isna(self):
+ def isna(self) -> "DataFrame":
return super().isna()
@Appender(_shared_docs["isna"] % _shared_doc_kwargs)
- def isnull(self):
+ def isnull(self) -> "DataFrame":
return super().isnull()
@Appender(_shared_docs["notna"] % _shared_doc_kwargs)
- def notna(self):
+ def notna(self) -> "DataFrame":
return super().notna()
@Appender(_shared_docs["notna"] % _shared_doc_kwargs)
- def notnull(self):
+ def notnull(self) -> "DataFrame":
return super().notnull()
def dropna(self, axis=0, how="any", thresh=None, subset=None, inplace=False):
@@ -4978,7 +4984,7 @@ def sort_index(
else:
return self._constructor(new_data).__finalize__(self)
- def nlargest(self, n, columns, keep="first"):
+ def nlargest(self, n, columns, keep="first") -> "DataFrame":
"""
Return the first `n` rows ordered by `columns` in descending order.
@@ -5087,7 +5093,7 @@ def nlargest(self, n, columns, keep="first"):
"""
return algorithms.SelectNFrame(self, n=n, keep=keep, columns=columns).nlargest()
- def nsmallest(self, n, columns, keep="first"):
+ def nsmallest(self, n, columns, keep="first") -> "DataFrame":
"""
Return the first `n` rows ordered by `columns` in ascending order.
@@ -5188,7 +5194,7 @@ def nsmallest(self, n, columns, keep="first"):
self, n=n, keep=keep, columns=columns
).nsmallest()
- def swaplevel(self, i=-2, j=-1, axis=0):
+ def swaplevel(self, i=-2, j=-1, axis=0) -> "DataFrame":
"""
Swap levels i and j in a MultiIndex on a particular axis.
@@ -5210,7 +5216,7 @@ def swaplevel(self, i=-2, j=-1, axis=0):
result.columns = result.columns.swaplevel(i, j)
return result
- def reorder_levels(self, order, axis=0):
+ def reorder_levels(self, order, axis=0) -> "DataFrame":
"""
Rearrange index levels using input order. May not drop or duplicate levels.
@@ -5224,7 +5230,7 @@ def reorder_levels(self, order, axis=0):
Returns
-------
- type of caller (new object)
+ DataFrame
"""
axis = self._get_axis_number(axis)
if not isinstance(self._get_axis(axis), ABCMultiIndex): # pragma: no cover
@@ -5298,7 +5304,9 @@ def _construct_result(self, result) -> "DataFrame":
out.columns = self.columns
return out
- def combine(self, other, func, fill_value=None, overwrite=True):
+ def combine(
+ self, other: "DataFrame", func, fill_value=None, overwrite=True
+ ) -> "DataFrame":
"""
Perform column-wise combine with another DataFrame.
@@ -5465,7 +5473,7 @@ def combine(self, other, func, fill_value=None, overwrite=True):
# convert_objects just in case
return self._constructor(result, index=new_index, columns=new_columns)
- def combine_first(self, other):
+ def combine_first(self, other: "DataFrame") -> "DataFrame":
"""
Update null elements with value in the same location in `other`.
@@ -5543,7 +5551,7 @@ def combiner(x, y):
def update(
self, other, join="left", overwrite=True, filter_func=None, errors="ignore"
- ):
+ ) -> None:
"""
Modify in place using non-NA values from another DataFrame.
@@ -6455,7 +6463,7 @@ def melt(
var_name=None,
value_name="value",
col_level=None,
- ):
+ ) -> "DataFrame":
from pandas.core.reshape.melt import melt
return melt(
@@ -6470,7 +6478,7 @@ def melt(
# ----------------------------------------------------------------------
# Time series-related
- def diff(self, periods=1, axis=0):
+ def diff(self, periods=1, axis=0) -> "DataFrame":
"""
First discrete difference of element.
@@ -6678,7 +6686,7 @@ def _aggregate(self, arg, axis=0, *args, **kwargs):
agg = aggregate
@Appender(_shared_docs["transform"] % _shared_doc_kwargs)
- def transform(self, func, axis=0, *args, **kwargs):
+ def transform(self, func, axis=0, *args, **kwargs) -> "DataFrame":
axis = self._get_axis_number(axis)
if axis == 1:
return self.T.transform(func, *args, **kwargs).T
@@ -6833,7 +6841,7 @@ def apply(self, func, axis=0, raw=False, result_type=None, args=(), **kwds):
)
return op.get_result()
- def applymap(self, func):
+ def applymap(self, func) -> "DataFrame":
"""
Apply a function to a Dataframe elementwise.
@@ -6902,7 +6910,9 @@ def infer(x):
# ----------------------------------------------------------------------
# Merging / joining methods
- def append(self, other, ignore_index=False, verify_integrity=False, sort=False):
+ def append(
+ self, other, ignore_index=False, verify_integrity=False, sort=False
+ ) -> "DataFrame":
"""
Append rows of `other` to the end of caller, returning a new object.
@@ -7029,7 +7039,7 @@ def append(self, other, ignore_index=False, verify_integrity=False, sort=False):
from pandas.core.reshape.concat import concat
if isinstance(other, (list, tuple)):
- to_concat = [self] + other
+ to_concat = [self, *other]
else:
to_concat = [self, other]
return concat(
@@ -7039,7 +7049,9 @@ def append(self, other, ignore_index=False, verify_integrity=False, sort=False):
sort=sort,
)
- def join(self, other, on=None, how="left", lsuffix="", rsuffix="", sort=False):
+ def join(
+ self, other, on=None, how="left", lsuffix="", rsuffix="", sort=False
+ ) -> "DataFrame":
"""
Join columns of another DataFrame.
@@ -7230,7 +7242,7 @@ def merge(
copy=True,
indicator=False,
validate=None,
- ):
+ ) -> "DataFrame":
from pandas.core.reshape.merge import merge
return merge(
@@ -7249,7 +7261,7 @@ def merge(
validate=validate,
)
- def round(self, decimals=0, *args, **kwargs):
+ def round(self, decimals=0, *args, **kwargs) -> "DataFrame":
"""
Round a DataFrame to a variable number of decimal places.
@@ -7363,7 +7375,7 @@ def _series_round(s, decimals):
# ----------------------------------------------------------------------
# Statistical methods, etc.
- def corr(self, method="pearson", min_periods=1):
+ def corr(self, method="pearson", min_periods=1) -> "DataFrame":
"""
Compute pairwise correlation of columns, excluding NA/null values.
@@ -7451,7 +7463,7 @@ def corr(self, method="pearson", min_periods=1):
return self._constructor(correl, index=idx, columns=cols)
- def cov(self, min_periods=None):
+ def cov(self, min_periods=None) -> "DataFrame":
"""
Compute pairwise covariance of columns, excluding NA/null values.
@@ -7561,7 +7573,7 @@ def cov(self, min_periods=None):
return self._constructor(baseCov, index=idx, columns=cols)
- def corrwith(self, other, axis=0, drop=False, method="pearson"):
+ def corrwith(self, other, axis=0, drop=False, method="pearson") -> Series:
"""
Compute pairwise correlation.
@@ -7917,7 +7929,7 @@ def _get_data(axis_matters):
result = Series(result, index=labels)
return result
- def nunique(self, axis=0, dropna=True):
+ def nunique(self, axis=0, dropna=True) -> Series:
"""
Count distinct observations over requested axis.
@@ -7957,7 +7969,7 @@ def nunique(self, axis=0, dropna=True):
"""
return self.apply(Series.nunique, axis=axis, dropna=dropna)
- def idxmin(self, axis=0, skipna=True):
+ def idxmin(self, axis=0, skipna=True) -> Series:
"""
Return index of first occurrence of minimum over requested axis.
@@ -7995,7 +8007,7 @@ def idxmin(self, axis=0, skipna=True):
result = [index[i] if i >= 0 else np.nan for i in indices]
return Series(result, index=self._get_agg_axis(axis))
- def idxmax(self, axis=0, skipna=True):
+ def idxmax(self, axis=0, skipna=True) -> Series:
"""
Return index of first occurrence of maximum over requested axis.
@@ -8044,7 +8056,7 @@ def _get_agg_axis(self, axis_num):
else:
raise ValueError(f"Axis must be 0 or 1 (got {repr(axis_num)})")
- def mode(self, axis=0, numeric_only=False, dropna=True):
+ def mode(self, axis=0, numeric_only=False, dropna=True) -> "DataFrame":
"""
Get the mode(s) of each element along the selected axis.
@@ -8227,7 +8239,7 @@ def quantile(self, q=0.5, axis=0, numeric_only=True, interpolation="linear"):
return result
- def to_timestamp(self, freq=None, how="start", axis=0, copy=True):
+ def to_timestamp(self, freq=None, how="start", axis=0, copy=True) -> "DataFrame":
"""
Cast to DatetimeIndex of timestamps, at *beginning* of period.
@@ -8261,7 +8273,7 @@ def to_timestamp(self, freq=None, how="start", axis=0, copy=True):
return self._constructor(new_data)
- def to_period(self, freq=None, axis=0, copy=True):
+ def to_period(self, freq=None, axis=0, copy=True) -> "DataFrame":
"""
Convert DataFrame from DatetimeIndex to PeriodIndex.
@@ -8295,7 +8307,7 @@ def to_period(self, freq=None, axis=0, copy=True):
return self._constructor(new_data)
- def isin(self, values):
+ def isin(self, values) -> "DataFrame":
"""
Whether each element in the DataFrame is contained in values.
@@ -8362,12 +8374,14 @@ def isin(self, values):
from pandas.core.reshape.concat import concat
values = collections.defaultdict(list, values)
- return concat(
- (
- self.iloc[:, [i]].isin(values[col])
- for i, col in enumerate(self.columns)
- ),
- axis=1,
+ return self._ensure_type(
+ concat(
+ (
+ self.iloc[:, [i]].isin(values[col])
+ for i, col in enumerate(self.columns)
+ ),
+ axis=1,
+ )
)
elif isinstance(values, Series):
if not values.index.is_unique:
diff --git a/pandas/core/reshape/concat.py b/pandas/core/reshape/concat.py
index 2007f6aa32a57..ac00930ce248e 100644
--- a/pandas/core/reshape/concat.py
+++ b/pandas/core/reshape/concat.py
@@ -2,7 +2,7 @@
concat routines
"""
-from typing import Hashable, List, Mapping, Optional, Sequence, Union, overload
+from typing import Hashable, Iterable, List, Mapping, Optional, Union, overload
import numpy as np
@@ -30,7 +30,7 @@
@overload
def concat(
- objs: Union[Sequence["DataFrame"], Mapping[Optional[Hashable], "DataFrame"]],
+ objs: Union[Iterable["DataFrame"], Mapping[Optional[Hashable], "DataFrame"]],
axis=0,
join: str = "outer",
ignore_index: bool = False,
@@ -47,7 +47,7 @@ def concat(
@overload
def concat(
objs: Union[
- Sequence[FrameOrSeriesUnion], Mapping[Optional[Hashable], FrameOrSeriesUnion]
+ Iterable[FrameOrSeriesUnion], Mapping[Optional[Hashable], FrameOrSeriesUnion]
],
axis=0,
join: str = "outer",
@@ -64,7 +64,7 @@ def concat(
def concat(
objs: Union[
- Sequence[FrameOrSeriesUnion], Mapping[Optional[Hashable], FrameOrSeriesUnion]
+ Iterable[FrameOrSeriesUnion], Mapping[Optional[Hashable], FrameOrSeriesUnion]
],
axis=0,
join="outer",
 | Add return type annotations to DataFrame methods that have a single return value type. | https://api.github.com/repos/pandas-dev/pandas/pulls/30718 | 2020-01-05T23:29:10Z | 2020-01-06T18:58:50Z | 2020-01-06T18:58:50Z | 2020-01-06T19:03:46Z |
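For context (not part of the diff), a minimal sketch of the behavior these annotations encode — each annotated method returns exactly one concrete type:

```python
import pandas as pd

# Illustration only: the methods annotated in this PR each return a single
# concrete type, which is what the new annotations document.
df = pd.DataFrame({"a": [1, 2, 2], "b": [3.0, 4.0, 4.0]})

assert isinstance(df.nunique(), pd.Series)          # nunique -> Series
assert isinstance(df.idxmin(), pd.Series)           # idxmin -> Series
assert isinstance(df.mode(), pd.DataFrame)          # mode -> "DataFrame"
assert isinstance(df.isin([1, 3.0]), pd.DataFrame)  # isin -> "DataFrame"
```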
REF: move sharable methods to ExtensionIndex | diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index 0c40900d54b53..4697f2d2e59a4 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -831,20 +831,6 @@ def slice_indexer(self, start=None, end=None, step=None, kind=None):
else:
raise
- # --------------------------------------------------------------------
- # Wrapping DatetimeArray
-
- def __getitem__(self, key):
- result = self._data.__getitem__(key)
- if is_scalar(result):
- return result
- elif result.ndim > 1:
- # To support MPL which performs slicing with 2 dim
- # even though it only has 1 dim by definition
- assert isinstance(result, np.ndarray), result
- return result
- return type(self)(result, name=self.name)
-
# --------------------------------------------------------------------
@Substitution(klass="DatetimeIndex")
diff --git a/pandas/core/indexes/extension.py b/pandas/core/indexes/extension.py
index 9011616dfe496..bd089f574a313 100644
--- a/pandas/core/indexes/extension.py
+++ b/pandas/core/indexes/extension.py
@@ -3,6 +3,8 @@
"""
from typing import List
+import numpy as np
+
from pandas.compat.numpy import function as nv
from pandas.util._decorators import cache_readonly
@@ -170,6 +172,29 @@ class ExtensionIndex(Index):
__le__ = _make_wrapped_comparison_op("__le__")
__ge__ = _make_wrapped_comparison_op("__ge__")
+ def __getitem__(self, key):
+ result = self._data[key]
+ if isinstance(result, type(self._data)):
+ return type(self)(result, name=self.name)
+
+ # Includes cases where we get a 2D ndarray back for MPL compat
+ return result
+
+ def __iter__(self):
+ return self._data.__iter__()
+
+ @property
+ def _ndarray_values(self) -> np.ndarray:
+ return self._data._ndarray_values
+
+ def dropna(self, how="any"):
+ if how not in ("any", "all"):
+ raise ValueError(f"invalid how option: {how}")
+
+ if self.hasnans:
+ return self._shallow_copy(self._data[~self._isnan])
+ return self._shallow_copy()
+
def repeat(self, repeats, axis=None):
nv.validate_repeat(tuple(), dict(axis=axis))
result = self._data.repeat(repeats, axis=axis)
diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index 86dd4525c7d6d..1f3182bc83e1d 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -211,15 +211,6 @@ def _formatter_func(self):
return _get_format_timedelta64(self, box=True)
- # -------------------------------------------------------------------
- # Wrapping TimedeltaArray
-
- def __getitem__(self, key):
- result = self._data.__getitem__(key)
- if is_scalar(result):
- return result
- return type(self)(result, name=self.name)
-
# -------------------------------------------------------------------
@Appender(_index_shared_docs["astype"])
| https://api.github.com/repos/pandas-dev/pandas/pulls/30717 | 2020-01-05T22:47:12Z | 2020-01-09T13:17:59Z | 2020-01-09T13:17:59Z | 2020-01-15T08:26:11Z | |
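A quick illustration, using only public pandas API, of the contract this refactor centralizes on `ExtensionIndex`: scalar positions return scalars, array-like keys return the same index type (with the name propagated), and `dropna` preserves the dtype:

```python
import pandas as pd

dti = pd.date_range("2020-01-01", periods=4, name="ts")

assert isinstance(dti[0], pd.Timestamp)        # scalar key -> scalar
assert isinstance(dti[1:3], pd.DatetimeIndex)  # slice -> same index type
assert dti[1:3].name == "ts"                   # name is propagated

tdi = pd.TimedeltaIndex(["1 day", pd.NaT])
assert len(tdi.dropna()) == 1                  # NaT dropped
assert tdi.dropna().dtype == tdi.dtype         # dtype preserved
```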
IntervalArray equality follow-ups | diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index 9b1399b9e5aa9..9ce917b004bc1 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -51,6 +51,7 @@
maybe_extract_name,
)
from pandas.core.indexes.datetimes import DatetimeIndex, date_range
+from pandas.core.indexes.extension import make_wrapped_comparison_op
from pandas.core.indexes.multi import MultiIndex
from pandas.core.indexes.timedeltas import TimedeltaIndex, timedelta_range
from pandas.core.ops import get_op_result_name
@@ -205,9 +206,7 @@ def func(intvidx_self, other, sort=False):
"__array__",
"overlaps",
"contains",
- "__eq__",
"__len__",
- "__ne__",
"set_closed",
"to_tuples",
],
@@ -231,8 +230,6 @@ class IntervalIndex(IntervalMixin, ExtensionIndex, accessor.PandasDelegate):
"__array__",
"overlaps",
"contains",
- "__eq__",
- "__ne__",
}
# --------------------------------------------------------------------
@@ -1206,7 +1203,14 @@ def _delegate_method(self, name, *args, **kwargs):
return type(self)._simple_new(res, name=self.name)
return Index(res)
+ @classmethod
+ def _add_comparison_methods(cls):
+ """ add in comparison methods """
+ cls.__eq__ = make_wrapped_comparison_op("__eq__")
+ cls.__ne__ = make_wrapped_comparison_op("__ne__")
+
+IntervalIndex._add_comparison_methods()
IntervalIndex._add_logical_methods_disabled()
diff --git a/pandas/tests/arithmetic/test_interval.py b/pandas/tests/arithmetic/test_interval.py
new file mode 100644
index 0000000000000..f9e1a515277d5
--- /dev/null
+++ b/pandas/tests/arithmetic/test_interval.py
@@ -0,0 +1,273 @@
+import operator
+
+import numpy as np
+import pytest
+
+from pandas.core.dtypes.common import is_list_like
+
+import pandas as pd
+from pandas import (
+ Categorical,
+ Index,
+ Interval,
+ IntervalIndex,
+ Period,
+ Series,
+ Timedelta,
+ Timestamp,
+ date_range,
+ period_range,
+ timedelta_range,
+)
+import pandas._testing as tm
+from pandas.core.arrays import IntervalArray
+
+
+@pytest.fixture(
+ params=[
+ (Index([0, 2, 4, 4]), Index([1, 3, 5, 8])),
+ (Index([0.0, 1.0, 2.0, np.nan]), Index([1.0, 2.0, 3.0, np.nan])),
+ (
+ timedelta_range("0 days", periods=3).insert(4, pd.NaT),
+ timedelta_range("1 day", periods=3).insert(4, pd.NaT),
+ ),
+ (
+ date_range("20170101", periods=3).insert(4, pd.NaT),
+ date_range("20170102", periods=3).insert(4, pd.NaT),
+ ),
+ (
+ date_range("20170101", periods=3, tz="US/Eastern").insert(4, pd.NaT),
+ date_range("20170102", periods=3, tz="US/Eastern").insert(4, pd.NaT),
+ ),
+ ],
+ ids=lambda x: str(x[0].dtype),
+)
+def left_right_dtypes(request):
+ """
+ Fixture for building an IntervalArray from various dtypes
+ """
+ return request.param
+
+
+@pytest.fixture
+def array(left_right_dtypes):
+ """
+ Fixture to generate an IntervalArray of various dtypes containing NA if possible
+ """
+ left, right = left_right_dtypes
+ return IntervalArray.from_arrays(left, right)
+
+
+def create_categorical_intervals(left, right, closed="right"):
+ return Categorical(IntervalIndex.from_arrays(left, right, closed))
+
+
+def create_series_intervals(left, right, closed="right"):
+ return Series(IntervalArray.from_arrays(left, right, closed))
+
+
+def create_series_categorical_intervals(left, right, closed="right"):
+ return Series(Categorical(IntervalIndex.from_arrays(left, right, closed)))
+
+
+class TestComparison:
+ @pytest.fixture(params=[operator.eq, operator.ne])
+ def op(self, request):
+ return request.param
+
+ @pytest.fixture(
+ params=[
+ IntervalArray.from_arrays,
+ IntervalIndex.from_arrays,
+ create_categorical_intervals,
+ create_series_intervals,
+ create_series_categorical_intervals,
+ ],
+ ids=[
+ "IntervalArray",
+ "IntervalIndex",
+ "Categorical[Interval]",
+ "Series[Interval]",
+ "Series[Categorical[Interval]]",
+ ],
+ )
+ def interval_constructor(self, request):
+ """
+ Fixture for all pandas native interval constructors.
+ To be used as the LHS of IntervalArray comparisons.
+ """
+ return request.param
+
+ def elementwise_comparison(self, op, array, other):
+ """
+        Helper that performs elementwise comparisons between `array` and `other`
+ """
+ other = other if is_list_like(other) else [other] * len(array)
+ return np.array([op(x, y) for x, y in zip(array, other)])
+
+ def test_compare_scalar_interval(self, op, array):
+ # matches first interval
+ other = array[0]
+ result = op(array, other)
+ expected = self.elementwise_comparison(op, array, other)
+ tm.assert_numpy_array_equal(result, expected)
+
+ # matches on a single endpoint but not both
+ other = Interval(array.left[0], array.right[1])
+ result = op(array, other)
+ expected = self.elementwise_comparison(op, array, other)
+ tm.assert_numpy_array_equal(result, expected)
+
+ def test_compare_scalar_interval_mixed_closed(self, op, closed, other_closed):
+ array = IntervalArray.from_arrays(range(2), range(1, 3), closed=closed)
+ other = Interval(0, 1, closed=other_closed)
+
+ result = op(array, other)
+ expected = self.elementwise_comparison(op, array, other)
+ tm.assert_numpy_array_equal(result, expected)
+
+ def test_compare_scalar_na(self, op, array, nulls_fixture):
+ result = op(array, nulls_fixture)
+ expected = self.elementwise_comparison(op, array, nulls_fixture)
+ tm.assert_numpy_array_equal(result, expected)
+
+ @pytest.mark.parametrize(
+ "other",
+ [
+ 0,
+ 1.0,
+ True,
+ "foo",
+ Timestamp("2017-01-01"),
+ Timestamp("2017-01-01", tz="US/Eastern"),
+ Timedelta("0 days"),
+ Period("2017-01-01", "D"),
+ ],
+ )
+ def test_compare_scalar_other(self, op, array, other):
+ result = op(array, other)
+ expected = self.elementwise_comparison(op, array, other)
+ tm.assert_numpy_array_equal(result, expected)
+
+ def test_compare_list_like_interval(
+ self, op, array, interval_constructor,
+ ):
+ # same endpoints
+ other = interval_constructor(array.left, array.right)
+ result = op(array, other)
+ expected = self.elementwise_comparison(op, array, other)
+ tm.assert_numpy_array_equal(result, expected)
+
+ # different endpoints
+ other = interval_constructor(array.left[::-1], array.right[::-1])
+ result = op(array, other)
+ expected = self.elementwise_comparison(op, array, other)
+ tm.assert_numpy_array_equal(result, expected)
+
+ # all nan endpoints
+ other = interval_constructor([np.nan] * 4, [np.nan] * 4)
+ result = op(array, other)
+ expected = self.elementwise_comparison(op, array, other)
+ tm.assert_numpy_array_equal(result, expected)
+
+ def test_compare_list_like_interval_mixed_closed(
+ self, op, interval_constructor, closed, other_closed
+ ):
+ array = IntervalArray.from_arrays(range(2), range(1, 3), closed=closed)
+ other = interval_constructor(range(2), range(1, 3), closed=other_closed)
+
+ result = op(array, other)
+ expected = self.elementwise_comparison(op, array, other)
+ tm.assert_numpy_array_equal(result, expected)
+
+ @pytest.mark.parametrize(
+ "other",
+ [
+ (
+ Interval(0, 1),
+ Interval(Timedelta("1 day"), Timedelta("2 days")),
+ Interval(4, 5, "both"),
+ Interval(10, 20, "neither"),
+ ),
+ (0, 1.5, Timestamp("20170103"), np.nan),
+ (
+ Timestamp("20170102", tz="US/Eastern"),
+ Timedelta("2 days"),
+ "baz",
+ pd.NaT,
+ ),
+ ],
+ )
+ def test_compare_list_like_object(self, op, array, other):
+ result = op(array, other)
+ expected = self.elementwise_comparison(op, array, other)
+ tm.assert_numpy_array_equal(result, expected)
+
+ def test_compare_list_like_nan(self, op, array, nulls_fixture):
+ other = [nulls_fixture] * 4
+ result = op(array, other)
+ expected = self.elementwise_comparison(op, array, other)
+ tm.assert_numpy_array_equal(result, expected)
+
+ @pytest.mark.parametrize(
+ "other",
+ [
+ np.arange(4, dtype="int64"),
+ np.arange(4, dtype="float64"),
+ date_range("2017-01-01", periods=4),
+ date_range("2017-01-01", periods=4, tz="US/Eastern"),
+ timedelta_range("0 days", periods=4),
+ period_range("2017-01-01", periods=4, freq="D"),
+ Categorical(list("abab")),
+ Categorical(date_range("2017-01-01", periods=4)),
+ pd.array(list("abcd")),
+ pd.array(["foo", 3.14, None, object()]),
+ ],
+ ids=lambda x: str(x.dtype),
+ )
+ def test_compare_list_like_other(self, op, array, other):
+ result = op(array, other)
+ expected = self.elementwise_comparison(op, array, other)
+ tm.assert_numpy_array_equal(result, expected)
+
+ @pytest.mark.parametrize("length", [1, 3, 5])
+ @pytest.mark.parametrize("other_constructor", [IntervalArray, list])
+ def test_compare_length_mismatch_errors(self, op, other_constructor, length):
+ array = IntervalArray.from_arrays(range(4), range(1, 5))
+ other = other_constructor([Interval(0, 1)] * length)
+ with pytest.raises(ValueError, match="Lengths must match to compare"):
+ op(array, other)
+
+ @pytest.mark.parametrize(
+ "constructor, expected_type, assert_func",
+ [
+ (IntervalIndex, np.array, tm.assert_numpy_array_equal),
+ (Series, Series, tm.assert_series_equal),
+ ],
+ )
+ def test_index_series_compat(self, op, constructor, expected_type, assert_func):
+ # IntervalIndex/Series that rely on IntervalArray for comparisons
+ breaks = range(4)
+ index = constructor(IntervalIndex.from_breaks(breaks))
+
+ # scalar comparisons
+ other = index[0]
+ result = op(index, other)
+ expected = expected_type(self.elementwise_comparison(op, index, other))
+ assert_func(result, expected)
+
+ other = breaks[0]
+ result = op(index, other)
+ expected = expected_type(self.elementwise_comparison(op, index, other))
+ assert_func(result, expected)
+
+ # list-like comparisons
+ other = IntervalArray.from_breaks(breaks)
+ result = op(index, other)
+ expected = expected_type(self.elementwise_comparison(op, index, other))
+ assert_func(result, expected)
+
+ other = [index[0], breaks[0], "foo"]
+ result = op(index, other)
+ expected = expected_type(self.elementwise_comparison(op, index, other))
+ assert_func(result, expected)
diff --git a/pandas/tests/arrays/interval/test_interval.py b/pandas/tests/arrays/interval/test_interval.py
index 9d495351abc97..82db14d9eb135 100644
--- a/pandas/tests/arrays/interval/test_interval.py
+++ b/pandas/tests/arrays/interval/test_interval.py
@@ -1,22 +1,14 @@
-import operator
-
import numpy as np
import pytest
-from pandas.core.dtypes.common import is_list_like
-
import pandas as pd
from pandas import (
- Categorical,
Index,
Interval,
IntervalIndex,
- Period,
- Series,
Timedelta,
Timestamp,
date_range,
- period_range,
timedelta_range,
)
import pandas._testing as tm
@@ -43,18 +35,6 @@ def left_right_dtypes(request):
return request.param
-def create_categorical_intervals(left, right, closed="right"):
- return Categorical(IntervalIndex.from_arrays(left, right, closed))
-
-
-def create_series_intervals(left, right, closed="right"):
- return Series(IntervalArray.from_arrays(left, right, closed))
-
-
-def create_series_categorical_intervals(left, right, closed="right"):
- return Series(Categorical(IntervalIndex.from_arrays(left, right, closed)))
-
-
class TestAttributes:
@pytest.mark.parametrize(
"left, right",
@@ -113,221 +93,6 @@ def test_set_na(self, left_right_dtypes):
tm.assert_extension_array_equal(result, expected)
-class TestComparison:
- @pytest.fixture(params=[operator.eq, operator.ne])
- def op(self, request):
- return request.param
-
- @pytest.fixture
- def array(self, left_right_dtypes):
- """
- Fixture to generate an IntervalArray of various dtypes containing NA if possible
- """
- left, right = left_right_dtypes
- if left.dtype != "int64":
- left, right = left.insert(4, np.nan), right.insert(4, np.nan)
- else:
- left, right = left.insert(4, 10), right.insert(4, 20)
- return IntervalArray.from_arrays(left, right)
-
- @pytest.fixture(
- params=[
- IntervalArray.from_arrays,
- IntervalIndex.from_arrays,
- create_categorical_intervals,
- create_series_intervals,
- create_series_categorical_intervals,
- ],
- ids=[
- "IntervalArray",
- "IntervalIndex",
- "Categorical[Interval]",
- "Series[Interval]",
- "Series[Categorical[Interval]]",
- ],
- )
- def interval_constructor(self, request):
- """
- Fixture for all pandas native interval constructors.
- To be used as the LHS of IntervalArray comparisons.
- """
- return request.param
-
- def elementwise_comparison(self, op, array, other):
- """
- Helper that performs elementwise comparisions between `array` and `other`
- """
- other = other if is_list_like(other) else [other] * len(array)
- return np.array([op(x, y) for x, y in zip(array, other)])
-
- def test_compare_scalar_interval(self, op, array):
- # matches first interval
- other = array[0]
- result = op(array, other)
- expected = self.elementwise_comparison(op, array, other)
- tm.assert_numpy_array_equal(result, expected)
-
- # matches on a single endpoint but not both
- other = Interval(array.left[0], array.right[1])
- result = op(array, other)
- expected = self.elementwise_comparison(op, array, other)
- tm.assert_numpy_array_equal(result, expected)
-
- def test_compare_scalar_interval_mixed_closed(self, op, closed, other_closed):
- array = IntervalArray.from_arrays(range(2), range(1, 3), closed=closed)
- other = Interval(0, 1, closed=other_closed)
-
- result = op(array, other)
- expected = self.elementwise_comparison(op, array, other)
- tm.assert_numpy_array_equal(result, expected)
-
- def test_compare_scalar_na(self, op, array, nulls_fixture):
- result = op(array, nulls_fixture)
- expected = self.elementwise_comparison(op, array, nulls_fixture)
- tm.assert_numpy_array_equal(result, expected)
-
- @pytest.mark.parametrize(
- "other",
- [
- 0,
- 1.0,
- True,
- "foo",
- Timestamp("2017-01-01"),
- Timestamp("2017-01-01", tz="US/Eastern"),
- Timedelta("0 days"),
- Period("2017-01-01", "D"),
- ],
- )
- def test_compare_scalar_other(self, op, array, other):
- result = op(array, other)
- expected = self.elementwise_comparison(op, array, other)
- tm.assert_numpy_array_equal(result, expected)
-
- def test_compare_list_like_interval(
- self, op, array, interval_constructor,
- ):
- # same endpoints
- other = interval_constructor(array.left, array.right)
- result = op(array, other)
- expected = self.elementwise_comparison(op, array, other)
- tm.assert_numpy_array_equal(result, expected)
-
- # different endpoints
- other = interval_constructor(array.left[::-1], array.right[::-1])
- result = op(array, other)
- expected = self.elementwise_comparison(op, array, other)
- tm.assert_numpy_array_equal(result, expected)
-
- # all nan endpoints
- other = interval_constructor([np.nan] * 4, [np.nan] * 4)
- result = op(array, other)
- expected = self.elementwise_comparison(op, array, other)
- tm.assert_numpy_array_equal(result, expected)
-
- def test_compare_list_like_interval_mixed_closed(
- self, op, interval_constructor, closed, other_closed
- ):
- array = IntervalArray.from_arrays(range(2), range(1, 3), closed=closed)
- other = interval_constructor(range(2), range(1, 3), closed=other_closed)
-
- result = op(array, other)
- expected = self.elementwise_comparison(op, array, other)
- tm.assert_numpy_array_equal(result, expected)
-
- @pytest.mark.parametrize(
- "other",
- [
- (
- Interval(0, 1),
- Interval(Timedelta("1 day"), Timedelta("2 days")),
- Interval(4, 5, "both"),
- Interval(10, 20, "neither"),
- ),
- (0, 1.5, Timestamp("20170103"), np.nan),
- (
- Timestamp("20170102", tz="US/Eastern"),
- Timedelta("2 days"),
- "baz",
- pd.NaT,
- ),
- ],
- )
- def test_compare_list_like_object(self, op, array, other):
- result = op(array, other)
- expected = self.elementwise_comparison(op, array, other)
- tm.assert_numpy_array_equal(result, expected)
-
- def test_compare_list_like_nan(self, op, array, nulls_fixture):
- other = [nulls_fixture] * 4
- result = op(array, other)
- expected = self.elementwise_comparison(op, array, other)
- tm.assert_numpy_array_equal(result, expected)
-
- @pytest.mark.parametrize(
- "other",
- [
- np.arange(4, dtype="int64"),
- np.arange(4, dtype="float64"),
- date_range("2017-01-01", periods=4),
- date_range("2017-01-01", periods=4, tz="US/Eastern"),
- timedelta_range("0 days", periods=4),
- period_range("2017-01-01", periods=4, freq="D"),
- Categorical(list("abab")),
- Categorical(date_range("2017-01-01", periods=4)),
- pd.array(list("abcd")),
- pd.array(["foo", 3.14, None, object()]),
- ],
- ids=lambda x: str(x.dtype),
- )
- def test_compare_list_like_other(self, op, array, other):
- result = op(array, other)
- expected = self.elementwise_comparison(op, array, other)
- tm.assert_numpy_array_equal(result, expected)
-
- @pytest.mark.parametrize("length", [1, 3, 5])
- @pytest.mark.parametrize("other_constructor", [IntervalArray, list])
- def test_compare_length_mismatch_errors(self, op, other_constructor, length):
- array = IntervalArray.from_arrays(range(4), range(1, 5))
- other = other_constructor([Interval(0, 1)] * length)
- with pytest.raises(ValueError, match="Lengths must match to compare"):
- op(array, other)
-
- @pytest.mark.parametrize(
- "constructor, expected_type, assert_func",
- [
- (IntervalIndex, np.array, tm.assert_numpy_array_equal),
- (Series, Series, tm.assert_series_equal),
- ],
- )
- def test_index_series_compat(self, op, constructor, expected_type, assert_func):
- # IntervalIndex/Series that rely on IntervalArray for comparisons
- breaks = range(4)
- index = constructor(IntervalIndex.from_breaks(breaks))
-
- # scalar comparisons
- other = index[0]
- result = op(index, other)
- expected = expected_type(self.elementwise_comparison(op, index, other))
- assert_func(result, expected)
-
- other = breaks[0]
- result = op(index, other)
- expected = expected_type(self.elementwise_comparison(op, index, other))
- assert_func(result, expected)
-
- # list-like comparisons
- other = IntervalArray.from_breaks(breaks)
- result = op(index, other)
- expected = expected_type(self.elementwise_comparison(op, index, other))
- assert_func(result, expected)
-
- other = [index[0], breaks[0], "foo"]
- result = op(index, other)
- expected = expected_type(self.elementwise_comparison(op, index, other))
- assert_func(result, expected)
-
-
def test_repr():
# GH 25022
arr = IntervalArray.from_tuples([(0, 1), (1, 2)])
| Follow-ups to #30640 based on @jbrockmendel's comments.
Haven't addressed all the comments yet but pushing this up now so there's a record of it.
Changes thus far:
- Created `tests/arithmetic/test_interval.py` and moved the tests there
- Used `make_wrapped_comparison_op` to add `__eq__` and `__ne__` to `IntervalIndex`. | https://api.github.com/repos/pandas-dev/pandas/pulls/30715 | 2020-01-05T21:50:47Z | 2020-01-06T00:22:12Z | 2020-01-06T00:22:12Z | 2020-01-08T15:55:24Z |
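As a usage sketch (assuming a pandas version that includes the elementwise interval comparisons from #30640), the behavior the moved tests cover looks like:

```python
import pandas as pd

arr = pd.arrays.IntervalArray.from_breaks([0, 1, 2, 3])  # (0,1], (1,2], (2,3]

# Comparing against a scalar Interval is elementwise and returns a boolean array
result = arr == pd.Interval(0, 1)
assert result.tolist() == [True, False, False]

# IntervalIndex wraps the same comparison logic
idx = pd.IntervalIndex.from_breaks([0, 1, 2, 3])
assert (idx != pd.Interval(1, 2)).tolist() == [True, False, True]
```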
REF: share _union between DTI/TDI | diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 365cfb1585f00..c78545a12f43b 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -806,6 +806,25 @@ def _fast_union(self, other, sort=None):
else:
return left
+ def _union(self, other, sort):
+ if not len(other) or self.equals(other) or not len(self):
+ return super()._union(other, sort=sort)
+
+ # We are called by `union`, which is responsible for this validation
+ assert isinstance(other, type(self))
+
+ this, other = self._maybe_utc_convert(other)
+
+ if this._can_fast_union(other):
+ return this._fast_union(other, sort=sort)
+ else:
+ result = Index._union(this, other, sort=sort)
+ if isinstance(result, type(self)):
+ assert result._data.dtype == this.dtype
+ if result.freq is None:
+ result._set_freq("infer")
+ return result
+
# --------------------------------------------------------------------
# Join Methods
_join_precedence = 10
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index 4b2362bea27b3..40d3823c9700b 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -379,25 +379,6 @@ def _formatter_func(self):
# --------------------------------------------------------------------
# Set Operation Methods
- def _union(self, other: "DatetimeIndex", sort):
- if not len(other) or self.equals(other) or not len(self):
- return super()._union(other, sort=sort)
-
- # We are called by `union`, which is responsible for this validation
- assert isinstance(other, DatetimeIndex)
-
- this, other = self._maybe_utc_convert(other)
-
- if this._can_fast_union(other):
- return this._fast_union(other, sort=sort)
- else:
- result = Index._union(this, other, sort=sort)
- if isinstance(result, DatetimeIndex):
- assert result._data.dtype == this.dtype
- if result.freq is None:
- result._set_freq("infer")
- return result
-
def union_many(self, others):
"""
A bit of a hack to accelerate unioning a collection of indexes.
diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index 1bdb4bf8b0a00..ee6e5b984ae7b 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -246,24 +246,6 @@ def astype(self, dtype, copy=True):
return Index(result.astype("i8"), name=self.name)
return DatetimeIndexOpsMixin.astype(self, dtype, copy=copy)
- def _union(self, other: "TimedeltaIndex", sort):
- if len(other) == 0 or self.equals(other) or len(self) == 0:
- return super()._union(other, sort=sort)
-
- # We are called by `union`, which is responsible for this validation
- assert isinstance(other, TimedeltaIndex)
-
- this, other = self, other
-
- if this._can_fast_union(other):
- return this._fast_union(other, sort=sort)
- else:
- result = Index._union(this, other, sort=sort)
- if isinstance(result, TimedeltaIndex):
- if result.freq is None:
- result._set_freq("infer")
- return result
-
def _maybe_promote(self, other):
if other.inferred_type == "timedelta":
other = TimedeltaIndex(other)
| https://api.github.com/repos/pandas-dev/pandas/pulls/30714 | 2020-01-05T20:08:33Z | 2020-01-06T13:28:09Z | 2020-01-06T13:28:09Z | 2020-01-06T15:31:16Z | |
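The now-shared `_union` path can be exercised through the public `union` API; a sketch (illustration only) of the behavior for both index types:

```python
import pandas as pd

# Overlapping regular ranges take the fast-union path; the same code now
# backs both DatetimeIndex and TimedeltaIndex.
left = pd.date_range("2020-01-01", periods=3, freq="D")
right = pd.date_range("2020-01-03", periods=3, freq="D")
result = left.union(right)
assert result.equals(pd.date_range("2020-01-01", periods=5, freq="D"))

tleft = pd.timedelta_range("1 day", periods=3, freq="D")
tright = pd.timedelta_range("3 days", periods=3, freq="D")
assert len(tleft.union(tright)) == 5
```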
Fix PeriodIndex._shallow_copy allowing object-dtype | diff --git a/pandas/core/indexes/api.py b/pandas/core/indexes/api.py
index 1904456848396..4072d06b9427c 100644
--- a/pandas/core/indexes/api.py
+++ b/pandas/core/indexes/api.py
@@ -198,6 +198,7 @@ def conv(i):
result = indexes[0]
if hasattr(result, "union_many"):
+ # DatetimeIndex
return result.union_many(indexes[1:])
else:
for other in indexes[1:]:
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index bd8d6b9e73188..087db014de5b3 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -1947,7 +1947,7 @@ def dropna(self, how="any"):
raise ValueError(f"invalid how option: {how}")
if self.hasnans:
- return self._shallow_copy(self.values[~self._isnan])
+ return self._shallow_copy(self._values[~self._isnan])
return self._shallow_copy()
# --------------------------------------------------------------------
@@ -2568,11 +2568,11 @@ def symmetric_difference(self, other, result_name=None, sort=None):
left_indexer = np.setdiff1d(
np.arange(this.size), common_indexer, assume_unique=True
)
- left_diff = this.values.take(left_indexer)
+ left_diff = this._values.take(left_indexer)
# {other} minus {this}
right_indexer = (indexer == -1).nonzero()[0]
- right_diff = other.values.take(right_indexer)
+ right_diff = other._values.take(right_indexer)
the_diff = concat_compat([left_diff, right_diff])
if sort is None:
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index 93f989b95773f..6ba778bc83dd1 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -272,19 +272,14 @@ def _shallow_copy(self, values=None, **kwargs):
values = self._data
if isinstance(values, type(self)):
- values = values._values
+ values = values._data
if not isinstance(values, PeriodArray):
- if isinstance(values, np.ndarray) and is_integer_dtype(values.dtype):
+ if isinstance(values, np.ndarray) and values.dtype == "i8":
values = PeriodArray(values, freq=self.freq)
else:
- # in particular, I would like to avoid period_array here.
- # Some people seem to be calling use with unexpected types
- # Index.difference -> ndarray[Period]
- # DatetimelikeIndexOpsMixin.repeat -> ndarray[ordinal]
- # I think that once all of Datetime* are EAs, we can simplify
- # this quite a bit.
- values = period_array(values, freq=self.freq)
+ # GH#30713 this should never be reached
+ raise TypeError(type(values), getattr(values, "dtype", None))
# We don't allow changing `freq` in _shallow_copy.
validate_dtype_freq(self.dtype, kwargs.get("freq"))
| https://api.github.com/repos/pandas-dev/pandas/pulls/30713 | 2020-01-05T19:56:33Z | 2020-01-06T13:20:10Z | 2020-01-06T13:20:10Z | 2020-01-06T15:32:06Z | |
CLN: remove Index/Series._is_homogeneous_type | diff --git a/pandas/core/base.py b/pandas/core/base.py
index d38dbec684f35..7d499181c6ed1 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -628,24 +628,6 @@ def transpose(self, *args, **kwargs):
""",
)
- @property
- def _is_homogeneous_type(self) -> bool:
- """
- Whether the object has a single dtype.
-
- By definition, Series and Index are always considered homogeneous.
- A MultiIndex may or may not be homogeneous, depending on the
- dtypes of the levels.
-
- See Also
- --------
- DataFrame._is_homogeneous_type : Whether all the columns in a
- DataFrame have the same dtype.
- MultiIndex._is_homogeneous_type : Whether all the levels of a
- MultiIndex have the same dtype.
- """
- return True
-
@property
def shape(self):
"""
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 19f36e071fd0e..d9e68b64f526d 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -661,31 +661,6 @@ def array(self):
"'MultiIndex.to_numpy()' to get a NumPy array of tuples."
)
- @property
- def _is_homogeneous_type(self) -> bool:
- """
- Whether the levels of a MultiIndex all have the same dtype.
-
- This looks at the dtypes of the levels.
-
- See Also
- --------
- Index._is_homogeneous_type : Whether the object has a single
- dtype.
- DataFrame._is_homogeneous_type : Whether all the columns in a
- DataFrame have the same dtype.
-
- Examples
- --------
- >>> MultiIndex.from_tuples([
- ... ('a', 'b'), ('a', 'c')])._is_homogeneous_type
- True
- >>> MultiIndex.from_tuples([
- ... ('a', 1), ('a', 2)])._is_homogeneous_type
- False
- """
- return len({x.dtype for x in self.levels}) <= 1
-
def _set_levels(
self, levels, level=None, copy=False, validate=True, verify_integrity=False
):
diff --git a/pandas/tests/indexing/multiindex/test_multiindex.py b/pandas/tests/indexing/multiindex/test_multiindex.py
index 3339cb5f3150d..8163de8588232 100644
--- a/pandas/tests/indexing/multiindex/test_multiindex.py
+++ b/pandas/tests/indexing/multiindex/test_multiindex.py
@@ -1,5 +1,4 @@
import numpy as np
-import pytest
import pandas._libs.index as _index
from pandas.errors import PerformanceWarning
@@ -47,17 +46,6 @@ def test_multiindex_contains_dropped(self):
assert "a" in idx.levels[0]
assert "a" not in idx
- @pytest.mark.parametrize(
- "data, expected",
- [
- (MultiIndex.from_product([(), ()]), True),
- (MultiIndex.from_product([(1, 2), (3, 4)]), True),
- (MultiIndex.from_product([("a", "b"), (1, 2)]), False),
- ],
- )
- def test_multiindex_is_homogeneous_type(self, data, expected):
- assert data._is_homogeneous_type is expected
-
def test_indexing_over_hashtable_size_cutoff(self):
n = 10000
diff --git a/pandas/tests/series/test_dtypes.py b/pandas/tests/series/test_dtypes.py
index 4bf2f1bd82eff..a57ec2ba05d54 100644
--- a/pandas/tests/series/test_dtypes.py
+++ b/pandas/tests/series/test_dtypes.py
@@ -465,13 +465,6 @@ def test_infer_objects_series(self):
assert actual.dtype == "object"
tm.assert_series_equal(actual, expected)
- def test_is_homogeneous_type(self):
- with tm.assert_produces_warning(DeprecationWarning, check_stacklevel=False):
- empty = Series()
- assert empty._is_homogeneous_type
- assert Series([1, 2])._is_homogeneous_type
- assert Series(pd.Categorical([1, 2]))._is_homogeneous_type
-
@pytest.mark.parametrize(
"data",
[
| https://api.github.com/repos/pandas-dev/pandas/pulls/30712 | 2020-01-05T19:43:26Z | 2020-01-05T20:46:28Z | 2020-01-05T20:46:28Z | 2020-01-05T20:47:52Z | |
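The removed `MultiIndex._is_homogeneous_type` property above boils down to collecting the distinct level dtypes and checking there is at most one. A minimal standalone sketch of that check (the function name is ours, not pandas API; levels are plain list-likes here rather than `Index` objects):

```python
import numpy as np


def levels_are_homogeneous(levels) -> bool:
    """Return True when every level shares a single dtype."""
    # Mirrors the removed property: gather the distinct dtypes
    # across all levels and check there is at most one of them.
    return len({np.asarray(level).dtype for level in levels}) <= 1


# Homogeneous: both levels are integers.
print(levels_are_homogeneous([[1, 2], [3, 4]]))      # True
# Mixed: one string level, one integer level.
print(levels_are_homogeneous([["a", "b"], [1, 2]]))  # False
```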
Datetime error message (unable to continue that PR) | diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index 267780219c74e..d7feb194af11e 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -1952,14 +1952,21 @@ def objects_to_datetime64ns(
yearfirst=yearfirst,
require_iso8601=require_iso8601,
)
- except ValueError as e:
+ except ValueError as err:
try:
values, tz_parsed = conversion.datetime_to_datetime64(data)
# If tzaware, these values represent unix timestamps, so we
# return them as i8 to distinguish from wall times
return values.view("i8"), tz_parsed
except (ValueError, TypeError):
- raise e
+            # GH#10720. If we failed to parse the datetime, notify the
+            # user that errors='coerce' can be used to coerce to NaT.
+            # Distinguish which exception was raised based on its message.
+ if "Unknown string format" in err.args[0]:
+ msg = f"Unknown string format: {err.args[1]}"
+ msg = f"{msg}. You can coerce to NaT by passing errors='coerce'"
+ raise ValueError(msg).with_traceback(err.__traceback__)
+ raise
if tz_parsed is not None:
# We can take a shortcut since the datetime64 numpy array
diff --git a/pandas/core/tools/timedeltas.py b/pandas/core/tools/timedeltas.py
index 3e185feaea38e..1f205aff6db0a 100644
--- a/pandas/core/tools/timedeltas.py
+++ b/pandas/core/tools/timedeltas.py
@@ -113,9 +113,11 @@ def _coerce_scalar_to_timedelta_type(r, unit="ns", errors="raise"):
try:
result = Timedelta(r, unit)
- except ValueError:
+ except ValueError as err:
if errors == "raise":
- raise
+ # GH#10720
+ msg = f"{err.args[0]}. You can coerce to NaT by passing errors='coerce'"
+ raise ValueError(msg).with_traceback(err.__traceback__)
elif errors == "ignore":
return r
diff --git a/pandas/tests/indexes/datetimes/test_tools.py b/pandas/tests/indexes/datetimes/test_tools.py
index a6a43697c36dc..6555d4a049a93 100644
--- a/pandas/tests/indexes/datetimes/test_tools.py
+++ b/pandas/tests/indexes/datetimes/test_tools.py
@@ -480,6 +480,17 @@ def test_to_datetime_unparseable_ignore(self):
s = "Month 1, 1999"
assert pd.to_datetime(s, errors="ignore") == s
+ def test_to_datetime_unparseable_raise(self):
+ # GH#10720
+ invalid_data = "Month 1, 1999"
+ expected_args = (
+ f"Unknown string format: {invalid_data}. "
+ "You can coerce to NaT by passing errors='coerce'"
+ )
+
+ with pytest.raises(ValueError, match=expected_args):
+ pd.to_datetime(invalid_data, errors="raise")
+
@td.skip_if_windows # `tm.set_timezone` does not work in windows
def test_to_datetime_now(self):
# See GH#18666
diff --git a/pandas/tests/indexes/timedeltas/test_tools.py b/pandas/tests/indexes/timedeltas/test_tools.py
index 223bde8b0e2c2..75534c752f44a 100644
--- a/pandas/tests/indexes/timedeltas/test_tools.py
+++ b/pandas/tests/indexes/timedeltas/test_tools.py
@@ -111,6 +111,15 @@ def test_to_timedelta_invalid(self):
invalid_data, to_timedelta(invalid_data, errors="ignore")
)
+ # GH#10720
+ invalid_data = "some_nonesense"
+ expected_msg = (
+ "unit abbreviation w/o a number. "
+ "You can coerce to NaT by passing errors='coerce'"
+ )
+ with pytest.raises(ValueError, match=expected_msg):
+ pd.to_timedelta(invalid_data, errors="raise")
+
def test_to_timedelta_via_apply(self):
# GH 5458
expected = Series([np.timedelta64(1, "s")])
| - [ ] closes #10720
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/30711 | 2020-01-05T18:30:22Z | 2020-01-18T06:44:10Z | null | 2020-01-18T12:47:41Z |
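The pattern this diff applies in both `objects_to_datetime64ns` and `_coerce_scalar_to_timedelta_type` — append a hint about `errors='coerce'` to the original message, then re-raise with the original traceback — can be sketched in isolation (the toy parser below is a stand-in, not pandas code):

```python
def parse_or_raise(value: str):
    """Toy parser that fails the way a datetime parser might."""
    raise ValueError(f"Unknown string format: {value}")


def parse_with_hint(value: str):
    # Mirror the diff: catch the original error, append the hint,
    # and re-raise with the original traceback preserved.
    try:
        return parse_or_raise(value)
    except ValueError as err:
        msg = f"{err.args[0]}. You can coerce to NaT by passing errors='coerce'"
        raise ValueError(msg).with_traceback(err.__traceback__)


try:
    parse_with_hint("Month 1, 1999")
except ValueError as exc:
    print(exc)
```

Using `with_traceback` keeps the failure pointing at the original parse site instead of the re-raise line.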
CI: Fix numpydev build | diff --git a/pandas/_libs/src/ujson/python/objToJSON.c b/pandas/_libs/src/ujson/python/objToJSON.c
index 5181b0400d7bb..c413a16f8d5f0 100644
--- a/pandas/_libs/src/ujson/python/objToJSON.c
+++ b/pandas/_libs/src/ujson/python/objToJSON.c
@@ -176,9 +176,8 @@ void *initObjToJSON(void) {
Py_DECREF(mod_nattype);
}
- /* Initialise numpy API and use 2/3 compatible return */
+ /* Initialise numpy API */
import_array();
- return NUMPY_IMPORT_ARRAY_RETVAL;
}
static TypeContext *createTypeContext(void) {
| - [x] closes #30709
Hmm, I'm not sure about this, but I don't see any usage of this return value.
The C extensions now build with this change.
Potentially related change on the numpy side:
https://github.com/numpy/numpy/pull/15232
cc. @WillAyd @jreback | https://api.github.com/repos/pandas-dev/pandas/pulls/30710 | 2020-01-05T15:33:20Z | 2020-01-05T16:13:31Z | 2020-01-05T16:13:30Z | 2020-01-14T23:42:25Z |
CI: Test case for wrong placed space | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index a90774d2e8ff1..598d687180f1b 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -108,6 +108,14 @@ if [[ -z "$CHECK" || "$CHECK" == "lint" ]]; then
fi
RET=$(($RET + $?)) ; echo $MSG "DONE"
+  MSG='Check for misplaced space in concatenated strings' ; echo $MSG
+ if [[ "$GITHUB_ACTIONS" == "true" ]]; then
+ $BASE_DIR/scripts/validate_spaces_over_concatned_strings.py --format="[error]{source_path}:{line_number}:{msg}" .
+ else
+ $BASE_DIR/scripts/validate_spaces_over_concatned_strings.py .
+ fi
+ RET=$(($RET + $?)) ; echo $MSG "DONE"
+
echo "isort --version-number"
isort --version-number
diff --git a/scripts/validate_spaces_over_concatned_strings.py b/scripts/validate_spaces_over_concatned_strings.py
new file mode 100755
index 0000000000000..5590549c991e6
--- /dev/null
+++ b/scripts/validate_spaces_over_concatned_strings.py
@@ -0,0 +1,140 @@
+#!/usr/bin/env python
+"""
+Test case for leading spaces in concatenated strings.
+
+For example:
+
+# Good
+foo = (
+ "bar "
+ "baz"
+)
+
+
+# Bad
+foo = (
+ "bar"
+ " baz"
+)
+"""
+
+import argparse
+import os
+import sys
+import token
+import tokenize
+from typing import Generator, List, Tuple
+
+FILE_EXTENSIONS_TO_CHECK = (".py", ".pyx", ".pyx.ini", ".pxd")
+
+MSG = "String has a space at the beginning instead of the end of the previous string."
+
+
+def main(source_path: str, output_format: str) -> bool:
+ """
+ Main entry point of the script.
+
+ Parameters
+ ----------
+ source_path : str
+ Source path representing path to a file/directory.
+ output_format : str
+ Output format of the script.
+
+ Returns
+ -------
+ bool
+        True if any string with a leading space in the wrong line was found.
+
+ Raises
+ ------
+ ValueError
+ If the `source_path` is not pointing to existing file/directory.
+ """
+ if not os.path.exists(source_path):
+ raise ValueError(
+ "Please enter a valid path, pointing to a valid file/directory."
+ )
+
+ is_failed: bool = False
+
+ if os.path.isfile(source_path):
+ for source_path, line_number in strings_with_wrong_space(source_path):
+ is_failed = True
+ print(
+ output_format.format(
+ source_path=source_path, line_number=line_number, msg=MSG
+ )
+ )
+
+ for subdir, _, files in os.walk(source_path):
+ for file_name in files:
+ if any(
+ file_name.endswith(extension) for extension in FILE_EXTENSIONS_TO_CHECK
+ ):
+ for source_path, line_number in strings_with_wrong_space(
+ os.path.join(subdir, file_name)
+ ):
+ is_failed = True
+ print(
+ output_format.format(
+ source_path=source_path, line_number=line_number, msg=MSG
+ )
+ )
+ return is_failed
+
+
+def strings_with_wrong_space(
+ source_path: str,
+) -> Generator[Tuple[str, int], None, None]:
+ """
+    Yield the file path and line number of each string to fix.
+
+ Parameters
+ ----------
+ source_path : str
+ File path pointing to a single file.
+
+ Yields
+ ------
+ source_path : str
+ Source file path.
+ line_number : int
+        Line number of the misplaced space.
+ """
+ with open(source_path, "r") as file_name:
+ tokens: List = list(tokenize.generate_tokens(file_name.readline))
+
+ for first_token, second_token, third_token in zip(tokens, tokens[1:], tokens[2:]):
+ if (
+ first_token[0] == third_token[0] == token.STRING
+ and second_token[0] == token.NL
+ ):
+            # Means we are in a block of concatenated strings
+
+            # Stripping the quotes
+ first_string = first_token[1][1:-1]
+ second_string = third_token[1][1:-1]
+
+ if (not first_string.endswith(" ")) and (second_string.startswith(" ")):
+ yield source_path, third_token[2][0]
+
+
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser(
+        description="Validate spaces over concatenated strings"
+ )
+
+ parser.add_argument(
+ "path", nargs="?", default=".", help="Source path of file/directory to check."
+ )
+ parser.add_argument(
+ "--format",
+ "-f",
+ default="{source_path}:{line_number}:{msg}",
+ help="Output format of the error message.",
+ )
+
+ args = parser.parse_args()
+
+ sys.exit(main(source_path=args.path, output_format=args.format))
| - [ ] closes #xxxx
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
Test case for:
```python
foo = (
"bar"
" baz"
)
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/30708 | 2020-01-05T11:36:22Z | 2020-01-06T21:37:12Z | null | 2020-01-08T20:32:56Z |
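The script's core detection — a `STRING`, `NL`, `STRING` token triple from `tokenize`, which marks an implicitly concatenated string split across physical lines — can be condensed into a few lines that operate on a source string instead of a file path (a sketch of the idea, not the script itself; like the script, it only handles single-quoted string literals):

```python
import io
import token
import tokenize


def wrong_space_lines(source: str):
    """Yield line numbers where a continuation string starts with a space."""
    tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))
    for first, second, third in zip(tokens, tokens[1:], tokens[2:]):
        # STRING, NL, STRING means two string literals are implicitly
        # concatenated across a line break inside brackets.
        if first.type == third.type == token.STRING and second.type == token.NL:
            first_body = first.string[1:-1]   # strip the quotes
            third_body = third.string[1:-1]
            if not first_body.endswith(" ") and third_body.startswith(" "):
                yield third.start[0]


bad = 'foo = (\n    "bar"\n    " baz"\n)\n'
good = 'foo = (\n    "bar "\n    "baz"\n)\n'
print(list(wrong_space_lines(bad)))   # [3]
print(list(wrong_space_lines(good)))  # []
```

Inside parentheses the line breaks tokenize as `NL` rather than `NEWLINE`, which is what lets the triple-token window find concatenation that spans lines.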
STY: Spaces over concat strings - batch 1 | diff --git a/pandas/_libs/tslib.pyx b/pandas/_libs/tslib.pyx
index e0a2b987c98d5..53e3354ca8eb6 100644
--- a/pandas/_libs/tslib.pyx
+++ b/pandas/_libs/tslib.pyx
@@ -120,8 +120,7 @@ def ints_to_pydatetime(const int64_t[:] arr, object tz=None, object freq=None,
elif box == "datetime":
func_create = create_datetime_from_ts
else:
- raise ValueError("box must be one of 'datetime', 'date', 'time' or"
- " 'timestamp'")
+ raise ValueError("box must be one of 'datetime', 'date', 'time' or 'timestamp'")
if is_utc(tz) or tz is None:
for i in range(n):
diff --git a/pandas/_libs/tslibs/strptime.pyx b/pandas/_libs/tslibs/strptime.pyx
index 742403883f7dd..5508b208de00a 100644
--- a/pandas/_libs/tslibs/strptime.pyx
+++ b/pandas/_libs/tslibs/strptime.pyx
@@ -278,8 +278,8 @@ def array_strptime(object[:] values, object fmt,
"the ISO year directive '%G' and a weekday "
"directive '%A', '%a', '%w', or '%u'.")
else:
- raise ValueError("ISO week directive '%V' is incompatible with"
- " the year directive '%Y'. Use the ISO year "
+ raise ValueError("ISO week directive '%V' is incompatible with "
+ "the year directive '%Y'. Use the ISO year "
"'%G' instead.")
# If we know the wk of the year and what day of that wk, we can figure
diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx
index 86a9d053730b8..abe7f9e5b4105 100644
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -814,9 +814,9 @@ default 'raise'
'shift_backward')
if nonexistent not in nonexistent_options and not isinstance(
nonexistent, timedelta):
- raise ValueError("The nonexistent argument must be one of 'raise',"
- " 'NaT', 'shift_forward', 'shift_backward' or"
- " a timedelta object")
+ raise ValueError("The nonexistent argument must be one of 'raise', "
+ "'NaT', 'shift_forward', 'shift_backward' or "
+ "a timedelta object")
if self.tzinfo is None:
# tz naive, localize
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index cc54fb5e5af13..c16281f1e3c42 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -324,8 +324,7 @@ def __init__(self, values, dtype=_NS_DTYPE, freq=None, copy=False):
if not isinstance(values, np.ndarray):
msg = (
f"Unexpected type '{type(values).__name__}'. 'values' must be "
- "a DatetimeArray ndarray, or Series or Index containing one of"
- " those."
+ "a DatetimeArray ndarray, or Series or Index containing one of those."
)
raise ValueError(msg)
if values.ndim not in [1, 2]:
diff --git a/pandas/core/arrays/timedeltas.py b/pandas/core/arrays/timedeltas.py
index 1874517f0f2e4..0e4fb5883e915 100644
--- a/pandas/core/arrays/timedeltas.py
+++ b/pandas/core/arrays/timedeltas.py
@@ -230,8 +230,8 @@ def __init__(self, values, dtype=_TD_DTYPE, freq=None, copy=False):
if not isinstance(values, np.ndarray):
msg = (
- f"Unexpected type '{type(values).__name__}'. 'values' must be a"
- " TimedeltaArray ndarray, or Series or Index containing one of those."
+ f"Unexpected type '{type(values).__name__}'. 'values' must be a "
+ "TimedeltaArray ndarray, or Series or Index containing one of those."
)
raise ValueError(msg)
if values.ndim not in [1, 2]:
diff --git a/pandas/core/computation/eval.py b/pandas/core/computation/eval.py
index 5c320042721dc..72a0fda9098fd 100644
--- a/pandas/core/computation/eval.py
+++ b/pandas/core/computation/eval.py
@@ -339,8 +339,8 @@ def eval(
if parsed_expr.assigner is None:
if multi_line:
raise ValueError(
- "Multi-line expressions are only valid"
- " if all expressions contain an assignment"
+ "Multi-line expressions are only valid "
+ "if all expressions contain an assignment"
)
elif inplace:
raise ValueError("Cannot operate inplace if there is no assignment")
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index ba0c0e7d66b1d..3636712f95278 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -7000,8 +7000,8 @@ def append(self, other, ignore_index=False, verify_integrity=False, sort=False):
other = Series(other)
if other.name is None and not ignore_index:
raise TypeError(
- "Can only append a Series if ignore_index=True"
- " or if the Series has a name"
+ "Can only append a Series if ignore_index=True "
+ "or if the Series has a name"
)
index = Index([other.name], name=self.index.name)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 3b8e9cf82f08c..357ba33973f9b 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -6428,8 +6428,8 @@ def replace(
if not is_dict_like(to_replace):
if not is_dict_like(regex):
raise TypeError(
- 'If "to_replace" and "value" are both None'
- ' and "to_replace" is not a list, then '
+ 'If "to_replace" and "value" are both None '
+ 'and "to_replace" is not a list, then '
"regex must be a mapping"
)
to_replace = regex
@@ -6443,9 +6443,8 @@ def replace(
if any(are_mappings):
if not all(are_mappings):
raise TypeError(
- "If a nested mapping is passed, all values"
- " of the top level mapping must be "
- "mappings"
+ "If a nested mapping is passed, all values "
+ "of the top level mapping must be mappings"
)
# passed a nested dict/Series
to_rep_dict = {}
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 1ba4938d45fc9..233bdd11b372b 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -485,8 +485,8 @@ def get_converter(s):
except KeyError:
# turns out it wasn't a tuple
msg = (
- "must supply a same-length tuple to get_group"
- " with multiple grouping keys"
+ "must supply a same-length tuple to get_group "
+ "with multiple grouping keys"
)
raise ValueError(msg)
diff --git a/pandas/core/groupby/grouper.py b/pandas/core/groupby/grouper.py
index 7e7261130ff4a..e38d9909c2010 100644
--- a/pandas/core/groupby/grouper.py
+++ b/pandas/core/groupby/grouper.py
@@ -605,8 +605,8 @@ def is_in_obj(gpr) -> bool:
if is_categorical_dtype(gpr) and len(gpr) != obj.shape[axis]:
raise ValueError(
- f"Length of grouper ({len(gpr)}) and axis ({obj.shape[axis]})"
- " must be same length"
+ f"Length of grouper ({len(gpr)}) and axis ({obj.shape[axis]}) "
+ "must be same length"
)
# create the Grouping
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index d9e68b64f526d..1f0aff6609dba 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -2073,9 +2073,8 @@ def drop(self, codes, level=None, errors="raise"):
elif com.is_bool_indexer(loc):
if self.lexsort_depth == 0:
warnings.warn(
- "dropping on a non-lexsorted multi-index"
- " without a level parameter may impact "
- "performance.",
+ "dropping on a non-lexsorted multi-index "
+ "without a level parameter may impact performance.",
PerformanceWarning,
stacklevel=3,
)
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index 931653b63af36..056ba73edfe34 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -1076,10 +1076,9 @@ def _upsample(self, method, limit=None, fill_value=None):
raise AssertionError("axis must be 0")
if self._from_selection:
raise ValueError(
- "Upsampling from level= or on= selection"
- " is not supported, use .set_index(...)"
- " to explicitly set index to"
- " datetime-like"
+ "Upsampling from level= or on= selection "
+ "is not supported, use .set_index(...) "
+ "to explicitly set index to datetime-like"
)
ax = self.ax
@@ -1135,9 +1134,9 @@ def _convert_obj(self, obj):
if self._from_selection:
# see GH 14008, GH 12871
msg = (
- "Resampling from level= or on= selection"
- " with a PeriodIndex is not currently supported,"
- " use .set_index(...) to explicitly set index"
+ "Resampling from level= or on= selection "
+ "with a PeriodIndex is not currently supported, "
+ "use .set_index(...) to explicitly set index"
)
raise NotImplementedError(msg)
diff --git a/pandas/core/reshape/concat.py b/pandas/core/reshape/concat.py
index 2007f6aa32a57..ff7b7580eb12f 100644
--- a/pandas/core/reshape/concat.py
+++ b/pandas/core/reshape/concat.py
@@ -351,8 +351,8 @@ def __init__(
for obj in objs:
if not isinstance(obj, (Series, DataFrame)):
msg = (
- "cannot concatenate object of type '{typ}';"
- " only Series and DataFrame objs are valid".format(typ=type(obj))
+ "cannot concatenate object of type '{typ}'; "
+ "only Series and DataFrame objs are valid".format(typ=type(obj))
)
raise TypeError(msg)
@@ -402,8 +402,8 @@ def __init__(
self._is_series = isinstance(sample, Series)
if not 0 <= axis <= sample.ndim:
raise AssertionError(
- "axis must be between 0 and {ndim}, input was"
- " {axis}".format(ndim=sample.ndim, axis=axis)
+ "axis must be between 0 and {ndim}, input was "
+ "{axis}".format(ndim=sample.ndim, axis=axis)
)
# if we have mixed ndims, then convert to highest ndim
@@ -648,8 +648,8 @@ def _make_concat_multiindex(indexes, keys, levels=None, names=None) -> MultiInde
# make sure that all of the passed indices have the same nlevels
if not len({idx.nlevels for idx in indexes}) == 1:
raise AssertionError(
- "Cannot concat indices that do"
- " not have the same number of levels"
+ "Cannot concat indices that do "
+ "not have the same number of levels"
)
# also copies
diff --git a/pandas/core/reshape/melt.py b/pandas/core/reshape/melt.py
index 722dd8751dfad..d4ccb19fc0dda 100644
--- a/pandas/core/reshape/melt.py
+++ b/pandas/core/reshape/melt.py
@@ -51,8 +51,8 @@ def melt(
missing = Index(com.flatten(id_vars)).difference(cols)
if not missing.empty:
raise KeyError(
- "The following 'id_vars' are not present"
- " in the DataFrame: {missing}"
+ "The following 'id_vars' are not present "
+ "in the DataFrame: {missing}"
"".format(missing=list(missing))
)
else:
@@ -73,8 +73,8 @@ def melt(
missing = Index(com.flatten(value_vars)).difference(cols)
if not missing.empty:
raise KeyError(
- "The following 'value_vars' are not present in"
- " the DataFrame: {missing}"
+ "The following 'value_vars' are not present in "
+ "the DataFrame: {missing}"
"".format(missing=list(missing))
)
frame = frame.loc[:, id_vars + value_vars]
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index 6fe2287923fcb..5f92e4a88b568 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -1246,32 +1246,32 @@ def _validate(self, validate: str):
if validate in ["one_to_one", "1:1"]:
if not left_unique and not right_unique:
raise MergeError(
- "Merge keys are not unique in either left"
- " or right dataset; not a one-to-one merge"
+ "Merge keys are not unique in either left "
+ "or right dataset; not a one-to-one merge"
)
elif not left_unique:
raise MergeError(
- "Merge keys are not unique in left dataset;"
- " not a one-to-one merge"
+ "Merge keys are not unique in left dataset; "
+ "not a one-to-one merge"
)
elif not right_unique:
raise MergeError(
- "Merge keys are not unique in right dataset;"
- " not a one-to-one merge"
+ "Merge keys are not unique in right dataset; "
+ "not a one-to-one merge"
)
elif validate in ["one_to_many", "1:m"]:
if not left_unique:
raise MergeError(
- "Merge keys are not unique in left dataset;"
- " not a one-to-many merge"
+ "Merge keys are not unique in left dataset; "
+ "not a one-to-many merge"
)
elif validate in ["many_to_one", "m:1"]:
if not right_unique:
raise MergeError(
- "Merge keys are not unique in right dataset;"
- " not a many-to-one merge"
+ "Merge keys are not unique in right dataset; "
+ "not a many-to-one merge"
)
elif validate in ["many_to_many", "m:m"]:
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index 02f4eb47ba914..f8d9eeb211a1e 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -438,8 +438,8 @@ def str_contains(arr, pat, case=True, flags=0, na=np.nan, regex=True):
if regex.groups > 0:
warnings.warn(
- "This pattern has match groups. To actually get the"
- " groups, use str.extract.",
+ "This pattern has match groups. To actually get the "
+ "groups, use str.extract.",
UserWarning,
stacklevel=3,
)
diff --git a/pandas/io/clipboards.py b/pandas/io/clipboards.py
index 518b940ec5da3..34e8e03d8771e 100644
--- a/pandas/io/clipboards.py
+++ b/pandas/io/clipboards.py
@@ -69,8 +69,8 @@ def read_clipboard(sep=r"\s+", **kwargs): # pragma: no cover
kwargs["engine"] = "python"
elif len(sep) > 1 and kwargs.get("engine") == "c":
warnings.warn(
- "read_clipboard with regex separator does not work"
- " properly with c engine"
+ "read_clipboard with regex separator does not work "
+ "properly with c engine"
)
return read_csv(StringIO(text), sep=sep, **kwargs)
diff --git a/pandas/io/excel/_util.py b/pandas/io/excel/_util.py
index 8cd4b2012cb42..a084be54dfa10 100644
--- a/pandas/io/excel/_util.py
+++ b/pandas/io/excel/_util.py
@@ -154,8 +154,8 @@ def _validate_freeze_panes(freeze_panes):
return True
raise ValueError(
- "freeze_panes must be of form (row, column)"
- " where row and column are integers"
+ "freeze_panes must be of form (row, column) "
+ "where row and column are integers"
)
# freeze_panes wasn't specified, return False so it won't be applied
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 3020ac421fc2f..5c4b7d103d271 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -579,8 +579,8 @@ def __init__(
else:
raise ValueError(
(
- "Formatters length({flen}) should match"
- " DataFrame number of columns({dlen})"
+ "Formatters length({flen}) should match "
+ "DataFrame number of columns({dlen})"
).format(flen=len(formatters), dlen=len(frame.columns))
)
self.na_rep = na_rep
diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index 0c9d2d54d3065..30d850faddd9f 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -1272,9 +1272,9 @@ def bar(
color = [color[0], color[0]]
elif len(color) > 2:
raise ValueError(
- "`color` must be string or a list-like"
- " of length 2: [`color_neg`, `color_pos`]"
- " (eg: color=['#d65f5f', '#5fba7d'])"
+ "`color` must be string or a list-like "
+ "of length 2: [`color_neg`, `color_pos`] "
+ "(eg: color=['#d65f5f', '#5fba7d'])"
)
subset = _maybe_numeric_slice(self.data, subset)
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index ee4932b4f9194..21e1ef98fc55c 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -612,9 +612,9 @@ def parser_f(
if delim_whitespace and delimiter != default_sep:
raise ValueError(
- "Specified a delimiter with both sep and"
- " delim_whitespace=True; you can only"
- " specify one."
+ "Specified a delimiter with both sep and "
+ "delim_whitespace=True; you can only "
+ "specify one."
)
if engine is not None:
@@ -956,8 +956,8 @@ def _clean_options(self, options, engine):
if sep is None and not delim_whitespace:
if engine == "c":
fallback_reason = (
- "the 'c' engine does not support"
- " sep=None with delim_whitespace=False"
+ "the 'c' engine does not support "
+ "sep=None with delim_whitespace=False"
)
engine = "python"
elif sep is not None and len(sep) > 1:
@@ -1120,9 +1120,9 @@ def _make_engine(self, engine="c"):
klass = FixedWidthFieldParser
else:
raise ValueError(
- f"Unknown engine: {engine} (valid options are"
- ' "c", "python", or'
- ' "python-fwf")'
+ f"Unknown engine: {engine} (valid options are "
+ '"c", "python", or '
+ '"python-fwf")'
)
self._engine = klass(self.f, **self.options)
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 3d2c2159bfbdd..d61d1cf7f0257 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -1215,9 +1215,8 @@ def append_to_multiple(
"""
if axes is not None:
raise TypeError(
- "axes is currently not accepted as a parameter to"
- " append_to_multiple; you can create the "
- "tables independently instead"
+ "axes is currently not accepted as a parameter to append_to_multiple; "
+ "you can create the tables independently instead"
)
if not isinstance(d, dict):
@@ -3548,9 +3547,8 @@ def create_index(self, columns=None, optlevel=None, kind: Optional[str] = None):
if not v.is_indexed:
if v.type.startswith("complex"):
raise TypeError(
- "Columns containing complex values can be stored "
- "but cannot"
- " be indexed when using table format. Either use "
+ "Columns containing complex values can be stored but "
+ "cannot be indexed when using table format. Either use "
"fixed format, set index=False, or do not include "
"the columns containing complex values to "
"data_columns when initializing the table."
diff --git a/pandas/plotting/_matplotlib/core.py b/pandas/plotting/_matplotlib/core.py
index 609da140a3f0b..2d68bb46a8ada 100644
--- a/pandas/plotting/_matplotlib/core.py
+++ b/pandas/plotting/_matplotlib/core.py
@@ -229,10 +229,9 @@ def _validate_color_args(self):
for char in s:
if char in matplotlib.colors.BASE_COLORS:
raise ValueError(
- "Cannot pass 'style' string with a color "
- "symbol and 'color' keyword argument. Please"
- " use one or the other or pass 'style' "
- "without a color symbol"
+ "Cannot pass 'style' string with a color symbol and "
+ "'color' keyword argument. Please use one or the other or "
+ "pass 'style' without a color symbol"
)
def _iter_data(self, data=None, keep_index=False, fillna=None):
diff --git a/pandas/tests/computation/test_eval.py b/pandas/tests/computation/test_eval.py
index 886c43f84045e..7f68abb92ba43 100644
--- a/pandas/tests/computation/test_eval.py
+++ b/pandas/tests/computation/test_eval.py
@@ -339,8 +339,8 @@ def check_floor_division(self, lhs, arith1, rhs):
self.check_equal(res, expected)
else:
msg = (
- r"unsupported operand type\(s\) for //: 'VariableNode' and"
- " 'VariableNode'"
+ r"unsupported operand type\(s\) for //: 'VariableNode' and "
+ "'VariableNode'"
)
with pytest.raises(TypeError, match=msg):
pd.eval(
diff --git a/pandas/tests/indexing/test_categorical.py b/pandas/tests/indexing/test_categorical.py
index 4a33dbd8fc7bd..8c8dece53277e 100644
--- a/pandas/tests/indexing/test_categorical.py
+++ b/pandas/tests/indexing/test_categorical.py
@@ -74,8 +74,8 @@ def test_loc_scalar(self):
df.loc["d"] = 10
msg = (
- "cannot insert an item into a CategoricalIndex that is not"
- " already an existing category"
+ "cannot insert an item into a CategoricalIndex that is not "
+ "already an existing category"
)
with pytest.raises(TypeError, match=msg):
df.loc["d", "A"] = 10
@@ -365,8 +365,9 @@ def test_loc_listlike(self):
# not all labels in the categories
with pytest.raises(
KeyError,
- match="'a list-indexer must only include values that are in the"
- " categories'",
+ match=(
+ "'a list-indexer must only include values that are in the categories'"
+ ),
):
self.df2.loc[["a", "d"]]
diff --git a/pandas/tests/indexing/test_floats.py b/pandas/tests/indexing/test_floats.py
index 75bf23b39a935..2cc8232566aa9 100644
--- a/pandas/tests/indexing/test_floats.py
+++ b/pandas/tests/indexing/test_floats.py
@@ -90,11 +90,11 @@ def test_scalar_non_numeric(self):
else:
error = TypeError
msg = (
- r"cannot do (label|index|positional) indexing"
- r" on {klass} with these indexers \[3\.0\] of"
- r" {kind}|"
- "Cannot index by location index with a"
- " non-integer key".format(klass=type(i), kind=str(float))
+ r"cannot do (label|index|positional) indexing "
+ r"on {klass} with these indexers \[3\.0\] of "
+ r"{kind}|"
+ "Cannot index by location index with a "
+ "non-integer key".format(klass=type(i), kind=str(float))
)
with pytest.raises(error, match=msg):
idxr(s)[3.0]
@@ -111,9 +111,9 @@ def test_scalar_non_numeric(self):
else:
error = TypeError
msg = (
- r"cannot do (label|index) indexing"
- r" on {klass} with these indexers \[3\.0\] of"
- r" {kind}".format(klass=type(i), kind=str(float))
+ r"cannot do (label|index) indexing "
+ r"on {klass} with these indexers \[3\.0\] of "
+ r"{kind}".format(klass=type(i), kind=str(float))
)
with pytest.raises(error, match=msg):
s.loc[3.0]
@@ -344,9 +344,9 @@ def test_slice_non_numeric(self):
for l in [slice(3.0, 4), slice(3, 4.0), slice(3.0, 4.0)]:
msg = (
- "cannot do slice indexing"
- r" on {klass} with these indexers \[(3|4)\.0\] of"
- " {kind}".format(klass=type(index), kind=str(float))
+ "cannot do slice indexing "
+ r"on {klass} with these indexers \[(3|4)\.0\] of "
+ "{kind}".format(klass=type(index), kind=str(float))
)
with pytest.raises(TypeError, match=msg):
s.iloc[l]
@@ -354,10 +354,10 @@ def test_slice_non_numeric(self):
for idxr in [lambda x: x.loc, lambda x: x.iloc, lambda x: x]:
msg = (
- "cannot do slice indexing"
- r" on {klass} with these indexers"
- r" \[(3|4)(\.0)?\]"
- r" of ({kind_float}|{kind_int})".format(
+ "cannot do slice indexing "
+ r"on {klass} with these indexers "
+ r"\[(3|4)(\.0)?\] "
+ r"of ({kind_float}|{kind_int})".format(
klass=type(index),
kind_float=str(float),
kind_int=str(int),
@@ -370,9 +370,9 @@ def test_slice_non_numeric(self):
for l in [slice(3.0, 4), slice(3, 4.0), slice(3.0, 4.0)]:
msg = (
- "cannot do slice indexing"
- r" on {klass} with these indexers \[(3|4)\.0\] of"
- " {kind}".format(klass=type(index), kind=str(float))
+ "cannot do slice indexing "
+ r"on {klass} with these indexers \[(3|4)\.0\] of "
+ "{kind}".format(klass=type(index), kind=str(float))
)
with pytest.raises(TypeError, match=msg):
s.iloc[l] = 0
@@ -424,9 +424,9 @@ def test_slice_integer(self):
# positional indexing
msg = (
- "cannot do slice indexing"
- r" on {klass} with these indexers \[(3|4)\.0\] of"
- " {kind}".format(klass=type(index), kind=str(float))
+ "cannot do slice indexing "
+ r"on {klass} with these indexers \[(3|4)\.0\] of "
+ "{kind}".format(klass=type(index), kind=str(float))
)
with pytest.raises(TypeError, match=msg):
s[l]
@@ -448,9 +448,9 @@ def test_slice_integer(self):
# positional indexing
msg = (
- "cannot do slice indexing"
- r" on {klass} with these indexers \[-6\.0\] of"
- " {kind}".format(klass=type(index), kind=str(float))
+ "cannot do slice indexing "
+ r"on {klass} with these indexers \[-6\.0\] of "
+ "{kind}".format(klass=type(index), kind=str(float))
)
with pytest.raises(TypeError, match=msg):
s[slice(-6.0, 6.0)]
@@ -474,9 +474,9 @@ def test_slice_integer(self):
# positional indexing
msg = (
- "cannot do slice indexing"
- r" on {klass} with these indexers \[(2|3)\.5\] of"
- " {kind}".format(klass=type(index), kind=str(float))
+ "cannot do slice indexing "
+ r"on {klass} with these indexers \[(2|3)\.5\] of "
+ "{kind}".format(klass=type(index), kind=str(float))
)
with pytest.raises(TypeError, match=msg):
s[l]
@@ -492,9 +492,9 @@ def test_slice_integer(self):
# positional indexing
msg = (
- "cannot do slice indexing"
- r" on {klass} with these indexers \[(3|4)\.0\] of"
- " {kind}".format(klass=type(index), kind=str(float))
+ "cannot do slice indexing "
+ r"on {klass} with these indexers \[(3|4)\.0\] of "
+ "{kind}".format(klass=type(index), kind=str(float))
)
with pytest.raises(TypeError, match=msg):
s[l] = 0
@@ -515,9 +515,9 @@ def test_integer_positional_indexing(self):
klass = RangeIndex
msg = (
- "cannot do slice indexing"
- r" on {klass} with these indexers \[(2|4)\.0\] of"
- " {kind}".format(klass=str(klass), kind=str(float))
+ "cannot do slice indexing "
+ r"on {klass} with these indexers \[(2|4)\.0\] of "
+ "{kind}".format(klass=str(klass), kind=str(float))
)
with pytest.raises(TypeError, match=msg):
idxr(s)[l]
@@ -540,9 +540,9 @@ def f(idxr):
# positional indexing
msg = (
- "cannot do slice indexing"
- r" on {klass} with these indexers \[(0|1)\.0\] of"
- " {kind}".format(klass=type(index), kind=str(float))
+ "cannot do slice indexing "
+ r"on {klass} with these indexers \[(0|1)\.0\] of "
+ "{kind}".format(klass=type(index), kind=str(float))
)
with pytest.raises(TypeError, match=msg):
s[l]
@@ -555,9 +555,9 @@ def f(idxr):
# positional indexing
msg = (
- "cannot do slice indexing"
- r" on {klass} with these indexers \[-10\.0\] of"
- " {kind}".format(klass=type(index), kind=str(float))
+ "cannot do slice indexing "
+ r"on {klass} with these indexers \[-10\.0\] of "
+ "{kind}".format(klass=type(index), kind=str(float))
)
with pytest.raises(TypeError, match=msg):
s[slice(-10.0, 10.0)]
@@ -574,9 +574,9 @@ def f(idxr):
# positional indexing
msg = (
- "cannot do slice indexing"
- r" on {klass} with these indexers \[0\.5\] of"
- " {kind}".format(klass=type(index), kind=str(float))
+ "cannot do slice indexing "
+ r"on {klass} with these indexers \[0\.5\] of "
+ "{kind}".format(klass=type(index), kind=str(float))
)
with pytest.raises(TypeError, match=msg):
s[l]
@@ -591,9 +591,9 @@ def f(idxr):
# positional indexing
msg = (
- "cannot do slice indexing"
- r" on {klass} with these indexers \[(3|4)\.0\] of"
- " {kind}".format(klass=type(index), kind=str(float))
+ "cannot do slice indexing "
+ r"on {klass} with these indexers \[(3|4)\.0\] of "
+ "{kind}".format(klass=type(index), kind=str(float))
)
with pytest.raises(TypeError, match=msg):
s[l] = 0
diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py
index be921c813e2fa..ea4d8edd2f413 100644
--- a/pandas/tests/indexing/test_indexing.py
+++ b/pandas/tests/indexing/test_indexing.py
@@ -83,8 +83,8 @@ def test_getitem_ndarray_3d(self, index, obj, idxr, idxr_id):
msg = (
r"Buffer has wrong number of dimensions \(expected 1,"
r" got 3\)|"
- "The truth value of an array with more than one element is"
- " ambiguous|"
+ "The truth value of an array with more than one element is "
+ "ambiguous|"
"Cannot index with multidimensional key|"
r"Wrong number of dimensions. values.ndim != ndim \[3 != 1\]|"
"No matching signature found|" # TypeError
@@ -146,13 +146,13 @@ def test_setitem_ndarray_3d(self, index, obj, idxr, idxr_id):
nd3 = np.random.randint(5, size=(2, 2, 2))
msg = (
- r"Buffer has wrong number of dimensions \(expected 1,"
- r" got 3\)|"
- "The truth value of an array with more than one element is"
- " ambiguous|"
+ r"Buffer has wrong number of dimensions \(expected 1, "
+ r"got 3\)|"
+ "The truth value of an array with more than one element is "
+ "ambiguous|"
"Only 1-dimensional input arrays are supported|"
- "'pandas._libs.interval.IntervalTree' object has no attribute"
- " 'set_value'|" # AttributeError
+ "'pandas._libs.interval.IntervalTree' object has no attribute "
+ "'set_value'|" # AttributeError
"unhashable type: 'numpy.ndarray'|" # TypeError
"No matching signature found|" # TypeError
r"^\[\[\[" # pandas.core.indexing.IndexingError
diff --git a/pandas/tests/indexing/test_scalar.py b/pandas/tests/indexing/test_scalar.py
index 362a2c00e6775..a567fb9b8ccc7 100644
--- a/pandas/tests/indexing/test_scalar.py
+++ b/pandas/tests/indexing/test_scalar.py
@@ -132,8 +132,8 @@ def test_at_to_fail(self):
result = s.at["a"]
assert result == 1
msg = (
- "At based indexing on an non-integer index can only have"
- " non-integer indexers"
+ "At based indexing on an non-integer index can only have "
+ "non-integer indexers"
)
with pytest.raises(ValueError, match=msg):
s.at[0]
diff --git a/pandas/tests/resample/test_base.py b/pandas/tests/resample/test_base.py
index 476643bb3e497..f8a1810e66219 100644
--- a/pandas/tests/resample/test_base.py
+++ b/pandas/tests/resample/test_base.py
@@ -84,8 +84,8 @@ def test_raises_on_non_datetimelike_index():
# this is a non datetimelike index
xp = DataFrame()
msg = (
- "Only valid with DatetimeIndex, TimedeltaIndex or PeriodIndex,"
- " but got an instance of 'Index'"
+ "Only valid with DatetimeIndex, TimedeltaIndex or PeriodIndex, "
+ "but got an instance of 'Index'"
)
with pytest.raises(TypeError, match=msg):
xp.resample("A").mean()
diff --git a/pandas/tests/resample/test_period_index.py b/pandas/tests/resample/test_period_index.py
index 40226ab2fe9b0..955f8c7482937 100644
--- a/pandas/tests/resample/test_period_index.py
+++ b/pandas/tests/resample/test_period_index.py
@@ -82,9 +82,9 @@ def test_selection(self, index, freq, kind, kwargs):
index=pd.MultiIndex.from_arrays([rng, index], names=["v", "d"]),
)
msg = (
- "Resampling from level= or on= selection with a PeriodIndex is"
- r" not currently supported, use \.set_index\(\.\.\.\) to"
- " explicitly set index"
+ "Resampling from level= or on= selection with a PeriodIndex is "
+ r"not currently supported, use \.set_index\(\.\.\.\) to "
+ "explicitly set index"
)
with pytest.raises(NotImplementedError, match=msg):
df.resample(freq, kind=kind, **kwargs)
@@ -130,8 +130,8 @@ def test_not_subperiod(self, simple_period_range_series, rule, expected_error_ms
# These are incompatible period rules for resampling
ts = simple_period_range_series("1/1/1990", "6/30/1995", freq="w-wed")
msg = (
- "Frequency <Week: weekday=2> cannot be resampled to {}, as they"
- " are not sub or super periods"
+ "Frequency <Week: weekday=2> cannot be resampled to {}, as they "
+ "are not sub or super periods"
).format(expected_error_msg)
with pytest.raises(IncompatibleFrequency, match=msg):
ts.resample(rule).mean()
@@ -236,8 +236,8 @@ def test_resample_same_freq(self, resample_method):
def test_resample_incompat_freq(self):
msg = (
- "Frequency <MonthEnd> cannot be resampled to <Week: weekday=6>,"
- " as they are not sub or super periods"
+ "Frequency <MonthEnd> cannot be resampled to <Week: weekday=6>, "
+ "as they are not sub or super periods"
)
with pytest.raises(IncompatibleFrequency, match=msg):
Series(
diff --git a/pandas/tests/resample/test_resample_api.py b/pandas/tests/resample/test_resample_api.py
index bc2d6df3755d5..170201b4f8e5c 100644
--- a/pandas/tests/resample/test_resample_api.py
+++ b/pandas/tests/resample/test_resample_api.py
@@ -519,8 +519,8 @@ def test_selection_api_validation():
# non DatetimeIndex
msg = (
- "Only valid with DatetimeIndex, TimedeltaIndex or PeriodIndex,"
- " but got an instance of 'Int64Index'"
+ "Only valid with DatetimeIndex, TimedeltaIndex or PeriodIndex, "
+ "but got an instance of 'Int64Index'"
)
with pytest.raises(TypeError, match=msg):
df.resample("2D", level="v")
@@ -539,8 +539,8 @@ def test_selection_api_validation():
# upsampling not allowed
msg = (
- "Upsampling from level= or on= selection is not supported, use"
- r" \.set_index\(\.\.\.\) to explicitly set index to datetime-like"
+ "Upsampling from level= or on= selection is not supported, use "
+ r"\.set_index\(\.\.\.\) to explicitly set index to datetime-like"
)
with pytest.raises(ValueError, match=msg):
df.resample("2D", level="d").asfreq()
diff --git a/pandas/tests/tseries/offsets/test_offsets.py b/pandas/tests/tseries/offsets/test_offsets.py
index e98699f6b4ec9..c8b322b3c832a 100644
--- a/pandas/tests/tseries/offsets/test_offsets.py
+++ b/pandas/tests/tseries/offsets/test_offsets.py
@@ -2792,8 +2792,8 @@ def test_apply_large_n(self):
def test_apply_corner(self):
msg = (
- "Only know how to combine trading day with datetime, datetime64"
- " or timedelta"
+ "Only know how to combine trading day "
+ "with datetime, datetime64 or timedelta"
)
with pytest.raises(ApplyTypeError, match=msg):
CDay().apply(BMonthEnd())
diff --git a/pandas/tests/window/moments/test_moments_rolling.py b/pandas/tests/window/moments/test_moments_rolling.py
index 9495d2bc7d51d..9acb4ffcb40b8 100644
--- a/pandas/tests/window/moments/test_moments_rolling.py
+++ b/pandas/tests/window/moments/test_moments_rolling.py
@@ -1128,8 +1128,8 @@ def test_flex_binary_moment(self):
# GH3155
# don't blow the stack
msg = (
- "arguments to moment function must be of type"
- " np.ndarray/Series/DataFrame"
+ "arguments to moment function must be of type "
+ "np.ndarray/Series/DataFrame"
)
with pytest.raises(TypeError, match=msg):
_flex_binary_moment(5, 6, None)
| - [ ] closes #xxxx
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/30707 | 2020-01-05T11:03:40Z | 2020-01-06T17:24:23Z | 2020-01-06T17:24:23Z | 2020-01-06T23:42:16Z |
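The diff above is a pure formatting change: because Python joins adjacent string literals at compile time, moving the separating space from the head of one fragment to the tail of the previous fragment leaves every error message byte-identical, so the `pytest.raises(match=...)` patterns still match. A minimal standalone illustration (the message text here is one of the fragments from the diff):

```python
# Adjacent string literals are concatenated at compile time, so whether
# the joining space trails the first fragment or leads the second one,
# the resulting string is identical.
before = (
    "cannot do slice indexing"
    " on Int64Index with these indexers"
)
after = (
    "cannot do slice indexing "
    "on Int64Index with these indexers"
)
assert before == after
```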
CLN: replacing '.format' with f-strings in various files | diff --git a/pandas/tests/util/test_validate_args.py b/pandas/tests/util/test_validate_args.py
index dfbd8a3f9af19..746d859b3322e 100644
--- a/pandas/tests/util/test_validate_args.py
+++ b/pandas/tests/util/test_validate_args.py
@@ -20,10 +20,8 @@ def test_bad_arg_length_max_value_single():
max_length = len(compat_args) + min_fname_arg_count
actual_length = len(args) + min_fname_arg_count
msg = (
- r"{fname}\(\) takes at most {max_length} "
- r"argument \({actual_length} given\)".format(
- fname=_fname, max_length=max_length, actual_length=actual_length
- )
+ fr"{_fname}\(\) takes at most {max_length} "
+ fr"argument \({actual_length} given\)"
)
with pytest.raises(TypeError, match=msg):
@@ -38,10 +36,8 @@ def test_bad_arg_length_max_value_multiple():
max_length = len(compat_args) + min_fname_arg_count
actual_length = len(args) + min_fname_arg_count
msg = (
- r"{fname}\(\) takes at most {max_length} "
- r"arguments \({actual_length} given\)".format(
- fname=_fname, max_length=max_length, actual_length=actual_length
- )
+ fr"{_fname}\(\) takes at most {max_length} "
+ fr"arguments \({actual_length} given\)"
)
with pytest.raises(TypeError, match=msg):
@@ -52,8 +48,8 @@ def test_bad_arg_length_max_value_multiple():
def test_not_all_defaults(i):
bad_arg = "foo"
msg = (
- "the '{arg}' parameter is not supported "
- r"in the pandas implementation of {func}\(\)".format(arg=bad_arg, func=_fname)
+ f"the '{bad_arg}' parameter is not supported "
+ fr"in the pandas implementation of {_fname}\(\)"
)
compat_args = {"foo": 2, "bar": -1, "baz": 3}
diff --git a/pandas/tests/util/test_validate_kwargs.py b/pandas/tests/util/test_validate_kwargs.py
index a26d96fcda231..a7b6d8f98cc60 100644
--- a/pandas/tests/util/test_validate_kwargs.py
+++ b/pandas/tests/util/test_validate_kwargs.py
@@ -22,8 +22,8 @@ def test_bad_kwarg():
def test_not_all_none(i):
bad_arg = "foo"
msg = (
- r"the '{arg}' parameter is not supported "
- r"in the pandas implementation of {func}\(\)".format(arg=bad_arg, func=_fname)
+ fr"the '{bad_arg}' parameter is not supported "
+ fr"in the pandas implementation of {_fname}\(\)"
)
compat_args = {"foo": 1, "bar": "s", "baz": None}
diff --git a/pandas/tests/window/test_window.py b/pandas/tests/window/test_window.py
index 39ab3ffd9319e..cc29ab4f2cd62 100644
--- a/pandas/tests/window/test_window.py
+++ b/pandas/tests/window/test_window.py
@@ -65,7 +65,7 @@ def test_agg_function_support(self, arg):
df = pd.DataFrame({"A": np.arange(5)})
roll = df.rolling(2, win_type="triang")
- msg = "'{arg}' is not a valid function for 'Window' object".format(arg=arg)
+ msg = f"'{arg}' is not a valid function for 'Window' object"
with pytest.raises(AttributeError, match=msg):
roll.agg(arg)
diff --git a/pandas/tseries/holiday.py b/pandas/tseries/holiday.py
index 2e5477ea00e39..62d7c26b590cc 100644
--- a/pandas/tseries/holiday.py
+++ b/pandas/tseries/holiday.py
@@ -186,16 +186,16 @@ class from pandas.tseries.offsets
def __repr__(self) -> str:
info = ""
if self.year is not None:
- info += "year={year}, ".format(year=self.year)
- info += "month={mon}, day={day}, ".format(mon=self.month, day=self.day)
+ info += f"year={self.year}, "
+ info += f"month={self.month}, day={self.day}, "
if self.offset is not None:
- info += "offset={offset}".format(offset=self.offset)
+ info += f"offset={self.offset}"
if self.observance is not None:
- info += "observance={obs}".format(obs=self.observance)
+ info += f"observance={self.observance}"
- repr = "Holiday: {name} ({info})".format(name=self.name, info=info)
+ repr = f"Holiday: {self.name} ({info})"
return repr
def dates(self, start_date, end_date, return_name=False):
@@ -394,8 +394,7 @@ def holidays(self, start=None, end=None, return_name=False):
"""
if self.rules is None:
raise Exception(
- "Holiday Calendar {name} does not have any "
- "rules specified".format(name=self.name)
+ f"Holiday Calendar {self.name} does not have any rules specified"
)
if start is None:
diff --git a/scripts/download_wheels.py b/scripts/download_wheels.py
index 4ca1354321134..3d36eed2d888a 100644
--- a/scripts/download_wheels.py
+++ b/scripts/download_wheels.py
@@ -26,7 +26,7 @@ def fetch(version):
files = [
x
for x in root.xpath("//a/text()")
- if x.startswith("pandas-{}".format(version)) and not dest.joinpath(x).exists()
+ if x.startswith(f"pandas-{version}") and not dest.joinpath(x).exists()
]
N = len(files)
@@ -35,9 +35,7 @@ def fetch(version):
out = str(dest.joinpath(filename))
link = urllib.request.urljoin(base, filename)
urllib.request.urlretrieve(link, out)
- print(
- "Downloaded {link} to {out} [{i}/{N}]".format(link=link, out=out, i=i, N=N)
- )
+ print(f"Downloaded {link} to {out} [{i}/{N}]")
def main(args=None):
diff --git a/scripts/generate_pip_deps_from_conda.py b/scripts/generate_pip_deps_from_conda.py
index 1d2c33aeee384..3b14d61ce4254 100755
--- a/scripts/generate_pip_deps_from_conda.py
+++ b/scripts/generate_pip_deps_from_conda.py
@@ -127,13 +127,13 @@ def main(conda_fname, pip_fname, compare=False):
)
if res:
msg = (
- "`requirements-dev.txt` has to be generated with `{}` after "
- "`environment.yml` is modified.\n".format(sys.argv[0])
+ f"`requirements-dev.txt` has to be generated with `{sys.argv[0]}` after "
+ "`environment.yml` is modified.\n"
)
if args.azure:
msg = (
"##vso[task.logissue type=error;"
- "sourcepath=requirements-dev.txt]{}".format(msg)
+ f"sourcepath=requirements-dev.txt]{msg}"
)
sys.stderr.write(msg)
sys.exit(res)
diff --git a/scripts/tests/test_validate_docstrings.py b/scripts/tests/test_validate_docstrings.py
index 674e3b72884fa..a1bccb1dd1629 100644
--- a/scripts/tests/test_validate_docstrings.py
+++ b/scripts/tests/test_validate_docstrings.py
@@ -1300,7 +1300,7 @@ def test_resolves_class_name(self, name, expected_obj):
@pytest.mark.parametrize("invalid_name", ["panda", "panda.DataFrame"])
def test_raises_for_invalid_module_name(self, invalid_name):
- msg = 'No module can be imported from "{}"'.format(invalid_name)
+ msg = f'No module can be imported from "{invalid_name}"'
with pytest.raises(ImportError, match=msg):
validate_docstrings.Docstring(invalid_name)
@@ -1310,7 +1310,7 @@ def test_raises_for_invalid_module_name(self, invalid_name):
def test_raises_for_invalid_attribute_name(self, invalid_name):
name_components = invalid_name.split(".")
obj_name, invalid_attr_name = name_components[-2], name_components[-1]
- msg = "'{}' has no attribute '{}'".format(obj_name, invalid_attr_name)
+ msg = f"'{obj_name}' has no attribute '{invalid_attr_name}'"
with pytest.raises(AttributeError, match=msg):
validate_docstrings.Docstring(invalid_name)
diff --git a/scripts/validate_docstrings.py b/scripts/validate_docstrings.py
index 872182ee8e20f..bcf3fd5d276f5 100755
--- a/scripts/validate_docstrings.py
+++ b/scripts/validate_docstrings.py
@@ -357,7 +357,7 @@ def source_file_def_line(self):
@property
def github_url(self):
url = "https://github.com/pandas-dev/pandas/blob/master/"
- url += "{}#L{}".format(self.source_file_name, self.source_file_def_line)
+ url += f"{self.source_file_name}#L{self.source_file_def_line}"
return url
@property
@@ -501,7 +501,7 @@ def parameter_desc(self, param):
desc = self.doc_parameters[param][1]
# Find and strip out any sphinx directives
for directive in DIRECTIVES:
- full_directive = ".. {}".format(directive)
+ full_directive = f".. {directive}"
if full_directive in desc:
# Only retain any description before the directive
desc = desc[: desc.index(full_directive)]
@@ -825,14 +825,12 @@ def get_validation_data(doc):
"EX03",
error_code=err.error_code,
error_message=err.message,
- times_happening=" ({} times)".format(err.count)
- if err.count > 1
- else "",
+ times_happening=f" ({err.count} times)" if err.count > 1 else "",
)
)
examples_source_code = "".join(doc.examples_source_code)
for wrong_import in ("numpy", "pandas"):
- if "import {}".format(wrong_import) in examples_source_code:
+ if f"import {wrong_import}" in examples_source_code:
errs.append(error("EX04", imported_library=wrong_import))
return errs, wrns, examples_errs
@@ -920,7 +918,7 @@ def validate_all(prefix, ignore_deprecated=False):
api_item_names = set(list(zip(*api_items))[0])
for class_ in (pandas.Series, pandas.DataFrame):
for member in inspect.getmembers(class_):
- func_name = "pandas.{}.{}".format(class_.__name__, member[0])
+ func_name = f"pandas.{class_.__name__}.{member[0]}"
if not member[0].startswith("_") and func_name not in api_item_names:
if prefix and not func_name.startswith(prefix):
continue
@@ -938,13 +936,9 @@ def header(title, width=80, char="#"):
full_line = char * width
side_len = (width - len(title) - 2) // 2
adj = "" if len(title) % 2 == 0 else " "
- title_line = "{side} {title}{adj} {side}".format(
- side=char * side_len, title=title, adj=adj
- )
+ title_line = f"{char * side_len} {title}{adj} {char * side_len}"
- return "\n{full_line}\n{title_line}\n{full_line}\n\n".format(
- full_line=full_line, title_line=title_line
- )
+ return f"\n{full_line}\n{title_line}\n{full_line}\n\n"
exit_status = 0
if func_name is None:
@@ -986,24 +980,24 @@ def header(title, width=80, char="#"):
else:
result = validate_one(func_name)
- sys.stderr.write(header("Docstring ({})".format(func_name)))
- sys.stderr.write("{}\n".format(result["docstring"]))
+ sys.stderr.write(header(f"Docstring ({func_name})"))
+ sys.stderr.write(f"{result['docstring']}\n")
sys.stderr.write(header("Validation"))
if result["errors"]:
- sys.stderr.write("{} Errors found:\n".format(len(result["errors"])))
+ sys.stderr.write(f"{len(result['errors'])} Errors found:\n")
for err_code, err_desc in result["errors"]:
# Failing examples are printed at the end
if err_code == "EX02":
sys.stderr.write("\tExamples do not pass tests\n")
continue
- sys.stderr.write("\t{}\n".format(err_desc))
+ sys.stderr.write(f"\t{err_desc}\n")
if result["warnings"]:
- sys.stderr.write("{} Warnings found:\n".format(len(result["warnings"])))
+ sys.stderr.write(f"{len(result['warnings'])} Warnings found:\n")
for wrn_code, wrn_desc in result["warnings"]:
- sys.stderr.write("\t{}\n".format(wrn_desc))
+ sys.stderr.write(f"\t{wrn_desc}\n")
if not result["errors"]:
- sys.stderr.write('Docstring for "{}" correct. :)\n'.format(func_name))
+ sys.stderr.write(f'Docstring for "{func_name}" correct. :)\n')
if result["examples_errors"]:
sys.stderr.write(header("Doctests"))
@@ -1027,7 +1021,7 @@ def header(title, width=80, char="#"):
choices=format_opts,
help="format of the output when validating "
"multiple docstrings (ignored when validating one)."
- "It can be {}".format(str(format_opts)[1:-1]),
+ f"It can be {str(format_opts)[1:-1]}",
)
argparser.add_argument(
"--prefix",
| - [x] contributes to #29547
- [ ] tests added / passed
- [X] passes `black pandas`
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/30706 | 2020-01-05T06:17:42Z | 2020-01-06T17:45:14Z | 2020-01-06T17:45:14Z | 2020-01-06T17:45:18Z |
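The `.format` → f-string rewrite above is mechanical and behavior-preserving, including for the raw-string regex patterns fed to `pytest.raises(match=...)`, where the `fr` prefix keeps the backslash escapes intact. A standalone sketch of the equivalence (the variable names mirror `test_bad_arg_length_max_value_single`; the values are illustrative):

```python
import re

_fname = "func"
max_length = 3
actual_length = 4

# Before: str.format on a raw-string regex pattern
old = r"{fname}\(\) takes at most {n} argument \({m} given\)".format(
    fname=_fname, n=max_length, m=actual_length
)

# After: an fr-string -- raw (escapes preserved) and formatted in place
new = fr"{_fname}\(\) takes at most {max_length} argument \({actual_length} given\)"

assert old == new
assert re.match(new, "func() takes at most 3 argument (4 given)")
```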
BUG: listlike comparisons for DTA and TDA | diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index dcdde4d7fb13a..cc54fb5e5af13 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -161,11 +161,9 @@ def wrapper(self, other):
raise ValueError("Lengths must match")
else:
if isinstance(other, list):
- try:
- other = type(self)._from_sequence(other)
- except ValueError:
- other = np.array(other, dtype=np.object_)
- elif not isinstance(other, (np.ndarray, DatetimeArray)):
+ other = np.array(other)
+
+ if not isinstance(other, (np.ndarray, cls)):
# Following Timestamp convention, __eq__ is all-False
# and __ne__ is all True, others raise TypeError.
return invalid_comparison(self, other, op)
@@ -179,20 +177,14 @@ def wrapper(self, other):
op, self.astype(object), other
)
o_mask = isna(other)
+
elif not (is_datetime64_dtype(other) or is_datetime64tz_dtype(other)):
# e.g. is_timedelta64_dtype(other)
return invalid_comparison(self, other, op)
+
else:
self._assert_tzawareness_compat(other)
-
- if (
- is_datetime64_dtype(other)
- and not is_datetime64_ns_dtype(other)
- or not hasattr(other, "asi8")
- ):
- # e.g. other.dtype == 'datetime64[s]'
- # or an object-dtype ndarray
- other = type(self)._from_sequence(other)
+ other = type(self)._from_sequence(other)
result = op(self.view("i8"), other.view("i8"))
o_mask = other._isnan
diff --git a/pandas/core/arrays/timedeltas.py b/pandas/core/arrays/timedeltas.py
index 098ad268784ed..1874517f0f2e4 100644
--- a/pandas/core/arrays/timedeltas.py
+++ b/pandas/core/arrays/timedeltas.py
@@ -38,7 +38,7 @@
)
from pandas.core.dtypes.missing import isna
-from pandas.core import nanops
+from pandas.core import nanops, ops
from pandas.core.algorithms import checked_add_with_arr
import pandas.core.common as com
from pandas.core.ops.common import unpack_zerodim_and_defer
@@ -103,15 +103,29 @@ def wrapper(self, other):
raise ValueError("Lengths must match")
else:
- try:
- other = type(self)._from_sequence(other)._data
- except (ValueError, TypeError):
+ if isinstance(other, list):
+ other = np.array(other)
+
+ if not isinstance(other, (np.ndarray, cls)):
+ return invalid_comparison(self, other, op)
+
+ if is_object_dtype(other):
+ with np.errstate(all="ignore"):
+ result = ops.comp_method_OBJECT_ARRAY(
+ op, self.astype(object), other
+ )
+ o_mask = isna(other)
+
+ elif not is_timedelta64_dtype(other):
+ # e.g. other is datetimearray
return invalid_comparison(self, other, op)
- result = op(self.view("i8"), other.view("i8"))
- result = com.values_from_object(result)
+ else:
+ other = type(self)._from_sequence(other)
+
+ result = op(self.view("i8"), other.view("i8"))
+ o_mask = other._isnan
- o_mask = np.array(isna(other))
if o_mask.any():
result[o_mask] = nat_result
diff --git a/pandas/tests/arithmetic/common.py b/pandas/tests/arithmetic/common.py
index 7c3ceb3dba2b6..83d19b8a20ac3 100644
--- a/pandas/tests/arithmetic/common.py
+++ b/pandas/tests/arithmetic/common.py
@@ -70,7 +70,7 @@ def assert_invalid_comparison(left, right, box):
result = right != left
tm.assert_equal(result, ~expected)
- msg = "Invalid comparison between"
+ msg = "Invalid comparison between|Cannot compare type|not supported between"
with pytest.raises(TypeError, match=msg):
left < right
with pytest.raises(TypeError, match=msg):
diff --git a/pandas/tests/arithmetic/test_datetime64.py b/pandas/tests/arithmetic/test_datetime64.py
index 20ea8d31ebbe2..1dfd95551f68d 100644
--- a/pandas/tests/arithmetic/test_datetime64.py
+++ b/pandas/tests/arithmetic/test_datetime64.py
@@ -85,6 +85,52 @@ def test_dt64arr_cmp_scalar_invalid(self, other, tz_naive_fixture, box_with_arra
dtarr = tm.box_expected(rng, box_with_array)
assert_invalid_comparison(dtarr, other, box_with_array)
+ @pytest.mark.parametrize(
+ "other",
+ [
+ list(range(10)),
+ np.arange(10),
+ np.arange(10).astype(np.float32),
+ np.arange(10).astype(object),
+ pd.timedelta_range("1ns", periods=10).array,
+ np.array(pd.timedelta_range("1ns", periods=10)),
+ list(pd.timedelta_range("1ns", periods=10)),
+ pd.timedelta_range("1 Day", periods=10).astype(object),
+ pd.period_range("1971-01-01", freq="D", periods=10).array,
+ pd.period_range("1971-01-01", freq="D", periods=10).astype(object),
+ ],
+ )
+ def test_dt64arr_cmp_arraylike_invalid(self, other, tz_naive_fixture):
+ # We don't parametrize this over box_with_array because listlike
+ # other plays poorly with assert_invalid_comparison reversed checks
+ tz = tz_naive_fixture
+
+ dta = date_range("1970-01-01", freq="ns", periods=10, tz=tz)._data
+ assert_invalid_comparison(dta, other, tm.to_array)
+
+ def test_dt64arr_cmp_mixed_invalid(self, tz_naive_fixture):
+ tz = tz_naive_fixture
+
+ dta = date_range("1970-01-01", freq="h", periods=5, tz=tz)._data
+
+ other = np.array([0, 1, 2, dta[3], pd.Timedelta(days=1)])
+ result = dta == other
+ expected = np.array([False, False, False, True, False])
+ tm.assert_numpy_array_equal(result, expected)
+
+ result = dta != other
+ tm.assert_numpy_array_equal(result, ~expected)
+
+ msg = "Invalid comparison between|Cannot compare type|not supported between"
+ with pytest.raises(TypeError, match=msg):
+ dta < other
+ with pytest.raises(TypeError, match=msg):
+ dta > other
+ with pytest.raises(TypeError, match=msg):
+ dta <= other
+ with pytest.raises(TypeError, match=msg):
+ dta >= other
+
def test_dt64arr_nat_comparison(self, tz_naive_fixture, box_with_array):
# GH#22242, GH#22163 DataFrame considered NaT == ts incorrectly
tz = tz_naive_fixture
diff --git a/pandas/tests/arithmetic/test_timedelta64.py b/pandas/tests/arithmetic/test_timedelta64.py
index 9b0d3712e9bea..158da37aa7239 100644
--- a/pandas/tests/arithmetic/test_timedelta64.py
+++ b/pandas/tests/arithmetic/test_timedelta64.py
@@ -76,6 +76,49 @@ def test_td64_comparisons_invalid(self, box_with_array, invalid):
assert_invalid_comparison(obj, invalid, box)
+ @pytest.mark.parametrize(
+ "other",
+ [
+ list(range(10)),
+ np.arange(10),
+ np.arange(10).astype(np.float32),
+ np.arange(10).astype(object),
+ pd.date_range("1970-01-01", periods=10, tz="UTC").array,
+ np.array(pd.date_range("1970-01-01", periods=10)),
+ list(pd.date_range("1970-01-01", periods=10)),
+ pd.date_range("1970-01-01", periods=10).astype(object),
+ pd.period_range("1971-01-01", freq="D", periods=10).array,
+ pd.period_range("1971-01-01", freq="D", periods=10).astype(object),
+ ],
+ )
+ def test_td64arr_cmp_arraylike_invalid(self, other):
+ # We don't parametrize this over box_with_array because listlike
+ # other plays poorly with assert_invalid_comparison reversed checks
+
+ rng = timedelta_range("1 days", periods=10)._data
+ assert_invalid_comparison(rng, other, tm.to_array)
+
+ def test_td64arr_cmp_mixed_invalid(self):
+ rng = timedelta_range("1 days", periods=5)._data
+
+ other = np.array([0, 1, 2, rng[3], pd.Timestamp.now()])
+ result = rng == other
+ expected = np.array([False, False, False, True, False])
+ tm.assert_numpy_array_equal(result, expected)
+
+ result = rng != other
+ tm.assert_numpy_array_equal(result, ~expected)
+
+ msg = "Invalid comparison between|Cannot compare type|not supported between"
+ with pytest.raises(TypeError, match=msg):
+ rng < other
+ with pytest.raises(TypeError, match=msg):
+ rng > other
+ with pytest.raises(TypeError, match=msg):
+ rng <= other
+ with pytest.raises(TypeError, match=msg):
+ rng >= other
+
class TestTimedelta64ArrayComparisons:
# TODO: All of these need to be parametrized over box
| These will now match PeriodArray, and we can move to share the methods. | https://api.github.com/repos/pandas-dev/pandas/pulls/30705 | 2020-01-05T01:43:31Z | 2020-01-05T16:18:07Z | 2020-01-05T16:18:07Z | 2020-01-05T18:09:02Z |
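The rewritten comparison wrappers in this PR route incompatible operands through the `invalid_comparison` helper, whose convention (following `Timestamp`) is: `==` returns all-False, `!=` returns all-True, and ordering comparisons raise `TypeError`. Below is a simplified standalone sketch of that convention only, not the pandas implementation itself:

```python
import operator
import numpy as np

def invalid_comparison(left, right, op):
    # Simplified version of the convention the diff relies on:
    # equality against an incomparable type is all-False, inequality
    # is all-True, and ordering comparisons are a TypeError.
    if op is operator.eq:
        return np.zeros(len(left), dtype=bool)
    elif op is operator.ne:
        return np.ones(len(left), dtype=bool)
    raise TypeError(
        f"Invalid comparison between {type(left).__name__} and {type(right).__name__}"
    )

left = np.arange(3)
right = ["a", "b", "c"]
assert not invalid_comparison(left, right, operator.eq).any()
assert invalid_comparison(left, right, operator.ne).all()
```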
REF: Share _fast_union between DTI/TDI | diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 7001b32c66204..365cfb1585f00 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -23,6 +23,7 @@
is_period_dtype,
is_scalar,
)
+from pandas.core.dtypes.concat import concat_compat
from pandas.core.dtypes.generic import ABCIndex, ABCIndexClass, ABCSeries
from pandas.core import algorithms
@@ -771,6 +772,40 @@ def _can_fast_union(self, other) -> bool:
# this will raise
return False
+ def _fast_union(self, other, sort=None):
+ if len(other) == 0:
+ return self.view(type(self))
+
+ if len(self) == 0:
+ return other.view(type(self))
+
+ # to make our life easier, "sort" the two ranges
+ if self[0] <= other[0]:
+ left, right = self, other
+ elif sort is False:
+ # TDIs are not in the "correct" order and we don't want
+ # to sort but want to remove overlaps
+ left, right = self, other
+ left_start = left[0]
+ loc = right.searchsorted(left_start, side="left")
+ right_chunk = right.values[:loc]
+ dates = concat_compat((left.values, right_chunk))
+ return self._shallow_copy(dates)
+ else:
+ left, right = other, self
+
+ left_end = left[-1]
+ right_end = right[-1]
+
+ # concatenate
+ if left_end < right_end:
+ loc = right.searchsorted(left_end, side="right")
+ right_chunk = right.values[loc:]
+ dates = concat_compat((left.values, right_chunk))
+ return self._shallow_copy(dates)
+ else:
+ return left
+
# --------------------------------------------------------------------
# Join Methods
_join_precedence = 10
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index 9c259dfd11793..f5f897c474bca 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -10,7 +10,6 @@
from pandas.util._decorators import Appender, Substitution, cache_readonly
from pandas.core.dtypes.common import _NS_DTYPE, is_float, is_integer, is_scalar
-from pandas.core.dtypes.concat import concat_compat
from pandas.core.dtypes.dtypes import DatetimeTZDtype
from pandas.core.dtypes.missing import isna
@@ -431,45 +430,6 @@ def union_many(self, others):
this._data._dtype = dtype
return this
- def _fast_union(self, other, sort=None):
- if len(other) == 0:
- return self.view(type(self))
-
- if len(self) == 0:
- return other.view(type(self))
-
- # Both DTIs are monotonic. Check if they are already
- # in the "correct" order
- if self[0] <= other[0]:
- left, right = self, other
- # DTIs are not in the "correct" order and we don't want
- # to sort but want to remove overlaps
- elif sort is False:
- left, right = self, other
- left_start = left[0]
- loc = right.searchsorted(left_start, side="left")
- right_chunk = right.values[:loc]
- dates = concat_compat((left.values, right_chunk))
- return self._shallow_copy(dates)
- # DTIs are not in the "correct" order and we want
- # to sort
- else:
- left, right = other, self
-
- left_end = left[-1]
- right_end = right[-1]
-
- # TODO: consider re-implementing freq._should_cache for fastpath
-
- # concatenate dates
- if left_end < right_end:
- loc = right.searchsorted(left_end, side="right")
- right_chunk = right.values[loc:]
- dates = concat_compat((left.values, right_chunk))
- return self._shallow_copy(dates)
- else:
- return left
-
def _wrap_setop_result(self, other, result):
name = get_op_result_name(self, other)
return self._shallow_copy(result, name=name, freq=None, tz=self.tz)
diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index 6a68cb102d0a8..1bdb4bf8b0a00 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -16,7 +16,6 @@
is_timedelta64_ns_dtype,
pandas_dtype,
)
-from pandas.core.dtypes.concat import concat_compat
from pandas.core.dtypes.missing import isna
from pandas.core.accessor import delegate_names
@@ -265,40 +264,6 @@ def _union(self, other: "TimedeltaIndex", sort):
result._set_freq("infer")
return result
- def _fast_union(self, other, sort=None):
- if len(other) == 0:
- return self.view(type(self))
-
- if len(self) == 0:
- return other.view(type(self))
-
- # to make our life easier, "sort" the two ranges
- if self[0] <= other[0]:
- left, right = self, other
- elif sort is False:
- # TDIs are not in the "correct" order and we don't want
- # to sort but want to remove overlaps
- left, right = self, other
- left_start = left[0]
- loc = right.searchsorted(left_start, side="left")
- right_chunk = right.values[:loc]
- dates = concat_compat((left.values, right_chunk))
- return self._shallow_copy(dates)
- else:
- left, right = other, self
-
- left_end = left[-1]
- right_end = right[-1]
-
- # concatenate
- if left_end < right_end:
- loc = right.searchsorted(left_end, side="right")
- right_chunk = right.values[loc:]
- dates = concat_compat((left.values, right_chunk))
- return self._shallow_copy(dates)
- else:
- return left
-
def _maybe_promote(self, other):
if other.inferred_type == "timedelta":
other = TimedeltaIndex(other)
| https://api.github.com/repos/pandas-dev/pandas/pulls/30704 | 2020-01-05T00:46:51Z | 2020-01-05T16:23:01Z | 2020-01-05T16:23:01Z | 2020-01-05T18:07:26Z | |
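The `_fast_union` helpers removed above were consolidated into a single shared implementation; the core idea is a searchsorted-based merge of two monotonic ranges. A minimal numpy sketch of that merge (`fast_union` here is a hypothetical standalone name; like the real code, it assumes the two sorted arrays overlap or abut, which the `_can_fast_union` check guarantees before this path is taken):

```python
import numpy as np

# Merge two sorted arrays without a full re-sort, mirroring the
# searchsorted-based approach of the removed `_fast_union` helpers.
def fast_union(left, right):
    # Put the range that starts first on the left.
    if right[0] < left[0]:
        left, right = right, left
    if left[-1] < right[-1]:
        # Keep only the tail of `right` past the end of `left`.
        loc = np.searchsorted(right, left[-1], side="right")
        return np.concatenate([left, right[loc:]])
    # `right` is fully contained in `left`.
    return left

print(fast_union(np.array([1, 2, 3, 4]), np.array([3, 4, 5, 6])))  # [1 2 3 4 5 6]
```

The `side="right"` argument is what drops the overlapping head of `right` rather than duplicating it.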
Implement ExtensionIndex | diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index c162bbb412591..f61721a0e51e6 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -10,7 +10,6 @@
from pandas._libs.hashtable import duplicated_int64
from pandas._typing import AnyArrayLike
import pandas.compat as compat
-from pandas.compat.numpy import function as nv
from pandas.util._decorators import Appender, cache_readonly
from pandas.core.dtypes.common import (
@@ -30,7 +29,7 @@
import pandas.core.common as com
import pandas.core.indexes.base as ibase
from pandas.core.indexes.base import Index, _index_shared_docs, maybe_extract_name
-from pandas.core.indexes.extension import make_wrapped_comparison_op
+from pandas.core.indexes.extension import ExtensionIndex, make_wrapped_comparison_op
import pandas.core.missing as missing
from pandas.core.ops import get_op_result_name
@@ -67,7 +66,7 @@
typ="method",
overwrite=True,
)
-class CategoricalIndex(Index, accessor.PandasDelegate):
+class CategoricalIndex(ExtensionIndex, accessor.PandasDelegate):
"""
Index based on an underlying :class:`Categorical`.
@@ -723,20 +722,8 @@ def _convert_arr_indexer(self, keyarr):
def _convert_index_indexer(self, keyarr):
return self._shallow_copy(keyarr)
- @Appender(_index_shared_docs["take"] % _index_doc_kwargs)
- def take(self, indices, axis=0, allow_fill=True, fill_value=None, **kwargs):
- nv.validate_take(tuple(), kwargs)
- indices = ensure_platform_int(indices)
- taken = self._assert_take_fillable(
- self._data,
- indices,
- allow_fill=allow_fill,
- fill_value=fill_value,
- na_value=self._data.dtype.na_value,
- )
- return self._shallow_copy(taken)
-
def take_nd(self, *args, **kwargs):
+ """Alias for `take`"""
warnings.warn(
"CategoricalIndex.take_nd is deprecated, use CategoricalIndex.take instead",
FutureWarning,
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 365cfb1585f00..0c5f451c4f07b 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -41,7 +41,12 @@
from pandas.tseries.frequencies import DateOffset, to_offset
-from .extension import inherit_names, make_wrapped_arith_op, make_wrapped_comparison_op
+from .extension import (
+ ExtensionIndex,
+ inherit_names,
+ make_wrapped_arith_op,
+ make_wrapped_comparison_op,
+)
_index_doc_kwargs = dict(ibase._index_doc_kwargs)
@@ -81,7 +86,7 @@ def wrapper(left, right):
["__iter__", "mean", "freq", "freqstr", "_ndarray_values", "asi8", "_box_values"],
DatetimeLikeArrayMixin,
)
-class DatetimeIndexOpsMixin(ExtensionOpsMixin):
+class DatetimeIndexOpsMixin(ExtensionIndex, ExtensionOpsMixin):
"""
Common ops mixin to support a unified interface datetimelike Index.
"""
@@ -246,16 +251,13 @@ def take(self, indices, axis=0, allow_fill=True, fill_value=None, **kwargs):
if isinstance(maybe_slice, slice):
return self[maybe_slice]
- taken = self._assert_take_fillable(
- self._data,
- indices,
- allow_fill=allow_fill,
- fill_value=fill_value,
- na_value=NaT,
+ taken = ExtensionIndex.take(
+ self, indices, axis, allow_fill, fill_value, **kwargs
)
# keep freq in PeriodArray/Index, reset otherwise
freq = self.freq if is_period_dtype(self) else None
+ assert taken.freq == freq, (taken.freq, freq, taken)
return self._shallow_copy(taken, freq=freq)
_can_hold_na = True
diff --git a/pandas/core/indexes/extension.py b/pandas/core/indexes/extension.py
index 3c98d31e34b7d..6f581c4ebb594 100644
--- a/pandas/core/indexes/extension.py
+++ b/pandas/core/indexes/extension.py
@@ -3,10 +3,13 @@
"""
from typing import List
+from pandas.compat.numpy import function as nv
from pandas.util._decorators import cache_readonly
+from pandas.core.dtypes.common import ensure_platform_int
from pandas.core.dtypes.generic import ABCSeries
+from pandas.core.arrays import ExtensionArray
from pandas.core.ops import get_op_result_name
from .base import Index
@@ -152,3 +155,24 @@ def _maybe_unwrap_index(obj):
if isinstance(obj, Index):
return obj._data
return obj
+
+
+class ExtensionIndex(Index):
+ """
+ Index subclass for indexes backed by ExtensionArray.
+ """
+
+ _data: ExtensionArray
+
+ def take(self, indices, axis=0, allow_fill=True, fill_value=None, **kwargs):
+ nv.validate_take(tuple(), kwargs)
+ indices = ensure_platform_int(indices)
+
+ taken = self._assert_take_fillable(
+ self._data,
+ indices,
+ allow_fill=allow_fill,
+ fill_value=fill_value,
+ na_value=self._na_value,
+ )
+ return type(self)(taken, name=self.name)
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index 35cdf840a55b2..2677ed4d95158 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -58,7 +58,7 @@
from pandas.tseries.frequencies import to_offset
from pandas.tseries.offsets import DateOffset
-from .extension import inherit_names
+from .extension import ExtensionIndex, inherit_names
_VALID_CLOSED = {"left", "right", "both", "neither"}
_index_doc_kwargs = dict(ibase._index_doc_kwargs)
@@ -213,7 +213,7 @@ def func(intvidx_self, other, sort=False):
overwrite=True,
)
@inherit_names(["is_non_overlapping_monotonic", "mid"], IntervalArray, cache=True)
-class IntervalIndex(IntervalMixin, Index, accessor.PandasDelegate):
+class IntervalIndex(IntervalMixin, ExtensionIndex, accessor.PandasDelegate):
_typ = "intervalindex"
_comparables = ["name"]
_attributes = ["name", "closed"]
| So far this just puts `take` in it, but there are a handful of other methods we'll be able to move from our existing EA Indexes. | https://api.github.com/repos/pandas-dev/pandas/pulls/30703 | 2020-01-04T23:22:49Z | 2020-01-05T21:30:44Z | 2020-01-05T21:30:44Z | 2020-01-05T21:35:09Z |
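The shared `ExtensionIndex.take` above routes through `_assert_take_fillable`, where `-1` indices are only treated as missing slots when a fill value is actually supplied. A small illustration on a `CategoricalIndex` (behavior as in pandas >= 1.0; exact reprs may vary by version):

```python
import numpy as np
import pandas as pd

ci = pd.CategoricalIndex(["a", "b", "c"])

# Without a fill value, -1 is ordinary positional indexing (last element).
print(list(ci.take([0, -1])))  # ['a', 'c']

# With allow_fill=True and an explicit fill_value, -1 marks a missing slot.
filled = ci.take([0, -1], allow_fill=True, fill_value=np.nan)
print(filled[0], pd.isna(filled[1]))  # a True
```

Note that in the `Index.take` API, `allow_fill=True` alone is not enough: filling only kicks in once `fill_value` is not None.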
DEPR: CategoricalIndex.take_nd | diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
index 014bd22aa2dab..08a04174c8399 100755
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -568,7 +568,7 @@ Deprecations
- :func:`eval` keyword argument "truediv" is deprecated and will be removed in a future version (:issue:`29812`)
- :meth:`DateOffset.isAnchored` and :meth:`DatetOffset.onOffset` are deprecated and will be removed in a future version, use :meth:`DateOffset.is_anchored` and :meth:`DateOffset.is_on_offset` instead (:issue:`30340`)
- ``pandas.tseries.frequencies.get_offset`` is deprecated and will be removed in a future version, use ``pandas.tseries.frequencies.to_offset`` instead (:issue:`4205`)
-- :meth:`Categorical.take_nd` is deprecated, use :meth:`Categorical.take` instead (:issue:`27745`)
+- :meth:`Categorical.take_nd` and :meth:`CategoricalIndex.take_nd` are deprecated, use :meth:`Categorical.take` and :meth:`CategoricalIndex.take` instead (:issue:`27745`)
- The parameter ``numeric_only`` of :meth:`Categorical.min` and :meth:`Categorical.max` is deprecated and replaced with ``skipna`` (:issue:`25303`)
- The parameter ``label`` in :func:`lreshape` has been deprecated and will be removed in a future version (:issue:`29742`)
- ``pandas.core.index`` has been deprecated and will be removed in a future version, the public classes are available in the top-level namespace (:issue:`19711`)
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index b44b83cec7b71..c162bbb412591 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -1,5 +1,6 @@
import operator
from typing import Any, List
+import warnings
import numpy as np
@@ -735,7 +736,13 @@ def take(self, indices, axis=0, allow_fill=True, fill_value=None, **kwargs):
)
return self._shallow_copy(taken)
- take_nd = take
+ def take_nd(self, *args, **kwargs):
+ warnings.warn(
+ "CategoricalIndex.take_nd is deprecated, use CategoricalIndex.take instead",
+ FutureWarning,
+ stacklevel=2,
+ )
+ return self.take(*args, **kwargs)
@Appender(_index_shared_docs["_maybe_cast_slice_bound"])
def _maybe_cast_slice_bound(self, label, side, kind):
diff --git a/pandas/tests/arrays/categorical/test_algos.py b/pandas/tests/arrays/categorical/test_algos.py
index c55a19bf04987..52640044565fc 100644
--- a/pandas/tests/arrays/categorical/test_algos.py
+++ b/pandas/tests/arrays/categorical/test_algos.py
@@ -177,3 +177,7 @@ def test_take_nd_deprecated(self):
cat = pd.Categorical(["a", "b", "c"])
with tm.assert_produces_warning(FutureWarning):
cat.take_nd([0, 1])
+
+ ci = pd.Index(cat)
+ with tm.assert_produces_warning(FutureWarning):
+ ci.take_nd([0, 1])
| matching Categorical.take_nd deprecation | https://api.github.com/repos/pandas-dev/pandas/pulls/30702 | 2020-01-04T22:58:59Z | 2020-01-05T18:43:12Z | 2020-01-05T18:43:12Z | 2020-01-05T18:56:35Z |
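With `take_nd` deprecated here (and removed entirely in later pandas versions), `take` is the supported spelling on both the array and the index; a quick sketch:

```python
import pandas as pd

cat = pd.Categorical(["a", "b", "c"])
ci = pd.CategoricalIndex(cat)

# `take` replaces the deprecated `take_nd` alias on both objects.
print(list(cat.take([0, 1])))  # ['a', 'b']
print(list(ci.take([2, 0])))   # ['c', 'a']
```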
BUG: TimedeltaIndex.union with sort=False | diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index 6fa7a7d3d4fb6..66b4101a92197 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -260,7 +260,7 @@ def _union(self, other, sort):
this, other = self, other
if this._can_fast_union(other):
- return this._fast_union(other)
+ return this._fast_union(other, sort=sort)
else:
result = Index._union(this, other, sort=sort)
if isinstance(result, TimedeltaIndex):
@@ -268,7 +268,7 @@ def _union(self, other, sort):
result._set_freq("infer")
return result
- def _fast_union(self, other):
+ def _fast_union(self, other, sort=None):
if len(other) == 0:
return self.view(type(self))
@@ -278,6 +278,15 @@ def _fast_union(self, other):
# to make our life easier, "sort" the two ranges
if self[0] <= other[0]:
left, right = self, other
+ elif sort is False:
+ # TDIs are not in the "correct" order and we don't want
+ # to sort but want to remove overlaps
+ left, right = self, other
+ left_start = left[0]
+ loc = right.searchsorted(left_start, side="left")
+ right_chunk = right.values[:loc]
+ dates = concat_compat((left.values, right_chunk))
+ return self._shallow_copy(dates)
else:
left, right = other, self
diff --git a/pandas/tests/indexes/timedeltas/test_setops.py b/pandas/tests/indexes/timedeltas/test_setops.py
index b2024d04efc66..e6d5ca97a0bdf 100644
--- a/pandas/tests/indexes/timedeltas/test_setops.py
+++ b/pandas/tests/indexes/timedeltas/test_setops.py
@@ -22,6 +22,22 @@ def test_union(self):
i1.union(i2) # Works
i2.union(i1) # Fails with "AttributeError: can't set attribute"
+ def test_union_sort_false(self):
+ tdi = timedelta_range("1day", periods=5)
+
+ left = tdi[3:]
+ right = tdi[:3]
+
+ # Check that we are testing the desired code path
+ assert left._can_fast_union(right)
+
+ result = left.union(right)
+ tm.assert_index_equal(result, tdi)
+
+ result = left.union(right, sort=False)
+ expected = pd.TimedeltaIndex(["4 Days", "5 Days", "1 Days", "2 Day", "3 Days"])
+ tm.assert_index_equal(result, expected)
+
def test_union_coverage(self):
idx = TimedeltaIndex(["3d", "1d", "2d"])
| This matches the DTI behavior, so after this _fast_union can be shared between DTI and TDI. | https://api.github.com/repos/pandas-dev/pandas/pulls/30701 | 2020-01-04T22:33:28Z | 2020-01-04T23:15:41Z | 2020-01-04T23:15:41Z | 2020-01-04T23:23:09Z |
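A sketch of the new behavior on a pandas version that includes this fix: the default union comes back sorted, while `sort=False` only removes the overlap without re-ordering the caller's values, so the two results contain the same elements either way.

```python
import pandas as pd

tdi = pd.timedelta_range("1 day", periods=5)
left, right = tdi[3:], tdi[:3]  # disjoint halves, out of order

# Default: sorted result.
print(left.union(right).equals(tdi))  # True

# sort=False: same elements, overlap removed, order not repaired.
unsorted = left.union(right, sort=False)
print(sorted(unsorted) == list(tdi))  # True
```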
CLN: Replace fstring in tests/groupby/*.py files | diff --git a/pandas/tests/groupby/aggregate/test_other.py b/pandas/tests/groupby/aggregate/test_other.py
index f1dece6a1c46b..52ee3e652501c 100644
--- a/pandas/tests/groupby/aggregate/test_other.py
+++ b/pandas/tests/groupby/aggregate/test_other.py
@@ -473,8 +473,7 @@ def test_agg_timezone_round_trip():
assert result3 == ts
dates = [
- pd.Timestamp("2016-01-0{i:d} 12:00:00".format(i=i), tz="US/Pacific")
- for i in range(1, 5)
+ pd.Timestamp(f"2016-01-0{i:d} 12:00:00", tz="US/Pacific") for i in range(1, 5)
]
df = pd.DataFrame({"A": ["a", "b"] * 2, "B": dates})
grouped = df.groupby("A")
diff --git a/pandas/tests/groupby/test_apply.py b/pandas/tests/groupby/test_apply.py
index 17e5d3efe850f..2f2f97f2cd993 100644
--- a/pandas/tests/groupby/test_apply.py
+++ b/pandas/tests/groupby/test_apply.py
@@ -265,7 +265,7 @@ def desc3(group):
result = group.describe()
# names are different
- result.index.name = "stat_{:d}".format(len(group))
+ result.index.name = f"stat_{len(group):d}"
result = result[: len(group)]
# weirdo
diff --git a/pandas/tests/groupby/test_bin_groupby.py b/pandas/tests/groupby/test_bin_groupby.py
index 69be1067ce37d..ad71f73e80e64 100644
--- a/pandas/tests/groupby/test_bin_groupby.py
+++ b/pandas/tests/groupby/test_bin_groupby.py
@@ -87,7 +87,7 @@ def _check(dtype):
counts = np.zeros(len(out), dtype=np.int64)
labels = ensure_int64(np.repeat(np.arange(3), np.diff(np.r_[0, bins])))
- func = getattr(groupby, "group_ohlc_{dtype}".format(dtype=dtype))
+ func = getattr(groupby, f"group_ohlc_{dtype}")
func(out, counts, obj[:, None], labels)
def _ohlc(group):
diff --git a/pandas/tests/groupby/test_categorical.py b/pandas/tests/groupby/test_categorical.py
index 3e5e3b8045be2..9323946581a0d 100644
--- a/pandas/tests/groupby/test_categorical.py
+++ b/pandas/tests/groupby/test_categorical.py
@@ -497,10 +497,10 @@ def test_dataframe_categorical_ordered_observed_sort(ordered, observed, sort):
aggr[aggr.isna()] = "missing"
if not all(label == aggr):
msg = (
- "Labels and aggregation results not consistently sorted\n"
- + "for (ordered={}, observed={}, sort={})\n"
- + "Result:\n{}"
- ).format(ordered, observed, sort, result)
+ f"Labels and aggregation results not consistently sorted\n"
+ + "for (ordered={ordered}, observed={observed}, sort={sort})\n"
+ + "Result:\n{result}"
+ )
assert False, msg
@@ -805,7 +805,7 @@ def test_sort():
# self.cat.groupby(['value_group'])['value_group'].count().plot(kind='bar')
df = DataFrame({"value": np.random.randint(0, 10000, 100)})
- labels = ["{0} - {1}".format(i, i + 499) for i in range(0, 10000, 500)]
+ labels = [f"{i} - {i+499}" for i in range(0, 10000, 500)]
cat_labels = Categorical(labels, labels)
df = df.sort_values(by=["value"], ascending=True)
diff --git a/pandas/tests/groupby/test_counting.py b/pandas/tests/groupby/test_counting.py
index d88f293c99e0f..b4239d7d34a90 100644
--- a/pandas/tests/groupby/test_counting.py
+++ b/pandas/tests/groupby/test_counting.py
@@ -197,11 +197,8 @@ def test_ngroup_respects_groupby_order(self):
@pytest.mark.parametrize(
"datetimelike",
[
- [
- Timestamp("2016-05-{i:02d} 20:09:25+00:00".format(i=i))
- for i in range(1, 4)
- ],
- [Timestamp("2016-05-{i:02d} 20:09:25".format(i=i)) for i in range(1, 4)],
+ [Timestamp(f"2016-05-{i:02d} 20:09:25+00:00") for i in range(1, 4)],
+ [Timestamp(f"2016-05-{i:02d} 20:09:25") for i in range(1, 4)],
[Timedelta(x, unit="h") for x in range(1, 4)],
[Period(freq="2W", year=2017, month=x) for x in range(1, 4)],
],
diff --git a/pandas/tests/groupby/test_function.py b/pandas/tests/groupby/test_function.py
index 19b00502da5dc..97cf1af1d2e9e 100644
--- a/pandas/tests/groupby/test_function.py
+++ b/pandas/tests/groupby/test_function.py
@@ -103,9 +103,7 @@ def test_builtins_apply(keys, f):
result = df.groupby(keys).apply(f)
ngroups = len(df.drop_duplicates(subset=keys))
- assert_msg = "invalid frame shape: {} (expected ({}, 3))".format(
- result.shape, ngroups
- )
+ assert_msg = f"invalid frame shape: {result.shape} (expected ({ngroups}, 3))"
assert result.shape == (ngroups, 3), assert_msg
tm.assert_frame_equal(
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index 6fc7d16554ccd..7e374811d1960 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -935,7 +935,7 @@ def test_mutate_groups():
+ ["c"] * 2
+ ["d"] * 2
+ ["e"] * 2,
- "cat3": ["g{}".format(x) for x in range(1, 15)],
+ "cat3": [f"g{x}" for x in range(1, 15)],
"val": np.random.randint(100, size=14),
}
)
diff --git a/pandas/tests/groupby/test_transform.py b/pandas/tests/groupby/test_transform.py
index ebf75191806fb..6c05c4038a829 100644
--- a/pandas/tests/groupby/test_transform.py
+++ b/pandas/tests/groupby/test_transform.py
@@ -962,9 +962,7 @@ def demean_rename(x):
if isinstance(x, pd.Series):
return result
- result = result.rename(
- columns={c: "{}_demeaned".format(c) for c in result.columns}
- )
+ result = result.rename(columns={c: "{c}_demeaned" for c in result.columns})
return result
diff --git a/pandas/tests/groupby/test_value_counts.py b/pandas/tests/groupby/test_value_counts.py
index 5acf71edf2b63..c86cb4532bc26 100644
--- a/pandas/tests/groupby/test_value_counts.py
+++ b/pandas/tests/groupby/test_value_counts.py
@@ -47,7 +47,7 @@ def seed_df(seed_nans, n, m):
keys = "1st", "2nd", ["1st", "2nd"]
for k, b in product(keys, bins):
binned.append((df, k, b, n, m))
- ids.append("{}-{}-{}".format(k, n, m))
+ ids.append(f"{k}-{n}-{m}")
@pytest.mark.slow
diff --git a/pandas/tests/groupby/test_whitelist.py b/pandas/tests/groupby/test_whitelist.py
index 6a5e531416ecb..8e387e9202ef6 100644
--- a/pandas/tests/groupby/test_whitelist.py
+++ b/pandas/tests/groupby/test_whitelist.py
@@ -404,7 +404,7 @@ def test_all_methods_categorized(mframe):
# new public method?
if new_names:
- msg = """
+ msg = f"""
There are uncatgeorized methods defined on the Grouper class:
{names}.
@@ -418,19 +418,19 @@ def test_all_methods_categorized(mframe):
see the comments in pandas/core/groupby/base.py for guidance on
how to fix this test.
"""
- raise AssertionError(msg.format(names=names))
+ raise AssertionError(msg)
# removed a public method?
all_categorized = reduction_kernels | transformation_kernels | groupby_other_methods
print(names)
print(all_categorized)
if not (names == all_categorized):
- msg = """
+ msg = f"""
Some methods which are supposed to be on the Grouper class
are missing:
-{names}.
+{all_categorized - names}.
They're still defined in one of the lists that live in pandas/core/groupby/base.py.
If you removed a method, you should update them
"""
- raise AssertionError(msg.format(names=all_categorized - names))
+ raise AssertionError(msg)
| - [ ] closes #xxxx
- [ ] tests added / passed
- [x] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
Ref to #29547 | https://api.github.com/repos/pandas-dev/pandas/pulls/30700 | 2020-01-04T22:14:19Z | 2020-01-04T23:27:39Z | 2020-01-04T23:27:39Z | 2020-01-05T06:50:20Z |
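One pitfall when converting `.format()` calls like the ones above: a `{name}` placeholder only interpolates when the literal itself carries the `f` prefix. Prefix-less literals, including individual pieces of an implicitly concatenated string, are left verbatim:

```python
c = "value"

# Only literals with the f prefix interpolate:
print(f"{c}_demeaned")  # value_demeaned
print("{c}_demeaned")   # {c}_demeaned  (no f prefix: left verbatim)

# In implicit concatenation, each piece needs its own prefix:
msg = f"sorted for " "(ordered={ordered})"
print(msg)  # sorted for (ordered={ordered})
```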
Fix integer check; also add column with integer name in test case. | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 21a22322daece..20fd42e44258a 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -461,7 +461,7 @@ def _get_index_resolvers(self) -> Dict[str, ABCSeries]:
for axis_name in self._AXIS_ORDERS:
d.update(self._get_axis_resolvers(axis_name))
- return {clean_column_name(k): v for k, v in d.items() if k is not int}
+ return {clean_column_name(k): v for k, v in d.items() if not isinstance(k, int)}
def _get_cleaned_column_resolvers(self) -> Dict[str, ABCSeries]:
"""
@@ -476,7 +476,9 @@ def _get_cleaned_column_resolvers(self) -> Dict[str, ABCSeries]:
if isinstance(self, ABCSeries):
return {clean_column_name(self.name): self}
- return {clean_column_name(k): v for k, v in self.items() if k is not int}
+ return {
+ clean_column_name(k): v for k, v in self.items() if not isinstance(k, int)
+ }
@property
def _info_axis(self):
diff --git a/pandas/tests/frame/test_query_eval.py b/pandas/tests/frame/test_query_eval.py
index 578487ea3f54c..703e05998e93c 100644
--- a/pandas/tests/frame/test_query_eval.py
+++ b/pandas/tests/frame/test_query_eval.py
@@ -1076,6 +1076,7 @@ def df(self):
"that's": [9, 1, 8],
"☺": [8, 7, 6],
"foo#bar": [2, 4, 5],
+ 1: [5, 7, 9],
}
)
| A small embarrassing mistake is corrected here that was introduced in the recently merged #28215 | https://api.github.com/repos/pandas-dev/pandas/pulls/30698 | 2020-01-04T21:14:05Z | 2020-01-04T22:06:33Z | 2020-01-04T22:06:33Z | 2020-01-04T22:06:46Z |
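With the corrected `isinstance(k, int)` check (the original `k is not int` compared each key against the `int` type object itself, which is always true), integer-named columns are skipped when building the `query`/`eval` resolvers. The upshot, sketched below, is that an integer-named column no longer interferes with string-named lookups:

```python
import pandas as pd

# A frame mixing a string-named and an integer-named column.
df = pd.DataFrame({"a": [1, 2, 3], 1: [4, 5, 6]})

# The integer-named column is ignored by the resolver machinery,
# so querying on string-named columns still works.
print(df.query("a > 1"))
```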
Implement PeriodIndex.difference without object-dtype cast | diff --git a/pandas/_testing.py b/pandas/_testing.py
index 2ebebc5d5e10a..e9151e6df8260 100644
--- a/pandas/_testing.py
+++ b/pandas/_testing.py
@@ -619,7 +619,7 @@ def _get_ilevel_values(index, level):
# accept level number only
unique = index.levels[level]
level_codes = index.codes[level]
- filled = take_1d(unique.values, level_codes, fill_value=unique._na_value)
+ filled = take_1d(unique._values, level_codes, fill_value=unique._na_value)
values = unique._shallow_copy(filled, name=index.names[level])
return values
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index ca7c69c9713bc..93f989b95773f 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -17,6 +17,7 @@
is_float_dtype,
is_integer,
is_integer_dtype,
+ is_object_dtype,
pandas_dtype,
)
@@ -588,13 +589,13 @@ def get_indexer_non_unique(self, target):
return ensure_platform_int(indexer), missing
def _get_unique_index(self, dropna=False):
- """
- wrap Index._get_unique_index to handle NaT
- """
- res = super()._get_unique_index(dropna=dropna)
- if dropna:
- res = res.dropna()
- return res
+ if self.is_unique and not dropna:
+ return self
+
+ result = self._data.unique()
+ if dropna and self.hasnans:
+ result = result[~result.isna()]
+ return self._shallow_copy(result)
def get_loc(self, key, method=None, tolerance=None):
"""
@@ -809,6 +810,29 @@ def intersection(self, other, sort=False):
result = self._shallow_copy(np.asarray(i8result, dtype=np.int64), name=res_name)
return result
+ def difference(self, other, sort=None):
+ self._validate_sort_keyword(sort)
+ self._assert_can_do_setop(other)
+ res_name = get_op_result_name(self, other)
+ other = ensure_index(other)
+
+ if self.equals(other):
+ # pass an empty PeriodArray with the appropriate dtype
+ return self._shallow_copy(self._data[:0])
+
+ if is_object_dtype(other):
+ return self.astype(object).difference(other).astype(self.dtype)
+
+ elif not is_dtype_equal(self.dtype, other.dtype):
+ return self
+
+ i8self = Int64Index._simple_new(self.asi8)
+ i8other = Int64Index._simple_new(other.asi8)
+ i8result = i8self.difference(i8other, sort=sort)
+
+ result = self._shallow_copy(np.asarray(i8result, dtype=np.int64), name=res_name)
+ return result
+
# ------------------------------------------------------------------------
def _apply_meta(self, rawarr):
| Analogous to #30666. After this we'll be able to share some code between the set operations.
The edit in pd._testing is mostly unrelated. Both changes are motivated by tracking down the places where an object-dtype ndarray is passed to PeriodIndex._shallow_copy. | https://api.github.com/repos/pandas-dev/pandas/pulls/30697 | 2020-01-04T20:52:23Z | 2020-01-05T16:22:42Z | 2020-01-05T16:22:42Z | 2020-01-05T18:08:39Z |
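A sketch of the behavior this enables, on a pandas version with the change: `difference` stays in period dtype throughout, including the equal-inputs fast path that returns an empty index:

```python
import pandas as pd

pi = pd.period_range("2020-01", periods=5, freq="M")

result = pi.difference(pi[:2])
print(result.dtype)           # period[M]
print(result.equals(pi[2:]))  # True

# Equal inputs short-circuit to an empty index of the same dtype.
empty = pi.difference(pi)
print(len(empty), empty.dtype)  # 0 period[M]
```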
Make DTI/TDI _union behavior match | diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index 9c259dfd11793..6035131e78aa8 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -395,9 +395,7 @@ def _union(self, other: "DatetimeIndex", sort):
result = Index._union(this, other, sort=sort)
if isinstance(result, DatetimeIndex):
assert result._data.dtype == this.dtype
- if result.freq is None and (
- this.freq is not None or other.freq is not None
- ):
+ if result.freq is None:
result._set_freq("infer")
return result
diff --git a/pandas/tests/indexes/datetimes/test_setops.py b/pandas/tests/indexes/datetimes/test_setops.py
index f7960c114ec9d..78188c54b1d85 100644
--- a/pandas/tests/indexes/datetimes/test_setops.py
+++ b/pandas/tests/indexes/datetimes/test_setops.py
@@ -163,6 +163,21 @@ def test_union_freq_both_none(self, sort):
tm.assert_index_equal(result, expected)
assert result.freq is None
+ def test_union_freq_infer(self):
+ # When taking the union of two DatetimeIndexes, we infer
+ # a freq even if the arguments don't have freq. This matches
+ # TimedeltaIndex behavior.
+ dti = pd.date_range("2016-01-01", periods=5)
+ left = dti[[0, 1, 3, 4]]
+ right = dti[[2, 3, 1]]
+
+ assert left.freq is None
+ assert right.freq is None
+
+ result = left.union(right)
+ tm.assert_index_equal(result, dti)
+ assert result.freq == "D"
+
def test_union_dataframe_index(self):
rng1 = date_range("1/1/1999", "1/1/2012", freq="MS")
s1 = Series(np.random.randn(len(rng1)), rng1)
diff --git a/pandas/tests/indexes/timedeltas/test_setops.py b/pandas/tests/indexes/timedeltas/test_setops.py
index e6d5ca97a0bdf..0aa784cbb7710 100644
--- a/pandas/tests/indexes/timedeltas/test_setops.py
+++ b/pandas/tests/indexes/timedeltas/test_setops.py
@@ -78,6 +78,21 @@ def test_union_bug_4564(self):
exp = TimedeltaIndex(sorted(set(left) | set(right)))
tm.assert_index_equal(result, exp)
+ def test_union_freq_infer(self):
+ # When taking the union of two TimedeltaIndexes, we infer
+ # a freq even if the arguments don't have freq. This matches
+ # DatetimeIndex behavior.
+ tdi = pd.timedelta_range("1 Day", periods=5)
+ left = tdi[[0, 1, 3, 4]]
+ right = tdi[[2, 3, 1]]
+
+ assert left.freq is None
+ assert right.freq is None
+
+ result = left.union(right)
+ tm.assert_index_equal(result, tdi)
+ assert result.freq == "D"
+
def test_intersection_bug_1708(self):
index_1 = timedelta_range("1 day", periods=4, freq="h")
index_2 = index_1 + pd.offsets.Hour(5)
| Following this we'll be able to share the method. | https://api.github.com/repos/pandas-dev/pandas/pulls/30696 | 2020-01-04T20:49:39Z | 2020-01-05T18:42:18Z | 2020-01-05T18:42:18Z | 2020-01-05T18:57:34Z |
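The added test pins down the behavior: even when neither operand carries a `freq`, the union tries to infer one, matching `TimedeltaIndex`. A sketch of the setup; the sorted, deduplicated values are stable across versions, while the freq inference is the part specific to versions with this change:

```python
import pandas as pd

dti = pd.date_range("2016-01-01", periods=5)
left = dti[[0, 1, 3, 4]]  # fancy indexing drops the freq
right = dti[[2, 3, 1]]

result = left.union(right)
print(list(result) == list(dti))  # True
print(result.freq)  # inferred as daily on versions with this change
```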
REF: share `delete` between DTI/TDI/PI | diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index de27f0c0be850..2833f32e2712d 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -597,6 +597,27 @@ def shift(self, periods=1, freq=None):
result = self._data._time_shift(periods, freq=freq)
return type(self)(result, name=self.name)
+ # --------------------------------------------------------------------
+ # List-like Methods
+
+ def delete(self, loc):
+ new_i8s = np.delete(self.asi8, loc)
+
+ freq = None
+ if is_period_dtype(self):
+ freq = self.freq
+ elif is_integer(loc):
+ if loc in (0, -len(self), -1, len(self) - 1):
+ freq = self.freq
+ else:
+ if is_list_like(loc):
+ loc = lib.maybe_indices_to_slice(ensure_int64(np.array(loc)), len(self))
+ if isinstance(loc, slice) and loc.step in (1, None):
+ if loc.start in (0, None) or loc.stop in (len(self), None):
+ freq = self.freq
+
+ return self._shallow_copy(new_i8s, freq=freq)
+
class DatetimeTimedeltaMixin(DatetimeIndexOpsMixin, Int64Index):
"""
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index ec95b6c483c52..57f5597978ce7 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -9,14 +9,7 @@
from pandas._libs.tslibs import ccalendar, fields, parsing, timezones
from pandas.util._decorators import Appender, Substitution, cache_readonly
-from pandas.core.dtypes.common import (
- _NS_DTYPE,
- ensure_int64,
- is_float,
- is_integer,
- is_list_like,
- is_scalar,
-)
+from pandas.core.dtypes.common import _NS_DTYPE, is_float, is_integer, is_scalar
from pandas.core.dtypes.concat import concat_compat
from pandas.core.dtypes.dtypes import DatetimeTZDtype
from pandas.core.dtypes.missing import isna
@@ -1031,34 +1024,6 @@ def insert(self, loc, item):
return self.astype(object).insert(loc, item)
raise TypeError("cannot insert DatetimeIndex with incompatible label")
- def delete(self, loc):
- """
- Make a new DatetimeIndex with passed location(s) deleted.
-
- Parameters
- ----------
- loc: int, slice or array of ints
- Indicate which sub-arrays to remove.
-
- Returns
- -------
- new_index : DatetimeIndex
- """
- new_dates = np.delete(self.asi8, loc)
-
- freq = None
- if is_integer(loc):
- if loc in (0, -len(self), -1, len(self) - 1):
- freq = self.freq
- else:
- if is_list_like(loc):
- loc = lib.maybe_indices_to_slice(ensure_int64(np.array(loc)), len(self))
- if isinstance(loc, slice) and loc.step in (1, None):
- if loc.start in (0, None) or loc.stop in (len(self), None):
- freq = self.freq
-
- return self._shallow_copy(new_dates, freq=freq)
-
def indexer_at_time(self, time, asof=False):
"""
Return index locations of index values at particular time of day
diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index 6fa7a7d3d4fb6..2fa34d9fcd2dd 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -3,12 +3,11 @@
import numpy as np
-from pandas._libs import NaT, Timedelta, index as libindex, lib
+from pandas._libs import NaT, Timedelta, index as libindex
from pandas.util._decorators import Appender, Substitution
from pandas.core.dtypes.common import (
_TD_DTYPE,
- ensure_int64,
is_float,
is_integer,
is_list_like,
@@ -478,34 +477,6 @@ def insert(self, loc, item):
return self.astype(object).insert(loc, item)
raise TypeError("cannot insert TimedeltaIndex with incompatible label")
- def delete(self, loc):
- """
- Make a new TimedeltaIndex with passed location(s) deleted.
-
- Parameters
- ----------
- loc: int, slice or array of ints
- Indicate which sub-arrays to remove.
-
- Returns
- -------
- new_index : TimedeltaIndex
- """
- new_tds = np.delete(self.asi8, loc)
-
- freq = None
- if is_integer(loc):
- if loc in (0, -len(self), -1, len(self) - 1):
- freq = self.freq
- else:
- if is_list_like(loc):
- loc = lib.maybe_indices_to_slice(ensure_int64(np.array(loc)), len(self))
- if isinstance(loc, slice) and loc.step in (1, None):
- if loc.start in (0, None) or loc.stop in (len(self), None):
- freq = self.freq
-
- return self._shallow_copy(new_tds, freq=freq)
-
TimedeltaIndex._add_comparison_ops()
TimedeltaIndex._add_logical_methods_disabled()
| Besides de-duplicating, this moves the ball down the field in getting rid of PeriodIndex._shallow_copy cases where we pass an object ndarray | https://api.github.com/repos/pandas-dev/pandas/pulls/30695 | 2020-01-04T20:33:45Z | 2020-01-04T23:08:34Z | 2020-01-04T23:08:34Z | 2020-01-04T23:12:01Z |
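The freq logic in the shared `delete` above can be seen directly: deleting from either end of a regular `DatetimeIndex` leaves a range that still fits the freq, while deleting an interior point breaks it (for `PeriodIndex`, the freq is part of the dtype and is always kept). A small sketch:

```python
import pandas as pd

dti = pd.date_range("2016-01-01", periods=5, freq="D")

# Deleting an endpoint keeps the freq ...
print(dti.delete(0).freq)   # <Day>
print(dti.delete(-1).freq)  # <Day>

# ... while deleting an interior point drops it.
print(dti.delete(2).freq)   # None
```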
CLN: unreachable code in indexes | diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index 8b9e56b39f75f..b44b83cec7b71 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -269,14 +269,12 @@ def _create_categorical(cls, data, dtype=None):
return data
@classmethod
- def _simple_new(cls, values, name=None, dtype=None, **kwargs):
+ def _simple_new(cls, values, name=None, dtype=None):
result = object.__new__(cls)
values = cls._create_categorical(values, dtype=dtype)
result._data = values
result.name = name
- for k, v in kwargs.items():
- setattr(result, k, v)
result._reset_identity()
result._no_setting_name = False
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index de27f0c0be850..f1b6f5c6d43b4 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -164,12 +164,6 @@ def equals(self, other):
# have different timezone
return False
- elif is_period_dtype(self):
- if not is_period_dtype(other):
- return False
- if self.freq != other.freq:
- return False
-
return np.array_equal(self.asi8, other.asi8)
def _ensure_localized(
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index ec95b6c483c52..64bbb2a53725f 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -387,18 +387,12 @@ def _formatter_func(self):
# --------------------------------------------------------------------
# Set Operation Methods
- def _union(self, other, sort):
+ def _union(self, other: "DatetimeIndex", sort):
if not len(other) or self.equals(other) or not len(self):
return super()._union(other, sort=sort)
- if len(other) == 0 or self.equals(other) or len(self) == 0:
- return super().union(other, sort=sort)
-
- if not isinstance(other, DatetimeIndex):
- try:
- other = DatetimeIndex(other)
- except TypeError:
- pass
+ # We are called by `union`, which is responsible for this validation
+ assert isinstance(other, DatetimeIndex)
this, other = self._maybe_utc_convert(other)
@@ -407,9 +401,7 @@ def _union(self, other, sort):
else:
result = Index._union(this, other, sort=sort)
if isinstance(result, DatetimeIndex):
- # TODO: we shouldn't be setting attributes like this;
- # in all the tests this equality already holds
- result._data._dtype = this.dtype
+ assert result._data.dtype == this.dtype
if result.freq is None and (
this.freq is not None or other.freq is not None
):
diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index 6fa7a7d3d4fb6..6cb4410d6a305 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -248,15 +248,13 @@ def astype(self, dtype, copy=True):
return Index(result.astype("i8"), name=self.name)
return DatetimeIndexOpsMixin.astype(self, dtype, copy=copy)
- def _union(self, other, sort):
+ def _union(self, other: "TimedeltaIndex", sort):
if len(other) == 0 or self.equals(other) or len(self) == 0:
return super()._union(other, sort=sort)
- if not isinstance(other, TimedeltaIndex):
- try:
- other = TimedeltaIndex(other)
- except (TypeError, ValueError):
- pass
+ # We are called by `union`, which is responsible for this validation
+ assert isinstance(other, TimedeltaIndex)
+
this, other = self, other
if this._can_fast_union(other):
@@ -309,7 +307,7 @@ def get_value(self, series, key):
return self.get_value_maybe_box(series, key)
try:
- return com.maybe_box(self, Index.get_value(self, series, key), series, key)
+ value = Index.get_value(self, series, key)
except KeyError:
try:
loc = self._get_string_slice(key)
@@ -321,10 +319,10 @@ def get_value(self, series, key):
return self.get_value_maybe_box(series, key)
except (TypeError, ValueError, KeyError):
raise KeyError(key)
+ else:
+ return com.maybe_box(self, value, series, key)
- def get_value_maybe_box(self, series, key):
- if not isinstance(key, Timedelta):
- key = Timedelta(key)
+ def get_value_maybe_box(self, series, key: Timedelta):
values = self._engine.get_value(com.values_from_object(series), key)
return com.maybe_box(self, values, series, key)
| https://api.github.com/repos/pandas-dev/pandas/pulls/30694 | 2020-01-04T20:02:26Z | 2020-01-04T23:07:56Z | 2020-01-04T23:07:56Z | 2020-01-04T23:15:13Z | |
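The `get_value` change in the diff above swaps a nested call for `try/except/else` — a standard Python pattern: only the fallible lookup sits inside `try`, so exceptions raised by the follow-up work are not accidentally caught by the same handler. A minimal standalone sketch (the names here are hypothetical, not pandas API):

```python
def lookup_and_box(mapping, key, box):
    """Fetch ``key`` and post-process it, guarding only the lookup."""
    try:
        value = mapping[key]
    except KeyError:
        # fallback path for missing keys
        return None
    else:
        # runs only when the lookup succeeded; a KeyError raised by
        # ``box`` itself would propagate instead of being swallowed
        return box(value)

assert lookup_and_box({"a": 1}, "a", str) == "1"
assert lookup_and_box({"a": 1}, "b", str) is None
```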
CI: Fix pytest junit_family warnings | diff --git a/setup.cfg b/setup.cfg
index f813d1296b047..d0570cee6fe10 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -66,6 +66,7 @@ xfail_strict = True
filterwarnings =
error:Sparse:FutureWarning
error:The SparseArray:FutureWarning
+junit_family=xunit2
[coverage:run]
branch = False
| - [x] closes #30433
As per https://docs.pytest.org/en/latest/deprecations.html#junit-family-default-value-change-to-xunit2
This means we now produce XML output conforming to the xunit2 XSD (as opposed to the old legacy v1 format):
https://github.com/jenkinsci/xunit-plugin/blob/xunit-2.3.2/src/main/resources/org/jenkinsci/plugins/xunit/types/model/xsd/junit-10.xsd
Completed the following checks:
- [x] Warning no longer present in builds -> checked [here](https://travis-ci.org/pandas-dev/pandas/jobs/632732296?utm_medium=notification&utm_source=github_status)
- [x] CodeCov unaffected since it uses the output xml -> [here](https://codecov.io/gh/pandas-dev/pandas/compare/b29d58d1a3e0ebb1ed3bf3bed1e1cf4848092c77...2a2d21a14a78e028e8e9a9c3bbf815ea76501036)
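For projects that keep pytest options in a standalone ``pytest.ini`` rather than ``setup.cfg``, the equivalent opt-in looks like this (a sketch; pandas itself uses the ``[tool:pytest]`` section of ``setup.cfg`` as shown in the diff):

```ini
[pytest]
# opt in to the xunit2 report schema before it becomes the default
junit_family = xunit2
```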
| https://api.github.com/repos/pandas-dev/pandas/pulls/30693 | 2020-01-04T19:40:42Z | 2020-01-04T22:57:37Z | 2020-01-04T22:57:37Z | 2020-01-14T23:43:05Z |
ERR: Improve error message and doc for invalid labels in cut/qcut | diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
index 5b4761c3bc6c5..917ffba3ec0a7 100755
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -999,7 +999,7 @@ Reshaping
- Bug in :func:`melt` where supplying mixed strings and numeric values for ``id_vars`` or ``value_vars`` would incorrectly raise a ``ValueError`` (:issue:`29718`)
- Dtypes are now preserved when transposing a ``DataFrame`` where each column is the same extension dtype (:issue:`30091`)
- Bug in :func:`merge_asof` merging on a tz-aware ``left_index`` and ``right_on`` a tz-aware column (:issue:`29864`)
--
+- Improved error message and docstring in :func:`cut` and :func:`qcut` when `labels=True` (:issue:`13318`)
Sparse
^^^^^^
diff --git a/pandas/core/reshape/tile.py b/pandas/core/reshape/tile.py
index 8cf51ae09fbcb..2e3eb9170b15c 100644
--- a/pandas/core/reshape/tile.py
+++ b/pandas/core/reshape/tile.py
@@ -15,6 +15,7 @@
is_datetime64tz_dtype,
is_datetime_or_timedelta_dtype,
is_integer,
+ is_list_like,
is_scalar,
is_timedelta64_dtype,
)
@@ -65,11 +66,12 @@ def cut(
``right == True`` (the default), then the `bins` ``[1, 2, 3, 4]``
indicate (1,2], (2,3], (3,4]. This argument is ignored when
`bins` is an IntervalIndex.
- labels : array or bool, optional
+ labels : array or False, default None
Specifies the labels for the returned bins. Must be the same length as
the resulting bins. If False, returns only integer indicators of the
bins. This affects the type of the output container (see below).
- This argument is ignored when `bins` is an IntervalIndex.
+ This argument is ignored when `bins` is an IntervalIndex. If True,
+ raises an error.
retbins : bool, default False
Whether to return the bins or not. Useful when bins is provided
as a scalar.
@@ -286,10 +288,10 @@ def qcut(
q : int or list-like of int
Number of quantiles. 10 for deciles, 4 for quartiles, etc. Alternately
array of quantiles, e.g. [0, .25, .5, .75, 1.] for quartiles.
- labels : array or bool, default None
+ labels : array or False, default None
Used as labels for the resulting bins. Must be of the same length as
the resulting bins. If False, return only integer indicators of the
- bins.
+ bins. If True, raises an error.
retbins : bool, optional
Whether to return the (bins, labels) or not. Can be useful if bins
is given as a scalar.
@@ -391,15 +393,23 @@ def _bins_to_cuts(
has_nas = na_mask.any()
if labels is not False:
- if labels is None:
+ if not (labels is None or is_list_like(labels)):
+ raise ValueError(
+ "Bin labels must either be False, None or passed in as a "
+ "list-like argument"
+ )
+
+ elif labels is None:
labels = _format_labels(
bins, precision, right=right, include_lowest=include_lowest, dtype=dtype
)
+
else:
if len(labels) != len(bins) - 1:
raise ValueError(
"Bin labels must be one fewer than the number of bin edges"
)
+
if not is_categorical_dtype(labels):
labels = Categorical(labels, categories=labels, ordered=True)
diff --git a/pandas/tests/reshape/test_cut.py b/pandas/tests/reshape/test_cut.py
index e52636d54ebe8..13b6f05ed304a 100644
--- a/pandas/tests/reshape/test_cut.py
+++ b/pandas/tests/reshape/test_cut.py
@@ -603,3 +603,12 @@ def test_cut_bool_coercion_to_int(bins, box, compare):
expected = cut(data_expected, bins, duplicates="drop")
result = cut(data_result, bins, duplicates="drop")
compare(result, expected)
+
+
+@pytest.mark.parametrize("labels", ["foo", 1, True])
+def test_cut_incorrect_labels(labels):
+ # GH 13318
+ values = range(5)
+ msg = "Bin labels must either be False, None or passed in as a list-like argument"
+ with pytest.raises(ValueError, match=msg):
+ cut(values, 4, labels=labels)
diff --git a/pandas/tests/reshape/test_qcut.py b/pandas/tests/reshape/test_qcut.py
index c5ca05056a306..95406a5ebf4f7 100644
--- a/pandas/tests/reshape/test_qcut.py
+++ b/pandas/tests/reshape/test_qcut.py
@@ -130,6 +130,38 @@ def test_qcut_return_intervals():
tm.assert_series_equal(res, exp)
+@pytest.mark.parametrize("labels", ["foo", 1, True])
+def test_qcut_incorrect_labels(labels):
+ # GH 13318
+ values = range(5)
+ msg = "Bin labels must either be False, None or passed in as a list-like argument"
+ with pytest.raises(ValueError, match=msg):
+ qcut(values, 4, labels=labels)
+
+
+@pytest.mark.parametrize("labels", [["a", "b", "c"], list(range(3))])
+def test_qcut_wrong_length_labels(labels):
+ # GH 13318
+ values = range(10)
+ msg = "Bin labels must be one fewer than the number of bin edges"
+ with pytest.raises(ValueError, match=msg):
+ qcut(values, 4, labels=labels)
+
+
+@pytest.mark.parametrize(
+ "labels, expected",
+ [
+ (["a", "b", "c"], Categorical(["a", "b", "c"], ordered=True)),
+ (list(range(3)), Categorical([0, 1, 2], ordered=True)),
+ ],
+)
+def test_qcut_list_like_labels(labels, expected):
+ # GH 13318
+ values = range(3)
+ result = qcut(values, 3, labels=labels)
+ tm.assert_categorical_equal(result, expected)
+
+
@pytest.mark.parametrize(
"kwargs,msg",
[
| - [x] closes #13318
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] add whats new note
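The validated behavior can be exercised directly; a minimal sketch (bin count and label names are illustrative):

```python
import pandas as pd

# List-like labels are accepted and must be one fewer than the bin edges.
binned = pd.cut(range(3), bins=3, labels=["low", "mid", "high"])
assert list(binned) == ["low", "mid", "high"]

# Scalar/boolean labels now raise a clear ValueError up front instead
# of failing later with a confusing message.
try:
    pd.cut(range(5), bins=4, labels=True)
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for labels=True")
```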
| https://api.github.com/repos/pandas-dev/pandas/pulls/30691 | 2020-01-04T18:44:34Z | 2020-01-07T21:18:36Z | 2020-01-07T21:18:35Z | 2020-01-08T03:26:09Z |
TYP: NDFrame.(loc|iloc|at|iat) | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index f5b0ce1ae77fb..ac7c5ca935336 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -144,7 +144,7 @@ def _single_replace(self, to_replace, method, inplace, limit):
bool_t = bool # Need alias because NDFrame has def bool:
-class NDFrame(PandasObject, SelectionMixin):
+class NDFrame(PandasObject, SelectionMixin, indexing.IndexingMixin):
"""
N-dimensional analogue of DataFrame. Store multi-dimensional in a
size-mutable, labeled data structure
@@ -3181,7 +3181,10 @@ def to_csv(
@classmethod
def _create_indexer(cls, name: str, indexer) -> None:
- """Create an indexer like _name in the class."""
+ """Create an indexer like _name in the class.
+
+ Kept for compatibility with geopandas. To be removed in the future. See GH27258
+ """
if getattr(cls, name, None) is None:
_indexer = functools.partial(indexer, name)
setattr(cls, name, property(_indexer, doc=indexer.__doc__))
@@ -11176,8 +11179,3 @@ def logical_func(self, axis=0, bool_only=None, skipna=True, level=None, **kwargs
)
return set_function_name(logical_func, name, cls)
-
-
-# install the indexes
-for _name, _indexer in indexing.get_indexers_list():
- NDFrame._create_indexer(_name, _indexer)
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index b15d91240e7bb..ea59a6a49e649 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -29,18 +29,6 @@
)
from pandas.core.indexes.api import Index, InvalidIndexError
-
-# the supported indexers
-def get_indexers_list():
-
- return [
- ("iloc", _iLocIndexer),
- ("loc", _LocIndexer),
- ("at", _AtIndexer),
- ("iat", _iAtIndexer),
- ]
-
-
# "null slice"
_NS = slice(None, None)
@@ -98,6 +86,486 @@ class IndexingError(Exception):
pass
+class IndexingMixin:
+ """Mixin for adding .loc/.iloc/.at/.iat to Datafames and Series.
+ """
+
+ @property
+ def iloc(self) -> "_iLocIndexer":
+ """
+ Purely integer-location based indexing for selection by position.
+
+ ``.iloc[]`` is primarily integer position based (from ``0`` to
+ ``length-1`` of the axis), but may also be used with a boolean
+ array.
+
+ Allowed inputs are:
+
+ - An integer, e.g. ``5``.
+ - A list or array of integers, e.g. ``[4, 3, 0]``.
+ - A slice object with ints, e.g. ``1:7``.
+ - A boolean array.
+ - A ``callable`` function with one argument (the calling Series or
+ DataFrame) and that returns valid output for indexing (one of the above).
+ This is useful in method chains, when you don't have a reference to the
+ calling object, but would like to base your selection on some value.
+
+ ``.iloc`` will raise ``IndexError`` if a requested indexer is
+ out-of-bounds, except *slice* indexers which allow out-of-bounds
+ indexing (this conforms with python/numpy *slice* semantics).
+
+ See more at :ref:`Selection by Position <indexing.integer>`.
+
+ See Also
+ --------
+ DataFrame.iat : Fast integer location scalar accessor.
+ DataFrame.loc : Purely label-location based indexer for selection by label.
+ Series.iloc : Purely integer-location based indexing for
+ selection by position.
+
+ Examples
+ --------
+
+ >>> mydict = [{'a': 1, 'b': 2, 'c': 3, 'd': 4},
+ ... {'a': 100, 'b': 200, 'c': 300, 'd': 400},
+ ... {'a': 1000, 'b': 2000, 'c': 3000, 'd': 4000 }]
+ >>> df = pd.DataFrame(mydict)
+ >>> df
+ a b c d
+ 0 1 2 3 4
+ 1 100 200 300 400
+ 2 1000 2000 3000 4000
+
+ **Indexing just the rows**
+
+ With a scalar integer.
+
+ >>> type(df.iloc[0])
+ <class 'pandas.core.series.Series'>
+ >>> df.iloc[0]
+ a 1
+ b 2
+ c 3
+ d 4
+ Name: 0, dtype: int64
+
+ With a list of integers.
+
+ >>> df.iloc[[0]]
+ a b c d
+ 0 1 2 3 4
+ >>> type(df.iloc[[0]])
+ <class 'pandas.core.frame.DataFrame'>
+
+ >>> df.iloc[[0, 1]]
+ a b c d
+ 0 1 2 3 4
+ 1 100 200 300 400
+
+ With a `slice` object.
+
+ >>> df.iloc[:3]
+ a b c d
+ 0 1 2 3 4
+ 1 100 200 300 400
+ 2 1000 2000 3000 4000
+
+ With a boolean mask the same length as the index.
+
+ >>> df.iloc[[True, False, True]]
+ a b c d
+ 0 1 2 3 4
+ 2 1000 2000 3000 4000
+
+ With a callable, useful in method chains. The `x` passed
+ to the ``lambda`` is the DataFrame being sliced. This selects
+ the rows whose index label even.
+
+ >>> df.iloc[lambda x: x.index % 2 == 0]
+ a b c d
+ 0 1 2 3 4
+ 2 1000 2000 3000 4000
+
+ **Indexing both axes**
+
+ You can mix the indexer types for the index and columns. Use ``:`` to
+ select the entire axis.
+
+ With scalar integers.
+
+ >>> df.iloc[0, 1]
+ 2
+
+ With lists of integers.
+
+ >>> df.iloc[[0, 2], [1, 3]]
+ b d
+ 0 2 4
+ 2 2000 4000
+
+ With `slice` objects.
+
+ >>> df.iloc[1:3, 0:3]
+ a b c
+ 1 100 200 300
+ 2 1000 2000 3000
+
+ With a boolean array whose length matches the columns.
+
+ >>> df.iloc[:, [True, False, True, False]]
+ a c
+ 0 1 3
+ 1 100 300
+ 2 1000 3000
+
+ With a callable function that expects the Series or DataFrame.
+
+ >>> df.iloc[:, lambda df: [0, 2]]
+ a c
+ 0 1 3
+ 1 100 300
+ 2 1000 3000
+ """
+ return _iLocIndexer("iloc", self)
+
+ @property
+ def loc(self) -> "_LocIndexer":
+ """
+ Access a group of rows and columns by label(s) or a boolean array.
+
+ ``.loc[]`` is primarily label based, but may also be used with a
+ boolean array.
+
+ Allowed inputs are:
+
+ - A single label, e.g. ``5`` or ``'a'``, (note that ``5`` is
+ interpreted as a *label* of the index, and **never** as an
+ integer position along the index).
+ - A list or array of labels, e.g. ``['a', 'b', 'c']``.
+ - A slice object with labels, e.g. ``'a':'f'``.
+
+ .. warning:: Note that contrary to usual python slices, **both** the
+ start and the stop are included
+
+ - A boolean array of the same length as the axis being sliced,
+ e.g. ``[True, False, True]``.
+ - A ``callable`` function with one argument (the calling Series or
+ DataFrame) and that returns valid output for indexing (one of the above)
+
+ See more at :ref:`Selection by Label <indexing.label>`
+
+ Raises
+ ------
+ KeyError
+ If any items are not found.
+
+ See Also
+ --------
+ DataFrame.at : Access a single value for a row/column label pair.
+ DataFrame.iloc : Access group of rows and columns by integer position(s).
+ DataFrame.xs : Returns a cross-section (row(s) or column(s)) from the
+ Series/DataFrame.
+ Series.loc : Access group of values using labels.
+
+ Examples
+ --------
+ **Getting values**
+
+ >>> df = pd.DataFrame([[1, 2], [4, 5], [7, 8]],
+ ... index=['cobra', 'viper', 'sidewinder'],
+ ... columns=['max_speed', 'shield'])
+ >>> df
+ max_speed shield
+ cobra 1 2
+ viper 4 5
+ sidewinder 7 8
+
+ Single label. Note this returns the row as a Series.
+
+ >>> df.loc['viper']
+ max_speed 4
+ shield 5
+ Name: viper, dtype: int64
+
+ List of labels. Note using ``[[]]`` returns a DataFrame.
+
+ >>> df.loc[['viper', 'sidewinder']]
+ max_speed shield
+ viper 4 5
+ sidewinder 7 8
+
+ Single label for row and column
+
+ >>> df.loc['cobra', 'shield']
+ 2
+
+ Slice with labels for row and single label for column. As mentioned
+ above, note that both the start and stop of the slice are included.
+
+ >>> df.loc['cobra':'viper', 'max_speed']
+ cobra 1
+ viper 4
+ Name: max_speed, dtype: int64
+
+ Boolean list with the same length as the row axis
+
+ >>> df.loc[[False, False, True]]
+ max_speed shield
+ sidewinder 7 8
+
+ Conditional that returns a boolean Series
+
+ >>> df.loc[df['shield'] > 6]
+ max_speed shield
+ sidewinder 7 8
+
+ Conditional that returns a boolean Series with column labels specified
+
+ >>> df.loc[df['shield'] > 6, ['max_speed']]
+ max_speed
+ sidewinder 7
+
+ Callable that returns a boolean Series
+
+ >>> df.loc[lambda df: df['shield'] == 8]
+ max_speed shield
+ sidewinder 7 8
+
+ **Setting values**
+
+ Set value for all items matching the list of labels
+
+ >>> df.loc[['viper', 'sidewinder'], ['shield']] = 50
+ >>> df
+ max_speed shield
+ cobra 1 2
+ viper 4 50
+ sidewinder 7 50
+
+ Set value for an entire row
+
+ >>> df.loc['cobra'] = 10
+ >>> df
+ max_speed shield
+ cobra 10 10
+ viper 4 50
+ sidewinder 7 50
+
+ Set value for an entire column
+
+ >>> df.loc[:, 'max_speed'] = 30
+ >>> df
+ max_speed shield
+ cobra 30 10
+ viper 30 50
+ sidewinder 30 50
+
+ Set value for rows matching callable condition
+
+ >>> df.loc[df['shield'] > 35] = 0
+ >>> df
+ max_speed shield
+ cobra 30 10
+ viper 0 0
+ sidewinder 0 0
+
+ **Getting values on a DataFrame with an index that has integer labels**
+
+ Another example using integers for the index
+
+ >>> df = pd.DataFrame([[1, 2], [4, 5], [7, 8]],
+ ... index=[7, 8, 9], columns=['max_speed', 'shield'])
+ >>> df
+ max_speed shield
+ 7 1 2
+ 8 4 5
+ 9 7 8
+
+ Slice with integer labels for rows. As mentioned above, note that both
+ the start and stop of the slice are included.
+
+ >>> df.loc[7:9]
+ max_speed shield
+ 7 1 2
+ 8 4 5
+ 9 7 8
+
+ **Getting values with a MultiIndex**
+
+ A number of examples using a DataFrame with a MultiIndex
+
+ >>> tuples = [
+ ... ('cobra', 'mark i'), ('cobra', 'mark ii'),
+ ... ('sidewinder', 'mark i'), ('sidewinder', 'mark ii'),
+ ... ('viper', 'mark ii'), ('viper', 'mark iii')
+ ... ]
+ >>> index = pd.MultiIndex.from_tuples(tuples)
+ >>> values = [[12, 2], [0, 4], [10, 20],
+ ... [1, 4], [7, 1], [16, 36]]
+ >>> df = pd.DataFrame(values, columns=['max_speed', 'shield'], index=index)
+ >>> df
+ max_speed shield
+ cobra mark i 12 2
+ mark ii 0 4
+ sidewinder mark i 10 20
+ mark ii 1 4
+ viper mark ii 7 1
+ mark iii 16 36
+
+ Single label. Note this returns a DataFrame with a single index.
+
+ >>> df.loc['cobra']
+ max_speed shield
+ mark i 12 2
+ mark ii 0 4
+
+ Single index tuple. Note this returns a Series.
+
+ >>> df.loc[('cobra', 'mark ii')]
+ max_speed 0
+ shield 4
+ Name: (cobra, mark ii), dtype: int64
+
+ Single label for row and column. Similar to passing in a tuple, this
+ returns a Series.
+
+ >>> df.loc['cobra', 'mark i']
+ max_speed 12
+ shield 2
+ Name: (cobra, mark i), dtype: int64
+
+ Single tuple. Note using ``[[]]`` returns a DataFrame.
+
+ >>> df.loc[[('cobra', 'mark ii')]]
+ max_speed shield
+ cobra mark ii 0 4
+
+ Single tuple for the index with a single label for the column
+
+ >>> df.loc[('cobra', 'mark i'), 'shield']
+ 2
+
+ Slice from index tuple to single label
+
+ >>> df.loc[('cobra', 'mark i'):'viper']
+ max_speed shield
+ cobra mark i 12 2
+ mark ii 0 4
+ sidewinder mark i 10 20
+ mark ii 1 4
+ viper mark ii 7 1
+ mark iii 16 36
+
+ Slice from index tuple to index tuple
+
+ >>> df.loc[('cobra', 'mark i'):('viper', 'mark ii')]
+ max_speed shield
+ cobra mark i 12 2
+ mark ii 0 4
+ sidewinder mark i 10 20
+ mark ii 1 4
+ viper mark ii 7 1
+ """
+ return _LocIndexer("loc", self)
+
+ @property
+ def at(self) -> "_AtIndexer":
+ """
+ Access a single value for a row/column label pair.
+
+ Similar to ``loc``, in that both provide label-based lookups. Use
+ ``at`` if you only need to get or set a single value in a DataFrame
+ or Series.
+
+ Raises
+ ------
+ KeyError
+ If 'label' does not exist in DataFrame.
+
+ See Also
+ --------
+ DataFrame.iat : Access a single value for a row/column pair by integer
+ position.
+ DataFrame.loc : Access a group of rows and columns by label(s).
+ Series.at : Access a single value using a label.
+
+ Examples
+ --------
+ >>> df = pd.DataFrame([[0, 2, 3], [0, 4, 1], [10, 20, 30]],
+ ... index=[4, 5, 6], columns=['A', 'B', 'C'])
+ >>> df
+ A B C
+ 4 0 2 3
+ 5 0 4 1
+ 6 10 20 30
+
+ Get value at specified row/column pair
+
+ >>> df.at[4, 'B']
+ 2
+
+ Set value at specified row/column pair
+
+ >>> df.at[4, 'B'] = 10
+ >>> df.at[4, 'B']
+ 10
+
+ Get value within a Series
+
+ >>> df.loc[5].at['B']
+ 4
+ """
+ return _AtIndexer("at", self)
+
+ @property
+ def iat(self) -> "_iAtIndexer":
+ """
+ Access a single value for a row/column pair by integer position.
+
+ Similar to ``iloc``, in that both provide integer-based lookups. Use
+ ``iat`` if you only need to get or set a single value in a DataFrame
+ or Series.
+
+ Raises
+ ------
+ IndexError
+ When integer position is out of bounds.
+
+ See Also
+ --------
+ DataFrame.at : Access a single value for a row/column label pair.
+ DataFrame.loc : Access a group of rows and columns by label(s).
+ DataFrame.iloc : Access a group of rows and columns by integer position(s).
+
+ Examples
+ --------
+ >>> df = pd.DataFrame([[0, 2, 3], [0, 4, 1], [10, 20, 30]],
+ ... columns=['A', 'B', 'C'])
+ >>> df
+ A B C
+ 0 0 2 3
+ 1 0 4 1
+ 2 10 20 30
+
+ Get value at specified row/column pair
+
+ >>> df.iat[1, 2]
+ 1
+
+ Set value at specified row/column pair
+
+ >>> df.iat[1, 2] = 10
+ >>> df.iat[1, 2]
+ 10
+
+ Get value within a series
+
+ >>> df.loc[0].iat[1]
+ 2
+ """
+ return _iAtIndexer("iat", self)
+
+
class _NDFrameIndexer(_NDFrameIndexerBase):
_valid_types: str
axis = None
@@ -1336,244 +1804,8 @@ def _get_slice_axis(self, slice_obj: slice, axis: int):
return self.obj.take(indexer, axis=axis)
+@Appender(IndexingMixin.loc.__doc__)
class _LocIndexer(_LocationIndexer):
- """
- Access a group of rows and columns by label(s) or a boolean array.
-
- ``.loc[]`` is primarily label based, but may also be used with a
- boolean array.
-
- Allowed inputs are:
-
- - A single label, e.g. ``5`` or ``'a'``, (note that ``5`` is
- interpreted as a *label* of the index, and **never** as an
- integer position along the index).
- - A list or array of labels, e.g. ``['a', 'b', 'c']``.
- - A slice object with labels, e.g. ``'a':'f'``.
-
- .. warning:: Note that contrary to usual python slices, **both** the
- start and the stop are included
-
- - A boolean array of the same length as the axis being sliced,
- e.g. ``[True, False, True]``.
- - A ``callable`` function with one argument (the calling Series or
- DataFrame) and that returns valid output for indexing (one of the above)
-
- See more at :ref:`Selection by Label <indexing.label>`
-
- Raises
- ------
- KeyError
- If any items are not found.
-
- See Also
- --------
- DataFrame.at : Access a single value for a row/column label pair.
- DataFrame.iloc : Access group of rows and columns by integer position(s).
- DataFrame.xs : Returns a cross-section (row(s) or column(s)) from the
- Series/DataFrame.
- Series.loc : Access group of values using labels.
-
- Examples
- --------
- **Getting values**
-
- >>> df = pd.DataFrame([[1, 2], [4, 5], [7, 8]],
- ... index=['cobra', 'viper', 'sidewinder'],
- ... columns=['max_speed', 'shield'])
- >>> df
- max_speed shield
- cobra 1 2
- viper 4 5
- sidewinder 7 8
-
- Single label. Note this returns the row as a Series.
-
- >>> df.loc['viper']
- max_speed 4
- shield 5
- Name: viper, dtype: int64
-
- List of labels. Note using ``[[]]`` returns a DataFrame.
-
- >>> df.loc[['viper', 'sidewinder']]
- max_speed shield
- viper 4 5
- sidewinder 7 8
-
- Single label for row and column
-
- >>> df.loc['cobra', 'shield']
- 2
-
- Slice with labels for row and single label for column. As mentioned
- above, note that both the start and stop of the slice are included.
-
- >>> df.loc['cobra':'viper', 'max_speed']
- cobra 1
- viper 4
- Name: max_speed, dtype: int64
-
- Boolean list with the same length as the row axis
-
- >>> df.loc[[False, False, True]]
- max_speed shield
- sidewinder 7 8
-
- Conditional that returns a boolean Series
-
- >>> df.loc[df['shield'] > 6]
- max_speed shield
- sidewinder 7 8
-
- Conditional that returns a boolean Series with column labels specified
-
- >>> df.loc[df['shield'] > 6, ['max_speed']]
- max_speed
- sidewinder 7
-
- Callable that returns a boolean Series
-
- >>> df.loc[lambda df: df['shield'] == 8]
- max_speed shield
- sidewinder 7 8
-
- **Setting values**
-
- Set value for all items matching the list of labels
-
- >>> df.loc[['viper', 'sidewinder'], ['shield']] = 50
- >>> df
- max_speed shield
- cobra 1 2
- viper 4 50
- sidewinder 7 50
-
- Set value for an entire row
-
- >>> df.loc['cobra'] = 10
- >>> df
- max_speed shield
- cobra 10 10
- viper 4 50
- sidewinder 7 50
-
- Set value for an entire column
-
- >>> df.loc[:, 'max_speed'] = 30
- >>> df
- max_speed shield
- cobra 30 10
- viper 30 50
- sidewinder 30 50
-
- Set value for rows matching callable condition
-
- >>> df.loc[df['shield'] > 35] = 0
- >>> df
- max_speed shield
- cobra 30 10
- viper 0 0
- sidewinder 0 0
-
- **Getting values on a DataFrame with an index that has integer labels**
-
- Another example using integers for the index
-
- >>> df = pd.DataFrame([[1, 2], [4, 5], [7, 8]],
- ... index=[7, 8, 9], columns=['max_speed', 'shield'])
- >>> df
- max_speed shield
- 7 1 2
- 8 4 5
- 9 7 8
-
- Slice with integer labels for rows. As mentioned above, note that both
- the start and stop of the slice are included.
-
- >>> df.loc[7:9]
- max_speed shield
- 7 1 2
- 8 4 5
- 9 7 8
-
- **Getting values with a MultiIndex**
-
- A number of examples using a DataFrame with a MultiIndex
-
- >>> tuples = [
- ... ('cobra', 'mark i'), ('cobra', 'mark ii'),
- ... ('sidewinder', 'mark i'), ('sidewinder', 'mark ii'),
- ... ('viper', 'mark ii'), ('viper', 'mark iii')
- ... ]
- >>> index = pd.MultiIndex.from_tuples(tuples)
- >>> values = [[12, 2], [0, 4], [10, 20],
- ... [1, 4], [7, 1], [16, 36]]
- >>> df = pd.DataFrame(values, columns=['max_speed', 'shield'], index=index)
- >>> df
- max_speed shield
- cobra mark i 12 2
- mark ii 0 4
- sidewinder mark i 10 20
- mark ii 1 4
- viper mark ii 7 1
- mark iii 16 36
-
- Single label. Note this returns a DataFrame with a single index.
-
- >>> df.loc['cobra']
- max_speed shield
- mark i 12 2
- mark ii 0 4
-
- Single index tuple. Note this returns a Series.
-
- >>> df.loc[('cobra', 'mark ii')]
- max_speed 0
- shield 4
- Name: (cobra, mark ii), dtype: int64
-
- Single label for row and column. Similar to passing in a tuple, this
- returns a Series.
-
- >>> df.loc['cobra', 'mark i']
- max_speed 12
- shield 2
- Name: (cobra, mark i), dtype: int64
-
- Single tuple. Note using ``[[]]`` returns a DataFrame.
-
- >>> df.loc[[('cobra', 'mark ii')]]
- max_speed shield
- cobra mark ii 0 4
-
- Single tuple for the index with a single label for the column
-
- >>> df.loc[('cobra', 'mark i'), 'shield']
- 2
-
- Slice from index tuple to single label
-
- >>> df.loc[('cobra', 'mark i'):'viper']
- max_speed shield
- cobra mark i 12 2
- mark ii 0 4
- sidewinder mark i 10 20
- mark ii 1 4
- viper mark ii 7 1
- mark iii 16 36
-
- Slice from index tuple to index tuple
-
- >>> df.loc[('cobra', 'mark i'):('viper', 'mark ii')]
- max_speed shield
- cobra mark i 12 2
- mark ii 0 4
- sidewinder mark i 10 20
- mark ii 1 4
- viper mark ii 7 1
- """
-
_valid_types = (
"labels (MUST BE IN THE INDEX), slices of labels (BOTH "
"endpoints included! Can be slices of integers if the "
@@ -1732,142 +1964,8 @@ def _getitem_axis(self, key, axis: int):
return self._get_label(key, axis=axis)
+@Appender(IndexingMixin.iloc.__doc__)
class _iLocIndexer(_LocationIndexer):
- """
- Purely integer-location based indexing for selection by position.
-
- ``.iloc[]`` is primarily integer position based (from ``0`` to
- ``length-1`` of the axis), but may also be used with a boolean
- array.
-
- Allowed inputs are:
-
- - An integer, e.g. ``5``.
- - A list or array of integers, e.g. ``[4, 3, 0]``.
- - A slice object with ints, e.g. ``1:7``.
- - A boolean array.
- - A ``callable`` function with one argument (the calling Series or
- DataFrame) and that returns valid output for indexing (one of the above).
- This is useful in method chains, when you don't have a reference to the
- calling object, but would like to base your selection on some value.
-
- ``.iloc`` will raise ``IndexError`` if a requested indexer is
- out-of-bounds, except *slice* indexers which allow out-of-bounds
- indexing (this conforms with python/numpy *slice* semantics).
-
- See more at :ref:`Selection by Position <indexing.integer>`.
-
- See Also
- --------
- DataFrame.iat : Fast integer location scalar accessor.
- DataFrame.loc : Purely label-location based indexer for selection by label.
- Series.iloc : Purely integer-location based indexing for
- selection by position.
-
- Examples
- --------
-
- >>> mydict = [{'a': 1, 'b': 2, 'c': 3, 'd': 4},
- ... {'a': 100, 'b': 200, 'c': 300, 'd': 400},
- ... {'a': 1000, 'b': 2000, 'c': 3000, 'd': 4000 }]
- >>> df = pd.DataFrame(mydict)
- >>> df
- a b c d
- 0 1 2 3 4
- 1 100 200 300 400
- 2 1000 2000 3000 4000
-
- **Indexing just the rows**
-
- With a scalar integer.
-
- >>> type(df.iloc[0])
- <class 'pandas.core.series.Series'>
- >>> df.iloc[0]
- a 1
- b 2
- c 3
- d 4
- Name: 0, dtype: int64
-
- With a list of integers.
-
- >>> df.iloc[[0]]
- a b c d
- 0 1 2 3 4
- >>> type(df.iloc[[0]])
- <class 'pandas.core.frame.DataFrame'>
-
- >>> df.iloc[[0, 1]]
- a b c d
- 0 1 2 3 4
- 1 100 200 300 400
-
- With a `slice` object.
-
- >>> df.iloc[:3]
- a b c d
- 0 1 2 3 4
- 1 100 200 300 400
- 2 1000 2000 3000 4000
-
- With a boolean mask the same length as the index.
-
- >>> df.iloc[[True, False, True]]
- a b c d
- 0 1 2 3 4
- 2 1000 2000 3000 4000
-
- With a callable, useful in method chains. The `x` passed
- to the ``lambda`` is the DataFrame being sliced. This selects
- the rows whose index label even.
-
- >>> df.iloc[lambda x: x.index % 2 == 0]
- a b c d
- 0 1 2 3 4
- 2 1000 2000 3000 4000
-
- **Indexing both axes**
-
- You can mix the indexer types for the index and columns. Use ``:`` to
- select the entire axis.
-
- With scalar integers.
-
- >>> df.iloc[0, 1]
- 2
-
- With lists of integers.
-
- >>> df.iloc[[0, 2], [1, 3]]
- b d
- 0 2 4
- 2 2000 4000
-
- With `slice` objects.
-
- >>> df.iloc[1:3, 0:3]
- a b c
- 1 100 200 300
- 2 1000 2000 3000
-
- With a boolean array whose length matches the columns.
-
- >>> df.iloc[:, [True, False, True, False]]
- a c
- 0 1 3
- 1 100 300
- 2 1000 3000
-
- With a callable function that expects the Series or DataFrame.
-
- >>> df.iloc[:, lambda df: [0, 2]]
- a c
- 0 1 3
- 1 100 300
- 2 1000 3000
- """
-
_valid_types = (
"integer, integer slice (START point is INCLUDED, END "
"point is EXCLUDED), listlike of integers, boolean array"
@@ -2095,53 +2193,8 @@ def __setitem__(self, key, value):
self.obj._set_value(*key, takeable=self._takeable)
+@Appender(IndexingMixin.at.__doc__)
class _AtIndexer(_ScalarAccessIndexer):
- """
- Access a single value for a row/column label pair.
-
- Similar to ``loc``, in that both provide label-based lookups. Use
- ``at`` if you only need to get or set a single value in a DataFrame
- or Series.
-
- Raises
- ------
- KeyError
- If 'label' does not exist in DataFrame.
-
- See Also
- --------
- DataFrame.iat : Access a single value for a row/column pair by integer
- position.
- DataFrame.loc : Access a group of rows and columns by label(s).
- Series.at : Access a single value using a label.
-
- Examples
- --------
- >>> df = pd.DataFrame([[0, 2, 3], [0, 4, 1], [10, 20, 30]],
- ... index=[4, 5, 6], columns=['A', 'B', 'C'])
- >>> df
- A B C
- 4 0 2 3
- 5 0 4 1
- 6 10 20 30
-
- Get value at specified row/column pair
-
- >>> df.at[4, 'B']
- 2
-
- Set value at specified row/column pair
-
- >>> df.at[4, 'B'] = 10
- >>> df.at[4, 'B']
- 10
-
- Get value within a Series
-
- >>> df.loc[5].at['B']
- 4
- """
-
_takeable = False
def _convert_key(self, key, is_setter: bool = False):
@@ -2170,52 +2223,8 @@ def _convert_key(self, key, is_setter: bool = False):
return key
+@Appender(IndexingMixin.iat.__doc__)
class _iAtIndexer(_ScalarAccessIndexer):
- """
- Access a single value for a row/column pair by integer position.
-
- Similar to ``iloc``, in that both provide integer-based lookups. Use
- ``iat`` if you only need to get or set a single value in a DataFrame
- or Series.
-
- Raises
- ------
- IndexError
- When integer position is out of bounds.
-
- See Also
- --------
- DataFrame.at : Access a single value for a row/column label pair.
- DataFrame.loc : Access a group of rows and columns by label(s).
- DataFrame.iloc : Access a group of rows and columns by integer position(s).
-
- Examples
- --------
- >>> df = pd.DataFrame([[0, 2, 3], [0, 4, 1], [10, 20, 30]],
- ... columns=['A', 'B', 'C'])
- >>> df
- A B C
- 0 0 2 3
- 1 0 4 1
- 2 10 20 30
-
- Get value at specified row/column pair
-
- >>> df.iat[1, 2]
- 1
-
- Set value at specified row/column pair
-
- >>> df.iat[1, 2] = 10
- >>> df.iat[1, 2]
- 10
-
- Get value within a series
-
- >>> df.loc[0].iat[1]
- 2
- """
-
_takeable = True
def _convert_key(self, key, is_setter: bool = False):
| Currently, the NDFrame indexers (.loc/.iloc/.at/.iat) are set up in a way that is too complex for mypy to understand them. This PR makes their implementations understandable for mypy (and maybe for humans too; the old implementation was a bit indirect, so it took some effort to understand).
One issue is that I - for the life of me - can't programmatically set doc strings of properties and make mypy understand the properties' types at the same time. Python/mypy seem to demand that doc strings be set directly on the property if the type is to be understandable by mypy. E.g. Appender on a property is not supported by mypy. So I've set the doc strings on the NDFrame properties, and reference them from the indexer class instances. This makes the PR seem large, but it's mostly just moving the doc strings.
| https://api.github.com/repos/pandas-dev/pandas/pulls/30690 | 2020-01-04T18:25:52Z | 2020-01-04T23:21:27Z | 2020-01-04T23:21:27Z | 2020-01-04T23:22:53Z |
CI: Fix IPython Tab Completion test async warning | diff --git a/ci/deps/azure-36-locale.yaml b/ci/deps/azure-36-locale.yaml
index 4f4c4524cb4dd..c6940f9327e0d 100644
--- a/ci/deps/azure-36-locale.yaml
+++ b/ci/deps/azure-36-locale.yaml
@@ -9,6 +9,7 @@ dependencies:
- cython>=0.29.13
- pytest>=5.0.1
- pytest-xdist>=1.21
+ - pytest-asyncio
- hypothesis>=3.58.0
- pytest-azurepipelines
diff --git a/ci/deps/azure-37-locale.yaml b/ci/deps/azure-37-locale.yaml
index a10fa0904a451..111ba6b020bc7 100644
--- a/ci/deps/azure-37-locale.yaml
+++ b/ci/deps/azure-37-locale.yaml
@@ -8,6 +8,7 @@ dependencies:
- cython>=0.29.13
- pytest>=5.0.1
- pytest-xdist>=1.21
+ - pytest-asyncio
- hypothesis>=3.58.0
- pytest-azurepipelines
diff --git a/environment.yml b/environment.yml
index 404a5b97e316a..e244350a0bea0 100644
--- a/environment.yml
+++ b/environment.yml
@@ -55,6 +55,7 @@ dependencies:
- pytest>=5.0.1
- pytest-cov
- pytest-xdist>=1.21
+ - pytest-asyncio
# downstream tests
- seaborn
diff --git a/pandas/tests/arrays/categorical/test_warnings.py b/pandas/tests/arrays/categorical/test_warnings.py
index 1ee877cbbf348..f66c327e9967d 100644
--- a/pandas/tests/arrays/categorical/test_warnings.py
+++ b/pandas/tests/arrays/categorical/test_warnings.py
@@ -1,16 +1,19 @@
import pytest
+from pandas.util._test_decorators import async_mark
+
import pandas._testing as tm
class TestCategoricalWarnings:
- def test_tab_complete_warning(self, ip):
+ @async_mark()
+ async def test_tab_complete_warning(self, ip):
# https://github.com/pandas-dev/pandas/issues/16409
pytest.importorskip("IPython", minversion="6.0.0")
from IPython.core.completer import provisionalcompleter
code = "import pandas as pd; c = Categorical([])"
- ip.run_code(code)
+ await ip.run_code(code)
with tm.assert_produces_warning(None):
with provisionalcompleter("ignore"):
list(ip.Completer.completions("c.", 1))
diff --git a/pandas/tests/frame/test_api.py b/pandas/tests/frame/test_api.py
index 8093b602dd6f3..26d6a917fe1ca 100644
--- a/pandas/tests/frame/test_api.py
+++ b/pandas/tests/frame/test_api.py
@@ -6,6 +6,7 @@
import pytest
from pandas.compat import PY37
+from pandas.util._test_decorators import async_mark
import pandas as pd
from pandas import Categorical, DataFrame, Series, compat, date_range, timedelta_range
@@ -539,13 +540,14 @@ def _check_f(base, f):
f = lambda x: x.rename({1: "foo"}, inplace=True)
_check_f(d.copy(), f)
- def test_tab_complete_warning(self, ip):
+ @async_mark()
+ async def test_tab_complete_warning(self, ip):
# GH 16409
pytest.importorskip("IPython", minversion="6.0.0")
from IPython.core.completer import provisionalcompleter
code = "import pandas as pd; df = pd.DataFrame()"
- ip.run_code(code)
+ await ip.run_code(code)
with tm.assert_produces_warning(None):
with provisionalcompleter("ignore"):
list(ip.Completer.completions("df.", 1))
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index 4a773cc1c6f49..c1ab5a9dbe3eb 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -12,6 +12,7 @@
from pandas._libs.tslib import Timestamp
from pandas.compat.numpy import np_datetime64_compat
+from pandas.util._test_decorators import async_mark
from pandas.core.dtypes.common import is_unsigned_integer_dtype
from pandas.core.dtypes.generic import ABCIndex
@@ -2397,13 +2398,14 @@ def test_cached_properties_not_settable(self):
with pytest.raises(AttributeError, match="Can't set attribute"):
index.is_unique = False
- def test_tab_complete_warning(self, ip):
+ @async_mark()
+ async def test_tab_complete_warning(self, ip):
# https://github.com/pandas-dev/pandas/issues/16409
pytest.importorskip("IPython", minversion="6.0.0")
from IPython.core.completer import provisionalcompleter
code = "import pandas as pd; idx = pd.Index([1, 2])"
- ip.run_code(code)
+ await ip.run_code(code)
with tm.assert_produces_warning(None):
with provisionalcompleter("ignore"):
list(ip.Completer.completions("idx.", 4))
diff --git a/pandas/tests/resample/test_resampler_grouper.py b/pandas/tests/resample/test_resampler_grouper.py
index 95a7654a618b2..4e3585c0be884 100644
--- a/pandas/tests/resample/test_resampler_grouper.py
+++ b/pandas/tests/resample/test_resampler_grouper.py
@@ -2,6 +2,8 @@
import numpy as np
+from pandas.util._test_decorators import async_mark
+
import pandas as pd
from pandas import DataFrame, Series, Timestamp
import pandas._testing as tm
@@ -13,7 +15,8 @@
)
-def test_tab_complete_ipython6_warning(ip):
+@async_mark()
+async def test_tab_complete_ipython6_warning(ip):
from IPython.core.completer import provisionalcompleter
code = dedent(
@@ -23,7 +26,7 @@ def test_tab_complete_ipython6_warning(ip):
rs = s.resample("D")
"""
)
- ip.run_code(code)
+ await ip.run_code(code)
with tm.assert_produces_warning(None):
with provisionalcompleter("ignore"):
diff --git a/pandas/tests/series/test_api.py b/pandas/tests/series/test_api.py
index 32587c6afee73..d235e51d00793 100644
--- a/pandas/tests/series/test_api.py
+++ b/pandas/tests/series/test_api.py
@@ -5,6 +5,8 @@
import numpy as np
import pytest
+from pandas.util._test_decorators import async_mark
+
import pandas as pd
from pandas import (
Categorical,
@@ -491,13 +493,14 @@ def test_empty_method(self):
for full_series in [pd.Series([1]), s2]:
assert not full_series.empty
- def test_tab_complete_warning(self, ip):
+ @async_mark()
+ async def test_tab_complete_warning(self, ip):
# https://github.com/pandas-dev/pandas/issues/16409
pytest.importorskip("IPython", minversion="6.0.0")
from IPython.core.completer import provisionalcompleter
code = "import pandas as pd; s = pd.Series()"
- ip.run_code(code)
+ await ip.run_code(code)
with tm.assert_produces_warning(None):
with provisionalcompleter("ignore"):
list(ip.Completer.completions("s.", 1))
diff --git a/pandas/util/_test_decorators.py b/pandas/util/_test_decorators.py
index a280da6e239b2..d8804994af426 100644
--- a/pandas/util/_test_decorators.py
+++ b/pandas/util/_test_decorators.py
@@ -32,6 +32,7 @@ def test_foo():
import pytest
from pandas.compat import is_platform_32bit, is_platform_windows
+from pandas.compat._optional import import_optional_dependency
from pandas.compat.numpy import _np_version
from pandas.core.computation.expressions import _NUMEXPR_INSTALLED, _USE_NUMEXPR
@@ -251,3 +252,13 @@ def new_func(*args, **kwargs):
assert flist2 == flist
return new_func
+
+
+def async_mark():
+ try:
+ import_optional_dependency("pytest_asyncio")
+ async_mark = pytest.mark.asyncio
+ except ImportError:
+ async_mark = pytest.mark.skip(reason="Missing dependency pytest-asyncio")
+
+ return async_mark
diff --git a/requirements-dev.txt b/requirements-dev.txt
index 7a6be037e38f8..f4f5fed82662c 100644
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -36,6 +36,7 @@ moto
pytest>=5.0.1
pytest-cov
pytest-xdist>=1.21
+pytest-asyncio
seaborn
statsmodels
ipywidgets
| - [x] closes #29070
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- New PR since https://github.com/pandas-dev/pandas/pull/29087 went stale
| https://api.github.com/repos/pandas-dev/pandas/pulls/30689 | 2020-01-04T18:20:33Z | 2020-01-04T23:12:36Z | 2020-01-04T23:12:35Z | 2020-01-04T23:12:40Z |
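The `async_mark()` helper added to `_test_decorators.py` above follows a small "mark or skip" pattern: return the real marker when the optional plugin is importable, otherwise a skip marker, so async tests degrade gracefully on environments without `pytest-asyncio`. A hypothetical stdlib-only analogue (using `importlib.util.find_spec` in place of pandas' `import_optional_dependency`, and plain strings in place of pytest marks so it runs without pytest installed):

```python
import importlib.util


def choose_mark(module_name, mark, skip_mark):
    """Return `mark` if `module_name` is importable, else `skip_mark`."""
    if importlib.util.find_spec(module_name) is not None:
        return mark
    return skip_mark
```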
CLN: share compatibility-check code | diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index 056c80717e54f..3d4f8720f3377 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -215,12 +215,7 @@ def __init__(self, values, freq=None, dtype=None, copy=False):
if isinstance(values, type(self)):
if freq is not None and freq != values.freq:
- msg = DIFFERENT_FREQ.format(
- cls=type(self).__name__,
- own_freq=values.freq.freqstr,
- other_freq=freq.freqstr,
- )
- raise IncompatibleFrequency(msg)
+ raise raise_on_incompatible(values, freq)
values, freq = values._data, values.freq
values = np.array(values, dtype="int64", copy=copy)
@@ -323,7 +318,7 @@ def _check_compatible_with(self, other):
if other is NaT:
return
if self.freqstr != other.freqstr:
- _raise_on_incompatible(self, other)
+ raise raise_on_incompatible(self, other)
# --------------------------------------------------------------------
# Data / Attributes
@@ -682,7 +677,7 @@ def _add_offset(self, other):
assert not isinstance(other, Tick)
base = libfrequencies.get_base_alias(other.rule_code)
if base != self.freq.rule_code:
- _raise_on_incompatible(self, other)
+ raise raise_on_incompatible(self, other)
# Note: when calling parent class's _add_timedeltalike_scalar,
# it will call delta_to_nanoseconds(delta). Because delta here
@@ -750,7 +745,7 @@ def _add_delta(self, other):
"""
if not isinstance(self.freq, Tick):
# We cannot add timedelta-like to non-tick PeriodArray
- _raise_on_incompatible(self, other)
+ raise raise_on_incompatible(self, other)
new_ordinals = super()._add_delta(other)
return type(self)(new_ordinals, freq=self.freq)
@@ -802,13 +797,13 @@ def _check_timedeltalike_freq_compat(self, other):
# by which will be added to self.
return delta
- _raise_on_incompatible(self, other)
+ raise raise_on_incompatible(self, other)
PeriodArray._add_comparison_ops()
-def _raise_on_incompatible(left, right):
+def raise_on_incompatible(left, right):
"""
Helper function to render a consistent error message when raising
IncompatibleFrequency.
@@ -816,14 +811,15 @@ def _raise_on_incompatible(left, right):
Parameters
----------
left : PeriodArray
- right : DateOffset, Period, ndarray, or timedelta-like
+ right : None, DateOffset, Period, ndarray, or timedelta-like
- Raises
+ Returns
------
IncompatibleFrequency
+ Exception to be raised by the caller.
"""
# GH#24283 error message format depends on whether right is scalar
- if isinstance(right, np.ndarray):
+ if isinstance(right, np.ndarray) or right is None:
other_freq = None
elif isinstance(right, (ABCPeriodIndex, PeriodArray, Period, DateOffset)):
other_freq = right.freqstr
@@ -833,7 +829,7 @@ def _raise_on_incompatible(left, right):
msg = DIFFERENT_FREQ.format(
cls=type(left).__name__, own_freq=left.freqstr, other_freq=other_freq
)
- raise IncompatibleFrequency(msg)
+ return IncompatibleFrequency(msg)
# -------------------------------------------------------------------
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index 72ef335665ee5..ca7c69c9713bc 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -5,7 +5,7 @@
from pandas._libs import index as libindex
from pandas._libs.tslibs import NaT, frequencies as libfrequencies, iNaT, resolution
-from pandas._libs.tslibs.period import DIFFERENT_FREQ, IncompatibleFrequency, Period
+from pandas._libs.tslibs.period import Period
from pandas.util._decorators import Appender, Substitution, cache_readonly
from pandas.core.dtypes.common import (
@@ -21,7 +21,12 @@
)
from pandas.core.accessor import delegate_names
-from pandas.core.arrays.period import PeriodArray, period_array, validate_dtype_freq
+from pandas.core.arrays.period import (
+ PeriodArray,
+ period_array,
+ raise_on_incompatible,
+ validate_dtype_freq,
+)
from pandas.core.base import _shared_docs
import pandas.core.common as com
import pandas.core.indexes.base as ibase
@@ -338,10 +343,7 @@ def _maybe_convert_timedelta(self, other):
if base == self.freq.rule_code:
return other.n
- msg = DIFFERENT_FREQ.format(
- cls=type(self).__name__, own_freq=self.freqstr, other_freq=other.freqstr
- )
- raise IncompatibleFrequency(msg)
+ raise raise_on_incompatible(self, other)
elif is_integer(other):
# integer is passed to .shift via
# _add_datetimelike_methods basically
@@ -349,10 +351,7 @@ def _maybe_convert_timedelta(self, other):
return other
# raise when input doesn't have freq
- msg = DIFFERENT_FREQ.format(
- cls=type(self).__name__, own_freq=self.freqstr, other_freq=None
- )
- raise IncompatibleFrequency(msg)
+ raise raise_on_incompatible(self, None)
# ------------------------------------------------------------------------
# Rendering Methods
@@ -486,12 +485,7 @@ def astype(self, dtype, copy=True, how="start"):
def searchsorted(self, value, side="left", sorter=None):
if isinstance(value, Period):
if value.freq != self.freq:
- msg = DIFFERENT_FREQ.format(
- cls=type(self).__name__,
- own_freq=self.freqstr,
- other_freq=value.freqstr,
- )
- raise IncompatibleFrequency(msg)
+ raise raise_on_incompatible(self, value)
value = value.ordinal
elif isinstance(value, str):
try:
@@ -785,10 +779,7 @@ def _assert_can_do_setop(self, other):
# *Can't* use PeriodIndexes of different freqs
# *Can* use PeriodIndex/DatetimeIndex
if isinstance(other, PeriodIndex) and self.freq != other.freq:
- msg = DIFFERENT_FREQ.format(
- cls=type(self).__name__, own_freq=self.freqstr, other_freq=other.freqstr
- )
- raise IncompatibleFrequency(msg)
+ raise raise_on_incompatible(self, other)
def _wrap_setop_result(self, other, result):
name = get_op_result_name(self, other)
diff --git a/pandas/tests/indexes/period/test_setops.py b/pandas/tests/indexes/period/test_setops.py
index 1ec53c1dac81c..dc7805880784f 100644
--- a/pandas/tests/indexes/period/test_setops.py
+++ b/pandas/tests/indexes/period/test_setops.py
@@ -1,10 +1,11 @@
import numpy as np
import pytest
+from pandas._libs.tslibs import IncompatibleFrequency
+
import pandas as pd
from pandas import Index, PeriodIndex, date_range, period_range
import pandas._testing as tm
-import pandas.core.indexes.period as period
def _permute(obj):
@@ -177,11 +178,11 @@ def test_union_misc(self, sort):
# raise if different frequencies
index = period_range("1/1/2000", "1/20/2000", freq="D")
index2 = period_range("1/1/2000", "1/20/2000", freq="W-WED")
- with pytest.raises(period.IncompatibleFrequency):
+ with pytest.raises(IncompatibleFrequency):
index.union(index2, sort=sort)
index3 = period_range("1/1/2000", "1/20/2000", freq="2D")
- with pytest.raises(period.IncompatibleFrequency):
+ with pytest.raises(IncompatibleFrequency):
index.join(index3)
def test_union_dataframe_index(self):
@@ -213,11 +214,11 @@ def test_intersection(self, sort):
# raise if different frequencies
index = period_range("1/1/2000", "1/20/2000", freq="D")
index2 = period_range("1/1/2000", "1/20/2000", freq="W-WED")
- with pytest.raises(period.IncompatibleFrequency):
+ with pytest.raises(IncompatibleFrequency):
index.intersection(index2, sort=sort)
index3 = period_range("1/1/2000", "1/20/2000", freq="2D")
- with pytest.raises(period.IncompatibleFrequency):
+ with pytest.raises(IncompatibleFrequency):
index.intersection(index3, sort=sort)
@pytest.mark.parametrize("sort", [None, False])
diff --git a/pandas/tests/indexes/period/test_tools.py b/pandas/tests/indexes/period/test_tools.py
index fc861b88d1f1b..2135b8a992128 100644
--- a/pandas/tests/indexes/period/test_tools.py
+++ b/pandas/tests/indexes/period/test_tools.py
@@ -3,6 +3,7 @@
import numpy as np
import pytest
+from pandas._libs.tslibs import IncompatibleFrequency
from pandas._libs.tslibs.ccalendar import MONTHS
import pandas as pd
@@ -18,7 +19,6 @@
to_datetime,
)
import pandas._testing as tm
-import pandas.core.indexes.period as period
class TestPeriodRepresentation:
@@ -232,11 +232,11 @@ def test_searchsorted(self, freq):
assert pidx.searchsorted(p2) == 3
msg = "Input has different freq=H from PeriodIndex"
- with pytest.raises(period.IncompatibleFrequency, match=msg):
+ with pytest.raises(IncompatibleFrequency, match=msg):
pidx.searchsorted(pd.Period("2014-01-01", freq="H"))
msg = "Input has different freq=5D from PeriodIndex"
- with pytest.raises(period.IncompatibleFrequency, match=msg):
+ with pytest.raises(IncompatibleFrequency, match=msg):
pidx.searchsorted(pd.Period("2014-01-01", freq="5D"))
diff --git a/pandas/tests/series/test_arithmetic.py b/pandas/tests/series/test_arithmetic.py
index edb1d2d98fa2e..06ae089d9c28b 100644
--- a/pandas/tests/series/test_arithmetic.py
+++ b/pandas/tests/series/test_arithmetic.py
@@ -3,10 +3,11 @@
import numpy as np
import pytest
+from pandas._libs.tslibs import IncompatibleFrequency
+
import pandas as pd
from pandas import Series
import pandas._testing as tm
-from pandas.core.indexes.period import IncompatibleFrequency
def _permute(obj):
| https://api.github.com/repos/pandas-dev/pandas/pulls/30688 | 2020-01-04T18:13:54Z | 2020-01-04T23:16:24Z | 2020-01-04T23:16:24Z | 2020-01-04T23:20:27Z | |
PLT: Add tests for missing markers | diff --git a/pandas/tests/plotting/test_frame.py b/pandas/tests/plotting/test_frame.py
index bdf37ac7e83a4..1c429bafa9a19 100644
--- a/pandas/tests/plotting/test_frame.py
+++ b/pandas/tests/plotting/test_frame.py
@@ -3271,6 +3271,34 @@ def test_plot_no_numeric_data(self):
with pytest.raises(TypeError):
df.plot()
+ def test_missing_markers_legend(self):
+ # 14958
+ df = pd.DataFrame(np.random.randn(8, 3), columns=["A", "B", "C"])
+ ax = df.plot(y=["A"], marker="x", linestyle="solid")
+ df.plot(y=["B"], marker="o", linestyle="dotted", ax=ax)
+ df.plot(y=["C"], marker="<", linestyle="dotted", ax=ax)
+
+ self._check_legend_labels(ax, labels=["A", "B", "C"])
+ self._check_legend_marker(ax, expected_markers=["x", "o", "<"])
+
+ def test_missing_markers_legend_using_style(self):
+ # 14563
+ df = pd.DataFrame(
+ {
+ "A": [1, 2, 3, 4, 5, 6],
+ "B": [2, 4, 1, 3, 2, 4],
+ "C": [3, 3, 2, 6, 4, 2],
+ "X": [1, 2, 3, 4, 5, 6],
+ }
+ )
+
+ fig, ax = self.plt.subplots()
+ for kind in "ABC":
+ df.plot("X", kind, label=kind, ax=ax, style=".")
+
+ self._check_legend_labels(ax, labels=["A", "B", "C"])
+ self._check_legend_marker(ax, expected_markers=[".", ".", "."])
+
def _generate_4_axes_via_gridspec():
import matplotlib.pyplot as plt
| closes #14563
closes #14958
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/30687 | 2020-01-04T17:45:46Z | 2020-01-04T21:33:41Z | 2020-01-04T21:33:41Z | 2020-01-04T21:33:47Z |
Fix PeriodIndex.get_indexer with non-PI | diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index 1cc37504b675f..363ff32519d27 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -564,15 +564,12 @@ def get_value(self, series, key):
def get_indexer(self, target, method=None, limit=None, tolerance=None):
target = ensure_index(target)
- if hasattr(target, "freq") and target.freq != self.freq:
- msg = DIFFERENT_FREQ.format(
- cls=type(self).__name__,
- own_freq=self.freqstr,
- other_freq=target.freqstr,
- )
- raise IncompatibleFrequency(msg)
-
if isinstance(target, PeriodIndex):
+ if target.freq != self.freq:
+ # No matches
+ no_matches = -1 * np.ones(self.shape, dtype=np.intp)
+ return no_matches
+
target = target.asi8
self_index = self._int64index
else:
@@ -587,14 +584,11 @@ def get_indexer_non_unique(self, target):
target = ensure_index(target)
if isinstance(target, PeriodIndex):
+ if target.freq != self.freq:
+ no_matches = -1 * np.ones(self.shape, dtype=np.intp)
+ return no_matches, no_matches
+
target = target.asi8
- if hasattr(target, "freq") and target.freq != self.freq:
- msg = DIFFERENT_FREQ.format(
- cls=type(self).__name__,
- own_freq=self.freqstr,
- other_freq=target.freqstr,
- )
- raise IncompatibleFrequency(msg)
indexer, missing = self._int64index.get_indexer_non_unique(target)
return ensure_platform_int(indexer), missing
diff --git a/pandas/tests/indexes/period/test_indexing.py b/pandas/tests/indexes/period/test_indexing.py
index 8b5a2958c4c61..e95b4ae5397b4 100644
--- a/pandas/tests/indexes/period/test_indexing.py
+++ b/pandas/tests/indexes/period/test_indexing.py
@@ -550,6 +550,35 @@ def test_get_indexer(self):
res = idx.get_indexer(target, "nearest", tolerance=pd.Timedelta("1 day"))
tm.assert_numpy_array_equal(res, np.array([0, 0, 1, -1], dtype=np.intp))
+ def test_get_indexer_mismatched_dtype(self):
+ # Check that we return all -1s and do not raise or cast incorrectly
+
+ dti = pd.date_range("2016-01-01", periods=3)
+ pi = dti.to_period("D")
+ pi2 = dti.to_period("W")
+
+ expected = np.array([-1, -1, -1], dtype=np.intp)
+
+ result = pi.get_indexer(dti)
+ tm.assert_numpy_array_equal(result, expected)
+
+ # This should work in both directions
+ result = dti.get_indexer(pi)
+ tm.assert_numpy_array_equal(result, expected)
+
+ result = pi.get_indexer(pi2)
+ tm.assert_numpy_array_equal(result, expected)
+
+ # We expect the same from get_indexer_non_unique
+ result = pi.get_indexer_non_unique(dti)[0]
+ tm.assert_numpy_array_equal(result, expected)
+
+ result = dti.get_indexer_non_unique(pi)[0]
+ tm.assert_numpy_array_equal(result, expected)
+
+ result = pi.get_indexer_non_unique(pi2)[0]
+ tm.assert_numpy_array_equal(result, expected)
+
def test_get_indexer_non_unique(self):
# GH 17717
p1 = pd.Period("2017-09-02")
| I'm _pretty_ sure that for non-comparable we should be returning an array of -1s, not raising. Can you double-check me on that @jreback?
| https://api.github.com/repos/pandas-dev/pandas/pulls/30686 | 2020-01-04T17:13:30Z | 2020-01-04T18:04:05Z | 2020-01-04T18:04:05Z | 2020-01-04T18:10:10Z |
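For context on the "-1 means no match" contract this fix relies on, here is a hypothetical plain-Python analogue of `get_indexer` for a unique index (not the pandas implementation):

```python
def get_indexer(index, target):
    """For each target label, return its position in `index`,
    or -1 when there is no match (pandas' missing-label convention)."""
    positions = {label: i for i, label in enumerate(index)}
    return [positions.get(label, -1) for label in target]
```

An entirely mismatched dtype then behaves like a target where every lookup misses — an all-`-1` result, which is what the fix returns up front instead of raising `IncompatibleFrequency`.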
DOC: Mention TYP as a type annotation PR prefix | diff --git a/doc/source/development/contributing.rst b/doc/source/development/contributing.rst
index 0c275f85b72e0..93c65ba7358c9 100644
--- a/doc/source/development/contributing.rst
+++ b/doc/source/development/contributing.rst
@@ -1363,6 +1363,7 @@ some common prefixes along with general guidelines for when to use them:
* TST: Additions/updates to tests
* BLD: Updates to the build process/scripts
* PERF: Performance improvement
+* TYP: Type annotations
* CLN: Code cleanup
The following defines how a commit message should be structured. Please reference the
| This seems to be quite common nowadays. | https://api.github.com/repos/pandas-dev/pandas/pulls/30684 | 2020-01-04T17:02:47Z | 2020-01-04T17:58:57Z | 2020-01-04T17:58:57Z | 2020-01-04T20:41:47Z |
TYP: Add mypy as a pre-commit | diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 88548f6c2f678..809764a20a713 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -16,3 +16,15 @@ repos:
- id: isort
language: python_venv
exclude: ^pandas/__init__\.py$|^pandas/core/api\.py$
+- repo: https://github.com/pre-commit/mirrors-mypy
+ rev: v0.730
+ hooks:
+ - id: mypy
+ # We run mypy over all files because of:
+ # * changes in type definitions may affect non-touched files.
+ # * Running it with `mypy pandas` and the filenames will lead to
+ # spurious duplicate module errors,
+ # see also https://github.com/pre-commit/mirrors-mypy/issues/5
+ pass_filenames: false
+ args:
+ - pandas
| This is quite helpful in developing with typing, especially if you plan to update the mypy version.
- [ ] closes #xxxx
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/30682 | 2020-01-04T16:54:25Z | 2020-01-04T17:39:10Z | 2020-01-04T17:39:10Z | 2020-01-04T20:42:06Z |
TYP: Fix chainmap typing for mypy 0.740+ | diff --git a/pandas/compat/chainmap.py b/pandas/compat/chainmap.py
index 479eddf0c0536..588bd24ddf797 100644
--- a/pandas/compat/chainmap.py
+++ b/pandas/compat/chainmap.py
@@ -1,15 +1,24 @@
-from collections import ChainMap
+from typing import ChainMap, MutableMapping, TypeVar, cast
+_KT = TypeVar("_KT")
+_VT = TypeVar("_VT")
-class DeepChainMap(ChainMap):
- def __setitem__(self, key, value):
+
+class DeepChainMap(ChainMap[_KT, _VT]):
+ """Variant of ChainMap that allows direct updates to inner scopes.
+
+ Only works when all passed mapping are mutable.
+ """
+
+ def __setitem__(self, key: _KT, value: _VT) -> None:
for mapping in self.maps:
- if key in mapping:
- mapping[key] = value
+ mutable_mapping = cast(MutableMapping[_KT, _VT], mapping)
+ if key in mutable_mapping:
+ mutable_mapping[key] = value
return
- self.maps[0][key] = value
+ cast(MutableMapping[_KT, _VT], self.maps[0])[key] = value
- def __delitem__(self, key):
+ def __delitem__(self, key: _KT) -> None:
"""
Raises
------
@@ -17,7 +26,8 @@ def __delitem__(self, key):
If `key` doesn't exist.
"""
for mapping in self.maps:
+ mutable_mapping = cast(MutableMapping[_KT, _VT], mapping)
if key in mapping:
- del mapping[key]
+ del mutable_mapping[key]
return
raise KeyError(key)
diff --git a/pandas/core/computation/pytables.py b/pandas/core/computation/pytables.py
index 4d27bcf2845f1..be652ca0e6a36 100644
--- a/pandas/core/computation/pytables.py
+++ b/pandas/core/computation/pytables.py
@@ -533,7 +533,7 @@ def __init__(
self._visitor = None
# capture the environment if needed
- local_dict = DeepChainMap()
+ local_dict: DeepChainMap[Any, Any] = DeepChainMap()
if isinstance(where, PyTablesExpr):
local_dict = where.env.scope
| This will otherwise raise a failure during a `mypy` update. It also increases the type precision of this file to 100%.
- [ ] closes #xxxx
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
cc @simonjayhawkins | https://api.github.com/repos/pandas-dev/pandas/pulls/30680 | 2020-01-04T16:32:18Z | 2020-01-05T16:22:12Z | 2020-01-05T16:22:12Z | 2020-01-06T08:02:06Z |
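The `DeepChainMap` semantics being typed in this diff can be shown with a small runnable sketch (the untyped version of the class from the diff): unlike `collections.ChainMap`, which always writes to `maps[0]`, `__setitem__` updates the first mapping that already contains the key.

```python
from collections import ChainMap


class DeepChainMap(ChainMap):
    """Variant of ChainMap that allows direct updates to inner scopes."""

    def __setitem__(self, key, value):
        # Write into the first mapping that already holds the key...
        for mapping in self.maps:
            if key in mapping:
                mapping[key] = value
                return
        # ...falling back to the outermost mapping, like ChainMap.
        self.maps[0][key] = value
```

The typing work in the PR amounts to telling mypy that these writes are only valid when the underlying maps are `MutableMapping`s, hence the `cast` calls.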
BUG: groupby apply raises ValueError when groupby axis has duplicates and applied identity function | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index 1cd325dad9f07..40c02eb495f67 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -133,9 +133,7 @@ Plotting
Groupby/resample/rolling
^^^^^^^^^^^^^^^^^^^^^^^^
--
--
-
+- Bug in :meth:`GroupBy.apply` raises ``ValueError`` when the ``by`` axis is not sorted and has duplicates and the applied ``func`` does not mutate passed in objects (:issue:`30667`)
Reshaping
^^^^^^^^^
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 233bdd11b372b..a8c96840ff17b 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -969,22 +969,17 @@ def reset_identity(values):
result = concat(values, axis=self.axis)
ax = self._selected_obj._get_axis(self.axis)
- if isinstance(result, Series):
- result = result.reindex(ax)
+ # this is a very unfortunate situation
+ # we can't use reindex to restore the original order
+ # when the ax has duplicates
+ # so we resort to this
+ # GH 14776, 30667
+ if ax.has_duplicates:
+ indexer, _ = result.index.get_indexer_non_unique(ax.values)
+ indexer = algorithms.unique1d(indexer)
+ result = result.take(indexer, axis=self.axis)
else:
-
- # this is a very unfortunate situation
- # we have a multi-index that is NOT lexsorted
- # and we have a result which is duplicated
- # we can't reindex, so we resort to this
- # GH 14776
- if isinstance(ax, MultiIndex) and not ax.is_unique:
- indexer = algorithms.unique1d(
- result.index.get_indexer_for(ax.values)
- )
- result = result.take(indexer, axis=self.axis)
- else:
- result = result.reindex(ax, axis=self.axis)
+ result = result.reindex(ax, axis=self.axis)
elif self.group_keys:
diff --git a/pandas/tests/groupby/test_apply.py b/pandas/tests/groupby/test_apply.py
index 2f2f97f2cd993..e81ff37510dc0 100644
--- a/pandas/tests/groupby/test_apply.py
+++ b/pandas/tests/groupby/test_apply.py
@@ -467,6 +467,29 @@ def filt2(x):
tm.assert_frame_equal(result, expected)
+@pytest.mark.parametrize("test_series", [True, False])
+def test_apply_with_duplicated_non_sorted_axis(test_series):
+ # GH 30667
+ df = pd.DataFrame(
+ [["x", "p"], ["x", "p"], ["x", "o"]], columns=["X", "Y"], index=[1, 2, 2]
+ )
+ if test_series:
+ ser = df.set_index("Y")["X"]
+ result = ser.groupby(level=0).apply(lambda x: x)
+
+ # not expecting the order to remain the same for duplicated axis
+ result = result.sort_index()
+ expected = ser.sort_index()
+ tm.assert_series_equal(result, expected)
+ else:
+ result = df.groupby("Y").apply(lambda x: x)
+
+ # not expecting the order to remain the same for duplicated axis
+ result = result.sort_values("Y")
+ expected = df.sort_values("Y")
+ tm.assert_frame_equal(result, expected)
+
+
def test_apply_corner_cases():
# #535, can't use sliding iterator
| This is more of a patch than a complete solution to this groupby apply paradigm.
When there are duplicates in the groupby axis, we restore the axis to its original order, but without guaranteeing that the order of the data within the same axis value is restored.
- [x] closes #30667
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/30679 | 2020-01-04T15:22:01Z | 2020-01-20T16:28:01Z | 2020-01-20T16:28:00Z | 2020-01-21T11:00:06Z |
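The reordering trick in this fix — `get_indexer_non_unique` over the original axis followed by `unique1d` — can be sketched in plain Python (a hypothetical analogue, not the pandas code): collect every result position matching each original label, then deduplicate while preserving first occurrence.

```python
def restore_group_order(result_labels, original_axis):
    """Positions into `result_labels` that restore the label order of
    `original_axis`, even when labels are duplicated."""
    # All candidate positions, in the order the original labels appear
    # (analogue of result.index.get_indexer_non_unique(ax.values)).
    candidates = []
    for label in original_axis:
        candidates.extend(
            i for i, lab in enumerate(result_labels) if lab == label
        )
    # Deduplicate preserving order (analogue of algorithms.unique1d).
    seen = set()
    indexer = []
    for i in candidates:
        if i not in seen:
            seen.add(i)
            indexer.append(i)
    return indexer
```

A plain label-based reindex would be ambiguous here — with duplicate labels there is no single row each label maps to — which is why the fix takes positions instead.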
TYP: simplify NDFrame.(iloc|loc|iat|at) | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index f5b0ce1ae77fb..a7d360bce8cf6 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3179,12 +3179,25 @@ def to_csv(
# ----------------------------------------------------------------------
# Fancy Indexing
- @classmethod
- def _create_indexer(cls, name: str, indexer) -> None:
- """Create an indexer like _name in the class."""
- if getattr(cls, name, None) is None:
- _indexer = functools.partial(indexer, name)
- setattr(cls, name, property(_indexer, doc=indexer.__doc__))
+ @property
+ @Appender(indexing._iLocIndexer.__doc__)
+ def iloc(self) -> "indexing._iLocIndexer":
+ return indexing._iLocIndexer(name="iloc", obj=self)
+
+ @property
+ @Appender(indexing._LocIndexer.__doc__)
+ def loc(self) -> "indexing._LocIndexer":
+ return indexing._LocIndexer(name="loc", obj=self)
+
+ @property
+ @Appender(indexing._AtIndexer.__doc__)
+ def at(self) -> "indexing._AtIndexer":
+ return indexing._AtIndexer(name="at", obj=self)
+
+ @property
+ @Appender(indexing._iAtIndexer.__doc__)
+ def iat(self) -> "indexing._iAtIndexer":
+ return indexing._iAtIndexer(name="iat", obj=self)
# ----------------------------------------------------------------------
# Lookup Caching
@@ -11176,8 +11189,3 @@ def logical_func(self, axis=0, bool_only=None, skipna=True, level=None, **kwargs
)
return set_function_name(logical_func, name, cls)
-
-
-# install the indexes
-for _name, _indexer in indexing.get_indexers_list():
- NDFrame._create_indexer(_name, _indexer)
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index b15d91240e7bb..22f8652c96933 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -30,17 +30,6 @@
from pandas.core.indexes.api import Index, InvalidIndexError
-# the supported indexers
-def get_indexers_list():
-
- return [
- ("iloc", _iLocIndexer),
- ("loc", _LocIndexer),
- ("at", _AtIndexer),
- ("iat", _iAtIndexer),
- ]
-
-
# "null slice"
_NS = slice(None, None)
| The signature of .iloc/.loc/.iat/.at is very complex and mypy (and maybe many humans too:-)) can't work through it to see their types. This simplifies them and allows mypy to see their types. | https://api.github.com/repos/pandas-dev/pandas/pulls/30678 | 2020-01-04T10:18:09Z | 2020-01-04T11:09:53Z | null | 2020-01-04T11:09:53Z |
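The refactor replaces the dynamic `_create_indexer` registration with plain properties so a type checker can see each indexer's return type. The idea in miniature (names here are illustrative stand-ins, not pandas internals):

```python
class _Indexer:
    """Tiny stand-in for pandas' _LocIndexer/_iLocIndexer classes."""

    def __init__(self, name: str, obj) -> None:
        self.name = name
        self.obj = obj

    def __getitem__(self, key):
        return self.obj.data[key]


class Frame:
    def __init__(self, data: dict) -> None:
        self.data = data

    @property
    def iloc(self) -> "_Indexer":
        # A fresh indexer per access; the return annotation is what mypy
        # could not recover from the old functools.partial approach.
        return _Indexer(name="iloc", obj=self)


f = Frame({0: "a", 1: "b"})
print(f.iloc[1])  # prints "b"
```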
PERF: add shortcut to Timestamp constructor | diff --git a/asv_bench/benchmarks/tslibs/timestamp.py b/asv_bench/benchmarks/tslibs/timestamp.py
index 8ebb2d8d2f35d..3ef9b814dd79e 100644
--- a/asv_bench/benchmarks/tslibs/timestamp.py
+++ b/asv_bench/benchmarks/tslibs/timestamp.py
@@ -1,12 +1,19 @@
import datetime
import dateutil
+import numpy as np
import pytz
from pandas import Timestamp
class TimestampConstruction:
+ def setup(self):
+ self.npdatetime64 = np.datetime64("2020-01-01 00:00:00")
+ self.dttime_unaware = datetime.datetime(2020, 1, 1, 0, 0, 0)
+ self.dttime_aware = datetime.datetime(2020, 1, 1, 0, 0, 0, 0, pytz.UTC)
+ self.ts = Timestamp("2020-01-01 00:00:00")
+
def time_parse_iso8601_no_tz(self):
Timestamp("2017-08-25 08:16:14")
@@ -28,6 +35,18 @@ def time_fromordinal(self):
def time_fromtimestamp(self):
Timestamp.fromtimestamp(1515448538)
+ def time_from_npdatetime64(self):
+ Timestamp(self.npdatetime64)
+
+ def time_from_datetime_unaware(self):
+ Timestamp(self.dttime_unaware)
+
+ def time_from_datetime_aware(self):
+ Timestamp(self.dttime_aware)
+
+ def time_from_pd_timestamp(self):
+ Timestamp(self.ts)
+
class TimestampProperties:
_tzs = [None, pytz.timezone("Europe/Amsterdam"), pytz.UTC, dateutil.tz.tzutc()]
diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index d0cf92b60fe0d..a0e1c964dd365 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -109,7 +109,9 @@ Deprecations
Performance improvements
~~~~~~~~~~~~~~~~~~~~~~~~
+
- Performance improvement in :class:`Timedelta` constructor (:issue:`30543`)
+- Performance improvement in :class:`Timestamp` constructor (:issue:`30543`)
-
-
diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx
index 36566b55e74ad..4915671aa6512 100644
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -391,7 +391,18 @@ class Timestamp(_Timestamp):
# User passed tzinfo instead of tz; avoid silently ignoring
tz, tzinfo = tzinfo, None
- if isinstance(ts_input, str):
+ # GH 30543 if pd.Timestamp already passed, return it
+ # check that only ts_input is passed
+ # checking verbosely, because cython doesn't optimize
+ # list comprehensions (as of cython 0.29.x)
+ if (isinstance(ts_input, Timestamp) and freq is None and
+ tz is None and unit is None and year is None and
+ month is None and day is None and hour is None and
+ minute is None and second is None and
+ microsecond is None and nanosecond is None and
+ tzinfo is None):
+ return ts_input
+ elif isinstance(ts_input, str):
# User passed a date string to parse.
# Check that the user didn't also pass a date attribute kwarg.
if any(arg is not None for arg in _date_attributes):
diff --git a/pandas/tests/indexes/datetimes/test_constructors.py b/pandas/tests/indexes/datetimes/test_constructors.py
index b6013c3939793..68285d41bda70 100644
--- a/pandas/tests/indexes/datetimes/test_constructors.py
+++ b/pandas/tests/indexes/datetimes/test_constructors.py
@@ -957,3 +957,10 @@ def test_timedelta_constructor_identity():
expected = pd.Timedelta(np.timedelta64(1, "s"))
result = pd.Timedelta(expected)
assert result is expected
+
+
+def test_timestamp_constructor_identity():
+ # Test for #30543
+ expected = pd.Timestamp("2017-01-01T12")
+ result = pd.Timestamp(expected)
+ assert result is expected
diff --git a/pandas/tests/indexes/datetimes/test_timezones.py b/pandas/tests/indexes/datetimes/test_timezones.py
index c785eb67e5184..cd8e8c3542cce 100644
--- a/pandas/tests/indexes/datetimes/test_timezones.py
+++ b/pandas/tests/indexes/datetimes/test_timezones.py
@@ -2,7 +2,6 @@
Tests for DatetimeIndex timezone-related methods
"""
from datetime import date, datetime, time, timedelta, tzinfo
-from distutils.version import LooseVersion
import dateutil
from dateutil.tz import gettz, tzlocal
@@ -11,7 +10,6 @@
import pytz
from pandas._libs.tslibs import conversion, timezones
-from pandas.compat._optional import _get_version
import pandas.util._test_decorators as td
import pandas as pd
@@ -583,15 +581,7 @@ def test_dti_construction_ambiguous_endpoint(self, tz):
["US/Pacific", "shift_forward", "2019-03-10 03:00"],
["dateutil/US/Pacific", "shift_forward", "2019-03-10 03:00"],
["US/Pacific", "shift_backward", "2019-03-10 01:00"],
- pytest.param(
- "dateutil/US/Pacific",
- "shift_backward",
- "2019-03-10 01:00",
- marks=pytest.mark.xfail(
- LooseVersion(_get_version(dateutil)) < LooseVersion("2.7.0"),
- reason="GH 31043",
- ),
- ),
+ ["dateutil/US/Pacific", "shift_backward", "2019-03-10 01:00"],
["US/Pacific", timedelta(hours=1), "2019-03-10 03:00"],
],
)
| - [X] closes #30543
- [X] tests added / passed
- [X] passes `black pandas`
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [X] whatsnew entry
This implements a shortcut in the Timestamp constructor to cut down on processing when a Timestamp is passed. We still need to check that no timezone (or other argument) was passed alongside it; if only a Timestamp was passed and there is no timezone, we just return that same Timestamp.
A test is added to check that the Timestamp is still the same object.
PR for timedelta to be added once I confirm that this is the approach we want to go with.
| https://api.github.com/repos/pandas-dev/pandas/pulls/30676 | 2020-01-04T07:19:38Z | 2020-01-26T01:03:37Z | 2020-01-26T01:03:37Z | 2020-01-27T06:55:10Z |
BUG: bug in date_range with custom business hours and given periods | diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
index b9cc1dad53674..98ef62b557afd 100755
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -833,6 +833,7 @@ Datetimelike
- Bug in :meth:`Series.cummin` and :meth:`Series.cummax` with timezone-aware dtype incorrectly dropping its timezone (:issue:`15553`)
- Bug in :class:`DatetimeArray`, :class:`TimedeltaArray`, and :class:`PeriodArray` where inplace addition and subtraction did not actually operate inplace (:issue:`24115`)
- Bug in :func:`pandas.to_datetime` when called with ``Series`` storing ``IntegerArray`` raising ``TypeError`` instead of returning ``Series`` (:issue:`30050`)
+- Bug in :func:`date_range` with custom business hours as ``freq`` and given number of ``periods`` (:issue:`30593`)
Timedelta
^^^^^^^^^
diff --git a/pandas/tests/indexes/datetimes/test_date_range.py b/pandas/tests/indexes/datetimes/test_date_range.py
index f9df295284806..4d0beecbbf5d3 100644
--- a/pandas/tests/indexes/datetimes/test_date_range.py
+++ b/pandas/tests/indexes/datetimes/test_date_range.py
@@ -945,3 +945,19 @@ def test_range_with_millisecond_resolution(self, start_end):
result = pd.date_range(start=start, end=end, periods=2, closed="left")
expected = DatetimeIndex([start])
tm.assert_index_equal(result, expected)
+
+
+def test_date_range_with_custom_holidays():
+ # GH 30593
+ freq = pd.offsets.CustomBusinessHour(start="15:00", holidays=["2020-11-26"])
+ result = pd.date_range(start="2020-11-25 15:00", periods=4, freq=freq)
+ expected = pd.DatetimeIndex(
+ [
+ "2020-11-25 15:00:00",
+ "2020-11-25 16:00:00",
+ "2020-11-27 15:00:00",
+ "2020-11-27 16:00:00",
+ ],
+ freq=freq,
+ )
+ tm.assert_index_equal(result, expected)
diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
index f20d385ffbbce..8bb98a271bce8 100644
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -896,7 +896,15 @@ def apply(self, other):
# adjust by business days first
if bd != 0:
- skip_bd = BusinessDay(n=bd)
+ if isinstance(self, _CustomMixin): # GH 30593
+ skip_bd = CustomBusinessDay(
+ n=bd,
+ weekmask=self.weekmask,
+ holidays=self.holidays,
+ calendar=self.calendar,
+ )
+ else:
+ skip_bd = BusinessDay(n=bd)
# midnight business hour may not on BusinessDay
if not self.next_bday.is_on_offset(other):
prev_open = self._prev_opening_time(other)
| - [x] closes #30593
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/30675 | 2020-01-04T06:01:07Z | 2020-01-05T21:39:10Z | 2020-01-05T21:39:10Z | 2020-01-06T02:00:11Z |
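After the fix, `date_range` honors the custom holidays when the offset has to skip business days, as the new test shows. For instance (assumes a pandas release containing this fix):

```python
import pandas as pd

# 2020-11-26 is declared a holiday, so the business hours roll straight
# from Wednesday the 25th to Friday the 27th (GH 30593)
freq = pd.offsets.CustomBusinessHour(start="15:00", holidays=["2020-11-26"])
idx = pd.date_range(start="2020-11-25 15:00", periods=4, freq=freq)
for ts in idx:
    print(ts)
```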
TST: Add more tests for fixed issues | diff --git a/pandas/tests/groupby/test_nth.py b/pandas/tests/groupby/test_nth.py
index 1d8883b60d4a3..0f850f2e94581 100644
--- a/pandas/tests/groupby/test_nth.py
+++ b/pandas/tests/groupby/test_nth.py
@@ -89,6 +89,25 @@ def test_first_last_nth_dtypes(df_mixed_floats):
assert f.dtype == "int64"
+def test_first_strings_timestamps():
+ # GH 11244
+ test = pd.DataFrame(
+ {
+ pd.Timestamp("2012-01-01 00:00:00"): ["a", "b"],
+ pd.Timestamp("2012-01-02 00:00:00"): ["c", "d"],
+ "name": ["e", "e"],
+ "aaaa": ["f", "g"],
+ }
+ )
+ result = test.groupby("name").first()
+ expected = DataFrame(
+ [["a", "c", "f"]],
+ columns=Index([Timestamp("2012-01-01"), Timestamp("2012-01-02"), "aaaa"]),
+ index=Index(["e"], name="name"),
+ )
+ tm.assert_frame_equal(result, expected)
+
+
def test_nth():
df = DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
g = df.groupby("A")
diff --git a/pandas/tests/io/parser/test_c_parser_only.py b/pandas/tests/io/parser/test_c_parser_only.py
index 2d11f619d508b..1737f14e7adf9 100644
--- a/pandas/tests/io/parser/test_c_parser_only.py
+++ b/pandas/tests/io/parser/test_c_parser_only.py
@@ -597,3 +597,14 @@ def test_file_binary_mode(c_parser_only):
with open(path, "rb") as f:
result = parser.read_csv(f, header=None)
tm.assert_frame_equal(result, expected)
+
+
+def test_unix_style_breaks(c_parser_only):
+ # GH 11020
+ parser = c_parser_only
+ with tm.ensure_clean() as path:
+ with open(path, "w", newline="\n") as f:
+ f.write("blah\n\ncol_1,col_2,col_3\n\n")
+ result = parser.read_csv(path, skiprows=2, encoding="utf-8", engine="c")
+ expected = DataFrame(columns=["col_1", "col_2", "col_3"])
+ tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py
index 2e8fe0c266f6b..5382ad84bcca2 100644
--- a/pandas/tests/test_multilevel.py
+++ b/pandas/tests/test_multilevel.py
@@ -1359,6 +1359,30 @@ def test_mixed_depth_drop(self):
)
tm.assert_frame_equal(expected, result)
+ def test_drop_multiindex_other_level_nan(self):
+ # GH 12754
+ df = (
+ DataFrame(
+ {
+ "A": ["one", "one", "two", "two"],
+ "B": [np.nan, 0.0, 1.0, 2.0],
+ "C": ["a", "b", "c", "c"],
+ "D": [1, 2, 3, 4],
+ }
+ )
+ .set_index(["A", "B", "C"])
+ .sort_index()
+ )
+ result = df.drop("c", level="C")
+ expected = DataFrame(
+ [2, 1],
+ columns=["D"],
+ index=pd.MultiIndex.from_tuples(
+ [("one", 0.0, "b"), ("one", np.nan, "a")], names=["A", "B", "C"]
+ ),
+ )
+ tm.assert_frame_equal(result, expected)
+
def test_drop_nonunique(self):
df = DataFrame(
[
@@ -2286,6 +2310,14 @@ def test_sort_index_and_reconstruction_doc_example(self):
tm.assert_frame_equal(result, expected)
+ def test_sort_index_non_existent_label_multiindex(self):
+ # GH 12261
+ df = DataFrame(0, columns=[], index=pd.MultiIndex.from_product([[], []]))
+ df.loc["b", "2"] = 1
+ df.loc["a", "3"] = 1
+ result = df.sort_index().index.is_monotonic
+ assert result is True
+
def test_sort_index_reorder_on_ops(self):
# 15687
df = DataFrame(
| - [x] closes #11244
- [x] closes #11020
- [x] closes #12754
- [x] closes #12261
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/30674 | 2020-01-04T05:23:03Z | 2020-01-04T17:25:15Z | 2020-01-04T17:25:15Z | 2020-01-04T19:01:04Z |
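One of the behaviors these tests lock in — dropping a label on one MultiIndex level while another level contains NaN (GH 12754) — can be reproduced directly on a fixed pandas:

```python
import numpy as np
import pandas as pd

df = (
    pd.DataFrame(
        {
            "A": ["one", "one", "two", "two"],
            "B": [np.nan, 0.0, 1.0, 2.0],
            "C": ["a", "b", "c", "c"],
            "D": [1, 2, 3, 4],
        }
    )
    .set_index(["A", "B", "C"])
    .sort_index()
)

# Dropping "c" on level C keeps the two rows whose C label differs,
# including the row with NaN on level B
result = df.drop("c", level="C")
print(result)
```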
REF: EA value_counts -> _value_counts | diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 39e8e9008a844..cfbbb9e94628f 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -708,8 +708,9 @@ def value_counts(
if is_extension_array_dtype(values):
# handle Categorical and sparse,
- result = Series(values)._values.value_counts(dropna=dropna)
- result.name = name
+ arr = extract_array(values)
+ index, counts = arr._value_counts(dropna=dropna)
+ result = Series(counts, index=index, name=name)
counts = result.values
else:
diff --git a/pandas/core/arrays/boolean.py b/pandas/core/arrays/boolean.py
index 409be244c4327..9daa88c6cabc7 100644
--- a/pandas/core/arrays/boolean.py
+++ b/pandas/core/arrays/boolean.py
@@ -528,11 +528,9 @@ def astype(self, dtype, copy=True):
data = self._coerce_to_ndarray(na_value=na_value)
return astype_nansafe(data, dtype, copy=False)
- def value_counts(self, dropna=True):
+ def _value_counts(self, dropna=True):
"""
- Returns a Series containing counts of each category.
-
- Every category will have an entry, even those with a count of 0.
+ Return a tuple describing the counts for each value.
Parameters
----------
@@ -541,15 +539,14 @@ def value_counts(self, dropna=True):
Returns
-------
- counts : Series
+ index : BooleanArray
+ values : ndarray[int64]
See Also
--------
Series.value_counts
-
"""
-
- from pandas import Index, Series
+ from pandas import Index
# compute counts on the data with no nans
data = self._data[~self._mask]
@@ -571,8 +568,7 @@ def value_counts(self, dropna=True):
index = Index(
np.concatenate([index, np.array([np.nan], dtype=object)]), dtype=object
)
-
- return Series(array, index=index)
+ return index, array
def _values_for_argsort(self) -> np.ndarray:
"""
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 7386c9d0ef1de..04879f79b91fa 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -1423,9 +1423,9 @@ def dropna(self):
return result
- def value_counts(self, dropna=True):
+ def _value_counts(self, dropna=True):
"""
- Return a Series containing counts of each category.
+ Return a tuple describing the counts of each category.
Every category will have an entry, even those with a count of 0.
@@ -1436,17 +1436,21 @@ def value_counts(self, dropna=True):
Returns
-------
- counts : Series
+ index : Categorical
+ values : ndarray[int64]
See Also
--------
Series.value_counts
"""
- from pandas import Series, CategoricalIndex
- code, cat = self._codes, self.categories
- ncat, mask = len(cat), 0 <= code
- ix, clean = np.arange(ncat), mask.all()
+ code = self._codes
+ mask = 0 <= code
+ clean = mask.all()
+
+ cat = self.categories
+ ncat = len(cat)
+ ix = np.arange(ncat)
if dropna or clean:
obs = code if clean else code[mask]
@@ -1455,9 +1459,8 @@ def value_counts(self, dropna=True):
count = np.bincount(np.where(mask, code, ncat))
ix = np.append(ix, -1)
- ix = self._constructor(ix, dtype=self.dtype, fastpath=True)
-
- return Series(count, index=CategoricalIndex(ix), dtype="int64")
+ index = self._constructor(ix, dtype=self.dtype, fastpath=True)
+ return index, count.astype(np.int64)
def _internal_get_values(self):
"""
@@ -2323,7 +2326,11 @@ def describe(self):
description: `DataFrame`
A dataframe with frequency and counts by category.
"""
- counts = self.value_counts(dropna=False)
+ from pandas import Series
+
+ index, values = self._value_counts(dropna=False)
+ counts = Series(values, index=index)
+
freqs = counts / float(counts.sum())
from pandas.core.reshape.concat import concat
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index 2bdd9acaeb70f..3ab6bcb7759a9 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -679,33 +679,34 @@ def repeat(self, repeats, *args, **kwargs):
values = self._data.repeat(repeats)
return type(self)(values.view("i8"), dtype=self.dtype)
- def value_counts(self, dropna=False):
+ def _value_counts(self, dropna: bool = False):
"""
- Return a Series containing counts of unique values.
+ Return an array of unique values and an array of their counts.
Parameters
----------
- dropna : bool, default True
- Don't include counts of NaT values.
+ dropna : bool, default False
Returns
-------
- Series
+ ExtensionArray
+ ndarray[int64]
"""
- from pandas import Series, Index
-
if dropna:
- values = self[~self.isna()]._data
+ values = self[~self.isna()]
else:
- values = self._data
+ values = self
- cls = type(self)
+ arg = values._values_for_factorize()[0]
- result = value_counts(values, sort=False, dropna=dropna)
- index = Index(
- cls(result.index.view("i8"), dtype=self.dtype), name=result.index.name
- )
- return Series(result.values, index=index, name=result.name)
+ result = value_counts(arg, sort=False, dropna=False)
+
+ freq = self.freq if is_period_dtype(self) else None
+ idx = result.index
+ new_index = type(self)(idx, dtype=self.dtype, freq=freq) # type: ignore
+ counts = result.values
+
+ return new_index, counts
def map(self, mapper):
# TODO(GH-23179): Add ExtensionArray.map
diff --git a/pandas/core/arrays/integer.py b/pandas/core/arrays/integer.py
index 0922f4ac6f71d..71bf72fe76f53 100644
--- a/pandas/core/arrays/integer.py
+++ b/pandas/core/arrays/integer.py
@@ -578,11 +578,9 @@ def _ndarray_values(self) -> np.ndarray:
"""
return self._data
- def value_counts(self, dropna=True):
+ def _value_counts(self, dropna=True):
"""
- Returns a Series containing counts of each category.
-
- Every category will have an entry, even those with a count of 0.
+ Return a tuple describing the counts for each value.
Parameters
----------
@@ -591,15 +589,15 @@ def value_counts(self, dropna=True):
Returns
-------
- counts : Series
+ index : IntegerArray
+ values : ndarray[int64]
See Also
--------
Series.value_counts
-
"""
- from pandas import Index, Series
+ from pandas import Index
# compute counts on the data with no nans
data = self._data[~self._mask]
@@ -624,8 +622,7 @@ def value_counts(self, dropna=True):
),
dtype=object,
)
-
- return Series(array, index=index)
+ return index, array
def _values_for_factorize(self) -> Tuple[np.ndarray, Any]:
# TODO: https://github.com/pandas-dev/pandas/issues/30037
diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index 75dd00104db1b..e400c5ec2614d 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -855,25 +855,12 @@ def take(self, indices, allow_fill=False, fill_value=None, axis=None, **kwargs):
return self._shallow_copy(left_take, right_take)
- def value_counts(self, dropna=True):
- """
- Returns a Series containing counts of each interval.
-
- Parameters
- ----------
- dropna : bool, default True
- Don't include counts of NaN.
-
- Returns
- -------
- counts : Series
-
- See Also
- --------
- Series.value_counts
- """
+ def _value_counts(self, dropna=True):
# TODO: implement this is a non-naive way!
- return value_counts(np.asarray(self), dropna=dropna)
+
+ arg = self._values_for_factorize()[0]
+ result = value_counts(arg, dropna=dropna)
+ return result.index, result.values
# Formatting
diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py
index 9838cdfabbb95..8a6509624cac8 100644
--- a/pandas/core/arrays/sparse/array.py
+++ b/pandas/core/arrays/sparse/array.py
@@ -34,7 +34,7 @@
is_string_dtype,
pandas_dtype,
)
-from pandas.core.dtypes.generic import ABCIndexClass, ABCSeries, ABCSparseArray
+from pandas.core.dtypes.generic import ABCSeries, ABCSparseArray
from pandas.core.dtypes.missing import isna, na_value_for_dtype, notna
import pandas.core.algorithms as algos
@@ -696,20 +696,19 @@ def factorize(self, na_sentinel=-1):
uniques = SparseArray(uniques, dtype=self.dtype)
return codes, uniques
- def value_counts(self, dropna=True):
+ def _value_counts(self, dropna=True):
"""
- Returns a Series containing counts of unique values.
+ Return an array of unique values and an array of their counts.
Parameters
----------
- dropna : boolean, default True
- Don't include counts of NaN, even if NaN is in sp_values.
+ dropna : bool, default True
Returns
-------
- counts : Series
+ ndarray
+ ndarray[int64]
"""
- from pandas import Index, Series
keys, counts = algos._value_counts_arraylike(self.sp_values, dropna=dropna)
fcounts = self.sp_index.ngaps
@@ -728,10 +727,7 @@ def value_counts(self, dropna=True):
keys = np.insert(keys, 0, self.fill_value)
counts = np.insert(counts, 0, fcounts)
- if not isinstance(keys, ABCIndexClass):
- keys = Index(keys)
- result = Series(counts, index=keys)
- return result
+ return keys, counts
# --------
# Indexing
diff --git a/pandas/core/arrays/string_.py b/pandas/core/arrays/string_.py
index 0da877fb1ad45..610a5c3db563b 100644
--- a/pandas/core/arrays/string_.py
+++ b/pandas/core/arrays/string_.py
@@ -250,10 +250,11 @@ def astype(self, dtype, copy=True):
def _reduce(self, name, skipna=True, **kwargs):
raise TypeError(f"Cannot perform reduction '{name}' with string dtype")
- def value_counts(self, dropna=False):
+ def _value_counts(self, dropna=False):
from pandas import value_counts
- return value_counts(self._ndarray, dropna=dropna)
+ result = value_counts(self._ndarray, dropna=dropna)
+ return result.index, result.values
# Overrride parent because we have different return types.
@classmethod
diff --git a/pandas/tests/arrays/test_datetimes.py b/pandas/tests/arrays/test_datetimes.py
index bca629ae32270..4428d8027bcfe 100644
--- a/pandas/tests/arrays/test_datetimes.py
+++ b/pandas/tests/arrays/test_datetimes.py
@@ -214,13 +214,16 @@ def test_value_counts_preserves_tz(self):
dti = pd.date_range("2000", periods=2, freq="D", tz="US/Central")
arr = DatetimeArray(dti).repeat([4, 3])
- result = arr.value_counts()
+ index, values = arr._value_counts()
+ result = pd.Series(values, index=index)
# Note: not tm.assert_index_equal, since `freq`s do not match
assert result.index.equals(dti)
arr[-2] = pd.NaT
- result = arr.value_counts()
+ index, values = arr._value_counts()
+ result = pd.Series(values, index=index)
+
expected = pd.Series([1, 4, 2], index=[pd.NaT, dti[0], dti[1]])
tm.assert_series_equal(result, expected)
| Instead of returning a Series, return a tuple with the index and values to be passed to Series.
Where possible I've changed the methods to use `_values_for_factorize` in the hopes of converging on a base class implementation. This is proving elusive; suggestions welcome. cc @TomAugspurger @jorisvandenbossche
xref #22843, #23074. | https://api.github.com/repos/pandas-dev/pandas/pulls/30673 | 2020-01-04T03:15:35Z | 2020-01-18T03:24:36Z | null | 2020-09-21T18:21:36Z |
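The proposed split — each array type computes a raw `(index, counts)` pair, and a single top-level caller wraps it into a `Series` — can be sketched with `np.unique` standing in for the per-array `_value_counts` implementations (illustrative only, not the pandas internals):

```python
import numpy as np
import pandas as pd

values = np.array([1, 1, 2, 3, 3, 3])

# Per-array step: return raw (uniques, counts) instead of a built Series
uniques, counts = np.unique(values, return_counts=True)

# Top-level step: one place constructs the Series from the pair
result = pd.Series(counts, index=pd.Index(uniques), name="counts")
print(result)
```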
CLN: Simplify rolling.py helper functions | diff --git a/pandas/core/window/common.py b/pandas/core/window/common.py
index 5b467b03c1fc2..64ec0e68e11b0 100644
--- a/pandas/core/window/common.py
+++ b/pandas/core/window/common.py
@@ -105,7 +105,7 @@ def _flex_binary_moment(arg1, arg2, f, pairwise=False):
if isinstance(arg1, (np.ndarray, ABCSeries)) and isinstance(
arg2, (np.ndarray, ABCSeries)
):
- X, Y = _prep_binary(arg1, arg2)
+ X, Y = prep_binary(arg1, arg2)
return f(X, Y)
elif isinstance(arg1, ABCDataFrame):
@@ -152,7 +152,7 @@ def dataframe_from_int_dict(data, frame_template):
results[i][j] = results[j][i]
else:
results[i][j] = f(
- *_prep_binary(arg1.iloc[:, i], arg2.iloc[:, j])
+ *prep_binary(arg1.iloc[:, i], arg2.iloc[:, j])
)
from pandas import concat
@@ -213,7 +213,7 @@ def dataframe_from_int_dict(data, frame_template):
raise ValueError("'pairwise' is not True/False")
else:
results = {
- i: f(*_prep_binary(arg1.iloc[:, i], arg2))
+ i: f(*prep_binary(arg1.iloc[:, i], arg2))
for i, col in enumerate(arg1.columns)
}
return dataframe_from_int_dict(results, arg1)
@@ -250,31 +250,10 @@ def _get_center_of_mass(comass, span, halflife, alpha):
return float(comass)
-def _offset(window, center):
+def calculate_center_offset(window):
if not is_integer(window):
window = len(window)
- offset = (window - 1) / 2.0 if center else 0
- try:
- return int(offset)
- except TypeError:
- return offset.astype(int)
-
-
-def _require_min_periods(p):
- def _check_func(minp, window):
- if minp is None:
- return window
- else:
- return max(p, minp)
-
- return _check_func
-
-
-def _use_window(minp, window):
- if minp is None:
- return window
- else:
- return minp
+ return int((window - 1) / 2.0)
def calculate_min_periods(
@@ -312,7 +291,7 @@ def calculate_min_periods(
return max(min_periods, floor)
-def _zsqrt(x):
+def zsqrt(x):
with np.errstate(all="ignore"):
result = np.sqrt(x)
mask = x < 0
@@ -327,7 +306,7 @@ def _zsqrt(x):
return result
-def _prep_binary(arg1, arg2):
+def prep_binary(arg1, arg2):
if not isinstance(arg2, type(arg1)):
raise Exception("Input arrays must be of the same type!")
@@ -336,3 +315,12 @@ def _prep_binary(arg1, arg2):
Y = arg2 + 0 * arg1
return X, Y
+
+
+def get_weighted_roll_func(cfunc: Callable) -> Callable:
+ def func(arg, window, min_periods=None):
+ if min_periods is None:
+ min_periods = len(window)
+ return cfunc(arg, window, min_periods)
+
+ return func
diff --git a/pandas/core/window/ewm.py b/pandas/core/window/ewm.py
index bf05b7825acac..37e3cd42f2115 100644
--- a/pandas/core/window/ewm.py
+++ b/pandas/core/window/ewm.py
@@ -9,8 +9,13 @@
from pandas.core.dtypes.generic import ABCDataFrame
from pandas.core.base import DataError
-from pandas.core.window.common import _doc_template, _get_center_of_mass, _shared_docs
-from pandas.core.window.rolling import _flex_binary_moment, _Rolling, _zsqrt
+from pandas.core.window.common import (
+ _doc_template,
+ _get_center_of_mass,
+ _shared_docs,
+ zsqrt,
+)
+from pandas.core.window.rolling import _flex_binary_moment, _Rolling
_bias_template = """
Parameters
@@ -269,7 +274,7 @@ def std(self, bias=False, *args, **kwargs):
Exponential weighted moving stddev.
"""
nv.validate_window_func("std", args, kwargs)
- return _zsqrt(self.var(bias=bias, **kwargs))
+ return zsqrt(self.var(bias=bias, **kwargs))
vol = std
@@ -390,7 +395,7 @@ def _cov(x, y):
cov = _cov(x_values, y_values)
x_var = _cov(x_values, x_values)
y_var = _cov(y_values, y_values)
- corr = cov / _zsqrt(x_var * y_var)
+ corr = cov / zsqrt(x_var * y_var)
return X._wrap_result(corr)
return _flex_binary_moment(
diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index c3c3e61f222df..02efd4e37472a 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -24,7 +24,6 @@
is_integer_dtype,
is_list_like,
is_scalar,
- is_timedelta64_dtype,
needs_i8_conversion,
)
from pandas.core.dtypes.generic import (
@@ -43,11 +42,11 @@
WindowGroupByMixin,
_doc_template,
_flex_binary_moment,
- _offset,
_shared_docs,
- _use_window,
- _zsqrt,
+ calculate_center_offset,
calculate_min_periods,
+ get_weighted_roll_func,
+ zsqrt,
)
from pandas.core.window.indexers import (
BaseIndexer,
@@ -252,19 +251,6 @@ def __iter__(self):
url = "https://github.com/pandas-dev/pandas/issues/11704"
raise NotImplementedError(f"See issue #11704 {url}")
- def _get_index(self) -> Optional[np.ndarray]:
- """
- Return integer representations as an ndarray if index is frequency.
-
- Returns
- -------
- None or ndarray
- """
-
- if self.is_freq_type:
- return self._on.asi8
- return None
-
def _prep_values(self, values: Optional[np.ndarray] = None) -> np.ndarray:
"""Convert input to numpy arrays for Cython routines"""
if values is None:
@@ -305,17 +291,6 @@ def _wrap_result(self, result, block=None, obj=None):
if isinstance(result, np.ndarray):
- # coerce if necessary
- if block is not None:
- if is_timedelta64_dtype(block.values.dtype):
- # TODO: do we know what result.dtype is at this point?
- # i.e. can we just do an astype?
- from pandas import to_timedelta
-
- result = to_timedelta(result.ravel(), unit="ns").values.reshape(
- result.shape
- )
-
if result.ndim == 1:
from pandas import Series
@@ -384,14 +359,11 @@ def _center_window(self, result, window) -> np.ndarray:
if self.axis > result.ndim - 1:
raise ValueError("Requested axis is larger then no. of argument dimensions")
- offset = _offset(window, True)
+ offset = calculate_center_offset(window)
if offset > 0:
- if isinstance(result, (ABCSeries, ABCDataFrame)):
- result = result.slice_shift(-offset, axis=self.axis)
- else:
- lead_indexer = [slice(None)] * result.ndim
- lead_indexer[self.axis] = slice(offset, None)
- result = np.copy(result[tuple(lead_indexer)])
+ lead_indexer = [slice(None)] * result.ndim
+ lead_indexer[self.axis] = slice(offset, None)
+ result = np.copy(result[tuple(lead_indexer)])
return result
def _get_roll_func(self, func_name: str) -> Callable:
@@ -424,17 +396,15 @@ def _get_cython_func_type(self, func: str) -> Callable:
return self._get_roll_func(f"{func}_variable")
return partial(self._get_roll_func(f"{func}_fixed"), win=self._get_window())
- def _get_window_indexer(
- self, index_as_array: Optional[np.ndarray], window: int
- ) -> BaseIndexer:
+ def _get_window_indexer(self, window: int) -> BaseIndexer:
"""
Return an indexer class that will compute the window start and end bounds
"""
if isinstance(self.window, BaseIndexer):
return self.window
if self.is_freq_type:
- return VariableWindowIndexer(index_array=index_as_array, window_size=window)
- return FixedWindowIndexer(index_array=index_as_array, window_size=window)
+ return VariableWindowIndexer(index_array=self._on.asi8, window_size=window)
+ return FixedWindowIndexer(window_size=window)
def _apply(
self,
@@ -476,8 +446,7 @@ def _apply(
blocks, obj = self._create_blocks()
block_list = list(blocks)
- index_as_array = self._get_index()
- window_indexer = self._get_window_indexer(index_as_array, window)
+ window_indexer = self._get_window_indexer(window)
results = []
exclude: List[Scalar] = []
@@ -498,7 +467,7 @@ def _apply(
continue
# calculation function
- offset = _offset(window, center) if center else 0
+ offset = calculate_center_offset(window) if center else 0
additional_nans = np.array([np.nan] * offset)
if not is_weighted:
@@ -1051,15 +1020,6 @@ def _get_window(
# GH #15662. `False` makes symmetric window, rather than periodic.
return sig.get_window(win_type, window, False).astype(float)
- def _get_weighted_roll_func(
- self, cfunc: Callable, check_minp: Callable, **kwargs
- ) -> Callable:
- def func(arg, window, min_periods=None, closed=None):
- minp = check_minp(min_periods, len(window))
- return cfunc(arg, window, minp, **kwargs)
-
- return func
-
_agg_see_also_doc = dedent(
"""
See Also
@@ -1127,7 +1087,7 @@ def aggregate(self, func, *args, **kwargs):
def sum(self, *args, **kwargs):
nv.validate_window_func("sum", args, kwargs)
window_func = self._get_roll_func("roll_weighted_sum")
- window_func = self._get_weighted_roll_func(window_func, _use_window)
+ window_func = get_weighted_roll_func(window_func)
return self._apply(
window_func, center=self.center, is_weighted=True, name="sum", **kwargs
)
@@ -1137,7 +1097,7 @@ def sum(self, *args, **kwargs):
def mean(self, *args, **kwargs):
nv.validate_window_func("mean", args, kwargs)
window_func = self._get_roll_func("roll_weighted_mean")
- window_func = self._get_weighted_roll_func(window_func, _use_window)
+ window_func = get_weighted_roll_func(window_func)
return self._apply(
window_func, center=self.center, is_weighted=True, name="mean", **kwargs
)
@@ -1147,7 +1107,7 @@ def mean(self, *args, **kwargs):
def var(self, ddof=1, *args, **kwargs):
nv.validate_window_func("var", args, kwargs)
window_func = partial(self._get_roll_func("roll_weighted_var"), ddof=ddof)
- window_func = self._get_weighted_roll_func(window_func, _use_window)
+ window_func = get_weighted_roll_func(window_func)
kwargs.pop("name", None)
return self._apply(
window_func, center=self.center, is_weighted=True, name="var", **kwargs
@@ -1157,7 +1117,7 @@ def var(self, ddof=1, *args, **kwargs):
@Appender(_shared_docs["std"])
def std(self, ddof=1, *args, **kwargs):
nv.validate_window_func("std", args, kwargs)
- return _zsqrt(self.var(ddof=ddof, name="std", **kwargs))
+ return zsqrt(self.var(ddof=ddof, name="std", **kwargs))
class _Rolling(_Window):
@@ -1211,8 +1171,6 @@ class _Rolling_and_Expanding(_Rolling):
def count(self):
blocks, obj = self._create_blocks()
- # Validate the index
- self._get_index()
window = self._get_window()
window = min(window, len(obj)) if not self.center else window
@@ -1307,7 +1265,7 @@ def apply(
kwargs.pop("_level", None)
kwargs.pop("floor", None)
window = self._get_window()
- offset = _offset(window, self.center)
+ offset = calculate_center_offset(window) if self.center else 0
if not is_bool(raw):
raise ValueError("raw parameter must be `True` or `False`")
@@ -1478,7 +1436,7 @@ def std(self, ddof=1, *args, **kwargs):
window_func = self._get_cython_func_type("roll_var")
def zsqrt_func(values, begin, end, min_periods):
- return _zsqrt(window_func(values, begin, end, min_periods, ddof=ddof))
+ return zsqrt(window_func(values, begin, end, min_periods, ddof=ddof))
# ddof passed again for compat with groupby.rolling
return self._apply(
| - [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
Cleans helper functions in `rolling.py` and removes dead code paths. | https://api.github.com/repos/pandas-dev/pandas/pulls/30672 | 2020-01-04T03:06:08Z | 2020-01-05T20:52:35Z | 2020-01-05T20:52:35Z | 2020-04-21T16:29:50Z |
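The diff above replaces the private methods `_get_weighted_roll_func`, `_zsqrt`, and `_offset` with module-level helpers `get_weighted_roll_func`, `zsqrt`, and `calculate_center_offset`. A rough Python sketch of what those helpers might look like — the names come from the diff, but the bodies below are reconstructed from the removed methods and are assumptions, not the exact pandas implementation:

```python
import numpy as np


def calculate_center_offset(window: int) -> int:
    # Half-window offset used to re-center results when center=True.
    return (window - 1) // 2


def zsqrt(x):
    # Square root that maps negative inputs (rounding noise from a
    # variance computation) to 0 instead of NaN.
    with np.errstate(invalid="ignore"):
        result = np.sqrt(x)
    result[x < 0] = 0
    return result


def get_weighted_roll_func(cfunc):
    # Wrap a weighted-window kernel so min_periods defaults to the window
    # length, replacing the old check_minp callback argument.
    def func(arg, window, min_periods=None):
        if min_periods is None:
            min_periods = len(window)
        return cfunc(arg, window, min_periods)

    return func
```

In the diff, callers such as `Window.std` then use `zsqrt(self.var(...))` directly, and the weighted aggregations wrap their Cython kernels with `get_weighted_roll_func` instead of threading a `check_minp` function through.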
JSON Code Cleanup | diff --git a/pandas/_libs/src/ujson/python/objToJSON.c b/pandas/_libs/src/ujson/python/objToJSON.c
index 2192539e24626..5181b0400d7bb 100644
--- a/pandas/_libs/src/ujson/python/objToJSON.c
+++ b/pandas/_libs/src/ujson/python/objToJSON.c
@@ -241,65 +241,39 @@ static int scaleNanosecToUnit(npy_int64 *value, NPY_DATETIMEUNIT unit) {
static PyObject *get_values(PyObject *obj) {
PyObject *values = NULL;
- values = PyObject_GetAttrString(obj, "values");
PRINTMARK();
- if (values && !PyArray_CheckExact(values)) {
-
- if (PyObject_HasAttrString(values, "to_numpy")) {
- values = PyObject_CallMethod(values, "to_numpy", NULL);
- }
-
- if (PyObject_HasAttrString(values, "values")) {
- PyObject *subvals = get_values(values);
- PyErr_Clear();
- PRINTMARK();
- // subvals are sometimes missing a dimension
- if (subvals) {
- PyArrayObject *reshape = (PyArrayObject *)subvals;
- PyObject *shape = PyObject_GetAttrString(obj, "shape");
- PyArray_Dims dims;
- PRINTMARK();
-
- if (!shape || !PyArray_IntpConverter(shape, &dims)) {
- subvals = NULL;
- } else {
- subvals = PyArray_Newshape(reshape, &dims, NPY_ANYORDER);
- PyDimMem_FREE(dims.ptr);
- }
- Py_DECREF(reshape);
- Py_XDECREF(shape);
- }
- Py_DECREF(values);
- values = subvals;
- } else {
- PRINTMARK();
- Py_DECREF(values);
- values = NULL;
- }
- }
-
- if (!values && PyObject_HasAttrString(obj, "_internal_get_values")) {
+ if (PyObject_HasAttrString(obj, "_internal_get_values")) {
PRINTMARK();
values = PyObject_CallMethod(obj, "_internal_get_values", NULL);
- if (values && !PyArray_CheckExact(values)) {
+
+ if (values == NULL) {
+ // Clear so we can subsequently try another method
+ PyErr_Clear();
+ } else if (!PyArray_CheckExact(values)) {
+ // Didn't get a numpy array, so keep trying
PRINTMARK();
Py_DECREF(values);
values = NULL;
}
}
- if (!values && PyObject_HasAttrString(obj, "get_block_values")) {
+ if ((values == NULL) && PyObject_HasAttrString(obj, "get_block_values")) {
PRINTMARK();
values = PyObject_CallMethod(obj, "get_block_values", NULL);
- if (values && !PyArray_CheckExact(values)) {
+
+ if (values == NULL) {
+ // Clear so we can subsequently try another method
+ PyErr_Clear();
+ } else if (!PyArray_CheckExact(values)) {
+ // Didn't get a numpy array, so keep trying
PRINTMARK();
Py_DECREF(values);
values = NULL;
}
}
- if (!values) {
+ if (values == NULL) {
PyObject *typeRepr = PyObject_Repr((PyObject *)Py_TYPE(obj));
PyObject *repr;
PRINTMARK();
@@ -435,8 +409,8 @@ static char *int64ToIso(int64_t value, NPY_DATETIMEUNIT base, size_t *len) {
}
/* JSON callback. returns a char* and mutates the pointer to *len */
-static char *NpyDateTimeToIsoCallback(JSOBJ Py_UNUSED(unused), JSONTypeContext *tc,
- size_t *len) {
+static char *NpyDateTimeToIsoCallback(JSOBJ Py_UNUSED(unused),
+ JSONTypeContext *tc, size_t *len) {
NPY_DATETIMEUNIT base = ((PyObjectEncoder *)tc->encoder)->datetimeUnit;
return int64ToIso(GET_TC(tc)->longValue, base, len);
}
| https://api.github.com/repos/pandas-dev/pandas/pulls/30671 | 2020-01-04T01:20:31Z | 2020-01-04T17:31:21Z | 2020-01-04T17:31:21Z | 2023-04-12T20:17:16Z | |
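The cleaned-up C control flow in `get_values` is a fallback chain: try `_internal_get_values`, clear any error and fall through to `get_block_values`, and only accept a result that is an exact numpy array. Paraphrased in Python for illustration (the real function works through the CPython C API, so this is only a sketch of the logic):

```python
import numpy as np


def get_values(obj):
    # Try extraction methods in order, moving on when one is missing,
    # raises (analogous to PyErr_Clear in the C code), or does not
    # return a plain ndarray.
    for name in ("_internal_get_values", "get_block_values"):
        method = getattr(obj, name, None)
        if method is None:
            continue
        try:
            values = method()
        except Exception:
            continue  # clear the error and try the next method
        if isinstance(values, np.ndarray):
            return values
    raise TypeError(f"{type(obj)!r} or its values are not JSON serializable")
```

The main simplification in the PR is dropping the old `values` / `to_numpy` / reshape dance entirely, so the function only ever consults the two named methods.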
Fix errors='ignore' being ignored in astype #30324 | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index f0147859cae97..d5c3a95fa3b6a 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -5447,7 +5447,7 @@ def astype(
"""
if is_dict_like(dtype):
if self.ndim == 1: # i.e. Series
- if len(dtype) > 1 or self.name not in dtype:
+ if len(dtype) > 1 or self.name not in dtype and errors == "raise":
raise KeyError(
"Only the Series name can be used for "
"the key in Series dtype mappings."
@@ -5456,7 +5456,7 @@ def astype(
return self.astype(new_type, copy, errors)
for col_name in dtype.keys():
- if col_name not in self:
+ if col_name not in self and errors == "raise":
raise KeyError(
"Only a column name can be used for the "
"key in a dtype mappings argument."
diff --git a/pandas/tests/frame/test_dtypes.py b/pandas/tests/frame/test_dtypes.py
index 713d8f3ceeedb..4e048f02582fc 100644
--- a/pandas/tests/frame/test_dtypes.py
+++ b/pandas/tests/frame/test_dtypes.py
@@ -276,7 +276,7 @@ def test_astype_str_float(self):
@pytest.mark.parametrize("dtype_class", [dict, Series])
def test_astype_dict_like(self, dtype_class):
- # GH7271 & GH16717
+ # GH7271 & GH16717 & GH30324
a = Series(date_range("2010-01-04", periods=5))
b = Series(range(5))
c = Series([0.0, 0.2, 0.4, 0.6, 0.8])
@@ -342,6 +342,15 @@ def test_astype_dict_like(self, dtype_class):
tm.assert_frame_equal(df, equiv)
tm.assert_frame_equal(df, original)
+ # GH 30324
+ # If errors=='ignore' then the resulting DataFrame
+ # should be the same as the original DataFrame
+ dt8 = dtype_class({"b": str, 2: str})
+ dt9 = dtype_class({"e": str})
+ df.astype(dt8, errors="ignore")
+ df.astype(dt9, errors="ignore")
+ tm.assert_frame_equal(df, original)
+
def test_astype_duplicate_col(self):
a1 = Series([1, 2, 3, 4, 5], name="a")
b = Series([0.1, 0.2, 0.4, 0.6, 0.8], name="b")
| - [x] closes #30324
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/30670 | 2020-01-04T00:19:24Z | 2020-05-08T16:01:37Z | null | 2020-05-08T16:01:38Z |
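The proposed semantics can be sketched as a small key-validation helper: with `errors="raise"` an unknown key in the dtype mapping raises `KeyError`, while with `errors="ignore"` it is silently skipped. Note this PR was never merged (`merged_at` is null), so treat the sketch below as the *intended* behavior of the patch, not current pandas; the helper name is hypothetical:

```python
def validate_astype_keys(columns, dtype_mapping, errors="raise"):
    # Keep only keys that name real columns; with errors="raise",
    # an unknown key is an error (mirroring the patched check in
    # DataFrame.astype), with errors="ignore" it is dropped.
    valid = {}
    for col, dt in dtype_mapping.items():
        if col not in columns:
            if errors == "raise":
                raise KeyError(
                    "Only a column name can be used for the "
                    "key in a dtype mappings argument."
                )
            continue  # errors == "ignore": skip the bad key
        valid[col] = dt
    return valid
```

One subtlety in the actual diff: in the Series branch, `len(dtype) > 1 or self.name not in dtype and errors == "raise"` parses as `len(dtype) > 1 or (self.name not in dtype and errors == "raise")` because `and` binds tighter than `or`, so a multi-key mapping still raises even with `errors="ignore"`.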
MAINT: Change all pandas links to use HTTPS | diff --git a/RELEASE.md b/RELEASE.md
index efd075dabcba9..7924ffaff561f 100644
--- a/RELEASE.md
+++ b/RELEASE.md
@@ -2,5 +2,5 @@ Release Notes
=============
The list of changes to Pandas between each release can be found
-[here](http://pandas.pydata.org/pandas-docs/stable/whatsnew.html). For full
+[here](https://pandas.pydata.org/pandas-docs/stable/whatsnew/index.html). For full
details, see the commit logs at http://github.com/pandas-dev/pandas.
diff --git a/asv_bench/asv.conf.json b/asv_bench/asv.conf.json
index 902b472304909..cd1a31d4eaf34 100644
--- a/asv_bench/asv.conf.json
+++ b/asv_bench/asv.conf.json
@@ -7,7 +7,7 @@
"project": "pandas",
// The project's homepage
- "project_url": "http://pandas.pydata.org/",
+ "project_url": "https://pandas.pydata.org/",
// The URL of the source code repository for the project being
// benchmarked
diff --git a/conda.recipe/meta.yaml b/conda.recipe/meta.yaml
index f92090fecccf3..47f63c11d0567 100644
--- a/conda.recipe/meta.yaml
+++ b/conda.recipe/meta.yaml
@@ -36,5 +36,5 @@ test:
about:
- home: http://pandas.pydata.org
+ home: https://pandas.pydata.org
license: BSD
diff --git a/doc/source/development/contributing.rst b/doc/source/development/contributing.rst
index 3cafcf3a1c090..0c275f85b72e0 100644
--- a/doc/source/development/contributing.rst
+++ b/doc/source/development/contributing.rst
@@ -434,7 +434,7 @@ The utility script ``scripts/validate_docstrings.py`` can be used to get a csv
summary of the API documentation. And also validate common errors in the docstring
of a specific class, function or method. The summary also compares the list of
methods documented in ``doc/source/api.rst`` (which is used to generate
-the `API Reference <http://pandas.pydata.org/pandas-docs/stable/api.html>`_ page)
+the `API Reference <https://pandas.pydata.org/pandas-docs/stable/api.html>`_ page)
and the actual public methods.
This will identify methods documented in ``doc/source/api.rst`` that are not actually
class methods, and existing methods that are not documented in ``doc/source/api.rst``.
diff --git a/doc/source/getting_started/tutorials.rst b/doc/source/getting_started/tutorials.rst
index 212f3636d0a98..1ed0e8f635b58 100644
--- a/doc/source/getting_started/tutorials.rst
+++ b/doc/source/getting_started/tutorials.rst
@@ -15,7 +15,7 @@ pandas' own :ref:`10 Minutes to pandas<10min>`.
More complex recipes are in the :ref:`Cookbook<cookbook>`.
-A handy pandas `cheat sheet <http://pandas.pydata.org/Pandas_Cheat_Sheet.pdf>`_.
+A handy pandas `cheat sheet <https://pandas.pydata.org/Pandas_Cheat_Sheet.pdf>`_.
Community guides
================
diff --git a/doc/source/user_guide/cookbook.rst b/doc/source/user_guide/cookbook.rst
index 3127dd09b3652..f581d183b9413 100644
--- a/doc/source/user_guide/cookbook.rst
+++ b/doc/source/user_guide/cookbook.rst
@@ -795,7 +795,7 @@ The :ref:`Resample <timeseries.resampling>` docs.
<https://stackoverflow.com/questions/33637312/pandas-grouper-by-frequency-with-completeness-requirement>`__
`Valid frequency arguments to Grouper
-<http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__
+<https://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__
`Grouping using a MultiIndex
<https://stackoverflow.com/questions/41483763/pandas-timegrouper-on-multiindex>`__
diff --git a/doc/source/user_guide/indexing.rst b/doc/source/user_guide/indexing.rst
index 0229331127441..a8cdf4a61073d 100644
--- a/doc/source/user_guide/indexing.rst
+++ b/doc/source/user_guide/indexing.rst
@@ -668,7 +668,7 @@ Current behavior
KeyError in the future, you can use .reindex() as an alternative.
See the documentation here:
- http://pandas.pydata.org/pandas-docs/stable/indexing.html#deprecate-loc-reindex-listlike
+ https://pandas.pydata.org/pandas-docs/stable/indexing.html#deprecate-loc-reindex-listlike
Out[4]:
1 2.0
diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index f9751dae87deb..82e01b62efbb9 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -2629,7 +2629,7 @@ that contain URLs.
url_df = pd.DataFrame({
'name': ['Python', 'Pandas'],
- 'url': ['https://www.python.org/', 'http://pandas.pydata.org']})
+ 'url': ['https://www.python.org/', 'https://pandas.pydata.org']})
print(url_df.to_html(render_links=True))
.. ipython:: python
diff --git a/doc/source/user_guide/style.ipynb b/doc/source/user_guide/style.ipynb
index 633827eb79f46..02550eab86913 100644
--- a/doc/source/user_guide/style.ipynb
+++ b/doc/source/user_guide/style.ipynb
@@ -1063,7 +1063,7 @@
"- Provide an API that is pleasing to use interactively and is \"good enough\" for many tasks\n",
"- Provide the foundations for dedicated libraries to build on\n",
"\n",
- "If you build a great library on top of this, let us know and we'll [link](http://pandas.pydata.org/pandas-docs/stable/ecosystem.html) to it.\n",
+ "If you build a great library on top of this, let us know and we'll [link](https://pandas.pydata.org/pandas-docs/stable/ecosystem.html) to it.\n",
"\n",
"### Subclassing\n",
"\n",
diff --git a/doc/source/whatsnew/v0.15.0.rst b/doc/source/whatsnew/v0.15.0.rst
index b328e549e8899..95e354e425143 100644
--- a/doc/source/whatsnew/v0.15.0.rst
+++ b/doc/source/whatsnew/v0.15.0.rst
@@ -852,7 +852,7 @@ Other notable API changes:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
- See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
+ See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
- ``merge``, ``DataFrame.merge``, and ``ordered_merge`` now return the same type
as the ``left`` argument (:issue:`7737`).
diff --git a/doc/source/whatsnew/v0.15.2.rst b/doc/source/whatsnew/v0.15.2.rst
index b58eabaed6127..292351c709940 100644
--- a/doc/source/whatsnew/v0.15.2.rst
+++ b/doc/source/whatsnew/v0.15.2.rst
@@ -172,7 +172,7 @@ Other enhancements:
4 True True True True
- Added support for ``utcfromtimestamp()``, ``fromtimestamp()``, and ``combine()`` on `Timestamp` class (:issue:`5351`).
-- Added Google Analytics (`pandas.io.ga`) basic documentation (:issue:`8835`). See `here <http://pandas.pydata.org/pandas-docs/version/0.15.2/remote_data.html#remote-data-ga>`__.
+- Added Google Analytics (`pandas.io.ga`) basic documentation (:issue:`8835`). See `here <https://pandas.pydata.org/pandas-docs/version/0.15.2/remote_data.html#remote-data-ga>`__.
- ``Timedelta`` arithmetic returns ``NotImplemented`` in unknown cases, allowing extensions by custom classes (:issue:`8813`).
- ``Timedelta`` now supports arithmetic with ``numpy.ndarray`` objects of the appropriate dtype (numpy 1.8 or newer only) (:issue:`8884`).
- Added ``Timedelta.to_timedelta64()`` method to the public API (:issue:`8884`).
diff --git a/doc/source/whatsnew/v0.16.0.rst b/doc/source/whatsnew/v0.16.0.rst
index fc638e35ed88b..855d0b8695bb1 100644
--- a/doc/source/whatsnew/v0.16.0.rst
+++ b/doc/source/whatsnew/v0.16.0.rst
@@ -528,7 +528,7 @@ Deprecations
`seaborn <http://stanford.edu/~mwaskom/software/seaborn/>`_ for similar
but more refined functionality (:issue:`3445`).
The documentation includes some examples how to convert your existing code
- from ``rplot`` to seaborn `here <http://pandas.pydata.org/pandas-docs/version/0.18.1/visualization.html#trellis-plotting-interface>`__.
+ from ``rplot`` to seaborn `here <https://pandas.pydata.org/pandas-docs/version/0.18.1/visualization.html#trellis-plotting-interface>`__.
- The ``pandas.sandbox.qtpandas`` interface is deprecated and will be removed in a future version.
We refer users to the external package `pandas-qt <https://github.com/datalyze-solutions/pandas-qt>`_. (:issue:`9615`)
diff --git a/doc/source/whatsnew/v0.21.0.rst b/doc/source/whatsnew/v0.21.0.rst
index 2a160eed9f8fd..71969c4de6b02 100644
--- a/doc/source/whatsnew/v0.21.0.rst
+++ b/doc/source/whatsnew/v0.21.0.rst
@@ -470,7 +470,7 @@ Current behavior
KeyError in the future, you can use .reindex() as an alternative.
See the documentation here:
- http://pandas.pydata.org/pandas-docs/stable/indexing.html#deprecate-loc-reindex-listlike
+ https://pandas.pydata.org/pandas-docs/stable/indexing.html#deprecate-loc-reindex-listlike
Out[4]:
1 2.0
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index f4e75364ae932..136c7fa32a6e7 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -273,7 +273,7 @@ class Categorical(ExtensionArray, PandasObject):
Notes
-----
See the `user guide
- <http://pandas.pydata.org/pandas-docs/stable/user_guide/categorical.html>`_
+ <https://pandas.pydata.org/pandas-docs/stable/user_guide/categorical.html>`_
for more.
Examples
diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index cea059fb22be1..e0d4be2273eb2 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -105,7 +105,7 @@
Notes
-----
See the `user guide
-<http://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html#intervalindex>`_
+<https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html#intervalindex>`_
for more.
%(examples)s\
diff --git a/pandas/core/dtypes/concat.py b/pandas/core/dtypes/concat.py
index 7b3e7d4f42121..cd4b5af4588e5 100644
--- a/pandas/core/dtypes/concat.py
+++ b/pandas/core/dtypes/concat.py
@@ -220,7 +220,7 @@ def union_categoricals(
-----
To learn more about categories, see `link
- <http://pandas.pydata.org/pandas-docs/stable/user_guide/categorical.html#unioning>`__
+ <https://pandas.pydata.org/pandas-docs/stable/user_guide/categorical.html#unioning>`__
Examples
--------
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 89034973a6426..f5b0ce1ae77fb 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3653,7 +3653,7 @@ def _check_setitem_copy(self, stacklevel=4, t="setting", force=False):
"A value is trying to be set on a copy of a slice from a "
"DataFrame\n\n"
"See the caveats in the documentation: "
- "http://pandas.pydata.org/pandas-docs/stable/user_guide/"
+ "https://pandas.pydata.org/pandas-docs/stable/user_guide/"
"indexing.html#returning-a-view-versus-a-copy"
)
@@ -3664,7 +3664,7 @@ def _check_setitem_copy(self, stacklevel=4, t="setting", force=False):
"DataFrame.\n"
"Try using .loc[row_indexer,col_indexer] = value "
"instead\n\nSee the caveats in the documentation: "
- "http://pandas.pydata.org/pandas-docs/stable/user_guide/"
+ "https://pandas.pydata.org/pandas-docs/stable/user_guide/"
"indexing.html#returning-a-view-versus-a-copy"
)
@@ -7376,7 +7376,7 @@ def clip(
Notes
-----
See the `user guide
- <http://pandas.pydata.org/pandas-docs/stable/groupby.html>`_ for more.
+ <https://pandas.pydata.org/pandas-docs/stable/groupby.html>`_ for more.
"""
def asfreq(
@@ -7425,7 +7425,7 @@ def asfreq(
Notes
-----
To learn more about the frequency strings, please see `this link
- <http://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases>`__.
+ <https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases>`__.
Examples
--------
@@ -7704,7 +7704,7 @@ def resample(
for more.
To learn more about the offset strings, please see `this link
- <http://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects>`__.
+ <https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects>`__.
Examples
--------
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 5f543181cfb4e..1ba4938d45fc9 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -236,7 +236,7 @@ class providing the base-class of operations.
Notes
-----
See more `here
-<http://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html#piping-function-calls>`_
+<https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html#piping-function-calls>`_
Examples
--------
diff --git a/pandas/core/groupby/grouper.py b/pandas/core/groupby/grouper.py
index 05a5458f60cf5..7e7261130ff4a 100644
--- a/pandas/core/groupby/grouper.py
+++ b/pandas/core/groupby/grouper.py
@@ -53,7 +53,7 @@ class Grouper:
This will groupby the specified frequency if the target selection
(via key or level) is a datetime-like object. For full specification
of available frequencies, please see `here
- <http://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases>`_.
+ <https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases>`_.
axis : str, int, defaults to 0
Number/name of the axis.
sort : bool, default to False
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index 96bfff9a0a09f..c656cc45b5199 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -132,7 +132,7 @@ class CategoricalIndex(Index, accessor.PandasDelegate):
Notes
-----
See the `user guide
- <http://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html#categoricalindex>`_
+ <https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html#categoricalindex>`_
for more.
Examples
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index bc6b8ff845a56..90da563c30109 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -208,7 +208,7 @@ class DatetimeIndex(DatetimeTimedeltaMixin, DatetimeDelegateMixin):
Notes
-----
To learn more about the frequency strings, please see `this link
- <http://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases>`__.
+ <https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases>`__.
"""
_typ = "datetimeindex"
@@ -1212,7 +1212,7 @@ def date_range(
``start`` and ``end`` (closed on both sides).
To learn more about the frequency strings, please see `this link
- <http://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases>`__.
+ <https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases>`__.
Examples
--------
@@ -1383,7 +1383,7 @@ def bdate_range(
desired.
To learn more about the frequency strings, please see `this link
- <http://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases>`__.
+ <https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases>`__.
Examples
--------
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index cae9fa949f711..35cdf840a55b2 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -1269,7 +1269,7 @@ def interval_range(
``start`` and ``end``, inclusively.
To learn more about datetime-like frequency strings, please see `this link
- <http://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases>`__.
+ <https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases>`__.
Examples
--------
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 467d87905df3b..19f36e071fd0e 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -205,7 +205,7 @@ class MultiIndex(Index):
Notes
-----
See the `user guide
- <http://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html>`_
+ <https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html>`_
for more.
Examples
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index 83a65b6505446..1cc37504b675f 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -879,7 +879,7 @@ def period_range(
must be specified.
To learn more about the frequency strings, please see `this link
- <http://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases>`__.
+ <https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases>`__.
Examples
--------
diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index 894b430f1c4fd..fc55e1c530272 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -125,7 +125,7 @@ class TimedeltaIndex(
Notes
-----
To learn more about the frequency strings, please see `this link
- <http://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases>`__.
+ <https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases>`__.
"""
_typ = "timedeltaindex"
@@ -547,7 +547,7 @@ def timedelta_range(
``start`` and ``end`` (closed on both sides).
To learn more about the frequency strings, please see `this link
- <http://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases>`__.
+ <https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases>`__.
Examples
--------
diff --git a/pandas/core/reshape/concat.py b/pandas/core/reshape/concat.py
index a3d9dbfba9e71..8893a37998062 100644
--- a/pandas/core/reshape/concat.py
+++ b/pandas/core/reshape/concat.py
@@ -109,7 +109,7 @@ def concat(
A walkthrough of how this method fits in with other tools for combining
pandas objects can be found `here
- <http://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html>`__.
+ <https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html>`__.
Examples
--------
diff --git a/pandas/core/tools/datetimes.py b/pandas/core/tools/datetimes.py
index 85094ce741134..cfa42d764ee44 100644
--- a/pandas/core/tools/datetimes.py
+++ b/pandas/core/tools/datetimes.py
@@ -645,7 +645,7 @@ def to_datetime(
dtype: datetime64[ns]
If a date does not meet the `timestamp limitations
- <http://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html
+ <https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html
#timeseries-timestamp-limits>`_, passing errors='ignore'
will return the original input instead of raising any exception.
diff --git a/pandas/core/window/ewm.py b/pandas/core/window/ewm.py
index 441a8756f09e0..bf05b7825acac 100644
--- a/pandas/core/window/ewm.py
+++ b/pandas/core/window/ewm.py
@@ -89,7 +89,7 @@ class EWM(_Rolling):
(if adjust is True), and 1-alpha and alpha (if adjust is False).
More details can be found at
- http://pandas.pydata.org/pandas-docs/stable/user_guide/computation.html#exponentially-weighted-windows
+ https://pandas.pydata.org/pandas-docs/stable/user_guide/computation.html#exponentially-weighted-windows
Examples
--------
diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index 176406f953f67..c3c3e61f222df 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -856,7 +856,7 @@ class Window(_Window):
changed to the center of the window by setting ``center=True``.
To learn more about the offsets & frequency strings, please see `this link
- <http://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases>`__.
+ <https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases>`__.
The recognized win_types are:
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
index f5008f0c311ad..7f2aab569ab71 100644
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -479,7 +479,7 @@ def read_json(
chunksize : int, optional
Return JsonReader object for iteration.
See the `line-delimited json docs
- <http://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#line-delimited-json>`_
+ <https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#line-delimited-json>`_
for more information on ``chunksize``.
This can only be passed if `lines=True`.
If this is None, the file will be read into memory all at once.
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 302a1e19e835a..ee4932b4f9194 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -84,7 +84,7 @@
into chunks.
Additional help can be found in the online docs for
-`IO Tools <http://pandas.pydata.org/pandas-docs/stable/user_guide/io.html>`_.
+`IO Tools <https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html>`_.
Parameters
----------
@@ -272,7 +272,7 @@
chunksize : int, optional
Return TextFileReader object for iteration.
See the `IO Tools docs
- <http://pandas.pydata.org/pandas-docs/stable/io.html#io-chunking>`_
+ <https://pandas.pydata.org/pandas-docs/stable/io.html#io-chunking>`_
for more information on ``iterator`` and ``chunksize``.
compression : {{'infer', 'gzip', 'bz2', 'zip', 'xz', None}}, default 'infer'
For on-the-fly decompression of on-disk data. If 'infer' and
@@ -715,7 +715,7 @@ def read_fwf(
into chunks.
Additional help can be found in the `online docs for IO Tools
- <http://pandas.pydata.org/pandas-docs/stable/user_guide/io.html>`_.
+ <https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html>`_.
Parameters
----------
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 4f1541e8d127e..3d2c2159bfbdd 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -1018,7 +1018,7 @@ def put(
data_columns : list, default None
List of columns to create as data columns, or True to
use all columns. See `here
- <http://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#query-via-data-columns>`__.
+ <https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#query-via-data-columns>`__.
encoding : str, default None
Provide an encoding for strings.
dropna : bool, default False, do not write an ALL nan row to
@@ -1138,7 +1138,7 @@ def append(
List of columns to create as indexed data columns for on-disk
queries, or True to use all columns. By default only the axes
of the object are indexed. See `here
- <http://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#query-via-data-columns>`__.
+ <https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#query-via-data-columns>`__.
min_itemsize : dict of columns that specify minimum string sizes
nan_rep : string to use as string nan representation
chunksize : size to chunk the writing
diff --git a/pandas/tests/indexes/multi/test_sorting.py b/pandas/tests/indexes/multi/test_sorting.py
index 062c0ba9f4759..277bd79cfe953 100644
--- a/pandas/tests/indexes/multi/test_sorting.py
+++ b/pandas/tests/indexes/multi/test_sorting.py
@@ -120,7 +120,7 @@ def test_unsortedindex():
def test_unsortedindex_doc_examples():
- # http://pandas.pydata.org/pandas-docs/stable/advanced.html#sorting-a-multiindex # noqa
+ # https://pandas.pydata.org/pandas-docs/stable/advanced.html#sorting-a-multiindex # noqa
dfm = DataFrame(
{"jim": [0, 0, 1, 1], "joe": ["x", "x", "z", "y"], "jolie": np.random.rand(4)}
)
diff --git a/pandas/tests/io/formats/data/html/render_links_false.html b/pandas/tests/io/formats/data/html/render_links_false.html
index 6509a0e985597..6feb403d63051 100644
--- a/pandas/tests/io/formats/data/html/render_links_false.html
+++ b/pandas/tests/io/formats/data/html/render_links_false.html
@@ -11,7 +11,7 @@
<tr>
<th>0</th>
<td>0</td>
- <td>http://pandas.pydata.org/?q1=a&q2=b</td>
+ <td>https://pandas.pydata.org/?q1=a&q2=b</td>
<td>pydata.org</td>
</tr>
<tr>
diff --git a/pandas/tests/io/formats/data/html/render_links_true.html b/pandas/tests/io/formats/data/html/render_links_true.html
index e9cb5632aad1d..3eb53f3160a77 100644
--- a/pandas/tests/io/formats/data/html/render_links_true.html
+++ b/pandas/tests/io/formats/data/html/render_links_true.html
@@ -11,7 +11,7 @@
<tr>
<th>0</th>
<td>0</td>
- <td><a href="http://pandas.pydata.org/?q1=a&q2=b" target="_blank">http://pandas.pydata.org/?q1=a&q2=b</a></td>
+ <td><a href="https://pandas.pydata.org/?q1=a&q2=b" target="_blank">https://pandas.pydata.org/?q1=a&q2=b</a></td>
<td>pydata.org</td>
</tr>
<tr>
diff --git a/pandas/tests/io/formats/test_to_html.py b/pandas/tests/io/formats/test_to_html.py
index d6e0b53d4c176..060072e5103f4 100644
--- a/pandas/tests/io/formats/test_to_html.py
+++ b/pandas/tests/io/formats/test_to_html.py
@@ -688,7 +688,7 @@ def test_to_html_float_format_no_fixed_width(value, float_format, expected, data
def test_to_html_render_links(render_links, expected, datapath):
# GH 2679
data = [
- [0, "http://pandas.pydata.org/?q1=a&q2=b", "pydata.org"],
+ [0, "https://pandas.pydata.org/?q1=a&q2=b", "pydata.org"],
[0, "www.pydata.org", "pydata.org"],
]
df = DataFrame(data, columns=["foo", "bar", None])
| Also update the link to pandas' `whatsnew` page | https://api.github.com/repos/pandas-dev/pandas/pulls/30669 | 2020-01-04T00:00:29Z | 2020-01-04T01:58:24Z | 2020-01-04T01:58:24Z | 2020-01-04T01:58:27Z |
ENH: Implement PeriodIndex.intersection without object-dtype cast | diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index ed5c6b450b05e..ce4715dd5bbf3 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -2408,14 +2408,8 @@ def intersection(self, other, sort=False):
return this.intersection(other, sort=sort)
# TODO(EA): setops-refactor, clean all this up
- if is_period_dtype(self):
- lvals = self._ndarray_values
- else:
- lvals = self._values
- if is_period_dtype(other):
- rvals = other._ndarray_values
- else:
- rvals = other._values
+ lvals = self._values
+ rvals = other._values
if self.is_monotonic and other.is_monotonic:
try:
@@ -2434,18 +2428,13 @@ def intersection(self, other, sort=False):
indexer = indexer[indexer != -1]
taken = other.take(indexer)
+ res_name = get_op_result_name(self, other)
if sort is None:
taken = algos.safe_sort(taken.values)
- if self.name != other.name:
- name = None
- else:
- name = self.name
- return self._shallow_copy(taken, name=name)
-
- if self.name != other.name:
- taken.name = None
+ return self._shallow_copy(taken, name=res_name)
+ taken.name = res_name
return taken
def difference(self, other, sort=None):
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 7bf1a601a0ab6..e1b77ef1fd242 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -721,12 +721,9 @@ def intersection(self, other, sort=False):
# this point, depending on the values.
result._set_freq(None)
- if hasattr(self, "tz"):
- result = self._shallow_copy(
- result._values, name=result.name, tz=result.tz, freq=None
- )
- else:
- result = self._shallow_copy(result._values, name=result.name, freq=None)
+ result = self._shallow_copy(
+ result._data, name=result.name, dtype=result.dtype, freq=None
+ )
if result.freq is None:
result._set_freq("infer")
return result
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index 83a65b6505446..ca2056c6d519e 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -12,6 +12,7 @@
ensure_platform_int,
is_bool_dtype,
is_datetime64_any_dtype,
+ is_dtype_equal,
is_float,
is_float_dtype,
is_integer,
@@ -782,6 +783,9 @@ def join(self, other, how="left", level=None, return_indexers=False, sort=False)
return self._apply_meta(result), lidx, ridx
return self._apply_meta(result)
+ # ------------------------------------------------------------------------
+ # Set Operation Methods
+
def _assert_can_do_setop(self, other):
super()._assert_can_do_setop(other)
@@ -799,6 +803,30 @@ def _wrap_setop_result(self, other, result):
result.name = name
return result
+ def intersection(self, other, sort=False):
+ self._validate_sort_keyword(sort)
+ self._assert_can_do_setop(other)
+ res_name = get_op_result_name(self, other)
+ other = ensure_index(other)
+
+ if self.equals(other):
+ return self._get_reconciled_name_object(other)
+
+ if not is_dtype_equal(self.dtype, other.dtype):
+ # TODO: fastpath for if we have a different PeriodDtype
+ this = self.astype("O")
+ other = other.astype("O")
+ return this.intersection(other, sort=sort)
+
+ i8self = Int64Index._simple_new(self.asi8)
+ i8other = Int64Index._simple_new(other.asi8)
+ i8result = i8self.intersection(i8other, sort=sort)
+
+ result = self._shallow_copy(np.asarray(i8result, dtype=np.int64), name=res_name)
+ return result
+
+ # ------------------------------------------------------------------------
+
def _apply_meta(self, rawarr):
if not isinstance(rawarr, PeriodIndex):
rawarr = PeriodIndex._simple_new(rawarr, freq=self.freq, name=self.name)
| PeriodIndex._simple_new is not as simple as it should be. In order to get it to have the appropriate signature we need to implement some of the set methods correctly. This is the first of those. | https://api.github.com/repos/pandas-dev/pandas/pulls/30666 | 2020-01-03T22:04:07Z | 2020-01-04T17:36:53Z | 2020-01-04T17:36:53Z | 2020-01-04T17:40:05Z |
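The change in this PR can be sketched from the user side: intersecting two `PeriodIndex` objects operates on the underlying int64 ordinals (`asi8`) rather than round-tripping through object dtype, so the result keeps its `PeriodDtype`. A minimal illustration, assuming a pandas build that includes this patch:

```python
import numpy as np
import pandas as pd

# Two overlapping monthly PeriodIndexes (2020-01..2020-05 and 2020-03..2020-07).
left = pd.period_range("2020-01", periods=5, freq="M")
right = pd.period_range("2020-03", periods=5, freq="M")

result = left.intersection(right)

# The result stays period-dtyped instead of falling back to object dtype.
print(result.dtype)  # period[M]

# The patched path is equivalent to intersecting the int64 ordinals directly.
expected_ordinals = np.intersect1d(left.asi8, right.asi8)
print(np.array_equal(result.asi8, expected_ordinals))  # True
```

When the two indexes have different frequencies (hence different `PeriodDtype`s), the implementation still falls back to an object-dtype intersection, as the `is_dtype_equal` check in the diff shows.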
CI: unpin IPython | diff --git a/environment.yml b/environment.yml
index 46fb5e7a19078..404a5b97e316a 100644
--- a/environment.yml
+++ b/environment.yml
@@ -70,7 +70,7 @@ dependencies:
- blosc
- bottleneck>=1.2.1
- ipykernel
- - ipython>=5.6.0,<=7.10.1 # see gh-30527
+ - ipython>=7.11.1
- jinja2 # pandas.Styler
- matplotlib>=2.2.2 # pandas.plotting, Series.plot, DataFrame.plot
- numexpr>=2.6.8
diff --git a/requirements-dev.txt b/requirements-dev.txt
index 9f18bf767ae56..7a6be037e38f8 100644
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -45,7 +45,7 @@ pip
blosc
bottleneck>=1.2.1
ipykernel
-ipython>=5.6.0,<=7.10.1
+ipython>=7.11.1
jinja2
matplotlib>=2.2.2
numexpr>=2.6.8
| Closes https://github.com/pandas-dev/pandas/issues/30537 | https://api.github.com/repos/pandas-dev/pandas/pulls/30665 | 2020-01-03T21:52:39Z | 2020-01-04T17:37:46Z | 2020-01-04T17:37:46Z | 2020-01-04T17:38:13Z |
DOC: Update info regarding pydatastream | diff --git a/doc/source/ecosystem.rst b/doc/source/ecosystem.rst
index c8a60e0e40323..7bd5ba7ecdf0b 100644
--- a/doc/source/ecosystem.rst
+++ b/doc/source/ecosystem.rst
@@ -244,8 +244,8 @@ Pandas DataFrames with timeseries indexes.
`pydatastream <https://github.com/vfilimonov/pydatastream>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
PyDatastream is a Python interface to the
-`Thomson Dataworks Enterprise (DWE/Datastream) <http://dataworks.thomson.com/Dataworks/Enterprise/1.0/>`__
-SOAP API to return indexed Pandas DataFrames with financial data.
+`Refinitiv Datastream (DWS) <https://www.refinitiv.com/en/products/datastream-macroeconomic-analysis>`__
+REST API to return indexed Pandas DataFrames with financial data.
This package requires valid credentials for this API (non free).
`pandaSDMX <https://pandasdmx.readthedocs.io>`__
| Title is self-explanatory | https://api.github.com/repos/pandas-dev/pandas/pulls/30664 | 2020-01-03T21:46:32Z | 2020-01-03T22:46:27Z | 2020-01-03T22:46:27Z | 2020-01-03T22:46:32Z |
MAINT: Change all SO links to use HTTPS | diff --git a/doc/source/user_guide/cookbook.rst b/doc/source/user_guide/cookbook.rst
index 37637bbdb38e6..3127dd09b3652 100644
--- a/doc/source/user_guide/cookbook.rst
+++ b/doc/source/user_guide/cookbook.rst
@@ -406,10 +406,10 @@ Levels
******
`Prepending a level to a multiindex
-<http://stackoverflow.com/questions/14744068/prepend-a-level-to-a-pandas-multiindex>`__
+<https://stackoverflow.com/questions/14744068/prepend-a-level-to-a-pandas-multiindex>`__
`Flatten Hierarchical columns
-<http://stackoverflow.com/questions/14507794/python-pandas-how-to-flatten-a-hierarchical-index-in-columns>`__
+<https://stackoverflow.com/questions/14507794/python-pandas-how-to-flatten-a-hierarchical-index-in-columns>`__
.. _cookbook.missing_data:
@@ -430,13 +430,13 @@ Fill forward a reversed timeseries
df.reindex(df.index[::-1]).ffill()
`cumsum reset at NaN values
-<http://stackoverflow.com/questions/18196811/cumsum-reset-at-nan>`__
+<https://stackoverflow.com/questions/18196811/cumsum-reset-at-nan>`__
Replace
*******
`Using replace with backrefs
-<http://stackoverflow.com/questions/16818871/extracting-value-and-creating-new-column-out-of-it>`__
+<https://stackoverflow.com/questions/16818871/extracting-value-and-creating-new-column-out-of-it>`__
.. _cookbook.grouping:
@@ -446,7 +446,7 @@ Grouping
The :ref:`grouping <groupby>` docs.
`Basic grouping with apply
-<http://stackoverflow.com/questions/15322632/python-pandas-df-groupy-agg-column-reference-in-agg>`__
+<https://stackoverflow.com/questions/15322632/python-pandas-df-groupy-agg-column-reference-in-agg>`__
Unlike agg, apply's callable is passed a sub-DataFrame which gives you access to all the columns
@@ -462,7 +462,7 @@ Unlike agg, apply's callable is passed a sub-DataFrame which gives you access to
df.groupby('animal').apply(lambda subf: subf['size'][subf['weight'].idxmax()])
`Using get_group
-<http://stackoverflow.com/questions/14734533/how-to-access-pandas-groupby-dataframe-by-key>`__
+<https://stackoverflow.com/questions/14734533/how-to-access-pandas-groupby-dataframe-by-key>`__
.. ipython:: python
@@ -470,7 +470,7 @@ Unlike agg, apply's callable is passed a sub-DataFrame which gives you access to
gb.get_group('cat')
`Apply to different items in a group
-<http://stackoverflow.com/questions/15262134/apply-different-functions-to-different-items-in-group-object-python-pandas>`__
+<https://stackoverflow.com/questions/15262134/apply-different-functions-to-different-items-in-group-object-python-pandas>`__
.. ipython:: python
@@ -486,7 +486,7 @@ Unlike agg, apply's callable is passed a sub-DataFrame which gives you access to
expected_df
`Expanding apply
-<http://stackoverflow.com/questions/14542145/reductions-down-a-column-in-pandas>`__
+<https://stackoverflow.com/questions/14542145/reductions-down-a-column-in-pandas>`__
.. ipython:: python
@@ -502,7 +502,7 @@ Unlike agg, apply's callable is passed a sub-DataFrame which gives you access to
`Replacing some values with mean of the rest of a group
-<http://stackoverflow.com/questions/14760757/replacing-values-with-groupby-means>`__
+<https://stackoverflow.com/questions/14760757/replacing-values-with-groupby-means>`__
.. ipython:: python
@@ -516,7 +516,7 @@ Unlike agg, apply's callable is passed a sub-DataFrame which gives you access to
gb.transform(replace)
`Sort groups by aggregated data
-<http://stackoverflow.com/questions/14941366/pandas-sort-by-group-aggregate-and-column>`__
+<https://stackoverflow.com/questions/14941366/pandas-sort-by-group-aggregate-and-column>`__
.. ipython:: python
@@ -533,7 +533,7 @@ Unlike agg, apply's callable is passed a sub-DataFrame which gives you access to
sorted_df
`Create multiple aggregated columns
-<http://stackoverflow.com/questions/14897100/create-multiple-columns-in-pandas-aggregation-function>`__
+<https://stackoverflow.com/questions/14897100/create-multiple-columns-in-pandas-aggregation-function>`__
.. ipython:: python
@@ -550,7 +550,7 @@ Unlike agg, apply's callable is passed a sub-DataFrame which gives you access to
ts
`Create a value counts column and reassign back to the DataFrame
-<http://stackoverflow.com/questions/17709270/i-want-to-create-a-column-of-value-counts-in-my-pandas-dataframe>`__
+<https://stackoverflow.com/questions/17709270/i-want-to-create-a-column-of-value-counts-in-my-pandas-dataframe>`__
.. ipython:: python
@@ -561,7 +561,7 @@ Unlike agg, apply's callable is passed a sub-DataFrame which gives you access to
df
`Shift groups of the values in a column based on the index
-<http://stackoverflow.com/q/23198053/190597>`__
+<https://stackoverflow.com/q/23198053/190597>`__
.. ipython:: python
@@ -575,7 +575,7 @@ Unlike agg, apply's callable is passed a sub-DataFrame which gives you access to
df
`Select row with maximum value from each group
-<http://stackoverflow.com/q/26701849/190597>`__
+<https://stackoverflow.com/q/26701849/190597>`__
.. ipython:: python
@@ -587,7 +587,7 @@ Unlike agg, apply's callable is passed a sub-DataFrame which gives you access to
df_count
`Grouping like Python's itertools.groupby
-<http://stackoverflow.com/q/29142487/846892>`__
+<https://stackoverflow.com/q/29142487/846892>`__
.. ipython:: python
@@ -599,19 +599,19 @@ Expanding data
**************
`Alignment and to-date
-<http://stackoverflow.com/questions/15489011/python-time-series-alignment-and-to-date-functions>`__
+<https://stackoverflow.com/questions/15489011/python-time-series-alignment-and-to-date-functions>`__
`Rolling Computation window based on values instead of counts
-<http://stackoverflow.com/questions/14300768/pandas-rolling-computation-with-window-based-on-values-instead-of-counts>`__
+<https://stackoverflow.com/questions/14300768/pandas-rolling-computation-with-window-based-on-values-instead-of-counts>`__
`Rolling Mean by Time Interval
-<http://stackoverflow.com/questions/15771472/pandas-rolling-mean-by-time-interval>`__
+<https://stackoverflow.com/questions/15771472/pandas-rolling-mean-by-time-interval>`__
Splitting
*********
`Splitting a frame
-<http://stackoverflow.com/questions/13353233/best-way-to-split-a-dataframe-given-an-edge/15449992#15449992>`__
+<https://stackoverflow.com/questions/13353233/best-way-to-split-a-dataframe-given-an-edge/15449992#15449992>`__
Create a list of dataframes, split using a delineation based on logic included in rows.
@@ -635,7 +635,7 @@ Pivot
The :ref:`Pivot <reshaping.pivot>` docs.
`Partial sums and subtotals
-<http://stackoverflow.com/questions/15570099/pandas-pivot-tables-row-subtotals/15574875#15574875>`__
+<https://stackoverflow.com/questions/15570099/pandas-pivot-tables-row-subtotals/15574875#15574875>`__
.. ipython:: python
@@ -649,7 +649,7 @@ The :ref:`Pivot <reshaping.pivot>` docs.
table.stack('City')
`Frequency table like plyr in R
-<http://stackoverflow.com/questions/15589354/frequency-tables-in-pandas-like-plyr-in-r>`__
+<https://stackoverflow.com/questions/15589354/frequency-tables-in-pandas-like-plyr-in-r>`__
.. ipython:: python
@@ -675,7 +675,7 @@ The :ref:`Pivot <reshaping.pivot>` docs.
'Grade': lambda x: sum(x) / len(x)})
`Plot pandas DataFrame with year over year data
-<http://stackoverflow.com/questions/30379789/plot-pandas-data-frame-with-year-over-year-data>`__
+<https://stackoverflow.com/questions/30379789/plot-pandas-data-frame-with-year-over-year-data>`__
To create year and month cross tabulation:
@@ -691,7 +691,7 @@ Apply
*****
`Rolling apply to organize - Turning embedded lists into a MultiIndex frame
-<http://stackoverflow.com/questions/17349981/converting-pandas-dataframe-with-categorical-values-into-binary-values>`__
+<https://stackoverflow.com/questions/17349981/converting-pandas-dataframe-with-categorical-values-into-binary-values>`__
.. ipython:: python
@@ -707,7 +707,7 @@ Apply
df_orgz
`Rolling apply with a DataFrame returning a Series
-<http://stackoverflow.com/questions/19121854/using-rolling-apply-on-a-dataframe-object>`__
+<https://stackoverflow.com/questions/19121854/using-rolling-apply-on-a-dataframe-object>`__
Rolling Apply to multiple columns where function calculates a Series before a Scalar from the Series is returned
@@ -727,7 +727,7 @@ Rolling Apply to multiple columns where function calculates a Series before a Sc
s
`Rolling apply with a DataFrame returning a Scalar
-<http://stackoverflow.com/questions/21040766/python-pandas-rolling-apply-two-column-input-into-function/21045831#21045831>`__
+<https://stackoverflow.com/questions/21040766/python-pandas-rolling-apply-two-column-input-into-function/21045831#21045831>`__
Rolling Apply to multiple columns where function returns a Scalar (Volume Weighted Average Price)
@@ -753,26 +753,26 @@ Timeseries
----------
`Between times
-<http://stackoverflow.com/questions/14539992/pandas-drop-rows-outside-of-time-range>`__
+<https://stackoverflow.com/questions/14539992/pandas-drop-rows-outside-of-time-range>`__
`Using indexer between time
-<http://stackoverflow.com/questions/17559885/pandas-dataframe-mask-based-on-index>`__
+<https://stackoverflow.com/questions/17559885/pandas-dataframe-mask-based-on-index>`__
`Constructing a datetime range that excludes weekends and includes only certain times
-<http://stackoverflow.com/questions/24010830/pandas-generate-sequential-timestamp-with-jump/24014440#24014440?>`__
+<https://stackoverflow.com/questions/24010830/pandas-generate-sequential-timestamp-with-jump/24014440#24014440?>`__
`Vectorized Lookup
-<http://stackoverflow.com/questions/13893227/vectorized-look-up-of-values-in-pandas-dataframe>`__
+<https://stackoverflow.com/questions/13893227/vectorized-look-up-of-values-in-pandas-dataframe>`__
`Aggregation and plotting time series
<http://nipunbatra.github.io/2015/06/timeseries/>`__
Turn a matrix with hours in columns and days in rows into a continuous row sequence in the form of a time series.
`How to rearrange a Python pandas DataFrame?
-<http://stackoverflow.com/questions/15432659/how-to-rearrange-a-python-pandas-dataframe>`__
+<https://stackoverflow.com/questions/15432659/how-to-rearrange-a-python-pandas-dataframe>`__
`Dealing with duplicates when reindexing a timeseries to a specified frequency
-<http://stackoverflow.com/questions/22244383/pandas-df-refill-adding-two-columns-of-different-shape>`__
+<https://stackoverflow.com/questions/22244383/pandas-df-refill-adding-two-columns-of-different-shape>`__
Calculate the first day of the month for each entry in a DatetimeIndex
@@ -804,15 +804,15 @@ The :ref:`Resample <timeseries.resampling>` docs.
<https://github.com/pandas-dev/pandas/issues/3791>`__
`Resampling with custom periods
-<http://stackoverflow.com/questions/15408156/resampling-with-custom-periods>`__
+<https://stackoverflow.com/questions/15408156/resampling-with-custom-periods>`__
`Resample intraday frame without adding new days
-<http://stackoverflow.com/questions/14898574/resample-intrday-pandas-dataframe-without-add-new-days>`__
+<https://stackoverflow.com/questions/14898574/resample-intrday-pandas-dataframe-without-add-new-days>`__
`Resample minute data
-<http://stackoverflow.com/questions/14861023/resampling-minute-data>`__
+<https://stackoverflow.com/questions/14861023/resampling-minute-data>`__
-`Resample with groupby <http://stackoverflow.com/q/18677271/564538>`__
+`Resample with groupby <https://stackoverflow.com/q/18677271/564538>`__
.. _cookbook.merge:
@@ -822,7 +822,7 @@ Merge
The :ref:`Concat <merging.concatenation>` docs. The :ref:`Join <merging.join>` docs.
`Append two dataframes with overlapping index (emulate R rbind)
-<http://stackoverflow.com/questions/14988480/pandas-version-of-rbind>`__
+<https://stackoverflow.com/questions/14988480/pandas-version-of-rbind>`__
.. ipython:: python
@@ -855,16 +855,16 @@ Depending on df construction, ``ignore_index`` may be needed
suffixes=('_L', '_R'))
`How to set the index and join
-<http://stackoverflow.com/questions/14341805/pandas-merge-pd-merge-how-to-set-the-index-and-join>`__
+<https://stackoverflow.com/questions/14341805/pandas-merge-pd-merge-how-to-set-the-index-and-join>`__
`KDB like asof join
-<http://stackoverflow.com/questions/12322289/kdb-like-asof-join-for-timeseries-data-in-pandas/12336039#12336039>`__
+<https://stackoverflow.com/questions/12322289/kdb-like-asof-join-for-timeseries-data-in-pandas/12336039#12336039>`__
`Join with a criteria based on the values
-<http://stackoverflow.com/questions/15581829/how-to-perform-an-inner-or-outer-join-of-dataframes-with-pandas-on-non-simplisti>`__
+<https://stackoverflow.com/questions/15581829/how-to-perform-an-inner-or-outer-join-of-dataframes-with-pandas-on-non-simplisti>`__
`Using searchsorted to merge based on values inside a range
-<http://stackoverflow.com/questions/25125626/pandas-merge-with-logic/2512764>`__
+<https://stackoverflow.com/questions/25125626/pandas-merge-with-logic/2512764>`__
.. _cookbook.plotting:
@@ -874,31 +874,31 @@ Plotting
The :ref:`Plotting <visualization>` docs.
`Make Matplotlib look like R
-<http://stackoverflow.com/questions/14349055/making-matplotlib-graphs-look-like-r-by-default>`__
+<https://stackoverflow.com/questions/14349055/making-matplotlib-graphs-look-like-r-by-default>`__
`Setting x-axis major and minor labels
-<http://stackoverflow.com/questions/12945971/pandas-timeseries-plot-setting-x-axis-major-and-minor-ticks-and-labels>`__
+<https://stackoverflow.com/questions/12945971/pandas-timeseries-plot-setting-x-axis-major-and-minor-ticks-and-labels>`__
`Plotting multiple charts in an ipython notebook
-<http://stackoverflow.com/questions/16392921/make-more-than-one-chart-in-same-ipython-notebook-cell>`__
+<https://stackoverflow.com/questions/16392921/make-more-than-one-chart-in-same-ipython-notebook-cell>`__
`Creating a multi-line plot
-<http://stackoverflow.com/questions/16568964/make-a-multiline-plot-from-csv-file-in-matplotlib>`__
+<https://stackoverflow.com/questions/16568964/make-a-multiline-plot-from-csv-file-in-matplotlib>`__
`Plotting a heatmap
-<http://stackoverflow.com/questions/17050202/plot-timeseries-of-histograms-in-python>`__
+<https://stackoverflow.com/questions/17050202/plot-timeseries-of-histograms-in-python>`__
`Annotate a time-series plot
-<http://stackoverflow.com/questions/11067368/annotate-time-series-plot-in-matplotlib>`__
+<https://stackoverflow.com/questions/11067368/annotate-time-series-plot-in-matplotlib>`__
`Annotate a time-series plot #2
-<http://stackoverflow.com/questions/17891493/annotating-points-from-a-pandas-dataframe-in-matplotlib-plot>`__
+<https://stackoverflow.com/questions/17891493/annotating-points-from-a-pandas-dataframe-in-matplotlib-plot>`__
`Generate Embedded plots in excel files using Pandas, Vincent and xlsxwriter
<https://pandas-xlsxwriter-charts.readthedocs.io/>`__
`Boxplot for each quartile of a stratifying variable
-<http://stackoverflow.com/questions/23232989/boxplot-stratified-by-column-in-python-pandas>`__
+<https://stackoverflow.com/questions/23232989/boxplot-stratified-by-column-in-python-pandas>`__
.. ipython:: python
@@ -918,7 +918,7 @@ Data In/Out
-----------
`Performance comparison of SQL vs HDF5
-<http://stackoverflow.com/questions/16628329/hdf5-and-sqlite-concurrency-compression-i-o-performance>`__
+<https://stackoverflow.com/questions/16628329/hdf5-and-sqlite-concurrency-compression-i-o-performance>`__
.. _cookbook.csv:
@@ -930,25 +930,25 @@ The :ref:`CSV <io.read_csv_table>` docs
`read_csv in action <http://wesmckinney.com/blog/update-on-upcoming-pandas-v0-10-new-file-parser-other-performance-wins/>`__
`appending to a csv
-<http://stackoverflow.com/questions/17134942/pandas-dataframe-output-end-of-csv>`__
+<https://stackoverflow.com/questions/17134942/pandas-dataframe-output-end-of-csv>`__
`Reading a csv chunk-by-chunk
-<http://stackoverflow.com/questions/11622652/large-persistent-dataframe-in-pandas/12193309#12193309>`__
+<https://stackoverflow.com/questions/11622652/large-persistent-dataframe-in-pandas/12193309#12193309>`__
`Reading only certain rows of a csv chunk-by-chunk
-<http://stackoverflow.com/questions/19674212/pandas-data-frame-select-rows-and-clear-memory>`__
+<https://stackoverflow.com/questions/19674212/pandas-data-frame-select-rows-and-clear-memory>`__
`Reading the first few lines of a frame
-<http://stackoverflow.com/questions/15008970/way-to-read-first-few-lines-for-pandas-dataframe>`__
+<https://stackoverflow.com/questions/15008970/way-to-read-first-few-lines-for-pandas-dataframe>`__
Reading a file that is compressed but not by ``gzip/bz2`` (the native compressed formats which ``read_csv`` understands).
This example shows a ``WinZipped`` file, but is a general application of opening the file within a context manager and
using that handle to read.
`See here
-<http://stackoverflow.com/questions/17789907/pandas-convert-winzipped-csv-file-to-data-frame>`__
+<https://stackoverflow.com/questions/17789907/pandas-convert-winzipped-csv-file-to-data-frame>`__
`Inferring dtypes from a file
-<http://stackoverflow.com/questions/15555005/get-inferred-dataframe-types-iteratively-using-chunksize>`__
+<https://stackoverflow.com/questions/15555005/get-inferred-dataframe-types-iteratively-using-chunksize>`__
`Dealing with bad lines
<http://github.com/pandas-dev/pandas/issues/2886>`__
@@ -960,7 +960,7 @@ using that handle to read.
<http://nipunbatra.github.io/2013/06/pandas-reading-csv-with-unix-timestamps-and-converting-to-local-timezone/>`__
`Write a multi-row index CSV without writing duplicates
-<http://stackoverflow.com/questions/17349574/pandas-write-multiindex-rows-with-to-csv>`__
+<https://stackoverflow.com/questions/17349574/pandas-write-multiindex-rows-with-to-csv>`__
.. _cookbook.csv.multiple_files:
@@ -1069,7 +1069,7 @@ SQL
The :ref:`SQL <io.sql>` docs
`Reading from databases with SQL
-<http://stackoverflow.com/questions/10065051/python-pandas-and-databases-like-mysql>`__
+<https://stackoverflow.com/questions/10065051/python-pandas-and-databases-like-mysql>`__
.. _cookbook.excel:
@@ -1079,7 +1079,7 @@ Excel
The :ref:`Excel <io.excel>` docs
`Reading from a filelike handle
-<http://stackoverflow.com/questions/15588713/sheets-of-excel-workbook-from-a-url-into-a-pandas-dataframe>`__
+<https://stackoverflow.com/questions/15588713/sheets-of-excel-workbook-from-a-url-into-a-pandas-dataframe>`__
`Modifying formatting in XlsxWriter output
<http://pbpython.com/improve-pandas-excel-output.html>`__
@@ -1090,7 +1090,7 @@ HTML
****
`Reading HTML tables from a server that cannot handle the default request
-header <http://stackoverflow.com/a/18939272/564538>`__
+header <https://stackoverflow.com/a/18939272/564538>`__
.. _cookbook.hdf:
@@ -1100,54 +1100,54 @@ HDFStore
The :ref:`HDFStores <io.hdf5>` docs
`Simple queries with a Timestamp Index
-<http://stackoverflow.com/questions/13926089/selecting-columns-from-pandas-hdfstore-table>`__
+<https://stackoverflow.com/questions/13926089/selecting-columns-from-pandas-hdfstore-table>`__
`Managing heterogeneous data using a linked multiple table hierarchy
<http://github.com/pandas-dev/pandas/issues/3032>`__
`Merging on-disk tables with millions of rows
-<http://stackoverflow.com/questions/14614512/merging-two-tables-with-millions-of-rows-in-python/14617925#14617925>`__
+<https://stackoverflow.com/questions/14614512/merging-two-tables-with-millions-of-rows-in-python/14617925#14617925>`__
`Avoiding inconsistencies when writing to a store from multiple processes/threads
-<http://stackoverflow.com/a/29014295/2858145>`__
+<https://stackoverflow.com/a/29014295/2858145>`__
De-duplicating a large store by chunks, essentially a recursive reduction operation. Shows a function for taking in data from
csv file and creating a store by chunks, with date parsing as well.
`See here
-<http://stackoverflow.com/questions/16110252/need-to-compare-very-large-files-around-1-5gb-in-python/16110391#16110391>`__
+<https://stackoverflow.com/questions/16110252/need-to-compare-very-large-files-around-1-5gb-in-python/16110391#16110391>`__
`Creating a store chunk-by-chunk from a csv file
-<http://stackoverflow.com/questions/20428355/appending-column-to-frame-of-hdf-file-in-pandas/20428786#20428786>`__
+<https://stackoverflow.com/questions/20428355/appending-column-to-frame-of-hdf-file-in-pandas/20428786#20428786>`__
`Appending to a store, while creating a unique index
-<http://stackoverflow.com/questions/16997048/how-does-one-append-large-amounts-of-data-to-a-pandas-hdfstore-and-get-a-natural/16999397#16999397>`__
+<https://stackoverflow.com/questions/16997048/how-does-one-append-large-amounts-of-data-to-a-pandas-hdfstore-and-get-a-natural/16999397#16999397>`__
`Large Data work flows
-<http://stackoverflow.com/questions/14262433/large-data-work-flows-using-pandas>`__
+<https://stackoverflow.com/questions/14262433/large-data-work-flows-using-pandas>`__
`Reading in a sequence of files, then providing a global unique index to a store while appending
-<http://stackoverflow.com/questions/16997048/how-does-one-append-large-amounts-of-data-to-a-pandas-hdfstore-and-get-a-natural>`__
+<https://stackoverflow.com/questions/16997048/how-does-one-append-large-amounts-of-data-to-a-pandas-hdfstore-and-get-a-natural>`__
`Groupby on a HDFStore with low group density
-<http://stackoverflow.com/questions/15798209/pandas-group-by-query-on-large-data-in-hdfstore>`__
+<https://stackoverflow.com/questions/15798209/pandas-group-by-query-on-large-data-in-hdfstore>`__
`Groupby on a HDFStore with high group density
-<http://stackoverflow.com/questions/25459982/trouble-with-grouby-on-millions-of-keys-on-a-chunked-file-in-python-pandas/25471765#25471765>`__
+<https://stackoverflow.com/questions/25459982/trouble-with-grouby-on-millions-of-keys-on-a-chunked-file-in-python-pandas/25471765#25471765>`__
`Hierarchical queries on a HDFStore
-<http://stackoverflow.com/questions/22777284/improve-query-performance-from-a-large-hdfstore-table-with-pandas/22820780#22820780>`__
+<https://stackoverflow.com/questions/22777284/improve-query-performance-from-a-large-hdfstore-table-with-pandas/22820780#22820780>`__
`Counting with a HDFStore
-<http://stackoverflow.com/questions/20497897/converting-dict-of-dicts-into-pandas-dataframe-memory-issues>`__
+<https://stackoverflow.com/questions/20497897/converting-dict-of-dicts-into-pandas-dataframe-memory-issues>`__
`Troubleshoot HDFStore exceptions
-<http://stackoverflow.com/questions/15488809/how-to-trouble-shoot-hdfstore-exception-cannot-find-the-correct-atom-type>`__
+<https://stackoverflow.com/questions/15488809/how-to-trouble-shoot-hdfstore-exception-cannot-find-the-correct-atom-type>`__
`Setting min_itemsize with strings
-<http://stackoverflow.com/questions/15988871/hdfstore-appendstring-dataframe-fails-when-string-column-contents-are-longer>`__
+<https://stackoverflow.com/questions/15988871/hdfstore-appendstring-dataframe-fails-when-string-column-contents-are-longer>`__
`Using ptrepack to create a completely-sorted-index on a store
-<http://stackoverflow.com/questions/17893370/ptrepack-sortby-needs-full-index>`__
+<https://stackoverflow.com/questions/17893370/ptrepack-sortby-needs-full-index>`__
Storing Attributes to a group node
@@ -1305,7 +1305,7 @@ The :ref:`Timedeltas <timedeltas.timedeltas>` docs.
datetime.timedelta(minutes=5) + s
`Adding and subtracting deltas and dates
-<http://stackoverflow.com/questions/16385785/add-days-to-dates-in-dataframe>`__
+<https://stackoverflow.com/questions/16385785/add-days-to-dates-in-dataframe>`__
.. ipython:: python
@@ -1322,7 +1322,7 @@ The :ref:`Timedeltas <timedeltas.timedeltas>` docs.
df.dtypes
`Another example
-<http://stackoverflow.com/questions/15683588/iterating-through-a-pandas-dataframe>`__
+<https://stackoverflow.com/questions/15683588/iterating-through-a-pandas-dataframe>`__
Values can be set to NaT using np.nan, similar to datetime
diff --git a/pandas/core/dtypes/missing.py b/pandas/core/dtypes/missing.py
index fc22d5be1ca69..f7d61486ce8cd 100644
--- a/pandas/core/dtypes/missing.py
+++ b/pandas/core/dtypes/missing.py
@@ -212,7 +212,7 @@ def _use_inf_as_na(key):
This approach to setting global module values is discussed and
approved here:
- * http://stackoverflow.com/questions/4859217/
+ * https://stackoverflow.com/questions/4859217/
programmatically-creating-variables-in-python/4859312#4859312
"""
flag = get_option(key)
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index 47805207862f0..2a8beacccfa7f 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -1393,7 +1393,7 @@ def _get_unicode_name(name):
def _get_valid_sqlite_name(name):
- # See http://stackoverflow.com/questions/6514274/how-do-you-escape-strings\
+ # See https://stackoverflow.com/questions/6514274/how-do-you-escape-strings\
# -for-sqlite-table-column-names-in-python
# Ensure the string can be encoded as UTF-8.
# Ensure the string does not include any NUL characters.
diff --git a/pandas/tests/groupby/aggregate/test_other.py b/pandas/tests/groupby/aggregate/test_other.py
index 2fe23e15cedc4..f1dece6a1c46b 100644
--- a/pandas/tests/groupby/aggregate/test_other.py
+++ b/pandas/tests/groupby/aggregate/test_other.py
@@ -26,7 +26,7 @@
def test_agg_api():
# GH 6337
- # http://stackoverflow.com/questions/21706030/pandas-groupby-agg-function-column-dtype-error
+ # https://stackoverflow.com/questions/21706030/pandas-groupby-agg-function-column-dtype-error
# different api for agg when passed custom function with mixed frame
df = DataFrame(
diff --git a/pandas/tests/groupby/test_categorical.py b/pandas/tests/groupby/test_categorical.py
index 40f844bdaa7c0..6c2ec945abce1 100644
--- a/pandas/tests/groupby/test_categorical.py
+++ b/pandas/tests/groupby/test_categorical.py
@@ -798,7 +798,7 @@ def test_groupby_empty_with_category():
def test_sort():
- # http://stackoverflow.com/questions/23814368/sorting-pandas-
+ # https://stackoverflow.com/questions/23814368/sorting-pandas-
# categorical-labels-after-groupby
# This should result in a properly sorted Series so that the plot
# has a sorted x axis
diff --git a/pandas/tests/indexing/multiindex/test_chaining_and_caching.py b/pandas/tests/indexing/multiindex/test_chaining_and_caching.py
index 4051d7c5fe374..8bfba8c12e934 100644
--- a/pandas/tests/indexing/multiindex/test_chaining_and_caching.py
+++ b/pandas/tests/indexing/multiindex/test_chaining_and_caching.py
@@ -8,7 +8,7 @@
def test_detect_chained_assignment():
# Inplace ops, originally from:
- # http://stackoverflow.com/questions/20508968/series-fillna-in-a-multiindex-dataframe-does-not-fill-is-this-a-bug
+ # https://stackoverflow.com/questions/20508968/series-fillna-in-a-multiindex-dataframe-does-not-fill-is-this-a-bug
a = [12, 23]
b = [123, None]
c = [1234, 2345]
diff --git a/pandas/tests/indexing/multiindex/test_setitem.py b/pandas/tests/indexing/multiindex/test_setitem.py
index cb6c3a71fecc4..aebd1ad2573ed 100644
--- a/pandas/tests/indexing/multiindex/test_setitem.py
+++ b/pandas/tests/indexing/multiindex/test_setitem.py
@@ -141,7 +141,7 @@ def test_multiindex_setitem(self):
df.loc["bar"] *= 2
# from SO
- # http://stackoverflow.com/questions/24572040/pandas-access-the-level-of-multiindex-for-inplace-operation
+ # https://stackoverflow.com/questions/24572040/pandas-access-the-level-of-multiindex-for-inplace-operation
df_orig = DataFrame.from_dict(
{
"price": {
diff --git a/pandas/tests/indexing/test_chaining_and_caching.py b/pandas/tests/indexing/test_chaining_and_caching.py
index 785448e910217..e845487ffca9a 100644
--- a/pandas/tests/indexing/test_chaining_and_caching.py
+++ b/pandas/tests/indexing/test_chaining_and_caching.py
@@ -273,7 +273,7 @@ def random_text(nobs=100):
str(df)
# from SO:
- # http://stackoverflow.com/questions/24054495/potential-bug-setting-value-for-undefined-column-using-iloc
+ # https://stackoverflow.com/questions/24054495/potential-bug-setting-value-for-undefined-column-using-iloc
df = DataFrame(np.arange(0, 9), columns=["count"])
df["group"] = "b"
diff --git a/pandas/tests/io/parser/test_common.py b/pandas/tests/io/parser/test_common.py
index 1678b1ecf8700..0e408df625ccd 100644
--- a/pandas/tests/io/parser/test_common.py
+++ b/pandas/tests/io/parser/test_common.py
@@ -1131,7 +1131,7 @@ def test_trailing_delimiters(all_parsers):
def test_escapechar(all_parsers):
- # http://stackoverflow.com/questions/13824840/feature-request-for-
+ # https://stackoverflow.com/questions/13824840/feature-request-for-
# pandas-read-csv
data = '''SEARCH_TERM,ACTUAL_URL
"bra tv bord","http://www.ikea.com/se/sv/catalog/categories/departments/living_room/10475/?se%7cps%7cnonbranded%7cvardagsrum%7cgoogle%7ctv_bord"
| Title is self-explanatory | https://api.github.com/repos/pandas-dev/pandas/pulls/30663 | 2020-01-03T21:42:31Z | 2020-01-03T22:37:09Z | 2020-01-03T22:37:09Z | 2020-01-03T22:37:13Z |
TYP: check_untyped_defs plotting._matplotlib.timeseries | diff --git a/pandas/plotting/_matplotlib/__init__.py b/pandas/plotting/_matplotlib/__init__.py
index f9a692b0559ca..27b1d55fe1bd6 100644
--- a/pandas/plotting/_matplotlib/__init__.py
+++ b/pandas/plotting/_matplotlib/__init__.py
@@ -1,3 +1,5 @@
+from typing import TYPE_CHECKING, Dict, Type
+
from pandas.plotting._matplotlib.boxplot import (
BoxPlot,
boxplot,
@@ -26,7 +28,10 @@
)
from pandas.plotting._matplotlib.tools import table
-PLOT_CLASSES = {
+if TYPE_CHECKING:
+ from pandas.plotting._matplotlib.core import MPLPlot # noqa: F401
+
+PLOT_CLASSES: Dict[str, Type["MPLPlot"]] = {
"line": LinePlot,
"bar": BarPlot,
"barh": BarhPlot,
diff --git a/setup.cfg b/setup.cfg
index 1484198929973..9f21aa40f694a 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -331,9 +331,6 @@ check_untyped_defs=False
[mypy-pandas.plotting._matplotlib.misc]
check_untyped_defs=False
-[mypy-pandas.plotting._matplotlib.timeseries]
-check_untyped_defs=False
-
[mypy-pandas.tseries.holiday]
check_untyped_defs=False
| pandas\plotting\_matplotlib\timeseries.py:120: error: "type" has no attribute "_plot" | https://api.github.com/repos/pandas-dev/pandas/pulls/30662 | 2020-01-03T21:01:03Z | 2020-01-03T22:22:34Z | 2020-01-03T22:22:34Z | 2020-01-04T08:04:00Z |
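The `TYPE_CHECKING` guard in the diff above is the standard way to annotate with a class that should not (or cannot) be imported at runtime, e.g. to avoid a circular import. A minimal, self-contained sketch, using a hypothetical module path and a runtime stand-in class:

```python
from typing import TYPE_CHECKING, Dict, Type

if TYPE_CHECKING:
    # Evaluated only by static type checkers such as mypy, never at runtime,
    # so it cannot introduce a circular import. Hypothetical module path.
    from plotting.core import MPLPlot  # noqa: F401


class MPLPlotStub:
    """Runtime stand-in so this sketch runs without the real class."""


# The quoted name is a forward reference: mypy resolves it against the class
# definition above; at runtime typing keeps it as an unresolved ForwardRef.
PLOT_CLASSES: Dict[str, Type["MPLPlotStub"]] = {"line": MPLPlotStub}

print(PLOT_CLASSES["line"].__name__)  # MPLPlotStub
```

In the real diff the mapping is annotated as `Dict[str, Type["MPLPlot"]]`, with `MPLPlot` available only through the guarded import.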
TYP: check_untyped_defs pandas/io/sql.py | diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index 47805207862f0..e950515c54729 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -782,7 +782,8 @@ def read(self, coerce_float=True, parse_dates=None, columns=None, chunksize=None
cols = [self.table.c[n] for n in columns]
if self.index is not None:
- [cols.insert(0, self.table.c[idx]) for idx in self.index[::-1]]
+ for idx in self.index[::-1]:
+ cols.insert(0, self.table.c[idx])
sql_select = select(cols)
else:
sql_select = self.table.select()
@@ -1447,7 +1448,8 @@ def insert_statement(self):
escape = _get_valid_sqlite_name
if self.index is not None:
- [names.insert(0, idx) for idx in self.index[::-1]]
+ for idx in self.index[::-1]:
+ names.insert(0, idx)
bracketed_names = [escape(column) for column in names]
col_names = ",".join(bracketed_names)
diff --git a/setup.cfg b/setup.cfg
index 1484198929973..b9cd025bfe392 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -316,9 +316,6 @@ check_untyped_defs=False
[mypy-pandas.io.sas.sasreader]
check_untyped_defs=False
-[mypy-pandas.io.sql]
-check_untyped_defs=False
-
[mypy-pandas.io.stata]
check_untyped_defs=False
| pandas\io\sql.py:785: error: "insert" of "list" does not return a value
pandas\io\sql.py:1450: error: "insert" of "list" does not return a value
| https://api.github.com/repos/pandas-dev/pandas/pulls/30661 | 2020-01-03T20:45:32Z | 2020-01-04T01:28:08Z | 2020-01-04T01:28:08Z | 2020-01-04T08:02:35Z |
Fix flakey base setitem test | diff --git a/pandas/tests/extension/base/setitem.py b/pandas/tests/extension/base/setitem.py
index 7d50f176edd67..0bb8aede6298c 100644
--- a/pandas/tests/extension/base/setitem.py
+++ b/pandas/tests/extension/base/setitem.py
@@ -189,11 +189,9 @@ def test_setitem_scalar_key_sequence_raise(self, data):
def test_setitem_preserves_views(self, data):
# GH#28150 setitem shouldn't swap the underlying data
- assert data[-1] != data[0] # otherwise test would not be meaningful
-
view1 = data.view()
view2 = data[:]
- data[0] = data[-1]
- assert view1[0] == data[-1]
- assert view2[0] == data[-1]
+ data[0] = data[1]
+ assert view1[0] == data[1]
+ assert view2[0] == data[1]
| This test makes a potentially incorrect assertion about the data provided to the test. We already require that `data[0] != data[1]`, so it can be used instead.
This failed in https://dev.azure.com/pandas-dev/pandas/_build/results?buildId=24731&view=logs&j=a67b4c4c-cd2e-5e3c-a361-de73ac9c05f9&t=33d2fdd0-c376-5f94-e6d3-957bdd23a3b8. | https://api.github.com/repos/pandas-dev/pandas/pulls/30660 | 2020-01-03T20:40:22Z | 2020-01-03T23:59:25Z | 2020-01-03T23:59:25Z | 2020-01-03T23:59:30Z |
TYP: check_untyped_defs io.sas.sasreader | diff --git a/pandas/io/sas/sas7bdat.py b/pandas/io/sas/sas7bdat.py
index d47dd2c71b86f..2bfcd500ee239 100644
--- a/pandas/io/sas/sas7bdat.py
+++ b/pandas/io/sas/sas7bdat.py
@@ -26,6 +26,7 @@
from pandas.io.common import get_filepath_or_buffer
from pandas.io.sas._sas import Parser
import pandas.io.sas.sas_constants as const
+from pandas.io.sas.sasreader import ReaderBase
class _subheader_pointer:
@@ -37,7 +38,7 @@ class _column:
# SAS7BDAT represents a SAS data file in SAS7BDAT format.
-class SAS7BDATReader(abc.Iterator):
+class SAS7BDATReader(ReaderBase, abc.Iterator):
"""
Read SAS files in SAS7BDAT format.
diff --git a/pandas/io/sas/sas_xport.py b/pandas/io/sas/sas_xport.py
index 85b7fd497cedd..7fc1bc6d3eb6c 100644
--- a/pandas/io/sas/sas_xport.py
+++ b/pandas/io/sas/sas_xport.py
@@ -19,6 +19,7 @@
import pandas as pd
from pandas.io.common import get_filepath_or_buffer
+from pandas.io.sas.sasreader import ReaderBase
_correct_line1 = (
"HEADER RECORD*******LIBRARY HEADER RECORD!!!!!!!"
@@ -239,7 +240,7 @@ def _parse_float_vec(vec):
return ieee
-class XportReader(abc.Iterator):
+class XportReader(ReaderBase, abc.Iterator):
__doc__ = _xport_reader_doc
def __init__(
diff --git a/pandas/io/sas/sasreader.py b/pandas/io/sas/sasreader.py
index 27d56d4ede403..6ebcaf6b72c45 100644
--- a/pandas/io/sas/sasreader.py
+++ b/pandas/io/sas/sasreader.py
@@ -1,9 +1,27 @@
"""
Read SAS sas7bdat or xport files.
"""
+
+from abc import ABCMeta, abstractmethod
+
from pandas.io.common import stringify_path
+# TODO: replace with Protocol in Python 3.8
+class ReaderBase(metaclass=ABCMeta):
+ """
+ Protocol for XportReader and SAS7BDATReader classes.
+ """
+
+ @abstractmethod
+ def read(self, nrows=None):
+ pass
+
+ @abstractmethod
+ def close(self):
+ pass
+
+
def read_sas(
filepath_or_buffer,
format=None,
@@ -62,6 +80,7 @@ def read_sas(
else:
raise ValueError("unable to infer format of SAS file")
+ reader: ReaderBase
if format.lower() == "xport":
from pandas.io.sas.sas_xport import XportReader
diff --git a/setup.cfg b/setup.cfg
index f7370b6cef8d6..79fe68b7e2dfe 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -291,9 +291,6 @@ check_untyped_defs=False
[mypy-pandas.io.sas.sas7bdat]
check_untyped_defs=False
-[mypy-pandas.io.sas.sasreader]
-check_untyped_defs=False
-
[mypy-pandas.io.stata]
check_untyped_defs=False
|
pandas\io\sas\sasreader.py:75: error: Incompatible types in assignment (expression has type "SAS7BDATReader", variable has type "XportReader") | https://api.github.com/repos/pandas-dev/pandas/pulls/30659 | 2020-01-03T20:37:01Z | 2020-04-27T08:26:16Z | 2020-04-27T08:26:16Z | 2020-04-27T08:27:24Z |
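The `ReaderBase` class added in this diff uses an abstract base class as a stand-in for a structural `typing.Protocol` (available from Python 3.8, per the TODO). A minimal runnable sketch of the pattern; `DemoReader` is a hypothetical toy implementation, not one of the real SAS readers:

```python
from abc import ABCMeta, abstractmethod


class ReaderBase(metaclass=ABCMeta):
    """Common interface both concrete readers must implement."""

    @abstractmethod
    def read(self, nrows=None):
        ...

    @abstractmethod
    def close(self):
        ...


class DemoReader(ReaderBase):
    # Toy implementation; the real readers parse SAS binary formats.
    def __init__(self, rows):
        self._rows = rows

    def read(self, nrows=None):
        return self._rows if nrows is None else self._rows[:nrows]

    def close(self):
        self._rows = []


# Annotating the variable with the base class lets either concrete reader
# be assigned, which is what resolves the "incompatible types" error above.
reader: ReaderBase = DemoReader([1, 2, 3])
print(reader.read(2))  # [1, 2]
```

With a `Protocol` instead, the readers would not even need to inherit from `ReaderBase`; structural matching on `read`/`close` would suffice.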
TYP: --disallow-any-generics pandas\core\reshape\concat.py | diff --git a/pandas/core/reshape/concat.py b/pandas/core/reshape/concat.py
index a3d9dbfba9e71..1414dbeb9b950 100644
--- a/pandas/core/reshape/concat.py
+++ b/pandas/core/reshape/concat.py
@@ -2,7 +2,7 @@
concat routines
"""
-from typing import List
+from typing import Hashable, List, Optional
import numpy as np
@@ -474,15 +474,10 @@ def _get_result_dim(self) -> int:
def _get_new_axes(self) -> List[Index]:
ndim = self._get_result_dim()
- new_axes: List = [None] * ndim
-
- for i in range(ndim):
- if i == self.axis:
- continue
- new_axes[i] = self._get_comb_axis(i)
-
- new_axes[self.axis] = self._get_concat_axis()
- return new_axes
+ return [
+ self._get_concat_axis() if i == self.axis else self._get_comb_axis(i)
+ for i in range(ndim)
+ ]
def _get_comb_axis(self, i: int) -> Index:
data_axis = self.objs[0]._get_block_manager_axis(i)
@@ -501,7 +496,7 @@ def _get_concat_axis(self) -> Index:
idx = ibase.default_index(len(self.objs))
return idx
elif self.keys is None:
- names: List = [None] * len(self.objs)
+ names: List[Optional[Hashable]] = [None] * len(self.objs)
num = 0
has_names = False
for i, x in enumerate(self.objs):
| xref #30539
pandas\core\reshape\concat.py:477: error: Missing type parameters for generic type "List"
pandas\core\reshape\concat.py:504: error: Missing type parameters for generic type "List" | https://api.github.com/repos/pandas-dev/pandas/pulls/30658 | 2020-01-03T20:21:58Z | 2020-01-04T01:27:21Z | 2020-01-04T01:27:21Z | 2020-01-04T08:03:15Z |
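The refactor above folds a preallocate-then-fill loop into a single comprehension with a conditional expression, which also removes the need for an `Optional` element type in the annotation. A toy sketch of the same shape (the axis-builder helpers are hypothetical stand-ins):

```python
from typing import List


def get_new_axes(ndim: int, axis: int) -> List[str]:
    # Hypothetical stand-ins for _get_concat_axis / _get_comb_axis.
    def concat_axis() -> str:
        return "concat"

    def comb_axis(i: int) -> str:
        return f"comb-{i}"

    # One comprehension replaces "[None] * ndim" plus a fill loop: every
    # slot is produced directly, so no slot is ever None and the element
    # type can be a plain str rather than Optional[str].
    return [concat_axis() if i == axis else comb_axis(i) for i in range(ndim)]


print(get_new_axes(3, axis=1))  # ['comb-0', 'concat', 'comb-2']
```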
CLN: Deprecate pandas.SparseArray for pandas.arrays.SparseArray | diff --git a/doc/source/development/contributing_docstring.rst b/doc/source/development/contributing_docstring.rst
index 34bc5f44eb0c0..d897889ed9eff 100644
--- a/doc/source/development/contributing_docstring.rst
+++ b/doc/source/development/contributing_docstring.rst
@@ -399,7 +399,7 @@ DataFrame:
* DataFrame
* pandas.Index
* pandas.Categorical
-* pandas.SparseArray
+* pandas.arrays.SparseArray
If the exact type is not relevant, but must be compatible with a numpy
array, array-like can be specified. If Any type that can be iterated is
diff --git a/doc/source/getting_started/basics.rst b/doc/source/getting_started/basics.rst
index f47fa48eb6202..4fef5efbd1551 100644
--- a/doc/source/getting_started/basics.rst
+++ b/doc/source/getting_started/basics.rst
@@ -1951,7 +1951,7 @@ documentation sections for more on each type.
| period | :class:`PeriodDtype` | :class:`Period` | :class:`arrays.PeriodArray` | ``'period[<freq>]'``, | :ref:`timeseries.periods` |
| (time spans) | | | | ``'Period[<freq>]'`` | |
+-------------------+---------------------------+--------------------+-------------------------------+-----------------------------------------+-------------------------------+
-| sparse | :class:`SparseDtype` | (none) | :class:`SparseArray` | ``'Sparse'``, ``'Sparse[int]'``, | :ref:`sparse` |
+| sparse | :class:`SparseDtype` | (none) | :class:`arrays.SparseArray` | ``'Sparse'``, ``'Sparse[int]'``, | :ref:`sparse` |
| | | | | ``'Sparse[float]'`` | |
+-------------------+---------------------------+--------------------+-------------------------------+-----------------------------------------+-------------------------------+
| intervals | :class:`IntervalDtype` | :class:`Interval` | :class:`arrays.IntervalArray` | ``'interval'``, ``'Interval'``, | :ref:`advanced.intervalindex` |
diff --git a/doc/source/getting_started/dsintro.rst b/doc/source/getting_started/dsintro.rst
index a07fcbd8b67c4..82d4b5e34e4f8 100644
--- a/doc/source/getting_started/dsintro.rst
+++ b/doc/source/getting_started/dsintro.rst
@@ -741,7 +741,7 @@ implementation takes precedence and a Series is returned.
np.maximum(ser, idx)
NumPy ufuncs are safe to apply to :class:`Series` backed by non-ndarray arrays,
-for example :class:`SparseArray` (see :ref:`sparse.calculation`). If possible,
+for example :class:`arrays.SparseArray` (see :ref:`sparse.calculation`). If possible,
the ufunc is applied without converting the underlying data to an ndarray.
Console display
diff --git a/doc/source/reference/arrays.rst b/doc/source/reference/arrays.rst
index 2c8382e916ed8..c71350ecd73b3 100644
--- a/doc/source/reference/arrays.rst
+++ b/doc/source/reference/arrays.rst
@@ -444,13 +444,13 @@ Sparse data
-----------
Data where a single value is repeated many times (e.g. ``0`` or ``NaN``) may
-be stored efficiently as a :class:`SparseArray`.
+be stored efficiently as a :class:`arrays.SparseArray`.
.. autosummary::
:toctree: api/
:template: autosummary/class_without_autosummary.rst
- SparseArray
+ arrays.SparseArray
.. autosummary::
:toctree: api/
diff --git a/doc/source/user_guide/sparse.rst b/doc/source/user_guide/sparse.rst
index c258a8840b714..8588fac4a18d0 100644
--- a/doc/source/user_guide/sparse.rst
+++ b/doc/source/user_guide/sparse.rst
@@ -15,7 +15,7 @@ can be chosen, including 0) is omitted. The compressed values are not actually s
arr = np.random.randn(10)
arr[2:-2] = np.nan
- ts = pd.Series(pd.SparseArray(arr))
+ ts = pd.Series(pd.arrays.SparseArray(arr))
ts
Notice the dtype, ``Sparse[float64, nan]``. The ``nan`` means that elements in the
@@ -51,7 +51,7 @@ identical to their dense counterparts.
SparseArray
-----------
-:class:`SparseArray` is a :class:`~pandas.api.extensions.ExtensionArray`
+:class:`arrays.SparseArray` is a :class:`~pandas.api.extensions.ExtensionArray`
for storing an array of sparse values (see :ref:`basics.dtypes` for more
on extension arrays). It is a 1-dimensional ndarray-like object storing
only values distinct from the ``fill_value``:
@@ -61,7 +61,7 @@ only values distinct from the ``fill_value``:
arr = np.random.randn(10)
arr[2:5] = np.nan
arr[7:8] = np.nan
- sparr = pd.SparseArray(arr)
+ sparr = pd.arrays.SparseArray(arr)
sparr
A sparse array can be converted to a regular (dense) ndarray with :meth:`numpy.asarray`
@@ -144,7 +144,7 @@ to ``SparseArray`` and get a ``SparseArray`` as a result.
.. ipython:: python
- arr = pd.SparseArray([1., np.nan, np.nan, -2., np.nan])
+ arr = pd.arrays.SparseArray([1., np.nan, np.nan, -2., np.nan])
np.abs(arr)
@@ -153,7 +153,7 @@ the correct dense result.
.. ipython:: python
- arr = pd.SparseArray([1., -1, -1, -2., -1], fill_value=-1)
+ arr = pd.arrays.SparseArray([1., -1, -1, -2., -1], fill_value=-1)
np.abs(arr)
np.abs(arr).to_dense()
@@ -194,7 +194,7 @@ From an array-like, use the regular :class:`Series` or
.. ipython:: python
# New way
- pd.DataFrame({"A": pd.SparseArray([0, 1])})
+ pd.DataFrame({"A": pd.arrays.SparseArray([0, 1])})
From a SciPy sparse matrix, use :meth:`DataFrame.sparse.from_spmatrix`,
@@ -256,10 +256,10 @@ Instead, you'll need to ensure that the values being assigned are sparse
.. ipython:: python
- df = pd.DataFrame({"A": pd.SparseArray([0, 1])})
+ df = pd.DataFrame({"A": pd.arrays.SparseArray([0, 1])})
df['B'] = [0, 0] # remains dense
df['B'].dtype
- df['B'] = pd.SparseArray([0, 0])
+ df['B'] = pd.arrays.SparseArray([0, 0])
df['B'].dtype
The ``SparseDataFrame.default_kind`` and ``SparseDataFrame.default_fill_value`` attributes
diff --git a/doc/source/whatsnew/v0.19.0.rst b/doc/source/whatsnew/v0.19.0.rst
index 6f6446c3f74e1..6eb509a258430 100644
--- a/doc/source/whatsnew/v0.19.0.rst
+++ b/doc/source/whatsnew/v0.19.0.rst
@@ -1225,6 +1225,7 @@ Previously, sparse data were ``float64`` dtype by default, even if all inputs we
As of v0.19.0, sparse data keeps the input dtype, and uses more appropriate ``fill_value`` defaults (``0`` for ``int64`` dtype, ``False`` for ``bool`` dtype).
.. ipython:: python
+ :okwarning:
pd.SparseArray([1, 2, 0, 0], dtype=np.int64)
pd.SparseArray([True, False, False, False])
diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
index b6b91983b8267..b18d022349001 100644
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -354,6 +354,7 @@ When passed DataFrames whose values are sparse, :func:`concat` will now return a
:class:`Series` or :class:`DataFrame` with sparse values, rather than a :class:`SparseDataFrame` (:issue:`25702`).
.. ipython:: python
+ :okwarning:
df = pd.DataFrame({"A": pd.SparseArray([0, 1])})
@@ -910,6 +911,7 @@ by a ``Series`` or ``DataFrame`` with sparse values.
**New way**
.. ipython:: python
+ :okwarning:
df = pd.DataFrame({"A": pd.SparseArray([0, 0, 1, 2])})
df.dtypes
diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
index 014bd22aa2dab..7532eae6affe1 100755
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -578,6 +578,7 @@ Deprecations
- :meth:`DataFrame.to_stata`, :meth:`DataFrame.to_feather`, and :meth:`DataFrame.to_parquet` argument "fname" is deprecated, use "path" instead (:issue:`23574`)
- The deprecated internal attributes ``_start``, ``_stop`` and ``_step`` of :class:`RangeIndex` now raise a ``FutureWarning`` instead of a ``DeprecationWarning`` (:issue:`26581`)
- The ``pandas.util.testing`` module has been deprecated. Use the public API in ``pandas.testing`` documented at :ref:`api.general.testing` (:issue:`16232`).
+- ``pandas.SparseArray`` has been deprecated. Use ``pandas.arrays.SparseArray`` (:class:`arrays.SparseArray`) instead. (:issue:`30642`)
**Selecting Columns from a Grouped DataFrame**
diff --git a/pandas/__init__.py b/pandas/__init__.py
index 0c6c1c0433fb9..10d65e41d3030 100644
--- a/pandas/__init__.py
+++ b/pandas/__init__.py
@@ -115,7 +115,7 @@
DataFrame,
)
-from pandas.core.arrays.sparse import SparseArray, SparseDtype
+from pandas.core.arrays.sparse import SparseDtype
from pandas.tseries.api import infer_freq
from pandas.tseries import offsets
@@ -246,6 +246,19 @@ class Panel:
return type(name, (), {})
+ elif name == "SparseArray":
+
+ warnings.warn(
+ "The pandas.SparseArray class is deprecated "
+ "and will be removed from pandas in a future version. "
+ "Use pandas.arrays.SparseArray instead.",
+ FutureWarning,
+ stacklevel=2,
+ )
+ from pandas.core.arrays.sparse import SparseArray as _SparseArray
+
+ return _SparseArray
+
raise AttributeError(f"module 'pandas' has no attribute '{name}'")
@@ -308,6 +321,9 @@ def __getattr__(self, item):
datetime = __Datetime().datetime
+ class SparseArray:
+ pass
+
# module level doc-string
__doc__ = """
diff --git a/pandas/_testing.py b/pandas/_testing.py
index 2ebebc5d5e10a..0b3f96e5a56c6 100644
--- a/pandas/_testing.py
+++ b/pandas/_testing.py
@@ -1492,7 +1492,7 @@ def assert_sp_array_equal(
block indices.
"""
- _check_isinstance(left, right, pd.SparseArray)
+ _check_isinstance(left, right, pd.arrays.SparseArray)
assert_numpy_array_equal(left.sp_values, right.sp_values, check_dtype=check_dtype)
diff --git a/pandas/core/arrays/sparse/accessor.py b/pandas/core/arrays/sparse/accessor.py
index c207b96a8d308..eb4d7cdf2709f 100644
--- a/pandas/core/arrays/sparse/accessor.py
+++ b/pandas/core/arrays/sparse/accessor.py
@@ -163,7 +163,7 @@ def to_dense(self):
Examples
--------
- >>> series = pd.Series(pd.SparseArray([0, 1, 0]))
+ >>> series = pd.Series(pd.arrays.SparseArray([0, 1, 0]))
>>> series
0 0
1 1
@@ -216,7 +216,7 @@ def from_spmatrix(cls, data, index=None, columns=None):
-------
DataFrame
Each column of the DataFrame is stored as a
- :class:`SparseArray`.
+ :class:`arrays.SparseArray`.
Examples
--------
@@ -251,7 +251,7 @@ def to_dense(self):
Examples
--------
- >>> df = pd.DataFrame({"A": pd.SparseArray([0, 1, 0])})
+ >>> df = pd.DataFrame({"A": pd.arrays.SparseArray([0, 1, 0])})
>>> df.sparse.to_dense()
A
0 0
diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py
index adf10642f337a..9838cdfabbb95 100644
--- a/pandas/core/arrays/sparse/array.py
+++ b/pandas/core/arrays/sparse/array.py
@@ -403,7 +403,7 @@ def from_spmatrix(cls, data):
--------
>>> import scipy.sparse
>>> mat = scipy.sparse.coo_matrix((4, 1))
- >>> pd.SparseArray.from_spmatrix(mat)
+ >>> pd.arrays.SparseArray.from_spmatrix(mat)
[0.0, 0.0, 0.0, 0.0]
Fill: 0.0
IntIndex
@@ -1079,7 +1079,7 @@ def map(self, mapper):
Examples
--------
- >>> arr = pd.SparseArray([0, 1, 2])
+ >>> arr = pd.arrays.SparseArray([0, 1, 2])
>>> arr.apply(lambda x: x + 10)
[10, 11, 12]
Fill: 10
diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py
index 8fc8b8300d21c..a716bc8e0a337 100644
--- a/pandas/core/dtypes/common.py
+++ b/pandas/core/dtypes/common.py
@@ -269,9 +269,9 @@ def is_sparse(arr) -> bool:
--------
Returns `True` if the parameter is a 1-D pandas sparse array.
- >>> is_sparse(pd.SparseArray([0, 0, 1, 0]))
+ >>> is_sparse(pd.arrays.SparseArray([0, 0, 1, 0]))
True
- >>> is_sparse(pd.Series(pd.SparseArray([0, 0, 1, 0])))
+ >>> is_sparse(pd.Series(pd.arrays.SparseArray([0, 0, 1, 0])))
True
Returns `False` if the parameter is not sparse.
@@ -318,7 +318,7 @@ def is_scipy_sparse(arr) -> bool:
>>> from scipy.sparse import bsr_matrix
>>> is_scipy_sparse(bsr_matrix([1, 2, 3]))
True
- >>> is_scipy_sparse(pd.SparseArray([1, 2, 3]))
+ >>> is_scipy_sparse(pd.arrays.SparseArray([1, 2, 3]))
False
"""
@@ -1467,7 +1467,7 @@ def is_bool_dtype(arr_or_dtype) -> bool:
True
>>> is_bool_dtype(pd.Categorical([True, False]))
True
- >>> is_bool_dtype(pd.SparseArray([True, False]))
+ >>> is_bool_dtype(pd.arrays.SparseArray([True, False]))
True
"""
if arr_or_dtype is None:
@@ -1529,7 +1529,7 @@ def is_extension_type(arr) -> bool:
True
>>> is_extension_type(pd.Series(cat))
True
- >>> is_extension_type(pd.SparseArray([1, 2, 3]))
+ >>> is_extension_type(pd.arrays.SparseArray([1, 2, 3]))
True
>>> from scipy.sparse import bsr_matrix
>>> is_extension_type(bsr_matrix([1, 2, 3]))
diff --git a/pandas/tests/api/test_api.py b/pandas/tests/api/test_api.py
index 82bf0c0fff9c0..bdb4e813023b6 100644
--- a/pandas/tests/api/test_api.py
+++ b/pandas/tests/api/test_api.py
@@ -67,7 +67,6 @@ class TestPDApi(Base):
"RangeIndex",
"UInt64Index",
"Series",
- "SparseArray",
"SparseDtype",
"StringDtype",
"Timedelta",
@@ -91,7 +90,7 @@ class TestPDApi(Base):
"NamedAgg",
]
if not compat.PY37:
- classes.extend(["Panel", "SparseSeries", "SparseDataFrame"])
+ classes.extend(["Panel", "SparseSeries", "SparseDataFrame", "SparseArray"])
deprecated_modules.extend(["np", "datetime"])
# these are already deprecated; awaiting removal
diff --git a/pandas/tests/arrays/sparse/test_accessor.py b/pandas/tests/arrays/sparse/test_accessor.py
index e40535697cf1b..4615eca837393 100644
--- a/pandas/tests/arrays/sparse/test_accessor.py
+++ b/pandas/tests/arrays/sparse/test_accessor.py
@@ -7,6 +7,7 @@
import pandas as pd
import pandas._testing as tm
+from pandas.core.arrays.sparse import SparseArray, SparseDtype
class TestSeriesAccessor:
@@ -31,7 +32,7 @@ def test_accessor_raises(self):
def test_from_spmatrix(self, format, labels, dtype):
import scipy.sparse
- sp_dtype = pd.SparseDtype(dtype, np.array(0, dtype=dtype).item())
+ sp_dtype = SparseDtype(dtype, np.array(0, dtype=dtype).item())
mat = scipy.sparse.eye(10, format=format, dtype=dtype)
result = pd.DataFrame.sparse.from_spmatrix(mat, index=labels, columns=labels)
@@ -48,7 +49,7 @@ def test_from_spmatrix(self, format, labels, dtype):
def test_from_spmatrix_columns(self, columns):
import scipy.sparse
- dtype = pd.SparseDtype("float64", 0.0)
+ dtype = SparseDtype("float64", 0.0)
mat = scipy.sparse.random(10, 2, density=0.5)
result = pd.DataFrame.sparse.from_spmatrix(mat, columns=columns)
@@ -67,9 +68,9 @@ def test_to_coo(self):
def test_to_dense(self):
df = pd.DataFrame(
{
- "A": pd.SparseArray([1, 0], dtype=pd.SparseDtype("int64", 0)),
- "B": pd.SparseArray([1, 0], dtype=pd.SparseDtype("int64", 1)),
- "C": pd.SparseArray([1.0, 0.0], dtype=pd.SparseDtype("float64", 0.0)),
+ "A": SparseArray([1, 0], dtype=SparseDtype("int64", 0)),
+ "B": SparseArray([1, 0], dtype=SparseDtype("int64", 1)),
+ "C": SparseArray([1.0, 0.0], dtype=SparseDtype("float64", 0.0)),
},
index=["b", "a"],
)
@@ -82,8 +83,8 @@ def test_to_dense(self):
def test_density(self):
df = pd.DataFrame(
{
- "A": pd.SparseArray([1, 0, 2, 1], fill_value=0),
- "B": pd.SparseArray([0, 1, 1, 1], fill_value=0),
+ "A": SparseArray([1, 0, 2, 1], fill_value=0),
+ "B": SparseArray([0, 1, 1, 1], fill_value=0),
}
)
res = df.sparse.density
@@ -99,9 +100,7 @@ def test_series_from_coo(self, dtype, dense_index):
A = scipy.sparse.eye(3, format="coo", dtype=dtype)
result = pd.Series.sparse.from_coo(A, dense_index=dense_index)
index = pd.MultiIndex.from_tuples([(0, 0), (1, 1), (2, 2)])
- expected = pd.Series(
- pd.SparseArray(np.array([1, 1, 1], dtype=dtype)), index=index
- )
+ expected = pd.Series(SparseArray(np.array([1, 1, 1], dtype=dtype)), index=index)
if dense_index:
expected = expected.reindex(pd.MultiIndex.from_product(index.levels))
diff --git a/pandas/tests/arrays/sparse/test_arithmetics.py b/pandas/tests/arrays/sparse/test_arithmetics.py
index b23e011a92ed9..76442a63ccb0f 100644
--- a/pandas/tests/arrays/sparse/test_arithmetics.py
+++ b/pandas/tests/arrays/sparse/test_arithmetics.py
@@ -6,7 +6,7 @@
import pandas as pd
import pandas._testing as tm
from pandas.core import ops
-from pandas.core.arrays.sparse import SparseDtype
+from pandas.core.arrays.sparse import SparseArray, SparseDtype
@pytest.fixture(params=["integer", "block"])
@@ -24,7 +24,7 @@ def mix(request):
class TestSparseArrayArithmetics:
_base = np.array
- _klass = pd.SparseArray
+ _klass = SparseArray
def _assert(self, a, b):
tm.assert_numpy_array_equal(a, b)
@@ -391,15 +391,15 @@ def test_mixed_array_comparison(self, kind):
@pytest.mark.parametrize("op", [operator.eq, operator.add])
def test_with_list(op):
- arr = pd.SparseArray([0, 1], fill_value=0)
+ arr = SparseArray([0, 1], fill_value=0)
result = op(arr, [0, 1])
- expected = op(arr, pd.SparseArray([0, 1]))
+ expected = op(arr, SparseArray([0, 1]))
tm.assert_sp_array_equal(result, expected)
def test_with_dataframe():
# GH#27910
- arr = pd.SparseArray([0, 1], fill_value=0)
+ arr = SparseArray([0, 1], fill_value=0)
df = pd.DataFrame([[1, 2], [3, 4]])
result = arr.__add__(df)
assert result is NotImplemented
@@ -407,7 +407,7 @@ def test_with_dataframe():
def test_with_zerodim_ndarray():
# GH#27910
- arr = pd.SparseArray([0, 1], fill_value=0)
+ arr = SparseArray([0, 1], fill_value=0)
result = arr * np.array(2)
expected = arr * 2
@@ -416,23 +416,23 @@ def test_with_zerodim_ndarray():
@pytest.mark.parametrize("ufunc", [np.abs, np.exp])
@pytest.mark.parametrize(
- "arr", [pd.SparseArray([0, 0, -1, 1]), pd.SparseArray([None, None, -1, 1])]
+ "arr", [SparseArray([0, 0, -1, 1]), SparseArray([None, None, -1, 1])]
)
def test_ufuncs(ufunc, arr):
result = ufunc(arr)
fill_value = ufunc(arr.fill_value)
- expected = pd.SparseArray(ufunc(np.asarray(arr)), fill_value=fill_value)
+ expected = SparseArray(ufunc(np.asarray(arr)), fill_value=fill_value)
tm.assert_sp_array_equal(result, expected)
@pytest.mark.parametrize(
"a, b",
[
- (pd.SparseArray([0, 0, 0]), np.array([0, 1, 2])),
- (pd.SparseArray([0, 0, 0], fill_value=1), np.array([0, 1, 2])),
- (pd.SparseArray([0, 0, 0], fill_value=1), np.array([0, 1, 2])),
- (pd.SparseArray([0, 0, 0], fill_value=1), np.array([0, 1, 2])),
- (pd.SparseArray([0, 0, 0], fill_value=1), np.array([0, 1, 2])),
+ (SparseArray([0, 0, 0]), np.array([0, 1, 2])),
+ (SparseArray([0, 0, 0], fill_value=1), np.array([0, 1, 2])),
+ (SparseArray([0, 0, 0], fill_value=1), np.array([0, 1, 2])),
+ (SparseArray([0, 0, 0], fill_value=1), np.array([0, 1, 2])),
+ (SparseArray([0, 0, 0], fill_value=1), np.array([0, 1, 2])),
],
)
@pytest.mark.parametrize("ufunc", [np.add, np.greater])
@@ -440,12 +440,12 @@ def test_binary_ufuncs(ufunc, a, b):
# can't say anything about fill value here.
result = ufunc(a, b)
expected = ufunc(np.asarray(a), np.asarray(b))
- assert isinstance(result, pd.SparseArray)
+ assert isinstance(result, SparseArray)
tm.assert_numpy_array_equal(np.asarray(result), expected)
def test_ndarray_inplace():
- sparray = pd.SparseArray([0, 2, 0, 0])
+ sparray = SparseArray([0, 2, 0, 0])
ndarray = np.array([0, 1, 2, 3])
ndarray += sparray
expected = np.array([0, 3, 2, 3])
@@ -453,19 +453,19 @@ def test_ndarray_inplace():
def test_sparray_inplace():
- sparray = pd.SparseArray([0, 2, 0, 0])
+ sparray = SparseArray([0, 2, 0, 0])
ndarray = np.array([0, 1, 2, 3])
sparray += ndarray
- expected = pd.SparseArray([0, 3, 2, 3], fill_value=0)
+ expected = SparseArray([0, 3, 2, 3], fill_value=0)
tm.assert_sp_array_equal(sparray, expected)
@pytest.mark.parametrize("fill_value", [True, False])
def test_invert(fill_value):
arr = np.array([True, False, False, True])
- sparray = pd.SparseArray(arr, fill_value=fill_value)
+ sparray = SparseArray(arr, fill_value=fill_value)
result = ~sparray
- expected = pd.SparseArray(~arr, fill_value=not fill_value)
+ expected = SparseArray(~arr, fill_value=not fill_value)
tm.assert_sp_array_equal(result, expected)
@@ -473,7 +473,7 @@ def test_invert(fill_value):
@pytest.mark.parametrize("op", [operator.pos, operator.neg])
def test_unary_op(op, fill_value):
arr = np.array([0, 1, np.nan, 2])
- sparray = pd.SparseArray(arr, fill_value=fill_value)
+ sparray = SparseArray(arr, fill_value=fill_value)
result = op(sparray)
- expected = pd.SparseArray(op(arr), fill_value=op(fill_value))
+ expected = SparseArray(op(arr), fill_value=op(fill_value))
tm.assert_sp_array_equal(result, expected)
diff --git a/pandas/tests/arrays/sparse/test_array.py b/pandas/tests/arrays/sparse/test_array.py
index 4cb6d48fa6ec0..baca18239b929 100644
--- a/pandas/tests/arrays/sparse/test_array.py
+++ b/pandas/tests/arrays/sparse/test_array.py
@@ -470,7 +470,7 @@ def test_astype(self):
arr.astype("Sparse[i8]")
def test_astype_bool(self):
- a = pd.SparseArray([1, 0, 0, 1], dtype=SparseDtype(int, 0))
+ a = SparseArray([1, 0, 0, 1], dtype=SparseDtype(int, 0))
result = a.astype(bool)
expected = SparseArray([True, 0, 0, True], dtype=SparseDtype(bool, 0))
tm.assert_sp_array_equal(result, expected)
@@ -682,7 +682,7 @@ def test_getslice_tuple(self):
dense[4:, :]
def test_boolean_slice_empty(self):
- arr = pd.SparseArray([0, 1, 2])
+ arr = SparseArray([0, 1, 2])
res = arr[[False, False, False]]
assert res.dtype == arr.dtype
@@ -828,12 +828,12 @@ def test_fillna_overlap(self):
def test_nonzero(self):
# Tests regression #21172.
- sa = pd.SparseArray([float("nan"), float("nan"), 1, 0, 0, 2, 0, 0, 0, 3, 0, 0])
+ sa = SparseArray([float("nan"), float("nan"), 1, 0, 0, 2, 0, 0, 0, 3, 0, 0])
expected = np.array([2, 5, 9], dtype=np.int32)
(result,) = sa.nonzero()
tm.assert_numpy_array_equal(expected, result)
- sa = pd.SparseArray([0, 0, 1, 0, 0, 2, 0, 0, 0, 3, 0, 0])
+ sa = SparseArray([0, 0, 1, 0, 0, 2, 0, 0, 0, 3, 0, 0])
(result,) = sa.nonzero()
tm.assert_numpy_array_equal(expected, result)
@@ -1086,11 +1086,11 @@ def test_ufunc_args(self):
@pytest.mark.parametrize("fill_value", [0.0, np.nan])
def test_modf(self, fill_value):
# https://github.com/pandas-dev/pandas/issues/26946
- sparse = pd.SparseArray([fill_value] * 10 + [1.1, 2.2], fill_value=fill_value)
+ sparse = SparseArray([fill_value] * 10 + [1.1, 2.2], fill_value=fill_value)
r1, r2 = np.modf(sparse)
e1, e2 = np.modf(np.asarray(sparse))
- tm.assert_sp_array_equal(r1, pd.SparseArray(e1, fill_value=fill_value))
- tm.assert_sp_array_equal(r2, pd.SparseArray(e2, fill_value=fill_value))
+ tm.assert_sp_array_equal(r1, SparseArray(e1, fill_value=fill_value))
+ tm.assert_sp_array_equal(r2, SparseArray(e2, fill_value=fill_value))
def test_nbytes_integer(self):
arr = SparseArray([1, 0, 0, 0, 2], kind="integer")
@@ -1106,7 +1106,7 @@ def test_nbytes_block(self):
assert result == 24
def test_asarray_datetime64(self):
- s = pd.SparseArray(pd.to_datetime(["2012", None, None, "2013"]))
+ s = SparseArray(pd.to_datetime(["2012", None, None, "2013"]))
np.asarray(s)
def test_density(self):
@@ -1208,7 +1208,7 @@ def test_first_fill_value_loc(arr, loc):
)
@pytest.mark.parametrize("fill_value", [np.nan, 0, 1])
def test_unique_na_fill(arr, fill_value):
- a = pd.SparseArray(arr, fill_value=fill_value).unique()
+ a = SparseArray(arr, fill_value=fill_value).unique()
b = pd.Series(arr).unique()
assert isinstance(a, SparseArray)
a = np.asarray(a)
diff --git a/pandas/tests/arrays/sparse/test_combine_concat.py b/pandas/tests/arrays/sparse/test_combine_concat.py
index bcca4a23ea9ed..f1697dc9ff7ce 100644
--- a/pandas/tests/arrays/sparse/test_combine_concat.py
+++ b/pandas/tests/arrays/sparse/test_combine_concat.py
@@ -1,17 +1,17 @@
import numpy as np
import pytest
-import pandas as pd
import pandas._testing as tm
+from pandas.core.arrays.sparse import SparseArray
class TestSparseArrayConcat:
@pytest.mark.parametrize("kind", ["integer", "block"])
def test_basic(self, kind):
- a = pd.SparseArray([1, 0, 0, 2], kind=kind)
- b = pd.SparseArray([1, 0, 2, 2], kind=kind)
+ a = SparseArray([1, 0, 0, 2], kind=kind)
+ b = SparseArray([1, 0, 2, 2], kind=kind)
- result = pd.SparseArray._concat_same_type([a, b])
+ result = SparseArray._concat_same_type([a, b])
# Can't make any assertions about the sparse index itself
# since we don't merge sparse blocks across arrays
# in to_concat
@@ -22,10 +22,10 @@ def test_basic(self, kind):
@pytest.mark.parametrize("kind", ["integer", "block"])
def test_uses_first_kind(self, kind):
other = "integer" if kind == "block" else "block"
- a = pd.SparseArray([1, 0, 0, 2], kind=kind)
- b = pd.SparseArray([1, 0, 2, 2], kind=other)
+ a = SparseArray([1, 0, 0, 2], kind=kind)
+ b = SparseArray([1, 0, 2, 2], kind=other)
- result = pd.SparseArray._concat_same_type([a, b])
+ result = SparseArray._concat_same_type([a, b])
expected = np.array([1, 2, 1, 2, 2], dtype="int64")
tm.assert_numpy_array_equal(result.sp_values, expected)
assert result.kind == kind
diff --git a/pandas/tests/arrays/test_array.py b/pandas/tests/arrays/test_array.py
index 4d714623db5f7..d6d7db0d99d96 100644
--- a/pandas/tests/arrays/test_array.py
+++ b/pandas/tests/arrays/test_array.py
@@ -113,7 +113,7 @@
pd.arrays.IntervalArray.from_tuples([(1, 2), (3, 4)]),
),
# Sparse
- ([0, 1], "Sparse[int64]", pd.SparseArray([0, 1], dtype="int64")),
+ ([0, 1], "Sparse[int64]", pd.arrays.SparseArray([0, 1], dtype="int64")),
# IntegerNA
([1, None], "Int16", integer_array([1, None], dtype="Int16")),
(pd.Series([1, 2]), None, PandasArray(np.array([1, 2], dtype=np.int64))),
diff --git a/pandas/tests/base/test_conversion.py b/pandas/tests/base/test_conversion.py
index 486a1daaf8b50..e328cc223c8f2 100644
--- a/pandas/tests/base/test_conversion.py
+++ b/pandas/tests/base/test_conversion.py
@@ -7,7 +7,14 @@
import pandas as pd
from pandas import CategoricalIndex, Series, Timedelta, Timestamp
import pandas._testing as tm
-from pandas.core.arrays import DatetimeArray, PandasArray, TimedeltaArray
+from pandas.core.arrays import (
+ DatetimeArray,
+ IntervalArray,
+ PandasArray,
+ PeriodArray,
+ SparseArray,
+ TimedeltaArray,
+)
class TestToIterable:
@@ -177,14 +184,10 @@ def test_iter_box(self):
),
(
pd.PeriodIndex([2018, 2019], freq="A"),
- pd.core.arrays.PeriodArray,
+ PeriodArray,
pd.core.dtypes.dtypes.PeriodDtype("A-DEC"),
),
- (
- pd.IntervalIndex.from_breaks([0, 1, 2]),
- pd.core.arrays.IntervalArray,
- "interval",
- ),
+ (pd.IntervalIndex.from_breaks([0, 1, 2]), IntervalArray, "interval",),
# This test is currently failing for datetime64[ns] and timedelta64[ns].
# The NumPy type system is sufficient for representing these types, so
# we just use NumPy for Series / DataFrame columns of these types (so
@@ -270,8 +273,8 @@ def test_numpy_array_all_dtypes(any_numpy_dtype):
(pd.Categorical(["a", "b"]), "_codes"),
(pd.core.arrays.period_array(["2000", "2001"], freq="D"), "_data"),
(pd.core.arrays.integer_array([0, np.nan]), "_data"),
- (pd.core.arrays.IntervalArray.from_breaks([0, 1]), "_left"),
- (pd.SparseArray([0, 1]), "_sparse_values"),
+ (IntervalArray.from_breaks([0, 1]), "_left"),
+ (SparseArray([0, 1]), "_sparse_values"),
(DatetimeArray(np.array([1, 2], dtype="datetime64[ns]")), "_data"),
# tz-aware Datetime
(
@@ -318,10 +321,10 @@ def test_array_multiindex_raises():
np.array([0, pd.NA], dtype=object),
),
(
- pd.core.arrays.IntervalArray.from_breaks([0, 1, 2]),
+ IntervalArray.from_breaks([0, 1, 2]),
np.array([pd.Interval(0, 1), pd.Interval(1, 2)], dtype=object),
),
- (pd.SparseArray([0, 1]), np.array([0, 1], dtype=np.int64)),
+ (SparseArray([0, 1]), np.array([0, 1], dtype=np.int64)),
# tz-naive datetime
(
DatetimeArray(np.array(["2000", "2001"], dtype="M8[ns]")),
diff --git a/pandas/tests/dtypes/test_common.py b/pandas/tests/dtypes/test_common.py
index f58979f807adb..c96886a1bc7a8 100644
--- a/pandas/tests/dtypes/test_common.py
+++ b/pandas/tests/dtypes/test_common.py
@@ -182,7 +182,7 @@ def test_is_object():
"check_scipy", [False, pytest.param(True, marks=td.skip_if_no_scipy)]
)
def test_is_sparse(check_scipy):
- assert com.is_sparse(pd.SparseArray([1, 2, 3]))
+ assert com.is_sparse(pd.arrays.SparseArray([1, 2, 3]))
assert not com.is_sparse(np.array([1, 2, 3]))
@@ -198,7 +198,7 @@ def test_is_scipy_sparse():
assert com.is_scipy_sparse(bsr_matrix([1, 2, 3]))
- assert not com.is_scipy_sparse(pd.SparseArray([1, 2, 3]))
+ assert not com.is_scipy_sparse(pd.arrays.SparseArray([1, 2, 3]))
def test_is_categorical():
@@ -576,7 +576,7 @@ def test_is_extension_type(check_scipy):
cat = pd.Categorical([1, 2, 3])
assert com.is_extension_type(cat)
assert com.is_extension_type(pd.Series(cat))
- assert com.is_extension_type(pd.SparseArray([1, 2, 3]))
+ assert com.is_extension_type(pd.arrays.SparseArray([1, 2, 3]))
assert com.is_extension_type(pd.DatetimeIndex(["2000"], tz="US/Eastern"))
dtype = DatetimeTZDtype("ns", tz="US/Eastern")
@@ -605,7 +605,7 @@ def test_is_extension_array_dtype(check_scipy):
cat = pd.Categorical([1, 2, 3])
assert com.is_extension_array_dtype(cat)
assert com.is_extension_array_dtype(pd.Series(cat))
- assert com.is_extension_array_dtype(pd.SparseArray([1, 2, 3]))
+ assert com.is_extension_array_dtype(pd.arrays.SparseArray([1, 2, 3]))
assert com.is_extension_array_dtype(pd.DatetimeIndex(["2000"], tz="US/Eastern"))
dtype = DatetimeTZDtype("ns", tz="US/Eastern")
diff --git a/pandas/tests/dtypes/test_dtypes.py b/pandas/tests/dtypes/test_dtypes.py
index 13648322fc9c9..f47246898b821 100644
--- a/pandas/tests/dtypes/test_dtypes.py
+++ b/pandas/tests/dtypes/test_dtypes.py
@@ -914,7 +914,7 @@ def test_registry_find(dtype, expected):
(pd.Series([1, 2]), False),
(np.array([True, False]), True),
(pd.Series([True, False]), True),
- (pd.SparseArray([True, False]), True),
+ (pd.arrays.SparseArray([True, False]), True),
(SparseDtype(bool), True),
],
)
@@ -924,7 +924,7 @@ def test_is_bool_dtype(dtype, expected):
def test_is_bool_dtype_sparse():
- result = is_bool_dtype(pd.Series(pd.SparseArray([True, False])))
+ result = is_bool_dtype(pd.Series(pd.arrays.SparseArray([True, False])))
assert result is True
diff --git a/pandas/tests/dtypes/test_generic.py b/pandas/tests/dtypes/test_generic.py
index 6e9334996100f..2c8631ac2d71d 100644
--- a/pandas/tests/dtypes/test_generic.py
+++ b/pandas/tests/dtypes/test_generic.py
@@ -17,7 +17,7 @@ class TestABCClasses:
categorical = pd.Categorical([1, 2, 3], categories=[2, 3, 1])
categorical_df = pd.DataFrame({"values": [1, 2, 3]}, index=categorical)
df = pd.DataFrame({"names": ["a", "b", "c"]}, index=multi_index)
- sparse_array = pd.SparseArray(np.random.randn(10))
+ sparse_array = pd.arrays.SparseArray(np.random.randn(10))
datetime_array = pd.core.arrays.DatetimeArray(datetime_index)
timedelta_array = pd.core.arrays.TimedeltaArray(timedelta_index)
diff --git a/pandas/tests/extension/test_sparse.py b/pandas/tests/extension/test_sparse.py
index 1acd466f51c88..198a228b621b4 100644
--- a/pandas/tests/extension/test_sparse.py
+++ b/pandas/tests/extension/test_sparse.py
@@ -4,8 +4,9 @@
from pandas.errors import PerformanceWarning
import pandas as pd
-from pandas import SparseArray, SparseDtype
+from pandas import SparseDtype
import pandas._testing as tm
+from pandas.arrays import SparseArray
from pandas.tests.extension import base
@@ -235,7 +236,7 @@ def test_combine_le(self, data_repeated):
s2 = pd.Series(orig_data2)
result = s1.combine(s2, lambda x1, x2: x1 <= x2)
expected = pd.Series(
- pd.SparseArray(
+ SparseArray(
[a <= b for (a, b) in zip(list(orig_data1), list(orig_data2))],
fill_value=False,
)
@@ -245,7 +246,7 @@ def test_combine_le(self, data_repeated):
val = s1.iloc[0]
result = s1.combine(val, lambda x1, x2: x1 <= x2)
expected = pd.Series(
- pd.SparseArray([a <= val for a in list(orig_data1)], fill_value=False)
+ SparseArray([a <= val for a in list(orig_data1)], fill_value=False)
)
self.assert_series_equal(result, expected)
@@ -350,7 +351,7 @@ def _compare_other(self, s, data, op_name, other):
with np.errstate(all="ignore"):
expected = pd.Series(
- pd.SparseArray(
+ SparseArray(
op(np.asarray(data), np.asarray(other)),
fill_value=result.values.fill_value,
)
diff --git a/pandas/tests/frame/indexing/test_indexing.py b/pandas/tests/frame/indexing/test_indexing.py
index 0734a7bb240e5..e85f40329a2c5 100644
--- a/pandas/tests/frame/indexing/test_indexing.py
+++ b/pandas/tests/frame/indexing/test_indexing.py
@@ -1776,7 +1776,7 @@ def test_getitem_ix_float_duplicates(self):
def test_getitem_sparse_column(self):
# https://github.com/pandas-dev/pandas/issues/23559
- data = pd.SparseArray([0, 1])
+ data = pd.arrays.SparseArray([0, 1])
df = pd.DataFrame({"A": data})
expected = pd.Series(data, name="A")
result = df["A"]
@@ -1791,7 +1791,7 @@ def test_getitem_sparse_column(self):
def test_setitem_with_sparse_value(self):
# GH8131
df = pd.DataFrame({"c_1": ["a", "b", "c"], "n_1": [1.0, 2.0, 3.0]})
- sp_array = pd.SparseArray([0, 0, 1])
+ sp_array = pd.arrays.SparseArray([0, 0, 1])
df["new_column"] = sp_array
tm.assert_series_equal(
df["new_column"], pd.Series(sp_array, name="new_column"), check_names=False
@@ -1799,9 +1799,9 @@ def test_setitem_with_sparse_value(self):
def test_setitem_with_unaligned_sparse_value(self):
df = pd.DataFrame({"c_1": ["a", "b", "c"], "n_1": [1.0, 2.0, 3.0]})
- sp_series = pd.Series(pd.SparseArray([0, 0, 1]), index=[2, 1, 0])
+ sp_series = pd.Series(pd.arrays.SparseArray([0, 0, 1]), index=[2, 1, 0])
df["new_column"] = sp_series
- exp = pd.Series(pd.SparseArray([1, 0, 0]), name="new_column")
+ exp = pd.Series(pd.arrays.SparseArray([1, 0, 0]), name="new_column")
tm.assert_series_equal(df["new_column"], exp)
def test_setitem_with_unaligned_tz_aware_datetime_column(self):
diff --git a/pandas/tests/frame/methods/test_quantile.py b/pandas/tests/frame/methods/test_quantile.py
index 9c0ab67e62a1a..9ad2417592fe1 100644
--- a/pandas/tests/frame/methods/test_quantile.py
+++ b/pandas/tests/frame/methods/test_quantile.py
@@ -9,8 +9,8 @@
class TestDataFrameQuantile:
def test_quantile_sparse(self):
# GH#17198
- s = pd.Series(pd.SparseArray([1, 2]))
- s1 = pd.Series(pd.SparseArray([3, 4]))
+ s = pd.Series(pd.arrays.SparseArray([1, 2]))
+ s1 = pd.Series(pd.arrays.SparseArray([3, 4]))
df = pd.DataFrame({0: s, 1: s1})
result = df.quantile()
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index 1687f114670f3..1f190221b456a 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -2414,7 +2414,7 @@ class List(list):
"extension_arr",
[
Categorical(list("aabbc")),
- pd.SparseArray([1, np.nan, np.nan, np.nan]),
+ pd.arrays.SparseArray([1, np.nan, np.nan, np.nan]),
IntervalArray([pd.Interval(0, 1), pd.Interval(1, 5)]),
PeriodArray(pd.period_range(start="1/1/2017", end="1/1/2018", freq="M")),
],
diff --git a/pandas/tests/internals/test_internals.py b/pandas/tests/internals/test_internals.py
index f20e9ef6d7b57..15b1434f8629f 100644
--- a/pandas/tests/internals/test_internals.py
+++ b/pandas/tests/internals/test_internals.py
@@ -10,18 +10,10 @@
from pandas._libs.internals import BlockPlacement
import pandas as pd
-from pandas import (
- Categorical,
- DataFrame,
- DatetimeIndex,
- Index,
- MultiIndex,
- Series,
- SparseArray,
-)
+from pandas import Categorical, DataFrame, DatetimeIndex, Index, MultiIndex, Series
import pandas._testing as tm
import pandas.core.algorithms as algos
-from pandas.core.arrays import DatetimeArray, TimedeltaArray
+from pandas.core.arrays import DatetimeArray, SparseArray, TimedeltaArray
from pandas.core.internals import BlockManager, SingleBlockManager, make_block
diff --git a/pandas/tests/reshape/test_reshape.py b/pandas/tests/reshape/test_reshape.py
index 003c74566be71..776f610f17e8e 100644
--- a/pandas/tests/reshape/test_reshape.py
+++ b/pandas/tests/reshape/test_reshape.py
@@ -45,7 +45,7 @@ def test_basic(self, sparse, dtype):
dtype=self.effective_dtype(dtype),
)
if sparse:
- expected = expected.apply(pd.SparseArray, fill_value=0.0)
+ expected = expected.apply(pd.arrays.SparseArray, fill_value=0.0)
result = get_dummies(s_list, sparse=sparse, dtype=dtype)
tm.assert_frame_equal(result, expected)
@@ -132,7 +132,7 @@ def test_include_na(self, sparse, dtype):
{"a": [1, 0, 0], "b": [0, 1, 0]}, dtype=self.effective_dtype(dtype)
)
if sparse:
- exp = exp.apply(pd.SparseArray, fill_value=0.0)
+ exp = exp.apply(pd.arrays.SparseArray, fill_value=0.0)
tm.assert_frame_equal(res, exp)
# Sparse dataframes do not allow nan labelled columns, see #GH8822
@@ -145,7 +145,7 @@ def test_include_na(self, sparse, dtype):
# hack (NaN handling in assert_index_equal)
exp_na.columns = res_na.columns
if sparse:
- exp_na = exp_na.apply(pd.SparseArray, fill_value=0.0)
+ exp_na = exp_na.apply(pd.arrays.SparseArray, fill_value=0.0)
tm.assert_frame_equal(res_na, exp_na)
res_just_na = get_dummies([np.nan], dummy_na=True, sparse=sparse, dtype=dtype)
@@ -167,7 +167,7 @@ def test_unicode(self, sparse):
dtype=np.uint8,
)
if sparse:
- exp = exp.apply(pd.SparseArray, fill_value=0)
+ exp = exp.apply(pd.arrays.SparseArray, fill_value=0)
tm.assert_frame_equal(res, exp)
def test_dataframe_dummies_all_obj(self, df, sparse):
@@ -180,10 +180,10 @@ def test_dataframe_dummies_all_obj(self, df, sparse):
if sparse:
expected = pd.DataFrame(
{
- "A_a": pd.SparseArray([1, 0, 1], dtype="uint8"),
- "A_b": pd.SparseArray([0, 1, 0], dtype="uint8"),
- "B_b": pd.SparseArray([1, 1, 0], dtype="uint8"),
- "B_c": pd.SparseArray([0, 0, 1], dtype="uint8"),
+ "A_a": pd.arrays.SparseArray([1, 0, 1], dtype="uint8"),
+ "A_b": pd.arrays.SparseArray([0, 1, 0], dtype="uint8"),
+ "B_b": pd.arrays.SparseArray([1, 1, 0], dtype="uint8"),
+ "B_c": pd.arrays.SparseArray([0, 0, 1], dtype="uint8"),
}
)
@@ -226,7 +226,7 @@ def test_dataframe_dummies_prefix_list(self, df, sparse):
cols = ["from_A_a", "from_A_b", "from_B_b", "from_B_c"]
expected = expected[["C"] + cols]
- typ = pd.SparseArray if sparse else pd.Series
+ typ = pd.arrays.SparseArray if sparse else pd.Series
expected[cols] = expected[cols].apply(lambda x: typ(x))
tm.assert_frame_equal(result, expected)
@@ -423,7 +423,7 @@ def test_basic_drop_first(self, sparse):
result = get_dummies(s_list, drop_first=True, sparse=sparse)
if sparse:
- expected = expected.apply(pd.SparseArray, fill_value=0)
+ expected = expected.apply(pd.arrays.SparseArray, fill_value=0)
tm.assert_frame_equal(result, expected)
result = get_dummies(s_series, drop_first=True, sparse=sparse)
@@ -457,7 +457,7 @@ def test_basic_drop_first_NA(self, sparse):
res = get_dummies(s_NA, drop_first=True, sparse=sparse)
exp = DataFrame({"b": [0, 1, 0]}, dtype=np.uint8)
if sparse:
- exp = exp.apply(pd.SparseArray, fill_value=0)
+ exp = exp.apply(pd.arrays.SparseArray, fill_value=0)
tm.assert_frame_equal(res, exp)
@@ -466,7 +466,7 @@ def test_basic_drop_first_NA(self, sparse):
["b", np.nan], axis=1
)
if sparse:
- exp_na = exp_na.apply(pd.SparseArray, fill_value=0)
+ exp_na = exp_na.apply(pd.arrays.SparseArray, fill_value=0)
tm.assert_frame_equal(res_na, exp_na)
res_just_na = get_dummies(
@@ -480,7 +480,7 @@ def test_dataframe_dummies_drop_first(self, df, sparse):
result = get_dummies(df, drop_first=True, sparse=sparse)
expected = DataFrame({"A_b": [0, 1, 0], "B_c": [0, 0, 1]}, dtype=np.uint8)
if sparse:
- expected = expected.apply(pd.SparseArray, fill_value=0)
+ expected = expected.apply(pd.arrays.SparseArray, fill_value=0)
tm.assert_frame_equal(result, expected)
def test_dataframe_dummies_drop_first_with_categorical(self, df, sparse, dtype):
@@ -494,7 +494,7 @@ def test_dataframe_dummies_drop_first_with_categorical(self, df, sparse, dtype):
expected = expected[["C", "A_b", "B_c", "cat_y"]]
if sparse:
for col in cols:
- expected[col] = pd.SparseArray(expected[col])
+ expected[col] = pd.arrays.SparseArray(expected[col])
tm.assert_frame_equal(result, expected)
def test_dataframe_dummies_drop_first_with_na(self, df, sparse):
@@ -516,7 +516,7 @@ def test_dataframe_dummies_drop_first_with_na(self, df, sparse):
expected = expected.sort_index(axis=1)
if sparse:
for col in cols:
- expected[col] = pd.SparseArray(expected[col])
+ expected[col] = pd.arrays.SparseArray(expected[col])
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/series/test_missing.py b/pandas/tests/series/test_missing.py
index 128aea84fc967..3972e7ff4f3f4 100644
--- a/pandas/tests/series/test_missing.py
+++ b/pandas/tests/series/test_missing.py
@@ -457,9 +457,9 @@ def test_fillna_consistency(self):
def test_where_sparse(self):
# GH#17198 make sure we don't get an AttributeError for sp_index
- ser = pd.Series(pd.SparseArray([1, 2]))
+ ser = pd.Series(pd.arrays.SparseArray([1, 2]))
result = ser.where(ser >= 2, 0)
- expected = pd.Series(pd.SparseArray([0, 2]))
+ expected = pd.Series(pd.arrays.SparseArray([0, 2]))
tm.assert_series_equal(result, expected)
def test_datetime64tz_fillna_round_issue(self):
diff --git a/pandas/tests/series/test_ufunc.py b/pandas/tests/series/test_ufunc.py
index f3c3dd876d87a..067ee1b465bb1 100644
--- a/pandas/tests/series/test_ufunc.py
+++ b/pandas/tests/series/test_ufunc.py
@@ -33,7 +33,7 @@ def test_unary_ufunc(ufunc, sparse):
array = np.random.randint(0, 10, 10, dtype="int64")
array[::2] = 0
if sparse:
- array = pd.SparseArray(array, dtype=pd.SparseDtype("int64", 0))
+ array = pd.arrays.SparseArray(array, dtype=pd.SparseDtype("int64", 0))
index = list(string.ascii_letters[:10])
name = "name"
@@ -51,8 +51,8 @@ def test_binary_ufunc_with_array(flip, sparse, ufunc, arrays_for_binary_ufunc):
# Test that ufunc(Series(a), array) == Series(ufunc(a, b))
a1, a2 = arrays_for_binary_ufunc
if sparse:
- a1 = pd.SparseArray(a1, dtype=pd.SparseDtype("int64", 0))
- a2 = pd.SparseArray(a2, dtype=pd.SparseDtype("int64", 0))
+ a1 = pd.arrays.SparseArray(a1, dtype=pd.SparseDtype("int64", 0))
+ a2 = pd.arrays.SparseArray(a2, dtype=pd.SparseDtype("int64", 0))
name = "name" # op(Series, array) preserves the name.
series = pd.Series(a1, name=name)
@@ -79,8 +79,8 @@ def test_binary_ufunc_with_index(flip, sparse, ufunc, arrays_for_binary_ufunc):
# * ufunc(Index, Series) dispatches to Series (returns a Series)
a1, a2 = arrays_for_binary_ufunc
if sparse:
- a1 = pd.SparseArray(a1, dtype=pd.SparseDtype("int64", 0))
- a2 = pd.SparseArray(a2, dtype=pd.SparseDtype("int64", 0))
+ a1 = pd.arrays.SparseArray(a1, dtype=pd.SparseDtype("int64", 0))
+ a2 = pd.arrays.SparseArray(a2, dtype=pd.SparseDtype("int64", 0))
name = "name" # op(Series, array) preserves the name.
series = pd.Series(a1, name=name)
@@ -110,8 +110,8 @@ def test_binary_ufunc_with_series(
# with alignment between the indices
a1, a2 = arrays_for_binary_ufunc
if sparse:
- a1 = pd.SparseArray(a1, dtype=pd.SparseDtype("int64", 0))
- a2 = pd.SparseArray(a2, dtype=pd.SparseDtype("int64", 0))
+ a1 = pd.arrays.SparseArray(a1, dtype=pd.SparseDtype("int64", 0))
+ a2 = pd.arrays.SparseArray(a2, dtype=pd.SparseDtype("int64", 0))
name = "name" # op(Series, array) preserves the name.
series = pd.Series(a1, name=name)
@@ -149,7 +149,7 @@ def test_binary_ufunc_scalar(ufunc, sparse, flip, arrays_for_binary_ufunc):
# * ufunc(Series, scalar) == ufunc(scalar, Series)
array, _ = arrays_for_binary_ufunc
if sparse:
- array = pd.SparseArray(array)
+ array = pd.arrays.SparseArray(array)
other = 2
series = pd.Series(array, name="name")
@@ -183,8 +183,8 @@ def test_multiple_ouput_binary_ufuncs(ufunc, sparse, shuffle, arrays_for_binary_
a2[a2 == 0] = 1
if sparse:
- a1 = pd.SparseArray(a1, dtype=pd.SparseDtype("int64", 0))
- a2 = pd.SparseArray(a2, dtype=pd.SparseDtype("int64", 0))
+ a1 = pd.arrays.SparseArray(a1, dtype=pd.SparseDtype("int64", 0))
+ a2 = pd.arrays.SparseArray(a2, dtype=pd.SparseDtype("int64", 0))
s1 = pd.Series(a1)
s2 = pd.Series(a2)
@@ -209,7 +209,7 @@ def test_multiple_ouput_ufunc(sparse, arrays_for_binary_ufunc):
array, _ = arrays_for_binary_ufunc
if sparse:
- array = pd.SparseArray(array)
+ array = pd.arrays.SparseArray(array)
series = pd.Series(array, name="name")
result = np.modf(series)
| - [x] closes #30642
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
- in #30644
Change all references in code from `pd.SparseArray` to `pd.arrays.SparseArray`. Add a deprecation message for `pd.SparseArray`.
Per comment from @TomAugspurger here: https://github.com/pandas-dev/pandas/pull/30644#issuecomment-570671439, this may require discussion. | https://api.github.com/repos/pandas-dev/pandas/pulls/30656 | 2020-01-03T19:36:17Z | 2020-01-05T16:16:34Z | 2020-01-05T16:16:34Z | 2020-01-07T14:35:23Z |
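As a usage note on the rename above (not part of the diff itself): after this change the public spelling lives under the `pd.arrays` namespace, and the top-level alias is slated for deprecation. A minimal sketch of the recommended construction, assuming pandas >= 1.0:

```python
import pandas as pd

# Recommended spelling: construct sparse data via the pd.arrays namespace
arr = pd.arrays.SparseArray([0, 0, 1, 2], fill_value=0)

# Only values different from fill_value are physically stored
print(len(arr))      # 4 logical elements
print(arr.npoints)   # 2 stored points (the 1 and the 2)
```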
REF: change TDI.delete behavior to match DTI.delete | diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index fc55e1c530272..13c4c0161aeee 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -494,7 +494,7 @@ def delete(self, loc):
"""
new_tds = np.delete(self.asi8, loc)
- freq = "infer"
+ freq = None
if is_integer(loc):
if loc in (0, -len(self), -1, len(self) - 1):
freq = self.freq
@@ -505,7 +505,7 @@ def delete(self, loc):
if loc.start in (0, None) or loc.stop in (len(self), None):
freq = self.freq
- return TimedeltaIndex(new_tds, name=self.name, freq=freq)
+ return self._shallow_copy(new_tds, freq=freq)
TimedeltaIndex._add_comparison_ops()
diff --git a/pandas/tests/indexes/timedeltas/test_timedelta.py b/pandas/tests/indexes/timedeltas/test_timedelta.py
index 7703fbbbcad2a..3b52b93fa6369 100644
--- a/pandas/tests/indexes/timedeltas/test_timedelta.py
+++ b/pandas/tests/indexes/timedeltas/test_timedelta.py
@@ -201,6 +201,13 @@ def test_append_numpy_bug_1681(self):
result = a.append(c)
assert (result["B"] == td).all()
+ def test_delete_doesnt_infer_freq(self):
+ # GH#30655 behavior matches DatetimeIndex
+
+ tdi = pd.TimedeltaIndex(["1 Day", "2 Days", None, "3 Days", "4 Days"])
+ result = tdi.delete(2)
+ assert result.freq is None
+
def test_fields(self):
rng = timedelta_range("1 days, 10:11:12.100123456", periods=2, freq="s")
tm.assert_index_equal(rng.days, Index([1, 1], dtype="int64"))
| With this change, the two methods behave the same and can be shared. | https://api.github.com/repos/pandas-dev/pandas/pulls/30655 | 2020-01-03T19:21:15Z | 2020-01-04T19:44:49Z | 2020-01-04T19:44:49Z | 2020-01-04T19:46:37Z |
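For readers skimming the patch above: the frequency is now kept only when the deletion removes elements from either end of the index; deleting from the middle yields `freq=None` instead of attempting inference. The end-point check can be sketched in plain Python (a hypothetical helper mirroring the integer and slice branches of `TimedeltaIndex.delete`, not the pandas code itself):

```python
def keeps_freq_after_delete(loc, n):
    """Sketch of the rule in TimedeltaIndex.delete for an index of
    length n: a regular freq survives only if loc removes an endpoint."""
    if isinstance(loc, int):
        # first or last element, via positive or negative indexing
        return loc in (0, -n, -1, n - 1)
    if isinstance(loc, slice):
        # a slice anchored at either end keeps the freq
        return loc.start in (0, None) or loc.stop in (n, None)
    return False  # fancy indexing etc.: freq is dropped

print(keeps_freq_after_delete(0, 5))  # True: dropped the first element
print(keeps_freq_after_delete(2, 5))  # False: a hole in the middle
```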
REF/TST: PeriodArray comparisons with listlike | diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index 056c80717e54f..b7d841ab5c6a1 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -29,6 +29,7 @@
is_datetime64_dtype,
is_float_dtype,
is_list_like,
+ is_object_dtype,
is_period_dtype,
pandas_dtype,
)
@@ -41,6 +42,7 @@
)
from pandas.core.dtypes.missing import isna, notna
+from pandas.core import ops
import pandas.core.algorithms as algos
from pandas.core.arrays import datetimelike as dtl
import pandas.core.common as com
@@ -92,22 +94,44 @@ def wrapper(self, other):
self._check_compatible_with(other)
result = ordinal_op(other.ordinal)
- elif isinstance(other, cls):
- self._check_compatible_with(other)
-
- result = ordinal_op(other.asi8)
-
- mask = self._isnan | other._isnan
- if mask.any():
- result[mask] = nat_result
- return result
elif other is NaT:
result = np.empty(len(self.asi8), dtype=bool)
result.fill(nat_result)
- else:
+
+ elif not is_list_like(other):
return invalid_comparison(self, other, op)
+ else:
+ if isinstance(other, list):
+ # TODO: could use pd.Index to do inference?
+ other = np.array(other)
+
+ if not isinstance(other, (np.ndarray, cls)):
+ return invalid_comparison(self, other, op)
+
+ if is_object_dtype(other):
+ with np.errstate(all="ignore"):
+ result = ops.comp_method_OBJECT_ARRAY(
+ op, self.astype(object), other
+ )
+ o_mask = isna(other)
+
+ elif not is_period_dtype(other):
+ # e.g. is_timedelta64_dtype(other)
+ return invalid_comparison(self, other, op)
+
+ else:
+ assert isinstance(other, cls), type(other)
+
+ self._check_compatible_with(other)
+
+ result = ordinal_op(other.asi8)
+ o_mask = other._isnan
+
+ if o_mask.any():
+ result[o_mask] = nat_result
+
if self._hasnans:
result[self._isnan] = nat_result
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index ed5c6b450b05e..50040409473d2 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -107,6 +107,11 @@ def cmp_method(self, other):
if is_object_dtype(self) and isinstance(other, ABCCategorical):
left = type(other)(self._values, dtype=other.dtype)
return op(left, other)
+ elif is_object_dtype(self) and isinstance(other, ExtensionArray):
+ # e.g. PeriodArray
+ with np.errstate(all="ignore"):
+ result = op(self.values, other)
+
elif is_object_dtype(self) and not isinstance(self, ABCMultiIndex):
# don't pass MultiIndex
with np.errstate(all="ignore"):
diff --git a/pandas/tests/arithmetic/test_period.py b/pandas/tests/arithmetic/test_period.py
index 3ad7a6d8e465c..6eef99a124b1a 100644
--- a/pandas/tests/arithmetic/test_period.py
+++ b/pandas/tests/arithmetic/test_period.py
@@ -50,6 +50,79 @@ def test_compare_invalid_scalar(self, box_with_array, scalar):
parr = tm.box_expected(pi, box_with_array)
assert_invalid_comparison(parr, scalar, box_with_array)
+ @pytest.mark.parametrize(
+ "other",
+ [
+ pd.date_range("2000", periods=4).array,
+ pd.timedelta_range("1D", periods=4).array,
+ np.arange(4),
+ np.arange(4).astype(np.float64),
+ list(range(4)),
+ ],
+ )
+ def test_compare_invalid_listlike(self, box_with_array, other):
+ pi = pd.period_range("2000", periods=4)
+ parr = tm.box_expected(pi, box_with_array)
+ assert_invalid_comparison(parr, other, box_with_array)
+
+ @pytest.mark.parametrize("other_box", [list, np.array, lambda x: x.astype(object)])
+ def test_compare_object_dtype(self, box_with_array, other_box):
+ pi = pd.period_range("2000", periods=5)
+ parr = tm.box_expected(pi, box_with_array)
+
+ xbox = np.ndarray if box_with_array is pd.Index else box_with_array
+
+ other = other_box(pi)
+
+ expected = np.array([True, True, True, True, True])
+ expected = tm.box_expected(expected, xbox)
+
+ result = parr == other
+ tm.assert_equal(result, expected)
+ result = parr <= other
+ tm.assert_equal(result, expected)
+ result = parr >= other
+ tm.assert_equal(result, expected)
+
+ result = parr != other
+ tm.assert_equal(result, ~expected)
+ result = parr < other
+ tm.assert_equal(result, ~expected)
+ result = parr > other
+ tm.assert_equal(result, ~expected)
+
+ other = other_box(pi[::-1])
+
+ expected = np.array([False, False, True, False, False])
+ expected = tm.box_expected(expected, xbox)
+ result = parr == other
+ tm.assert_equal(result, expected)
+
+ expected = np.array([True, True, True, False, False])
+ expected = tm.box_expected(expected, xbox)
+ result = parr <= other
+ tm.assert_equal(result, expected)
+
+ expected = np.array([False, False, True, True, True])
+ expected = tm.box_expected(expected, xbox)
+ result = parr >= other
+ tm.assert_equal(result, expected)
+
+ expected = np.array([True, True, False, True, True])
+ expected = tm.box_expected(expected, xbox)
+ result = parr != other
+ tm.assert_equal(result, expected)
+
+ expected = np.array([True, True, False, False, False])
+ expected = tm.box_expected(expected, xbox)
+ result = parr < other
+ tm.assert_equal(result, expected)
+
+ expected = np.array([False, False, False, True, True])
+ expected = tm.box_expected(expected, xbox)
+ result = parr > other
+ tm.assert_equal(result, expected)
+
class TestPeriodIndexComparisons:
# TODO: parameterize over boxes
| Similar edits are going to be made to the DTA and TDA ops, separating PeriodArray to its own PR for exposition. | https://api.github.com/repos/pandas-dev/pandas/pulls/30654 | 2020-01-03T19:18:06Z | 2020-01-04T23:31:36Z | 2020-01-04T23:31:36Z | 2020-01-05T01:05:30Z |
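A rough illustration of the NaT-masking pattern used throughout the comparison wrapper above (a simplified NumPy sketch, not the pandas implementation; in the real wrapper `nat_result` is True for `!=` and False for every other operator):

```python
import operator
import numpy as np

def masked_compare(a, b, a_mask, b_mask, op, nat_result):
    # Compare ordinals elementwise, then overwrite every position
    # where either operand was NaT with the fixed nat_result.
    result = op(a, b)
    result[a_mask | b_mask] = nat_result
    return result

a = np.array([1, 2, 3])
b = np.array([1, 5, 3])
a_mask = np.array([False, True, False])   # pretend a[1] is NaT
b_mask = np.array([False, False, False])

print(masked_compare(a, b, a_mask, b_mask, operator.eq, False))
# -> [ True False  True]
```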
TST: Test for merge_asof groupby=multiple with categorical column | diff --git a/pandas/tests/reshape/merge/test_merge_asof.py b/pandas/tests/reshape/merge/test_merge_asof.py
index b2e764c5463fa..b461d1ec69152 100644
--- a/pandas/tests/reshape/merge/test_merge_asof.py
+++ b/pandas/tests/reshape/merge/test_merge_asof.py
@@ -1185,6 +1185,13 @@ def test_merge_datatype_categorical_error_raises(self):
with pytest.raises(MergeError, match=msg):
merge_asof(left, right, on="a")
+ def test_merge_groupby_multiple_column_with_categorical_column(self):
+ # GH 16454
+ df = pd.DataFrame({"x": [0], "y": [0], "z": pd.Categorical([0])})
+ result = merge_asof(df, df, on="x", by=["y", "z"])
+ expected = pd.DataFrame({"x": [0], "y": [0], "z": pd.Categorical([0])})
+ tm.assert_frame_equal(result, expected)
+
@pytest.mark.parametrize(
"func", [lambda x: x, lambda x: to_datetime(x)], ids=["numeric", "datetime"]
)
| - [x] closes #16454
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/30653 | 2020-01-03T18:37:31Z | 2020-01-03T22:08:08Z | 2020-01-03T22:08:07Z | 2020-01-03T22:08:21Z |
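For context, the failing case from GH 16454 (now covered by the test above) can be reproduced directly; this assumes a pandas build with the fix applied (>= 1.0):

```python
import pandas as pd

df = pd.DataFrame({"x": [0], "y": [0], "z": pd.Categorical([0])})

# Before the fix, a categorical column inside a multi-column `by`
# raised during the groupby step; afterwards the frame round-trips.
result = pd.merge_asof(df, df, on="x", by=["y", "z"])
print(result.dtypes["z"])  # the categorical dtype is preserved
```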
ENH: .equals for Extension Arrays | diff --git a/doc/source/reference/extensions.rst b/doc/source/reference/extensions.rst
index 4c0763e091b75..fe4113d100abf 100644
--- a/doc/source/reference/extensions.rst
+++ b/doc/source/reference/extensions.rst
@@ -45,6 +45,7 @@ objects.
api.extensions.ExtensionArray.copy
api.extensions.ExtensionArray.view
api.extensions.ExtensionArray.dropna
+ api.extensions.ExtensionArray.equals
api.extensions.ExtensionArray.factorize
api.extensions.ExtensionArray.fillna
api.extensions.ExtensionArray.isna
diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index 9c424f70b1ee0..3ce0db2cf38d0 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -150,6 +150,8 @@ Other enhancements
such as ``dict`` and ``list``, mirroring the behavior of :meth:`DataFrame.update` (:issue:`33215`)
- :meth:`~pandas.core.groupby.GroupBy.transform` and :meth:`~pandas.core.groupby.GroupBy.aggregate` has gained ``engine`` and ``engine_kwargs`` arguments that supports executing functions with ``Numba`` (:issue:`32854`, :issue:`33388`)
- :meth:`~pandas.core.resample.Resampler.interpolate` now supports SciPy interpolation method :class:`scipy.interpolate.CubicSpline` as method ``cubicspline`` (:issue:`33670`)
+- The ``ExtensionArray`` class now has an :meth:`~pandas.arrays.ExtensionArray.equals`
+ method, similar to :meth:`Series.equals` (:issue:`27081`).
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/_testing.py b/pandas/_testing.py
index 8fbdcb89dafca..6424a8097fbf1 100644
--- a/pandas/_testing.py
+++ b/pandas/_testing.py
@@ -1490,7 +1490,9 @@ def box_expected(expected, box_cls, transpose=True):
-------
subclass of box_cls
"""
- if box_cls is pd.Index:
+ if box_cls is pd.array:
+ expected = pd.array(expected)
+ elif box_cls is pd.Index:
expected = pd.Index(expected)
elif box_cls is pd.Series:
expected = pd.Series(expected)
diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index bd903d9b1fae3..0c5634a932e12 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -58,6 +58,7 @@ class ExtensionArray:
dropna
factorize
fillna
+ equals
isna
ravel
repeat
@@ -84,6 +85,7 @@ class ExtensionArray:
* _from_factorized
* __getitem__
* __len__
+ * __eq__
* dtype
* nbytes
* isna
@@ -333,6 +335,24 @@ def __iter__(self):
for i in range(len(self)):
yield self[i]
+ def __eq__(self, other: Any) -> ArrayLike:
+ """
+ Return for `self == other` (element-wise equality).
+ """
+ # Implementer note: this should return a boolean numpy ndarray or
+ # a boolean ExtensionArray.
+ # When `other` is one of Series, Index, or DataFrame, this method should
+ # return NotImplemented (to ensure that those objects are responsible for
+ # first unpacking the arrays, and then dispatch the operation to the
+ # underlying arrays)
+ raise AbstractMethodError(self)
+
+ def __ne__(self, other: Any) -> ArrayLike:
+ """
Return for `self != other` (element-wise inequality).
+ """
+ return ~(self == other)
+
def to_numpy(
self, dtype=None, copy: bool = False, na_value=lib.no_default
) -> np.ndarray:
@@ -682,6 +702,38 @@ def searchsorted(self, value, side="left", sorter=None):
arr = self.astype(object)
return arr.searchsorted(value, side=side, sorter=sorter)
+ def equals(self, other: "ExtensionArray") -> bool:
+ """
+ Return if another array is equivalent to this array.
+
+ Equivalent means that both arrays have the same shape and dtype, and
+ all values compare equal. Missing values in the same location are
+ considered equal (in contrast with normal equality).
+
+ Parameters
+ ----------
+ other : ExtensionArray
+ Array to compare to this Array.
+
+ Returns
+ -------
+ boolean
+ Whether the arrays are equivalent.
+ """
+ if not type(self) == type(other):
+ return False
+ elif not self.dtype == other.dtype:
+ return False
+ elif not len(self) == len(other):
+ return False
+ else:
+ equal_values = self == other
+ if isinstance(equal_values, ExtensionArray):
+ # boolean array with NA -> fill with False
+ equal_values = equal_values.fillna(False)
+ equal_na = self.isna() & other.isna()
+ return (equal_values | equal_na).all().item()
+
def _values_for_factorize(self) -> Tuple[np.ndarray, Any]:
"""
Return an array and missing value suitable for factorization.
@@ -1134,7 +1186,7 @@ class ExtensionScalarOpsMixin(ExtensionOpsMixin):
"""
@classmethod
- def _create_method(cls, op, coerce_to_dtype=True):
+ def _create_method(cls, op, coerce_to_dtype=True, result_dtype=None):
"""
A class method that returns a method that will correspond to an
operator for an ExtensionArray subclass, by dispatching to the
@@ -1202,7 +1254,7 @@ def _maybe_convert(arr):
# exception raised in _from_sequence; ensure we have ndarray
res = np.asarray(arr)
else:
- res = np.asarray(arr)
+ res = np.asarray(arr, dtype=result_dtype)
return res
if op.__name__ in {"divmod", "rdivmod"}:
@@ -1220,4 +1272,4 @@ def _create_arithmetic_method(cls, op):
@classmethod
def _create_comparison_method(cls, op):
- return cls._create_method(op, coerce_to_dtype=False)
+ return cls._create_method(op, coerce_to_dtype=False, result_dtype=bool)
diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index 66faca29670cb..8cac909b70802 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -606,9 +606,6 @@ def __eq__(self, other):
return result
- def __ne__(self, other):
- return ~self.__eq__(other)
-
def fillna(self, value=None, method=None, limit=None):
"""
Fill NA/NaN values using the specified method.
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index e4dcffae45f67..d22adf2aaf179 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -1864,6 +1864,9 @@ def where(
return [self.make_block_same_class(result, placement=self.mgr_locs)]
+ def equals(self, other) -> bool:
+ return self.values.equals(other.values)
+
def _unstack(self, unstacker, fill_value, new_placement):
# ExtensionArray-safe unstack.
# We override ObjectBlock._unstack, which unstacks directly on the
diff --git a/pandas/tests/arrays/integer/test_comparison.py b/pandas/tests/arrays/integer/test_comparison.py
index d76ed2c21ca0e..1767250af09b0 100644
--- a/pandas/tests/arrays/integer/test_comparison.py
+++ b/pandas/tests/arrays/integer/test_comparison.py
@@ -104,3 +104,13 @@ def test_compare_to_int(self, any_nullable_int_dtype, all_compare_operators):
expected[s2.isna()] = pd.NA
self.assert_series_equal(result, expected)
+
+
+def test_equals():
+ # GH-30652
+ # equals is generally tested in /tests/extension/base/methods, but this
+ # specifically tests that two arrays of the same class but different dtype
+ # do not evaluate equal
+ a1 = pd.array([1, 2, None], dtype="Int64")
+ a2 = pd.array([1, 2, None], dtype="Int32")
+ assert a1.equals(a2) is False
diff --git a/pandas/tests/extension/base/methods.py b/pandas/tests/extension/base/methods.py
index ca92c2e1e318d..4a6d827b36b02 100644
--- a/pandas/tests/extension/base/methods.py
+++ b/pandas/tests/extension/base/methods.py
@@ -421,3 +421,32 @@ def test_repeat_raises(self, data, repeats, kwargs, error, msg, use_numpy):
np.repeat(data, repeats, **kwargs)
else:
data.repeat(repeats, **kwargs)
+
+ @pytest.mark.parametrize("box", [pd.array, pd.Series, pd.DataFrame])
+ def test_equals(self, data, na_value, as_series, box):
+ data2 = type(data)._from_sequence([data[0]] * len(data), dtype=data.dtype)
+ data_na = type(data)._from_sequence([na_value] * len(data), dtype=data.dtype)
+
+ data = tm.box_expected(data, box, transpose=False)
+ data2 = tm.box_expected(data2, box, transpose=False)
+ data_na = tm.box_expected(data_na, box, transpose=False)
+
+ # we are asserting with `is True/False` explicitly, to test that the
+ # result is an actual Python bool, and not something "truthy"
+
+ assert data.equals(data) is True
+ assert data.equals(data.copy()) is True
+
+ # unequal other data
+ assert data.equals(data2) is False
+ assert data.equals(data_na) is False
+
+ # different length
+ assert data[:2].equals(data[:3]) is False
+
+ # empty are equal
+ assert data[:0].equals(data[:0]) is True
+
+ # other types
+ assert data.equals(None) is False
+ assert data[[0]].equals(data[0]) is False
diff --git a/pandas/tests/extension/base/ops.py b/pandas/tests/extension/base/ops.py
index d3b6472044ea5..188893c8b067c 100644
--- a/pandas/tests/extension/base/ops.py
+++ b/pandas/tests/extension/base/ops.py
@@ -139,10 +139,8 @@ class BaseComparisonOpsTests(BaseOpsUtil):
def _compare_other(self, s, data, op_name, other):
op = self.get_op_from_name(op_name)
if op_name == "__eq__":
- assert getattr(data, op_name)(other) is NotImplemented
assert not op(s, other).all()
elif op_name == "__ne__":
- assert getattr(data, op_name)(other) is NotImplemented
assert op(s, other).all()
else:
@@ -176,6 +174,12 @@ def test_direct_arith_with_series_returns_not_implemented(self, data):
else:
raise pytest.skip(f"{type(data).__name__} does not implement __eq__")
+ if hasattr(data, "__ne__"):
+ result = data.__ne__(other)
+ assert result is NotImplemented
+ else:
+ raise pytest.skip(f"{type(data).__name__} does not implement __ne__")
+
class BaseUnaryOpsTests(BaseOpsUtil):
def test_invert(self, data):
diff --git a/pandas/tests/extension/json/array.py b/pandas/tests/extension/json/array.py
index 1f026e405dc17..94f971938b690 100644
--- a/pandas/tests/extension/json/array.py
+++ b/pandas/tests/extension/json/array.py
@@ -105,6 +105,12 @@ def __setitem__(self, key, value):
def __len__(self) -> int:
return len(self.data)
+ def __eq__(self, other):
+ return NotImplemented
+
+ def __ne__(self, other):
+ return NotImplemented
+
def __array__(self, dtype=None):
if dtype is None:
dtype = object
diff --git a/pandas/tests/extension/json/test_json.py b/pandas/tests/extension/json/test_json.py
index d79769208ab56..74ca341e27bf8 100644
--- a/pandas/tests/extension/json/test_json.py
+++ b/pandas/tests/extension/json/test_json.py
@@ -262,6 +262,10 @@ def test_where_series(self, data, na_value):
def test_searchsorted(self, data_for_sorting):
super().test_searchsorted(data_for_sorting)
+ @pytest.mark.skip(reason="Can't compare dicts.")
+ def test_equals(self, data, na_value, as_series):
+ pass
+
class TestCasting(BaseJSON, base.BaseCastingTests):
@pytest.mark.skip(reason="failing on np.array(self, dtype=str)")
diff --git a/pandas/tests/extension/test_numpy.py b/pandas/tests/extension/test_numpy.py
index e48065b47f17c..1e21249988df6 100644
--- a/pandas/tests/extension/test_numpy.py
+++ b/pandas/tests/extension/test_numpy.py
@@ -276,6 +276,12 @@ def test_repeat(self, data, repeats, as_series, use_numpy):
def test_diff(self, data, periods):
return super().test_diff(data, periods)
+ @skip_nested
+ @pytest.mark.parametrize("box", [pd.array, pd.Series, pd.DataFrame])
+ def test_equals(self, data, na_value, as_series, box):
+ # Fails creating with _from_sequence
+ super().test_equals(data, na_value, as_series, box)
+
@skip_nested
class TestArithmetics(BaseNumPyTests, base.BaseArithmeticOpsTests):
diff --git a/pandas/tests/extension/test_sparse.py b/pandas/tests/extension/test_sparse.py
index 19ac25eb0ccf7..e59b3f0600867 100644
--- a/pandas/tests/extension/test_sparse.py
+++ b/pandas/tests/extension/test_sparse.py
@@ -316,6 +316,11 @@ def test_shift_0_periods(self, data):
data._sparse_values[0] = data._sparse_values[1]
assert result._sparse_values[0] != result._sparse_values[1]
+ @pytest.mark.parametrize("box", [pd.array, pd.Series, pd.DataFrame])
+ def test_equals(self, data, na_value, as_series, box):
+ self._check_unsupported(data)
+ super().test_equals(data, na_value, as_series, box)
+
class TestCasting(BaseSparseTests, base.BaseCastingTests):
def test_astype_object_series(self, all_data):
| - [x] closes #27081
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
--- Jan 3 10:00 PST
Hey team,
Looking for a bit of feedback on this API design and appropriate testing for this PR before I get too deep into it. Could you provide some guidance regarding how to appropriately handle updating this test? https://github.com/pandas-dev/pandas/blob/6f03e76f9d47ecfcfdd44641de6df1fc7dd57a01/pandas/tests/extension/base/ops.py#L162 It appears to contradict the issue #27081.
Appreciate the help,
Dave
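For what it's worth, the intended semantics of the new method can be sketched on the nullable integer arrays (this mirrors the behaviour the diff implements, including the dtype check):

```python
import pandas as pd

a = pd.array([1, 2, None], dtype="Int64")
b = pd.array([1, 2, None], dtype="Int64")

# Missing values in the same location compare equal under .equals,
# unlike element-wise ==, where pd.NA propagates.
assert a.equals(b)

# Same values but a different dtype -> not equal (see test_equals above).
c = pd.array([1, 2, None], dtype="Int32")
assert not a.equals(c)

# Element-wise comparison, by contrast, yields NA in the missing slot.
print(a == b)
```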
| https://api.github.com/repos/pandas-dev/pandas/pulls/30652 | 2020-01-03T18:01:16Z | 2020-05-09T07:57:17Z | 2020-05-09T07:57:16Z | 2020-05-09T07:57:18Z |
CLN: Clean tests for *.sort_index, *.sort_values and df.drop_duplicates | diff --git a/pandas/tests/frame/methods/test_drop_duplicates.py b/pandas/tests/frame/methods/test_drop_duplicates.py
index 29ab2e1bfd512..0856ed6885978 100644
--- a/pandas/tests/frame/methods/test_drop_duplicates.py
+++ b/pandas/tests/frame/methods/test_drop_duplicates.py
@@ -393,6 +393,7 @@ def test_drop_duplicates_inplace():
tm.assert_frame_equal(result, expected)
+@pytest.mark.parametrize("inplace", [True, False])
@pytest.mark.parametrize(
"origin_dict, output_dict, ignore_index, output_index",
[
@@ -403,24 +404,17 @@ def test_drop_duplicates_inplace():
],
)
def test_drop_duplicates_ignore_index(
- origin_dict, output_dict, ignore_index, output_index
+ inplace, origin_dict, output_dict, ignore_index, output_index
):
# GH 30114
df = DataFrame(origin_dict)
expected = DataFrame(output_dict, index=output_index)
- # Test when inplace is False
- result = df.drop_duplicates(ignore_index=ignore_index)
- tm.assert_frame_equal(result, expected)
-
- # to verify original dataframe is not mutated
- tm.assert_frame_equal(df, DataFrame(origin_dict))
-
- # Test when inplace is True
- copied_df = df.copy()
-
- copied_df.drop_duplicates(ignore_index=ignore_index, inplace=True)
- tm.assert_frame_equal(copied_df, expected)
+ if inplace:
+ result_df = df.copy()
+ result_df.drop_duplicates(ignore_index=ignore_index, inplace=inplace)
+ else:
+ result_df = df.drop_duplicates(ignore_index=ignore_index, inplace=inplace)
- # to verify that input is unchanged
+ tm.assert_frame_equal(result_df, expected)
tm.assert_frame_equal(df, DataFrame(origin_dict))
diff --git a/pandas/tests/frame/methods/test_sort_index.py b/pandas/tests/frame/methods/test_sort_index.py
index 6866aab11d2fa..29a52de66e100 100644
--- a/pandas/tests/frame/methods/test_sort_index.py
+++ b/pandas/tests/frame/methods/test_sort_index.py
@@ -230,6 +230,7 @@ def test_sort_index_intervalindex(self):
result = result.columns.levels[1].categories
tm.assert_index_equal(result, expected)
+ @pytest.mark.parametrize("inplace", [True, False])
@pytest.mark.parametrize(
"original_dict, sorted_dict, ascending, ignore_index, output_index",
[
@@ -240,25 +241,28 @@ def test_sort_index_intervalindex(self):
],
)
def test_sort_index_ignore_index(
- self, original_dict, sorted_dict, ascending, ignore_index, output_index
+ self, inplace, original_dict, sorted_dict, ascending, ignore_index, output_index
):
# GH 30114
original_index = [2, 5, 3]
df = DataFrame(original_dict, index=original_index)
expected_df = DataFrame(sorted_dict, index=output_index)
-
- sorted_df = df.sort_index(ascending=ascending, ignore_index=ignore_index)
- tm.assert_frame_equal(sorted_df, expected_df)
- tm.assert_frame_equal(df, DataFrame(original_dict, index=original_index))
-
- # Test when inplace is True
- copied_df = df.copy()
- copied_df.sort_index(
- ascending=ascending, ignore_index=ignore_index, inplace=True
- )
- tm.assert_frame_equal(copied_df, expected_df)
+ kwargs = {
+ "ascending": ascending,
+ "ignore_index": ignore_index,
+ "inplace": inplace,
+ }
+
+ if inplace:
+ result_df = df.copy()
+ result_df.sort_index(**kwargs)
+ else:
+ result_df = df.sort_index(**kwargs)
+
+ tm.assert_frame_equal(result_df, expected_df)
tm.assert_frame_equal(df, DataFrame(original_dict, index=original_index))
+ @pytest.mark.parametrize("inplace", [True, False])
@pytest.mark.parametrize(
"original_dict, sorted_dict, ascending, ignore_index, output_index",
[
@@ -293,21 +297,24 @@ def test_sort_index_ignore_index(
],
)
def test_sort_index_ignore_index_multi_index(
- self, original_dict, sorted_dict, ascending, ignore_index, output_index
+ self, inplace, original_dict, sorted_dict, ascending, ignore_index, output_index
):
# GH 30114, this is to test ignore_index on MultiIndex of index
mi = MultiIndex.from_tuples([[2, 1], [3, 4]], names=list("AB"))
df = DataFrame(original_dict, index=mi)
expected_df = DataFrame(sorted_dict, index=output_index)
- sorted_df = df.sort_index(ascending=ascending, ignore_index=ignore_index)
- tm.assert_frame_equal(sorted_df, expected_df)
- tm.assert_frame_equal(df, DataFrame(original_dict, index=mi))
+ kwargs = {
+ "ascending": ascending,
+ "ignore_index": ignore_index,
+ "inplace": inplace,
+ }
- # Test when inplace is True
- copied_df = df.copy()
- copied_df.sort_index(
- ascending=ascending, ignore_index=ignore_index, inplace=True
- )
- tm.assert_frame_equal(copied_df, expected_df)
+ if inplace:
+ result_df = df.copy()
+ result_df.sort_index(**kwargs)
+ else:
+ result_df = df.sort_index(**kwargs)
+
+ tm.assert_frame_equal(result_df, expected_df)
tm.assert_frame_equal(df, DataFrame(original_dict, index=mi))
diff --git a/pandas/tests/frame/methods/test_sort_values.py b/pandas/tests/frame/methods/test_sort_values.py
index e733c01e01740..ecf2fcf90dd2d 100644
--- a/pandas/tests/frame/methods/test_sort_values.py
+++ b/pandas/tests/frame/methods/test_sort_values.py
@@ -461,6 +461,7 @@ def test_sort_values_na_position_with_categories_raises(self):
with pytest.raises(ValueError):
df.sort_values(by="c", ascending=False, na_position="bad_position")
+ @pytest.mark.parametrize("inplace", [True, False])
@pytest.mark.parametrize(
"original_dict, sorted_dict, ignore_index, output_index",
[
@@ -481,24 +482,18 @@ def test_sort_values_na_position_with_categories_raises(self):
],
)
def test_sort_values_ignore_index(
- self, original_dict, sorted_dict, ignore_index, output_index
+ self, inplace, original_dict, sorted_dict, ignore_index, output_index
):
# GH 30114
df = DataFrame(original_dict)
expected = DataFrame(sorted_dict, index=output_index)
+ kwargs = {"ignore_index": ignore_index, "inplace": inplace}
- # Test when inplace is False
- sorted_df = df.sort_values("A", ascending=False, ignore_index=ignore_index)
- tm.assert_frame_equal(sorted_df, expected)
-
- tm.assert_frame_equal(df, DataFrame(original_dict))
-
- # Test when inplace is True
- copied_df = df.copy()
-
- copied_df.sort_values(
- "A", ascending=False, ignore_index=ignore_index, inplace=True
- )
- tm.assert_frame_equal(copied_df, expected)
+ if inplace:
+ result_df = df.copy()
+ result_df.sort_values("A", ascending=False, **kwargs)
+ else:
+ result_df = df.sort_values("A", ascending=False, **kwargs)
+ tm.assert_frame_equal(result_df, expected)
tm.assert_frame_equal(df, DataFrame(original_dict))
diff --git a/pandas/tests/series/methods/test_sort_index.py b/pandas/tests/series/methods/test_sort_index.py
index a9b73c2344681..5d47c54c9c336 100644
--- a/pandas/tests/series/methods/test_sort_index.py
+++ b/pandas/tests/series/methods/test_sort_index.py
@@ -136,6 +136,7 @@ def test_sort_index_intervals(self):
)
tm.assert_series_equal(result, expected)
+ @pytest.mark.parametrize("inplace", [True, False])
@pytest.mark.parametrize(
"original_list, sorted_list, ascending, ignore_index, output_index",
[
@@ -146,23 +147,22 @@ def test_sort_index_intervals(self):
],
)
def test_sort_index_ignore_index(
- self, original_list, sorted_list, ascending, ignore_index, output_index
+ self, inplace, original_list, sorted_list, ascending, ignore_index, output_index
):
# GH 30114
ser = Series(original_list)
expected = Series(sorted_list, index=output_index)
-
- # Test when inplace is False
- sorted_sr = ser.sort_index(ascending=ascending, ignore_index=ignore_index)
- tm.assert_series_equal(sorted_sr, expected)
-
- tm.assert_series_equal(ser, Series(original_list))
-
- # Test when inplace is True
- copied_sr = ser.copy()
- copied_sr.sort_index(
- ascending=ascending, ignore_index=ignore_index, inplace=True
- )
- tm.assert_series_equal(copied_sr, expected)
-
+ kwargs = {
+ "ascending": ascending,
+ "ignore_index": ignore_index,
+ "inplace": inplace,
+ }
+
+ if inplace:
+ result_ser = ser.copy()
+ result_ser.sort_index(**kwargs)
+ else:
+ result_ser = ser.sort_index(**kwargs)
+
+ tm.assert_series_equal(result_ser, expected)
tm.assert_series_equal(ser, Series(original_list))
diff --git a/pandas/tests/series/methods/test_sort_values.py b/pandas/tests/series/methods/test_sort_values.py
index 2cea6f061de76..6dc63986ef144 100644
--- a/pandas/tests/series/methods/test_sort_values.py
+++ b/pandas/tests/series/methods/test_sort_values.py
@@ -157,6 +157,7 @@ def test_sort_values_categorical(self):
expected = df.iloc[[2, 1, 5, 4, 3, 0]]
tm.assert_frame_equal(result, expected)
+ @pytest.mark.parametrize("inplace", [True, False])
@pytest.mark.parametrize(
"original_list, sorted_list, ignore_index, output_index",
[
@@ -165,21 +166,18 @@ def test_sort_values_categorical(self):
],
)
def test_sort_values_ignore_index(
- self, original_list, sorted_list, ignore_index, output_index
+ self, inplace, original_list, sorted_list, ignore_index, output_index
):
# GH 30114
- sr = Series(original_list)
+ ser = Series(original_list)
expected = Series(sorted_list, index=output_index)
+ kwargs = {"ignore_index": ignore_index, "inplace": inplace}
- # Test when inplace is False
- sorted_sr = sr.sort_values(ascending=False, ignore_index=ignore_index)
- tm.assert_series_equal(sorted_sr, expected)
+ if inplace:
+ result_ser = ser.copy()
+ result_ser.sort_values(ascending=False, **kwargs)
+ else:
+ result_ser = ser.sort_values(ascending=False, **kwargs)
- tm.assert_series_equal(sr, Series(original_list))
-
- # Test when inplace is True
- copied_sr = sr.copy()
- copied_sr.sort_values(ascending=False, ignore_index=ignore_index, inplace=True)
- tm.assert_series_equal(copied_sr, expected)
-
- tm.assert_series_equal(sr, Series(original_list))
+ tm.assert_series_equal(result_ser, expected)
+ tm.assert_series_equal(ser, Series(original_list))
| - [ ] xref #30578 #30405 #30402
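As a standalone sketch (not the exact test code), each cleaned-up test now follows a single parametrized `inplace` pattern along these lines:

```python
import pandas as pd
import pandas._testing as tm

def check_sort_values_ignore_index(inplace):
    ser = pd.Series([2, 3, 1])
    expected = pd.Series([3, 2, 1])  # ignore_index=True resets to 0..n-1
    kwargs = {"ignore_index": True, "inplace": inplace}
    if inplace:
        result = ser.copy()
        result.sort_values(ascending=False, **kwargs)
    else:
        result = ser.sort_values(ascending=False, **kwargs)
    tm.assert_series_equal(result, expected)
    # the input must be left unchanged either way
    tm.assert_series_equal(ser, pd.Series([2, 3, 1]))

for flag in (True, False):
    check_sort_values_ignore_index(flag)
```

This removes the duplicated "inplace is False / inplace is True" blocks while still asserting in both branches that the original object is not mutated.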
This follow-up PR is to parametrize and deduplicate `inplace` cases brought up in the above PRs. | https://api.github.com/repos/pandas-dev/pandas/pulls/30651 | 2020-01-03T17:53:16Z | 2020-01-04T18:05:14Z | 2020-01-04T18:05:14Z | 2020-01-04T18:05:21Z |
REF: use _data.take for CI/DTI/TDI/PI.take | diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index 96bfff9a0a09f..eb1c45e750bc9 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -728,13 +728,13 @@ def take(self, indices, axis=0, allow_fill=True, fill_value=None, **kwargs):
nv.validate_take(tuple(), kwargs)
indices = ensure_platform_int(indices)
taken = self._assert_take_fillable(
- self.codes,
+ self._data,
indices,
allow_fill=allow_fill,
fill_value=fill_value,
- na_value=-1,
+ na_value=self._data.dtype.na_value,
)
- return self._create_from_codes(taken)
+ return self._shallow_copy(taken)
take_nd = take
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 7bf1a601a0ab6..3c66889a8b4a7 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -274,11 +274,11 @@ def take(self, indices, axis=0, allow_fill=True, fill_value=None, **kwargs):
return self[maybe_slice]
taken = self._assert_take_fillable(
- self.asi8,
+ self._data,
indices,
allow_fill=allow_fill,
fill_value=fill_value,
- na_value=iNaT,
+ na_value=NaT,
)
# keep freq in PeriodArray/Index, reset otherwise
| cc @jschendel @jreback any idea why IntervalIndex doesn't use _assert_take_fillable like the others? If it can/should, then we can share this method between all of our EA-backed indexes. (Actually also need to make sure the slice behavior in the DTI/TDI/PI is OK to do for all of them) | https://api.github.com/repos/pandas-dev/pandas/pulls/30650 | 2020-01-03T17:15:56Z | 2020-01-04T18:07:25Z | 2020-01-04T18:07:25Z | 2020-01-04T18:09:16Z |
REF: move EA wrapping/unwrapping to indexes.extensions | diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index 96bfff9a0a09f..8fa71a60365c4 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -29,6 +29,7 @@
import pandas.core.common as com
import pandas.core.indexes.base as ibase
from pandas.core.indexes.base import Index, _index_shared_docs, maybe_extract_name
+from pandas.core.indexes.extension import make_wrapped_comparison_op
import pandas.core.missing as missing
from pandas.core.ops import get_op_result_name
@@ -876,14 +877,7 @@ def _add_comparison_methods(cls):
def _make_compare(op):
opname = f"__{op.__name__}__"
- def _evaluate_compare(self, other):
- with np.errstate(all="ignore"):
- result = op(self.array, other)
- if isinstance(result, ABCSeries):
- # Dispatch to pd.Categorical returned NotImplemented
- # and we got a Series back; down-cast to ndarray
- result = result._values
- return result
+ _evaluate_compare = make_wrapped_comparison_op(opname)
return compat.set_function_name(_evaluate_compare, opname, cls)
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 7bf1a601a0ab6..b9598acb552e3 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -25,7 +25,7 @@
)
from pandas.core.dtypes.generic import ABCIndex, ABCIndexClass, ABCSeries
-from pandas.core import algorithms, ops
+from pandas.core import algorithms
from pandas.core.accessor import PandasDelegate
from pandas.core.arrays import ExtensionArray, ExtensionOpsMixin
from pandas.core.arrays.datetimelike import (
@@ -40,21 +40,11 @@
from pandas.tseries.frequencies import DateOffset, to_offset
-from .extension import inherit_names
+from .extension import inherit_names, make_wrapped_arith_op, make_wrapped_comparison_op
_index_doc_kwargs = dict(ibase._index_doc_kwargs)
-def _make_wrapped_arith_op(opname):
- def method(self, other):
- meth = getattr(self._data, opname)
- result = meth(maybe_unwrap_index(other))
- return wrap_arithmetic_op(self, other, result)
-
- method.__name__ = opname
- return method
-
-
def _join_i8_wrapper(joinf, with_indexers: bool = True):
"""
Create the join wrapper methods.
@@ -125,19 +115,7 @@ def _create_comparison_method(cls, op):
"""
Create a comparison method that dispatches to ``cls.values``.
"""
-
- def wrapper(self, other):
- if isinstance(other, ABCSeries):
- # the arrays defer to Series for comparison ops but the indexes
- # don't, so we have to unwrap here.
- other = other._values
-
- result = op(self._data, maybe_unwrap_index(other))
- return result
-
- wrapper.__doc__ = op.__doc__
- wrapper.__name__ = f"__{op.__name__}__"
- return wrapper
+ return make_wrapped_comparison_op(f"__{op.__name__}__")
# ------------------------------------------------------------------------
# Abstract data attributes
@@ -467,22 +445,22 @@ def _convert_scalar_indexer(self, key, kind=None):
return super()._convert_scalar_indexer(key, kind=kind)
- __add__ = _make_wrapped_arith_op("__add__")
- __radd__ = _make_wrapped_arith_op("__radd__")
- __sub__ = _make_wrapped_arith_op("__sub__")
- __rsub__ = _make_wrapped_arith_op("__rsub__")
- __pow__ = _make_wrapped_arith_op("__pow__")
- __rpow__ = _make_wrapped_arith_op("__rpow__")
- __mul__ = _make_wrapped_arith_op("__mul__")
- __rmul__ = _make_wrapped_arith_op("__rmul__")
- __floordiv__ = _make_wrapped_arith_op("__floordiv__")
- __rfloordiv__ = _make_wrapped_arith_op("__rfloordiv__")
- __mod__ = _make_wrapped_arith_op("__mod__")
- __rmod__ = _make_wrapped_arith_op("__rmod__")
- __divmod__ = _make_wrapped_arith_op("__divmod__")
- __rdivmod__ = _make_wrapped_arith_op("__rdivmod__")
- __truediv__ = _make_wrapped_arith_op("__truediv__")
- __rtruediv__ = _make_wrapped_arith_op("__rtruediv__")
+ __add__ = make_wrapped_arith_op("__add__")
+ __radd__ = make_wrapped_arith_op("__radd__")
+ __sub__ = make_wrapped_arith_op("__sub__")
+ __rsub__ = make_wrapped_arith_op("__rsub__")
+ __pow__ = make_wrapped_arith_op("__pow__")
+ __rpow__ = make_wrapped_arith_op("__rpow__")
+ __mul__ = make_wrapped_arith_op("__mul__")
+ __rmul__ = make_wrapped_arith_op("__rmul__")
+ __floordiv__ = make_wrapped_arith_op("__floordiv__")
+ __rfloordiv__ = make_wrapped_arith_op("__rfloordiv__")
+ __mod__ = make_wrapped_arith_op("__mod__")
+ __rmod__ = make_wrapped_arith_op("__rmod__")
+ __divmod__ = make_wrapped_arith_op("__divmod__")
+ __rdivmod__ = make_wrapped_arith_op("__rdivmod__")
+ __truediv__ = make_wrapped_arith_op("__truediv__")
+ __rtruediv__ = make_wrapped_arith_op("__rtruediv__")
def isin(self, values, level=None):
"""
@@ -867,46 +845,6 @@ def _wrap_joined_index(self, joined, other):
return self._simple_new(joined, name, **kwargs)
-def wrap_arithmetic_op(self, other, result):
- if result is NotImplemented:
- return NotImplemented
-
- if isinstance(result, tuple):
- # divmod, rdivmod
- assert len(result) == 2
- return (
- wrap_arithmetic_op(self, other, result[0]),
- wrap_arithmetic_op(self, other, result[1]),
- )
-
- if not isinstance(result, Index):
- # Index.__new__ will choose appropriate subclass for dtype
- result = Index(result)
-
- res_name = ops.get_op_result_name(self, other)
- result.name = res_name
- return result
-
-
-def maybe_unwrap_index(obj):
- """
- If operating against another Index object, we need to unwrap the underlying
- data before deferring to the DatetimeArray/TimedeltaArray/PeriodArray
- implementation, otherwise we will incorrectly return NotImplemented.
-
- Parameters
- ----------
- obj : object
-
- Returns
- -------
- unwrapped object
- """
- if isinstance(obj, ABCIndexClass):
- return obj._data
- return obj
-
-
class DatetimelikeDelegateMixin(PandasDelegate):
"""
Delegation mechanism, specific for Datetime, Timedelta, and Period types.
@@ -914,8 +852,6 @@ class DatetimelikeDelegateMixin(PandasDelegate):
Functionality is delegated from the Index class to an Array class. A
few things can be customized
- * _delegate_class : type
- The class being delegated to.
* _delegated_methods, delegated_properties : List
The list of property / method names being delegated.
* raw_methods : Set
@@ -932,10 +868,6 @@ class DatetimelikeDelegateMixin(PandasDelegate):
_raw_properties: Set[str] = set()
_data: ExtensionArray
- @property
- def _delegate_class(self):
- raise AbstractMethodError
-
def _delegate_property_get(self, name, *args, **kwargs):
result = getattr(self._data, name)
if name not in self._raw_properties:
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index bc6b8ff845a56..fa0fed42f5191 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -86,7 +86,6 @@ class DatetimeDelegateMixin(DatetimelikeDelegateMixin):
| set(_extra_raw_properties)
)
_raw_methods = set(_extra_raw_methods)
- _delegate_class = DatetimeArray
@inherit_names(["_timezone", "is_normalized", "_resolution"], DatetimeArray, cache=True)
diff --git a/pandas/core/indexes/extension.py b/pandas/core/indexes/extension.py
index 779cd8eac4eaf..3c98d31e34b7d 100644
--- a/pandas/core/indexes/extension.py
+++ b/pandas/core/indexes/extension.py
@@ -5,6 +5,12 @@
from pandas.util._decorators import cache_readonly
+from pandas.core.dtypes.generic import ABCSeries
+
+from pandas.core.ops import get_op_result_name
+
+from .base import Index
+
def inherit_from_data(name: str, delegate, cache: bool = False):
"""
@@ -76,3 +82,73 @@ def wrapper(cls):
return cls
return wrapper
+
+
+def make_wrapped_comparison_op(opname):
+ """
+ Create a comparison method that dispatches to ``._data``.
+ """
+
+ def wrapper(self, other):
+ if isinstance(other, ABCSeries):
+ # the arrays defer to Series for comparison ops but the indexes
+ # don't, so we have to unwrap here.
+ other = other._values
+
+ other = _maybe_unwrap_index(other)
+
+ op = getattr(self._data, opname)
+ return op(other)
+
+ wrapper.__name__ = opname
+ return wrapper
+
+
+def make_wrapped_arith_op(opname):
+ def method(self, other):
+ meth = getattr(self._data, opname)
+ result = meth(_maybe_unwrap_index(other))
+ return _wrap_arithmetic_op(self, other, result)
+
+ method.__name__ = opname
+ return method
+
+
+def _wrap_arithmetic_op(self, other, result):
+ if result is NotImplemented:
+ return NotImplemented
+
+ if isinstance(result, tuple):
+ # divmod, rdivmod
+ assert len(result) == 2
+ return (
+ _wrap_arithmetic_op(self, other, result[0]),
+ _wrap_arithmetic_op(self, other, result[1]),
+ )
+
+ if not isinstance(result, Index):
+ # Index.__new__ will choose appropriate subclass for dtype
+ result = Index(result)
+
+ res_name = get_op_result_name(self, other)
+ result.name = res_name
+ return result
+
+
+def _maybe_unwrap_index(obj):
+ """
+ If operating against another Index object, we need to unwrap the underlying
+ data before deferring to the DatetimeArray/TimedeltaArray/PeriodArray
+ implementation, otherwise we will incorrectly return NotImplemented.
+
+ Parameters
+ ----------
+ obj : object
+
+ Returns
+ -------
+ unwrapped object
+ """
+ if isinstance(obj, Index):
+ return obj._data
+ return obj
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index 83a65b6505446..33c2ab20754f3 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -65,7 +65,6 @@ class PeriodDelegateMixin(DatetimelikeDelegateMixin):
Delegate from PeriodIndex to PeriodArray.
"""
- _delegate_class = PeriodArray
_raw_methods = {"_format_native_types"}
_raw_properties = {"is_leap_year", "freq"}
diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index 894b430f1c4fd..98e8a452a7a77 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -41,7 +41,6 @@ class TimedeltaDelegateMixin(DatetimelikeDelegateMixin):
# Some are "raw" methods, the result is not re-boxed in an Index
# We also have a few "extra" attrs, which may or may not be raw,
# which we don't want to expose in the .dt accessor.
- _delegate_class = TimedeltaArray
_raw_properties = {"components", "_box_func"}
_raw_methods = {"to_pytimedelta", "sum", "std", "median", "_format_native_types"}
| Re-use the comparison method wrapper in CategoricalIndex. | https://api.github.com/repos/pandas-dev/pandas/pulls/30648 | 2020-01-03T16:57:43Z | 2020-01-04T18:10:21Z | 2020-01-04T18:10:21Z | 2020-01-04T18:18:04Z |
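The delegate-and-unwrap pattern in the diff above can be exercised outside pandas. Below is a minimal toy sketch — `ToyIndex`, `_maybe_unwrap`, and `make_comparison_op` are hypothetical names, not pandas internals — showing why the wrapper unwraps other index-like objects before dispatching to the underlying array:

```python
# Toy sketch (not pandas internals) of the delegate-and-unwrap pattern:
# an Index-like wrapper builds comparison methods that unwrap other
# wrappers and dispatch to the underlying ndarray.
import numpy as np

class ToyIndex:
    def __init__(self, data):
        self._data = np.asarray(data)

def _maybe_unwrap(obj):
    # Unwrap wrapper objects so the ndarray comparison is not short-circuited
    # into NotImplemented by an unknown operand type.
    return obj._data if isinstance(obj, ToyIndex) else obj

def make_comparison_op(opname):
    def wrapper(self, other):
        other = _maybe_unwrap(other)
        return getattr(self._data, opname)(other)
    wrapper.__name__ = opname
    return wrapper

# Attach the wrapped dunders to the class, as the PR does via mixins.
for name in ("__eq__", "__ne__", "__lt__", "__le__", "__gt__", "__ge__"):
    setattr(ToyIndex, name, make_comparison_op(name))

left = ToyIndex([1, 2, 3])
right = ToyIndex([1, 0, 3])
print((left == right).tolist())  # [True, False, True]
```

The same `wrapper` works against a bare list or ndarray too, since `_maybe_unwrap` passes non-wrapper operands through untouched.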
REF: restructure api import | diff --git a/pandas/__init__.py b/pandas/__init__.py
index f9de17a2e3914..0c6c1c0433fb9 100644
--- a/pandas/__init__.py
+++ b/pandas/__init__.py
@@ -138,6 +138,7 @@
qcut,
)
+import pandas.api
from pandas.util._print_versions import show_versions
from pandas.io.api import (
diff --git a/pandas/io/json/_table_schema.py b/pandas/io/json/_table_schema.py
index 87bfd6030ec31..5f23b95c10f8e 100644
--- a/pandas/io/json/_table_schema.py
+++ b/pandas/io/json/_table_schema.py
@@ -18,9 +18,9 @@
is_string_dtype,
is_timedelta64_dtype,
)
+from pandas.core.dtypes.dtypes import CategoricalDtype
from pandas import DataFrame
-from pandas.api.types import CategoricalDtype
import pandas.core.common as com
loads = json.loads
diff --git a/pandas/io/spss.py b/pandas/io/spss.py
index cf682ec72f284..cdbe14e9fe927 100644
--- a/pandas/io/spss.py
+++ b/pandas/io/spss.py
@@ -3,7 +3,8 @@
from pandas.compat._optional import import_optional_dependency
-from pandas.api.types import is_list_like
+from pandas.core.dtypes.inference import is_list_like
+
from pandas.core.api import DataFrame
| Previously, `pandas.api` was imported indirectly via `pandas.io.json._table_schema`. This makes things a bit more direct.
I think we'll want a code check for importing from `pandas.api` within `pandas.core` but I wasn't able to easily write one (things like docstrings may want to have `import pandas.api` for example). | https://api.github.com/repos/pandas-dev/pandas/pulls/30647 | 2020-01-03T15:59:59Z | 2020-01-03T17:35:16Z | 2020-01-03T17:35:15Z | 2020-01-04T03:48:09Z |
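The code check the description wishes for could plausibly be sketched with the `ast` module, which sidesteps the docstring problem the author raises: string literals that merely mention `import pandas.api` are not import nodes after parsing. The function name and return shape here are hypothetical, not an actual pandas CI check:

```python
# Hypothetical sketch of a check that flags real ``pandas.api`` imports
# under a source tree (e.g. pandas/core), ignoring docstring mentions.
import ast
from pathlib import Path

def find_api_imports(root: str) -> list:
    offenders = []
    for path in Path(root).rglob("*.py"):
        tree = ast.parse(path.read_text())
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom):
                # Relative imports have module=None; treat as empty.
                names = [node.module or ""]
            else:
                continue
            if any(n == "pandas.api" or n.startswith("pandas.api.") for n in names):
                offenders.append((str(path), node.lineno))
    return offenders
```

A CI job could fail when `find_api_imports("pandas/core")` returns a non-empty list.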
TST: Regression testing for fixed issues | diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index 071c6bb79d30e..1687f114670f3 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -1,5 +1,5 @@
from collections import OrderedDict, abc
-from datetime import datetime, timedelta
+from datetime import date, datetime, timedelta
import functools
import itertools
@@ -2425,6 +2425,14 @@ def test_constructor_with_extension_array(self, extension_arr):
result = DataFrame(extension_arr)
tm.assert_frame_equal(result, expected)
+ def test_datetime_date_tuple_columns_from_dict(self):
+ # GH 10863
+ v = date.today()
+ tup = v, v
+ result = DataFrame({tup: Series(range(3), index=range(3))}, columns=[tup])
+ expected = DataFrame([0, 1, 2], columns=pd.Index(pd.Series([tup])))
+ tm.assert_frame_equal(result, expected)
+
class TestDataFrameConstructorWithDatetimeTZ:
def test_from_dict(self):
diff --git a/pandas/tests/frame/test_missing.py b/pandas/tests/frame/test_missing.py
index 651929216a722..2e6759cb1a238 100644
--- a/pandas/tests/frame/test_missing.py
+++ b/pandas/tests/frame/test_missing.py
@@ -970,3 +970,16 @@ def test_interp_ignore_all_good(self):
# all good
result = df[["B", "D"]].interpolate(downcast=None)
tm.assert_frame_equal(result, df[["B", "D"]])
+
+ @pytest.mark.parametrize("axis", [0, 1])
+ def test_interp_time_inplace_axis(self, axis):
+ # GH 9687
+ periods = 5
+ idx = pd.date_range(start="2014-01-01", periods=periods)
+ data = np.random.rand(periods, periods)
+ data[data < 0.5] = np.nan
+ expected = pd.DataFrame(index=idx, columns=idx, data=data)
+
+ result = expected.interpolate(axis=0, method="time")
+ expected.interpolate(axis=0, method="time", inplace=True)
+ tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/groupby/test_apply.py b/pandas/tests/groupby/test_apply.py
index 53879cad629b2..17e5d3efe850f 100644
--- a/pandas/tests/groupby/test_apply.py
+++ b/pandas/tests/groupby/test_apply.py
@@ -714,3 +714,41 @@ def test_apply_datetime_issue(group_column_dtlike):
["spam"], Index(["foo"], dtype="object", name="a"), columns=[42]
)
tm.assert_frame_equal(result, expected)
+
+
+def test_apply_series_return_dataframe_groups():
+ # GH 10078
+ tdf = DataFrame(
+ {
+ "day": {
+ 0: pd.Timestamp("2015-02-24 00:00:00"),
+ 1: pd.Timestamp("2015-02-24 00:00:00"),
+ 2: pd.Timestamp("2015-02-24 00:00:00"),
+ 3: pd.Timestamp("2015-02-24 00:00:00"),
+ 4: pd.Timestamp("2015-02-24 00:00:00"),
+ },
+ "userAgent": {
+ 0: "some UA string",
+ 1: "some UA string",
+ 2: "some UA string",
+ 3: "another UA string",
+ 4: "some UA string",
+ },
+ "userId": {
+ 0: "17661101",
+ 1: "17661101",
+ 2: "17661101",
+ 3: "17661101",
+ 4: "17661101",
+ },
+ }
+ )
+
+ def most_common_values(df):
+ return Series({c: s.value_counts().index[0] for c, s in df.iteritems()})
+
+ result = tdf.groupby("day").apply(most_common_values)["userId"]
+ expected = pd.Series(
+ ["17661101"], index=pd.DatetimeIndex(["2015-02-24"], name="day"), name="userId"
+ )
+ tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/groupby/test_categorical.py b/pandas/tests/groupby/test_categorical.py
index 40f844bdaa7c0..24a45677f90cc 100644
--- a/pandas/tests/groupby/test_categorical.py
+++ b/pandas/tests/groupby/test_categorical.py
@@ -1330,3 +1330,15 @@ def test_series_groupby_on_2_categoricals_unobserved_zeroes_or_nans(func, zero_o
# If we expect unobserved values to be zero, we also expect the dtype to be int
if zero_or_nan == 0:
assert np.issubdtype(result.dtype, np.integer)
+
+
+def test_series_groupby_categorical_aggregation_getitem():
+ # GH 8870
+ d = {"foo": [10, 8, 4, 1], "bar": [10, 20, 30, 40], "baz": ["d", "c", "d", "c"]}
+ df = pd.DataFrame(d)
+ cat = pd.cut(df["foo"], np.linspace(0, 20, 5))
+ df["range"] = cat
+ groups = df.groupby(["range", "baz"], as_index=True, sort=True)
+ result = groups["foo"].agg("mean")
+ expected = groups.agg("mean")["foo"]
+ tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index f9a77cd584d46..6fc7d16554ccd 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -2023,3 +2023,10 @@ def test_groupby_crash_on_nunique(axis):
expected = expected.T
tm.assert_frame_equal(result, expected)
+
+
+def test_groupby_list_level():
+ # GH 9790
+ expected = pd.DataFrame(np.arange(0, 9).reshape(3, 3))
+ result = expected.groupby(level=[0]).mean()
+ tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/indexing/multiindex/test_loc.py b/pandas/tests/indexing/multiindex/test_loc.py
index ce427116ea343..3b8aa963ac698 100644
--- a/pandas/tests/indexing/multiindex/test_loc.py
+++ b/pandas/tests/indexing/multiindex/test_loc.py
@@ -437,3 +437,34 @@ def test_loc_nan_multiindex():
columns=Index(["d1", "d2", "d3", "d4"], dtype="object"),
)
tm.assert_frame_equal(result, expected)
+
+
+def test_loc_period_string_indexing():
+ # GH 9892
+ a = pd.period_range("2013Q1", "2013Q4", freq="Q")
+ i = (1111, 2222, 3333)
+ idx = pd.MultiIndex.from_product((a, i), names=("Periode", "CVR"))
+ df = pd.DataFrame(
+ index=idx,
+ columns=(
+ "OMS",
+ "OMK",
+ "RES",
+ "DRIFT_IND",
+ "OEVRIG_IND",
+ "FIN_IND",
+ "VARE_UD",
+ "LOEN_UD",
+ "FIN_UD",
+ ),
+ )
+ result = df.loc[("2013Q1", 1111), "OMS"]
+ expected = pd.Series(
+ [np.nan],
+ dtype=object,
+ name="OMS",
+ index=pd.MultiIndex.from_tuples(
+ [(pd.Period("2013Q1"), 1111)], names=["Periode", "CVR"]
+ ),
+ )
+ tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/indexing/test_loc.py b/pandas/tests/indexing/test_loc.py
index c74435e9a9347..a36078b11c663 100644
--- a/pandas/tests/indexing/test_loc.py
+++ b/pandas/tests/indexing/test_loc.py
@@ -983,3 +983,22 @@ def test_loc_setitem_float_intindex():
result = pd.DataFrame(rand_data)
result.loc[:, 0.5] = np.nan
tm.assert_frame_equal(result, expected)
+
+
+def test_loc_axis_1_slice():
+ # GH 10586
+ cols = [(yr, m) for yr in [2014, 2015] for m in [7, 8, 9, 10]]
+ df = pd.DataFrame(
+ np.ones((10, 8)),
+ index=tuple("ABCDEFGHIJ"),
+ columns=pd.MultiIndex.from_tuples(cols),
+ )
+ result = df.loc(axis=1)[(2014, 9):(2015, 8)]
+ expected = pd.DataFrame(
+ np.ones((10, 4)),
+ index=tuple("ABCDEFGHIJ"),
+ columns=pd.MultiIndex.from_tuples(
+ [(2014, 9), (2014, 10), (2015, 7), (2015, 8)]
+ ),
+ )
+ tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/io/parser/test_index_col.py b/pandas/tests/io/parser/test_index_col.py
index 7361e2ca6868f..f67a658cadfa2 100644
--- a/pandas/tests/io/parser/test_index_col.py
+++ b/pandas/tests/io/parser/test_index_col.py
@@ -5,6 +5,7 @@
"""
from io import StringIO
+import numpy as np
import pytest
from pandas import DataFrame, Index, MultiIndex
@@ -172,3 +173,14 @@ def test_multi_index_naming_not_all_at_beginning(all_parsers):
),
)
tm.assert_frame_equal(result, expected)
+
+
+def test_no_multi_index_level_names_empty(all_parsers):
+ # GH 10984
+ parser = all_parsers
+ midx = MultiIndex.from_tuples([("A", 1, 2), ("A", 1, 2), ("B", 1, 2)])
+ expected = DataFrame(np.random.randn(3, 3), index=midx, columns=["x", "y", "z"])
+ with tm.ensure_clean() as path:
+ expected.to_csv(path)
+ result = parser.read_csv(path, index_col=[0, 1, 2])
+ tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/reshape/test_concat.py b/pandas/tests/reshape/test_concat.py
index 6a26dc474afc8..2ea7ab827732e 100644
--- a/pandas/tests/reshape/test_concat.py
+++ b/pandas/tests/reshape/test_concat.py
@@ -2730,3 +2730,12 @@ def test_concat_datetimeindex_freq():
expected = pd.DataFrame(data[50:] + data[:50], index=dr[50:].append(dr[:50]))
expected.index._data.freq = None
tm.assert_frame_equal(result, expected)
+
+
+def test_concat_empty_df_object_dtype():
+ # GH 9149
+ df_1 = pd.DataFrame({"Row": [0, 1, 1], "EmptyCol": np.nan, "NumberCol": [1, 2, 3]})
+ df_2 = pd.DataFrame(columns=df_1.columns)
+ result = pd.concat([df_1, df_2], axis=0)
+ expected = df_1.astype(object)
+ tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/reshape/test_pivot.py b/pandas/tests/reshape/test_pivot.py
index 054af87b42411..743fc50c87e96 100644
--- a/pandas/tests/reshape/test_pivot.py
+++ b/pandas/tests/reshape/test_pivot.py
@@ -1965,6 +1965,31 @@ def test_pivot_table_aggfunc_scalar_dropna(self, dropna):
tm.assert_frame_equal(result, expected)
+ def test_pivot_table_empty_aggfunc(self):
+ # GH 9186
+ df = pd.DataFrame(
+ {
+ "A": [2, 2, 3, 3, 2],
+ "id": [5, 6, 7, 8, 9],
+ "C": ["p", "q", "q", "p", "q"],
+ "D": [None, None, None, None, None],
+ }
+ )
+ result = df.pivot_table(index="A", columns="D", values="id", aggfunc=np.size)
+ expected = pd.DataFrame()
+ tm.assert_frame_equal(result, expected)
+
+ def test_pivot_table_no_column_raises(self):
+ # GH 10326
+ def agg(l):
+ return np.mean(l)
+
+ foo = pd.DataFrame(
+ {"X": [0, 0, 1, 1], "Y": [0, 1, 0, 1], "Z": [10, 20, 30, 40]}
+ )
+ with pytest.raises(KeyError, match="notpresent"):
+ foo.pivot_table("notpresent", "X", "Y", aggfunc=agg)
+
class TestCrosstab:
def setup_method(self, method):
| - [x] closes #10863
- [x] closes #9687
- [x] closes #10078
- [x] closes #8870
- [x] closes #9790
- [x] closes #9892
- [x] closes #10586
- [x] closes #10984
- [x] closes #9149
- [x] closes #9186
- [x] closes #10326
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/30646 | 2020-01-03T15:36:25Z | 2020-01-03T23:23:45Z | 2020-01-03T23:23:43Z | 2020-01-03T23:23:50Z |
TST: Adding test to concat Sparse arrays | diff --git a/pandas/tests/reshape/test_concat.py b/pandas/tests/reshape/test_concat.py
index 2ea7ab827732e..990669f1ae13a 100644
--- a/pandas/tests/reshape/test_concat.py
+++ b/pandas/tests/reshape/test_concat.py
@@ -28,6 +28,7 @@
read_csv,
)
import pandas._testing as tm
+from pandas.core.arrays import SparseArray
from pandas.core.construction import create_series_with_explicit_dtype
from pandas.tests.extension.decimal import to_decimal
@@ -2739,3 +2740,13 @@ def test_concat_empty_df_object_dtype():
result = pd.concat([df_1, df_2], axis=0)
expected = df_1.astype(object)
tm.assert_frame_equal(result, expected)
+
+
+def test_concat_sparse():
+ # GH 23557
+ a = pd.Series(SparseArray([0, 1, 2]))
+ expected = pd.DataFrame(data=[[0, 0], [1, 1], [2, 2]]).astype(
+ pd.SparseDtype(np.int64, 0)
+ )
+ result = pd.concat([a, a], axis=1)
+ tm.assert_frame_equal(result, expected)
| - [ ] closes #23557
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/30645 | 2020-01-03T15:30:08Z | 2020-01-06T13:22:19Z | 2020-01-06T13:22:18Z | 2020-01-06T13:22:22Z |
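The `SparseDtype` asserted in the test above comes from fill-value storage; as a numpy-only illustration (these helper names are hypothetical, not the `SparseArray` implementation), the idea is that only positions differing from the fill value are kept, together with their indices:

```python
# Minimal numpy-only sketch of the fill-value idea behind SparseArray:
# store the non-fill values plus their positions; densifying rebuilds
# the full array.
import numpy as np

def to_sparse(arr, fill_value=0):
    arr = np.asarray(arr)
    idx = np.flatnonzero(arr != fill_value)
    return arr[idx], idx, fill_value, len(arr)

def to_dense(values, idx, fill_value, length):
    out = np.full(length, fill_value, dtype=values.dtype)
    out[idx] = values
    return out

values, idx, fv, n = to_sparse([0, 1, 2, 0, 0, 3])
print(values.tolist())                         # [1, 2, 3]
print(to_dense(values, idx, fv, n).tolist())   # [0, 1, 2, 0, 0, 3]
```

Concatenating two such arrays only needs the stored values and shifted indices, which is why the dtype (including the fill value) survives `pd.concat` in the test.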
DOC: Change refs in docs from pandas.SparseArray to pandas.arrays.SparseArray | diff --git a/doc/source/development/contributing_docstring.rst b/doc/source/development/contributing_docstring.rst
index 34bc5f44eb0c0..d897889ed9eff 100644
--- a/doc/source/development/contributing_docstring.rst
+++ b/doc/source/development/contributing_docstring.rst
@@ -399,7 +399,7 @@ DataFrame:
* DataFrame
* pandas.Index
* pandas.Categorical
-* pandas.SparseArray
+* pandas.arrays.SparseArray
If the exact type is not relevant, but must be compatible with a numpy
array, array-like can be specified. If Any type that can be iterated is
diff --git a/doc/source/getting_started/basics.rst b/doc/source/getting_started/basics.rst
index f47fa48eb6202..4fef5efbd1551 100644
--- a/doc/source/getting_started/basics.rst
+++ b/doc/source/getting_started/basics.rst
@@ -1951,7 +1951,7 @@ documentation sections for more on each type.
| period | :class:`PeriodDtype` | :class:`Period` | :class:`arrays.PeriodArray` | ``'period[<freq>]'``, | :ref:`timeseries.periods` |
| (time spans) | | | | ``'Period[<freq>]'`` | |
+-------------------+---------------------------+--------------------+-------------------------------+-----------------------------------------+-------------------------------+
-| sparse | :class:`SparseDtype` | (none) | :class:`SparseArray` | ``'Sparse'``, ``'Sparse[int]'``, | :ref:`sparse` |
+| sparse | :class:`SparseDtype` | (none) | :class:`arrays.SparseArray` | ``'Sparse'``, ``'Sparse[int]'``, | :ref:`sparse` |
| | | | | ``'Sparse[float]'`` | |
+-------------------+---------------------------+--------------------+-------------------------------+-----------------------------------------+-------------------------------+
| intervals | :class:`IntervalDtype` | :class:`Interval` | :class:`arrays.IntervalArray` | ``'interval'``, ``'Interval'``, | :ref:`advanced.intervalindex` |
diff --git a/doc/source/getting_started/dsintro.rst b/doc/source/getting_started/dsintro.rst
index a07fcbd8b67c4..82d4b5e34e4f8 100644
--- a/doc/source/getting_started/dsintro.rst
+++ b/doc/source/getting_started/dsintro.rst
@@ -741,7 +741,7 @@ implementation takes precedence and a Series is returned.
np.maximum(ser, idx)
NumPy ufuncs are safe to apply to :class:`Series` backed by non-ndarray arrays,
-for example :class:`SparseArray` (see :ref:`sparse.calculation`). If possible,
+for example :class:`arrays.SparseArray` (see :ref:`sparse.calculation`). If possible,
the ufunc is applied without converting the underlying data to an ndarray.
Console display
diff --git a/doc/source/reference/arrays.rst b/doc/source/reference/arrays.rst
index 2c8382e916ed8..c71350ecd73b3 100644
--- a/doc/source/reference/arrays.rst
+++ b/doc/source/reference/arrays.rst
@@ -444,13 +444,13 @@ Sparse data
-----------
Data where a single value is repeated many times (e.g. ``0`` or ``NaN``) may
-be stored efficiently as a :class:`SparseArray`.
+be stored efficiently as a :class:`arrays.SparseArray`.
.. autosummary::
:toctree: api/
:template: autosummary/class_without_autosummary.rst
- SparseArray
+ arrays.SparseArray
.. autosummary::
:toctree: api/
diff --git a/doc/source/user_guide/sparse.rst b/doc/source/user_guide/sparse.rst
index c258a8840b714..8588fac4a18d0 100644
--- a/doc/source/user_guide/sparse.rst
+++ b/doc/source/user_guide/sparse.rst
@@ -15,7 +15,7 @@ can be chosen, including 0) is omitted. The compressed values are not actually s
arr = np.random.randn(10)
arr[2:-2] = np.nan
- ts = pd.Series(pd.SparseArray(arr))
+ ts = pd.Series(pd.arrays.SparseArray(arr))
ts
Notice the dtype, ``Sparse[float64, nan]``. The ``nan`` means that elements in the
@@ -51,7 +51,7 @@ identical to their dense counterparts.
SparseArray
-----------
-:class:`SparseArray` is a :class:`~pandas.api.extensions.ExtensionArray`
+:class:`arrays.SparseArray` is a :class:`~pandas.api.extensions.ExtensionArray`
for storing an array of sparse values (see :ref:`basics.dtypes` for more
on extension arrays). It is a 1-dimensional ndarray-like object storing
only values distinct from the ``fill_value``:
@@ -61,7 +61,7 @@ only values distinct from the ``fill_value``:
arr = np.random.randn(10)
arr[2:5] = np.nan
arr[7:8] = np.nan
- sparr = pd.SparseArray(arr)
+ sparr = pd.arrays.SparseArray(arr)
sparr
A sparse array can be converted to a regular (dense) ndarray with :meth:`numpy.asarray`
@@ -144,7 +144,7 @@ to ``SparseArray`` and get a ``SparseArray`` as a result.
.. ipython:: python
- arr = pd.SparseArray([1., np.nan, np.nan, -2., np.nan])
+ arr = pd.arrays.SparseArray([1., np.nan, np.nan, -2., np.nan])
np.abs(arr)
@@ -153,7 +153,7 @@ the correct dense result.
.. ipython:: python
- arr = pd.SparseArray([1., -1, -1, -2., -1], fill_value=-1)
+ arr = pd.arrays.SparseArray([1., -1, -1, -2., -1], fill_value=-1)
np.abs(arr)
np.abs(arr).to_dense()
@@ -194,7 +194,7 @@ From an array-like, use the regular :class:`Series` or
.. ipython:: python
# New way
- pd.DataFrame({"A": pd.SparseArray([0, 1])})
+ pd.DataFrame({"A": pd.arrays.SparseArray([0, 1])})
From a SciPy sparse matrix, use :meth:`DataFrame.sparse.from_spmatrix`,
@@ -256,10 +256,10 @@ Instead, you'll need to ensure that the values being assigned are sparse
.. ipython:: python
- df = pd.DataFrame({"A": pd.SparseArray([0, 1])})
+ df = pd.DataFrame({"A": pd.arrays.SparseArray([0, 1])})
df['B'] = [0, 0] # remains dense
df['B'].dtype
- df['B'] = pd.SparseArray([0, 0])
+ df['B'] = pd.arrays.SparseArray([0, 0])
df['B'].dtype
The ``SparseDataFrame.default_kind`` and ``SparseDataFrame.default_fill_value`` attributes
diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
index b9cc1dad53674..47ebbf230d719 100755
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -578,6 +578,8 @@ Deprecations
- :meth:`DataFrame.to_stata`, :meth:`DataFrame.to_feather`, and :meth:`DataFrame.to_parquet` argument "fname" is deprecated, use "path" instead (:issue:`23574`)
- The deprecated internal attributes ``_start``, ``_stop`` and ``_step`` of :class:`RangeIndex` now raise a ``FutureWarning`` instead of a ``DeprecationWarning`` (:issue:`26581`)
- The ``pandas.util.testing`` module has been deprecated. Use the public API in ``pandas.testing`` documented at :ref:`api.general.testing` (:issue:`16232`).
+- ``pandas.SparseArray`` has been deprecated. Use ``pandas.arrays.SparseArray`` (:class:`arrays.SparseArray`) instead. (:issue:`30642`)
+
**Selecting Columns from a Grouped DataFrame**
- [x] closes #30642
- [ ] tests added / passed
- N/A
- [ ] passes `black pandas`
- N/A
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- N/A
- [x] whatsnew entry
Change all references in docs to use `arrays.SparseArray` rather than just `SparseArray`
| https://api.github.com/repos/pandas-dev/pandas/pulls/30644 | 2020-01-03T15:18:57Z | 2020-01-04T23:25:09Z | null | 2020-01-06T23:10:20Z |
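The renamed namespace the docs now point at can be checked directly, assuming a pandas version where `pd.arrays.SparseArray` exists (0.24+):

```python
import numpy as np
import pandas as pd

# Construct through the public pd.arrays namespace rather than the
# deprecated top-level pd.SparseArray alias.
arr = pd.arrays.SparseArray([0, 0, 1, 2])
print(arr.dtype)        # Sparse[int64, 0]
print(np.asarray(arr))  # [0 0 1 2]
```

For integer input the fill value defaults to 0, matching the `pd.SparseDtype(np.int64, 0)` used in the concat test elsewhere in this dump.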
CI: Adding build for ARM64 | diff --git a/.travis.yml b/.travis.yml
index 98826a2d9e115..7943ca370af1a 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -14,6 +14,8 @@ cache:
env:
global:
+ # Variable for test workers
+ - PYTEST_WORKERS="auto"
# create a github personal access token
# cd pandas-dev/pandas
# travis encrypt 'PANDAS_GH_TOKEN=personal_access_token' -r pandas-dev/pandas
@@ -38,6 +40,10 @@ matrix:
- env:
- JOB="3.7" ENV_FILE="ci/deps/travis-37.yaml" PATTERN="(not slow and not network and not clipboard)"
+ - arch: arm64
+ env:
+ - JOB="3.7, arm64" PYTEST_WORKERS=8 ENV_FILE="ci/deps/travis-37-arm64.yaml" PATTERN="(not slow and not network and not clipboard)"
+
- env:
- JOB="3.6, locale" ENV_FILE="ci/deps/travis-36-locale.yaml" PATTERN="((not slow and not network and not clipboard) or (single and db))" LOCALE_OVERRIDE="zh_CN.UTF-8" SQL="1"
services:
@@ -59,9 +65,12 @@ matrix:
- mysql
- postgresql
allow_failures:
- - dist: bionic
- python: 3.9-dev
- env:
+ - arch: arm64
+ env:
+ - JOB="3.7, arm64" PYTEST_WORKERS=8 ENV_FILE="ci/deps/travis-37-arm64.yaml" PATTERN="(not slow and not network and not clipboard)"
+ - dist: bionic
+ python: 3.9-dev
+ env:
- JOB="3.9-dev" PATTERN="(not slow and not network)"
before_install:
diff --git a/azure-pipelines.yml b/azure-pipelines.yml
index d042bda77d4e8..e45cafc02cb61 100644
--- a/azure-pipelines.yml
+++ b/azure-pipelines.yml
@@ -5,6 +5,9 @@ trigger:
pr:
- master
+variables:
+ PYTEST_WORKERS: auto
+
jobs:
# Mac and Linux use the same template
- template: ci/azure/posix.yml
diff --git a/ci/deps/travis-37-arm64.yaml b/ci/deps/travis-37-arm64.yaml
new file mode 100644
index 0000000000000..5cb53489be225
--- /dev/null
+++ b/ci/deps/travis-37-arm64.yaml
@@ -0,0 +1,21 @@
+name: pandas-dev
+channels:
+ - defaults
+ - conda-forge
+dependencies:
+ - python=3.7.*
+
+ # tools
+ - cython>=0.29.13
+ - pytest>=5.0.1
+ - pytest-xdist>=1.21
+ - hypothesis>=3.58.0
+
+ # pandas dependencies
+ - botocore>=1.11
+ - numpy
+ - python-dateutil
+ - pytz
+ - pip
+ - pip:
+ - moto
diff --git a/ci/run_tests.sh b/ci/run_tests.sh
index dbd1046656d5a..fda2005ce7843 100755
--- a/ci/run_tests.sh
+++ b/ci/run_tests.sh
@@ -20,7 +20,7 @@ if [[ $(uname) == "Linux" && -z $DISPLAY ]]; then
XVFB="xvfb-run "
fi
-PYTEST_CMD="${XVFB}pytest -m \"$PATTERN\" -n auto --dist=loadfile -s --strict --durations=30 --junitxml=test-data.xml $TEST_ARGS $COVERAGE pandas"
+PYTEST_CMD="${XVFB}pytest -m \"$PATTERN\" -n $PYTEST_WORKERS --dist=loadfile -s --strict --durations=30 --junitxml=test-data.xml $TEST_ARGS $COVERAGE pandas"
echo $PYTEST_CMD
sh -c "$PYTEST_CMD"
diff --git a/ci/setup_env.sh b/ci/setup_env.sh
index 76f47b7ed3634..4d551294dbb21 100755
--- a/ci/setup_env.sh
+++ b/ci/setup_env.sh
@@ -41,9 +41,17 @@ else
exit 1
fi
-wget -q "https://repo.continuum.io/miniconda/Miniconda3-latest-$CONDA_OS.sh" -O miniconda.sh
+if [ "${TRAVIS_CPU_ARCH}" == "arm64" ]; then
+ sudo apt-get -y install xvfb
+ CONDA_URL="https://github.com/conda-forge/miniforge/releases/download/4.8.2-1/Miniforge3-4.8.2-1-Linux-aarch64.sh"
+else
+ CONDA_URL="https://repo.continuum.io/miniconda/Miniconda3-latest-$CONDA_OS.sh"
+fi
+wget -q $CONDA_URL -O miniconda.sh
chmod +x miniconda.sh
-./miniconda.sh -b
+
+# Installation path is required for ARM64 platform as miniforge script installs in path $HOME/miniforge3.
+./miniconda.sh -b -p $MINICONDA_DIR
export PATH=$MINICONDA_DIR/bin:$PATH
diff --git a/pandas/_libs/src/ujson/lib/ultrajson.h b/pandas/_libs/src/ujson/lib/ultrajson.h
index b40ac9856d6a6..acb66b668e8dc 100644
--- a/pandas/_libs/src/ujson/lib/ultrajson.h
+++ b/pandas/_libs/src/ujson/lib/ultrajson.h
@@ -108,7 +108,7 @@ typedef uint32_t JSUINT32;
#define FASTCALL_MSVC
-#if !defined __x86_64__
+#if !defined __x86_64__ && !defined __aarch64__
#define FASTCALL_ATTR __attribute__((fastcall))
#else
#define FASTCALL_ATTR
Added arm64 test support in Travis CI.
Modified environment creation to use Miniforge instead of Miniconda, since Miniconda is not currently supported on arm64.
- [X] closes https://github.com/pandas-dev/pandas/issues/28986
- [X] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/30641 | 2020-01-03T11:10:46Z | 2020-05-12T16:02:40Z | 2020-05-12T16:02:40Z | 2020-05-12T16:02:40Z |
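The installer selection added to `ci/setup_env.sh` boils down to choosing a download URL per CPU architecture; a Python rendering of that branch (the function name is hypothetical, the URLs are copied from the diff):

```python
# Mirror of the conda-installer branch in ci/setup_env.sh: arm64 gets
# Miniforge (conda-forge's aarch64 build) because upstream Miniconda
# did not ship aarch64 installers at the time.
def conda_installer_url(cpu_arch: str, conda_os: str = "Linux-x86_64") -> str:
    if cpu_arch == "arm64":
        return (
            "https://github.com/conda-forge/miniforge/releases/download/"
            "4.8.2-1/Miniforge3-4.8.2-1-Linux-aarch64.sh"
        )
    return f"https://repo.continuum.io/miniconda/Miniconda3-latest-{conda_os}.sh"
```

Note the diff also passes `-p $MINICONDA_DIR` explicitly, because the Miniforge script would otherwise install into `$HOME/miniforge3` rather than the path the rest of the CI expects.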
BUG: Fix IntervalArray equality comparisons | diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
index 40690abe0c600..a5a583963f82d 100755
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -881,6 +881,7 @@ Interval
- Bug in :meth:`IntervalIndex.get_indexer` where a :class:`Categorical` or :class:`CategoricalIndex` ``target`` would incorrectly raise a ``TypeError`` (:issue:`30063`)
- Bug in ``pandas.core.dtypes.cast.infer_dtype_from_scalar`` where passing ``pandas_dtype=True`` did not infer :class:`IntervalDtype` (:issue:`30337`)
- Bug in :class:`IntervalDtype` where the ``kind`` attribute was incorrectly set as ``None`` instead of ``"O"`` (:issue:`30568`)
+- Bug in :class:`IntervalIndex`, :class:`~arrays.IntervalArray`, and :class:`Series` with interval data where equality comparisons were incorrect (:issue:`24112`)
Indexing
^^^^^^^^
diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index cea059fb22be1..7a12b1dcf436d 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -17,6 +17,8 @@
is_integer_dtype,
is_interval,
is_interval_dtype,
+ is_list_like,
+ is_object_dtype,
is_scalar,
is_string_dtype,
is_timedelta64_dtype,
@@ -37,6 +39,7 @@
from pandas.core.arrays.base import ExtensionArray, _extension_array_shared_docs
from pandas.core.arrays.categorical import Categorical
import pandas.core.common as com
+from pandas.core.construction import array
from pandas.core.indexes.base import ensure_index
_VALID_CLOSED = {"left", "right", "both", "neither"}
@@ -547,6 +550,58 @@ def __setitem__(self, key, value):
right.values[key] = value_right
self._right = right
+ def __eq__(self, other):
+ # ensure pandas array for list-like and eliminate non-interval scalars
+ if is_list_like(other):
+ if len(self) != len(other):
+ raise ValueError("Lengths must match to compare")
+ other = array(other)
+ elif not isinstance(other, Interval):
+ # non-interval scalar -> no matches
+ return np.zeros(len(self), dtype=bool)
+
+ # determine the dtype of the elements we want to compare
+ if isinstance(other, Interval):
+ other_dtype = "interval"
+ elif not is_categorical_dtype(other):
+ other_dtype = other.dtype
+ else:
+ # for categorical defer to categories for dtype
+ other_dtype = other.categories.dtype
+
+ # extract intervals if we have interval categories with matching closed
+ if is_interval_dtype(other_dtype):
+ if self.closed != other.categories.closed:
+ return np.zeros(len(self), dtype=bool)
+ other = other.categories.take(other.codes)
+
+ # interval-like -> need same closed and matching endpoints
+ if is_interval_dtype(other_dtype):
+ if self.closed != other.closed:
+ return np.zeros(len(self), dtype=bool)
+ return (self.left == other.left) & (self.right == other.right)
+
+ # non-interval/non-object dtype -> no matches
+ if not is_object_dtype(other_dtype):
+ return np.zeros(len(self), dtype=bool)
+
+ # object dtype -> iteratively check for intervals
+ result = np.zeros(len(self), dtype=bool)
+ for i, obj in enumerate(other):
+ # need object to be an Interval with same closed and endpoints
+ if (
+ isinstance(obj, Interval)
+ and self.closed == obj.closed
+ and self.left[i] == obj.left
+ and self.right[i] == obj.right
+ ):
+ result[i] = True
+
+ return result
+
+ def __ne__(self, other):
+ return ~self.__eq__(other)
+
def fillna(self, value=None, method=None, limit=None):
"""
Fill NA/NaN values using the specified method.
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index cae9fa949f711..5708f8a73fe63 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -205,7 +205,9 @@ def func(intvidx_self, other, sort=False):
"__array__",
"overlaps",
"contains",
+ "__eq__",
"__len__",
+ "__ne__",
"set_closed",
"to_tuples",
],
@@ -224,7 +226,14 @@ class IntervalIndex(IntervalMixin, Index, accessor.PandasDelegate):
# Immutable, so we are able to cache computations like isna in '_mask'
_mask = None
- _raw_inherit = {"_ndarray_values", "__array__", "overlaps", "contains"}
+ _raw_inherit = {
+ "_ndarray_values",
+ "__array__",
+ "overlaps",
+ "contains",
+ "__eq__",
+ "__ne__",
+ }
# --------------------------------------------------------------------
# Constructors
diff --git a/pandas/tests/arrays/interval/test_interval.py b/pandas/tests/arrays/interval/test_interval.py
index 655a6e717119b..3bdd68fa904fc 100644
--- a/pandas/tests/arrays/interval/test_interval.py
+++ b/pandas/tests/arrays/interval/test_interval.py
@@ -1,14 +1,22 @@
+import operator
+
import numpy as np
import pytest
+from pandas.core.dtypes.common import is_list_like
+
import pandas as pd
from pandas import (
+ Categorical,
Index,
Interval,
IntervalIndex,
+ Period,
+ Series,
Timedelta,
Timestamp,
date_range,
+ period_range,
timedelta_range,
)
from pandas.core.arrays import IntervalArray
@@ -35,6 +43,18 @@ def left_right_dtypes(request):
return request.param
+def create_categorical_intervals(left, right, closed="right"):
+ return Categorical(IntervalIndex.from_arrays(left, right, closed))
+
+
+def create_series_intervals(left, right, closed="right"):
+ return Series(IntervalArray.from_arrays(left, right, closed))
+
+
+def create_series_categorical_intervals(left, right, closed="right"):
+ return Series(Categorical(IntervalIndex.from_arrays(left, right, closed)))
+
+
class TestAttributes:
@pytest.mark.parametrize(
"left, right",
@@ -93,6 +113,221 @@ def test_set_na(self, left_right_dtypes):
tm.assert_extension_array_equal(result, expected)
+class TestComparison:
+ @pytest.fixture(params=[operator.eq, operator.ne])
+ def op(self, request):
+ return request.param
+
+ @pytest.fixture
+ def array(self, left_right_dtypes):
+ """
+ Fixture to generate an IntervalArray of various dtypes containing NA if possible
+ """
+ left, right = left_right_dtypes
+ if left.dtype != "int64":
+ left, right = left.insert(4, np.nan), right.insert(4, np.nan)
+ else:
+ left, right = left.insert(4, 10), right.insert(4, 20)
+ return IntervalArray.from_arrays(left, right)
+
+ @pytest.fixture(
+ params=[
+ IntervalArray.from_arrays,
+ IntervalIndex.from_arrays,
+ create_categorical_intervals,
+ create_series_intervals,
+ create_series_categorical_intervals,
+ ],
+ ids=[
+ "IntervalArray",
+ "IntervalIndex",
+ "Categorical[Interval]",
+ "Series[Interval]",
+ "Series[Categorical[Interval]]",
+ ],
+ )
+ def interval_constructor(self, request):
+ """
+ Fixture for all pandas native interval constructors.
+ To be used as the LHS of IntervalArray comparisons.
+ """
+ return request.param
+
+ def elementwise_comparison(self, op, array, other):
+ """
+        Helper that performs elementwise comparisons between `array` and `other`
+ """
+ other = other if is_list_like(other) else [other] * len(array)
+ return np.array([op(x, y) for x, y in zip(array, other)])
+
+ def test_compare_scalar_interval(self, op, array):
+ # matches first interval
+ other = array[0]
+ result = op(array, other)
+ expected = self.elementwise_comparison(op, array, other)
+ tm.assert_numpy_array_equal(result, expected)
+
+ # matches on a single endpoint but not both
+ other = Interval(array.left[0], array.right[1])
+ result = op(array, other)
+ expected = self.elementwise_comparison(op, array, other)
+ tm.assert_numpy_array_equal(result, expected)
+
+ def test_compare_scalar_interval_mixed_closed(self, op, closed, other_closed):
+ array = IntervalArray.from_arrays(range(2), range(1, 3), closed=closed)
+ other = Interval(0, 1, closed=other_closed)
+
+ result = op(array, other)
+ expected = self.elementwise_comparison(op, array, other)
+ tm.assert_numpy_array_equal(result, expected)
+
+ def test_compare_scalar_na(self, op, array, nulls_fixture):
+ result = op(array, nulls_fixture)
+ expected = self.elementwise_comparison(op, array, nulls_fixture)
+ tm.assert_numpy_array_equal(result, expected)
+
+ @pytest.mark.parametrize(
+ "other",
+ [
+ 0,
+ 1.0,
+ True,
+ "foo",
+ Timestamp("2017-01-01"),
+ Timestamp("2017-01-01", tz="US/Eastern"),
+ Timedelta("0 days"),
+ Period("2017-01-01", "D"),
+ ],
+ )
+ def test_compare_scalar_other(self, op, array, other):
+ result = op(array, other)
+ expected = self.elementwise_comparison(op, array, other)
+ tm.assert_numpy_array_equal(result, expected)
+
+ def test_compare_list_like_interval(
+ self, op, array, interval_constructor,
+ ):
+ # same endpoints
+ other = interval_constructor(array.left, array.right)
+ result = op(array, other)
+ expected = self.elementwise_comparison(op, array, other)
+ tm.assert_numpy_array_equal(result, expected)
+
+ # different endpoints
+ other = interval_constructor(array.left[::-1], array.right[::-1])
+ result = op(array, other)
+ expected = self.elementwise_comparison(op, array, other)
+ tm.assert_numpy_array_equal(result, expected)
+
+ # all nan endpoints
+ other = interval_constructor([np.nan] * 4, [np.nan] * 4)
+ result = op(array, other)
+ expected = self.elementwise_comparison(op, array, other)
+ tm.assert_numpy_array_equal(result, expected)
+
+ def test_compare_list_like_interval_mixed_closed(
+ self, op, interval_constructor, closed, other_closed
+ ):
+ array = IntervalArray.from_arrays(range(2), range(1, 3), closed=closed)
+ other = interval_constructor(range(2), range(1, 3), closed=other_closed)
+
+ result = op(array, other)
+ expected = self.elementwise_comparison(op, array, other)
+ tm.assert_numpy_array_equal(result, expected)
+
+ @pytest.mark.parametrize(
+ "other",
+ [
+ (
+ Interval(0, 1),
+ Interval(Timedelta("1 day"), Timedelta("2 days")),
+ Interval(4, 5, "both"),
+ Interval(10, 20, "neither"),
+ ),
+ (0, 1.5, Timestamp("20170103"), np.nan),
+ (
+ Timestamp("20170102", tz="US/Eastern"),
+ Timedelta("2 days"),
+ "baz",
+ pd.NaT,
+ ),
+ ],
+ )
+ def test_compare_list_like_object(self, op, array, other):
+ result = op(array, other)
+ expected = self.elementwise_comparison(op, array, other)
+ tm.assert_numpy_array_equal(result, expected)
+
+ def test_compare_list_like_nan(self, op, array, nulls_fixture):
+ other = [nulls_fixture] * 4
+ result = op(array, other)
+ expected = self.elementwise_comparison(op, array, other)
+ tm.assert_numpy_array_equal(result, expected)
+
+ @pytest.mark.parametrize(
+ "other",
+ [
+ np.arange(4, dtype="int64"),
+ np.arange(4, dtype="float64"),
+ date_range("2017-01-01", periods=4),
+ date_range("2017-01-01", periods=4, tz="US/Eastern"),
+ timedelta_range("0 days", periods=4),
+ period_range("2017-01-01", periods=4, freq="D"),
+ Categorical(list("abab")),
+ Categorical(date_range("2017-01-01", periods=4)),
+ pd.array(list("abcd")),
+ pd.array(["foo", 3.14, None, object()]),
+ ],
+ ids=lambda x: str(x.dtype),
+ )
+ def test_compare_list_like_other(self, op, array, other):
+ result = op(array, other)
+ expected = self.elementwise_comparison(op, array, other)
+ tm.assert_numpy_array_equal(result, expected)
+
+ @pytest.mark.parametrize("length", [1, 3, 5])
+ @pytest.mark.parametrize("other_constructor", [IntervalArray, list])
+ def test_compare_length_mismatch_errors(self, op, other_constructor, length):
+ array = IntervalArray.from_arrays(range(4), range(1, 5))
+ other = other_constructor([Interval(0, 1)] * length)
+ with pytest.raises(ValueError, match="Lengths must match to compare"):
+ op(array, other)
+
+ @pytest.mark.parametrize(
+ "constructor, expected_type, assert_func",
+ [
+ (IntervalIndex, np.array, tm.assert_numpy_array_equal),
+ (Series, Series, tm.assert_series_equal),
+ ],
+ )
+ def test_index_series_compat(self, op, constructor, expected_type, assert_func):
+ # IntervalIndex/Series that rely on IntervalArray for comparisons
+ breaks = range(4)
+ index = constructor(IntervalIndex.from_breaks(breaks))
+
+ # scalar comparisons
+ other = index[0]
+ result = op(index, other)
+ expected = expected_type(self.elementwise_comparison(op, index, other))
+ assert_func(result, expected)
+
+ other = breaks[0]
+ result = op(index, other)
+ expected = expected_type(self.elementwise_comparison(op, index, other))
+ assert_func(result, expected)
+
+ # list-like comparisons
+ other = IntervalArray.from_breaks(breaks)
+ result = op(index, other)
+ expected = expected_type(self.elementwise_comparison(op, index, other))
+ assert_func(result, expected)
+
+ other = [index[0], breaks[0], "foo"]
+ result = op(index, other)
+ expected = expected_type(self.elementwise_comparison(op, index, other))
+ assert_func(result, expected)
+
+
def test_repr():
# GH 25022
arr = IntervalArray.from_tuples([(0, 1), (1, 2)])
diff --git a/pandas/tests/series/test_arithmetic.py b/pandas/tests/series/test_arithmetic.py
index 68d6169fa4f34..412bd1c63d140 100644
--- a/pandas/tests/series/test_arithmetic.py
+++ b/pandas/tests/series/test_arithmetic.py
@@ -171,6 +171,14 @@ def test_ser_cmp_result_names(self, names, op):
result = op(ser, tdi)
assert result.name == names[2]
+ # interval dtype
+ if op in [operator.eq, operator.ne]:
+ # interval dtype comparisons not yet implemented
+ ii = pd.interval_range(start=0, periods=5, name=names[0])
+ ser = Series(ii).rename(names[1])
+ result = op(ser, ii)
+ assert result.name == names[2]
+
# categorical
if op in [operator.eq, operator.ne]:
# categorical dtype comparisons raise for inequalities
| - [X] closes #24112
- [X] tests added / passed
- [X] passes `black pandas`
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [X] whatsnew entry
cc @TomAugspurger | https://api.github.com/repos/pandas-dev/pandas/pulls/30640 | 2020-01-03T09:45:37Z | 2020-01-05T21:33:12Z | 2020-01-05T21:33:11Z | 2020-01-05T21:33:16Z |
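The `elementwise_comparison` helper used throughout the interval tests above can be sketched without pandas — the minimal `Interval` dataclass below is a stand-in for `pd.Interval`, so the types are illustrative only, not the pandas implementation:

```python
import operator
from dataclasses import dataclass

@dataclass
class Interval:  # minimal stand-in for pd.Interval
    left: int
    right: int

def elementwise_comparison(op, array, other):
    # Broadcast a scalar `other` to the length of `array`, as the test
    # helper above does, then compare element by element.
    if not hasattr(other, "__len__"):
        other = [other] * len(array)
    return [op(x, y) for x, y in zip(array, other)]

arr = [Interval(0, 1), Interval(1, 2), Interval(2, 3)]
print(elementwise_comparison(operator.eq, arr, Interval(1, 2)))
# [False, True, False]
```

The same helper handles list-like right-hand sides unchanged, since only scalars are broadcast.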
BLD: address build warnings | diff --git a/pandas/_libs/algos_take_helper.pxi.in b/pandas/_libs/algos_take_helper.pxi.in
index 420e08a3d68d4..995fabbedcb5d 100644
--- a/pandas/_libs/algos_take_helper.pxi.in
+++ b/pandas/_libs/algos_take_helper.pxi.in
@@ -116,7 +116,7 @@ def take_2d_axis0_{{name}}_{{dest}}(ndarray[{{c_type_in}}, ndim=2] values,
IF {{True if c_type_in == c_type_out != "object" else False}}:
cdef:
- {{c_type_out}} *v
+ const {{c_type_out}} *v
{{c_type_out}} *o
# GH#3130
diff --git a/pandas/_libs/hashtable.pyx b/pandas/_libs/hashtable.pyx
index 6e68a687de94a..59ba1705d2dbb 100644
--- a/pandas/_libs/hashtable.pyx
+++ b/pandas/_libs/hashtable.pyx
@@ -1,7 +1,7 @@
cimport cython
from cpython.ref cimport PyObject, Py_INCREF
-from cpython.mem cimport PyMem_Malloc, PyMem_Realloc, PyMem_Free
+from cpython.mem cimport PyMem_Malloc, PyMem_Free
from libc.stdlib cimport malloc, free
diff --git a/pandas/_libs/tslibs/conversion.pxd b/pandas/_libs/tslibs/conversion.pxd
index 0b77948027ad7..36e6b14be182a 100644
--- a/pandas/_libs/tslibs/conversion.pxd
+++ b/pandas/_libs/tslibs/conversion.pxd
@@ -1,6 +1,6 @@
# -*- coding: utf-8 -*-
-from cpython.datetime cimport datetime, tzinfo
+from cpython.datetime cimport datetime
from numpy cimport int64_t, int32_t
| xref #30609 | https://api.github.com/repos/pandas-dev/pandas/pulls/30639 | 2020-01-03T04:42:21Z | 2020-01-06T00:30:21Z | 2020-01-06T00:30:21Z | 2020-01-06T01:04:48Z |
ENH: Create DockerFile and devcontainer.json files to work with Docker and VS Code in Containers | diff --git a/.devcontainer.json b/.devcontainer.json
new file mode 100644
index 0000000000000..315a1ff647012
--- /dev/null
+++ b/.devcontainer.json
@@ -0,0 +1,28 @@
+// For format details, see https://aka.ms/vscode-remote/devcontainer.json or the definition README at
+// https://github.com/microsoft/vscode-dev-containers/tree/master/containers/python-3-miniconda
+{
+ "name": "pandas",
+ "context": ".",
+ "dockerFile": "Dockerfile",
+
+ // Use 'settings' to set *default* container specific settings.json values on container create.
+ // You can edit these settings after create using File > Preferences > Settings > Remote.
+ "settings": {
+ "terminal.integrated.shell.linux": "/bin/bash",
+ "python.condaPath": "/opt/conda/bin/conda",
+ "python.pythonPath": "/opt/conda/bin/python",
+ "python.formatting.provider": "black",
+ "python.linting.enabled": true,
+ "python.linting.flake8Enabled": true,
+ "python.linting.pylintEnabled": false,
+ "python.linting.mypyEnabled": true,
+ "python.testing.pytestEnabled": true,
+ "python.testing.cwd": "pandas/tests"
+ },
+
+ // Add the IDs of extensions you want installed when the container is created in the array below.
+ "extensions": [
+ "ms-python.python",
+ "ms-vscode.cpptools"
+ ]
+}
diff --git a/Dockerfile b/Dockerfile
new file mode 100644
index 0000000000000..b8aff5d671dcf
--- /dev/null
+++ b/Dockerfile
@@ -0,0 +1,47 @@
+FROM continuumio/miniconda3
+
+# if you forked pandas, you can pass in your own GitHub username to use your fork
+# i.e. gh_username=myname
+ARG gh_username=pandas-dev
+ARG pandas_home="/home/pandas"
+
+# Avoid warnings by switching to noninteractive
+ENV DEBIAN_FRONTEND=noninteractive
+
+# Configure apt and install packages
+RUN apt-get update \
+ && apt-get -y install --no-install-recommends apt-utils dialog 2>&1 \
+ #
+ # Verify git, process tools, lsb-release (common in install instructions for CLIs) installed
+    && apt-get -y install git iproute2 procps lsb-release \
+ #
+ # Install C compilers (gcc not enough, so just went with build-essential which admittedly might be overkill),
+ # needed to build pandas C extensions
+ && apt-get -y install build-essential \
+ #
+ # cleanup
+ && apt-get autoremove -y \
+ && apt-get clean -y \
+ && rm -rf /var/lib/apt/lists/*
+
+# Switch back to dialog for any ad-hoc use of apt-get
+ENV DEBIAN_FRONTEND=dialog
+
+# Clone pandas repo
+RUN mkdir "$pandas_home" \
+ && git clone "https://github.com/$gh_username/pandas.git" "$pandas_home" \
+ && cd "$pandas_home" \
+ && git remote add upstream "https://github.com/pandas-dev/pandas.git" \
+ && git pull upstream master
+
+# Because it is surprisingly difficult to activate a conda environment inside a Dockerfile
+# (from personal experience and per https://github.com/ContinuumIO/docker-images/issues/89),
+# we just update the base/root one from the 'environment.yml' file instead of creating a new one.
+#
+# Set up environment
+RUN conda env update -n base -f "$pandas_home/environment.yml"
+
+# Build C extensions and pandas
+RUN cd "$pandas_home" \
+ && python setup.py build_ext --inplace -j 4 \
+ && python -m pip install -e .
diff --git a/doc/source/development/contributing.rst b/doc/source/development/contributing.rst
index 2dc5ed07544d1..10395e74660ab 100644
--- a/doc/source/development/contributing.rst
+++ b/doc/source/development/contributing.rst
@@ -146,6 +146,17 @@ requires a C compiler and Python environment. If you're making documentation
changes, you can skip to :ref:`contributing.documentation` but you won't be able
to build the documentation locally before pushing your changes.
+Using a Docker Container
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Instead of manually setting up a development environment, you can use Docker to
+automatically create the environment with just a few commands. Pandas provides a ``Dockerfile``
+in the root directory to build a Docker image with a full pandas development environment.
+
+Even easier, you can use the ``Dockerfile`` to launch a remote session with Visual Studio Code,
+a popular free IDE, using the ``.devcontainer.json`` file.
+See https://code.visualstudio.com/docs/remote/containers for details.
+
.. _contributing.dev_c:
Installing a C compiler
| - [x] closes #30614
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
The instructions aren't that well written, and should probably be improved, but here's a first shot. | https://api.github.com/repos/pandas-dev/pandas/pulls/30638 | 2020-01-03T04:03:13Z | 2020-01-19T01:00:59Z | 2020-01-19T01:00:58Z | 2020-01-19T03:53:33Z |
REF/BUG: DTA/TDA/PA comparison ops inconsistencies | diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index aeb953031ae89..dcdde4d7fb13a 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -140,18 +140,19 @@ def _dt_array_cmp(cls, op):
@unpack_zerodim_and_defer(opname)
def wrapper(self, other):
- if isinstance(other, (datetime, np.datetime64, str)):
- if isinstance(other, (datetime, np.datetime64)):
- # GH#18435 strings get a pass from tzawareness compat
- self._assert_tzawareness_compat(other)
-
+ if isinstance(other, str):
try:
- other = _to_M8(other, tz=self.tz)
+ # GH#18435 strings get a pass from tzawareness compat
+ other = self._scalar_from_string(other)
except ValueError:
# string that cannot be parsed to Timestamp
return invalid_comparison(self, other, op)
- result = op(self.asi8, other.view("i8"))
+ if isinstance(other, (datetime, np.datetime64)):
+ other = Timestamp(other)
+ self._assert_tzawareness_compat(other)
+
+ result = op(self.asi8, other.value)
if isna(other):
result.fill(nat_result)
elif lib.is_scalar(other) or np.ndim(other) == 0:
@@ -164,9 +165,7 @@ def wrapper(self, other):
other = type(self)._from_sequence(other)
except ValueError:
other = np.array(other, dtype=np.object_)
- elif not isinstance(
- other, (np.ndarray, ABCIndexClass, ABCSeries, DatetimeArray)
- ):
+ elif not isinstance(other, (np.ndarray, DatetimeArray)):
# Following Timestamp convention, __eq__ is all-False
# and __ne__ is all True, others raise TypeError.
return invalid_comparison(self, other, op)
@@ -185,8 +184,6 @@ def wrapper(self, other):
return invalid_comparison(self, other, op)
else:
self._assert_tzawareness_compat(other)
- if isinstance(other, (ABCIndexClass, ABCSeries)):
- other = other.array
if (
is_datetime64_dtype(other)
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index 854d9067f2f2a..056c80717e54f 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -45,6 +45,7 @@
from pandas.core.arrays import datetimelike as dtl
import pandas.core.common as com
from pandas.core.ops.common import unpack_zerodim_and_defer
+from pandas.core.ops.invalid import invalid_comparison
from pandas.tseries import frequencies
from pandas.tseries.offsets import DateOffset, Tick, _delta_to_tick
@@ -75,6 +76,18 @@ def wrapper(self, other):
if is_list_like(other) and len(other) != len(self):
raise ValueError("Lengths must match")
+ if isinstance(other, str):
+ try:
+ other = self._scalar_from_string(other)
+ except ValueError:
+ # string that can't be parsed as Period
+ return invalid_comparison(self, other, op)
+ elif isinstance(other, int):
+            # TODO: sure we want to allow this? we don't for DTA/TDA
+ # 2 tests rely on this
+ other = Period(other, freq=self.freq)
+ result = ordinal_op(other.ordinal)
+
if isinstance(other, Period):
self._check_compatible_with(other)
@@ -93,8 +106,7 @@ def wrapper(self, other):
result = np.empty(len(self.asi8), dtype=bool)
result.fill(nat_result)
else:
- other = Period(other, freq=self.freq)
- result = ordinal_op(other.ordinal)
+ return invalid_comparison(self, other, op)
if self._hasnans:
result[self._isnan] = nat_result
diff --git a/pandas/core/arrays/timedeltas.py b/pandas/core/arrays/timedeltas.py
index 87a76b8681da4..098ad268784ed 100644
--- a/pandas/core/arrays/timedeltas.py
+++ b/pandas/core/arrays/timedeltas.py
@@ -82,13 +82,16 @@ def _td_array_cmp(cls, op):
@unpack_zerodim_and_defer(opname)
def wrapper(self, other):
- if _is_convertible_to_td(other) or other is NaT:
+ if isinstance(other, str):
try:
- other = Timedelta(other)
+ other = self._scalar_from_string(other)
except ValueError:
# failed to parse as timedelta
return invalid_comparison(self, other, op)
+ if _is_convertible_to_td(other) or other is NaT:
+ other = Timedelta(other)
+
result = op(self.view("i8"), other.value)
if isna(other):
result.fill(nat_result)
diff --git a/pandas/tests/arithmetic/test_period.py b/pandas/tests/arithmetic/test_period.py
index 8bc952e85bb5d..3ad7a6d8e465c 100644
--- a/pandas/tests/arithmetic/test_period.py
+++ b/pandas/tests/arithmetic/test_period.py
@@ -17,6 +17,8 @@
from pandas.tseries.frequencies import to_offset
+from .common import assert_invalid_comparison
+
# ------------------------------------------------------------------
# Comparisons
@@ -39,6 +41,15 @@ def test_compare_zerodim(self, box_with_array):
expected = tm.box_expected(expected, xbox)
tm.assert_equal(result, expected)
+ @pytest.mark.parametrize(
+ "scalar", ["foo", pd.Timestamp.now(), pd.Timedelta(days=4)]
+ )
+ def test_compare_invalid_scalar(self, box_with_array, scalar):
+ # comparison with scalar that cannot be interpreted as a Period
+ pi = pd.period_range("2000", periods=4)
+ parr = tm.box_expected(pi, box_with_array)
+ assert_invalid_comparison(parr, scalar, box_with_array)
+
class TestPeriodIndexComparisons:
# TODO: parameterize over boxes
| It will take a couple of steps to get these three methods to all behave the same. Following that we can move to share code between them.
This came up when trying to make the indexes dispatch `searchsorted` and it turns out that the DTI/TDI/PI searchsorted methods are not consistent with their DTA/TDA/PA counterparts. The fix is to have shared validation/casting methods, which should in turn be consistent with the comparison ops. | https://api.github.com/repos/pandas-dev/pandas/pulls/30637 | 2020-01-03T03:21:47Z | 2020-01-03T12:20:31Z | 2020-01-03T12:20:31Z | 2020-01-03T15:21:12Z |
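The `invalid_comparison` helper that the patch above routes unrecognized scalars through follows the `Timestamp` convention (`==` is all-False, `!=` is all-True, ordering raises). A simplified pure-Python sketch, using plain lists in place of ndarrays:

```python
import operator

def invalid_comparison(left, right, op):
    # Following the Timestamp convention referenced in the diff:
    # __eq__ is all-False, __ne__ is all-True, and ordering
    # comparisons against an incomparable object raise TypeError.
    if op is operator.eq:
        return [False] * len(left)
    if op is operator.ne:
        return [True] * len(left)
    raise TypeError(
        f"Invalid comparison between {type(left).__name__} and {type(right).__name__}"
    )

values = [1, 2, 3]
print(invalid_comparison(values, "foo", operator.eq))  # [False, False, False]
print(invalid_comparison(values, "foo", operator.ne))  # [True, True, True]
```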
DEPR: Deprecate numpy argument in read_json | diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index 82e01b62efbb9..9f99f36b6007d 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -2066,6 +2066,8 @@ The Numpy parameter
+++++++++++++++++++
.. note::
+ This param has been deprecated as of version 1.0.0 and will raise a ``FutureWarning``.
+
This supports numeric data only. Index and columns labels may be non-numeric, e.g. strings, dates etc.
If ``numpy=True`` is passed to ``read_json`` an attempt will be made to sniff
@@ -2088,6 +2090,7 @@ data:
%timeit pd.read_json(jsonfloats)
.. ipython:: python
+ :okwarning:
%timeit pd.read_json(jsonfloats, numpy=True)
@@ -2102,6 +2105,7 @@ The speedup is less noticeable for smaller datasets:
%timeit pd.read_json(jsonfloats)
.. ipython:: python
+ :okwarning:
%timeit pd.read_json(jsonfloats, numpy=True)
diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
index a9a0d89ed01aa..fda0bc4d50f27 100755
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -576,6 +576,7 @@ Deprecations
- :func:`pandas.json_normalize` is now exposed in the top-level namespace.
Usage of ``json_normalize`` as ``pandas.io.json.json_normalize`` is now deprecated and
it is recommended to use ``json_normalize`` as :func:`pandas.json_normalize` instead (:issue:`27586`).
+- The ``numpy`` argument of :meth:`pandas.read_json` is deprecated (:issue:`28512`).
- :meth:`DataFrame.to_stata`, :meth:`DataFrame.to_feather`, and :meth:`DataFrame.to_parquet` argument "fname" is deprecated, use "path" instead (:issue:`23574`)
- The deprecated internal attributes ``_start``, ``_stop`` and ``_step`` of :class:`RangeIndex` now raise a ``FutureWarning`` instead of a ``DeprecationWarning`` (:issue:`26581`)
- The ``pandas.util.testing`` module has been deprecated. Use the public API in ``pandas.testing`` documented at :ref:`api.general.testing` (:issue:`16232`).
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
index 7f2aab569ab71..4b82e722405ff 100644
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -11,6 +11,7 @@
from pandas._libs.tslibs import iNaT
from pandas._typing import JSONSerializable
from pandas.errors import AbstractMethodError
+from pandas.util._decorators import deprecate_kwarg
from pandas.core.dtypes.common import ensure_str, is_period_dtype
@@ -346,6 +347,7 @@ def _write(
return serialized
+@deprecate_kwarg(old_arg_name="numpy", new_arg_name=None)
def read_json(
path_or_buf=None,
orient=None,
@@ -459,6 +461,8 @@ def read_json(
non-numeric column and index labels are supported. Note also that the
JSON ordering MUST be the same for each term if numpy=True.
+ .. deprecated:: 1.0.0
+
precise_float : bool, default False
Set to enable usage of higher precision (strtod) function when
decoding string to double values. Default (False) is to use fast but
diff --git a/pandas/tests/io/json/test_pandas.py b/pandas/tests/io/json/test_pandas.py
index 09d8a1d3f10ea..e909a4952948c 100644
--- a/pandas/tests/io/json/test_pandas.py
+++ b/pandas/tests/io/json/test_pandas.py
@@ -39,6 +39,7 @@ def assert_json_roundtrip_equal(result, expected, orient):
tm.assert_frame_equal(result, expected)
+@pytest.mark.filterwarnings("ignore:the 'numpy' keyword is deprecated:FutureWarning")
class TestPandasContainer:
@pytest.fixture(scope="function", autouse=True)
def setup(self, datapath):
@@ -1606,3 +1607,10 @@ def test_emca_262_nan_inf_support(self):
["a", np.nan, "NaN", np.inf, "Infinity", -np.inf, "-Infinity"]
)
tm.assert_frame_equal(result, expected)
+
+ def test_deprecate_numpy_argument_read_json(self):
+ # GH 28512
+ expected = DataFrame([1, 2, 3])
+ with tm.assert_produces_warning(FutureWarning):
+ result = read_json(expected.to_json(), numpy=True)
+ tm.assert_frame_equal(result, expected)
| Co-authored-by: Luca Ionescu <lucaionescu@users.noreply.github.com>
- Continuing https://github.com/pandas-dev/pandas/pull/28562
- [x] closes #28512
@lucaionescu - I've merged master and pushed to here (I don't have the permissions to push to your branch), so I will aim to fix up the tests. Or feel free to take it from here if you have time?
Opening up a WIP PR to see what test failures need addressing.
I've also co-authored the commit. | https://api.github.com/repos/pandas-dev/pandas/pulls/30636 | 2020-01-03T02:41:41Z | 2020-01-09T03:25:48Z | 2020-01-09T03:25:47Z | 2020-01-09T03:26:09Z |
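The `deprecate_kwarg(old_arg_name="numpy", new_arg_name=None)` decorator applied in the patch above can be approximated with a small wrapper — a minimal sketch, not the actual `pandas.util._decorators` implementation:

```python
import functools
import warnings

def deprecate_kwarg(old_arg_name):
    # Simplified sketch of the new_arg_name=None case: warn with a
    # FutureWarning when the keyword is passed, then call through unchanged.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if old_arg_name in kwargs:
                warnings.warn(
                    f"the {old_arg_name!r} keyword is deprecated and "
                    "will be removed in a future version",
                    FutureWarning,
                    stacklevel=2,
                )
            return func(*args, **kwargs)
        return wrapper
    return decorator

@deprecate_kwarg("numpy")
def read_json(path_or_buf, numpy=False):  # stand-in for pd.read_json
    return {"source": path_or_buf, "numpy": numpy}

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = read_json("data.json", numpy=True)

print(type(caught[0].message).__name__, result["numpy"])  # FutureWarning True
```

Calls that omit the deprecated keyword go through silently, matching the test added in the diff.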
BUG: Index.__new__ with Interval/Period data and object dtype | diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 28e144a957d1c..7324292cb7c0a 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -291,11 +291,15 @@ def __new__(
return CategoricalIndex(data, dtype=dtype, copy=copy, name=name, **kwargs)
# interval
- elif (
- is_interval_dtype(data) or is_interval_dtype(dtype)
- ) and not is_object_dtype(dtype):
- closed = kwargs.get("closed", None)
- return IntervalIndex(data, dtype=dtype, name=name, copy=copy, closed=closed)
+ elif is_interval_dtype(data) or is_interval_dtype(dtype):
+ closed = kwargs.pop("closed", None)
+ if is_dtype_equal(_o_dtype, dtype):
+ return IntervalIndex(
+ data, name=name, copy=copy, closed=closed, **kwargs
+ ).astype(object)
+ return IntervalIndex(
+ data, dtype=dtype, name=name, copy=copy, closed=closed, **kwargs
+ )
elif (
is_datetime64_any_dtype(data)
@@ -325,8 +329,10 @@ def __new__(
else:
return TimedeltaIndex(data, copy=copy, name=name, dtype=dtype, **kwargs)
- elif is_period_dtype(data) and not is_object_dtype(dtype):
- return PeriodIndex(data, copy=copy, name=name, **kwargs)
+ elif is_period_dtype(data) or is_period_dtype(dtype):
+ if is_dtype_equal(_o_dtype, dtype):
+ return PeriodIndex(data, copy=False, name=name, **kwargs).astype(object)
+ return PeriodIndex(data, dtype=dtype, copy=copy, name=name, **kwargs)
# extension dtype
elif is_extension_array_dtype(data) or is_extension_array_dtype(dtype):
diff --git a/pandas/tests/indexes/period/test_constructors.py b/pandas/tests/indexes/period/test_constructors.py
index 2adce0b7f8b44..d87e49e3cba2a 100644
--- a/pandas/tests/indexes/period/test_constructors.py
+++ b/pandas/tests/indexes/period/test_constructors.py
@@ -7,14 +7,11 @@
import pandas as pd
from pandas import Index, Period, PeriodIndex, Series, date_range, offsets, period_range
-import pandas.core.indexes.period as period
+from pandas.core.arrays import PeriodArray
import pandas.util.testing as tm
class TestPeriodIndex:
- def setup_method(self, method):
- pass
-
def test_construction_base_constructor(self):
# GH 13664
arr = [pd.Period("2011-01", freq="M"), pd.NaT, pd.Period("2011-03", freq="M")]
@@ -32,6 +29,30 @@ def test_construction_base_constructor(self):
pd.Index(np.array(arr)), pd.Index(np.array(arr), dtype=object)
)
+ def test_base_constructor_with_period_dtype(self):
+ dtype = PeriodDtype("D")
+ values = ["2011-01-01", "2012-03-04", "2014-05-01"]
+ result = pd.Index(values, dtype=dtype)
+
+ expected = pd.PeriodIndex(values, dtype=dtype)
+ tm.assert_index_equal(result, expected)
+
+ @pytest.mark.parametrize(
+ "values_constructor", [list, np.array, PeriodIndex, PeriodArray._from_sequence]
+ )
+ def test_index_object_dtype(self, values_constructor):
+ # Index(periods, dtype=object) is an Index (not an PeriodIndex)
+ periods = [
+ pd.Period("2011-01", freq="M"),
+ pd.NaT,
+ pd.Period("2011-03", freq="M"),
+ ]
+ values = values_constructor(periods)
+ result = Index(values, dtype=object)
+
+ assert type(result) is Index
+ tm.assert_numpy_array_equal(result.values, np.array(values))
+
def test_constructor_use_start_freq(self):
# GH #1118
p = Period("4/2/2012", freq="B")
@@ -201,7 +222,7 @@ def test_constructor_dtype(self):
assert res.dtype == "period[M]"
msg = "specified freq and dtype are different"
- with pytest.raises(period.IncompatibleFrequency, match=msg):
+ with pytest.raises(IncompatibleFrequency, match=msg):
PeriodIndex(["2011-01"], freq="M", dtype="period[D]")
def test_constructor_empty(self):
@@ -261,12 +282,12 @@ def test_constructor_pi_nat(self):
def test_constructor_incompat_freq(self):
msg = "Input has different freq=D from PeriodIndex\\(freq=M\\)"
- with pytest.raises(period.IncompatibleFrequency, match=msg):
+ with pytest.raises(IncompatibleFrequency, match=msg):
PeriodIndex(
[Period("2011-01", freq="M"), pd.NaT, Period("2011-01", freq="D")]
)
- with pytest.raises(period.IncompatibleFrequency, match=msg):
+ with pytest.raises(IncompatibleFrequency, match=msg):
PeriodIndex(
np.array(
[Period("2011-01", freq="M"), pd.NaT, Period("2011-01", freq="D")]
@@ -274,12 +295,12 @@ def test_constructor_incompat_freq(self):
)
# first element is pd.NaT
- with pytest.raises(period.IncompatibleFrequency, match=msg):
+ with pytest.raises(IncompatibleFrequency, match=msg):
PeriodIndex(
[pd.NaT, Period("2011-01", freq="M"), Period("2011-01", freq="D")]
)
- with pytest.raises(period.IncompatibleFrequency, match=msg):
+ with pytest.raises(IncompatibleFrequency, match=msg):
PeriodIndex(
np.array(
[pd.NaT, Period("2011-01", freq="M"), Period("2011-01", freq="D")]
| - [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
xref #17246, #21311
After this we now use the same pattern when calling DatetimeIndex, TimedeltaIndex, PeriodIndex, and IntervalIndex, so we can make a helper function and de-duplicate this code.
The cases of categorical or range data with object-dtype are not yet handled. | https://api.github.com/repos/pandas-dev/pandas/pulls/30635 | 2020-01-03T02:34:15Z | 2020-01-03T03:40:48Z | 2020-01-03T03:40:48Z | 2020-01-03T03:47:55Z |
CI: Remove pin google-cloud-bigquery | diff --git a/ci/deps/travis-36-cov.yaml b/ci/deps/travis-36-cov.yaml
index c1403f8eb8409..f44bf8c14b467 100644
--- a/ci/deps/travis-36-cov.yaml
+++ b/ci/deps/travis-36-cov.yaml
@@ -30,8 +30,6 @@ dependencies:
- openpyxl<=3.0.1
# https://github.com/pandas-dev/pandas/pull/30009 openpyxl 3.0.2 broke
- pandas-gbq
- # https://github.com/pydata/pandas-gbq/issues/271
- - google-cloud-bigquery<=1.11
- psycopg2
- pyarrow>=0.12.0
- pymysql
| xref: https://github.com/pydata/pandas-gbq/issues/271 is now resolved.
This can be removed since `pandas-gbq` depends on `google-cloud-bigquery` (see https://github.com/pydata/pandas-gbq/blob/master/setup.py). | https://api.github.com/repos/pandas-dev/pandas/pulls/30633 | 2020-01-03T00:40:51Z | 2020-01-03T01:16:52Z | 2020-01-03T01:16:52Z | 2020-01-04T15:16:04Z
TPY: Add Types to gbq.py | diff --git a/pandas/io/gbq.py b/pandas/io/gbq.py
index d9711f4f4626a..69ebc470fba6f 100644
--- a/pandas/io/gbq.py
+++ b/pandas/io/gbq.py
@@ -1,6 +1,11 @@
""" Google BigQuery support """
+from typing import TYPE_CHECKING, Any, Dict, List, Optional, Union
+
from pandas.compat._optional import import_optional_dependency
+if TYPE_CHECKING:
+ from pandas import DataFrame
+
def _try_import():
# since pandas is a dependency of pandas-gbq
@@ -14,21 +19,21 @@ def _try_import():
def read_gbq(
- query,
- project_id=None,
- index_col=None,
- col_order=None,
- reauth=False,
- auth_local_webserver=False,
- dialect=None,
- location=None,
- configuration=None,
+ query: str,
+ project_id: Optional[str] = None,
+ index_col: Optional[str] = None,
+ col_order: Optional[List[str]] = None,
+ reauth: bool = False,
+ auth_local_webserver: bool = False,
+ dialect: Optional[str] = None,
+ location: Optional[str] = None,
+ configuration: Optional[Dict[str, Any]] = None,
credentials=None,
- use_bqstorage_api=None,
+ use_bqstorage_api: Optional[bool] = None,
private_key=None,
verbose=None,
- progress_bar_type=None,
-):
+ progress_bar_type: Optional[str] = None,
+) -> "DataFrame":
"""
Load data from Google BigQuery.
@@ -157,7 +162,7 @@ def read_gbq(
"""
pandas_gbq = _try_import()
- kwargs = {}
+ kwargs: Dict[str, Union[str, bool]] = {}
# START: new kwargs. Don't populate unless explicitly set.
if use_bqstorage_api is not None:
@@ -183,20 +188,20 @@ def read_gbq(
def to_gbq(
- dataframe,
- destination_table,
- project_id=None,
- chunksize=None,
- reauth=False,
- if_exists="fail",
- auth_local_webserver=False,
- table_schema=None,
- location=None,
- progress_bar=True,
+ dataframe: "DataFrame",
+ destination_table: str,
+ project_id: Optional[str] = None,
+ chunksize: Optional[int] = None,
+ reauth: bool = False,
+ if_exists: str = "fail",
+ auth_local_webserver: bool = False,
+ table_schema: Optional[List[Dict[str, str]]] = None,
+ location: Optional[str] = None,
+ progress_bar: bool = True,
credentials=None,
verbose=None,
private_key=None,
-):
+) -> None:
pandas_gbq = _try_import()
pandas_gbq.to_gbq(
dataframe,
| Add Types to args for `to_gbq` and `read_gbq`. | https://api.github.com/repos/pandas-dev/pandas/pulls/30632 | 2020-01-03T00:34:03Z | 2020-01-04T23:37:18Z | 2020-01-04T23:37:18Z | 2020-01-05T00:03:54Z |
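The `TYPE_CHECKING` guard added in the patch above lets annotations reference `DataFrame` without importing pandas at runtime; a minimal sketch of the pattern (the `describe` function here is hypothetical):

```python
from typing import TYPE_CHECKING, Optional

if TYPE_CHECKING:
    # Resolved only by static type checkers; never executed at runtime,
    # which avoids a runtime import of a package the module may not need.
    from pandas import DataFrame

def describe(df: Optional["DataFrame"] = None) -> str:
    # The quoted annotation defers evaluation, so this function runs
    # even in an environment where pandas was never imported.
    return "no frame" if df is None else f"frame with {len(df)} rows"

print(describe())  # no frame
```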
CLN: Update old string formatting to f-string | diff --git a/pandas/core/ops/array_ops.py b/pandas/core/ops/array_ops.py
index e0ddd17335175..b84d468fff736 100644
--- a/pandas/core/ops/array_ops.py
+++ b/pandas/core/ops/array_ops.py
@@ -246,7 +246,7 @@ def comparison_op(
res_values = comp_method_OBJECT_ARRAY(op, lvalues, rvalues)
else:
- op_name = "__{op}__".format(op=op.__name__)
+ op_name = f"__{op.__name__}__"
method = getattr(lvalues, op_name)
with np.errstate(all="ignore"):
res_values = method(rvalues)
@@ -254,9 +254,8 @@ def comparison_op(
if res_values is NotImplemented:
res_values = invalid_comparison(lvalues, rvalues, op)
if is_scalar(res_values):
- raise TypeError(
- "Could not compare {typ} type with Series".format(typ=type(rvalues))
- )
+ typ = type(rvalues)
+ raise TypeError(f"Could not compare {typ} type with Series")
return res_values
@@ -293,11 +292,10 @@ def na_logical_op(x: np.ndarray, y, op):
OverflowError,
NotImplementedError,
):
+ typ = type(y).__name__
raise TypeError(
- "Cannot perform '{op}' with a dtyped [{dtype}] array "
- "and scalar of type [{typ}]".format(
- op=op.__name__, dtype=x.dtype, typ=type(y).__name__
- )
+ f"Cannot perform '{op.__name__}' with a dtyped [{x.dtype}] array "
+ f"and scalar of type [{typ}]"
)
return result
diff --git a/pandas/core/ops/dispatch.py b/pandas/core/ops/dispatch.py
index 1eb952c1394ac..f35279378dc65 100644
--- a/pandas/core/ops/dispatch.py
+++ b/pandas/core/ops/dispatch.py
@@ -210,10 +210,10 @@ def not_implemented(*args, **kwargs):
if method == "__call__" and op_name in special and kwargs.get("out") is None:
if isinstance(inputs[0], type(self)):
- name = "__{}__".format(op_name)
+ name = f"__{op_name}__"
return getattr(self, name, not_implemented)(inputs[1])
else:
- name = flipped.get(op_name, "__r{}__".format(op_name))
+ name = flipped.get(op_name, f"__r{op_name}__")
return getattr(self, name, not_implemented)(inputs[0])
else:
return NotImplemented
diff --git a/pandas/core/ops/invalid.py b/pandas/core/ops/invalid.py
index 013ff7689b221..cc4a1f11edd2b 100644
--- a/pandas/core/ops/invalid.py
+++ b/pandas/core/ops/invalid.py
@@ -30,11 +30,8 @@ def invalid_comparison(left, right, op):
elif op is operator.ne:
res_values = np.ones(left.shape, dtype=bool)
else:
- raise TypeError(
- "Invalid comparison between dtype={dtype} and {typ}".format(
- dtype=left.dtype, typ=type(right).__name__
- )
- )
+ typ = type(right).__name__
+ raise TypeError(f"Invalid comparison between dtype={left.dtype} and {typ}")
return res_values
@@ -52,10 +49,8 @@ def make_invalid_op(name: str):
"""
def invalid_op(self, other=None):
- raise TypeError(
- "cannot perform {name} with this index type: "
- "{typ}".format(name=name, typ=type(self).__name__)
- )
+ typ = type(self).__name__
+ raise TypeError(f"cannot perform {name} with this index type: {typ}")
invalid_op.__name__ = name
return invalid_op
diff --git a/pandas/core/ops/methods.py b/pandas/core/ops/methods.py
index 8c66eea270c76..c04658565f235 100644
--- a/pandas/core/ops/methods.py
+++ b/pandas/core/ops/methods.py
@@ -102,7 +102,8 @@ def f(self, other):
return self
- f.__name__ = "__i{name}__".format(name=method.__name__.strip("__"))
+ name = method.__name__.strip("__")
+ f.__name__ = f"__i{name}__"
return f
new_methods.update(
@@ -214,7 +215,7 @@ def _create_methods(cls, arith_method, comp_method, bool_method, special):
)
if special:
- dunderize = lambda x: "__{name}__".format(name=x.strip("_"))
+ dunderize = lambda x: f"__{x.strip('_')}__"
else:
dunderize = lambda x: x
new_methods = {dunderize(k): v for k, v in new_methods.items()}
diff --git a/pandas/core/ops/roperator.py b/pandas/core/ops/roperator.py
index 4cb02238aea16..e6691ddf8984e 100644
--- a/pandas/core/ops/roperator.py
+++ b/pandas/core/ops/roperator.py
@@ -34,9 +34,8 @@ def rmod(left, right):
# formatting operation; this is a TypeError
# otherwise perform the op
if isinstance(right, str):
- raise TypeError(
- "{typ} cannot perform the operation mod".format(typ=type(left).__name__)
- )
+ typ = type(left).__name__
+ raise TypeError(f"{typ} cannot perform the operation mod")
return right % left
| - [x] contributes to #29547
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
Updates:
pandas/core/ops/array_ops.py
pandas/core/ops/dispatch.py
pandas/core/ops/docstrings.py
pandas/core/ops/invalid.py
pandas/core/ops/methods.py
pandas/core/ops/roperator.py
| https://api.github.com/repos/pandas-dev/pandas/pulls/30631 | 2020-01-02T23:28:55Z | 2020-01-03T22:21:19Z | 2020-01-03T22:21:19Z | 2020-01-03T23:17:46Z |
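The `make_invalid_op` rewrite from `pandas/core/ops/invalid.py` in the diff above is small enough to run in isolation; extracted here to show the f-string style (pulling `type(self).__name__` into a local before interpolating):

```python
def make_invalid_op(name: str):
    # Post-change style: bind type(self).__name__ to a local and build
    # the message with an f-string instead of str.format.
    def invalid_op(self, other=None):
        typ = type(self).__name__
        raise TypeError(f"cannot perform {name} with this index type: {typ}")

    invalid_op.__name__ = name
    return invalid_op
```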
CI: Fix Flakey GBQ Tests | diff --git a/pandas/tests/io/test_gbq.py b/pandas/tests/io/test_gbq.py
index 48c8923dab7cd..7a5eba5264421 100644
--- a/pandas/tests/io/test_gbq.py
+++ b/pandas/tests/io/test_gbq.py
@@ -68,6 +68,10 @@ def _get_client():
return bigquery.Client(project=project_id, credentials=credentials)
+def generate_rand_str(length: int = 10) -> str:
+ return "".join(random.choices(string.ascii_lowercase, k=length))
+
+
def make_mixed_dataframe_v2(test_size):
# create df to test for all BQ datatypes except RECORD
bools = np.random.randint(2, size=(1, test_size)).astype(bool)
@@ -153,19 +157,15 @@ def gbq_dataset(self):
_skip_if_no_project_id()
_skip_if_no_private_key_path()
- dataset_id = "pydata_pandas_bq_testing_py31"
+ dataset_id = "pydata_pandas_bq_testing_" + generate_rand_str()
self.client = _get_client()
self.dataset = self.client.dataset(dataset_id)
- try:
- # Clean-up previous test runs.
- self.client.delete_dataset(self.dataset, delete_contents=True)
- except api_exceptions.NotFound:
- pass # It's OK if the dataset doesn't already exist.
+ # Create the dataset
self.client.create_dataset(bigquery.Dataset(self.dataset))
- table_name = "".join(random.choices(string.ascii_lowercase, k=10))
+ table_name = generate_rand_str()
destination_table = f"{dataset_id}.{table_name}"
yield destination_table
| -ref https://github.com/pandas-dev/pandas/pull/30478#issuecomment-570068174
We see the below in the logs:
```
google.api_core.exceptions.Conflict: 409 POST https://bigquery.googleapis.com/bigquery/v2/projects/pandas-travis/datasets: Already Exists: Dataset pandas-travis:pydata_pandas_bq_testing_py31
```
https://travis-ci.org/pandas-dev/pandas/jobs/631599036
Despite attempting to delete the dataset in the previous line.
`self.client.delete_dataset(self.dataset, delete_contents=True)`
Since we run with `dist=loadfile` these tests are run sequentially by pytest. But they could potentially clash across builds?
We now create a unique dataset name per test function and tear it down when complete.
GBQ Tests will run against my fork will post results on here.
cc. @jreback, @tswast | https://api.github.com/repos/pandas-dev/pandas/pulls/30630 | 2020-01-02T23:25:10Z | 2020-01-03T02:45:05Z | 2020-01-03T02:45:05Z | 2020-01-03T02:45:10Z |
REF: implement indexes.extension to share delegation | diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 306ccf176f970..7bf1a601a0ab6 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -2,7 +2,7 @@
Base and utility classes for tseries type pandas objects.
"""
import operator
-from typing import List, Set
+from typing import List, Optional, Set
import numpy as np
@@ -40,28 +40,9 @@
from pandas.tseries.frequencies import DateOffset, to_offset
-_index_doc_kwargs = dict(ibase._index_doc_kwargs)
-
+from .extension import inherit_names
-def ea_passthrough(array_method):
- """
- Make an alias for a method of the underlying ExtensionArray.
-
- Parameters
- ----------
- array_method : method on an Array class
-
- Returns
- -------
- method
- """
-
- def method(self, *args, **kwargs):
- return array_method(self._data, *args, **kwargs)
-
- method.__name__ = array_method.__name__
- method.__doc__ = array_method.__doc__
- return method
+_index_doc_kwargs = dict(ibase._index_doc_kwargs)
def _make_wrapped_arith_op(opname):
@@ -100,48 +81,34 @@ def wrapper(left, right):
return wrapper
+@inherit_names(
+ ["inferred_freq", "_isnan", "_resolution", "resolution"],
+ DatetimeLikeArrayMixin,
+ cache=True,
+)
+@inherit_names(
+ ["__iter__", "mean", "freq", "freqstr", "_ndarray_values", "asi8", "_box_values"],
+ DatetimeLikeArrayMixin,
+)
class DatetimeIndexOpsMixin(ExtensionOpsMixin):
"""
Common ops mixin to support a unified interface datetimelike Index.
"""
_data: ExtensionArray
+ freq: Optional[DateOffset]
+ freqstr: Optional[str]
+ _resolution: int
+ _bool_ops: List[str] = []
+ _field_ops: List[str] = []
- # DatetimeLikeArrayMixin assumes subclasses are mutable, so these are
- # properties there. They can be made into cache_readonly for Index
- # subclasses bc they are immutable
- inferred_freq = cache_readonly(
- DatetimeLikeArrayMixin.inferred_freq.fget # type: ignore
- )
- _isnan = cache_readonly(DatetimeLikeArrayMixin._isnan.fget) # type: ignore
hasnans = cache_readonly(DatetimeLikeArrayMixin._hasnans.fget) # type: ignore
_hasnans = hasnans # for index / array -agnostic code
- _resolution = cache_readonly(
- DatetimeLikeArrayMixin._resolution.fget # type: ignore
- )
- resolution = cache_readonly(DatetimeLikeArrayMixin.resolution.fget) # type: ignore
-
- __iter__ = ea_passthrough(DatetimeLikeArrayMixin.__iter__)
- mean = ea_passthrough(DatetimeLikeArrayMixin.mean)
@property
def is_all_dates(self) -> bool:
return True
- @property
- def freq(self):
- """
- Return the frequency object if it is set, otherwise None.
- """
- return self._data.freq
-
- @property
- def freqstr(self):
- """
- Return the frequency object as a string if it is set, otherwise None.
- """
- return self._data.freqstr
-
def unique(self, level=None):
if level is not None:
self._validate_index_level(level)
@@ -172,10 +139,6 @@ def wrapper(self, other):
wrapper.__name__ = f"__{op.__name__}__"
return wrapper
- @property
- def _ndarray_values(self) -> np.ndarray:
- return self._data._ndarray_values
-
# ------------------------------------------------------------------------
# Abstract data attributes
@@ -184,11 +147,6 @@ def values(self):
# Note: PeriodArray overrides this to return an ndarray of objects.
return self._data._data
- @property # type: ignore # https://github.com/python/mypy/issues/1362
- @Appender(DatetimeLikeArrayMixin.asi8.__doc__)
- def asi8(self):
- return self._data.asi8
-
def __array_wrap__(self, result, context=None):
"""
Gets called after a ufunc.
@@ -248,9 +206,6 @@ def _ensure_localized(
return type(self)._simple_new(result, name=self.name)
return arg
- def _box_values(self, values):
- return self._data._box_values(values)
-
@Appender(_index_shared_docs["contains"] % _index_doc_kwargs)
def __contains__(self, key):
try:
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index 698576a90bb7e..eefd33c7a9c34 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -35,6 +35,7 @@
DatetimelikeDelegateMixin,
DatetimeTimedeltaMixin,
)
+from pandas.core.indexes.extension import inherit_names
from pandas.core.ops import get_op_result_name
import pandas.core.tools.datetimes as tools
@@ -72,6 +73,7 @@ class DatetimeDelegateMixin(DatetimelikeDelegateMixin):
"_local_timestamps",
"_has_same_tz",
"_format_native_types",
+ "__iter__",
]
_extra_raw_properties = ["_box_func", "tz", "tzinfo", "dtype"]
_delegated_properties = DatetimeArray._datetimelike_ops + _extra_raw_properties
@@ -87,6 +89,17 @@ class DatetimeDelegateMixin(DatetimelikeDelegateMixin):
_delegate_class = DatetimeArray
+@inherit_names(["_timezone", "is_normalized", "_resolution"], DatetimeArray, cache=True)
+@inherit_names(
+ [
+ "_bool_ops",
+ "_object_ops",
+ "_field_ops",
+ "_datetimelike_ops",
+ "_datetimelike_methods",
+ ],
+ DatetimeArray,
+)
@delegate_names(
DatetimeArray, DatetimeDelegateMixin._delegated_properties, typ="property"
)
@@ -209,15 +222,6 @@ class DatetimeIndex(DatetimeTimedeltaMixin, DatetimeDelegateMixin):
_is_numeric_dtype = False
_infer_as_myclass = True
- # Use faster implementation given we know we have DatetimeArrays
- __iter__ = DatetimeArray.__iter__
- # some things like freq inference make use of these attributes.
- _bool_ops = DatetimeArray._bool_ops
- _object_ops = DatetimeArray._object_ops
- _field_ops = DatetimeArray._field_ops
- _datetimelike_ops = DatetimeArray._datetimelike_ops
- _datetimelike_methods = DatetimeArray._datetimelike_methods
-
tz: Optional[tzinfo]
# --------------------------------------------------------------------
@@ -962,10 +966,6 @@ def slice_indexer(self, start=None, end=None, step=None, kind=None):
# --------------------------------------------------------------------
# Wrapping DatetimeArray
- _timezone = cache_readonly(DatetimeArray._timezone.fget) # type: ignore
- is_normalized = cache_readonly(DatetimeArray.is_normalized.fget) # type: ignore
- _resolution = cache_readonly(DatetimeArray._resolution.fget) # type: ignore
-
def __getitem__(self, key):
result = self._data.__getitem__(key)
if is_scalar(result):
diff --git a/pandas/core/indexes/extension.py b/pandas/core/indexes/extension.py
new file mode 100644
index 0000000000000..779cd8eac4eaf
--- /dev/null
+++ b/pandas/core/indexes/extension.py
@@ -0,0 +1,78 @@
+"""
+Shared methods for Index subclasses backed by ExtensionArray.
+"""
+from typing import List
+
+from pandas.util._decorators import cache_readonly
+
+
+def inherit_from_data(name: str, delegate, cache: bool = False):
+ """
+ Make an alias for a method of the underlying ExtensionArray.
+
+ Parameters
+ ----------
+ name : str
+ Name of an attribute the class should inherit from its EA parent.
+ delegate : class
+ cache : bool, default False
+ Whether to convert wrapped properties into cache_readonly
+
+ Returns
+ -------
+ attribute, method, property, or cache_readonly
+ """
+
+ attr = getattr(delegate, name)
+
+ if isinstance(attr, property):
+ if cache:
+ method = cache_readonly(attr.fget)
+
+ else:
+
+ def fget(self):
+ return getattr(self._data, name)
+
+ def fset(self, value):
+ setattr(self._data, name, value)
+
+ fget.__name__ = name
+ fget.__doc__ = attr.__doc__
+
+ method = property(fget, fset)
+
+ elif not callable(attr):
+ # just a normal attribute, no wrapping
+ method = attr
+
+ else:
+
+ def method(self, *args, **kwargs):
+ result = attr(self._data, *args, **kwargs)
+ return result
+
+ method.__name__ = name
+ method.__doc__ = attr.__doc__
+ return method
+
+
+def inherit_names(names: List[str], delegate, cache: bool = False):
+ """
+ Class decorator to pin attributes from an ExtensionArray to a Index subclass.
+
+ Parameters
+ ----------
+ names : List[str]
+ delegate : class
+ cache : bool, default False
+ """
+
+ def wrapper(cls):
+ for name in names:
+ meth = inherit_from_data(name, delegate, cache=cache)
+ setattr(cls, name, meth)
+
+ return cls
+
+ return wrapper
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index d16eb230b9f33..c69ea8a5f779b 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -58,6 +58,8 @@
from pandas.tseries.frequencies import to_offset
from pandas.tseries.offsets import DateOffset
+from .extension import inherit_names
+
_VALID_CLOSED = {"left", "right", "both", "neither"}
_index_doc_kwargs = dict(ibase._index_doc_kwargs)
@@ -199,10 +201,11 @@ def func(intvidx_self, other, sort=False):
)
@accessor.delegate_names(
delegate=IntervalArray,
- accessors=["__array__", "overlaps", "contains"],
+ accessors=["__array__", "overlaps", "contains", "__len__", "set_closed"],
typ="method",
overwrite=True,
)
+@inherit_names(["is_non_overlapping_monotonic", "mid"], IntervalArray, cache=True)
class IntervalIndex(IntervalMixin, Index, accessor.PandasDelegate):
_typ = "intervalindex"
_comparables = ["name"]
@@ -412,34 +415,6 @@ def to_tuples(self, na_tuple=True):
def _multiindex(self):
return MultiIndex.from_arrays([self.left, self.right], names=["left", "right"])
- @Appender(
- _interval_shared_docs["set_closed"]
- % dict(
- klass="IntervalIndex",
- examples=textwrap.dedent(
- """\
- Examples
- --------
- >>> index = pd.interval_range(0, 3)
- >>> index
- IntervalIndex([(0, 1], (1, 2], (2, 3]],
- closed='right',
- dtype='interval[int64]')
- >>> index.set_closed('both')
- IntervalIndex([[0, 1], [1, 2], [2, 3]],
- closed='both',
- dtype='interval[int64]')
- """
- ),
- )
- )
- def set_closed(self, closed):
- array = self._data.set_closed(closed)
- return self._simple_new(array, self.name) # TODO: can we use _shallow_copy?
-
- def __len__(self) -> int:
- return len(self.left)
-
@cache_readonly
def values(self):
"""
@@ -479,13 +454,6 @@ def memory_usage(self, deep: bool = False) -> int:
# so return the bytes here
return self.left.memory_usage(deep=deep) + self.right.memory_usage(deep=deep)
- @cache_readonly
- def mid(self):
- """
- Return the midpoint of each Interval in the IntervalIndex as an Index.
- """
- return self._data.mid
-
@cache_readonly
def is_monotonic(self) -> bool:
"""
@@ -534,11 +502,6 @@ def is_unique(self):
return True
- @cache_readonly
- @Appender(_interval_shared_docs["is_non_overlapping_monotonic"] % _index_doc_kwargs)
- def is_non_overlapping_monotonic(self):
- return self._data.is_non_overlapping_monotonic
-
@property
def is_overlapping(self):
"""
diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index eba4726755234..894b430f1c4fd 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -31,6 +31,7 @@
DatetimelikeDelegateMixin,
DatetimeTimedeltaMixin,
)
+from pandas.core.indexes.extension import inherit_names
from pandas.tseries.frequencies import to_offset
@@ -52,6 +53,17 @@ class TimedeltaDelegateMixin(DatetimelikeDelegateMixin):
)
+@inherit_names(
+ [
+ "_bool_ops",
+ "_object_ops",
+ "_field_ops",
+ "_datetimelike_ops",
+ "_datetimelike_methods",
+ "_other_ops",
+ ],
+ TimedeltaArray,
+)
@delegate_names(
TimedeltaArray, TimedeltaDelegateMixin._delegated_properties, typ="property"
)
@@ -125,15 +137,6 @@ class TimedeltaIndex(
_is_numeric_dtype = True
_infer_as_myclass = True
- _freq = None
-
- _bool_ops = TimedeltaArray._bool_ops
- _object_ops = TimedeltaArray._object_ops
- _field_ops = TimedeltaArray._field_ops
- _datetimelike_ops = TimedeltaArray._datetimelike_ops
- _datetimelike_methods = TimedeltaArray._datetimelike_methods
- _other_ops = TimedeltaArray._other_ops
-
# -------------------------------------------------------------------
# Constructors
| In follow-up steps, I plan to
- remove the DatetimeDelegateMixin entirely and just use inherit_names
- de-duplicate the slightly-different comparison method code from CategoricalIndex vs DatetimelikeIndex
- move the remaining wrapping utilities from datetimelike to extension.py, after double-checking if we can remove any layers
- delegate more methods, some of which depend on smoothing out small idiosyncrasies, e.g. #30627. | https://api.github.com/repos/pandas-dev/pandas/pulls/30629 | 2020-01-02T21:44:35Z | 2020-01-03T02:00:15Z | 2020-01-03T02:00:15Z | 2020-01-03T02:09:43Z |
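A simplified version of the new `inherit_names` decorator shows the delegation mechanics. This sketch ignores the `cache=True` and property branches that the real `pandas/core/indexes/extension.py` handles; it only covers plain attributes and methods.

```python
from typing import List

def inherit_names(names: List[str], delegate, cache: bool = False):
    # Simplified sketch: pin each named attribute of the wrapped array
    # class onto the Index subclass. Methods are re-bound to operate on
    # self._data; plain data attributes are copied across unchanged.
    # (The real version also turns properties into cache_readonly.)
    def wrapper(cls):
        for name in names:
            attr = getattr(delegate, name)
            if callable(attr):
                def method(self, *args, _attr=attr, **kwargs):
                    return _attr(self._data, *args, **kwargs)
                method.__name__ = name
                method.__doc__ = attr.__doc__
                setattr(cls, name, method)
            else:
                setattr(cls, name, attr)
        return cls
    return wrapper
```

In the test below, `FakeArray`/`FakeIndex` are invented stand-ins for the ExtensionArray/Index pair.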
DOC: Add strings for dtypes in basic.rst | diff --git a/doc/source/getting_started/basics.rst b/doc/source/getting_started/basics.rst
index d489d35dc1226..f47fa48eb6202 100644
--- a/doc/source/getting_started/basics.rst
+++ b/doc/source/getting_started/basics.rst
@@ -1937,21 +1937,36 @@ See :ref:`extending.extension-types` for how to write your own extension that
works with pandas. See :ref:`ecosystem.extensions` for a list of third-party
libraries that have implemented an extension.
-The following table lists all of pandas extension types. See the respective
+The following table lists all of pandas extension types. For methods requiring ``dtype``
+arguments, strings can be specified as indicated. See the respective
documentation sections for more on each type.
-=================== ========================= ================== ============================= =============================
-Kind of Data Data Type Scalar Array Documentation
-=================== ========================= ================== ============================= =============================
-tz-aware datetime :class:`DatetimeTZDtype` :class:`Timestamp` :class:`arrays.DatetimeArray` :ref:`timeseries.timezone`
-Categorical :class:`CategoricalDtype` (none) :class:`Categorical` :ref:`categorical`
-period (time spans) :class:`PeriodDtype` :class:`Period` :class:`arrays.PeriodArray` :ref:`timeseries.periods`
-sparse :class:`SparseDtype` (none) :class:`arrays.SparseArray` :ref:`sparse`
-intervals :class:`IntervalDtype` :class:`Interval` :class:`arrays.IntervalArray` :ref:`advanced.intervalindex`
-nullable integer :class:`Int64Dtype`, ... (none) :class:`arrays.IntegerArray` :ref:`integer_na`
-Strings :class:`StringDtype` :class:`str` :class:`arrays.StringArray` :ref:`text`
-Boolean (with NA) :class:`BooleanDtype` :class:`bool` :class:`arrays.BooleanArray` :ref:`api.arrays.bool`
-=================== ========================= ================== ============================= =============================
++-------------------+---------------------------+--------------------+-------------------------------+-----------------------------------------+-------------------------------+
+| Kind of Data | Data Type | Scalar | Array | String Aliases | Documentation |
++===================+===========================+====================+===============================+=========================================+===============================+
+| tz-aware datetime | :class:`DatetimeTZDtype` | :class:`Timestamp` | :class:`arrays.DatetimeArray` | ``'datetime64[ns, <tz>]'`` | :ref:`timeseries.timezone` |
++-------------------+---------------------------+--------------------+-------------------------------+-----------------------------------------+-------------------------------+
+| Categorical | :class:`CategoricalDtype` | (none) | :class:`Categorical` | ``'category'`` | :ref:`categorical` |
++-------------------+---------------------------+--------------------+-------------------------------+-----------------------------------------+-------------------------------+
+| period | :class:`PeriodDtype` | :class:`Period` | :class:`arrays.PeriodArray` | ``'period[<freq>]'``, | :ref:`timeseries.periods` |
+| (time spans) | | | | ``'Period[<freq>]'`` | |
++-------------------+---------------------------+--------------------+-------------------------------+-----------------------------------------+-------------------------------+
+| sparse | :class:`SparseDtype` | (none) | :class:`SparseArray` | ``'Sparse'``, ``'Sparse[int]'``, | :ref:`sparse` |
+| | | | | ``'Sparse[float]'`` | |
++-------------------+---------------------------+--------------------+-------------------------------+-----------------------------------------+-------------------------------+
+| intervals | :class:`IntervalDtype` | :class:`Interval` | :class:`arrays.IntervalArray` | ``'interval'``, ``'Interval'``, | :ref:`advanced.intervalindex` |
+| | | | | ``'Interval[<numpy_dtype>]'``, | |
+| | | | | ``'Interval[datetime64[ns, <tz>]]'``, | |
+| | | | | ``'Interval[timedelta64[<freq>]]'`` | |
++-------------------+---------------------------+--------------------+-------------------------------+-----------------------------------------+-------------------------------+
+| nullable integer + :class:`Int64Dtype`, ... | (none) | :class:`arrays.IntegerArray` | ``'Int8'``, ``'Int16'``, ``'Int32'``, | :ref:`integer_na` |
+| | | | | ``'Int64'``, ``'UInt8'``, ``'UInt16'``, | |
+| | | | | ``'UInt32'``, ``'UInt64'`` | |
++-------------------+---------------------------+--------------------+-------------------------------+-----------------------------------------+-------------------------------+
+| Strings | :class:`StringDtype` | :class:`str` | :class:`arrays.StringArray` | ``'string'`` | :ref:`text` |
++-------------------+---------------------------+--------------------+-------------------------------+-----------------------------------------+-------------------------------+
+| Boolean (with NA) | :class:`BooleanDtype` | :class:`bool` | :class:`arrays.BooleanArray` | ``'boolean'`` | :ref:`api.arrays.bool` |
++-------------------+---------------------------+--------------------+-------------------------------+-----------------------------------------+-------------------------------+
Pandas has two ways to store strings.
diff --git a/doc/source/reference/arrays.rst b/doc/source/reference/arrays.rst
index cf14d28772f4c..2c8382e916ed8 100644
--- a/doc/source/reference/arrays.rst
+++ b/doc/source/reference/arrays.rst
@@ -12,7 +12,8 @@ For most data types, pandas uses NumPy arrays as the concrete
objects contained with a :class:`Index`, :class:`Series`, or
:class:`DataFrame`.
-For some data types, pandas extends NumPy's type system.
+For some data types, pandas extends NumPy's type system. String aliases for these types
+can be found at :ref:`basics.dtypes`.
=================== ========================= ================== =============================
Kind of Data Pandas Data Type Scalar Array
| - [x] closes #30590
- [ ] tests added / passed
- N/A
- [ ] passes `black pandas`
- N/A
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- N/A
- [ ] whatsnew entry
- N/A
Decided to add all valid strings for all the types in the table about dtypes. I didn't want to decide which ones to leave out, and if we want to leave some of them out, we should decide whether they should be removed from the code as well (e.g., `'Sparse[int, 0]'`)
Had to reformat the table so it would look nice by splitting the lists of strings across multiple lines (i.e., using grid-table merged cells)
| https://api.github.com/repos/pandas-dev/pandas/pulls/30628 | 2020-01-02T21:33:29Z | 2020-01-03T13:18:35Z | 2020-01-03T13:18:34Z | 2020-01-03T21:39:21Z |
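Several aliases in the new table are parameterized (e.g. `'period[<freq>]'`, `'Sparse[int]'`, `'datetime64[ns, <tz>]'`). The helper below is purely illustrative — not a pandas API — and just splits such a string into its base name and parameter to make the alias shapes concrete:

```python
from typing import Optional, Tuple

def split_alias(alias: str) -> Tuple[str, Optional[str]]:
    # Illustrative only (not a pandas API): separate a parameterized
    # dtype string like 'period[D]' into base name and parameter.
    if alias.endswith("]") and "[" in alias:
        base, _, rest = alias.partition("[")
        return base, rest[:-1]
    return alias, None
```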