pandas.Series.str.islower Series.str.islower()[source] Check whether all characters in each string are lowercase. This is equivalent to running the Python string method str.islower() for each element of the Series/Index. If a string has zero characters, False is returned for that check. Returns Series or Index of bool Series or Index of boolean values with the same length as the original Series/Index. See also Series.str.isalpha Check whether all characters are alphabetic. Series.str.isnumeric Check whether all characters are numeric. Series.str.isalnum Check whether all characters are alphanumeric. Series.str.isdigit Check whether all characters are digits. Series.str.isdecimal Check whether all characters are decimal. Series.str.isspace Check whether all characters are whitespace. Series.str.islower Check whether all characters are lowercase. Series.str.isupper Check whether all characters are uppercase. Series.str.istitle Check whether all characters are titlecase. Examples Checks for Alphabetic and Numeric Characters >>> s1 = pd.Series(['one', 'one1', '1', '']) >>> s1.str.isalpha() 0 True 1 False 2 False 3 False dtype: bool >>> s1.str.isnumeric() 0 False 1 False 2 True 3 False dtype: bool >>> s1.str.isalnum() 0 True 1 True 2 True 3 False dtype: bool Note that checks against characters mixed with any additional punctuation or whitespace will evaluate to false for an alphanumeric check. >>> s2 = pd.Series(['A B', '1.5', '3,000']) >>> s2.str.isalnum() 0 False 1 False 2 False dtype: bool More Detailed Checks for Numeric Characters There are several different but overlapping sets of numeric characters that can be checked for. >>> s3 = pd.Series(['23', '³', '⅕', '']) The s3.str.isdecimal method checks for characters used to form numbers in base 10. >>> s3.str.isdecimal() 0 True 1 False 2 False 3 False dtype: bool The s3.str.isdigit method is the same as s3.str.isdecimal but also includes special digits, like superscripted and subscripted digits in unicode. >>> s3.str.isdigit() 0 True 1 True 2 False 3 False dtype: bool The s3.str.isnumeric method is the same as s3.str.isdigit but also includes other characters that can represent quantities such as unicode fractions. >>> s3.str.isnumeric() 0 True 1 True 2 True 3 False dtype: bool Checks for Whitespace >>> s4 = pd.Series([' ', '\t\r\n ', '']) >>> s4.str.isspace() 0 True 1 True 2 False dtype: bool Checks for Character Case >>> s5 = pd.Series(['leopard', 'Golden Eagle', 'SNAKE', '']) >>> s5.str.islower() 0 True 1 False 2 False 3 False dtype: bool >>> s5.str.isupper() 0 False 1 False 2 True 3 False dtype: bool The s5.str.istitle method checks whether all words are in title case (whether only the first letter of each word is capitalized). Words are assumed to be any sequence of non-numeric characters separated by whitespace characters. >>> s5.str.istitle() 0 False 1 True 2 False 3 False dtype: bool
pandas.reference.api.pandas.series.str.islower
pandas.Series.str.isnumeric Series.str.isnumeric()[source] Check whether all characters in each string are numeric. This is equivalent to running the Python string method str.isnumeric() for each element of the Series/Index. If a string has zero characters, False is returned for that check. Returns Series or Index of bool Series or Index of boolean values with the same length as the original Series/Index. See also Series.str.isalpha Check whether all characters are alphabetic. Series.str.isnumeric Check whether all characters are numeric. Series.str.isalnum Check whether all characters are alphanumeric. Series.str.isdigit Check whether all characters are digits. Series.str.isdecimal Check whether all characters are decimal. Series.str.isspace Check whether all characters are whitespace. Series.str.islower Check whether all characters are lowercase. Series.str.isupper Check whether all characters are uppercase. Series.str.istitle Check whether all characters are titlecase. Examples Checks for Alphabetic and Numeric Characters >>> s1 = pd.Series(['one', 'one1', '1', '']) >>> s1.str.isalpha() 0 True 1 False 2 False 3 False dtype: bool >>> s1.str.isnumeric() 0 False 1 False 2 True 3 False dtype: bool >>> s1.str.isalnum() 0 True 1 True 2 True 3 False dtype: bool Note that checks against characters mixed with any additional punctuation or whitespace will evaluate to false for an alphanumeric check. >>> s2 = pd.Series(['A B', '1.5', '3,000']) >>> s2.str.isalnum() 0 False 1 False 2 False dtype: bool More Detailed Checks for Numeric Characters There are several different but overlapping sets of numeric characters that can be checked for. >>> s3 = pd.Series(['23', '³', '⅕', '']) The s3.str.isdecimal method checks for characters used to form numbers in base 10. >>> s3.str.isdecimal() 0 True 1 False 2 False 3 False dtype: bool The s3.str.isdigit method is the same as s3.str.isdecimal but also includes special digits, like superscripted and subscripted digits in unicode. >>> s3.str.isdigit() 0 True 1 True 2 False 3 False dtype: bool The s3.str.isnumeric method is the same as s3.str.isdigit but also includes other characters that can represent quantities such as unicode fractions. >>> s3.str.isnumeric() 0 True 1 True 2 True 3 False dtype: bool Checks for Whitespace >>> s4 = pd.Series([' ', '\t\r\n ', '']) >>> s4.str.isspace() 0 True 1 True 2 False dtype: bool Checks for Character Case >>> s5 = pd.Series(['leopard', 'Golden Eagle', 'SNAKE', '']) >>> s5.str.islower() 0 True 1 False 2 False 3 False dtype: bool >>> s5.str.isupper() 0 False 1 False 2 True 3 False dtype: bool The s5.str.istitle method checks whether all words are in title case (whether only the first letter of each word is capitalized). Words are assumed to be any sequence of non-numeric characters separated by whitespace characters. >>> s5.str.istitle() 0 False 1 True 2 False 3 False dtype: bool
pandas.reference.api.pandas.series.str.isnumeric
pandas.Series.str.isspace Series.str.isspace()[source] Check whether all characters in each string are whitespace. This is equivalent to running the Python string method str.isspace() for each element of the Series/Index. If a string has zero characters, False is returned for that check. Returns Series or Index of bool Series or Index of boolean values with the same length as the original Series/Index. See also Series.str.isalpha Check whether all characters are alphabetic. Series.str.isnumeric Check whether all characters are numeric. Series.str.isalnum Check whether all characters are alphanumeric. Series.str.isdigit Check whether all characters are digits. Series.str.isdecimal Check whether all characters are decimal. Series.str.isspace Check whether all characters are whitespace. Series.str.islower Check whether all characters are lowercase. Series.str.isupper Check whether all characters are uppercase. Series.str.istitle Check whether all characters are titlecase. Examples Checks for Alphabetic and Numeric Characters >>> s1 = pd.Series(['one', 'one1', '1', '']) >>> s1.str.isalpha() 0 True 1 False 2 False 3 False dtype: bool >>> s1.str.isnumeric() 0 False 1 False 2 True 3 False dtype: bool >>> s1.str.isalnum() 0 True 1 True 2 True 3 False dtype: bool Note that checks against characters mixed with any additional punctuation or whitespace will evaluate to false for an alphanumeric check. >>> s2 = pd.Series(['A B', '1.5', '3,000']) >>> s2.str.isalnum() 0 False 1 False 2 False dtype: bool More Detailed Checks for Numeric Characters There are several different but overlapping sets of numeric characters that can be checked for. >>> s3 = pd.Series(['23', '³', '⅕', '']) The s3.str.isdecimal method checks for characters used to form numbers in base 10. >>> s3.str.isdecimal() 0 True 1 False 2 False 3 False dtype: bool The s3.str.isdigit method is the same as s3.str.isdecimal but also includes special digits, like superscripted and subscripted digits in unicode. >>> s3.str.isdigit() 0 True 1 True 2 False 3 False dtype: bool The s3.str.isnumeric method is the same as s3.str.isdigit but also includes other characters that can represent quantities such as unicode fractions. >>> s3.str.isnumeric() 0 True 1 True 2 True 3 False dtype: bool Checks for Whitespace >>> s4 = pd.Series([' ', '\t\r\n ', '']) >>> s4.str.isspace() 0 True 1 True 2 False dtype: bool Checks for Character Case >>> s5 = pd.Series(['leopard', 'Golden Eagle', 'SNAKE', '']) >>> s5.str.islower() 0 True 1 False 2 False 3 False dtype: bool >>> s5.str.isupper() 0 False 1 False 2 True 3 False dtype: bool The s5.str.istitle method checks whether all words are in title case (whether only the first letter of each word is capitalized). Words are assumed to be any sequence of non-numeric characters separated by whitespace characters. >>> s5.str.istitle() 0 False 1 True 2 False 3 False dtype: bool
pandas.reference.api.pandas.series.str.isspace
pandas.Series.str.istitle Series.str.istitle()[source] Check whether all characters in each string are titlecase. This is equivalent to running the Python string method str.istitle() for each element of the Series/Index. If a string has zero characters, False is returned for that check. Returns Series or Index of bool Series or Index of boolean values with the same length as the original Series/Index. See also Series.str.isalpha Check whether all characters are alphabetic. Series.str.isnumeric Check whether all characters are numeric. Series.str.isalnum Check whether all characters are alphanumeric. Series.str.isdigit Check whether all characters are digits. Series.str.isdecimal Check whether all characters are decimal. Series.str.isspace Check whether all characters are whitespace. Series.str.islower Check whether all characters are lowercase. Series.str.isupper Check whether all characters are uppercase. Series.str.istitle Check whether all characters are titlecase. Examples Checks for Alphabetic and Numeric Characters >>> s1 = pd.Series(['one', 'one1', '1', '']) >>> s1.str.isalpha() 0 True 1 False 2 False 3 False dtype: bool >>> s1.str.isnumeric() 0 False 1 False 2 True 3 False dtype: bool >>> s1.str.isalnum() 0 True 1 True 2 True 3 False dtype: bool Note that checks against characters mixed with any additional punctuation or whitespace will evaluate to false for an alphanumeric check. >>> s2 = pd.Series(['A B', '1.5', '3,000']) >>> s2.str.isalnum() 0 False 1 False 2 False dtype: bool More Detailed Checks for Numeric Characters There are several different but overlapping sets of numeric characters that can be checked for. >>> s3 = pd.Series(['23', '³', '⅕', '']) The s3.str.isdecimal method checks for characters used to form numbers in base 10. >>> s3.str.isdecimal() 0 True 1 False 2 False 3 False dtype: bool The s3.str.isdigit method is the same as s3.str.isdecimal but also includes special digits, like superscripted and subscripted digits in unicode. >>> s3.str.isdigit() 0 True 1 True 2 False 3 False dtype: bool The s3.str.isnumeric method is the same as s3.str.isdigit but also includes other characters that can represent quantities such as unicode fractions. >>> s3.str.isnumeric() 0 True 1 True 2 True 3 False dtype: bool Checks for Whitespace >>> s4 = pd.Series([' ', '\t\r\n ', '']) >>> s4.str.isspace() 0 True 1 True 2 False dtype: bool Checks for Character Case >>> s5 = pd.Series(['leopard', 'Golden Eagle', 'SNAKE', '']) >>> s5.str.islower() 0 True 1 False 2 False 3 False dtype: bool >>> s5.str.isupper() 0 False 1 False 2 True 3 False dtype: bool The s5.str.istitle method checks whether all words are in title case (whether only the first letter of each word is capitalized). Words are assumed to be any sequence of non-numeric characters separated by whitespace characters. >>> s5.str.istitle() 0 False 1 True 2 False 3 False dtype: bool
pandas.reference.api.pandas.series.str.istitle
pandas.Series.str.isupper Series.str.isupper()[source] Check whether all characters in each string are uppercase. This is equivalent to running the Python string method str.isupper() for each element of the Series/Index. If a string has zero characters, False is returned for that check. Returns Series or Index of bool Series or Index of boolean values with the same length as the original Series/Index. See also Series.str.isalpha Check whether all characters are alphabetic. Series.str.isnumeric Check whether all characters are numeric. Series.str.isalnum Check whether all characters are alphanumeric. Series.str.isdigit Check whether all characters are digits. Series.str.isdecimal Check whether all characters are decimal. Series.str.isspace Check whether all characters are whitespace. Series.str.islower Check whether all characters are lowercase. Series.str.isupper Check whether all characters are uppercase. Series.str.istitle Check whether all characters are titlecase. Examples Checks for Alphabetic and Numeric Characters >>> s1 = pd.Series(['one', 'one1', '1', '']) >>> s1.str.isalpha() 0 True 1 False 2 False 3 False dtype: bool >>> s1.str.isnumeric() 0 False 1 False 2 True 3 False dtype: bool >>> s1.str.isalnum() 0 True 1 True 2 True 3 False dtype: bool Note that checks against characters mixed with any additional punctuation or whitespace will evaluate to false for an alphanumeric check. >>> s2 = pd.Series(['A B', '1.5', '3,000']) >>> s2.str.isalnum() 0 False 1 False 2 False dtype: bool More Detailed Checks for Numeric Characters There are several different but overlapping sets of numeric characters that can be checked for. >>> s3 = pd.Series(['23', '³', '⅕', '']) The s3.str.isdecimal method checks for characters used to form numbers in base 10. >>> s3.str.isdecimal() 0 True 1 False 2 False 3 False dtype: bool The s3.str.isdigit method is the same as s3.str.isdecimal but also includes special digits, like superscripted and subscripted digits in unicode. >>> s3.str.isdigit() 0 True 1 True 2 False 3 False dtype: bool The s3.str.isnumeric method is the same as s3.str.isdigit but also includes other characters that can represent quantities such as unicode fractions. >>> s3.str.isnumeric() 0 True 1 True 2 True 3 False dtype: bool Checks for Whitespace >>> s4 = pd.Series([' ', '\t\r\n ', '']) >>> s4.str.isspace() 0 True 1 True 2 False dtype: bool Checks for Character Case >>> s5 = pd.Series(['leopard', 'Golden Eagle', 'SNAKE', '']) >>> s5.str.islower() 0 True 1 False 2 False 3 False dtype: bool >>> s5.str.isupper() 0 False 1 False 2 True 3 False dtype: bool The s5.str.istitle method checks whether all words are in title case (whether only the first letter of each word is capitalized). Words are assumed to be any sequence of non-numeric characters separated by whitespace characters. >>> s5.str.istitle() 0 False 1 True 2 False 3 False dtype: bool
pandas.reference.api.pandas.series.str.isupper
pandas.Series.str.join Series.str.join(sep)[source] Join lists contained as elements in the Series/Index with passed delimiter. If the elements of a Series are lists themselves, join the content of these lists using the delimiter passed to the function. This function is equivalent to str.join(). Parameters sep:str Delimiter to use between list entries. Returns Series/Index: object The list entries concatenated by intervening occurrences of the delimiter. Raises AttributeError If the supplied Series contains neither strings nor lists. See also str.join Standard library version of this method. Series.str.split Split strings around given separator/delimiter. Notes If any of the list items is not a string object, the result of the join will be NaN. Examples Example with a list that contains non-string elements. >>> s = pd.Series([['lion', 'elephant', 'zebra'], ... [1.1, 2.2, 3.3], ... ['cat', np.nan, 'dog'], ... ['cow', 4.5, 'goat'], ... ['duck', ['swan', 'fish'], 'guppy']]) >>> s 0 [lion, elephant, zebra] 1 [1.1, 2.2, 3.3] 2 [cat, nan, dog] 3 [cow, 4.5, goat] 4 [duck, [swan, fish], guppy] dtype: object Join all lists using a ‘-’. The lists containing object(s) of types other than str will produce a NaN. >>> s.str.join('-') 0 lion-elephant-zebra 1 NaN 2 NaN 3 NaN 4 NaN dtype: object
pandas.reference.api.pandas.series.str.join
pandas.Series.str.len Series.str.len()[source] Compute the length of each element in the Series/Index. The element may be a sequence (such as a string, tuple or list) or a collection (such as a dictionary). Returns Series or Index of int A Series or Index of integer values indicating the length of each element in the Series or Index. See also str.len Python built-in function returning the length of an object. Series.size Returns the length of the Series. Examples Returns the length (number of characters) in a string. Returns the number of entries for dictionaries, lists or tuples. >>> s = pd.Series(['dog', ... '', ... 5, ... {'foo' : 'bar'}, ... [2, 3, 5, 7], ... ('one', 'two', 'three')]) >>> s 0 dog 1 2 5 3 {'foo': 'bar'} 4 [2, 3, 5, 7] 5 (one, two, three) dtype: object >>> s.str.len() 0 3.0 1 0.0 2 NaN 3 1.0 4 4.0 5 3.0 dtype: float64
pandas.reference.api.pandas.series.str.len
pandas.Series.str.ljust Series.str.ljust(width, fillchar=' ')[source] Pad right side of strings in the Series/Index. Equivalent to str.ljust(). Parameters width:int Minimum width of resulting string; additional characters will be filled with fillchar. fillchar:str Additional character for filling, default is whitespace. Returns filled:Series/Index of objects.
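Since the ljust entry above carries no example, a minimal sketch follows (not from the original page; the sample values are invented for illustration): each string is padded on the right with fillchar up to width, and strings already at least width characters long are left unchanged.

>>> s = pd.Series(['dog', 'bird', 'mouse'])
>>> s.str.ljust(8, fillchar='.')
0    dog.....
1    bird....
2    mouse...
dtype: object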
pandas.reference.api.pandas.series.str.ljust
pandas.Series.str.lower Series.str.lower()[source] Convert strings in the Series/Index to lowercase. Equivalent to str.lower(). Returns Series or Index of object See also Series.str.lower Converts all characters to lowercase. Series.str.upper Converts all characters to uppercase. Series.str.title Converts first character of each word to uppercase and remaining to lowercase. Series.str.capitalize Converts first character to uppercase and remaining to lowercase. Series.str.swapcase Converts uppercase to lowercase and lowercase to uppercase. Series.str.casefold Removes all case distinctions in the string. Examples >>> s = pd.Series(['lower', 'CAPITALS', 'this is a sentence', 'SwApCaSe']) >>> s 0 lower 1 CAPITALS 2 this is a sentence 3 SwApCaSe dtype: object >>> s.str.lower() 0 lower 1 capitals 2 this is a sentence 3 swapcase dtype: object >>> s.str.upper() 0 LOWER 1 CAPITALS 2 THIS IS A SENTENCE 3 SWAPCASE dtype: object >>> s.str.title() 0 Lower 1 Capitals 2 This Is A Sentence 3 Swapcase dtype: object >>> s.str.capitalize() 0 Lower 1 Capitals 2 This is a sentence 3 Swapcase dtype: object >>> s.str.swapcase() 0 LOWER 1 capitals 2 THIS IS A SENTENCE 3 sWaPcAsE dtype: object
pandas.reference.api.pandas.series.str.lower
pandas.Series.str.lstrip Series.str.lstrip(to_strip=None)[source] Remove leading characters. Strip whitespaces (including newlines) or a set of specified characters from each string in the Series/Index from left side. Equivalent to str.lstrip(). Parameters to_strip:str or None, default None Specifying the set of characters to be removed. All combinations of this set of characters will be stripped. If None then whitespaces are removed. Returns Series or Index of object See also Series.str.strip Remove leading and trailing characters in Series/Index. Series.str.lstrip Remove leading characters in Series/Index. Series.str.rstrip Remove trailing characters in Series/Index. Examples >>> s = pd.Series(['1. Ant. ', '2. Bee!\n', '3. Cat?\t', np.nan]) >>> s 0 1. Ant. 1 2. Bee!\n 2 3. Cat?\t 3 NaN dtype: object >>> s.str.strip() 0 1. Ant. 1 2. Bee! 2 3. Cat? 3 NaN dtype: object >>> s.str.lstrip('123.') 0 Ant. 1 Bee!\n 2 Cat?\t 3 NaN dtype: object >>> s.str.rstrip('.!? \n\t') 0 1. Ant 1 2. Bee 2 3. Cat 3 NaN dtype: object >>> s.str.strip('123.!? \n\t') 0 Ant 1 Bee 2 Cat 3 NaN dtype: object
pandas.reference.api.pandas.series.str.lstrip
pandas.Series.str.match Series.str.match(pat, case=True, flags=0, na=None)[source] Determine if each string starts with a match of a regular expression. Parameters pat:str Character sequence or regular expression. case:bool, default True If True, case sensitive. flags:int, default 0 (no flags) Regex module flags, e.g. re.IGNORECASE. na:scalar, optional Fill value for missing values. The default depends on dtype of the array. For object-dtype, numpy.nan is used. For StringDtype, pandas.NA is used. Returns Series/Index/array of boolean values See also fullmatch Stricter matching that requires the entire string to match. contains Analogous, but less strict, relying on re.search instead of re.match. extract Extract matched groups.
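The match entry has no example of its own; the sketch below (sample data invented) shows how match anchors the regular expression at the start of each string, in contrast to contains, which searches anywhere in the string.

>>> s = pd.Series(['horse', 'house', 'hamster'])
>>> s.str.match('ho')
0     True
1     True
2    False
dtype: bool
>>> s.str.contains('am')
0    False
1    False
2     True
dtype: bool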
pandas.reference.api.pandas.series.str.match
pandas.Series.str.normalize Series.str.normalize(form)[source] Return the Unicode normal form for the strings in the Series/Index. For more information on the forms, see unicodedata.normalize(). Parameters form:{‘NFC’, ‘NFKC’, ‘NFD’, ‘NFKD’} Unicode form. Returns normalized:Series/Index of objects
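No example accompanies normalize above, so here is a small sketch (sample strings invented): 'ñ' can be stored either as the precomposed code point U+00F1 or as 'n' followed by a combining tilde, and NFC normalization collapses both spellings to the single-code-point form.

>>> s = pd.Series(['ñ', 'n\u0303'])
>>> s.str.len()
0    1
1    2
dtype: int64
>>> s.str.normalize('NFC').str.len()
0    1
1    1
dtype: int64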
pandas.reference.api.pandas.series.str.normalize
pandas.Series.str.pad Series.str.pad(width, side='left', fillchar=' ')[source] Pad strings in the Series/Index up to width. Parameters width:int Minimum width of resulting string; additional characters will be filled with character defined in fillchar. side:{‘left’, ‘right’, ‘both’}, default ‘left’ Side from which to fill resulting string. fillchar:str, default ‘ ’ Additional character for filling, default is whitespace. Returns Series or Index of object Returns Series or Index with minimum number of char in object. See also Series.str.rjust Fills the left side of strings with an arbitrary character. Equivalent to Series.str.pad(side='left'). Series.str.ljust Fills the right side of strings with an arbitrary character. Equivalent to Series.str.pad(side='right'). Series.str.center Fills both sides of strings with an arbitrary character. Equivalent to Series.str.pad(side='both'). Series.str.zfill Pad strings in the Series/Index by prepending ‘0’ character. Equivalent to Series.str.pad(side='left', fillchar='0'). Examples >>> s = pd.Series(["caribou", "tiger"]) >>> s 0 caribou 1 tiger dtype: object >>> s.str.pad(width=10) 0 caribou 1 tiger dtype: object >>> s.str.pad(width=10, side='right', fillchar='-') 0 caribou--- 1 tiger----- dtype: object >>> s.str.pad(width=10, side='both', fillchar='-') 0 -caribou-- 1 --tiger--- dtype: object
pandas.reference.api.pandas.series.str.pad
pandas.Series.str.partition Series.str.partition(sep=' ', expand=True)[source] Split the string at the first occurrence of sep. This method splits the string at the first occurrence of sep, and returns 3 elements containing the part before the separator, the separator itself, and the part after the separator. If the separator is not found, return 3 elements containing the string itself, followed by two empty strings. Parameters sep:str, default whitespace String to split on. expand:bool, default True If True, return DataFrame/MultiIndex expanding dimensionality. If False, return Series/Index. Returns DataFrame/MultiIndex or Series/Index of objects See also rpartition Split the string at the last occurrence of sep. Series.str.split Split strings around given separators. str.partition Standard library version. Examples >>> s = pd.Series(['Linda van der Berg', 'George Pitt-Rivers']) >>> s 0 Linda van der Berg 1 George Pitt-Rivers dtype: object >>> s.str.partition() 0 1 2 0 Linda van der Berg 1 George Pitt-Rivers To partition by the last space instead of the first one: >>> s.str.rpartition() 0 1 2 0 Linda van der Berg 1 George Pitt-Rivers To partition by something different than a space: >>> s.str.partition('-') 0 1 2 0 Linda van der Berg 1 George Pitt - Rivers To return a Series containing tuples instead of a DataFrame: >>> s.str.partition('-', expand=False) 0 (Linda van der Berg, , ) 1 (George Pitt, -, Rivers) dtype: object Also available on indices: >>> idx = pd.Index(['X 123', 'Y 999']) >>> idx Index(['X 123', 'Y 999'], dtype='object') Which will create a MultiIndex: >>> idx.str.partition() MultiIndex([('X', ' ', '123'), ('Y', ' ', '999')], ) Or an index with tuples with expand=False: >>> idx.str.partition(expand=False) Index([('X', ' ', '123'), ('Y', ' ', '999')], dtype='object')
pandas.reference.api.pandas.series.str.partition
pandas.Series.str.removeprefix Series.str.removeprefix(prefix)[source] Remove a prefix from an object series. If the prefix is not present, the original string will be returned. Parameters prefix:str Remove the prefix of the string. Returns Series/Index: object The Series or Index with given prefix removed. See also Series.str.removesuffix Remove a suffix from an object series. Examples >>> s = pd.Series(["str_foo", "str_bar", "no_prefix"]) >>> s 0 str_foo 1 str_bar 2 no_prefix dtype: object >>> s.str.removeprefix("str_") 0 foo 1 bar 2 no_prefix dtype: object >>> s = pd.Series(["foo_str", "bar_str", "no_suffix"]) >>> s 0 foo_str 1 bar_str 2 no_suffix dtype: object >>> s.str.removesuffix("_str") 0 foo 1 bar 2 no_suffix dtype: object
pandas.reference.api.pandas.series.str.removeprefix
pandas.Series.str.removesuffix Series.str.removesuffix(suffix)[source] Remove a suffix from an object series. If the suffix is not present, the original string will be returned. Parameters suffix:str Remove the suffix of the string. Returns Series/Index: object The Series or Index with given suffix removed. See also Series.str.removeprefix Remove a prefix from an object series. Examples >>> s = pd.Series(["str_foo", "str_bar", "no_prefix"]) >>> s 0 str_foo 1 str_bar 2 no_prefix dtype: object >>> s.str.removeprefix("str_") 0 foo 1 bar 2 no_prefix dtype: object >>> s = pd.Series(["foo_str", "bar_str", "no_suffix"]) >>> s 0 foo_str 1 bar_str 2 no_suffix dtype: object >>> s.str.removesuffix("_str") 0 foo 1 bar 2 no_suffix dtype: object
pandas.reference.api.pandas.series.str.removesuffix
pandas.Series.str.repeat Series.str.repeat(repeats)[source] Duplicate each string in the Series or Index. Parameters repeats:int or sequence of int Same value for all (int) or different value per (sequence). Returns Series or Index of object Series or Index of repeated string objects specified by input parameter repeats. Examples >>> s = pd.Series(['a', 'b', 'c']) >>> s 0 a 1 b 2 c dtype: object Single int repeats string in Series >>> s.str.repeat(repeats=2) 0 aa 1 bb 2 cc dtype: object Sequence of int repeats corresponding string in Series >>> s.str.repeat(repeats=[1, 2, 3]) 0 a 1 bb 2 ccc dtype: object
pandas.reference.api.pandas.series.str.repeat
pandas.Series.str.replace Series.str.replace(pat, repl, n=-1, case=None, flags=0, regex=None)[source] Replace each occurrence of pattern/regex in the Series/Index. Equivalent to str.replace() or re.sub(), depending on the regex value. Parameters pat:str or compiled regex String can be a character sequence or regular expression. repl:str or callable Replacement string or a callable. The callable is passed the regex match object and must return a replacement string to be used. See re.sub(). n:int, default -1 (all) Number of replacements to make from start. case:bool, default None Determines if replace is case sensitive: If True, case sensitive (the default if pat is a string). Set to False for case insensitive. Cannot be set if pat is a compiled regex. flags:int, default 0 (no flags) Regex module flags, e.g. re.IGNORECASE. Cannot be set if pat is a compiled regex. regex:bool, default True Determines if the passed-in pattern is a regular expression: If True, assumes the passed-in pattern is a regular expression. If False, treats the pattern as a literal string. Cannot be set to False if pat is a compiled regex or repl is a callable. New in version 0.23.0. Returns Series or Index of object A copy of the object with all matching occurrences of pat replaced by repl. Raises ValueError if regex is False and repl is a callable or pat is a compiled regex; or if pat is a compiled regex and case or flags is set. Notes When pat is a compiled regex, all flags should be included in the compiled regex. Use of case, flags, or regex=False with a compiled regex will raise an error. Examples When pat is a string and regex is True (the default), the given pat is compiled as a regex. When repl is a string, it replaces matching regex patterns as with re.sub(). NaN value(s) in the Series are left as is: >>> pd.Series(['foo', 'fuz', np.nan]).str.replace('f.', 'ba', regex=True) 0 bao 1 baz 2 NaN dtype: object When pat is a string and regex is False, every pat is replaced with repl as with str.replace(): >>> pd.Series(['f.o', 'fuz', np.nan]).str.replace('f.', 'ba', regex=False) 0 bao 1 fuz 2 NaN dtype: object When repl is a callable, it is called on every pat using re.sub(). The callable should expect one positional argument (a regex object) and return a string. To get the idea: >>> pd.Series(['foo', 'fuz', np.nan]).str.replace('f', repr, regex=True) 0 <re.Match object; span=(0, 1), match='f'>oo 1 <re.Match object; span=(0, 1), match='f'>uz 2 NaN dtype: object Reverse every lowercase alphabetic word: >>> repl = lambda m: m.group(0)[::-1] >>> ser = pd.Series(['foo 123', 'bar baz', np.nan]) >>> ser.str.replace(r'[a-z]+', repl, regex=True) 0 oof 123 1 rab zab 2 NaN dtype: object Using regex groups (extract second group and swap case): >>> pat = r"(?P<one>\w+) (?P<two>\w+) (?P<three>\w+)" >>> repl = lambda m: m.group('two').swapcase() >>> ser = pd.Series(['One Two Three', 'Foo Bar Baz']) >>> ser.str.replace(pat, repl, regex=True) 0 tWO 1 bAR dtype: object Using a compiled regex with flags >>> import re >>> regex_pat = re.compile(r'FUZ', flags=re.IGNORECASE) >>> pd.Series(['foo', 'fuz', np.nan]).str.replace(regex_pat, 'bar', regex=True) 0 foo 1 bar 2 NaN dtype: object
pandas.reference.api.pandas.series.str.replace
pandas.Series.str.rfind Series.str.rfind(sub, start=0, end=None)[source] Return highest indexes in each string in the Series/Index. Each of the returned indexes corresponds to the position where the substring is fully contained between [start:end]. Return -1 on failure. Equivalent to standard str.rfind(). Parameters sub:str Substring being searched. start:int Left edge index. end:int Right edge index. Returns Series or Index of int. See also find Return lowest indexes in each string.
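A brief sketch for rfind (not part of the original page; values invented), contrasting it with find: rfind reports the last occurrence of the substring, and -1 signals no match.

>>> s = pd.Series(['abcba', 'bbb', 'xyz'])
>>> s.str.find('b')
0    1
1    0
2   -1
dtype: int64
>>> s.str.rfind('b')
0    3
1    2
2   -1
dtype: int64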
pandas.reference.api.pandas.series.str.rfind
pandas.Series.str.rindex Series.str.rindex(sub, start=0, end=None)[source] Return highest indexes in each string in Series/Index. Each of the returned indexes corresponds to the position where the substring is fully contained between [start:end]. This is the same as str.rfind except instead of returning -1, it raises a ValueError when the substring is not found. Equivalent to standard str.rindex. Parameters sub:str Substring being searched. start:int Left edge index. end:int Right edge index. Returns Series or Index of object See also index Return lowest indexes in each string.
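A short sketch for rindex (values invented); since it raises ValueError instead of returning -1, every string here deliberately contains the substring.

>>> s = pd.Series(['abcba', 'bbb'])
>>> s.str.rindex('b')
0    3
1    2
dtype: int64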
pandas.reference.api.pandas.series.str.rindex
pandas.Series.str.rjust Series.str.rjust(width, fillchar=' ')[source] Pad left side of strings in the Series/Index. Equivalent to str.rjust(). Parameters width:int Minimum width of resulting string; additional characters will be filled with fillchar. fillchar:str Additional character for filling, default is whitespace. Returns filled:Series/Index of objects.
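The rjust entry likewise lacks an example; a minimal sketch (values invented) showing left-side padding. Note that unlike zfill, rjust gives no special treatment to a leading sign character.

>>> s = pd.Series(['dog', 'bird'])
>>> s.str.rjust(6, fillchar='0')
0    000dog
1    00bird
dtype: object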
pandas.reference.api.pandas.series.str.rjust
pandas.Series.str.rpartition Series.str.rpartition(sep=' ', expand=True)[source] Split the string at the last occurrence of sep. This method splits the string at the last occurrence of sep, and returns 3 elements containing the part before the separator, the separator itself, and the part after the separator. If the separator is not found, return 3 elements containing two empty strings, followed by the string itself. Parameters sep:str, default whitespace String to split on. expand:bool, default True If True, return DataFrame/MultiIndex expanding dimensionality. If False, return Series/Index. Returns DataFrame/MultiIndex or Series/Index of objects See also partition Split the string at the first occurrence of sep. Series.str.split Split strings around given separators. str.partition Standard library version. Examples >>> s = pd.Series(['Linda van der Berg', 'George Pitt-Rivers']) >>> s 0 Linda van der Berg 1 George Pitt-Rivers dtype: object >>> s.str.partition() 0 1 2 0 Linda van der Berg 1 George Pitt-Rivers To partition by the last space instead of the first one: >>> s.str.rpartition() 0 1 2 0 Linda van der Berg 1 George Pitt-Rivers To partition by something different than a space: >>> s.str.partition('-') 0 1 2 0 Linda van der Berg 1 George Pitt - Rivers To return a Series containing tuples instead of a DataFrame: >>> s.str.partition('-', expand=False) 0 (Linda van der Berg, , ) 1 (George Pitt, -, Rivers) dtype: object Also available on indices: >>> idx = pd.Index(['X 123', 'Y 999']) >>> idx Index(['X 123', 'Y 999'], dtype='object') Which will create a MultiIndex: >>> idx.str.partition() MultiIndex([('X', ' ', '123'), ('Y', ' ', '999')], ) Or an index with tuples with expand=False: >>> idx.str.partition(expand=False) Index([('X', ' ', '123'), ('Y', ' ', '999')], dtype='object')
pandas.reference.api.pandas.series.str.rpartition
pandas.Series.str.rsplit Series.str.rsplit(pat=None, n=-1, expand=False)[source] Split strings around given separator/delimiter. Splits the string in the Series/Index from the end, at the specified delimiter string. Parameters pat:str or compiled regex, optional String or regular expression to split on. If not specified, split on whitespace. n:int, default -1 (all) Limit number of splits in output. None, 0 and -1 will be interpreted as return all splits. expand:bool, default False Expand the split strings into separate columns. If True, return DataFrame/MultiIndex expanding dimensionality. If False, return Series/Index, containing lists of strings. regex:bool, default None Determines if the passed-in pattern is a regular expression: If True, assumes the passed-in pattern is a regular expression. If False, treats the pattern as a literal string. If None and pat length is 1, treats pat as a literal string. If None and pat length is not 1, treats pat as a regular expression. Cannot be set to False if pat is a compiled regex. New in version 1.4.0. Returns Series, Index, DataFrame or MultiIndex Type matches caller unless expand=True (see Notes). Raises ValueError if regex is False and pat is a compiled regex See also Series.str.split Split strings around given separator/delimiter. Series.str.rsplit Splits string around given separator/delimiter, starting from the right. Series.str.join Join lists contained as elements in the Series/Index with passed delimiter. str.split Standard library version for split. str.rsplit Standard library version for rsplit. Notes The handling of the n keyword depends on the number of found splits: If found splits > n, make first n splits only. If found splits <= n, make all splits. If for a certain row the number of found splits < n, append None for padding up to n if expand=True. If using expand=True, Series and Index callers return DataFrame and MultiIndex objects, respectively. Use of regex=False with a pat as a compiled regex will raise an error. Examples >>> s = pd.Series( ... [ ... "this is a regular sentence", ... "https://docs.python.org/3/tutorial/index.html", ... np.nan ... ] ... ) >>> s 0 this is a regular sentence 1 https://docs.python.org/3/tutorial/index.html 2 NaN dtype: object In the default setting, the string is split by whitespace. >>> s.str.split() 0 [this, is, a, regular, sentence] 1 [https://docs.python.org/3/tutorial/index.html] 2 NaN dtype: object Without the n parameter, the outputs of rsplit and split are identical. >>> s.str.rsplit() 0 [this, is, a, regular, sentence] 1 [https://docs.python.org/3/tutorial/index.html] 2 NaN dtype: object The n parameter can be used to limit the number of splits on the delimiter. The outputs of split and rsplit are different. >>> s.str.split(n=2) 0 [this, is, a regular sentence] 1 [https://docs.python.org/3/tutorial/index.html] 2 NaN dtype: object >>> s.str.rsplit(n=2) 0 [this is a, regular, sentence] 1 [https://docs.python.org/3/tutorial/index.html] 2 NaN dtype: object The pat parameter can be used to split by other characters. >>> s.str.split(pat="/") 0 [this is a regular sentence] 1 [https:, , docs.python.org, 3, tutorial, index... 2 NaN dtype: object When using expand=True, the split elements will expand out into separate columns. If NaN is present, it is propagated throughout the columns during the split.
>>> s.str.split(expand=True) 0 1 2 3 4 0 this is a regular sentence 1 https://docs.python.org/3/tutorial/index.html None None None None 2 NaN NaN NaN NaN NaN For slightly more complex use cases like splitting the html document name from a url, a combination of parameter settings can be used. >>> s.str.rsplit("/", n=1, expand=True) 0 1 0 this is a regular sentence None 1 https://docs.python.org/3/tutorial index.html 2 NaN NaN Remember to escape special characters when explicitly using regular expressions. >>> s = pd.Series(["foo and bar plus baz"]) >>> s.str.split(r"and|plus", expand=True) 0 1 2 0 foo bar baz Regular expressions can be used to handle urls or file names. When pat is a string and regex=None (the default), the given pat is compiled as a regex only if len(pat) != 1. >>> s = pd.Series(['foojpgbar.jpg']) >>> s.str.split(r".", expand=True) 0 1 0 foojpgbar jpg >>> s.str.split(r"\.jpg", expand=True) 0 1 0 foojpgbar When regex=True, pat is interpreted as a regex >>> s.str.split(r"\.jpg", regex=True, expand=True) 0 1 0 foojpgbar A compiled regex can be passed as pat >>> import re >>> s.str.split(re.compile(r"\.jpg"), expand=True) 0 1 0 foojpgbar When regex=False, pat is interpreted as the string itself >>> s.str.split(r"\.jpg", regex=False, expand=True) 0 0 foojpgbar.jpg
pandas.reference.api.pandas.series.str.rsplit
pandas.Series.str.rstrip Series.str.rstrip(to_strip=None)[source] Remove trailing characters. Strip whitespaces (including newlines) or a set of specified characters from each string in the Series/Index from right side. Equivalent to str.rstrip(). Parameters to_strip:str or None, default None Specifying the set of characters to be removed. All combinations of this set of characters will be stripped. If None then whitespaces are removed. Returns Series or Index of object See also Series.str.strip Remove leading and trailing characters in Series/Index. Series.str.lstrip Remove leading characters in Series/Index. Series.str.rstrip Remove trailing characters in Series/Index. Examples >>> s = pd.Series(['1. Ant. ', '2. Bee!\n', '3. Cat?\t', np.nan]) >>> s 0 1. Ant. 1 2. Bee!\n 2 3. Cat?\t 3 NaN dtype: object >>> s.str.strip() 0 1. Ant. 1 2. Bee! 2 3. Cat? 3 NaN dtype: object >>> s.str.lstrip('123.') 0 Ant. 1 Bee!\n 2 Cat?\t 3 NaN dtype: object >>> s.str.rstrip('.!? \n\t') 0 1. Ant 1 2. Bee 2 3. Cat 3 NaN dtype: object >>> s.str.strip('123.!? \n\t') 0 Ant 1 Bee 2 Cat 3 NaN dtype: object
pandas.reference.api.pandas.series.str.rstrip
pandas.Series.str.slice Series.str.slice(start=None, stop=None, step=None)[source] Slice substrings from each element in the Series or Index. Parameters start:int, optional Start position for slice operation. stop:int, optional Stop position for slice operation. step:int, optional Step size for slice operation. Returns Series or Index of object Series or Index from sliced substring from original string object. See also Series.str.slice_replace Replace a slice with a string. Series.str.get Return element at position. Equivalent to Series.str.slice(start=i, stop=i+1) with i being the position. Examples >>> s = pd.Series(["koala", "dog", "chameleon"]) >>> s 0 koala 1 dog 2 chameleon dtype: object >>> s.str.slice(start=1) 0 oala 1 og 2 hameleon dtype: object >>> s.str.slice(start=-1) 0 a 1 g 2 n dtype: object >>> s.str.slice(stop=2) 0 ko 1 do 2 ch dtype: object >>> s.str.slice(step=2) 0 kaa 1 dg 2 caeen dtype: object >>> s.str.slice(start=0, stop=5, step=3) 0 kl 1 d 2 cm dtype: object Equivalent behaviour to: >>> s.str[0:5:3] 0 kl 1 d 2 cm dtype: object
pandas.reference.api.pandas.series.str.slice
pandas.Series.str.slice_replace Series.str.slice_replace(start=None, stop=None, repl=None)[source] Replace a positional slice of a string with another value. Parameters start:int, optional Left index position to use for the slice. If not specified (None), the slice is unbounded on the left, i.e. slice from the start of the string. stop:int, optional Right index position to use for the slice. If not specified (None), the slice is unbounded on the right, i.e. slice until the end of the string. repl:str, optional String for replacement. If not specified (None), the sliced region is replaced with an empty string. Returns Series or Index Same type as the original object. See also Series.str.slice Just slicing without replacement. Examples >>> s = pd.Series(['a', 'ab', 'abc', 'abdc', 'abcde']) >>> s 0 a 1 ab 2 abc 3 abdc 4 abcde dtype: object Specify just start, meaning replace start until the end of the string with repl. >>> s.str.slice_replace(1, repl='X') 0 aX 1 aX 2 aX 3 aX 4 aX dtype: object Specify just stop, meaning the start of the string to stop is replaced with repl, and the rest of the string is included. >>> s.str.slice_replace(stop=2, repl='X') 0 X 1 X 2 Xc 3 Xdc 4 Xcde dtype: object Specify start and stop, meaning the slice from start to stop is replaced with repl. Everything before or after start and stop is included as is. >>> s.str.slice_replace(start=1, stop=3, repl='X') 0 aX 1 aX 2 aX 3 aXc 4 aXde dtype: object
pandas.reference.api.pandas.series.str.slice_replace
pandas.Series.str.split Series.str.split(pat=None, n=-1, expand=False, *, regex=None)[source] Split strings around given separator/delimiter. Splits the string in the Series/Index from the beginning, at the specified delimiter string. Parameters pat:str or compiled regex, optional String or regular expression to split on. If not specified, split on whitespace. n:int, default -1 (all) Limit number of splits in output. None, 0 and -1 will be interpreted as return all splits. expand:bool, default False Expand the split strings into separate columns. If True, return DataFrame/MultiIndex expanding dimensionality. If False, return Series/Index, containing lists of strings. regex:bool, default None Determines if the passed-in pattern is a regular expression: If True, assumes the passed-in pattern is a regular expression. If False, treats the pattern as a literal string. If None and pat length is 1, treats pat as a literal string. If None and pat length is not 1, treats pat as a regular expression. Cannot be set to False if pat is a compiled regex. New in version 1.4.0. Returns Series, Index, DataFrame or MultiIndex Type matches caller unless expand=True (see Notes). Raises ValueError if regex is False and pat is a compiled regex See also Series.str.split Split strings around given separator/delimiter. Series.str.rsplit Splits string around given separator/delimiter, starting from the right. Series.str.join Join lists contained as elements in the Series/Index with passed delimiter. str.split Standard library version for split. str.rsplit Standard library version for rsplit. Notes The handling of the n keyword depends on the number of found splits: If found splits > n, make first n splits only. If found splits <= n, make all splits. If for a certain row the number of found splits < n, append None for padding up to n if expand=True. If using expand=True, Series and Index callers return DataFrame and MultiIndex objects, respectively. Use of regex=False with a pat as a compiled regex will raise an error. Examples >>> s = pd.Series( ... [ ... "this is a regular sentence", ... "https://docs.python.org/3/tutorial/index.html", ... np.nan ... ] ... ) >>> s 0 this is a regular sentence 1 https://docs.python.org/3/tutorial/index.html 2 NaN dtype: object In the default setting, the string is split by whitespace. >>> s.str.split() 0 [this, is, a, regular, sentence] 1 [https://docs.python.org/3/tutorial/index.html] 2 NaN dtype: object Without the n parameter, the outputs of rsplit and split are identical. >>> s.str.rsplit() 0 [this, is, a, regular, sentence] 1 [https://docs.python.org/3/tutorial/index.html] 2 NaN dtype: object The n parameter can be used to limit the number of splits on the delimiter. The outputs of split and rsplit are different. >>> s.str.split(n=2) 0 [this, is, a regular sentence] 1 [https://docs.python.org/3/tutorial/index.html] 2 NaN dtype: object >>> s.str.rsplit(n=2) 0 [this is a, regular, sentence] 1 [https://docs.python.org/3/tutorial/index.html] 2 NaN dtype: object The pat parameter can be used to split by other characters. >>> s.str.split(pat="/") 0 [this is a regular sentence] 1 [https:, , docs.python.org, 3, tutorial, index... 2 NaN dtype: object When using expand=True, the split elements will expand out into separate columns. If NaN is present, it is propagated throughout the columns during the split.
>>> s.str.split(expand=True) 0 1 2 3 4 0 this is a regular sentence 1 https://docs.python.org/3/tutorial/index.html None None None None 2 NaN NaN NaN NaN NaN For slightly more complex use cases like splitting the html document name from a url, a combination of parameter settings can be used. >>> s.str.rsplit("/", n=1, expand=True) 0 1 0 this is a regular sentence None 1 https://docs.python.org/3/tutorial index.html 2 NaN NaN Remember to escape special characters when explicitly using regular expressions. >>> s = pd.Series(["foo and bar plus baz"]) >>> s.str.split(r"and|plus", expand=True) 0 1 2 0 foo bar baz Regular expressions can be used to handle urls or file names. When pat is a string and regex=None (the default), the given pat is compiled as a regex only if len(pat) != 1. >>> s = pd.Series(['foojpgbar.jpg']) >>> s.str.split(r".", expand=True) 0 1 0 foojpgbar jpg >>> s.str.split(r"\.jpg", expand=True) 0 1 0 foojpgbar When regex=True, pat is interpreted as a regex >>> s.str.split(r"\.jpg", regex=True, expand=True) 0 1 0 foojpgbar A compiled regex can be passed as pat >>> import re >>> s.str.split(re.compile(r"\.jpg"), expand=True) 0 1 0 foojpgbar When regex=False, pat is interpreted as the string itself >>> s.str.split(r"\.jpg", regex=False, expand=True) 0 0 foojpgbar.jpg
pandas.reference.api.pandas.series.str.split
pandas.Series.str.startswith Series.str.startswith(pat, na=None)[source] Test if the start of each string element matches a pattern. Equivalent to str.startswith(). Parameters pat:str Character sequence. Regular expressions are not accepted. na:object, default NaN Object shown if element tested is not a string. The default depends on dtype of the array. For object-dtype, numpy.nan is used. For StringDtype, pandas.NA is used. Returns Series or Index of bool A Series of booleans indicating whether the given pattern matches the start of each string element. See also str.startswith Python standard library string method. Series.str.endswith Same as startswith, but tests the end of string. Series.str.contains Tests if string element contains a pattern. Examples >>> s = pd.Series(['bat', 'Bear', 'cat', np.nan]) >>> s 0 bat 1 Bear 2 cat 3 NaN dtype: object >>> s.str.startswith('b') 0 True 1 False 2 False 3 NaN dtype: object Specifying na to be False instead of NaN. >>> s.str.startswith('b', na=False) 0 True 1 False 2 False 3 False dtype: bool
pandas.reference.api.pandas.series.str.startswith
pandas.Series.str.strip Series.str.strip(to_strip=None)[source] Remove leading and trailing characters. Strip whitespaces (including newlines) or a set of specified characters from each string in the Series/Index from left and right sides. Equivalent to str.strip(). Parameters to_strip:str or None, default None Specifying the set of characters to be removed. All combinations of this set of characters will be stripped. If None then whitespaces are removed. Returns Series or Index of object See also Series.str.strip Remove leading and trailing characters in Series/Index. Series.str.lstrip Remove leading characters in Series/Index. Series.str.rstrip Remove trailing characters in Series/Index. Examples >>> s = pd.Series(['1. Ant. ', '2. Bee!\n', '3. Cat?\t', np.nan]) >>> s 0 1. Ant. 1 2. Bee!\n 2 3. Cat?\t 3 NaN dtype: object >>> s.str.strip() 0 1. Ant. 1 2. Bee! 2 3. Cat? 3 NaN dtype: object >>> s.str.lstrip('123.') 0 Ant. 1 Bee!\n 2 Cat?\t 3 NaN dtype: object >>> s.str.rstrip('.!? \n\t') 0 1. Ant 1 2. Bee 2 3. Cat 3 NaN dtype: object >>> s.str.strip('123.!? \n\t') 0 Ant 1 Bee 2 Cat 3 NaN dtype: object
pandas.reference.api.pandas.series.str.strip
pandas.Series.str.swapcase Series.str.swapcase()[source] Convert strings in the Series/Index to be swapcased. Equivalent to str.swapcase(). Returns Series or Index of object See also Series.str.lower Converts all characters to lowercase. Series.str.upper Converts all characters to uppercase. Series.str.title Converts first character of each word to uppercase and remaining to lowercase. Series.str.capitalize Converts first character to uppercase and remaining to lowercase. Series.str.swapcase Converts uppercase to lowercase and lowercase to uppercase. Series.str.casefold Removes all case distinctions in the string. Examples >>> s = pd.Series(['lower', 'CAPITALS', 'this is a sentence', 'SwApCaSe']) >>> s 0 lower 1 CAPITALS 2 this is a sentence 3 SwApCaSe dtype: object >>> s.str.lower() 0 lower 1 capitals 2 this is a sentence 3 swapcase dtype: object >>> s.str.upper() 0 LOWER 1 CAPITALS 2 THIS IS A SENTENCE 3 SWAPCASE dtype: object >>> s.str.title() 0 Lower 1 Capitals 2 This Is A Sentence 3 Swapcase dtype: object >>> s.str.capitalize() 0 Lower 1 Capitals 2 This is a sentence 3 Swapcase dtype: object >>> s.str.swapcase() 0 LOWER 1 capitals 2 THIS IS A SENTENCE 3 sWaPcAsE dtype: object
pandas.reference.api.pandas.series.str.swapcase
pandas.Series.str.title Series.str.title()[source] Convert strings in the Series/Index to titlecase. Equivalent to str.title(). Returns Series or Index of object See also Series.str.lower Converts all characters to lowercase. Series.str.upper Converts all characters to uppercase. Series.str.title Converts first character of each word to uppercase and remaining to lowercase. Series.str.capitalize Converts first character to uppercase and remaining to lowercase. Series.str.swapcase Converts uppercase to lowercase and lowercase to uppercase. Series.str.casefold Removes all case distinctions in the string. Examples >>> s = pd.Series(['lower', 'CAPITALS', 'this is a sentence', 'SwApCaSe']) >>> s 0 lower 1 CAPITALS 2 this is a sentence 3 SwApCaSe dtype: object >>> s.str.lower() 0 lower 1 capitals 2 this is a sentence 3 swapcase dtype: object >>> s.str.upper() 0 LOWER 1 CAPITALS 2 THIS IS A SENTENCE 3 SWAPCASE dtype: object >>> s.str.title() 0 Lower 1 Capitals 2 This Is A Sentence 3 Swapcase dtype: object >>> s.str.capitalize() 0 Lower 1 Capitals 2 This is a sentence 3 Swapcase dtype: object >>> s.str.swapcase() 0 LOWER 1 capitals 2 THIS IS A SENTENCE 3 sWaPcAsE dtype: object
pandas.reference.api.pandas.series.str.title
pandas.Series.str.translate Series.str.translate(table)[source] Map all characters in the string through the given mapping table. Equivalent to standard str.translate(). Parameters table:dict Table is a mapping of Unicode ordinals to Unicode ordinals, strings, or None. Unmapped characters are left untouched. Characters mapped to None are deleted. str.maketrans() is a helper function for making translation tables. Returns Series or Index
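The translate entry carries no example, so here is a sketch (not from the original page; the sample mapping is invented) that builds a table with str.maketrans and applies it. Characters mapped to None are deleted, as noted above.

>>> table = str.maketrans({'ä': 'a', 'ß': 'ss', '-': None})
>>> s = pd.Series(['Straße', 'Groß-Gerau'])
>>> s.str.translate(table)
0       Strasse
1    GrossGerau
dtype: object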
pandas.reference.api.pandas.series.str.translate
pandas.Series.str.upper Series.str.upper()[source] Convert strings in the Series/Index to uppercase. Equivalent to str.upper(). Returns Series or Index of object See also Series.str.lower Converts all characters to lowercase. Series.str.upper Converts all characters to uppercase. Series.str.title Converts first character of each word to uppercase and remaining to lowercase. Series.str.capitalize Converts first character to uppercase and remaining to lowercase. Series.str.swapcase Converts uppercase to lowercase and lowercase to uppercase. Series.str.casefold Removes all case distinctions in the string. Examples >>> s = pd.Series(['lower', 'CAPITALS', 'this is a sentence', 'SwApCaSe']) >>> s 0 lower 1 CAPITALS 2 this is a sentence 3 SwApCaSe dtype: object >>> s.str.lower() 0 lower 1 capitals 2 this is a sentence 3 swapcase dtype: object >>> s.str.upper() 0 LOWER 1 CAPITALS 2 THIS IS A SENTENCE 3 SWAPCASE dtype: object >>> s.str.title() 0 Lower 1 Capitals 2 This Is A Sentence 3 Swapcase dtype: object >>> s.str.capitalize() 0 Lower 1 Capitals 2 This is a sentence 3 Swapcase dtype: object >>> s.str.swapcase() 0 LOWER 1 capitals 2 THIS IS A SENTENCE 3 sWaPcAsE dtype: object
pandas.reference.api.pandas.series.str.upper
pandas.Series.str.wrap Series.str.wrap(width, **kwargs)[source] Wrap strings in Series/Index at specified line width. This method has the same keyword parameters and defaults as textwrap.TextWrapper. Parameters width:int Maximum line width. expand_tabs:bool, optional If True, tab characters will be expanded to spaces (default: True). replace_whitespace:bool, optional If True, each whitespace character (as defined by string.whitespace) remaining after tab expansion will be replaced by a single space (default: True). drop_whitespace:bool, optional If True, whitespace that, after wrapping, happens to end up at the beginning or end of a line is dropped (default: True). break_long_words:bool, optional If True, then words longer than width will be broken in order to ensure that no lines are longer than width. If it is false, long words will not be broken, and some lines may be longer than width (default: True). break_on_hyphens:bool, optional If True, wrapping will occur preferably on whitespace and right after hyphens in compound words, as it is customary in English. If false, only whitespaces will be considered as potentially good places for line breaks, but you need to set break_long_words to false if you want truly insecable words (default: True). Returns Series or Index Notes Internally, this method uses a textwrap.TextWrapper instance with default settings. To achieve behavior matching R’s stringr library str_wrap function, use the arguments: expand_tabs = False replace_whitespace = True drop_whitespace = True break_long_words = False break_on_hyphens = False Examples >>> s = pd.Series(['line to be wrapped', 'another line to be wrapped']) >>> s.str.wrap(12) 0 line to be\nwrapped 1 another line\nto be\nwrapped dtype: object
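As a concrete (invented) illustration of two of the keywords above: break_long_words=False keeps a word longer than width intact on its own line, and break_on_hyphens=False prevents wrapping at the hyphen inside it.

>>> s = pd.Series(['a long-winded sentence to wrap'])
>>> s.str.wrap(10, break_long_words=False, break_on_hyphens=False)
0    a\nlong-winded\nsentence\nto wrap
dtype: object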
pandas.reference.api.pandas.series.str.wrap
pandas.Series.str.zfill Series.str.zfill(width)[source] Pad strings in the Series/Index by prepending ‘0’ characters. Strings in the Series/Index are padded with ‘0’ characters on the left of the string to reach a total string length width. Strings in the Series/Index with length greater or equal to width are unchanged. Parameters width:int Minimum length of resulting string; strings with length less than width will be prepended with ‘0’ characters. Returns Series/Index of objects. See also Series.str.rjust Fills the left side of strings with an arbitrary character. Series.str.ljust Fills the right side of strings with an arbitrary character. Series.str.pad Fills the specified sides of strings with an arbitrary character. Series.str.center Fills both sides of strings with an arbitrary character. Notes Differs from str.zfill() which has special handling for ‘+’/’-’ in the string. Examples >>> s = pd.Series(['-1', '1', '1000', 10, np.nan]) >>> s 0 -1 1 1 2 1000 3 10 4 NaN dtype: object Note that 10 and NaN are not strings, therefore they are converted to NaN. The minus sign in '-1' is treated as a regular character and the zero is added to the left of it (str.zfill() would have moved it to the left). 1000 remains unchanged as it is longer than width. >>> s.str.zfill(3) 0 0-1 1 001 2 1000 3 NaN 4 NaN dtype: object
pandas.reference.api.pandas.series.str.zfill
pandas.Series.sub Series.sub(other, level=None, fill_value=None, axis=0)[source] Return Subtraction of series and other, element-wise (binary operator sub). Equivalent to series - other, but with support to substitute a fill_value for missing data in either one of the inputs. Parameters other:Series or scalar value fill_value:None or float value, default None (NaN) Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result of filling (at that location) will be missing. level:int or name Broadcast across a level, matching Index values on the passed MultiIndex level. Returns Series The result of the operation. See also Series.rsub Reverse of the Subtraction operator, see Python documentation for more details. Examples >>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd']) >>> a a 1.0 b 1.0 c 1.0 d NaN dtype: float64 >>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e']) >>> b a 1.0 b NaN d 1.0 e NaN dtype: float64 >>> a.subtract(b, fill_value=0) a 0.0 b 1.0 c 1.0 d -1.0 e NaN dtype: float64
pandas.reference.api.pandas.series.sub
pandas.Series.subtract Series.subtract(other, level=None, fill_value=None, axis=0)[source] Return Subtraction of series and other, element-wise (binary operator sub). Equivalent to series - other, but with support to substitute a fill_value for missing data in either one of the inputs. Parameters other:Series or scalar value fill_value:None or float value, default None (NaN) Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result of filling (at that location) will be missing. level:int or name Broadcast across a level, matching Index values on the passed MultiIndex level. Returns Series The result of the operation. See also Series.rsub Reverse of the Subtraction operator, see Python documentation for more details. Examples >>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd']) >>> a a 1.0 b 1.0 c 1.0 d NaN dtype: float64 >>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e']) >>> b a 1.0 b NaN d 1.0 e NaN dtype: float64 >>> a.subtract(b, fill_value=0) a 0.0 b 1.0 c 1.0 d -1.0 e NaN dtype: float64
pandas.reference.api.pandas.series.subtract
pandas.Series.sum Series.sum(axis=None, skipna=True, level=None, numeric_only=None, min_count=0, **kwargs)[source] Return the sum of the values over the requested axis. This is equivalent to the method numpy.sum. Parameters axis:{index (0)} Axis for the function to be applied on. skipna:bool, default True Exclude NA/null values when computing the result. level:int or level name, default None If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a scalar. numeric_only:bool, default None Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series. min_count:int, default 0 The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA. **kwargs Additional keyword arguments to be passed to the function. Returns scalar or Series (if level specified) See also Series.sum Return the sum. Series.min Return the minimum. Series.max Return the maximum. Series.idxmin Return the index of the minimum. Series.idxmax Return the index of the maximum. DataFrame.sum Return the sum over the requested axis. DataFrame.min Return the minimum over the requested axis. DataFrame.max Return the maximum over the requested axis. DataFrame.idxmin Return the index of the minimum over the requested axis. DataFrame.idxmax Return the index of the maximum over the requested axis. Examples >>> idx = pd.MultiIndex.from_arrays([ ... ['warm', 'warm', 'cold', 'cold'], ... ['dog', 'falcon', 'fish', 'spider']], ... names=['blooded', 'animal']) >>> s = pd.Series([4, 2, 0, 8], name='legs', index=idx) >>> s blooded animal warm dog 4 falcon 2 cold fish 0 spider 8 Name: legs, dtype: int64 >>> s.sum() 14 By default, the sum of an empty or all-NA Series is 0. >>> pd.Series([], dtype="float64").sum() # min_count=0 is the default 0.0 This can be controlled with the min_count parameter. For example, if you’d like the sum of an empty series to be NaN, pass min_count=1. >>> pd.Series([], dtype="float64").sum(min_count=1) nan Thanks to the skipna parameter, min_count handles all-NA and empty series identically. >>> pd.Series([np.nan]).sum() 0.0 >>> pd.Series([np.nan]).sum(min_count=1) nan
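Per-level sums over the MultiIndex example above can be obtained with a groupby (a minimal sketch reusing s):

>>> s.groupby(level='blooded').sum()
blooded
cold    8
warm    6
Name: legs, dtype: int64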
pandas.reference.api.pandas.series.sum
pandas.Series.swapaxes Series.swapaxes(axis1, axis2, copy=True)[source] Interchange axes, swapping values appropriately. Returns y:same as input
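A Series has only one axis, so for a Series this amounts to returning a copy (a minimal sketch):

>>> s = pd.Series([1, 2, 3])
>>> s.swapaxes(0, 0)
0    1
1    2
2    3
dtype: int64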
pandas.reference.api.pandas.series.swapaxes
pandas.Series.swaplevel Series.swaplevel(i=- 2, j=- 1, copy=True)[source] Swap levels i and j in a MultiIndex. Default is to swap the two innermost levels of the index. Parameters i, j:int or str Levels of the indices to be swapped. Can pass level name as string. copy:bool, default True Whether to copy underlying data. Returns Series Series with levels swapped in MultiIndex. Examples >>> s = pd.Series( ... ["A", "B", "A", "C"], ... index=[ ... ["Final exam", "Final exam", "Coursework", "Coursework"], ... ["History", "Geography", "History", "Geography"], ... ["January", "February", "March", "April"], ... ], ... ) >>> s Final exam History January A Geography February B Coursework History March A Geography April C dtype: object In the following example, we will swap the levels of the indices. Here, we will swap the levels column-wise, but levels can be swapped row-wise in a similar manner. Note that column-wise is the default behaviour. By not supplying any arguments for i and j, we swap the last and second to last indices. >>> s.swaplevel() Final exam January History A February Geography B Coursework March History A April Geography C dtype: object By supplying one argument, we can choose which index to swap the last index with. We can for example swap the first index with the last one as follows. >>> s.swaplevel(0) January History Final exam A February Geography Final exam B March History Coursework A April Geography Coursework C dtype: object We can also define explicitly which indices we want to swap by supplying values for both i and j. Here, we for example swap the first and second indices. >>> s.swaplevel(0, 1) History Final exam January A Geography Final exam February B History Coursework March A Geography Coursework April C dtype: object
pandas.reference.api.pandas.series.swaplevel
pandas.Series.T propertySeries.T Return the transpose, which is by definition self.
pandas.reference.api.pandas.series.t
pandas.Series.tail Series.tail(n=5)[source] Return the last n rows. This function returns last n rows from the object based on position. It is useful for quickly verifying data, for example, after sorting or appending rows. For negative values of n, this function returns all rows except the first n rows, equivalent to df[n:]. Parameters n:int, default 5 Number of rows to select. Returns type of caller The last n rows of the caller object. See also DataFrame.head The first n rows of the caller object. Examples >>> df = pd.DataFrame({'animal': ['alligator', 'bee', 'falcon', 'lion', ... 'monkey', 'parrot', 'shark', 'whale', 'zebra']}) >>> df animal 0 alligator 1 bee 2 falcon 3 lion 4 monkey 5 parrot 6 shark 7 whale 8 zebra Viewing the last 5 lines >>> df.tail() animal 4 monkey 5 parrot 6 shark 7 whale 8 zebra Viewing the last n lines (three in this case) >>> df.tail(3) animal 6 shark 7 whale 8 zebra For negative values of n >>> df.tail(-3) animal 3 lion 4 monkey 5 parrot 6 shark 7 whale 8 zebra
pandas.reference.api.pandas.series.tail
pandas.Series.take Series.take(indices, axis=0, is_copy=None, **kwargs)[source] Return the elements in the given positional indices along an axis. This means that we are not indexing according to actual values in the index attribute of the object. We are indexing according to the actual position of the element in the object. Parameters indices:array-like An array of ints indicating which positions to take. axis:{0 or ‘index’, 1 or ‘columns’, None}, default 0 The axis on which to select elements. 0 means that we are selecting rows, 1 means that we are selecting columns. is_copy:bool Before pandas 1.0, is_copy=False can be specified to ensure that the return value is an actual copy. Starting with pandas 1.0, take always returns a copy, and the keyword is therefore deprecated. Deprecated since version 1.0.0. **kwargs For compatibility with numpy.take(). Has no effect on the output. Returns taken:same type as caller An array-like containing the elements taken from the object. See also DataFrame.loc Select a subset of a DataFrame by labels. DataFrame.iloc Select a subset of a DataFrame by positions. numpy.take Take elements from an array along an axis. Examples >>> df = pd.DataFrame([('falcon', 'bird', 389.0), ... ('parrot', 'bird', 24.0), ... ('lion', 'mammal', 80.5), ... ('monkey', 'mammal', np.nan)], ... columns=['name', 'class', 'max_speed'], ... index=[0, 2, 3, 1]) >>> df name class max_speed 0 falcon bird 389.0 2 parrot bird 24.0 3 lion mammal 80.5 1 monkey mammal NaN Take elements at positions 0 and 3 along the axis 0 (default). Note how the actual indices selected (0 and 1) do not correspond to our selected indices 0 and 3. That’s because we are selecting the 0th and 3rd rows, not rows whose indices equal 0 and 3. >>> df.take([0, 3]) name class max_speed 0 falcon bird 389.0 1 monkey mammal NaN Take elements at indices 1 and 2 along the axis 1 (column selection). >>> df.take([1, 2], axis=1) class max_speed 0 bird 389.0 2 bird 24.0 3 mammal 80.5 1 mammal NaN We may take elements using negative integers for positive indices, starting from the end of the object, just like with Python lists. >>> df.take([-1, -2]) name class max_speed 1 monkey mammal NaN 3 lion mammal 80.5
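The examples above use a DataFrame; the same positional semantics apply to a Series (a minimal sketch):

>>> s = pd.Series(['a', 'b', 'c', 'd'], index=[10, 20, 30, 40])
>>> s.take([0, 3])
10    a
40    d
dtype: object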
pandas.reference.api.pandas.series.take
pandas.Series.to_clipboard Series.to_clipboard(excel=True, sep=None, **kwargs)[source] Copy object to the system clipboard. Write a text representation of object to the system clipboard. This can be pasted into Excel, for example. Parameters excel:bool, default True Produce output in a csv format for easy pasting into excel. True, use the provided separator for csv pasting. False, write a string representation of the object to the clipboard. sep:str, default '\t' Field delimiter. **kwargs These parameters will be passed to DataFrame.to_csv. See also DataFrame.to_csv Write a DataFrame to a comma-separated values (csv) file. read_clipboard Read text from clipboard and pass to read_csv. Notes Requirements for your platform. Linux : xclip, or xsel (with PyQt4 modules) Windows : none macOS : none Examples Copy the contents of a DataFrame to the clipboard. >>> df = pd.DataFrame([[1, 2, 3], [4, 5, 6]], columns=['A', 'B', 'C']) >>> df.to_clipboard(sep=',') ... # Wrote the following to the system clipboard: ... # ,A,B,C ... # 0,1,2,3 ... # 1,4,5,6 We can omit the index by passing the keyword index and setting it to false. >>> df.to_clipboard(sep=',', index=False) ... # Wrote the following to the system clipboard: ... # A,B,C ... # 1,2,3 ... # 4,5,6
pandas.reference.api.pandas.series.to_clipboard
pandas.Series.to_csv Series.to_csv(path_or_buf=None, sep=',', na_rep='', float_format=None, columns=None, header=True, index=True, index_label=None, mode='w', encoding=None, compression='infer', quoting=None, quotechar='"', line_terminator=None, chunksize=None, date_format=None, doublequote=True, escapechar=None, decimal='.', errors='strict', storage_options=None)[source] Write object to a comma-separated values (csv) file. Parameters path_or_buf:str, path object, file-like object, or None, default None String, path object (implementing os.PathLike[str]), or file-like object implementing a write() function. If None, the result is returned as a string. If a non-binary file object is passed, it should be opened with newline=’’, disabling universal newlines. If a binary file object is passed, mode might need to contain a ‘b’. Changed in version 1.2.0: Support for binary file objects was introduced. sep:str, default ‘,’ String of length 1. Field delimiter for the output file. na_rep:str, default ‘’ Missing data representation. float_format:str, default None Format string for floating point numbers. columns:sequence, optional Columns to write. header:bool or list of str, default True Write out the column names. If a list of strings is given it is assumed to be aliases for the column names. index:bool, default True Write row names (index). index_label:str or sequence, or False, default None Column label for index column(s) if desired. If None is given, and header and index are True, then the index names are used. A sequence should be given if the object uses MultiIndex. If False do not print fields for index names. Use index_label=False for easier importing in R. mode:str Python write mode, default ‘w’. encoding:str, optional A string representing the encoding to use in the output file, defaults to ‘utf-8’. encoding is not supported if path_or_buf is a non-binary file object. compression:str or dict, default ‘infer’ For on-the-fly compression of the output data. If ‘infer’ and path_or_buf is path-like, then detect compression from the following extensions: ‘.gz’, ‘.bz2’, ‘.zip’, ‘.xz’, or ‘.zst’ (otherwise no compression). Set to None for no compression. Can also be a dict with key 'method' set to one of {'zip', 'gzip', 'bz2', 'zstd'} and other key-value pairs are forwarded to zipfile.ZipFile, gzip.GzipFile, bz2.BZ2File, or zstandard.ZstdCompressor, respectively. As an example, the following could be passed for faster compression and to create a reproducible gzip archive: compression={'method': 'gzip', 'compresslevel': 1, 'mtime': 1}. Changed in version 1.0.0: May now be a dict with key ‘method’ as compression mode and other entries as additional compression options if compression mode is ‘zip’. Changed in version 1.1.0: Passing compression options as keys in dict is supported for compression modes ‘gzip’, ‘bz2’, ‘zstd’, and ‘zip’. Changed in version 1.2.0: Compression is supported for binary file objects. Changed in version 1.2.0: Previous versions forwarded dict entries for ‘gzip’ to gzip.open instead of gzip.GzipFile which prevented setting mtime. quoting:optional constant from csv module Defaults to csv.QUOTE_MINIMAL. If you have set a float_format then floats are converted to strings and thus csv.QUOTE_NONNUMERIC will treat them as non-numeric. quotechar:str, default ‘"’ String of length 1. Character used to quote fields. line_terminator:str, optional The newline character or character sequence to use in the output file. 
Defaults to os.linesep, which depends on the OS in which this method is called (e.g. ‘\n’ for Linux, ‘\r\n’ for Windows). chunksize:int or None Rows to write at a time. date_format:str, default None Format string for datetime objects. doublequote:bool, default True Control quoting of quotechar inside a field. escapechar:str, default None String of length 1. Character used to escape sep and quotechar when appropriate. decimal:str, default ‘.’ Character recognized as decimal separator. E.g. use ‘,’ for European data. errors:str, default ‘strict’ Specifies how encoding and decoding errors are to be handled. See the errors argument for open() for a full list of options. New in version 1.1.0. storage_options:dict, optional Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc. For HTTP(S) URLs the key-value pairs are forwarded to urllib as header options. For other URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are forwarded to fsspec. Please see fsspec and urllib for more details. New in version 1.2.0. Returns None or str If path_or_buf is None, returns the resulting csv format as a string. Otherwise returns None. See also read_csv Load a CSV file into a DataFrame. to_excel Write DataFrame to an Excel file. Examples >>> df = pd.DataFrame({'name': ['Raphael', 'Donatello'], ... 'mask': ['red', 'purple'], ... 'weapon': ['sai', 'bo staff']}) >>> df.to_csv(index=False) 'name,mask,weapon\nRaphael,red,sai\nDonatello,purple,bo staff\n' Create ‘out.zip’ containing ‘out.csv’: >>> compression_opts = dict(method='zip', ... archive_name='out.csv') >>> df.to_csv('out.zip', index=False, ... compression=compression_opts) To write a csv file to a new folder or nested folder you will first need to create it using either Pathlib or os: >>> from pathlib import Path >>> filepath = Path('folder/subfolder/out.csv') >>> filepath.parent.mkdir(parents=True, exist_ok=True) >>> df.to_csv(filepath) >>> import os >>> os.makedirs('folder/subfolder', exist_ok=True) >>> df.to_csv('folder/subfolder/out.csv')
pandas.reference.api.pandas.series.to_csv
pandas.Series.to_dict Series.to_dict(into=<class 'dict'>)[source] Convert Series to {label -> value} dict or dict-like object. Parameters into:class, default dict The collections.abc.Mapping subclass to use as the return object. Can be the actual class or an empty instance of the mapping type you want. If you want a collections.defaultdict, you must pass it initialized. Returns collections.abc.Mapping Key-value representation of Series. Examples >>> s = pd.Series([1, 2, 3, 4]) >>> s.to_dict() {0: 1, 1: 2, 2: 3, 3: 4} >>> from collections import OrderedDict, defaultdict >>> s.to_dict(OrderedDict) OrderedDict([(0, 1), (1, 2), (2, 3), (3, 4)]) >>> dd = defaultdict(list) >>> s.to_dict(dd) defaultdict(<class 'list'>, {0: 1, 1: 2, 2: 3, 3: 4})
pandas.reference.api.pandas.series.to_dict
pandas.Series.to_excel Series.to_excel(excel_writer, sheet_name='Sheet1', na_rep='', float_format=None, columns=None, header=True, index=True, index_label=None, startrow=0, startcol=0, engine=None, merge_cells=True, encoding=None, inf_rep='inf', verbose=True, freeze_panes=None, storage_options=None)[source] Write object to an Excel sheet. To write a single object to an Excel .xlsx file it is only necessary to specify a target file name. To write to multiple sheets it is necessary to create an ExcelWriter object with a target file name, and specify a sheet in the file to write to. Multiple sheets may be written to by specifying unique sheet_name. With all data written to the file it is necessary to save the changes. Note that creating an ExcelWriter object with a file name that already exists will result in the contents of the existing file being erased. Parameters excel_writer:path-like, file-like, or ExcelWriter object File path or existing ExcelWriter. sheet_name:str, default ‘Sheet1’ Name of sheet which will contain DataFrame. na_rep:str, default ‘’ Missing data representation. float_format:str, optional Format string for floating point numbers. For example float_format="%.2f" will format 0.1234 to 0.12. columns:sequence or list of str, optional Columns to write. header:bool or list of str, default True Write out the column names. If a list of strings is given it is assumed to be aliases for the column names. index:bool, default True Write row names (index). index_label:str or sequence, optional Column label for index column(s) if desired. If not specified, and header and index are True, then the index names are used. A sequence should be given if the DataFrame uses MultiIndex. startrow:int, default 0 Upper left cell row to dump data frame. startcol:int, default 0 Upper left cell column to dump data frame. engine:str, optional Write engine to use, ‘openpyxl’ or ‘xlsxwriter’. You can also set this via the options io.excel.xlsx.writer, io.excel.xls.writer, and io.excel.xlsm.writer. Deprecated since version 1.2.0: As the xlwt package is no longer maintained, the xlwt engine will be removed in a future version of pandas. merge_cells:bool, default True Write MultiIndex and Hierarchical Rows as merged cells. encoding:str, optional Encoding of the resulting excel file. Only necessary for xlwt, other writers support unicode natively. inf_rep:str, default ‘inf’ Representation for infinity (there is no native representation for infinity in Excel). verbose:bool, default True Display more information in the error logs. freeze_panes:tuple of int (length 2), optional Specifies the one-based bottommost row and rightmost column that is to be frozen. storage_options:dict, optional Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc. For HTTP(S) URLs the key-value pairs are forwarded to urllib as header options. For other URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are forwarded to fsspec. Please see fsspec and urllib for more details. New in version 1.2.0. See also to_csv Write DataFrame to a comma-separated values (csv) file. ExcelWriter Class for writing DataFrame objects into excel sheets. read_excel Read an Excel file into a pandas DataFrame. read_csv Read a comma-separated values (csv) file into DataFrame. Notes For compatibility with to_csv(), to_excel serializes lists and dicts to strings before writing. Once a workbook has been saved it is not possible to write further data without rewriting the whole workbook. 
Examples Create, write to and save a workbook: >>> df1 = pd.DataFrame([['a', 'b'], ['c', 'd']], ... index=['row 1', 'row 2'], ... columns=['col 1', 'col 2']) >>> df1.to_excel("output.xlsx") To specify the sheet name: >>> df1.to_excel("output.xlsx", ... sheet_name='Sheet_name_1') If you wish to write to more than one sheet in the workbook, it is necessary to specify an ExcelWriter object: >>> df2 = df1.copy() >>> with pd.ExcelWriter('output.xlsx') as writer: ... df1.to_excel(writer, sheet_name='Sheet_name_1') ... df2.to_excel(writer, sheet_name='Sheet_name_2') ExcelWriter can also be used to append to an existing Excel file: >>> with pd.ExcelWriter('output.xlsx', ... mode='a') as writer: ... df.to_excel(writer, sheet_name='Sheet_name_3') To set the library that is used to write the Excel file, you can pass the engine keyword (the default engine is automatically chosen depending on the file extension): >>> df1.to_excel('output1.xlsx', engine='xlsxwriter')
pandas.reference.api.pandas.series.to_excel
pandas.Series.to_frame Series.to_frame(name=NoDefault.no_default)[source] Convert Series to DataFrame. Parameters name:object, optional The passed name should substitute for the series name (if it has one). Returns DataFrame DataFrame representation of Series. Examples >>> s = pd.Series(["a", "b", "c"], ... name="vals") >>> s.to_frame() vals 0 a 1 b 2 c
pandas.reference.api.pandas.series.to_frame
pandas.Series.to_hdf Series.to_hdf(path_or_buf, key, mode='a', complevel=None, complib=None, append=False, format=None, index=True, min_itemsize=None, nan_rep=None, dropna=None, data_columns=None, errors='strict', encoding='UTF-8')[source] Write the contained data to an HDF5 file using HDFStore. Hierarchical Data Format (HDF) is self-describing, allowing an application to interpret the structure and contents of a file with no outside information. One HDF file can hold a mix of related objects which can be accessed as a group or as individual objects. In order to add another DataFrame or Series to an existing HDF file please use append mode and a different key. Warning One can store a subclass of DataFrame or Series to HDF5, but the type of the subclass is lost upon storing. For more information see the user guide. Parameters path_or_buf:str or pandas.HDFStore File path or HDFStore object. key:str Identifier for the group in the store. mode:{‘a’, ‘w’, ‘r+’}, default ‘a’ Mode to open file: ‘w’: write, a new file is created (an existing file with the same name would be deleted). ‘a’: append, an existing file is opened for reading and writing, and if the file does not exist it is created. ‘r+’: similar to ‘a’, but the file must already exist. complevel:{0-9}, default None Specifies a compression level for data. A value of 0 or None disables compression. complib:{‘zlib’, ‘lzo’, ‘bzip2’, ‘blosc’}, default ‘zlib’ Specifies the compression library to be used. As of v0.20.2 these additional compressors for Blosc are supported (default if no compressor specified: ‘blosc:blosclz’): {‘blosc:blosclz’, ‘blosc:lz4’, ‘blosc:lz4hc’, ‘blosc:snappy’, ‘blosc:zlib’, ‘blosc:zstd’}. Specifying a compression library which is not available raises a ValueError. append:bool, default False For Table formats, append the input data to the existing data. format:{‘fixed’, ‘table’, None}, default ‘fixed’ Possible values: ‘fixed’: Fixed format. Fast writing/reading. Not-appendable, nor searchable. ‘table’: Table format. Write as a PyTables Table structure which may perform worse but allow more flexible operations like searching / selecting subsets of the data. If None, pd.get_option(‘io.hdf.default_format’) is checked, followed by fallback to “fixed”. errors:str, default ‘strict’ Specifies how encoding and decoding errors are to be handled. See the errors argument for open() for a full list of options. encoding:str, default “UTF-8” min_itemsize:dict or int, optional Map column names to minimum string sizes for columns. nan_rep:Any, optional How to represent null values as str. Not allowed with append=True. data_columns:list of columns or True, optional List of columns to create as indexed data columns for on-disk queries, or True to use all columns. By default only the axes of the object are indexed. See Query via data columns. Applicable only to format=’table’. See also read_hdf Read from HDF file. DataFrame.to_parquet Write a DataFrame to the binary parquet format. DataFrame.to_sql Write to a SQL table. DataFrame.to_feather Write out feather-format for DataFrames. DataFrame.to_csv Write out to a csv file. Examples >>> df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}, ... index=['a', 'b', 'c']) >>> df.to_hdf('data.h5', key='df', mode='w') We can add another object to the same file: >>> s = pd.Series([1, 2, 3, 4]) >>> s.to_hdf('data.h5', key='s') Reading from HDF file: >>> pd.read_hdf('data.h5', 'df') A B a 1 4 b 2 5 c 3 6 >>> pd.read_hdf('data.h5', 's') 0 1 1 2 2 3 3 4 dtype: int64
pandas.reference.api.pandas.series.to_hdf
pandas.Series.to_json Series.to_json(path_or_buf=None, orient=None, date_format=None, double_precision=10, force_ascii=True, date_unit='ms', default_handler=None, lines=False, compression='infer', index=True, indent=None, storage_options=None)[source] Convert the object to a JSON string. Note NaN’s and None will be converted to null and datetime objects will be converted to UNIX timestamps. Parameters path_or_buf:str, path object, file-like object, or None, default None String, path object (implementing os.PathLike[str]), or file-like object implementing a write() function. If None, the result is returned as a string. orient:str Indication of expected JSON string format. Series: default is ‘index’ allowed values are: {‘split’, ‘records’, ‘index’, ‘table’}. DataFrame: default is ‘columns’ allowed values are: {‘split’, ‘records’, ‘index’, ‘columns’, ‘values’, ‘table’}. The format of the JSON string: ‘split’ : dict like {‘index’ -> [index], ‘columns’ -> [columns], ‘data’ -> [values]} ‘records’ : list like [{column -> value}, … , {column -> value}] ‘index’ : dict like {index -> {column -> value}} ‘columns’ : dict like {column -> {index -> value}} ‘values’ : just the values array ‘table’ : dict like {‘schema’: {schema}, ‘data’: {data}} Describing the data, where data component is like orient='records'. date_format:{None, ‘epoch’, ‘iso’} Type of date conversion. ‘epoch’ = epoch milliseconds, ‘iso’ = ISO8601. The default depends on the orient. For orient='table', the default is ‘iso’. For all other orients, the default is ‘epoch’. double_precision:int, default 10 The number of decimal places to use when encoding floating point values. force_ascii:bool, default True Force encoded string to be ASCII. date_unit:str, default ‘ms’ (milliseconds) The time unit to encode to, governs timestamp and ISO8601 precision. One of ‘s’, ‘ms’, ‘us’, ‘ns’ for second, millisecond, microsecond, and nanosecond respectively. default_handler:callable, default None Handler to call if object cannot otherwise be converted to a suitable format for JSON. Should receive a single argument which is the object to convert and return a serialisable object. lines:bool, default False If ‘orient’ is ‘records’ write out line-delimited json format. Raises ValueError for any other ‘orient’, since the other orients are not list-like. compression:str or dict, default ‘infer’ For on-the-fly compression of the output data. If ‘infer’ and ‘path_or_buf’ is path-like, then detect compression from the following extensions: ‘.gz’, ‘.bz2’, ‘.zip’, ‘.xz’, or ‘.zst’ (otherwise no compression). Set to None for no compression. Can also be a dict with key 'method' set to one of {'zip', 'gzip', 'bz2', 'zstd'} and other key-value pairs are forwarded to zipfile.ZipFile, gzip.GzipFile, bz2.BZ2File, or zstandard.ZstdCompressor, respectively. As an example, the following could be passed for faster compression and to create a reproducible gzip archive: compression={'method': 'gzip', 'compresslevel': 1, 'mtime': 1}. Changed in version 1.4.0: Zstandard support. index:bool, default True Whether to include the index values in the JSON string. Not including the index (index=False) is only supported when orient is ‘split’ or ‘table’. indent:int, optional Length of whitespace used to indent each record. New in version 1.0.0. storage_options:dict, optional Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc. For HTTP(S) URLs the key-value pairs are forwarded to urllib as header options. For other URLs (e.g. 
starting with “s3://”, and “gcs://”) the key-value pairs are forwarded to fsspec. Please see fsspec and urllib for more details. New in version 1.2.0. Returns None or str If path_or_buf is None, returns the resulting json format as a string. Otherwise returns None. See also read_json Convert a JSON string to pandas object. Notes The behavior of indent=0 varies from the stdlib, which does not indent the output but does insert newlines. Currently, indent=0 and the default indent=None are equivalent in pandas, though this may change in a future release. orient='table' contains a ‘pandas_version’ field under ‘schema’. This stores the version of pandas used in the latest revision of the schema. Examples >>> import json >>> df = pd.DataFrame( ... [["a", "b"], ["c", "d"]], ... index=["row 1", "row 2"], ... columns=["col 1", "col 2"], ... ) >>> result = df.to_json(orient="split") >>> parsed = json.loads(result) >>> json.dumps(parsed, indent=4) { "columns": [ "col 1", "col 2" ], "index": [ "row 1", "row 2" ], "data": [ [ "a", "b" ], [ "c", "d" ] ] } Encoding/decoding a Dataframe using 'records' formatted JSON. Note that index labels are not preserved with this encoding. >>> result = df.to_json(orient="records") >>> parsed = json.loads(result) >>> json.dumps(parsed, indent=4) [ { "col 1": "a", "col 2": "b" }, { "col 1": "c", "col 2": "d" } ] Encoding/decoding a Dataframe using 'index' formatted JSON: >>> result = df.to_json(orient="index") >>> parsed = json.loads(result) >>> json.dumps(parsed, indent=4) { "row 1": { "col 1": "a", "col 2": "b" }, "row 2": { "col 1": "c", "col 2": "d" } } Encoding/decoding a Dataframe using 'columns' formatted JSON: >>> result = df.to_json(orient="columns") >>> parsed = json.loads(result) >>> json.dumps(parsed, indent=4) { "col 1": { "row 1": "a", "row 2": "c" }, "col 2": { "row 1": "b", "row 2": "d" } } Encoding/decoding a Dataframe using 'values' formatted JSON: >>> result = df.to_json(orient="values") >>> parsed = json.loads(result) >>> json.dumps(parsed, indent=4) [ [ "a", "b" ], [ "c", "d" ] ] Encoding with Table Schema: >>> result = df.to_json(orient="table") >>> parsed = json.loads(result) >>> json.dumps(parsed, indent=4) { "schema": { "fields": [ { "name": "index", "type": "string" }, { "name": "col 1", "type": "string" }, { "name": "col 2", "type": "string" } ], "primaryKey": [ "index" ], "pandas_version": "1.4.0" }, "data": [ { "index": "row 1", "col 1": "a", "col 2": "b" }, { "index": "row 2", "col 1": "c", "col 2": "d" } ] }
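The examples above encode a DataFrame; for a Series the default orient is ‘index’ (a minimal sketch):

>>> s = pd.Series(['a', 'b'], index=['row 1', 'row 2'])
>>> s.to_json()
'{"row 1":"a","row 2":"b"}'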
pandas.reference.api.pandas.series.to_json
pandas.Series.to_latex Series.to_latex(buf=None, columns=None, col_space=None, header=True, index=True, na_rep='NaN', formatters=None, float_format=None, sparsify=None, index_names=True, bold_rows=False, column_format=None, longtable=None, escape=None, encoding=None, decimal='.', multicolumn=None, multicolumn_format=None, multirow=None, caption=None, label=None, position=None)[source] Render object to a LaTeX tabular, longtable, or nested table. Requires \usepackage{booktabs}. The output can be copy/pasted into a main LaTeX document or read from an external file with \input{table.tex}. Changed in version 1.0.0: Added caption and label arguments. Changed in version 1.2.0: Added position argument, changed meaning of caption argument. Parameters buf:str, Path or StringIO-like, optional, default None Buffer to write to. If None, the output is returned as a string. columns:list of label, optional The subset of columns to write. Writes all columns by default. col_space:int, optional The minimum width of each column. header:bool or list of str, default True Write out the column names. If a list of strings is given, it is assumed to be aliases for the column names. index:bool, default True Write row names (index). na_rep:str, default ‘NaN’ Missing data representation. formatters:list of functions or dict of {str: function}, optional Formatter functions to apply to columns’ elements by position or name. The result of each function must be a unicode string. List must be of length equal to the number of columns. float_format:one-parameter function or str, optional, default None Formatter for floating point numbers. For example float_format="%.2f" and float_format="{:0.2f}".format will both result in 0.1234 being formatted as 0.12. sparsify:bool, optional Set to False for a DataFrame with a hierarchical index to print every multiindex key at each row. By default, the value will be read from the config module. index_names:bool, default True Prints the names of the indexes. bold_rows:bool, default False Make the row labels bold in the output. column_format:str, optional The columns format as specified in LaTeX table format e.g. ‘rcl’ for 3 columns. By default, ‘l’ will be used for all columns except columns of numbers, which default to ‘r’. longtable:bool, optional By default, the value will be read from the pandas config module. Use a longtable environment instead of tabular. Requires adding a \usepackage{longtable} to your LaTeX preamble. escape:bool, optional By default, the value will be read from the pandas config module. When set to False, prevents escaping of LaTeX special characters in column names. encoding:str, optional A string representing the encoding to use in the output file, defaults to ‘utf-8’. decimal:str, default ‘.’ Character recognized as decimal separator, e.g. ‘,’ in Europe. multicolumn:bool, default True Use multicolumn to enhance MultiIndex columns. The default will be read from the config module. multicolumn_format:str, default ‘l’ The alignment for multicolumns, similar to column_format. The default will be read from the config module. multirow:bool, default False Use multirow to enhance MultiIndex rows. Requires adding a \usepackage{multirow} to your LaTeX preamble. Will print centered labels (instead of top-aligned) across the contained rows, separating groups via clines. The default will be read from the pandas config module. 
caption:str or tuple, optional Tuple (full_caption, short_caption), which results in \caption[short_caption]{full_caption}; if a single string is passed, no short caption will be set. New in version 1.0.0. Changed in version 1.2.0: Optionally allow caption to be a tuple (full_caption, short_caption). label:str, optional The LaTeX label to be placed inside \label{} in the output. This is used with \ref{} in the main .tex file. New in version 1.0.0. position:str, optional The LaTeX positional argument for tables, to be placed after \begin{} in the output. New in version 1.2.0. Returns str or None If buf is None, returns the result as a string. Otherwise returns None. See also Styler.to_latex Render a DataFrame to LaTeX with conditional formatting. DataFrame.to_string Render a DataFrame to a console-friendly tabular output. DataFrame.to_html Render a DataFrame as an HTML table. Examples >>> df = pd.DataFrame(dict(name=['Raphael', 'Donatello'], ... mask=['red', 'purple'], ... weapon=['sai', 'bo staff'])) >>> print(df.to_latex(index=False)) \begin{tabular}{lll} \toprule name & mask & weapon \\ \midrule Raphael & red & sai \\ Donatello & purple & bo staff \\ \bottomrule \end{tabular}
pandas.reference.api.pandas.series.to_latex
pandas.Series.to_list Series.to_list()[source] Return a list of the values. These are each a scalar type, which is a Python scalar (for str, int, float) or a pandas scalar (for Timestamp/Timedelta/Interval/Period) Returns list See also numpy.ndarray.tolist Return the array as an a.ndim-levels deep nested list of Python scalars.
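A minimal example:

>>> s = pd.Series([1, 2, 3])
>>> s.to_list()
[1, 2, 3]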
pandas.reference.api.pandas.series.to_list
pandas.Series.to_markdown Series.to_markdown(buf=None, mode='wt', index=True, storage_options=None, **kwargs)[source] Print Series in Markdown-friendly format. New in version 1.0.0. Parameters buf:str, Path or StringIO-like, optional, default None Buffer to write to. If None, the output is returned as a string. mode:str, optional Mode in which file is opened, “wt” by default. index:bool, optional, default True Add index (row) labels. New in version 1.1.0. storage_options:dict, optional Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc. For HTTP(S) URLs the key-value pairs are forwarded to urllib as header options. For other URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are forwarded to fsspec. Please see fsspec and urllib for more details. New in version 1.2.0. **kwargs These parameters will be passed to tabulate. Returns str Series in Markdown-friendly format. Notes Requires the tabulate package. Examples >>> s = pd.Series(["elk", "pig", "dog", "quetzal"], name="animal") >>> print(s.to_markdown()) | | animal | |---:|:---------| | 0 | elk | | 1 | pig | | 2 | dog | | 3 | quetzal | Output markdown with a tabulate option. >>> print(s.to_markdown(tablefmt="grid")) +----+----------+ | | animal | +====+==========+ | 0 | elk | +----+----------+ | 1 | pig | +----+----------+ | 2 | dog | +----+----------+ | 3 | quetzal | +----+----------+
pandas.reference.api.pandas.series.to_markdown
pandas.Series.to_numpy Series.to_numpy(dtype=None, copy=False, na_value=NoDefault.no_default, **kwargs)[source] A NumPy ndarray representing the values in this Series or Index. Parameters dtype:str or numpy.dtype, optional The dtype to pass to numpy.asarray(). copy:bool, default False Whether to ensure that the returned value is not a view on another array. Note that copy=False does not ensure that to_numpy() is no-copy. Rather, copy=True ensures that a copy is made, even if not strictly necessary. na_value:Any, optional The value to use for missing values. The default value depends on dtype and the type of the array. New in version 1.0.0. **kwargs Additional keywords passed through to the to_numpy method of the underlying array (for extension arrays). New in version 1.0.0. Returns numpy.ndarray See also Series.array Get the actual data stored within. Index.array Get the actual data stored within. DataFrame.to_numpy Similar method for DataFrame. Notes The returned array will be the same up to equality (values equal in self will be equal in the returned array; likewise for values that are not equal). When self contains an ExtensionArray, the dtype may be different. For example, for a category-dtype Series, to_numpy() will return a NumPy array and the categorical dtype will be lost. For NumPy dtypes, this will be a reference to the actual data stored in this Series or Index (assuming copy=False). Modifying the result in place will modify the data stored in the Series or Index (not that we recommend doing that). For extension types, to_numpy() may require copying data and coercing the result to a NumPy type (possibly object), which may be expensive. When you need a no-copy reference to the underlying data, Series.array should be used instead. The following lays out the default return types of to_numpy() for various dtypes within pandas: category[T] -> ndarray[T] (same dtype as input); period -> ndarray[object] (Periods); interval -> ndarray[object] (Intervals); IntegerNA -> ndarray[object]; datetime64[ns] -> datetime64[ns]; datetime64[ns, tz] -> ndarray[object] (Timestamps). Examples >>> ser = pd.Series(pd.Categorical(['a', 'b', 'a'])) >>> ser.to_numpy() array(['a', 'b', 'a'], dtype=object) Specify the dtype to control how datetime-aware data is represented. Use dtype=object to return an ndarray of pandas Timestamp objects, each with the correct tz. >>> ser = pd.Series(pd.date_range('2000', periods=2, tz="CET")) >>> ser.to_numpy(dtype=object) array([Timestamp('2000-01-01 00:00:00+0100', tz='CET'), Timestamp('2000-01-02 00:00:00+0100', tz='CET')], dtype=object) Or dtype='datetime64[ns]' to return an ndarray of native datetime64 values. The values are converted to UTC and the timezone info is dropped. >>> ser.to_numpy(dtype="datetime64[ns]") ... array(['1999-12-31T23:00:00.000000000', '2000-01-01T23:00:00...'], dtype='datetime64[ns]')
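To illustrate na_value, a sketch with a nullable-integer Series (the missing entry is replaced before conversion):

>>> s = pd.Series([1, 2, None], dtype="Int64")
>>> s.to_numpy(dtype="float64", na_value=np.nan)
array([ 1.,  2., nan])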
pandas.reference.api.pandas.series.to_numpy
pandas.Series.to_period Series.to_period(freq=None, copy=True)[source] Convert Series from DatetimeIndex to PeriodIndex. Parameters freq:str, default None Frequency associated with the PeriodIndex. copy:bool, default True Whether or not to return a copy. Returns Series Series with index converted to PeriodIndex.
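A minimal sketch, converting a month-end DatetimeIndex to monthly periods:

>>> s = pd.Series([1, 2, 3],
...               index=pd.date_range("2000-01-01", periods=3, freq="M"))
>>> s.to_period("M")
2000-01    1
2000-02    2
2000-03    3
Freq: M, dtype: int64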
pandas.reference.api.pandas.series.to_period
pandas.Series.to_pickle Series.to_pickle(path, compression='infer', protocol=5, storage_options=None)[source] Pickle (serialize) object to file. Parameters path:str File path where the pickled object will be stored. compression:str or dict, default ‘infer’ For on-the-fly compression of the output data. If ‘infer’ and ‘path’ is path-like, then detect compression from the following extensions: ‘.gz’, ‘.bz2’, ‘.zip’, ‘.xz’, or ‘.zst’ (otherwise no compression). Set to None for no compression. Can also be a dict with key 'method' set to one of {'zip', 'gzip', 'bz2', 'zstd'} and other key-value pairs are forwarded to zipfile.ZipFile, gzip.GzipFile, bz2.BZ2File, or zstandard.ZstdCompressor, respectively. As an example, the following could be passed for faster compression and to create a reproducible gzip archive: compression={'method': 'gzip', 'compresslevel': 1, 'mtime': 1}. protocol:int Int which indicates which protocol should be used by the pickler, default HIGHEST_PROTOCOL (see [1] paragraph 12.1.2). The possible values are 0, 1, 2, 3, 4, 5. A negative value for the protocol parameter is equivalent to setting its value to HIGHEST_PROTOCOL. 1 https://docs.python.org/3/library/pickle.html. storage_options:dict, optional Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc. For HTTP(S) URLs the key-value pairs are forwarded to urllib as header options. For other URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are forwarded to fsspec. Please see fsspec and urllib for more details. New in version 1.2.0. See also read_pickle Load pickled pandas object (or any object) from file. DataFrame.to_hdf Write DataFrame to an HDF5 file. DataFrame.to_sql Write DataFrame to a SQL database. DataFrame.to_parquet Write a DataFrame to the binary parquet format. Examples >>> original_df = pd.DataFrame({"foo": range(5), "bar": range(5, 10)}) >>> original_df foo bar 0 0 5 1 1 6 2 2 7 3 3 8 4 4 9 >>> original_df.to_pickle("./dummy.pkl") >>> unpickled_df = pd.read_pickle("./dummy.pkl") >>> unpickled_df foo bar 0 0 5 1 1 6 2 2 7 3 3 8 4 4 9
pandas.reference.api.pandas.series.to_pickle
pandas.Series.to_sql Series.to_sql(name, con, schema=None, if_exists='fail', index=True, index_label=None, chunksize=None, dtype=None, method=None)[source] Write records stored in a DataFrame to a SQL database. Databases supported by SQLAlchemy [1] are supported. Tables can be newly created, appended to, or overwritten. Parameters name:str Name of SQL table. con:sqlalchemy.engine.(Engine or Connection) or sqlite3.Connection Using SQLAlchemy makes it possible to use any DB supported by that library. Legacy support is provided for sqlite3.Connection objects. The user is responsible for engine disposal and connection closure for the SQLAlchemy connectable See here. schema:str, optional Specify the schema (if database flavor supports this). If None, use default schema. if_exists:{‘fail’, ‘replace’, ‘append’}, default ‘fail’ How to behave if the table already exists. fail: Raise a ValueError. replace: Drop the table before inserting new values. append: Insert new values to the existing table. index:bool, default True Write DataFrame index as a column. Uses index_label as the column name in the table. index_label:str or sequence, default None Column label for index column(s). If None is given (default) and index is True, then the index names are used. A sequence should be given if the DataFrame uses MultiIndex. chunksize:int, optional Specify the number of rows in each batch to be written at a time. By default, all rows will be written at once. dtype:dict or scalar, optional Specifying the datatype for columns. If a dictionary is used, the keys should be the column names and the values should be the SQLAlchemy types or strings for the sqlite3 legacy mode. If a scalar is provided, it will be applied to all columns. method:{None, ‘multi’, callable}, optional Controls the SQL insertion clause used: None : Uses standard SQL INSERT clause (one per row). ‘multi’: Pass multiple values in a single INSERT clause. callable with signature (pd_table, conn, keys, data_iter). Details and a sample callable implementation can be found in the section insert method. Returns None or int Number of rows affected by to_sql. None is returned if the callable passed into method does not return the number of rows. The number of returned rows affected is the sum of the rowcount attribute of sqlite3.Cursor or SQLAlchemy connectable which may not reflect the exact number of written rows as stipulated in the sqlite3 or SQLAlchemy. New in version 1.4.0. Raises ValueError When the table already exists and if_exists is ‘fail’ (the default). See also read_sql Read a DataFrame from a table. Notes Timezone aware datetime columns will be written as Timestamp with timezone type with SQLAlchemy if supported by the database. Otherwise, the datetimes will be stored as timezone unaware timestamps local to the original timezone. References 1 https://docs.sqlalchemy.org 2 https://www.python.org/dev/peps/pep-0249/ Examples Create an in-memory SQLite database. >>> from sqlalchemy import create_engine >>> engine = create_engine('sqlite://', echo=False) Create a table from scratch with 3 rows. >>> df = pd.DataFrame({'name' : ['User 1', 'User 2', 'User 3']}) >>> df name 0 User 1 1 User 2 2 User 3 >>> df.to_sql('users', con=engine) 3 >>> engine.execute("SELECT * FROM users").fetchall() [(0, 'User 1'), (1, 'User 2'), (2, 'User 3')] An sqlalchemy.engine.Connection can also be passed to con: >>> with engine.begin() as connection: ... df1 = pd.DataFrame({'name' : ['User 4', 'User 5']}) ... 
df1.to_sql('users', con=connection, if_exists='append') 2 This is allowed to support operations that require that the same DBAPI connection is used for the entire operation. >>> df2 = pd.DataFrame({'name' : ['User 6', 'User 7']}) >>> df2.to_sql('users', con=engine, if_exists='append') 2 >>> engine.execute("SELECT * FROM users").fetchall() [(0, 'User 1'), (1, 'User 2'), (2, 'User 3'), (0, 'User 4'), (1, 'User 5'), (0, 'User 6'), (1, 'User 7')] Overwrite the table with just df2. >>> df2.to_sql('users', con=engine, if_exists='replace', ... index_label='id') 2 >>> engine.execute("SELECT * FROM users").fetchall() [(0, 'User 6'), (1, 'User 7')] Specify the dtype (especially useful for integers with missing values). Notice that while pandas is forced to store the data as floating point, the database supports nullable integers. When fetching the data with Python, we get back integer scalars. >>> df = pd.DataFrame({"A": [1, None, 2]}) >>> df A 0 1.0 1 NaN 2 2.0 >>> from sqlalchemy.types import Integer >>> df.to_sql('integers', con=engine, index=False, ... dtype={"A": Integer()}) 3 >>> engine.execute("SELECT * FROM integers").fetchall() [(1,), (None,), (2,)]
pandas.reference.api.pandas.series.to_sql
pandas.Series.to_string Series.to_string(buf=None, na_rep='NaN', float_format=None, header=True, index=True, length=False, dtype=False, name=False, max_rows=None, min_rows=None)[source] Render a string representation of the Series. Parameters buf:StringIO-like, optional Buffer to write to. na_rep:str, optional String representation of NaN to use, default ‘NaN’. float_format:one-parameter function, optional Formatter function to apply to columns’ elements if they are floats, default None. header:bool, default True Add the Series header (index name). index:bool, optional Add index (row) labels, default True. length:bool, default False Add the Series length. dtype:bool, default False Add the Series dtype. name:bool, default False Add the Series name if not None. max_rows:int, optional Maximum number of rows to show before truncating. If None, show all. min_rows:int, optional The number of rows to display in a truncated repr (when number of rows is above max_rows). Returns str or None String representation of Series if buf=None, otherwise None.
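A minimal example; with buf=None the rendered text is returned:

>>> s = pd.Series([1, 2, 3])
>>> s.to_string()
'0    1\n1    2\n2    3'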
pandas.reference.api.pandas.series.to_string
pandas.Series.to_timestamp Series.to_timestamp(freq=None, how='start', copy=True)[source] Cast to DatetimeIndex of Timestamps, at beginning of period. Parameters freq:str, default frequency of PeriodIndex Desired frequency. how:{‘s’, ‘e’, ‘start’, ‘end’} Convention for converting period to timestamp; start of period vs. end. copy:bool, default True Whether or not to return a copy. Returns Series with DatetimeIndex
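A minimal sketch, converting a monthly PeriodIndex to timestamps at the start of each period:

>>> s = pd.Series([1, 2, 3],
...               index=pd.period_range("2000-01", periods=3, freq="M"))
>>> s.to_timestamp()
2000-01-01    1
2000-02-01    2
2000-03-01    3
Freq: MS, dtype: int64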
pandas.reference.api.pandas.series.to_timestamp
pandas.Series.to_xarray Series.to_xarray()[source] Return an xarray object from the pandas object. Returns xarray.DataArray or xarray.Dataset Data in the pandas structure converted to Dataset if the object is a DataFrame, or a DataArray if the object is a Series. See also DataFrame.to_hdf Write DataFrame to an HDF5 file. DataFrame.to_parquet Write a DataFrame to the binary parquet format. Notes See the xarray docs Examples >>> df = pd.DataFrame([('falcon', 'bird', 389.0, 2), ... ('parrot', 'bird', 24.0, 2), ... ('lion', 'mammal', 80.5, 4), ... ('monkey', 'mammal', np.nan, 4)], ... columns=['name', 'class', 'max_speed', ... 'num_legs']) >>> df name class max_speed num_legs 0 falcon bird 389.0 2 1 parrot bird 24.0 2 2 lion mammal 80.5 4 3 monkey mammal NaN 4 >>> df.to_xarray() <xarray.Dataset> Dimensions: (index: 4) Coordinates: * index (index) int64 0 1 2 3 Data variables: name (index) object 'falcon' 'parrot' 'lion' 'monkey' class (index) object 'bird' 'bird' 'mammal' 'mammal' max_speed (index) float64 389.0 24.0 80.5 nan num_legs (index) int64 2 2 4 4 >>> df['max_speed'].to_xarray() <xarray.DataArray 'max_speed' (index: 4)> array([389. , 24. , 80.5, nan]) Coordinates: * index (index) int64 0 1 2 3 >>> dates = pd.to_datetime(['2018-01-01', '2018-01-01', ... '2018-01-02', '2018-01-02']) >>> df_multiindex = pd.DataFrame({'date': dates, ... 'animal': ['falcon', 'parrot', ... 'falcon', 'parrot'], ... 'speed': [350, 18, 361, 15]}) >>> df_multiindex = df_multiindex.set_index(['date', 'animal']) >>> df_multiindex speed date animal 2018-01-01 falcon 350 parrot 18 2018-01-02 falcon 361 parrot 15 >>> df_multiindex.to_xarray() <xarray.Dataset> Dimensions: (animal: 2, date: 2) Coordinates: * date (date) datetime64[ns] 2018-01-01 2018-01-02 * animal (animal) object 'falcon' 'parrot' Data variables: speed (date, animal) int64 350 18 361 15
pandas.reference.api.pandas.series.to_xarray
pandas.Series.tolist Series.tolist()[source] Return a list of the values. These are each a scalar type, which is a Python scalar (for str, int, float) or a pandas scalar (for Timestamp/Timedelta/Interval/Period) Returns list See also numpy.ndarray.tolist Return the array as an a.ndim-levels deep nested list of Python scalars.
pandas.reference.api.pandas.series.tolist
pandas.Series.transform Series.transform(func, axis=0, *args, **kwargs)[source] Call func on self producing a Series with the same axis shape as self. Parameters func:function, str, list-like or dict-like Function to use for transforming the data. If a function, must either work when passed a Series or when passed to Series.apply. If func is both list-like and dict-like, dict-like behavior takes precedence. Accepted combinations are: function string function name list-like of functions and/or function names, e.g. [np.exp, 'sqrt'] dict-like of axis labels -> functions, function names or list-like of such. axis:{0 or ‘index’} Parameter needed for compatibility with DataFrame. *args Positional arguments to pass to func. **kwargs Keyword arguments to pass to func. Returns Series A Series that must have the same length as self. Raises ValueError:If the returned Series has a different length than self. See also Series.agg Only perform aggregating type operations. Series.apply Invoke function on a Series. Notes Functions that mutate the passed object can produce unexpected behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods for more details. Examples >>> df = pd.DataFrame({'A': range(3), 'B': range(1, 4)}) >>> df A B 0 0 1 1 1 2 2 2 3 >>> df.transform(lambda x: x + 1) A B 0 1 2 1 2 3 2 3 4 Even though the resulting Series must have the same length as the input Series, it is possible to provide several input functions: >>> s = pd.Series(range(3)) >>> s 0 0 1 1 2 2 dtype: int64 >>> s.transform([np.sqrt, np.exp]) sqrt exp 0 0.000000 1.000000 1 1.000000 2.718282 2 1.414214 7.389056 You can call transform on a GroupBy object: >>> df = pd.DataFrame({ ... "Date": [ ... "2015-05-08", "2015-05-07", "2015-05-06", "2015-05-05", ... "2015-05-08", "2015-05-07", "2015-05-06", "2015-05-05"], ... "Data": [5, 8, 6, 1, 50, 100, 60, 120], ... }) >>> df Date Data 0 2015-05-08 5 1 2015-05-07 8 2 2015-05-06 6 3 2015-05-05 1 4 2015-05-08 50 5 2015-05-07 100 6 2015-05-06 60 7 2015-05-05 120 >>> df.groupby('Date')['Data'].transform('sum') 0 55 1 108 2 66 3 121 4 55 5 108 6 66 7 121 Name: Data, dtype: int64 >>> df = pd.DataFrame({ ... "c": [1, 1, 1, 2, 2, 2, 2], ... "type": ["m", "n", "o", "m", "m", "n", "n"] ... }) >>> df c type 0 1 m 1 1 n 2 1 o 3 2 m 4 2 m 5 2 n 6 2 n >>> df['size'] = df.groupby('c')['type'].transform(len) >>> df c type size 0 1 m 3 1 1 n 3 2 1 o 3 3 2 m 4 4 2 m 4 5 2 n 4 6 2 n 4
pandas.reference.api.pandas.series.transform
pandas.Series.transpose Series.transpose(*args, **kwargs)[source] Return the transpose, which is by definition self. Returns Series
pandas.reference.api.pandas.series.transpose
pandas.Series.truediv Series.truediv(other, level=None, fill_value=None, axis=0)[source] Return Floating division of series and other, element-wise (binary operator truediv). Equivalent to series / other, but with support to substitute a fill_value for missing data in either one of the inputs. Parameters other:Series or scalar value fill_value:None or float value, default None (NaN) Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result of filling (at that location) will be missing. level:int or name Broadcast across a level, matching Index values on the passed MultiIndex level. Returns Series The result of the operation. See also Series.rtruediv Reverse of the Floating division operator, see Python documentation for more details. Examples >>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd']) >>> a a 1.0 b 1.0 c 1.0 d NaN dtype: float64 >>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e']) >>> b a 1.0 b NaN d 1.0 e NaN dtype: float64 >>> a.divide(b, fill_value=0) a 1.0 b inf c inf d 0.0 e NaN dtype: float64
pandas.reference.api.pandas.series.truediv
pandas.Series.truncate Series.truncate(before=None, after=None, axis=None, copy=True)[source] Truncate a Series or DataFrame before and after some index value. This is a useful shorthand for boolean indexing based on index values above or below certain thresholds. Parameters before:date, str, int Truncate all rows before this index value. after:date, str, int Truncate all rows after this index value. axis:{0 or ‘index’, 1 or ‘columns’}, optional Axis to truncate. Truncates the index (rows) by default. copy:bool, default is True, Return a copy of the truncated section. Returns type of caller The truncated Series or DataFrame. See also DataFrame.loc Select a subset of a DataFrame by label. DataFrame.iloc Select a subset of a DataFrame by position. Notes If the index being truncated contains only datetime values, before and after may be specified as strings instead of Timestamps. Examples >>> df = pd.DataFrame({'A': ['a', 'b', 'c', 'd', 'e'], ... 'B': ['f', 'g', 'h', 'i', 'j'], ... 'C': ['k', 'l', 'm', 'n', 'o']}, ... index=[1, 2, 3, 4, 5]) >>> df A B C 1 a f k 2 b g l 3 c h m 4 d i n 5 e j o >>> df.truncate(before=2, after=4) A B C 2 b g l 3 c h m 4 d i n The columns of a DataFrame can be truncated. >>> df.truncate(before="A", after="B", axis="columns") A B 1 a f 2 b g 3 c h 4 d i 5 e j For Series, only rows can be truncated. >>> df['A'].truncate(before=2, after=4) 2 b 3 c 4 d Name: A, dtype: object The index values in truncate can be datetimes or string dates. >>> dates = pd.date_range('2016-01-01', '2016-02-01', freq='s') >>> df = pd.DataFrame(index=dates, data={'A': 1}) >>> df.tail() A 2016-01-31 23:59:56 1 2016-01-31 23:59:57 1 2016-01-31 23:59:58 1 2016-01-31 23:59:59 1 2016-02-01 00:00:00 1 >>> df.truncate(before=pd.Timestamp('2016-01-05'), ... after=pd.Timestamp('2016-01-10')).tail() A 2016-01-09 23:59:56 1 2016-01-09 23:59:57 1 2016-01-09 23:59:58 1 2016-01-09 23:59:59 1 2016-01-10 00:00:00 1 Because the index is a DatetimeIndex containing only dates, we can specify before and after as strings. They will be coerced to Timestamps before truncation. >>> df.truncate('2016-01-05', '2016-01-10').tail() A 2016-01-09 23:59:56 1 2016-01-09 23:59:57 1 2016-01-09 23:59:58 1 2016-01-09 23:59:59 1 2016-01-10 00:00:00 1 Note that truncate assumes a 0 value for any unspecified time component (midnight). This differs from partial string slicing, which returns any partially matching dates. >>> df.loc['2016-01-05':'2016-01-10', :].tail() A 2016-01-10 23:59:55 1 2016-01-10 23:59:56 1 2016-01-10 23:59:57 1 2016-01-10 23:59:58 1 2016-01-10 23:59:59 1
pandas.reference.api.pandas.series.truncate
pandas.Series.tshift Series.tshift(periods=1, freq=None, axis=0)[source] Shift the time index, using the index’s frequency if available. Deprecated since version 1.1.0: Use shift instead. Parameters periods:int Number of periods to move, can be positive or negative. freq:DateOffset, timedelta, or str, default None Increment to use from the tseries module or time rule expressed as a string (e.g. ‘EOM’). axis:{0 or ‘index’, 1 or ‘columns’, None}, default 0 Corresponds to the axis that contains the Index. Returns shifted:Series/DataFrame Notes If freq is not specified, tshift tries to use the freq or inferred_freq attributes of the index. If neither of those attributes exists, a ValueError is raised.
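Since tshift is deprecated, here is a brief sketch of the recommended replacement, shift with an explicit freq, which moves the index rather than the values (illustrative data, not from the upstream docs):
>>> idx = pd.date_range('2022-01-01', periods=3, freq='D')
>>> s = pd.Series([1, 2, 3], index=idx)
>>> s.shift(periods=1, freq='D')  # equivalent to the old s.tshift(1)
2022-01-02    1
2022-01-03    2
2022-01-04    3
Freq: D, dtype: int64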
pandas.reference.api.pandas.series.tshift
pandas.Series.tz_convert Series.tz_convert(tz, axis=0, level=None, copy=True)[source] Convert tz-aware axis to target time zone. Parameters tz:str or tzinfo object axis: The axis to convert. level:int, str, default None If axis is a MultiIndex, convert a specific level. Otherwise must be None. copy:bool, default True Also make a copy of the underlying data. Returns Series/DataFrame Object with time zone converted axis. Raises TypeError If the axis is tz-naive.
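A minimal usage sketch (illustrative data, not from the upstream docs), converting a UTC-aware index to US/Eastern:
>>> s = pd.Series([1], index=pd.DatetimeIndex(['2018-09-15 01:30:00'], tz='UTC'))
>>> s.tz_convert('US/Eastern')
2018-09-14 21:30:00-04:00    1
dtype: int64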
pandas.reference.api.pandas.series.tz_convert
pandas.Series.tz_localize Series.tz_localize(tz, axis=0, level=None, copy=True, ambiguous='raise', nonexistent='raise')[source] Localize tz-naive index of a Series or DataFrame to target time zone. This operation localizes the Index. To localize the values in a timezone-naive Series, use Series.dt.tz_localize(). Parameters tz:str or tzinfo axis: The axis to localize. level:int, str, default None If axis is a MultiIndex, localize a specific level. Otherwise must be None. copy:bool, default True Also make a copy of the underlying data. ambiguous:‘infer’, bool-ndarray, ‘NaT’, default ‘raise’ When clocks moved backward due to DST, ambiguous times may arise. For example in Central European Time (UTC+01), when going from 03:00 DST to 02:00 non-DST, 02:30:00 local time occurs both at 00:30:00 UTC and at 01:30:00 UTC. In such a situation, the ambiguous parameter dictates how ambiguous times should be handled. ‘infer’ will attempt to infer fall dst-transition hours based on order bool-ndarray where True signifies a DST time, False designates a non-DST time (note that this flag is only applicable for ambiguous times) ‘NaT’ will return NaT where there are ambiguous times ‘raise’ will raise an AmbiguousTimeError if there are ambiguous times. nonexistent:str, default ‘raise’ A nonexistent time does not exist in a particular timezone where clocks moved forward due to DST. Valid values are: ‘shift_forward’ will shift the nonexistent time forward to the closest existing time ‘shift_backward’ will shift the nonexistent time backward to the closest existing time ‘NaT’ will return NaT where there are nonexistent times timedelta objects will shift nonexistent times by the timedelta ‘raise’ will raise a NonExistentTimeError if there are nonexistent times. Returns Series or DataFrame Same type as the input. Raises TypeError If the TimeSeries is tz-aware and tz is not None. Examples Localize local times: >>> s = pd.Series([1], ... index=pd.DatetimeIndex(['2018-09-15 01:30:00'])) >>> s.tz_localize('CET') 2018-09-15 01:30:00+02:00 1 dtype: int64 Be careful with DST changes. When there is sequential data, pandas can infer the DST time: >>> s = pd.Series(range(7), ... index=pd.DatetimeIndex(['2018-10-28 01:30:00', ... '2018-10-28 02:00:00', ... '2018-10-28 02:30:00', ... '2018-10-28 02:00:00', ... '2018-10-28 02:30:00', ... '2018-10-28 03:00:00', ... '2018-10-28 03:30:00'])) >>> s.tz_localize('CET', ambiguous='infer') 2018-10-28 01:30:00+02:00 0 2018-10-28 02:00:00+02:00 1 2018-10-28 02:30:00+02:00 2 2018-10-28 02:00:00+01:00 3 2018-10-28 02:30:00+01:00 4 2018-10-28 03:00:00+01:00 5 2018-10-28 03:30:00+01:00 6 dtype: int64 In some cases, inferring the DST is impossible. In such cases, you can pass an ndarray to the ambiguous parameter to set the DST explicitly >>> s = pd.Series(range(3), ... index=pd.DatetimeIndex(['2018-10-28 01:20:00', ... '2018-10-28 02:36:00', ... '2018-10-28 03:46:00'])) >>> s.tz_localize('CET', ambiguous=np.array([True, True, False])) 2018-10-28 01:20:00+02:00 0 2018-10-28 02:36:00+02:00 1 2018-10-28 03:46:00+01:00 2 dtype: int64 If the DST transition causes nonexistent times, you can shift these dates forward or backward with a timedelta object or ‘shift_forward’ or ‘shift_backward’. >>> s = pd.Series(range(2), ... index=pd.DatetimeIndex(['2015-03-29 02:30:00', ... '2015-03-29 03:30:00'])) >>> s.tz_localize('Europe/Warsaw', nonexistent='shift_forward') 2015-03-29 03:00:00+02:00 0 2015-03-29 03:30:00+02:00 1 dtype: int64 >>> s.tz_localize('Europe/Warsaw', nonexistent='shift_backward') 2015-03-29 01:59:59.999999999+01:00 0 2015-03-29 03:30:00+02:00 1 dtype: int64 >>> s.tz_localize('Europe/Warsaw', nonexistent=pd.Timedelta('1H')) 2015-03-29 03:30:00+02:00 0 2015-03-29 03:30:00+02:00 1 dtype: int64
pandas.reference.api.pandas.series.tz_localize
pandas.Series.unique Series.unique()[source] Return unique values of Series object. Uniques are returned in order of appearance. Hash table-based unique, therefore does NOT sort. Returns ndarray or ExtensionArray The unique values returned as a NumPy array. See Notes. See also unique Top-level unique method for any 1-d array-like object. Index.unique Return Index with unique values from an Index object. Notes Returns the unique values as a NumPy array. In case of an extension-array backed Series, a new ExtensionArray of that type with just the unique values is returned. This includes Categorical Period Datetime with Timezone Interval Sparse IntegerNA See Examples section. Examples >>> pd.Series([2, 1, 3, 3], name='A').unique() array([2, 1, 3]) >>> pd.Series([pd.Timestamp('2016-01-01') for _ in range(3)]).unique() array(['2016-01-01T00:00:00.000000000'], dtype='datetime64[ns]') >>> pd.Series([pd.Timestamp('2016-01-01', tz='US/Eastern') ... for _ in range(3)]).unique() <DatetimeArray> ['2016-01-01 00:00:00-05:00'] Length: 1, dtype: datetime64[ns, US/Eastern] A Categorical will return categories in the order of appearance and with the same dtype. >>> pd.Series(pd.Categorical(list('baabc'))).unique() ['b', 'a', 'c'] Categories (3, object): ['a', 'b', 'c'] >>> pd.Series(pd.Categorical(list('baabc'), categories=list('abc'), ... ordered=True)).unique() ['b', 'a', 'c'] Categories (3, object): ['a' < 'b' < 'c']
pandas.reference.api.pandas.series.unique
pandas.Series.unstack Series.unstack(level=-1, fill_value=None)[source] Unstack, also known as pivot, Series with MultiIndex to produce DataFrame. Parameters level:int, str, or list of these, default last level Level(s) to unstack, can pass level name. fill_value:scalar value, default None Value to use when replacing NaN values. Returns DataFrame Unstacked Series. Examples >>> s = pd.Series([1, 2, 3, 4], ... index=pd.MultiIndex.from_product([['one', 'two'], ... ['a', 'b']])) >>> s one a 1 b 2 two a 3 b 4 dtype: int64 >>> s.unstack(level=-1) a b one 1 2 two 3 4 >>> s.unstack(level=0) one two a 1 3 b 2 4
pandas.reference.api.pandas.series.unstack
pandas.Series.update Series.update(other)[source] Modify Series in place using values from passed Series. Uses non-NA values from passed Series to make updates. Aligns on index. Parameters other:Series, or object coercible into Series Examples >>> s = pd.Series([1, 2, 3]) >>> s.update(pd.Series([4, 5, 6])) >>> s 0 4 1 5 2 6 dtype: int64 >>> s = pd.Series(['a', 'b', 'c']) >>> s.update(pd.Series(['d', 'e'], index=[0, 2])) >>> s 0 d 1 b 2 e dtype: object >>> s = pd.Series([1, 2, 3]) >>> s.update(pd.Series([4, 5, 6, 7, 8])) >>> s 0 4 1 5 2 6 dtype: int64 If other contains NaNs the corresponding values are not updated in the original Series. >>> s = pd.Series([1, 2, 3]) >>> s.update(pd.Series([4, np.nan, 6])) >>> s 0 4 1 2 2 6 dtype: int64 other can also be a non-Series object type that is coercible into a Series >>> s = pd.Series([1, 2, 3]) >>> s.update([4, np.nan, 6]) >>> s 0 4 1 2 2 6 dtype: int64 >>> s = pd.Series([1, 2, 3]) >>> s.update({1: 9}) >>> s 0 1 1 9 2 3 dtype: int64
pandas.reference.api.pandas.series.update
pandas.Series.value_counts Series.value_counts(normalize=False, sort=True, ascending=False, bins=None, dropna=True)[source] Return a Series containing counts of unique values. The resulting object will be in descending order so that the first element is the most frequently-occurring element. Excludes NA values by default. Parameters normalize:bool, default False If True then the object returned will contain the relative frequencies of the unique values. sort:bool, default True Sort by frequencies. ascending:bool, default False Sort in ascending order. bins:int, optional Rather than count values, group them into half-open bins, a convenience for pd.cut, only works with numeric data. dropna:bool, default True Don’t include counts of NaN. Returns Series See also Series.count Number of non-NA elements in a Series. DataFrame.count Number of non-NA elements in a DataFrame. DataFrame.value_counts Equivalent method on DataFrames. Examples >>> index = pd.Index([3, 1, 2, 3, 4, np.nan]) >>> index.value_counts() 3.0 2 1.0 1 2.0 1 4.0 1 dtype: int64 With normalize set to True, returns the relative frequency by dividing all values by the sum of values. >>> s = pd.Series([3, 1, 2, 3, 4, np.nan]) >>> s.value_counts(normalize=True) 3.0 0.4 1.0 0.2 2.0 0.2 4.0 0.2 dtype: float64 bins Bins can be useful for going from a continuous variable to a categorical variable; instead of counting unique occurrences of values, divide the index into the specified number of half-open bins. >>> s.value_counts(bins=3) (0.996, 2.0] 2 (2.0, 3.0] 2 (3.0, 4.0] 1 dtype: int64 dropna With dropna set to False we can also see NaN index values. >>> s.value_counts(dropna=False) 3.0 2 1.0 1 2.0 1 4.0 1 NaN 1 dtype: int64
pandas.reference.api.pandas.series.value_counts
pandas.Series.values propertySeries.values Return Series as ndarray or ndarray-like depending on the dtype. Warning We recommend using Series.array or Series.to_numpy(), depending on whether you need a reference to the underlying data or a NumPy array. Returns numpy.ndarray or ndarray-like See also Series.array Reference to the underlying data. Series.to_numpy A NumPy array representing the underlying data. Examples >>> pd.Series([1, 2, 3]).values array([1, 2, 3]) >>> pd.Series(list('aabc')).values array(['a', 'a', 'b', 'c'], dtype=object) >>> pd.Series(list('aabc')).astype('category').values ['a', 'a', 'b', 'c'] Categories (3, object): ['a', 'b', 'c'] Timezone aware datetime data is converted to UTC: >>> pd.Series(pd.date_range('20130101', periods=3, ... tz='US/Eastern')).values array(['2013-01-01T05:00:00.000000000', '2013-01-02T05:00:00.000000000', '2013-01-03T05:00:00.000000000'], dtype='datetime64[ns]')
pandas.reference.api.pandas.series.values
pandas.Series.var Series.var(axis=None, skipna=True, level=None, ddof=1, numeric_only=None, **kwargs)[source] Return unbiased variance over requested axis. Normalized by N-1 by default. This can be changed using the ddof argument. Parameters axis:{index (0)} skipna:bool, default True Exclude NA/null values. If an entire row/column is NA, the result will be NA. level:int or level name, default None If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a scalar. ddof:int, default 1 Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements. numeric_only:bool, default None Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series. Returns scalar or Series (if level specified) Examples >>> df = pd.DataFrame({'person_id': [0, 1, 2, 3], ... 'age': [21, 25, 62, 43], ... 'height': [1.61, 1.87, 1.49, 2.01]} ... ).set_index('person_id') >>> df age height person_id 0 21 1.61 1 25 1.87 2 62 1.49 3 43 2.01 >>> df.var() age 352.916667 height 0.056367 dtype: float64 Alternatively, ddof=0 can be set to normalize by N instead of N-1: >>> df.var(ddof=0) age 264.687500 height 0.042275 dtype: float64
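Since this page documents Series.var, a short Series-level sketch (illustrative data, not from the upstream docs):
>>> s = pd.Series([1, 2, 3, 4])
>>> s.var()
1.6666666666666667
>>> s.var(ddof=0)
1.25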
pandas.reference.api.pandas.series.var
pandas.Series.view Series.view(dtype=None)[source] Create a new view of the Series. This function will return a new Series with a view of the same underlying values in memory, optionally reinterpreted with a new data type. The new data type must preserve the same size in bytes as to not cause index misalignment. Parameters dtype:data type Data type object or one of their string representations. Returns Series A new Series object as a view of the same data in memory. See also numpy.ndarray.view Equivalent numpy function to create a new view of the same data in memory. Notes Series are instantiated with dtype=float64 by default. While numpy.ndarray.view() will return a view with the same data type as the original array, Series.view() (without specified dtype) will try using float64 and may fail if the original data type size in bytes is not the same. Examples >>> s = pd.Series([-2, -1, 0, 1, 2], dtype='int8') >>> s 0 -2 1 -1 2 0 3 1 4 2 dtype: int8 The 8 bit signed integer representation of -1 is 0b11111111, but the same bytes represent 255 if read as an 8 bit unsigned integer: >>> us = s.view('uint8') >>> us 0 254 1 255 2 0 3 1 4 2 dtype: uint8 The views share the same underlying values: >>> us[0] = 128 >>> s 0 -128 1 -1 2 0 3 1 4 2 dtype: int8
pandas.reference.api.pandas.series.view
pandas.Series.where Series.where(cond, other=NoDefault.no_default, inplace=False, axis=None, level=None, errors=NoDefault.no_default, try_cast=NoDefault.no_default)[source] Replace values where the condition is False. Parameters cond:bool Series/DataFrame, array-like, or callable Where cond is True, keep the original value. Where False, replace with corresponding value from other. If cond is callable, it is computed on the Series/DataFrame and should return boolean Series/DataFrame or array. The callable must not change input Series/DataFrame (though pandas doesn’t check it). other:scalar, Series/DataFrame, or callable Entries where cond is False are replaced with corresponding value from other. If other is callable, it is computed on the Series/DataFrame and should return scalar or Series/DataFrame. The callable must not change input Series/DataFrame (though pandas doesn’t check it). inplace:bool, default False Whether to perform the operation in place on the data. axis:int, default None Alignment axis if needed. level:int, default None Alignment level if needed. errors:str, {‘raise’, ‘ignore’}, default ‘raise’ Note that currently this parameter won’t affect the results and will always coerce to a suitable dtype. ‘raise’ : allow exceptions to be raised. ‘ignore’ : suppress exceptions. On error return original object. try_cast:bool, default None Try to cast the result back to the input type (if possible). Deprecated since version 1.3.0: Manually cast back if necessary. Returns Same type as caller or None if inplace=True. See also DataFrame.mask() Return an object of same shape as self. Notes The where method is an application of the if-then idiom. For each element in the calling DataFrame, if cond is True the element is used; otherwise the corresponding element from the DataFrame other is used. The signature for DataFrame.where() differs from numpy.where(). Roughly df1.where(m, df2) is equivalent to np.where(m, df1, df2). For further details and examples see the where documentation in indexing. Examples >>> s = pd.Series(range(5)) >>> s.where(s > 0) 0 NaN 1 1.0 2 2.0 3 3.0 4 4.0 dtype: float64 >>> s.mask(s > 0) 0 0.0 1 NaN 2 NaN 3 NaN 4 NaN dtype: float64 >>> s.where(s > 1, 10) 0 10 1 10 2 2 3 3 4 4 dtype: int64 >>> s.mask(s > 1, 10) 0 0 1 1 2 10 3 10 4 10 dtype: int64 >>> df = pd.DataFrame(np.arange(10).reshape(-1, 2), columns=['A', 'B']) >>> df A B 0 0 1 1 2 3 2 4 5 3 6 7 4 8 9 >>> m = df % 3 == 0 >>> df.where(m, -df) A B 0 0 -1 1 -2 3 2 -4 -5 3 6 -7 4 -8 9 >>> df.where(m, -df) == np.where(m, df, -df) A B 0 True True 1 True True 2 True True 3 True True 4 True True >>> df.where(m, -df) == df.mask(~m, -df) A B 0 True True 1 True True 2 True True 3 True True 4 True True
pandas.reference.api.pandas.series.where
pandas.Series.xs Series.xs(key, axis=0, level=None, drop_level=True)[source] Return cross-section from the Series/DataFrame. This method takes a key argument to select data at a particular level of a MultiIndex. Parameters key:label or tuple of label Label contained in the index, or partially in a MultiIndex. axis:{0 or ‘index’, 1 or ‘columns’}, default 0 Axis to retrieve cross-section on. level:object, defaults to first n levels (n=1 or len(key)) In case of a key partially contained in a MultiIndex, indicate which levels are used. Levels can be referred by label or position. drop_level:bool, default True If False, returns object with same levels as self. Returns Series or DataFrame Cross-section from the original Series or DataFrame corresponding to the selected index levels. See also DataFrame.loc Access a group of rows and columns by label(s) or a boolean array. DataFrame.iloc Purely integer-location based indexing for selection by position. Notes xs can not be used to set values. MultiIndex Slicers is a generic way to get/set values on any level or levels. It is a superset of xs functionality, see MultiIndex Slicers. Examples >>> d = {'num_legs': [4, 4, 2, 2], ... 'num_wings': [0, 0, 2, 2], ... 'class': ['mammal', 'mammal', 'mammal', 'bird'], ... 'animal': ['cat', 'dog', 'bat', 'penguin'], ... 'locomotion': ['walks', 'walks', 'flies', 'walks']} >>> df = pd.DataFrame(data=d) >>> df = df.set_index(['class', 'animal', 'locomotion']) >>> df num_legs num_wings class animal locomotion mammal cat walks 4 0 dog walks 4 0 bat flies 2 2 bird penguin walks 2 2 Get values at specified index >>> df.xs('mammal') num_legs num_wings animal locomotion cat walks 4 0 dog walks 4 0 bat flies 2 2 Get values at several indexes >>> df.xs(('mammal', 'dog')) num_legs num_wings locomotion walks 4 0 Get values at specified index and level >>> df.xs('cat', level=1) num_legs num_wings class locomotion mammal walks 4 0 Get values at several indexes and levels >>> df.xs(('bird', 'walks'), ... level=[0, 'locomotion']) num_legs num_wings animal penguin 2 2 Get values at specified column and axis >>> df.xs('num_wings', axis=1) class animal locomotion mammal cat walks 0 dog walks 0 bat flies 2 bird penguin walks 2 Name: num_wings, dtype: int64
pandas.reference.api.pandas.series.xs
pandas.set_option pandas.set_option(pat, value)=<pandas._config.config.CallableDynamicDoc object> Sets the value of the specified option. Available options: compute.[use_bottleneck, use_numba, use_numexpr] display.[chop_threshold, colheader_justify, column_space, date_dayfirst, date_yearfirst, encoding, expand_frame_repr, float_format] display.html.[border, table_schema, use_mathjax] display.[large_repr] display.latex.[escape, longtable, multicolumn, multicolumn_format, multirow, repr] display.[max_categories, max_columns, max_colwidth, max_dir_items, max_info_columns, max_info_rows, max_rows, max_seq_items, memory_usage, min_rows, multi_sparse, notebook_repr_html, pprint_nest_depth, precision, show_dimensions] display.unicode.[ambiguous_as_wide, east_asian_width] display.[width] io.excel.ods.[reader, writer] io.excel.xls.[reader, writer] io.excel.xlsb.[reader] io.excel.xlsm.[reader, writer] io.excel.xlsx.[reader, writer] io.hdf.[default_format, dropna_table] io.parquet.[engine] io.sql.[engine] mode.[chained_assignment, data_manager, sim_interactive, string_storage, use_inf_as_na, use_inf_as_null] plotting.[backend] plotting.matplotlib.[register_converters] styler.format.[decimal, escape, formatter, na_rep, precision, thousands] styler.html.[mathjax] styler.latex.[environment, hrules, multicol_align, multirow_align] styler.render.[encoding, max_columns, max_elements, max_rows, repr] styler.sparse.[columns, index] Parameters pat:str Regexp which should match a single option. Note: partial matches are supported for convenience, but unless you use the full option name (e.g. x.y.z.option_name), your code may break in future versions if new options with similar names are introduced. value:object New value of option. Returns None Raises OptionError if no such option exists Notes The available options with their descriptions: compute.use_bottleneck:bool Use the bottleneck library to accelerate if it is installed, the default is True Valid values: False,True [default: True] [currently: True] compute.use_numba:bool Use the numba engine option for select operations if it is installed, the default is False Valid values: False,True [default: False] [currently: False] compute.use_numexpr:bool Use the numexpr library to accelerate computation if it is installed, the default is True Valid values: False,True [default: True] [currently: True] display.chop_threshold:float or None If set to a float value, all float values smaller than the given threshold will be displayed as exactly 0 by repr and friends. [default: None] [currently: None] display.colheader_justify:‘left’/’right’ Controls the justification of column headers. Used by DataFrameFormatter. [default: right] [currently: right] display.column_space No description available. [default: 12] [currently: 12] display.date_dayfirst:boolean When True, prints and parses dates with the day first, eg 20/01/2005 [default: False] [currently: False] display.date_yearfirst:boolean When True, prints and parses dates with the year first, eg 2005/01/20 [default: False] [currently: False] display.encoding:str/unicode Defaults to the detected encoding of the console. Specifies the encoding to be used for strings returned by to_string, these are generally strings meant to be displayed on the console. 
[default: utf-8] [currently: utf-8] display.expand_frame_repr:boolean Whether to print out the full DataFrame repr for wide DataFrames across multiple lines, max_columns is still respected, but the output will wrap-around across multiple “pages” if its width exceeds display.width. [default: True] [currently: True] display.float_format:callable The callable should accept a floating point number and return a string with the desired format of the number. This is used in some places like SeriesFormatter. See formats.format.EngFormatter for an example. [default: None] [currently: None] display.html.border:int A border=value attribute is inserted in the <table> tag for the DataFrame HTML repr. [default: 1] [currently: 1] display.html.table_schema:boolean Whether to publish a Table Schema representation for frontends that support it. (default: False) [default: False] [currently: False] display.html.use_mathjax:boolean When True, Jupyter notebook will process table contents using MathJax, rendering mathematical expressions enclosed by the dollar symbol. (default: True) [default: True] [currently: True] display.large_repr:‘truncate’/’info’ For DataFrames exceeding max_rows/max_cols, the repr (and HTML repr) can show a truncated table (the default from 0.13), or switch to the view from df.info() (the behaviour in earlier versions of pandas). [default: truncate] [currently: truncate] display.latex.escape:bool This specifies if the to_latex method of a Dataframe escapes special characters. Valid values: False,True [default: True] [currently: True] display.latex.longtable:bool This specifies if the to_latex method of a Dataframe uses the longtable format. Valid values: False,True [default: False] [currently: False] display.latex.multicolumn:bool This specifies if the to_latex method of a Dataframe uses multicolumns to pretty-print MultiIndex columns. Valid values: False,True [default: True] [currently: True] display.latex.multicolumn_format:str The alignment specifier (e.g. ‘l’, ‘c’, ‘r’) used for the multicolumn headers when the to_latex method of a Dataframe pretty-prints MultiIndex columns. [default: l] [currently: l] display.latex.multirow:bool This specifies if the to_latex method of a Dataframe uses multirows to pretty-print MultiIndex rows. Valid values: False,True [default: False] [currently: False] display.latex.repr:boolean Whether to produce a latex DataFrame representation for jupyter environments that support it. (default: False) [default: False] [currently: False] display.max_categories:int This sets the maximum number of categories pandas should output when printing out a Categorical or a Series of dtype “category”. [default: 8] [currently: 8] display.max_columns:int If max_cols is exceeded, switch to truncate view. Depending on large_repr, objects are either centrally truncated or printed as a summary view. ‘None’ value means unlimited. In case python/IPython is running in a terminal and large_repr equals ‘truncate’ this can be set to 0 and pandas will auto-detect the width of the terminal and print a truncated object which fits the screen width. The IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible to do correct auto-detection. [default: 0] [currently: 0] display.max_colwidth:int or None The maximum width in characters of a column in the repr of a pandas data structure. When the column overflows, a “…” placeholder is embedded in the output. A ‘None’ value means unlimited. 
[default: 50] [currently: 50] display.max_dir_items:int The number of items that will be added to dir(…). ‘None’ value means unlimited. Because dir is cached, changing this option will not immediately affect already existing dataframes until a column is deleted or added. This is for instance used to suggest columns from a dataframe to tab completion. [default: 100] [currently: 100] display.max_info_columns:int max_info_columns is used in DataFrame.info method to decide if per column information will be printed. [default: 100] [currently: 100] display.max_info_rows:int or None df.info() will usually show null-counts for each column. For large frames this can be quite slow. max_info_rows and max_info_cols limit this null check only to frames with smaller dimensions than specified. [default: 1690785] [currently: 1690785] display.max_rows:int If max_rows is exceeded, switch to truncate view. Depending on large_repr, objects are either centrally truncated or printed as a summary view. ‘None’ value means unlimited. In case python/IPython is running in a terminal and large_repr equals ‘truncate’ this can be set to 0 and pandas will auto-detect the height of the terminal and print a truncated object which fits the screen height. The IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible to do correct auto-detection. [default: 60] [currently: 60] display.max_seq_items:int or None When pretty-printing a long sequence, no more than max_seq_items will be printed. If items are omitted, they will be denoted by the addition of “…” to the resulting string. If set to None, the number of items to be printed is unlimited. [default: 100] [currently: 100] display.memory_usage:bool, string or None This specifies if the memory usage of a DataFrame should be displayed when df.info() is called. Valid values True,False,’deep’ [default: True] [currently: True] display.min_rows:int The number of rows to show in a truncated view (when max_rows is exceeded). Ignored when max_rows is set to None or 0. When set to None, follows the value of max_rows. [default: 10] [currently: 10] display.multi_sparse:boolean “sparsify” MultiIndex display (don’t display repeated elements in outer levels within groups) [default: True] [currently: True] display.notebook_repr_html:boolean When True, IPython notebook will use html representation for pandas objects (if it is available). [default: True] [currently: True] display.pprint_nest_depth:int Controls the number of nested levels to process when pretty-printing [default: 3] [currently: 3] display.precision:int Floating point output precision in terms of number of places after the decimal, for regular formatting as well as scientific notation. Similar to precision in numpy.set_printoptions(). [default: 6] [currently: 6] display.show_dimensions:boolean or ‘truncate’ Whether to print out dimensions at the end of DataFrame repr. If ‘truncate’ is specified, only print out the dimensions if the frame is truncated (e.g. not display all rows and/or columns) [default: truncate] [currently: truncate] display.unicode.ambiguous_as_wide:boolean Whether to use the Unicode East Asian Width to calculate the display text width. Enabling this may affect performance. (default: False) [default: False] [currently: False] display.unicode.east_asian_width:boolean Whether to use the Unicode East Asian Width to calculate the display text width. 
Enabling this may affect performance. (default: False) [default: False] [currently: False] display.width:int Width of the display in characters. In case python/IPython is running in a terminal this can be set to None and pandas will correctly auto-detect the width. Note that the IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible to correctly detect the width. [default: 80] [currently: 80] io.excel.ods.reader:string The default Excel reader engine for ‘ods’ files. Available options: auto, odf. [default: auto] [currently: auto] io.excel.ods.writer:string The default Excel writer engine for ‘ods’ files. Available options: auto, odf. [default: auto] [currently: auto] io.excel.xls.reader:string The default Excel reader engine for ‘xls’ files. Available options: auto, xlrd. [default: auto] [currently: auto] io.excel.xls.writer:string The default Excel writer engine for ‘xls’ files. Available options: auto, xlwt. [default: auto] [currently: auto] (Deprecated, use `` instead.) io.excel.xlsb.reader:string The default Excel reader engine for ‘xlsb’ files. Available options: auto, pyxlsb. [default: auto] [currently: auto] io.excel.xlsm.reader:string The default Excel reader engine for ‘xlsm’ files. Available options: auto, xlrd, openpyxl. [default: auto] [currently: auto] io.excel.xlsm.writer:string The default Excel writer engine for ‘xlsm’ files. Available options: auto, openpyxl. [default: auto] [currently: auto] io.excel.xlsx.reader:string The default Excel reader engine for ‘xlsx’ files. Available options: auto, xlrd, openpyxl. [default: auto] [currently: auto] io.excel.xlsx.writer:string The default Excel writer engine for ‘xlsx’ files. Available options: auto, openpyxl, xlsxwriter. [default: auto] [currently: auto] io.hdf.default_format:format Default format for writing; if None, then put will default to ‘fixed’ and append will default to ‘table’ [default: None] [currently: None] io.hdf.dropna_table:boolean Drop ALL nan rows when appending to a table [default: False] [currently: False] io.parquet.engine:string The default parquet reader/writer engine. Available options: ‘auto’, ‘pyarrow’, ‘fastparquet’, the default is ‘auto’ [default: auto] [currently: auto] io.sql.engine:string The default sql reader/writer engine. Available options: ‘auto’, ‘sqlalchemy’, the default is ‘auto’ [default: auto] [currently: auto] mode.chained_assignment:string Raise an exception, warn, or no action if trying to use chained assignment. The default is warn [default: warn] [currently: warn] mode.data_manager:string Internal data manager type; can be “block” or “array”. Defaults to “block”, unless overridden by the ‘PANDAS_DATA_MANAGER’ environment variable (needs to be set before pandas is imported). [default: block] [currently: block] mode.sim_interactive:boolean Whether to simulate interactive mode for purposes of testing [default: False] [currently: False] mode.string_storage:string The default storage for StringDtype. [default: python] [currently: python] mode.use_inf_as_na:boolean True means treat None, NaN, INF, -INF as NA (old way), False means None and NaN are null, but INF, -INF are not NA (new way). [default: False] [currently: False] mode.use_inf_as_null:boolean use_inf_as_null has been deprecated and will be removed in a future version. Use use_inf_as_na instead. [default: False] [currently: False] (Deprecated, use mode.use_inf_as_na instead.) plotting.backend:str The plotting backend to use. 
The default value is “matplotlib”, the backend provided with pandas. Other backends can be specified by providing the name of the module that implements the backend. [default: matplotlib] [currently: matplotlib] plotting.matplotlib.register_converters:bool or ‘auto’. Whether to register converters with matplotlib’s units registry for dates, times, datetimes, and Periods. Toggling to False will remove the converters, restoring any converters that pandas overwrote. [default: auto] [currently: auto] styler.format.decimal:str The character representation for the decimal separator for floats and complex. [default: .] [currently: .] styler.format.escape:str, optional Whether to escape certain characters according to the given context; html or latex. [default: None] [currently: None] styler.format.formatter:str, callable, dict, optional A formatter object to be used as default within Styler.format. [default: None] [currently: None] styler.format.na_rep:str, optional The string representation for values identified as missing. [default: None] [currently: None] styler.format.precision:int The precision for floats and complex numbers. [default: 6] [currently: 6] styler.format.thousands:str, optional The character representation for thousands separator for floats, int and complex. [default: None] [currently: None] styler.html.mathjax:bool If False will render special CSS classes to table attributes that indicate Mathjax will not be used in Jupyter Notebook. [default: True] [currently: True] styler.latex.environment:str The environment to replace \begin{table}. If “longtable” is used, results in a specific longtable environment format. [default: None] [currently: None] styler.latex.hrules:bool Whether to add horizontal rules on top and bottom and below the headers. [default: False] [currently: False] styler.latex.multicol_align:{“r”, “c”, “l”, “naive-l”, “naive-r”} The specifier for horizontal alignment of sparsified LaTeX multicolumns. Pipe decorators can also be added to non-naive values to draw vertical rules, e.g. “|r” will draw a rule on the left side of right aligned merged cells. [default: r] [currently: r] styler.latex.multirow_align:{“c”, “t”, “b”} The specifier for vertical alignment of sparsified LaTeX multirows. [default: c] [currently: c] styler.render.encoding:str The encoding used for output HTML and LaTeX files. [default: utf-8] [currently: utf-8] styler.render.max_columns:int, optional The maximum number of columns that will be rendered. May still be reduced to satisfy max_elements, which takes precedence. [default: None] [currently: None] styler.render.max_elements:int The maximum number of data-cell (<td>) elements that will be rendered before trimming will occur over columns, rows or both if needed. [default: 262144] [currently: 262144] styler.render.max_rows:int, optional The maximum number of rows that will be rendered. May still be reduced to satisfy max_elements, which takes precedence. [default: None] [currently: None] styler.render.repr:str Determine which output to use in Jupyter Notebook in {“html”, “latex”}. [default: html] [currently: html] styler.sparse.columns:bool Whether to sparsify the display of hierarchical columns. Setting to False will display each explicit level element in a hierarchical key for each column. [default: True] [currently: True] styler.sparse.index:bool Whether to sparsify the display of a hierarchical index. Setting to False will display each explicit level element in a hierarchical key for each row. [default: True] [currently: True]
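A brief usage sketch (not from the upstream docs; option names are taken from the list above, and get_option and option_context are the companion accessors):
>>> import pandas as pd
>>> pd.set_option('display.max_rows', 20)
>>> pd.get_option('display.max_rows')
20
>>> with pd.option_context('display.precision', 2):
...     print(pd.Series([1.23456]))
0    1.23
dtype: float64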
pandas.reference.api.pandas.set_option
pandas.show_versions pandas.show_versions(as_json=False)[source] Provide useful information, important for bug reports. It comprises information about the host operating system, the pandas version, and the versions of other installed related packages. Parameters as_json:str or bool, default False If False, outputs info in a human readable form to the console. If str, it will be considered as a path to a file. Info will be written to that file in JSON format. If True, outputs info in JSON format to the console.
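A minimal sketch (not from the upstream docs; the printed report is environment-dependent and elided here, and the file name is purely illustrative):
>>> pd.show_versions()  # prints host, pandas, and dependency versions to the console
>>> pd.show_versions(as_json='pandas_versions.json')  # writes the same report to a JSON file (illustrative path)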
pandas.reference.api.pandas.show_versions
pandas.SparseDtype classpandas.SparseDtype(dtype=<class 'numpy.float64'>, fill_value=None)[source] Dtype for data stored in SparseArray. This dtype implements the pandas ExtensionDtype interface. Parameters dtype:str, ExtensionDtype, numpy.dtype, type, default numpy.float64 The dtype of the underlying array storing the non-fill value values. fill_value:scalar, optional The scalar value not stored in the SparseArray. By default, this depends on dtype:

dtype        na_value
float        np.nan
int          0
bool         False
datetime64   pd.NaT
timedelta64  pd.NaT

The default value may be overridden by specifying a fill_value. Attributes None Methods None
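A short construction sketch (not from the upstream docs; the reprs shown reflect typical pandas 1.x output):
>>> pd.SparseDtype()
Sparse[float64, nan]
>>> pd.SparseDtype(np.int64)
Sparse[int64, 0]
>>> pd.SparseDtype(float, fill_value=0.0)
Sparse[float64, 0.0]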
pandas.reference.api.pandas.sparsedtype
pandas.StringDtype classpandas.StringDtype(storage=None)[source] Extension dtype for string data. New in version 1.0.0. Warning StringDtype is considered experimental. The implementation and parts of the API may change without warning. In particular, StringDtype.na_value may change to no longer be numpy.nan. Parameters storage:{“python”, “pyarrow”}, optional If not given, the value of pd.options.mode.string_storage. Examples >>> pd.StringDtype() string[python] >>> pd.StringDtype(storage="pyarrow") string[pyarrow] Attributes None Methods None
pandas.reference.api.pandas.stringdtype
pandas.test pandas.test(extra_args=None)[source] Run the pandas test suite using pytest.
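A usage sketch (not from the upstream docs; it assumes extra_args is a list of extra pytest command-line arguments, so treat that as an assumption rather than documented behavior):
>>> pd.test()  # run the full test suite
>>> pd.test(extra_args=['-x'])  # assumed: pass '-x' to pytest to stop on the first failure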
pandas.reference.api.pandas.test
pandas.testing.assert_extension_array_equal pandas.testing.assert_extension_array_equal(left, right, check_dtype=True, index_values=None, check_less_precise=NoDefault.no_default, check_exact=False, rtol=1e-05, atol=1e-08)[source] Check that left and right ExtensionArrays are equal. Parameters left, right:ExtensionArray The two arrays to compare. check_dtype:bool, default True Whether to check if the ExtensionArray dtypes are identical. index_values:numpy.ndarray, default None Optional index (shared by both left and right), used in output. check_less_precise:bool or int, default False Specify comparison precision. Only used when check_exact is False. 5 digits (False) or 3 digits (True) after decimal points are compared. If int, then specify the digits to compare. Deprecated since version 1.1.0: Use rtol and atol instead to define relative/absolute tolerance, respectively. Similar to math.isclose(). check_exact:bool, default False Whether to compare numbers exactly. rtol:float, default 1e-5 Relative tolerance. Only used when check_exact is False. New in version 1.1.0. atol:float, default 1e-8 Absolute tolerance. Only used when check_exact is False. New in version 1.1.0. Notes Missing values are checked separately from valid values. A mask of missing values is computed for each and checked to match. The remaining all-valid values are cast to object dtype and checked. Examples >>> from pandas import testing as tm >>> a = pd.Series([1, 2, 3, 4]) >>> b, c = a.array, a.array >>> tm.assert_extension_array_equal(b, c)
pandas.reference.api.pandas.testing.assert_extension_array_equal
pandas.testing.assert_frame_equal pandas.testing.assert_frame_equal(left, right, check_dtype=True, check_index_type='equiv', check_column_type='equiv', check_frame_type=True, check_less_precise=NoDefault.no_default, check_names=True, by_blocks=False, check_exact=False, check_datetimelike_compat=False, check_categorical=True, check_like=False, check_freq=True, check_flags=True, rtol=1e-05, atol=1e-08, obj='DataFrame')[source] Check that left and right DataFrame are equal. This function is intended to compare two DataFrames and output any differences. It is mostly intended for use in unit tests. Additional parameters allow varying the strictness of the equality checks performed. Parameters left:DataFrame First DataFrame to compare. right:DataFrame Second DataFrame to compare. check_dtype:bool, default True Whether to check the DataFrame dtype is identical. check_index_type:bool or {‘equiv’}, default ‘equiv’ Whether to check the Index class, dtype and inferred_type are identical. check_column_type:bool or {‘equiv’}, default ‘equiv’ Whether to check the columns class, dtype and inferred_type are identical. Is passed as the exact argument of assert_index_equal(). check_frame_type:bool, default True Whether to check the DataFrame class is identical. check_less_precise:bool or int, default False Specify comparison precision. Only used when check_exact is False. 5 digits (False) or 3 digits (True) after decimal points are compared. If int, then specify the digits to compare. When comparing two numbers, if the first number has magnitude less than 1e-5, we compare the two numbers directly and check whether they are equivalent within the specified precision. Otherwise, we compare the ratio of the second number to the first number and check whether it is equivalent to 1 within the specified precision. Deprecated since version 1.1.0: Use rtol and atol instead to define relative/absolute tolerance, respectively. Similar to math.isclose(). check_names:bool, default True Whether to check that the names attribute for both the index and column attributes of the DataFrame is identical. by_blocks:bool, default False Specify how to compare internal data. If False, compare by columns. If True, compare by blocks. check_exact:bool, default False Whether to compare numbers exactly. check_datetimelike_compat:bool, default False Compare datetime-like which is comparable ignoring dtype. check_categorical:bool, default True Whether to compare internal Categorical exactly. check_like:bool, default False If True, ignore the order of index & columns. Note: index labels must match their respective rows (same as in columns) - same labels must be with the same data. check_freq:bool, default True Whether to check the freq attribute on a DatetimeIndex or TimedeltaIndex. New in version 1.1.0. check_flags:bool, default True Whether to check the flags attribute. rtol:float, default 1e-5 Relative tolerance. Only used when check_exact is False. New in version 1.1.0. atol:float, default 1e-8 Absolute tolerance. Only used when check_exact is False. New in version 1.1.0. obj:str, default ‘DataFrame’ Specify object name being compared, internally used to show appropriate assertion message. See also assert_series_equal Equivalent method for asserting Series equality. DataFrame.equals Check DataFrame equality. Examples This example shows comparing two DataFrames that are equal but with columns of differing dtypes. 
>>> from pandas.testing import assert_frame_equal >>> df1 = pd.DataFrame({'a': [1, 2], 'b': [3, 4]}) >>> df2 = pd.DataFrame({'a': [1, 2], 'b': [3.0, 4.0]}) df1 equals itself. >>> assert_frame_equal(df1, df1) df1 differs from df2 as column ‘b’ is of a different type. >>> assert_frame_equal(df1, df2) Traceback (most recent call last): ... AssertionError: Attributes of DataFrame.iloc[:, 1] (column name="b") are different Attribute “dtype” are different [left]: int64 [right]: float64 Ignore differing dtypes in columns with check_dtype. >>> assert_frame_equal(df1, df2, check_dtype=False)
pandas.reference.api.pandas.testing.assert_frame_equal
pandas.testing.assert_index_equal pandas.testing.assert_index_equal(left, right, exact='equiv', check_names=True, check_less_precise=NoDefault.no_default, check_exact=True, check_categorical=True, check_order=True, rtol=1e-05, atol=1e-08, obj='Index')[source] Check that left and right Index are equal. Parameters left:Index right:Index exact:bool or {‘equiv’}, default ‘equiv’ Whether to check the Index class, dtype and inferred_type are identical. If ‘equiv’, then RangeIndex can be substituted for Int64Index as well. check_names:bool, default True Whether to check the names attribute. check_less_precise:bool or int, default False Specify comparison precision. Only used when check_exact is False. 5 digits (False) or 3 digits (True) after decimal points are compared. If int, then specify the digits to compare. Deprecated since version 1.1.0: Use rtol and atol instead to define relative/absolute tolerance, respectively. Similar to math.isclose(). check_exact:bool, default True Whether to compare numbers exactly. check_categorical:bool, default True Whether to compare internal Categorical exactly. check_order:bool, default True Whether to compare the order of index entries as well as their values. If True, both indexes must contain the same elements, in the same order. If False, both indexes must contain the same elements, but in any order. New in version 1.2.0. rtol:float, default 1e-5 Relative tolerance. Only used when check_exact is False. New in version 1.1.0. atol:float, default 1e-8 Absolute tolerance. Only used when check_exact is False. New in version 1.1.0. obj:str, default ‘Index’ Specify object name being compared, internally used to show appropriate assertion message. Examples >>> from pandas import testing as tm >>> a = pd.Index([1, 2, 3]) >>> b = pd.Index([1, 2, 3]) >>> tm.assert_index_equal(a, b)
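An additional sketch of check_order (not from the upstream docs; with check_order=False the assertion passes even though the element order differs):
>>> a = pd.Index([1, 2, 3])
>>> b = pd.Index([3, 2, 1])
>>> tm.assert_index_equal(a, b, check_order=False)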
pandas.reference.api.pandas.testing.assert_index_equal
pandas.testing.assert_series_equal pandas.testing.assert_series_equal(left, right, check_dtype=True, check_index_type='equiv', check_series_type=True, check_less_precise=NoDefault.no_default, check_names=True, check_exact=False, check_datetimelike_compat=False, check_categorical=True, check_category_order=True, check_freq=True, check_flags=True, rtol=1e-05, atol=1e-08, obj='Series', *, check_index=True)[source] Check that left and right Series are equal. Parameters left:Series right:Series check_dtype:bool, default True Whether to check the Series dtype is identical. check_index_type:bool or {‘equiv’}, default ‘equiv’ Whether to check the Index class, dtype and inferred_type are identical. check_series_type:bool, default True Whether to check the Series class is identical. check_less_precise:bool or int, default False Specify comparison precision. Only used when check_exact is False. 5 digits (False) or 3 digits (True) after decimal points are compared. If int, then specify the digits to compare. When comparing two numbers, if the first number has magnitude less than 1e-5, we compare the two numbers directly and check whether they are equivalent within the specified precision. Otherwise, we compare the ratio of the second number to the first number and check whether it is equivalent to 1 within the specified precision. Deprecated since version 1.1.0: Use rtol and atol instead to define relative/absolute tolerance, respectively. Similar to math.isclose(). check_names:bool, default True Whether to check the Series and Index names attribute. check_exact:bool, default False Whether to compare numbers exactly. check_datetimelike_compat:bool, default False Compare datetime-like which is comparable ignoring dtype. check_categorical:bool, default True Whether to compare internal Categorical exactly. check_category_order:bool, default True Whether to compare category order of internal Categoricals. New in version 1.0.2. check_freq:bool, default True Whether to check the freq attribute on a DatetimeIndex or TimedeltaIndex. New in version 1.1.0. check_flags:bool, default True Whether to check the flags attribute. New in version 1.2.0. rtol:float, default 1e-5 Relative tolerance. Only used when check_exact is False. New in version 1.1.0. atol:float, default 1e-8 Absolute tolerance. Only used when check_exact is False. New in version 1.1.0. obj:str, default ‘Series’ Specify object name being compared, internally used to show appropriate assertion message. check_index:bool, default True Whether to check index equivalence. If False, then compare only values. New in version 1.3.0. Examples >>> from pandas import testing as tm >>> a = pd.Series([1, 2, 3, 4]) >>> b = pd.Series([1, 2, 3, 4]) >>> tm.assert_series_equal(a, b)
pandas.reference.api.pandas.testing.assert_series_equal
pandas.Timedelta classpandas.Timedelta(value=<object object>, unit=None, **kwargs) Represents a duration, the difference between two dates or times. Timedelta is the pandas equivalent of python’s datetime.timedelta and is interchangeable with it in most cases. Parameters value:Timedelta, timedelta, np.timedelta64, str, or int unit:str, default ‘ns’ Denote the unit of the input, if input is an integer. Possible values: ‘W’, ‘D’, ‘T’, ‘S’, ‘L’, ‘U’, or ‘N’ ‘days’ or ‘day’ ‘hours’, ‘hour’, ‘hr’, or ‘h’ ‘minutes’, ‘minute’, ‘min’, or ‘m’ ‘seconds’, ‘second’, or ‘sec’ ‘milliseconds’, ‘millisecond’, ‘millis’, or ‘milli’ ‘microseconds’, ‘microsecond’, ‘micros’, or ‘micro’ ‘nanoseconds’, ‘nanosecond’, ‘nanos’, ‘nano’, or ‘ns’. **kwargs Available kwargs: {days, seconds, microseconds, milliseconds, minutes, hours, weeks}. Values for construction in compat with datetime.timedelta. Numpy ints and floats will be coerced to python ints and floats. Notes The constructor may take in either both values of value and unit, or kwargs as above. Either one of them must be used during initialization. The .value attribute is always in ns. If the precision is higher than nanoseconds, the precision of the duration is truncated to nanoseconds. Examples Here we initialize a Timedelta object with both value and unit. >>> td = pd.Timedelta(1, "d") >>> td Timedelta('1 days 00:00:00') Here we initialize the Timedelta object with kwargs. >>> td2 = pd.Timedelta(days=1) >>> td2 Timedelta('1 days 00:00:00') We see that either way we get the same result. Attributes asm8 Return a numpy timedelta64 array scalar view. components Return a components namedtuple-like. days Number of days. delta Return the timedelta in nanoseconds (ns), for internal compatibility. microseconds Number of microseconds (>= 0 and less than 1 second). nanoseconds Return the number of nanoseconds (n), where 0 <= n < 1 microsecond. resolution_string Return a string representing the lowest timedelta resolution. seconds Number of seconds (>= 0 and less than 1 day). freq is_populated value Methods ceil(freq) Return a new Timedelta ceiled to this resolution. floor(freq) Return a new Timedelta floored to this resolution. isoformat Format Timedelta as ISO 8601 Duration like P[n]Y[n]M[n]DT[n]H[n]M[n]S, where the [n]s are replaced by the values. round(freq) Round the Timedelta to the specified resolution. to_numpy Convert the Timedelta to a NumPy timedelta64. to_pytimedelta Convert a pandas Timedelta object into a python datetime.timedelta object. to_timedelta64 Return a numpy.timedelta64 object with 'ns' precision. total_seconds Total seconds in the duration. view Array view compatibility.
pandas.reference.api.pandas.timedelta
pandas.Timedelta.asm8 Timedelta.asm8 Return a numpy timedelta64 array scalar view. Provides access to the array scalar view (i.e. a combination of the value and the units) associated with the numpy.timedelta64().view(), including a 64-bit integer representation of the timedelta in nanoseconds (Python int compatible). Returns numpy timedelta64 array scalar view Array scalar view of the timedelta in nanoseconds. Examples >>> td = pd.Timedelta('1 days 2 min 3 us 42 ns') >>> td.asm8 numpy.timedelta64(86520000003042,'ns') >>> td = pd.Timedelta('2 min 3 s') >>> td.asm8 numpy.timedelta64(123000000000,'ns') >>> td = pd.Timedelta('3 ms 5 us') >>> td.asm8 numpy.timedelta64(3005000,'ns') >>> td = pd.Timedelta(42, unit='ns') >>> td.asm8 numpy.timedelta64(42,'ns')
pandas.reference.api.pandas.timedelta.asm8
pandas.Timedelta.ceil Timedelta.ceil(freq) Return a new Timedelta ceiled to this resolution. Parameters freq:str Frequency string indicating the ceiling resolution.
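A short sketch (not from the upstream docs; illustrative value):
>>> td = pd.Timedelta('1001 ms')
>>> td.ceil('s')
Timedelta('0 days 00:00:02')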
pandas.reference.api.pandas.timedelta.ceil
pandas.Timedelta.components Timedelta.components Return a components namedtuple-like.
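A short sketch (not from the upstream docs; illustrative value):
>>> td = pd.Timedelta('2 days 3 min 42 ns')
>>> td.components
Components(days=2, hours=0, minutes=3, seconds=0, milliseconds=0, microseconds=0, nanoseconds=42)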
pandas.reference.api.pandas.timedelta.components
pandas.Timedelta.days Timedelta.days Number of days.
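A short sketch (not from the upstream docs; illustrative value):
>>> td = pd.Timedelta('4 days 3 min')
>>> td.days
4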
pandas.reference.api.pandas.timedelta.days
pandas.Timedelta.delta Timedelta.delta Return the timedelta in nanoseconds (ns), for internal compatibility. Returns int Timedelta in nanoseconds. Examples >>> td = pd.Timedelta('1 days 42 ns') >>> td.delta 86400000000042 >>> td = pd.Timedelta('3 s') >>> td.delta 3000000000 >>> td = pd.Timedelta('3 ms 5 us') >>> td.delta 3005000 >>> td = pd.Timedelta(42, unit='ns') >>> td.delta 42
pandas.reference.api.pandas.timedelta.delta
pandas.Timedelta.floor Timedelta.floor(freq) Return a new Timedelta floored to this resolution. Parameters freq:str Frequency string indicating the flooring resolution.
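A short sketch (not from the upstream docs; illustrative value; compare with Timedelta.ceil above):
>>> td = pd.Timedelta('1001 ms')
>>> td.floor('s')
Timedelta('0 days 00:00:01')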
pandas.reference.api.pandas.timedelta.floor
pandas.Timedelta.freq Timedelta.freq
pandas.reference.api.pandas.timedelta.freq
pandas.Timedelta.is_populated Timedelta.is_populated
pandas.reference.api.pandas.timedelta.is_populated
pandas.Timedelta.isoformat Timedelta.isoformat() Format Timedelta as ISO 8601 Duration like P[n]Y[n]M[n]DT[n]H[n]M[n]S, where the [n]s are replaced by the values. See https://en.wikipedia.org/wiki/ISO_8601#Durations. Returns str See also Timestamp.isoformat Function is used to convert the given Timestamp object into the ISO format. Notes The longest component is days, whose value may be larger than 365. Every component is always included, even if its value is 0. Pandas uses nanosecond precision, so up to 9 decimal places may be included in the seconds component. Trailing 0’s are removed from the seconds component after the decimal. We do not 0 pad components, so it’s …T5H…, not …T05H… Examples >>> td = pd.Timedelta(days=6, minutes=50, seconds=3, ... milliseconds=10, microseconds=10, nanoseconds=12) >>> td.isoformat() 'P6DT0H50M3.010010012S' >>> pd.Timedelta(hours=1, seconds=10).isoformat() 'P0DT1H0M10S' >>> pd.Timedelta(days=500.5).isoformat() 'P500DT12H0M0S'
pandas.reference.api.pandas.timedelta.isoformat
pandas.Timedelta.max Timedelta.max=Timedelta('106751 days 23:47:16.854775807')
pandas.reference.api.pandas.timedelta.max
pandas.Timedelta.microseconds Timedelta.microseconds Number of microseconds (>= 0 and less than 1 second).
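A short sketch (not from the upstream docs; illustrative values; as with datetime.timedelta, milliseconds are folded into this component):
>>> pd.Timedelta('1 min 3 us').microseconds
3
>>> pd.Timedelta(milliseconds=1, microseconds=500).microseconds
1500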
pandas.reference.api.pandas.timedelta.microseconds
pandas.Timedelta.min Timedelta.min=Timedelta('-106752 days +00:12:43.145224193')
pandas.reference.api.pandas.timedelta.min
pandas.Timedelta.nanoseconds Timedelta.nanoseconds Return the number of nanoseconds (n), where 0 <= n < 1 microsecond. Returns int Number of nanoseconds. See also Timedelta.components Return all attributes with assigned values (i.e. days, hours, minutes, seconds, milliseconds, microseconds, nanoseconds). Examples Using string input >>> td = pd.Timedelta('1 days 2 min 3 us 42 ns') >>> td.nanoseconds 42 Using integer input >>> td = pd.Timedelta(42, unit='ns') >>> td.nanoseconds 42
pandas.reference.api.pandas.timedelta.nanoseconds