pandas.plotting.table pandas.plotting.table(ax, data, rowLabels=None, colLabels=None, **kwargs)[source]
Helper function to convert DataFrame and Series to matplotlib.table. Parameters
ax:Matplotlib axes object
data:DataFrame or Series
Data for table contents. **kwargs
Keyword arguments to be passed to matplotlib.table.table. If rowLabels or colLabels is not specified, data index or column name will be used. Returns
matplotlib table object | pandas.reference.api.pandas.plotting.table |
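A minimal sketch of rendering a DataFrame with pandas.plotting.table, assuming matplotlib is available with the non-interactive "Agg" backend; the file name table.png is illustrative:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so no display is needed
import matplotlib.pyplot as plt
import pandas as pd
from pandas.plotting import table

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
fig, ax = plt.subplots()
ax.axis("off")  # hide the axes so only the table is drawn
# rowLabels/colLabels default to df's index and columns; loc is forwarded
# to matplotlib.table.table via **kwargs
tab = table(ax, df, loc="center")
fig.savefig("table.png")
```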
pandas.qcut pandas.qcut(x, q, labels=None, retbins=False, precision=3, duplicates='raise')[source]
Quantile-based discretization function. Discretize variable into equal-sized buckets based on rank or based on sample quantiles. For example 1000 values for 10 quantiles would produce a Categorical object indicating quantile membership for each data point. Parameters
x:1d ndarray or Series
q:int or list-like of float
Number of quantiles. 10 for deciles, 4 for quartiles, etc. Alternately array of quantiles, e.g. [0, .25, .5, .75, 1.] for quartiles.
labels:array or False, default None
Used as labels for the resulting bins. Must be of the same length as the resulting bins. If False, return only integer indicators of the bins. If True, raises an error.
retbins:bool, optional
Whether to return the (bins, labels) or not. Can be useful if bins is given as a scalar.
precision:int, optional
The precision at which to store and display the bins labels.
duplicates:{default ‘raise’, ‘drop’}, optional
If bin edges are not unique, raise ValueError or drop non-uniques. Returns
out:Categorical or Series or array of integers if labels is False
The return type (Categorical or Series) depends on the input: a Series of type category if input is a Series else Categorical. Bins are represented as categories when categorical data is returned.
bins:ndarray of floats
Returned only if retbins is True. Notes Out of bounds values will be NA in the resulting Categorical object Examples
>>> pd.qcut(range(5), 4)
[(-0.001, 1.0], (-0.001, 1.0], (1.0, 2.0], (2.0, 3.0], (3.0, 4.0]]
Categories (4, interval[float64, right]): [(-0.001, 1.0] < (1.0, 2.0] ...
>>> pd.qcut(range(5), 3, labels=["good", "medium", "bad"])
[good, good, medium, bad, bad]
Categories (3, object): [good < medium < bad]
>>> pd.qcut(range(5), 4, labels=False)
array([0, 0, 1, 2, 3]) | pandas.reference.api.pandas.qcut |
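The retbins and duplicates parameters described above can be sketched as follows; the input data is illustrative:

```python
import pandas as pd

# retbins=True additionally returns the computed bin edges
# (5 edges for 4 quantile bins)
cats, bins = pd.qcut(range(5), 4, retbins=True)

# duplicates='drop' merges non-unique quantile edges instead of raising
# a ValueError: here the 0th and 50th percentiles coincide at 0
out = pd.qcut([0, 0, 0, 0, 1], 2, duplicates="drop")
```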
pandas.RangeIndex classpandas.RangeIndex(start=None, stop=None, step=None, dtype=None, copy=False, name=None)[source]
Immutable Index implementing a monotonic integer range. RangeIndex is a memory-saving special case of Int64Index limited to representing monotonic ranges. Using RangeIndex may in some instances improve computing speed. This is the default index type used by DataFrame and Series when no explicit index is provided by the user. Parameters
start:int (default: 0), range, or other RangeIndex instance
If int and “stop” is not given, interpreted as “stop” instead.
stop:int (default: 0)
step:int (default: 1)
dtype:np.int64
Unused, accepted for homogeneity with other index types.
copy:bool, default False
Unused, accepted for homogeneity with other index types.
name:object, optional
Name to be stored in the index. See also Index
The base pandas Index type. Int64Index
Index of int64 data. Attributes
start The value of the start parameter (0 if this was not supplied).
stop The value of the stop parameter.
step The value of the step parameter (1 if this was not supplied). Methods
from_range(data[, name, dtype]) Create RangeIndex from a range object. | pandas.reference.api.pandas.rangeindex |
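A brief sketch of constructing a RangeIndex directly, receiving one as the default index, and using from_range:

```python
import pandas as pd

idx = pd.RangeIndex(start=0, stop=10, step=2)    # lazily represents 0, 2, 4, 6, 8
df = pd.DataFrame({"x": [1.0, 2.0]})             # default index is a RangeIndex
idx2 = pd.RangeIndex.from_range(range(5))        # from a builtin range object
```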
pandas.RangeIndex.from_range classmethodRangeIndex.from_range(data, name=None, dtype=None)[source]
Create RangeIndex from a range object. Returns
RangeIndex | pandas.reference.api.pandas.rangeindex.from_range |
pandas.RangeIndex.start propertyRangeIndex.start
The value of the start parameter (0 if this was not supplied). | pandas.reference.api.pandas.rangeindex.start |
pandas.RangeIndex.step propertyRangeIndex.step
The value of the step parameter (1 if this was not supplied). | pandas.reference.api.pandas.rangeindex.step |
pandas.RangeIndex.stop propertyRangeIndex.stop
The value of the stop parameter. | pandas.reference.api.pandas.rangeindex.stop |
pandas.read_clipboard pandas.read_clipboard(sep='\\s+', **kwargs)[source]
Read text from clipboard and pass to read_csv. Parameters
sep:str, default ‘\s+’
A string or regex delimiter. The default of ‘\s+’ denotes one or more whitespace characters. **kwargs
See read_csv for the full argument list. Returns
DataFrame
A parsed DataFrame object. | pandas.reference.api.pandas.read_clipboard |
pandas.read_csv pandas.read_csv(filepath_or_buffer, sep=NoDefault.no_default, delimiter=None, header='infer', names=NoDefault.no_default, index_col=None, usecols=None, squeeze=None, prefix=NoDefault.no_default, mangle_dupe_cols=True, dtype=None, engine=None, converters=None, true_values=None, false_values=None, skipinitialspace=False, skiprows=None, skipfooter=0, nrows=None, na_values=None, keep_default_na=True, na_filter=True, verbose=False, skip_blank_lines=True, parse_dates=None, infer_datetime_format=False, keep_date_col=False, date_parser=None, dayfirst=False, cache_dates=True, iterator=False, chunksize=None, compression='infer', thousands=None, decimal='.', lineterminator=None, quotechar='"', quoting=0, doublequote=True, escapechar=None, comment=None, encoding=None, encoding_errors='strict', dialect=None, error_bad_lines=None, warn_bad_lines=None, on_bad_lines=None, delim_whitespace=False, low_memory=True, memory_map=False, float_precision=None, storage_options=None)[source]
Read a comma-separated values (csv) file into DataFrame. Also supports optionally iterating or breaking of the file into chunks. Additional help can be found in the online docs for IO Tools. Parameters
filepath_or_buffer:str, path object or file-like object
Any valid string path is acceptable. The string could be a URL. Valid URL schemes include http, ftp, s3, gs, and file. For file URLs, a host is expected. A local file could be: file://localhost/path/to/table.csv. If you want to pass in a path object, pandas accepts any os.PathLike. By file-like object, we refer to objects with a read() method, such as a file handle (e.g. via builtin open function) or StringIO.
sep:str, default ‘,’
Delimiter to use. If sep is None, the C engine cannot automatically detect the separator, but the Python parsing engine can, meaning the latter will be used and automatically detect the separator by Python’s builtin sniffer tool, csv.Sniffer. In addition, separators longer than 1 character and different from '\s+' will be interpreted as regular expressions and will also force the use of the Python parsing engine. Note that regex delimiters are prone to ignoring quoted data. Regex example: '\r\t'.
delimiter:str, default None
Alias for sep.
header:int, list of int, None, default ‘infer’
Row number(s) to use as the column names, and the start of the data. Default behavior is to infer the column names: if no names are passed the behavior is identical to header=0 and column names are inferred from the first line of the file, if column names are passed explicitly then the behavior is identical to header=None. Explicitly pass header=0 to be able to replace existing names. The header can be a list of integers that specify row locations for a multi-index on the columns e.g. [0,1,3]. Intervening rows that are not specified will be skipped (e.g. 2 in this example is skipped). Note that this parameter ignores commented lines and empty lines if skip_blank_lines=True, so header=0 denotes the first line of data rather than the first line of the file.
names:array-like, optional
List of column names to use. If the file contains a header row, then you should explicitly pass header=0 to override the column names. Duplicates in this list are not allowed.
index_col:int, str, sequence of int / str, or False, optional, default None
Column(s) to use as the row labels of the DataFrame, either given as string name or column index. If a sequence of int / str is given, a MultiIndex is used. Note: index_col=False can be used to force pandas to not use the first column as the index, e.g. when you have a malformed file with delimiters at the end of each line.
usecols:list-like or callable, optional
Return a subset of the columns. If list-like, all elements must either be positional (i.e. integer indices into the document columns) or strings that correspond to column names provided either by the user in names or inferred from the document header row(s). If names are given, the document header row(s) are not taken into account. For example, a valid list-like usecols parameter would be [0, 1, 2] or ['foo', 'bar', 'baz']. Element order is ignored, so usecols=[0, 1] is the same as [1, 0]. To instantiate a DataFrame from data with element order preserved use pd.read_csv(data, usecols=['foo', 'bar'])[['foo', 'bar']] for columns in ['foo', 'bar'] order or pd.read_csv(data, usecols=['foo', 'bar'])[['bar', 'foo']] for ['bar', 'foo'] order. If callable, the callable function will be evaluated against the column names, returning names where the callable function evaluates to True. An example of a valid callable argument would be lambda x: x.upper() in
['AAA', 'BBB', 'DDD']. Using this parameter results in much faster parsing time and lower memory usage.
squeeze:bool, default False
If the parsed data only contains one column then return a Series. Deprecated since version 1.4.0: Append .squeeze("columns") to the call to read_csv to squeeze the data.
prefix:str, optional
Prefix to add to column numbers when no header, e.g. ‘X’ for X0, X1, … Deprecated since version 1.4.0: Use a list comprehension on the DataFrame’s columns after calling read_csv.
mangle_dupe_cols:bool, default True
Duplicate columns will be specified as ‘X’, ‘X.1’, …’X.N’, rather than ‘X’…’X’. Passing in False will cause data to be overwritten if there are duplicate names in the columns.
dtype:Type name or dict of column -> type, optional
Data type for data or columns. E.g. {‘a’: np.float64, ‘b’: np.int32, ‘c’: ‘Int64’} Use str or object together with suitable na_values settings to preserve and not interpret dtype. If converters are specified, they will be applied INSTEAD of dtype conversion.
engine:{‘c’, ‘python’, ‘pyarrow’}, optional
Parser engine to use. The C and pyarrow engines are faster, while the python engine is currently more feature-complete. Multithreading is currently only supported by the pyarrow engine. New in version 1.4.0: The “pyarrow” engine was added as an experimental engine, and some features are unsupported, or may not work correctly, with this engine.
converters:dict, optional
Dict of functions for converting values in certain columns. Keys can either be integers or column labels.
true_values:list, optional
Values to consider as True.
false_values:list, optional
Values to consider as False.
skipinitialspace:bool, default False
Skip spaces after delimiter.
skiprows:list-like, int or callable, optional
Line numbers to skip (0-indexed) or number of lines to skip (int) at the start of the file. If callable, the callable function will be evaluated against the row indices, returning True if the row should be skipped and False otherwise. An example of a valid callable argument would be lambda x: x in [0, 2].
skipfooter:int, default 0
Number of lines at bottom of file to skip (Unsupported with engine=’c’).
nrows:int, optional
Number of rows of file to read. Useful for reading pieces of large files.
na_values:scalar, str, list-like, or dict, optional
Additional strings to recognize as NA/NaN. If dict passed, specific per-column NA values. By default the following values are interpreted as NaN: ‘’, ‘#N/A’, ‘#N/A N/A’, ‘#NA’, ‘-1.#IND’, ‘-1.#QNAN’, ‘-NaN’, ‘-nan’, ‘1.#IND’, ‘1.#QNAN’, ‘<NA>’, ‘N/A’, ‘NA’, ‘NULL’, ‘NaN’, ‘n/a’, ‘nan’, ‘null’.
keep_default_na:bool, default True
Whether or not to include the default NaN values when parsing the data. Depending on whether na_values is passed in, the behavior is as follows: If keep_default_na is True, and na_values are specified, na_values is appended to the default NaN values used for parsing. If keep_default_na is True, and na_values are not specified, only the default NaN values are used for parsing. If keep_default_na is False, and na_values are specified, only the NaN values specified na_values are used for parsing. If keep_default_na is False, and na_values are not specified, no strings will be parsed as NaN. Note that if na_filter is passed in as False, the keep_default_na and na_values parameters will be ignored.
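The keep_default_na/na_values interaction above can be sketched with an in-memory CSV (the column names and values are illustrative):

```python
import io
import pandas as pd

data = "a,b\nNA,foo\nbar,\n"
# keep_default_na=False with explicit na_values: ONLY 'foo' is parsed as NaN;
# default sentinels such as 'NA' and '' survive as plain strings
df = pd.read_csv(io.StringIO(data), keep_default_na=False, na_values=["foo"])
```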
na_filter:bool, default True
Detect missing value markers (empty strings and the value of na_values). In data without any NAs, passing na_filter=False can improve the performance of reading a large file.
verbose:bool, default False
Indicate number of NA values placed in non-numeric columns.
skip_blank_lines:bool, default True
If True, skip over blank lines rather than interpreting as NaN values.
parse_dates:bool or list of int or names or list of lists or dict, default False
The behavior is as follows: boolean. If True -> try parsing the index. list of int or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate date column. list of lists. e.g. If [[1, 3]] -> combine columns 1 and 3 and parse as a single date column. dict, e.g. {‘foo’ : [1, 3]} -> parse columns 1, 3 as date and call result ‘foo’ If a column or index cannot be represented as an array of datetimes, say because of an unparsable value or a mixture of timezones, the column or index will be returned unaltered as an object data type. For non-standard datetime parsing, use pd.to_datetime after pd.read_csv. To parse an index or column with a mixture of timezones, specify date_parser to be a partially-applied pandas.to_datetime() with utc=True. See Parsing a CSV with mixed timezones for more. Note: A fast-path exists for iso8601-formatted dates.
infer_datetime_format:bool, default False
If True and parse_dates is enabled, pandas will attempt to infer the format of the datetime strings in the columns, and if it can be inferred, switch to a faster method of parsing them. In some cases this can increase the parsing speed by 5-10x.
keep_date_col:bool, default False
If True and parse_dates specifies combining multiple columns then keep the original columns.
date_parser:function, optional
Function to use for converting a sequence of string columns to an array of datetime instances. The default uses dateutil.parser.parser to do the conversion. Pandas will try to call date_parser in three different ways, advancing to the next if an exception occurs: 1) Pass one or more arrays (as defined by parse_dates) as arguments; 2) concatenate (row-wise) the string values from the columns defined by parse_dates into a single array and pass that; and 3) call date_parser once for each row using one or more strings (corresponding to the columns defined by parse_dates) as arguments.
dayfirst:bool, default False
DD/MM format dates, international and European format.
cache_dates:bool, default True
If True, use a cache of unique, converted dates to apply the datetime conversion. May produce significant speed-up when parsing duplicate date strings, especially ones with timezone offsets. New in version 0.25.0.
iterator:bool, default False
Return TextFileReader object for iteration or getting chunks with get_chunk(). Changed in version 1.2: TextFileReader is a context manager.
chunksize:int, optional
Return TextFileReader object for iteration. See the IO Tools docs for more information on iterator and chunksize. Changed in version 1.2: TextFileReader is a context manager.
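Chunked reading as described above can be sketched like this; the data is illustrative:

```python
import io
import pandas as pd

data = "x\n" + "\n".join(str(i) for i in range(10)) + "\n"
total = 0
# the returned TextFileReader is a context manager (since 1.2)
# and yields DataFrame chunks of at most `chunksize` rows
with pd.read_csv(io.StringIO(data), chunksize=4) as reader:
    for chunk in reader:
        total += int(chunk["x"].sum())
```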
compression:str or dict, default ‘infer’
For on-the-fly decompression of on-disk data. If ‘infer’ and filepath_or_buffer is path-like, then detect compression from the following extensions: ‘.gz’, ‘.bz2’, ‘.zip’, ‘.xz’, or ‘.zst’ (otherwise no compression). If using ‘zip’, the ZIP file must contain only one data file to be read in. Set to None for no decompression. Can also be a dict with key 'method' set to one of {'zip', 'gzip', 'bz2', 'zstd'} and other key-value pairs are forwarded to zipfile.ZipFile, gzip.GzipFile, bz2.BZ2File, or zstandard.ZstdDecompressor, respectively. As an example, the following could be passed for Zstandard decompression using a custom compression dictionary: compression={'method': 'zstd', 'dict_data': my_compression_dict}. Changed in version 1.4.0: Zstandard support.
thousands:str, optional
Thousands separator.
decimal:str, default ‘.’
Character to recognize as decimal point (e.g. use ‘,’ for European data).
lineterminator:str (length 1), optional
Character to break file into lines. Only valid with C parser.
quotechar:str (length 1), optional
The character used to denote the start and end of a quoted item. Quoted items can include the delimiter and it will be ignored.
quoting:int or csv.QUOTE_* instance, default 0
Control field quoting behavior per csv.QUOTE_* constants. Use one of QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or QUOTE_NONE (3).
doublequote:bool, default True
When quotechar is specified and quoting is not QUOTE_NONE, indicate whether or not to interpret two consecutive quotechar elements INSIDE a field as a single quotechar element.
escapechar:str (length 1), optional
One-character string used to escape other characters.
comment:str, optional
Indicates remainder of line should not be parsed. If found at the beginning of a line, the line will be ignored altogether. This parameter must be a single character. Like empty lines (as long as skip_blank_lines=True), fully commented lines are ignored by the parameter header but not by skiprows. For example, if comment='#', parsing #empty\na,b,c\n1,2,3 with header=0 will result in ‘a,b,c’ being treated as the header.
encoding:str, optional
Encoding to use for UTF when reading/writing (ex. ‘utf-8’). List of Python standard encodings. Changed in version 1.2: When encoding is None, errors="replace" is passed to open(). Otherwise, errors="strict" is passed to open(). This behavior was previously only the case for engine="python". Changed in version 1.3.0: encoding_errors is a new argument. encoding no longer has an influence on how encoding errors are handled.
encoding_errors:str, optional, default “strict”
How encoding errors are treated. List of possible values . New in version 1.3.0.
dialect:str or csv.Dialect, optional
If provided, this parameter will override values (default or not) for the following parameters: delimiter, doublequote, escapechar, skipinitialspace, quotechar, and quoting. If it is necessary to override values, a ParserWarning will be issued. See csv.Dialect documentation for more details.
error_bad_lines:bool, optional, default None
Lines with too many fields (e.g. a csv line with too many commas) will by default cause an exception to be raised, and no DataFrame will be returned. If False, then these “bad lines” will be dropped from the DataFrame that is returned. Deprecated since version 1.3.0: The on_bad_lines parameter should be used instead to specify behavior upon encountering a bad line instead.
warn_bad_lines:bool, optional, default None
If error_bad_lines is False, and warn_bad_lines is True, a warning for each “bad line” will be output. Deprecated since version 1.3.0: The on_bad_lines parameter should be used instead to specify behavior upon encountering a bad line instead.
on_bad_lines:{‘error’, ‘warn’, ‘skip’} or callable, default ‘error’
Specifies what to do upon encountering a bad line (a line with too many fields). Allowed values are :
‘error’, raise an Exception when a bad line is encountered. ‘warn’, raise a warning when a bad line is encountered and skip that line. ‘skip’, skip bad lines without raising or warning when they are encountered.
New in version 1.3.0. New in version 1.4.0: callable, function with signature (bad_line: list[str]) -> list[str] | None that will process a single bad line. bad_line is a list of strings split by the sep. If the function returns None, the bad line will be ignored. If the function returns a new list of strings with more elements than expected, a ParserWarning will be emitted while dropping extra elements. Only supported when engine="python".
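A sketch of the callable form of on_bad_lines (requires pandas 1.4+ and engine="python"); the data is illustrative:

```python
import io
import pandas as pd

data = "a,b\n1,2\n3,4,5\n6,7\n"
# the callable receives the offending line already split by sep and may
# repair it; here we simply truncate to the expected two fields
df = pd.read_csv(io.StringIO(data), engine="python",
                 on_bad_lines=lambda bad: bad[:2])
```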
delim_whitespace:bool, default False
Specifies whether or not whitespace (e.g. ' ' or '\t') will be used as the sep. Equivalent to setting sep='\s+'. If this option is set to True, nothing should be passed in for the delimiter parameter.
low_memory:bool, default True
Internally process the file in chunks, resulting in lower memory use while parsing, but possibly mixed type inference. To ensure no mixed types either set False, or specify the type with the dtype parameter. Note that the entire file is read into a single DataFrame regardless, use the chunksize or iterator parameter to return the data in chunks. (Only valid with C parser).
memory_map:bool, default False
If a filepath is provided for filepath_or_buffer, map the file object directly onto memory and access the data directly from there. Using this option can improve performance because there is no longer any I/O overhead.
float_precision:str, optional
Specifies which converter the C engine should use for floating-point values. The options are None or ‘high’ for the ordinary converter, ‘legacy’ for the original lower precision pandas converter, and ‘round_trip’ for the round-trip converter. Changed in version 1.2.
storage_options:dict, optional
Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc. For HTTP(S) URLs the key-value pairs are forwarded to urllib as header options. For other URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are forwarded to fsspec. Please see fsspec and urllib for more details. New in version 1.2. Returns
DataFrame or TextParser
A comma-separated values (csv) file is returned as two-dimensional data structure with labeled axes. See also DataFrame.to_csv
Write DataFrame to a comma-separated values (csv) file. read_csv
Read a comma-separated values (csv) file into DataFrame. read_fwf
Read a table of fixed-width formatted lines into DataFrame. Examples
>>> pd.read_csv('data.csv') | pandas.reference.api.pandas.read_csv |
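A self-contained sketch combining a few of the parameters above on an in-memory buffer; the column names 'date' and 'value' are made up for illustration:

```python
import io
import pandas as pd

csv_text = "date,value\n2021-01-01,10\n2021-01-02,20\n"
df = pd.read_csv(io.StringIO(csv_text),
                 parse_dates=["date"],       # parse this column as datetimes
                 dtype={"value": "Int64"})   # nullable integer dtype
```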
pandas.read_excel pandas.read_excel(io, sheet_name=0, header=0, names=None, index_col=None, usecols=None, squeeze=None, dtype=None, engine=None, converters=None, true_values=None, false_values=None, skiprows=None, nrows=None, na_values=None, keep_default_na=True, na_filter=True, verbose=False, parse_dates=False, date_parser=None, thousands=None, decimal='.', comment=None, skipfooter=0, convert_float=None, mangle_dupe_cols=True, storage_options=None)[source]
Read an Excel file into a pandas DataFrame. Supports xls, xlsx, xlsm, xlsb, odf, ods and odt file extensions read from a local filesystem or URL. Supports an option to read a single sheet or a list of sheets. Parameters
io:str, bytes, ExcelFile, xlrd.Book, path object, or file-like object
Any valid string path is acceptable. The string could be a URL. Valid URL schemes include http, ftp, s3, and file. For file URLs, a host is expected. A local file could be: file://localhost/path/to/table.xlsx. If you want to pass in a path object, pandas accepts any os.PathLike. By file-like object, we refer to objects with a read() method, such as a file handle (e.g. via builtin open function) or StringIO.
sheet_name:str, int, list, or None, default 0
Strings are used for sheet names. Integers are used in zero-indexed sheet positions (chart sheets do not count as a sheet position). Lists of strings/integers are used to request multiple sheets. Specify None to get all worksheets. Available cases: Defaults to 0: 1st sheet as a DataFrame 1: 2nd sheet as a DataFrame "Sheet1": Load sheet with name “Sheet1” [0, 1, "Sheet5"]: Load first, second and sheet named “Sheet5” as a dict of DataFrame None: All worksheets.
header:int, list of int, default 0
Row (0-indexed) to use for the column labels of the parsed DataFrame. If a list of integers is passed those row positions will be combined into a MultiIndex. Use None if there is no header.
names:array-like, default None
List of column names to use. If file contains no header row, then you should explicitly pass header=None.
index_col:int, list of int, default None
Column (0-indexed) to use as the row labels of the DataFrame. Pass None if there is no such column. If a list is passed, those columns will be combined into a MultiIndex. If a subset of data is selected with usecols, index_col is based on the subset.
usecols:int, str, list-like, or callable default None
If None, then parse all columns. If str, then indicates comma separated list of Excel column letters and column ranges (e.g. “A:E” or “A,C,E:F”). Ranges are inclusive of both sides. If list of int, then indicates list of column numbers to be parsed. If list of string, then indicates list of column names to be parsed. If callable, then evaluate each column name against it and parse the column if the callable returns True. Returns a subset of the columns according to behavior above.
squeeze:bool, default False
If the parsed data only contains one column then return a Series. Deprecated since version 1.4.0: Append .squeeze("columns") to the call to read_excel to squeeze the data.
dtype:Type name or dict of column -> type, default None
Data type for data or columns. E.g. {‘a’: np.float64, ‘b’: np.int32} Use object to preserve data as stored in Excel and not interpret dtype. If converters are specified, they will be applied INSTEAD of dtype conversion.
engine:str, default None
If io is not a buffer or path, this must be set to identify io. Supported engines: “xlrd”, “openpyxl”, “odf”, “pyxlsb”. Engine compatibility : “xlrd” supports old-style Excel files (.xls). “openpyxl” supports newer Excel file formats. “odf” supports OpenDocument file formats (.odf, .ods, .odt). “pyxlsb” supports Binary Excel files. Changed in version 1.2.0: The engine xlrd now only supports old-style .xls files. When engine=None, the following logic will be used to determine the engine: If path_or_buffer is an OpenDocument format (.odf, .ods, .odt), then odf will be used. Otherwise if path_or_buffer is an xls format, xlrd will be used.
Otherwise if path_or_buffer is in xlsb format, pyxlsb will be used. New in version 1.3.0.
Otherwise openpyxl will be used. Changed in version 1.3.0.
converters:dict, default None
Dict of functions for converting values in certain columns. Keys can either be integers or column labels, values are functions that take one input argument, the Excel cell content, and return the transformed content.
true_values:list, default None
Values to consider as True.
false_values:list, default None
Values to consider as False.
skiprows:list-like, int, or callable, optional
Line numbers to skip (0-indexed) or number of lines to skip (int) at the start of the file. If callable, the callable function will be evaluated against the row indices, returning True if the row should be skipped and False otherwise. An example of a valid callable argument would be lambda
x: x in [0, 2].
nrows:int, default None
Number of rows to parse.
na_values:scalar, str, list-like, or dict, default None
Additional strings to recognize as NA/NaN. If dict passed, specific per-column NA values. By default the following values are interpreted as NaN: ‘’, ‘#N/A’, ‘#N/A N/A’, ‘#NA’, ‘-1.#IND’, ‘-1.#QNAN’, ‘-NaN’, ‘-nan’, ‘1.#IND’, ‘1.#QNAN’, ‘<NA>’, ‘N/A’, ‘NA’, ‘NULL’, ‘NaN’, ‘n/a’, ‘nan’, ‘null’.
keep_default_na:bool, default True
Whether or not to include the default NaN values when parsing the data. Depending on whether na_values is passed in, the behavior is as follows: If keep_default_na is True, and na_values are specified, na_values is appended to the default NaN values used for parsing. If keep_default_na is True, and na_values are not specified, only the default NaN values are used for parsing. If keep_default_na is False, and na_values are specified, only the NaN values specified na_values are used for parsing. If keep_default_na is False, and na_values are not specified, no strings will be parsed as NaN. Note that if na_filter is passed in as False, the keep_default_na and na_values parameters will be ignored.
na_filter:bool, default True
Detect missing value markers (empty strings and the value of na_values). In data without any NAs, passing na_filter=False can improve the performance of reading a large file.
verbose:bool, default False
Indicate number of NA values placed in non-numeric columns.
parse_dates:bool, list-like, or dict, default False
The behavior is as follows: bool. If True -> try parsing the index. list of int or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate date column. list of lists. e.g. If [[1, 3]] -> combine columns 1 and 3 and parse as a single date column. dict, e.g. {‘foo’ : [1, 3]} -> parse columns 1, 3 as date and call result ‘foo’ If a column or index contains an unparsable date, the entire column or index will be returned unaltered as an object data type. If you don’t want to parse some cells as date just change their type in Excel to “Text”. For non-standard datetime parsing, use pd.to_datetime after pd.read_excel. Note: A fast-path exists for iso8601-formatted dates.
date_parser:function, optional
Function to use for converting a sequence of string columns to an array of datetime instances. The default uses dateutil.parser.parser to do the conversion. Pandas will try to call date_parser in three different ways, advancing to the next if an exception occurs: 1) Pass one or more arrays (as defined by parse_dates) as arguments; 2) concatenate (row-wise) the string values from the columns defined by parse_dates into a single array and pass that; and 3) call date_parser once for each row using one or more strings (corresponding to the columns defined by parse_dates) as arguments.
thousands:str, default None
Thousands separator for parsing string columns to numeric. Note that this parameter is only necessary for columns stored as TEXT in Excel, any numeric columns will automatically be parsed, regardless of display format.
decimal:str, default ‘.’
Character to recognize as decimal point for parsing string columns to numeric. Note that this parameter is only necessary for columns stored as TEXT in Excel, any numeric columns will automatically be parsed, regardless of display format.(e.g. use ‘,’ for European data). New in version 1.4.0.
comment:str, default None
Comments out remainder of line. Pass a character or characters to this argument to indicate comments in the input file. Any data between the comment string and the end of the current line is ignored.
skipfooter:int, default 0
Rows at the end to skip (0-indexed).
convert_float:bool, default True
Convert integral floats to int (i.e., 1.0 –> 1). If False, all numeric data will be read in as floats: Excel stores all numbers as floats internally. Deprecated since version 1.3.0: convert_float will be removed in a future version.
mangle_dupe_cols:bool, default True
Duplicate columns will be specified as ‘X’, ‘X.1’, …’X.N’, rather than ‘X’…’X’. Passing in False will cause data to be overwritten if there are duplicate names in the columns.
storage_options:dict, optional
Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc., if using a URL that will be parsed by fsspec, e.g., starting “s3://”, “gcs://”. An error will be raised if providing this argument with a local path or a file-like buffer. See the fsspec and backend storage implementation docs for the set of allowed keys and values. New in version 1.2.0. Returns
DataFrame or dict of DataFrames
DataFrame from the passed in Excel file. See notes in sheet_name argument for more information on when a dict of DataFrames is returned. See also DataFrame.to_excel
Write DataFrame to an Excel file. DataFrame.to_csv
Write DataFrame to a comma-separated values (csv) file. read_csv
Read a comma-separated values (csv) file into DataFrame. read_fwf
Read a table of fixed-width formatted lines into DataFrame. Examples The file can be read using the file name as string or an open file object:
>>> pd.read_excel('tmp.xlsx', index_col=0)
Name Value
0 string1 1
1 string2 2
2 #Comment 3
>>> pd.read_excel(open('tmp.xlsx', 'rb'),
... sheet_name='Sheet3')
Unnamed: 0 Name Value
0 0 string1 1
1 1 string2 2
2 2 #Comment 3
Index and header can be specified via the index_col and header arguments
>>> pd.read_excel('tmp.xlsx', index_col=None, header=None)
0 1 2
0 NaN Name Value
1 0.0 string1 1
2 1.0 string2 2
3 2.0 #Comment 3
Column types are inferred but can be explicitly specified
>>> pd.read_excel('tmp.xlsx', index_col=0,
... dtype={'Name': str, 'Value': float})
Name Value
0 string1 1.0
1 string2 2.0
2 #Comment 3.0
True, False, and NA values, and thousands separators have defaults, but can be explicitly specified, too. Supply the values you would like as strings or lists of strings!
>>> pd.read_excel('tmp.xlsx', index_col=0,
... na_values=['string1', 'string2'])
Name Value
0 NaN 1
1 NaN 2
2 #Comment 3
Comment lines in the Excel input file can be skipped using the comment kwarg
>>> pd.read_excel('tmp.xlsx', index_col=0, comment='#')
Name Value
0 string1 1.0
1 string2 2.0
2 None NaN | pandas.reference.api.pandas.read_excel |
pandas.read_feather pandas.read_feather(path, columns=None, use_threads=True, storage_options=None)[source]
Load a feather-format object from the file path. Parameters
path:str, path object, or file-like object
String, path object (implementing os.PathLike[str]), or file-like object implementing a binary read() function. The string could be a URL. Valid URL schemes include http, ftp, s3, and file. For file URLs, a host is expected. A local file could be: file://localhost/path/to/table.feather.
columns:sequence, default None
If not provided, all columns are read.
use_threads:bool, default True
Whether to parallelize reading using multiple threads.
storage_options:dict, optional
Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc. For HTTP(S) URLs the key-value pairs are forwarded to urllib as header options. For other URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are forwarded to fsspec. Please see fsspec and urllib for more details. New in version 1.2.0. Returns
type of object stored in file | pandas.reference.api.pandas.read_feather |
pandas.read_fwf pandas.read_fwf(filepath_or_buffer, colspecs='infer', widths=None, infer_nrows=100, **kwds)[source]
Read a table of fixed-width formatted lines into DataFrame. Also supports optionally iterating or breaking of the file into chunks. Additional help can be found in the online docs for IO Tools. Parameters
filepath_or_buffer:str, path object, or file-like object
String, path object (implementing os.PathLike[str]), or file-like object implementing a text read() function. The string could be a URL. Valid URL schemes include http, ftp, s3, and file. For file URLs, a host is expected. A local file could be: file://localhost/path/to/table.csv.
colspecs:list of tuple (int, int) or ‘infer’, optional
A list of tuples giving the extents of the fixed-width fields of each line as half-open intervals (i.e., [from, to)). String value ‘infer’ can be used to instruct the parser to try detecting the column specifications from the first 100 rows of the data which are not being skipped via skiprows (default=’infer’).
widths:list of int, optional
A list of field widths which can be used instead of ‘colspecs’ if the intervals are contiguous.
infer_nrows:int, default 100
The number of rows to consider when letting the parser determine the colspecs.
**kwds:optional
Optional keyword arguments can be passed to TextFileReader. Returns
DataFrame or TextFileReader
A comma-separated values (csv) file is returned as two-dimensional data structure with labeled axes. See also DataFrame.to_csv
Write DataFrame to a comma-separated values (csv) file. read_csv
Read a comma-separated values (csv) file into DataFrame. Examples
>>> pd.read_fwf('data.csv') | pandas.reference.api.pandas.read_fwf |
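As a minimal, self-contained sketch of the colspecs and widths parameters (the data and column extents below are invented for illustration), an in-memory buffer can stand in for a file:

```python
import io
import pandas as pd

# Hypothetical fixed-width text; extents chosen to match the layout below.
data = (
    "id    name      value\n"
    "1     alpha      1.50\n"
    "2     beta      12.00\n"
)

# Explicit half-open column extents: [0, 6), [6, 16), [16, 21).
df = pd.read_fwf(io.StringIO(data), colspecs=[(0, 6), (6, 16), (16, 21)])

# Contiguous fields can equivalently be given as widths instead of colspecs.
df2 = pd.read_fwf(io.StringIO(data), widths=[6, 10, 5])
```

Because the fields here are contiguous, both calls parse the same three columns; leading and trailing whitespace within each field is stripped.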
pandas.read_gbq pandas.read_gbq(query, project_id=None, index_col=None, col_order=None, reauth=False, auth_local_webserver=False, dialect=None, location=None, configuration=None, credentials=None, use_bqstorage_api=None, max_results=None, progress_bar_type=None)[source]
Load data from Google BigQuery. This function requires the pandas-gbq package. See the How to authenticate with Google BigQuery guide for authentication instructions. Parameters
query:str
SQL-like query to return data values.
project_id:str, optional
Google BigQuery Account project ID. Optional when available from the environment.
index_col:str, optional
Name of result column to use for index in results DataFrame.
col_order:list(str), optional
List of BigQuery column names in the desired order for results DataFrame.
reauth:bool, default False
Force Google BigQuery to re-authenticate the user. This is useful if multiple accounts are used.
auth_local_webserver:bool, default False
Use the local webserver flow instead of the console flow when getting user credentials. New in version 0.2.0 of pandas-gbq.
dialect:str, default ‘legacy’
Note: The default value is changing to ‘standard’ in a future version. SQL syntax dialect to use. Value can be one of: 'legacy'
Use BigQuery’s legacy SQL dialect. For more information see BigQuery Legacy SQL Reference. 'standard'
Use BigQuery’s standard SQL, which is compliant with the SQL 2011 standard. For more information see BigQuery Standard SQL Reference.
location:str, optional
Location where the query job should run. See the BigQuery locations documentation for a list of available locations. The location must match that of any datasets used in the query. New in version 0.5.0 of pandas-gbq.
configuration:dict, optional
Query config parameters for job processing. For example:
configuration = {'query': {'useQueryCache': False}}
For more information see BigQuery REST API Reference.
credentials:google.auth.credentials.Credentials, optional
Credentials for accessing Google APIs. Use this parameter to override default credentials, such as to use Compute Engine google.auth.compute_engine.Credentials or Service Account google.oauth2.service_account.Credentials directly. New in version 0.8.0 of pandas-gbq.
use_bqstorage_api:bool, default False
Use the BigQuery Storage API to download query results quickly, but at an increased cost. To use this API, first enable it in the Cloud Console. You must also have the bigquery.readsessions.create permission on the project you are billing queries to. This feature requires version 0.10.0 or later of the pandas-gbq package. It also requires the google-cloud-bigquery-storage and fastavro packages. New in version 0.25.0.
max_results:int, optional
If set, limit the maximum number of rows to fetch from the query results. New in version 0.12.0 of pandas-gbq. New in version 1.1.0.
progress_bar_type:Optional, str
If set, use the tqdm library to display a progress bar while the data downloads. Install the tqdm package to use this feature. Possible values of progress_bar_type include: None
No progress bar. 'tqdm'
Use the tqdm.tqdm() function to print a progress bar to sys.stderr. 'tqdm_notebook'
Use the tqdm.tqdm_notebook() function to display a progress bar as a Jupyter notebook widget. 'tqdm_gui'
Use the tqdm.tqdm_gui() function to display a progress bar as a graphical dialog box. Note that this feature requires version 0.12.0 or later of the pandas-gbq package, and it requires the tqdm package. Unlike pandas-gbq, the default here is None. New in version 1.0.0. Returns
df: DataFrame
DataFrame representing results of query. See also pandas_gbq.read_gbq
This function in the pandas-gbq library. DataFrame.to_gbq
Write a DataFrame to Google BigQuery. | pandas.reference.api.pandas.read_gbq |
pandas.read_hdf pandas.read_hdf(path_or_buf, key=None, mode='r', errors='strict', where=None, start=None, stop=None, columns=None, iterator=False, chunksize=None, **kwargs)[source]
Read from the store, close it if we opened it. Retrieve pandas object stored in file, optionally based on where criteria. Warning Pandas uses PyTables for reading and writing HDF5 files, which allows serializing object-dtype data with pickle when using the “fixed” format. Loading pickled data received from untrusted sources can be unsafe. See: https://docs.python.org/3/library/pickle.html for more. Parameters
path_or_buf:str, path object, pandas.HDFStore
Any valid string path is acceptable. Only supports the local file system, remote URLs and file-like objects are not supported. If you want to pass in a path object, pandas accepts any os.PathLike. Alternatively, pandas accepts an open pandas.HDFStore object.
key:object, optional
The group identifier in the store. Can be omitted if the HDF file contains a single pandas object.
mode:{‘r’, ‘r+’, ‘a’}, default ‘r’
Mode to use when opening the file. Ignored if path_or_buf is a pandas.HDFStore. Default is ‘r’.
errors:str, default ‘strict’
Specifies how encoding and decoding errors are to be handled. See the errors argument for open() for a full list of options.
where:list, optional
A list of Term (or convertible) objects.
start:int, optional
Row number to start selection.
stop:int, optional
Row number to stop selection.
columns:list, optional
A list of columns names to return.
iterator:bool, optional
Return an iterator object.
chunksize:int, optional
Number of rows to include in an iteration when using an iterator. **kwargs
Additional keyword arguments passed to HDFStore. Returns
item:object
The selected object. Return type depends on the object stored. See also DataFrame.to_hdf
Write a HDF file from a DataFrame. HDFStore
Low-level access to HDF files. Examples
>>> df = pd.DataFrame([[1, 1.0, 'a']], columns=['x', 'y', 'z'])
>>> df.to_hdf('./store.h5', 'data')
>>> reread = pd.read_hdf('./store.h5') | pandas.reference.api.pandas.read_hdf |
pandas.read_html pandas.read_html(io, match='.+', flavor=None, header=None, index_col=None, skiprows=None, attrs=None, parse_dates=False, thousands=',', encoding=None, decimal='.', converters=None, na_values=None, keep_default_na=True, displayed_only=True)[source]
Read HTML tables into a list of DataFrame objects. Parameters
io:str, path object, or file-like object
String, path object (implementing os.PathLike[str]), or file-like object implementing a string read() function. The string can represent a URL or the HTML itself. Note that lxml only accepts the http, ftp and file url protocols. If you have a URL that starts with 'https' you might try removing the 's'.
match:str or compiled regular expression, optional
The set of tables containing text matching this regex or string will be returned. Unless the HTML is extremely simple you will probably need to pass a non-empty string here. Defaults to ‘.+’ (match any non-empty string). The default value will return all tables contained on a page. This value is converted to a regular expression so that there is consistent behavior between Beautiful Soup and lxml.
flavor:str, optional
The parsing engine to use. ‘bs4’ and ‘html5lib’ are synonymous with each other, they are both there for backwards compatibility. The default of None tries to use lxml to parse and if that fails it falls back on bs4 + html5lib.
header:int or list-like, optional
The row (or list of rows for a MultiIndex) to use to make the columns headers.
index_col:int or list-like, optional
The column (or list of columns) to use to create the index.
skiprows:int, list-like or slice, optional
Number of rows to skip after parsing the column integer. 0-based. If a sequence of integers or a slice is given, will skip the rows indexed by that sequence. Note that a single element sequence means ‘skip the nth row’ whereas an integer means ‘skip n rows’.
attrs:dict, optional
This is a dictionary of attributes that you can pass to use to identify the table in the HTML. These are not checked for validity before being passed to lxml or Beautiful Soup. However, these attributes must be valid HTML table attributes to work correctly. For example,
attrs = {'id': 'table'}
is a valid attribute dictionary because the ‘id’ HTML tag attribute is a valid HTML attribute for any HTML tag as per this document.
attrs = {'asdf': 'table'}
is not a valid attribute dictionary because ‘asdf’ is not a valid HTML attribute even if it is a valid XML attribute. Valid HTML 4.01 table attributes can be found here. A working draft of the HTML 5 spec can be found here. It contains the latest information on table attributes for the modern web.
parse_dates:bool, optional
See read_csv() for more details.
thousands:str, optional
Separator to use to parse thousands. Defaults to ','.
encoding:str, optional
The encoding used to decode the web page. Defaults to None. None preserves the previous encoding behavior, which depends on the underlying parser library (e.g., the parser library will try to use the encoding provided by the document).
decimal:str, default ‘.’
Character to recognize as decimal point (e.g. use ‘,’ for European data).
converters:dict, default None
Dict of functions for converting values in certain columns. Keys can either be integers or column labels, values are functions that take one input argument, the cell (not column) content, and return the transformed content.
na_values:iterable, default None
Custom NA values.
keep_default_na:bool, default True
If na_values are specified and keep_default_na is False the default NaN values are overridden, otherwise they’re appended to.
displayed_only:bool, default True
Whether elements with “display: none” should be parsed. Returns
dfs
A list of DataFrames. See also read_csv
Read a comma-separated values (csv) file into DataFrame. Notes Before using this function you should read the gotchas about the HTML parsing libraries. Expect to do some cleanup after you call this function. For example, you might need to manually assign column names if the column names are converted to NaN when you pass the header=0 argument. We try to assume as little as possible about the structure of the table and push the idiosyncrasies of the HTML contained in the table to the user. This function searches for <table> elements and only for <tr> and <th> rows and <td> elements within each <tr> or <th> element in the table. <td> stands for “table data”. This function attempts to properly handle colspan and rowspan attributes. If the function has a <thead> argument, it is used to construct the header, otherwise the function attempts to find the header within the body (by putting rows with only <th> elements into the header). Similar to read_csv() the header argument is applied after skiprows is applied. This function will always return a list of DataFrame or it will fail, e.g., it will not return an empty list. Examples See the read_html documentation in the IO section of the docs for some examples of reading in HTML tables. | pandas.reference.api.pandas.read_html |
pandas.read_json pandas.read_json(path_or_buf=None, orient=None, typ='frame', dtype=None, convert_axes=None, convert_dates=True, keep_default_dates=True, numpy=False, precise_float=False, date_unit=None, encoding=None, encoding_errors='strict', lines=False, chunksize=None, compression='infer', nrows=None, storage_options=None)[source]
Convert a JSON string to pandas object. Parameters
path_or_buf:a valid JSON str, path object or file-like object
Any valid string path is acceptable. The string could be a URL. Valid URL schemes include http, ftp, s3, and file. For file URLs, a host is expected. A local file could be: file://localhost/path/to/table.json. If you want to pass in a path object, pandas accepts any os.PathLike. By file-like object, we refer to objects with a read() method, such as a file handle (e.g. via builtin open function) or StringIO.
orient:str
Indication of expected JSON string format. Compatible JSON strings can be produced by to_json() with a corresponding orient value. The set of possible orients is: 'split' : dict like {index -> [index], columns -> [columns], data -> [values]} 'records' : list like [{column -> value}, ... , {column -> value}] 'index' : dict like {index -> {column -> value}} 'columns' : dict like {column -> {index -> value}} 'values' : just the values array The allowed and default values depend on the value of the typ parameter.
when typ == 'series', allowed orients are {'split','records','index'}; the default is 'index'. The Series index must be unique for orient 'index'.
when typ == 'frame', allowed orients are {'split','records','index','columns','values','table'}; the default is 'columns'. The DataFrame index must be unique for orients 'index' and 'columns'. The DataFrame columns must be unique for orients 'index', 'columns', and 'records'.
typ:{‘frame’, ‘series’}, default ‘frame’
The type of object to recover.
dtype:bool or dict, default None
If True, infer dtypes; if a dict of column to dtype, then use those; if False, then don’t infer dtypes at all, applies only to the data. For all orient values except 'table', default is True. Changed in version 0.25.0: Not applicable for orient='table'.
convert_axes:bool, default None
Try to convert the axes to the proper dtypes. For all orient values except 'table', default is True. Changed in version 0.25.0: Not applicable for orient='table'.
convert_dates:bool or list of str, default True
If True then default datelike columns may be converted (depending on keep_default_dates). If False, no dates will be converted. If a list of column names, then those columns will be converted and default datelike columns may also be converted (depending on keep_default_dates).
keep_default_dates:bool, default True
If parsing dates (convert_dates is not False), then try to parse the default datelike columns. A column label is datelike if it ends with '_at', it ends with '_time', it begins with 'timestamp', it is 'modified', or it is 'date'.
numpy:bool, default False
Direct decoding to numpy arrays. Supports numeric data only, but non-numeric column and index labels are supported. Note also that the JSON ordering MUST be the same for each term if numpy=True. Deprecated since version 1.0.0.
precise_float:bool, default False
Set to enable usage of higher precision (strtod) function when decoding string to double values. Default (False) is to use fast but less precise builtin functionality.
date_unit:str, default None
The timestamp unit to detect if converting dates. The default behaviour is to try and detect the correct precision, but if this is not desired then pass one of ‘s’, ‘ms’, ‘us’ or ‘ns’ to force parsing only seconds, milliseconds, microseconds or nanoseconds respectively.
encoding:str, default is ‘utf-8’
The encoding to use to decode py3 bytes.
encoding_errors:str, optional, default “strict”
How encoding errors are treated. List of possible values . New in version 1.3.0.
lines:bool, default False
Read the file as a json object per line.
chunksize:int, optional
Return JsonReader object for iteration. See the line-delimited json docs for more information on chunksize. This can only be passed if lines=True. If this is None, the file will be read into memory all at once. Changed in version 1.2: JsonReader is a context manager.
compression:str or dict, default ‘infer’
For on-the-fly decompression of on-disk data. If ‘infer’ and ‘path_or_buf’ is path-like, then detect compression from the following extensions: ‘.gz’, ‘.bz2’, ‘.zip’, ‘.xz’, or ‘.zst’ (otherwise no compression). If using ‘zip’, the ZIP file must contain only one data file to be read in. Set to None for no decompression. Can also be a dict with key 'method' set to one of {'zip', 'gzip', 'bz2', 'zstd'} and other key-value pairs are forwarded to zipfile.ZipFile, gzip.GzipFile, bz2.BZ2File, or zstandard.ZstdDecompressor, respectively. As an example, the following could be passed for Zstandard decompression using a custom compression dictionary: compression={'method': 'zstd', 'dict_data': my_compression_dict}. Changed in version 1.4.0: Zstandard support.
nrows:int, optional
The number of lines to read from the line-delimited JSON file. This can only be passed if lines=True. If this is None, all the rows will be returned. New in version 1.1.
storage_options:dict, optional
Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc. For HTTP(S) URLs the key-value pairs are forwarded to urllib as header options. For other URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are forwarded to fsspec. Please see fsspec and urllib for more details. New in version 1.2.0. Returns
Series or DataFrame
The type returned depends on the value of typ. See also DataFrame.to_json
Convert a DataFrame to a JSON string. Series.to_json
Convert a Series to a JSON string. json_normalize
Normalize semi-structured JSON data into a flat table. Notes Specific to orient='table', if a DataFrame with a literal Index name of index gets written with to_json(), the subsequent read operation will incorrectly set the Index name to None. This is because index is also used by DataFrame.to_json() to denote a missing Index name, and the subsequent read_json() operation cannot distinguish between the two. The same limitation is encountered with a MultiIndex and any names beginning with 'level_'. Examples
>>> df = pd.DataFrame([['a', 'b'], ['c', 'd']],
... index=['row 1', 'row 2'],
... columns=['col 1', 'col 2'])
Encoding/decoding a Dataframe using 'split' formatted JSON:
>>> df.to_json(orient='split')
'{"columns":["col 1","col 2"],"index":["row 1","row 2"],"data":[["a","b"],["c","d"]]}'
>>> pd.read_json(_, orient='split')
col 1 col 2
row 1 a b
row 2 c d
Encoding/decoding a Dataframe using 'index' formatted JSON:
>>> df.to_json(orient='index')
'{"row 1":{"col 1":"a","col 2":"b"},"row 2":{"col 1":"c","col 2":"d"}}'
>>> pd.read_json(_, orient='index')
col 1 col 2
row 1 a b
row 2 c d
Encoding/decoding a Dataframe using 'records' formatted JSON. Note that index labels are not preserved with this encoding.
>>> df.to_json(orient='records')
'[{"col 1":"a","col 2":"b"},{"col 1":"c","col 2":"d"}]'
>>> pd.read_json(_, orient='records')
col 1 col 2
0 a b
1 c d
Encoding with Table Schema
>>> df.to_json(orient='table')
'{"schema":{"fields":[{"name":"index","type":"string"},{"name":"col 1","type":"string"},{"name":"col 2","type":"string"}],"primaryKey":["index"],"pandas_version":"1.4.0"},"data":[{"index":"row 1","col 1":"a","col 2":"b"},{"index":"row 2","col 1":"c","col 2":"d"}]}' | pandas.reference.api.pandas.read_json |
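The lines and chunksize parameters described above can be sketched together; this is a minimal example with invented line-delimited JSON, using io.StringIO in place of a file path:

```python
import io
import pandas as pd

# Hypothetical line-delimited JSON: one object per line.
jsonl = '{"a": 1, "b": "x"}\n{"a": 2, "b": "y"}\n{"a": 3, "b": "z"}\n'

# lines=True reads each line as a separate record.
df = pd.read_json(io.StringIO(jsonl), lines=True)

# chunksize returns a JsonReader (a context manager since 1.2) that
# yields DataFrames of at most `chunksize` rows each.
chunks = []
with pd.read_json(io.StringIO(jsonl), lines=True, chunksize=2) as reader:
    for chunk in reader:
        chunks.append(chunk)
```

Reading in chunks keeps memory bounded for large files, at the cost of processing the result incrementally rather than as a single DataFrame.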
pandas.read_orc pandas.read_orc(path, columns=None, **kwargs)[source]
Load an ORC object from the file path, returning a DataFrame. New in version 1.0.0. Parameters
path:str, path object, or file-like object
String, path object (implementing os.PathLike[str]), or file-like object implementing a binary read() function. The string could be a URL. Valid URL schemes include http, ftp, s3, and file. For file URLs, a host is expected. A local file could be: file://localhost/path/to/table.orc.
columns:list, default None
If not None, only these columns will be read from the file. **kwargs
Any additional kwargs are passed to pyarrow. Returns
DataFrame
Notes Before using this function you should read the user guide about ORC and install optional dependencies. | pandas.reference.api.pandas.read_orc |
pandas.read_parquet pandas.read_parquet(path, engine='auto', columns=None, storage_options=None, use_nullable_dtypes=False, **kwargs)[source]
Load a parquet object from the file path, returning a DataFrame. Parameters
path:str, path object or file-like object
String, path object (implementing os.PathLike[str]), or file-like object implementing a binary read() function. The string could be a URL. Valid URL schemes include http, ftp, s3, gs, and file. For file URLs, a host is expected. A local file could be: file://localhost/path/to/table.parquet. A file URL can also be a path to a directory that contains multiple partitioned parquet files. Both pyarrow and fastparquet support paths to directories as well as file URLs. A directory path could be: file://localhost/path/to/tables or s3://bucket/partition_dir.
engine:{‘auto’, ‘pyarrow’, ‘fastparquet’}, default ‘auto’
Parquet library to use. If ‘auto’, then the option io.parquet.engine is used. The default io.parquet.engine behavior is to try ‘pyarrow’, falling back to ‘fastparquet’ if ‘pyarrow’ is unavailable.
columns:list, default=None
If not None, only these columns will be read from the file.
storage_options:dict, optional
Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc. For HTTP(S) URLs the key-value pairs are forwarded to urllib as header options. For other URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are forwarded to fsspec. Please see fsspec and urllib for more details. New in version 1.3.0.
use_nullable_dtypes:bool, default False
If True, use dtypes that use pd.NA as missing value indicator for the resulting DataFrame. (only applicable for the pyarrow engine) As new dtypes are added that support pd.NA in the future, the output with this option will change to use those dtypes. Note: this is an experimental option, and behaviour (e.g. additional support dtypes) may change without notice. New in version 1.2.0. **kwargs
Any additional kwargs are passed to the engine. Returns
DataFrame | pandas.reference.api.pandas.read_parquet |
pandas.read_pickle pandas.read_pickle(filepath_or_buffer, compression='infer', storage_options=None)[source]
Load pickled pandas object (or any object) from file. Warning Loading pickled data received from untrusted sources can be unsafe. See here. Parameters
filepath_or_buffer:str, path object, or file-like object
String, path object (implementing os.PathLike[str]), or file-like object implementing a binary readlines() function. Changed in version 1.0.0: Accept URL. URL is not limited to S3 and GCS.
compression:str or dict, default ‘infer’
For on-the-fly decompression of on-disk data. If ‘infer’ and ‘filepath_or_buffer’ is path-like, then detect compression from the following extensions: ‘.gz’, ‘.bz2’, ‘.zip’, ‘.xz’, or ‘.zst’ (otherwise no compression). If using ‘zip’, the ZIP file must contain only one data file to be read in. Set to None for no decompression. Can also be a dict with key 'method' set to one of {'zip', 'gzip', 'bz2', 'zstd'} and other key-value pairs are forwarded to zipfile.ZipFile, gzip.GzipFile, bz2.BZ2File, or zstandard.ZstdDecompressor, respectively. As an example, the following could be passed for Zstandard decompression using a custom compression dictionary: compression={'method': 'zstd', 'dict_data': my_compression_dict}. Changed in version 1.4.0: Zstandard support.
storage_options:dict, optional
Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc. For HTTP(S) URLs the key-value pairs are forwarded to urllib as header options. For other URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are forwarded to fsspec. Please see fsspec and urllib for more details. New in version 1.2.0. Returns
unpickled:same type as object stored in file
See also DataFrame.to_pickle
Pickle (serialize) DataFrame object to file. Series.to_pickle
Pickle (serialize) Series object to file. read_hdf
Read HDF5 file into a DataFrame. read_sql
Read SQL query or database table into a DataFrame. read_parquet
Load a parquet object, returning a DataFrame. Notes read_pickle is only guaranteed to be backwards compatible to pandas 0.20.3. Examples
>>> original_df = pd.DataFrame({"foo": range(5), "bar": range(5, 10)})
>>> original_df
foo bar
0 0 5
1 1 6
2 2 7
3 3 8
4 4 9
>>> pd.to_pickle(original_df, "./dummy.pkl")
>>> unpickled_df = pd.read_pickle("./dummy.pkl")
>>> unpickled_df
foo bar
0 0 5
1 1 6
2 2 7
3 3 8
4 4 9 | pandas.reference.api.pandas.read_pickle |
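The compression='infer' behavior can be sketched with a round trip through a temporary directory (the file name below is invented); a '.gz' extension triggers gzip compression on write and decompression on read:

```python
import os
import tempfile
import pandas as pd

df = pd.DataFrame({"foo": range(5), "bar": range(5, 10)})

with tempfile.TemporaryDirectory() as tmp:
    # Compression is inferred from the '.gz' extension on both sides.
    path = os.path.join(tmp, "dummy.pkl.gz")
    df.to_pickle(path)
    roundtripped = pd.read_pickle(path)
```

Passing compression='gzip' explicitly to both calls would behave the same regardless of the file extension.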
pandas.read_sas pandas.read_sas(filepath_or_buffer, format=None, index=None, encoding=None, chunksize=None, iterator=False)[source]
Read SAS files stored as either XPORT or SAS7BDAT format files. Parameters
filepath_or_buffer:str, path object, or file-like object
String, path object (implementing os.PathLike[str]), or file-like object implementing a binary read() function. The string could be a URL. Valid URL schemes include http, ftp, s3, and file. For file URLs, a host is expected. A local file could be: file://localhost/path/to/table.sas.
format:str {‘xport’, ‘sas7bdat’} or None
If None, file format is inferred from file extension. If ‘xport’ or ‘sas7bdat’, uses the corresponding format.
index:identifier of index column, defaults to None
Identifier of column that should be used as index of the DataFrame.
encoding:str, default is None
Encoding for text data. If None, text data are stored as raw bytes.
chunksize:int
Read file chunksize lines at a time, returns iterator. Changed in version 1.2: TextFileReader is a context manager.
iterator:bool, defaults to False
If True, returns an iterator for reading the file incrementally. Changed in version 1.2: TextFileReader is a context manager. Returns
DataFrame if iterator=False and chunksize=None, else SAS7BDATReader or XportReader
pandas.read_spss pandas.read_spss(path, usecols=None, convert_categoricals=True)[source]
Load an SPSS file from the file path, returning a DataFrame. New in version 0.25.0. Parameters
path:str or Path
File path.
usecols:list-like, optional
Return a subset of the columns. If None, return all columns.
convert_categoricals:bool, default is True
Convert categorical columns into pd.Categorical. Returns
DataFrame | pandas.reference.api.pandas.read_spss |
pandas.read_sql pandas.read_sql(sql, con, index_col=None, coerce_float=True, params=None, parse_dates=None, columns=None, chunksize=None)[source]
Read SQL query or database table into a DataFrame. This function is a convenience wrapper around read_sql_table and read_sql_query (for backward compatibility). It will delegate to the specific function depending on the provided input. A SQL query will be routed to read_sql_query, while a database table name will be routed to read_sql_table. Note that the delegated function might have more specific notes about their functionality not listed here. Parameters
sql:str or SQLAlchemy Selectable (select or text object)
SQL query to be executed or a table name.
con:SQLAlchemy connectable, str, or sqlite3 connection
Using SQLAlchemy makes it possible to use any DB supported by that library. If a DBAPI2 object, only sqlite3 is supported. The user is responsible for engine disposal and connection closure for the SQLAlchemy connectable; str connections are closed automatically. See here.
index_col:str or list of str, optional, default: None
Column(s) to set as index(MultiIndex).
coerce_float:bool, default True
Attempts to convert values of non-string, non-numeric objects (like decimal.Decimal) to floating point, useful for SQL result sets.
params:list, tuple or dict, optional, default: None
List of parameters to pass to execute method. The syntax used to pass parameters is database driver dependent. Check your database driver documentation for which of the five syntax styles, described in PEP 249’s paramstyle, is supported. E.g. psycopg2 uses %(name)s, so use params={'name': 'value'}.
parse_dates:list or dict, default: None
List of column names to parse as dates. Dict of {column_name: format string} where format string is strftime compatible in case of parsing string times, or is one of (D, s, ns, ms, us) in case of parsing integer timestamps. Dict of {column_name: arg dict}, where the arg dict corresponds to the keyword arguments of pandas.to_datetime(). Especially useful with databases without native Datetime support, such as SQLite.
columns:list, default: None
List of column names to select from SQL table (only used when reading a table).
chunksize:int, default None
If specified, return an iterator where chunksize is the number of rows to include in each chunk. Returns
DataFrame or Iterator[DataFrame]
See also read_sql_table
Read SQL database table into a DataFrame. read_sql_query
Read SQL query into a DataFrame. Examples Read data from SQL via either a SQL query or a SQL tablename. When using a SQLite database only SQL queries are accepted; providing only the SQL tablename will result in an error.
>>> from sqlite3 import connect
>>> conn = connect(':memory:')
>>> df = pd.DataFrame(data=[[0, '10/11/12'], [1, '12/11/10']],
... columns=['int_column', 'date_column'])
>>> df.to_sql('test_data', conn)
2
>>> pd.read_sql('SELECT int_column, date_column FROM test_data', conn)
int_column date_column
0 0 10/11/12
1 1 12/11/10
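The chunksize parameter turns the result into an iterator of DataFrames. A minimal sketch reusing the in-memory SQLite pattern above (table and column names are illustrative):

```python
import sqlite3

import pandas as pd

conn = sqlite3.connect(":memory:")
pd.DataFrame({"x": range(5)}).to_sql("nums", conn, index=False)

# With chunksize set, read_sql yields DataFrames of at most 2 rows each
chunks = list(pd.read_sql("SELECT x FROM nums", conn, chunksize=2))
sizes = [len(c) for c in chunks]  # [2, 2, 1]
```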
>>> pd.read_sql('test_data', 'postgres:///db_name')
Apply date parsing to columns through the parse_dates argument
>>> pd.read_sql('SELECT int_column, date_column FROM test_data',
... conn,
... parse_dates=["date_column"])
int_column date_column
0 0 2012-10-11
1 1 2010-12-11
The parse_dates argument calls pd.to_datetime on the provided columns. Custom argument values for applying pd.to_datetime on a column are specified via a dictionary format:
1. Ignore errors while parsing the values of "date_column"
>>> pd.read_sql('SELECT int_column, date_column FROM test_data',
... conn,
... parse_dates={"date_column": {"errors": "ignore"}})
int_column date_column
0 0 2012-10-11
1 1 2010-12-11
2. Apply a dayfirst date parsing order on the values of "date_column"
>>> pd.read_sql('SELECT int_column, date_column FROM test_data',
... conn,
... parse_dates={"date_column": {"dayfirst": True}})
int_column date_column
0 0 2012-11-10
1 1 2010-11-12
3. Apply custom formatting when date parsing the values of "date_column"
>>> pd.read_sql('SELECT int_column, date_column FROM test_data',
... conn,
... parse_dates={"date_column": {"format": "%d/%m/%y"}})
int_column date_column
0 0 2012-11-10
1 1 2010-11-12 | pandas.reference.api.pandas.read_sql |
pandas.read_sql_query pandas.read_sql_query(sql, con, index_col=None, coerce_float=True, params=None, parse_dates=None, chunksize=None, dtype=None)[source]
Read SQL query into a DataFrame. Returns a DataFrame corresponding to the result set of the query string. Optionally provide an index_col parameter to use one of the columns as the index, otherwise default integer index will be used. Parameters
sql:str SQL query or SQLAlchemy Selectable (select or text object)
SQL query to be executed.
con:SQLAlchemy connectable, str, or sqlite3 connection
Using SQLAlchemy makes it possible to use any DB supported by that library. If a DBAPI2 object, only sqlite3 is supported.
index_col:str or list of str, optional, default: None
Column(s) to set as index(MultiIndex).
coerce_float:bool, default True
Attempts to convert values of non-string, non-numeric objects (like decimal.Decimal) to floating point. Useful for SQL result sets.
params:list, tuple or dict, optional, default: None
List of parameters to pass to execute method. The syntax used to pass parameters is database driver dependent. Check your database driver documentation for which of the five syntax styles, described in PEP 249's paramstyle, is supported. E.g., psycopg2 uses the %(name)s style, so use params={'name': 'value'}.
parse_dates:list or dict, default: None
List of column names to parse as dates. Dict of {column_name: format string} where format string is strftime compatible in case of parsing string times, or is one of (D, s, ns, ms, us) in case of parsing integer timestamps. Dict of {column_name: arg dict}, where the arg dict corresponds to the keyword arguments of pandas.to_datetime() Especially useful with databases without native Datetime support, such as SQLite.
chunksize:int, default None
If specified, return an iterator where chunksize is the number of rows to include in each chunk.
dtype:Type name or dict of columns
Data type for data or columns. E.g. np.float64 or {‘a’: np.float64, ‘b’: np.int32, ‘c’: ‘Int64’}. New in version 1.3.0. Returns
DataFrame or Iterator[DataFrame]
See also read_sql_table
Read SQL database table into a DataFrame. read_sql
Read SQL query or database table into a DataFrame. Notes Any datetime values with time zone information parsed via the parse_dates parameter will be converted to UTC. | pandas.reference.api.pandas.read_sql_query |
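The docstring above has no examples; as a hedged illustration of the params argument, sqlite3 uses the qmark paramstyle, so placeholders are ? and parameters are passed as a sequence (the table and column names here are made up for the sketch):

```python
import sqlite3

import pandas as pd

conn = sqlite3.connect(":memory:")
pd.DataFrame({"name": ["a", "b"], "score": [1, 2]}).to_sql(
    "scores", conn, index=False
)

# sqlite3's paramstyle is 'qmark': '?' placeholders, params as a tuple
df = pd.read_sql_query("SELECT * FROM scores WHERE score > ?", conn, params=(1,))
```

For psycopg2 the same query would use the %(name)s style with a dict of parameters instead.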
pandas.read_sql_table pandas.read_sql_table(table_name, con, schema=None, index_col=None, coerce_float=True, parse_dates=None, columns=None, chunksize=None)[source]
Read SQL database table into a DataFrame. Given a table name and a SQLAlchemy connectable, returns a DataFrame. This function does not support DBAPI connections. Parameters
table_name:str
Name of SQL table in database.
con:SQLAlchemy connectable or str
A database URI can be provided as a str. SQLite DBAPI connections are not supported.
schema:str, default None
Name of SQL schema in database to query (if database flavor supports this). Uses default schema if None (default).
index_col:str or list of str, optional, default: None
Column(s) to set as index(MultiIndex).
coerce_float:bool, default True
Attempts to convert values of non-string, non-numeric objects (like decimal.Decimal) to floating point. Can result in loss of precision.
parse_dates:list or dict, default None
List of column names to parse as dates. Dict of {column_name: format string} where format string is strftime compatible in case of parsing string times or is one of (D, s, ns, ms, us) in case of parsing integer timestamps. Dict of {column_name: arg dict}, where the arg dict corresponds to the keyword arguments of pandas.to_datetime() Especially useful with databases without native Datetime support, such as SQLite.
columns:list, default None
List of column names to select from SQL table.
chunksize:int, default None
If specified, returns an iterator where chunksize is the number of rows to include in each chunk. Returns
DataFrame or Iterator[DataFrame]
A SQL table is returned as two-dimensional data structure with labeled axes. See also read_sql_query
Read SQL query into a DataFrame. read_sql
Read SQL query or database table into a DataFrame. Notes Any datetime values with time zone information will be converted to UTC. Examples
>>> pd.read_sql_table('table_name', 'postgres:///db_name') | pandas.reference.api.pandas.read_sql_table |
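Since the postgres URI in the doctest above is not runnable as-is, here is a sketch using an in-memory SQLite engine instead, assuming SQLAlchemy is installed (the table name demo is illustrative):

```python
import pandas as pd
from sqlalchemy import create_engine  # read_sql_table needs a SQLAlchemy connectable

# In-memory SQLite engine; writes and reads go through the same pooled connection
engine = create_engine("sqlite://")
pd.DataFrame({"a": [1, 2]}).to_sql("demo", engine, index=False)

df = pd.read_sql_table("demo", engine)
```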
pandas.read_stata pandas.read_stata(filepath_or_buffer, convert_dates=True, convert_categoricals=True, index_col=None, convert_missing=False, preserve_dtypes=True, columns=None, order_categoricals=True, chunksize=None, iterator=False, compression='infer', storage_options=None)[source]
Read Stata file into DataFrame. Parameters
filepath_or_buffer:str, path object or file-like object
Any valid string path is acceptable. The string could be a URL. Valid URL schemes include http, ftp, s3, and file. For file URLs, a host is expected. A local file could be: file://localhost/path/to/table.dta. If you want to pass in a path object, pandas accepts any os.PathLike. By file-like object, we refer to objects with a read() method, such as a file handle (e.g. via builtin open function) or StringIO.
convert_dates:bool, default True
Convert date variables to DataFrame time values.
convert_categoricals:bool, default True
Read value labels and convert columns to Categorical/Factor variables.
index_col:str, optional
Column to set as index.
convert_missing:bool, default False
Flag indicating whether to convert missing values to their Stata representations. If False, missing values are replaced with nan. If True, columns containing missing values are returned with object data types and missing values are represented by StataMissingValue objects.
preserve_dtypes:bool, default True
Preserve Stata datatypes. If False, numeric data are upcast to pandas default types for foreign data (float64 or int64).
columns:list or None
Columns to retain. Columns will be returned in the given order. None returns all columns.
order_categoricals:bool, default True
Flag indicating whether converted categorical data are ordered.
chunksize:int, default None
Return a StataReader object for iteration; reads chunks with the given number of lines.
iterator:bool, default False
Return StataReader object.
compression:str or dict, default ‘infer’
For on-the-fly decompression of on-disk data. If 'infer' and 'filepath_or_buffer' is path-like, then detect compression from the following extensions: '.gz', '.bz2', '.zip', '.xz', or '.zst' (otherwise no compression). If using 'zip', the ZIP file must contain only one data file to be read in. Set to None for no decompression. Can also be a dict with key 'method' set to one of {'zip', 'gzip', 'bz2', 'zstd'} and other key-value pairs are forwarded to zipfile.ZipFile, gzip.GzipFile, bz2.BZ2File, or zstandard.ZstdDecompressor, respectively. As an example, the following could be passed for Zstandard decompression using a custom compression dictionary: compression={'method': 'zstd', 'dict_data': my_compression_dict}.
storage_options:dict, optional
Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc. For HTTP(S) URLs the key-value pairs are forwarded to urllib as header options. For other URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are forwarded to fsspec. Please see fsspec and urllib for more details. Returns
DataFrame or StataReader
See also io.stata.StataReader
Low-level reader for Stata data files. DataFrame.to_stata
Export Stata data files. Notes Categorical variables read through an iterator may not have the same categories and dtype. This occurs when a variable stored in a DTA file is associated to an incomplete set of value labels that only label a strict subset of the values. Examples Creating a dummy stata for this example
>>> df = pd.DataFrame({'animal': ['falcon', 'parrot', 'falcon',
...                               'parrot'],
...                    'speed': [350, 18, 361, 15]})  # doctest: +SKIP
>>> df.to_stata('animals.dta')  # doctest: +SKIP
Read a Stata dta file:
>>> df = pd.read_stata('animals.dta')
Read a Stata dta file in 10,000 line chunks:
>>> values = np.random.randint(0, 10, size=(20_000, 1), dtype="uint8")  # doctest: +SKIP
>>> df = pd.DataFrame(values, columns=["i"])  # doctest: +SKIP
>>> df.to_stata('filename.dta')  # doctest: +SKIP
>>> itr = pd.read_stata('filename.dta', chunksize=10000)
>>> for chunk in itr:
... # Operate on a single chunk, e.g., chunk.mean()
... pass | pandas.reference.api.pandas.read_stata |
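The skipped doctests above can be run end-to-end by writing to a temporary file first; a minimal sketch (the file name is arbitrary):

```python
import os
import tempfile

import pandas as pd

df = pd.DataFrame({"animal": ["falcon", "parrot"], "speed": [350.0, 18.0]})

# Round-trip through a temporary .dta file
path = os.path.join(tempfile.mkdtemp(), "animals.dta")
df.to_stata(path, write_index=False)
out = pd.read_stata(path)
```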
pandas.read_table pandas.read_table(filepath_or_buffer, sep=NoDefault.no_default, delimiter=None, header='infer', names=NoDefault.no_default, index_col=None, usecols=None, squeeze=None, prefix=NoDefault.no_default, mangle_dupe_cols=True, dtype=None, engine=None, converters=None, true_values=None, false_values=None, skipinitialspace=False, skiprows=None, skipfooter=0, nrows=None, na_values=None, keep_default_na=True, na_filter=True, verbose=False, skip_blank_lines=True, parse_dates=False, infer_datetime_format=False, keep_date_col=False, date_parser=None, dayfirst=False, cache_dates=True, iterator=False, chunksize=None, compression='infer', thousands=None, decimal='.', lineterminator=None, quotechar='"', quoting=0, doublequote=True, escapechar=None, comment=None, encoding=None, encoding_errors='strict', dialect=None, error_bad_lines=None, warn_bad_lines=None, on_bad_lines=None, delim_whitespace=False, low_memory=True, memory_map=False, float_precision=None, storage_options=None)[source]
Read general delimited file into DataFrame. Also supports optionally iterating or breaking of the file into chunks. Additional help can be found in the online docs for IO Tools. Parameters
filepath_or_buffer:str, path object or file-like object
Any valid string path is acceptable. The string could be a URL. Valid URL schemes include http, ftp, s3, gs, and file. For file URLs, a host is expected. A local file could be: file://localhost/path/to/table.csv. If you want to pass in a path object, pandas accepts any os.PathLike. By file-like object, we refer to objects with a read() method, such as a file handle (e.g. via builtin open function) or StringIO.
sep:str, default ‘\t’ (tab-stop)
Delimiter to use. If sep is None, the C engine cannot automatically detect the separator, but the Python parsing engine can, meaning the latter will be used and automatically detect the separator by Python’s builtin sniffer tool, csv.Sniffer. In addition, separators longer than 1 character and different from '\s+' will be interpreted as regular expressions and will also force the use of the Python parsing engine. Note that regex delimiters are prone to ignoring quoted data. Regex example: '\r\t'.
delimiter:str, default None
Alias for sep.
header:int, list of int, None, default ‘infer’
Row number(s) to use as the column names, and the start of the data. Default behavior is to infer the column names: if no names are passed the behavior is identical to header=0 and column names are inferred from the first line of the file, if column names are passed explicitly then the behavior is identical to header=None. Explicitly pass header=0 to be able to replace existing names. The header can be a list of integers that specify row locations for a multi-index on the columns e.g. [0,1,3]. Intervening rows that are not specified will be skipped (e.g. 2 in this example is skipped). Note that this parameter ignores commented lines and empty lines if skip_blank_lines=True, so header=0 denotes the first line of data rather than the first line of the file.
names:array-like, optional
List of column names to use. If the file contains a header row, then you should explicitly pass header=0 to override the column names. Duplicates in this list are not allowed.
index_col:int, str, sequence of int / str, or False, optional, default None
Column(s) to use as the row labels of the DataFrame, either given as string name or column index. If a sequence of int / str is given, a MultiIndex is used. Note: index_col=False can be used to force pandas to not use the first column as the index, e.g. when you have a malformed file with delimiters at the end of each line.
usecols:list-like or callable, optional
Return a subset of the columns. If list-like, all elements must either be positional (i.e. integer indices into the document columns) or strings that correspond to column names provided either by the user in names or inferred from the document header row(s). If names are given, the document header row(s) are not taken into account. For example, a valid list-like usecols parameter would be [0, 1, 2] or ['foo', 'bar', 'baz']. Element order is ignored, so usecols=[0, 1] is the same as [1, 0]. To instantiate a DataFrame from data with element order preserved use pd.read_csv(data, usecols=['foo', 'bar'])[['foo', 'bar']] for columns in ['foo', 'bar'] order or pd.read_csv(data, usecols=['foo', 'bar'])[['bar', 'foo']] for ['bar', 'foo'] order. If callable, the callable function will be evaluated against the column names, returning names where the callable function evaluates to True. An example of a valid callable argument would be lambda x: x.upper() in ['AAA', 'BBB', 'DDD']. Using this parameter results in much faster parsing time and lower memory usage.
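To make the element-order point concrete, a small sketch with inline tab-separated data (the column names are invented):

```python
from io import StringIO

import pandas as pd

data = "a\tb\tc\n1\t2\t3\n"

# Element order in usecols is ignored: the result keeps file order
df = pd.read_table(StringIO(data), usecols=["c", "a"])
cols = list(df.columns)  # ['a', 'c']
```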
squeeze:bool, default False
If the parsed data only contains one column then return a Series. Deprecated since version 1.4.0: Append .squeeze("columns") to the call to read_table to squeeze the data.
prefix:str, optional
Prefix to add to column numbers when no header, e.g. ‘X’ for X0, X1, … Deprecated since version 1.4.0: Use a list comprehension on the DataFrame’s columns after calling read_csv.
mangle_dupe_cols:bool, default True
Duplicate columns will be specified as ‘X’, ‘X.1’, …’X.N’, rather than ‘X’…’X’. Passing in False will cause data to be overwritten if there are duplicate names in the columns.
dtype:Type name or dict of column -> type, optional
Data type for data or columns. E.g. {‘a’: np.float64, ‘b’: np.int32, ‘c’: ‘Int64’} Use str or object together with suitable na_values settings to preserve and not interpret dtype. If converters are specified, they will be applied INSTEAD of dtype conversion.
engine:{‘c’, ‘python’, ‘pyarrow’}, optional
Parser engine to use. The C and pyarrow engines are faster, while the python engine is currently more feature-complete. Multithreading is currently only supported by the pyarrow engine. New in version 1.4.0: The “pyarrow” engine was added as an experimental engine, and some features are unsupported, or may not work correctly, with this engine.
converters:dict, optional
Dict of functions for converting values in certain columns. Keys can either be integers or column labels.
true_values:list, optional
Values to consider as True.
false_values:list, optional
Values to consider as False.
skipinitialspace:bool, default False
Skip spaces after delimiter.
skiprows:list-like, int or callable, optional
Line numbers to skip (0-indexed) or number of lines to skip (int) at the start of the file. If callable, the callable function will be evaluated against the row indices, returning True if the row should be skipped and False otherwise. An example of a valid callable argument would be lambda x: x in [0, 2].
skipfooter:int, default 0
Number of lines at bottom of file to skip (Unsupported with engine=’c’).
nrows:int, optional
Number of rows of file to read. Useful for reading pieces of large files.
na_values:scalar, str, list-like, or dict, optional
Additional strings to recognize as NA/NaN. If dict passed, specific per-column NA values. By default the following values are interpreted as NaN: ‘’, ‘#N/A’, ‘#N/A N/A’, ‘#NA’, ‘-1.#IND’, ‘-1.#QNAN’, ‘-NaN’, ‘-nan’, ‘1.#IND’, ‘1.#QNAN’, ‘<NA>’, ‘N/A’, ‘NA’, ‘NULL’, ‘NaN’, ‘n/a’, ‘nan’, ‘null’.
keep_default_na:bool, default True
Whether or not to include the default NaN values when parsing the data. Depending on whether na_values is passed in, the behavior is as follows: If keep_default_na is True, and na_values are specified, na_values is appended to the default NaN values used for parsing. If keep_default_na is True, and na_values are not specified, only the default NaN values are used for parsing. If keep_default_na is False, and na_values are specified, only the NaN values specified na_values are used for parsing. If keep_default_na is False, and na_values are not specified, no strings will be parsed as NaN. Note that if na_filter is passed in as False, the keep_default_na and na_values parameters will be ignored.
na_filter:bool, default True
Detect missing value markers (empty strings and the value of na_values). In data without any NAs, passing na_filter=False can improve the performance of reading a large file.
verbose:bool, default False
Indicate number of NA values placed in non-numeric columns.
skip_blank_lines:bool, default True
If True, skip over blank lines rather than interpreting as NaN values.
parse_dates:bool or list of int or names or list of lists or dict, default False
The behavior is as follows: boolean. If True -> try parsing the index. list of int or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate date column. list of lists. e.g. If [[1, 3]] -> combine columns 1 and 3 and parse as a single date column. dict, e.g. {‘foo’ : [1, 3]} -> parse columns 1, 3 as date and call result ‘foo’ If a column or index cannot be represented as an array of datetimes, say because of an unparsable value or a mixture of timezones, the column or index will be returned unaltered as an object data type. For non-standard datetime parsing, use pd.to_datetime after pd.read_csv. To parse an index or column with a mixture of timezones, specify date_parser to be a partially-applied pandas.to_datetime() with utc=True. See Parsing a CSV with mixed timezones for more. Note: A fast-path exists for iso8601-formatted dates.
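A minimal sketch of the list form of parse_dates, using inline tab-separated data (column names invented):

```python
from io import StringIO

import pandas as pd

data = "id\twhen\n1\t2021-01-02\n2\t2021-03-04\n"

# Parse the 'when' column as datetimes while reading
df = pd.read_table(StringIO(data), parse_dates=["when"])
```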
infer_datetime_format:bool, default False
If True and parse_dates is enabled, pandas will attempt to infer the format of the datetime strings in the columns, and if it can be inferred, switch to a faster method of parsing them. In some cases this can increase the parsing speed by 5-10x.
keep_date_col:bool, default False
If True and parse_dates specifies combining multiple columns then keep the original columns.
date_parser:function, optional
Function to use for converting a sequence of string columns to an array of datetime instances. The default uses dateutil.parser.parser to do the conversion. Pandas will try to call date_parser in three different ways, advancing to the next if an exception occurs: 1) Pass one or more arrays (as defined by parse_dates) as arguments; 2) concatenate (row-wise) the string values from the columns defined by parse_dates into a single array and pass that; and 3) call date_parser once for each row using one or more strings (corresponding to the columns defined by parse_dates) as arguments.
dayfirst:bool, default False
DD/MM format dates, international and European format.
cache_dates:bool, default True
If True, use a cache of unique, converted dates to apply the datetime conversion. May produce significant speed-up when parsing duplicate date strings, especially ones with timezone offsets. New in version 0.25.0.
iterator:bool, default False
Return TextFileReader object for iteration or getting chunks with get_chunk(). Changed in version 1.2: TextFileReader is a context manager.
chunksize:int, optional
Return TextFileReader object for iteration. See the IO Tools docs for more information on iterator and chunksize. Changed in version 1.2: TextFileReader is a context manager.
compression:str or dict, default ‘infer’
For on-the-fly decompression of on-disk data. If 'infer' and 'filepath_or_buffer' is path-like, then detect compression from the following extensions: '.gz', '.bz2', '.zip', '.xz', or '.zst' (otherwise no compression). If using 'zip', the ZIP file must contain only one data file to be read in. Set to None for no decompression. Can also be a dict with key 'method' set to one of {'zip', 'gzip', 'bz2', 'zstd'} and other key-value pairs are forwarded to zipfile.ZipFile, gzip.GzipFile, bz2.BZ2File, or zstandard.ZstdDecompressor, respectively. As an example, the following could be passed for Zstandard decompression using a custom compression dictionary: compression={'method': 'zstd', 'dict_data': my_compression_dict}. Changed in version 1.4.0: Zstandard support.
thousands:str, optional
Thousands separator.
decimal:str, default ‘.’
Character to recognize as decimal point (e.g. use ‘,’ for European data).
lineterminator:str (length 1), optional
Character to break file into lines. Only valid with C parser.
quotechar:str (length 1), optional
The character used to denote the start and end of a quoted item. Quoted items can include the delimiter and it will be ignored.
quoting:int or csv.QUOTE_* instance, default 0
Control field quoting behavior per csv.QUOTE_* constants. Use one of QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or QUOTE_NONE (3).
doublequote:bool, default True
When quotechar is specified and quoting is not QUOTE_NONE, indicate whether or not to interpret two consecutive quotechar elements INSIDE a field as a single quotechar element.
escapechar:str (length 1), optional
One-character string used to escape other characters.
comment:str, optional
Indicates remainder of line should not be parsed. If found at the beginning of a line, the line will be ignored altogether. This parameter must be a single character. Like empty lines (as long as skip_blank_lines=True), fully commented lines are ignored by the parameter header but not by skiprows. For example, if comment='#', parsing #empty\na,b,c\n1,2,3 with header=0 will result in ‘a,b,c’ being treated as the header.
encoding:str, optional
Encoding to use for UTF when reading/writing (ex. ‘utf-8’). List of Python standard encodings . Changed in version 1.2: When encoding is None, errors="replace" is passed to open(). Otherwise, errors="strict" is passed to open(). This behavior was previously only the case for engine="python". Changed in version 1.3.0: encoding_errors is a new argument. encoding has no longer an influence on how encoding errors are handled.
encoding_errors:str, optional, default “strict”
How encoding errors are treated. List of possible values . New in version 1.3.0.
dialect:str or csv.Dialect, optional
If provided, this parameter will override values (default or not) for the following parameters: delimiter, doublequote, escapechar, skipinitialspace, quotechar, and quoting. If it is necessary to override values, a ParserWarning will be issued. See csv.Dialect documentation for more details.
error_bad_lines:bool, optional, default None
Lines with too many fields (e.g. a csv line with too many commas) will by default cause an exception to be raised, and no DataFrame will be returned. If False, then these “bad lines” will be dropped from the DataFrame that is returned. Deprecated since version 1.3.0: The on_bad_lines parameter should be used instead to specify behavior upon encountering a bad line.
warn_bad_lines:bool, optional, default None
If error_bad_lines is False, and warn_bad_lines is True, a warning for each “bad line” will be output. Deprecated since version 1.3.0: The on_bad_lines parameter should be used instead to specify behavior upon encountering a bad line.
on_bad_lines:{‘error’, ‘warn’, ‘skip’} or callable, default ‘error’
Specifies what to do upon encountering a bad line (a line with too many fields). Allowed values are:
'error', raise an Exception when a bad line is encountered. 'warn', raise a warning when a bad line is encountered and skip that line. 'skip', skip bad lines without raising or warning when they are encountered.
New in version 1.3.0.
callable, function with signature (bad_line: list[str]) -> list[str] | None that will process a single bad line. bad_line is a list of strings split by the sep. If the function returns None, the bad line will be ignored. If the function returns a new list of strings with more elements than expected, a ParserWarning will be emitted while dropping extra elements. Only supported when engine="python". New in version 1.4.0.
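A sketch of the callable form, which requires engine="python"; the trimming policy here is just an example:

```python
from io import StringIO

import pandas as pd

data = "a\tb\tc\n1\t2\t3\n4\t5\t6\t7\n"  # last data row has one extra field

# The callable receives the bad line's split fields; returning a trimmed
# list keeps the row, while returning None would drop it silently
df = pd.read_table(StringIO(data), engine="python",
                   on_bad_lines=lambda fields: fields[:3])
```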
delim_whitespace:bool, default False
Specifies whether or not whitespace (e.g. ' ' or '\t') will be used as the sep. Equivalent to setting sep='\s+'. If this option is set to True, nothing should be passed in for the delimiter parameter.
low_memory:bool, default True
Internally process the file in chunks, resulting in lower memory use while parsing, but possibly mixed type inference. To ensure no mixed types either set False, or specify the type with the dtype parameter. Note that the entire file is read into a single DataFrame regardless, use the chunksize or iterator parameter to return the data in chunks. (Only valid with C parser).
memory_map:bool, default False
If a filepath is provided for filepath_or_buffer, map the file object directly onto memory and access the data directly from there. Using this option can improve performance because there is no longer any I/O overhead.
float_precision:str, optional
Specifies which converter the C engine should use for floating-point values. The options are None or ‘high’ for the ordinary converter, ‘legacy’ for the original lower precision pandas converter, and ‘round_trip’ for the round-trip converter. Changed in version 1.2.
storage_options:dict, optional
Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc. For HTTP(S) URLs the key-value pairs are forwarded to urllib as header options. For other URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are forwarded to fsspec. Please see fsspec and urllib for more details. New in version 1.2. Returns
DataFrame or TextParser
A comma-separated values (csv) file is returned as two-dimensional data structure with labeled axes. See also DataFrame.to_csv
Write DataFrame to a comma-separated values (csv) file. read_csv
Read a comma-separated values (csv) file into DataFrame. read_fwf
Read a table of fixed-width formatted lines into DataFrame. Examples
>>> pd.read_table('data.csv') | pandas.reference.api.pandas.read_table |
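Since data.csv is not provided, a self-contained variant with inline data (the contents are invented):

```python
from io import StringIO

import pandas as pd

# read_table defaults to sep='\t'
data = "name\tvalue\nfoo\t1\nbar\t2\n"
df = pd.read_table(StringIO(data))
```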
pandas.read_xml pandas.read_xml(path_or_buffer, xpath='./*', namespaces=None, elems_only=False, attrs_only=False, names=None, encoding='utf-8', parser='lxml', stylesheet=None, compression='infer', storage_options=None)[source]
Read XML document into a DataFrame object. New in version 1.3.0. Parameters
path_or_buffer:str, path object, or file-like object
String, path object (implementing os.PathLike[str]), or file-like object implementing a read() function. The string can be any valid XML string or a path. The string can further be a URL. Valid URL schemes include http, ftp, s3, and file.
xpath:str, optional, default ‘./*’
The XPath to parse required set of nodes for migration to DataFrame. XPath should return a collection of elements and not a single element. Note: The etree parser supports limited XPath expressions. For more complex XPath, use lxml which requires installation.
namespaces:dict, optional
The namespaces defined in the XML document, as a dict with the key being the namespace prefix and the value the URI. There is no need to include all namespaces in the XML, only the ones used in the xpath expression. Note: if the XML document uses a default namespace denoted as xmlns='<URI>' without a prefix, you must assign any temporary namespace prefix such as 'doc' to the URI in order to parse underlying nodes and/or attributes. For example,
namespaces = {"doc": "https://example.com"}
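A runnable sketch of this prefix mapping, using the built-in etree parser to avoid the lxml dependency (the document and URI are invented):

```python
from io import StringIO

import pandas as pd

xml = """<?xml version='1.0' encoding='utf-8'?>
<doc:data xmlns:doc="https://example.com">
  <doc:row><doc:shape>square</doc:shape><doc:sides>4</doc:sides></doc:row>
  <doc:row><doc:shape>circle</doc:shape><doc:sides>0</doc:sides></doc:row>
</doc:data>"""

# The prefix used in xpath must be mapped to the URI from the document
df = pd.read_xml(StringIO(xml), xpath=".//doc:row",
                 namespaces={"doc": "https://example.com"}, parser="etree")
```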
elems_only:bool, optional, default False
Parse only the child elements at the specified xpath. By default, all child elements and non-empty text nodes are returned.
attrs_only:bool, optional, default False
Parse only the attributes at the specified xpath. By default, all attributes are returned.
names:list-like, optional
Column names for DataFrame of parsed XML data. Use this parameter to rename original element names and distinguish same named elements.
encoding:str, optional, default ‘utf-8’
Encoding of XML document.
parser:{‘lxml’,’etree’}, default ‘lxml’
Parser module to use for retrieval of data. Only ‘lxml’ and ‘etree’ are supported. With ‘lxml’ more complex XPath searches and ability to use XSLT stylesheet are supported.
stylesheet:str, path object or file-like object
A URL, file-like object, or a raw string containing an XSLT script. This stylesheet should flatten complex, deeply nested XML documents for easier parsing. To use this feature you must have the lxml module installed and specify 'lxml' as the parser. The xpath must reference nodes of the transformed XML document generated after XSLT transformation, not the original XML document. Only XSLT 1.0 scripts, and not later versions, are currently supported.
compression:str or dict, default ‘infer’
For on-the-fly decompression of on-disk data. If ‘infer’ and ‘path_or_buffer’ is path-like, then detect compression from the following extensions: ‘.gz’, ‘.bz2’, ‘.zip’, ‘.xz’, or ‘.zst’ (otherwise no compression). If using ‘zip’, the ZIP file must contain only one data file to be read in. Set to None for no decompression. Can also be a dict with key 'method' set to one of {'zip', 'gzip', 'bz2', 'zstd'} and other key-value pairs are forwarded to zipfile.ZipFile, gzip.GzipFile, bz2.BZ2File, or zstandard.ZstdDecompressor, respectively. As an example, the following could be passed for Zstandard decompression using a custom compression dictionary: compression={'method': 'zstd', 'dict_data': my_compression_dict}. Changed in version 1.4.0: Zstandard support.
storage_options:dict, optional
Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc. For HTTP(S) URLs the key-value pairs are forwarded to urllib as header options. For other URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are forwarded to fsspec. Please see fsspec and urllib for more details. Returns
df
A DataFrame. See also read_json
Convert a JSON string to pandas object. read_html
Read HTML tables into a list of DataFrame objects. Notes This method is best designed to import shallow XML documents in the following format, which is an ideal fit for the two dimensions of a DataFrame (row by column).
<root>
<row>
<column1>data</column1>
<column2>data</column2>
<column3>data</column3>
...
</row>
<row>
...
</row>
...
</root>
As a file format, XML documents can be designed in any way, including the layout of elements and attributes, as long as they conform to W3C specifications. Therefore, this method is a convenience handler for a specific flatter design, not for all possible XML structures. However, for more complex XML documents, stylesheet allows you to temporarily redesign the original document with XSLT (a special-purpose language) into a flatter version for migration to a DataFrame. This function will always return a single DataFrame or raise exceptions due to issues with the XML document, xpath, or other parameters. Examples
>>> xml = '''<?xml version='1.0' encoding='utf-8'?>
... <data xmlns="http://example.com">
... <row>
... <shape>square</shape>
... <degrees>360</degrees>
... <sides>4.0</sides>
... </row>
... <row>
... <shape>circle</shape>
... <degrees>360</degrees>
... <sides/>
... </row>
... <row>
... <shape>triangle</shape>
... <degrees>180</degrees>
... <sides>3.0</sides>
... </row>
... </data>'''
>>> df = pd.read_xml(xml)
>>> df
shape degrees sides
0 square 360 4.0
1 circle 360 NaN
2 triangle 180 3.0
>>> xml = '''<?xml version='1.0' encoding='utf-8'?>
... <data>
... <row shape="square" degrees="360" sides="4.0"/>
... <row shape="circle" degrees="360"/>
... <row shape="triangle" degrees="180" sides="3.0"/>
... </data>'''
>>> df = pd.read_xml(xml, xpath=".//row")
>>> df
shape degrees sides
0 square 360 4.0
1 circle 360 NaN
2 triangle 180 3.0
>>> xml = '''<?xml version='1.0' encoding='utf-8'?>
... <doc:data xmlns:doc="https://example.com">
... <doc:row>
... <doc:shape>square</doc:shape>
... <doc:degrees>360</doc:degrees>
... <doc:sides>4.0</doc:sides>
... </doc:row>
... <doc:row>
... <doc:shape>circle</doc:shape>
... <doc:degrees>360</doc:degrees>
... <doc:sides/>
... </doc:row>
... <doc:row>
... <doc:shape>triangle</doc:shape>
... <doc:degrees>180</doc:degrees>
... <doc:sides>3.0</doc:sides>
... </doc:row>
... </doc:data>'''
>>> df = pd.read_xml(xml,
... xpath="//doc:row",
... namespaces={"doc": "https://example.com"})
>>> df
shape degrees sides
0 square 360 4.0
1 circle 360 NaN
2 triangle 180 3.0 | pandas.reference.api.pandas.read_xml |
pandas.reset_option pandas.reset_option(pat)=<pandas._config.config.CallableDynamicDoc object>
Reset one or more options to their default value. Pass “all” as argument to reset all options. Available options: compute.[use_bottleneck, use_numba, use_numexpr] display.[chop_threshold, colheader_justify, column_space, date_dayfirst, date_yearfirst, encoding, expand_frame_repr, float_format] display.html.[border, table_schema, use_mathjax] display.[large_repr] display.latex.[escape, longtable, multicolumn, multicolumn_format, multirow, repr] display.[max_categories, max_columns, max_colwidth, max_dir_items, max_info_columns, max_info_rows, max_rows, max_seq_items, memory_usage, min_rows, multi_sparse, notebook_repr_html, pprint_nest_depth, precision, show_dimensions] display.unicode.[ambiguous_as_wide, east_asian_width] display.[width] io.excel.ods.[reader, writer] io.excel.xls.[reader, writer] io.excel.xlsb.[reader] io.excel.xlsm.[reader, writer] io.excel.xlsx.[reader, writer] io.hdf.[default_format, dropna_table] io.parquet.[engine] io.sql.[engine] mode.[chained_assignment, data_manager, sim_interactive, string_storage, use_inf_as_na, use_inf_as_null] plotting.[backend] plotting.matplotlib.[register_converters] styler.format.[decimal, escape, formatter, na_rep, precision, thousands] styler.html.[mathjax] styler.latex.[environment, hrules, multicol_align, multirow_align] styler.render.[encoding, max_columns, max_elements, max_rows, repr] styler.sparse.[columns, index] Parameters
pat:str/regex
If specified only options matching prefix* will be reset. Note: partial matches are supported for convenience, but unless you use the full option name (e.g. x.y.z.option_name), your code may break in future versions if new options with similar names are introduced. Returns
None
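A minimal sketch of the set/reset round trip, using display.max_rows (a real option listed below, whose documented default is 60):

```python
# Sketch: change an option, then restore its default with reset_option.
import pandas as pd

pd.set_option("display.max_rows", 5)
assert pd.get_option("display.max_rows") == 5

pd.reset_option("display.max_rows")
assert pd.get_option("display.max_rows") == 60  # documented default
```

Passing "all" instead of an option name resets every option at once, as noted above.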
Notes The available options with their descriptions: compute.use_bottleneck:bool
Use the bottleneck library to accelerate if it is installed, the default is True Valid values: False,True [default: True] [currently: True] compute.use_numba:bool
Use the numba engine option for select operations if it is installed, the default is False Valid values: False,True [default: False] [currently: False] compute.use_numexpr:bool
Use the numexpr library to accelerate computation if it is installed, the default is True Valid values: False,True [default: True] [currently: True] display.chop_threshold:float or None
If set to a float value, all float values smaller than the given threshold will be displayed as exactly 0 by repr and friends. [default: None] [currently: None] display.colheader_justify:‘left’/’right’
Controls the justification of column headers. Used by DataFrameFormatter. [default: right] [currently: right] display.column_space No description available.
[default: 12] [currently: 12] display.date_dayfirst:boolean
When True, prints and parses dates with the day first, eg 20/01/2005 [default: False] [currently: False] display.date_yearfirst:boolean
When True, prints and parses dates with the year first, eg 2005/01/20 [default: False] [currently: False] display.encoding:str/unicode
Defaults to the detected encoding of the console. Specifies the encoding to be used for strings returned by to_string, these are generally strings meant to be displayed on the console. [default: utf-8] [currently: utf-8] display.expand_frame_repr:boolean
Whether to print out the full DataFrame repr for wide DataFrames across multiple lines, max_columns is still respected, but the output will wrap-around across multiple “pages” if its width exceeds display.width. [default: True] [currently: True] display.float_format:callable
The callable should accept a floating point number and return a string with the desired format of the number. This is used in some places like SeriesFormatter. See formats.format.EngFormatter for an example. [default: None] [currently: None] display.html.border:int
A border=value attribute is inserted in the <table> tag for the DataFrame HTML repr. [default: 1] [currently: 1] display.html.table_schema:boolean
Whether to publish a Table Schema representation for frontends that support it. (default: False) [default: False] [currently: False] display.html.use_mathjax:boolean
When True, Jupyter notebook will process table contents using MathJax, rendering mathematical expressions enclosed by the dollar symbol. (default: True) [default: True] [currently: True] display.large_repr:‘truncate’/’info’
For DataFrames exceeding max_rows/max_cols, the repr (and HTML repr) can show a truncated table (the default from 0.13), or switch to the view from df.info() (the behaviour in earlier versions of pandas). [default: truncate] [currently: truncate] display.latex.escape:bool
This specifies whether the to_latex method of a DataFrame escapes special characters. Valid values: False,True [default: True] [currently: True] display.latex.longtable:bool
This specifies whether the to_latex method of a DataFrame uses the longtable format. Valid values: False,True [default: False] [currently: False] display.latex.multicolumn:bool
This specifies whether the to_latex method of a DataFrame uses multicolumns to pretty-print MultiIndex columns. Valid values: False,True [default: True] [currently: True] display.latex.multicolumn_format:str
This specifies the LaTeX column format used for multicolumn headers when pretty-printing MultiIndex columns. [default: l] [currently: l] display.latex.multirow:bool
This specifies whether the to_latex method of a DataFrame uses multirows to pretty-print MultiIndex rows. Valid values: False,True [default: False] [currently: False] display.latex.repr:boolean
Whether to produce a latex DataFrame representation for jupyter environments that support it. (default: False) [default: False] [currently: False] display.max_categories:int
This sets the maximum number of categories pandas should output when printing out a Categorical or a Series of dtype “category”. [default: 8] [currently: 8] display.max_columns:int
If max_cols is exceeded, switch to truncate view. Depending on large_repr, objects are either centrally truncated or printed as a summary view. ‘None’ value means unlimited. In case python/IPython is running in a terminal and large_repr equals ‘truncate’ this can be set to 0 and pandas will auto-detect the width of the terminal and print a truncated object which fits the screen width. The IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible to do correct auto-detection. [default: 0] [currently: 0] display.max_colwidth:int or None
The maximum width in characters of a column in the repr of a pandas data structure. When the column overflows, a “…” placeholder is embedded in the output. A ‘None’ value means unlimited. [default: 50] [currently: 50] display.max_dir_items:int
The number of items that will be added to dir(…). ‘None’ value means unlimited. Because dir is cached, changing this option will not immediately affect already existing dataframes until a column is deleted or added. This is for instance used to suggest columns from a dataframe to tab completion. [default: 100] [currently: 100] display.max_info_columns:int
max_info_columns is used in DataFrame.info method to decide if per column information will be printed. [default: 100] [currently: 100] display.max_info_rows:int or None
df.info() will usually show null-counts for each column. For large frames this can be quite slow. max_info_rows and max_info_cols limit this null check only to frames with smaller dimensions than specified. [default: 1690785] [currently: 1690785] display.max_rows:int
If max_rows is exceeded, switch to truncate view. Depending on large_repr, objects are either centrally truncated or printed as a summary view. ‘None’ value means unlimited. In case python/IPython is running in a terminal and large_repr equals ‘truncate’ this can be set to 0 and pandas will auto-detect the height of the terminal and print a truncated object which fits the screen height. The IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible to do correct auto-detection. [default: 60] [currently: 60] display.max_seq_items:int or None
When pretty-printing a long sequence, no more than max_seq_items will be printed. If items are omitted, they will be denoted by the addition of “…” to the resulting string. If set to None, the number of items to be printed is unlimited. [default: 100] [currently: 100] display.memory_usage:bool, string or None
This specifies if the memory usage of a DataFrame should be displayed when df.info() is called. Valid values True,False,’deep’ [default: True] [currently: True] display.min_rows:int
The numbers of rows to show in a truncated view (when max_rows is exceeded). Ignored when max_rows is set to None or 0. When set to None, follows the value of max_rows. [default: 10] [currently: 10] display.multi_sparse:boolean
“sparsify” MultiIndex display (don’t display repeated elements in outer levels within groups) [default: True] [currently: True] display.notebook_repr_html:boolean
When True, IPython notebook will use html representation for pandas objects (if it is available). [default: True] [currently: True] display.pprint_nest_depth:int
Controls the number of nested levels to process when pretty-printing [default: 3] [currently: 3] display.precision:int
Floating point output precision in terms of number of places after the decimal, for regular formatting as well as scientific notation. Similar to precision in numpy.set_printoptions(). [default: 6] [currently: 6] display.show_dimensions:boolean or ‘truncate’
Whether to print out dimensions at the end of DataFrame repr. If ‘truncate’ is specified, only print out the dimensions if the frame is truncated (e.g. not display all rows and/or columns) [default: truncate] [currently: truncate] display.unicode.ambiguous_as_wide:boolean
Whether to use the Unicode East Asian Width to calculate the display text width. Enabling this may affect performance. (default: False) [default: False] [currently: False] display.unicode.east_asian_width:boolean
Whether to use the Unicode East Asian Width to calculate the display text width. Enabling this may affect performance. (default: False) [default: False] [currently: False] display.width:int
Width of the display in characters. In case python/IPython is running in a terminal this can be set to None and pandas will correctly auto-detect the width. Note that the IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible to correctly detect the width. [default: 80] [currently: 80] io.excel.ods.reader:string
The default Excel reader engine for ‘ods’ files. Available options: auto, odf. [default: auto] [currently: auto] io.excel.ods.writer:string
The default Excel writer engine for ‘ods’ files. Available options: auto, odf. [default: auto] [currently: auto] io.excel.xls.reader:string
The default Excel reader engine for ‘xls’ files. Available options: auto, xlrd. [default: auto] [currently: auto] io.excel.xls.writer:string
The default Excel writer engine for ‘xls’ files. Available options: auto, xlwt. [default: auto] [currently: auto] (Deprecated, use `` instead.) io.excel.xlsb.reader:string
The default Excel reader engine for ‘xlsb’ files. Available options: auto, pyxlsb. [default: auto] [currently: auto] io.excel.xlsm.reader:string
The default Excel reader engine for ‘xlsm’ files. Available options: auto, xlrd, openpyxl. [default: auto] [currently: auto] io.excel.xlsm.writer:string
The default Excel writer engine for ‘xlsm’ files. Available options: auto, openpyxl. [default: auto] [currently: auto] io.excel.xlsx.reader:string
The default Excel reader engine for ‘xlsx’ files. Available options: auto, xlrd, openpyxl. [default: auto] [currently: auto] io.excel.xlsx.writer:string
The default Excel writer engine for ‘xlsx’ files. Available options: auto, openpyxl, xlsxwriter. [default: auto] [currently: auto] io.hdf.default_format:format
Default writing format. If None, put will default to ‘fixed’ and append will default to ‘table’. [default: None] [currently: None] io.hdf.dropna_table:boolean
drop ALL nan rows when appending to a table [default: False] [currently: False] io.parquet.engine:string
The default parquet reader/writer engine. Available options: ‘auto’, ‘pyarrow’, ‘fastparquet’, the default is ‘auto’ [default: auto] [currently: auto] io.sql.engine:string
The default sql reader/writer engine. Available options: ‘auto’, ‘sqlalchemy’, the default is ‘auto’ [default: auto] [currently: auto] mode.chained_assignment:string
Raise an exception, warn, or take no action when trying to use chained assignment. The default is warn. [default: warn] [currently: warn] mode.data_manager:string
Internal data manager type; can be “block” or “array”. Defaults to “block”, unless overridden by the ‘PANDAS_DATA_MANAGER’ environment variable (needs to be set before pandas is imported). [default: block] [currently: block] mode.sim_interactive:boolean
Whether to simulate interactive mode for purposes of testing [default: False] [currently: False] mode.string_storage:string
The default storage for StringDtype. [default: python] [currently: python] mode.use_inf_as_na:boolean
True means treat None, NaN, INF, -INF as NA (old way), False means None and NaN are null, but INF, -INF are not NA (new way). [default: False] [currently: False] mode.use_inf_as_null:boolean
use_inf_as_null has been deprecated and will be removed in a future version. Use use_inf_as_na instead. [default: False] [currently: False] (Deprecated, use mode.use_inf_as_na instead.) plotting.backend:str
The plotting backend to use. The default value is “matplotlib”, the backend provided with pandas. Other backends can be specified by providing the name of the module that implements the backend. [default: matplotlib] [currently: matplotlib] plotting.matplotlib.register_converters:bool or ‘auto’.
Whether to register converters with matplotlib’s units registry for dates, times, datetimes, and Periods. Toggling to False will remove the converters, restoring any converters that pandas overwrote. [default: auto] [currently: auto] styler.format.decimal:str
The character representation for the decimal separator for floats and complex. [default: .] [currently: .] styler.format.escape:str, optional
Whether to escape certain characters according to the given context; html or latex. [default: None] [currently: None] styler.format.formatter:str, callable, dict, optional
A formatter object to be used as default within Styler.format. [default: None] [currently: None] styler.format.na_rep:str, optional
The string representation for values identified as missing. [default: None] [currently: None] styler.format.precision:int
The precision for floats and complex numbers. [default: 6] [currently: 6] styler.format.thousands:str, optional
The character representation for thousands separator for floats, int and complex. [default: None] [currently: None] styler.html.mathjax:bool
If False will render special CSS classes to table attributes that indicate Mathjax will not be used in Jupyter Notebook. [default: True] [currently: True] styler.latex.environment:str
The environment to replace \begin{table}. If “longtable” is used results in a specific longtable environment format. [default: None] [currently: None] styler.latex.hrules:bool
Whether to add horizontal rules on top and bottom and below the headers. [default: False] [currently: False] styler.latex.multicol_align:{“r”, “c”, “l”, “naive-l”, “naive-r”}
The specifier for horizontal alignment of sparsified LaTeX multicolumns. Pipe decorators can also be added to non-naive values to draw vertical rules, e.g. “|r” will draw a rule on the left side of right aligned merged cells. [default: r] [currently: r] styler.latex.multirow_align:{“c”, “t”, “b”}
The specifier for vertical alignment of sparsified LaTeX multirows. [default: c] [currently: c] styler.render.encoding:str
The encoding used for output HTML and LaTeX files. [default: utf-8] [currently: utf-8] styler.render.max_columns:int, optional
The maximum number of columns that will be rendered. May still be reduced to satisfy max_elements, which takes precedence. [default: None] [currently: None] styler.render.max_elements:int
The maximum number of data-cell (<td>) elements that will be rendered before trimming will occur over columns, rows or both if needed. [default: 262144] [currently: 262144] styler.render.max_rows:int, optional
The maximum number of rows that will be rendered. May still be reduced to satisfy max_elements, which takes precedence. [default: None] [currently: None] styler.render.repr:str
Determine which output to use in Jupyter Notebook in {“html”, “latex”}. [default: html] [currently: html] styler.sparse.columns:bool
Whether to sparsify the display of hierarchical columns. Setting to False will display each explicit level element in a hierarchical key for each column. [default: True] [currently: True] styler.sparse.index:bool
Whether to sparsify the display of a hierarchical index. Setting to False will display each explicit level element in a hierarchical key for each row. [default: True] [currently: True] | pandas.reference.api.pandas.reset_option |
pandas.Series classpandas.Series(data=None, index=None, dtype=None, name=None, copy=False, fastpath=False)[source]
One-dimensional ndarray with axis labels (including time series). Labels need not be unique but must be a hashable type. The object supports both integer- and label-based indexing and provides a host of methods for performing operations involving the index. Statistical methods from ndarray have been overridden to automatically exclude missing data (currently represented as NaN). Operations between Series (+, -, /, *, **) align values based on their associated index values– they need not be the same length. The result index will be the sorted union of the two indexes. Parameters
data:array-like, Iterable, dict, or scalar value
Contains data stored in Series. If data is a dict, argument order is maintained.
index:array-like or Index (1d)
Values must be hashable and have the same length as data. Non-unique index values are allowed. Will default to RangeIndex (0, 1, 2, …, n) if not provided. If data is dict-like and index is None, then the keys in the data are used as the index. If the index is not None, the resulting Series is reindexed with the index values.
dtype:str, numpy.dtype, or ExtensionDtype, optional
Data type for the output Series. If not specified, this will be inferred from data. See the user guide for more usages.
name:str, optional
The name to give to the Series.
copy:bool, default False
Copy input data. Only affects Series or 1d ndarray input. See examples. Examples Constructing Series from a dictionary with an Index specified
>>> d = {'a': 1, 'b': 2, 'c': 3}
>>> ser = pd.Series(data=d, index=['a', 'b', 'c'])
>>> ser
a 1
b 2
c 3
dtype: int64
The keys of the dictionary match the Index values, so the Index argument has no effect.
>>> d = {'a': 1, 'b': 2, 'c': 3}
>>> ser = pd.Series(data=d, index=['x', 'y', 'z'])
>>> ser
x NaN
y NaN
z NaN
dtype: float64
Note that the Index is first built with the keys from the dictionary. After this, the Series is reindexed with the given Index values, hence we get all NaN as a result. Constructing Series from a list with copy=False.
>>> r = [1, 2]
>>> ser = pd.Series(r, copy=False)
>>> ser.iloc[0] = 999
>>> r
[1, 2]
>>> ser
0 999
1 2
dtype: int64
Due to the input data type, the Series has a copy of the original data even though copy=False, so the original list is unchanged. Constructing Series from a 1d ndarray with copy=False.
>>> r = np.array([1, 2])
>>> ser = pd.Series(r, copy=False)
>>> ser.iloc[0] = 999
>>> r
array([999, 2])
>>> ser
0 999
1 2
dtype: int64
Due to the input data type, the Series has a view on the original data, so the original array is changed as well. Attributes
T Return the transpose, which is by definition self.
array The ExtensionArray of the data backing this Series or Index.
at Access a single value for a row/column label pair.
attrs Dictionary of global attributes of this dataset.
axes Return a list of the row axis labels.
dtype Return the dtype object of the underlying data.
dtypes Return the dtype object of the underlying data.
flags Get the properties associated with this pandas object.
hasnans Return True if there are any NaNs.
iat Access a single value for a row/column pair by integer position.
iloc Purely integer-location based indexing for selection by position.
index The index (axis labels) of the Series.
is_monotonic Return boolean if values in the object are monotonic_increasing.
is_monotonic_decreasing Return boolean if values in the object are monotonic_decreasing.
is_monotonic_increasing Alias for is_monotonic.
is_unique Return boolean if values in the object are unique.
loc Access a group of rows and columns by label(s) or a boolean array.
name Return the name of the Series.
nbytes Return the number of bytes in the underlying data.
ndim Number of dimensions of the underlying data, by definition 1.
shape Return a tuple of the shape of the underlying data.
size Return the number of elements in the underlying data.
values Return Series as ndarray or ndarray-like depending on the dtype.
empty Methods
abs() Return a Series/DataFrame with absolute numeric value of each element.
add(other[, level, fill_value, axis]) Return Addition of series and other, element-wise (binary operator add).
add_prefix(prefix) Prefix labels with string prefix.
add_suffix(suffix) Suffix labels with string suffix.
agg([func, axis]) Aggregate using one or more operations over the specified axis.
aggregate([func, axis]) Aggregate using one or more operations over the specified axis.
align(other[, join, axis, level, copy, ...]) Align two objects on their axes with the specified join method.
all([axis, bool_only, skipna, level]) Return whether all elements are True, potentially over an axis.
any([axis, bool_only, skipna, level]) Return whether any element is True, potentially over an axis.
append(to_append[, ignore_index, ...]) Concatenate two or more Series.
apply(func[, convert_dtype, args]) Invoke function on values of Series.
argmax([axis, skipna]) Return int position of the largest value in the Series.
argmin([axis, skipna]) Return int position of the smallest value in the Series.
argsort([axis, kind, order]) Return the integer indices that would sort the Series values.
asfreq(freq[, method, how, normalize, ...]) Convert time series to specified frequency.
asof(where[, subset]) Return the last row(s) without any NaNs before where.
astype(dtype[, copy, errors]) Cast a pandas object to a specified dtype dtype.
at_time(time[, asof, axis]) Select values at particular time of day (e.g., 9:30AM).
autocorr([lag]) Compute the lag-N autocorrelation.
backfill([axis, inplace, limit, downcast]) Synonym for DataFrame.fillna() with method='bfill'.
between(left, right[, inclusive]) Return boolean Series equivalent to left <= series <= right.
between_time(start_time, end_time[, ...]) Select values between particular times of the day (e.g., 9:00-9:30 AM).
bfill([axis, inplace, limit, downcast]) Synonym for DataFrame.fillna() with method='bfill'.
bool() Return the bool of a single element Series or DataFrame.
cat alias of pandas.core.arrays.categorical.CategoricalAccessor
clip([lower, upper, axis, inplace]) Trim values at input threshold(s).
combine(other, func[, fill_value]) Combine the Series with a Series or scalar according to func.
combine_first(other) Update null elements with value in the same location in 'other'.
compare(other[, align_axis, keep_shape, ...]) Compare to another Series and show the differences.
convert_dtypes([infer_objects, ...]) Convert columns to best possible dtypes using dtypes supporting pd.NA.
copy([deep]) Make a copy of this object's indices and data.
corr(other[, method, min_periods]) Compute correlation with other Series, excluding missing values.
count([level]) Return number of non-NA/null observations in the Series.
cov(other[, min_periods, ddof]) Compute covariance with Series, excluding missing values.
cummax([axis, skipna]) Return cumulative maximum over a DataFrame or Series axis.
cummin([axis, skipna]) Return cumulative minimum over a DataFrame or Series axis.
cumprod([axis, skipna]) Return cumulative product over a DataFrame or Series axis.
cumsum([axis, skipna]) Return cumulative sum over a DataFrame or Series axis.
describe([percentiles, include, exclude, ...]) Generate descriptive statistics.
diff([periods]) First discrete difference of element.
div(other[, level, fill_value, axis]) Return Floating division of series and other, element-wise (binary operator truediv).
divide(other[, level, fill_value, axis]) Return Floating division of series and other, element-wise (binary operator truediv).
divmod(other[, level, fill_value, axis]) Return Integer division and modulo of series and other, element-wise (binary operator divmod).
dot(other) Compute the dot product between the Series and the columns of other.
drop([labels, axis, index, columns, level, ...]) Return Series with specified index labels removed.
drop_duplicates([keep, inplace]) Return Series with duplicate values removed.
droplevel(level[, axis]) Return Series/DataFrame with requested index / column level(s) removed.
dropna([axis, inplace, how]) Return a new Series with missing values removed.
dt alias of pandas.core.indexes.accessors.CombinedDatetimelikeProperties
duplicated([keep]) Indicate duplicate Series values.
eq(other[, level, fill_value, axis]) Return Equal to of series and other, element-wise (binary operator eq).
equals(other) Test whether two objects contain the same elements.
ewm([com, span, halflife, alpha, ...]) Provide exponentially weighted (EW) calculations.
expanding([min_periods, center, axis, method]) Provide expanding window calculations.
explode([ignore_index]) Transform each element of a list-like to a row.
factorize([sort, na_sentinel]) Encode the object as an enumerated type or categorical variable.
ffill([axis, inplace, limit, downcast]) Synonym for DataFrame.fillna() with method='ffill'.
fillna([value, method, axis, inplace, ...]) Fill NA/NaN values using the specified method.
filter([items, like, regex, axis]) Subset the dataframe rows or columns according to the specified index labels.
first(offset) Select initial periods of time series data based on a date offset.
first_valid_index() Return index for first non-NA value or None, if no NA value is found.
floordiv(other[, level, fill_value, axis]) Return Integer division of series and other, element-wise (binary operator floordiv).
ge(other[, level, fill_value, axis]) Return Greater than or equal to of series and other, element-wise (binary operator ge).
get(key[, default]) Get item from object for given key (ex: DataFrame column).
groupby([by, axis, level, as_index, sort, ...]) Group Series using a mapper or by a Series of columns.
gt(other[, level, fill_value, axis]) Return Greater than of series and other, element-wise (binary operator gt).
head([n]) Return the first n rows.
hist([by, ax, grid, xlabelsize, xrot, ...]) Draw histogram of the input series using matplotlib.
idxmax([axis, skipna]) Return the row label of the maximum value.
idxmin([axis, skipna]) Return the row label of the minimum value.
infer_objects() Attempt to infer better dtypes for object columns.
info([verbose, buf, max_cols, memory_usage, ...]) Print a concise summary of a Series.
interpolate([method, axis, limit, inplace, ...]) Fill NaN values using an interpolation method.
isin(values) Whether elements in Series are contained in values.
isna() Detect missing values.
isnull() Series.isnull is an alias for Series.isna.
item() Return the first element of the underlying data as a Python scalar.
items() Lazily iterate over (index, value) tuples.
iteritems() Lazily iterate over (index, value) tuples.
keys() Return alias for index.
kurt([axis, skipna, level, numeric_only]) Return unbiased kurtosis over requested axis.
kurtosis([axis, skipna, level, numeric_only]) Return unbiased kurtosis over requested axis.
last(offset) Select final periods of time series data based on a date offset.
last_valid_index() Return index for last non-NA value or None, if no NA value is found.
le(other[, level, fill_value, axis]) Return Less than or equal to of series and other, element-wise (binary operator le).
lt(other[, level, fill_value, axis]) Return Less than of series and other, element-wise (binary operator lt).
mad([axis, skipna, level]) Return the mean absolute deviation of the values over the requested axis.
map(arg[, na_action]) Map values of Series according to an input mapping or function.
mask(cond[, other, inplace, axis, level, ...]) Replace values where the condition is True.
max([axis, skipna, level, numeric_only]) Return the maximum of the values over the requested axis.
mean([axis, skipna, level, numeric_only]) Return the mean of the values over the requested axis.
median([axis, skipna, level, numeric_only]) Return the median of the values over the requested axis.
memory_usage([index, deep]) Return the memory usage of the Series.
min([axis, skipna, level, numeric_only]) Return the minimum of the values over the requested axis.
mod(other[, level, fill_value, axis]) Return Modulo of series and other, element-wise (binary operator mod).
mode([dropna]) Return the mode(s) of the Series.
mul(other[, level, fill_value, axis]) Return Multiplication of series and other, element-wise (binary operator mul).
multiply(other[, level, fill_value, axis]) Return Multiplication of series and other, element-wise (binary operator mul).
ne(other[, level, fill_value, axis]) Return Not equal to of series and other, element-wise (binary operator ne).
nlargest([n, keep]) Return the largest n elements.
notna() Detect existing (non-missing) values.
notnull() Series.notnull is an alias for Series.notna.
nsmallest([n, keep]) Return the smallest n elements.
nunique([dropna]) Return number of unique elements in the object.
pad([axis, inplace, limit, downcast]) Synonym for DataFrame.fillna() with method='ffill'.
pct_change([periods, fill_method, limit, freq]) Percentage change between the current and a prior element.
pipe(func, *args, **kwargs) Apply chainable functions that expect Series or DataFrames.
plot alias of pandas.plotting._core.PlotAccessor
pop(item) Return item and drop it from the series.
pow(other[, level, fill_value, axis]) Return Exponential power of series and other, element-wise (binary operator pow).
prod([axis, skipna, level, numeric_only, ...]) Return the product of the values over the requested axis.
product([axis, skipna, level, numeric_only, ...]) Return the product of the values over the requested axis.
quantile([q, interpolation]) Return value at the given quantile.
radd(other[, level, fill_value, axis]) Return Addition of series and other, element-wise (binary operator radd).
rank([axis, method, numeric_only, ...]) Compute numerical data ranks (1 through n) along axis.
ravel([order]) Return the flattened underlying data as an ndarray.
rdiv(other[, level, fill_value, axis]) Return Floating division of series and other, element-wise (binary operator rtruediv).
rdivmod(other[, level, fill_value, axis]) Return Integer division and modulo of series and other, element-wise (binary operator rdivmod).
reindex(*args, **kwargs) Conform Series to new index with optional filling logic.
reindex_like(other[, method, copy, limit, ...]) Return an object with matching indices as other object.
rename([index, axis, copy, inplace, level, ...]) Alter Series index labels or name.
rename_axis([mapper, index, columns, axis, ...]) Set the name of the axis for the index or columns.
reorder_levels(order) Rearrange index levels using input order.
repeat(repeats[, axis]) Repeat elements of a Series.
replace([to_replace, value, inplace, limit, ...]) Replace values given in to_replace with value.
resample(rule[, axis, closed, label, ...]) Resample time-series data.
reset_index([level, drop, name, inplace]) Generate a new DataFrame or Series with the index reset.
rfloordiv(other[, level, fill_value, axis]) Return Integer division of series and other, element-wise (binary operator rfloordiv).
rmod(other[, level, fill_value, axis]) Return Modulo of series and other, element-wise (binary operator rmod).
rmul(other[, level, fill_value, axis]) Return Multiplication of series and other, element-wise (binary operator rmul).
rolling(window[, min_periods, center, ...]) Provide rolling window calculations.
round([decimals]) Round each value in a Series to the given number of decimals.
rpow(other[, level, fill_value, axis]) Return Exponential power of series and other, element-wise (binary operator rpow).
rsub(other[, level, fill_value, axis]) Return Subtraction of series and other, element-wise (binary operator rsub).
rtruediv(other[, level, fill_value, axis]) Return Floating division of series and other, element-wise (binary operator rtruediv).
sample([n, frac, replace, weights, ...]) Return a random sample of items from an axis of object.
searchsorted(value[, side, sorter]) Find indices where elements should be inserted to maintain order.
sem([axis, skipna, level, ddof, numeric_only]) Return unbiased standard error of the mean over requested axis.
set_axis(labels[, axis, inplace]) Assign desired index to given axis.
set_flags(*[, copy, allows_duplicate_labels]) Return a new object with updated flags.
shift([periods, freq, axis, fill_value]) Shift index by desired number of periods with an optional time freq.
skew([axis, skipna, level, numeric_only]) Return unbiased skew over requested axis.
slice_shift([periods, axis]) (DEPRECATED) Equivalent to shift without copying data.
sort_index([axis, level, ascending, ...]) Sort Series by index labels.
sort_values([axis, ascending, inplace, ...]) Sort by the values.
sparse alias of pandas.core.arrays.sparse.accessor.SparseAccessor
squeeze([axis]) Squeeze 1 dimensional axis objects into scalars.
std([axis, skipna, level, ddof, numeric_only]) Return sample standard deviation over requested axis.
str alias of pandas.core.strings.accessor.StringMethods
sub(other[, level, fill_value, axis]) Return Subtraction of series and other, element-wise (binary operator sub).
subtract(other[, level, fill_value, axis]) Return Subtraction of series and other, element-wise (binary operator sub).
sum([axis, skipna, level, numeric_only, ...]) Return the sum of the values over the requested axis.
swapaxes(axis1, axis2[, copy]) Interchange axes and swap values axes appropriately.
swaplevel([i, j, copy]) Swap levels i and j in a MultiIndex.
tail([n]) Return the last n rows.
take(indices[, axis, is_copy]) Return the elements in the given positional indices along an axis.
to_clipboard([excel, sep]) Copy object to the system clipboard.
to_csv([path_or_buf, sep, na_rep, ...]) Write object to a comma-separated values (csv) file.
to_dict([into]) Convert Series to {label -> value} dict or dict-like object.
to_excel(excel_writer[, sheet_name, na_rep, ...]) Write object to an Excel sheet.
to_frame([name]) Convert Series to DataFrame.
to_hdf(path_or_buf, key[, mode, complevel, ...]) Write the contained data to an HDF5 file using HDFStore.
to_json([path_or_buf, orient, date_format, ...]) Convert the object to a JSON string.
to_latex([buf, columns, col_space, header, ...]) Render object to a LaTeX tabular, longtable, or nested table.
to_list() Return a list of the values.
to_markdown([buf, mode, index, storage_options]) Print Series in Markdown-friendly format.
to_numpy([dtype, copy, na_value]) A NumPy ndarray representing the values in this Series or Index.
to_period([freq, copy]) Convert Series from DatetimeIndex to PeriodIndex.
to_pickle(path[, compression, protocol, ...]) Pickle (serialize) object to file.
to_sql(name, con[, schema, if_exists, ...]) Write records stored in a DataFrame to a SQL database.
to_string([buf, na_rep, float_format, ...]) Render a string representation of the Series.
to_timestamp([freq, how, copy]) Cast to DatetimeIndex of Timestamps, at beginning of period.
to_xarray() Return an xarray object from the pandas object.
tolist() Return a list of the values.
transform(func[, axis]) Call func on self producing a Series with the same axis shape as self.
transpose(*args, **kwargs) Return the transpose, which is by definition self.
truediv(other[, level, fill_value, axis]) Return Floating division of series and other, element-wise (binary operator truediv).
truncate([before, after, axis, copy]) Truncate a Series or DataFrame before and after some index value.
tshift([periods, freq, axis]) (DEPRECATED) Shift the time index, using the index's frequency if available.
tz_convert(tz[, axis, level, copy]) Convert tz-aware axis to target time zone.
tz_localize(tz[, axis, level, copy, ...]) Localize tz-naive index of a Series or DataFrame to target time zone.
unique() Return unique values of Series object.
unstack([level, fill_value]) Unstack, also known as pivot, Series with MultiIndex to produce DataFrame.
update(other) Modify Series in place using values from passed Series.
value_counts([normalize, sort, ascending, ...]) Return a Series containing counts of unique values.
var([axis, skipna, level, ddof, numeric_only]) Return unbiased variance over requested axis.
view([dtype]) Create a new view of the Series.
where(cond[, other, inplace, axis, level, ...]) Replace values where the condition is False.
xs(key[, axis, level, drop_level]) Return cross-section from the Series/DataFrame. | pandas.reference.api.pandas.series |
pandas.Series.__array__ Series.__array__(dtype=None)[source]
Return the values as a NumPy array. Users should not call this directly. Rather, it is invoked by numpy.array() and numpy.asarray(). Parameters
dtype:str or numpy.dtype, optional
The dtype to use for the resulting NumPy array. By default, the dtype is inferred from the data. Returns
numpy.ndarray
The values in the series converted to a numpy.ndarray with the specified dtype. See also array
Create a new array from data. Series.array
Zero-copy view to the array backing the Series. Series.to_numpy
Series method for similar behavior. Examples
>>> ser = pd.Series([1, 2, 3])
>>> np.asarray(ser)
array([1, 2, 3])
For timezone-aware data, the timezones may be retained with dtype='object'
>>> tzser = pd.Series(pd.date_range('2000', periods=2, tz="CET"))
>>> np.asarray(tzser, dtype="object")
array([Timestamp('2000-01-01 00:00:00+0100', tz='CET'),
Timestamp('2000-01-02 00:00:00+0100', tz='CET')],
dtype=object)
Or the values may be localized to UTC and the tzinfo discarded with dtype='datetime64[ns]'
>>> np.asarray(tzser, dtype="datetime64[ns]")
array(['1999-12-31T23:00:00.000000000', ...],
dtype='datetime64[ns]') | pandas.reference.api.pandas.series.__array__ |
pandas.Series.__iter__ Series.__iter__()[source]
Return an iterator of the values. These are each a scalar type, which is a Python scalar (for str, int, float) or a pandas scalar (for Timestamp/Timedelta/Interval/Period). Returns
iterator | pandas.reference.api.pandas.series.__iter__ |
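A small sketch (not part of the original docstring) contrasting `__iter__`, which yields values, with `items()`, which yields (index, value) pairs:

```python
import pandas as pd

s = pd.Series([10, 20, 30], index=["a", "b", "c"])

# Plain iteration invokes Series.__iter__ and yields the values only;
# use items() when the index labels are needed as well.
values = [v for v in s]
pairs = list(s.items())

print(values)  # [10, 20, 30]
print(pairs)   # [('a', 10), ('b', 20), ('c', 30)]
```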
pandas.Series.abs Series.abs()[source]
Return a Series/DataFrame with absolute numeric value of each element. This function only applies to elements that are all numeric. Returns
abs
Series/DataFrame containing the absolute value of each element. See also numpy.absolute
Calculate the absolute value element-wise. Notes For complex inputs, 1.2 + 1j, the absolute value is \(\sqrt{ a^2 + b^2 }\). Examples Absolute numeric values in a Series.
>>> s = pd.Series([-1.10, 2, -3.33, 4])
>>> s.abs()
0 1.10
1 2.00
2 3.33
3 4.00
dtype: float64
Absolute numeric values in a Series with complex numbers.
>>> s = pd.Series([1.2 + 1j])
>>> s.abs()
0 1.56205
dtype: float64
Absolute numeric values in a Series with a Timedelta element.
>>> s = pd.Series([pd.Timedelta('1 days')])
>>> s.abs()
0 1 days
dtype: timedelta64[ns]
Select rows with data closest to certain value using argsort (from StackOverflow).
>>> df = pd.DataFrame({
... 'a': [4, 5, 6, 7],
... 'b': [10, 20, 30, 40],
... 'c': [100, 50, -30, -50]
... })
>>> df
a b c
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
>>> df.loc[(df.c - 43).abs().argsort()]
a b c
1 5 20 50
0 4 10 100
2 6 30 -30
3 7 40 -50 | pandas.reference.api.pandas.series.abs |
pandas.Series.add Series.add(other, level=None, fill_value=None, axis=0)[source]
Return Addition of series and other, element-wise (binary operator add). Equivalent to series + other, but with support to substitute a fill_value for missing data in either one of the inputs. Parameters
other:Series or scalar value
fill_value:None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result of filling (at that location) will be missing.
level:int or name
Broadcast across a level, matching Index values on the passed MultiIndex level. Returns
Series
The result of the operation. See also Series.radd
Reverse of the Addition operator, see Python documentation for more details. Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.add(b, fill_value=0)
a 2.0
b 1.0
c 1.0
d 1.0
e NaN
dtype: float64 | pandas.reference.api.pandas.series.add |
pandas.Series.add_prefix Series.add_prefix(prefix)[source]
Prefix labels with string prefix. For Series, the row labels are prefixed. For DataFrame, the column labels are prefixed. Parameters
prefix:str
The string to add before each label. Returns
Series or DataFrame
New Series or DataFrame with updated labels. See also Series.add_suffix
Suffix row labels with string suffix. DataFrame.add_suffix
Suffix column labels with string suffix. Examples
>>> s = pd.Series([1, 2, 3, 4])
>>> s
0 1
1 2
2 3
3 4
dtype: int64
>>> s.add_prefix('item_')
item_0 1
item_1 2
item_2 3
item_3 4
dtype: int64
>>> df = pd.DataFrame({'A': [1, 2, 3, 4], 'B': [3, 4, 5, 6]})
>>> df
A B
0 1 3
1 2 4
2 3 5
3 4 6
>>> df.add_prefix('col_')
col_A col_B
0 1 3
1 2 4
2 3 5
3 4 6 | pandas.reference.api.pandas.series.add_prefix |
pandas.Series.add_suffix Series.add_suffix(suffix)[source]
Suffix labels with string suffix. For Series, the row labels are suffixed. For DataFrame, the column labels are suffixed. Parameters
suffix:str
The string to add after each label. Returns
Series or DataFrame
New Series or DataFrame with updated labels. See also Series.add_prefix
Prefix row labels with string prefix. DataFrame.add_prefix
Prefix column labels with string prefix. Examples
>>> s = pd.Series([1, 2, 3, 4])
>>> s
0 1
1 2
2 3
3 4
dtype: int64
>>> s.add_suffix('_item')
0_item 1
1_item 2
2_item 3
3_item 4
dtype: int64
>>> df = pd.DataFrame({'A': [1, 2, 3, 4], 'B': [3, 4, 5, 6]})
>>> df
A B
0 1 3
1 2 4
2 3 5
3 4 6
>>> df.add_suffix('_col')
A_col B_col
0 1 3
1 2 4
2 3 5
3 4 6 | pandas.reference.api.pandas.series.add_suffix |
pandas.Series.agg Series.agg(func=None, axis=0, *args, **kwargs)[source]
Aggregate using one or more operations over the specified axis. Parameters
func:function, str, list or dict
Function to use for aggregating the data. If a function, must either work when passed a Series or when passed to Series.apply. Accepted combinations are: function string function name list of functions and/or function names, e.g. [np.sum, 'mean'] dict of axis labels -> functions, function names or list of such.
axis:{0 or ‘index’}
Parameter needed for compatibility with DataFrame. *args
Positional arguments to pass to func. **kwargs
Keyword arguments to pass to func. Returns
scalar, Series or DataFrame
The return can be: scalar : when Series.agg is called with single function Series : when DataFrame.agg is called with a single function DataFrame : when DataFrame.agg is called with several functions Return scalar, Series or DataFrame. See also Series.apply
Invoke function on a Series. Series.transform
Transform function producing a Series with like indexes. Notes agg is an alias for aggregate. Use the alias. Functions that mutate the passed object can produce unexpected behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods for more details. A passed user-defined-function will be passed a Series for evaluation. Examples
>>> s = pd.Series([1, 2, 3, 4])
>>> s
0 1
1 2
2 3
3 4
dtype: int64
>>> s.agg('min')
1
>>> s.agg(['min', 'max'])
min 1
max 4
dtype: int64 | pandas.reference.api.pandas.series.agg |
pandas.Series.aggregate Series.aggregate(func=None, axis=0, *args, **kwargs)[source]
Aggregate using one or more operations over the specified axis. Parameters
func:function, str, list or dict
Function to use for aggregating the data. If a function, must either work when passed a Series or when passed to Series.apply. Accepted combinations are: function string function name list of functions and/or function names, e.g. [np.sum, 'mean'] dict of axis labels -> functions, function names or list of such.
axis:{0 or ‘index’}
Parameter needed for compatibility with DataFrame. *args
Positional arguments to pass to func. **kwargs
Keyword arguments to pass to func. Returns
scalar, Series or DataFrame
The return can be: scalar : when Series.agg is called with single function Series : when DataFrame.agg is called with a single function DataFrame : when DataFrame.agg is called with several functions Return scalar, Series or DataFrame. See also Series.apply
Invoke function on a Series. Series.transform
Transform function producing a Series with like indexes. Notes agg is an alias for aggregate. Use the alias. Functions that mutate the passed object can produce unexpected behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods for more details. A passed user-defined-function will be passed a Series for evaluation. Examples
>>> s = pd.Series([1, 2, 3, 4])
>>> s
0 1
1 2
2 3
3 4
dtype: int64
>>> s.agg('min')
1
>>> s.agg(['min', 'max'])
min 1
max 4
dtype: int64 | pandas.reference.api.pandas.series.aggregate |
pandas.Series.align Series.align(other, join='outer', axis=None, level=None, copy=True, fill_value=None, method=None, limit=None, fill_axis=0, broadcast_axis=None)[source]
Align two objects on their axes with the specified join method. Join method is specified for each axis Index. Parameters
other:DataFrame or Series
join:{‘outer’, ‘inner’, ‘left’, ‘right’}, default ‘outer’
axis:allowed axis of the other object, default None
Align on index (0), columns (1), or both (None).
level:int or level name, default None
Broadcast across a level, matching Index values on the passed MultiIndex level.
copy:bool, default True
Always returns new objects. If copy=False and no reindexing is required then original objects are returned.
fill_value:scalar, default np.NaN
Value to use for missing values. Defaults to NaN, but can be any “compatible” value.
method:{‘backfill’, ‘bfill’, ‘pad’, ‘ffill’, None}, default None
Method to use for filling holes in reindexed Series: pad / ffill: propagate last valid observation forward to next valid. backfill / bfill: use NEXT valid observation to fill gap.
limit:int, default None
If method is specified, this is the maximum number of consecutive NaN values to forward/backward fill. In other words, if there is a gap with more than this number of consecutive NaNs, it will only be partially filled. If method is not specified, this is the maximum number of entries along the entire axis where NaNs will be filled. Must be greater than 0 if not None.
fill_axis:{0 or ‘index’}, default 0
Filling axis, method and limit.
broadcast_axis:{0 or ‘index’}, default None
Broadcast values along this axis, if aligning two objects of different dimensions. Returns
(left, right):(Series, type of other)
Aligned objects. Examples
>>> df = pd.DataFrame(
... [[1, 2, 3, 4], [6, 7, 8, 9]], columns=["D", "B", "E", "A"], index=[1, 2]
... )
>>> other = pd.DataFrame(
... [[10, 20, 30, 40], [60, 70, 80, 90], [600, 700, 800, 900]],
... columns=["A", "B", "C", "D"],
... index=[2, 3, 4],
... )
>>> df
D B E A
1 1 2 3 4
2 6 7 8 9
>>> other
A B C D
2 10 20 30 40
3 60 70 80 90
4 600 700 800 900
Align on columns:
>>> left, right = df.align(other, join="outer", axis=1)
>>> left
A B C D E
1 4 2 NaN 1 3
2 9 7 NaN 6 8
>>> right
A B C D E
2 10 20 30 40 NaN
3 60 70 80 90 NaN
4 600 700 800 900 NaN
We can also align on the index:
>>> left, right = df.align(other, join="outer", axis=0)
>>> left
D B E A
1 1.0 2.0 3.0 4.0
2 6.0 7.0 8.0 9.0
3 NaN NaN NaN NaN
4 NaN NaN NaN NaN
>>> right
A B C D
1 NaN NaN NaN NaN
2 10.0 20.0 30.0 40.0
3 60.0 70.0 80.0 90.0
4 600.0 700.0 800.0 900.0
Finally, the default axis=None will align on both index and columns:
>>> left, right = df.align(other, join="outer", axis=None)
>>> left
A B C D E
1 4.0 2.0 NaN 1.0 3.0
2 9.0 7.0 NaN 6.0 8.0
3 NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN
>>> right
A B C D E
1 NaN NaN NaN NaN NaN
2 10.0 20.0 30.0 40.0 NaN
3 60.0 70.0 80.0 90.0 NaN
4 600.0 700.0 800.0 900.0 NaN | pandas.reference.api.pandas.series.align |
pandas.Series.all Series.all(axis=0, bool_only=None, skipna=True, level=None, **kwargs)[source]
Return whether all elements are True, potentially over an axis. Returns True unless there at least one element within a series or along a Dataframe axis that is False or equivalent (e.g. zero or empty). Parameters
axis:{0 or ‘index’, 1 or ‘columns’, None}, default 0
Indicate which axis or axes should be reduced. 0 / ‘index’ : reduce the index, return a Series whose index is the original column labels. 1 / ‘columns’ : reduce the columns, return a Series whose index is the original index. None : reduce all axes, return a scalar.
bool_only:bool, default None
Include only boolean columns. If None, will attempt to use everything, then use only boolean data. Not implemented for Series.
skipna:bool, default True
Exclude NA/null values. If the entire row/column is NA and skipna is True, then the result will be True, as for an empty row/column. If skipna is False, then NA are treated as True, because these are not equal to zero.
level:int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a scalar.
**kwargs:any, default None
Additional keywords have no effect but might be accepted for compatibility with NumPy. Returns
scalar or Series
If level is specified, then, Series is returned; otherwise, scalar is returned. See also Series.all
Return True if all elements are True. DataFrame.any
Return True if one (or more) elements are True. Examples Series
>>> pd.Series([True, True]).all()
True
>>> pd.Series([True, False]).all()
False
>>> pd.Series([], dtype="float64").all()
True
>>> pd.Series([np.nan]).all()
True
>>> pd.Series([np.nan]).all(skipna=False)
True
DataFrames Create a dataframe from a dictionary.
>>> df = pd.DataFrame({'col1': [True, True], 'col2': [True, False]})
>>> df
col1 col2
0 True True
1 True False
Default behaviour checks if column-wise values all return True.
>>> df.all()
col1 True
col2 False
dtype: bool
Specify axis='columns' to check if row-wise values all return True.
>>> df.all(axis='columns')
0 True
1 False
dtype: bool
Or axis=None for whether every value is True.
>>> df.all(axis=None)
False | pandas.reference.api.pandas.series.all |
pandas.Series.any Series.any(axis=0, bool_only=None, skipna=True, level=None, **kwargs)[source]
Return whether any element is True, potentially over an axis. Returns False unless there is at least one element within a series or along a Dataframe axis that is True or equivalent (e.g. non-zero or non-empty). Parameters
axis:{0 or ‘index’, 1 or ‘columns’, None}, default 0
Indicate which axis or axes should be reduced. 0 / ‘index’ : reduce the index, return a Series whose index is the original column labels. 1 / ‘columns’ : reduce the columns, return a Series whose index is the original index. None : reduce all axes, return a scalar.
bool_only:bool, default None
Include only boolean columns. If None, will attempt to use everything, then use only boolean data. Not implemented for Series.
skipna:bool, default True
Exclude NA/null values. If the entire row/column is NA and skipna is True, then the result will be False, as for an empty row/column. If skipna is False, then NA are treated as True, because these are not equal to zero.
level:int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a scalar.
**kwargs:any, default None
Additional keywords have no effect but might be accepted for compatibility with NumPy. Returns
scalar or Series
If level is specified, then, Series is returned; otherwise, scalar is returned. See also numpy.any
Numpy version of this method. Series.any
Return whether any element is True. Series.all
Return whether all elements are True. DataFrame.any
Return whether any element is True over requested axis. DataFrame.all
Return whether all elements are True over requested axis. Examples Series For Series input, the output is a scalar indicating whether any element is True.
>>> pd.Series([False, False]).any()
False
>>> pd.Series([True, False]).any()
True
>>> pd.Series([], dtype="float64").any()
False
>>> pd.Series([np.nan]).any()
False
>>> pd.Series([np.nan]).any(skipna=False)
True
DataFrame Whether each column contains at least one True element (the default).
>>> df = pd.DataFrame({"A": [1, 2], "B": [0, 2], "C": [0, 0]})
>>> df
A B C
0 1 0 0
1 2 2 0
>>> df.any()
A True
B True
C False
dtype: bool
Aggregating over the columns.
>>> df = pd.DataFrame({"A": [True, False], "B": [1, 2]})
>>> df
A B
0 True 1
1 False 2
>>> df.any(axis='columns')
0 True
1 True
dtype: bool
>>> df = pd.DataFrame({"A": [True, False], "B": [1, 0]})
>>> df
A B
0 True 1
1 False 0
>>> df.any(axis='columns')
0 True
1 False
dtype: bool
Aggregating over the entire DataFrame with axis=None.
>>> df.any(axis=None)
True
any for an empty DataFrame is an empty Series.
>>> pd.DataFrame([]).any()
Series([], dtype: bool) | pandas.reference.api.pandas.series.any |
pandas.Series.append Series.append(to_append, ignore_index=False, verify_integrity=False)[source]
Concatenate two or more Series. Parameters
to_append:Series or list/tuple of Series
Series to append with self.
ignore_index:bool, default False
If True, the resulting axis will be labeled 0, 1, …, n - 1.
verify_integrity:bool, default False
If True, raise Exception on creating index with duplicates. Returns
Series
Concatenated Series. See also concat
General function to concatenate DataFrame or Series objects. Notes Iteratively appending to a Series can be more computationally intensive than a single concatenate. A better solution is to append values to a list and then concatenate the list with the original Series all at once. Examples
>>> s1 = pd.Series([1, 2, 3])
>>> s2 = pd.Series([4, 5, 6])
>>> s3 = pd.Series([4, 5, 6], index=[3, 4, 5])
>>> s1.append(s2)
0 1
1 2
2 3
0 4
1 5
2 6
dtype: int64
>>> s1.append(s3)
0 1
1 2
2 3
3 4
4 5
5 6
dtype: int64
With ignore_index set to True:
>>> s1.append(s2, ignore_index=True)
0 1
1 2
2 3
3 4
4 5
5 6
dtype: int64
With verify_integrity set to True:
>>> s1.append(s2, verify_integrity=True)
Traceback (most recent call last):
...
ValueError: Indexes have overlapping values: [0, 1, 2] | pandas.reference.api.pandas.series.append |
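The Notes above recommend collecting pieces in a list and concatenating once instead of appending repeatedly. A minimal sketch of that pattern (not from the original docstring):

```python
import pandas as pd

pieces = [pd.Series([1, 2]), pd.Series([3]), pd.Series([4, 5])]

# One pd.concat call over the collected list avoids the repeated
# copying that iterative append() calls would incur.
combined = pd.concat(pieces, ignore_index=True)
print(combined.tolist())  # [1, 2, 3, 4, 5]
```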
pandas.Series.apply Series.apply(func, convert_dtype=True, args=(), **kwargs)[source]
Invoke function on values of Series. Can be ufunc (a NumPy function that applies to the entire Series) or a Python function that only works on single values. Parameters
func:function
Python function or NumPy ufunc to apply.
convert_dtype:bool, default True
Try to find better dtype for elementwise function results. If False, leave as dtype=object. Note that the dtype is always preserved for some extension array dtypes, such as Categorical.
args:tuple
Positional arguments passed to func after the series value. **kwargs
Additional keyword arguments passed to func. Returns
Series or DataFrame
If func returns a Series object the result will be a DataFrame. See also Series.map
For element-wise operations. Series.agg
Only perform aggregating type operations. Series.transform
Only perform transforming type operations. Notes Functions that mutate the passed object can produce unexpected behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods for more details. Examples Create a series with typical summer temperatures for each city.
>>> s = pd.Series([20, 21, 12],
... index=['London', 'New York', 'Helsinki'])
>>> s
London 20
New York 21
Helsinki 12
dtype: int64
Square the values by defining a function and passing it as an argument to apply().
>>> def square(x):
... return x ** 2
>>> s.apply(square)
London 400
New York 441
Helsinki 144
dtype: int64
Square the values by passing an anonymous function as an argument to apply().
>>> s.apply(lambda x: x ** 2)
London 400
New York 441
Helsinki 144
dtype: int64
Define a custom function that needs additional positional arguments and pass these additional arguments using the args keyword.
>>> def subtract_custom_value(x, custom_value):
... return x - custom_value
>>> s.apply(subtract_custom_value, args=(5,))
London 15
New York 16
Helsinki 7
dtype: int64
Define a custom function that takes keyword arguments and pass these arguments to apply.
>>> def add_custom_values(x, **kwargs):
... for month in kwargs:
... x += kwargs[month]
... return x
>>> s.apply(add_custom_values, june=30, july=20, august=25)
London 95
New York 96
Helsinki 87
dtype: int64
Use a function from the Numpy library.
>>> s.apply(np.log)
London 2.995732
New York 3.044522
Helsinki 2.484907
dtype: float64 | pandas.reference.api.pandas.series.apply |
pandas.Series.argmax Series.argmax(axis=None, skipna=True, *args, **kwargs)[source]
Return int position of the largest value in the Series. If the maximum is achieved in multiple locations, the first row position is returned. Parameters
axis:{None}
Dummy argument for consistency with Series.
skipna:bool, default True
Exclude NA/null values when showing the result. *args, **kwargs
Additional arguments and keywords for compatibility with NumPy. Returns
int
Row position of the maximum value. See also Series.argmax
Return position of the maximum value. Series.argmin
Return position of the minimum value. numpy.ndarray.argmax
Equivalent method for numpy arrays. Series.idxmax
Return index label of the maximum values. Series.idxmin
Return index label of the minimum values. Examples Consider dataset containing cereal calories
>>> s = pd.Series({'Corn Flakes': 100.0, 'Almond Delight': 110.0,
... 'Cinnamon Toast Crunch': 120.0, 'Cocoa Puff': 110.0})
>>> s
Corn Flakes 100.0
Almond Delight 110.0
Cinnamon Toast Crunch 120.0
Cocoa Puff 110.0
dtype: float64
>>> s.argmax()
2
>>> s.argmin()
0
The maximum cereal calories is the third element and the minimum cereal calories is the first element, since series is zero-indexed. | pandas.reference.api.pandas.series.argmax |
pandas.Series.argmin Series.argmin(axis=None, skipna=True, *args, **kwargs)[source]
Return int position of the smallest value in the Series. If the minimum is achieved in multiple locations, the first row position is returned. Parameters
axis:{None}
Dummy argument for consistency with Series.
skipna:bool, default True
Exclude NA/null values when showing the result. *args, **kwargs
Additional arguments and keywords for compatibility with NumPy. Returns
int
Row position of the minimum value. See also Series.argmin
Return position of the minimum value. Series.argmax
Return position of the maximum value. numpy.ndarray.argmin
Equivalent method for numpy arrays. Series.idxmax
Return index label of the maximum values. Series.idxmin
Return index label of the minimum values. Examples Consider dataset containing cereal calories
>>> s = pd.Series({'Corn Flakes': 100.0, 'Almond Delight': 110.0,
... 'Cinnamon Toast Crunch': 120.0, 'Cocoa Puff': 110.0})
>>> s
Corn Flakes 100.0
Almond Delight 110.0
Cinnamon Toast Crunch 120.0
Cocoa Puff 110.0
dtype: float64
>>> s.argmax()
2
>>> s.argmin()
0
The maximum cereal calories is the third element and the minimum cereal calories is the first element, since series is zero-indexed. | pandas.reference.api.pandas.series.argmin |
pandas.Series.argsort Series.argsort(axis=0, kind='quicksort', order=None)[source]
Return the integer indices that would sort the Series values. Override ndarray.argsort. Argsorts the value, omitting NA/null values, and places the result in the same locations as the non-NA values. Parameters
axis:{0 or “index”}
Has no effect but is accepted for compatibility with numpy.
kind:{‘mergesort’, ‘quicksort’, ‘heapsort’, ‘stable’}, default ‘quicksort’
Choice of sorting algorithm. See numpy.sort() for more information. ‘mergesort’ and ‘stable’ are the only stable algorithms.
order:None
Has no effect but is accepted for compatibility with numpy. Returns
Series[np.intp]
Positions of values within the sort order with -1 indicating nan values. See also numpy.ndarray.argsort
Returns the indices that would sort this array. | pandas.reference.api.pandas.series.argsort |
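The docstring above has no example, so here is a short sketch (not from the original docs) of how the returned positions relate to the sorted order:

```python
import pandas as pd

s = pd.Series([3, 1, 2], index=["a", "b", "c"])

# argsort returns integer positions, not index labels:
# taking the values at those positions yields the sorted values.
order = s.argsort()
print(order.tolist())          # [1, 2, 0]
print(s.iloc[order].tolist())  # [1, 2, 3]
```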
pandas.Series.array propertySeries.array
The ExtensionArray of the data backing this Series or Index. Returns
ExtensionArray
An ExtensionArray of the values stored within. For extension types, this is the actual array. For NumPy native types, this is a thin (no copy) wrapper around numpy.ndarray. .array differs from .values, which may require converting the data to a different form. See also Index.to_numpy
Similar method that always returns a NumPy array. Series.to_numpy
Similar method that always returns a NumPy array. Notes This table lays out the different array types for each extension dtype within pandas.
dtype array type
category Categorical
period PeriodArray
interval IntervalArray
IntegerNA IntegerArray
string StringArray
boolean BooleanArray
datetime64[ns, tz] DatetimeArray For any 3rd-party extension types, the array type will be an ExtensionArray. For all remaining dtypes .array will be an arrays.NumpyExtensionArray wrapping the actual ndarray stored within. If you absolutely need a NumPy array (possibly with copying / coercing data), then use Series.to_numpy() instead. Examples For regular NumPy types like int and float, a PandasArray is returned.
>>> pd.Series([1, 2, 3]).array
<PandasArray>
[1, 2, 3]
Length: 3, dtype: int64
For extension types, like Categorical, the actual ExtensionArray is returned
>>> ser = pd.Series(pd.Categorical(['a', 'b', 'a']))
>>> ser.array
['a', 'b', 'a']
Categories (2, object): ['a', 'b'] | pandas.reference.api.pandas.series.array |
pandas.Series.asfreq Series.asfreq(freq, method=None, how=None, normalize=False, fill_value=None)[source]
Convert time series to specified frequency. Returns the original data conformed to a new index with the specified frequency. If the index of this Series is a PeriodIndex, the new index is the result of transforming the original index with PeriodIndex.asfreq (so the original index will map one-to-one to the new index). Otherwise, the new index will be equivalent to pd.date_range(start, end,
freq=freq) where start and end are, respectively, the first and last entries in the original index (see pandas.date_range()). The values corresponding to any timesteps in the new index which were not present in the original index will be null (NaN), unless a method for filling such unknowns is provided (see the method parameter below). The resample() method is more appropriate if an operation on each group of timesteps (such as an aggregate) is necessary to represent the data at the new frequency. Parameters
freq:DateOffset or str
Frequency DateOffset or string.
method:{‘backfill’/’bfill’, ‘pad’/’ffill’}, default None
Method to use for filling holes in reindexed Series (note this does not fill NaNs that already were present): ‘pad’ / ‘ffill’: propagate last valid observation forward to next valid ‘backfill’ / ‘bfill’: use NEXT valid observation to fill.
how:{‘start’, ‘end’}, default end
For PeriodIndex only (see PeriodIndex.asfreq).
normalize:bool, default False
Whether to reset output index to midnight.
fill_value:scalar, optional
Value to use for missing values, applied during upsampling (note this does not fill NaNs that already were present). Returns
Series
Series object reindexed to the specified frequency. See also reindex
Conform DataFrame to new index with optional filling logic. Notes To learn more about the frequency strings, please see this link. Examples Start by creating a series with 4 one minute timestamps.
>>> index = pd.date_range('1/1/2000', periods=4, freq='T')
>>> series = pd.Series([0.0, None, 2.0, 3.0], index=index)
>>> df = pd.DataFrame({'s': series})
>>> df
s
2000-01-01 00:00:00 0.0
2000-01-01 00:01:00 NaN
2000-01-01 00:02:00 2.0
2000-01-01 00:03:00 3.0
Upsample the series into 30 second bins.
>>> df.asfreq(freq='30S')
s
2000-01-01 00:00:00 0.0
2000-01-01 00:00:30 NaN
2000-01-01 00:01:00 NaN
2000-01-01 00:01:30 NaN
2000-01-01 00:02:00 2.0
2000-01-01 00:02:30 NaN
2000-01-01 00:03:00 3.0
Upsample again, providing a fill value.
>>> df.asfreq(freq='30S', fill_value=9.0)
s
2000-01-01 00:00:00 0.0
2000-01-01 00:00:30 9.0
2000-01-01 00:01:00 NaN
2000-01-01 00:01:30 9.0
2000-01-01 00:02:00 2.0
2000-01-01 00:02:30 9.0
2000-01-01 00:03:00 3.0
Upsample again, providing a method.
>>> df.asfreq(freq='30S', method='bfill')
s
2000-01-01 00:00:00 0.0
2000-01-01 00:00:30 NaN
2000-01-01 00:01:00 NaN
2000-01-01 00:01:30 2.0
2000-01-01 00:02:00 2.0
2000-01-01 00:02:30 3.0
2000-01-01 00:03:00 3.0 | pandas.reference.api.pandas.series.asfreq |
pandas.Series.asof Series.asof(where, subset=None)[source]
Return the last row(s) without any NaNs before where. The last row (for each element in where, if list) without any NaN is taken. In case of a DataFrame, the last row without NaN considering only the subset of columns (if not None) If there is no good value, NaN is returned for a Series or a Series of NaN values for a DataFrame Parameters
where:date or array-like of dates
Date(s) before which the last row(s) are returned.
subset:str or array-like of str, default None
For DataFrame, if not None, only use these columns to check for NaNs. Returns
scalar, Series, or DataFrame
The return can be: scalar : when self is a Series and where is a scalar Series: when self is a Series and where is an array-like, or when self is a DataFrame and where is a scalar DataFrame : when self is a DataFrame and where is an array-like Return scalar, Series, or DataFrame. See also merge_asof
Perform an asof merge. Similar to left join. Notes Dates are assumed to be sorted. Raises if this is not the case. Examples A Series and a scalar where.
>>> s = pd.Series([1, 2, np.nan, 4], index=[10, 20, 30, 40])
>>> s
10 1.0
20 2.0
30 NaN
40 4.0
dtype: float64
>>> s.asof(20)
2.0
For a sequence where, a Series is returned. The first value is NaN, because the first element of where is before the first index value.
>>> s.asof([5, 20])
5 NaN
20 2.0
dtype: float64
Missing values are not considered. The following is 2.0, not NaN, even though NaN is at the index location for 30.
>>> s.asof(30)
2.0
Take all columns into consideration
>>> df = pd.DataFrame({'a': [10, 20, 30, 40, 50],
... 'b': [None, None, None, None, 500]},
... index=pd.DatetimeIndex(['2018-02-27 09:01:00',
... '2018-02-27 09:02:00',
... '2018-02-27 09:03:00',
... '2018-02-27 09:04:00',
... '2018-02-27 09:05:00']))
>>> df.asof(pd.DatetimeIndex(['2018-02-27 09:03:30',
... '2018-02-27 09:04:30']))
a b
2018-02-27 09:03:30 NaN NaN
2018-02-27 09:04:30 NaN NaN
Take a single column into consideration
>>> df.asof(pd.DatetimeIndex(['2018-02-27 09:03:30',
... '2018-02-27 09:04:30']),
... subset=['a'])
a b
2018-02-27 09:03:30 30.0 NaN
2018-02-27 09:04:30 40.0 NaN | pandas.reference.api.pandas.series.asof |
pandas.Series.astype Series.astype(dtype, copy=True, errors='raise')[source]
Cast a pandas object to a specified dtype dtype. Parameters
dtype:data type, or dict of column name -> data type
Use a numpy.dtype or Python type to cast entire pandas object to the same type. Alternatively, use {col: dtype, …}, where col is a column label and dtype is a numpy.dtype or Python type to cast one or more of the DataFrame’s columns to column-specific types.
copy:bool, default True
Return a copy when copy=True (be very careful setting copy=False as changes to values then may propagate to other pandas objects).
errors:{‘raise’, ‘ignore’}, default ‘raise’
Control raising of exceptions on invalid data for provided dtype. raise : allow exceptions to be raised ignore : suppress exceptions. On error return original object. Returns
casted:same type as caller
See also to_datetime
Convert argument to datetime. to_timedelta
Convert argument to timedelta. to_numeric
Convert argument to a numeric type. numpy.ndarray.astype
Cast a numpy array to a specified type. Notes Deprecated since version 1.3.0: Using astype to convert from timezone-naive dtype to timezone-aware dtype is deprecated and will raise in a future version. Use Series.dt.tz_localize() instead. Examples Create a DataFrame:
>>> d = {'col1': [1, 2], 'col2': [3, 4]}
>>> df = pd.DataFrame(data=d)
>>> df.dtypes
col1 int64
col2 int64
dtype: object
Cast all columns to int32:
>>> df.astype('int32').dtypes
col1 int32
col2 int32
dtype: object
Cast col1 to int32 using a dictionary:
>>> df.astype({'col1': 'int32'}).dtypes
col1 int32
col2 int64
dtype: object
Create a series:
>>> ser = pd.Series([1, 2], dtype='int32')
>>> ser
0 1
1 2
dtype: int32
>>> ser.astype('int64')
0 1
1 2
dtype: int64
Convert to categorical type:
>>> ser.astype('category')
0 1
1 2
dtype: category
Categories (2, int64): [1, 2]
Convert to ordered categorical type with custom ordering:
>>> from pandas.api.types import CategoricalDtype
>>> cat_dtype = CategoricalDtype(
... categories=[2, 1], ordered=True)
>>> ser.astype(cat_dtype)
0 1
1 2
dtype: category
Categories (2, int64): [2 < 1]
Note that using copy=False and changing data on a new pandas object may propagate changes:
>>> s1 = pd.Series([1, 2])
>>> s2 = s1.astype('int64', copy=False)
>>> s2[0] = 10
>>> s1 # note that s1[0] has changed too
0 10
1 2
dtype: int64
Create a series of dates:
>>> ser_date = pd.Series(pd.date_range('20200101', periods=3))
>>> ser_date
0 2020-01-01
1 2020-01-02
2 2020-01-03
dtype: datetime64[ns] | pandas.reference.api.pandas.series.astype |
pandas.Series.at propertySeries.at
Access a single value for a row/column label pair. Similar to loc, in that both provide label-based lookups. Use at if you only need to get or set a single value in a DataFrame or Series. Raises
KeyError
If ‘label’ does not exist in DataFrame. See also DataFrame.iat
Access a single value for a row/column pair by integer position. DataFrame.loc
Access a group of rows and columns by label(s). Series.at
Access a single value using a label. Examples
>>> df = pd.DataFrame([[0, 2, 3], [0, 4, 1], [10, 20, 30]],
... index=[4, 5, 6], columns=['A', 'B', 'C'])
>>> df
A B C
4 0 2 3
5 0 4 1
6 10 20 30
Get value at specified row/column pair
>>> df.at[4, 'B']
2
Set value at specified row/column pair
>>> df.at[4, 'B'] = 10
>>> df.at[4, 'B']
10
Get value within a Series
>>> df.loc[5].at['B']
4 | pandas.reference.api.pandas.series.at |
pandas.Series.at_time Series.at_time(time, asof=False, axis=None)[source]
Select values at particular time of day (e.g., 9:30AM). Parameters
time:datetime.time or str
axis:{0 or ‘index’, 1 or ‘columns’}, default 0
Returns
Series or DataFrame
Raises
TypeError
If the index is not a DatetimeIndex See also between_time
Select values between particular times of the day. first
Select initial periods of time series based on a date offset. last
Select final periods of time series based on a date offset. DatetimeIndex.indexer_at_time
Get just the index locations for values at particular time of the day. Examples
>>> i = pd.date_range('2018-04-09', periods=4, freq='12H')
>>> ts = pd.DataFrame({'A': [1, 2, 3, 4]}, index=i)
>>> ts
A
2018-04-09 00:00:00 1
2018-04-09 12:00:00 2
2018-04-10 00:00:00 3
2018-04-10 12:00:00 4
>>> ts.at_time('12:00')
A
2018-04-09 12:00:00 2
2018-04-10 12:00:00 4 | pandas.reference.api.pandas.series.at_time |
pandas.Series.attrs propertySeries.attrs
Dictionary of global attributes of this dataset. Warning attrs is experimental and may change without warning. See also DataFrame.flags
Global flags applying to this object. | pandas.reference.api.pandas.series.attrs |
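Since the section above has no example, here is a minimal sketch: attrs is an ordinary dict that can hold arbitrary metadata (remember the feature is experimental, and propagation of attrs through operations is not guaranteed):

```python
import pandas as pd

s = pd.Series([1, 2, 3])
s.attrs["source"] = "sensor-A"  # attach arbitrary metadata to this object

# attrs is just a dict attached to the Series.
print(s.attrs)  # {'source': 'sensor-A'}
```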
pandas.Series.autocorr Series.autocorr(lag=1)[source]
Compute the lag-N autocorrelation. This method computes the Pearson correlation between the Series and its shifted self. Parameters
lag:int, default 1
Number of lags to apply before performing autocorrelation. Returns
float
The Pearson correlation between self and self.shift(lag). See also Series.corr
Compute the correlation between two Series. Series.shift
Shift index by desired number of periods. DataFrame.corr
Compute pairwise correlation of columns. DataFrame.corrwith
Compute pairwise correlation between rows or columns of two DataFrame objects. Notes If the Pearson correlation is not well defined return ‘NaN’. Examples
>>> s = pd.Series([0.25, 0.5, 0.2, -0.05])
>>> s.autocorr()
0.10355...
>>> s.autocorr(lag=2)
-0.99999...
If the Pearson correlation is not well defined, then ‘NaN’ is returned.
>>> s = pd.Series([1, 0, 0, 0])
>>> s.autocorr()
nan | pandas.reference.api.pandas.series.autocorr |
pandas.Series.axes propertySeries.axes
Return a list of the row axis labels. | pandas.reference.api.pandas.series.axes |
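A quick illustration of the property above: for a Series, .axes is always a one-element list holding the row index.

```python
import pandas as pd

s = pd.Series([1, 2, 3], index=["a", "b", "c"])

print(len(s.axes))      # 1 -- a Series has a single axis
print(list(s.axes[0]))  # ['a', 'b', 'c']
```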
pandas.Series.backfill Series.backfill(axis=None, inplace=False, limit=None, downcast=None)[source]
Synonym for DataFrame.fillna() with method='bfill'. Returns
Series/DataFrame or None
Object with missing values filled or None if inplace=True. | pandas.reference.api.pandas.series.backfill |
pandas.Series.between Series.between(left, right, inclusive='both')[source]
Return boolean Series equivalent to left <= series <= right. This function returns a boolean vector containing True wherever the corresponding Series element is between the boundary values left and right. NA values are treated as False. Parameters
left:scalar or list-like
Left boundary.
right:scalar or list-like
Right boundary.
inclusive:{“both”, “neither”, “left”, “right”}
Include boundaries. Whether to set each bound as closed or open. Changed in version 1.3.0. Returns
Series
Series representing whether each element is between left and right (inclusive). See also Series.gt
Greater than of series and other. Series.lt
Less than of series and other. Notes This function is equivalent to (left <= ser) & (ser <= right) Examples
>>> s = pd.Series([2, 0, 4, 8, np.nan])
Boundary values are included by default:
>>> s.between(1, 4)
0 True
1 False
2 True
3 False
4 False
dtype: bool
With inclusive set to "neither" boundary values are excluded:
>>> s.between(1, 4, inclusive="neither")
0 True
1 False
2 False
3 False
4 False
dtype: bool
left and right can be any scalar value:
>>> s = pd.Series(['Alice', 'Bob', 'Carol', 'Eve'])
>>> s.between('Anna', 'Daniel')
0 False
1 True
2 True
3 False
dtype: bool | pandas.reference.api.pandas.series.between |
pandas.Series.between_time Series.between_time(start_time, end_time, include_start=NoDefault.no_default, include_end=NoDefault.no_default, inclusive=None, axis=None)[source]
Select values between particular times of the day (e.g., 9:00-9:30 AM). By setting start_time to be later than end_time, you can get the times that are not between the two times. Parameters
start_time:datetime.time or str
Initial time as a time filter limit.
end_time:datetime.time or str
End time as a time filter limit.
include_start:bool, default True
Whether the start time needs to be included in the result. Deprecated since version 1.4.0: Arguments include_start and include_end have been deprecated to standardize boundary inputs. Use inclusive instead, to set each bound as closed or open.
include_end:bool, default True
Whether the end time needs to be included in the result. Deprecated since version 1.4.0: Arguments include_start and include_end have been deprecated to standardize boundary inputs. Use inclusive instead, to set each bound as closed or open.
inclusive:{“both”, “neither”, “left”, “right”}, default “both”
Include boundaries; whether to set each bound as closed or open.
axis:{0 or ‘index’, 1 or ‘columns’}, default 0
Determine range time on index or columns value. Returns
Series or DataFrame
Data from the original object filtered to the specified dates range. Raises
TypeError
If the index is not a DatetimeIndex See also at_time
Select values at a particular time of the day. first
Select initial periods of time series based on a date offset. last
Select final periods of time series based on a date offset. DatetimeIndex.indexer_between_time
Get just the index locations for values between particular times of the day. Examples
>>> i = pd.date_range('2018-04-09', periods=4, freq='1D20min')
>>> ts = pd.DataFrame({'A': [1, 2, 3, 4]}, index=i)
>>> ts
A
2018-04-09 00:00:00 1
2018-04-10 00:20:00 2
2018-04-11 00:40:00 3
2018-04-12 01:00:00 4
>>> ts.between_time('0:15', '0:45')
A
2018-04-10 00:20:00 2
2018-04-11 00:40:00 3
You get the times that are not between two times by setting start_time later than end_time:
>>> ts.between_time('0:45', '0:15')
A
2018-04-09 00:00:00 1
2018-04-12 01:00:00 4 | pandas.reference.api.pandas.series.between_time |
pandas.Series.bfill Series.bfill(axis=None, inplace=False, limit=None, downcast=None)[source]
Synonym for DataFrame.fillna() with method='bfill'. Returns
Series/DataFrame or None
Object with missing values filled or None if inplace=True. | pandas.reference.api.pandas.series.bfill |
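A short sketch of the backward-fill behavior (this also applies to the backfill synonym above):

```python
import numpy as np
import pandas as pd

s = pd.Series([np.nan, 2.0, np.nan, np.nan, 5.0])

# Each NaN is filled with the NEXT valid observation.
filled = s.bfill()
print(filled.tolist())  # [2.0, 2.0, 5.0, 5.0, 5.0]

# limit caps how many consecutive NaNs are filled, counted backward
# from the valid observation, so the two-NaN gap keeps one NaN.
limited = s.bfill(limit=1)
print(limited.isna().tolist())  # [False, False, True, False, False]
```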
pandas.Series.bool Series.bool()[source]
Return the bool of a single element Series or DataFrame. This must be a boolean scalar value, either True or False. It will raise a ValueError if the Series or DataFrame does not have exactly 1 element, or that element is not boolean (integer values 0 and 1 will also raise an exception). Returns
bool
The value in the Series or DataFrame. See also Series.astype
Change the data type of a Series, including to boolean. DataFrame.astype
Change the data type of a DataFrame, including to boolean. numpy.bool_
NumPy boolean data type, used by pandas for boolean values. Examples The method will only work for single element objects with a boolean value:
>>> pd.Series([True]).bool()
True
>>> pd.Series([False]).bool()
False
>>> pd.DataFrame({'col': [True]}).bool()
True
>>> pd.DataFrame({'col': [False]}).bool()
False | pandas.reference.api.pandas.series.bool |
pandas.Series.cat Series.cat()[source]
Accessor object for categorical properties of the Series values. Be aware that assigning to categories is an inplace operation, while all methods return new categorical data by default (but can be called with inplace=True). Parameters
data:Series or CategoricalIndex
Examples
>>> s = pd.Series(list("abbccc")).astype("category")
>>> s
0 a
1 b
2 b
3 c
4 c
5 c
dtype: category
Categories (3, object): ['a', 'b', 'c']
>>> s.cat.categories
Index(['a', 'b', 'c'], dtype='object')
>>> s.cat.rename_categories(list("cba"))
0 c
1 b
2 b
3 a
4 a
5 a
dtype: category
Categories (3, object): ['c', 'b', 'a']
>>> s.cat.reorder_categories(list("cba"))
0 a
1 b
2 b
3 c
4 c
5 c
dtype: category
Categories (3, object): ['c', 'b', 'a']
>>> s.cat.add_categories(["d", "e"])
0 a
1 b
2 b
3 c
4 c
5 c
dtype: category
Categories (5, object): ['a', 'b', 'c', 'd', 'e']
>>> s.cat.remove_categories(["a", "c"])
0 NaN
1 b
2 b
3 NaN
4 NaN
5 NaN
dtype: category
Categories (1, object): ['b']
>>> s1 = s.cat.add_categories(["d", "e"])
>>> s1.cat.remove_unused_categories()
0 a
1 b
2 b
3 c
4 c
5 c
dtype: category
Categories (3, object): ['a', 'b', 'c']
>>> s.cat.set_categories(list("abcde"))
0 a
1 b
2 b
3 c
4 c
5 c
dtype: category
Categories (5, object): ['a', 'b', 'c', 'd', 'e']
>>> s.cat.as_ordered()
0 a
1 b
2 b
3 c
4 c
5 c
dtype: category
Categories (3, object): ['a' < 'b' < 'c']
>>> s.cat.as_unordered()
0 a
1 b
2 b
3 c
4 c
5 c
dtype: category
Categories (3, object): ['a', 'b', 'c'] | pandas.reference.api.pandas.series.cat |
pandas.Series.cat.add_categories Series.cat.add_categories(*args, **kwargs)[source]
Add new categories. new_categories will be included at the last/highest place in the categories and will be unused directly after this call. Parameters
new_categories:category or list-like of category
The new categories to be included.
inplace:bool, default False
Whether or not to add the categories inplace or return a copy of this categorical with added categories. Deprecated since version 1.3.0. Returns
cat:Categorical or None
Categorical with new categories added or None if inplace=True. Raises
ValueError
If the new categories include old categories or do not validate as categories See also rename_categories
Rename categories. reorder_categories
Reorder categories. remove_categories
Remove the specified categories. remove_unused_categories
Remove categories which are not used. set_categories
Set the categories to the specified ones. Examples
>>> c = pd.Categorical(['c', 'b', 'c'])
>>> c
['c', 'b', 'c']
Categories (2, object): ['b', 'c']
>>> c.add_categories(['d', 'a'])
['c', 'b', 'c']
Categories (4, object): ['b', 'c', 'd', 'a'] | pandas.reference.api.pandas.series.cat.add_categories |
pandas.Series.cat.as_ordered Series.cat.as_ordered(*args, **kwargs)[source]
Set the Categorical to be ordered. Parameters
inplace:bool, default False
Whether or not to set the ordered attribute in-place or return a copy of this categorical with ordered set to True. Returns
Categorical or None
Ordered Categorical or None if inplace=True. | pandas.reference.api.pandas.series.cat.as_ordered |
pandas.Series.cat.as_unordered Series.cat.as_unordered(*args, **kwargs)[source]
Set the Categorical to be unordered. Parameters
inplace:bool, default False
Whether or not to set the ordered attribute in-place or return a copy of this categorical with ordered set to False. Returns
Categorical or None
Unordered Categorical or None if inplace=True. | pandas.reference.api.pandas.series.cat.as_unordered |
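A runnable sketch covering both as_ordered above and its as_unordered counterpart (with the default inplace=False, each returns a new Series):

```python
import pandas as pd

s = pd.Series(["a", "b", "a"]).astype("category")
print(s.cat.ordered)  # False

ordered = s.cat.as_ordered()
print(ordered.cat.ordered)  # True
# Ordered categoricals support min/max and comparisons.
print(ordered.min())  # 'a'

unordered = ordered.cat.as_unordered()
print(unordered.cat.ordered)  # False
```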
pandas.Series.cat.categories Series.cat.categories
The categories of this categorical. Setting assigns new values to each category (effectively a rename of each individual category). The assigned value has to be a list-like object. All items must be unique and the number of items in the new categories must be the same as the number of items in the old categories. Assigning to categories is an inplace operation! Raises
ValueError
If the new categories do not validate as categories or if the number of new categories does not equal the number of old categories. See also rename_categories
Rename categories. reorder_categories
Reorder categories. add_categories
Add new categories. remove_categories
Remove the specified categories. remove_unused_categories
Remove categories which are not used. set_categories
Set the categories to the specified ones. | pandas.reference.api.pandas.series.cat.categories |
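As a sketch of the property above: reading .cat.categories returns an Index; rename_categories is the copy-returning alternative to assigning new categories in place.

```python
import pandas as pd

s = pd.Series(["a", "b", "a", "c"]).astype("category")

# Reading the categories returns an Index.
print(list(s.cat.categories))  # ['a', 'b', 'c']

# Copy-returning rename (values follow their categories).
renamed = s.cat.rename_categories(["x", "y", "z"])
print(renamed.tolist())  # ['x', 'y', 'x', 'z']
```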
pandas.Series.cat.codes Series.cat.codes
Return Series of codes as well as the index. | pandas.reference.api.pandas.series.cat.codes |
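A minimal illustration of the codes described above: each code is the position of the value within s.cat.categories (missing values would be encoded as -1).

```python
import pandas as pd

s = pd.Series(["b", "a", "c", "a"]).astype("category")

# Categories are ['a', 'b', 'c'], so 'b' -> 1, 'a' -> 0, 'c' -> 2.
print(s.cat.codes.tolist())  # [1, 0, 2, 0]
```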
pandas.Series.cat.ordered Series.cat.ordered
Whether the categories have an ordered relationship. | pandas.reference.api.pandas.series.cat.ordered |
pandas.Series.cat.remove_categories Series.cat.remove_categories(*args, **kwargs)[source]
Remove the specified categories. removals must be included in the old categories. Values which were in the removed categories will be set to NaN Parameters
removals:category or list of categories
The categories which should be removed.
inplace:bool, default False
Whether or not to remove the categories inplace or return a copy of this categorical with removed categories. Deprecated since version 1.3.0. Returns
cat:Categorical or None
Categorical with removed categories or None if inplace=True. Raises
ValueError
If the removals are not contained in the categories See also rename_categories
Rename categories. reorder_categories
Reorder categories. add_categories
Add new categories. remove_unused_categories
Remove categories which are not used. set_categories
Set the categories to the specified ones. Examples
>>> c = pd.Categorical(['a', 'c', 'b', 'c', 'd'])
>>> c
['a', 'c', 'b', 'c', 'd']
Categories (4, object): ['a', 'b', 'c', 'd']
>>> c.remove_categories(['d', 'a'])
[NaN, 'c', 'b', 'c', NaN]
Categories (2, object): ['b', 'c'] | pandas.reference.api.pandas.series.cat.remove_categories |
pandas.Series.cat.remove_unused_categories Series.cat.remove_unused_categories(*args, **kwargs)[source]
Remove categories which are not used. Parameters
inplace:bool, default False
Whether or not to drop unused categories inplace or return a copy of this categorical with unused categories dropped. Deprecated since version 1.2.0. Returns
cat:Categorical or None
Categorical with unused categories dropped or None if inplace=True. See also rename_categories
Rename categories. reorder_categories
Reorder categories. add_categories
Add new categories. remove_categories
Remove the specified categories. set_categories
Set the categories to the specified ones. Examples
>>> c = pd.Categorical(['a', 'c', 'b', 'c', 'd'])
>>> c
['a', 'c', 'b', 'c', 'd']
Categories (4, object): ['a', 'b', 'c', 'd']
>>> c[2] = 'a'
>>> c[4] = 'c'
>>> c
['a', 'c', 'a', 'c', 'c']
Categories (4, object): ['a', 'b', 'c', 'd']
>>> c.remove_unused_categories()
['a', 'c', 'a', 'c', 'c']
Categories (2, object): ['a', 'c'] | pandas.reference.api.pandas.series.cat.remove_unused_categories |
pandas.Series.cat.rename_categories Series.cat.rename_categories(*args, **kwargs)[source]
Rename categories. Parameters
new_categories:list-like, dict-like or callable
New categories which will replace old categories. list-like: all items must be unique and the number of items in the new categories must match the existing number of categories. dict-like: specifies a mapping from old categories to new. Categories not contained in the mapping are passed through and extra categories in the mapping are ignored. callable : a callable that is called on all items in the old categories and whose return values comprise the new categories.
inplace:bool, default False
Whether or not to rename the categories inplace or return a copy of this categorical with renamed categories. Deprecated since version 1.3.0. Returns
cat:Categorical or None
Categorical with renamed categories or None if inplace=True. Raises
ValueError
If new categories are list-like and do not have the same number of items as the current categories, or do not validate as categories. See also reorder_categories
Reorder categories. add_categories
Add new categories. remove_categories
Remove the specified categories. remove_unused_categories
Remove categories which are not used. set_categories
Set the categories to the specified ones. Examples
>>> c = pd.Categorical(['a', 'a', 'b'])
>>> c.rename_categories([0, 1])
[0, 0, 1]
Categories (2, int64): [0, 1]
For dict-like new_categories, extra keys are ignored and categories not in the dictionary are passed through
>>> c.rename_categories({'a': 'A', 'c': 'C'})
['A', 'A', 'b']
Categories (2, object): ['A', 'b']
You may also provide a callable to create the new categories
>>> c.rename_categories(lambda x: x.upper())
['A', 'A', 'B']
Categories (2, object): ['A', 'B'] | pandas.reference.api.pandas.series.cat.rename_categories |
pandas.Series.cat.reorder_categories Series.cat.reorder_categories(*args, **kwargs)[source]
Reorder categories as specified in new_categories. new_categories need to include all old categories and no new category items. Parameters
new_categories:Index-like
The categories in new order.
ordered:bool, optional
Whether or not the categorical is treated as an ordered categorical. If not given, do not change the ordered information.
inplace:bool, default False
Whether or not to reorder the categories inplace or return a copy of this categorical with reordered categories. Deprecated since version 1.3.0. Returns
cat:Categorical or None
Categorical with reordered categories or None if inplace=True. Raises
ValueError
If the new categories do not contain all old category items or any new ones See also rename_categories
Rename categories. add_categories
Add new categories. remove_categories
Remove the specified categories. remove_unused_categories
Remove categories which are not used. set_categories
Set the categories to the specified ones. | pandas.reference.api.pandas.series.cat.reorder_categories |
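Since reorder_categories has no example above, here is a short sketch: the new order must contain exactly the existing categories, and the values themselves are unchanged.

```python
import pandas as pd

s = pd.Series(["a", "b", "c"]).astype("category")

# Reverse the category order and make the categorical ordered.
r = s.cat.reorder_categories(["c", "b", "a"], ordered=True)
print(list(r.cat.categories))  # ['c', 'b', 'a']
print(r.tolist())              # values unchanged: ['a', 'b', 'c']
print(r.max())                 # 'a' is now the highest category
```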
pandas.Series.cat.set_categories Series.cat.set_categories(*args, **kwargs)[source]
Set the categories to the specified new_categories. new_categories can include new categories (which will result in unused categories) or remove old categories (which results in values set to NaN). If rename==True, the categories will simply be renamed (fewer or more items than in the old categories will result in values set to NaN or in unused categories, respectively). This method can be used to perform more than one action of adding, removing, and reordering simultaneously and is therefore faster than performing the individual steps via the more specialised methods. On the other hand, this method does not perform checks (e.g., whether the old categories are included in the new categories on a reorder), which can result in surprising changes, for example when using special string dtypes, which do not consider an S1 string equal to a single-char Python string. Parameters
new_categories:Index-like
The categories in new order.
ordered:bool, default False
Whether or not the categorical is treated as an ordered categorical. If not given, do not change the ordered information.
rename:bool, default False
Whether or not the new_categories should be considered as a rename of the old categories or as reordered categories.
inplace:bool, default False
Whether or not to reorder the categories in-place or return a copy of this categorical with reordered categories. Deprecated since version 1.3.0. Returns
Categorical with reordered categories, or None if inplace=True.
Raises
ValueError
If new_categories does not validate as categories See also rename_categories
Rename categories. reorder_categories
Reorder categories. add_categories
Add new categories. remove_categories
Remove the specified categories. remove_unused_categories
Remove categories which are not used. | pandas.reference.api.pandas.series.cat.set_categories |
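A short sketch of the add/remove behavior described above: values whose category is dropped become NaN, while newly added categories remain unused.

```python
import pandas as pd

s = pd.Series(["a", "b", "c"]).astype("category")

# 'a' is not in the new categories, so that value becomes NaN;
# 'd' is added but unused.
t = s.cat.set_categories(["b", "c", "d"])
print(t.isna().tolist())       # [True, False, False]
print(list(t.cat.categories))  # ['b', 'c', 'd']
```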
pandas.Series.clip Series.clip(lower=None, upper=None, axis=None, inplace=False, *args, **kwargs)[source]
Trim values at input threshold(s). Assigns values outside boundary to boundary values. Thresholds can be singular values or array like, and in the latter case the clipping is performed element-wise in the specified axis. Parameters
lower:float or array-like, default None
Minimum threshold value. All values below this threshold will be set to it. A missing threshold (e.g NA) will not clip the value.
upper:float or array-like, default None
Maximum threshold value. All values above this threshold will be set to it. A missing threshold (e.g NA) will not clip the value.
axis:int or str axis name, optional
Align object with lower and upper along the given axis.
inplace:bool, default False
Whether to perform the operation in place on the data. *args, **kwargs
Additional keywords have no effect but might be accepted for compatibility with numpy. Returns
Series or DataFrame or None
Same type as calling object with the values outside the clip boundaries replaced or None if inplace=True. See also Series.clip
Trim values at input threshold in series. DataFrame.clip
Trim values at input threshold in dataframe. numpy.clip
Clip (limit) the values in an array. Examples
>>> data = {'col_0': [9, -3, 0, -1, 5], 'col_1': [-2, -7, 6, 8, -5]}
>>> df = pd.DataFrame(data)
>>> df
col_0 col_1
0 9 -2
1 -3 -7
2 0 6
3 -1 8
4 5 -5
Clips per column using lower and upper thresholds:
>>> df.clip(-4, 6)
col_0 col_1
0 6 -2
1 -3 -4
2 0 6
3 -1 6
4 5 -4
Clips using specific lower and upper thresholds per column element:
>>> t = pd.Series([2, -4, -1, 6, 3])
>>> t
0 2
1 -4
2 -1
3 6
4 3
dtype: int64
>>> df.clip(t, t + 4, axis=0)
col_0 col_1
0 6 2
1 -3 -4
2 0 3
3 6 8
4 5 3
Clips using specific lower threshold per column element, with missing values:
>>> t = pd.Series([2, -4, np.NaN, 6, 3])
>>> t
0 2.0
1 -4.0
2 NaN
3 6.0
4 3.0
dtype: float64
>>> df.clip(t, axis=0)
col_0 col_1
0 9 2
1 -3 -4
2 0 6
3 6 8
4 5 3 | pandas.reference.api.pandas.series.clip |
pandas.Series.combine Series.combine(other, func, fill_value=None)[source]
Combine the Series with a Series or scalar according to func. Combine the Series and other using func to perform elementwise selection for combined Series. fill_value is assumed when value is missing at some index from one of the two objects being combined. Parameters
other:Series or scalar
The value(s) to be combined with the Series.
func:function
Function that takes two scalars as inputs and returns an element.
fill_value:scalar, optional
The value to assume when an index is missing from one Series or the other. The default specifies to use the appropriate NaN value for the underlying dtype of the Series. Returns
Series
The result of combining the Series with the other object. See also Series.combine_first
Combine Series values, choosing the calling Series’ values first. Examples Consider 2 Datasets s1 and s2 containing highest clocked speeds of different birds.
>>> s1 = pd.Series({'falcon': 330.0, 'eagle': 160.0})
>>> s1
falcon 330.0
eagle 160.0
dtype: float64
>>> s2 = pd.Series({'falcon': 345.0, 'eagle': 200.0, 'duck': 30.0})
>>> s2
falcon 345.0
eagle 200.0
duck 30.0
dtype: float64
Now, combine the two datasets to view the highest speed of each bird across both:
>>> s1.combine(s2, max)
duck NaN
eagle 200.0
falcon 345.0
dtype: float64
In the previous example, the resulting value for duck is missing, because the maximum of a NaN and a float is a NaN. Setting fill_value=0 treats the missing entry as 0, so the maximum returned is the value present in one of the datasets.
>>> s1.combine(s2, max, fill_value=0)
duck 30.0
eagle 200.0
falcon 345.0
dtype: float64
pandas.Series.combine_first Series.combine_first(other)[source]
Update null elements with value in the same location in ‘other’. Combine two Series objects by filling null values in one Series with non-null values from the other Series. Result index will be the union of the two indexes. Parameters
other:Series
The value(s) to be used for filling null values. Returns
Series
The result of combining the provided Series with the other object. See also Series.combine
Perform element-wise operation on two Series using a given function. Examples
>>> s1 = pd.Series([1, np.nan])
>>> s2 = pd.Series([3, 4, 5])
>>> s1.combine_first(s2)
0 1.0
1 4.0
2 5.0
dtype: float64
Null values still persist if the location of that null value does not exist in other
>>> s1 = pd.Series({'falcon': np.nan, 'eagle': 160.0})
>>> s2 = pd.Series({'eagle': 200.0, 'duck': 30.0})
>>> s1.combine_first(s2)
duck 30.0
eagle 160.0
falcon NaN
dtype: float64
pandas.Series.compare Series.compare(other, align_axis=1, keep_shape=False, keep_equal=False)[source]
Compare to another Series and show the differences. New in version 1.1.0. Parameters
other:Series
Object to compare with.
align_axis:{0 or ‘index’, 1 or ‘columns’}, default 1
Determine which axis to align the comparison on.
0, or ‘index’:Resulting differences are stacked vertically
with rows drawn alternately from self and other.
1, or ‘columns’:Resulting differences are aligned horizontally
with columns drawn alternately from self and other.
keep_shape:bool, default False
If true, all rows and columns are kept. Otherwise, only the ones with different values are kept.
keep_equal:bool, default False
If true, the result keeps values that are equal. Otherwise, equal values are shown as NaNs. Returns
Series or DataFrame
If axis is 0 or ‘index’ the result will be a Series. The resulting index will be a MultiIndex with ‘self’ and ‘other’ stacked alternately at the inner level. If axis is 1 or ‘columns’ the result will be a DataFrame. It will have two columns namely ‘self’ and ‘other’. See also DataFrame.compare
Compare with another DataFrame and show differences. Notes Matching NaNs will not appear as a difference. Examples
>>> s1 = pd.Series(["a", "b", "c", "d", "e"])
>>> s2 = pd.Series(["a", "a", "c", "b", "e"])
Align the differences on columns
>>> s1.compare(s2)
self other
1 b a
3 d b
Stack the differences on indices
>>> s1.compare(s2, align_axis=0)
1 self b
other a
3 self d
other b
dtype: object
Keep all original rows
>>> s1.compare(s2, keep_shape=True)
self other
0 NaN NaN
1 b a
2 NaN NaN
3 d b
4 NaN NaN
Keep all original rows and also all original values
>>> s1.compare(s2, keep_shape=True, keep_equal=True)
self other
0 a a
1 b a
2 c c
3 d b
4 e e
pandas.Series.convert_dtypes Series.convert_dtypes(infer_objects=True, convert_string=True, convert_integer=True, convert_boolean=True, convert_floating=True)[source]
Convert columns to best possible dtypes using dtypes supporting pd.NA. New in version 1.0.0. Parameters
infer_objects:bool, default True
Whether object dtypes should be converted to the best possible types.
convert_string:bool, default True
Whether object dtypes should be converted to StringDtype().
convert_integer:bool, default True
Whether, if possible, conversion can be done to integer extension types.
convert_boolean:bool, default True
Whether object dtypes should be converted to BooleanDtype().
convert_floating:bool, default True
Whether, if possible, conversion can be done to floating extension types. If convert_integer is also True, preference will be given to integer dtypes if the floats can be faithfully cast to integers. New in version 1.2.0. Returns
Series or DataFrame
Copy of input object with new dtype. See also infer_objects
Infer dtypes of objects. to_datetime
Convert argument to datetime. to_timedelta
Convert argument to timedelta. to_numeric
Convert argument to a numeric type. Notes By default, convert_dtypes will attempt to convert a Series (or each Series in a DataFrame) to dtypes that support pd.NA. By using the options convert_string, convert_integer, convert_boolean and convert_floating, it is possible to turn off individual conversions to StringDtype, the integer extension types, BooleanDtype or floating extension types, respectively. For object-dtyped columns, if infer_objects is True, use the inference rules as during normal Series/DataFrame construction. Then, if possible, convert to StringDtype, BooleanDtype or an appropriate integer or floating extension type, otherwise leave as object. If the dtype is integer, convert to an appropriate integer extension type. If the dtype is numeric, and consists of all integers, convert to an appropriate integer extension type. Otherwise, convert to an appropriate floating extension type. Changed in version 1.2: Starting with pandas 1.2, this method also converts float columns to the nullable floating extension type. In the future, as new dtypes are added that support pd.NA, the results of this method will change to support those new dtypes. Examples
>>> df = pd.DataFrame(
... {
... "a": pd.Series([1, 2, 3], dtype=np.dtype("int32")),
... "b": pd.Series(["x", "y", "z"], dtype=np.dtype("O")),
... "c": pd.Series([True, False, np.nan], dtype=np.dtype("O")),
... "d": pd.Series(["h", "i", np.nan], dtype=np.dtype("O")),
... "e": pd.Series([10, np.nan, 20], dtype=np.dtype("float")),
... "f": pd.Series([np.nan, 100.5, 200], dtype=np.dtype("float")),
... }
... )
Start with a DataFrame with default dtypes.
>>> df
a b c d e f
0 1 x True h 10.0 NaN
1 2 y False i NaN 100.5
2 3 z NaN NaN 20.0 200.0
>>> df.dtypes
a int32
b object
c object
d object
e float64
f float64
dtype: object
Convert the DataFrame to use best possible dtypes.
>>> dfn = df.convert_dtypes()
>>> dfn
a b c d e f
0 1 x True h 10 <NA>
1 2 y False i <NA> 100.5
2 3 z <NA> <NA> 20 200.0
>>> dfn.dtypes
a Int32
b string
c boolean
d string
e Int64
f Float64
dtype: object
Start with a Series of strings and missing data represented by np.nan.
>>> s = pd.Series(["a", "b", np.nan])
>>> s
0 a
1 b
2 NaN
dtype: object
Obtain a Series with dtype StringDtype.
>>> s.convert_dtypes()
0 a
1 b
2 <NA>
dtype: string
pandas.Series.copy Series.copy(deep=True)[source]
Make a copy of this object’s indices and data. When deep=True (default), a new object will be created with a copy of the calling object’s data and indices. Modifications to the data or indices of the copy will not be reflected in the original object (see notes below). When deep=False, a new object will be created without copying the calling object’s data or index (only references to the data and index are copied). Any changes to the data of the original will be reflected in the shallow copy (and vice versa). Parameters
deep:bool, default True
Make a deep copy, including a copy of the data and the indices. With deep=False neither the indices nor the data are copied. Returns
copy:Series or DataFrame
Object type matches caller. Notes When deep=True, data is copied but actual Python objects will not be copied recursively, only the reference to the object. This is in contrast to copy.deepcopy in the Standard Library, which recursively copies object data (see examples below). While Index objects are copied when deep=True, the underlying numpy array is not copied for performance reasons. Since Index is immutable, the underlying data can be safely shared and a copy is not needed. Examples
>>> s = pd.Series([1, 2], index=["a", "b"])
>>> s
a 1
b 2
dtype: int64
>>> s_copy = s.copy()
>>> s_copy
a 1
b 2
dtype: int64
Shallow copy versus default (deep) copy:
>>> s = pd.Series([1, 2], index=["a", "b"])
>>> deep = s.copy()
>>> shallow = s.copy(deep=False)
Shallow copy shares data and index with original.
>>> s is shallow
False
>>> s.values is shallow.values and s.index is shallow.index
True
Deep copy has own copy of data and index.
>>> s is deep
False
>>> s.values is deep.values or s.index is deep.index
False
Updates to the data shared by shallow copy and original are reflected in both; the deep copy remains unchanged.
>>> s[0] = 3
>>> shallow[1] = 4
>>> s
a 3
b 4
dtype: int64
>>> shallow
a 3
b 4
dtype: int64
>>> deep
a 1
b 2
dtype: int64
Note that when copying an object containing Python objects, a deep copy will copy the data, but will not do so recursively. Updating a nested data object will be reflected in the deep copy.
>>> s = pd.Series([[1, 2], [3, 4]])
>>> deep = s.copy()
>>> s[0][0] = 10
>>> s
0 [10, 2]
1 [3, 4]
dtype: object
>>> deep
0 [10, 2]
1 [3, 4]
dtype: object
pandas.Series.corr Series.corr(other, method='pearson', min_periods=None)[source]
Compute correlation with other Series, excluding missing values. Parameters
other:Series
Series with which to compute the correlation.
method:{‘pearson’, ‘kendall’, ‘spearman’} or callable
Method used to compute correlation: pearson : Standard correlation coefficient kendall : Kendall Tau correlation coefficient spearman : Spearman rank correlation callable: Callable with input two 1d ndarrays and returning a float. Warning Note that the returned matrix from corr will have 1 along the diagonals and will be symmetric regardless of the callable’s behavior.
min_periods:int, optional
Minimum number of observations needed to have a valid result. Returns
float
Correlation with other. See also DataFrame.corr
Compute pairwise correlation between columns. DataFrame.corrwith
Compute pairwise correlation with another DataFrame or Series. Examples
>>> def histogram_intersection(a, b):
... v = np.minimum(a, b).sum().round(decimals=1)
... return v
>>> s1 = pd.Series([.2, .0, .6, .2])
>>> s2 = pd.Series([.3, .6, .0, .1])
>>> s1.corr(s2, method=histogram_intersection)
0.3
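The default method='pearson' needs no callable at all. A minimal sketch with hypothetical data, using two perfectly linearly related series so the Pearson coefficient is exactly 1.0:

```python
import pandas as pd

# Hypothetical data: s2 is an exact linear function of s1,
# so the default Pearson correlation is 1.0.
s1 = pd.Series([1, 2, 3, 4])
s2 = pd.Series([2, 4, 6, 8])
r = s1.corr(s2)  # method='pearson' is the default
```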
pandas.Series.count Series.count(level=None)[source]
Return number of non-NA/null observations in the Series. Parameters
level:int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a smaller Series. Returns
int or Series (if level specified)
Number of non-null values in the Series. See also DataFrame.count
Count non-NA cells for each column or row. Examples
>>> s = pd.Series([0.0, 1.0, np.nan])
>>> s.count()
2
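With a MultiIndex, the level argument collapses the Series into per-level counts. A sketch with hypothetical data, written in the equivalent groupby spelling (which produces the same result as s.count(level="outer")):

```python
import numpy as np
import pandas as pd

# Hypothetical MultiIndex Series; NaN entries are excluded from the count.
idx = pd.MultiIndex.from_tuples(
    [("a", 0), ("a", 1), ("b", 0), ("b", 1)], names=["outer", "inner"]
)
s = pd.Series([1.0, np.nan, 3.0, 4.0], index=idx)
# Count non-null values within each 'outer' level.
per_level = s.groupby(level="outer").count()
```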
pandas.Series.cov Series.cov(other, min_periods=None, ddof=1)[source]
Compute covariance with Series, excluding missing values. Parameters
other:Series
Series with which to compute the covariance.
min_periods:int, optional
Minimum number of observations needed to have a valid result.
ddof:int, default 1
Delta degrees of freedom. The divisor used in calculations is N - ddof, where N represents the number of elements. New in version 1.1.0. Returns
float
Covariance between Series and other normalized by N-1 (unbiased estimator). See also DataFrame.cov
Compute pairwise covariance of columns. Examples
>>> s1 = pd.Series([0.90010907, 0.13484424, 0.62036035])
>>> s2 = pd.Series([0.12528585, 0.26962463, 0.51111198])
>>> s1.cov(s2)
-0.01685762652715874
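The min_periods argument guards against estimates from too few observations: when fewer valid pairs are available than min_periods, the result is NaN. A sketch with hypothetical values:

```python
import numpy as np
import pandas as pd

# Only 3 observation pairs, but 5 are required, so cov returns NaN
# rather than an estimate from too little data (hypothetical values).
s1 = pd.Series([0.1, 0.5, 0.9])
s2 = pd.Series([0.2, 0.4, 0.6])
result = s1.cov(s2, min_periods=5)
```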
pandas.Series.cummax Series.cummax(axis=None, skipna=True, *args, **kwargs)[source]
Return cumulative maximum over a DataFrame or Series axis. Returns a DataFrame or Series of the same size containing the cumulative maximum. Parameters
axis:{0 or ‘index’, 1 or ‘columns’}, default 0
The index or the name of the axis. 0 is equivalent to None or ‘index’.
skipna:bool, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA. *args, **kwargs
Additional keywords have no effect but might be accepted for compatibility with NumPy. Returns
scalar or Series
Return cumulative maximum of scalar or Series. See also core.window.Expanding.max
Similar functionality but ignores NaN values. Series.max
Return the maximum over Series axis. Series.cummax
Return cumulative maximum over Series axis. Series.cummin
Return cumulative minimum over Series axis. Series.cumsum
Return cumulative sum over Series axis. Series.cumprod
Return cumulative product over Series axis. Examples Series
>>> s = pd.Series([2, np.nan, 5, -1, 0])
>>> s
0 2.0
1 NaN
2 5.0
3 -1.0
4 0.0
dtype: float64
By default, NA values are ignored.
>>> s.cummax()
0 2.0
1 NaN
2 5.0
3 5.0
4 5.0
dtype: float64
To include NA values in the operation, use skipna=False
>>> s.cummax(skipna=False)
0 2.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
DataFrame
>>> df = pd.DataFrame([[2.0, 1.0],
... [3.0, np.nan],
... [1.0, 0.0]],
... columns=list('AB'))
>>> df
A B
0 2.0 1.0
1 3.0 NaN
2 1.0 0.0
By default, iterates over rows and finds the maximum in each column. This is equivalent to axis=None or axis='index'.
>>> df.cummax()
A B
0 2.0 1.0
1 3.0 NaN
2 3.0 1.0
To iterate over columns and find the maximum in each row, use axis=1
>>> df.cummax(axis=1)
A B
0 2.0 2.0
1 3.0 NaN
2 1.0 1.0
pandas.Series.cummin Series.cummin(axis=None, skipna=True, *args, **kwargs)[source]
Return cumulative minimum over a DataFrame or Series axis. Returns a DataFrame or Series of the same size containing the cumulative minimum. Parameters
axis:{0 or ‘index’, 1 or ‘columns’}, default 0
The index or the name of the axis. 0 is equivalent to None or ‘index’.
skipna:bool, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA. *args, **kwargs
Additional keywords have no effect but might be accepted for compatibility with NumPy. Returns
scalar or Series
Return cumulative minimum of scalar or Series. See also core.window.Expanding.min
Similar functionality but ignores NaN values. Series.min
Return the minimum over Series axis. Series.cummax
Return cumulative maximum over Series axis. Series.cummin
Return cumulative minimum over Series axis. Series.cumsum
Return cumulative sum over Series axis. Series.cumprod
Return cumulative product over Series axis. Examples Series
>>> s = pd.Series([2, np.nan, 5, -1, 0])
>>> s
0 2.0
1 NaN
2 5.0
3 -1.0
4 0.0
dtype: float64
By default, NA values are ignored.
>>> s.cummin()
0 2.0
1 NaN
2 2.0
3 -1.0
4 -1.0
dtype: float64
To include NA values in the operation, use skipna=False
>>> s.cummin(skipna=False)
0 2.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
DataFrame
>>> df = pd.DataFrame([[2.0, 1.0],
... [3.0, np.nan],
... [1.0, 0.0]],
... columns=list('AB'))
>>> df
A B
0 2.0 1.0
1 3.0 NaN
2 1.0 0.0
By default, iterates over rows and finds the minimum in each column. This is equivalent to axis=None or axis='index'.
>>> df.cummin()
A B
0 2.0 1.0
1 2.0 NaN
2 1.0 0.0
To iterate over columns and find the minimum in each row, use axis=1
>>> df.cummin(axis=1)
A B
0 2.0 1.0
1 3.0 NaN
2 1.0 0.0
pandas.Series.cumprod Series.cumprod(axis=None, skipna=True, *args, **kwargs)[source]
Return cumulative product over a DataFrame or Series axis. Returns a DataFrame or Series of the same size containing the cumulative product. Parameters
axis:{0 or ‘index’, 1 or ‘columns’}, default 0
The index or the name of the axis. 0 is equivalent to None or ‘index’.
skipna:bool, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA. *args, **kwargs
Additional keywords have no effect but might be accepted for compatibility with NumPy. Returns
scalar or Series
Return cumulative product of scalar or Series. See also core.window.Expanding.prod
Similar functionality but ignores NaN values. Series.prod
Return the product over Series axis. Series.cummax
Return cumulative maximum over Series axis. Series.cummin
Return cumulative minimum over Series axis. Series.cumsum
Return cumulative sum over Series axis. Series.cumprod
Return cumulative product over Series axis. Examples Series
>>> s = pd.Series([2, np.nan, 5, -1, 0])
>>> s
0 2.0
1 NaN
2 5.0
3 -1.0
4 0.0
dtype: float64
By default, NA values are ignored.
>>> s.cumprod()
0 2.0
1 NaN
2 10.0
3 -10.0
4 -0.0
dtype: float64
To include NA values in the operation, use skipna=False
>>> s.cumprod(skipna=False)
0 2.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
DataFrame
>>> df = pd.DataFrame([[2.0, 1.0],
... [3.0, np.nan],
... [1.0, 0.0]],
... columns=list('AB'))
>>> df
A B
0 2.0 1.0
1 3.0 NaN
2 1.0 0.0
By default, iterates over rows and finds the product in each column. This is equivalent to axis=None or axis='index'.
>>> df.cumprod()
A B
0 2.0 1.0
1 6.0 NaN
2 6.0 0.0
To iterate over columns and find the product in each row, use axis=1
>>> df.cumprod(axis=1)
A B
0 2.0 2.0
1 3.0 NaN
2 1.0 0.0
pandas.Series.cumsum Series.cumsum(axis=None, skipna=True, *args, **kwargs)[source]
Return cumulative sum over a DataFrame or Series axis. Returns a DataFrame or Series of the same size containing the cumulative sum. Parameters
axis:{0 or ‘index’, 1 or ‘columns’}, default 0
The index or the name of the axis. 0 is equivalent to None or ‘index’.
skipna:bool, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA. *args, **kwargs
Additional keywords have no effect but might be accepted for compatibility with NumPy. Returns
scalar or Series
Return cumulative sum of scalar or Series. See also core.window.Expanding.sum
Similar functionality but ignores NaN values. Series.sum
Return the sum over Series axis. Series.cummax
Return cumulative maximum over Series axis. Series.cummin
Return cumulative minimum over Series axis. Series.cumsum
Return cumulative sum over Series axis. Series.cumprod
Return cumulative product over Series axis. Examples Series
>>> s = pd.Series([2, np.nan, 5, -1, 0])
>>> s
0 2.0
1 NaN
2 5.0
3 -1.0
4 0.0
dtype: float64
By default, NA values are ignored.
>>> s.cumsum()
0 2.0
1 NaN
2 7.0
3 6.0
4 6.0
dtype: float64
To include NA values in the operation, use skipna=False
>>> s.cumsum(skipna=False)
0 2.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
DataFrame
>>> df = pd.DataFrame([[2.0, 1.0],
... [3.0, np.nan],
... [1.0, 0.0]],
... columns=list('AB'))
>>> df
A B
0 2.0 1.0
1 3.0 NaN
2 1.0 0.0
By default, iterates over rows and finds the sum in each column. This is equivalent to axis=None or axis='index'.
>>> df.cumsum()
A B
0 2.0 1.0
1 5.0 NaN
2 6.0 1.0
To iterate over columns and find the sum in each row, use axis=1
>>> df.cumsum(axis=1)
A B
0 2.0 3.0
1 3.0 NaN
2 1.0 1.0
pandas.Series.describe Series.describe(percentiles=None, include=None, exclude=None, datetime_is_numeric=False)[source]
Generate descriptive statistics. Descriptive statistics include those that summarize the central tendency, dispersion and shape of a dataset’s distribution, excluding NaN values. Analyzes both numeric and object series, as well as DataFrame column sets of mixed data types. The output will vary depending on what is provided. Refer to the notes below for more detail. Parameters
percentiles:list-like of numbers, optional
The percentiles to include in the output. All should fall between 0 and 1. The default is [.25, .5, .75], which returns the 25th, 50th, and 75th percentiles.
include:‘all’, list-like of dtypes or None (default), optional
A white list of data types to include in the result. Ignored for Series. Here are the options: ‘all’ : All columns of the input will be included in the output. A list-like of dtypes : Limits the results to the provided data types. To limit the result to numeric types submit numpy.number. To limit it instead to object columns submit the numpy.object data type. Strings can also be used in the style of select_dtypes (e.g. df.describe(include=['O'])). To select pandas categorical columns, use 'category' None (default) : The result will include all numeric columns.
exclude:list-like of dtypes or None (default), optional,
A black list of data types to omit from the result. Ignored for Series. Here are the options: A list-like of dtypes : Excludes the provided data types from the result. To exclude numeric types submit numpy.number. To exclude object columns submit the data type numpy.object. Strings can also be used in the style of select_dtypes (e.g. df.describe(exclude=['O'])). To exclude pandas categorical columns, use 'category' None (default) : The result will exclude nothing.
datetime_is_numeric:bool, default False
Whether to treat datetime dtypes as numeric. This affects statistics calculated for the column. For DataFrame input, this also controls whether datetime columns are included by default. New in version 1.1.0. Returns
Series or DataFrame
Summary statistics of the Series or Dataframe provided. See also DataFrame.count
Count number of non-NA/null observations. DataFrame.max
Maximum of the values in the object. DataFrame.min
Minimum of the values in the object. DataFrame.mean
Mean of the values. DataFrame.std
Standard deviation of the observations. DataFrame.select_dtypes
Subset of a DataFrame including/excluding columns based on their dtype. Notes For numeric data, the result’s index will include count, mean, std, min, max as well as lower, 50 and upper percentiles. By default the lower percentile is 25 and the upper percentile is 75. The 50 percentile is the same as the median. For object data (e.g. strings or timestamps), the result’s index will include count, unique, top, and freq. The top is the most common value. The freq is the most common value’s frequency. Timestamps also include the first and last items. If multiple object values have the highest count, then the count and top results will be arbitrarily chosen from among those with the highest count. For mixed data types provided via a DataFrame, the default is to return only an analysis of numeric columns. If the dataframe consists only of object and categorical data without any numeric columns, the default is to return an analysis of both the object and categorical columns. If include='all' is provided as an option, the result will include a union of attributes of each type. The include and exclude parameters can be used to limit which columns in a DataFrame are analyzed for the output. The parameters are ignored when analyzing a Series. Examples Describing a numeric Series.
>>> s = pd.Series([1, 2, 3])
>>> s.describe()
count 3.0
mean 2.0
std 1.0
min 1.0
25% 1.5
50% 2.0
75% 2.5
max 3.0
dtype: float64
Describing a categorical Series.
>>> s = pd.Series(['a', 'a', 'b', 'c'])
>>> s.describe()
count 4
unique 3
top a
freq 2
dtype: object
Describing a timestamp Series.
>>> s = pd.Series([
... np.datetime64("2000-01-01"),
... np.datetime64("2010-01-01"),
... np.datetime64("2010-01-01")
... ])
>>> s.describe(datetime_is_numeric=True)
count 3
mean 2006-09-01 08:00:00
min 2000-01-01 00:00:00
25% 2004-12-31 12:00:00
50% 2010-01-01 00:00:00
75% 2010-01-01 00:00:00
max 2010-01-01 00:00:00
dtype: object
Describing a DataFrame. By default only numeric fields are returned.
>>> df = pd.DataFrame({'categorical': pd.Categorical(['d','e','f']),
... 'numeric': [1, 2, 3],
... 'object': ['a', 'b', 'c']
... })
>>> df.describe()
numeric
count 3.0
mean 2.0
std 1.0
min 1.0
25% 1.5
50% 2.0
75% 2.5
max 3.0
Describing all columns of a DataFrame regardless of data type.
>>> df.describe(include='all')
categorical numeric object
count 3 3.0 3
unique 3 NaN 3
top f NaN a
freq 1 NaN 1
mean NaN 2.0 NaN
std NaN 1.0 NaN
min NaN 1.0 NaN
25% NaN 1.5 NaN
50% NaN 2.0 NaN
75% NaN 2.5 NaN
max NaN 3.0 NaN
Describing a column from a DataFrame by accessing it as an attribute.
>>> df.numeric.describe()
count 3.0
mean 2.0
std 1.0
min 1.0
25% 1.5
50% 2.0
75% 2.5
max 3.0
Name: numeric, dtype: float64
Including only numeric columns in a DataFrame description.
>>> df.describe(include=[np.number])
numeric
count 3.0
mean 2.0
std 1.0
min 1.0
25% 1.5
50% 2.0
75% 2.5
max 3.0
Including only string columns in a DataFrame description.
>>> df.describe(include=[object])
object
count 3
unique 3
top a
freq 1
Including only categorical columns from a DataFrame description.
>>> df.describe(include=['category'])
categorical
count 3
unique 3
top d
freq 1
Excluding numeric columns from a DataFrame description.
>>> df.describe(exclude=[np.number])
categorical object
count 3 3
unique 3 3
top f a
freq 1 1
Excluding object columns from a DataFrame description.
>>> df.describe(exclude=[object])
categorical numeric
count 3 3.0
unique 3 NaN
top f NaN
freq 1 NaN
mean NaN 2.0
std NaN 1.0
min NaN 1.0
25% NaN 1.5
50% NaN 2.0
75% NaN 2.5
max NaN 3.0
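The percentiles parameter controls which quantiles appear in the output; the 50th percentile (the median) is always included even when not requested. A sketch with hypothetical data:

```python
import pandas as pd

# Request the 10th and 90th percentiles instead of the default quartiles.
s = pd.Series([1, 2, 3])
stats = s.describe(percentiles=[0.1, 0.9])
# Output index: count, mean, std, min, 10%, 50%, 90%, max
```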
pandas.Series.diff Series.diff(periods=1)[source]
First discrete difference of element. Calculates the difference of a Series element compared with another element in the Series (default is element in previous row). Parameters
periods:int, default 1
Periods to shift for calculating difference, accepts negative values. Returns
Series
First differences of the Series. See also Series.pct_change
Percent change over given number of periods. Series.shift
Shift index by desired number of periods with an optional time freq. DataFrame.diff
First discrete difference of object. Notes For boolean dtypes, this uses operator.xor() rather than operator.sub(). The result is calculated according to the current dtype of the Series; however, the dtype of the result is always float64. Examples Difference with previous row
>>> s = pd.Series([1, 1, 2, 3, 5, 8])
>>> s.diff()
0 NaN
1 0.0
2 1.0
3 1.0
4 2.0
5 3.0
dtype: float64
Difference with 3rd previous row
>>> s.diff(periods=3)
0 NaN
1 NaN
2 NaN
3 2.0
4 4.0
5 6.0
dtype: float64
Difference with following row
>>> s.diff(periods=-1)
0 0.0
1 -1.0
2 -1.0
3 -2.0
4 -3.0
5 NaN
dtype: float64
Overflow in input dtype
>>> s = pd.Series([1, 0], dtype=np.uint8)
>>> s.diff()
0 NaN
1 255.0
dtype: float64
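As the Notes mention, boolean input is differenced with XOR rather than subtraction: each element is True where it differs from its predecessor. A minimal sketch:

```python
import pandas as pd

# For boolean dtype, diff() applies XOR between consecutive elements:
# True where the value changed, False where it stayed the same.
s = pd.Series([True, False, True, True])
r = s.diff()
# First element has no predecessor, so it is missing;
# the remaining elements are True, True, False.
```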
pandas.Series.div Series.div(other, level=None, fill_value=None, axis=0)[source]
Return Floating division of series and other, element-wise (binary operator truediv). Equivalent to series / other, but with support to substitute a fill_value for missing data in either one of the inputs. Parameters
other:Series or scalar value
fill_value:None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result of filling (at that location) will be missing.
level:int or name
Broadcast across a level, matching Index values on the passed MultiIndex level. Returns
Series
The result of the operation. See also Series.rtruediv
Reverse of the Floating division operator, see Python documentation for more details. Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.divide(b, fill_value=0)
a 1.0
b inf
c inf
d 0.0
e NaN
dtype: float64
pandas.Series.divide Series.divide(other, level=None, fill_value=None, axis=0)[source]
Return Floating division of series and other, element-wise (binary operator truediv). Equivalent to series / other, but with support to substitute a fill_value for missing data in either one of the inputs. Parameters
other:Series or scalar value
fill_value:None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result of filling (at that location) will be missing.
level:int or name
Broadcast across a level, matching Index values on the passed MultiIndex level. Returns
Series
The result of the operation. See also Series.rtruediv
Reverse of the Floating division operator, see Python documentation for more details. Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.divide(b, fill_value=0)
a 1.0
b inf
c inf
d 0.0
e NaN
dtype: float64
pandas.Series.divmod Series.divmod(other, level=None, fill_value=None, axis=0)[source]
Return Integer division and modulo of series and other, element-wise (binary operator divmod). Equivalent to divmod(series, other), but with support to substitute a fill_value for missing data in either one of the inputs. Parameters
other:Series or scalar value
fill_value:None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result of filling (at that location) will be missing.
level:int or name
Broadcast across a level, matching Index values on the passed MultiIndex level. Returns
2-Tuple of Series
The result of the operation. See also Series.rdivmod
Reverse of the Integer division and modulo operator, see Python documentation for more details. Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.divmod(b, fill_value=0)
(a 1.0
b NaN
c NaN
d 0.0
e NaN
dtype: float64,
a 0.0
b NaN
c NaN
d 0.0
e NaN
dtype: float64)
pandas.Series.dot Series.dot(other)[source]
Compute the dot product between the Series and the columns of other. This method computes the dot product between the Series and another one, or between the Series and each column of a DataFrame, or between the Series and each column of an array. It can also be called using self @ other in Python >= 3.5. Parameters
other:Series, DataFrame or array-like
The other object to compute the dot product with its columns. Returns
scalar, Series or numpy.ndarray
Return the dot product of the Series and other if other is a Series; a Series of the dot products between the Series and each column of other if other is a DataFrame; or a numpy.ndarray of the dot products between the Series and each column of the array if other is a numpy.ndarray. See also DataFrame.dot
Compute the matrix product with the DataFrame. Series.mul
Multiplication of series and other, element-wise. Notes The Series and other has to share the same index if other is a Series or a DataFrame. Examples
>>> s = pd.Series([0, 1, 2, 3])
>>> other = pd.Series([-1, 2, -3, 4])
>>> s.dot(other)
8
>>> s @ other
8
>>> df = pd.DataFrame([[0, 1], [-2, 3], [4, -5], [6, 7]])
>>> s.dot(df)
0 24
1 14
dtype: int64
>>> arr = np.array([[0, 1], [-2, 3], [4, -5], [6, 7]])
>>> s.dot(arr)
array([24, 14])
pandas.Series.drop Series.drop(labels=None, axis=0, index=None, columns=None, level=None, inplace=False, errors='raise')[source]
Return Series with specified index labels removed. Remove elements of a Series based on specifying the index labels. When using a multi-index, labels on different levels can be removed by specifying the level. Parameters
labels:single label or list-like
Index labels to drop.
axis:0, default 0
Redundant for application on Series.
index:single label or list-like
Redundant for application on Series, but ‘index’ can be used instead of ‘labels’.
columns:single label or list-like
No change is made to the Series; use ‘index’ or ‘labels’ instead.
level:int or level name, optional
For MultiIndex, level for which the labels will be removed.
inplace:bool, default False
If True, do operation inplace and return None.
errors:{‘ignore’, ‘raise’}, default ‘raise’
If ‘ignore’, suppress error and only existing labels are dropped. Returns
Series or None
Series with specified index labels removed or None if inplace=True. Raises
KeyError
If none of the labels are found in the index. See also Series.reindex
Return only specified index labels of Series. Series.dropna
Return series without null values. Series.drop_duplicates
Return Series with duplicate values removed. DataFrame.drop
Drop specified labels from rows or columns. Examples
>>> s = pd.Series(data=np.arange(3), index=['A', 'B', 'C'])
>>> s
A 0
B 1
C 2
dtype: int64
Drop labels B and C
>>> s.drop(labels=['B', 'C'])
A 0
dtype: int64
Drop 2nd level label in MultiIndex Series
>>> midx = pd.MultiIndex(levels=[['lama', 'cow', 'falcon'],
... ['speed', 'weight', 'length']],
... codes=[[0, 0, 0, 1, 1, 1, 2, 2, 2],
... [0, 1, 2, 0, 1, 2, 0, 1, 2]])
>>> s = pd.Series([45, 200, 1.2, 30, 250, 1.5, 320, 1, 0.3],
... index=midx)
>>> s
lama speed 45.0
weight 200.0
length 1.2
cow speed 30.0
weight 250.0
length 1.5
falcon speed 320.0
weight 1.0
length 0.3
dtype: float64
>>> s.drop(labels='weight', level=1)
lama speed 45.0
length 1.2
cow speed 30.0
length 1.5
falcon speed 320.0
length 0.3
dtype: float64 | pandas.reference.api.pandas.series.drop |
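The errors parameter is described above but not demonstrated. A hedged sketch of both modes (the labels are chosen for illustration):

```python
import pandas as pd

s = pd.Series([0, 1, 2], index=["A", "B", "C"])

# Default errors='raise': a label missing from the index raises KeyError.
try:
    s.drop(labels=["B", "Z"])
    raised = False
except KeyError:
    raised = True

# errors='ignore': missing labels are skipped, existing ones are dropped.
result = s.drop(labels=["B", "Z"], errors="ignore")
```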
pandas.Series.drop_duplicates Series.drop_duplicates(keep='first', inplace=False)[source]
Return Series with duplicate values removed. Parameters
keep:{‘first’, ‘last’, False}, default ‘first’
Method to handle dropping duplicates: ‘first’ : Drop duplicates except for the first occurrence. ‘last’ : Drop duplicates except for the last occurrence. False : Drop all duplicates.
inplace:bool, default False
If True, performs operation inplace and returns None. Returns
Series or None
Series with duplicates dropped or None if inplace=True. See also Index.drop_duplicates
Equivalent method on Index. DataFrame.drop_duplicates
Equivalent method on DataFrame. Series.duplicated
Related method on Series, indicating duplicate Series values. Examples Generate a Series with duplicated entries.
>>> s = pd.Series(['lama', 'cow', 'lama', 'beetle', 'lama', 'hippo'],
... name='animal')
>>> s
0 lama
1 cow
2 lama
3 beetle
4 lama
5 hippo
Name: animal, dtype: object
With the ‘keep’ parameter, the selection behaviour of duplicated values can be changed. The value ‘first’ keeps the first occurrence for each set of duplicated entries. The default value of keep is ‘first’.
>>> s.drop_duplicates()
0 lama
1 cow
3 beetle
5 hippo
Name: animal, dtype: object
The value ‘last’ for parameter ‘keep’ keeps the last occurrence for each set of duplicated entries.
>>> s.drop_duplicates(keep='last')
1 cow
3 beetle
4 lama
5 hippo
Name: animal, dtype: object
The value False for parameter ‘keep’ discards all sets of duplicated entries. Setting the value of ‘inplace’ to True performs the operation inplace and returns None.
>>> s.drop_duplicates(keep=False, inplace=True)
>>> s
1 cow
3 beetle
5 hippo
Name: animal, dtype: object | pandas.reference.api.pandas.series.drop_duplicates |
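As the See also entry notes, Series.duplicated is the related method; dropping with keep='first' can be sketched as an equivalent boolean mask (a minimal illustration, not the internal implementation):

```python
import pandas as pd

s = pd.Series(["lama", "cow", "lama", "beetle", "lama", "hippo"])

# keep='first' retains the first occurrence of each value ...
via_drop = s.drop_duplicates()

# ... which matches masking out the entries duplicated() flags as repeats.
via_mask = s[~s.duplicated()]
```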
pandas.Series.droplevel Series.droplevel(level, axis=0)[source]
Return Series/DataFrame with requested index / column level(s) removed. Parameters
level:int, str, or list-like
If a string is given, it must be the name of a level. If list-like, elements must be names or positional indexes of levels.
axis:{0 or ‘index’, 1 or ‘columns’}, default 0
Axis along which the level(s) is removed: 0 or ‘index’: remove level(s) from the row index. 1 or ‘columns’: remove level(s) from the columns. Returns
Series/DataFrame
Series/DataFrame with requested index / column level(s) removed. Examples
>>> df = pd.DataFrame([
... [1, 2, 3, 4],
... [5, 6, 7, 8],
... [9, 10, 11, 12]
... ]).set_index([0, 1]).rename_axis(['a', 'b'])
>>> df.columns = pd.MultiIndex.from_tuples([
... ('c', 'e'), ('d', 'f')
... ], names=['level_1', 'level_2'])
>>> df
level_1 c d
level_2 e f
a b
1 2 3 4
5 6 7 8
9 10 11 12
>>> df.droplevel('a')
level_1 c d
level_2 e f
b
2 3 4
6 7 8
10 11 12
>>> df.droplevel('level_2', axis=1)
level_1 c d
a b
1 2 3 4
5 6 7 8
9 10 11 12 | pandas.reference.api.pandas.series.droplevel |
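The examples above use a DataFrame only; the same method works on a Series with a MultiIndex. A small sketch (the level names and data are illustrative):

```python
import pandas as pd

midx = pd.MultiIndex.from_tuples(
    [("x", 1), ("x", 2), ("y", 1)], names=["letter", "number"]
)
s = pd.Series([10, 20, 30], index=midx)

# Dropping the 'number' level leaves the 'letter' level as the index;
# the values themselves are unchanged.
flat = s.droplevel("number")
```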
pandas.Series.dropna Series.dropna(axis=0, inplace=False, how=None)[source]
Return a new Series with missing values removed. See the User Guide for more on which values are considered missing, and how to work with missing data. Parameters
axis:{0 or ‘index’}, default 0
There is only one axis to drop values from.
inplace:bool, default False
If True, do operation inplace and return None.
how:str, optional
Not in use. Kept for compatibility. Returns
Series or None
Series with NA entries dropped from it or None if inplace=True. See also Series.isna
Indicate missing values. Series.notna
Indicate existing (non-missing) values. Series.fillna
Replace missing values. DataFrame.dropna
Drop rows or columns which contain NA values. Index.dropna
Drop missing indices. Examples
>>> ser = pd.Series([1., 2., np.nan])
>>> ser
0 1.0
1 2.0
2 NaN
dtype: float64
Drop NA values from a Series.
>>> ser.dropna()
0 1.0
1 2.0
dtype: float64
Keep the Series with valid entries in the same variable.
>>> ser.dropna(inplace=True)
>>> ser
0 1.0
1 2.0
dtype: float64
Empty strings are not considered NA values. None is considered an NA value.
>>> ser = pd.Series([np.NaN, 2, pd.NaT, '', None, 'I stay'])
>>> ser
0 NaN
1 2
2 NaT
3
4 None
5 I stay
dtype: object
>>> ser.dropna()
1 2
3
5 I stay
dtype: object | pandas.reference.api.pandas.series.dropna |
pandas.Series.dt Series.dt()[source]
Accessor object for datetimelike properties of the Series values. Examples
>>> seconds_series = pd.Series(pd.date_range("2000-01-01", periods=3, freq="s"))
>>> seconds_series
0 2000-01-01 00:00:00
1 2000-01-01 00:00:01
2 2000-01-01 00:00:02
dtype: datetime64[ns]
>>> seconds_series.dt.second
0 0
1 1
2 2
dtype: int64
>>> hours_series = pd.Series(pd.date_range("2000-01-01", periods=3, freq="h"))
>>> hours_series
0 2000-01-01 00:00:00
1 2000-01-01 01:00:00
2 2000-01-01 02:00:00
dtype: datetime64[ns]
>>> hours_series.dt.hour
0 0
1 1
2 2
dtype: int64
>>> quarters_series = pd.Series(pd.date_range("2000-01-01", periods=3, freq="q"))
>>> quarters_series
0 2000-03-31
1 2000-06-30
2 2000-09-30
dtype: datetime64[ns]
>>> quarters_series.dt.quarter
0 1
1 2
2 3
dtype: int64
Returns a Series indexed like the original Series. Raises TypeError if the Series does not contain datetimelike values. | pandas.reference.api.pandas.series.dt |
pandas.Series.dt.ceil Series.dt.ceil(*args, **kwargs)[source]
Perform ceil operation on the data to the specified freq. Parameters
freq:str or Offset
The frequency level to ceil the index to. Must be a fixed frequency like ‘S’ (second) not ‘ME’ (month end). See frequency aliases for a list of possible freq values.
ambiguous:‘infer’, bool-ndarray, ‘NaT’, default ‘raise’
Only relevant for DatetimeIndex: ‘infer’ will attempt to infer fall dst-transition hours based on order bool-ndarray where True signifies a DST time, False designates a non-DST time (note that this flag is only applicable for ambiguous times) ‘NaT’ will return NaT where there are ambiguous times ‘raise’ will raise an AmbiguousTimeError if there are ambiguous times.
nonexistent:‘shift_forward’, ‘shift_backward’, ‘NaT’, timedelta, default ‘raise’
A nonexistent time does not exist in a particular timezone where clocks moved forward due to DST. ‘shift_forward’ will shift the nonexistent time forward to the closest existing time ‘shift_backward’ will shift the nonexistent time backward to the closest existing time ‘NaT’ will return NaT where there are nonexistent times timedelta objects will shift nonexistent times by the timedelta ‘raise’ will raise an NonExistentTimeError if there are nonexistent times. Returns
DatetimeIndex, TimedeltaIndex, or Series
Index of the same type for a DatetimeIndex or TimedeltaIndex, or a Series with the same index for a Series. Raises
ValueError if the freq cannot be converted.
Notes If the timestamps have a timezone, ceiling will take place relative to the local (“wall”) time and re-localized to the same timezone. When ceiling near daylight savings time, use nonexistent and ambiguous to control the re-localization behavior. Examples DatetimeIndex
>>> rng = pd.date_range('1/1/2018 11:59:00', periods=3, freq='min')
>>> rng
DatetimeIndex(['2018-01-01 11:59:00', '2018-01-01 12:00:00',
'2018-01-01 12:01:00'],
dtype='datetime64[ns]', freq='T')
>>> rng.ceil('H')
DatetimeIndex(['2018-01-01 12:00:00', '2018-01-01 12:00:00',
'2018-01-01 13:00:00'],
dtype='datetime64[ns]', freq=None)
Series
>>> pd.Series(rng).dt.ceil("H")
0 2018-01-01 12:00:00
1 2018-01-01 12:00:00
2 2018-01-01 13:00:00
dtype: datetime64[ns]
When rounding near a daylight savings time transition, use ambiguous or nonexistent to control how the timestamp should be re-localized.
>>> rng_tz = pd.DatetimeIndex(["2021-10-31 01:30:00"], tz="Europe/Amsterdam")
>>> rng_tz.ceil("H", ambiguous=False)
DatetimeIndex(['2021-10-31 02:00:00+01:00'],
dtype='datetime64[ns, Europe/Amsterdam]', freq=None)
>>> rng_tz.ceil("H", ambiguous=True)
DatetimeIndex(['2021-10-31 02:00:00+02:00'],
dtype='datetime64[ns, Europe/Amsterdam]', freq=None) | pandas.reference.api.pandas.series.dt.ceil |
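The nonexistent parameter is described above but only ambiguous is demonstrated. A sketch around a spring-forward transition (the timezone and timestamp are chosen for illustration; assumes US/Eastern skipped 02:00 on 2018-03-11):

```python
import pandas as pd

# 02:00 on 2018-03-11 does not exist in US/Eastern (clocks jump to 03:00).
ts = pd.DatetimeIndex(["2018-03-11 01:30:00"], tz="US/Eastern")

# Ceiling to the hour would land on the nonexistent 02:00 wall time;
# 'shift_forward' moves it to the closest existing time instead of raising.
result = ts.ceil("H", nonexistent="shift_forward")
```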
pandas.Series.dt.components Series.dt.components
Return a DataFrame of the components of the Timedeltas. Returns
DataFrame
Examples
>>> s = pd.Series(pd.to_timedelta(np.arange(5), unit='s'))
>>> s
0 0 days 00:00:00
1 0 days 00:00:01
2 0 days 00:00:02
3 0 days 00:00:03
4 0 days 00:00:04
dtype: timedelta64[ns]
>>> s.dt.components
days hours minutes seconds milliseconds microseconds nanoseconds
0 0 0 0 0 0 0 0
1 0 0 0 1 0 0 0
2 0 0 0 2 0 0 0
3 0 0 0 3 0 0 0
4 0 0 0 4 0 0 0 | pandas.reference.api.pandas.series.dt.components |
pandas.Series.dt.date Series.dt.date
Returns numpy array of python datetime.date objects. Namely, the date part of Timestamps without time and timezone information. | pandas.reference.api.pandas.series.dt.date |
pandas.Series.dt.day Series.dt.day
The day of the datetime. Examples
>>> datetime_series = pd.Series(
... pd.date_range("2000-01-01", periods=3, freq="D")
... )
>>> datetime_series
0 2000-01-01
1 2000-01-02
2 2000-01-03
dtype: datetime64[ns]
>>> datetime_series.dt.day
0 1
1 2
2 3
dtype: int64 | pandas.reference.api.pandas.series.dt.day |
pandas.Series.dt.day_name Series.dt.day_name(*args, **kwargs)[source]
Return the day names of the DateTimeIndex with specified locale. Parameters
locale:str, optional
Locale determining the language in which to return the day name. Default is English locale. Returns
Index
Index of day names. Examples
>>> idx = pd.date_range(start='2018-01-01', freq='D', periods=3)
>>> idx
DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03'],
dtype='datetime64[ns]', freq='D')
>>> idx.day_name()
Index(['Monday', 'Tuesday', 'Wednesday'], dtype='object') | pandas.reference.api.pandas.series.dt.day_name |
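The same method is available on a Series through the .dt accessor; a short sketch reusing the index from the example above:

```python
import pandas as pd

idx = pd.date_range(start="2018-01-01", freq="D", periods=3)

# Wrapping the DatetimeIndex in a Series exposes day_name via .dt.
names = pd.Series(idx).dt.day_name()
```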