pandas.api.extensions.ExtensionDtype.construct_from_string classmethodExtensionDtype.construct_from_string(string)[source]
Construct this type from a string. This is useful mainly for data types that accept parameters. For example, a period dtype accepts a frequency parameter that can be set as period[H] (where H means hourly frequency). By default, in the abstract class, just the name of the type is expected. But subclasses can override this method to accept parameters. Parameters
string:str
The name of the type, for example category. Returns
ExtensionDtype
Instance of the dtype. Raises
TypeError
If a class cannot be constructed from this ‘string’. Examples For extension dtypes with arguments the following may be an adequate implementation.
>>> @classmethod
... def construct_from_string(cls, string):
... pattern = re.compile(r"^my_type\[(?P<arg_name>.+)\]$")
... match = pattern.match(string)
... if match:
... return cls(**match.groupdict())
... else:
... raise TypeError(
... f"Cannot construct a '{cls.__name__}' from '{string}'"
... ) | pandas.reference.api.pandas.api.extensions.extensiondtype.construct_from_string |
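As a concrete illustration of both paths, the built-in pandas dtypes can be driven through this classmethod directly (a short sketch, not part of the original docstring):

```python
import pandas as pd

# Parameter-free case: the bare name "category" is enough
dtype = pd.CategoricalDtype.construct_from_string("category")

# Parametrized case: PeriodDtype parses its frequency out of the string
pdtype = pd.PeriodDtype.construct_from_string("period[D]")
print(type(dtype).__name__, pdtype.name)
```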
pandas.api.extensions.ExtensionDtype.empty ExtensionDtype.empty(shape)[source]
Construct an ExtensionArray of this dtype with the given shape. Analogous to numpy.empty. Parameters
shape:int or tuple[int]
Returns
ExtensionArray | pandas.reference.api.pandas.api.extensions.extensiondtype.empty |
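A minimal sketch with the nullable integer dtype; as with numpy.empty, the values are uninitialized, so only the type and length are meaningful:

```python
import pandas as pd

# Allocate a length-3 extension array of this dtype (contents are arbitrary)
arr = pd.Int64Dtype().empty((3,))
print(type(arr).__name__, len(arr))
```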
pandas.api.extensions.ExtensionDtype.is_dtype classmethodExtensionDtype.is_dtype(dtype)[source]
Check if we match ‘dtype’. Parameters
dtype:object
The object to check. Returns
bool
Notes The default implementation is True if any of the following hold: cls.construct_from_string(dtype) is an instance of cls; dtype is an object and is an instance of cls; or dtype has a dtype attribute, and any of the above conditions is true for dtype.dtype. | pandas.reference.api.pandas.api.extensions.extensiondtype.is_dtype |
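A short sketch of these matching paths, using a built-in dtype:

```python
import pandas as pd

# Matches via construct_from_string on the dtype's registered name
print(pd.CategoricalDtype.is_dtype("category"))  # True
# Matches via the .dtype attribute of an array-like
print(pd.CategoricalDtype.is_dtype(pd.Series(["a"], dtype="category")))  # True
# A string that cannot be constructed does not match
print(pd.CategoricalDtype.is_dtype("int64"))  # False
```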
pandas.api.extensions.ExtensionDtype.kind propertyExtensionDtype.kind
A character code (one of ‘biufcmMOSUV’), default ‘O’. This should match the NumPy dtype used when the array is converted to an ndarray, which is probably ‘O’ for object if the extension type cannot be represented as a built-in NumPy type. See also numpy.dtype.kind | pandas.reference.api.pandas.api.extensions.extensiondtype.kind |
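A quick check of both situations with built-in extension dtypes:

```python
import pandas as pd

print(pd.CategoricalDtype().kind)  # prints O (no single NumPy counterpart)
print(pd.Int64Dtype().kind)        # prints i (backed by int64 values)
```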
pandas.api.extensions.ExtensionDtype.na_value propertyExtensionDtype.na_value
Default NA value to use for this type. This is used in e.g. ExtensionArray.take. This should be the user-facing “boxed” version of the NA value, not the physical NA value for storage. e.g. for JSONArray, this is an empty dictionary. | pandas.reference.api.pandas.api.extensions.extensiondtype.na_value |
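For the built-in nullable dtypes the boxed NA value is pd.NA, while the base-class default is NaN; a quick check:

```python
import numpy as np
import pandas as pd

assert pd.Int64Dtype().na_value is pd.NA         # boxed NA for nullable ints
assert np.isnan(pd.CategoricalDtype().na_value)  # base-class default is np.nan
```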
pandas.api.extensions.ExtensionDtype.name propertyExtensionDtype.name
A string identifying the data type. Will be used for display in, e.g. Series.dtype | pandas.reference.api.pandas.api.extensions.extensiondtype.name |
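A quick check that the name is the same string Series.dtype displays:

```python
import pandas as pd

s = pd.Series([1, 2], dtype="Int64")
print(s.dtype.name)  # prints Int64, the same string shown by Series.dtype
```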
pandas.api.extensions.ExtensionDtype.names propertyExtensionDtype.names
Ordered list of field names, or None if there are no fields. This is for compatibility with NumPy arrays, and may be removed in the future. | pandas.reference.api.pandas.api.extensions.extensiondtype.names |
pandas.api.extensions.ExtensionDtype.type propertyExtensionDtype.type
The scalar type for the array, e.g. int. It’s expected ExtensionArray[item] returns an instance of ExtensionDtype.type for scalar item, assuming that value is valid (not NA). NA values do not need to be instances of type. | pandas.reference.api.pandas.api.extensions.extensiondtype.type |
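A quick check with the nullable integer dtype, whose scalar type is np.int64:

```python
import numpy as np
import pandas as pd

dtype = pd.Int64Dtype()
print(dtype.type is np.int64)  # True

s = pd.Series([1, 2], dtype=dtype)
print(isinstance(s[0], dtype.type))  # scalar access yields dtype.type: True
```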
pandas.api.extensions.register_dataframe_accessor pandas.api.extensions.register_dataframe_accessor(name)[source]
Register a custom accessor on DataFrame objects. Parameters
name:str
Name under which the accessor should be registered. A warning is issued if this name conflicts with a preexisting attribute. Returns
callable
A class decorator. See also register_dataframe_accessor
Register a custom accessor on DataFrame objects. register_series_accessor
Register a custom accessor on Series objects. register_index_accessor
Register a custom accessor on Index objects. Notes When accessed, your accessor will be initialized with the pandas object the user is interacting with. So the signature must be
def __init__(self, pandas_object): # noqa: E999
...
For consistency with pandas methods, you should raise an AttributeError if the data passed to your accessor has an incorrect dtype.
>>> pd.Series(['a', 'b']).dt
Traceback (most recent call last):
...
AttributeError: Can only use .dt accessor with datetimelike values
Examples In your library code:
import pandas as pd
@pd.api.extensions.register_dataframe_accessor("geo")
class GeoAccessor:
def __init__(self, pandas_obj):
self._obj = pandas_obj
@property
def center(self):
# return the geographic center point of this DataFrame
lat = self._obj.latitude
lon = self._obj.longitude
return (float(lon.mean()), float(lat.mean()))
def plot(self):
# plot this array's data on a map, e.g., using Cartopy
pass
Back in an interactive IPython session:
In [1]: ds = pd.DataFrame({"longitude": np.linspace(0, 10),
...: "latitude": np.linspace(0, 20)})
In [2]: ds.geo.center
Out[2]: (5.0, 10.0)
In [3]: ds.geo.plot() # plots data on a map | pandas.reference.api.pandas.api.extensions.register_dataframe_accessor |
pandas.api.extensions.register_extension_dtype pandas.api.extensions.register_extension_dtype(cls)[source]
Register an ExtensionType with pandas as class decorator. This enables operations like .astype(name) for the name of the ExtensionDtype. Returns
callable
A class decorator. Examples
>>> from pandas.api.extensions import register_extension_dtype, ExtensionDtype
>>> @register_extension_dtype
... class MyExtensionDtype(ExtensionDtype):
... name = "myextension" | pandas.reference.api.pandas.api.extensions.register_extension_dtype |
pandas.api.extensions.register_index_accessor pandas.api.extensions.register_index_accessor(name)[source]
Register a custom accessor on Index objects. Parameters
name:str
Name under which the accessor should be registered. A warning is issued if this name conflicts with a preexisting attribute. Returns
callable
A class decorator. See also register_dataframe_accessor
Register a custom accessor on DataFrame objects. register_series_accessor
Register a custom accessor on Series objects. register_index_accessor
Register a custom accessor on Index objects. Notes When accessed, your accessor will be initialized with the pandas object the user is interacting with. So the signature must be
def __init__(self, pandas_object): # noqa: E999
...
For consistency with pandas methods, you should raise an AttributeError if the data passed to your accessor has an incorrect dtype.
>>> pd.Series(['a', 'b']).dt
Traceback (most recent call last):
...
AttributeError: Can only use .dt accessor with datetimelike values
Examples In your library code:
import pandas as pd
@pd.api.extensions.register_dataframe_accessor("geo")
class GeoAccessor:
def __init__(self, pandas_obj):
self._obj = pandas_obj
@property
def center(self):
# return the geographic center point of this DataFrame
lat = self._obj.latitude
lon = self._obj.longitude
return (float(lon.mean()), float(lat.mean()))
def plot(self):
# plot this array's data on a map, e.g., using Cartopy
pass
Back in an interactive IPython session:
In [1]: ds = pd.DataFrame({"longitude": np.linspace(0, 10),
...: "latitude": np.linspace(0, 20)})
In [2]: ds.geo.center
Out[2]: (5.0, 10.0)
In [3]: ds.geo.plot() # plots data on a map | pandas.reference.api.pandas.api.extensions.register_index_accessor |
pandas.api.extensions.register_series_accessor pandas.api.extensions.register_series_accessor(name)[source]
Register a custom accessor on Series objects. Parameters
name:str
Name under which the accessor should be registered. A warning is issued if this name conflicts with a preexisting attribute. Returns
callable
A class decorator. See also register_dataframe_accessor
Register a custom accessor on DataFrame objects. register_series_accessor
Register a custom accessor on Series objects. register_index_accessor
Register a custom accessor on Index objects. Notes When accessed, your accessor will be initialized with the pandas object the user is interacting with. So the signature must be
def __init__(self, pandas_object): # noqa: E999
...
For consistency with pandas methods, you should raise an AttributeError if the data passed to your accessor has an incorrect dtype.
>>> pd.Series(['a', 'b']).dt
Traceback (most recent call last):
...
AttributeError: Can only use .dt accessor with datetimelike values
Examples In your library code:
import pandas as pd
@pd.api.extensions.register_dataframe_accessor("geo")
class GeoAccessor:
def __init__(self, pandas_obj):
self._obj = pandas_obj
@property
def center(self):
# return the geographic center point of this DataFrame
lat = self._obj.latitude
lon = self._obj.longitude
return (float(lon.mean()), float(lat.mean()))
def plot(self):
# plot this array's data on a map, e.g., using Cartopy
pass
Back in an interactive IPython session:
In [1]: ds = pd.DataFrame({"longitude": np.linspace(0, 10),
...: "latitude": np.linspace(0, 20)})
In [2]: ds.geo.center
Out[2]: (5.0, 10.0)
In [3]: ds.geo.plot() # plots data on a map | pandas.reference.api.pandas.api.extensions.register_series_accessor |
pandas.api.indexers.BaseIndexer classpandas.api.indexers.BaseIndexer(index_array=None, window_size=0, **kwargs)[source]
Base class for window bounds calculations. Methods
get_window_bounds([num_values, min_periods, ...]) Computes the bounds of a window. | pandas.reference.api.pandas.api.indexers.baseindexer |
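The entry above gives no example; a sketch with a hypothetical subclass (the name ExpandingIndexer is ours) shows the contract: get_window_bounds returns per-row start/end index arrays.

```python
import numpy as np
import pandas as pd
from pandas.api.indexers import BaseIndexer

class ExpandingIndexer(BaseIndexer):
    """Every window starts at row 0, so rolling behaves like an expanding sum."""

    # `step` is accepted because newer pandas versions pass it through.
    def get_window_bounds(self, num_values=0, min_periods=None,
                          center=None, closed=None, step=None):
        start = np.zeros(num_values, dtype=np.int64)
        end = np.arange(1, num_values + 1, dtype=np.int64)
        return start, end

df = pd.DataFrame({"values": range(5)})
result = df.rolling(ExpandingIndexer(window_size=1), min_periods=1).sum()
print(result["values"].tolist())  # [0.0, 1.0, 3.0, 6.0, 10.0]
```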
pandas.api.indexers.BaseIndexer.get_window_bounds BaseIndexer.get_window_bounds(num_values=0, min_periods=None, center=None, closed=None)[source]
Computes the bounds of a window. Parameters
num_values:int, default 0
number of values that will be aggregated over
window_size:int, default 0
the number of rows in a window
min_periods:int, default None
min_periods passed from the top level rolling API
center:bool, default None
center passed from the top level rolling API
closed:str, default None
closed passed from the top level rolling API
win_type:str, default None
win_type passed from the top level rolling API Returns
A tuple of ndarray[int64]s, indicating the boundaries of each
window | pandas.reference.api.pandas.api.indexers.baseindexer.get_window_bounds |
pandas.api.indexers.check_array_indexer pandas.api.indexers.check_array_indexer(array, indexer)[source]
Check if indexer is a valid array indexer for array. For a boolean mask, array and indexer are checked to have the same length. The dtype is validated, and if it is an integer or boolean ExtensionArray, it is checked if there are missing values present, and it is converted to the appropriate numpy array. Other dtypes will raise an error. Non-array indexers (integer, slice, Ellipsis, tuples, ..) are passed through as is. New in version 1.0.0. Parameters
array:array-like
The array that is being indexed (only used for the length).
indexer:array-like or list-like
The array-like that’s used to index. List-like input that is not yet a numpy array or an ExtensionArray is converted to one. Other input types are passed through as is. Returns
numpy.ndarray
The validated indexer as a numpy array that can be used to index. Raises
IndexError
When the lengths don’t match. ValueError
When indexer cannot be converted to a numpy ndarray to index (e.g. presence of missing values). See also api.types.is_bool_dtype
Check if key is of boolean dtype. Examples When checking a boolean mask, a boolean ndarray is returned when the arguments are all valid.
>>> mask = pd.array([True, False])
>>> arr = pd.array([1, 2])
>>> pd.api.indexers.check_array_indexer(arr, mask)
array([ True, False])
An IndexError is raised when the lengths don’t match.
>>> mask = pd.array([True, False, True])
>>> pd.api.indexers.check_array_indexer(arr, mask)
Traceback (most recent call last):
...
IndexError: Boolean index has wrong length: 3 instead of 2.
NA values in a boolean array are treated as False.
>>> mask = pd.array([True, pd.NA])
>>> pd.api.indexers.check_array_indexer(arr, mask)
array([ True, False])
A numpy boolean mask will get passed through (if the length is correct):
>>> mask = np.array([True, False])
>>> pd.api.indexers.check_array_indexer(arr, mask)
array([ True, False])
Similarly for integer indexers, an integer ndarray is returned when it is a valid indexer, otherwise an error is raised (for integer indexers, a matching length is not required):
>>> indexer = pd.array([0, 2], dtype="Int64")
>>> arr = pd.array([1, 2, 3])
>>> pd.api.indexers.check_array_indexer(arr, indexer)
array([0, 2])
>>> indexer = pd.array([0, pd.NA], dtype="Int64")
>>> pd.api.indexers.check_array_indexer(arr, indexer)
Traceback (most recent call last):
...
ValueError: Cannot index with an integer indexer containing NA values
For non-integer/boolean dtypes, an appropriate error is raised:
>>> indexer = np.array([0., 2.], dtype="float64")
>>> pd.api.indexers.check_array_indexer(arr, indexer)
Traceback (most recent call last):
...
IndexError: arrays used as indices must be of integer or boolean type | pandas.reference.api.pandas.api.indexers.check_array_indexer |
pandas.api.indexers.FixedForwardWindowIndexer classpandas.api.indexers.FixedForwardWindowIndexer(index_array=None, window_size=0, **kwargs)[source]
Creates window boundaries for fixed-length windows that include the current row. Examples
>>> df = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]})
>>> df
B
0 0.0
1 1.0
2 2.0
3 NaN
4 4.0
>>> indexer = pd.api.indexers.FixedForwardWindowIndexer(window_size=2)
>>> df.rolling(window=indexer, min_periods=1).sum()
B
0 1.0
1 3.0
2 2.0
3 4.0
4 4.0
Methods
get_window_bounds([num_values, min_periods, ...]) Computes the bounds of a window. | pandas.reference.api.pandas.api.indexers.fixedforwardwindowindexer |
pandas.api.indexers.FixedForwardWindowIndexer.get_window_bounds FixedForwardWindowIndexer.get_window_bounds(num_values=0, min_periods=None, center=None, closed=None)[source]
Computes the bounds of a window. Parameters
num_values:int, default 0
number of values that will be aggregated over
window_size:int, default 0
the number of rows in a window
min_periods:int, default None
min_periods passed from the top level rolling API
center:bool, default None
center passed from the top level rolling API
closed:str, default None
closed passed from the top level rolling API
win_type:str, default None
win_type passed from the top level rolling API Returns
A tuple of ndarray[int64]s, indicating the boundaries of each
window | pandas.reference.api.pandas.api.indexers.fixedforwardwindowindexer.get_window_bounds |
pandas.api.indexers.VariableOffsetWindowIndexer classpandas.api.indexers.VariableOffsetWindowIndexer(index_array=None, window_size=0, index=None, offset=None, **kwargs)[source]
Calculate window boundaries based on a non-fixed offset such as a BusinessDay. Methods
get_window_bounds([num_values, min_periods, ...]) Computes the bounds of a window. | pandas.reference.api.pandas.api.indexers.variableoffsetwindowindexer |
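The entry above gives no example; a sketch using a business-day offset (mirroring the pattern shown for FixedForwardWindowIndexer), where window sizes vary across weekends:

```python
import pandas as pd
from pandas.api.indexers import VariableOffsetWindowIndexer

df = pd.DataFrame(range(10), index=pd.date_range("2020-01-01", periods=10))
# One business day of data per window, so calendar weekends stretch the bounds
indexer = VariableOffsetWindowIndexer(index=df.index, offset=pd.offsets.BDay(1))
result = df.rolling(indexer, min_periods=1).sum()
print(result)
```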
pandas.api.indexers.VariableOffsetWindowIndexer.get_window_bounds VariableOffsetWindowIndexer.get_window_bounds(num_values=0, min_periods=None, center=None, closed=None)[source]
Computes the bounds of a window. Parameters
num_values:int, default 0
number of values that will be aggregated over
window_size:int, default 0
the number of rows in a window
min_periods:int, default None
min_periods passed from the top level rolling API
center:bool, default None
center passed from the top level rolling API
closed:str, default None
closed passed from the top level rolling API
win_type:str, default None
win_type passed from the top level rolling API Returns
A tuple of ndarray[int64]s, indicating the boundaries of each
window | pandas.reference.api.pandas.api.indexers.variableoffsetwindowindexer.get_window_bounds |
pandas.api.types.infer_dtype pandas.api.types.infer_dtype()
Efficiently infer the type of a passed val, or list-like array of values. Return a string describing the type. Parameters
value:scalar, list, ndarray, or pandas type
skipna:bool, default True
Ignore NaN values when inferring the type. Returns
str
Describing the common type of the input data. Results can include:
string
bytes
floating
integer
mixed-integer
mixed-integer-float
decimal
complex
categorical
boolean
datetime64
datetime
date
timedelta64
timedelta
time
period
mixed
unknown-array
Raises
TypeError
If ndarray-like but cannot infer the dtype. Notes ‘mixed’ is the catchall for anything that is not otherwise specialized. ‘mixed-integer-float’ are floats and integers. ‘mixed-integer’ are integers mixed with non-integers. ‘unknown-array’ is the catchall for something that is an array (has a dtype attribute), but has a dtype unknown to pandas (e.g. external extension array). Examples
>>> import datetime
>>> infer_dtype(['foo', 'bar'])
'string'
>>> infer_dtype(['a', np.nan, 'b'], skipna=True)
'string'
>>> infer_dtype(['a', np.nan, 'b'], skipna=False)
'mixed'
>>> infer_dtype([b'foo', b'bar'])
'bytes'
>>> infer_dtype([1, 2, 3])
'integer'
>>> infer_dtype([1, 2, 3.5])
'mixed-integer-float'
>>> infer_dtype([1.0, 2.0, 3.5])
'floating'
>>> infer_dtype(['a', 1])
'mixed-integer'
>>> infer_dtype([Decimal(1), Decimal(2.0)])
'decimal'
>>> infer_dtype([True, False])
'boolean'
>>> infer_dtype([True, False, np.nan])
'boolean'
>>> infer_dtype([pd.Timestamp('20130101')])
'datetime'
>>> infer_dtype([datetime.date(2013, 1, 1)])
'date'
>>> infer_dtype([np.datetime64('2013-01-01')])
'datetime64'
>>> infer_dtype([datetime.timedelta(0, 1, 1)])
'timedelta'
>>> infer_dtype(pd.Series(list('aabc')).astype('category'))
'categorical' | pandas.reference.api.pandas.api.types.infer_dtype |
pandas.api.types.is_bool pandas.api.types.is_bool()
Return True if given object is boolean. Returns
bool | pandas.reference.api.pandas.api.types.is_bool |
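The entry above has no examples; a quick sketch of what counts as boolean:

```python
import numpy as np
from pandas.api.types import is_bool

print(is_bool(True))             # True
print(is_bool(np.bool_(False)))  # True: NumPy booleans count as well
print(is_bool(1))                # False: integers are not booleans here
```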
pandas.api.types.is_bool_dtype pandas.api.types.is_bool_dtype(arr_or_dtype)[source]
Check whether the provided array or dtype is of a boolean dtype. Parameters
arr_or_dtype:array-like or dtype
The array or dtype to check. Returns
boolean
Whether or not the array or dtype is of a boolean dtype. Notes An ExtensionArray is considered boolean when the _is_boolean attribute is set to True. Examples
>>> is_bool_dtype(str)
False
>>> is_bool_dtype(int)
False
>>> is_bool_dtype(bool)
True
>>> is_bool_dtype(np.bool_)
True
>>> is_bool_dtype(np.array(['a', 'b']))
False
>>> is_bool_dtype(pd.Series([1, 2]))
False
>>> is_bool_dtype(np.array([True, False]))
True
>>> is_bool_dtype(pd.Categorical([True, False]))
True
>>> is_bool_dtype(pd.arrays.SparseArray([True, False]))
True | pandas.reference.api.pandas.api.types.is_bool_dtype |
pandas.api.types.is_categorical pandas.api.types.is_categorical(arr)[source]
Check whether an array-like is a Categorical instance. Parameters
arr:array-like
The array-like to check. Returns
boolean
Whether or not the array-like is of a Categorical instance. Examples
>>> is_categorical([1, 2, 3])
False
Categoricals, Series Categoricals, and CategoricalIndex will return True.
>>> cat = pd.Categorical([1, 2, 3])
>>> is_categorical(cat)
True
>>> is_categorical(pd.Series(cat))
True
>>> is_categorical(pd.CategoricalIndex([1, 2, 3]))
True | pandas.reference.api.pandas.api.types.is_categorical |
pandas.api.types.is_categorical_dtype pandas.api.types.is_categorical_dtype(arr_or_dtype)[source]
Check whether an array-like or dtype is of the Categorical dtype. Parameters
arr_or_dtype:array-like or dtype
The array-like or dtype to check. Returns
boolean
Whether or not the array-like or dtype is of the Categorical dtype. Examples
>>> is_categorical_dtype(object)
False
>>> is_categorical_dtype(CategoricalDtype())
True
>>> is_categorical_dtype([1, 2, 3])
False
>>> is_categorical_dtype(pd.Categorical([1, 2, 3]))
True
>>> is_categorical_dtype(pd.CategoricalIndex([1, 2, 3]))
True | pandas.reference.api.pandas.api.types.is_categorical_dtype |
pandas.api.types.is_complex pandas.api.types.is_complex()
Return True if given object is complex. Returns
bool | pandas.reference.api.pandas.api.types.is_complex |
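The entry above has no examples; a quick sketch of what counts as complex:

```python
import numpy as np
from pandas.api.types import is_complex

print(is_complex(1 + 1j))            # True
print(is_complex(np.complex128(1)))  # True: NumPy complex scalars count
print(is_complex(1.0))               # False: plain floats do not
```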
pandas.api.types.is_complex_dtype pandas.api.types.is_complex_dtype(arr_or_dtype)[source]
Check whether the provided array or dtype is of a complex dtype. Parameters
arr_or_dtype:array-like or dtype
The array or dtype to check. Returns
boolean
Whether or not the array or dtype is of a complex dtype. Examples
>>> is_complex_dtype(str)
False
>>> is_complex_dtype(int)
False
>>> is_complex_dtype(np.complex_)
True
>>> is_complex_dtype(np.array(['a', 'b']))
False
>>> is_complex_dtype(pd.Series([1, 2]))
False
>>> is_complex_dtype(np.array([1 + 1j, 5]))
True | pandas.reference.api.pandas.api.types.is_complex_dtype |
pandas.api.types.is_datetime64_any_dtype pandas.api.types.is_datetime64_any_dtype(arr_or_dtype)[source]
Check whether the provided array or dtype is of the datetime64 dtype. Parameters
arr_or_dtype:array-like or dtype
The array or dtype to check. Returns
bool
Whether or not the array or dtype is of the datetime64 dtype. Examples
>>> is_datetime64_any_dtype(str)
False
>>> is_datetime64_any_dtype(int)
False
>>> is_datetime64_any_dtype(np.datetime64) # can be tz-naive
True
>>> is_datetime64_any_dtype(DatetimeTZDtype("ns", "US/Eastern"))
True
>>> is_datetime64_any_dtype(np.array(['a', 'b']))
False
>>> is_datetime64_any_dtype(np.array([1, 2]))
False
>>> is_datetime64_any_dtype(np.array([], dtype="datetime64[ns]"))
True
>>> is_datetime64_any_dtype(pd.DatetimeIndex([1, 2, 3], dtype="datetime64[ns]"))
True | pandas.reference.api.pandas.api.types.is_datetime64_any_dtype |
pandas.api.types.is_datetime64_dtype pandas.api.types.is_datetime64_dtype(arr_or_dtype)[source]
Check whether an array-like or dtype is of the datetime64 dtype. Parameters
arr_or_dtype:array-like or dtype
The array-like or dtype to check. Returns
boolean
Whether or not the array-like or dtype is of the datetime64 dtype. Examples
>>> is_datetime64_dtype(object)
False
>>> is_datetime64_dtype(np.datetime64)
True
>>> is_datetime64_dtype(np.array([], dtype=int))
False
>>> is_datetime64_dtype(np.array([], dtype=np.datetime64))
True
>>> is_datetime64_dtype([1, 2, 3])
False | pandas.reference.api.pandas.api.types.is_datetime64_dtype |
pandas.api.types.is_datetime64_ns_dtype pandas.api.types.is_datetime64_ns_dtype(arr_or_dtype)[source]
Check whether the provided array or dtype is of the datetime64[ns] dtype. Parameters
arr_or_dtype:array-like or dtype
The array or dtype to check. Returns
bool
Whether or not the array or dtype is of the datetime64[ns] dtype. Examples
>>> is_datetime64_ns_dtype(str)
False
>>> is_datetime64_ns_dtype(int)
False
>>> is_datetime64_ns_dtype(np.datetime64) # no unit
False
>>> is_datetime64_ns_dtype(DatetimeTZDtype("ns", "US/Eastern"))
True
>>> is_datetime64_ns_dtype(np.array(['a', 'b']))
False
>>> is_datetime64_ns_dtype(np.array([1, 2]))
False
>>> is_datetime64_ns_dtype(np.array([], dtype="datetime64")) # no unit
False
>>> is_datetime64_ns_dtype(np.array([], dtype="datetime64[ps]")) # wrong unit
False
>>> is_datetime64_ns_dtype(pd.DatetimeIndex([1, 2, 3], dtype="datetime64[ns]"))
True | pandas.reference.api.pandas.api.types.is_datetime64_ns_dtype |
pandas.api.types.is_datetime64tz_dtype pandas.api.types.is_datetime64tz_dtype(arr_or_dtype)[source]
Check whether an array-like or dtype is of a DatetimeTZDtype dtype. Parameters
arr_or_dtype:array-like or dtype
The array-like or dtype to check. Returns
boolean
Whether or not the array-like or dtype is of a DatetimeTZDtype dtype. Examples
>>> is_datetime64tz_dtype(object)
False
>>> is_datetime64tz_dtype([1, 2, 3])
False
>>> is_datetime64tz_dtype(pd.DatetimeIndex([1, 2, 3])) # tz-naive
False
>>> is_datetime64tz_dtype(pd.DatetimeIndex([1, 2, 3], tz="US/Eastern"))
True
>>> dtype = DatetimeTZDtype("ns", tz="US/Eastern")
>>> s = pd.Series([], dtype=dtype)
>>> is_datetime64tz_dtype(dtype)
True
>>> is_datetime64tz_dtype(s)
True | pandas.reference.api.pandas.api.types.is_datetime64tz_dtype |
pandas.api.types.is_dict_like pandas.api.types.is_dict_like(obj)[source]
Check if the object is dict-like. Parameters
obj:The object to check
Returns
is_dict_like:bool
Whether obj has dict-like properties. Examples
>>> is_dict_like({1: 2})
True
>>> is_dict_like([1, 2, 3])
False
>>> is_dict_like(dict)
False
>>> is_dict_like(dict())
True | pandas.reference.api.pandas.api.types.is_dict_like |
pandas.api.types.is_extension_array_dtype pandas.api.types.is_extension_array_dtype(arr_or_dtype)[source]
Check if an object is a pandas extension array type. See the User Guide for more. Parameters
arr_or_dtype:object
For array-like input, the .dtype attribute will be extracted. Returns
bool
Whether the arr_or_dtype is an extension array type. Notes This checks whether an object implements the pandas extension array interface. In pandas, this includes: Categorical, Sparse, Interval, Period, DatetimeArray, and TimedeltaArray. Third-party libraries may implement arrays or types satisfying this interface as well. Examples
>>> from pandas.api.types import is_extension_array_dtype
>>> arr = pd.Categorical(['a', 'b'])
>>> is_extension_array_dtype(arr)
True
>>> is_extension_array_dtype(arr.dtype)
True
>>> arr = np.array(['a', 'b'])
>>> is_extension_array_dtype(arr.dtype)
False | pandas.reference.api.pandas.api.types.is_extension_array_dtype |
pandas.api.types.is_extension_type pandas.api.types.is_extension_type(arr)[source]
Check whether an array-like is of a pandas extension class instance. Deprecated since version 1.0.0: Use is_extension_array_dtype instead. Extension classes include categoricals, pandas sparse objects (i.e. classes represented within the pandas library and not ones external to it like scipy sparse matrices), and datetime-like arrays. Parameters
arr:array-like, scalar
The array-like to check. Returns
boolean
Whether or not the array-like is of a pandas extension class instance. Examples
>>> is_extension_type([1, 2, 3])
False
>>> is_extension_type(np.array([1, 2, 3]))
False
>>>
>>> cat = pd.Categorical([1, 2, 3])
>>>
>>> is_extension_type(cat)
True
>>> is_extension_type(pd.Series(cat))
True
>>> is_extension_type(pd.arrays.SparseArray([1, 2, 3]))
True
>>> from scipy.sparse import bsr_matrix
>>> is_extension_type(bsr_matrix([1, 2, 3]))
False
>>> is_extension_type(pd.DatetimeIndex([1, 2, 3]))
False
>>> is_extension_type(pd.DatetimeIndex([1, 2, 3], tz="US/Eastern"))
True
>>>
>>> dtype = DatetimeTZDtype("ns", tz="US/Eastern")
>>> s = pd.Series([], dtype=dtype)
>>> is_extension_type(s)
True | pandas.reference.api.pandas.api.types.is_extension_type |
pandas.api.types.is_file_like pandas.api.types.is_file_like(obj)[source]
Check if the object is a file-like object. For objects to be considered file-like, they must be an iterator AND have either a read and/or write method as an attribute. Note: file-like objects must be iterable, but iterable objects need not be file-like. Parameters
obj:The object to check
Returns
is_file_like:bool
Whether obj has file-like properties. Examples
>>> import io
>>> buffer = io.StringIO("data")
>>> is_file_like(buffer)
True
>>> is_file_like([1, 2, 3])
False | pandas.reference.api.pandas.api.types.is_file_like |
pandas.api.types.is_float pandas.api.types.is_float()
Return True if given object is float. Returns
bool | pandas.reference.api.pandas.api.types.is_float |
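The entry above has no examples; a quick sketch of what counts as float:

```python
import numpy as np
from pandas.api.types import is_float

print(is_float(3.14))             # True
print(is_float(np.float64(0.5)))  # True: NumPy float scalars count
print(is_float(1))                # False: integers are not floats
```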
pandas.api.types.is_float_dtype pandas.api.types.is_float_dtype(arr_or_dtype)[source]
Check whether the provided array or dtype is of a float dtype. This function is internal and should not be exposed in the public API. Parameters
arr_or_dtype:array-like or dtype
The array or dtype to check. Returns
boolean
Whether or not the array or dtype is of a float dtype. Examples
>>> is_float_dtype(str)
False
>>> is_float_dtype(int)
False
>>> is_float_dtype(float)
True
>>> is_float_dtype(np.array(['a', 'b']))
False
>>> is_float_dtype(pd.Series([1, 2]))
False
>>> is_float_dtype(pd.Index([1, 2.]))
True | pandas.reference.api.pandas.api.types.is_float_dtype |
pandas.api.types.is_hashable pandas.api.types.is_hashable(obj)[source]
Return True if hash(obj) will succeed, False otherwise. Some types will pass a test against collections.abc.Hashable but fail when they are actually hashed with hash(). Distinguish between these and other types by trying the call to hash() and seeing if they raise TypeError. Returns
bool
Examples
>>> import collections
>>> a = ([],)
>>> isinstance(a, collections.abc.Hashable)
True
>>> is_hashable(a)
False | pandas.reference.api.pandas.api.types.is_hashable |
pandas.api.types.is_int64_dtype pandas.api.types.is_int64_dtype(arr_or_dtype)[source]
Check whether the provided array or dtype is of the int64 dtype. Parameters
arr_or_dtype:array-like or dtype
The array or dtype to check. Returns
boolean
Whether or not the array or dtype is of the int64 dtype. Notes Depending on system architecture, the return value of is_int64_dtype(int) will be True if the OS uses 64-bit integers and False if the OS uses 32-bit integers. Examples
>>> is_int64_dtype(str)
False
>>> is_int64_dtype(np.int32)
False
>>> is_int64_dtype(np.int64)
True
>>> is_int64_dtype('int8')
False
>>> is_int64_dtype('Int8')
False
>>> is_int64_dtype(pd.Int64Dtype)
True
>>> is_int64_dtype(float)
False
>>> is_int64_dtype(np.uint64) # unsigned
False
>>> is_int64_dtype(np.array(['a', 'b']))
False
>>> is_int64_dtype(np.array([1, 2], dtype=np.int64))
True
>>> is_int64_dtype(pd.Index([1, 2.])) # float
False
>>> is_int64_dtype(np.array([1, 2], dtype=np.uint32)) # unsigned
False | pandas.reference.api.pandas.api.types.is_int64_dtype |
pandas.api.types.is_integer pandas.api.types.is_integer()
Return True if given object is integer. Returns
bool | pandas.reference.api.pandas.api.types.is_integer |
pandas.api.types.is_integer_dtype pandas.api.types.is_integer_dtype(arr_or_dtype)[source]
Check whether the provided array or dtype is of an integer dtype. Unlike in is_any_int_dtype, timedelta64 instances will return False. The nullable Integer dtypes (e.g. pandas.Int64Dtype) are also considered as integer by this function. Parameters
arr_or_dtype:array-like or dtype
The array or dtype to check. Returns
boolean
Whether or not the array or dtype is of an integer dtype and not an instance of timedelta64. Examples
>>> is_integer_dtype(str)
False
>>> is_integer_dtype(int)
True
>>> is_integer_dtype(float)
False
>>> is_integer_dtype(np.uint64)
True
>>> is_integer_dtype('int8')
True
>>> is_integer_dtype('Int8')
True
>>> is_integer_dtype(pd.Int8Dtype)
True
>>> is_integer_dtype(np.datetime64)
False
>>> is_integer_dtype(np.timedelta64)
False
>>> is_integer_dtype(np.array(['a', 'b']))
False
>>> is_integer_dtype(pd.Series([1, 2]))
True
>>> is_integer_dtype(np.array([], dtype=np.timedelta64))
False
>>> is_integer_dtype(pd.Index([1, 2.])) # float
False | pandas.reference.api.pandas.api.types.is_integer_dtype |
pandas.api.types.is_interval pandas.api.types.is_interval() | pandas.reference.api.pandas.api.types.is_interval |
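The entry above carries no docstring; in practice the function reports whether a scalar is a pd.Interval instance (a minimal sketch; note that newer pandas versions deprecate this helper):

```python
import pandas as pd
from pandas.api.types import is_interval

print(is_interval(pd.Interval(0, 1)))  # True
print(is_interval(0.5))                # False: plain scalars are not intervals
```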
pandas.api.types.is_interval_dtype pandas.api.types.is_interval_dtype(arr_or_dtype)[source]
Check whether an array-like or dtype is of the Interval dtype. Parameters
arr_or_dtype:array-like or dtype
The array-like or dtype to check. Returns
boolean
Whether or not the array-like or dtype is of the Interval dtype. Examples
>>> is_interval_dtype(object)
False
>>> is_interval_dtype(IntervalDtype())
True
>>> is_interval_dtype([1, 2, 3])
False
>>>
>>> interval = pd.Interval(1, 2, closed="right")
>>> is_interval_dtype(interval)
False
>>> is_interval_dtype(pd.IntervalIndex([interval]))
True | pandas.reference.api.pandas.api.types.is_interval_dtype |
pandas.api.types.is_iterator pandas.api.types.is_iterator()
Check if the object is an iterator. This is intended for generators, not list-like objects. Parameters
obj:The object to check
Returns
is_iter:bool
Whether obj is an iterator. Examples
>>> import datetime
>>> is_iterator((x for x in []))
True
>>> is_iterator([1, 2, 3])
False
>>> is_iterator(datetime.datetime(2017, 1, 1))
False
>>> is_iterator("foo")
False
>>> is_iterator(1)
False | pandas.reference.api.pandas.api.types.is_iterator |
pandas.api.types.is_list_like pandas.api.types.is_list_like()
Check if the object is list-like. Objects that are considered list-like are for example Python lists, tuples, sets, NumPy arrays, and Pandas Series. Strings and datetime objects, however, are not considered list-like. Parameters
obj:object
Object to check.
allow_sets:bool, default True
If this parameter is False, sets will not be considered list-like. Returns
bool
Whether obj has list-like properties. Examples
>>> import datetime
>>> is_list_like([1, 2, 3])
True
>>> is_list_like({1, 2, 3})
True
>>> is_list_like(datetime.datetime(2017, 1, 1))
False
>>> is_list_like("foo")
False
>>> is_list_like(1)
False
>>> is_list_like(np.array([2]))
True
>>> is_list_like(np.array(2))
False | pandas.reference.api.pandas.api.types.is_list_like |
pandas.api.types.is_named_tuple pandas.api.types.is_named_tuple(obj)[source]
Check if the object is a named tuple. Parameters
obj:The object to check
Returns
is_named_tuple:bool
Whether obj is a named tuple. Examples
>>> from collections import namedtuple
>>> Point = namedtuple("Point", ["x", "y"])
>>> p = Point(1, 2)
>>>
>>> is_named_tuple(p)
True
>>> is_named_tuple((1, 2))
False | pandas.reference.api.pandas.api.types.is_named_tuple |
pandas.api.types.is_number pandas.api.types.is_number(obj)[source]
Check if the object is a number. Returns True when the object is a number, and False if is not. Parameters
obj:any type
The object to check if is a number. Returns
is_number:bool
Whether obj is a number or not. See also api.types.is_integer
Checks a subgroup of numbers. Examples
>>> from pandas.api.types import is_number
>>> is_number(1)
True
>>> is_number(7.15)
True
Booleans are valid because they are a subclass of int.
>>> is_number(False)
True
>>> is_number("foo")
False
>>> is_number("5")
False | pandas.reference.api.pandas.api.types.is_number |
pandas.api.types.is_numeric_dtype pandas.api.types.is_numeric_dtype(arr_or_dtype)[source]
Check whether the provided array or dtype is of a numeric dtype. Parameters
arr_or_dtype:array-like or dtype
The array or dtype to check. Returns
boolean
Whether or not the array or dtype is of a numeric dtype. Examples
>>> is_numeric_dtype(str)
False
>>> is_numeric_dtype(int)
True
>>> is_numeric_dtype(float)
True
>>> is_numeric_dtype(np.uint64)
True
>>> is_numeric_dtype(np.datetime64)
False
>>> is_numeric_dtype(np.timedelta64)
False
>>> is_numeric_dtype(np.array(['a', 'b']))
False
>>> is_numeric_dtype(pd.Series([1, 2]))
True
>>> is_numeric_dtype(pd.Index([1, 2.]))
True
>>> is_numeric_dtype(np.array([], dtype=np.timedelta64))
False | pandas.reference.api.pandas.api.types.is_numeric_dtype |
pandas.api.types.is_object_dtype pandas.api.types.is_object_dtype(arr_or_dtype)[source]
Check whether an array-like or dtype is of the object dtype. Parameters
arr_or_dtype:array-like or dtype
The array-like or dtype to check. Returns
boolean
Whether or not the array-like or dtype is of the object dtype. Examples
>>> is_object_dtype(object)
True
>>> is_object_dtype(int)
False
>>> is_object_dtype(np.array([], dtype=object))
True
>>> is_object_dtype(np.array([], dtype=int))
False
>>> is_object_dtype([1, 2, 3])
False | pandas.reference.api.pandas.api.types.is_object_dtype |
pandas.api.types.is_period_dtype pandas.api.types.is_period_dtype(arr_or_dtype)[source]
Check whether an array-like or dtype is of the Period dtype. Parameters
arr_or_dtype:array-like or dtype
The array-like or dtype to check. Returns
boolean
Whether or not the array-like or dtype is of the Period dtype. Examples
>>> is_period_dtype(object)
False
>>> is_period_dtype(PeriodDtype(freq="D"))
True
>>> is_period_dtype([1, 2, 3])
False
>>> is_period_dtype(pd.Period("2017-01-01"))
False
>>> is_period_dtype(pd.PeriodIndex([], freq="A"))
True | pandas.reference.api.pandas.api.types.is_period_dtype |
pandas.api.types.is_re pandas.api.types.is_re(obj)[source]
Check if the object is a regex pattern instance. Parameters
obj:The object to check
Returns
is_regex:bool
Whether obj is a regex pattern. Examples
>>> is_re(re.compile(".*"))
True
>>> is_re("foo")
False | pandas.reference.api.pandas.api.types.is_re |
pandas.api.types.is_re_compilable pandas.api.types.is_re_compilable(obj)[source]
Check if the object can be compiled into a regex pattern instance. Parameters
obj:The object to check
Returns
is_regex_compilable:bool
Whether obj can be compiled as a regex pattern. Examples
>>> is_re_compilable(".*")
True
>>> is_re_compilable(1)
False | pandas.reference.api.pandas.api.types.is_re_compilable |
pandas.api.types.is_scalar pandas.api.types.is_scalar()
Return True if given object is scalar. Parameters
val:object
This includes: numpy array scalars (e.g. np.int64), Python builtin numerics, Python builtin byte arrays and strings, None, datetime.datetime, datetime.timedelta, Period, decimal.Decimal, Interval, DateOffset, Fraction, and Number. Returns
bool
Return True if given object is scalar. Examples
>>> import datetime
>>> dt = datetime.datetime(2018, 10, 3)
>>> pd.api.types.is_scalar(dt)
True
>>> pd.api.types.is_scalar([2, 3])
False
>>> pd.api.types.is_scalar({0: 1, 2: 3})
False
>>> pd.api.types.is_scalar((0, 2))
False
pandas supports PEP 3141 numbers:
>>> from fractions import Fraction
>>> pd.api.types.is_scalar(Fraction(3, 5))
True | pandas.reference.api.pandas.api.types.is_scalar |
pandas.api.types.is_signed_integer_dtype pandas.api.types.is_signed_integer_dtype(arr_or_dtype)[source]
Check whether the provided array or dtype is of a signed integer dtype. Unlike in is_any_int_dtype, timedelta64 instances will return False. The nullable Integer dtypes (e.g. pandas.Int64Dtype) are also considered as integer by this function. Parameters
arr_or_dtype:array-like or dtype
The array or dtype to check. Returns
boolean
Whether or not the array or dtype is of a signed integer dtype and not an instance of timedelta64. Examples
>>> is_signed_integer_dtype(str)
False
>>> is_signed_integer_dtype(int)
True
>>> is_signed_integer_dtype(float)
False
>>> is_signed_integer_dtype(np.uint64) # unsigned
False
>>> is_signed_integer_dtype('int8')
True
>>> is_signed_integer_dtype('Int8')
True
>>> is_signed_integer_dtype(pd.Int8Dtype)
True
>>> is_signed_integer_dtype(np.datetime64)
False
>>> is_signed_integer_dtype(np.timedelta64)
False
>>> is_signed_integer_dtype(np.array(['a', 'b']))
False
>>> is_signed_integer_dtype(pd.Series([1, 2]))
True
>>> is_signed_integer_dtype(np.array([], dtype=np.timedelta64))
False
>>> is_signed_integer_dtype(pd.Index([1, 2.])) # float
False
>>> is_signed_integer_dtype(np.array([1, 2], dtype=np.uint32)) # unsigned
False | pandas.reference.api.pandas.api.types.is_signed_integer_dtype |
pandas.api.types.is_sparse pandas.api.types.is_sparse(arr)[source]
Check whether an array-like is a 1-D pandas sparse array. Check that the one-dimensional array-like is a pandas sparse array. Returns True if it is a pandas sparse array, not another type of sparse array. Parameters
arr:array-like
Array-like to check. Returns
bool
Whether or not the array-like is a pandas sparse array. Examples Returns True if the parameter is a 1-D pandas sparse array.
>>> is_sparse(pd.arrays.SparseArray([0, 0, 1, 0]))
True
>>> is_sparse(pd.Series(pd.arrays.SparseArray([0, 0, 1, 0])))
True
Returns False if the parameter is not sparse.
>>> is_sparse(np.array([0, 0, 1, 0]))
False
>>> is_sparse(pd.Series([0, 1, 0, 0]))
False
Returns False if the parameter is not a pandas sparse array.
>>> from scipy.sparse import bsr_matrix
>>> is_sparse(bsr_matrix([0, 1, 0, 0]))
False
Returns False if the parameter has more than one dimension. | pandas.reference.api.pandas.api.types.is_sparse |
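The final claim above ("Returns False if the parameter has more than one dimension") is stated without its example; a hedged sketch of that case:

```python
import numpy as np
from pandas.api.types import is_sparse

# A 2-D structure is never a 1-D pandas sparse array, so the check
# fails even when the values are mostly zero.
result = is_sparse(np.zeros((2, 2)))
print(result)  # False
```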
pandas.api.types.is_string_dtype pandas.api.types.is_string_dtype(arr_or_dtype)[source]
Check whether the provided array or dtype is of the string dtype. Parameters
arr_or_dtype:array-like or dtype
The array or dtype to check. Returns
boolean
Whether or not the array or dtype is of the string dtype. Examples
>>> is_string_dtype(str)
True
>>> is_string_dtype(object)
True
>>> is_string_dtype(int)
False
>>>
>>> is_string_dtype(np.array(['a', 'b']))
True
>>> is_string_dtype(pd.Series([1, 2]))
False | pandas.reference.api.pandas.api.types.is_string_dtype |
pandas.api.types.is_timedelta64_dtype pandas.api.types.is_timedelta64_dtype(arr_or_dtype)[source]
Check whether an array-like or dtype is of the timedelta64 dtype. Parameters
arr_or_dtype:array-like or dtype
The array-like or dtype to check. Returns
boolean
Whether or not the array-like or dtype is of the timedelta64 dtype. Examples
>>> is_timedelta64_dtype(object)
False
>>> is_timedelta64_dtype(np.timedelta64)
True
>>> is_timedelta64_dtype([1, 2, 3])
False
>>> is_timedelta64_dtype(pd.Series([], dtype="timedelta64[ns]"))
True
>>> is_timedelta64_dtype('0 days')
False | pandas.reference.api.pandas.api.types.is_timedelta64_dtype |
pandas.api.types.is_timedelta64_ns_dtype pandas.api.types.is_timedelta64_ns_dtype(arr_or_dtype)[source]
Check whether the provided array or dtype is of the timedelta64[ns] dtype. This is a very specific dtype, so generic ones like np.timedelta64 will return False if passed into this function. Parameters
arr_or_dtype:array-like or dtype
The array or dtype to check. Returns
boolean
Whether or not the array or dtype is of the timedelta64[ns] dtype. Examples
>>> is_timedelta64_ns_dtype(np.dtype('m8[ns]'))
True
>>> is_timedelta64_ns_dtype(np.dtype('m8[ps]')) # Wrong frequency
False
>>> is_timedelta64_ns_dtype(np.array([1, 2], dtype='m8[ns]'))
True
>>> is_timedelta64_ns_dtype(np.array([1, 2], dtype=np.timedelta64))
False | pandas.reference.api.pandas.api.types.is_timedelta64_ns_dtype |
pandas.api.types.is_unsigned_integer_dtype pandas.api.types.is_unsigned_integer_dtype(arr_or_dtype)[source]
Check whether the provided array or dtype is of an unsigned integer dtype. The nullable Integer dtypes (e.g. pandas.UInt64Dtype) are also considered as integer by this function. Parameters
arr_or_dtype:array-like or dtype
The array or dtype to check. Returns
boolean
Whether or not the array or dtype is of an unsigned integer dtype. Examples
>>> is_unsigned_integer_dtype(str)
False
>>> is_unsigned_integer_dtype(int) # signed
False
>>> is_unsigned_integer_dtype(float)
False
>>> is_unsigned_integer_dtype(np.uint64)
True
>>> is_unsigned_integer_dtype('uint8')
True
>>> is_unsigned_integer_dtype('UInt8')
True
>>> is_unsigned_integer_dtype(pd.UInt8Dtype)
True
>>> is_unsigned_integer_dtype(np.array(['a', 'b']))
False
>>> is_unsigned_integer_dtype(pd.Series([1, 2])) # signed
False
>>> is_unsigned_integer_dtype(pd.Index([1, 2.])) # float
False
>>> is_unsigned_integer_dtype(np.array([1, 2], dtype=np.uint32))
True | pandas.reference.api.pandas.api.types.is_unsigned_integer_dtype |
pandas.api.types.pandas_dtype pandas.api.types.pandas_dtype(dtype)[source]
Convert input into a pandas only dtype object or a numpy dtype object. Parameters
dtype:object to be converted
Returns
np.dtype or a pandas dtype
Raises
TypeError if not a dtype | pandas.reference.api.pandas.api.types.pandas_dtype |
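The pandas_dtype entry above has no examples; a short sketch of the typical conversions:

```python
import numpy as np
import pandas as pd
from pandas.api.types import pandas_dtype

# NumPy-native aliases come back as plain numpy dtype objects.
np_result = pandas_dtype("int64")
print(np_result)  # int64

# pandas-only aliases come back as pandas extension dtypes.
cat_result = pandas_dtype("category")
print(type(cat_result).__name__)  # CategoricalDtype

# Anything that cannot be interpreted as a dtype raises TypeError.
try:
    pandas_dtype([1, 2, 3])
except TypeError:
    print("not a dtype")
```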
pandas.api.types.union_categoricals pandas.api.types.union_categoricals(to_union, sort_categories=False, ignore_order=False)[source]
Combine list-like of Categorical-like, unioning categories. All categories must have the same dtype. Parameters
to_union:list-like
Categorical, CategoricalIndex, or Series with dtype=’category’.
sort_categories:bool, default False
If true, resulting categories will be lexsorted, otherwise they will be ordered as they appear in the data.
ignore_order:bool, default False
If true, the ordered attribute of the Categoricals will be ignored. Results in an unordered categorical. Returns
Categorical
Raises
TypeError
all inputs do not have the same dtype; all inputs do not have the same ordered property; all inputs are ordered and their categories are not identical; or sort_categories=True and the Categoricals are ordered. ValueError
Empty list of categoricals passed. Notes To learn more about categoricals, see the categorical data section of the user guide. Examples
>>> from pandas.api.types import union_categoricals
If you want to combine categoricals that do not necessarily have the same categories, union_categoricals will combine a list-like of categoricals. The new categories will be the union of the categories being combined.
>>> a = pd.Categorical(["b", "c"])
>>> b = pd.Categorical(["a", "b"])
>>> union_categoricals([a, b])
['b', 'c', 'a', 'b']
Categories (3, object): ['b', 'c', 'a']
By default, the resulting categories will be ordered as they appear in the categories of the data. If you want the categories to be lexsorted, use sort_categories=True argument.
>>> union_categoricals([a, b], sort_categories=True)
['b', 'c', 'a', 'b']
Categories (3, object): ['a', 'b', 'c']
union_categoricals also works when combining two categoricals that share the same categories and order information (e.g. categoricals that you could otherwise append to one another).
>>> a = pd.Categorical(["a", "b"], ordered=True)
>>> b = pd.Categorical(["a", "b", "a"], ordered=True)
>>> union_categoricals([a, b])
['a', 'b', 'a', 'b', 'a']
Categories (2, object): ['a' < 'b']
Raises TypeError because the categories are ordered and not identical.
>>> a = pd.Categorical(["a", "b"], ordered=True)
>>> b = pd.Categorical(["a", "b", "c"], ordered=True)
>>> union_categoricals([a, b])
Traceback (most recent call last):
...
TypeError: to union ordered Categoricals, all categories must be the same
New in version 0.20.0. Ordered categoricals with different categories or orderings can be combined by using the ignore_order=True argument.
>>> a = pd.Categorical(["a", "b", "c"], ordered=True)
>>> b = pd.Categorical(["c", "b", "a"], ordered=True)
>>> union_categoricals([a, b], ignore_order=True)
['a', 'b', 'c', 'c', 'b', 'a']
Categories (3, object): ['a', 'b', 'c']
union_categoricals also works with a CategoricalIndex, or a Series containing categorical data, but note that the resulting array will always be a plain Categorical.
>>> a = pd.Series(["b", "c"], dtype='category')
>>> b = pd.Series(["a", "b"], dtype='category')
>>> union_categoricals([a, b])
['b', 'c', 'a', 'b']
Categories (3, object): ['b', 'c', 'a'] | pandas.reference.api.pandas.api.types.union_categoricals |
pandas.array pandas.array(data, dtype=None, copy=True)[source]
Create an array. Parameters
data:Sequence of objects
The scalars inside data should be instances of the scalar type for dtype. It’s expected that data represents a 1-dimensional array of data. When data is an Index or Series, the underlying array will be extracted from data.
dtype:str, np.dtype, or ExtensionDtype, optional
The dtype to use for the array. This may be a NumPy dtype or an extension type registered with pandas using pandas.api.extensions.register_extension_dtype(). If not specified, there are two possibilities: When data is a Series, Index, or ExtensionArray, the dtype will be taken from the data. Otherwise, pandas will attempt to infer the dtype from the data. Note that when data is a NumPy array, data.dtype is not used for inferring the array type. This is because NumPy cannot represent all the types of data that can be held in extension arrays. Currently, pandas will infer an extension dtype for sequences of
Scalar Type Array Type
pandas.Interval pandas.arrays.IntervalArray
pandas.Period pandas.arrays.PeriodArray
datetime.datetime pandas.arrays.DatetimeArray
datetime.timedelta pandas.arrays.TimedeltaArray
int pandas.arrays.IntegerArray
float pandas.arrays.FloatingArray
str pandas.arrays.StringArray or pandas.arrays.ArrowStringArray
bool pandas.arrays.BooleanArray The ExtensionArray created when the scalar type is str is determined by pd.options.mode.string_storage if the dtype is not explicitly given. For all other cases, NumPy’s usual inference rules will be used. Changed in version 1.0.0: Pandas infers nullable-integer dtype for integer data, string dtype for string data, and nullable-boolean dtype for boolean data. Changed in version 1.2.0: Pandas now also infers nullable-floating dtype for float-like input data
copy:bool, default True
Whether to copy the data, even if not necessary. Depending on the type of data, creating the new array may require copying data, even if copy=False. Returns
ExtensionArray
The newly created array. Raises
ValueError
When data is not 1-dimensional. See also numpy.array
Construct a NumPy array. Series
Construct a pandas Series. Index
Construct a pandas Index. arrays.PandasArray
ExtensionArray wrapping a NumPy array. Series.array
Extract the array stored within a Series. Notes Omitting the dtype argument means pandas will attempt to infer the best array type from the values in the data. As new array types are added by pandas and 3rd party libraries, the "best" array type may change. We recommend specifying dtype to ensure that (1) the correct array type for the data is returned, and (2) the returned array type doesn't change as new extension types are added by pandas and third-party libraries. Additionally, if the underlying memory representation of the returned array matters, we recommend specifying the dtype as a concrete object rather than a string alias or allowing it to be inferred. For example, a future version of pandas or a 3rd-party library may include a dedicated ExtensionArray for string data. In this event, the following would no longer return an arrays.PandasArray backed by a NumPy array.
>>> pd.array(['a', 'b'], dtype=str)
<PandasArray>
['a', 'b']
Length: 2, dtype: str32
This would instead return the new ExtensionArray dedicated for string data. If you really need the new array to be backed by a NumPy array, specify that in the dtype.
>>> pd.array(['a', 'b'], dtype=np.dtype("<U1"))
<PandasArray>
['a', 'b']
Length: 2, dtype: str32
Finally, pandas has two arrays that mostly overlap with NumPy: arrays.DatetimeArray and arrays.TimedeltaArray.
When data with a datetime64[ns] or timedelta64[ns] dtype is passed, pandas will always return a DatetimeArray or TimedeltaArray rather than a PandasArray. This is for symmetry with the case of timezone-aware data, which NumPy does not natively support.
>>> pd.array(['2015', '2016'], dtype='datetime64[ns]')
<DatetimeArray>
['2015-01-01 00:00:00', '2016-01-01 00:00:00']
Length: 2, dtype: datetime64[ns]
>>> pd.array(["1H", "2H"], dtype='timedelta64[ns]')
<TimedeltaArray>
['0 days 01:00:00', '0 days 02:00:00']
Length: 2, dtype: timedelta64[ns]
Examples If a dtype is not specified, pandas will infer the best dtype from the values. See the description of dtype for the types pandas infers.
>>> pd.array([1, 2])
<IntegerArray>
[1, 2]
Length: 2, dtype: Int64
>>> pd.array([1, 2, np.nan])
<IntegerArray>
[1, 2, <NA>]
Length: 3, dtype: Int64
>>> pd.array([1.1, 2.2])
<FloatingArray>
[1.1, 2.2]
Length: 2, dtype: Float64
>>> pd.array(["a", None, "c"])
<StringArray>
['a', <NA>, 'c']
Length: 3, dtype: string
>>> with pd.option_context("string_storage", "pyarrow"):
... arr = pd.array(["a", None, "c"])
...
>>> arr
<ArrowStringArray>
['a', <NA>, 'c']
Length: 3, dtype: string
>>> pd.array([pd.Period('2000', freq="D"), pd.Period("2000", freq="D")])
<PeriodArray>
['2000-01-01', '2000-01-01']
Length: 2, dtype: period[D]
You can use the string alias for dtype
>>> pd.array(['a', 'b', 'a'], dtype='category')
['a', 'b', 'a']
Categories (2, object): ['a', 'b']
Or specify the actual dtype
>>> pd.array(['a', 'b', 'a'],
... dtype=pd.CategoricalDtype(['a', 'b', 'c'], ordered=True))
['a', 'b', 'a']
Categories (3, object): ['a' < 'b' < 'c']
If pandas does not infer a dedicated extension type, an arrays.PandasArray is returned.
>>> pd.array([1 + 1j, 3 + 2j])
<PandasArray>
[(1+1j), (3+2j)]
Length: 2, dtype: complex128
As mentioned in the "Notes" section, new extension types may be added in the future (by pandas or 3rd party libraries), causing the return value to no longer be an arrays.PandasArray. Specify the dtype as a NumPy dtype if you need to ensure there's no future change in behavior.
>>> pd.array([1, 2], dtype=np.dtype("int32"))
<PandasArray>
[1, 2]
Length: 2, dtype: int32
data must be 1-dimensional. A ValueError is raised when the input has the wrong dimensionality.
>>> pd.array(1)
Traceback (most recent call last):
...
ValueError: Cannot pass scalar '1' to 'pandas.array'. | pandas.reference.api.pandas.array |
pandas.arrays.ArrowStringArray classpandas.arrays.ArrowStringArray(values)[source]
Extension array for string data in a pyarrow.ChunkedArray. New in version 1.2.0. Warning ArrowStringArray is considered experimental. The implementation and parts of the API may change without warning. Parameters
values:pyarrow.Array or pyarrow.ChunkedArray
The array of data. See also array
The recommended function for creating an ArrowStringArray. Series.str
The string methods are available on Series backed by an ArrowStringArray. Notes ArrowStringArray returns a BooleanArray for comparison methods. Examples
>>> pd.array(['This is', 'some text', None, 'data.'], dtype="string[pyarrow]")
<ArrowStringArray>
['This is', 'some text', <NA>, 'data.']
Length: 4, dtype: string
Attributes
None Methods
None | pandas.reference.api.pandas.arrays.arrowstringarray |
pandas.arrays.BooleanArray classpandas.arrays.BooleanArray(values, mask, copy=False)[source]
Array of boolean (True/False) data with missing values. This is a pandas Extension array for boolean data, under the hood represented by 2 numpy arrays: a boolean array with the data and a boolean array with the mask (True indicating missing). BooleanArray implements Kleene logic (sometimes called three-value logic) for logical operations. See Kleene logical operations for more. To construct a BooleanArray from generic array-like input, use pandas.array() specifying dtype="boolean" (see examples below). New in version 1.0.0. Warning BooleanArray is considered experimental. The implementation and parts of the API may change without warning. Parameters
values:numpy.ndarray
A 1-d boolean-dtype array with the data.
mask:numpy.ndarray
A 1-d boolean-dtype array indicating missing values (True indicates missing).
copy:bool, default False
Whether to copy the values and mask arrays. Returns
BooleanArray
Examples Create a BooleanArray with pandas.array():
>>> pd.array([True, False, None], dtype="boolean")
<BooleanArray>
[True, False, <NA>]
Length: 3, dtype: boolean
Attributes
None Methods
None | pandas.reference.api.pandas.arrays.booleanarray |
pandas.arrays.DatetimeArray classpandas.arrays.DatetimeArray(values, dtype=dtype('<M8[ns]'), freq=None, copy=False)[source]
Pandas ExtensionArray for tz-naive or tz-aware datetime data. Warning DatetimeArray is currently experimental, and its API may change without warning. In particular, DatetimeArray.dtype is expected to change to always be an instance of an ExtensionDtype subclass. Parameters
values:Series, Index, DatetimeArray, ndarray
The datetime data. For DatetimeArray values (or a Series or Index boxing one), dtype and freq will be extracted from values.
dtype:numpy.dtype or DatetimeTZDtype
Note that the only NumPy dtype allowed is ‘datetime64[ns]’.
freq:str or Offset, optional
The frequency.
copy:bool, default False
Whether to copy the underlying array of values. Attributes
None Methods
None | pandas.reference.api.pandas.arrays.datetimearray |
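The DatetimeArray entry above lacks examples; a minimal sketch constructing one through pd.array() (the recommended path, rather than the experimental constructor):

```python
import pandas as pd

# Passing datetime64[ns] data to pd.array yields a DatetimeArray.
arr = pd.array(pd.to_datetime(["2015-01-01", "2016-01-01"]))
print(type(arr).__name__)  # DatetimeArray
print(arr.dtype)
```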
pandas.arrays.IntegerArray classpandas.arrays.IntegerArray(values, mask, copy=False)[source]
Array of integer (optional missing) values. Changed in version 1.0.0: Now uses pandas.NA as the missing value rather than numpy.nan. Warning IntegerArray is currently experimental, and its API or internal implementation may change without warning. We represent an IntegerArray with 2 numpy arrays: data: contains a numpy integer array of the appropriate dtype mask: a boolean array holding a mask on the data, True is missing To construct an IntegerArray from generic array-like input, use pandas.array() with one of the integer dtypes (see examples). See Nullable integer data type for more. Parameters
values:numpy.ndarray
A 1-d integer-dtype array.
mask:numpy.ndarray
A 1-d boolean-dtype array indicating missing values.
copy:bool, default False
Whether to copy the values and mask. Returns
IntegerArray
Examples Create an IntegerArray with pandas.array().
>>> int_array = pd.array([1, None, 3], dtype=pd.Int32Dtype())
>>> int_array
<IntegerArray>
[1, <NA>, 3]
Length: 3, dtype: Int32
String aliases for the dtypes are also available. They are capitalized.
>>> pd.array([1, None, 3], dtype='Int32')
<IntegerArray>
[1, <NA>, 3]
Length: 3, dtype: Int32
>>> pd.array([1, None, 3], dtype='UInt16')
<IntegerArray>
[1, <NA>, 3]
Length: 3, dtype: UInt16
Attributes
None Methods
None | pandas.reference.api.pandas.arrays.integerarray |
pandas.arrays.IntervalArray classpandas.arrays.IntervalArray(data, closed=None, dtype=None, copy=False, verify_integrity=True)[source]
Pandas array for interval data that are closed on the same side. New in version 0.24.0. Parameters
data:array-like (1-dimensional)
Array-like containing Interval objects from which to build the IntervalArray.
closed:{‘left’, ‘right’, ‘both’, ‘neither’}, default ‘right’
Whether the intervals are closed on the left-side, right-side, both or neither.
dtype:dtype or None, default None
If None, dtype will be inferred.
copy:bool, default False
Copy the input data.
verify_integrity:bool, default True
Verify that the IntervalArray is valid. See also Index
The base pandas Index type. Interval
A bounded slice-like interval; the elements of an IntervalArray. interval_range
Function to create a fixed frequency IntervalIndex. cut
Bin values into discrete Intervals. qcut
Bin values into equal-sized Intervals based on rank or sample quantiles. Notes See the user guide for more. Examples A new IntervalArray can be constructed directly from an array-like of Interval objects:
>>> pd.arrays.IntervalArray([pd.Interval(0, 1), pd.Interval(1, 5)])
<IntervalArray>
[(0, 1], (1, 5]]
Length: 2, dtype: interval[int64, right]
It may also be constructed using one of the constructor methods: IntervalArray.from_arrays(), IntervalArray.from_breaks(), and IntervalArray.from_tuples(). Attributes
left Return the left endpoints of each Interval in the IntervalArray as an Index.
right Return the right endpoints of each Interval in the IntervalArray as an Index.
closed Whether the intervals are closed on the left-side, right-side, both or neither.
mid Return the midpoint of each Interval in the IntervalArray as an Index.
length Return an Index with entries denoting the length of each Interval in the IntervalArray.
is_empty Indicates if an interval is empty, meaning it contains no points.
is_non_overlapping_monotonic Return True if the IntervalArray is non-overlapping (no Intervals share points) and is either monotonic increasing or monotonic decreasing, else False. Methods
from_arrays(left, right[, closed, copy, dtype]) Construct from two arrays defining the left and right bounds.
from_tuples(data[, closed, copy, dtype]) Construct an IntervalArray from an array-like of tuples.
from_breaks(breaks[, closed, copy, dtype]) Construct an IntervalArray from an array of splits.
contains(other) Check elementwise if the Intervals contain the value.
overlaps(other) Check elementwise if an Interval overlaps the values in the IntervalArray.
set_closed(closed) Return an IntervalArray identical to the current one, but closed on the specified side.
to_tuples([na_tuple]) Return an ndarray of tuples of the form (left, right). | pandas.reference.api.pandas.arrays.intervalarray |
pandas.arrays.IntervalArray.closed propertyIntervalArray.closed
Whether the intervals are closed on the left-side, right-side, both or neither. | pandas.reference.api.pandas.arrays.intervalarray.closed |
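No example accompanies the closed property above; a quick sketch:

```python
import pandas as pd

# from_breaks defaults to right-closed intervals, reported as a string.
arr = pd.arrays.IntervalArray.from_breaks([0, 1, 2])
print(arr.closed)  # right

left_arr = pd.arrays.IntervalArray.from_breaks([0, 1, 2], closed="left")
print(left_arr.closed)  # left
```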
pandas.arrays.IntervalArray.contains IntervalArray.contains(other)[source]
Check elementwise if the Intervals contain the value. Return a boolean mask whether the value is contained in the Intervals of the IntervalArray. New in version 0.25.0. Parameters
other:scalar
The value to check whether it is contained in the Intervals. Returns
boolean array
See also Interval.contains
Check whether Interval object contains value. IntervalArray.overlaps
Check if an Interval overlaps the values in the IntervalArray. Examples
>>> intervals = pd.arrays.IntervalArray.from_tuples([(0, 1), (1, 3), (2, 4)])
>>> intervals
<IntervalArray>
[(0, 1], (1, 3], (2, 4]]
Length: 3, dtype: interval[int64, right]
>>> intervals.contains(0.5)
array([ True, False, False]) | pandas.reference.api.pandas.arrays.intervalarray.contains |
pandas.arrays.IntervalArray.from_arrays classmethodIntervalArray.from_arrays(left, right, closed='right', copy=False, dtype=None)[source]
Construct from two arrays defining the left and right bounds. Parameters
left:array-like (1-dimensional)
Left bounds for each interval.
right:array-like (1-dimensional)
Right bounds for each interval.
closed:{‘left’, ‘right’, ‘both’, ‘neither’}, default ‘right’
Whether the intervals are closed on the left-side, right-side, both or neither.
copy:bool, default False
Copy the data.
dtype:dtype, optional
If None, dtype will be inferred. Returns
IntervalArray
Raises
ValueError
When a value is missing in only one of left or right. When a value in left is greater than the corresponding value in right. See also interval_range
Function to create a fixed frequency IntervalIndex. IntervalArray.from_breaks
Construct an IntervalArray from an array of splits. IntervalArray.from_tuples
Construct an IntervalArray from an array-like of tuples. Notes Each element of left must be less than or equal to the right element at the same position. If an element is missing, it must be missing in both left and right. A TypeError is raised when using an unsupported type for left or right. At the moment, ‘category’, ‘object’, and ‘string’ subtypes are not supported. Examples
>>> pd.arrays.IntervalArray.from_arrays([0, 1, 2], [1, 2, 3])
<IntervalArray>
[(0, 1], (1, 2], (2, 3]]
Length: 3, dtype: interval[int64, right] | pandas.reference.api.pandas.arrays.intervalarray.from_arrays |
pandas.arrays.IntervalArray.from_breaks classmethodIntervalArray.from_breaks(breaks, closed='right', copy=False, dtype=None)[source]
Construct an IntervalArray from an array of splits. Parameters
breaks:array-like (1-dimensional)
Left and right bounds for each interval.
closed:{‘left’, ‘right’, ‘both’, ‘neither’}, default ‘right’
Whether the intervals are closed on the left-side, right-side, both or neither.
copy:bool, default False
Copy the data.
dtype:dtype or None, default None
If None, dtype will be inferred. Returns
IntervalArray
See also interval_range
Function to create a fixed frequency IntervalIndex. IntervalArray.from_arrays
Construct from a left and right array. IntervalArray.from_tuples
Construct from a sequence of tuples. Examples
>>> pd.arrays.IntervalArray.from_breaks([0, 1, 2, 3])
<IntervalArray>
[(0, 1], (1, 2], (2, 3]]
Length: 3, dtype: interval[int64, right] | pandas.reference.api.pandas.arrays.intervalarray.from_breaks |
pandas.arrays.IntervalArray.from_tuples classmethodIntervalArray.from_tuples(data, closed='right', copy=False, dtype=None)[source]
Construct an IntervalArray from an array-like of tuples. Parameters
data:array-like (1-dimensional)
Array of tuples.
closed:{‘left’, ‘right’, ‘both’, ‘neither’}, default ‘right’
Whether the intervals are closed on the left-side, right-side, both or neither.
copy:bool, default False
By default, copy the data; this parameter exists only for compatibility and is ignored.
dtype:dtype or None, default None
If None, dtype will be inferred. Returns
IntervalArray
See also interval_range
Function to create a fixed frequency IntervalIndex. IntervalArray.from_arrays
Construct an IntervalArray from a left and right array. IntervalArray.from_breaks
Construct an IntervalArray from an array of splits. Examples
>>> pd.arrays.IntervalArray.from_tuples([(0, 1), (1, 2)])
<IntervalArray>
[(0, 1], (1, 2]]
Length: 2, dtype: interval[int64, right] | pandas.reference.api.pandas.arrays.intervalarray.from_tuples |
pandas.arrays.IntervalArray.is_empty IntervalArray.is_empty
Indicates if an interval is empty, meaning it contains no points. New in version 0.25.0. Returns
bool or ndarray
A boolean indicating if a scalar Interval is empty, or a boolean ndarray positionally indicating if an Interval in an IntervalArray or IntervalIndex is empty. Examples An Interval that contains points is not empty:
>>> pd.Interval(0, 1, closed='right').is_empty
False
An Interval that does not contain any points is empty:
>>> pd.Interval(0, 0, closed='right').is_empty
True
>>> pd.Interval(0, 0, closed='left').is_empty
True
>>> pd.Interval(0, 0, closed='neither').is_empty
True
An Interval that contains a single point is not empty:
>>> pd.Interval(0, 0, closed='both').is_empty
False
An IntervalArray or IntervalIndex returns a boolean ndarray positionally indicating if an Interval is empty:
>>> ivs = [pd.Interval(0, 0, closed='neither'),
... pd.Interval(1, 2, closed='neither')]
>>> pd.arrays.IntervalArray(ivs).is_empty
array([ True, False])
Missing values are not considered empty:
>>> ivs = [pd.Interval(0, 0, closed='neither'), np.nan]
>>> pd.IntervalIndex(ivs).is_empty
array([ True, False]) | pandas.reference.api.pandas.arrays.intervalarray.is_empty |
pandas.arrays.IntervalArray.is_non_overlapping_monotonic propertyIntervalArray.is_non_overlapping_monotonic
Return True if the IntervalArray is non-overlapping (no Intervals share points) and is either monotonic increasing or monotonic decreasing, else False. | pandas.reference.api.pandas.arrays.intervalarray.is_non_overlapping_monotonic |
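A short sketch (not from the original docs) illustrating the property: an array built from breaks is adjacent and increasing, while intervals that share interior points are overlapping.

```python
import pandas as pd

# Breaks produce adjacent, non-overlapping, increasing intervals:
# (0, 1], (1, 2], (2, 3]
increasing = pd.arrays.IntervalArray.from_breaks([0, 1, 2, 3])
print(increasing.is_non_overlapping_monotonic)  # True

# (0, 2] and (1, 3] share the points in (1, 2], so they overlap
overlapping = pd.arrays.IntervalArray.from_tuples([(0, 2), (1, 3)])
print(overlapping.is_non_overlapping_monotonic)  # False
```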
pandas.arrays.IntervalArray.left propertyIntervalArray.left
Return the left endpoints of each Interval in the IntervalArray as an Index. | pandas.reference.api.pandas.arrays.intervalarray.left |
pandas.arrays.IntervalArray.length propertyIntervalArray.length
Return an Index with entries denoting the length of each Interval in the IntervalArray. | pandas.reference.api.pandas.arrays.intervalarray.length |
pandas.arrays.IntervalArray.mid propertyIntervalArray.mid
Return the midpoint of each Interval in the IntervalArray as an Index. | pandas.reference.api.pandas.arrays.intervalarray.mid |
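The endpoint-derived properties above (left, length, mid) can be seen together on one array; right behaves analogously. A quick sketch:

```python
import pandas as pd

arr = pd.arrays.IntervalArray.from_tuples([(0, 1), (2, 5)])
print(arr.left)    # left endpoints -> 0, 2
print(arr.length)  # right - left   -> 1, 3
print(arr.mid)     # midpoints      -> 0.5, 3.5
```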
pandas.arrays.IntervalArray.overlaps IntervalArray.overlaps(other)[source]
Check elementwise if an Interval overlaps the values in the IntervalArray. Two intervals overlap if they share a common point, including closed endpoints. Intervals that only have an open endpoint in common do not overlap. Parameters
other:IntervalArray
Interval to check against for an overlap. Returns
ndarray
Boolean array positionally indicating where an overlap occurs. See also Interval.overlaps
Check whether two Interval objects overlap. Examples
>>> data = [(0, 1), (1, 3), (2, 4)]
>>> intervals = pd.arrays.IntervalArray.from_tuples(data)
>>> intervals
<IntervalArray>
[(0, 1], (1, 3], (2, 4]]
Length: 3, dtype: interval[int64, right]
>>> intervals.overlaps(pd.Interval(0.5, 1.5))
array([ True, True, False])
Intervals that share closed endpoints overlap:
>>> intervals.overlaps(pd.Interval(1, 3, closed='left'))
array([ True, True, True])
Intervals that only have an open endpoint in common do not overlap:
>>> intervals.overlaps(pd.Interval(1, 2, closed='right'))
array([False, True, False]) | pandas.reference.api.pandas.arrays.intervalarray.overlaps |
pandas.arrays.IntervalArray.right propertyIntervalArray.right
Return the right endpoints of each Interval in the IntervalArray as an Index. | pandas.reference.api.pandas.arrays.intervalarray.right |
pandas.arrays.IntervalArray.set_closed IntervalArray.set_closed(closed)[source]
Return an IntervalArray identical to the current one, but closed on the specified side. Parameters
closed:{‘left’, ‘right’, ‘both’, ‘neither’}
Whether the intervals are closed on the left-side, right-side, both or neither. Returns
new_index:IntervalArray
Examples
>>> index = pd.arrays.IntervalArray.from_breaks(range(4))
>>> index
<IntervalArray>
[(0, 1], (1, 2], (2, 3]]
Length: 3, dtype: interval[int64, right]
>>> index.set_closed('both')
<IntervalArray>
[[0, 1], [1, 2], [2, 3]]
Length: 3, dtype: interval[int64, both] | pandas.reference.api.pandas.arrays.intervalarray.set_closed |
pandas.arrays.IntervalArray.to_tuples IntervalArray.to_tuples(na_tuple=True)[source]
Return an ndarray of tuples of the form (left, right). Parameters
na_tuple:bool, default True
If True, return NA as the tuple (nan, nan); if False, return NA as the scalar nan itself. Returns
tuples: ndarray | pandas.reference.api.pandas.arrays.intervalarray.to_tuples |
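A small sketch (not from the original docs) of the na_tuple behavior described above:

```python
import numpy as np
import pandas as pd

arr = pd.arrays.IntervalArray.from_tuples([(0, 1), np.nan])

# Default: the missing interval comes back as a (nan, nan) tuple
with_tuple = arr.to_tuples(na_tuple=True)
print(with_tuple[1])   # (nan, nan)

# na_tuple=False: the missing interval comes back as a bare nan
as_scalar = arr.to_tuples(na_tuple=False)
print(as_scalar[1])    # nan
```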
pandas.arrays.PandasArray classpandas.arrays.PandasArray(values, copy=False)[source]
A pandas ExtensionArray for NumPy data. This is mostly for internal compatibility, and is not especially useful on its own. Parameters
values:ndarray
The NumPy ndarray to wrap. Must be 1-dimensional.
copy:bool, default False
Whether to copy values. Attributes
None Methods
None | pandas.reference.api.pandas.arrays.pandasarray |
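A minimal sketch of wrapping a NumPy array, assuming a pandas version where pd.arrays.PandasArray is still the public name (later versions rename it NumpyExtensionArray):

```python
import numpy as np
import pandas as pd

# Wrap a 1-D NumPy ndarray; the data is not copied by default.
arr = pd.arrays.PandasArray(np.array([1, 2, 3]))
print(list(arr))     # [1, 2, 3]
print(arr.dtype)     # int64 (a thin wrapper around the NumPy dtype)
```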
pandas.arrays.PeriodArray classpandas.arrays.PeriodArray(values, dtype=None, freq=None, copy=False)[source]
Pandas ExtensionArray for storing Period data. Users should use period_array() to create new instances. Alternatively, array() can be used to create new instances from a sequence of Period scalars. Parameters
values:Union[PeriodArray, Series[period], ndarray[int], PeriodIndex]
The data to store. These should be arrays that can be directly converted to ordinals without inference or copy (PeriodArray, ndarray[int64]), or a box around such an array (Series[period], PeriodIndex).
dtype:PeriodDtype, optional
A PeriodDtype instance from which to extract a freq. If both freq and dtype are specified, then the frequencies must match.
freq:str or DateOffset
The freq to use for the array. Mostly applicable when values is an ndarray of integers, when freq is required. When values is a PeriodArray (or box around), it’s checked that values.freq matches freq.
copy:bool, default False
Whether to copy the ordinals before storing. See also Period
Represents a period of time. PeriodIndex
Immutable Index for period data. period_range
Create a fixed-frequency PeriodArray. array
Construct a pandas array. Notes There are two components to a PeriodArray: ordinals (an integer ndarray) and freq (a pd.tseries.offsets.Offset). The values are physically stored as a 1-D ndarray of integers. These are called “ordinals” and represent some kind of offset from a base. The freq indicates the span covered by each element of the array. All elements in the PeriodArray have the same freq. Attributes
None Methods
None | pandas.reference.api.pandas.arrays.periodarray |
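Since the docstring advises against calling the constructor directly, a sketch (not from the original docs) using pd.array with a period dtype, which is a supported way to obtain a PeriodArray:

```python
import pandas as pd

# Monthly periods; the freq is carried by the dtype.
arr = pd.array(['2021-01', '2021-02', '2021-03'], dtype='period[M]')
print(type(arr).__name__)   # PeriodArray
print(arr[1])               # Period('2021-02', 'M')
print(arr.asi8)             # the underlying integer ordinals
```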
pandas.arrays.SparseArray classpandas.arrays.SparseArray(data, sparse_index=None, index=None, fill_value=None, kind='integer', dtype=None, copy=False)[source]
An ExtensionArray for storing sparse data. Parameters
data:array-like or scalar
A dense array of values to store in the SparseArray. This may contain fill_value.
sparse_index:SparseIndex, optional
index:Index
Deprecated since version 1.4.0: Use a function like np.full to construct an array with the desired repeats of the scalar value instead.
fill_value:scalar, optional
Elements in data that are fill_value are not stored in the SparseArray. For memory savings, this should be the most common value in data. By default, fill_value depends on the dtype of data:
data.dtype na_value
float np.nan
int 0
bool False
datetime64 pd.NaT
timedelta64 pd.NaT
The fill value is potentially specified in three ways. In order of precedence, these are:
1. the fill_value argument;
2. dtype.fill_value, if fill_value is None and dtype is a SparseDtype;
3. data.dtype.fill_value, if fill_value is None, dtype is not a SparseDtype, and data is a SparseArray.
kind:str
Can be ‘integer’ or ‘block’, default is ‘integer’. The type of storage for sparse locations. ‘block’: Stores a block and block_length for each contiguous span of sparse values. This is best when sparse data tends to be clumped together, with large regions of fill-value values between sparse values. ‘integer’: uses an integer to store the location of each sparse value.
dtype:np.dtype or SparseDtype, optional
The dtype to use for the SparseArray. For numpy dtypes, this determines the dtype of self.sp_values. For SparseDtype, this determines self.sp_values and self.fill_value.
copy:bool, default False
Whether to explicitly copy the incoming data array. Examples
>>> from pandas.arrays import SparseArray
>>> arr = SparseArray([0, 0, 1, 2])
>>> arr
[0, 0, 1, 2]
Fill: 0
IntIndex
Indices: array([2, 3], dtype=int32)
Attributes
None Methods
None | pandas.reference.api.pandas.arrays.sparsearray |
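A sketch (not from the original docs) of the default fill_value for float data: nan is the fill, so only the non-missing values are physically stored.

```python
import numpy as np
import pandas as pd

dense = np.array([1.0, np.nan, np.nan, 3.0])
sparse = pd.arrays.SparseArray(dense)   # fill_value defaults to nan
print(sparse.sp_values)                 # only the stored values: [1. 3.]
print(sparse.density)                   # 0.5 (2 stored out of 4)
print(np.isnan(sparse.fill_value))      # True
```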
pandas.arrays.StringArray classpandas.arrays.StringArray(values, copy=False)[source]
Extension array for string data. New in version 1.0.0. Warning StringArray is considered experimental. The implementation and parts of the API may change without warning. Parameters
values:array-like
The array of data. Warning Currently, this expects an object-dtype ndarray where the elements are Python strings or pandas.NA. This may change without warning in the future. Use pandas.array() with dtype="string" for a stable way of creating a StringArray from any sequence.
copy:bool, default False
Whether to copy the array of data. See also array
The recommended function for creating a StringArray. Series.str
The string methods are available on Series backed by a StringArray. Notes StringArray returns a BooleanArray for comparison methods. Examples
>>> pd.array(['This is', 'some text', None, 'data.'], dtype="string")
<StringArray>
['This is', 'some text', <NA>, 'data.']
Length: 4, dtype: string
Unlike arrays instantiated with dtype="object", StringArray will convert the values to strings.
>>> pd.array(['1', 1], dtype="object")
<PandasArray>
['1', 1]
Length: 2, dtype: object
>>> pd.array(['1', 1], dtype="string")
<StringArray>
['1', '1']
Length: 2, dtype: string
However, instantiating StringArrays directly with non-strings will raise an error. For comparison methods, StringArray returns a pandas.BooleanArray:
>>> pd.array(["a", None, "c"], dtype="string") == "a"
<BooleanArray>
[True, <NA>, False]
Length: 3, dtype: boolean
Attributes
None Methods
None | pandas.reference.api.pandas.arrays.stringarray |
pandas.arrays.TimedeltaArray classpandas.arrays.TimedeltaArray(values, dtype=dtype('<m8[ns]'), freq=NoDefault.no_default, copy=False)[source]
Pandas ExtensionArray for timedelta data. Warning TimedeltaArray is currently experimental, and its API may change without warning. In particular, TimedeltaArray.dtype is expected to change to be an instance of an ExtensionDtype subclass. Parameters
values:array-like
The timedelta data.
dtype:numpy.dtype
Currently, only numpy.dtype("timedelta64[ns]") is accepted.
freq:Offset, optional
copy:bool, default False
Whether to copy the underlying array of data. Attributes
None Methods
None | pandas.reference.api.pandas.arrays.timedeltaarray |
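Since TimedeltaArray is experimental and not meant to be constructed directly, a sketch (not from the original docs) of obtaining one through pd.array on timedelta data:

```python
import pandas as pd

# pd.array on a TimedeltaIndex yields a TimedeltaArray.
arr = pd.array(pd.to_timedelta(['1 day', '2 days 12:00:00']))
print(type(arr).__name__)   # TimedeltaArray
print(arr.dtype)            # timedelta64[ns]
```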
pandas.bdate_range pandas.bdate_range(start=None, end=None, periods=None, freq='B', tz=None, normalize=True, name=None, weekmask=None, holidays=None, closed=NoDefault.no_default, inclusive=None, **kwargs)[source]
Return a fixed frequency DatetimeIndex, with business day as the default frequency. Parameters
start:str or datetime-like, default None
Left bound for generating dates.
end:str or datetime-like, default None
Right bound for generating dates.
periods:int, default None
Number of periods to generate.
freq:str or DateOffset, default ‘B’ (business daily)
Frequency strings can have multiples, e.g. ‘5H’.
tz:str or None
Time zone name for returning localized DatetimeIndex, for example Asia/Beijing.
normalize:bool, default True
Normalize start/end dates to midnight before generating date range.
name:str, default None
Name of the resulting DatetimeIndex.
weekmask:str or None, default None
Weekmask of valid business days, passed to numpy.busdaycalendar, only used when custom frequency strings are passed. The default value None is equivalent to ‘Mon Tue Wed Thu Fri’.
holidays:list-like or None, default None
Dates to exclude from the set of valid business days, passed to numpy.busdaycalendar, only used when custom frequency strings are passed.
closed:str, default None
Make the interval closed with respect to the given frequency to the ‘left’, ‘right’, or both sides (None). Deprecated since version 1.4.0: Argument closed has been deprecated to standardize boundary inputs. Use inclusive instead, to set each bound as closed or open.
inclusive:{“both”, “neither”, “left”, “right”}, default “both”
Include boundaries; whether to set each bound as closed or open. New in version 1.4.0. **kwargs
For compatibility. Has no effect on the result. Returns
DatetimeIndex
Notes Of the four parameters: start, end, periods, and freq, exactly three must be specified. Specifying freq is a requirement for bdate_range. Use date_range if specifying freq is not desired. To learn more about the frequency strings, please see this link. Examples Note how the two weekend days are skipped in the result.
>>> pd.bdate_range(start='1/1/2018', end='1/08/2018')
DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04',
'2018-01-05', '2018-01-08'],
dtype='datetime64[ns]', freq='B') | pandas.reference.api.pandas.bdate_range |
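Two further sketches (not from the original docs): starting on a weekend, and passing a weekmask, which per the parameter descriptions above is only honored with a custom frequency string such as 'C':

```python
import pandas as pd

# 2018-01-06 is a Saturday, so generation starts on the next Monday.
idx = pd.bdate_range(start='2018-01-06', periods=3)
print(list(idx.strftime('%Y-%m-%d')))
# ['2018-01-08', '2018-01-09', '2018-01-10']

# Custom business days: only Mondays, Wednesdays, and Fridays.
custom = pd.bdate_range(start='2018-01-01', periods=3, freq='C',
                        weekmask='Mon Wed Fri')
print(list(custom.strftime('%Y-%m-%d')))
# ['2018-01-01', '2018-01-03', '2018-01-05']
```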
pandas.BooleanDtype classpandas.BooleanDtype[source]
Extension dtype for boolean data. New in version 1.0.0. Warning BooleanDtype is considered experimental. The implementation and parts of the API may change without warning. Examples
>>> pd.BooleanDtype()
BooleanDtype
Attributes
None Methods
None | pandas.reference.api.pandas.booleandtype |
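A short sketch (not from the original docs) showing what the dtype is for: unlike NumPy's bool dtype, boolean arrays with this dtype can hold missing values.

```python
import pandas as pd

arr = pd.array([True, False, None], dtype=pd.BooleanDtype())
print(arr)          # [True, False, <NA>]
print(arr.dtype)    # boolean
```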
pandas.Categorical classpandas.Categorical(values, categories=None, ordered=None, dtype=None, fastpath=False, copy=True)[source]
Represent a categorical variable in classic R / S-plus fashion. Categoricals can take on only a limited, and usually fixed, number of possible values (categories). In contrast to statistical categorical variables, a Categorical might have an order, but numerical operations (additions, divisions, …) are not possible. All values of the Categorical are either in categories or np.nan. Assigning values outside of categories will raise a ValueError. Order is defined by the order of the categories, not lexical order of the values. Parameters
values:list-like
The values of the categorical. If categories are given, values not in categories will be replaced with NaN.
categories:Index-like (unique), optional
The unique categories for this categorical. If not given, the categories are assumed to be the unique values of values (sorted, if possible, otherwise in the order in which they appear).
ordered:bool, default False
Whether or not this categorical is treated as an ordered categorical. If True, the resulting categorical will be ordered. An ordered categorical respects, when sorted, the order of its categories attribute (which in turn is the categories argument, if provided).
dtype:CategoricalDtype
An instance of CategoricalDtype to use for this categorical. Raises
ValueError
If the categories do not validate. TypeError
If an explicit ordered=True is given but no categories and the values are not sortable. See also CategoricalDtype
Type for categorical data. CategoricalIndex
An Index with an underlying Categorical. Notes See the user guide for more. Examples
>>> pd.Categorical([1, 2, 3, 1, 2, 3])
[1, 2, 3, 1, 2, 3]
Categories (3, int64): [1, 2, 3]
>>> pd.Categorical(['a', 'b', 'c', 'a', 'b', 'c'])
['a', 'b', 'c', 'a', 'b', 'c']
Categories (3, object): ['a', 'b', 'c']
Missing values are not included as a category.
>>> c = pd.Categorical([1, 2, 3, 1, 2, 3, np.nan])
>>> c
[1, 2, 3, 1, 2, 3, NaN]
Categories (3, int64): [1, 2, 3]
However, their presence is indicated in the codes attribute by code -1.
>>> c.codes
array([ 0, 1, 2, 0, 1, 2, -1], dtype=int8)
Ordered Categoricals can be sorted according to the custom order of the categories and can have a min and max value.
>>> c = pd.Categorical(['a', 'b', 'c', 'a', 'b', 'c'], ordered=True,
... categories=['c', 'b', 'a'])
>>> c
['a', 'b', 'c', 'a', 'b', 'c']
Categories (3, object): ['c' < 'b' < 'a']
>>> c.min()
'c'
Attributes
categories The categories of this categorical.
codes The category codes of this categorical.
ordered Whether the categories have an ordered relationship.
dtype The CategoricalDtype for this instance. Methods
from_codes(codes[, categories, ordered, dtype]) Make a Categorical type from codes and categories or dtype.
__array__([dtype]) The numpy array interface. | pandas.reference.api.pandas.categorical |
pandas.Categorical.__array__ Categorical.__array__(dtype=None)[source]
The numpy array interface. Returns
numpy.array
A numpy array of either the specified dtype or, if dtype==None (default), the same dtype as categorical.categories.dtype. | pandas.reference.api.pandas.categorical.__array__ |
pandas.Categorical.categories propertyCategorical.categories
The categories of this categorical. Setting assigns new values to each category (effectively a rename of each individual category). The assigned value has to be a list-like object. All items must be unique and the number of items in the new categories must be the same as the number of items in the old categories. Assigning to categories is an inplace operation! Raises
ValueError
If the new categories do not validate as categories or if the number of new categories does not equal the number of old categories. See also rename_categories
Rename categories. reorder_categories
Reorder categories. add_categories
Add new categories. remove_categories
Remove the specified categories. remove_unused_categories
Remove categories which are not used. set_categories
Set the categories to the specified ones. | pandas.reference.api.pandas.categorical.categories |
pandas.Categorical.codes propertyCategorical.codes
The category codes of this categorical. Codes are an array of integers which are the positions of the actual values in the categories array. There is no setter, use the other categorical methods and the normal item setter to change values in the categorical. Returns
ndarray[int]
A non-writable view of the codes array. | pandas.reference.api.pandas.categorical.codes |
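A quick sketch (not from the original docs) of the codes/categories relationship described above:

```python
import pandas as pd

cat = pd.Categorical(['a', 'b', 'a', 'c'])
print(cat.categories)   # Index(['a', 'b', 'c'], dtype='object')
print(cat.codes)        # array([0, 1, 0, 2], dtype=int8)
```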
pandas.Categorical.dtype propertyCategorical.dtype
The CategoricalDtype for this instance. | pandas.reference.api.pandas.categorical.dtype |
pandas.Categorical.from_codes classmethodCategorical.from_codes(codes, categories=None, ordered=None, dtype=None)[source]
Make a Categorical type from codes and categories or dtype. This constructor is useful if you already have codes and categories/dtype and so do not need the (computationally intensive) factorization step, which is usually done on the constructor. If your data does not follow this convention, please use the normal constructor. Parameters
codes:array-like of int
An integer array, where each integer points to a category in categories or dtype.categories, or else is -1 for NaN.
categories:index-like, optional
The categories for the categorical. Items need to be unique. If the categories are not given here, then they must be provided in dtype.
ordered:bool, optional
Whether or not this categorical is treated as an ordered categorical. If not given here or in dtype, the resulting categorical will be unordered.
dtype:CategoricalDtype or “category”, optional
If CategoricalDtype, cannot be used together with categories or ordered. Returns
Categorical
Examples
>>> dtype = pd.CategoricalDtype(['a', 'b'], ordered=True)
>>> pd.Categorical.from_codes(codes=[0, 1, 0, 1], dtype=dtype)
['a', 'b', 'a', 'b']
Categories (2, object): ['a' < 'b'] | pandas.reference.api.pandas.categorical.from_codes |
pandas.Categorical.ordered propertyCategorical.ordered
Whether the categories have an ordered relationship. | pandas.reference.api.pandas.categorical.ordered |
pandas.CategoricalDtype classpandas.CategoricalDtype(categories=None, ordered=False)[source]
Type for categorical data with the categories and orderedness. Parameters
categories:sequence, optional
Must be unique, and must not contain any nulls. The categories are stored in an Index, and if an index is provided the dtype of that index will be used.
ordered:bool or None, default False
Whether or not this categorical is treated as an ordered categorical. None can be used to maintain the ordered value of existing categoricals when used in operations that combine categoricals, e.g. astype, and will resolve to False if there is no existing ordered to maintain. See also Categorical
Represent a categorical variable in classic R / S-plus fashion. Notes This class is useful for specifying the type of a Categorical independent of the values. See CategoricalDtype for more. Examples
>>> t = pd.CategoricalDtype(categories=['b', 'a'], ordered=True)
>>> pd.Series(['a', 'b', 'a', 'c'], dtype=t)
0 a
1 b
2 a
3 NaN
dtype: category
Categories (2, object): ['b' < 'a']
An empty CategoricalDtype with a specific dtype can be created by providing an empty index. As follows,
>>> pd.CategoricalDtype(pd.DatetimeIndex([])).categories.dtype
dtype('<M8[ns]')
Attributes
categories An Index containing the unique categories allowed.
ordered Whether the categories have an ordered relationship. Methods
None | pandas.reference.api.pandas.categoricaldtype |
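A sketch (not from the original docs) of the ordered=None behavior described above: in operations such as astype, a dtype with ordered=None preserves the ordered flag of the existing categorical.

```python
import pandas as pd

ordered_dtype = pd.CategoricalDtype(['a', 'b'], ordered=True)
s = pd.Series(['a', 'b', 'a'], dtype=ordered_dtype)

# ordered=None keeps the ordered flag of the existing categorical.
keep = pd.CategoricalDtype(['a', 'b'], ordered=None)
print(s.astype(keep).cat.ordered)   # True
```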
pandas.CategoricalDtype.categories propertyCategoricalDtype.categories
An Index containing the unique categories allowed. | pandas.reference.api.pandas.categoricaldtype.categories |
pandas.CategoricalDtype.ordered propertyCategoricalDtype.ordered
Whether the categories have an ordered relationship. | pandas.reference.api.pandas.categoricaldtype.ordered |
pandas.CategoricalIndex classpandas.CategoricalIndex(data=None, categories=None, ordered=None, dtype=None, copy=False, name=None)[source]
Index based on an underlying Categorical. CategoricalIndex, like Categorical, can only take on a limited, and usually fixed, number of possible values (categories). Also, like Categorical, it might have an order, but numerical operations (additions, divisions, …) are not possible. Parameters
data:array-like (1-dimensional)
The values of the categorical. If categories are given, values not in categories will be replaced with NaN.
categories:index-like, optional
The categories for the categorical. Items need to be unique. If the categories are not given here (and also not in dtype), they will be inferred from the data.
ordered:bool, optional
Whether or not this categorical is treated as an ordered categorical. If not given here or in dtype, the resulting categorical will be unordered.
dtype:CategoricalDtype or “category”, optional
If CategoricalDtype, cannot be used together with categories or ordered.
copy:bool, default False
Make a copy of input ndarray.
name:object, optional
Name to be stored in the index. Raises
ValueError
If the categories do not validate. TypeError
If an explicit ordered=True is given but no categories and the values are not sortable. See also Index
The base pandas Index type. Categorical
A categorical array. CategoricalDtype
Type for categorical data. Notes See the user guide for more. Examples
>>> pd.CategoricalIndex(["a", "b", "c", "a", "b", "c"])
CategoricalIndex(['a', 'b', 'c', 'a', 'b', 'c'],
categories=['a', 'b', 'c'], ordered=False, dtype='category')
CategoricalIndex can also be instantiated from a Categorical:
>>> c = pd.Categorical(["a", "b", "c", "a", "b", "c"])
>>> pd.CategoricalIndex(c)
CategoricalIndex(['a', 'b', 'c', 'a', 'b', 'c'],
categories=['a', 'b', 'c'], ordered=False, dtype='category')
Ordered CategoricalIndex can have a min and max value.
>>> ci = pd.CategoricalIndex(
... ["a", "b", "c", "a", "b", "c"], ordered=True, categories=["c", "b", "a"]
... )
>>> ci
CategoricalIndex(['a', 'b', 'c', 'a', 'b', 'c'],
categories=['c', 'b', 'a'], ordered=True, dtype='category')
>>> ci.min()
'c'
Attributes
codes The category codes of this categorical.
categories The categories of this categorical.
ordered Whether the categories have an ordered relationship. Methods
rename_categories(*args, **kwargs) Rename categories.
reorder_categories(*args, **kwargs) Reorder categories as specified in new_categories.
add_categories(*args, **kwargs) Add new categories.
remove_categories(*args, **kwargs) Remove the specified categories.
remove_unused_categories(*args, **kwargs) Remove categories which are not used.
set_categories(*args, **kwargs) Set the categories to the specified new_categories.
as_ordered(*args, **kwargs) Set the Categorical to be ordered.
as_unordered(*args, **kwargs) Set the Categorical to be unordered.
map(mapper) Map values using an input mapping or function. | pandas.reference.api.pandas.categoricalindex
pandas.CategoricalIndex.add_categories CategoricalIndex.add_categories(*args, **kwargs)[source]
Add new categories. new_categories will be included at the last/highest place in the categories and will be unused directly after this call. Parameters
new_categories:category or list-like of category
The new categories to be included.
inplace:bool, default False
Whether or not to add the categories inplace or return a copy of this categorical with added categories. Deprecated since version 1.3.0. Returns
cat:Categorical or None
Categorical with new categories added or None if inplace=True. Raises
ValueError
If the new categories include old categories or do not validate as categories See also rename_categories
Rename categories. reorder_categories
Reorder categories. remove_categories
Remove the specified categories. remove_unused_categories
Remove categories which are not used. set_categories
Set the categories to the specified ones. Examples
>>> c = pd.Categorical(['c', 'b', 'c'])
>>> c
['c', 'b', 'c']
Categories (2, object): ['b', 'c']
>>> c.add_categories(['d', 'a'])
['c', 'b', 'c']
Categories (4, object): ['b', 'c', 'd', 'a'] | pandas.reference.api.pandas.categoricalindex.add_categories |
pandas.CategoricalIndex.as_ordered CategoricalIndex.as_ordered(*args, **kwargs)[source]
Set the Categorical to be ordered. Parameters
inplace:bool, default False
Whether or not to set the ordered attribute in-place or return a copy of this categorical with ordered set to True. Returns
Categorical or None
Ordered Categorical or None if inplace=True. | pandas.reference.api.pandas.categoricalindex.as_ordered |
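A short sketch (not from the original docs) of as_ordered on a CategoricalIndex; with the default inplace=False a new, ordered index is returned:

```python
import pandas as pd

ci = pd.CategoricalIndex(['a', 'b', 'a'], categories=['b', 'a'])
print(ci.ordered)             # False

ordered_ci = ci.as_ordered()  # returns a new, ordered index
print(ordered_ci.ordered)     # True
print(ordered_ci.min())       # 'b', the lowest category in ['b' < 'a']
```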