| doc_content | doc_id |
|---|---|
pandas.Index.value_counts Index.value_counts(normalize=False, sort=True, ascending=False, bins=None, dropna=True)[source]
Return a Series containing counts of unique values. The resulting object will be in descending order so that the first element is the most frequently occurring element. Excludes NA values by default. Parameters
normalize:bool, default False
If True then the object returned will contain the relative frequencies of the unique values.
sort:bool, default True
Sort by frequencies.
ascending:bool, default False
Sort in ascending order.
bins:int, optional
Rather than counting values, group them into half-open bins. This is a convenience for pd.cut and only works with numeric data.
dropna:bool, default True
Don’t include counts of NaN. Returns
Series
See also Series.count
Number of non-NA elements in a Series. DataFrame.count
Number of non-NA elements in a DataFrame. DataFrame.value_counts
Equivalent method on DataFrames. Examples
>>> index = pd.Index([3, 1, 2, 3, 4, np.nan])
>>> index.value_counts()
3.0 2
1.0 1
2.0 1
4.0 1
dtype: int64
With normalize set to True, returns the relative frequency by dividing all values by the sum of values.
>>> s = pd.Series([3, 1, 2, 3, 4, np.nan])
>>> s.value_counts(normalize=True)
3.0 0.4
1.0 0.2
2.0 0.2
4.0 0.2
dtype: float64
bins Bins can be useful for going from a continuous variable to a categorical variable; instead of counting unique occurrences of values, divide the index into the specified number of half-open bins.
>>> s.value_counts(bins=3)
(0.996, 2.0] 2
(2.0, 3.0] 2
(3.0, 4.0] 1
dtype: int64
dropna With dropna set to False, we can also see NaN index values.
>>> s.value_counts(dropna=False)
3.0 2
1.0 1
2.0 1
4.0 1
NaN 1
dtype: int64 | pandas.reference.api.pandas.index.value_counts |
pandas.Index.values property Index.values
Return an array representing the data in the Index. Warning We recommend using Index.array or Index.to_numpy(), depending on whether you need a reference to the underlying data or a NumPy array. Returns
array: numpy.ndarray or ExtensionArray
See also Index.array
Reference to the underlying data. Index.to_numpy
A NumPy array representing the underlying data. | pandas.reference.api.pandas.index.values |
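Since the warning above steers readers toward Index.array and Index.to_numpy(), here is a minimal sketch contrasting the three accessors (the example Index is an assumption, not taken from this page):

```python
import numpy as np
import pandas as pd

idx = pd.Index([1, 2, 3])

vals = idx.values      # backing data; a NumPy ndarray for a plain numeric Index
arr = idx.array        # ExtensionArray wrapper referencing the underlying data
npy = idx.to_numpy()   # always a materialized NumPy array
```

For extension dtypes (e.g. categorical or nullable integers), .values may return an ExtensionArray rather than an ndarray, which is why the docs recommend the two explicit accessors.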
pandas.Index.view Index.view(cls=None)[source] | pandas.reference.api.pandas.index.view |
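The entry above carries no prose, so as a hedged illustration (based on base Index behavior, not text from this page): a no-argument view() returns a new Index object of the same type over the same underlying data.

```python
import pandas as pd

idx = pd.Index([1, 2, 3])
v = idx.view()  # new Index object of the same type, sharing the data
```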
pandas.Index.where final Index.where(cond, other=None)[source]
Replace values where the condition is False. The replacement is taken from other. Parameters
cond:bool array-like with the same length as self
Condition to select the values on.
other:scalar, or array-like, default None
Replacement if the condition is False. Returns
pandas.Index
A copy of self with values replaced from other where the condition is False. See also Series.where
Same method for Series. DataFrame.where
Same method for DataFrame. Examples
>>> idx = pd.Index(['car', 'bike', 'train', 'tractor'])
>>> idx
Index(['car', 'bike', 'train', 'tractor'], dtype='object')
>>> idx.where(idx.isin(['car', 'train']), 'other')
Index(['car', 'other', 'train', 'other'], dtype='object') | pandas.reference.api.pandas.index.where |
pandas.IndexSlice pandas.IndexSlice=<pandas.core.indexing._IndexSlice object>
Create an object to more easily perform multi-index slicing. See also MultiIndex.remove_unused_levels
New MultiIndex with no unused levels. Notes See Defined Levels for further info on slicing a MultiIndex. Examples
>>> midx = pd.MultiIndex.from_product([['A0','A1'], ['B0','B1','B2','B3']])
>>> columns = ['foo', 'bar']
>>> dfmi = pd.DataFrame(np.arange(16).reshape((len(midx), len(columns))),
... index=midx, columns=columns)
Using the default slice command:
>>> dfmi.loc[(slice(None), slice('B0', 'B1')), :]
foo bar
A0 B0 0 1
B1 2 3
A1 B0 8 9
B1 10 11
Using the IndexSlice class for a more intuitive command:
>>> idx = pd.IndexSlice
>>> dfmi.loc[idx[:, 'B0':'B1'], :]
foo bar
A0 B0 0 1
B1 2 3
A1 B0 8 9
B1 10 11 | pandas.reference.api.pandas.indexslice |
pandas.infer_freq pandas.infer_freq(index, warn=True)[source]
Infer the most likely frequency given the input index. If the frequency is uncertain, a warning will be printed. Parameters
index:DatetimeIndex or TimedeltaIndex
If passed a Series, will use the values of the series (NOT THE INDEX).
warn:bool, default True
Returns
str or None
None if no discernible frequency. Raises
TypeError
If the index is not datetime-like. ValueError
If there are fewer than three values. Examples
>>> idx = pd.date_range(start='2020/12/01', end='2020/12/30', periods=30)
>>> pd.infer_freq(idx)
'D' | pandas.reference.api.pandas.infer_freq |
pandas.Int16Dtype class pandas.Int16Dtype[source]
An ExtensionDtype for int16 integer data. Changed in version 1.0.0: Now uses pandas.NA as its missing value, rather than numpy.nan. Attributes
None Methods
None | pandas.reference.api.pandas.int16dtype |
pandas.Int32Dtype class pandas.Int32Dtype[source]
An ExtensionDtype for int32 integer data. Changed in version 1.0.0: Now uses pandas.NA as its missing value, rather than numpy.nan. Attributes
None Methods
None | pandas.reference.api.pandas.int32dtype |
pandas.Int64Dtype class pandas.Int64Dtype[source]
An ExtensionDtype for int64 integer data. Changed in version 1.0.0: Now uses pandas.NA as its missing value, rather than numpy.nan. Attributes
None Methods
None | pandas.reference.api.pandas.int64dtype |
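Because these dtypes use pandas.NA, integer data with missing values no longer has to be cast to float; a minimal sketch (the sample values are illustrative):

```python
import pandas as pd

# "Int64" (capital I) selects the nullable extension dtype, not numpy int64
arr = pd.array([1, 2, None], dtype="Int64")
s = pd.Series(arr)  # the Series keeps the Int64 dtype and the pd.NA marker
```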
pandas.Int64Index class pandas.Int64Index(data=None, dtype=None, copy=False, name=None)[source]
Immutable sequence used for indexing and alignment. The basic object storing axis labels for all pandas objects. Int64Index is a special case of Index with purely integer labels. Deprecated since version 1.4.0: In pandas v2.0 Int64Index will be removed and NumericIndex used instead. Int64Index will remain fully functional for the duration of pandas 1.x. Parameters
data:array-like (1-dimensional)
dtype:NumPy dtype (default: int64)
copy:bool
Make a copy of input ndarray.
name:object
Name to be stored in the index. See also Index
The base pandas Index type. NumericIndex
Index of numpy int/uint/float data. Notes An Index instance can only contain hashable objects. Attributes
None Methods
None | pandas.reference.api.pandas.int64index |
pandas.Int8Dtype class pandas.Int8Dtype[source]
An ExtensionDtype for int8 integer data. Changed in version 1.0.0: Now uses pandas.NA as its missing value, rather than numpy.nan. Attributes
None Methods
None | pandas.reference.api.pandas.int8dtype |
pandas.Interval class pandas.Interval
Immutable object implementing an Interval, a bounded slice-like interval. Parameters
left:orderable scalar
Left bound for the interval.
right:orderable scalar
Right bound for the interval.
closed:{‘right’, ‘left’, ‘both’, ‘neither’}, default ‘right’
Whether the interval is closed on the left-side, right-side, both or neither. See the Notes for more detailed explanation. See also IntervalIndex
An Index of Interval objects that are all closed on the same side. cut
Convert continuous data into discrete bins (Categorical of Interval objects). qcut
Convert continuous data into bins (Categorical of Interval objects) based on quantiles. Period
Represents a period of time. Notes The parameters left and right must be of the same type, you must be able to compare them, and they must satisfy left <= right. A closed interval (in mathematics denoted by square brackets) contains its endpoints, i.e. the closed interval [0, 5] is characterized by the conditions 0 <= x <= 5. This is what closed='both' stands for. An open interval (in mathematics denoted by parentheses) does not contain its endpoints, i.e. the open interval (0, 5) is characterized by the conditions 0 < x < 5. This is what closed='neither' stands for. Intervals can also be half-open or half-closed, i.e. [0, 5) is described by 0 <= x < 5 (closed='left') and (0, 5] is described by 0 < x <= 5 (closed='right'). Examples It is possible to build Intervals of different types, like numeric ones:
>>> iv = pd.Interval(left=0, right=5)
>>> iv
Interval(0, 5, closed='right')
You can check if an element belongs to it
>>> 2.5 in iv
True
You can test the bounds (closed='right', so 0 < x <= 5):
>>> 0 in iv
False
>>> 5 in iv
True
>>> 0.0001 in iv
True
Calculate its length
>>> iv.length
5
You can operate with + and * over an Interval, and the operation is applied to each of its bounds, so the result depends on the type of the bound elements:
>>> shifted_iv = iv + 3
>>> shifted_iv
Interval(3, 8, closed='right')
>>> extended_iv = iv * 10.0
>>> extended_iv
Interval(0.0, 50.0, closed='right')
To create a time interval you can use Timestamps as the bounds
>>> year_2017 = pd.Interval(pd.Timestamp('2017-01-01 00:00:00'),
... pd.Timestamp('2018-01-01 00:00:00'),
... closed='left')
>>> pd.Timestamp('2017-01-01 00:00') in year_2017
True
>>> year_2017.length
Timedelta('365 days 00:00:00')
Attributes
closed Whether the interval is closed on the left-side, right-side, both or neither.
closed_left Check if the interval is closed on the left side.
closed_right Check if the interval is closed on the right side.
is_empty Indicates if an interval is empty, meaning it contains no points.
left Left bound for the interval.
length Return the length of the Interval.
mid Return the midpoint of the Interval.
open_left Check if the interval is open on the left side.
open_right Check if the interval is open on the right side.
right Right bound for the interval. Methods
overlaps Check whether two Interval objects overlap. | pandas.reference.api.pandas.interval |
pandas.Interval.closed Interval.closed
Whether the interval is closed on the left-side, right-side, both or neither. | pandas.reference.api.pandas.interval.closed |
pandas.Interval.closed_left Interval.closed_left
Check if the interval is closed on the left side. For the meaning of closed and open see Interval. Returns
bool
True if the Interval is closed on the left-side. | pandas.reference.api.pandas.interval.closed_left |
pandas.Interval.closed_right Interval.closed_right
Check if the interval is closed on the right side. For the meaning of closed and open see Interval. Returns
bool
True if the Interval is closed on the right-side. | pandas.reference.api.pandas.interval.closed_right |
pandas.Interval.is_empty Interval.is_empty
Indicates if an interval is empty, meaning it contains no points. New in version 0.25.0. Returns
bool or ndarray
A boolean indicating if a scalar Interval is empty, or a boolean ndarray positionally indicating if an Interval in an IntervalArray or IntervalIndex is empty. Examples An Interval that contains points is not empty:
>>> pd.Interval(0, 1, closed='right').is_empty
False
An Interval that does not contain any points is empty:
>>> pd.Interval(0, 0, closed='right').is_empty
True
>>> pd.Interval(0, 0, closed='left').is_empty
True
>>> pd.Interval(0, 0, closed='neither').is_empty
True
An Interval that contains a single point is not empty:
>>> pd.Interval(0, 0, closed='both').is_empty
False
An IntervalArray or IntervalIndex returns a boolean ndarray positionally indicating if an Interval is empty:
>>> ivs = [pd.Interval(0, 0, closed='neither'),
... pd.Interval(1, 2, closed='neither')]
>>> pd.arrays.IntervalArray(ivs).is_empty
array([ True, False])
Missing values are not considered empty:
>>> ivs = [pd.Interval(0, 0, closed='neither'), np.nan]
>>> pd.IntervalIndex(ivs).is_empty
array([ True, False]) | pandas.reference.api.pandas.interval.is_empty |
pandas.Interval.left Interval.left
Left bound for the interval. | pandas.reference.api.pandas.interval.left |
pandas.Interval.length Interval.length
Return the length of the Interval. | pandas.reference.api.pandas.interval.length |
pandas.Interval.mid Interval.mid
Return the midpoint of the Interval. | pandas.reference.api.pandas.interval.mid |
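length and mid behave consistently across bound types: length is right - left and mid is the midpoint, so numeric bounds give numbers while Timestamp bounds give a Timedelta and a Timestamp. A minimal sketch (the intervals are assumed examples):

```python
import pandas as pd

iv = pd.Interval(0, 5)
numeric_length = iv.length   # right - left
numeric_mid = iv.mid         # midpoint of the bounds

tiv = pd.Interval(pd.Timestamp("2021-01-01"), pd.Timestamp("2021-01-03"))
time_length = tiv.length     # a Timedelta
time_mid = tiv.mid           # a Timestamp at the midpoint
```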
pandas.Interval.open_left Interval.open_left
Check if the interval is open on the left side. For the meaning of closed and open see Interval. Returns
bool
True if the Interval is not closed on the left-side. | pandas.reference.api.pandas.interval.open_left |
pandas.Interval.open_right Interval.open_right
Check if the interval is open on the right side. For the meaning of closed and open see Interval. Returns
bool
True if the Interval is not closed on the right-side. | pandas.reference.api.pandas.interval.open_right |
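The four flags (closed_left, closed_right, open_left, open_right) come in complementary pairs; a quick sketch for a left-closed interval (the interval itself is an assumed example):

```python
import pandas as pd

iv = pd.Interval(0, 1, closed="left")  # [0, 1): contains 0, excludes 1
```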
pandas.Interval.overlaps Interval.overlaps()
Check whether two Interval objects overlap. Two intervals overlap if they share a common point, including closed endpoints. Intervals that only have an open endpoint in common do not overlap. Parameters
other:Interval
Interval to check against for an overlap. Returns
bool
True if the two intervals overlap. See also IntervalArray.overlaps
The corresponding method for IntervalArray. IntervalIndex.overlaps
The corresponding method for IntervalIndex. Examples
>>> i1 = pd.Interval(0, 2)
>>> i2 = pd.Interval(1, 3)
>>> i1.overlaps(i2)
True
>>> i3 = pd.Interval(4, 5)
>>> i1.overlaps(i3)
False
Intervals that share closed endpoints overlap:
>>> i4 = pd.Interval(0, 1, closed='both')
>>> i5 = pd.Interval(1, 2, closed='both')
>>> i4.overlaps(i5)
True
Intervals that only have an open endpoint in common do not overlap:
>>> i6 = pd.Interval(1, 2, closed='neither')
>>> i4.overlaps(i6)
False | pandas.reference.api.pandas.interval.overlaps |
pandas.Interval.right Interval.right
Right bound for the interval. | pandas.reference.api.pandas.interval.right |
pandas.interval_range pandas.interval_range(start=None, end=None, periods=None, freq=None, name=None, closed='right')[source]
Return a fixed frequency IntervalIndex. Parameters
start:numeric or datetime-like, default None
Left bound for generating intervals.
end:numeric or datetime-like, default None
Right bound for generating intervals.
periods:int, default None
Number of periods to generate.
freq:numeric, str, or DateOffset, default None
The length of each interval. Must be consistent with the type of start and end, e.g. 2 for numeric, or ‘5H’ for datetime-like. Default is 1 for numeric and ‘D’ for datetime-like.
name:str, default None
Name of the resulting IntervalIndex.
closed:{‘left’, ‘right’, ‘both’, ‘neither’}, default ‘right’
Whether the intervals are closed on the left-side, right-side, both or neither. Returns
IntervalIndex
See also IntervalIndex
An Index of intervals that are all closed on the same side. Notes Of the four parameters start, end, periods, and freq, exactly three must be specified. If freq is omitted, the resulting IntervalIndex will have periods linearly spaced elements between start and end, inclusively. To learn more about datetime-like frequency strings, please see this link. Examples Numeric start and end is supported.
>>> pd.interval_range(start=0, end=5)
IntervalIndex([(0, 1], (1, 2], (2, 3], (3, 4], (4, 5]],
dtype='interval[int64, right]')
Additionally, datetime-like input is also supported.
>>> pd.interval_range(start=pd.Timestamp('2017-01-01'),
... end=pd.Timestamp('2017-01-04'))
IntervalIndex([(2017-01-01, 2017-01-02], (2017-01-02, 2017-01-03],
(2017-01-03, 2017-01-04]],
dtype='interval[datetime64[ns], right]')
The freq parameter specifies the frequency between the left and right endpoints of the individual intervals within the IntervalIndex. For numeric start and end, the frequency must also be numeric.
>>> pd.interval_range(start=0, periods=4, freq=1.5)
IntervalIndex([(0.0, 1.5], (1.5, 3.0], (3.0, 4.5], (4.5, 6.0]],
dtype='interval[float64, right]')
Similarly, for datetime-like start and end, the frequency must be convertible to a DateOffset.
>>> pd.interval_range(start=pd.Timestamp('2017-01-01'),
... periods=3, freq='MS')
IntervalIndex([(2017-01-01, 2017-02-01], (2017-02-01, 2017-03-01],
(2017-03-01, 2017-04-01]],
dtype='interval[datetime64[ns], right]')
Specify start, end, and periods; the frequency is generated automatically (linearly spaced).
>>> pd.interval_range(start=0, end=6, periods=4)
IntervalIndex([(0.0, 1.5], (1.5, 3.0], (3.0, 4.5], (4.5, 6.0]],
dtype='interval[float64, right]')
The closed parameter specifies which endpoints of the individual intervals within the IntervalIndex are closed.
>>> pd.interval_range(end=5, periods=4, closed='both')
IntervalIndex([[1, 2], [2, 3], [3, 4], [4, 5]],
dtype='interval[int64, both]') | pandas.reference.api.pandas.interval_range |
pandas.IntervalDtype class pandas.IntervalDtype(subtype=None, closed=None)[source]
An ExtensionDtype for Interval data. This is not an actual numpy dtype, but a duck type. Parameters
subtype:str, np.dtype
The dtype of the Interval bounds. Examples
>>> pd.IntervalDtype(subtype='int64', closed='both')
interval[int64, both]
Attributes
subtype The dtype of the Interval bounds. Methods
None | pandas.reference.api.pandas.intervaldtype |
pandas.IntervalDtype.subtype property IntervalDtype.subtype
The dtype of the Interval bounds. | pandas.reference.api.pandas.intervaldtype.subtype |
pandas.IntervalIndex class pandas.IntervalIndex(data, closed=None, dtype=None, copy=False, name=None, verify_integrity=True)[source]
Immutable index of intervals that are closed on the same side. New in version 0.20.0. Parameters
data:array-like (1-dimensional)
Array-like containing Interval objects from which to build the IntervalIndex.
closed:{‘left’, ‘right’, ‘both’, ‘neither’}, default ‘right’
Whether the intervals are closed on the left-side, right-side, both or neither.
dtype:dtype or None, default None
If None, dtype will be inferred.
copy:bool, default False
Copy the input data.
name:object, optional
Name to be stored in the index.
verify_integrity:bool, default True
Verify that the IntervalIndex is valid. See also Index
The base pandas Index type. Interval
A bounded slice-like interval; the elements of an IntervalIndex. interval_range
Function to create a fixed frequency IntervalIndex. cut
Bin values into discrete Intervals. qcut
Bin values into equal-sized Intervals based on rank or sample quantiles. Notes See the user guide for more. Examples A new IntervalIndex is typically constructed using interval_range():
>>> pd.interval_range(start=0, end=5)
IntervalIndex([(0, 1], (1, 2], (2, 3], (3, 4], (4, 5]],
dtype='interval[int64, right]')
It may also be constructed using one of the constructor methods: IntervalIndex.from_arrays(), IntervalIndex.from_breaks(), and IntervalIndex.from_tuples(). See further examples in the doc strings of interval_range and the mentioned constructor methods. Attributes
closed Whether the intervals are closed on the left-side, right-side, both or neither.
is_empty Indicates if an interval is empty, meaning it contains no points.
is_non_overlapping_monotonic Return True if the IntervalArray is non-overlapping (no Intervals share points) and is either monotonic increasing or monotonic decreasing, else False.
is_overlapping Return True if the IntervalIndex has overlapping intervals, else False.
values Return an array representing the data in the Index.
left
right
mid
length Methods
from_arrays(left, right[, closed, name, ...]) Construct from two arrays defining the left and right bounds.
from_tuples(data[, closed, name, copy, dtype]) Construct an IntervalIndex from an array-like of tuples.
from_breaks(breaks[, closed, name, copy, dtype]) Construct an IntervalIndex from an array of splits.
contains(*args, **kwargs) Check elementwise if the Intervals contain the value.
overlaps(*args, **kwargs) Check elementwise if an Interval overlaps the values in the IntervalArray.
set_closed(*args, **kwargs) Return an IntervalArray identical to the current one, but closed on the specified side.
to_tuples(*args, **kwargs) Return an ndarray of tuples of the form (left, right). | pandas.reference.api.pandas.intervalindex |
pandas.IntervalIndex.closed IntervalIndex.closed
Whether the intervals are closed on the left-side, right-side, both or neither. | pandas.reference.api.pandas.intervalindex.closed |
pandas.IntervalIndex.contains IntervalIndex.contains(*args, **kwargs)[source]
Check elementwise if the Intervals contain the value. Return a boolean mask whether the value is contained in the Intervals of the IntervalArray. New in version 0.25.0. Parameters
other:scalar
The value to check whether it is contained in the Intervals. Returns
boolean array
See also Interval.contains
Check whether Interval object contains value. IntervalArray.overlaps
Check if an Interval overlaps the values in the IntervalArray. Examples
>>> intervals = pd.arrays.IntervalArray.from_tuples([(0, 1), (1, 3), (2, 4)])
>>> intervals
<IntervalArray>
[(0, 1], (1, 3], (2, 4]]
Length: 3, dtype: interval[int64, right]
>>> intervals.contains(0.5)
array([ True, False, False]) | pandas.reference.api.pandas.intervalindex.contains |
pandas.IntervalIndex.from_arrays classmethod IntervalIndex.from_arrays(left, right, closed='right', name=None, copy=False, dtype=None)[source]
Construct from two arrays defining the left and right bounds. Parameters
left:array-like (1-dimensional)
Left bounds for each interval.
right:array-like (1-dimensional)
Right bounds for each interval.
closed:{‘left’, ‘right’, ‘both’, ‘neither’}, default ‘right’
Whether the intervals are closed on the left-side, right-side, both or neither.
copy:bool, default False
Copy the data.
dtype:dtype, optional
If None, dtype will be inferred. Returns
IntervalIndex
Raises
ValueError
When a value is missing in only one of left or right. When a value in left is greater than the corresponding value in right. See also interval_range
Function to create a fixed frequency IntervalIndex. IntervalIndex.from_breaks
Construct an IntervalIndex from an array of splits. IntervalIndex.from_tuples
Construct an IntervalIndex from an array-like of tuples. Notes Each element of left must be less than or equal to the right element at the same position. If an element is missing, it must be missing in both left and right. A TypeError is raised when using an unsupported type for left or right. At the moment, ‘category’, ‘object’, and ‘string’ subtypes are not supported. Examples
>>> pd.IntervalIndex.from_arrays([0, 1, 2], [1, 2, 3])
IntervalIndex([(0, 1], (1, 2], (2, 3]],
dtype='interval[int64, right]') | pandas.reference.api.pandas.intervalindex.from_arrays |
pandas.IntervalIndex.from_breaks classmethod IntervalIndex.from_breaks(breaks, closed='right', name=None, copy=False, dtype=None)[source]
Construct an IntervalIndex from an array of splits. Parameters
breaks:array-like (1-dimensional)
Left and right bounds for each interval.
closed:{‘left’, ‘right’, ‘both’, ‘neither’}, default ‘right’
Whether the intervals are closed on the left-side, right-side, both or neither.
copy:bool, default False
Copy the data.
dtype:dtype or None, default None
If None, dtype will be inferred. Returns
IntervalIndex
See also interval_range
Function to create a fixed frequency IntervalIndex. IntervalIndex.from_arrays
Construct from a left and right array. IntervalIndex.from_tuples
Construct from a sequence of tuples. Examples
>>> pd.IntervalIndex.from_breaks([0, 1, 2, 3])
IntervalIndex([(0, 1], (1, 2], (2, 3]],
dtype='interval[int64, right]') | pandas.reference.api.pandas.intervalindex.from_breaks |
pandas.IntervalIndex.from_tuples classmethod IntervalIndex.from_tuples(data, closed='right', name=None, copy=False, dtype=None)[source]
Construct an IntervalIndex from an array-like of tuples. Parameters
data:array-like (1-dimensional)
Array of tuples.
closed:{‘left’, ‘right’, ‘both’, ‘neither’}, default ‘right’
Whether the intervals are closed on the left-side, right-side, both or neither.
copy:bool, default False
By default copy the data; this parameter exists only for compatibility and is ignored.
dtype:dtype or None, default None
If None, dtype will be inferred. Returns
IntervalIndex
See also interval_range
Function to create a fixed frequency IntervalIndex. IntervalIndex.from_arrays
Construct an IntervalIndex from a left and right array. IntervalIndex.from_breaks
Construct an IntervalIndex from an array of splits. Examples
>>> pd.IntervalIndex.from_tuples([(0, 1), (1, 2)])
IntervalIndex([(0, 1], (1, 2]],
dtype='interval[int64, right]') | pandas.reference.api.pandas.intervalindex.from_tuples |
pandas.IntervalIndex.get_indexer IntervalIndex.get_indexer(target, method=None, limit=None, tolerance=None)[source]
Compute indexer and mask for new index given the current index. The indexer should be then used as an input to ndarray.take to align the current data to the new index. Parameters
target:Index
method:{None, ‘pad’/’ffill’, ‘backfill’/’bfill’, ‘nearest’}, optional
default: exact matches only. pad / ffill: find the PREVIOUS index value if no exact match. backfill / bfill: use NEXT index value if no exact match. nearest: use the NEAREST index value if no exact match. Tied distances are broken by preferring the larger index value.
limit:int, optional
Maximum number of consecutive labels in target to match for inexact matches.
tolerance:optional
Maximum distance between original and new labels for inexact matches. The values of the index at the matching locations must satisfy the equation abs(index[indexer] - target) <= tolerance. Tolerance may be a scalar value, which applies the same tolerance to all values, or list-like, which applies variable tolerance per element. List-like includes list, tuple, array, Series, and must be the same size as the index and its dtype must exactly match the index’s type. Returns
indexer:np.ndarray[np.intp]
Integers from 0 to n - 1 indicating that the index at these positions matches the corresponding target values. Missing values in the target are marked by -1. Notes Returns -1 for unmatched values, for further explanation see the example below. Examples
>>> index = pd.Index(['c', 'a', 'b'])
>>> index.get_indexer(['a', 'b', 'x'])
array([ 1, 2, -1])
Notice that the return value is an array of locations in index and x is marked by -1, as it is not in index. | pandas.reference.api.pandas.intervalindex.get_indexer |
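The example above uses a plain string Index; on a non-overlapping IntervalIndex the same method matches each scalar target to the interval that contains it, returning -1 where nothing matches (a sketch under that assumption):

```python
import pandas as pd

ii = pd.interval_range(start=0, end=3)   # (0, 1], (1, 2], (2, 3]
locs = ii.get_indexer([0.5, 1.5, 3.5])   # 3.5 falls outside every interval
```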
pandas.IntervalIndex.get_loc IntervalIndex.get_loc(key, method=None, tolerance=None)[source]
Get integer location, slice or boolean mask for requested label. Parameters
key:label
method:{None}, optional
default: matches where the label is within an interval only. Returns
int if unique index, slice if monotonic index, else mask
Examples
>>> i1, i2 = pd.Interval(0, 1), pd.Interval(1, 2)
>>> index = pd.IntervalIndex([i1, i2])
>>> index.get_loc(1)
0
You can also supply a point inside an interval.
>>> index.get_loc(1.5)
1
If a label is in several intervals, you get the locations of all the relevant intervals.
>>> i3 = pd.Interval(0, 2)
>>> overlapping_index = pd.IntervalIndex([i1, i2, i3])
>>> overlapping_index.get_loc(0.5)
array([ True, False, True])
Only exact matches will be returned if an interval is provided.
>>> index.get_loc(pd.Interval(0, 1))
0 | pandas.reference.api.pandas.intervalindex.get_loc |
pandas.IntervalIndex.is_empty property IntervalIndex.is_empty
Indicates if an interval is empty, meaning it contains no points. New in version 0.25.0. Returns
bool or ndarray
A boolean indicating if a scalar Interval is empty, or a boolean ndarray positionally indicating if an Interval in an IntervalArray or IntervalIndex is empty. Examples An Interval that contains points is not empty:
>>> pd.Interval(0, 1, closed='right').is_empty
False
An Interval that does not contain any points is empty:
>>> pd.Interval(0, 0, closed='right').is_empty
True
>>> pd.Interval(0, 0, closed='left').is_empty
True
>>> pd.Interval(0, 0, closed='neither').is_empty
True
An Interval that contains a single point is not empty:
>>> pd.Interval(0, 0, closed='both').is_empty
False
An IntervalArray or IntervalIndex returns a boolean ndarray positionally indicating if an Interval is empty:
>>> ivs = [pd.Interval(0, 0, closed='neither'),
... pd.Interval(1, 2, closed='neither')]
>>> pd.arrays.IntervalArray(ivs).is_empty
array([ True, False])
Missing values are not considered empty:
>>> ivs = [pd.Interval(0, 0, closed='neither'), np.nan]
>>> pd.IntervalIndex(ivs).is_empty
array([ True, False]) | pandas.reference.api.pandas.intervalindex.is_empty |
pandas.IntervalIndex.is_non_overlapping_monotonic IntervalIndex.is_non_overlapping_monotonic
Return True if the IntervalArray is non-overlapping (no Intervals share points) and is either monotonic increasing or monotonic decreasing, else False. | pandas.reference.api.pandas.intervalindex.is_non_overlapping_monotonic |
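A minimal sketch of the property on two assumed indexes, one built from sorted breaks and one with intervals that share points:

```python
import pandas as pd

# Adjacent right-closed intervals (0, 1], (1, 2], (2, 3] share no points
a = pd.IntervalIndex.from_breaks([0, 1, 2, 3])

# (0, 2] and (1, 3] share the points in (1, 2]
b = pd.IntervalIndex.from_tuples([(0, 2), (1, 3)])
```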
pandas.IntervalIndex.is_overlapping property IntervalIndex.is_overlapping
Return True if the IntervalIndex has overlapping intervals, else False. Two intervals overlap if they share a common point, including closed endpoints. Intervals that only have an open endpoint in common do not overlap. Returns
bool
Boolean indicating if the IntervalIndex has overlapping intervals. See also Interval.overlaps
Check whether two Interval objects overlap. IntervalIndex.overlaps
Check an IntervalIndex elementwise for overlaps. Examples
>>> index = pd.IntervalIndex.from_tuples([(0, 2), (1, 3), (4, 5)])
>>> index
IntervalIndex([(0, 2], (1, 3], (4, 5]],
dtype='interval[int64, right]')
>>> index.is_overlapping
True
Intervals that share closed endpoints overlap:
>>> index = pd.interval_range(0, 3, closed='both')
>>> index
IntervalIndex([[0, 1], [1, 2], [2, 3]],
dtype='interval[int64, both]')
>>> index.is_overlapping
True
Intervals that only have an open endpoint in common do not overlap:
>>> index = pd.interval_range(0, 3, closed='left')
>>> index
IntervalIndex([[0, 1), [1, 2), [2, 3)],
dtype='interval[int64, left]')
>>> index.is_overlapping
False | pandas.reference.api.pandas.intervalindex.is_overlapping |
pandas.IntervalIndex.left IntervalIndex.left | pandas.reference.api.pandas.intervalindex.left |
pandas.IntervalIndex.length property IntervalIndex.length | pandas.reference.api.pandas.intervalindex.length |
pandas.IntervalIndex.mid IntervalIndex.mid | pandas.reference.api.pandas.intervalindex.mid |
pandas.IntervalIndex.overlaps IntervalIndex.overlaps(*args, **kwargs)[source]
Check elementwise if an Interval overlaps the values in the IntervalArray. Two intervals overlap if they share a common point, including closed endpoints. Intervals that only have an open endpoint in common do not overlap. Parameters
other:IntervalArray
Interval to check against for an overlap. Returns
ndarray
Boolean array positionally indicating where an overlap occurs. See also Interval.overlaps
Check whether two Interval objects overlap. Examples
>>> data = [(0, 1), (1, 3), (2, 4)]
>>> intervals = pd.arrays.IntervalArray.from_tuples(data)
>>> intervals
<IntervalArray>
[(0, 1], (1, 3], (2, 4]]
Length: 3, dtype: interval[int64, right]
>>> intervals.overlaps(pd.Interval(0.5, 1.5))
array([ True, True, False])
Intervals that share closed endpoints overlap:
>>> intervals.overlaps(pd.Interval(1, 3, closed='left'))
array([ True, True, True])
Intervals that only have an open endpoint in common do not overlap:
>>> intervals.overlaps(pd.Interval(1, 2, closed='right'))
array([False, True, False]) | pandas.reference.api.pandas.intervalindex.overlaps |
pandas.IntervalIndex.right IntervalIndex.right | pandas.reference.api.pandas.intervalindex.right |
pandas.IntervalIndex.set_closed IntervalIndex.set_closed(*args, **kwargs)[source]
Return an IntervalArray identical to the current one, but closed on the specified side. Parameters
closed:{‘left’, ‘right’, ‘both’, ‘neither’}
Whether the intervals are closed on the left-side, right-side, both or neither. Returns
new_index:IntervalArray
Examples
>>> index = pd.arrays.IntervalArray.from_breaks(range(4))
>>> index
<IntervalArray>
[(0, 1], (1, 2], (2, 3]]
Length: 3, dtype: interval[int64, right]
>>> index.set_closed('both')
<IntervalArray>
[[0, 1], [1, 2], [2, 3]]
Length: 3, dtype: interval[int64, both] | pandas.reference.api.pandas.intervalindex.set_closed |
pandas.IntervalIndex.to_tuples IntervalIndex.to_tuples(*args, **kwargs)[source]
Return an ndarray of tuples of the form (left, right). Parameters
na_tuple:bool, default True
If True, return NA as the tuple (nan, nan); if False, return it as the scalar NA value, nan. Returns
tuples: ndarray | pandas.reference.api.pandas.intervalindex.to_tuples |
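A minimal sketch (assumed input): each interval becomes its (left, right) pair.

```python
import pandas as pd

ii = pd.IntervalIndex.from_tuples([(0, 1), (1, 2)])
tups = ii.to_tuples()   # sequence of (left, right) pairs
```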
pandas.IntervalIndex.values property IntervalIndex.values
Return an array representing the data in the Index. Warning We recommend using Index.array or Index.to_numpy(), depending on whether you need a reference to the underlying data or a NumPy array. Returns
array: numpy.ndarray or ExtensionArray
See also Index.array
Reference to the underlying data. Index.to_numpy
A NumPy array representing the underlying data. | pandas.reference.api.pandas.intervalindex.values |
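A minimal sketch contrasting the two recommended accessors, on illustrative data:

```python
import pandas as pd

idx = pd.IntervalIndex.from_breaks([0, 1, 2])

# .values (like .array) exposes the underlying IntervalArray, a reference
# to the stored data rather than a copy
print(type(idx.values).__name__)  # → IntervalArray

# .to_numpy() materialises a NumPy object array of Interval scalars
print(idx.to_numpy().dtype)  # → object
```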
pandas.io.formats.style.Styler class pandas.io.formats.style.Styler(data, precision=None, table_styles=None, uuid=None, caption=None, table_attributes=None, cell_ids=True, na_rep=None, uuid_len=5, decimal=None, thousands=None, escape=None, formatter=None)[source]
Helps style a DataFrame or Series according to the data with HTML and CSS. Parameters
data:Series or DataFrame
Data to be styled - either a Series or DataFrame.
precision:int, optional
Precision to round floats to. If not given defaults to pandas.options.styler.format.precision. Changed in version 1.4.0.
table_styles:list-like, default None
List of {selector: (attr, value)} dicts; see Notes.
uuid:str, default None
A unique identifier to avoid CSS collisions; generated automatically.
caption:str, tuple, default None
String caption to attach to the table. Tuple only used for LaTeX dual captions.
table_attributes:str, default None
Items that show up in the opening <table> tag in addition to automatic (by default) id.
cell_ids:bool, default True
If True, each cell will have an id attribute in its HTML tag. The id takes the form T_<uuid>_row<num_row>_col<num_col> where <uuid> is the unique identifier, <num_row> is the row number and <num_col> is the column number.
na_rep:str, optional
Representation for missing values. If na_rep is None, no special formatting is applied, and falls back to pandas.options.styler.format.na_rep. New in version 1.0.0.
uuid_len:int, default 5
If uuid is not specified, the length of the uuid to randomly generate expressed in hex characters, in range [0, 32]. New in version 1.2.0.
decimal:str, optional
Character used as decimal separator for floats, complex and integers. If not given uses pandas.options.styler.format.decimal. New in version 1.3.0.
thousands:str, optional, default None
Character used as thousands separator for floats, complex and integers. If not given uses pandas.options.styler.format.thousands. New in version 1.3.0.
escape:str, optional
Use ‘html’ to replace the characters &, <, >, ', and " in cell display string with HTML-safe sequences. Use ‘latex’ to replace the characters &, %, $, #, _, {, }, ~, ^, and \ in the cell display string with LaTeX-safe sequences. If not given uses pandas.options.styler.format.escape. New in version 1.3.0.
formatter:str, callable, dict, optional
Object to define how values are displayed. See Styler.format. If not given uses pandas.options.styler.format.formatter. New in version 1.4.0. See also DataFrame.style
Return a Styler object containing methods for building a styled HTML representation for the DataFrame. Notes Most styling will be done by passing style functions into Styler.apply or Styler.applymap. Style functions should return values with strings containing CSS 'attr: value' that will be applied to the indicated cells. If using in the Jupyter notebook, Styler has defined a _repr_html_ to automatically render itself. Otherwise call Styler.to_html to get the generated HTML. CSS classes are attached to the generated HTML:
Index and Column names include index_name and level<k> where k is its level in a MultiIndex.
Index label cells include row_heading, row<n> where n is the numeric position of the row, and level<k> where k is the level in a MultiIndex.
Column label cells include col_heading, col<n> where n is the numeric position of the column, and level<k> where k is the level in a MultiIndex.
Blank cells include blank.
Data cells include data.
Trimmed cells include col_trim or row_trim.
Any, or all, of these classes can be renamed by using the css_class_names argument in Styler.set_table_styles, giving a value such as {"row": "MY_ROW_CLASS", "col_trim": "", "row_trim": ""}. Attributes
env (Jinja2 jinja2.Environment)
template_html (Jinja2 Template)
template_html_table (Jinja2 Template)
template_html_style (Jinja2 Template)
template_latex (Jinja2 Template)
loader (Jinja2 Loader) Methods
apply(func[, axis, subset]) Apply a CSS-styling function column-wise, row-wise, or table-wise.
apply_index(func[, axis, level]) Apply a CSS-styling function to the index or column headers, level-wise.
applymap(func[, subset]) Apply a CSS-styling function elementwise.
applymap_index(func[, axis, level]) Apply a CSS-styling function to the index or column headers, elementwise.
background_gradient([cmap, low, high, axis, ...]) Color the background in a gradient style.
bar([subset, axis, color, cmap, width, ...]) Draw bar chart in the cell backgrounds.
clear() Reset the Styler, removing any previously applied styles.
export() Export the styles applied to the current Styler.
format([formatter, subset, na_rep, ...]) Format the text display value of cells.
format_index([formatter, axis, level, ...]) Format the text display value of index labels or column headers.
from_custom_template(searchpath[, ...]) Factory function for creating a subclass of Styler.
hide([subset, axis, level, names]) Hide the entire index / column headers, or specific rows / columns from display.
hide_columns([subset, level, names]) Hide the column headers or specific keys in the columns from rendering.
hide_index([subset, level, names]) (DEPRECATED) Hide the entire index, or specific keys in the index from rendering.
highlight_between([subset, color, axis, ...]) Highlight a defined range with a style.
highlight_max([subset, color, axis, props]) Highlight the maximum with a style.
highlight_min([subset, color, axis, props]) Highlight the minimum with a style.
highlight_null([null_color, subset, props]) Highlight missing values with a style.
highlight_quantile([subset, color, axis, ...]) Highlight values defined by a quantile with a style.
pipe(func, *args, **kwargs) Apply func(self, *args, **kwargs), and return the result.
render([sparse_index, sparse_columns]) (DEPRECATED) Render the Styler including all applied styles to HTML.
set_caption(caption) Set the text added to a <caption> HTML element.
set_na_rep(na_rep) (DEPRECATED) Set the missing data representation on a Styler.
set_precision(precision) (DEPRECATED) Set the precision used to display values.
set_properties([subset]) Set defined CSS-properties to each <td> HTML element within the given subset.
set_sticky([axis, pixel_size, levels]) Add CSS to permanently display the index or column headers in a scrolling frame.
set_table_attributes(attributes) Set the table attributes added to the <table> HTML element.
set_table_styles([table_styles, axis, ...]) Set the table styles included within the <style> HTML element.
set_td_classes(classes) Set the DataFrame of strings added to the class attribute of <td> HTML elements.
set_tooltips(ttips[, props, css_class]) Set the DataFrame of strings on Styler generating :hover tooltips.
set_uuid(uuid) Set the uuid applied to id attributes of HTML elements.
text_gradient([cmap, low, high, axis, ...]) Color the text in a gradient style.
to_excel(excel_writer[, sheet_name, na_rep, ...]) Write Styler to an Excel sheet.
to_html([buf, table_uuid, table_attributes, ...]) Write Styler to a file, buffer or string in HTML-CSS format.
to_latex([buf, column_format, position, ...]) Write Styler to a file, buffer or string in LaTeX format.
use(styles) Set the styles on the current Styler.
where(cond, value[, other, subset]) (DEPRECATED) Apply CSS-styles based on a conditional function elementwise. | pandas.reference.api.pandas.io.formats.style.styler |
pandas.io.formats.style.Styler.apply Styler.apply(func, axis=0, subset=None, **kwargs)[source]
Apply a CSS-styling function column-wise, row-wise, or table-wise. Updates the HTML representation with the result. Parameters
func:function
func should take a Series if axis in [0,1] and return a list-like object of same length, or a Series, not necessarily of same length, with valid index labels considering subset. func should take a DataFrame if axis is None and return either an ndarray with the same shape or a DataFrame, not necessarily of the same shape, with valid index and columns labels considering subset. Changed in version 1.3.0. Changed in version 1.4.0.
axis:{0 or ‘index’, 1 or ‘columns’, None}, default 0
Apply to each column (axis=0 or 'index'), to each row (axis=1 or 'columns'), or to the entire DataFrame at once with axis=None.
subset:label, array-like, IndexSlice, optional
A valid 2d input to DataFrame.loc[<subset>], or, in the case of a 1d input or single key, to DataFrame.loc[:, <subset>] where the columns are prioritised, to limit data to before applying the function.
**kwargs:dict
Pass along to func. Returns
self:Styler
See also Styler.applymap_index
Apply a CSS-styling function to headers elementwise. Styler.apply_index
Apply a CSS-styling function to headers level-wise. Styler.applymap
Apply a CSS-styling function elementwise. Notes The elements of the output of func should be CSS styles as strings, in the format ‘attribute: value; attribute2: value2; …’ or, if nothing is to be applied to that element, an empty string or None. This is similar to DataFrame.apply, except that axis=None applies the function to the entire DataFrame at once, rather than column-wise or row-wise. Examples
>>> def highlight_max(x, color):
... return np.where(x == np.nanmax(x.to_numpy()), f"color: {color};", None)
>>> df = pd.DataFrame(np.random.randn(5, 2), columns=["A", "B"])
>>> df.style.apply(highlight_max, color='red')
>>> df.style.apply(highlight_max, color='blue', axis=1)
>>> df.style.apply(highlight_max, color='green', axis=None)
Using subset to restrict application to a single column or multiple columns
>>> df.style.apply(highlight_max, color='red', subset="A")
...
>>> df.style.apply(highlight_max, color='red', subset=["A", "B"])
...
Using a 2d input to subset to select rows in addition to columns
>>> df.style.apply(highlight_max, color='red', subset=([0,1,2], slice(None)))
...
>>> df.style.apply(highlight_max, color='red', subset=(slice(0,5,2), "A"))
...
Using a function which returns a Series / DataFrame of unequal length but containing valid index labels
>>> df = pd.DataFrame([[1, 2], [3, 4], [4, 6]], index=["A1", "A2", "Total"])
>>> total_style = pd.Series("font-weight: bold;", index=["Total"])
>>> df.style.apply(lambda s: total_style)
See Table Visualization user guide for more details. | pandas.reference.api.pandas.io.formats.style.styler.apply |
pandas.io.formats.style.Styler.apply_index Styler.apply_index(func, axis=0, level=None, **kwargs)[source]
Apply a CSS-styling function to the index or column headers, level-wise. Updates the HTML representation with the result. New in version 1.4.0. Parameters
func:function
func should take a Series and return a string array of the same length.
axis:{0, 1, “index”, “columns”}
The headers over which to apply the function.
level:int, str, list, optional
If index is MultiIndex the level(s) over which to apply the function.
**kwargs:dict
Pass along to func. Returns
self:Styler
See also Styler.applymap_index
Apply a CSS-styling function to headers elementwise. Styler.apply
Apply a CSS-styling function column-wise, row-wise, or table-wise. Styler.applymap
Apply a CSS-styling function elementwise. Notes Each input to func will be the index as a Series, if an Index, or a level of a MultiIndex. The output of func should be an identically sized array of CSS styles as strings, in the format ‘attribute: value; attribute2: value2; …’ or, if nothing is to be applied to that element, an empty string or None. Examples Basic usage to conditionally highlight values in the index.
>>> df = pd.DataFrame([[1,2], [3,4]], index=["A", "B"])
>>> def color_b(s):
... return np.where(s == "B", "background-color: yellow;", "")
>>> df.style.apply_index(color_b)
Selectively applying to specific levels of MultiIndex columns.
>>> midx = pd.MultiIndex.from_product([['ix', 'jy'], [0, 1], ['x3', 'z4']])
>>> df = pd.DataFrame([np.arange(8)], columns=midx)
>>> def highlight_x(s):
... return ["background-color: yellow;" if "x" in v else "" for v in s]
>>> df.style.apply_index(highlight_x, axis="columns", level=[0, 2])
... | pandas.reference.api.pandas.io.formats.style.styler.apply_index |
pandas.io.formats.style.Styler.applymap Styler.applymap(func, subset=None, **kwargs)[source]
Apply a CSS-styling function elementwise. Updates the HTML representation with the result. Parameters
func:function
func should take a scalar and return a string.
subset:label, array-like, IndexSlice, optional
A valid 2d input to DataFrame.loc[<subset>], or, in the case of a 1d input or single key, to DataFrame.loc[:, <subset>] where the columns are prioritised, to limit data to before applying the function.
**kwargs:dict
Pass along to func. Returns
self:Styler
See also Styler.applymap_index
Apply a CSS-styling function to headers elementwise. Styler.apply_index
Apply a CSS-styling function to headers level-wise. Styler.apply
Apply a CSS-styling function column-wise, row-wise, or table-wise. Notes The elements of the output of func should be CSS styles as strings, in the format ‘attribute: value; attribute2: value2; …’ or, if nothing is to be applied to that element, an empty string or None. Examples
>>> def color_negative(v, color):
... return f"color: {color};" if v < 0 else None
>>> df = pd.DataFrame(np.random.randn(5, 2), columns=["A", "B"])
>>> df.style.applymap(color_negative, color='red')
Using subset to restrict application to a single column or multiple columns
>>> df.style.applymap(color_negative, color='red', subset="A")
...
>>> df.style.applymap(color_negative, color='red', subset=["A", "B"])
...
Using a 2d input to subset to select rows in addition to columns
>>> df.style.applymap(color_negative, color='red',
... subset=([0,1,2], slice(None)))
>>> df.style.applymap(color_negative, color='red', subset=(slice(0,5,2), "A"))
...
See Table Visualization user guide for more details. | pandas.reference.api.pandas.io.formats.style.styler.applymap |
pandas.io.formats.style.Styler.applymap_index Styler.applymap_index(func, axis=0, level=None, **kwargs)[source]
Apply a CSS-styling function to the index or column headers, elementwise. Updates the HTML representation with the result. New in version 1.4.0. Parameters
func:function
func should take a scalar and return a string.
axis:{0, 1, “index”, “columns”}
The headers over which to apply the function.
level:int, str, list, optional
If index is MultiIndex the level(s) over which to apply the function.
**kwargs:dict
Pass along to func. Returns
self:Styler
See also Styler.apply_index
Apply a CSS-styling function to headers level-wise. Styler.apply
Apply a CSS-styling function column-wise, row-wise, or table-wise. Styler.applymap
Apply a CSS-styling function elementwise. Notes Each input to func will be an index value, if an Index, or a level value of a MultiIndex. The output of func should be CSS styles as a string, in the format ‘attribute: value; attribute2: value2; …’ or, if nothing is to be applied to that element, an empty string or None. Examples Basic usage to conditionally highlight values in the index.
>>> df = pd.DataFrame([[1,2], [3,4]], index=["A", "B"])
>>> def color_b(v):
...     return "background-color: yellow;" if v == "B" else None
>>> df.style.applymap_index(color_b)
Selectively applying to specific levels of MultiIndex columns.
>>> midx = pd.MultiIndex.from_product([['ix', 'jy'], [0, 1], ['x3', 'z4']])
>>> df = pd.DataFrame([np.arange(8)], columns=midx)
>>> def highlight_x(v):
... return "background-color: yellow;" if "x" in v else None
>>> df.style.applymap_index(highlight_x, axis="columns", level=[0, 2])
... | pandas.reference.api.pandas.io.formats.style.styler.applymap_index |
pandas.io.formats.style.Styler.background_gradient Styler.background_gradient(cmap='PuBu', low=0, high=0, axis=0, subset=None, text_color_threshold=0.408, vmin=None, vmax=None, gmap=None)[source]
Color the background in a gradient style. The background color is determined according to the data in each column, row or frame, or by a given gradient map. Requires matplotlib. Parameters
cmap:str or colormap
Matplotlib colormap.
low:float
Compress the color range at the low end. This is a multiple of the data range to extend below the minimum; good values usually in [0, 1], defaults to 0.
high:float
Compress the color range at the high end. This is a multiple of the data range to extend above the maximum; good values usually in [0, 1], defaults to 0.
axis:{0 or ‘index’, 1 or ‘columns’, None}, default 0
Apply to each column (axis=0 or 'index'), to each row (axis=1 or 'columns'), or to the entire DataFrame at once with axis=None.
subset:label, array-like, IndexSlice, optional
A valid 2d input to DataFrame.loc[<subset>], or, in the case of a 1d input or single key, to DataFrame.loc[:, <subset>] where the columns are prioritised, to limit data to before applying the function.
text_color_threshold:float or int
Luminance threshold for determining text color in [0, 1]. Facilitates text visibility across varying background colors. All text is dark if 0, and light if 1, defaults to 0.408.
vmin:float, optional
Minimum data value that corresponds to colormap minimum value. If not specified the minimum value of the data (or gmap) will be used. New in version 1.0.0.
vmax:float, optional
Maximum data value that corresponds to colormap maximum value. If not specified the maximum value of the data (or gmap) will be used. New in version 1.0.0.
gmap:array-like, optional
Gradient map for determining the background colors. If not supplied will use the underlying data from rows, columns or frame. If given as an ndarray or list-like must be an identical shape to the underlying data considering axis and subset. If given as DataFrame or Series must have same index and column labels considering axis and subset. If supplied, vmin and vmax should be given relative to this gradient map. New in version 1.3.0. Returns
self:Styler
See also Styler.text_gradient
Color the text in a gradient style. Notes When using low and high the range of the gradient, given by the data if gmap is not given or by gmap, is extended at the low end effectively by map.min - low * map.range and at the high end by map.max + high * map.range before the colors are normalized and determined. If combining with vmin and vmax the map.min, map.max and map.range are replaced by values according to the values derived from vmin and vmax. This method will preselect numeric columns and ignore non-numeric columns unless a gmap is supplied in which case no preselection occurs. Examples
>>> df = pd.DataFrame(columns=["City", "Temp (c)", "Rain (mm)", "Wind (m/s)"],
... data=[["Stockholm", 21.6, 5.0, 3.2],
... ["Oslo", 22.4, 13.3, 3.1],
... ["Copenhagen", 24.5, 0.0, 6.7]])
Shading the values column-wise, with axis=0, preselecting numeric columns
>>> df.style.background_gradient(axis=0)
Shading all values collectively using axis=None
>>> df.style.background_gradient(axis=None)
Compress the color map from the both low and high ends
>>> df.style.background_gradient(axis=None, low=0.75, high=1.0)
Manually setting vmin and vmax gradient thresholds
>>> df.style.background_gradient(axis=None, vmin=6.7, vmax=21.6)
Setting a gmap and applying to all columns with another cmap
>>> df.style.background_gradient(axis=0, gmap=df['Temp (c)'], cmap='YlOrRd')
...
Setting the gradient map for a dataframe (i.e. axis=None), we need to explicitly state subset to match the gmap shape
>>> gmap = np.array([[1,2,3], [2,3,4], [3,4,5]])
>>> df.style.background_gradient(axis=None, gmap=gmap,
... cmap='YlOrRd', subset=['Temp (c)', 'Rain (mm)', 'Wind (m/s)']
... ) | pandas.reference.api.pandas.io.formats.style.styler.background_gradient |
pandas.io.formats.style.Styler.bar Styler.bar(subset=None, axis=0, *, color=None, cmap=None, width=100, height=100, align='mid', vmin=None, vmax=None, props='width: 10em;')[source]
Draw bar chart in the cell backgrounds. Changed in version 1.4.0. Parameters
subset:label, array-like, IndexSlice, optional
A valid 2d input to DataFrame.loc[<subset>], or, in the case of a 1d input or single key, to DataFrame.loc[:, <subset>] where the columns are prioritised, to limit data to before applying the function.
axis:{0 or ‘index’, 1 or ‘columns’, None}, default 0
Apply to each column (axis=0 or 'index'), to each row (axis=1 or 'columns'), or to the entire DataFrame at once with axis=None.
color:str or 2-tuple/list
If a str is passed, the color is the same for both negative and positive numbers. If 2-tuple/list is used, the first element is the color_negative and the second is the color_positive (eg: [‘#d65f5f’, ‘#5fba7d’]).
cmap:str, matplotlib.cm.ColorMap
A string name of a matplotlib Colormap, or a Colormap object. Cannot be used together with color. New in version 1.4.0.
width:float, default 100
The percentage of the cell, measured from the left, in which to draw the bars, in [0, 100].
height:float, default 100
The percentage height of the bar in the cell, centrally aligned, in [0,100]. New in version 1.4.0.
align:str, int, float, callable, default ‘mid’
How to align the bars within the cells relative to a width adjusted center. If string must be one of: ‘left’ : bars are drawn rightwards from the minimum data value. ‘right’ : bars are drawn leftwards from the maximum data value. ‘zero’ : a value of zero is located at the center of the cell. ‘mid’ : a value of (max-min)/2 is located at the center of the cell, or if all values are negative (positive) the zero is aligned at the right (left) of the cell. ‘mean’ : the mean value of the data is located at the center of the cell. If a float or integer is given this will indicate the center of the cell. If a callable should take a 1d or 2d array and return a scalar. Changed in version 1.4.0.
vmin:float, optional
Minimum bar value, defining the left hand limit of the bar drawing range, lower values are clipped to vmin. When None (default): the minimum value of the data will be used.
vmax:float, optional
Maximum bar value, defining the right hand limit of the bar drawing range, higher values are clipped to vmax. When None (default): the maximum value of the data will be used.
props:str, optional
The base CSS of the cell that is extended to add the bar chart. Defaults to “width: 10em;”. New in version 1.4.0. Returns
self:Styler
Notes This section of the user guide: Table Visualization gives a number of examples for different settings and color coordination. | pandas.reference.api.pandas.io.formats.style.styler.bar |
pandas.io.formats.style.Styler.clear Styler.clear()[source]
Reset the Styler, removing any previously applied styles. Returns None. | pandas.reference.api.pandas.io.formats.style.styler.clear |
pandas.io.formats.style.Styler.env Styler.env=<jinja2.environment.Environment object> | pandas.reference.api.pandas.io.formats.style.styler.env |
pandas.io.formats.style.Styler.export Styler.export()[source]
Export the styles applied to the current Styler. Can be applied to a second Styler with Styler.use. Returns
styles:dict
See also Styler.use
Set the styles on the current Styler. Styler.copy
Create a copy of the current Styler. Notes This method is designed to copy non-data dependent attributes of one Styler to another. It differs from Styler.copy where data and data dependent attributes are also copied. The following items are exported since they are not generally data dependent:
Styling functions added by apply and applymap
Whether axes and names are hidden from the display, if unambiguous
Table attributes
Table styles
The following attributes are considered data dependent and therefore not exported:
Caption
UUID
Tooltips
Any hidden rows or columns identified by Index labels
Any formatting applied using Styler.format
Any CSS classes added using Styler.set_td_classes
Examples
>>> styler = pd.DataFrame([[1, 2], [3, 4]]).style
>>> styler2 = pd.DataFrame([[9, 9, 9]]).style
>>> styler.hide(axis=0).highlight_max(axis=1)
>>> export = styler.export()
>>> styler2.use(export) | pandas.reference.api.pandas.io.formats.style.styler.export |
pandas.io.formats.style.Styler.format Styler.format(formatter=None, subset=None, na_rep=None, precision=None, decimal='.', thousands=None, escape=None, hyperlinks=None)[source]
Format the text display value of cells. Parameters
formatter:str, callable, dict or None
Object to define how values are displayed. See notes.
subset:label, array-like, IndexSlice, optional
A valid 2d input to DataFrame.loc[<subset>], or, in the case of a 1d input or single key, to DataFrame.loc[:, <subset>] where the columns are prioritised, to limit data to before applying the function.
na_rep:str, optional
Representation for missing values. If na_rep is None, no special formatting is applied. New in version 1.0.0.
precision:int, optional
Floating point precision to use for display purposes, if not determined by the specified formatter. New in version 1.3.0.
decimal:str, default “.”
Character used as decimal separator for floats, complex and integers. New in version 1.3.0.
thousands:str, optional, default None
Character used as thousands separator for floats, complex and integers. New in version 1.3.0.
escape:str, optional
Use ‘html’ to replace the characters &, <, >, ', and " in cell display string with HTML-safe sequences. Use ‘latex’ to replace the characters &, %, $, #, _, {, }, ~, ^, and \ in the cell display string with LaTeX-safe sequences. Escaping is done before formatter. New in version 1.3.0.
hyperlinks:{“html”, “latex”}, optional
Convert string patterns containing https://, http://, ftp:// or www. to HTML <a> tags as clickable URL hyperlinks if “html”, or LaTeX href commands if “latex”. New in version 1.4.0. Returns
self:Styler
Notes This method assigns a formatting function, formatter, to each cell in the DataFrame. If formatter is None, then the default formatter is used. If a callable then that function should take a data value as input and return a displayable representation, such as a string. If formatter is given as a string this is assumed to be a valid Python format specification and is wrapped to a callable as string.format(x). If a dict is given, keys should correspond to column names, and values should be string or callable, as above. The default formatter currently expresses floats and complex numbers with the pandas display precision unless using the precision argument here. The default formatter does not adjust the representation of missing values unless the na_rep argument is used. The subset argument defines which region to apply the formatting function to. If the formatter argument is given in dict form but does not include all columns within the subset then these columns will have the default formatter applied. Any columns in the formatter dict excluded from the subset will be ignored. When using a formatter string the dtypes must be compatible, otherwise a ValueError will be raised. When instantiating a Styler, default formatting can be applied by setting the pandas.options:
styler.format.formatter: default None.
styler.format.na_rep: default None.
styler.format.precision: default 6.
styler.format.decimal: default ".".
styler.format.thousands: default None.
styler.format.escape: default None.
Examples Using na_rep and precision with the default formatter
>>> df = pd.DataFrame([[np.nan, 1.0, 'A'], [2.0, np.nan, 3.0]])
>>> df.style.format(na_rep='MISS', precision=3)
0 1 2
0 MISS 1.000 A
1 2.000 MISS 3.000
Using a formatter specification on consistent column dtypes
>>> df.style.format('{:.2f}', na_rep='MISS', subset=[0,1])
0 1 2
0 MISS 1.00 A
1 2.00 MISS 3.000000
Using the default formatter for unspecified columns
>>> df.style.format({0: '{:.2f}', 1: '£ {:.1f}'}, na_rep='MISS', precision=1)
...
0 1 2
0 MISS £ 1.0 A
1 2.00 MISS 3.0
Multiple na_rep or precision specifications under the default formatter.
>>> df.style.format(na_rep='MISS', precision=1, subset=[0])
... .format(na_rep='PASS', precision=2, subset=[1, 2])
0 1 2
0 MISS 1.00 A
1 2.0 PASS 3.00
Using a callable formatter function.
>>> func = lambda s: 'STRING' if isinstance(s, str) else 'FLOAT'
>>> df.style.format({0: '{:.1f}', 2: func}, precision=4, na_rep='MISS')
...
0 1 2
0 MISS 1.0000 STRING
1 2.0 MISS FLOAT
Using a formatter with HTML escape and na_rep.
>>> df = pd.DataFrame([['<div></div>', '"A&B"', None]])
>>> s = df.style.format(
... '<a href="a.com/{0}">{0}</a>', escape="html", na_rep="NA"
... )
>>> s.to_html()
...
<td .. ><a href="a.com/<div></div>"><div></div></a></td>
<td .. ><a href="a.com/"A&B"">"A&B"</a></td>
<td .. >NA</td>
...
Using a formatter with LaTeX escape.
>>> df = pd.DataFrame([["123"], ["~ ^"], ["$%#"]])
>>> df.style.format("\\textbf{{{}}}", escape="latex").to_latex()
...
\begin{tabular}{ll}
{} & {0} \\
0 & \textbf{123} \\
1 & \textbf{\textasciitilde \space \textasciicircum } \\
2 & \textbf{\$\%\#} \\
\end{tabular} | pandas.reference.api.pandas.io.formats.style.styler.format |
pandas.io.formats.style.Styler.format_index Styler.format_index(formatter=None, axis=0, level=None, na_rep=None, precision=None, decimal='.', thousands=None, escape=None, hyperlinks=None)[source]
Format the text display value of index labels or column headers. New in version 1.4.0. Parameters
formatter:str, callable, dict or None
Object to define how values are displayed. See notes.
axis:{0, “index”, 1, “columns”}
Whether to apply the formatter to the index or column headers.
level:int, str, list
The level(s) over which to apply the generic formatter.
na_rep:str, optional
Representation for missing values. If na_rep is None, no special formatting is applied.
precision:int, optional
Floating point precision to use for display purposes, if not determined by the specified formatter.
decimal:str, default “.”
Character used as decimal separator for floats, complex and integers.
thousands:str, optional, default None
Character used as thousands separator for floats, complex and integers.
escape:str, optional
Use ‘html’ to replace the characters &, <, >, ', and " in cell display string with HTML-safe sequences. Use ‘latex’ to replace the characters &, %, $, #, _, {, }, ~, ^, and \ in the cell display string with LaTeX-safe sequences. Escaping is done before formatter.
hyperlinks:{“html”, “latex”}, optional
Convert string patterns containing https://, http://, ftp:// or www. to HTML <a> tags as clickable URL hyperlinks if “html”, or LaTeX href commands if “latex”. Returns
self:Styler
Notes This method assigns a formatting function, formatter, to each level label in the DataFrame’s index or column headers. If formatter is None, then the default formatter is used. If a callable then that function should take a label value as input and return a displayable representation, such as a string. If formatter is given as a string this is assumed to be a valid Python format specification and is wrapped to a callable as string.format(x). If a dict is given, keys should correspond to MultiIndex level numbers or names, and values should be string or callable, as above. The default formatter currently expresses floats and complex numbers with the pandas display precision unless using the precision argument here. The default formatter does not adjust the representation of missing values unless the na_rep argument is used. The level argument defines which levels of a MultiIndex to apply the method to. If the formatter argument is given in dict form but does not include all levels within the level argument then these unspecified levels will have the default formatter applied. Any levels in the formatter dict specifically excluded from the level argument will be ignored. When using a formatter string the dtypes must be compatible, otherwise a ValueError will be raised. Examples Using na_rep and precision with the default formatter
>>> df = pd.DataFrame([[1, 2, 3]], columns=[2.0, np.nan, 4.0])
>>> df.style.format_index(axis=1, na_rep='MISS', precision=3)
2.000 MISS 4.000
0 1 2 3
Using a formatter specification on consistent dtypes in a level
>>> df.style.format_index('{:.2f}', axis=1, na_rep='MISS')
2.00 MISS 4.00
0 1 2 3
Using the default formatter for unspecified levels
>>> df = pd.DataFrame([[1, 2, 3]],
... columns=pd.MultiIndex.from_arrays([["a", "a", "b"],[2, np.nan, 4]]))
>>> df.style.format_index({0: lambda v: v.upper()}, axis=1, precision=1)
...
A B
2.0 nan 4.0
0 1 2 3
Using a callable formatter function.
>>> func = lambda s: 'STRING' if isinstance(s, str) else 'FLOAT'
>>> df.style.format_index(func, axis=1, na_rep='MISS')
...
STRING STRING
FLOAT MISS FLOAT
0 1 2 3
Using a formatter with HTML escape and na_rep.
>>> df = pd.DataFrame([[1, 2, 3]], columns=['"A"', 'A&B', None])
>>> s = df.style.format_index('$ {0}', axis=1, escape="html", na_rep="NA")
...
<th .. >$ "A"</th>
<th .. >$ A&B</th>
<th .. >NA</th>
...
Using a formatter with LaTeX escape.
>>> df = pd.DataFrame([[1, 2, 3]], columns=["123", "~", "$%#"])
>>> df.style.format_index("\\textbf{{{}}}", escape="latex", axis=1).to_latex()
...
\begin{tabular}{lrrr}
{} & {\textbf{123}} & {\textbf{\textasciitilde }} & {\textbf{\$\%\#}} \\
0 & 1 & 2 & 3 \\
\end{tabular} | pandas.reference.api.pandas.io.formats.style.styler.format_index |
pandas.io.formats.style.Styler.from_custom_template classmethodStyler.from_custom_template(searchpath, html_table=None, html_style=None)[source]
Factory function for creating a subclass of Styler. Uses custom templates and Jinja environment. Changed in version 1.3.0. Parameters
searchpath:str or list
Path or paths of directories containing the templates.
html_table:str
Name of your custom template to replace the html_table template. New in version 1.3.0.
html_style:str
Name of your custom template to replace the html_style template. New in version 1.3.0. Returns
MyStyler:subclass of Styler
Has the correct env, template_html, template_html_table and template_html_style class attributes set. | pandas.reference.api.pandas.io.formats.style.styler.from_custom_template
pandas.io.formats.style.Styler.hide Styler.hide(subset=None, axis=0, level=None, names=False)[source]
Hide the entire index / column headers, or specific rows / columns from display. New in version 1.4.0. Parameters
subset:label, array-like, IndexSlice, optional
A valid 1d input or single key along the axis within DataFrame.loc[<subset>, :] or DataFrame.loc[:, <subset>] depending upon axis, to limit data to select hidden rows / columns.
axis:{“index”, 0, “columns”, 1}
Apply to the index or columns.
level:int, str, list
The level(s) to hide in a MultiIndex if hiding the entire index / column headers. Cannot be used simultaneously with subset.
names:bool
Whether to hide the level name(s) of the index / columns headers in the case it (or at least one of the levels) remains visible. Returns
self:Styler
Notes This method has multiple functionality depending upon the combination of the subset, level and names arguments (see examples). The axis argument is used only to control whether the method is applied to row or column headers: Argument combinations
subset level names Effect
None None False The axis-Index is hidden entirely.
None None True Only the axis-Index names are hidden.
None Int, Str, List False Specified axis-MultiIndex levels are hidden entirely.
None Int, Str, List True Specified axis-MultiIndex levels are hidden entirely and the names of remaining axis-MultiIndex levels.
Subset None False The specified data rows/columns are hidden, but the axis-Index itself, and names, remain unchanged.
Subset None True The specified data rows/columns and axis-Index names are hidden, but the axis-Index itself remains unchanged.
Subset Int, Str, List Boolean ValueError: cannot supply subset and level simultaneously. Note this method only hides the identified elements, so it can be chained to hide multiple elements in sequence. Examples Simple application hiding specific rows:
>>> df = pd.DataFrame([[1,2], [3,4], [5,6]], index=["a", "b", "c"])
>>> df.style.hide(["a", "b"])
0 1
c 5 6
Hide the index and retain the data values:
>>> midx = pd.MultiIndex.from_product([["x", "y"], ["a", "b", "c"]])
>>> df = pd.DataFrame(np.random.randn(6,6), index=midx, columns=midx)
>>> df.style.format("{:.1f}").hide()
x y
a b c a b c
0.1 0.0 0.4 1.3 0.6 -1.4
0.7 1.0 1.3 1.5 -0.0 -0.2
1.4 -0.8 1.6 -0.2 -0.4 -0.3
0.4 1.0 -0.2 -0.8 -1.2 1.1
-0.6 1.2 1.8 1.9 0.3 0.3
0.8 0.5 -0.3 1.2 2.2 -0.8
Hide specific rows in a MultiIndex but retain the index:
>>> df.style.format("{:.1f}").hide(subset=(slice(None), ["a", "c"]))
...
x y
a b c a b c
x b 0.7 1.0 1.3 1.5 -0.0 -0.2
y b -0.6 1.2 1.8 1.9 0.3 0.3
Hide specific rows and the index through chaining:
>>> df.style.format("{:.1f}").hide(subset=(slice(None), ["a", "c"])).hide()
...
x y
a b c a b c
0.7 1.0 1.3 1.5 -0.0 -0.2
-0.6 1.2 1.8 1.9 0.3 0.3
Hide a specific level:
>>> df.style.format("{:,.1f}").hide(level=1)
x y
a b c a b c
x 0.1 0.0 0.4 1.3 0.6 -1.4
0.7 1.0 1.3 1.5 -0.0 -0.2
1.4 -0.8 1.6 -0.2 -0.4 -0.3
y 0.4 1.0 -0.2 -0.8 -1.2 1.1
-0.6 1.2 1.8 1.9 0.3 0.3
0.8 0.5 -0.3 1.2 2.2 -0.8
Hiding just the index level names:
>>> df.index.names = ["lev0", "lev1"]
>>> df.style.format("{:,.1f}").hide(names=True)
x y
a b c a b c
x a 0.1 0.0 0.4 1.3 0.6 -1.4
b 0.7 1.0 1.3 1.5 -0.0 -0.2
c 1.4 -0.8 1.6 -0.2 -0.4 -0.3
y a 0.4 1.0 -0.2 -0.8 -1.2 1.1
b -0.6 1.2 1.8 1.9 0.3 0.3
c 0.8 0.5 -0.3 1.2 2.2 -0.8
Examples all produce equivalently transposed effects with axis="columns". | pandas.reference.api.pandas.io.formats.style.styler.hide |
pandas.io.formats.style.Styler.hide_columns Styler.hide_columns(subset=None, level=None, names=False)[source]
Hide the column headers or specific keys in the columns from rendering. This method has dual functionality:
if subset is None then the entire column headers row, or specific levels, will be hidden whilst the data-values remain visible. if a subset is given then those specific columns, including the data-values will be hidden, whilst the column headers row remains visible.
Changed in version 1.3.0. Deprecated since version 1.4.0:
This method should be replaced by hide(axis="columns", **kwargs) Parameters
subset:label, array-like, IndexSlice, optional
A valid 1d input or single key along the columns axis within DataFrame.loc[:, <subset>], to limit data to before applying the function.
level:int, str, list
The level(s) to hide in a MultiIndex if hiding the entire column headers row. Cannot be used simultaneously with subset. New in version 1.4.0.
names:bool
Whether to hide the column index name(s), in the case all column headers, or some levels, are visible. New in version 1.4.0. Returns
self:Styler
See also Styler.hide
Hide the entire index / columns, or specific rows / columns. | pandas.reference.api.pandas.io.formats.style.styler.hide_columns |
pandas.io.formats.style.Styler.hide_index Styler.hide_index(subset=None, level=None, names=False)[source]
Hide the entire index, or specific keys in the index from rendering. This method has dual functionality:
if subset is None then the entire index, or specified levels, will be hidden whilst displaying all data-rows. if a subset is given then those specific rows will be hidden whilst the index itself remains visible.
Changed in version 1.3.0. Deprecated since version 1.4.0: This method should be replaced by hide(axis="index", **kwargs) Parameters
subset:label, array-like, IndexSlice, optional
A valid 1d input or single key along the index axis within DataFrame.loc[<subset>, :], to limit data to before applying the function.
level:int, str, list
The level(s) to hide in a MultiIndex if hiding the entire index. Cannot be used simultaneously with subset. New in version 1.4.0.
names:bool
Whether to hide the index name(s), in the case the index or part of it remains visible. New in version 1.4.0. Returns
self:Styler
See also Styler.hide
Hide the entire index / columns, or specific rows / columns. | pandas.reference.api.pandas.io.formats.style.styler.hide_index |
pandas.io.formats.style.Styler.highlight_between Styler.highlight_between(subset=None, color='yellow', axis=0, left=None, right=None, inclusive='both', props=None)[source]
Highlight a defined range with a style. New in version 1.3.0. Parameters
subset:label, array-like, IndexSlice, optional
A valid 2d input to DataFrame.loc[<subset>], or, in the case of a 1d input or single key, to DataFrame.loc[:, <subset>] where the columns are prioritised, to limit data to before applying the function.
color:str, default ‘yellow’
Background color to use for highlighting.
axis:{0 or ‘index’, 1 or ‘columns’, None}, default 0
If left or right given as sequence, axis along which to apply those boundaries. See examples.
left:scalar or datetime-like, or sequence or array-like, default None
Left bound for defining the range.
right:scalar or datetime-like, or sequence or array-like, default None
Right bound for defining the range.
inclusive:{‘both’, ‘neither’, ‘left’, ‘right’}
Identify whether bounds are closed or open.
props:str, default None
CSS properties to use for highlighting. If props is given, color is not used. Returns
self:Styler
See also Styler.highlight_null
Highlight missing values with a style. Styler.highlight_max
Highlight the maximum with a style. Styler.highlight_min
Highlight the minimum with a style. Styler.highlight_quantile
Highlight values defined by a quantile with a style. Notes If left is None only the right bound is applied. If right is None only the left bound is applied. If both are None all values are highlighted. axis is only needed if left or right are provided as a sequence or an array-like object for aligning the shapes. If left and right are both scalars then all axis inputs will give the same result. This function only works with compatible dtypes. For example a datetime-like region can only use equivalent datetime-like left and right arguments. Use subset to control regions which have multiple dtypes. Examples Basic usage
>>> df = pd.DataFrame({
... 'One': [1.2, 1.6, 1.5],
... 'Two': [2.9, 2.1, 2.5],
... 'Three': [3.1, 3.2, 3.8],
... })
>>> df.style.highlight_between(left=2.1, right=2.9)
Using a range input sequence along an axis, in this case setting a left and right for each column individually
>>> df.style.highlight_between(left=[1.4, 2.4, 3.4], right=[1.6, 2.6, 3.6],
... axis=1, color="#fffd75")
Using axis=None and providing the left argument as an array that matches the input DataFrame, with a constant right
>>> df.style.highlight_between(left=[[2,2,3],[2,2,3],[3,3,3]], right=3.5,
... axis=None, color="#fffd75")
Using props instead of default background coloring
>>> df.style.highlight_between(left=1.5, right=3.5,
... props='font-weight:bold;color:#e83e8c') | pandas.reference.api.pandas.io.formats.style.styler.highlight_between |
pandas.io.formats.style.Styler.highlight_max Styler.highlight_max(subset=None, color='yellow', axis=0, props=None)[source]
Highlight the maximum with a style. Parameters
subset:label, array-like, IndexSlice, optional
A valid 2d input to DataFrame.loc[<subset>], or, in the case of a 1d input or single key, to DataFrame.loc[:, <subset>] where the columns are prioritised, to limit data to before applying the function.
color:str, default ‘yellow’
Background color to use for highlighting.
axis:{0 or ‘index’, 1 or ‘columns’, None}, default 0
Apply to each column (axis=0 or 'index'), to each row (axis=1 or 'columns'), or to the entire DataFrame at once with axis=None.
props:str, default None
CSS properties to use for highlighting. If props is given, color is not used. New in version 1.3.0. Returns
self:Styler
See also Styler.highlight_null
Highlight missing values with a style. Styler.highlight_min
Highlight the minimum with a style. Styler.highlight_between
Highlight a defined range with a style. Styler.highlight_quantile
Highlight values defined by a quantile with a style. | pandas.reference.api.pandas.io.formats.style.styler.highlight_max |
pandas.io.formats.style.Styler.highlight_min Styler.highlight_min(subset=None, color='yellow', axis=0, props=None)[source]
Highlight the minimum with a style. Parameters
subset:label, array-like, IndexSlice, optional
A valid 2d input to DataFrame.loc[<subset>], or, in the case of a 1d input or single key, to DataFrame.loc[:, <subset>] where the columns are prioritised, to limit data to before applying the function.
color:str, default ‘yellow’
Background color to use for highlighting.
axis:{0 or ‘index’, 1 or ‘columns’, None}, default 0
Apply to each column (axis=0 or 'index'), to each row (axis=1 or 'columns'), or to the entire DataFrame at once with axis=None.
props:str, default None
CSS properties to use for highlighting. If props is given, color is not used. New in version 1.3.0. Returns
self:Styler
See also Styler.highlight_null
Highlight missing values with a style. Styler.highlight_max
Highlight the maximum with a style. Styler.highlight_between
Highlight a defined range with a style. Styler.highlight_quantile
Highlight values defined by a quantile with a style. | pandas.reference.api.pandas.io.formats.style.styler.highlight_min |
pandas.io.formats.style.Styler.highlight_null Styler.highlight_null(null_color='red', subset=None, props=None)[source]
Highlight missing values with a style. Parameters
null_color:str, default ‘red’
subset:label, array-like, IndexSlice, optional
A valid 2d input to DataFrame.loc[<subset>], or, in the case of a 1d input or single key, to DataFrame.loc[:, <subset>] where the columns are prioritised, to limit data to before applying the function. New in version 1.1.0.
props:str, default None
CSS properties to use for highlighting. If props is given, color is not used. New in version 1.3.0. Returns
self:Styler
See also Styler.highlight_max
Highlight the maximum with a style. Styler.highlight_min
Highlight the minimum with a style. Styler.highlight_between
Highlight a defined range with a style. Styler.highlight_quantile
Highlight values defined by a quantile with a style. | pandas.reference.api.pandas.io.formats.style.styler.highlight_null |
pandas.io.formats.style.Styler.highlight_quantile Styler.highlight_quantile(subset=None, color='yellow', axis=0, q_left=0.0, q_right=1.0, interpolation='linear', inclusive='both', props=None)[source]
Highlight values defined by a quantile with a style. New in version 1.3.0. Parameters
subset:label, array-like, IndexSlice, optional
A valid 2d input to DataFrame.loc[<subset>], or, in the case of a 1d input or single key, to DataFrame.loc[:, <subset>] where the columns are prioritised, to limit data to before applying the function.
color:str, default ‘yellow’
Background color to use for highlighting.
axis:{0 or ‘index’, 1 or ‘columns’, None}, default 0
Axis along which to determine and highlight quantiles. If None quantiles are measured over the entire DataFrame. See examples.
q_left:float, default 0
Left bound, in [0, q_right), for the target quantile range.
q_right:float, default 1
Right bound, in (q_left, 1], for the target quantile range.
interpolation:{‘linear’, ‘lower’, ‘higher’, ‘midpoint’, ‘nearest’}
Argument passed to Series.quantile or DataFrame.quantile for quantile estimation.
inclusive:{‘both’, ‘neither’, ‘left’, ‘right’}
Identify whether quantile bounds are closed or open.
props:str, default None
CSS properties to use for highlighting. If props is given, color is not used. Returns
self:Styler
See also Styler.highlight_null
Highlight missing values with a style. Styler.highlight_max
Highlight the maximum with a style. Styler.highlight_min
Highlight the minimum with a style. Styler.highlight_between
Highlight a defined range with a style. Notes This function does not work with str dtypes. Examples Using axis=None and apply a quantile to all collective data
>>> df = pd.DataFrame(np.arange(10).reshape(2,5) + 1)
>>> df.style.highlight_quantile(axis=None, q_left=0.8, color="#fffd75")
...
Or highlight quantiles row-wise or column-wise, in this case by row-wise
>>> df.style.highlight_quantile(axis=1, q_left=0.8, color="#fffd75")
...
Use props instead of default background coloring
>>> df.style.highlight_quantile(axis=None, q_left=0.2, q_right=0.8,
... props='font-weight:bold;color:#e83e8c') | pandas.reference.api.pandas.io.formats.style.styler.highlight_quantile |
pandas.io.formats.style.Styler.loader Styler.loader=<jinja2.loaders.PackageLoader object> | pandas.reference.api.pandas.io.formats.style.styler.loader |
pandas.io.formats.style.Styler.pipe Styler.pipe(func, *args, **kwargs)[source]
Apply func(self, *args, **kwargs), and return the result. Parameters
func:function
Function to apply to the Styler. Alternatively, a (callable, keyword) tuple where keyword is a string indicating the keyword of callable that expects the Styler.
*args:optional
Arguments passed to func.
**kwargs:optional
A dictionary of keyword arguments passed into func. Returns
object :
The value returned by func. See also DataFrame.pipe
Analogous method for DataFrame. Styler.apply
Apply a CSS-styling function column-wise, row-wise, or table-wise. Notes Like DataFrame.pipe(), this method can simplify the application of several user-defined functions to a styler. Instead of writing:
f(g(df.style.set_precision(3), arg1=a), arg2=b, arg3=c)
users can write:
(df.style.set_precision(3)
.pipe(g, arg1=a)
.pipe(f, arg2=b, arg3=c))
In particular, this allows users to define functions that take a styler object, along with other parameters, and return the styler after making styling changes (such as calling Styler.apply() or Styler.set_properties()). Using .pipe, these user-defined style “transformations” can be interleaved with calls to the built-in Styler interface. Examples
>>> def format_conversion(styler):
... return (styler.set_properties(**{'text-align': 'right'})
... .format({'conversion': '{:.1%}'}))
The user-defined format_conversion function above can be called within a sequence of other style modifications:
>>> df = pd.DataFrame({'trial': list(range(5)),
... 'conversion': [0.75, 0.85, np.nan, 0.7, 0.72]})
>>> (df.style
... .highlight_min(subset=['conversion'], color='yellow')
... .pipe(format_conversion)
... .set_caption("Results with minimum conversion highlighted."))
... | pandas.reference.api.pandas.io.formats.style.styler.pipe |
pandas.io.formats.style.Styler.render Styler.render(sparse_index=None, sparse_columns=None, **kwargs)[source]
Render the Styler including all applied styles to HTML. Deprecated since version 1.4.0. Parameters
sparse_index:bool, optional
Whether to sparsify the display of a hierarchical index. Setting to False will display each explicit level element in a hierarchical key for each row. Defaults to pandas.options.styler.sparse.index value.
sparse_columns:bool, optional
Whether to sparsify the display of hierarchical columns. Setting to False will display each explicit level element in a hierarchical key for each column. Defaults to pandas.options.styler.sparse.columns value. **kwargs
Any additional keyword arguments are passed through to self.template.render. This is useful when you need to provide additional variables for a custom template. Returns
rendered:str
The rendered HTML. Notes This method is deprecated in favour of Styler.to_html. Styler objects have defined the _repr_html_ method which automatically calls self.to_html() when it’s the last item in a Notebook cell. When calling Styler.render() directly, wrap the result in IPython.display.HTML to view the rendered HTML in the notebook. Pandas uses the following keys in render. Arguments passed in **kwargs take precedence, so think carefully if you want to override them: head cellstyle body uuid table_styles caption table_attributes | pandas.reference.api.pandas.io.formats.style.styler.render |
pandas.io.formats.style.Styler.set_caption Styler.set_caption(caption)[source]
Set the text added to a <caption> HTML element. Parameters
caption:str, tuple
For HTML output either the string input is used or the first element of the tuple. For LaTeX the string input provides a caption and the additional tuple input allows for full captions and short captions, in that order. Returns
self:Styler | pandas.reference.api.pandas.io.formats.style.styler.set_caption |
pandas.io.formats.style.Styler.set_na_rep Styler.set_na_rep(na_rep)[source]
Set the missing data representation on a Styler. New in version 1.0.0. Deprecated since version 1.3.0. Parameters
na_rep:str
Returns
self:Styler
Notes This method is deprecated. See Styler.format() | pandas.reference.api.pandas.io.formats.style.styler.set_na_rep |
pandas.io.formats.style.Styler.set_precision Styler.set_precision(precision)[source]
Set the precision used to display values. Deprecated since version 1.3.0. Parameters
precision:int
Returns
self:Styler
Notes This method is deprecated see Styler.format. | pandas.reference.api.pandas.io.formats.style.styler.set_precision |
pandas.io.formats.style.Styler.set_properties Styler.set_properties(subset=None, **kwargs)[source]
Set defined CSS-properties to each <td> HTML element within the given subset. Parameters
subset:label, array-like, IndexSlice, optional
A valid 2d input to DataFrame.loc[<subset>], or, in the case of a 1d input or single key, to DataFrame.loc[:, <subset>] where the columns are prioritised, to limit data to before applying the function.
**kwargs:dict
A dictionary of property, value pairs to be set for each cell. Returns
self:Styler
Notes This is a convenience method which wraps Styler.applymap(), calling a function that returns the CSS properties independently of the data. Examples
>>> df = pd.DataFrame(np.random.randn(10, 4))
>>> df.style.set_properties(color="white", align="right")
>>> df.style.set_properties(**{'background-color': 'yellow'})
See Table Visualization user guide for more details. | pandas.reference.api.pandas.io.formats.style.styler.set_properties |
pandas.io.formats.style.Styler.set_sticky Styler.set_sticky(axis=0, pixel_size=None, levels=None)[source]
Add CSS to permanently display the index or column headers in a scrolling frame. Parameters
axis:{0 or ‘index’, 1 or ‘columns’}, default 0
Whether to make the index or column headers sticky.
pixel_size:int, optional
Required to configure the width of index cells or the height of column header cells when sticking a MultiIndex (or with a named Index). Defaults to 75 and 25 respectively.
levels:int, str, list, optional
If axis is a MultiIndex the specific levels to stick. If None will stick all levels. Returns
self:Styler
Notes This method uses the CSS ‘position: sticky;’ property to display. It is designed to work with visible axes, therefore both:
styler.set_sticky(axis="index").hide(axis="index") styler.set_sticky(axis="columns").hide(axis="columns")
may produce strange behaviour due to CSS controls with missing elements. | pandas.reference.api.pandas.io.formats.style.styler.set_sticky |
pandas.io.formats.style.Styler.set_table_attributes Styler.set_table_attributes(attributes)[source]
Set the table attributes added to the <table> HTML element. These are items in addition to automatic (by default) id attribute. Parameters
attributes:str
Returns
self:Styler
See also Styler.set_table_styles
Set the table styles included within the <style> HTML element. Styler.set_td_classes
Set the DataFrame of strings added to the class attribute of <td> HTML elements. Examples
>>> df = pd.DataFrame(np.random.randn(10, 4))
>>> df.style.set_table_attributes('class="pure-table"')
# ... <table class="pure-table"> ... | pandas.reference.api.pandas.io.formats.style.styler.set_table_attributes |
pandas.io.formats.style.Styler.set_table_styles Styler.set_table_styles(table_styles=None, axis=0, overwrite=True, css_class_names=None)[source]
Set the table styles included within the <style> HTML element. This function can be used to style the entire table, columns, rows or specific HTML selectors. Parameters
table_styles:list or dict
If supplying a list, each individual table_style should be a dictionary with selector and props keys. selector should be a CSS selector that the style will be applied to (automatically prefixed by the table’s UUID) and props should be a list of tuples with (attribute, value). If supplying a dict, the dict keys should correspond to column names or index values, depending upon the specified axis argument. These will be mapped to row or col CSS selectors. MultiIndex values as dict keys should be in their respective tuple form. The dict values should be a list as specified in the form with CSS selectors and props that will be applied to the specified row or column. Changed in version 1.2.0.
axis:{0 or ‘index’, 1 or ‘columns’, None}, default 0
Apply to each column (axis=0 or 'index'), to each row (axis=1 or 'columns'). Only used if table_styles is dict. New in version 1.2.0.
overwrite:bool, default True
Styles are replaced if True, or extended if False. CSS rules are preserved so most recent styles set will dominate if selectors intersect. New in version 1.2.0.
css_class_names:dict, optional
A dict of strings used to replace the default CSS classes described below. New in version 1.4.0. Returns
self:Styler
See also Styler.set_td_classes
Set the DataFrame of strings added to the class attribute of <td> HTML elements. Styler.set_table_attributes
Set the table attributes added to the <table> HTML element. Notes The default CSS classes dict, whose values can be replaced is as follows:
css_class_names = {"row_heading": "row_heading",
"col_heading": "col_heading",
"index_name": "index_name",
"col": "col",
"col_trim": "col_trim",
"row_trim": "row_trim",
"level": "level",
"data": "data",
"blank": "blank"}
Examples
>>> df = pd.DataFrame(np.random.randn(10, 4),
... columns=['A', 'B', 'C', 'D'])
>>> df.style.set_table_styles(
... [{'selector': 'tr:hover',
... 'props': [('background-color', 'yellow')]}]
... )
Or with CSS strings
>>> df.style.set_table_styles(
... [{'selector': 'tr:hover',
... 'props': 'background-color: yellow; font-size: 1em;'}]
... )
Adding column styling by name
>>> df.style.set_table_styles({
... 'A': [{'selector': '',
... 'props': [('color', 'red')]}],
... 'B': [{'selector': 'td',
... 'props': 'color: blue;'}]
... }, overwrite=False)
Adding row styling
>>> df.style.set_table_styles({
... 0: [{'selector': 'td:hover',
... 'props': [('font-size', '25px')]}]
... }, axis=1, overwrite=False)
See Table Visualization user guide for more details. | pandas.reference.api.pandas.io.formats.style.styler.set_table_styles |
pandas.io.formats.style.Styler.set_td_classes Styler.set_td_classes(classes)[source]
Set the DataFrame of strings added to the class attribute of <td> HTML elements. Parameters
classes:DataFrame
DataFrame containing strings that will be translated to CSS classes, mapped by identical column and index key values that must exist on the underlying Styler data. None, NaN values, and empty strings will be ignored and not affect the rendered HTML. Returns
self:Styler
See also Styler.set_table_styles
Set the table styles included within the <style> HTML element. Styler.set_table_attributes
Set the table attributes added to the <table> HTML element. Notes Can be used in combination with Styler.set_table_styles to define an internal CSS solution without reference to external CSS files. Examples
>>> df = pd.DataFrame(data=[[1, 2, 3], [4, 5, 6]], columns=["A", "B", "C"])
>>> classes = pd.DataFrame([
... ["min-val red", "", "blue"],
... ["red", None, "blue max-val"]
... ], index=df.index, columns=df.columns)
>>> df.style.set_td_classes(classes)
Using MultiIndex columns and a classes DataFrame as a subset of the underlying,
>>> df = pd.DataFrame([[1,2],[3,4]], index=["a", "b"],
... columns=[["level0", "level0"], ["level1a", "level1b"]])
>>> classes = pd.DataFrame(["min-val"], index=["a"],
... columns=[["level0"],["level1a"]])
>>> df.style.set_td_classes(classes)
Form of the output with new additional css classes,
>>> df = pd.DataFrame([[1]])
>>> css = pd.DataFrame([["other-class"]])
>>> s = Styler(df, uuid="_", cell_ids=False).set_td_classes(css)
>>> s.hide(axis=0).to_html()
'<style type="text/css"></style>'
'<table id="T__">'
' <thead>'
' <tr><th class="col_heading level0 col0" >0</th></tr>'
' </thead>'
' <tbody>'
' <tr><td class="data row0 col0 other-class" >1</td></tr>'
' </tbody>'
'</table>' | pandas.reference.api.pandas.io.formats.style.styler.set_td_classes |
pandas.io.formats.style.Styler.set_tooltips Styler.set_tooltips(ttips, props=None, css_class=None)[source]
Set the DataFrame of strings on Styler generating :hover tooltips. These string based tooltips are only applicable to <td> HTML elements, and cannot be used for column or index headers. New in version 1.3.0. Parameters
ttips:DataFrame
DataFrame containing strings that will be translated to tooltips, mapped by identical column and index values that must exist on the underlying Styler data. None, NaN values, and empty strings will be ignored and not affect the rendered HTML.
props:list-like or str, optional
List of (attr, value) tuples or a valid CSS string. If None adopts the internal default values described in notes.
css_class:str, optional
Name of the tooltip class used in CSS, should conform to HTML standards. Only useful if integrating tooltips with external CSS. If None uses the internal default value ‘pd-t’. Returns
self:Styler
Notes Tooltips are created by adding <span class="pd-t"></span> to each data cell and then manipulating the table level CSS to attach pseudo hover and pseudo after selectors to produce the required results. The default properties for the tooltip CSS class are: visibility: hidden position: absolute z-index: 1 background-color: black color: white transform: translate(-20px, -20px) The property ‘visibility: hidden;’ is a key prerequisite to the hover functionality, and should always be included in any manual properties specification, using the props argument. Tooltips are not designed to be efficient, and can add large amounts of additional HTML for larger tables, since they also require that cell_ids is forced to True. Examples Basic application
>>> df = pd.DataFrame(data=[[0, 1], [2, 3]])
>>> ttips = pd.DataFrame(
... data=[["Min", ""], [np.nan, "Max"]], columns=df.columns, index=df.index
... )
>>> s = df.style.set_tooltips(ttips).to_html()
Optionally controlling the tooltip visual display
>>> df.style.set_tooltips(ttips, css_class='tt-add', props=[
... ('visibility', 'hidden'),
... ('position', 'absolute'),
... ('z-index', 1)])
>>> df.style.set_tooltips(ttips, css_class='tt-add',
... props='visibility:hidden; position:absolute; z-index:1;')
... | pandas.reference.api.pandas.io.formats.style.styler.set_tooltips |
pandas.io.formats.style.Styler.set_uuid Styler.set_uuid(uuid)[source]
Set the uuid applied to id attributes of HTML elements. Parameters
uuid:str
Returns
self:Styler
Notes Almost all HTML elements within the table, and including the <table> element are assigned id attributes. The format is T_uuid_<extra> where <extra> is typically a more specific identifier, such as row1_col2. | pandas.reference.api.pandas.io.formats.style.styler.set_uuid |
pandas.io.formats.style.Styler.template_html Styler.template_html=<Template 'html.tpl'> | pandas.reference.api.pandas.io.formats.style.styler.template_html |
pandas.io.formats.style.Styler.template_html_style Styler.template_html_style=<Template 'html_style.tpl'> | pandas.reference.api.pandas.io.formats.style.styler.template_html_style |
pandas.io.formats.style.Styler.template_html_table Styler.template_html_table=<Template 'html_table.tpl'> | pandas.reference.api.pandas.io.formats.style.styler.template_html_table |
pandas.io.formats.style.Styler.template_latex Styler.template_latex=<Template 'latex.tpl'> | pandas.reference.api.pandas.io.formats.style.styler.template_latex |
pandas.io.formats.style.Styler.text_gradient Styler.text_gradient(cmap='PuBu', low=0, high=0, axis=0, subset=None, vmin=None, vmax=None, gmap=None)[source]
Color the text in a gradient style. The text color is determined according to the data in each column, row or frame, or by a given gradient map. Requires matplotlib. Parameters
cmap:str or colormap
Matplotlib colormap.
low:float
Compress the color range at the low end. This is a multiple of the data range to extend below the minimum; good values usually in [0, 1], defaults to 0.
high:float
Compress the color range at the high end. This is a multiple of the data range to extend above the maximum; good values usually in [0, 1], defaults to 0.
axis:{0 or ‘index’, 1 or ‘columns’, None}, default 0
Apply to each column (axis=0 or 'index'), to each row (axis=1 or 'columns'), or to the entire DataFrame at once with axis=None.
subset:label, array-like, IndexSlice, optional
A valid 2d input to DataFrame.loc[<subset>], or, in the case of a 1d input or single key, to DataFrame.loc[:, <subset>] where the columns are prioritised, to limit data to before applying the function.
vmin:float, optional
Minimum data value that corresponds to colormap minimum value. If not specified the minimum value of the data (or gmap) will be used. New in version 1.0.0.
vmax:float, optional
Maximum data value that corresponds to colormap maximum value. If not specified the maximum value of the data (or gmap) will be used. New in version 1.0.0.
gmap:array-like, optional
Gradient map for determining the text colors. If not supplied will use the underlying data from rows, columns or frame. If given as an ndarray or list-like must be an identical shape to the underlying data considering axis and subset. If given as DataFrame or Series must have same index and column labels considering axis and subset. If supplied, vmin and vmax should be given relative to this gradient map. New in version 1.3.0. Returns
self:Styler
See also Styler.background_gradient
Color the background in a gradient style. Notes When using low and high the range of the gradient, given by the data if gmap is not given or by gmap, is extended at the low end effectively by map.min - low * map.range and at the high end by map.max + high * map.range before the colors are normalized and determined. If combining with vmin and vmax the map.min, map.max and map.range are replaced by values according to the values derived from vmin and vmax. This method will preselect numeric columns and ignore non-numeric columns unless a gmap is supplied in which case no preselection occurs. Examples
>>> df = pd.DataFrame(columns=["City", "Temp (c)", "Rain (mm)", "Wind (m/s)"],
... data=[["Stockholm", 21.6, 5.0, 3.2],
... ["Oslo", 22.4, 13.3, 3.1],
... ["Copenhagen", 24.5, 0.0, 6.7]])
Shading the values column-wise, with axis=0, preselecting numeric columns
>>> df.style.text_gradient(axis=0)
Shading all values collectively using axis=None
>>> df.style.text_gradient(axis=None)
Compress the color map from both the low and high ends
>>> df.style.text_gradient(axis=None, low=0.75, high=1.0)
Manually setting vmin and vmax gradient thresholds
>>> df.style.text_gradient(axis=None, vmin=6.7, vmax=21.6)
Setting a gmap and applying to all columns with another cmap
>>> df.style.text_gradient(axis=0, gmap=df['Temp (c)'], cmap='YlOrRd')
...
When setting the gradient map for a dataframe (i.e. axis=None), we need to state subset explicitly to match the gmap shape
>>> gmap = np.array([[1,2,3], [2,3,4], [3,4,5]])
>>> df.style.text_gradient(axis=None, gmap=gmap,
... cmap='YlOrRd', subset=['Temp (c)', 'Rain (mm)', 'Wind (m/s)']
... ) | pandas.reference.api.pandas.io.formats.style.styler.text_gradient |
pandas.io.formats.style.Styler.to_excel Styler.to_excel(excel_writer, sheet_name='Sheet1', na_rep='', float_format=None, columns=None, header=True, index=True, index_label=None, startrow=0, startcol=0, engine=None, merge_cells=True, encoding=None, inf_rep='inf', verbose=True, freeze_panes=None)[source]
Write Styler to an Excel sheet. To write a single Styler to an Excel .xlsx file it is only necessary to specify a target file name. To write to multiple sheets it is necessary to create an ExcelWriter object with a target file name, and specify a sheet in the file to write to. Multiple sheets may be written to by specifying unique sheet_name. With all data written to the file it is necessary to save the changes. Note that creating an ExcelWriter object with a file name that already exists will result in the contents of the existing file being erased. Parameters
excel_writer:path-like, file-like, or ExcelWriter object
File path or existing ExcelWriter.
sheet_name:str, default ‘Sheet1’
Name of sheet which will contain DataFrame.
na_rep:str, default ‘’
Missing data representation.
float_format:str, optional
Format string for floating point numbers. For example float_format="%.2f" will format 0.1234 to 0.12.
columns:sequence or list of str, optional
Columns to write.
header:bool or list of str, default True
Write out the column names. If a list of string is given it is assumed to be aliases for the column names.
index:bool, default True
Write row names (index).
index_label:str or sequence, optional
Column label for index column(s) if desired. If not specified, and header and index are True, then the index names are used. A sequence should be given if the DataFrame uses MultiIndex.
startrow:int, default 0
Upper left cell row to dump data frame.
startcol:int, default 0
Upper left cell column to dump data frame.
engine:str, optional
Write engine to use, ‘openpyxl’ or ‘xlsxwriter’. You can also set this via the options io.excel.xlsx.writer, io.excel.xls.writer, and io.excel.xlsm.writer. Deprecated since version 1.2.0: As the xlwt package is no longer maintained, the xlwt engine will be removed in a future version of pandas.
merge_cells:bool, default True
Write MultiIndex and Hierarchical Rows as merged cells.
encoding:str, optional
Encoding of the resulting excel file. Only necessary for xlwt, other writers support unicode natively.
inf_rep:str, default ‘inf’
Representation for infinity (there is no native representation for infinity in Excel).
verbose:bool, default True
Display more information in the error logs.
freeze_panes:tuple of int (length 2), optional
Specifies the one-based bottommost row and rightmost column that is to be frozen.
storage_options:dict, optional
Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc. For HTTP(S) URLs the key-value pairs are forwarded to urllib as header options. For other URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are forwarded to fsspec. Please see fsspec and urllib for more details. New in version 1.2.0. See also to_csv
Write DataFrame to a comma-separated values (csv) file. ExcelWriter
Class for writing DataFrame objects into excel sheets. read_excel
Read an Excel file into a pandas DataFrame. read_csv
Read a comma-separated values (csv) file into DataFrame. Notes For compatibility with to_csv(), to_excel serializes lists and dicts to strings before writing. Once a workbook has been saved it is not possible to write further data without rewriting the whole workbook. Examples Create, write to and save a workbook:
>>> df1 = pd.DataFrame([['a', 'b'], ['c', 'd']],
... index=['row 1', 'row 2'],
... columns=['col 1', 'col 2'])
>>> df1.to_excel("output.xlsx")
To specify the sheet name:
>>> df1.to_excel("output.xlsx",
... sheet_name='Sheet_name_1')
If you wish to write to more than one sheet in the workbook, it is necessary to specify an ExcelWriter object:
>>> df2 = df1.copy()
>>> with pd.ExcelWriter('output.xlsx') as writer:
... df1.to_excel(writer, sheet_name='Sheet_name_1')
... df2.to_excel(writer, sheet_name='Sheet_name_2')
ExcelWriter can also be used to append to an existing Excel file:
>>> with pd.ExcelWriter('output.xlsx',
... mode='a') as writer:
... df.to_excel(writer, sheet_name='Sheet_name_3')
To set the library that is used to write the Excel file, you can pass the engine keyword (the default engine is automatically chosen depending on the file extension):
>>> df1.to_excel('output1.xlsx', engine='xlsxwriter') | pandas.reference.api.pandas.io.formats.style.styler.to_excel |
pandas.io.formats.style.Styler.to_html Styler.to_html(buf=None, *, table_uuid=None, table_attributes=None, sparse_index=None, sparse_columns=None, bold_headers=False, caption=None, max_rows=None, max_columns=None, encoding=None, doctype_html=False, exclude_styles=False, **kwargs)[source]
Write Styler to a file, buffer or string in HTML-CSS format. New in version 1.3.0. Parameters
buf:str, path object, file-like object, or None, default None
String, path object (implementing os.PathLike[str]), or file-like object implementing a string write() function. If None, the result is returned as a string.
table_uuid:str, optional
Id attribute assigned to the <table> HTML element in the format: <table id="T_<table_uuid>" ..> If not given uses Styler’s initially assigned value.
table_attributes:str, optional
Attributes to assign within the <table> HTML element in the format: <table .. <table_attributes> > If not given defaults to Styler’s preexisting value.
sparse_index:bool, optional
Whether to sparsify the display of a hierarchical index. Setting to False will display each explicit level element in a hierarchical key for each row. Defaults to pandas.options.styler.sparse.index value. New in version 1.4.0.
sparse_columns:bool, optional
Whether to sparsify the display of a hierarchical index. Setting to False will display each explicit level element in a hierarchical key for each column. Defaults to pandas.options.styler.sparse.columns value. New in version 1.4.0.
bold_headers:bool, optional
Adds “font-weight: bold;” as a CSS property to table style header cells. New in version 1.4.0.
caption:str, optional
Set, or overwrite, the caption on Styler before rendering. New in version 1.4.0.
max_rows:int, optional
The maximum number of rows that will be rendered. Defaults to pandas.options.styler.render.max_rows/max_columns. New in version 1.4.0.
max_columns:int, optional
The maximum number of columns that will be rendered. Defaults to pandas.options.styler.render.max_columns, which is None. Rows and columns may be reduced if the number of total elements is large. This value is set to pandas.options.styler.render.max_elements, which is 262144 (18 bit browser rendering). New in version 1.4.0.
encoding:str, optional
Character encoding setting for file output, and HTML meta tags. Defaults to pandas.options.styler.render.encoding value of “utf-8”.
doctype_html:bool, default False
Whether to output a fully structured HTML file including all HTML elements, or just the core <style> and <table> elements.
exclude_styles:bool, default False
Whether to include the <style> element and all associated element class and id identifiers, or solely the <table> element without styling identifiers. **kwargs
Any additional keyword arguments are passed through to the jinja2 self.template.render process. This is useful when you need to provide additional variables for a custom template. Returns
str or None
If buf is None, returns the result as a string. Otherwise returns None. See also DataFrame.to_html
Write a DataFrame to a file, buffer or string in HTML format. | pandas.reference.api.pandas.io.formats.style.styler.to_html |
pandas.io.formats.style.Styler.to_latex Styler.to_latex(buf=None, *, column_format=None, position=None, position_float=None, hrules=None, clines=None, label=None, caption=None, sparse_index=None, sparse_columns=None, multirow_align=None, multicol_align=None, siunitx=False, environment=None, encoding=None, convert_css=False)[source]
Write Styler to a file, buffer or string in LaTeX format. New in version 1.3.0. Parameters
buf:str, path object, file-like object, or None, default None
String, path object (implementing os.PathLike[str]), or file-like object implementing a string write() function. If None, the result is returned as a string.
column_format:str, optional
The LaTeX column specification placed in location: \begin{tabular}{<column_format>} Defaults to ‘l’ for index and non-numeric data columns, and to ‘r’ for numeric data columns (‘S’ if siunitx is True).
position:str, optional
The LaTeX positional argument (e.g. ‘h!’) for tables, placed in location: \begin{table}[<position>].
position_float:{“centering”, “raggedleft”, “raggedright”}, optional
The LaTeX float command placed in location: \begin{table}[<position>] \<position_float> Cannot be used if environment is “longtable”.
hrules:bool
Set to True to add \toprule, \midrule and \bottomrule from the {booktabs} LaTeX package. Defaults to pandas.options.styler.latex.hrules, which is False. Changed in version 1.4.0.
clines:str, optional
Use to control adding \cline commands for the index labels separation. Possible values are:
None: no cline commands are added (default). “all;data”: a cline is added for every index value extending the width of the table, including data entries. “all;index”: as above with lines extending only the width of the index entries. “skip-last;data”: a cline is added for each index value except the last level (which is never sparsified), extending the width of the table. “skip-last;index”: as above with lines extending only the width of the index entries.
New in version 1.4.0.
label:str, optional
The LaTeX label included as: \label{<label>}. This is used with \ref{<label>} in the main .tex file.
caption:str, tuple, optional
If string, the LaTeX table caption included as: \caption{<caption>}. If tuple, i.e (“full caption”, “short caption”), the caption included as: \caption[<caption[1]>]{<caption[0]>}.
sparse_index:bool, optional
Whether to sparsify the display of a hierarchical index. Setting to False will display each explicit level element in a hierarchical key for each row. Defaults to pandas.options.styler.sparse.index, which is True.
sparse_columns:bool, optional
Whether to sparsify the display of a hierarchical index. Setting to False will display each explicit level element in a hierarchical key for each column. Defaults to pandas.options.styler.sparse.columns, which is True.
multirow_align:{“c”, “t”, “b”, “naive”}, optional
If sparsifying hierarchical MultiIndexes whether to align text centrally, at the top or bottom using the multirow package. If not given defaults to pandas.options.styler.latex.multirow_align, which is “c”. If “naive” is given renders without multirow. Changed in version 1.4.0.
multicol_align:{“r”, “c”, “l”, “naive-l”, “naive-r”}, optional
If sparsifying hierarchical MultiIndex columns whether to align text at the left, centrally, or at the right. If not given defaults to pandas.options.styler.latex.multicol_align, which is “r”. If a naive option is given renders without multicol. Pipe decorators can also be added to non-naive values to draw vertical rules, e.g. “|r” will draw a rule on the left side of right aligned merged cells. Changed in version 1.4.0.
siunitx:bool, default False
Set to True to structure LaTeX compatible with the {siunitx} package.
environment:str, optional
If given, the environment that will replace ‘table’ in \begin{table}. If ‘longtable’ is specified then a more suitable template is rendered. If not given defaults to pandas.options.styler.latex.environment, which is None. New in version 1.4.0.
encoding:str, optional
Character encoding setting. Defaults to pandas.options.styler.render.encoding, which is “utf-8”.
convert_css:bool, default False
Convert simple cell-styles from CSS to LaTeX format. Any CSS not found in conversion table is dropped. A style can be forced by adding option --latex. See notes. Returns
str or None
If buf is None, returns the result as a string. Otherwise returns None. See also Styler.format
Format the text display value of cells. Notes Latex Packages For the following features we recommend the following LaTeX inclusions:
Feature Inclusion
sparse columns none: included within default {tabular} environment
sparse rows \usepackage{multirow}
hrules \usepackage{booktabs}
colors \usepackage[table]{xcolor}
siunitx \usepackage{siunitx}
bold (with siunitx)
\usepackage{etoolbox} \robustify\bfseries \sisetup{detect-all = true} (within {document})
italic (with siunitx)
\usepackage{etoolbox} \robustify\itshape \sisetup{detect-all = true} (within {document})
environment \usepackage{longtable} if arg is “longtable”, or any other relevant environment package
hyperlinks \usepackage{hyperref} Cell Styles LaTeX styling can only be rendered if the accompanying styling functions have been constructed with appropriate LaTeX commands. All styling functionality is built around the concept of a CSS (<attribute>, <value>) pair (see Table Visualization), and this should be replaced by a LaTeX (<command>, <options>) approach. Each cell will be styled individually using nested LaTeX commands with their accompanied options. For example the following code will highlight and bold a cell in HTML-CSS:
>>> df = pd.DataFrame([[1,2], [3,4]])
>>> s = df.style.highlight_max(axis=None,
... props='background-color:red; font-weight:bold;')
>>> s.to_html()
The equivalent using LaTeX only commands is the following:
>>> s = df.style.highlight_max(axis=None,
... props='cellcolor:{red}; bfseries: ;')
>>> s.to_latex()
Internally these structured LaTeX (<command>, <options>) pairs are translated to the display_value with the default structure: \<command><options> <display_value>. Where there are multiple commands the latter is nested recursively, so that the highlighted cell in the example above is rendered as \cellcolor{red} \bfseries 4. Occasionally this format does not suit the applied command, or combination of LaTeX packages that is in use, so additional flags can be added to the <options>, within the tuple, to result in different positions of required braces (the default being the same as --nowrap):
Tuple Format Output Structure
(<command>,<options>) \<command><options> <display_value>
(<command>,<options> --nowrap) \<command><options> <display_value>
(<command>,<options> --rwrap) \<command><options>{<display_value>}
(<command>,<options> --wrap) {\<command><options> <display_value>}
(<command>,<options> --lwrap) {\<command><options>} <display_value>
(<command>,<options> --dwrap) {\<command><options>}{<display_value>} For example the textbf command for font-weight should always be used with --rwrap so ('textbf', '--rwrap') will render a working cell, wrapped with braces, as \textbf{<display_value>}. A more comprehensive example is as follows:
>>> df = pd.DataFrame([[1, 2.2, "dogs"], [3, 4.4, "cats"], [2, 6.6, "cows"]],
... index=["ix1", "ix2", "ix3"],
... columns=["Integers", "Floats", "Strings"])
>>> s = df.style.highlight_max(
... props='cellcolor:[HTML]{FFFF00}; color:{red};'
... 'textit:--rwrap; textbf:--rwrap;'
... )
>>> s.to_latex()
Table Styles Internally Styler uses its table_styles object to parse the column_format, position, position_float, and label input arguments. These arguments are added to table styles in the format:
set_table_styles([
{"selector": "column_format", "props": f":{column_format};"},
{"selector": "position", "props": f":{position};"},
{"selector": "position_float", "props": f":{position_float};"},
{"selector": "label", "props": f":{{{label.replace(':','§')}}};"}
], overwrite=False)
Exception is made for the hrules argument which, in fact, controls all three commands: toprule, bottomrule and midrule simultaneously. Instead of setting hrules to True, it is also possible to set each individual rule definition, by manually setting the table_styles, for example below we set a regular toprule, set an hline for bottomrule and exclude the midrule:
set_table_styles([
{'selector': 'toprule', 'props': ':toprule;'},
{'selector': 'bottomrule', 'props': ':hline;'},
], overwrite=False)
If other commands are added to table styles they will be detected, and positioned immediately above the ‘\begin{tabular}’ command. For example to add odd and even row coloring, from the {colortbl} package, in format \rowcolors{1}{pink}{red}, use:
set_table_styles([
{'selector': 'rowcolors', 'props': ':{1}{pink}{red};'}
], overwrite=False)
A more comprehensive example using these arguments is as follows:
>>> df.columns = pd.MultiIndex.from_tuples([
... ("Numeric", "Integers"),
... ("Numeric", "Floats"),
... ("Non-Numeric", "Strings")
... ])
>>> df.index = pd.MultiIndex.from_tuples([
... ("L0", "ix1"), ("L0", "ix2"), ("L1", "ix3")
... ])
>>> s = df.style.highlight_max(
... props='cellcolor:[HTML]{FFFF00}; color:{red}; itshape:; bfseries:;'
... )
>>> s.to_latex(
... column_format="rrrrr", position="h", position_float="centering",
... hrules=True, label="table:5", caption="Styled LaTeX Table",
... multirow_align="t", multicol_align="r"
... )
Formatting To format values Styler.format() should be used prior to calling Styler.to_latex, as well as other methods such as Styler.hide() for example:
>>> s.clear()
>>> s.table_styles = []
>>> s.caption = None
>>> s.format({
... ("Numeric", "Integers"): '\${}',
... ("Numeric", "Floats"): '{:.3f}',
... ("Non-Numeric", "Strings"): str.upper
... })
Numeric Non-Numeric
Integers Floats Strings
L0 ix1 $1 2.200 DOGS
ix2 $3 4.400 CATS
L1 ix3 $2 6.600 COWS
>>> s.to_latex()
\begin{tabular}{llrrl}
{} & {} & \multicolumn{2}{r}{Numeric} & {Non-Numeric} \\
{} & {} & {Integers} & {Floats} & {Strings} \\
\multirow[c]{2}{*}{L0} & ix1 & \$1 & 2.200 & DOGS \\
& ix2 & \$3 & 4.400 & CATS \\
L1 & ix3 & \$2 & 6.600 & COWS \\
\end{tabular}
CSS Conversion This method can convert a Styler constructed with HTML-CSS to LaTeX using the following limited conversions.
CSS Attribute CSS value LaTeX Command LaTeX Options
font-weight
bold bolder
bfseries bfseries
font-style
italic oblique
itshape slshape
background-color
red #fe01ea #f0e rgb(128,255,0) rgba(128,0,0,0.5) rgb(25%,255,50%) cellcolor
{red}--lwrap [HTML]{FE01EA}--lwrap [HTML]{FF00EE}--lwrap [rgb]{0.5,1,0}--lwrap [rgb]{0.5,0,0}--lwrap [rgb]{0.25,1,0.5}--lwrap
color
red #fe01ea #f0e rgb(128,255,0) rgba(128,0,0,0.5) rgb(25%,255,50%) color
{red} [HTML]{FE01EA} [HTML]{FF00EE} [rgb]{0.5,1,0} [rgb]{0.5,0,0} [rgb]{0.25,1,0.5} It is also possible to add user-defined LaTeX only styles to a HTML-CSS Styler using the --latex flag, and to add LaTeX parsing options that the converter will detect within a CSS-comment.
>>> df = pd.DataFrame([[1]])
>>> df.style.set_properties(
... **{"font-weight": "bold /* --dwrap */", "Huge": "--latex--rwrap"}
... ).to_latex(convert_css=True)
\begin{tabular}{lr}
{} & {0} \\
0 & {\bfseries}{\Huge{1}} \\
\end{tabular}
Examples Below we give a complete step by step example adding some advanced features and noting some common gotchas. First we create the DataFrame and Styler as usual, including MultiIndex rows and columns, which allow for more advanced formatting options:
>>> cidx = pd.MultiIndex.from_arrays([
... ["Equity", "Equity", "Equity", "Equity",
... "Stats", "Stats", "Stats", "Stats", "Rating"],
... ["Energy", "Energy", "Consumer", "Consumer", "", "", "", "", ""],
... ["BP", "Shell", "H&M", "Unilever",
... "Std Dev", "Variance", "52w High", "52w Low", ""]
... ])
>>> iidx = pd.MultiIndex.from_arrays([
... ["Equity", "Equity", "Equity", "Equity"],
... ["Energy", "Energy", "Consumer", "Consumer"],
... ["BP", "Shell", "H&M", "Unilever"]
... ])
>>> styler = pd.DataFrame([
... [1, 0.8, 0.66, 0.72, 32.1678, 32.1678**2, 335.12, 240.89, "Buy"],
... [0.8, 1.0, 0.69, 0.79, 1.876, 1.876**2, 14.12, 19.78, "Hold"],
... [0.66, 0.69, 1.0, 0.86, 7, 7**2, 210.9, 140.6, "Buy"],
... [0.72, 0.79, 0.86, 1.0, 213.76, 213.76**2, 2807, 3678, "Sell"],
... ], columns=cidx, index=iidx).style
Second we will format the display and, since our table is quite wide, will hide the repeated level-0 of the index:
>>> styler.format(subset="Equity", precision=2)
... .format(subset="Stats", precision=1, thousands=",")
... .format(subset="Rating", formatter=str.upper)
... .format_index(escape="latex", axis=1)
... .format_index(escape="latex", axis=0)
... .hide(level=0, axis=0)
Note that one of the string entries of the index and column headers is “H&M”. Without applying the escape="latex" option to the format_index method the resultant LaTeX will fail to render, and the error returned is quite difficult to debug. Using the appropriate escape the “&” is converted to “\&”. Thirdly we will apply some (CSS-HTML) styles to our object. We will use a builtin method and also define our own method to highlight the stock recommendation:
>>> def rating_color(v):
... if v == "Buy": color = "#33ff85"
... elif v == "Sell": color = "#ff5933"
... else: color = "#ffdd33"
... return f"color: {color}; font-weight: bold;"
>>> styler.background_gradient(cmap="inferno", subset="Equity", vmin=0, vmax=1)
... .applymap(rating_color, subset="Rating")
All the above styles will work with HTML (see below) and LaTeX upon conversion. However, we finally want to add one LaTeX only style (from the {graphicx} package), that is not easy to convert from CSS and pandas does not support it. Notice the --latex flag used here, as well as --rwrap to ensure this is formatted correctly and not ignored upon conversion.
>>> styler.applymap_index(
... lambda v: "rotatebox:{45}--rwrap--latex;", level=2, axis=1
... )
Finally we render our LaTeX adding in other options as required:
>>> styler.to_latex(
... caption="Selected stock correlation and simple statistics.",
... clines="skip-last;data",
... convert_css=True,
... position_float="centering",
... multicol_align="|c|",
... hrules=True,
... )
\begin{table}
\centering
\caption{Selected stock correlation and simple statistics.}
\begin{tabular}{llrrrrrrrrl}
\toprule
& & \multicolumn{4}{|c|}{Equity} & \multicolumn{4}{|c|}{Stats} & Rating \\
& & \multicolumn{2}{|c|}{Energy} & \multicolumn{2}{|c|}{Consumer} &
\multicolumn{4}{|c|}{} & \\
& & \rotatebox{45}{BP} & \rotatebox{45}{Shell} & \rotatebox{45}{H\&M} &
\rotatebox{45}{Unilever} & \rotatebox{45}{Std Dev} & \rotatebox{45}{Variance} &
\rotatebox{45}{52w High} & \rotatebox{45}{52w Low} & \rotatebox{45}{} \\
\midrule
\multirow[c]{2}{*}{Energy} & BP & {\cellcolor[HTML]{FCFFA4}}
\color[HTML]{000000} 1.00 & {\cellcolor[HTML]{FCA50A}} \color[HTML]{000000}
0.80 & {\cellcolor[HTML]{EB6628}} \color[HTML]{F1F1F1} 0.66 &
{\cellcolor[HTML]{F68013}} \color[HTML]{F1F1F1} 0.72 & 32.2 & 1,034.8 & 335.1
& 240.9 & \color[HTML]{33FF85} \bfseries BUY \\
& Shell & {\cellcolor[HTML]{FCA50A}} \color[HTML]{000000} 0.80 &
{\cellcolor[HTML]{FCFFA4}} \color[HTML]{000000} 1.00 &
{\cellcolor[HTML]{F1731D}} \color[HTML]{F1F1F1} 0.69 &
{\cellcolor[HTML]{FCA108}} \color[HTML]{000000} 0.79 & 1.9 & 3.5 & 14.1 &
19.8 & \color[HTML]{FFDD33} \bfseries HOLD \\
\cline{1-11}
\multirow[c]{2}{*}{Consumer} & H\&M & {\cellcolor[HTML]{EB6628}}
\color[HTML]{F1F1F1} 0.66 & {\cellcolor[HTML]{F1731D}} \color[HTML]{F1F1F1}
0.69 & {\cellcolor[HTML]{FCFFA4}} \color[HTML]{000000} 1.00 &
{\cellcolor[HTML]{FAC42A}} \color[HTML]{000000} 0.86 & 7.0 & 49.0 & 210.9 &
140.6 & \color[HTML]{33FF85} \bfseries BUY \\
& Unilever & {\cellcolor[HTML]{F68013}} \color[HTML]{F1F1F1} 0.72 &
{\cellcolor[HTML]{FCA108}} \color[HTML]{000000} 0.79 &
{\cellcolor[HTML]{FAC42A}} \color[HTML]{000000} 0.86 &
{\cellcolor[HTML]{FCFFA4}} \color[HTML]{000000} 1.00 & 213.8 & 45,693.3 &
2,807.0 & 3,678.0 & \color[HTML]{FF5933} \bfseries SELL \\
\cline{1-11}
\bottomrule
\end{tabular}
\end{table} | pandas.reference.api.pandas.io.formats.style.styler.to_latex |
pandas.io.formats.style.Styler.use Styler.use(styles)[source]
Set the styles on the current Styler. Possibly uses styles from Styler.export. Parameters
styles:dict(str, Any)
List of attributes to add to Styler. Dict keys should contain only:
“apply”: list of styler functions, typically added with apply or applymap. “table_attributes”: HTML attributes, typically added with set_table_attributes. “table_styles”: CSS selectors and properties, typically added with set_table_styles. “hide_index”: whether the index is hidden, typically added with hide_index, or a boolean list for hidden levels. “hide_columns”: whether column headers are hidden, typically added with hide_columns, or a boolean list for hidden levels. “hide_index_names”: whether index names are hidden. “hide_column_names”: whether column header names are hidden. “css”: the css class names used. Returns
self:Styler
See also Styler.export
Export the non data dependent attributes to the current Styler. Examples
>>> styler = pd.DataFrame([[1, 2], [3, 4]]).style
>>> styler2 = pd.DataFrame([[9, 9, 9]]).style
>>> styler.hide(axis=0).highlight_max(axis=1)
>>> export = styler.export()
>>> styler2.use(export) | pandas.reference.api.pandas.io.formats.style.styler.use |
pandas.io.formats.style.Styler.where Styler.where(cond, value, other=None, subset=None, **kwargs)[source]
Apply CSS-styles based on a conditional function elementwise. Deprecated since version 1.3.0. Updates the HTML representation with a style which is selected in accordance with the return value of a function. Parameters
cond:callable
cond should take a scalar, and optional keyword arguments, and return a boolean.
value:str
Applied when cond returns true.
other:str
Applied when cond returns false.
subset:label, array-like, IndexSlice, optional
A valid 2d input to DataFrame.loc[<subset>], or, in the case of a 1d input or single key, to DataFrame.loc[:, <subset>] where the columns are prioritised, to limit data to before applying the function.
**kwargs:dict
Pass along to cond. Returns
self:Styler
See also Styler.applymap
Apply a CSS-styling function elementwise. Styler.apply
Apply a CSS-styling function column-wise, row-wise, or table-wise. Notes This method is deprecated. This method is a convenience wrapper for Styler.applymap(), which we recommend using instead. The example:
>>> df = pd.DataFrame([[1, 2], [3, 4]])
>>> def cond(v, limit=4):
... return v > 1 and v != limit
>>> df.style.where(cond, value='color:green;', other='color:red;')
...
should be refactored to:
>>> def style_func(v, value, other, limit=4):
... cond = v > 1 and v != limit
... return value if cond else other
>>> df.style.applymap(style_func, value='color:green;', other='color:red;')
... | pandas.reference.api.pandas.io.formats.style.styler.where |
pandas.io.json.build_table_schema pandas.io.json.build_table_schema(data, index=True, primary_key=None, version=True)[source]
Create a Table schema from data. Parameters
data:Series, DataFrame
index:bool, default True
Whether to include data.index in the schema.
primary_key:bool or None, default None
Column names to designate as the primary key. The default None will set ‘primaryKey’ to the index level or levels if the index is unique.
version:bool, default True
Whether to include a field pandas_version with the version of pandas that last revised the table schema. This version can be different from the installed pandas version. Returns
schema:dict
Notes See Table Schema for conversion types. Timedeltas are converted to ISO8601 duration format with 9 decimal places after the seconds field for nanosecond precision. Categoricals are converted to the any dtype, and use the enum field constraint to list the allowed values. The ordered attribute is included in an ordered field. Examples
>>> df = pd.DataFrame(
... {'A': [1, 2, 3],
... 'B': ['a', 'b', 'c'],
... 'C': pd.date_range('2016-01-01', freq='d', periods=3),
... }, index=pd.Index(range(3), name='idx'))
>>> build_table_schema(df)
{'fields': [{'name': 'idx', 'type': 'integer'}, {'name': 'A', 'type': 'integer'}, {'name': 'B', 'type': 'string'}, {'name': 'C', 'type': 'datetime'}], 'primaryKey': ['idx'], 'pandas_version': '1.4.0'} | pandas.reference.api.pandas.io.json.build_table_schema |
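As a supplement to the example above, the sketch below shows how index=False drops the primaryKey entry, and how a categorical column is encoded as type any with an enum constraint, as described in the Notes:

```python
import pandas as pd
from pandas.io.json import build_table_schema

df = pd.DataFrame({
    "A": [1, 2, 3],
    "cat": pd.Categorical(["a", "b", "a"]),
})

# Excluding the index: no index field and no 'primaryKey' entry in the schema.
schema = build_table_schema(df, index=False, version=False)
names = [field["name"] for field in schema["fields"]]

# Categoricals become type 'any' with an enum constraint listing the categories.
cat_field = next(f for f in schema["fields"] if f["name"] == "cat")
```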
pandas.io.stata.StataReader.data_label propertyStataReader.data_label
Return data label of Stata file. | pandas.reference.api.pandas.io.stata.statareader.data_label |
pandas.io.stata.StataReader.value_labels StataReader.value_labels()[source]
Return a dict associating each variable name with a dict that maps each value to its corresponding label. Returns
dict | pandas.reference.api.pandas.io.stata.statareader.value_labels |
pandas.io.stata.StataReader.variable_labels StataReader.variable_labels()[source]
Return variable labels as a dict, associating each variable name with its corresponding label. Returns
dict | pandas.reference.api.pandas.io.stata.statareader.variable_labels |
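A hedged round-trip sketch (an editorial addition, not from the original doc): write a .dta file with `DataFrame.to_stata` including a variable label, then read the labels back through `StataReader.variable_labels`.

```python
import os
import tempfile

import pandas as pd
from pandas.io.stata import StataReader

# Sketch: round-trip a variable label through a temporary Stata file.
df = pd.DataFrame({"x": [1.0, 2.0]})
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "demo.dta")
    df.to_stata(path, write_index=False,
                variable_labels={"x": "demo variable"})
    with StataReader(path) as reader:
        labels = reader.variable_labels()
print(labels)
```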
pandas.io.stata.StataWriter.write_file StataWriter.write_file()[source]
Export DataFrame object to Stata dta format. | pandas.reference.api.pandas.io.stata.statawriter.write_file |
pandas.isna pandas.isna(obj)[source]
Detect missing values for an array-like object. This function takes a scalar or array-like object and indicates whether values are missing (NaN in numeric arrays, None or NaN in object arrays, NaT in datetimelike). Parameters
obj:scalar or array-like
Object to check for null or missing values. Returns
bool or array-like of bool
For scalar input, returns a scalar boolean. For array input, returns an array of boolean indicating whether each corresponding element is missing. See also notna
Boolean inverse of pandas.isna. Series.isna
Detect missing values in a Series. DataFrame.isna
Detect missing values in a DataFrame. Index.isna
Detect missing values in an Index. Examples Scalar arguments (including strings) result in a scalar boolean.
>>> pd.isna('dog')
False
>>> pd.isna(pd.NA)
True
>>> pd.isna(np.nan)
True
ndarrays result in an ndarray of booleans.
>>> array = np.array([[1, np.nan, 3], [4, 5, np.nan]])
>>> array
array([[ 1., nan, 3.],
[ 4., 5., nan]])
>>> pd.isna(array)
array([[False, True, False],
[False, False, True]])
For indexes, an ndarray of booleans is returned.
>>> index = pd.DatetimeIndex(["2017-07-05", "2017-07-06", None,
... "2017-07-08"])
>>> index
DatetimeIndex(['2017-07-05', '2017-07-06', 'NaT', '2017-07-08'],
dtype='datetime64[ns]', freq=None)
>>> pd.isna(index)
array([False, False, True, False])
For Series and DataFrame, the same type is returned, containing booleans.
>>> df = pd.DataFrame([['ant', 'bee', 'cat'], ['dog', None, 'fly']])
>>> df
0 1 2
0 ant bee cat
1 dog None fly
>>> pd.isna(df)
0 1 2
0 False False False
1 False True False
>>> pd.isna(df[1])
0 False
1 True
Name: 1, dtype: bool | pandas.reference.api.pandas.isna |
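As a small editorial sketch of the See also entry: pd.notna is the boolean inverse of pd.isna, element for element.

```python
import numpy as np
import pandas as pd

# Sketch: notna flips every element of the isna mask.
ser = pd.Series([1.0, np.nan, 3.0])
mask = pd.isna(ser)
print(mask.tolist())
print(bool((pd.notna(ser) == ~mask).all()))
```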
pandas.isnull pandas.isnull(obj)[source]
Detect missing values for an array-like object. This function takes a scalar or array-like object and indicates whether values are missing (NaN in numeric arrays, None or NaN in object arrays, NaT in datetimelike). Parameters
obj:scalar or array-like
Object to check for null or missing values. Returns
bool or array-like of bool
For scalar input, returns a scalar boolean. For array input, returns an array of boolean indicating whether each corresponding element is missing. See also notna
Boolean inverse of pandas.isna. Series.isna
Detect missing values in a Series. DataFrame.isna
Detect missing values in a DataFrame. Index.isna
Detect missing values in an Index. Examples Scalar arguments (including strings) result in a scalar boolean.
>>> pd.isna('dog')
False
>>> pd.isna(pd.NA)
True
>>> pd.isna(np.nan)
True
ndarrays result in an ndarray of booleans.
>>> array = np.array([[1, np.nan, 3], [4, 5, np.nan]])
>>> array
array([[ 1., nan, 3.],
[ 4., 5., nan]])
>>> pd.isna(array)
array([[False, True, False],
[False, False, True]])
For indexes, an ndarray of booleans is returned.
>>> index = pd.DatetimeIndex(["2017-07-05", "2017-07-06", None,
... "2017-07-08"])
>>> index
DatetimeIndex(['2017-07-05', '2017-07-06', 'NaT', '2017-07-08'],
dtype='datetime64[ns]', freq=None)
>>> pd.isna(index)
array([False, False, True, False])
For Series and DataFrame, the same type is returned, containing booleans.
>>> df = pd.DataFrame([['ant', 'bee', 'cat'], ['dog', None, 'fly']])
>>> df
0 1 2
0 ant bee cat
1 dog None fly
>>> pd.isna(df)
0 1 2
0 False False False
1 False True False
>>> pd.isna(df[1])
0 False
1 True
Name: 1, dtype: bool | pandas.reference.api.pandas.isnull |
pandas.json_normalize pandas.json_normalize(data, record_path=None, meta=None, meta_prefix=None, record_prefix=None, errors='raise', sep='.', max_level=None)[source]
Normalize semi-structured JSON data into a flat table. Parameters
data:dict or list of dicts
Unserialized JSON objects.
record_path:str or list of str, default None
Path in each object to list of records. If not passed, data will be assumed to be an array of records.
meta:list of paths (str or list of str), default None
Fields to use as metadata for each record in resulting table.
meta_prefix:str, default None
If not None, prefix the metadata column names with this string, e.g. meta.state if meta_prefix is ‘meta.’ and meta contains ‘state’.
record_prefix:str, default None
If not None, prefix the record column names with this string.
errors:{‘raise’, ‘ignore’}, default ‘raise’
Configures error handling. ‘ignore’ : will ignore KeyError if keys listed in meta are not always present. ‘raise’ : will raise KeyError if keys listed in meta are not always present.
sep:str, default ‘.’
Nested records will generate names separated by sep. e.g., for sep=’.’, {‘foo’: {‘bar’: 0}} -> foo.bar.
max_level:int, default None
Max number of levels (depth of dict) to normalize. If None, normalizes all levels. New in version 0.25.0. Returns
frame:DataFrame
The normalized data, represented as a flat table.
Examples
>>> data = [
... {"id": 1, "name": {"first": "Coleen", "last": "Volk"}},
... {"name": {"given": "Mark", "family": "Regner"}},
... {"id": 2, "name": "Faye Raker"},
... ]
>>> pd.json_normalize(data)
id name.first name.last name.given name.family name
0 1.0 Coleen Volk NaN NaN NaN
1 NaN NaN NaN Mark Regner NaN
2 2.0 NaN NaN NaN NaN Faye Raker
>>> data = [
... {
... "id": 1,
... "name": "Cole Volk",
... "fitness": {"height": 130, "weight": 60},
... },
... {"name": "Mark Reg", "fitness": {"height": 130, "weight": 60}},
... {
... "id": 2,
... "name": "Faye Raker",
... "fitness": {"height": 130, "weight": 60},
... },
... ]
>>> pd.json_normalize(data, max_level=0)
id name fitness
0 1.0 Cole Volk {'height': 130, 'weight': 60}
1 NaN Mark Reg {'height': 130, 'weight': 60}
2 2.0 Faye Raker {'height': 130, 'weight': 60}
Normalizes nested data up to level 1.
>>> data = [
... {
... "id": 1,
... "name": "Cole Volk",
... "fitness": {"height": 130, "weight": 60},
... },
... {"name": "Mark Reg", "fitness": {"height": 130, "weight": 60}},
... {
... "id": 2,
... "name": "Faye Raker",
... "fitness": {"height": 130, "weight": 60},
... },
... ]
>>> pd.json_normalize(data, max_level=1)
id name fitness.height fitness.weight
0 1.0 Cole Volk 130 60
1 NaN Mark Reg 130 60
2 2.0 Faye Raker 130 60
>>> data = [
... {
... "state": "Florida",
... "shortname": "FL",
... "info": {"governor": "Rick Scott"},
... "counties": [
... {"name": "Dade", "population": 12345},
... {"name": "Broward", "population": 40000},
... {"name": "Palm Beach", "population": 60000},
... ],
... },
... {
... "state": "Ohio",
... "shortname": "OH",
... "info": {"governor": "John Kasich"},
... "counties": [
... {"name": "Summit", "population": 1234},
... {"name": "Cuyahoga", "population": 1337},
... ],
... },
... ]
>>> result = pd.json_normalize(
... data, "counties", ["state", "shortname", ["info", "governor"]]
... )
>>> result
name population state shortname info.governor
0 Dade 12345 Florida FL Rick Scott
1 Broward 40000 Florida FL Rick Scott
2 Palm Beach 60000 Florida FL Rick Scott
3 Summit 1234 Ohio OH John Kasich
4 Cuyahoga 1337 Ohio OH John Kasich
>>> data = {"A": [1, 2]}
>>> pd.json_normalize(data, "A", record_prefix="Prefix.")
Prefix.0
0 1
1 2
Returns normalized data with columns prefixed with the given string. | pandas.reference.api.pandas.json_normalize |
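An editorial sketch of meta_prefix and sep, two parameters the examples above do not exercise. Metadata column names are built by joining the meta path with sep and then prepending meta_prefix; record columns are left untouched.

```python
import pandas as pd

# Sketch: meta columns become 'meta.state' and 'meta.info_governor'.
data = [
    {
        "state": "Florida",
        "info": {"governor": "Rick Scott"},
        "counties": [{"name": "Dade", "population": 12345}],
    }
]
result = pd.json_normalize(
    data,
    record_path="counties",
    meta=["state", ["info", "governor"]],
    meta_prefix="meta.",
    sep="_",
)
print(result.columns.tolist())
```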
pandas.melt pandas.melt(frame, id_vars=None, value_vars=None, var_name=None, value_name='value', col_level=None, ignore_index=True)[source]
Unpivot a DataFrame from wide to long format, optionally leaving identifiers set. This function is useful to massage a DataFrame into a format where one or more columns are identifier variables (id_vars), while all other columns, considered measured variables (value_vars), are “unpivoted” to the row axis, leaving just two non-identifier columns, ‘variable’ and ‘value’. Parameters
id_vars:tuple, list, or ndarray, optional
Column(s) to use as identifier variables.
value_vars:tuple, list, or ndarray, optional
Column(s) to unpivot. If not specified, uses all columns that are not set as id_vars.
var_name:scalar
Name to use for the ‘variable’ column. If None it uses frame.columns.name or ‘variable’.
value_name:scalar, default ‘value’
Name to use for the ‘value’ column.
col_level:int or str, optional
If columns are a MultiIndex then use this level to melt.
ignore_index:bool, default True
If True, original index is ignored. If False, the original index is retained. Index labels will be repeated as necessary. New in version 1.1.0. Returns
DataFrame
Unpivoted DataFrame. See also DataFrame.melt
Identical method. pivot_table
Create a spreadsheet-style pivot table as a DataFrame. DataFrame.pivot
Return reshaped DataFrame organized by given index / column values. DataFrame.explode
Explode a DataFrame from list-like columns to long format. Examples
>>> df = pd.DataFrame({'A': {0: 'a', 1: 'b', 2: 'c'},
... 'B': {0: 1, 1: 3, 2: 5},
... 'C': {0: 2, 1: 4, 2: 6}})
>>> df
A B C
0 a 1 2
1 b 3 4
2 c 5 6
>>> pd.melt(df, id_vars=['A'], value_vars=['B'])
A variable value
0 a B 1
1 b B 3
2 c B 5
>>> pd.melt(df, id_vars=['A'], value_vars=['B', 'C'])
A variable value
0 a B 1
1 b B 3
2 c B 5
3 a C 2
4 b C 4
5 c C 6
The names of ‘variable’ and ‘value’ columns can be customized:
>>> pd.melt(df, id_vars=['A'], value_vars=['B'],
... var_name='myVarname', value_name='myValname')
A myVarname myValname
0 a B 1
1 b B 3
2 c B 5
Original index values can be kept around:
>>> pd.melt(df, id_vars=['A'], value_vars=['B', 'C'], ignore_index=False)
A variable value
0 a B 1
1 b B 3
2 c B 5
0 a C 2
1 b C 4
2 c C 6
If you have multi-index columns:
>>> df.columns = [list('ABC'), list('DEF')]
>>> df
A B C
D E F
0 a 1 2
1 b 3 4
2 c 5 6
>>> pd.melt(df, col_level=0, id_vars=['A'], value_vars=['B'])
A variable value
0 a B 1
1 b B 3
2 c B 5
>>> pd.melt(df, id_vars=[('A', 'D')], value_vars=[('B', 'E')])
(A, D) variable_0 variable_1 value
0 a B E 1
1 b B E 3
2 c B E 5 | pandas.reference.api.pandas.melt |
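A short editorial sketch of the var_name fallback described above: when var_name is None, melt uses frame.columns.name for the variable column, falling back to ‘variable’ only when that is also None.

```python
import pandas as pd

# Sketch: the columns' name becomes the melted variable column's name.
df = pd.DataFrame({"A": ["a", "b"], "B": [1, 3], "C": [2, 4]})
df.columns.name = "measurement"
melted = pd.melt(df, id_vars=["A"])
print(melted.columns.tolist())
```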
pandas.merge pandas.merge(left, right, how='inner', on=None, left_on=None, right_on=None, left_index=False, right_index=False, sort=False, suffixes=('_x', '_y'), copy=True, indicator=False, validate=None)[source]
Merge DataFrame or named Series objects with a database-style join. A named Series object is treated as a DataFrame with a single named column. The join is done on columns or indexes. If joining columns on columns, the DataFrame indexes will be ignored. Otherwise if joining indexes on indexes or indexes on a column or columns, the index will be passed on. When performing a cross merge, no column specifications to merge on are allowed. Warning If both key columns contain rows where the key is a null value, those rows will be matched against each other. This is different from usual SQL join behaviour and can lead to unexpected results. Parameters
left:DataFrame
right:DataFrame or named Series
Object to merge with.
how:{‘left’, ‘right’, ‘outer’, ‘inner’, ‘cross’}, default ‘inner’
Type of merge to be performed. left: use only keys from left frame, similar to a SQL left outer join; preserve key order. right: use only keys from right frame, similar to a SQL right outer join; preserve key order. outer: use union of keys from both frames, similar to a SQL full outer join; sort keys lexicographically. inner: use intersection of keys from both frames, similar to a SQL inner join; preserve the order of the left keys.
cross: creates the cartesian product from both frames, preserves the order of the left keys. New in version 1.2.0.
on:label or list
Column or index level names to join on. These must be found in both DataFrames. If on is None and not merging on indexes then this defaults to the intersection of the columns in both DataFrames.
left_on:label or list, or array-like
Column or index level names to join on in the left DataFrame. Can also be an array or list of arrays of the length of the left DataFrame. These arrays are treated as if they are columns.
right_on:label or list, or array-like
Column or index level names to join on in the right DataFrame. Can also be an array or list of arrays of the length of the right DataFrame. These arrays are treated as if they are columns.
left_index:bool, default False
Use the index from the left DataFrame as the join key(s). If it is a MultiIndex, the number of keys in the other DataFrame (either the index or a number of columns) must match the number of levels.
right_index:bool, default False
Use the index from the right DataFrame as the join key. Same caveats as left_index.
sort:bool, default False
Sort the join keys lexicographically in the result DataFrame. If False, the order of the join keys depends on the join type (how keyword).
suffixes:list-like, default is (“_x”, “_y”)
A length-2 sequence where each element is optionally a string indicating the suffix to add to overlapping column names in left and right respectively. Pass a value of None instead of a string to indicate that the column name from left or right should be left as-is, with no suffix. At least one of the values must not be None.
copy:bool, default True
If False, avoid copy if possible.
indicator:bool or str, default False
If True, adds a column to the output DataFrame called “_merge” with information on the source of each row. The column can be given a different name by providing a string argument. The column will have a Categorical type with the value of “left_only” for observations whose merge key only appears in the left DataFrame, “right_only” for observations whose merge key only appears in the right DataFrame, and “both” if the observation’s merge key is found in both DataFrames.
validate:str, optional
If specified, checks if merge is of specified type. “one_to_one” or “1:1”: check if merge keys are unique in both left and right datasets. “one_to_many” or “1:m”: check if merge keys are unique in left dataset. “many_to_one” or “m:1”: check if merge keys are unique in right dataset. “many_to_many” or “m:m”: allowed, but does not result in checks. Returns
DataFrame
A DataFrame of the two merged objects. See also merge_ordered
Merge with optional filling/interpolation. merge_asof
Merge on nearest keys. DataFrame.join
Similar method using indices. Notes Support for specifying index levels as the on, left_on, and right_on parameters was added in version 0.23.0. Support for merging named Series objects was added in version 0.24.0. Examples
>>> df1 = pd.DataFrame({'lkey': ['foo', 'bar', 'baz', 'foo'],
... 'value': [1, 2, 3, 5]})
>>> df2 = pd.DataFrame({'rkey': ['foo', 'bar', 'baz', 'foo'],
... 'value': [5, 6, 7, 8]})
>>> df1
lkey value
0 foo 1
1 bar 2
2 baz 3
3 foo 5
>>> df2
rkey value
0 foo 5
1 bar 6
2 baz 7
3 foo 8
Merge df1 and df2 on the lkey and rkey columns. The value columns have the default suffixes, _x and _y, appended.
>>> df1.merge(df2, left_on='lkey', right_on='rkey')
lkey value_x rkey value_y
0 foo 1 foo 5
1 foo 1 foo 8
2 foo 5 foo 5
3 foo 5 foo 8
4 bar 2 bar 6
5 baz 3 baz 7
Merge DataFrames df1 and df2 with specified left and right suffixes appended to any overlapping columns.
>>> df1.merge(df2, left_on='lkey', right_on='rkey',
... suffixes=('_left', '_right'))
lkey value_left rkey value_right
0 foo 1 foo 5
1 foo 1 foo 8
2 foo 5 foo 5
3 foo 5 foo 8
4 bar 2 bar 6
5 baz 3 baz 7
Merge DataFrames df1 and df2, but raise an exception if the DataFrames have any overlapping columns.
>>> df1.merge(df2, left_on='lkey', right_on='rkey', suffixes=(False, False))
Traceback (most recent call last):
...
ValueError: columns overlap but no suffix specified:
Index(['value'], dtype='object')
>>> df1 = pd.DataFrame({'a': ['foo', 'bar'], 'b': [1, 2]})
>>> df2 = pd.DataFrame({'a': ['foo', 'baz'], 'c': [3, 4]})
>>> df1
a b
0 foo 1
1 bar 2
>>> df2
a c
0 foo 3
1 baz 4
>>> df1.merge(df2, how='inner', on='a')
a b c
0 foo 1 3
>>> df1.merge(df2, how='left', on='a')
a b c
0 foo 1 3.0
1 bar 2 NaN
>>> df1 = pd.DataFrame({'left': ['foo', 'bar']})
>>> df2 = pd.DataFrame({'right': [7, 8]})
>>> df1
left
0 foo
1 bar
>>> df2
right
0 7
1 8
>>> df1.merge(df2, how='cross')
left right
0 foo 7
1 foo 8
2 bar 7
3 bar 8 | pandas.reference.api.pandas.merge |
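An editorial sketch of the indicator parameter, which the examples above do not show: indicator=True adds a categorical “_merge” column recording whether each row's key came from the left frame only, the right frame only, or both.

```python
import pandas as pd

# Sketch: track each row's provenance in an outer merge.
df1 = pd.DataFrame({"a": ["foo", "bar"], "b": [1, 2]})
df2 = pd.DataFrame({"a": ["foo", "baz"], "c": [3, 4]})
out = pd.merge(df1, df2, how="outer", on="a", indicator=True)
print(out[["a", "_merge"]])
```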
pandas.merge_asof pandas.merge_asof(left, right, on=None, left_on=None, right_on=None, left_index=False, right_index=False, by=None, left_by=None, right_by=None, suffixes=('_x', '_y'), tolerance=None, allow_exact_matches=True, direction='backward')[source]
Perform a merge by key distance. This is similar to a left-join except that we match on nearest key rather than equal keys. Both DataFrames must be sorted by the key. For each row in the left DataFrame:
A “backward” search selects the last row in the right DataFrame whose ‘on’ key is less than or equal to the left’s key. A “forward” search selects the first row in the right DataFrame whose ‘on’ key is greater than or equal to the left’s key. A “nearest” search selects the row in the right DataFrame whose ‘on’ key is closest in absolute distance to the left’s key.
The default is “backward” and is compatible in versions below 0.20.0. The direction parameter was added in version 0.20.0 and introduces “forward” and “nearest”. Optionally match on equivalent keys with ‘by’ before searching with ‘on’. Parameters
left:DataFrame or named Series
right:DataFrame or named Series
on:label
Field name to join on. Must be found in both DataFrames. The data MUST be ordered. Furthermore this must be a numeric column, such as datetimelike, integer, or float. On or left_on/right_on must be given.
left_on:label
Field name to join on in left DataFrame.
right_on:label
Field name to join on in right DataFrame.
left_index:bool
Use the index of the left DataFrame as the join key.
right_index:bool
Use the index of the right DataFrame as the join key.
by:column name or list of column names
Match on these columns before performing merge operation.
left_by:column name
Field names to match on in the left DataFrame.
right_by:column name
Field names to match on in the right DataFrame.
suffixes:2-length sequence (tuple, list, …)
Suffix to apply to overlapping column names in the left and right side, respectively.
tolerance:int or Timedelta, optional, default None
Select asof tolerance within this range; must be compatible with the merge index.
allow_exact_matches:bool, default True
If True, allow matching with the same ‘on’ value (i.e. less-than-or-equal-to / greater-than-or-equal-to) If False, don’t match the same ‘on’ value (i.e., strictly less-than / strictly greater-than).
direction:‘backward’ (default), ‘forward’, or ‘nearest’
Whether to search for prior, subsequent, or closest matches. Returns
merged:DataFrame
See also merge
Merge with a database-style join. merge_ordered
Merge with optional filling/interpolation. Examples
>>> left = pd.DataFrame({"a": [1, 5, 10], "left_val": ["a", "b", "c"]})
>>> left
a left_val
0 1 a
1 5 b
2 10 c
>>> right = pd.DataFrame({"a": [1, 2, 3, 6, 7], "right_val": [1, 2, 3, 6, 7]})
>>> right
a right_val
0 1 1
1 2 2
2 3 3
3 6 6
4 7 7
>>> pd.merge_asof(left, right, on="a")
a left_val right_val
0 1 a 1
1 5 b 3
2 10 c 7
>>> pd.merge_asof(left, right, on="a", allow_exact_matches=False)
a left_val right_val
0 1 a NaN
1 5 b 3.0
2 10 c 7.0
>>> pd.merge_asof(left, right, on="a", direction="forward")
a left_val right_val
0 1 a 1.0
1 5 b 6.0
2 10 c NaN
>>> pd.merge_asof(left, right, on="a", direction="nearest")
a left_val right_val
0 1 a 1
1 5 b 6
2 10 c 7
We can use indexed DataFrames as well.
>>> left = pd.DataFrame({"left_val": ["a", "b", "c"]}, index=[1, 5, 10])
>>> left
left_val
1 a
5 b
10 c
>>> right = pd.DataFrame({"right_val": [1, 2, 3, 6, 7]}, index=[1, 2, 3, 6, 7])
>>> right
right_val
1 1
2 2
3 3
6 6
7 7
>>> pd.merge_asof(left, right, left_index=True, right_index=True)
left_val right_val
1 a 1
5 b 3
10 c 7
Here is a real-world times-series example
>>> quotes = pd.DataFrame(
... {
... "time": [
... pd.Timestamp("2016-05-25 13:30:00.023"),
... pd.Timestamp("2016-05-25 13:30:00.023"),
... pd.Timestamp("2016-05-25 13:30:00.030"),
... pd.Timestamp("2016-05-25 13:30:00.041"),
... pd.Timestamp("2016-05-25 13:30:00.048"),
... pd.Timestamp("2016-05-25 13:30:00.049"),
... pd.Timestamp("2016-05-25 13:30:00.072"),
... pd.Timestamp("2016-05-25 13:30:00.075")
... ],
... "ticker": [
... "GOOG",
... "MSFT",
... "MSFT",
... "MSFT",
... "GOOG",
... "AAPL",
... "GOOG",
... "MSFT"
... ],
... "bid": [720.50, 51.95, 51.97, 51.99, 720.50, 97.99, 720.50, 52.01],
... "ask": [720.93, 51.96, 51.98, 52.00, 720.93, 98.01, 720.88, 52.03]
... }
... )
>>> quotes
time ticker bid ask
0 2016-05-25 13:30:00.023 GOOG 720.50 720.93
1 2016-05-25 13:30:00.023 MSFT 51.95 51.96
2 2016-05-25 13:30:00.030 MSFT 51.97 51.98
3 2016-05-25 13:30:00.041 MSFT 51.99 52.00
4 2016-05-25 13:30:00.048 GOOG 720.50 720.93
5 2016-05-25 13:30:00.049 AAPL 97.99 98.01
6 2016-05-25 13:30:00.072 GOOG 720.50 720.88
7 2016-05-25 13:30:00.075 MSFT 52.01 52.03
>>> trades = pd.DataFrame(
... {
... "time": [
... pd.Timestamp("2016-05-25 13:30:00.023"),
... pd.Timestamp("2016-05-25 13:30:00.038"),
... pd.Timestamp("2016-05-25 13:30:00.048"),
... pd.Timestamp("2016-05-25 13:30:00.048"),
... pd.Timestamp("2016-05-25 13:30:00.048")
... ],
... "ticker": ["MSFT", "MSFT", "GOOG", "GOOG", "AAPL"],
... "price": [51.95, 51.95, 720.77, 720.92, 98.0],
... "quantity": [75, 155, 100, 100, 100]
... }
... )
>>> trades
time ticker price quantity
0 2016-05-25 13:30:00.023 MSFT 51.95 75
1 2016-05-25 13:30:00.038 MSFT 51.95 155
2 2016-05-25 13:30:00.048 GOOG 720.77 100
3 2016-05-25 13:30:00.048 GOOG 720.92 100
4 2016-05-25 13:30:00.048 AAPL 98.00 100
By default we are taking the asof of the quotes
>>> pd.merge_asof(trades, quotes, on="time", by="ticker")
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
We only asof within 2ms between the quote time and the trade time
>>> pd.merge_asof(
... trades, quotes, on="time", by="ticker", tolerance=pd.Timedelta("2ms")
... )
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 NaN NaN
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
We only asof within 10ms between the quote time and the trade time and we exclude exact matches on time. However prior data will propagate forward
>>> pd.merge_asof(
... trades,
... quotes,
... on="time",
... by="ticker",
... tolerance=pd.Timedelta("10ms"),
... allow_exact_matches=False
... )
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 NaN NaN
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 NaN NaN
3 2016-05-25 13:30:00.048 GOOG 720.92 100 NaN NaN
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN | pandas.reference.api.pandas.merge_asof |
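An editorial sketch of an integer tolerance, complementing the Timedelta example above: when the ‘on’ key is integer-typed, tolerance may be an int, and backward matches farther away than the tolerance come back as NaN.

```python
import pandas as pd

# Sketch: with tolerance=1, only a=1 finds a right row within distance 1;
# a=5 (nearest prior key 3) and a=10 (nearest prior key 7) get NaN.
left = pd.DataFrame({"a": [1, 5, 10], "left_val": ["a", "b", "c"]})
right = pd.DataFrame({"a": [1, 2, 3, 6, 7], "right_val": [1, 2, 3, 6, 7]})
out = pd.merge_asof(left, right, on="a", tolerance=1)
print(out)
```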