pandas.core.resample.Resampler.aggregate Resampler.aggregate(func=None, *args, **kwargs)[source] Aggregate using one or more operations over the specified axis. Parameters func:function, str, list or dict Function to use for aggregating the data. If a function, must either work when passed a DataFrame or when passed to DataFrame.apply. Accepted combinations are: function string function name list of functions and/or function names, e.g. [np.sum, 'mean'] dict of axis labels -> functions, function names or list of such. *args Positional arguments to pass to func. **kwargs Keyword arguments to pass to func. Returns scalar, Series or DataFrame The return can be: scalar : when Series.agg is called with single function Series : when DataFrame.agg is called with a single function DataFrame : when DataFrame.agg is called with several functions Return scalar, Series or DataFrame. See also DataFrame.groupby.aggregate Aggregate using callable, string, dict, or list of string/callables. DataFrame.resample.transform Transforms the Series on each group based on the given function. DataFrame.aggregate Aggregate using one or more operations over the specified axis. Notes agg is an alias for aggregate. Use the alias. Functions that mutate the passed object can produce unexpected behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods for more details. A passed user-defined-function will be passed a Series for evaluation. Examples >>> s = pd.Series([1, 2, 3, 4, 5], ... 
index=pd.date_range('20130101', periods=5, freq='s')) >>> s 2013-01-01 00:00:00 1 2013-01-01 00:00:01 2 2013-01-01 00:00:02 3 2013-01-01 00:00:03 4 2013-01-01 00:00:04 5 Freq: S, dtype: int64 >>> r = s.resample('2s') >>> r.agg(np.sum) 2013-01-01 00:00:00 3 2013-01-01 00:00:02 7 2013-01-01 00:00:04 5 Freq: 2S, dtype: int64 >>> r.agg(['sum', 'mean', 'max']) sum mean max 2013-01-01 00:00:00 3 1.5 2 2013-01-01 00:00:02 7 3.5 4 2013-01-01 00:00:04 5 5.0 5 >>> r.agg({'result': lambda x: x.mean() / x.std(), ... 'total': np.sum}) result total 2013-01-01 00:00:00 2.121320 3 2013-01-01 00:00:02 4.949747 7 2013-01-01 00:00:04 NaN 5 >>> r.agg(average="mean", total="sum") average total 2013-01-01 00:00:00 1.5 3 2013-01-01 00:00:02 3.5 7 2013-01-01 00:00:04 5.0 5
pandas.core.resample.Resampler.apply Resampler.apply(func=None, *args, **kwargs)[source] Aggregate using one or more operations over the specified axis. Parameters func:function, str, list or dict Function to use for aggregating the data. If a function, must either work when passed a DataFrame or when passed to DataFrame.apply. Accepted combinations are: function string function name list of functions and/or function names, e.g. [np.sum, 'mean'] dict of axis labels -> functions, function names or list of such. *args Positional arguments to pass to func. **kwargs Keyword arguments to pass to func. Returns scalar, Series or DataFrame The return can be: scalar : when Series.agg is called with single function Series : when DataFrame.agg is called with a single function DataFrame : when DataFrame.agg is called with several functions Return scalar, Series or DataFrame. See also DataFrame.groupby.aggregate Aggregate using callable, string, dict, or list of string/callables. DataFrame.resample.transform Transforms the Series on each group based on the given function. DataFrame.aggregate Aggregate using one or more operations over the specified axis. Notes agg is an alias for aggregate. Use the alias. Functions that mutate the passed object can produce unexpected behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods for more details. A passed user-defined-function will be passed a Series for evaluation. Examples >>> s = pd.Series([1, 2, 3, 4, 5], ... 
index=pd.date_range('20130101', periods=5, freq='s')) >>> s 2013-01-01 00:00:00 1 2013-01-01 00:00:01 2 2013-01-01 00:00:02 3 2013-01-01 00:00:03 4 2013-01-01 00:00:04 5 Freq: S, dtype: int64 >>> r = s.resample('2s') >>> r.agg(np.sum) 2013-01-01 00:00:00 3 2013-01-01 00:00:02 7 2013-01-01 00:00:04 5 Freq: 2S, dtype: int64 >>> r.agg(['sum', 'mean', 'max']) sum mean max 2013-01-01 00:00:00 3 1.5 2 2013-01-01 00:00:02 7 3.5 4 2013-01-01 00:00:04 5 5.0 5 >>> r.agg({'result': lambda x: x.mean() / x.std(), ... 'total': np.sum}) result total 2013-01-01 00:00:00 2.121320 3 2013-01-01 00:00:02 4.949747 7 2013-01-01 00:00:04 NaN 5 >>> r.agg(average="mean", total="sum") average total 2013-01-01 00:00:00 1.5 3 2013-01-01 00:00:02 3.5 7 2013-01-01 00:00:04 5.0 5
pandas.core.resample.Resampler.asfreq Resampler.asfreq(fill_value=None)[source] Return the values at the new freq, essentially a reindex. Parameters fill_value:scalar, optional Value to use for missing values, applied during upsampling (note this does not fill NaNs that already were present). Returns DataFrame or Series Values at the specified freq. See also Series.asfreq Convert TimeSeries to specified frequency. DataFrame.asfreq Convert TimeSeries to specified frequency.
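The asfreq entry above has no Examples section; the following sketch (hypothetical data, not from the original docstring) shows the NaN slots that upsampling introduces and how fill_value fills only those new slots:

```python
import pandas as pd

# Hourly data upsampled to 30 minutes: the new slots carry no values.
s = pd.Series([1.0, 2.0, 3.0],
              index=pd.date_range('2018-01-01', periods=3, freq='h'))

out = s.resample('30min').asfreq()                    # NaN at 00:30 and 01:30
filled = s.resample('30min').asfreq(fill_value=0.0)   # fills only the new slots
```

Note that fill_value applies only to slots created by the upsampling; NaNs already present in the original data are left untouched.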
pandas.core.resample.Resampler.backfill Resampler.backfill(limit=None)[source] Backward fill the new missing values in the resampled data. In statistics, imputation is the process of replacing missing data with substituted values [1]. When resampling data, missing values may appear (e.g., when the resampling frequency is higher than the original frequency). The backward fill will replace NaN values that appeared in the resampled data with the next value in the original sequence. Missing values that existed in the original data will not be modified. Parameters limit:int, optional Limit of how many values to fill. Returns Series, DataFrame An upsampled Series or DataFrame with backward filled NaN values. See also bfill Alias of backfill. fillna Fill NaN values using the specified method, which can be ‘backfill’. nearest Fill NaN values with nearest neighbor starting from center. ffill Forward fill NaN values. Series.fillna Fill NaN values in the Series using the specified method, which can be ‘backfill’. DataFrame.fillna Fill NaN values in the DataFrame using the specified method, which can be ‘backfill’. References 1 https://en.wikipedia.org/wiki/Imputation_(statistics) Examples Resampling a Series: >>> s = pd.Series([1, 2, 3], ... index=pd.date_range('20180101', periods=3, freq='h')) >>> s 2018-01-01 00:00:00 1 2018-01-01 01:00:00 2 2018-01-01 02:00:00 3 Freq: H, dtype: int64 >>> s.resample('30min').bfill() 2018-01-01 00:00:00 1 2018-01-01 00:30:00 2 2018-01-01 01:00:00 2 2018-01-01 01:30:00 3 2018-01-01 02:00:00 3 Freq: 30T, dtype: int64 >>> s.resample('15min').bfill(limit=2) 2018-01-01 00:00:00 1.0 2018-01-01 00:15:00 NaN 2018-01-01 00:30:00 2.0 2018-01-01 00:45:00 2.0 2018-01-01 01:00:00 2.0 2018-01-01 01:15:00 NaN 2018-01-01 01:30:00 3.0 2018-01-01 01:45:00 3.0 2018-01-01 02:00:00 3.0 Freq: 15T, dtype: float64 Resampling a DataFrame that has missing values: >>> df = pd.DataFrame({'a': [2, np.nan, 6], 'b': [1, 3, 5]}, ... 
index=pd.date_range('20180101', periods=3, ... freq='h')) >>> df a b 2018-01-01 00:00:00 2.0 1 2018-01-01 01:00:00 NaN 3 2018-01-01 02:00:00 6.0 5 >>> df.resample('30min').bfill() a b 2018-01-01 00:00:00 2.0 1 2018-01-01 00:30:00 NaN 3 2018-01-01 01:00:00 NaN 3 2018-01-01 01:30:00 6.0 5 2018-01-01 02:00:00 6.0 5 >>> df.resample('15min').bfill(limit=2) a b 2018-01-01 00:00:00 2.0 1.0 2018-01-01 00:15:00 NaN NaN 2018-01-01 00:30:00 NaN 3.0 2018-01-01 00:45:00 NaN 3.0 2018-01-01 01:00:00 NaN 3.0 2018-01-01 01:15:00 NaN NaN 2018-01-01 01:30:00 6.0 5.0 2018-01-01 01:45:00 6.0 5.0 2018-01-01 02:00:00 6.0 5.0
pandas.core.resample.Resampler.bfill Resampler.bfill(limit=None)[source] Backward fill the new missing values in the resampled data. In statistics, imputation is the process of replacing missing data with substituted values [1]. When resampling data, missing values may appear (e.g., when the resampling frequency is higher than the original frequency). The backward fill will replace NaN values that appeared in the resampled data with the next value in the original sequence. Missing values that existed in the original data will not be modified. Parameters limit:int, optional Limit of how many values to fill. Returns Series, DataFrame An upsampled Series or DataFrame with backward filled NaN values. See also bfill Alias of backfill. fillna Fill NaN values using the specified method, which can be ‘backfill’. nearest Fill NaN values with nearest neighbor starting from center. ffill Forward fill NaN values. Series.fillna Fill NaN values in the Series using the specified method, which can be ‘backfill’. DataFrame.fillna Fill NaN values in the DataFrame using the specified method, which can be ‘backfill’. References 1 https://en.wikipedia.org/wiki/Imputation_(statistics) Examples Resampling a Series: >>> s = pd.Series([1, 2, 3], ... index=pd.date_range('20180101', periods=3, freq='h')) >>> s 2018-01-01 00:00:00 1 2018-01-01 01:00:00 2 2018-01-01 02:00:00 3 Freq: H, dtype: int64 >>> s.resample('30min').bfill() 2018-01-01 00:00:00 1 2018-01-01 00:30:00 2 2018-01-01 01:00:00 2 2018-01-01 01:30:00 3 2018-01-01 02:00:00 3 Freq: 30T, dtype: int64 >>> s.resample('15min').bfill(limit=2) 2018-01-01 00:00:00 1.0 2018-01-01 00:15:00 NaN 2018-01-01 00:30:00 2.0 2018-01-01 00:45:00 2.0 2018-01-01 01:00:00 2.0 2018-01-01 01:15:00 NaN 2018-01-01 01:30:00 3.0 2018-01-01 01:45:00 3.0 2018-01-01 02:00:00 3.0 Freq: 15T, dtype: float64 Resampling a DataFrame that has missing values: >>> df = pd.DataFrame({'a': [2, np.nan, 6], 'b': [1, 3, 5]}, ... index=pd.date_range('20180101', periods=3, ... 
freq='h')) >>> df a b 2018-01-01 00:00:00 2.0 1 2018-01-01 01:00:00 NaN 3 2018-01-01 02:00:00 6.0 5 >>> df.resample('30min').bfill() a b 2018-01-01 00:00:00 2.0 1 2018-01-01 00:30:00 NaN 3 2018-01-01 01:00:00 NaN 3 2018-01-01 01:30:00 6.0 5 2018-01-01 02:00:00 6.0 5 >>> df.resample('15min').bfill(limit=2) a b 2018-01-01 00:00:00 2.0 1.0 2018-01-01 00:15:00 NaN NaN 2018-01-01 00:30:00 NaN 3.0 2018-01-01 00:45:00 NaN 3.0 2018-01-01 01:00:00 NaN 3.0 2018-01-01 01:15:00 NaN NaN 2018-01-01 01:30:00 6.0 5.0 2018-01-01 01:45:00 6.0 5.0 2018-01-01 02:00:00 6.0 5.0
pandas.core.resample.Resampler.count Resampler.count()[source] Compute count of group, excluding missing values. Returns Series or DataFrame Count of values within each group. See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame.
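A minimal sketch (hypothetical data) of the missing-value behavior described above: count reports only the valid values in each bin.

```python
import numpy as np
import pandas as pd

# The NaN at 01:00 falls in the first 2-hour bin and is excluded.
s = pd.Series([1.0, np.nan, 3.0, 4.0],
              index=pd.date_range('2018-01-01', periods=4, freq='h'))

counts = s.resample('2h').count()   # 1 valid value in the first bin, 2 in the second
```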
pandas.core.resample.Resampler.ffill Resampler.ffill(limit=None)[source] Forward fill the values. Parameters limit:int, optional Limit of how many values to fill. Returns Series or DataFrame An upsampled Series or DataFrame with forward-filled values. See also Series.fillna Fill NA/NaN values using the specified method. DataFrame.fillna Fill NA/NaN values using the specified method.
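The ffill entry has no Examples section; here is an illustrative sketch (invented data) showing the forward fill and the effect of limit:

```python
import pandas as pd

s = pd.Series([1, 2, 3],
              index=pd.date_range('2018-01-01', periods=3, freq='h'))

# Each new 30-minute slot takes the last observation before it.
filled = s.resample('30min').ffill()

# limit caps how many consecutive new slots are filled after each value.
limited = s.resample('15min').ffill(limit=1)
```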
pandas.core.resample.Resampler.fillna Resampler.fillna(method, limit=None)[source] Fill missing values introduced by upsampling. In statistics, imputation is the process of replacing missing data with substituted values [1]. When resampling data, missing values may appear (e.g., when the resampling frequency is higher than the original frequency). Missing values that existed in the original data will not be modified. Parameters method:{‘pad’, ‘backfill’, ‘ffill’, ‘bfill’, ‘nearest’} Method to use for filling holes in resampled data ‘pad’ or ‘ffill’: use previous valid observation to fill gap (forward fill). ‘backfill’ or ‘bfill’: use next valid observation to fill gap. ‘nearest’: use nearest valid observation to fill gap. limit:int, optional Limit of how many consecutive missing values to fill. Returns Series or DataFrame An upsampled Series or DataFrame with missing values filled. See also bfill Backward fill NaN values in the resampled data. ffill Forward fill NaN values in the resampled data. nearest Fill NaN values in the resampled data with nearest neighbor starting from center. interpolate Fill NaN values using interpolation. Series.fillna Fill NaN values in the Series using the specified method, which can be ‘bfill’ and ‘ffill’. DataFrame.fillna Fill NaN values in the DataFrame using the specified method, which can be ‘bfill’ and ‘ffill’. References 1 https://en.wikipedia.org/wiki/Imputation_(statistics) Examples Resampling a Series: >>> s = pd.Series([1, 2, 3], ... 
index=pd.date_range('20180101', periods=3, freq='h')) >>> s 2018-01-01 00:00:00 1 2018-01-01 01:00:00 2 2018-01-01 02:00:00 3 Freq: H, dtype: int64 Without filling the missing values you get: >>> s.resample("30min").asfreq() 2018-01-01 00:00:00 1.0 2018-01-01 00:30:00 NaN 2018-01-01 01:00:00 2.0 2018-01-01 01:30:00 NaN 2018-01-01 02:00:00 3.0 Freq: 30T, dtype: float64 >>> s.resample('30min').fillna("backfill") 2018-01-01 00:00:00 1 2018-01-01 00:30:00 2 2018-01-01 01:00:00 2 2018-01-01 01:30:00 3 2018-01-01 02:00:00 3 Freq: 30T, dtype: int64 >>> s.resample('15min').fillna("backfill", limit=2) 2018-01-01 00:00:00 1.0 2018-01-01 00:15:00 NaN 2018-01-01 00:30:00 2.0 2018-01-01 00:45:00 2.0 2018-01-01 01:00:00 2.0 2018-01-01 01:15:00 NaN 2018-01-01 01:30:00 3.0 2018-01-01 01:45:00 3.0 2018-01-01 02:00:00 3.0 Freq: 15T, dtype: float64 >>> s.resample('30min').fillna("pad") 2018-01-01 00:00:00 1 2018-01-01 00:30:00 1 2018-01-01 01:00:00 2 2018-01-01 01:30:00 2 2018-01-01 02:00:00 3 Freq: 30T, dtype: int64 >>> s.resample('30min').fillna("nearest") 2018-01-01 00:00:00 1 2018-01-01 00:30:00 2 2018-01-01 01:00:00 2 2018-01-01 01:30:00 3 2018-01-01 02:00:00 3 Freq: 30T, dtype: int64 Missing values present before the upsampling are not affected. >>> sm = pd.Series([1, None, 3], ... 
index=pd.date_range('20180101', periods=3, freq='h')) >>> sm 2018-01-01 00:00:00 1.0 2018-01-01 01:00:00 NaN 2018-01-01 02:00:00 3.0 Freq: H, dtype: float64 >>> sm.resample('30min').fillna('backfill') 2018-01-01 00:00:00 1.0 2018-01-01 00:30:00 NaN 2018-01-01 01:00:00 NaN 2018-01-01 01:30:00 3.0 2018-01-01 02:00:00 3.0 Freq: 30T, dtype: float64 >>> sm.resample('30min').fillna('pad') 2018-01-01 00:00:00 1.0 2018-01-01 00:30:00 1.0 2018-01-01 01:00:00 NaN 2018-01-01 01:30:00 NaN 2018-01-01 02:00:00 3.0 Freq: 30T, dtype: float64 >>> sm.resample('30min').fillna('nearest') 2018-01-01 00:00:00 1.0 2018-01-01 00:30:00 NaN 2018-01-01 01:00:00 NaN 2018-01-01 01:30:00 3.0 2018-01-01 02:00:00 3.0 Freq: 30T, dtype: float64 DataFrame resampling is done column-wise. All the same options are available. >>> df = pd.DataFrame({'a': [2, np.nan, 6], 'b': [1, 3, 5]}, ... index=pd.date_range('20180101', periods=3, ... freq='h')) >>> df a b 2018-01-01 00:00:00 2.0 1 2018-01-01 01:00:00 NaN 3 2018-01-01 02:00:00 6.0 5 >>> df.resample('30min').fillna("bfill") a b 2018-01-01 00:00:00 2.0 1 2018-01-01 00:30:00 NaN 3 2018-01-01 01:00:00 NaN 3 2018-01-01 01:30:00 6.0 5 2018-01-01 02:00:00 6.0 5
pandas.core.resample.Resampler.first Resampler.first(_method='first', min_count=0, *args, **kwargs)[source] Compute first of group values. Parameters numeric_only:bool, default False Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. min_count:int, default 0 The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA. Returns Series or DataFrame Computed first of values within each group.
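A short sketch (hypothetical data) of the behavior described above: first skips NaN within each bin, and min_count turns under-populated bins into NaN.

```python
import numpy as np
import pandas as pd

s = pd.Series([np.nan, 2.0, 3.0, np.nan],
              index=pd.date_range('2018-01-01', periods=4, freq='h'))

# NaNs are skipped, so the "first" of the first 2-hour bin is 2.0.
first = s.resample('2h').first()

# With min_count=2, bins holding fewer than two valid values become NaN.
strict = s.resample('2h').first(min_count=2)
```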
pandas.core.resample.Resampler.get_group Resampler.get_group(name, obj=None)[source] Construct DataFrame from group with provided name. Parameters name:object The name of the group to get as a DataFrame. obj:DataFrame, default None The DataFrame to take the DataFrame out of. If it is None, the object groupby was called on will be used. Returns group:same type as obj
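An illustrative sketch (invented data) of get_group: for a resampler, the group names are the bin start timestamps.

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4],
              index=pd.date_range('2018-01-01', periods=4, freq='h'))

r = s.resample('2h')
# Pull out the rows that fall in the bin starting at 02:00.
group = r.get_group(pd.Timestamp('2018-01-01 02:00:00'))
```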
pandas.core.resample.Resampler.groups property Resampler.groups Dict {group name -> group labels}.
pandas.core.resample.Resampler.indices property Resampler.indices Dict {group name -> group indices}.
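A small sketch (hypothetical data) of the two properties above: both are dicts keyed by the bin start timestamps, with indices mapping each bin to the integer positions of its rows.

```python
import pandas as pd

s = pd.Series([10, 20, 30],
              index=pd.date_range('2018-01-01', periods=3, freq='h'))

r = s.resample('2h')
keys = list(r.groups)   # the bin start timestamps (00:00 and 02:00)
rows = r.indices[pd.Timestamp('2018-01-01 00:00:00')]  # positions in that bin
```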
pandas.core.resample.Resampler.interpolate Resampler.interpolate(method='linear', axis=0, limit=None, inplace=False, limit_direction='forward', limit_area=None, downcast=None, **kwargs)[source] Interpolate values according to different methods. Fill NaN values using an interpolation method. Please note that only method='linear' is supported for DataFrame/Series with a MultiIndex. Parameters method:str, default ‘linear’ Interpolation technique to use. One of: ‘linear’: Ignore the index and treat the values as equally spaced. This is the only method supported on MultiIndexes. ‘time’: Works on daily and higher resolution data to interpolate given length of interval. ‘index’, ‘values’: use the actual numerical values of the index. ‘pad’: Fill in NaNs using existing values. ‘nearest’, ‘zero’, ‘slinear’, ‘quadratic’, ‘cubic’, ‘spline’, ‘barycentric’, ‘polynomial’: Passed to scipy.interpolate.interp1d. These methods use the numerical values of the index. Both ‘polynomial’ and ‘spline’ require that you also specify an order (int), e.g. df.interpolate(method='polynomial', order=5). ‘krogh’, ‘piecewise_polynomial’, ‘spline’, ‘pchip’, ‘akima’, ‘cubicspline’: Wrappers around the SciPy interpolation methods of similar names. See Notes. ‘from_derivatives’: Refers to scipy.interpolate.BPoly.from_derivatives which replaces ‘piecewise_polynomial’ interpolation method in scipy 0.18. axis:{0 or ‘index’, 1 or ‘columns’, None}, default None Axis to interpolate along. limit:int, optional Maximum number of consecutive NaNs to fill. Must be greater than 0. inplace:bool, default False Update the data in place if possible. limit_direction:{‘forward’, ‘backward’, ‘both’}, optional Consecutive NaNs will be filled in this direction. If limit is specified: If ‘method’ is ‘pad’ or ‘ffill’, ‘limit_direction’ must be ‘forward’. If ‘method’ is ‘backfill’ or ‘bfill’, ‘limit_direction’ must be ‘backward’.
If ‘limit’ is not specified: If ‘method’ is ‘backfill’ or ‘bfill’, the default is ‘backward’; otherwise the default is ‘forward’. Changed in version 1.1.0: raises ValueError if limit_direction is ‘forward’ or ‘both’ and method is ‘backfill’ or ‘bfill’; raises ValueError if limit_direction is ‘backward’ or ‘both’ and method is ‘pad’ or ‘ffill’. limit_area:{None, ‘inside’, ‘outside’}, default None If limit is specified, consecutive NaNs will be filled with this restriction. None: No fill restriction. ‘inside’: Only fill NaNs surrounded by valid values (interpolate). ‘outside’: Only fill NaNs outside valid values (extrapolate). downcast:optional, ‘infer’ or None, defaults to None Downcast dtypes if possible. **kwargs:optional Keyword arguments to pass on to the interpolating function. Returns Series or DataFrame or None Returns the same object type as the caller, interpolated at some or all NaN values, or None if inplace=True. See also fillna Fill missing values using different methods. scipy.interpolate.Akima1DInterpolator Piecewise cubic polynomials (Akima interpolator). scipy.interpolate.BPoly.from_derivatives Piecewise polynomial in the Bernstein basis. scipy.interpolate.interp1d Interpolate a 1-D function. scipy.interpolate.KroghInterpolator Interpolate polynomial (Krogh interpolator). scipy.interpolate.PchipInterpolator PCHIP 1-D monotonic cubic interpolation. scipy.interpolate.CubicSpline Cubic spline data interpolator. Notes The ‘krogh’, ‘piecewise_polynomial’, ‘spline’, ‘pchip’ and ‘akima’ methods are wrappers around the respective SciPy implementations of similar names. These use the actual numerical values of the index. For more information on their behavior, see the SciPy documentation and SciPy tutorial. Examples Filling in NaN in a Series via linear interpolation.
>>> s = pd.Series([0, 1, np.nan, 3]) >>> s 0 0.0 1 1.0 2 NaN 3 3.0 dtype: float64 >>> s.interpolate() 0 0.0 1 1.0 2 2.0 3 3.0 dtype: float64 Filling in NaN in a Series by padding, but filling at most two consecutive NaN at a time. >>> s = pd.Series([np.nan, "single_one", np.nan, ... "fill_two_more", np.nan, np.nan, np.nan, ... 4.71, np.nan]) >>> s 0 NaN 1 single_one 2 NaN 3 fill_two_more 4 NaN 5 NaN 6 NaN 7 4.71 8 NaN dtype: object >>> s.interpolate(method='pad', limit=2) 0 NaN 1 single_one 2 single_one 3 fill_two_more 4 fill_two_more 5 fill_two_more 6 NaN 7 4.71 8 4.71 dtype: object Filling in NaN in a Series via polynomial interpolation or splines: Both ‘polynomial’ and ‘spline’ methods require that you also specify an order (int). >>> s = pd.Series([0, 2, np.nan, 8]) >>> s.interpolate(method='polynomial', order=2) 0 0.000000 1 2.000000 2 4.666667 3 8.000000 dtype: float64 Fill the DataFrame forward (that is, going down) along each column using linear interpolation. Note how the last entry in column ‘a’ is interpolated differently, because there is no entry after it to use for interpolation. Note how the first entry in column ‘b’ remains NaN, because there is no entry before it to use for interpolation. >>> df = pd.DataFrame([(0.0, np.nan, -1.0, 1.0), ... (np.nan, 2.0, np.nan, np.nan), ... (2.0, 3.0, np.nan, 9.0), ... (np.nan, 4.0, -4.0, 16.0)], ... columns=list('abcd')) >>> df a b c d 0 0.0 NaN -1.0 1.0 1 NaN 2.0 NaN NaN 2 2.0 3.0 NaN 9.0 3 NaN 4.0 -4.0 16.0 >>> df.interpolate(method='linear', limit_direction='forward', axis=0) a b c d 0 0.0 NaN -1.0 1.0 1 1.0 2.0 -2.0 5.0 2 2.0 3.0 -3.0 9.0 3 2.0 4.0 -4.0 16.0 Using polynomial interpolation. >>> df['d'].interpolate(method='polynomial', order=2) 0 1.0 1 4.0 2 9.0 3 16.0 Name: d, dtype: float64
pandas.core.resample.Resampler.last Resampler.last(_method='last', min_count=0, *args, **kwargs)[source] Compute last of group values. Parameters numeric_only:bool, default False Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. min_count:int, default 0 The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA. Returns Series or DataFrame Computed last of values within each group.
pandas.core.resample.Resampler.max Resampler.max(_method='max', min_count=0, *args, **kwargs)[source] Compute max of group values. Parameters numeric_only:bool, default False Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. min_count:int, default 0 The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA. Returns Series or DataFrame Computed max of values within each group.
pandas.core.resample.Resampler.mean Resampler.mean(_method='mean', *args, **kwargs)[source] Compute mean of groups, excluding missing values. Parameters numeric_only:bool, default True Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. engine:str, default None 'cython' : Runs the operation through C-extensions from cython. 'numba' : Runs the operation through JIT compiled code from numba. None : Defaults to 'cython' or the globally set compute.use_numba. New in version 1.4.0. engine_kwargs:dict, default None For 'cython' engine, there are no accepted engine_kwargs. For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False}. New in version 1.4.0. Returns pandas.Series or pandas.DataFrame See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame. Examples >>> df = pd.DataFrame({'A': [1, 1, 2, 1, 2], ... 'B': [np.nan, 2, 3, 4, 5], ... 'C': [1, 2, 1, 1, 2]}, columns=['A', 'B', 'C']) Groupby one column and return the mean of the remaining columns in each group. >>> df.groupby('A').mean() B C A 1 3.0 1.333333 2 4.0 1.500000 Groupby two columns and return the mean of the remaining column. >>> df.groupby(['A', 'B']).mean() C A B 1 2.0 2.0 4.0 1.0 2 3.0 1.0 5.0 2.0 Groupby one column and return the mean of only particular column in the group. >>> df.groupby('A')['B'].mean() A 1 3.0 2 4.0 Name: B, dtype: float64
pandas.core.resample.Resampler.median Resampler.median(_method='median', *args, **kwargs)[source] Compute median of groups, excluding missing values. For multiple groupings, the result index will be a MultiIndex Parameters numeric_only:bool, default True Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Returns Series or DataFrame Median of values within each group. See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame.
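The median entry has no Examples section; a quick sketch (hypothetical data) shows the robustness to outliers that distinguishes it from mean:

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4, 5, 100],
              index=pd.date_range('2018-01-01', periods=6, freq='h'))

# The outlier in the second 3-hour bin does not drag the median up.
med = s.resample('3h').median()
```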
pandas.core.resample.Resampler.min Resampler.min(_method='min', min_count=0, *args, **kwargs)[source] Compute min of group values. Parameters numeric_only:bool, default False Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. min_count:int, default 0 The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA. Returns Series or DataFrame Computed min of values within each group.
pandas.core.resample.Resampler.nearest Resampler.nearest(limit=None)[source] Resample by using the nearest value. When resampling data, missing values may appear (e.g., when the resampling frequency is higher than the original frequency). The nearest method will replace NaN values that appeared in the resampled data with the value from the nearest member of the sequence, based on the index value. Missing values that existed in the original data will not be modified. If limit is given, fill only this many values in each direction for each of the original values. Parameters limit:int, optional Limit of how many values to fill. Returns Series or DataFrame An upsampled Series or DataFrame with NaN values filled with their nearest value. See also backfill Backward fill the new missing values in the resampled data. pad Forward fill NaN values. Examples >>> s = pd.Series([1, 2], ... index=pd.date_range('20180101', ... periods=2, ... freq='1h')) >>> s 2018-01-01 00:00:00 1 2018-01-01 01:00:00 2 Freq: H, dtype: int64 >>> s.resample('15min').nearest() 2018-01-01 00:00:00 1 2018-01-01 00:15:00 1 2018-01-01 00:30:00 2 2018-01-01 00:45:00 2 2018-01-01 01:00:00 2 Freq: 15T, dtype: int64 Limit the number of upsampled values imputed by the nearest: >>> s.resample('15min').nearest(limit=1) 2018-01-01 00:00:00 1.0 2018-01-01 00:15:00 1.0 2018-01-01 00:30:00 NaN 2018-01-01 00:45:00 2.0 2018-01-01 01:00:00 2.0 Freq: 15T, dtype: float64
pandas.core.resample.Resampler.nunique Resampler.nunique(_method='nunique')[source] Return number of unique elements in the group. Returns Series Number of unique values within each group.
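An illustrative sketch (invented data) of nunique: duplicates within a bin count once.

```python
import pandas as pd

s = pd.Series([1, 1, 2, 3],
              index=pd.date_range('2018-01-01', periods=4, freq='h'))

uniques = s.resample('2h').nunique()  # first bin holds 1 and 1, second holds 2 and 3
```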
pandas.core.resample.Resampler.ohlc Resampler.ohlc(_method='ohlc', *args, **kwargs)[source] Compute open, high, low and close values of a group, excluding missing values. For multiple groupings, the result index will be a MultiIndex Returns DataFrame Open, high, low and close values within each group. See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame.
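A minimal sketch (hypothetical data) of ohlc: each bin is summarized by its first, highest, lowest, and last value, one column per statistic.

```python
import pandas as pd

s = pd.Series([3, 1, 4, 1, 5, 9],
              index=pd.date_range('2018-01-01', periods=6, freq='h'))

bars = s.resample('3h').ohlc()
cols = bars.columns.tolist()       # ['open', 'high', 'low', 'close']
first_bar = bars.iloc[0].tolist()  # for the values 3, 1, 4 in the first bin
```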
pandas.core.resample.Resampler.pad Resampler.pad(limit=None)[source] Forward fill the values. Parameters limit:int, optional Limit of how many values to fill. Returns Series or DataFrame An upsampled Series or DataFrame with forward-filled values. See also Series.fillna Fill NA/NaN values using the specified method. DataFrame.fillna Fill NA/NaN values using the specified method.
pandas.core.resample.Resampler.pipe Resampler.pipe(func, *args, **kwargs)[source] Apply a function func with arguments to this Resampler object and return the function’s result. Use .pipe when you want to improve readability by chaining together functions that expect Series, DataFrames, GroupBy or Resampler objects. Instead of writing >>> h(g(f(df.groupby('group')), arg1=a), arg2=b, arg3=c) You can write >>> (df.groupby('group') ... .pipe(f) ... .pipe(g, arg1=a) ... .pipe(h, arg2=b, arg3=c)) which is much more readable. Parameters func:callable or tuple of (callable, str) Function to apply to this Resampler object or, alternatively, a (callable, data_keyword) tuple where data_keyword is a string indicating the keyword of callable that expects the Resampler object. args:iterable, optional Positional arguments passed into func. kwargs:dict, optional A dictionary of keyword arguments passed into func. Returns object:the return type of func. See also Series.pipe Apply a function with arguments to a series. DataFrame.pipe Apply a function with arguments to a dataframe. apply Apply function to each group instead of to the full Resampler object. Notes See more here Examples >>> df = pd.DataFrame({'A': [1, 2, 3, 4]}, ... index=pd.date_range('2012-08-02', periods=4)) >>> df A 2012-08-02 1 2012-08-03 2 2012-08-04 3 2012-08-05 4 To get the difference between each 2-day period’s maximum and minimum value in one pass, you can do >>> df.resample('2D').pipe(lambda x: x.max() - x.min()) A 2012-08-02 1 2012-08-04 1
pandas.core.resample.Resampler.prod Resampler.prod(_method='prod', min_count=0, *args, **kwargs)[source] Compute prod of group values. Parameters numeric_only:bool, default True Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. min_count:int, default 0 The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA. Returns Series or DataFrame Computed prod of values within each group.
pandas.core.resample.Resampler.quantile Resampler.quantile(q=0.5, **kwargs)[source] Return value at the given quantile. Parameters q:float or array-like, default 0.5 (50% quantile) The quantile(s) to compute. Returns DataFrame or Series Quantile of values within each group. See also Series.quantile Return a series, where the index is q and the values are the quantiles. DataFrame.quantile Return a DataFrame, where the columns are the columns of self, and the values are the quantiles. DataFrameGroupBy.quantile Return a DataFrame, where the columns are groupby columns, and the values are its quantiles.
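The quantile entry has no Examples section; a short sketch (hypothetical data) shows the default median and a custom quantile, which interpolates linearly within each bin:

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4],
              index=pd.date_range('2018-01-01', periods=4, freq='h'))

medians = s.resample('2h').quantile()      # default q=0.5: the bin medians
lower = s.resample('2h').quantile(0.25)    # first quartile of each bin
```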
pandas.core.resample.Resampler.sem Resampler.sem(_method='sem', *args, **kwargs)[source] Compute standard error of the mean of groups, excluding missing values. For multiple groupings, the result index will be a MultiIndex. Parameters ddof:int, default 1 Degrees of freedom. Returns Series or DataFrame Standard error of the mean of values within each group. See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame.
pandas.core.resample.Resampler.size Resampler.size()[source] Compute group sizes. Returns DataFrame or Series Number of rows in each group as a Series if as_index is True or a DataFrame if as_index is False. See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame.
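A minimal sketch on made-up data; note that `size` counts rows, including NaN, unlike `count`:

```python
import pandas as pd

# Made-up series with a missing value; size counts rows, NaN included.
s = pd.Series([1.0, None, 3.0, 4.0, 5.0],
              index=pd.date_range("2013-01-01", periods=5, freq="s"))

# The 2-second bins hold 2, 2, and 1 rows; the NaN still counts toward size.
result = s.resample("2s").size()
```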
pandas.core.resample.Resampler.std Resampler.std(ddof=1, *args, **kwargs)[source] Compute standard deviation of groups, excluding missing values. Parameters ddof:int, default 1 Degrees of freedom. Returns DataFrame or Series Standard deviation of values within each group.
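A minimal sketch on invented data:

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4],
              index=pd.date_range("2013-01-01", periods=4, freq="s"))

# Sample standard deviation (ddof=1) per 2-second bin:
# sqrt(0.5) ~ 0.707107 for both [1, 2] and [3, 4].
result = s.resample("2s").std()
```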
pandas.core.resample.Resampler.sum Resampler.sum(_method='sum', min_count=0, *args, **kwargs)[source] Compute sum of group values. Parameters numeric_only:bool, default True Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. min_count:int, default 0 The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA. Returns Series or DataFrame Computed sum of values within each group.
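To show the effect of `min_count`, a minimal sketch on a made-up series with a gap:

```python
import pandas as pd

# Made-up series with a missing value.
s = pd.Series([1.0, None, 3.0, 4.0],
              index=pd.date_range("2013-01-01", periods=4, freq="s"))

default = s.resample("2s").sum()            # NaN skipped: [1.0, 7.0]
strict = s.resample("2s").sum(min_count=2)  # first bin has only 1 valid value -> NaN
```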
pandas.core.resample.Resampler.transform Resampler.transform(arg, *args, **kwargs)[source] Call function producing a like-indexed Series on each group and return a Series with the transformed values. Parameters arg:function To apply to each group. Should return a Series with the same index. Returns transformed:Series Examples >>> s = pd.Series([1, 2], ... index=pd.date_range('20180101', ... periods=2, ... freq='1h')) >>> s 2018-01-01 00:00:00 1 2018-01-01 01:00:00 2 Freq: H, dtype: int64 >>> resampled = s.resample('15min') >>> resampled.transform(lambda x: (x - x.mean()) / x.std()) 2018-01-01 00:00:00 NaN 2018-01-01 01:00:00 NaN Freq: H, dtype: float64
pandas.core.resample.Resampler.var Resampler.var(ddof=1, *args, **kwargs)[source] Compute variance of groups, excluding missing values. Parameters ddof:int, default 1 Degrees of freedom. Returns DataFrame or Series Variance of values within each group.
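A minimal sketch on invented data:

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4],
              index=pd.date_range("2013-01-01", periods=4, freq="s"))

# Sample variance (ddof=1) per 2-second bin: 0.5 for both [1, 2] and [3, 4].
result = s.resample("2s").var()
```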
pandas.core.window.ewm.ExponentialMovingWindow.corr ExponentialMovingWindow.corr(other=None, pairwise=None, **kwargs)[source] Calculate the ewm (exponential weighted moment) sample correlation. Parameters other:Series or DataFrame, optional If not supplied then will default to self and produce pairwise output. pairwise:bool, default None If False then only matching columns between self and other will be used and the output will be a DataFrame. If True then all pairwise combinations will be calculated and the output will be a MultiIndex DataFrame in the case of DataFrame inputs. In the case of missing elements, only complete pairwise observations will be used. **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.ewm Calling ewm with Series data. pandas.DataFrame.ewm Calling ewm with DataFrames. pandas.Series.corr Aggregating corr for Series. pandas.DataFrame.corr Aggregating corr for DataFrame.
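For illustration, a minimal sketch on two made-up, perfectly linearly related series; their exponentially weighted correlation converges to 1.0:

```python
import pandas as pd

# Two invented series related by an exact positive linear map.
s1 = pd.Series([1.0, 2.0, 3.0, 4.0])
s2 = s1 * 2

# EW correlation of perfectly correlated inputs is 1.0 once enough
# observations are available (the first element is NaN, a 0/0 case).
result = s1.ewm(alpha=0.5).corr(s2)
```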
pandas.core.window.ewm.ExponentialMovingWindow.cov ExponentialMovingWindow.cov(other=None, pairwise=None, bias=False, **kwargs)[source] Calculate the ewm (exponential weighted moment) sample covariance. Parameters other:Series or DataFrame , optional If not supplied then will default to self and produce pairwise output. pairwise:bool, default None If False then only matching columns between self and other will be used and the output will be a DataFrame. If True then all pairwise combinations will be calculated and the output will be a MultiIndex DataFrame in the case of DataFrame inputs. In the case of missing elements, only complete pairwise observations will be used. bias:bool, default False Use a standard estimation bias correction. **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.ewm Calling ewm with Series data. pandas.DataFrame.ewm Calling ewm with DataFrames. pandas.Series.cov Aggregating cov for Series. pandas.DataFrame.cov Aggregating cov for DataFrame.
pandas.core.window.ewm.ExponentialMovingWindow.mean ExponentialMovingWindow.mean(*args, engine=None, engine_kwargs=None, **kwargs)[source] Calculate the ewm (exponential weighted moment) mean. Parameters *args For NumPy compatibility and will not have an effect on the result. engine:str, default None 'cython' : Runs the operation through C-extensions from cython. 'numba' : Runs the operation through JIT compiled code from numba. None : Defaults to 'cython' or globally setting compute.use_numba New in version 1.3.0. engine_kwargs:dict, default None For 'cython' engine, there are no accepted engine_kwargs For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} New in version 1.3.0. **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.ewm Calling ewm with Series data. pandas.DataFrame.ewm Calling ewm with DataFrames. pandas.Series.mean Aggregating mean for Series. pandas.DataFrame.mean Aggregating mean for DataFrame. Notes See Numba engine and Numba (JIT compilation) for extended documentation and performance considerations for the Numba engine.
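A minimal sketch on invented data, spelling out the default `adjust=True` weighting:

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0])

# With adjust=True (default), each output is a weighted average with
# weights (1 - alpha)^i over past observations; e.g. the last value is
# (3 + 0.5*2 + 0.25*1) / (1 + 0.5 + 0.25) = 2.428571...
result = s.ewm(alpha=0.5).mean()
```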
pandas.core.window.ewm.ExponentialMovingWindow.std ExponentialMovingWindow.std(bias=False, *args, **kwargs)[source] Calculate the ewm (exponential weighted moment) standard deviation. Parameters bias:bool, default False Use a standard estimation bias correction. *args For NumPy compatibility and will not have an effect on the result. **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.ewm Calling ewm with Series data. pandas.DataFrame.ewm Calling ewm with DataFrames. pandas.Series.std Aggregating std for Series. pandas.DataFrame.std Aggregating std for DataFrame.
pandas.core.window.ewm.ExponentialMovingWindow.sum ExponentialMovingWindow.sum(*args, engine=None, engine_kwargs=None, **kwargs)[source] Calculate the ewm (exponential weighted moment) sum. Parameters *args For NumPy compatibility and will not have an effect on the result. engine:str, default None 'cython' : Runs the operation through C-extensions from cython. 'numba' : Runs the operation through JIT compiled code from numba. None : Defaults to 'cython' or globally setting compute.use_numba New in version 1.3.0. engine_kwargs:dict, default None For 'cython' engine, there are no accepted engine_kwargs For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} New in version 1.3.0. **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.ewm Calling ewm with Series data. pandas.DataFrame.ewm Calling ewm with DataFrames. pandas.Series.sum Aggregating sum for Series. pandas.DataFrame.sum Aggregating sum for DataFrame. Notes See Numba engine and Numba (JIT compilation) for extended documentation and performance considerations for the Numba engine.
pandas.core.window.ewm.ExponentialMovingWindow.var ExponentialMovingWindow.var(bias=False, *args, **kwargs)[source] Calculate the ewm (exponential weighted moment) variance. Parameters bias:bool, default False Use a standard estimation bias correction. *args For NumPy compatibility and will not have an effect on the result. **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.ewm Calling ewm with Series data. pandas.DataFrame.ewm Calling ewm with DataFrames. pandas.Series.var Aggregating var for Series. pandas.DataFrame.var Aggregating var for DataFrame.
pandas.core.window.expanding.Expanding.aggregate Expanding.aggregate(func, *args, **kwargs)[source] Aggregate using one or more operations over the specified axis. Parameters func:function, str, list or dict Function to use for aggregating the data. If a function, must either work when passed a Series/Dataframe or when passed to Series/Dataframe.apply. Accepted combinations are: function string function name list of functions and/or function names, e.g. [np.sum, 'mean'] dict of axis labels -> functions, function names or list of such. *args Positional arguments to pass to func. **kwargs Keyword arguments to pass to func. Returns scalar, Series or DataFrame The return can be: scalar : when Series.agg is called with single function Series : when DataFrame.agg is called with a single function DataFrame : when DataFrame.agg is called with several functions Return scalar, Series or DataFrame. See also pandas.DataFrame.aggregate Similar DataFrame method. pandas.Series.aggregate Similar Series method. Notes agg is an alias for aggregate. Use the alias. Functions that mutate the passed object can produce unexpected behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods for more details. A passed user-defined-function will be passed a Series for evaluation. Examples >>> df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6], "C": [7, 8, 9]}) >>> df A B C 0 1 4 7 1 2 5 8 2 3 6 9 >>> df.expanding().aggregate("sum") A B C 0 1.0 4.0 7.0 1 3.0 9.0 15.0 2 6.0 15.0 24.0
pandas.core.window.expanding.Expanding.apply Expanding.apply(func, raw=False, engine=None, engine_kwargs=None, args=None, kwargs=None)[source] Calculate the expanding custom aggregation function. Parameters func:function Must produce a single value from an ndarray input if raw=True or a single value from a Series if raw=False. Can also accept a Numba JIT function with engine='numba' specified. Changed in version 1.0.0. raw:bool, default False False : passes each row or column as a Series to the function. True : the passed function will receive ndarray objects instead. If you are just applying a NumPy reduction function this will achieve much better performance. engine:str, default None 'cython' : Runs rolling apply through C-extensions from cython. 'numba' : Runs rolling apply through JIT compiled code from numba. Only available when raw is set to True. None : Defaults to 'cython' or globally setting compute.use_numba New in version 1.0.0. engine_kwargs:dict, default None For 'cython' engine, there are no accepted engine_kwargs For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} and will be applied to both the func and the apply rolling aggregation. New in version 1.0.0. args:tuple, default None Positional arguments to be passed into func. kwargs:dict, default None Keyword arguments to be passed into func. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.expanding Calling expanding with Series data. pandas.DataFrame.expanding Calling expanding with DataFrames. pandas.Series.apply Aggregating apply for Series. pandas.DataFrame.apply Aggregating apply for DataFrame.
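A minimal sketch on invented data, using `raw=True` so the function receives plain ndarrays:

```python
import pandas as pd

# Made-up series; expanding range (max - min) over all values seen so far.
s = pd.Series([1.0, 5.0, 2.0, 8.0])

# raw=True passes each expanding window to func as a NumPy ndarray,
# which is faster for simple reductions like this one.
result = s.expanding().apply(lambda x: x.max() - x.min(), raw=True)
```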
pandas.core.window.expanding.Expanding.corr Expanding.corr(other=None, pairwise=None, ddof=1, **kwargs)[source] Calculate the expanding correlation. Parameters other:Series or DataFrame, optional If not supplied then will default to self and produce pairwise output. pairwise:bool, default None If False then only matching columns between self and other will be used and the output will be a DataFrame. If True then all pairwise combinations will be calculated and the output will be a MultiIndexed DataFrame in the case of DataFrame inputs. In the case of missing elements, only complete pairwise observations will be used. **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also cov Similar method to calculate covariance. numpy.corrcoef NumPy Pearson’s correlation calculation. pandas.Series.expanding Calling expanding with Series data. pandas.DataFrame.expanding Calling expanding with DataFrames. pandas.Series.corr Aggregating corr for Series. pandas.DataFrame.corr Aggregating corr for DataFrame. Notes This function uses Pearson’s definition of correlation (https://en.wikipedia.org/wiki/Pearson_correlation_coefficient). When other is not specified, the output will be self correlation (e.g. all 1’s), except for DataFrame inputs with pairwise set to True. Function will return NaN for correlations of equal valued sequences; this is the result of a 0/0 division error. When pairwise is set to False, only matching columns between self and other will be used. When pairwise is set to True, the output will be a MultiIndex DataFrame with the original index on the first level, and the other DataFrame columns on the second level. In the case of missing elements, only complete pairwise observations will be used.
pandas.core.window.expanding.Expanding.count Expanding.count()[source] Calculate the expanding count of non NaN observations. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.expanding Calling expanding with Series data. pandas.DataFrame.expanding Calling expanding with DataFrames. pandas.Series.count Aggregating count for Series. pandas.DataFrame.count Aggregating count for DataFrame.
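A minimal sketch on invented data, showing that the NaN is excluded from the count:

```python
import pandas as pd

# count tallies non-NaN observations seen so far; the None is skipped.
s = pd.Series([1.0, None, 3.0])
result = s.expanding().count()
```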
pandas.core.window.expanding.Expanding.cov Expanding.cov(other=None, pairwise=None, ddof=1, **kwargs)[source] Calculate the expanding sample covariance. Parameters other:Series or DataFrame, optional If not supplied then will default to self and produce pairwise output. pairwise:bool, default None If False then only matching columns between self and other will be used and the output will be a DataFrame. If True then all pairwise combinations will be calculated and the output will be a MultiIndexed DataFrame in the case of DataFrame inputs. In the case of missing elements, only complete pairwise observations will be used. ddof:int, default 1 Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements. **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.expanding Calling expanding with Series data. pandas.DataFrame.expanding Calling expanding with DataFrames. pandas.Series.cov Aggregating cov for Series. pandas.DataFrame.cov Aggregating cov for DataFrame.
pandas.core.window.expanding.Expanding.kurt Expanding.kurt(**kwargs)[source] Calculate the expanding Fisher’s definition of kurtosis without bias. Parameters **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also scipy.stats.kurtosis Reference SciPy method. pandas.Series.expanding Calling expanding with Series data. pandas.DataFrame.expanding Calling expanding with DataFrames. pandas.Series.kurt Aggregating kurt for Series. pandas.DataFrame.kurt Aggregating kurt for DataFrame. Notes A minimum of four periods is required for the calculation. Examples The example below will show a rolling calculation with a window size of four matching the equivalent function call using scipy.stats. >>> arr = [1, 2, 3, 4, 999] >>> import scipy.stats >>> print(f"{scipy.stats.kurtosis(arr[:-1], bias=False):.6f}") -1.200000 >>> print(f"{scipy.stats.kurtosis(arr, bias=False):.6f}") 4.999874 >>> s = pd.Series(arr) >>> s.expanding(4).kurt() 0 NaN 1 NaN 2 NaN 3 -1.200000 4 4.999874 dtype: float64
pandas.core.window.expanding.Expanding.max Expanding.max(*args, engine=None, engine_kwargs=None, **kwargs)[source] Calculate the expanding maximum. Parameters *args For NumPy compatibility and will not have an effect on the result. engine:str, default None 'cython' : Runs the operation through C-extensions from cython. 'numba' : Runs the operation through JIT compiled code from numba. None : Defaults to 'cython' or globally setting compute.use_numba New in version 1.3.0. engine_kwargs:dict, default None For 'cython' engine, there are no accepted engine_kwargs For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} New in version 1.3.0. **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.expanding Calling expanding with Series data. pandas.DataFrame.expanding Calling expanding with DataFrames. pandas.Series.max Aggregating max for Series. pandas.DataFrame.max Aggregating max for DataFrame. Notes See Numba engine and Numba (JIT compilation) for extended documentation and performance considerations for the Numba engine.
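A minimal sketch of the running maximum on invented data (the result is float64, as noted above):

```python
import pandas as pd

# Running maximum over all observations seen so far.
s = pd.Series([3, 1, 4, 1, 5])
result = s.expanding().max()
```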
pandas.core.window.expanding.Expanding.mean Expanding.mean(*args, engine=None, engine_kwargs=None, **kwargs)[source] Calculate the expanding mean. Parameters *args For NumPy compatibility and will not have an effect on the result. engine:str, default None 'cython' : Runs the operation through C-extensions from cython. 'numba' : Runs the operation through JIT compiled code from numba. None : Defaults to 'cython' or globally setting compute.use_numba New in version 1.3.0. engine_kwargs:dict, default None For 'cython' engine, there are no accepted engine_kwargs For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} New in version 1.3.0. **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.expanding Calling expanding with Series data. pandas.DataFrame.expanding Calling expanding with DataFrames. pandas.Series.mean Aggregating mean for Series. pandas.DataFrame.mean Aggregating mean for DataFrame. Notes See Numba engine and Numba (JIT compilation) for extended documentation and performance considerations for the Numba engine.
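A minimal sketch on invented data:

```python
import pandas as pd

# Cumulative mean: at position i, the mean of the first i + 1 observations.
s = pd.Series([1, 2, 3])
result = s.expanding().mean()
```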
pandas.core.window.expanding.Expanding.median Expanding.median(engine=None, engine_kwargs=None, **kwargs)[source] Calculate the expanding median. Parameters engine:str, default None 'cython' : Runs the operation through C-extensions from cython. 'numba' : Runs the operation through JIT compiled code from numba. None : Defaults to 'cython' or globally setting compute.use_numba New in version 1.3.0. engine_kwargs:dict, default None For 'cython' engine, there are no accepted engine_kwargs For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} New in version 1.3.0. **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.expanding Calling expanding with Series data. pandas.DataFrame.expanding Calling expanding with DataFrames. pandas.Series.median Aggregating median for Series. pandas.DataFrame.median Aggregating median for DataFrame. Notes See Numba engine and Numba (JIT compilation) for extended documentation and performance considerations for the Numba engine.
pandas.core.window.expanding.Expanding.min Expanding.min(*args, engine=None, engine_kwargs=None, **kwargs)[source] Calculate the expanding minimum. Parameters *args For NumPy compatibility and will not have an effect on the result. engine:str, default None 'cython' : Runs the operation through C-extensions from cython. 'numba' : Runs the operation through JIT compiled code from numba. None : Defaults to 'cython' or globally setting compute.use_numba New in version 1.3.0. engine_kwargs:dict, default None For 'cython' engine, there are no accepted engine_kwargs For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} New in version 1.3.0. **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.expanding Calling expanding with Series data. pandas.DataFrame.expanding Calling expanding with DataFrames. pandas.Series.min Aggregating min for Series. pandas.DataFrame.min Aggregating min for DataFrame. Notes See Numba engine and Numba (JIT compilation) for extended documentation and performance considerations for the Numba engine.
pandas.core.window.expanding.Expanding.quantile Expanding.quantile(quantile, interpolation='linear', **kwargs)[source] Calculate the expanding quantile. Parameters quantile:float Quantile to compute. 0 <= quantile <= 1. interpolation:{‘linear’, ‘lower’, ‘higher’, ‘midpoint’, ‘nearest’} This optional parameter specifies the interpolation method to use, when the desired quantile lies between two data points i and j: linear: i + (j - i) * fraction, where fraction is the fractional part of the index surrounded by i and j. lower: i. higher: j. nearest: i or j whichever is nearest. midpoint: (i + j) / 2. **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.expanding Calling expanding with Series data. pandas.DataFrame.expanding Calling expanding with DataFrames. pandas.Series.quantile Aggregating quantile for Series. pandas.DataFrame.quantile Aggregating quantile for DataFrame.
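A minimal sketch on invented data, using the default linear interpolation:

```python
import pandas as pd

# Expanding median (quantile=0.5) over the observations seen so far;
# with an even count, 'linear' interpolates between the two middle values.
s = pd.Series([1, 2, 3, 4])
result = s.expanding().quantile(0.5)
```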
pandas.core.window.expanding.Expanding.rank Expanding.rank(method='average', ascending=True, pct=False, **kwargs)[source] Calculate the expanding rank. New in version 1.4.0. Parameters method:{‘average’, ‘min’, ‘max’}, default ‘average’ How to rank the group of records that have the same value (i.e. ties): average: average rank of the group min: lowest rank in the group max: highest rank in the group ascending:bool, default True Whether or not the elements should be ranked in ascending order. pct:bool, default False Whether or not to display the returned rankings in percentile form. **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.expanding Calling expanding with Series data. pandas.DataFrame.expanding Calling expanding with DataFrames. pandas.Series.rank Aggregating rank for Series. pandas.DataFrame.rank Aggregating rank for DataFrame. Examples >>> s = pd.Series([1, 4, 2, 3, 5, 3]) >>> s.expanding().rank() 0 1.0 1 2.0 2 2.0 3 3.0 4 5.0 5 3.5 dtype: float64 >>> s.expanding().rank(method="max") 0 1.0 1 2.0 2 2.0 3 3.0 4 5.0 5 4.0 dtype: float64 >>> s.expanding().rank(method="min") 0 1.0 1 2.0 2 2.0 3 3.0 4 5.0 5 3.0 dtype: float64
pandas.core.window.expanding.Expanding.sem Expanding.sem(ddof=1, *args, **kwargs)[source] Calculate the expanding standard error of mean. Parameters ddof:int, default 1 Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements. *args For NumPy compatibility and will not have an effect on the result. **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.expanding Calling expanding with Series data. pandas.DataFrame.expanding Calling expanding with DataFrames. pandas.Series.sem Aggregating sem for Series. pandas.DataFrame.sem Aggregating sem for DataFrame. Notes A minimum of one period is required for the calculation. Examples >>> s = pd.Series([0, 1, 2, 3]) >>> s.expanding().sem() 0 NaN 1 0.707107 2 0.707107 3 0.745356 dtype: float64
pandas.core.window.expanding.Expanding.skew Expanding.skew(**kwargs)[source] Calculate the expanding unbiased skewness. Parameters **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also scipy.stats.skew Third moment of a probability density. pandas.Series.expanding Calling expanding with Series data. pandas.DataFrame.expanding Calling expanding with DataFrames. pandas.Series.skew Aggregating skew for Series. pandas.DataFrame.skew Aggregating skew for DataFrame. Notes A minimum of three periods is required for the calculation.
pandas.core.window.expanding.Expanding.std Expanding.std(ddof=1, *args, engine=None, engine_kwargs=None, **kwargs)[source] Calculate the expanding standard deviation. Parameters ddof:int, default 1 Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements. *args For NumPy compatibility and will not have an effect on the result. engine:str, default None 'cython' : Runs the operation through C-extensions from cython. 'numba' : Runs the operation through JIT compiled code from numba. None : Defaults to 'cython' or globally setting compute.use_numba New in version 1.4.0. engine_kwargs:dict, default None For 'cython' engine, there are no accepted engine_kwargs For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} New in version 1.4.0. **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also numpy.std Equivalent method for NumPy array. pandas.Series.expanding Calling expanding with Series data. pandas.DataFrame.expanding Calling expanding with DataFrames. pandas.Series.std Aggregating std for Series. pandas.DataFrame.std Aggregating std for DataFrame. Notes The default ddof of 1 used in Series.std() is different from the default ddof of 0 in numpy.std(). A minimum of one period is required for the calculation. Examples >>> s = pd.Series([5, 5, 6, 7, 5, 5, 5]) >>> s.expanding(3).std() 0 NaN 1 NaN 2 0.577350 3 0.957427 4 0.894427 5 0.836660 6 0.786796 dtype: float64
pandas.core.window.expanding.Expanding.sum Expanding.sum(*args, engine=None, engine_kwargs=None, **kwargs)[source] Calculate the expanding sum. Parameters *args For NumPy compatibility and will not have an effect on the result. engine:str, default None 'cython' : Runs the operation through C-extensions from cython. 'numba' : Runs the operation through JIT compiled code from numba. None : Defaults to 'cython' or globally setting compute.use_numba New in version 1.3.0. engine_kwargs:dict, default None For 'cython' engine, there are no accepted engine_kwargs For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} New in version 1.3.0. **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.expanding Calling expanding with Series data. pandas.DataFrame.expanding Calling expanding with DataFrames. pandas.Series.sum Aggregating sum for Series. pandas.DataFrame.sum Aggregating sum for DataFrame. Notes See Numba engine and Numba (JIT compilation) for extended documentation and performance considerations for the Numba engine.
pandas.core.window.expanding.Expanding.var Expanding.var(ddof=1, *args, engine=None, engine_kwargs=None, **kwargs)[source] Calculate the expanding variance. Parameters ddof:int, default 1 Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements. *args For NumPy compatibility and will not have an effect on the result. engine:str, default None 'cython' : Runs the operation through C-extensions from cython. 'numba' : Runs the operation through JIT compiled code from numba. None : Defaults to 'cython' or globally setting compute.use_numba New in version 1.4.0. engine_kwargs:dict, default None For 'cython' engine, there are no accepted engine_kwargs For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} New in version 1.4.0. **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also numpy.var Equivalent method for NumPy array. pandas.Series.expanding Calling expanding with Series data. pandas.DataFrame.expanding Calling expanding with DataFrames. pandas.Series.var Aggregating var for Series. pandas.DataFrame.var Aggregating var for DataFrame. Notes The default ddof of 1 used in Series.var() is different from the default ddof of 0 in numpy.var(). A minimum of one period is required for the calculation. Examples >>> s = pd.Series([5, 5, 6, 7, 5, 5, 5]) >>> s.expanding(3).var() 0 NaN 1 NaN 2 0.333333 3 0.916667 4 0.800000 5 0.700000 6 0.619048 dtype: float64
pandas.core.window.rolling.Rolling.aggregate Rolling.aggregate(func, *args, **kwargs)[source] Aggregate using one or more operations over the specified axis. Parameters func:function, str, list or dict Function to use for aggregating the data. If a function, must either work when passed a Series/Dataframe or when passed to Series/Dataframe.apply. Accepted combinations are: function string function name list of functions and/or function names, e.g. [np.sum, 'mean'] dict of axis labels -> functions, function names or list of such. *args Positional arguments to pass to func. **kwargs Keyword arguments to pass to func. Returns scalar, Series or DataFrame The return can be: scalar : when Series.agg is called with single function Series : when DataFrame.agg is called with a single function DataFrame : when DataFrame.agg is called with several functions Return scalar, Series or DataFrame. See also pandas.Series.rolling Calling object with Series data. pandas.DataFrame.rolling Calling object with DataFrame data. Notes agg is an alias for aggregate. Use the alias. Functions that mutate the passed object can produce unexpected behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods for more details. A passed user-defined-function will be passed a Series for evaluation. Examples >>> df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6], "C": [7, 8, 9]}) >>> df A B C 0 1 4 7 1 2 5 8 2 3 6 9 >>> df.rolling(2).sum() A B C 0 NaN NaN NaN 1 3.0 9.0 15.0 2 5.0 11.0 17.0 >>> df.rolling(2).agg({"A": "sum", "B": "min"}) A B 0 NaN NaN 1 3.0 4.0 2 5.0 5.0
pandas.core.window.rolling.Rolling.apply Rolling.apply(func, raw=False, engine=None, engine_kwargs=None, args=None, kwargs=None)[source] Calculate the rolling custom aggregation function. Parameters func:function Must produce a single value from an ndarray input if raw=True or a single value from a Series if raw=False. Can also accept a Numba JIT function with engine='numba' specified. Changed in version 1.0.0. raw:bool, default False False : passes each row or column as a Series to the function. True : the passed function will receive ndarray objects instead. If you are just applying a NumPy reduction function this will achieve much better performance. engine:str, default None 'cython' : Runs rolling apply through C-extensions from cython. 'numba' : Runs rolling apply through JIT compiled code from numba. Only available when raw is set to True. None : Defaults to 'cython' or globally setting compute.use_numba New in version 1.0.0. engine_kwargs:dict, default None For 'cython' engine, there are no accepted engine_kwargs For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} and will be applied to both the func and the apply rolling aggregation. New in version 1.0.0. args:tuple, default None Positional arguments to be passed into func. kwargs:dict, default None Keyword arguments to be passed into func. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.rolling Calling rolling with Series data. pandas.DataFrame.rolling Calling rolling with DataFrames. pandas.Series.apply Aggregating apply for Series. pandas.DataFrame.apply Aggregating apply for DataFrame.
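This entry has no Examples section; a minimal sketch of a custom rolling aggregation (the range-of-window function is an arbitrary illustration):

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4, 5])

# raw=False passes each window to func as a Series; raw=True passes a
# NumPy ndarray instead, which is faster for plain NumPy reductions.
result = s.rolling(3).apply(lambda x: x.max() - x.min(), raw=True)
print(result)
# 0    NaN
# 1    NaN
# 2    2.0
# 3    2.0
# 4    2.0
# dtype: float64
```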
pandas.core.window.rolling.Rolling.corr Rolling.corr(other=None, pairwise=None, ddof=1, **kwargs)[source] Calculate the rolling correlation. Parameters other:Series or DataFrame, optional If not supplied then will default to self and produce pairwise output. pairwise:bool, default None If False then only matching columns between self and other will be used and the output will be a DataFrame. If True then all pairwise combinations will be calculated and the output will be a MultiIndexed DataFrame in the case of DataFrame inputs. In the case of missing elements, only complete pairwise observations will be used. ddof:int, default 1 Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements. **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also cov Similar method to calculate covariance. numpy.corrcoef NumPy Pearson’s correlation calculation. pandas.Series.rolling Calling rolling with Series data. pandas.DataFrame.rolling Calling rolling with DataFrames. pandas.Series.corr Aggregating corr for Series. pandas.DataFrame.corr Aggregating corr for DataFrame. Notes This function uses Pearson’s definition of correlation (https://en.wikipedia.org/wiki/Pearson_correlation_coefficient). When other is not specified, the output will be self correlation (e.g. all 1’s), except for DataFrame inputs with pairwise set to True. Function will return NaN for correlations of equal valued sequences; this is the result of a 0/0 division error. When pairwise is set to False, only matching columns between self and other will be used. When pairwise is set to True, the output will be a MultiIndex DataFrame with the original index on the first level, and the other DataFrame columns on the second level. In the case of missing elements, only complete pairwise observations will be used. 
Examples The below example shows a rolling calculation with a window size of four matching the equivalent function call using numpy.corrcoef(). >>> v1 = [3, 3, 3, 5, 8] >>> v2 = [3, 4, 4, 4, 8] >>> # numpy returns a 2X2 array, the correlation coefficient >>> # is the number at entry [0][1] >>> print(f"{np.corrcoef(v1[:-1], v2[:-1])[0][1]:.6f}") 0.333333 >>> print(f"{np.corrcoef(v1[1:], v2[1:])[0][1]:.6f}") 0.916949 >>> s1 = pd.Series(v1) >>> s2 = pd.Series(v2) >>> s1.rolling(4).corr(s2) 0 NaN 1 NaN 2 NaN 3 0.333333 4 0.916949 dtype: float64 The below example shows a similar rolling calculation on a DataFrame using the pairwise option. >>> matrix = np.array([[51., 35.], [49., 30.], [47., 32.], [46., 31.], [50., 36.]]) >>> print(np.corrcoef(matrix[:-1,0], matrix[:-1,1]).round(7)) [[1. 0.6263001] [0.6263001 1. ]] >>> print(np.corrcoef(matrix[1:,0], matrix[1:,1]).round(7)) [[1. 0.5553681] [0.5553681 1. ]] >>> df = pd.DataFrame(matrix, columns=['X','Y']) >>> df X Y 0 51.0 35.0 1 49.0 30.0 2 47.0 32.0 3 46.0 31.0 4 50.0 36.0 >>> df.rolling(4).corr(pairwise=True) X Y 0 X NaN NaN Y NaN NaN 1 X NaN NaN Y NaN NaN 2 X NaN NaN Y NaN NaN 3 X 1.000000 0.626300 Y 0.626300 1.000000 4 X 1.000000 0.555368 Y 0.555368 1.000000
pandas.core.window.rolling.Rolling.count Rolling.count()[source] Calculate the rolling count of non NaN observations. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.rolling Calling rolling with Series data. pandas.DataFrame.rolling Calling rolling with DataFrames. pandas.Series.count Aggregating count for Series. pandas.DataFrame.count Aggregating count for DataFrame. Examples >>> s = pd.Series([2, 3, np.nan, 10]) >>> s.rolling(2).count() 0 1.0 1 2.0 2 1.0 3 1.0 dtype: float64 >>> s.rolling(3).count() 0 1.0 1 2.0 2 2.0 3 2.0 dtype: float64 >>> s.rolling(4).count() 0 1.0 1 2.0 2 2.0 3 3.0 dtype: float64
pandas.core.window.rolling.Rolling.cov Rolling.cov(other=None, pairwise=None, ddof=1, **kwargs)[source] Calculate the rolling sample covariance. Parameters other:Series or DataFrame, optional If not supplied then will default to self and produce pairwise output. pairwise:bool, default None If False then only matching columns between self and other will be used and the output will be a DataFrame. If True then all pairwise combinations will be calculated and the output will be a MultiIndexed DataFrame in the case of DataFrame inputs. In the case of missing elements, only complete pairwise observations will be used. ddof:int, default 1 Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements. **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.rolling Calling rolling with Series data. pandas.DataFrame.rolling Calling rolling with DataFrames. pandas.Series.cov Aggregating cov for Series. pandas.DataFrame.cov Aggregating cov for DataFrame.
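This entry has no Examples section; a minimal sketch of rolling sample covariance between two Series (values chosen arbitrarily for illustration):

```python
import pandas as pd

s1 = pd.Series([1, 2, 3, 4])
s2 = pd.Series([1, 4, 5, 8])

# Sample covariance (ddof=1) over each 2-observation window; the first
# window is incomplete, so the first entry is NaN.
print(s1.rolling(2).cov(s2))
# 0    NaN
# 1    1.5
# 2    0.5
# 3    1.5
# dtype: float64
```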
pandas.core.window.rolling.Rolling.kurt Rolling.kurt(**kwargs)[source] Calculate the rolling Fisher’s definition of kurtosis without bias. Parameters **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also scipy.stats.kurtosis Reference SciPy method. pandas.Series.rolling Calling rolling with Series data. pandas.DataFrame.rolling Calling rolling with DataFrames. pandas.Series.kurt Aggregating kurt for Series. pandas.DataFrame.kurt Aggregating kurt for DataFrame. Notes A minimum of four periods is required for the calculation. Examples The example below will show a rolling calculation with a window size of four matching the equivalent function call using scipy.stats. >>> arr = [1, 2, 3, 4, 999] >>> import scipy.stats >>> print(f"{scipy.stats.kurtosis(arr[:-1], bias=False):.6f}") -1.200000 >>> print(f"{scipy.stats.kurtosis(arr[1:], bias=False):.6f}") 3.999946 >>> s = pd.Series(arr) >>> s.rolling(4).kurt() 0 NaN 1 NaN 2 NaN 3 -1.200000 4 3.999946 dtype: float64
pandas.core.window.rolling.Rolling.max Rolling.max(*args, engine=None, engine_kwargs=None, **kwargs)[source] Calculate the rolling maximum. Parameters *args For NumPy compatibility and will not have an effect on the result. engine:str, default None 'cython' : Runs the operation through C-extensions from cython. 'numba' : Runs the operation through JIT compiled code from numba. None : Defaults to 'cython' or globally setting compute.use_numba New in version 1.3.0. engine_kwargs:dict, default None For 'cython' engine, there are no accepted engine_kwargs For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} New in version 1.3.0. **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.rolling Calling rolling with Series data. pandas.DataFrame.rolling Calling rolling with DataFrames. pandas.Series.max Aggregating max for Series. pandas.DataFrame.max Aggregating max for DataFrame. Notes See Numba engine and Numba (JIT compilation) for extended documentation and performance considerations for the Numba engine.
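This entry has no Examples section; a minimal sketch of the rolling maximum (values chosen arbitrarily for illustration):

```python
import pandas as pd

s = pd.Series([1, 3, 2, 5, 4])

# Maximum over each 2-observation window.
print(s.rolling(2).max())
# 0    NaN
# 1    3.0
# 2    3.0
# 3    5.0
# 4    5.0
# dtype: float64
```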
pandas.core.window.rolling.Rolling.mean Rolling.mean(*args, engine=None, engine_kwargs=None, **kwargs)[source] Calculate the rolling mean. Parameters *args For NumPy compatibility and will not have an effect on the result. engine:str, default None 'cython' : Runs the operation through C-extensions from cython. 'numba' : Runs the operation through JIT compiled code from numba. None : Defaults to 'cython' or globally setting compute.use_numba New in version 1.3.0. engine_kwargs:dict, default None For 'cython' engine, there are no accepted engine_kwargs For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} New in version 1.3.0. **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.rolling Calling rolling with Series data. pandas.DataFrame.rolling Calling rolling with DataFrames. pandas.Series.mean Aggregating mean for Series. pandas.DataFrame.mean Aggregating mean for DataFrame. Notes See Numba engine and Numba (JIT compilation) for extended documentation and performance considerations for the Numba engine. Examples The below examples will show rolling mean calculations with window sizes of two and three, respectively. >>> s = pd.Series([1, 2, 3, 4]) >>> s.rolling(2).mean() 0 NaN 1 1.5 2 2.5 3 3.5 dtype: float64 >>> s.rolling(3).mean() 0 NaN 1 NaN 2 2.0 3 3.0 dtype: float64
pandas.core.window.rolling.Rolling.median Rolling.median(engine=None, engine_kwargs=None, **kwargs)[source] Calculate the rolling median. Parameters engine:str, default None 'cython' : Runs the operation through C-extensions from cython. 'numba' : Runs the operation through JIT compiled code from numba. None : Defaults to 'cython' or globally setting compute.use_numba New in version 1.3.0. engine_kwargs:dict, default None For 'cython' engine, there are no accepted engine_kwargs For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} New in version 1.3.0. **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.rolling Calling rolling with Series data. pandas.DataFrame.rolling Calling rolling with DataFrames. pandas.Series.median Aggregating median for Series. pandas.DataFrame.median Aggregating median for DataFrame. Notes See Numba engine and Numba (JIT compilation) for extended documentation and performance considerations for the Numba engine. Examples Compute the rolling median of a series with a window size of 3. >>> s = pd.Series([0, 1, 2, 3, 4]) >>> s.rolling(3).median() 0 NaN 1 NaN 2 1.0 3 2.0 4 3.0 dtype: float64
pandas.core.window.rolling.Rolling.min Rolling.min(*args, engine=None, engine_kwargs=None, **kwargs)[source] Calculate the rolling minimum. Parameters *args For NumPy compatibility and will not have an effect on the result. engine:str, default None 'cython' : Runs the operation through C-extensions from cython. 'numba' : Runs the operation through JIT compiled code from numba. None : Defaults to 'cython' or globally setting compute.use_numba New in version 1.3.0. engine_kwargs:dict, default None For 'cython' engine, there are no accepted engine_kwargs For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} New in version 1.3.0. **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.rolling Calling rolling with Series data. pandas.DataFrame.rolling Calling rolling with DataFrames. pandas.Series.min Aggregating min for Series. pandas.DataFrame.min Aggregating min for DataFrame. Notes See Numba engine and Numba (JIT compilation) for extended documentation and performance considerations for the Numba engine. Examples Performing a rolling minimum with a window size of 3. >>> s = pd.Series([4, 3, 5, 2, 6]) >>> s.rolling(3).min() 0 NaN 1 NaN 2 3.0 3 2.0 4 2.0 dtype: float64
pandas.core.window.rolling.Rolling.quantile Rolling.quantile(quantile, interpolation='linear', **kwargs)[source] Calculate the rolling quantile. Parameters quantile:float Quantile to compute. 0 <= quantile <= 1. interpolation:{‘linear’, ‘lower’, ‘higher’, ‘midpoint’, ‘nearest’} This optional parameter specifies the interpolation method to use, when the desired quantile lies between two data points i and j: linear: i + (j - i) * fraction, where fraction is the fractional part of the index surrounded by i and j. lower: i. higher: j. nearest: i or j whichever is nearest. midpoint: (i + j) / 2. **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.rolling Calling rolling with Series data. pandas.DataFrame.rolling Calling rolling with DataFrames. pandas.Series.quantile Aggregating quantile for Series. pandas.DataFrame.quantile Aggregating quantile for DataFrame. Examples >>> s = pd.Series([1, 2, 3, 4]) >>> s.rolling(2).quantile(.4, interpolation='lower') 0 NaN 1 1.0 2 2.0 3 3.0 dtype: float64 >>> s.rolling(2).quantile(.4, interpolation='midpoint') 0 NaN 1 1.5 2 2.5 3 3.5 dtype: float64
pandas.core.window.rolling.Rolling.rank Rolling.rank(method='average', ascending=True, pct=False, **kwargs)[source] Calculate the rolling rank. New in version 1.4.0. Parameters method:{‘average’, ‘min’, ‘max’}, default ‘average’ How to rank the group of records that have the same value (i.e. ties): average: average rank of the group min: lowest rank in the group max: highest rank in the group ascending:bool, default True Whether or not the elements should be ranked in ascending order. pct:bool, default False Whether or not to display the returned rankings in percentile form. **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.rolling Calling rolling with Series data. pandas.DataFrame.rolling Calling rolling with DataFrames. pandas.Series.rank Aggregating rank for Series. pandas.DataFrame.rank Aggregating rank for DataFrame. Examples >>> s = pd.Series([1, 4, 2, 3, 5, 3]) >>> s.rolling(3).rank() 0 NaN 1 NaN 2 2.0 3 2.0 4 3.0 5 1.5 dtype: float64 >>> s.rolling(3).rank(method="max") 0 NaN 1 NaN 2 2.0 3 2.0 4 3.0 5 2.0 dtype: float64 >>> s.rolling(3).rank(method="min") 0 NaN 1 NaN 2 2.0 3 2.0 4 3.0 5 1.0 dtype: float64
pandas.core.window.rolling.Rolling.sem Rolling.sem(ddof=1, *args, **kwargs)[source] Calculate the rolling standard error of mean. Parameters ddof:int, default 1 Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements. *args For NumPy compatibility and will not have an effect on the result. **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.rolling Calling rolling with Series data. pandas.DataFrame.rolling Calling rolling with DataFrames. pandas.Series.sem Aggregating sem for Series. pandas.DataFrame.sem Aggregating sem for DataFrame. Notes A minimum of one period is required for the calculation. Examples >>> s = pd.Series([0, 1, 2, 3]) >>> s.rolling(2, min_periods=1).sem() 0 NaN 1 0.707107 2 0.707107 3 0.707107 dtype: float64
pandas.core.window.rolling.Rolling.skew Rolling.skew(**kwargs)[source] Calculate the rolling unbiased skewness. Parameters **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also scipy.stats.skew Third moment of a probability density. pandas.Series.rolling Calling rolling with Series data. pandas.DataFrame.rolling Calling rolling with DataFrames. pandas.Series.skew Aggregating skew for Series. pandas.DataFrame.skew Aggregating skew for DataFrame. Notes A minimum of three periods is required for the rolling calculation.
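This entry has no Examples section; a minimal sketch (values chosen so each full window is symmetric, which makes the expected skewness exactly zero):

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4])

# A minimum of three periods is required, so the first two entries
# are NaN; the symmetric windows [1,2,3] and [2,3,4] have zero skew.
print(s.rolling(3).skew())
# 0    NaN
# 1    NaN
# 2    0.0
# 3    0.0
# dtype: float64
```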
pandas.core.window.rolling.Rolling.std Rolling.std(ddof=1, *args, engine=None, engine_kwargs=None, **kwargs)[source] Calculate the rolling standard deviation. Parameters ddof:int, default 1 Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements. *args For NumPy compatibility and will not have an effect on the result. engine:str, default None 'cython' : Runs the operation through C-extensions from cython. 'numba' : Runs the operation through JIT compiled code from numba. None : Defaults to 'cython' or globally setting compute.use_numba New in version 1.4.0. engine_kwargs:dict, default None For 'cython' engine, there are no accepted engine_kwargs For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} New in version 1.4.0. **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also numpy.std Equivalent method for NumPy array. pandas.Series.rolling Calling rolling with Series data. pandas.DataFrame.rolling Calling rolling with DataFrames. pandas.Series.std Aggregating std for Series. pandas.DataFrame.std Aggregating std for DataFrame. Notes The default ddof of 1 used in Series.std() is different than the default ddof of 0 in numpy.std(). A minimum of one period is required for the rolling calculation. The implementation is susceptible to floating point imprecision as shown in the example below. Examples >>> s = pd.Series([5, 5, 6, 7, 5, 5, 5]) >>> s.rolling(3).std() 0 NaN 1 NaN 2 5.773503e-01 3 1.000000e+00 4 1.000000e+00 5 1.154701e+00 6 2.580957e-08 dtype: float64
pandas.core.window.rolling.Rolling.sum Rolling.sum(*args, engine=None, engine_kwargs=None, **kwargs)[source] Calculate the rolling sum. Parameters *args For NumPy compatibility and will not have an effect on the result. engine:str, default None 'cython' : Runs the operation through C-extensions from cython. 'numba' : Runs the operation through JIT compiled code from numba. None : Defaults to 'cython' or globally setting compute.use_numba New in version 1.3.0. engine_kwargs:dict, default None For 'cython' engine, there are no accepted engine_kwargs For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} New in version 1.3.0. **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.rolling Calling rolling with Series data. pandas.DataFrame.rolling Calling rolling with DataFrames. pandas.Series.sum Aggregating sum for Series. pandas.DataFrame.sum Aggregating sum for DataFrame. Notes See Numba engine and Numba (JIT compilation) for extended documentation and performance considerations for the Numba engine. Examples >>> s = pd.Series([1, 2, 3, 4, 5]) >>> s 0 1 1 2 2 3 3 4 4 5 dtype: int64 >>> s.rolling(3).sum() 0 NaN 1 NaN 2 6.0 3 9.0 4 12.0 dtype: float64 >>> s.rolling(3, center=True).sum() 0 NaN 1 6.0 2 9.0 3 12.0 4 NaN dtype: float64 For DataFrame, each sum is computed column-wise. >>> df = pd.DataFrame({"A": s, "B": s ** 2}) >>> df A B 0 1 1 1 2 4 2 3 9 3 4 16 4 5 25 >>> df.rolling(3).sum() A B 0 NaN NaN 1 NaN NaN 2 6.0 14.0 3 9.0 29.0 4 12.0 50.0
pandas.core.window.rolling.Rolling.var Rolling.var(ddof=1, *args, engine=None, engine_kwargs=None, **kwargs)[source] Calculate the rolling variance. Parameters ddof:int, default 1 Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements. *args For NumPy compatibility and will not have an effect on the result. engine:str, default None 'cython' : Runs the operation through C-extensions from cython. 'numba' : Runs the operation through JIT compiled code from numba. None : Defaults to 'cython' or globally setting compute.use_numba New in version 1.4.0. engine_kwargs:dict, default None For 'cython' engine, there are no accepted engine_kwargs For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} New in version 1.4.0. **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also numpy.var Equivalent method for NumPy array. pandas.Series.rolling Calling rolling with Series data. pandas.DataFrame.rolling Calling rolling with DataFrames. pandas.Series.var Aggregating var for Series. pandas.DataFrame.var Aggregating var for DataFrame. Notes The default ddof of 1 used in Series.var() is different than the default ddof of 0 in numpy.var(). A minimum of one period is required for the rolling calculation. The implementation is susceptible to floating point imprecision as shown in the example below. Examples >>> s = pd.Series([5, 5, 6, 7, 5, 5, 5]) >>> s.rolling(3).var() 0 NaN 1 NaN 2 3.333333e-01 3 1.000000e+00 4 1.000000e+00 5 1.333333e+00 6 6.661338e-16 dtype: float64
pandas.core.window.rolling.Window.mean Window.mean(*args, **kwargs)[source] Calculate the rolling weighted window mean. Parameters **kwargs Keyword arguments to configure the SciPy weighted window type. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.rolling Calling rolling with Series data. pandas.DataFrame.rolling Calling rolling with DataFrames. pandas.Series.mean Aggregating mean for Series. pandas.DataFrame.mean Aggregating mean for DataFrame.
pandas.core.window.rolling.Window.std Window.std(ddof=1, *args, **kwargs)[source] Calculate the rolling weighted window standard deviation. New in version 1.0.0. Parameters **kwargs Keyword arguments to configure the SciPy weighted window type. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.rolling Calling rolling with Series data. pandas.DataFrame.rolling Calling rolling with DataFrames. pandas.Series.std Aggregating std for Series. pandas.DataFrame.std Aggregating std for DataFrame.
pandas.core.window.rolling.Window.sum Window.sum(*args, **kwargs)[source] Calculate the rolling weighted window sum. Parameters **kwargs Keyword arguments to configure the SciPy weighted window type. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.rolling Calling rolling with Series data. pandas.DataFrame.rolling Calling rolling with DataFrames. pandas.Series.sum Aggregating sum for Series. pandas.DataFrame.sum Aggregating sum for DataFrame.
pandas.core.window.rolling.Window.var Window.var(ddof=1, *args, **kwargs)[source] Calculate the rolling weighted window variance. New in version 1.0.0. Parameters **kwargs Keyword arguments to configure the SciPy weighted window type. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also pandas.Series.rolling Calling rolling with Series data. pandas.DataFrame.rolling Calling rolling with DataFrames. pandas.Series.var Aggregating var for Series. pandas.DataFrame.var Aggregating var for DataFrame.
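None of the four weighted Window methods above has an Examples section. A minimal sketch, assuming SciPy is installed (a `win_type` argument to `rolling` requires it): a length-2 triangular window has equal normalized weights, so its weighted mean coincides with the plain rolling mean, which makes the mechanics easy to check.

```python
import pandas as pd

s = pd.Series([0.0, 1.0, 2.0, 3.0])

# The window weights come from SciPy's 'triang' window and are
# normalized before averaging.
weighted = s.rolling(2, win_type="triang").mean()
plain = s.rolling(2).mean()
print(weighted.equals(plain))  # equal weights, so the two agree

# Parametric windows take their shape parameters as keyword arguments
# to the aggregation call, e.g.:
# s.rolling(3, win_type="gaussian").mean(std=1)
```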
pandas.crosstab pandas.crosstab(index, columns, values=None, rownames=None, colnames=None, aggfunc=None, margins=False, margins_name='All', dropna=True, normalize=False)[source] Compute a simple cross tabulation of two (or more) factors. By default computes a frequency table of the factors unless an array of values and an aggregation function are passed. Parameters index:array-like, Series, or list of arrays/Series Values to group by in the rows. columns:array-like, Series, or list of arrays/Series Values to group by in the columns. values:array-like, optional Array of values to aggregate according to the factors. Requires aggfunc be specified. rownames:sequence, default None If passed, must match number of row arrays passed. colnames:sequence, default None If passed, must match number of column arrays passed. aggfunc:function, optional If specified, requires values be specified as well. margins:bool, default False Add row/column margins (subtotals). margins_name:str, default ‘All’ Name of the row/column that will contain the totals when margins is True. dropna:bool, default True Do not include columns whose entries are all NaN. normalize:bool, {‘all’, ‘index’, ‘columns’}, or {0,1}, default False Normalize by dividing all values by the sum of values. If passed ‘all’ or True, will normalize over all values. If passed ‘index’ will normalize over each row. If passed ‘columns’ will normalize over each column. If margins is True, will also normalize margin values. Returns DataFrame Cross tabulation of the data. See also DataFrame.pivot Reshape data based on column values. pivot_table Create a pivot table as a DataFrame. Notes Any Series passed will have their name attributes used unless row or column names for the cross-tabulation are specified. Any input passed containing Categorical data will have all of its categories included in the cross-tabulation, even if the actual data does not contain any instances of a particular category. 
In the event that there aren’t overlapping indexes an empty DataFrame will be returned. Examples >>> a = np.array(["foo", "foo", "foo", "foo", "bar", "bar", ... "bar", "bar", "foo", "foo", "foo"], dtype=object) >>> b = np.array(["one", "one", "one", "two", "one", "one", ... "one", "two", "two", "two", "one"], dtype=object) >>> c = np.array(["dull", "dull", "shiny", "dull", "dull", "shiny", ... "shiny", "dull", "shiny", "shiny", "shiny"], ... dtype=object) >>> pd.crosstab(a, [b, c], rownames=['a'], colnames=['b', 'c']) b one two c dull shiny dull shiny a bar 1 2 1 0 foo 2 2 1 2 Here ‘c’ and ‘f’ are not represented in the data and will not be shown in the output because dropna is True by default. Set dropna=False to preserve categories with no data. >>> foo = pd.Categorical(['a', 'b'], categories=['a', 'b', 'c']) >>> bar = pd.Categorical(['d', 'e'], categories=['d', 'e', 'f']) >>> pd.crosstab(foo, bar) col_0 d e row_0 a 1 0 b 0 1 >>> pd.crosstab(foo, bar, dropna=False) col_0 d e f row_0 a 1 0 0 b 0 1 0 c 0 0 0
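The examples above only show frequency tables; the `values`/`aggfunc` pair described in the parameters is not demonstrated. A minimal sketch with hypothetical data (`region`, `product`, `amount` are made-up illustration names):

```python
import numpy as np
import pandas as pd

region = np.array(["east", "east", "west", "west"])
product = np.array(["A", "B", "A", "A"])
amount = np.array([10, 20, 30, 40])

# Aggregate `amount` within each (region, product) cell instead of
# counting occurrences; cells with no observations become NaN.
table = pd.crosstab(region, product, values=amount, aggfunc="sum")
print(table)
# east row: A=10, B=20; west row: A=70, B=NaN
```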
pandas.cut pandas.cut(x, bins, right=True, labels=None, retbins=False, precision=3, include_lowest=False, duplicates='raise', ordered=True)[source] Bin values into discrete intervals. Use cut when you need to segment and sort data values into bins. This function is also useful for going from a continuous variable to a categorical variable. For example, cut could convert ages to groups of age ranges. Supports binning into an equal number of bins, or a pre-specified array of bins. Parameters x:array-like The input array to be binned. Must be 1-dimensional. bins:int, sequence of scalars, or IntervalIndex The criteria to bin by. int : Defines the number of equal-width bins in the range of x. The range of x is extended by .1% on each side to include the minimum and maximum values of x. sequence of scalars : Defines the bin edges allowing for non-uniform width. No extension of the range of x is done. IntervalIndex : Defines the exact bins to be used. Note that IntervalIndex for bins must be non-overlapping. right:bool, default True Indicates whether bins includes the rightmost edge or not. If right == True (the default), then the bins [1, 2, 3, 4] indicate (1,2], (2,3], (3,4]. This argument is ignored when bins is an IntervalIndex. labels:array or False, default None Specifies the labels for the returned bins. Must be the same length as the resulting bins. If False, returns only integer indicators of the bins. This affects the type of the output container (see below). This argument is ignored when bins is an IntervalIndex. If True, raises an error. When ordered=False, labels must be provided. retbins:bool, default False Whether to return the bins or not. Useful when bins is provided as a scalar. precision:int, default 3 The precision at which to store and display the bins labels. include_lowest:bool, default False Whether the first interval should be left-inclusive or not. 
duplicates:{default ‘raise’, ‘drop’}, optional If bin edges are not unique, raise ValueError or drop non-uniques. ordered:bool, default True Whether the labels are ordered or not. Applies to returned types Categorical and Series (with Categorical dtype). If True, the resulting categorical will be ordered. If False, the resulting categorical will be unordered (labels must be provided). New in version 1.1.0. Returns out:Categorical, Series, or ndarray An array-like object representing the respective bin for each value of x. The type depends on the value of labels. None (default) : returns a Series for Series x or a Categorical for all other inputs. The values stored within are Interval dtype. sequence of scalars : returns a Series for Series x or a Categorical for all other inputs. The values stored within are whatever the type in the sequence is. False : returns an ndarray of integers. bins:numpy.ndarray or IntervalIndex. The computed or specified bins. Only returned when retbins=True. For scalar or sequence bins, this is an ndarray with the computed bins. If duplicates='drop' is set, non-unique bin edges are dropped. For an IntervalIndex bins, this is equal to bins. See also qcut Discretize variable into equal-sized buckets based on rank or based on sample quantiles. Categorical Array type for storing data that come from a fixed set of values. Series One-dimensional array with axis labels (including time series). IntervalIndex Immutable Index implementing an ordered, sliceable set. Notes Any NA values will be NA in the result. Out of bounds values will be NA in the resulting Series or Categorical object. Examples Discretize into three equal-sized bins. >>> pd.cut(np.array([1, 7, 5, 4, 6, 3]), 3) ... [(0.994, 3.0], (5.0, 7.0], (3.0, 5.0], (3.0, 5.0], (5.0, 7.0], ... Categories (3, interval[float64, right]): [(0.994, 3.0] < (3.0, 5.0] ... >>> pd.cut(np.array([1, 7, 5, 4, 6, 3]), 3, retbins=True) ... ([(0.994, 3.0], (5.0, 7.0], (3.0, 5.0], (3.0, 5.0], (5.0, 7.0], ... 
Categories (3, interval[float64, right]): [(0.994, 3.0] < (3.0, 5.0] ... array([0.994, 3. , 5. , 7. ])) Discover the same bins, but assign them specific labels. Notice that the returned Categorical’s categories are labels and are ordered. >>> pd.cut(np.array([1, 7, 5, 4, 6, 3]), ... 3, labels=["bad", "medium", "good"]) ['bad', 'good', 'medium', 'medium', 'good', 'bad'] Categories (3, object): ['bad' < 'medium' < 'good'] ordered=False will result in unordered categories when labels are passed. This parameter can be used to allow non-unique labels: >>> pd.cut(np.array([1, 7, 5, 4, 6, 3]), 3, ... labels=["B", "A", "B"], ordered=False) ['B', 'B', 'A', 'A', 'B', 'B'] Categories (2, object): ['A', 'B'] labels=False implies you just want the bins back. >>> pd.cut([0, 1, 1, 2], bins=4, labels=False) array([0, 1, 1, 3]) Passing a Series as an input returns a Series with categorical dtype: >>> s = pd.Series(np.array([2, 4, 6, 8, 10]), ... index=['a', 'b', 'c', 'd', 'e']) >>> pd.cut(s, 3) ... a (1.992, 4.667] b (1.992, 4.667] c (4.667, 7.333] d (7.333, 10.0] e (7.333, 10.0] dtype: category Categories (3, interval[float64, right]): [(1.992, 4.667] < (4.667, ... Passing a Series as input with labels=False returns a Series with the mapped values; this is useful for mapping values numerically to intervals based on bins. >>> s = pd.Series(np.array([2, 4, 6, 8, 10]), ... index=['a', 'b', 'c', 'd', 'e']) >>> pd.cut(s, [0, 2, 4, 6, 8, 10], labels=False, retbins=True, right=False) ... (a 1.0 b 2.0 c 3.0 d 4.0 e NaN dtype: float64, array([ 0, 2, 4, 6, 8, 10])) Use the duplicates='drop' option when bins are not unique: >>> pd.cut(s, [0, 2, 4, 6, 10, 10], labels=False, retbins=True, ... right=False, duplicates='drop') ... (a 1.0 b 2.0 c 3.0 d 3.0 e NaN dtype: float64, array([ 0, 2, 4, 6, 10])) Passing an IntervalIndex for bins results in those categories exactly. Notice that values not covered by the IntervalIndex are set to NaN. 0 is to the left of the first bin (which is closed on the right), and 1.5 falls between two bins. 
>>> bins = pd.IntervalIndex.from_tuples([(0, 1), (2, 3), (4, 5)]) >>> pd.cut([0, 0.5, 1.5, 2.5, 4.5], bins) [NaN, (0.0, 1.0], NaN, (2.0, 3.0], (4.0, 5.0]] Categories (3, interval[int64, right]): [(0, 1] < (2, 3] < (4, 5]]
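The include_lowest parameter described above matters when a data value sits exactly on the first bin edge. A minimal sketch (the ages and edges are made up for illustration):

```python
import pandas as pd

ages = [1, 18, 35, 64]
edges = [1, 17, 64, 99]

# With right=True (the default) the bins are (1, 17], (17, 64], (64, 99],
# so the value 1 falls outside the first interval and becomes NaN.
default = pd.cut(ages, edges)

# include_lowest=True makes the first interval left-inclusive,
# so the value 1 is binned instead of dropped.
inclusive = pd.cut(ages, edges, include_lowest=True)
print(default)
print(inclusive)
```

Note that the first interval's displayed left edge is nudged slightly below 1 so it can remain printed in right-closed notation.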
pandas.DataFrame classpandas.DataFrame(data=None, index=None, columns=None, dtype=None, copy=None)[source] Two-dimensional, size-mutable, potentially heterogeneous tabular data. Data structure also contains labeled axes (rows and columns). Arithmetic operations align on both row and column labels. Can be thought of as a dict-like container for Series objects. The primary pandas data structure. Parameters data:ndarray (structured or homogeneous), Iterable, dict, or DataFrame Dict can contain Series, arrays, constants, dataclass or list-like objects. If data is a dict, column order follows insertion-order. If a dict contains Series which have an index defined, it is aligned by its index. Changed in version 0.25.0: If data is a list of dicts, column order follows insertion-order. index:Index or array-like Index to use for resulting frame. Will default to RangeIndex if no indexing information part of input data and no index provided. columns:Index or array-like Column labels to use for resulting frame when data does not have them, defaulting to RangeIndex(0, 1, 2, …, n). If data contains column labels, will perform column selection instead. dtype:dtype, default None Data type to force. Only a single dtype is allowed. If None, infer. copy:bool or None, default None Copy data from inputs. For dict data, the default of None behaves like copy=True. For DataFrame or 2d ndarray input, the default of None behaves like copy=False. Changed in version 1.3.0. See also DataFrame.from_records Constructor from tuples, also record arrays. DataFrame.from_dict From dicts of Series, arrays, or dicts. read_csv Read a comma-separated values (csv) file into DataFrame. read_table Read general delimited file into DataFrame. read_clipboard Read text from clipboard into DataFrame. Examples Constructing DataFrame from a dictionary. >>> d = {'col1': [1, 2], 'col2': [3, 4]} >>> df = pd.DataFrame(data=d) >>> df col1 col2 0 1 3 1 2 4 Notice that the inferred dtype is int64. 
>>> df.dtypes col1 int64 col2 int64 dtype: object To enforce a single dtype: >>> df = pd.DataFrame(data=d, dtype=np.int8) >>> df.dtypes col1 int8 col2 int8 dtype: object Constructing DataFrame from a dictionary including Series: >>> d = {'col1': [0, 1, 2, 3], 'col2': pd.Series([2, 3], index=[2, 3])} >>> pd.DataFrame(data=d, index=[0, 1, 2, 3]) col1 col2 0 0 NaN 1 1 NaN 2 2 2.0 3 3 3.0 Constructing DataFrame from numpy ndarray: >>> df2 = pd.DataFrame(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), ... columns=['a', 'b', 'c']) >>> df2 a b c 0 1 2 3 1 4 5 6 2 7 8 9 Constructing DataFrame from a numpy ndarray that has labeled columns: >>> data = np.array([(1, 2, 3), (4, 5, 6), (7, 8, 9)], ... dtype=[("a", "i4"), ("b", "i4"), ("c", "i4")]) >>> df3 = pd.DataFrame(data, columns=['c', 'a']) ... >>> df3 c a 0 3 1 1 6 4 2 9 7 Constructing DataFrame from dataclass: >>> from dataclasses import make_dataclass >>> Point = make_dataclass("Point", [("x", int), ("y", int)]) >>> pd.DataFrame([Point(0, 0), Point(0, 3), Point(2, 3)]) x y 0 0 0 1 0 3 2 2 3 Attributes at Access a single value for a row/column label pair. attrs Dictionary of global attributes of this dataset. axes Return a list representing the axes of the DataFrame. columns The column labels of the DataFrame. dtypes Return the dtypes in the DataFrame. empty Indicator whether Series/DataFrame is empty. flags Get the properties associated with this pandas object. iat Access a single value for a row/column pair by integer position. iloc Purely integer-location based indexing for selection by position. index The index (row labels) of the DataFrame. loc Access a group of rows and columns by label(s) or a boolean array. ndim Return an int representing the number of axes / array dimensions. shape Return a tuple representing the dimensionality of the DataFrame. size Return an int representing the number of elements in this object. style Returns a Styler object. values Return a Numpy representation of the DataFrame. 
T Methods abs() Return a Series/DataFrame with absolute numeric value of each element. add(other[, axis, level, fill_value]) Get Addition of dataframe and other, element-wise (binary operator add). add_prefix(prefix) Prefix labels with string prefix. add_suffix(suffix) Suffix labels with string suffix. agg([func, axis]) Aggregate using one or more operations over the specified axis. aggregate([func, axis]) Aggregate using one or more operations over the specified axis. align(other[, join, axis, level, copy, ...]) Align two objects on their axes with the specified join method. all([axis, bool_only, skipna, level]) Return whether all elements are True, potentially over an axis. any([axis, bool_only, skipna, level]) Return whether any element is True, potentially over an axis. append(other[, ignore_index, ...]) Append rows of other to the end of caller, returning a new object. apply(func[, axis, raw, result_type, args]) Apply a function along an axis of the DataFrame. applymap(func[, na_action]) Apply a function to a Dataframe elementwise. asfreq(freq[, method, how, normalize, ...]) Convert time series to specified frequency. asof(where[, subset]) Return the last row(s) without any NaNs before where. assign(**kwargs) Assign new columns to a DataFrame. astype(dtype[, copy, errors]) Cast a pandas object to a specified dtype dtype. at_time(time[, asof, axis]) Select values at particular time of day (e.g., 9:30AM). backfill([axis, inplace, limit, downcast]) Synonym for DataFrame.fillna() with method='bfill'. between_time(start_time, end_time[, ...]) Select values between particular times of the day (e.g., 9:00-9:30 AM). bfill([axis, inplace, limit, downcast]) Synonym for DataFrame.fillna() with method='bfill'. bool() Return the bool of a single element Series or DataFrame. boxplot([column, by, ax, fontsize, rot, ...]) Make a box plot from DataFrame columns. clip([lower, upper, axis, inplace]) Trim values at input threshold(s). 
combine(other, func[, fill_value, overwrite]) Perform column-wise combine with another DataFrame. combine_first(other) Update null elements with value in the same location in other. compare(other[, align_axis, keep_shape, ...]) Compare to another DataFrame and show the differences. convert_dtypes([infer_objects, ...]) Convert columns to best possible dtypes using dtypes supporting pd.NA. copy([deep]) Make a copy of this object's indices and data. corr([method, min_periods]) Compute pairwise correlation of columns, excluding NA/null values. corrwith(other[, axis, drop, method]) Compute pairwise correlation. count([axis, level, numeric_only]) Count non-NA cells for each column or row. cov([min_periods, ddof]) Compute pairwise covariance of columns, excluding NA/null values. cummax([axis, skipna]) Return cumulative maximum over a DataFrame or Series axis. cummin([axis, skipna]) Return cumulative minimum over a DataFrame or Series axis. cumprod([axis, skipna]) Return cumulative product over a DataFrame or Series axis. cumsum([axis, skipna]) Return cumulative sum over a DataFrame or Series axis. describe([percentiles, include, exclude, ...]) Generate descriptive statistics. diff([periods, axis]) First discrete difference of element. div(other[, axis, level, fill_value]) Get Floating division of dataframe and other, element-wise (binary operator truediv). divide(other[, axis, level, fill_value]) Get Floating division of dataframe and other, element-wise (binary operator truediv). dot(other) Compute the matrix multiplication between the DataFrame and other. drop([labels, axis, index, columns, level, ...]) Drop specified labels from rows or columns. drop_duplicates([subset, keep, inplace, ...]) Return DataFrame with duplicate rows removed. droplevel(level[, axis]) Return Series/DataFrame with requested index / column level(s) removed. dropna([axis, how, thresh, subset, inplace]) Remove missing values. 
duplicated([subset, keep]) Return boolean Series denoting duplicate rows. eq(other[, axis, level]) Get Equal to of dataframe and other, element-wise (binary operator eq). equals(other) Test whether two objects contain the same elements. eval(expr[, inplace]) Evaluate a string describing operations on DataFrame columns. ewm([com, span, halflife, alpha, ...]) Provide exponentially weighted (EW) calculations. expanding([min_periods, center, axis, method]) Provide expanding window calculations. explode(column[, ignore_index]) Transform each element of a list-like to a row, replicating index values. ffill([axis, inplace, limit, downcast]) Synonym for DataFrame.fillna() with method='ffill'. fillna([value, method, axis, inplace, ...]) Fill NA/NaN values using the specified method. filter([items, like, regex, axis]) Subset the dataframe rows or columns according to the specified index labels. first(offset) Select initial periods of time series data based on a date offset. first_valid_index() Return index for first non-NA value or None, if no NA value is found. floordiv(other[, axis, level, fill_value]) Get Integer division of dataframe and other, element-wise (binary operator floordiv). from_dict(data[, orient, dtype, columns]) Construct DataFrame from dict of array-like or dicts. from_records(data[, index, exclude, ...]) Convert structured or record ndarray to DataFrame. ge(other[, axis, level]) Get Greater than or equal to of dataframe and other, element-wise (binary operator ge). get(key[, default]) Get item from object for given key (ex: DataFrame column). groupby([by, axis, level, as_index, sort, ...]) Group DataFrame using a mapper or by a Series of columns. gt(other[, axis, level]) Get Greater than of dataframe and other, element-wise (binary operator gt). head([n]) Return the first n rows. hist([column, by, grid, xlabelsize, xrot, ...]) Make a histogram of the DataFrame's columns. 
idxmax([axis, skipna]) Return index of first occurrence of maximum over requested axis. idxmin([axis, skipna]) Return index of first occurrence of minimum over requested axis. infer_objects() Attempt to infer better dtypes for object columns. info([verbose, buf, max_cols, memory_usage, ...]) Print a concise summary of a DataFrame. insert(loc, column, value[, allow_duplicates]) Insert column into DataFrame at specified location. interpolate([method, axis, limit, inplace, ...]) Fill NaN values using an interpolation method. isin(values) Whether each element in the DataFrame is contained in values. isna() Detect missing values. isnull() DataFrame.isnull is an alias for DataFrame.isna. items() Iterate over (column name, Series) pairs. iteritems() Iterate over (column name, Series) pairs. iterrows() Iterate over DataFrame rows as (index, Series) pairs. itertuples([index, name]) Iterate over DataFrame rows as namedtuples. join(other[, on, how, lsuffix, rsuffix, sort]) Join columns of another DataFrame. keys() Get the 'info axis' (see Indexing for more). kurt([axis, skipna, level, numeric_only]) Return unbiased kurtosis over requested axis. kurtosis([axis, skipna, level, numeric_only]) Return unbiased kurtosis over requested axis. last(offset) Select final periods of time series data based on a date offset. last_valid_index() Return index for last non-NA value or None, if no NA value is found. le(other[, axis, level]) Get Less than or equal to of dataframe and other, element-wise (binary operator le). lookup(row_labels, col_labels) (DEPRECATED) Label-based "fancy indexing" function for DataFrame. lt(other[, axis, level]) Get Less than of dataframe and other, element-wise (binary operator lt). mad([axis, skipna, level]) Return the mean absolute deviation of the values over the requested axis. mask(cond[, other, inplace, axis, level, ...]) Replace values where the condition is True. 
max([axis, skipna, level, numeric_only]) Return the maximum of the values over the requested axis. mean([axis, skipna, level, numeric_only]) Return the mean of the values over the requested axis. median([axis, skipna, level, numeric_only]) Return the median of the values over the requested axis. melt([id_vars, value_vars, var_name, ...]) Unpivot a DataFrame from wide to long format, optionally leaving identifiers set. memory_usage([index, deep]) Return the memory usage of each column in bytes. merge(right[, how, on, left_on, right_on, ...]) Merge DataFrame or named Series objects with a database-style join. min([axis, skipna, level, numeric_only]) Return the minimum of the values over the requested axis. mod(other[, axis, level, fill_value]) Get Modulo of dataframe and other, element-wise (binary operator mod). mode([axis, numeric_only, dropna]) Get the mode(s) of each element along the selected axis. mul(other[, axis, level, fill_value]) Get Multiplication of dataframe and other, element-wise (binary operator mul). multiply(other[, axis, level, fill_value]) Get Multiplication of dataframe and other, element-wise (binary operator mul). ne(other[, axis, level]) Get Not equal to of dataframe and other, element-wise (binary operator ne). nlargest(n, columns[, keep]) Return the first n rows ordered by columns in descending order. notna() Detect existing (non-missing) values. notnull() DataFrame.notnull is an alias for DataFrame.notna. nsmallest(n, columns[, keep]) Return the first n rows ordered by columns in ascending order. nunique([axis, dropna]) Count number of distinct elements in specified axis. pad([axis, inplace, limit, downcast]) Synonym for DataFrame.fillna() with method='ffill'. pct_change([periods, fill_method, limit, freq]) Percentage change between the current and a prior element. pipe(func, *args, **kwargs) Apply chainable functions that expect Series or DataFrames. 
pivot([index, columns, values]) Return reshaped DataFrame organized by given index / column values. pivot_table([values, index, columns, ...]) Create a spreadsheet-style pivot table as a DataFrame. plot alias of pandas.plotting._core.PlotAccessor pop(item) Return item and drop from frame. pow(other[, axis, level, fill_value]) Get Exponential power of dataframe and other, element-wise (binary operator pow). prod([axis, skipna, level, numeric_only, ...]) Return the product of the values over the requested axis. product([axis, skipna, level, numeric_only, ...]) Return the product of the values over the requested axis. quantile([q, axis, numeric_only, interpolation]) Return values at the given quantile over requested axis. query(expr[, inplace]) Query the columns of a DataFrame with a boolean expression. radd(other[, axis, level, fill_value]) Get Addition of dataframe and other, element-wise (binary operator radd). rank([axis, method, numeric_only, ...]) Compute numerical data ranks (1 through n) along axis. rdiv(other[, axis, level, fill_value]) Get Floating division of dataframe and other, element-wise (binary operator rtruediv). reindex([labels, index, columns, axis, ...]) Conform Series/DataFrame to new index with optional filling logic. reindex_like(other[, method, copy, limit, ...]) Return an object with matching indices as other object. rename([mapper, index, columns, axis, copy, ...]) Alter axes labels. rename_axis([mapper, index, columns, axis, ...]) Set the name of the axis for the index or columns. reorder_levels(order[, axis]) Rearrange index levels using input order. replace([to_replace, value, inplace, limit, ...]) Replace values given in to_replace with value. resample(rule[, axis, closed, label, ...]) Resample time-series data. reset_index([level, drop, inplace, ...]) Reset the index, or a level of it. rfloordiv(other[, axis, level, fill_value]) Get Integer division of dataframe and other, element-wise (binary operator rfloordiv). 
rmod(other[, axis, level, fill_value]) Get Modulo of dataframe and other, element-wise (binary operator rmod). rmul(other[, axis, level, fill_value]) Get Multiplication of dataframe and other, element-wise (binary operator rmul). rolling(window[, min_periods, center, ...]) Provide rolling window calculations. round([decimals]) Round a DataFrame to a variable number of decimal places. rpow(other[, axis, level, fill_value]) Get Exponential power of dataframe and other, element-wise (binary operator rpow). rsub(other[, axis, level, fill_value]) Get Subtraction of dataframe and other, element-wise (binary operator rsub). rtruediv(other[, axis, level, fill_value]) Get Floating division of dataframe and other, element-wise (binary operator rtruediv). sample([n, frac, replace, weights, ...]) Return a random sample of items from an axis of object. select_dtypes([include, exclude]) Return a subset of the DataFrame's columns based on the column dtypes. sem([axis, skipna, level, ddof, numeric_only]) Return unbiased standard error of the mean over requested axis. set_axis(labels[, axis, inplace]) Assign desired index to given axis. set_flags(*[, copy, allows_duplicate_labels]) Return a new object with updated flags. set_index(keys[, drop, append, inplace, ...]) Set the DataFrame index using existing columns. shift([periods, freq, axis, fill_value]) Shift index by desired number of periods with an optional time freq. skew([axis, skipna, level, numeric_only]) Return unbiased skew over requested axis. slice_shift([periods, axis]) (DEPRECATED) Equivalent to shift without copying data. sort_index([axis, level, ascending, ...]) Sort object by labels (along an axis). sort_values(by[, axis, ascending, inplace, ...]) Sort by the values along either axis. sparse alias of pandas.core.arrays.sparse.accessor.SparseFrameAccessor squeeze([axis]) Squeeze 1 dimensional axis objects into scalars. stack([level, dropna]) Stack the prescribed level(s) from columns to index. 
std([axis, skipna, level, ddof, numeric_only]) Return sample standard deviation over requested axis. sub(other[, axis, level, fill_value]) Get Subtraction of dataframe and other, element-wise (binary operator sub). subtract(other[, axis, level, fill_value]) Get Subtraction of dataframe and other, element-wise (binary operator sub). sum([axis, skipna, level, numeric_only, ...]) Return the sum of the values over the requested axis. swapaxes(axis1, axis2[, copy]) Interchange axes and swap values axes appropriately. swaplevel([i, j, axis]) Swap levels i and j in a MultiIndex. tail([n]) Return the last n rows. take(indices[, axis, is_copy]) Return the elements in the given positional indices along an axis. to_clipboard([excel, sep]) Copy object to the system clipboard. to_csv([path_or_buf, sep, na_rep, ...]) Write object to a comma-separated values (csv) file. to_dict([orient, into]) Convert the DataFrame to a dictionary. to_excel(excel_writer[, sheet_name, na_rep, ...]) Write object to an Excel sheet. to_feather(path, **kwargs) Write a DataFrame to the binary Feather format. to_gbq(destination_table[, project_id, ...]) Write a DataFrame to a Google BigQuery table. to_hdf(path_or_buf, key[, mode, complevel, ...]) Write the contained data to an HDF5 file using HDFStore. to_html([buf, columns, col_space, header, ...]) Render a DataFrame as an HTML table. to_json([path_or_buf, orient, date_format, ...]) Convert the object to a JSON string. to_latex([buf, columns, col_space, header, ...]) Render object to a LaTeX tabular, longtable, or nested table. to_markdown([buf, mode, index, storage_options]) Print DataFrame in Markdown-friendly format. to_numpy([dtype, copy, na_value]) Convert the DataFrame to a NumPy array. to_parquet([path, engine, compression, ...]) Write a DataFrame to the binary parquet format. to_period([freq, axis, copy]) Convert DataFrame from DatetimeIndex to PeriodIndex. to_pickle(path[, compression, protocol, ...]) Pickle (serialize) object to file. 
to_records([index, column_dtypes, index_dtypes]) Convert DataFrame to a NumPy record array. to_sql(name, con[, schema, if_exists, ...]) Write records stored in a DataFrame to a SQL database. to_stata(path[, convert_dates, write_index, ...]) Export DataFrame object to Stata dta format. to_string([buf, columns, col_space, header, ...]) Render a DataFrame to a console-friendly tabular output. to_timestamp([freq, how, axis, copy]) Cast to DatetimeIndex of timestamps, at beginning of period. to_xarray() Return an xarray object from the pandas object. to_xml([path_or_buffer, index, root_name, ...]) Render a DataFrame to an XML document. transform(func[, axis]) Call func on self producing a DataFrame with the same axis shape as self. transpose(*args[, copy]) Transpose index and columns. truediv(other[, axis, level, fill_value]) Get Floating division of dataframe and other, element-wise (binary operator truediv). truncate([before, after, axis, copy]) Truncate a Series or DataFrame before and after some index value. tshift([periods, freq, axis]) (DEPRECATED) Shift the time index, using the index's frequency if available. tz_convert(tz[, axis, level, copy]) Convert tz-aware axis to target time zone. tz_localize(tz[, axis, level, copy, ...]) Localize tz-naive index of a Series or DataFrame to target time zone. unstack([level, fill_value]) Pivot a level of the (necessarily hierarchical) index labels. update(other[, join, overwrite, ...]) Modify in place using non-NA values from another DataFrame. value_counts([subset, normalize, sort, ...]) Return a Series containing counts of unique rows in the DataFrame. var([axis, skipna, level, ddof, numeric_only]) Return unbiased variance over requested axis. where(cond[, other, inplace, axis, level, ...]) Replace values where the condition is False. xs(key[, axis, level, drop_level]) Return cross-section from the Series/DataFrame.
pandas.DataFrame.__iter__ DataFrame.__iter__()[source] Iterate over info axis. Returns iterator Info axis as iterator.
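For a DataFrame, the info axis is the columns, so plain iteration yields column labels rather than rows. A minimal sketch:

```python
import pandas as pd

df = pd.DataFrame({'x': [1, 2], 'y': [3, 4]})

# Iterating a DataFrame walks its info axis (the column labels);
# use iterrows() or itertuples() for row-wise iteration instead.
cols = list(df)
print(cols)  # ['x', 'y']
```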
pandas.DataFrame.abs DataFrame.abs()[source] Return a Series/DataFrame with absolute numeric value of each element. This function only applies to elements that are all numeric. Returns abs Series/DataFrame containing the absolute value of each element. See also numpy.absolute Calculate the absolute value element-wise. Notes For complex inputs, 1.2 + 1j, the absolute value is \(\sqrt{ a^2 + b^2 }\). Examples Absolute numeric values in a Series. >>> s = pd.Series([-1.10, 2, -3.33, 4]) >>> s.abs() 0 1.10 1 2.00 2 3.33 3 4.00 dtype: float64 Absolute numeric values in a Series with complex numbers. >>> s = pd.Series([1.2 + 1j]) >>> s.abs() 0 1.56205 dtype: float64 Absolute numeric values in a Series with a Timedelta element. >>> s = pd.Series([pd.Timedelta('1 days')]) >>> s.abs() 0 1 days dtype: timedelta64[ns] Select rows with data closest to certain value using argsort (from StackOverflow). >>> df = pd.DataFrame({ ... 'a': [4, 5, 6, 7], ... 'b': [10, 20, 30, 40], ... 'c': [100, 50, -30, -50] ... }) >>> df a b c 0 4 10 100 1 5 20 50 2 6 30 -30 3 7 40 -50 >>> df.loc[(df.c - 43).abs().argsort()] a b c 1 5 20 50 0 4 10 100 2 6 30 -30 3 7 40 -50
pandas.DataFrame.add DataFrame.add(other, axis='columns', level=None, fill_value=None)[source] Get Addition of dataframe and other, element-wise (binary operator add). Equivalent to dataframe + other, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, radd. Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **. Parameters other:scalar, sequence, Series, or DataFrame Any single or multiple element data structure, or list-like object. axis:{0 or ‘index’, 1 or ‘columns’} Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on. level:int or label Broadcast across a level, matching Index values on the passed MultiIndex level. fill_value:float or None, default None Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing. Returns DataFrame Result of the arithmetic operation. See also DataFrame.add Add DataFrames. DataFrame.sub Subtract DataFrames. DataFrame.mul Multiply DataFrames. DataFrame.div Divide DataFrames (float division). DataFrame.truediv Divide DataFrames (float division). DataFrame.floordiv Divide DataFrames (integer division). DataFrame.mod Calculate modulo (remainder after division). DataFrame.pow Calculate exponential power. Notes Mismatched indices will be unioned together. Examples >>> df = pd.DataFrame({'angles': [0, 3, 4], ... 'degrees': [360, 180, 360]}, ... index=['circle', 'triangle', 'rectangle']) >>> df angles degrees circle 0 360 triangle 3 180 rectangle 4 360 Add a scalar with the operator version, which returns the same results. >>> df + 1 angles degrees circle 1 361 triangle 4 181 rectangle 5 361 >>> df.add(1) angles degrees circle 1 361 triangle 4 181 rectangle 5 361 Divide by a constant with the reverse version. 
>>> df.div(10) angles degrees circle 0.0 36.0 triangle 0.3 18.0 rectangle 0.4 36.0 >>> df.rdiv(10) angles degrees circle inf 0.027778 triangle 3.333333 0.055556 rectangle 2.500000 0.027778 Subtract a list and Series by axis with operator version. >>> df - [1, 2] angles degrees circle -1 358 triangle 2 178 rectangle 3 358 >>> df.sub([1, 2], axis='columns') angles degrees circle -1 358 triangle 2 178 rectangle 3 358 >>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']), ... axis='index') angles degrees circle -1 359 triangle 2 179 rectangle 3 359 Multiply a DataFrame of different shape with operator version. >>> other = pd.DataFrame({'angles': [0, 3, 4]}, ... index=['circle', 'triangle', 'rectangle']) >>> other angles circle 0 triangle 3 rectangle 4 >>> df * other angles degrees circle 0 NaN triangle 9 NaN rectangle 16 NaN >>> df.mul(other, fill_value=0) angles degrees circle 0 0.0 triangle 9 0.0 rectangle 16 0.0 Divide by a MultiIndex by level. >>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6], ... 'degrees': [360, 180, 360, 360, 540, 720]}, ... index=[['A', 'A', 'A', 'B', 'B', 'B'], ... ['circle', 'triangle', 'rectangle', ... 'square', 'pentagon', 'hexagon']]) >>> df_multindex angles degrees A circle 0 360 triangle 3 180 rectangle 4 360 B square 4 360 pentagon 5 540 hexagon 6 720 >>> df.div(df_multindex, level=1, fill_value=0) angles degrees A circle NaN 1.0 triangle 1.0 1.0 rectangle 1.0 1.0 B square 0.0 0.0 pentagon 0.0 0.0 hexagon 0.0 0.0
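The fill_value behavior described above can be shown directly with add: a missing value in one input is replaced before the operation, but a location missing in both inputs stays missing. A minimal sketch (the frames are made up for illustration):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'angles': [0, 3], 'degrees': [360, np.nan]})
other = pd.DataFrame({'angles': [1, 1], 'degrees': [1, 1]})

# Without fill_value, NaN propagates through the addition.
plain = df.add(other)

# With fill_value=0, the missing entry is treated as 0 before adding,
# so the result at that location is 0 + 1 = 1.0.
filled = df.add(other, fill_value=0)
print(plain)
print(filled)
```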
pandas.DataFrame.add_prefix DataFrame.add_prefix(prefix)[source] Prefix labels with string prefix. For Series, the row labels are prefixed. For DataFrame, the column labels are prefixed. Parameters prefix:str The string to add before each label. Returns Series or DataFrame New Series or DataFrame with updated labels. See also Series.add_suffix Suffix row labels with string suffix. DataFrame.add_suffix Suffix column labels with string suffix. Examples >>> s = pd.Series([1, 2, 3, 4]) >>> s 0 1 1 2 2 3 3 4 dtype: int64 >>> s.add_prefix('item_') item_0 1 item_1 2 item_2 3 item_3 4 dtype: int64 >>> df = pd.DataFrame({'A': [1, 2, 3, 4], 'B': [3, 4, 5, 6]}) >>> df A B 0 1 3 1 2 4 2 3 5 3 4 6 >>> df.add_prefix('col_') col_A col_B 0 1 3 1 2 4 2 3 5 3 4 6
pandas.DataFrame.add_suffix DataFrame.add_suffix(suffix)[source] Suffix labels with string suffix. For Series, the row labels are suffixed. For DataFrame, the column labels are suffixed. Parameters suffix:str The string to add after each label. Returns Series or DataFrame New Series or DataFrame with updated labels. See also Series.add_prefix Prefix row labels with string prefix. DataFrame.add_prefix Prefix column labels with string prefix. Examples >>> s = pd.Series([1, 2, 3, 4]) >>> s 0 1 1 2 2 3 3 4 dtype: int64 >>> s.add_suffix('_item') 0_item 1 1_item 2 2_item 3 3_item 4 dtype: int64 >>> df = pd.DataFrame({'A': [1, 2, 3, 4], 'B': [3, 4, 5, 6]}) >>> df A B 0 1 3 1 2 4 2 3 5 3 4 6 >>> df.add_suffix('_col') A_col B_col 0 1 3 1 2 4 2 3 5 3 4 6
pandas.DataFrame.agg DataFrame.agg(func=None, axis=0, *args, **kwargs)[source] Aggregate using one or more operations over the specified axis. Parameters func:function, str, list or dict Function to use for aggregating the data. If a function, must either work when passed a DataFrame or when passed to DataFrame.apply. Accepted combinations are: function string function name list of functions and/or function names, e.g. [np.sum, 'mean'] dict of axis labels -> functions, function names or list of such. axis:{0 or ‘index’, 1 or ‘columns’}, default 0 If 0 or ‘index’: apply function to each column. If 1 or ‘columns’: apply function to each row. *args Positional arguments to pass to func. **kwargs Keyword arguments to pass to func. Returns scalar, Series or DataFrame The return can be: scalar : when Series.agg is called with single function Series : when DataFrame.agg is called with a single function DataFrame : when DataFrame.agg is called with several functions Return scalar, Series or DataFrame. The aggregation operations are always performed over an axis, either the index (default) or the column axis. This behavior is different from numpy aggregation functions (mean, median, prod, sum, std, var), where the default is to compute the aggregation of the flattened array, e.g., numpy.mean(arr_2d) as opposed to numpy.mean(arr_2d, axis=0). agg is an alias for aggregate. Use the alias. See also DataFrame.apply Perform any type of operations. DataFrame.transform Perform transformation type operations. core.groupby.GroupBy Perform operations over groups. core.resample.Resampler Perform operations over resampled bins. core.window.Rolling Perform operations over rolling window. core.window.Expanding Perform operations over expanding window. core.window.ExponentialMovingWindow Perform operation over exponential weighted window. Notes agg is an alias for aggregate. Use the alias. 
Functions that mutate the passed object can produce unexpected behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods for more details. A passed user-defined-function will be passed a Series for evaluation. Examples >>> df = pd.DataFrame([[1, 2, 3], ... [4, 5, 6], ... [7, 8, 9], ... [np.nan, np.nan, np.nan]], ... columns=['A', 'B', 'C']) Aggregate these functions over the rows. >>> df.agg(['sum', 'min']) A B C sum 12.0 15.0 18.0 min 1.0 2.0 3.0 Different aggregations per column. >>> df.agg({'A' : ['sum', 'min'], 'B' : ['min', 'max']}) A B sum 12.0 NaN min 1.0 2.0 max NaN 8.0 Aggregate different functions over the columns and rename the index of the resulting DataFrame. >>> df.agg(x=('A', max), y=('B', 'min'), z=('C', np.mean)) A B C x 7.0 NaN NaN y NaN 2.0 NaN z NaN NaN 6.0 Aggregate over the columns. >>> df.agg("mean", axis="columns") 0 2.0 1 5.0 2 8.0 3 NaN dtype: float64
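One detail worth illustrating with a small sketch (value_range is a hypothetical helper, not part of the API): a list passed to agg may mix function names with user-defined callables, and a callable's __name__ becomes the corresponding row label in the result.

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 4, 7], 'B': [2, 5, 8]})

# The callable receives each column as a Series; its __name__
# ('value_range') labels the resulting row
def value_range(s):
    return s.max() - s.min()

out = df.agg(['sum', value_range])
print(out.loc['value_range', 'A'])  # 6
```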
pandas.DataFrame.aggregate DataFrame.aggregate(func=None, axis=0, *args, **kwargs)[source] Aggregate using one or more operations over the specified axis. Parameters func:function, str, list or dict Function to use for aggregating the data. If a function, must either work when passed a DataFrame or when passed to DataFrame.apply. Accepted combinations are: function string function name list of functions and/or function names, e.g. [np.sum, 'mean'] dict of axis labels -> functions, function names or list of such. axis:{0 or ‘index’, 1 or ‘columns’}, default 0 If 0 or ‘index’: apply function to each column. If 1 or ‘columns’: apply function to each row. *args Positional arguments to pass to func. **kwargs Keyword arguments to pass to func. Returns scalar, Series or DataFrame The return can be: scalar : when Series.agg is called with single function Series : when DataFrame.agg is called with a single function DataFrame : when DataFrame.agg is called with several functions Return scalar, Series or DataFrame. The aggregation operations are always performed over an axis, either the index (default) or the column axis. This behavior is different from numpy aggregation functions (mean, median, prod, sum, std, var), where the default is to compute the aggregation of the flattened array, e.g., numpy.mean(arr_2d) as opposed to numpy.mean(arr_2d, axis=0). agg is an alias for aggregate. Use the alias. See also DataFrame.apply Perform any type of operations. DataFrame.transform Perform transformation type operations. core.groupby.GroupBy Perform operations over groups. core.resample.Resampler Perform operations over resampled bins. core.window.Rolling Perform operations over rolling window. core.window.Expanding Perform operations over expanding window. core.window.ExponentialMovingWindow Perform operation over exponential weighted window. Notes agg is an alias for aggregate. Use the alias. 
Functions that mutate the passed object can produce unexpected behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods for more details. A passed user-defined-function will be passed a Series for evaluation. Examples >>> df = pd.DataFrame([[1, 2, 3], ... [4, 5, 6], ... [7, 8, 9], ... [np.nan, np.nan, np.nan]], ... columns=['A', 'B', 'C']) Aggregate these functions over the rows. >>> df.agg(['sum', 'min']) A B C sum 12.0 15.0 18.0 min 1.0 2.0 3.0 Different aggregations per column. >>> df.agg({'A' : ['sum', 'min'], 'B' : ['min', 'max']}) A B sum 12.0 NaN min 1.0 2.0 max NaN 8.0 Aggregate different functions over the columns and rename the index of the resulting DataFrame. >>> df.agg(x=('A', max), y=('B', 'min'), z=('C', np.mean)) A B C x 7.0 NaN NaN y NaN 2.0 NaN z NaN NaN 6.0 Aggregate over the columns. >>> df.agg("mean", axis="columns") 0 2.0 1 5.0 2 8.0 3 NaN dtype: float64
pandas.DataFrame.align DataFrame.align(other, join='outer', axis=None, level=None, copy=True, fill_value=None, method=None, limit=None, fill_axis=0, broadcast_axis=None)[source] Align two objects on their axes with the specified join method. Join method is specified for each axis Index. Parameters other:DataFrame or Series join:{‘outer’, ‘inner’, ‘left’, ‘right’}, default ‘outer’ axis:allowed axis of the other object, default None Align on index (0), columns (1), or both (None). level:int or level name, default None Broadcast across a level, matching Index values on the passed MultiIndex level. copy:bool, default True Always returns new objects. If copy=False and no reindexing is required then original objects are returned. fill_value:scalar, default np.NaN Value to use for missing values. Defaults to NaN, but can be any “compatible” value. method:{‘backfill’, ‘bfill’, ‘pad’, ‘ffill’, None}, default None Method to use for filling holes in reindexed Series: pad / ffill: propagate last valid observation forward to next valid. backfill / bfill: use NEXT valid observation to fill gap. limit:int, default None If method is specified, this is the maximum number of consecutive NaN values to forward/backward fill. In other words, if there is a gap with more than this number of consecutive NaNs, it will only be partially filled. If method is not specified, this is the maximum number of entries along the entire axis where NaNs will be filled. Must be greater than 0 if not None. fill_axis:{0 or ‘index’, 1 or ‘columns’}, default 0 Filling axis, method and limit. broadcast_axis:{0 or ‘index’, 1 or ‘columns’}, default None Broadcast values along this axis, if aligning two objects of different dimensions. Returns (left, right):(DataFrame, type of other) Aligned objects. Examples >>> df = pd.DataFrame( ... [[1, 2, 3, 4], [6, 7, 8, 9]], columns=["D", "B", "E", "A"], index=[1, 2] ... ) >>> other = pd.DataFrame( ... [[10, 20, 30, 40], [60, 70, 80, 90], [600, 700, 800, 900]], ... 
columns=["A", "B", "C", "D"], ... index=[2, 3, 4], ... ) >>> df D B E A 1 1 2 3 4 2 6 7 8 9 >>> other A B C D 2 10 20 30 40 3 60 70 80 90 4 600 700 800 900 Align on columns: >>> left, right = df.align(other, join="outer", axis=1) >>> left A B C D E 1 4 2 NaN 1 3 2 9 7 NaN 6 8 >>> right A B C D E 2 10 20 30 40 NaN 3 60 70 80 90 NaN 4 600 700 800 900 NaN We can also align on the index: >>> left, right = df.align(other, join="outer", axis=0) >>> left D B E A 1 1.0 2.0 3.0 4.0 2 6.0 7.0 8.0 9.0 3 NaN NaN NaN NaN 4 NaN NaN NaN NaN >>> right A B C D 1 NaN NaN NaN NaN 2 10.0 20.0 30.0 40.0 3 60.0 70.0 80.0 90.0 4 600.0 700.0 800.0 900.0 Finally, the default axis=None will align on both index and columns: >>> left, right = df.align(other, join="outer", axis=None) >>> left A B C D E 1 4.0 2.0 NaN 1.0 3.0 2 9.0 7.0 NaN 6.0 8.0 3 NaN NaN NaN NaN NaN 4 NaN NaN NaN NaN NaN >>> right A B C D E 1 NaN NaN NaN NaN NaN 2 10.0 20.0 30.0 40.0 NaN 3 60.0 70.0 80.0 90.0 NaN 4 600.0 700.0 800.0 900.0 NaN
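A minimal sketch of the fill_value parameter described above (small made-up frames, not the docs' example): fill_value replaces the NaNs that alignment would otherwise introduce for labels present on only one side.

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2]}, index=[1, 2])
other = pd.DataFrame({'B': [10, 20]}, index=[2, 3])

# Outer-align on both axes; missing slots are filled with 0 instead of NaN
left, right = df.align(other, join='outer', fill_value=0)
print(left)
print(right)
```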
pandas.DataFrame.all DataFrame.all(axis=0, bool_only=None, skipna=True, level=None, **kwargs)[source] Return whether all elements are True, potentially over an axis. Returns True unless there is at least one element within a series or along a Dataframe axis that is False or equivalent (e.g. zero or empty). Parameters axis:{0 or ‘index’, 1 or ‘columns’, None}, default 0 Indicate which axis or axes should be reduced. 0 / ‘index’ : reduce the index, return a Series whose index is the original column labels. 1 / ‘columns’ : reduce the columns, return a Series whose index is the original index. None : reduce all axes, return a scalar. bool_only:bool, default None Include only boolean columns. If None, will attempt to use everything, then use only boolean data. Not implemented for Series. skipna:bool, default True Exclude NA/null values. If the entire row/column is NA and skipna is True, then the result will be True, as for an empty row/column. If skipna is False, then NA are treated as True, because these are not equal to zero. level:int or level name, default None If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series. **kwargs:any, default None Additional keywords have no effect but might be accepted for compatibility with NumPy. Returns Series or DataFrame If level is specified, then DataFrame is returned; otherwise, Series is returned. See also Series.all Return True if all elements are True. DataFrame.any Return True if one (or more) elements are True. Examples Series >>> pd.Series([True, True]).all() True >>> pd.Series([True, False]).all() False >>> pd.Series([], dtype="float64").all() True >>> pd.Series([np.nan]).all() True >>> pd.Series([np.nan]).all(skipna=False) True DataFrames Create a dataframe from a dictionary. >>> df = pd.DataFrame({'col1': [True, True], 'col2': [True, False]}) >>> df col1 col2 0 True True 1 True False Default behaviour checks if column-wise values all return True. 
>>> df.all() col1 True col2 False dtype: bool Specify axis='columns' to check if row-wise values all return True. >>> df.all(axis='columns') 0 True 1 False dtype: bool Or axis=None for whether every value is True. >>> df.all(axis=None) False
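The "False or equivalent (e.g. zero or empty)" wording above can be made concrete with a small sketch (column names here are invented):

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'b': [1, 0, 3]})

# Zero is falsy, so column 'b' fails the all() check
res = df.all()
print(res['a'], res['b'])  # True False
```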
pandas.DataFrame.any DataFrame.any(axis=0, bool_only=None, skipna=True, level=None, **kwargs)[source] Return whether any element is True, potentially over an axis. Returns False unless there is at least one element within a series or along a Dataframe axis that is True or equivalent (e.g. non-zero or non-empty). Parameters axis:{0 or ‘index’, 1 or ‘columns’, None}, default 0 Indicate which axis or axes should be reduced. 0 / ‘index’ : reduce the index, return a Series whose index is the original column labels. 1 / ‘columns’ : reduce the columns, return a Series whose index is the original index. None : reduce all axes, return a scalar. bool_only:bool, default None Include only boolean columns. If None, will attempt to use everything, then use only boolean data. Not implemented for Series. skipna:bool, default True Exclude NA/null values. If the entire row/column is NA and skipna is True, then the result will be False, as for an empty row/column. If skipna is False, then NA are treated as True, because these are not equal to zero. level:int or level name, default None If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series. **kwargs:any, default None Additional keywords have no effect but might be accepted for compatibility with NumPy. Returns Series or DataFrame If level is specified, then DataFrame is returned; otherwise, Series is returned. See also numpy.any Numpy version of this method. Series.any Return whether any element is True. Series.all Return whether all elements are True. DataFrame.any Return whether any element is True over requested axis. DataFrame.all Return whether all elements are True over requested axis. Examples Series For Series input, the output is a scalar indicating whether any element is True. 
>>> pd.Series([False, False]).any() False >>> pd.Series([True, False]).any() True >>> pd.Series([], dtype="float64").any() False >>> pd.Series([np.nan]).any() False >>> pd.Series([np.nan]).any(skipna=False) True DataFrame Whether each column contains at least one True element (the default). >>> df = pd.DataFrame({"A": [1, 2], "B": [0, 2], "C": [0, 0]}) >>> df A B C 0 1 0 0 1 2 2 0 >>> df.any() A True B True C False dtype: bool Aggregating over the columns. >>> df = pd.DataFrame({"A": [True, False], "B": [1, 2]}) >>> df A B 0 True 1 1 False 2 >>> df.any(axis='columns') 0 True 1 True dtype: bool >>> df = pd.DataFrame({"A": [True, False], "B": [1, 0]}) >>> df A B 0 True 1 1 False 0 >>> df.any(axis='columns') 0 True 1 False dtype: bool Aggregating over the entire DataFrame with axis=None. >>> df.any(axis=None) True any for an empty DataFrame is an empty Series. >>> pd.DataFrame([]).any() Series([], dtype: bool)
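As a compact recap of the axis behaviour above (invented column names), any() reduces per column by default, while axis=None collapses the whole frame to one scalar:

```python
import pandas as pd

df = pd.DataFrame({'A': [0, 0], 'B': [0, 2]})

# Column-wise: does any element in each column evaluate truthy?
per_col = df.any()
print(per_col['A'], per_col['B'])  # False True

# axis=None reduces everything to a single scalar
print(bool(df.any(axis=None)))  # True
```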
pandas.DataFrame.append DataFrame.append(other, ignore_index=False, verify_integrity=False, sort=False)[source] Append rows of other to the end of caller, returning a new object. Columns in other that are not in the caller are added as new columns. Parameters other:DataFrame or Series/dict-like object, or list of these The data to append. ignore_index:bool, default False If True, the resulting axis will be labeled 0, 1, …, n - 1. verify_integrity:bool, default False If True, raise ValueError on creating index with duplicates. sort:bool, default False Sort columns if the columns of self and other are not aligned. Changed in version 1.0.0: Changed to not sort by default. Returns DataFrame A new DataFrame consisting of the rows of caller and the rows of other. See also concat General function to concatenate DataFrame or Series objects. Notes If a list of dict/series is passed and the keys are all contained in the DataFrame’s index, the order of the columns in the resulting DataFrame will be unchanged. Iteratively appending rows to a DataFrame can be more computationally intensive than a single concatenate. A better solution is to append those rows to a list and then concatenate the list with the original DataFrame all at once. Examples >>> df = pd.DataFrame([[1, 2], [3, 4]], columns=list('AB'), index=['x', 'y']) >>> df A B x 1 2 y 3 4 >>> df2 = pd.DataFrame([[5, 6], [7, 8]], columns=list('AB'), index=['x', 'y']) >>> df.append(df2) A B x 1 2 y 3 4 x 5 6 y 7 8 With ignore_index set to True: >>> df.append(df2, ignore_index=True) A B 0 1 2 1 3 4 2 5 6 3 7 8 The following, while not recommended methods for generating DataFrames, show two ways to generate a DataFrame from multiple data sources. Less efficient: >>> df = pd.DataFrame(columns=['A']) >>> for i in range(5): ... df = df.append({'A': i}, ignore_index=True) >>> df A 0 0 1 1 2 2 3 3 4 4 More efficient: >>> pd.concat([pd.DataFrame([i], columns=['A']) for i in range(5)], ... ignore_index=True) A 0 0 1 1 2 2 3 3 4 4
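The note about iterative appends being expensive suggests the list-then-concat pattern; a minimal sketch (loop bounds and column name are arbitrary):

```python
import pandas as pd

# Accumulate pieces in a plain Python list, then concatenate once --
# this avoids the quadratic cost of appending inside the loop
pieces = []
for i in range(3):
    pieces.append(pd.DataFrame({'A': [i]}))
df = pd.concat(pieces, ignore_index=True)
print(df['A'].tolist())  # [0, 1, 2]
```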
pandas.DataFrame.apply DataFrame.apply(func, axis=0, raw=False, result_type=None, args=(), **kwargs)[source] Apply a function along an axis of the DataFrame. Objects passed to the function are Series objects whose index is either the DataFrame’s index (axis=0) or the DataFrame’s columns (axis=1). By default (result_type=None), the final return type is inferred from the return type of the applied function. Otherwise, it depends on the result_type argument. Parameters func:function Function to apply to each column or row. axis:{0 or ‘index’, 1 or ‘columns’}, default 0 Axis along which the function is applied: 0 or ‘index’: apply function to each column. 1 or ‘columns’: apply function to each row. raw:bool, default False Determines if row or column is passed as a Series or ndarray object: False : passes each row or column as a Series to the function. True : the passed function will receive ndarray objects instead. If you are just applying a NumPy reduction function this will achieve much better performance. result_type:{‘expand’, ‘reduce’, ‘broadcast’, None}, default None These only act when axis=1 (columns): ‘expand’ : list-like results will be turned into columns. ‘reduce’ : returns a Series if possible rather than expanding list-like results. This is the opposite of ‘expand’. ‘broadcast’ : results will be broadcast to the original shape of the DataFrame, the original index and columns will be retained. The default behaviour (None) depends on the return value of the applied function: list-like results will be returned as a Series of those. However if the apply function returns a Series these are expanded to columns. args:tuple Positional arguments to pass to func in addition to the array/series. **kwargs Additional keyword arguments to pass as keywords arguments to func. Returns Series or DataFrame Result of applying func along the given axis of the DataFrame. See also DataFrame.applymap For elementwise operations. 
DataFrame.aggregate Only perform aggregating type operations. DataFrame.transform Only perform transforming type operations. Notes Functions that mutate the passed object can produce unexpected behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods for more details. Examples >>> df = pd.DataFrame([[4, 9]] * 3, columns=['A', 'B']) >>> df A B 0 4 9 1 4 9 2 4 9 Using a numpy universal function (in this case the same as np.sqrt(df)): >>> df.apply(np.sqrt) A B 0 2.0 3.0 1 2.0 3.0 2 2.0 3.0 Using a reducing function on either axis >>> df.apply(np.sum, axis=0) A 12 B 27 dtype: int64 >>> df.apply(np.sum, axis=1) 0 13 1 13 2 13 dtype: int64 Returning a list-like will result in a Series >>> df.apply(lambda x: [1, 2], axis=1) 0 [1, 2] 1 [1, 2] 2 [1, 2] dtype: object Passing result_type='expand' will expand list-like results to columns of a Dataframe >>> df.apply(lambda x: [1, 2], axis=1, result_type='expand') 0 1 0 1 2 1 1 2 2 1 2 Returning a Series inside the function is similar to passing result_type='expand'. The resulting column names will be the Series index. >>> df.apply(lambda x: pd.Series([1, 2], index=['foo', 'bar']), axis=1) foo bar 0 1 2 1 1 2 2 1 2 Passing result_type='broadcast' will ensure the same shape result, whether list-like or scalar is returned by the function, and broadcast it along the axis. The resulting column names will be the originals. >>> df.apply(lambda x: [1, 2], axis=1, result_type='broadcast') A B 0 1 2 1 1 2 2 1 2
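The args parameter described above can be sketched as follows (clip_sum is a hypothetical helper; the frame mirrors the docs' example):

```python
import pandas as pd

df = pd.DataFrame([[4, 9]] * 3, columns=['A', 'B'])

# Extra positional arguments after the Series are forwarded via `args`
def clip_sum(col, lo, hi):
    return col.clip(lo, hi).sum()

out = df.apply(clip_sum, args=(5, 8))
print(out['A'], out['B'])  # 15 24
```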
pandas.DataFrame.applymap DataFrame.applymap(func, na_action=None, **kwargs)[source] Apply a function to a Dataframe elementwise. This method applies a function that accepts and returns a scalar to every element of a DataFrame. Parameters func:callable Python function, returns a single value from a single value. na_action:{None, ‘ignore’}, default None If ‘ignore’, propagate NaN values, without passing them to func. New in version 1.2. **kwargs Additional keyword arguments to pass as keywords arguments to func. New in version 1.3.0. Returns DataFrame Transformed DataFrame. See also DataFrame.apply Apply a function along input axis of DataFrame. Examples >>> df = pd.DataFrame([[1, 2.12], [3.356, 4.567]]) >>> df 0 1 0 1.000 2.120 1 3.356 4.567 >>> df.applymap(lambda x: len(str(x))) 0 1 0 3 4 1 5 5 Like Series.map, NA values can be ignored: >>> df_copy = df.copy() >>> df_copy.iloc[0, 0] = pd.NA >>> df_copy.applymap(lambda x: len(str(x)), na_action='ignore') 0 1 0 <NA> 4 1 5 5 Note that a vectorized version of func often exists, which will be much faster. You could square each number elementwise. >>> df.applymap(lambda x: x**2) 0 1 0 1.000000 4.494400 1 11.262736 20.857489 But it’s better to avoid applymap in that case. >>> df ** 2 0 1 0 1.000000 4.494400 1 11.262736 20.857489
pandas.DataFrame.asfreq DataFrame.asfreq(freq, method=None, how=None, normalize=False, fill_value=None)[source] Convert time series to specified frequency. Returns the original data conformed to a new index with the specified frequency. If the index of this DataFrame is a PeriodIndex, the new index is the result of transforming the original index with PeriodIndex.asfreq (so the original index will map one-to-one to the new index). Otherwise, the new index will be equivalent to pd.date_range(start, end, freq=freq) where start and end are, respectively, the first and last entries in the original index (see pandas.date_range()). The values corresponding to any timesteps in the new index which were not present in the original index will be null (NaN), unless a method for filling such unknowns is provided (see the method parameter below). The resample() method is more appropriate if an operation on each group of timesteps (such as an aggregate) is necessary to represent the data at the new frequency. Parameters freq:DateOffset or str Frequency DateOffset or string. method:{‘backfill’/’bfill’, ‘pad’/’ffill’}, default None Method to use for filling holes in reindexed Series (note this does not fill NaNs that already were present): ‘pad’ / ‘ffill’: propagate last valid observation forward to next valid ‘backfill’ / ‘bfill’: use NEXT valid observation to fill. how:{‘start’, ‘end’}, default end For PeriodIndex only (see PeriodIndex.asfreq). normalize:bool, default False Whether to reset output index to midnight. fill_value:scalar, optional Value to use for missing values, applied during upsampling (note this does not fill NaNs that already were present). Returns DataFrame DataFrame object reindexed to the specified frequency. See also reindex Conform DataFrame to new index with optional filling logic. Notes To learn more about the frequency strings, please see this link. Examples Start by creating a series with 4 one minute timestamps. 
>>> index = pd.date_range('1/1/2000', periods=4, freq='T') >>> series = pd.Series([0.0, None, 2.0, 3.0], index=index) >>> df = pd.DataFrame({'s': series}) >>> df s 2000-01-01 00:00:00 0.0 2000-01-01 00:01:00 NaN 2000-01-01 00:02:00 2.0 2000-01-01 00:03:00 3.0 Upsample the series into 30 second bins. >>> df.asfreq(freq='30S') s 2000-01-01 00:00:00 0.0 2000-01-01 00:00:30 NaN 2000-01-01 00:01:00 NaN 2000-01-01 00:01:30 NaN 2000-01-01 00:02:00 2.0 2000-01-01 00:02:30 NaN 2000-01-01 00:03:00 3.0 Upsample again, providing a fill value. >>> df.asfreq(freq='30S', fill_value=9.0) s 2000-01-01 00:00:00 0.0 2000-01-01 00:00:30 9.0 2000-01-01 00:01:00 NaN 2000-01-01 00:01:30 9.0 2000-01-01 00:02:00 2.0 2000-01-01 00:02:30 9.0 2000-01-01 00:03:00 3.0 Upsample again, providing a method. >>> df.asfreq(freq='30S', method='bfill') s 2000-01-01 00:00:00 0.0 2000-01-01 00:00:30 NaN 2000-01-01 00:01:00 NaN 2000-01-01 00:01:30 2.0 2000-01-01 00:02:00 2.0 2000-01-01 00:02:30 3.0 2000-01-01 00:03:00 3.0
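To round out the examples above, a minimal sketch of upsampling with method='ffill' (a small three-row series, not the docs' data): the last valid observation is carried forward into the newly created slots.

```python
import pandas as pd

index = pd.date_range('2000-01-01', periods=3, freq='T')
df = pd.DataFrame({'s': [0.0, 1.0, 2.0]}, index=index)

# Forward-fill the 30-second slots introduced by the higher frequency
up = df.asfreq('30S', method='ffill')
print(up['s'].tolist())  # [0.0, 0.0, 1.0, 1.0, 2.0]
```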
pandas.DataFrame.asof DataFrame.asof(where, subset=None)[source] Return the last row(s) without any NaNs before where. The last row (for each element in where, if list) without any NaN is taken. In case of a DataFrame, the last row without NaN considering only the subset of columns (if not None) If there is no good value, NaN is returned for a Series or a Series of NaN values for a DataFrame Parameters where:date or array-like of dates Date(s) before which the last row(s) are returned. subset:str or array-like of str, default None For DataFrame, if not None, only use these columns to check for NaNs. Returns scalar, Series, or DataFrame The return can be: scalar : when self is a Series and where is a scalar Series: when self is a Series and where is an array-like, or when self is a DataFrame and where is a scalar DataFrame : when self is a DataFrame and where is an array-like Return scalar, Series, or DataFrame. See also merge_asof Perform an asof merge. Similar to left join. Notes Dates are assumed to be sorted. Raises if this is not the case. Examples A Series and a scalar where. >>> s = pd.Series([1, 2, np.nan, 4], index=[10, 20, 30, 40]) >>> s 10 1.0 20 2.0 30 NaN 40 4.0 dtype: float64 >>> s.asof(20) 2.0 For a sequence where, a Series is returned. The first value is NaN, because the first element of where is before the first index value. >>> s.asof([5, 20]) 5 NaN 20 2.0 dtype: float64 Missing values are not considered. The following is 2.0, not NaN, even though NaN is at the index location for 30. >>> s.asof(30) 2.0 Take all columns into consideration >>> df = pd.DataFrame({'a': [10, 20, 30, 40, 50], ... 'b': [None, None, None, None, 500]}, ... index=pd.DatetimeIndex(['2018-02-27 09:01:00', ... '2018-02-27 09:02:00', ... '2018-02-27 09:03:00', ... '2018-02-27 09:04:00', ... '2018-02-27 09:05:00'])) >>> df.asof(pd.DatetimeIndex(['2018-02-27 09:03:30', ... 
'2018-02-27 09:04:30'])) a b 2018-02-27 09:03:30 NaN NaN 2018-02-27 09:04:30 NaN NaN Take a single column into consideration >>> df.asof(pd.DatetimeIndex(['2018-02-27 09:03:30', ... '2018-02-27 09:04:30']), ... subset=['a']) a b 2018-02-27 09:03:30 30.0 NaN 2018-02-27 09:04:30 40.0 NaN
pandas.DataFrame.assign DataFrame.assign(**kwargs)[source] Assign new columns to a DataFrame. Returns a new object with all original columns in addition to new ones. Existing columns that are re-assigned will be overwritten. Parameters **kwargs:dict of {str: callable or Series} The column names are keywords. If the values are callable, they are computed on the DataFrame and assigned to the new columns. The callable must not change input DataFrame (though pandas doesn’t check it). If the values are not callable, (e.g. a Series, scalar, or array), they are simply assigned. Returns DataFrame A new DataFrame with the new columns in addition to all the existing columns. Notes Assigning multiple columns within the same assign is possible. Later items in ‘**kwargs’ may refer to newly created or modified columns in ‘df’; items are computed and assigned into ‘df’ in order. Examples >>> df = pd.DataFrame({'temp_c': [17.0, 25.0]}, ... index=['Portland', 'Berkeley']) >>> df temp_c Portland 17.0 Berkeley 25.0 Where the value is a callable, evaluated on df: >>> df.assign(temp_f=lambda x: x.temp_c * 9 / 5 + 32) temp_c temp_f Portland 17.0 62.6 Berkeley 25.0 77.0 Alternatively, the same behavior can be achieved by directly referencing an existing Series or sequence: >>> df.assign(temp_f=df['temp_c'] * 9 / 5 + 32) temp_c temp_f Portland 17.0 62.6 Berkeley 25.0 77.0 You can create multiple columns within the same assign where one of the columns depends on another one defined within the same assign: >>> df.assign(temp_f=lambda x: x['temp_c'] * 9 / 5 + 32, ... temp_k=lambda x: (x['temp_f'] + 459.67) * 5 / 9) temp_c temp_f temp_k Portland 17.0 62.6 290.15 Berkeley 25.0 77.0 298.15
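Two behaviours worth a quick sketch (the 'unit' column is an invented example): non-callable scalars broadcast to every row, and assign always returns a new frame rather than mutating the original.

```python
import pandas as pd

df = pd.DataFrame({'temp_c': [17.0, 25.0]})

# A scalar value is broadcast to every row; df itself is left untouched
out = df.assign(unit='C')
print(out['unit'].tolist())   # ['C', 'C']
print('unit' in df.columns)   # False
```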
pandas.DataFrame.astype DataFrame.astype(dtype, copy=True, errors='raise')[source] Cast a pandas object to a specified dtype dtype. Parameters dtype:data type, or dict of column name -> data type Use a numpy.dtype or Python type to cast entire pandas object to the same type. Alternatively, use {col: dtype, …}, where col is a column label and dtype is a numpy.dtype or Python type to cast one or more of the DataFrame’s columns to column-specific types. copy:bool, default True Return a copy when copy=True (be very careful setting copy=False as changes to values then may propagate to other pandas objects). errors:{‘raise’, ‘ignore’}, default ‘raise’ Control raising of exceptions on invalid data for provided dtype. raise : allow exceptions to be raised ignore : suppress exceptions. On error return original object. Returns casted:same type as caller See also to_datetime Convert argument to datetime. to_timedelta Convert argument to timedelta. to_numeric Convert argument to a numeric type. numpy.ndarray.astype Cast a numpy array to a specified type. Notes Deprecated since version 1.3.0: Using astype to convert from timezone-naive dtype to timezone-aware dtype is deprecated and will raise in a future version. Use Series.dt.tz_localize() instead. 
Examples Create a DataFrame: >>> d = {'col1': [1, 2], 'col2': [3, 4]} >>> df = pd.DataFrame(data=d) >>> df.dtypes col1 int64 col2 int64 dtype: object Cast all columns to int32: >>> df.astype('int32').dtypes col1 int32 col2 int32 dtype: object Cast col1 to int32 using a dictionary: >>> df.astype({'col1': 'int32'}).dtypes col1 int32 col2 int64 dtype: object Create a series: >>> ser = pd.Series([1, 2], dtype='int32') >>> ser 0 1 1 2 dtype: int32 >>> ser.astype('int64') 0 1 1 2 dtype: int64 Convert to categorical type: >>> ser.astype('category') 0 1 1 2 dtype: category Categories (2, int64): [1, 2] Convert to ordered categorical type with custom ordering: >>> from pandas.api.types import CategoricalDtype >>> cat_dtype = CategoricalDtype( ... categories=[2, 1], ordered=True) >>> ser.astype(cat_dtype) 0 1 1 2 dtype: category Categories (2, int64): [2 < 1] Note that using copy=False and changing data on a new pandas object may propagate changes: >>> s1 = pd.Series([1, 2]) >>> s2 = s1.astype('int64', copy=False) >>> s2[0] = 10 >>> s1 # note that s1[0] has changed too 0 10 1 2 dtype: int64 Create a series of dates: >>> ser_date = pd.Series(pd.date_range('20200101', periods=3)) >>> ser_date 0 2020-01-01 1 2020-01-02 2 2020-01-03 dtype: datetime64[ns]
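A sketch of the errors='ignore' behaviour described above (sample data invented): when the cast cannot be performed, the original object is returned instead of an exception being raised.

```python
import pandas as pd

ser = pd.Series(['1', 'two'])

# 'two' cannot be cast to int64; with errors='ignore' the original
# object comes back unchanged instead of raising ValueError
out = ser.astype('int64', errors='ignore')
print(out.equals(ser))  # True
```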
pandas.DataFrame.at propertyDataFrame.at Access a single value for a row/column label pair. Similar to loc, in that both provide label-based lookups. Use at if you only need to get or set a single value in a DataFrame or Series. Raises KeyError If ‘label’ does not exist in DataFrame. See also DataFrame.iat Access a single value for a row/column pair by integer position. DataFrame.loc Access a group of rows and columns by label(s). Series.at Access a single value using a label. Examples >>> df = pd.DataFrame([[0, 2, 3], [0, 4, 1], [10, 20, 30]], ... index=[4, 5, 6], columns=['A', 'B', 'C']) >>> df A B C 4 0 2 3 5 0 4 1 6 10 20 30 Get value at specified row/column pair >>> df.at[4, 'B'] 2 Set value at specified row/column pair >>> df.at[4, 'B'] = 10 >>> df.at[4, 'B'] 10 Get value within a Series >>> df.loc[5].at['B'] 4
pandas.DataFrame.at_time DataFrame.at_time(time, asof=False, axis=None)[source] Select values at particular time of day (e.g., 9:30AM). Parameters time:datetime.time or str axis:{0 or ‘index’, 1 or ‘columns’}, default 0 Returns Series or DataFrame Raises TypeError If the index is not a DatetimeIndex See also between_time Select values between particular times of the day. first Select initial periods of time series based on a date offset. last Select final periods of time series based on a date offset. DatetimeIndex.indexer_at_time Get just the index locations for values at particular time of the day. Examples >>> i = pd.date_range('2018-04-09', periods=4, freq='12H') >>> ts = pd.DataFrame({'A': [1, 2, 3, 4]}, index=i) >>> ts A 2018-04-09 00:00:00 1 2018-04-09 12:00:00 2 2018-04-10 00:00:00 3 2018-04-10 12:00:00 4 >>> ts.at_time('12:00') A 2018-04-09 12:00:00 2 2018-04-10 12:00:00 4
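A variation on the example above, selecting the midnight rows instead (same index construction as in the docs):

```python
import pandas as pd

i = pd.date_range('2018-04-09', periods=4, freq='12H')
ts = pd.DataFrame({'A': [1, 2, 3, 4]}, index=i)

# Keep only the rows whose timestamp is exactly 00:00
sel = ts.at_time('00:00')
print(sel['A'].tolist())  # [1, 3]
```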
pandas.DataFrame.attrs propertyDataFrame.attrs Dictionary of global attributes of this dataset. Warning attrs is experimental and may change without warning. See also DataFrame.flags Global flags applying to this object.
pandas.DataFrame.axes propertyDataFrame.axes Return a list representing the axes of the DataFrame. It has the row axis labels and column axis labels as the only members. They are returned in that order. Examples >>> df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]}) >>> df.axes [RangeIndex(start=0, stop=2, step=1), Index(['col1', 'col2'], dtype='object')]
pandas.DataFrame.backfill DataFrame.backfill(axis=None, inplace=False, limit=None, downcast=None)[source] Synonym for DataFrame.fillna() with method='bfill'. Returns Series/DataFrame or None Object with missing values filled or None if inplace=True.
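The synonym relationship stated above can be checked directly with a small sketch (sample data invented):

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({'x': [np.nan, 2.0, np.nan, 4.0]})

# backfill() is exactly fillna(method='bfill'): each NaN takes the
# next valid observation below it
a = df.backfill()
b = df.fillna(method='bfill')
print(a.equals(b))         # True
print(a['x'].tolist())     # [2.0, 2.0, 4.0, 4.0]
```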