pandas.CategoricalIndex.as_unordered CategoricalIndex.as_unordered(*args, **kwargs)[source] Set the Categorical to be unordered. Parameters inplace:bool, default False Whether or not to set the ordered attribute in-place or return a copy of this categorical with ordered set to False. Returns Categorical or None Unordered Categorical or None if inplace=True.
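The docstring above has no example; here is a minimal sketch (not from the pandas docs) of how as_unordered behaves on a CategoricalIndex:

```python
import pandas as pd

# Start from an ordered CategoricalIndex, then drop the ordering.
idx = pd.CategoricalIndex(["a", "b", "c"], ordered=True)
unordered = idx.as_unordered()

print(idx.ordered)        # True  (the original is left unchanged)
print(unordered.ordered)  # False
```

The values and categories are untouched; only the ordered flag changes.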
pandas.CategoricalIndex.categories propertyCategoricalIndex.categories The categories of this categorical. Setting assigns new values to each category (effectively a rename of each individual category). The assigned value has to be a list-like object. All items must be unique and the number of items in the new categories must be the same as the number of items in the old categories. Assigning to categories is an inplace operation! Raises ValueError If the new categories do not validate as categories or if the number of new categories does not equal the number of old categories See also rename_categories Rename categories. reorder_categories Reorder categories. add_categories Add new categories. remove_categories Remove the specified categories. remove_unused_categories Remove categories which are not used. set_categories Set the categories to the specified ones.
pandas.CategoricalIndex.codes propertyCategoricalIndex.codes The category codes of this categorical. Codes are an array of integers which are the positions of the actual values in the categories array. There is no setter, use the other categorical methods and the normal item setter to change values in the categorical. Returns ndarray[int] A non-writable view of the codes array.
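A minimal sketch (not from the pandas docs) of the relationship between codes and categories:

```python
import pandas as pd

idx = pd.CategoricalIndex(["a", "c", "a", "b"])

# Categories are inferred (and sorted) as ['a', 'b', 'c']; each code is the
# position of the corresponding value in that categories array.
cats = list(idx.categories)
codes = list(idx.codes)
print(cats)   # ['a', 'b', 'c']
print(codes)  # [0, 2, 0, 1]
```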
pandas.CategoricalIndex.equals CategoricalIndex.equals(other)[source] Determine if two CategoricalIndex objects contain the same elements. Returns bool True if the two CategoricalIndex objects have equal elements, otherwise False.
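A minimal sketch (not from the pandas docs) of element-wise equality between two CategoricalIndex objects:

```python
import pandas as pd

a = pd.CategoricalIndex(["a", "b", "a"])
b = pd.CategoricalIndex(["a", "b", "a"])
c = pd.CategoricalIndex(["a", "b", "b"])

same = a.equals(b)       # True: identical elements
different = a.equals(c)  # False: third element differs
print(same, different)
```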
pandas.CategoricalIndex.map CategoricalIndex.map(mapper)[source] Map values using an input mapping or function. Maps the values (their categories, not the codes) of the index to new categories. If the mapping correspondence is one-to-one the result is a CategoricalIndex which has the same order property as the original, otherwise an Index is returned. If a dict or Series is used any unmapped category is mapped to NaN. Note that if this happens an Index will be returned. Parameters mapper:function, dict, or Series Mapping correspondence. Returns pandas.CategoricalIndex or pandas.Index Mapped index. See also Index.map Apply a mapping correspondence on an Index. Series.map Apply a mapping correspondence on a Series. Series.apply Apply more complex functions on a Series. Examples >>> idx = pd.CategoricalIndex(['a', 'b', 'c']) >>> idx CategoricalIndex(['a', 'b', 'c'], categories=['a', 'b', 'c'], ordered=False, dtype='category') >>> idx.map(lambda x: x.upper()) CategoricalIndex(['A', 'B', 'C'], categories=['A', 'B', 'C'], ordered=False, dtype='category') >>> idx.map({'a': 'first', 'b': 'second', 'c': 'third'}) CategoricalIndex(['first', 'second', 'third'], categories=['first', 'second', 'third'], ordered=False, dtype='category') If the mapping is one-to-one the ordering of the categories is preserved: >>> idx = pd.CategoricalIndex(['a', 'b', 'c'], ordered=True) >>> idx CategoricalIndex(['a', 'b', 'c'], categories=['a', 'b', 'c'], ordered=True, dtype='category') >>> idx.map({'a': 3, 'b': 2, 'c': 1}) CategoricalIndex([3, 2, 1], categories=[3, 2, 1], ordered=True, dtype='category') If the mapping is not one-to-one an Index is returned: >>> idx.map({'a': 'first', 'b': 'second', 'c': 'first'}) Index(['first', 'second', 'first'], dtype='object') If a dict is used, all unmapped categories are mapped to NaN and the result is an Index: >>> idx.map({'a': 'first', 'b': 'second'}) Index(['first', 'second', nan], dtype='object')
pandas.CategoricalIndex.ordered propertyCategoricalIndex.ordered Whether the categories have an ordered relationship.
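A minimal sketch (not from the pandas docs) of reading the ordered flag:

```python
import pandas as pd

plain = pd.CategoricalIndex(["a", "b"])
ranked = pd.CategoricalIndex(["a", "b"], ordered=True)

print(plain.ordered)   # False (the default)
print(ranked.ordered)  # True
```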
pandas.CategoricalIndex.remove_categories CategoricalIndex.remove_categories(*args, **kwargs)[source] Remove the specified categories. removals must be included in the old categories. Values which were in the removed categories will be set to NaN. Parameters removals:category or list of categories The categories which should be removed. inplace:bool, default False Whether or not to remove the categories inplace or return a copy of this categorical with removed categories. Deprecated since version 1.3.0. Returns cat:Categorical or None Categorical with removed categories or None if inplace=True. Raises ValueError If the removals are not contained in the categories See also rename_categories Rename categories. reorder_categories Reorder categories. add_categories Add new categories. remove_unused_categories Remove categories which are not used. set_categories Set the categories to the specified ones. Examples >>> c = pd.Categorical(['a', 'c', 'b', 'c', 'd']) >>> c ['a', 'c', 'b', 'c', 'd'] Categories (4, object): ['a', 'b', 'c', 'd'] >>> c.remove_categories(['d', 'a']) [NaN, 'c', 'b', 'c', NaN] Categories (2, object): ['b', 'c']
pandas.CategoricalIndex.remove_unused_categories CategoricalIndex.remove_unused_categories(*args, **kwargs)[source] Remove categories which are not used. Parameters inplace:bool, default False Whether or not to drop unused categories inplace or return a copy of this categorical with unused categories dropped. Deprecated since version 1.2.0. Returns cat:Categorical or None Categorical with unused categories dropped or None if inplace=True. See also rename_categories Rename categories. reorder_categories Reorder categories. add_categories Add new categories. remove_categories Remove the specified categories. set_categories Set the categories to the specified ones. Examples >>> c = pd.Categorical(['a', 'c', 'b', 'c', 'd']) >>> c ['a', 'c', 'b', 'c', 'd'] Categories (4, object): ['a', 'b', 'c', 'd'] >>> c[2] = 'a' >>> c[4] = 'c' >>> c ['a', 'c', 'a', 'c', 'c'] Categories (4, object): ['a', 'b', 'c', 'd'] >>> c.remove_unused_categories() ['a', 'c', 'a', 'c', 'c'] Categories (2, object): ['a', 'c']
pandas.CategoricalIndex.rename_categories CategoricalIndex.rename_categories(*args, **kwargs)[source] Rename categories. Parameters new_categories:list-like, dict-like or callable New categories which will replace old categories. list-like: all items must be unique and the number of items in the new categories must match the existing number of categories. dict-like: specifies a mapping from old categories to new. Categories not contained in the mapping are passed through and extra categories in the mapping are ignored. callable : a callable that is called on all items in the old categories and whose return values comprise the new categories. inplace:bool, default False Whether or not to rename the categories inplace or return a copy of this categorical with renamed categories. Deprecated since version 1.3.0. Returns cat:Categorical or None Categorical with renamed categories or None if inplace=True. Raises ValueError If new categories are list-like and do not have the same number of items as the current categories or do not validate as categories See also reorder_categories Reorder categories. add_categories Add new categories. remove_categories Remove the specified categories. remove_unused_categories Remove categories which are not used. set_categories Set the categories to the specified ones. Examples >>> c = pd.Categorical(['a', 'a', 'b']) >>> c.rename_categories([0, 1]) [0, 0, 1] Categories (2, int64): [0, 1] For dict-like new_categories, extra keys are ignored and categories not in the dictionary are passed through >>> c.rename_categories({'a': 'A', 'c': 'C'}) ['A', 'A', 'b'] Categories (2, object): ['A', 'b'] You may also provide a callable to create the new categories >>> c.rename_categories(lambda x: x.upper()) ['A', 'A', 'B'] Categories (2, object): ['A', 'B']
pandas.CategoricalIndex.reorder_categories CategoricalIndex.reorder_categories(*args, **kwargs)[source] Reorder categories as specified in new_categories. new_categories needs to include all old categories and no new category items. Parameters new_categories:Index-like The categories in new order. ordered:bool, optional Whether or not the categorical is treated as an ordered categorical. If not given, do not change the ordered information. inplace:bool, default False Whether or not to reorder the categories inplace or return a copy of this categorical with reordered categories. Deprecated since version 1.3.0. Returns cat:Categorical or None Categorical with reordered categories or None if inplace=True. Raises ValueError If the new categories do not contain all old category items or include any new ones See also rename_categories Rename categories. add_categories Add new categories. remove_categories Remove the specified categories. remove_unused_categories Remove categories which are not used. set_categories Set the categories to the specified ones.
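The docstring above has no example; a minimal sketch (not from the pandas docs):

```python
import pandas as pd

idx = pd.CategoricalIndex(["a", "b", "c"])

# The new order must contain exactly the old categories, no more, no fewer.
reordered = idx.reorder_categories(["c", "b", "a"], ordered=True)
print(list(reordered.categories))  # ['c', 'b', 'a']
print(reordered.ordered)           # True
print(list(reordered))             # ['a', 'b', 'c'] -- values are unchanged
```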
pandas.CategoricalIndex.set_categories CategoricalIndex.set_categories(*args, **kwargs)[source] Set the categories to the specified new_categories. new_categories can include new categories (which will result in unused categories) or remove old categories (which results in values set to NaN). If rename==True, the categories will simply be renamed (fewer or more items than in the old categories will result in values set to NaN or in unused categories respectively). This method can be used to perform more than one action of adding, removing, and reordering simultaneously and is therefore faster than performing the individual steps via the more specialised methods. On the other hand this method does not do checks (e.g., whether the old categories are included in the new categories on a reorder), which can result in surprising changes, for example when using special string dtypes, which do not consider an S1 string equal to a single-char Python string. Parameters new_categories:Index-like The categories in new order. ordered:bool, default False Whether or not the categorical is treated as an ordered categorical. If not given, do not change the ordered information. rename:bool, default False Whether or not the new_categories should be considered as a rename of the old categories or as reordered categories. inplace:bool, default False Whether or not to reorder the categories in-place or return a copy of this categorical with reordered categories. Deprecated since version 1.3.0. Returns Categorical with reordered categories or None if inplace. Raises ValueError If new_categories does not validate as categories See also rename_categories Rename categories. reorder_categories Reorder categories. add_categories Add new categories. remove_categories Remove the specified categories. remove_unused_categories Remove categories which are not used.
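The docstring above has no example; a minimal sketch (not from the pandas docs) of the add-and-remove behavior in one call:

```python
import pandas as pd

idx = pd.CategoricalIndex(["a", "b", "a"])

# 'c' is added as an unused category; dropping 'b' turns those values into NaN.
result = idx.set_categories(["a", "c"])
print(list(result.categories))  # ['a', 'c']
print(result.isna().tolist())   # [False, True, False]
```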
pandas.concat pandas.concat(objs, axis=0, join='outer', ignore_index=False, keys=None, levels=None, names=None, verify_integrity=False, sort=False, copy=True)[source] Concatenate pandas objects along a particular axis with optional set logic along the other axes. Can also add a layer of hierarchical indexing on the concatenation axis, which may be useful if the labels are the same (or overlapping) on the passed axis number. Parameters objs:a sequence or mapping of Series or DataFrame objects If a mapping is passed, the sorted keys will be used as the keys argument, unless it is passed, in which case the values will be selected (see below). Any None objects will be dropped silently unless they are all None in which case a ValueError will be raised. axis:{0/’index’, 1/’columns’}, default 0 The axis to concatenate along. join:{‘inner’, ‘outer’}, default ‘outer’ How to handle indexes on other axis (or axes). ignore_index:bool, default False If True, do not use the index values along the concatenation axis. The resulting axis will be labeled 0, …, n - 1. This is useful if you are concatenating objects where the concatenation axis does not have meaningful indexing information. Note the index values on the other axes are still respected in the join. keys:sequence, default None If multiple levels passed, should contain tuples. Construct hierarchical index using the passed keys as the outermost level. levels:list of sequences, default None Specific levels (unique values) to use for constructing a MultiIndex. Otherwise they will be inferred from the keys. names:list, default None Names for the levels in the resulting hierarchical index. verify_integrity:bool, default False Check whether the new concatenated axis contains duplicates. This can be very expensive relative to the actual data concatenation. sort:bool, default False Sort non-concatenation axis if it is not already aligned when join is ‘outer’. 
This has no effect when join='inner', which already preserves the order of the non-concatenation axis. Changed in version 1.0.0: Changed to not sort by default. copy:bool, default True If False, do not copy data unnecessarily. Returns object, type of objs When concatenating all Series along the index (axis=0), a Series is returned. When objs contains at least one DataFrame, a DataFrame is returned. When concatenating along the columns (axis=1), a DataFrame is returned. See also Series.append Concatenate Series. DataFrame.append Concatenate DataFrames. DataFrame.join Join DataFrames using indexes. DataFrame.merge Merge DataFrames by indexes or columns. Notes The keys, levels, and names arguments are all optional. A walkthrough of how this method fits in with other tools for combining pandas objects can be found here. Examples Combine two Series. >>> s1 = pd.Series(['a', 'b']) >>> s2 = pd.Series(['c', 'd']) >>> pd.concat([s1, s2]) 0 a 1 b 0 c 1 d dtype: object Clear the existing index and reset it in the result by setting the ignore_index option to True. >>> pd.concat([s1, s2], ignore_index=True) 0 a 1 b 2 c 3 d dtype: object Add a hierarchical index at the outermost level of the data with the keys option. >>> pd.concat([s1, s2], keys=['s1', 's2']) s1 0 a 1 b s2 0 c 1 d dtype: object Label the index keys you create with the names option. >>> pd.concat([s1, s2], keys=['s1', 's2'], ... names=['Series name', 'Row ID']) Series name Row ID s1 0 a 1 b s2 0 c 1 d dtype: object Combine two DataFrame objects with identical columns. >>> df1 = pd.DataFrame([['a', 1], ['b', 2]], ... columns=['letter', 'number']) >>> df1 letter number 0 a 1 1 b 2 >>> df2 = pd.DataFrame([['c', 3], ['d', 4]], ... columns=['letter', 'number']) >>> df2 letter number 0 c 3 1 d 4 >>> pd.concat([df1, df2]) letter number 0 a 1 1 b 2 0 c 3 1 d 4 Combine DataFrame objects with overlapping columns and return everything. Columns outside the intersection will be filled with NaN values. 
>>> df3 = pd.DataFrame([['c', 3, 'cat'], ['d', 4, 'dog']], ... columns=['letter', 'number', 'animal']) >>> df3 letter number animal 0 c 3 cat 1 d 4 dog >>> pd.concat([df1, df3], sort=False) letter number animal 0 a 1 NaN 1 b 2 NaN 0 c 3 cat 1 d 4 dog Combine DataFrame objects with overlapping columns and return only those that are shared by passing inner to the join keyword argument. >>> pd.concat([df1, df3], join="inner") letter number 0 a 1 1 b 2 0 c 3 1 d 4 Combine DataFrame objects horizontally along the x axis by passing in axis=1. >>> df4 = pd.DataFrame([['bird', 'polly'], ['monkey', 'george']], ... columns=['animal', 'name']) >>> pd.concat([df1, df4], axis=1) letter number animal name 0 a 1 bird polly 1 b 2 monkey george Prevent the result from including duplicate index values with the verify_integrity option. >>> df5 = pd.DataFrame([1], index=['a']) >>> df5 0 a 1 >>> df6 = pd.DataFrame([2], index=['a']) >>> df6 0 a 2 >>> pd.concat([df5, df6], verify_integrity=True) Traceback (most recent call last): ... ValueError: Indexes have overlapping values: ['a']
pandas.core.groupby.DataFrameGroupBy.aggregate DataFrameGroupBy.aggregate(func=None, *args, engine=None, engine_kwargs=None, **kwargs)[source] Aggregate using one or more operations over the specified axis. Parameters func:function, str, list or dict Function to use for aggregating the data. If a function, must either work when passed a DataFrame or when passed to DataFrame.apply. Accepted combinations are: function string function name list of functions and/or function names, e.g. [np.sum, 'mean'] dict of axis labels -> functions, function names or list of such. Can also accept a Numba JIT function with engine='numba' specified. Only passing a single function is supported with this engine. If the 'numba' engine is chosen, the function must be a user defined function with values and index as the first and second arguments respectively in the function signature. Each group’s index will be passed to the user defined function and optionally available for use. Changed in version 1.1.0. *args Positional arguments to pass to func. engine:str, default None 'cython' : Runs the function through C-extensions from cython. 'numba' : Runs the function through JIT compiled code from numba. None : Defaults to 'cython' or globally setting compute.use_numba New in version 1.1.0. engine_kwargs:dict, default None For 'cython' engine, there are no accepted engine_kwargs For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} and will be applied to the function New in version 1.1.0. **kwargs Keyword arguments to be passed into func. Returns DataFrame See also DataFrame.groupby.apply Apply function func group-wise and combine the results together. DataFrame.groupby.transform Aggregate using one or more operations over the specified axis. 
DataFrame.aggregate Aggregate using one or more operations over the specified axis. Notes When using engine='numba', there will be no “fall back” behavior internally. The group data and group index will be passed as numpy arrays to the JITed user defined function, and no alternative execution attempts will be tried. Functions that mutate the passed object can produce unexpected behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods for more details. Changed in version 1.3.0: The resulting dtype will reflect the return value of the passed func, see the examples below. Examples >>> df = pd.DataFrame( ... { ... "A": [1, 1, 2, 2], ... "B": [1, 2, 3, 4], ... "C": [0.362838, 0.227877, 1.267767, -0.562860], ... } ... ) >>> df A B C 0 1 1 0.362838 1 1 2 0.227877 2 2 3 1.267767 3 2 4 -0.562860 The aggregation is for each column. >>> df.groupby('A').agg('min') B C A 1 1 0.227877 2 3 -0.562860 Multiple aggregations >>> df.groupby('A').agg(['min', 'max']) B C min max min max A 1 1 2 0.227877 0.362838 2 3 4 -0.562860 1.267767 Select a column for aggregation >>> df.groupby('A').B.agg(['min', 'max']) min max A 1 1 2 2 3 4 Different aggregations per column >>> df.groupby('A').agg({'B': ['min', 'max'], 'C': 'sum'}) B C min max sum A 1 1 2 0.590715 2 3 4 0.704907 To control the output names with different aggregations per column, pandas supports “named aggregation” >>> df.groupby("A").agg( ... b_min=pd.NamedAgg(column="B", aggfunc="min"), ... c_sum=pd.NamedAgg(column="C", aggfunc="sum")) b_min c_sum A 1 1 0.590715 2 3 0.704907 The keywords are the output column names The values are tuples whose first element is the column to select and the second element is the aggregation to apply to that column. Pandas provides the pandas.NamedAgg namedtuple with the fields ['column', 'aggfunc'] to make it clearer what the arguments are. As usual, the aggregation can be a callable or a string alias. See Named aggregation for more. 
Changed in version 1.3.0: The resulting dtype will reflect the return value of the aggregating function. >>> df.groupby("A")[["B"]].agg(lambda x: x.astype(float).min()) B A 1 1.0 2 3.0
pandas.core.groupby.DataFrameGroupBy.all DataFrameGroupBy.all(skipna=True)[source] Return True if all values in the group are truthy, else False. Parameters skipna:bool, default True Flag to ignore NaN values during truth testing. Returns Series or DataFrame DataFrame or Series of boolean values, where a value is True if all elements are True within its respective group, False otherwise. See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame.
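The docstring above has no example; a minimal sketch (not from the pandas docs):

```python
import pandas as pd

df = pd.DataFrame({"key": ["x", "x", "y", "y"],
                   "val": [1, 0, 1, 1]})

# Group 'x' contains a falsy value (0), so it reduces to False.
out = df.groupby("key").all()
print(out)
```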
pandas.core.groupby.DataFrameGroupBy.any DataFrameGroupBy.any(skipna=True)[source] Return True if any value in the group is truthy, else False. Parameters skipna:bool, default True Flag to ignore NaN values during truth testing. Returns Series or DataFrame DataFrame or Series of boolean values, where a value is True if any element is True within its respective group, False otherwise. See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame.
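The docstring above has no example; a minimal sketch (not from the pandas docs):

```python
import pandas as pd

df = pd.DataFrame({"key": ["x", "x", "y"],
                   "val": [0, 0, 1]})

# Group 'x' is all zeros, so no element in it is truthy.
out = df.groupby("key").any()
print(out)
```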
pandas.core.groupby.DataFrameGroupBy.backfill DataFrameGroupBy.backfill(limit=None)[source] Backward fill the values. Parameters limit:int, optional Limit of how many values to fill. Returns Series or DataFrame Object with missing values filled. See also Series.bfill Backward fill the missing values in the dataset. DataFrame.bfill Backward fill the missing values in the dataset. Series.fillna Fill NaN values of a Series. DataFrame.fillna Fill NaN values of a DataFrame.
pandas.core.groupby.DataFrameGroupBy.bfill DataFrameGroupBy.bfill(limit=None)[source] Backward fill the values. Parameters limit:int, optional Limit of how many values to fill. Returns Series or DataFrame Object with missing values filled. See also Series.bfill Backward fill the missing values in the dataset. DataFrame.bfill Backward fill the missing values in the dataset. Series.fillna Fill NaN values of a Series. DataFrame.fillna Fill NaN values of a DataFrame.
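The backfill/bfill docstrings above have no example; a minimal sketch (not from the pandas docs) of the key point, that fills never cross group boundaries:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"key": ["x", "x", "y", "y"],
                   "val": [np.nan, 2.0, np.nan, 4.0]})

# Each NaN is filled from the next value within its own group only.
filled = df.groupby("key").bfill()
print(filled["val"].tolist())  # [2.0, 2.0, 4.0, 4.0]
```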
pandas.core.groupby.DataFrameGroupBy.boxplot DataFrameGroupBy.boxplot(subplots=True, column=None, fontsize=None, rot=0, grid=True, ax=None, figsize=None, layout=None, sharex=False, sharey=True, backend=None, **kwargs)[source] Make box plots from DataFrameGroupBy data. Parameters grouped:Grouped DataFrame subplots:bool False - no subplots will be used True - create a subplot for each group. column:column name or list of names, or vector Can be any valid input to groupby. fontsize:int or str rot:label rotation angle grid:Setting this to True will show the grid ax:Matplotlib axis object, default None figsize:A tuple (width, height) in inches layout:tuple (optional) The layout of the plot: (rows, columns). sharex:bool, default False Whether x-axes will be shared among subplots. sharey:bool, default True Whether y-axes will be shared among subplots. backend:str, default None Backend to use instead of the backend specified in the option plotting.backend. For instance, ‘matplotlib’. Alternatively, to specify the plotting.backend for the whole session, set pd.options.plotting.backend. New in version 1.0.0. **kwargs All other plotting keyword arguments to be passed to matplotlib’s boxplot function. Returns dict of key/value = group key/DataFrame.boxplot return value, or DataFrame.boxplot return value in case subplots=False Examples You can create boxplots for grouped data and show them as separate subplots: >>> import itertools >>> tuples = [t for t in itertools.product(range(1000), range(4))] >>> index = pd.MultiIndex.from_tuples(tuples, names=['lvl0', 'lvl1']) >>> data = np.random.randn(len(index),4) >>> df = pd.DataFrame(data, columns=list('ABCD'), index=index) >>> grouped = df.groupby(level='lvl1') >>> grouped.boxplot(rot=45, fontsize=12, figsize=(8,10)) The subplots=False option shows the boxplots in a single figure. >>> grouped.boxplot(subplots=False, rot=45, fontsize=12)
pandas.core.groupby.DataFrameGroupBy.corr propertyDataFrameGroupBy.corr Compute pairwise correlation of columns, excluding NA/null values. Parameters method:{‘pearson’, ‘kendall’, ‘spearman’} or callable Method of correlation: pearson : standard correlation coefficient kendall : Kendall Tau correlation coefficient spearman : Spearman rank correlation callable: callable with input two 1d ndarrays and returning a float. Note that the returned matrix from corr will have 1 along the diagonals and will be symmetric regardless of the callable’s behavior. min_periods:int, optional Minimum number of observations required per pair of columns to have a valid result. Currently only available for Pearson and Spearman correlation. Returns DataFrame Correlation matrix. See also DataFrame.corrwith Compute pairwise correlation with another DataFrame or Series. Series.corr Compute the correlation between two Series. Examples >>> def histogram_intersection(a, b): ... v = np.minimum(a, b).sum().round(decimals=1) ... return v >>> df = pd.DataFrame([(.2, .3), (.0, .6), (.6, .0), (.2, .1)], ... columns=['dogs', 'cats']) >>> df.corr(method=histogram_intersection) dogs cats dogs 1.0 0.3 cats 0.3 1.0
pandas.core.groupby.DataFrameGroupBy.corrwith propertyDataFrameGroupBy.corrwith Compute pairwise correlation. Pairwise correlation is computed between rows or columns of DataFrame with rows or columns of Series or DataFrame. DataFrames are first aligned along both axes before computing the correlations. Parameters other:DataFrame, Series Object with which to compute correlations. axis:{0 or ‘index’, 1 or ‘columns’}, default 0 The axis to use. 0 or ‘index’ to compute column-wise, 1 or ‘columns’ for row-wise. drop:bool, default False Drop missing indices from result. method:{‘pearson’, ‘kendall’, ‘spearman’} or callable Method of correlation: pearson : standard correlation coefficient kendall : Kendall Tau correlation coefficient spearman : Spearman rank correlation callable: callable with input two 1d ndarrays and returning a float. Returns Series Pairwise correlations. See also DataFrame.corr Compute pairwise correlation of columns.
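The docstring above has no example. The groupby version dispatches the same computation per group; a minimal sketch (not from the pandas docs) of the underlying DataFrame.corrwith behavior with a Series:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3, 4],
                   "b": [4, 3, 2, 1]})
other = pd.Series([1, 2, 3, 4])

# Each column of df is correlated with `other` after index alignment.
out = df.corrwith(other)
print(out)  # a: 1.0 (perfectly correlated), b: -1.0 (perfectly anti-correlated)
```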
pandas.core.groupby.DataFrameGroupBy.count DataFrameGroupBy.count()[source] Compute count of group, excluding missing values. Returns Series or DataFrame Count of values within each group. See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame.
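The docstring above has no example; a minimal sketch (not from the pandas docs) showing that missing values are excluded:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"key": ["x", "x", "y"],
                   "val": [1.0, np.nan, 3.0]})

# NaN is excluded, so group 'x' counts 1 despite having two rows.
out = df.groupby("key").count()
print(out)
```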
pandas.core.groupby.DataFrameGroupBy.cov propertyDataFrameGroupBy.cov Compute pairwise covariance of columns, excluding NA/null values. Compute the pairwise covariance among the series of a DataFrame. The returned data frame is the covariance matrix of the columns of the DataFrame. Both NA and null values are automatically excluded from the calculation. (See the note below about bias from missing values.) A threshold can be set for the minimum number of observations for each value created. Comparisons with observations below this threshold will be returned as NaN. This method is generally used for the analysis of time series data to understand the relationship between different measures across time. Parameters min_periods:int, optional Minimum number of observations required per pair of columns to have a valid result. ddof:int, default 1 Delta degrees of freedom. The divisor used in calculations is N - ddof, where N represents the number of elements. New in version 1.1.0. Returns DataFrame The covariance matrix of the series of the DataFrame. See also Series.cov Compute covariance with another Series. core.window.ExponentialMovingWindow.cov Exponential weighted sample covariance. core.window.Expanding.cov Expanding sample covariance. core.window.Rolling.cov Rolling sample covariance. Notes Returns the covariance matrix of the DataFrame’s time series. The covariance is normalized by N-ddof. For DataFrames that have Series that are missing data (assuming that data is missing at random) the returned covariance matrix will be an unbiased estimate of the variance and covariance between the member Series. However, for many applications this estimate may not be acceptable because the estimate covariance matrix is not guaranteed to be positive semi-definite. This could lead to estimate correlations having absolute values which are greater than one, and/or a non-invertible covariance matrix. See Estimation of covariance matrices for more details. 
Examples >>> df = pd.DataFrame([(1, 2), (0, 3), (2, 0), (1, 1)], ... columns=['dogs', 'cats']) >>> df.cov() dogs cats dogs 0.666667 -1.000000 cats -1.000000 1.666667 >>> np.random.seed(42) >>> df = pd.DataFrame(np.random.randn(1000, 5), ... columns=['a', 'b', 'c', 'd', 'e']) >>> df.cov() a b c d e a 0.998438 -0.020161 0.059277 -0.008943 0.014144 b -0.020161 1.059352 -0.008543 -0.024738 0.009826 c 0.059277 -0.008543 1.010670 -0.001486 -0.000271 d -0.008943 -0.024738 -0.001486 0.921297 -0.013692 e 0.014144 0.009826 -0.000271 -0.013692 0.977795 Minimum number of periods This method also supports an optional min_periods keyword that specifies the required minimum number of non-NA observations for each column pair in order to have a valid result: >>> np.random.seed(42) >>> df = pd.DataFrame(np.random.randn(20, 3), ... columns=['a', 'b', 'c']) >>> df.loc[df.index[:5], 'a'] = np.nan >>> df.loc[df.index[5:10], 'b'] = np.nan >>> df.cov(min_periods=12) a b c a 0.316741 NaN -0.150812 b NaN 1.248003 0.191417 c -0.150812 0.191417 0.895202
pandas.core.groupby.DataFrameGroupBy.cumcount DataFrameGroupBy.cumcount(ascending=True)[source] Number each item in each group from 0 to the length of that group - 1. Essentially this is equivalent to self.apply(lambda x: pd.Series(np.arange(len(x)), x.index)) Parameters ascending:bool, default True If False, number in reverse, from length of group - 1 to 0. Returns Series Sequence number of each element within each group. See also ngroup Number the groups themselves. Examples >>> df = pd.DataFrame([['a'], ['a'], ['a'], ['b'], ['b'], ['a']], ... columns=['A']) >>> df A 0 a 1 a 2 a 3 b 4 b 5 a >>> df.groupby('A').cumcount() 0 0 1 1 2 2 3 0 4 1 5 3 dtype: int64 >>> df.groupby('A').cumcount(ascending=False) 0 3 1 2 2 1 3 1 4 0 5 0 dtype: int64
pandas.core.groupby.DataFrameGroupBy.cummax DataFrameGroupBy.cummax(axis=0, **kwargs)[source] Cumulative max for each group. Returns Series or DataFrame See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame.
pandas.core.groupby.DataFrameGroupBy.cummin DataFrameGroupBy.cummin(axis=0, **kwargs)[source] Cumulative min for each group. Returns Series or DataFrame See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame.
pandas.reference.api.pandas.core.groupby.dataframegroupby.cummin
pandas.core.groupby.DataFrameGroupBy.cumprod DataFrameGroupBy.cumprod(axis=0, *args, **kwargs)[source] Cumulative product for each group. Returns Series or DataFrame See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame.
pandas.reference.api.pandas.core.groupby.dataframegroupby.cumprod
pandas.core.groupby.DataFrameGroupBy.cumsum DataFrameGroupBy.cumsum(axis=0, *args, **kwargs)[source] Cumulative sum for each group. Returns Series or DataFrame See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame.
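As a quick sketch (illustrative names, not from the docs above): the running total restarts at every group boundary, so each group accumulates only its own values.

```python
import pandas as pd

# Illustrative data: cumulative sum restarts at each group boundary.
df = pd.DataFrame({"key": ["a", "a", "b", "b"],
                   "val": [1, 2, 3, 4]})
running_total = df.groupby("key")["val"].cumsum()
# Group 'a': 1, then 1+2=3; group 'b': 3, then 3+4=7.
print(running_total.tolist())  # [1, 3, 3, 7]
```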
pandas.reference.api.pandas.core.groupby.dataframegroupby.cumsum
pandas.core.groupby.DataFrameGroupBy.describe DataFrameGroupBy.describe(**kwargs)[source] Generate descriptive statistics. Descriptive statistics include those that summarize the central tendency, dispersion and shape of a dataset’s distribution, excluding NaN values. Analyzes both numeric and object series, as well as DataFrame column sets of mixed data types. The output will vary depending on what is provided. Refer to the notes below for more detail. Parameters percentiles:list-like of numbers, optional The percentiles to include in the output. All should fall between 0 and 1. The default is [.25, .5, .75], which returns the 25th, 50th, and 75th percentiles. include:‘all’, list-like of dtypes or None (default), optional A white list of data types to include in the result. Ignored for Series. Here are the options: ‘all’ : All columns of the input will be included in the output. A list-like of dtypes : Limits the results to the provided data types. To limit the result to numeric types submit numpy.number. To limit it instead to object columns submit the numpy.object data type. Strings can also be used in the style of select_dtypes (e.g. df.describe(include=['O'])). To select pandas categorical columns, use 'category' None (default) : The result will include all numeric columns. exclude:list-like of dtypes or None (default), optional, A black list of data types to omit from the result. Ignored for Series. Here are the options: A list-like of dtypes : Excludes the provided data types from the result. To exclude numeric types submit numpy.number. To exclude object columns submit the data type numpy.object. Strings can also be used in the style of select_dtypes (e.g. df.describe(exclude=['O'])). To exclude pandas categorical columns, use 'category' None (default) : The result will exclude nothing. datetime_is_numeric:bool, default False Whether to treat datetime dtypes as numeric. This affects statistics calculated for the column. 
For DataFrame input, this also controls whether datetime columns are included by default. New in version 1.1.0. Returns Series or DataFrame Summary statistics of the Series or Dataframe provided. See also DataFrame.count Count number of non-NA/null observations. DataFrame.max Maximum of the values in the object. DataFrame.min Minimum of the values in the object. DataFrame.mean Mean of the values. DataFrame.std Standard deviation of the observations. DataFrame.select_dtypes Subset of a DataFrame including/excluding columns based on their dtype. Notes For numeric data, the result’s index will include count, mean, std, min, max as well as lower, 50 and upper percentiles. By default the lower percentile is 25 and the upper percentile is 75. The 50 percentile is the same as the median. For object data (e.g. strings or timestamps), the result’s index will include count, unique, top, and freq. The top is the most common value. The freq is the most common value’s frequency. Timestamps also include the first and last items. If multiple object values have the highest count, then the count and top results will be arbitrarily chosen from among those with the highest count. For mixed data types provided via a DataFrame, the default is to return only an analysis of numeric columns. If the dataframe consists only of object and categorical data without any numeric columns, the default is to return an analysis of both the object and categorical columns. If include='all' is provided as an option, the result will include a union of attributes of each type. The include and exclude parameters can be used to limit which columns in a DataFrame are analyzed for the output. The parameters are ignored when analyzing a Series. Examples Describing a numeric Series. >>> s = pd.Series([1, 2, 3]) >>> s.describe() count 3.0 mean 2.0 std 1.0 min 1.0 25% 1.5 50% 2.0 75% 2.5 max 3.0 dtype: float64 Describing a categorical Series. 
>>> s = pd.Series(['a', 'a', 'b', 'c']) >>> s.describe() count 4 unique 3 top a freq 2 dtype: object Describing a timestamp Series. >>> s = pd.Series([ ... np.datetime64("2000-01-01"), ... np.datetime64("2010-01-01"), ... np.datetime64("2010-01-01") ... ]) >>> s.describe(datetime_is_numeric=True) count 3 mean 2006-09-01 08:00:00 min 2000-01-01 00:00:00 25% 2004-12-31 12:00:00 50% 2010-01-01 00:00:00 75% 2010-01-01 00:00:00 max 2010-01-01 00:00:00 dtype: object Describing a DataFrame. By default only numeric fields are returned. >>> df = pd.DataFrame({'categorical': pd.Categorical(['d','e','f']), ... 'numeric': [1, 2, 3], ... 'object': ['a', 'b', 'c'] ... }) >>> df.describe() numeric count 3.0 mean 2.0 std 1.0 min 1.0 25% 1.5 50% 2.0 75% 2.5 max 3.0 Describing all columns of a DataFrame regardless of data type. >>> df.describe(include='all') categorical numeric object count 3 3.0 3 unique 3 NaN 3 top f NaN a freq 1 NaN 1 mean NaN 2.0 NaN std NaN 1.0 NaN min NaN 1.0 NaN 25% NaN 1.5 NaN 50% NaN 2.0 NaN 75% NaN 2.5 NaN max NaN 3.0 NaN Describing a column from a DataFrame by accessing it as an attribute. >>> df.numeric.describe() count 3.0 mean 2.0 std 1.0 min 1.0 25% 1.5 50% 2.0 75% 2.5 max 3.0 Name: numeric, dtype: float64 Including only numeric columns in a DataFrame description. >>> df.describe(include=[np.number]) numeric count 3.0 mean 2.0 std 1.0 min 1.0 25% 1.5 50% 2.0 75% 2.5 max 3.0 Including only string columns in a DataFrame description. >>> df.describe(include=[object]) object count 3 unique 3 top a freq 1 Including only categorical columns from a DataFrame description. >>> df.describe(include=['category']) categorical count 3 unique 3 top d freq 1 Excluding numeric columns from a DataFrame description. >>> df.describe(exclude=[np.number]) categorical object count 3 3 unique 3 3 top f a freq 1 1 Excluding object columns from a DataFrame description. 
>>> df.describe(exclude=[object]) categorical numeric count 3 3.0 unique 3 NaN top f NaN freq 1 NaN mean NaN 2.0 std NaN 1.0 min NaN 1.0 25% NaN 1.5 50% NaN 2.0 75% NaN 2.5 max NaN 3.0
pandas.reference.api.pandas.core.groupby.dataframegroupby.describe
pandas.core.groupby.DataFrameGroupBy.diff propertyDataFrameGroupBy.diff First discrete difference of element. Calculates the difference of a Dataframe element compared with another element in the Dataframe (default is element in previous row). Parameters periods:int, default 1 Periods to shift for calculating difference, accepts negative values. axis:{0 or ‘index’, 1 or ‘columns’}, default 0 Take difference over rows (0) or columns (1). Returns Dataframe First differences of the Series. See also Dataframe.pct_change Percent change over given number of periods. Dataframe.shift Shift index by desired number of periods with an optional time freq. Series.diff First discrete difference of object. Notes For boolean dtypes, this uses operator.xor() rather than operator.sub(). The result is calculated according to current dtype in Dataframe, however dtype of the result is always float64. Examples Difference with previous row >>> df = pd.DataFrame({'a': [1, 2, 3, 4, 5, 6], ... 'b': [1, 1, 2, 3, 5, 8], ... 'c': [1, 4, 9, 16, 25, 36]}) >>> df a b c 0 1 1 1 1 2 1 4 2 3 2 9 3 4 3 16 4 5 5 25 5 6 8 36 >>> df.diff() a b c 0 NaN NaN NaN 1 1.0 0.0 3.0 2 1.0 1.0 5.0 3 1.0 1.0 7.0 4 1.0 2.0 9.0 5 1.0 3.0 11.0 Difference with previous column >>> df.diff(axis=1) a b c 0 NaN 0 0 1 NaN -1 3 2 NaN -1 7 3 NaN -1 13 4 NaN 0 20 5 NaN 2 28 Difference with 3rd previous row >>> df.diff(periods=3) a b c 0 NaN NaN NaN 1 NaN NaN NaN 2 NaN NaN NaN 3 3.0 2.0 15.0 4 3.0 4.0 21.0 5 3.0 6.0 27.0 Difference with following row >>> df.diff(periods=-1) a b c 0 -1.0 0.0 -3.0 1 -1.0 -1.0 -5.0 2 -1.0 -1.0 -7.0 3 -1.0 -2.0 -9.0 4 -1.0 -3.0 -11.0 5 NaN NaN NaN Overflow in input dtype >>> df = pd.DataFrame({'a': [1, 0]}, dtype=np.uint8) >>> df.diff() a 0 NaN 1 255.0
pandas.reference.api.pandas.core.groupby.dataframegroupby.diff
pandas.core.groupby.DataFrameGroupBy.ffill DataFrameGroupBy.ffill(limit=None)[source] Forward fill the values. Parameters limit:int, optional Limit of how many values to fill. Returns Series or DataFrame Object with missing values filled. See also Series.ffill Forward fill NaN values in a Series. DataFrame.ffill Object with missing values filled or None if inplace=True. Series.fillna Fill NaN values of a Series. DataFrame.fillna Fill NaN values of a DataFrame.
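A minimal sketch of the key property (illustrative data): forward filling never crosses a group boundary, so a group's leading NaN stays NaN.

```python
import numpy as np
import pandas as pd

# Illustrative data: forward filling is confined to each group.
df = pd.DataFrame({"key": ["a", "a", "b", "b"],
                   "val": [1.0, np.nan, np.nan, 2.0]})
filled = df.groupby("key")["val"].ffill()
# Row 1 is filled from row 0 (same group); row 2 stays NaN because
# group 'b' has no earlier observation to propagate.
```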
pandas.reference.api.pandas.core.groupby.dataframegroupby.ffill
pandas.core.groupby.DataFrameGroupBy.fillna propertyDataFrameGroupBy.fillna Fill NA/NaN values using the specified method. Parameters value:scalar, dict, Series, or DataFrame Value to use to fill holes (e.g. 0), alternately a dict/Series/DataFrame of values specifying which value to use for each index (for a Series) or column (for a DataFrame). Values not in the dict/Series/DataFrame will not be filled. This value cannot be a list. method:{‘backfill’, ‘bfill’, ‘pad’, ‘ffill’, None}, default None Method to use for filling holes in reindexed Series pad / ffill: propagate last valid observation forward to next valid backfill / bfill: use next valid observation to fill gap. axis:{0 or ‘index’, 1 or ‘columns’} Axis along which to fill missing values. inplace:bool, default False If True, fill in-place. Note: this will modify any other views on this object (e.g., a no-copy slice for a column in a DataFrame). limit:int, default None If method is specified, this is the maximum number of consecutive NaN values to forward/backward fill. In other words, if there is a gap with more than this number of consecutive NaNs, it will only be partially filled. If method is not specified, this is the maximum number of entries along the entire axis where NaNs will be filled. Must be greater than 0 if not None. downcast:dict, default is None A dict of item->dtype of what to downcast if possible, or the string ‘infer’ which will try to downcast to an appropriate equal type (e.g. float64 to int64 if possible). Returns DataFrame or None Object with missing values filled or None if inplace=True. See also interpolate Fill NaN values using interpolation. reindex Conform object to new index. asfreq Convert TimeSeries to specified frequency. Examples >>> df = pd.DataFrame([[np.nan, 2, np.nan, 0], ... [3, 4, np.nan, 1], ... [np.nan, np.nan, np.nan, np.nan], ... [np.nan, 3, np.nan, 4]], ... 
columns=list("ABCD")) >>> df A B C D 0 NaN 2.0 NaN 0.0 1 3.0 4.0 NaN 1.0 2 NaN NaN NaN NaN 3 NaN 3.0 NaN 4.0 Replace all NaN elements with 0s. >>> df.fillna(0) A B C D 0 0.0 2.0 0.0 0.0 1 3.0 4.0 0.0 1.0 2 0.0 0.0 0.0 0.0 3 0.0 3.0 0.0 4.0 We can also propagate non-null values forward or backward. >>> df.fillna(method="ffill") A B C D 0 NaN 2.0 NaN 0.0 1 3.0 4.0 NaN 1.0 2 3.0 4.0 NaN 1.0 3 3.0 3.0 NaN 4.0 Replace all NaN elements in column ‘A’, ‘B’, ‘C’, and ‘D’, with 0, 1, 2, and 3 respectively. >>> values = {"A": 0, "B": 1, "C": 2, "D": 3} >>> df.fillna(value=values) A B C D 0 0.0 2.0 2.0 0.0 1 3.0 4.0 2.0 1.0 2 0.0 1.0 2.0 3.0 3 0.0 3.0 2.0 4.0 Only replace the first NaN element. >>> df.fillna(value=values, limit=1) A B C D 0 0.0 2.0 2.0 0.0 1 3.0 4.0 NaN 1.0 2 NaN 1.0 NaN 3.0 3 NaN 3.0 NaN 4.0 When filling using a DataFrame, replacement happens along the same column names and same indices >>> df2 = pd.DataFrame(np.zeros((4, 4)), columns=list("ABCE")) >>> df.fillna(df2) A B C D 0 0.0 2.0 0.0 0.0 1 3.0 4.0 0.0 1.0 2 0.0 0.0 0.0 NaN 3 0.0 3.0 0.0 4.0 Note that column D is not affected since it is not present in df2.
pandas.reference.api.pandas.core.groupby.dataframegroupby.fillna
pandas.core.groupby.DataFrameGroupBy.filter DataFrameGroupBy.filter(func, dropna=True, *args, **kwargs)[source] Return a copy of a DataFrame excluding filtered elements. Elements from groups are filtered if they do not satisfy the boolean criterion specified by func. Parameters func:function Function to apply to each subframe. Should return True or False. dropna:bool, default True Drop groups that do not pass the filter. If False, groups that evaluate False are filled with NaNs. Returns filtered:DataFrame Notes Each subframe is endowed the attribute ‘name’ in case you need to know which group you are working on. Functions that mutate the passed object can produce unexpected behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods for more details. Examples >>> df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar', ... 'foo', 'bar'], ... 'B' : [1, 2, 3, 4, 5, 6], ... 'C' : [2.0, 5., 8., 1., 2., 9.]}) >>> grouped = df.groupby('A') >>> grouped.filter(lambda x: x['B'].mean() > 3.) A B C 1 bar 2 5.0 3 bar 4 1.0 5 bar 6 9.0
pandas.reference.api.pandas.core.groupby.dataframegroupby.filter
pandas.core.groupby.DataFrameGroupBy.hist propertyDataFrameGroupBy.hist Make a histogram of the DataFrame’s columns. A histogram is a representation of the distribution of data. This function calls matplotlib.pyplot.hist(), on each series in the DataFrame, resulting in one histogram per column. Parameters data:DataFrame The pandas object holding the data. column:str or sequence, optional If passed, will be used to limit data to a subset of columns. by:object, optional If passed, then used to form histograms for separate groups. grid:bool, default True Whether to show axis grid lines. xlabelsize:int, default None If specified changes the x-axis label size. xrot:float, default None Rotation of x axis labels. For example, a value of 90 displays the x labels rotated 90 degrees clockwise. ylabelsize:int, default None If specified changes the y-axis label size. yrot:float, default None Rotation of y axis labels. For example, a value of 90 displays the y labels rotated 90 degrees clockwise. ax:Matplotlib axes object, default None The axes to plot the histogram on. sharex:bool, default True if ax is None else False In case subplots=True, share x axis and set some x axis labels to invisible; defaults to True if ax is None otherwise False if an ax is passed in. Note that passing in both an ax and sharex=True will alter all x axis labels for all subplots in a figure. sharey:bool, default False In case subplots=True, share y axis and set some y axis labels to invisible. figsize:tuple, optional The size in inches of the figure to create. Uses the value in matplotlib.rcParams by default. layout:tuple, optional Tuple of (rows, columns) for the layout of the histograms. bins:int or sequence, default 10 Number of histogram bins to be used. If an integer is given, bins + 1 bin edges are calculated and returned. If bins is a sequence, gives bin edges, including left edge of first bin and right edge of last bin. In this case, bins is returned unmodified. 
backend:str, default None Backend to use instead of the backend specified in the option plotting.backend. For instance, ‘matplotlib’. Alternatively, to specify the plotting.backend for the whole session, set pd.options.plotting.backend. New in version 1.0.0. legend:bool, default False Whether to show the legend. New in version 1.1.0. **kwargs All other plotting keyword arguments to be passed to matplotlib.pyplot.hist(). Returns matplotlib.AxesSubplot or numpy.ndarray of them See also matplotlib.pyplot.hist Plot a histogram using matplotlib. Examples This example draws a histogram based on the length and width of some animals, displayed in three bins >>> df = pd.DataFrame({ ... 'length': [1.5, 0.5, 1.2, 0.9, 3], ... 'width': [0.7, 0.2, 0.15, 0.2, 1.1] ... }, index=['pig', 'rabbit', 'duck', 'chicken', 'horse']) >>> hist = df.hist(bins=3)
pandas.reference.api.pandas.core.groupby.dataframegroupby.hist
pandas.core.groupby.DataFrameGroupBy.idxmax DataFrameGroupBy.idxmax(axis=0, skipna=True)[source] Return index of first occurrence of maximum over requested axis. NA/null values are excluded. Parameters axis:{0 or ‘index’, 1 or ‘columns’}, default 0 The axis to use. 0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise. skipna:bool, default True Exclude NA/null values. If an entire row/column is NA, the result will be NA. Returns Series Indexes of maxima along the specified axis. Raises ValueError If the row/column is empty See also Series.idxmax Return index of the maximum element. Notes This method is the DataFrame version of ndarray.argmax. Examples Consider a dataset containing food consumption in Argentina. >>> df = pd.DataFrame({'consumption': [10.51, 103.11, 55.48], ... 'co2_emissions': [37.2, 19.66, 1712]}, ... index=['Pork', 'Wheat Products', 'Beef']) >>> df consumption co2_emissions Pork 10.51 37.20 Wheat Products 103.11 19.66 Beef 55.48 1712.00 By default, it returns the index for the maximum value in each column. >>> df.idxmax() consumption Wheat Products co2_emissions Beef dtype: object To return the index for the maximum value in each row, use axis="columns". >>> df.idxmax(axis="columns") Pork co2_emissions Wheat Products consumption Beef co2_emissions dtype: object
pandas.reference.api.pandas.core.groupby.dataframegroupby.idxmax
pandas.core.groupby.DataFrameGroupBy.idxmin DataFrameGroupBy.idxmin(axis=0, skipna=True)[source] Return index of first occurrence of minimum over requested axis. NA/null values are excluded. Parameters axis:{0 or ‘index’, 1 or ‘columns’}, default 0 The axis to use. 0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise. skipna:bool, default True Exclude NA/null values. If an entire row/column is NA, the result will be NA. Returns Series Indexes of minima along the specified axis. Raises ValueError If the row/column is empty See also Series.idxmin Return index of the minimum element. Notes This method is the DataFrame version of ndarray.argmin. Examples Consider a dataset containing food consumption in Argentina. >>> df = pd.DataFrame({'consumption': [10.51, 103.11, 55.48], ... 'co2_emissions': [37.2, 19.66, 1712]}, ... index=['Pork', 'Wheat Products', 'Beef']) >>> df consumption co2_emissions Pork 10.51 37.20 Wheat Products 103.11 19.66 Beef 55.48 1712.00 By default, it returns the index for the minimum value in each column. >>> df.idxmin() consumption Pork co2_emissions Wheat Products dtype: object To return the index for the minimum value in each row, use axis="columns". >>> df.idxmin(axis="columns") Pork consumption Wheat Products co2_emissions Beef consumption dtype: object
pandas.reference.api.pandas.core.groupby.dataframegroupby.idxmin
pandas.core.groupby.DataFrameGroupBy.mad propertyDataFrameGroupBy.mad Return the mean absolute deviation of the values over the requested axis. Parameters axis:{index (0), columns (1)} Axis for the function to be applied on. skipna:bool, default True Exclude NA/null values when computing the result. level:int or level name, default None If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series. Returns Series or DataFrame (if level specified)
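The page above gives no worked example, so here is a sketch of the quantity being computed. Rather than calling mad() itself (which was deprecated in pandas 1.5 and removed later), the equivalent explicit form makes the definition concrete and runs on any pandas version.

```python
import pandas as pd

# Mean absolute deviation is the mean of |x - mean(x)|; this explicit
# form is equivalent to the mad() described above.
s = pd.Series([1.0, 2.0, 3.0, 10.0])
mad_value = (s - s.mean()).abs().mean()
# mean = 4.0; absolute deviations = 3, 2, 1, 6; their mean = 3.0
print(mad_value)  # 3.0
```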
pandas.reference.api.pandas.core.groupby.dataframegroupby.mad
pandas.core.groupby.DataFrameGroupBy.nunique DataFrameGroupBy.nunique(dropna=True)[source] Return DataFrame with counts of unique elements in each position. Parameters dropna:bool, default True Don’t include NaN in the counts. Returns nunique: DataFrame Examples >>> df = pd.DataFrame({'id': ['spam', 'egg', 'egg', 'spam', ... 'ham', 'ham'], ... 'value1': [1, 5, 5, 2, 5, 5], ... 'value2': list('abbaxy')}) >>> df id value1 value2 0 spam 1 a 1 egg 5 b 2 egg 5 b 3 spam 2 a 4 ham 5 x 5 ham 5 y >>> df.groupby('id').nunique() value1 value2 id egg 1 1 ham 1 2 spam 2 1 Check for rows with the same id but conflicting values: >>> df.groupby('id').filter(lambda g: (g.nunique() > 1).any()) id value1 value2 0 spam 1 a 3 spam 2 a 4 ham 5 x 5 ham 5 y
pandas.reference.api.pandas.core.groupby.dataframegroupby.nunique
pandas.core.groupby.DataFrameGroupBy.pad DataFrameGroupBy.pad(limit=None)[source] Forward fill the values. Parameters limit:int, optional Limit of how many values to fill. Returns Series or DataFrame Object with missing values filled. See also Series.ffill Forward fill NaN values in a Series. DataFrame.ffill Object with missing values filled or None if inplace=True. Series.fillna Fill NaN values of a Series. DataFrame.fillna Fill NaN values of a DataFrame.
pandas.reference.api.pandas.core.groupby.dataframegroupby.pad
pandas.core.groupby.DataFrameGroupBy.pct_change DataFrameGroupBy.pct_change(periods=1, fill_method='ffill', limit=None, freq=None, axis=0)[source] Calculate pct_change of each value to previous entry in group. Returns Series or DataFrame Percentage changes within each group. See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame.
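A minimal sketch (illustrative data): the percent change is computed against the previous row within the same group, so each group's first row has no prior value and comes back NaN.

```python
import pandas as pd

# Illustrative data: pct_change never compares across group boundaries.
df = pd.DataFrame({"key": ["a", "a", "a", "b", "b"],
                   "val": [10.0, 12.0, 15.0, 100.0, 110.0]})
change = df.groupby("key")["val"].pct_change()
# Group 'a': NaN, 0.20, 0.25; group 'b': NaN, 0.10.
```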
pandas.reference.api.pandas.core.groupby.dataframegroupby.pct_change
pandas.core.groupby.DataFrameGroupBy.plot propertyDataFrameGroupBy.plot Class implementing the .plot attribute for groupby objects.
pandas.reference.api.pandas.core.groupby.dataframegroupby.plot
pandas.core.groupby.DataFrameGroupBy.quantile DataFrameGroupBy.quantile(q=0.5, interpolation='linear')[source] Return group values at the given quantile, a la numpy.percentile. Parameters q:float or array-like, default 0.5 (50% quantile) Value(s) between 0 and 1 providing the quantile(s) to compute. interpolation:{‘linear’, ‘lower’, ‘higher’, ‘midpoint’, ‘nearest’} Method to use when the desired quantile falls between two points. Returns Series or DataFrame Return type determined by caller of GroupBy object. See also Series.quantile Similar method for Series. DataFrame.quantile Similar method for DataFrame. numpy.percentile NumPy method to compute qth percentile. Examples >>> df = pd.DataFrame([ ... ['a', 1], ['a', 2], ['a', 3], ... ['b', 1], ['b', 3], ['b', 5] ... ], columns=['key', 'val']) >>> df.groupby('key').quantile() val key a 2.0 b 3.0
pandas.reference.api.pandas.core.groupby.dataframegroupby.quantile
pandas.core.groupby.DataFrameGroupBy.rank DataFrameGroupBy.rank(method='average', ascending=True, na_option='keep', pct=False, axis=0)[source] Provide the rank of values within each group. Parameters method:{‘average’, ‘min’, ‘max’, ‘first’, ‘dense’}, default ‘average’ average: average rank of group. min: lowest rank in group. max: highest rank in group. first: ranks assigned in order they appear in the array. dense: like ‘min’, but rank always increases by 1 between groups. ascending:bool, default True False for ranks by high (1) to low (N). na_option:{‘keep’, ‘top’, ‘bottom’}, default ‘keep’ keep: leave NA values where they are. top: smallest rank if ascending. bottom: smallest rank if descending. pct:bool, default False Compute percentage rank of data within each group. axis:int, default 0 The axis of the object over which to compute the rank. Returns DataFrame with ranking of values within each group See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame. Examples >>> df = pd.DataFrame( ... { ... "group": ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"], ... "value": [2, 4, 2, 3, 5, 1, 2, 4, 1, 5], ... } ... ) >>> df group value 0 a 2 1 a 4 2 a 2 3 a 3 4 a 5 5 b 1 6 b 2 7 b 4 8 b 1 9 b 5 >>> for method in ['average', 'min', 'max', 'dense', 'first']: ... df[f'{method}_rank'] = df.groupby('group')['value'].rank(method) >>> df group value average_rank min_rank max_rank dense_rank first_rank 0 a 2 1.5 1.0 2.0 1.0 1.0 1 a 4 4.0 4.0 4.0 3.0 4.0 2 a 2 1.5 1.0 2.0 1.0 2.0 3 a 3 3.0 3.0 3.0 2.0 3.0 4 a 5 5.0 5.0 5.0 4.0 5.0 5 b 1 1.5 1.0 2.0 1.0 1.0 6 b 2 3.0 3.0 3.0 2.0 3.0 7 b 4 4.0 4.0 4.0 3.0 4.0 8 b 1 1.5 1.0 2.0 1.0 2.0 9 b 5 5.0 5.0 5.0 4.0 5.0
pandas.reference.api.pandas.core.groupby.dataframegroupby.rank
pandas.core.groupby.DataFrameGroupBy.resample DataFrameGroupBy.resample(rule, *args, **kwargs)[source] Provide resampling when using a TimeGrouper. Given a grouper, the function resamples it according to a string “string” -> “frequency”. See the frequency aliases documentation for more details. Parameters rule:str or DateOffset The offset string or object representing target grouper conversion. *args, **kwargs Possible arguments are how, fill_method, limit, kind and on, and other arguments of TimeGrouper. Returns Grouper Return a new grouper with our resampler appended. See also Grouper Specify a frequency to resample with when grouping by a key. DatetimeIndex.resample Frequency conversion and resampling of time series. Examples >>> idx = pd.date_range('1/1/2000', periods=4, freq='T') >>> df = pd.DataFrame(data=4 * [range(2)], ... index=idx, ... columns=['a', 'b']) >>> df.iloc[2, 0] = 5 >>> df a b 2000-01-01 00:00:00 0 1 2000-01-01 00:01:00 0 1 2000-01-01 00:02:00 5 1 2000-01-01 00:03:00 0 1 Downsample the DataFrame into 3 minute bins and sum the values of the timestamps falling into a bin. >>> df.groupby('a').resample('3T').sum() a b a 0 2000-01-01 00:00:00 0 2 2000-01-01 00:03:00 0 1 5 2000-01-01 00:00:00 5 1 Upsample the series into 30 second bins. >>> df.groupby('a').resample('30S').sum() a b a 0 2000-01-01 00:00:00 0 1 2000-01-01 00:00:30 0 0 2000-01-01 00:01:00 0 1 2000-01-01 00:01:30 0 0 2000-01-01 00:02:00 0 0 2000-01-01 00:02:30 0 0 2000-01-01 00:03:00 0 1 5 2000-01-01 00:02:00 5 1 Resample by month. Values are assigned to the month of the period. >>> df.groupby('a').resample('M').sum() a b a 0 2000-01-31 0 3 5 2000-01-31 5 1 Downsample the series into 3 minute bins as above, but close the right side of the bin interval. 
>>> df.groupby('a').resample('3T', closed='right').sum() a b a 0 1999-12-31 23:57:00 0 1 2000-01-01 00:00:00 0 2 5 2000-01-01 00:00:00 5 1 Downsample the series into 3 minute bins and close the right side of the bin interval, but label each bin using the right edge instead of the left. >>> df.groupby('a').resample('3T', closed='right', label='right').sum() a b a 0 2000-01-01 00:00:00 0 1 2000-01-01 00:03:00 0 2 5 2000-01-01 00:03:00 5 1
pandas.reference.api.pandas.core.groupby.dataframegroupby.resample
pandas.core.groupby.DataFrameGroupBy.sample DataFrameGroupBy.sample(n=None, frac=None, replace=False, weights=None, random_state=None)[source] Return a random sample of items from each group. You can use random_state for reproducibility. New in version 1.1.0. Parameters n:int, optional Number of items to return for each group. Cannot be used with frac and must be no larger than the smallest group unless replace is True. Default is one if frac is None. frac:float, optional Fraction of items to return. Cannot be used with n. replace:bool, default False Allow or disallow sampling of the same row more than once. weights:list-like, optional Default None results in equal probability weighting. If passed a list-like then values must have the same length as the underlying DataFrame or Series object and will be used as sampling probabilities after normalization within each group. Values must be non-negative with at least one positive element within each group. random_state:int, array-like, BitGenerator, np.random.RandomState, np.random.Generator, optional If int, array-like, or BitGenerator, seed for random number generator. If np.random.RandomState or np.random.Generator, use as given. Changed in version 1.4.0: np.random.Generator objects now accepted Returns Series or DataFrame A new object of same type as caller containing items randomly sampled within each group from the caller object. See also DataFrame.sample Generate random samples from a DataFrame object. numpy.random.choice Generate a random sample from a given 1-D numpy array. Examples >>> df = pd.DataFrame( ... {"a": ["red"] * 2 + ["blue"] * 2 + ["black"] * 2, "b": range(6)} ... ) >>> df a b 0 red 0 1 red 1 2 blue 2 3 blue 3 4 black 4 5 black 5 Select one row at random for each distinct value in column a. 
The random_state argument can be used to guarantee reproducibility: >>> df.groupby("a").sample(n=1, random_state=1) a b 4 black 4 2 blue 2 1 red 1 Set frac to sample fixed proportions rather than counts: >>> df.groupby("a")["b"].sample(frac=0.5, random_state=2) 5 5 2 2 0 0 Name: b, dtype: int64 Control sample probabilities within groups by setting weights: >>> df.groupby("a").sample( ... n=1, ... weights=[1, 1, 1, 0, 0, 1], ... random_state=1, ... ) a b 5 black 5 2 blue 2 0 red 0
pandas.reference.api.pandas.core.groupby.dataframegroupby.sample
pandas.core.groupby.DataFrameGroupBy.shift DataFrameGroupBy.shift(periods=1, freq=None, axis=0, fill_value=None)[source] Shift each group by periods observations. If freq is passed, the index will be increased using the periods and the freq. Parameters periods:int, default 1 Number of periods to shift. freq:str, optional Frequency string. axis:axis to shift, default 0 Shift direction. fill_value:optional The scalar value to use for newly introduced missing values. Returns Series or DataFrame Object shifted within each group. See also Index.shift Shift values of Index. tshift Shift the time index, using the index’s frequency if available.
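A minimal sketch (illustrative data): shifting happens independently inside each group, and fill_value supplies the value for the slot opened at each group's start.

```python
import pandas as pd

# Illustrative data: each group is shifted on its own, so group 'b'
# gets the fill_value rather than the last row of group 'a'.
df = pd.DataFrame({"key": ["a", "a", "b", "b"],
                   "val": [1, 2, 3, 4]})
shifted = df.groupby("key")["val"].shift(periods=1, fill_value=0)
print(shifted.tolist())  # [0, 1, 0, 3]
```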
pandas.reference.api.pandas.core.groupby.dataframegroupby.shift
pandas.core.groupby.DataFrameGroupBy.size DataFrameGroupBy.size()[source] Compute group sizes. Returns DataFrame or Series Number of rows in each group as a Series if as_index is True or a DataFrame if as_index is False. See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame.
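As a quick sketch (illustrative data): size() counts all rows per group, including NaNs, which is the main practical difference from count().

```python
import numpy as np
import pandas as pd

# Illustrative data: one group contains a NaN value.
df = pd.DataFrame({"key": ["a", "a", "b"],
                   "val": [1.0, np.nan, 3.0]})
sizes = df.groupby("key").size()
counts = df.groupby("key")["val"].count()
# sizes: a -> 2, b -> 1; counts: a -> 1 (the NaN is excluded), b -> 1
```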
pandas.reference.api.pandas.core.groupby.dataframegroupby.size
pandas.core.groupby.DataFrameGroupBy.skew propertyDataFrameGroupBy.skew Return unbiased skew over requested axis. Normalized by N-1. Parameters axis:{index (0), columns (1)} Axis for the function to be applied on. skipna:bool, default True Exclude NA/null values when computing the result. level:int or level name, default None If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series. numeric_only:bool, default None Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series. **kwargs Additional keyword arguments to be passed to the function. Returns Series or DataFrame (if level specified)
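A minimal sketch (illustrative data): a group whose values are symmetric about their mean has skew 0, while a group with a longer right tail has positive skew.

```python
import pandas as pd

# Illustrative data: 'a' is symmetric, 'b' leans right.
df = pd.DataFrame({"key": ["a", "a", "a", "b", "b", "b"],
                   "val": [1.0, 2.0, 3.0, 1.0, 1.0, 4.0]})
skews = df.groupby("key")["val"].skew()
# 'a' ([1, 2, 3]) is symmetric -> 0.0; 'b' ([1, 1, 4]) -> positive skew
```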
pandas.reference.api.pandas.core.groupby.dataframegroupby.skew
pandas.core.groupby.DataFrameGroupBy.take propertyDataFrameGroupBy.take Return the elements in the given positional indices along an axis. This means that we are not indexing according to actual values in the index attribute of the object. We are indexing according to the actual position of the element in the object. Parameters indices:array-like An array of ints indicating which positions to take. axis:{0 or ‘index’, 1 or ‘columns’, None}, default 0 The axis on which to select elements. 0 means that we are selecting rows, 1 means that we are selecting columns. is_copy:bool Before pandas 1.0, is_copy=False can be specified to ensure that the return value is an actual copy. Starting with pandas 1.0, take always returns a copy, and the keyword is therefore deprecated. Deprecated since version 1.0.0. **kwargs For compatibility with numpy.take(). Has no effect on the output. Returns taken:same type as caller An array-like containing the elements taken from the object. See also DataFrame.loc Select a subset of a DataFrame by labels. DataFrame.iloc Select a subset of a DataFrame by positions. numpy.take Take elements from an array along an axis. Examples >>> df = pd.DataFrame([('falcon', 'bird', 389.0), ... ('parrot', 'bird', 24.0), ... ('lion', 'mammal', 80.5), ... ('monkey', 'mammal', np.nan)], ... columns=['name', 'class', 'max_speed'], ... index=[0, 2, 3, 1]) >>> df name class max_speed 0 falcon bird 389.0 2 parrot bird 24.0 3 lion mammal 80.5 1 monkey mammal NaN Take elements at positions 0 and 3 along the axis 0 (default). Note how the actual indices selected (0 and 1) do not correspond to our selected indices 0 and 3. That’s because we are selecting the 0th and 3rd rows, not rows whose indices equal 0 and 3. >>> df.take([0, 3]) name class max_speed 0 falcon bird 389.0 1 monkey mammal NaN Take elements at indices 1 and 2 along the axis 1 (column selection). 
>>> df.take([1, 2], axis=1) class max_speed 0 bird 389.0 2 bird 24.0 3 mammal 80.5 1 mammal NaN We may take elements using negative integers for positive indices, starting from the end of the object, just like with Python lists. >>> df.take([-1, -2]) name class max_speed 1 monkey mammal NaN 3 lion mammal 80.5
pandas.reference.api.pandas.core.groupby.dataframegroupby.take
pandas.core.groupby.DataFrameGroupBy.transform DataFrameGroupBy.transform(func, *args, engine=None, engine_kwargs=None, **kwargs)[source] Call function producing a like-indexed DataFrame on each group and return a DataFrame having the same indexes as the original object filled with the transformed values. Parameters f:function Function to apply to each group. Can also accept a Numba JIT function with engine='numba' specified. If the 'numba' engine is chosen, the function must be a user defined function with values and index as the first and second arguments respectively in the function signature. Each group’s index will be passed to the user defined function and optionally available for use. Changed in version 1.1.0. *args Positional arguments to pass to func. engine:str, default None 'cython' : Runs the function through C-extensions from cython. 'numba' : Runs the function through JIT compiled code from numba. None : Defaults to 'cython' or the global setting compute.use_numba New in version 1.1.0. engine_kwargs:dict, default None For 'cython' engine, there are no accepted engine_kwargs For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} and will be applied to the function New in version 1.1.0. **kwargs Keyword arguments to be passed into func. Returns DataFrame See also DataFrame.groupby.apply Apply function func group-wise and combine the results together. DataFrame.groupby.aggregate Aggregate using one or more operations over the specified axis. DataFrame.transform Call func on self producing a DataFrame with the same axis shape as self. Notes Each group is endowed the attribute ‘name’ in case you need to know which group you are working on. 
The current implementation imposes three requirements on f: f must return a value that either has the same shape as the input subframe or can be broadcast to the shape of the input subframe. For example, if f returns a scalar it will be broadcast to have the same shape as the input subframe. if this is a DataFrame, f must support application column-by-column in the subframe. If f also supports application to the entire subframe, then a fast path is used starting from the second chunk. f must not mutate groups. Mutation is not supported and may produce unexpected results. See Mutating with User Defined Function (UDF) methods for more details. When using engine='numba', there will be no “fall back” behavior internally. The group data and group index will be passed as numpy arrays to the JITed user defined function, and no alternative execution attempts will be tried. Changed in version 1.3.0: The resulting dtype will reflect the return value of the passed func, see the examples below. Examples >>> df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar', ... 'foo', 'bar'], ... 'B' : ['one', 'one', 'two', 'three', ... 'two', 'two'], ... 'C' : [1, 5, 5, 2, 5, 5], ... 'D' : [2.0, 5., 8., 1., 2., 9.]}) >>> grouped = df.groupby('A') >>> grouped.transform(lambda x: (x - x.mean()) / x.std()) C D 0 -1.154701 -0.577350 1 0.577350 0.000000 2 0.577350 1.154701 3 -1.154701 -1.000000 4 0.577350 -0.577350 5 0.577350 1.000000 Broadcast result of the transformation >>> grouped.transform(lambda x: x.max() - x.min()) C D 0 4 6.0 1 3 8.0 2 4 6.0 3 3 8.0 4 4 6.0 5 3 8.0 Changed in version 1.3.0: The resulting dtype will reflect the return value of the passed func, for example: >>> grouped[['C', 'D']].transform(lambda x: x.astype(int).max()) C D 0 5 8 1 5 9 2 5 8 3 5 9 4 5 8 5 5 9
pandas.reference.api.pandas.core.groupby.dataframegroupby.transform
pandas.core.groupby.DataFrameGroupBy.tshift propertyDataFrameGroupBy.tshift Shift the time index, using the index’s frequency if available. Deprecated since version 1.1.0: Use shift instead. Parameters periods:int Number of periods to move, can be positive or negative. freq:DateOffset, timedelta, or str, default None Increment to use from the tseries module or time rule expressed as a string (e.g. ‘EOM’). axis:{0 or ‘index’, 1 or ‘columns’, None}, default 0 Corresponds to the axis that contains the Index. Returns shifted:Series/DataFrame Notes If freq is not specified, this method tries to use the freq or inferred_freq attribute of the index. If neither of those attributes exists, a ValueError is raised.

pandas.reference.api.pandas.core.groupby.dataframegroupby.tshift
pandas.core.groupby.DataFrameGroupBy.value_counts DataFrameGroupBy.value_counts(subset=None, normalize=False, sort=True, ascending=False, dropna=True)[source] Return a Series or DataFrame containing counts of unique rows. New in version 1.4.0. Parameters subset:list-like, optional Columns to use when counting unique combinations. normalize:bool, default False Return proportions rather than frequencies. sort:bool, default True Sort by frequencies. ascending:bool, default False Sort in ascending order. dropna:bool, default True Don’t include counts of rows that contain NA values. Returns Series or DataFrame Series if the groupby as_index is True, otherwise DataFrame. See also Series.value_counts Equivalent method on Series. DataFrame.value_counts Equivalent method on DataFrame. SeriesGroupBy.value_counts Equivalent method on SeriesGroupBy. Notes If the groupby as_index is True then the returned Series will have a MultiIndex with one level per input column. If the groupby as_index is False then the returned DataFrame will have an additional column with the value_counts. The column is labelled ‘count’ or ‘proportion’, depending on the normalize parameter. By default, rows that contain any NA values are omitted from the result. By default, the result will be in descending order so that the first element of each group is the most frequently-occurring row. Examples >>> df = pd.DataFrame({ ... 'gender': ['male', 'male', 'female', 'male', 'female', 'male'], ... 'education': ['low', 'medium', 'high', 'low', 'high', 'low'], ... 'country': ['US', 'FR', 'US', 'FR', 'FR', 'FR'] ... 
}) >>> df gender education country 0 male low US 1 male medium FR 2 female high US 3 male low FR 4 female high FR 5 male low FR >>> df.groupby('gender').value_counts() gender education country female high FR 1 US 1 male low FR 2 US 1 medium FR 1 dtype: int64 >>> df.groupby('gender').value_counts(ascending=True) gender education country female high FR 1 US 1 male low US 1 medium FR 1 low FR 2 dtype: int64 >>> df.groupby('gender').value_counts(normalize=True) gender education country female high FR 0.50 US 0.50 male low FR 0.50 US 0.25 medium FR 0.25 dtype: float64 >>> df.groupby('gender', as_index=False).value_counts() gender education country count 0 female high FR 1 1 female high US 1 2 male low FR 2 3 male low US 1 4 male medium FR 1 >>> df.groupby('gender', as_index=False).value_counts(normalize=True) gender education country proportion 0 female high FR 0.50 1 female high US 0.50 2 male low FR 0.50 3 male low US 0.25 4 male medium FR 0.25
pandas.reference.api.pandas.core.groupby.dataframegroupby.value_counts
pandas.core.groupby.GroupBy.__iter__ GroupBy.__iter__()[source] Groupby iterator. Returns Generator yielding sequence of (name, subsetted object) for each group
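A minimal sketch of the iterator with made-up data: each iteration yields a `(name, subsetted object)` pair, one per group.

```python
import pandas as pd

df = pd.DataFrame({"key": ["a", "a", "b"], "val": [1, 2, 3]})

# Collect (group name, group values) pairs by iterating the GroupBy object
pairs = [(name, group["val"].tolist()) for name, group in df.groupby("key")]
# pairs == [("a", [1, 2]), ("b", [3])]
```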
pandas.reference.api.pandas.core.groupby.groupby.__iter__
pandas.core.groupby.GroupBy.agg GroupBy.agg(func, *args, **kwargs)[source]
pandas.reference.api.pandas.core.groupby.groupby.agg
pandas.core.groupby.GroupBy.all finalGroupBy.all(skipna=True)[source] Return True if all values in the group are truthy, else False. Parameters skipna:bool, default True Flag to ignore nan values during truth testing. Returns Series or DataFrame DataFrame or Series of boolean values, where a value is True if all elements are True within its respective group, False otherwise. See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame.
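A short sketch with illustrative data: a group is True only when every one of its values is truthy.

```python
import pandas as pd

df = pd.DataFrame({"g": ["x", "x", "y", "y"],
                   "v": [1, 1, 1, 0]})

# Group "x" holds only truthy values; group "y" contains a 0
out = df.groupby("g")["v"].all()
```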
pandas.reference.api.pandas.core.groupby.groupby.all
pandas.core.groupby.GroupBy.any finalGroupBy.any(skipna=True)[source] Return True if any value in the group is truthy, else False. Parameters skipna:bool, default True Flag to ignore nan values during truth testing. Returns Series or DataFrame DataFrame or Series of boolean values, where a value is True if any element is True within its respective group, False otherwise. See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame.
pandas.reference.api.pandas.core.groupby.groupby.any
pandas.core.groupby.GroupBy.apply GroupBy.apply(func, *args, **kwargs)[source] Apply function func group-wise and combine the results together. The function passed to apply must take a dataframe as its first argument and return a DataFrame, Series or scalar. apply will then take care of combining the results back together into a single dataframe or series. apply is therefore a highly flexible grouping method. While apply is a very flexible method, its downside is that using it can be quite a bit slower than using more specific methods like agg or transform. Pandas offers a wide range of methods that will be much faster than apply for their specific purposes, so try to use them before reaching for apply. Parameters func:callable A callable that takes a dataframe as its first argument, and returns a dataframe, a series or a scalar. In addition the callable may take positional and keyword arguments. args, kwargs:tuple and dict Optional positional and keyword arguments to pass to func. Returns applied:Series or DataFrame See also pipe Apply function to the full GroupBy object instead of to each group. aggregate Apply aggregate function to the GroupBy object. transform Apply function column-by-column to the GroupBy object. Series.apply Apply a function to a Series. DataFrame.apply Apply a function to each row or column of a DataFrame. Notes Changed in version 1.3.0: The resulting dtype will reflect the return value of the passed func, see the examples below. Functions that mutate the passed object can produce unexpected behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods for more details. Examples >>> df = pd.DataFrame({'A': 'a a b'.split(), ... 'B': [1,2,3], ... 'C': [4,6,5]}) >>> g = df.groupby('A') Notice that g has two groups, a and b. Calling apply in various ways, we can get different grouping results: Example 1: below, the function passed to apply takes a DataFrame as its argument and returns a DataFrame. 
apply combines the result for each group together into a new DataFrame: >>> g[['B', 'C']].apply(lambda x: x / x.sum()) B C 0 0.333333 0.4 1 0.666667 0.6 2 1.000000 1.0 Example 2: The function passed to apply takes a DataFrame as its argument and returns a Series. apply combines the result for each group together into a new DataFrame. Changed in version 1.3.0: The resulting dtype will reflect the return value of the passed func. >>> g[['B', 'C']].apply(lambda x: x.astype(float).max() - x.min()) B C A a 1.0 2.0 b 0.0 0.0 Example 3: The function passed to apply takes a DataFrame as its argument and returns a scalar. apply combines the result for each group together into a Series, including setting the index as appropriate: >>> g.apply(lambda x: x.C.max() - x.B.min()) A a 5 b 2 dtype: int64
pandas.reference.api.pandas.core.groupby.groupby.apply
pandas.core.groupby.GroupBy.backfill GroupBy.backfill(limit=None)[source] Backward fill the values. Parameters limit:int, optional Limit of how many values to fill. Returns Series or DataFrame Object with missing values filled. See also Series.bfill Backward fill the missing values in the dataset. DataFrame.bfill Backward fill the missing values in the dataset. Series.fillna Fill NaN values of a Series. DataFrame.fillna Fill NaN values of a DataFrame.
pandas.reference.api.pandas.core.groupby.groupby.backfill
pandas.core.groupby.GroupBy.bfill finalGroupBy.bfill(limit=None)[source] Backward fill the values. Parameters limit:int, optional Limit of how many values to fill. Returns Series or DataFrame Object with missing values filled. See also Series.bfill Backward fill the missing values in the dataset. DataFrame.bfill Backward fill the missing values in the dataset. Series.fillna Fill NaN values of a Series. DataFrame.fillna Fill NaN values of a DataFrame.
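A minimal sketch with made-up data: each NaN is filled from the next valid value within the same group, never across group boundaries.

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({"g": ["a", "a", "b", "b"],
                   "v": [np.nan, 2.0, np.nan, 4.0]})

# Backward fill within each group: each NaN takes the next valid value
filled = df.groupby("g")["v"].bfill()
```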
pandas.reference.api.pandas.core.groupby.groupby.bfill
pandas.core.groupby.GroupBy.count finalGroupBy.count()[source] Compute count of group, excluding missing values. Returns Series or DataFrame Count of values within each group. See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame.
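A short sketch with illustrative data, showing that missing values are excluded from the count.

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({"g": ["a", "a", "b"],
                   "v": [1.0, np.nan, 3.0]})

# NaN is excluded: group "a" has two rows but only one non-missing value
counts = df.groupby("g")["v"].count()
```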
pandas.reference.api.pandas.core.groupby.groupby.count
pandas.core.groupby.GroupBy.cumcount finalGroupBy.cumcount(ascending=True)[source] Number each item in each group from 0 to the length of that group - 1. Essentially this is equivalent to self.apply(lambda x: pd.Series(np.arange(len(x)), x.index)) Parameters ascending:bool, default True If False, number in reverse, from length of group - 1 to 0. Returns Series Sequence number of each element within each group. See also ngroup Number the groups themselves. Examples >>> df = pd.DataFrame([['a'], ['a'], ['a'], ['b'], ['b'], ['a']], ... columns=['A']) >>> df A 0 a 1 a 2 a 3 b 4 b 5 a >>> df.groupby('A').cumcount() 0 0 1 1 2 2 3 0 4 1 5 3 dtype: int64 >>> df.groupby('A').cumcount(ascending=False) 0 3 1 2 2 1 3 1 4 0 5 0 dtype: int64
pandas.reference.api.pandas.core.groupby.groupby.cumcount
pandas.core.groupby.GroupBy.cummax finalGroupBy.cummax(axis=0, **kwargs)[source] Cumulative max for each group. Returns Series or DataFrame See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame.
pandas.reference.api.pandas.core.groupby.groupby.cummax
pandas.core.groupby.GroupBy.cummin finalGroupBy.cummin(axis=0, **kwargs)[source] Cumulative min for each group. Returns Series or DataFrame See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame.
pandas.reference.api.pandas.core.groupby.groupby.cummin
pandas.core.groupby.GroupBy.cumprod finalGroupBy.cumprod(axis=0, *args, **kwargs)[source] Cumulative product for each group. Returns Series or DataFrame See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame.
pandas.reference.api.pandas.core.groupby.groupby.cumprod
pandas.core.groupby.GroupBy.cumsum finalGroupBy.cumsum(axis=0, *args, **kwargs)[source] Cumulative sum for each group. Returns Series or DataFrame See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame.
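The cumulative operations above (cummax, cummin, cumprod, cumsum) all restart at each group boundary. A sketch with made-up data, using cumsum as the representative:

```python
import pandas as pd

df = pd.DataFrame({"g": ["a", "a", "b", "b"],
                   "v": [1, 2, 3, 4]})

# The running sum resets when the group changes
out = df.groupby("g")["v"].cumsum()
```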
pandas.reference.api.pandas.core.groupby.groupby.cumsum
pandas.core.groupby.GroupBy.ffill finalGroupBy.ffill(limit=None)[source] Forward fill the values. Parameters limit:int, optional Limit of how many values to fill. Returns Series or DataFrame Object with missing values filled. See also Series.ffill Forward fill the missing values in the dataset. DataFrame.ffill Forward fill the missing values in the dataset. Series.fillna Fill NaN values of a Series. DataFrame.fillna Fill NaN values of a DataFrame.
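The mirror image of bfill, sketched with made-up data: each NaN is filled from the previous valid value within the same group.

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({"g": ["a", "a", "b", "b"],
                   "v": [1.0, np.nan, 3.0, np.nan]})

# Forward fill within each group: each NaN takes the previous valid value
filled = df.groupby("g")["v"].ffill()
```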
pandas.reference.api.pandas.core.groupby.groupby.ffill
pandas.core.groupby.GroupBy.first finalGroupBy.first(numeric_only=False, min_count=- 1)[source] Compute first of group values. Parameters numeric_only:bool, default False Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. min_count:int, default -1 The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA. Returns Series or DataFrame Computed first of values within each group.
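A small sketch with illustrative data: first returns the first non-NA value per group, so a leading NaN is skipped rather than returned.

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({"g": ["a", "a", "b"],
                   "v": [np.nan, 2.0, 3.0]})

# first skips NA, so group "a" yields 2.0, not NaN
out = df.groupby("g")["v"].first()
```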
pandas.reference.api.pandas.core.groupby.groupby.first
pandas.core.groupby.GroupBy.get_group GroupBy.get_group(name, obj=None)[source] Construct DataFrame from group with provided name. Parameters name:object The name of the group to get as a DataFrame. obj:DataFrame, default None The DataFrame to take the DataFrame out of. If it is None, the object groupby was called on will be used. Returns group:same type as obj
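A minimal sketch with made-up column names: get_group pulls out the sub-DataFrame for a single group by its name.

```python
import pandas as pd

df = pd.DataFrame({"A": ["a", "b", "a"], "B": [1, 2, 3]})
grouped = df.groupby("A")

# Extract just the rows belonging to group "a", original index preserved
sub = grouped.get_group("a")
```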
pandas.reference.api.pandas.core.groupby.groupby.get_group
pandas.core.groupby.GroupBy.groups propertyGroupBy.groups Dict {group name -> group labels}.
pandas.reference.api.pandas.core.groupby.groupby.groups
pandas.core.groupby.GroupBy.head finalGroupBy.head(n=5)[source] Return first n rows of each group. Similar to .apply(lambda x: x.head(n)), but it returns a subset of rows from the original DataFrame with original index and order preserved (as_index flag is ignored). Parameters n:int If positive: number of entries to include from start of each group. If negative: number of entries to exclude from end of each group. Returns Series or DataFrame Subset of original Series or DataFrame as determined by n. See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame. Examples >>> df = pd.DataFrame([[1, 2], [1, 4], [5, 6]], ... columns=['A', 'B']) >>> df.groupby('A').head(1) A B 0 1 2 2 5 6 >>> df.groupby('A').head(-1) A B 0 1 2
pandas.reference.api.pandas.core.groupby.groupby.head
pandas.core.groupby.GroupBy.indices propertyGroupBy.indices Dict {group name -> group indices}.
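A sketch contrasting indices with the groups property above, using an illustrative non-default index: groups maps each name to index labels, while indices maps each name to integer positions.

```python
import pandas as pd

df = pd.DataFrame({"key": ["a", "a", "b"]}, index=[10, 20, 30])
g = df.groupby("key")

labels = g.groups["a"]      # index *labels* of group "a"
positions = g.indices["a"]  # integer *positions* of group "a"
```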
pandas.reference.api.pandas.core.groupby.groupby.indices
pandas.core.groupby.GroupBy.last finalGroupBy.last(numeric_only=False, min_count=- 1)[source] Compute last of group values. Parameters numeric_only:bool, default False Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. min_count:int, default -1 The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA. Returns Series or DataFrame Computed last of values within each group.
pandas.reference.api.pandas.core.groupby.groupby.last
pandas.core.groupby.GroupBy.max finalGroupBy.max(numeric_only=False, min_count=- 1)[source] Compute max of group values. Parameters numeric_only:bool, default False Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. min_count:int, default -1 The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA. Returns Series or DataFrame Computed max of values within each group.
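The min_count parameter (also documented on min, first, last and prod/sum above and below) can be sketched with made-up data: when a group has fewer valid values than min_count, the result becomes NA.

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({"g": ["a", "a"], "v": [1.0, np.nan]})

# Group "a" has one valid value
one_valid = df.groupby("g")["v"].max()             # 1.0
too_few = df.groupby("g")["v"].max(min_count=2)    # NA: fewer than 2 valid values
```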
pandas.reference.api.pandas.core.groupby.groupby.max
pandas.core.groupby.GroupBy.mean finalGroupBy.mean(numeric_only=NoDefault.no_default, engine='cython', engine_kwargs=None)[source] Compute mean of groups, excluding missing values. Parameters numeric_only:bool, default True Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. engine:str, default None 'cython' : Runs the operation through C-extensions from cython. 'numba' : Runs the operation through JIT compiled code from numba. None : Defaults to 'cython' or the global setting compute.use_numba New in version 1.4.0. engine_kwargs:dict, default None For 'cython' engine, there are no accepted engine_kwargs For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} New in version 1.4.0. Returns pandas.Series or pandas.DataFrame See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame. Examples >>> df = pd.DataFrame({'A': [1, 1, 2, 1, 2], ... 'B': [np.nan, 2, 3, 4, 5], ... 'C': [1, 2, 1, 1, 2]}, columns=['A', 'B', 'C']) Groupby one column and return the mean of the remaining columns in each group. >>> df.groupby('A').mean() B C A 1 3.0 1.333333 2 4.0 1.500000 Groupby two columns and return the mean of the remaining column. >>> df.groupby(['A', 'B']).mean() C A B 1 2.0 2.0 4.0 1.0 2 3.0 1.0 5.0 2.0 Groupby one column and return the mean of only particular column in the group. >>> df.groupby('A')['B'].mean() A 1 3.0 2 4.0 Name: B, dtype: float64
pandas.reference.api.pandas.core.groupby.groupby.mean
pandas.core.groupby.GroupBy.median finalGroupBy.median(numeric_only=NoDefault.no_default)[source] Compute median of groups, excluding missing values. For multiple groupings, the result index will be a MultiIndex Parameters numeric_only:bool, default True Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Returns Series or DataFrame Median of values within each group. See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame.
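A short sketch with illustrative data: the median is computed independently within each group.

```python
import pandas as pd

df = pd.DataFrame({"g": ["a", "a", "a", "b", "b"],
                   "v": [1, 2, 3, 10, 20]})

# Median per group: middle value for odd counts, midpoint for even counts
out = df.groupby("g")["v"].median()
```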
pandas.reference.api.pandas.core.groupby.groupby.median
pandas.core.groupby.GroupBy.min finalGroupBy.min(numeric_only=False, min_count=- 1)[source] Compute min of group values. Parameters numeric_only:bool, default False Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. min_count:int, default -1 The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA. Returns Series or DataFrame Computed min of values within each group.
pandas.reference.api.pandas.core.groupby.groupby.min
pandas.core.groupby.GroupBy.ngroup finalGroupBy.ngroup(ascending=True)[source] Number each group from 0 to the number of groups - 1. This is the enumerative complement of cumcount. Note that the numbers given to the groups match the order in which the groups would be seen when iterating over the groupby object, not the order they are first observed. Parameters ascending:bool, default True If False, number in reverse, from number of group - 1 to 0. Returns Series Unique numbers for each group. See also cumcount Number the rows in each group. Examples >>> df = pd.DataFrame({"A": list("aaabba")}) >>> df A 0 a 1 a 2 a 3 b 4 b 5 a >>> df.groupby('A').ngroup() 0 0 1 0 2 0 3 1 4 1 5 0 dtype: int64 >>> df.groupby('A').ngroup(ascending=False) 0 1 1 1 2 1 3 0 4 0 5 1 dtype: int64 >>> df.groupby(["A", [1,1,2,3,2,1]]).ngroup() 0 0 1 0 2 1 3 3 4 2 5 0 dtype: int64
pandas.reference.api.pandas.core.groupby.groupby.ngroup
pandas.core.groupby.GroupBy.nth finalGroupBy.nth(n, dropna=None)[source] Take the nth row from each group if n is an int, otherwise a subset of rows. Can be either a call or an index. dropna is not available with index notation. Index notation accepts a comma separated list of integers and slices. If dropna is specified, the nth non-null row is taken; dropna is either ‘all’ or ‘any’, and this is equivalent to calling dropna(how=dropna) before the groupby. Parameters n:int, slice or list of ints and slices A single nth value for the row or a list of nth values or slices. Changed in version 1.4.0: Added slice and lists containing slices. Added index notation. dropna:{‘any’, ‘all’, None}, default None Apply the specified dropna operation before counting which row is the nth row. Only supported if n is an int. Returns Series or DataFrame N-th value within each group. See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame. Examples >>> df = pd.DataFrame({'A': [1, 1, 2, 1, 2], ... 'B': [np.nan, 2, 3, 4, 5]}, columns=['A', 'B']) >>> g = df.groupby('A') >>> g.nth(0) B A 1 NaN 2 3.0 >>> g.nth(1) B A 1 2.0 2 5.0 >>> g.nth(-1) B A 1 4.0 2 5.0 >>> g.nth([0, 1]) B A 1 NaN 1 2.0 2 3.0 2 5.0 >>> g.nth(slice(None, -1)) B A 1 NaN 1 2.0 2 3.0 Index notation may also be used >>> g.nth[0, 1] B A 1 NaN 1 2.0 2 3.0 2 5.0 >>> g.nth[:-1] B A 1 NaN 1 2.0 2 3.0 Specifying dropna allows NaN values to be ignored when counting >>> g.nth(0, dropna='any') B A 1 2.0 2 3.0 NaN values indicate that a group was exhausted when using dropna >>> g.nth(3, dropna='any') B A 1 NaN 2 NaN Specifying as_index=False in groupby keeps the original index. >>> df.groupby('A', as_index=False).nth(1) A B 1 1 2.0 4 2 5.0
pandas.reference.api.pandas.core.groupby.groupby.nth
pandas.core.groupby.GroupBy.ohlc finalGroupBy.ohlc()[source] Compute open, high, low and close values of a group, excluding missing values. For multiple groupings, the result index will be a MultiIndex Returns DataFrame Open, high, low and close values within each group. See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame.
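A small sketch with made-up data: each group becomes one row with open, high, low and close columns, taken from the values in their original order.

```python
import pandas as pd

df = pd.DataFrame({"g": ["a", "a", "a", "b", "b"],
                   "v": [1, 3, 2, 5, 4]})

# For group "a" (values 1, 3, 2): open=1, high=3, low=1, close=2
out = df.groupby("g")["v"].ohlc()
```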
pandas.reference.api.pandas.core.groupby.groupby.ohlc
pandas.core.groupby.GroupBy.pad GroupBy.pad(limit=None)[source] Forward fill the values. Parameters limit:int, optional Limit of how many values to fill. Returns Series or DataFrame Object with missing values filled. See also Series.ffill Forward fill the missing values in the dataset. DataFrame.ffill Forward fill the missing values in the dataset. Series.fillna Fill NaN values of a Series. DataFrame.fillna Fill NaN values of a DataFrame.
pandas.reference.api.pandas.core.groupby.groupby.pad
pandas.core.groupby.GroupBy.pct_change finalGroupBy.pct_change(periods=1, fill_method='ffill', limit=None, freq=None, axis=0)[source] Calculate pct_change of each value to previous entry in group. Returns Series or DataFrame Percentage changes within each group. See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame.
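A sketch with illustrative data: the change is computed relative to the previous entry within the same group, so the first row of each group is NaN.

```python
import pandas as pd

df = pd.DataFrame({"g": ["a", "a", "a", "b", "b"],
                   "v": [1.0, 2.0, 4.0, 10.0, 15.0]})

# Percentage change versus the previous row of the same group
out = df.groupby("g")["v"].pct_change()
```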
pandas.reference.api.pandas.core.groupby.groupby.pct_change
pandas.core.groupby.GroupBy.pipe GroupBy.pipe(func, *args, **kwargs)[source] Apply a function func with arguments to this GroupBy object and return the function’s result. Use .pipe when you want to improve readability by chaining together functions that expect Series, DataFrames, GroupBy or Resampler objects. Instead of writing >>> h(g(f(df.groupby('group')), arg1=a), arg2=b, arg3=c) You can write >>> (df.groupby('group') ... .pipe(f) ... .pipe(g, arg1=a) ... .pipe(h, arg2=b, arg3=c)) which is much more readable. Parameters func:callable or tuple of (callable, str) Function to apply to this GroupBy object or, alternatively, a (callable, data_keyword) tuple where data_keyword is a string indicating the keyword of callable that expects the GroupBy object. args:iterable, optional Positional arguments passed into func. kwargs:dict, optional A dictionary of keyword arguments passed into func. Returns object:the return type of func. See also Series.pipe Apply a function with arguments to a series. DataFrame.pipe Apply a function with arguments to a dataframe. apply Apply function to each group instead of to the full GroupBy object. Notes See more here Examples >>> df = pd.DataFrame({'A': 'a b a b'.split(), 'B': [1, 2, 3, 4]}) >>> df A B 0 a 1 1 b 2 2 a 3 3 b 4 To get the difference between each groups maximum and minimum value in one pass, you can do >>> df.groupby('A').pipe(lambda x: x.max() - x.min()) B A a 2 b 2
pandas.reference.api.pandas.core.groupby.groupby.pipe
pandas.core.groupby.GroupBy.prod finalGroupBy.prod(numeric_only=NoDefault.no_default, min_count=0)[source] Compute prod of group values. Parameters numeric_only:bool, default True Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. min_count:int, default 0 The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA. Returns Series or DataFrame Computed prod of values within each group.
pandas.reference.api.pandas.core.groupby.groupby.prod
pandas.core.groupby.GroupBy.rank finalGroupBy.rank(method='average', ascending=True, na_option='keep', pct=False, axis=0)[source] Provide the rank of values within each group. Parameters method:{‘average’, ‘min’, ‘max’, ‘first’, ‘dense’}, default ‘average’ average: average rank of group. min: lowest rank in group. max: highest rank in group. first: ranks assigned in order they appear in the array. dense: like ‘min’, but rank always increases by 1 between groups. ascending:bool, default True False for ranks by high (1) to low (N). na_option:{‘keep’, ‘top’, ‘bottom’}, default ‘keep’ keep: leave NA values where they are. top: smallest rank if ascending. bottom: smallest rank if descending. pct:bool, default False Compute percentage rank of data within each group. axis:int, default 0 The axis of the object over which to compute the rank. Returns DataFrame with ranking of values within each group See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame. Examples >>> df = pd.DataFrame( ... { ... "group": ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"], ... "value": [2, 4, 2, 3, 5, 1, 2, 4, 1, 5], ... } ... ) >>> df group value 0 a 2 1 a 4 2 a 2 3 a 3 4 a 5 5 b 1 6 b 2 7 b 4 8 b 1 9 b 5 >>> for method in ['average', 'min', 'max', 'dense', 'first']: ... df[f'{method}_rank'] = df.groupby('group')['value'].rank(method) >>> df group value average_rank min_rank max_rank dense_rank first_rank 0 a 2 1.5 1.0 2.0 1.0 1.0 1 a 4 4.0 4.0 4.0 3.0 4.0 2 a 2 1.5 1.0 2.0 1.0 2.0 3 a 3 3.0 3.0 3.0 2.0 3.0 4 a 5 5.0 5.0 5.0 4.0 5.0 5 b 1 1.5 1.0 2.0 1.0 1.0 6 b 2 3.0 3.0 3.0 2.0 3.0 7 b 4 4.0 4.0 4.0 3.0 4.0 8 b 1 1.5 1.0 2.0 1.0 2.0 9 b 5 5.0 5.0 5.0 4.0 5.0
pandas.reference.api.pandas.core.groupby.groupby.rank
pandas.core.groupby.GroupBy.sem finalGroupBy.sem(ddof=1)[source] Compute standard error of the mean of groups, excluding missing values. For multiple groupings, the result index will be a MultiIndex. Parameters ddof:int, default 1 Degrees of freedom. Returns Series or DataFrame Standard error of the mean of values within each group. See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame.
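A sketch with made-up data: sem is the sample standard deviation (ddof=1) divided by the square root of the group size. For a group [1, 3], std is sqrt(2) and n is 2, so sem = sqrt(2)/sqrt(2) = 1.0.

```python
import pandas as pd

df = pd.DataFrame({"g": ["a", "a"], "v": [1.0, 3.0]})

# sem = std(ddof=1) / sqrt(n) within each group
out = df.groupby("g")["v"].sem()
```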
pandas.reference.api.pandas.core.groupby.groupby.sem
pandas.core.groupby.GroupBy.size finalGroupBy.size()[source] Compute group sizes. Returns DataFrame or Series Number of rows in each group as a Series if as_index is True or a DataFrame if as_index is False. See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame.
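A short sketch with illustrative data, contrasting size with count: size counts all rows in a group, including those with missing values.

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({"g": ["a", "a", "b"],
                   "v": [1.0, np.nan, 3.0]})

sizes = df.groupby("g").size()             # rows per group, NaN included
counts = df.groupby("g")["v"].count()      # non-missing values per group
```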
pandas.reference.api.pandas.core.groupby.groupby.size
pandas.core.groupby.GroupBy.std finalGroupBy.std(ddof=1, engine=None, engine_kwargs=None)[source] Compute standard deviation of groups, excluding missing values. For multiple groupings, the result index will be a MultiIndex. Parameters ddof:int, default 1 Degrees of freedom. engine:str, default None 'cython' : Runs the operation through C-extensions from cython. 'numba' : Runs the operation through JIT compiled code from numba. None : Defaults to 'cython' or the global setting compute.use_numba New in version 1.4.0. engine_kwargs:dict, default None For 'cython' engine, there are no accepted engine_kwargs For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} New in version 1.4.0. Returns Series or DataFrame Standard deviation of values within each group. See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame.
pandas.reference.api.pandas.core.groupby.groupby.std
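A hedged illustration of the ddof parameter on toy data: ddof=1 (the default) gives the sample standard deviation, ddof=0 the population standard deviation:

```python
import pandas as pd

df = pd.DataFrame({"g": ["a", "a", "b", "b"], "v": [1.0, 3.0, 2.0, 6.0]})

sample = df.groupby("g")["v"].std()            # ddof=1: divide by n - 1 under the root
population = df.groupby("g")["v"].std(ddof=0)  # ddof=0: divide by n under the root
# Group "a" (1, 3): sample std = sqrt(2), population std = 1.0
```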
pandas.core.groupby.GroupBy.sum finalGroupBy.sum(numeric_only=NoDefault.no_default, min_count=0, engine=None, engine_kwargs=None)[source] Compute sum of group values. Parameters numeric_only:bool, default True Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. min_count:int, default 0 The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA. Returns Series or DataFrame Computed sum of values within each group.
pandas.reference.api.pandas.core.groupby.groupby.sum
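The min_count parameter can be sketched as follows (toy data with a group that contains only NaN):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"g": ["a", "a", "b"], "v": [1.0, 2.0, np.nan]})

totals = df.groupby("g")["v"].sum()             # group "b" has no valid values -> 0.0
strict = df.groupby("g")["v"].sum(min_count=1)  # fewer than min_count valid values -> NaN
```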
pandas.core.groupby.GroupBy.tail finalGroupBy.tail(n=5)[source] Return last n rows of each group. Similar to .apply(lambda x: x.tail(n)), but it returns a subset of rows from the original DataFrame with original index and order preserved (as_index flag is ignored). Parameters n:int If positive: number of entries to include from end of each group. If negative: number of entries to exclude from start of each group. Returns Series or DataFrame Subset of original Series or DataFrame as determined by n. See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame. Examples >>> df = pd.DataFrame([['a', 1], ['a', 2], ['b', 1], ['b', 2]], ... columns=['A', 'B']) >>> df.groupby('A').tail(1) A B 1 a 2 3 b 2 >>> df.groupby('A').tail(-1) A B 1 a 2 3 b 2
pandas.reference.api.pandas.core.groupby.groupby.tail
pandas.core.groupby.GroupBy.var finalGroupBy.var(ddof=1, engine=None, engine_kwargs=None)[source] Compute variance of groups, excluding missing values. For multiple groupings, the result index will be a MultiIndex. Parameters ddof:int, default 1 Degrees of freedom. engine:str, default None 'cython' : Runs the operation through C-extensions from cython. 'numba' : Runs the operation through JIT compiled code from numba. None : Defaults to 'cython' or the global setting compute.use_numba New in version 1.4.0. engine_kwargs:dict, default None For 'cython' engine, there are no accepted engine_kwargs For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} New in version 1.4.0. Returns Series or DataFrame Variance of values within each group. See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame.
pandas.reference.api.pandas.core.groupby.groupby.var
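A small sketch of per-group variance with both ddof settings, on invented data:

```python
import pandas as pd

df = pd.DataFrame({"g": ["a", "a", "b", "b"], "v": [1.0, 3.0, 2.0, 6.0]})

sample_var = df.groupby("g")["v"].var()     # ddof=1: divide by n - 1
pop_var = df.groupby("g")["v"].var(ddof=0)  # ddof=0: divide by n
# Group "a" (1, 3): sample variance 2.0, population variance 1.0
# Group "b" (2, 6): sample variance 8.0, population variance 4.0
```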
pandas.core.groupby.SeriesGroupBy.aggregate SeriesGroupBy.aggregate(func=None, *args, engine=None, engine_kwargs=None, **kwargs)[source] Aggregate using one or more operations over the specified axis. Parameters func:function, str, list or dict Function to use for aggregating the data. If a function, must either work when passed a Series or when passed to Series.apply. Accepted combinations are: function string function name list of functions and/or function names, e.g. [np.sum, 'mean'] dict of axis labels -> functions, function names or list of such. Can also accept a Numba JIT function with engine='numba' specified. Only passing a single function is supported with this engine. If the 'numba' engine is chosen, the function must be a user defined function with values and index as the first and second arguments respectively in the function signature. Each group’s index will be passed to the user defined function and optionally available for use. Changed in version 1.1.0. *args Positional arguments to pass to func. engine:str, default None 'cython' : Runs the function through C-extensions from cython. 'numba' : Runs the function through JIT compiled code from numba. None : Defaults to 'cython' or the global setting compute.use_numba New in version 1.1.0. engine_kwargs:dict, default None For 'cython' engine, there are no accepted engine_kwargs For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} and will be applied to the function New in version 1.1.0. **kwargs Keyword arguments to be passed into func. Returns Series See also Series.groupby.apply Apply function func group-wise and combine the results together. Series.groupby.transform Transforms the Series on each group based on the given function. Series.aggregate Aggregate using one or more operations over the specified axis.
Notes When using engine='numba', there will be no “fall back” behavior internally. The group data and group index will be passed as numpy arrays to the JITed user defined function, and no alternative execution attempts will be tried. Functions that mutate the passed object can produce unexpected behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods for more details. Changed in version 1.3.0: The resulting dtype will reflect the return value of the passed func, see the examples below. Examples >>> s = pd.Series([1, 2, 3, 4]) >>> s 0 1 1 2 2 3 3 4 dtype: int64 >>> s.groupby([1, 1, 2, 2]).min() 1 1 2 3 dtype: int64 >>> s.groupby([1, 1, 2, 2]).agg('min') 1 1 2 3 dtype: int64 >>> s.groupby([1, 1, 2, 2]).agg(['min', 'max']) min max 1 1 2 2 3 4 The output column names can be controlled by passing the desired column names and aggregations as keyword arguments. >>> s.groupby([1, 1, 2, 2]).agg( ... minimum='min', ... maximum='max', ... ) minimum maximum 1 1 2 2 3 4 Changed in version 1.3.0: The resulting dtype will reflect the return value of the aggregating function. >>> s.groupby([1, 1, 2, 2]).agg(lambda x: x.astype(float).min()) 1 1.0 2 3.0 dtype: float64
pandas.reference.api.pandas.core.groupby.seriesgroupby.aggregate
pandas.core.groupby.SeriesGroupBy.hist propertySeriesGroupBy.hist Draw histogram of the input series using matplotlib. Parameters by:object, optional If passed, then used to form histograms for separate groups. ax:matplotlib axis object If not passed, uses gca(). grid:bool, default True Whether to show axis grid lines. xlabelsize:int, default None If specified changes the x-axis label size. xrot:float, default None Rotation of x axis labels. ylabelsize:int, default None If specified changes the y-axis label size. yrot:float, default None Rotation of y axis labels. figsize:tuple, default None Figure size in inches by default. bins:int or sequence, default 10 Number of histogram bins to be used. If an integer is given, bins + 1 bin edges are calculated and returned. If bins is a sequence, gives bin edges, including left edge of first bin and right edge of last bin. In this case, bins is returned unmodified. backend:str, default None Backend to use instead of the backend specified in the option plotting.backend. For instance, ‘matplotlib’. Alternatively, to specify the plotting.backend for the whole session, set pd.options.plotting.backend. New in version 1.0.0. legend:bool, default False Whether to show the legend. New in version 1.1.0. **kwargs To be passed to the actual plotting function. Returns matplotlib.AxesSubplot A histogram plot. See also matplotlib.axes.Axes.hist Plot a histogram using matplotlib.
pandas.reference.api.pandas.core.groupby.seriesgroupby.hist
pandas.core.groupby.SeriesGroupBy.is_monotonic_decreasing propertySeriesGroupBy.is_monotonic_decreasing Return boolean if values in the object are monotonically decreasing. Returns bool
pandas.reference.api.pandas.core.groupby.seriesgroupby.is_monotonic_decreasing
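In the grouped case this property yields one boolean per group rather than a single bool; a minimal sketch on invented data:

```python
import pandas as pd

s = pd.Series([3, 2, 1, 1, 3], index=["a", "a", "a", "b", "b"])

# One boolean per group: is the group's sequence non-increasing?
flags = s.groupby(level=0).is_monotonic_decreasing
# Group "a" (3, 2, 1) is decreasing; group "b" (1, 3) is not
```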
pandas.core.groupby.SeriesGroupBy.is_monotonic_increasing propertySeriesGroupBy.is_monotonic_increasing Alias for is_monotonic.
pandas.reference.api.pandas.core.groupby.seriesgroupby.is_monotonic_increasing
pandas.core.groupby.SeriesGroupBy.nlargest SeriesGroupBy.nlargest(n=5, keep='first')[source] Return the largest n elements. Parameters n:int, default 5 Return this many descending sorted values. keep:{‘first’, ‘last’, ‘all’}, default ‘first’ When there are duplicate values that cannot all fit in a Series of n elements: first : return the first n occurrences in order of appearance. last : return the last n occurrences in reverse order of appearance. all : keep all occurrences. This can result in a Series of size larger than n. Returns Series The n largest values in the Series, sorted in decreasing order. See also Series.nsmallest Get the n smallest elements. Series.sort_values Sort Series by values. Series.head Return the first n rows. Notes Faster than .sort_values(ascending=False).head(n) for small n relative to the size of the Series object. Examples >>> countries_population = {"Italy": 59000000, "France": 65000000, ... "Malta": 434000, "Maldives": 434000, ... "Brunei": 434000, "Iceland": 337000, ... "Nauru": 11300, "Tuvalu": 11300, ... "Anguilla": 11300, "Montserrat": 5200} >>> s = pd.Series(countries_population) >>> s Italy 59000000 France 65000000 Malta 434000 Maldives 434000 Brunei 434000 Iceland 337000 Nauru 11300 Tuvalu 11300 Anguilla 11300 Montserrat 5200 dtype: int64 The n largest elements where n=5 by default. >>> s.nlargest() France 65000000 Italy 59000000 Malta 434000 Maldives 434000 Brunei 434000 dtype: int64 The n largest elements where n=3. Default keep value is ‘first’ so Malta will be kept. >>> s.nlargest(3) France 65000000 Italy 59000000 Malta 434000 dtype: int64 The n largest elements where n=3 and keeping the last duplicates. Brunei will be kept since it is the last with value 434000 based on the index order. >>> s.nlargest(3, keep='last') France 65000000 Italy 59000000 Brunei 434000 dtype: int64 The n largest elements where n=3 with all duplicates kept. Note that the returned Series has five elements due to the three duplicates. 
>>> s.nlargest(3, keep='all') France 65000000 Italy 59000000 Malta 434000 Maldives 434000 Brunei 434000 dtype: int64
pandas.reference.api.pandas.core.groupby.seriesgroupby.nlargest
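The examples above use a plain Series; in the groupby context, nlargest is applied within each group. A small sketch with invented data:

```python
import pandas as pd

s = pd.Series([4, 1, 9, 2, 7, 3], index=["a", "a", "a", "b", "b", "b"])

# Top 2 values within each group; the result carries a
# (group key, original label) MultiIndex
top2 = s.groupby(level=0).nlargest(2)
# Group "a" keeps 9 and 4; group "b" keeps 7 and 3
```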
pandas.core.groupby.SeriesGroupBy.nsmallest SeriesGroupBy.nsmallest(n=5, keep='first')[source] Return the smallest n elements. Parameters n:int, default 5 Return this many ascending sorted values. keep:{‘first’, ‘last’, ‘all’}, default ‘first’ When there are duplicate values that cannot all fit in a Series of n elements: first : return the first n occurrences in order of appearance. last : return the last n occurrences in reverse order of appearance. all : keep all occurrences. This can result in a Series of size larger than n. Returns Series The n smallest values in the Series, sorted in increasing order. See also Series.nlargest Get the n largest elements. Series.sort_values Sort Series by values. Series.head Return the first n rows. Notes Faster than .sort_values().head(n) for small n relative to the size of the Series object. Examples >>> countries_population = {"Italy": 59000000, "France": 65000000, ... "Brunei": 434000, "Malta": 434000, ... "Maldives": 434000, "Iceland": 337000, ... "Nauru": 11300, "Tuvalu": 11300, ... "Anguilla": 11300, "Montserrat": 5200} >>> s = pd.Series(countries_population) >>> s Italy 59000000 France 65000000 Brunei 434000 Malta 434000 Maldives 434000 Iceland 337000 Nauru 11300 Tuvalu 11300 Anguilla 11300 Montserrat 5200 dtype: int64 The n smallest elements where n=5 by default. >>> s.nsmallest() Montserrat 5200 Nauru 11300 Tuvalu 11300 Anguilla 11300 Iceland 337000 dtype: int64 The n smallest elements where n=3. Default keep value is ‘first’ so Nauru and Tuvalu will be kept. >>> s.nsmallest(3) Montserrat 5200 Nauru 11300 Tuvalu 11300 dtype: int64 The n smallest elements where n=3 and keeping the last duplicates. Anguilla and Tuvalu will be kept since they are the last with value 11300 based on the index order. >>> s.nsmallest(3, keep='last') Montserrat 5200 Anguilla 11300 Tuvalu 11300 dtype: int64 The n smallest elements where n=3 with all duplicates kept. Note that the returned Series has four elements due to the three duplicates. 
>>> s.nsmallest(3, keep='all') Montserrat 5200 Nauru 11300 Tuvalu 11300 Anguilla 11300 dtype: int64
pandas.reference.api.pandas.core.groupby.seriesgroupby.nsmallest
pandas.core.groupby.SeriesGroupBy.nunique SeriesGroupBy.nunique(dropna=True)[source] Return number of unique elements in the group. Parameters dropna:bool, default True Don’t include NaN in the counts. Returns Series Number of unique values within each group.
pandas.reference.api.pandas.core.groupby.seriesgroupby.nunique
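A minimal sketch of per-group nunique on invented data:

```python
import pandas as pd

s = pd.Series(["x", "x", "y", "z", "z"], index=[1, 1, 1, 2, 2])

# Distinct values within each group
counts = s.groupby(level=0).nunique()
# Group 1 contains {"x", "y"} -> 2; group 2 contains {"z"} -> 1
```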
pandas.core.groupby.SeriesGroupBy.transform SeriesGroupBy.transform(func, *args, engine=None, engine_kwargs=None, **kwargs)[source] Call function producing a like-indexed Series on each group and return a Series having the same indexes as the original object filled with the transformed values. Parameters func:function Function to apply to each group. Can also accept a Numba JIT function with engine='numba' specified. If the 'numba' engine is chosen, the function must be a user defined function with values and index as the first and second arguments respectively in the function signature. Each group’s index will be passed to the user defined function and optionally available for use. Changed in version 1.1.0. *args Positional arguments to pass to func. engine:str, default None 'cython' : Runs the function through C-extensions from cython. 'numba' : Runs the function through JIT compiled code from numba. None : Defaults to 'cython' or the global setting compute.use_numba New in version 1.1.0. engine_kwargs:dict, default None For 'cython' engine, there are no accepted engine_kwargs For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} and will be applied to the function New in version 1.1.0. **kwargs Keyword arguments to be passed into func. Returns Series See also Series.groupby.apply Apply function func group-wise and combine the results together. Series.groupby.aggregate Aggregate using one or more operations over the specified axis. Series.transform Call func on self producing a Series with the same axis shape as self. Notes Each group is endowed with the attribute ‘name’ in case you need to know which group you are working on.
The current implementation imposes three requirements on func: func must return a value that either has the same shape as the input subframe or can be broadcast to the shape of the input subframe. For example, if func returns a scalar it will be broadcast to have the same shape as the input subframe. If this is a DataFrame, func must support application column-by-column in the subframe. If func also supports application to the entire subframe, then a fast path is used starting from the second chunk. func must not mutate groups. Mutation is not supported and may produce unexpected results. See Mutating with User Defined Function (UDF) methods for more details. When using engine='numba', there will be no “fall back” behavior internally. The group data and group index will be passed as numpy arrays to the JITed user defined function, and no alternative execution attempts will be tried. Changed in version 1.3.0: The resulting dtype will reflect the return value of the passed func, see the examples below. Examples >>> df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar', ... 'foo', 'bar'], ... 'B' : ['one', 'one', 'two', 'three', ... 'two', 'two'], ... 'C' : [1, 5, 5, 2, 5, 5], ... 'D' : [2.0, 5., 8., 1., 2., 9.]}) >>> grouped = df.groupby('A') >>> grouped.transform(lambda x: (x - x.mean()) / x.std()) C D 0 -1.154701 -0.577350 1 0.577350 0.000000 2 0.577350 1.154701 3 -1.154701 -1.000000 4 0.577350 -0.577350 5 0.577350 1.000000 Broadcast result of the transformation >>> grouped.transform(lambda x: x.max() - x.min()) C D 0 4 6.0 1 3 8.0 2 4 6.0 3 3 8.0 4 4 6.0 5 3 8.0 Changed in version 1.3.0: The resulting dtype will reflect the return value of the passed func, for example: >>> grouped[['C', 'D']].transform(lambda x: x.astype(int).max()) C D 0 5 8 1 5 9 2 5 8 3 5 9 4 5 8 5 5 9
pandas.reference.api.pandas.core.groupby.seriesgroupby.transform
pandas.core.groupby.SeriesGroupBy.unique propertySeriesGroupBy.unique Return unique values of Series object. Uniques are returned in order of appearance. Hash table-based unique, therefore does NOT sort. Returns ndarray or ExtensionArray The unique values returned as a NumPy array. See Notes. See also unique Top-level unique method for any 1-d array-like object. Index.unique Return Index with unique values from an Index object. Notes Returns the unique values as a NumPy array. In case of an extension-array backed Series, a new ExtensionArray of that type with just the unique values is returned. This includes Categorical Period Datetime with Timezone Interval Sparse IntegerNA See Examples section. Examples >>> pd.Series([2, 1, 3, 3], name='A').unique() array([2, 1, 3]) >>> pd.Series([pd.Timestamp('2016-01-01') for _ in range(3)]).unique() array(['2016-01-01T00:00:00.000000000'], dtype='datetime64[ns]') >>> pd.Series([pd.Timestamp('2016-01-01', tz='US/Eastern') ... for _ in range(3)]).unique() <DatetimeArray> ['2016-01-01 00:00:00-05:00'] Length: 1, dtype: datetime64[ns, US/Eastern] A Categorical will return categories in the order of appearance and with the same dtype. >>> pd.Series(pd.Categorical(list('baabc'))).unique() ['b', 'a', 'c'] Categories (3, object): ['a', 'b', 'c'] >>> pd.Series(pd.Categorical(list('baabc'), categories=list('abc'), ... ordered=True)).unique() ['b', 'a', 'c'] Categories (3, object): ['a' < 'b' < 'c']
pandas.reference.api.pandas.core.groupby.seriesgroupby.unique
pandas.core.groupby.SeriesGroupBy.value_counts SeriesGroupBy.value_counts(normalize=False, sort=True, ascending=False, bins=None, dropna=True)[source] Return a Series containing counts of unique values within each group. Parameters normalize:bool, default False If True, return relative frequencies of the unique values instead of counts. sort:bool, default True Sort by frequencies. ascending:bool, default False Sort in ascending order. bins:int, optional Rather than counting distinct values, group them into half-open bins; only works with numeric data. dropna:bool, default True Don’t include counts of NaN. Returns Series Counts (or relative frequencies) of unique values, indexed by group key and value.
pandas.reference.api.pandas.core.groupby.seriesgroupby.value_counts
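A minimal sketch of grouped value_counts on invented data; the result is a Series indexed by a (group key, value) MultiIndex:

```python
import pandas as pd

df = pd.DataFrame({"g": ["a", "a", "a", "b"], "v": ["x", "x", "y", "x"]})

counts = df.groupby("g")["v"].value_counts()
props = df.groupby("g")["v"].value_counts(normalize=True)  # relative frequencies per group
```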
pandas.core.resample.Resampler.__iter__ Resampler.__iter__()[source] Groupby iterator. Returns Generator yielding sequence of (name, subsetted object) for each group
pandas.reference.api.pandas.core.resample.resampler.__iter__
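Iterating a Resampler can be sketched as follows (toy daily data, resampled into two-day bins; the bin label is the left edge of each bin):

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4],
              index=pd.date_range("2022-01-01", periods=4, freq="D"))

# Each iteration yields a (bin label, subsetted object) pair
sums = {str(name.date()): int(sub.sum()) for name, sub in s.resample("2D")}
# First bin covers Jan 1-2 (1 + 2), second covers Jan 3-4 (3 + 4)
```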