pandas.DataFrame.update DataFrame.update(other, join='left', overwrite=True, filter_func=None, errors='ignore')[source] Modify in place using non-NA values from another DataFrame. Aligns on indices. There is no return value. Parameters other:DataFrame, or object coercible into a DataFrame Should have at least one matching index/column label with the original DataFrame. If a Series is passed, its name attribute must be set, and that will be used as the column name to align with the original DataFrame. join:{‘left’}, default ‘left’ Only left join is implemented, keeping the index and columns of the original object. overwrite:bool, default True How to handle non-NA values for overlapping keys: True: overwrite original DataFrame’s values with values from other. False: only update values that are NA in the original DataFrame. filter_func:callable(1d-array) -> bool 1d-array, optional Can choose to replace values other than NA. Return True for values that should be updated. errors:{‘raise’, ‘ignore’}, default ‘ignore’ If ‘raise’, will raise a ValueError if the DataFrame and other both contain non-NA data in the same place. Returns None:method directly changes calling object Raises ValueError When errors=’raise’ and there’s overlapping non-NA data. When errors is not either ‘ignore’ or ‘raise’ NotImplementedError If join != ‘left’ See also dict.update Similar method for dictionaries. DataFrame.merge For column(s)-on-column(s) operations. Examples >>> df = pd.DataFrame({'A': [1, 2, 3], ... 'B': [400, 500, 600]}) >>> new_df = pd.DataFrame({'B': [4, 5, 6], ... 'C': [7, 8, 9]}) >>> df.update(new_df) >>> df A B 0 1 4 1 2 5 2 3 6 The DataFrame’s length does not increase as a result of the update, only values at matching index/column labels are updated. >>> df = pd.DataFrame({'A': ['a', 'b', 'c'], ... 
'B': ['x', 'y', 'z']}) >>> new_df = pd.DataFrame({'B': ['d', 'e', 'f', 'g', 'h', 'i']}) >>> df.update(new_df) >>> df A B 0 a d 1 b e 2 c f For Series, its name attribute must be set. >>> df = pd.DataFrame({'A': ['a', 'b', 'c'], ... 'B': ['x', 'y', 'z']}) >>> new_column = pd.Series(['d', 'e'], name='B', index=[0, 2]) >>> df.update(new_column) >>> df A B 0 a d 1 b y 2 c e >>> df = pd.DataFrame({'A': ['a', 'b', 'c'], ... 'B': ['x', 'y', 'z']}) >>> new_df = pd.DataFrame({'B': ['d', 'e']}, index=[1, 2]) >>> df.update(new_df) >>> df A B 0 a x 1 b d 2 c e If other contains NaNs the corresponding values are not updated in the original dataframe. >>> df = pd.DataFrame({'A': [1, 2, 3], ... 'B': [400, 500, 600]}) >>> new_df = pd.DataFrame({'B': [4, np.nan, 6]}) >>> df.update(new_df) >>> df A B 0 1 4.0 1 2 500.0 2 3 6.0
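The filter_func and errors='raise' behaviors described above have no example in this section. A short sketch (the threshold of 450 is an arbitrary illustration, not anything from the original docs): filter_func restricts which original values may be replaced, and errors='raise' turns overlapping non-NA data into a ValueError.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3], 'B': [400, 500, 600]})
new_df = pd.DataFrame({'B': [4, 5, 6]})

# filter_func: only positions where it returns True are updated.
# Here, only original values greater than 450 are replaced,
# so 400 is kept while 500 and 600 become 5 and 6.
df.update(new_df, filter_func=lambda arr: arr > 450)

# errors='raise': overlapping non-NA data raises a ValueError.
other = pd.DataFrame({'B': [7, np.nan, 9]})
raised = False
try:
    df.update(other, errors='raise')
except ValueError:
    raised = True  # rows 0 and 2 hold non-NA data in both frames
```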
pandas.DataFrame.value_counts DataFrame.value_counts(subset=None, normalize=False, sort=True, ascending=False, dropna=True)[source] Return a Series containing counts of unique rows in the DataFrame. New in version 1.1.0. Parameters subset:list-like, optional Columns to use when counting unique combinations. normalize:bool, default False Return proportions rather than frequencies. sort:bool, default True Sort by frequencies. ascending:bool, default False Sort in ascending order. dropna:bool, default True Don’t include counts of rows that contain NA values. New in version 1.3.0. Returns Series See also Series.value_counts Equivalent method on Series. Notes The returned Series will have a MultiIndex with one level per input column. By default, rows that contain any NA values are omitted from the result. By default, the resulting Series will be in descending order so that the first element is the most frequently-occurring row. Examples >>> df = pd.DataFrame({'num_legs': [2, 4, 4, 6], ... 'num_wings': [2, 0, 0, 0]}, ... index=['falcon', 'dog', 'cat', 'ant']) >>> df num_legs num_wings falcon 2 2 dog 4 0 cat 4 0 ant 6 0 >>> df.value_counts() num_legs num_wings 4 0 2 2 2 1 6 0 1 dtype: int64 >>> df.value_counts(sort=False) num_legs num_wings 2 2 1 4 0 2 6 0 1 dtype: int64 >>> df.value_counts(ascending=True) num_legs num_wings 2 2 1 6 0 1 4 0 2 dtype: int64 >>> df.value_counts(normalize=True) num_legs num_wings 4 0 0.50 2 2 0.25 6 0 0.25 dtype: float64 With dropna set to False we can also count rows with NA values. >>> df = pd.DataFrame({'first_name': ['John', 'Anne', 'John', 'Beth'], ... 'middle_name': ['Smith', pd.NA, pd.NA, 'Louise']}) >>> df first_name middle_name 0 John Smith 1 Anne <NA> 2 John <NA> 3 Beth Louise >>> df.value_counts() first_name middle_name Beth Louise 1 John Smith 1 dtype: int64 >>> df.value_counts(dropna=False) first_name middle_name Anne NaN 1 Beth Louise 1 John Smith 1 NaN 1 dtype: int64
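The subset parameter above has no example of its own. A minimal sketch, reusing the animals frame from this section, counting combinations over a single column:

```python
import pandas as pd

df = pd.DataFrame({'num_legs': [2, 4, 4, 6],
                   'num_wings': [2, 0, 0, 0]},
                  index=['falcon', 'dog', 'cat', 'ant'])

# Count unique combinations over just the chosen column(s):
# four legs occurs twice; two and six legs once each.
counts = df.value_counts(subset=['num_legs'])
```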
pandas.DataFrame.values propertyDataFrame.values Return a NumPy representation of the DataFrame. Warning We recommend using DataFrame.to_numpy() instead. Only the values in the DataFrame will be returned, the axes labels will be removed. Returns numpy.ndarray The values of the DataFrame. See also DataFrame.to_numpy Recommended alternative to this method. DataFrame.index Retrieve the index labels. DataFrame.columns Retrieve the column names. Notes The dtype will be a lower-common-denominator dtype (implicit upcasting); that is to say if the dtypes (even of numeric types) are mixed, the one that accommodates all will be chosen. Use this with care if you are not dealing with the blocks. e.g. If the dtypes are float16 and float32, dtype will be upcast to float32. If dtypes are int32 and uint8, dtype will be upcast to int32. By numpy.find_common_type() convention, mixing int64 and uint64 will result in a float64 dtype. Examples A DataFrame where all columns are the same type (e.g., int64) results in an array of the same type. >>> df = pd.DataFrame({'age': [ 3, 29], ... 'height': [94, 170], ... 'weight': [31, 115]}) >>> df age height weight 0 3 94 31 1 29 170 115 >>> df.dtypes age int64 height int64 weight int64 dtype: object >>> df.values array([[ 3, 94, 31], [ 29, 170, 115]]) A DataFrame with mixed type columns (e.g., str/object, int64, float32) results in an ndarray of the broadest type that accommodates these mixed types (e.g., object). >>> df2 = pd.DataFrame([('parrot', 24.0, 'second'), ... ('lion', 80.5, 1), ... ('monkey', np.nan, None)], ... columns=('name', 'max_speed', 'rank')) >>> df2.dtypes name object max_speed float64 rank object dtype: object >>> df2.values array([['parrot', 24.0, 'second'], ['lion', 80.5, 1], ['monkey', nan, None]], dtype=object)
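Since the warning above recommends DataFrame.to_numpy(), a brief comparison may help. Both upcast mixed numeric columns to the common dtype, but to_numpy() additionally accepts an explicit dtype argument:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3.5, 4.5]})

# Mixed int64/float64 columns are upcast to the common dtype, float64.
arr = df.values

# to_numpy() performs the same conversion but lets you request a dtype.
arr32 = df.to_numpy(dtype=np.float32)
```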
pandas.DataFrame.var DataFrame.var(axis=None, skipna=True, level=None, ddof=1, numeric_only=None, **kwargs)[source] Return unbiased variance over requested axis. Normalized by N-1 by default. This can be changed using the ddof argument. Parameters axis:{index (0), columns (1)} skipna:bool, default True Exclude NA/null values. If an entire row/column is NA, the result will be NA. level:int or level name, default None If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series. ddof:int, default 1 Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements. numeric_only:bool, default None Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series. Returns Series or DataFrame (if level specified) Examples >>> df = pd.DataFrame({'person_id': [0, 1, 2, 3], ... 'age': [21, 25, 62, 43], ... 'height': [1.61, 1.87, 1.49, 2.01]} ... ).set_index('person_id') >>> df age height person_id 0 21 1.61 1 25 1.87 2 62 1.49 3 43 2.01 >>> df.var() age 352.916667 height 0.056367 Alternatively, ddof=0 can be set to normalize by N instead of N-1: >>> df.var(ddof=0) age 264.687500 height 0.042275
pandas.DataFrame.where DataFrame.where(cond, other=NoDefault.no_default, inplace=False, axis=None, level=None, errors='raise', try_cast=NoDefault.no_default)[source] Replace values where the condition is False. Parameters cond:bool Series/DataFrame, array-like, or callable Where cond is True, keep the original value. Where False, replace with corresponding value from other. If cond is callable, it is computed on the Series/DataFrame and should return boolean Series/DataFrame or array. The callable must not change input Series/DataFrame (though pandas doesn’t check it). other:scalar, Series/DataFrame, or callable Entries where cond is False are replaced with corresponding value from other. If other is callable, it is computed on the Series/DataFrame and should return scalar or Series/DataFrame. The callable must not change input Series/DataFrame (though pandas doesn’t check it). inplace:bool, default False Whether to perform the operation in place on the data. axis:int, default None Alignment axis if needed. level:int, default None Alignment level if needed. errors:str, {‘raise’, ‘ignore’}, default ‘raise’ Note that currently this parameter won’t affect the results and will always coerce to a suitable dtype. ‘raise’ : allow exceptions to be raised. ‘ignore’ : suppress exceptions. On error return original object. try_cast:bool, default None Try to cast the result back to the input type (if possible). Deprecated since version 1.3.0: Manually cast back if necessary. Returns Same type as caller or None if inplace=True. See also DataFrame.mask() Return an object of same shape as self. Notes The where method is an application of the if-then idiom. For each element in the calling DataFrame, if cond is True the element is used; otherwise the corresponding element from the DataFrame other is used. The signature for DataFrame.where() differs from numpy.where(). Roughly df1.where(m, df2) is equivalent to np.where(m, df1, df2). 
For further details and examples see the where documentation in indexing. Examples >>> s = pd.Series(range(5)) >>> s.where(s > 0) 0 NaN 1 1.0 2 2.0 3 3.0 4 4.0 dtype: float64 >>> s.mask(s > 0) 0 0.0 1 NaN 2 NaN 3 NaN 4 NaN dtype: float64 >>> s.where(s > 1, 10) 0 10 1 10 2 2 3 3 4 4 dtype: int64 >>> s.mask(s > 1, 10) 0 0 1 1 2 10 3 10 4 10 dtype: int64 >>> df = pd.DataFrame(np.arange(10).reshape(-1, 2), columns=['A', 'B']) >>> df A B 0 0 1 1 2 3 2 4 5 3 6 7 4 8 9 >>> m = df % 3 == 0 >>> df.where(m, -df) A B 0 0 -1 1 -2 3 2 -4 -5 3 6 -7 4 -8 9 >>> df.where(m, -df) == np.where(m, df, -df) A B 0 True True 1 True True 2 True True 3 True True 4 True True >>> df.where(m, -df) == df.mask(~m, -df) A B 0 True True 1 True True 2 True True 3 True True 4 True True
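The parameter descriptions above note that both cond and other may be callables computed on the calling DataFrame, but that case is not demonstrated; a minimal sketch:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(10).reshape(-1, 2), columns=['A', 'B'])

# Both cond and other are evaluated on the DataFrame itself:
# keep even entries, replace odd entries with their negation.
result = df.where(lambda d: d % 2 == 0, lambda d: -d)
```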
pandas.DataFrame.xs DataFrame.xs(key, axis=0, level=None, drop_level=True)[source] Return cross-section from the Series/DataFrame. This method takes a key argument to select data at a particular level of a MultiIndex. Parameters key:label or tuple of label Label contained in the index, or partially in a MultiIndex. axis:{0 or ‘index’, 1 or ‘columns’}, default 0 Axis to retrieve cross-section on. level:object, defaults to first n levels (n=1 or len(key)) In case of a key partially contained in a MultiIndex, indicate which levels are used. Levels can be referred by label or position. drop_level:bool, default True If False, returns object with same levels as self. Returns Series or DataFrame Cross-section from the original Series or DataFrame corresponding to the selected index levels. See also DataFrame.loc Access a group of rows and columns by label(s) or a boolean array. DataFrame.iloc Purely integer-location based indexing for selection by position. Notes xs can not be used to set values. MultiIndex Slicers is a generic way to get/set values on any level or levels. It is a superset of xs functionality, see MultiIndex Slicers. Examples >>> d = {'num_legs': [4, 4, 2, 2], ... 'num_wings': [0, 0, 2, 2], ... 'class': ['mammal', 'mammal', 'mammal', 'bird'], ... 'animal': ['cat', 'dog', 'bat', 'penguin'], ... 
'locomotion': ['walks', 'walks', 'flies', 'walks']} >>> df = pd.DataFrame(data=d) >>> df = df.set_index(['class', 'animal', 'locomotion']) >>> df num_legs num_wings class animal locomotion mammal cat walks 4 0 dog walks 4 0 bat flies 2 2 bird penguin walks 2 2 Get values at specified index >>> df.xs('mammal') num_legs num_wings animal locomotion cat walks 4 0 dog walks 4 0 bat flies 2 2 Get values at several indexes >>> df.xs(('mammal', 'dog')) num_legs num_wings locomotion walks 4 0 Get values at specified index and level >>> df.xs('cat', level=1) num_legs num_wings class locomotion mammal walks 4 0 Get values at several indexes and levels >>> df.xs(('bird', 'walks'), ... level=[0, 'locomotion']) num_legs num_wings animal penguin 2 2 Get values at specified column and axis >>> df.xs('num_wings', axis=1) class animal locomotion mammal cat walks 0 dog walks 0 bat flies 2 bird penguin walks 2 Name: num_wings, dtype: int64
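drop_level=False is described above but never demonstrated. A short sketch, reusing the same animals frame, showing that the selected 'class' level is retained in the result's index:

```python
import pandas as pd

d = {'num_legs': [4, 4, 2, 2],
     'num_wings': [0, 0, 2, 2],
     'class': ['mammal', 'mammal', 'mammal', 'bird'],
     'animal': ['cat', 'dog', 'bat', 'penguin'],
     'locomotion': ['walks', 'walks', 'flies', 'walks']}
df = pd.DataFrame(data=d).set_index(['class', 'animal', 'locomotion'])

# With drop_level=False the cross-section keeps all three index levels,
# rather than dropping the matched 'class' level.
sub = df.xs('mammal', drop_level=False)
```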
pandas.date_range pandas.date_range(start=None, end=None, periods=None, freq=None, tz=None, normalize=False, name=None, closed=NoDefault.no_default, inclusive=None, **kwargs)[source] Return a fixed frequency DatetimeIndex. Returns the range of equally spaced time points (where the difference between any two adjacent points is specified by the given frequency) such that they all satisfy start <[=] x <[=] end, where the first one and the last one are, resp., the first and last time points in that range that fall on the boundary of freq (if given as a frequency string) or that are valid for freq (if given as a pandas.tseries.offsets.DateOffset). (If exactly one of start, end, or freq is not specified, this missing parameter can be computed given periods, the number of timesteps in the range. See the note below.) Parameters start:str or datetime-like, optional Left bound for generating dates. end:str or datetime-like, optional Right bound for generating dates. periods:int, optional Number of periods to generate. freq:str or DateOffset, default ‘D’ Frequency strings can have multiples, e.g. ‘5H’. See here for a list of frequency aliases. tz:str or tzinfo, optional Time zone name for returning localized DatetimeIndex, for example ‘Asia/Hong_Kong’. By default, the resulting DatetimeIndex is timezone-naive. normalize:bool, default False Normalize start/end dates to midnight before generating date range. name:str, default None Name of the resulting DatetimeIndex. closed:{None, ‘left’, ‘right’}, optional Make the interval closed with respect to the given frequency to the ‘left’, ‘right’, or both sides (None, the default). Deprecated since version 1.4.0: Argument closed has been deprecated to standardize boundary inputs. Use inclusive instead, to set each bound as closed or open. inclusive:{“both”, “neither”, “left”, “right”}, default “both” Include boundaries; Whether to set each bound as closed or open. New in version 1.4.0. **kwargs For compatibility. 
Has no effect on the result. Returns rng:DatetimeIndex See also DatetimeIndex An immutable container for datetimes. timedelta_range Return a fixed frequency TimedeltaIndex. period_range Return a fixed frequency PeriodIndex. interval_range Return a fixed frequency IntervalIndex. Notes Of the four parameters start, end, periods, and freq, exactly three must be specified. If freq is omitted, the resulting DatetimeIndex will have periods linearly spaced elements between start and end (closed on both sides). To learn more about the frequency strings, please see this link. Examples Specifying the values The next four examples generate the same DatetimeIndex, but vary the combination of start, end and periods. Specify start and end, with the default daily frequency. >>> pd.date_range(start='1/1/2018', end='1/08/2018') DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04', '2018-01-05', '2018-01-06', '2018-01-07', '2018-01-08'], dtype='datetime64[ns]', freq='D') Specify start and periods, the number of periods (days). >>> pd.date_range(start='1/1/2018', periods=8) DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04', '2018-01-05', '2018-01-06', '2018-01-07', '2018-01-08'], dtype='datetime64[ns]', freq='D') Specify end and periods, the number of periods (days). >>> pd.date_range(end='1/1/2018', periods=8) DatetimeIndex(['2017-12-25', '2017-12-26', '2017-12-27', '2017-12-28', '2017-12-29', '2017-12-30', '2017-12-31', '2018-01-01'], dtype='datetime64[ns]', freq='D') Specify start, end, and periods; the frequency is generated automatically (linearly spaced). >>> pd.date_range(start='2018-04-24', end='2018-04-27', periods=3) DatetimeIndex(['2018-04-24 00:00:00', '2018-04-25 12:00:00', '2018-04-27 00:00:00'], dtype='datetime64[ns]', freq=None) Other Parameters Changed the freq (frequency) to 'M' (month end frequency). 
>>> pd.date_range(start='1/1/2018', periods=5, freq='M') DatetimeIndex(['2018-01-31', '2018-02-28', '2018-03-31', '2018-04-30', '2018-05-31'], dtype='datetime64[ns]', freq='M') Multiples are allowed >>> pd.date_range(start='1/1/2018', periods=5, freq='3M') DatetimeIndex(['2018-01-31', '2018-04-30', '2018-07-31', '2018-10-31', '2019-01-31'], dtype='datetime64[ns]', freq='3M') freq can also be specified as an Offset object. >>> pd.date_range(start='1/1/2018', periods=5, freq=pd.offsets.MonthEnd(3)) DatetimeIndex(['2018-01-31', '2018-04-30', '2018-07-31', '2018-10-31', '2019-01-31'], dtype='datetime64[ns]', freq='3M') Specify tz to set the timezone. >>> pd.date_range(start='1/1/2018', periods=5, tz='Asia/Tokyo') DatetimeIndex(['2018-01-01 00:00:00+09:00', '2018-01-02 00:00:00+09:00', '2018-01-03 00:00:00+09:00', '2018-01-04 00:00:00+09:00', '2018-01-05 00:00:00+09:00'], dtype='datetime64[ns, Asia/Tokyo]', freq='D') inclusive controls whether to include start and end that are on the boundary. The default, “both”, includes boundary points on either end. >>> pd.date_range(start='2017-01-01', end='2017-01-04', inclusive="both") DatetimeIndex(['2017-01-01', '2017-01-02', '2017-01-03', '2017-01-04'], dtype='datetime64[ns]', freq='D') Use inclusive='left' to exclude end if it falls on the boundary. >>> pd.date_range(start='2017-01-01', end='2017-01-04', inclusive='left') DatetimeIndex(['2017-01-01', '2017-01-02', '2017-01-03'], dtype='datetime64[ns]', freq='D') Use inclusive='right' to exclude start if it falls on the boundary, and similarly inclusive='neither' will exclude both start and end. >>> pd.date_range(start='2017-01-01', end='2017-01-04', inclusive='right') DatetimeIndex(['2017-01-02', '2017-01-03', '2017-01-04'], dtype='datetime64[ns]', freq='D')
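The normalize parameter is documented above without an example; a minimal sketch:

```python
import pandas as pd

# normalize=True snaps the start/end bounds to midnight before generating
# the range, so the intraday component of '11:23' is discarded.
idx = pd.date_range(start='2018-01-01 11:23', periods=3, freq='D',
                    normalize=True)
```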
pandas.DatetimeIndex classpandas.DatetimeIndex(data=None, freq=NoDefault.no_default, tz=None, normalize=False, closed=None, ambiguous='raise', dayfirst=False, yearfirst=False, dtype=None, copy=False, name=None)[source] Immutable ndarray-like of datetime64 data. Represented internally as int64, and which can be boxed to Timestamp objects that are subclasses of datetime and carry metadata. Parameters data:array-like (1-dimensional), optional Optional datetime-like data to construct index with. freq:str or pandas offset object, optional One of pandas date offset strings or corresponding objects. The string ‘infer’ can be passed in order to set the frequency of the index as the inferred frequency upon creation. tz:pytz.timezone or dateutil.tz.tzfile or datetime.tzinfo or str Set the Timezone of the data. normalize:bool, default False Normalize start/end dates to midnight before generating date range. closed:{‘left’, ‘right’}, optional Set whether to include start and end that are on the boundary. The default includes boundary points on either end. ambiguous:‘infer’, bool-ndarray, ‘NaT’, default ‘raise’ When clocks moved backward due to DST, ambiguous times may arise. For example in Central European Time (UTC+01), when going from 03:00 DST to 02:00 non-DST, 02:30:00 local time occurs both at 00:30:00 UTC and at 01:30:00 UTC. In such a situation, the ambiguous parameter dictates how ambiguous times should be handled. ‘infer’ will attempt to infer fall dst-transition hours based on order bool-ndarray where True signifies a DST time, False signifies a non-DST time (note that this flag is only applicable for ambiguous times) ‘NaT’ will return NaT where there are ambiguous times ‘raise’ will raise an AmbiguousTimeError if there are ambiguous times. dayfirst:bool, default False If True, parse dates in data with the day first order. yearfirst:bool, default False If True parse dates in data with the year first order. 
dtype:numpy.dtype or DatetimeTZDtype or str, default None Note that the only NumPy dtype allowed is ‘datetime64[ns]’. copy:bool, default False Make a copy of input ndarray. name:label, default None Name to be stored in the index. See also Index The base pandas Index type. TimedeltaIndex Index of timedelta64 data. PeriodIndex Index of Period data. to_datetime Convert argument to datetime. date_range Create a fixed-frequency DatetimeIndex. Notes To learn more about the frequency strings, please see this link. Attributes year The year of the datetime. month The month as January=1, December=12. day The day of the datetime. hour The hours of the datetime. minute The minutes of the datetime. second The seconds of the datetime. microsecond The microseconds of the datetime. nanosecond The nanoseconds of the datetime. date Returns numpy array of python datetime.date objects. time Returns numpy array of datetime.time objects. timetz Returns numpy array of datetime.time objects with timezone information. dayofyear The ordinal day of the year. day_of_year The ordinal day of the year. weekofyear (DEPRECATED) The week ordinal of the year. week (DEPRECATED) The week ordinal of the year. dayofweek The day of the week with Monday=0, Sunday=6. day_of_week The day of the week with Monday=0, Sunday=6. weekday The day of the week with Monday=0, Sunday=6. quarter The quarter of the date. tz Return the timezone. freq Return the frequency object if it is set, otherwise None. freqstr Return the frequency object as a string if it is set, otherwise None. is_month_start Indicates whether the date is the first day of the month. is_month_end Indicates whether the date is the last day of the month. is_quarter_start Indicator for whether the date is the first day of a quarter. is_quarter_end Indicator for whether the date is the last day of a quarter. is_year_start Indicate whether the date is the first day of a year. is_year_end Indicate whether the date is the last day of the year. 
is_leap_year Boolean indicator if the date belongs to a leap year. inferred_freq Tries to return a string representing a frequency guess, generated by infer_freq. Methods normalize(*args, **kwargs) Convert times to midnight. strftime(*args, **kwargs) Convert to Index using specified date_format. snap([freq]) Snap time stamps to nearest occurring frequency. tz_convert(tz) Convert tz-aware Datetime Array/Index from one time zone to another. tz_localize(tz[, ambiguous, nonexistent]) Localize tz-naive Datetime Array/Index to tz-aware Datetime Array/Index. round(*args, **kwargs) Perform round operation on the data to the specified freq. floor(*args, **kwargs) Perform floor operation on the data to the specified freq. ceil(*args, **kwargs) Perform ceil operation on the data to the specified freq. to_period(*args, **kwargs) Cast to PeriodArray/Index at a particular frequency. to_perioddelta(freq) Calculate TimedeltaArray of difference between index values and index converted to PeriodArray at specified freq. to_pydatetime(*args, **kwargs) Return Datetime Array/Index as object ndarray of datetime.datetime objects. to_series([keep_tz, index, name]) Create a Series with both index and values equal to the index keys useful with map for returning an indexer based on an index. to_frame([index, name]) Create a DataFrame with a column containing the Index. month_name(*args, **kwargs) Return the month names of the DateTimeIndex with specified locale. day_name(*args, **kwargs) Return the day names of the DateTimeIndex with specified locale. mean(*args, **kwargs) Return the mean value of the Array. std(*args, **kwargs) Return sample standard deviation over requested axis.
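This class reference lists no examples. A minimal construction sketch, using the 'infer' option mentioned under the freq parameter above:

```python
import pandas as pd

# Build a DatetimeIndex directly from strings; freq='infer' sets the
# frequency to the step inferred from the data (daily here).
idx = pd.DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03'],
                       freq='infer')
```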
pandas.DatetimeIndex.ceil DatetimeIndex.ceil(*args, **kwargs)[source] Perform ceil operation on the data to the specified freq. Parameters freq:str or Offset The frequency level to ceil the index to. Must be a fixed frequency like ‘S’ (second) not ‘ME’ (month end). See frequency aliases for a list of possible freq values. ambiguous:‘infer’, bool-ndarray, ‘NaT’, default ‘raise’ Only relevant for DatetimeIndex: ‘infer’ will attempt to infer fall dst-transition hours based on order bool-ndarray where True signifies a DST time, False designates a non-DST time (note that this flag is only applicable for ambiguous times) ‘NaT’ will return NaT where there are ambiguous times ‘raise’ will raise an AmbiguousTimeError if there are ambiguous times. nonexistent:‘shift_forward’, ‘shift_backward’, ‘NaT’, timedelta, default ‘raise’ A nonexistent time does not exist in a particular timezone where clocks moved forward due to DST. ‘shift_forward’ will shift the nonexistent time forward to the closest existing time ‘shift_backward’ will shift the nonexistent time backward to the closest existing time ‘NaT’ will return NaT where there are nonexistent times timedelta objects will shift nonexistent times by the timedelta ‘raise’ will raise a NonExistentTimeError if there are nonexistent times. Returns DatetimeIndex, TimedeltaIndex, or Series Index of the same type for a DatetimeIndex or TimedeltaIndex, or a Series with the same index for a Series. Raises ValueError if the freq cannot be converted. Notes If the timestamps have a timezone, ceiling will take place relative to the local (“wall”) time, and the result will be re-localized to the same timezone. When ceiling near daylight savings time, use nonexistent and ambiguous to control the re-localization behavior.
Examples DatetimeIndex >>> rng = pd.date_range('1/1/2018 11:59:00', periods=3, freq='min') >>> rng DatetimeIndex(['2018-01-01 11:59:00', '2018-01-01 12:00:00', '2018-01-01 12:01:00'], dtype='datetime64[ns]', freq='T') >>> rng.ceil('H') DatetimeIndex(['2018-01-01 12:00:00', '2018-01-01 12:00:00', '2018-01-01 13:00:00'], dtype='datetime64[ns]', freq=None) Series >>> pd.Series(rng).dt.ceil("H") 0 2018-01-01 12:00:00 1 2018-01-01 12:00:00 2 2018-01-01 13:00:00 dtype: datetime64[ns] When rounding near a daylight savings time transition, use ambiguous or nonexistent to control how the timestamp should be re-localized. >>> rng_tz = pd.DatetimeIndex(["2021-10-31 01:30:00"], tz="Europe/Amsterdam") >>> rng_tz.ceil("H", ambiguous=False) DatetimeIndex(['2021-10-31 02:00:00+01:00'], dtype='datetime64[ns, Europe/Amsterdam]', freq=None) >>> rng_tz.ceil("H", ambiguous=True) DatetimeIndex(['2021-10-31 02:00:00+02:00'], dtype='datetime64[ns, Europe/Amsterdam]', freq=None)
pandas.DatetimeIndex.date propertyDatetimeIndex.date Returns numpy array of python datetime.date objects. Namely, the date part of Timestamps without time and timezone information.
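This property has no example in this section; a short sketch showing that both the time of day and the timezone are dropped:

```python
import datetime
import pandas as pd

idx = pd.date_range('2020-03-01 09:30', periods=2, freq='D', tz='UTC')

# .date strips the time and timezone, yielding a numpy array of
# plain datetime.date objects.
dates = idx.date
```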
pandas.DatetimeIndex.day propertyDatetimeIndex.day The day of the datetime. Examples >>> datetime_series = pd.Series( ... pd.date_range("2000-01-01", periods=3, freq="D") ... ) >>> datetime_series 0 2000-01-01 1 2000-01-02 2 2000-01-03 dtype: datetime64[ns] >>> datetime_series.dt.day 0 1 1 2 2 3 dtype: int64
pandas.DatetimeIndex.day_name DatetimeIndex.day_name(*args, **kwargs)[source] Return the day names of the DateTimeIndex with specified locale. Parameters locale:str, optional Locale determining the language in which to return the day name. Default is English locale. Returns Index Index of day names. Examples >>> idx = pd.date_range(start='2018-01-01', freq='D', periods=3) >>> idx DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03'], dtype='datetime64[ns]', freq='D') >>> idx.day_name() Index(['Monday', 'Tuesday', 'Wednesday'], dtype='object')
pandas.DatetimeIndex.day_of_week propertyDatetimeIndex.day_of_week The day of the week with Monday=0, Sunday=6. Return the day of the week. It is assumed the week starts on Monday, which is denoted by 0 and ends on Sunday which is denoted by 6. This method is available on both Series with datetime values (using the dt accessor) or DatetimeIndex. Returns Series or Index Containing integers indicating the day number. See also Series.dt.dayofweek Alias. Series.dt.weekday Alias. Series.dt.day_name Returns the name of the day of the week. Examples >>> s = pd.date_range('2016-12-31', '2017-01-08', freq='D').to_series() >>> s.dt.dayofweek 2016-12-31 5 2017-01-01 6 2017-01-02 0 2017-01-03 1 2017-01-04 2 2017-01-05 3 2017-01-06 4 2017-01-07 5 2017-01-08 6 Freq: D, dtype: int64
pandas.DatetimeIndex.day_of_year propertyDatetimeIndex.day_of_year The ordinal day of the year.
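This property has no example in this section; a minimal sketch:

```python
import pandas as pd

idx = pd.to_datetime(['2020-01-01', '2020-02-01', '2020-12-31'])

# Ordinal day within the year; 2020 is a leap year, so Dec 31 is day 366.
doy = idx.day_of_year
```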
pandas.DatetimeIndex.dayofweek propertyDatetimeIndex.dayofweek The day of the week with Monday=0, Sunday=6. Return the day of the week. It is assumed the week starts on Monday, which is denoted by 0 and ends on Sunday which is denoted by 6. This method is available on both Series with datetime values (using the dt accessor) or DatetimeIndex. Returns Series or Index Containing integers indicating the day number. See also Series.dt.dayofweek Alias. Series.dt.weekday Alias. Series.dt.day_name Returns the name of the day of the week. Examples >>> s = pd.date_range('2016-12-31', '2017-01-08', freq='D').to_series() >>> s.dt.dayofweek 2016-12-31 5 2017-01-01 6 2017-01-02 0 2017-01-03 1 2017-01-04 2 2017-01-05 3 2017-01-06 4 2017-01-07 5 2017-01-08 6 Freq: D, dtype: int64
pandas.DatetimeIndex.dayofyear propertyDatetimeIndex.dayofyear The ordinal day of the year.
pandas.DatetimeIndex.floor DatetimeIndex.floor(*args, **kwargs)[source] Perform floor operation on the data to the specified freq. Parameters freq:str or Offset The frequency level to floor the index to. Must be a fixed frequency like ‘S’ (second) not ‘ME’ (month end). See frequency aliases for a list of possible freq values. ambiguous:‘infer’, bool-ndarray, ‘NaT’, default ‘raise’ Only relevant for DatetimeIndex: ‘infer’ will attempt to infer fall dst-transition hours based on order bool-ndarray where True signifies a DST time, False designates a non-DST time (note that this flag is only applicable for ambiguous times) ‘NaT’ will return NaT where there are ambiguous times ‘raise’ will raise an AmbiguousTimeError if there are ambiguous times. nonexistent:‘shift_forward’, ‘shift_backward’, ‘NaT’, timedelta, default ‘raise’ A nonexistent time does not exist in a particular timezone where clocks moved forward due to DST. ‘shift_forward’ will shift the nonexistent time forward to the closest existing time ‘shift_backward’ will shift the nonexistent time backward to the closest existing time ‘NaT’ will return NaT where there are nonexistent times timedelta objects will shift nonexistent times by the timedelta ‘raise’ will raise a NonExistentTimeError if there are nonexistent times. Returns DatetimeIndex, TimedeltaIndex, or Series Index of the same type for a DatetimeIndex or TimedeltaIndex, or a Series with the same index for a Series. Raises ValueError if the freq cannot be converted. Notes If the timestamps have a timezone, flooring will take place relative to the local (“wall”) time, and the result will be re-localized to the same timezone. When flooring near daylight savings time, use nonexistent and ambiguous to control the re-localization behavior.
Examples DatetimeIndex >>> rng = pd.date_range('1/1/2018 11:59:00', periods=3, freq='min') >>> rng DatetimeIndex(['2018-01-01 11:59:00', '2018-01-01 12:00:00', '2018-01-01 12:01:00'], dtype='datetime64[ns]', freq='T') >>> rng.floor('H') DatetimeIndex(['2018-01-01 11:00:00', '2018-01-01 12:00:00', '2018-01-01 12:00:00'], dtype='datetime64[ns]', freq=None) Series >>> pd.Series(rng).dt.floor("H") 0 2018-01-01 11:00:00 1 2018-01-01 12:00:00 2 2018-01-01 12:00:00 dtype: datetime64[ns] When rounding near a daylight savings time transition, use ambiguous or nonexistent to control how the timestamp should be re-localized. >>> rng_tz = pd.DatetimeIndex(["2021-10-31 03:30:00"], tz="Europe/Amsterdam") >>> rng_tz.floor("2H", ambiguous=False) DatetimeIndex(['2021-10-31 02:00:00+01:00'], dtype='datetime64[ns, Europe/Amsterdam]', freq=None) >>> rng_tz.floor("2H", ambiguous=True) DatetimeIndex(['2021-10-31 02:00:00+02:00'], dtype='datetime64[ns, Europe/Amsterdam]', freq=None)
pandas.DatetimeIndex.freq propertyDatetimeIndex.freq Return the frequency object if it is set, otherwise None.
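As a quick illustration (a sketch not taken from the upstream docstring; the exact offset repr varies by pandas version), an index produced by date_range carries its frequency, while one built from arbitrary timestamps does not:

```python
import pandas as pd

# date_range attaches the requested frequency to the resulting index.
regular = pd.date_range("2000-01-01", periods=3, freq="D")
print(regular.freq)    # a <Day> offset object

# An index built from irregularly spaced timestamps has no frequency set.
irregular = pd.DatetimeIndex(["2000-01-01", "2000-01-05"])
print(irregular.freq)  # None
```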
pandas.DatetimeIndex.freqstr propertyDatetimeIndex.freqstr Return the frequency object as a string if it is set, otherwise None.
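For example (an illustrative sketch, not part of the upstream docstring):

```python
import pandas as pd

# The string alias of the index's frequency.
idx = pd.date_range("2000-01-01", periods=3, freq="D")
print(idx.freqstr)  # 'D'

# With no frequency set, freqstr is None.
print(pd.DatetimeIndex(["2000-01-01", "2000-01-05"]).freqstr)  # None
```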
pandas.DatetimeIndex.hour propertyDatetimeIndex.hour The hours of the datetime. Examples >>> datetime_series = pd.Series( ... pd.date_range("2000-01-01", periods=3, freq="h") ... ) >>> datetime_series 0 2000-01-01 00:00:00 1 2000-01-01 01:00:00 2 2000-01-01 02:00:00 dtype: datetime64[ns] >>> datetime_series.dt.hour 0 0 1 1 2 2 dtype: int64
pandas.DatetimeIndex.indexer_at_time DatetimeIndex.indexer_at_time(time, asof=False)[source] Return index locations of values at particular time of day (e.g. 9:30AM). Parameters time:datetime.time or str Time passed in either as object (datetime.time) or as string in appropriate format (“%H:%M”, “%H%M”, “%I:%M%p”, “%I%M%p”, “%H:%M:%S”, “%H%M%S”, “%I:%M:%S%p”, “%I%M%S%p”). Returns np.ndarray[np.intp] See also indexer_between_time Get index locations of values between particular times of day. DataFrame.at_time Select values at particular time of day.
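For example (an illustrative sketch with made-up data, not part of the upstream docstring):

```python
import pandas as pd

# Six timestamps alternating between 00:00 and 12:00.
idx = pd.date_range("2023-01-01", periods=6, freq="12H")

# Integer positions whose wall-clock time is exactly 12:00.
locs = idx.indexer_at_time("12:00")
print(locs)  # positions 1, 3 and 5
```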
pandas.DatetimeIndex.indexer_between_time DatetimeIndex.indexer_between_time(start_time, end_time, include_start=True, include_end=True)[source] Return index locations of values between particular times of day (e.g., 9:00-9:30AM). Parameters start_time, end_time:datetime.time, str Time passed either as object (datetime.time) or as string in appropriate format (“%H:%M”, “%H%M”, “%I:%M%p”, “%I%M%p”, “%H:%M:%S”, “%H%M%S”, “%I:%M:%S%p”,”%I%M%S%p”). include_start:bool, default True include_end:bool, default True Returns np.ndarray[np.intp] See also indexer_at_time Get index locations of values at particular time of day. DataFrame.between_time Select values between particular times of day.
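For example (an illustrative sketch with made-up data, not part of the upstream docstring), showing the effect of include_end:

```python
import pandas as pd

# Hourly timestamps: 08:00, 09:00, 10:00, 11:00, 12:00.
idx = pd.date_range("2023-01-01 08:00", periods=5, freq="H")

# Both endpoints are included by default.
print(idx.indexer_between_time("09:00", "11:00"))
# Excluding the end time drops the 11:00 position.
print(idx.indexer_between_time("09:00", "11:00", include_end=False))
```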
pandas.DatetimeIndex.inferred_freq DatetimeIndex.inferred_freq Tries to return a string representing a frequency guess, generated by infer_freq. Returns None if it can’t autodetect the frequency.
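For example (an illustrative sketch, not part of the upstream docstring):

```python
import pandas as pd

# No freq was set explicitly, but the spacing is regular,
# so a daily frequency can be inferred.
idx = pd.DatetimeIndex(["2023-01-01", "2023-01-02", "2023-01-03"])
print(idx.inferred_freq)  # 'D'

# Irregular spacing: nothing can be inferred.
print(pd.DatetimeIndex(["2023-01-01", "2023-01-02", "2023-01-05"]).inferred_freq)  # None
```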
pandas.DatetimeIndex.is_leap_year propertyDatetimeIndex.is_leap_year Boolean indicator if the date belongs to a leap year. A leap year has 366 days (instead of 365), including February 29th as an intercalary day. Leap years are years that are multiples of four, with the exception of years divisible by 100 but not by 400. Returns Series or ndarray Booleans indicating if dates belong to a leap year. Examples This method is available on Series with datetime values under the .dt accessor, and directly on DatetimeIndex. >>> idx = pd.date_range("2012-01-01", "2015-01-01", freq="Y") >>> idx DatetimeIndex(['2012-12-31', '2013-12-31', '2014-12-31'], dtype='datetime64[ns]', freq='A-DEC') >>> idx.is_leap_year array([ True, False, False]) >>> dates_series = pd.Series(idx) >>> dates_series 0 2012-12-31 1 2013-12-31 2 2014-12-31 dtype: datetime64[ns] >>> dates_series.dt.is_leap_year 0 True 1 False 2 False dtype: bool
pandas.DatetimeIndex.is_month_end propertyDatetimeIndex.is_month_end Indicates whether the date is the last day of the month. Returns Series or array For Series, returns a Series with boolean values. For DatetimeIndex, returns a boolean array. See also is_month_start Return a boolean indicating whether the date is the first day of the month. is_month_end Return a boolean indicating whether the date is the last day of the month. Examples This method is available on Series with datetime values under the .dt accessor, and directly on DatetimeIndex. >>> s = pd.Series(pd.date_range("2018-02-27", periods=3)) >>> s 0 2018-02-27 1 2018-02-28 2 2018-03-01 dtype: datetime64[ns] >>> s.dt.is_month_start 0 False 1 False 2 True dtype: bool >>> s.dt.is_month_end 0 False 1 True 2 False dtype: bool >>> idx = pd.date_range("2018-02-27", periods=3) >>> idx.is_month_start array([False, False, True]) >>> idx.is_month_end array([False, True, False])
pandas.DatetimeIndex.is_month_start propertyDatetimeIndex.is_month_start Indicates whether the date is the first day of the month. Returns Series or array For Series, returns a Series with boolean values. For DatetimeIndex, returns a boolean array. See also is_month_start Return a boolean indicating whether the date is the first day of the month. is_month_end Return a boolean indicating whether the date is the last day of the month. Examples This method is available on Series with datetime values under the .dt accessor, and directly on DatetimeIndex. >>> s = pd.Series(pd.date_range("2018-02-27", periods=3)) >>> s 0 2018-02-27 1 2018-02-28 2 2018-03-01 dtype: datetime64[ns] >>> s.dt.is_month_start 0 False 1 False 2 True dtype: bool >>> s.dt.is_month_end 0 False 1 True 2 False dtype: bool >>> idx = pd.date_range("2018-02-27", periods=3) >>> idx.is_month_start array([False, False, True]) >>> idx.is_month_end array([False, True, False])
pandas.DatetimeIndex.is_quarter_end propertyDatetimeIndex.is_quarter_end Indicator for whether the date is the last day of a quarter. Returns is_quarter_end:Series or DatetimeIndex The same type as the original data with boolean values. Series will have the same name and index. DatetimeIndex will have the same name. See also quarter Return the quarter of the date. is_quarter_start Similar property indicating the quarter start. Examples This method is available on Series with datetime values under the .dt accessor, and directly on DatetimeIndex. >>> df = pd.DataFrame({'dates': pd.date_range("2017-03-30", ... periods=4)}) >>> df.assign(quarter=df.dates.dt.quarter, ... is_quarter_end=df.dates.dt.is_quarter_end) dates quarter is_quarter_end 0 2017-03-30 1 False 1 2017-03-31 1 True 2 2017-04-01 2 False 3 2017-04-02 2 False >>> idx = pd.date_range('2017-03-30', periods=4) >>> idx DatetimeIndex(['2017-03-30', '2017-03-31', '2017-04-01', '2017-04-02'], dtype='datetime64[ns]', freq='D') >>> idx.is_quarter_end array([False, True, False, False])
pandas.DatetimeIndex.is_quarter_start propertyDatetimeIndex.is_quarter_start Indicator for whether the date is the first day of a quarter. Returns is_quarter_start:Series or DatetimeIndex The same type as the original data with boolean values. Series will have the same name and index. DatetimeIndex will have the same name. See also quarter Return the quarter of the date. is_quarter_end Similar property for indicating the quarter start. Examples This method is available on Series with datetime values under the .dt accessor, and directly on DatetimeIndex. >>> df = pd.DataFrame({'dates': pd.date_range("2017-03-30", ... periods=4)}) >>> df.assign(quarter=df.dates.dt.quarter, ... is_quarter_start=df.dates.dt.is_quarter_start) dates quarter is_quarter_start 0 2017-03-30 1 False 1 2017-03-31 1 False 2 2017-04-01 2 True 3 2017-04-02 2 False >>> idx = pd.date_range('2017-03-30', periods=4) >>> idx DatetimeIndex(['2017-03-30', '2017-03-31', '2017-04-01', '2017-04-02'], dtype='datetime64[ns]', freq='D') >>> idx.is_quarter_start array([False, False, True, False])
pandas.DatetimeIndex.is_year_end propertyDatetimeIndex.is_year_end Indicate whether the date is the last day of the year. Returns Series or DatetimeIndex The same type as the original data with boolean values. Series will have the same name and index. DatetimeIndex will have the same name. See also is_year_start Similar property indicating the start of the year. Examples This method is available on Series with datetime values under the .dt accessor, and directly on DatetimeIndex. >>> dates = pd.Series(pd.date_range("2017-12-30", periods=3)) >>> dates 0 2017-12-30 1 2017-12-31 2 2018-01-01 dtype: datetime64[ns] >>> dates.dt.is_year_end 0 False 1 True 2 False dtype: bool >>> idx = pd.date_range("2017-12-30", periods=3) >>> idx DatetimeIndex(['2017-12-30', '2017-12-31', '2018-01-01'], dtype='datetime64[ns]', freq='D') >>> idx.is_year_end array([False, True, False])
pandas.DatetimeIndex.is_year_start propertyDatetimeIndex.is_year_start Indicate whether the date is the first day of a year. Returns Series or DatetimeIndex The same type as the original data with boolean values. Series will have the same name and index. DatetimeIndex will have the same name. See also is_year_end Similar property indicating the last day of the year. Examples This method is available on Series with datetime values under the .dt accessor, and directly on DatetimeIndex. >>> dates = pd.Series(pd.date_range("2017-12-30", periods=3)) >>> dates 0 2017-12-30 1 2017-12-31 2 2018-01-01 dtype: datetime64[ns] >>> dates.dt.is_year_start 0 False 1 False 2 True dtype: bool >>> idx = pd.date_range("2017-12-30", periods=3) >>> idx DatetimeIndex(['2017-12-30', '2017-12-31', '2018-01-01'], dtype='datetime64[ns]', freq='D') >>> idx.is_year_start array([False, False, True])
pandas.DatetimeIndex.mean DatetimeIndex.mean(*args, **kwargs)[source] Return the mean value of the Array. New in version 0.25.0. Parameters skipna:bool, default True Whether to ignore any NaT elements. axis:int, optional, default 0 Returns scalar Timestamp or Timedelta. See also numpy.ndarray.mean Returns the average of array elements along a given axis. Series.mean Return the mean value in a Series. Notes mean is only defined for Datetime and Timedelta dtypes, not for Period.
pandas.DatetimeIndex.microsecond propertyDatetimeIndex.microsecond The microseconds of the datetime. Examples >>> datetime_series = pd.Series( ... pd.date_range("2000-01-01", periods=3, freq="us") ... ) >>> datetime_series 0 2000-01-01 00:00:00.000000 1 2000-01-01 00:00:00.000001 2 2000-01-01 00:00:00.000002 dtype: datetime64[ns] >>> datetime_series.dt.microsecond 0 0 1 1 2 2 dtype: int64
pandas.DatetimeIndex.minute propertyDatetimeIndex.minute The minutes of the datetime. Examples >>> datetime_series = pd.Series( ... pd.date_range("2000-01-01", periods=3, freq="T") ... ) >>> datetime_series 0 2000-01-01 00:00:00 1 2000-01-01 00:01:00 2 2000-01-01 00:02:00 dtype: datetime64[ns] >>> datetime_series.dt.minute 0 0 1 1 2 2 dtype: int64
pandas.DatetimeIndex.month propertyDatetimeIndex.month The month as January=1, December=12. Examples >>> datetime_series = pd.Series( ... pd.date_range("2000-01-01", periods=3, freq="M") ... ) >>> datetime_series 0 2000-01-31 1 2000-02-29 2 2000-03-31 dtype: datetime64[ns] >>> datetime_series.dt.month 0 1 1 2 2 3 dtype: int64
pandas.DatetimeIndex.month_name DatetimeIndex.month_name(*args, **kwargs)[source] Return the month names of the DatetimeIndex with specified locale. Parameters locale:str, optional Locale determining the language in which to return the month name. Default is English locale. Returns Index Index of month names. Examples >>> idx = pd.date_range(start='2018-01', freq='M', periods=3) >>> idx DatetimeIndex(['2018-01-31', '2018-02-28', '2018-03-31'], dtype='datetime64[ns]', freq='M') >>> idx.month_name() Index(['January', 'February', 'March'], dtype='object')
pandas.DatetimeIndex.nanosecond propertyDatetimeIndex.nanosecond The nanoseconds of the datetime. Examples >>> datetime_series = pd.Series( ... pd.date_range("2000-01-01", periods=3, freq="ns") ... ) >>> datetime_series 0 2000-01-01 00:00:00.000000000 1 2000-01-01 00:00:00.000000001 2 2000-01-01 00:00:00.000000002 dtype: datetime64[ns] >>> datetime_series.dt.nanosecond 0 0 1 1 2 2 dtype: int64
pandas.DatetimeIndex.normalize DatetimeIndex.normalize(*args, **kwargs)[source] Convert times to midnight. The time component of the date-time is converted to midnight, i.e. 00:00:00. This is useful when the time does not matter. Length is unaltered. The timezones are unaffected. This method is available on Series with datetime values under the .dt accessor, and directly on Datetime Array/Index. Returns DatetimeArray, DatetimeIndex or Series The same type as the original data. Series will have the same name and index. DatetimeIndex will have the same name. See also floor Floor the datetimes to the specified freq. ceil Ceil the datetimes to the specified freq. round Round the datetimes to the specified freq. Examples >>> idx = pd.date_range(start='2014-08-01 10:00', freq='H', ... periods=3, tz='Asia/Calcutta') >>> idx DatetimeIndex(['2014-08-01 10:00:00+05:30', '2014-08-01 11:00:00+05:30', '2014-08-01 12:00:00+05:30'], dtype='datetime64[ns, Asia/Calcutta]', freq='H') >>> idx.normalize() DatetimeIndex(['2014-08-01 00:00:00+05:30', '2014-08-01 00:00:00+05:30', '2014-08-01 00:00:00+05:30'], dtype='datetime64[ns, Asia/Calcutta]', freq=None)
pandas.DatetimeIndex.quarter propertyDatetimeIndex.quarter The quarter of the date.
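For example (an illustrative sketch, not part of the upstream docstring):

```python
import pandas as pd

# Quarters are numbered 1 through 4 within each year.
idx = pd.to_datetime(["2000-01-15", "2000-04-01", "2000-12-31"])
print(idx.quarter)  # quarters 1, 2 and 4
```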
pandas.DatetimeIndex.round DatetimeIndex.round(*args, **kwargs)[source] Perform round operation on the data to the specified freq. Parameters freq:str or Offset The frequency level to round the index to. Must be a fixed frequency like ‘S’ (second) not ‘ME’ (month end). See frequency aliases for a list of possible freq values. ambiguous:‘infer’, bool-ndarray, ‘NaT’, default ‘raise’ Only relevant for DatetimeIndex: ‘infer’ will attempt to infer fall dst-transition hours based on order bool-ndarray where True signifies a DST time, False designates a non-DST time (note that this flag is only applicable for ambiguous times) ‘NaT’ will return NaT where there are ambiguous times ‘raise’ will raise an AmbiguousTimeError if there are ambiguous times. nonexistent:‘shift_forward’, ‘shift_backward’, ‘NaT’, timedelta, default ‘raise’ A nonexistent time does not exist in a particular timezone where clocks moved forward due to DST. ‘shift_forward’ will shift the nonexistent time forward to the closest existing time ‘shift_backward’ will shift the nonexistent time backward to the closest existing time ‘NaT’ will return NaT where there are nonexistent times timedelta objects will shift nonexistent times by the timedelta ‘raise’ will raise an NonExistentTimeError if there are nonexistent times. Returns DatetimeIndex, TimedeltaIndex, or Series Index of the same type for a DatetimeIndex or TimedeltaIndex, or a Series with the same index for a Series. Raises ValueError if the freq cannot be converted. Notes If the timestamps have a timezone, rounding will take place relative to the local (“wall”) time and re-localized to the same timezone. When rounding near daylight savings time, use nonexistent and ambiguous to control the re-localization behavior. 
Examples DatetimeIndex >>> rng = pd.date_range('1/1/2018 11:59:00', periods=3, freq='min') >>> rng DatetimeIndex(['2018-01-01 11:59:00', '2018-01-01 12:00:00', '2018-01-01 12:01:00'], dtype='datetime64[ns]', freq='T') >>> rng.round('H') DatetimeIndex(['2018-01-01 12:00:00', '2018-01-01 12:00:00', '2018-01-01 12:00:00'], dtype='datetime64[ns]', freq=None) Series >>> pd.Series(rng).dt.round("H") 0 2018-01-01 12:00:00 1 2018-01-01 12:00:00 2 2018-01-01 12:00:00 dtype: datetime64[ns] When rounding near a daylight savings time transition, use ambiguous or nonexistent to control how the timestamp should be re-localized. >>> rng_tz = pd.DatetimeIndex(["2021-10-31 03:30:00"], tz="Europe/Amsterdam") >>> rng_tz.floor("2H", ambiguous=False) DatetimeIndex(['2021-10-31 02:00:00+01:00'], dtype='datetime64[ns, Europe/Amsterdam]', freq=None) >>> rng_tz.floor("2H", ambiguous=True) DatetimeIndex(['2021-10-31 02:00:00+02:00'], dtype='datetime64[ns, Europe/Amsterdam]', freq=None)
pandas.DatetimeIndex.second propertyDatetimeIndex.second The seconds of the datetime. Examples >>> datetime_series = pd.Series( ... pd.date_range("2000-01-01", periods=3, freq="s") ... ) >>> datetime_series 0 2000-01-01 00:00:00 1 2000-01-01 00:00:01 2 2000-01-01 00:00:02 dtype: datetime64[ns] >>> datetime_series.dt.second 0 0 1 1 2 2 dtype: int64
pandas.DatetimeIndex.snap DatetimeIndex.snap(freq='S')[source] Snap time stamps to nearest occurring frequency. Returns DatetimeIndex
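For example (an illustrative sketch using a month-start frequency; timestamps already on the frequency are kept, the rest are moved to the nearest occurrence):

```python
import pandas as pd

idx = pd.DatetimeIndex(["2023-01-01", "2023-01-02", "2023-02-01"])
# Snap each timestamp to the nearest month start:
# 2023-01-02 moves back to 2023-01-01; the others already lie on 'MS'.
snapped = idx.snap(freq="MS")
print(snapped)
```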
pandas.DatetimeIndex.std DatetimeIndex.std(*args, **kwargs)[source] Return sample standard deviation over requested axis. Normalized by N-1 by default. This can be changed using the ddof argument. Parameters axis:int, optional, default None Axis for the function to be applied on. ddof:int, default 1 Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements. skipna:bool, default True Exclude NA/null values. If an entire row/column is NA, the result will be NA. Returns Timedelta
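For example (an illustrative sketch, not part of the upstream docstring):

```python
import pandas as pd

# Three consecutive days: deviations of -1, 0 and +1 day from the mean
# give a sample standard deviation (ddof=1) of exactly one day.
idx = pd.to_datetime(["2000-01-01", "2000-01-02", "2000-01-03"])
print(idx.std())  # Timedelta('1 days 00:00:00')
```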
pandas.DatetimeIndex.strftime DatetimeIndex.strftime(*args, **kwargs)[source] Convert to Index using specified date_format. Return an Index of formatted strings specified by date_format, which supports the same string format as the python standard library. Details of the string format can be found in python string format doc. Parameters date_format:str Date format string (e.g. “%Y-%m-%d”). Returns ndarray[object] NumPy ndarray of formatted strings. See also to_datetime Convert the given argument to datetime. DatetimeIndex.normalize Return DatetimeIndex with times to midnight. DatetimeIndex.round Round the DatetimeIndex to the specified freq. DatetimeIndex.floor Floor the DatetimeIndex to the specified freq. Examples >>> rng = pd.date_range(pd.Timestamp("2018-03-10 09:00"), ... periods=3, freq='s') >>> rng.strftime('%B %d, %Y, %r') Index(['March 10, 2018, 09:00:00 AM', 'March 10, 2018, 09:00:01 AM', 'March 10, 2018, 09:00:02 AM'], dtype='object')
pandas.DatetimeIndex.time propertyDatetimeIndex.time Returns numpy array of datetime.time objects. The time part of the Timestamps.
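For example (an illustrative sketch, not part of the upstream docstring):

```python
import datetime

import pandas as pd

idx = pd.DatetimeIndex(["2023-01-01 09:30:00", "2023-01-02 16:00:00"])
# Only the time-of-day component survives; the dates are discarded.
print(idx.time)  # array of datetime.time objects
```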
pandas.DatetimeIndex.timetz propertyDatetimeIndex.timetz Returns numpy array of datetime.time objects with timezone information. The time part of the Timestamps.
pandas.DatetimeIndex.to_frame DatetimeIndex.to_frame(index=True, name=NoDefault.no_default)[source] Create a DataFrame with a column containing the Index. Parameters index:bool, default True Set the index of the returned DataFrame as the original Index. name:object, default None The passed name should substitute for the index name (if it has one). Returns DataFrame DataFrame containing the original Index data. See also Index.to_series Convert an Index to a Series. Series.to_frame Convert Series to DataFrame. Examples >>> idx = pd.Index(['Ant', 'Bear', 'Cow'], name='animal') >>> idx.to_frame() animal animal Ant Ant Bear Bear Cow Cow By default, the original Index is reused. To enforce a new Index: >>> idx.to_frame(index=False) animal 0 Ant 1 Bear 2 Cow To override the name of the resulting column, specify name: >>> idx.to_frame(index=False, name='zoo') zoo 0 Ant 1 Bear 2 Cow
pandas.DatetimeIndex.to_period DatetimeIndex.to_period(*args, **kwargs)[source] Cast to PeriodArray/Index at a particular frequency. Converts DatetimeArray/Index to PeriodArray/Index. Parameters freq:str or Offset, optional One of pandas’ offset strings or an Offset object. Will be inferred by default. Returns PeriodArray/Index Raises ValueError When converting a DatetimeArray/Index with non-regular values, so that a frequency cannot be inferred. See also PeriodIndex Immutable ndarray holding ordinal values. DatetimeIndex.to_pydatetime Return DatetimeIndex as object. Examples >>> df = pd.DataFrame({"y": [1, 2, 3]}, ... index=pd.to_datetime(["2000-03-31 00:00:00", ... "2000-05-31 00:00:00", ... "2000-08-31 00:00:00"])) >>> df.index.to_period("M") PeriodIndex(['2000-03', '2000-05', '2000-08'], dtype='period[M]') Infer the daily frequency >>> idx = pd.date_range("2017-01-01", periods=2) >>> idx.to_period() PeriodIndex(['2017-01-01', '2017-01-02'], dtype='period[D]')
pandas.DatetimeIndex.to_perioddelta DatetimeIndex.to_perioddelta(freq)[source] Calculate TimedeltaArray of difference between index values and index converted to PeriodArray at specified freq. Used for vectorized offsets. Parameters freq:Period frequency Returns TimedeltaArray/Index
pandas.DatetimeIndex.to_pydatetime DatetimeIndex.to_pydatetime(*args, **kwargs)[source] Return Datetime Array/Index as object ndarray of datetime.datetime objects. Returns datetimes:ndarray[object]
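For example (an illustrative sketch, not part of the upstream docstring):

```python
from datetime import datetime

import pandas as pd

idx = pd.date_range("2023-01-01", periods=2, freq="D")
pydts = idx.to_pydatetime()

# The result holds plain datetime.datetime objects, not pandas Timestamps.
print(type(pydts[0]))
```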
pandas.DatetimeIndex.to_series DatetimeIndex.to_series(keep_tz=NoDefault.no_default, index=None, name=None)[source] Create a Series with both index and values equal to the index keys; useful with map for returning an indexer based on an index. Parameters keep_tz:optional, default True Return the data keeping the timezone. If keep_tz is True: If the timezone is not set, the resulting Series will have a datetime64[ns] dtype. Otherwise the Series will have a datetime64[ns, tz] dtype; the tz will be preserved. If keep_tz is False: Series will have a datetime64[ns] dtype. TZ aware objects will have the tz removed. Changed in version 1.0.0: The default value is now True. In a future version, this keyword will be removed entirely. Stop passing the argument to obtain the future behavior and silence the warning. index:Index, optional Index of resulting Series. If None, defaults to original index. name:str, optional Name of resulting Series. If None, defaults to name of original index. Returns Series
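For example (an illustrative sketch, not part of the upstream docstring):

```python
import pandas as pd

idx = pd.date_range("2023-01-01", periods=2, freq="D", name="date")
s = idx.to_series()

# Both the index and the values repeat the original timestamps,
# and the Series inherits the index's name.
print(s)
```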
pandas.DatetimeIndex.tz propertyDatetimeIndex.tz Return the timezone. Returns datetime.tzinfo, pytz.tzinfo.BaseTZInfo, dateutil.tz.tz.tzfile, or None Returns None when the array is tz-naive.
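For example (an illustrative sketch, not part of the upstream docstring):

```python
import pandas as pd

naive = pd.date_range("2023-01-01", periods=2, freq="D")
print(naive.tz)  # None -- the index is tz-naive

# After localizing, tz holds the timezone object.
aware = naive.tz_localize("UTC")
print(aware.tz)
```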
pandas.DatetimeIndex.tz_convert DatetimeIndex.tz_convert(tz)[source] Convert tz-aware Datetime Array/Index from one time zone to another. Parameters tz:str, pytz.timezone, dateutil.tz.tzfile or None Time zone for time. Corresponding timestamps would be converted to this time zone of the Datetime Array/Index. A tz of None will convert to UTC and remove the timezone information. Returns Array or Index Raises TypeError If Datetime Array/Index is tz-naive. See also DatetimeIndex.tz A timezone that has a variable offset from UTC. DatetimeIndex.tz_localize Localize tz-naive DatetimeIndex to a given time zone, or remove timezone from a tz-aware DatetimeIndex. Examples With the tz parameter, we can change the DatetimeIndex to other time zones: >>> dti = pd.date_range(start='2014-08-01 09:00', ... freq='H', periods=3, tz='Europe/Berlin') >>> dti DatetimeIndex(['2014-08-01 09:00:00+02:00', '2014-08-01 10:00:00+02:00', '2014-08-01 11:00:00+02:00'], dtype='datetime64[ns, Europe/Berlin]', freq='H') >>> dti.tz_convert('US/Central') DatetimeIndex(['2014-08-01 02:00:00-05:00', '2014-08-01 03:00:00-05:00', '2014-08-01 04:00:00-05:00'], dtype='datetime64[ns, US/Central]', freq='H') With the tz=None, we can remove the timezone (after converting to UTC if necessary): >>> dti = pd.date_range(start='2014-08-01 09:00', freq='H', ... periods=3, tz='Europe/Berlin') >>> dti DatetimeIndex(['2014-08-01 09:00:00+02:00', '2014-08-01 10:00:00+02:00', '2014-08-01 11:00:00+02:00'], dtype='datetime64[ns, Europe/Berlin]', freq='H') >>> dti.tz_convert(None) DatetimeIndex(['2014-08-01 07:00:00', '2014-08-01 08:00:00', '2014-08-01 09:00:00'], dtype='datetime64[ns]', freq='H')
pandas.DatetimeIndex.tz_localize DatetimeIndex.tz_localize(tz, ambiguous='raise', nonexistent='raise')[source] Localize tz-naive Datetime Array/Index to tz-aware Datetime Array/Index. This method takes a time zone (tz) naive Datetime Array/Index object and makes this time zone aware. It does not move the time to another time zone. This method can also be used to do the inverse – to create a time zone unaware object from an aware object. To that end, pass tz=None. Parameters tz:str, pytz.timezone, dateutil.tz.tzfile or None Time zone to convert timestamps to. Passing None will remove the time zone information preserving local time. ambiguous:‘infer’, ‘NaT’, bool array, default ‘raise’ When clocks moved backward due to DST, ambiguous times may arise. For example in Central European Time (UTC+01), when going from 03:00 DST to 02:00 non-DST, 02:30:00 local time occurs both at 00:30:00 UTC and at 01:30:00 UTC. In such a situation, the ambiguous parameter dictates how ambiguous times should be handled. ‘infer’ will attempt to infer fall dst-transition hours based on order bool-ndarray where True signifies a DST time, False signifies a non-DST time (note that this flag is only applicable for ambiguous times) ‘NaT’ will return NaT where there are ambiguous times ‘raise’ will raise an AmbiguousTimeError if there are ambiguous times. nonexistent:‘shift_forward’, ‘shift_backward, ‘NaT’, timedelta, default ‘raise’ A nonexistent time does not exist in a particular timezone where clocks moved forward due to DST. ‘shift_forward’ will shift the nonexistent time forward to the closest existing time ‘shift_backward’ will shift the nonexistent time backward to the closest existing time ‘NaT’ will return NaT where there are nonexistent times timedelta objects will shift nonexistent times by the timedelta ‘raise’ will raise an NonExistentTimeError if there are nonexistent times. Returns Same type as self Array/Index converted to the specified time zone. 
Raises TypeError If the Datetime Array/Index is tz-aware and tz is not None. See also DatetimeIndex.tz_convert Convert tz-aware DatetimeIndex from one time zone to another. Examples >>> tz_naive = pd.date_range('2018-03-01 09:00', periods=3) >>> tz_naive DatetimeIndex(['2018-03-01 09:00:00', '2018-03-02 09:00:00', '2018-03-03 09:00:00'], dtype='datetime64[ns]', freq='D') Localize DatetimeIndex in US/Eastern time zone: >>> tz_aware = tz_naive.tz_localize(tz='US/Eastern') >>> tz_aware DatetimeIndex(['2018-03-01 09:00:00-05:00', '2018-03-02 09:00:00-05:00', '2018-03-03 09:00:00-05:00'], dtype='datetime64[ns, US/Eastern]', freq=None) With the tz=None, we can remove the time zone information while keeping the local time (not converted to UTC): >>> tz_aware.tz_localize(None) DatetimeIndex(['2018-03-01 09:00:00', '2018-03-02 09:00:00', '2018-03-03 09:00:00'], dtype='datetime64[ns]', freq=None) Be careful with DST changes. When there is sequential data, pandas can infer the DST time: >>> s = pd.to_datetime(pd.Series(['2018-10-28 01:30:00', ... '2018-10-28 02:00:00', ... '2018-10-28 02:30:00', ... '2018-10-28 02:00:00', ... '2018-10-28 02:30:00', ... '2018-10-28 03:00:00', ... '2018-10-28 03:30:00'])) >>> s.dt.tz_localize('CET', ambiguous='infer') 0 2018-10-28 01:30:00+02:00 1 2018-10-28 02:00:00+02:00 2 2018-10-28 02:30:00+02:00 3 2018-10-28 02:00:00+01:00 4 2018-10-28 02:30:00+01:00 5 2018-10-28 03:00:00+01:00 6 2018-10-28 03:30:00+01:00 dtype: datetime64[ns, CET] In some cases, inferring the DST is impossible. In such cases, you can pass an ndarray to the ambiguous parameter to set the DST explicitly >>> s = pd.to_datetime(pd.Series(['2018-10-28 01:20:00', ... '2018-10-28 02:36:00', ... 
'2018-10-28 03:46:00'])) >>> s.dt.tz_localize('CET', ambiguous=np.array([True, True, False])) 0 2018-10-28 01:20:00+02:00 1 2018-10-28 02:36:00+02:00 2 2018-10-28 03:46:00+01:00 dtype: datetime64[ns, CET] If the DST transition causes nonexistent times, you can shift these dates forward or backward with a timedelta object or ‘shift_forward’ or ‘shift_backward’. >>> s = pd.to_datetime(pd.Series(['2015-03-29 02:30:00', ... '2015-03-29 03:30:00'])) >>> s.dt.tz_localize('Europe/Warsaw', nonexistent='shift_forward') 0 2015-03-29 03:00:00+02:00 1 2015-03-29 03:30:00+02:00 dtype: datetime64[ns, Europe/Warsaw] >>> s.dt.tz_localize('Europe/Warsaw', nonexistent='shift_backward') 0 2015-03-29 01:59:59.999999999+01:00 1 2015-03-29 03:30:00+02:00 dtype: datetime64[ns, Europe/Warsaw] >>> s.dt.tz_localize('Europe/Warsaw', nonexistent=pd.Timedelta('1H')) 0 2015-03-29 03:30:00+02:00 1 2015-03-29 03:30:00+02:00 dtype: datetime64[ns, Europe/Warsaw]
pandas.DatetimeIndex.week propertyDatetimeIndex.week The week ordinal of the year. Deprecated since version 1.1.0. weekofyear and week have been deprecated. Please use DatetimeIndex.isocalendar().week instead.
pandas.DatetimeIndex.weekday propertyDatetimeIndex.weekday The day of the week with Monday=0, Sunday=6. Return the day of the week. It is assumed the week starts on Monday, which is denoted by 0 and ends on Sunday which is denoted by 6. This method is available on both Series with datetime values (using the dt accessor) or DatetimeIndex. Returns Series or Index Containing integers indicating the day number. See also Series.dt.dayofweek Alias. Series.dt.weekday Alias. Series.dt.day_name Returns the name of the day of the week. Examples >>> s = pd.date_range('2016-12-31', '2017-01-08', freq='D').to_series() >>> s.dt.dayofweek 2016-12-31 5 2017-01-01 6 2017-01-02 0 2017-01-03 1 2017-01-04 2 2017-01-05 3 2017-01-06 4 2017-01-07 5 2017-01-08 6 Freq: D, dtype: int64
pandas.DatetimeIndex.weekofyear propertyDatetimeIndex.weekofyear The week ordinal of the year. Deprecated since version 1.1.0. weekofyear and week have been deprecated. Please use DatetimeIndex.isocalendar().week instead.
pandas.DatetimeIndex.year propertyDatetimeIndex.year The year of the datetime. Examples >>> datetime_series = pd.Series( ... pd.date_range("2000-01-01", periods=3, freq="Y") ... ) >>> datetime_series 0 2000-12-31 1 2001-12-31 2 2002-12-31 dtype: datetime64[ns] >>> datetime_series.dt.year 0 2000 1 2001 2 2002 dtype: int64
pandas.DatetimeTZDtype classpandas.DatetimeTZDtype(unit='ns', tz=None)[source] An ExtensionDtype for timezone-aware datetime data. This is not an actual numpy dtype, but a duck type. Parameters unit:str, default “ns” The precision of the datetime data. Currently limited to "ns". tz:str, int, or datetime.tzinfo The timezone. Raises pytz.UnknownTimeZoneError When the requested timezone cannot be found. Examples >>> pd.DatetimeTZDtype(tz='UTC') datetime64[ns, UTC] >>> pd.DatetimeTZDtype(tz='dateutil/US/Central') datetime64[ns, tzfile('/usr/share/zoneinfo/US/Central')] Attributes unit The precision of the datetime data. tz The timezone. Methods None
pandas.DatetimeTZDtype.tz propertyDatetimeTZDtype.tz The timezone.
pandas.DatetimeTZDtype.unit propertyDatetimeTZDtype.unit The precision of the datetime data.
pandas.describe_option pandas.describe_option(pat, _print_desc=False)=<pandas._config.config.CallableDynamicDoc object> Prints the description for one or more registered options. Call with no arguments to get a listing for all registered options. Available options: compute.[use_bottleneck, use_numba, use_numexpr] display.[chop_threshold, colheader_justify, column_space, date_dayfirst, date_yearfirst, encoding, expand_frame_repr, float_format] display.html.[border, table_schema, use_mathjax] display.[large_repr] display.latex.[escape, longtable, multicolumn, multicolumn_format, multirow, repr] display.[max_categories, max_columns, max_colwidth, max_dir_items, max_info_columns, max_info_rows, max_rows, max_seq_items, memory_usage, min_rows, multi_sparse, notebook_repr_html, pprint_nest_depth, precision, show_dimensions] display.unicode.[ambiguous_as_wide, east_asian_width] display.[width] io.excel.ods.[reader, writer] io.excel.xls.[reader, writer] io.excel.xlsb.[reader] io.excel.xlsm.[reader, writer] io.excel.xlsx.[reader, writer] io.hdf.[default_format, dropna_table] io.parquet.[engine] io.sql.[engine] mode.[chained_assignment, data_manager, sim_interactive, string_storage, use_inf_as_na, use_inf_as_null] plotting.[backend] plotting.matplotlib.[register_converters] styler.format.[decimal, escape, formatter, na_rep, precision, thousands] styler.html.[mathjax] styler.latex.[environment, hrules, multicol_align, multirow_align] styler.render.[encoding, max_columns, max_elements, max_rows, repr] styler.sparse.[columns, index] Parameters pat:str Regexp pattern. All matching keys will have their description displayed. _print_desc:bool, default True If True (default) the description(s) will be printed to stdout. Otherwise, the description(s) will be returned as a unicode string (for testing). 
Returns None by default, the description(s) as a unicode string if _print_desc is False Notes The available options with their descriptions: compute.use_bottleneck:bool Use the bottleneck library to accelerate if it is installed, the default is True Valid values: False,True [default: True] [currently: True] compute.use_numba:bool Use the numba engine option for select operations if it is installed, the default is False Valid values: False,True [default: False] [currently: False] compute.use_numexpr:bool Use the numexpr library to accelerate computation if it is installed, the default is True Valid values: False,True [default: True] [currently: True] display.chop_threshold:float or None if set to a float value, all float values smaller than the given threshold will be displayed as exactly 0 by repr and friends. [default: None] [currently: None] display.colheader_justify:‘left’/’right’ Controls the justification of column headers. Used by DataFrameFormatter. [default: right] [currently: right] display.column_space No description available. [default: 12] [currently: 12] display.date_dayfirst:boolean When True, prints and parses dates with the day first, eg 20/01/2005 [default: False] [currently: False] display.date_yearfirst:boolean When True, prints and parses dates with the year first, eg 2005/01/20 [default: False] [currently: False] display.encoding:str/unicode Defaults to the detected encoding of the console. Specifies the encoding to be used for strings returned by to_string, these are generally strings meant to be displayed on the console. [default: utf-8] [currently: utf-8] display.expand_frame_repr:boolean Whether to print out the full DataFrame repr for wide DataFrames across multiple lines, max_columns is still respected, but the output will wrap-around across multiple “pages” if its width exceeds display.width.
[default: True] [currently: True] display.float_format:callable The callable should accept a floating point number and return a string with the desired format of the number. This is used in some places like SeriesFormatter. See formats.format.EngFormatter for an example. [default: None] [currently: None] display.html.border:int A border=value attribute is inserted in the <table> tag for the DataFrame HTML repr. [default: 1] [currently: 1] display.html.table_schema:boolean Whether to publish a Table Schema representation for frontends that support it. (default: False) [default: False] [currently: False] display.html.use_mathjax:boolean When True, Jupyter notebook will process table contents using MathJax, rendering mathematical expressions enclosed by the dollar symbol. (default: True) [default: True] [currently: True] display.large_repr:‘truncate’/’info’ For DataFrames exceeding max_rows/max_cols, the repr (and HTML repr) can show a truncated table (the default from 0.13), or switch to the view from df.info() (the behaviour in earlier versions of pandas). [default: truncate] [currently: truncate] display.latex.escape:bool This specifies if the to_latex method of a Dataframe escapes special characters. Valid values: False,True [default: True] [currently: True] display.latex.longtable:bool This specifies if the to_latex method of a Dataframe uses the longtable format. Valid values: False,True [default: False] [currently: False] display.latex.multicolumn:bool This specifies if the to_latex method of a Dataframe uses multicolumns to pretty-print MultiIndex columns. Valid values: False,True [default: True] [currently: True] display.latex.multicolumn_format:bool This specifies if the to_latex method of a Dataframe uses multicolumns to pretty-print MultiIndex columns. Valid values: False,True [default: l] [currently: l] display.latex.multirow:bool This specifies if the to_latex method of a Dataframe uses multirows to pretty-print MultiIndex rows.
Valid values: False,True [default: False] [currently: False] display.latex.repr:boolean Whether to produce a latex DataFrame representation for jupyter environments that support it. (default: False) [default: False] [currently: False] display.max_categories:int This sets the maximum number of categories pandas should output when printing out a Categorical or a Series of dtype “category”. [default: 8] [currently: 8] display.max_columns:int If max_cols is exceeded, switch to truncate view. Depending on large_repr, objects are either centrally truncated or printed as a summary view. ‘None’ value means unlimited. In case python/IPython is running in a terminal and large_repr equals ‘truncate’ this can be set to 0 and pandas will auto-detect the width of the terminal and print a truncated object which fits the screen width. The IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible to do correct auto-detection. [default: 0] [currently: 0] display.max_colwidth:int or None The maximum width in characters of a column in the repr of a pandas data structure. When the column overflows, a “…” placeholder is embedded in the output. A ‘None’ value means unlimited. [default: 50] [currently: 50] display.max_dir_items:int The number of items that will be added to dir(…). ‘None’ value means unlimited. Because dir is cached, changing this option will not immediately affect already existing dataframes until a column is deleted or added. This is for instance used to suggest columns from a dataframe to tab completion. [default: 100] [currently: 100] display.max_info_columns:int max_info_columns is used in DataFrame.info method to decide if per column information will be printed. [default: 100] [currently: 100] display.max_info_rows:int or None df.info() will usually show null-counts for each column. For large frames this can be quite slow. 
max_info_rows and max_info_cols limit this null check only to frames with smaller dimensions than specified. [default: 1690785] [currently: 1690785] display.max_rows:int If max_rows is exceeded, switch to truncate view. Depending on large_repr, objects are either centrally truncated or printed as a summary view. ‘None’ value means unlimited. In case python/IPython is running in a terminal and large_repr equals ‘truncate’ this can be set to 0 and pandas will auto-detect the height of the terminal and print a truncated object which fits the screen height. The IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible to do correct auto-detection. [default: 60] [currently: 60] display.max_seq_items:int or None When pretty-printing a long sequence, no more than max_seq_items will be printed. If items are omitted, they will be denoted by the addition of “…” to the resulting string. If set to None, the number of items to be printed is unlimited. [default: 100] [currently: 100] display.memory_usage:bool, string or None This specifies if the memory usage of a DataFrame should be displayed when df.info() is called. Valid values True,False,’deep’ [default: True] [currently: True] display.min_rows:int The number of rows to show in a truncated view (when max_rows is exceeded). Ignored when max_rows is set to None or 0. When set to None, follows the value of max_rows. [default: 10] [currently: 10] display.multi_sparse:boolean “sparsify” MultiIndex display (don’t display repeated elements in outer levels within groups) [default: True] [currently: True] display.notebook_repr_html:boolean When True, IPython notebook will use html representation for pandas objects (if it is available).
[default: True] [currently: True] display.pprint_nest_depth:int Controls the number of nested levels to process when pretty-printing [default: 3] [currently: 3] display.precision:int Floating point output precision in terms of number of places after the decimal, for regular formatting as well as scientific notation. Similar to precision in numpy.set_printoptions(). [default: 6] [currently: 6] display.show_dimensions:boolean or ‘truncate’ Whether to print out dimensions at the end of DataFrame repr. If ‘truncate’ is specified, only print out the dimensions if the frame is truncated (e.g. not display all rows and/or columns) [default: truncate] [currently: truncate] display.unicode.ambiguous_as_wide:boolean Whether to use the Unicode East Asian Width to calculate the display text width. Enabling this may affect the performance (default: False) [default: False] [currently: False] display.unicode.east_asian_width:boolean Whether to use the Unicode East Asian Width to calculate the display text width. Enabling this may affect the performance (default: False) [default: False] [currently: False] display.width:int Width of the display in characters. In case python/IPython is running in a terminal this can be set to None and pandas will correctly auto-detect the width. Note that the IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible to correctly detect the width. [default: 80] [currently: 80] io.excel.ods.reader:string The default Excel reader engine for ‘ods’ files. Available options: auto, odf. [default: auto] [currently: auto] io.excel.ods.writer:string The default Excel writer engine for ‘ods’ files. Available options: auto, odf. [default: auto] [currently: auto] io.excel.xls.reader:string The default Excel reader engine for ‘xls’ files. Available options: auto, xlrd. [default: auto] [currently: auto] io.excel.xls.writer:string The default Excel writer engine for ‘xls’ files. Available options: auto, xlwt.
[default: auto] [currently: auto] (Deprecated, use `` instead.) io.excel.xlsb.reader:string The default Excel reader engine for ‘xlsb’ files. Available options: auto, pyxlsb. [default: auto] [currently: auto] io.excel.xlsm.reader:string The default Excel reader engine for ‘xlsm’ files. Available options: auto, xlrd, openpyxl. [default: auto] [currently: auto] io.excel.xlsm.writer:string The default Excel writer engine for ‘xlsm’ files. Available options: auto, openpyxl. [default: auto] [currently: auto] io.excel.xlsx.reader:string The default Excel reader engine for ‘xlsx’ files. Available options: auto, xlrd, openpyxl. [default: auto] [currently: auto] io.excel.xlsx.writer:string The default Excel writer engine for ‘xlsx’ files. Available options: auto, openpyxl, xlsxwriter. [default: auto] [currently: auto] io.hdf.default_format:format default format writing format, if None, then put will default to ‘fixed’ and append will default to ‘table’ [default: None] [currently: None] io.hdf.dropna_table:boolean drop ALL nan rows when appending to a table [default: False] [currently: False] io.parquet.engine:string The default parquet reader/writer engine. Available options: ‘auto’, ‘pyarrow’, ‘fastparquet’, the default is ‘auto’ [default: auto] [currently: auto] io.sql.engine:string The default sql reader/writer engine. Available options: ‘auto’, ‘sqlalchemy’, the default is ‘auto’ [default: auto] [currently: auto] mode.chained_assignment:string Raise an exception, warn, or no action if trying to use chained assignment, The default is warn [default: warn] [currently: warn] mode.data_manager:string Internal data manager type; can be “block” or “array”. Defaults to “block”, unless overridden by the ‘PANDAS_DATA_MANAGER’ environment variable (needs to be set before pandas is imported). 
[default: block] [currently: block] mode.sim_interactive:boolean Whether to simulate interactive mode for purposes of testing [default: False] [currently: False] mode.string_storage:string The default storage for StringDtype. [default: python] [currently: python] mode.use_inf_as_na:boolean True means treat None, NaN, INF, -INF as NA (old way), False means None and NaN are null, but INF, -INF are not NA (new way). [default: False] [currently: False] mode.use_inf_as_null:boolean use_inf_as_null had been deprecated and will be removed in a future version. Use use_inf_as_na instead. [default: False] [currently: False] (Deprecated, use mode.use_inf_as_na instead.) plotting.backend:str The plotting backend to use. The default value is “matplotlib”, the backend provided with pandas. Other backends can be specified by providing the name of the module that implements the backend. [default: matplotlib] [currently: matplotlib] plotting.matplotlib.register_converters:bool or ‘auto’. Whether to register converters with matplotlib’s units registry for dates, times, datetimes, and Periods. Toggling to False will remove the converters, restoring any converters that pandas overwrote. [default: auto] [currently: auto] styler.format.decimal:str The character representation for the decimal separator for floats and complex. [default: .] [currently: .] styler.format.escape:str, optional Whether to escape certain characters according to the given context; html or latex. [default: None] [currently: None] styler.format.formatter:str, callable, dict, optional A formatter object to be used as default within Styler.format. [default: None] [currently: None] styler.format.na_rep:str, optional The string representation for values identified as missing. [default: None] [currently: None] styler.format.precision:int The precision for floats and complex numbers. 
[default: 6] [currently: 6] styler.format.thousands:str, optional The character representation for thousands separator for floats, int and complex. [default: None] [currently: None] styler.html.mathjax:bool If False will render special CSS classes to table attributes that indicate Mathjax will not be used in Jupyter Notebook. [default: True] [currently: True] styler.latex.environment:str The environment to replace \begin{table}. If “longtable” is used results in a specific longtable environment format. [default: None] [currently: None] styler.latex.hrules:bool Whether to add horizontal rules on top and bottom and below the headers. [default: False] [currently: False] styler.latex.multicol_align:{“r”, “c”, “l”, “naive-l”, “naive-r”} The specifier for horizontal alignment of sparsified LaTeX multicolumns. Pipe decorators can also be added to non-naive values to draw vertical rules, e.g. “|r” will draw a rule on the left side of right aligned merged cells. [default: r] [currently: r] styler.latex.multirow_align:{“c”, “t”, “b”} The specifier for vertical alignment of sparsified LaTeX multirows. [default: c] [currently: c] styler.render.encoding:str The encoding used for output HTML and LaTeX files. [default: utf-8] [currently: utf-8] styler.render.max_columns:int, optional The maximum number of columns that will be rendered. May still be reduced to satisfy max_elements, which takes precedence. [default: None] [currently: None] styler.render.max_elements:int The maximum number of data-cell (<td>) elements that will be rendered before trimming will occur over columns, rows or both if needed. [default: 262144] [currently: 262144] styler.render.max_rows:int, optional The maximum number of rows that will be rendered. May still be reduced to satisfy max_elements, which takes precedence. [default: None] [currently: None] styler.render.repr:str Determine which output to use in Jupyter Notebook in {“html”, “latex”}.
[default: html] [currently: html] styler.sparse.columns:bool Whether to sparsify the display of hierarchical columns. Setting to False will display each explicit level element in a hierarchical key for each column. [default: True] [currently: True] styler.sparse.index:bool Whether to sparsify the display of a hierarchical index. Setting to False will display each explicit level element in a hierarchical key for each row. [default: True] [currently: True]
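A brief sketch (not part of the original reference text) of the _print_desc parameter described above:

```python
import pandas as pd

# With _print_desc=False the description is returned as a string rather
# than printed to stdout, which makes it easy to inspect programmatically.
desc = pd.describe_option("display.max_rows", _print_desc=False)
```

The returned string contains the same default/current values shown in the listing above.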
pandas.errors.AbstractMethodError exceptionpandas.errors.AbstractMethodError(class_instance, methodtype='method')[source] Raise this error instead of NotImplementedError for abstract methods while keeping compatibility with Python 2 and Python 3.
pandas.errors.AccessorRegistrationWarning exceptionpandas.errors.AccessorRegistrationWarning[source] Warning for attribute conflicts in accessor registration.
pandas.errors.DtypeWarning exceptionpandas.errors.DtypeWarning[source] Warning raised when reading different dtypes in a column from a file. Raised for a dtype incompatibility. This can happen whenever read_csv or read_table encounter non-uniform dtypes in a column(s) of a given CSV file. See also read_csv Read CSV (comma-separated) file into a DataFrame. read_table Read general delimited file into a DataFrame. Notes This warning is issued when dealing with larger files because the dtype checking happens per chunk read. Despite the warning, the CSV file is read with mixed types in a single column which will be an object type. See the examples below to better understand this issue. Examples This example creates and reads a large CSV file with a column that contains int and str. >>> df = pd.DataFrame({'a': (['1'] * 100000 + ['X'] * 100000 + ... ['1'] * 100000), ... 'b': ['b'] * 300000}) >>> df.to_csv('test.csv', index=False) >>> df2 = pd.read_csv('test.csv') ... # DtypeWarning: Columns (0) have mixed types Important to notice that df2 will contain both str and int for the same input, ‘1’. >>> df2.iloc[262140, 0] '1' >>> type(df2.iloc[262140, 0]) <class 'str'> >>> df2.iloc[262150, 0] 1 >>> type(df2.iloc[262150, 0]) <class 'int'> One way to solve this issue is using the dtype parameter in the read_csv and read_table functions to explicit the conversion: >>> df2 = pd.read_csv('test.csv', sep=',', dtype={'a': str}) No warning was issued.
pandas.errors.DuplicateLabelError exceptionpandas.errors.DuplicateLabelError[source] Error raised when an operation would introduce duplicate labels. New in version 1.2.0. Examples >>> s = pd.Series([0, 1, 2], index=['a', 'b', 'c']).set_flags( ... allows_duplicate_labels=False ... ) >>> s.reindex(['a', 'a', 'b']) Traceback (most recent call last): ... DuplicateLabelError: Index has duplicates. positions label a [0, 1]
pandas.errors.EmptyDataError exceptionpandas.errors.EmptyDataError[source] Exception that is thrown in pd.read_csv (by both the C and Python engines) when empty data or header is encountered.
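A minimal sketch (not from the original docs) of triggering this exception:

```python
import io
import pandas as pd

# An input with no data and no header raises EmptyDataError.
try:
    pd.read_csv(io.StringIO(""))
    raised = False
except pd.errors.EmptyDataError:
    raised = True
```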
pandas.errors.IntCastingNaNError exceptionpandas.errors.IntCastingNaNError[source] Raised when attempting an astype operation on an array with NaN to an integer dtype.
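An illustrative sketch (assuming pandas >= 1.3, where this error class was introduced):

```python
import numpy as np
import pandas as pd

# NaN has no integer representation, so the cast is refused.
try:
    pd.Series([1.0, np.nan]).astype("int64")
    raised = False
except pd.errors.IntCastingNaNError:
    raised = True
```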
pandas.errors.InvalidIndexError exceptionpandas.errors.InvalidIndexError[source] Exception raised when attempting to use an invalid index key. New in version 1.1.0.
pandas.errors.MergeError exceptionpandas.errors.MergeError[source] Error raised when problems arise during merging due to problems with input data. Subclass of ValueError.
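A short sketch (not from the original docs) of the kind of invalid merge input that raises this error:

```python
import pandas as pd

left = pd.DataFrame({"key": [1, 2], "x": [10, 20]})
right = pd.DataFrame({"key": [1, 2], "y": [30, 40]})

# Passing left_on without a matching right_on/right_index is invalid input.
try:
    pd.merge(left, right, left_on="key")
    raised = False
except pd.errors.MergeError:
    raised = True
```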
pandas.errors.NullFrequencyError exceptionpandas.errors.NullFrequencyError[source] Error raised when a null freq attribute is used in an operation that needs a non-null frequency, particularly DatetimeIndex.shift, TimedeltaIndex.shift, PeriodIndex.shift.
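A small sketch (not from the original docs) of the DatetimeIndex.shift case mentioned above:

```python
import pandas as pd

# An index built from irregular timestamps has freq=None, so shift()
# cannot know how far to move each element.
idx = pd.DatetimeIndex(["2021-01-01", "2021-01-04"])
try:
    idx.shift(1)
    raised = False
except pd.errors.NullFrequencyError:
    raised = True
```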
pandas.errors.NumbaUtilError exceptionpandas.errors.NumbaUtilError[source] Error raised for unsupported Numba engine routines.
pandas.errors.OptionError exceptionpandas.errors.OptionError[source] Exception for pandas.options, backwards compatible with KeyError checks.
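A brief sketch (not from the original docs; the option key here is deliberately unregistered) showing both the error and its KeyError compatibility:

```python
import pandas as pd

# Requesting an option that was never registered raises OptionError;
# because OptionError also subclasses KeyError, code that catches
# KeyError keeps working.
try:
    pd.get_option("no.such.option")  # hypothetical, unregistered key
    raised = False
except pd.errors.OptionError as err:
    raised = True
    is_key_error = isinstance(err, KeyError)
```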
pandas.errors.OutOfBoundsDatetime exception pandas.errors.OutOfBoundsDatetime Raised when encountering a datetime value that cannot be represented as a datetime64[ns].
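An illustrative sketch (not from the original docs; behavior assumes the default nanosecond resolution of to_datetime):

```python
import pandas as pd

# datetime64[ns] spans roughly the years 1677-2262; dates outside
# that range overflow the nanosecond representation.
try:
    pd.to_datetime("3000-01-01")
    raised = False
except pd.errors.OutOfBoundsDatetime:
    raised = True
```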
pandas.errors.OutOfBoundsTimedelta exceptionpandas.errors.OutOfBoundsTimedelta Raised when encountering a timedelta value that cannot be represented as a timedelta64[ns].
pandas.errors.ParserError exceptionpandas.errors.ParserError[source] Exception that is raised by an error encountered in parsing file contents. This is a generic error raised for errors encountered when functions like read_csv or read_html are parsing contents of a file. See also read_csv Read CSV (comma-separated) file into a DataFrame. read_html Read HTML table into a DataFrame.
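A minimal sketch (not from the original docs) of a malformed CSV that the parser rejects:

```python
import io
import pandas as pd

# The third line has three fields while the header declares only two.
bad_csv = "a,b\n1,2\n3,4,5\n"
try:
    pd.read_csv(io.StringIO(bad_csv))
    raised = False
except pd.errors.ParserError:
    raised = True
```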
pandas.errors.ParserWarning exceptionpandas.errors.ParserWarning[source] Warning raised when reading a file that doesn’t use the default ‘c’ parser. Raised by pd.read_csv and pd.read_table when it is necessary to change parsers, generally from the default ‘c’ parser to ‘python’. It happens due to a lack of support or functionality for parsing a particular attribute of a CSV file with the requested engine. Currently, ‘c’ unsupported options include the following parameters: sep other than a single character (e.g. regex separators) skipfooter higher than 0 sep=None with delim_whitespace=False The warning can be avoided by adding engine=’python’ as a parameter in pd.read_csv and pd.read_table methods. See also pd.read_csv Read CSV (comma-separated) file into DataFrame. pd.read_table Read general delimited file into DataFrame. Examples Using a sep in pd.read_csv other than a single character: >>> import io >>> csv = '''a;b;c ... 1;1,8 ... 1;2,1''' >>> df = pd.read_csv(io.StringIO(csv), sep='[;,]') ... # ParserWarning: Falling back to the 'python' engine... Adding engine=’python’ to pd.read_csv removes the Warning: >>> df = pd.read_csv(io.StringIO(csv), sep='[;,]', engine='python')
pandas.errors.PerformanceWarning exceptionpandas.errors.PerformanceWarning[source] Warning raised when there is a possible performance impact.
pandas.errors.UnsortedIndexError exceptionpandas.errors.UnsortedIndexError[source] Error raised when attempting to get a slice of a MultiIndex, and the index has not been lexsorted. Subclass of KeyError.
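A short sketch (not from the original docs) of the unsorted-MultiIndex slicing case:

```python
import pandas as pd

# A MultiIndex whose first level is out of order has lexsort depth 0,
# so label-based slicing on that level cannot be performed.
mi = pd.MultiIndex.from_tuples([("b", 1), ("a", 2)])
s = pd.Series([10, 20], index=mi)
try:
    s.loc["a":"b"]
    raised = False
except pd.errors.UnsortedIndexError:
    raised = True
```

Calling s.sort_index() first restores a lexsorted index and makes the slice legal.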
pandas.errors.UnsupportedFunctionCall exceptionpandas.errors.UnsupportedFunctionCall[source] Exception raised when attempting to call a numpy function on a pandas object, but that function is not supported by the object e.g. np.cumsum(groupby_object).
pandas.eval pandas.eval(expr, parser='pandas', engine=None, truediv=NoDefault.no_default, local_dict=None, global_dict=None, resolvers=(), level=0, target=None, inplace=False)[source] Evaluate a Python expression as a string using various backends. The following arithmetic operations are supported: +, -, *, /, **, %, // (python engine only) along with the following boolean operations: | (or), & (and), and ~ (not). Additionally, the 'pandas' parser allows the use of and, or, and not with the same semantics as the corresponding bitwise operators. Series and DataFrame objects are supported and behave as they would with plain ol’ Python evaluation. Parameters expr:str The expression to evaluate. This string cannot contain any Python statements, only Python expressions. parser:{‘pandas’, ‘python’}, default ‘pandas’ The parser to use to construct the syntax tree from the expression. The default of 'pandas' parses code slightly different than standard Python. Alternatively, you can parse an expression using the 'python' parser to retain strict Python semantics. See the enhancing performance documentation for more details. engine:{‘python’, ‘numexpr’}, default ‘numexpr’ The engine used to evaluate the expression. Supported engines are None : tries to use numexpr, falls back to python 'numexpr': This default engine evaluates pandas objects using numexpr for large speed ups in complex expressions with large frames. 'python': Performs operations as if you had eval’d in top level python. This engine is generally not that useful. More backends may be available in the future. truediv:bool, optional Whether to use true division, like in Python >= 3. Deprecated since version 1.0.0. local_dict:dict or None, optional A dictionary of local variables, taken from locals() by default. global_dict:dict or None, optional A dictionary of global variables, taken from globals() by default. 
resolvers:list of dict-like or None, optional A list of objects implementing the __getitem__ special method that you can use to inject an additional collection of namespaces to use for variable lookup. For example, this is used in the query() method to inject the DataFrame.index and DataFrame.columns variables that refer to their respective DataFrame instance attributes. level:int, optional The number of prior stack frames to traverse and add to the current scope. Most users will not need to change this parameter. target:object, optional, default None This is the target object for assignment. It is used when there is variable assignment in the expression. If so, then target must support item assignment with string keys, and if a copy is being returned, it must also support .copy(). inplace:bool, default False If target is provided, and the expression mutates target, whether to modify target inplace. Otherwise, return a copy of target with the mutation. Returns ndarray, numeric scalar, DataFrame, Series, or None The completion value of evaluating the given code or None if inplace=True. Raises ValueError There are many instances where such an error can be raised: target=None, but the expression is multiline. The expression is multiline, but not all of them have item assignment. An example of such an arrangement is this: a = b + 1 a + 2 Here, there are expressions on different lines, making it multiline, but the last line has no variable assigned to the output of a + 2. inplace=True, but the expression is missing item assignment. Item assignment is provided, but the target does not support string item assignment. Item assignment is provided and inplace=False, but the target does not support the .copy() method. See also DataFrame.query Evaluates a boolean expression to query the columns of a frame. DataFrame.eval Evaluate a string describing operations on DataFrame columns.
Notes The dtype of any objects involved in an arithmetic % operation are recursively cast to float64. See the enhancing performance documentation for more details. Examples >>> df = pd.DataFrame({"animal": ["dog", "pig"], "age": [10, 20]}) >>> df animal age 0 dog 10 1 pig 20 We can add a new column using pd.eval: >>> pd.eval("double_age = df.age * 2", target=df) animal age double_age 0 dog 10 20 1 pig 20 40
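The 'pandas' parser's handling of and/or can be sketched as follows (an illustrative example, not from the original docs; the 'python' engine is chosen so the snippet does not depend on numexpr being installed):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})

# With the default 'pandas' parser, `and` is accepted and treated as the
# element-wise `&`, which plain Python evaluation would not allow here.
result = pd.eval("df.a > 1 and df.b < 6", parser="pandas", engine="python")
```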
pandas.ExcelFile.parse ExcelFile.parse(sheet_name=0, header=0, names=None, index_col=None, usecols=None, squeeze=None, converters=None, true_values=None, false_values=None, skiprows=None, nrows=None, na_values=None, parse_dates=False, date_parser=None, thousands=None, comment=None, skipfooter=0, convert_float=None, mangle_dupe_cols=True, **kwds)[source] Parse specified sheet(s) into a DataFrame. Equivalent to read_excel(ExcelFile, …) See the read_excel docstring for more info on accepted parameters. Returns DataFrame or dict of DataFrames DataFrame from the passed in Excel file.
pandas.ExcelWriter class pandas.ExcelWriter(path, engine=None, date_format=None, datetime_format=None, mode='w', storage_options=None, if_sheet_exists=None, engine_kwargs=None, **kwargs)[source] Class for writing DataFrame objects into Excel sheets. Default is to use: xlwt for xls; xlsxwriter for xlsx if xlsxwriter is installed, otherwise openpyxl; odf for ods. See DataFrame.to_excel for typical usage. The writer should be used as a context manager. Otherwise, call close() to save and close any opened file handles. Parameters path:str or typing.BinaryIO Path to xls or xlsx or ods file. engine:str (optional) Engine to use for writing. If None, defaults to io.excel.<extension>.writer. NOTE: can only be passed as a keyword argument. Deprecated since version 1.2.0: As the xlwt package is no longer maintained, the xlwt engine will be removed in a future version of pandas. date_format:str, default None Format string for dates written into Excel files (e.g. ‘YYYY-MM-DD’). datetime_format:str, default None Format string for datetime objects written into Excel files. (e.g. ‘YYYY-MM-DD HH:MM:SS’). mode:{‘w’, ‘a’}, default ‘w’ File mode to use (write or append). Append does not work with fsspec URLs. storage_options:dict, optional Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc., if using a URL that will be parsed by fsspec, e.g., starting “s3://”, “gcs://”. New in version 1.2.0. if_sheet_exists:{‘error’, ‘new’, ‘replace’, ‘overlay’}, default ‘error’ How to behave when trying to write to a sheet that already exists (append mode only). error: raise a ValueError. new: Create a new sheet, with a name determined by the engine. replace: Delete the contents of the sheet before writing to it. overlay: Write contents to the existing sheet without removing the old contents. New in version 1.3.0. Changed in version 1.4.0: Added overlay option engine_kwargs:dict, optional Keyword arguments to be passed into the engine.
These will be passed to the following functions of the respective engines: xlsxwriter: xlsxwriter.Workbook(file, **engine_kwargs) openpyxl (write mode): openpyxl.Workbook(**engine_kwargs) openpyxl (append mode): openpyxl.load_workbook(file, **engine_kwargs) odswriter: odf.opendocument.OpenDocumentSpreadsheet(**engine_kwargs) New in version 1.3.0. **kwargs:dict, optional Keyword arguments to be passed into the engine. Deprecated since version 1.3.0: Use engine_kwargs instead. Notes None of the methods and properties are considered public. For compatibility with CSV writers, ExcelWriter serializes lists and dicts to strings before writing. Examples Default usage: >>> df = pd.DataFrame([["ABC", "XYZ"]], columns=["Foo", "Bar"]) >>> with pd.ExcelWriter("path_to_file.xlsx") as writer: ... df.to_excel(writer) To write to separate sheets in a single file: >>> df1 = pd.DataFrame([["AAA", "BBB"]], columns=["Spam", "Egg"]) >>> df2 = pd.DataFrame([["ABC", "XYZ"]], columns=["Foo", "Bar"]) >>> with pd.ExcelWriter("path_to_file.xlsx") as writer: ... df1.to_excel(writer, sheet_name="Sheet1") ... df2.to_excel(writer, sheet_name="Sheet2") You can set the date format or datetime format: >>> from datetime import date, datetime >>> df = pd.DataFrame( ... [ ... [date(2014, 1, 31), date(1999, 9, 24)], ... [datetime(1998, 5, 26, 23, 33, 4), datetime(2014, 2, 28, 13, 5, 13)], ... ], ... index=["Date", "Datetime"], ... columns=["X", "Y"], ... ) >>> with pd.ExcelWriter( ... "path_to_file.xlsx", ... date_format="YYYY-MM-DD", ... datetime_format="YYYY-MM-DD HH:MM:SS" ... ) as writer: ... df.to_excel(writer) You can also append to an existing Excel file: >>> with pd.ExcelWriter("path_to_file.xlsx", mode="a", engine="openpyxl") as writer: ... df.to_excel(writer, sheet_name="Sheet3") Here, the if_sheet_exists parameter can be set to replace a sheet if it already exists: >>> with ExcelWriter( ... "path_to_file.xlsx", ... mode="a", ... engine="openpyxl", ... if_sheet_exists="replace", ... 
) as writer: ... df.to_excel(writer, sheet_name="Sheet1") You can also write multiple DataFrames to a single sheet. Note that the if_sheet_exists parameter needs to be set to overlay: >>> with ExcelWriter("path_to_file.xlsx", ... mode="a", ... engine="openpyxl", ... if_sheet_exists="overlay", ... ) as writer: ... df1.to_excel(writer, sheet_name="Sheet1") ... df2.to_excel(writer, sheet_name="Sheet1", startcol=3) You can store the Excel file in RAM: >>> import io >>> df = pd.DataFrame([["ABC", "XYZ"]], columns=["Foo", "Bar"]) >>> buffer = io.BytesIO() >>> with pd.ExcelWriter(buffer) as writer: ... df.to_excel(writer) You can pack the Excel file into a zip archive: >>> import zipfile >>> df = pd.DataFrame([["ABC", "XYZ"]], columns=["Foo", "Bar"]) >>> with zipfile.ZipFile("path_to_file.zip", "w") as zf: ... with zf.open("filename.xlsx", "w") as buffer: ... with pd.ExcelWriter(buffer) as writer: ... df.to_excel(writer) You can specify additional arguments to the underlying engine: >>> with pd.ExcelWriter( ... "path_to_file.xlsx", ... engine="xlsxwriter", ... engine_kwargs={"options": {"nan_inf_to_errors": True}} ... ) as writer: ... df.to_excel(writer) In append mode, engine_kwargs are passed through to openpyxl’s load_workbook: >>> with pd.ExcelWriter( ... "path_to_file.xlsx", ... engine="openpyxl", ... mode="a", ... engine_kwargs={"keep_vba": True} ... ) as writer: ... df.to_excel(writer, sheet_name="Sheet2") Attributes None Methods None
pandas.factorize

pandas.factorize(values, sort=False, na_sentinel=-1, size_hint=None)[source]

Encode the object as an enumerated type or categorical variable.

This method is useful for obtaining a numeric representation of an array when all that matters is identifying distinct values. factorize is available as both a top-level function pandas.factorize(), and as a method Series.factorize() and Index.factorize().

Parameters
values:sequence
A 1-D sequence. Sequences that aren’t pandas objects are coerced to ndarrays before factorization.
sort:bool, default False
Sort uniques and shuffle codes to maintain the relationship.
na_sentinel:int or None, default -1
Value to mark “not found”. If None, will not drop the NaN from the uniques of the values.
Changed in version 1.1.2.
size_hint:int, optional
Hint to the hashtable sizer.

Returns
codes:ndarray
An integer ndarray that’s an indexer into uniques. uniques.take(codes) will have the same values as values.
uniques:ndarray, Index, or Categorical
The unique valid values. When values is Categorical, uniques is a Categorical. When values is some other pandas object, an Index is returned. Otherwise, a 1-D ndarray is returned.

Note
Even if there’s a missing value in values, uniques will not contain an entry for it.

See also
cut
Discretize continuous-valued array.
unique
Find the unique values in an array.

Examples
These examples all show factorize as a top-level method like pd.factorize(values). The results are identical for methods like Series.factorize().

>>> codes, uniques = pd.factorize(['b', 'b', 'a', 'c', 'b'])
>>> codes
array([0, 0, 1, 2, 0]...)
>>> uniques
array(['b', 'a', 'c'], dtype=object)

With sort=True, the uniques will be sorted, and codes will be shuffled so that the relationship is maintained.

>>> codes, uniques = pd.factorize(['b', 'b', 'a', 'c', 'b'], sort=True)
>>> codes
array([1, 1, 0, 2, 1]...)
>>> uniques
array(['a', 'b', 'c'], dtype=object)

Missing values are indicated in codes with na_sentinel (-1 by default). Note that missing values are never included in uniques.

>>> codes, uniques = pd.factorize(['b', None, 'a', 'c', 'b'])
>>> codes
array([ 0, -1,  1,  2,  0]...)
>>> uniques
array(['b', 'a', 'c'], dtype=object)

Thus far, we’ve only factorized lists (which are internally coerced to NumPy arrays). When factorizing pandas objects, the type of uniques will differ. For Categoricals, a Categorical is returned.

>>> cat = pd.Categorical(['a', 'a', 'c'], categories=['a', 'b', 'c'])
>>> codes, uniques = pd.factorize(cat)
>>> codes
array([0, 0, 1]...)
>>> uniques
['a', 'c']
Categories (3, object): ['a', 'b', 'c']

Notice that 'b' is in uniques.categories, despite not being present in cat.values.

For all other pandas objects, an Index of the appropriate type is returned.

>>> cat = pd.Series(['a', 'a', 'c'])
>>> codes, uniques = pd.factorize(cat)
>>> codes
array([0, 0, 1]...)
>>> uniques
Index(['a', 'c'], dtype='object')

If NaN is in the values, and we want to include NaN in the uniques of the values, it can be achieved by setting na_sentinel=None.

>>> values = np.array([1, 2, 1, np.nan])
>>> codes, uniques = pd.factorize(values)  # default: na_sentinel=-1
>>> codes
array([ 0,  1,  0, -1])
>>> uniques
array([1., 2.])

>>> codes, uniques = pd.factorize(values, na_sentinel=None)
>>> codes
array([0, 1, 0, 2])
>>> uniques
array([ 1.,  2., nan])
pandas.Flags

class pandas.Flags(obj, *, allows_duplicate_labels)[source]

Flags that apply to pandas objects.

New in version 1.2.0.

Parameters
obj:Series or DataFrame
The object these flags are associated with.
allows_duplicate_labels:bool, default True
Whether to allow duplicate labels in this object. By default, duplicate labels are permitted. Setting this to False will cause an errors.DuplicateLabelError to be raised when the index (or columns for DataFrame) is not unique, or any subsequent operation introduces duplicates. See Disallowing Duplicate Labels for more.

Warning
This is an experimental feature. Currently, many methods fail to propagate the allows_duplicate_labels value. In future versions it is expected that every method taking or returning one or more DataFrame or Series objects will propagate allows_duplicate_labels.

Notes
Attributes can be set in two ways:

>>> df = pd.DataFrame()
>>> df.flags
<Flags(allows_duplicate_labels=True)>
>>> df.flags.allows_duplicate_labels = False
>>> df.flags
<Flags(allows_duplicate_labels=False)>

>>> df.flags['allows_duplicate_labels'] = True
>>> df.flags
<Flags(allows_duplicate_labels=True)>

Attributes
allows_duplicate_labels
Whether this object allows duplicate labels.
pandas.Flags.allows_duplicate_labels propertyFlags.allows_duplicate_labels Whether this object allows duplicate labels. Setting allows_duplicate_labels=False ensures that the index (and columns of a DataFrame) are unique. Most methods that accept and return a Series or DataFrame will propagate the value of allows_duplicate_labels. See Duplicate Labels for more. See also DataFrame.attrs Set global metadata on this object. DataFrame.set_flags Set global flags on this object. Examples >>> df = pd.DataFrame({"A": [1, 2]}, index=['a', 'a']) >>> df.flags.allows_duplicate_labels True >>> df.flags.allows_duplicate_labels = False Traceback (most recent call last): ... pandas.errors.DuplicateLabelError: Index has duplicates. positions label a [0, 1]
pandas.Float64Index

class pandas.Float64Index(data=None, dtype=None, copy=False, name=None)[source]

Immutable sequence used for indexing and alignment. The basic object storing axis labels for all pandas objects. Float64Index is a special case of Index with purely float labels.

Deprecated since version 1.4.0: In pandas v2.0 Float64Index will be removed and NumericIndex used instead. Float64Index will remain fully functional for the duration of pandas 1.x.

Parameters
data:array-like (1-dimensional)
dtype:NumPy dtype (default: float64)
copy:bool
Make a copy of input ndarray.
name:object
Name to be stored in the index.

See also
Index
The base pandas Index type.
NumericIndex
Index of numpy int/uint/float data.

Notes
An Index instance can only contain hashable objects.

Attributes
None

Methods
None
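Given the deprecation note above, a minimal sketch of working with a float-labeled index, using the forward-compatible pd.Index spelling with an explicit float64 dtype rather than the Float64Index constructor (the index name "x" and the sample values are illustrative):

```python
# A minimal sketch: a float-labeled index built via pd.Index, which also
# works after Float64Index is removed in pandas 2.0.
import pandas as pd

idx = pd.Index([1.5, 2.5, 3.5], dtype="float64", name="x")
s = pd.Series([10, 20, 30], index=idx)

print(idx.dtype)   # float64
print(s.loc[2.5])  # label-based lookup on the float index -> 20
```

On pandas 1.x this produces a Float64Index under the hood; on 2.x it is a plain Index with float64 dtype, so the same code runs on both.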
pandas.get_dummies

pandas.get_dummies(data, prefix=None, prefix_sep='_', dummy_na=False, columns=None, sparse=False, drop_first=False, dtype=None)[source]

Convert categorical variable into dummy/indicator variables.

Parameters
data:array-like, Series, or DataFrame
Data of which to get dummy indicators.
prefix:str, list of str, or dict of str, default None
String to prepend to DataFrame column names. Pass a list with length equal to the number of columns when calling get_dummies on a DataFrame. Alternatively, prefix can be a dictionary mapping column names to prefixes.
prefix_sep:str, default ‘_’
If appending prefix, separator/delimiter to use. Or pass a list or dictionary as with prefix.
dummy_na:bool, default False
Add a column to indicate NaNs; if False, NaNs are ignored.
columns:list-like, default None
Column names in the DataFrame to be encoded. If columns is None then all the columns with object or category dtype will be converted.
sparse:bool, default False
Whether the dummy-encoded columns should be backed by a SparseArray (True) or a regular NumPy array (False).
drop_first:bool, default False
Whether to get k-1 dummies out of k categorical levels by removing the first level.
dtype:dtype, default np.uint8
Data type for new columns. Only a single dtype is allowed.

Returns
DataFrame
Dummy-coded data.

See also
Series.str.get_dummies
Convert Series to dummy codes.

Examples
>>> s = pd.Series(list('abca'))
>>> pd.get_dummies(s)
   a  b  c
0  1  0  0
1  0  1  0
2  0  0  1
3  1  0  0

>>> s1 = ['a', 'b', np.nan]
>>> pd.get_dummies(s1)
   a  b
0  1  0
1  0  1
2  0  0

>>> pd.get_dummies(s1, dummy_na=True)
   a  b  NaN
0  1  0    0
1  0  1    0
2  0  0    1

>>> df = pd.DataFrame({'A': ['a', 'b', 'a'], 'B': ['b', 'a', 'c'],
...                    'C': [1, 2, 3]})
>>> pd.get_dummies(df, prefix=['col1', 'col2'])
   C  col1_a  col1_b  col2_a  col2_b  col2_c
0  1       1       0       0       1       0
1  2       0       1       1       0       0
2  3       1       0       0       0       1

>>> pd.get_dummies(pd.Series(list('abcaa')))
   a  b  c
0  1  0  0
1  0  1  0
2  0  0  1
3  1  0  0
4  1  0  0

>>> pd.get_dummies(pd.Series(list('abcaa')), drop_first=True)
   b  c
0  0  0
1  1  0
2  0  1
3  0  0
4  0  0

>>> pd.get_dummies(pd.Series(list('abc')), dtype=float)
     a    b    c
0  1.0  0.0  0.0
1  0.0  1.0  0.0
2  0.0  0.0  1.0
pandas.get_option pandas.get_option(pat)=<pandas._config.config.CallableDynamicDoc object> Retrieves the value of the specified option. Available options: compute.[use_bottleneck, use_numba, use_numexpr] display.[chop_threshold, colheader_justify, column_space, date_dayfirst, date_yearfirst, encoding, expand_frame_repr, float_format] display.html.[border, table_schema, use_mathjax] display.[large_repr] display.latex.[escape, longtable, multicolumn, multicolumn_format, multirow, repr] display.[max_categories, max_columns, max_colwidth, max_dir_items, max_info_columns, max_info_rows, max_rows, max_seq_items, memory_usage, min_rows, multi_sparse, notebook_repr_html, pprint_nest_depth, precision, show_dimensions] display.unicode.[ambiguous_as_wide, east_asian_width] display.[width] io.excel.ods.[reader, writer] io.excel.xls.[reader, writer] io.excel.xlsb.[reader] io.excel.xlsm.[reader, writer] io.excel.xlsx.[reader, writer] io.hdf.[default_format, dropna_table] io.parquet.[engine] io.sql.[engine] mode.[chained_assignment, data_manager, sim_interactive, string_storage, use_inf_as_na, use_inf_as_null] plotting.[backend] plotting.matplotlib.[register_converters] styler.format.[decimal, escape, formatter, na_rep, precision, thousands] styler.html.[mathjax] styler.latex.[environment, hrules, multicol_align, multirow_align] styler.render.[encoding, max_columns, max_elements, max_rows, repr] styler.sparse.[columns, index] Parameters pat:str Regexp which should match a single option. Note: partial matches are supported for convenience, but unless you use the full option name (e.g. x.y.z.option_name), your code may break in future versions if new options with similar names are introduced. 
Returns
result:the value of the option

Raises
OptionError:if no such option exists

Notes
The available options with their descriptions:

compute.use_bottleneck:bool
Use the bottleneck library to accelerate if it is installed, the default is True. Valid values: False,True [default: True] [currently: True]

compute.use_numba:bool
Use the numba engine option for select operations if it is installed, the default is False. Valid values: False,True [default: False] [currently: False]

compute.use_numexpr:bool
Use the numexpr library to accelerate computation if it is installed, the default is True. Valid values: False,True [default: True] [currently: True]

display.chop_threshold:float or None
If set to a float value, all float values smaller than the given threshold will be displayed as exactly 0 by repr and friends. [default: None] [currently: None]

display.colheader_justify:‘left’/’right’
Controls the justification of column headers. Used by DataFrameFormatter. [default: right] [currently: right]

display.column_space
No description available. [default: 12] [currently: 12]

display.date_dayfirst:boolean
When True, prints and parses dates with the day first, e.g. 20/01/2005. [default: False] [currently: False]

display.date_yearfirst:boolean
When True, prints and parses dates with the year first, e.g. 2005/01/20. [default: False] [currently: False]

display.encoding:str/unicode
Defaults to the detected encoding of the console. Specifies the encoding to be used for strings returned by to_string; these are generally strings meant to be displayed on the console. [default: utf-8] [currently: utf-8]

display.expand_frame_repr:boolean
Whether to print out the full DataFrame repr for wide DataFrames across multiple lines. max_columns is still respected, but the output will wrap-around across multiple “pages” if its width exceeds display.width.
[default: True] [currently: True]

display.float_format:callable
The callable should accept a floating point number and return a string with the desired format of the number. This is used in some places like SeriesFormatter. See formats.format.EngFormatter for an example. [default: None] [currently: None]

display.html.border:int
A border=value attribute is inserted in the <table> tag for the DataFrame HTML repr. [default: 1] [currently: 1]

display.html.table_schema:boolean
Whether to publish a Table Schema representation for frontends that support it. (default: False) [default: False] [currently: False]

display.html.use_mathjax:boolean
When True, Jupyter notebook will process table contents using MathJax, rendering mathematical expressions enclosed by the dollar symbol. (default: True) [default: True] [currently: True]

display.large_repr:‘truncate’/’info’
For DataFrames exceeding max_rows/max_cols, the repr (and HTML repr) can show a truncated table (the default from 0.13), or switch to the view from df.info() (the behaviour in earlier versions of pandas). [default: truncate] [currently: truncate]

display.latex.escape:bool
This specifies if the to_latex method of a Dataframe escapes special characters. Valid values: False,True [default: True] [currently: True]

display.latex.longtable:bool
This specifies if the to_latex method of a Dataframe uses the longtable format. Valid values: False,True [default: False] [currently: False]

display.latex.multicolumn:bool
This specifies if the to_latex method of a Dataframe uses multicolumns to pretty-print MultiIndex columns. Valid values: False,True [default: True] [currently: True]

display.latex.multicolumn_format:str
The alignment specifier used by the to_latex method of a Dataframe when pretty-printing MultiIndex columns. [default: l] [currently: l]

display.latex.multirow:bool
This specifies if the to_latex method of a Dataframe uses multirows to pretty-print MultiIndex rows.
Valid values: False,True [default: False] [currently: False]

display.latex.repr:boolean
Whether to produce a latex DataFrame representation for jupyter environments that support it. (default: False) [default: False] [currently: False]

display.max_categories:int
This sets the maximum number of categories pandas should output when printing out a Categorical or a Series of dtype “category”. [default: 8] [currently: 8]

display.max_columns:int
If max_cols is exceeded, switch to truncate view. Depending on large_repr, objects are either centrally truncated or printed as a summary view. ‘None’ value means unlimited. In case python/IPython is running in a terminal and large_repr equals ‘truncate’ this can be set to 0 and pandas will auto-detect the width of the terminal and print a truncated object which fits the screen width. The IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible to do correct auto-detection. [default: 0] [currently: 0]

display.max_colwidth:int or None
The maximum width in characters of a column in the repr of a pandas data structure. When the column overflows, a “…” placeholder is embedded in the output. A ‘None’ value means unlimited. [default: 50] [currently: 50]

display.max_dir_items:int
The number of items that will be added to dir(…). ‘None’ value means unlimited. Because dir is cached, changing this option will not immediately affect already existing dataframes until a column is deleted or added. This is for instance used to suggest columns from a dataframe to tab completion. [default: 100] [currently: 100]

display.max_info_columns:int
max_info_columns is used in DataFrame.info method to decide if per column information will be printed. [default: 100] [currently: 100]

display.max_info_rows:int or None
df.info() will usually show null-counts for each column. For large frames this can be quite slow.
max_info_rows and max_info_cols limit this null check only to frames with smaller dimensions than specified. [default: 1690785] [currently: 1690785]

display.max_rows:int
If max_rows is exceeded, switch to truncate view. Depending on large_repr, objects are either centrally truncated or printed as a summary view. ‘None’ value means unlimited. In case python/IPython is running in a terminal and large_repr equals ‘truncate’ this can be set to 0 and pandas will auto-detect the height of the terminal and print a truncated object which fits the screen height. The IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible to do correct auto-detection. [default: 60] [currently: 60]

display.max_seq_items:int or None
When pretty-printing a long sequence, no more than max_seq_items will be printed. If items are omitted, they will be denoted by the addition of “…” to the resulting string. If set to None, the number of items to be printed is unlimited. [default: 100] [currently: 100]

display.memory_usage:bool, string or None
This specifies if the memory usage of a DataFrame should be displayed when df.info() is called. Valid values True,False,’deep’ [default: True] [currently: True]

display.min_rows:int
The number of rows to show in a truncated view (when max_rows is exceeded). Ignored when max_rows is set to None or 0. When set to None, follows the value of max_rows. [default: 10] [currently: 10]

display.multi_sparse:boolean
“sparsify” MultiIndex display (don’t display repeated elements in outer levels within groups) [default: True] [currently: True]

display.notebook_repr_html:boolean
When True, IPython notebook will use html representation for pandas objects (if it is available).
[default: True] [currently: True]

display.pprint_nest_depth:int
Controls the number of nested levels to process when pretty-printing. [default: 3] [currently: 3]

display.precision:int
Floating point output precision in terms of number of places after the decimal, for regular formatting as well as scientific notation. Similar to precision in numpy.set_printoptions(). [default: 6] [currently: 6]

display.show_dimensions:boolean or ‘truncate’
Whether to print out dimensions at the end of DataFrame repr. If ‘truncate’ is specified, only print out the dimensions if the frame is truncated (e.g. not display all rows and/or columns). [default: truncate] [currently: truncate]

display.unicode.ambiguous_as_wide:boolean
Whether to use the Unicode East Asian Width to calculate the display text width. Enabling this may affect performance. (default: False) [default: False] [currently: False]

display.unicode.east_asian_width:boolean
Whether to use the Unicode East Asian Width to calculate the display text width. Enabling this may affect performance. (default: False) [default: False] [currently: False]

display.width:int
Width of the display in characters. In case python/IPython is running in a terminal this can be set to None and pandas will correctly auto-detect the width. Note that the IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible to correctly detect the width. [default: 80] [currently: 80]

io.excel.ods.reader:string
The default Excel reader engine for ‘ods’ files. Available options: auto, odf. [default: auto] [currently: auto]

io.excel.ods.writer:string
The default Excel writer engine for ‘ods’ files. Available options: auto, odf. [default: auto] [currently: auto]

io.excel.xls.reader:string
The default Excel reader engine for ‘xls’ files. Available options: auto, xlrd. [default: auto] [currently: auto]

io.excel.xls.writer:string
The default Excel writer engine for ‘xls’ files. Available options: auto, xlwt.
[default: auto] [currently: auto] (Deprecated.)

io.excel.xlsb.reader:string
The default Excel reader engine for ‘xlsb’ files. Available options: auto, pyxlsb. [default: auto] [currently: auto]

io.excel.xlsm.reader:string
The default Excel reader engine for ‘xlsm’ files. Available options: auto, xlrd, openpyxl. [default: auto] [currently: auto]

io.excel.xlsm.writer:string
The default Excel writer engine for ‘xlsm’ files. Available options: auto, openpyxl. [default: auto] [currently: auto]

io.excel.xlsx.reader:string
The default Excel reader engine for ‘xlsx’ files. Available options: auto, xlrd, openpyxl. [default: auto] [currently: auto]

io.excel.xlsx.writer:string
The default Excel writer engine for ‘xlsx’ files. Available options: auto, openpyxl, xlsxwriter. [default: auto] [currently: auto]

io.hdf.default_format:format
Default format for writing; if None, then put will default to ‘fixed’ and append will default to ‘table’. [default: None] [currently: None]

io.hdf.dropna_table:boolean
Drop ALL nan rows when appending to a table. [default: False] [currently: False]

io.parquet.engine:string
The default parquet reader/writer engine. Available options: ‘auto’, ‘pyarrow’, ‘fastparquet’, the default is ‘auto’. [default: auto] [currently: auto]

io.sql.engine:string
The default sql reader/writer engine. Available options: ‘auto’, ‘sqlalchemy’, the default is ‘auto’. [default: auto] [currently: auto]

mode.chained_assignment:string
Raise an exception, warn, or take no action if trying to use chained assignment. The default is warn. [default: warn] [currently: warn]

mode.data_manager:string
Internal data manager type; can be “block” or “array”. Defaults to “block”, unless overridden by the ‘PANDAS_DATA_MANAGER’ environment variable (needs to be set before pandas is imported).
[default: block] [currently: block]

mode.sim_interactive:boolean
Whether to simulate interactive mode for purposes of testing. [default: False] [currently: False]

mode.string_storage:string
The default storage for StringDtype. [default: python] [currently: python]

mode.use_inf_as_na:boolean
True means treat None, NaN, INF, -INF as NA (old way), False means None and NaN are null, but INF, -INF are not NA (new way). [default: False] [currently: False]

mode.use_inf_as_null:boolean
use_inf_as_null has been deprecated and will be removed in a future version. Use use_inf_as_na instead. [default: False] [currently: False] (Deprecated, use mode.use_inf_as_na instead.)

plotting.backend:str
The plotting backend to use. The default value is “matplotlib”, the backend provided with pandas. Other backends can be specified by providing the name of the module that implements the backend. [default: matplotlib] [currently: matplotlib]

plotting.matplotlib.register_converters:bool or ‘auto’
Whether to register converters with matplotlib’s units registry for dates, times, datetimes, and Periods. Toggling to False will remove the converters, restoring any converters that pandas overwrote. [default: auto] [currently: auto]

styler.format.decimal:str
The character representation for the decimal separator for floats and complex. [default: .] [currently: .]

styler.format.escape:str, optional
Whether to escape certain characters according to the given context; html or latex. [default: None] [currently: None]

styler.format.formatter:str, callable, dict, optional
A formatter object to be used as default within Styler.format. [default: None] [currently: None]

styler.format.na_rep:str, optional
The string representation for values identified as missing. [default: None] [currently: None]

styler.format.precision:int
The precision for floats and complex numbers.
[default: 6] [currently: 6]

styler.format.thousands:str, optional
The character representation for thousands separator for floats, int and complex. [default: None] [currently: None]

styler.html.mathjax:bool
If False will render special CSS classes to table attributes that indicate Mathjax will not be used in Jupyter Notebook. [default: True] [currently: True]

styler.latex.environment:str
The environment to replace \begin{table}. If “longtable” is used, it results in a specific longtable environment format. [default: None] [currently: None]

styler.latex.hrules:bool
Whether to add horizontal rules on top and bottom and below the headers. [default: False] [currently: False]

styler.latex.multicol_align:{“r”, “c”, “l”, “naive-l”, “naive-r”}
The specifier for horizontal alignment of sparsified LaTeX multicolumns. Pipe decorators can also be added to non-naive values to draw vertical rules, e.g. “|r” will draw a rule on the left side of right aligned merged cells. [default: r] [currently: r]

styler.latex.multirow_align:{“c”, “t”, “b”}
The specifier for vertical alignment of sparsified LaTeX multirows. [default: c] [currently: c]

styler.render.encoding:str
The encoding used for output HTML and LaTeX files. [default: utf-8] [currently: utf-8]

styler.render.max_columns:int, optional
The maximum number of columns that will be rendered. May still be reduced to satisfy max_elements, which takes precedence. [default: None] [currently: None]

styler.render.max_elements:int
The maximum number of data-cell (<td>) elements that will be rendered before trimming will occur over columns, rows or both if needed. [default: 262144] [currently: 262144]

styler.render.max_rows:int, optional
The maximum number of rows that will be rendered. May still be reduced to satisfy max_elements, which takes precedence. [default: None] [currently: None]

styler.render.repr:str
Determine which output to use in Jupyter Notebook in {“html”, “latex”}.
[default: html] [currently: html]

styler.sparse.columns:bool
Whether to sparsify the display of hierarchical columns. Setting to False will display each explicit level element in a hierarchical key for each column. [default: True] [currently: True]

styler.sparse.index:bool
Whether to sparsify the display of a hierarchical index. Setting to False will display each explicit level element in a hierarchical key for each row. [default: True] [currently: True]
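A short usage sketch (not part of the option listing above): get_option reads the current value, and pd.option_context scopes a temporary override so the global state is restored afterwards:

```python
# Read an option, override it inside a context, and confirm it is restored.
import pandas as pd

default_rows = pd.get_option("display.max_rows")  # 60 unless changed elsewhere

with pd.option_context("display.max_rows", 5):
    assert pd.get_option("display.max_rows") == 5

# Outside the context, the original value is back.
assert pd.get_option("display.max_rows") == default_rows
```

Using pd.option_context instead of pd.set_option avoids leaking display settings into the rest of a session.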
pandas.Grouper classpandas.Grouper(*args, **kwargs)[source] A Grouper allows the user to specify a groupby instruction for an object. This specification will select a column via the key parameter, or if the level and/or axis parameters are given, a level of the index of the target object. If axis and/or level are passed as keywords to both Grouper and groupby, the values passed to Grouper take precedence. Parameters key:str, defaults to None Groupby key, which selects the grouping column of the target. level:name/number, defaults to None The level for the target index. freq:str / frequency object, defaults to None This will groupby the specified frequency if the target selection (via key or level) is a datetime-like object. For full specification of available frequencies, please see here. axis:str, int, defaults to 0 Number/name of the axis. sort:bool, default to False Whether to sort the resulting labels. closed:{‘left’ or ‘right’} Closed end of interval. Only when freq parameter is passed. label:{‘left’ or ‘right’} Interval boundary to use for labeling. Only when freq parameter is passed. convention:{‘start’, ‘end’, ‘e’, ‘s’} If grouper is PeriodIndex and freq parameter is passed. base:int, default 0 Only when freq parameter is passed. For frequencies that evenly subdivide 1 day, the “origin” of the aggregated intervals. For example, for ‘5min’ frequency, base could range from 0 through 4. Defaults to 0. Deprecated since version 1.1.0: The new arguments that you should use are ‘offset’ or ‘origin’. loffset:str, DateOffset, timedelta object Only when freq parameter is passed. Deprecated since version 1.1.0: loffset is only working for .resample(...) and not for Grouper (GH28302). However, loffset is also deprecated for .resample(...) See: DataFrame.resample origin:Timestamp or str, default ‘start_day’ The timestamp on which to adjust the grouping. The timezone of origin must match the timezone of the index. 
If string, must be one of the following: ‘epoch’: origin is 1970-01-01 ‘start’: origin is the first value of the timeseries ‘start_day’: origin is the first day at midnight of the timeseries New in version 1.1.0. ‘end’: origin is the last value of the timeseries ‘end_day’: origin is the ceiling midnight of the last day New in version 1.3.0. offset:Timedelta or str, default is None An offset timedelta added to the origin. New in version 1.1.0. dropna:bool, default True If True, and if group keys contain NA values, NA values together with row/column will be dropped. If False, NA values will also be treated as the key in groups. New in version 1.2.0. Returns A specification for a groupby instruction Examples Syntactic sugar for df.groupby('A') >>> df = pd.DataFrame( ... { ... "Animal": ["Falcon", "Parrot", "Falcon", "Falcon", "Parrot"], ... "Speed": [100, 5, 200, 300, 15], ... } ... ) >>> df Animal Speed 0 Falcon 100 1 Parrot 5 2 Falcon 200 3 Falcon 300 4 Parrot 15 >>> df.groupby(pd.Grouper(key="Animal")).mean() Speed Animal Falcon 200.0 Parrot 10.0 Specify a resample operation on the column ‘Publish date’ >>> df = pd.DataFrame( ... { ... "Publish date": [ ... pd.Timestamp("2000-01-02"), ... pd.Timestamp("2000-01-02"), ... pd.Timestamp("2000-01-09"), ... pd.Timestamp("2000-01-16") ... ], ... "ID": [0, 1, 2, 3], ... "Price": [10, 20, 30, 40] ... } ... 
) >>> df Publish date ID Price 0 2000-01-02 0 10 1 2000-01-02 1 20 2 2000-01-09 2 30 3 2000-01-16 3 40 >>> df.groupby(pd.Grouper(key="Publish date", freq="1W")).mean() ID Price Publish date 2000-01-02 0.5 15.0 2000-01-09 2.0 30.0 2000-01-16 3.0 40.0 If you want to adjust the start of the bins based on a fixed timestamp: >>> start, end = '2000-10-01 23:30:00', '2000-10-02 00:30:00' >>> rng = pd.date_range(start, end, freq='7min') >>> ts = pd.Series(np.arange(len(rng)) * 3, index=rng) >>> ts 2000-10-01 23:30:00 0 2000-10-01 23:37:00 3 2000-10-01 23:44:00 6 2000-10-01 23:51:00 9 2000-10-01 23:58:00 12 2000-10-02 00:05:00 15 2000-10-02 00:12:00 18 2000-10-02 00:19:00 21 2000-10-02 00:26:00 24 Freq: 7T, dtype: int64 >>> ts.groupby(pd.Grouper(freq='17min')).sum() 2000-10-01 23:14:00 0 2000-10-01 23:31:00 9 2000-10-01 23:48:00 21 2000-10-02 00:05:00 54 2000-10-02 00:22:00 24 Freq: 17T, dtype: int64 >>> ts.groupby(pd.Grouper(freq='17min', origin='epoch')).sum() 2000-10-01 23:18:00 0 2000-10-01 23:35:00 18 2000-10-01 23:52:00 27 2000-10-02 00:09:00 39 2000-10-02 00:26:00 24 Freq: 17T, dtype: int64 >>> ts.groupby(pd.Grouper(freq='17min', origin='2000-01-01')).sum() 2000-10-01 23:24:00 3 2000-10-01 23:41:00 15 2000-10-01 23:58:00 45 2000-10-02 00:15:00 45 Freq: 17T, dtype: int64 If you want to adjust the start of the bins with an offset Timedelta, the two following lines are equivalent: >>> ts.groupby(pd.Grouper(freq='17min', origin='start')).sum() 2000-10-01 23:30:00 9 2000-10-01 23:47:00 21 2000-10-02 00:04:00 54 2000-10-02 00:21:00 24 Freq: 17T, dtype: int64 >>> ts.groupby(pd.Grouper(freq='17min', offset='23h30min')).sum() 2000-10-01 23:30:00 9 2000-10-01 23:47:00 21 2000-10-02 00:04:00 54 2000-10-02 00:21:00 24 Freq: 17T, dtype: int64 To replace the use of the deprecated base argument, you can now use offset, in this example it is equivalent to have base=2: >>> ts.groupby(pd.Grouper(freq='17min', offset='2min')).sum() 2000-10-01 23:16:00 0 2000-10-01 23:33:00 9 2000-10-01 
23:50:00 36 2000-10-02 00:07:00 39 2000-10-02 00:24:00 24 Freq: 17T, dtype: int64 Attributes ax groups
pandas.HDFStore.append

HDFStore.append(key, value, format=None, axes=None, index=True, append=True, complib=None, complevel=None, columns=None, min_itemsize=None, nan_rep=None, chunksize=None, expectedrows=None, dropna=None, data_columns=None, encoding=None, errors='strict')[source]

Append to Table in file. Node must already exist and be Table format.

Parameters
key:str
value:{Series, DataFrame}
format:‘table’ is the default
Format to use when storing object in HDFStore. Value can be one of:
'table'
Table format. Write as a PyTables Table structure which may perform worse but allows more flexible operations like searching / selecting subsets of the data.
append:bool, default True
Append the input data to the existing.
data_columns:list of columns, or True, default None
List of columns to create as indexed data columns for on-disk queries, or True to use all columns. By default only the axes of the object are indexed. See here.
min_itemsize:dict of columns that specify minimum str sizes
nan_rep:str to use as str nan representation
chunksize:size to chunk the writing
expectedrows:expected TOTAL row size of this table
encoding:default None, provide an encoding for str
dropna:bool, default False
Do not write an ALL nan row to the store, settable by the option ‘io.hdf.dropna_table’.

Notes
Does not check if data being appended overlaps with existing data in the table, so be careful.
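As a hedged sketch (the file path and key "df" are illustrative, not from the docstring), the append workflow looks like this: create a table-format node with put, then extend it with append. HDFStore requires the optional PyTables package, so the block degrades to a no-op when it is not installed:

```python
# Sketch: create a table-format node, then append more rows to it.
# Requires the optional PyTables ("tables") package; skipped gracefully if absent.
import importlib.util
import os
import tempfile

import pandas as pd

grew = True
if importlib.util.find_spec("tables") is not None:
    path = os.path.join(tempfile.mkdtemp(), "store.h5")  # illustrative path
    with pd.HDFStore(path) as store:
        # The node must exist in Table format before appending.
        store.put("df", pd.DataFrame({"A": [1, 2]}), format="table")
        store.append("df", pd.DataFrame({"A": [3, 4]}))
        # Per the Notes above, no overlap checking is performed on append.
        grew = len(store.get("df")) == 4
```

Note that the second frame reuses index labels 0 and 1; as the Notes warn, append does not detect this overlap.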
pandas.HDFStore.get HDFStore.get(key)[source] Retrieve pandas object stored in file. Parameters key:str Returns object Same type as object stored in file.
pandas.HDFStore.groups HDFStore.groups()[source] Return a list of all the top-level nodes. Each node returned is not a pandas storage object. Returns list List of objects.
pandas.HDFStore.info HDFStore.info()[source] Print detailed information on the store. Returns str
pandas.HDFStore.keys HDFStore.keys(include='pandas')[source] Return a list of keys corresponding to objects stored in HDFStore. Parameters include:str, default ‘pandas’ When include equals ‘pandas’, return pandas objects. When include equals ‘native’, return native HDF5 Table objects. New in version 1.1.0. Returns list List of ABSOLUTE path-names (e.g. have the leading ‘/’). Raises ValueError If include has an illegal value.
pandas.HDFStore.put HDFStore.put(key, value, format=None, index=True, append=False, complib=None, complevel=None, min_itemsize=None, nan_rep=None, data_columns=None, encoding=None, errors='strict', track_times=True, dropna=False)[source] Store object in HDFStore. Parameters key:str value:{Series, DataFrame} format:‘fixed(f)|table(t)’, default is ‘fixed’ Format to use when storing object in HDFStore. Value can be one of: 'fixed' Fixed format. Fast writing/reading. Not appendable, nor searchable. 'table' Table format. Write as a PyTables Table structure, which may perform worse but allows more flexible operations like searching / selecting subsets of the data. append:bool, default False This will force Table format and append the input data to the existing data. data_columns:list of columns or True, default None List of columns to create as data columns, or True to use all columns. encoding:str, default None Provide an encoding for strings. track_times:bool, default True Parameter is propagated to the ‘create_table’ method of ‘PyTables’. If set to False, identical h5 files (same hashes) can be produced independent of creation time. New in version 1.1.0.
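A short sketch contrasting the two formats, assuming PyTables is installed; the file path and key names are illustrative. Only a table-format node supports on-disk `where` queries:

```python
import os
import tempfile

import pandas as pd

# Illustrative scratch location for the HDF5 file.
path = os.path.join(tempfile.mkdtemp(), 'put_demo.h5')
df = pd.DataFrame({'A': [1, 2, 3]})

with pd.HDFStore(path) as store:
    store.put('fixed_node', df)  # default 'fixed': fast, not appendable/searchable
    store.put('table_node', df, format='table', data_columns=True)
    # Only the table-format node can be queried on disk.
    hit = store.select('table_node', where='A > 1')

print(hit)
```

Trying the same `select` with a `where` clause against `fixed_node` would raise, since fixed-format nodes are not searchable.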
pandas.HDFStore.select HDFStore.select(key, where=None, start=None, stop=None, columns=None, iterator=False, chunksize=None, auto_close=False)[source] Retrieve pandas object stored in file, optionally based on where criteria. Warning Pandas uses PyTables for reading and writing HDF5 files, which allows serializing object-dtype data with pickle when using the “fixed” format. Loading pickled data received from untrusted sources can be unsafe. See: https://docs.python.org/3/library/pickle.html for more. Parameters key:str Object being retrieved from file. where:list, default None List of Term (or convertible) objects, optional. start:int, default None Row number to start selection. stop:int, default None Row number to stop selection. columns:list, default None If not None, limit the returned columns to this list. iterator:bool, default False Return an iterator. chunksize:int, default None Number of rows to include in each iteration; returns an iterator. auto_close:bool, default False Automatically close the store when finished. Returns object Retrieved object from file.
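A minimal sketch of `where` and row-range selection, assuming PyTables is installed; the file path, key and column names are illustrative. The node must be stored in table format with data columns for the `where` query to work:

```python
import os
import tempfile

import numpy as np
import pandas as pd

# Illustrative scratch location for the HDF5 file.
path = os.path.join(tempfile.mkdtemp(), 'select_demo.h5')
df = pd.DataFrame({'A': range(10), 'B': np.arange(10) * 1.5})

with pd.HDFStore(path) as store:
    # 'table' format plus data_columns enables on-disk queries.
    store.put('df', df, format='table', data_columns=True)
    big = store.select('df', where='A > 6')       # query an indexed data column
    window = store.select('df', start=2, stop=5)  # select by row number instead

print(big)
print(window)
```

`big` contains the three rows where `A` is 7, 8 or 9; `window` contains rows 2 through 4.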
pandas.HDFStore.walk HDFStore.walk(where='/')[source] Walk the pytables group hierarchy for pandas objects. This generator will yield the group path, subgroups and pandas object names for each group. Any non-pandas PyTables objects that are not a group will be ignored. The where group itself is listed first (preorder), then each of its child groups (following an alphanumerical order) is also traversed, following the same procedure. Parameters where:str, default “/” Group where to start walking. Yields path:str Full path to a group (without trailing ‘/’). groups:list Names (strings) of the groups contained in path. leaves:list Names (strings) of the pandas objects contained in path.
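A small sketch of walking a nested hierarchy, assuming PyTables is installed; the file path and group names are illustrative. Storing under slash-separated keys creates intermediate groups, and `walk` yields each group's path together with its subgroups and pandas leaves:

```python
import os
import tempfile

import pandas as pd

# Illustrative scratch location for the HDF5 file.
path = os.path.join(tempfile.mkdtemp(), 'walk_demo.h5')

with pd.HDFStore(path) as store:
    # Nested keys create intermediate groups '/top' and '/top/nested'.
    store.put('top/a', pd.Series([1, 2]))
    store.put('top/nested/b', pd.DataFrame({'x': [3]}))
    # Collect (path, groups, leaves) triples for inspection.
    layout = {p: (groups, leaves) for p, groups, leaves in store.walk()}

for p, (groups, leaves) in layout.items():
    print(p, groups, leaves)
```

Under these assumptions, `/top` lists `nested` among its subgroups and `a` among its leaves, while `/top/nested` holds the leaf `b`.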
pandas.Index classpandas.Index(data=None, dtype=None, copy=False, name=None, tupleize_cols=True, **kwargs)[source] Immutable sequence used for indexing and alignment. The basic object storing axis labels for all pandas objects. Parameters data:array-like (1-dimensional) dtype:NumPy dtype (default: object) If dtype is None, we find the dtype that best fits the data. If an actual dtype is provided, we coerce to that dtype if it’s safe. Otherwise, an error will be raised. copy:bool Make a copy of input ndarray. name:object Name to be stored in the index. tupleize_cols:bool (default: True) When True, attempt to create a MultiIndex if possible. See also RangeIndex Index implementing a monotonic integer range. CategoricalIndex Index of Categorical s. MultiIndex A multi-level, or hierarchical Index. IntervalIndex An Index of Interval s. DatetimeIndex Index of datetime64 data. TimedeltaIndex Index of timedelta64 data. PeriodIndex Index of Period data. NumericIndex Index of numpy int/uint/float data. Int64Index Index of purely int64 labels (deprecated). UInt64Index Index of purely uint64 labels (deprecated). Float64Index Index of purely float64 labels (deprecated). Notes An Index instance can only contain hashable objects Examples >>> pd.Index([1, 2, 3]) Int64Index([1, 2, 3], dtype='int64') >>> pd.Index(list('abc')) Index(['a', 'b', 'c'], dtype='object') Attributes T Return the transpose, which is by definition self. array The ExtensionArray of the data backing this Series or Index. asi8 Integer representation of the values. dtype Return the dtype object of the underlying data. has_duplicates Check if the Index has duplicate values. hasnans Return True if there are any NaNs. inferred_type Return a string of the type inferred from the values. is_all_dates Whether or not the index values only consist of dates. is_monotonic Alias for is_monotonic_increasing. is_monotonic_decreasing Return if the index is monotonic decreasing (only equal or decreasing) values. 
is_monotonic_increasing Return if the index is monotonic increasing (only equal or increasing) values. is_unique Return if the index has unique values. name Return Index or MultiIndex name. nbytes Return the number of bytes in the underlying data. ndim Number of dimensions of the underlying data, by definition 1. nlevels Number of levels. shape Return a tuple of the shape of the underlying data. size Return the number of elements in the underlying data. values Return an array representing the data in the Index. empty names Methods all(*args, **kwargs) Return whether all elements are Truthy. any(*args, **kwargs) Return whether any element is Truthy. append(other) Append a collection of Index options together. argmax([axis, skipna]) Return int position of the largest value in the Series. argmin([axis, skipna]) Return int position of the smallest value in the Series. argsort(*args, **kwargs) Return the integer indices that would sort the index. asof(label) Return the label from the index, or, if not present, the previous one. asof_locs(where, mask) Return the locations (indices) of labels in the index. astype(dtype[, copy]) Create an Index with values cast to dtypes. copy([name, deep, dtype, names]) Make a copy of this object. delete(loc) Make new Index with passed location(-s) deleted. difference(other[, sort]) Return a new Index with elements of index not in other. drop(labels[, errors]) Make new Index with passed list of labels deleted. drop_duplicates([keep]) Return Index with duplicate values removed. droplevel([level]) Return index with requested level(s) removed. dropna([how]) Return Index without NA/NaN values. duplicated([keep]) Indicate duplicate index values. equals(other) Determine if two Index object are equal. factorize([sort, na_sentinel]) Encode the object as an enumerated type or categorical variable. fillna([value, downcast]) Fill NA/NaN values with the specified value. format([name, formatter, na_rep]) Render a string representation of the Index. 
get_indexer(target[, method, limit, tolerance]) Compute indexer and mask for new index given the current index. get_indexer_for(target) Guaranteed return of an indexer even when non-unique. get_indexer_non_unique(target) Compute indexer and mask for new index given the current index. get_level_values(level) Return an Index of values for requested level. get_loc(key[, method, tolerance]) Get integer location, slice or boolean mask for requested label. get_slice_bound(label, side[, kind]) Calculate slice bound that corresponds to given label. get_value(series, key) Fast lookup of value from 1-dimensional ndarray. groupby(values) Group the index labels by a given array of values. holds_integer() Whether the type is an integer type. identical(other) Similar to equals, but checks that object attributes and types are also equal. insert(loc, item) Make new Index inserting new item at location. intersection(other[, sort]) Form the intersection of two Index objects. is_(other) More flexible, faster check like is but that works through views. is_boolean() Check if the Index only consists of booleans. is_categorical() Check if the Index holds categorical data. is_floating() Check if the Index is a floating type. is_integer() Check if the Index only consists of integers. is_interval() Check if the Index holds Interval objects. is_mixed() Check if the Index holds data with mixed data types. is_numeric() Check if the Index only consists of numeric data. is_object() Check if the Index is of the object dtype. is_type_compatible(kind) Whether the index type is compatible with the provided type. isin(values[, level]) Return a boolean array where the index values are in values. isna() Detect missing values. isnull() Detect missing values. item() Return the first element of the underlying data as a Python scalar. join(other[, how, level, return_indexers, sort]) Compute join_index and indexers to conform data structures to the new index. 
map(mapper[, na_action]) Map values using an input mapping or function. max([axis, skipna]) Return the maximum value of the Index. memory_usage([deep]) Memory usage of the values. min([axis, skipna]) Return the minimum value of the Index. notna() Detect existing (non-missing) values. notnull() Detect existing (non-missing) values. nunique([dropna]) Return number of unique elements in the object. putmask(mask, value) Return a new Index of the values set with the mask. ravel([order]) Return an ndarray of the flattened values of the underlying data. reindex(target[, method, level, limit, ...]) Create index with target's values. rename(name[, inplace]) Alter Index or MultiIndex name. repeat(repeats[, axis]) Repeat elements of a Index. searchsorted(value[, side, sorter]) Find indices where elements should be inserted to maintain order. set_names(names[, level, inplace]) Set Index or MultiIndex name. set_value(arr, key, value) (DEPRECATED) Fast lookup of value from 1-dimensional ndarray. shift([periods, freq]) Shift index by desired number of time frequency increments. slice_indexer([start, end, step, kind]) Compute the slice indexer for input labels and step. slice_locs([start, end, step, kind]) Compute slice locations for input labels. sort(*args, **kwargs) Use sort_values instead. sort_values([return_indexer, ascending, ...]) Return a sorted copy of the index. sortlevel([level, ascending, sort_remaining]) For internal compatibility with the Index API. str alias of pandas.core.strings.accessor.StringMethods symmetric_difference(other[, result_name, sort]) Compute the symmetric difference of two Index objects. take(indices[, axis, allow_fill, fill_value]) Return a new Index of the values selected by the indices. to_flat_index() Identity method. to_frame([index, name]) Create a DataFrame with a column containing the Index. to_list() Return a list of the values. to_native_types([slicer]) (DEPRECATED) Format specified values of self and return them. 
to_numpy([dtype, copy, na_value]) A NumPy ndarray representing the values in this Series or Index. to_series([index, name]) Create a Series with both index and values equal to the index keys. tolist() Return a list of the values. transpose(*args, **kwargs) Return the transpose, which is by definition self. union(other[, sort]) Form the union of two Index objects. unique([level]) Return unique values in the index. value_counts([normalize, sort, ascending, ...]) Return a Series containing counts of unique values. where(cond[, other]) Replace values where the condition is False. view
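A few of the methods tabulated above can be sketched in one self-contained example (only pandas is required); the values are arbitrary:

```python
import pandas as pd

# Membership, uniqueness and lookup on a single Index.
idx = pd.Index(['a', 'b', 'c', 'b'])
assert not idx.is_unique                  # 'b' appears twice
assert idx.get_loc('c') == 2              # integer position of a unique label
assert list(idx.drop_duplicates()) == ['a', 'b', 'c']

# Set operations between two Index objects.
left = pd.Index([1, 2, 3])
right = pd.Index([2, 3, 4])
assert list(left.intersection(right)) == [2, 3]
assert list(left.union(right)) == [1, 2, 3, 4]
assert list(left.difference(right)) == [1]
```

Note that `get_loc` on a non-unique label would instead return a slice or boolean mask, matching its documented return types.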
pandas.Index.all Index.all(*args, **kwargs)[source] Return whether all elements are Truthy. Parameters *args Required for compatibility with numpy. **kwargs Required for compatibility with numpy. Returns all:bool or array-like (if axis is specified) A single element array-like may be converted to bool. See also Index.any Return whether any element in an Index is True. Series.any Return whether any element in a Series is True. Series.all Return whether all elements in a Series are True. Notes Not a Number (NaN), positive infinity and negative infinity evaluate to True because these are not equal to zero. Examples True, because nonzero integers are considered True. >>> pd.Index([1, 2, 3]).all() True False, because 0 is considered False. >>> pd.Index([0, 1, 2]).all() False
pandas.Index.any Index.any(*args, **kwargs)[source] Return whether any element is Truthy. Parameters *args Required for compatibility with numpy. **kwargs Required for compatibility with numpy. Returns any:bool or array-like (if axis is specified) A single element array-like may be converted to bool. See also Index.all Return whether all elements are True. Series.all Return whether all elements are True. Notes Not a Number (NaN), positive infinity and negative infinity evaluate to True because these are not equal to zero. Examples >>> index = pd.Index([0, 1, 2]) >>> index.any() True >>> index = pd.Index([0, 0, 0]) >>> index.any() False