| Code | Summary |
|---|---|
Please provide a description of the function:def _replace_booleans(tok):
toknum, tokval = tok
if toknum == tokenize.OP:
if tokval == '&':
return tokenize.NAME, 'and'
elif tokval == '|':
return tokenize.NAME, 'or'
    return toknum, tokval | [
"Replace ``&`` with ``and`` and ``|`` with ``or`` so that bitwise\n precedence is changed to boolean precedence.\n\n Parameters\n ----------\n tok : tuple of int, str\n ints correspond to the all caps constants in the tokenize module\n\n Returns\n -------\n t : tuple of int, str\n ... |
Please provide a description of the function:def _replace_locals(tok):
toknum, tokval = tok
if toknum == tokenize.OP and tokval == '@':
return tokenize.OP, _LOCAL_TAG
return toknum, tokval | [
"Replace local variables with a syntactically valid name.\n\n Parameters\n ----------\n tok : tuple of int, str\n ints correspond to the all caps constants in the tokenize module\n\n Returns\n -------\n t : tuple of int, str\n Either the input or token or the replacement values\n\n ... |
Please provide a description of the function:def _clean_spaces_backtick_quoted_names(tok):
toknum, tokval = tok
if toknum == _BACKTICK_QUOTED_STRING:
return tokenize.NAME, _remove_spaces_column_name(tokval)
return toknum, tokval | [
"Clean up a column name if surrounded by backticks.\n\n Backtick quoted string are indicated by a certain tokval value. If a string\n is a backtick quoted token it will processed by\n :func:`_remove_spaces_column_name` so that the parser can find this\n string when the query is executed.\n See also :... |
Please provide a description of the function:def _preparse(source, f=_compose(_replace_locals, _replace_booleans,
_rewrite_assign,
_clean_spaces_backtick_quoted_names)):
assert callable(f), 'f must be callable'
return tokenize.untokenize(lma... | [
"Compose a collection of tokenization functions\n\n Parameters\n ----------\n source : str\n A Python source code string\n f : callable\n This takes a tuple of (toknum, tokval) as its argument and returns a\n tuple with the same structure but possibly different elements. Defaults\n ... |
Please provide a description of the function:def _filter_nodes(superclass, all_nodes=_all_nodes):
node_names = (node.__name__ for node in all_nodes
if issubclass(node, superclass))
return frozenset(node_names) | [
"Filter out AST nodes that are subclasses of ``superclass``."
] |
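`_filter_nodes` can be run as-is once `_all_nodes` is supplied; rebuilding it from the `ast` module as below is an assumption (pandas derives its node set internally):

```python
import ast

# Collect every AST node class exposed by the ast module; a stand-in
# for the `_all_nodes` default the snippet references.
_all_nodes = frozenset(
    obj for obj in vars(ast).values()
    if isinstance(obj, type) and issubclass(obj, ast.AST))

def _filter_nodes(superclass, all_nodes=_all_nodes):
    # Keep only the names of node classes deriving from `superclass`.
    node_names = (node.__name__ for node in all_nodes
                  if issubclass(node, superclass))
    return frozenset(node_names)

expr_nodes = _filter_nodes(ast.expr)
```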
Please provide a description of the function:def _node_not_implemented(node_name, cls):
def f(self, *args, **kwargs):
raise NotImplementedError("{name!r} nodes are not "
"implemented".format(name=node_name))
return f | [
"Return a function that raises a NotImplementedError with a passed node\n name.\n "
] |
Please provide a description of the function:def disallow(nodes):
def disallowed(cls):
cls.unsupported_nodes = ()
for node in nodes:
new_method = _node_not_implemented(node, cls)
name = 'visit_{node}'.format(node=node)
cls.unsupported_nodes += (name,)
... | [
"Decorator to disallow certain nodes from parsing. Raises a\n NotImplementedError instead.\n\n Returns\n -------\n disallowed : callable\n "
] |
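Putting `_node_not_implemented` and `disallow` together gives a self-contained sketch of the decorator in action; the `ToyVisitor` class is hypothetical, standing in for pandas' expression visitors:

```python
def _node_not_implemented(node_name, cls=None):
    # Build a visitor method that always raises for `node_name` nodes.
    def f(self, *args, **kwargs):
        raise NotImplementedError("{name!r} nodes are not "
                                  "implemented".format(name=node_name))
    return f

def disallow(nodes):
    # Class decorator: attach a raising visit_<node> method per name
    # and record the method names on `unsupported_nodes`.
    def disallowed(cls):
        cls.unsupported_nodes = ()
        for node in nodes:
            new_method = _node_not_implemented(node, cls)
            name = 'visit_{node}'.format(node=node)
            cls.unsupported_nodes += (name,)
            setattr(cls, name, new_method)
        return cls
    return disallowed

@disallow({'Yield', 'Lambda'})
class ToyVisitor:
    pass
```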
Please provide a description of the function:def _op_maker(op_class, op_symbol):
def f(self, node, *args, **kwargs):
return partial(op_class, op_symbol, *args, **kwargs)
return f | [
"Return a function to create an op class with its symbol already passed.\n\n Returns\n -------\n f : callable\n ",
"Return a partial function with an Op subclass with an operator\n already passed.\n\n Returns\n -------\n f : callable\n "
] |
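A minimal sketch of `_op_maker` with a toy `BinOp` class (an assumption; pandas' real Op classes live in `pandas.core.computation.ops`):

```python
from functools import partial

class BinOp:
    # Toy Op class: records the symbol and the two operands.
    def __init__(self, op_symbol, lhs, rhs):
        self.op_symbol, self.lhs, self.rhs = op_symbol, lhs, rhs

def _op_maker(op_class, op_symbol):
    # The generated visitor method returns a partial with the symbol
    # already bound, so callers only supply the operands.
    def f(self, node, *args, **kwargs):
        return partial(op_class, op_symbol, *args, **kwargs)
    return f

visit_add = _op_maker(BinOp, '+')
make_add = visit_add(None, node=None)  # self/node unused in the sketch
op = make_add('x', 'y')
```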
Please provide a description of the function:def add_ops(op_classes):
def f(cls):
for op_attr_name, op_class in op_classes.items():
ops = getattr(cls, '{name}_ops'.format(name=op_attr_name))
ops_map = getattr(cls, '{name}_op_nodes_map'.format(
name=op_attr_name))... | [
"Decorator to add default implementation of ops."
] |
Please provide a description of the function:def names(self):
if is_term(self.terms):
return frozenset([self.terms.name])
return frozenset(term.name for term in com.flatten(self.terms)) | [
"Get the names in an expression"
] |
Please provide a description of the function:def _is_convertible_to_index(other):
if isinstance(other, TimedeltaIndex):
return True
elif (len(other) > 0 and
other.inferred_type not in ('floating', 'mixed-integer', 'integer',
'mixed-integer-float', 'mi... | [
"\n return a boolean whether I can attempt conversion to a TimedeltaIndex\n "
] |
Please provide a description of the function:def timedelta_range(start=None, end=None, periods=None, freq=None,
name=None, closed=None):
if freq is None and com._any_none(periods, start, end):
freq = 'D'
freq, freq_infer = dtl.maybe_infer_freq(freq)
tdarr = TimedeltaArray._... | [
"\n Return a fixed frequency TimedeltaIndex, with day as the default\n frequency\n\n Parameters\n ----------\n start : string or timedelta-like, default None\n Left bound for generating timedeltas\n end : string or timedelta-like, default None\n Right bound for generating timedeltas\... |
Please provide a description of the function:def union(self, other):
if isinstance(other, tuple):
other = list(other)
return type(self)(super().__add__(other)) | [
"\n Returns a FrozenList with other concatenated to the end of self.\n\n Parameters\n ----------\n other : array-like\n The array-like whose elements we are concatenating.\n\n Returns\n -------\n diff : FrozenList\n The collection difference bet... |
Please provide a description of the function:def difference(self, other):
other = set(other)
temp = [x for x in self if x not in other]
return type(self)(temp) | [
"\n Returns a FrozenList with elements from other removed from self.\n\n Parameters\n ----------\n other : array-like\n The array-like whose elements we are removing self.\n\n Returns\n -------\n diff : FrozenList\n The collection difference bet... |
Please provide a description of the function:def searchsorted(self, value, side="left", sorter=None):
# We are much more performant if the searched
# indexer is the same type as the array.
#
# This doesn't matter for int64, but DOES
# matter for smaller int dtypes.
... | [
"\n Find indices to insert `value` so as to maintain order.\n\n For full documentation, see `numpy.searchsorted`\n\n See Also\n --------\n numpy.searchsorted : Equivalent function.\n "
] |
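Dtype-matching performance aside, `searchsorted` has the semantics of binary insertion search, which the stdlib `bisect` module illustrates (`numpy.searchsorted`'s `side="left"/"right"` corresponds to `bisect_left`/`bisect_right`):

```python
import bisect

data = [10, 20, 20, 30]  # must already be sorted
left = bisect.bisect_left(data, 20)    # first slot where 20 could go
right = bisect.bisect_right(data, 20)  # slot just past the existing 20s
```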
Please provide a description of the function:def arrays_to_mgr(arrays, arr_names, index, columns, dtype=None):
# figure out the index, if necessary
if index is None:
index = extract_index(arrays)
else:
index = ensure_index(index)
# don't force copy because getting jammed in an ndar... | [
"\n Segregate Series based on type and coerce into matrices.\n\n Needs to handle a lot of exceptional cases.\n "
] |
Please provide a description of the function:def masked_rec_array_to_mgr(data, index, columns, dtype, copy):
# essentially process a record array then fill it
fill_value = data.fill_value
fdata = ma.getdata(data)
if index is None:
index = get_names_from_index(fdata)
if index is Non... | [
"\n Extract from a masked rec array and create the manager.\n "
] |
Please provide a description of the function:def init_dict(data, index, columns, dtype=None):
if columns is not None:
from pandas.core.series import Series
arrays = Series(data, index=columns, dtype=object)
data_names = arrays.index
missing = arrays.isnull()
if index is... | [
"\n Segregate Series based on type and coerce into matrices.\n Needs to handle a lot of exceptional cases.\n "
] |
Please provide a description of the function:def to_arrays(data, columns, coerce_float=False, dtype=None):
if isinstance(data, ABCDataFrame):
if columns is not None:
arrays = [data._ixs(i, axis=1).values
for i, col in enumerate(data.columns) if col in columns]
... | [
"\n Return list of arrays, columns.\n "
] |
Please provide a description of the function:def sanitize_index(data, index, copy=False):
if index is None:
return data
if len(data) != len(index):
raise ValueError('Length of values does not match length of index')
if isinstance(data, ABCIndexClass) and not copy:
pass
el... | [
"\n Sanitize an index type to return an ndarray of the underlying, pass\n through a non-Index.\n "
] |
Please provide a description of the function:def sanitize_array(data, index, dtype=None, copy=False,
raise_cast_failure=False):
if dtype is not None:
dtype = pandas_dtype(dtype)
if isinstance(data, ma.MaskedArray):
mask = ma.getmaskarray(data)
if mask.any():
... | [
"\n Sanitize input data to an ndarray, copy if specified, coerce to the\n dtype if specified.\n "
] |
Please provide a description of the function:def _check_engine(engine):
from pandas.core.computation.check import _NUMEXPR_INSTALLED
if engine is None:
if _NUMEXPR_INSTALLED:
engine = 'numexpr'
else:
engine = 'python'
if engine not in _engines:
valid = ... | [
"Make sure a valid engine is passed.\n\n Parameters\n ----------\n engine : str\n\n Raises\n ------\n KeyError\n * If an invalid engine is passed\n ImportError\n * If numexpr was requested but doesn't exist\n\n Returns\n -------\n string engine\n\n "
] |
Please provide a description of the function:def _check_parser(parser):
from pandas.core.computation.expr import _parsers
if parser not in _parsers:
raise KeyError('Invalid parser {parser!r} passed, valid parsers are'
' {valid}'.format(parser=parser, valid=_parsers.keys())) | [
"Make sure a valid parser is passed.\n\n Parameters\n ----------\n parser : str\n\n Raises\n ------\n KeyError\n * If an invalid parser is passed\n "
] |
Please provide a description of the function:def eval(expr, parser='pandas', engine=None, truediv=True,
local_dict=None, global_dict=None, resolvers=(), level=0,
target=None, inplace=False):
from pandas.core.computation.expr import Expr
inplace = validate_bool_kwarg(inplace, "inplace")
... | [
"Evaluate a Python expression as a string using various backends.\n\n The following arithmetic operations are supported: ``+``, ``-``, ``*``,\n ``/``, ``**``, ``%``, ``//`` (python engine only) along with the following\n boolean operations: ``|`` (or), ``&`` (and), and ``~`` (not).\n Additionally, the `... |
Please provide a description of the function:def _codes_to_ints(self, codes):
# Shift the representation of each level by the pre-calculated number
# of bits:
codes <<= self.offsets
# Now sum and OR are in fact interchangeable. This is a simple
# composition of the (dis... | [
"\n Transform combination(s) of uint64 in one uint64 (each), in a strictly\n monotonic way (i.e. respecting the lexicographic order of integer\n combinations): see BaseMultiIndexCodesEngine documentation.\n\n Parameters\n ----------\n codes : 1- or 2-dimensional array of dt... |
Please provide a description of the function:def from_arrays(cls, arrays, sortorder=None, names=None):
error_msg = "Input must be a list / sequence of array-likes."
if not is_list_like(arrays):
raise TypeError(error_msg)
elif is_iterator(arrays):
arrays = list(ar... | [
"\n Convert arrays to MultiIndex.\n\n Parameters\n ----------\n arrays : list / sequence of array-likes\n Each array-like gives one level's value for each data point.\n len(arrays) is the number of levels.\n sortorder : int or None\n Level of sorte... |
Please provide a description of the function:def from_tuples(cls, tuples, sortorder=None, names=None):
if not is_list_like(tuples):
raise TypeError('Input must be a list / sequence of tuple-likes.')
elif is_iterator(tuples):
tuples = list(tuples)
if len(tuples) ... | [
"\n Convert list of tuples to MultiIndex.\n\n Parameters\n ----------\n tuples : list / sequence of tuple-likes\n Each tuple is the index of one row/column.\n sortorder : int or None\n Level of sortedness (must be lexicographically sorted by that\n ... |
Please provide a description of the function:def from_product(cls, iterables, sortorder=None, names=None):
from pandas.core.arrays.categorical import _factorize_from_iterables
from pandas.core.reshape.util import cartesian_product
if not is_list_like(iterables):
raise TypeE... | [
"\n Make a MultiIndex from the cartesian product of multiple iterables.\n\n Parameters\n ----------\n iterables : list / sequence of iterables\n Each iterable has unique labels for each level of the index.\n sortorder : int or None\n Level of sortedness (must... |
Please provide a description of the function:def from_frame(cls, df, sortorder=None, names=None):
if not isinstance(df, ABCDataFrame):
raise TypeError("Input must be a DataFrame")
column_names, columns = lzip(*df.iteritems())
names = column_names if names is None else names... | [
"\n Make a MultiIndex from a DataFrame.\n\n .. versionadded:: 0.24.0\n\n Parameters\n ----------\n df : DataFrame\n DataFrame to be converted to MultiIndex.\n sortorder : int, optional\n Level of sortedness (must be lexicographically sorted by that\n ... |
Please provide a description of the function:def set_levels(self, levels, level=None, inplace=False,
verify_integrity=True):
if is_list_like(levels) and not isinstance(levels, Index):
levels = list(levels)
if level is not None and not is_list_like(level):
... | [
"\n Set new levels on MultiIndex. Defaults to returning\n new index.\n\n Parameters\n ----------\n levels : sequence or list of sequence\n new level(s) to apply\n level : int, level name, or sequence of int/level names (default None)\n level(s) to set ... |
Please provide a description of the function:def set_codes(self, codes, level=None, inplace=False,
verify_integrity=True):
if level is not None and not is_list_like(level):
if not is_list_like(codes):
raise TypeError("Codes must be list-like")
i... | [
"\n Set new codes on MultiIndex. Defaults to returning\n new index.\n\n .. versionadded:: 0.24.0\n\n New name for deprecated method `set_labels`.\n\n Parameters\n ----------\n codes : sequence or list of sequence\n new codes to apply\n level : in... |
Please provide a description of the function:def copy(self, names=None, dtype=None, levels=None, codes=None,
deep=False, _set_identity=False, **kwargs):
name = kwargs.get('name')
names = self._validate_names(name=name, names=names, deep=deep)
if deep:
from copy... | [
"\n Make a copy of this object. Names, dtype, levels and codes can be\n passed and will be set on new copy.\n\n Parameters\n ----------\n names : sequence, optional\n dtype : numpy dtype or pandas type, optional\n levels : sequence, optional\n codes : sequence... |
Please provide a description of the function:def view(self, cls=None):
result = self.copy()
result._id = self._id
return result | [
" this is defined as a copy with the same identity "
] |
Please provide a description of the function:def _is_memory_usage_qualified(self):
def f(l):
return 'mixed' in l or 'string' in l or 'unicode' in l
return any(f(l) for l in self._inferred_type_levels) | [
" return a boolean if we need a qualified .info display "
] |
Please provide a description of the function:def _nbytes(self, deep=False):
# for implementations with no useful getsizeof (PyPy)
objsize = 24
level_nbytes = sum(i.memory_usage(deep=deep) for i in self.levels)
label_nbytes = sum(i.nbytes for i in self.codes)
names_nbyt... | [
"\n return the number of bytes in the underlying data\n deeply introspect the level data if deep=True\n\n include the engine hashtable\n\n *this is in internal routine*\n\n "
] |
Please provide a description of the function:def _format_attrs(self):
attrs = [
('levels', ibase.default_pprint(self._levels,
max_seq_items=False)),
('codes', ibase.default_pprint(self._codes,
... | [
"\n Return a list of tuples of the (attr,formatted_value)\n "
] |
Please provide a description of the function:def _set_names(self, names, level=None, validate=True):
# GH 15110
# Don't allow a single string for names in a MultiIndex
if names is not None and not is_list_like(names):
raise ValueError('Names should be list-like for a MultiIn... | [
"\n Set new names on index. Each name has to be a hashable type.\n\n Parameters\n ----------\n values : str or sequence\n name(s) to set\n level : int, level name, or sequence of int/level names (default None)\n If the index is a MultiIndex (hierarchical), le... |
Please provide a description of the function:def is_monotonic_increasing(self):
# reversed() because lexsort() wants the most significant key last.
values = [self._get_level_values(i).values
for i in reversed(range(len(self.levels)))]
try:
sort_order = np.... | [
"\n return if the index is monotonic increasing (only equal or\n increasing) values.\n "
] |
Please provide a description of the function:def _hashed_indexing_key(self, key):
from pandas.core.util.hashing import hash_tuples, hash_tuple
if not isinstance(key, tuple):
return hash_tuples(key)
if not len(key) == self.nlevels:
raise KeyError
def f(... | [
"\n validate and return the hash for the provided key\n\n *this is internal for use for the cython routines*\n\n Parameters\n ----------\n key : string or tuple\n\n Returns\n -------\n np.uint64\n\n Notes\n -----\n we need to stringify if ... |
Please provide a description of the function:def _get_level_values(self, level, unique=False):
values = self.levels[level]
level_codes = self.codes[level]
if unique:
level_codes = algos.unique(level_codes)
filled = algos.take_1d(values._values, level_codes,
... | [
"\n Return vector of label values for requested level,\n equal to the length of the index\n\n **this is an internal method**\n\n Parameters\n ----------\n level : int level\n unique : bool, default False\n if True, drop duplicated values\n\n Returns... |
Please provide a description of the function:def get_level_values(self, level):
level = self._get_level_number(level)
values = self._get_level_values(level)
return values | [
"\n Return vector of label values for requested level,\n equal to the length of the index.\n\n Parameters\n ----------\n level : int or str\n ``level`` is either the integer position of the level in the\n MultiIndex, or the name of the level.\n\n Retur... |
Please provide a description of the function:def to_frame(self, index=True, name=None):
from pandas import DataFrame
if name is not None:
if not is_list_like(name):
raise TypeError("'name' must be a list / sequence "
"of column names.... | [
"\n Create a DataFrame with the levels of the MultiIndex as columns.\n\n Column ordering is determined by the DataFrame constructor with data as\n a dict.\n\n .. versionadded:: 0.24.0\n\n Parameters\n ----------\n index : boolean, default True\n Set the in... |
Please provide a description of the function:def to_hierarchical(self, n_repeat, n_shuffle=1):
levels = self.levels
codes = [np.repeat(level_codes, n_repeat) for
level_codes in self.codes]
# Assumes that each level_codes is divisible by n_shuffle
codes = [x.resh... | [
"\n Return a MultiIndex reshaped to conform to the\n shapes given by n_repeat and n_shuffle.\n\n .. deprecated:: 0.24.0\n\n Useful to replicate and rearrange a MultiIndex for combination\n with another Index with n_repeat items.\n\n Parameters\n ----------\n n... |
Please provide a description of the function:def _sort_levels_monotonic(self):
if self.is_lexsorted() and self.is_monotonic:
return self
new_levels = []
new_codes = []
for lev, level_codes in zip(self.levels, self.codes):
if not lev.is_monotonic:
... | [
"\n .. versionadded:: 0.20.0\n\n This is an *internal* function.\n\n Create a new MultiIndex from the current to monotonically sorted\n items IN the levels. This does not actually make the entire MultiIndex\n monotonic, JUST the levels.\n\n The resulting MultiIndex will hav... |
Please provide a description of the function:def remove_unused_levels(self):
new_levels = []
new_codes = []
changed = False
for lev, level_codes in zip(self.levels, self.codes):
# Since few levels are typically unused, bincount() is more
# efficient th... | [
"\n Create a new MultiIndex from the current that removes\n unused levels, meaning that they are not expressed in the labels.\n\n The resulting MultiIndex will have the same outward\n appearance, meaning the same .values and ordering. It will also\n be .equals() to the original.\n... |
Please provide a description of the function:def _assert_take_fillable(self, values, indices, allow_fill=True,
fill_value=None, na_value=None):
# only fill if we are passing a non-None fill_value
if allow_fill and fill_value is not None:
if (indices < -... | [
" Internal method to handle NA filling of take "
] |
Please provide a description of the function:def append(self, other):
if not isinstance(other, (list, tuple)):
other = [other]
if all((isinstance(o, MultiIndex) and o.nlevels >= self.nlevels)
for o in other):
arrays = []
for i in range(self.nl... | [
"\n Append a collection of Index options together\n\n Parameters\n ----------\n other : Index or list/tuple of indices\n\n Returns\n -------\n appended : Index\n "
] |
Please provide a description of the function:def drop(self, codes, level=None, errors='raise'):
if level is not None:
return self._drop_from_level(codes, level)
try:
if not isinstance(codes, (np.ndarray, Index)):
codes = com.index_labels_to_array(codes)
... | [
"\n Make new MultiIndex with passed list of codes deleted\n\n Parameters\n ----------\n codes : array-like\n Must be a list of tuples\n level : int or level name, default None\n\n Returns\n -------\n dropped : MultiIndex\n "
] |
Please provide a description of the function:def swaplevel(self, i=-2, j=-1):
new_levels = list(self.levels)
new_codes = list(self.codes)
new_names = list(self.names)
i = self._get_level_number(i)
j = self._get_level_number(j)
new_levels[i], new_levels[j] = new... | [
"\n Swap level i with level j.\n\n Calling this method does not change the ordering of the values.\n\n Parameters\n ----------\n i : int, str, default -2\n First level of index to be swapped. Can pass level name as string.\n Type of parameters can be mixed.\n... |
Please provide a description of the function:def reorder_levels(self, order):
order = [self._get_level_number(i) for i in order]
if len(order) != self.nlevels:
raise AssertionError('Length of order must be same as '
'number of levels (%d), got %d' %
... | [
"\n Rearrange levels using input order. May not drop or duplicate levels\n\n Parameters\n ----------\n "
] |
Please provide a description of the function:def _get_codes_for_sorting(self):
from pandas.core.arrays import Categorical
def cats(level_codes):
return np.arange(np.array(level_codes).max() + 1 if
len(level_codes) else 0,
dt... | [
"\n we categorizing our codes by using the\n available categories (all, not just observed)\n excluding any missing ones (-1); this is in preparation\n for sorting, where we need to disambiguate that -1 is not\n a valid valid\n "
] |
Please provide a description of the function:def sortlevel(self, level=0, ascending=True, sort_remaining=True):
from pandas.core.sorting import indexer_from_factorized
if isinstance(level, (str, int)):
level = [level]
level = [self._get_level_number(lev) for lev in level]
... | [
"\n Sort MultiIndex at the requested level. The result will respect the\n original ordering of the associated factor at that level.\n\n Parameters\n ----------\n level : list-like, int or str, default 0\n If a string is given, must be a name of the level\n If... |
Please provide a description of the function:def _convert_listlike_indexer(self, keyarr, kind=None):
indexer, keyarr = super()._convert_listlike_indexer(keyarr, kind=kind)
# are we indexing a specific level
if indexer is None and len(keyarr) and not isinstance(keyarr[0],
... | [
"\n Parameters\n ----------\n keyarr : list-like\n Indexer to convert.\n\n Returns\n -------\n tuple (indexer, keyarr)\n indexer is an ndarray or None if cannot convert\n keyarr are tuple-safe keys\n "
] |
Please provide a description of the function:def reindex(self, target, method=None, level=None, limit=None,
tolerance=None):
# GH6552: preserve names when reindexing to non-named target
# (i.e. neither Index nor Series).
preserve_names = not hasattr(target, 'names')
... | [
"\n Create index with target's values (move/add/delete values as necessary)\n\n Returns\n -------\n new_index : pd.MultiIndex\n Resulting index\n indexer : np.ndarray or None\n Indices of output values in original index.\n\n "
] |
Please provide a description of the function:def slice_locs(self, start=None, end=None, step=None, kind=None):
# This function adds nothing to its parent implementation (the magic
# happens in get_slice_bound method), but it adds meaningful doc.
return super().slice_locs(start, end, ste... | [
"\n For an ordered MultiIndex, compute the slice locations for input\n labels.\n\n The input labels can be tuples representing partial levels, e.g. for a\n MultiIndex with 3 levels, you can pass a single value (corresponding to\n the first level), or a 1-, 2-, or 3-tuple.\n\n ... |
Please provide a description of the function:def get_loc(self, key, method=None):
if method is not None:
raise NotImplementedError('only the default get_loc method is '
'currently supported for MultiIndex')
def _maybe_to_slice(loc):
... | [
"\n Get location for a label or a tuple of labels as an integer, slice or\n boolean mask.\n\n Parameters\n ----------\n key : label or tuple of labels (one for each level)\n method : None\n\n Returns\n -------\n loc : int, slice object or boolean mask\n... |
Please provide a description of the function:def get_loc_level(self, key, level=0, drop_level=True):
def maybe_droplevels(indexer, levels, drop_level):
if not drop_level:
return self[indexer]
# kludgearound
orig_index = new_index = self[indexer]
... | [
"\n Get both the location for the requested label(s) and the\n resulting sliced index.\n\n Parameters\n ----------\n key : label or sequence of labels\n level : int/level name or list thereof, optional\n drop_level : bool, default True\n if ``False``, the ... |
Please provide a description of the function:def get_locs(self, seq):
from .numeric import Int64Index
# must be lexsorted to at least as many levels
true_slices = [i for (i, s) in enumerate(com.is_true_slices(seq)) if s]
if true_slices and true_slices[-1] >= self.lexsort_depth:... | [
"\n Get location for a given label/slice/list/mask or a sequence of such as\n an array of integers.\n\n Parameters\n ----------\n seq : label/slice/list/mask or a sequence of such\n You should use one of the above for each level.\n If a level should not be used... |
Please provide a description of the function:def truncate(self, before=None, after=None):
if after and before and after < before:
raise ValueError('after < before')
i, j = self.levels[0].slice_locs(before, after)
left, right = self.slice_locs(before, after)
new_lev... | [
"\n Slice index between two labels / tuples, return new MultiIndex\n\n Parameters\n ----------\n before : label or tuple, can be partial. Default None\n None defaults to start\n after : label or tuple, can be partial. Default None\n None defaults to end\n\n ... |
Please provide a description of the function:def equals(self, other):
if self.is_(other):
return True
if not isinstance(other, Index):
return False
if not isinstance(other, MultiIndex):
other_vals = com.values_from_object(ensure_index(other))
... | [
"\n Determines if two MultiIndex objects have the same labeling information\n (the levels themselves do not necessarily have to be the same)\n\n See Also\n --------\n equal_levels\n "
] |
Please provide a description of the function:def equal_levels(self, other):
if self.nlevels != other.nlevels:
return False
for i in range(self.nlevels):
if not self.levels[i].equals(other.levels[i]):
return False
return True | [
"\n Return True if the levels of both MultiIndex objects are the same\n\n "
] |
Please provide a description of the function:def union(self, other, sort=None):
self._validate_sort_keyword(sort)
self._assert_can_do_setop(other)
other, result_names = self._convert_can_do_setop(other)
if len(other) == 0 or self.equals(other):
return self
... | [
"\n Form the union of two MultiIndex objects\n\n Parameters\n ----------\n other : MultiIndex or array / Index of tuples\n sort : False or None, default None\n Whether to sort the resulting Index.\n\n * None : Sort the result, except when\n\n 1. ... |
Please provide a description of the function:def intersection(self, other, sort=False):
self._validate_sort_keyword(sort)
self._assert_can_do_setop(other)
other, result_names = self._convert_can_do_setop(other)
if self.equals(other):
return self
self_tuples... | [
"\n Form the intersection of two MultiIndex objects.\n\n Parameters\n ----------\n other : MultiIndex or array / Index of tuples\n sort : False or None, default False\n Sort the resulting MultiIndex if possible\n\n .. versionadded:: 0.24.0\n\n .. v... |
Please provide a description of the function:def difference(self, other, sort=None):
self._validate_sort_keyword(sort)
self._assert_can_do_setop(other)
other, result_names = self._convert_can_do_setop(other)
if len(other) == 0:
return self
if self.equals(ot... | [
"\n Compute set difference of two MultiIndex objects\n\n Parameters\n ----------\n other : MultiIndex\n sort : False or None, default None\n Sort the resulting MultiIndex if possible\n\n .. versionadded:: 0.24.0\n\n .. versionchanged:: 0.24.1\n\n ... |
Please provide a description of the function:def insert(self, loc, item):
# Pad the key with empty strings if lower levels of the key
# aren't specified:
if not isinstance(item, tuple):
item = (item, ) + ('', ) * (self.nlevels - 1)
elif len(item) != self.nlevels:
... | [
"\n Make new MultiIndex inserting new item at location\n\n Parameters\n ----------\n loc : int\n item : tuple\n Must be same length as number of levels in the MultiIndex\n\n Returns\n -------\n new_index : Index\n "
] |
Please provide a description of the function:def delete(self, loc):
new_codes = [np.delete(level_codes, loc) for level_codes in self.codes]
return MultiIndex(levels=self.levels, codes=new_codes,
names=self.names, verify_integrity=False) | [
"\n Make new index with passed location deleted\n\n Returns\n -------\n new_index : MultiIndex\n "
] |
Please provide a description of the function:def _ensure_data(values, dtype=None):
# we check some simple dtypes first
try:
if is_object_dtype(dtype):
return ensure_object(np.asarray(values)), 'object', 'object'
if is_bool_dtype(values) or is_bool_dtype(dtype):
# we... | [
"\n routine to ensure that our data is of the correct\n input dtype for lower-level routines\n\n This will coerce:\n - ints -> int64\n - uint -> uint64\n - bool -> uint64 (TODO this should be uint8)\n - datetimelike -> i8\n - datetime64tz -> i8 (in local tz)\n - categorical -> codes\n\n ... |
Please provide a description of the function:def _reconstruct_data(values, dtype, original):
from pandas import Index
if is_extension_array_dtype(dtype):
values = dtype.construct_array_type()._from_sequence(values)
elif is_datetime64tz_dtype(dtype) or is_period_dtype(dtype):
values = In... | [
"\n reverse of _ensure_data\n\n Parameters\n ----------\n values : ndarray\n dtype : pandas_dtype\n original : ndarray-like\n\n Returns\n -------\n Index for extension types, otherwise ndarray casted to dtype\n "
] |
Please provide a description of the function:def _ensure_arraylike(values):
if not is_array_like(values):
inferred = lib.infer_dtype(values, skipna=False)
if inferred in ['mixed', 'string', 'unicode']:
if isinstance(values, tuple):
values = list(values)
v... | [
"\n ensure that we are arraylike if not already\n "
] |
Please provide a description of the function:def _get_hashtable_algo(values):
values, dtype, ndtype = _ensure_data(values)
if ndtype == 'object':
# it's cheaper to use a String Hash Table than Object; we infer
# including nulls because that is the only difference between
# StringH... | [
"\n Parameters\n ----------\n values : arraylike\n\n Returns\n -------\n tuples(hashtable class,\n vector class,\n values,\n dtype,\n ndtype)\n "
] |
Please provide a description of the function:def match(to_match, values, na_sentinel=-1):
values = com.asarray_tuplesafe(values)
htable, _, values, dtype, ndtype = _get_hashtable_algo(values)
to_match, _, _ = _ensure_data(to_match, dtype)
table = htable(min(len(to_match), 1000000))
table.map_lo... | [
"\n Compute locations of to_match into values\n\n Parameters\n ----------\n to_match : array-like\n values to find positions of\n values : array-like\n Unique set of values\n na_sentinel : int, default -1\n Value to mark \"not found\"\n\n Examples\n --------\n\n Retur... |
Please provide a description of the function:def unique(values):
values = _ensure_arraylike(values)
if is_extension_array_dtype(values):
# Dispatch to extension dtype's unique.
return values.unique()
original = values
htable, _, values, dtype, ndtype = _get_hashtable_algo(values)... | [
"\n Hash table-based unique. Uniques are returned in order\n of appearance. This does NOT sort.\n\n Significantly faster than numpy.unique. Includes NA values.\n\n Parameters\n ----------\n values : 1d array-like\n\n Returns\n -------\n numpy.ndarray or ExtensionArray\n\n The retur... |
Please provide a description of the function:def isin(comps, values):
if not is_list_like(comps):
raise TypeError("only list-like objects are allowed to be passed"
" to isin(), you passed a [{comps_type}]"
.format(comps_type=type(comps).__name__))
if... | [
"\n Compute the isin boolean array\n\n Parameters\n ----------\n comps : array-like\n values : array-like\n\n Returns\n -------\n boolean array same length as comps\n "
] |
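The boolean-array contract above is what `Series.isin` exposes publicly: the result has the same length as `comps`, with `True` where the element occurs in `values`.

```python
import pandas as pd

mask = pd.Series([1, 2, 3, 4]).isin([1, 3])
# mask -> [True, False, True, False]
```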
Please provide a description of the function:def _factorize_array(values, na_sentinel=-1, size_hint=None,
na_value=None):
(hash_klass, _), values = _get_data_algo(values, _hashtables)
table = hash_klass(size_hint or len(values))
uniques, labels = table.factorize(values, na_sentine... | [
"Factorize an array-like to labels and uniques.\n\n This doesn't do any coercion of types or unboxing before factorization.\n\n Parameters\n ----------\n values : ndarray\n na_sentinel : int, default -1\n size_hint : int, optional\n Passsed through to the hashtable's 'get_labels' method\n ... |
Please provide a description of the function:def value_counts(values, sort=True, ascending=False, normalize=False,
bins=None, dropna=True):
from pandas.core.series import Series, Index
name = getattr(values, 'name', None)
if bins is not None:
try:
from pandas.core.... | [
"\n Compute a histogram of the counts of non-null values.\n\n Parameters\n ----------\n values : ndarray (1-d)\n sort : boolean, default True\n Sort by values\n ascending : boolean, default False\n Sort in ascending order\n normalize: boolean, default False\n If True then c... |
Please provide a description of the function:def _value_counts_arraylike(values, dropna):
values = _ensure_arraylike(values)
original = values
values, dtype, ndtype = _ensure_data(values)
if needs_i8_conversion(dtype):
# i8
keys, counts = htable.value_count_int64(values, dropna)
... | [
"\n Parameters\n ----------\n values : arraylike\n dropna : boolean\n\n Returns\n -------\n (uniques, counts)\n\n "
] |
Please provide a description of the function:def duplicated(values, keep='first'):
values, dtype, ndtype = _ensure_data(values)
f = getattr(htable, "duplicated_{dtype}".format(dtype=ndtype))
return f(values, keep=keep) | [
"\n Return boolean ndarray denoting duplicate values.\n\n .. versionadded:: 0.19.0\n\n Parameters\n ----------\n values : ndarray-like\n Array over which to check for duplicate values.\n keep : {'first', 'last', False}, default 'first'\n - ``first`` : Mark duplicates as ``True`` exce... |
Please provide a description of the function:def mode(values, dropna=True):
from pandas import Series
values = _ensure_arraylike(values)
original = values
# categorical is a fast-path
if is_categorical_dtype(values):
if isinstance(values, Series):
return Series(values.valu... | [
"\n Returns the mode(s) of an array.\n\n Parameters\n ----------\n values : array-like\n Array over which to check for duplicate values.\n dropna : boolean, default True\n Don't consider counts of NaN/NaT.\n\n .. versionadded:: 0.24.0\n\n Returns\n -------\n mode : Serie... |
Please provide a description of the function:def rank(values, axis=0, method='average', na_option='keep',
ascending=True, pct=False):
if values.ndim == 1:
f, values = _get_data_algo(values, _rank1d_functions)
ranks = f(values, ties_method=method, ascending=ascending,
... | [
"\n Rank the values along a given axis.\n\n Parameters\n ----------\n values : array-like\n Array whose values will be ranked. The number of dimensions in this\n array must not exceed 2.\n axis : int, default 0\n Axis over which to perform rankings.\n method : {'average', 'min... |
Please provide a description of the function:def checked_add_with_arr(arr, b, arr_mask=None, b_mask=None):
# For performance reasons, we broadcast 'b' to the new array 'b2'
# so that it has the same size as 'arr'.
b2 = np.broadcast_to(b, arr.shape)
if b_mask is not None:
# We do the same br... | [
"\n Perform array addition that checks for underflow and overflow.\n\n Performs the addition of an int64 array and an int64 integer (or array)\n but checks that they do not result in overflow first. For elements that\n are indicated to be NaN, whether or not there is overflow for that element\n is au... |
Please provide a description of the function:def quantile(x, q, interpolation_method='fraction'):
x = np.asarray(x)
mask = isna(x)
x = x[~mask]
values = np.sort(x)
def _interpolate(a, b, fraction):
return a + (b - a) * fraction
def _get_score(at):
if len(values)... | [
"\n Compute sample quantile or quantiles of the input array. For example, q=0.5\n computes the median.\n\n The `interpolation_method` parameter supports three values, namely\n `fraction` (default), `lower` and `higher`. Interpolation is done only,\n if the desired quantile lies between two data point... |
Please provide a description of the function:def take(arr, indices, axis=0, allow_fill=False, fill_value=None):
from pandas.core.indexing import validate_indices
if not is_array_like(arr):
arr = np.asarray(arr)
indices = np.asarray(indices, dtype=np.intp)
if allow_fill:
# Pandas ... | [
"\n Take elements from an array.\n\n .. versionadded:: 0.23.0\n\n Parameters\n ----------\n arr : sequence\n Non array-likes (sequences without a dtype) are coerced\n to an ndarray.\n indices : sequence of integers\n Indices to be taken.\n axis : int, default 0\n The... |
Please provide a description of the function:def take_nd(arr, indexer, axis=0, out=None, fill_value=np.nan, mask_info=None,
allow_fill=True):
# TODO(EA): Remove these if / elifs as datetimeTZ, interval, become EAs
# dispatch to internal type takes
if is_extension_array_dtype(arr):
... | [
"\n Specialized Cython take which sets NaN values in one pass\n\n This dispatches to ``take`` defined on ExtensionArrays. It does not\n currently dispatch to ``SparseArray.take`` for sparse ``arr``.\n\n Parameters\n ----------\n arr : array-like\n Input array.\n indexer : ndarray\n ... |
Please provide a description of the function:def take_2d_multi(arr, indexer, out=None, fill_value=np.nan, mask_info=None,
allow_fill=True):
if indexer is None or (indexer[0] is None and indexer[1] is None):
row_idx = np.arange(arr.shape[0], dtype=np.int64)
col_idx = np.arange(... | [
"\n Specialized Cython take which sets NaN values in one pass\n "
] |
Please provide a description of the function:def searchsorted(arr, value, side="left", sorter=None):
if sorter is not None:
sorter = ensure_platform_int(sorter)
if isinstance(arr, np.ndarray) and is_integer_dtype(arr) and (
is_integer(value) or is_integer_dtype(value)):
from .a... | [
"\n Find indices where elements should be inserted to maintain order.\n\n .. versionadded:: 0.25.0\n\n Find the indices into a sorted array `arr` (a) such that, if the\n corresponding elements in `value` were inserted before the indices,\n the order of `arr` would be preserved.\n\n Assuming that `... |
Please provide a description of the function:def diff(arr, n, axis=0):
n = int(n)
na = np.nan
dtype = arr.dtype
is_timedelta = False
if needs_i8_conversion(arr):
dtype = np.float64
arr = arr.view('i8')
na = iNaT
is_timedelta = True
elif is_bool_dtype(dtype... | [
"\n difference of n between self,\n analogous to s-s.shift(n)\n\n Parameters\n ----------\n arr : ndarray\n n : int\n number of periods\n axis : int\n axis to shift on\n\n Returns\n -------\n shifted\n\n "
] |
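The "analogous to s-s.shift(n)" description can be checked directly with the public `Series.diff`:

```python
import pandas as pd

s = pd.Series([1, 4, 9, 16])
d = s.diff(2)                 # same as s - s.shift(2)
# d -> [NaN, NaN, 8.0, 12.0]
```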
Please provide a description of the function:def _to_ijv(ss, row_levels=(0, ), column_levels=(1, ), sort_labels=False):
# index and column levels must be a partition of the index
_check_is_partition([row_levels, column_levels], range(ss.index.nlevels))
# from the SparseSeries: get the labels and data ... | [
" For arbitrary (MultiIndexed) SparseSeries return\n (v, i, j, ilabels, jlabels) where (v, (i, j)) is suitable for\n passing to scipy.sparse.coo constructor. ",
" Return sparse coords and dense labels for subset levels ",
" Return OrderedDict of unique labels to number.\n Optionally sort by lab... |
Please provide a description of the function:def _sparse_series_to_coo(ss, row_levels=(0, ), column_levels=(1, ),
sort_labels=False):
import scipy.sparse
if ss.index.nlevels < 2:
        raise ValueError('to_coo requires MultiIndex with nlevels >= 2')
if not ss.index.is_uniqu... | [
"\n Convert a SparseSeries to a scipy.sparse.coo_matrix using index\n levels row_levels, column_levels as the row and column\n labels respectively. Returns the sparse_matrix, row and column labels.\n "
] |
Please provide a description of the function:def _coo_to_sparse_series(A, dense_index=False):
s = Series(A.data, MultiIndex.from_arrays((A.row, A.col)))
s = s.sort_index()
s = s.to_sparse() # TODO: specify kind?
if dense_index:
# is there a better constructor method to use here?
i ... | [
"\n Convert a scipy.sparse.coo_matrix to a SparseSeries.\n Use the defaults given in the SparseSeries constructor.\n "
] |
Please provide a description of the function:def _to_M8(key, tz=None):
if not isinstance(key, Timestamp):
# this also converts strings
key = Timestamp(key)
if key.tzinfo is not None and tz is not None:
# Don't tz_localize(None) if key is already tz-aware
key = ke... | [
"\n Timestamp-like => dt64\n "
] |
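The Timestamp-to-dt64 conversion is available publicly as `Timestamp.to_datetime64` (strings are first coerced through the `Timestamp` constructor, as in the code above):

```python
import numpy as np
import pandas as pd

ts = pd.Timestamp('2020-01-02')   # the constructor also accepts strings
m8 = ts.to_datetime64()           # numpy datetime64[ns] scalar
```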
Please provide a description of the function:def _dt_array_cmp(cls, op):
opname = '__{name}__'.format(name=op.__name__)
nat_result = opname == '__ne__'
def wrapper(self, other):
if isinstance(other, (ABCDataFrame, ABCSeries, ABCIndexClass)):
return NotImplemented
other = l... | [
"\n Wrap comparison operations to convert datetime-like to datetime64\n "
] |
Please provide a description of the function:def sequence_to_dt64ns(data, dtype=None, copy=False,
tz=None,
dayfirst=False, yearfirst=False, ambiguous='raise',
int_as_wall_time=False):
inferred_freq = None
dtype = _validate_dt64_dtype(dt... | [
"\n Parameters\n ----------\n data : list-like\n dtype : dtype, str, or None, default None\n copy : bool, default False\n tz : tzinfo, str, or None, default None\n dayfirst : bool, default False\n yearfirst : bool, default False\n ambiguous : str, bool, or arraylike, default 'raise'\n ... |
Please provide a description of the function:def objects_to_datetime64ns(data, dayfirst, yearfirst,
utc=False, errors="raise",
require_iso8601=False, allow_object=False):
assert errors in ["raise", "ignore", "coerce"]
# if str-dtype, convert
data... | [
"\n Convert data to array of timestamps.\n\n Parameters\n ----------\n data : np.ndarray[object]\n dayfirst : bool\n yearfirst : bool\n utc : bool, default False\n Whether to convert timezone-aware timestamps to UTC\n errors : {'raise', 'ignore', 'coerce'}\n allow_object : bool\n ... |
Please provide a description of the function:def maybe_convert_dtype(data, copy):
if is_float_dtype(data):
# Note: we must cast to datetime64[ns] here in order to treat these
# as wall-times instead of UTC timestamps.
data = data.astype(_NS_DTYPE)
copy = False
# TODO: d... | [
"\n Convert data based on dtype conventions, issuing deprecation warnings\n or errors where appropriate.\n\n Parameters\n ----------\n data : np.ndarray or pd.Index\n copy : bool\n\n Returns\n -------\n data : np.ndarray or pd.Index\n copy : bool\n\n Raises\n ------\n TypeErro... |
Please provide a description of the function:def maybe_infer_tz(tz, inferred_tz):
if tz is None:
tz = inferred_tz
elif inferred_tz is None:
pass
elif not timezones.tz_compare(tz, inferred_tz):
raise TypeError('data is already tz-aware {inferred_tz}, unable to '
... | [
"\n If a timezone is inferred from data, check that it is compatible with\n the user-provided timezone, if any.\n\n Parameters\n ----------\n tz : tzinfo or None\n inferred_tz : tzinfo or None\n\n Returns\n -------\n tz : tzinfo or None\n\n Raises\n ------\n TypeError : if both t... |
Please provide a description of the function:def _validate_dt64_dtype(dtype):
if dtype is not None:
dtype = pandas_dtype(dtype)
if is_dtype_equal(dtype, np.dtype("M8")):
# no precision, warn
dtype = _NS_DTYPE
msg = textwrap.dedent()
warnings.warn(... | [
"\n Check that a dtype, if passed, represents either a numpy datetime64[ns]\n dtype or a pandas DatetimeTZDtype.\n\n Parameters\n ----------\n dtype : object\n\n Returns\n -------\n dtype : None, numpy.dtype, or DatetimeTZDtype\n\n Raises\n ------\n ValueError : invalid dtype\n\n ... |
Please provide a description of the function:def validate_tz_from_dtype(dtype, tz):
if dtype is not None:
if isinstance(dtype, str):
try:
dtype = DatetimeTZDtype.construct_from_string(dtype)
except TypeError:
# Things like `datetime64[ns]`, which ... | [
"\n If the given dtype is a DatetimeTZDtype, extract the implied\n tzinfo object from it and check that it does not conflict with the given\n tz.\n\n Parameters\n ----------\n dtype : dtype, str\n tz : None, tzinfo\n\n Returns\n -------\n tz : consensus tzinfo\n\n Raises\n ------... |
Please provide a description of the function:def _infer_tz_from_endpoints(start, end, tz):
try:
inferred_tz = timezones.infer_tzinfo(start, end)
except Exception:
raise TypeError('Start and end cannot both be tz-aware with '
'different timezones')
inferred_tz = ... | [
"\n If a timezone is not explicitly given via `tz`, see if one can\n be inferred from the `start` and `end` endpoints. If more than one\n of these inputs provides a timezone, require that they all agree.\n\n Parameters\n ----------\n start : Timestamp\n end : Timestamp\n tz : tzinfo or None... |
Please provide a description of the function:def _maybe_localize_point(ts, is_none, is_not_none, freq, tz):
# Make sure start and end are timezone localized if:
# 1) freq = a Timedelta-like frequency (Tick)
# 2) freq = None i.e. generating a linspaced range
if isinstance(freq, Tick) or freq is None... | [
"\n Localize a start or end Timestamp to the timezone of the corresponding\n start or end Timestamp\n\n Parameters\n ----------\n ts : start or end Timestamp to potentially localize\n is_none : argument that should be None\n is_not_none : argument that should not be None\n freq : Tick, DateO... |
Please provide a description of the function:def _sub_datetime_arraylike(self, other):
if len(self) != len(other):
raise ValueError("cannot add indices of unequal length")
if isinstance(other, np.ndarray):
assert is_datetime64_dtype(other)
other = type(self)... | [
"subtract DatetimeArray/Index or ndarray[datetime64]"
] |