BUG: allow itertuples to work with frames with duplicate column names
diff --git a/RELEASE.rst b/RELEASE.rst
index 8256b13b4e553..0fcd9bd3731fe 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -223,6 +223,8 @@ pandas 0.11.1
   - ``read_html`` now correctly skips tests (GH3741_)
   - Fix incorrect arguments passed to concat that are not list-like (e.g. concat(df1,df2)) (GH3481_)
   - Correctly parse when passed the ``dtype=str`` (or other variable-len string dtypes) in ``read_csv`` (GH3795_)
+  - ``DataFrame.itertuples()`` now works with frames with duplicate column
+    names (GH3873_)
 
 .. _GH3164: https://github.com/pydata/pandas/issues/3164
 .. _GH2786: https://github.com/pydata/pandas/issues/2786
@@ -314,6 +316,7 @@ pandas 0.11.1
 .. _GH3795: https://github.com/pydata/pandas/issues/3795
 .. _GH3814: https://github.com/pydata/pandas/issues/3814
 .. _GH3834: https://github.com/pydata/pandas/issues/3834
+.. _GH3873: https://github.com/pydata/pandas/issues/3873
 
 pandas 0.11.0
diff --git a/doc/source/v0.11.1.txt b/doc/source/v0.11.1.txt
index 34ba9f0859641..564939c596ced 100644
--- a/doc/source/v0.11.1.txt
+++ b/doc/source/v0.11.1.txt
@@ -349,6 +349,8 @@ Bug Fixes
   - ``DataFrame.from_records`` did not accept empty recarrays (GH3682_)
   - ``read_html`` now correctly skips tests (GH3741_)
+  - ``DataFrame.itertuples()`` now works with frames with duplicate column
+    names (GH3873_)
 
 See the `full release notes
 <https://github.com/pydata/pandas/blob/master/RELEASE.rst>`__ or issue tracker
@@ -399,3 +401,4 @@ on GitHub for a complete list.
 .. _GH3726: https://github.com/pydata/pandas/issues/3726
 .. _GH3425: https://github.com/pydata/pandas/issues/3425
 .. _GH3834: https://github.com/pydata/pandas/issues/3834
+.. _GH3873: https://github.com/pydata/pandas/issues/3873
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 9c0a2843370f4..b6e29204fc0d8 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -818,7 +818,9 @@ def itertuples(self, index=True):
         arrays = []
         if index:
             arrays.append(self.index)
-        arrays.extend(self[k] for k in self.columns)
+
+        # use integer indexing because of possible duplicate column names
+        arrays.extend(self.iloc[:, k] for k in xrange(len(self.columns)))
         return izip(*arrays)
 
     iterkv = iteritems
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 2c6d3b221c6ff..9b2f078e3b95a 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -3951,6 +3951,10 @@ def test_itertuples(self):
         for tup in df.itertuples(index=False):
             self.assert_(isinstance(tup[1], np.integer))
 
+        df = DataFrame(data={"a": [1, 2, 3], "b": [4, 5, 6]})
+        dfaa = df[['a', 'a']]
+        self.assertEqual(list(dfaa.itertuples()), [(0, 1, 1), (1, 2, 2), (2, 3, 3)])
+
     def test_len(self):
         self.assertEqual(len(self.frame), len(self.frame.index))
Closes #3873.
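The behavior the patch's test checks can be reproduced against current pandas (a minimal sketch; modern `itertuples` returns namedtuples, so rows are converted to plain tuples for comparison — the Python 2 `xrange`/`izip` in the diff are long gone):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})
dfaa = df[["a", "a"]]  # selecting 'a' twice yields duplicate column names

# Label-based lookup is ambiguous for duplicate column names; positional
# (iloc-style) access, as in the patch, is not. Each tuple carries the
# index value first, then one entry per column.
rows = [tuple(t) for t in dfaa.itertuples()]
print(rows)  # [(0, 1, 1), (1, 2, 2), (2, 3, 3)]
```

This mirrors the assertion added to `test_itertuples` in the diff above.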
https://api.github.com/repos/pandas-dev/pandas/pulls/3879
2013-06-13T05:48:29Z
2013-06-13T12:22:47Z
2013-06-13T12:22:47Z
2014-06-26T11:26:17Z
ENH: do not convert mixed-integer type indexes to datetimeindex
diff --git a/RELEASE.rst b/RELEASE.rst
index 0fcd9bd3731fe..161047c478d88 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -80,6 +80,8 @@ pandas 0.11.1
   - Added Faq section on repr display options, to help users customize their setup.
   - ``where`` operations that result in block splitting are much faster (GH3733_)
   - Series and DataFrame hist methods now take a ``figsize`` argument (GH3834_)
+  - DatetimeIndexes no longer try to convert mixed-integer indexes during join
+    operations (GH3877_)
 
 **API Changes**
 
@@ -317,6 +319,7 @@ pandas 0.11.1
 .. _GH3814: https://github.com/pydata/pandas/issues/3814
 .. _GH3834: https://github.com/pydata/pandas/issues/3834
 .. _GH3873: https://github.com/pydata/pandas/issues/3873
+.. _GH3877: https://github.com/pydata/pandas/issues/3877
 
 pandas 0.11.0
diff --git a/doc/source/v0.11.1.txt b/doc/source/v0.11.1.txt
index 564939c596ced..1a43e9e6a49e0 100644
--- a/doc/source/v0.11.1.txt
+++ b/doc/source/v0.11.1.txt
@@ -289,6 +289,8 @@ Enhancements
     dff.groupby('B').filter(lambda x: len(x) > 2, dropna=False)
 
   - Series and DataFrame hist methods now take a ``figsize`` argument (GH3834_)
+  - DatetimeIndexes no longer try to convert mixed-integer indexes during join
+    operations (GH3877_)
 
 Bug Fixes
 
@@ -402,3 +404,4 @@ on GitHub for a complete list.
 .. _GH3425: https://github.com/pydata/pandas/issues/3425
 .. _GH3834: https://github.com/pydata/pandas/issues/3834
 .. _GH3873: https://github.com/pydata/pandas/issues/3873
+.. _GH3877: https://github.com/pydata/pandas/issues/3877
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
index a918e9eb18e8b..51e657d1723b2 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -910,7 +910,8 @@ def join(self, other, how='left', level=None, return_indexers=False):
         """
         See Index.join
         """
-        if not isinstance(other, DatetimeIndex) and len(other) > 0:
+        if (not isinstance(other, DatetimeIndex) and len(other) > 0 and
+            other.inferred_type != 'mixed-integer'):
             try:
                 other = DatetimeIndex(other)
             except TypeError:
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index beee5caa871c5..f5415a195db77 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -18,7 +18,6 @@
 import pandas.core.datetools as datetools
 import pandas.tseries.offsets as offsets
 import pandas.tseries.frequencies as fmod
-from pandas.tseries.index import TimeSeriesError
 import pandas as pd
 
 from pandas.util.testing import assert_series_equal, assert_almost_equal
@@ -1853,6 +1852,14 @@ def test_date(self):
         expected = [t.date() for t in rng]
         self.assert_((result == expected).all())
 
+    def test_does_not_convert_mixed_integer(self):
+        df = tm.makeCustomDataframe(10, 10, data_gen_f=lambda *args, **kwargs:
+                                    randn(), r_idx_type='i', c_idx_type='dt')
+        cols = df.columns.join(df.index, how='outer')
+        joined = cols.join(df.columns)
+        self.assertEqual(cols.dtype, np.dtype('O'))
+        self.assertEqual(cols.dtype, joined.dtype)
+        self.assert_(np.array_equal(cols.values, joined.values))
 
 class TestLegacySupport(unittest.TestCase):
     _multiprocess_can_split_ = True
Closes #3877.
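The guard the patch adds can be exercised directly (a minimal sketch against current pandas; `inferred_type` is the same inference hook the patch consults):

```python
import pandas as pd

dti = pd.date_range("2013-01-01", periods=3)
mixed = pd.Index(["a", "b", 0, 1])  # strings and integers together

# pandas infers this index as 'mixed-integer', so join() does not
# attempt to coerce it into a DatetimeIndex.
print(mixed.inferred_type)  # mixed-integer

joined = dti.join(mixed, how="outer")
# the result falls back to a plain object-dtype Index instead of a
# (bogus) DatetimeIndex
print(joined.dtype)  # object
```

Without the guard, `DatetimeIndex(other)` was attempted on any non-DatetimeIndex operand, which could mangle indexes like the one above.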
https://api.github.com/repos/pandas-dev/pandas/pulls/3878
2013-06-13T04:35:16Z
2013-06-13T17:44:38Z
2013-06-13T17:44:37Z
2014-07-03T18:56:41Z
ENH: JSON
diff --git a/doc/source/io.rst b/doc/source/io.rst index e64cbc4bc8101..c182d456315ec 100644 --- a/doc/source/io.rst +++ b/doc/source/io.rst @@ -954,13 +954,21 @@ with optional parameters: - path_or_buf : the pathname or buffer to write the output This can be ``None`` in which case a JSON string is returned -- orient : The format of the JSON string, default is ``index`` for ``Series``, ``columns`` for ``DataFrame`` +- orient : - * split : dict like {index -> [index], columns -> [columns], data -> [values]} - * records : list like [{column -> value}, ... , {column -> value}] - * index : dict like {index -> {column -> value}} - * columns : dict like {column -> {index -> value}} - * values : just the values array + Series : + default is 'index', allowed values are: {'split','records','index'} + + DataFrame : + default is 'columns', allowed values are: {'split','records','index','columns','values'} + + The format of the JSON string + + * split : dict like {index -> [index], columns -> [columns], data -> [values]} + * records : list like [{column -> value}, ... , {column -> value}] + * index : dict like {index -> {column -> value}} + * columns : dict like {column -> {index -> value}} + * values : just the values array - date_format : type of date conversion (epoch = epoch milliseconds, iso = ISO8601), default is epoch - double_precision : The number of decimal places to use when encoding floating point values, default 10. @@ -989,6 +997,8 @@ Writing to a file, with a date index and a date column dfj2 = dfj.copy() dfj2['date'] = Timestamp('20130101') + dfj2['ints'] = range(5) + dfj2['bools'] = True dfj2.index = date_range('20130101',periods=5) dfj2.to_json('test.json') open('test.json').read() @@ -1005,31 +1015,86 @@ is ``None``. To explicity force ``Series`` parsing, pass ``typ=series`` is expected. 
For instance, a local file could be file ://localhost/path/to/table.json - typ : type of object to recover (series or frame), default 'frame' -- orient : The format of the JSON string, one of the following +- orient : + + Series : + default is 'index', allowed values are: {'split','records','index'} + + DataFrame : + default is 'columns', allowed values are: {'split','records','index','columns','values'} + + The format of the JSON string - * split : dict like {index -> [index], name -> name, data -> [values]} - * records : list like [value, ... , value] - * index : dict like {index -> value} + * split : dict like {index -> [index], columns -> [columns], data -> [values]} + * records : list like [{column -> value}, ... , {column -> value}] + * index : dict like {index -> {column -> value}} + * columns : dict like {column -> {index -> value}} + * values : just the values array -- dtype : dtype of the resulting object -- numpy : direct decoding to numpy arrays. default True but falls back to standard decoding if a problem occurs. -- parse_dates : a list of columns to parse for dates; If True, then try to parse datelike columns, default is False +- dtype : if True, infer dtypes, if a dict of column to dtype, then use those, if False, then don't infer dtypes at all, default is True, apply only to the data +- convert_axes : boolean, try to convert the axes to the proper dtypes, default is True +- convert_dates : a list of columns to parse for dates; If True, then try to parse datelike columns, default is True - keep_default_dates : boolean, default True. If parsing dates, then parse the default datelike columns +- numpy: direct decoding to numpy arrays. default is False; + Note that the JSON ordering **MUST** be the same for each term if ``numpy=True`` The parser will raise one of ``ValueError/TypeError/AssertionError`` if the JSON is not parsable. 
+The default of ``convert_axes=True``, ``dtype=True``, and ``convert_dates=True`` will try to parse the axes, and all of the data +into appropriate types, including dates. If you need to override specific dtypes, pass a dict to ``dtype``. ``convert_axes`` should only +be set to ``False`` if you need to preserve string-like numbers (e.g. '1', '2') in an axes. + +.. warning:: + + When reading JSON data, automatic coercing into dtypes has some quirks: + + * an index can be in a different order, that is the returned order is not guaranteed to be the same as before serialization + * a column that was ``float`` data can safely be converted to ``integer``, e.g. a column of ``1.`` + * bool columns will be converted to ``integer`` on reconstruction + + Thus there are times where you may want to specify specific dtypes via the ``dtype`` keyword argument. + Reading from a JSON string .. ipython:: python pd.read_json(json) -Reading from a file, parsing dates +Reading from a file + +.. ipython:: python + + pd.read_json('test.json') + +Don't convert any data (but still convert axes and dates) + +.. ipython:: python + + pd.read_json('test.json',dtype=object).dtypes + +Specify how I want to convert data + +.. ipython:: python + + pd.read_json('test.json',dtype={'A' : 'float32', 'bools' : 'int8'}).dtypes + +I like my string indicies .. ipython:: python - pd.read_json('test.json',parse_dates=True) + si = DataFrame(np.zeros((4, 4)), + columns=range(4), + index=[str(i) for i in range(4)]) + si + si.index + si.columns + json = si.to_json() + + sij = pd.read_json(json,convert_axes=False) + sij + sij.index + sij.columns .. 
ipython:: python :suppress: diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 0d2612d7aed7a..55347aef078ef 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -507,8 +507,15 @@ def to_json(self, path_or_buf=None, orient=None, date_format='epoch', ---------- path_or_buf : the path or buffer to write the result string if this is None, return a StringIO of the converted string - orient : {'split', 'records', 'index', 'columns', 'values'}, - default is 'index' for Series, 'columns' for DataFrame + orient : + + Series : + default is 'index' + allowed values are: {'split','records','index'} + + DataFrame : + default is 'columns' + allowed values are: {'split','records','index','columns','values'} The format of the JSON string split : dict like @@ -517,6 +524,7 @@ def to_json(self, path_or_buf=None, orient=None, date_format='epoch', index : dict like {index -> {column -> value}} columns : dict like {column -> {index -> value}} values : just the values array + date_format : type of date conversion (epoch = epoch milliseconds, iso = ISO8601), default is epoch double_precision : The number of decimal places to use when encoding diff --git a/pandas/io/json.py b/pandas/io/json.py index 17b33931bee5a..fcecb31bb77a7 100644 --- a/pandas/io/json.py +++ b/pandas/io/json.py @@ -11,6 +11,7 @@ import numpy as np from pandas.tslib import iNaT +import pandas.lib as lib ### interface to/from ### @@ -86,6 +87,11 @@ def _format_dates(self): self.copy_if_needed() self.obj = self._format_to_date(self.obj) + def _format_bools(self): + if self._needs_to_bool(self.obj): + self.copy_if_needed() + self.obj = self._format_to_bool(self.obj) + class FrameWriter(Writer): _default_orient = 'columns' @@ -112,8 +118,8 @@ def _format_dates(self): for c in dtypes.index: self.obj[c] = self._format_to_date(self.obj[c]) -def read_json(path_or_buf=None, orient=None, typ='frame', dtype=None, numpy=True, - parse_dates=False, keep_default_dates=True): +def 
read_json(path_or_buf=None, orient=None, typ='frame', dtype=True, + convert_axes=True, convert_dates=True, keep_default_dates=True, numpy=False): """ Convert JSON string to pandas object @@ -123,20 +129,33 @@ def read_json(path_or_buf=None, orient=None, typ='frame', dtype=None, numpy=True a URL. Valid URL schemes include http, ftp, s3, and file. For file URLs, a host is expected. For instance, a local file could be file ://localhost/path/to/table.json - orient : {'split', 'records', 'index'}, default 'index' + orient : + Series : + default is 'index' + allowed values are: {'split','records','index'} + + DataFrame : + default is 'columns' + allowed values are: {'split','records','index','columns','values'} + The format of the JSON string - split : dict like - {index -> [index], name -> name, data -> [values]} - records : list like [value, ... , value] - index : dict like {index -> value} + split : dict like {index -> [index], columns -> [columns], data -> [values]} + records : list like [{column -> value}, ... , {column -> value}] + index : dict like {index -> {column -> value}} + columns : dict like {column -> {index -> value}} + values : just the values array + typ : type of object to recover (series or frame), default 'frame' - dtype : dtype of the resulting object - numpy: direct decoding to numpy arrays. default True but falls back - to standard decoding if a problem occurs. - parse_dates : a list of columns to parse for dates; If True, then try to parse datelike columns - default is False + dtype : if True, infer dtypes, if a dict of column to dtype, then use those, + if False, then don't infer dtypes at all, default is True, + apply only to the data + convert_axes : boolean, try to convert the axes to the proper dtypes, default is True + convert_dates : a list of columns to parse for dates; If True, then try to parse datelike columns + default is True keep_default_dates : boolean, default True. 
If parsing dates, then parse the default datelike columns + numpy: direct decoding to numpy arrays. default is False.Note that the JSON ordering MUST be the same + for each term if numpy=True. Returns ------- @@ -157,16 +176,18 @@ def read_json(path_or_buf=None, orient=None, typ='frame', dtype=None, numpy=True obj = None if typ == 'frame': - obj = FrameParser(json, orient, dtype, numpy, parse_dates, keep_default_dates).parse() + obj = FrameParser(json, orient, dtype, convert_axes, convert_dates, keep_default_dates, numpy).parse() if typ == 'series' or obj is None: - obj = SeriesParser(json, orient, dtype, numpy, parse_dates, keep_default_dates).parse() + if not isinstance(dtype,bool): + dtype = dict(data = dtype) + obj = SeriesParser(json, orient, dtype, convert_axes, convert_dates, keep_default_dates, numpy).parse() return obj class Parser(object): - def __init__(self, json, orient, dtype, numpy, parse_dates=False, keep_default_dates=False): + def __init__(self, json, orient, dtype=True, convert_axes=True, convert_dates=True, keep_default_dates=False, numpy=False): self.json = json if orient is None: @@ -175,27 +196,100 @@ def __init__(self, json, orient, dtype, numpy, parse_dates=False, keep_default_d self.orient = orient self.dtype = dtype - if dtype is not None and orient == "split": + if orient == "split": numpy = False self.numpy = numpy - self.parse_dates = parse_dates + self.convert_axes = convert_axes + self.convert_dates = convert_dates self.keep_default_dates = keep_default_dates self.obj = None def parse(self): - self._parse() - if self.obj is not None: + + # try numpy + numpy = self.numpy + if numpy: + self._parse_numpy() + + else: + self._parse_no_numpy() + + if self.obj is None: return None + if self.convert_axes: self._convert_axes() - if self.parse_dates: - self._try_parse_dates() + self._try_convert_types() return self.obj + def _convert_axes(self): + """ try to convert axes """ + for axis in self.obj._AXIS_NUMBERS.keys(): + new_axis, result = 
self._try_convert_data(axis, self.obj._get_axis(axis), use_dtypes=False, convert_dates=True) + if result: + setattr(self.obj,axis,new_axis) + + def _try_convert_types(self): + raise NotImplementedError + + def _try_convert_data(self, name, data, use_dtypes=True, convert_dates=True): + """ try to parse a ndarray like into a column by inferring dtype """ + + # don't try to coerce, unless a force conversion + if use_dtypes: + if self.dtype is False: + return data, False + elif self.dtype is True: + pass + + else: + + # dtype to force + dtype = self.dtype.get(name) if isinstance(self.dtype,dict) else self.dtype + if dtype is not None: + try: + dtype = np.dtype(dtype) + return data.astype(dtype), True + except: + return data, False + + if convert_dates: + new_data, result = self._try_convert_to_date(data) + if result: + return new_data, True + + result = False + + if data.dtype == 'object': + + # try float + try: + data = data.astype('float64') + result = True + except: + pass - def _try_parse_to_date(self, data): + # do't coerce 0-len data + if len(data) and (data.dtype == 'float' or data.dtype == 'object'): + + # coerce ints if we can + try: + new_data = data.astype('int64') + if (new_data == data).all(): + data = new_data + result = True + except: + pass + + return data, result + + def _try_convert_to_date(self, data): """ try to parse a ndarray like into a date column try to coerce object in epoch/iso formats and - integer/float in epcoh formats """ + integer/float in epcoh formats, return a boolean if parsing + was successful """ + + # no conversion on empty + if not len(data): return data, False new_data = data if new_data.dtype == 'object': @@ -208,7 +302,7 @@ def _try_parse_to_date(self, data): # ignore numbers that are out of range if issubclass(new_data.dtype.type,np.number): if not ((new_data == iNaT) | (new_data > 31536000000000000L)).all(): - return data + return data, False try: new_data = to_datetime(new_data) @@ -218,122 +312,102 @@ def 
_try_parse_to_date(self, data): except: # return old, noting more we can do - new_data = data + return data, False - return new_data + return new_data, True - def _try_parse_dates(self): + def _try_convert_dates(self): raise NotImplementedError class SeriesParser(Parser): _default_orient = 'index' - def _parse(self): - + def _parse_no_numpy(self): + json = self.json - dtype = self.dtype orient = self.orient - numpy = self.numpy - - if numpy: - try: - if orient == "split": - decoded = loads(json, dtype=dtype, numpy=True) - decoded = dict((str(k), v) for k, v in decoded.iteritems()) - self.obj = Series(**decoded) - elif orient == "columns" or orient == "index": - self.obj = Series(*loads(json, dtype=dtype, numpy=True, - labelled=True)) - else: - self.obj = Series(loads(json, dtype=dtype, numpy=True)) - except ValueError: - numpy = False - - if not numpy: - if orient == "split": - decoded = dict((str(k), v) - for k, v in loads(json).iteritems()) - self.obj = Series(dtype=dtype, **decoded) - else: - self.obj = Series(loads(json), dtype=dtype) + if orient == "split": + decoded = dict((str(k), v) + for k, v in loads(json).iteritems()) + self.obj = Series(dtype=None, **decoded) + else: + self.obj = Series(loads(json), dtype=None) - def _convert_axes(self): - """ try to axes if they are datelike """ - try: - self.obj.index = self._try_parse_to_date(self.obj.index) - except: - pass + def _parse_numpy(self): - def _try_parse_dates(self): - if self.obj is None: return + json = self.json + orient = self.orient + if orient == "split": + decoded = loads(json, dtype=None, numpy=True) + decoded = dict((str(k), v) for k, v in decoded.iteritems()) + self.obj = Series(**decoded) + elif orient == "columns" or orient == "index": + self.obj = Series(*loads(json, dtype=None, numpy=True, + labelled=True)) + else: + self.obj = Series(loads(json, dtype=None, numpy=True)) - if self.parse_dates: - self.obj = self._try_parse_to_date(self.obj) + def _try_convert_types(self): + if self.obj is 
None: return + obj, result = self._try_convert_data('data', self.obj, convert_dates=self.convert_dates) + if result: + self.obj = obj class FrameParser(Parser): _default_orient = 'columns' - def _parse(self): + def _parse_numpy(self): json = self.json - dtype = self.dtype orient = self.orient - numpy = self.numpy - - if numpy: - try: - if orient == "columns": - args = loads(json, dtype=dtype, numpy=True, labelled=True) - if args: - args = (args[0].T, args[2], args[1]) - self.obj = DataFrame(*args) - elif orient == "split": - decoded = loads(json, dtype=dtype, numpy=True) - decoded = dict((str(k), v) for k, v in decoded.iteritems()) - self.obj = DataFrame(**decoded) - elif orient == "values": - self.obj = DataFrame(loads(json, dtype=dtype, numpy=True)) - else: - self.obj = DataFrame(*loads(json, dtype=dtype, numpy=True, - labelled=True)) - except ValueError: - numpy = False - - if not numpy: - if orient == "columns": - self.obj = DataFrame(loads(json), dtype=dtype) - elif orient == "split": - decoded = dict((str(k), v) - for k, v in loads(json).iteritems()) - self.obj = DataFrame(dtype=dtype, **decoded) - elif orient == "index": - self.obj = DataFrame(loads(json), dtype=dtype).T - else: - self.obj = DataFrame(loads(json), dtype=dtype) - def _convert_axes(self): - """ try to axes if they are datelike """ - if self.orient == 'columns': - axis = 'index' - elif self.orient == 'index': - axis = 'columns' + if orient == "columns": + args = loads(json, dtype=None, numpy=True, labelled=True) + if args: + args = (args[0].T, args[2], args[1]) + self.obj = DataFrame(*args) + elif orient == "split": + decoded = loads(json, dtype=None, numpy=True) + decoded = dict((str(k), v) for k, v in decoded.iteritems()) + self.obj = DataFrame(**decoded) + elif orient == "values": + self.obj = DataFrame(loads(json, dtype=None, numpy=True)) else: - return + self.obj = DataFrame(*loads(json, dtype=None, numpy=True, labelled=True)) - try: - a = getattr(self.obj,axis) - 
setattr(self.obj,axis,self._try_parse_to_date(a)) - except: - pass + def _parse_no_numpy(self): - def _try_parse_dates(self): + json = self.json + orient = self.orient + + if orient == "columns": + self.obj = DataFrame(loads(json), dtype=None) + elif orient == "split": + decoded = dict((str(k), v) + for k, v in loads(json).iteritems()) + self.obj = DataFrame(dtype=None, **decoded) + elif orient == "index": + self.obj = DataFrame(loads(json), dtype=None).T + else: + self.obj = DataFrame(loads(json), dtype=None) + + def _try_convert_types(self): + if self.obj is None: return + if self.convert_dates: + self._try_convert_dates() + for col in self.obj.columns: + new_data, result = self._try_convert_data(col, self.obj[col], convert_dates=False) + if result: + self.obj[col] = new_data + + def _try_convert_dates(self): if self.obj is None: return # our columns to parse - parse_dates = self.parse_dates - if parse_dates is True: - parse_dates = [] - parse_dates = set(parse_dates) + convert_dates = self.convert_dates + if convert_dates is True: + convert_dates = [] + convert_dates = set(convert_dates) def is_ok(col): """ return if this col is ok to try for a date parse """ @@ -348,6 +422,8 @@ def is_ok(col): return False - for col, c in self.obj.iteritems(): - if (self.keep_default_dates and is_ok(col)) or col in parse_dates: - self.obj[col] = self._try_parse_to_date(c) + for col in self.obj.columns: + if (self.keep_default_dates and is_ok(col)) or col in convert_dates: + new_data, result = self._try_convert_to_date(self.obj[col]) + if result: + self.obj[col] = new_data diff --git a/pandas/io/tests/test_json/test_pandas.py b/pandas/io/tests/test_json/test_pandas.py index 4b1294b786df7..bdd700bdbcec3 100644 --- a/pandas/io/tests/test_json/test_pandas.py +++ b/pandas/io/tests/test_json/test_pandas.py @@ -56,13 +56,22 @@ def setUp(self): def test_frame_from_json_to_json(self): - def _check_orient(df, orient, dtype=None, numpy=True): + def _check_orient(df, orient, dtype=None, 
numpy=False, convert_axes=True, check_dtype=True, raise_ok=None): df = df.sort() dfjson = df.to_json(orient=orient) - unser = read_json(dfjson, orient=orient, dtype=dtype, - numpy=numpy) + + try: + unser = read_json(dfjson, orient=orient, dtype=dtype, + numpy=numpy, convert_axes=convert_axes) + except (Exception), detail: + if raise_ok is not None: + if type(detail) == raise_ok: + return + raise + unser = unser.sort() - if df.index.dtype.type == np.datetime64: + + if not convert_axes and df.index.dtype.type == np.datetime64: unser.index = DatetimeIndex(unser.index.values.astype('i8')) if orient == "records": # index is not captured in this orientation @@ -78,20 +87,40 @@ def _check_orient(df, orient, dtype=None, numpy=True): unser = unser.sort() assert_almost_equal(df.values, unser.values) else: - assert_frame_equal(df, unser) - - def _check_all_orients(df, dtype=None): - _check_orient(df, "columns", dtype=dtype) - _check_orient(df, "records", dtype=dtype) - _check_orient(df, "split", dtype=dtype) - _check_orient(df, "index", dtype=dtype) - _check_orient(df, "values", dtype=dtype) - - _check_orient(df, "columns", dtype=dtype, numpy=False) - _check_orient(df, "records", dtype=dtype, numpy=False) - _check_orient(df, "split", dtype=dtype, numpy=False) - _check_orient(df, "index", dtype=dtype, numpy=False) - _check_orient(df, "values", dtype=dtype, numpy=False) + if convert_axes: + assert_frame_equal(df, unser, check_dtype=check_dtype) + else: + assert_frame_equal(df, unser, check_less_precise=False, check_dtype=check_dtype) + + def _check_all_orients(df, dtype=None, convert_axes=True, raise_ok=None): + + # numpy=False + if convert_axes: + _check_orient(df, "columns", dtype=dtype) + _check_orient(df, "records", dtype=dtype) + _check_orient(df, "split", dtype=dtype) + _check_orient(df, "index", dtype=dtype) + _check_orient(df, "values", dtype=dtype) + + _check_orient(df, "columns", dtype=dtype, convert_axes=False) + _check_orient(df, "records", dtype=dtype, 
convert_axes=False) + _check_orient(df, "split", dtype=dtype, convert_axes=False) + _check_orient(df, "index", dtype=dtype, convert_axes=False) + _check_orient(df, "values", dtype=dtype ,convert_axes=False) + + # numpy=True and raise_ok might be not None, so ignore the error + if convert_axes: + _check_orient(df, "columns", dtype=dtype, numpy=True, raise_ok=raise_ok) + _check_orient(df, "records", dtype=dtype, numpy=True, raise_ok=raise_ok) + _check_orient(df, "split", dtype=dtype, numpy=True, raise_ok=raise_ok) + _check_orient(df, "index", dtype=dtype, numpy=True, raise_ok=raise_ok) + _check_orient(df, "values", dtype=dtype, numpy=True, raise_ok=raise_ok) + + _check_orient(df, "columns", dtype=dtype, numpy=True, convert_axes=False, raise_ok=raise_ok) + _check_orient(df, "records", dtype=dtype, numpy=True, convert_axes=False, raise_ok=raise_ok) + _check_orient(df, "split", dtype=dtype, numpy=True, convert_axes=False, raise_ok=raise_ok) + _check_orient(df, "index", dtype=dtype, numpy=True, convert_axes=False, raise_ok=raise_ok) + _check_orient(df, "values", dtype=dtype, numpy=True, convert_axes=False, raise_ok=raise_ok) # basic _check_all_orients(self.frame) @@ -99,6 +128,7 @@ def _check_all_orients(df, dtype=None): self.frame.to_json(orient="columns")) _check_all_orients(self.intframe, dtype=self.intframe.values.dtype) + _check_all_orients(self.intframe, dtype=False) # big one # index and columns are strings as all unserialised JSON object keys @@ -106,13 +136,14 @@ def _check_all_orients(df, dtype=None): biggie = DataFrame(np.zeros((200, 4)), columns=[str(i) for i in range(4)], index=[str(i) for i in range(200)]) - _check_all_orients(biggie) + _check_all_orients(biggie,dtype=False,convert_axes=False) # dtypes _check_all_orients(DataFrame(biggie, dtype=np.float64), - dtype=np.float64) - _check_all_orients(DataFrame(biggie, dtype=np.int), dtype=np.int) - _check_all_orients(DataFrame(biggie, dtype='<U3'), dtype='<U3') + dtype=np.float64, convert_axes=False) + 
_check_all_orients(DataFrame(biggie, dtype=np.int), dtype=np.int, convert_axes=False) + _check_all_orients(DataFrame(biggie, dtype='<U3'), dtype='<U3', convert_axes=False, + raise_ok=ValueError) # empty _check_all_orients(self.empty_frame) @@ -129,15 +160,15 @@ def _check_all_orients(df, dtype=None): 'D': [True, False, True, False, True] } df = DataFrame(data=data, index=index) - _check_orient(df, "split") - _check_orient(df, "records") - _check_orient(df, "values") - _check_orient(df, "columns") + _check_orient(df, "split", check_dtype=False) + _check_orient(df, "records", check_dtype=False) + _check_orient(df, "values", check_dtype=False) + _check_orient(df, "columns", check_dtype=False) # index oriented is problematic as it is read back in in a transposed # state, so the columns are interpreted as having mixed data and # given object dtypes. # force everything to have object dtype beforehand - _check_orient(df.transpose().transpose(), "index") + _check_orient(df.transpose().transpose(), "index", dtype=False) def test_frame_from_json_bad_data(self): self.assertRaises(ValueError, read_json, StringIO('{"key":b:a:d}')) @@ -166,25 +197,37 @@ def test_frame_from_json_bad_data(self): def test_frame_from_json_nones(self): df = DataFrame([[1, 2], [4, 5, 6]]) unser = read_json(df.to_json()) - self.assert_(np.isnan(unser['2'][0])) + self.assert_(np.isnan(unser[2][0])) df = DataFrame([['1', '2'], ['4', '5', '6']]) unser = read_json(df.to_json()) - self.assert_(unser['2'][0] is None) + self.assert_(np.isnan(unser[2][0])) + unser = read_json(df.to_json(),dtype=False) + self.assert_(unser[2][0] is None) + unser = read_json(df.to_json(),convert_axes=False,dtype=False) + self.assert_(unser['2']['0'] is None) unser = read_json(df.to_json(), numpy=False) - self.assert_(unser['2'][0] is None) + self.assert_(np.isnan(unser[2][0])) + unser = read_json(df.to_json(), numpy=False, dtype=False) + self.assert_(unser[2][0] is None) + unser = read_json(df.to_json(), numpy=False, 
convert_axes=False, dtype=False) + self.assert_(unser['2']['0'] is None) # infinities get mapped to nulls which get mapped to NaNs during # deserialisation df = DataFrame([[1, 2], [4, 5, 6]]) df[2][0] = np.inf unser = read_json(df.to_json()) - self.assert_(np.isnan(unser['2'][0])) + self.assert_(np.isnan(unser[2][0])) + unser = read_json(df.to_json(), dtype=False) + self.assert_(np.isnan(unser[2][0])) df[2][0] = np.NINF unser = read_json(df.to_json()) - self.assert_(np.isnan(unser['2'][0])) + self.assert_(np.isnan(unser[2][0])) + unser = read_json(df.to_json(),dtype=False) + self.assert_(np.isnan(unser[2][0])) def test_frame_to_json_except(self): df = DataFrame([1, 2, 3]) @@ -192,13 +235,13 @@ def test_frame_to_json_except(self): def test_series_from_json_to_json(self): - def _check_orient(series, orient, dtype=None, numpy=True): + def _check_orient(series, orient, dtype=None, numpy=False): series = series.sort_index() unser = read_json(series.to_json(orient=orient), typ='series', orient=orient, numpy=numpy, dtype=dtype) unser = unser.sort_index() - if series.index.dtype.type == np.datetime64: - unser.index = DatetimeIndex(unser.index.values.astype('i8')) + #if series.index.dtype.type == np.datetime64: + # unser.index = DatetimeIndex(unser.index.values.astype('i8')) if orient == "records" or orient == "values": assert_almost_equal(series.values, unser.values) else: @@ -216,11 +259,11 @@ def _check_all_orients(series, dtype=None): _check_orient(series, "index", dtype=dtype) _check_orient(series, "values", dtype=dtype) - _check_orient(series, "columns", dtype=dtype, numpy=False) - _check_orient(series, "records", dtype=dtype, numpy=False) - _check_orient(series, "split", dtype=dtype, numpy=False) - _check_orient(series, "index", dtype=dtype, numpy=False) - _check_orient(series, "values", dtype=dtype, numpy=False) + _check_orient(series, "columns", dtype=dtype, numpy=True) + _check_orient(series, "records", dtype=dtype, numpy=True) + _check_orient(series, "split", 
dtype=dtype, numpy=True) + _check_orient(series, "index", dtype=dtype, numpy=True) + _check_orient(series, "values", dtype=dtype, numpy=True) # basic _check_all_orients(self.series) @@ -230,7 +273,7 @@ def _check_all_orients(series, dtype=None): objSeries = Series([str(d) for d in self.objSeries], index=self.objSeries.index, name=self.objSeries.name) - _check_all_orients(objSeries) + _check_all_orients(objSeries, dtype=False) _check_all_orients(self.empty_series) _check_all_orients(self.ts) @@ -276,25 +319,28 @@ def test_axis_dates(self): result = read_json(json,typ='series') assert_series_equal(result,self.ts) - def test_parse_dates(self): + def test_convert_dates(self): # frame df = self.tsframe.copy() df['date'] = Timestamp('20130101') json = df.to_json() - result = read_json(json,parse_dates=True) + result = read_json(json) assert_frame_equal(result,df) df['foo'] = 1. json = df.to_json() - result = read_json(json,parse_dates=True) - assert_frame_equal(result,df) + result = read_json(json,convert_dates=False) + expected = df.copy() + expected['date'] = expected['date'].values.view('i8') + expected['foo'] = expected['foo'].astype('int64') + assert_frame_equal(result,expected) # series ts = Series(Timestamp('20130101'),index=self.ts.index) json = ts.to_json() - result = read_json(json,typ='series',parse_dates=True) + result = read_json(json,typ='series') assert_series_equal(result,ts) def test_date_format(self): @@ -304,7 +350,7 @@ def test_date_format(self): df_orig = df.copy() json = df.to_json(date_format='iso') - result = read_json(json,parse_dates=True) + result = read_json(json) assert_frame_equal(result,df_orig) # make sure that we did in fact copy @@ -312,7 +358,7 @@ def test_date_format(self): ts = Series(Timestamp('20130101'),index=self.ts.index) json = ts.to_json(date_format='iso') - result = read_json(json,typ='series',parse_dates=True) + result = read_json(json,typ='series') assert_series_equal(result,ts) def test_weird_nested_json(self): @@ -338,6 
+384,38 @@ def test_weird_nested_json(self): read_json(s) + def test_doc_example(self): + dfj2 = DataFrame(np.random.randn(5, 2), columns=list('AB')) + dfj2['date'] = Timestamp('20130101') + dfj2['ints'] = range(5) + dfj2['bools'] = True + dfj2.index = pd.date_range('20130101',periods=5) + + json = dfj2.to_json() + result = read_json(json,dtype={'ints' : np.int64, 'bools' : np.bool_}) + assert_frame_equal(result,result) + + def test_misc_example(self): + + # parsing unordered input fails + result = read_json('[{"a": 1, "b": 2}, {"b":2, "a" :1}]',numpy=True) + expected = DataFrame([[1,2],[1,2]],columns=['a','b']) + #assert_frame_equal(result,expected) + + result = read_json('[{"a": 1, "b": 2}, {"b":2, "a" :1}]') + expected = DataFrame([[1,2],[1,2]],columns=['a','b']) + assert_frame_equal(result,expected) + + @network + @slow + def test_round_trip_exception_(self): + # GH 3867 + + df = pd.read_csv('https://raw.github.com/hayd/lahman2012/master/csvs/Teams.csv') + s = df.to_json() + result = pd.read_json(s) + assert_frame_equal(result.reindex(index=df.index,columns=df.columns),df) + @network @slow def test_url(self): @@ -345,7 +423,7 @@ def test_url(self): try: url = 'https://api.github.com/repos/pydata/pandas/issues?per_page=5' - result = read_json(url,parse_dates=True) + result = read_json(url,convert_dates=True) for c in ['created_at','closed_at','updated_at']: self.assert_(result[c].dtype == 'datetime64[ns]')
revised argument structure for `read_json` to control dtype conversions, which are all on by default:

- `convert_axes` : if you for some reason want to turn off dtype conversion on the axes (only really necessary if you have string-like numbers)
- `dtype` : now accepts a dict of name -> dtype for specific conversions, or True to try to coerce all
- `convert_dates` : default True (in conjunction with `keep_default_dates` determines which columns to attempt date conversion)

DOC updates for all
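A quick sketch of the revised keywords using the current `read_json` signature (the `StringIO` wrapper is only there because modern pandas expects a file-like object rather than a raw JSON string):

```python
from io import StringIO
import pandas as pd

df = pd.DataFrame({"ints": [1, 2, 3], "vals": [0.5, 1.5, 2.5]})
payload = df.to_json()

# defaults: dtype and axis conversion are both on
out = pd.read_json(StringIO(payload))
print(out["ints"].dtype)  # int64

# dtype accepts a dict of column name -> dtype for targeted coercion
out2 = pd.read_json(StringIO(payload), dtype={"vals": "float32"})
print(out2["vals"].dtype)  # float32

# convert_axes=False keeps the string axis labels exactly as serialized
out3 = pd.read_json(StringIO(payload), convert_axes=False)
print(list(out3.index))  # ['0', '1', '2']
```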
https://api.github.com/repos/pandas-dev/pandas/pulls/3876
2013-06-13T02:53:57Z
2013-06-13T19:14:10Z
2013-06-13T19:14:10Z
2014-06-19T05:28:21Z
FIX: change initObjToJSON return type
diff --git a/pandas/src/ujson/python/objToJSON.c b/pandas/src/ujson/python/objToJSON.c index ce8bdf3721f5e..534d60970dd81 100644 --- a/pandas/src/ujson/python/objToJSON.c +++ b/pandas/src/ujson/python/objToJSON.c @@ -100,7 +100,11 @@ enum PANDAS_FORMAT //#define PRINTMARK() fprintf(stderr, "%s: MARK(%d)\n", __FILE__, __LINE__) #define PRINTMARK() +#if (PY_VERSION_HEX >= 0x03000000) void initObjToJSON(void) +#else +int initObjToJSON(void) +#endif { PyObject *mod_frame; PyDateTime_IMPORT;
This is necessary because clang complains about the return type. There's a call to the macro import_array() which injects a return statement wherever it's used. closes #3872.
https://api.github.com/repos/pandas-dev/pandas/pulls/3874
2013-06-13T00:09:07Z
2013-06-15T12:26:49Z
2013-06-15T12:26:49Z
2014-06-19T22:24:06Z
BLD: remove after_script.sh from travis since it does not exist anymore
diff --git a/.travis.yml b/.travis.yml index b48f6d834b62d..8e2bb49d9df93 100644 --- a/.travis.yml +++ b/.travis.yml @@ -55,4 +55,3 @@ script: after_script: - ci/print_versions.py - - ci/after_script.sh
closes #3857
https://api.github.com/repos/pandas-dev/pandas/pulls/3868
2013-06-12T18:31:16Z
2013-06-12T19:37:29Z
2013-06-12T19:37:29Z
2014-07-16T08:13:41Z
CLN: avoid Unboundlocal error in tools/merge/_get_concatenated_data (GH3833)
diff --git a/pandas/tools/merge.py b/pandas/tools/merge.py index 9cdddc47acac1..75e35b403dd78 100644 --- a/pandas/tools/merge.py +++ b/pandas/tools/merge.py @@ -984,11 +984,11 @@ def _prepare_blocks(self): return blockmaps, reindexed_data def _get_concatenated_data(self): - try: - # need to conform to same other (joined) axes for block join - blockmaps, rdata = self._prepare_blocks() - kinds = _get_all_block_kinds(blockmaps) + # need to conform to same other (joined) axes for block join + blockmaps, rdata = self._prepare_blocks() + kinds = _get_all_block_kinds(blockmaps) + try: new_blocks = [] for kind in kinds: klass_blocks = [mapping.get(kind) for mapping in blockmaps]
closes #3833
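The diff above moves `_prepare_blocks()` out of the `try` block. A minimal standalone sketch (with a made-up failing computation, not the actual merge code) of the UnboundLocalError pattern it avoids:

```python
def broken():
    try:
        blockmaps = {}["missing"]  # raises before `blockmaps` is ever bound
        kinds = list(blockmaps)
    except KeyError:
        pass                       # the error is swallowed here...
    return blockmaps               # ...then the unbound local is referenced

def fixed():
    blockmaps = {"a": 1}           # bind before the try, so later code can rely on it
    try:
        kinds = list(blockmaps)
    except Exception:
        raise
    return blockmaps

try:
    broken()
except UnboundLocalError:
    print("broken() raised UnboundLocalError")

print(fixed())  # {'a': 1}
```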
https://api.github.com/repos/pandas-dev/pandas/pulls/3864
2013-06-12T14:22:18Z
2013-06-12T14:47:58Z
2013-06-12T14:47:58Z
2014-06-18T11:00:34Z
Add colormap= argument to DataFrame plotting methods
diff --git a/doc/source/release.rst b/doc/source/release.rst index b2eefda10fccc..0fa7b4b2ed5f2 100644 --- a/doc/source/release.rst +++ b/doc/source/release.rst @@ -52,6 +52,8 @@ pandas 0.12 - A ``filter`` method on grouped Series or DataFrames returns a subset of the original (:issue:`3680`, :issue:`919`) - Access to historical Google Finance data in pandas.io.data (:issue:`3814`) + - DataFrame plotting methods can sample column colors from a Matplotlib + colormap via the ``colormap`` keyword. (:issue:`3860`) **Improvements to existing features** diff --git a/doc/source/v0.12.0.txt b/doc/source/v0.12.0.txt index 643ef7ddbbab4..4b100ed0b5fab 100644 --- a/doc/source/v0.12.0.txt +++ b/doc/source/v0.12.0.txt @@ -96,6 +96,12 @@ API changes and thus you should cast to an appropriate numeric dtype if you need to plot something. + - Add ``colormap`` keyword to DataFrame plotting methods. Accepts either a + matplotlib colormap object (ie, matplotlib.cm.jet) or a string name of such + an object (ie, 'jet'). The colormap is sampled to select the color for each + column. Please see :ref:`visualization.colormaps` for more information. + (:issue:`3860`) + - ``DataFrame.interpolate()`` is now deprecated. Please use ``DataFrame.fillna()`` and ``DataFrame.replace()`` instead. (:issue:`3582`, :issue:`3675`, :issue:`3676`) diff --git a/doc/source/visualization.rst b/doc/source/visualization.rst index f0790396a5c39..f1a9880047691 100644 --- a/doc/source/visualization.rst +++ b/doc/source/visualization.rst @@ -531,3 +531,65 @@ be colored differently. @savefig radviz.png width=6in radviz(data, 'Name') + +.. _visualization.colormaps: + +Colormaps +~~~~~~~~~ + +A potential issue when plotting a large number of columns is that it can be difficult to distinguish some series due to repetition in the default colors. 
To remedy this, DataFrame plotting supports the use of the ``colormap=`` argument, which accepts either a Matplotlib `colormap <http://matplotlib.org/api/cm_api.html>`__ or a string that is a name of a colormap registered with Matplotlib. A visualization of the default matplotlib colormaps is available `here <http://wiki.scipy.org/Cookbook/Matplotlib/Show_colormaps>`__. + +As matplotlib does not directly support colormaps for line-based plots, the colors are selected based on an even spacing determined by the number of columns in the DataFrame. There is no consideration made for background color, so some colormaps will produce lines that are not easily visible. + +To use the jet colormap, we can simply pass ``'jet'`` to ``colormap=`` + +.. ipython:: python + + df = DataFrame(randn(1000, 10), index=ts.index) + df = df.cumsum() + + plt.figure() + + @savefig jet.png width=6in + df.plot(colormap='jet') + +or we can pass the colormap itself + +.. ipython:: python + + from matplotlib import cm + + plt.figure() + + @savefig jet_cm.png width=6in + df.plot(colormap=cm.jet) + +Colormaps can also be used other plot types, like bar charts: + +.. ipython:: python + + dd = DataFrame(randn(10, 10)).applymap(abs) + dd = dd.cumsum() + + plt.figure() + + @savefig greens.png width=6in + dd.plot(kind='bar', colormap='Greens') + +Parallel coordinates charts: + +.. ipython:: python + + plt.figure() + + @savefig parallel_gist_rainbow.png width=6in + parallel_coordinates(data, 'Name', colormap='gist_rainbow') + +Andrews curves charts: + +.. 
ipython:: python + + plt.figure() + + @savefig andrews_curve_winter.png width=6in + andrews_curves(data, 'Name', colormap='winter') diff --git a/pandas/tests/test_graphics.py b/pandas/tests/test_graphics.py index e57e5a9af2fc0..d094e8b99d9cb 100644 --- a/pandas/tests/test_graphics.py +++ b/pandas/tests/test_graphics.py @@ -103,6 +103,35 @@ def test_bar_colors(self): self.assert_(xp == rs) plt.close('all') + + from matplotlib import cm + + # Test str -> colormap functionality + ax = df.plot(kind='bar', colormap='jet') + + rects = ax.patches + + rgba_colors = map(cm.jet, np.linspace(0, 1, 5)) + for i, rect in enumerate(rects[::5]): + xp = rgba_colors[i] + rs = rect.get_facecolor() + self.assert_(xp == rs) + + plt.close('all') + + # Test colormap functionality + ax = df.plot(kind='bar', colormap=cm.jet) + + rects = ax.patches + + rgba_colors = map(cm.jet, np.linspace(0, 1, 5)) + for i, rect in enumerate(rects[::5]): + xp = rgba_colors[i] + rs = rect.get_facecolor() + self.assert_(xp == rs) + + plt.close('all') + df.ix[:, [0]].plot(kind='bar', color='DodgerBlue') @slow @@ -600,6 +629,7 @@ def test_andrews_curves(self): def test_parallel_coordinates(self): from pandas import read_csv from pandas.tools.plotting import parallel_coordinates + from matplotlib import cm path = os.path.join(curpath(), 'data/iris.csv') df = read_csv(path) _check_plot_works(parallel_coordinates, df, 'Name') @@ -611,6 +641,7 @@ def test_parallel_coordinates(self): colors=('#556270', '#4ECDC4', '#C7F464')) _check_plot_works(parallel_coordinates, df, 'Name', colors=['dodgerblue', 'aquamarine', 'seagreen']) + _check_plot_works(parallel_coordinates, df, 'Name', colormap=cm.jet) df = read_csv( path, header=None, skiprows=1, names=[1, 2, 4, 8, 'Name']) @@ -622,9 +653,11 @@ def test_parallel_coordinates(self): def test_radviz(self): from pandas import read_csv from pandas.tools.plotting import radviz + from matplotlib import cm path = os.path.join(curpath(), 'data/iris.csv') df = read_csv(path) 
_check_plot_works(radviz, df, 'Name') + _check_plot_works(radviz, df, 'Name', colormap=cm.jet) @slow def test_plot_int_columns(self): @@ -666,6 +699,7 @@ def test_line_colors(self): import matplotlib.pyplot as plt import sys from StringIO import StringIO + from matplotlib import cm custom_colors = 'rgcby' @@ -691,6 +725,30 @@ def test_line_colors(self): finally: sys.stderr = tmp + plt.close('all') + + ax = df.plot(colormap='jet') + + rgba_colors = map(cm.jet, np.linspace(0, 1, len(df))) + + lines = ax.get_lines() + for i, l in enumerate(lines): + xp = rgba_colors[i] + rs = l.get_color() + self.assert_(xp == rs) + + plt.close('all') + + ax = df.plot(colormap=cm.jet) + + rgba_colors = map(cm.jet, np.linspace(0, 1, len(df))) + + lines = ax.get_lines() + for i, l in enumerate(lines): + xp = rgba_colors[i] + rs = l.get_color() + self.assert_(xp == rs) + # make color a list if plotting one column frame # handles cases like df.plot(color='DodgerBlue') plt.close('all') @@ -862,6 +920,10 @@ def test_option_mpl_style(self): except ValueError: pass + def test_invalid_colormap(self): + df = DataFrame(np.random.randn(500, 2), columns=['A', 'B']) + + self.assertRaises(ValueError, df.plot, colormap='invalid_colormap') def _check_plot_works(f, *args, **kwargs): import matplotlib.pyplot as plt diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py index a5aaac05d8ad8..8abe9df5ddd56 100644 --- a/pandas/tools/plotting.py +++ b/pandas/tools/plotting.py @@ -91,6 +91,43 @@ def _get_standard_kind(kind): return {'density': 'kde'}.get(kind, kind) +def _get_standard_colors(num_colors=None, colormap=None, + color_type='default', color=None): + import matplotlib.pyplot as plt + + if color is None and colormap is not None: + if isinstance(colormap, basestring): + import matplotlib.cm as cm + colormap = cm.get_cmap(colormap) + colors = map(colormap, np.linspace(0, 1, num=num_colors)) + elif color is not None: + if colormap is not None: + warnings.warn("'color' and 'colormap' cannot be 
used " + "simultaneously. Using 'color'") + colors = color + else: + if color_type == 'default': + colors = plt.rcParams.get('axes.color_cycle', list('bgrcmyk')) + if isinstance(colors, basestring): + colors = list(colors) + elif color_type == 'random': + import random + def random_color(column): + random.seed(column) + return [random.random() for _ in range(3)] + + colors = map(random_color, range(num_colors)) + else: + raise NotImplementedError + + if len(colors) != num_colors: + multiple = num_colors//len(colors) - 1 + mod = num_colors % len(colors) + + colors += multiple * colors + colors += colors[:mod] + + return colors class _Options(dict): """ @@ -283,7 +320,7 @@ def _get_marker_compat(marker): return marker -def radviz(frame, class_column, ax=None, **kwds): +def radviz(frame, class_column, ax=None, colormap=None, **kwds): """RadViz - a multivariate data visualization algorithm Parameters: @@ -291,6 +328,9 @@ def radviz(frame, class_column, ax=None, **kwds): frame: DataFrame object class_column: Column name that contains information about class membership ax: Matplotlib axis object, optional + colormap : str or matplotlib colormap object, default None + Colormap to select colors from. If string, load colormap with that name + from matplotlib. 
kwds: Matplotlib scatter method keyword arguments, optional Returns: @@ -302,10 +342,6 @@ def radviz(frame, class_column, ax=None, **kwds): import matplotlib.text as text import random - def random_color(column): - random.seed(column) - return [random.random() for _ in range(3)] - def normalize(series): a = min(series) b = max(series) @@ -322,6 +358,9 @@ def normalize(series): classes = set(frame[class_column]) to_plot = {} + colors = _get_standard_colors(num_colors=len(classes), colormap=colormap, + color_type='random', color=kwds.get('color')) + for class_ in classes: to_plot[class_] = [[], []] @@ -338,10 +377,10 @@ def normalize(series): to_plot[class_name][0].append(y[0]) to_plot[class_name][1].append(y[1]) - for class_ in classes: + for i, class_ in enumerate(classes): line = ax.scatter(to_plot[class_][0], to_plot[class_][1], - color=random_color(class_), + color=colors[i], label=com.pprint_thing(class_), **kwds) ax.legend() @@ -368,7 +407,8 @@ def normalize(series): return ax -def andrews_curves(data, class_column, ax=None, samples=200): +def andrews_curves(data, class_column, ax=None, samples=200, colormap=None, + **kwds): """ Parameters: ----------- @@ -377,6 +417,10 @@ def andrews_curves(data, class_column, ax=None, samples=200): class_column : Name of the column containing class names ax : matplotlib axes object, default None samples : Number of points to plot in each curve + colormap : str or matplotlib colormap object, default None + Colormap to select colors from. If string, load colormap with that name + from matplotlib. 
+ kwds : Optional plotting arguments to be passed to matplotlib Returns: -------- @@ -401,15 +445,17 @@ def f(x): return result return f - def random_color(column): - random.seed(column) - return [random.random() for _ in range(3)] + n = len(data) classes = set(data[class_column]) class_col = data[class_column] columns = [data[col] for col in data.columns if (col != class_column)] x = [-pi + 2.0 * pi * (t / float(samples)) for t in range(samples)] used_legends = set([]) + + colors = _get_standard_colors(num_colors=n, colormap=colormap, + color_type='random', color=kwds.get('color')) + if ax is None: ax = plt.gca(xlim=(-pi, pi)) for i in range(n): @@ -420,9 +466,9 @@ def random_color(column): if com.pprint_thing(class_col[i]) not in used_legends: label = com.pprint_thing(class_col[i]) used_legends.add(label) - ax.plot(x, y, color=random_color(class_col[i]), label=label) + ax.plot(x, y, color=colors[i], label=label, **kwds) else: - ax.plot(x, y, color=random_color(class_col[i])) + ax.plot(x, y, color=colors[i], **kwds) ax.legend(loc='upper right') ax.grid() @@ -492,7 +538,7 @@ def bootstrap_plot(series, fig=None, size=50, samples=500, **kwds): def parallel_coordinates(data, class_column, cols=None, ax=None, colors=None, - use_columns=False, xticks=None, **kwds): + use_columns=False, xticks=None, colormap=None, **kwds): """Parallel coordinates plotting. Parameters @@ -511,6 +557,8 @@ def parallel_coordinates(data, class_column, cols=None, ax=None, colors=None, If true, columns will be used as xticks xticks: list or tuple, optional A list of values to use for xticks + colormap: str or matplotlib colormap, default None + Colormap to use for line colors. 
kwds: list, optional A list of keywords for matplotlib plot method @@ -530,9 +578,7 @@ def parallel_coordinates(data, class_column, cols=None, ax=None, colors=None, import matplotlib.pyplot as plt import random - def random_color(column): - random.seed(column) - return [random.random() for _ in range(3)] + n = len(data) classes = set(data[class_column]) class_col = data[class_column] @@ -563,13 +609,11 @@ def random_color(column): if ax is None: ax = plt.gca() - # if user has not specified colors to use, choose at random - if colors is None: - colors = dict((kls, random_color(kls)) for kls in classes) - else: - if len(colors) != len(classes): - raise ValueError('Number of colors must match number of classes') - colors = dict((kls, colors[i]) for i, kls in enumerate(classes)) + color_values = _get_standard_colors(num_colors=len(classes), + colormap=colormap, color_type='random', + color=colors) + + colors = dict(zip(classes, color_values)) for i in range(n): row = df.irow(i).values @@ -714,7 +758,7 @@ def __init__(self, data, kind=None, by=None, subplots=False, sharex=True, ax=None, fig=None, title=None, xlim=None, ylim=None, xticks=None, yticks=None, sort_columns=False, fontsize=None, - secondary_y=False, **kwds): + secondary_y=False, colormap=None, **kwds): self.data = data self.by = by @@ -756,6 +800,8 @@ def __init__(self, data, kind=None, by=None, subplots=False, sharex=True, secondary_y = [secondary_y] self.secondary_y = secondary_y + self.colormap = colormap + self.kwds = kwds self._validate_color_args() @@ -774,6 +820,11 @@ def _validate_color_args(self): # support series.plot(color='green') self.kwds['color'] = [self.kwds['color']] + if ('color' in self.kwds or 'colors' in self.kwds) and \ + self.colormap is not None: + warnings.warn("'color' and 'colormap' cannot be used " + "simultaneously. 
Using 'color'") + def _iter_data(self): from pandas.core.frame import DataFrame if isinstance(self.data, (Series, np.ndarray)): @@ -1072,15 +1123,18 @@ def _get_style(self, i, col_name): return style or None def _get_colors(self): - import matplotlib.pyplot as plt - cycle = plt.rcParams.get('axes.color_cycle', list('bgrcmyk')) - if isinstance(cycle, basestring): - cycle = list(cycle) - colors = self.kwds.get('color', cycle) - return colors + from pandas.core.frame import DataFrame + if isinstance(self.data, DataFrame): + num_colors = len(self.data.columns) + else: + num_colors = 1 + + return _get_standard_colors(num_colors=num_colors, + colormap=self.colormap, + color=self.kwds.get('color')) def _maybe_add_color(self, colors, kwds, style, i): - has_color = 'color' in kwds + has_color = 'color' in kwds or self.colormap is not None if has_color and (style is None or re.match('[a-z]+', style) is None): kwds['color'] = colors[i % len(colors)] @@ -1090,6 +1144,7 @@ def _get_marked_label(self, label, col_num): else: return label + class KdePlot(MPLPlot): def __init__(self, data, **kwargs): MPLPlot.__init__(self, data, **kwargs) @@ -1389,15 +1444,6 @@ def f(ax, x, y, w, start=None, log=self.log, **kwds): return f - def _get_colors(self): - import matplotlib.pyplot as plt - cycle = plt.rcParams.get('axes.color_cycle', list('bgrcmyk')) - if isinstance(cycle, basestring): - cycle = list(cycle) - has_colors = 'color' in self.kwds - colors = self.kwds.get('color', cycle) - return colors - def _make_plot(self): import matplotlib as mpl colors = self._get_colors() @@ -1547,6 +1593,9 @@ def plot_frame(frame=None, x=None, y=None, subplots=False, sharex=True, mark_right: boolean, default True When using a secondary_y axis, should the legend label the axis of the various columns automatically + colormap : str or matplotlib colormap object, default None + Colormap to select colors from. If string, load colormap with that name + from matplotlib. 
kwds : keywords Options to pass to matplotlib plotting method @@ -1724,12 +1773,7 @@ def boxplot(data, column=None, by=None, ax=None, fontsize=None, def _get_colors(): - import matplotlib.pyplot as plt - cycle = plt.rcParams.get('axes.color_cycle', list('bgrcmyk')) - if isinstance(cycle, basestring): - cycle = list(cycle) - colors = kwds.get('color', cycle) - return colors + return _get_standard_colors(color=kwds.get('color'), num_colors=1) def maybe_color_bp(bp): if 'color' not in kwds :
I frequently plot DataFrames with a large number of columns and generally have difficulty distinguishing series due to the short cycle length of the default color scheme. Especially in cases where the ordering of columns has significant information, the ideal way to color the series would be with a matplotlib colormap that uniformly spaces colors. This is pretty straightforward with pyplot, but pretty annoying to have to repeatedly do. This patch modifies DataFrame plotting functions to take a `colormap=` argument consisting of either a `str` name of a [matplotlib colormap](http://wiki.scipy.org/Cookbook/Matplotlib/Show_colormaps) or a colormap object itself. ``` df.cumsum().plot(colormap='jet', figsize=(10,5)) ``` ![jet_10](https://f.cloud.github.com/assets/440095/641179/ed27a674-d327-11e2-86a3-85c64f80e294.png) KDE plot: ``` df.plot(kind='kde', colormap='jet', figsize=(10,5)) ``` ![kde](https://f.cloud.github.com/assets/440095/641208/1f9ec6d6-d329-11e2-9c47-5cbf88c5fe30.png) Some colormaps don't work as well on a white background (the 0 column is white): df.cumsum().plot(colormap=cm.Greens, figsize=(10,5)) ![greens_10](https://f.cloud.github.com/assets/440095/641183/18e0b134-d328-11e2-8436-94b26240a074.png) But work better for other graph types: df.plot(kind='bar', colormap='jet', figsize=(10,5)) ![greens_bar](https://f.cloud.github.com/assets/440095/641185/56d57b28-d328-11e2-9412-ab6b40559ebc.png) Parallel coordinates on the iris dataset: ``` parallel_coordinates(iris, 'Name', colormap='gist_rainbow') ``` ![iris_parallel](https://f.cloud.github.com/assets/440095/641195/a207e32e-d328-11e2-910f-36576d187ff0.png) Andrews curves (I'd appreciate someone double checking this one; don't think I have it quite right): ``` andrews_curves(iris, 'Name', colormap='winter') ``` ![andrews_winter](https://f.cloud.github.com/assets/440095/641201/e1acf85c-d328-11e2-8a8c-263d9abc12f9.png) I've included some test coverage and unified all the color creation code into one method 
`_get_standard_colors()`. I started adding to the documentation but ran into a weird issue with the sphinx plot output. When adding this to `visualization.rst`: ``` .. ipython:: python from matplotlib import cm df = DataFrame(randn(1000, 4), index=ts.index, columns=list('ABCD')) df = df.cumsum() plt.figure() @savefig greens.png width=6in df.plot(colormap=cm.Greens) ``` I get this output (the lines should be white->green): ![greens](https://f.cloud.github.com/assets/440095/641261/d7243762-d32b-11e2-9bb0-e135ae25546b.png) My first thought was that it was the `options.display.mpl_style = 'default'`, but plots render fine in IPython with this setting. My guess is something in `@savefig`, but is anyone familiar with what might be happening here?
https://api.github.com/repos/pandas-dev/pandas/pulls/3860
2013-06-12T06:57:17Z
2013-06-27T02:56:30Z
2013-06-27T02:56:30Z
2014-06-18T19:57:31Z
DOC add to_datetime to api.rst
diff --git a/doc/source/api.rst b/doc/source/api.rst index a4be0df5f489e..7e863a4429487 100644 --- a/doc/source/api.rst +++ b/doc/source/api.rst @@ -126,7 +126,7 @@ Data manipulations merge concat -Top-level Missing Data +Top-level missing data ~~~~~~~~~~~~~~~~~~~~~~ .. currentmodule:: pandas.core.common @@ -137,6 +137,17 @@ Top-level Missing Data isnull notnull +Top-level dealing with datetimes +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. currentmodule:: pandas.tseries.tools + +.. autosummary:: + :toctree: generated/ + + to_datetime + + Standard moving window functions ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst index f8d1e8323b9f5..f11bf60549d93 100644 --- a/doc/source/timeseries.rst +++ b/doc/source/timeseries.rst @@ -110,6 +110,68 @@ scalar values and ``PeriodIndex`` for sequences of spans. Better support for irregular intervals with arbitrary start and end points are forth-coming in future releases. + +.. _timeseries.converting: + +Converting to Timestamps +------------------------ + +To convert a Series or list-like object of date-like objects e.g. strings, +epochs, or a mixture, you can use the ``to_datetime`` function. When passed +a Series, this returns a Series (with the same index), while a list-like +is converted to a DatetimeIndex: + +.. ipython:: python + + to_datetime(Series(['Jul 31, 2009', '2010-01-10', None])) + + to_datetime(['2005/11/23', '2010.12.31']) + +If you use dates which start with the day first (i.e. European style), +you can pass the ``dayfirst`` flag: + +.. ipython:: python + + to_datetime(['04-01-2012 10:00'], dayfirst=True) + + to_datetime(['14-01-2012', '01-14-2012'], dayfirst=True) + +.. warning:: + + You see in the above example that ``dayfirst`` isn't strict, so if a date + can't be parsed with the day being first it will be parsed as if + ``dayfirst`` were False. + + +Pass ``coerce=True`` to convert bad data to ``NaT`` (not a time): + +.. 
ipython:: python + + to_datetime(['2009-07-31', 'asd']) + + to_datetime(['2009-07-31', 'asd'], coerce=True) + +It's also possible to convert integer or float epoch times. The default unit +for these is nanoseconds (since these are how Timestamps are stored). However, +often epochs are stored in another ``unit`` which can be specified: + + +.. ipython:: python + + to_datetime([1]) + + to_datetime([1, 3.14], unit='s') + +.. note:: + + Epoch times will be rounded to the nearest nanosecond. + +Take care, ``to_datetime`` may not act as you expect on mixed data: + +.. ipython:: python + + pd.to_datetime([1, '1']) + .. _timeseries.daterange: Generating Ranges of Timestamps
Either I'm being thick or `to_datetime` isn't in the docs (does adding it like this add it?) Should also put something in basics...?
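For reference, the behaviors the doc addition describes, sketched against the current API (the `coerce=True` keyword from the diff has since become `errors="coerce"`):

```python
import pandas as pd

# a Series of date-like strings comes back as a datetime Series
s = pd.to_datetime(pd.Series(["2009-07-31", "2010-01-10", None]))
print(s.dtype)  # datetime64[ns]

# dayfirst for European-style dates
print(pd.to_datetime(["04-01-2012 10:00"], dayfirst=True)[0])

# bad data -> NaT when coercing
out = pd.to_datetime(pd.Series(["2009-07-31", "asd"]), errors="coerce")
print(out.isna().tolist())  # [False, True]
```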
https://api.github.com/repos/pandas-dev/pandas/pulls/3859
2013-06-12T02:54:37Z
2013-06-21T00:07:10Z
2013-06-21T00:07:10Z
2014-06-16T20:26:09Z
TST slicing regression test
diff --git a/pandas/tests/test_indexing.py b/pandas/tests/test_indexing.py index e7f824ace983c..295eaede443b1 100644 --- a/pandas/tests/test_indexing.py +++ b/pandas/tests/test_indexing.py @@ -724,6 +724,15 @@ def test_ix_general(self): df.sortlevel(inplace=True) df.ix[(4.0,2012)] + def test_ix_weird_slicing(self): + ## http://stackoverflow.com/q/17056560/1240268 + df = DataFrame({'one' : [1, 2, 3, np.nan, np.nan], 'two' : [1, 2, 3, 4, 5]}) + df.ix[df['one']>1, 'two'] = -df['two'] + + expected = DataFrame({'one': {0: 1.0, 1: 2.0, 2: 3.0, 3: nan, 4: nan}, + 'two': {0: 1, 1: -2, 2: -3, 3: 4, 4: 5}}) + assert_frame_equal(df, expected) + def test_xs_multiindex(self): # GH2903
From http://stackoverflow.com/questions/17056560/how-do-i-assign-a-vector-to-a-subset-of-rows-of-a-column-in-a-pandas-dataframe. Fixed in #3668?
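`.ix` has since been removed from pandas; the same boolean-row, single-column assignment the test exercises, written with `.loc`:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"one": [1, 2, 3, np.nan, np.nan], "two": [1, 2, 3, 4, 5]})

# negate 'two' only on rows where 'one' > 1; the right-hand Series
# aligns on the index, so untouched rows keep their values
df.loc[df["one"] > 1, "two"] = -df["two"]
print(df["two"].tolist())  # [1, -2, -3, 4, 5]
```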
https://api.github.com/repos/pandas-dev/pandas/pulls/3858
2013-06-12T02:26:18Z
2013-06-12T03:20:07Z
2013-06-12T03:20:07Z
2014-06-23T10:54:37Z
TST: Fix missing import in io/tests/test_json
diff --git a/pandas/io/tests/test_json/test_pandas.py b/pandas/io/tests/test_json/test_pandas.py index b64bfaacd38f2..4b1294b786df7 100644 --- a/pandas/io/tests/test_json/test_pandas.py +++ b/pandas/io/tests/test_json/test_pandas.py @@ -8,6 +8,7 @@ import os import unittest +import nose import numpy as np from pandas import Series, DataFrame, DatetimeIndex, Timestamp
The nose import is missing: if execution reaches the last line, it raises a NameError because `nose` is never imported.
https://api.github.com/repos/pandas-dev/pandas/pulls/3855
2013-06-11T22:55:48Z
2013-06-12T00:54:34Z
2013-06-12T00:54:34Z
2014-06-27T11:37:18Z
DOC: Clarify quote behavior parameters
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py index 6e937ba696e39..bc06969ba1fa1 100644 --- a/pandas/io/parsers.py +++ b/pandas/io/parsers.py @@ -42,7 +42,11 @@ class DateConversionError(Exception): lineterminator : string (length 1), default None Character to break file into lines. Only valid with C parser quotechar : string -quoting : string + The character used to denote the start and end of a quoted item. Quoted items can include the delimiter and it will be ignored. +quoting : int + Controls whether quotes should be recognized. Values are taken from + `csv.QUOTE_*` values. Acceptable values are 0, 1, 2, and 3 for + QUOTE_MINIMAL, QUOTE_ALL, QUOTE_NONNUMERIC, and QUOTE_NONE, respectively. skipinitialspace : boolean, default False Skip spaces after delimiter escapechar : string
I've been bitten many times recently by malformed CSV with non-closing quotes spanning lines. This clarifies how to avoid the problem a bit.
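A rough illustration of the failure mode (the sample data is made up): with `csv.QUOTE_NONE` (value 3), the parser treats quote characters literally, so a stray unclosed quote no longer swallows the following lines into one field:

```python
import csv
from io import StringIO
import pandas as pd

# a malformed file: the quote on the first data row is never closed
data = 'a,b\n1,"unclosed\n2,3\n'

# QUOTE_NONE disables quote handling entirely
df = pd.read_csv(StringIO(data), quoting=csv.QUOTE_NONE)
print(df.shape)        # (2, 2) -- both rows survive
print(df.loc[0, "b"])  # the quote char is kept as a literal character
```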
https://api.github.com/repos/pandas-dev/pandas/pulls/3853
2013-06-11T19:50:17Z
2013-06-26T14:49:49Z
2013-06-26T14:49:49Z
2013-06-26T14:53:25Z
ENH use pyperclip for read and to_clipboard
diff --git a/LICENSES/OTHER b/LICENSES/OTHER index a1b367fe6061c..f0550b4ee208a 100644 --- a/LICENSES/OTHER +++ b/LICENSES/OTHER @@ -48,3 +48,33 @@ distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. + +Pyperclip v1.3 license +---------------------- + +Copyright (c) 2010, Albert Sweigart +All rights reserved. + +BSD-style license: + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + * Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + * Neither the name of the pyperclip nor the + names of its contributors may be used to endorse or promote products + derived from this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY Albert Sweigart "AS IS" AND ANY +EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +DISCLAIMED. IN NO EVENT SHALL Albert Sweigart BE LIABLE FOR ANY +DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES +(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND +ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
\ No newline at end of file diff --git a/RELEASE.rst b/RELEASE.rst index 307986ab81681..a03451542796a 100644 --- a/RELEASE.rst +++ b/RELEASE.rst @@ -73,6 +73,8 @@ pandas 0.11.1 - ``melt`` now accepts the optional parameters ``var_name`` and ``value_name`` to specify custom column names of the returned DataFrame (GH3649_), thanks @hoechenberger + - clipboard functions use pyperclip (no dependencies on Windows, alternative + dependencies offered for Linux) (GH3837_). - Plotting functions now raise a ``TypeError`` before trying to plot anything if the associated objects have have a dtype of ``object`` (GH1818_, GH3572_). This happens before any drawing takes place which elimnates any diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 5533584745167..1ea9c48f45269 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -492,6 +492,16 @@ def to_hdf(self, path_or_buf, key, **kwargs): return pytables.to_hdf(path_or_buf, key, self, **kwargs) def to_clipboard(self): + """ + Attempt to write text representation of object to the system clipboard + + Notes + ----- + Requirements for your platform + - Linux: xclip, or xsel (with gtk or PyQt4 modules) + - Windows: + - OS X: + """ from pandas.io import clipboard clipboard.to_clipboard(self) diff --git a/pandas/io/clipboard.py b/pandas/io/clipboard.py index c763c1e8faadb..4e3f7203a279e 100644 --- a/pandas/io/clipboard.py +++ b/pandas/io/clipboard.py @@ -23,8 +23,8 @@ def to_clipboard(obj): # pragma: no cover Notes ----- Requirements for your platform - - Linux: xsel command line tool - - Windows: Python win32 extensions + - Linux: xclip, or xsel (with gtk or PyQt4 modules) + - Windows: - OS X: """ from pandas.util.clipboard import clipboard_set diff --git a/pandas/util/clipboard.py b/pandas/util/clipboard.py index bc58af8c0ea3c..9f3ee0638352f 100644 --- a/pandas/util/clipboard.py +++ b/pandas/util/clipboard.py @@ -1,119 +1,160 @@ -""" -Taken from the IPython project http://ipython.org - -Used under the 
terms of the BSD license -""" - -import subprocess -import sys - - -def clipboard_get(): - """ Get text from the clipboard. - """ - if sys.platform == 'win32': - try: - return win32_clipboard_get() - except Exception: - pass - elif sys.platform == 'darwin': - try: - return osx_clipboard_get() - except Exception: - pass - return tkinter_clipboard_get() - - -def clipboard_set(text): - """ Get text from the clipboard. - """ - if sys.platform == 'win32': - try: - return win32_clipboard_set(text) - except Exception: - raise - elif sys.platform == 'darwin': - try: - return osx_clipboard_set(text) - except Exception: - pass - xsel_clipboard_set(text) - - -def win32_clipboard_get(): - """ Get the current clipboard's text on Windows. - - Requires Mark Hammond's pywin32 extensions. - """ - try: - import win32clipboard - except ImportError: - message = ("Getting text from the clipboard requires the pywin32 " - "extensions: http://sourceforge.net/projects/pywin32/") - raise Exception(message) - win32clipboard.OpenClipboard() - text = win32clipboard.GetClipboardData(win32clipboard.CF_TEXT) - # FIXME: convert \r\n to \n? - win32clipboard.CloseClipboard() - return text - - -def osx_clipboard_get(): - """ Get the clipboard's text on OS X. - """ - p = subprocess.Popen(['pbpaste', '-Prefer', 'ascii'], - stdout=subprocess.PIPE) - text, stderr = p.communicate() - # Text comes in with old Mac \r line endings. Change them to \n. - text = text.replace('\r', '\n') - return text - - -def tkinter_clipboard_get(): - """ Get the clipboard's text using Tkinter. - - This is the default on systems that are not Windows or OS X. It may - interfere with other UI toolkits and should be replaced with an - implementation that uses that toolkit. - """ +# Pyperclip v1.3 +# A cross-platform clipboard module for Python. 
(only handles plain text for now) +# By Al Sweigart al@coffeeghost.net + +# Usage: +# import pyperclip +# pyperclip.copy('The text to be copied to the clipboard.') +# spam = pyperclip.paste() + +# On Mac, this module makes use of the pbcopy and pbpaste commands, which should come with the os. +# On Linux, this module makes use of the xclip command, which should come with the os. Otherwise run "sudo apt-get install xclip" + + +# Copyright (c) 2010, Albert Sweigart +# All rights reserved. +# +# BSD-style license: +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are met: +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# * Neither the name of the pyperclip nor the +# names of its contributors may be used to endorse or promote products +# derived from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY Albert Sweigart "AS IS" AND ANY +# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +# DISCLAIMED. IN NO EVENT SHALL Albert Sweigart BE LIABLE FOR ANY +# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES +# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND +# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +# Change Log: +# 1.2 Use the platform module to help determine OS. +# 1.3 Changed ctypes.windll.user32.OpenClipboard(None) to ctypes.windll.user32.OpenClipboard(0), after some people ran into some TypeError + +import platform, os + +def winGetClipboard(): + ctypes.windll.user32.OpenClipboard(0) + pcontents = ctypes.windll.user32.GetClipboardData(1) # 1 is CF_TEXT + data = ctypes.c_char_p(pcontents).value + #ctypes.windll.kernel32.GlobalUnlock(pcontents) + ctypes.windll.user32.CloseClipboard() + return data + +def winSetClipboard(text): + GMEM_DDESHARE = 0x2000 + ctypes.windll.user32.OpenClipboard(0) + ctypes.windll.user32.EmptyClipboard() try: - import Tkinter - except ImportError: - message = ("Getting text from the clipboard on this platform " - "requires Tkinter.") - raise Exception(message) - root = Tkinter.Tk() - root.withdraw() - text = root.clipboard_get() - root.destroy() - return text - - -def win32_clipboard_set(text): - # idiosyncratic win32 import issues - import pywintypes as _ - import win32clipboard - win32clipboard.OpenClipboard() + # works on Python 2 (bytes() only takes one argument) + hCd = ctypes.windll.kernel32.GlobalAlloc(GMEM_DDESHARE, len(bytes(text))+1) + except TypeError: + # works on Python 3 (bytes() requires an encoding) + hCd = ctypes.windll.kernel32.GlobalAlloc(GMEM_DDESHARE, len(bytes(text, 'ascii'))+1) + pchData = ctypes.windll.kernel32.GlobalLock(hCd) try: - win32clipboard.EmptyClipboard() - win32clipboard.SetClipboardText(_fix_line_endings(text)) - finally: - win32clipboard.CloseClipboard() - - -def _fix_line_endings(text): - return '\r\n'.join(text.splitlines()) - - -def osx_clipboard_set(text): - """ Get the clipboard's text on OS X. 
- """ - p = subprocess.Popen(['pbcopy', '-Prefer', 'ascii'], - stdin=subprocess.PIPE) - p.communicate(input=text) - - -def xsel_clipboard_set(text): - from subprocess import Popen, PIPE - p = Popen(['xsel', '-bi'], stdin=PIPE) - p.communicate(input=text) + # works on Python 2 (bytes() only takes one argument) + ctypes.cdll.msvcrt.strcpy(ctypes.c_char_p(pchData), bytes(text)) + except TypeError: + # works on Python 3 (bytes() requires an encoding) + ctypes.cdll.msvcrt.strcpy(ctypes.c_char_p(pchData), bytes(text, 'ascii')) + ctypes.windll.kernel32.GlobalUnlock(hCd) + ctypes.windll.user32.SetClipboardData(1,hCd) + ctypes.windll.user32.CloseClipboard() + +def macSetClipboard(text): + outf = os.popen('pbcopy', 'w') + outf.write(text) + outf.close() + +def macGetClipboard(): + outf = os.popen('pbpaste', 'r') + content = outf.read() + outf.close() + return content + +def gtkGetClipboard(): + return gtk.Clipboard().wait_for_text() + +def gtkSetClipboard(text): + cb = gtk.Clipboard() + cb.set_text(text) + cb.store() + +def qtGetClipboard(): + return str(cb.text()) + +def qtSetClipboard(text): + cb.setText(text) + +def xclipSetClipboard(text): + outf = os.popen('xclip -selection c', 'w') + outf.write(text) + outf.close() + +def xclipGetClipboard(): + outf = os.popen('xclip -selection c -o', 'r') + content = outf.read() + outf.close() + return content + +def xselSetClipboard(text): + outf = os.popen('xsel -i', 'w') + outf.write(text) + outf.close() + +def xselGetClipboard(): + outf = os.popen('xsel -o', 'r') + content = outf.read() + outf.close() + return content + + +if os.name == 'nt' or platform.system() == 'Windows': + import ctypes + getcb = winGetClipboard + setcb = winSetClipboard +elif os.name == 'mac' or platform.system() == 'Darwin': + getcb = macGetClipboard + setcb = macSetClipboard +elif os.name == 'posix' or platform.system() == 'Linux': + xclipExists = os.system('which xclip') == 0 + if xclipExists: + getcb = xclipGetClipboard + setcb = xclipSetClipboard + 
else: + xselExists = os.system('which xsel') == 0 + if xselExists: + getcb = xselGetClipboard + setcb = xselSetClipboard + try: + import gtk + getcb = gtkGetClipboard + setcb = gtkSetClipboard + except: + try: + import PyQt4.QtCore + import PyQt4.QtGui + app = QApplication([]) + cb = PyQt4.QtGui.QApplication.clipboard() + getcb = qtGetClipboard + setcb = qtSetClipboard + except: + raise Exception('Pyperclip requires the gtk or PyQt4 module installed, or the xclip command.') +copy = setcb +paste = getcb + +## pandas aliases +clipboard_get = paste +clipboard_set = copy \ No newline at end of file
Use [pyperclip](http://coffeeghost.net/src/pyperclip.py) to manage copying and pasting. Fixes #3837; also cc #3845.
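The vendored pyperclip picks a clipboard backend at import time by probing the platform in a fixed order (xclip, then xsel, then gtk, then PyQt4, as the diff shows for Linux). A generic sketch of that fallback chain; `pick_clipboard_backend` and `_probe` are hypothetical helpers for illustration, with the `available` set standing in for the real `which xclip` checks and import attempts:

```python
def pick_clipboard_backend(candidates, available):
    """Return the first backend whose probe succeeds, mirroring
    pyperclip's xclip -> xsel -> gtk -> PyQt4 fallback chain."""
    for name, probe in candidates:
        if probe(name, available):
            return name
    # same failure mode as the vendored module
    raise RuntimeError('Pyperclip requires the gtk or PyQt4 module '
                       'installed, or the xclip command.')

def _probe(name, available):
    # Stand-in for os.system('which xclip') == 0 or a successful import.
    return name in available

chain = [(n, _probe) for n in ('xclip', 'xsel', 'gtk', 'PyQt4')]
print(pick_clipboard_backend(chain, {'xsel', 'gtk'}))  # first hit wins: xsel
```

In the real module the winning backend's get/set functions are bound to `clipboard_get`/`clipboard_set`, which is why a missing xclip silently degrades to the gtk or PyQt4 path.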
https://api.github.com/repos/pandas-dev/pandas/pulls/3848
2013-06-11T10:33:11Z
2013-06-13T18:41:39Z
2013-06-13T18:41:39Z
2014-07-16T08:13:24Z
Io to clipboard
diff --git a/doc/source/io.rst b/doc/source/io.rst index 9d923d2d0e0cf..d01b671bbae67 100644 --- a/doc/source/io.rst +++ b/doc/source/io.rst @@ -1231,6 +1231,26 @@ And then import the data directly to a DataFrame by calling: clipdf +The ``to_clipboard`` method can be used to write the contents of a DataFrame to +the clipboard. Following which you can paste the clipboard contents into other +applications (CTRL-V on many operating systems). Here we illustrate writing a +DataFrame into clipboard and reading it back. + +.. ipython:: python + + df=pd.DataFrame(randn(5,3)) + df + df.to_clipboard() + pd.read_clipboard() + +We can see that we got the same content back, which we had earlier written to the clipboard. + +.. note:: + + You may need to install xclip or xsel (with gtk or PyQt4 modules) on Linux to use these methods. + + + .. _io.excel:
Added documentation for to_clipboard(). Closes #3784
https://api.github.com/repos/pandas-dev/pandas/pulls/3845
2013-06-11T05:21:44Z
2013-06-13T18:42:30Z
2013-06-13T18:42:30Z
2014-07-16T08:13:23Z
ENH: add figsize argument to DataFrame and Series hist methods
diff --git a/RELEASE.rst b/RELEASE.rst index 307986ab81681..8256b13b4e553 100644 --- a/RELEASE.rst +++ b/RELEASE.rst @@ -79,6 +79,7 @@ pandas 0.11.1 spurious plots from showing up. - Added Faq section on repr display options, to help users customize their setup. - ``where`` operations that result in block splitting are much faster (GH3733_) + - Series and DataFrame hist methods now take a ``figsize`` argument (GH3834_) **API Changes** @@ -312,6 +313,8 @@ pandas 0.11.1 .. _GH3726: https://github.com/pydata/pandas/issues/3726 .. _GH3795: https://github.com/pydata/pandas/issues/3795 .. _GH3814: https://github.com/pydata/pandas/issues/3814 +.. _GH3834: https://github.com/pydata/pandas/issues/3834 + pandas 0.11.0 ============= diff --git a/doc/source/v0.11.1.txt b/doc/source/v0.11.1.txt index 5045f73375a97..34ba9f0859641 100644 --- a/doc/source/v0.11.1.txt +++ b/doc/source/v0.11.1.txt @@ -288,6 +288,8 @@ Enhancements dff.groupby('B').filter(lambda x: len(x) > 2, dropna=False) + - Series and DataFrame hist methods now take a ``figsize`` argument (GH3834_) + Bug Fixes ~~~~~~~~~ @@ -396,3 +398,4 @@ on GitHub for a complete list. .. _GH3741: https://github.com/pydata/pandas/issues/3741 .. _GH3726: https://github.com/pydata/pandas/issues/3726 .. _GH3425: https://github.com/pydata/pandas/issues/3425 +.. 
_GH3834: https://github.com/pydata/pandas/issues/3834 diff --git a/pandas/tests/test_graphics.py b/pandas/tests/test_graphics.py index 5a1411ccf577e..0755caf45d336 100644 --- a/pandas/tests/test_graphics.py +++ b/pandas/tests/test_graphics.py @@ -8,7 +8,7 @@ from pandas import Series, DataFrame, MultiIndex, PeriodIndex, date_range import pandas.util.testing as tm from pandas.util.testing import ensure_clean -from pandas.core.config import set_option,get_option,config_prefix +from pandas.core.config import set_option import numpy as np @@ -28,11 +28,6 @@ class TestSeriesPlots(unittest.TestCase): @classmethod def setUpClass(cls): - import sys - - # if 'IPython' in sys.modules: - # raise nose.SkipTest - try: import matplotlib as mpl mpl.use('Agg', warn=False) @@ -150,9 +145,16 @@ def test_irregular_datetime(self): def test_hist(self): _check_plot_works(self.ts.hist) _check_plot_works(self.ts.hist, grid=False) - + _check_plot_works(self.ts.hist, figsize=(8, 10)) _check_plot_works(self.ts.hist, by=self.ts.index.month) + def test_plot_fails_when_ax_differs_from_figure(self): + from pylab import figure + fig1 = figure() + fig2 = figure() + ax1 = fig1.add_subplot(111) + self.assertRaises(AssertionError, self.ts.hist, ax=ax1, figure=fig2) + @slow def test_kde(self): _skip_if_no_scipy() @@ -258,7 +260,8 @@ def test_plot(self): (u'\u03b4', 6), (u'\u03b4', 7)], names=['i0', 'i1']) columns = MultiIndex.from_tuples([('bar', u'\u0394'), - ('bar', u'\u0395')], names=['c0', 'c1']) + ('bar', u'\u0395')], names=['c0', + 'c1']) df = DataFrame(np.random.randint(0, 10, (8, 2)), columns=columns, index=index) @@ -269,9 +272,9 @@ def test_nonnumeric_exclude(self): import matplotlib.pyplot as plt plt.close('all') - df = DataFrame({'A': ["x", "y", "z"], 'B': [1,2,3]}) + df = DataFrame({'A': ["x", "y", "z"], 'B': [1, 2, 3]}) ax = df.plot() - self.assert_(len(ax.get_lines()) == 1) #B was plotted + self.assert_(len(ax.get_lines()) == 1) # B was plotted @slow def test_label(self): @@ -434,21 
+437,24 @@ def test_bar_center(self): ax = df.plot(kind='bar', grid=True) self.assertEqual(ax.xaxis.get_ticklocs()[0], ax.patches[0].get_x() + ax.patches[0].get_width()) + @slow def test_bar_log(self): # GH3254, GH3298 matplotlib/matplotlib#1882, #1892 # regressions in 1.2.1 - df = DataFrame({'A': [3] * 5, 'B': range(1,6)}, index=range(5)) - ax = df.plot(kind='bar', grid=True,log=True) - self.assertEqual(ax.yaxis.get_ticklocs()[0],1.0) + df = DataFrame({'A': [3] * 5, 'B': range(1, 6)}, index=range(5)) + ax = df.plot(kind='bar', grid=True, log=True) + self.assertEqual(ax.yaxis.get_ticklocs()[0], 1.0) - p1 = Series([200,500]).plot(log=True,kind='bar') - p2 = DataFrame([Series([200,300]),Series([300,500])]).plot(log=True,kind='bar',subplots=True) + p1 = Series([200, 500]).plot(log=True, kind='bar') + p2 = DataFrame([Series([200, 300]), + Series([300, 500])]).plot(log=True, kind='bar', + subplots=True) - (p1.yaxis.get_ticklocs() == np.array([ 0.625, 1.625])) - (p2[0].yaxis.get_ticklocs() == np.array([ 1., 10., 100., 1000.])).all() - (p2[1].yaxis.get_ticklocs() == np.array([ 1., 10., 100., 1000.])).all() + (p1.yaxis.get_ticklocs() == np.array([0.625, 1.625])) + (p2[0].yaxis.get_ticklocs() == np.array([1., 10., 100., 1000.])).all() + (p2[1].yaxis.get_ticklocs() == np.array([1., 10., 100., 1000.])).all() @slow def test_boxplot(self): @@ -508,6 +514,9 @@ def test_hist(self): # make sure sharex, sharey is handled _check_plot_works(df.hist, sharex=True, sharey=True) + # handle figsize arg + _check_plot_works(df.hist, figsize=(8, 10)) + # make sure xlabelsize and xrot are handled ser = df[0] xf, yf = 20, 20 @@ -727,6 +736,7 @@ def test_invalid_kind(self): df = DataFrame(np.random.randn(10, 2)) self.assertRaises(ValueError, df.plot, kind='aasdf') + class TestDataFrameGroupByPlots(unittest.TestCase): @classmethod @@ -786,10 +796,10 @@ def test_time_series_plot_color_with_empty_kwargs(self): plt.close('all') for i in range(3): - ax = Series(np.arange(12) + 1, index=date_range( - 
'1/1/2000', periods=12)).plot() + ax = Series(np.arange(12) + 1, index=date_range('1/1/2000', + periods=12)).plot() - line_colors = [ l.get_color() for l in ax.get_lines() ] + line_colors = [l.get_color() for l in ax.get_lines()] self.assert_(line_colors == ['b', 'g', 'r']) @slow @@ -829,7 +839,6 @@ def test_grouped_hist(self): self.assertRaises(AttributeError, plotting.grouped_hist, df.A, by=df.C, foo='bar') - def test_option_mpl_style(self): # just a sanity check try: @@ -845,6 +854,7 @@ def test_option_mpl_style(self): except ValueError: pass + def _check_plot_works(f, *args, **kwargs): import matplotlib.pyplot as plt @@ -852,7 +862,7 @@ def _check_plot_works(f, *args, **kwargs): plt.clf() ax = fig.add_subplot(211) ret = f(*args, **kwargs) - assert(ret is not None) # do something more intelligent + assert ret is not None # do something more intelligent ax = fig.add_subplot(212) try: @@ -865,10 +875,12 @@ def _check_plot_works(f, *args, **kwargs): with ensure_clean() as path: plt.savefig(path) + def curpath(): pth, _ = os.path.split(os.path.abspath(__file__)) return pth + if __name__ == '__main__': nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], exit=False) diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py index e25e83a40b267..83ad58c1eb41c 100644 --- a/pandas/tools/plotting.py +++ b/pandas/tools/plotting.py @@ -658,9 +658,9 @@ def r(h): return ax -def grouped_hist(data, column=None, by=None, ax=None, bins=50, - figsize=None, layout=None, sharex=False, sharey=False, - rot=90, grid=True, **kwargs): +def grouped_hist(data, column=None, by=None, ax=None, bins=50, figsize=None, + layout=None, sharex=False, sharey=False, rot=90, grid=True, + **kwargs): """ Grouped histogram @@ -1839,10 +1839,9 @@ def plot_group(group, ax): return fig -def hist_frame( - data, column=None, by=None, grid=True, xlabelsize=None, xrot=None, - ylabelsize=None, yrot=None, ax=None, - sharex=False, sharey=False, **kwds): +def hist_frame(data, column=None, 
by=None, grid=True, xlabelsize=None, + xrot=None, ylabelsize=None, yrot=None, ax=None, sharex=False, + sharey=False, figsize=None, **kwds): """ Draw Histogram the DataFrame's series using matplotlib / pylab. @@ -1866,17 +1865,20 @@ def hist_frame( ax : matplotlib axes object, default None sharex : bool, if True, the X axis will be shared amongst all subplots. sharey : bool, if True, the Y axis will be shared amongst all subplots. + figsize : tuple + The size of the figure to create in inches by default kwds : other plotting keyword arguments To be passed to hist function """ if column is not None: if not isinstance(column, (list, np.ndarray)): column = [column] - data = data.ix[:, column] + data = data[column] if by is not None: - axes = grouped_hist(data, by=by, ax=ax, grid=grid, **kwds) + axes = grouped_hist(data, by=by, ax=ax, grid=grid, figsize=figsize, + **kwds) for ax in axes.ravel(): if xlabelsize is not None: @@ -1898,11 +1900,11 @@ def hist_frame( rows += 1 else: cols += 1 - _, axes = _subplots(nrows=rows, ncols=cols, ax=ax, squeeze=False, - sharex=sharex, sharey=sharey) + fig, axes = _subplots(nrows=rows, ncols=cols, ax=ax, squeeze=False, + sharex=sharex, sharey=sharey, figsize=figsize) for i, col in enumerate(com._try_sort(data.columns)): - ax = axes[i / cols][i % cols] + ax = axes[i / cols, i % cols] ax.xaxis.set_visible(True) ax.yaxis.set_visible(True) ax.hist(data[col].dropna().values, **kwds) @@ -1922,13 +1924,13 @@ def hist_frame( ax = axes[j / cols, j % cols] ax.set_visible(False) - ax.get_figure().subplots_adjust(wspace=0.3, hspace=0.3) + fig.subplots_adjust(wspace=0.3, hspace=0.3) return axes def hist_series(self, by=None, ax=None, grid=True, xlabelsize=None, - xrot=None, ylabelsize=None, yrot=None, **kwds): + xrot=None, ylabelsize=None, yrot=None, figsize=None, **kwds): """ Draw histogram of the input series using matplotlib @@ -1948,6 +1950,8 @@ def hist_series(self, by=None, ax=None, grid=True, xlabelsize=None, If specified changes the y-axis 
label size yrot : float, default None rotation of y axis labels + figsize : tuple, default None + figure size in inches by default kwds : keywords To be passed to the actual plotting function @@ -1958,16 +1962,22 @@ def hist_series(self, by=None, ax=None, grid=True, xlabelsize=None, """ import matplotlib.pyplot as plt + fig = kwds.setdefault('figure', plt.figure(figsize=figsize)) + if by is None: if ax is None: - ax = plt.gca() + ax = fig.add_subplot(111) + else: + if ax.get_figure() != fig: + raise AssertionError('passed axis not bound to passed figure') values = self.dropna().values ax.hist(values, **kwds) ax.grid(grid) axes = np.array([ax]) else: - axes = grouped_hist(self, by=by, ax=ax, grid=grid, **kwds) + axes = grouped_hist(self, by=by, ax=ax, grid=grid, figsize=figsize, + **kwds) for ax in axes.ravel(): if xlabelsize is not None:
closes #3834
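The `hist_frame` changes above keep the grid-layout loop that grows a `(rows, cols)` subplot grid one axis at a time until it can hold one histogram per column, then indexes axes as `axes[i / cols, i % cols]`. A standalone sketch of that loop; the function name is hypothetical and the growth order is inferred from the visible `rows += 1` / `cols += 1` branches, not copied verbatim:

```python
def hist_grid_shape(n_plots):
    """Grow a (rows, cols) grid until rows * cols >= n_plots,
    widening before deepening, as hist_frame's layout loop does."""
    rows, cols = 1, 1
    while rows * cols < n_plots:
        if cols > rows:
            rows += 1
        else:
            cols += 1
    return rows, cols

# plot i then lands on axes[i // cols, i % cols]
print(hist_grid_shape(5))  # -> (2, 3)
```

Any trailing cells beyond `n_plots` are hidden with `ax.set_visible(False)`, as the diff shows.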
https://api.github.com/repos/pandas-dev/pandas/pulls/3842
2013-06-10T21:24:08Z
2013-06-12T19:41:01Z
2013-06-12T19:41:00Z
2014-06-20T00:43:51Z
BUG: GH3611 fix again, float na_values were not stringified correctly
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py index 6e937ba696e39..e4fb478a2a288 100644 --- a/pandas/io/parsers.py +++ b/pandas/io/parsers.py @@ -297,6 +297,7 @@ def parser_f(filepath_or_buffer, skipfooter=None, skip_footer=0, na_values=None, + na_fvalues=None, true_values=None, false_values=None, delimiter=None, @@ -359,6 +360,7 @@ def parser_f(filepath_or_buffer, prefix=prefix, skiprows=skiprows, na_values=na_values, + na_fvalues=na_fvalues, true_values=true_values, false_values=false_values, keep_default_na=keep_default_na, @@ -554,7 +556,7 @@ def _clean_options(self, options, engine): converters = {} # Converting values to NA - na_values = _clean_na_values(na_values, keep_default_na) + na_values, na_fvalues = _clean_na_values(na_values, keep_default_na) if com.is_integer(skiprows): skiprows = range(skiprows) @@ -565,6 +567,7 @@ def _clean_options(self, options, engine): result['names'] = names result['converters'] = converters result['na_values'] = na_values + result['na_fvalues'] = na_fvalues result['skiprows'] = skiprows return result, engine @@ -644,6 +647,7 @@ def __init__(self, kwds): self.keep_date_col = kwds.pop('keep_date_col', False) self.na_values = kwds.get('na_values') + self.na_fvalues = kwds.get('na_fvalues') self.true_values = kwds.get('true_values') self.false_values = kwds.get('false_values') self.tupleize_cols = kwds.get('tupleize_cols',True) @@ -837,31 +841,34 @@ def _agg_index(self, index, try_parse_dates=True): arr = self._date_conv(arr) col_na_values = self.na_values + col_na_fvalues = self.na_fvalues if isinstance(self.na_values, dict): col_name = self.index_names[i] if col_name is not None: - col_na_values = _get_na_values(col_name, - self.na_values) - - arr, _ = self._convert_types(arr, col_na_values) + col_na_values, col_na_fvalues = _get_na_values(col_name, + self.na_values, + self.na_fvalues) + + arr, _ = self._convert_types(arr, col_na_values | col_na_fvalues) arrays.append(arr) index = MultiIndex.from_arrays(arrays, 
names=self.index_names) return index - def _convert_to_ndarrays(self, dct, na_values, verbose=False, + def _convert_to_ndarrays(self, dct, na_values, na_fvalues, verbose=False, converters=None): result = {} for c, values in dct.iteritems(): conv_f = None if converters is None else converters.get(c, None) - col_na_values = _get_na_values(c, na_values) + col_na_values, col_na_fvalues = _get_na_values(c, na_values, na_fvalues) coerce_type = True if conv_f is not None: values = lib.map_infer(values, conv_f) coerce_type = False - cvals, na_count = self._convert_types(values, col_na_values, + cvals, na_count = self._convert_types(values, + set(col_na_values) | col_na_fvalues, coerce_type) result[c] = cvals if verbose and na_count: @@ -1370,7 +1377,7 @@ def _convert_data(self, data): col = self.orig_names[col] clean_conv[col] = f - return self._convert_to_ndarrays(data, self.na_values, self.verbose, + return self._convert_to_ndarrays(data, self.na_values, self.na_fvalues, self.verbose, clean_conv) def _infer_columns(self): @@ -1754,37 +1761,26 @@ def _try_convert_dates(parser, colspec, data_dict, columns): def _clean_na_values(na_values, keep_default_na=True): + if na_values is None and keep_default_na: na_values = _NA_VALUES + na_fvalues = set() elif isinstance(na_values, dict): if keep_default_na: for k, v in na_values.iteritems(): v = set(list(v)) | _NA_VALUES na_values[k] = v + na_fvalues = dict([ (k, _floatify_na_values(v)) for k, v in na_values.items() ]) else: if not com.is_list_like(na_values): na_values = [na_values] - na_values = set(_stringify_na_values(na_values)) + na_values = _stringify_na_values(na_values) if keep_default_na: na_values = na_values | _NA_VALUES - return na_values + na_fvalues = _floatify_na_values(na_values) -def _stringify_na_values(na_values): - """ return a stringified and numeric for these values """ - result = [] - for x in na_values: - result.append(str(x)) - result.append(x) - try: - result.append(float(x)) - except: - pass - try: - 
result.append(int(x)) - except: - pass - return result + return na_values, na_fvalues def _clean_index_names(columns, index_col): if not _is_index_col(index_col): @@ -1832,14 +1828,52 @@ def _get_empty_meta(columns, index_col, index_names): return index, columns, {} -def _get_na_values(col, na_values): +def _floatify_na_values(na_values): + # create float versions of the na_values + result = set() + for v in na_values: + try: + v = float(v) + if not np.isnan(v): + result.add(v) + except: + pass + return result + +def _stringify_na_values(na_values): + """ return a stringified and numeric for these values """ + result = [] + for x in na_values: + result.append(str(x)) + result.append(x) + try: + v = float(x) + + # we are like 999 here + if v == int(v): + v = int(v) + result.append("%s.0" % v) + result.append(str(v)) + + result.append(v) + except: + pass + try: + result.append(int(x)) + except: + pass + return set(result) + +def _get_na_values(col, na_values, na_fvalues): if isinstance(na_values, dict): if col in na_values: - return set(_stringify_na_values(list(na_values[col]))) + values = na_values[col] + fvalues = na_fvalues[col] + return na_values[col], na_fvalues[col] else: - return _NA_VALUES + return _NA_VALUES, set() else: - return na_values + return na_values, na_fvalues def _get_col_names(colspec, columns): diff --git a/pandas/io/tests/test_parsers.py b/pandas/io/tests/test_parsers.py index cae4c0902a97c..cc2dddd829302 100644 --- a/pandas/io/tests/test_parsers.py +++ b/pandas/io/tests/test_parsers.py @@ -540,6 +540,36 @@ def test_non_string_na_values(self): tm.assert_frame_equal(result1,result2) tm.assert_frame_equal(result2,result3) + result4 = read_csv(path, sep= ' ', header=0, na_values=['-999.0']) + result5 = read_csv(path, sep= ' ', header=0, na_values=['-999']) + result6 = read_csv(path, sep= ' ', header=0, na_values=[-999.0]) + result7 = read_csv(path, sep= ' ', header=0, na_values=[-999]) + tm.assert_frame_equal(result4,result3) + 
tm.assert_frame_equal(result5,result3) + tm.assert_frame_equal(result6,result3) + tm.assert_frame_equal(result7,result3) + + good_compare = result3 + + # with an odd float format, so we can't match the string 999.0 exactly, + # but need float matching + df.to_csv(path, sep=' ', index=False, float_format = '%.3f') + result1 = read_csv(path, sep= ' ', header=0, na_values=['-999.0','-999']) + result2 = read_csv(path, sep= ' ', header=0, na_values=[-999,-999.0]) + result3 = read_csv(path, sep= ' ', header=0, na_values=[-999.0,-999]) + tm.assert_frame_equal(result1,good_compare) + tm.assert_frame_equal(result2,good_compare) + tm.assert_frame_equal(result3,good_compare) + + result4 = read_csv(path, sep= ' ', header=0, na_values=['-999.0']) + result5 = read_csv(path, sep= ' ', header=0, na_values=['-999']) + result6 = read_csv(path, sep= ' ', header=0, na_values=[-999.0]) + result7 = read_csv(path, sep= ' ', header=0, na_values=[-999]) + tm.assert_frame_equal(result4,good_compare) + tm.assert_frame_equal(result5,good_compare) + tm.assert_frame_equal(result6,good_compare) + tm.assert_frame_equal(result7,good_compare) + def test_custom_na_values(self): data = """A,B,C ignore,this,row diff --git a/pandas/parser.pyx b/pandas/parser.pyx index 004c23d09ccdf..eaa588ef4d150 100644 --- a/pandas/parser.pyx +++ b/pandas/parser.pyx @@ -231,7 +231,7 @@ cdef class TextReader: cdef: parser_t *parser - object file_handle + object file_handle, na_fvalues bint factorize, na_filter, verbose, has_usecols, has_mi_columns int parser_start list clocks @@ -294,6 +294,7 @@ cdef class TextReader: na_filter=True, na_values=None, + na_fvalues=None, true_values=None, false_values=None, @@ -391,6 +392,9 @@ cdef class TextReader: self.delim_whitespace = delim_whitespace self.na_values = na_values + if na_fvalues is None: + na_fvalues = set() + self.na_fvalues = na_fvalues self.true_values = _maybe_encode(true_values) self.false_values = _maybe_encode(false_values) @@ -834,7 +838,7 @@ cdef class 
TextReader: Py_ssize_t i, nused, ncols kh_str_t *na_hashset = NULL int start, end - object name + object name, na_flist bint na_filter = 0 start = self.parser_start @@ -863,8 +867,9 @@ cdef class TextReader: conv = self._get_converter(i, name) # XXX + na_flist = set() if self.na_filter: - na_list = self._get_na_list(i, name) + na_list, na_flist = self._get_na_list(i, name) if na_list is None: na_filter = 0 else: @@ -880,7 +885,7 @@ cdef class TextReader: # Should return as the desired dtype (inferred or specified) col_res, na_count = self._convert_tokens(i, start, end, name, - na_filter, na_hashset) + na_filter, na_hashset, na_flist) if na_filter: self._free_na_set(na_hashset) @@ -906,7 +911,8 @@ cdef class TextReader: cdef inline _convert_tokens(self, Py_ssize_t i, int start, int end, object name, bint na_filter, - kh_str_t *na_hashset): + kh_str_t *na_hashset, + object na_flist): cdef: object col_dtype = None @@ -930,7 +936,7 @@ cdef class TextReader: col_dtype = np.dtype(col_dtype).str return self._convert_with_dtype(col_dtype, i, start, end, - na_filter, 1, na_hashset) + na_filter, 1, na_hashset, na_flist) if i in self.noconvert: return self._string_convert(i, start, end, na_filter, na_hashset) @@ -939,10 +945,10 @@ cdef class TextReader: for dt in dtype_cast_order: try: col_res, na_count = self._convert_with_dtype( - dt, i, start, end, na_filter, 0, na_hashset) + dt, i, start, end, na_filter, 0, na_hashset, na_flist) except OverflowError: col_res, na_count = self._convert_with_dtype( - '|O8', i, start, end, na_filter, 0, na_hashset) + '|O8', i, start, end, na_filter, 0, na_hashset, na_flist) if col_res is not None: break @@ -953,7 +959,8 @@ cdef class TextReader: int start, int end, bint na_filter, bint user_dtype, - kh_str_t *na_hashset): + kh_str_t *na_hashset, + object na_flist): cdef kh_str_t *true_set, *false_set if dtype[1] == 'i' or dtype[1] == 'u': @@ -969,7 +976,7 @@ cdef class TextReader: elif dtype[1] == 'f': result, na_count = 
_try_double(self.parser, i, start, end, - na_filter, na_hashset) + na_filter, na_hashset, na_flist) if dtype[1:] != 'f8': result = result.astype(dtype) @@ -1060,7 +1067,7 @@ cdef class TextReader: cdef _get_na_list(self, i, name): if self.na_values is None: - return None + return None, set() if isinstance(self.na_values, dict): values = None @@ -1068,18 +1075,23 @@ cdef class TextReader: values = self.na_values[name] if values is not None and not isinstance(values, list): values = list(values) + fvalues = self.na_fvalues[name] + if fvalues is not None and not isinstance(fvalues, set): + fvalues = set(fvalues) else: if i in self.na_values: - return self.na_values[i] + return self.na_values[i], self.na_fvalues[i] else: - return _NA_VALUES + return _NA_VALUES, set() - return _ensure_encoded(values) + return _ensure_encoded(values), fvalues else: if not isinstance(self.na_values, list): self.na_values = list(self.na_values) + if not isinstance(self.na_fvalues, set): + self.na_fvalues = set(self.na_fvalues) - return _ensure_encoded(self.na_values) + return _ensure_encoded(self.na_values), self.na_fvalues cdef _free_na_set(self, kh_str_t *table): kh_destroy_str(table) @@ -1163,8 +1175,6 @@ def _maybe_upcast(arr): # ---------------------------------------------------------------------- # Type conversions / inference support code - - cdef _string_box_factorize(parser_t *parser, int col, int line_start, int line_end, bint na_filter, kh_str_t *na_hashset): @@ -1357,7 +1367,7 @@ cdef char* cinf = b'inf' cdef char* cneginf = b'-inf' cdef _try_double(parser_t *parser, int col, int line_start, int line_end, - bint na_filter, kh_str_t *na_hashset): + bint na_filter, kh_str_t *na_hashset, object na_flist): cdef: int error, na_count = 0 size_t i, lines @@ -1367,6 +1377,7 @@ cdef _try_double(parser_t *parser, int col, int line_start, int line_end, double NA = na_values[np.float64] ndarray result khiter_t k + bint use_na_flist = len(na_flist) > 0 lines = line_end - line_start result 
= np.empty(lines, dtype=np.float64) @@ -1391,6 +1402,10 @@ cdef _try_double(parser_t *parser, int col, int line_start, int line_end, data[0] = NEGINF else: return None, None + if use_na_flist: + if data[0] in na_flist: + na_count += 1 + data[0] = NA data += 1 else: for i in range(lines):
Now, 999.0 (a float) will have ['999', '999.0'] added for string matching, and will also match the float value 999.0 in a float column. closes #3611 again!
https://api.github.com/repos/pandas-dev/pandas/pulls/3841
2013-06-10T20:40:22Z
2013-06-11T17:43:23Z
2013-06-11T17:43:23Z
2014-07-07T04:45:28Z
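The NA-value change this PR describes is easy to see from the user side. A minimal sketch against a current pandas (the column names and data here are made up for illustration):

```python
import io
import pandas as pd

# Per the PR: a float na_value (999.0) matches its string forms
# ('999', '999.0') as well as the float 999.0 in a float column.
csv = "a,b\n1.0,x\n999.0,y\n3.0,z\n"
df = pd.read_csv(io.StringIO(csv), na_values=[999.0])
print(df["a"].isna().tolist())  # [False, True, False]
```

The second row's `999.0` is parsed as NaN rather than kept as a float value.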
BUG: fix Series.interpolate() corner cases, close #3674
diff --git a/pandas/core/series.py b/pandas/core/series.py index 3a7a7d0f49b66..3439aeb79e174 100644 --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -3180,14 +3180,15 @@ def interpolate(self, method='linear'): invalid = isnull(values) valid = -invalid - firstIndex = valid.argmax() - valid = valid[firstIndex:] - invalid = invalid[firstIndex:] - inds = inds[firstIndex:] - result = values.copy() - result[firstIndex:][invalid] = np.interp(inds[invalid], inds[valid], - values[firstIndex:][valid]) + if valid.any(): + firstIndex = valid.argmax() + valid = valid[firstIndex:] + invalid = invalid[firstIndex:] + inds = inds[firstIndex:] + + result[firstIndex:][invalid] = np.interp( + inds[invalid], inds[valid], values[firstIndex:][valid]) return Series(result, index=self.index, name=self.name) diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py index e1589b9499757..58ca34b73b6a0 100644 --- a/pandas/tests/test_series.py +++ b/pandas/tests/test_series.py @@ -4063,6 +4063,13 @@ def test_interpolate(self): # try time interpolation on a non-TimeSeries self.assertRaises(Exception, self.series.interpolate, method='time') + def test_interpolate_corners(self): + s = Series([np.nan, np.nan]) + assert_series_equal(s.interpolate(), s) + + s = Series([]).interpolate() + assert_series_equal(s.interpolate(), s) + def test_interpolate_index_values(self): s = Series(np.nan, index=np.sort(np.random.rand(30))) s[::3] = np.random.randn(10)
closes #3674
https://api.github.com/repos/pandas-dev/pandas/pulls/3840
2013-06-10T19:15:24Z
2013-06-13T18:42:47Z
2013-06-13T18:42:47Z
2014-07-16T08:13:15Z
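The corner cases fixed above are straightforward to exercise; a minimal sketch, assuming a current pandas where the same behavior holds:

```python
import numpy as np
import pandas as pd

# All-NaN and empty Series no longer raise; they come back unchanged.
all_nan = pd.Series([np.nan, np.nan])
print(all_nan.interpolate().isna().all())  # True

empty = pd.Series([], dtype=float)
print(len(empty.interpolate()))  # 0

# The normal path still fills interior gaps linearly.
print(pd.Series([1.0, np.nan, 3.0]).interpolate().tolist())  # [1.0, 2.0, 3.0]
```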
TST regression tests for GH3836
diff --git a/pandas/tests/test_indexing.py b/pandas/tests/test_indexing.py index ad3d150c7e0ad..e7f824ace983c 100644 --- a/pandas/tests/test_indexing.py +++ b/pandas/tests/test_indexing.py @@ -974,6 +974,23 @@ def test_iloc_mask(self): (key,ans,r)) warnings.filterwarnings(action='always', category=UserWarning) + def test_ix_slicing_strings(self): + ##GH3836 + data = {'Classification': ['SA EQUITY CFD', 'bbb', 'SA EQUITY', 'SA SSF', 'aaa'], + 'Random': [1,2,3,4,5], + 'X': ['correct', 'wrong','correct', 'correct','wrong']} + df = DataFrame(data) + x = df[~df.Classification.isin(['SA EQUITY CFD', 'SA EQUITY', 'SA SSF'])] + df.ix[x.index,'X'] = df['Classification'] + + expected = DataFrame({'Classification': {0: 'SA EQUITY CFD', 1: 'bbb', + 2: 'SA EQUITY', 3: 'SA SSF', 4: 'aaa'}, + 'Random': {0: 1, 1: 2, 2: 3, 3: 4, 4: 5}, + 'X': {0: 'correct', 1: 'bbb', 2: 'correct', + 3: 'correct', 4: 'aaa'}}) # bug was 4: 'bbb' + + assert_frame_equal(df, expected) + def test_non_unique_loc(self): ## GH3659 ## non-unique indexer with loc slice
cc #3836. Added a test for the example from the OP (which does not work in 0.11 but is fixed in master).
https://api.github.com/repos/pandas-dev/pandas/pulls/3839
2013-06-10T19:13:04Z
2013-06-10T19:13:17Z
2013-06-10T19:13:17Z
2014-06-26T11:27:24Z
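The regression test above leans on `.ix`, which later pandas versions removed; the same check can be sketched with `.loc` (behavior assumed equivalent for this case):

```python
import pandas as pd

# Rows not in the given classifications get X overwritten from
# Classification, aligned by index.
df = pd.DataFrame({
    "Classification": ["SA EQUITY CFD", "bbb", "SA EQUITY", "SA SSF", "aaa"],
    "Random": [1, 2, 3, 4, 5],
    "X": ["correct", "wrong", "correct", "correct", "wrong"],
})
mask = ~df.Classification.isin(["SA EQUITY CFD", "SA EQUITY", "SA SSF"])
df.loc[mask, "X"] = df["Classification"]
print(df["X"].tolist())  # ['correct', 'bbb', 'correct', 'correct', 'aaa']
```

Before the fix, the last masked row incorrectly received `'bbb'` instead of `'aaa'`.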
ENH: support for msgpack serialization/deserialization
diff --git a/LICENSES/MSGPACK_LICENSE b/LICENSES/MSGPACK_LICENSE new file mode 100644 index 0000000000000..ae1b0f2f32f06 --- /dev/null +++ b/LICENSES/MSGPACK_LICENSE @@ -0,0 +1,13 @@ +Copyright (C) 2008-2011 INADA Naoki <songofacandy@gmail.com> + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. \ No newline at end of file diff --git a/LICENSES/MSGPACK_NUMPY_LICENSE b/LICENSES/MSGPACK_NUMPY_LICENSE new file mode 100644 index 0000000000000..e570011efac73 --- /dev/null +++ b/LICENSES/MSGPACK_NUMPY_LICENSE @@ -0,0 +1,33 @@ +.. -*- rst -*- + +License +======= + +Copyright (c) 2013, Lev Givon. +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + +* Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. +* Redistributions in binary form must reproduce the above + copyright notice, this list of conditions and the following + disclaimer in the documentation and/or other materials provided + with the distribution. +* Neither the name of Lev Givon nor the names of any + contributors may be used to endorse or promote products derived + from this software without specific prior written permission. 
+ +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/doc/source/io.rst b/doc/source/io.rst index 5e04fcff61539..9442f59425106 100644 --- a/doc/source/io.rst +++ b/doc/source/io.rst @@ -36,6 +36,7 @@ object. * ``read_hdf`` * ``read_sql`` * ``read_json`` + * ``read_msgpack`` (experimental) * ``read_html`` * ``read_stata`` * ``read_clipboard`` @@ -48,6 +49,7 @@ The corresponding ``writer`` functions are object methods that are accessed like * ``to_hdf`` * ``to_sql`` * ``to_json`` + * ``to_msgpack`` (experimental) * ``to_html`` * ``to_stata`` * ``to_clipboard`` @@ -1732,6 +1734,72 @@ module is installed you can use it as a xlsx writer engine as follows: .. _io.hdf5: +Serialization +------------- + +msgpack (experimental) +~~~~~~~~~~~~~~~~~~~~~~ + +.. _io.msgpack: + +.. versionadded:: 0.13.0 + +Starting in 0.13.0, pandas is supporting the ``msgpack`` format for +object serialization. This is a lightweight portable binary format, similar +to binary JSON, that is highly space efficient, and provides good performance +both on the writing (serialization), and reading (deserialization). + +.. warning:: + + This is a very new feature of pandas. We intend to provide certain + optimizations in the io of the ``msgpack`` data. 
Since this is marked + as an EXPERIMENTAL LIBRARY, the storage format may not be stable until a future release. + +.. ipython:: python + + df = DataFrame(np.random.rand(5,2),columns=list('AB')) + df.to_msgpack('foo.msg') + pd.read_msgpack('foo.msg') + s = Series(np.random.rand(5),index=date_range('20130101',periods=5)) + +You can pass a list of objects and you will receive them back on deserialization. + +.. ipython:: python + + pd.to_msgpack('foo.msg', df, 'foo', np.array([1,2,3]), s) + pd.read_msgpack('foo.msg') + +You can pass ``iterator=True`` to iterate over the unpacked results + +.. ipython:: python + + for o in pd.read_msgpack('foo.msg',iterator=True): + print o + +You can pass ``append=True`` to the writer to append to an existing pack + +.. ipython:: python + + df.to_msgpack('foo.msg',append=True) + pd.read_msgpack('foo.msg') + +Unlike other io methods, ``to_msgpack`` is available on both a per-object basis, +``df.to_msgpack()`` and using the top-level ``pd.to_msgpack(...)`` where you +can pack arbitrary collections of python lists, dicts, scalars, while intermixing +pandas objects. + +.. ipython:: python + + pd.to_msgpack('foo2.msg', { 'dict' : [ { 'df' : df }, { 'string' : 'foo' }, { 'scalar' : 1. }, { 's' : s } ] }) + pd.read_msgpack('foo2.msg') + +.. ipython:: python + :suppress: + :okexcept: + + os.remove('foo.msg') + os.remove('foo2.msg') + HDF5 (PyTables) --------------- diff --git a/doc/source/release.rst b/doc/source/release.rst index 65e6ca0e1d95c..be62ef7d31a0b 100644 --- a/doc/source/release.rst +++ b/doc/source/release.rst @@ -64,17 +64,19 @@ New features Experimental Features ~~~~~~~~~~~~~~~~~~~~~ -- The new :func:`~pandas.eval` function implements expression evaluation using - ``numexpr`` behind the scenes. This results in large speedups for complicated - expressions involving large DataFrames/Series. -- :class:`~pandas.DataFrame` has a new :meth:`~pandas.DataFrame.eval` that - evaluates an expression in the context of the ``DataFrame``. 
-- A :meth:`~pandas.DataFrame.query` method has been added that allows - you to select elements of a ``DataFrame`` using a natural query syntax nearly - identical to Python syntax. -- ``pd.eval`` and friends now evaluate operations involving ``datetime64`` - objects in Python space because ``numexpr`` cannot handle ``NaT`` values - (:issue:`4897`). + - The new :func:`~pandas.eval` function implements expression evaluation using + ``numexpr`` behind the scenes. This results in large speedups for complicated + expressions involving large DataFrames/Series. + - :class:`~pandas.DataFrame` has a new :meth:`~pandas.DataFrame.eval` that + evaluates an expression in the context of the ``DataFrame``. + - A :meth:`~pandas.DataFrame.query` method has been added that allows + you to select elements of a ``DataFrame`` using a natural query syntax nearly + identical to Python syntax. + - ``pd.eval`` and friends now evaluate operations involving ``datetime64`` + objects in Python space because ``numexpr`` cannot handle ``NaT`` values + (:issue:`4897`). + - Add msgpack support via ``pd.read_msgpack()`` and ``pd.to_msgpack()/df.to_msgpack()`` for serialization + of arbitrary pandas (and python objects) in a lightweight portable binary format (:issue:`686`) Improvements to existing features ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/doc/source/v0.13.0.txt b/doc/source/v0.13.0.txt index 5ff7038d02e45..98099bac15900 100644 --- a/doc/source/v0.13.0.txt +++ b/doc/source/v0.13.0.txt @@ -464,6 +464,15 @@ Enhancements t = Timestamp('20130101 09:01:02') t + pd.datetools.Nano(123) + - The ``isin`` method plays nicely with boolean indexing. To get the rows where each condition is met: + + .. ipython:: python + + mask = df.isin({'A': [1, 2], 'B': ['e', 'f']}) + df[mask.all(1)] + + See the :ref:`documentation<indexing.basics.indexing_isin>` for more. + .. 
_whatsnew_0130.experimental: Experimental @@ -553,21 +562,35 @@ Experimental For more details see the :ref:`indexing documentation on query <indexing.query>`. - - DataFrame now has an ``isin`` method that can be used to easily check whether the DataFrame's values are contained in an iterable. Use a dictionary if you'd like to check specific iterables for specific columns or rows. +- ``pd.read_msgpack()`` and ``pd.to_msgpack()`` are now a supported method of serialization + of arbitrary pandas (and python objects) in a lightweight portable binary format. :ref:`See the docs<io.msgpack>` - .. ipython:: python + .. warning:: + + Since this is an EXPERIMENTAL LIBRARY, the storage format may not be stable until a future release. - df = pd.DataFrame({'A': [1, 2, 3], 'B': ['d', 'e', 'f']}) - df.isin({'A': [1, 2], 'B': ['e', 'f']}) + .. ipython:: python - The ``isin`` method plays nicely with boolean indexing. To get the rows where each condition is met: + df = DataFrame(np.random.rand(5,2),columns=list('AB')) + df.to_msgpack('foo.msg') + pd.read_msgpack('foo.msg') - .. ipython:: python + s = Series(np.random.rand(5),index=date_range('20130101',periods=5)) + pd.to_msgpack('foo.msg', df, s) + pd.read_msgpack('foo.msg') - mask = df.isin({'A': [1, 2], 'B': ['e', 'f']}) - df[mask.all(1)] + You can pass ``iterator=True`` to iterate over the unpacked results + + .. ipython:: python + + for o in pd.read_msgpack('foo.msg',iterator=True): + print o + + .. ipython:: python + :suppress: + :okexcept: - See the :ref:`documentation<indexing.basics.indexing_isin>` for more. + os.remove('foo.msg') ..
_whatsnew_0130.refactoring: diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 835b66512a89e..3142f74f2f5c5 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -805,6 +805,25 @@ def to_hdf(self, path_or_buf, key, **kwargs): from pandas.io import pytables return pytables.to_hdf(path_or_buf, key, self, **kwargs) + def to_msgpack(self, path_or_buf, **kwargs): + """ + msgpack (serialize) object to input file path + + THIS IS AN EXPERIMENTAL LIBRARY and the storage format + may not be stable until a future release. + + Parameters + ---------- + path : string File path + args : an object or objects to serialize + append : boolean whether to append to an existing msgpack + (default is False) + compress : type of compressor (zlib or blosc), default to None (no compression) + """ + + from pandas.io import packers + return packers.to_msgpack(path_or_buf, self, **kwargs) + def to_pickle(self, path): """ Pickle (serialize) object to input file path diff --git a/pandas/io/api.py b/pandas/io/api.py index 94deb51ab4b18..dc9ea290eb45e 100644 --- a/pandas/io/api.py +++ b/pandas/io/api.py @@ -11,3 +11,4 @@ from pandas.io.sql import read_sql from pandas.io.stata import read_stata from pandas.io.pickle import read_pickle, to_pickle +from pandas.io.packers import read_msgpack, to_msgpack diff --git a/pandas/io/packers.py b/pandas/io/packers.py new file mode 100644 index 0000000000000..d6aa1ebeb896a --- /dev/null +++ b/pandas/io/packers.py @@ -0,0 +1,534 @@ +""" +Msgpack serializer support for reading and writing pandas data structures +to disk +""" + +# portions of msgpack_numpy package, by Lev Givon were incorporated +# into this module (and tests_packers.py) + +""" +License +======= + +Copyright (c) 2013, Lev Givon. +All rights reserved. 
+ +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + +* Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. +* Redistributions in binary form must reproduce the above + copyright notice, this list of conditions and the following + disclaimer in the documentation and/or other materials provided + with the distribution. +* Neither the name of Lev Givon nor the names of any + contributors may be used to endorse or promote products derived + from this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+""" + +from datetime import datetime, date, timedelta +from dateutil.parser import parse + +import numpy as np +from pandas import compat +from pandas.compat import u +from pandas import ( + Timestamp, Period, Series, DataFrame, Panel, Panel4D, + Index, MultiIndex, Int64Index, PeriodIndex, DatetimeIndex, Float64Index, NaT +) +from pandas.sparse.api import SparseSeries, SparseDataFrame, SparsePanel +from pandas.sparse.array import BlockIndex, IntIndex +from pandas.core.generic import NDFrame +from pandas.core.common import needs_i8_conversion +from pandas.core.internals import BlockManager, make_block +import pandas.core.internals as internals + +from pandas.msgpack import Unpacker as _Unpacker, Packer as _Packer +import zlib + +try: + import blosc + _BLOSC = True +except: + _BLOSC = False + +# until we can pass this into our conversion functions, +# this is pretty hacky +compressor = None + + +def to_msgpack(path, *args, **kwargs): + """ + msgpack (serialize) object to input file path + + THIS IS AN EXPERIMENTAL LIBRARY and the storage format + may not be stable until a future release. + + Parameters + ---------- + path : string File path + args : an object or objects to serialize + append : boolean whether to append to an existing msgpack + (default is False) + compress : type of compressor (zlib or blosc), default to None (no compression) + """ + global compressor + compressor = kwargs.pop('compress', None) + append = kwargs.pop('append', None) + if append: + f = open(path, 'a+b') + else: + f = open(path, 'wb') + try: + for a in args: + f.write(pack(a, **kwargs)) + finally: + f.close() + + +def read_msgpack(path, iterator=False, **kwargs): + """ + Load msgpack pandas object from the specified + file path + + THIS IS AN EXPERIMENTAL LIBRARY and the storage format + may not be stable until a future release. 
+ + Parameters + ---------- + path : string + File path + iterator : boolean, if True, return an iterator to the unpacker + (default is False) + + Returns + ------- + obj : type of object stored in file + + """ + if iterator: + return Iterator(path) + + with open(path, 'rb') as fh: + l = list(unpack(fh)) + if len(l) == 1: + return l[0] + return l + +dtype_dict = {21: np.dtype('M8[ns]'), + u('datetime64[ns]'): np.dtype('M8[ns]'), + u('datetime64[us]'): np.dtype('M8[us]'), + 22: np.dtype('m8[ns]'), + u('timedelta64[ns]'): np.dtype('m8[ns]'), + u('timedelta64[us]'): np.dtype('m8[us]')} + + +def dtype_for(t): + if t in dtype_dict: + return dtype_dict[t] + return np.typeDict[t] + +c2f_dict = {'complex': np.float64, + 'complex128': np.float64, + 'complex64': np.float32} + +# numpy 1.6.1 compat +if hasattr(np, 'float128'): + c2f_dict['complex256'] = np.float128 + + +def c2f(r, i, ctype_name): + """ + Convert strings to complex number instance with specified numpy type. + """ + + ftype = c2f_dict[ctype_name] + return np.typeDict[ctype_name](ftype(r) + 1j * ftype(i)) + + +def convert(values): + """ convert the numpy values to a list """ + + dtype = values.dtype + if needs_i8_conversion(dtype): + values = values.view('i8') + v = values.ravel() + + if compressor == 'zlib': + + # return string arrays like they are + if dtype == np.object_: + return v.tolist() + + # convert to a bytes array + v = v.tostring() + return zlib.compress(v) + + elif compressor == 'blosc' and _BLOSC: + + # return string arrays like they are + if dtype == np.object_: + return v.tolist() + + # convert to a bytes array + v = v.tostring() + return blosc.compress(v, typesize=dtype.itemsize) + + # ndarray (on original dtype) + if dtype == 'float64' or dtype == 'int64': + return v + + # as a list + return v.tolist() + + +def unconvert(values, dtype, compress=None): + + if dtype == np.object_: + return np.array(values, dtype=object) + + if compress == 'zlib': + + values = zlib.decompress(values) + return 
np.frombuffer(values, dtype=dtype) + + elif compress == 'blosc': + + if not _BLOSC: + raise Exception("cannot uncompress w/o blosc") + + # decompress + values = blosc.decompress(values) + + return np.frombuffer(values, dtype=dtype) + + # as a list + return np.array(values, dtype=dtype) + + +def encode(obj): + """ + Data encoder + """ + + tobj = type(obj) + if isinstance(obj, Index): + if isinstance(obj, PeriodIndex): + return {'typ': 'period_index', + 'klass': obj.__class__.__name__, + 'name': getattr(obj, 'name', None), + 'freq': obj.freqstr, + 'dtype': obj.dtype.num, + 'data': convert(obj.asi8)} + elif isinstance(obj, DatetimeIndex): + return {'typ': 'datetime_index', + 'klass': obj.__class__.__name__, + 'name': getattr(obj, 'name', None), + 'dtype': obj.dtype.num, + 'data': convert(obj.asi8), + 'freq': obj.freqstr, + 'tz': obj.tz} + elif isinstance(obj, MultiIndex): + return {'typ': 'multi_index', + 'klass': obj.__class__.__name__, + 'names': getattr(obj, 'names', None), + 'dtype': obj.dtype.num, + 'data': convert(obj.values)} + else: + return {'typ': 'index', + 'klass': obj.__class__.__name__, + 'name': getattr(obj, 'name', None), + 'dtype': obj.dtype.num, + 'data': obj.tolist()} + elif isinstance(obj, Series): + if isinstance(obj, SparseSeries): + d = {'typ': 'sparse_series', + 'klass': obj.__class__.__name__, + 'dtype': obj.dtype.num, + 'index': obj.index, + 'sp_index': obj.sp_index, + 'sp_values': convert(obj.sp_values), + 'compress': compressor} + for f in ['name', 'fill_value', 'kind']: + d[f] = getattr(obj, f, None) + return d + else: + return {'typ': 'series', + 'klass': obj.__class__.__name__, + 'name': getattr(obj, 'name', None), + 'index': obj.index, + 'dtype': obj.dtype.num, + 'data': convert(obj.values), + 'compress': compressor} + elif issubclass(tobj, NDFrame): + if isinstance(obj, SparseDataFrame): + d = {'typ': 'sparse_dataframe', + 'klass': obj.__class__.__name__, + 'columns': obj.columns} + for f in ['default_fill_value', 'default_kind']: + 
d[f] = getattr(obj, f, None) + d['data'] = dict([(name, ss) + for name, ss in compat.iteritems(obj)]) + return d + elif isinstance(obj, SparsePanel): + d = {'typ': 'sparse_panel', + 'klass': obj.__class__.__name__, + 'items': obj.items} + for f in ['default_fill_value', 'default_kind']: + d[f] = getattr(obj, f, None) + d['data'] = dict([(name, df) + for name, df in compat.iteritems(obj)]) + return d + else: + + data = obj._data + if not data.is_consolidated(): + data = data.consolidate() + + # the block manager + return {'typ': 'block_manager', + 'klass': obj.__class__.__name__, + 'axes': data.axes, + 'blocks': [{'items': b.items, + 'values': convert(b.values), + 'shape': b.values.shape, + 'dtype': b.dtype.num, + 'klass': b.__class__.__name__, + 'compress': compressor + } for b in data.blocks]} + + elif isinstance(obj, (datetime, date, np.datetime64, timedelta, np.timedelta64)): + if isinstance(obj, Timestamp): + tz = obj.tzinfo + if tz is not None: + tz = tz.zone + offset = obj.offset + if offset is not None: + offset = offset.freqstr + return {'typ': 'timestamp', + 'value': obj.value, + 'offset': offset, + 'tz': tz} + elif isinstance(obj, np.timedelta64): + return {'typ': 'timedelta64', + 'data': obj.view('i8')} + elif isinstance(obj, timedelta): + return {'typ': 'timedelta', + 'data': (obj.days, obj.seconds, obj.microseconds)} + elif isinstance(obj, np.datetime64): + return {'typ': 'datetime64', + 'data': str(obj)} + elif isinstance(obj, datetime): + return {'typ': 'datetime', + 'data': obj.isoformat()} + elif isinstance(obj, date): + return {'typ': 'date', + 'data': obj.isoformat()} + raise Exception("cannot encode this datetimelike object: %s" % obj) + elif isinstance(obj, Period): + return {'typ': 'period', + 'ordinal': obj.ordinal, + 'freq': obj.freq} + elif isinstance(obj, BlockIndex): + return {'typ': 'block_index', + 'klass': obj.__class__.__name__, + 'blocs': obj.blocs, + 'blengths': obj.blengths, + 'length': obj.length} + elif isinstance(obj, IntIndex): 
+ return {'typ': 'int_index', + 'klass': obj.__class__.__name__, + 'indices': obj.indices, + 'length': obj.length} + elif isinstance(obj, np.ndarray) and obj.dtype not in ['float64', 'int64']: + return {'typ': 'ndarray', + 'shape': obj.shape, + 'ndim': obj.ndim, + 'dtype': obj.dtype.num, + 'data': convert(obj), + 'compress': compressor} + elif isinstance(obj, np.number): + if np.iscomplexobj(obj): + return {'typ': 'np_scalar', + 'sub_typ': 'np_complex', + 'dtype': obj.dtype.name, + 'real': obj.real.__repr__(), + 'imag': obj.imag.__repr__()} + else: + return {'typ': 'np_scalar', + 'dtype': obj.dtype.name, + 'data': obj.__repr__()} + elif isinstance(obj, complex): + return {'typ': 'np_complex', + 'real': obj.real.__repr__(), + 'imag': obj.imag.__repr__()} + + return obj + + +def decode(obj): + """ + Decoder for deserializing numpy data types. + """ + + typ = obj.get('typ') + if typ is None: + return obj + elif typ == 'timestamp': + return Timestamp(obj['value'], tz=obj['tz'], offset=obj['offset']) + elif typ == 'period': + return Period(ordinal=obj['ordinal'], freq=obj['freq']) + elif typ == 'index': + dtype = dtype_for(obj['dtype']) + data = obj['data'] + return globals()[obj['klass']](data, dtype=dtype, name=obj['name']) + elif typ == 'multi_index': + return globals()[obj['klass']].from_tuples(obj['data'], names=obj['names']) + elif typ == 'period_index': + return globals()[obj['klass']](obj['data'], name=obj['name'], freq=obj['freq']) + elif typ == 'datetime_index': + return globals()[obj['klass']](obj['data'], freq=obj['freq'], tz=obj['tz'], name=obj['name']) + elif typ == 'series': + dtype = dtype_for(obj['dtype']) + index = obj['index'] + return globals()[obj['klass']](unconvert(obj['data'], dtype, obj['compress']), index=index, name=obj['name']) + elif typ == 'block_manager': + axes = obj['axes'] + + def create_block(b): + dtype = dtype_for(b['dtype']) + return make_block(unconvert(b['values'], dtype, b['compress']).reshape(b['shape']), b['items'], axes[0], 
klass=getattr(internals, b['klass'])) + + blocks = [create_block(b) for b in obj['blocks']] + return globals()[obj['klass']](BlockManager(blocks, axes)) + elif typ == 'datetime': + return parse(obj['data']) + elif typ == 'datetime64': + return np.datetime64(parse(obj['data'])) + elif typ == 'date': + return parse(obj['data']).date() + elif typ == 'timedelta': + return timedelta(*obj['data']) + elif typ == 'timedelta64': + return np.timedelta64(int(obj['data'])) + elif typ == 'sparse_series': + dtype = dtype_for(obj['dtype']) + return globals( + )[obj['klass']](unconvert(obj['sp_values'], dtype, obj['compress']), sparse_index=obj['sp_index'], + index=obj['index'], fill_value=obj['fill_value'], kind=obj['kind'], name=obj['name']) + elif typ == 'sparse_dataframe': + return globals()[obj['klass']](obj['data'], + columns=obj['columns'], default_fill_value=obj['default_fill_value'], default_kind=obj['default_kind']) + elif typ == 'sparse_panel': + return globals()[obj['klass']](obj['data'], + items=obj['items'], default_fill_value=obj['default_fill_value'], default_kind=obj['default_kind']) + elif typ == 'block_index': + return globals()[obj['klass']](obj['length'], obj['blocs'], obj['blengths']) + elif typ == 'int_index': + return globals()[obj['klass']](obj['length'], obj['indices']) + elif typ == 'ndarray': + return unconvert(obj['data'], np.typeDict[obj['dtype']], obj.get('compress')).reshape(obj['shape']) + elif typ == 'np_scalar': + if obj.get('sub_typ') == 'np_complex': + return c2f(obj['real'], obj['imag'], obj['dtype']) + else: + dtype = dtype_for(obj['dtype']) + try: + return dtype(obj['data']) + except: + return dtype.type(obj['data']) + elif typ == 'np_complex': + return complex(obj['real'] + '+' + obj['imag'] + 'j') + elif isinstance(obj, (dict, list, set)): + return obj + else: + return obj + + +def pack(o, default=encode, + encoding='utf-8', unicode_errors='strict', use_single_float=False): + """ + Pack an object and return the packed bytes. 
+ """ + + return Packer(default=default, encoding=encoding, + unicode_errors=unicode_errors, + use_single_float=use_single_float).pack(o) + + +def unpack(packed, object_hook=decode, + list_hook=None, use_list=False, encoding='utf-8', + unicode_errors='strict', object_pairs_hook=None): + """ + Unpack a packed object, return an iterator + Note: packed lists will be returned as tuples + """ + + return Unpacker(packed, object_hook=object_hook, + list_hook=list_hook, + use_list=use_list, encoding=encoding, + unicode_errors=unicode_errors, + object_pairs_hook=object_pairs_hook) + + +class Packer(_Packer): + + def __init__(self, default=encode, + encoding='utf-8', + unicode_errors='strict', + use_single_float=False): + super(Packer, self).__init__(default=default, + encoding=encoding, + unicode_errors=unicode_errors, + use_single_float=use_single_float) + + +class Unpacker(_Unpacker): + + def __init__(self, file_like=None, read_size=0, use_list=False, + object_hook=decode, + object_pairs_hook=None, list_hook=None, encoding='utf-8', + unicode_errors='strict', max_buffer_size=0): + super(Unpacker, self).__init__(file_like=file_like, + read_size=read_size, + use_list=use_list, + object_hook=object_hook, + object_pairs_hook=object_pairs_hook, + list_hook=list_hook, + encoding=encoding, + unicode_errors=unicode_errors, + max_buffer_size=max_buffer_size) + + +class Iterator(object): + + """ manage the unpacking iteration, + close the file on completion """ + + def __init__(self, path, **kwargs): + self.path = path + self.kwargs = kwargs + + def __iter__(self): + + try: + fh = open(self.path, 'rb') + unpacker = unpack(fh) + for o in unpacker: + yield o + finally: + fh.close() diff --git a/pandas/io/tests/test_packers.py b/pandas/io/tests/test_packers.py new file mode 100644 index 0000000000000..79b421ff7b047 --- /dev/null +++ b/pandas/io/tests/test_packers.py @@ -0,0 +1,387 @@ +import nose +import unittest + +import datetime +import numpy as np + +from pandas import compat +from 
pandas.compat import u +from pandas import (Series, DataFrame, Panel, MultiIndex, bdate_range, + date_range, period_range, Index, SparseSeries, SparseDataFrame, + SparsePanel) +import pandas.util.testing as tm +from pandas.util.testing import ensure_clean +from pandas.tests.test_series import assert_series_equal +from pandas.tests.test_frame import assert_frame_equal +from pandas.tests.test_panel import assert_panel_equal + +import pandas +from pandas.sparse.tests.test_sparse import assert_sp_series_equal, assert_sp_frame_equal +from pandas import Timestamp, tslib + +nan = np.nan + +from pandas.io.packers import to_msgpack, read_msgpack + +_multiprocess_can_split_ = False + + +def check_arbitrary(a, b): + + if isinstance(a, (list, tuple)) and isinstance(b, (list, tuple)): + assert(len(a) == len(b)) + for a_, b_ in zip(a, b): + check_arbitrary(a_, b_) + elif isinstance(a, Panel): + assert_panel_equal(a, b) + elif isinstance(a, DataFrame): + assert_frame_equal(a, b) + elif isinstance(a, Series): + assert_series_equal(a, b) + else: + assert(a == b) + + +class Test(unittest.TestCase): + + def setUp(self): + self.path = '__%s__.msg' % tm.rands(10) + + def tearDown(self): + pass + + def encode_decode(self, x, **kwargs): + with ensure_clean(self.path) as p: + to_msgpack(p, x, **kwargs) + return read_msgpack(p, **kwargs) + + +class TestNumpy(Test): + + def test_numpy_scalar_float(self): + x = np.float32(np.random.rand()) + x_rec = self.encode_decode(x) + self.assert_(np.allclose(x, x_rec) and type(x) == type(x_rec)) + + def test_numpy_scalar_complex(self): + x = np.complex64(np.random.rand() + 1j * np.random.rand()) + x_rec = self.encode_decode(x) + self.assert_(np.allclose(x, x_rec) and type(x) == type(x_rec)) + + def test_scalar_float(self): + x = np.random.rand() + x_rec = self.encode_decode(x) + self.assert_(np.allclose(x, x_rec) and type(x) == type(x_rec)) + + def test_scalar_complex(self): + x = np.random.rand() + 1j * np.random.rand() + x_rec = self.encode_decode(x) 
+        self.assert_(np.allclose(x, x_rec) and type(x) == type(x_rec))
+
+    def test_list_numpy_float(self):
+        raise nose.SkipTest('buggy test')
+        x = [np.float32(np.random.rand()) for i in range(5)]
+        x_rec = self.encode_decode(x)
+        self.assert_(all(map(lambda x, y:
+                             x == y, x, x_rec)) and
+                     all(map(lambda x, y: type(x) == type(y), x, x_rec)))
+
+    def test_list_numpy_float_complex(self):
+        if not hasattr(np, 'complex128'):
+            raise nose.SkipTest('numpy cant handle complex128')
+
+        # buggy test
+        raise nose.SkipTest('buggy test')
+        x = [np.float32(np.random.rand()) for i in range(5)] + \
+            [np.complex128(np.random.rand() + 1j * np.random.rand())
+             for i in range(5)]
+        x_rec = self.encode_decode(x)
+        self.assert_(all(map(lambda x, y: x == y, x, x_rec)) and
+                     all(map(lambda x, y: type(x) == type(y), x, x_rec)))
+
+    def test_list_float(self):
+        x = [np.random.rand() for i in range(5)]
+        x_rec = self.encode_decode(x)
+        self.assert_(all(map(lambda x, y: x == y, x, x_rec)) and
+                     all(map(lambda x, y: type(x) == type(y), x, x_rec)))
+
+    def test_list_float_complex(self):
+        x = [np.random.rand() for i in range(5)] + \
+            [(np.random.rand() + 1j * np.random.rand()) for i in range(5)]
+        x_rec = self.encode_decode(x)
+        self.assert_(all(map(lambda x, y: x == y, x, x_rec)) and
+                     all(map(lambda x, y: type(x) == type(y), x, x_rec)))
+
+    def test_dict_float(self):
+        x = {'foo': 1.0, 'bar': 2.0}
+        x_rec = self.encode_decode(x)
+        self.assert_(all(map(lambda x, y: x == y, x.values(), x_rec.values())) and
+                     all(map(lambda x, y: type(x) == type(y), x.values(), x_rec.values())))
+
+    def test_dict_complex(self):
+        x = {'foo': 1.0 + 1.0j, 'bar': 2.0 + 2.0j}
+        x_rec = self.encode_decode(x)
+        self.assert_(all(map(lambda x, y: x == y, x.values(), x_rec.values())) and
+                     all(map(lambda x, y: type(x) == type(y), x.values(), x_rec.values())))
+
+    def test_dict_numpy_float(self):
+        x = {'foo': np.float32(1.0), 'bar': np.float32(2.0)}
+        x_rec = self.encode_decode(x)
+        self.assert_(all(map(lambda x, y: x == y, x.values(), x_rec.values())) and
+                     all(map(lambda x, y: type(x) == type(y), x.values(), x_rec.values())))
+
+    def test_dict_numpy_complex(self):
+        x = {'foo': np.complex128(1.0 + 1.0j),
+             'bar': np.complex128(2.0 + 2.0j)}
+        x_rec = self.encode_decode(x)
+        self.assert_(all(map(lambda x, y: x == y, x.values(), x_rec.values())) and
+                     all(map(lambda x, y: type(x) == type(y), x.values(), x_rec.values())))
+
+    def test_numpy_array_float(self):
+        x = np.random.rand(5).astype(np.float32)
+        x_rec = self.encode_decode(x)
+        self.assert_(all(map(lambda x, y: x == y, x, x_rec)) and
+                     x.dtype == x_rec.dtype)
+
+    def test_numpy_array_complex(self):
+        x = (np.random.rand(5) + 1j * np.random.rand(5)).astype(np.complex128)
+        x_rec = self.encode_decode(x)
+        self.assert_(all(map(lambda x, y: x == y, x, x_rec)) and
+                     x.dtype == x_rec.dtype)
+
+    def test_list_mixed(self):
+        x = [1.0, np.float32(3.5), np.complex128(4.25), u('foo')]
+        x_rec = self.encode_decode(x)
+        self.assert_(all(map(lambda x, y: x == y, x, x_rec)) and
+                     all(map(lambda x, y: type(x) == type(y), x, x_rec)))
+
+
+class TestBasic(Test):
+
+    def test_timestamp(self):
+
+        for i in [Timestamp('20130101'),
+                  Timestamp('20130101', tz='US/Eastern'),
+                  Timestamp('201301010501')]:
+            i_rec = self.encode_decode(i)
+            self.assert_(i == i_rec)
+
+    def test_datetimes(self):
+
+        for i in [datetime.datetime(2013, 1, 1),
+                  datetime.datetime(2013, 1, 1, 5, 1),
+                  datetime.date(2013, 1, 1),
+                  np.datetime64(datetime.datetime(2013, 1, 5, 2, 15))]:
+            i_rec = self.encode_decode(i)
+            self.assert_(i == i_rec)
+
+    def test_timedeltas(self):
+
+        for i in [datetime.timedelta(days=1),
+                  datetime.timedelta(days=1, seconds=10),
+                  np.timedelta64(1000000)]:
+            i_rec = self.encode_decode(i)
+            self.assert_(i == i_rec)
+
+
+class TestIndex(Test):
+
+    def setUp(self):
+        super(TestIndex, self).setUp()
+
+        self.d = {
+            'string': tm.makeStringIndex(100),
+            'date': tm.makeDateIndex(100),
+            'int': tm.makeIntIndex(100),
+            'float': tm.makeFloatIndex(100),
+            'empty': Index([]),
+            'tuple': Index(zip(['foo', 'bar', 'baz'], [1, 2, 3])),
+            'period': Index(period_range('2012-1-1', freq='M', periods=3)),
+            'date2': Index(date_range('2013-01-1', periods=10)),
+            'bdate': Index(bdate_range('2013-01-02', periods=10)),
+        }
+
+        self.mi = {
+            'reg': MultiIndex.from_tuples([('bar', 'one'), ('baz', 'two'), ('foo', 'two'),
+                                           ('qux', 'one'), ('qux', 'two')],
+                                          names=['first', 'second']),
+        }
+
+    def test_basic_index(self):
+
+        for s, i in self.d.items():
+            i_rec = self.encode_decode(i)
+            self.assert_(i.equals(i_rec))
+
+    def test_multi_index(self):
+
+        for s, i in self.mi.items():
+            i_rec = self.encode_decode(i)
+            self.assert_(i.equals(i_rec))
+
+    def test_unicode(self):
+        i = tm.makeUnicodeIndex(100)
+        i_rec = self.encode_decode(i)
+        self.assert_(i.equals(i_rec))
+
+
+class TestSeries(Test):
+
+    def setUp(self):
+        super(TestSeries, self).setUp()
+
+        self.d = {}
+
+        s = tm.makeStringSeries()
+        s.name = 'string'
+        self.d['string'] = s
+
+        s = tm.makeObjectSeries()
+        s.name = 'object'
+        self.d['object'] = s
+
+        s = Series(tslib.iNaT, dtype='M8[ns]', index=range(5))
+        self.d['date'] = s
+
+        data = {
+            'A': [0., 1., 2., 3., np.nan],
+            'B': [0, 1, 0, 1, 0],
+            'C': ['foo1', 'foo2', 'foo3', 'foo4', 'foo5'],
+            'D': date_range('1/1/2009', periods=5),
+            'E': [0., 1, Timestamp('20100101'), 'foo', 2.],
+        }
+
+        self.d['float'] = Series(data['A'])
+        self.d['int'] = Series(data['B'])
+        self.d['mixed'] = Series(data['E'])
+
+    def test_basic(self):
+
+        for s, i in self.d.items():
+            i_rec = self.encode_decode(i)
+            assert_series_equal(i, i_rec)
+
+
+class TestNDFrame(Test):
+
+    def setUp(self):
+        super(TestNDFrame, self).setUp()
+
+        data = {
+            'A': [0., 1., 2., 3., np.nan],
+            'B': [0, 1, 0, 1, 0],
+            'C': ['foo1', 'foo2', 'foo3', 'foo4', 'foo5'],
+            'D': date_range('1/1/2009', periods=5),
+            'E': [0., 1, Timestamp('20100101'), 'foo', 2.],
+        }
+
+        self.frame = {
+            'float': DataFrame(dict(A=data['A'], B=Series(data['A']) + 1)),
+            'int': DataFrame(dict(A=data['B'], B=Series(data['B']) + 1)),
+            'mixed': DataFrame(dict([(k, data[k]) for k in ['A', 'B', 'C', 'D']]))}
+
+        self.panel = {
+            'float': Panel(dict(ItemA=self.frame['float'],
+                                ItemB=self.frame['float'] + 1))}
+
+    def test_basic_frame(self):
+
+        for s, i in self.frame.items():
+            i_rec = self.encode_decode(i)
+            assert_frame_equal(i, i_rec)
+
+    def test_basic_panel(self):
+
+        for s, i in self.panel.items():
+            i_rec = self.encode_decode(i)
+            assert_panel_equal(i, i_rec)
+
+    def test_multi(self):
+
+        i_rec = self.encode_decode(self.frame)
+        for k in self.frame.keys():
+            assert_frame_equal(self.frame[k], i_rec[k])
+
+        l = tuple([self.frame['float'], self.frame['float'].A,
+                   self.frame['float'].B, None])
+        l_rec = self.encode_decode(l)
+        check_arbitrary(l, l_rec)
+
+        # this is an oddity in that packed lists will be returned as tuples
+        l = [self.frame['float'], self.frame['float'].A,
+             self.frame['float'].B, None]
+        l_rec = self.encode_decode(l)
+        self.assert_(isinstance(l_rec, tuple))
+        check_arbitrary(l, l_rec)
+
+    def test_iterator(self):
+
+        l = [self.frame['float'], self.frame['float'].A,
+             self.frame['float'].B, None]
+
+        with ensure_clean(self.path) as path:
+            to_msgpack(path, *l)
+            for i, packed in enumerate(read_msgpack(path, iterator=True)):
+                check_arbitrary(packed, l[i])
+
+
+class TestSparse(Test):
+
+    def _check_roundtrip(self, obj, comparator, **kwargs):
+
+        i_rec = self.encode_decode(obj)
+        comparator(obj, i_rec, **kwargs)
+
+    def test_sparse_series(self):
+
+        s = tm.makeStringSeries()
+        s[3:5] = np.nan
+        ss = s.to_sparse()
+        self._check_roundtrip(ss, tm.assert_series_equal,
+                              check_series_type=True)
+
+        ss2 = s.to_sparse(kind='integer')
+        self._check_roundtrip(ss2, tm.assert_series_equal,
+                              check_series_type=True)
+
+        ss3 = s.to_sparse(fill_value=0)
+        self._check_roundtrip(ss3, tm.assert_series_equal,
+                              check_series_type=True)
+
+    def test_sparse_frame(self):
+
+        s = tm.makeDataFrame()
+        s.ix[3:5, 1:3] = np.nan
+        s.ix[8:10, -2] = np.nan
+        ss = s.to_sparse()
+
+        self._check_roundtrip(ss, tm.assert_frame_equal,
+                              check_frame_type=True)
+
+        ss2 = s.to_sparse(kind='integer')
+        self._check_roundtrip(ss2, tm.assert_frame_equal,
+                              check_frame_type=True)
+
+        ss3 = s.to_sparse(fill_value=0)
+        self._check_roundtrip(ss3, tm.assert_frame_equal,
+                              check_frame_type=True)
+
+    def test_sparse_panel(self):
+
+        items = ['x', 'y', 'z']
+        p = Panel(dict((i, tm.makeDataFrame().ix[:2, :2]) for i in items))
+        sp = p.to_sparse()
+
+        self._check_roundtrip(sp, tm.assert_panel_equal,
+                              check_panel_type=True)
+
+        sp2 = p.to_sparse(kind='integer')
+        self._check_roundtrip(sp2, tm.assert_panel_equal,
+                              check_panel_type=True)
+
+        sp3 = p.to_sparse(fill_value=0)
+        self._check_roundtrip(sp3, tm.assert_panel_equal,
+                              check_panel_type=True)
+
+
+if __name__ == '__main__':
+    import nose
+    nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
+                   exit=False)
diff --git a/pandas/msgpack.pyx b/pandas/msgpack.pyx
new file mode 100644
index 0000000000000..2c8d7fd014b94
--- /dev/null
+++ b/pandas/msgpack.pyx
@@ -0,0 +1,711 @@
+# coding: utf-8
+#cython: embedsignature=True
+#cython: profile=False
+
+from cpython cimport *
+cdef extern from "Python.h":
+    ctypedef char* const_char_ptr "const char*"
+    ctypedef char* const_void_ptr "const void*"
+    ctypedef struct PyObject
+    cdef int PyObject_AsReadBuffer(object o, const_void_ptr* buff, Py_ssize_t* buf_len) except -1
+
+from libc.stdlib cimport *
+from libc.string cimport *
+from libc.limits cimport *
+
+import cython
+import numpy as np
+from numpy cimport *
+
+class UnpackException(IOError):
+    pass
+
+
+class BufferFull(UnpackException):
+    pass
+
+
+class OutOfData(UnpackException):
+    pass
+
+
+class UnpackValueError(UnpackException, ValueError):
+    pass
+
+
+class ExtraData(ValueError):
+    def __init__(self, unpacked, extra):
+        self.unpacked = unpacked
+        self.extra = extra
+
+    def __str__(self):
+        return "unpack(b) received extra data."
+
+class PackException(IOError):
+    pass
+
+class PackValueError(PackException, ValueError):
+    pass
+
+cdef extern from "msgpack/unpack.h":
+    ctypedef struct msgpack_user:
+        bint use_list
+        PyObject* object_hook
+        bint has_pairs_hook  # call object_hook with k-v pairs
+        PyObject* list_hook
+        char *encoding
+        char *unicode_errors
+
+    ctypedef struct template_context:
+        msgpack_user user
+        PyObject* obj
+        size_t count
+        unsigned int ct
+        PyObject* key
+
+    ctypedef int (*execute_fn)(template_context* ctx, const_char_ptr data,
+                               size_t len, size_t* off) except? -1
+    execute_fn template_construct
+    execute_fn template_skip
+    execute_fn read_array_header
+    execute_fn read_map_header
+    void template_init(template_context* ctx)
+    object template_data(template_context* ctx)
+
+cdef extern from "msgpack/pack.h":
+    struct msgpack_packer:
+        char* buf
+        size_t length
+        size_t buf_size
+
+    int msgpack_pack_int(msgpack_packer* pk, int d)
+    int msgpack_pack_nil(msgpack_packer* pk)
+    int msgpack_pack_true(msgpack_packer* pk)
+    int msgpack_pack_false(msgpack_packer* pk)
+    int msgpack_pack_long(msgpack_packer* pk, long d)
+    int msgpack_pack_long_long(msgpack_packer* pk, long long d)
+    int msgpack_pack_unsigned_long_long(msgpack_packer* pk, unsigned long long d)
+    int msgpack_pack_float(msgpack_packer* pk, float d)
+    int msgpack_pack_double(msgpack_packer* pk, double d)
+    int msgpack_pack_array(msgpack_packer* pk, size_t l)
+    int msgpack_pack_map(msgpack_packer* pk, size_t l)
+    int msgpack_pack_raw(msgpack_packer* pk, size_t l)
+    int msgpack_pack_raw_body(msgpack_packer* pk, char* body, size_t l)
+
+cdef int DEFAULT_RECURSE_LIMIT=511
+
+
+
+cdef class Packer(object):
+    """MessagePack Packer
+
+    usage:
+
+        packer = Packer()
+        astream.write(packer.pack(a))
+        astream.write(packer.pack(b))
+
+    Packer's constructor has some keyword arguments:
+
+    * *default* - Convert user type to builtin type that Packer supports.
+      See also simplejson's document.
+    * *encoding* - Convert unicode to bytes with this encoding. (default: 'utf-8')
+    * *unicode_errors* - Error handler for encoding unicode. (default: 'strict')
+    * *use_single_float* - Use single precision float type for float. (default: False)
+    * *autoreset* - Reset buffer after each pack and return its content as `bytes`. (default: True).
+      If set to False, use `bytes()` to get content and `.reset()` to clear the buffer.
+    """
+    cdef msgpack_packer pk
+    cdef object _default
+    cdef object _bencoding
+    cdef object _berrors
+    cdef char *encoding
+    cdef char *unicode_errors
+    cdef bool use_float
+    cdef bint autoreset
+
+    def __cinit__(self):
+        cdef int buf_size = 1024*1024
+        self.pk.buf = <char*> malloc(buf_size)
+        if self.pk.buf == NULL:
+            raise MemoryError("Unable to allocate internal buffer.")
+        self.pk.buf_size = buf_size
+        self.pk.length = 0
+
+    def __init__(self, default=None, encoding='utf-8', unicode_errors='strict',
+                 use_single_float=False, bint autoreset=1):
+        self.use_float = use_single_float
+        self.autoreset = autoreset
+        if default is not None:
+            if not PyCallable_Check(default):
+                raise TypeError("default must be a callable.")
+        self._default = default
+        if encoding is None:
+            self.encoding = NULL
+            self.unicode_errors = NULL
+        else:
+            if isinstance(encoding, unicode):
+                self._bencoding = encoding.encode('ascii')
+            else:
+                self._bencoding = encoding
+            self.encoding = PyBytes_AsString(self._bencoding)
+            if isinstance(unicode_errors, unicode):
+                self._berrors = unicode_errors.encode('ascii')
+            else:
+                self._berrors = unicode_errors
+            self.unicode_errors = PyBytes_AsString(self._berrors)
+
+    def __dealloc__(self):
+        free(self.pk.buf)
+
+    @cython.boundscheck(False)
+    @cython.wraparound(False)
+    cdef int _pack(self, object o, int nest_limit=DEFAULT_RECURSE_LIMIT) except -1:
+        cdef long long llval
+        cdef unsigned long long ullval
+        cdef long longval
+        cdef float fval
+        cdef double dval
+        cdef char* rawval
+        cdef int ret
+        cdef dict d
+        cdef object dtype
+
+        cdef int n, i
+        cdef double f8val
+        cdef int64_t i8val
+        cdef ndarray[float64_t, ndim=1] array_double
+        cdef ndarray[int64_t, ndim=1] array_int
+
+        if nest_limit < 0:
+            raise PackValueError("recursion limit exceeded.")
+
+        if o is None:
+            ret = msgpack_pack_nil(&self.pk)
+        elif isinstance(o, bool):
+            if o:
+                ret = msgpack_pack_true(&self.pk)
+            else:
+                ret = msgpack_pack_false(&self.pk)
+        elif PyLong_Check(o):
+            if o > 0:
+                ullval = o
+                ret = msgpack_pack_unsigned_long_long(&self.pk, ullval)
+            else:
+                llval = o
+                ret = msgpack_pack_long_long(&self.pk, llval)
+        elif PyInt_Check(o):
+            longval = o
+            ret = msgpack_pack_long(&self.pk, longval)
+        elif PyFloat_Check(o):
+            if self.use_float:
+                fval = o
+                ret = msgpack_pack_float(&self.pk, fval)
+            else:
+                dval = o
+                ret = msgpack_pack_double(&self.pk, dval)
+        elif PyBytes_Check(o):
+            rawval = o
+            ret = msgpack_pack_raw(&self.pk, len(o))
+            if ret == 0:
+                ret = msgpack_pack_raw_body(&self.pk, rawval, len(o))
+        elif PyUnicode_Check(o):
+            if not self.encoding:
+                raise TypeError("Can't encode unicode string: no encoding is specified")
+            o = PyUnicode_AsEncodedString(o, self.encoding, self.unicode_errors)
+            rawval = o
+            ret = msgpack_pack_raw(&self.pk, len(o))
+            if ret == 0:
+                ret = msgpack_pack_raw_body(&self.pk, rawval, len(o))
+        elif PyDict_CheckExact(o):
+            d = <dict>o
+            ret = msgpack_pack_map(&self.pk, len(d))
+            if ret == 0:
+                for k, v in d.iteritems():
+                    ret = self._pack(k, nest_limit-1)
+                    if ret != 0: break
+                    ret = self._pack(v, nest_limit-1)
+                    if ret != 0: break
+        elif PyDict_Check(o):
+            ret = msgpack_pack_map(&self.pk, len(o))
+            if ret == 0:
+                for k, v in o.items():
+                    ret = self._pack(k, nest_limit-1)
+                    if ret != 0: break
+                    ret = self._pack(v, nest_limit-1)
+                    if ret != 0: break
+        elif PyTuple_Check(o) or PyList_Check(o):
+            ret = msgpack_pack_array(&self.pk, len(o))
+            if ret == 0:
+                for v in o:
+                    ret = self._pack(v, nest_limit-1)
+                    if ret != 0: break
+
+        # ndarray support ONLY (and float64/int64) for now
+        elif isinstance(o, np.ndarray) and not hasattr(o, 'values') and (o.dtype == 'float64' or o.dtype == 'int64'):
+
+            ret = msgpack_pack_map(&self.pk, 5)
+            if ret != 0: return -1
+
+            dtype = o.dtype
+            self.pack_pair('typ', 'ndarray', nest_limit)
+            self.pack_pair('shape', o.shape, nest_limit)
+            self.pack_pair('ndim', o.ndim, nest_limit)
+            self.pack_pair('dtype', dtype.num, nest_limit)
+
+            ret = self._pack('data', nest_limit-1)
+            if ret != 0: return ret
+
+            if dtype == 'float64':
+                array_double = o.ravel()
+                n = len(array_double)
+                ret = msgpack_pack_array(&self.pk, n)
+                if ret != 0: return ret
+
+                for i in range(n):
+                    f8val = array_double[i]
+                    ret = msgpack_pack_double(&self.pk, f8val)
+                    if ret != 0: break
+            elif dtype == 'int64':
+                array_int = o.ravel()
+                n = len(array_int)
+                ret = msgpack_pack_array(&self.pk, n)
+                if ret != 0: return ret
+
+                for i in range(n):
+                    i8val = array_int[i]
+                    ret = msgpack_pack_long_long(&self.pk, i8val)
+                    if ret != 0: break
+
+        elif self._default:
+            o = self._default(o)
+            ret = self._pack(o, nest_limit-1)
+        else:
+            raise TypeError("can't serialize %r" % (o,))
+        return ret
+
+    cpdef pack(self, object obj):
+        cdef int ret
+        ret = self._pack(obj, DEFAULT_RECURSE_LIMIT)
+        if ret == -1:
+            raise MemoryError
+        elif ret:  # should not happen.
+            raise TypeError
+        if self.autoreset:
+            buf = PyBytes_FromStringAndSize(self.pk.buf, self.pk.length)
+            self.pk.length = 0
+            return buf
+
+    def pack_array_header(self, size_t size):
+        cdef int ret = msgpack_pack_array(&self.pk, size)
+        if ret == -1:
+            raise MemoryError
+        elif ret:  # should not happen
+            raise TypeError
+        if self.autoreset:
+            buf = PyBytes_FromStringAndSize(self.pk.buf, self.pk.length)
+            self.pk.length = 0
+            return buf
+
+    def pack_map_header(self, size_t size):
+        cdef int ret = msgpack_pack_map(&self.pk, size)
+        if ret == -1:
+            raise MemoryError
+        elif ret:  # should not happen
+            raise TypeError
+        if self.autoreset:
+            buf = PyBytes_FromStringAndSize(self.pk.buf, self.pk.length)
+            self.pk.length = 0
+            return buf
+
+    def pack_map_pairs(self, object pairs):
+        """
+        Pack *pairs* as msgpack map type.
+
+        *pairs* should be a sequence of pairs.
+        (`len(pairs)` and `for k, v in pairs:` should be supported.)
+        """
+        cdef int ret = msgpack_pack_map(&self.pk, len(pairs))
+        if ret == 0:
+            for k, v in pairs:
+                ret = self._pack(k)
+                if ret != 0: break
+                ret = self._pack(v)
+                if ret != 0: break
+        if ret == -1:
+            raise MemoryError
+        elif ret:  # should not happen
+            raise TypeError
+        if self.autoreset:
+            buf = PyBytes_FromStringAndSize(self.pk.buf, self.pk.length)
+            self.pk.length = 0
+            return buf
+
+    def reset(self):
+        """Clear internal buffer."""
+        self.pk.length = 0
+
+    def bytes(self):
+        """Return buffer content."""
+        return PyBytes_FromStringAndSize(self.pk.buf, self.pk.length)
+
+
+    cdef inline pack_pair(self, object k, object v, int nest_limit):
+        ret = self._pack(k, nest_limit-1)
+        if ret != 0: raise PackException("cannot pack : %s" % k)
+        ret = self._pack(v, nest_limit-1)
+        if ret != 0: raise PackException("cannot pack : %s" % v)
+        return ret
+
+def pack(object o, object stream, default=None, encoding='utf-8', unicode_errors='strict'):
+    """
+    Pack an object `o` and write it to `stream`."""
+    packer = Packer(default=default, encoding=encoding, unicode_errors=unicode_errors)
+    stream.write(packer.pack(o))
+
+def packb(object o, default=None, encoding='utf-8', unicode_errors='strict', use_single_float=False):
+    """
+    Pack `o` and return the packed bytes."""
+    packer = Packer(default=default, encoding=encoding, unicode_errors=unicode_errors,
+                    use_single_float=use_single_float)
+    return packer.pack(o)
+
+
+cdef inline init_ctx(template_context *ctx,
+                     object object_hook, object object_pairs_hook, object list_hook,
+                     bint use_list, char* encoding, char* unicode_errors):
+    template_init(ctx)
+    ctx.user.use_list = use_list
+    ctx.user.object_hook = ctx.user.list_hook = <PyObject*>NULL
+
+    if object_hook is not None and object_pairs_hook is not None:
+        raise ValueError("object_pairs_hook and object_hook are mutually exclusive.")
+
+    if object_hook is not None:
+        if not PyCallable_Check(object_hook):
+            raise TypeError("object_hook must be a callable.")
+        ctx.user.object_hook = <PyObject*>object_hook
+
+    if object_pairs_hook is None:
+        ctx.user.has_pairs_hook = False
+    else:
+        if not PyCallable_Check(object_pairs_hook):
+            raise TypeError("object_pairs_hook must be a callable.")
+        ctx.user.object_hook = <PyObject*>object_pairs_hook
+        ctx.user.has_pairs_hook = True
+
+    if list_hook is not None:
+        if not PyCallable_Check(list_hook):
+            raise TypeError("list_hook must be a callable.")
+        ctx.user.list_hook = <PyObject*>list_hook
+
+    ctx.user.encoding = encoding
+    ctx.user.unicode_errors = unicode_errors
+
+def unpackb(object packed, object object_hook=None, object list_hook=None,
+            bint use_list=1, encoding=None, unicode_errors="strict",
+            object_pairs_hook=None,
+            ):
+    """Unpack `packed` to an object. Returns the unpacked object.
+
+    Raises `ValueError` when `packed` contains extra bytes.
+ """ + cdef template_context ctx + cdef size_t off = 0 + cdef int ret + + cdef char* buf + cdef Py_ssize_t buf_len + cdef char* cenc = NULL + cdef char* cerr = NULL + + PyObject_AsReadBuffer(packed, <const_void_ptr*>&buf, &buf_len) + + if encoding is not None: + if isinstance(encoding, unicode): + encoding = encoding.encode('ascii') + cenc = PyBytes_AsString(encoding) + + if unicode_errors is not None: + if isinstance(unicode_errors, unicode): + unicode_errors = unicode_errors.encode('ascii') + cerr = PyBytes_AsString(unicode_errors) + + init_ctx(&ctx, object_hook, object_pairs_hook, list_hook, use_list, cenc, cerr) + ret = template_construct(&ctx, buf, buf_len, &off) + if ret == 1: + obj = template_data(&ctx) + if off < buf_len: + raise ExtraData(obj, PyBytes_FromStringAndSize(buf+off, buf_len-off)) + return obj + elif ret < 0: + raise ValueError("Unpack failed: error = %d" % (ret,)) + else: + raise UnpackValueError + + +def unpack(object stream, object object_hook=None, object list_hook=None, + bint use_list=1, encoding=None, unicode_errors="strict", + object_pairs_hook=None, + ): + """Unpack an object from `stream`. + + Raises `ValueError` when `stream` has extra bytes. + """ + return unpackb(stream.read(), use_list=use_list, + object_hook=object_hook, object_pairs_hook=object_pairs_hook, list_hook=list_hook, + encoding=encoding, unicode_errors=unicode_errors, + ) + + +cdef class Unpacker(object): + """ + Streaming unpacker. + + `file_like` is a file-like object having `.read(n)` method. + When `Unpacker` initialized with `file_like`, unpacker reads serialized data + from it and `.feed()` method is not usable. + + `read_size` is used as `file_like.read(read_size)`. + (default: min(1024**2, max_buffer_size)) + + If `use_list` is true (default), msgpack list is deserialized to Python list. + Otherwise, it is deserialized to Python tuple. + + `object_hook` is same to simplejson. 
If it is not None, it should be callable + and Unpacker calls it with a dict argument after deserializing a map. + + `object_pairs_hook` is same to simplejson. If it is not None, it should be callable + and Unpacker calls it with a list of key-value pairs after deserializing a map. + + `encoding` is encoding used for decoding msgpack bytes. If it is None (default), + msgpack bytes is deserialized to Python bytes. + + `unicode_errors` is used for decoding bytes. + + `max_buffer_size` limits size of data waiting unpacked. + 0 means system's INT_MAX (default). + Raises `BufferFull` exception when it is insufficient. + You shoud set this parameter when unpacking data from untrasted source. + + example of streaming deserialize from file-like object:: + + unpacker = Unpacker(file_like) + for o in unpacker: + do_something(o) + + example of streaming deserialize from socket:: + + unpacker = Unpacker() + while 1: + buf = sock.recv(1024**2) + if not buf: + break + unpacker.feed(buf) + for o in unpacker: + do_something(o) + """ + cdef template_context ctx + cdef char* buf + cdef size_t buf_size, buf_head, buf_tail + cdef object file_like + cdef object file_like_read + cdef Py_ssize_t read_size + cdef object object_hook + cdef object encoding, unicode_errors + cdef size_t max_buffer_size + + def __cinit__(self): + self.buf = NULL + + def __dealloc__(self): + free(self.buf) + self.buf = NULL + + def __init__(self, file_like=None, Py_ssize_t read_size=0, bint use_list=1, + object object_hook=None, object object_pairs_hook=None, object list_hook=None, + encoding=None, unicode_errors='strict', int max_buffer_size=0, + ): + cdef char *cenc=NULL, *cerr=NULL + + self.file_like = file_like + if file_like: + self.file_like_read = file_like.read + if not PyCallable_Check(self.file_like_read): + raise ValueError("`file_like.read` must be a callable.") + if not max_buffer_size: + max_buffer_size = INT_MAX + if read_size > max_buffer_size: + raise ValueError("read_size should be less or 
equal to max_buffer_size") + if not read_size: + read_size = min(max_buffer_size, 1024**2) + self.max_buffer_size = max_buffer_size + self.read_size = read_size + self.buf = <char*>malloc(read_size) + if self.buf == NULL: + raise MemoryError("Unable to allocate internal buffer.") + self.buf_size = read_size + self.buf_head = 0 + self.buf_tail = 0 + + if encoding is not None: + if isinstance(encoding, unicode): + encoding = encoding.encode('ascii') + self.encoding = encoding + cenc = PyBytes_AsString(encoding) + + if unicode_errors is not None: + if isinstance(unicode_errors, unicode): + unicode_errors = unicode_errors.encode('ascii') + self.unicode_errors = unicode_errors + cerr = PyBytes_AsString(unicode_errors) + + init_ctx(&self.ctx, object_hook, object_pairs_hook, list_hook, use_list, cenc, cerr) + + def feed(self, object next_bytes): + """Append `next_bytes` to internal buffer.""" + cdef char* buf + cdef Py_ssize_t buf_len + if self.file_like is not None: + raise TypeError( + "unpacker.feed() is not be able to use with `file_like`.") + PyObject_AsReadBuffer(next_bytes, <const_void_ptr*>&buf, &buf_len) + self.append_buffer(buf, buf_len) + + cdef append_buffer(self, void* _buf, Py_ssize_t _buf_len): + cdef: + char* buf = self.buf + char* new_buf + size_t head = self.buf_head + size_t tail = self.buf_tail + size_t buf_size = self.buf_size + size_t new_size + + if tail + _buf_len > buf_size: + if ((tail - head) + _buf_len) <= buf_size: + # move to front. + memmove(buf, buf + head, tail - head) + tail -= head + head = 0 + else: + # expand buffer. 
+                new_size = (tail-head) + _buf_len
+                if new_size > self.max_buffer_size:
+                    raise BufferFull
+                new_size = min(new_size*2, self.max_buffer_size)
+                new_buf = <char*>malloc(new_size)
+                if new_buf == NULL:
+                    # self.buf still holds old buffer and will be freed during
+                    # obj destruction
+                    raise MemoryError("Unable to enlarge internal buffer.")
+                memcpy(new_buf, buf + head, tail - head)
+                free(buf)
+
+                buf = new_buf
+                buf_size = new_size
+                tail -= head
+                head = 0
+
+        memcpy(buf + tail, <char*>(_buf), _buf_len)
+        self.buf = buf
+        self.buf_head = head
+        self.buf_size = buf_size
+        self.buf_tail = tail + _buf_len
+
+    cdef read_from_file(self):
+        next_bytes = self.file_like_read(
+            min(self.read_size,
+                self.max_buffer_size - (self.buf_tail - self.buf_head)
+                ))
+        if next_bytes:
+            self.append_buffer(PyBytes_AsString(next_bytes), PyBytes_Size(next_bytes))
+        else:
+            self.file_like = None
+
+    cdef object _unpack(self, execute_fn execute, object write_bytes, bint iter=0):
+        cdef int ret
+        cdef object obj
+        cdef size_t prev_head
+        while 1:
+            prev_head = self.buf_head
+            ret = execute(&self.ctx, self.buf, self.buf_tail, &self.buf_head)
+            if write_bytes is not None:
+                write_bytes(PyBytes_FromStringAndSize(self.buf + prev_head, self.buf_head - prev_head))
+
+            if ret == 1:
+                obj = template_data(&self.ctx)
+                template_init(&self.ctx)
+                return obj
+            elif ret == 0:
+                if self.file_like is not None:
+                    self.read_from_file()
+                    continue
+                if iter:
+                    raise StopIteration("No more data to unpack.")
+                else:
+                    raise OutOfData("No more data to unpack.")
+            else:
+                raise ValueError("Unpack failed: error = %d" % (ret,))
+
+    def read_bytes(self, Py_ssize_t nbytes):
+        """Read a specified number of raw bytes from the stream."""
+        cdef size_t nread
+        nread = min(self.buf_tail - self.buf_head, nbytes)
+        ret = PyBytes_FromStringAndSize(self.buf + self.buf_head, nread)
+        self.buf_head += nread
+        if len(ret) < nbytes and self.file_like is not None:
+            ret += self.file_like.read(nbytes - len(ret))
+        return ret
+
+    def unpack(self, object write_bytes=None):
+        """
+        Unpack one object.
+
+        If write_bytes is not None, it will be called with parts of the raw
+        message as it is unpacked.
+
+        Raises `OutOfData` when there are no more bytes to unpack.
+        """
+        return self._unpack(template_construct, write_bytes)
+
+    def skip(self, object write_bytes=None):
+        """
+        Read and ignore one object, returning None.
+
+        If write_bytes is not None, it will be called with parts of the raw
+        message as it is unpacked.
+
+        Raises `OutOfData` when there are no more bytes to unpack.
+        """
+        return self._unpack(template_skip, write_bytes)
+
+    def read_array_header(self, object write_bytes=None):
+        """Assuming the next object is an array, return its size n, such that
+        the next n unpack() calls will iterate over its contents.
+
+        Raises `OutOfData` when there are no more bytes to unpack.
+        """
+        return self._unpack(read_array_header, write_bytes)
+
+    def read_map_header(self, object write_bytes=None):
+        """Assuming the next object is a map, return its size n, such that the
+        next n * 2 unpack() calls will iterate over its key-value pairs.
+
+        Raises `OutOfData` when there are no more bytes to unpack.
+        """
+        return self._unpack(read_map_header, write_bytes)
+
+    def __iter__(self):
+        return self
+
+    def __next__(self):
+        return self._unpack(template_construct, None, 1)
+
+    # for debug.
+    #def _buf(self):
+    #    return PyString_FromStringAndSize(self.buf, self.buf_tail)
+
+    #def _off(self):
+    #    return self.buf_head
diff --git a/pandas/src/msgpack/pack.h b/pandas/src/msgpack/pack.h
new file mode 100644
index 0000000000000..bb939d93ebeca
--- /dev/null
+++ b/pandas/src/msgpack/pack.h
@@ -0,0 +1,108 @@
+/*
+ * MessagePack for Python packing routine
+ *
+ * Copyright (C) 2009 Naoki INADA
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include <stddef.h>
+#include <stdlib.h>
+#include "sysdep.h"
+#include <limits.h>
+#include <string.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#ifdef _MSC_VER
+#define inline __inline
+#endif
+
+typedef struct msgpack_packer {
+    char *buf;
+    size_t length;
+    size_t buf_size;
+} msgpack_packer;
+
+typedef struct Packer Packer;
+
+static inline int msgpack_pack_short(msgpack_packer* pk, short d);
+static inline int msgpack_pack_int(msgpack_packer* pk, int d);
+static inline int msgpack_pack_long(msgpack_packer* pk, long d);
+static inline int msgpack_pack_long_long(msgpack_packer* pk, long long d);
+static inline int msgpack_pack_unsigned_short(msgpack_packer* pk, unsigned short d);
+static inline int msgpack_pack_unsigned_int(msgpack_packer* pk, unsigned int d);
+static inline int msgpack_pack_unsigned_long(msgpack_packer* pk, unsigned long d);
+static inline int msgpack_pack_unsigned_long_long(msgpack_packer* pk, unsigned long long d);
+
+static inline int msgpack_pack_uint8(msgpack_packer* pk, uint8_t d);
+static inline int msgpack_pack_uint16(msgpack_packer* pk, uint16_t d);
+static inline int msgpack_pack_uint32(msgpack_packer* pk, uint32_t d);
+static inline int msgpack_pack_uint64(msgpack_packer* pk, uint64_t d);
+static inline int msgpack_pack_int8(msgpack_packer* pk, int8_t d);
+static inline int msgpack_pack_int16(msgpack_packer* pk, int16_t d);
+static inline int msgpack_pack_int32(msgpack_packer* pk, int32_t d);
+static inline int msgpack_pack_int64(msgpack_packer* pk, int64_t d);
+
+static inline int msgpack_pack_float(msgpack_packer* pk, float d);
+static inline int msgpack_pack_double(msgpack_packer* pk, double d);
+
+static inline int msgpack_pack_nil(msgpack_packer* pk);
+static inline int msgpack_pack_true(msgpack_packer* pk);
+static inline int msgpack_pack_false(msgpack_packer* pk);
+
+static inline int msgpack_pack_array(msgpack_packer* pk, unsigned int n);
+
+static inline int msgpack_pack_map(msgpack_packer* pk, unsigned int n);
+
+static inline int msgpack_pack_raw(msgpack_packer* pk, size_t l);
+static inline int msgpack_pack_raw_body(msgpack_packer* pk, const void* b, size_t l);
+
+static inline int msgpack_pack_write(msgpack_packer* pk, const char *data, size_t l)
+{
+    char* buf = pk->buf;
+    size_t bs = pk->buf_size;
+    size_t len = pk->length;
+
+    if (len + l > bs) {
+        bs = (len + l) * 2;
+        buf = (char*)realloc(buf, bs);
+        if (!buf) return -1;
+    }
+    memcpy(buf + len, data, l);
+    len += l;
+
+    pk->buf = buf;
+    pk->buf_size = bs;
+    pk->length = len;
+    return 0;
+}
+
+#define msgpack_pack_inline_func(name) \
+    static inline int msgpack_pack ## name
+
+#define msgpack_pack_inline_func_cint(name) \
+    static inline int msgpack_pack ## name
+
+#define msgpack_pack_user msgpack_packer*
+
+#define msgpack_pack_append_buffer(user, buf, len) \
+    return msgpack_pack_write(user, (const char*)buf, len)
+
+#include "pack_template.h"
+
+#ifdef __cplusplus
+}
+#endif
diff --git a/pandas/src/msgpack/pack_template.h b/pandas/src/msgpack/pack_template.h
new file mode 100644
index 0000000000000..65c959dd8ce63
--- /dev/null
+++ b/pandas/src/msgpack/pack_template.h
@@ -0,0 +1,771 @@
+/*
+ * MessagePack packing routine template
+ *
+ * Copyright (C) 2008-2010 FURUHASHI Sadayuki
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#if defined(__LITTLE_ENDIAN__)
+#define TAKE8_8(d) ((uint8_t*)&d)[0]
+#define TAKE8_16(d) ((uint8_t*)&d)[0]
+#define TAKE8_32(d) ((uint8_t*)&d)[0]
+#define TAKE8_64(d) ((uint8_t*)&d)[0]
+#elif defined(__BIG_ENDIAN__)
+#define TAKE8_8(d) ((uint8_t*)&d)[0]
+#define TAKE8_16(d) ((uint8_t*)&d)[1]
+#define TAKE8_32(d) ((uint8_t*)&d)[3]
+#define TAKE8_64(d) ((uint8_t*)&d)[7]
+#endif
+
+#ifndef msgpack_pack_inline_func
+#error msgpack_pack_inline_func template is not defined
+#endif
+
+#ifndef msgpack_pack_user
+#error msgpack_pack_user type is not defined
+#endif
+
+#ifndef msgpack_pack_append_buffer
+#error msgpack_pack_append_buffer callback is not defined
+#endif
+
+
+/*
+ * Integer
+ */
+
+#define msgpack_pack_real_uint8(x, d) \
+do { \
+    if(d < (1<<7)) { \
+        /* fixnum */ \
+        msgpack_pack_append_buffer(x, &TAKE8_8(d), 1); \
+    } else { \
+        /* unsigned 8 */ \
+        unsigned char buf[2] = {0xcc, TAKE8_8(d)}; \
+        msgpack_pack_append_buffer(x, buf, 2); \
+    } \
+} while(0)
+
+#define msgpack_pack_real_uint16(x, d) \
+do { \
+    if(d < (1<<7)) { \
+        /* fixnum */ \
+        msgpack_pack_append_buffer(x, &TAKE8_16(d), 1); \
+    } else if(d < (1<<8)) { \
+        /* unsigned 8 */ \
+        unsigned char buf[2] = {0xcc, TAKE8_16(d)}; \
+        msgpack_pack_append_buffer(x, buf, 2); \
+    } else { \
+        /* unsigned 16 */ \
+        unsigned char buf[3]; \
+        buf[0] = 0xcd; _msgpack_store16(&buf[1], (uint16_t)d); \
+        msgpack_pack_append_buffer(x, buf, 3); \
+    } \
+} while(0)
+
+#define msgpack_pack_real_uint32(x, d) \
+do { \
+    if(d < (1<<8)) { \
+        if(d < (1<<7)) { \
+            /* fixnum */ \
+            msgpack_pack_append_buffer(x, &TAKE8_32(d), 1); \
+        } else { \
+            /* unsigned 8 */ \
+            unsigned char buf[2] = {0xcc, TAKE8_32(d)}; \
+            msgpack_pack_append_buffer(x, buf, 2); \
+        } \
+    } else { \
+        if(d < (1<<16)) { \
+            /* unsigned 16 */ \
+            unsigned char buf[3]; \
+            buf[0] = 0xcd; _msgpack_store16(&buf[1], (uint16_t)d); \
+            msgpack_pack_append_buffer(x, buf, 3); \
+        } else { \
+            /* unsigned 32 */ \
+            unsigned char buf[5]; \
+            buf[0] = 0xce; _msgpack_store32(&buf[1], (uint32_t)d); \
+            msgpack_pack_append_buffer(x, buf, 5); \
+        } \
+    } \
+} while(0)
+
+#define msgpack_pack_real_uint64(x, d) \
+do { \
+    if(d < (1ULL<<8)) { \
+        if(d < (1ULL<<7)) { \
+            /* fixnum */ \
+            msgpack_pack_append_buffer(x, &TAKE8_64(d), 1); \
+        } else { \
+            /* unsigned 8 */ \
+            unsigned char buf[2] = {0xcc, TAKE8_64(d)}; \
+            msgpack_pack_append_buffer(x, buf, 2); \
+        } \
+    } else { \
+        if(d < (1ULL<<16)) { \
+            /* unsigned 16 */ \
+            unsigned char buf[3]; \
+            buf[0] = 0xcd; _msgpack_store16(&buf[1], (uint16_t)d); \
+            msgpack_pack_append_buffer(x, buf, 3); \
+        } else if(d < (1ULL<<32)) { \
+            /* unsigned 32 */ \
+            unsigned char buf[5]; \
+            buf[0] = 0xce; _msgpack_store32(&buf[1], (uint32_t)d); \
+            msgpack_pack_append_buffer(x, buf, 5); \
+        } else { \
+            /* unsigned 64 */ \
+            unsigned char buf[9]; \
+            buf[0] = 0xcf; _msgpack_store64(&buf[1], d); \
+            msgpack_pack_append_buffer(x, buf, 9); \
+        } \
+    } \
+} while(0)
+
+#define msgpack_pack_real_int8(x, d) \
+do { \
+    if(d < -(1<<5)) { \
+        /* signed 8 */ \
+        unsigned char buf[2] = {0xd0, TAKE8_8(d)}; \
+        msgpack_pack_append_buffer(x, buf, 2); \
+    } else { \
+        /* fixnum */ \
+        msgpack_pack_append_buffer(x, &TAKE8_8(d), 1); \
+    } \
+} while(0)
+
+#define msgpack_pack_real_int16(x, d) \
+do { \
+    if(d < -(1<<5)) { \
+        if(d < -(1<<7)) { \
+            /* signed 16 */ \
+            unsigned char buf[3]; \
+            buf[0] = 0xd1; _msgpack_store16(&buf[1], (int16_t)d); \
+            msgpack_pack_append_buffer(x, buf, 3); \
+        } else { \
+            /* signed 8 */ \
+            unsigned char buf[2] = {0xd0, TAKE8_16(d)}; \
+            msgpack_pack_append_buffer(x, buf, 2); \
+        } \
+    } else if(d < (1<<7)) { \
+        /* fixnum */ \
+        msgpack_pack_append_buffer(x, &TAKE8_16(d), 1); \
+    } else { \
+        if(d < (1<<8)) { \
+            /* unsigned 8 */ \
+            unsigned char buf[2] = {0xcc, TAKE8_16(d)}; \
+            msgpack_pack_append_buffer(x, buf, 2); \
+        } else { \
+            /* unsigned 16 */ \
+            unsigned char buf[3]; \
+            buf[0] = 0xcd; _msgpack_store16(&buf[1], (uint16_t)d); \
+            msgpack_pack_append_buffer(x, buf, 3); \
+        } \
+    } \
+} while(0)
+
+#define msgpack_pack_real_int32(x, d) \
+do { \
+    if(d < -(1<<5)) { \
+        if(d < -(1<<15)) { \
+            /* signed 32 */ \
+            unsigned char buf[5]; \
+            buf[0] = 0xd2; _msgpack_store32(&buf[1], (int32_t)d); \
+            msgpack_pack_append_buffer(x, buf, 5); \
+        } else if(d < -(1<<7)) { \
+            /* signed 16 */ \
+            unsigned char buf[3]; \
+            buf[0] = 0xd1; _msgpack_store16(&buf[1], (int16_t)d); \
+            msgpack_pack_append_buffer(x, buf, 3); \
+        } else { \
+            /* signed 8 */ \
+            unsigned char buf[2] = {0xd0, TAKE8_32(d)}; \
+            msgpack_pack_append_buffer(x, buf, 2); \
+        } \
+    } else if(d < (1<<7)) { \
+        /* fixnum */ \
+        msgpack_pack_append_buffer(x, &TAKE8_32(d), 1); \
+    } else { \
+        if(d < (1<<8)) { \
+            /* unsigned 8 */ \
+            unsigned char buf[2] = {0xcc, TAKE8_32(d)}; \
+            msgpack_pack_append_buffer(x, buf, 2); \
+        } else if(d < (1<<16)) { \
+            /* unsigned 16 */ \
+            unsigned char buf[3]; \
+            buf[0] = 0xcd; _msgpack_store16(&buf[1], (uint16_t)d); \
+            msgpack_pack_append_buffer(x, buf, 3); \
+        } else { \
+            /* unsigned 32 */ \
+            unsigned char buf[5]; \
+            buf[0] = 0xce; _msgpack_store32(&buf[1], (uint32_t)d); \
+            msgpack_pack_append_buffer(x, buf, 5); \
+        } \
+    } \
+} while(0)
+
+#define msgpack_pack_real_int64(x, d) \
+do { \
+    if(d < -(1LL<<5)) { \
+        if(d < -(1LL<<15)) { \
+            if(d < -(1LL<<31)) { \
+                /* signed 64 */ \
+                unsigned char buf[9]; \
+                buf[0] = 0xd3; _msgpack_store64(&buf[1], d); \
+                msgpack_pack_append_buffer(x, buf, 9); \
+            } else { \
+                /* signed 32 */ \
+                unsigned char buf[5]; \
+                buf[0] = 0xd2; _msgpack_store32(&buf[1], (int32_t)d); \
+                msgpack_pack_append_buffer(x, buf, 5); \
+            } \
+        } else { \
+            if(d < -(1<<7)) { \
+                /* signed 16 */ \
+                unsigned char buf[3]; \
+                buf[0] = 0xd1; _msgpack_store16(&buf[1], (int16_t)d); \
+                msgpack_pack_append_buffer(x, buf, 3); \
+            } else { \
+                /* signed 8 */ \
+                unsigned char buf[2] = {0xd0, TAKE8_64(d)}; \
+                msgpack_pack_append_buffer(x, buf, 2); \
+            } \
+        } \
+    } else if(d < (1<<7)) { \
+        /* fixnum */ \
+        msgpack_pack_append_buffer(x, &TAKE8_64(d), 1); \
+    } else { \
+        if(d < (1LL<<16)) { \
+            if(d < (1<<8)) { \
+                /* unsigned 8 */ \
+                unsigned char buf[2] = {0xcc, TAKE8_64(d)}; \
+                msgpack_pack_append_buffer(x, buf, 2); \
+            } else { \
+                /* unsigned 16 */ \
+                unsigned char buf[3]; \
+                buf[0] = 0xcd; _msgpack_store16(&buf[1], (uint16_t)d); \
+                msgpack_pack_append_buffer(x, buf, 3); \
+            } \
+        } else { \
+            if(d < (1LL<<32)) { \
+                /* unsigned 32 */ \
+                unsigned char buf[5]; \
+                buf[0] = 0xce; _msgpack_store32(&buf[1], (uint32_t)d); \
+                msgpack_pack_append_buffer(x, buf, 5); \
+            } else { \
+                /* unsigned 64 */ \
+                unsigned char buf[9]; \
+                buf[0] = 0xcf; _msgpack_store64(&buf[1], d); \
+                msgpack_pack_append_buffer(x, buf, 9); \
+            } \
+        } \
+    } \
+} while(0)
+
+
+#ifdef msgpack_pack_inline_func_fixint
+
+msgpack_pack_inline_func_fixint(_uint8)(msgpack_pack_user x, uint8_t d)
+{
+    unsigned char buf[2] = {0xcc, TAKE8_8(d)};
+    msgpack_pack_append_buffer(x, buf, 2);
+}
+
+msgpack_pack_inline_func_fixint(_uint16)(msgpack_pack_user x, uint16_t d)
+{
+    unsigned char buf[3];
+    buf[0] = 0xcd; _msgpack_store16(&buf[1], d);
+    msgpack_pack_append_buffer(x, buf, 3);
+}
+
+msgpack_pack_inline_func_fixint(_uint32)(msgpack_pack_user x, uint32_t d)
+{
+    unsigned char buf[5];
+    buf[0] = 0xce; _msgpack_store32(&buf[1], d);
+    msgpack_pack_append_buffer(x, buf, 5);
+}
+
+msgpack_pack_inline_func_fixint(_uint64)(msgpack_pack_user x, uint64_t d)
+{
+    unsigned char buf[9];
+    buf[0] = 0xcf; _msgpack_store64(&buf[1], d);
+    msgpack_pack_append_buffer(x, buf, 9);
+}
+
+msgpack_pack_inline_func_fixint(_int8)(msgpack_pack_user x, int8_t d)
+{
+    unsigned char buf[2] = {0xd0, TAKE8_8(d)};
+    msgpack_pack_append_buffer(x, buf, 2);
+}
+
+msgpack_pack_inline_func_fixint(_int16)(msgpack_pack_user x, int16_t d)
+{
+    unsigned char buf[3];
+    buf[0] = 0xd1; _msgpack_store16(&buf[1], d);
+    msgpack_pack_append_buffer(x, buf, 3);
+}
+
+msgpack_pack_inline_func_fixint(_int32)(msgpack_pack_user x, int32_t d)
+{
+    unsigned char buf[5];
+    buf[0] = 0xd2; _msgpack_store32(&buf[1], d);
+    msgpack_pack_append_buffer(x, buf, 5);
+}
+
+msgpack_pack_inline_func_fixint(_int64)(msgpack_pack_user x, int64_t d)
+{
+    unsigned char buf[9];
+    buf[0] = 0xd3; _msgpack_store64(&buf[1], d);
+    msgpack_pack_append_buffer(x, buf, 9);
+}
+
+#undef msgpack_pack_inline_func_fixint
+#endif
+
+
+msgpack_pack_inline_func(_uint8)(msgpack_pack_user x, uint8_t d)
+{
+    msgpack_pack_real_uint8(x, d);
+}
+
+msgpack_pack_inline_func(_uint16)(msgpack_pack_user x, uint16_t d)
+{
+    msgpack_pack_real_uint16(x, d);
+}
+
+msgpack_pack_inline_func(_uint32)(msgpack_pack_user x, uint32_t d)
+{
+    msgpack_pack_real_uint32(x, d);
+}
+
+msgpack_pack_inline_func(_uint64)(msgpack_pack_user x, uint64_t d)
+{
+    msgpack_pack_real_uint64(x, d);
+}
+
+msgpack_pack_inline_func(_int8)(msgpack_pack_user x, int8_t d)
+{
+    msgpack_pack_real_int8(x, d);
+}
+
+msgpack_pack_inline_func(_int16)(msgpack_pack_user x, int16_t d)
+{
+    msgpack_pack_real_int16(x, d);
+}
+
+msgpack_pack_inline_func(_int32)(msgpack_pack_user x, int32_t d)
+{
+    msgpack_pack_real_int32(x, d);
+}
+
+msgpack_pack_inline_func(_int64)(msgpack_pack_user x, int64_t d)
+{
+    msgpack_pack_real_int64(x, d);
+}
+
+
+#ifdef msgpack_pack_inline_func_cint
+
+msgpack_pack_inline_func_cint(_short)(msgpack_pack_user x, short d)
+{
+#if defined(SIZEOF_SHORT)
+#if SIZEOF_SHORT == 2
+    msgpack_pack_real_int16(x, d);
+#elif SIZEOF_SHORT == 4
+    msgpack_pack_real_int32(x, d);
+#else
+    msgpack_pack_real_int64(x, d);
+#endif
+
+#elif defined(SHRT_MAX)
+#if SHRT_MAX == 0x7fff
+    msgpack_pack_real_int16(x, d);
+#elif SHRT_MAX == 0x7fffffff
+    msgpack_pack_real_int32(x, d);
+#else
+    msgpack_pack_real_int64(x, d);
+#endif
+
+#else
+if(sizeof(short) == 2) {
+    msgpack_pack_real_int16(x, d);
+} else if(sizeof(short) == 4) {
+    msgpack_pack_real_int32(x, d);
+} else {
+    msgpack_pack_real_int64(x, d);
+}
+#endif
+}
+
+msgpack_pack_inline_func_cint(_int)(msgpack_pack_user x, int d)
+{
+#if defined(SIZEOF_INT)
+#if SIZEOF_INT == 2
+    msgpack_pack_real_int16(x, d);
+#elif SIZEOF_INT == 4
+    msgpack_pack_real_int32(x, d);
+#else
+    msgpack_pack_real_int64(x, d);
+#endif
+
+#elif defined(INT_MAX)
+#if INT_MAX == 0x7fff
+    msgpack_pack_real_int16(x, d);
+#elif INT_MAX == 0x7fffffff
+    msgpack_pack_real_int32(x, d);
+#else
+    msgpack_pack_real_int64(x, d);
+#endif
+
+#else
+if(sizeof(int) == 2) {
+    msgpack_pack_real_int16(x, d);
+} else if(sizeof(int) == 4) {
+    msgpack_pack_real_int32(x, d);
+} else {
+    msgpack_pack_real_int64(x, d);
+}
+#endif
+}
+
+msgpack_pack_inline_func_cint(_long)(msgpack_pack_user x, long d)
+{
+#if defined(SIZEOF_LONG)
+#if SIZEOF_LONG == 2
+    msgpack_pack_real_int16(x, d);
+#elif SIZEOF_LONG == 4
+    msgpack_pack_real_int32(x, d);
+#else
+    msgpack_pack_real_int64(x, d);
+#endif
+
+#elif defined(LONG_MAX)
+#if LONG_MAX == 0x7fffL
+    msgpack_pack_real_int16(x, d);
+#elif LONG_MAX == 0x7fffffffL
+    msgpack_pack_real_int32(x, d);
+#else
+    msgpack_pack_real_int64(x, d);
+#endif
+
+#else
+if(sizeof(long) == 2) {
+    msgpack_pack_real_int16(x, d);
+} else if(sizeof(long) == 4) {
+    msgpack_pack_real_int32(x, d);
+} else {
+    msgpack_pack_real_int64(x, d);
+}
+#endif
+}
+
+msgpack_pack_inline_func_cint(_long_long)(msgpack_pack_user x, long long d)
+{
+#if defined(SIZEOF_LONG_LONG)
+#if SIZEOF_LONG_LONG == 2
+    msgpack_pack_real_int16(x, d);
+#elif SIZEOF_LONG_LONG == 4
+    msgpack_pack_real_int32(x, d);
+#else
+    msgpack_pack_real_int64(x, d);
+#endif
+
+#elif defined(LLONG_MAX)
+#if LLONG_MAX == 0x7fffL
+    msgpack_pack_real_int16(x, d);
+#elif LLONG_MAX == 0x7fffffffL
+    msgpack_pack_real_int32(x, d);
+#else
+    msgpack_pack_real_int64(x, d);
+#endif
+
+#else
+if(sizeof(long long) == 2) {
+    msgpack_pack_real_int16(x, d);
+} else if(sizeof(long long) == 4) {
+    msgpack_pack_real_int32(x, d);
+} else {
+    msgpack_pack_real_int64(x, d);
+}
+#endif
+}
+
+msgpack_pack_inline_func_cint(_unsigned_short)(msgpack_pack_user x, unsigned short d)
+{
+#if defined(SIZEOF_SHORT)
+#if SIZEOF_SHORT == 2
+    msgpack_pack_real_uint16(x, d);
+#elif SIZEOF_SHORT == 4
+    msgpack_pack_real_uint32(x, d);
+#else
+    msgpack_pack_real_uint64(x, d);
+#endif
+
+#elif defined(USHRT_MAX)
+#if USHRT_MAX == 0xffffU
+    msgpack_pack_real_uint16(x, d);
+#elif USHRT_MAX == 0xffffffffU
+    msgpack_pack_real_uint32(x, d);
+#else
+    msgpack_pack_real_uint64(x, d);
+#endif
+
+#else
+if(sizeof(unsigned short) == 2) {
+    msgpack_pack_real_uint16(x, d);
+} else if(sizeof(unsigned short) == 4) {
+    msgpack_pack_real_uint32(x, d);
+} else {
+    msgpack_pack_real_uint64(x, d);
+}
+#endif
+}
+
+msgpack_pack_inline_func_cint(_unsigned_int)(msgpack_pack_user x, unsigned int d)
+{
+#if defined(SIZEOF_INT)
+#if SIZEOF_INT == 2
+    msgpack_pack_real_uint16(x, d);
+#elif SIZEOF_INT == 4
+    msgpack_pack_real_uint32(x, d);
+#else
+    msgpack_pack_real_uint64(x, d);
+#endif
+
+#elif defined(UINT_MAX)
+#if UINT_MAX == 0xffffU
+    msgpack_pack_real_uint16(x, d);
+#elif UINT_MAX == 0xffffffffU
+    msgpack_pack_real_uint32(x, d);
+#else
+    msgpack_pack_real_uint64(x, d);
+#endif
+
+#else
+if(sizeof(unsigned int) == 2) {
+    msgpack_pack_real_uint16(x, d);
+} else if(sizeof(unsigned int) == 4) {
+    msgpack_pack_real_uint32(x, d);
+} else {
+    msgpack_pack_real_uint64(x, d);
+}
+#endif
+}
+
+msgpack_pack_inline_func_cint(_unsigned_long)(msgpack_pack_user x, unsigned long d)
+{
+#if defined(SIZEOF_LONG)
+#if SIZEOF_LONG == 2
+    msgpack_pack_real_uint16(x, d);
+#elif SIZEOF_LONG == 4
+    msgpack_pack_real_uint32(x, d);
+#else
+    msgpack_pack_real_uint64(x, d);
+#endif
+
+#elif defined(ULONG_MAX)
+#if ULONG_MAX == 0xffffUL
+    msgpack_pack_real_uint16(x, d);
+#elif ULONG_MAX == 0xffffffffUL
+    msgpack_pack_real_uint32(x, d);
+#else
+    msgpack_pack_real_uint64(x, d);
+#endif
+
+#else
+if(sizeof(unsigned long) == 2) {
+    msgpack_pack_real_uint16(x, d);
+} else if(sizeof(unsigned long) == 4) {
+    msgpack_pack_real_uint32(x, d);
+} else {
+    msgpack_pack_real_uint64(x, d);
+}
+#endif
+}
+
+msgpack_pack_inline_func_cint(_unsigned_long_long)(msgpack_pack_user x, unsigned long long d)
+{
+#if defined(SIZEOF_LONG_LONG)
+#if SIZEOF_LONG_LONG == 2
+    msgpack_pack_real_uint16(x, d);
+#elif SIZEOF_LONG_LONG == 4
+    msgpack_pack_real_uint32(x, d);
+#else
+    msgpack_pack_real_uint64(x, d);
+#endif
+
+#elif defined(ULLONG_MAX)
+#if ULLONG_MAX == 0xffffUL
+    msgpack_pack_real_uint16(x, d);
+#elif ULLONG_MAX == 0xffffffffUL
+    msgpack_pack_real_uint32(x, d);
+#else
+    msgpack_pack_real_uint64(x, d);
+#endif
+
+#else
+if(sizeof(unsigned long long) == 2) {
+    msgpack_pack_real_uint16(x, d);
+} else if(sizeof(unsigned long long) == 4) {
+    msgpack_pack_real_uint32(x, d);
+} else {
+    msgpack_pack_real_uint64(x, d);
+}
+#endif
+}
+
+#undef msgpack_pack_inline_func_cint
+#endif
+
+
+
+/*
+ * Float
+ */
+
+msgpack_pack_inline_func(_float)(msgpack_pack_user x, float d)
+{
+    union { float f; uint32_t i; } mem;
+    mem.f = d;
+    unsigned char buf[5];
+    buf[0] = 0xca; _msgpack_store32(&buf[1], mem.i);
+    msgpack_pack_append_buffer(x, buf, 5);
+}
+
+msgpack_pack_inline_func(_double)(msgpack_pack_user x, double d)
+{
+    union { double f; uint64_t i; } mem;
+    mem.f = d;
+    unsigned char buf[9];
+    buf[0] = 0xcb;
+#if defined(__arm__) && !(__ARM_EABI__) // arm-oabi
+    // https://github.com/msgpack/msgpack-perl/pull/1
+    mem.i = (mem.i & 0xFFFFFFFFUL) << 32UL | (mem.i >> 32UL);
+#endif
+    _msgpack_store64(&buf[1], mem.i);
+    msgpack_pack_append_buffer(x, buf, 9);
+}
+
+
+/*
+ * Nil
+ */
+
+msgpack_pack_inline_func(_nil)(msgpack_pack_user x)
+{
+    static const unsigned char d = 0xc0;
+    msgpack_pack_append_buffer(x, &d, 1);
+}
+
+
+/*
+ * Boolean
+ */
+
+msgpack_pack_inline_func(_true)(msgpack_pack_user x)
+{
+    static const unsigned char d = 0xc3;
+    msgpack_pack_append_buffer(x, &d, 1);
+}
+
+msgpack_pack_inline_func(_false)(msgpack_pack_user x)
+{
+    static const unsigned char d = 0xc2;
+    msgpack_pack_append_buffer(x, &d, 1);
+}
+
+
+/*
+ * Array
+ */
+
+msgpack_pack_inline_func(_array)(msgpack_pack_user x, unsigned int n)
+{
+    if(n < 16) {
+        unsigned char d = 0x90 | n;
+        msgpack_pack_append_buffer(x, &d, 1);
+    } else if(n < 65536) {
+        unsigned char buf[3];
+        buf[0] = 0xdc; _msgpack_store16(&buf[1], (uint16_t)n);
+        msgpack_pack_append_buffer(x, buf, 3);
+    } else {
+        unsigned char buf[5];
+        buf[0] = 0xdd; _msgpack_store32(&buf[1], (uint32_t)n);
+        msgpack_pack_append_buffer(x, buf, 5);
+    }
+}
+
+
+/*
+ * Map
+ */
+
+msgpack_pack_inline_func(_map)(msgpack_pack_user x, unsigned int n)
+{
+    if(n < 16) {
+        unsigned char d = 0x80 | n;
+        msgpack_pack_append_buffer(x, &TAKE8_8(d), 1);
+    } else if(n < 65536) {
+        unsigned char buf[3];
+        buf[0] = 0xde; _msgpack_store16(&buf[1], (uint16_t)n);
+        msgpack_pack_append_buffer(x, buf, 3);
+    } else {
+        unsigned char buf[5];
+        buf[0] = 0xdf; _msgpack_store32(&buf[1], (uint32_t)n);
+        msgpack_pack_append_buffer(x, buf, 5);
+    }
+}
+
+
+/*
+ * Raw
+ */
+
+msgpack_pack_inline_func(_raw)(msgpack_pack_user x, size_t l)
+{
+    if(l < 32) {
+        unsigned char d = 0xa0 | (uint8_t)l;
+        msgpack_pack_append_buffer(x, &TAKE8_8(d), 1);
+    } else if(l < 65536) {
+        unsigned char buf[3];
+        buf[0] = 0xda; _msgpack_store16(&buf[1], (uint16_t)l);
+        msgpack_pack_append_buffer(x, buf, 3);
+    } else {
+        unsigned char buf[5];
+        buf[0] = 0xdb; _msgpack_store32(&buf[1], (uint32_t)l);
+        msgpack_pack_append_buffer(x, buf, 5);
+    }
+}
+
+msgpack_pack_inline_func(_raw_body)(msgpack_pack_user x, const void* b, size_t l)
+{
+    msgpack_pack_append_buffer(x, (const unsigned char*)b, l);
+}
+
+#undef msgpack_pack_inline_func
+#undef msgpack_pack_user
+#undef msgpack_pack_append_buffer
+
+#undef TAKE8_8
+#undef TAKE8_16
+#undef TAKE8_32
+#undef TAKE8_64
+
+#undef msgpack_pack_real_uint8
+#undef msgpack_pack_real_uint16
+#undef msgpack_pack_real_uint32
+#undef msgpack_pack_real_uint64
+#undef msgpack_pack_real_int8
+#undef msgpack_pack_real_int16
+#undef msgpack_pack_real_int32
+#undef msgpack_pack_real_int64
+
diff --git a/pandas/src/msgpack/sysdep.h b/pandas/src/msgpack/sysdep.h
new file mode 100644
index 0000000000000..4fedbd8ba472f
--- /dev/null
+++ b/pandas/src/msgpack/sysdep.h
@@ -0,0 +1,195 @@
+/*
+ * MessagePack system dependencies
+ *
+ * Copyright (C) 2008-2010 FURUHASHI Sadayuki
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef MSGPACK_SYSDEP_H__
+#define MSGPACK_SYSDEP_H__
+
+#include <stdlib.h>
+#include <stddef.h>
+#if defined(_MSC_VER) && _MSC_VER < 1600
+typedef __int8 int8_t;
+typedef unsigned __int8 uint8_t;
+typedef __int16 int16_t;
+typedef unsigned __int16 uint16_t;
+typedef __int32 int32_t;
+typedef unsigned __int32 uint32_t;
+typedef __int64 int64_t;
+typedef unsigned __int64 uint64_t;
+#elif defined(_MSC_VER) // && _MSC_VER >= 1600
+#include <stdint.h>
+#else
+#include <stdint.h>
+#include <stdbool.h>
+#endif
+
+#ifdef _WIN32
+#define _msgpack_atomic_counter_header <windows.h>
+typedef long _msgpack_atomic_counter_t;
+#define _msgpack_sync_decr_and_fetch(ptr) InterlockedDecrement(ptr)
+#define _msgpack_sync_incr_and_fetch(ptr) InterlockedIncrement(ptr)
+#elif defined(__GNUC__) && ((__GNUC__*10 + __GNUC_MINOR__) < 41)
+#define _msgpack_atomic_counter_header "gcc_atomic.h"
+#else
+typedef unsigned int _msgpack_atomic_counter_t;
+#define _msgpack_sync_decr_and_fetch(ptr) __sync_sub_and_fetch(ptr, 1)
+#define _msgpack_sync_incr_and_fetch(ptr) __sync_add_and_fetch(ptr, 1)
+#endif
+
+#ifdef _WIN32
+
+#ifdef __cplusplus
+/* numeric_limits<T>::min,max */
+#ifdef max
+#undef max
+#endif
+#ifdef min
+#undef min
+#endif
+#endif
+
+#else
+#include <arpa/inet.h> /* __BYTE_ORDER */
+#endif
+
+#if !defined(__LITTLE_ENDIAN__) && !defined(__BIG_ENDIAN__)
+#if __BYTE_ORDER == __LITTLE_ENDIAN
+#define __LITTLE_ENDIAN__
+#elif __BYTE_ORDER == __BIG_ENDIAN
+#define __BIG_ENDIAN__
+#elif _WIN32
+#define __LITTLE_ENDIAN__
+#endif
+#endif
+
+
+#ifdef __LITTLE_ENDIAN__
+
+#ifdef _WIN32
+# if defined(ntohs)
+# define _msgpack_be16(x) ntohs(x)
+# elif defined(_byteswap_ushort) || (defined(_MSC_VER) && _MSC_VER >= 1400)
+# define _msgpack_be16(x) ((uint16_t)_byteswap_ushort((unsigned short)x))
+# else
+# define _msgpack_be16(x) ( \
+    ((((uint16_t)x) << 8) ) | \
+    ((((uint16_t)x) >> 8) ) )
+# endif
+#else
+# define _msgpack_be16(x) ntohs(x)
+#endif
+
+#ifdef _WIN32
+# if defined(ntohl)
+# define _msgpack_be32(x) ntohl(x)
+# elif defined(_byteswap_ulong) || (defined(_MSC_VER) && _MSC_VER >= 1400)
+# define _msgpack_be32(x) ((uint32_t)_byteswap_ulong((unsigned long)x))
+# else
+# define _msgpack_be32(x) \
+    ( ((((uint32_t)x) << 24) ) | \
+      ((((uint32_t)x) << 8) & 0x00ff0000U ) | \
+      ((((uint32_t)x) >> 8) & 0x0000ff00U ) | \
+      ((((uint32_t)x) >> 24) ) )
+# endif
+#else
+# define _msgpack_be32(x) ntohl(x)
+#endif
+
+#if defined(_byteswap_uint64) || (defined(_MSC_VER) && _MSC_VER >= 1400)
+# define _msgpack_be64(x) (_byteswap_uint64(x))
+#elif defined(bswap_64)
+# define _msgpack_be64(x) bswap_64(x)
+#elif defined(__DARWIN_OSSwapInt64)
+# define _msgpack_be64(x) __DARWIN_OSSwapInt64(x)
+#else
+#define _msgpack_be64(x) \
+    ( ((((uint64_t)x) << 56) ) | \
+      ((((uint64_t)x) << 40) & 0x00ff000000000000ULL ) | \
+      ((((uint64_t)x) << 24) & 0x0000ff0000000000ULL ) | \
+      ((((uint64_t)x) << 8) & 0x000000ff00000000ULL ) | \
+      ((((uint64_t)x) >> 8) & 0x00000000ff000000ULL ) | \
+      ((((uint64_t)x) >> 24) & 0x0000000000ff0000ULL ) | \
+      ((((uint64_t)x) >> 40) & 0x000000000000ff00ULL ) | \
+      ((((uint64_t)x) >> 56) ) )
+#endif
+
+#define _msgpack_load16(cast, from) ((cast)( \
+    (((uint16_t)((uint8_t*)(from))[0]) << 8) | \
+    (((uint16_t)((uint8_t*)(from))[1]) ) ))
+
+#define _msgpack_load32(cast, from) ((cast)( \
+    (((uint32_t)((uint8_t*)(from))[0]) << 24) | \
+    (((uint32_t)((uint8_t*)(from))[1]) << 16) | \
+    (((uint32_t)((uint8_t*)(from))[2]) << 8) | \
+    (((uint32_t)((uint8_t*)(from))[3]) ) ))
+
+#define _msgpack_load64(cast, from) ((cast)( \
+    (((uint64_t)((uint8_t*)(from))[0]) << 56) | \
+    (((uint64_t)((uint8_t*)(from))[1]) << 48) | \
+    (((uint64_t)((uint8_t*)(from))[2]) << 40) | \
+    (((uint64_t)((uint8_t*)(from))[3]) << 32) | \
+    (((uint64_t)((uint8_t*)(from))[4]) << 24) | \
+    (((uint64_t)((uint8_t*)(from))[5]) << 16) | \
+    (((uint64_t)((uint8_t*)(from))[6]) << 8) | \
+    (((uint64_t)((uint8_t*)(from))[7]) ) ))
+
+#else
+
+#define _msgpack_be16(x) (x)
+#define _msgpack_be32(x) (x)
+#define _msgpack_be64(x) (x)
+
+#define _msgpack_load16(cast, from) ((cast)( \
+    (((uint16_t)((uint8_t*)from)[0]) << 8) | \
+    (((uint16_t)((uint8_t*)from)[1]) ) ))
+
+#define _msgpack_load32(cast, from) ((cast)( \
+    (((uint32_t)((uint8_t*)from)[0]) << 24) | \
+    (((uint32_t)((uint8_t*)from)[1]) << 16) | \
+    (((uint32_t)((uint8_t*)from)[2]) << 8) | \
+    (((uint32_t)((uint8_t*)from)[3]) ) ))
+
+#define _msgpack_load64(cast, from) ((cast)( \
+    (((uint64_t)((uint8_t*)from)[0]) << 56) | \
+    (((uint64_t)((uint8_t*)from)[1]) << 48) | \
+    (((uint64_t)((uint8_t*)from)[2]) << 40) | \
+    (((uint64_t)((uint8_t*)from)[3]) << 32) | \
+    (((uint64_t)((uint8_t*)from)[4]) << 24) | \
+    (((uint64_t)((uint8_t*)from)[5]) << 16) | \
+    (((uint64_t)((uint8_t*)from)[6]) << 8) | \
+    (((uint64_t)((uint8_t*)from)[7]) ) ))
+#endif
+
+
+#define _msgpack_store16(to, num) \
+    do { uint16_t val = _msgpack_be16(num); memcpy(to, &val, 2); } while(0)
+#define _msgpack_store32(to, num) \
+    do { uint32_t val = _msgpack_be32(num); memcpy(to, &val, 4); } while(0)
+#define _msgpack_store64(to, num) \
+    do { uint64_t val = _msgpack_be64(num); memcpy(to, &val, 8); } while(0)
+
+/*
+#define _msgpack_load16(cast, from) \
+    ({ cast val; memcpy(&val, (char*)from, 2); _msgpack_be16(val); })
+#define _msgpack_load32(cast, from) \
+    ({ cast val; memcpy(&val, (char*)from, 4); _msgpack_be32(val); })
+#define _msgpack_load64(cast, from) \
+    ({ cast val; memcpy(&val, (char*)from, 8); _msgpack_be64(val); })
+*/
+
+
+#endif /* msgpack/sysdep.h */
+
diff --git a/pandas/src/msgpack/unpack.h b/pandas/src/msgpack/unpack.h
new file mode 100644
index 0000000000000..3dc88e5fbded0
--- /dev/null
+++ b/pandas/src/msgpack/unpack.h
@@ -0,0 +1,235 @@
+/*
+ * MessagePack for Python unpacking routine
+ *
+ * Copyright (C) 2009 Naoki INADA
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +#define MSGPACK_EMBED_STACK_SIZE (1024) +#include "unpack_define.h" + +typedef struct unpack_user { + int use_list; + PyObject *object_hook; + bool has_pairs_hook; + PyObject *list_hook; + const char *encoding; + const char *unicode_errors; +} unpack_user; + + +#define msgpack_unpack_struct(name) \ + struct template ## name + +#define msgpack_unpack_func(ret, name) \ + static inline ret template ## name + +#define msgpack_unpack_callback(name) \ + template_callback ## name + +#define msgpack_unpack_object PyObject* + +#define msgpack_unpack_user unpack_user + +typedef int (*execute_fn)(msgpack_unpack_struct(_context)* ctx, const char* data, size_t len, size_t* off); + +struct template_context; +typedef struct template_context template_context; + +static inline msgpack_unpack_object template_callback_root(unpack_user* u) +{ + return NULL; +} + +static inline int template_callback_uint16(unpack_user* u, uint16_t d, msgpack_unpack_object* o) +{ + PyObject *p = PyInt_FromLong((long)d); + if (!p) + return -1; + *o = p; + return 0; +} +static inline int template_callback_uint8(unpack_user* u, uint8_t d, msgpack_unpack_object* o) +{ + return template_callback_uint16(u, d, o); +} + + +static inline int template_callback_uint32(unpack_user* u, uint32_t d, msgpack_unpack_object* o) +{ + PyObject *p; + if (d > LONG_MAX) { + p = PyLong_FromUnsignedLong((unsigned long)d); + } else { + p = PyInt_FromLong((long)d); + } + if (!p) + return -1; + *o = p; + return 0; +} + +static inline int template_callback_uint64(unpack_user* u, uint64_t d, 
msgpack_unpack_object* o) +{ + PyObject *p = PyLong_FromUnsignedLongLong(d); + if (!p) + return -1; + *o = p; + return 0; +} + +static inline int template_callback_int32(unpack_user* u, int32_t d, msgpack_unpack_object* o) +{ + PyObject *p = PyInt_FromLong(d); + if (!p) + return -1; + *o = p; + return 0; +} + +static inline int template_callback_int16(unpack_user* u, int16_t d, msgpack_unpack_object* o) +{ + return template_callback_int32(u, d, o); +} + +static inline int template_callback_int8(unpack_user* u, int8_t d, msgpack_unpack_object* o) +{ + return template_callback_int32(u, d, o); +} + +static inline int template_callback_int64(unpack_user* u, int64_t d, msgpack_unpack_object* o) +{ + PyObject *p = PyLong_FromLongLong(d); + if (!p) + return -1; + *o = p; + return 0; +} + +static inline int template_callback_double(unpack_user* u, double d, msgpack_unpack_object* o) +{ + PyObject *p = PyFloat_FromDouble(d); + if (!p) + return -1; + *o = p; + return 0; +} + +static inline int template_callback_float(unpack_user* u, float d, msgpack_unpack_object* o) +{ + return template_callback_double(u, d, o); +} + +static inline int template_callback_nil(unpack_user* u, msgpack_unpack_object* o) +{ Py_INCREF(Py_None); *o = Py_None; return 0; } + +static inline int template_callback_true(unpack_user* u, msgpack_unpack_object* o) +{ Py_INCREF(Py_True); *o = Py_True; return 0; } + +static inline int template_callback_false(unpack_user* u, msgpack_unpack_object* o) +{ Py_INCREF(Py_False); *o = Py_False; return 0; } + +static inline int template_callback_array(unpack_user* u, unsigned int n, msgpack_unpack_object* o) +{ + PyObject *p = u->use_list ? 
PyList_New(n) : PyTuple_New(n); + + if (!p) + return -1; + *o = p; + return 0; +} + +static inline int template_callback_array_item(unpack_user* u, unsigned int current, msgpack_unpack_object* c, msgpack_unpack_object o) +{ + if (u->use_list) + PyList_SET_ITEM(*c, current, o); + else + PyTuple_SET_ITEM(*c, current, o); + return 0; +} + +static inline int template_callback_array_end(unpack_user* u, msgpack_unpack_object* c) +{ + if (u->list_hook) { + PyObject *new_c = PyEval_CallFunction(u->list_hook, "(O)", *c); + if (!new_c) + return -1; + Py_DECREF(*c); + *c = new_c; + } + return 0; +} + +static inline int template_callback_map(unpack_user* u, unsigned int n, msgpack_unpack_object* o) +{ + PyObject *p; + if (u->has_pairs_hook) { + p = PyList_New(n); // Or use tuple? + } + else { + p = PyDict_New(); + } + if (!p) + return -1; + *o = p; + return 0; +} + +static inline int template_callback_map_item(unpack_user* u, unsigned int current, msgpack_unpack_object* c, msgpack_unpack_object k, msgpack_unpack_object v) +{ + if (u->has_pairs_hook) { + msgpack_unpack_object item = PyTuple_Pack(2, k, v); + if (!item) + return -1; + Py_DECREF(k); + Py_DECREF(v); + PyList_SET_ITEM(*c, current, item); + return 0; + } + else if (PyDict_SetItem(*c, k, v) == 0) { + Py_DECREF(k); + Py_DECREF(v); + return 0; + } + return -1; +} + +static inline int template_callback_map_end(unpack_user* u, msgpack_unpack_object* c) +{ + if (u->object_hook) { + PyObject *new_c = PyEval_CallFunction(u->object_hook, "(O)", *c); + if (!new_c) + return -1; + + Py_DECREF(*c); + *c = new_c; + } + return 0; +} + +static inline int template_callback_raw(unpack_user* u, const char* b, const char* p, unsigned int l, msgpack_unpack_object* o) +{ + PyObject *py; + if(u->encoding) { + py = PyUnicode_Decode(p, l, u->encoding, u->unicode_errors); + } else { + py = PyBytes_FromStringAndSize(p, l); + } + if (!py) + return -1; + *o = py; + return 0; +} + +#include "unpack_template.h" diff --git 
a/pandas/src/msgpack/unpack_define.h b/pandas/src/msgpack/unpack_define.h new file mode 100644 index 0000000000000..959d3519e7b5c --- /dev/null +++ b/pandas/src/msgpack/unpack_define.h @@ -0,0 +1,93 @@ +/* + * MessagePack unpacking routine template + * + * Copyright (C) 2008-2010 FURUHASHI Sadayuki + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +#ifndef MSGPACK_UNPACK_DEFINE_H__ +#define MSGPACK_UNPACK_DEFINE_H__ + +#include "msgpack/sysdep.h" +#include <stdlib.h> +#include <string.h> +#include <assert.h> +#include <stdio.h> + +#ifdef __cplusplus +extern "C" { +#endif + + +#ifndef MSGPACK_EMBED_STACK_SIZE +#define MSGPACK_EMBED_STACK_SIZE 32 +#endif + + +typedef enum { + CS_HEADER = 0x00, // nil + + //CS_ = 0x01, + //CS_ = 0x02, // false + //CS_ = 0x03, // true + + //CS_ = 0x04, + //CS_ = 0x05, + //CS_ = 0x06, + //CS_ = 0x07, + + //CS_ = 0x08, + //CS_ = 0x09, + CS_FLOAT = 0x0a, + CS_DOUBLE = 0x0b, + CS_UINT_8 = 0x0c, + CS_UINT_16 = 0x0d, + CS_UINT_32 = 0x0e, + CS_UINT_64 = 0x0f, + CS_INT_8 = 0x10, + CS_INT_16 = 0x11, + CS_INT_32 = 0x12, + CS_INT_64 = 0x13, + + //CS_ = 0x14, + //CS_ = 0x15, + //CS_BIG_INT_16 = 0x16, + //CS_BIG_INT_32 = 0x17, + //CS_BIG_FLOAT_16 = 0x18, + //CS_BIG_FLOAT_32 = 0x19, + CS_RAW_16 = 0x1a, + CS_RAW_32 = 0x1b, + CS_ARRAY_16 = 0x1c, + CS_ARRAY_32 = 0x1d, + CS_MAP_16 = 0x1e, + CS_MAP_32 = 0x1f, + + //ACS_BIG_INT_VALUE, + //ACS_BIG_FLOAT_VALUE, + ACS_RAW_VALUE, +} msgpack_unpack_state; + + +typedef enum { + CT_ARRAY_ITEM, + 
CT_MAP_KEY,
+    CT_MAP_VALUE,
+} msgpack_container_type;
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* msgpack/unpack_define.h */
+
diff --git a/pandas/src/msgpack/unpack_template.h b/pandas/src/msgpack/unpack_template.h
new file mode 100644
index 0000000000000..83b6918dc6686
--- /dev/null
+++ b/pandas/src/msgpack/unpack_template.h
@@ -0,0 +1,492 @@
+/*
+ * MessagePack unpacking routine template
+ *
+ * Copyright (C) 2008-2010 FURUHASHI Sadayuki
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */ + +#ifndef msgpack_unpack_func +#error msgpack_unpack_func template is not defined +#endif + +#ifndef msgpack_unpack_callback +#error msgpack_unpack_callback template is not defined +#endif + +#ifndef msgpack_unpack_struct +#error msgpack_unpack_struct template is not defined +#endif + +#ifndef msgpack_unpack_struct_decl +#define msgpack_unpack_struct_decl(name) msgpack_unpack_struct(name) +#endif + +#ifndef msgpack_unpack_object +#error msgpack_unpack_object type is not defined +#endif + +#ifndef msgpack_unpack_user +#error msgpack_unpack_user type is not defined +#endif + +#ifndef USE_CASE_RANGE +#if !defined(_MSC_VER) +#define USE_CASE_RANGE +#endif +#endif + +msgpack_unpack_struct_decl(_stack) { + msgpack_unpack_object obj; + size_t size; + size_t count; + unsigned int ct; + msgpack_unpack_object map_key; +}; + +msgpack_unpack_struct_decl(_context) { + msgpack_unpack_user user; + unsigned int cs; + unsigned int trail; + unsigned int top; + /* + msgpack_unpack_struct(_stack)* stack; + unsigned int stack_size; + msgpack_unpack_struct(_stack) embed_stack[MSGPACK_EMBED_STACK_SIZE]; + */ + msgpack_unpack_struct(_stack) stack[MSGPACK_EMBED_STACK_SIZE]; +}; + + +msgpack_unpack_func(void, _init)(msgpack_unpack_struct(_context)* ctx) +{ + ctx->cs = CS_HEADER; + ctx->trail = 0; + ctx->top = 0; + /* + ctx->stack = ctx->embed_stack; + ctx->stack_size = MSGPACK_EMBED_STACK_SIZE; + */ + ctx->stack[0].obj = msgpack_unpack_callback(_root)(&ctx->user); +} + +/* +msgpack_unpack_func(void, _destroy)(msgpack_unpack_struct(_context)* ctx) +{ + if(ctx->stack_size != MSGPACK_EMBED_STACK_SIZE) { + free(ctx->stack); + } +} +*/ + +msgpack_unpack_func(msgpack_unpack_object, _data)(msgpack_unpack_struct(_context)* ctx) +{ + return (ctx)->stack[0].obj; +} + + +template <bool construct> +msgpack_unpack_func(int, _execute)(msgpack_unpack_struct(_context)* ctx, const char* data, size_t len, size_t* off) +{ + assert(len >= *off); + + const unsigned char* p = (unsigned char*)data + *off; + 
const unsigned char* const pe = (unsigned char*)data + len; + const void* n = NULL; + + unsigned int trail = ctx->trail; + unsigned int cs = ctx->cs; + unsigned int top = ctx->top; + msgpack_unpack_struct(_stack)* stack = ctx->stack; + /* + unsigned int stack_size = ctx->stack_size; + */ + msgpack_unpack_user* user = &ctx->user; + + msgpack_unpack_object obj; + msgpack_unpack_struct(_stack)* c = NULL; + + int ret; + +#define construct_cb(name) \ + construct && msgpack_unpack_callback(name) + +#define push_simple_value(func) \ + if(construct_cb(func)(user, &obj) < 0) { goto _failed; } \ + goto _push +#define push_fixed_value(func, arg) \ + if(construct_cb(func)(user, arg, &obj) < 0) { goto _failed; } \ + goto _push +#define push_variable_value(func, base, pos, len) \ + if(construct_cb(func)(user, \ + (const char*)base, (const char*)pos, len, &obj) < 0) { goto _failed; } \ + goto _push + +#define again_fixed_trail(_cs, trail_len) \ + trail = trail_len; \ + cs = _cs; \ + goto _fixed_trail_again +#define again_fixed_trail_if_zero(_cs, trail_len, ifzero) \ + trail = trail_len; \ + if(trail == 0) { goto ifzero; } \ + cs = _cs; \ + goto _fixed_trail_again + +#define start_container(func, count_, ct_) \ + if(top >= MSGPACK_EMBED_STACK_SIZE) { goto _failed; } /* FIXME */ \ + if(construct_cb(func)(user, count_, &stack[top].obj) < 0) { goto _failed; } \ + if((count_) == 0) { obj = stack[top].obj; \ + if (construct_cb(func##_end)(user, &obj) < 0) { goto _failed; } \ + goto _push; } \ + stack[top].ct = ct_; \ + stack[top].size = count_; \ + stack[top].count = 0; \ + ++top; \ + /*printf("container %d count %d stack %d\n",stack[top].obj,count_,top);*/ \ + /*printf("stack push %d\n", top);*/ \ + /* FIXME \ + if(top >= stack_size) { \ + if(stack_size == MSGPACK_EMBED_STACK_SIZE) { \ + size_t csize = sizeof(msgpack_unpack_struct(_stack)) * MSGPACK_EMBED_STACK_SIZE; \ + size_t nsize = csize * 2; \ + msgpack_unpack_struct(_stack)* tmp = (msgpack_unpack_struct(_stack)*)malloc(nsize); \ 
+ if(tmp == NULL) { goto _failed; } \ + memcpy(tmp, ctx->stack, csize); \ + ctx->stack = stack = tmp; \ + ctx->stack_size = stack_size = MSGPACK_EMBED_STACK_SIZE * 2; \ + } else { \ + size_t nsize = sizeof(msgpack_unpack_struct(_stack)) * ctx->stack_size * 2; \ + msgpack_unpack_struct(_stack)* tmp = (msgpack_unpack_struct(_stack)*)realloc(ctx->stack, nsize); \ + if(tmp == NULL) { goto _failed; } \ + ctx->stack = stack = tmp; \ + ctx->stack_size = stack_size = stack_size * 2; \ + } \ + } \ + */ \ + goto _header_again + +#define NEXT_CS(p) \ + ((unsigned int)*p & 0x1f) + +#ifdef USE_CASE_RANGE +#define SWITCH_RANGE_BEGIN switch(*p) { +#define SWITCH_RANGE(FROM, TO) case FROM ... TO: +#define SWITCH_RANGE_DEFAULT default: +#define SWITCH_RANGE_END } +#else +#define SWITCH_RANGE_BEGIN { if(0) { +#define SWITCH_RANGE(FROM, TO) } else if(FROM <= *p && *p <= TO) { +#define SWITCH_RANGE_DEFAULT } else { +#define SWITCH_RANGE_END } } +#endif + + if(p == pe) { goto _out; } + do { + switch(cs) { + case CS_HEADER: + SWITCH_RANGE_BEGIN + SWITCH_RANGE(0x00, 0x7f) // Positive Fixnum + push_fixed_value(_uint8, *(uint8_t*)p); + SWITCH_RANGE(0xe0, 0xff) // Negative Fixnum + push_fixed_value(_int8, *(int8_t*)p); + SWITCH_RANGE(0xc0, 0xdf) // Variable + switch(*p) { + case 0xc0: // nil + push_simple_value(_nil); + //case 0xc1: // string + // again_terminal_trail(NEXT_CS(p), p+1); + case 0xc2: // false + push_simple_value(_false); + case 0xc3: // true + push_simple_value(_true); + //case 0xc4: + //case 0xc5: + //case 0xc6: + //case 0xc7: + //case 0xc8: + //case 0xc9: + case 0xca: // float + case 0xcb: // double + case 0xcc: // unsigned int 8 + case 0xcd: // unsigned int 16 + case 0xce: // unsigned int 32 + case 0xcf: // unsigned int 64 + case 0xd0: // signed int 8 + case 0xd1: // signed int 16 + case 0xd2: // signed int 32 + case 0xd3: // signed int 64 + again_fixed_trail(NEXT_CS(p), 1 << (((unsigned int)*p) & 0x03)); + //case 0xd4: + //case 0xd5: + //case 0xd6: // big integer 16 + 
//case 0xd7: // big integer 32 + //case 0xd8: // big float 16 + //case 0xd9: // big float 32 + case 0xda: // raw 16 + case 0xdb: // raw 32 + case 0xdc: // array 16 + case 0xdd: // array 32 + case 0xde: // map 16 + case 0xdf: // map 32 + again_fixed_trail(NEXT_CS(p), 2 << (((unsigned int)*p) & 0x01)); + default: + goto _failed; + } + SWITCH_RANGE(0xa0, 0xbf) // FixRaw + again_fixed_trail_if_zero(ACS_RAW_VALUE, ((unsigned int)*p & 0x1f), _raw_zero); + SWITCH_RANGE(0x90, 0x9f) // FixArray + start_container(_array, ((unsigned int)*p) & 0x0f, CT_ARRAY_ITEM); + SWITCH_RANGE(0x80, 0x8f) // FixMap + start_container(_map, ((unsigned int)*p) & 0x0f, CT_MAP_KEY); + + SWITCH_RANGE_DEFAULT + goto _failed; + SWITCH_RANGE_END + // end CS_HEADER + + + _fixed_trail_again: + ++p; + + default: + if((size_t)(pe - p) < trail) { goto _out; } + n = p; p += trail - 1; + switch(cs) { + //case CS_ + //case CS_ + case CS_FLOAT: { + union { uint32_t i; float f; } mem; + mem.i = _msgpack_load32(uint32_t,n); + push_fixed_value(_float, mem.f); } + case CS_DOUBLE: { + union { uint64_t i; double f; } mem; + mem.i = _msgpack_load64(uint64_t,n); +#if defined(__arm__) && !(__ARM_EABI__) // arm-oabi + // https://github.com/msgpack/msgpack-perl/pull/1 + mem.i = (mem.i & 0xFFFFFFFFUL) << 32UL | (mem.i >> 32UL); +#endif + push_fixed_value(_double, mem.f); } + case CS_UINT_8: + push_fixed_value(_uint8, *(uint8_t*)n); + case CS_UINT_16: + push_fixed_value(_uint16, _msgpack_load16(uint16_t,n)); + case CS_UINT_32: + push_fixed_value(_uint32, _msgpack_load32(uint32_t,n)); + case CS_UINT_64: + push_fixed_value(_uint64, _msgpack_load64(uint64_t,n)); + + case CS_INT_8: + push_fixed_value(_int8, *(int8_t*)n); + case CS_INT_16: + push_fixed_value(_int16, _msgpack_load16(int16_t,n)); + case CS_INT_32: + push_fixed_value(_int32, _msgpack_load32(int32_t,n)); + case CS_INT_64: + push_fixed_value(_int64, _msgpack_load64(int64_t,n)); + + //case CS_ + //case CS_ + //case CS_BIG_INT_16: + // 
again_fixed_trail_if_zero(ACS_BIG_INT_VALUE, _msgpack_load16(uint16_t,n), _big_int_zero); + //case CS_BIG_INT_32: + // again_fixed_trail_if_zero(ACS_BIG_INT_VALUE, _msgpack_load32(uint32_t,n), _big_int_zero); + //case ACS_BIG_INT_VALUE: + //_big_int_zero: + // // FIXME + // push_variable_value(_big_int, data, n, trail); + + //case CS_BIG_FLOAT_16: + // again_fixed_trail_if_zero(ACS_BIG_FLOAT_VALUE, _msgpack_load16(uint16_t,n), _big_float_zero); + //case CS_BIG_FLOAT_32: + // again_fixed_trail_if_zero(ACS_BIG_FLOAT_VALUE, _msgpack_load32(uint32_t,n), _big_float_zero); + //case ACS_BIG_FLOAT_VALUE: + //_big_float_zero: + // // FIXME + // push_variable_value(_big_float, data, n, trail); + + case CS_RAW_16: + again_fixed_trail_if_zero(ACS_RAW_VALUE, _msgpack_load16(uint16_t,n), _raw_zero); + case CS_RAW_32: + again_fixed_trail_if_zero(ACS_RAW_VALUE, _msgpack_load32(uint32_t,n), _raw_zero); + case ACS_RAW_VALUE: + _raw_zero: + push_variable_value(_raw, data, n, trail); + + case CS_ARRAY_16: + start_container(_array, _msgpack_load16(uint16_t,n), CT_ARRAY_ITEM); + case CS_ARRAY_32: + /* FIXME security guard */ + start_container(_array, _msgpack_load32(uint32_t,n), CT_ARRAY_ITEM); + + case CS_MAP_16: + start_container(_map, _msgpack_load16(uint16_t,n), CT_MAP_KEY); + case CS_MAP_32: + /* FIXME security guard */ + start_container(_map, _msgpack_load32(uint32_t,n), CT_MAP_KEY); + + default: + goto _failed; + } + } + +_push: + if(top == 0) { goto _finish; } + c = &stack[top-1]; + switch(c->ct) { + case CT_ARRAY_ITEM: + if(construct_cb(_array_item)(user, c->count, &c->obj, obj) < 0) { goto _failed; } + if(++c->count == c->size) { + obj = c->obj; + if (construct_cb(_array_end)(user, &obj) < 0) { goto _failed; } + --top; + /*printf("stack pop %d\n", top);*/ + goto _push; + } + goto _header_again; + case CT_MAP_KEY: + c->map_key = obj; + c->ct = CT_MAP_VALUE; + goto _header_again; + case CT_MAP_VALUE: + if(construct_cb(_map_item)(user, c->count, &c->obj, c->map_key, obj) < 0) { 
goto _failed; } + if(++c->count == c->size) { + obj = c->obj; + if (construct_cb(_map_end)(user, &obj) < 0) { goto _failed; } + --top; + /*printf("stack pop %d\n", top);*/ + goto _push; + } + c->ct = CT_MAP_KEY; + goto _header_again; + + default: + goto _failed; + } + +_header_again: + cs = CS_HEADER; + ++p; + } while(p != pe); + goto _out; + + +_finish: + if (!construct) + msgpack_unpack_callback(_nil)(user, &obj); + stack[0].obj = obj; + ++p; + ret = 1; + /*printf("-- finish --\n"); */ + goto _end; + +_failed: + /*printf("** FAILED **\n"); */ + ret = -1; + goto _end; + +_out: + ret = 0; + goto _end; + +_end: + ctx->cs = cs; + ctx->trail = trail; + ctx->top = top; + *off = p - (const unsigned char*)data; + + return ret; +#undef construct_cb +} + +#undef SWITCH_RANGE_BEGIN +#undef SWITCH_RANGE +#undef SWITCH_RANGE_DEFAULT +#undef SWITCH_RANGE_END +#undef push_simple_value +#undef push_fixed_value +#undef push_variable_value +#undef again_fixed_trail +#undef again_fixed_trail_if_zero +#undef start_container + +template <unsigned int fixed_offset, unsigned int var_offset> +msgpack_unpack_func(int, _container_header)(msgpack_unpack_struct(_context)* ctx, const char* data, size_t len, size_t* off) +{ + assert(len >= *off); + uint32_t size; + const unsigned char *const p = (unsigned char*)data + *off; + +#define inc_offset(inc) \ + if (len - *off < inc) \ + return 0; \ + *off += inc; + + switch (*p) { + case var_offset: + inc_offset(3); + size = _msgpack_load16(uint16_t, p + 1); + break; + case var_offset + 1: + inc_offset(5); + size = _msgpack_load32(uint32_t, p + 1); + break; +#ifdef USE_CASE_RANGE + case fixed_offset + 0x0 ... 
fixed_offset + 0xf:
+#else
+    case fixed_offset + 0x0:
+    case fixed_offset + 0x1:
+    case fixed_offset + 0x2:
+    case fixed_offset + 0x3:
+    case fixed_offset + 0x4:
+    case fixed_offset + 0x5:
+    case fixed_offset + 0x6:
+    case fixed_offset + 0x7:
+    case fixed_offset + 0x8:
+    case fixed_offset + 0x9:
+    case fixed_offset + 0xa:
+    case fixed_offset + 0xb:
+    case fixed_offset + 0xc:
+    case fixed_offset + 0xd:
+    case fixed_offset + 0xe:
+    case fixed_offset + 0xf:
+#endif
+        ++*off;
+        size = ((unsigned int)*p) & 0x0f;
+        break;
+    default:
+        PyErr_SetString(PyExc_ValueError, "Unexpected type header on stream");
+        return -1;
+    }
+    msgpack_unpack_callback(_uint32)(&ctx->user, size, &ctx->stack[0].obj);
+    return 1;
+}
+
+#undef SWITCH_RANGE_BEGIN
+#undef SWITCH_RANGE
+#undef SWITCH_RANGE_DEFAULT
+#undef SWITCH_RANGE_END
+
+static const execute_fn template_construct = &template_execute<true>;
+static const execute_fn template_skip = &template_execute<false>;
+static const execute_fn read_array_header = &template_container_header<0x90, 0xdc>;
+static const execute_fn read_map_header = &template_container_header<0x80, 0xde>;
+
+#undef msgpack_unpack_func
+#undef msgpack_unpack_callback
+#undef msgpack_unpack_struct
+#undef msgpack_unpack_object
+#undef msgpack_unpack_user
+
+#undef NEXT_CS
+
+/* vim: set ts=4 sw=4 noexpandtab */
diff --git a/pandas/tests/test_msgpack/__init__.py b/pandas/tests/test_msgpack/__init__.py
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/pandas/tests/test_msgpack/test_buffer.py b/pandas/tests/test_msgpack/test_buffer.py
new file mode 100644
index 0000000000000..940b65406103e
--- /dev/null
+++ b/pandas/tests/test_msgpack/test_buffer.py
@@ -0,0 +1,12 @@
+#!/usr/bin/env python
+# coding: utf-8
+
+from pandas.msgpack import packb, unpackb
+
+
+def test_unpack_buffer():
+    from array import array
+    buf = array('b')
+    buf.fromstring(packb(('foo', 'bar')))
+    obj = unpackb(buf, use_list=1)
+    assert [b'foo', b'bar'] == obj
diff --git
a/pandas/tests/test_msgpack/test_case.py b/pandas/tests/test_msgpack/test_case.py new file mode 100644 index 0000000000000..e78456b2ddb62 --- /dev/null +++ b/pandas/tests/test_msgpack/test_case.py @@ -0,0 +1,101 @@ +#!/usr/bin/env python +# coding: utf-8 + +from pandas.msgpack import packb, unpackb + + +def check(length, obj): + v = packb(obj) + assert len(v) == length, \ + "%r length should be %r but get %r" % (obj, length, len(v)) + assert unpackb(v, use_list=0) == obj + +def test_1(): + for o in [None, True, False, 0, 1, (1 << 6), (1 << 7) - 1, -1, + -((1<<5)-1), -(1<<5)]: + check(1, o) + +def test_2(): + for o in [1 << 7, (1 << 8) - 1, + -((1<<5)+1), -(1<<7) + ]: + check(2, o) + +def test_3(): + for o in [1 << 8, (1 << 16) - 1, + -((1<<7)+1), -(1<<15)]: + check(3, o) + +def test_5(): + for o in [1 << 16, (1 << 32) - 1, + -((1<<15)+1), -(1<<31)]: + check(5, o) + +def test_9(): + for o in [1 << 32, (1 << 64) - 1, + -((1<<31)+1), -(1<<63), + 1.0, 0.1, -0.1, -1.0]: + check(9, o) + + +def check_raw(overhead, num): + check(num + overhead, b" " * num) + +def test_fixraw(): + check_raw(1, 0) + check_raw(1, (1<<5) - 1) + +def test_raw16(): + check_raw(3, 1<<5) + check_raw(3, (1<<16) - 1) + +def test_raw32(): + check_raw(5, 1<<16) + + +def check_array(overhead, num): + check(num + overhead, (None,) * num) + +def test_fixarray(): + check_array(1, 0) + check_array(1, (1 << 4) - 1) + +def test_array16(): + check_array(3, 1 << 4) + check_array(3, (1<<16)-1) + +def test_array32(): + check_array(5, (1<<16)) + + +def match(obj, buf): + assert packb(obj) == buf + assert unpackb(buf, use_list=0) == obj + +def test_match(): + cases = [ + (None, b'\xc0'), + (False, b'\xc2'), + (True, b'\xc3'), + (0, b'\x00'), + (127, b'\x7f'), + (128, b'\xcc\x80'), + (256, b'\xcd\x01\x00'), + (-1, b'\xff'), + (-33, b'\xd0\xdf'), + (-129, b'\xd1\xff\x7f'), + ({1:1}, b'\x81\x01\x01'), + (1.0, b"\xcb\x3f\xf0\x00\x00\x00\x00\x00\x00"), + ((), b'\x90'), + 
(tuple(range(15)),b"\x9f\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e"), + (tuple(range(16)),b"\xdc\x00\x10\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"), + ({}, b'\x80'), + (dict([(x,x) for x in range(15)]), b'\x8f\x00\x00\x01\x01\x02\x02\x03\x03\x04\x04\x05\x05\x06\x06\x07\x07\x08\x08\t\t\n\n\x0b\x0b\x0c\x0c\r\r\x0e\x0e'), + (dict([(x,x) for x in range(16)]), b'\xde\x00\x10\x00\x00\x01\x01\x02\x02\x03\x03\x04\x04\x05\x05\x06\x06\x07\x07\x08\x08\t\t\n\n\x0b\x0b\x0c\x0c\r\r\x0e\x0e\x0f\x0f'), + ] + + for v, p in cases: + match(v, p) + +def test_unicode(): + assert unpackb(packb('foobar'), use_list=1) == b'foobar' diff --git a/pandas/tests/test_msgpack/test_except.py b/pandas/tests/test_msgpack/test_except.py new file mode 100644 index 0000000000000..a0239336ca20d --- /dev/null +++ b/pandas/tests/test_msgpack/test_except.py @@ -0,0 +1,29 @@ +#!/usr/bin/env python +# coding: utf-8 + +import unittest +import nose + +import datetime +from pandas.msgpack import packb, unpackb + +class DummyException(Exception): + pass + +class TestExceptions(unittest.TestCase): + + def test_raise_on_find_unsupported_value(self): + import datetime + self.assertRaises(TypeError, packb, datetime.datetime.now()) + + def test_raise_from_object_hook(self): + def hook(obj): + raise DummyException + self.assertRaises(DummyException, unpackb, packb({}), object_hook=hook) + self.assertRaises(DummyException, unpackb, packb({'fizz': 'buzz'}), object_hook=hook) + self.assertRaises(DummyException, unpackb, packb({'fizz': 'buzz'}), object_pairs_hook=hook) + self.assertRaises(DummyException, unpackb, packb({'fizz': {'buzz': 'spam'}}), object_hook=hook) + self.assertRaises(DummyException, unpackb, packb({'fizz': {'buzz': 'spam'}}), object_pairs_hook=hook) + + def test_invalidvalue(self): + self.assertRaises(ValueError, unpackb, b'\xd9\x97#DL_') diff --git a/pandas/tests/test_msgpack/test_format.py b/pandas/tests/test_msgpack/test_format.py new file mode 100644 index 
0000000000000..a3a3afd046ce2 --- /dev/null +++ b/pandas/tests/test_msgpack/test_format.py @@ -0,0 +1,70 @@ +#!/usr/bin/env python +# coding: utf-8 + +from pandas.msgpack import unpackb + +def check(src, should, use_list=0): + assert unpackb(src, use_list=use_list) == should + +def testSimpleValue(): + check(b"\x93\xc0\xc2\xc3", + (None, False, True,)) + +def testFixnum(): + check(b"\x92\x93\x00\x40\x7f\x93\xe0\xf0\xff", + ((0,64,127,), (-32,-16,-1,),) + ) + +def testFixArray(): + check(b"\x92\x90\x91\x91\xc0", + ((),((None,),),), + ) + +def testFixRaw(): + check(b"\x94\xa0\xa1a\xa2bc\xa3def", + (b"", b"a", b"bc", b"def",), + ) + +def testFixMap(): + check( + b"\x82\xc2\x81\xc0\xc0\xc3\x81\xc0\x80", + {False: {None: None}, True:{None:{}}}, + ) + +def testUnsignedInt(): + check( + b"\x99\xcc\x00\xcc\x80\xcc\xff\xcd\x00\x00\xcd\x80\x00" + b"\xcd\xff\xff\xce\x00\x00\x00\x00\xce\x80\x00\x00\x00" + b"\xce\xff\xff\xff\xff", + (0, 128, 255, 0, 32768, 65535, 0, 2147483648, 4294967295,), + ) + +def testSignedInt(): + check(b"\x99\xd0\x00\xd0\x80\xd0\xff\xd1\x00\x00\xd1\x80\x00" + b"\xd1\xff\xff\xd2\x00\x00\x00\x00\xd2\x80\x00\x00\x00" + b"\xd2\xff\xff\xff\xff", + (0, -128, -1, 0, -32768, -1, 0, -2147483648, -1,)) + +def testRaw(): + check(b"\x96\xda\x00\x00\xda\x00\x01a\xda\x00\x02ab\xdb\x00\x00" + b"\x00\x00\xdb\x00\x00\x00\x01a\xdb\x00\x00\x00\x02ab", + (b"", b"a", b"ab", b"", b"a", b"ab")) + +def testArray(): + check(b"\x96\xdc\x00\x00\xdc\x00\x01\xc0\xdc\x00\x02\xc2\xc3\xdd\x00" + b"\x00\x00\x00\xdd\x00\x00\x00\x01\xc0\xdd\x00\x00\x00\x02" + b"\xc2\xc3", + ((), (None,), (False,True), (), (None,), (False,True)) + ) + +def testMap(): + check( + b"\x96" + b"\xde\x00\x00" + b"\xde\x00\x01\xc0\xc2" + b"\xde\x00\x02\xc0\xc2\xc3\xc2" + b"\xdf\x00\x00\x00\x00" + b"\xdf\x00\x00\x00\x01\xc0\xc2" + b"\xdf\x00\x00\x00\x02\xc0\xc2\xc3\xc2", + ({}, {None: False}, {True: False, None: False}, {}, + {None: False}, {True: False, None: False})) diff --git 
a/pandas/tests/test_msgpack/test_obj.py b/pandas/tests/test_msgpack/test_obj.py new file mode 100644 index 0000000000000..4a018bc8b87f1 --- /dev/null +++ b/pandas/tests/test_msgpack/test_obj.py @@ -0,0 +1,71 @@ +# coding: utf-8 + +import unittest +import nose + +import datetime +from pandas.msgpack import packb, unpackb + +class DecodeError(Exception): + pass + +class TestObj(unittest.TestCase): + + def _arr_to_str(self, arr): + return ''.join(str(c) for c in arr) + + def bad_complex_decoder(self, o): + raise DecodeError("Ooops!") + + def _decode_complex(self, obj): + if b'__complex__' in obj: + return complex(obj[b'real'], obj[b'imag']) + return obj + + def _encode_complex(self, obj): + if isinstance(obj, complex): + return {b'__complex__': True, b'real': 1, b'imag': 2} + return obj + + def test_encode_hook(self): + packed = packb([3, 1+2j], default=self._encode_complex) + unpacked = unpackb(packed, use_list=1) + assert unpacked[1] == {b'__complex__': True, b'real': 1, b'imag': 2} + + def test_decode_hook(self): + packed = packb([3, {b'__complex__': True, b'real': 1, b'imag': 2}]) + unpacked = unpackb(packed, object_hook=self._decode_complex, use_list=1) + assert unpacked[1] == 1+2j + + def test_decode_pairs_hook(self): + packed = packb([3, {1: 2, 3: 4}]) + prod_sum = 1 * 2 + 3 * 4 + unpacked = unpackb(packed, object_pairs_hook=lambda l: sum(k * v for k, v in l), use_list=1) + assert unpacked[1] == prod_sum + + def test_only_one_obj_hook(self): + self.assertRaises(ValueError, unpackb, b'', object_hook=lambda x: x, object_pairs_hook=lambda x: x) + + def test_bad_hook(self): + def f(): + packed = packb([3, 1+2j], default=lambda o: o) + unpacked = unpackb(packed, use_list=1) + self.assertRaises(ValueError, f) + + def test_array_hook(self): + packed = packb([1,2,3]) + unpacked = unpackb(packed, list_hook=self._arr_to_str, use_list=1) + assert unpacked == '123' + + def test_an_exception_in_objecthook1(self): + def f(): + packed = packb({1: {'__complex__': True, 'real': 
1, 'imag': 2}}) + unpackb(packed, object_hook=self.bad_complex_decoder) + self.assertRaises(DecodeError, f) + + + def test_an_exception_in_objecthook2(self): + def f(): + packed = packb({1: [{'__complex__': True, 'real': 1, 'imag': 2}]}) + unpackb(packed, list_hook=self.bad_complex_decoder, use_list=1) + self.assertRaises(DecodeError, f) diff --git a/pandas/tests/test_msgpack/test_pack.py b/pandas/tests/test_msgpack/test_pack.py new file mode 100644 index 0000000000000..22df6df5e2e45 --- /dev/null +++ b/pandas/tests/test_msgpack/test_pack.py @@ -0,0 +1,144 @@ +#!/usr/bin/env python +# coding: utf-8 + +import unittest +import nose + +import struct +from pandas import compat +from pandas.compat import u, OrderedDict +from pandas.msgpack import packb, unpackb, Unpacker, Packer + +class TestPack(unittest.TestCase): + + def check(self, data, use_list=False): + re = unpackb(packb(data), use_list=use_list) + assert re == data + + def testPack(self): + test_data = [ + 0, 1, 127, 128, 255, 256, 65535, 65536, + -1, -32, -33, -128, -129, -32768, -32769, + 1.0, + b"", b"a", b"a"*31, b"a"*32, + None, True, False, + (), ((),), ((), None,), + {None: 0}, + (1<<23), + ] + for td in test_data: + self.check(td) + + def testPackUnicode(self): + test_data = [ + u(""), u("abcd"), [u("defgh")], u("Русский текст"), + ] + for td in test_data: + re = unpackb(packb(td, encoding='utf-8'), use_list=1, encoding='utf-8') + assert re == td + packer = Packer(encoding='utf-8') + data = packer.pack(td) + re = Unpacker(compat.BytesIO(data), encoding='utf-8', use_list=1).unpack() + assert re == td + + def testPackUTF32(self): + test_data = [ + compat.u(""), + compat.u("abcd"), + [compat.u("defgh")], + compat.u("Русский текст"), + ] + for td in test_data: + re = unpackb(packb(td, encoding='utf-32'), use_list=1, encoding='utf-32') + assert re == td + + def testPackBytes(self): + test_data = [ + b"", b"abcd", (b"defgh",), + ] + for td in test_data: + self.check(td) + + def testIgnoreUnicodeErrors(self): 
+ re = unpackb(packb(b'abc\xeddef'), encoding='utf-8', unicode_errors='ignore', use_list=1) + assert re == "abcdef" + + def testStrictUnicodeUnpack(self): + self.assertRaises(UnicodeDecodeError, unpackb, packb(b'abc\xeddef'), encoding='utf-8', use_list=1) + + def testStrictUnicodePack(self): + self.assertRaises(UnicodeEncodeError, packb, compat.u("abc\xeddef"), encoding='ascii', unicode_errors='strict') + + def testIgnoreErrorsPack(self): + re = unpackb(packb(compat.u("abcФФФdef"), encoding='ascii', unicode_errors='ignore'), encoding='utf-8', use_list=1) + assert re == compat.u("abcdef") + + def testNoEncoding(self): + self.assertRaises(TypeError, packb, compat.u("abc"), encoding=None) + + def testDecodeBinary(self): + re = unpackb(packb("abc"), encoding=None, use_list=1) + assert re == b"abc" + + def testPackFloat(self): + assert packb(1.0, use_single_float=True) == b'\xca' + struct.pack('>f', 1.0) + assert packb(1.0, use_single_float=False) == b'\xcb' + struct.pack('>d', 1.0) + + def testArraySize(self, sizes=[0, 5, 50, 1000]): + bio = compat.BytesIO() + packer = Packer() + for size in sizes: + bio.write(packer.pack_array_header(size)) + for i in range(size): + bio.write(packer.pack(i)) + + bio.seek(0) + unpacker = Unpacker(bio, use_list=1) + for size in sizes: + assert unpacker.unpack() == list(range(size)) + + def test_manualreset(self, sizes=[0, 5, 50, 1000]): + packer = Packer(autoreset=False) + for size in sizes: + packer.pack_array_header(size) + for i in range(size): + packer.pack(i) + + bio = compat.BytesIO(packer.bytes()) + unpacker = Unpacker(bio, use_list=1) + for size in sizes: + assert unpacker.unpack() == list(range(size)) + + packer.reset() + assert packer.bytes() == b'' + + def testMapSize(self, sizes=[0, 5, 50, 1000]): + bio = compat.BytesIO() + packer = Packer() + for size in sizes: + bio.write(packer.pack_map_header(size)) + for i in range(size): + bio.write(packer.pack(i)) # key + bio.write(packer.pack(i * 2)) # value + + bio.seek(0) + 
unpacker = Unpacker(bio) + for size in sizes: + assert unpacker.unpack() == dict((i, i * 2) for i in range(size)) + + + def test_odict(self): + seq = [(b'one', 1), (b'two', 2), (b'three', 3), (b'four', 4)] + od = OrderedDict(seq) + assert unpackb(packb(od), use_list=1) == dict(seq) + def pair_hook(seq): + return list(seq) + assert unpackb(packb(od), object_pairs_hook=pair_hook, use_list=1) == seq + + + def test_pairlist(self): + pairlist = [(b'a', 1), (2, b'b'), (b'foo', b'bar')] + packer = Packer() + packed = packer.pack_map_pairs(pairlist) + unpacked = unpackb(packed, object_pairs_hook=list) + assert pairlist == unpacked diff --git a/pandas/tests/test_msgpack/test_read_size.py b/pandas/tests/test_msgpack/test_read_size.py new file mode 100644 index 0000000000000..db3e1deb04f8f --- /dev/null +++ b/pandas/tests/test_msgpack/test_read_size.py @@ -0,0 +1,65 @@ +"""Test Unpacker's read_array_header and read_map_header methods""" +from pandas.msgpack import packb, Unpacker, OutOfData +UnexpectedTypeException = ValueError + +def test_read_array_header(): + unpacker = Unpacker() + unpacker.feed(packb(['a', 'b', 'c'])) + assert unpacker.read_array_header() == 3 + assert unpacker.unpack() == b'a' + assert unpacker.unpack() == b'b' + assert unpacker.unpack() == b'c' + try: + unpacker.unpack() + assert 0, 'should raise exception' + except OutOfData: + assert 1, 'okay' + + +def test_read_map_header(): + unpacker = Unpacker() + unpacker.feed(packb({'a': 'A'})) + assert unpacker.read_map_header() == 1 + assert unpacker.unpack() == B'a' + assert unpacker.unpack() == B'A' + try: + unpacker.unpack() + assert 0, 'should raise exception' + except OutOfData: + assert 1, 'okay' + +def test_incorrect_type_array(): + unpacker = Unpacker() + unpacker.feed(packb(1)) + try: + unpacker.read_array_header() + assert 0, 'should raise exception' + except UnexpectedTypeException: + assert 1, 'okay' + +def test_incorrect_type_map(): + unpacker = Unpacker() + unpacker.feed(packb(1)) + try: + 
unpacker.read_map_header() + assert 0, 'should raise exception' + except UnexpectedTypeException: + assert 1, 'okay' + +def test_correct_type_nested_array(): + unpacker = Unpacker() + unpacker.feed(packb({'a': ['b', 'c', 'd']})) + try: + unpacker.read_array_header() + assert 0, 'should raise exception' + except UnexpectedTypeException: + assert 1, 'okay' + +def test_incorrect_type_nested_map(): + unpacker = Unpacker() + unpacker.feed(packb([{'a': 'b'}])) + try: + unpacker.read_map_header() + assert 0, 'should raise exception' + except UnexpectedTypeException: + assert 1, 'okay' diff --git a/pandas/tests/test_msgpack/test_seq.py b/pandas/tests/test_msgpack/test_seq.py new file mode 100644 index 0000000000000..e5ee68c4cab84 --- /dev/null +++ b/pandas/tests/test_msgpack/test_seq.py @@ -0,0 +1,44 @@ +#!/usr/bin/env python +# coding: utf-8 + +from pandas import compat +from pandas.compat import u +import pandas.msgpack as msgpack + +binarydata = [chr(i) for i in range(256)] +binarydata = "".join(binarydata) +if compat.PY3: + binarydata = binarydata.encode('utf-8') + +def gen_binary_data(idx): + data = binarydata[:idx % 300] + return data + +def test_exceeding_unpacker_read_size(): + dumpf = compat.BytesIO() + + packer = msgpack.Packer() + + NUMBER_OF_STRINGS = 6 + read_size = 16 + # 5 ok for read_size=16, while 6 glibc detected *** python: double free or corruption (fasttop): + # 20 ok for read_size=256, while 25 segfaults / glibc detected *** python: double free or corruption (!prev) + # 40 ok for read_size=1024, while 50 introduces errors + # 7000 ok for read_size=1024*1024, while 8000 leads to glibc detected *** python: double free or corruption (!prev): + + for idx in range(NUMBER_OF_STRINGS): + data = gen_binary_data(idx) + dumpf.write(packer.pack(data)) + + f = compat.BytesIO(dumpf.getvalue()) + dumpf.close() + + unpacker = msgpack.Unpacker(f, read_size=read_size, use_list=1) + + read_count = 0 + for idx, o in enumerate(unpacker): + assert type(o) == bytes + 
assert o == gen_binary_data(idx) + read_count += 1 + + assert read_count == NUMBER_OF_STRINGS diff --git a/pandas/tests/test_msgpack/test_sequnpack.py b/pandas/tests/test_msgpack/test_sequnpack.py new file mode 100644 index 0000000000000..4c3ad363e5b6e --- /dev/null +++ b/pandas/tests/test_msgpack/test_sequnpack.py @@ -0,0 +1,84 @@ +#!/usr/bin/env python +# coding: utf-8 + +import unittest +import nose + +from pandas import compat +from pandas.msgpack import Unpacker, BufferFull +from pandas.msgpack import OutOfData + +class TestPack(unittest.TestCase): + + def test_partialdata(self): + unpacker = Unpacker() + unpacker.feed(b'\xa5') + self.assertRaises(StopIteration, next, iter(unpacker)) + unpacker.feed(b'h') + self.assertRaises(StopIteration, next, iter(unpacker)) + unpacker.feed(b'a') + self.assertRaises(StopIteration, next, iter(unpacker)) + unpacker.feed(b'l') + self.assertRaises(StopIteration, next, iter(unpacker)) + unpacker.feed(b'l') + self.assertRaises(StopIteration, next, iter(unpacker)) + unpacker.feed(b'o') + assert next(iter(unpacker)) == b'hallo' + + def test_foobar(self): + unpacker = Unpacker(read_size=3, use_list=1) + unpacker.feed(b'foobar') + assert unpacker.unpack() == ord(b'f') + assert unpacker.unpack() == ord(b'o') + assert unpacker.unpack() == ord(b'o') + assert unpacker.unpack() == ord(b'b') + assert unpacker.unpack() == ord(b'a') + assert unpacker.unpack() == ord(b'r') + self.assertRaises(OutOfData, unpacker.unpack) + + unpacker.feed(b'foo') + unpacker.feed(b'bar') + + k = 0 + for o, e in zip(unpacker, 'foobarbaz'): + assert o == ord(e) + k += 1 + assert k == len(b'foobar') + + def test_foobar_skip(self): + unpacker = Unpacker(read_size=3, use_list=1) + unpacker.feed(b'foobar') + assert unpacker.unpack() == ord(b'f') + unpacker.skip() + assert unpacker.unpack() == ord(b'o') + unpacker.skip() + assert unpacker.unpack() == ord(b'a') + unpacker.skip() + self.assertRaises(OutOfData, unpacker.unpack) + + def test_maxbuffersize(self): + 
self.assertRaises(ValueError, Unpacker, read_size=5, max_buffer_size=3) + unpacker = Unpacker(read_size=3, max_buffer_size=3, use_list=1) + unpacker.feed(b'fo') + self.assertRaises(BufferFull, unpacker.feed, b'ob') + unpacker.feed(b'o') + assert ord('f') == next(unpacker) + unpacker.feed(b'b') + assert ord('o') == next(unpacker) + assert ord('o') == next(unpacker) + assert ord('b') == next(unpacker) + + def test_readbytes(self): + unpacker = Unpacker(read_size=3) + unpacker.feed(b'foobar') + assert unpacker.unpack() == ord(b'f') + assert unpacker.read_bytes(3) == b'oob' + assert unpacker.unpack() == ord(b'a') + assert unpacker.unpack() == ord(b'r') + + # Test buffer refill + unpacker = Unpacker(compat.BytesIO(b'foobar'), read_size=3) + assert unpacker.unpack() == ord(b'f') + assert unpacker.read_bytes(3) == b'oob' + assert unpacker.unpack() == ord(b'a') + assert unpacker.unpack() == ord(b'r') diff --git a/pandas/tests/test_msgpack/test_subtype.py b/pandas/tests/test_msgpack/test_subtype.py new file mode 100644 index 0000000000000..0934b31cebeda --- /dev/null +++ b/pandas/tests/test_msgpack/test_subtype.py @@ -0,0 +1,21 @@ +#!/usr/bin/env python +# coding: utf-8 + +from pandas.msgpack import packb, unpackb +from collections import namedtuple + +class MyList(list): + pass + +class MyDict(dict): + pass + +class MyTuple(tuple): + pass + +MyNamedTuple = namedtuple('MyNamedTuple', 'x y') + +def test_types(): + assert packb(MyDict()) == packb(dict()) + assert packb(MyList()) == packb(list()) + assert packb(MyNamedTuple(1, 2)) == packb((1, 2)) diff --git a/pandas/tests/test_msgpack/test_unpack_raw.py b/pandas/tests/test_msgpack/test_unpack_raw.py new file mode 100644 index 0000000000000..0e96a79cf190a --- /dev/null +++ b/pandas/tests/test_msgpack/test_unpack_raw.py @@ -0,0 +1,28 @@ +"""Tests for cases where the user seeks to obtain packed msgpack objects""" + +from pandas import compat +from pandas.msgpack import Unpacker, packb + +def test_write_bytes(): + unpacker = 
Unpacker() + unpacker.feed(b'abc') + f = compat.BytesIO() + assert unpacker.unpack(f.write) == ord('a') + assert f.getvalue() == b'a' + f = compat.BytesIO() + assert unpacker.skip(f.write) is None + assert f.getvalue() == b'b' + f = compat.BytesIO() + assert unpacker.skip() is None + assert f.getvalue() == b'' + + +def test_write_bytes_multi_buffer(): + long_val = (5) * 100 + expected = packb(long_val) + unpacker = Unpacker(compat.BytesIO(expected), read_size=3, max_buffer_size=3) + + f = compat.BytesIO() + unpacked = unpacker.unpack(f.write) + assert unpacked == long_val + assert f.getvalue() == expected diff --git a/setup.py b/setup.py index ffd6089bdc88d..c326d14f552e0 100755 --- a/setup.py +++ b/setup.py @@ -464,6 +464,23 @@ def pxd(name): extensions.extend([sparse_ext]) +#---------------------------------------------------------------------- +# msgpack stuff here + +if sys.byteorder == 'big': + macros = [('__BIG_ENDIAN__', '1')] +else: + macros = [('__LITTLE_ENDIAN__', '1')] + +msgpack_ext = Extension('pandas.msgpack', + sources = [srcpath('msgpack', + suffix=suffix, subdir='')], + language='c++', + include_dirs=common_include, + define_macros=macros) + +extensions.append(msgpack_ext) + # if not ISRELEASED: # extensions.extend([sandbox_ext]) @@ -517,6 +534,7 @@ def pxd(name): 'pandas.stats', 'pandas.util', 'pandas.tests', + 'pandas.tests.test_msgpack', 'pandas.tools', 'pandas.tools.tests', 'pandas.tseries', diff --git a/vb_suite/packers.py b/vb_suite/packers.py new file mode 100644 index 0000000000000..9af6a6b1b0c4e --- /dev/null +++ b/vb_suite/packers.py @@ -0,0 +1,94 @@ +from vbench.api import Benchmark +from datetime import datetime + +start_date = datetime(2013, 5, 1) + +common_setup = """from pandas_vb_common import * +import os +import pandas as pd +from pandas.core import common as com + +f = '__test__.msg' +def remove(f): + try: + os.remove(f) + except: + pass + +index = date_range('20000101',periods=50000,freq='H') +df = DataFrame({'float1' : 
randn(50000), + 'float2' : randn(50000)}, + index=index) +remove(f) +""" + +#---------------------------------------------------------------------- +# msgpack + +setup = common_setup + """ +df.to_msgpack(f) +""" + +packers_read_pack = Benchmark("pd.read_msgpack(f)", setup, start_date=start_date) + +setup = common_setup + """ +""" + +packers_write_pack = Benchmark("df.to_msgpack(f)", setup, cleanup="remove(f)", start_date=start_date) + +#---------------------------------------------------------------------- +# pickle + +setup = common_setup + """ +df.to_pickle(f) +""" + +packers_read_pickle = Benchmark("pd.read_pickle(f)", setup, start_date=start_date) + +setup = common_setup + """ +""" + +packers_write_pickle = Benchmark("df.to_pickle(f)", setup, cleanup="remove(f)", start_date=start_date) + +#---------------------------------------------------------------------- +# csv + +setup = common_setup + """ +df.to_csv(f) +""" + +packers_read_csv = Benchmark("pd.read_csv(f)", setup, start_date=start_date) + +setup = common_setup + """ +""" + +packers_write_csv = Benchmark("df.to_csv(f)", setup, cleanup="remove(f)", start_date=start_date) + +#---------------------------------------------------------------------- +# hdf store + +setup = common_setup + """ +df.to_hdf(f,'df') +""" + +packers_read_hdf_store = Benchmark("pd.read_hdf(f,'df')", setup, start_date=start_date) + +setup = common_setup + """ +""" + +packers_write_hdf_store = Benchmark("df.to_hdf(f,'df')", setup, cleanup="remove(f)", start_date=start_date) + +#---------------------------------------------------------------------- +# hdf table + +setup = common_setup + """ +df.to_hdf(f,'df',table=True) +""" + +packers_read_hdf_table = Benchmark("pd.read_hdf(f,'df')", setup, start_date=start_date) + +setup = common_setup + """ +""" + +packers_write_hdf_table = Benchmark("df.to_hdf(f,'df',table=True)", setup, cleanup="remove(f)", start_date=start_date) + diff --git a/vb_suite/suite.py b/vb_suite/suite.py index 
57920fcbf7c19..e5002ef78ab9b 100644 --- a/vb_suite/suite.py +++ b/vb_suite/suite.py @@ -16,6 +16,7 @@ 'join_merge', 'miscellaneous', 'panel_ctor', + 'packers', 'parser', 'plotting', 'reindex',
extension of #3828

ToDo

- [x] remove use of `pytest` in test_msgpack
- [ ] PERF!

```
msgpack serialization/deserialization support all pandas objects:
Timestamp, Period, all index types, Series, DataFrame, Panel, Sparse

suite docs included (in io.rst)
iterator support
top-level api support
compression and direct calls to in-line msgpack (enabled via #3828)

will wait for 0.13+

closes #686
```

```
Benchmarking: 50k rows of 2x columns of floats with a datetime index

In [4]: %timeit df.to_msgpack('foo')
100 loops, best of 3: 16.5 ms per loop

In [2]: %timeit df.to_pickle('foo')
10 loops, best of 3: 12.9 ms per loop

In [6]: %timeit df.to_csv('foo')
1 loops, best of 3: 470 ms per loop

In [11]: %timeit df.to_hdf('foo2','df',mode='w')
10 loops, best of 3: 20.1 ms per loop

In [13]: %timeit df.to_hdf('foo2','df',mode='w',table=True)
10 loops, best of 3: 81.4 ms per loop
```

```
In [5]: %timeit pd.read_msgpack('foo')
100 loops, best of 3: 16.3 ms per loop

In [3]: %timeit pd.read_pickle('foo')
1000 loops, best of 3: 1.28 ms per loop

In [7]: %timeit pd.read_csv('foo')
10 loops, best of 3: 46.1 ms per loop

In [12]: %timeit pd.read_hdf('foo2','df')
100 loops, best of 3: 5.64 ms per loop

In [14]: %timeit pd.read_hdf('foo2','df')
100 loops, best of 3: 9.39 ms per loop
```

```
In [1]: df = DataFrame(randn(10,2),
   ...:                columns=list('AB'),
   ...:                index=date_range('20130101',periods=10))

In [2]: pd.to_msgpack('foo.msg',df)

In [3]: pd.read_msgpack('foo.msg')
Out[3]:
                   A         B
2013-01-01  0.676700 -1.702599
2013-01-02 -0.070164 -1.368716
2013-01-03 -0.877145 -1.427964
2013-01-04 -0.295715 -0.176954
2013-01-05  0.566986  0.588918
2013-01-06 -0.307070  1.541773
2013-01-07  1.302388  0.689701
2013-01-08  0.165292  0.273496
2013-01-09 -3.492113 -1.178075
2013-01-10 -1.069521  0.848614
```
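The `Unpacker.feed()` streaming behaviour the new tests exercise (partial data makes iteration raise `StopIteration`, a complete record yields a value, overfeeding a bounded buffer raises `BufferFull`) can be sketched with a stdlib-only stand-in. `TinyUnpacker` and its one-byte-length framing are illustrative inventions, not the real `pandas.msgpack` C extension:

```python
class TinyUnpacker:
    """Toy feed()-style stream decoder: each record is a 1-byte length
    followed by that many payload bytes (stands in for msgpack framing)."""

    def __init__(self, max_buffer_size=0):
        self._buf = b""
        self._max = max_buffer_size  # 0 means unbounded, like the real API

    def feed(self, data):
        # refuse data that would overflow the buffer, mimicking BufferFull
        if self._max and len(self._buf) + len(data) > self._max:
            raise BufferError("buffer full")
        self._buf += data

    def __iter__(self):
        return self

    def __next__(self):
        # no length byte yet, or an incomplete payload: signal
        # "no complete object available" the way the tests expect
        if not self._buf or len(self._buf) - 1 < self._buf[0]:
            raise StopIteration
        n = self._buf[0]
        payload, self._buf = self._buf[1:1 + n], self._buf[1 + n:]
        return payload
```

Feeding `b"\x05hal"` then iterating raises `StopIteration`; feeding the remaining `b"lo"` yields `b"hallo"`, mirroring `test_partialdata` above.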
https://api.github.com/repos/pandas-dev/pandas/pulls/3831
2013-06-10T14:10:57Z
2013-10-01T13:54:09Z
2013-10-01T13:54:09Z
2014-06-24T20:07:18Z
CLN: remove relative imports
diff --git a/pandas/__init__.py b/pandas/__init__.py index da4c146da3cfd..62de9a10e729b 100644 --- a/pandas/__init__.py +++ b/pandas/__init__.py @@ -3,17 +3,10 @@ __docformat__ = 'restructuredtext' try: - from . import hashtable, tslib, lib -except Exception: # pragma: no cover - import sys - e = sys.exc_info()[1] # Py25 and Py3 current exception syntax conflict - print e - if 'No module named lib' in str(e): - raise ImportError('C extensions not built: if you installed already ' - 'verify that you are not importing from the source ' - 'directory') - else: - raise + from pandas import hashtable, tslib, lib +except ImportError as e: # pragma: no cover + module = str(e).lstrip('cannot import name ') # hack but overkill to use re + raise ImportError("C extensions: {0} not built".format(module)) from datetime import datetime import numpy as np diff --git a/pandas/algos.pyx b/pandas/algos.pyx index cac9c5ccc7a6d..836101ecafa2d 100644 --- a/pandas/algos.pyx +++ b/pandas/algos.pyx @@ -57,7 +57,7 @@ cdef extern from "src/headers/math.h": double fabs(double) int signbit(double) -from . import lib +from pandas import lib include "skiplist.pyx" diff --git a/pandas/index.pyx b/pandas/index.pyx index 7d33d6083d0eb..85a83b745510f 100644 --- a/pandas/index.pyx +++ b/pandas/index.pyx @@ -15,8 +15,8 @@ import numpy as np cimport tslib from hashtable cimport * -from . import algos, tslib, hashtable as _hash -from .tslib import Timestamp +from pandas import algos, tslib, hashtable as _hash +from pandas.tslib import Timestamp from datetime cimport (get_datetime64_value, _pydatetime_to_dts, pandas_datetimestruct) @@ -34,7 +34,7 @@ try: import pytz UTC = pytz.utc have_pytz = True -except: +except ImportError: have_pytz = False PyDateTime_IMPORT @@ -42,8 +42,6 @@ PyDateTime_IMPORT cdef extern from "Python.h": int PySlice_Check(object) -# int PyList_Check(object) -# int PyTuple_Check(object) cdef inline is_definitely_invalid_key(object val): if PyTuple_Check(val):
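The patch's own comment calls `str(e).lstrip('cannot import name ')` a hack: `lstrip` strips a *character set*, not a prefix, so any module name beginning with one of those letters loses characters (`tslib` becomes `slib`). A regex, the alternative the comment deems overkill, is prefix-aware:

```python
import re

err = "cannot import name tslib"

# lstrip removes any leading run of the characters in its argument,
# so the 't' of 'tslib' (present in "cannot import name ") is eaten too
hacky = err.lstrip('cannot import name ')
print(hacky)   # 'slib' -- the module name is mangled

# a prefix-aware pattern recovers the full name
match = re.match(r"cannot import name '?([\w.]+)'?", err)
proper = match.group(1)
print(proper)  # 'tslib'
```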
https://api.github.com/repos/pandas-dev/pandas/pulls/3827
2013-06-10T00:58:47Z
2013-06-10T22:21:11Z
2013-06-10T22:21:11Z
2014-06-12T09:03:19Z
DOC: speedup io.rst doc build
diff --git a/doc/source/io.rst b/doc/source/io.rst index 9d923d2d0e0cf..ac5d49e036669 100644 --- a/doc/source/io.rst +++ b/doc/source/io.rst @@ -953,10 +953,15 @@ Reading HTML Content .. versionadded:: 0.11.1 -The toplevel :func:`~pandas.io.html.read_html` function can accept an HTML +The top-level :func:`~pandas.io.html.read_html` function can accept an HTML string/file/url and will parse HTML tables into list of pandas DataFrames. Let's look at a few examples. +.. note:: + + ``read_html`` returns a ``list`` of ``DataFrame`` objects, even if there is + only a single table contained in the HTML content + Read a URL with no options .. ipython:: python @@ -967,107 +972,129 @@ Read a URL with no options .. note:: - ``read_html`` returns a ``list`` of ``DataFrame`` objects, even if there is - only a single table contained in the HTML content + The data from the above URL changes every Monday so the resulting data above + and the data below may be slightly different. -Read a URL and match a table that contains specific text +Read in the content of the file from the above URL and pass it to ``read_html`` +as a string + +.. ipython:: python + :suppress: + + import os + file_path = os.path.abspath(os.path.join('source', '_static', 'banklist.html')) + +.. ipython:: python + + with open(file_path, 'r') as f: + dfs = read_html(f.read()) + dfs + +You can even pass in an instance of ``StringIO`` if you so desire .. ipython:: python + from cStringIO import StringIO + + with open(file_path, 'r') as f: + sio = StringIO(f.read()) + + dfs = read_html(sio) + dfs + +.. note:: + + The following examples are not run by the IPython evaluator due to the fact + that having so many network-accessing functions slows down the documentation + build. If you spot an error or an example that doesn't run, please do not + hesitate to report it over on `pandas GitHub issues page + <http://www.github.com/pydata/pandas/issues>`__. + + +Read a URL and match a table that contains specific text + +.. 
code-block:: python + match = 'Metcalf Bank' df_list = read_html(url, match=match) - len(dfs) - dfs[0] Specify a header row (by default ``<th>`` elements are used to form the column index); if specified, the header row is taken from the data minus the parsed header elements (``<th>`` elements). -.. ipython:: python +.. code-block:: python dfs = read_html(url, header=0) - len(dfs) - dfs[0] Specify an index column -.. ipython:: python +.. code-block:: python dfs = read_html(url, index_col=0) - len(dfs) - dfs[0] - dfs[0].index.name Specify a number of rows to skip -.. ipython:: python +.. code-block:: python dfs = read_html(url, skiprows=0) - len(dfs) - dfs[0] Specify a number of rows to skip using a list (``xrange`` (Python 2 only) works as well) -.. ipython:: python +.. code-block:: python dfs = read_html(url, skiprows=range(2)) - len(dfs) - dfs[0] Don't infer numeric and date types -.. ipython:: python +.. code-block:: python dfs = read_html(url, infer_types=False) - len(dfs) - dfs[0] Specify an HTML attribute -.. ipython:: python +.. code-block:: python dfs1 = read_html(url, attrs={'id': 'table'}) dfs2 = read_html(url, attrs={'class': 'sortable'}) - np.array_equal(dfs1[0], dfs2[0]) + print np.array_equal(dfs1[0], dfs2[0]) # Should be True Use some combination of the above -.. ipython:: python +.. code-block:: python dfs = read_html(url, match='Metcalf Bank', index_col=0) - len(dfs) - dfs[0] Read in pandas ``to_html`` output (with some loss of floating point precision) -.. ipython:: python +.. 
code-block:: python df = DataFrame(randn(2, 2)) s = df.to_html(float_format='{0:.40g}'.format) dfin = read_html(s, index_col=0) - df - dfin[0] - df.index - df.columns - dfin[0].index - dfin[0].columns - np.allclose(df, dfin[0]) -``lxml`` will raise an error on a failed parse if that is the only parser you -provide +The ``lxml`` backend will raise an error on a failed parse if that is the only +parser you provide (if you only have a single parser you can provide just a +string, but it is considered good practice to pass a list with one string if, +for example, the function expects a sequence of strings) -.. ipython:: python +.. code-block:: python + + dfs = read_html(url, 'Metcalf Bank', index_col=0, flavor=['lxml']) - dfs = read_html(url, match='Metcalf Bank', index_col=0, flavor=['lxml']) +or + +.. code-block:: python + + dfs = read_html(url, 'Metcalf Bank', index_col=0, flavor='lxml') However, if you have bs4 and html5lib installed and pass ``None`` or ``['lxml', 'bs4']`` then the parse will most likely succeed. Note that *as soon as a parse succeeds, the function will return*. -.. ipython:: python +.. code-block:: python - dfs = read_html(url, match='Metcalf Bank', index_col=0, flavor=['lxml', 'bs4']) + dfs = read_html(url, 'Metcalf Bank', index_col=0, flavor=['lxml', 'bs4']) Writing to HTML files @@ -1082,8 +1109,8 @@ in the method ``to_string`` described above. .. note:: Not all of the possible options for ``DataFrame.to_html`` are shown here for - brevity's sake. See :func:`~pandas.core.frame.DataFrame.to_html` for the full set of - options. + brevity's sake. See :func:`~pandas.core.frame.DataFrame.to_html` for the + full set of options. .. ipython:: python :suppress:
@jreback can you check this out and see if the speedup is acceptable?
https://api.github.com/repos/pandas-dev/pandas/pulls/3826
2013-06-10T00:34:59Z
2013-06-11T11:50:08Z
2013-06-11T11:50:08Z
2013-06-27T15:39:34Z
CLN: conform read_clipboard / to_clipboard to new io standards
diff --git a/RELEASE.rst b/RELEASE.rst index 4d85834706e80..eca69d824d377 100644 --- a/RELEASE.rst +++ b/RELEASE.rst @@ -127,6 +127,7 @@ pandas 0.11.1 - added ``pandas.io.api`` for i/o imports - removed ``Excel`` support to ``pandas.io.excel`` - added top-level ``pd.read_sql`` and ``to_sql`` DataFrame methods + - removed ``clipboard`` support to ``pandas.io.clipboard`` - the ``method`` and ``axis`` arguments of ``DataFrame.replace()`` are deprecated - Implement ``__nonzero__`` for ``NDFrame`` objects (GH3691_, GH3696_) diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 7dd0315d7d90e..5533584745167 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -492,8 +492,8 @@ def to_hdf(self, path_or_buf, key, **kwargs): return pytables.to_hdf(path_or_buf, key, self, **kwargs) def to_clipboard(self): - from pandas.io import parsers - parsers.to_clipboard(self) + from pandas.io import clipboard + clipboard.to_clipboard(self) # install the indexerse for _name, _indexer in indexing.get_indexers_list(): diff --git a/pandas/io/api.py b/pandas/io/api.py index e4c0c8c0c77f0..f17351921f83f 100644 --- a/pandas/io/api.py +++ b/pandas/io/api.py @@ -2,8 +2,8 @@ Data IO api """ -from pandas.io.parsers import (read_csv, read_table, read_clipboard, - read_fwf, to_clipboard) +from pandas.io.parsers import read_csv, read_table, read_fwf +from pandas.io.clipboard import read_clipboard from pandas.io.excel import ExcelFile, ExcelWriter, read_excel from pandas.io.pytables import HDFStore, Term, get_store, read_hdf from pandas.io.html import read_html diff --git a/pandas/io/clipboard.py b/pandas/io/clipboard.py new file mode 100644 index 0000000000000..4aa8db414386b --- /dev/null +++ b/pandas/io/clipboard.py @@ -0,0 +1,31 @@ +""" io on the clipboard """ + +def read_clipboard(**kwargs): # pragma: no cover + """ + Read text from clipboard and pass to read_table. 
See read_table for the + full argument list + + Returns + ------- + parsed : DataFrame + """ + from pandas.util.clipboard import clipboard_get + text = clipboard_get() + return read_table(StringIO(text), **kwargs) + + +def to_clipboard(obj): # pragma: no cover + """ + Attempt to write text representation of object to the system clipboard + + Notes + ----- + Requirements for your platform + - Linux: xsel command line tool + - Windows: Python win32 extensions + - OS X: + """ + from pandas.util.clipboard import clipboard_set + clipboard_set(str(obj)) + + diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py index 54ba7536afaee..6e937ba696e39 100644 --- a/pandas/io/parsers.py +++ b/pandas/io/parsers.py @@ -5,6 +5,7 @@ import re from itertools import izip import csv +from warnings import warn import numpy as np @@ -427,35 +428,6 @@ def read_fwf(filepath_or_buffer, colspecs=None, widths=None, **kwds): return _read(filepath_or_buffer, kwds) -def read_clipboard(**kwargs): # pragma: no cover - """ - Read text from clipboard and pass to read_table. 
See read_table for the - full argument list - - Returns - ------- - parsed : DataFrame - """ - from pandas.util.clipboard import clipboard_get - text = clipboard_get() - return read_table(StringIO(text), **kwargs) - - -def to_clipboard(obj): # pragma: no cover - """ - Attempt to write text representation of object to the system clipboard - - Notes - ----- - Requirements for your platform - - Linux: xsel command line tool - - Windows: Python win32 extensions - - OS X: - """ - from pandas.util.clipboard import clipboard_set - clipboard_set(str(obj)) - - # common NA values # no longer excluding inf representations # '1.#INF','-1.#INF', '1.#INF000000', @@ -1940,15 +1912,25 @@ def _make_reader(self, f): self.data = FixedWidthReader(f, self.colspecs, self.delimiter) +##### deprecations in 0.11.1 ##### +##### remove in 0.12 ##### + +from pandas.io import clipboard +def read_clipboard(**kwargs): + warn("read_clipboard is now a top-level accessible via pandas.read_clipboard", FutureWarning) + clipboard.read_clipboard(**kwargs) + +def to_clipboard(obj): + warn("to_clipboard is now an object level method accessible via obj.to_clipboard()", FutureWarning) + clipboard.to_clipboard(obj) + from pandas.io import excel class ExcelWriter(excel.ExcelWriter): def __init__(self, path): - from warnings import warn warn("ExcelWriter can now be imported from: pandas.io.excel", FutureWarning) super(ExcelWriter, self).__init__(path) class ExcelFile(excel.ExcelFile): def __init__(self, path_or_buf, kind=None, **kwds): - from warnings import warn warn("ExcelFile can now be imported from: pandas.io.excel", FutureWarning) super(ExcelFile, self).__init__(path_or_buf, kind=kind, **kwds)
moved clipboard support to io.clipboard (from io.parsers)
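One wrinkle in the shims the patch leaves behind in `parsers.py`: they call through to `clipboard.read_clipboard(**kwargs)` without `return`, so the deprecated path silently yields `None`. A minimal sketch of the shim pattern with the return in place (the `_clipboard_read_clipboard` stand-in is hypothetical, playing the role of the relocated pandas function):

```python
import warnings

def _clipboard_read_clipboard(**kwargs):
    # placeholder for the relocated pandas.io.clipboard.read_clipboard
    return "parsed frame"

def read_clipboard(**kwargs):
    """Deprecated alias kept at the old import location for one release."""
    warnings.warn("read_clipboard is now a top-level accessible via "
                  "pandas.read_clipboard", FutureWarning, stacklevel=2)
    return _clipboard_read_clipboard(**kwargs)  # keep the return value!
```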
https://api.github.com/repos/pandas-dev/pandas/pulls/3824
2013-06-09T23:15:25Z
2013-06-09T23:41:23Z
2013-06-09T23:41:23Z
2014-07-16T08:13:04Z
ENH: DataFrame.corr(method='spearman') is cythonized.
diff --git a/doc/source/v0.11.1.txt b/doc/source/v0.11.1.txt index 76ae85a53102b..8dbf2e86e972e 100644 --- a/doc/source/v0.11.1.txt +++ b/doc/source/v0.11.1.txt @@ -295,6 +295,8 @@ Enhancements - DatetimeIndexes no longer try to convert mixed-integer indexes during join operations (GH3877_) + - DataFrame corr method (spearman) is now cythonized. + Bug Fixes ~~~~~~~~~ diff --git a/pandas/algos.pyx b/pandas/algos.pyx index 836101ecafa2d..08ec707b0d96d 100644 --- a/pandas/algos.pyx +++ b/pandas/algos.pyx @@ -997,6 +997,69 @@ def nancorr(ndarray[float64_t, ndim=2] mat, cov=False, minp=None): return result +#---------------------------------------------------------------------- +# Pairwise Spearman correlation + +@cython.boundscheck(False) +@cython.wraparound(False) +def nancorr_spearman(ndarray[float64_t, ndim=2] mat, Py_ssize_t minp=1): + cdef: + Py_ssize_t i, j, xi, yi, N, K + ndarray[float64_t, ndim=2] result + ndarray[float64_t, ndim=1] maskedx + ndarray[float64_t, ndim=1] maskedy + ndarray[uint8_t, ndim=2] mask + int64_t nobs = 0 + float64_t vx, vy, sumx, sumxx, sumyy, mean, divisor + + N, K = (<object> mat).shape + + result = np.empty((K, K), dtype=np.float64) + mask = np.isfinite(mat).view(np.uint8) + + for xi in range(K): + for yi in range(xi + 1): + nobs = 0 + for i in range(N): + if mask[i, xi] and mask[i, yi]: + nobs += 1 + + if nobs < minp: + result[xi, yi] = result[yi, xi] = np.NaN + else: + maskedx = np.empty(nobs, dtype=np.float64) + maskedy = np.empty(nobs, dtype=np.float64) + j = 0 + for i in range(N): + if mask[i, xi] and mask[i, yi]: + maskedx[j] = mat[i, xi] + maskedy[j] = mat[i, yi] + j += 1 + maskedx = rank_1d_float64(maskedx) + maskedy = rank_1d_float64(maskedy) + + mean = (nobs + 1) / 2. 
+ + # now the cov numerator + sumx = sumxx = sumyy = 0 + + for i in range(nobs): + vx = maskedx[i] - mean + vy = maskedy[i] - mean + + sumx += vx * vy + sumxx += vx * vx + sumyy += vy * vy + + divisor = sqrt(sumxx * sumyy) + + if divisor != 0: + result[xi, yi] = result[yi, xi] = sumx / divisor + else: + result[xi, yi] = result[yi, xi] = np.NaN + + return result + #---------------------------------------------------------------------- # Rolling variance diff --git a/pandas/core/frame.py b/pandas/core/frame.py index 5e3d3e95d8e56..f0145364363ac 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -1528,7 +1528,7 @@ def to_stata(self, fname, convert_dates=None, write_index=True, encoding="latin- from pandas.io.stata import StataWriter writer = StataWriter(fname,self,convert_dates=convert_dates, encoding=encoding, byteorder=byteorder) writer.write_file() - + def to_sql(self, name, con, flavor='sqlite', if_exists='fail', **kwargs): """ Write records stored in a DataFrame to a SQL database. @@ -4711,7 +4711,7 @@ def merge(self, right, how='inner', on=None, left_on=None, right_on=None, #---------------------------------------------------------------------- # Statistical methods, etc. - def corr(self, method='pearson', min_periods=None): + def corr(self, method='pearson', min_periods=1): """ Compute pairwise correlation of columns, excluding NA/null values @@ -4724,7 +4724,7 @@ def corr(self, method='pearson', min_periods=None): min_periods : int, optional Minimum number of observations required per pair of columns to have a valid result. 
Currently only available for pearson - correlation + and spearman correlation Returns ------- @@ -4737,6 +4737,9 @@ def corr(self, method='pearson', min_periods=None): if method == 'pearson': correl = _algos.nancorr(com._ensure_float64(mat), minp=min_periods) + elif method == 'spearman': + correl = _algos.nancorr_spearman(com._ensure_float64(mat), + minp=min_periods) else: if min_periods is None: min_periods = 1 diff --git a/vb_suite/stat_ops.py b/vb_suite/stat_ops.py index 86e879d0be523..f01a867ea2893 100644 --- a/vb_suite/stat_ops.py +++ b/vb_suite/stat_ops.py @@ -82,3 +82,12 @@ stats_rolling_mean = Benchmark('rolling_mean(arr, 100)', setup, start_date=datetime(2011, 6, 1)) + +# spearman correlation + +setup = common_setup + """ +df = DataFrame(np.random.randn(1000, 300)) +""" + +stats_corr_spearman = Benchmark("df.corr(method='spearman')", setup, + start_date=datetime(2011, 12, 4))
It should be more than 10 times faster than the old version.
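Spearman correlation is just Pearson correlation computed on ranks (ties sharing their average rank), which is what the `nancorr_spearman` kernel above does per column pair. A plain-Python sketch of the same math, using the kernel's `(n + 1) / 2` mean-of-ranks shortcut but without the NaN masking:

```python
def _ranks(xs):
    # 1-based average ranks; tied values share the mean of their positions
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    # Pearson correlation applied to the rank vectors
    rx, ry = _ranks(x), _ranks(y)
    mean = (len(x) + 1) / 2  # ranks 1..n always have this mean
    sumx = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    sumxx = sum((a - mean) ** 2 for a in rx)
    sumyy = sum((b - mean) ** 2 for b in ry)
    divisor = (sumxx * sumyy) ** 0.5
    return sumx / divisor if divisor else float("nan")
```

`spearman([1, 2, 3], [1, 4, 9])` is `1.0`: any monotonic relationship ranks identically, which is the property that distinguishes it from Pearson.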
https://api.github.com/repos/pandas-dev/pandas/pulls/3823
2013-06-09T18:47:50Z
2013-06-17T13:29:07Z
2013-06-17T13:29:07Z
2014-06-20T06:53:44Z
Change Finance Options signatures and deprecate year/month parameters
diff --git a/doc/source/release.rst b/doc/source/release.rst index 07489a140c018..df09d2f5a50ba 100644 --- a/doc/source/release.rst +++ b/doc/source/release.rst @@ -156,6 +156,9 @@ pandas 0.11.1 ``load`` will give deprecation warning. - the ``method`` and ``axis`` arguments of ``DataFrame.replace()`` are deprecated + - set FutureWarning to require data_source, and to replace year/month with + expiry date in pandas.io options. This is in preparation to add options + data from google (:issue:`3822`) - the ``method`` and ``axis`` arguments of ``DataFrame.replace()`` are deprecated - Implement ``__nonzero__`` for ``NDFrame`` objects (:issue:`3691`, :issue:`3696`) diff --git a/pandas/io/data.py b/pandas/io/data.py index a97c77c207a4c..03ccde6a2fcc1 100644 --- a/pandas/io/data.py +++ b/pandas/io/data.py @@ -10,6 +10,7 @@ import urllib import urllib2 import time +import warnings from zipfile import ZipFile from pandas.util.py3compat import StringIO, BytesIO, bytes_to_str @@ -111,12 +112,7 @@ def get_quote_yahoo(symbols): urlStr = 'http://finance.yahoo.com/d/quotes.csv?s=%s&f=%s' % ( sym_list, request) - try: - lines = urllib2.urlopen(urlStr).readlines() - except Exception, e: - s = "Failed to download:\n{0}".format(e) - print (s) - return None + lines = urllib2.urlopen(urlStr).readlines() for line in lines: fields = line.decode('utf-8').strip().split(',') @@ -539,7 +535,7 @@ def _parse_options_data(table): class Options(object): """ - This class fetches call/put data for a given stock/exipry month. + This class fetches call/put data for a given stock/expiry month. It is instantiated with a string representing the ticker symbol. 
@@ -553,7 +549,7 @@ class Options(object): Examples -------- # Instantiate object with ticker - >>> aapl = Options('aapl') + >>> aapl = Options('aapl', 'yahoo') # Fetch September 2012 call data >>> calls = aapl.get_call_data(9, 2012) @@ -576,24 +572,25 @@ class Options(object): """ - def __init__(self, symbol): + def __init__(self, symbol, data_source=None): """ Instantiates options_data with a ticker saved as symbol """ self.symbol = str(symbol).upper() + if (data_source is None): + warnings.warn("Options(symbol) is deprecated, use Options(symbol, data_source) instead", + FutureWarning) + data_source = "yahoo" + if (data_source != "yahoo"): + raise NotImplementedError("currently only yahoo supported") - def get_options_data(self, month=None, year=None): + def get_options_data(self, month=None, year=None, expiry=None): """ Gets call/put data for the stock with the expiration data in the given month and year Parameters ---------- - month: number, int, optional(default=None) - The month the options expire. This should be either 1 or 2 - digits. - - year: number, int, optional(default=None) - The year the options expire. This sould be a 4 digit int. - + expiry: datetime.date, optional(default=None) + The date when options expire (defaults to current month) Returns ------- @@ -609,7 +606,7 @@ def get_options_data(self, month=None, year=None): When called, this function will add instance variables named calls and puts. See the following example: - >>> aapl = Options('aapl') # Create object + >>> aapl = Options('aapl', 'yahoo') # Create object >>> aapl.calls # will give an AttributeError >>> aapl.get_options_data() # Get data and set ivars >>> aapl.calls # Doesn't throw AttributeError @@ -621,6 +618,8 @@ def get_options_data(self, month=None, year=None): representations of the month and year for the expiry of the options. 
""" + year, month = self._try_parse_dates(year,month,expiry) + from lxml.html import parse if month and year: # try to get specified month from yahoo finance @@ -659,19 +658,15 @@ def get_options_data(self, month=None, year=None): return [call_data, put_data] - def get_call_data(self, month=None, year=None): + def get_call_data(self, month=None, year=None, expiry=None): """ Gets call/put data for the stock with the expiration data in the given month and year Parameters ---------- - month: number, int, optional(default=None) - The month the options expire. This should be either 1 or 2 - digits. - - year: number, int, optional(default=None) - The year the options expire. This sould be a 4 digit int. + expiry: datetime.date, optional(default=None) + The date when options expire (defaults to current month) Returns ------- @@ -683,7 +678,7 @@ def get_call_data(self, month=None, year=None): When called, this function will add instance variables named calls and puts. See the following example: - >>> aapl = Options('aapl') # Create object + >>> aapl = Options('aapl', 'yahoo') # Create object >>> aapl.calls # will give an AttributeError >>> aapl.get_call_data() # Get data and set ivars >>> aapl.calls # Doesn't throw AttributeError @@ -694,6 +689,8 @@ def get_call_data(self, month=None, year=None): repsectively, two digit representations of the month and year for the expiry of the options. """ + year, month = self._try_parse_dates(year,month,expiry) + from lxml.html import parse if month and year: # try to get specified month from yahoo finance @@ -727,19 +724,15 @@ def get_call_data(self, month=None, year=None): return call_data - def get_put_data(self, month=None, year=None): + def get_put_data(self, month=None, year=None, expiry=None): """ Gets put data for the stock with the expiration data in the given month and year Parameters ---------- - month: number, int, optional(default=None) - The month the options expire. This should be either 1 or 2 - digits. 
- - year: number, int, optional(default=None) - The year the options expire. This sould be a 4 digit int. + expiry: datetime.date, optional(default=None) + The date when options expire (defaults to current month) Returns ------- @@ -764,6 +757,8 @@ def get_put_data(self, month=None, year=None): repsectively, two digit representations of the month and year for the expiry of the options. """ + year, month = self._try_parse_dates(year,month,expiry) + from lxml.html import parse if month and year: # try to get specified month from yahoo finance @@ -798,7 +793,7 @@ def get_put_data(self, month=None, year=None): return put_data def get_near_stock_price(self, above_below=2, call=True, put=False, - month=None, year=None): + month=None, year=None, expiry=None): """ Cuts the data frame opt_df that is passed in to only take options that are near the current stock price. @@ -810,19 +805,15 @@ def get_near_stock_price(self, above_below=2, call=True, put=False, should be taken call: bool - Tells the function weather or not it should be using + Tells the function whether or not it should be using self.calls put: bool Tells the function weather or not it should be using self.puts - month: number, int, optional(default=None) - The month the options expire. This should be either 1 or 2 - digits. - - year: number, int, optional(default=None) - The year the options expire. This sould be a 4 digit int. + expiry: datetime.date, optional(default=None) + The date when options expire (defaults to current month) Returns ------- @@ -831,6 +822,8 @@ def get_near_stock_price(self, above_below=2, call=True, put=False, desired. 
If there isn't data as far out as the user has asked for then """ + year, month = self._try_parse_dates(year,month,expiry) + price = float(get_quote_yahoo([self.symbol])['last']) if call: @@ -844,13 +837,6 @@ def get_near_stock_price(self, above_below=2, call=True, put=False, except AttributeError: df_c = self.get_call_data(month, year) - # NOTE: For some reason the put commas in all values >1000. We remove - # them here - df_c.Strike = df_c.Strike.astype(str).apply(lambda x: \ - x.replace(',', '')) - # Now make sure Strike column has dtype float - df_c.Strike = df_c.Strike.astype(float) - start_index = np.where(df_c['Strike'] > price)[0][0] get_range = range(start_index - above_below, @@ -872,13 +858,6 @@ def get_near_stock_price(self, above_below=2, call=True, put=False, except AttributeError: df_p = self.get_put_data(month, year) - # NOTE: For some reason the put commas in all values >1000. We remove - # them here - df_p.Strike = df_p.Strike.astype(str).apply(lambda x: \ - x.replace(',', '')) - # Now make sure Strike column has dtype float - df_p.Strike = df_p.Strike.astype(float) - start_index = np.where(df_p.Strike > price)[0][0] get_range = range(start_index - above_below, @@ -897,11 +876,21 @@ def get_near_stock_price(self, above_below=2, call=True, put=False, else: return chop_put + def _try_parse_dates(self, year, month, expiry): + if year is not None or month is not None: + warnings.warn("month, year arguments are deprecated, use expiry instead", + FutureWarning) + + if expiry is not None: + year=expiry.year + month=expiry.month + return year, month + def get_forward_data(self, months, call=True, put=False, near=False, above_below=2): """ Gets either call, put, or both data for months starting in the current - month and going out in the future a spcified amount of time. + month and going out in the future a specified amount of time. 
Parameters ---------- @@ -933,6 +922,7 @@ def get_forward_data(self, months, call=True, put=False, near=False, If asked for, a DataFrame containing put data from the current month to the current month plus months. """ + warnings.warn("get_forward_data() is deprecated", FutureWarning) in_months = range(cur_month, cur_month + months + 1) in_years = [cur_year] * (months + 1) diff --git a/pandas/io/tests/test_yahoo.py b/pandas/io/tests/test_yahoo.py index 712475f76f5ed..1edb29efd00b9 100644 --- a/pandas/io/tests/test_yahoo.py +++ b/pandas/io/tests/test_yahoo.py @@ -1,6 +1,7 @@ import unittest import nose from datetime import datetime +import warnings import pandas as pd import pandas.io.data as web @@ -96,6 +97,61 @@ def test_get_data(self): t= np.array(pan) assert np.issubdtype(t.dtype, np.floating) + @network + def test_options(self): + try: + import lxml + except ImportError: + raise nose.SkipTest + # aapl has monthlies + aapl = web.Options('aapl', 'yahoo') + today = datetime.today() + year = today.year + month = today.month+1 + if (month>12): + year = year +1 + month = 1 + expiry=datetime(year, month, 1) + (calls, puts) = aapl.get_options_data(expiry=expiry) + assert len(calls)>1 + assert len(puts)>1 + (calls, puts) = aapl.get_near_stock_price(call=True, put=True, expiry=expiry) + assert len(calls)==5 + assert len(puts)==5 + calls = aapl.get_call_data(expiry=expiry) + assert len(calls)>1 + puts = aapl.get_put_data(expiry=expiry) + assert len(puts)>1 + + @network + def test_options_warnings(self): + try: + import lxml + except ImportError: + raise nose.SkipTest + with warnings.catch_warnings(record=True) as w: + warnings.resetwarnings() + # Cause all warnings to always be triggered. 
+ warnings.simplefilter("always") + # aapl has monthlies + aapl = web.Options('aapl') + today = datetime.today() + year = today.year + month = today.month+1 + if (month>12): + year = year +1 + month = 1 + (calls, puts) = aapl.get_options_data(month=month, year=year) + (calls, puts) = aapl.get_near_stock_price(call=True, put=True, month=month, year=year) + calls = aapl.get_call_data(month=month, year=year) + puts = aapl.get_put_data(month=month, year=year) + print(w) + assert len(w) == 5 + assert "deprecated" in str(w[0].message) + assert "deprecated" in str(w[1].message) + assert "deprecated" in str(w[2].message) + assert "deprecated" in str(w[3].message) + assert "deprecated" in str(w[4].message) if __name__ == '__main__': nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
#3817
https://api.github.com/repos/pandas-dev/pandas/pulls/3822
2013-06-09T17:17:44Z
2013-06-22T19:02:18Z
2013-06-22T19:02:18Z
2014-06-15T12:00:55Z
FIX py3ing some print statements
diff --git a/examples/regressions.py b/examples/regressions.py index 2d21a0ece58c3..2203165825ccb 100644 --- a/examples/regressions.py +++ b/examples/regressions.py @@ -31,7 +31,7 @@ def makeSeries(): model = ols(y=Y, x=X) -print model +print (model) #------------------------------------------------------------------------------- # Panel regression @@ -48,4 +48,4 @@ def makeSeries(): model = ols(y=Y, x=data) -print panelModel +print (panelModel) diff --git a/pandas/__init__.py b/pandas/__init__.py index 62de9a10e729b..a0edb397c28c1 100644 --- a/pandas/__init__.py +++ b/pandas/__init__.py @@ -3,10 +3,17 @@ __docformat__ = 'restructuredtext' try: - from pandas import hashtable, tslib, lib -except ImportError as e: # pragma: no cover - module = str(e).lstrip('cannot import name ') # hack but overkill to use re - raise ImportError("C extensions: {0} not built".format(module)) + from . import hashtable, tslib, lib +except Exception: # pragma: no cover + import sys + e = sys.exc_info()[1] # Py25 and Py3 current exception syntax conflict + print (e) + if 'No module named lib' in str(e): + raise ImportError('C extensions not built: if you installed already ' + 'verify that you are not importing from the source ' + 'directory') + else: + raise from datetime import datetime import numpy as np diff --git a/pandas/core/config.py b/pandas/core/config.py index e8403164ac1b9..ae7c71d082a89 100644 --- a/pandas/core/config.py +++ b/pandas/core/config.py @@ -154,7 +154,7 @@ def _describe_option(pat='', _print_desc=True): s += _build_option_description(k) if _print_desc: - print s + print (s) else: return s @@ -631,7 +631,7 @@ def pp(name, ks): ls += pp(k, ks) s = '\n'.join(ls) if _print: - print s + print (s) else: return s diff --git a/pandas/core/format.py b/pandas/core/format.py index 40d80e91f0264..b1f7a2a8964b9 100644 --- a/pandas/core/format.py +++ b/pandas/core/format.py @@ -1899,4 +1899,4 @@ def _binify(cols, line_width): 1134250., 1219550., 855736.85, 1042615.4286, 
722621.3043, 698167.1818, 803750.]) fmt = FloatArrayFormatter(arr, digits=7) - print fmt.get_result() + print (fmt.get_result()) diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py index d15dcc1510577..43def5047197a 100644 --- a/pandas/core/groupby.py +++ b/pandas/core/groupby.py @@ -1420,7 +1420,7 @@ def aggregate(self, func_or_funcs, *args, **kwargs): ret = Series(result, index=index) if not self.as_index: # pragma: no cover - print 'Warning, ignoring as_index=True' + print ('Warning, ignoring as_index=True') return ret diff --git a/pandas/io/auth.py b/pandas/io/auth.py index 5e64b795b8885..6da497687cf25 100644 --- a/pandas/io/auth.py +++ b/pandas/io/auth.py @@ -55,7 +55,7 @@ def process_flags(flags=[]): try: FLAGS(flags) except gflags.FlagsError, e: - print '%s\nUsage: %s ARGS\n%s' % (e, str(flags), FLAGS) + print ('%s\nUsage: %s ARGS\n%s' % (e, str(flags), FLAGS)) sys.exit(1) # Set the logging according to the command-line flag. diff --git a/pandas/io/data.py b/pandas/io/data.py index 8bc3df561cadb..a97c77c207a4c 100644 --- a/pandas/io/data.py +++ b/pandas/io/data.py @@ -115,7 +115,7 @@ def get_quote_yahoo(symbols): lines = urllib2.urlopen(urlStr).readlines() except Exception, e: s = "Failed to download:\n{0}".format(e) - print s + print (s) return None for line in lines: @@ -467,7 +467,7 @@ def get_data_fred(name=None, start=dt.datetime(2010, 1, 1), start, end = _sanitize_dates(start, end) if(name is None): - print "Need to provide a name" + print ("Need to provide a name") return None fred_URL = "http://research.stlouisfed.org/fred2/series/" diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py index 054363d8cda06..e9088d68d73fa 100644 --- a/pandas/io/parsers.py +++ b/pandas/io/parsers.py @@ -508,7 +508,7 @@ def _clean_options(self, options, engine): sep = options['delimiter'] if (sep is None and not options['delim_whitespace']): if engine == 'c': - print 'Using Python parser to sniff delimiter' + print ('Using Python parser to sniff delimiter') 
engine = 'python' elif sep is not None and len(sep) > 1: # wait until regex engine integrated @@ -867,7 +867,7 @@ def _convert_to_ndarrays(self, dct, na_values, na_fvalues, verbose=False, coerce_type) result[c] = cvals if verbose and na_count: - print 'Filled %d NA values in column %s' % (na_count, str(c)) + print ('Filled %d NA values in column %s' % (na_count, str(c))) return result def _convert_types(self, values, na_values, try_num_bool=True): diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py index 83e46fc949a4d..6cfbfd0f2d60a 100644 --- a/pandas/io/pytables.py +++ b/pandas/io/pytables.py @@ -386,7 +386,7 @@ def open(self, mode='a', warn=True): self._handle = h5_open(self._path, self._mode) except IOError, e: # pragma: no cover if 'can not be written' in str(e): - print 'Opening %s in read-only mode' % self._path + print ('Opening %s in read-only mode' % self._path) self._handle = h5_open(self._path, 'r') else: raise diff --git a/pandas/io/sql.py b/pandas/io/sql.py index 4a1cac8a60e30..68dff479a5015 100644 --- a/pandas/io/sql.py +++ b/pandas/io/sql.py @@ -51,7 +51,7 @@ def execute(sql, con, retry=True, cur=None, params=None): except Exception: # pragma: no cover pass - print 'Error on sql %s' % sql + print ('Error on sql %s' % sql) raise @@ -94,7 +94,7 @@ def tquery(sql, con=None, cur=None, retry=True): except Exception, e: excName = e.__class__.__name__ if excName == 'OperationalError': # pragma: no cover - print 'Failed to commit, may need to restart interpreter' + print ('Failed to commit, may need to restart interpreter') else: raise @@ -128,7 +128,7 @@ def uquery(sql, con=None, cur=None, retry=True, params=None): traceback.print_exc() if retry: - print 'Looks like your connection failed, reconnecting...' 
+ print ('Looks like your connection failed, reconnecting...') return uquery(sql, con, retry=False) return result diff --git a/pandas/io/tests/test_html.py b/pandas/io/tests/test_html.py index d6086d822ee02..0157729044782 100644 --- a/pandas/io/tests/test_html.py +++ b/pandas/io/tests/test_html.py @@ -86,8 +86,8 @@ def test_to_html_compat(self): out = df.to_html() res = self.run_read_html(out, attrs={'class': 'dataframe'}, index_col=0)[0] - print df.dtypes - print res.dtypes + print (df.dtypes) + print (res.dtypes) assert_frame_equal(res, df) @network @@ -125,7 +125,7 @@ def test_spam(self): df2 = self.run_read_html(self.spam_data, 'Unit', infer_types=False) assert_framelist_equal(df1, df2) - print df1[0] + print (df1[0]) self.assertEqual(df1[0].ix[0, 0], 'Proximates') self.assertEqual(df1[0].columns[0], 'Nutrient') diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py index f7f77698f51f5..f348e1ddce461 100644 --- a/pandas/io/tests/test_pytables.py +++ b/pandas/io/tests/test_pytables.py @@ -987,7 +987,7 @@ def test_big_table_frame(self): rows = store.root.df.table.nrows recons = store.select('df') - print "\nbig_table frame [%s] -> %5.2f" % (rows, time.time() - x) + print ("\nbig_table frame [%s] -> %5.2f" % (rows, time.time() - x)) def test_big_table2_frame(self): # this is a really big table: 1m rows x 60 float columns, 20 string, 20 datetime @@ -995,7 +995,7 @@ def test_big_table2_frame(self): raise nose.SkipTest('no big table2 frame') # create and write a big table - print "\nbig_table2 start" + print ("\nbig_table2 start") import time start_time = time.time() df = DataFrame(np.random.randn(1000 * 1000, 60), index=xrange(int( @@ -1005,7 +1005,8 @@ def test_big_table2_frame(self): for x in xrange(20): df['datetime%03d' % x] = datetime.datetime(2001, 1, 2, 0, 0) - print "\nbig_table2 frame (creation of df) [rows->%s] -> %5.2f" % (len(df.index), time.time() - start_time) + print ("\nbig_table2 frame (creation of df) [rows->%s] -> %5.2f" 
+ % (len(df.index), time.time() - start_time)) def f(chunksize): with ensure_clean(self.path,mode='w') as store: @@ -1015,14 +1016,15 @@ def f(chunksize): for c in [10000, 50000, 250000]: start_time = time.time() - print "big_table2 frame [chunk->%s]" % c + print ("big_table2 frame [chunk->%s]" % c) rows = f(c) - print "big_table2 frame [rows->%s,chunk->%s] -> %5.2f" % (rows, c, time.time() - start_time) + print ("big_table2 frame [rows->%s,chunk->%s] -> %5.2f" + % (rows, c, time.time() - start_time)) def test_big_put_frame(self): raise nose.SkipTest('no big put frame') - print "\nbig_put start" + print ("\nbig_put start") import time start_time = time.time() df = DataFrame(np.random.randn(1000 * 1000, 60), index=xrange(int( @@ -1032,15 +1034,17 @@ def test_big_put_frame(self): for x in xrange(20): df['datetime%03d' % x] = datetime.datetime(2001, 1, 2, 0, 0) - print "\nbig_put frame (creation of df) [rows->%s] -> %5.2f" % (len(df.index), time.time() - start_time) + print ("\nbig_put frame (creation of df) [rows->%s] -> %5.2f" + % (len(df.index), time.time() - start_time)) with ensure_clean(self.path, mode='w') as store: start_time = time.time() store = HDFStore(fn, mode='w') store.put('df', df) - print df.get_dtype_counts() - print "big_put frame [shape->%s] -> %5.2f" % (df.shape, time.time() - start_time) + print (df.get_dtype_counts()) + print ("big_put frame [shape->%s] -> %5.2f" + % (df.shape, time.time() - start_time)) def test_big_table_panel(self): raise nose.SkipTest('no big table panel') @@ -1064,7 +1068,7 @@ def test_big_table_panel(self): rows = store.root.wp.table.nrows recons = store.select('wp') - print "\nbig_table panel [%s] -> %5.2f" % (rows, time.time() - x) + print ("\nbig_table panel [%s] -> %5.2f" % (rows, time.time() - x)) def test_append_diff_item_order(self): @@ -2461,10 +2465,10 @@ def test_select_as_multiple(self): expected = expected[5:] tm.assert_frame_equal(result, expected) except (Exception), detail: - print "error in 
select_as_multiple %s" % str(detail) - print "store: ", store - print "df1: ", df1 - print "df2: ", df2 + print ("error in select_as_multiple %s" % str(detail)) + print ("store: %s" % store) + print ("df1: %s" % df1) + print ("df2: %s" % df2) # test excpection for diff rows diff --git a/pandas/io/wb.py b/pandas/io/wb.py index 1a2108d069589..579da6bbc4e45 100644 --- a/pandas/io/wb.py +++ b/pandas/io/wb.py @@ -65,10 +65,10 @@ def download(country=['MX', 'CA', 'US'], indicator=['GDPPCKD', 'GDPPCKN'], bad_indicators.append(ind) # Warn if len(bad_indicators) > 0: - print 'Failed to obtain indicator(s): ' + '; '.join(bad_indicators) - print 'The data may still be available for download at http://data.worldbank.org' + print ('Failed to obtain indicator(s): %s' % '; '.join(bad_indicators)) + print ('The data may still be available for download at http://data.worldbank.org') if len(bad_countries) > 0: - print 'Invalid ISO-2 codes: ' + ' '.join(bad_countries) + print ('Invalid ISO-2 codes: %s' % ' '.join(bad_countries)) # Merge WDI series if len(data) > 0: out = reduce(lambda x, y: x.merge(y, how='outer'), data) diff --git a/pandas/rpy/common.py b/pandas/rpy/common.py index acc562925c925..92adee5bdae57 100644 --- a/pandas/rpy/common.py +++ b/pandas/rpy/common.py @@ -73,7 +73,7 @@ def _convert_array(obj): major_axis=name_list[0], minor_axis=name_list[1]) else: - print 'Cannot handle dim=%d' % len(dim) + print ('Cannot handle dim=%d' % len(dim)) else: return arr diff --git a/pandas/stats/plm.py b/pandas/stats/plm.py index 467ce6a05e1f0..e8c413ec4739c 100644 --- a/pandas/stats/plm.py +++ b/pandas/stats/plm.py @@ -56,7 +56,7 @@ def __init__(self, y, x, weights=None, intercept=True, nw_lags=None, def log(self, msg): if self._verbose: # pragma: no cover - print msg + print (msg) def _prepare_data(self): """Cleans and stacks input data into DataFrame objects diff --git a/pandas/stats/tests/test_var.py b/pandas/stats/tests/test_var.py index 282a794980979..cbaacd0e89b6e 100644 --- 
a/pandas/stats/tests/test_var.py +++ b/pandas/stats/tests/test_var.py @@ -124,10 +124,10 @@ def beta(self): return rpy.convert_robj(r.coef(self._estimate)) def summary(self, equation=None): - print r.summary(self._estimate, equation=equation) + print (r.summary(self._estimate, equation=equation)) def output(self): - print self._estimate + print (self._estimate) def estimate(self): self._estimate = r.VAR(self.rdata, p=self.p, type=self.type) @@ -144,7 +144,7 @@ def serial_test(self, lags_pt=16, type='PT.asymptotic'): return test def data_summary(self): - print r.summary(self.rdata) + print (r.summary(self.rdata)) class TestVAR(TestCase): diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py index 8b32b3a641ebb..dd2fd88945f19 100644 --- a/pandas/tests/test_frame.py +++ b/pandas/tests/test_frame.py @@ -9079,7 +9079,7 @@ def _check_stat_op(self, name, alternative, frame=None, has_skipna=True, if not ('max' in name or 'min' in name or 'count' in name): df = DataFrame({'b': date_range('1/1/2001', periods=2)}) _f = getattr(df, name) - print df + print (df) self.assertFalse(len(_f())) df['a'] = range(len(df)) diff --git a/pandas/tests/test_groupby.py b/pandas/tests/test_groupby.py index 6989d3bcae42b..8f60cb8fc6a63 100644 --- a/pandas/tests/test_groupby.py +++ b/pandas/tests/test_groupby.py @@ -499,8 +499,8 @@ def test_agg_item_by_item_raise_typeerror(self): df = DataFrame(randint(10, size=(20, 10))) def raiseException(df): - print '----------------------------------------' - print df.to_string() + print ('----------------------------------------') + print (df.to_string()) raise TypeError self.assertRaises(TypeError, df.groupby(0).agg, diff --git a/pandas/tseries/resample.py b/pandas/tseries/resample.py index 4bf0a5bf3182c..9c22ad66d4f2b 100644 --- a/pandas/tseries/resample.py +++ b/pandas/tseries/resample.py @@ -85,7 +85,7 @@ def resample(self, obj): offset = to_offset(self.freq) if offset.n > 1: if self.kind == 'period': # pragma: no cover - print 
'Warning: multiple of frequency -> timestamps' + print ('Warning: multiple of frequency -> timestamps') # Cannot have multiple of periods, convert to timestamp self.kind = 'timestamp' diff --git a/pandas/tseries/tools.py b/pandas/tseries/tools.py index 43d44702d2d5e..c39f65f95d99f 100644 --- a/pandas/tseries/tools.py +++ b/pandas/tseries/tools.py @@ -20,7 +20,7 @@ raise Exception('dateutil 2.0 incompatible with Python 2.x, you must ' 'install version 1.5 or 2.1+!') except ImportError: # pragma: no cover - print 'Please install python-dateutil via easy_install or some method!' + print ('Please install python-dateutil via easy_install or some method!') raise # otherwise a 2nd import won't show the message diff --git a/pandas/util/terminal.py b/pandas/util/terminal.py index 7b9ddfbcfc8e6..3b5f893d1a0b3 100644 --- a/pandas/util/terminal.py +++ b/pandas/util/terminal.py @@ -117,4 +117,4 @@ def ioctl_GWINSZ(fd): if __name__ == "__main__": sizex, sizey = get_terminal_size() - print 'width =', sizex, 'height =', sizey + print ('width = %s height = %s' % (sizex, sizey))
I just grepped for where there were Python 2 print statements. There are still some in vbench/scripts/ez_setup and sphinxext/src (as well as in the rst docs), but these are all outside the main codebase. ``` grep "print [^(]" . -r ``` I don't know how to deal with the `print >>buf, empty` etc. (or whether we need to, ~~I've only labelled~~ there are only two of these)? _I found one by trying to use py3 before building, where I got a `print e` syntax error._
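On the `print >>buf, empty` question: the statement's `>>file` redirection maps onto the print function's `file=` keyword, so a `from __future__ import print_function` at the top of the module lets one spelling work on both Python 2 and 3. A minimal sketch (the buffer name is illustrative, not from the diff):

```python
from __future__ import print_function  # a no-op on Python 3
import io

buf = io.StringIO()
# Python 2 statement form was:  print >>buf, 'empty'
# Function form with the future import:
print('empty', file=buf)
print('width = %s height = %s' % (80, 24), file=buf)
print(buf.getvalue(), end='')
```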
https://api.github.com/repos/pandas-dev/pandas/pulls/3821
2013-06-09T14:18:36Z
2013-06-21T10:22:58Z
2013-06-21T10:22:58Z
2014-07-16T08:12:55Z
ENH: Add unit keyword to Timestamp and to_datetime
diff --git a/RELEASE.rst b/RELEASE.rst index 161047c478d88..0d94337ffea78 100644 --- a/RELEASE.rst +++ b/RELEASE.rst @@ -82,6 +82,9 @@ pandas 0.11.1 - Series and DataFrame hist methods now take a ``figsize`` argument (GH3834_) - DatetimeIndexes no longer try to convert mixed-integer indexes during join operations (GH3877_) + - Add ``unit`` keyword to ``Timestamp`` and ``to_datetime`` to enable passing of + integers or floats that are in an epoch unit of ``s, ms, us, ns`` + (e.g. unix timestamps or epoch ``s``, with fracional seconds allowed) (GH3540_) **API Changes** @@ -264,6 +267,7 @@ pandas 0.11.1 .. _GH3499: https://github.com/pydata/pandas/issues/3499 .. _GH3495: https://github.com/pydata/pandas/issues/3495 .. _GH3492: https://github.com/pydata/pandas/issues/3492 +.. _GH3540: https://github.com/pydata/pandas/issues/3540 .. _GH3552: https://github.com/pydata/pandas/issues/3552 .. _GH3562: https://github.com/pydata/pandas/issues/3562 .. _GH3586: https://github.com/pydata/pandas/issues/3586 diff --git a/pandas/src/inference.pyx b/pandas/src/inference.pyx index 5343819b9fbfe..270fb01a42033 100644 --- a/pandas/src/inference.pyx +++ b/pandas/src/inference.pyx @@ -471,7 +471,7 @@ def maybe_convert_objects(ndarray[object] objects, bint try_float=0, seen_float = 1 elif util.is_datetime64_object(val): if convert_datetime: - idatetimes[i] = convert_to_tsobject(val, None).value + idatetimes[i] = convert_to_tsobject(val, None, None).value seen_datetime = 1 else: seen_object = 1 @@ -493,7 +493,7 @@ def maybe_convert_objects(ndarray[object] objects, bint try_float=0, elif PyDateTime_Check(val) or util.is_datetime64_object(val): if convert_datetime: seen_datetime = 1 - idatetimes[i] = convert_to_tsobject(val, None).value + idatetimes[i] = convert_to_tsobject(val, None, None).value else: seen_object = 1 break diff --git a/pandas/src/offsets.pyx b/pandas/src/offsets.pyx index 5868ca5210e33..1823edeb0a4d9 100644 --- a/pandas/src/offsets.pyx +++ b/pandas/src/offsets.pyx @@ -76,7 
+76,7 @@ cdef class _Offset: cpdef anchor(self, object start=None): if start is not None: self.start = start - self.ts = convert_to_tsobject(self.start) + self.ts = convert_to_tsobject(self.start, None, None) self._setup() cdef _setup(self): diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py index 51e657d1723b2..1cb986ee6cd7c 100644 --- a/pandas/tseries/index.py +++ b/pandas/tseries/index.py @@ -1204,6 +1204,9 @@ def slice_indexer(self, start=None, end=None, step=None): if isinstance(start, time) or isinstance(end, time): raise KeyError('Cannot mix time and non-time slice keys') + if isinstance(start, float) or isinstance(end, float): + raise TypeError('Cannot index datetime64 with float keys') + return Index.slice_indexer(self, start, end, step) def slice_locs(self, start=None, end=None): diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py index f5415a195db77..ac02dee335afc 100644 --- a/pandas/tseries/tests/test_timeseries.py +++ b/pandas/tseries/tests/test_timeseries.py @@ -38,6 +38,7 @@ import pandas.util.py3compat as py3compat from pandas.core.datetools import BDay import pandas.core.common as com +from pandas import concat from numpy.testing.decorators import slow @@ -171,7 +172,6 @@ def test_indexing_over_size_cutoff(self): def test_indexing_unordered(self): # GH 2437 - from pandas import concat rng = date_range(start='2011-01-01', end='2011-01-15') ts = Series(randn(len(rng)), index=rng) ts2 = concat([ts[0:4],ts[-4:],ts[4:-4]]) @@ -593,6 +593,34 @@ def test_frame_add_datetime64_col_other_units(self): self.assert_((tmp['dates'].values == ex_vals).all()) + def test_to_datetime_unit(self): + + epoch = 1370745748 + s = Series([ epoch + t for t in range(20) ]) + result = to_datetime(s,unit='s') + expected = Series([ Timestamp('2013-06-09 02:42:28') + timedelta(seconds=t) for t in range(20) ]) + assert_series_equal(result,expected) + + s = Series([ epoch + t for t in range(20) ]).astype(float) + result = 
to_datetime(s,unit='s') + expected = Series([ Timestamp('2013-06-09 02:42:28') + timedelta(seconds=t) for t in range(20) ]) + assert_series_equal(result,expected) + + s = Series([ epoch + t for t in range(20) ] + [iNaT]) + result = to_datetime(s,unit='s') + expected = Series([ Timestamp('2013-06-09 02:42:28') + timedelta(seconds=t) for t in range(20) ] + [NaT]) + assert_series_equal(result,expected) + + s = Series([ epoch + t for t in range(20) ] + [iNaT]).astype(float) + result = to_datetime(s,unit='s') + expected = Series([ Timestamp('2013-06-09 02:42:28') + timedelta(seconds=t) for t in range(20) ] + [NaT]) + assert_series_equal(result,expected) + + s = concat([Series([ epoch + t for t in range(20) ]).astype(float),Series([np.nan])],ignore_index=True) + result = to_datetime(s,unit='s') + expected = Series([ Timestamp('2013-06-09 02:42:28') + timedelta(seconds=t) for t in range(20) ] + [NaT]) + assert_series_equal(result,expected) + def test_series_ctor_datetime64(self): rng = date_range('1/1/2000 00:00:00', '1/1/2000 1:59:50', freq='10s') @@ -2691,6 +2719,61 @@ def test_basics_nanos(self): self.assert_(stamp.microsecond == 0) self.assert_(stamp.nanosecond == 500) + def test_unit(self): + def check(val,unit=None,s=1,us=0): + stamp = Timestamp(val, unit=unit) + self.assert_(stamp.year == 2000) + self.assert_(stamp.month == 1) + self.assert_(stamp.day == 1) + self.assert_(stamp.hour == 1) + self.assert_(stamp.minute == 1) + self.assert_(stamp.second == s) + self.assert_(stamp.microsecond == us) + self.assert_(stamp.nanosecond == 0) + + val = Timestamp('20000101 01:01:01').value + + check(val) + check(val/1000L,unit='us') + check(val/1000000L,unit='ms') + check(val/1000000000L,unit='s') + + # using truediv, so these are like floats + if py3compat.PY3: + check((val+500000)/1000000000L,unit='s',us=500) + check((val+500000000)/1000000000L,unit='s',us=500000) + check((val+500000)/1000000L,unit='ms',us=500) + + # get chopped in py2 + else: + 
check((val+500000)/1000000000L,unit='s') + check((val+500000000)/1000000000L,unit='s') + check((val+500000)/1000000L,unit='ms') + + # ok + check((val+500000)/1000L,unit='us',us=500) + check((val+500000000)/1000000L,unit='ms',us=500000) + + # floats + check(val/1000.0 + 5,unit='us',us=5) + check(val/1000.0 + 5000,unit='us',us=5000) + check(val/1000000.0 + 0.5,unit='ms',us=500) + check(val/1000000.0 + 0.005,unit='ms',us=5) + check(val/1000000000.0 + 0.5,unit='s',us=500000) + + # nan + result = Timestamp(np.nan) + self.assert_(result is NaT) + + result = Timestamp(None) + self.assert_(result is NaT) + + result = Timestamp(iNaT) + self.assert_(result is NaT) + + result = Timestamp(NaT) + self.assert_(result is NaT) + def test_comparison(self): # 5-18-2012 00:00:00.000 stamp = 1337299200000000000L diff --git a/pandas/tseries/tools.py b/pandas/tseries/tools.py index 62ee19da6b845..90bc0beb8eb84 100644 --- a/pandas/tseries/tools.py +++ b/pandas/tseries/tools.py @@ -50,7 +50,7 @@ def _maybe_get_tz(tz): def to_datetime(arg, errors='ignore', dayfirst=False, utc=None, box=True, - format=None, coerce=False): + format=None, coerce=False, unit='ns'): """ Convert argument to datetime @@ -69,6 +69,8 @@ def to_datetime(arg, errors='ignore', dayfirst=False, utc=None, box=True, format : string, default None strftime to parse time, eg "%d/%m/%Y" coerce : force errors to NaT (False by default) + unit : unit of the arg (s,ms,us,ns) denote the unit in epoch + (e.g. 
a unix timestamp), which is an integer/float number Returns ------- @@ -86,7 +88,7 @@ def _convert_f(arg): else: result = tslib.array_to_datetime(arg, raise_=errors == 'raise', utc=utc, dayfirst=dayfirst, - coerce=coerce) + coerce=coerce, unit=unit) if com.is_datetime64_dtype(result) and box: result = DatetimeIndex(result, tz='utc' if utc else None) return result diff --git a/pandas/tslib.pxd b/pandas/tslib.pxd index 3e7a6ef615e00..a70f9883c5bb1 100644 --- a/pandas/tslib.pxd +++ b/pandas/tslib.pxd @@ -1,3 +1,3 @@ from numpy cimport ndarray, int64_t -cdef convert_to_tsobject(object, object) +cdef convert_to_tsobject(object, object, object) diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx index abec45b52a363..ec11de7392680 100644 --- a/pandas/tslib.pyx +++ b/pandas/tslib.pyx @@ -131,21 +131,17 @@ class Timestamp(_Timestamp): note: by definition there cannot be any tz info on the ordinal itself """ return cls(datetime.fromordinal(ordinal),offset=offset,tz=tz) - def __new__(cls, object ts_input, object offset=None, tz=None): + def __new__(cls, object ts_input, object offset=None, tz=None, unit=None): cdef _TSObject ts cdef _Timestamp ts_base - if PyFloat_Check(ts_input): - # to do, do we want to support this, ie with fractional seconds? 
- raise TypeError("Cannot convert a float to datetime") - if util.is_string_object(ts_input): try: ts_input = parse_date(ts_input) except Exception: pass - ts = convert_to_tsobject(ts_input, tz) + ts = convert_to_tsobject(ts_input, tz, unit) if ts.value == NPY_NAT: return NaT @@ -311,7 +307,7 @@ class Timestamp(_Timestamp): if self.nanosecond != 0 and warn: print 'Warning: discarding nonzero nanoseconds' - ts = convert_to_tsobject(self, self.tzinfo) + ts = convert_to_tsobject(self, self.tzinfo, None) return datetime(ts.dts.year, ts.dts.month, ts.dts.day, ts.dts.hour, ts.dts.min, ts.dts.sec, @@ -530,7 +526,7 @@ cdef class _Timestamp(datetime): cdef: pandas_datetimestruct dts _TSObject ts - ts = convert_to_tsobject(self, self.tzinfo) + ts = convert_to_tsobject(self, self.tzinfo, None) dts = ts.dts return datetime(dts.year, dts.month, dts.day, dts.hour, dts.min, dts.sec, @@ -623,12 +619,13 @@ cpdef _get_utcoffset(tzinfo, obj): return tzinfo.utcoffset(obj) # helper to extract datetime and int64 from several different possibilities -cdef convert_to_tsobject(object ts, object tz): +cdef convert_to_tsobject(object ts, object tz, object unit): """ Extract datetime and int64 from any of: - - np.int64 + - np.int64 (with unit providing a possible modifier) - np.datetime64 - - python int or long object + - a float (with unit providing a possible modifier) + - python int or long object (with unit providing a possible modifier) - iso8601 string object - python datetime object - another timestamp object @@ -643,12 +640,25 @@ cdef convert_to_tsobject(object ts, object tz): obj = _TSObject() - if is_datetime64_object(ts): + if ts is None or ts is NaT: + obj.value = NPY_NAT + elif is_datetime64_object(ts): obj.value = _get_datetime64_nanos(ts) pandas_datetime_to_datetimestruct(obj.value, PANDAS_FR_ns, &obj.dts) elif is_integer_object(ts): - obj.value = ts - pandas_datetime_to_datetimestruct(ts, PANDAS_FR_ns, &obj.dts) + if ts == NPY_NAT: + obj.value = NPY_NAT + else: + ts = ts * 
cast_from_unit(unit,None) + obj.value = ts + pandas_datetime_to_datetimestruct(ts, PANDAS_FR_ns, &obj.dts) + elif util.is_float_object(ts): + if ts != ts or ts == NPY_NAT: + obj.value = NPY_NAT + else: + ts = cast_from_unit(unit,ts) + obj.value = ts + pandas_datetime_to_datetimestruct(ts, PANDAS_FR_ns, &obj.dts) elif util.is_string_object(ts): if ts in _nat_strings: obj.value = NPY_NAT @@ -699,7 +709,7 @@ cdef convert_to_tsobject(object ts, object tz): elif PyDate_Check(ts): # Keep the converter same as PyDateTime's ts = datetime.combine(ts, datetime_time()) - return convert_to_tsobject(ts, tz) + return convert_to_tsobject(ts, tz, None) else: raise ValueError("Could not construct Timestamp from argument %s" % type(ts)) @@ -804,7 +814,7 @@ def datetime_to_datetime64(ndarray[object] values): else: inferred_tz = _get_zone(val.tzinfo) - _ts = convert_to_tsobject(val, None) + _ts = convert_to_tsobject(val, None, None) iresult[i] = _ts.value _check_dts_bounds(iresult[i], &_ts.dts) else: @@ -819,7 +829,7 @@ def datetime_to_datetime64(ndarray[object] values): def array_to_datetime(ndarray[object] values, raise_=False, dayfirst=False, - format=None, utc=None, coerce=False): + format=None, utc=None, coerce=False, unit=None): cdef: Py_ssize_t i, n = len(values) object val @@ -828,6 +838,7 @@ def array_to_datetime(ndarray[object] values, raise_=False, dayfirst=False, pandas_datetimestruct dts bint utc_convert = bool(utc) _TSObject _ts + int64_t m = cast_from_unit(unit,None) from dateutil.parser import parse @@ -841,7 +852,7 @@ def array_to_datetime(ndarray[object] values, raise_=False, dayfirst=False, elif PyDateTime_Check(val): if val.tzinfo is not None: if utc_convert: - _ts = convert_to_tsobject(val, None) + _ts = convert_to_tsobject(val, None, unit) iresult[i] = _ts.value _check_dts_bounds(iresult[i], &_ts.dts) else: @@ -861,7 +872,15 @@ def array_to_datetime(ndarray[object] values, raise_=False, dayfirst=False, # if we are coercing, dont' allow integers elif 
util.is_integer_object(val) and not coerce: - iresult[i] = val + if val == iNaT: + iresult[i] = iNaT + else: + iresult[i] = val*m + elif util.is_float_object(val) and not coerce: + if val != val or val == iNaT: + iresult[i] = iNaT + else: + iresult[i] = cast_from_unit(unit,val) else: try: if len(val) == 0: @@ -1246,6 +1265,31 @@ cdef inline _get_datetime64_nanos(object val): else: return ival +cdef inline int64_t cast_from_unit(object unit, object ts): + """ return a casting of the unit represented to nanoseconds + round the fractional part of a float to our precision, p """ + if unit == 's': + m = 1000000000L + p = 6 + elif unit == 'ms': + m = 1000000L + p = 3 + elif unit == 'us': + m = 1000L + p = 0 + else: + m = 1L + p = 0 + + # just give me the unit back + if ts is None: + return m + + # cast the unit, multiply base/frace separately + # to avoid precision issues from float -> int + base = <int64_t> ts + frac = ts-base + return <int64_t> (base*m) + <int64_t> (round(frac,p)*m) def cast_to_nanoseconds(ndarray arr): cdef: @@ -1286,7 +1330,7 @@ def pydt_to_i8(object pydt): cdef: _TSObject ts - ts = convert_to_tsobject(pydt, None) + ts = convert_to_tsobject(pydt, None, None) return ts.value @@ -1784,7 +1828,7 @@ def get_date_field(ndarray[int64_t] dtindex, object field): for i in range(count): if dtindex[i] == NPY_NAT: out[i] = -1; continue - ts = convert_to_tsobject(dtindex[i], None) + ts = convert_to_tsobject(dtindex[i], None, None) out[i] = ts_dayofweek(ts) return out @@ -1793,7 +1837,7 @@ def get_date_field(ndarray[int64_t] dtindex, object field): if dtindex[i] == NPY_NAT: out[i] = -1; continue pandas_datetime_to_datetimestruct(dtindex[i], PANDAS_FR_ns, &dts) - ts = convert_to_tsobject(dtindex[i], None) + ts = convert_to_tsobject(dtindex[i], None, None) isleap = is_leapyear(dts.year) isleap_prev = is_leapyear(dts.year - 1) mo_off = _month_offset[isleap, dts.month - 1] @@ -1831,7 +1875,7 @@ def get_date_field(ndarray[int64_t] dtindex, object field): cdef inline 
int m8_weekday(int64_t val): - ts = convert_to_tsobject(val, None) + ts = convert_to_tsobject(val, None, None) return ts_dayofweek(ts) cdef int64_t DAY_NS = 86400000000000LL
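The `cast_from_unit` helper added in the diff above is the heart of the change. A pure-Python sketch of the same logic (function name kept, everything else illustrative) shows why the integer base and the rounded fractional part are cast separately: a single `int(ts * m)` on a large float epoch would lose sub-second precision.

```python
def cast_from_unit(unit, ts=None):
    # nanoseconds per unit, and the decimal precision kept for the fraction
    multipliers = {'s': (1000000000, 6), 'ms': (1000000, 3), 'us': (1000, 0)}
    m, p = multipliers.get(unit, (1, 0))

    # just give me the unit multiplier back
    if ts is None:
        return m

    # cast base and fraction separately to avoid precision issues
    # from a single large float -> int conversion
    base = int(ts)
    frac = ts - base
    return base * m + int(round(frac, p) * m)
```

For example, `cast_from_unit('s', 1370745748.5)` yields the epoch value in nanoseconds with the half second intact.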
to enable passing of integers or floats that are in an epoch unit of s, ms, us, ns (e.g. unix timestamps or epoch seconds, with fractional seconds allowed) closes #3540 ``` In [5]: pd.to_datetime(Series([ 1370745748 + t for t in range(5) ]),unit='s') Out[5]: 0 2013-06-09 02:42:28 1 2013-06-09 02:42:29 2 2013-06-09 02:42:30 3 2013-06-09 02:42:31 4 2013-06-09 02:42:32 dtype: datetime64[ns] ```
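A float epoch with fractional seconds works the same way; a minimal sketch using the same epoch value as above (spelled with the modern pandas API):

```python
import pandas as pd

# 1370745748 seconds since the epoch is 2013-06-09 02:42:28 UTC (see above);
# a float carries fractional seconds through to the resulting Timestamp
ts = pd.to_datetime(1370745748.5, unit='s')
# ts is 2013-06-09 02:42:28.500000
```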
https://api.github.com/repos/pandas-dev/pandas/pulls/3818
2013-06-09T03:41:50Z
2013-06-13T19:12:08Z
2013-06-13T19:12:08Z
2014-06-18T08:55:58Z
Tag yahoo data tests as @network only
diff --git a/pandas/io/tests/test_yahoo.py b/pandas/io/tests/test_yahoo.py index b79fdad2bff9d..0e2c2022af422 100644 --- a/pandas/io/tests/test_yahoo.py +++ b/pandas/io/tests/test_yahoo.py @@ -14,7 +14,6 @@ class TestYahoo(unittest.TestCase): - @slow @network def test_yahoo(self): # asserts that yahoo is minimally working and that it throws @@ -41,14 +40,12 @@ def test_yahoo(self): raise - @slow @network def test_get_quote(self): df = web.get_quote_yahoo(pd.Series(['GOOG', 'AAPL', 'GOOG'])) assert_series_equal(df.ix[0], df.ix[2]) - @slow @network def test_get_components(self): @@ -69,7 +66,6 @@ def test_get_components(self): assert 'GOOG' in df.index assert 'AMZN' in df.index - @slow @network def test_get_data(self): import numpy as np
https://api.github.com/repos/pandas-dev/pandas/pulls/3816
2013-06-09T00:21:06Z
2013-06-10T02:25:21Z
2013-06-10T02:25:21Z
2014-07-16T08:12:50Z
DOC/BLD: Fix a bunch of doc build warnings that weren't being previously caught
diff --git a/doc/source/dsintro.rst b/doc/source/dsintro.rst index 7870bdbeb97d3..c1d034d0d8e58 100644 --- a/doc/source/dsintro.rst +++ b/doc/source/dsintro.rst @@ -482,19 +482,23 @@ column-wise: .. ipython:: python index = date_range('1/1/2000', periods=8) - df = DataFrame(randn(8, 3), index=index, - columns=['A', 'B', 'C']) + df = DataFrame(randn(8, 3), index=index, columns=list('ABC')) df type(df['A']) df - df['A'] -Technical purity aside, this case is so common in practice that supporting the -special case is preferable to the alternative of forcing the user to transpose -and do column-based alignment like so: +.. warning:: -.. ipython:: python + .. code-block:: python + + df - df['A'] + + is now deprecated and will be removed in a future release. The preferred way + to replicate this behavior is + + .. code-block:: python - (df.T - df['A']).T + df.sub(df['A'], axis=0) For explicit control over the matching and broadcasting behavior, see the section on :ref:`flexible binary operations <basics.binop>`. diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst index 3f6a4b7c59067..7f572c8c8e191 100644 --- a/doc/source/timeseries.rst +++ b/doc/source/timeseries.rst @@ -930,89 +930,103 @@ They can be both positive and negative. .. ipython:: python - from datetime import datetime, timedelta - s = Series(date_range('2012-1-1', periods=3, freq='D')) - td = Series([ timedelta(days=i) for i in range(3) ]) - df = DataFrame(dict(A = s, B = td)) - df - df['C'] = df['A'] + df['B'] - df - df.dtypes - - s - s.max() - s - datetime(2011,1,1,3,5) - s + timedelta(minutes=5) + from datetime import datetime, timedelta + s = Series(date_range('2012-1-1', periods=3, freq='D')) + td = Series([ timedelta(days=i) for i in range(3) ]) + df = DataFrame(dict(A = s, B = td)) + df + df['C'] = df['A'] + df['B'] + df + df.dtypes + + s - s.max() + s - datetime(2011,1,1,3,5) + s + timedelta(minutes=5) Getting scalar results from a ``timedelta64[ns]`` series +.. 
ipython:: python + :suppress: + + from distutils.version import LooseVersion + .. ipython:: python y = s - s[0] y - y.apply(lambda x: x.item().total_seconds()) - y.apply(lambda x: x.item().days) - -.. note:: - These operations are different in numpy 1.6.2 and in numpy >= 1.7. The ``timedelta64[ns]`` scalar - type in 1.6.2 is much like a ``datetime.timedelta``, while in 1.7 it is a nanosecond based integer. - A future version of pandas will make this transparent. + if LooseVersion(np.__version__) <= '1.6.2': + y.apply(lambda x: x.item().total_seconds()) + y.apply(lambda x: x.item().days) + else: + y.apply(lambda x: x / np.timedelta64(1, 's')) + y.apply(lambda x: x / np.timedelta64(1, 'D')) + +.. note:: - These are the equivalent operation to above in numpy >= 1.7 + As you can see from the conditional statement above, these operations are + different in numpy 1.6.2 and in numpy >= 1.7. The ``timedelta64[ns]`` scalar + type in 1.6.2 is much like a ``datetime.timedelta``, while in 1.7 it is a + nanosecond based integer. A future version of pandas will make this + transparent. - ``y.apply(lambda x: x.item()/np.timedelta64(1,'s'))`` +.. note:: - ``y.apply(lambda x: x.item()/np.timedelta64(1,'D'))`` + In numpy >= 1.7 dividing a ``timedelta64`` array by another ``timedelta64`` + array will yield an array with dtype ``np.float64``. Series of timedeltas with ``NaT`` values are supported .. ipython:: python - y = s - s.shift() - y + y = s - s.shift() + y + The can be set to ``NaT`` using ``np.nan`` analagously to datetimes .. ipython:: python - y[1] = np.nan - y + y[1] = np.nan + y Operands can also appear in a reversed order (a singluar object operated with a Series) .. ipython:: python - s.max() - s - datetime(2011,1,1,3,5) - s - timedelta(minutes=5) + s + s.max() - s + datetime(2011,1,1,3,5) - s + timedelta(minutes=5) + s Some timedelta numeric like operations are supported. .. 
ipython:: python - td - timedelta(minutes=5,seconds=5,microseconds=5) + td - timedelta(minutes=5, seconds=5, microseconds=5) ``min, max`` and the corresponding ``idxmin, idxmax`` operations are support on frames .. ipython:: python - df = DataFrame(dict(A = s - Timestamp('20120101')-timedelta(minutes=5,seconds=5), - B = s - Series(date_range('2012-1-2', periods=3, freq='D')))) - df + A = s - Timestamp('20120101') - timedelta(minutes=5, seconds=5) + B = s - Series(date_range('2012-1-2', periods=3, freq='D')) + df = DataFrame(dict(A=A, B=B)) + df - df.min() - df.min(axis=1) + df.min() + df.min(axis=1) - df.idxmin() - df.idxmax() + df.idxmin() + df.idxmax() -``min, max`` operations are support on series, these return a single element ``timedelta64[ns]`` Series (this avoids -having to deal with numpy timedelta64 issues). ``idxmin, idxmax`` are supported as well. +``min, max`` operations are support on series, these return a single element +``timedelta64[ns]`` Series (this avoids having to deal with numpy timedelta64 +issues). ``idxmin, idxmax`` are supported as well. .. ipython:: python - df.min().max() - df.min(axis=1).min() + df.min().max() + df.min(axis=1).min() - df.min().idxmax() - df.min(axis=1).idxmin() + df.min().idxmax() + df.min(axis=1).idxmin() diff --git a/doc/source/v0.10.1.txt b/doc/source/v0.10.1.txt index 3c22e9552c3a2..dafa4300af0e3 100644 --- a/doc/source/v0.10.1.txt +++ b/doc/source/v0.10.1.txt @@ -69,7 +69,7 @@ Retrieving unique values in an indexable or data column. 
import warnings with warnings.catch_warnings(): - warnings.simplefilter('ignore', category=DeprecationWarning) + warnings.simplefilter('ignore', category=UserWarning) store.unique('df','index') store.unique('df','string') diff --git a/doc/source/visualization.rst b/doc/source/visualization.rst index 63b5920bb0146..f0790396a5c39 100644 --- a/doc/source/visualization.rst +++ b/doc/source/visualization.rst @@ -5,14 +5,14 @@ :suppress: import numpy as np + from numpy.random import randn, rand, randint np.random.seed(123456) - from pandas import * + from pandas import DataFrame, Series, date_range, options import pandas.util.testing as tm - randn = np.random.randn np.set_printoptions(precision=4, suppress=True) import matplotlib.pyplot as plt plt.close('all') - options.display.mpl_style='default' + options.display.mpl_style = 'default' ************************ Plotting with matplotlib @@ -60,8 +60,7 @@ On DataFrame, ``plot`` is a convenience to plot all of the columns with labels: .. ipython:: python - df = DataFrame(randn(1000, 4), index=ts.index, - columns=['A', 'B', 'C', 'D']) + df = DataFrame(randn(1000, 4), index=ts.index, columns=list('ABCD')) df = df.cumsum() @savefig frame_plot_basic.png width=6in @@ -101,7 +100,7 @@ You can plot one column versus another using the `x` and `y` keywords in plt.figure() - df3 = DataFrame(np.random.randn(1000, 2), columns=['B', 'C']).cumsum() + df3 = DataFrame(randn(1000, 2), columns=['B', 'C']).cumsum() df3['A'] = Series(range(len(df))) @savefig df_plot_xy.png width=6in @@ -169,7 +168,7 @@ Here is the default behavior, notice how the x-axis tick labelling is performed: df.A.plot() -Using the ``x_compat`` parameter, you can suppress this bevahior: +Using the ``x_compat`` parameter, you can suppress this behavior: .. ipython:: python @@ -200,6 +199,15 @@ Targeting different subplots You can pass an ``ax`` argument to ``Series.plot`` to plot on a particular axis: +.. 
ipython:: python + :suppress: + + ts = Series(randn(1000), index=date_range('1/1/2000', periods=1000)) + ts = ts.cumsum() + + df = DataFrame(randn(1000, 4), index=ts.index, columns=list('ABCD')) + df = df.cumsum() + .. ipython:: python fig, axes = plt.subplots(nrows=2, ncols=2) @@ -210,6 +218,7 @@ You can pass an ``ax`` argument to ``Series.plot`` to plot on a particular axis: @savefig series_plot_multi.png width=6in df['D'].plot(ax=axes[1,1]); axes[1,1].set_title('D') + .. _visualization.other: Other plotting features @@ -239,7 +248,7 @@ bar plot: .. ipython:: python - df2 = DataFrame(np.random.rand(10, 4), columns=['a', 'b', 'c', 'd']) + df2 = DataFrame(rand(10, 4), columns=['a', 'b', 'c', 'd']) @savefig bar_plot_multi_ex.png width=5in df2.plot(kind='bar'); @@ -298,10 +307,10 @@ New since 0.10.0, the ``by`` keyword can be specified to plot grouped histograms .. ipython:: python - data = Series(np.random.randn(1000)) + data = Series(randn(1000)) @savefig grouped_hist.png width=6in - data.hist(by=np.random.randint(0, 4, 1000)) + data.hist(by=randint(0, 4, 1000)) .. _visualization.box: @@ -317,7 +326,7 @@ a uniform random variable on [0,1). .. ipython:: python - df = DataFrame(np.random.rand(10,5)) + df = DataFrame(rand(10,5)) plt.figure(); @savefig box_plot_ex.png width=6in @@ -328,7 +337,7 @@ groupings. For instance, .. ipython:: python - df = DataFrame(np.random.rand(10,2), columns=['Col1', 'Col2'] ) + df = DataFrame(rand(10,2), columns=['Col1', 'Col2'] ) df['X'] = Series(['A','A','A','A','A','B','B','B','B','B']) plt.figure(); @@ -341,7 +350,7 @@ columns: .. ipython:: python - df = DataFrame(np.random.rand(10,3), columns=['Col1', 'Col2', 'Col3']) + df = DataFrame(rand(10,3), columns=['Col1', 'Col2', 'Col3']) df['X'] = Series(['A','A','A','A','A','B','B','B','B','B']) df['Y'] = Series(['A','B','A','B','A','B','A','B','A','B']) @@ -361,7 +370,7 @@ Scatter plot matrix .. 
ipython:: python from pandas.tools.plotting import scatter_matrix - df = DataFrame(np.random.randn(1000, 4), columns=['a', 'b', 'c', 'd']) + df = DataFrame(randn(1000, 4), columns=['a', 'b', 'c', 'd']) @savefig scatter_matrix_kde.png width=6in scatter_matrix(df, alpha=0.2, figsize=(6, 6), diagonal='kde') @@ -378,7 +387,7 @@ setting `kind='kde'`: .. ipython:: python - ser = Series(np.random.randn(1000)) + ser = Series(randn(1000)) @savefig kde_plot.png width=6in ser.plot(kind='kde') @@ -444,7 +453,7 @@ implies that the underlying data are not random. plt.figure() - data = Series(0.1 * np.random.random(1000) + + data = Series(0.1 * rand(1000) + 0.9 * np.sin(np.linspace(-99 * np.pi, 99 * np.pi, num=1000))) @savefig lag_plot.png width=6in @@ -467,7 +476,7 @@ confidence band. plt.figure() - data = Series(0.7 * np.random.random(1000) + + data = Series(0.7 * rand(1000) + 0.3 * np.sin(np.linspace(-9 * np.pi, 9 * np.pi, num=1000))) @savefig autocorrelation_plot.png width=6in @@ -488,7 +497,7 @@ are what constitutes the bootstrap plot. from pandas.tools.plotting import bootstrap_plot - data = Series(np.random.random(1000)) + data = Series(rand(1000)) @savefig bootstrap_plot.png width=6in bootstrap_plot(data, size=50, samples=500, color='grey')
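The dsintro change above replaces the transpose idiom `(df.T - df['A']).T` with an explicit axis-aware subtraction. A minimal sketch of the preferred form (hypothetical small frame):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(6.0).reshape(3, 2), columns=list('AB'))

# subtract column A from every column, aligning on the row axis (axis=0),
# instead of the deprecated df - df['A'] or the transpose-twice dance
out = df.sub(df['A'], axis=0)
```

Column `A` of `out` is all zeros and `B` holds the per-row differences, which is exactly what the old column-broadcast special case produced.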
https://api.github.com/repos/pandas-dev/pandas/pulls/3815
2013-06-08T23:51:07Z
2013-06-09T01:56:17Z
2013-06-09T01:56:17Z
2014-07-06T07:55:09Z
Implement historical finance data from Google Finance
diff --git a/pandas/io/data.py b/pandas/io/data.py index 43178fdcfddf1..8bc3df561cadb 100644 --- a/pandas/io/data.py +++ b/pandas/io/data.py @@ -58,6 +58,10 @@ def DataReader(name, data_source=None, start=None, end=None, return get_data_yahoo(symbols=name, start=start, end=end, adjust_price=False, chunk=25, retry_count=retry_count, pause=pause) + elif(data_source == "google"): + return get_data_google(symbols=name, start=start, end=end, + adjust_price=False, chunk=25, + retry_count=retry_count, pause=pause) elif(data_source == "fred"): return get_data_fred(name=name, start=start, end=end) elif(data_source == "famafrench"): @@ -132,6 +136,9 @@ def get_quote_yahoo(symbols): return DataFrame(data, index=idx) +def get_quote_google(symbols): + raise NotImplementedError("Google Finance doesn't have this functionality") + def _get_hist_yahoo(sym=None, start=None, end=None, retry_count=3, pause=0, **kwargs): """ @@ -178,6 +185,41 @@ def _get_hist_yahoo(sym=None, start=None, end=None, retry_count=3, "return a 200 for url %s" % (pause, url)) +def _get_hist_google(sym=None, start=None, end=None, retry_count=3, + pause=0, **kwargs): + """ + Get historical data for the given name from google. + Date format is datetime + + Returns a DataFrame. + """ + if(sym is None): + warnings.warn("Need to provide a name.") + return None + + start, end = _sanitize_dates(start, end) + + google_URL = 'http://www.google.com/finance/historical?' 
+ + # www.google.com/finance/historical?q=GOOG&startdate=Jun+9%2C+2011&enddate=Jun+8%2C+2013&output=csv + url = google_URL + urllib.urlencode({"q": sym, \ + "startdate": start.strftime('%b %d, %Y'), \ + "enddate": end.strftime('%b %d, %Y'), "output": "csv" }) + for _ in range(retry_count): + resp = urllib2.urlopen(url) + if resp.code == 200: + lines = resp.read() + rs = read_csv(StringIO(bytes_to_str(lines)), index_col=0, + parse_dates=True)[::-1] + + return rs + + time.sleep(pause) + + raise Exception("after %d tries, Google did not " + "return a 200 for url %s" % (pause, url)) + + def _adjust_prices(hist_data, price_list=['Open', 'High', 'Low', 'Close']): """ Return modifed DataFrame or Panel with adjusted prices based on @@ -347,6 +389,72 @@ def dl_mult_symbols(symbols): return hist_data +def get_data_google(symbols=None, start=None, end=None, retry_count=3, pause=0, + chunksize=25, **kwargs): + """ + Returns DataFrame/Panel of historical stock prices from symbols, over date + range, start to end. To avoid being penalized by Google Finance servers, + pauses between downloading 'chunks' of symbols can be specified. + + Parameters + ---------- + symbols : string, array-like object (list, tuple, Series), or DataFrame + Single stock symbol (ticker), array-like object of symbols or + DataFrame with index containing stock symbols. + start : string, (defaults to '1/1/2010') + Starting date, timestamp. Parses many different kind of date + representations (e.g., 'JAN-01-2010', '1/1/10', 'Jan, 1, 1980') + end : string, (defaults to today) + Ending date, timestamp. Same format as starting date. + retry_count : int, default 3 + Number of times to retry query request. + pause : int, default 0 + Time, in seconds, to pause between consecutive queries of chunks. If + single value given for symbol, represents the pause between retries. + chunksize : int, default 25 + Number of symbols to download consecutively before intiating pause. 
+ + Returns + ------- + hist_data : DataFrame (str) or Panel (array-like object, DataFrame) + """ + + def dl_mult_symbols(symbols): + stocks = {} + for sym_group in _in_chunks(symbols, chunksize): + for sym in sym_group: + try: + stocks[sym] = _get_hist_google(sym, start=start, + end=end, **kwargs) + except: + warnings.warn('Error with sym: ' + sym + '... skipping.') + + time.sleep(pause) + + return Panel(stocks).swapaxes('items', 'minor') + + if 'name' in kwargs: + warnings.warn("Arg 'name' is deprecated, please use 'symbols' instead.", + FutureWarning) + symbols = kwargs['name'] + + #If a single symbol, (e.g., 'GOOG') + if isinstance(symbols, (str, int)): + sym = symbols + hist_data = _get_hist_google(sym, start=start, end=end) + #Or multiple symbols, (e.g., ['GOOG', 'AAPL', 'MSFT']) + elif isinstance(symbols, DataFrame): + try: + hist_data = dl_mult_symbols(Series(symbols.index)) + except ValueError: + raise + else: #Guess a Series + try: + hist_data = dl_mult_symbols(symbols) + except TypeError: + hist_data = dl_mult_symbols(Series(symbols)) + + return hist_data def get_data_fred(name=None, start=dt.datetime(2010, 1, 1), end=dt.datetime.today()): diff --git a/pandas/io/tests/test_google.py b/pandas/io/tests/test_google.py new file mode 100644 index 0000000000000..7f4ca13c27e58 --- /dev/null +++ b/pandas/io/tests/test_google.py @@ -0,0 +1,82 @@ +import unittest +import nose +from datetime import datetime + +import pandas as pd +import pandas.io.data as web +from pandas.util.testing import (network, assert_series_equal) +from numpy.testing.decorators import slow + +import urllib2 + + +class TestGoogle(unittest.TestCase): + + @network + def test_google(self): + # asserts that google is minimally working and that it throws + # an excecption when DataReader can't get a 200 response from + # google + start = datetime(2010, 1, 1) + end = datetime(2013, 01, 27) + + try: + self.assertEquals( + web.DataReader("F", 'google', start, end)['Close'][-1], + 13.68) + + 
self.assertRaises( + Exception, + lambda: web.DataReader("NON EXISTENT TICKER", 'google', + start, end)) + except urllib2.URLError: + try: + urllib2.urlopen('http://www.google.com') + except urllib2.URLError: + raise nose.SkipTest + else: + raise + + + @network + def test_get_quote(self): + self.assertRaises(NotImplementedError, + lambda: web.get_quote_google(pd.Series(['GOOG', 'AAPL', 'GOOG']))) + + @network + def test_get_data(self): + import numpy as np + df = web.get_data_google('GOOG') + print(df.Volume.ix['OCT-08-2010']) + assert df.Volume.ix['OCT-08-2010'] == 2863473 + + sl = ['AAPL', 'AMZN', 'GOOG'] + pan = web.get_data_google(sl, '2012') + ts = pan.Close.GOOG.index[pan.Close.AAPL > pan.Close.GOOG] + assert ts[0].dayofyear == 96 + + pan = web.get_data_google(['GE', 'MSFT', 'INTC'], 'JAN-01-12', 'JAN-31-12') + expected = [19.02, 28.23, 25.39] + result = pan.Close.ix['01-18-12'][['GE', 'MSFT', 'INTC']].tolist() + assert result == expected + + # sanity checking + t= np.array(result) + assert np.issubdtype(t.dtype, np.floating) + assert t.shape == (3,) + + expected = [[ 18.99, 28.4 , 25.18], + [ 18.58, 28.31, 25.13], + [ 19.03, 28.16, 25.52], + [ 18.81, 28.82, 25.87]] + result = pan.Open.ix['Jan-15-12':'Jan-20-12'][['GE', 'MSFT', 'INTC']].values + assert (result == expected).all() + + # sanity checking + t= np.array(pan) + assert np.issubdtype(t.dtype, np.floating) + + +if __name__ == '__main__': + nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], + exit=False)
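The query string that `_get_hist_google` builds can be sketched with the Python 3 `urllib.parse` spelling (the PR itself used the `urllib`/`urllib2` of the era, and the endpoint has long since been retired by Google, so this is illustrative only):

```python
from datetime import datetime
from urllib.parse import urlencode

start, end = datetime(2011, 6, 9), datetime(2013, 6, 8)

# e.g. .../historical?q=GOOG&startdate=Jun+09%2C+2011&enddate=...&output=csv
url = 'http://www.google.com/finance/historical?' + urlencode({
    'q': 'GOOG',
    'startdate': start.strftime('%b %d, %Y'),
    'enddate': end.strftime('%b %d, %Y'),
    'output': 'csv',
})
```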
https://api.github.com/repos/pandas-dev/pandas/pulls/3814
2013-06-08T23:31:54Z
2013-06-10T02:23:17Z
2013-06-10T02:23:17Z
2014-06-17T08:58:27Z
remove unused import in test_yahoo
diff --git a/pandas/io/tests/test_yahoo.py b/pandas/io/tests/test_yahoo.py index 1109d67278f73..b79fdad2bff9d 100644 --- a/pandas/io/tests/test_yahoo.py +++ b/pandas/io/tests/test_yahoo.py @@ -2,8 +2,6 @@ import nose from datetime import datetime -from pandas.util.py3compat import StringIO, BytesIO - import pandas as pd import pandas.io.data as web from pandas.util.testing import (network, assert_frame_equal,
https://api.github.com/repos/pandas-dev/pandas/pulls/3813
2013-06-08T21:39:58Z
2013-06-08T22:49:11Z
2013-06-08T22:49:11Z
2014-07-16T08:12:42Z
correct FRED test (GDP changed ...)
diff --git a/pandas/io/tests/test_fred.py b/pandas/io/tests/test_fred.py index 3e951e5443bc3..00a90ec3da402 100644 --- a/pandas/io/tests/test_fred.py +++ b/pandas/io/tests/test_fred.py @@ -29,7 +29,7 @@ def test_fred(self): try: self.assertEquals( web.DataReader("GDP", "fred", start, end)['GDP'].tail(1), - 16010.2) + 16004.5) self.assertRaises( Exception,
https://api.github.com/repos/pandas-dev/pandas/pulls/3812
2013-06-08T20:54:58Z
2013-06-08T21:48:41Z
2013-06-08T21:48:41Z
2014-07-16T08:12:41Z
DOC: turn off the ipython cache
diff --git a/doc/sphinxext/ipython_directive.py b/doc/sphinxext/ipython_directive.py index 3b19b443af327..bc3c46dd5cc93 100644 --- a/doc/sphinxext/ipython_directive.py +++ b/doc/sphinxext/ipython_directive.py @@ -64,15 +64,8 @@ import sys import tempfile -# To keep compatibility with various python versions -try: - from hashlib import md5 -except ImportError: - from md5 import md5 - # Third-party import matplotlib -import sphinx from docutils.parsers.rst import directives from docutils import nodes from sphinx.util.compat import Directive @@ -84,7 +77,6 @@ from IPython.core.profiledir import ProfileDir from IPython.utils import io -from pdb import set_trace #----------------------------------------------------------------------------- # Globals @@ -205,6 +197,7 @@ def __init__(self): config.InteractiveShell.autocall = False config.InteractiveShell.autoindent = False config.InteractiveShell.colors = 'NoColor' + config.InteractiveShell.cache_size = 0 # create a profile so instance history isn't saved tmp_profile_dir = tempfile.mkdtemp(prefix='profile_')
Also cleans up some unused imports there. Closes #3807.
https://api.github.com/repos/pandas-dev/pandas/pulls/3809
2013-06-08T13:22:39Z
2013-06-08T14:30:15Z
2013-06-08T14:30:15Z
2014-07-16T08:12:40Z
CLN: refactored url accessing and filepath conversion from urls to io.common
diff --git a/pandas/io/common.py b/pandas/io/common.py new file mode 100644 index 0000000000000..46b47c06f7f5d --- /dev/null +++ b/pandas/io/common.py @@ -0,0 +1,79 @@ +""" Common api utilities """ + +import urlparse +from pandas.util import py3compat + +_VALID_URLS = set(urlparse.uses_relative + urlparse.uses_netloc + + urlparse.uses_params) +_VALID_URLS.discard('') + + +def _is_url(url): + """Check to see if a URL has a valid protocol. + + Parameters + ---------- + url : str or unicode + + Returns + ------- + isurl : bool + If `url` has a valid protocol return True otherwise False. + """ + try: + return urlparse.urlparse(url).scheme in _VALID_URLS + except: + return False + +def _is_s3_url(url): + """ Check for an s3 url """ + try: + return urlparse.urlparse(url).scheme == 's3' + except: + return False + +def get_filepath_or_buffer(filepath_or_buffer, encoding=None): + """ if the filepath_or_buffer is a url, translate and return the buffer + passthru otherwise + + Parameters + ---------- + filepath_or_buffer : a url, filepath, or buffer + encoding : the encoding to use to decode py3 bytes, default is 'utf-8' + + Returns + ------- + a filepath_or_buffer, the encoding + + """ + + if _is_url(filepath_or_buffer): + from urllib2 import urlopen + filepath_or_buffer = urlopen(filepath_or_buffer) + if py3compat.PY3: # pragma: no cover + if encoding: + errors = 'strict' + else: + errors = 'replace' + encoding = 'utf-8' + bytes = filepath_or_buffer.read() + filepath_or_buffer = StringIO(bytes.decode(encoding, errors)) + return filepath_or_buffer, encoding + return filepath_or_buffer, None + + if _is_s3_url(filepath_or_buffer): + try: + import boto + except: + raise ImportError("boto is required to handle s3 files") + # Assuming AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY + # are environment variables + parsed_url = urlparse.urlparse(filepath_or_buffer) + conn = boto.connect_s3() + b = conn.get_bucket(parsed_url.netloc) + k = boto.s3.key.Key(b) + k.key = parsed_url.path + 
filepath_or_buffer = StringIO(k.get_contents_as_string()) + return filepath_or_buffer, None + + return filepath_or_buffer, None diff --git a/pandas/io/html.py b/pandas/io/html.py index a5798b3493732..08a9403cd18a7 100644 --- a/pandas/io/html.py +++ b/pandas/io/html.py @@ -20,7 +20,7 @@ import numpy as np from pandas import DataFrame, MultiIndex, isnull -from pandas.io.parsers import _is_url +from pandas.io.common import _is_url try: diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py index 556d1ab1976b4..54ba7536afaee 100644 --- a/pandas/io/parsers.py +++ b/pandas/io/parsers.py @@ -4,7 +4,6 @@ from StringIO import StringIO import re from itertools import izip -import urlparse import csv import numpy as np @@ -15,6 +14,7 @@ import pandas.core.common as com from pandas.util import py3compat from pandas.io.date_converters import generic_parser +from pandas.io.common import get_filepath_or_buffer from pandas.util.decorators import Appender @@ -176,35 +176,6 @@ class DateConversionError(Exception): """ % (_parser_params % _fwf_widths) -_VALID_URLS = set(urlparse.uses_relative + urlparse.uses_netloc + - urlparse.uses_params) -_VALID_URLS.discard('') - - -def _is_url(url): - """Check to see if a URL has a valid protocol. - - Parameters - ---------- - url : str or unicode - - Returns - ------- - isurl : bool - If `url` has a valid protocol return True otherwise False. - """ - try: - return urlparse.urlparse(url).scheme in _VALID_URLS - except: - return False - -def _is_s3_url(url): - """ Check for an s3 url """ - try: - return urlparse.urlparse(url).scheme == 's3' - except: - return False - def _read(filepath_or_buffer, kwds): "Generic reader of line files." 
encoding = kwds.get('encoding', None) @@ -212,32 +183,7 @@ def _read(filepath_or_buffer, kwds): if skipfooter is not None: kwds['skip_footer'] = skipfooter - if isinstance(filepath_or_buffer, basestring): - if _is_url(filepath_or_buffer): - from urllib2 import urlopen - filepath_or_buffer = urlopen(filepath_or_buffer) - if py3compat.PY3: # pragma: no cover - if encoding: - errors = 'strict' - else: - errors = 'replace' - encoding = 'utf-8' - bytes = filepath_or_buffer.read() - filepath_or_buffer = StringIO(bytes.decode(encoding, errors)) - - if _is_s3_url(filepath_or_buffer): - try: - import boto - except: - raise ImportError("boto is required to handle s3 files") - # Assuming AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY - # are environment variables - parsed_url = urlparse.urlparse(filepath_or_buffer) - conn = boto.connect_s3() - b = conn.get_bucket(parsed_url.netloc) - k = boto.s3.key.Key(b) - k.key = parsed_url.path - filepath_or_buffer = StringIO(k.get_contents_as_string()) + filepath_or_buffer, _ = get_filepath_or_buffer(filepath_or_buffer) if kwds.get('date_parser', None) is not None: if isinstance(kwds['parse_dates'], bool): diff --git a/pandas/io/stata.py b/pandas/io/stata.py index f1257f505ca9b..ddc9db0b76539 100644 --- a/pandas/io/stata.py +++ b/pandas/io/stata.py @@ -21,7 +21,8 @@ import datetime from pandas.util import py3compat from pandas import isnull -from pandas.io.parsers import _parser_params, _is_url, Appender +from pandas.io.parsers import _parser_params, Appender +from pandas.io.common import get_filepath_or_buffer _read_stata_doc = """ @@ -288,18 +289,12 @@ def __init__(self, path_or_buf, encoding=None): self._missing_values = False self._data_read = False self._value_labels_read = False - if isinstance(path_or_buf, str) and _is_url(path_or_buf): - from urllib.request import urlopen - path_or_buf = urlopen(path_or_buf) - if py3compat.PY3: # pragma: no cover - if self._encoding: - errors = 'strict' - else: - errors = 'replace' - self._encoding 
= 'cp1252' - bytes = path_or_buf.read() - self.path_or_buf = StringIO(self._decode_bytes(bytes, errors)) - elif type(path_or_buf) is str: + if isinstance(path_or_buf, str): + path_or_buf, encoding = get_filepath_or_buffer(path_or_buf, encoding='cp1252') + if encoding is not None: + self._encoding = encoding + + if type(path_or_buf) is str: self.path_or_buf = open(path_or_buf, 'rb') else: self.path_or_buf = path_or_buf diff --git a/pandas/io/tests/test_stata.py b/pandas/io/tests/test_stata.py index 9f5d796763fb0..d512b0267ed13 100644 --- a/pandas/io/tests/test_stata.py +++ b/pandas/io/tests/test_stata.py @@ -185,7 +185,7 @@ def test_read_dta9(self): def test_stata_doc_examples(self): with ensure_clean(self.dta5) as path: df = DataFrame(np.random.randn(10,2),columns=list('AB')) - df.to_stata('path') + df.to_stata(path) if __name__ == '__main__': nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
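The `_is_url`/`_is_s3_url` helpers that the refactor moves into `io.common` boil down to a scheme check against the schemes `urlparse` knows about. A self-contained sketch in the modern `urllib.parse` spelling (the PR targets Python 2's `urlparse` module):

```python
from urllib.parse import urlparse, uses_netloc, uses_params, uses_relative

# schemes urlparse can split; discard '' so scheme-less paths don't match
_VALID_URLS = set(uses_relative + uses_netloc + uses_params) - {''}

def is_url(url):
    """True if `url` carries a recognized protocol scheme."""
    try:
        return urlparse(url).scheme in _VALID_URLS
    except (TypeError, AttributeError):
        return False

def is_s3_url(url):
    """True for s3:// urls, which the reader routes through boto."""
    try:
        return urlparse(url).scheme == 's3'
    except (TypeError, AttributeError):
        return False
```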
https://api.github.com/repos/pandas-dev/pandas/pulls/3808
2013-06-08T12:39:08Z
2013-06-08T13:04:02Z
2013-06-08T13:04:02Z
2014-07-16T08:12:37Z
ENH: add ujson support in pandas.io.json
diff --git a/LICENSES/ULTRAJSON_LICENSE b/LICENSES/ULTRAJSON_LICENSE new file mode 100644 index 0000000000000..defca46e7f820 --- /dev/null +++ b/LICENSES/ULTRAJSON_LICENSE @@ -0,0 +1,34 @@ +Copyright (c) 2011-2013, ESN Social Software AB and Jonas Tarnstrom +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + * Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + * Neither the name of the ESN Social Software AB nor the + names of its contributors may be used to endorse or promote products + derived from this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND +ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +DISCLAIMED. IN NO EVENT SHALL ESN SOCIAL SOFTWARE AB OR JONAS TARNSTROM BE LIABLE +FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES +(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND +ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + + +Portions of code from MODP_ASCII - Ascii transformations (upper/lower, etc) +http://code.google.com/p/stringencoders/ +Copyright (c) 2007 Nick Galbreath -- nickg [at] modp [dot] com. All rights reserved. 
+ +Numeric decoder derived from from TCL library +http://www.opensource.apple.com/source/tcl/tcl-14/tcl/license.terms + * Copyright (c) 1988-1993 The Regents of the University of California. + * Copyright (c) 1994 Sun Microsystems, Inc. \ No newline at end of file diff --git a/doc/source/api.rst b/doc/source/api.rst index e263554460380..bb6f0ac073e21 100644 --- a/doc/source/api.rst +++ b/doc/source/api.rst @@ -45,6 +45,16 @@ Excel read_excel ExcelFile.parse +JSON +~~~~ + +.. currentmodule:: pandas.io.json + +.. autosummary:: + :toctree: generated/ + + read_json + HTML ~~~~ @@ -597,6 +607,7 @@ Serialization / IO / Conversion DataFrame.to_hdf DataFrame.to_dict DataFrame.to_excel + DataFrame.to_json DataFrame.to_html DataFrame.to_stata DataFrame.to_records diff --git a/doc/source/io.rst b/doc/source/io.rst index ac5d49e036669..e64cbc4bc8101 100644 --- a/doc/source/io.rst +++ b/doc/source/io.rst @@ -35,6 +35,7 @@ object. * ``read_excel`` * ``read_hdf`` * ``read_sql`` + * ``read_json`` * ``read_html`` * ``read_stata`` * ``read_clipboard`` @@ -45,6 +46,7 @@ The corresponding ``writer`` functions are object methods that are accessed like * ``to_excel`` * ``to_hdf`` * ``to_sql`` + * ``to_json`` * ``to_html`` * ``to_stata`` * ``to_clipboard`` @@ -937,6 +939,104 @@ The Series object also has a ``to_string`` method, but with only the ``buf``, which, if set to ``True``, will additionally output the length of the Series. +JSON +---- + +Read and write ``JSON`` format files. + +.. _io.json: + +Writing JSON +~~~~~~~~~~~~ + +A ``Series`` or ``DataFrame`` can be converted to a valid JSON string. 
Use ``to_json`` +with optional parameters: + +- path_or_buf : the pathname or buffer to write the output + This can be ``None`` in which case a JSON string is returned +- orient : The format of the JSON string, default is ``index`` for ``Series``, ``columns`` for ``DataFrame`` + + * split : dict like {index -> [index], columns -> [columns], data -> [values]} + * records : list like [{column -> value}, ... , {column -> value}] + * index : dict like {index -> {column -> value}} + * columns : dict like {column -> {index -> value}} + * values : just the values array + +- date_format : type of date conversion (epoch = epoch milliseconds, iso = ISO8601), default is epoch +- double_precision : The number of decimal places to use when encoding floating point values, default 10. +- force_ascii : force encoded string to be ASCII, default True. + +Note NaN's and None will be converted to null and datetime objects will be converted based on the date_format parameter. + +.. ipython:: python + + dfj = DataFrame(randn(5, 2), columns=list('AB')) + json = dfj.to_json() + json + +Writing in iso date format + +.. ipython:: python + + dfd = DataFrame(randn(5, 2), columns=list('AB')) + dfd['date'] = Timestamp('20130101') + json = dfd.to_json(date_format='iso') + json + +Writing to a file, with a date index and a date column + +.. ipython:: python + + dfj2 = dfj.copy() + dfj2['date'] = Timestamp('20130101') + dfj2.index = date_range('20130101',periods=5) + dfj2.to_json('test.json') + open('test.json').read() + +Reading JSON +~~~~~~~~~~~~ + +Reading a JSON string to a pandas object can take a number of parameters. +The parser will try to parse a ``DataFrame`` if ``typ`` is not supplied or +is ``None``. To explicitly force ``Series`` parsing, pass ``typ=series``. + +- filepath_or_buffer : a **VALID** JSON string or file handle / StringIO. The string could be + a URL. Valid URL schemes include http, ftp, s3, and file. For file URLs, a host + is expected.
For instance, a local file could be + file ://localhost/path/to/table.json +- typ : type of object to recover (series or frame), default 'frame' +- orient : The format of the JSON string, one of the following + + * split : dict like {index -> [index], name -> name, data -> [values]} + * records : list like [value, ... , value] + * index : dict like {index -> value} + +- dtype : dtype of the resulting object +- numpy : direct decoding to numpy arrays. default True but falls back to standard decoding if a problem occurs. +- parse_dates : a list of columns to parse for dates; If True, then try to parse datelike columns, default is False +- keep_default_dates : boolean, default True. If parsing dates, then parse the default datelike columns + +The parser will raise one of ``ValueError/TypeError/AssertionError`` if the JSON is +not parsable. + +Reading from a JSON string + +.. ipython:: python + + pd.read_json(json) + +Reading from a file, parsing dates + +.. ipython:: python + + pd.read_json('test.json',parse_dates=True) + +.. ipython:: python + :suppress: + + import os + os.remove('test.json') + HTML ---- @@ -2193,7 +2293,6 @@ into a .dta file. The format version of this file is always the latest one, 115. .. 
ipython:: python - from pandas.io.stata import StataWriter df = DataFrame(randn(10, 2), columns=list('AB')) df.to_stata('stata.dta') diff --git a/doc/source/v0.11.1.txt b/doc/source/v0.11.1.txt index 70d840f8c477a..5045f73375a97 100644 --- a/doc/source/v0.11.1.txt +++ b/doc/source/v0.11.1.txt @@ -16,6 +16,7 @@ API changes * ``read_excel`` * ``read_hdf`` * ``read_sql`` + * ``read_json`` * ``read_html`` * ``read_stata`` * ``read_clipboard`` @@ -26,6 +27,7 @@ API changes * ``to_excel`` * ``to_hdf`` * ``to_sql`` + * ``to_json`` * ``to_html`` * ``to_stata`` * ``to_clipboard`` @@ -175,6 +177,10 @@ Enhancements accessible via ``read_stata`` top-level function for reading, and ``to_stata`` DataFrame method for writing, :ref:`See the docs<io.stata>` + - Added module for reading and writing json format files: ``pandas.io.json`` + accessible via ``read_json`` top-level function for reading, + and ``to_json`` DataFrame method for writing, :ref:`See the docs<io.json>` + - ``DataFrame.replace()`` now allows regular expressions on contained ``Series`` with object dtype. See the examples section in the regular docs :ref:`Replacing via String Expression <missing_data.replace_expression>` diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 5533584745167..0d2612d7aed7a 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -495,6 +495,45 @@ def to_clipboard(self): from pandas.io import clipboard clipboard.to_clipboard(self) + def to_json(self, path_or_buf=None, orient=None, date_format='epoch', + double_precision=10, force_ascii=True): + """ + Convert the object to a JSON string. + + Note NaN's and None will be converted to null and datetime objects + will be converted to UNIX timestamps.
+ + Parameters + ---------- + path_or_buf : the path or buffer to write the result string + if this is None, return a StringIO of the converted string + orient : {'split', 'records', 'index', 'columns', 'values'}, + default is 'index' for Series, 'columns' for DataFrame + + The format of the JSON string + split : dict like + {index -> [index], columns -> [columns], data -> [values]} + records : list like [{column -> value}, ... , {column -> value}] + index : dict like {index -> {column -> value}} + columns : dict like {column -> {index -> value}} + values : just the values array + date_format : type of date conversion (epoch = epoch milliseconds, iso = ISO8601), + default is epoch + double_precision : The number of decimal places to use when encoding + floating point values, default 10. + force_ascii : force encoded string to be ASCII, default True. + + Returns + ------- + result : a JSON compatible string written to the path_or_buf; + if the path_or_buf is None, return a StringIO of the result + + """ + + from pandas.io import json + return json.to_json(path_or_buf=path_or_buf, obj=self, orient=orient, date_format=date_format, + double_precision=double_precision, force_ascii=force_ascii) + # install the indexers for _name, _indexer in indexing.get_indexers_list(): PandasObject._create_indexer(_name,_indexer) diff --git a/pandas/io/api.py b/pandas/io/api.py index f17351921f83f..48566399f9bfe 100644 --- a/pandas/io/api.py +++ b/pandas/io/api.py @@ -6,6 +6,7 @@ from pandas.io.clipboard import read_clipboard from pandas.io.excel import ExcelFile, ExcelWriter, read_excel from pandas.io.pytables import HDFStore, Term, get_store, read_hdf +from pandas.io.json import read_json from pandas.io.html import read_html from pandas.io.sql import read_sql from pandas.io.stata import read_stata diff --git a/pandas/io/common.py b/pandas/io/common.py index 46b47c06f7f5d..353930482c8b8 100644 --- a/pandas/io/common.py +++ b/pandas/io/common.py @@ -2,6 +2,7 @@ import urlparse from
pandas.util import py3compat +from StringIO import StringIO _VALID_URLS = set(urlparse.uses_relative + urlparse.uses_netloc + urlparse.uses_params) diff --git a/pandas/io/excel.py b/pandas/io/excel.py index 5b7d13acd99ec..95702847d9c7f 100644 --- a/pandas/io/excel.py +++ b/pandas/io/excel.py @@ -11,7 +11,7 @@ from pandas.io.parsers import TextParser from pandas.tseries.period import Period -import json +from pandas import json def read_excel(path_or_buf, sheetname, kind=None, **kwds): """Read an Excel table into a pandas DataFrame diff --git a/pandas/io/json.py b/pandas/io/json.py new file mode 100644 index 0000000000000..17b33931bee5a --- /dev/null +++ b/pandas/io/json.py @@ -0,0 +1,353 @@ + +# pylint: disable-msg=E1101,W0613,W0603 +from StringIO import StringIO +import os + +from pandas import Series, DataFrame, to_datetime +from pandas.io.common import get_filepath_or_buffer +import pandas.json as _json +loads = _json.loads +dumps = _json.dumps + +import numpy as np +from pandas.tslib import iNaT + +### interface to/from ### + +def to_json(path_or_buf, obj, orient=None, date_format='epoch', double_precision=10, force_ascii=True): + + if isinstance(obj, Series): + s = SeriesWriter(obj, orient=orient, date_format=date_format, double_precision=double_precision, + ensure_ascii=force_ascii).write() + elif isinstance(obj, DataFrame): + s = FrameWriter(obj, orient=orient, date_format=date_format, double_precision=double_precision, + ensure_ascii=force_ascii).write() + else: + raise NotImplementedError + + if isinstance(path_or_buf, basestring): + with open(path_or_buf,'w') as fh: + fh.write(s) + elif path_or_buf is None: + return s + else: + path_or_buf.write(s) + +class Writer(object): + + def __init__(self, obj, orient, date_format, double_precision, ensure_ascii): + self.obj = obj + + if orient is None: + orient = self._default_orient + + self.orient = orient + self.date_format = date_format + self.double_precision = double_precision + self.ensure_ascii = 
ensure_ascii + + self.is_copy = False + self._format_axes() + self._format_dates() + + def _format_dates(self): + raise NotImplementedError + + def _format_axes(self): + raise NotImplementedError + + def _needs_to_date(self, data): + return self.date_format == 'iso' and data.dtype == 'datetime64[ns]' + + def _format_to_date(self, data): + if self._needs_to_date(data): + return data.apply(lambda x: x.isoformat()) + return data + + def copy_if_needed(self): + """ copy myself if necessary """ + if not self.is_copy: + self.obj = self.obj.copy() + self.is_copy = True + + def write(self): + return dumps(self.obj, orient=self.orient, double_precision=self.double_precision, ensure_ascii=self.ensure_ascii) + +class SeriesWriter(Writer): + _default_orient = 'index' + + def _format_axes(self): + if self._needs_to_date(self.obj.index): + self.copy_if_needed() + self.obj.index = self._format_to_date(self.obj.index.to_series()) + + def _format_dates(self): + if self._needs_to_date(self.obj): + self.copy_if_needed() + self.obj = self._format_to_date(self.obj) + +class FrameWriter(Writer): + _default_orient = 'columns' + + def _format_axes(self): + """ try to axes if they are datelike """ + if self.orient == 'columns': + axis = 'index' + elif self.orient == 'index': + axis = 'columns' + else: + return + + a = getattr(self.obj,axis) + if self._needs_to_date(a): + self.copy_if_needed() + setattr(self.obj,axis,self._format_to_date(a.to_series())) + + def _format_dates(self): + if self.date_format == 'iso': + dtypes = self.obj.dtypes + dtypes = dtypes[dtypes == 'datetime64[ns]'] + if len(dtypes): + self.copy_if_needed() + for c in dtypes.index: + self.obj[c] = self._format_to_date(self.obj[c]) + +def read_json(path_or_buf=None, orient=None, typ='frame', dtype=None, numpy=True, + parse_dates=False, keep_default_dates=True): + """ + Convert JSON string to pandas object + + Parameters + ---------- + filepath_or_buffer : a VALID JSON string or file handle / StringIO. 
The string could be + a URL. Valid URL schemes include http, ftp, s3, and file. For file URLs, a host + is expected. For instance, a local file could be + file ://localhost/path/to/table.json + orient : {'split', 'records', 'index'}, default 'index' + The format of the JSON string + split : dict like + {index -> [index], name -> name, data -> [values]} + records : list like [value, ... , value] + index : dict like {index -> value} + typ : type of object to recover (series or frame), default 'frame' + dtype : dtype of the resulting object + numpy: direct decoding to numpy arrays. default True but falls back + to standard decoding if a problem occurs. + parse_dates : a list of columns to parse for dates; If True, then try to parse datelike columns + default is False + keep_default_dates : boolean, default True. If parsing dates, + then parse the default datelike columns + + Returns + ------- + result : Series or DataFrame + """ + + filepath_or_buffer,_ = get_filepath_or_buffer(path_or_buf) + if isinstance(filepath_or_buffer, basestring): + if os.path.exists(filepath_or_buffer): + with open(filepath_or_buffer,'r') as fh: + json = fh.read() + else: + json = filepath_or_buffer + elif hasattr(filepath_or_buffer, 'read'): + json = filepath_or_buffer.read() + else: + json = filepath_or_buffer + + obj = None + if typ == 'frame': + obj = FrameParser(json, orient, dtype, numpy, parse_dates, keep_default_dates).parse() + + if typ == 'series' or obj is None: + obj = SeriesParser(json, orient, dtype, numpy, parse_dates, keep_default_dates).parse() + + return obj + +class Parser(object): + + def __init__(self, json, orient, dtype, numpy, parse_dates=False, keep_default_dates=False): + self.json = json + + if orient is None: + orient = self._default_orient + + self.orient = orient + self.dtype = dtype + + if dtype is not None and orient == "split": + numpy = False + + self.numpy = numpy + self.parse_dates = parse_dates + self.keep_default_dates = keep_default_dates + self.obj = 
None + + def parse(self): + self._parse() + if self.obj is not None: + self._convert_axes() + if self.parse_dates: + self._try_parse_dates() + return self.obj + + + def _try_parse_to_date(self, data): + """ try to parse a ndarray like into a date column + try to coerce object in epoch/iso formats and + integer/float in epoch formats """ + + new_data = data + if new_data.dtype == 'object': + try: + new_data = data.astype('int64') + except: + pass + + + # ignore numbers that are out of range + if issubclass(new_data.dtype.type,np.number): + if not ((new_data == iNaT) | (new_data > 31536000000000000L)).all(): + return data + + try: + new_data = to_datetime(new_data) + except: + try: + new_data = to_datetime(new_data.astype('int64')) + except: + + # return old, nothing more we can do + new_data = data + + return new_data + + def _try_parse_dates(self): + raise NotImplementedError + +class SeriesParser(Parser): + _default_orient = 'index' + + def _parse(self): + + json = self.json + dtype = self.dtype + orient = self.orient + numpy = self.numpy + + if numpy: + try: + if orient == "split": + decoded = loads(json, dtype=dtype, numpy=True) + decoded = dict((str(k), v) for k, v in decoded.iteritems()) + self.obj = Series(**decoded) + elif orient == "columns" or orient == "index": + self.obj = Series(*loads(json, dtype=dtype, numpy=True, + labelled=True)) + else: + self.obj = Series(loads(json, dtype=dtype, numpy=True)) + except ValueError: + numpy = False + + if not numpy: + if orient == "split": + decoded = dict((str(k), v) + for k, v in loads(json).iteritems()) + self.obj = Series(dtype=dtype, **decoded) + else: + self.obj = Series(loads(json), dtype=dtype) + + def _convert_axes(self): + """ try to convert axes if they are datelike """ + try: + self.obj.index = self._try_parse_to_date(self.obj.index) + except: + pass + + def _try_parse_dates(self): + if self.obj is None: return + + if self.parse_dates: + self.obj = self._try_parse_to_date(self.obj) + +class FrameParser(Parser): +
_default_orient = 'columns' + + def _parse(self): + + json = self.json + dtype = self.dtype + orient = self.orient + numpy = self.numpy + + if numpy: + try: + if orient == "columns": + args = loads(json, dtype=dtype, numpy=True, labelled=True) + if args: + args = (args[0].T, args[2], args[1]) + self.obj = DataFrame(*args) + elif orient == "split": + decoded = loads(json, dtype=dtype, numpy=True) + decoded = dict((str(k), v) for k, v in decoded.iteritems()) + self.obj = DataFrame(**decoded) + elif orient == "values": + self.obj = DataFrame(loads(json, dtype=dtype, numpy=True)) + else: + self.obj = DataFrame(*loads(json, dtype=dtype, numpy=True, + labelled=True)) + except ValueError: + numpy = False + + if not numpy: + if orient == "columns": + self.obj = DataFrame(loads(json), dtype=dtype) + elif orient == "split": + decoded = dict((str(k), v) + for k, v in loads(json).iteritems()) + self.obj = DataFrame(dtype=dtype, **decoded) + elif orient == "index": + self.obj = DataFrame(loads(json), dtype=dtype).T + else: + self.obj = DataFrame(loads(json), dtype=dtype) + + def _convert_axes(self): + """ try to axes if they are datelike """ + if self.orient == 'columns': + axis = 'index' + elif self.orient == 'index': + axis = 'columns' + else: + return + + try: + a = getattr(self.obj,axis) + setattr(self.obj,axis,self._try_parse_to_date(a)) + except: + pass + + def _try_parse_dates(self): + if self.obj is None: return + + # our columns to parse + parse_dates = self.parse_dates + if parse_dates is True: + parse_dates = [] + parse_dates = set(parse_dates) + + def is_ok(col): + """ return if this col is ok to try for a date parse """ + if not isinstance(col, basestring): return False + + if (col.endswith('_at') or + col.endswith('_time') or + col.lower() == 'modified' or + col.lower() == 'date' or + col.lower() == 'datetime'): + return True + return False + + + for col, c in self.obj.iteritems(): + if (self.keep_default_dates and is_ok(col)) or col in parse_dates: + 
self.obj[col] = self._try_parse_to_date(c) diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py index 6e937ba696e39..faf439d87a5f2 100644 --- a/pandas/io/parsers.py +++ b/pandas/io/parsers.py @@ -23,7 +23,6 @@ import pandas.tslib as tslib import pandas.parser as _parser from pandas.tseries.period import Period -import json class DateConversionError(Exception): diff --git a/pandas/io/tests/test_json/__init__.py b/pandas/io/tests/test_json/__init__.py new file mode 100644 index 0000000000000..e69de29bb2d1d diff --git a/pandas/io/tests/test_json/test_pandas.py b/pandas/io/tests/test_json/test_pandas.py new file mode 100644 index 0000000000000..b64bfaacd38f2 --- /dev/null +++ b/pandas/io/tests/test_json/test_pandas.py @@ -0,0 +1,355 @@ + +# pylint: disable-msg=W0612,E1101 +from copy import deepcopy +from datetime import datetime, timedelta +from StringIO import StringIO +import cPickle as pickle +import operator +import os +import unittest + +import numpy as np + +from pandas import Series, DataFrame, DatetimeIndex, Timestamp +import pandas as pd +read_json = pd.read_json + +from pandas.util.testing import (assert_almost_equal, assert_frame_equal, + assert_series_equal, network, + ensure_clean) +import pandas.util.testing as tm +from numpy.testing.decorators import slow + +_seriesd = tm.getSeriesData() +_tsd = tm.getTimeSeriesData() + +_frame = DataFrame(_seriesd) +_frame2 = DataFrame(_seriesd, columns=['D', 'C', 'B', 'A']) +_intframe = DataFrame(dict((k, v.astype(int)) + for k, v in _seriesd.iteritems())) + +_tsframe = DataFrame(_tsd) + +_mixed_frame = _frame.copy() + +class TestPandasObjects(unittest.TestCase): + + def setUp(self): + self.ts = tm.makeTimeSeries() + self.ts.name = 'ts' + + self.series = tm.makeStringSeries() + self.series.name = 'series' + + self.objSeries = tm.makeObjectSeries() + self.objSeries.name = 'objects' + + self.empty_series = Series([], index=[]) + self.empty_frame = DataFrame({}) + + self.frame = _frame.copy() + self.frame2 = 
_frame2.copy() + self.intframe = _intframe.copy() + self.tsframe = _tsframe.copy() + self.mixed_frame = _mixed_frame.copy() + + def test_frame_from_json_to_json(self): + + def _check_orient(df, orient, dtype=None, numpy=True): + df = df.sort() + dfjson = df.to_json(orient=orient) + unser = read_json(dfjson, orient=orient, dtype=dtype, + numpy=numpy) + unser = unser.sort() + if df.index.dtype.type == np.datetime64: + unser.index = DatetimeIndex(unser.index.values.astype('i8')) + if orient == "records": + # index is not captured in this orientation + assert_almost_equal(df.values, unser.values) + self.assert_(df.columns.equals(unser.columns)) + elif orient == "values": + # index and cols are not captured in this orientation + assert_almost_equal(df.values, unser.values) + elif orient == "split": + # index and col labels might not be strings + unser.index = [str(i) for i in unser.index] + unser.columns = [str(i) for i in unser.columns] + unser = unser.sort() + assert_almost_equal(df.values, unser.values) + else: + assert_frame_equal(df, unser) + + def _check_all_orients(df, dtype=None): + _check_orient(df, "columns", dtype=dtype) + _check_orient(df, "records", dtype=dtype) + _check_orient(df, "split", dtype=dtype) + _check_orient(df, "index", dtype=dtype) + _check_orient(df, "values", dtype=dtype) + + _check_orient(df, "columns", dtype=dtype, numpy=False) + _check_orient(df, "records", dtype=dtype, numpy=False) + _check_orient(df, "split", dtype=dtype, numpy=False) + _check_orient(df, "index", dtype=dtype, numpy=False) + _check_orient(df, "values", dtype=dtype, numpy=False) + + # basic + _check_all_orients(self.frame) + self.assertEqual(self.frame.to_json(), + self.frame.to_json(orient="columns")) + + _check_all_orients(self.intframe, dtype=self.intframe.values.dtype) + + # big one + # index and columns are strings as all unserialised JSON object keys + # are assumed to be strings + biggie = DataFrame(np.zeros((200, 4)), + columns=[str(i) for i in range(4)], + 
index=[str(i) for i in range(200)]) + _check_all_orients(biggie) + + # dtypes + _check_all_orients(DataFrame(biggie, dtype=np.float64), + dtype=np.float64) + _check_all_orients(DataFrame(biggie, dtype=np.int), dtype=np.int) + _check_all_orients(DataFrame(biggie, dtype='<U3'), dtype='<U3') + + # empty + _check_all_orients(self.empty_frame) + + # time series data + _check_all_orients(self.tsframe) + + # mixed data + index = pd.Index(['a', 'b', 'c', 'd', 'e']) + data = { + 'A': [0., 1., 2., 3., 4.], + 'B': [0., 1., 0., 1., 0.], + 'C': ['foo1', 'foo2', 'foo3', 'foo4', 'foo5'], + 'D': [True, False, True, False, True] + } + df = DataFrame(data=data, index=index) + _check_orient(df, "split") + _check_orient(df, "records") + _check_orient(df, "values") + _check_orient(df, "columns") + # index oriented is problematic as it is read back in in a transposed + # state, so the columns are interpreted as having mixed data and + # given object dtypes. + # force everything to have object dtype beforehand + _check_orient(df.transpose().transpose(), "index") + + def test_frame_from_json_bad_data(self): + self.assertRaises(ValueError, read_json, StringIO('{"key":b:a:d}')) + + # too few indices + json = StringIO('{"columns":["A","B"],' + '"index":["2","3"],' + '"data":[[1.0,"1"],[2.0,"2"],[null,"3"]]}"') + self.assertRaises(ValueError, read_json, json, + orient="split") + + # too many columns + json = StringIO('{"columns":["A","B","C"],' + '"index":["1","2","3"],' + '"data":[[1.0,"1"],[2.0,"2"],[null,"3"]]}"') + self.assertRaises(AssertionError, read_json, json, + orient="split") + + # bad key + json = StringIO('{"badkey":["A","B"],' + '"index":["2","3"],' + '"data":[[1.0,"1"],[2.0,"2"],[null,"3"]]}"') + self.assertRaises(TypeError, read_json, json, + orient="split") + + def test_frame_from_json_nones(self): + df = DataFrame([[1, 2], [4, 5, 6]]) + unser = read_json(df.to_json()) + self.assert_(np.isnan(unser['2'][0])) + + df = DataFrame([['1', '2'], ['4', '5', '6']]) + unser = 
read_json(df.to_json()) + self.assert_(unser['2'][0] is None) + + unser = read_json(df.to_json(), numpy=False) + self.assert_(unser['2'][0] is None) + + # infinities get mapped to nulls which get mapped to NaNs during + # deserialisation + df = DataFrame([[1, 2], [4, 5, 6]]) + df[2][0] = np.inf + unser = read_json(df.to_json()) + self.assert_(np.isnan(unser['2'][0])) + + df[2][0] = np.NINF + unser = read_json(df.to_json()) + self.assert_(np.isnan(unser['2'][0])) + + def test_frame_to_json_except(self): + df = DataFrame([1, 2, 3]) + self.assertRaises(ValueError, df.to_json, orient="garbage") + + def test_series_from_json_to_json(self): + + def _check_orient(series, orient, dtype=None, numpy=True): + series = series.sort_index() + unser = read_json(series.to_json(orient=orient), typ='series', + orient=orient, numpy=numpy, dtype=dtype) + unser = unser.sort_index() + if series.index.dtype.type == np.datetime64: + unser.index = DatetimeIndex(unser.index.values.astype('i8')) + if orient == "records" or orient == "values": + assert_almost_equal(series.values, unser.values) + else: + try: + assert_series_equal(series, unser) + except: + raise + if orient == "split": + self.assert_(series.name == unser.name) + + def _check_all_orients(series, dtype=None): + _check_orient(series, "columns", dtype=dtype) + _check_orient(series, "records", dtype=dtype) + _check_orient(series, "split", dtype=dtype) + _check_orient(series, "index", dtype=dtype) + _check_orient(series, "values", dtype=dtype) + + _check_orient(series, "columns", dtype=dtype, numpy=False) + _check_orient(series, "records", dtype=dtype, numpy=False) + _check_orient(series, "split", dtype=dtype, numpy=False) + _check_orient(series, "index", dtype=dtype, numpy=False) + _check_orient(series, "values", dtype=dtype, numpy=False) + + # basic + _check_all_orients(self.series) + self.assertEqual(self.series.to_json(), + self.series.to_json(orient="index")) + + objSeries = Series([str(d) for d in self.objSeries], + 
index=self.objSeries.index, + name=self.objSeries.name) + _check_all_orients(objSeries) + _check_all_orients(self.empty_series) + _check_all_orients(self.ts) + + # dtype + s = Series(range(6), index=['a','b','c','d','e','f']) + _check_all_orients(Series(s, dtype=np.float64), dtype=np.float64) + _check_all_orients(Series(s, dtype=np.int), dtype=np.int) + + def test_series_to_json_except(self): + s = Series([1, 2, 3]) + self.assertRaises(ValueError, s.to_json, orient="garbage") + + def test_typ(self): + + s = Series(range(6), index=['a','b','c','d','e','f']) + result = read_json(s.to_json(),typ=None) + assert_series_equal(result,s) + + def test_reconstruction_index(self): + + df = DataFrame([[1, 2, 3], [4, 5, 6]]) + result = read_json(df.to_json()) + + # the index is serialized as strings....correct? + #assert_frame_equal(result,df) + + def test_path(self): + with ensure_clean('test.json') as path: + + for df in [ self.frame, self.frame2, self.intframe, self.tsframe, self.mixed_frame ]: + df.to_json(path) + read_json(path) + + def test_axis_dates(self): + + # frame + json = self.tsframe.to_json() + result = read_json(json) + assert_frame_equal(result,self.tsframe) + + # series + json = self.ts.to_json() + result = read_json(json,typ='series') + assert_series_equal(result,self.ts) + + def test_parse_dates(self): + + # frame + df = self.tsframe.copy() + df['date'] = Timestamp('20130101') + + json = df.to_json() + result = read_json(json,parse_dates=True) + assert_frame_equal(result,df) + + df['foo'] = 1. 
+ json = df.to_json() + result = read_json(json,parse_dates=True) + assert_frame_equal(result,df) + + # series + ts = Series(Timestamp('20130101'),index=self.ts.index) + json = ts.to_json() + result = read_json(json,typ='series',parse_dates=True) + assert_series_equal(result,ts) + + def test_date_format(self): + + df = self.tsframe.copy() + df['date'] = Timestamp('20130101') + df_orig = df.copy() + + json = df.to_json(date_format='iso') + result = read_json(json,parse_dates=True) + assert_frame_equal(result,df_orig) + + # make sure that we did in fact copy + assert_frame_equal(df,df_orig) + + ts = Series(Timestamp('20130101'),index=self.ts.index) + json = ts.to_json(date_format='iso') + result = read_json(json,typ='series',parse_dates=True) + assert_series_equal(result,ts) + + def test_weird_nested_json(self): + + # this used to core dump the parser + s = r'''{ + "status": "success", + "data": { + "posts": [ + { + "id": 1, + "title": "A blog post", + "body": "Some useful content" + }, + { + "id": 2, + "title": "Another blog post", + "body": "More content" + } + ] + } +}''' + + read_json(s) + + @network + @slow + def test_url(self): + import urllib2 + try: + + url = 'https://api.github.com/repos/pydata/pandas/issues?per_page=5' + result = read_json(url,parse_dates=True) + for c in ['created_at','closed_at','updated_at']: + self.assert_(result[c].dtype == 'datetime64[ns]') + + url = 'http://search.twitter.com/search.json?q=pandas%20python' + result = read_json(url) + + except urllib2.URLError: + raise nose.SkipTest diff --git a/pandas/io/tests/test_json/test_ujson.py b/pandas/io/tests/test_json/test_ujson.py new file mode 100644 index 0000000000000..2e775b4a541ea --- /dev/null +++ b/pandas/io/tests/test_json/test_ujson.py @@ -0,0 +1,1232 @@ +import unittest +from unittest import TestCase + +import pandas.json as ujson +try: + import json +except ImportError: + import simplejson as json +import math +import nose +import platform +import sys +import time +import 
datetime +import calendar +import StringIO +import re +from functools import partial +import pandas.util.py3compat as py3compat + +import numpy as np +from pandas.util.testing import assert_almost_equal +from numpy.testing import (assert_array_equal, + assert_array_almost_equal_nulp, + assert_approx_equal) +from pandas import DataFrame, Series, Index +import pandas.util.testing as tm + + +def _skip_if_python_ver(skip_major, skip_minor=None): + major, minor = sys.version_info[:2] + if major == skip_major and (skip_minor is None or minor == skip_minor): + raise nose.SkipTest + +json_unicode = (json.dumps if sys.version_info[0] >= 3 + else partial(json.dumps, encoding="utf-8")) + +class UltraJSONTests(TestCase): + def test_encodeDictWithUnicodeKeys(self): + input = { u"key1": u"value1", u"key1": u"value1", u"key1": u"value1", u"key1": u"value1", u"key1": u"value1", u"key1": u"value1" } + output = ujson.encode(input) + + input = { u"بن": u"value1", u"بن": u"value1", u"بن": u"value1", u"بن": u"value1", u"بن": u"value1", u"بن": u"value1", u"بن": u"value1" } + output = ujson.encode(input) + + pass + + def test_encodeDoubleConversion(self): + input = math.pi + output = ujson.encode(input) + self.assertEquals(round(input, 5), round(json.loads(output), 5)) + self.assertEquals(round(input, 5), round(ujson.decode(output), 5)) + + def test_encodeWithDecimal(self): + input = 1.0 + output = ujson.encode(input) + self.assertEquals(output, "1.0") + + def test_encodeDoubleNegConversion(self): + input = -math.pi + output = ujson.encode(input) + self.assertEquals(round(input, 5), round(json.loads(output), 5)) + self.assertEquals(round(input, 5), round(ujson.decode(output), 5)) + + def test_encodeArrayOfNestedArrays(self): + input = [[[[]]]] * 20 + output = ujson.encode(input) + self.assertEquals(input, json.loads(output)) + #self.assertEquals(output, json.dumps(input)) + self.assertEquals(input, ujson.decode(output)) + input = np.array(input) + assert_array_equal(input, 
ujson.decode(output, numpy=True, dtype=input.dtype))
+
+    def test_encodeArrayOfDoubles(self):
+        input = [ 31337.31337, 31337.31337, 31337.31337, 31337.31337] * 10
+        output = ujson.encode(input)
+        self.assertEquals(input, json.loads(output))
+        #self.assertEquals(output, json.dumps(input))
+        self.assertEquals(input, ujson.decode(output))
+        assert_array_equal(np.array(input), ujson.decode(output, numpy=True))
+
+    def test_doublePrecisionTest(self):
+        input = 30.012345678901234
+        output = ujson.encode(input, double_precision = 15)
+        self.assertEquals(input, json.loads(output))
+        self.assertEquals(input, ujson.decode(output))
+
+        output = ujson.encode(input, double_precision = 9)
+        self.assertEquals(round(input, 9), json.loads(output))
+        self.assertEquals(round(input, 9), ujson.decode(output))
+
+        output = ujson.encode(input, double_precision = 3)
+        self.assertEquals(round(input, 3), json.loads(output))
+        self.assertEquals(round(input, 3), ujson.decode(output))
+
+        output = ujson.encode(input)
+        self.assertEquals(round(input, 5), json.loads(output))
+        self.assertEquals(round(input, 5), ujson.decode(output))
+
+    def test_invalidDoublePrecision(self):
+        input = 30.12345678901234567890
+        output = ujson.encode(input, double_precision = 20)
+        # should snap to the max, which is 15
+        self.assertEquals(round(input, 15), json.loads(output))
+        self.assertEquals(round(input, 15), ujson.decode(output))
+
+        output = ujson.encode(input, double_precision = -1)
+        # also should snap to the max, which is 15
+        self.assertEquals(round(input, 15), json.loads(output))
+        self.assertEquals(round(input, 15), ujson.decode(output))
+
+        # will throw typeError
+        self.assertRaises(TypeError, ujson.encode, input, double_precision = '9')
+        # will throw typeError
+        self.assertRaises(TypeError, ujson.encode, input, double_precision = None)
+
+
+    def test_encodeStringConversion(self):
+        input = "A string \\ / \b \f \n \r \t"
+        output = ujson.encode(input)
+        self.assertEquals(input, json.loads(output))
+        self.assertEquals(output, '"A string \\\\ \\/ \\b \\f \\n \\r \\t"')
+        self.assertEquals(input, ujson.decode(output))
+        pass
+
+    def test_decodeUnicodeConversion(self):
+        pass
+
+    def test_encodeUnicodeConversion1(self):
+        input = "Räksmörgås اسامة بن محمد بن عوض بن لادن"
+        enc = ujson.encode(input)
+        dec = ujson.decode(enc)
+        self.assertEquals(enc, json_unicode(input))
+        self.assertEquals(dec, json.loads(enc))
+
+    def test_encodeControlEscaping(self):
+        input = "\x19"
+        enc = ujson.encode(input)
+        dec = ujson.decode(enc)
+        self.assertEquals(input, dec)
+        self.assertEquals(enc, json_unicode(input))
+
+
+    def test_encodeUnicodeConversion2(self):
+        input = "\xe6\x97\xa5\xd1\x88"
+        enc = ujson.encode(input)
+        dec = ujson.decode(enc)
+        self.assertEquals(enc, json_unicode(input))
+        self.assertEquals(dec, json.loads(enc))
+
+    def test_encodeUnicodeSurrogatePair(self):
+        _skip_if_python_ver(2, 5)
+        _skip_if_python_ver(2, 6)
+        input = "\xf0\x90\x8d\x86"
+        enc = ujson.encode(input)
+        dec = ujson.decode(enc)
+
+        self.assertEquals(enc, json_unicode(input))
+        self.assertEquals(dec, json.loads(enc))
+
+    def test_encodeUnicode4BytesUTF8(self):
+        _skip_if_python_ver(2, 5)
+        _skip_if_python_ver(2, 6)
+        input = "\xf0\x91\x80\xb0TRAILINGNORMAL"
+        enc = ujson.encode(input)
+        dec = ujson.decode(enc)
+
+        self.assertEquals(enc, json_unicode(input))
+        self.assertEquals(dec, json.loads(enc))
+
+    def test_encodeUnicode4BytesUTF8Highest(self):
+        _skip_if_python_ver(2, 5)
+        _skip_if_python_ver(2, 6)
+        input = "\xf3\xbf\xbf\xbfTRAILINGNORMAL"
+        enc = ujson.encode(input)
+
+        dec = ujson.decode(enc)
+
+        self.assertEquals(enc, json_unicode(input))
+        self.assertEquals(dec, json.loads(enc))
+
+
+    def test_encodeArrayInArray(self):
+        input = [[[[]]]]
+        output = ujson.encode(input)
+
+        self.assertEquals(input, json.loads(output))
+        self.assertEquals(output, json.dumps(input))
+        self.assertEquals(input, ujson.decode(output))
+        assert_array_equal(np.array(input), ujson.decode(output, numpy=True))
+        pass
+
+    def test_encodeIntConversion(self):
+        input = 31337
+        output = ujson.encode(input)
+        self.assertEquals(input, json.loads(output))
+        self.assertEquals(output, json.dumps(input))
+        self.assertEquals(input, ujson.decode(output))
+        pass
+
+    def test_encodeIntNegConversion(self):
+        input = -31337
+        output = ujson.encode(input)
+        self.assertEquals(input, json.loads(output))
+        self.assertEquals(output, json.dumps(input))
+        self.assertEquals(input, ujson.decode(output))
+        pass
+
+
+    def test_encodeLongNegConversion(self):
+        input = -9223372036854775808
+        output = ujson.encode(input)
+
+        outputjson = json.loads(output)
+        outputujson = ujson.decode(output)
+
+        self.assertEquals(input, json.loads(output))
+        self.assertEquals(output, json.dumps(input))
+        self.assertEquals(input, ujson.decode(output))
+        pass
+
+    def test_encodeListConversion(self):
+        input = [ 1, 2, 3, 4 ]
+        output = ujson.encode(input)
+        self.assertEquals(input, json.loads(output))
+        self.assertEquals(input, ujson.decode(output))
+        assert_array_equal(np.array(input), ujson.decode(output, numpy=True))
+        pass
+
+    def test_encodeDictConversion(self):
+        input = { "k1": 1, "k2": 2, "k3": 3, "k4": 4 }
+        output = ujson.encode(input)
+        self.assertEquals(input, json.loads(output))
+        self.assertEquals(input, ujson.decode(output))
+        self.assertEquals(input, ujson.decode(output))
+        pass
+
+    def test_encodeNoneConversion(self):
+        input = None
+        output = ujson.encode(input)
+        self.assertEquals(input, json.loads(output))
+        self.assertEquals(output, json.dumps(input))
+        self.assertEquals(input, ujson.decode(output))
+        pass
+
+    def test_encodeTrueConversion(self):
+        input = True
+        output = ujson.encode(input)
+        self.assertEquals(input, json.loads(output))
+        self.assertEquals(output, json.dumps(input))
+        self.assertEquals(input, ujson.decode(output))
+        pass
+
+    def test_encodeFalseConversion(self):
+        input = False
+        output = ujson.encode(input)
+        self.assertEquals(input, json.loads(output))
+        self.assertEquals(output, json.dumps(input))
+        self.assertEquals(input, ujson.decode(output))
+        pass
+
+    # def test_encodeDatetimeConversion(self):
+    #     ts = time.time()
+    #     input = datetime.datetime.fromtimestamp(ts)
+    #     output = ujson.encode(input)
+    #     expected = calendar.timegm(input.utctimetuple())
+    #     self.assertEquals(int(expected), json.loads(output))
+    #     self.assertEquals(int(expected), ujson.decode(output))
+    #     pass
+
+    # def test_encodeDateConversion(self):
+    #     ts = time.time()
+    #     input = datetime.date.fromtimestamp(ts)
+
+    #     output = ujson.encode(input)
+    #     tup = ( input.year, input.month, input.day, 0, 0, 0 )
+
+    #     expected = calendar.timegm(tup)
+    #     self.assertEquals(int(expected), json.loads(output))
+    #     self.assertEquals(int(expected), ujson.decode(output))
+
+    def test_datetime_nanosecond_unit(self):
+        from datetime import datetime
+        from pandas.lib import Timestamp
+
+        val = datetime.now()
+        stamp = Timestamp(val)
+
+        roundtrip = ujson.decode(ujson.encode(val))
+        self.assert_(roundtrip == stamp.value)
+
+    def test_encodeToUTF8(self):
+        _skip_if_python_ver(2, 5)
+        input = "\xe6\x97\xa5\xd1\x88"
+        enc = ujson.encode(input, ensure_ascii=False)
+        dec = ujson.decode(enc)
+        self.assertEquals(enc, json_unicode(input, ensure_ascii=False))
+        self.assertEquals(dec, json.loads(enc))
+
+    def test_decodeFromUnicode(self):
+        input = u"{\"obj\": 31337}"
+        dec1 = ujson.decode(input)
+        dec2 = ujson.decode(str(input))
+        self.assertEquals(dec1, dec2)
+
+    def test_encodeRecursionMax(self):
+        # 8 is the max recursion depth
+
+        class O2:
+            member = 0
+            pass
+
+        class O1:
+            member = 0
+            pass
+
+        input = O1()
+        input.member = O2()
+        input.member.member = input
+
+        try:
+            output = ujson.encode(input)
+            assert False, "Expected overflow exception"
+        except(OverflowError):
+            pass
+
+    def test_encodeDoubleNan(self):
+        input = np.nan
+        assert ujson.encode(input) == 'null', "Expected null"
+
+    def test_encodeDoubleInf(self):
+        input = np.inf
+        assert ujson.encode(input) == 'null', "Expected null"
+
+    def test_encodeDoubleNegInf(self):
+        input = -np.inf
+        assert ujson.encode(input) == 'null', "Expected null"
+
+
+    def test_decodeJibberish(self):
+        input = "fdsa sda v9sa fdsa"
+        try:
+            ujson.decode(input)
+            assert False, "Expected exception!"
+        except(ValueError):
+            return
+        assert False, "Wrong exception"
+
+    def test_decodeBrokenArrayStart(self):
+        input = "["
+        try:
+            ujson.decode(input)
+            assert False, "Expected exception!"
+        except(ValueError):
+            return
+        assert False, "Wrong exception"
+
+    def test_decodeBrokenObjectStart(self):
+        input = "{"
+        try:
+            ujson.decode(input)
+            assert False, "Expected exception!"
+        except(ValueError):
+            return
+        assert False, "Wrong exception"
+
+    def test_decodeBrokenArrayEnd(self):
+        input = "]"
+        try:
+            ujson.decode(input)
+            assert False, "Expected exception!"
+        except(ValueError):
+            return
+        assert False, "Wrong exception"
+
+    def test_decodeBrokenObjectEnd(self):
+        input = "}"
+        try:
+            ujson.decode(input)
+            assert False, "Expected exception!"
+        except(ValueError):
+            return
+        assert False, "Wrong exception"
+
+    def test_decodeStringUnterminated(self):
+        input = "\"TESTING"
+        try:
+            ujson.decode(input)
+            assert False, "Expected exception!"
+        except(ValueError):
+            return
+        assert False, "Wrong exception"
+
+    def test_decodeStringUntermEscapeSequence(self):
+        input = "\"TESTING\\\""
+        try:
+            ujson.decode(input)
+            assert False, "Expected exception!"
+        except(ValueError):
+            return
+        assert False, "Wrong exception"
+
+    def test_decodeStringBadEscape(self):
+        input = "\"TESTING\\\""
+        try:
+            ujson.decode(input)
+            assert False, "Expected exception!"
+        except(ValueError):
+            return
+        assert False, "Wrong exception"
+
+    def test_decodeTrueBroken(self):
+        input = "tru"
+        try:
+            ujson.decode(input)
+            assert False, "Expected exception!"
+        except(ValueError):
+            return
+        assert False, "Wrong exception"
+
+    def test_decodeFalseBroken(self):
+        input = "fa"
+        try:
+            ujson.decode(input)
+            assert False, "Expected exception!"
+        except(ValueError):
+            return
+        assert False, "Wrong exception"
+
+    def test_decodeNullBroken(self):
+        input = "n"
+        try:
+            ujson.decode(input)
+            assert False, "Expected exception!"
+        except(ValueError):
+            return
+        assert False, "Wrong exception"
+
+
+    def test_decodeBrokenDictKeyTypeLeakTest(self):
+        input = '{{1337:""}}'
+        for x in xrange(1000):
+            try:
+                ujson.decode(input)
+                assert False, "Expected exception!"
+            except(ValueError),e:
+                continue
+
+            assert False, "Wrong exception"
+
+    def test_decodeBrokenDictLeakTest(self):
+        input = '{{"key":"}'
+        for x in xrange(1000):
+            try:
+                ujson.decode(input)
+                assert False, "Expected exception!"
+            except(ValueError):
+                continue
+
+            assert False, "Wrong exception"
+
+    def test_decodeBrokenListLeakTest(self):
+        input = '[[[true'
+        for x in xrange(1000):
+            try:
+                ujson.decode(input)
+                assert False, "Expected exception!"
+            except(ValueError):
+                continue
+
+            assert False, "Wrong exception"
+
+    def test_decodeDictWithNoKey(self):
+        input = "{{{{31337}}}}"
+        try:
+            ujson.decode(input)
+            assert False, "Expected exception!"
+        except(ValueError):
+            return
+
+        assert False, "Wrong exception"
+
+    def test_decodeDictWithNoColonOrValue(self):
+        input = "{{{{\"key\"}}}}"
+        try:
+            ujson.decode(input)
+            assert False, "Expected exception!"
+        except(ValueError):
+            return
+
+        assert False, "Wrong exception"
+
+    def test_decodeDictWithNoValue(self):
+        input = "{{{{\"key\":}}}}"
+        try:
+            ujson.decode(input)
+            assert False, "Expected exception!"
+        except(ValueError):
+            return
+
+        assert False, "Wrong exception"
+
+    def test_decodeNumericIntPos(self):
+        input = "31337"
+        self.assertEquals (31337, ujson.decode(input))
+
+    def test_decodeNumericIntNeg(self):
+        input = "-31337"
+        self.assertEquals (-31337, ujson.decode(input))
+
+    def test_encodeUnicode4BytesUTF8Fail(self):
+        _skip_if_python_ver(3)
+        input = "\xfd\xbf\xbf\xbf\xbf\xbf"
+        try:
+            enc = ujson.encode(input)
+            assert False, "Expected exception"
+        except OverflowError:
+            pass
+
+    def test_encodeNullCharacter(self):
+        input = "31337 \x00 1337"
+        output = ujson.encode(input)
+        self.assertEquals(input, json.loads(output))
+        self.assertEquals(output, json.dumps(input))
+        self.assertEquals(input, ujson.decode(output))
+
+        input = "\x00"
+        output = ujson.encode(input)
+        self.assertEquals(input, json.loads(output))
+        self.assertEquals(output, json.dumps(input))
+        self.assertEquals(input, ujson.decode(output))
+
+        self.assertEquals('" \\u0000\\r\\n "', ujson.dumps(u" \u0000\r\n "))
+        pass
+
+    def test_decodeNullCharacter(self):
+        input = "\"31337 \\u0000 31337\""
+        self.assertEquals(ujson.decode(input), json.loads(input))
+
+
+    def test_encodeListLongConversion(self):
+        input = [9223372036854775807, 9223372036854775807, 9223372036854775807,
+                 9223372036854775807, 9223372036854775807, 9223372036854775807 ]
+        output = ujson.encode(input)
+        self.assertEquals(input, json.loads(output))
+        self.assertEquals(input, ujson.decode(output))
+        assert_array_equal(np.array(input), ujson.decode(output, numpy=True,
+                                                         dtype=np.int64))
+        pass
+
+    def test_encodeLongConversion(self):
+        input = 9223372036854775807
+        output = ujson.encode(input)
+        self.assertEquals(input, json.loads(output))
+        self.assertEquals(output, json.dumps(input))
+        self.assertEquals(input, ujson.decode(output))
+        pass
+
+    def test_numericIntExp(self):
+        input = "1337E40"
+        output = ujson.decode(input)
+        self.assertEquals(output, json.loads(input))
+
+    def test_numericIntFrcExp(self):
+        input = "1.337E40"
+        output = ujson.decode(input)
+        self.assertAlmostEqual(output, json.loads(input))
+
+    def test_decodeNumericIntExpEPLUS(self):
+        input = "1337E+40"
+        output = ujson.decode(input)
+        self.assertAlmostEqual(output, json.loads(input))
+
+    def test_decodeNumericIntExpePLUS(self):
+        input = "1.337e+40"
+        output = ujson.decode(input)
+        self.assertAlmostEqual(output, json.loads(input))
+
+    def test_decodeNumericIntExpE(self):
+        input = "1337E40"
+        output = ujson.decode(input)
+        self.assertAlmostEqual(output, json.loads(input))
+
+    def test_decodeNumericIntExpe(self):
+        input = "1337e40"
+        output = ujson.decode(input)
+        self.assertAlmostEqual(output, json.loads(input))
+
+    def test_decodeNumericIntExpEMinus(self):
+        input = "1.337E-4"
+        output = ujson.decode(input)
+        self.assertAlmostEqual(output, json.loads(input))
+
+    def test_decodeNumericIntExpeMinus(self):
+        input = "1.337e-4"
+        output = ujson.decode(input)
+        self.assertAlmostEqual(output, json.loads(input))
+
+    def test_dumpToFile(self):
+        f = StringIO.StringIO()
+        ujson.dump([1, 2, 3], f)
+        self.assertEquals("[1,2,3]", f.getvalue())
+
+    def test_dumpToFileLikeObject(self):
+        class filelike:
+            def __init__(self):
+                self.bytes = ''
+            def write(self, bytes):
+                self.bytes += bytes
+        f = filelike()
+        ujson.dump([1, 2, 3], f)
+        self.assertEquals("[1,2,3]", f.bytes)
+
+    def test_dumpFileArgsError(self):
+        try:
+            ujson.dump([], '')
+        except TypeError:
+            pass
+        else:
+            assert False, 'expected TypeError'
+
+    def test_loadFile(self):
+        f = StringIO.StringIO("[1,2,3,4]")
+        self.assertEquals([1, 2, 3, 4], ujson.load(f))
+        f = StringIO.StringIO("[1,2,3,4]")
+        assert_array_equal(np.array([1, 2, 3, 4]), ujson.load(f, numpy=True))
+
+    def test_loadFileLikeObject(self):
+        class filelike:
+            def read(self):
+                try:
+                    self.end
+                except AttributeError:
+                    self.end = True
+                    return "[1,2,3,4]"
+        f = filelike()
+        self.assertEquals([1, 2, 3, 4], ujson.load(f))
+        f = filelike()
+        assert_array_equal(np.array([1, 2, 3, 4]), ujson.load(f, numpy=True))
+
+    def test_loadFileArgsError(self):
+        try:
+            ujson.load("[]")
+        except TypeError:
+            pass
+        else:
+            assert False, "expected TypeError"
+
+    def test_version(self):
+        assert re.match(r'^\d+\.\d+(\.\d+)?$', ujson.__version__), \
+               "ujson.__version__ must be a string like '1.4.0'"
+
+    def test_encodeNumericOverflow(self):
+        try:
+            ujson.encode(12839128391289382193812939)
+        except OverflowError:
+            pass
+        else:
+            assert False, "expected OverflowError"
+
+    def test_encodeNumericOverflowNested(self):
+        for n in xrange(0, 100):
+            class Nested:
+                x = 12839128391289382193812939
+
+            nested = Nested()
+
+            try:
+                ujson.encode(nested)
+            except OverflowError:
+                pass
+            else:
+                assert False, "expected OverflowError"
+
+    def test_decodeNumberWith32bitSignBit(self):
+        #Test that numbers that fit within 32 bits but would have the
+        # sign bit set (2**31 <= x < 2**32) are decoded properly.
+        boundary1 = 2**31
+        boundary2 = 2**32
+        docs = (
+            '{"id": 3590016419}',
+            '{"id": %s}' % 2**31,
+            '{"id": %s}' % 2**32,
+            '{"id": %s}' % ((2**32)-1),
+        )
+        results = (3590016419, 2**31, 2**32, 2**32-1)
+        for doc,result in zip(docs, results):
+            self.assertEqual(ujson.decode(doc)['id'], result)
+
+    def test_encodeBigEscape(self):
+        for x in xrange(10):
+            if py3compat.PY3:
+                base = '\u00e5'.encode('utf-8')
+            else:
+                base = "\xc3\xa5"
+            input = base * 1024 * 1024 * 2
+            output = ujson.encode(input)
+
+    def test_decodeBigEscape(self):
+        for x in xrange(10):
+            if py3compat.PY3:
+                base = '\u00e5'.encode('utf-8')
+            else:
+                base = "\xc3\xa5"
+            quote = py3compat.str_to_bytes("\"")
+            input = quote + (base * 1024 * 1024 * 2) + quote
+            output = ujson.decode(input)
+
+    def test_toDict(self):
+        d = {u"key": 31337}
+
+        class DictTest:
+            def toDict(self):
+                return d
+
+        o = DictTest()
+        output = ujson.encode(o)
+        dec = ujson.decode(output)
+        self.assertEquals(dec, d)
+
+
+class NumpyJSONTests(TestCase):
+
+    def testBool(self):
+        b = np.bool(True)
+        self.assertEqual(ujson.decode(ujson.encode(b)), b)
+
+    def testBoolArray(self):
+        inpt = np.array([True, False, True, True, False, True, False , False],
+                        dtype=np.bool)
+        outp = np.array(ujson.decode(ujson.encode(inpt)), dtype=np.bool)
+        assert_array_equal(inpt, outp)
+
+    def testInt(self):
+        num = np.int(2562010)
+        self.assertEqual(np.int(ujson.decode(ujson.encode(num))), num)
+
+        num = np.int8(127)
+        self.assertEqual(np.int8(ujson.decode(ujson.encode(num))), num)
+
+        num = np.int16(2562010)
+        self.assertEqual(np.int16(ujson.decode(ujson.encode(num))), num)
+
+        num = np.int32(2562010)
+        self.assertEqual(np.int32(ujson.decode(ujson.encode(num))), num)
+
+        num = np.int64(2562010)
+        self.assertEqual(np.int64(ujson.decode(ujson.encode(num))), num)
+
+        num = np.uint8(255)
+        self.assertEqual(np.uint8(ujson.decode(ujson.encode(num))), num)
+
+        num = np.uint16(2562010)
+        self.assertEqual(np.uint16(ujson.decode(ujson.encode(num))), num)
+
+        num = np.uint32(2562010)
+        self.assertEqual(np.uint32(ujson.decode(ujson.encode(num))), num)
+
+        num = np.uint64(2562010)
+        self.assertEqual(np.uint64(ujson.decode(ujson.encode(num))), num)
+
+    def testIntArray(self):
+        arr = np.arange(100, dtype=np.int)
+        dtypes = (np.int, np.int8, np.int16, np.int32, np.int64,
+                  np.uint, np.uint8, np.uint16, np.uint32, np.uint64)
+        for dtype in dtypes:
+            inpt = arr.astype(dtype)
+            outp = np.array(ujson.decode(ujson.encode(inpt)), dtype=dtype)
+            assert_array_equal(inpt, outp)
+
+    def testIntMax(self):
+        num = np.int(np.iinfo(np.int).max)
+        self.assertEqual(np.int(ujson.decode(ujson.encode(num))), num)
+
+        num = np.int8(np.iinfo(np.int8).max)
+        self.assertEqual(np.int8(ujson.decode(ujson.encode(num))), num)
+
+        num = np.int16(np.iinfo(np.int16).max)
+        self.assertEqual(np.int16(ujson.decode(ujson.encode(num))), num)
+
+        num = np.int32(np.iinfo(np.int32).max)
+        self.assertEqual(np.int32(ujson.decode(ujson.encode(num))), num)
+
+        num = np.uint8(np.iinfo(np.uint8).max)
+        self.assertEqual(np.uint8(ujson.decode(ujson.encode(num))), num)
+
+        num = np.uint16(np.iinfo(np.uint16).max)
+        self.assertEqual(np.uint16(ujson.decode(ujson.encode(num))), num)
+
+        num = np.uint32(np.iinfo(np.uint32).max)
+        self.assertEqual(np.uint32(ujson.decode(ujson.encode(num))), num)
+
+        if platform.architecture()[0] != '32bit':
+            num = np.int64(np.iinfo(np.int64).max)
+            self.assertEqual(np.int64(ujson.decode(ujson.encode(num))), num)
+
+            # uint64 max will always overflow as it's encoded to signed
+            num = np.uint64(np.iinfo(np.int64).max)
+            self.assertEqual(np.uint64(ujson.decode(ujson.encode(num))), num)
+
+    def testFloat(self):
+        num = np.float(256.2013)
+        self.assertEqual(np.float(ujson.decode(ujson.encode(num))), num)
+
+        num = np.float32(256.2013)
+        self.assertEqual(np.float32(ujson.decode(ujson.encode(num))), num)
+
+        num = np.float64(256.2013)
+        self.assertEqual(np.float64(ujson.decode(ujson.encode(num))), num)
+
+    def testFloatArray(self):
+        arr = np.arange(12.5, 185.72, 1.7322, dtype=np.float)
+        dtypes = (np.float, np.float32, np.float64)
+
+        for dtype in dtypes:
+            inpt = arr.astype(dtype)
+            outp = np.array(ujson.decode(ujson.encode(inpt, double_precision=15)), dtype=dtype)
+            assert_array_almost_equal_nulp(inpt, outp)
+
+    def testFloatMax(self):
+        num = np.float(np.finfo(np.float).max/10)
+        assert_approx_equal(np.float(ujson.decode(ujson.encode(num))), num, 15)
+
+        num = np.float32(np.finfo(np.float32).max/10)
+        assert_approx_equal(np.float32(ujson.decode(ujson.encode(num))), num, 15)
+
+        num = np.float64(np.finfo(np.float64).max/10)
+        assert_approx_equal(np.float64(ujson.decode(ujson.encode(num))), num, 15)
+
+    def testArrays(self):
+        arr = np.arange(100);
+
+        arr = arr.reshape((10, 10))
+        assert_array_equal(np.array(ujson.decode(ujson.encode(arr))), arr)
+        assert_array_equal(ujson.decode(ujson.encode(arr), numpy=True), arr)
+
+        arr = arr.reshape((5, 5, 4))
+        assert_array_equal(np.array(ujson.decode(ujson.encode(arr))), arr)
+        assert_array_equal(ujson.decode(ujson.encode(arr), numpy=True), arr)
+
+        arr = arr.reshape((100, 1))
+        assert_array_equal(np.array(ujson.decode(ujson.encode(arr))), arr)
+        assert_array_equal(ujson.decode(ujson.encode(arr), numpy=True), arr)
+
+        arr = np.arange(96);
+        arr = arr.reshape((2, 2, 2, 2, 3, 2))
+        assert_array_equal(np.array(ujson.decode(ujson.encode(arr))), arr)
+        assert_array_equal(ujson.decode(ujson.encode(arr), numpy=True), arr)
+
+        l = ['a', list(), dict(), dict(), list(),
+             42, 97.8, ['a', 'b'], {'key': 'val'}]
+        arr = np.array(l)
+        assert_array_equal(np.array(ujson.decode(ujson.encode(arr))), arr)
+
+        arr = np.arange(100.202, 200.202, 1, dtype=np.float32);
+        arr = arr.reshape((5, 5, 4))
+        outp = np.array(ujson.decode(ujson.encode(arr)), dtype=np.float32)
+        assert_array_almost_equal_nulp(arr, outp)
+        outp = ujson.decode(ujson.encode(arr), numpy=True, dtype=np.float32)
+        assert_array_almost_equal_nulp(arr, outp)
+
+    def testArrayNumpyExcept(self):
+
+        input = ujson.dumps([42, {}, 'a'])
+        try:
+            ujson.decode(input, numpy=True)
+            assert False, "Expected exception!"
+        except(TypeError):
+            pass
+        except:
+            assert False, "Wrong exception"
+
+        input = ujson.dumps(['a', 'b', [], 'c'])
+        try:
+            ujson.decode(input, numpy=True)
+            assert False, "Expected exception!"
+        except(ValueError):
+            pass
+        except:
+            assert False, "Wrong exception"
+
+        input = ujson.dumps([['a'], 42])
+        try:
+            ujson.decode(input, numpy=True)
+            assert False, "Expected exception!"
+        except(ValueError):
+            pass
+        except:
+            assert False, "Wrong exception"
+
+        input = ujson.dumps([42, ['a'], 42])
+        try:
+            ujson.decode(input, numpy=True)
+            assert False, "Expected exception!"
+        except(ValueError):
+            pass
+        except:
+            assert False, "Wrong exception"
+
+        input = ujson.dumps([{}, []])
+        try:
+            ujson.decode(input, numpy=True)
+            assert False, "Expected exception!"
+        except(ValueError):
+            pass
+        except:
+            assert False, "Wrong exception"
+
+        input = ujson.dumps([42, None])
+        try:
+            ujson.decode(input, numpy=True)
+            assert False, "Expected exception!"
+        except(TypeError):
+            pass
+        except:
+            assert False, "Wrong exception"
+
+        input = ujson.dumps([{'a': 'b'}])
+        try:
+            ujson.decode(input, numpy=True, labelled=True)
+            assert False, "Expected exception!"
+        except(ValueError):
+            pass
+        except:
+            assert False, "Wrong exception"
+
+        input = ujson.dumps({'a': {'b': {'c': 42}}})
+        try:
+            ujson.decode(input, numpy=True, labelled=True)
+            assert False, "Expected exception!"
+        except(ValueError):
+            pass
+        except:
+            assert False, "Wrong exception"
+
+        input = ujson.dumps([{'a': 42, 'b': 23}, {'c': 17}])
+        try:
+            ujson.decode(input, numpy=True, labelled=True)
+            assert False, "Expected exception!"
+        except(ValueError):
+            pass
+        except:
+            assert False, "Wrong exception"
+
+    def testArrayNumpyLabelled(self):
+        input = {'a': []}
+        output = ujson.loads(ujson.dumps(input), numpy=True, labelled=True)
+        self.assertTrue((np.empty((1, 0)) == output[0]).all())
+        self.assertTrue((np.array(['a']) == output[1]).all())
+        self.assertTrue(output[2] is None)
+
+        input = [{'a': 42}]
+        output = ujson.loads(ujson.dumps(input), numpy=True, labelled=True)
+        self.assertTrue((np.array([42]) == output[0]).all())
+        self.assertTrue(output[1] is None)
+        self.assertTrue((np.array([u'a']) == output[2]).all())
+
+        # py3 is non-determinstic on the ordering......
+        if not py3compat.PY3:
+            input = [{'a': 42, 'b':31}, {'a': 24, 'c': 99}, {'a': 2.4, 'b': 78}]
+            output = ujson.loads(ujson.dumps(input), numpy=True, labelled=True)
+            expectedvals = np.array([42, 31, 24, 99, 2.4, 78], dtype=int).reshape((3,2))
+            self.assertTrue((expectedvals == output[0]).all())
+            self.assertTrue(output[1] is None)
+            self.assertTrue((np.array([u'a', 'b']) == output[2]).all())
+
+
+            input = {1: {'a': 42, 'b':31}, 2: {'a': 24, 'c': 99}, 3: {'a': 2.4, 'b': 78}}
+            output = ujson.loads(ujson.dumps(input), numpy=True, labelled=True)
+            expectedvals = np.array([42, 31, 24, 99, 2.4, 78], dtype=int).reshape((3,2))
+            self.assertTrue((expectedvals == output[0]).all())
+            self.assertTrue((np.array(['1','2','3']) == output[1]).all())
+            self.assertTrue((np.array(['a', 'b']) == output[2]).all())
+
+class PandasJSONTests(TestCase):
+
+    def testDataFrame(self):
+        df = DataFrame([[1,2,3], [4,5,6]], index=['a', 'b'], columns=['x', 'y', 'z'])
+
+        # column indexed
+        outp = DataFrame(ujson.decode(ujson.encode(df)))
+        self.assertTrue((df == outp).values.all())
+        assert_array_equal(df.columns, outp.columns)
+        assert_array_equal(df.index, outp.index)
+
+        dec = _clean_dict(ujson.decode(ujson.encode(df, orient="split")))
+        outp = DataFrame(**dec)
+        self.assertTrue((df == outp).values.all())
+        assert_array_equal(df.columns, outp.columns)
+        assert_array_equal(df.index, outp.index)
+
+        outp = DataFrame(ujson.decode(ujson.encode(df, orient="records")))
+        outp.index = df.index
+        self.assertTrue((df == outp).values.all())
+        assert_array_equal(df.columns, outp.columns)
+
+        outp = DataFrame(ujson.decode(ujson.encode(df, orient="values")))
+        outp.index = df.index
+        self.assertTrue((df.values == outp.values).all())
+
+        outp = DataFrame(ujson.decode(ujson.encode(df, orient="index")))
+        self.assertTrue((df.transpose() == outp).values.all())
+        assert_array_equal(df.transpose().columns, outp.columns)
+        assert_array_equal(df.transpose().index, outp.index)
+
+
+    def testDataFrameNumpy(self):
+        df = DataFrame([[1,2,3], [4,5,6]], index=['a', 'b'], columns=['x', 'y', 'z'])
+
+        # column indexed
+        outp = DataFrame(ujson.decode(ujson.encode(df), numpy=True))
+        self.assertTrue((df == outp).values.all())
+        assert_array_equal(df.columns, outp.columns)
+        assert_array_equal(df.index, outp.index)
+
+        dec = _clean_dict(ujson.decode(ujson.encode(df, orient="split"),
+                                       numpy=True))
+        outp = DataFrame(**dec)
+        self.assertTrue((df == outp).values.all())
+        assert_array_equal(df.columns, outp.columns)
+        assert_array_equal(df.index, outp.index)
+
+        outp = DataFrame(ujson.decode(ujson.encode(df, orient="index"), numpy=True))
+        self.assertTrue((df.transpose() == outp).values.all())
+        assert_array_equal(df.transpose().columns, outp.columns)
+        assert_array_equal(df.transpose().index, outp.index)
+
+    def testDataFrameNested(self):
+        df = DataFrame([[1,2,3], [4,5,6]], index=['a', 'b'], columns=['x', 'y', 'z'])
+
+        nested = {'df1': df, 'df2': df.copy()}
+
+        exp = {'df1': ujson.decode(ujson.encode(df)),
+               'df2': ujson.decode(ujson.encode(df))}
+        self.assertTrue(ujson.decode(ujson.encode(nested)) == exp)
+
+        exp = {'df1': ujson.decode(ujson.encode(df, orient="index")),
+               'df2': ujson.decode(ujson.encode(df, orient="index"))}
+        self.assertTrue(ujson.decode(ujson.encode(nested, orient="index")) == exp)
+
+        exp = {'df1': ujson.decode(ujson.encode(df, orient="records")),
+               'df2': ujson.decode(ujson.encode(df, orient="records"))}
+        self.assertTrue(ujson.decode(ujson.encode(nested, orient="records")) == exp)
+
+        exp = {'df1': ujson.decode(ujson.encode(df, orient="values")),
+               'df2': ujson.decode(ujson.encode(df, orient="values"))}
+        self.assertTrue(ujson.decode(ujson.encode(nested, orient="values")) == exp)
+
+        exp = {'df1': ujson.decode(ujson.encode(df, orient="split")),
+               'df2': ujson.decode(ujson.encode(df, orient="split"))}
+        self.assertTrue(ujson.decode(ujson.encode(nested, orient="split")) == exp)
+
+    def testDataFrameNumpyLabelled(self):
+        df = DataFrame([[1,2,3], [4,5,6]], index=['a', 'b'], columns=['x', 'y', 'z'])
+
+        # column indexed
+        outp = DataFrame(*ujson.decode(ujson.encode(df), numpy=True, labelled=True))
+        self.assertTrue((df.T == outp).values.all())
+        assert_array_equal(df.T.columns, outp.columns)
+        assert_array_equal(df.T.index, outp.index)
+
+        outp = DataFrame(*ujson.decode(ujson.encode(df, orient="records"), numpy=True, labelled=True))
+        outp.index = df.index
+        self.assertTrue((df == outp).values.all())
+        assert_array_equal(df.columns, outp.columns)
+
+        outp = DataFrame(*ujson.decode(ujson.encode(df, orient="index"), numpy=True, labelled=True))
+        self.assertTrue((df == outp).values.all())
+        assert_array_equal(df.columns, outp.columns)
+        assert_array_equal(df.index, outp.index)
+
+    def testSeries(self):
+        s = Series([10, 20, 30, 40, 50, 60], name="series", index=[6,7,8,9,10,15])
+        s.sort()
+
+        # column indexed
+        outp = Series(ujson.decode(ujson.encode(s)))
+        outp.sort()
+        self.assertTrue((s == outp).values.all())
+
+        outp = Series(ujson.decode(ujson.encode(s), numpy=True))
+        outp.sort()
+        self.assertTrue((s == outp).values.all())
+
+        dec = _clean_dict(ujson.decode(ujson.encode(s, orient="split")))
+        outp = Series(**dec)
+        self.assertTrue((s == outp).values.all())
+        self.assertTrue(s.name == outp.name)
+
+        dec = _clean_dict(ujson.decode(ujson.encode(s, orient="split"),
+                                       numpy=True))
+        outp = Series(**dec)
+        self.assertTrue((s == outp).values.all())
+        self.assertTrue(s.name == outp.name)
+
+        outp = Series(ujson.decode(ujson.encode(s, orient="records"), numpy=True))
+        self.assertTrue((s == outp).values.all())
+
+        outp = Series(ujson.decode(ujson.encode(s, orient="records")))
+        self.assertTrue((s == outp).values.all())
+
+        outp = Series(ujson.decode(ujson.encode(s, orient="values"), numpy=True))
+        self.assertTrue((s == outp).values.all())
+
+        outp = Series(ujson.decode(ujson.encode(s, orient="values")))
+        self.assertTrue((s == outp).values.all())
+
+        outp = Series(ujson.decode(ujson.encode(s, orient="index")))
+        outp.sort()
+        self.assertTrue((s == outp).values.all())
+
+        outp = Series(ujson.decode(ujson.encode(s, orient="index"), numpy=True))
+        outp.sort()
+        self.assertTrue((s == outp).values.all())
+
+    def testSeriesNested(self):
+        s = Series([10, 20, 30, 40, 50, 60], name="series", index=[6,7,8,9,10,15])
+        s.sort()
+
+        nested = {'s1': s, 's2': s.copy()}
+
+        exp = {'s1': ujson.decode(ujson.encode(s)),
+               's2': ujson.decode(ujson.encode(s))}
+        self.assertTrue(ujson.decode(ujson.encode(nested)) == exp)
+
+        exp = {'s1': ujson.decode(ujson.encode(s, orient="split")),
+               's2': ujson.decode(ujson.encode(s, orient="split"))}
+        self.assertTrue(ujson.decode(ujson.encode(nested, orient="split")) == exp)
+
+        exp = {'s1': ujson.decode(ujson.encode(s, orient="records")),
+               's2': ujson.decode(ujson.encode(s, orient="records"))}
+        self.assertTrue(ujson.decode(ujson.encode(nested, orient="records")) == exp)
+
+        exp = {'s1': ujson.decode(ujson.encode(s, orient="values")),
+               's2': ujson.decode(ujson.encode(s, orient="values"))}
+        self.assertTrue(ujson.decode(ujson.encode(nested, orient="values")) == exp)
+
+        exp = {'s1': ujson.decode(ujson.encode(s, orient="index")),
+               's2': ujson.decode(ujson.encode(s, orient="index"))}
+        self.assertTrue(ujson.decode(ujson.encode(nested, orient="index")) == exp)
+
+    def testIndex(self):
+        i = Index([23, 45, 18, 98, 43, 11], name="index")
+
+        # column indexed
+        outp = Index(ujson.decode(ujson.encode(i)))
+        self.assert_(i.equals(outp))
+
+        outp = Index(ujson.decode(ujson.encode(i), numpy=True))
+        self.assert_(i.equals(outp))
+
+        dec = _clean_dict(ujson.decode(ujson.encode(i, orient="split")))
+        outp = Index(**dec)
+        self.assert_(i.equals(outp))
+        self.assertTrue(i.name == outp.name)
+
+        dec = _clean_dict(ujson.decode(ujson.encode(i, orient="split"),
+                                       numpy=True))
+        outp = Index(**dec)
+        self.assert_(i.equals(outp))
+        self.assertTrue(i.name == outp.name)
+
+        outp = Index(ujson.decode(ujson.encode(i, orient="values")))
+        self.assert_(i.equals(outp))
+
+        outp = Index(ujson.decode(ujson.encode(i, orient="values"), numpy=True))
+        self.assert_(i.equals(outp))
+
+        outp = Index(ujson.decode(ujson.encode(i, orient="records")))
+        self.assert_(i.equals(outp))
+
+        outp = Index(ujson.decode(ujson.encode(i, orient="records"), numpy=True))
+        self.assert_(i.equals(outp))
+
+        outp = Index(ujson.decode(ujson.encode(i, orient="index")))
+        self.assert_(i.equals(outp))
+
+        outp = Index(ujson.decode(ujson.encode(i, orient="index"), numpy=True))
+        self.assert_(i.equals(outp))
+
+    def test_datetimeindex(self):
+        from pandas.tseries.index import date_range, DatetimeIndex
+
+        rng = date_range('1/1/2000', periods=20)
+
+        encoded = ujson.encode(rng)
+        decoded = DatetimeIndex(np.array(ujson.decode(encoded)))
+
+        self.assert_(rng.equals(decoded))
+
+        ts = Series(np.random.randn(len(rng)), index=rng)
+        decoded = Series(ujson.decode(ujson.encode(ts)))
+        idx_values = decoded.index.values.astype(np.int64)
+        decoded.index = DatetimeIndex(idx_values)
+        tm.assert_series_equal(np.round(ts, 5), decoded)
+
+"""
+def test_decodeNumericIntFrcOverflow(self):
+input = "X.Y"
+raise NotImplementedError("Implement this test!")
+
+
+def test_decodeStringUnicodeEscape(self):
+input = "\u3131"
+raise NotImplementedError("Implement this test!")
+
+def test_decodeStringUnicodeBrokenEscape(self):
+input = "\u3131"
+raise NotImplementedError("Implement this test!")
+
+def test_decodeStringUnicodeInvalidEscape(self):
+input = "\u3131"
+raise NotImplementedError("Implement this test!")
+
+def test_decodeStringUTF8(self):
+input = "someutfcharacters"
+raise NotImplementedError("Implement this test!")
+
+
+
+"""
+
+def _clean_dict(d):
+    return dict((str(k), v) for k, v in d.iteritems())
+
+if __name__ == '__main__':
+    # unittest.main()
+    import nose
+    # nose.runmodule(argv=[__file__,'-vvs','-x', '--ipdb-failure'],
+    #                exit=False)
+    nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
+                   exit=False)
diff --git a/pandas/src/ujson/lib/ultrajson.h b/pandas/src/ujson/lib/ultrajson.h
new file mode 100644
index 0000000000000..eae665f00f03e
--- /dev/null
+++ b/pandas/src/ujson/lib/ultrajson.h
@@ -0,0 +1,298 @@
+/*
+Copyright (c) 2011, Jonas Tarnstrom and ESN Social Software AB
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+1. Redistributions of source code must retain the above copyright
+   notice, this list of conditions and the following disclaimer.
+2. Redistributions in binary form must reproduce the above copyright
+   notice, this list of conditions and the following disclaimer in the
+   documentation and/or other materials provided with the distribution.
+3. All advertising materials mentioning features or use of this software
+   must display the following acknowledgement:
+   This product includes software developed by ESN Social Software AB (www.esn.me).
+4. Neither the name of the ESN Social Software AB nor the
+   names of its contributors may be used to endorse or promote products
+   derived from this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY ESN SOCIAL SOFTWARE AB ''AS IS'' AND ANY
+EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL ESN SOCIAL SOFTWARE AB BE LIABLE FOR ANY
+DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Portions of code from:
+MODP_ASCII - Ascii transformations (upper/lower, etc)
+http://code.google.com/p/stringencoders/
+Copyright (c) 2007 Nick Galbreath -- nickg [at] modp [dot] com. All rights reserved.
+
+*/
+
+/*
+Ultra fast JSON encoder and decoder
+Developed by Jonas Tarnstrom (jonas@esn.me).
+
+Encoder notes:
+------------------
+
+:: Cyclic references ::
+Cyclic referenced objects are not detected.
+Set JSONObjectEncoder.recursionMax to suitable value or make sure input object
+tree doesn't have cyclic references.
+
+*/
+
+#ifndef __ULTRAJSON_H__
+#define __ULTRAJSON_H__
+
+#include <stdio.h>
+#include <wchar.h>
+
+//#define JSON_DECODE_NUMERIC_AS_DOUBLE
+
+// Don't output any extra whitespaces when encoding
+#define JSON_NO_EXTRA_WHITESPACE
+
+// Max decimals to encode double floating point numbers with
+#ifndef JSON_DOUBLE_MAX_DECIMALS
+#define JSON_DOUBLE_MAX_DECIMALS 15
+#endif
+
+// Max recursion depth, default for encoder
+#ifndef JSON_MAX_RECURSION_DEPTH
+#define JSON_MAX_RECURSION_DEPTH 1024
+#endif
+
+/*
+Dictates and limits how much stack space for buffers UltraJSON will use before resorting to provided heap functions */
+#ifndef JSON_MAX_STACK_BUFFER_SIZE
+#define JSON_MAX_STACK_BUFFER_SIZE 131072
+#endif
+
+#ifdef _WIN32
+
+typedef __int64 JSINT64;
+typedef unsigned __int64 JSUINT64;
+
+typedef __int32 JSINT32;
+typedef unsigned __int32 JSUINT32;
+typedef unsigned __int8 JSUINT8;
+typedef unsigned __int16 JSUTF16;
+typedef unsigned __int32 JSUTF32;
+typedef __int64 JSLONG;
+
+#define EXPORTFUNCTION __declspec(dllexport)
+
+#define FASTCALL_MSVC __fastcall
+#define FASTCALL_ATTR
+#define INLINE_PREFIX __inline
+
+#else
+
+#include <sys/types.h>
+typedef int64_t JSINT64;
+typedef u_int64_t JSUINT64;
+
+typedef int32_t JSINT32;
+typedef u_int32_t JSUINT32;
+
+#define FASTCALL_MSVC
+#define FASTCALL_ATTR __attribute__((fastcall))
+#define INLINE_PREFIX inline
+
+typedef u_int8_t JSUINT8;
+typedef u_int16_t JSUTF16;
+typedef u_int32_t JSUTF32;
+
+typedef int64_t JSLONG;
+
+#define EXPORTFUNCTION
+#endif
+
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+#define __LITTLE_ENDIAN__
+#else
+
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+#define __BIG_ENDIAN__
+#endif
+
+#endif
+
+#if !defined(__LITTLE_ENDIAN__) && !defined(__BIG_ENDIAN__)
+#error "Endianness not supported"
+#endif
+
+enum JSTYPES
+{
+    JT_NULL,        // NULL
+    JT_TRUE,        // boolean true
+    JT_FALSE,       // boolean false
+    JT_INT,         // (JSINT32 (signed 32-bit))
+    JT_LONG,        // (JSINT64 (signed 64-bit))
+    JT_DOUBLE,      // (double)
+    JT_UTF8,        // (char 8-bit)
+    JT_ARRAY,       // Array structure
+    JT_OBJECT,      // Key/Value structure
+    JT_INVALID,     // Internal, do not return nor expect
+};
+
+typedef void * JSOBJ;
+typedef void * JSITER;
+
+typedef struct __JSONTypeContext
+{
+    int type;
+    void *encoder;
+    void *prv;
+} JSONTypeContext;
+
+/*
+Function pointer declarations, suitable for implementing UltraJSON */
+typedef void (*JSPFN_ITERBEGIN)(JSOBJ obj, JSONTypeContext *tc);
+typedef int (*JSPFN_ITERNEXT)(JSOBJ obj, JSONTypeContext *tc);
+typedef void (*JSPFN_ITEREND)(JSOBJ obj, JSONTypeContext *tc);
+typedef JSOBJ (*JSPFN_ITERGETVALUE)(JSOBJ obj, JSONTypeContext *tc);
+typedef char *(*JSPFN_ITERGETNAME)(JSOBJ obj, JSONTypeContext *tc, size_t *outLen);
+typedef void *(*JSPFN_MALLOC)(size_t size);
+typedef void (*JSPFN_FREE)(void *pptr);
+typedef void *(*JSPFN_REALLOC)(void *base, size_t size);
+
+typedef struct __JSONObjectEncoder
+{
+    void (*beginTypeContext)(JSOBJ obj, JSONTypeContext *tc);
+    void (*endTypeContext)(JSOBJ obj, JSONTypeContext *tc);
+    const char *(*getStringValue)(JSOBJ obj, JSONTypeContext *tc, size_t *_outLen);
+    JSINT64 (*getLongValue)(JSOBJ obj, JSONTypeContext *tc);
+    JSINT32 (*getIntValue)(JSOBJ obj, JSONTypeContext *tc);
+    double (*getDoubleValue)(JSOBJ obj, JSONTypeContext *tc);
+
+    /*
+    Begin iteration of an iterable object (JS_ARRAY or JS_OBJECT).
+    The implementor should set up iteration state in ti->prv
+    */
+    JSPFN_ITERBEGIN iterBegin;
+
+    /*
+    Retrieve the next object in an iteration. Should return 0 to indicate the iteration has reached its end, or 1 if there are more items.
+    The implementor is responsible for keeping the state of the iteration. Use the ti->prv fields for this
+    */
+    JSPFN_ITERNEXT iterNext;
+
+    /*
+    Ends the iteration of an iterable object.
+    Any iteration state stored in ti->prv can be freed here
+    */
+    JSPFN_ITEREND iterEnd;
+
+    /*
+    Returns a reference to the value object of an iterator.
+    The implementor is responsible for the life-cycle of the returned value. Use iterNext/iterEnd and ti->prv to keep track of the current object
+    */
+    JSPFN_ITERGETVALUE iterGetValue;
+
+    /*
+    Return the name of the current iterator item.
+    The implementor is responsible for the life-cycle of the returned string. Use iterNext/iterEnd and ti->prv to keep track of the current object
+    */
+    JSPFN_ITERGETNAME iterGetName;
+
+    /*
+    Release a value as indicated by setting ti->release = 1 in the previous getValue call.
+    The ti->prv array should contain the necessary context to release the value
+    */
+    void (*releaseObject)(JSOBJ obj);
+
+    /* Library functions
+    Set to NULL to use STDLIB malloc, realloc and free */
+    JSPFN_MALLOC malloc;
+    JSPFN_REALLOC realloc;
+    JSPFN_FREE free;
+
+    /*
+    Configuration for max recursion; set to 0 to use the default (see JSON_MAX_RECURSION_DEPTH) */
+    int recursionMax;
+
+    /*
+    Configuration for the max number of decimals of double floating point numbers to encode (0-9) */
+    int doublePrecision;
+
+    /*
+    If true, output will be ASCII with all characters above 127 encoded as \uXXXX. If false, output will be UTF-8 or whatever charset the strings are supplied in */
+    int forceASCII;
+
+
+    /*
+    Set to an error message if an error occurred */
+    const char *errorMsg;
+    JSOBJ errorObj;
+
+    /* Buffer stuff */
+    char *start;
+    char *offset;
+    char *end;
+    int heap;
+    int level;
+
+} JSONObjectEncoder;
+
+
+/*
+Encode an object structure into JSON.
+ +Arguments: +obj - An anonymous type representing the object +enc - Function definitions for querying JSOBJ type +buffer - Preallocated buffer to store result in. If NULL function allocates own buffer +cbBuffer - Length of buffer (ignored if buffer is NULL) + +Returns: +Encoded JSON object as a null terminated char string. + +NOTE: +If the supplied buffer wasn't enough to hold the result the function will allocate a new buffer. +Life cycle of the provided buffer must still be handled by caller. + +If the return value doesn't equal the specified buffer caller must release the memory using +JSONObjectEncoder.free or free() as specified when calling this function. +*/ +EXPORTFUNCTION char *JSON_EncodeObject(JSOBJ obj, JSONObjectEncoder *enc, char *buffer, size_t cbBuffer); + + + +typedef struct __JSONObjectDecoder +{ + JSOBJ (*newString)(wchar_t *start, wchar_t *end); + int (*objectAddKey)(JSOBJ obj, JSOBJ name, JSOBJ value); + int (*arrayAddItem)(JSOBJ obj, JSOBJ value); + JSOBJ (*newTrue)(void); + JSOBJ (*newFalse)(void); + JSOBJ (*newNull)(void); + JSOBJ (*newObject)(void *decoder); + JSOBJ (*endObject)(JSOBJ obj); + JSOBJ (*newArray)(void *decoder); + JSOBJ (*endArray)(JSOBJ obj); + JSOBJ (*newInt)(JSINT32 value); + JSOBJ (*newLong)(JSINT64 value); + JSOBJ (*newDouble)(double value); + void (*releaseObject)(JSOBJ obj, void *decoder); + JSPFN_MALLOC malloc; + JSPFN_FREE free; + JSPFN_REALLOC realloc; + + char *errorStr; + char *errorOffset; + + + +} JSONObjectDecoder; + +EXPORTFUNCTION JSOBJ JSON_DecodeObject(JSONObjectDecoder *dec, const char *buffer, size_t cbBuffer); + +#endif diff --git a/pandas/src/ujson/lib/ultrajsondec.c b/pandas/src/ujson/lib/ultrajsondec.c new file mode 100644 index 0000000000000..eda30f3fea839 --- /dev/null +++ b/pandas/src/ujson/lib/ultrajsondec.c @@ -0,0 +1,845 @@ +/* +Copyright (c) 2011, Jonas Tarnstrom and ESN Social Software AB +All rights reserved. 
+ +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: +1. Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. +2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. +3. All advertising materials mentioning features or use of this software + must display the following acknowledgement: + This product includes software developed by ESN Social Software AB (www.esn.me). +4. Neither the name of the ESN Social Software AB nor the + names of its contributors may be used to endorse or promote products + derived from this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY ESN SOCIAL SOFTWARE AB ''AS IS'' AND ANY +EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +DISCLAIMED. IN NO EVENT SHALL ESN SOCIAL SOFTWARE AB BE LIABLE FOR ANY +DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES +(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND +ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +Portions of code from: +MODP_ASCII - Ascii transformations (upper/lower, etc) +http://code.google.com/p/stringencoders/ +Copyright (c) 2007 Nick Galbreath -- nickg [at] modp [dot] com. All rights reserved. 
+ +*/ + +#include "ultrajson.h" +#include <math.h> +#include <assert.h> +#include <string.h> +#include <limits.h> +#include <wchar.h> + +struct DecoderState +{ + char *start; + char *end; + wchar_t *escStart; + wchar_t *escEnd; + int escHeap; + int lastType; + JSONObjectDecoder *dec; +}; + +JSOBJ FASTCALL_MSVC decode_any( struct DecoderState *ds) FASTCALL_ATTR; +typedef JSOBJ (*PFN_DECODER)( struct DecoderState *ds); +#define RETURN_JSOBJ_NULLCHECK(_expr) return(_expr); + +double createDouble(double intNeg, double intValue, double frcValue, int frcDecimalCount) +{ + static const double g_pow10[] = {1, 10, 100, 1000, 10000, 100000, 1000000, 10000000, 100000000, 1000000000, 10000000000, 100000000000, 1000000000000, 10000000000000, 100000000000000, 1000000000000000}; + + return (intValue + (frcValue / g_pow10[frcDecimalCount])) * intNeg; +} + +static JSOBJ SetError( struct DecoderState *ds, int offset, const char *message) +{ + ds->dec->errorOffset = ds->start + offset; + ds->dec->errorStr = (char *) message; + return NULL; +} + + +FASTCALL_ATTR JSOBJ FASTCALL_MSVC decode_numeric ( struct DecoderState *ds) +{ +#ifdef JSON_DECODE_NUMERIC_AS_DOUBLE + double intNeg = 1; + double intValue; +#else + int intNeg = 1; + JSLONG intValue; +#endif + + double expNeg; + int chr; + int decimalCount = 0; + double frcValue = 0.0; + double expValue; + char *offset = ds->start; + + if (*(offset) == '-') + { + offset ++; + intNeg = -1; + } + + // Scan integer part + intValue = 0; + + while (1) + { + chr = (int) (unsigned char) *(offset); + + switch (chr) + { + case '0': + case '1': + case '2': + case '3': + case '4': + case '5': + case '6': + case '7': + case '8': + case '9': + //FIXME: Check for arithemtic overflow here + //PERF: Don't do 64-bit arithmetic here unless we know we have to +#ifdef JSON_DECODE_NUMERIC_AS_DOUBLE + intValue = intValue * 10.0 + (double) (chr - 48); +#else + intValue = intValue * 10LL + (JSLONG) (chr - 48); +#endif + offset ++; + break; + + case '.': + offset 
++; + goto DECODE_FRACTION; + break; + + case 'e': + case 'E': + offset ++; + goto DECODE_EXPONENT; + break; + + default: + goto BREAK_INT_LOOP; + break; + } + } + +BREAK_INT_LOOP: + + ds->lastType = JT_INT; + ds->start = offset; + + //If input string is LONGLONG_MIN here the value is already negative so we should not flip it + +#ifdef JSON_DECODE_NUMERIC_AS_DOUBLE +#else + if (intValue < 0) + { + intNeg = 1; + } +#endif + + //dbg1 = (intValue * intNeg); + //dbg2 = (JSLONG) dbg1; + +#ifdef JSON_DECODE_NUMERIC_AS_DOUBLE + if (intValue > (double) INT_MAX || intValue < (double) INT_MIN) +#else + if ( (intValue >> 31)) +#endif + { + RETURN_JSOBJ_NULLCHECK(ds->dec->newLong( (JSINT64) (intValue * (JSINT64) intNeg))); + } + else + { + RETURN_JSOBJ_NULLCHECK(ds->dec->newInt( (JSINT32) (intValue * intNeg))); + } + + + +DECODE_FRACTION: + + // Scan fraction part + frcValue = 0.0; + while (1) + { + chr = (int) (unsigned char) *(offset); + + switch (chr) + { + case '0': + case '1': + case '2': + case '3': + case '4': + case '5': + case '6': + case '7': + case '8': + case '9': + if (decimalCount < JSON_DOUBLE_MAX_DECIMALS) + { + frcValue = frcValue * 10.0 + (double) (chr - 48); + decimalCount ++; + } + offset ++; + break; + + case 'e': + case 'E': + offset ++; + goto DECODE_EXPONENT; + break; + + default: + goto BREAK_FRC_LOOP; + } + } + +BREAK_FRC_LOOP: + + if (intValue < 0) + { + intNeg = 1; + } + + //FIXME: Check for arithemtic overflow here + ds->lastType = JT_DOUBLE; + ds->start = offset; + RETURN_JSOBJ_NULLCHECK(ds->dec->newDouble (createDouble( (double) intNeg, (double) intValue, frcValue, decimalCount))); + +DECODE_EXPONENT: + expNeg = 1.0; + + if (*(offset) == '-') + { + expNeg = -1.0; + offset ++; + } + else + if (*(offset) == '+') + { + expNeg = +1.0; + offset ++; + } + + expValue = 0.0; + + while (1) + { + chr = (int) (unsigned char) *(offset); + + switch (chr) + { + case '0': + case '1': + case '2': + case '3': + case '4': + case '5': + case '6': + case '7': + case 
'8': + case '9': + expValue = expValue * 10.0 + (double) (chr - 48); + offset ++; + break; + + default: + goto BREAK_EXP_LOOP; + + } + } + +BREAK_EXP_LOOP: + +#ifdef JSON_DECODE_NUMERIC_AS_DOUBLE +#else + if (intValue < 0) + { + intNeg = 1; + } +#endif + + //FIXME: Check for arithemtic overflow here + ds->lastType = JT_DOUBLE; + ds->start = offset; + RETURN_JSOBJ_NULLCHECK(ds->dec->newDouble (createDouble( (double) intNeg, (double) intValue , frcValue, decimalCount) * pow(10.0, expValue * expNeg))); +} + +FASTCALL_ATTR JSOBJ FASTCALL_MSVC decode_true ( struct DecoderState *ds) +{ + char *offset = ds->start; + offset ++; + + if (*(offset++) != 'r') + goto SETERROR; + if (*(offset++) != 'u') + goto SETERROR; + if (*(offset++) != 'e') + goto SETERROR; + + ds->lastType = JT_TRUE; + ds->start = offset; + RETURN_JSOBJ_NULLCHECK(ds->dec->newTrue()); + +SETERROR: + return SetError(ds, -1, "Unexpected character found when decoding 'true'"); +} + +FASTCALL_ATTR JSOBJ FASTCALL_MSVC decode_false ( struct DecoderState *ds) +{ + char *offset = ds->start; + offset ++; + + if (*(offset++) != 'a') + goto SETERROR; + if (*(offset++) != 'l') + goto SETERROR; + if (*(offset++) != 's') + goto SETERROR; + if (*(offset++) != 'e') + goto SETERROR; + + ds->lastType = JT_FALSE; + ds->start = offset; + RETURN_JSOBJ_NULLCHECK(ds->dec->newFalse()); + +SETERROR: + return SetError(ds, -1, "Unexpected character found when decoding 'false'"); + +} + + +FASTCALL_ATTR JSOBJ FASTCALL_MSVC decode_null ( struct DecoderState *ds) +{ + char *offset = ds->start; + offset ++; + + if (*(offset++) != 'u') + goto SETERROR; + if (*(offset++) != 'l') + goto SETERROR; + if (*(offset++) != 'l') + goto SETERROR; + + ds->lastType = JT_NULL; + ds->start = offset; + RETURN_JSOBJ_NULLCHECK(ds->dec->newNull()); + +SETERROR: + return SetError(ds, -1, "Unexpected character found when decoding 'null'"); +} + +FASTCALL_ATTR void FASTCALL_MSVC SkipWhitespace(struct DecoderState *ds) +{ + char *offset = ds->start; + + while 
(1) + { + switch (*offset) + { + case ' ': + case '\t': + case '\r': + case '\n': + offset ++; + break; + + default: + ds->start = offset; + return; + } + } +} + + +enum DECODESTRINGSTATE +{ + DS_ISNULL = 0x32, + DS_ISQUOTE, + DS_ISESCAPE, + DS_UTFLENERROR, + +}; + +static const JSUINT8 g_decoderLookup[256] = +{ +/* 0x00 */ DS_ISNULL, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, +/* 0x10 */ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, +/* 0x20 */ 1, 1, DS_ISQUOTE, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, +/* 0x30 */ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, +/* 0x40 */ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, +/* 0x50 */ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, DS_ISESCAPE, 1, 1, 1, +/* 0x60 */ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, +/* 0x70 */ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, +/* 0x80 */ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, +/* 0x90 */ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, +/* 0xa0 */ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, +/* 0xb0 */ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, +/* 0xc0 */ 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, +/* 0xd0 */ 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, +/* 0xe0 */ 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, +/* 0xf0 */ 4, 4, 4, 4, 4, 4, 4, 4, DS_UTFLENERROR, DS_UTFLENERROR, DS_UTFLENERROR, DS_UTFLENERROR, DS_UTFLENERROR, DS_UTFLENERROR, DS_UTFLENERROR, DS_UTFLENERROR, +}; + + +FASTCALL_ATTR JSOBJ FASTCALL_MSVC decode_string ( struct DecoderState *ds) +{ + JSUTF16 sur[2] = { 0 }; + int iSur = 0; + int index; + wchar_t *escOffset; + size_t escLen = (ds->escEnd - ds->escStart); + JSUINT8 *inputOffset; + JSUINT8 oct; + JSUTF32 ucs; + ds->lastType = JT_INVALID; + ds->start ++; + + if ( (ds->end - ds->start) > escLen) + { + size_t newSize = (ds->end - ds->start); + + if (ds->escHeap) + { + ds->escStart = (wchar_t *) ds->dec->realloc (ds->escStart, newSize * sizeof(wchar_t)); + if (!ds->escStart) + { + return SetError(ds, -1, "Could not reserve memory 
block");
+            }
+        }
+        else
+        {
+            wchar_t *oldStart = ds->escStart;
+            ds->escHeap = 1;
+            ds->escStart = (wchar_t *) ds->dec->malloc (newSize * sizeof(wchar_t));
+            if (!ds->escStart)
+            {
+                return SetError(ds, -1, "Could not reserve memory block");
+            }
+            memcpy (ds->escStart, oldStart, escLen * sizeof(wchar_t));
+        }
+
+        ds->escEnd = ds->escStart + newSize;
+    }
+
+    escOffset = ds->escStart;
+    inputOffset = ds->start;
+
+    while (1)
+    {
+        switch (g_decoderLookup[(JSUINT8)(*inputOffset)])
+        {
+            case DS_ISNULL:
+                return SetError(ds, -1, "Unmatched '\"' when decoding 'string'");
+
+            case DS_ISQUOTE:
+                ds->lastType = JT_UTF8;
+                inputOffset ++;
+                ds->start += ( (char *) inputOffset - (ds->start));
+                RETURN_JSOBJ_NULLCHECK(ds->dec->newString(ds->escStart, escOffset));
+
+            case DS_UTFLENERROR:
+                return SetError (ds, -1, "Invalid UTF-8 sequence length when decoding 'string'");
+
+            case DS_ISESCAPE:
+                inputOffset ++;
+                switch (*inputOffset)
+                {
+                    case '\\': *(escOffset++) = L'\\'; inputOffset++; continue;
+                    case '\"': *(escOffset++) = L'\"'; inputOffset++; continue;
+                    case '/':  *(escOffset++) = L'/';  inputOffset++; continue;
+                    case 'b':  *(escOffset++) = L'\b'; inputOffset++; continue;
+                    case 'f':  *(escOffset++) = L'\f'; inputOffset++; continue;
+                    case 'n':  *(escOffset++) = L'\n'; inputOffset++; continue;
+                    case 'r':  *(escOffset++) = L'\r'; inputOffset++; continue;
+                    case 't':  *(escOffset++) = L'\t'; inputOffset++; continue;
+
+                    case 'u':
+                    {
+                        int index;
+                        inputOffset ++;
+
+                        for (index = 0; index < 4; index ++)
+                        {
+                            switch (*inputOffset)
+                            {
+                                case '\0': return SetError (ds, -1, "Unterminated unicode escape sequence when decoding 'string'");
+                                default: return SetError (ds, -1, "Unexpected character in unicode escape sequence when decoding 'string'");
+
+                                case '0':
+                                case '1':
+                                case '2':
+                                case '3':
+                                case '4':
+                                case '5':
+                                case '6':
+                                case '7':
+                                case '8':
+                                case '9':
+                                    sur[iSur] = (sur[iSur] << 4) + (JSUTF16) (*inputOffset - '0');
+                                    break;
+
+                                case 'a':
+                                case 'b':
+                                case 
'c': + case 'd': + case 'e': + case 'f': + sur[iSur] = (sur[iSur] << 4) + 10 + (JSUTF16) (*inputOffset - 'a'); + break; + + case 'A': + case 'B': + case 'C': + case 'D': + case 'E': + case 'F': + sur[iSur] = (sur[iSur] << 4) + 10 + (JSUTF16) (*inputOffset - 'A'); + break; + } + + inputOffset ++; + } + + + if (iSur == 0) + { + if((sur[iSur] & 0xfc00) == 0xd800) + { + // First of a surrogate pair, continue parsing + iSur ++; + break; + } + (*escOffset++) = (wchar_t) sur[iSur]; + iSur = 0; + } + else + { + // Decode pair + if ((sur[1] & 0xfc00) != 0xdc00) + { + return SetError (ds, -1, "Unpaired high surrogate when decoding 'string'"); + } + +#if WCHAR_MAX == 0xffff + (*escOffset++) = (wchar_t) sur[0]; + (*escOffset++) = (wchar_t) sur[1]; +#else + (*escOffset++) = (wchar_t) 0x10000 + (((sur[0] - 0xd800) << 10) | (sur[1] - 0xdc00)); +#endif + iSur = 0; + } + break; + } + + case '\0': return SetError(ds, -1, "Unterminated escape sequence when decoding 'string'"); + default: return SetError(ds, -1, "Unrecognized escape sequence when decoding 'string'"); + } + break; + + case 1: + *(escOffset++) = (wchar_t) (*inputOffset++); + break; + + case 2: + { + ucs = (*inputOffset++) & 0x1f; + ucs <<= 6; + if (((*inputOffset) & 0x80) != 0x80) + { + return SetError(ds, -1, "Invalid octet in UTF-8 sequence when decoding 'string'"); + } + ucs |= (*inputOffset++) & 0x3f; + if (ucs < 0x80) return SetError (ds, -1, "Overlong 2 byte UTF-8 sequence detected when decoding 'string'"); + *(escOffset++) = (wchar_t) ucs; + break; + } + + case 3: + { + JSUTF32 ucs = 0; + ucs |= (*inputOffset++) & 0x0f; + + for (index = 0; index < 2; index ++) + { + ucs <<= 6; + oct = (*inputOffset++); + + if ((oct & 0x80) != 0x80) + { + return SetError(ds, -1, "Invalid octet in UTF-8 sequence when decoding 'string'"); + } + + ucs |= oct & 0x3f; + } + + if (ucs < 0x800) return SetError (ds, -1, "Overlong 3 byte UTF-8 sequence detected when encoding string"); + *(escOffset++) = (wchar_t) ucs; + break; + } + + case 
4:
+            {
+                JSUTF32 ucs = 0;
+                ucs |= (*inputOffset++) & 0x07;
+
+                for (index = 0; index < 3; index ++)
+                {
+                    ucs <<= 6;
+                    oct = (*inputOffset++);
+
+                    if ((oct & 0x80) != 0x80)
+                    {
+                        return SetError(ds, -1, "Invalid octet in UTF-8 sequence when decoding 'string'");
+                    }
+
+                    ucs |= oct & 0x3f;
+                }
+
+                if (ucs < 0x10000) return SetError (ds, -1, "Overlong 4 byte UTF-8 sequence detected when decoding 'string'");
+
+#if WCHAR_MAX == 0xffff
+                if (ucs >= 0x10000)
+                {
+                    ucs -= 0x10000;
+                    *(escOffset++) = (ucs >> 10) + 0xd800;
+                    *(escOffset++) = (ucs & 0x3ff) + 0xdc00;
+                }
+                else
+                {
+                    *(escOffset++) = (wchar_t) ucs;
+                }
+#else
+                *(escOffset++) = (wchar_t) ucs;
+#endif
+                break;
+            }
+        }
+    }
+}
+
+FASTCALL_ATTR JSOBJ FASTCALL_MSVC decode_array( struct DecoderState *ds)
+{
+    JSOBJ itemValue;
+    JSOBJ newObj = ds->dec->newArray(ds->dec);
+
+    ds->lastType = JT_INVALID;
+    ds->start ++;
+
+    while (1) //(*ds->start) != '\0')
+    {
+        SkipWhitespace(ds);
+
+        if ((*ds->start) == ']')
+        {
+            ds->start++;
+            return ds->dec->endArray(newObj);
+        }
+
+        itemValue = decode_any(ds);
+
+        if (itemValue == NULL)
+        {
+            ds->dec->releaseObject(newObj, ds->dec);
+            return NULL;
+        }
+
+        if (!ds->dec->arrayAddItem (newObj, itemValue))
+        {
+            ds->dec->releaseObject(newObj, ds->dec);
+            return NULL;
+        }
+
+        SkipWhitespace(ds);
+
+        switch (*(ds->start++))
+        {
+            case ']':
+                return ds->dec->endArray(newObj);
+
+            case ',':
+                break;
+
+            default:
+                ds->dec->releaseObject(newObj, ds->dec);
+                return SetError(ds, -1, "Unexpected character found when decoding array value");
+        }
+    }
+
+    ds->dec->releaseObject(newObj, ds->dec);
+    return SetError(ds, -1, "Unmatched ']' when decoding 'array'");
+}
+
+
+
+FASTCALL_ATTR JSOBJ FASTCALL_MSVC decode_object( struct DecoderState *ds)
+{
+    JSOBJ itemName;
+    JSOBJ itemValue;
+    JSOBJ newObj = ds->dec->newObject(ds->dec);
+
+    ds->start ++;
+
+    while (1)
+    {
+        SkipWhitespace(ds);
+
+        if ((*ds->start) == '}')
+        {
+            ds->start ++;
+            return ds->dec->endObject(newObj);
+        }
+
+        ds->lastType = JT_INVALID;
+        itemName = decode_any(ds);
+
+        if (itemName == NULL)
+        {
+            ds->dec->releaseObject(newObj, ds->dec);
+            return NULL;
+        }
+
+        if (ds->lastType != JT_UTF8)
+        {
+            ds->dec->releaseObject(newObj, ds->dec);
+            ds->dec->releaseObject(itemName, ds->dec);
+            return SetError(ds, -1, "Key name of object must be 'string' when decoding 'object'");
+        }
+
+        SkipWhitespace(ds);
+
+        if (*(ds->start++) != ':')
+        {
+            ds->dec->releaseObject(newObj, ds->dec);
+            ds->dec->releaseObject(itemName, ds->dec);
+            return SetError(ds, -1, "No ':' found when decoding object value");
+        }
+
+        SkipWhitespace(ds);
+
+        itemValue = decode_any(ds);
+
+        if (itemValue == NULL)
+        {
+            ds->dec->releaseObject(newObj, ds->dec);
+            ds->dec->releaseObject(itemName, ds->dec);
+            return NULL;
+        }
+
+        if (!ds->dec->objectAddKey (newObj, itemName, itemValue))
+        {
+            ds->dec->releaseObject(newObj, ds->dec);
+            ds->dec->releaseObject(itemName, ds->dec);
+            ds->dec->releaseObject(itemValue, ds->dec);
+            return NULL;
+        }
+
+        SkipWhitespace(ds);
+
+        switch (*(ds->start++))
+        {
+            case '}':
+                return ds->dec->endObject(newObj);
+
+            case ',':
+                break;
+
+            default:
+                ds->dec->releaseObject(newObj, ds->dec);
+                return SetError(ds, -1, "Unexpected character found when decoding object value");
+        }
+    }
+
+    ds->dec->releaseObject(newObj, ds->dec);
+    return SetError(ds, -1, "Unmatched '}' when decoding object");
+}
+
+FASTCALL_ATTR JSOBJ FASTCALL_MSVC decode_any(struct DecoderState *ds)
+{
+    while (1)
+    {
+        switch (*ds->start)
+        {
+            case '\"':
+                return decode_string (ds);
+            case '0':
+            case '1':
+            case '2':
+            case '3':
+            case '4':
+            case '5':
+            case '6':
+            case '7':
+            case '8':
+            case '9':
+            case '-':
+                return decode_numeric (ds);
+
+            case '[': return decode_array (ds);
+            case '{': return decode_object (ds);
+            case 't': return decode_true (ds);
+            case 'f': return decode_false (ds);
+            case 'n': return decode_null (ds);
+
+            case ' ':
+            case '\t':
+            case '\r':
+            case '\n':
+                // White space
+                ds->start ++;
+                break;
+
+            default:
+ return SetError(ds, -1, "Expected object or value"); + } + } +} + + +JSOBJ JSON_DecodeObject(JSONObjectDecoder *dec, const char *buffer, size_t cbBuffer) +{ + + /* + FIXME: Base the size of escBuffer of that of cbBuffer so that the unicode escaping doesn't run into the wall each time */ + struct DecoderState ds; + wchar_t escBuffer[(JSON_MAX_STACK_BUFFER_SIZE / sizeof(wchar_t))]; + JSOBJ ret; + + ds.start = (char *) buffer; + ds.end = ds.start + cbBuffer; + + ds.escStart = escBuffer; + ds.escEnd = ds.escStart + (JSON_MAX_STACK_BUFFER_SIZE / sizeof(wchar_t)); + ds.escHeap = 0; + ds.dec = dec; + ds.dec->errorStr = NULL; + ds.dec->errorOffset = NULL; + + ds.dec = dec; + + ret = decode_any (&ds); + + if (ds.escHeap) + { + dec->free(ds.escStart); + } + return ret; +} diff --git a/pandas/src/ujson/lib/ultrajsonenc.c b/pandas/src/ujson/lib/ultrajsonenc.c new file mode 100644 index 0000000000000..22871513870b7 --- /dev/null +++ b/pandas/src/ujson/lib/ultrajsonenc.c @@ -0,0 +1,891 @@ +/* +Copyright (c) 2011, Jonas Tarnstrom and ESN Social Software AB +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: +1. Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. +2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. +3. All advertising materials mentioning features or use of this software + must display the following acknowledgement: + This product includes software developed by ESN Social Software AB (www.esn.me). +4. Neither the name of the ESN Social Software AB nor the + names of its contributors may be used to endorse or promote products + derived from this software without specific prior written permission. 
+ +THIS SOFTWARE IS PROVIDED BY ESN SOCIAL SOFTWARE AB ''AS IS'' AND ANY +EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +DISCLAIMED. IN NO EVENT SHALL ESN SOCIAL SOFTWARE AB BE LIABLE FOR ANY +DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES +(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND +ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +Portions of code from: +MODP_ASCII - Ascii transformations (upper/lower, etc) +http://code.google.com/p/stringencoders/ +Copyright (c) 2007 Nick Galbreath -- nickg [at] modp [dot] com. All rights reserved. + +*/ + +#include "ultrajson.h" +#include <stdio.h> +#include <assert.h> +#include <string.h> +#include <stdlib.h> +#include <math.h> + +#include <float.h> + +#ifndef TRUE +#define TRUE 1 +#endif +#ifndef FALSE +#define FALSE 0 +#endif + +static const double g_pow10[] = {1, 10, 100, 1000, 10000, 100000, 1000000, 10000000, 100000000, 1000000000, 10000000000, 100000000000, 1000000000000, 10000000000000, 100000000000000, 1000000000000000}; +static const char g_hexChars[] = "0123456789abcdef"; +static const char g_escapeChars[] = "0123456789\\b\\t\\n\\f\\r\\\"\\\\\\/"; + + +/* +FIXME: While this is fine dandy and working it's a magic value mess which probably only the author understands. 
+Needs a cleanup and more documentation */ + +/* +Table for pure ascii output escaping all characters above 127 to \uXXXX */ +static const JSUINT8 g_asciiOutputTable[256] = +{ +/* 0x00 */ 0, 30, 30, 30, 30, 30, 30, 30, 10, 12, 14, 30, 16, 18, 30, 30, +/* 0x10 */ 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, +/* 0x20 */ 1, 1, 20, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 24, +/* 0x30 */ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, +/* 0x40 */ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, +/* 0x50 */ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 22, 1, 1, 1, +/* 0x60 */ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, +/* 0x70 */ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, +/* 0x80 */ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, +/* 0x90 */ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, +/* 0xa0 */ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, +/* 0xb0 */ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, +/* 0xc0 */ 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, +/* 0xd0 */ 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, +/* 0xe0 */ 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, +/* 0xf0 */ 4, 4, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 6, 6, 1, 1 +}; + + +static void SetError (JSOBJ obj, JSONObjectEncoder *enc, const char *message) +{ + enc->errorMsg = message; + enc->errorObj = obj; +} + +/* +FIXME: Keep track of how big these get across several encoder calls and try to make an estimate +That way we won't run our head into the wall each call */ +void Buffer_Realloc (JSONObjectEncoder *enc, size_t cbNeeded) +{ + size_t curSize = enc->end - enc->start; + size_t newSize = curSize * 2; + size_t offset = enc->offset - enc->start; + + while (newSize < curSize + cbNeeded) + { + newSize *= 2; + } + + if (enc->heap) + { + enc->start = (char *) enc->realloc (enc->start, newSize); + if (!enc->start) + { + SetError (NULL, enc, "Could not reserve memory block"); + return; + } + } + else + { + char *oldStart = enc->start; + enc->heap = 1; + enc->start = (char *) enc->malloc 
(newSize); + if (!enc->start) + { + SetError (NULL, enc, "Could not reserve memory block"); + return; + } + memcpy (enc->start, oldStart, offset); + } + enc->offset = enc->start + offset; + enc->end = enc->start + newSize; +} + +FASTCALL_ATTR INLINE_PREFIX void FASTCALL_MSVC Buffer_AppendShortHexUnchecked (char *outputOffset, unsigned short value) +{ + *(outputOffset++) = g_hexChars[(value & 0xf000) >> 12]; + *(outputOffset++) = g_hexChars[(value & 0x0f00) >> 8]; + *(outputOffset++) = g_hexChars[(value & 0x00f0) >> 4]; + *(outputOffset++) = g_hexChars[(value & 0x000f) >> 0]; +} + +int Buffer_EscapeStringUnvalidated (JSOBJ obj, JSONObjectEncoder *enc, const char *io, const char *end) +{ + char *of = (char *) enc->offset; + + while (1) + { + switch (*io) + { + case 0x00: + if (io < end) + { + *(of++) = '\\'; + *(of++) = 'u'; + *(of++) = '0'; + *(of++) = '0'; + *(of++) = '0'; + *(of++) = '0'; + break; + } + else + { + enc->offset += (of - enc->offset); + return TRUE; + } + + case '\"': (*of++) = '\\'; (*of++) = '\"'; break; + case '\\': (*of++) = '\\'; (*of++) = '\\'; break; + case '/': (*of++) = '\\'; (*of++) = '/'; break; + case '\b': (*of++) = '\\'; (*of++) = 'b'; break; + case '\f': (*of++) = '\\'; (*of++) = 'f'; break; + case '\n': (*of++) = '\\'; (*of++) = 'n'; break; + case '\r': (*of++) = '\\'; (*of++) = 'r'; break; + case '\t': (*of++) = '\\'; (*of++) = 't'; break; + + case 0x01: + case 0x02: + case 0x03: + case 0x04: + case 0x05: + case 0x06: + case 0x07: + case 0x0b: + case 0x0e: + case 0x0f: + case 0x10: + case 0x11: + case 0x12: + case 0x13: + case 0x14: + case 0x15: + case 0x16: + case 0x17: + case 0x18: + case 0x19: + case 0x1a: + case 0x1b: + case 0x1c: + case 0x1d: + case 0x1e: + case 0x1f: + *(of++) = '\\'; + *(of++) = 'u'; + *(of++) = '0'; + *(of++) = '0'; + *(of++) = g_hexChars[ (unsigned char) (((*io) & 0xf0) >> 4)]; + *(of++) = g_hexChars[ (unsigned char) ((*io) & 0x0f)]; + break; + + default: (*of++) = (*io); break; + } + + io++; + } + + return 
FALSE; +} + + +/* +FIXME: +This code only works with Little and Big Endian + +FIXME: The JSON spec says escape "/" but non of the others do and we don't +want to be left alone doing it so we don't :) + +*/ +int Buffer_EscapeStringValidated (JSOBJ obj, JSONObjectEncoder *enc, const char *io, const char *end) +{ + JSUTF32 ucs; + char *of = (char *) enc->offset; + + while (1) + { + + //JSUINT8 chr = (unsigned char) *io; + JSUINT8 utflen = g_asciiOutputTable[(unsigned char) *io]; + + switch (utflen) + { + case 0: + { + if (io < end) + { + *(of++) = '\\'; + *(of++) = 'u'; + *(of++) = '0'; + *(of++) = '0'; + *(of++) = '0'; + *(of++) = '0'; + io ++; + continue; + } + else + { + enc->offset += (of - enc->offset); + return TRUE; + } + } + + case 1: + { + *(of++)= (*io++); + continue; + } + + case 2: + { + JSUTF32 in; + JSUTF16 in16; + + if (end - io < 1) + { + enc->offset += (of - enc->offset); + SetError (obj, enc, "Unterminated UTF-8 sequence when encoding string"); + return FALSE; + } + + memcpy(&in16, io, sizeof(JSUTF16)); + in = (JSUTF32) in16; + +#ifdef __LITTLE_ENDIAN__ + ucs = ((in & 0x1f) << 6) | ((in >> 8) & 0x3f); +#else + ucs = ((in & 0x1f00) >> 2) | (in & 0x3f); +#endif + + if (ucs < 0x80) + { + enc->offset += (of - enc->offset); + SetError (obj, enc, "Overlong 2 byte UTF-8 sequence detected when encoding string"); + return FALSE; + } + + io += 2; + break; + } + + case 3: + { + JSUTF32 in; + JSUTF16 in16; + JSUINT8 in8; + + if (end - io < 2) + { + enc->offset += (of - enc->offset); + SetError (obj, enc, "Unterminated UTF-8 sequence when encoding string"); + return FALSE; + } + + memcpy(&in16, io, sizeof(JSUTF16)); + memcpy(&in8, io + 2, sizeof(JSUINT8)); +#ifdef __LITTLE_ENDIAN__ + in = (JSUTF32) in16; + in |= in8 << 16; + ucs = ((in & 0x0f) << 12) | ((in & 0x3f00) >> 2) | ((in & 0x3f0000) >> 16); +#else + in = in16 << 8; + in |= in8; + ucs = ((in & 0x0f0000) >> 4) | ((in & 0x3f00) >> 2) | (in & 0x3f); +#endif + + + if (ucs < 0x800) + { + enc->offset += (of - 
enc->offset); + SetError (obj, enc, "Overlong 3 byte UTF-8 sequence detected when encoding string"); + return FALSE; + } + + io += 3; + break; + } + case 4: + { + JSUTF32 in; + + if (end - io < 3) + { + enc->offset += (of - enc->offset); + SetError (obj, enc, "Unterminated UTF-8 sequence when encoding string"); + return FALSE; + } + + memcpy(&in, io, sizeof(JSUTF32)); +#ifdef __LITTLE_ENDIAN__ + ucs = ((in & 0x07) << 18) | ((in & 0x3f00) << 4) | ((in & 0x3f0000) >> 10) | ((in & 0x3f000000) >> 24); +#else + ucs = ((in & 0x07000000) >> 6) | ((in & 0x3f0000) >> 4) | ((in & 0x3f00) >> 2) | (in & 0x3f); +#endif + if (ucs < 0x10000) + { + enc->offset += (of - enc->offset); + SetError (obj, enc, "Overlong 4 byte UTF-8 sequence detected when encoding string"); + return FALSE; + } + + io += 4; + break; + } + + + case 5: + case 6: + enc->offset += (of - enc->offset); + SetError (obj, enc, "Unsupported UTF-8 sequence length when encoding string"); + return FALSE; + + case 30: + // \uXXXX encode + *(of++) = '\\'; + *(of++) = 'u'; + *(of++) = '0'; + *(of++) = '0'; + *(of++) = g_hexChars[ (unsigned char) (((*io) & 0xf0) >> 4)]; + *(of++) = g_hexChars[ (unsigned char) ((*io) & 0x0f)]; + io ++; + continue; + + case 10: + case 12: + case 14: + case 16: + case 18: + case 20: + case 22: + case 24: + *(of++) = *( (char *) (g_escapeChars + utflen + 0)); + *(of++) = *( (char *) (g_escapeChars + utflen + 1)); + io ++; + continue; + } + + /* + If the character is a UTF8 sequence of length > 1 we end up here */ + if (ucs >= 0x10000) + { + ucs -= 0x10000; + *(of++) = '\\'; + *(of++) = 'u'; + Buffer_AppendShortHexUnchecked(of, (ucs >> 10) + 0xd800); + of += 4; + + *(of++) = '\\'; + *(of++) = 'u'; + Buffer_AppendShortHexUnchecked(of, (ucs & 0x3ff) + 0xdc00); + of += 4; + } + else + { + *(of++) = '\\'; + *(of++) = 'u'; + Buffer_AppendShortHexUnchecked(of, ucs); + of += 4; + } + } + + return FALSE; +} + +#define Buffer_Reserve(__enc, __len) \ + if ((__enc)->end - (__enc)->offset < (__len)) \ + 
{ \ + Buffer_Realloc((__enc), (__len));\ + } \ + + +#define Buffer_AppendCharUnchecked(__enc, __chr) \ + *((__enc)->offset++) = __chr; \ + +FASTCALL_ATTR INLINE_PREFIX void FASTCALL_MSVC strreverse(char* begin, char* end) +{ + char aux; + while (end > begin) + aux = *end, *end-- = *begin, *begin++ = aux; +} + +void Buffer_AppendIntUnchecked(JSONObjectEncoder *enc, JSINT32 value) +{ + char* wstr; + JSUINT32 uvalue = (value < 0) ? -value : value; + + wstr = enc->offset; + // Conversion. Number is reversed. + + do *wstr++ = (char)(48 + (uvalue % 10)); while(uvalue /= 10); + if (value < 0) *wstr++ = '-'; + + // Reverse string + strreverse(enc->offset,wstr - 1); + enc->offset += (wstr - (enc->offset)); +} + +void Buffer_AppendLongUnchecked(JSONObjectEncoder *enc, JSINT64 value) +{ + char* wstr; + JSUINT64 uvalue = (value < 0) ? -value : value; + + wstr = enc->offset; + // Conversion. Number is reversed. + + do *wstr++ = (char)(48 + (uvalue % 10ULL)); while(uvalue /= 10ULL); + if (value < 0) *wstr++ = '-'; + + // Reverse string + strreverse(enc->offset,wstr - 1); + enc->offset += (wstr - (enc->offset)); +} + +int Buffer_AppendDoubleUnchecked(JSOBJ obj, JSONObjectEncoder *enc, double value) +{ + /* if input is larger than thres_max, revert to exponential */ + const double thres_max = (double) 1e16 - 1; + int count; + double diff = 0.0; + char* str = enc->offset; + char* wstr = str; + unsigned long long whole; + double tmp; + unsigned long long frac; + int neg; + double pow10; + + if (value == HUGE_VAL || value == -HUGE_VAL) + { + SetError (obj, enc, "Invalid Inf value when encoding double"); + return FALSE; + } + if (! 
(value == value)) + { + SetError (obj, enc, "Invalid Nan value when encoding double"); + return FALSE; + } + + + /* we'll work in positive values and deal with the + negative sign issue later */ + neg = 0; + if (value < 0) + { + neg = 1; + value = -value; + } + + pow10 = g_pow10[enc->doublePrecision]; + + whole = (unsigned long long) value; + tmp = (value - whole) * pow10; + frac = (unsigned long long)(tmp); + diff = tmp - frac; + + if (diff > 0.5) + { + ++frac; + /* handle rollover, e.g. case 0.99 with prec 1 is 1.0 */ + if (frac >= pow10) + { + frac = 0; + ++whole; + } + } + else + if (diff == 0.5 && ((frac == 0) || (frac & 1))) + { + /* if halfway, round up if odd, OR + if last digit is 0. That last part is strange */ + ++frac; + } + + /* for very large numbers switch back to native sprintf for exponentials. + anyone want to write code to replace this? */ + /* + normal printf behavior is to print EVERY whole number digit + which can be 100s of characters overflowing your buffers == bad + */ + if (value > thres_max) + { + enc->offset += sprintf(str, "%.15e", neg ? -value : value); + return TRUE; + } + + if (enc->doublePrecision == 0) + { + diff = value - whole; + + if (diff > 0.5) + { + /* greater than 0.5, round up, e.g. 
1.6 -> 2 */ + ++whole; + } + else + if (diff == 0.5 && (whole & 1)) + { + /* exactly 0.5 and ODD, then round up */ + /* 1.5 -> 2, but 2.5 -> 2 */ + ++whole; + } + + //vvvvvvvvvvvvvvvvvvv Diff from modp_dto2 + } + else + if (frac) + { + count = enc->doublePrecision; + // now do fractional part, as an unsigned number + // we know it is not 0 but we can have leading zeros, these + // should be removed + while (!(frac % 10)) + { + --count; + frac /= 10; + } + //^^^^^^^^^^^^^^^^^^^ Diff from modp_dto2 + + // now do fractional part, as an unsigned number + do + { + --count; + *wstr++ = (char)(48 + (frac % 10)); + } while (frac /= 10); + // add extra 0s + while (count-- > 0) + { + *wstr++ = '0'; + } + // add decimal + *wstr++ = '.'; + } + else + { + *wstr++ = '0'; + *wstr++ = '.'; + } + + // do whole part + // Take care of sign + // Conversion. Number is reversed. + do *wstr++ = (char)(48 + (whole % 10)); while (whole /= 10); + + if (neg) + { + *wstr++ = '-'; + } + strreverse(str, wstr-1); + enc->offset += (wstr - (enc->offset)); + + return TRUE; +} + + + + + + +/* +FIXME: +Handle integration functions returning NULL here */ + +/* +FIXME: +Perhaps implement recursion detection */ + +void encode(JSOBJ obj, JSONObjectEncoder *enc, const char *name, size_t cbName) +{ + const char *value; + char *objName; + int count; + JSOBJ iterObj; + size_t szlen; + JSONTypeContext tc; + tc.encoder = enc; + + if (enc->level > enc->recursionMax) + { + SetError (obj, enc, "Maximum recursion level reached"); + return; + } + + /* + This reservation must hold + + length of _name as encoded worst case + + maxLength of double to string OR maxLength of JSLONG to string + + Since input is assumed to be UTF-8 the worst character length is: + + 4 bytes (of UTF-8) => "\uXXXX\uXXXX" (12 bytes) + */ + + Buffer_Reserve(enc, 256 + (((cbName / 4) + 1) * 12)); + if (enc->errorMsg) + { + return; + } + + if (name) + { + Buffer_AppendCharUnchecked(enc, '\"'); + + if (enc->forceASCII) + { + if 
(!Buffer_EscapeStringValidated(obj, enc, name, name + cbName)) + { + return; + } + } + else + { + if (!Buffer_EscapeStringUnvalidated(obj, enc, name, name + cbName)) + { + return; + } + } + + + Buffer_AppendCharUnchecked(enc, '\"'); + + Buffer_AppendCharUnchecked (enc, ':'); +#ifndef JSON_NO_EXTRA_WHITESPACE + Buffer_AppendCharUnchecked (enc, ' '); +#endif + } + + enc->beginTypeContext(obj, &tc); + + switch (tc.type) + { + case JT_INVALID: + return; + + case JT_ARRAY: + { + count = 0; + enc->iterBegin(obj, &tc); + + Buffer_AppendCharUnchecked (enc, '['); + + while (enc->iterNext(obj, &tc)) + { + if (count > 0) + { + Buffer_AppendCharUnchecked (enc, ','); +#ifndef JSON_NO_EXTRA_WHITESPACE + Buffer_AppendCharUnchecked (enc, ' '); +#endif + } + + iterObj = enc->iterGetValue(obj, &tc); + + enc->level ++; + encode (iterObj, enc, NULL, 0); + count ++; + } + + enc->iterEnd(obj, &tc); + Buffer_AppendCharUnchecked (enc, ']'); + break; + } + + case JT_OBJECT: + { + count = 0; + enc->iterBegin(obj, &tc); + + Buffer_AppendCharUnchecked (enc, '{'); + + while (enc->iterNext(obj, &tc)) + { + if (count > 0) + { + Buffer_AppendCharUnchecked (enc, ','); +#ifndef JSON_NO_EXTRA_WHITESPACE + Buffer_AppendCharUnchecked (enc, ' '); +#endif + } + + iterObj = enc->iterGetValue(obj, &tc); + objName = enc->iterGetName(obj, &tc, &szlen); + + enc->level ++; + encode (iterObj, enc, objName, szlen); + count ++; + } + + enc->iterEnd(obj, &tc); + Buffer_AppendCharUnchecked (enc, '}'); + break; + } + + case JT_LONG: + { + Buffer_AppendLongUnchecked (enc, enc->getLongValue(obj, &tc)); + break; + } + + case JT_INT: + { + Buffer_AppendIntUnchecked (enc, enc->getIntValue(obj, &tc)); + break; + } + + case JT_TRUE: + { + Buffer_AppendCharUnchecked (enc, 't'); + Buffer_AppendCharUnchecked (enc, 'r'); + Buffer_AppendCharUnchecked (enc, 'u'); + Buffer_AppendCharUnchecked (enc, 'e'); + break; + } + + case JT_FALSE: + { + Buffer_AppendCharUnchecked (enc, 'f'); + Buffer_AppendCharUnchecked (enc, 'a'); + 
Buffer_AppendCharUnchecked (enc, 'l'); + Buffer_AppendCharUnchecked (enc, 's'); + Buffer_AppendCharUnchecked (enc, 'e'); + break; + } + + + case JT_NULL: + { + Buffer_AppendCharUnchecked (enc, 'n'); + Buffer_AppendCharUnchecked (enc, 'u'); + Buffer_AppendCharUnchecked (enc, 'l'); + Buffer_AppendCharUnchecked (enc, 'l'); + break; + } + + case JT_DOUBLE: + { + if (!Buffer_AppendDoubleUnchecked (obj, enc, enc->getDoubleValue(obj, &tc))) + { + enc->endTypeContext(obj, &tc); + enc->level --; + return; + } + break; + } + + case JT_UTF8: + { + value = enc->getStringValue(obj, &tc, &szlen); + Buffer_Reserve(enc, ((szlen / 4) + 1) * 12); + if (enc->errorMsg) + { + enc->endTypeContext(obj, &tc); + return; + } + Buffer_AppendCharUnchecked (enc, '\"'); + + + if (enc->forceASCII) + { + if (!Buffer_EscapeStringValidated(obj, enc, value, value + szlen)) + { + enc->endTypeContext(obj, &tc); + enc->level --; + return; + } + } + else + { + if (!Buffer_EscapeStringUnvalidated(obj, enc, value, value + szlen)) + { + enc->endTypeContext(obj, &tc); + enc->level --; + return; + } + } + + Buffer_AppendCharUnchecked (enc, '\"'); + break; + } + } + + enc->endTypeContext(obj, &tc); + enc->level --; + +} + +char *JSON_EncodeObject(JSOBJ obj, JSONObjectEncoder *enc, char *_buffer, size_t _cbBuffer) +{ + enc->malloc = enc->malloc ? enc->malloc : malloc; + enc->free = enc->free ? enc->free : free; + enc->realloc = enc->realloc ? 
enc->realloc : realloc; + enc->errorMsg = NULL; + enc->errorObj = NULL; + enc->level = 0; + + if (enc->recursionMax < 1) + { + enc->recursionMax = JSON_MAX_RECURSION_DEPTH; + } + + if (enc->doublePrecision < 0 || + enc->doublePrecision > JSON_DOUBLE_MAX_DECIMALS) + { + enc->doublePrecision = JSON_DOUBLE_MAX_DECIMALS; + } + + if (_buffer == NULL) + { + _cbBuffer = 32768; + enc->start = (char *) enc->malloc (_cbBuffer); + if (!enc->start) + { + SetError(obj, enc, "Could not reserve memory block"); + return NULL; + } + enc->heap = 1; + } + else + { + enc->start = _buffer; + enc->heap = 0; + } + + enc->end = enc->start + _cbBuffer; + enc->offset = enc->start; + + + encode (obj, enc, NULL, 0); + + Buffer_Reserve(enc, 1); + if (enc->errorMsg) + { + return NULL; + } + Buffer_AppendCharUnchecked(enc, '\0'); + + return enc->start; +} diff --git a/pandas/src/ujson/python/JSONtoObj.c b/pandas/src/ujson/python/JSONtoObj.c new file mode 100644 index 0000000000000..bc42269d9698b --- /dev/null +++ b/pandas/src/ujson/python/JSONtoObj.c @@ -0,0 +1,676 @@ +#include "py_defines.h" +#define PY_ARRAY_UNIQUE_SYMBOL UJSON_NUMPY +#define NO_IMPORT_ARRAY +#include <numpy/arrayobject.h> +#include <ultrajson.h> + + +typedef struct __PyObjectDecoder +{ + JSONObjectDecoder dec; + + void* npyarr; // Numpy context buffer + void* npyarr_addr; // Ref to npyarr ptr to track DECREF calls + npy_intp curdim; // Current array dimension + + PyArray_Descr* dtype; +} PyObjectDecoder; + +typedef struct __NpyArrContext +{ + PyObject* ret; + PyObject* labels[2]; + PyArray_Dims shape; + + PyObjectDecoder* dec; + + npy_intp i; + npy_intp elsize; + npy_intp elcount; +} NpyArrContext; + +//#define PRINTMARK() fprintf(stderr, "%s: MARK(%d)\n", __FILE__, __LINE__) +#define PRINTMARK() + +// Numpy handling based on numpy internal code, specifically the function +// PyArray_FromIter. 
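The comment above names numpy's internal `PyArray_FromIter` as the model for the decoder's buffer management. The growth schedule it borrows (used later in `Object_npyArrayAddItem`) is roughly 50% overallocation, the same strategy as CPython's list. A minimal sketch of that schedule, in Python for illustration:

```python
def grow(elcount):
    # Mirrors the C expression (i >> 1) + (i < 4 ? 4 : 2) + i:
    # ~50% overallocation, same strategy as CPython's list resizing.
    return (elcount >> 1) + (4 if elcount < 4 else 2) + elcount

sizes = [0]
for _ in range(7):
    sizes.append(grow(sizes[-1]))
# sizes is now [0, 4, 8, 14, 23, 36, 56, 86], matching the
# "0, 4, 8, 14, 23, 36, 56, 86 ..." sequence quoted in the C comment
```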
+ +// numpy related functions are inter-dependent so declare them all here, +// to ensure the compiler catches any errors + +// standard numpy array handling +JSOBJ Object_npyNewArray(void* decoder); +JSOBJ Object_npyEndArray(JSOBJ obj); +int Object_npyArrayAddItem(JSOBJ obj, JSOBJ value); + +// for more complex dtypes (object and string) fill a standard Python list +// and convert to a numpy array when done. +JSOBJ Object_npyNewArrayList(void* decoder); +JSOBJ Object_npyEndArrayList(JSOBJ obj); +int Object_npyArrayListAddItem(JSOBJ obj, JSOBJ value); + +// labelled support, encode keys and values of JS object into separate numpy +// arrays +JSOBJ Object_npyNewObject(void* decoder); +JSOBJ Object_npyEndObject(JSOBJ obj); +int Object_npyObjectAddKey(JSOBJ obj, JSOBJ name, JSOBJ value); + + +// free the numpy context buffer +void Npy_releaseContext(NpyArrContext* npyarr) +{ + PRINTMARK(); + if (npyarr) + { + if (npyarr->shape.ptr) + { + PyObject_Free(npyarr->shape.ptr); + } + if (npyarr->dec) + { + npyarr->dec->npyarr = NULL; + npyarr->dec->curdim = 0; + } + Py_XDECREF(npyarr->labels[0]); + Py_XDECREF(npyarr->labels[1]); + Py_XDECREF(npyarr->ret); + PyObject_Free(npyarr); + } +} + +JSOBJ Object_npyNewArray(void* _decoder) +{ + NpyArrContext* npyarr; + PyObjectDecoder* decoder = (PyObjectDecoder*) _decoder; + PRINTMARK(); + if (decoder->curdim <= 0) + { + // start of array - initialise the context buffer + npyarr = decoder->npyarr = PyObject_Malloc(sizeof(NpyArrContext)); + decoder->npyarr_addr = npyarr; + + if (!npyarr) + { + PyErr_NoMemory(); + return NULL; + } + + npyarr->dec = decoder; + npyarr->labels[0] = npyarr->labels[1] = NULL; + + npyarr->shape.ptr = PyObject_Malloc(sizeof(npy_intp)*NPY_MAXDIMS); + npyarr->shape.len = 1; + npyarr->ret = NULL; + + npyarr->elsize = 0; + npyarr->elcount = 4; + npyarr->i = 0; + } + else + { + // starting a new dimension continue the current array (and reshape after) + npyarr = (NpyArrContext*) decoder->npyarr; + if 
(decoder->curdim >= npyarr->shape.len) + { + npyarr->shape.len++; + } + } + + npyarr->shape.ptr[decoder->curdim] = 0; + decoder->curdim++; + return npyarr; +} + +PyObject* Npy_returnLabelled(NpyArrContext* npyarr) +{ + PyObject* ret = npyarr->ret; + npy_intp i; + + if (npyarr->labels[0] || npyarr->labels[1]) + { + // finished decoding, build tuple with values and labels + ret = PyTuple_New(npyarr->shape.len+1); + for (i = 0; i < npyarr->shape.len; i++) + { + if (npyarr->labels[i]) + { + PyTuple_SET_ITEM(ret, i+1, npyarr->labels[i]); + npyarr->labels[i] = NULL; + } + else + { + Py_INCREF(Py_None); + PyTuple_SET_ITEM(ret, i+1, Py_None); + } + } + PyTuple_SET_ITEM(ret, 0, npyarr->ret); + } + + return ret; +} + +JSOBJ Object_npyEndArray(JSOBJ obj) +{ + PyObject *ret; + char* new_data; + NpyArrContext* npyarr = (NpyArrContext*) obj; + int emptyType = NPY_DEFAULT_TYPE; + npy_intp i; + PRINTMARK(); + if (!npyarr) + { + return NULL; + } + + ret = npyarr->ret; + i = npyarr->i; + + npyarr->dec->curdim--; + + if (i == 0 || !npyarr->ret) { + // empty array would not have been initialised so do it now. 
+ if (npyarr->dec->dtype) + { + emptyType = npyarr->dec->dtype->type_num; + } + npyarr->ret = ret = PyArray_EMPTY(npyarr->shape.len, npyarr->shape.ptr, emptyType, 0); + } + else if (npyarr->dec->curdim <= 0) + { + // realloc to final size + new_data = PyDataMem_RENEW(PyArray_DATA(ret), i * npyarr->elsize); + if (new_data == NULL) { + PyErr_NoMemory(); + Npy_releaseContext(npyarr); + return NULL; + } + ((PyArrayObject*) ret)->data = (void*) new_data; + // PyArray_BYTES(ret) = new_data; + } + + if (npyarr->dec->curdim <= 0) + { + // finished decoding array, reshape if necessary + if (npyarr->shape.len > 1) + { + npyarr->ret = PyArray_Newshape((PyArrayObject*) ret, &npyarr->shape, NPY_ANYORDER); + Py_DECREF(ret); + } + + ret = Npy_returnLabelled(npyarr); + + npyarr->ret = NULL; + Npy_releaseContext(npyarr); + } + + return ret; +} + +int Object_npyArrayAddItem(JSOBJ obj, JSOBJ value) +{ + PyObject* type; + PyArray_Descr* dtype; + npy_intp i; + char *new_data, *item; + NpyArrContext* npyarr = (NpyArrContext*) obj; + PRINTMARK(); + if (!npyarr) + { + return 0; + } + + i = npyarr->i; + + npyarr->shape.ptr[npyarr->dec->curdim-1]++; + + if (PyArray_Check((PyObject*)value)) + { + // multidimensional array, keep decoding values. + return 1; + } + + if (!npyarr->ret) + { + // Array not initialised yet. + // We do it here so we can 'sniff' the data type if none was provided + if (!npyarr->dec->dtype) + { + type = PyObject_Type(value); + if(!PyArray_DescrConverter(type, &dtype)) + { + Py_DECREF(type); + goto fail; + } + Py_INCREF(dtype); + Py_DECREF(type); + } + else + { + dtype = PyArray_DescrNew(npyarr->dec->dtype); + } + + // If it's an object or string then fill a Python list and subsequently + // convert. Otherwise we would need to somehow mess about with + // reference counts when renewing memory. 
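The comment above explains why refcounted (object) and variable-length dtypes are routed through a plain Python list and converted once at the end, instead of being grown in place with realloc. A sketch of the two resulting paths, using numpy only and hypothetical values:

```python
import numpy as np

# Fixed-size dtypes can live in a raw buffer that is realloc'd as it
# grows; object/string elements hold Python references, so the decoder
# collects them in a list first and converts in a single step.
items = [1.0, 2.5, 4.0]          # fixed-size: direct-buffer path
strings = ["a", "bc", "def"]     # variable-length: list-then-convert path

direct = np.array(items, dtype=np.float64)
converted = np.array(strings, dtype=object)
```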
+ npyarr->elsize = dtype->elsize; + if (PyDataType_REFCHK(dtype) || npyarr->elsize == 0) + { + Py_XDECREF(dtype); + + if (npyarr->dec->curdim > 1) + { + PyErr_SetString(PyExc_ValueError, "Cannot decode multidimensional arrays with variable length elements to numpy"); + goto fail; + } + npyarr->elcount = 0; + npyarr->ret = PyList_New(0); + if (!npyarr->ret) + { + goto fail; + } + ((JSONObjectDecoder*)npyarr->dec)->newArray = Object_npyNewArrayList; + ((JSONObjectDecoder*)npyarr->dec)->arrayAddItem = Object_npyArrayListAddItem; + ((JSONObjectDecoder*)npyarr->dec)->endArray = Object_npyEndArrayList; + return Object_npyArrayListAddItem(obj, value); + } + + npyarr->ret = PyArray_NewFromDescr(&PyArray_Type, dtype, 1, + &npyarr->elcount, NULL,NULL, 0, NULL); + + if (!npyarr->ret) + { + goto fail; + } + } + + if (i >= npyarr->elcount) { + // Grow PyArray_DATA(ret): + // this is similar for the strategy for PyListObject, but we use + // 50% overallocation => 0, 4, 8, 14, 23, 36, 56, 86 ... + if (npyarr->elsize == 0) + { + PyErr_SetString(PyExc_ValueError, "Cannot decode multidimensional arrays with variable length elements to numpy"); + goto fail; + } + + npyarr->elcount = (i >> 1) + (i < 4 ? 
4 : 2) + i; + if (npyarr->elcount <= NPY_MAX_INTP/npyarr->elsize) { + new_data = PyDataMem_RENEW(PyArray_DATA(npyarr->ret), npyarr->elcount * npyarr->elsize); + } + else { + PyErr_NoMemory(); + goto fail; + } + ((PyArrayObject*) npyarr->ret)->data = (void*) new_data; + + // PyArray_BYTES(npyarr->ret) = new_data; + } + + PyArray_DIMS(npyarr->ret)[0] = i + 1; + + if ((item = PyArray_GETPTR1(npyarr->ret, i)) == NULL + || PyArray_SETITEM(npyarr->ret, item, value) == -1) { + goto fail; + } + + Py_DECREF( (PyObject *) value); + npyarr->i++; + return 1; + +fail: + + Npy_releaseContext(npyarr); + return 0; +} + +JSOBJ Object_npyNewArrayList(void* _decoder) +{ + PyObjectDecoder* decoder = (PyObjectDecoder*) _decoder; + PRINTMARK(); + PyErr_SetString(PyExc_ValueError, "nesting not supported for object or variable length dtypes"); + Npy_releaseContext(decoder->npyarr); + return NULL; +} + +JSOBJ Object_npyEndArrayList(JSOBJ obj) +{ + PyObject *list, *ret; + NpyArrContext* npyarr = (NpyArrContext*) obj; + PRINTMARK(); + if (!npyarr) + { + return NULL; + } + + // convert decoded list to numpy array + list = (PyObject *) npyarr->ret; + npyarr->ret = PyArray_FROM_O(list); + + ret = Npy_returnLabelled(npyarr); + npyarr->ret = list; + + ((JSONObjectDecoder*)npyarr->dec)->newArray = Object_npyNewArray; + ((JSONObjectDecoder*)npyarr->dec)->arrayAddItem = Object_npyArrayAddItem; + ((JSONObjectDecoder*)npyarr->dec)->endArray = Object_npyEndArray; + Npy_releaseContext(npyarr); + return ret; +} + +int Object_npyArrayListAddItem(JSOBJ obj, JSOBJ value) +{ + NpyArrContext* npyarr = (NpyArrContext*) obj; + PRINTMARK(); + if (!npyarr) + { + return 0; + } + PyList_Append((PyObject*) npyarr->ret, value); + Py_DECREF( (PyObject *) value); + npyarr->elcount++; + return 1; +} + + +JSOBJ Object_npyNewObject(void* _decoder) +{ + PyObjectDecoder* decoder = (PyObjectDecoder*) _decoder; + PRINTMARK(); + if (decoder->curdim > 1) + { + PyErr_SetString(PyExc_ValueError, "labels only supported up to 2 
dimensions"); + return NULL; + } + + return ((JSONObjectDecoder*)decoder)->newArray(decoder); +} + +JSOBJ Object_npyEndObject(JSOBJ obj) +{ + PyObject *list; + npy_intp labelidx; + NpyArrContext* npyarr = (NpyArrContext*) obj; + PRINTMARK(); + if (!npyarr) + { + return NULL; + } + + labelidx = npyarr->dec->curdim-1; + + list = npyarr->labels[labelidx]; + if (list) + { + npyarr->labels[labelidx] = PyArray_FROM_O(list); + Py_DECREF(list); + } + + return (PyObject*) ((JSONObjectDecoder*)npyarr->dec)->endArray(obj); +} + +int Object_npyObjectAddKey(JSOBJ obj, JSOBJ name, JSOBJ value) +{ + PyObject *label; + npy_intp labelidx; + // add key to label array, value to values array + NpyArrContext* npyarr = (NpyArrContext*) obj; + PRINTMARK(); + if (!npyarr) + { + return 0; + } + + label = (PyObject*) name; + labelidx = npyarr->dec->curdim-1; + + if (!npyarr->labels[labelidx]) + { + npyarr->labels[labelidx] = PyList_New(0); + } + + // only fill label array once, assumes all column labels are the same + // for 2-dimensional arrays. 
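Per the comment above, the label list is only appended to while it is no longer than `elcount`, i.e. column labels are taken from the first row alone, so every row must share the same keys. A pure-Python sketch (hypothetical data) of the `(values, row labels, column labels)` layout that labelled decoding targets:

```python
# Hypothetical 2-D labelled input: outer keys become row labels, and the
# first row's keys become the column labels assumed for every other row.
data = {"r1": {"a": 1, "b": 2}, "r2": {"a": 3, "b": 4}}

row_labels = list(data)
col_labels = list(next(iter(data.values())))  # taken from the first row only
values = [[row[c] for c in col_labels] for row in data.values()]
```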
+ if (PyList_GET_SIZE(npyarr->labels[labelidx]) <= npyarr->elcount) + { + PyList_Append(npyarr->labels[labelidx], label); + } + + if(((JSONObjectDecoder*)npyarr->dec)->arrayAddItem(obj, value)) + { + Py_DECREF(label); + return 1; + } + return 0; +} + +int Object_objectAddKey(JSOBJ obj, JSOBJ name, JSOBJ value) +{ + PyDict_SetItem (obj, name, value); + Py_DECREF( (PyObject *) name); + Py_DECREF( (PyObject *) value); + return 1; +} + +int Object_arrayAddItem(JSOBJ obj, JSOBJ value) +{ + PyList_Append(obj, value); + Py_DECREF( (PyObject *) value); + return 1; +} + +JSOBJ Object_newString(wchar_t *start, wchar_t *end) +{ + return PyUnicode_FromWideChar (start, (end - start)); +} + +JSOBJ Object_newTrue(void) +{ + Py_RETURN_TRUE; +} + +JSOBJ Object_newFalse(void) +{ + Py_RETURN_FALSE; +} + +JSOBJ Object_newNull(void) +{ + Py_RETURN_NONE; +} + +JSOBJ Object_newObject(void* decoder) +{ + return PyDict_New(); +} + +JSOBJ Object_endObject(JSOBJ obj) +{ + return obj; +} + +JSOBJ Object_newArray(void* decoder) +{ + return PyList_New(0); +} + +JSOBJ Object_endArray(JSOBJ obj) +{ + return obj; +} + +JSOBJ Object_newInteger(JSINT32 value) +{ + return PyInt_FromLong( (long) value); +} + +JSOBJ Object_newLong(JSINT64 value) +{ + return PyLong_FromLongLong (value); +} + +JSOBJ Object_newDouble(double value) +{ + return PyFloat_FromDouble(value); +} + +static void Object_releaseObject(JSOBJ obj, void* _decoder) +{ + PyObjectDecoder* decoder = (PyObjectDecoder*) _decoder; + if (obj != decoder->npyarr_addr) + { + Py_XDECREF( ((PyObject *)obj)); + } +} + + +PyObject* JSONToObj(PyObject* self, PyObject *args, PyObject *kwargs) +{ + PyObject *ret; + PyObject *sarg; + JSONObjectDecoder *decoder; + PyObjectDecoder pyDecoder; + PyArray_Descr *dtype = NULL; + static char *kwlist[] = { "obj", "numpy", "labelled", "dtype", NULL}; + int numpy = 0, labelled = 0, decref = 0; + // PRINTMARK(); + + JSONObjectDecoder dec = { + Object_newString, + Object_objectAddKey, + Object_arrayAddItem, + 
Object_newTrue, + Object_newFalse, + Object_newNull, + Object_newObject, + Object_endObject, + Object_newArray, + Object_endArray, + Object_newInteger, + Object_newLong, + Object_newDouble, + Object_releaseObject, + PyObject_Malloc, + PyObject_Free, + PyObject_Realloc, + }; + pyDecoder.dec = dec; + pyDecoder.curdim = 0; + pyDecoder.npyarr = NULL; + pyDecoder.npyarr_addr = NULL; + + decoder = (JSONObjectDecoder*) &pyDecoder; + + if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|iiO&", kwlist, &sarg, &numpy, &labelled, PyArray_DescrConverter2, &dtype)) + { + Npy_releaseContext(pyDecoder.npyarr); + return NULL; + } + + if (PyUnicode_Check(sarg)) + { + sarg = PyUnicode_AsUTF8String(sarg); + if (sarg == NULL) + { + //Exception raised above us by codec according to docs + return NULL; + } + decref = 1; + } + else + if (!PyString_Check(sarg)) + { + PyErr_Format(PyExc_TypeError, "Expected String or Unicode"); + return NULL; + } + + if (numpy) + { + pyDecoder.dtype = dtype; + decoder->newArray = Object_npyNewArray; + decoder->endArray = Object_npyEndArray; + decoder->arrayAddItem = Object_npyArrayAddItem; + + if (labelled) + { + decoder->newObject = Object_npyNewObject; + decoder->endObject = Object_npyEndObject; + decoder->objectAddKey = Object_npyObjectAddKey; + } + } + + decoder->errorStr = NULL; + decoder->errorOffset = NULL; + + PRINTMARK(); + ret = JSON_DecodeObject(decoder, PyString_AS_STRING(sarg), PyString_GET_SIZE(sarg)); + PRINTMARK(); + + if (decref) + { + Py_DECREF(sarg); + } + + if (PyErr_Occurred()) + { + return NULL; + } + + if (decoder->errorStr) + { + /*FIXME: It's possible to give a much nicer error message here with actual failing element in input etc*/ + PyErr_Format (PyExc_ValueError, "%s", decoder->errorStr); + Py_XDECREF( (PyObject *) ret); + Npy_releaseContext(pyDecoder.npyarr); + + return NULL; + } + + return ret; +} + +PyObject* JSONFileToObj(PyObject* self, PyObject *args, PyObject *kwargs) +{ + PyObject *file; + PyObject *read; + PyObject 
*string; + PyObject *result; + PyObject *argtuple; + + if (!PyArg_ParseTuple (args, "O", &file)) { + return NULL; + } + + if (!PyObject_HasAttrString (file, "read")) + { + PyErr_Format (PyExc_TypeError, "expected file"); + return NULL; + } + + read = PyObject_GetAttrString (file, "read"); + + if (!PyCallable_Check (read)) { + Py_XDECREF(read); + PyErr_Format (PyExc_TypeError, "expected file"); + return NULL; + } + + string = PyObject_CallObject (read, NULL); + Py_XDECREF(read); + + if (string == NULL) + { + return NULL; + } + + argtuple = PyTuple_Pack(1, string); + + result = JSONToObj (self, argtuple, kwargs); + Py_XDECREF(string); + Py_DECREF(argtuple); + + if (result == NULL) { + return NULL; + } + + return result; +} + diff --git a/pandas/src/ujson/python/objToJSON.c b/pandas/src/ujson/python/objToJSON.c new file mode 100644 index 0000000000000..ce8bdf3721f5e --- /dev/null +++ b/pandas/src/ujson/python/objToJSON.c @@ -0,0 +1,1701 @@ +#define PY_ARRAY_UNIQUE_SYMBOL UJSON_NUMPY + +#include "py_defines.h" +#include <numpy/arrayobject.h> +#include <numpy/npy_math.h> +#include <np_datetime.h> +#include <stdio.h> +#include <datetime.h> +#include <ultrajson.h> + +#define NPY_JSON_BUFSIZE 32768 + +static PyObject* cls_dataframe; +static PyObject* cls_series; +static PyObject* cls_index; + +typedef void *(*PFN_PyTypeToJSON)(JSOBJ obj, JSONTypeContext *ti, void *outValue, size_t *_outLen); + + +#if (PY_VERSION_HEX < 0x02050000) +typedef ssize_t Py_ssize_t; +#endif + +typedef struct __NpyArrContext +{ + PyObject *array; + char* dataptr; + int was_datetime64; + int curdim; // current dimension in array's order + int stridedim; // dimension we are striding over + int inc; // stride dimension increment (+/- 1) + npy_intp dim; + npy_intp stride; + npy_intp ndim; + npy_intp index[NPY_MAXDIMS]; + PyArray_GetItemFunc* getitem; + + char** rowLabels; + char** columnLabels; +} NpyArrContext; + +typedef struct __TypeContext +{ + JSPFN_ITERBEGIN iterBegin; + JSPFN_ITEREND iterEnd; + 
JSPFN_ITERNEXT iterNext;
+    JSPFN_ITERGETNAME iterGetName;
+    JSPFN_ITERGETVALUE iterGetValue;
+    PFN_PyTypeToJSON PyTypeToJSON;
+    PyObject *newObj;
+    PyObject *dictObj;
+    Py_ssize_t index;
+    Py_ssize_t size;
+    PyObject *itemValue;
+    PyObject *itemName;
+    PyObject *attrList;
+    char *citemName;
+
+    JSINT64 longValue;
+
+    NpyArrContext *npyarr;
+    int transpose;
+    char** rowLabels;
+    char** columnLabels;
+    npy_intp rowLabelsLen;
+    npy_intp columnLabelsLen;
+
+} TypeContext;
+
+typedef struct __PyObjectEncoder
+{
+    JSONObjectEncoder enc;
+
+    // pass through the NpyArrContext when encoding multi-dimensional arrays
+    NpyArrContext* npyCtxtPassthru;
+
+    // output format style for pandas data types
+    int outputFormat;
+    int originalOutputFormat;
+} PyObjectEncoder;
+
+#define GET_TC(__ptrtc) ((TypeContext *)((__ptrtc)->prv))
+
+struct PyDictIterState
+{
+    PyObject *keys;
+    size_t i;
+    size_t sz;
+};
+
+enum PANDAS_FORMAT
+{
+    SPLIT,
+    RECORDS,
+    INDEX,
+    COLUMNS,
+    VALUES
+};
+
+//#define PRINTMARK() fprintf(stderr, "%s: MARK(%d)\n", __FILE__, __LINE__)
+#define PRINTMARK()
+
+void initObjToJSON(void)
+{
+    PyObject *mod_frame;
+    PyDateTime_IMPORT;
+
+    mod_frame = PyImport_ImportModule("pandas.core.frame");
+    if (mod_frame)
+    {
+        cls_dataframe = PyObject_GetAttrString(mod_frame, "DataFrame");
+        cls_index = PyObject_GetAttrString(mod_frame, "Index");
+        cls_series = PyObject_GetAttrString(mod_frame, "Series");
+        Py_DECREF(mod_frame);
+    }
+
+    /* Initialise numpy API */
+    import_array();
+}
+
+static void *PyIntToINT32(JSOBJ _obj, JSONTypeContext *tc, void *outValue, size_t *_outLen)
+{
+    PyObject *obj = (PyObject *) _obj;
+    *((JSINT32 *) outValue) = PyInt_AS_LONG (obj);
+    return NULL;
+}
+
+static void *PyIntToINT64(JSOBJ _obj, JSONTypeContext *tc, void *outValue, size_t *_outLen)
+{
+    PyObject *obj = (PyObject *) _obj;
+    *((JSINT64 *) outValue) = PyInt_AS_LONG (obj);
+    return NULL;
+}
+
+static void *PyLongToINT64(JSOBJ _obj, JSONTypeContext *tc, void *outValue, size_t *_outLen)
+{
+    *((JSINT64 *) outValue) = GET_TC(tc)->longValue;
+    return NULL;
+}
+
+static void *NpyFloatToDOUBLE(JSOBJ _obj, JSONTypeContext *tc, void *outValue, size_t *_outLen)
+{
+    PyObject *obj = (PyObject *) _obj;
+    PyArray_CastScalarToCtype(obj, outValue, PyArray_DescrFromType(NPY_DOUBLE));
+    return NULL;
+}
+
+static void *PyFloatToDOUBLE(JSOBJ _obj, JSONTypeContext *tc, void *outValue, size_t *_outLen)
+{
+    PyObject *obj = (PyObject *) _obj;
+    *((double *) outValue) = PyFloat_AS_DOUBLE (obj);
+    return NULL;
+}
+
+static void *PyStringToUTF8(JSOBJ _obj, JSONTypeContext *tc, void *outValue, size_t *_outLen)
+{
+    PyObject *obj = (PyObject *) _obj;
+    *_outLen = PyString_GET_SIZE(obj);
+    return PyString_AS_STRING(obj);
+}
+
+static void *PyUnicodeToUTF8(JSOBJ _obj, JSONTypeContext *tc, void *outValue, size_t *_outLen)
+{
+    PyObject *obj = (PyObject *) _obj;
+    PyObject *newObj = PyUnicode_AsUTF8String (obj);
+
+    GET_TC(tc)->newObj = newObj;
+
+    *_outLen = PyString_GET_SIZE(newObj);
+    return PyString_AS_STRING(newObj);
+}
+
+static void *NpyDateTimeToINT64(JSOBJ _obj, JSONTypeContext *tc, void *outValue, size_t *_outLen)
+{
+    PyObject *obj = (PyObject *) _obj;
+    PyArray_CastScalarToCtype(obj, outValue, PyArray_DescrFromType(NPY_DATETIME));
+    return NULL;
+}
+
+static void *PyDateTimeToINT64(JSOBJ _obj, JSONTypeContext *tc, void *outValue, size_t *_outLen)
+{
+    pandas_datetimestruct dts;
+    PyObject *obj = (PyObject *) _obj;
+
+    dts.year = PyDateTime_GET_YEAR(obj);
+    dts.month = PyDateTime_GET_MONTH(obj);
+    dts.day = PyDateTime_GET_DAY(obj);
+    dts.hour = PyDateTime_DATE_GET_HOUR(obj);
+    dts.min = PyDateTime_DATE_GET_MINUTE(obj);
+    dts.sec = PyDateTime_DATE_GET_SECOND(obj);
+    dts.us = PyDateTime_DATE_GET_MICROSECOND(obj);
+    dts.ps = dts.as = 0;
+    *((JSINT64*)outValue) = (JSINT64) pandas_datetimestruct_to_datetime(PANDAS_FR_ns, &dts);
+    return NULL;
+}
+
+static void *PyDateToINT64(JSOBJ _obj, JSONTypeContext *tc, void *outValue, size_t *_outLen)
+{
+    pandas_datetimestruct dts;
+    PyObject *obj = (PyObject *) _obj;
+
+    dts.year = PyDateTime_GET_YEAR(obj);
+    dts.month = PyDateTime_GET_MONTH(obj);
+    dts.day = PyDateTime_GET_DAY(obj);
+    dts.hour = dts.min = dts.sec = dts.ps = dts.as = 0;
+    *((JSINT64*)outValue) = (JSINT64) pandas_datetimestruct_to_datetime(PANDAS_FR_ns, &dts);
+    return NULL;
+}
+
+//=============================================================================
+// Numpy array iteration functions
+//=============================================================================
+int NpyArr_iterNextNone(JSOBJ _obj, JSONTypeContext *tc)
+{
+    return 0;
+}
+
+void NpyArr_iterBegin(JSOBJ _obj, JSONTypeContext *tc)
+{
+    PyArrayObject *obj;
+    PyArray_Descr *dtype;
+    NpyArrContext *npyarr;
+
+    if (GET_TC(tc)->newObj)
+    {
+        obj = (PyArrayObject *) GET_TC(tc)->newObj;
+    }
+    else
+    {
+        obj = (PyArrayObject *) _obj;
+    }
+
+    if (PyArray_SIZE(obj) > 0)
+    {
+        PRINTMARK();
+        npyarr = PyObject_Malloc(sizeof(NpyArrContext));
+        GET_TC(tc)->npyarr = npyarr;
+
+        if (!npyarr)
+        {
+            PyErr_NoMemory();
+            GET_TC(tc)->iterNext = NpyArr_iterNextNone;
+            return;
+        }
+
+        // uber hack to support datetime64[ns] arrays
+        if (PyArray_DESCR(obj)->type_num == NPY_DATETIME) {
+            npyarr->was_datetime64 = 1;
+            dtype = PyArray_DescrFromType(NPY_INT64);
+            obj = (PyArrayObject *) PyArray_CastToType(obj, dtype, 0);
+        } else {
+            npyarr->was_datetime64 = 0;
+        }
+
+        npyarr->array = (PyObject*) obj;
+        npyarr->getitem = (PyArray_GetItemFunc*) PyArray_DESCR(obj)->f->getitem;
+        npyarr->dataptr = PyArray_DATA(obj);
+        npyarr->ndim = PyArray_NDIM(obj) - 1;
+        npyarr->curdim = 0;
+
+        if (GET_TC(tc)->transpose)
+        {
+            npyarr->dim = PyArray_DIM(obj, npyarr->ndim);
+            npyarr->stride = PyArray_STRIDE(obj, npyarr->ndim);
+            npyarr->stridedim = npyarr->ndim;
+            npyarr->index[npyarr->ndim] = 0;
+            npyarr->inc = -1;
+        }
+        else
+        {
+            npyarr->dim = PyArray_DIM(obj, 0);
+            npyarr->stride = PyArray_STRIDE(obj, 0);
+            npyarr->stridedim = 0;
+            npyarr->index[0] = 0;
+            npyarr->inc = 1;
+        }
+
+        npyarr->columnLabels = GET_TC(tc)->columnLabels;
+        npyarr->rowLabels = GET_TC(tc)->rowLabels;
+    }
+    else
+    {
+        GET_TC(tc)->iterNext = NpyArr_iterNextNone;
+    }
+    PRINTMARK();
+}
+
+void NpyArr_iterEnd(JSOBJ obj, JSONTypeContext *tc)
+{
+    NpyArrContext *npyarr = GET_TC(tc)->npyarr;
+
+    if (npyarr)
+    {
+        if (npyarr->was_datetime64) {
+            Py_XDECREF(npyarr->array);
+        }
+
+        if (GET_TC(tc)->itemValue != npyarr->array)
+        {
+            Py_XDECREF(GET_TC(tc)->itemValue);
+        }
+        GET_TC(tc)->itemValue = NULL;
+
+        PyObject_Free(npyarr);
+    }
+    PRINTMARK();
+}
+
+void NpyArrPassThru_iterBegin(JSOBJ obj, JSONTypeContext *tc)
+{
+    PRINTMARK();
+}
+
+void NpyArrPassThru_iterEnd(JSOBJ obj, JSONTypeContext *tc)
+{
+    NpyArrContext* npyarr;
+    PRINTMARK();
+    // finished this dimension, reset the data pointer
+    npyarr = GET_TC(tc)->npyarr;
+    npyarr->curdim--;
+    npyarr->dataptr -= npyarr->stride * npyarr->index[npyarr->stridedim];
+    npyarr->stridedim -= npyarr->inc;
+    npyarr->dim = PyArray_DIM(npyarr->array, npyarr->stridedim);
+    npyarr->stride = PyArray_STRIDE(npyarr->array, npyarr->stridedim);
+    npyarr->dataptr += npyarr->stride;
+
+    if (GET_TC(tc)->itemValue != npyarr->array)
+    {
+        Py_XDECREF(GET_TC(tc)->itemValue);
+        GET_TC(tc)->itemValue = NULL;
+    }
+}
+
+int NpyArr_iterNextItem(JSOBJ _obj, JSONTypeContext *tc)
+{
+    NpyArrContext* npyarr;
+    PRINTMARK();
+    npyarr = GET_TC(tc)->npyarr;
+
+    if (GET_TC(tc)->itemValue != npyarr->array)
+    {
+        Py_XDECREF(GET_TC(tc)->itemValue);
+        GET_TC(tc)->itemValue = NULL;
+    }
+
+    if (npyarr->index[npyarr->stridedim] >= npyarr->dim)
+    {
+        return 0;
+    }
+
+    GET_TC(tc)->itemValue = npyarr->getitem(npyarr->dataptr, npyarr->array);
+
+    npyarr->dataptr += npyarr->stride;
+    npyarr->index[npyarr->stridedim]++;
+    return 1;
+}
+
+int NpyArr_iterNext(JSOBJ _obj, JSONTypeContext *tc)
+{
+    NpyArrContext* npyarr;
+    PRINTMARK();
+    npyarr = GET_TC(tc)->npyarr;
+
+    if (npyarr->curdim >= npyarr->ndim || npyarr->index[npyarr->stridedim] >= npyarr->dim)
+    {
+        // innermost dimension, start retrieving item values
+        GET_TC(tc)->iterNext = NpyArr_iterNextItem;
+        return NpyArr_iterNextItem(_obj, tc);
+    }
+
+    // dig a dimension deeper
+    npyarr->index[npyarr->stridedim]++;
+
+    npyarr->curdim++;
+    npyarr->stridedim += npyarr->inc;
+    npyarr->dim = PyArray_DIM(npyarr->array, npyarr->stridedim);
+    npyarr->stride = PyArray_STRIDE(npyarr->array, npyarr->stridedim);
+    npyarr->index[npyarr->stridedim] = 0;
+
+    ((PyObjectEncoder*) tc->encoder)->npyCtxtPassthru = npyarr;
+    GET_TC(tc)->itemValue = npyarr->array;
+    return 1;
+}
+
+JSOBJ NpyArr_iterGetValue(JSOBJ obj, JSONTypeContext *tc)
+{
+    PRINTMARK();
+    return GET_TC(tc)->itemValue;
+}
+
+char *NpyArr_iterGetName(JSOBJ obj, JSONTypeContext *tc, size_t *outLen)
+{
+    NpyArrContext* npyarr;
+    npy_intp idx;
+    PRINTMARK();
+    npyarr = GET_TC(tc)->npyarr;
+    if (GET_TC(tc)->iterNext == NpyArr_iterNextItem)
+    {
+        idx = npyarr->index[npyarr->stridedim] - 1;
+        *outLen = strlen(npyarr->columnLabels[idx]);
+        return npyarr->columnLabels[idx];
+    }
+    else
+    {
+        idx = npyarr->index[npyarr->stridedim - npyarr->inc] - 1;
+        *outLen = strlen(npyarr->rowLabels[idx]);
+        return npyarr->rowLabels[idx];
+    }
+}
+
+//=============================================================================
+// Tuple iteration functions
+// itemValue is borrowed reference, no ref counting
+//=============================================================================
+void Tuple_iterBegin(JSOBJ obj, JSONTypeContext *tc)
+{
+    GET_TC(tc)->index = 0;
+    GET_TC(tc)->size = PyTuple_GET_SIZE( (PyObject *) obj);
+    GET_TC(tc)->itemValue = NULL;
+}
+
+int Tuple_iterNext(JSOBJ obj, JSONTypeContext *tc)
+{
+    PyObject *item;
+
+    if (GET_TC(tc)->index >= GET_TC(tc)->size)
+    {
+        return 0;
+    }
+
+    item = PyTuple_GET_ITEM (obj, GET_TC(tc)->index);
+
+    GET_TC(tc)->itemValue = item;
+    GET_TC(tc)->index ++;
+    return 1;
+}
+
+void Tuple_iterEnd(JSOBJ obj, JSONTypeContext *tc)
+{
+}
+
+JSOBJ Tuple_iterGetValue(JSOBJ obj, JSONTypeContext *tc)
+{
+    return GET_TC(tc)->itemValue;
+}
+
+char *Tuple_iterGetName(JSOBJ obj, JSONTypeContext *tc, size_t *outLen)
+{
+    return NULL;
+}
+
+//=============================================================================
+// Dir iteration functions
+// itemName ref is borrowed from PyObject_Dir (attrList). No refcount
+// itemValue ref is from PyObject_GetAttr. Ref counted
+//=============================================================================
+void Dir_iterBegin(JSOBJ obj, JSONTypeContext *tc)
+{
+    GET_TC(tc)->attrList = PyObject_Dir(obj);
+    GET_TC(tc)->index = 0;
+    GET_TC(tc)->size = PyList_GET_SIZE(GET_TC(tc)->attrList);
+    PRINTMARK();
+}
+
+void Dir_iterEnd(JSOBJ obj, JSONTypeContext *tc)
+{
+    if (GET_TC(tc)->itemValue)
+    {
+        Py_DECREF(GET_TC(tc)->itemValue);
+        GET_TC(tc)->itemValue = NULL;
+    }
+
+    if (GET_TC(tc)->itemName)
+    {
+        Py_DECREF(GET_TC(tc)->itemName);
+        GET_TC(tc)->itemName = NULL;
+    }
+
+    Py_DECREF( (PyObject *) GET_TC(tc)->attrList);
+    PRINTMARK();
+}
+
+int Dir_iterNext(JSOBJ _obj, JSONTypeContext *tc)
+{
+    PyObject *obj = (PyObject *) _obj;
+    PyObject *itemValue = GET_TC(tc)->itemValue;
+    PyObject *itemName = GET_TC(tc)->itemName;
+    PyObject* attr;
+    PyObject* attrName;
+    char* attrStr;
+
+
+    if (itemValue)
+    {
+        Py_DECREF(GET_TC(tc)->itemValue);
+        GET_TC(tc)->itemValue = itemValue = NULL;
+    }
+
+    if (itemName)
+    {
+        Py_DECREF(GET_TC(tc)->itemName);
+        GET_TC(tc)->itemName = itemName = NULL;
+    }
+
+    for (; GET_TC(tc)->index < GET_TC(tc)->size; GET_TC(tc)->index ++)
+    {
+        attrName = PyList_GET_ITEM(GET_TC(tc)->attrList, GET_TC(tc)->index);
+#if PY_MAJOR_VERSION >= 3
+        attr = PyUnicode_AsUTF8String(attrName);
+#else
+        attr = attrName;
+        Py_INCREF(attr);
+#endif
+        attrStr = PyString_AS_STRING(attr);
+
+        if (attrStr[0] == '_')
+        {
+            PRINTMARK();
+            Py_DECREF(attr);
+            continue;
+        }
+
+        itemValue = PyObject_GetAttr(obj, attrName);
+        if (itemValue == NULL)
+        {
+            PyErr_Clear();
+            Py_DECREF(attr);
+            PRINTMARK();
+            continue;
+        }
+
+        if (PyCallable_Check(itemValue))
+        {
+            Py_DECREF(itemValue);
+            Py_DECREF(attr);
+            PRINTMARK();
+            continue;
+        }
+
+        PRINTMARK();
+        itemName = attr;
+        break;
+    }
+
+    if (itemName == NULL)
+    {
+        GET_TC(tc)->index = GET_TC(tc)->size;
+        GET_TC(tc)->itemValue = NULL;
+        return 0;
+    }
+
+    GET_TC(tc)->itemName = itemName;
+    GET_TC(tc)->itemValue = itemValue;
+    GET_TC(tc)->index ++;
+
+    PRINTMARK();
+    return 1;
+}
+
+
+
+JSOBJ Dir_iterGetValue(JSOBJ obj, JSONTypeContext *tc)
+{
+    PRINTMARK();
+    return GET_TC(tc)->itemValue;
+}
+
+char *Dir_iterGetName(JSOBJ obj, JSONTypeContext *tc, size_t *outLen)
+{
+    PRINTMARK();
+    *outLen = PyString_GET_SIZE(GET_TC(tc)->itemName);
+    return PyString_AS_STRING(GET_TC(tc)->itemName);
+}
+
+
+
+
+//=============================================================================
+// List iteration functions
+// itemValue is borrowed from object (which is list). No refcounting
+//=============================================================================
+void List_iterBegin(JSOBJ obj, JSONTypeContext *tc)
+{
+    GET_TC(tc)->index = 0;
+    GET_TC(tc)->size = PyList_GET_SIZE( (PyObject *) obj);
+}
+
+int List_iterNext(JSOBJ obj, JSONTypeContext *tc)
+{
+    if (GET_TC(tc)->index >= GET_TC(tc)->size)
+    {
+        PRINTMARK();
+        return 0;
+    }
+
+    GET_TC(tc)->itemValue = PyList_GET_ITEM (obj, GET_TC(tc)->index);
+    GET_TC(tc)->index ++;
+    return 1;
+}
+
+void List_iterEnd(JSOBJ obj, JSONTypeContext *tc)
+{
+}
+
+JSOBJ List_iterGetValue(JSOBJ obj, JSONTypeContext *tc)
+{
+    return GET_TC(tc)->itemValue;
+}
+
+char *List_iterGetName(JSOBJ obj, JSONTypeContext *tc, size_t *outLen)
+{
+    return NULL;
+}
+
+//=============================================================================
+// pandas Index iteration functions
+//=============================================================================
+void Index_iterBegin(JSOBJ obj, JSONTypeContext *tc)
+{
+    GET_TC(tc)->index = 0;
+    GET_TC(tc)->citemName = PyObject_Malloc(20 * sizeof(char));
+    if (!GET_TC(tc)->citemName)
+    {
+        PyErr_NoMemory();
+    }
+    PRINTMARK();
+}
+
+int Index_iterNext(JSOBJ obj, JSONTypeContext *tc)
+{
+    Py_ssize_t index;
+    if (!GET_TC(tc)->citemName)
+    {
+        return 0;
+    }
+
+    index = GET_TC(tc)->index;
+    Py_XDECREF(GET_TC(tc)->itemValue);
+    if (index == 0)
+    {
+        memcpy(GET_TC(tc)->citemName, "name", sizeof(char)*5);
+        GET_TC(tc)->itemValue = PyObject_GetAttrString(obj, "name");
+    }
+    else
+    if (index == 1)
+    {
+        memcpy(GET_TC(tc)->citemName, "data", sizeof(char)*5);
+        GET_TC(tc)->itemValue = PyObject_GetAttrString(obj, "values");
+    }
+    else
+    {
+        PRINTMARK();
+        return 0;
+    }
+
+    GET_TC(tc)->index++;
+    PRINTMARK();
+    return 1;
+}
+
+void Index_iterEnd(JSOBJ obj, JSONTypeContext *tc)
+{
+    if (GET_TC(tc)->citemName)
+    {
+        PyObject_Free(GET_TC(tc)->citemName);
+    }
+    PRINTMARK();
+}
+
+JSOBJ Index_iterGetValue(JSOBJ obj, JSONTypeContext *tc)
+{
+    return GET_TC(tc)->itemValue;
+}
+
+char *Index_iterGetName(JSOBJ obj, JSONTypeContext *tc, size_t *outLen)
+{
+    *outLen = strlen(GET_TC(tc)->citemName);
+    return GET_TC(tc)->citemName;
+}
+
+//=============================================================================
+// pandas Series iteration functions
+//=============================================================================
+void Series_iterBegin(JSOBJ obj, JSONTypeContext *tc)
+{
+    PyObjectEncoder* enc = (PyObjectEncoder*) tc->encoder;
+    GET_TC(tc)->index = 0;
+    GET_TC(tc)->citemName = PyObject_Malloc(20 * sizeof(char));
+    enc->outputFormat = VALUES; // for contained series
+    if (!GET_TC(tc)->citemName)
+    {
+        PyErr_NoMemory();
+    }
+    PRINTMARK();
+}
+
+int Series_iterNext(JSOBJ obj, JSONTypeContext *tc)
+{
+    Py_ssize_t index;
+    if (!GET_TC(tc)->citemName)
+    {
+        return 0;
+    }
+
+    index = GET_TC(tc)->index;
+    Py_XDECREF(GET_TC(tc)->itemValue);
+    if (index == 0)
+    {
+        memcpy(GET_TC(tc)->citemName, "name", sizeof(char)*5);
+        GET_TC(tc)->itemValue = PyObject_GetAttrString(obj, "name");
+    }
+    else
+    if (index == 1)
+    {
+        memcpy(GET_TC(tc)->citemName, "index", sizeof(char)*6);
+        GET_TC(tc)->itemValue = PyObject_GetAttrString(obj, "index");
+    }
+    else
+    if (index == 2)
+    {
+        memcpy(GET_TC(tc)->citemName, "data", sizeof(char)*5);
+        GET_TC(tc)->itemValue = PyObject_GetAttrString(obj, "values");
+    }
+    else
+    {
+        PRINTMARK();
+        return 0;
+    }
+
+    GET_TC(tc)->index++;
+    PRINTMARK();
+    return 1;
+}
+
+void Series_iterEnd(JSOBJ obj, JSONTypeContext *tc)
+{
+    PyObjectEncoder* enc = (PyObjectEncoder*) tc->encoder;
+    enc->outputFormat = enc->originalOutputFormat;
+    if (GET_TC(tc)->citemName)
+    {
+        PyObject_Free(GET_TC(tc)->citemName);
+    }
+    PRINTMARK();
+}
+
+JSOBJ Series_iterGetValue(JSOBJ obj, JSONTypeContext *tc)
+{
+    return GET_TC(tc)->itemValue;
+}
+
+char *Series_iterGetName(JSOBJ obj, JSONTypeContext *tc, size_t *outLen)
+{
+    *outLen = strlen(GET_TC(tc)->citemName);
+    return GET_TC(tc)->citemName;
+}
+
+//=============================================================================
+// pandas DataFrame iteration functions
+//=============================================================================
+void DataFrame_iterBegin(JSOBJ obj, JSONTypeContext *tc)
+{
+    PyObjectEncoder* enc = (PyObjectEncoder*) tc->encoder;
+    GET_TC(tc)->index = 0;
+    GET_TC(tc)->citemName = PyObject_Malloc(20 * sizeof(char));
+    enc->outputFormat = VALUES; // for contained series & index
+    if (!GET_TC(tc)->citemName)
+    {
+        PyErr_NoMemory();
+    }
+    PRINTMARK();
+}
+
+int DataFrame_iterNext(JSOBJ obj, JSONTypeContext *tc)
+{
+    Py_ssize_t index;
+    if (!GET_TC(tc)->citemName)
+    {
+        return 0;
+    }
+
+    index = GET_TC(tc)->index;
+    Py_XDECREF(GET_TC(tc)->itemValue);
+    if (index == 0)
+    {
+        memcpy(GET_TC(tc)->citemName, "columns", sizeof(char)*8);
+        GET_TC(tc)->itemValue = PyObject_GetAttrString(obj, "columns");
+    }
+    else
+    if (index == 1)
+    {
+        memcpy(GET_TC(tc)->citemName, "index", sizeof(char)*6);
+        GET_TC(tc)->itemValue = PyObject_GetAttrString(obj, "index");
+    }
+    else
+    if (index == 2)
+    {
+        memcpy(GET_TC(tc)->citemName, "data", sizeof(char)*5);
+        GET_TC(tc)->itemValue = PyObject_GetAttrString(obj, "values");
+    }
+    else
+    {
+        PRINTMARK();
+        return 0;
+    }
+
+    GET_TC(tc)->index++;
+    PRINTMARK();
+    return 1;
+}
+
+void DataFrame_iterEnd(JSOBJ obj, JSONTypeContext *tc)
+{
+    PyObjectEncoder* enc = (PyObjectEncoder*) tc->encoder;
+    enc->outputFormat = enc->originalOutputFormat;
+    if (GET_TC(tc)->citemName)
+    {
+        PyObject_Free(GET_TC(tc)->citemName);
+    }
+    PRINTMARK();
+}
+
+JSOBJ DataFrame_iterGetValue(JSOBJ obj, JSONTypeContext *tc)
+{
+    return GET_TC(tc)->itemValue;
+}
+
+char *DataFrame_iterGetName(JSOBJ obj, JSONTypeContext *tc, size_t *outLen)
+{
+    *outLen = strlen(GET_TC(tc)->citemName);
+    return GET_TC(tc)->citemName;
+}
+
+//=============================================================================
+// Dict iteration functions
+// itemName might converted to string (Python_Str). Do refCounting
+// itemValue is borrowed from object (which is dict). No refCounting
+//=============================================================================
+void Dict_iterBegin(JSOBJ obj, JSONTypeContext *tc)
+{
+    GET_TC(tc)->index = 0;
+    PRINTMARK();
+}
+
+int Dict_iterNext(JSOBJ obj, JSONTypeContext *tc)
+{
+#if PY_MAJOR_VERSION >= 3
+    PyObject* itemNameTmp;
+#endif
+
+    if (GET_TC(tc)->itemName)
+    {
+        Py_DECREF(GET_TC(tc)->itemName);
+        GET_TC(tc)->itemName = NULL;
+    }
+
+
+    if (!PyDict_Next ( (PyObject *)GET_TC(tc)->dictObj, &GET_TC(tc)->index, &GET_TC(tc)->itemName, &GET_TC(tc)->itemValue))
+    {
+        PRINTMARK();
+        return 0;
+    }
+
+    if (PyUnicode_Check(GET_TC(tc)->itemName))
+    {
+        GET_TC(tc)->itemName = PyUnicode_AsUTF8String (GET_TC(tc)->itemName);
+    }
+    else
+    if (!PyString_Check(GET_TC(tc)->itemName))
+    {
+        GET_TC(tc)->itemName = PyObject_Str(GET_TC(tc)->itemName);
+#if PY_MAJOR_VERSION >= 3
+        itemNameTmp = GET_TC(tc)->itemName;
+        GET_TC(tc)->itemName = PyUnicode_AsUTF8String (GET_TC(tc)->itemName);
+        Py_DECREF(itemNameTmp);
+#endif
+    }
+    else
+    {
+        Py_INCREF(GET_TC(tc)->itemName);
+    }
+    PRINTMARK();
+    return 1;
+}
+
+void Dict_iterEnd(JSOBJ obj, JSONTypeContext *tc)
+{
+    if (GET_TC(tc)->itemName)
+    {
+        Py_DECREF(GET_TC(tc)->itemName);
+        GET_TC(tc)->itemName = NULL;
+    }
+    Py_DECREF(GET_TC(tc)->dictObj);
+    PRINTMARK();
+}
+
+JSOBJ Dict_iterGetValue(JSOBJ obj, JSONTypeContext *tc)
+{
+    return GET_TC(tc)->itemValue;
+}
+
+char *Dict_iterGetName(JSOBJ obj, JSONTypeContext *tc, size_t *outLen)
+{
+    *outLen = PyString_GET_SIZE(GET_TC(tc)->itemName);
+    return PyString_AS_STRING(GET_TC(tc)->itemName);
+}
+
+void NpyArr_freeLabels(char** labels, npy_intp len)
+{
+    npy_intp i;
+
+    if (labels)
+    {
+        for (i = 0; i < len; i++)
+        {
+            PyObject_Free(labels[i]);
+        }
+        PyObject_Free(labels);
+    }
+}
+
+char** NpyArr_encodeLabels(PyArrayObject* labels, JSONObjectEncoder* enc, npy_intp num)
+{
+    // NOTE this function steals a reference to labels.
+    PyArray_Descr *dtype = NULL;
+    PyArrayObject* labelsTmp = NULL;
+    PyObject* item = NULL;
+    npy_intp i, stride, len;
+    // npy_intp bufsize = 32768;
+    char** ret;
+    char *dataptr, *cLabel, *origend, *origst, *origoffset;
+    char labelBuffer[NPY_JSON_BUFSIZE];
+    PyArray_GetItemFunc* getitem;
+    PRINTMARK();
+
+    if (PyArray_SIZE(labels) < num)
+    {
+        PyErr_SetString(PyExc_ValueError, "Label array sizes do not match corresponding data shape");
+        Py_DECREF(labels);
+        return 0;
+    }
+
+    ret = PyObject_Malloc(sizeof(char*)*num);
+    if (!ret)
+    {
+        PyErr_NoMemory();
+        Py_DECREF(labels);
+        return 0;
+    }
+
+    for (i = 0; i < num; i++)
+    {
+        ret[i] = NULL;
+    }
+
+    origst = enc->start;
+    origend = enc->end;
+    origoffset = enc->offset;
+
+    if (PyArray_DESCR(labels)->type_num == NPY_DATETIME) {
+        dtype = PyArray_DescrFromType(NPY_INT64);
+        labelsTmp = labels;
+        labels = (PyArrayObject *) PyArray_CastToType(labels, dtype, 0);
+        Py_DECREF(labelsTmp);
+    }
+
+    stride = PyArray_STRIDE(labels, 0);
+    dataptr = PyArray_DATA(labels);
+    getitem = PyArray_DESCR(labels)->f->getitem;
+
+    for (i = 0; i < num; i++)
+    {
+        item = getitem(dataptr, labels);
+        if (!item)
+        {
+            NpyArr_freeLabels(ret, num);
+            ret = 0;
+            break;
+        }
+
+        cLabel = JSON_EncodeObject(item, enc, labelBuffer, NPY_JSON_BUFSIZE);
+        Py_DECREF(item);
+
+        if (PyErr_Occurred() || enc->errorMsg)
+        {
+            NpyArr_freeLabels(ret, num);
+            ret = 0;
+            break;
+        }
+
+        // trim off any quotes surrounding the result
+        if (*cLabel == '\"')
+        {
+            cLabel++;
+            enc->offset -= 2;
+            *(enc->offset) = '\0';
+        }
+
+        len = enc->offset - cLabel + 1;
+        ret[i] = PyObject_Malloc(sizeof(char)*len);
+
+        if (!ret[i])
+        {
+            PyErr_NoMemory();
+            ret = 0;
+            break;
+        }
+
+        memcpy(ret[i], cLabel, sizeof(char)*len);
+        dataptr += stride;
+    }
+
+    enc->start = origst;
+    enc->end = origend;
+    enc->offset = origoffset;
+
+    Py_DECREF(labels);
+    return ret;
+}
+
+void Object_beginTypeContext (JSOBJ _obj, JSONTypeContext *tc)
+{
+    PyObject *obj, *exc, *toDictFunc;
+    TypeContext *pc;
+    PyObjectEncoder *enc;
+    double val;
+    PRINTMARK();
+    if (!_obj) {
+        tc->type = JT_INVALID;
+        return;
+    }
+
+    obj = (PyObject*) _obj;
+    enc = (PyObjectEncoder*) tc->encoder;
+
+    tc->prv = PyObject_Malloc(sizeof(TypeContext));
+    pc = (TypeContext *) tc->prv;
+    if (!pc)
+    {
+        tc->type = JT_INVALID;
+        PyErr_NoMemory();
+        return;
+    }
+    pc->newObj = NULL;
+    pc->dictObj = NULL;
+    pc->itemValue = NULL;
+    pc->itemName = NULL;
+    pc->attrList = NULL;
+    pc->citemName = NULL;
+    pc->npyarr = NULL;
+    pc->rowLabels = NULL;
+    pc->columnLabels = NULL;
+    pc->index = 0;
+    pc->size = 0;
+    pc->longValue = 0;
+    pc->transpose = 0;
+    pc->rowLabelsLen = 0;
+    pc->columnLabelsLen = 0;
+
+    if (PyIter_Check(obj) || PyArray_Check(obj))
+    {
+        goto ISITERABLE;
+    }
+
+    if (PyBool_Check(obj))
+    {
+        PRINTMARK();
+        tc->type = (obj == Py_True) ? JT_TRUE : JT_FALSE;
+        return;
+    }
+    else
+    if (PyLong_Check(obj))
+    {
+        PRINTMARK();
+        pc->PyTypeToJSON = PyLongToINT64;
+        tc->type = JT_LONG;
+        GET_TC(tc)->longValue = PyLong_AsLongLong(obj);
+
+        exc = PyErr_Occurred();
+
+        if (exc && PyErr_ExceptionMatches(PyExc_OverflowError))
+        {
+            PRINTMARK();
+            goto INVALID;
+        }
+
+        return;
+    }
+    else
+    if (PyInt_Check(obj))
+    {
+        PRINTMARK();
+#ifdef _LP64
+        pc->PyTypeToJSON = PyIntToINT64; tc->type = JT_LONG;
+#else
+        pc->PyTypeToJSON = PyIntToINT32; tc->type = JT_INT;
+#endif
+        return;
+    }
+    else
+    if (PyArray_IsScalar(obj, Integer))
+    {
+        PRINTMARK();
+        pc->PyTypeToJSON = PyLongToINT64;
+        tc->type = JT_LONG;
+        PyArray_CastScalarToCtype(obj, &(GET_TC(tc)->longValue), PyArray_DescrFromType(NPY_INT64));
+
+        exc = PyErr_Occurred();
+
+        if (exc && PyErr_ExceptionMatches(PyExc_OverflowError))
+        {
+            PRINTMARK();
+            goto INVALID;
+        }
+
+        return;
+    }
+    else
+    if (PyString_Check(obj))
+    {
+        PRINTMARK();
+        pc->PyTypeToJSON = PyStringToUTF8; tc->type = JT_UTF8;
+        return;
+    }
+    else
+    if (PyUnicode_Check(obj))
+    {
+        PRINTMARK();
+        pc->PyTypeToJSON = PyUnicodeToUTF8; tc->type = JT_UTF8;
+        return;
+    }
+    else
+    if (PyFloat_Check(obj))
+    {
+        PRINTMARK();
+        val = PyFloat_AS_DOUBLE (obj);
+        if (npy_isnan(val) || npy_isinf(val))
+        {
+            tc->type = JT_NULL;
+        }
+        else
+        {
+            pc->PyTypeToJSON = PyFloatToDOUBLE; tc->type = JT_DOUBLE;
+        }
+        return;
+    }
+    else
+    if (PyArray_IsScalar(obj, Float))
+    {
+        PRINTMARK();
+        pc->PyTypeToJSON = NpyFloatToDOUBLE; tc->type = JT_DOUBLE;
+        return;
+    }
+    else
+    if (PyArray_IsScalar(obj, Datetime))
+    {
+        PRINTMARK();
+        pc->PyTypeToJSON = NpyDateTimeToINT64; tc->type = JT_LONG;
+        return;
+    }
+    else
+    if (PyDateTime_Check(obj))
+    {
+        PRINTMARK();
+        pc->PyTypeToJSON = PyDateTimeToINT64; tc->type = JT_LONG;
+        return;
+    }
+    else
+    if (PyDate_Check(obj))
+    {
+        PRINTMARK();
+        pc->PyTypeToJSON = PyDateToINT64; tc->type = JT_LONG;
+        return;
+    }
+    else
+    if (obj == Py_None)
+    {
+        PRINTMARK();
+        tc->type = JT_NULL;
+        return;
+    }
+
+
+ISITERABLE:
+
+    if (PyDict_Check(obj))
+    {
+        PRINTMARK();
+        tc->type = JT_OBJECT;
+        pc->iterBegin = Dict_iterBegin;
+        pc->iterEnd = Dict_iterEnd;
+        pc->iterNext = Dict_iterNext;
+        pc->iterGetValue = Dict_iterGetValue;
+        pc->iterGetName = Dict_iterGetName;
+        pc->dictObj = obj;
+        Py_INCREF(obj);
+
+        return;
+    }
+    else
+    if (PyList_Check(obj))
+    {
+        PRINTMARK();
+        tc->type = JT_ARRAY;
+        pc->iterBegin = List_iterBegin;
+        pc->iterEnd = List_iterEnd;
+        pc->iterNext = List_iterNext;
+        pc->iterGetValue = List_iterGetValue;
+        pc->iterGetName = List_iterGetName;
+        return;
+    }
+    else
+    if (PyTuple_Check(obj))
+    {
+        PRINTMARK();
+        tc->type = JT_ARRAY;
+        pc->iterBegin = Tuple_iterBegin;
+        pc->iterEnd = Tuple_iterEnd;
+        pc->iterNext = Tuple_iterNext;
+        pc->iterGetValue = Tuple_iterGetValue;
+        pc->iterGetName = Tuple_iterGetName;
+        return;
+    }
+    else
+    if (PyObject_TypeCheck(obj, (PyTypeObject*) cls_index))
+    {
+        if (enc->outputFormat == SPLIT)
+        {
+            PRINTMARK();
+            tc->type = JT_OBJECT;
+            pc->iterBegin = Index_iterBegin;
+            pc->iterEnd = Index_iterEnd;
+            pc->iterNext = Index_iterNext;
+            pc->iterGetValue = Index_iterGetValue;
+            pc->iterGetName = Index_iterGetName;
+            return;
+        }
+
+        PRINTMARK();
+        tc->type = JT_ARRAY;
+        pc->newObj = PyObject_GetAttrString(obj, "values");
+        pc->iterBegin = NpyArr_iterBegin;
+        pc->iterEnd = NpyArr_iterEnd;
+        pc->iterNext = NpyArr_iterNext;
+        pc->iterGetValue = NpyArr_iterGetValue;
+        pc->iterGetName = NpyArr_iterGetName;
+        return;
+    }
+    else
+    if (PyObject_TypeCheck(obj, (PyTypeObject*) cls_series))
+    {
+        if (enc->outputFormat == SPLIT)
+        {
+            PRINTMARK();
+            tc->type = JT_OBJECT;
+            pc->iterBegin = Series_iterBegin;
+            pc->iterEnd = Series_iterEnd;
+            pc->iterNext = Series_iterNext;
+            pc->iterGetValue = Series_iterGetValue;
+            pc->iterGetName = Series_iterGetName;
+            return;
+        }
+
+        if (enc->outputFormat == INDEX || enc->outputFormat == COLUMNS)
+        {
+            PRINTMARK();
+            tc->type = JT_OBJECT;
+            pc->columnLabelsLen = PyArray_SIZE(obj);
+            pc->columnLabels = NpyArr_encodeLabels((PyArrayObject*) PyObject_GetAttrString(obj, "index"), (JSONObjectEncoder*) enc, pc->columnLabelsLen);
+            if (!pc->columnLabels)
+            {
+                goto INVALID;
+            }
+        }
+        else
+        {
+            PRINTMARK();
+            tc->type = JT_ARRAY;
+        }
+        pc->newObj = PyObject_GetAttrString(obj, "values");
+        pc->iterBegin = NpyArr_iterBegin;
+        pc->iterEnd = NpyArr_iterEnd;
+        pc->iterNext = NpyArr_iterNext;
+        pc->iterGetValue = NpyArr_iterGetValue;
+        pc->iterGetName = NpyArr_iterGetName;
+        return;
+    }
+    else
+    if (PyArray_Check(obj))
+    {
+        if (enc->npyCtxtPassthru)
+        {
+            PRINTMARK();
+            pc->npyarr = enc->npyCtxtPassthru;
+            tc->type = (pc->npyarr->columnLabels ? JT_OBJECT : JT_ARRAY);
+            pc->iterBegin = NpyArrPassThru_iterBegin;
+            pc->iterEnd = NpyArrPassThru_iterEnd;
+            pc->iterNext = NpyArr_iterNext;
+            pc->iterGetValue = NpyArr_iterGetValue;
+            pc->iterGetName = NpyArr_iterGetName;
+            enc->npyCtxtPassthru = NULL;
+            return;
+        }
+
+        PRINTMARK();
+        tc->type = JT_ARRAY;
+        pc->iterBegin = NpyArr_iterBegin;
+        pc->iterEnd = NpyArr_iterEnd;
+        pc->iterNext = NpyArr_iterNext;
+        pc->iterGetValue = NpyArr_iterGetValue;
+        pc->iterGetName = NpyArr_iterGetName;
+        return;
+    }
+    else
+    if (PyObject_TypeCheck(obj, (PyTypeObject*) cls_dataframe))
+    {
+        if (enc->outputFormat == SPLIT)
+        {
+            PRINTMARK();
+            tc->type = JT_OBJECT;
+            pc->iterBegin = DataFrame_iterBegin;
+            pc->iterEnd = DataFrame_iterEnd;
+            pc->iterNext = DataFrame_iterNext;
+            pc->iterGetValue = DataFrame_iterGetValue;
+            pc->iterGetName = DataFrame_iterGetName;
+            return;
+        }
+
+        PRINTMARK();
+        pc->newObj = PyObject_GetAttrString(obj, "values");
+        pc->iterBegin = NpyArr_iterBegin;
+        pc->iterEnd = NpyArr_iterEnd;
+        pc->iterNext = NpyArr_iterNext;
+        pc->iterGetValue = NpyArr_iterGetValue;
+        pc->iterGetName = NpyArr_iterGetName;
+        if (enc->outputFormat == VALUES)
+        {
+            PRINTMARK();
+            tc->type = JT_ARRAY;
+        }
+        else
+        if (enc->outputFormat == RECORDS)
+        {
+            PRINTMARK();
+            tc->type = JT_ARRAY;
+            pc->columnLabelsLen = PyArray_DIM(pc->newObj, 1);
+            pc->columnLabels = NpyArr_encodeLabels((PyArrayObject*) PyObject_GetAttrString(obj, "columns"), (JSONObjectEncoder*) enc, pc->columnLabelsLen);
+            if (!pc->columnLabels)
+            {
+                goto INVALID;
+            }
+        }
+        else
+        if (enc->outputFormat == INDEX)
+        {
+            PRINTMARK();
+            tc->type = JT_OBJECT;
+            pc->rowLabelsLen = PyArray_DIM(pc->newObj, 0);
+            pc->rowLabels = NpyArr_encodeLabels((PyArrayObject*) PyObject_GetAttrString(obj, "index"), (JSONObjectEncoder*) enc, pc->rowLabelsLen);
+            if (!pc->rowLabels)
+            {
+                goto INVALID;
+            }
+            pc->columnLabelsLen = PyArray_DIM(pc->newObj, 1);
+            pc->columnLabels = NpyArr_encodeLabels((PyArrayObject*) PyObject_GetAttrString(obj, "columns"), (JSONObjectEncoder*) enc, pc->columnLabelsLen);
+            if (!pc->columnLabels)
+            {
+                NpyArr_freeLabels(pc->rowLabels, pc->rowLabelsLen);
+                pc->rowLabels = NULL;
+                goto INVALID;
+            }
+        }
+        else
+        {
+            PRINTMARK();
+            tc->type = JT_OBJECT;
+            pc->rowLabelsLen = PyArray_DIM(pc->newObj, 1);
+            pc->rowLabels = NpyArr_encodeLabels((PyArrayObject*) PyObject_GetAttrString(obj, "columns"), (JSONObjectEncoder*) enc, pc->rowLabelsLen);
+            if (!pc->rowLabels)
+            {
+                goto INVALID;
+            }
+            pc->columnLabelsLen = PyArray_DIM(pc->newObj, 0);
+            pc->columnLabels = NpyArr_encodeLabels((PyArrayObject*) PyObject_GetAttrString(obj, "index"), (JSONObjectEncoder*) enc, pc->columnLabelsLen);
+            if (!pc->columnLabels)
+            {
+                NpyArr_freeLabels(pc->rowLabels, pc->rowLabelsLen);
+                pc->rowLabels = NULL;
+                goto INVALID;
+            }
+            pc->transpose = 1;
+        }
+        return;
+    }
+
+
+    toDictFunc = PyObject_GetAttrString(obj, "toDict");
+
+    if (toDictFunc)
+    {
+        PyObject* tuple = PyTuple_New(0);
+        PyObject* toDictResult = PyObject_Call(toDictFunc, tuple, NULL);
+        Py_DECREF(tuple);
+        Py_DECREF(toDictFunc);
+
+        if (toDictResult == NULL)
+        {
+            PyErr_Clear();
+            tc->type = JT_NULL;
+            return;
+        }
+
+        if (!PyDict_Check(toDictResult))
+        {
+            Py_DECREF(toDictResult);
+            tc->type = JT_NULL;
+            return;
+        }
+
+        PRINTMARK();
+        tc->type = JT_OBJECT;
+        pc->iterBegin = Dict_iterBegin;
+        pc->iterEnd = Dict_iterEnd;
+        pc->iterNext = Dict_iterNext;
+        pc->iterGetValue = Dict_iterGetValue;
+        pc->iterGetName = Dict_iterGetName;
+        pc->dictObj = toDictResult;
+        return;
+    }
+
+    PyErr_Clear();
+
+    tc->type = JT_OBJECT;
+    pc->iterBegin = Dir_iterBegin;
+    pc->iterEnd = Dir_iterEnd;
+    pc->iterNext = Dir_iterNext;
+    pc->iterGetValue = Dir_iterGetValue;
+    pc->iterGetName = Dir_iterGetName;
+
+    return;
+
+INVALID:
+    tc->type = JT_INVALID;
+    PyObject_Free(tc->prv);
+    tc->prv = NULL;
+    return;
+}
+
+
+void Object_endTypeContext(JSOBJ obj, JSONTypeContext *tc)
+{
+    Py_XDECREF(GET_TC(tc)->newObj);
+    NpyArr_freeLabels(GET_TC(tc)->rowLabels, GET_TC(tc)->rowLabelsLen);
+    NpyArr_freeLabels(GET_TC(tc)->columnLabels, GET_TC(tc)->columnLabelsLen);
+
+    PyObject_Free(tc->prv);
+    tc->prv = NULL;
+}
+
+const char *Object_getStringValue(JSOBJ obj, JSONTypeContext *tc, size_t *_outLen)
+{
+    return GET_TC(tc)->PyTypeToJSON (obj, tc, NULL, _outLen);
+}
+
+JSINT64 Object_getLongValue(JSOBJ obj, JSONTypeContext *tc)
+{
+    JSINT64 ret;
+    GET_TC(tc)->PyTypeToJSON (obj, tc, &ret, NULL);
+
+    return ret;
+}
+
+JSINT32 Object_getIntValue(JSOBJ obj, JSONTypeContext *tc)
+{
+    JSINT32 ret;
+    GET_TC(tc)->PyTypeToJSON (obj, tc, &ret, NULL);
+    return ret;
+}
+
+
+double Object_getDoubleValue(JSOBJ obj, JSONTypeContext *tc)
+{
+    double ret;
+    GET_TC(tc)->PyTypeToJSON (obj, tc, &ret, NULL);
+    return ret;
+}
+
+static void Object_releaseObject(JSOBJ _obj)
+{
+    Py_DECREF( (PyObject *) _obj);
+}
+
+
+
+void Object_iterBegin(JSOBJ obj, JSONTypeContext *tc)
+{
+    GET_TC(tc)->iterBegin(obj, tc);
+}
+
+int Object_iterNext(JSOBJ obj, JSONTypeContext *tc)
+{
+    return GET_TC(tc)->iterNext(obj, tc);
+}
+
+void Object_iterEnd(JSOBJ obj, JSONTypeContext *tc)
+{
+    GET_TC(tc)->iterEnd(obj, tc);
+}
+
+JSOBJ Object_iterGetValue(JSOBJ obj, JSONTypeContext *tc)
+{
+    return GET_TC(tc)->iterGetValue(obj, tc);
+}
+
+char *Object_iterGetName(JSOBJ obj, JSONTypeContext *tc, size_t *outLen)
+{
+    return GET_TC(tc)->iterGetName(obj, tc, outLen);
+}
+
+
+PyObject* objToJSON(PyObject* self, PyObject *args, PyObject *kwargs)
+{
+    static char *kwlist[] = { "obj", "ensure_ascii", "double_precision", "orient", NULL};
+
+    char buffer[65536];
+    char *ret;
+    PyObject *newobj;
+    PyObject *oinput = NULL;
+    PyObject *oensureAscii = NULL;
+    char *sOrient = NULL;
+    int idoublePrecision = 5; // default double precision setting
+
+    PyObjectEncoder pyEncoder =
+    {
+        {
+            Object_beginTypeContext, //void (*beginTypeContext)(JSOBJ obj, JSONTypeContext *tc);
+            Object_endTypeContext, //void (*endTypeContext)(JSOBJ obj, JSONTypeContext *tc);
+            Object_getStringValue, //const char *(*getStringValue)(JSOBJ obj, JSONTypeContext *tc, size_t *_outLen);
+            Object_getLongValue, //JSLONG (*getLongValue)(JSOBJ obj, JSONTypeContext *tc);
+            Object_getIntValue, //JSLONG (*getLongValue)(JSOBJ obj, JSONTypeContext *tc);
+            Object_getDoubleValue, //double (*getDoubleValue)(JSOBJ obj, JSONTypeContext *tc);
+            Object_iterBegin, //JSPFN_ITERBEGIN iterBegin;
+            Object_iterNext, //JSPFN_ITERNEXT iterNext;
+            Object_iterEnd, //JSPFN_ITEREND iterEnd;
+            Object_iterGetValue, //JSPFN_ITERGETVALUE iterGetValue;
+            Object_iterGetName, //JSPFN_ITERGETNAME iterGetName;
+            Object_releaseObject, //void (*releaseValue)(JSONTypeContext *ti);
+            PyObject_Malloc, //JSPFN_MALLOC malloc;
+            PyObject_Realloc, //JSPFN_REALLOC realloc;
+            PyObject_Free, //JSPFN_FREE free;
+            -1, //recursionMax
+            idoublePrecision,
+            1, //forceAscii
+        }
+    };
+    JSONObjectEncoder* encoder = (JSONObjectEncoder*) &pyEncoder;
+
+    pyEncoder.npyCtxtPassthru = NULL;
+    pyEncoder.outputFormat = COLUMNS;
+
+    PRINTMARK();
+
+    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|Ois", kwlist, &oinput, &oensureAscii, &idoublePrecision, &sOrient))
+    {
+        return NULL;
+    }
+
+    if (sOrient != NULL)
+    {
+        if (strcmp(sOrient, "records") == 0)
+        {
+            pyEncoder.outputFormat = RECORDS;
+        }
+        else
+        if (strcmp(sOrient, "index") == 0)
+        {
+            pyEncoder.outputFormat = INDEX;
+        }
+        else
+        if (strcmp(sOrient, "split") == 0)
+        {
+            pyEncoder.outputFormat = SPLIT;
+        }
+        else
+        if (strcmp(sOrient, "values") == 0)
+        {
+            pyEncoder.outputFormat = VALUES;
+        }
+        else
+        if (strcmp(sOrient, "columns") != 0)
+        {
+            PyErr_Format (PyExc_ValueError, "Invalid value '%s' for option 'orient'", sOrient);
+            return NULL;
+        }
+    }
+
+    pyEncoder.originalOutputFormat = pyEncoder.outputFormat;
+
+    if (oensureAscii != NULL && !PyObject_IsTrue(oensureAscii))
+    {
+        encoder->forceASCII = 0;
+    }
+
+    encoder->doublePrecision = idoublePrecision;
+
+    PRINTMARK();
+    ret = JSON_EncodeObject (oinput, encoder, buffer, sizeof (buffer));
+    PRINTMARK();
+
+    if (PyErr_Occurred())
+    {
+        return NULL;
+    }
+
+    if (encoder->errorMsg)
+    {
+        if (ret != buffer)
+        {
+            encoder->free (ret);
+        }
+
+        PyErr_Format (PyExc_OverflowError, "%s", encoder->errorMsg);
+        return NULL;
+    }
+
+    newobj = PyString_FromString (ret);
+
+    if (ret != buffer)
+    {
+        encoder->free (ret);
+    }
+
+    PRINTMARK();
+
+    return newobj;
+}
+
+PyObject* objToJSONFile(PyObject* self, PyObject *args, PyObject *kwargs)
+{
+    PyObject *data;
+    PyObject *file;
+    PyObject *string;
+    PyObject *write;
+    PyObject *argtuple;
+
+    PRINTMARK();
+
+    if (!PyArg_ParseTuple (args, "OO", &data, &file)) {
+        return NULL;
+    }
+
+    if (!PyObject_HasAttrString (file, "write"))
+    {
+        PyErr_Format (PyExc_TypeError, "expected file");
+        return NULL;
+    }
+
+    write = PyObject_GetAttrString (file, "write");
+
+    if (!PyCallable_Check (write)) {
+        Py_XDECREF(write);
+        PyErr_Format (PyExc_TypeError, "expected file");
+        return NULL;
+    }
+
+    argtuple = PyTuple_Pack(1, data);
+
+    string = objToJSON (self, argtuple, kwargs);
+
+    if (string == NULL)
+    {
+        Py_XDECREF(write);
+        Py_XDECREF(argtuple);
+        return NULL;
+    }
+
+    Py_XDECREF(argtuple);
+
+    argtuple = PyTuple_Pack (1, string);
+    if (argtuple == NULL)
+    {
+        Py_XDECREF(write);
+        return NULL;
+    }
+    if (PyObject_CallObject (write, argtuple) == NULL)
+    {
+        Py_XDECREF(write);
+        Py_XDECREF(argtuple);
+        return NULL;
+    }
+
+    Py_XDECREF(write);
+    Py_DECREF(argtuple);
+    Py_XDECREF(string);
+
+    PRINTMARK();
+
+    Py_RETURN_NONE;
+
+
+}
+
diff --git a/pandas/src/ujson/python/py_defines.h b/pandas/src/ujson/python/py_defines.h
new file mode 100644
index 0000000000000..1544c2e3cf34d
--- /dev/null
+++ b/pandas/src/ujson/python/py_defines.h
@@ -0,0 +1,15 @@
+#include <Python.h>
+
+#if PY_MAJOR_VERSION >= 3
+
+#define PyInt_Check PyLong_Check
+#define PyInt_AS_LONG PyLong_AsLong
+#define PyInt_FromLong PyLong_FromLong
+
+#define PyString_Check PyBytes_Check
+#define PyString_GET_SIZE PyBytes_GET_SIZE
+#define PyString_AS_STRING PyBytes_AS_STRING
+
+#define PyString_FromString PyUnicode_FromString
+
+#endif
diff --git a/pandas/src/ujson/python/ujson.c b/pandas/src/ujson/python/ujson.c
new file mode 100644
index 0000000000000..e04309e620a1d
--- /dev/null
+++ b/pandas/src/ujson/python/ujson.c
@@ -0,0 +1,73 @@
+#include "py_defines.h"
+#include "version.h"
+
+/* objToJSON */
+PyObject* objToJSON(PyObject* self, PyObject *args, PyObject *kwargs);
+void initObjToJSON(void);
+
+/* JSONToObj */
+PyObject* JSONToObj(PyObject* self, PyObject *args, PyObject *kwargs);
+
+/* objToJSONFile */
+PyObject* objToJSONFile(PyObject* self, PyObject *args, PyObject *kwargs);
+
+/* JSONFileToObj */
+PyObject* JSONFileToObj(PyObject* self, PyObject *args, PyObject *kwargs);
+
+
+static PyMethodDef ujsonMethods[] = {
+    {"encode", (PyCFunction) objToJSON, METH_VARARGS | METH_KEYWORDS, "Converts arbitrary object recursivly into JSON. Use ensure_ascii=false to output UTF-8. Pass in double_precision to alter the maximum digit precision with doubles"},
+    {"decode", (PyCFunction) JSONToObj, METH_VARARGS | METH_KEYWORDS, "Converts JSON as string to dict object structure"},
+    {"dumps", (PyCFunction) objToJSON, METH_VARARGS | METH_KEYWORDS, "Converts arbitrary object recursivly into JSON.
Use ensure_ascii=false to output UTF-8"}, + {"loads", (PyCFunction) JSONToObj, METH_VARARGS | METH_KEYWORDS, "Converts JSON as string to dict object structure"}, + {"dump", (PyCFunction) objToJSONFile, METH_VARARGS | METH_KEYWORDS, "Converts arbitrary object recursively into JSON file. Use ensure_ascii=false to output UTF-8"}, + {"load", (PyCFunction) JSONFileToObj, METH_VARARGS | METH_KEYWORDS, "Converts JSON as file to dict object structure"}, + {NULL, NULL, 0, NULL} /* Sentinel */ +}; + +#if PY_MAJOR_VERSION >= 3 + +static struct PyModuleDef moduledef = { + PyModuleDef_HEAD_INIT, + "_pandasujson", + 0, /* m_doc */ + -1, /* m_size */ + ujsonMethods, /* m_methods */ + NULL, /* m_reload */ + NULL, /* m_traverse */ + NULL, /* m_clear */ + NULL /* m_free */ +}; + +#define PYMODINITFUNC PyObject *PyInit_json(void) +#define PYMODULE_CREATE() PyModule_Create(&moduledef) +#define MODINITERROR return NULL + +#else + +#define PYMODINITFUNC PyMODINIT_FUNC initjson(void) +#define PYMODULE_CREATE() Py_InitModule("json", ujsonMethods) +#define MODINITERROR return + +#endif + +PYMODINITFUNC +{ + PyObject *module; + PyObject *version_string; + + initObjToJSON(); + module = PYMODULE_CREATE(); + + if (module == NULL) + { + MODINITERROR; + } + + version_string = PyString_FromString (UJSON_VERSION); + PyModule_AddObject (module, "__version__", version_string); + +#if PY_MAJOR_VERSION >= 3 + return module; +#endif +} diff --git a/pandas/src/ujson/python/version.h b/pandas/src/ujson/python/version.h new file mode 100644 index 0000000000000..9449441411192 --- /dev/null +++ b/pandas/src/ujson/python/version.h @@ -0,0 +1 @@ +#define UJSON_VERSION "1.18" diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py index d674a2f44ebe1..2c6d3b221c6ff 100644 --- a/pandas/tests/test_frame.py +++ b/pandas/tests/test_frame.py @@ -3338,146 +3338,6 @@ def test_to_dict(self): for k2, v2 in v.iteritems(): self.assertEqual(v2, recons_data[k][k2]) - def test_from_json_to_json(self): - raise 
nose.SkipTest - - def _check_orient(df, orient, dtype=None, numpy=True): - df = df.sort() - dfjson = df.to_json(orient=orient) - unser = DataFrame.from_json(dfjson, orient=orient, dtype=dtype, - numpy=numpy) - unser = unser.sort() - if df.index.dtype.type == np.datetime64: - unser.index = DatetimeIndex(unser.index.values.astype('i8')) - if orient == "records": - # index is not captured in this orientation - assert_almost_equal(df.values, unser.values) - self.assert_(df.columns.equals(unser.columns)) - elif orient == "values": - # index and cols are not captured in this orientation - assert_almost_equal(df.values, unser.values) - elif orient == "split": - # index and col labels might not be strings - unser.index = [str(i) for i in unser.index] - unser.columns = [str(i) for i in unser.columns] - unser = unser.sort() - assert_almost_equal(df.values, unser.values) - else: - assert_frame_equal(df, unser) - - def _check_all_orients(df, dtype=None): - _check_orient(df, "columns", dtype=dtype) - _check_orient(df, "records", dtype=dtype) - _check_orient(df, "split", dtype=dtype) - _check_orient(df, "index", dtype=dtype) - _check_orient(df, "values", dtype=dtype) - - _check_orient(df, "columns", dtype=dtype, numpy=False) - _check_orient(df, "records", dtype=dtype, numpy=False) - _check_orient(df, "split", dtype=dtype, numpy=False) - _check_orient(df, "index", dtype=dtype, numpy=False) - _check_orient(df, "values", dtype=dtype, numpy=False) - - # basic - _check_all_orients(self.frame) - self.assertEqual(self.frame.to_json(), - self.frame.to_json(orient="columns")) - - _check_all_orients(self.intframe, dtype=self.intframe.values.dtype) - - # big one - # index and columns are strings as all unserialised JSON object keys - # are assumed to be strings - biggie = DataFrame(np.zeros((200, 4)), - columns=[str(i) for i in range(4)], - index=[str(i) for i in range(200)]) - _check_all_orients(biggie) - - # dtypes - _check_all_orients(DataFrame(biggie, dtype=np.float64), - 
dtype=np.float64) - _check_all_orients(DataFrame(biggie, dtype=np.int64), dtype=np.int64) - _check_all_orients(DataFrame(biggie, dtype='<U3'), dtype='<U3') - - # empty - _check_all_orients(self.empty) - - # time series data - _check_all_orients(self.tsframe) - - # mixed data - index = Index(['a', 'b', 'c', 'd', 'e']) - data = { - 'A': [0., 1., 2., 3., 4.], - 'B': [0., 1., 0., 1., 0.], - 'C': ['foo1', 'foo2', 'foo3', 'foo4', 'foo5'], - 'D': [True, False, True, False, True] - } - df = DataFrame(data=data, index=index) - _check_orient(df, "split") - _check_orient(df, "records") - _check_orient(df, "values") - _check_orient(df, "columns") - # index oriented is problematic as it is read back in in a transposed - # state, so the columns are interpreted as having mixed data and - # given object dtypes. - # force everything to have object dtype beforehand - _check_orient(df.transpose().transpose(), "index") - - def test_from_json_bad_data(self): - raise nose.SkipTest - self.assertRaises(ValueError, DataFrame.from_json, '{"key":b:a:d}') - - # too few indices - json = ('{"columns":["A","B"],' - '"index":["2","3"],' - '"data":[[1.0,"1"],[2.0,"2"],[null,"3"]]}"') - self.assertRaises(AssertionError, DataFrame.from_json, json, - orient="split") - - # too many columns - json = ('{"columns":["A","B","C"],' - '"index":["1","2","3"],' - '"data":[[1.0,"1"],[2.0,"2"],[null,"3"]]}"') - self.assertRaises(AssertionError, DataFrame.from_json, json, - orient="split") - - # bad key - json = ('{"badkey":["A","B"],' - '"index":["2","3"],' - '"data":[[1.0,"1"],[2.0,"2"],[null,"3"]]}"') - self.assertRaises(TypeError, DataFrame.from_json, json, - orient="split") - - def test_from_json_nones(self): - raise nose.SkipTest - df = DataFrame([[1, 2], [4, 5, 6]]) - unser = DataFrame.from_json(df.to_json()) - self.assert_(np.isnan(unser['2'][0])) - - df = DataFrame([['1', '2'], ['4', '5', '6']]) - unser = DataFrame.from_json(df.to_json()) - self.assert_(unser['2'][0] is None) - - unser = 
DataFrame.from_json(df.to_json(), numpy=False) - self.assert_(unser['2'][0] is None) - - # infinities get mapped to nulls which get mapped to NaNs during - # deserialisation - df = DataFrame([[1, 2], [4, 5, 6]]) - df[2][0] = np.inf - unser = DataFrame.from_json(df.to_json()) - self.assert_(np.isnan(unser['2'][0])) - - df[2][0] = np.NINF - unser = DataFrame.from_json(df.to_json()) - self.assert_(np.isnan(unser['2'][0])) - - def test_to_json_except(self): - raise nose.SkipTest - df = DataFrame([1, 2, 3]) - self.assertRaises(ValueError, df.to_json, orient="garbage") - def test_to_records_dt64(self): df = DataFrame([["one", "two", "three"], ["four", "five", "six"]], diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py index e1589b9499757..88990bdde98b8 100644 --- a/pandas/tests/test_series.py +++ b/pandas/tests/test_series.py @@ -561,62 +561,6 @@ def test_fromDict(self): series = Series(data, dtype=float) self.assert_(series.dtype == np.float64) - def test_from_json_to_json(self): - raise nose.SkipTest - - def _check_orient(series, orient, dtype=None, numpy=True): - series = series.sort_index() - unser = Series.from_json(series.to_json(orient=orient), - orient=orient, numpy=numpy, dtype=dtype) - unser = unser.sort_index() - if series.index.dtype.type == np.datetime64: - unser.index = DatetimeIndex(unser.index.values.astype('i8')) - if orient == "records" or orient == "values": - assert_almost_equal(series.values, unser.values) - else: - try: - assert_series_equal(series, unser) - except: - raise - if orient == "split": - self.assert_(series.name == unser.name) - - def _check_all_orients(series, dtype=None): - _check_orient(series, "columns", dtype=dtype) - _check_orient(series, "records", dtype=dtype) - _check_orient(series, "split", dtype=dtype) - _check_orient(series, "index", dtype=dtype) - _check_orient(series, "values", dtype=dtype) - - _check_orient(series, "columns", dtype=dtype, numpy=False) - _check_orient(series, "records", dtype=dtype, 
numpy=False) - _check_orient(series, "split", dtype=dtype, numpy=False) - _check_orient(series, "index", dtype=dtype, numpy=False) - _check_orient(series, "values", dtype=dtype, numpy=False) - - # basic - _check_all_orients(self.series) - self.assertEqual(self.series.to_json(), - self.series.to_json(orient="index")) - - objSeries = Series([str(d) for d in self.objSeries], - index=self.objSeries.index, - name=self.objSeries.name) - _check_all_orients(objSeries) - _check_all_orients(self.empty) - _check_all_orients(self.ts) - - # dtype - s = Series(range(6), index=['a', 'b', 'c', 'd', 'e', 'f']) - _check_all_orients(Series(s, dtype=np.float64), dtype=np.float64) - _check_all_orients(Series(s, dtype=np.int), dtype=np.int) - - - def test_to_json_except(self): - raise nose.SkipTest - s = Series([1, 2, 3]) - self.assertRaises(ValueError, s.to_json, orient="garbage") - def test_setindex(self): # wrong type series = self.series.copy() diff --git a/scripts/json_manip.py b/scripts/json_manip.py new file mode 100644 index 0000000000000..e76a99cca344a --- /dev/null +++ b/scripts/json_manip.py @@ -0,0 +1,421 @@ +""" + +Tasks +------- + +Search and transform jsonable structures, specifically to make it 'easy' to make tabular/csv output for other consumers. + +Example +~~~~~~~~~~~~~ + + *give me a list of all the fields called 'id' in this stupid, gnarly + thing* + + >>> Q('id',gnarly_data) + ['id1','id2','id3'] + + +Observations: +--------------------- + +1) 'simple data structures' exist and are common. They are tedious + to search. + +2) The DOM is another nested / treeish structure, and jQuery selector is + a good tool for that. + +3a) R, Numpy, Excel and other analysis tools want 'tabular' data. These + analyses are valuable and worth doing. + +3b) Dot/Graphviz, NetworkX, and some other analyses *like* treeish/dicty + things, and those analyses are also worth doing! 
+ +3c) Some analyses are best done using 'one-off' and custom code in C, Python, + or another 'real' programming language. + +4) Arbitrary transforms are tedious and error prone. SQL is one solution, + XSLT is another, + +5) the XPATH/XML/XSLT family is.... not universally loved :) They are + very complete, and the completeness can make simple cases... gross. + +6) For really complicated data structures, we can write one-off code. Getting + 80% of the way is mostly okay. There will always have to be programmers + in the loop. + +7) Re-inventing SQL is probably a failure mode. So is reinventing XPATH, XSLT + and the like. Be wary of mission creep! Re-use when possible (e.g., can + we put the thing into a DOM using + +8) If the interface is good, people can improve performance later. + + +Simplifying +--------------- + + +1) Assuming 'jsonable' structures + +2) keys are strings or stringlike. Python allows any hashable to be a key. + for now, we pretend that doesn't happen. + +3) assumes most dicts are 'well behaved'. DAG, no cycles! + +4) assume that if people want really specialized transforms, they can do it + themselves. 
+ +""" + +from collections import Counter, namedtuple +import csv +import itertools +from itertools import product +from operator import attrgetter as aget, itemgetter as iget +import operator +import sys + + + +## note 'url' appears multiple places and not all extensions have same struct +ex1 = { + 'name': 'Gregg', + 'extensions': [ + {'id':'hello', + 'url':'url1'}, + {'id':'gbye', + 'url':'url2', + 'more': dict(url='url3')}, + ] +} + +## much longer example +ex2 = {u'metadata': {u'accessibilities': [{u'name': u'accessibility.tabfocus', + u'value': 7}, + {u'name': u'accessibility.mouse_focuses_formcontrol', u'value': False}, + {u'name': u'accessibility.browsewithcaret', u'value': False}, + {u'name': u'accessibility.win32.force_disabled', u'value': False}, + {u'name': u'accessibility.typeaheadfind.startlinksonly', u'value': False}, + {u'name': u'accessibility.usebrailledisplay', u'value': u''}, + {u'name': u'accessibility.typeaheadfind.timeout', u'value': 5000}, + {u'name': u'accessibility.typeaheadfind.enabletimeout', u'value': True}, + {u'name': u'accessibility.tabfocus_applies_to_xul', u'value': False}, + {u'name': u'accessibility.typeaheadfind.flashBar', u'value': 1}, + {u'name': u'accessibility.typeaheadfind.autostart', u'value': True}, + {u'name': u'accessibility.blockautorefresh', u'value': False}, + {u'name': u'accessibility.browsewithcaret_shortcut.enabled', + u'value': True}, + {u'name': u'accessibility.typeaheadfind.enablesound', u'value': True}, + {u'name': u'accessibility.typeaheadfind.prefillwithselection', + u'value': True}, + {u'name': u'accessibility.typeaheadfind.soundURL', u'value': u'beep'}, + {u'name': u'accessibility.typeaheadfind', u'value': False}, + {u'name': u'accessibility.typeaheadfind.casesensitive', u'value': 0}, + {u'name': u'accessibility.warn_on_browsewithcaret', u'value': True}, + {u'name': u'accessibility.usetexttospeech', u'value': u''}, + {u'name': u'accessibility.accesskeycausesactivation', u'value': True}, + {u'name': 
u'accessibility.typeaheadfind.linksonly', u'value': False}, + {u'name': u'isInstantiated', u'value': True}], + u'extensions': [{u'id': u'216ee7f7f4a5b8175374cd62150664efe2433a31', + u'isEnabled': True}, + {u'id': u'1aa53d3b720800c43c4ced5740a6e82bb0b3813e', u'isEnabled': False}, + {u'id': u'01ecfac5a7bd8c9e27b7c5499e71c2d285084b37', u'isEnabled': True}, + {u'id': u'1c01f5b22371b70b312ace94785f7b0b87c3dfb2', u'isEnabled': True}, + {u'id': u'fb723781a2385055f7d024788b75e959ad8ea8c3', u'isEnabled': True}], + u'fxVersion': u'9.0', + u'location': u'zh-CN', + u'operatingSystem': u'WINNT Windows NT 5.1', + u'surveyAnswers': u'', + u'task_guid': u'd69fbd15-2517-45b5-8a17-bb7354122a75', + u'tpVersion': u'1.2', + u'updateChannel': u'beta'}, + u'survey_data': { + u'extensions': [{u'appDisabled': False, + u'id': u'testpilot?labs.mozilla.com', + u'isCompatible': True, + u'isEnabled': True, + u'isPlatformCompatible': True, + u'name': u'Test Pilot'}, + {u'appDisabled': True, + u'id': u'dict?www.youdao.com', + u'isCompatible': False, + u'isEnabled': False, + u'isPlatformCompatible': True, + u'name': u'Youdao Word Capturer'}, + {u'appDisabled': False, + u'id': u'jqs?sun.com', + u'isCompatible': True, + u'isEnabled': True, + u'isPlatformCompatible': True, + u'name': u'Java Quick Starter'}, + {u'appDisabled': False, + u'id': u'?20a82645-c095-46ed-80e3-08825760534b?', + u'isCompatible': True, + u'isEnabled': True, + u'isPlatformCompatible': True, + u'name': u'Microsoft .NET Framework Assistant'}, + {u'appDisabled': False, + u'id': u'?a0d7ccb3-214d-498b-b4aa-0e8fda9a7bf7?', + u'isCompatible': True, + u'isEnabled': True, + u'isPlatformCompatible': True, + u'name': u'WOT'}], + u'version_number': 1}} + +# class SurveyResult(object): + +# def __init__(self, record): +# self.record = record +# self.metadata, self.survey_data = self._flatten_results() + +# def _flatten_results(self): +# survey_data = self.record['survey_data'] +# extensions = DataFrame(survey_data['extensions']) + +def 
denorm(queries,iterable_of_things,default=None): + """ + 'repeat', or 'stutter' to 'tableize' for downstream. + (I have no idea what a good word for this is!) + + Think ``kronecker`` products, or: + + ``SELECT single,multiple FROM table;`` + + single multiple + ------- --------- + id1 val1 + id1 val2 + + + Args: + + queries: iterable of ``Q`` queries. + iterable_of_things: to be queried. + + Returns: + + list of 'stuttered' output, where if a query returns + a 'single', it gets repeated appropriately. + + + """ + + def _denorm(queries,thing): + fields = [] + results = [] + for q in queries: + #print q + r = Ql(q,thing) + #print "-- result: ", r + if not r: + r = [default] + if type(r[0]) is type({}): + fields.append(sorted(r[0].keys())) # dicty answers + else: + fields.append([q]) # stringy answer + + results.append(r) + + #print results + #print fields + flist = list(flatten(*map(iter,fields))) + + prod = itertools.product(*results) + for p in prod: + U = dict() + for (ii,thing) in enumerate(p): + #print ii,thing + if type(thing) is type({}): + U.update(thing) + else: + U[fields[ii][0]] = thing + + yield U + + return list(flatten(*[_denorm(queries,thing) for thing in iterable_of_things])) + + +def default_iget(fields,default=None,): + """ itemgetter with 'default' handling, that *always* returns lists + + API CHANGES from ``operator.itemgetter`` + + Note: Sorry to break the iget api... (fields vs *fields) + Note: *always* returns a list... unlike itemgetter, + which can return tuples or 'singles' + """ + myiget = operator.itemgetter(*fields) + L = len(fields) + def f(thing): + try: + ans = list(myiget(thing)) + if L < 2: + ans = [ans,] + return ans + except KeyError: + # slower! + return [thing.get(x,default) for x in fields] + + f.__doc__ = "itemgetter with default %r for fields %r" %(default,fields) + f.__name__ = "default_itemgetter" + return f + + +def flatten(*stack): + """ + helper function for flattening iterables of generators in a + sensible way. 
+ """ + stack = list(stack) + while stack: + try: x = stack[0].next() + except StopIteration: + stack.pop(0) + continue + if hasattr(x,'next') and callable(getattr(x,'next')): + stack.insert(0, x) + + #if isinstance(x, (GeneratorType,listerator)): + else: yield x + + +def _Q(filter_, thing): + """ underlying machinery for Q function recursion """ + T = type(thing) + if T is type({}): + for k,v in thing.iteritems(): + #print k,v + if filter_ == k: + if type(v) is type([]): + yield iter(v) + else: + yield v + + if type(v) in (type({}),type([])): + yield Q(filter_,v) + + elif T is type([]): + for k in thing: + #print k + yield Q(filter_,k) + + else: + # no recursion. + pass + +def Q(filter_,thing): + """ + type(filter): + - list: a flattened list of all searches (one list) + - dict: dict with vals each of which is that search + + Notes: + + [1] 'parent thing', with space, will do a descendent + [2] this will come back 'flattened' jQuery style + [3] returns a generator. Use ``Ql`` if you want a list. + + """ + if type(filter_) is type([]): + return flatten(*[_Q(x,thing) for x in filter_]) + elif type(filter_) is type({}): + d = dict.fromkeys(filter_.keys()) + #print d + for k in d: + #print flatten(Q(k,thing)) + d[k] = Q(k,thing) + + return d + + else: + if " " in filter_: # i.e. "antecendent post" + parts = filter_.strip().split() + r = None + for p in parts: + r = Ql(p,thing) + thing = r + + return r + + else: # simple. 
+ return flatten(_Q(filter_,thing)) + +def Ql(filter_,thing): + """ same as Q, but returns a list, not a generator """ + res = Q(filter_,thing) + + if type(filter_) is type({}): + for k in res: + res[k] = list(res[k]) + return res + + else: + return list(res) + + + +def countit(fields,iter_of_iter,default=None): + """ + note: robust to fields not being in i_of_i, using ``default`` + """ + C = Counter() # needs hashables + T = namedtuple("Thing",fields) + get = default_iget(*fields,default=default) + return Counter( + (T(*get(thing)) for thing in iter_of_iter) + ) + + +## right now this works for one row... +def printout(queries,things,default=None, f=sys.stdout, **kwargs): + """ will print header and objects + + **kwargs go to csv.DictWriter + + help(csv.DictWriter) for more. + """ + + results = denorm(queries,things,default=None) + fields = set(itertools.chain(*(x.keys() for x in results))) + + W = csv.DictWriter(f=f,fieldnames=fields,**kwargs) + #print "---prod---" + #print list(prod) + W.writeheader() + for r in results: + W.writerow(r) + + +def test_run(): + print "\n>>> print list(Q('url',ex1))" + print list(Q('url',ex1)) + assert list(Q('url',ex1)) == ['url1','url2','url3'] + assert Ql('url',ex1) == ['url1','url2','url3'] + + print "\n>>> print list(Q(['name','id'],ex1))" + print list(Q(['name','id'],ex1)) + assert Ql(['name','id'],ex1) == ['Gregg','hello','gbye'] + + + print "\n>>> print Ql('more url',ex1)" + print Ql('more url',ex1) + + + print "\n>>> list(Q('extensions',ex1))" + print list(Q('extensions',ex1)) + + print "\n>>> print Ql('extensions',ex1)" + print Ql('extensions',ex1) + + print "\n>>> printout(['name','extensions'],[ex1,], extrasaction='ignore')" + printout(['name','extensions'],[ex1,], extrasaction='ignore') + + print "\n\n" + + from pprint import pprint as pp + + print "-- note that the extension fields are also flattened! 
(and N/A) -- " + pp(denorm(['location','fxVersion','notthere','survey_data extensions'],[ex2,], default="N/A")[:2]) + + +if __name__ == "__main__": + pass diff --git a/setup.py b/setup.py index 030584ba509d3..ff40738ddfb78 100755 --- a/setup.py +++ b/setup.py @@ -244,12 +244,23 @@ def initialize_options(self): 'np_datetime_strings.c', 'period.c', 'tokenizer.c', - 'io.c'] + 'io.c', + 'ujson.c', + 'objToJSON.c', + 'JSONtoObj.c', + 'ultrajsonenc.c', + 'ultrajsondec.c', + ] for root, dirs, files in list(os.walk('pandas')): for f in files: if f in self._clean_exclude: continue + + # XXX + if 'ujson' in f: + continue + if os.path.splitext(f)[-1] in ('.pyc', '.so', '.o', '.pyo', '.pyd', '.c', '.orig'): @@ -457,6 +468,22 @@ def pxd(name): root, _ = os.path.splitext(ext.sources[0]) ext.sources[0] = root + suffix +ujson_ext = Extension('pandas.json', + depends=['pandas/src/ujson/lib/ultrajson.h'], + sources=['pandas/src/ujson/python/ujson.c', + 'pandas/src/ujson/python/objToJSON.c', + 'pandas/src/ujson/python/JSONtoObj.c', + 'pandas/src/ujson/lib/ultrajsonenc.c', + 'pandas/src/ujson/lib/ultrajsondec.c', + 'pandas/src/datetime/np_datetime.c', + 'pandas/src/datetime/np_datetime_strings.c'], + include_dirs=['pandas/src/ujson/python', + 'pandas/src/ujson/lib', + 'pandas/src/datetime'] + common_include) + + +extensions.append(ujson_ext) + if _have_setuptools: setuptools_kwargs["test_suite"] = "nose.collector" @@ -485,6 +512,7 @@ def pxd(name): 'pandas.tseries', 'pandas.tseries.tests', 'pandas.io.tests', + 'pandas.io.tests.test_json', 'pandas.stats.tests', ], package_data={'pandas.io': ['tests/data/legacy_hdf/*.h5',
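The recursive key search that `_Q`/`Q`/`Ql` implement in `scripts/json_manip.py` above can be sketched in a few lines of modern Python (simplified: no space-separated descendant queries, and it always returns a list rather than a generator):

```python
# Python 3 sketch of the "collect every value stored under key k,
# jQuery-style flattened" search from scripts/json_manip.py.
def ql(key, thing):
    out = []
    if isinstance(thing, dict):
        for k, v in thing.items():
            if k == key:
                # list answers are spliced in flat, like iter(v) in _Q
                out.extend(v if isinstance(v, list) else [v])
            if isinstance(v, (dict, list)):
                out.extend(ql(key, v))   # keep descending
    elif isinstance(thing, list):
        for item in thing:
            out.extend(ql(key, item))
    return out

# the ex1 fixture from the script above
ex1 = {"name": "Gregg",
       "extensions": [{"id": "hello", "url": "url1"},
                      {"id": "gbye", "url": "url2", "more": {"url": "url3"}}]}

assert ql("url", ex1) == ["url1", "url2", "url3"]
```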
This is @wesm's PR #3583, updated: it builds now and passes Travis on py2 and py3. There were 2 issues:

- `clean` was erasing the `*.c` files from ujson
- the module import didn't work because it was using the original init function

Converted to the new io API: `to_json` / `read_json`. Docs added.
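The new io API this PR introduces round-trips a frame through JSON; a minimal sketch (keyword defaults may differ slightly across pandas versions):

```python
from io import StringIO

import pandas as pd

df = pd.DataFrame({"A": [1.0, 2.0], "B": ["x", "y"]})

# "split" stores index, columns and data under separate JSON keys,
# so the frame can be reconstructed exactly
s = df.to_json(orient="split")
roundtrip = pd.read_json(StringIO(s), orient="split")

assert roundtrip["A"].tolist() == [1.0, 2.0]
assert roundtrip["B"].tolist() == ["x", "y"]
```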
https://api.github.com/repos/pandas-dev/pandas/pulls/3804
2013-06-07T23:05:47Z
2013-06-11T19:18:34Z
2013-06-11T19:18:34Z
2014-06-13T20:06:58Z
BLD: Add useful shortcuts to Makefile
diff --git a/Makefile b/Makefile index af44d0223938a..6b7e02404525b 100644 --- a/Makefile +++ b/Makefile @@ -1,11 +1,27 @@ -clean: +.PHONY : clean develop build clean clean_pyc tseries doc + +clean: clean_pyc -rm -rf build dist + -find . -name '*.so' -exec rm -f {} \; + +clean_pyc: + -find . -name '*.pyc' -exec rm -f {} \; tseries: pandas/lib.pyx pandas/tslib.pyx pandas/hashtable.pyx python setup.py build_ext --inplace sparse: pandas/src/sparse.pyx - -python setup.py build_ext --inplace + python setup.py build_ext --inplace + +build: clean_pyc + python setup.py build_ext --inplace + +develop: build + -python setup.py develop -test: sparse - -python pandas/tests/test_libsparse.py \ No newline at end of file +doc: + -rm -rf doc/build + -rm -rf doc/source/generated + cd doc; \ + python make.py clean; \ + python make.py html
Add some shortcuts to make it easier to develop with pandas. Now there's a set of commands in the `Makefile` in the top-level pandas directory with the following functionality:

- `make clean` will delete the `build` and `dist` directories plus all `*.pyc` and `*.so` files
- `make clean_pyc` just removes `*.pyc` files
- `make build` will build extensions in place
- `make develop` will install `pandas` in your environment, but will place a link to the dev dir so that you can make changes and they will show up immediately
- `make doc` will build the documentation from scratch (erases the `generated` and `build` directories)
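A minimal sketch of the two Makefile idioms the diff relies on (hypothetical targets, not the actual pandas Makefile):

```make
# ".PHONY" marks targets that are commands rather than files, so make
# always runs them; a leading "-" makes a failing command non-fatal.
.PHONY: clean build

clean:
	-rm -rf build dist            # ignored if the directories are absent
	-find . -name '*.pyc' -delete

build: clean                          # "build" re-runs "clean" first
	python setup.py build_ext --inplace
```

Chaining prerequisites this way is what lets `make develop` in the PR rebuild extensions before installing.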
https://api.github.com/repos/pandas-dev/pandas/pulls/3803
2013-06-07T23:05:25Z
2013-06-18T22:30:01Z
2013-06-18T22:30:01Z
2014-07-16T08:12:32Z
DOC: add link to tips data set in rplot docs
diff --git a/doc/data/tips.csv b/doc/data/tips.csv index c4558cce4ce36..856a65a69e647 100644 --- a/doc/data/tips.csv +++ b/doc/data/tips.csv @@ -1,245 +1,245 @@ -obs,totbill,tip,sex,smoker,day,time,size -1,16.99, 1.01,F,No,Sun,Night,2 -2,10.34, 1.66,M,No,Sun,Night,3 -3,21.01, 3.50,M,No,Sun,Night,3 -4,23.68, 3.31,M,No,Sun,Night,2 -5,24.59, 3.61,F,No,Sun,Night,4 -6,25.29, 4.71,M,No,Sun,Night,4 -7, 8.77, 2.00,M,No,Sun,Night,2 -8,26.88, 3.12,M,No,Sun,Night,4 -9,15.04, 1.96,M,No,Sun,Night,2 -10,14.78, 3.23,M,No,Sun,Night,2 -11,10.27, 1.71,M,No,Sun,Night,2 -12,35.26, 5.00,F,No,Sun,Night,4 -13,15.42, 1.57,M,No,Sun,Night,2 -14,18.43, 3.00,M,No,Sun,Night,4 -15,14.83, 3.02,F,No,Sun,Night,2 -16,21.58, 3.92,M,No,Sun,Night,2 -17,10.33, 1.67,F,No,Sun,Night,3 -18,16.29, 3.71,M,No,Sun,Night,3 -19,16.97, 3.50,F,No,Sun,Night,3 -20,20.65, 3.35,M,No,Sat,Night,3 -21,17.92, 4.08,M,No,Sat,Night,2 -22,20.29, 2.75,F,No,Sat,Night,2 -23,15.77, 2.23,F,No,Sat,Night,2 -24,39.42, 7.58,M,No,Sat,Night,4 -25,19.82, 3.18,M,No,Sat,Night,2 -26,17.81, 2.34,M,No,Sat,Night,4 -27,13.37, 2.00,M,No,Sat,Night,2 -28,12.69, 2.00,M,No,Sat,Night,2 -29,21.70, 4.30,M,No,Sat,Night,2 -30,19.65, 3.00,F,No,Sat,Night,2 -31, 9.55, 1.45,M,No,Sat,Night,2 -32,18.35, 2.50,M,No,Sat,Night,4 -33,15.06, 3.00,F,No,Sat,Night,2 -34,20.69, 2.45,F,No,Sat,Night,4 -35,17.78, 3.27,M,No,Sat,Night,2 -36,24.06, 3.60,M,No,Sat,Night,3 -37,16.31, 2.00,M,No,Sat,Night,3 -38,16.93, 3.07,F,No,Sat,Night,3 -39,18.69, 2.31,M,No,Sat,Night,3 -40,31.27, 5.00,M,No,Sat,Night,3 -41,16.04, 2.24,M,No,Sat,Night,3 -42,17.46, 2.54,M,No,Sun,Night,2 -43,13.94, 3.06,M,No,Sun,Night,2 -44, 9.68, 1.32,M,No,Sun,Night,2 -45,30.40, 5.60,M,No,Sun,Night,4 -46,18.29, 3.00,M,No,Sun,Night,2 -47,22.23, 5.00,M,No,Sun,Night,2 -48,32.40, 6.00,M,No,Sun,Night,4 -49,28.55, 2.05,M,No,Sun,Night,3 -50,18.04, 3.00,M,No,Sun,Night,2 -51,12.54, 2.50,M,No,Sun,Night,2 -52,10.29, 2.60,F,No,Sun,Night,2 -53,34.81, 5.20,F,No,Sun,Night,4 -54, 9.94, 1.56,M,No,Sun,Night,2 -55,25.56, 
4.34,M,No,Sun,Night,4 -56,19.49, 3.51,M,No,Sun,Night,2 -57,38.01, 3.00,M,Yes,Sat,Night,4 -58,26.41, 1.50,F,No,Sat,Night,2 -59,11.24, 1.76,M,Yes,Sat,Night,2 -60,48.27, 6.73,M,No,Sat,Night,4 -61,20.29, 3.21,M,Yes,Sat,Night,2 -62,13.81, 2.00,M,Yes,Sat,Night,2 -63,11.02, 1.98,M,Yes,Sat,Night,2 -64,18.29, 3.76,M,Yes,Sat,Night,4 -65,17.59, 2.64,M,No,Sat,Night,3 -66,20.08, 3.15,M,No,Sat,Night,3 -67,16.45, 2.47,F,No,Sat,Night,2 -68, 3.07, 1.00,F,Yes,Sat,Night,1 -69,20.23, 2.01,M,No,Sat,Night,2 -70,15.01, 2.09,M,Yes,Sat,Night,2 -71,12.02, 1.97,M,No,Sat,Night,2 -72,17.07, 3.00,F,No,Sat,Night,3 -73,26.86, 3.14,F,Yes,Sat,Night,2 -74,25.28, 5.00,F,Yes,Sat,Night,2 -75,14.73, 2.20,F,No,Sat,Night,2 -76,10.51, 1.25,M,No,Sat,Night,2 -77,17.92, 3.08,M,Yes,Sat,Night,2 -78,27.20, 4.00,M,No,Thu,Day,4 -79,22.76, 3.00,M,No,Thu,Day,2 -80,17.29, 2.71,M,No,Thu,Day,2 -81,19.44, 3.00,M,Yes,Thu,Day,2 -82,16.66, 3.40,M,No,Thu,Day,2 -83,10.07, 1.83,F,No,Thu,Day,1 -84,32.68, 5.00,M,Yes,Thu,Day,2 -85,15.98, 2.03,M,No,Thu,Day,2 -86,34.83, 5.17,F,No,Thu,Day,4 -87,13.03, 2.00,M,No,Thu,Day,2 -88,18.28, 4.00,M,No,Thu,Day,2 -89,24.71, 5.85,M,No,Thu,Day,2 -90,21.16, 3.00,M,No,Thu,Day,2 -91,28.97, 3.00,M,Yes,Fri,Night,2 -92,22.49, 3.50,M,No,Fri,Night,2 -93, 5.75, 1.00,F,Yes,Fri,Night,2 -94,16.32, 4.30,F,Yes,Fri,Night,2 -95,22.75, 3.25,F,No,Fri,Night,2 -96,40.17, 4.73,M,Yes,Fri,Night,4 -97,27.28, 4.00,M,Yes,Fri,Night,2 -98,12.03, 1.50,M,Yes,Fri,Night,2 -99,21.01, 3.00,M,Yes,Fri,Night,2 -100,12.46, 1.50,M,No,Fri,Night,2 -101,11.35, 2.50,F,Yes,Fri,Night,2 -102,15.38, 3.00,F,Yes,Fri,Night,2 -103,44.30, 2.50,F,Yes,Sat,Night,3 -104,22.42, 3.48,F,Yes,Sat,Night,2 -105,20.92, 4.08,F,No,Sat,Night,2 -106,15.36, 1.64,M,Yes,Sat,Night,2 -107,20.49, 4.06,M,Yes,Sat,Night,2 -108,25.21, 4.29,M,Yes,Sat,Night,2 -109,18.24, 3.76,M,No,Sat,Night,2 -110,14.31, 4.00,F,Yes,Sat,Night,2 -111,14.00, 3.00,M,No,Sat,Night,2 -112, 7.25, 1.00,F,No,Sat,Night,1 -113,38.07, 4.00,M,No,Sun,Night,3 -114,23.95, 2.55,M,No,Sun,Night,2 -115,25.71, 
4.00,F,No,Sun,Night,3 -116,17.31, 3.50,F,No,Sun,Night,2 -117,29.93, 5.07,M,No,Sun,Night,4 -118,10.65, 1.50,F,No,Thu,Day,2 -119,12.43, 1.80,F,No,Thu,Day,2 -120,24.08, 2.92,F,No,Thu,Day,4 -121,11.69, 2.31,M,No,Thu,Day,2 -122,13.42, 1.68,F,No,Thu,Day,2 -123,14.26, 2.50,M,No,Thu,Day,2 -124,15.95, 2.00,M,No,Thu,Day,2 -125,12.48, 2.52,F,No,Thu,Day,2 -126,29.80, 4.20,F,No,Thu,Day,6 -127, 8.52, 1.48,M,No,Thu,Day,2 -128,14.52, 2.00,F,No,Thu,Day,2 -129,11.38, 2.00,F,No,Thu,Day,2 -130,22.82, 2.18,M,No,Thu,Day,3 -131,19.08, 1.50,M,No,Thu,Day,2 -132,20.27, 2.83,F,No,Thu,Day,2 -133,11.17, 1.50,F,No,Thu,Day,2 -134,12.26, 2.00,F,No,Thu,Day,2 -135,18.26, 3.25,F,No,Thu,Day,2 -136, 8.51, 1.25,F,No,Thu,Day,2 -137,10.33, 2.00,F,No,Thu,Day,2 -138,14.15, 2.00,F,No,Thu,Day,2 -139,16.00, 2.00,M,Yes,Thu,Day,2 -140,13.16, 2.75,F,No,Thu,Day,2 -141,17.47, 3.50,F,No,Thu,Day,2 -142,34.30, 6.70,M,No,Thu,Day,6 -143,41.19, 5.00,M,No,Thu,Day,5 -144,27.05, 5.00,F,No,Thu,Day,6 -145,16.43, 2.30,F,No,Thu,Day,2 -146, 8.35, 1.50,F,No,Thu,Day,2 -147,18.64, 1.36,F,No,Thu,Day,3 -148,11.87, 1.63,F,No,Thu,Day,2 -149, 9.78, 1.73,M,No,Thu,Day,2 -150, 7.51, 2.00,M,No,Thu,Day,2 -151,14.07, 2.50,M,No,Sun,Night,2 -152,13.13, 2.00,M,No,Sun,Night,2 -153,17.26, 2.74,M,No,Sun,Night,3 -154,24.55, 2.00,M,No,Sun,Night,4 -155,19.77, 2.00,M,No,Sun,Night,4 -156,29.85, 5.14,F,No,Sun,Night,5 -157,48.17, 5.00,M,No,Sun,Night,6 -158,25.00, 3.75,F,No,Sun,Night,4 -159,13.39, 2.61,F,No,Sun,Night,2 -160,16.49, 2.00,M,No,Sun,Night,4 -161,21.50, 3.50,M,No,Sun,Night,4 -162,12.66, 2.50,M,No,Sun,Night,2 -163,16.21, 2.00,F,No,Sun,Night,3 -164,13.81, 2.00,M,No,Sun,Night,2 -165,17.51, 3.00,F,Yes,Sun,Night,2 -166,24.52, 3.48,M,No,Sun,Night,3 -167,20.76, 2.24,M,No,Sun,Night,2 -168,31.71, 4.50,M,No,Sun,Night,4 -169,10.59, 1.61,F,Yes,Sat,Night,2 -170,10.63, 2.00,F,Yes,Sat,Night,2 -171,50.81,10.00,M,Yes,Sat,Night,3 -172,15.81, 3.16,M,Yes,Sat,Night,2 -173, 7.25, 5.15,M,Yes,Sun,Night,2 -174,31.85, 3.18,M,Yes,Sun,Night,2 -175,16.82, 
4.00,M,Yes,Sun,Night,2 -176,32.90, 3.11,M,Yes,Sun,Night,2 -177,17.89, 2.00,M,Yes,Sun,Night,2 -178,14.48, 2.00,M,Yes,Sun,Night,2 -179, 9.60, 4.00,F,Yes,Sun,Night,2 -180,34.63, 3.55,M,Yes,Sun,Night,2 -181,34.65, 3.68,M,Yes,Sun,Night,4 -182,23.33, 5.65,M,Yes,Sun,Night,2 -183,45.35, 3.50,M,Yes,Sun,Night,3 -184,23.17, 6.50,M,Yes,Sun,Night,4 -185,40.55, 3.00,M,Yes,Sun,Night,2 -186,20.69, 5.00,M,No,Sun,Night,5 -187,20.90, 3.50,F,Yes,Sun,Night,3 -188,30.46, 2.00,M,Yes,Sun,Night,5 -189,18.15, 3.50,F,Yes,Sun,Night,3 -190,23.10, 4.00,M,Yes,Sun,Night,3 -191,15.69, 1.50,M,Yes,Sun,Night,2 -192,19.81, 4.19,F,Yes,Thu,Day,2 -193,28.44, 2.56,M,Yes,Thu,Day,2 -194,15.48, 2.02,M,Yes,Thu,Day,2 -195,16.58, 4.00,M,Yes,Thu,Day,2 -196, 7.56, 1.44,M,No,Thu,Day,2 -197,10.34, 2.00,M,Yes,Thu,Day,2 -198,43.11, 5.00,F,Yes,Thu,Day,4 -199,13.00, 2.00,F,Yes,Thu,Day,2 -200,13.51, 2.00,M,Yes,Thu,Day,2 -201,18.71, 4.00,M,Yes,Thu,Day,3 -202,12.74, 2.01,F,Yes,Thu,Day,2 -203,13.00, 2.00,F,Yes,Thu,Day,2 -204,16.40, 2.50,F,Yes,Thu,Day,2 -205,20.53, 4.00,M,Yes,Thu,Day,4 -206,16.47, 3.23,F,Yes,Thu,Day,3 -207,26.59, 3.41,M,Yes,Sat,Night,3 -208,38.73, 3.00,M,Yes,Sat,Night,4 -209,24.27, 2.03,M,Yes,Sat,Night,2 -210,12.76, 2.23,F,Yes,Sat,Night,2 -211,30.06, 2.00,M,Yes,Sat,Night,3 -212,25.89, 5.16,M,Yes,Sat,Night,4 -213,48.33, 9.00,M,No,Sat,Night,4 -214,13.27, 2.50,F,Yes,Sat,Night,2 -215,28.17, 6.50,F,Yes,Sat,Night,3 -216,12.90, 1.10,F,Yes,Sat,Night,2 -217,28.15, 3.00,M,Yes,Sat,Night,5 -218,11.59, 1.50,M,Yes,Sat,Night,2 -219, 7.74, 1.44,M,Yes,Sat,Night,2 -220,30.14, 3.09,F,Yes,Sat,Night,4 -221,12.16, 2.20,M,Yes,Fri,Day,2 -222,13.42, 3.48,F,Yes,Fri,Day,2 -223, 8.58, 1.92,M,Yes,Fri,Day,1 -224,15.98, 3.00,F,No,Fri,Day,3 -225,13.42, 1.58,M,Yes,Fri,Day,2 -226,16.27, 2.50,F,Yes,Fri,Day,2 -227,10.09, 2.00,F,Yes,Fri,Day,2 -228,20.45, 3.00,M,No,Sat,Night,4 -229,13.28, 2.72,M,No,Sat,Night,2 -230,22.12, 2.88,F,Yes,Sat,Night,2 -231,24.01, 2.00,M,Yes,Sat,Night,4 -232,15.69, 3.00,M,Yes,Sat,Night,3 -233,11.61, 
3.39,M,No,Sat,Night,2 -234,10.77, 1.47,M,No,Sat,Night,2 -235,15.53, 3.00,M,Yes,Sat,Night,2 -236,10.07, 1.25,M,No,Sat,Night,2 -237,12.60, 1.00,M,Yes,Sat,Night,2 -238,32.83, 1.17,M,Yes,Sat,Night,2 -239,35.83, 4.67,F,No,Sat,Night,3 -240,29.03, 5.92,M,No,Sat,Night,3 -241,27.18, 2.00,F,Yes,Sat,Night,2 -242,22.67, 2.00,M,Yes,Sat,Night,2 -243,17.82, 1.75,M,No,Sat,Night,2 -244,18.78, 3.00,F,No,Thu,Night,2 +total_bill,tip,sex,smoker,day,time,size +16.99,1.01,Female,No,Sun,Dinner,2 +10.34,1.66,Male,No,Sun,Dinner,3 +21.01,3.5,Male,No,Sun,Dinner,3 +23.68,3.31,Male,No,Sun,Dinner,2 +24.59,3.61,Female,No,Sun,Dinner,4 +25.29,4.71,Male,No,Sun,Dinner,4 +8.77,2.0,Male,No,Sun,Dinner,2 +26.88,3.12,Male,No,Sun,Dinner,4 +15.04,1.96,Male,No,Sun,Dinner,2 +14.78,3.23,Male,No,Sun,Dinner,2 +10.27,1.71,Male,No,Sun,Dinner,2 +35.26,5.0,Female,No,Sun,Dinner,4 +15.42,1.57,Male,No,Sun,Dinner,2 +18.43,3.0,Male,No,Sun,Dinner,4 +14.83,3.02,Female,No,Sun,Dinner,2 +21.58,3.92,Male,No,Sun,Dinner,2 +10.33,1.67,Female,No,Sun,Dinner,3 +16.29,3.71,Male,No,Sun,Dinner,3 +16.97,3.5,Female,No,Sun,Dinner,3 +20.65,3.35,Male,No,Sat,Dinner,3 +17.92,4.08,Male,No,Sat,Dinner,2 +20.29,2.75,Female,No,Sat,Dinner,2 +15.77,2.23,Female,No,Sat,Dinner,2 +39.42,7.58,Male,No,Sat,Dinner,4 +19.82,3.18,Male,No,Sat,Dinner,2 +17.81,2.34,Male,No,Sat,Dinner,4 +13.37,2.0,Male,No,Sat,Dinner,2 +12.69,2.0,Male,No,Sat,Dinner,2 +21.7,4.3,Male,No,Sat,Dinner,2 +19.65,3.0,Female,No,Sat,Dinner,2 +9.55,1.45,Male,No,Sat,Dinner,2 +18.35,2.5,Male,No,Sat,Dinner,4 +15.06,3.0,Female,No,Sat,Dinner,2 +20.69,2.45,Female,No,Sat,Dinner,4 +17.78,3.27,Male,No,Sat,Dinner,2 +24.06,3.6,Male,No,Sat,Dinner,3 +16.31,2.0,Male,No,Sat,Dinner,3 +16.93,3.07,Female,No,Sat,Dinner,3 +18.69,2.31,Male,No,Sat,Dinner,3 +31.27,5.0,Male,No,Sat,Dinner,3 +16.04,2.24,Male,No,Sat,Dinner,3 +17.46,2.54,Male,No,Sun,Dinner,2 +13.94,3.06,Male,No,Sun,Dinner,2 +9.68,1.32,Male,No,Sun,Dinner,2 +30.4,5.6,Male,No,Sun,Dinner,4 +18.29,3.0,Male,No,Sun,Dinner,2 +22.23,5.0,Male,No,Sun,Dinner,2 
+32.4,6.0,Male,No,Sun,Dinner,4 +28.55,2.05,Male,No,Sun,Dinner,3 +18.04,3.0,Male,No,Sun,Dinner,2 +12.54,2.5,Male,No,Sun,Dinner,2 +10.29,2.6,Female,No,Sun,Dinner,2 +34.81,5.2,Female,No,Sun,Dinner,4 +9.94,1.56,Male,No,Sun,Dinner,2 +25.56,4.34,Male,No,Sun,Dinner,4 +19.49,3.51,Male,No,Sun,Dinner,2 +38.01,3.0,Male,Yes,Sat,Dinner,4 +26.41,1.5,Female,No,Sat,Dinner,2 +11.24,1.76,Male,Yes,Sat,Dinner,2 +48.27,6.73,Male,No,Sat,Dinner,4 +20.29,3.21,Male,Yes,Sat,Dinner,2 +13.81,2.0,Male,Yes,Sat,Dinner,2 +11.02,1.98,Male,Yes,Sat,Dinner,2 +18.29,3.76,Male,Yes,Sat,Dinner,4 +17.59,2.64,Male,No,Sat,Dinner,3 +20.08,3.15,Male,No,Sat,Dinner,3 +16.45,2.47,Female,No,Sat,Dinner,2 +3.07,1.0,Female,Yes,Sat,Dinner,1 +20.23,2.01,Male,No,Sat,Dinner,2 +15.01,2.09,Male,Yes,Sat,Dinner,2 +12.02,1.97,Male,No,Sat,Dinner,2 +17.07,3.0,Female,No,Sat,Dinner,3 +26.86,3.14,Female,Yes,Sat,Dinner,2 +25.28,5.0,Female,Yes,Sat,Dinner,2 +14.73,2.2,Female,No,Sat,Dinner,2 +10.51,1.25,Male,No,Sat,Dinner,2 +17.92,3.08,Male,Yes,Sat,Dinner,2 +27.2,4.0,Male,No,Thur,Lunch,4 +22.76,3.0,Male,No,Thur,Lunch,2 +17.29,2.71,Male,No,Thur,Lunch,2 +19.44,3.0,Male,Yes,Thur,Lunch,2 +16.66,3.4,Male,No,Thur,Lunch,2 +10.07,1.83,Female,No,Thur,Lunch,1 +32.68,5.0,Male,Yes,Thur,Lunch,2 +15.98,2.03,Male,No,Thur,Lunch,2 +34.83,5.17,Female,No,Thur,Lunch,4 +13.03,2.0,Male,No,Thur,Lunch,2 +18.28,4.0,Male,No,Thur,Lunch,2 +24.71,5.85,Male,No,Thur,Lunch,2 +21.16,3.0,Male,No,Thur,Lunch,2 +28.97,3.0,Male,Yes,Fri,Dinner,2 +22.49,3.5,Male,No,Fri,Dinner,2 +5.75,1.0,Female,Yes,Fri,Dinner,2 +16.32,4.3,Female,Yes,Fri,Dinner,2 +22.75,3.25,Female,No,Fri,Dinner,2 +40.17,4.73,Male,Yes,Fri,Dinner,4 +27.28,4.0,Male,Yes,Fri,Dinner,2 +12.03,1.5,Male,Yes,Fri,Dinner,2 +21.01,3.0,Male,Yes,Fri,Dinner,2 +12.46,1.5,Male,No,Fri,Dinner,2 +11.35,2.5,Female,Yes,Fri,Dinner,2 +15.38,3.0,Female,Yes,Fri,Dinner,2 +44.3,2.5,Female,Yes,Sat,Dinner,3 +22.42,3.48,Female,Yes,Sat,Dinner,2 +20.92,4.08,Female,No,Sat,Dinner,2 +15.36,1.64,Male,Yes,Sat,Dinner,2 
+20.49,4.06,Male,Yes,Sat,Dinner,2 +25.21,4.29,Male,Yes,Sat,Dinner,2 +18.24,3.76,Male,No,Sat,Dinner,2 +14.31,4.0,Female,Yes,Sat,Dinner,2 +14.0,3.0,Male,No,Sat,Dinner,2 +7.25,1.0,Female,No,Sat,Dinner,1 +38.07,4.0,Male,No,Sun,Dinner,3 +23.95,2.55,Male,No,Sun,Dinner,2 +25.71,4.0,Female,No,Sun,Dinner,3 +17.31,3.5,Female,No,Sun,Dinner,2 +29.93,5.07,Male,No,Sun,Dinner,4 +10.65,1.5,Female,No,Thur,Lunch,2 +12.43,1.8,Female,No,Thur,Lunch,2 +24.08,2.92,Female,No,Thur,Lunch,4 +11.69,2.31,Male,No,Thur,Lunch,2 +13.42,1.68,Female,No,Thur,Lunch,2 +14.26,2.5,Male,No,Thur,Lunch,2 +15.95,2.0,Male,No,Thur,Lunch,2 +12.48,2.52,Female,No,Thur,Lunch,2 +29.8,4.2,Female,No,Thur,Lunch,6 +8.52,1.48,Male,No,Thur,Lunch,2 +14.52,2.0,Female,No,Thur,Lunch,2 +11.38,2.0,Female,No,Thur,Lunch,2 +22.82,2.18,Male,No,Thur,Lunch,3 +19.08,1.5,Male,No,Thur,Lunch,2 +20.27,2.83,Female,No,Thur,Lunch,2 +11.17,1.5,Female,No,Thur,Lunch,2 +12.26,2.0,Female,No,Thur,Lunch,2 +18.26,3.25,Female,No,Thur,Lunch,2 +8.51,1.25,Female,No,Thur,Lunch,2 +10.33,2.0,Female,No,Thur,Lunch,2 +14.15,2.0,Female,No,Thur,Lunch,2 +16.0,2.0,Male,Yes,Thur,Lunch,2 +13.16,2.75,Female,No,Thur,Lunch,2 +17.47,3.5,Female,No,Thur,Lunch,2 +34.3,6.7,Male,No,Thur,Lunch,6 +41.19,5.0,Male,No,Thur,Lunch,5 +27.05,5.0,Female,No,Thur,Lunch,6 +16.43,2.3,Female,No,Thur,Lunch,2 +8.35,1.5,Female,No,Thur,Lunch,2 +18.64,1.36,Female,No,Thur,Lunch,3 +11.87,1.63,Female,No,Thur,Lunch,2 +9.78,1.73,Male,No,Thur,Lunch,2 +7.51,2.0,Male,No,Thur,Lunch,2 +14.07,2.5,Male,No,Sun,Dinner,2 +13.13,2.0,Male,No,Sun,Dinner,2 +17.26,2.74,Male,No,Sun,Dinner,3 +24.55,2.0,Male,No,Sun,Dinner,4 +19.77,2.0,Male,No,Sun,Dinner,4 +29.85,5.14,Female,No,Sun,Dinner,5 +48.17,5.0,Male,No,Sun,Dinner,6 +25.0,3.75,Female,No,Sun,Dinner,4 +13.39,2.61,Female,No,Sun,Dinner,2 +16.49,2.0,Male,No,Sun,Dinner,4 +21.5,3.5,Male,No,Sun,Dinner,4 +12.66,2.5,Male,No,Sun,Dinner,2 +16.21,2.0,Female,No,Sun,Dinner,3 +13.81,2.0,Male,No,Sun,Dinner,2 +17.51,3.0,Female,Yes,Sun,Dinner,2 +24.52,3.48,Male,No,Sun,Dinner,3 
+20.76,2.24,Male,No,Sun,Dinner,2 +31.71,4.5,Male,No,Sun,Dinner,4 +10.59,1.61,Female,Yes,Sat,Dinner,2 +10.63,2.0,Female,Yes,Sat,Dinner,2 +50.81,10.0,Male,Yes,Sat,Dinner,3 +15.81,3.16,Male,Yes,Sat,Dinner,2 +7.25,5.15,Male,Yes,Sun,Dinner,2 +31.85,3.18,Male,Yes,Sun,Dinner,2 +16.82,4.0,Male,Yes,Sun,Dinner,2 +32.9,3.11,Male,Yes,Sun,Dinner,2 +17.89,2.0,Male,Yes,Sun,Dinner,2 +14.48,2.0,Male,Yes,Sun,Dinner,2 +9.6,4.0,Female,Yes,Sun,Dinner,2 +34.63,3.55,Male,Yes,Sun,Dinner,2 +34.65,3.68,Male,Yes,Sun,Dinner,4 +23.33,5.65,Male,Yes,Sun,Dinner,2 +45.35,3.5,Male,Yes,Sun,Dinner,3 +23.17,6.5,Male,Yes,Sun,Dinner,4 +40.55,3.0,Male,Yes,Sun,Dinner,2 +20.69,5.0,Male,No,Sun,Dinner,5 +20.9,3.5,Female,Yes,Sun,Dinner,3 +30.46,2.0,Male,Yes,Sun,Dinner,5 +18.15,3.5,Female,Yes,Sun,Dinner,3 +23.1,4.0,Male,Yes,Sun,Dinner,3 +15.69,1.5,Male,Yes,Sun,Dinner,2 +19.81,4.19,Female,Yes,Thur,Lunch,2 +28.44,2.56,Male,Yes,Thur,Lunch,2 +15.48,2.02,Male,Yes,Thur,Lunch,2 +16.58,4.0,Male,Yes,Thur,Lunch,2 +7.56,1.44,Male,No,Thur,Lunch,2 +10.34,2.0,Male,Yes,Thur,Lunch,2 +43.11,5.0,Female,Yes,Thur,Lunch,4 +13.0,2.0,Female,Yes,Thur,Lunch,2 +13.51,2.0,Male,Yes,Thur,Lunch,2 +18.71,4.0,Male,Yes,Thur,Lunch,3 +12.74,2.01,Female,Yes,Thur,Lunch,2 +13.0,2.0,Female,Yes,Thur,Lunch,2 +16.4,2.5,Female,Yes,Thur,Lunch,2 +20.53,4.0,Male,Yes,Thur,Lunch,4 +16.47,3.23,Female,Yes,Thur,Lunch,3 +26.59,3.41,Male,Yes,Sat,Dinner,3 +38.73,3.0,Male,Yes,Sat,Dinner,4 +24.27,2.03,Male,Yes,Sat,Dinner,2 +12.76,2.23,Female,Yes,Sat,Dinner,2 +30.06,2.0,Male,Yes,Sat,Dinner,3 +25.89,5.16,Male,Yes,Sat,Dinner,4 +48.33,9.0,Male,No,Sat,Dinner,4 +13.27,2.5,Female,Yes,Sat,Dinner,2 +28.17,6.5,Female,Yes,Sat,Dinner,3 +12.9,1.1,Female,Yes,Sat,Dinner,2 +28.15,3.0,Male,Yes,Sat,Dinner,5 +11.59,1.5,Male,Yes,Sat,Dinner,2 +7.74,1.44,Male,Yes,Sat,Dinner,2 +30.14,3.09,Female,Yes,Sat,Dinner,4 +12.16,2.2,Male,Yes,Fri,Lunch,2 +13.42,3.48,Female,Yes,Fri,Lunch,2 +8.58,1.92,Male,Yes,Fri,Lunch,1 +15.98,3.0,Female,No,Fri,Lunch,3 +13.42,1.58,Male,Yes,Fri,Lunch,2 
+16.27,2.5,Female,Yes,Fri,Lunch,2 +10.09,2.0,Female,Yes,Fri,Lunch,2 +20.45,3.0,Male,No,Sat,Dinner,4 +13.28,2.72,Male,No,Sat,Dinner,2 +22.12,2.88,Female,Yes,Sat,Dinner,2 +24.01,2.0,Male,Yes,Sat,Dinner,4 +15.69,3.0,Male,Yes,Sat,Dinner,3 +11.61,3.39,Male,No,Sat,Dinner,2 +10.77,1.47,Male,No,Sat,Dinner,2 +15.53,3.0,Male,Yes,Sat,Dinner,2 +10.07,1.25,Male,No,Sat,Dinner,2 +12.6,1.0,Male,Yes,Sat,Dinner,2 +32.83,1.17,Male,Yes,Sat,Dinner,2 +35.83,4.67,Female,No,Sat,Dinner,3 +29.03,5.92,Male,No,Sat,Dinner,3 +27.18,2.0,Female,Yes,Sat,Dinner,2 +22.67,2.0,Male,Yes,Sat,Dinner,2 +17.82,1.75,Male,No,Sat,Dinner,2 +18.78,3.0,Female,No,Thur,Dinner,2 diff --git a/doc/source/rplot.rst b/doc/source/rplot.rst index 1f33c789ee3ca..e9bae8502996f 100644 --- a/doc/source/rplot.rst +++ b/doc/source/rplot.rst @@ -22,6 +22,18 @@ Trellis plotting interface ************************** +.. note:: + + The tips data set can be downloaded `here + <http://wesmckinney.com/files/tips.csv>`_. Once you download it execute + + .. code-block:: python + + from pandas import read_csv + tips_data = read_csv('tips.csv') + + from the directory where you downloaded the file. + We import the rplot API: .. ipython:: python @@ -38,7 +50,7 @@ RPlot is a flexible API for producing Trellis plots. 
These plots allow you to ar plt.figure() - plot = rplot.RPlot(tips_data, x='totbill', y='tip') + plot = rplot.RPlot(tips_data, x='total_bill', y='tip') plot.add(rplot.TrellisGrid(['sex', 'smoker'])) plot.add(rplot.GeomHistogram()) @@ -51,7 +63,7 @@ In the example above, data from the tips data set is arranged by the attributes plt.figure() - plot = rplot.RPlot(tips_data, x='totbill', y='tip') + plot = rplot.RPlot(tips_data, x='total_bill', y='tip') plot.add(rplot.TrellisGrid(['sex', 'smoker'])) plot.add(rplot.GeomDensity()) @@ -64,7 +76,7 @@ Example above is the same as previous except the plot is set to kernel density e plt.figure() - plot = rplot.RPlot(tips_data, x='totbill', y='tip') + plot = rplot.RPlot(tips_data, x='total_bill', y='tip') plot.add(rplot.TrellisGrid(['sex', 'smoker'])) plot.add(rplot.GeomScatter()) plot.add(rplot.GeomPolyFit(degree=2)) @@ -78,7 +90,7 @@ The plot above shows that it is possible to have two or more plots for the same plt.figure() - plot = rplot.RPlot(tips_data, x='totbill', y='tip') + plot = rplot.RPlot(tips_data, x='total_bill', y='tip') plot.add(rplot.TrellisGrid(['sex', 'smoker'])) plot.add(rplot.GeomScatter()) plot.add(rplot.GeomDensity2D()) @@ -92,7 +104,7 @@ Above is a similar plot but with 2D kernel desnity estimation plot superimposed. plt.figure() - plot = rplot.RPlot(tips_data, x='totbill', y='tip') + plot = rplot.RPlot(tips_data, x='total_bill', y='tip') plot.add(rplot.TrellisGrid(['sex', '.'])) plot.add(rplot.GeomHistogram()) @@ -105,7 +117,7 @@ It is possible to only use one attribute for grouping data. 
The example above on plt.figure() - plot = rplot.RPlot(tips_data, x='totbill', y='tip') + plot = rplot.RPlot(tips_data, x='total_bill', y='tip') plot.add(rplot.TrellisGrid(['.', 'smoker'])) plot.add(rplot.GeomHistogram()) @@ -118,11 +130,11 @@ If the first grouping attribute is not specified the plots will be arranged in a plt.figure() - plot = rplot.RPlot(tips_data, x='totbill', y='tip') + plot = rplot.RPlot(tips_data, x='total_bill', y='tip') plot.add(rplot.TrellisGrid(['.', 'smoker'])) plot.add(rplot.GeomHistogram()) - plot = rplot.RPlot(tips_data, x='tip', y='totbill') + plot = rplot.RPlot(tips_data, x='tip', y='total_bill') plot.add(rplot.TrellisGrid(['sex', 'smoker'])) plot.add(rplot.GeomPoint(size=80.0, colour=rplot.ScaleRandomColour('day'), shape=rplot.ScaleShape('size'), alpha=1.0))
closes #3799.
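The diff above rewrites `doc/source/tips.csv` with new column names (`total_bill` instead of `totbill`, `Lunch`/`Dinner` in a `time` column) that the updated rplot docs reference. As a quick stdlib sanity check of the new header, using sample rows copied from the diff (this snippet is illustrative, not part of the PR):

```python
import csv
import io

# First two data rows of the rewritten tips.csv, copied from the diff above.
sample = """total_bill,tip,sex,smoker,day,time,size
16.99,1.01,Female,No,Sun,Dinner,2
10.34,1.66,Male,No,Sun,Dinner,3
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# The docs' rplot calls now use x='total_bill', which only resolves
# against this header.
assert 'total_bill' in rows[0]
assert rows[0]['tip'] == '1.01'
```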
https://api.github.com/repos/pandas-dev/pandas/pulls/3802
2013-06-07T22:37:27Z
2013-06-08T14:30:35Z
2013-06-08T14:30:35Z
2014-07-12T13:44:53Z
BUG: (GH3795) better error messages for invalid dtype specifications in read_csv
diff --git a/RELEASE.rst b/RELEASE.rst index 7a77972541c1e..4d85834706e80 100644 --- a/RELEASE.rst +++ b/RELEASE.rst @@ -219,6 +219,7 @@ pandas 0.11.1 - Incorrectly read a HDFStore multi-index Frame witha column specification (GH3748_) - ``read_html`` now correctly skips tests (GH3741_) - Fix incorrect arguments passed to concat that are not list-like (e.g. concat(df1,df2)) (GH3481_) + - Correctly parse when passed the ``dtype=str`` (or other variable-len string dtypes) in ``read_csv`` (GH3795_) .. _GH3164: https://github.com/pydata/pandas/issues/3164 .. _GH2786: https://github.com/pydata/pandas/issues/2786 @@ -307,6 +308,7 @@ pandas 0.11.1 .. _GH3741: https://github.com/pydata/pandas/issues/3741 .. _GH3750: https://github.com/pydata/pandas/issues/3750 .. _GH3726: https://github.com/pydata/pandas/issues/3726 +.. _GH3795: https://github.com/pydata/pandas/issues/3795 pandas 0.11.0 ============= diff --git a/pandas/io/tests/test_parsers.py b/pandas/io/tests/test_parsers.py index 55abef2fd8d0e..cae4c0902a97c 100644 --- a/pandas/io/tests/test_parsers.py +++ b/pandas/io/tests/test_parsers.py @@ -485,6 +485,36 @@ def test_malformed(self): except Exception, inst: self.assert_('Expected 3 fields in line 6, saw 5' in str(inst)) + def test_passing_dtype(self): + + df = DataFrame(np.random.rand(5,2),columns=list('AB'),index=['1A','1B','1C','1D','1E']) + + with ensure_clean('__passing_str_as_dtype__.csv') as path: + df.to_csv(path) + + # GH 3795 + # passing 'str' as the dtype + result = pd.read_csv(path, dtype=str, index_col=0) + tm.assert_series_equal(result.dtypes,Series({ 'A' : 'object', 'B' : 'object' })) + + # we expect all object columns, so need to convert to test for equivalence + result = result.astype(float) + tm.assert_frame_equal(result,df) + + # invalid dtype + self.assertRaises(TypeError, pd.read_csv, path, dtype={'A' : 'foo', 'B' : 'float64' }, + index_col=0) + + # valid but we don't support it (date) + self.assertRaises(TypeError, pd.read_csv, path, dtype={'A' : 
'datetime64', 'B' : 'float64' }, + index_col=0) + self.assertRaises(TypeError, pd.read_csv, path, dtype={'A' : 'datetime64', 'B' : 'float64' }, + index_col=0, parse_dates=['B']) + + # valid but we don't support it + self.assertRaises(TypeError, pd.read_csv, path, dtype={'A' : 'timedelta64', 'B' : 'float64' }, + index_col=0) + def test_quoting(self): bad_line_small = """printer\tresult\tvariant_name Klosterdruckerei\tKlosterdruckerei <Salem> (1611-1804)\tMuller, Jacob diff --git a/pandas/parser.pyx b/pandas/parser.pyx index ee92e2e60960c..004c23d09ccdf 100644 --- a/pandas/parser.pyx +++ b/pandas/parser.pyx @@ -990,20 +990,36 @@ cdef class TextReader: na_filter, na_hashset) return result, na_count elif dtype[1] == 'c': - raise NotImplementedError + raise NotImplementedError("the dtype %s is not supported for parsing" % dtype) elif dtype[1] == 'S': # TODO: na handling width = int(dtype[2:]) - result = _to_fw_string(self.parser, i, start, end, width) - return result, 0 + if width > 0: + result = _to_fw_string(self.parser, i, start, end, width) + return result, 0 + + # treat as a regular string parsing + return self._string_convert(i, start, end, na_filter, + na_hashset) elif dtype[1] == 'U': width = int(dtype[2:]) - raise NotImplementedError + if width > 0: + raise NotImplementedError("the dtype %s is not supported for parsing" % dtype) + + # unicode variable width + return self._string_convert(i, start, end, na_filter, + na_hashset) + elif dtype[1] == 'O': return self._string_convert(i, start, end, na_filter, na_hashset) + else: + if dtype[1] == 'M': + raise TypeError("the dtype %s is not supported for parsing, " + "pass this column using parse_dates instead" % dtype) + raise TypeError("the dtype %s is not supported for parsing" % dtype) cdef _string_convert(self, Py_ssize_t i, int start, int end, bint na_filter, kh_str_t *na_hashset):
ENH: accept 'str' as a dtype in read_csv to provide correct parsing

closes #3795

```
In [1]: df = DataFrame(np.random.rand(5,2),columns=list('AB'),index=['1A','1B','1C','1D','1E'])

In [2]: df
Out[2]:
           A         B
1A  0.096563  0.761440
1B  0.623102  0.538810
1C  0.498820  0.277789
1D  0.113544  0.723437
1E  0.381104  0.061758

In [3]: df.dtypes
Out[3]:
A    float64
B    float64
dtype: object

In [4]: path='test.csv'

In [5]: df.to_csv(path)

In [6]: pd.read_csv(path, dtype=str, index_col=0)
Out[6]:
                      A                    B
1A  0.09656290409114332    0.761440208545324
1B   0.6231015058315575   0.5388097714651147
1C  0.49881957371373464  0.27778943212477014
1D  0.11354443109778356    0.723437196012621
1E  0.38110436826261596  0.06175758774696094

In [7]: pd.read_csv(path, dtype=str, index_col=0).dtypes
Out[7]:
A    object
B    object
dtype: object
```

Invalid dtype specification, all `TypeError`, slightly different messages depending on what you are doing (e.g. if it's completely invalid or really a date spec)

```
In [8]: pd.read_csv(path, index_col=0, dtype={'A' : 'foo'})
TypeError: data type "foo" not understood

In [9]: pd.read_csv(path, index_col=0, dtype={'A' : 'datetime64'})
TypeError: the dtype <M8 is not supported for parsing, pass this column using parse_dates instead

In [10]: pd.read_csv(path, index_col=0, dtype={'A' : 'timedelta64'})
TypeError: the dtype <m8 is not supported for parsing
```
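The `parser.pyx` hunk above boils down to a small dispatch on the dtype's kind character. Here is a pure-Python paraphrase of just the string/date branches (the function name and return values are illustrative, not pandas API; numeric kinds are handled earlier in the real Cython code):

```python
def check_parse_dtype(dtype_str):
    """Mimic the parser's decision for a numpy-style dtype string such as
    '|S0' (variable-width bytes, i.e. plain ``str``), '|S5' (fixed width),
    '<U0' (variable-width unicode), '<M8' (datetime64), '<m8' (timedelta64)."""
    kind = dtype_str[1]
    if kind == 'S':
        width = int(dtype_str[2:] or 0)
        # width 0 means a variable-length dtype like plain `str`:
        # fall back to regular string parsing (the fix in this PR)
        return 'fixed-width string' if width > 0 else 'string parsing'
    if kind == 'U':
        if int(dtype_str[2:] or 0) > 0:
            raise NotImplementedError(
                "the dtype %s is not supported for parsing" % dtype_str)
        return 'string parsing'  # variable-width unicode
    if kind == 'O':
        return 'string parsing'
    if kind == 'M':
        raise TypeError("the dtype %s is not supported for parsing, "
                        "pass this column using parse_dates instead" % dtype_str)
    raise TypeError("the dtype %s is not supported for parsing" % dtype_str)
```

This reproduces the error messages shown in the body: `datetime64` maps to the `parse_dates` hint, while `timedelta64` (kind `'m'`) falls through to the generic `TypeError`.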
https://api.github.com/repos/pandas-dev/pandas/pulls/3797
2013-06-07T19:06:19Z
2013-06-07T21:10:07Z
2013-06-07T21:10:07Z
2014-07-16T08:12:26Z
link to a numerical integration recipe (Issue #3759)
diff --git a/doc/source/cookbook.rst b/doc/source/cookbook.rst index 6a68a5f83ce83..c34ad27350a35 100644 --- a/doc/source/cookbook.rst +++ b/doc/source/cookbook.rst @@ -341,6 +341,12 @@ Storing Attributes to a group node store.close() os.remove('test.h5') +Computation +--------- + +`Numerical integration (sample-based) of a time series +<http://nbviewer.ipython.org/5720498>`__ + Miscellaneous -------------
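The linked notebook covers sample-based numerical integration of a time series; the core idea is the trapezoid rule over (possibly irregular) sample points. A rough stdlib sketch of that idea, not the recipe itself (names are illustrative):

```python
def trapezoid(xs, ys):
    """Integrate sampled values ys over sample points xs using the
    trapezoid rule; works for irregularly spaced samples, e.g.
    timestamps converted to floats."""
    return sum((xs[i + 1] - xs[i]) * (ys[i] + ys[i + 1]) / 2.0
               for i in range(len(xs) - 1))

# integral of y = x over [0, 2] is 2.0
assert trapezoid([0.0, 1.0, 2.0], [0.0, 1.0, 2.0]) == 2.0
```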
https://api.github.com/repos/pandas-dev/pandas/pulls/3790
2013-06-07T10:14:17Z
2013-06-07T13:03:46Z
2013-06-07T13:03:46Z
2014-07-03T12:13:09Z
Added examples to CSV section in cookbook
diff --git a/doc/source/cookbook.rst b/doc/source/cookbook.rst index 6a68a5f83ce83..28df41cfd34b9 100644 --- a/doc/source/cookbook.rst +++ b/doc/source/cookbook.rst @@ -267,8 +267,11 @@ The :ref:`CSV <io.read_csv_table>` docs `Dealing with bad lines <https://github.com/pydata/pandas/issues/2886>`__ +`Dealing with bad lines II +<http://nipunbatra.wordpress.com/2013/06/06/reading-unclean-data-csv-using-pandas/>`__ + `Reading CSV with Unix timestamps and converting to local timezone -<http://nbviewer.ipython.org/5714493>`__ +<http://nipunbatra.wordpress.com/2013/06/07/pandas-reading-csv-with-unix-timestamps-and-converting-to-local-timezone/>`__ .. _cookbook.sql:
- Edited link to timezone handling with epochs
- Added link for dealing with bad lines
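The timezone link covers reading Unix timestamps and converting them to a local timezone. The underlying conversion is just this (a minimal stdlib sketch; the fixed -5h offset is chosen arbitrarily for illustration, whereas the linked post works with pandas tz-aware objects):

```python
from datetime import datetime, timezone, timedelta

ts = 1370563200  # seconds since the epoch
utc = datetime.fromtimestamp(ts, tz=timezone.utc)      # 2013-06-07 00:00:00 UTC
local = utc.astimezone(timezone(timedelta(hours=-5)))  # shift to UTC-5

assert (utc.year, utc.month, utc.day) == (2013, 6, 7)
assert local.day == 6 and local.hour == 19  # previous evening at UTC-5
```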
https://api.github.com/repos/pandas-dev/pandas/pulls/3789
2013-06-07T03:43:39Z
2013-06-07T13:10:00Z
2013-06-07T13:10:00Z
2014-07-16T08:12:21Z
CLN deprecate save&load in favour of to_pickle&read_pickle
diff --git a/RELEASE.rst b/RELEASE.rst index 4f82f7b458737..285bbb2095488 100644 --- a/RELEASE.rst +++ b/RELEASE.rst @@ -137,6 +137,8 @@ pandas 0.11.1 - removed ``Excel`` support to ``pandas.io.excel`` - added top-level ``pd.read_sql`` and ``to_sql`` DataFrame methods - removed ``clipboard`` support to ``pandas.io.clipboard`` + - replace top-level and instance methods ``save`` and ``load`` with top-level ``read_pickle`` and + ``to_pickle`` instance method, ``save`` and ``load`` will give deprecation warning. - the ``method`` and ``axis`` arguments of ``DataFrame.replace()`` are deprecated - Implement ``__nonzero__`` for ``NDFrame`` objects (GH3691_, GH3696_) diff --git a/doc/source/api.rst b/doc/source/api.rst index bb6f0ac073e21..a4be0df5f489e 100644 --- a/doc/source/api.rst +++ b/doc/source/api.rst @@ -13,13 +13,12 @@ Input/Output Pickling ~~~~~~~~ -.. currentmodule:: pandas.core.common +.. currentmodule:: pandas.io.pickle .. autosummary:: :toctree: generated/ - load - save + read_pickle Flat File ~~~~~~~~~ @@ -378,8 +377,7 @@ Serialization / IO / Conversion :toctree: generated/ Series.from_csv - Series.load - Series.save + Series.to_pickle Series.to_csv Series.to_dict Series.to_sparse @@ -601,8 +599,7 @@ Serialization / IO / Conversion DataFrame.from_items DataFrame.from_records DataFrame.info - DataFrame.load - DataFrame.save + DataFrame.to_pickle DataFrame.to_csv DataFrame.to_hdf DataFrame.to_dict @@ -770,8 +767,7 @@ Serialization / IO / Conversion :toctree: generated/ Panel.from_dict - Panel.load - Panel.save + Panel.to_pickle Panel.to_excel Panel.to_sparse Panel.to_frame diff --git a/doc/source/basics.rst b/doc/source/basics.rst index 4100c4404ece6..05f9111497c08 100644 --- a/doc/source/basics.rst +++ b/doc/source/basics.rst @@ -1207,46 +1207,6 @@ While float dtypes are unchanged. casted casted.dtypes -.. 
_basics.serialize: - -Pickling and serialization --------------------------- - -All pandas objects are equipped with ``save`` methods which use Python's -``cPickle`` module to save data structures to disk using the pickle format. - -.. ipython:: python - - df - df.save('foo.pickle') - -The ``load`` function in the ``pandas`` namespace can be used to load any -pickled pandas object (or any other pickled object) from file: - - -.. ipython:: python - - load('foo.pickle') - -There is also a ``save`` function which takes any object as its first argument: - -.. ipython:: python - - save(df, 'foo.pickle') - load('foo.pickle') - -.. ipython:: python - :suppress: - - import os - os.remove('foo.pickle') - -.. warning:: - - Loading pickled data received from untrusted sources can be unsafe. - - See: http://docs.python.org/2.7/library/pickle.html - Working with package options ---------------------------- diff --git a/doc/source/io.rst b/doc/source/io.rst index 905f7f24ac427..6fee8ad35e10c 100644 --- a/doc/source/io.rst +++ b/doc/source/io.rst @@ -39,6 +39,7 @@ object. * ``read_html`` * ``read_stata`` * ``read_clipboard`` + * ``read_pickle`` The corresponding ``writer`` functions are object methods that are accessed like ``df.to_csv()`` @@ -50,6 +51,7 @@ The corresponding ``writer`` functions are object methods that are accessed like * ``to_html`` * ``to_stata`` * ``to_clipboard`` + * ``to_pickle`` .. _io.read_csv_table: @@ -1442,7 +1444,42 @@ We can see that we got the same content back, which we had earlier written to th You may need to install xclip or xsel (with gtk or PyQt4 modules) on Linux to use these methods. +.. _io.serialize: +Pickling and serialization +-------------------------- + +All pandas objects are equipped with ``to_pickle`` methods which use Python's +``cPickle`` module to save data structures to disk using the pickle format. + +.. 
ipython:: python + + df + df.to_pickle('foo.pkl') + +The ``read_pickle`` function in the ``pandas`` namespace can be used to load +any pickled pandas object (or any other pickled object) from file: + + +.. ipython:: python + + read_pickle('foo.pkl') + +.. ipython:: python + :suppress: + + import os + os.remove('foo.pkl') + +.. warning:: + + Loading pickled data received from untrusted sources can be unsafe. + + See: http://docs.python.org/2.7/library/pickle.html + +.. note:: + + These methods were previously ``save`` and ``load``, now deprecated. .. _io.excel: diff --git a/pandas/core/api.py b/pandas/core/api.py index 306f9aff8f4d3..a8f5bb2a46e76 100644 --- a/pandas/core/api.py +++ b/pandas/core/api.py @@ -4,7 +4,7 @@ import numpy as np from pandas.core.algorithms import factorize, match, unique, value_counts -from pandas.core.common import isnull, notnull, save, load +from pandas.core.common import isnull, notnull from pandas.core.categorical import Categorical, Factor from pandas.core.format import (set_printoptions, reset_printoptions, set_eng_float_format) @@ -28,6 +28,7 @@ # legacy from pandas.core.daterange import DateRange # deprecated +from pandas.core.common import save, load # deprecated, remove in 0.12 import pandas.core.datetools as datetools from pandas.core.config import get_option, set_option, reset_option,\ diff --git a/pandas/core/common.py b/pandas/core/common.py index 69f38bf0c7c61..d0dcb0b9770b8 100644 --- a/pandas/core/common.py +++ b/pandas/core/common.py @@ -1,11 +1,6 @@ """ Misc tools for implementing data structures """ -# XXX: HACK for NumPy 1.5.1 to suppress warnings -try: - import cPickle as pickle -except ImportError: # pragma: no cover - import pickle import itertools from datetime import datetime @@ -1668,49 +1663,6 @@ def _all_none(*args): return True -def save(obj, path): - """ - Pickle (serialize) object to input file path - - Parameters - ---------- - obj : any object - path : string - File path - """ - f = open(path, 'wb') - try: 
- pickle.dump(obj, f, protocol=pickle.HIGHEST_PROTOCOL) - finally: - f.close() - - -def load(path): - """ - Load pickled pandas object (or any other pickled object) from the specified - file path - - Warning: Loading pickled data received from untrusted sources can be unsafe. - See: http://docs.python.org/2.7/library/pickle.html - - Parameters - ---------- - path : string - File path - - Returns - ------- - unpickled : type of object stored in file - """ - try: - with open(path,'rb') as fh: - return pickle.load(fh) - except: - if not py3compat.PY3: - raise - with open(path,'rb') as fh: - return pickle.load(fh, encoding='latin1') - class UTF8Recoder: """ Iterator that reads an encoded stream and reencodes the input to UTF-8 @@ -2109,3 +2061,40 @@ def console_encode(object, **kwds): """ return pprint_thing_encoded(object, get_option("display.encoding")) + +def load(path): # TODO remove in 0.12 + """ + Load pickled pandas object (or any other pickled object) from the specified + file path + + Warning: Loading pickled data received from untrusted sources can be unsafe. 
+ See: http://docs.python.org/2.7/library/pickle.html + + Parameters + ---------- + path : string + File path + + Returns + ------- + unpickled : type of object stored in file + """ + import warnings + warnings.warn("load is deprecated, use read_pickle", FutureWarning) + from pandas.io.pickle import read_pickle + return read_pickle(path) + +def save(obj, path): # TODO remove in 0.12 + ''' + Pickle (serialize) object to input file path + + Parameters + ---------- + obj : any object + path : string + File path + ''' + import warnings + warnings.warn("save is deprecated, use obj.to_pickle", FutureWarning) + from pandas.io.pickle import to_pickle + return to_pickle(obj, path) diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 86bc50ce48134..bae85aa84a96e 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -24,12 +24,29 @@ class PandasObject(object): _AXIS_ALIASES = {} _AXIS_NAMES = dict((v, k) for k, v in _AXIS_NUMBERS.iteritems()) - def save(self, path): - com.save(self, path) + def to_pickle(self, path): + """ + Pickle (serialize) object to input file path - @classmethod - def load(cls, path): - return com.load(path) + Parameters + ---------- + path : string + File path + """ + from pandas.io.pickle import to_pickle + return to_pickle(self, path) + + def save(self, path): # TODO remove in 0.12 + import warnings + from pandas.io.pickle import to_pickle + warnings.warn("save is deprecated, use to_pickle", FutureWarning) + return to_pickle(self, path) + + def load(self, path): # TODO remove in 0.12 + import warnings + from pandas.io.pickle import read_pickle + warnings.warn("load is deprecated, use pd.read_pickle", FutureWarning) + return read_pickle(path) def __hash__(self): raise TypeError('{0!r} objects are mutable, thus they cannot be' diff --git a/pandas/io/api.py b/pandas/io/api.py index 48566399f9bfe..2c8f8d1c893e2 100644 --- a/pandas/io/api.py +++ b/pandas/io/api.py @@ -10,3 +10,4 @@ from pandas.io.html import read_html from 
pandas.io.sql import read_sql from pandas.io.stata import read_stata +from pandas.io.pickle import read_pickle diff --git a/pandas/io/pickle.py b/pandas/io/pickle.py new file mode 100644 index 0000000000000..a01771dda1f25 --- /dev/null +++ b/pandas/io/pickle.py @@ -0,0 +1,48 @@ +# XXX: HACK for NumPy 1.5.1 to suppress warnings +try: + import cPickle as pickle +except ImportError: # pragma: no cover + import pickle + +def to_pickle(obj, path): + """ + Pickle (serialize) object to input file path + + Parameters + ---------- + obj : any object + path : string + File path + """ + f = open(path, 'wb') + try: + pickle.dump(obj, f, protocol=pickle.HIGHEST_PROTOCOL) + finally: + f.close() + +def read_pickle(path): + """ + Load pickled pandas object (or any other pickled object) from the specified + file path + + Warning: Loading pickled data received from untrusted sources can be unsafe. + See: http://docs.python.org/2.7/library/pickle.html + + Parameters + ---------- + path : string + File path + + Returns + ------- + unpickled : type of object stored in file + """ + try: + with open(path,'rb') as fh: + return pickle.load(fh) + except: + from pandas.util import py3compat + if not py3compat.PY3: + raise + with open(path,'rb') as fh: + return pickle.load(fh, encoding='latin1') \ No newline at end of file diff --git a/pandas/sparse/tests/test_sparse.py b/pandas/sparse/tests/test_sparse.py index c18e0173b4589..c6515cd4113f0 100644 --- a/pandas/sparse/tests/test_sparse.py +++ b/pandas/sparse/tests/test_sparse.py @@ -9,6 +9,7 @@ from numpy import nan import numpy as np +import pandas as pd dec = np.testing.dec from pandas.util.testing import (assert_almost_equal, assert_series_equal, diff --git a/pandas/tests/test_index.py b/pandas/tests/test_index.py index 5926f5d51abfd..7ce4a11229561 100644 --- a/pandas/tests/test_index.py +++ b/pandas/tests/test_index.py @@ -1080,7 +1080,7 @@ def test_legacy_v2_unpickle(self): pth, _ = os.path.split(os.path.abspath(__file__)) filepath = 
os.path.join(pth, 'data', 'mindex_073.pickle') - obj = com.load(filepath) + obj = pd.read_pickle(filepath) obj2 = MultiIndex.from_tuples(obj.values) self.assert_(obj.equals(obj2)) diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py index 582a3f6ab5f7b..c5770c61e2f81 100644 --- a/pandas/tests/test_series.py +++ b/pandas/tests/test_series.py @@ -10,6 +10,7 @@ from numpy import nan import numpy as np import numpy.ma as ma +import pandas as pd from pandas import (Index, Series, TimeSeries, DataFrame, isnull, notnull, bdate_range, date_range) @@ -189,8 +190,8 @@ def test_pickle_preserve_name(self): def _pickle_roundtrip_name(self, obj): with ensure_clean() as path: - obj.save(path) - unpickled = Series.load(path) + obj.to_pickle(path) + unpickled = pd.read_pickle(path) return unpickled def test_argsort_preserve_name(self): @@ -612,8 +613,8 @@ def test_pickle(self): def _pickle_roundtrip(self, obj): with ensure_clean() as path: - obj.save(path) - unpickled = Series.load(path) + obj.to_pickle(path) + unpickled = pd.read_pickle(path) return unpickled def test_getitem_get(self): diff --git a/pandas/tools/util.py b/pandas/tools/util.py index d4c7190b0d782..c08636050ca9e 100644 --- a/pandas/tools/util.py +++ b/pandas/tools/util.py @@ -1,7 +1,6 @@ from pandas.core.index import Index - def match(needles, haystack): haystack = Index(haystack) needles = Index(needles) - return haystack.get_indexer(needles) + return haystack.get_indexer(needles) \ No newline at end of file diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py index ac02dee335afc..bdc603dfdea31 100644 --- a/pandas/tseries/tests/test_timeseries.py +++ b/pandas/tseries/tests/test_timeseries.py @@ -1945,7 +1945,7 @@ def test_unpickle_legacy_len0_daterange(self): pth, _ = os.path.split(os.path.abspath(__file__)) filepath = os.path.join(pth, 'data', 'series_daterange0.pickle') - result = com.load(filepath) + result = pd.read_pickle(filepath) ex_index = 
DatetimeIndex([], freq='B') diff --git a/vb_suite/test_perf.py b/vb_suite/test_perf.py index fe3f4d8e5defb..2a2a5c9643c75 100755 --- a/vb_suite/test_perf.py +++ b/vb_suite/test_perf.py @@ -85,7 +85,7 @@ metavar="FNAME", dest='outdf', default=None, - help='Name of file to df.save() the result table into. Will overwrite') + help='Name of file to df.to_pickle() the result table into. Will overwrite') parser.add_argument('-r', '--regex', metavar="REGEX", dest='regex', @@ -288,7 +288,7 @@ def report_comparative(head_res,baseline_res): if args.outdf: prprint("The results DataFrame was written to '%s'\n" % args.outdf) - totals.save(args.outdf) + totals.to_pickle(args.outdf) def profile_head_single(benchmark): import gc @@ -364,7 +364,7 @@ def profile_head(benchmarks): if args.outdf: prprint("The results DataFrame was written to '%s'\n" % args.outdf) - DataFrame(results).save(args.outdf) + DataFrame(results).to_pickle(args.outdf) def print_report(df,h_head=None,h_msg="",h_baseline=None,b_msg=""): @@ -448,8 +448,8 @@ def main(): np.random.seed(args.seed) if args.base_pickle and args.target_pickle: - baseline_res = prep_pickle_for_total(pd.load(args.base_pickle)) - target_res = prep_pickle_for_total(pd.load(args.target_pickle)) + baseline_res = prep_pickle_for_total(pd.read_pickle(args.base_pickle)) + target_res = prep_pickle_for_total(pd.read_pickle(args.target_pickle)) report_comparative(target_res, baseline_res) sys.exit(0)
Add `read_pickle` as a top-level function and `to_pickle` as an instance method, with deprecation warnings until 0.12 for `save` and `load` everywhere. See [lower down](https://github.com/pydata/pandas/pull/3787#issuecomment-19211203) for how it works. Both `read_pickle` and `to_pickle` live in `io.pickle`; `save` and `load` remain in `core.common` (but call `to_pickle` and `read_pickle`, respectively). cc #3782.
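A minimal round-trip sketch of the renamed API (the frame contents and file name here are illustrative):

```python
import os
import tempfile

import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4.0, 5.0, 6.0]})

# write with the new instance method (previously df.save)...
path = os.path.join(tempfile.mkdtemp(), "frame.pkl")
df.to_pickle(path)

# ...and read back with the new top-level function (previously pd.load)
restored = pd.read_pickle(path)
assert restored.equals(df)
```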
https://api.github.com/repos/pandas-dev/pandas/pulls/3787
2013-06-07T01:54:31Z
2013-06-15T08:38:56Z
2013-06-15T08:38:56Z
2014-06-22T01:54:36Z
DOC: make compatible with numpy v1.6.1
diff --git a/doc/source/io.rst b/doc/source/io.rst index a65b7c9024a11..9d923d2d0e0cf 100644 --- a/doc/source/io.rst +++ b/doc/source/io.rst @@ -1029,7 +1029,7 @@ Specify an HTML attribute dfs1 = read_html(url, attrs={'id': 'table'}) dfs2 = read_html(url, attrs={'class': 'sortable'}) - np.all(dfs1[0] == dfs2[0]) + np.array_equal(dfs1[0], dfs2[0]) Use some combination of the above diff --git a/pandas/io/html.py b/pandas/io/html.py index 9b47be925f740..a5798b3493732 100644 --- a/pandas/io/html.py +++ b/pandas/io/html.py @@ -754,19 +754,19 @@ def _parse(flavor, io, match, header, index_col, skiprows, infer_types, attrs): compiled_match = re.compile(match) # ugly hack because python 3 DELETES the exception variable! - retained_exception = None + retained = None for flav in flavor: parser = _parser_dispatch(flav) p = parser(io, compiled_match, attrs) try: tables = p.parse_tables() - except Exception as caught_exception: - retained_exception = caught_exception + except Exception as caught: + retained = caught else: break else: - raise retained_exception + raise retained return [_data_to_frame(table, header, index_col, infer_types, skiprows) for table in tables]
`np.all` does not work here under NumPy 1.6.1, so use `np.array_equal` instead.
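Beyond the version issue, `np.array_equal` is also the safer spelling in general: unlike `np.all(a == b)`, it compares shapes instead of broadcasting. A small standalone illustration:

```python
import numpy as np

a = np.array([[1, 2], [1, 2]])
b = np.array([[1, 2]])

# the elementwise comparison broadcasts b across a's rows,
# so np.all reports True even though the shapes differ
assert np.all(a == b)

# array_equal checks shape first, so the mismatch is caught
assert not np.array_equal(a, b)
```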
https://api.github.com/repos/pandas-dev/pandas/pulls/3786
2013-06-06T22:09:16Z
2013-06-07T00:18:55Z
2013-06-07T00:18:55Z
2014-07-16T08:12:15Z
DOC: document the pitfalls of different byte orders
diff --git a/doc/source/faq.rst b/doc/source/faq.rst index 37639b9016b14..534ad576da0a7 100644 --- a/doc/source/faq.rst +++ b/doc/source/faq.rst @@ -246,3 +246,22 @@ interval (``'start'`` or ``'end'``) convention: data = Series(np.random.randn(50), index=rng) resampled = data.resample('A', kind='timestamp', convention='end') resampled.index + + +Byte-Ordering Issues +-------------------- +Occasionally you may have to deal with data that were created on a machine with +a different byte order than the one on which you are running Python. To deal +with this issue you should convert the underlying NumPy array to the native +system byte order *before* passing it to Series/DataFrame/Panel constructors +using something similar to the following: + +.. ipython:: python + + x = np.array(range(10), '>i4') # big endian + newx = x.byteswap().newbyteorder() # force native byteorder + s = Series(newx) + +See `the NumPy documentation on byte order +<http://docs.scipy.org/doc/numpy/user/basics.byteswapping.html>`__ for more +details. diff --git a/doc/source/gotchas.rst b/doc/source/gotchas.rst index 422e3cec59386..45369cb7ddb08 100644 --- a/doc/source/gotchas.rst +++ b/doc/source/gotchas.rst @@ -453,3 +453,22 @@ parse HTML tables in the top-level pandas io function ``read_html``. .. |Anaconda| replace:: **Anaconda** .. _Anaconda: https://store.continuum.io/cshop/anaconda + + +Byte-Ordering Issues +-------------------- +Occasionally you may have to deal with data that were created on a machine with +a different byte order than the one on which you are running Python. To deal +with this issue you should convert the underlying NumPy array to the native +system byte order *before* passing it to Series/DataFrame/Panel constructors +using something similar to the following: + +.. 
ipython:: python + + x = np.array(range(10), '>i4') # big endian + newx = x.byteswap().newbyteorder() # force native byteorder + s = Series(newx) + +See `the NumPy documentation on byte order +<http://docs.scipy.org/doc/numpy/user/basics.byteswapping.html>`__ for more +details.
Closes #3778.
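The added doc section boils down to the sketch below. Note that `ndarray.newbyteorder` was later removed in NumPy 2.0, so this sketch uses the equivalent dtype-view spelling; a little-endian host is assumed:

```python
import numpy as np
import pandas as pd

# data created on a big-endian machine
x = np.arange(10, dtype='>i4')

# swap the bytes, then reinterpret with the flipped byte-order flag
# (equivalent to the docs' x.byteswap().newbyteorder())
native = x.byteswap().view(x.dtype.newbyteorder())

s = pd.Series(native)
assert list(s) == list(range(10))
```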
https://api.github.com/repos/pandas-dev/pandas/pulls/3780
2013-06-06T17:33:11Z
2013-06-06T19:00:04Z
2013-06-06T19:00:03Z
2014-07-11T02:49:31Z
Cookbook epoch handling
diff --git a/doc/source/cookbook.rst b/doc/source/cookbook.rst index 7f6b54667765d..6a68a5f83ce83 100644 --- a/doc/source/cookbook.rst +++ b/doc/source/cookbook.rst @@ -267,6 +267,9 @@ The :ref:`CSV <io.read_csv_table>` docs `Dealing with bad lines <https://github.com/pydata/pandas/issues/2886>`__ +`Reading CSV with Unix timestamps and converting to local timezone +<http://nbviewer.ipython.org/5714493>`__ + .. _cookbook.sql: SQL
Added a link to the cookbook explaining how to use `read_csv` to parse files containing epoch timestamps and how to add local timezone information. Closes #3757.
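The recipe from the linked notebook, sketched with present-day pandas (the 2013 notebook targets an older API; the CSV content and timezone below are illustrative):

```python
from io import StringIO

import pandas as pd

# a hypothetical CSV with Unix-epoch timestamps in seconds
csv = StringIO("ts,value\n1370000000,1.5\n1370003600,2.5\n")
df = pd.read_csv(csv)

# parse the epoch column as UTC, then convert to a local timezone
df["ts"] = pd.to_datetime(df["ts"], unit="s", utc=True).dt.tz_convert("US/Eastern")
assert df["ts"].dt.tz is not None
```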
https://api.github.com/repos/pandas-dev/pandas/pulls/3771
2013-06-06T05:28:32Z
2013-06-06T11:57:28Z
2013-06-06T11:57:28Z
2014-07-03T09:26:16Z
TST: let tox run more tests
diff --git a/tox.ini b/tox.ini index b56d839e4998a..2a9c454a29435 100644 --- a/tox.ini +++ b/tox.ini @@ -21,7 +21,7 @@ changedir = {envdir} commands = # TODO: --exe because of GH #761 - {envbindir}/nosetests --exe pandas.tests -A "not network" + {envbindir}/nosetests --exe pandas -A "not network" # cleanup the temp. build dir created by the tox build # /bin/rm -rf {toxinidir}/build diff --git a/tox_prll.ini b/tox_prll.ini index 5201cd5e426ed..7ae399837b4e0 100644 --- a/tox_prll.ini +++ b/tox_prll.ini @@ -22,7 +22,7 @@ changedir = {envdir} commands = # TODO: --exe because of GH #761 - {envbindir}/nosetests --exe pandas.tests -A "not network" + {envbindir}/nosetests --exe pandas -A "not network" # cleanup the temp. build dir created by the tox build # /bin/rm -rf {toxinidir}/build
https://api.github.com/repos/pandas-dev/pandas/pulls/3770
2013-06-06T03:24:07Z
2013-06-06T15:01:14Z
2013-06-06T15:01:14Z
2014-07-16T08:12:01Z
TST: install numexpr always on travis, bottleneck on full-deps
diff --git a/ci/install.sh b/ci/install.sh index b748070db85aa..9765f1b26b198 100755 --- a/ci/install.sh +++ b/ci/install.sh @@ -67,13 +67,14 @@ if ( ! $VENV_FILE_AVAILABLE ); then if [ x"$FULL_DEPS" == x"true" ]; then echo "Installing FULL_DEPS" pip install $PIP_ARGS cython + pip install $PIP_ARGS numexpr if [ ${TRAVIS_PYTHON_VERSION:0:1} == "2" ]; then pip install $PIP_ARGS xlwt + pip install $PIP_ARGS bottleneck fi - pip install numexpr - pip install tables + pip install $PIP_ARGS tables pip install $PIP_ARGS matplotlib pip install $PIP_ARGS openpyxl pip install $PIP_ARGS xlrd>=0.9.0 diff --git a/ci/print_versions.py b/ci/print_versions.py index 6a897ea5937b0..53e43fab19ae7 100755 --- a/ci/print_versions.py +++ b/ci/print_versions.py @@ -61,6 +61,12 @@ except: print("pytz: Not installed") +try: + import bottleneck + print("bottleneck: %s" % bottleneck.__version__) +except: + print("bottleneck: Not installed") + try: import tables print("PyTables: %s" % tables.__version__)
https://api.github.com/repos/pandas-dev/pandas/pulls/3768
2013-06-06T00:02:31Z
2013-06-06T01:24:15Z
2013-06-06T01:24:15Z
2014-07-16T08:11:59Z
ENH: allow fallback when lxml fails to parse
diff --git a/RELEASE.rst b/RELEASE.rst index 98271568006e0..6a0c22c7734fb 100644 --- a/RELEASE.rst +++ b/RELEASE.rst @@ -70,7 +70,6 @@ pandas 0.11.1 - ``melt`` now accepts the optional parameters ``var_name`` and ``value_name`` to specify custom column names of the returned DataFrame (GH3649_), thanks @hoechenberger - - ``read_html`` no longer performs hard date conversion - Plotting functions now raise a ``TypeError`` before trying to plot anything if the associated objects have have a dtype of ``object`` (GH1818_, GH3572_). This happens before any drawing takes place which elimnates any @@ -133,6 +132,9 @@ pandas 0.11.1 as an int, maxing with ``int64``, to avoid precision issues (GH3733_) - ``na_values`` in a list provided to ``read_csv/read_excel`` will match string and numeric versions e.g. ``na_values=['99']`` will match 99 whether the column ends up being int, float, or string (GH3611_) + - ``read_html`` now defaults to ``None`` when reading, and falls back on + ``bs4`` + ``html5lib`` when lxml fails to parse. a list of parsers to try + until success is also valid **Bug Fixes** diff --git a/ci/install.sh b/ci/install.sh index 9765f1b26b198..c9b76b88721e9 100755 --- a/ci/install.sh +++ b/ci/install.sh @@ -80,7 +80,6 @@ if ( ! $VENV_FILE_AVAILABLE ); then pip install $PIP_ARGS xlrd>=0.9.0 pip install $PIP_ARGS 'http://downloads.sourceforge.net/project/pytseries/scikits.timeseries/0.91.3/scikits.timeseries-0.91.3.tar.gz?r=' pip install $PIP_ARGS patsy - pip install $PIP_ARGS lxml pip install $PIP_ARGS html5lib if [ ${TRAVIS_PYTHON_VERSION:0:1} == "3" ]; then @@ -88,6 +87,8 @@ if ( ! $VENV_FILE_AVAILABLE ); then elif [ ${TRAVIS_PYTHON_VERSION:0:1} == "2" ]; then sudo apt-get $APT_ARGS remove python-lxml fi + + pip install $PIP_ARGS lxml # fool statsmodels into thinking pandas was already installed # so it won't refuse to install itself. 
We want it in the zipped venv diff --git a/doc/source/io.rst b/doc/source/io.rst index 1c615ca278668..a65b7c9024a11 100644 --- a/doc/source/io.rst +++ b/doc/source/io.rst @@ -1054,6 +1054,21 @@ Read in pandas ``to_html`` output (with some loss of floating point precision) dfin[0].columns np.allclose(df, dfin[0]) +``lxml`` will raise an error on a failed parse if that is the only parser you +provide + +.. ipython:: python + + dfs = read_html(url, match='Metcalf Bank', index_col=0, flavor=['lxml']) + +However, if you have bs4 and html5lib installed and pass ``None`` or ``['lxml', +'bs4']`` then the parse will most likely succeed. Note that *as soon as a parse +succeeds, the function will return*. + +.. ipython:: python + + dfs = read_html(url, match='Metcalf Bank', index_col=0, flavor=['lxml', 'bs4']) + Writing to HTML files ~~~~~~~~~~~~~~~~~~~~~~ diff --git a/doc/source/v0.11.1.txt b/doc/source/v0.11.1.txt index 982b2f9f2eb3b..ee2c15d429ec2 100644 --- a/doc/source/v0.11.1.txt +++ b/doc/source/v0.11.1.txt @@ -139,6 +139,10 @@ API changes - sum, prod, mean, std, var, skew, kurt, corr, and cov + - ``read_html`` now defaults to ``None`` when reading, and falls back on + ``bs4`` + ``html5lib`` when lxml fails to parse. 
a list of parsers to try + until success is also valid + Enhancements ~~~~~~~~~~~~ diff --git a/pandas/io/html.py b/pandas/io/html.py index 9b2f292d30f47..9b47be925f740 100644 --- a/pandas/io/html.py +++ b/pandas/io/html.py @@ -7,9 +7,10 @@ import re import numbers import urllib2 +import urlparse import contextlib import collections -import urlparse + try: from importlib import import_module @@ -18,10 +19,34 @@ import numpy as np -from pandas import DataFrame, MultiIndex, Index, Series, isnull +from pandas import DataFrame, MultiIndex, isnull from pandas.io.parsers import _is_url +try: + import_module('bs4') +except ImportError: + _HAS_BS4 = False +else: + _HAS_BS4 = True + + +try: + import_module('lxml') +except ImportError: + _HAS_LXML = False +else: + _HAS_LXML = True + + +try: + import_module('html5lib') +except ImportError: + _HAS_HTML5LIB = False +else: + _HAS_HTML5LIB = True + + ############# # READ HTML # ############# @@ -345,7 +370,7 @@ def _parse_raw_tbody(self, table): return self._parse_raw_data(res) -class _BeautifulSoupLxmlFrameParser(_HtmlFrameParser): +class _BeautifulSoupHtml5LibFrameParser(_HtmlFrameParser): """HTML to DataFrame parser that uses BeautifulSoup under the hood. See Also @@ -359,7 +384,8 @@ class _BeautifulSoupLxmlFrameParser(_HtmlFrameParser): :class:`pandas.io.html._HtmlFrameParser`. 
""" def __init__(self, *args, **kwargs): - super(_BeautifulSoupLxmlFrameParser, self).__init__(*args, **kwargs) + super(_BeautifulSoupHtml5LibFrameParser, self).__init__(*args, + **kwargs) from bs4 import SoupStrainer self._strainer = SoupStrainer('table') @@ -406,17 +432,6 @@ def _setup_build_doc(self): raise AssertionError('No text parsed from document') return raw_text - def _build_doc(self): - from bs4 import BeautifulSoup - return BeautifulSoup(self._setup_build_doc(), features='lxml', - parse_only=self._strainer) - - -class _BeautifulSoupHtml5LibFrameParser(_BeautifulSoupLxmlFrameParser): - def __init__(self, *args, **kwargs): - super(_BeautifulSoupHtml5LibFrameParser, self).__init__(*args, - **kwargs) - def _build_doc(self): from bs4 import BeautifulSoup return BeautifulSoup(self._setup_build_doc(), features='html5lib') @@ -516,16 +531,27 @@ def _build_doc(self): -------- pandas.io.html._HtmlFrameParser._build_doc """ - from lxml.html import parse, fromstring - from lxml.html.clean import clean_html + from lxml.html import parse, fromstring, HTMLParser + from lxml.etree import XMLSyntaxError + parser = HTMLParser(recover=False) try: # try to parse the input in the simplest way - r = parse(self.io) - except (UnicodeDecodeError, IOError) as e: + r = parse(self.io, parser=parser) + + try: + r = r.getroot() + except AttributeError: + pass + except (UnicodeDecodeError, IOError): # if the input is a blob of html goop if not _is_url(self.io): - r = fromstring(self.io) + r = fromstring(self.io, parser=parser) + + try: + r = r.getroot() + except AttributeError: + pass else: # not a url scheme = urlparse.urlparse(self.io).scheme @@ -536,8 +562,11 @@ def _build_doc(self): raise ValueError(msg) else: # something else happened: maybe a faulty connection - raise e - return clean_html(r) + raise + else: + if not hasattr(r, 'text_content'): + raise XMLSyntaxError("no text parsed from document", 0, 0, 0) + return r def _parse_tbody(self, table): return 
table.xpath('.//tbody') @@ -559,17 +588,6 @@ def _parse_raw_tfoot(self, table): table.xpath(expr)] -def _maybe_convert_index_type(index): - try: - index = index.astype(int) - except (TypeError, ValueError): - if not isinstance(index, MultiIndex): - s = Series(index, name=index.name) - index = Index(s.convert_objects(convert_numeric=True), - name=index.name) - return index - - def _data_to_frame(data, header, index_col, infer_types, skiprows): """Parse a BeautifulSoup table into a DataFrame. @@ -665,18 +683,12 @@ def _data_to_frame(data, header, index_col, infer_types, skiprows): names = [name or None for name in df.index.names] df.index = MultiIndex.from_tuples(df.index.values, names=names) - if infer_types: - df.index = _maybe_convert_index_type(df.index) - df.columns = _maybe_convert_index_type(df.columns) - return df -_invalid_parsers = {'lxml': _LxmlFrameParser, - 'bs4': _BeautifulSoupLxmlFrameParser} -_valid_parsers = {'html5lib': _BeautifulSoupHtml5LibFrameParser} -_all_parsers = _valid_parsers.copy() -_all_parsers.update(_invalid_parsers) +_valid_parsers = {'lxml': _LxmlFrameParser, None: _LxmlFrameParser, + 'html5lib': _BeautifulSoupHtml5LibFrameParser, + 'bs4': _BeautifulSoupHtml5LibFrameParser} def _parser_dispatch(flavor): @@ -696,46 +708,71 @@ def _parser_dispatch(flavor): ------ AssertionError * If `flavor` is not a valid backend. 
+ ImportError + * If you do not have the requested `flavor` """ valid_parsers = _valid_parsers.keys() if flavor not in valid_parsers: - raise AssertionError('"{0}" is not a valid flavor'.format(flavor)) + raise AssertionError('"{0!r}" is not a valid flavor, valid flavors are' + ' {1}'.format(flavor, valid_parsers)) - if flavor == 'bs4': - try: - import_module('lxml') - parser_t = _BeautifulSoupLxmlFrameParser - except ImportError: - try: - import_module('html5lib') - parser_t = _BeautifulSoupHtml5LibFrameParser - except ImportError: - raise ImportError("read_html does not support the native " - "Python 'html.parser' backend for bs4, " - "please install either 'lxml' or 'html5lib'") - elif flavor == 'html5lib': - try: - # much better than python's builtin - import_module('html5lib') - parser_t = _BeautifulSoupHtml5LibFrameParser - except ImportError: + if flavor in ('bs4', 'html5lib'): + if not _HAS_HTML5LIB: raise ImportError("html5lib not found please install it") + if not _HAS_BS4: + raise ImportError("bs4 not found please install it") + else: + if not _HAS_LXML: + raise ImportError("lxml not found please install it") + return _valid_parsers[flavor] + + +def _validate_parser_flavor(flavor): + if flavor is None: + flavor = ['lxml', 'bs4'] + elif isinstance(flavor, basestring): + flavor = [flavor] + elif isinstance(flavor, collections.Iterable): + if not all(isinstance(flav, basestring) for flav in flavor): + raise TypeError('{0} is not an iterable of strings'.format(flavor)) else: - parser_t = _LxmlFrameParser - return parser_t + raise TypeError('{0} is not a valid "flavor"'.format(flavor)) + + flavor = list(flavor) + valid_flavors = _valid_parsers.keys() + if not set(flavor) & set(valid_flavors): + raise ValueError('{0} is not a valid set of flavors, valid flavors are' + ' {1}'.format(flavor, valid_flavors)) + return flavor -def _parse(parser, io, match, flavor, header, index_col, skiprows, infer_types, - attrs): + +def _parse(flavor, io, match, header, 
index_col, skiprows, infer_types, attrs): # bonus: re.compile is idempotent under function iteration so you can pass # a compiled regex to it and it will return itself - p = parser(io, re.compile(match), attrs) - tables = p.parse_tables() + flavor = _validate_parser_flavor(flavor) + compiled_match = re.compile(match) + + # ugly hack because python 3 DELETES the exception variable! + retained_exception = None + for flav in flavor: + parser = _parser_dispatch(flav) + p = parser(io, compiled_match, attrs) + + try: + tables = p.parse_tables() + except Exception as caught_exception: + retained_exception = caught_exception + else: + break + else: + raise retained_exception + return [_data_to_frame(table, header, index_col, infer_types, skiprows) for table in tables] -def read_html(io, match='.+', flavor='html5lib', header=None, index_col=None, +def read_html(io, match='.+', flavor=None, header=None, index_col=None, skiprows=None, infer_types=True, attrs=None): r"""Read an HTML table into a DataFrame. @@ -747,7 +784,7 @@ def read_html(io, match='.+', flavor='html5lib', header=None, index_col=None, the http, ftp and file url protocols. If you have a URI that starts with ``'https'`` you might removing the ``'s'``. - match : str or regex, optional + match : str or regex, optional, default '.+' The set of tables containing text matching this regex or string will be returned. Unless the HTML is extremely simple you will probably need to pass a non-empty string here. Defaults to '.+' (match any non-empty @@ -755,23 +792,24 @@ def read_html(io, match='.+', flavor='html5lib', header=None, index_col=None, This value is converted to a regular expression so that there is consistent behavior between Beautiful Soup and lxml. - flavor : str, {'html5lib'} - The parsing engine to use under the hood. Right now only ``html5lib`` - is supported because it returns correct output whereas ``lxml`` does - not. 
+ flavor : str, container of strings, default ``None`` + The parsing engine to use under the hood. 'bs4' and 'html5lib' are + synonymous with each other, they are both there for backwards + compatibility. The default of ``None`` tries to use ``lxml`` to parse + and if that fails it falls back on ``bs4`` + ``html5lib``. - header : int or array-like or None, optional + header : int or array-like or None, optional, default ``None`` The row (or rows for a MultiIndex) to use to make the columns headers. - Note that this row will be removed from the data. Defaults to None. + Note that this row will be removed from the data. - index_col : int or array-like or None, optional + index_col : int or array-like or None, optional, default ``None`` The column to use to make the index. Note that this column will be - removed from the data. Defaults to None. + removed from the data. - skiprows : int or collections.Container or slice or None, optional + skiprows : int or collections.Container or slice or None, optional, default ``None`` If an integer is given then skip this many rows after parsing the column header. If a sequence of integers is given skip those specific - rows (0-based). Defaults to None, i.e., no rows are skipped. Note that + rows (0-based). Note that .. code-block:: python @@ -787,16 +825,15 @@ def read_html(io, match='.+', flavor='html5lib', header=None, index_col=None, it is treated as "skip :math:`n` rows", *not* as "skip the :math:`n^\textrm{th}` row". - infer_types : bool, optional + infer_types : bool, optional, default ``True`` Whether to convert numeric types and date-appearing strings to numbers - and dates, respectively. Defaults to True. + and dates, respectively. - attrs : dict or None, optional + attrs : dict or None, optional, default ``None`` This is a dictionary of attributes that you can pass to use to identify the table in the HTML. These are not checked for validity before being passed to lxml or Beautiful Soup. 
However, these attributes must be - valid HTML table attributes to work correctly. Defaults to None. For - example, + valid HTML table attributes to work correctly. For example, .. code-block:: python @@ -826,6 +863,9 @@ def read_html(io, match='.+', flavor='html5lib', header=None, index_col=None, Notes ----- + Before using this function you should probably read the :ref:`gotchas about + the parser libraries that this function uses <html-gotchas>`. + There's as little cleaning of the data as possible due to the heterogeneity and general disorder of HTML on the web. @@ -848,37 +888,13 @@ def read_html(io, match='.+', flavor='html5lib', header=None, index_col=None, Examples -------- - Parse a table from a list of failed banks from the FDIC: - - >>> from pandas import read_html, DataFrame - >>> url = 'http://www.fdic.gov/bank/individual/failed/banklist.html' - >>> dfs = read_html(url, match='Florida', attrs={'id': 'table'}) - >>> assert dfs # will not be empty if the call to read_html doesn't fail - >>> assert isinstance(dfs, list) # read_html returns a list of DataFrames - >>> assert all(map(lambda x: isinstance(x, DataFrame), dfs)) - - Parse some spam infomation from the USDA: - - >>> from pandas import read_html, DataFrame - >>> url = ('http://ndb.nal.usda.gov/ndb/foods/show/1732?fg=&man=&' - ... 'lfacet=&format=&count=&max=25&offset=&sort=&qlookup=spam') - >>> dfs = read_html(url, match='Water', header=0) - >>> assert dfs - >>> assert isinstance(dfs, list) - >>> assert all(map(lambda x: isinstance(x, DataFrame), dfs)) - - You can pass nothing to the `match` argument: - - >>> from pandas import read_html, DataFrame - >>> url = 'http://www.fdic.gov/bank/individual/failed/banklist.html' - >>> dfs = read_html(url) - >>> print(len(dfs)) # this will most likely be greater than 1 + See the :ref:`read_html documentation in the IO section of the docs + <io.read_html>` for many examples of reading HTML. """ # Type check here. 
We don't want to parse only to fail because of an # invalid value of an integer skiprows. if isinstance(skiprows, numbers.Integral) and skiprows < 0: raise AssertionError('cannot skip rows starting from the end of the ' 'data (you passed a negative value)') - parser = _parser_dispatch(flavor) - return _parse(parser, io, match, flavor, header, index_col, skiprows, - infer_types, attrs) + return _parse(flavor, io, match, header, index_col, skiprows, infer_types, + attrs) diff --git a/pandas/io/tests/data/valid_markup.html b/pandas/io/tests/data/valid_markup.html new file mode 100644 index 0000000000000..5db90da3baec4 --- /dev/null +++ b/pandas/io/tests/data/valid_markup.html @@ -0,0 +1,71 @@ +<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN"> +<html> + <head> + <meta name="generator" content= + "HTML Tidy for Linux (vers 25 March 2009), see www.w3.org"> + <title></title> + </head> + <body> + <table border="1" class="dataframe"> + <thead> + <tr style="text-align: right;"> + <th></th> + <th>a</th> + <th>b</th> + </tr> + </thead> + <tbody> + <tr> + <th>0</th> + <td>6</td> + <td>7</td> + </tr> + <tr> + <th>1</th> + <td>4</td> + <td>0</td> + </tr> + <tr> + <th>2</th> + <td>9</td> + <td>4</td> + </tr> + <tr> + <th>3</th> + <td>7</td> + <td>0</td> + </tr> + <tr> + <th>4</th> + <td>4</td> + <td>3</td> + </tr> + <tr> + <th>5</th> + <td>5</td> + <td>4</td> + </tr> + <tr> + <th>6</th> + <td>4</td> + <td>5</td> + </tr> + <tr> + <th>7</th> + <td>1</td> + <td>4</td> + </tr> + <tr> + <th>8</th> + <td>6</td> + <td>7</td> + </tr> + <tr> + <th>9</th> + <td>8</td> + <td>5</td> + </tr> + </tbody> + </table> + </body> +</html> diff --git a/pandas/io/tests/test_html.py b/pandas/io/tests/test_html.py index ea3c0520de169..d6086d822ee02 100644 --- a/pandas/io/tests/test_html.py +++ b/pandas/io/tests/test_html.py @@ -2,7 +2,6 @@ import re from cStringIO import StringIO from unittest import TestCase -import numbers from urllib2 import urlopen from contextlib import closing import warnings @@ 
-13,13 +12,11 @@ from numpy.random import rand from numpy.testing.decorators import slow -from pandas.io.html import read_html, import_module, _parse, _LxmlFrameParser -from pandas.io.html import _BeautifulSoupHtml5LibFrameParser -from pandas.io.html import _BeautifulSoupLxmlFrameParser, _remove_whitespace +from pandas.io.html import read_html, import_module +from pandas.io.html import _remove_whitespace from pandas import DataFrame, MultiIndex, read_csv, Timestamp from pandas.util.testing import (assert_frame_equal, network, get_data_path) -from numpy.testing.decorators import slow from pandas.util.testing import makeCustomDataframe as mkdf @@ -37,7 +34,7 @@ def _skip_if_no(module_name): raise nose.SkipTest -def _skip_if_none(module_names): +def _skip_if_none_of(module_names): if isinstance(module_names, basestring): _skip_if_no(module_names) else: @@ -47,12 +44,11 @@ def _skip_if_none(module_names): DATA_PATH = get_data_path() - def isframe(x): return isinstance(x, DataFrame) -def assert_framelist_equal(list1, list2): +def assert_framelist_equal(list1, list2, *args, **kwargs): assert len(list1) == len(list2), ('lists are not of equal size ' 'len(list1) == {0}, ' 'len(list2) == {1}'.format(len(list1), @@ -60,24 +56,33 @@ def assert_framelist_equal(list1, list2): assert all(map(lambda x, y: isframe(x) and isframe(y), list1, list2)), \ 'not all list elements are DataFrames' for frame_i, frame_j in zip(list1, list2): - assert_frame_equal(frame_i, frame_j) + assert_frame_equal(frame_i, frame_j, *args, **kwargs) assert not frame_i.empty, 'frames are both empty' -def _run_read_html(parser, io, match='.+', flavor='bs4', header=None, - index_col=None, skiprows=None, infer_types=False, - attrs=None): - if isinstance(skiprows, numbers.Integral) and skiprows < 0: - raise AssertionError('cannot skip rows starting from the end of the ' - 'data (you passed a negative value)') - return _parse(parser, io, match, flavor, header, index_col, skiprows, - infer_types, attrs) +class 
TestReadHtmlBase(TestCase): + def run_read_html(self, *args, **kwargs): + self.try_skip() + kwargs['flavor'] = kwargs.get('flavor', self.flavor) + return read_html(*args, **kwargs) + + def try_skip(self): + _skip_if_none_of(('bs4', 'html5lib')) + + def setup_data(self): + self.spam_data = os.path.join(DATA_PATH, 'spam.html') + self.banklist_data = os.path.join(DATA_PATH, 'banklist.html') + def setup_flavor(self): + self.flavor = 'bs4' + + def setUp(self): + self.setup_data() + self.setup_flavor() -class TestLxmlReadHtml(TestCase): def test_to_html_compat(self): df = mkdf(4, 3, data_gen_f=lambda *args: rand(), c_idx_names=False, - r_idx_names=False).applymap('{0:.3f}'.format) + r_idx_names=False).applymap('{0:.3f}'.format).astype(float) out = df.to_html() res = self.run_read_html(out, attrs={'class': 'dataframe'}, index_col=0)[0] @@ -85,16 +90,6 @@ def test_to_html_compat(self): print res.dtypes assert_frame_equal(res, df) - def setUp(self): - self.spam_data = os.path.join(DATA_PATH, 'spam.html') - self.banklist_data = os.path.join(DATA_PATH, 'banklist.html') - - def run_read_html(self, *args, **kwargs): - kwargs['flavor'] = 'lxml' - _skip_if_no('lxml') - parser = _LxmlFrameParser - return _run_read_html(parser, *args, **kwargs) - @network @slow def test_banklist_url(self): @@ -124,34 +119,6 @@ def test_banklist(self): assert_framelist_equal(df1, df2) - @slow - def test_banklist_header(self): - def try_remove_ws(x): - try: - return _remove_whitespace(x) - except AttributeError: - return x - - df = self.run_read_html(self.banklist_data, 'Metcalf', - attrs={'id': 'table'}, infer_types=False)[0] - ground_truth = read_csv(os.path.join(DATA_PATH, 'banklist.csv'), - converters={'Closing Date': Timestamp, - 'Updated Date': Timestamp}) - self.assertNotEqual(df.shape, ground_truth.shape) - self.assertRaises(AssertionError, assert_frame_equal, df, - ground_truth.applymap(try_remove_ws)) - - @slow - def test_gold_canyon(self): - gc = 'Gold Canyon' - with 
open(self.banklist_data, 'r') as f: - raw_text = f.read() - - self.assertIn(gc, raw_text) - df = self.run_read_html(self.banklist_data, 'Gold Canyon', - attrs={'id': 'table'}, infer_types=False)[0] - self.assertNotIn(gc, df.to_string()) - def test_spam(self): df1 = self.run_read_html(self.spam_data, '.*Water.*', infer_types=False) @@ -241,7 +208,14 @@ def test_index(self): df2 = self.run_read_html(self.spam_data, 'Unit', index_col=0) assert_framelist_equal(df1, df2) - def test_header_and_index(self): + def test_header_and_index_no_types(self): + df1 = self.run_read_html(self.spam_data, '.*Water.*', header=1, + index_col=0, infer_types=False) + df2 = self.run_read_html(self.spam_data, 'Unit', header=1, index_col=0, + infer_types=False) + assert_framelist_equal(df1, df2) + + def test_header_and_index_with_types(self): df1 = self.run_read_html(self.spam_data, '.*Water.*', header=1, index_col=0) df2 = self.run_read_html(self.spam_data, 'Unit', header=1, index_col=0) @@ -374,36 +348,6 @@ def test_pythonxy_plugins_table(self): zz = [df.iloc[0, 0] for df in dfs] self.assertListEqual(sorted(zz), sorted(['Python', 'SciTE'])) - -def test_invalid_flavor(): - url = 'google.com' - nose.tools.assert_raises(AssertionError, read_html, url, 'google', - flavor='not a* valid**++ flaver') - - -@slow -class TestBs4LxmlParser(TestLxmlReadHtml): - def test(self): - pass - - def run_read_html(self, *args, **kwargs): - kwargs['flavor'] = 'bs4' - _skip_if_none(('lxml', 'bs4')) - parser = _BeautifulSoupLxmlFrameParser - return _run_read_html(parser, *args, **kwargs) - - -@slow -class TestBs4Html5LibParser(TestBs4LxmlParser): - def test(self): - pass - - def run_read_html(self, *args, **kwargs): - kwargs['flavor'] = 'bs4' - _skip_if_none(('html5lib', 'bs4')) - parser = _BeautifulSoupHtml5LibFrameParser - return _run_read_html(parser, *args, **kwargs) - @slow def test_banklist_header(self): def try_remove_ws(x): @@ -445,19 +389,61 @@ def test_gold_canyon(self): with open(self.banklist_data, 
'r') as f: raw_text = f.read() - self.assertIn(gc, raw_text) + self.assert_(gc in raw_text) df = self.run_read_html(self.banklist_data, 'Gold Canyon', attrs={'id': 'table'}, infer_types=False)[0] self.assertIn(gc, df.to_string()) -def get_elements_from_url(url, flavor, element='table'): - _skip_if_no('bs4') - _skip_if_no(flavor) +class TestReadHtmlLxml(TestCase): + def run_read_html(self, *args, **kwargs): + self.flavor = ['lxml'] + self.try_skip() + kwargs['flavor'] = kwargs.get('flavor', self.flavor) + return read_html(*args, **kwargs) + + def try_skip(self): + _skip_if_no('lxml') + + def test_spam_data_fail(self): + from lxml.etree import XMLSyntaxError + spam_data = os.path.join(DATA_PATH, 'spam.html') + self.assertRaises(XMLSyntaxError, self.run_read_html, spam_data, flavor=['lxml']) + + def test_banklist_data_fail(self): + from lxml.etree import XMLSyntaxError + banklist_data = os.path.join(DATA_PATH, 'banklist.html') + self.assertRaises(XMLSyntaxError, self.run_read_html, banklist_data, flavor=['lxml']) + + def test_works_on_valid_markup(self): + filename = os.path.join(DATA_PATH, 'valid_markup.html') + dfs = self.run_read_html(filename, index_col=0, flavor=['lxml']) + self.assertIsInstance(dfs, list) + self.assertIsInstance(dfs[0], DataFrame) + + def setUp(self): + self.try_skip() + + @slow + def test_fallback_success(self): + _skip_if_none_of(('bs4', 'html5lib')) + banklist_data = os.path.join(DATA_PATH, 'banklist.html') + self.run_read_html(banklist_data, '.*Water.*', flavor=['lxml', + 'html5lib']) + + +def test_invalid_flavor(): + url = 'google.com' + nose.tools.assert_raises(ValueError, read_html, url, 'google', + flavor='not a* valid**++ flaver') + + +def get_elements_from_url(url, element='table'): + _skip_if_none_of(('bs4', 'html5lib')) from bs4 import BeautifulSoup, SoupStrainer strainer = SoupStrainer(element) with closing(urlopen(url)) as f: - soup = BeautifulSoup(f, features=flavor, parse_only=strainer) + soup = BeautifulSoup(f, 
features='html5lib', parse_only=strainer) return soup.find_all(element) @@ -465,16 +451,12 @@ def get_elements_from_url(url, flavor, element='table'): def test_bs4_finds_tables(): url = ('http://ndb.nal.usda.gov/ndb/foods/show/1732?fg=&man=&' 'lfacet=&format=&count=&max=25&offset=&sort=&qlookup=spam') - flavors = 'lxml', 'html5lib' with warnings.catch_warnings(): warnings.filterwarnings('ignore') - - for flavor in flavors: - assert get_elements_from_url(url, flavor, 'table') + assert get_elements_from_url(url, 'table') def get_lxml_elements(url, element): - _skip_if_no('lxml') from lxml.html import parse doc = parse(url)
https://api.github.com/repos/pandas-dev/pandas/pulls/3766
2013-06-05T23:24:33Z
2013-06-06T19:26:23Z
2013-06-06T19:26:23Z
2014-07-16T08:11:53Z
ENH: implement non-nano Timedelta scalar
diff --git a/pandas/_libs/tslibs/timedeltas.pxd b/pandas/_libs/tslibs/timedeltas.pxd index f114fd9297920..3019be5a59c46 100644 --- a/pandas/_libs/tslibs/timedeltas.pxd +++ b/pandas/_libs/tslibs/timedeltas.pxd @@ -1,6 +1,8 @@ from cpython.datetime cimport timedelta from numpy cimport int64_t +from .np_datetime cimport NPY_DATETIMEUNIT + # Exposed for tslib, not intended for outside use. cpdef int64_t delta_to_nanoseconds(delta) except? -1 @@ -13,7 +15,9 @@ cdef class _Timedelta(timedelta): int64_t value # nanoseconds bint _is_populated # are my components populated int64_t _d, _h, _m, _s, _ms, _us, _ns + NPY_DATETIMEUNIT _reso cpdef timedelta to_pytimedelta(_Timedelta self) cdef bint _has_ns(self) cdef _ensure_components(_Timedelta self) + cdef inline bint _compare_mismatched_resos(self, _Timedelta other, op) diff --git a/pandas/_libs/tslibs/timedeltas.pyx b/pandas/_libs/tslibs/timedeltas.pyx index 7979feb076c6e..6606158aea807 100644 --- a/pandas/_libs/tslibs/timedeltas.pyx +++ b/pandas/_libs/tslibs/timedeltas.pyx @@ -45,13 +45,19 @@ from pandas._libs.tslibs.nattype cimport ( ) from pandas._libs.tslibs.np_datetime cimport ( NPY_DATETIMEUNIT, + NPY_FR_ns, + cmp_dtstructs, cmp_scalar, get_datetime64_unit, get_timedelta64_value, + npy_datetimestruct, + pandas_datetime_to_datetimestruct, + pandas_timedelta_to_timedeltastruct, pandas_timedeltastruct, - td64_to_tdstruct, ) + from pandas._libs.tslibs.np_datetime import OutOfBoundsTimedelta + from pandas._libs.tslibs.offsets cimport is_tick_object from pandas._libs.tslibs.util cimport ( is_array, @@ -176,7 +182,9 @@ cpdef int64_t delta_to_nanoseconds(delta) except? 
-1: if is_tick_object(delta): return delta.nanos if isinstance(delta, _Timedelta): - return delta.value + if delta._reso == NPY_FR_ns: + return delta.value + raise NotImplementedError(delta._reso) if is_timedelta64_object(delta): return get_timedelta64_value(ensure_td64ns(delta)) @@ -251,6 +259,8 @@ cdef convert_to_timedelta64(object ts, str unit): return np.timedelta64(NPY_NAT, "ns") elif isinstance(ts, _Timedelta): # already in the proper format + if ts._reso != NPY_FR_ns: + raise NotImplementedError ts = np.timedelta64(ts.value, "ns") elif is_timedelta64_object(ts): ts = ensure_td64ns(ts) @@ -643,7 +653,8 @@ cdef bint _validate_ops_compat(other): def _op_unary_method(func, name): def f(self): - return Timedelta(func(self.value), unit='ns') + new_value = func(self.value) + return _timedelta_from_value_and_reso(new_value, self._reso) f.__name__ = name return f @@ -688,7 +699,17 @@ def _binary_op_method_timedeltalike(op, name): if other is NaT: # e.g. if original other was timedelta64('NaT') return NaT - return Timedelta(op(self.value, other.value), unit='ns') + + if self._reso != other._reso: + raise NotImplementedError + + res = op(self.value, other.value) + if res == NPY_NAT: + # e.g. test_implementation_limits + # TODO: more generally could do an overflowcheck in op? + return NaT + + return _timedelta_from_value_and_reso(res, reso=self._reso) f.__name__ = name return f @@ -818,6 +839,38 @@ cdef _to_py_int_float(v): raise TypeError(f"Invalid type {type(v)}. 
Must be int or float.") +def _timedelta_unpickle(value, reso): + return _timedelta_from_value_and_reso(value, reso) + + +cdef _timedelta_from_value_and_reso(int64_t value, NPY_DATETIMEUNIT reso): + # Could make this a classmethod if/when cython supports cdef classmethods + cdef: + _Timedelta td_base + + if reso == NPY_FR_ns: + td_base = _Timedelta.__new__(Timedelta, microseconds=int(value) // 1000) + elif reso == NPY_DATETIMEUNIT.NPY_FR_us: + td_base = _Timedelta.__new__(Timedelta, microseconds=int(value)) + elif reso == NPY_DATETIMEUNIT.NPY_FR_ms: + td_base = _Timedelta.__new__(Timedelta, milliseconds=int(value)) + elif reso == NPY_DATETIMEUNIT.NPY_FR_s: + td_base = _Timedelta.__new__(Timedelta, seconds=int(value)) + elif reso == NPY_DATETIMEUNIT.NPY_FR_m: + td_base = _Timedelta.__new__(Timedelta, minutes=int(value)) + elif reso == NPY_DATETIMEUNIT.NPY_FR_h: + td_base = _Timedelta.__new__(Timedelta, hours=int(value)) + elif reso == NPY_DATETIMEUNIT.NPY_FR_D: + td_base = _Timedelta.__new__(Timedelta, days=int(value)) + else: + raise NotImplementedError(reso) + + td_base.value = value + td_base._is_populated = 0 + td_base._reso = reso + return td_base + + # Similar to Timestamp/datetime, this is a construction requirement for # timedeltas that we need to do object instantiation in python. This will # serve as a C extension type that shadows the Python class, where we do any @@ -827,6 +880,7 @@ cdef class _Timedelta(timedelta): # int64_t value # nanoseconds # bint _is_populated # are my components populated # int64_t _d, _h, _m, _s, _ms, _us, _ns + # NPY_DATETIMEUNIT _reso # higher than np.ndarray and np.matrix __array_priority__ = 100 @@ -853,6 +907,11 @@ cdef class _Timedelta(timedelta): def __hash__(_Timedelta self): if self._has_ns(): + # Note: this does *not* satisfy the invariance + # td1 == td2 \\Rightarrow hash(td1) == hash(td2) + # if td1 and td2 have different _resos. timedelta64 also has this + # non-invariant behavior. 
+ # see GH#44504 return hash(self.value) else: return timedelta.__hash__(self) @@ -890,10 +949,30 @@ cdef class _Timedelta(timedelta): else: return NotImplemented - return cmp_scalar(self.value, ots.value, op) + if self._reso == ots._reso: + return cmp_scalar(self.value, ots.value, op) + return self._compare_mismatched_resos(ots, op) + + # TODO: re-use/share with Timestamp + cdef inline bint _compare_mismatched_resos(self, _Timedelta other, op): + # Can't just dispatch to numpy as they silently overflow and get it wrong + cdef: + npy_datetimestruct dts_self + npy_datetimestruct dts_other + + # dispatch to the datetimestruct utils instead of writing new ones! + pandas_datetime_to_datetimestruct(self.value, self._reso, &dts_self) + pandas_datetime_to_datetimestruct(other.value, other._reso, &dts_other) + return cmp_dtstructs(&dts_self, &dts_other, op) cdef bint _has_ns(self): - return self.value % 1000 != 0 + if self._reso == NPY_FR_ns: + return self.value % 1000 != 0 + elif self._reso < NPY_FR_ns: + # i.e. seconds, millisecond, microsecond + return False + else: + raise NotImplementedError(self._reso) cdef _ensure_components(_Timedelta self): """ @@ -905,7 +984,7 @@ cdef class _Timedelta(timedelta): cdef: pandas_timedeltastruct tds - td64_to_tdstruct(self.value, &tds) + pandas_timedelta_to_timedeltastruct(self.value, self._reso, &tds) self._d = tds.days self._h = tds.hrs self._m = tds.min @@ -937,13 +1016,24 @@ cdef class _Timedelta(timedelta): ----- Any nanosecond resolution will be lost. """ - return timedelta(microseconds=int(self.value) / 1000) + if self._reso == NPY_FR_ns: + return timedelta(microseconds=int(self.value) / 1000) + + # TODO(@WillAyd): is this the right way to use components? + self._ensure_components() + return timedelta( + days=self._d, seconds=self._seconds, microseconds=self._microseconds + ) def to_timedelta64(self) -> np.timedelta64: """ Return a numpy.timedelta64 object with 'ns' precision. 
""" - return np.timedelta64(self.value, 'ns') + cdef: + str abbrev = npy_unit_to_abbrev(self._reso) + # TODO: way to create a np.timedelta64 obj with the reso directly + # instead of having to get the abbrev? + return np.timedelta64(self.value, abbrev) def to_numpy(self, dtype=None, copy=False) -> np.timedelta64: """ @@ -1054,7 +1144,7 @@ cdef class _Timedelta(timedelta): >>> td.asm8 numpy.timedelta64(42,'ns') """ - return np.int64(self.value).view('m8[ns]') + return self.to_timedelta64() @property def resolution_string(self) -> str: @@ -1258,6 +1348,14 @@ cdef class _Timedelta(timedelta): f'H{components.minutes}M{seconds}S') return tpl + # ---------------------------------------------------------------- + # Constructors + + @classmethod + def _from_value_and_reso(cls, int64_t value, NPY_DATETIMEUNIT reso): + # exposing as classmethod for testing + return _timedelta_from_value_and_reso(value, reso) + # Python front end to C extension type _Timedelta # This serves as the box for timedelta64 @@ -1413,19 +1511,21 @@ class Timedelta(_Timedelta): if value == NPY_NAT: return NaT - # make timedelta happy - td_base = _Timedelta.__new__(cls, microseconds=int(value) // 1000) - td_base.value = value - td_base._is_populated = 0 - return td_base + return _timedelta_from_value_and_reso(value, NPY_FR_ns) def __setstate__(self, state): - (value) = state + if len(state) == 1: + # older pickle, only supported nanosecond + value = state[0] + reso = NPY_FR_ns + else: + value, reso = state self.value = value + self._reso = reso def __reduce__(self): - object_state = self.value, - return (Timedelta, object_state) + object_state = self.value, self._reso + return (_timedelta_unpickle, object_state) @cython.cdivision(True) def _round(self, freq, mode): @@ -1496,7 +1596,14 @@ class Timedelta(_Timedelta): def __mul__(self, other): if is_integer_object(other) or is_float_object(other): - return Timedelta(other * self.value, unit='ns') + if util.is_nan(other): + # np.nan * timedelta -> 
np.timedelta64("NaT"), in this case NaT + return NaT + + return _timedelta_from_value_and_reso( + <int64_t>(other * self.value), + reso=self._reso, + ) elif is_array(other): # ndarray-like diff --git a/pandas/tests/scalar/timedelta/test_timedelta.py b/pandas/tests/scalar/timedelta/test_timedelta.py index 1ec93c69def99..17a8ec5f86fc8 100644 --- a/pandas/tests/scalar/timedelta/test_timedelta.py +++ b/pandas/tests/scalar/timedelta/test_timedelta.py @@ -24,6 +24,79 @@ import pandas._testing as tm +class TestNonNano: + @pytest.fixture(params=[7, 8, 9]) + def unit(self, request): + # 7, 8, 9 correspond to second, millisecond, and microsecond, respectively + return request.param + + @pytest.fixture + def val(self, unit): + # microsecond that would be just out of bounds for nano + us = 9223372800000000 + if unit == 9: + value = us + elif unit == 8: + value = us // 1000 + else: + value = us // 1_000_000 + return value + + @pytest.fixture + def td(self, unit, val): + return Timedelta._from_value_and_reso(val, unit) + + def test_from_value_and_reso(self, unit, val): + # Just checking that the fixture is giving us what we asked for + td = Timedelta._from_value_and_reso(val, unit) + assert td.value == val + assert td._reso == unit + assert td.days == 106752 + + def test_unary_non_nano(self, td, unit): + assert abs(td)._reso == unit + assert (-td)._reso == unit + assert (+td)._reso == unit + + def test_sub_preserves_reso(self, td, unit): + res = td - td + expected = Timedelta._from_value_and_reso(0, unit) + assert res == expected + assert res._reso == unit + + def test_mul_preserves_reso(self, td, unit): + # The td fixture should always be far from the implementation + # bound, so doubling does not risk overflow. 
+ res = td * 2 + assert res.value == td.value * 2 + assert res._reso == unit + + def test_cmp_cross_reso(self, td): + other = Timedelta(days=106751, unit="ns") + assert other < td + assert td > other + assert not other == td + assert td != other + + def test_to_pytimedelta(self, td): + res = td.to_pytimedelta() + expected = timedelta(days=106752) + assert type(res) is timedelta + assert res == expected + + def test_to_timedelta64(self, td, unit): + for res in [td.to_timedelta64(), td.to_numpy(), td.asm8]: + + assert isinstance(res, np.timedelta64) + assert res.view("i8") == td.value + if unit == 7: + assert res.dtype == "m8[s]" + elif unit == 8: + assert res.dtype == "m8[ms]" + elif unit == 9: + assert res.dtype == "m8[us]" + + class TestTimedeltaUnaryOps: def test_invert(self): td = Timedelta(10, unit="d") diff --git a/setup.py b/setup.py index 62704dc4423c8..384c1a267afe3 100755 --- a/setup.py +++ b/setup.py @@ -538,6 +538,7 @@ def srcpath(name=None, suffix=".pyx", subdir="src"): "_libs.tslibs.timedeltas": { "pyxfile": "_libs/tslibs/timedeltas", "depends": tseries_depends, + "sources": ["pandas/_libs/tslibs/src/datetime/np_datetime.c"], }, "_libs.tslibs.timestamps": { "pyxfile": "_libs/tslibs/timestamps",
- [ ] closes #xxxx (Replace xxxx with the Github issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. Not user-exposed. This is just enough to allow us to have a non-nano TimedeltaArray. ASV benchmarks for tslibs.timedelta are unchanged.
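The diff threads a new `_reso` field through `_Timedelta` so a scalar can carry a non-nanosecond resolution, much as `np.timedelta64` already stores a unit per value. A numpy-only sketch of the behavior the patched scalar emulates, using the same just-out-of-nanosecond-bounds value as the new tests:

```python
import numpy as np

# np.timedelta64 carries a resolution with each value, just as the patched
# Timedelta does via _reso.  The microsecond value below is the one used in
# the new tests: a span just out of bounds for nanosecond resolution.
us = 9223372800000000
td_us = np.timedelta64(us, "us")             # microsecond resolution
td_s = np.timedelta64(us // 1_000_000, "s")  # same span, second resolution

assert td_us == td_s                         # comparison works across resolutions
assert td_s.dtype == np.dtype("m8[s]")       # the unit is part of the dtype
assert int(td_s.astype("m8[D]").view("i8")) == 106752  # 106752 days, as asserted in the tests
```

Note that, as the new `__hash__` comment in the diff points out, equal values at different resolutions need not hash equal; `np.timedelta64` shares that non-invariance.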
https://api.github.com/repos/pandas-dev/pandas/pulls/46688
2022-04-07T17:13:41Z
2022-04-18T20:51:44Z
2022-04-18T20:51:44Z
2022-04-22T20:13:25Z
Backport PR #46674 on branch 1.4.x (DOC: generate docs for the `Series.dt.isocalendar()` method.)
diff --git a/doc/source/reference/series.rst b/doc/source/reference/series.rst index a60dab549e66d..fcdc9ea9b95da 100644 --- a/doc/source/reference/series.rst +++ b/doc/source/reference/series.rst @@ -342,6 +342,7 @@ Datetime methods :toctree: api/ :template: autosummary/accessor_method.rst + Series.dt.isocalendar Series.dt.to_period Series.dt.to_pydatetime Series.dt.tz_localize diff --git a/pandas/core/indexes/accessors.py b/pandas/core/indexes/accessors.py index 8c2813f2b57ec..78beda95d4658 100644 --- a/pandas/core/indexes/accessors.py +++ b/pandas/core/indexes/accessors.py @@ -277,12 +277,13 @@ def isocalendar(self): @property def weekofyear(self): """ - The week ordinal of the year. + The week ordinal of the year according to the ISO 8601 standard. .. deprecated:: 1.1.0 - Series.dt.weekofyear and Series.dt.week have been deprecated. - Please use Series.dt.isocalendar().week instead. + Series.dt.weekofyear and Series.dt.week have been deprecated. Please + call :func:`Series.dt.isocalendar` and access the ``week`` column + instead. """ warnings.warn( "Series.dt.weekofyear and Series.dt.week have been deprecated. "
Backport PR #46674: DOC: generate docs for the `Series.dt.isocalendar()` method.
https://api.github.com/repos/pandas-dev/pandas/pulls/46686
2022-04-07T16:37:12Z
2022-04-09T01:59:20Z
2022-04-09T01:59:20Z
2022-04-09T01:59:20Z
Fix a few test failures on big-endian systems
diff --git a/pandas/_testing/__init__.py b/pandas/_testing/__init__.py index fc48317114e23..0a62ee956be61 100644 --- a/pandas/_testing/__init__.py +++ b/pandas/_testing/__init__.py @@ -8,6 +8,7 @@ import os import re import string +from sys import byteorder from typing import ( TYPE_CHECKING, Callable, @@ -168,6 +169,8 @@ np.uint32, ] +ENDIAN = {"little": "<", "big": ">"}[byteorder] + NULL_OBJECTS = [None, np.nan, pd.NaT, float("nan"), pd.NA, Decimal("NaN")] NP_NAT_OBJECTS = [ cls("NaT", unit) diff --git a/pandas/tests/arrays/boolean/test_astype.py b/pandas/tests/arrays/boolean/test_astype.py index 57cec70262526..932e903c0e448 100644 --- a/pandas/tests/arrays/boolean/test_astype.py +++ b/pandas/tests/arrays/boolean/test_astype.py @@ -20,7 +20,7 @@ def test_astype(): tm.assert_numpy_array_equal(result, expected) result = arr.astype("str") - expected = np.array(["True", "False", "<NA>"], dtype="<U5") + expected = np.array(["True", "False", "<NA>"], dtype=f"{tm.ENDIAN}U5") tm.assert_numpy_array_equal(result, expected) # no missing values diff --git a/pandas/tests/arrays/boolean/test_construction.py b/pandas/tests/arrays/boolean/test_construction.py index 64b1786cbd101..d26eea19c06e9 100644 --- a/pandas/tests/arrays/boolean/test_construction.py +++ b/pandas/tests/arrays/boolean/test_construction.py @@ -273,7 +273,7 @@ def test_to_numpy(box): arr = con([True, False, None], dtype="boolean") result = arr.to_numpy(dtype="str") - expected = np.array([True, False, pd.NA], dtype="<U5") + expected = np.array([True, False, pd.NA], dtype=f"{tm.ENDIAN}U5") tm.assert_numpy_array_equal(result, expected) # no missing values -> can convert to bool, otherwise raises diff --git a/pandas/tests/arrays/floating/test_to_numpy.py b/pandas/tests/arrays/floating/test_to_numpy.py index 26e5687b1b4a0..2ed52439adf53 100644 --- a/pandas/tests/arrays/floating/test_to_numpy.py +++ b/pandas/tests/arrays/floating/test_to_numpy.py @@ -115,7 +115,7 @@ def test_to_numpy_string(box, dtype): arr = 
con([0.0, 1.0, None], dtype="Float64") result = arr.to_numpy(dtype="str") - expected = np.array([0.0, 1.0, pd.NA], dtype="<U32") + expected = np.array([0.0, 1.0, pd.NA], dtype=f"{tm.ENDIAN}U32") tm.assert_numpy_array_equal(result, expected) diff --git a/pandas/tests/arrays/integer/test_dtypes.py b/pandas/tests/arrays/integer/test_dtypes.py index 8348ff79b24ee..1566476c32989 100644 --- a/pandas/tests/arrays/integer/test_dtypes.py +++ b/pandas/tests/arrays/integer/test_dtypes.py @@ -283,7 +283,7 @@ def test_to_numpy_na_raises(dtype): def test_astype_str(): a = pd.array([1, 2, None], dtype="Int64") - expected = np.array(["1", "2", "<NA>"], dtype="<U21") + expected = np.array(["1", "2", "<NA>"], dtype=f"{tm.ENDIAN}U21") tm.assert_numpy_array_equal(a.astype(str), expected) tm.assert_numpy_array_equal(a.astype("str"), expected) diff --git a/pandas/tests/frame/methods/test_to_records.py b/pandas/tests/frame/methods/test_to_records.py index 1a84fb73fd524..6332ffd181eba 100644 --- a/pandas/tests/frame/methods/test_to_records.py +++ b/pandas/tests/frame/methods/test_to_records.py @@ -151,7 +151,12 @@ def test_to_records_with_categorical(self): {}, np.rec.array( [(0, 1, 0.2, "a"), (1, 2, 1.5, "bc")], - dtype=[("index", "<i8"), ("A", "<i8"), ("B", "<f8"), ("C", "O")], + dtype=[ + ("index", f"{tm.ENDIAN}i8"), + ("A", f"{tm.ENDIAN}i8"), + ("B", f"{tm.ENDIAN}f8"), + ("C", "O"), + ], ), ), # Should have no effect in this case. @@ -159,23 +164,38 @@ def test_to_records_with_categorical(self): {"index": True}, np.rec.array( [(0, 1, 0.2, "a"), (1, 2, 1.5, "bc")], - dtype=[("index", "<i8"), ("A", "<i8"), ("B", "<f8"), ("C", "O")], + dtype=[ + ("index", f"{tm.ENDIAN}i8"), + ("A", f"{tm.ENDIAN}i8"), + ("B", f"{tm.ENDIAN}f8"), + ("C", "O"), + ], ), ), # Column dtype applied across the board. Index unaffected. 
( - {"column_dtypes": "<U4"}, + {"column_dtypes": f"{tm.ENDIAN}U4"}, np.rec.array( [("0", "1", "0.2", "a"), ("1", "2", "1.5", "bc")], - dtype=[("index", "<i8"), ("A", "<U4"), ("B", "<U4"), ("C", "<U4")], + dtype=[ + ("index", f"{tm.ENDIAN}i8"), + ("A", f"{tm.ENDIAN}U4"), + ("B", f"{tm.ENDIAN}U4"), + ("C", f"{tm.ENDIAN}U4"), + ], ), ), # Index dtype applied across the board. Columns unaffected. ( - {"index_dtypes": "<U1"}, + {"index_dtypes": f"{tm.ENDIAN}U1"}, np.rec.array( [("0", 1, 0.2, "a"), ("1", 2, 1.5, "bc")], - dtype=[("index", "<U1"), ("A", "<i8"), ("B", "<f8"), ("C", "O")], + dtype=[ + ("index", f"{tm.ENDIAN}U1"), + ("A", f"{tm.ENDIAN}i8"), + ("B", f"{tm.ENDIAN}f8"), + ("C", "O"), + ], ), ), # Pass in a type instance. @@ -183,7 +203,12 @@ def test_to_records_with_categorical(self): {"column_dtypes": str}, np.rec.array( [("0", "1", "0.2", "a"), ("1", "2", "1.5", "bc")], - dtype=[("index", "<i8"), ("A", "<U"), ("B", "<U"), ("C", "<U")], + dtype=[ + ("index", f"{tm.ENDIAN}i8"), + ("A", f"{tm.ENDIAN}U"), + ("B", f"{tm.ENDIAN}U"), + ("C", f"{tm.ENDIAN}U"), + ], ), ), # Pass in a dtype instance. @@ -191,15 +216,31 @@ def test_to_records_with_categorical(self): {"column_dtypes": np.dtype("unicode")}, np.rec.array( [("0", "1", "0.2", "a"), ("1", "2", "1.5", "bc")], - dtype=[("index", "<i8"), ("A", "<U"), ("B", "<U"), ("C", "<U")], + dtype=[ + ("index", f"{tm.ENDIAN}i8"), + ("A", f"{tm.ENDIAN}U"), + ("B", f"{tm.ENDIAN}U"), + ("C", f"{tm.ENDIAN}U"), + ], ), ), # Pass in a dictionary (name-only). ( - {"column_dtypes": {"A": np.int8, "B": np.float32, "C": "<U2"}}, + { + "column_dtypes": { + "A": np.int8, + "B": np.float32, + "C": f"{tm.ENDIAN}U2", + } + }, np.rec.array( [("0", "1", "0.2", "a"), ("1", "2", "1.5", "bc")], - dtype=[("index", "<i8"), ("A", "i1"), ("B", "<f4"), ("C", "<U2")], + dtype=[ + ("index", f"{tm.ENDIAN}i8"), + ("A", "i1"), + ("B", f"{tm.ENDIAN}f4"), + ("C", f"{tm.ENDIAN}U2"), + ], ), ), # Pass in a dictionary (indices-only). 
@@ -207,15 +248,24 @@ def test_to_records_with_categorical(self): {"index_dtypes": {0: "int16"}}, np.rec.array( [(0, 1, 0.2, "a"), (1, 2, 1.5, "bc")], - dtype=[("index", "i2"), ("A", "<i8"), ("B", "<f8"), ("C", "O")], + dtype=[ + ("index", "i2"), + ("A", f"{tm.ENDIAN}i8"), + ("B", f"{tm.ENDIAN}f8"), + ("C", "O"), + ], ), ), # Ignore index mappings if index is not True. ( - {"index": False, "index_dtypes": "<U2"}, + {"index": False, "index_dtypes": f"{tm.ENDIAN}U2"}, np.rec.array( [(1, 0.2, "a"), (2, 1.5, "bc")], - dtype=[("A", "<i8"), ("B", "<f8"), ("C", "O")], + dtype=[ + ("A", f"{tm.ENDIAN}i8"), + ("B", f"{tm.ENDIAN}f8"), + ("C", "O"), + ], ), ), # Non-existent names / indices in mapping should not error. @@ -223,7 +273,12 @@ def test_to_records_with_categorical(self): {"index_dtypes": {0: "int16", "not-there": "float32"}}, np.rec.array( [(0, 1, 0.2, "a"), (1, 2, 1.5, "bc")], - dtype=[("index", "i2"), ("A", "<i8"), ("B", "<f8"), ("C", "O")], + dtype=[ + ("index", "i2"), + ("A", f"{tm.ENDIAN}i8"), + ("B", f"{tm.ENDIAN}f8"), + ("C", "O"), + ], ), ), # Names / indices not in mapping default to array dtype. @@ -231,7 +286,12 @@ def test_to_records_with_categorical(self): {"column_dtypes": {"A": np.int8, "B": np.float32}}, np.rec.array( [("0", "1", "0.2", "a"), ("1", "2", "1.5", "bc")], - dtype=[("index", "<i8"), ("A", "i1"), ("B", "<f4"), ("C", "O")], + dtype=[ + ("index", f"{tm.ENDIAN}i8"), + ("A", "i1"), + ("B", f"{tm.ENDIAN}f4"), + ("C", "O"), + ], ), ), # Names / indices not in dtype mapping default to array dtype. @@ -239,18 +299,28 @@ def test_to_records_with_categorical(self): {"column_dtypes": {"A": np.dtype("int8"), "B": np.dtype("float32")}}, np.rec.array( [("0", "1", "0.2", "a"), ("1", "2", "1.5", "bc")], - dtype=[("index", "<i8"), ("A", "i1"), ("B", "<f4"), ("C", "O")], + dtype=[ + ("index", f"{tm.ENDIAN}i8"), + ("A", "i1"), + ("B", f"{tm.ENDIAN}f4"), + ("C", "O"), + ], ), ), # Mixture of everything. 
( { "column_dtypes": {"A": np.int8, "B": np.float32}, - "index_dtypes": "<U2", + "index_dtypes": f"{tm.ENDIAN}U2", }, np.rec.array( [("0", "1", "0.2", "a"), ("1", "2", "1.5", "bc")], - dtype=[("index", "<U2"), ("A", "i1"), ("B", "<f4"), ("C", "O")], + dtype=[ + ("index", f"{tm.ENDIAN}U2"), + ("A", "i1"), + ("B", f"{tm.ENDIAN}f4"), + ("C", "O"), + ], ), ), # Invalid dype values. @@ -299,7 +369,11 @@ def test_to_records_dtype(self, kwargs, expected): {"column_dtypes": "float64", "index_dtypes": {0: "int32", 1: "int8"}}, np.rec.array( [(1, 2, 3.0), (4, 5, 6.0), (7, 8, 9.0)], - dtype=[("a", "<i4"), ("b", "i1"), ("c", "<f8")], + dtype=[ + ("a", f"{tm.ENDIAN}i4"), + ("b", "i1"), + ("c", f"{tm.ENDIAN}f8"), + ], ), ), # MultiIndex in the columns. @@ -310,14 +384,17 @@ def test_to_records_dtype(self, kwargs, expected): [("a", "d"), ("b", "e"), ("c", "f")] ), ), - {"column_dtypes": {0: "<U1", 2: "float32"}, "index_dtypes": "float32"}, + { + "column_dtypes": {0: f"{tm.ENDIAN}U1", 2: "float32"}, + "index_dtypes": "float32", + }, np.rec.array( [(0.0, "1", 2, 3.0), (1.0, "4", 5, 6.0), (2.0, "7", 8, 9.0)], dtype=[ - ("index", "<f4"), - ("('a', 'd')", "<U1"), - ("('b', 'e')", "<i8"), - ("('c', 'f')", "<f4"), + ("index", f"{tm.ENDIAN}f4"), + ("('a', 'd')", f"{tm.ENDIAN}U1"), + ("('b', 'e')", f"{tm.ENDIAN}i8"), + ("('c', 'f')", f"{tm.ENDIAN}f4"), ], ), ), @@ -332,7 +409,10 @@ def test_to_records_dtype(self, kwargs, expected): [("d", -4), ("d", -5), ("f", -6)], names=list("cd") ), ), - {"column_dtypes": "float64", "index_dtypes": {0: "<U2", 1: "int8"}}, + { + "column_dtypes": "float64", + "index_dtypes": {0: f"{tm.ENDIAN}U2", 1: "int8"}, + }, np.rec.array( [ ("d", -4, 1.0, 2.0, 3.0), @@ -340,11 +420,11 @@ def test_to_records_dtype(self, kwargs, expected): ("f", -6, 7, 8, 9.0), ], dtype=[ - ("c", "<U2"), + ("c", f"{tm.ENDIAN}U2"), ("d", "i1"), - ("('a', 'd')", "<f8"), - ("('b', 'e')", "<f8"), - ("('c', 'f')", "<f8"), + ("('a', 'd')", f"{tm.ENDIAN}f8"), + ("('b', 'e')", 
f"{tm.ENDIAN}f8"), + ("('c', 'f')", f"{tm.ENDIAN}f8"), ], ), ), @@ -374,13 +454,18 @@ def keys(self): dtype_mappings = { "column_dtypes": DictLike(**{"A": np.int8, "B": np.float32}), - "index_dtypes": "<U2", + "index_dtypes": f"{tm.ENDIAN}U2", } result = df.to_records(**dtype_mappings) expected = np.rec.array( [("0", "1", "0.2", "a"), ("1", "2", "1.5", "bc")], - dtype=[("index", "<U2"), ("A", "i1"), ("B", "<f4"), ("C", "O")], + dtype=[ + ("index", f"{tm.ENDIAN}U2"), + ("A", "i1"), + ("B", f"{tm.ENDIAN}f4"), + ("C", "O"), + ], ) tm.assert_almost_equal(result, expected) diff --git a/pandas/tests/io/parser/test_c_parser_only.py b/pandas/tests/io/parser/test_c_parser_only.py index 71af12b10b417..9a81790ca3bb0 100644 --- a/pandas/tests/io/parser/test_c_parser_only.py +++ b/pandas/tests/io/parser/test_c_parser_only.py @@ -144,9 +144,12 @@ def test_dtype_and_names_error(c_parser_only): "the dtype timedelta64 is not supported for parsing", {"dtype": {"A": "timedelta64", "B": "float64"}}, ), - ("the dtype <U8 is not supported for parsing", {"dtype": {"A": "U8"}}), + ( + f"the dtype {tm.ENDIAN}U8 is not supported for parsing", + {"dtype": {"A": "U8"}}, + ), ], - ids=["dt64-0", "dt64-1", "td64", "<U8"], + ids=["dt64-0", "dt64-1", "td64", f"{tm.ENDIAN}U8"], ) def test_unsupported_dtype(c_parser_only, match, kwargs): parser = c_parser_only diff --git a/pandas/tests/tools/test_to_timedelta.py b/pandas/tests/tools/test_to_timedelta.py index cf512463d0473..6c11ec42858c0 100644 --- a/pandas/tests/tools/test_to_timedelta.py +++ b/pandas/tests/tools/test_to_timedelta.py @@ -198,7 +198,8 @@ def test_to_timedelta_on_missing_values(self): actual = to_timedelta(Series(["00:00:01", np.nan])) expected = Series( - [np.timedelta64(1000000000, "ns"), timedelta_NaT], dtype="<m8[ns]" + [np.timedelta64(1000000000, "ns"), timedelta_NaT], + dtype=f"{tm.ENDIAN}m8[ns]", ) tm.assert_series_equal(actual, expected)
These are all due to tests expecting little-endian dtypes, where in fact the endianness of the dtype is that of the host. - [ ] closes #xxxx (Replace xxxx with the Github issue number) **No issue filed** - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature **Not applicable; fixes some tests on big-endian platforms** - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
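The fix replaces hard-coded `<`-prefixed dtype strings with a prefix derived from the host byte order. A minimal sketch of the `tm.ENDIAN` idiom the patch adds to `pandas._testing`:

```python
from sys import byteorder

import numpy as np

# Hard-coding "<U5" assumes little-endian; on big-endian hosts numpy reports
# ">U5" and dtype comparisons in the tests fail.  Deriving the prefix from
# the host byte order, as the new pandas._testing.ENDIAN does, works on both.
ENDIAN = {"little": "<", "big": ">"}[byteorder]

arr = np.array(["True", "False"])  # longest string has 5 characters
assert arr.dtype == np.dtype(f"{ENDIAN}U5")
```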
https://api.github.com/repos/pandas-dev/pandas/pulls/46681
2022-04-07T12:11:12Z
2022-04-07T16:35:16Z
2022-04-07T16:35:16Z
2022-04-07T16:35:16Z
DOC: generate docs for the `Series.dt.isocalendar()` method.
diff --git a/doc/source/reference/series.rst b/doc/source/reference/series.rst index a60dab549e66d..fcdc9ea9b95da 100644 --- a/doc/source/reference/series.rst +++ b/doc/source/reference/series.rst @@ -342,6 +342,7 @@ Datetime methods :toctree: api/ :template: autosummary/accessor_method.rst + Series.dt.isocalendar Series.dt.to_period Series.dt.to_pydatetime Series.dt.tz_localize diff --git a/pandas/core/indexes/accessors.py b/pandas/core/indexes/accessors.py index a0bc0ae8e3511..26748a4342bb6 100644 --- a/pandas/core/indexes/accessors.py +++ b/pandas/core/indexes/accessors.py @@ -277,12 +277,13 @@ def isocalendar(self): @property def weekofyear(self): """ - The week ordinal of the year. + The week ordinal of the year according to the ISO 8601 standard. .. deprecated:: 1.1.0 - Series.dt.weekofyear and Series.dt.week have been deprecated. - Please use Series.dt.isocalendar().week instead. + Series.dt.weekofyear and Series.dt.week have been deprecated. Please + call :func:`Series.dt.isocalendar` and access the ``week`` column + instead. """ warnings.warn( "Series.dt.weekofyear and Series.dt.week have been deprecated. "
- [ ] closes #xxxx (Replace xxxx with the Github issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
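The method being documented returns a DataFrame of ISO 8601 calendar components; its `week` column is the replacement for the deprecated `Series.dt.weekofyear`. A quick sketch (assumes pandas >= 1.1, where `isocalendar` was added):

```python
import pandas as pd

# Series.dt.isocalendar() returns a DataFrame with ISO 8601 year, week and
# day columns; s.dt.isocalendar().week replaces the deprecated weekofyear.
s = pd.Series(pd.to_datetime(["2020-01-01", "2019-12-30"]))
cal = s.dt.isocalendar()

assert list(cal.columns) == ["year", "week", "day"]
# 2019-12-30 is a Monday and already belongs to ISO week 1 of 2020
assert list(cal["week"]) == [1, 1]
assert list(cal["year"]) == [2020, 2020]
```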
https://api.github.com/repos/pandas-dev/pandas/pulls/46674
2022-04-06T20:29:57Z
2022-04-07T16:36:30Z
2022-04-07T16:36:30Z
2022-04-07T17:42:46Z
Backport PR #46663 on branch 1.4.x (CI/DOC: pin pydata-sphinx-theme to 0.8.0)
diff --git a/environment.yml b/environment.yml index c23bb99c736dc..a90f28a2c9e16 100644 --- a/environment.yml +++ b/environment.yml @@ -34,7 +34,7 @@ dependencies: - gitdb - numpydoc < 1.2 # 2021-02-09 1.2dev breaking CI - pandas-dev-flaker=0.4.0 - - pydata-sphinx-theme + - pydata-sphinx-theme=0.8.0 - pytest-cython - sphinx - sphinx-panels diff --git a/requirements-dev.txt b/requirements-dev.txt index 6caa9a7512faf..bb6c5d9427d38 100644 --- a/requirements-dev.txt +++ b/requirements-dev.txt @@ -20,7 +20,7 @@ gitpython gitdb numpydoc < 1.2 pandas-dev-flaker==0.4.0 -pydata-sphinx-theme +pydata-sphinx-theme==0.8.0 pytest-cython sphinx sphinx-panels
Backport PR #46663
https://api.github.com/repos/pandas-dev/pandas/pulls/46667
2022-04-06T13:36:56Z
2022-04-07T00:13:57Z
2022-04-07T00:13:57Z
2022-04-07T09:37:10Z
TST/CI: simplify geopandas downstream test to not use fiona
diff --git a/ci/deps/actions-38-downstream_compat.yaml b/ci/deps/actions-38-downstream_compat.yaml index 01415122e6076..629d7b501692d 100644 --- a/ci/deps/actions-38-downstream_compat.yaml +++ b/ci/deps/actions-38-downstream_compat.yaml @@ -60,7 +60,7 @@ dependencies: - cftime - dask - ipython - - geopandas + - geopandas-base - seaborn - scikit-learn - statsmodels diff --git a/pandas/tests/test_downstream.py b/pandas/tests/test_downstream.py index 83b476fefea46..b4887cc321785 100644 --- a/pandas/tests/test_downstream.py +++ b/pandas/tests/test_downstream.py @@ -8,7 +8,6 @@ import numpy as np import pytest -from pandas.compat import is_platform_windows import pandas.util._test_decorators as td import pandas as pd @@ -224,20 +223,13 @@ def test_pandas_datareader(): pandas_datareader.DataReader("F", "quandl", "2017-01-01", "2017-02-01") -# importing from pandas, Cython import warning -@pytest.mark.filterwarnings("ignore:can't resolve:ImportWarning") -@pytest.mark.xfail( - is_platform_windows(), - raises=ImportError, - reason="ImportError: the 'read_file' function requires the 'fiona' package, " - "but it is not installed or does not import correctly", - strict=False, -) def test_geopandas(): geopandas = import_module("geopandas") - fp = geopandas.datasets.get_path("naturalearth_lowres") - assert geopandas.read_file(fp) is not None + gdf = geopandas.GeoDataFrame( + {"col": [1, 2, 3], "geometry": geopandas.points_from_xy([1, 2, 3], [1, 2, 3])} + ) + assert gdf[["col", "geometry"]].geometry.x.equals(Series([1.0, 2.0, 3.0])) # Cython import warning
xref https://github.com/pandas-dev/pandas/pull/46536#issuecomment-1090244769
https://api.github.com/repos/pandas-dev/pandas/pulls/46665
2022-04-06T13:09:55Z
2022-04-06T21:58:37Z
2022-04-06T21:58:37Z
2022-04-07T06:12:43Z
BUG: Fix infinite recursion loop when pivot of IntervalTree is ±inf
diff --git a/doc/source/whatsnew/v1.5.0.rst b/doc/source/whatsnew/v1.5.0.rst index 61848cb127029..4ad3f9e2ea391 100644 --- a/doc/source/whatsnew/v1.5.0.rst +++ b/doc/source/whatsnew/v1.5.0.rst @@ -786,6 +786,8 @@ Indexing - Bug in :meth:`Series.asof` and :meth:`DataFrame.asof` incorrectly casting bool-dtype results to ``float64`` dtype (:issue:`16063`) - Bug in :meth:`NDFrame.xs`, :meth:`DataFrame.iterrows`, :meth:`DataFrame.loc` and :meth:`DataFrame.iloc` not always propagating metadata (:issue:`28283`) - Bug in :meth:`DataFrame.sum` min_count changes dtype if input contains NaNs (:issue:`46947`) +- Bug in :class:`IntervalTree` that lead to an infinite recursion. (:issue:`46658`) +- Missing ^^^^^^^ diff --git a/pandas/_libs/intervaltree.pxi.in b/pandas/_libs/intervaltree.pxi.in index 51db5f1e76c99..9842332bae7ef 100644 --- a/pandas/_libs/intervaltree.pxi.in +++ b/pandas/_libs/intervaltree.pxi.in @@ -342,6 +342,13 @@ cdef class {{dtype_title}}Closed{{closed_title}}IntervalNode(IntervalNode): # calculate a pivot so we can create child nodes self.is_leaf_node = False self.pivot = np.median(left / 2 + right / 2) + if np.isinf(self.pivot): + self.pivot = cython.cast({{dtype}}_t, 0) + if self.pivot > np.max(right): + self.pivot = np.max(left) + if self.pivot < np.min(left): + self.pivot = np.min(right) + left_set, right_set, center_set = self.classify_intervals( left, right) diff --git a/pandas/tests/indexes/interval/test_interval_tree.py b/pandas/tests/indexes/interval/test_interval_tree.py index 345025d63f4b2..06c499b9e33f4 100644 --- a/pandas/tests/indexes/interval/test_interval_tree.py +++ b/pandas/tests/indexes/interval/test_interval_tree.py @@ -207,3 +207,21 @@ def test_interval_tree_error_and_warning(self): ): left, right = np.arange(10), [np.iinfo(np.int64).max] * 10 IntervalTree(left, right, closed="both") + + @pytest.mark.xfail(not IS64, reason="GH 23440") + @pytest.mark.parametrize( + "left, right, expected", + [ + ([-np.inf, 1.0], [1.0, 2.0], 0.0), + 
([-np.inf, -2.0], [-2.0, -1.0], -2.0), + ([-2.0, -1.0], [-1.0, np.inf], 0.0), + ([1.0, 2.0], [2.0, np.inf], 2.0), + ], + ) + def test_inf_bound_infinite_recursion(self, left, right, expected): + # GH 46658 + + tree = IntervalTree(left * 101, right * 101) + + result = tree.root.pivot + assert result == expected diff --git a/pandas/tests/indexing/interval/test_interval_new.py b/pandas/tests/indexing/interval/test_interval_new.py index 2e3c765b2b372..602f45d637afb 100644 --- a/pandas/tests/indexing/interval/test_interval_new.py +++ b/pandas/tests/indexing/interval/test_interval_new.py @@ -3,7 +3,10 @@ import numpy as np import pytest +from pandas.compat import IS64 + from pandas import ( + Index, Interval, IntervalIndex, Series, @@ -217,3 +220,24 @@ def test_loc_getitem_missing_key_error_message( obj = frame_or_series(ser) with pytest.raises(KeyError, match=r"\[6\]"): obj.loc[[4, 5, 6]] + + +@pytest.mark.xfail(not IS64, reason="GH 23440") +@pytest.mark.parametrize( + "intervals", + [ + ([Interval(-np.inf, 0.0), Interval(0.0, 1.0)]), + ([Interval(-np.inf, -2.0), Interval(-2.0, -1.0)]), + ([Interval(-1.0, 0.0), Interval(0.0, np.inf)]), + ([Interval(1.0, 2.0), Interval(2.0, np.inf)]), + ], +) +def test_repeating_interval_index_with_infs(intervals): + # GH 46658 + + interval_index = Index(intervals * 51) + + expected = np.arange(1, 102, 2, dtype=np.intp) + result = interval_index.get_indexer_for([intervals[1]]) + + tm.assert_equal(result, expected)
When the pivot of an IntervalTree becomes ±inf, construction of the IntervalTree falls into infinite recursion. This patch fixes that by catching those cases and setting the pivot to a reasonable value. - [x] closes #46658 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added an entry in the latest `doc/source/whatsnew/v1.5.0.rst` file if fixing a bug or adding a new feature.
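The fix can be exercised through the public `IntervalIndex` API, since the `IntervalTree` is built lazily on the first lookup. A minimal sketch, mirroring the PR's own test data (the 2×51 repetition is needed so the tree actually splits instead of staying one leaf node):

```python
import numpy as np
import pandas as pd

# Before this fix, building the interval tree for an index whose intervals
# reach to -inf could recurse forever, because the median-based pivot
# itself became -inf.
idx = pd.IntervalIndex.from_tuples([(-np.inf, 0.0), (0.0, 1.0)] * 51)

# This lookup triggers the (lazy) IntervalTree construction; with the fix
# it terminates and returns every matching position.
result = idx.get_indexer_for([pd.Interval(0.0, 1.0)])
print(len(result))  # 51 matches, one per repetition
```

The PR's test asserts the same thing: the matching interval sits at every odd position, so the indexer is `np.arange(1, 102, 2)`.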
https://api.github.com/repos/pandas-dev/pandas/pulls/46664
2022-04-06T11:58:49Z
2022-06-06T17:03:40Z
2022-06-06T17:03:39Z
2022-06-06T17:03:47Z
CI/DOC: pin pydata-sphinx-theme to 0.8.0
diff --git a/environment.yml b/environment.yml index c6ac88a622ad8..dc3cba3be2132 100644 --- a/environment.yml +++ b/environment.yml @@ -34,7 +34,7 @@ dependencies: - gitdb - numpydoc - pandas-dev-flaker=0.5.0 - - pydata-sphinx-theme + - pydata-sphinx-theme=0.8.0 - pytest-cython - sphinx - sphinx-panels diff --git a/requirements-dev.txt b/requirements-dev.txt index 4942ba509ded9..a3f71ac2a3aa5 100644 --- a/requirements-dev.txt +++ b/requirements-dev.txt @@ -20,7 +20,7 @@ gitpython gitdb numpydoc pandas-dev-flaker==0.5.0 -pydata-sphinx-theme +pydata-sphinx-theme==0.8.0 pytest-cython sphinx sphinx-panels
Temporary fix for https://github.com/pandas-dev/pandas/issues/46615
https://api.github.com/repos/pandas-dev/pandas/pulls/46663
2022-04-06T11:52:12Z
2022-04-06T13:14:18Z
2022-04-06T13:14:18Z
2022-04-06T13:37:32Z
TST: added test for joining dataframes with MultiIndex containing dates
diff --git a/pandas/tests/frame/methods/test_join.py b/pandas/tests/frame/methods/test_join.py index c6bfd94b84908..8ad0fdf344edf 100644 --- a/pandas/tests/frame/methods/test_join.py +++ b/pandas/tests/frame/methods/test_join.py @@ -323,6 +323,26 @@ def test_join_multiindex_leftright(self): tm.assert_frame_equal(df1.join(df2, how="right"), exp) tm.assert_frame_equal(df2.join(df1, how="left"), exp[["value2", "value1"]]) + def test_join_multiindex_dates(self): + # GH 33692 + date = pd.Timestamp(2000, 1, 1).date() + + df1_index = MultiIndex.from_tuples([(0, date)], names=["index_0", "date"]) + df1 = DataFrame({"col1": [0]}, index=df1_index) + df2_index = MultiIndex.from_tuples([(0, date)], names=["index_0", "date"]) + df2 = DataFrame({"col2": [0]}, index=df2_index) + df3_index = MultiIndex.from_tuples([(0, date)], names=["index_0", "date"]) + df3 = DataFrame({"col3": [0]}, index=df3_index) + + result = df1.join([df2, df3]) + + expected_index = MultiIndex.from_tuples([(0, date)], names=["index_0", "date"]) + expected = DataFrame( + {"col1": [0], "col2": [0], "col3": [0]}, index=expected_index + ) + + tm.assert_equal(result, expected) + def test_merge_join_different_levels(self): # GH#9455
- [X] closes #33692 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
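The behavior the new test locks in can be sketched with the public API — joining a list of frames whose shared `MultiIndex` carries `datetime.date` values (GH 33692):

```python
import pandas as pd

# Two levels: an integer and a datetime.date; joining a *list* of frames
# on this index used to fail.
date = pd.Timestamp(2000, 1, 1).date()
idx = pd.MultiIndex.from_tuples([(0, date)], names=["index_0", "date"])

df1 = pd.DataFrame({"col1": [0]}, index=idx)
df2 = pd.DataFrame({"col2": [0]}, index=idx)
df3 = pd.DataFrame({"col3": [0]}, index=idx)

result = df1.join([df2, df3])
print(list(result.columns))  # ['col1', 'col2', 'col3']
```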
https://api.github.com/repos/pandas-dev/pandas/pulls/46660
2022-04-06T10:40:43Z
2022-04-07T19:25:27Z
2022-04-07T19:25:26Z
2022-04-07T19:40:39Z
BUG: df.nsmallest gets wrong results when NaN in the sorting column
diff --git a/doc/source/whatsnew/v1.4.3.rst b/doc/source/whatsnew/v1.4.3.rst index 8572c136c28a9..0c326e15d90ed 100644 --- a/doc/source/whatsnew/v1.4.3.rst +++ b/doc/source/whatsnew/v1.4.3.rst @@ -14,6 +14,7 @@ including other versions of pandas. Fixed regressions ~~~~~~~~~~~~~~~~~ +- Fixed regression in :meth:`DataFrame.nsmallest` led to wrong results when ``np.nan`` in the sorting column (:issue:`46589`) - Fixed regression in :func:`read_fwf` raising ``ValueError`` when ``widths`` was specified with ``usecols`` (:issue:`46580`) - diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py index 6c1dfc4c0da72..0c0b93f41c657 100644 --- a/pandas/core/algorithms.py +++ b/pandas/core/algorithms.py @@ -1181,7 +1181,6 @@ def compute(self, method: str) -> Series: arr = arr[::-1] nbase = n - findex = len(self.obj) narr = len(arr) n = min(n, narr) @@ -1194,6 +1193,11 @@ def compute(self, method: str) -> Series: if self.keep != "all": inds = inds[:n] findex = nbase + else: + if len(inds) < nbase and len(nan_index) + len(inds) >= nbase: + findex = len(nan_index) + len(inds) + else: + findex = len(inds) if self.keep == "last": # reverse indices diff --git a/pandas/tests/frame/methods/test_nlargest.py b/pandas/tests/frame/methods/test_nlargest.py index 1b2db80d782ce..a317dae562ae0 100644 --- a/pandas/tests/frame/methods/test_nlargest.py +++ b/pandas/tests/frame/methods/test_nlargest.py @@ -216,3 +216,24 @@ def test_nlargest_nan(self): result = df.nlargest(5, 0) expected = df.sort_values(0, ascending=False).head(5) tm.assert_frame_equal(result, expected) + + def test_nsmallest_nan_after_n_element(self): + # GH#46589 + df = pd.DataFrame( + { + "a": [1, 2, 3, 4, 5, None, 7], + "b": [7, 6, 5, 4, 3, 2, 1], + "c": [1, 1, 2, 2, 3, 3, 3], + }, + index=range(7), + ) + result = df.nsmallest(5, columns=["a", "b"]) + expected = pd.DataFrame( + { + "a": [1, 2, 3, 4, 5], + "b": [7, 6, 5, 4, 3], + "c": [1, 1, 2, 2, 3], + }, + index=range(5), + ).astype({"a": "float"}) + 
tm.assert_frame_equal(result, expected) diff --git a/pandas/tests/series/methods/test_nlargest.py b/pandas/tests/series/methods/test_nlargest.py index ee96ab08ad66c..4f07257038bc9 100644 --- a/pandas/tests/series/methods/test_nlargest.py +++ b/pandas/tests/series/methods/test_nlargest.py @@ -231,3 +231,15 @@ def test_nlargest_nullable(self, any_numeric_ea_dtype): .astype(dtype) ) tm.assert_series_equal(result, expected) + + def test_nsmallest_nan_when_keep_is_all(self): + # GH#46589 + s = Series([1, 2, 3, 3, 3, None]) + result = s.nsmallest(3, keep="all") + expected = Series([1.0, 2.0, 3.0, 3.0, 3.0]) + tm.assert_series_equal(result, expected) + + s = Series([1, 2, None, None, None]) + result = s.nsmallest(3, keep="all") + expected = Series([1, 2, None, None, None]) + tm.assert_series_equal(result, expected)
- [x] closes #46589 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
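A reproduction of the regression, taken from the frame used in the PR's test: a NaN sitting just past the first `n` sorted values made the tie-breaking on later columns use a wrong frame length. With the fix, the NaN row is simply excluded:

```python
import pandas as pd

# GH 46589: the None in column "a" appears *after* the 5 smallest values,
# yet it previously corrupted the nsmallest result.
df = pd.DataFrame(
    {
        "a": [1, 2, 3, 4, 5, None, 7],
        "b": [7, 6, 5, 4, 3, 2, 1],
        "c": [1, 1, 2, 2, 3, 3, 3],
    }
)
result = df.nsmallest(5, columns=["a", "b"])
print(result["a"].tolist())  # [1.0, 2.0, 3.0, 4.0, 5.0]
```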
https://api.github.com/repos/pandas-dev/pandas/pulls/46656
2022-04-06T06:24:11Z
2022-04-10T16:59:38Z
2022-04-10T16:59:37Z
2022-04-12T10:34:42Z
TST: add validation checks on levels keyword from pd.concat
diff --git a/doc/source/whatsnew/v1.5.0.rst b/doc/source/whatsnew/v1.5.0.rst index 73dc832e2007b..de4d70473f91e 100644 --- a/doc/source/whatsnew/v1.5.0.rst +++ b/doc/source/whatsnew/v1.5.0.rst @@ -92,6 +92,9 @@ Other enhancements - :class:`Series` and :class:`DataFrame` with ``IntegerDtype`` now supports bitwise operations (:issue:`34463`) - Add ``milliseconds`` field support for :class:`~pandas.DateOffset` (:issue:`43371`) - :meth:`DataFrame.reset_index` now accepts a ``names`` argument which renames the index names (:issue:`6878`) +- :meth:`pd.concat` now raises when ``levels`` is given but ``keys`` is None (:issue:`46653`) +- :meth:`pd.concat` now raises when ``levels`` contains duplicate values (:issue:`46653`) +- .. --------------------------------------------------------------------------- .. _whatsnew_150.notable_bug_fixes: diff --git a/pandas/core/reshape/concat.py b/pandas/core/reshape/concat.py index f2227a3e2ac83..054fbb85cead7 100644 --- a/pandas/core/reshape/concat.py +++ b/pandas/core/reshape/concat.py @@ -668,6 +668,8 @@ def _get_concat_axis(self) -> Index: return idx if self.keys is None: + if self.levels is not None: + raise ValueError("levels supported only when keys is not None") concat_axis = _concat_indexes(indexes) else: concat_axis = _make_concat_multiindex( @@ -712,6 +714,10 @@ def _make_concat_multiindex(indexes, keys, levels=None, names=None) -> MultiInde else: levels = [ensure_index(x) for x in levels] + for level in levels: + if not level.is_unique: + raise ValueError(f"Level values not unique: {level.tolist()}") + if not all_indexes_same(indexes) or not all(level.is_unique for level in levels): codes_list = [] diff --git a/pandas/tests/reshape/concat/test_index.py b/pandas/tests/reshape/concat/test_index.py index 50fee28669c58..b20e4bcc2256b 100644 --- a/pandas/tests/reshape/concat/test_index.py +++ b/pandas/tests/reshape/concat/test_index.py @@ -371,3 +371,19 @@ def test_concat_with_key_not_unique(self): out_b = df_b.loc[("x", 0), :] 
tm.assert_frame_equal(out_a, out_b) + + def test_concat_with_duplicated_levels(self): + # keyword levels should be unique + df1 = DataFrame({"A": [1]}, index=["x"]) + df2 = DataFrame({"A": [1]}, index=["y"]) + msg = r"Level values not unique: \['x', 'y', 'y'\]" + with pytest.raises(ValueError, match=msg): + concat([df1, df2], keys=["x", "y"], levels=[["x", "y", "y"]]) + + @pytest.mark.parametrize("levels", [[["x", "y"]], [["x", "y", "y"]]]) + def test_concat_with_levels_with_none_keys(self, levels): + df1 = DataFrame({"A": [1]}, index=["x"]) + df2 = DataFrame({"A": [1]}, index=["y"]) + msg = "levels supported only when keys is not None" + with pytest.raises(ValueError, match=msg): + concat([df1, df2], levels=levels)
- [x] closes #46653 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
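A sketch of the two new validation paths (the exact error wording is taken from the diff above): `levels` without `keys`, and duplicate values inside a level, now both raise instead of silently producing a broken `MultiIndex`:

```python
import pandas as pd

df1 = pd.DataFrame({"A": [1]}, index=["x"])
df2 = pd.DataFrame({"A": [1]}, index=["y"])

# Case 1: levels given but keys is None.
msg1 = ""
try:
    pd.concat([df1, df2], levels=[["x", "y"]])
except ValueError as err:
    msg1 = str(err)

# Case 2: a level containing duplicate values.
msg2 = ""
try:
    pd.concat([df1, df2], keys=["x", "y"], levels=[["x", "y", "y"]])
except ValueError as err:
    msg2 = str(err)

print(msg1)  # levels supported only when keys is not None
print(msg2)  # Level values not unique: ['x', 'y', 'y']
```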
https://api.github.com/repos/pandas-dev/pandas/pulls/46654
2022-04-06T04:23:21Z
2022-04-07T00:15:26Z
2022-04-07T00:15:26Z
2022-04-07T00:20:57Z
DOC: fix SA04 errors flagged by validate_docstrings.py and add SA04 t…
diff --git a/ci/code_checks.sh b/ci/code_checks.sh index ec8545ad1ee4a..c6d9698882f4d 100755 --- a/ci/code_checks.sh +++ b/ci/code_checks.sh @@ -78,8 +78,8 @@ fi ### DOCSTRINGS ### if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then - MSG='Validate docstrings (EX04, GL01, GL02, GL03, GL04, GL05, GL06, GL07, GL09, GL10, PR03, PR04, PR05, PR06, PR08, PR09, PR10, RT01, RT04, RT05, SA02, SA03, SS01, SS02, SS03, SS04, SS05)' ; echo $MSG - $BASE_DIR/scripts/validate_docstrings.py --format=actions --errors=EX04,GL01,GL02,GL03,GL04,GL05,GL06,GL07,GL09,GL10,PR03,PR04,PR05,PR06,PR08,PR09,PR10,RT01,RT04,RT05,SA02,SA03,SS01,SS02,SS03,SS04,SS05 + MSG='Validate docstrings (EX04, GL01, GL02, GL03, GL04, GL05, GL06, GL07, GL09, GL10, PR03, PR04, PR05, PR06, PR08, PR09, PR10, RT01, RT04, RT05, SA02, SA03, SA04, SS01, SS02, SS03, SS04, SS05)' ; echo $MSG + $BASE_DIR/scripts/validate_docstrings.py --format=actions --errors=EX04,GL01,GL02,GL03,GL04,GL05,GL06,GL07,GL09,GL10,PR03,PR04,PR05,PR06,PR08,PR09,PR10,RT01,RT04,RT05,SA02,SA03,SA04,SS01,SS02,SS03,SS04,SS05 RET=$(($RET + $?)) ; echo $MSG "DONE" fi diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py index 2ded3c4926f6b..80527474f2be6 100644 --- a/pandas/core/indexes/base.py +++ b/pandas/core/indexes/base.py @@ -4371,8 +4371,8 @@ def reindex( See Also -------- - Series.reindex - DataFrame.reindex + Series.reindex : Conform Series to new index with optional filling logic. + DataFrame.reindex : Conform DataFrame to new index with optional filling logic. Examples --------
- [x] closes #25337 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [NA] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. Many of the SA04 errors reported in #25337 are already fixed, but there were two outstanding SA04 errors reported by validate_docstrings.py on main. So, I fixed those and added 'SA04' to `ci/code_checks.sh` as per #25337.
https://api.github.com/repos/pandas-dev/pandas/pulls/46652
2022-04-06T03:35:51Z
2022-04-12T01:29:29Z
2022-04-12T01:29:28Z
2022-04-12T01:29:36Z
Backport PR #46647 on branch 1.4.x (CI/DOC: Unpin jinja2)
diff --git a/environment.yml b/environment.yml index 753e210e6066a..c23bb99c736dc 100644 --- a/environment.yml +++ b/environment.yml @@ -44,7 +44,7 @@ dependencies: - types-setuptools # documentation (jupyter notebooks) - - nbconvert>=5.4.1 + - nbconvert>=6.4.5 - nbsphinx - pandoc @@ -86,7 +86,7 @@ dependencies: - bottleneck>=1.3.1 - ipykernel - ipython>=7.11.1 - - jinja2<=3.0.3 # pandas.Styler + - jinja2 # pandas.Styler - matplotlib>=3.3.2 # pandas.plotting, Series.plot, DataFrame.plot - numexpr>=2.7.1 - scipy>=1.4.1 diff --git a/requirements-dev.txt b/requirements-dev.txt index c4f6bb30c59ec..6caa9a7512faf 100644 --- a/requirements-dev.txt +++ b/requirements-dev.txt @@ -28,7 +28,7 @@ types-python-dateutil types-PyMySQL types-pytz types-setuptools -nbconvert>=5.4.1 +nbconvert>=6.4.5 nbsphinx pandoc dask @@ -58,7 +58,7 @@ blosc bottleneck>=1.3.1 ipykernel ipython>=7.11.1 -jinja2<=3.0.3 +jinja2 matplotlib>=3.3.2 numexpr>=2.7.1 scipy>=1.4.1
Backport PR #46647: CI/DOC: Unpin jinja2
https://api.github.com/repos/pandas-dev/pandas/pulls/46649
2022-04-05T23:36:58Z
2022-04-06T00:48:53Z
2022-04-06T00:48:53Z
2022-04-06T00:48:53Z
BUG/TST/DOC: added finalize to melt, GH28283
diff --git a/doc/source/whatsnew/v1.5.0.rst b/doc/source/whatsnew/v1.5.0.rst index 73dc832e2007b..4ef7e74e4f722 100644 --- a/doc/source/whatsnew/v1.5.0.rst +++ b/doc/source/whatsnew/v1.5.0.rst @@ -620,6 +620,7 @@ Styler Metadata ^^^^^^^^ +- Fixed metadata propagation in :meth:`DataFrame.melt` (:issue:`28283`) - Fixed metadata propagation in :meth:`DataFrame.explode` (:issue:`28283`) - diff --git a/pandas/core/frame.py b/pandas/core/frame.py index 683674d8ef826..6c754683c7a98 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -8717,7 +8717,7 @@ def melt( value_name=value_name, col_level=col_level, ignore_index=ignore_index, - ) + ).__finalize__(self, method="melt") # ---------------------------------------------------------------------- # Time series-related diff --git a/pandas/tests/generic/test_finalize.py b/pandas/tests/generic/test_finalize.py index 9e5e616148a32..999fee5c8281c 100644 --- a/pandas/tests/generic/test_finalize.py +++ b/pandas/tests/generic/test_finalize.py @@ -156,13 +156,10 @@ (pd.DataFrame, frame_data, operator.methodcaller("stack")), (pd.DataFrame, frame_data, operator.methodcaller("explode", "A")), (pd.DataFrame, frame_mi_data, operator.methodcaller("unstack")), - pytest.param( - ( - pd.DataFrame, - ({"A": ["a", "b", "c"], "B": [1, 3, 5], "C": [2, 4, 6]},), - operator.methodcaller("melt", id_vars=["A"], value_vars=["B"]), - ), - marks=not_implemented_mark, + ( + pd.DataFrame, + ({"A": ["a", "b", "c"], "B": [1, 3, 5], "C": [2, 4, 6]},), + operator.methodcaller("melt", id_vars=["A"], value_vars=["B"]), ), pytest.param( (pd.DataFrame, frame_data, operator.methodcaller("applymap", lambda x: x))
Progress towards #28283 This PR gives the `melt` method the ability to propagate metadata using `__finalize__`. - [ ] xref #28283 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added an entry in the latest `doc/source/whatsnew/v1.5.0.rst` file Contributors: - [Edward Huang](https://github.com/edwhuang23) - [Solomon Song](https://github.com/songsol1)
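Since `__finalize__` is what copies frame-level metadata such as `attrs` onto the result, the effect of this PR can be sketched with a small attrs round-trip (the `"source"` key is just an illustrative stand-in for user metadata):

```python
import pandas as pd

df = pd.DataFrame({"A": ["a", "b", "c"], "B": [1, 3, 5], "C": [2, 4, 6]})
df.attrs = {"source": "sensor-1"}

# With melt wired through __finalize__, the metadata survives the reshape.
melted = df.melt(id_vars=["A"], value_vars=["B"])
print(melted.attrs)  # {'source': 'sensor-1'}
```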
https://api.github.com/repos/pandas-dev/pandas/pulls/46648
2022-04-05T20:55:52Z
2022-04-08T00:21:24Z
2022-04-08T00:21:24Z
2022-04-08T00:21:40Z
CI/DOC: Unpin jinja2
diff --git a/environment.yml b/environment.yml index e991f29b8a727..c6ac88a622ad8 100644 --- a/environment.yml +++ b/environment.yml @@ -44,7 +44,7 @@ dependencies: - types-setuptools # documentation (jupyter notebooks) - - nbconvert>=5.4.1 + - nbconvert>=6.4.5 - nbsphinx - pandoc @@ -86,7 +86,7 @@ dependencies: - bottleneck>=1.3.1 - ipykernel - ipython>=7.11.1 - - jinja2<=3.0.3 # pandas.Styler + - jinja2 # pandas.Styler - matplotlib>=3.3.2 # pandas.plotting, Series.plot, DataFrame.plot - numexpr>=2.7.1 - scipy>=1.4.1 diff --git a/requirements-dev.txt b/requirements-dev.txt index 22692da8f0ed4..4942ba509ded9 100644 --- a/requirements-dev.txt +++ b/requirements-dev.txt @@ -28,7 +28,7 @@ types-python-dateutil types-PyMySQL types-pytz types-setuptools -nbconvert>=5.4.1 +nbconvert>=6.4.5 nbsphinx pandoc dask @@ -58,7 +58,7 @@ blosc bottleneck>=1.3.1 ipykernel ipython>=7.11.1 -jinja2<=3.0.3 +jinja2 matplotlib>=3.3.2 numexpr>=2.7.1 scipy>=1.4.1
- [x] closes #46514 (Replace xxxx with the Github issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. Should be closed by https://github.com/jupyter/nbconvert/pull/1737; we'll see if CI agrees. Also specifies `nbconvert>=6.4.5`; this is the most recent version of nbconvert but because this is only for dev I think it's okay.
https://api.github.com/repos/pandas-dev/pandas/pulls/46647
2022-04-05T20:32:44Z
2022-04-05T23:36:23Z
2022-04-05T23:36:22Z
2022-04-07T02:43:26Z
BUG: DataFrame.dropna must not allow setting both how= and thresh= parameters
diff --git a/doc/source/whatsnew/v1.5.0.rst b/doc/source/whatsnew/v1.5.0.rst index 595845e107cf8..216a9846ac241 100644 --- a/doc/source/whatsnew/v1.5.0.rst +++ b/doc/source/whatsnew/v1.5.0.rst @@ -570,7 +570,7 @@ Missing - Bug in :meth:`Series.fillna` and :meth:`DataFrame.fillna` with ``downcast`` keyword not being respected in some cases where there are no NA values present (:issue:`45423`) - Bug in :meth:`Series.fillna` and :meth:`DataFrame.fillna` with :class:`IntervalDtype` and incompatible value raising instead of casting to a common (usually object) dtype (:issue:`45796`) - Bug in :meth:`DataFrame.interpolate` with object-dtype column not returning a copy with ``inplace=False`` (:issue:`45791`) -- +- Bug in :meth:`DataFrame.dropna` allows to set both ``how`` and ``thresh`` incompatible arguments (:issue:`46575`) MultiIndex ^^^^^^^^^^ diff --git a/pandas/core/frame.py b/pandas/core/frame.py index a79e23058ef98..7270d73e29741 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -42,7 +42,10 @@ properties, ) from pandas._libs.hashtable import duplicated -from pandas._libs.lib import no_default +from pandas._libs.lib import ( + NoDefault, + no_default, +) from pandas._typing import ( AggFuncType, AnyArrayLike, @@ -6110,8 +6113,8 @@ def notnull(self) -> DataFrame: def dropna( self, axis: Axis = 0, - how: str = "any", - thresh=None, + how: str | NoDefault = no_default, + thresh: int | NoDefault = no_default, subset: IndexLabel = None, inplace: bool = False, ): @@ -6143,7 +6146,7 @@ def dropna( * 'all' : If all values are NA, drop that row or column. thresh : int, optional - Require that many non-NA values. + Require that many non-NA values. Cannot be combined with how. subset : column label or sequence of labels, optional Labels along other axis to consider, e.g. if you are dropping rows these would be a list of columns to include. 
@@ -6218,6 +6221,14 @@ def dropna( name toy born 1 Batman Batmobile 1940-04-25 """ + if (how is not no_default) and (thresh is not no_default): + raise TypeError( + "You cannot set both the how and thresh arguments at the same time." + ) + + if how is no_default: + how = "any" + inplace = validate_bool_kwarg(inplace, "inplace") if isinstance(axis, (tuple, list)): # GH20987 @@ -6238,7 +6249,7 @@ def dropna( raise KeyError(np.array(subset)[check].tolist()) agg_obj = self.take(indices, axis=agg_axis) - if thresh is not None: + if thresh is not no_default: count = agg_obj.count(axis=agg_axis) mask = count >= thresh elif how == "any": @@ -6248,10 +6259,8 @@ def dropna( # faster equivalent to 'agg_obj.count(agg_axis) > 0' mask = notna(agg_obj).any(axis=agg_axis, bool_only=False) else: - if how is not None: + if how is not no_default: raise ValueError(f"invalid how option: {how}") - else: - raise TypeError("must specify how or thresh") if np.all(mask): result = self.copy() diff --git a/pandas/tests/frame/methods/test_dropna.py b/pandas/tests/frame/methods/test_dropna.py index d0b9eebb31b93..43cecc6a1aed5 100644 --- a/pandas/tests/frame/methods/test_dropna.py +++ b/pandas/tests/frame/methods/test_dropna.py @@ -158,9 +158,6 @@ def test_dropna_corner(self, float_frame): msg = "invalid how option: foo" with pytest.raises(ValueError, match=msg): float_frame.dropna(how="foo") - msg = "must specify how or thresh" - with pytest.raises(TypeError, match=msg): - float_frame.dropna(how=None) # non-existent column - 8303 with pytest.raises(KeyError, match=r"^\['X'\]$"): float_frame.dropna(subset=["A", "X"]) @@ -274,3 +271,16 @@ def test_no_nans_in_frame(self, axis): expected = df.copy() result = df.dropna(axis=axis) tm.assert_frame_equal(result, expected, check_index_type=True) + + def test_how_thresh_param_incompatible(self): + # GH46575 + df = DataFrame([1, 2, pd.NA]) + msg = "You cannot set both the how and thresh arguments at the same time" + with pytest.raises(TypeError, 
match=msg): + df.dropna(how="all", thresh=2) + + with pytest.raises(TypeError, match=msg): + df.dropna(how="any", thresh=2) + + with pytest.raises(TypeError, match=msg): + df.dropna(how=None, thresh=None)
- [x] closes #46575 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
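The user-facing effect, sketched with the frame from the PR's test: combining `how` and `thresh` is now rejected up front with a `TypeError`, instead of `thresh` silently taking precedence over `how`:

```python
import pandas as pd

df = pd.DataFrame([1.0, 2.0, None])

raised = False
try:
    # Ambiguous request: drop by "how" or by "thresh"?
    df.dropna(how="any", thresh=2)
except TypeError as err:
    raised = True
    print(err)  # cannot set both how and thresh

print(raised)  # True
```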
https://api.github.com/repos/pandas-dev/pandas/pulls/46644
2022-04-05T15:45:25Z
2022-04-27T22:23:19Z
2022-04-27T22:23:19Z
2022-04-28T08:44:14Z
DOC: .bfill() (#46631)
diff --git a/doc/source/user_guide/cookbook.rst b/doc/source/user_guide/cookbook.rst index f88f4a9708c45..d0b2119f9d315 100644 --- a/doc/source/user_guide/cookbook.rst +++ b/doc/source/user_guide/cookbook.rst @@ -423,7 +423,7 @@ Fill forward a reversed timeseries ) df.loc[df.index[3], "A"] = np.nan df - df.reindex(df.index[::-1]).ffill() + df.bfill() `cumsum reset at NaN values <https://stackoverflow.com/questions/18196811/cumsum-reset-at-nan>`__
- [x] closes #46631 (Replace xxxx with the Github issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
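A quick check of the equivalence the doc change relies on (the frame below is shaped like the cookbook example, but its exact values are an assumption): filling forward along the reversed index is the same as filling backward along the original one, so the recipe reduces to a plain `bfill()`:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    np.arange(6.0),
    index=pd.date_range("2013-08-01", periods=6, freq="B"),
    columns=["A"],
)
df.loc[df.index[3], "A"] = np.nan

# Old cookbook recipe: reverse, ffill, restore the original order.
old = df.reindex(df.index[::-1]).ffill().reindex(df.index)
# New recipe: a single backfill.
new = df.bfill()

print(old.equals(new))  # True
```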
https://api.github.com/repos/pandas-dev/pandas/pulls/46642
2022-04-05T04:23:38Z
2022-04-05T23:35:32Z
2022-04-05T23:35:31Z
2022-04-05T23:35:36Z
TYP: tighten return type in function any
diff --git a/pandas/core/frame.py b/pandas/core/frame.py index ef5e6dd1d6757..bc190257a1cdd 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -8988,6 +8988,42 @@ def aggregate(self, func=None, axis: Axis = 0, *args, **kwargs): agg = aggregate + # error: Signature of "any" incompatible with supertype "NDFrame" [override] + @overload # type: ignore[override] + def any( + self, + *, + axis: Axis = ..., + bool_only: bool | None = ..., + skipna: bool = ..., + level: None = ..., + **kwargs, + ) -> Series: + ... + + @overload + def any( + self, + *, + axis: Axis = ..., + bool_only: bool | None = ..., + skipna: bool = ..., + level: Level, + **kwargs, + ) -> DataFrame | Series: + ... + + @doc(NDFrame.any, **_shared_doc_kwargs) + def any( + self, + axis: Axis = 0, + bool_only: bool | None = None, + skipna: bool = True, + level: Level | None = None, + **kwargs, + ) -> DataFrame | Series: + ... + @doc( _shared_docs["transform"], klass=_shared_doc_kwargs["klass"], diff --git a/pandas/core/series.py b/pandas/core/series.py index 1d3509cac0edd..b740bac78b263 100644 --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -4397,6 +4397,42 @@ def aggregate(self, func=None, axis=0, *args, **kwargs): agg = aggregate + # error: Signature of "any" incompatible with supertype "NDFrame" [override] + @overload # type: ignore[override] + def any( + self, + *, + axis: Axis = ..., + bool_only: bool | None = ..., + skipna: bool = ..., + level: None = ..., + **kwargs, + ) -> bool: + ... + + @overload + def any( + self, + *, + axis: Axis = ..., + bool_only: bool | None = ..., + skipna: bool = ..., + level: Level, + **kwargs, + ) -> Series | bool: + ... + + @doc(NDFrame.any, **_shared_doc_kwargs) + def any( + self, + axis: Axis = 0, + bool_only: bool | None = None, + skipna: bool = True, + level: Level | None = None, + **kwargs, + ) -> Series | bool: + ... + @doc( _shared_docs["transform"], klass=_shared_doc_kwargs["klass"],
- [x] closes #44802 - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. Continuing the remaining work after pull request #44896. For `DataFrame.any` the return type should be DataFrame or Series, and for `Series.any` the return type should be Series or bool.
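The overloads encode what already happens at runtime in the common (non-level) case — `DataFrame.any` reduces to a `Series` indexed by column, and `Series.any` reduces to a scalar boolean:

```python
import pandas as pd

df = pd.DataFrame({"a": [0, 1], "b": [0, 0]})

col_any = df.any()       # Series: which columns contain any truthy value
scalar = df["a"].any()   # scalar bool for a single column

print(type(col_any).__name__, bool(scalar))  # Series True
```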
https://api.github.com/repos/pandas-dev/pandas/pulls/46638
2022-04-04T19:46:16Z
2022-05-07T12:40:15Z
2022-05-07T12:40:15Z
2022-05-07T17:43:38Z
REGR: Replace changes the dtype of other columns
diff --git a/doc/source/whatsnew/v1.4.3.rst b/doc/source/whatsnew/v1.4.3.rst index 0c326e15d90ed..7d9b5b83185ea 100644 --- a/doc/source/whatsnew/v1.4.3.rst +++ b/doc/source/whatsnew/v1.4.3.rst @@ -14,6 +14,7 @@ including other versions of pandas. Fixed regressions ~~~~~~~~~~~~~~~~~ +- Fixed regression in :meth:`DataFrame.replace` when the replacement value was explicitly ``None`` when passed in a dictionary to ``to_replace`` also casting other columns to object dtype even when there were no values to replace (:issue:`46634`) - Fixed regression in :meth:`DataFrame.nsmallest` led to wrong results when ``np.nan`` in the sorting column (:issue:`46589`) - Fixed regression in :func:`read_fwf` raising ``ValueError`` when ``widths`` was specified with ``usecols`` (:issue:`46580`) - diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py index c8430a9266ea5..71e2a1e36cbbf 100644 --- a/pandas/core/internals/blocks.py +++ b/pandas/core/internals/blocks.py @@ -781,12 +781,14 @@ def _replace_coerce( ) else: if value is None: - # gh-45601, gh-45836 - nb = self.astype(np.dtype(object), copy=False) - if nb is self and not inplace: - nb = nb.copy() - putmask_inplace(nb.values, mask, value) - return [nb] + # gh-45601, gh-45836, gh-46634 + if mask.any(): + nb = self.astype(np.dtype(object), copy=False) + if nb is self and not inplace: + nb = nb.copy() + putmask_inplace(nb.values, mask, value) + return [nb] + return [self] if inplace else [self.copy()] return self.replace( to_replace=to_replace, value=value, inplace=inplace, mask=mask ) diff --git a/pandas/tests/frame/methods/test_replace.py b/pandas/tests/frame/methods/test_replace.py index 870a64cfa59c9..f7504e9173bf5 100644 --- a/pandas/tests/frame/methods/test_replace.py +++ b/pandas/tests/frame/methods/test_replace.py @@ -675,6 +675,25 @@ def test_replace_NAT_with_None(self): expected = DataFrame([None, None]) tm.assert_frame_equal(result, expected) + def test_replace_with_None_keeps_categorical(self): + # 
gh-46634 + cat_series = Series(["b", "b", "b", "d"], dtype="category") + df = DataFrame( + { + "id": Series([5, 4, 3, 2], dtype="float64"), + "col": cat_series, + } + ) + result = df.replace({3: None}) + + expected = DataFrame( + { + "id": Series([5.0, 4.0, None, 2.0], dtype="object"), + "col": cat_series, + } + ) + tm.assert_frame_equal(result, expected) + def test_replace_value_is_none(self, datetime_frame): orig_value = datetime_frame.iloc[0, 0] orig2 = datetime_frame.iloc[1, 0]
- [ ] closes #46634 - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
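A minimal reproduction of the regression, assuming a pandas version that includes this fix (1.4.3 or later): replacing a value with `None` must not touch columns where the value never occurs.

```python
import pandas as pd

cat = pd.Series(["b", "b", "b", "d"], dtype="category")
df = pd.DataFrame(
    {"id": pd.Series([5, 4, 3, 2], dtype="float64"), "col": cat}
)

result = df.replace({3: None})

# "col" contains no 3, so its categorical dtype survives untouched;
# before this fix, mask-less columns were still cast to object.
assert str(result["col"].dtype) == "category"
```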
https://api.github.com/repos/pandas-dev/pandas/pulls/46636
2022-04-04T19:21:07Z
2022-05-25T22:40:01Z
2022-05-25T22:40:00Z
2022-05-26T09:35:02Z
CI: Remove grep from asv call (using strict parameter instead)
diff --git a/.github/workflows/code-checks.yml b/.github/workflows/code-checks.yml index aac6f4387a74a..a5fd72e343c16 100644 --- a/.github/workflows/code-checks.yml +++ b/.github/workflows/code-checks.yml @@ -141,11 +141,7 @@ jobs: run: | cd asv_bench asv machine --yes - # TODO add `--durations` when we start using asv >= 0.5 (#46598) - asv run --quick --dry-run --python=same | sed "/failed$/ s/^/##[error]/" | tee benchmarks.log - if grep "failed" benchmarks.log > /dev/null ; then - exit 1 - fi + asv run --quick --dry-run --strict --durations=30 --python=same build_docker_dev_environment: name: Build Docker Dev Environment
Looks like asv didn't fail the CI, because it requires a `--strict` parameter to do so. Adding it here. And will change this behavior in asv: https://github.com/airspeed-velocity/asv/issues/1199 Adding a failing benchmark to make sure this works as expected. Will remove it if the CI is red because of it.
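For context on why the removed step needed the `grep`/`exit 1` dance at all: a shell pipeline reports only the exit status of its *last* command, so a failing `asv run` was swallowed by the `sed ... | tee ...` stages after it. A minimal demonstration:

```shell
# `false` stands in for a failing `asv run`; piping it through another
# command discards its non-zero exit status.
false | cat
echo "pipeline exit status: $?"   # prints 0 -- the failure is swallowed
```

`asv run --strict` makes asv itself exit non-zero on failed benchmarks, so neither the log grep nor bash's `set -o pipefail` workaround is needed.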
https://api.github.com/repos/pandas-dev/pandas/pulls/46633
2022-04-04T13:34:05Z
2022-04-05T10:22:32Z
2022-04-05T10:22:32Z
2022-04-10T11:18:37Z
BUG: Join ValueError for DataFrame list
diff --git a/doc/source/whatsnew/v1.5.0.rst b/doc/source/whatsnew/v1.5.0.rst index 73dc832e2007b..ebbdb99bf81d2 100644 --- a/doc/source/whatsnew/v1.5.0.rst +++ b/doc/source/whatsnew/v1.5.0.rst @@ -601,6 +601,7 @@ Reshaping - Bug in :meth:`DataFrame.align` when aligning a :class:`MultiIndex` to a :class:`Series` with another :class:`MultiIndex` (:issue:`46001`) - Bug in concanenation with ``IntegerDtype``, or ``FloatingDtype`` arrays where the resulting dtype did not mirror the behavior of the non-nullable dtypes (:issue:`46379`) - Bug in :func:`concat` with identical key leads to error when indexing :class:`MultiIndex` (:issue:`46519`) +- Bug in :meth:`DataFrame.join` with a list when using suffixes to join DataFrames with duplicate column names (:issue:`46396`) - Sparse diff --git a/pandas/core/frame.py b/pandas/core/frame.py index 683674d8ef826..1f083cbd3a71d 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -9576,6 +9576,11 @@ def _join_compat( "Joining multiple DataFrames only supported for joining on index" ) + if rsuffix or lsuffix: + raise ValueError( + "Suffixes not supported when joining multiple DataFrames" + ) + frames = [self] + list(other) can_concat = all(df.index.is_unique for df in frames) diff --git a/pandas/tests/frame/methods/test_join.py b/pandas/tests/frame/methods/test_join.py index c6bfd94b84908..36f3d04a7b6ac 100644 --- a/pandas/tests/frame/methods/test_join.py +++ b/pandas/tests/frame/methods/test_join.py @@ -82,6 +82,28 @@ def test_join(left, right, how, sort, expected): tm.assert_frame_equal(result, expected) +def test_suffix_on_list_join(): + first = DataFrame({"key": [1, 2, 3, 4, 5]}) + second = DataFrame({"key": [1, 8, 3, 2, 5], "v1": [1, 2, 3, 4, 5]}) + third = DataFrame({"keys": [5, 2, 3, 4, 1], "v2": [1, 2, 3, 4, 5]}) + + # check proper errors are raised + msg = "Suffixes not supported when joining multiple DataFrames" + with pytest.raises(ValueError, match=msg): + first.join([second], lsuffix="y") + with 
pytest.raises(ValueError, match=msg): + first.join([second, third], rsuffix="x") + with pytest.raises(ValueError, match=msg): + first.join([second, third], lsuffix="y", rsuffix="x") + with pytest.raises(ValueError, match="Indexes have overlapping values"): + first.join([second, third]) + + # no errors should be raised + arr_joined = first.join([third]) + norm_joined = first.join(third) + tm.assert_frame_equal(arr_joined, norm_joined) + + def test_join_index(float_frame): # left / right
The docstring for `DataFrame.join()` says that suffixes are not supported when joining a list of DataFrame objects: > Notes > ----- > Parameters `on`, `lsuffix`, and `rsuffix` are not supported when > passing a list of `DataFrame` objects. - [x] closes #46396 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added an entry in the latest `doc/source/whatsnew/v1.5.0.rst` file if fixing a bug or adding a new feature.
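A sketch of the new error path, assuming a pandas version that includes this check (1.5 or later):

```python
import pandas as pd

first = pd.DataFrame({"key": [1, 2, 3]})
second = pd.DataFrame({"key": [1, 8, 3], "v1": [10, 20, 30]})

# Passing a list of frames together with a suffix now fails loudly
# instead of silently ignoring the suffix.
try:
    first.join([second], lsuffix="_left")
except ValueError as err:
    assert "Suffixes not supported" in str(err)
else:
    raise AssertionError("expected ValueError for suffix + list join")
```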
https://api.github.com/repos/pandas-dev/pandas/pulls/46630
2022-04-03T21:17:45Z
2022-04-07T16:43:49Z
2022-04-07T16:43:48Z
2022-04-08T21:25:18Z
BUG: added finalize to explode, GH28283
diff --git a/doc/source/whatsnew/v1.5.0.rst b/doc/source/whatsnew/v1.5.0.rst index 56b1a6317472b..45782c1f8f89d 100644 --- a/doc/source/whatsnew/v1.5.0.rst +++ b/doc/source/whatsnew/v1.5.0.rst @@ -616,6 +616,11 @@ Styler - Bug when attempting to apply styling functions to an empty DataFrame subset (:issue:`45313`) - +Metadata +^^^^^^^^ +- Fixed metadata propagation in :meth:`DataFrame.explode` (:issue:`28283`) +- + Other ^^^^^ diff --git a/pandas/core/frame.py b/pandas/core/frame.py index 6d4deb79b4898..683674d8ef826 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -8629,7 +8629,7 @@ def explode( result.index = self.index.take(result.index) result = result.reindex(columns=self.columns, copy=False) - return result + return result.__finalize__(self, method="explode") def unstack(self, level: Level = -1, fill_value=None): """ diff --git a/pandas/tests/generic/test_finalize.py b/pandas/tests/generic/test_finalize.py index 9efc2bf53439a..9e5e616148a32 100644 --- a/pandas/tests/generic/test_finalize.py +++ b/pandas/tests/generic/test_finalize.py @@ -154,10 +154,7 @@ operator.methodcaller("pivot_table", columns="A", aggfunc=["mean", "sum"]), ), (pd.DataFrame, frame_data, operator.methodcaller("stack")), - pytest.param( - (pd.DataFrame, frame_data, operator.methodcaller("explode", "A")), - marks=not_implemented_mark, - ), + (pd.DataFrame, frame_data, operator.methodcaller("explode", "A")), (pd.DataFrame, frame_mi_data, operator.methodcaller("unstack")), pytest.param( (
**Progress towards #28283** This PR gives the `explode` method the ability to propagate metadata using `__finalize__`. - [ ] xref #28283 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) - [x] Added an entry in the latest `doc/source/whatsnew/v1.5.0.rst` file Contributors: - [Solomon Song](https://github.com/songsol1) - [Edward Huang](https://github.com/edwhuang23)
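A quick check of the propagation, assuming a pandas version that includes this change; the `"source"` key is a made-up piece of metadata for illustration:

```python
import pandas as pd

df = pd.DataFrame({"A": [[1, 2], [3, 4]], "B": ["x", "y"]})
df.attrs = {"source": "hypothetical-sensor"}   # metadata stored on the frame

exploded = df.explode("A")

# __finalize__ copies attrs from the parent object onto the result
assert exploded.attrs == {"source": "hypothetical-sensor"}
assert exploded["A"].tolist() == [1, 2, 3, 4]
```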
https://api.github.com/repos/pandas-dev/pandas/pulls/46629
2022-04-03T21:04:52Z
2022-04-06T01:29:11Z
2022-04-06T01:29:11Z
2022-04-06T01:29:15Z
Remove references to previously-vendored Sphinx extensions
diff --git a/LICENSES/OTHER b/LICENSES/OTHER index f0550b4ee208a..7446d68eb43a6 100644 --- a/LICENSES/OTHER +++ b/LICENSES/OTHER @@ -1,8 +1,3 @@ -numpydoc license ----------------- - -The numpydoc license is in pandas/doc/sphinxext/LICENSE.txt - Bottleneck license ------------------ @@ -77,4 +72,4 @@ DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS -SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. \ No newline at end of file +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/doc/sphinxext/README.rst b/doc/sphinxext/README.rst index 8f0f4a8b2636d..ef52433e5e869 100644 --- a/doc/sphinxext/README.rst +++ b/doc/sphinxext/README.rst @@ -1,17 +1,5 @@ sphinxext ========= -This directory contains copies of different sphinx extensions in use in the -pandas documentation. These copies originate from other projects: - -- ``numpydoc`` - Numpy's Sphinx extensions: this can be found at its own - repository: https://github.com/numpy/numpydoc -- ``ipython_directive`` and ``ipython_console_highlighting`` in the folder - ``ipython_sphinxext`` - Sphinx extensions from IPython: these are included - in IPython: https://github.com/ipython/ipython/tree/master/IPython/sphinxext - -.. note:: - - These copies are maintained at the respective projects, so fixes should, - to the extent possible, be pushed upstream instead of only adapting our - local copy to avoid divergence between the local and upstream version. +This directory contains custom sphinx extensions in use in the pandas +documentation.
- [ ] closes #xxxx (Replace xxxx with the Github issue number) **No issue filed** - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature **Not applicable** - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. **Not applicable**
https://api.github.com/repos/pandas-dev/pandas/pulls/46624
2022-04-03T12:16:28Z
2022-04-06T01:24:39Z
2022-04-06T01:24:39Z
2022-04-06T01:24:43Z
BUG: SeriesGroupBy apply wrong name
diff --git a/doc/source/whatsnew/v1.5.0.rst b/doc/source/whatsnew/v1.5.0.rst index 98d56bac402ac..97ff3a9af6e26 100644 --- a/doc/source/whatsnew/v1.5.0.rst +++ b/doc/source/whatsnew/v1.5.0.rst @@ -602,6 +602,7 @@ Groupby/resample/rolling - Bug in :meth:`GroupBy.cummax` with ``int64`` dtype with leading value being the smallest possible int64 (:issue:`46382`) - Bug in :meth:`GroupBy.max` with empty groups and ``uint64`` dtype incorrectly raising ``RuntimeError`` (:issue:`46408`) - Bug in :meth:`.GroupBy.apply` would fail when ``func`` was a string and args or kwargs were supplied (:issue:`46479`) +- Bug in :meth:`SeriesGroupBy.apply` would incorrectly name its result when there was a unique group (:issue:`46369`) - Bug in :meth:`.Rolling.var` would segfault calculating weighted variance when window size was larger than data size (:issue:`46760`) Reshaping diff --git a/pandas/core/common.py b/pandas/core/common.py index 6c3d2c91ab012..90f665362ef56 100644 --- a/pandas/core/common.py +++ b/pandas/core/common.py @@ -612,7 +612,7 @@ def get_cython_func(arg: Callable) -> str | None: def is_builtin_func(arg): """ - if we define an builtin function for this argument, return it, + if we define a builtin function for this argument, return it, otherwise return the arg """ return _builtin_table.get(arg, arg) diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py index 8d9b94f242e33..06a1aed8e3b09 100644 --- a/pandas/core/groupby/generic.py +++ b/pandas/core/groupby/generic.py @@ -397,11 +397,13 @@ def _wrap_applied_output( res_ser.name = self.obj.name return res_ser elif isinstance(values[0], (Series, DataFrame)): - return self._concat_objects( + result = self._concat_objects( values, not_indexed_same=not_indexed_same, override_group_keys=override_group_keys, ) + result.name = self.obj.name + return result else: # GH #6265 #24880 result = self.obj._constructor( diff --git a/pandas/tests/groupby/test_apply.py b/pandas/tests/groupby/test_apply.py index 
9cb64766a1079..4cfc3ea41543b 100644 --- a/pandas/tests/groupby/test_apply.py +++ b/pandas/tests/groupby/test_apply.py @@ -1320,3 +1320,13 @@ def test_apply_str_with_args(df, args, kwargs): result = gb.apply("sum", *args, **kwargs) expected = gb.sum(numeric_only=True) tm.assert_frame_equal(result, expected) + + +@pytest.mark.parametrize("name", ["some_name", None]) +def test_result_name_when_one_group(name): + # GH 46369 + ser = Series([1, 2], name=name) + result = ser.groupby(["a", "a"], group_keys=False).apply(lambda x: x) + expected = Series([1, 2], name=name) + + tm.assert_series_equal(result, expected) diff --git a/pandas/tests/groupby/test_value_counts.py b/pandas/tests/groupby/test_value_counts.py index d38ba84cd1397..577a72d3f5090 100644 --- a/pandas/tests/groupby/test_value_counts.py +++ b/pandas/tests/groupby/test_value_counts.py @@ -183,12 +183,11 @@ def test_series_groupby_value_counts_on_categorical(): ), ] ), - name=0, ) # Expected: # 0 a 1 # b 0 - # Name: 0, dtype: int64 + # dtype: int64 tm.assert_series_equal(result, expected) diff --git a/pandas/tests/groupby/transform/test_transform.py b/pandas/tests/groupby/transform/test_transform.py index a77c95e30ab43..237d6a96f39ec 100644 --- a/pandas/tests/groupby/transform/test_transform.py +++ b/pandas/tests/groupby/transform/test_transform.py @@ -1511,10 +1511,5 @@ def test_null_group_str_transformer_series(request, dropna, transformation_func) msg = f"{transformation_func} is deprecated" with tm.assert_produces_warning(warn, match=msg): result = gb.transform(transformation_func, *args) - if dropna and transformation_func == "fillna": - # GH#46369 - result name is the group; remove this block when fixed. - tm.assert_equal(result, expected, check_names=False) - # This should be None - assert result.name == 1.0 - else: - tm.assert_equal(result, expected) + + tm.assert_equal(result, expected)
- [x] closes #46369 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
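The behavior being fixed, in miniature (assumes a pandas version with this fix): before, the result of the apply below was named after the single group (`"a"`) instead of after the Series itself.

```python
import pandas as pd

ser = pd.Series([1, 2], name="some_name")
result = ser.groupby(["a", "a"], group_keys=False).apply(lambda x: x)

# The result keeps the Series' own name, not the group label
assert result.name == "some_name"
assert result.tolist() == [1, 2]
```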
https://api.github.com/repos/pandas-dev/pandas/pulls/46623
2022-04-03T11:45:44Z
2022-04-19T02:49:13Z
2022-04-19T02:49:13Z
2022-04-19T02:49:26Z
Backport PR #46609 on branch 1.4.x (DOC: Start v1.4.3 release notes)
diff --git a/doc/source/whatsnew/index.rst b/doc/source/whatsnew/index.rst index 0f546d04ea0e7..47a46c86c3a44 100644 --- a/doc/source/whatsnew/index.rst +++ b/doc/source/whatsnew/index.rst @@ -16,6 +16,7 @@ Version 1.4 .. toctree:: :maxdepth: 2 + v1.4.3 v1.4.2 v1.4.1 v1.4.0 diff --git a/doc/source/whatsnew/v1.4.2.rst b/doc/source/whatsnew/v1.4.2.rst index 8a2bb4c6b3201..64c36632bfefe 100644 --- a/doc/source/whatsnew/v1.4.2.rst +++ b/doc/source/whatsnew/v1.4.2.rst @@ -42,4 +42,4 @@ Bug fixes Contributors ~~~~~~~~~~~~ -.. contributors:: v1.4.1..v1.4.2|HEAD +.. contributors:: v1.4.1..v1.4.2 diff --git a/doc/source/whatsnew/v1.4.3.rst b/doc/source/whatsnew/v1.4.3.rst new file mode 100644 index 0000000000000..d53acc698c3bb --- /dev/null +++ b/doc/source/whatsnew/v1.4.3.rst @@ -0,0 +1,45 @@ +.. _whatsnew_143: + +What's new in 1.4.3 (April ??, 2022) +------------------------------------ + +These are the changes in pandas 1.4.3. See :ref:`release` for a full changelog +including other versions of pandas. + +{{ header }} + +.. --------------------------------------------------------------------------- + +.. _whatsnew_143.regressions: + +Fixed regressions +~~~~~~~~~~~~~~~~~ +- +- + +.. --------------------------------------------------------------------------- + +.. _whatsnew_143.bug_fixes: + +Bug fixes +~~~~~~~~~ +- +- + +.. --------------------------------------------------------------------------- + +.. _whatsnew_143.other: + +Other +~~~~~ +- +- + +.. --------------------------------------------------------------------------- + +.. _whatsnew_143.contributors: + +Contributors +~~~~~~~~~~~~ + +.. contributors:: v1.4.2..v1.4.3|HEAD
Backport PR #46609: DOC: Start v1.4.3 release notes
https://api.github.com/repos/pandas-dev/pandas/pulls/46612
2022-04-02T10:42:54Z
2022-04-02T11:58:32Z
2022-04-02T11:58:32Z
2022-04-02T11:58:32Z
Add setup-conda action
diff --git a/.circleci/config.yml b/.circleci/config.yml index 612552f4eac59..0d9e3ade08846 100644 --- a/.circleci/config.yml +++ b/.circleci/config.yml @@ -13,7 +13,7 @@ jobs: PANDAS_CI: "1" steps: - checkout - - run: ci/setup_env.sh + - run: .circleci/setup_env.sh - run: PATH=$HOME/miniconda3/envs/pandas-dev/bin:$HOME/miniconda3/condabin:$PATH ci/run_tests.sh workflows: diff --git a/ci/setup_env.sh b/.circleci/setup_env.sh similarity index 100% rename from ci/setup_env.sh rename to .circleci/setup_env.sh diff --git a/.github/actions/setup-conda/action.yml b/.github/actions/setup-conda/action.yml new file mode 100644 index 0000000000000..1c947ff244fb9 --- /dev/null +++ b/.github/actions/setup-conda/action.yml @@ -0,0 +1,27 @@ +name: Set up Conda environment +inputs: + environment-file: + description: Conda environment file to use. + default: environment.yml + pyarrow-version: + description: If set, overrides the PyArrow version in the Conda environment to the given string. + required: false +runs: + using: composite + steps: + - name: Set Arrow version in ${{ inputs.environment-file }} to ${{ inputs.pyarrow-version }} + run: | + grep -q ' - pyarrow' ${{ inputs.environment-file }} + sed -i"" -e "s/ - pyarrow/ - pyarrow=${{ inputs.pyarrow-version }}/" ${{ inputs.environment-file }} + cat ${{ inputs.environment-file }} + shell: bash + if: ${{ inputs.pyarrow-version }} + + - name: Install ${{ inputs.environment-file }} + uses: conda-incubator/setup-miniconda@v2 + with: + environment-file: ${{ inputs.environment-file }} + channel-priority: ${{ runner.os == 'macOS' && 'flexible' || 'strict' }} + channels: conda-forge + mamba-version: "0.23" + use-mamba: true diff --git a/.github/actions/setup/action.yml b/.github/actions/setup/action.yml deleted file mode 100644 index c357f149f2c7f..0000000000000 --- a/.github/actions/setup/action.yml +++ /dev/null @@ -1,12 +0,0 @@ -name: Set up pandas -description: Runs all the setup steps required to have a built pandas ready to use 
-runs: - using: composite - steps: - - name: Setting conda path - run: echo "${HOME}/miniconda3/bin" >> $GITHUB_PATH - shell: bash -el {0} - - - name: Setup environment and build pandas - run: ci/setup_env.sh - shell: bash -el {0} diff --git a/.github/workflows/docbuild-and-upload.yml b/.github/workflows/docbuild-and-upload.yml index 8c2c86dc693a9..5ffd4135802bd 100644 --- a/.github/workflows/docbuild-and-upload.yml +++ b/.github/workflows/docbuild-and-upload.yml @@ -24,24 +24,27 @@ jobs: group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-web-docs cancel-in-progress: true + defaults: + run: + shell: bash -el {0} + steps: - name: Checkout uses: actions/checkout@v3 with: fetch-depth: 0 - - name: Set up pandas - uses: ./.github/actions/setup + - name: Set up Conda + uses: ./.github/actions/setup-conda + + - name: Build Pandas + uses: ./.github/actions/build_pandas - name: Build website - run: | - source activate pandas-dev - python web/pandas_web.py web/pandas --target-path=web/build + run: python web/pandas_web.py web/pandas --target-path=web/build - name: Build documentation - run: | - source activate pandas-dev - doc/make.py --warnings-are-errors + run: doc/make.py --warnings-are-errors - name: Install ssh key run: | @@ -49,18 +52,18 @@ jobs: echo "${{ secrets.server_ssh_key }}" > ~/.ssh/id_rsa chmod 600 ~/.ssh/id_rsa echo "${{ secrets.server_ip }} ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE1Kkopomm7FHG5enATf7SgnpICZ4W2bw+Ho+afqin+w7sMcrsa0je7sbztFAV8YchDkiBKnWTG4cRT+KZgZCaY=" > ~/.ssh/known_hosts - if: ${{github.event_name == 'push' && github.ref == 'refs/heads/main'}} + if: github.event_name == 'push' && github.ref == 'refs/heads/main' - name: Copy cheatsheets into site directory run: cp doc/cheatsheet/Pandas_Cheat_Sheet* web/build/ - name: Upload web run: rsync -az --delete --exclude='pandas-docs' --exclude='docs' web/build/ docs@${{ secrets.server_ip }}:/usr/share/nginx/pandas - if: ${{github.event_name 
== 'push' && github.ref == 'refs/heads/main'}} + if: github.event_name == 'push' && github.ref == 'refs/heads/main' - name: Upload dev docs run: rsync -az --delete doc/build/html/ docs@${{ secrets.server_ip }}:/usr/share/nginx/pandas/pandas-docs/dev - if: ${{github.event_name == 'push' && github.ref == 'refs/heads/main'}} + if: github.event_name == 'push' && github.ref == 'refs/heads/main' - name: Move docs into site directory run: mv doc/build/html web/build/docs diff --git a/.github/workflows/macos-windows.yml b/.github/workflows/macos-windows.yml index 560a421ec74ec..26e6c8699ca64 100644 --- a/.github/workflows/macos-windows.yml +++ b/.github/workflows/macos-windows.yml @@ -43,22 +43,11 @@ jobs: with: fetch-depth: 0 - - name: Install Dependencies - uses: conda-incubator/setup-miniconda@v2.1.1 + - name: Set up Conda + uses: ./.github/actions/setup-conda with: - mamba-version: "*" - channels: conda-forge - activate-environment: pandas-dev - channel-priority: ${{ matrix.os == 'macos-latest' && 'flexible' || 'strict' }} environment-file: ci/deps/${{ matrix.env_file }} - use-only-tar-bz2: true - - # ImportError: 2): Library not loaded: @rpath/libssl.1.1.dylib - # Referenced from: /Users/runner/miniconda3/envs/pandas-dev/lib/libthrift.0.13.0.dylib - # Reason: image not found - - name: Upgrade pyarrow on MacOS - run: conda install -n pandas-dev -c conda-forge --no-update-deps pyarrow=6 - if: ${{ matrix.os == 'macos-latest' }} + pyarrow-version: ${{ matrix.os == 'macos-latest' && '6' || '' }} - name: Build Pandas uses: ./.github/actions/build_pandas @@ -66,9 +55,6 @@ jobs: - name: Test run: ci/run_tests.sh - - name: Build Version - run: conda list - - name: Publish test results uses: actions/upload-artifact@v3 with: diff --git a/setup.py b/setup.py index 0418fe67c2f76..ced8b8dbc96c6 100755 --- a/setup.py +++ b/setup.py @@ -333,7 +333,7 @@ def run(self): extra_compile_args.append("/Z7") extra_link_args.append("/DEBUG") else: - # PANDAS_CI=1 is set by ci/setup_env.sh + # 
PANDAS_CI=1 is set in CI if os.environ.get("PANDAS_CI", "0") == "1": extra_compile_args.append("-Werror") if debugging_symbols_requested:
Smaller version of https://github.com/pandas-dev/pandas/pull/46493 according to the suggestion in https://github.com/pandas-dev/pandas/pull/46493#issuecomment-1079541210. Should be orthogonal to https://github.com/pandas-dev/pandas/pull/46538. - [x] closes #39175 - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
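The pyarrow-pinning step of the composite action can be exercised locally. The file below is a hypothetical stand-in for the real `environment.yml`, and GNU `sed` is assumed (the `-i""` spelling differs on BSD sed):

```shell
# Minimal stand-in environment file
cat > /tmp/environment.yml <<'EOF'
dependencies:
  - python=3.10
  - pyarrow
EOF

PYARROW_VERSION=6

# Same grep-then-sed the action runs: fail if the line is missing,
# then pin the unversioned pyarrow entry in place.
grep -q ' - pyarrow' /tmp/environment.yml
sed -i"" -e "s/ - pyarrow/ - pyarrow=${PYARROW_VERSION}/" /tmp/environment.yml
cat /tmp/environment.yml
```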
https://api.github.com/repos/pandas-dev/pandas/pulls/46611
2022-04-02T10:39:41Z
2022-06-03T20:59:04Z
2022-06-03T20:59:04Z
2022-06-03T20:59:20Z
DOC: Start v1.4.3 release notes
diff --git a/doc/source/whatsnew/index.rst b/doc/source/whatsnew/index.rst index 7ea81611ede27..ccec4f90183bc 100644 --- a/doc/source/whatsnew/index.rst +++ b/doc/source/whatsnew/index.rst @@ -24,6 +24,7 @@ Version 1.4 .. toctree:: :maxdepth: 2 + v1.4.3 v1.4.2 v1.4.1 v1.4.0 diff --git a/doc/source/whatsnew/v1.4.2.rst b/doc/source/whatsnew/v1.4.2.rst index 8a2bb4c6b3201..64c36632bfefe 100644 --- a/doc/source/whatsnew/v1.4.2.rst +++ b/doc/source/whatsnew/v1.4.2.rst @@ -42,4 +42,4 @@ Bug fixes Contributors ~~~~~~~~~~~~ -.. contributors:: v1.4.1..v1.4.2|HEAD +.. contributors:: v1.4.1..v1.4.2 diff --git a/doc/source/whatsnew/v1.4.3.rst b/doc/source/whatsnew/v1.4.3.rst new file mode 100644 index 0000000000000..d53acc698c3bb --- /dev/null +++ b/doc/source/whatsnew/v1.4.3.rst @@ -0,0 +1,45 @@ +.. _whatsnew_143: + +What's new in 1.4.3 (April ??, 2022) +------------------------------------ + +These are the changes in pandas 1.4.3. See :ref:`release` for a full changelog +including other versions of pandas. + +{{ header }} + +.. --------------------------------------------------------------------------- + +.. _whatsnew_143.regressions: + +Fixed regressions +~~~~~~~~~~~~~~~~~ +- +- + +.. --------------------------------------------------------------------------- + +.. _whatsnew_143.bug_fixes: + +Bug fixes +~~~~~~~~~ +- +- + +.. --------------------------------------------------------------------------- + +.. _whatsnew_143.other: + +Other +~~~~~ +- +- + +.. --------------------------------------------------------------------------- + +.. _whatsnew_143.contributors: + +Contributors +~~~~~~~~~~~~ + +.. contributors:: v1.4.2..v1.4.3|HEAD
null
https://api.github.com/repos/pandas-dev/pandas/pulls/46609
2022-04-02T09:08:28Z
2022-04-02T10:42:24Z
2022-04-02T10:42:24Z
2022-04-02T10:42:28Z
TYP: fix hashable keys for pd.concat
diff --git a/pandas/_typing.py b/pandas/_typing.py index 35c057df43322..1debc4265508f 100644 --- a/pandas/_typing.py +++ b/pandas/_typing.py @@ -73,6 +73,7 @@ else: npt: Any = None +HashableT = TypeVar("HashableT", bound=Hashable) # array-like diff --git a/pandas/core/reshape/concat.py b/pandas/core/reshape/concat.py index 72f3b402d49e3..17e78d3bb900a 100644 --- a/pandas/core/reshape/concat.py +++ b/pandas/core/reshape/concat.py @@ -18,7 +18,10 @@ import numpy as np -from pandas._typing import Axis +from pandas._typing import ( + Axis, + HashableT, +) from pandas.util._decorators import ( cache_readonly, deprecate_nonkeyword_arguments, @@ -62,7 +65,7 @@ @overload def concat( - objs: Iterable[DataFrame] | Mapping[Hashable, DataFrame], + objs: Iterable[DataFrame] | Mapping[HashableT, DataFrame], axis: Literal[0, "index"] = ..., join: str = ..., ignore_index: bool = ..., @@ -78,7 +81,7 @@ def concat( @overload def concat( - objs: Iterable[Series] | Mapping[Hashable, Series], + objs: Iterable[Series] | Mapping[HashableT, Series], axis: Literal[0, "index"] = ..., join: str = ..., ignore_index: bool = ..., @@ -94,7 +97,7 @@ def concat( @overload def concat( - objs: Iterable[NDFrame] | Mapping[Hashable, NDFrame], + objs: Iterable[NDFrame] | Mapping[HashableT, NDFrame], axis: Literal[0, "index"] = ..., join: str = ..., ignore_index: bool = ..., @@ -110,7 +113,7 @@ def concat( @overload def concat( - objs: Iterable[NDFrame] | Mapping[Hashable, NDFrame], + objs: Iterable[NDFrame] | Mapping[HashableT, NDFrame], axis: Literal[1, "columns"], join: str = ..., ignore_index: bool = ..., @@ -126,7 +129,7 @@ def concat( @overload def concat( - objs: Iterable[NDFrame] | Mapping[Hashable, NDFrame], + objs: Iterable[NDFrame] | Mapping[HashableT, NDFrame], axis: Axis = ..., join: str = ..., ignore_index: bool = ..., @@ -142,7 +145,7 @@ def concat( @deprecate_nonkeyword_arguments(version=None, allowed_args=["objs"]) def concat( - objs: Iterable[NDFrame] | Mapping[Hashable, NDFrame], + 
objs: Iterable[NDFrame] | Mapping[HashableT, NDFrame], axis: Axis = 0, join: str = "outer", ignore_index: bool = False, @@ -367,7 +370,7 @@ class _Concatenator: def __init__( self, - objs: Iterable[NDFrame] | Mapping[Hashable, NDFrame], + objs: Iterable[NDFrame] | Mapping[HashableT, NDFrame], axis=0, join: str = "outer", keys=None,
Could use `Any`, but I think the bounded `TypeVar` approach isn't ugly in this case (happy to change to `Any`).
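For anyone wondering why the `TypeVar` is needed at all: `Mapping` is invariant in its key type, so a parameter annotated `Mapping[Hashable, ...]` rejects a `dict[str, ...]` under a type checker, even though `str` is hashable. A bounded `TypeVar` sidesteps that, since `str` binds to `HashableT` directly. A self-contained sketch (`first_key` is an illustrative stand-in, not pandas code):

```python
from typing import Hashable, Mapping, TypeVar

HashableT = TypeVar("HashableT", bound=Hashable)

def first_key(objs: Mapping[HashableT, int]) -> HashableT:
    # Returns the first key; the checker preserves its precise type
    # (str here, not the wider Hashable).
    return next(iter(objs))

key = first_key({"a": 1, "b": 2})
assert key == "a"
```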
https://api.github.com/repos/pandas-dev/pandas/pulls/46608
2022-04-02T02:17:39Z
2022-04-04T13:42:55Z
2022-04-04T13:42:55Z
2022-05-26T01:59:15Z
Backport PR #46605 on branch 1.4.x (DOC: 1.4.2 release date)
diff --git a/doc/source/whatsnew/v1.4.2.rst b/doc/source/whatsnew/v1.4.2.rst index 66068f9faa923..8a2bb4c6b3201 100644 --- a/doc/source/whatsnew/v1.4.2.rst +++ b/doc/source/whatsnew/v1.4.2.rst @@ -1,7 +1,7 @@ .. _whatsnew_142: -What's new in 1.4.2 (March ??, 2022) ------------------------------------- +What's new in 1.4.2 (April 2, 2022) +----------------------------------- These are the changes in pandas 1.4.2. See :ref:`release` for a full changelog including other versions of pandas. @@ -22,6 +22,7 @@ Fixed regressions - Fixed regression in :meth:`DataFrame.replace` when the replacement value was explicitly ``None`` when passed in a dictionary to ``to_replace`` (:issue:`45601`, :issue:`45836`) - Fixed regression when setting values with :meth:`DataFrame.loc` losing :class:`MultiIndex` names if :class:`DataFrame` was empty before (:issue:`46317`) - Fixed regression when rendering boolean datatype columns with :meth:`.Styler` (:issue:`46384`) +- Fixed regression in :meth:`Groupby.rolling` with a frequency window that would raise a ``ValueError`` even if the datetimes within each group were monotonic (:issue:`46061`) .. --------------------------------------------------------------------------- @@ -33,16 +34,6 @@ Bug fixes - Fixed "longtable" formatting in :meth:`.Styler.to_latex` when ``column_format`` is given in extended format (:issue:`46037`) - Fixed incorrect rendering in :meth:`.Styler.format` with ``hyperlinks="html"`` when the url contains a colon or other special characters (:issue:`46389`) - Improved error message in :class:`~pandas.core.window.Rolling` when ``window`` is a frequency and ``NaT`` is in the rolling axis (:issue:`46087`) -- Fixed :meth:`Groupby.rolling` with a frequency window that would raise a ``ValueError`` even if the datetimes within each group were monotonic (:issue:`46061`) - -.. --------------------------------------------------------------------------- - -.. _whatsnew_142.other: - -Other -~~~~~ -- -- .. ---------------------------------------------------------------------------
Backport PR #46605: DOC: 1.4.2 release date
https://api.github.com/repos/pandas-dev/pandas/pulls/46607
2022-04-01T20:38:54Z
2022-04-01T21:44:02Z
2022-04-01T21:44:02Z
2022-04-01T21:44:02Z
DOC: 1.4.2 release date
diff --git a/doc/source/whatsnew/v1.4.2.rst b/doc/source/whatsnew/v1.4.2.rst index 66068f9faa923..8a2bb4c6b3201 100644 --- a/doc/source/whatsnew/v1.4.2.rst +++ b/doc/source/whatsnew/v1.4.2.rst @@ -1,7 +1,7 @@ .. _whatsnew_142: -What's new in 1.4.2 (March ??, 2022) ------------------------------------- +What's new in 1.4.2 (April 2, 2022) +----------------------------------- These are the changes in pandas 1.4.2. See :ref:`release` for a full changelog including other versions of pandas. @@ -22,6 +22,7 @@ Fixed regressions - Fixed regression in :meth:`DataFrame.replace` when the replacement value was explicitly ``None`` when passed in a dictionary to ``to_replace`` (:issue:`45601`, :issue:`45836`) - Fixed regression when setting values with :meth:`DataFrame.loc` losing :class:`MultiIndex` names if :class:`DataFrame` was empty before (:issue:`46317`) - Fixed regression when rendering boolean datatype columns with :meth:`.Styler` (:issue:`46384`) +- Fixed regression in :meth:`Groupby.rolling` with a frequency window that would raise a ``ValueError`` even if the datetimes within each group were monotonic (:issue:`46061`) .. --------------------------------------------------------------------------- @@ -33,16 +34,6 @@ Bug fixes - Fixed "longtable" formatting in :meth:`.Styler.to_latex` when ``column_format`` is given in extended format (:issue:`46037`) - Fixed incorrect rendering in :meth:`.Styler.format` with ``hyperlinks="html"`` when the url contains a colon or other special characters (:issue:`46389`) - Improved error message in :class:`~pandas.core.window.Rolling` when ``window`` is a frequency and ``NaT`` is in the rolling axis (:issue:`46087`) -- Fixed :meth:`Groupby.rolling` with a frequency window that would raise a ``ValueError`` even if the datetimes within each group were monotonic (:issue:`46061`) - -.. --------------------------------------------------------------------------- - -.. _whatsnew_142.other: - -Other -~~~~~ -- -- .. ---------------------------------------------------------------------------
null
https://api.github.com/repos/pandas-dev/pandas/pulls/46605
2022-04-01T19:25:22Z
2022-04-01T20:38:09Z
2022-04-01T20:38:08Z
2022-04-01T20:38:12Z
DOC: moved release note for #46087
diff --git a/doc/source/whatsnew/v1.4.2.rst b/doc/source/whatsnew/v1.4.2.rst index e98e419283508..66068f9faa923 100644 --- a/doc/source/whatsnew/v1.4.2.rst +++ b/doc/source/whatsnew/v1.4.2.rst @@ -32,6 +32,7 @@ Bug fixes - Fix some cases for subclasses that define their ``_constructor`` properties as general callables (:issue:`46018`) - Fixed "longtable" formatting in :meth:`.Styler.to_latex` when ``column_format`` is given in extended format (:issue:`46037`) - Fixed incorrect rendering in :meth:`.Styler.format` with ``hyperlinks="html"`` when the url contains a colon or other special characters (:issue:`46389`) +- Improved error message in :class:`~pandas.core.window.Rolling` when ``window`` is a frequency and ``NaT`` is in the rolling axis (:issue:`46087`) - Fixed :meth:`Groupby.rolling` with a frequency window that would raise a ``ValueError`` even if the datetimes within each group were monotonic (:issue:`46061`) .. --------------------------------------------------------------------------- diff --git a/doc/source/whatsnew/v1.5.0.rst b/doc/source/whatsnew/v1.5.0.rst index 0b8f11fc4749b..56b1a6317472b 100644 --- a/doc/source/whatsnew/v1.5.0.rst +++ b/doc/source/whatsnew/v1.5.0.rst @@ -89,7 +89,6 @@ Other enhancements - :meth:`DataFrame.rolling` and :meth:`Series.rolling` now support a ``step`` parameter with fixed-length windows (:issue:`15354`) - Implemented a ``bool``-dtype :class:`Index`, passing a bool-dtype array-like to ``pd.Index`` will now retain ``bool`` dtype instead of casting to ``object`` (:issue:`45061`) - Implemented a complex-dtype :class:`Index`, passing a complex-dtype array-like to ``pd.Index`` will now retain complex dtype instead of casting to ``object`` (:issue:`45845`) -- Improved error message in :class:`~pandas.core.window.Rolling` when ``window`` is a frequency and ``NaT`` is in the rolling axis (:issue:`46087`) - :class:`Series` and :class:`DataFrame` with ``IntegerDtype`` now supports bitwise operations (:issue:`34463`) - Add ``milliseconds`` field support for :class:`~pandas.DateOffset` (:issue:`43371`) - :meth:`DataFrame.reset_index` now accepts a ``names`` argument which renames the index names (:issue:`6878`)
xref #46592 I'm going to merge this before all tests complete, since this PR is for main only and I want to follow up quickly with a PR for the release date of 1.4.2 (and move another release note, xref https://github.com/pandas-dev/pandas/pull/46592#issuecomment-1086173499), which will conflict with this one
https://api.github.com/repos/pandas-dev/pandas/pulls/46604
2022-04-01T18:37:36Z
2022-04-01T19:15:01Z
2022-04-01T19:15:01Z
2022-04-01T19:15:05Z
TYP: na_value
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py index 112c401500472..393eb2997f6f0 100644 --- a/pandas/core/algorithms.py +++ b/pandas/core/algorithms.py @@ -500,7 +500,7 @@ def factorize_array( values: np.ndarray, na_sentinel: int = -1, size_hint: int | None = None, - na_value=None, + na_value: object = None, mask: npt.NDArray[np.bool_] | None = None, ) -> tuple[npt.NDArray[np.intp], np.ndarray]: """ diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py index b06a46dfd1447..a188692a2d8f7 100644 --- a/pandas/core/arrays/base.py +++ b/pandas/core/arrays/base.py @@ -460,7 +460,7 @@ def to_numpy( self, dtype: npt.DTypeLike | None = None, copy: bool = False, - na_value=lib.no_default, + na_value: object = lib.no_default, ) -> np.ndarray: """ Convert to a NumPy ndarray. diff --git a/pandas/core/arrays/masked.py b/pandas/core/arrays/masked.py index 5ae71b305ac60..90f56a3eea0fb 100644 --- a/pandas/core/arrays/masked.py +++ b/pandas/core/arrays/masked.py @@ -337,7 +337,7 @@ def to_numpy( self, dtype: npt.DTypeLike | None = None, copy: bool = False, - na_value: Scalar | lib.NoDefault | libmissing.NAType = lib.no_default, + na_value: object = lib.no_default, ) -> np.ndarray: """ Convert to a NumPy Array. diff --git a/pandas/core/arrays/numpy_.py b/pandas/core/arrays/numpy_.py index be7dc5e0ebdc6..36c67d2fe1225 100644 --- a/pandas/core/arrays/numpy_.py +++ b/pandas/core/arrays/numpy_.py @@ -368,7 +368,7 @@ def to_numpy( self, dtype: npt.DTypeLike | None = None, copy: bool = False, - na_value=lib.no_default, + na_value: object = lib.no_default, ) -> np.ndarray: result = np.asarray(self._ndarray, dtype=dtype) diff --git a/pandas/core/base.py b/pandas/core/base.py index ce11f31281f5d..12ab942e70574 100644 --- a/pandas/core/base.py +++ b/pandas/core/base.py @@ -433,7 +433,7 @@ def to_numpy( self, dtype: npt.DTypeLike | None = None, copy: bool = False, - na_value=lib.no_default, + na_value: object = lib.no_default, **kwargs, ) -> np.ndarray: """ diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py index 0a102d4e2bdc9..ded525cd099fc 100644 --- a/pandas/core/internals/managers.py +++ b/pandas/core/internals/managers.py @@ -1511,7 +1511,7 @@ def as_array( self, dtype: np.dtype | None = None, copy: bool = False, - na_value=lib.no_default, + na_value: object = lib.no_default, ) -> np.ndarray: """ Convert the blockmanager data into an numpy array. @@ -1570,7 +1570,7 @@ def as_array( def _interleave( self, dtype: np.dtype | None = None, - na_value=lib.no_default, + na_value: object = lib.no_default, ) -> np.ndarray: """ Return ndarray from blocks with specified item order diff --git a/pandas/tests/extension/decimal/array.py b/pandas/tests/extension/decimal/array.py index a3edc95fce96b..6eaa90d7b868a 100644 --- a/pandas/tests/extension/decimal/array.py +++ b/pandas/tests/extension/decimal/array.py @@ -105,7 +105,11 @@ def _from_factorized(cls, values, original): _HANDLED_TYPES = (decimal.Decimal, numbers.Number, np.ndarray) def to_numpy( - self, dtype=None, copy: bool = False, na_value=no_default, decimals=None + self, + dtype=None, + copy: bool = False, + na_value: object = no_default, + decimals=None, ) -> np.ndarray: result = np.asarray(self, dtype=dtype) if decimals is not None:
`na_value` can in most cases be anything (`object`)
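A rough illustration of why plain `object` is the honest annotation (a sketch against the public pandas API; the string sentinel here is an arbitrary choice, not anything this PR prescribes):

```python
import pandas as pd

# na_value is not restricted to a scalar of the array's dtype: when
# converting to an object-dtype ndarray, any Python object can serve as
# the fill value for missing entries.
arr = pd.array([1, 2, None], dtype="Int64")
filled = arr.to_numpy(dtype=object, na_value="missing")
print(filled)  # [1 2 'missing']
```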
https://api.github.com/repos/pandas-dev/pandas/pulls/46603
2022-04-01T18:24:17Z
2022-04-30T00:51:15Z
2022-04-30T00:51:15Z
2022-05-26T01:59:19Z
CI/TST: Make test_vector_resize more deterministic
diff --git a/pandas/tests/libs/test_hashtable.py b/pandas/tests/libs/test_hashtable.py index bed427da5dd12..0a3a10315b5fd 100644 --- a/pandas/tests/libs/test_hashtable.py +++ b/pandas/tests/libs/test_hashtable.py @@ -246,16 +246,7 @@ def test_lookup_overflow(self, writable): (ht.Float64HashTable, ht.Float64Vector, "float64", False), (ht.Int64HashTable, ht.Int64Vector, "int64", False), (ht.Int32HashTable, ht.Int32Vector, "int32", False), - pytest.param( - ht.UInt64HashTable, - ht.UInt64Vector, - "uint64", - False, - marks=pytest.mark.xfail( - reason="Sometimes doesn't raise.", - strict=False, - ), - ), + (ht.UInt64HashTable, ht.UInt64Vector, "uint64", False), ], ) def test_vector_resize( @@ -263,7 +254,9 @@ def test_vector_resize( ): # Test for memory errors after internal vector # reallocations (GH 7157) - vals = np.array(np.random.randn(1000), dtype=dtype) + # Changed from using np.random.rand to range + # which could cause flaky CI failures when safely_resizes=False + vals = np.array(range(1000), dtype=dtype) # GH 21688 ensures we can deal with read-only memory views vals.setflags(write=writable)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). I had to `xfail(strict=True)` this test once before which leads me to believe that the initial random data may be causing this test to be flaky. https://dev.azure.com/pandas-dev/pandas/_build/results?buildId=76111&view=logs&jobId=b1891a80-bdb0-5193-5f59-ce68a0874df0&j=b1891a80-bdb0-5193-5f59-ce68a0874df0&t=0a58a9e9-0673-539c-5bde-d78a5a3b655e ``` =================================== FAILURES =================================== _ TestHashTableUnsorted.test_vector_resize[False-Int32HashTable-Int32Vector-int32-False-10] _ [gw1] darwin -- Python 3.9.12 /usr/local/miniconda/envs/pandas-dev/bin/python self = <pandas.tests.libs.test_hashtable.TestHashTableUnsorted object at 0x161a3d7c0> writable = False htable = <pandas._libs.hashtable.Int32HashTable object at 0x1764fee30> uniques = <pandas._libs.hashtable.Int32Vector object at 0x176528e00> dtype = 'int32', safely_resizes = False, nvals = 10 @pytest.mark.parametrize("nvals", [0, 10]) # resizing to 0 is special case @pytest.mark.parametrize( "htable, uniques, dtype, safely_resizes", [ (ht.PyObjectHashTable, ht.ObjectVector, "object", False), (ht.StringHashTable, ht.ObjectVector, "object", True), (ht.Float64HashTable, ht.Float64Vector, "float64", False), (ht.Int64HashTable, ht.Int64Vector, "int64", False), (ht.Int32HashTable, ht.Int32Vector, "int32", False), pytest.param( ht.UInt64HashTable, ht.UInt64Vector, "uint64", False, marks=pytest.mark.xfail( reason="Sometimes doesn't raise.", else: with pytest.raises(ValueError, match="external reference.*"): > htable.get_labels(vals, uniques, 0, -1) E Failed: DID NOT RAISE <class 'ValueError'> ```
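One plausible way to see the fragility of the old random data (a sketch, not the test itself; the generator seed is arbitrary): standard-normal draws truncated to an integer dtype collapse into a handful of distinct values, so the number of unique insertions driving hashtable reallocation varied from run to run, while `range(1000)` guarantees 1000 distinct values every time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard-normal draws lie almost entirely in (-4, 4); truncating them
# to int32 leaves only a few distinct values.
random_vals = np.array(rng.standard_normal(1000), dtype="int32")

# A range is guaranteed to produce 1000 distinct values, so the vector's
# resize behavior is the same on every run.
deterministic_vals = np.array(range(1000), dtype="int32")

print(len(np.unique(random_vals)))         # only a handful
print(len(np.unique(deterministic_vals)))  # 1000
```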
https://api.github.com/repos/pandas-dev/pandas/pulls/46602
2022-04-01T17:26:11Z
2022-04-03T02:38:00Z
2022-04-03T02:38:00Z
2022-04-07T15:15:25Z
BUG: algorithms.factorize moves null values when sort=False
diff --git a/doc/source/whatsnew/v1.5.0.rst b/doc/source/whatsnew/v1.5.0.rst index 0ceac8aeb9db8..dcac0e5af912f 100644 --- a/doc/source/whatsnew/v1.5.0.rst +++ b/doc/source/whatsnew/v1.5.0.rst @@ -299,6 +299,7 @@ Other enhancements - Added ``copy`` keyword to :meth:`Series.set_axis` and :meth:`DataFrame.set_axis` to allow user to set axis on a new object without necessarily copying the underlying data (:issue:`47932`) - :meth:`Series.add_suffix`, :meth:`DataFrame.add_suffix`, :meth:`Series.add_prefix` and :meth:`DataFrame.add_prefix` support a ``copy`` argument. If ``False``, the underlying data is not copied in the returned object (:issue:`47934`) - :meth:`DataFrame.set_index` now supports a ``copy`` keyword. If ``False``, the underlying data is not copied when a new :class:`DataFrame` is returned (:issue:`48043`) +- The method :meth:`.ExtensionArray.factorize` accepts ``use_na_sentinel=False`` for determining how null values are to be treated (:issue:`46601`) .. --------------------------------------------------------------------------- .. _whatsnew_150.notable_bug_fixes: @@ -927,6 +928,7 @@ Numeric - Bug in division, ``pow`` and ``mod`` operations on array-likes with ``dtype="boolean"`` not being like their ``np.bool_`` counterparts (:issue:`46063`) - Bug in multiplying a :class:`Series` with ``IntegerDtype`` or ``FloatingDtype`` by an array-like with ``timedelta64[ns]`` dtype incorrectly raising (:issue:`45622`) - Bug in :meth:`mean` where the optional dependency ``bottleneck`` causes precision loss linear in the length of the array. ``bottleneck`` has been disabled for :meth:`mean` improving the loss to log-linear but may result in a performance decrease. (:issue:`42878`) +- Bug in :func:`factorize` would convert the value ``None`` to ``np.nan`` (:issue:`46601`) Conversion ^^^^^^^^^^ @@ -1098,6 +1100,7 @@ Groupby/resample/rolling - Bug in :meth:`DataFrame.groupby` and :meth:`Series.groupby` would not respect ``dropna=False`` when the input DataFrame/Series had a NaN values in a :class:`MultiIndex` (:issue:`46783`) - Bug in :meth:`DataFrameGroupBy.resample` raises ``KeyError`` when getting the result from a key list which misses the resample key (:issue:`47362`) - Bug in :meth:`DataFrame.groupby` would lose index columns when the DataFrame is empty for transforms, like fillna (:issue:`47787`) +- Bug in :meth:`DataFrame.groupby` and :meth:`Series.groupby` with ``dropna=False`` and ``sort=False`` would put any null groups at the end instead the order that they are encountered (:issue:`46584`) - Reshaping diff --git a/pandas/_libs/hashtable_class_helper.pxi.in b/pandas/_libs/hashtable_class_helper.pxi.in index 8a2b9c2f77627..f25e4ad009153 100644 --- a/pandas/_libs/hashtable_class_helper.pxi.in +++ b/pandas/_libs/hashtable_class_helper.pxi.in @@ -658,7 +658,7 @@ cdef class {{name}}HashTable(HashTable): return_inverse=return_inverse) def factorize(self, const {{dtype}}_t[:] values, Py_ssize_t na_sentinel=-1, - object na_value=None, object mask=None): + object na_value=None, object mask=None, ignore_na=True): """ Calculate unique values and labels (no sorting!) @@ -690,7 +690,7 @@ cdef class {{name}}HashTable(HashTable): """ uniques_vector = {{name}}Vector() return self._unique(values, uniques_vector, na_sentinel=na_sentinel, - na_value=na_value, ignore_na=True, mask=mask, + na_value=na_value, ignore_na=ignore_na, mask=mask, return_inverse=True) def get_labels(self, const {{dtype}}_t[:] values, {{name}}Vector uniques, @@ -1037,7 +1037,7 @@ cdef class StringHashTable(HashTable): return_inverse=return_inverse) def factorize(self, ndarray[object] values, Py_ssize_t na_sentinel=-1, - object na_value=None, object mask=None): + object na_value=None, object mask=None, ignore_na=True): """ Calculate unique values and labels (no sorting!) @@ -1067,7 +1067,7 @@ cdef class StringHashTable(HashTable): """ uniques_vector = ObjectVector() return self._unique(values, uniques_vector, na_sentinel=na_sentinel, - na_value=na_value, ignore_na=True, + na_value=na_value, ignore_na=ignore_na, return_inverse=True) def get_labels(self, ndarray[object] values, ObjectVector uniques, @@ -1290,7 +1290,7 @@ cdef class PyObjectHashTable(HashTable): return_inverse=return_inverse) def factorize(self, ndarray[object] values, Py_ssize_t na_sentinel=-1, - object na_value=None, object mask=None): + object na_value=None, object mask=None, ignore_na=True): """ Calculate unique values and labels (no sorting!) @@ -1320,7 +1320,7 @@ cdef class PyObjectHashTable(HashTable): """ uniques_vector = ObjectVector() return self._unique(values, uniques_vector, na_sentinel=na_sentinel, - na_value=na_value, ignore_na=True, + na_value=na_value, ignore_na=ignore_na, return_inverse=True) def get_labels(self, ndarray[object] values, ObjectVector uniques, diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py index a4736c2a141a5..9b031fc9517c7 100644 --- a/pandas/core/algorithms.py +++ b/pandas/core/algorithms.py @@ -509,7 +509,7 @@ def f(c, v): def factorize_array( values: np.ndarray, - na_sentinel: int = -1, + na_sentinel: int | None = -1, size_hint: int | None = None, na_value: object = None, mask: npt.NDArray[np.bool_] | None = None, @@ -540,6 +540,10 @@ def factorize_array( codes : ndarray[np.intp] uniques : ndarray """ + ignore_na = na_sentinel is not None + if not ignore_na: + na_sentinel = -1 + original = values if values.dtype.kind in ["m", "M"]: # _get_hashtable_algo will cast dt64/td64 to i8 via _ensure_data, so we @@ -552,7 +556,11 @@ def factorize_array( table = hash_klass(size_hint or len(values)) uniques, codes = table.factorize( - values, na_sentinel=na_sentinel, na_value=na_value, mask=mask + values, + na_sentinel=na_sentinel, + na_value=na_value, + mask=mask, + ignore_na=ignore_na, ) # re-cast e.g. i8->dt64/td64, uint8->bool @@ -726,6 +734,10 @@ def factorize( # responsible only for factorization. All data coercion, sorting and boxing # should happen here. + # GH#46910 deprecated na_sentinel in favor of use_na_sentinel: + # na_sentinel=None corresponds to use_na_sentinel=False + # na_sentinel=-1 correspond to use_na_sentinel=True + # Other na_sentinel values will not be supported when the deprecation is enforced. na_sentinel = resolve_na_sentinel(na_sentinel, use_na_sentinel) if isinstance(values, ABCRangeIndex): return values.factorize(sort=sort) @@ -737,10 +749,7 @@ def factorize( # GH35667, if na_sentinel=None, we will not dropna NaNs from the uniques # of values, assign na_sentinel=-1 to replace code value for NaN. - dropna = True - if na_sentinel is None: - na_sentinel = -1 - dropna = False + dropna = na_sentinel is not None if ( isinstance(values, (ABCDatetimeArray, ABCTimedeltaArray)) @@ -753,38 +762,58 @@ def factorize( elif not isinstance(values.dtype, np.dtype): if ( - na_sentinel == -1 - and "use_na_sentinel" in inspect.signature(values.factorize).parameters - ): + na_sentinel == -1 or na_sentinel is None + ) and "use_na_sentinel" in inspect.signature(values.factorize).parameters: # Avoid using catch_warnings when possible # GH#46910 - TimelikeOps has deprecated signature codes, uniques = values.factorize( # type: ignore[call-arg] - use_na_sentinel=True + use_na_sentinel=na_sentinel is not None ) else: + na_sentinel_arg = -1 if na_sentinel is None else na_sentinel with warnings.catch_warnings(): # We've already warned above warnings.filterwarnings("ignore", ".*use_na_sentinel.*", FutureWarning) - codes, uniques = values.factorize(na_sentinel=na_sentinel) + codes, uniques = values.factorize(na_sentinel=na_sentinel_arg) else: values = np.asarray(values) # convert DTA/TDA/MultiIndex + # TODO: pass na_sentinel=na_sentinel to factorize_array. When sort is True and + # na_sentinel is None we append NA on the end because safe_sort does not + # handle null values in uniques. + if na_sentinel is None and sort: + na_sentinel_arg = -1 + elif na_sentinel is None: + na_sentinel_arg = None + else: + na_sentinel_arg = na_sentinel codes, uniques = factorize_array( - values, na_sentinel=na_sentinel, size_hint=size_hint + values, + na_sentinel=na_sentinel_arg, + size_hint=size_hint, ) if sort and len(uniques) > 0: + if na_sentinel is None: + # TODO: Can remove when na_sentinel=na_sentinel as in TODO above + na_sentinel = -1 uniques, codes = safe_sort( uniques, codes, na_sentinel=na_sentinel, assume_unique=True, verify=False ) - code_is_na = codes == na_sentinel - if not dropna and code_is_na.any(): - # na_value is set based on the dtype of uniques, and compat set to False is - # because we do not want na_value to be 0 for integers - na_value = na_value_for_dtype(uniques.dtype, compat=False) - uniques = np.append(uniques, [na_value]) - codes = np.where(code_is_na, len(uniques) - 1, codes) + if not dropna and sort: + # TODO: Can remove entire block when na_sentinel=na_sentinel as in TODO above + if na_sentinel is None: + na_sentinel_arg = -1 + else: + na_sentinel_arg = na_sentinel + code_is_na = codes == na_sentinel_arg + if code_is_na.any(): + # na_value is set based on the dtype of uniques, and compat set to False is + # because we do not want na_value to be 0 for integers + na_value = na_value_for_dtype(uniques.dtype, compat=False) + uniques = np.append(uniques, [na_value]) + codes = np.where(code_is_na, len(uniques) - 1, codes) uniques = _reconstruct_data(uniques, original.dtype, original) diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py index a3b2003b0caf3..8d80cb18d21db 100644 --- a/pandas/core/arrays/arrow/array.py +++ b/pandas/core/arrays/arrow/array.py @@ -551,20 +551,32 @@ def factorize( use_na_sentinel: bool | lib.NoDefault = lib.no_default, ) -> tuple[np.ndarray, ExtensionArray]: resolved_na_sentinel = resolve_na_sentinel(na_sentinel, use_na_sentinel) - if resolved_na_sentinel is None: - raise NotImplementedError("Encoding NaN values is not yet implemented") + if pa_version_under4p0: + encoded = self._data.dictionary_encode() else: - na_sentinel = resolved_na_sentinel - encoded = self._data.dictionary_encode() + null_encoding = "mask" if resolved_na_sentinel is not None else "encode" + encoded = self._data.dictionary_encode(null_encoding=null_encoding) indices = pa.chunked_array( [c.indices for c in encoded.chunks], type=encoded.type.index_type ).to_pandas() if indices.dtype.kind == "f": - indices[np.isnan(indices)] = na_sentinel + indices[np.isnan(indices)] = ( + resolved_na_sentinel if resolved_na_sentinel is not None else -1 + ) indices = indices.astype(np.int64, copy=False) if encoded.num_chunks: uniques = type(self)(encoded.chunk(0).dictionary) + if resolved_na_sentinel is None and pa_version_under4p0: + # TODO: share logic with BaseMaskedArray.factorize + # Insert na with the proper code + na_mask = indices.values == -1 + na_index = na_mask.argmax() + if na_mask[na_index]: + uniques = uniques.insert(na_index, self.dtype.na_value) + na_code = 0 if na_index == 0 else indices[:na_index].argmax() + 1 + indices[indices >= na_code] += 1 + indices[indices == -1] = na_code else: uniques = type(self)(pa.array([], type=encoded.type.value_type)) diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py index 043917376b8c1..1e3b137184660 100644 --- a/pandas/core/arrays/base.py +++ b/pandas/core/arrays/base.py @@ -1081,14 +1081,10 @@ def factorize( # 2. ExtensionArray.factorize. # Complete control over factorization. resolved_na_sentinel = resolve_na_sentinel(na_sentinel, use_na_sentinel) - if resolved_na_sentinel is None: - raise NotImplementedError("Encoding NaN values is not yet implemented") - else: - na_sentinel = resolved_na_sentinel arr, na_value = self._values_for_factorize() codes, uniques = factorize_array( - arr, na_sentinel=na_sentinel, na_value=na_value + arr, na_sentinel=resolved_na_sentinel, na_value=na_value ) uniques_ea = self._from_factorized(uniques, self) diff --git a/pandas/core/arrays/masked.py b/pandas/core/arrays/masked.py index 128c7e44f5075..fb350b311532f 100644 --- a/pandas/core/arrays/masked.py +++ b/pandas/core/arrays/masked.py @@ -875,19 +875,39 @@ def factorize( use_na_sentinel: bool | lib.NoDefault = lib.no_default, ) -> tuple[np.ndarray, ExtensionArray]: resolved_na_sentinel = algos.resolve_na_sentinel(na_sentinel, use_na_sentinel) - if resolved_na_sentinel is None: - raise NotImplementedError("Encoding NaN values is not yet implemented") - else: - na_sentinel = resolved_na_sentinel arr = self._data mask = self._mask - codes, uniques = factorize_array(arr, na_sentinel=na_sentinel, mask=mask) + # Pass non-None na_sentinel; recode and add NA to uniques if necessary below + na_sentinel_arg = -1 if resolved_na_sentinel is None else resolved_na_sentinel + codes, uniques = factorize_array(arr, na_sentinel=na_sentinel_arg, mask=mask) # check that factorize_array correctly preserves dtype. assert uniques.dtype == self.dtype.numpy_dtype, (uniques.dtype, self.dtype) - uniques_ea = type(self)(uniques, np.zeros(len(uniques), dtype=bool)) + has_na = mask.any() + if resolved_na_sentinel is not None or not has_na: + size = len(uniques) + else: + # Make room for an NA value + size = len(uniques) + 1 + uniques_mask = np.zeros(size, dtype=bool) + if resolved_na_sentinel is None and has_na: + na_index = mask.argmax() + # Insert na with the proper code + if na_index == 0: + na_code = np.intp(0) + else: + # mypy error: Slice index must be an integer or None + # https://github.com/python/mypy/issues/2410 + na_code = codes[:na_index].argmax() + 1 # type: ignore[misc] + codes[codes >= na_code] += 1 + codes[codes == -1] = na_code + # dummy value for uniques; not used since uniques_mask will be True + uniques = np.insert(uniques, na_code, 0) + uniques_mask[na_code] = True + uniques_ea = type(self)(uniques, uniques_mask) + return codes, uniques_ea @doc(ExtensionArray._values_for_argsort) diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py index e9302efdce2e7..e2cb49957cc2a 100644 --- a/pandas/core/arrays/sparse/array.py +++ b/pandas/core/arrays/sparse/array.py @@ -875,6 +875,10 @@ def factorize( codes, uniques = algos.factorize( np.asarray(self), na_sentinel=na_sentinel, use_na_sentinel=use_na_sentinel ) + if na_sentinel is lib.no_default: + na_sentinel = -1 + if use_na_sentinel is lib.no_default or use_na_sentinel: + codes[codes == -1] = na_sentinel uniques_sp = SparseArray(uniques, dtype=self.dtype) return codes, uniques_sp diff --git a/pandas/core/arrays/string_.py b/pandas/core/arrays/string_.py index e2bc08c03ed3c..0e2df9f7cf728 100644 --- a/pandas/core/arrays/string_.py +++ b/pandas/core/arrays/string_.py @@ -380,8 +380,8 @@ def __arrow_array__(self, type=None): def _values_for_factorize(self): arr = self._ndarray.copy() mask = self.isna() - arr[mask] = -1 - return arr, -1 + arr[mask] = None + return arr, None def __setitem__(self, key, value): value = extract_array(value, extract_numpy=True) diff --git a/pandas/core/groupby/grouper.py b/pandas/core/groupby/grouper.py index 04ebc00b8e964..72f54abdced27 100644 --- a/pandas/core/groupby/grouper.py +++ b/pandas/core/groupby/grouper.py @@ -658,9 +658,10 @@ def group_index(self) -> Index: @cache_readonly def _codes_and_uniques(self) -> tuple[npt.NDArray[np.signedinteger], ArrayLike]: - if self._passed_categorical: + if self._dropna and self._passed_categorical: # we make a CategoricalIndex out of the cat grouper - # preserving the categories / ordered attributes + # preserving the categories / ordered attributes; + # doesn't (yet - GH#46909) handle dropna=False cat = self.grouping_vector categories = cat.categories diff --git a/pandas/tests/groupby/test_groupby_dropna.py b/pandas/tests/groupby/test_groupby_dropna.py index 515c96780e731..ee08307c95311 100644 --- a/pandas/tests/groupby/test_groupby_dropna.py +++ b/pandas/tests/groupby/test_groupby_dropna.py @@ -1,6 +1,8 @@ import numpy as np import pytest +from pandas.compat.pyarrow import pa_version_under1p01 + import pandas as pd import pandas._testing as tm @@ -387,3 +389,61 @@ def test_groupby_drop_nan_with_multi_index(): result = df.groupby(["a", "b"], dropna=False).first() expected = df tm.assert_frame_equal(result, expected) + + +@pytest.mark.parametrize( + "values, dtype", + [ + ([2, np.nan, 1, 2], None), + ([2, np.nan, 1, 2], "UInt8"), + ([2, np.nan, 1, 2], "Int8"), + ([2, np.nan, 1, 2], "UInt16"), + ([2, np.nan, 1, 2], "Int16"), + ([2, np.nan, 1, 2], "UInt32"), + ([2, np.nan, 1, 2], "Int32"), + ([2, np.nan, 1, 2], "UInt64"), + ([2, np.nan, 1, 2], "Int64"), + ([2, np.nan, 1, 2], "Float32"), + ([2, np.nan, 1, 2], "Int64"), + ([2, np.nan, 1, 2], "Float64"), + (["y", None, "x", "y"], "category"), + (["y", pd.NA, "x", "y"], "string"), + pytest.param( + ["y", pd.NA, "x", "y"], + "string[pyarrow]", + marks=pytest.mark.skipif( + pa_version_under1p01, reason="pyarrow is not installed" + ), + ), + ( + ["2016-01-01", np.datetime64("NaT"), "2017-01-01", "2016-01-01"], + "datetime64[ns]", + ), + ( + [ + pd.Period("2012-02-01", freq="D"), + pd.NA, + pd.Period("2012-01-01", freq="D"), + pd.Period("2012-02-01", freq="D"), + ], + None, + ), + (pd.arrays.SparseArray([2, np.nan, 1, 2]), None), + ], +) +@pytest.mark.parametrize("test_series", [True, False]) +def test_no_sort_keep_na(values, dtype, test_series): + # GH#46584 + key = pd.Series(values, dtype=dtype) + df = pd.DataFrame({"key": key, "a": [1, 2, 3, 4]}) + gb = df.groupby("key", dropna=False, sort=False) + if test_series: + gb = gb["a"] + result = gb.sum() + expected = pd.DataFrame({"a": [5, 2, 3]}, index=key[:-1].rename("key")) + if test_series: + expected = expected["a"] + if expected.index.is_categorical(): + # TODO: Slicing reorders categories? + expected.index = expected.index.reorder_categories(["y", "x"]) + tm.assert_equal(result, expected) diff --git a/pandas/tests/groupby/transform/test_transform.py b/pandas/tests/groupby/transform/test_transform.py index d2928c52c33e2..113bd4a0c4c65 100644 --- a/pandas/tests/groupby/transform/test_transform.py +++ b/pandas/tests/groupby/transform/test_transform.py @@ -1326,12 +1326,8 @@ def test_transform_cumcount(): @pytest.mark.parametrize("keys", [["A1"], ["A1", "A2"]]) -def test_null_group_lambda_self(request, sort, dropna, keys): +def test_null_group_lambda_self(sort, dropna, keys): # GH 17093 - if not sort and not dropna: - msg = "GH#46584: null values get sorted when sort=False" - request.node.add_marker(pytest.mark.xfail(reason=msg, strict=False)) - size = 50 nulls1 = np.random.choice([False, True], size) nulls2 = np.random.choice([False, True], size) diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py index def63c552e059..aff48d327a11c 100644 --- a/pandas/tests/test_algos.py +++ b/pandas/tests/test_algos.py @@ -455,13 +455,13 @@ def test_factorize_na_sentinel(self, sort, na_sentinel, data, uniques): [ ( ["a", None, "b", "a"], - np.array([0, 2, 1, 0], dtype=np.dtype("intp")), - np.array(["a", "b", np.nan], dtype=object), + np.array([0, 1, 2, 0], dtype=np.dtype("intp")), + np.array(["a", None, "b"], dtype=object), ), ( ["a", np.nan, "b", "a"], - np.array([0, 2, 1, 0], dtype=np.dtype("intp")), - np.array(["a", "b", np.nan], dtype=object), + np.array([0, 1, 2, 0], dtype=np.dtype("intp")), + np.array(["a", np.nan, "b"], dtype=object), ), ], ) @@ -478,13 +478,13 @@ def test_object_factorize_use_na_sentinel_false( [ ( [1, None, 1, 2], - np.array([0, 2, 0, 1], dtype=np.dtype("intp")), - np.array([1, 2, np.nan], dtype="O"), + np.array([0, 1, 0, 2], dtype=np.dtype("intp")), + np.array([1, None, 2], dtype="O"), ), ( [1, np.nan, 1, 2], - np.array([0, 2, 0, 1], dtype=np.dtype("intp")), - np.array([1, 2, np.nan], dtype=np.float64), + np.array([0, 1, 0, 2], dtype=np.dtype("intp")), + np.array([1, np.nan, 2], dtype=np.float64), ), ], )
- [x] closes #46584 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. In the example below, the result_index has nan moved even though sort=False. This is the order that will appear in any groupby reduction result, and the reason why transform currently returns wrong results. ``` df = pd.DataFrame({'a': [1, 3, np.nan, 1, 2], 'b': [3, 4, 5, 6, 7]}) print(df.groupby('a', sort=False, dropna=False).grouper.result_index) # main Float64Index([1.0, 3.0, 2.0, nan], dtype='float64', name='a') # this PR Float64Index([1.0, 3.0, nan, 2.0], dtype='float64', name='a') ``` cc @jbrockmendel @jreback
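The fixed encounter-order behavior can be sketched against the public API (a sketch assuming pandas >= 1.5, where `use_na_sentinel=False` supersedes `na_sentinel=None`):

```python
import numpy as np
import pandas as pd

values = np.array(["a", None, "b", "a"], dtype=object)

# With use_na_sentinel=False the null value gets a real code and stays in
# uniques at the position where it was first encountered, rather than being
# appended at the end (and it is no longer converted from None to np.nan).
codes, uniques = pd.factorize(values, use_na_sentinel=False)
print(codes)  # [0 1 2 0]
print(uniques)
```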
https://api.github.com/repos/pandas-dev/pandas/pulls/46601
2022-04-01T16:46:57Z
2022-08-18T16:09:11Z
2022-08-18T16:09:11Z
2022-09-10T16:49:41Z
CI: Simplify call to asv
diff --git a/.github/workflows/code-checks.yml b/.github/workflows/code-checks.yml index 16ac3cbf47705..aac6f4387a74a 100644 --- a/.github/workflows/code-checks.yml +++ b/.github/workflows/code-checks.yml @@ -140,22 +140,12 @@ jobs: - name: Run ASV benchmarks run: | cd asv_bench - asv check -E existing - git remote add upstream https://github.com/pandas-dev/pandas.git - git fetch upstream asv machine --yes - asv dev | sed "/failed$/ s/^/##[error]/" | tee benchmarks.log + # TODO add `--durations` when we start using asv >= 0.5 (#46598) + asv run --quick --dry-run --python=same | sed "/failed$/ s/^/##[error]/" | tee benchmarks.log if grep "failed" benchmarks.log > /dev/null ; then exit 1 fi - if: ${{ steps.build.outcome == 'success' }} - - - name: Publish benchmarks artifact - uses: actions/upload-artifact@v3 - with: - name: Benchmarks log - path: asv_bench/benchmarks.log - if: failure() build_docker_dev_environment: name: Build Docker Dev Environment diff --git a/environment.yml b/environment.yml index 0dc9806856585..e991f29b8a727 100644 --- a/environment.yml +++ b/environment.yml @@ -9,7 +9,7 @@ dependencies: - pytz # benchmarks - - asv < 0.5.0 # 2022-02-08: v0.5.0 > leads to ASV checks running > 3 hours on CI + - asv # building # The compiler packages are meta-packages and install the correct compiler (activation) packages on the respective platforms. diff --git a/requirements-dev.txt b/requirements-dev.txt index 94709171739d2..22692da8f0ed4 100644 --- a/requirements-dev.txt +++ b/requirements-dev.txt @@ -4,7 +4,7 @@ numpy>=1.18.5 python-dateutil>=2.8.1 pytz -asv < 0.5.0 +asv cython>=0.29.24 black==22.3.0 cpplint
The way we are calling `asv` in the CI is needlessly complex. Removing the following unnecessary things:

- `asv check` is not worth calling, since it just imports the benchmarks, which will be imported again anyway when calling `asv run`
- Adding the `remote` to git doesn't have any effect I think, since we don't specify the commit, and we should be running benchmarks for what's been checked out
- `asv machine` is not useful, since we won't save the results or compare them with other runs
- `asv dev` just avoids creating environments, but we can speed things up a bit by running each benchmark only once and not saving the results, which are not used anyway
- The conditional to run the benchmarks only if the previous build step succeeded is the default. It makes sense in other parts of this file, since several steps depend on a build, but not in this case
- I don't think publishing the asv log makes much sense. I think I added that myself in the past, but it now feels like a waste of time and resources (unless someone used that artifact, in which case please let me know)

I'm going to see how to remove the hacky grep and just leave the asv call, but I think that needs a new asv release.
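Put together, the simplified benchmark step reduces to a single `asv run` call. A sketch of the resulting workflow step (the `sed`/`grep` pipeline is the temporary failure check mentioned above, pending a new asv release):

```shell
cd asv_bench
asv machine --yes
# --quick runs each benchmark once; --dry-run skips saving results;
# --python=same reuses the already-built environment
asv run --quick --dry-run --python=same | sed "/failed$/ s/^/##[error]/" | tee benchmarks.log
if grep "failed" benchmarks.log > /dev/null; then
    exit 1
fi
```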
https://api.github.com/repos/pandas-dev/pandas/pulls/46599
2022-04-01T14:47:48Z
2022-04-03T02:37:25Z
2022-04-03T02:37:25Z
2022-04-10T11:18:58Z
Backport PR #46567 on branch 1.4.x (BUG: groupby().rolling(freq) with monotonic dates within groups #46065 )
diff --git a/doc/source/whatsnew/v1.4.2.rst b/doc/source/whatsnew/v1.4.2.rst index 76b2a5d6ffd47..66068f9faa923 100644 --- a/doc/source/whatsnew/v1.4.2.rst +++ b/doc/source/whatsnew/v1.4.2.rst @@ -32,6 +32,8 @@ Bug fixes - Fix some cases for subclasses that define their ``_constructor`` properties as general callables (:issue:`46018`) - Fixed "longtable" formatting in :meth:`.Styler.to_latex` when ``column_format`` is given in extended format (:issue:`46037`) - Fixed incorrect rendering in :meth:`.Styler.format` with ``hyperlinks="html"`` when the url contains a colon or other special characters (:issue:`46389`) +- Improved error message in :class:`~pandas.core.window.Rolling` when ``window`` is a frequency and ``NaT`` is in the rolling axis (:issue:`46087`) +- Fixed :meth:`Groupby.rolling` with a frequency window that would raise a ``ValueError`` even if the datetimes within each group were monotonic (:issue:`46061`) .. --------------------------------------------------------------------------- diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py index 72ebbcbc65e5e..6d74c6db1f7ed 100644 --- a/pandas/core/window/rolling.py +++ b/pandas/core/window/rolling.py @@ -837,12 +837,6 @@ def _gotitem(self, key, ndim, subset=None): subset = self.obj.set_index(self._on) return super()._gotitem(key, ndim, subset=subset) - def _validate_monotonic(self): - """ - Validate that "on" is monotonic; already validated at a higher level. 
- """ - pass - class Window(BaseWindow): """ @@ -1687,7 +1681,7 @@ def _validate(self): or isinstance(self._on, (DatetimeIndex, TimedeltaIndex, PeriodIndex)) ) and isinstance(self.window, (str, BaseOffset, timedelta)): - self._validate_monotonic() + self._validate_datetimelike_monotonic() # this will raise ValueError on non-fixed freqs try: @@ -1712,18 +1706,24 @@ def _validate(self): elif not is_integer(self.window) or self.window < 0: raise ValueError("window must be an integer 0 or greater") - def _validate_monotonic(self): + def _validate_datetimelike_monotonic(self): """ - Validate monotonic (increasing or decreasing). + Validate self._on is monotonic (increasing or decreasing) and has + no NaT values for frequency windows. """ + if self._on.hasnans: + self._raise_monotonic_error("values must not have NaT") if not (self._on.is_monotonic_increasing or self._on.is_monotonic_decreasing): - self._raise_monotonic_error() + self._raise_monotonic_error("values must be monotonic") - def _raise_monotonic_error(self): - formatted = self.on - if self.on is None: - formatted = "index" - raise ValueError(f"{formatted} must be monotonic") + def _raise_monotonic_error(self, msg: str): + on = self.on + if on is None: + if self.axis == 0: + on = "index" + else: + on = "column" + raise ValueError(f"{on} {msg}") @doc( _shared_docs["aggregate"], @@ -2631,12 +2631,20 @@ def _get_window_indexer(self) -> GroupbyIndexer: ) return window_indexer - def _validate_monotonic(self): + def _validate_datetimelike_monotonic(self): """ - Validate that on is monotonic; + Validate that each group in self._on is monotonic """ - if ( - not (self._on.is_monotonic_increasing or self._on.is_monotonic_decreasing) - or self._on.hasnans - ): - self._raise_monotonic_error() + # GH 46061 + if self._on.hasnans: + self._raise_monotonic_error("values must not have NaT") + for group_indices in self._grouper.indices.values(): + group_on = self._on.take(group_indices) + if not ( + 
group_on.is_monotonic_increasing or group_on.is_monotonic_decreasing + ): + on = "index" if self.on is None else self.on + raise ValueError( + f"Each group within {on} must be monotonic. " + f"Sort the values in {on} first." + ) diff --git a/pandas/tests/window/test_groupby.py b/pandas/tests/window/test_groupby.py index 6ec19e4899d53..f1d84f7ae1cbf 100644 --- a/pandas/tests/window/test_groupby.py +++ b/pandas/tests/window/test_groupby.py @@ -651,7 +651,7 @@ def test_groupby_rolling_nans_in_index(self, rollings, key): ) if key == "index": df = df.set_index("a") - with pytest.raises(ValueError, match=f"{key} must be monotonic"): + with pytest.raises(ValueError, match=f"{key} values must not have NaT"): df.groupby("c").rolling("60min", **rollings) @pytest.mark.parametrize("group_keys", [True, False]) @@ -895,6 +895,83 @@ def test_nan_and_zero_endpoints(self): ) tm.assert_series_equal(result, expected) + def test_groupby_rolling_non_monotonic(self): + # GH 43909 + + shuffled = [3, 0, 1, 2] + sec = 1_000 + df = DataFrame( + [{"t": Timestamp(2 * x * sec), "x": x + 1, "c": 42} for x in shuffled] + ) + with pytest.raises(ValueError, match=r".* must be monotonic"): + df.groupby("c").rolling(on="t", window="3s") + + def test_groupby_monotonic(self): + + # GH 15130 + # we don't need to validate monotonicity when grouping + + # GH 43909 we should raise an error here to match + # behaviour of non-groupby rolling. 
+ + data = [ + ["David", "1/1/2015", 100], + ["David", "1/5/2015", 500], + ["David", "5/30/2015", 50], + ["David", "7/25/2015", 50], + ["Ryan", "1/4/2014", 100], + ["Ryan", "1/19/2015", 500], + ["Ryan", "3/31/2016", 50], + ["Joe", "7/1/2015", 100], + ["Joe", "9/9/2015", 500], + ["Joe", "10/15/2015", 50], + ] + + df = DataFrame(data=data, columns=["name", "date", "amount"]) + df["date"] = to_datetime(df["date"]) + df = df.sort_values("date") + + expected = ( + df.set_index("date") + .groupby("name") + .apply(lambda x: x.rolling("180D")["amount"].sum()) + ) + result = df.groupby("name").rolling("180D", on="date")["amount"].sum() + tm.assert_series_equal(result, expected) + + def test_datelike_on_monotonic_within_each_group(self): + # GH 13966 (similar to #15130, closed by #15175) + + # superseded by 43909 + # GH 46061: OK if the on is monotonic relative to each each group + + dates = date_range(start="2016-01-01 09:30:00", periods=20, freq="s") + df = DataFrame( + { + "A": [1] * 20 + [2] * 12 + [3] * 8, + "B": np.concatenate((dates, dates)), + "C": np.arange(40), + } + ) + + expected = ( + df.set_index("B").groupby("A").apply(lambda x: x.rolling("4s")["C"].mean()) + ) + result = df.groupby("A").rolling("4s", on="B").C.mean() + tm.assert_series_equal(result, expected) + + def test_datelike_on_not_monotonic_within_each_group(self): + # GH 46061 + df = DataFrame( + { + "A": [1] * 3 + [2] * 3, + "B": [Timestamp(year, 1, 1) for year in [2020, 2021, 2019]] * 2, + "C": range(6), + } + ) + with pytest.raises(ValueError, match="Each group within B must be monotonic."): + df.groupby("A").rolling("365D", on="B") + class TestExpanding: def setup_method(self): diff --git a/pandas/tests/window/test_rolling.py b/pandas/tests/window/test_rolling.py index 814bd6b998182..b60f2e60e1035 100644 --- a/pandas/tests/window/test_rolling.py +++ b/pandas/tests/window/test_rolling.py @@ -1419,18 +1419,6 @@ def test_groupby_rolling_nan_included(): tm.assert_frame_equal(result, expected) -def 
test_groupby_rolling_non_monotonic(): - # GH 43909 - - shuffled = [3, 0, 1, 2] - sec = 1_000 - df = DataFrame( - [{"t": Timestamp(2 * x * sec), "x": x + 1, "c": 42} for x in shuffled] - ) - with pytest.raises(ValueError, match=r".* must be monotonic"): - df.groupby("c").rolling(on="t", window="3s") - - @pytest.mark.parametrize("method", ["skew", "kurt"]) def test_rolling_skew_kurt_numerical_stability(method): # GH#6929 diff --git a/pandas/tests/window/test_timeseries_window.py b/pandas/tests/window/test_timeseries_window.py index f2cf7bd47e15b..274c986ac28f5 100644 --- a/pandas/tests/window/test_timeseries_window.py +++ b/pandas/tests/window/test_timeseries_window.py @@ -5,10 +5,10 @@ DataFrame, Index, MultiIndex, + NaT, Series, Timestamp, date_range, - to_datetime, ) import pandas._testing as tm @@ -134,7 +134,7 @@ def test_non_monotonic_on(self): assert not df.index.is_monotonic - msg = "index must be monotonic" + msg = "index values must be monotonic" with pytest.raises(ValueError, match=msg): df.rolling("2s").sum() @@ -643,65 +643,6 @@ def agg_by_day(x): tm.assert_frame_equal(result, expected) - def test_groupby_monotonic(self): - - # GH 15130 - # we don't need to validate monotonicity when grouping - - # GH 43909 we should raise an error here to match - # behaviour of non-groupby rolling. 
- - data = [ - ["David", "1/1/2015", 100], - ["David", "1/5/2015", 500], - ["David", "5/30/2015", 50], - ["David", "7/25/2015", 50], - ["Ryan", "1/4/2014", 100], - ["Ryan", "1/19/2015", 500], - ["Ryan", "3/31/2016", 50], - ["Joe", "7/1/2015", 100], - ["Joe", "9/9/2015", 500], - ["Joe", "10/15/2015", 50], - ] - - df = DataFrame(data=data, columns=["name", "date", "amount"]) - df["date"] = to_datetime(df["date"]) - df = df.sort_values("date") - - expected = ( - df.set_index("date") - .groupby("name") - .apply(lambda x: x.rolling("180D")["amount"].sum()) - ) - result = df.groupby("name").rolling("180D", on="date")["amount"].sum() - tm.assert_series_equal(result, expected) - - def test_non_monotonic_raises(self): - # GH 13966 (similar to #15130, closed by #15175) - - # superseded by 43909 - - dates = date_range(start="2016-01-01 09:30:00", periods=20, freq="s") - df = DataFrame( - { - "A": [1] * 20 + [2] * 12 + [3] * 8, - "B": np.concatenate((dates, dates)), - "C": np.arange(40), - } - ) - - expected = ( - df.set_index("B").groupby("A").apply(lambda x: x.rolling("4s")["C"].mean()) - ) - with pytest.raises(ValueError, match=r".* must be monotonic"): - df.groupby("A").rolling( - "4s", on="B" - ).C.mean() # should raise for non-monotonic t series - - df2 = df.sort_values("B") - result = df2.groupby("A").rolling("4s", on="B").C.mean() - tm.assert_series_equal(result, expected) - def test_rolling_cov_offset(self): # GH16058 @@ -757,3 +698,12 @@ def test_rolling_on_multi_index_level(self): {"column": [0.0, 1.0, 3.0, 6.0, 10.0, 15.0]}, index=df.index ) tm.assert_frame_equal(result, expected) + + +@pytest.mark.parametrize("msg, axis", [["column", 1], ["index", 0]]) +def test_nat_axis_error(msg, axis): + idx = [Timestamp("2020"), NaT] + kwargs = {"columns" if axis == 1 else "index": idx} + df = DataFrame(np.eye(2), **kwargs) + with pytest.raises(ValueError, match=f"{msg} values must not have NaT"): + df.rolling("D", axis=axis).mean()
Backport PRs #46567, #46087

@mroeschke marked as draft since this PR also backports #46087 (whose conflicts caused the autobackport to fail). The only conflict doing this was the release note in 1.5 (which would need a separate PR to move on main).

(side note: why was the release note for #46567 in the bug fix section and not regressions?)

#46087 is labelled as an enhancement but could also be considered a bugfix? Although saying that, I think we generally don't consider changes to an error message to be API changes (only changes to the exception type)?

wdyt about backporting #46087? No pressure, but if you prefer not to, I will leave it to you to do the manual backport! (you are more familiar with this code, so less likely to make a mistake)
https://api.github.com/repos/pandas-dev/pandas/pulls/46592
2022-04-01T09:34:19Z
2022-04-01T17:42:19Z
2022-04-01T17:42:19Z
2022-04-01T17:42:23Z
REF: Create pandas/core/arrays/arrow
diff --git a/pandas/core/arrays/_mixins.py b/pandas/core/arrays/_mixins.py index eb0ebd8d08340..4563759c63a36 100644 --- a/pandas/core/arrays/_mixins.py +++ b/pandas/core/arrays/_mixins.py @@ -28,11 +28,6 @@ npt, type_t, ) -from pandas.compat import ( - pa_version_under1p01, - pa_version_under2p0, - pa_version_under5p0, -) from pandas.errors import AbstractMethodError from pandas.util._decorators import doc from pandas.util._validators import ( @@ -42,11 +37,7 @@ ) from pandas.core.dtypes.common import ( - is_array_like, - is_bool_dtype, is_dtype_equal, - is_integer, - is_scalar, pandas_dtype, ) from pandas.core.dtypes.dtypes import ( @@ -54,10 +45,7 @@ ExtensionDtype, PeriodDtype, ) -from pandas.core.dtypes.missing import ( - array_equivalent, - isna, -) +from pandas.core.dtypes.missing import array_equivalent from pandas.core import missing from pandas.core.algorithms import ( @@ -69,28 +57,19 @@ from pandas.core.array_algos.transforms import shift from pandas.core.arrays.base import ExtensionArray from pandas.core.construction import extract_array -from pandas.core.indexers import ( - check_array_indexer, - validate_indices, -) +from pandas.core.indexers import check_array_indexer from pandas.core.sorting import nargminmax NDArrayBackedExtensionArrayT = TypeVar( "NDArrayBackedExtensionArrayT", bound="NDArrayBackedExtensionArray" ) -if not pa_version_under1p01: - import pyarrow as pa - import pyarrow.compute as pc - if TYPE_CHECKING: from pandas._typing import ( NumpySorter, NumpyValueArrayLike, ) - from pandas import Series - def ravel_compat(meth: F) -> F: """ @@ -538,402 +517,3 @@ def _empty( arr = cls._from_sequence([], dtype=dtype) backing = np.empty(shape, dtype=arr._ndarray.dtype) return arr._from_backing_data(backing) - - -ArrowExtensionArrayT = TypeVar("ArrowExtensionArrayT", bound="ArrowExtensionArray") - - -class ArrowExtensionArray(ExtensionArray): - """ - Base class for ExtensionArray backed by Arrow array. 
- """ - - _data: pa.ChunkedArray - - def __init__(self, values: pa.ChunkedArray) -> None: - self._data = values - - def __arrow_array__(self, type=None): - """Convert myself to a pyarrow Array or ChunkedArray.""" - return self._data - - def equals(self, other) -> bool: - if not isinstance(other, ArrowExtensionArray): - return False - # I'm told that pyarrow makes __eq__ behave like pandas' equals; - # TODO: is this documented somewhere? - return self._data == other._data - - @property - def nbytes(self) -> int: - """ - The number of bytes needed to store this object in memory. - """ - return self._data.nbytes - - def __len__(self) -> int: - """ - Length of this array. - - Returns - ------- - length : int - """ - return len(self._data) - - def isna(self) -> npt.NDArray[np.bool_]: - """ - Boolean NumPy array indicating if each value is missing. - - This should return a 1-D array the same length as 'self'. - """ - if pa_version_under2p0: - return self._data.is_null().to_pandas().values - else: - return self._data.is_null().to_numpy() - - def copy(self: ArrowExtensionArrayT) -> ArrowExtensionArrayT: - """ - Return a shallow copy of the array. - - Underlying ChunkedArray is immutable, so a deep copy is unnecessary. 
- - Returns - ------- - type(self) - """ - return type(self)(self._data) - - @doc(ExtensionArray.factorize) - def factorize(self, na_sentinel: int = -1) -> tuple[np.ndarray, ExtensionArray]: - encoded = self._data.dictionary_encode() - indices = pa.chunked_array( - [c.indices for c in encoded.chunks], type=encoded.type.index_type - ).to_pandas() - if indices.dtype.kind == "f": - indices[np.isnan(indices)] = na_sentinel - indices = indices.astype(np.int64, copy=False) - - if encoded.num_chunks: - uniques = type(self)(encoded.chunk(0).dictionary) - else: - uniques = type(self)(pa.array([], type=encoded.type.value_type)) - - return indices.values, uniques - - def take( - self, - indices: TakeIndexer, - allow_fill: bool = False, - fill_value: Any = None, - ): - """ - Take elements from an array. - - Parameters - ---------- - indices : sequence of int or one-dimensional np.ndarray of int - Indices to be taken. - allow_fill : bool, default False - How to handle negative values in `indices`. - - * False: negative values in `indices` indicate positional indices - from the right (the default). This is similar to - :func:`numpy.take`. - - * True: negative values in `indices` indicate - missing values. These values are set to `fill_value`. Any other - other negative values raise a ``ValueError``. - - fill_value : any, optional - Fill value to use for NA-indices when `allow_fill` is True. - This may be ``None``, in which case the default NA value for - the type, ``self.dtype.na_value``, is used. - - For many ExtensionArrays, there will be two representations of - `fill_value`: a user-facing "boxed" scalar, and a low-level - physical NA value. `fill_value` should be the user-facing version, - and the implementation should handle translating that to the - physical version for processing the take if necessary. - - Returns - ------- - ExtensionArray - - Raises - ------ - IndexError - When the indices are out of bounds for the array. 
- ValueError - When `indices` contains negative values other than ``-1`` - and `allow_fill` is True. - - See Also - -------- - numpy.take - api.extensions.take - - Notes - ----- - ExtensionArray.take is called by ``Series.__getitem__``, ``.loc``, - ``iloc``, when `indices` is a sequence of values. Additionally, - it's called by :meth:`Series.reindex`, or any other method - that causes realignment, with a `fill_value`. - """ - # TODO: Remove once we got rid of the (indices < 0) check - if not is_array_like(indices): - indices_array = np.asanyarray(indices) - else: - # error: Incompatible types in assignment (expression has type - # "Sequence[int]", variable has type "ndarray") - indices_array = indices # type: ignore[assignment] - - if len(self._data) == 0 and (indices_array >= 0).any(): - raise IndexError("cannot do a non-empty take") - if indices_array.size > 0 and indices_array.max() >= len(self._data): - raise IndexError("out of bounds value in 'indices'.") - - if allow_fill: - fill_mask = indices_array < 0 - if fill_mask.any(): - validate_indices(indices_array, len(self._data)) - # TODO(ARROW-9433): Treat negative indices as NULL - indices_array = pa.array(indices_array, mask=fill_mask) - result = self._data.take(indices_array) - if isna(fill_value): - return type(self)(result) - # TODO: ArrowNotImplementedError: Function fill_null has no - # kernel matching input types (array[string], scalar[string]) - result = type(self)(result) - result[fill_mask] = fill_value - return result - # return type(self)(pc.fill_null(result, pa.scalar(fill_value))) - else: - # Nothing to fill - return type(self)(self._data.take(indices)) - else: # allow_fill=False - # TODO(ARROW-9432): Treat negative indices as indices from the right. 
- if (indices_array < 0).any(): - # Don't modify in-place - indices_array = np.copy(indices_array) - indices_array[indices_array < 0] += len(self._data) - return type(self)(self._data.take(indices_array)) - - def value_counts(self, dropna: bool = True) -> Series: - """ - Return a Series containing counts of each unique value. - - Parameters - ---------- - dropna : bool, default True - Don't include counts of missing values. - - Returns - ------- - counts : Series - - See Also - -------- - Series.value_counts - """ - from pandas import ( - Index, - Series, - ) - - vc = self._data.value_counts() - - values = vc.field(0) - counts = vc.field(1) - if dropna and self._data.null_count > 0: - mask = values.is_valid() - values = values.filter(mask) - counts = counts.filter(mask) - - # No missing values so we can adhere to the interface and return a numpy array. - counts = np.array(counts) - - index = Index(type(self)(values)) - - return Series(counts, index=index).astype("Int64") - - @classmethod - def _concat_same_type( - cls: type[ArrowExtensionArrayT], to_concat - ) -> ArrowExtensionArrayT: - """ - Concatenate multiple ArrowExtensionArrays. - - Parameters - ---------- - to_concat : sequence of ArrowExtensionArrays - - Returns - ------- - ArrowExtensionArray - """ - import pyarrow as pa - - chunks = [array for ea in to_concat for array in ea._data.iterchunks()] - arr = pa.chunked_array(chunks) - return cls(arr) - - def __setitem__(self, key: int | slice | np.ndarray, value: Any) -> None: - """Set one or more values inplace. - - Parameters - ---------- - key : int, ndarray, or slice - When called from, e.g. ``Series.__setitem__``, ``key`` will be - one of - - * scalar int - * ndarray of integers. - * boolean ndarray - * slice object - - value : ExtensionDtype.type, Sequence[ExtensionDtype.type], or object - value or values to be set of ``key``. 
- - Returns - ------- - None - """ - key = check_array_indexer(self, key) - indices = self._indexing_key_to_indices(key) - value = self._maybe_convert_setitem_value(value) - - argsort = np.argsort(indices) - indices = indices[argsort] - - if is_scalar(value): - value = np.broadcast_to(value, len(self)) - elif len(indices) != len(value): - raise ValueError("Length of indexer and values mismatch") - else: - value = np.asarray(value)[argsort] - - self._data = self._set_via_chunk_iteration(indices=indices, value=value) - - def _indexing_key_to_indices( - self, key: int | slice | np.ndarray - ) -> npt.NDArray[np.intp]: - """ - Convert indexing key for self into positional indices. - - Parameters - ---------- - key : int | slice | np.ndarray - - Returns - ------- - npt.NDArray[np.intp] - """ - n = len(self) - if isinstance(key, slice): - indices = np.arange(n)[key] - elif is_integer(key): - indices = np.arange(n)[[key]] # type: ignore[index] - elif is_bool_dtype(key): - key = np.asarray(key) - if len(key) != n: - raise ValueError("Length of indexer and values mismatch") - indices = key.nonzero()[0] - else: - key = np.asarray(key) - indices = np.arange(n)[key] - return indices - - def _maybe_convert_setitem_value(self, value): - """Maybe convert value to be pyarrow compatible.""" - raise NotImplementedError() - - def _set_via_chunk_iteration( - self, indices: npt.NDArray[np.intp], value: npt.NDArray[Any] - ) -> pa.ChunkedArray: - """ - Loop through the array chunks and set the new values while - leaving the chunking layout unchanged. - - Parameters - ---------- - indices : npt.NDArray[np.intp] - Position indices for the underlying ChunkedArray. - - value : ExtensionDtype.type, Sequence[ExtensionDtype.type], or object - value or values to be set of ``key``. - - Notes - ----- - Assumes that indices is sorted. Caller is responsible for sorting. 
- """ - new_data = [] - stop = 0 - for chunk in self._data.iterchunks(): - start, stop = stop, stop + len(chunk) - if len(indices) == 0 or stop <= indices[0]: - new_data.append(chunk) - else: - n = int(np.searchsorted(indices, stop, side="left")) - c_ind = indices[:n] - start - indices = indices[n:] - n = len(c_ind) - c_value, value = value[:n], value[n:] - new_data.append(self._replace_with_indices(chunk, c_ind, c_value)) - return pa.chunked_array(new_data) - - @classmethod - def _replace_with_indices( - cls, - chunk: pa.Array, - indices: npt.NDArray[np.intp], - value: npt.NDArray[Any], - ) -> pa.Array: - """ - Replace items selected with a set of positional indices. - - Analogous to pyarrow.compute.replace_with_mask, except that replacement - positions are identified via indices rather than a mask. - - Parameters - ---------- - chunk : pa.Array - indices : npt.NDArray[np.intp] - value : npt.NDArray[Any] - Replacement value(s). - - Returns - ------- - pa.Array - """ - n = len(indices) - - if n == 0: - return chunk - - start, stop = indices[[0, -1]] - - if (stop - start) == (n - 1): - # fast path for a contiguous set of indices - arrays = [ - chunk[:start], - pa.array(value, type=chunk.type), - chunk[stop + 1 :], - ] - arrays = [arr for arr in arrays if len(arr)] - if len(arrays) == 1: - return arrays[0] - return pa.concat_arrays(arrays) - - mask = np.zeros(len(chunk), dtype=np.bool_) - mask[indices] = True - - if pa_version_under5p0: - arr = chunk.to_numpy(zero_copy_only=False) - arr[mask] = value - return pa.array(arr, type=chunk.type) - - if isna(value).all(): - return pc.if_else(mask, None, chunk) - - return pc.replace_with_mask(chunk, mask, value) diff --git a/pandas/core/arrays/arrow/__init__.py b/pandas/core/arrays/arrow/__init__.py new file mode 100644 index 0000000000000..6bdf29e38ac62 --- /dev/null +++ b/pandas/core/arrays/arrow/__init__.py @@ -0,0 +1,3 @@ +# flake8: noqa: F401 + +from pandas.core.arrays.arrow.array import ArrowExtensionArray diff --git 
a/pandas/core/arrays/_arrow_utils.py b/pandas/core/arrays/arrow/_arrow_utils.py similarity index 100% rename from pandas/core/arrays/_arrow_utils.py rename to pandas/core/arrays/arrow/_arrow_utils.py diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py new file mode 100644 index 0000000000000..0a48638f5cf05 --- /dev/null +++ b/pandas/core/arrays/arrow/array.py @@ -0,0 +1,439 @@ +from __future__ import annotations + +from typing import ( + TYPE_CHECKING, + Any, + TypeVar, +) + +import numpy as np + +from pandas._typing import ( + TakeIndexer, + npt, +) +from pandas.compat import ( + pa_version_under1p01, + pa_version_under2p0, + pa_version_under5p0, +) +from pandas.util._decorators import doc + +from pandas.core.dtypes.common import ( + is_array_like, + is_bool_dtype, + is_integer, + is_scalar, +) +from pandas.core.dtypes.missing import isna + +from pandas.core.arrays.base import ExtensionArray +from pandas.core.indexers import ( + check_array_indexer, + validate_indices, +) + +if not pa_version_under1p01: + import pyarrow as pa + import pyarrow.compute as pc + +if TYPE_CHECKING: + from pandas import Series + +ArrowExtensionArrayT = TypeVar("ArrowExtensionArrayT", bound="ArrowExtensionArray") + + +class ArrowExtensionArray(ExtensionArray): + """ + Base class for ExtensionArray backed by Arrow array. + """ + + _data: pa.ChunkedArray + + def __init__(self, values: pa.ChunkedArray) -> None: + self._data = values + + def __arrow_array__(self, type=None): + """Convert myself to a pyarrow Array or ChunkedArray.""" + return self._data + + def equals(self, other) -> bool: + if not isinstance(other, ArrowExtensionArray): + return False + # I'm told that pyarrow makes __eq__ behave like pandas' equals; + # TODO: is this documented somewhere? + return self._data == other._data + + @property + def nbytes(self) -> int: + """ + The number of bytes needed to store this object in memory. 
+ """ + return self._data.nbytes + + def __len__(self) -> int: + """ + Length of this array. + + Returns + ------- + length : int + """ + return len(self._data) + + def isna(self) -> npt.NDArray[np.bool_]: + """ + Boolean NumPy array indicating if each value is missing. + + This should return a 1-D array the same length as 'self'. + """ + if pa_version_under2p0: + return self._data.is_null().to_pandas().values + else: + return self._data.is_null().to_numpy() + + def copy(self: ArrowExtensionArrayT) -> ArrowExtensionArrayT: + """ + Return a shallow copy of the array. + + Underlying ChunkedArray is immutable, so a deep copy is unnecessary. + + Returns + ------- + type(self) + """ + return type(self)(self._data) + + @doc(ExtensionArray.factorize) + def factorize(self, na_sentinel: int = -1) -> tuple[np.ndarray, ExtensionArray]: + encoded = self._data.dictionary_encode() + indices = pa.chunked_array( + [c.indices for c in encoded.chunks], type=encoded.type.index_type + ).to_pandas() + if indices.dtype.kind == "f": + indices[np.isnan(indices)] = na_sentinel + indices = indices.astype(np.int64, copy=False) + + if encoded.num_chunks: + uniques = type(self)(encoded.chunk(0).dictionary) + else: + uniques = type(self)(pa.array([], type=encoded.type.value_type)) + + return indices.values, uniques + + def take( + self, + indices: TakeIndexer, + allow_fill: bool = False, + fill_value: Any = None, + ): + """ + Take elements from an array. + + Parameters + ---------- + indices : sequence of int or one-dimensional np.ndarray of int + Indices to be taken. + allow_fill : bool, default False + How to handle negative values in `indices`. + + * False: negative values in `indices` indicate positional indices + from the right (the default). This is similar to + :func:`numpy.take`. + + * True: negative values in `indices` indicate + missing values. These values are set to `fill_value`. Any other + other negative values raise a ``ValueError``. 
+ + fill_value : any, optional + Fill value to use for NA-indices when `allow_fill` is True. + This may be ``None``, in which case the default NA value for + the type, ``self.dtype.na_value``, is used. + + For many ExtensionArrays, there will be two representations of + `fill_value`: a user-facing "boxed" scalar, and a low-level + physical NA value. `fill_value` should be the user-facing version, + and the implementation should handle translating that to the + physical version for processing the take if necessary. + + Returns + ------- + ExtensionArray + + Raises + ------ + IndexError + When the indices are out of bounds for the array. + ValueError + When `indices` contains negative values other than ``-1`` + and `allow_fill` is True. + + See Also + -------- + numpy.take + api.extensions.take + + Notes + ----- + ExtensionArray.take is called by ``Series.__getitem__``, ``.loc``, + ``iloc``, when `indices` is a sequence of values. Additionally, + it's called by :meth:`Series.reindex`, or any other method + that causes realignment, with a `fill_value`. 
+ """ + # TODO: Remove once we got rid of the (indices < 0) check + if not is_array_like(indices): + indices_array = np.asanyarray(indices) + else: + # error: Incompatible types in assignment (expression has type + # "Sequence[int]", variable has type "ndarray") + indices_array = indices # type: ignore[assignment] + + if len(self._data) == 0 and (indices_array >= 0).any(): + raise IndexError("cannot do a non-empty take") + if indices_array.size > 0 and indices_array.max() >= len(self._data): + raise IndexError("out of bounds value in 'indices'.") + + if allow_fill: + fill_mask = indices_array < 0 + if fill_mask.any(): + validate_indices(indices_array, len(self._data)) + # TODO(ARROW-9433): Treat negative indices as NULL + indices_array = pa.array(indices_array, mask=fill_mask) + result = self._data.take(indices_array) + if isna(fill_value): + return type(self)(result) + # TODO: ArrowNotImplementedError: Function fill_null has no + # kernel matching input types (array[string], scalar[string]) + result = type(self)(result) + result[fill_mask] = fill_value + return result + # return type(self)(pc.fill_null(result, pa.scalar(fill_value))) + else: + # Nothing to fill + return type(self)(self._data.take(indices)) + else: # allow_fill=False + # TODO(ARROW-9432): Treat negative indices as indices from the right. + if (indices_array < 0).any(): + # Don't modify in-place + indices_array = np.copy(indices_array) + indices_array[indices_array < 0] += len(self._data) + return type(self)(self._data.take(indices_array)) + + def value_counts(self, dropna: bool = True) -> Series: + """ + Return a Series containing counts of each unique value. + + Parameters + ---------- + dropna : bool, default True + Don't include counts of missing values. 
+ + Returns + ------- + counts : Series + + See Also + -------- + Series.value_counts + """ + from pandas import ( + Index, + Series, + ) + + vc = self._data.value_counts() + + values = vc.field(0) + counts = vc.field(1) + if dropna and self._data.null_count > 0: + mask = values.is_valid() + values = values.filter(mask) + counts = counts.filter(mask) + + # No missing values so we can adhere to the interface and return a numpy array. + counts = np.array(counts) + + index = Index(type(self)(values)) + + return Series(counts, index=index).astype("Int64") + + @classmethod + def _concat_same_type( + cls: type[ArrowExtensionArrayT], to_concat + ) -> ArrowExtensionArrayT: + """ + Concatenate multiple ArrowExtensionArrays. + + Parameters + ---------- + to_concat : sequence of ArrowExtensionArrays + + Returns + ------- + ArrowExtensionArray + """ + import pyarrow as pa + + chunks = [array for ea in to_concat for array in ea._data.iterchunks()] + arr = pa.chunked_array(chunks) + return cls(arr) + + def __setitem__(self, key: int | slice | np.ndarray, value: Any) -> None: + """Set one or more values inplace. + + Parameters + ---------- + key : int, ndarray, or slice + When called from, e.g. ``Series.__setitem__``, ``key`` will be + one of + + * scalar int + * ndarray of integers. + * boolean ndarray + * slice object + + value : ExtensionDtype.type, Sequence[ExtensionDtype.type], or object + value or values to be set of ``key``. 
+ + Returns + ------- + None + """ + key = check_array_indexer(self, key) + indices = self._indexing_key_to_indices(key) + value = self._maybe_convert_setitem_value(value) + + argsort = np.argsort(indices) + indices = indices[argsort] + + if is_scalar(value): + value = np.broadcast_to(value, len(self)) + elif len(indices) != len(value): + raise ValueError("Length of indexer and values mismatch") + else: + value = np.asarray(value)[argsort] + + self._data = self._set_via_chunk_iteration(indices=indices, value=value) + + def _indexing_key_to_indices( + self, key: int | slice | np.ndarray + ) -> npt.NDArray[np.intp]: + """ + Convert indexing key for self into positional indices. + + Parameters + ---------- + key : int | slice | np.ndarray + + Returns + ------- + npt.NDArray[np.intp] + """ + n = len(self) + if isinstance(key, slice): + indices = np.arange(n)[key] + elif is_integer(key): + indices = np.arange(n)[[key]] # type: ignore[index] + elif is_bool_dtype(key): + key = np.asarray(key) + if len(key) != n: + raise ValueError("Length of indexer and values mismatch") + indices = key.nonzero()[0] + else: + key = np.asarray(key) + indices = np.arange(n)[key] + return indices + + def _maybe_convert_setitem_value(self, value): + """Maybe convert value to be pyarrow compatible.""" + raise NotImplementedError() + + def _set_via_chunk_iteration( + self, indices: npt.NDArray[np.intp], value: npt.NDArray[Any] + ) -> pa.ChunkedArray: + """ + Loop through the array chunks and set the new values while + leaving the chunking layout unchanged. + + Parameters + ---------- + indices : npt.NDArray[np.intp] + Position indices for the underlying ChunkedArray. + + value : ExtensionDtype.type, Sequence[ExtensionDtype.type], or object + value or values to be set of ``key``. + + Notes + ----- + Assumes that indices is sorted. Caller is responsible for sorting. 
+ """ + new_data = [] + stop = 0 + for chunk in self._data.iterchunks(): + start, stop = stop, stop + len(chunk) + if len(indices) == 0 or stop <= indices[0]: + new_data.append(chunk) + else: + n = int(np.searchsorted(indices, stop, side="left")) + c_ind = indices[:n] - start + indices = indices[n:] + n = len(c_ind) + c_value, value = value[:n], value[n:] + new_data.append(self._replace_with_indices(chunk, c_ind, c_value)) + return pa.chunked_array(new_data) + + @classmethod + def _replace_with_indices( + cls, + chunk: pa.Array, + indices: npt.NDArray[np.intp], + value: npt.NDArray[Any], + ) -> pa.Array: + """ + Replace items selected with a set of positional indices. + + Analogous to pyarrow.compute.replace_with_mask, except that replacement + positions are identified via indices rather than a mask. + + Parameters + ---------- + chunk : pa.Array + indices : npt.NDArray[np.intp] + value : npt.NDArray[Any] + Replacement value(s). + + Returns + ------- + pa.Array + """ + n = len(indices) + + if n == 0: + return chunk + + start, stop = indices[[0, -1]] + + if (stop - start) == (n - 1): + # fast path for a contiguous set of indices + arrays = [ + chunk[:start], + pa.array(value, type=chunk.type), + chunk[stop + 1 :], + ] + arrays = [arr for arr in arrays if len(arr)] + if len(arrays) == 1: + return arrays[0] + return pa.concat_arrays(arrays) + + mask = np.zeros(len(chunk), dtype=np.bool_) + mask[indices] = True + + if pa_version_under5p0: + arr = chunk.to_numpy(zero_copy_only=False) + arr[mask] = value + return pa.array(arr, type=chunk.type) + + if isna(value).all(): + return pc.if_else(mask, None, chunk) + + return pc.replace_with_mask(chunk, mask, value) diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py index e14eec419377c..679feaca71024 100644 --- a/pandas/core/arrays/interval.py +++ b/pandas/core/arrays/interval.py @@ -1445,7 +1445,7 @@ def __arrow_array__(self, type=None): """ import pyarrow - from pandas.core.arrays._arrow_utils import 
ArrowIntervalType + from pandas.core.arrays.arrow._arrow_utils import ArrowIntervalType try: subtype = pyarrow.from_numpy_dtype(self.dtype.subtype) diff --git a/pandas/core/arrays/numeric.py b/pandas/core/arrays/numeric.py index 98d43db9904c8..cdffd57df9a84 100644 --- a/pandas/core/arrays/numeric.py +++ b/pandas/core/arrays/numeric.py @@ -70,7 +70,9 @@ def __from_arrow__( """ import pyarrow - from pandas.core.arrays._arrow_utils import pyarrow_array_to_numpy_and_mask + from pandas.core.arrays.arrow._arrow_utils import ( + pyarrow_array_to_numpy_and_mask, + ) array_class = self.construct_array_type() diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py index 7d0b30a1abb60..065e597537be9 100644 --- a/pandas/core/arrays/period.py +++ b/pandas/core/arrays/period.py @@ -366,7 +366,7 @@ def __arrow_array__(self, type=None): """ import pyarrow - from pandas.core.arrays._arrow_utils import ArrowPeriodType + from pandas.core.arrays.arrow._arrow_utils import ArrowPeriodType if type is not None: if pyarrow.types.is_integer(type): diff --git a/pandas/core/arrays/string_arrow.py b/pandas/core/arrays/string_arrow.py index 154c143ac89df..b8136402b00e6 100644 --- a/pandas/core/arrays/string_arrow.py +++ b/pandas/core/arrays/string_arrow.py @@ -42,7 +42,7 @@ from pandas.core.dtypes.missing import isna from pandas.core.arraylike import OpsMixin -from pandas.core.arrays._mixins import ArrowExtensionArray +from pandas.core.arrays.arrow import ArrowExtensionArray from pandas.core.arrays.boolean import BooleanDtype from pandas.core.arrays.integer import Int64Dtype from pandas.core.arrays.numeric import NumericDtype diff --git a/pandas/core/dtypes/dtypes.py b/pandas/core/dtypes/dtypes.py index 66eed0a75fa19..58e91f46dff43 100644 --- a/pandas/core/dtypes/dtypes.py +++ b/pandas/core/dtypes/dtypes.py @@ -997,7 +997,9 @@ def __from_arrow__( import pyarrow from pandas.core.arrays import PeriodArray - from pandas.core.arrays._arrow_utils import 
pyarrow_array_to_numpy_and_mask + from pandas.core.arrays.arrow._arrow_utils import ( + pyarrow_array_to_numpy_and_mask, + ) if isinstance(array, pyarrow.Array): chunks = [array] diff --git a/pandas/io/parquet.py b/pandas/io/parquet.py index 27b0b3d08ad53..cbf3bcc9278d5 100644 --- a/pandas/io/parquet.py +++ b/pandas/io/parquet.py @@ -151,7 +151,7 @@ def __init__(self) -> None: import pyarrow.parquet # import utils to register the pyarrow extension types - import pandas.core.arrays._arrow_utils # noqa:F401 + import pandas.core.arrays.arrow._arrow_utils # noqa:F401 self.api = pyarrow diff --git a/pandas/tests/arrays/interval/test_interval.py b/pandas/tests/arrays/interval/test_interval.py index 2b5712e76e8cc..eaf86f5d521ae 100644 --- a/pandas/tests/arrays/interval/test_interval.py +++ b/pandas/tests/arrays/interval/test_interval.py @@ -248,7 +248,7 @@ def test_min_max(self, left_right_dtypes, index_or_series_or_array): def test_arrow_extension_type(): import pyarrow as pa - from pandas.core.arrays._arrow_utils import ArrowIntervalType + from pandas.core.arrays.arrow._arrow_utils import ArrowIntervalType p1 = ArrowIntervalType(pa.int64(), "left") p2 = ArrowIntervalType(pa.int64(), "left") @@ -265,7 +265,7 @@ def test_arrow_extension_type(): def test_arrow_array(): import pyarrow as pa - from pandas.core.arrays._arrow_utils import ArrowIntervalType + from pandas.core.arrays.arrow._arrow_utils import ArrowIntervalType intervals = pd.interval_range(1, 5, freq=1).array @@ -295,7 +295,7 @@ def test_arrow_array(): def test_arrow_array_missing(): import pyarrow as pa - from pandas.core.arrays._arrow_utils import ArrowIntervalType + from pandas.core.arrays.arrow._arrow_utils import ArrowIntervalType arr = IntervalArray.from_breaks([0.0, 1.0, 2.0, 3.0]) arr[1] = None @@ -330,7 +330,7 @@ def test_arrow_array_missing(): def test_arrow_table_roundtrip(breaks): import pyarrow as pa - from pandas.core.arrays._arrow_utils import ArrowIntervalType + from 
pandas.core.arrays.arrow._arrow_utils import ArrowIntervalType arr = IntervalArray.from_breaks(breaks) arr[1] = None diff --git a/pandas/tests/arrays/masked/test_arrow_compat.py b/pandas/tests/arrays/masked/test_arrow_compat.py index 051762511a6ca..4ccc54636eaee 100644 --- a/pandas/tests/arrays/masked/test_arrow_compat.py +++ b/pandas/tests/arrays/masked/test_arrow_compat.py @@ -6,7 +6,7 @@ pa = pytest.importorskip("pyarrow", minversion="1.0.1") -from pandas.core.arrays._arrow_utils import pyarrow_array_to_numpy_and_mask +from pandas.core.arrays.arrow._arrow_utils import pyarrow_array_to_numpy_and_mask arrays = [pd.array([1, 2, 3, None], dtype=dtype) for dtype in tm.ALL_INT_EA_DTYPES] arrays += [pd.array([0.1, 0.2, 0.3, None], dtype=dtype) for dtype in tm.FLOAT_EA_DTYPES] diff --git a/pandas/tests/arrays/period/test_arrow_compat.py b/pandas/tests/arrays/period/test_arrow_compat.py index 560299a4a47f5..7d2d2daed3497 100644 --- a/pandas/tests/arrays/period/test_arrow_compat.py +++ b/pandas/tests/arrays/period/test_arrow_compat.py @@ -13,7 +13,7 @@ def test_arrow_extension_type(): - from pandas.core.arrays._arrow_utils import ArrowPeriodType + from pandas.core.arrays.arrow._arrow_utils import ArrowPeriodType p1 = ArrowPeriodType("D") p2 = ArrowPeriodType("D") @@ -34,7 +34,7 @@ def test_arrow_extension_type(): ], ) def test_arrow_array(data, freq): - from pandas.core.arrays._arrow_utils import ArrowPeriodType + from pandas.core.arrays.arrow._arrow_utils import ArrowPeriodType periods = period_array(data, freq=freq) result = pa.array(periods) @@ -57,7 +57,7 @@ def test_arrow_array(data, freq): def test_arrow_array_missing(): - from pandas.core.arrays._arrow_utils import ArrowPeriodType + from pandas.core.arrays.arrow._arrow_utils import ArrowPeriodType arr = PeriodArray([1, 2, 3], freq="D") arr[1] = pd.NaT @@ -70,7 +70,7 @@ def test_arrow_array_missing(): def test_arrow_table_roundtrip(): - from pandas.core.arrays._arrow_utils import ArrowPeriodType + from 
pandas.core.arrays.arrow._arrow_utils import ArrowPeriodType arr = PeriodArray([1, 2, 3], freq="D") arr[1] = pd.NaT @@ -91,7 +91,7 @@ def test_arrow_table_roundtrip(): def test_arrow_load_from_zero_chunks(): # GH-41040 - from pandas.core.arrays._arrow_utils import ArrowPeriodType + from pandas.core.arrays.arrow._arrow_utils import ArrowPeriodType arr = PeriodArray([], freq="D") df = pd.DataFrame({"a": arr}) diff --git a/pandas/tests/extension/arrow/arrays.py b/pandas/tests/extension/arrow/arrays.py index 33eef35153bce..d19a6245809be 100644 --- a/pandas/tests/extension/arrow/arrays.py +++ b/pandas/tests/extension/arrow/arrays.py @@ -24,7 +24,7 @@ ) from pandas.api.types import is_scalar from pandas.core.arraylike import OpsMixin -from pandas.core.arrays._mixins import ArrowExtensionArray as _ArrowExtensionArray +from pandas.core.arrays.arrow import ArrowExtensionArray as _ArrowExtensionArray from pandas.core.construction import extract_array
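The `take` hunk moved in the diff above follows the standard ExtensionArray contract: with `allow_fill=True`, only `-1` is a legal negative index and marks a position to receive `fill_value`; with `allow_fill=False`, negative indices count from the end, as in `numpy.take`. A minimal pure-Python sketch of just that contract (plain lists stand in for the pyarrow `ChunkedArray`; this `take` is illustrative, not the pandas API):

```python
def take(values, indices, allow_fill=False, fill_value=None):
    # Sketch of the ExtensionArray.take contract; a plain list stands in
    # for the pyarrow-backed storage used by ArrowExtensionArray.
    if allow_fill:
        out = []
        for i in indices:
            if i == -1:
                out.append(fill_value)  # -1 marks a fill position
            elif i < -1:
                raise ValueError(
                    "only -1 is allowed as a negative index when allow_fill=True"
                )
            else:
                out.append(values[i])
        return out
    # allow_fill=False: negative indices count from the end, like numpy.take
    return [values[i] for i in indices]

print(take(["a", "b", "c"], [0, -1, 1], allow_fill=True))  # → ['a', None, 'b']
print(take(["a", "b", "c"], [0, -1, 1]))                   # → ['a', 'c', 'b']
```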
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).

* `pandas/core/arrays/_arrow_utils.py` -> `pandas/core/arrays/arrow/_arrow_utils.py`
* `ArrowExtensionArray`: `pandas/core/arrays/_mixins.py` -> `pandas/core/arrays/arrow/array.py`
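The `_set_via_chunk_iteration` method relocated by this PR walks the chunks once, peeling off the sorted positional indices that land in each chunk and rebasing them to chunk-local positions. That index bookkeeping can be sketched in pure Python (hypothetical `split_indices_by_chunk` helper; the real code operates on `pa.ChunkedArray` and replaces values as it goes):

```python
import bisect

def split_indices_by_chunk(chunk_lengths, indices):
    # `indices` must be sorted (the caller sorts, as noted in the docstring).
    # For each chunk, take the indices that fall inside its [start, stop)
    # range and rebase them to chunk-local positions.
    out = []
    stop = 0
    for length in chunk_lengths:
        start, stop = stop, stop + length
        n = bisect.bisect_left(indices, stop)
        out.append([i - start for i in indices[:n]])
        indices = indices[n:]
    return out

# Chunks of length 3, 2, 4 cover global positions 0-2, 3-4, 5-8.
print(split_indices_by_chunk([3, 2, 4], [0, 2, 3, 7]))  # → [[0, 2], [0], [2]]
```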
https://api.github.com/repos/pandas-dev/pandas/pulls/46591
2022-04-01T04:36:50Z
2022-04-03T02:34:43Z
2022-04-03T02:34:43Z
2022-07-28T17:57:34Z
DOC: Fix errors detected by sphinx-lint
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index 04e148453387b..c6c7acf5b3823 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -70,6 +70,10 @@ repos: - id: rst-inline-touching-normal types: [text] # overwrite types: [rst] types_or: [python, rst] +- repo: https://github.com/sphinx-contrib/sphinx-lint + rev: v0.2 + hooks: + - id: sphinx-lint - repo: https://github.com/asottile/yesqa rev: v1.3.0 hooks: diff --git a/doc/source/development/contributing_codebase.rst b/doc/source/development/contributing_codebase.rst index 61e3bcd44bea8..2fa6bf62ba80f 100644 --- a/doc/source/development/contributing_codebase.rst +++ b/doc/source/development/contributing_codebase.rst @@ -223,7 +223,7 @@ In some cases you may be tempted to use ``cast`` from the typing module when you ... else: # Reasonably only str objects would reach this but... obj = cast(str, obj) # Mypy complains without this! - return obj.upper() + return obj.upper() The limitation here is that while a human can reasonably understand that ``is_number`` would catch the ``int`` and ``float`` types mypy cannot make that same inference just yet (see `mypy #5206 <https://github.com/python/mypy/issues/5206>`_. While the above works, the use of ``cast`` is **strongly discouraged**. Where applicable a refactor of the code to appease static analysis is preferable diff --git a/doc/source/development/contributing_environment.rst b/doc/source/development/contributing_environment.rst index fb27d07cfb18f..c881770aa7584 100644 --- a/doc/source/development/contributing_environment.rst +++ b/doc/source/development/contributing_environment.rst @@ -85,10 +85,10 @@ You will need `Build Tools for Visual Studio 2019 <https://visualstudio.microsoft.com/downloads/>`_. .. warning:: - You DO NOT need to install Visual Studio 2019. - You only need "Build Tools for Visual Studio 2019" found by - scrolling down to "All downloads" -> "Tools for Visual Studio 2019". 
- In the installer, select the "C++ build tools" workload. + You DO NOT need to install Visual Studio 2019. + You only need "Build Tools for Visual Studio 2019" found by + scrolling down to "All downloads" -> "Tools for Visual Studio 2019". + In the installer, select the "C++ build tools" workload. You can install the necessary components on the commandline using `vs_buildtools.exe <https://download.visualstudio.microsoft.com/download/pr/9a26f37e-6001-429b-a5db-c5455b93953c/460d80ab276046de2455a4115cc4e2f1e6529c9e6cb99501844ecafd16c619c4/vs_BuildTools.exe>`_: diff --git a/doc/source/ecosystem.rst b/doc/source/ecosystem.rst index 15fa58f8d804a..256c3ee36e80c 100644 --- a/doc/source/ecosystem.rst +++ b/doc/source/ecosystem.rst @@ -540,7 +540,7 @@ Pandas-Genomics provides extension types, extension arrays, and extension access `Pint-Pandas`_ ~~~~~~~~~~~~~~ -``Pint-Pandas <https://github.com/hgrecco/pint-pandas>`` provides an extension type for +`Pint-Pandas <https://github.com/hgrecco/pint-pandas>`_ provides an extension type for storing numeric arrays with units. These arrays can be stored inside pandas' Series and DataFrame. Operations between Series and DataFrame columns which use pint's extension array are then units aware. @@ -548,7 +548,7 @@ use pint's extension array are then units aware. `Text Extensions for Pandas`_ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -``Text Extensions for Pandas <https://ibm.biz/text-extensions-for-pandas>`` +`Text Extensions for Pandas <https://ibm.biz/text-extensions-for-pandas>`_ provides extension types to cover common data structures for representing natural language data, plus library integrations that convert the outputs of popular natural language processing libraries into Pandas DataFrames. 
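The ecosystem.rst hunk above fixes a common reStructuredText mistake that sphinx-lint flags: an external link must be written with single backticks and a trailing underscore, as in `` `label <url>`_ ``, while double backticks render the whole thing as a literal. An illustrative regex (an assumption for demonstration, not sphinx-lint's actual rule engine) that catches the broken form:

```python
import re

# Illustrative check: a double-backtick span containing "label <url>" is
# almost certainly a mis-written external link, the error fixed above.
BAD_RST_LINK = re.compile(r"``[^`<>]+ <[^`<>]+>``")

bad = "``Pint-Pandas <https://github.com/hgrecco/pint-pandas>``"
good = "`Pint-Pandas <https://github.com/hgrecco/pint-pandas>`_"
print(bool(BAD_RST_LINK.search(bad)))   # → True
print(bool(BAD_RST_LINK.search(good)))  # → False
```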
diff --git a/doc/source/user_guide/dsintro.rst b/doc/source/user_guide/dsintro.rst index ba8ef0d86d130..571f8980070af 100644 --- a/doc/source/user_guide/dsintro.rst +++ b/doc/source/user_guide/dsintro.rst @@ -678,7 +678,7 @@ Boolean operators operate element-wise as well: Transposing ~~~~~~~~~~~ -To transpose, access the ``T`` attribute or :meth:`DataFrame.transpose``, +To transpose, access the ``T`` attribute or :meth:`DataFrame.transpose`, similar to an ndarray: .. ipython:: python diff --git a/doc/source/user_guide/groupby.rst b/doc/source/user_guide/groupby.rst index bc772b5dab66c..f381d72069775 100644 --- a/doc/source/user_guide/groupby.rst +++ b/doc/source/user_guide/groupby.rst @@ -539,19 +539,19 @@ Some common aggregating functions are tabulated below: :widths: 20, 80 :delim: ; - :meth:`~pd.core.groupby.DataFrameGroupBy.mean`;Compute mean of groups - :meth:`~pd.core.groupby.DataFrameGroupBy.sum`;Compute sum of group values - :meth:`~pd.core.groupby.DataFrameGroupBy.size`;Compute group sizes - :meth:`~pd.core.groupby.DataFrameGroupBy.count`;Compute count of group - :meth:`~pd.core.groupby.DataFrameGroupBy.std`;Standard deviation of groups - :meth:`~pd.core.groupby.DataFrameGroupBy.var`;Compute variance of groups - :meth:`~pd.core.groupby.DataFrameGroupBy.sem`;Standard error of the mean of groups - :meth:`~pd.core.groupby.DataFrameGroupBy.describe`;Generates descriptive statistics - :meth:`~pd.core.groupby.DataFrameGroupBy.first`;Compute first of group values - :meth:`~pd.core.groupby.DataFrameGroupBy.last`;Compute last of group values - :meth:`~pd.core.groupby.DataFrameGroupBy.nth`;Take nth value, or a subset if n is a list - :meth:`~pd.core.groupby.DataFrameGroupBy.min`;Compute min of group values - :meth:`~pd.core.groupby.DataFrameGroupBy.max`;Compute max of group values + :meth:`~pd.core.groupby.DataFrameGroupBy.mean`;Compute mean of groups + :meth:`~pd.core.groupby.DataFrameGroupBy.sum`;Compute sum of group values + 
:meth:`~pd.core.groupby.DataFrameGroupBy.size`;Compute group sizes + :meth:`~pd.core.groupby.DataFrameGroupBy.count`;Compute count of group + :meth:`~pd.core.groupby.DataFrameGroupBy.std`;Standard deviation of groups + :meth:`~pd.core.groupby.DataFrameGroupBy.var`;Compute variance of groups + :meth:`~pd.core.groupby.DataFrameGroupBy.sem`;Standard error of the mean of groups + :meth:`~pd.core.groupby.DataFrameGroupBy.describe`;Generates descriptive statistics + :meth:`~pd.core.groupby.DataFrameGroupBy.first`;Compute first of group values + :meth:`~pd.core.groupby.DataFrameGroupBy.last`;Compute last of group values + :meth:`~pd.core.groupby.DataFrameGroupBy.nth`;Take nth value, or a subset if n is a list + :meth:`~pd.core.groupby.DataFrameGroupBy.min`;Compute min of group values + :meth:`~pd.core.groupby.DataFrameGroupBy.max`;Compute max of group values The aggregating functions above will exclude NA values. Any function which diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst index 3a8583d395cc4..4ed71913d7b4d 100644 --- a/doc/source/user_guide/io.rst +++ b/doc/source/user_guide/io.rst @@ -5695,9 +5695,9 @@ for an explanation of how the database connection is handled. .. warning:: - When you open a connection to a database you are also responsible for closing it. - Side effects of leaving a connection open may include locking the database or - other breaking behaviour. + When you open a connection to a database you are also responsible for closing it. + Side effects of leaving a connection open may include locking the database or + other breaking behaviour. Writing DataFrames '''''''''''''''''' diff --git a/doc/source/user_guide/timeseries.rst b/doc/source/user_guide/timeseries.rst index b524205ed7679..582620d8b6479 100644 --- a/doc/source/user_guide/timeseries.rst +++ b/doc/source/user_guide/timeseries.rst @@ -2405,9 +2405,9 @@ you can use the ``tz_convert`` method. .. warning:: - Be wary of conversions between libraries. 
For some time zones, ``pytz`` and ``dateutil`` have different - definitions of the zone. This is more of a problem for unusual time zones than for - 'standard' zones like ``US/Eastern``. + Be wary of conversions between libraries. For some time zones, ``pytz`` and ``dateutil`` have different + definitions of the zone. This is more of a problem for unusual time zones than for + 'standard' zones like ``US/Eastern``. .. warning:: diff --git a/doc/source/user_guide/window.rst b/doc/source/user_guide/window.rst index f8c1f89be5d41..2407fd3113830 100644 --- a/doc/source/user_guide/window.rst +++ b/doc/source/user_guide/window.rst @@ -624,13 +624,13 @@ average of ``3, NaN, 5`` would be calculated as .. math:: - \frac{(1-\alpha)^2 \cdot 3 + 1 \cdot 5}{(1-\alpha)^2 + 1}. + \frac{(1-\alpha)^2 \cdot 3 + 1 \cdot 5}{(1-\alpha)^2 + 1}. Whereas if ``ignore_na=True``, the weighted average would be calculated as .. math:: - \frac{(1-\alpha) \cdot 3 + 1 \cdot 5}{(1-\alpha) + 1}. + \frac{(1-\alpha) \cdot 3 + 1 \cdot 5}{(1-\alpha) + 1}. The :meth:`~Ewm.var`, :meth:`~Ewm.std`, and :meth:`~Ewm.cov` functions have a ``bias`` argument, specifying whether the result should contain biased or unbiased statistics. diff --git a/doc/source/whatsnew/v0.15.0.rst b/doc/source/whatsnew/v0.15.0.rst index fc2b070df4392..04506f1655c7d 100644 --- a/doc/source/whatsnew/v0.15.0.rst +++ b/doc/source/whatsnew/v0.15.0.rst @@ -462,15 +462,15 @@ Rolling/expanding moments improvements .. code-block:: ipython - In [51]: ewma(s, com=3., min_periods=2) - Out[51]: - 0 NaN - 1 NaN - 2 1.000000 - 3 1.000000 - 4 1.571429 - 5 2.189189 - dtype: float64 + In [51]: pd.ewma(s, com=3., min_periods=2) + Out[51]: + 0 NaN + 1 NaN + 2 1.000000 + 3 1.000000 + 4 1.571429 + 5 2.189189 + dtype: float64 New behavior (note values start at index ``4``, the location of the 2nd (since ``min_periods=2``) non-empty value): @@ -557,21 +557,21 @@ Rolling/expanding moments improvements .. 
code-block:: ipython - In [89]: ewmvar(s, com=2., bias=False) - Out[89]: - 0 -2.775558e-16 - 1 3.000000e-01 - 2 9.556787e-01 - 3 3.585799e+00 - dtype: float64 - - In [90]: ewmvar(s, com=2., bias=False) / ewmvar(s, com=2., bias=True) - Out[90]: - 0 1.25 - 1 1.25 - 2 1.25 - 3 1.25 - dtype: float64 + In [89]: pd.ewmvar(s, com=2., bias=False) + Out[89]: + 0 -2.775558e-16 + 1 3.000000e-01 + 2 9.556787e-01 + 3 3.585799e+00 + dtype: float64 + + In [90]: pd.ewmvar(s, com=2., bias=False) / pd.ewmvar(s, com=2., bias=True) + Out[90]: + 0 1.25 + 1 1.25 + 2 1.25 + 3 1.25 + dtype: float64 Note that entry ``0`` is approximately 0, and the debiasing factors are a constant 1.25. By comparison, the following 0.15.0 results have a ``NaN`` for entry ``0``, diff --git a/doc/source/whatsnew/v0.18.1.rst b/doc/source/whatsnew/v0.18.1.rst index 3db00f686d62c..f873d320822ae 100644 --- a/doc/source/whatsnew/v0.18.1.rst +++ b/doc/source/whatsnew/v0.18.1.rst @@ -149,8 +149,8 @@ can return a valid boolean indexer or anything which is valid for these indexer' # callable returns list of labels df.loc[lambda x: [1, 2], lambda x: ["A", "B"]] -Indexing with``[]`` -""""""""""""""""""" +Indexing with ``[]`` +"""""""""""""""""""" Finally, you can use a callable in ``[]`` indexing of Series, DataFrame and Panel. 
The callable must return a valid input for ``[]`` indexing depending on its diff --git a/doc/source/whatsnew/v0.19.0.rst b/doc/source/whatsnew/v0.19.0.rst index 340e1ce9ee1ef..a2bb935c708bc 100644 --- a/doc/source/whatsnew/v0.19.0.rst +++ b/doc/source/whatsnew/v0.19.0.rst @@ -1553,7 +1553,7 @@ Bug fixes - Bug in invalid datetime parsing in ``to_datetime`` and ``DatetimeIndex`` may raise ``TypeError`` rather than ``ValueError`` (:issue:`11169`, :issue:`11287`) - Bug in ``Index`` created with tz-aware ``Timestamp`` and mismatched ``tz`` option incorrectly coerces timezone (:issue:`13692`) - Bug in ``DatetimeIndex`` with nanosecond frequency does not include timestamp specified with ``end`` (:issue:`13672`) -- Bug in ```Series`` when setting a slice with a ``np.timedelta64`` (:issue:`14155`) +- Bug in ``Series`` when setting a slice with a ``np.timedelta64`` (:issue:`14155`) - Bug in ``Index`` raises ``OutOfBoundsDatetime`` if ``datetime`` exceeds ``datetime64[ns]`` bounds, rather than coercing to ``object`` dtype (:issue:`13663`) - Bug in ``Index`` may ignore specified ``datetime64`` or ``timedelta64`` passed as ``dtype`` (:issue:`13981`) - Bug in ``RangeIndex`` can be created without no arguments rather than raises ``TypeError`` (:issue:`13793`) diff --git a/doc/source/whatsnew/v0.21.1.rst b/doc/source/whatsnew/v0.21.1.rst index 090a988d6406a..e217e1a75efc5 100644 --- a/doc/source/whatsnew/v0.21.1.rst +++ b/doc/source/whatsnew/v0.21.1.rst @@ -125,7 +125,7 @@ Indexing IO ^^ -- Bug in class:`~pandas.io.stata.StataReader` not converting date/time columns with display formatting addressed (:issue:`17990`). Previously columns with display formatting were normally left as ordinal numbers and not converted to datetime objects. +- Bug in :class:`~pandas.io.stata.StataReader` not converting date/time columns with display formatting addressed (:issue:`17990`). 
Previously columns with display formatting were normally left as ordinal numbers and not converted to datetime objects. - Bug in :func:`read_csv` when reading a compressed UTF-16 encoded file (:issue:`18071`) - Bug in :func:`read_csv` for handling null values in index columns when specifying ``na_filter=False`` (:issue:`5239`) - Bug in :func:`read_csv` when reading numeric category fields with high cardinality (:issue:`18186`) diff --git a/doc/source/whatsnew/v0.23.0.rst b/doc/source/whatsnew/v0.23.0.rst index be84c562b3c32..9f24bc8e8ec50 100644 --- a/doc/source/whatsnew/v0.23.0.rst +++ b/doc/source/whatsnew/v0.23.0.rst @@ -1126,7 +1126,7 @@ Removal of prior version deprecations/changes - The ``Panel`` class has dropped the ``to_long`` and ``toLong`` methods (:issue:`19077`) - The options ``display.line_with`` and ``display.height`` are removed in favor of ``display.width`` and ``display.max_rows`` respectively (:issue:`4391`, :issue:`19107`) - The ``labels`` attribute of the ``Categorical`` class has been removed in favor of :attr:`Categorical.codes` (:issue:`7768`) -- The ``flavor`` parameter have been removed from func:`to_sql` method (:issue:`13611`) +- The ``flavor`` parameter have been removed from :func:`to_sql` method (:issue:`13611`) - The modules ``pandas.tools.hashing`` and ``pandas.util.hashing`` have been removed (:issue:`16223`) - The top-level functions ``pd.rolling_*``, ``pd.expanding_*`` and ``pd.ewm*`` have been removed (Deprecated since v0.18). 
Instead, use the DataFrame/Series methods :attr:`~DataFrame.rolling`, :attr:`~DataFrame.expanding` and :attr:`~DataFrame.ewm` (:issue:`18723`) diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst index e89e2f878fc24..e4dd6fa091d80 100644 --- a/doc/source/whatsnew/v0.25.0.rst +++ b/doc/source/whatsnew/v0.25.0.rst @@ -1121,7 +1121,7 @@ Indexing - Bug in which :meth:`DataFrame.to_csv` caused a segfault for a reindexed data frame, when the indices were single-level :class:`MultiIndex` (:issue:`26303`). - Fixed bug where assigning a :class:`arrays.PandasArray` to a :class:`pandas.core.frame.DataFrame` would raise error (:issue:`26390`) - Allow keyword arguments for callable local reference used in the :meth:`DataFrame.query` string (:issue:`26426`) -- Fixed a ``KeyError`` when indexing a :class:`MultiIndex`` level with a list containing exactly one label, which is missing (:issue:`27148`) +- Fixed a ``KeyError`` when indexing a :class:`MultiIndex` level with a list containing exactly one label, which is missing (:issue:`27148`) - Bug which produced ``AttributeError`` on partial matching :class:`Timestamp` in a :class:`MultiIndex` (:issue:`26944`) - Bug in :class:`Categorical` and :class:`CategoricalIndex` with :class:`Interval` values when using the ``in`` operator (``__contains``) with objects that are not comparable to the values in the ``Interval`` (:issue:`23705`) - Bug in :meth:`DataFrame.loc` and :meth:`DataFrame.iloc` on a :class:`DataFrame` with a single timezone-aware datetime64[ns] column incorrectly returning a scalar instead of a :class:`Series` (:issue:`27110`) diff --git a/doc/source/whatsnew/v0.7.0.rst b/doc/source/whatsnew/v0.7.0.rst index 1b947030ab8ab..1ee6a9899a655 100644 --- a/doc/source/whatsnew/v0.7.0.rst +++ b/doc/source/whatsnew/v0.7.0.rst @@ -190,11 +190,11 @@ been added: :header: "Method","Description" :widths: 40,60 - ``Series.iget_value(i)``, Retrieve value stored at location ``i`` - ``Series.iget(i)``, Alias for 
``iget_value`` - ``DataFrame.irow(i)``, Retrieve the ``i``-th row - ``DataFrame.icol(j)``, Retrieve the ``j``-th column - "``DataFrame.iget_value(i, j)``", Retrieve the value at row ``i`` and column ``j`` + ``Series.iget_value(i)``, Retrieve value stored at location ``i`` + ``Series.iget(i)``, Alias for ``iget_value`` + ``DataFrame.irow(i)``, Retrieve the ``i``-th row + ``DataFrame.icol(j)``, Retrieve the ``j``-th column + "``DataFrame.iget_value(i, j)``", Retrieve the value at row ``i`` and column ``j`` API tweaks regarding label-based slicing ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/doc/source/whatsnew/v0.9.1.rst b/doc/source/whatsnew/v0.9.1.rst index 6b05e5bcded7e..a5c3860a895a2 100644 --- a/doc/source/whatsnew/v0.9.1.rst +++ b/doc/source/whatsnew/v0.9.1.rst @@ -54,44 +54,44 @@ New features - DataFrame has new ``where`` and ``mask`` methods to select values according to a given boolean mask (:issue:`2109`, :issue:`2151`) - DataFrame currently supports slicing via a boolean vector the same length as the DataFrame (inside the ``[]``). - The returned DataFrame has the same number of columns as the original, but is sliced on its index. + DataFrame currently supports slicing via a boolean vector the same length as the DataFrame (inside the ``[]``). + The returned DataFrame has the same number of columns as the original, but is sliced on its index. .. ipython:: python - df = DataFrame(np.random.randn(5, 3), columns = ['A','B','C']) + df = pd.DataFrame(np.random.randn(5, 3), columns=['A', 'B', 'C']) - df + df - df[df['A'] > 0] + df[df['A'] > 0] - If a DataFrame is sliced with a DataFrame based boolean condition (with the same size as the original DataFrame), - then a DataFrame the same size (index and columns) as the original is returned, with - elements that do not meet the boolean condition as ``NaN``. This is accomplished via - the new method ``DataFrame.where``. In addition, ``where`` takes an optional ``other`` argument for replacement. 
+ If a DataFrame is sliced with a DataFrame based boolean condition (with the same size as the original DataFrame), + then a DataFrame the same size (index and columns) as the original is returned, with + elements that do not meet the boolean condition as ``NaN``. This is accomplished via + the new method ``DataFrame.where``. In addition, ``where`` takes an optional ``other`` argument for replacement. - .. ipython:: python + .. ipython:: python - df[df>0] + df[df > 0] - df.where(df>0) + df.where(df > 0) - df.where(df>0,-df) + df.where(df > 0, -df) - Furthermore, ``where`` now aligns the input boolean condition (ndarray or DataFrame), such that partial selection - with setting is possible. This is analogous to partial setting via ``.ix`` (but on the contents rather than the axis labels) + Furthermore, ``where`` now aligns the input boolean condition (ndarray or DataFrame), such that partial selection + with setting is possible. This is analogous to partial setting via ``.ix`` (but on the contents rather than the axis labels) - .. ipython:: python + .. ipython:: python - df2 = df.copy() - df2[ df2[1:4] > 0 ] = 3 - df2 + df2 = df.copy() + df2[df2[1:4] > 0] = 3 + df2 - ``DataFrame.mask`` is the inverse boolean operation of ``where``. + ``DataFrame.mask`` is the inverse boolean operation of ``where``. - .. ipython:: python + .. ipython:: python - df.mask(df<=0) + df.mask(df <= 0) - Enable referencing of Excel columns by their column names (:issue:`1936`) diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst index 03dfe475475a1..2ab0af46cda88 100755 --- a/doc/source/whatsnew/v1.0.0.rst +++ b/doc/source/whatsnew/v1.0.0.rst @@ -525,7 +525,7 @@ Use :meth:`arrays.IntegerArray.to_numpy` with an explicit ``na_value`` instead. 
a.to_numpy(dtype="float", na_value=np.nan) -**Reductions can return ``pd.NA``** +**Reductions can return** ``pd.NA`` When performing a reduction such as a sum with ``skipna=False``, the result will now be ``pd.NA`` instead of ``np.nan`` in presence of missing values diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst index ebd76d97e78b3..e1f54c439ae9b 100644 --- a/doc/source/whatsnew/v1.1.0.rst +++ b/doc/source/whatsnew/v1.1.0.rst @@ -665,9 +665,9 @@ the previous index (:issue:`32240`). In [4]: result Out[4]: min_val - 0 x - 1 y - 2 z + 0 x + 1 y + 2 z *New behavior*: diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst index 87982a149054c..52aa9312d4c14 100644 --- a/doc/source/whatsnew/v1.4.0.rst +++ b/doc/source/whatsnew/v1.4.0.rst @@ -826,7 +826,7 @@ Datetimelike - Bug in :meth:`Timestamp.to_pydatetime` failing to retain the ``fold`` attribute (:issue:`45087`) - Bug in :meth:`Series.mode` with ``DatetimeTZDtype`` incorrectly returning timezone-naive and ``PeriodDtype`` incorrectly raising (:issue:`41927`) - Fixed regression in :meth:`~Series.reindex` raising an error when using an incompatible fill value with a datetime-like dtype (or not raising a deprecation warning for using a ``datetime.date`` as fill value) (:issue:`42921`) -- Bug in :class:`DateOffset`` addition with :class:`Timestamp` where ``offset.nanoseconds`` would not be included in the result (:issue:`43968`, :issue:`36589`) +- Bug in :class:`DateOffset` addition with :class:`Timestamp` where ``offset.nanoseconds`` would not be included in the result (:issue:`43968`, :issue:`36589`) - Bug in :meth:`Timestamp.fromtimestamp` not supporting the ``tz`` argument (:issue:`45083`) - Bug in :class:`DataFrame` construction from dict of :class:`Series` with mismatched index dtypes sometimes raising depending on the ordering of the passed dict (:issue:`44091`) - Bug in :class:`Timestamp` hashing during some DST transitions caused a segmentation fault 
(:issue:`33931` and :issue:`40817`) diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py index 76511cb3eb48c..6986c04ae8d37 100644 --- a/pandas/core/groupby/generic.py +++ b/pandas/core/groupby/generic.py @@ -1658,13 +1658,13 @@ def value_counts( ... }) >>> df - gender education country - 0 male low US - 1 male medium FR - 2 female high US - 3 male low FR - 4 female high FR - 5 male low FR + gender education country + 0 male low US + 1 male medium FR + 2 female high US + 3 male low FR + 4 female high FR + 5 male low FR >>> df.groupby('gender').value_counts() gender education country diff --git a/pandas/tests/io/parser/common/test_common_basic.py b/pandas/tests/io/parser/common/test_common_basic.py index 5472bd99fa746..0ac0fe23bbbb7 100644 --- a/pandas/tests/io/parser/common/test_common_basic.py +++ b/pandas/tests/io/parser/common/test_common_basic.py @@ -699,7 +699,7 @@ def test_read_csv_and_table_sys_setprofile(all_parsers, read_func): def test_first_row_bom(all_parsers): # see gh-26545 parser = all_parsers - data = '''\ufeff"Head1" "Head2" "Head3"''' + data = '''\ufeff"Head1"\t"Head2"\t"Head3"''' result = parser.read_csv(StringIO(data), delimiter="\t") expected = DataFrame(columns=["Head1", "Head2", "Head3"]) @@ -710,7 +710,7 @@ def test_first_row_bom(all_parsers): def test_first_row_bom_unquoted(all_parsers): # see gh-36343 parser = all_parsers - data = """\ufeffHead1 Head2 Head3""" + data = """\ufeffHead1\tHead2\tHead3""" result = parser.read_csv(StringIO(data), delimiter="\t") expected = DataFrame(columns=["Head1", "Head2", "Head3"]) diff --git a/pandas/tests/io/test_clipboard.py b/pandas/tests/io/test_clipboard.py index 0cd4f9c02f69f..73e563fd2b743 100644 --- a/pandas/tests/io/test_clipboard.py +++ b/pandas/tests/io/test_clipboard.py @@ -209,9 +209,9 @@ def test_read_clipboard_infer_excel(self, request, mock_clipboard): text = dedent( """ - John James Charlie Mingus - 1 2 - 4 Harry Carney + John James\tCharlie Mingus + 1\t2 + 4\tHarry 
Carney """.strip() ) mock_clipboard[request.node.name] = text
Found a few issues in the docs with [sphinx-lint](https://github.com/sphinx-contrib/sphinx-lint). Fixed them.
https://api.github.com/repos/pandas-dev/pandas/pulls/46586
2022-03-31T13:37:58Z
2022-04-07T17:04:23Z
2022-04-07T17:04:23Z
2022-04-08T09:00:39Z
REGR: groupby.transform producing segfault
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py index 3089a6b8c16ae..91c35d7555705 100644 --- a/pandas/core/groupby/groupby.py +++ b/pandas/core/groupby/groupby.py @@ -1106,7 +1106,7 @@ def _set_result_index_ordered( # set the result index on the passed values object and # return the new object, xref 8046 - if self.grouper.is_monotonic: + if self.grouper.is_monotonic and not self.grouper.has_dropped_na: # shortcut if we have an already ordered grouper result.set_axis(self.obj._get_axis(self.axis), axis=self.axis, inplace=True) return result diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py index 5f15e11c4740c..b43319765c5b4 100644 --- a/pandas/core/groupby/ops.py +++ b/pandas/core/groupby/ops.py @@ -818,7 +818,10 @@ def result_ilocs(self) -> npt.NDArray[np.intp]: # Original indices are where group_index would go via sorting. # But when dropna is true, we need to remove null values while accounting for # any gaps that then occur because of them. 
- group_index = get_group_index(self.codes, self.shape, sort=False, xnull=True) + group_index = get_group_index( + self.codes, self.shape, sort=self._sort, xnull=True + ) + group_index, _ = compress_group_index(group_index, sort=self._sort) if self.has_dropped_na: mask = np.where(group_index >= 0) diff --git a/pandas/tests/groupby/transform/test_transform.py b/pandas/tests/groupby/transform/test_transform.py index c210c79c29426..f178e05d40dd0 100644 --- a/pandas/tests/groupby/transform/test_transform.py +++ b/pandas/tests/groupby/transform/test_transform.py @@ -1303,23 +1303,34 @@ def test_transform_cumcount(): tm.assert_series_equal(result, expected) -def test_null_group_lambda_self(sort, dropna): +@pytest.mark.parametrize("keys", [["A1"], ["A1", "A2"]]) +def test_null_group_lambda_self(request, sort, dropna, keys): # GH 17093 - np.random.seed(0) - keys = np.random.randint(0, 5, size=50).astype(float) - nulls = np.random.choice([0, 1], keys.shape).astype(bool) - keys[nulls] = np.nan - values = np.random.randint(0, 5, size=keys.shape) - df = DataFrame({"A": keys, "B": values}) + if not sort and not dropna: + msg = "GH#46584: null values get sorted when sort=False" + request.node.add_marker(pytest.mark.xfail(reason=msg, strict=False)) + + size = 50 + nulls1 = np.random.choice([False, True], size) + nulls2 = np.random.choice([False, True], size) + # Whether a group contains a null value or not + nulls_grouper = nulls1 if len(keys) == 1 else nulls1 | nulls2 + + a1 = np.random.randint(0, 5, size=size).astype(float) + a1[nulls1] = np.nan + a2 = np.random.randint(0, 5, size=size).astype(float) + a2[nulls2] = np.nan + values = np.random.randint(0, 5, size=a1.shape) + df = DataFrame({"A1": a1, "A2": a2, "B": values}) expected_values = values - if dropna and nulls.any(): + if dropna and nulls_grouper.any(): expected_values = expected_values.astype(float) - expected_values[nulls] = np.nan + expected_values[nulls_grouper] = np.nan expected = DataFrame(expected_values, 
columns=["B"]) - gb = df.groupby("A", dropna=dropna, sort=sort) - result = gb.transform(lambda x: x) + gb = df.groupby(keys, dropna=dropna, sort=sort) + result = gb[["B"]].transform(lambda x: x) tm.assert_frame_equal(result, expected)
- [x] closes #46566 (Replace xxxx with the Github issue number) - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. No whatsnew because this is only on main currently.
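For context, a minimal sketch of the behavior the regression test exercises (hypothetical data, not taken from the PR): with `dropna=True`, rows whose group key is null are excluded from the groups, so an identity `transform` should pass every other value through and leave `NaN` on the null-key rows instead of segfaulting.

```python
import numpy as np
import pandas as pd

# Hypothetical frame: two rows have a null group key.
df = pd.DataFrame({"key": [1.0, np.nan, 2.0, np.nan], "val": [10, 20, 30, 40]})

# With dropna=True the null-key rows belong to no group, so the
# identity transform yields NaN there and floats elsewhere.
result = df.groupby("key", dropna=True, sort=False)["val"].transform(lambda x: x)
print(result.tolist())
```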
https://api.github.com/repos/pandas-dev/pandas/pulls/46585
2022-03-31T13:04:58Z
2022-03-31T17:59:23Z
2022-03-31T17:59:23Z
2022-03-31T18:00:02Z
REF: expose pandas_timedelta_to_timedeltastruct
diff --git a/pandas/_libs/tslibs/__init__.py b/pandas/_libs/tslibs/__init__.py index 4c74959fee60d..15836854bad4d 100644 --- a/pandas/_libs/tslibs/__init__.py +++ b/pandas/_libs/tslibs/__init__.py @@ -27,10 +27,7 @@ ] from pandas._libs.tslibs import dtypes -from pandas._libs.tslibs.conversion import ( - OutOfBoundsTimedelta, - localize_pydatetime, -) +from pandas._libs.tslibs.conversion import localize_pydatetime from pandas._libs.tslibs.dtypes import Resolution from pandas._libs.tslibs.nattype import ( NaT, @@ -38,7 +35,10 @@ iNaT, nat_strings, ) -from pandas._libs.tslibs.np_datetime import OutOfBoundsDatetime +from pandas._libs.tslibs.np_datetime import ( + OutOfBoundsDatetime, + OutOfBoundsTimedelta, +) from pandas._libs.tslibs.offsets import ( BaseOffset, Tick, diff --git a/pandas/_libs/tslibs/conversion.pyx b/pandas/_libs/tslibs/conversion.pyx index e4b0c527a4cac..457b27b293f11 100644 --- a/pandas/_libs/tslibs/conversion.pyx +++ b/pandas/_libs/tslibs/conversion.pyx @@ -44,7 +44,10 @@ from pandas._libs.tslibs.np_datetime cimport ( pydatetime_to_dt64, ) -from pandas._libs.tslibs.np_datetime import OutOfBoundsDatetime +from pandas._libs.tslibs.np_datetime import ( + OutOfBoundsDatetime, + OutOfBoundsTimedelta, +) from pandas._libs.tslibs.timezones cimport ( get_dst_info, @@ -85,15 +88,6 @@ DT64NS_DTYPE = np.dtype('M8[ns]') TD64NS_DTYPE = np.dtype('m8[ns]') -class OutOfBoundsTimedelta(ValueError): - """ - Raised when encountering a timedelta value that cannot be represented - as a timedelta64[ns]. 
- """ - # Timedelta analogue to OutOfBoundsDatetime - pass - - # ---------------------------------------------------------------------- # Unit Conversion Helpers diff --git a/pandas/_libs/tslibs/np_datetime.pxd b/pandas/_libs/tslibs/np_datetime.pxd index 211b47cc3dc20..fb8ae4aae0e7b 100644 --- a/pandas/_libs/tslibs/np_datetime.pxd +++ b/pandas/_libs/tslibs/np_datetime.pxd @@ -66,6 +66,10 @@ cdef extern from "src/datetime/np_datetime.h": npy_datetime npy_datetimestruct_to_datetime(NPY_DATETIMEUNIT fr, npy_datetimestruct *d) nogil + void pandas_timedelta_to_timedeltastruct(npy_timedelta val, + NPY_DATETIMEUNIT fr, + pandas_timedeltastruct *result + ) nogil cdef bint cmp_scalar(int64_t lhs, int64_t rhs, int op) except -1 @@ -94,3 +98,5 @@ cpdef cnp.ndarray astype_overflowsafe( cnp.dtype dtype, # ndarray[datetime64[anyunit]] bint copy=*, ) + +cdef bint cmp_dtstructs(npy_datetimestruct* left, npy_datetimestruct* right, int op) diff --git a/pandas/_libs/tslibs/np_datetime.pyi b/pandas/_libs/tslibs/np_datetime.pyi index 922fe8c4a4dce..59f4427125266 100644 --- a/pandas/_libs/tslibs/np_datetime.pyi +++ b/pandas/_libs/tslibs/np_datetime.pyi @@ -1,6 +1,7 @@ import numpy as np class OutOfBoundsDatetime(ValueError): ... +class OutOfBoundsTimedelta(ValueError): ... # only exposed for testing def py_get_unit_from_dtype(dtype: np.dtype): ... diff --git a/pandas/_libs/tslibs/np_datetime.pyx b/pandas/_libs/tslibs/np_datetime.pyx index 2a0b4ceeb98f8..44646b0d4e87f 100644 --- a/pandas/_libs/tslibs/np_datetime.pyx +++ b/pandas/_libs/tslibs/np_datetime.pyx @@ -34,11 +34,6 @@ cdef extern from "src/datetime/np_datetime.h": int cmp_npy_datetimestruct(npy_datetimestruct *a, npy_datetimestruct *b) - void pandas_timedelta_to_timedeltastruct(npy_timedelta val, - NPY_DATETIMEUNIT fr, - pandas_timedeltastruct *result - ) nogil - # AS, FS, PS versions exist but are not imported because they are not used. 
npy_datetimestruct _NS_MIN_DTS, _NS_MAX_DTS npy_datetimestruct _US_MIN_DTS, _US_MAX_DTS @@ -100,6 +95,28 @@ def py_get_unit_from_dtype(dtype): # Comparison +cdef bint cmp_dtstructs( + npy_datetimestruct* left, npy_datetimestruct* right, int op +): + cdef: + int cmp_res + + cmp_res = cmp_npy_datetimestruct(left, right) + if op == Py_EQ: + return cmp_res == 0 + if op == Py_NE: + return cmp_res != 0 + if op == Py_GT: + return cmp_res == 1 + if op == Py_LT: + return cmp_res == -1 + if op == Py_GE: + return cmp_res == 1 or cmp_res == 0 + else: + # i.e. op == Py_LE + return cmp_res == -1 or cmp_res == 0 + + cdef inline bint cmp_scalar(int64_t lhs, int64_t rhs, int op) except -1: """ cmp_scalar is a more performant version of PyObject_RichCompare @@ -127,6 +144,15 @@ class OutOfBoundsDatetime(ValueError): pass +class OutOfBoundsTimedelta(ValueError): + """ + Raised when encountering a timedelta value that cannot be represented + as a timedelta64[ns]. + """ + # Timedelta analogue to OutOfBoundsDatetime + pass + + cdef check_dts_bounds(npy_datetimestruct *dts, NPY_DATETIMEUNIT unit=NPY_FR_ns): """Raises OutOfBoundsDatetime if the given date is outside the range that can be represented by nanosecond-resolution 64-bit integers.""" diff --git a/pandas/_libs/tslibs/timedeltas.pyx b/pandas/_libs/tslibs/timedeltas.pyx index 627006a7f32c0..b443ae283755c 100644 --- a/pandas/_libs/tslibs/timedeltas.pyx +++ b/pandas/_libs/tslibs/timedeltas.pyx @@ -51,6 +51,7 @@ from pandas._libs.tslibs.np_datetime cimport ( pandas_timedeltastruct, td64_to_tdstruct, ) +from pandas._libs.tslibs.np_datetime import OutOfBoundsTimedelta from pandas._libs.tslibs.offsets cimport is_tick_object from pandas._libs.tslibs.util cimport ( is_array, @@ -188,7 +189,6 @@ cpdef int64_t delta_to_nanoseconds(delta) except? 
-1: + delta.microseconds ) * 1000 except OverflowError as err: - from pandas._libs.tslibs.conversion import OutOfBoundsTimedelta raise OutOfBoundsTimedelta(*err.args) from err raise TypeError(type(delta)) @@ -226,7 +226,6 @@ cdef object ensure_td64ns(object ts): # NB: cython#1381 this cannot be *= td64_value = td64_value * mult except OverflowError as err: - from pandas._libs.tslibs.conversion import OutOfBoundsTimedelta raise OutOfBoundsTimedelta(ts) from err return np.timedelta64(td64_value, "ns")
Broken off from a branch implementing a non-nano Timedelta scalar. `cmp_dtstructs` will end up being used by both `Timedelta.__richcmp__` and `Timestamp.__richcmp__`.
https://api.github.com/repos/pandas-dev/pandas/pulls/46578
2022-03-31T03:15:54Z
2022-03-31T17:34:42Z
2022-03-31T17:34:42Z
2022-03-31T18:36:04Z
Backport PR #46548 on branch 1.4.x (REGR: tests + whats new: boolean dtype in styler render)
diff --git a/doc/source/whatsnew/v1.4.2.rst b/doc/source/whatsnew/v1.4.2.rst index 13f3e9a0d0a8c..76b2a5d6ffd47 100644 --- a/doc/source/whatsnew/v1.4.2.rst +++ b/doc/source/whatsnew/v1.4.2.rst @@ -21,7 +21,7 @@ Fixed regressions - Fixed regression in :meth:`DataFrame.replace` when a replacement value was also a target for replacement (:issue:`46306`) - Fixed regression in :meth:`DataFrame.replace` when the replacement value was explicitly ``None`` when passed in a dictionary to ``to_replace`` (:issue:`45601`, :issue:`45836`) - Fixed regression when setting values with :meth:`DataFrame.loc` losing :class:`MultiIndex` names if :class:`DataFrame` was empty before (:issue:`46317`) -- +- Fixed regression when rendering boolean datatype columns with :meth:`.Styler` (:issue:`46384`) .. --------------------------------------------------------------------------- diff --git a/pandas/tests/io/formats/style/test_format.py b/pandas/tests/io/formats/style/test_format.py index 5207be992d606..a52c679e16ad5 100644 --- a/pandas/tests/io/formats/style/test_format.py +++ b/pandas/tests/io/formats/style/test_format.py @@ -434,3 +434,11 @@ def test_1level_multiindex(): assert ctx["body"][0][0]["is_visible"] is True assert ctx["body"][1][0]["display_value"] == "2" assert ctx["body"][1][0]["is_visible"] is True + + +def test_boolean_format(): + # gh 46384: booleans do not collapse to integer representation on display + df = DataFrame([[True, False]]) + ctx = df.style._translate(True, True) + assert ctx["body"][0][1]["display_value"] is True + assert ctx["body"][0][2]["display_value"] is False
Backport PR #46548: REGR: tests + whats new: boolean dtype in styler render
https://api.github.com/repos/pandas-dev/pandas/pulls/46577
2022-03-30T21:36:57Z
2022-03-31T00:24:32Z
2022-03-31T00:24:32Z
2022-03-31T00:24:32Z
STYLE update black formatter in v1.4.x
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index 3ffcf29f47688..2d3178f9d62fe 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -9,7 +9,7 @@ repos: - id: absolufy-imports files: ^pandas/ - repo: https://github.com/python/black - rev: 21.12b0 + rev: 22.3.0 hooks: - id: black - repo: https://github.com/codespell-project/codespell diff --git a/asv_bench/benchmarks/algorithms.py b/asv_bench/benchmarks/algorithms.py index 2e43827232ae5..0008a589ca71f 100644 --- a/asv_bench/benchmarks/algorithms.py +++ b/asv_bench/benchmarks/algorithms.py @@ -34,7 +34,7 @@ class Factorize: param_names = ["unique", "sort", "dtype"] def setup(self, unique, sort, dtype): - N = 10 ** 5 + N = 10**5 string_index = tm.makeStringIndex(N) string_arrow = None if dtype == "string[pyarrow]": @@ -74,7 +74,7 @@ class Duplicated: param_names = ["unique", "keep", "dtype"] def setup(self, unique, keep, dtype): - N = 10 ** 5 + N = 10**5 data = { "int": pd.Index(np.arange(N), dtype="int64"), "uint": pd.Index(np.arange(N), dtype="uint64"), @@ -97,7 +97,7 @@ def time_duplicated(self, unique, keep, dtype): class Hashing: def setup_cache(self): - N = 10 ** 5 + N = 10**5 df = pd.DataFrame( { @@ -145,7 +145,7 @@ class Quantile: param_names = ["quantile", "interpolation", "dtype"] def setup(self, quantile, interpolation, dtype): - N = 10 ** 5 + N = 10**5 data = { "int": np.arange(N), "uint": np.arange(N).astype(np.uint64), @@ -158,7 +158,7 @@ def time_quantile(self, quantile, interpolation, dtype): class SortIntegerArray: - params = [10 ** 3, 10 ** 5] + params = [10**3, 10**5] def setup(self, N): data = np.arange(N, dtype=float) diff --git a/asv_bench/benchmarks/algos/isin.py b/asv_bench/benchmarks/algos/isin.py index 37fa0b490bd9e..16d90b9d23741 100644 --- a/asv_bench/benchmarks/algos/isin.py +++ b/asv_bench/benchmarks/algos/isin.py @@ -49,7 +49,7 @@ def setup(self, dtype): elif dtype in ["category[object]", "category[int]"]: # Note: sizes are different in this case than 
others - n = 5 * 10 ** 5 + n = 5 * 10**5 sample_size = 100 arr = list(np.random.randint(0, n // 10, size=n)) @@ -174,7 +174,7 @@ class IsinWithArange: def setup(self, dtype, M, offset_factor): offset = int(M * offset_factor) - tmp = Series(np.random.randint(offset, M + offset, 10 ** 6)) + tmp = Series(np.random.randint(offset, M + offset, 10**6)) self.series = tmp.astype(dtype) self.values = np.arange(M).astype(dtype) @@ -191,8 +191,8 @@ class IsInFloat64: param_names = ["dtype", "title"] def setup(self, dtype, title): - N_many = 10 ** 5 - N_few = 10 ** 6 + N_many = 10**5 + N_few = 10**6 self.series = Series([1, 2], dtype=dtype) if title == "many_different_values": @@ -240,10 +240,10 @@ class IsInForObjects: param_names = ["series_type", "vals_type"] def setup(self, series_type, vals_type): - N_many = 10 ** 5 + N_many = 10**5 if series_type == "nans": - ser_vals = np.full(10 ** 4, np.nan) + ser_vals = np.full(10**4, np.nan) elif series_type == "short": ser_vals = np.arange(2) elif series_type == "long": @@ -254,7 +254,7 @@ def setup(self, series_type, vals_type): self.series = Series(ser_vals).astype(object) if vals_type == "nans": - values = np.full(10 ** 4, np.nan) + values = np.full(10**4, np.nan) elif vals_type == "short": values = np.arange(2) elif vals_type == "long": @@ -277,7 +277,7 @@ class IsInLongSeriesLookUpDominates: param_names = ["dtype", "MaxNumber", "series_type"] def setup(self, dtype, MaxNumber, series_type): - N = 10 ** 7 + N = 10**7 if series_type == "random_hits": array = np.random.randint(0, MaxNumber, N) @@ -304,7 +304,7 @@ class IsInLongSeriesValuesDominate: param_names = ["dtype", "series_type"] def setup(self, dtype, series_type): - N = 10 ** 7 + N = 10**7 if series_type == "random": vals = np.random.randint(0, 10 * N, N) @@ -312,7 +312,7 @@ def setup(self, dtype, series_type): vals = np.arange(N) self.values = vals.astype(dtype.lower()) - M = 10 ** 6 + 1 + M = 10**6 + 1 self.series = Series(np.arange(M)).astype(dtype) def time_isin(self, 
dtypes, series_type): diff --git a/asv_bench/benchmarks/arithmetic.py b/asv_bench/benchmarks/arithmetic.py index edd1132116f76..9db2e2c2a9732 100644 --- a/asv_bench/benchmarks/arithmetic.py +++ b/asv_bench/benchmarks/arithmetic.py @@ -59,7 +59,7 @@ def time_frame_op_with_scalar(self, dtype, scalar, op): class OpWithFillValue: def setup(self): # GH#31300 - arr = np.arange(10 ** 6) + arr = np.arange(10**6) df = DataFrame({"A": arr}) ser = df["A"] @@ -93,7 +93,7 @@ class MixedFrameWithSeriesAxis: param_names = ["opname"] def setup(self, opname): - arr = np.arange(10 ** 6).reshape(1000, -1) + arr = np.arange(10**6).reshape(1000, -1) df = DataFrame(arr) df["C"] = 1.0 self.df = df @@ -201,7 +201,7 @@ def teardown(self, use_numexpr, threads): class Ops2: def setup(self): - N = 10 ** 3 + N = 10**3 self.df = DataFrame(np.random.randn(N, N)) self.df2 = DataFrame(np.random.randn(N, N)) @@ -258,7 +258,7 @@ class Timeseries: param_names = ["tz"] def setup(self, tz): - N = 10 ** 6 + N = 10**6 halfway = (N // 2) - 1 self.s = Series(date_range("20010101", periods=N, freq="T", tz=tz)) self.ts = self.s[halfway] @@ -280,7 +280,7 @@ def time_timestamp_ops_diff_with_shift(self, tz): class IrregularOps: def setup(self): - N = 10 ** 5 + N = 10**5 idx = date_range(start="1/1/2000", periods=N, freq="s") s = Series(np.random.randn(N), index=idx) self.left = s.sample(frac=1) @@ -304,7 +304,7 @@ class CategoricalComparisons: param_names = ["op"] def setup(self, op): - N = 10 ** 5 + N = 10**5 self.cat = pd.Categorical(list("aabbcd") * N, ordered=True) def time_categorical_op(self, op): @@ -317,7 +317,7 @@ class IndexArithmetic: param_names = ["dtype"] def setup(self, dtype): - N = 10 ** 6 + N = 10**6 indexes = {"int": "makeIntIndex", "float": "makeFloatIndex"} self.index = getattr(tm, indexes[dtype])(N) @@ -343,7 +343,7 @@ class NumericInferOps: param_names = ["dtype"] def setup(self, dtype): - N = 5 * 10 ** 5 + N = 5 * 10**5 self.df = DataFrame( {"A": np.arange(N).astype(dtype), "B": 
np.arange(N).astype(dtype)} ) @@ -367,7 +367,7 @@ def time_modulo(self, dtype): class DateInferOps: # from GH 7332 def setup_cache(self): - N = 5 * 10 ** 5 + N = 5 * 10**5 df = DataFrame({"datetime64": np.arange(N).astype("datetime64[ms]")}) df["timedelta"] = df["datetime64"] - df["datetime64"] return df @@ -388,7 +388,7 @@ class AddOverflowScalar: param_names = ["scalar"] def setup(self, scalar): - N = 10 ** 6 + N = 10**6 self.arr = np.arange(N) def time_add_overflow_scalar(self, scalar): @@ -397,7 +397,7 @@ def time_add_overflow_scalar(self, scalar): class AddOverflowArray: def setup(self): - N = 10 ** 6 + N = 10**6 self.arr = np.arange(N) self.arr_rev = np.arange(-N, 0) self.arr_mixed = np.array([1, -1]).repeat(N / 2) diff --git a/asv_bench/benchmarks/categoricals.py b/asv_bench/benchmarks/categoricals.py index 268f25c3d12e3..2b60871cbf82b 100644 --- a/asv_bench/benchmarks/categoricals.py +++ b/asv_bench/benchmarks/categoricals.py @@ -19,7 +19,7 @@ class Constructor: def setup(self): - N = 10 ** 5 + N = 10**5 self.categories = list("abcde") self.cat_idx = pd.Index(self.categories) self.values = np.tile(self.categories, N) @@ -71,16 +71,16 @@ def time_existing_series(self): class AsType: def setup(self): - N = 10 ** 5 + N = 10**5 random_pick = np.random.default_rng().choice categories = { "str": list(string.ascii_letters), - "int": np.random.randint(2 ** 16, size=154), + "int": np.random.randint(2**16, size=154), "float": sys.maxsize * np.random.random((38,)), "timestamp": [ - pd.Timestamp(x, unit="s") for x in np.random.randint(2 ** 18, size=578) + pd.Timestamp(x, unit="s") for x in np.random.randint(2**18, size=578) ], } @@ -112,7 +112,7 @@ def astype_datetime(self): class Concat: def setup(self): - N = 10 ** 5 + N = 10**5 self.s = pd.Series(list("aabbcd") * N).astype("category") self.a = pd.Categorical(list("aabbcd") * N) @@ -148,7 +148,7 @@ class ValueCounts: param_names = ["dropna"] def setup(self, dropna): - n = 5 * 10 ** 5 + n = 5 * 10**5 arr = 
[f"s{i:04d}" for i in np.random.randint(0, n // 10, size=n)] self.ts = pd.Series(arr).astype("category") @@ -166,7 +166,7 @@ def time_rendering(self): class SetCategories: def setup(self): - n = 5 * 10 ** 5 + n = 5 * 10**5 arr = [f"s{i:04d}" for i in np.random.randint(0, n // 10, size=n)] self.ts = pd.Series(arr).astype("category") @@ -176,7 +176,7 @@ def time_set_categories(self): class RemoveCategories: def setup(self): - n = 5 * 10 ** 5 + n = 5 * 10**5 arr = [f"s{i:04d}" for i in np.random.randint(0, n // 10, size=n)] self.ts = pd.Series(arr).astype("category") @@ -186,7 +186,7 @@ def time_remove_categories(self): class Rank: def setup(self): - N = 10 ** 5 + N = 10**5 ncats = 100 self.s_str = pd.Series(tm.makeCategoricalIndex(N, ncats)).astype(str) @@ -241,7 +241,7 @@ def time_categorical_series_is_monotonic_decreasing(self): class Contains: def setup(self): - N = 10 ** 5 + N = 10**5 self.ci = tm.makeCategoricalIndex(N) self.c = self.ci.values self.key = self.ci.categories[0] @@ -259,7 +259,7 @@ class CategoricalSlicing: param_names = ["index"] def setup(self, index): - N = 10 ** 6 + N = 10**6 categories = ["a", "b", "c"] values = [0] * N + [1] * N + [2] * N if index == "monotonic_incr": @@ -295,7 +295,7 @@ def time_getitem_bool_array(self, index): class Indexing: def setup(self): - N = 10 ** 5 + N = 10**5 self.index = pd.CategoricalIndex(range(N), range(N)) self.series = pd.Series(range(N), index=self.index).sort_index() self.category = self.index[500] @@ -327,7 +327,7 @@ def time_sort_values(self): class SearchSorted: def setup(self): - N = 10 ** 5 + N = 10**5 self.ci = tm.makeCategoricalIndex(N).sort_values() self.c = self.ci.values self.key = self.ci.categories[1] diff --git a/asv_bench/benchmarks/ctors.py b/asv_bench/benchmarks/ctors.py index 5993b068feadf..ef8b16f376d6a 100644 --- a/asv_bench/benchmarks/ctors.py +++ b/asv_bench/benchmarks/ctors.py @@ -76,7 +76,7 @@ def setup(self, data_fmt, with_index, dtype): raise NotImplementedError( "Series 
constructors do not support using generators with indexes" ) - N = 10 ** 4 + N = 10**4 if dtype == "float": arr = np.random.randn(N) else: @@ -90,7 +90,7 @@ def time_series_constructor(self, data_fmt, with_index, dtype): class SeriesDtypesConstructors: def setup(self): - N = 10 ** 4 + N = 10**4 self.arr = np.random.randn(N) self.arr_str = np.array(["foo", "bar", "baz"], dtype=object) self.s = Series( @@ -114,7 +114,7 @@ def time_dtindex_from_index_with_series(self): class MultiIndexConstructor: def setup(self): - N = 10 ** 4 + N = 10**4 self.iterables = [tm.makeStringIndex(N), range(20)] def time_multiindex_from_iterables(self): diff --git a/asv_bench/benchmarks/eval.py b/asv_bench/benchmarks/eval.py index cbab9fdc9c0ba..b5442531e748a 100644 --- a/asv_bench/benchmarks/eval.py +++ b/asv_bench/benchmarks/eval.py @@ -43,7 +43,7 @@ def teardown(self, engine, threads): class Query: def setup(self): - N = 10 ** 6 + N = 10**6 halfway = (N // 2) - 1 index = pd.date_range("20010101", periods=N, freq="T") s = pd.Series(index) diff --git a/asv_bench/benchmarks/frame_ctor.py b/asv_bench/benchmarks/frame_ctor.py index eace665ba0bac..810c29ec70a6f 100644 --- a/asv_bench/benchmarks/frame_ctor.py +++ b/asv_bench/benchmarks/frame_ctor.py @@ -77,7 +77,7 @@ class FromDictwithTimestamp: param_names = ["offset"] def setup(self, offset): - N = 10 ** 3 + N = 10**3 idx = date_range(Timestamp("1/1/1900"), freq=offset, periods=N) df = DataFrame(np.random.randn(N, 10), index=idx) self.d = df.to_dict() diff --git a/asv_bench/benchmarks/frame_methods.py b/asv_bench/benchmarks/frame_methods.py index 16925b7959e6a..29ac92b15f632 100644 --- a/asv_bench/benchmarks/frame_methods.py +++ b/asv_bench/benchmarks/frame_methods.py @@ -50,7 +50,7 @@ def time_frame_fancy_lookup_all(self): class Reindex: def setup(self): - N = 10 ** 3 + N = 10**3 self.df = DataFrame(np.random.randn(N * 10, N)) self.idx = np.arange(4 * N, 7 * N) self.idx_cols = np.random.randint(0, N, N) @@ -84,7 +84,7 @@ def 
time_reindex_upcast(self): class Rename: def setup(self): - N = 10 ** 3 + N = 10**3 self.df = DataFrame(np.random.randn(N * 10, N)) self.idx = np.arange(4 * N, 7 * N) self.dict_idx = {k: k for k in self.idx} @@ -329,7 +329,7 @@ def time_frame_mask_floats(self): class Isnull: def setup(self): - N = 10 ** 3 + N = 10**3 self.df_no_null = DataFrame(np.random.randn(N, N)) sample = np.array([np.nan, 1.0]) @@ -497,7 +497,7 @@ def time_frame_dtypes(self): class Equals: def setup(self): - N = 10 ** 3 + N = 10**3 self.float_df = DataFrame(np.random.randn(N, N)) self.float_df_nan = self.float_df.copy() self.float_df_nan.iloc[-1, -1] = np.nan @@ -618,7 +618,7 @@ class XS: param_names = ["axis"] def setup(self, axis): - self.N = 10 ** 4 + self.N = 10**4 self.df = DataFrame(np.random.randn(self.N, self.N)) def time_frame_xs(self, axis): @@ -718,9 +718,9 @@ class Describe: def setup(self): self.df = DataFrame( { - "a": np.random.randint(0, 100, 10 ** 6), - "b": np.random.randint(0, 100, 10 ** 6), - "c": np.random.randint(0, 100, 10 ** 6), + "a": np.random.randint(0, 100, 10**6), + "b": np.random.randint(0, 100, 10**6), + "c": np.random.randint(0, 100, 10**6), } ) diff --git a/asv_bench/benchmarks/gil.py b/asv_bench/benchmarks/gil.py index ac7cd87c846d5..af2efe56c2530 100644 --- a/asv_bench/benchmarks/gil.py +++ b/asv_bench/benchmarks/gil.py @@ -55,8 +55,8 @@ class ParallelGroupbyMethods: def setup(self, threads, method): if not have_real_test_parallel: raise NotImplementedError - N = 10 ** 6 - ngroups = 10 ** 3 + N = 10**6 + ngroups = 10**3 df = DataFrame( {"key": np.random.randint(0, ngroups, size=N), "data": np.random.randn(N)} ) @@ -88,8 +88,8 @@ class ParallelGroups: def setup(self, threads): if not have_real_test_parallel: raise NotImplementedError - size = 2 ** 22 - ngroups = 10 ** 3 + size = 2**22 + ngroups = 10**3 data = Series(np.random.randint(0, ngroups, size=size)) @test_parallel(num_threads=threads) @@ -110,7 +110,7 @@ class ParallelTake1D: def setup(self, dtype): if 
not have_real_test_parallel: raise NotImplementedError - N = 10 ** 6 + N = 10**6 df = DataFrame({"col": np.arange(N, dtype=dtype)}) indexer = np.arange(100, len(df) - 100) @@ -133,8 +133,8 @@ class ParallelKth: def setup(self): if not have_real_test_parallel: raise NotImplementedError - N = 10 ** 7 - k = 5 * 10 ** 5 + N = 10**7 + k = 5 * 10**5 kwargs_list = [{"arr": np.random.randn(N)}, {"arr": np.random.randn(N)}] @test_parallel(num_threads=2, kwargs_list=kwargs_list) @@ -151,7 +151,7 @@ class ParallelDatetimeFields: def setup(self): if not have_real_test_parallel: raise NotImplementedError - N = 10 ** 6 + N = 10**6 self.dti = date_range("1900-01-01", periods=N, freq="T") self.period = self.dti.to_period("D") diff --git a/asv_bench/benchmarks/groupby.py b/asv_bench/benchmarks/groupby.py index ff58e382a9ba2..05dd53fc38a60 100644 --- a/asv_bench/benchmarks/groupby.py +++ b/asv_bench/benchmarks/groupby.py @@ -73,7 +73,7 @@ class Apply: params = [4, 5] def setup(self, factor): - N = 10 ** factor + N = 10**factor # two cases: # - small groups: small data (N**4) + many labels (2000) -> average group # size of 5 (-> larger overhead of slicing method) @@ -116,7 +116,7 @@ class Groups: params = ["int64_small", "int64_large", "object_small", "object_large"] def setup_cache(self): - size = 10 ** 6 + size = 10**6 data = { "int64_small": Series(np.random.randint(0, 100, size=size)), "int64_large": Series(np.random.randint(0, 10000, size=size)), @@ -160,7 +160,7 @@ class Nth: params = ["float32", "float64", "datetime", "object"] def setup(self, dtype): - N = 10 ** 5 + N = 10**5 # with datetimes (GH7555) if dtype == "datetime": values = date_range("1/1/2011", periods=N, freq="s") @@ -268,7 +268,7 @@ def time_multi_int_nunique(self, df): class AggFunctions: def setup_cache(self): - N = 10 ** 5 + N = 10**5 fac1 = np.array(["A", "B", "C"], dtype="O") fac2 = np.array(["one", "two"], dtype="O") df = DataFrame( @@ -301,7 +301,7 @@ def time_different_python_functions_singlecol(self, 
df): class GroupStrings: def setup(self): - n = 2 * 10 ** 5 + n = 2 * 10**5 alpha = list(map("".join, product(ascii_letters, repeat=4))) data = np.random.choice(alpha, (n // 5, 4), replace=False) data = np.repeat(data, 5, axis=0) @@ -315,7 +315,7 @@ def time_multi_columns(self): class MultiColumn: def setup_cache(self): - N = 10 ** 5 + N = 10**5 key1 = np.tile(np.arange(100, dtype=object), 1000) key2 = key1.copy() np.random.shuffle(key1) @@ -345,7 +345,7 @@ def time_col_select_numpy_sum(self, df): class Size: def setup(self): - n = 10 ** 5 + n = 10**5 offsets = np.random.randint(n, size=n).astype("timedelta64[ns]") dates = np.datetime64("now") + offsets self.df = DataFrame( @@ -582,7 +582,7 @@ class RankWithTies: ] def setup(self, dtype, tie_method): - N = 10 ** 4 + N = 10**4 if dtype == "datetime64": data = np.array([Timestamp("2011/01/01")] * N, dtype=dtype) else: @@ -640,7 +640,7 @@ def time_str_func(self, dtype, method): class Categories: def setup(self): - N = 10 ** 5 + N = 10**5 arr = np.random.random(N) data = {"a": Categorical(np.random.randint(10000, size=N)), "b": arr} self.df = DataFrame(data) @@ -682,14 +682,14 @@ class Datelike: param_names = ["grouper"] def setup(self, grouper): - N = 10 ** 4 + N = 10**4 rng_map = { "period_range": period_range, "date_range": date_range, "date_range_tz": partial(date_range, tz="US/Central"), } self.grouper = rng_map[grouper]("1900-01-01", freq="D", periods=N) - self.df = DataFrame(np.random.randn(10 ** 4, 2)) + self.df = DataFrame(np.random.randn(10**4, 2)) def time_sum(self, grouper): self.df.groupby(self.grouper).sum() @@ -798,7 +798,7 @@ class TransformEngine: params = [[True, False]] def setup(self, parallel): - N = 10 ** 3 + N = 10**3 data = DataFrame( {0: [str(i) for i in range(100)] * N, 1: list(range(100)) * N}, columns=[0, 1], @@ -841,7 +841,7 @@ class AggEngine: params = [[True, False]] def setup(self, parallel): - N = 10 ** 3 + N = 10**3 data = DataFrame( {0: [str(i) for i in range(100)] * N, 1: 
list(range(100)) * N}, columns=[0, 1], @@ -904,7 +904,7 @@ def function(values): class Sample: def setup(self): - N = 10 ** 3 + N = 10**3 self.df = DataFrame({"a": np.zeros(N)}) self.groups = np.arange(0, N) self.weights = np.ones(N) diff --git a/asv_bench/benchmarks/hash_functions.py b/asv_bench/benchmarks/hash_functions.py index 6703cc791493a..d9a291dc27125 100644 --- a/asv_bench/benchmarks/hash_functions.py +++ b/asv_bench/benchmarks/hash_functions.py @@ -16,9 +16,9 @@ class Float64GroupIndex: # GH28303 def setup(self): self.df = pd.date_range( - start="1/1/2018", end="1/2/2018", periods=10 ** 6 + start="1/1/2018", end="1/2/2018", periods=10**6 ).to_frame() - self.group_index = np.round(self.df.index.astype(int) / 10 ** 9) + self.group_index = np.round(self.df.index.astype(int) / 10**9) def time_groupby(self): self.df.groupby(self.group_index).last() @@ -29,8 +29,8 @@ class UniqueAndFactorizeArange: param_names = ["exponent"] def setup(self, exponent): - a = np.arange(10 ** 4, dtype="float64") - self.a2 = (a + 10 ** exponent).repeat(100) + a = np.arange(10**4, dtype="float64") + self.a2 = (a + 10**exponent).repeat(100) def time_factorize(self, exponent): pd.factorize(self.a2) @@ -43,7 +43,7 @@ class NumericSeriesIndexing: params = [ (pd.Int64Index, pd.UInt64Index, pd.Float64Index), - (10 ** 4, 10 ** 5, 5 * 10 ** 5, 10 ** 6, 5 * 10 ** 6), + (10**4, 10**5, 5 * 10**5, 10**6, 5 * 10**6), ] param_names = ["index_dtype", "N"] @@ -61,7 +61,7 @@ class NumericSeriesIndexingShuffled: params = [ (pd.Int64Index, pd.UInt64Index, pd.Float64Index), - (10 ** 4, 10 ** 5, 5 * 10 ** 5, 10 ** 6, 5 * 10 ** 6), + (10**4, 10**5, 5 * 10**5, 10**6, 5 * 10**6), ] param_names = ["index_dtype", "N"] diff --git a/asv_bench/benchmarks/index_cached_properties.py b/asv_bench/benchmarks/index_cached_properties.py index 16fbc741775e4..ad6170e256091 100644 --- a/asv_bench/benchmarks/index_cached_properties.py +++ b/asv_bench/benchmarks/index_cached_properties.py @@ -22,7 +22,7 @@ class 
IndexCache: param_names = ["index_type"] def setup(self, index_type): - N = 10 ** 5 + N = 10**5 if index_type == "MultiIndex": self.idx = pd.MultiIndex.from_product( [pd.date_range("1/1/2000", freq="T", periods=N // 2), ["a", "b"]] diff --git a/asv_bench/benchmarks/index_object.py b/asv_bench/benchmarks/index_object.py index 2b2302a796730..dab33f02c2cd9 100644 --- a/asv_bench/benchmarks/index_object.py +++ b/asv_bench/benchmarks/index_object.py @@ -25,7 +25,7 @@ class SetOperations: param_names = ["dtype", "method"] def setup(self, dtype, method): - N = 10 ** 5 + N = 10**5 dates_left = date_range("1/1/2000", periods=N, freq="T") fmt = "%Y-%m-%d %H:%M:%S" date_str_left = Index(dates_left.strftime(fmt)) @@ -46,7 +46,7 @@ def time_operation(self, dtype, method): class SetDisjoint: def setup(self): - N = 10 ** 5 + N = 10**5 B = N + 20000 self.datetime_left = DatetimeIndex(range(N)) self.datetime_right = DatetimeIndex(range(N, B)) @@ -57,8 +57,8 @@ def time_datetime_difference_disjoint(self): class Range: def setup(self): - self.idx_inc = RangeIndex(start=0, stop=10 ** 6, step=3) - self.idx_dec = RangeIndex(start=10 ** 6, stop=-1, step=-3) + self.idx_inc = RangeIndex(start=0, stop=10**6, step=3) + self.idx_dec = RangeIndex(start=10**6, stop=-1, step=-3) def time_max(self): self.idx_inc.max() @@ -139,7 +139,7 @@ class Indexing: param_names = ["dtype"] def setup(self, dtype): - N = 10 ** 6 + N = 10**6 self.idx = getattr(tm, f"make{dtype}Index")(N) self.array_mask = (np.arange(N) % 3) == 0 self.series_mask = Series(self.array_mask) @@ -192,7 +192,7 @@ def time_get_loc(self): class IntervalIndexMethod: # GH 24813 - params = [10 ** 3, 10 ** 5] + params = [10**3, 10**5] def setup(self, N): left = np.append(np.arange(N), np.array(0)) diff --git a/asv_bench/benchmarks/indexing.py b/asv_bench/benchmarks/indexing.py index 58f2a73d82842..1135e1ada1b78 100644 --- a/asv_bench/benchmarks/indexing.py +++ b/asv_bench/benchmarks/indexing.py @@ -37,7 +37,7 @@ class NumericSeriesIndexing: 
param_names = ["index_dtype", "index_structure"] def setup(self, index, index_structure): - N = 10 ** 6 + N = 10**6 indices = { "unique_monotonic_inc": index(range(N)), "nonunique_monotonic_inc": index( @@ -97,7 +97,7 @@ class NonNumericSeriesIndexing: param_names = ["index_dtype", "index_structure"] def setup(self, index, index_structure): - N = 10 ** 6 + N = 10**6 if index == "string": index = tm.makeStringIndex(N) elif index == "datetime": @@ -263,7 +263,7 @@ class CategoricalIndexIndexing: param_names = ["index"] def setup(self, index): - N = 10 ** 5 + N = 10**5 values = list("a" * N + "b" * N + "c" * N) indices = { "monotonic_incr": CategoricalIndex(values), @@ -332,7 +332,7 @@ class IndexSingleRow: param_names = ["unique_cols"] def setup(self, unique_cols): - arr = np.arange(10 ** 7).reshape(-1, 10) + arr = np.arange(10**7).reshape(-1, 10) df = DataFrame(arr) dtypes = ["u1", "u2", "u4", "u8", "i1", "i2", "i4", "i8", "f8", "f4"] for i, d in enumerate(dtypes): @@ -364,7 +364,7 @@ def time_frame_assign_timeseries_index(self): class InsertColumns: def setup(self): - self.N = 10 ** 3 + self.N = 10**3 self.df = DataFrame(index=range(self.N)) self.df2 = DataFrame(np.random.randn(self.N, 2)) diff --git a/asv_bench/benchmarks/indexing_engines.py b/asv_bench/benchmarks/indexing_engines.py index 60e07a9d1469c..0c6cb89f49da1 100644 --- a/asv_bench/benchmarks/indexing_engines.py +++ b/asv_bench/benchmarks/indexing_engines.py @@ -36,7 +36,7 @@ class NumericEngineIndexing: _get_numeric_engines(), ["monotonic_incr", "monotonic_decr", "non_monotonic"], [True, False], - [10 ** 5, 2 * 10 ** 6], # 2e6 is above SIZE_CUTOFF + [10**5, 2 * 10**6], # 2e6 is above SIZE_CUTOFF ] param_names = ["engine_and_dtype", "index_type", "unique", "N"] @@ -86,7 +86,7 @@ class ObjectEngineIndexing: param_names = ["index_type"] def setup(self, index_type): - N = 10 ** 5 + N = 10**5 values = list("a" * N + "b" * N + "c" * N) arr = { "monotonic_incr": np.array(values, dtype=object), diff --git 
a/asv_bench/benchmarks/inference.py b/asv_bench/benchmarks/inference.py index a5a7bc5b5c8bd..0bbb599f2b045 100644 --- a/asv_bench/benchmarks/inference.py +++ b/asv_bench/benchmarks/inference.py @@ -85,8 +85,8 @@ class MaybeConvertNumeric: # go in benchmarks/libs.py def setup_cache(self): - N = 10 ** 6 - arr = np.repeat([2 ** 63], N) + np.arange(N).astype("uint64") + N = 10**6 + arr = np.repeat([2**63], N) + np.arange(N).astype("uint64") data = arr.astype(object) data[1::2] = arr[1::2].astype(str) data[-1] = -1 @@ -101,7 +101,7 @@ class MaybeConvertObjects: # does have some run-time imports from outside of _libs def setup(self): - N = 10 ** 5 + N = 10**5 data = list(range(N)) data[0] = NaT diff --git a/asv_bench/benchmarks/io/csv.py b/asv_bench/benchmarks/io/csv.py index 0b443b29116a2..10aef954a3475 100644 --- a/asv_bench/benchmarks/io/csv.py +++ b/asv_bench/benchmarks/io/csv.py @@ -266,8 +266,8 @@ def time_skipprows(self, skiprows, engine): class ReadUint64Integers(StringIORewind): def setup(self): - self.na_values = [2 ** 63 + 500] - arr = np.arange(10000).astype("uint64") + 2 ** 63 + self.na_values = [2**63 + 500] + arr = np.arange(10000).astype("uint64") + 2**63 self.data1 = StringIO("\n".join(arr.astype(str).tolist())) arr = arr.astype(object) arr[500] = -1 diff --git a/asv_bench/benchmarks/io/json.py b/asv_bench/benchmarks/io/json.py index d1468a238c491..bb09fe0ff634d 100644 --- a/asv_bench/benchmarks/io/json.py +++ b/asv_bench/benchmarks/io/json.py @@ -109,7 +109,7 @@ class ToJSON(BaseIO): param_names = ["orient", "frame"] def setup(self, orient, frame): - N = 10 ** 5 + N = 10**5 ncols = 5 index = date_range("20000101", periods=N, freq="H") timedeltas = timedelta_range(start=1, periods=N, freq="s") @@ -193,7 +193,7 @@ class ToJSONISO(BaseIO): param_names = ["orient"] def setup(self, orient): - N = 10 ** 5 + N = 10**5 index = date_range("20000101", periods=N, freq="H") timedeltas = timedelta_range(start=1, periods=N, freq="s") datetimes = date_range(start=1, 
periods=N, freq="s") @@ -216,7 +216,7 @@ class ToJSONLines(BaseIO): fname = "__test__.json" def setup(self): - N = 10 ** 5 + N = 10**5 ncols = 5 index = date_range("20000101", periods=N, freq="H") timedeltas = timedelta_range(start=1, periods=N, freq="s") diff --git a/asv_bench/benchmarks/join_merge.py b/asv_bench/benchmarks/join_merge.py index ad40adc75c567..dc9b1f501749e 100644 --- a/asv_bench/benchmarks/join_merge.py +++ b/asv_bench/benchmarks/join_merge.py @@ -226,7 +226,7 @@ class I8Merge: param_names = ["how"] def setup(self, how): - low, high, n = -1000, 1000, 10 ** 6 + low, high, n = -1000, 1000, 10**6 self.left = DataFrame( np.random.randint(low, high, (n, 7)), columns=list("ABCDEFG") ) @@ -394,8 +394,8 @@ def time_multiby(self, direction, tolerance): class Align: def setup(self): - size = 5 * 10 ** 5 - rng = np.arange(0, 10 ** 13, 10 ** 7) + size = 5 * 10**5 + rng = np.arange(0, 10**13, 10**7) stamps = np.datetime64("now").view("i8") + rng idx1 = np.sort(np.random.choice(stamps, size, replace=False)) idx2 = np.sort(np.random.choice(stamps, size, replace=False)) diff --git a/asv_bench/benchmarks/multiindex_object.py b/asv_bench/benchmarks/multiindex_object.py index 25df5b0214959..e349ae76ba573 100644 --- a/asv_bench/benchmarks/multiindex_object.py +++ b/asv_bench/benchmarks/multiindex_object.py @@ -203,7 +203,7 @@ class SetOperations: param_names = ["index_structure", "dtype", "method"] def setup(self, index_structure, dtype, method): - N = 10 ** 5 + N = 10**5 level1 = range(1000) level2 = date_range(start="1/1/2000", periods=N // 1000) diff --git a/asv_bench/benchmarks/replace.py b/asv_bench/benchmarks/replace.py index 2a115fb0b4fe3..c4c50f5ca8eb5 100644 --- a/asv_bench/benchmarks/replace.py +++ b/asv_bench/benchmarks/replace.py @@ -9,7 +9,7 @@ class FillNa: param_names = ["inplace"] def setup(self, inplace): - N = 10 ** 6 + N = 10**6 rng = pd.date_range("1/1/2000", periods=N, freq="min") data = np.random.randn(N) data[::2] = np.nan @@ -28,10 +28,10 @@ 
class ReplaceDict: param_names = ["inplace"] def setup(self, inplace): - N = 10 ** 5 - start_value = 10 ** 5 + N = 10**5 + start_value = 10**5 self.to_rep = dict(enumerate(np.arange(N) + start_value)) - self.s = pd.Series(np.random.randint(N, size=10 ** 3)) + self.s = pd.Series(np.random.randint(N, size=10**3)) def time_replace_series(self, inplace): self.s.replace(self.to_rep, inplace=inplace) @@ -44,7 +44,7 @@ class ReplaceList: param_names = ["inplace"] def setup(self, inplace): - self.df = pd.DataFrame({"A": 0, "B": 0}, index=range(4 * 10 ** 7)) + self.df = pd.DataFrame({"A": 0, "B": 0}, index=range(4 * 10**7)) def time_replace_list(self, inplace): self.df.replace([np.inf, -np.inf], np.nan, inplace=inplace) @@ -60,7 +60,7 @@ class Convert: param_names = ["constructor", "replace_data"] def setup(self, constructor, replace_data): - N = 10 ** 3 + N = 10**3 data = { "Series": pd.Series(np.random.randint(N, size=N)), "DataFrame": pd.DataFrame( diff --git a/asv_bench/benchmarks/reshape.py b/asv_bench/benchmarks/reshape.py index c83cd9a925f6d..7046c8862b0d7 100644 --- a/asv_bench/benchmarks/reshape.py +++ b/asv_bench/benchmarks/reshape.py @@ -259,7 +259,7 @@ class Cut: param_names = ["bins"] def setup(self, bins): - N = 10 ** 5 + N = 10**5 self.int_series = pd.Series(np.arange(N).repeat(5)) self.float_series = pd.Series(np.random.randn(N).repeat(5)) self.timedelta_series = pd.Series( diff --git a/asv_bench/benchmarks/rolling.py b/asv_bench/benchmarks/rolling.py index 1c53d4adc8c25..d65a1a39e8bc7 100644 --- a/asv_bench/benchmarks/rolling.py +++ b/asv_bench/benchmarks/rolling.py @@ -16,7 +16,7 @@ class Methods: param_names = ["constructor", "window_kwargs", "dtype", "method"] def setup(self, constructor, window_kwargs, dtype, method): - N = 10 ** 5 + N = 10**5 window, kwargs = window_kwargs arr = (100 * np.random.random(N)).astype(dtype) obj = getattr(pd, constructor)(arr) @@ -40,7 +40,7 @@ class Apply: param_names = ["constructor", "window", "dtype", "function", "raw"] 
def setup(self, constructor, window, dtype, function, raw): - N = 10 ** 3 + N = 10**3 arr = (100 * np.random.random(N)).astype(dtype) self.roll = getattr(pd, constructor)(arr).rolling(window) @@ -67,7 +67,7 @@ class NumbaEngineMethods: ] def setup(self, constructor, dtype, window_kwargs, method, parallel, cols): - N = 10 ** 3 + N = 10**3 window, kwargs = window_kwargs shape = (N, cols) if cols is not None and constructor != "Series" else N arr = (100 * np.random.random(shape)).astype(dtype) @@ -107,7 +107,7 @@ class NumbaEngineApply: ] def setup(self, constructor, dtype, window_kwargs, function, parallel, cols): - N = 10 ** 3 + N = 10**3 window, kwargs = window_kwargs shape = (N, cols) if cols is not None and constructor != "Series" else N arr = (100 * np.random.random(shape)).astype(dtype) @@ -140,7 +140,7 @@ class EWMMethods: ( { "halflife": "1 Day", - "times": pd.date_range("1900", periods=10 ** 5, freq="23s"), + "times": pd.date_range("1900", periods=10**5, freq="23s"), }, "mean", ), @@ -150,7 +150,7 @@ class EWMMethods: param_names = ["constructor", "kwargs_method", "dtype"] def setup(self, constructor, kwargs_method, dtype): - N = 10 ** 5 + N = 10**5 kwargs, method = kwargs_method arr = (100 * np.random.random(N)).astype(dtype) self.method = method @@ -170,7 +170,7 @@ class VariableWindowMethods(Methods): param_names = ["constructor", "window", "dtype", "method"] def setup(self, constructor, window, dtype, method): - N = 10 ** 5 + N = 10**5 arr = (100 * np.random.random(N)).astype(dtype) index = pd.date_range("2017-01-01", periods=N, freq="5s") self.window = getattr(pd, constructor)(arr, index=index).rolling(window) @@ -186,7 +186,7 @@ class Pairwise: param_names = ["window_kwargs", "method", "pairwise"] def setup(self, kwargs_window, method, pairwise): - N = 10 ** 4 + N = 10**4 n_groups = 20 kwargs, window = kwargs_window groups = [i for _ in range(N // n_groups) for i in range(n_groups)] @@ -215,7 +215,7 @@ class Quantile: param_names = ["constructor", 
"window", "dtype", "percentile"] def setup(self, constructor, window, dtype, percentile, interpolation): - N = 10 ** 5 + N = 10**5 arr = np.random.random(N).astype(dtype) self.roll = getattr(pd, constructor)(arr).rolling(window) @@ -242,7 +242,7 @@ class Rank: ] def setup(self, constructor, window, dtype, percentile, ascending, method): - N = 10 ** 5 + N = 10**5 arr = np.random.random(N).astype(dtype) self.roll = getattr(pd, constructor)(arr).rolling(window) @@ -255,7 +255,7 @@ class PeakMemFixedWindowMinMax: params = ["min", "max"] def setup(self, operation): - N = 10 ** 6 + N = 10**6 arr = np.random.random(N) self.roll = pd.Series(arr).rolling(2) @@ -274,7 +274,7 @@ class ForwardWindowMethods: param_names = ["constructor", "window_size", "dtype", "method"] def setup(self, constructor, window_size, dtype, method): - N = 10 ** 5 + N = 10**5 arr = np.random.random(N).astype(dtype) indexer = pd.api.indexers.FixedForwardWindowIndexer(window_size=window_size) self.roll = getattr(pd, constructor)(arr).rolling(window=indexer) diff --git a/asv_bench/benchmarks/series_methods.py b/asv_bench/benchmarks/series_methods.py index d8578ed604ae3..b9907b5281aa7 100644 --- a/asv_bench/benchmarks/series_methods.py +++ b/asv_bench/benchmarks/series_methods.py @@ -32,7 +32,7 @@ class ToFrame: param_names = ["dtype", "name"] def setup(self, dtype, name): - arr = np.arange(10 ** 5) + arr = np.arange(10**5) ser = Series(arr, dtype=dtype) self.ser = ser @@ -61,7 +61,7 @@ class Dropna: param_names = ["dtype"] def setup(self, dtype): - N = 10 ** 6 + N = 10**6 data = { "int": np.random.randint(1, 10, N), "datetime": date_range("2000-01-01", freq="S", periods=N), @@ -94,7 +94,7 @@ class SearchSorted: param_names = ["dtype"] def setup(self, dtype): - N = 10 ** 5 + N = 10**5 data = np.array([1] * N + [2] * N + [3] * N).astype(dtype) self.s = Series(data) @@ -130,7 +130,7 @@ def time_map(self, mapper, *args, **kwargs): class Clip: - params = [50, 1000, 10 ** 5] + params = [50, 1000, 10**5] 
param_names = ["n"] def setup(self, n): @@ -142,7 +142,7 @@ def time_clip(self, n): class ValueCounts: - params = [[10 ** 3, 10 ** 4, 10 ** 5], ["int", "uint", "float", "object"]] + params = [[10**3, 10**4, 10**5], ["int", "uint", "float", "object"]] param_names = ["N", "dtype"] def setup(self, N, dtype): @@ -154,7 +154,7 @@ def time_value_counts(self, N, dtype): class ValueCountsObjectDropNAFalse: - params = [10 ** 3, 10 ** 4, 10 ** 5] + params = [10**3, 10**4, 10**5] param_names = ["N"] def setup(self, N): @@ -166,7 +166,7 @@ def time_value_counts(self, N): class Mode: - params = [[10 ** 3, 10 ** 4, 10 ** 5], ["int", "uint", "float", "object"]] + params = [[10**3, 10**4, 10**5], ["int", "uint", "float", "object"]] param_names = ["N", "dtype"] def setup(self, N, dtype): @@ -178,7 +178,7 @@ def time_mode(self, N, dtype): class ModeObjectDropNAFalse: - params = [10 ** 3, 10 ** 4, 10 ** 5] + params = [10**3, 10**4, 10**5] param_names = ["N"] def setup(self, N): @@ -199,7 +199,7 @@ def time_dir_strings(self): class SeriesGetattr: # https://github.com/pandas-dev/pandas/issues/19764 def setup(self): - self.s = Series(1, index=date_range("2012-01-01", freq="s", periods=10 ** 6)) + self.s = Series(1, index=date_range("2012-01-01", freq="s", periods=10**6)) def time_series_datetimeindex_repr(self): getattr(self.s, "a", None) @@ -207,7 +207,7 @@ def time_series_datetimeindex_repr(self): class All: - params = [[10 ** 3, 10 ** 6], ["fast", "slow"], ["bool", "boolean"]] + params = [[10**3, 10**6], ["fast", "slow"], ["bool", "boolean"]] param_names = ["N", "case", "dtype"] def setup(self, N, case, dtype): @@ -220,7 +220,7 @@ def time_all(self, N, case, dtype): class Any: - params = [[10 ** 3, 10 ** 6], ["fast", "slow"], ["bool", "boolean"]] + params = [[10**3, 10**6], ["fast", "slow"], ["bool", "boolean"]] param_names = ["N", "case", "dtype"] def setup(self, N, case, dtype): @@ -248,7 +248,7 @@ class NanOps: "kurt", "prod", ], - [10 ** 3, 10 ** 6], + [10**3, 10**6], ["int8", 
"int32", "int64", "float64", "Int64", "boolean"], ] param_names = ["func", "N", "dtype"] diff --git a/asv_bench/benchmarks/sparse.py b/asv_bench/benchmarks/sparse.py index ec704896f5726..ff6bb582e1af5 100644 --- a/asv_bench/benchmarks/sparse.py +++ b/asv_bench/benchmarks/sparse.py @@ -40,7 +40,7 @@ class SparseArrayConstructor: param_names = ["dense_proportion", "fill_value", "dtype"] def setup(self, dense_proportion, fill_value, dtype): - N = 10 ** 6 + N = 10**6 self.array = make_array(N, dense_proportion, fill_value, dtype) def time_sparse_array(self, dense_proportion, fill_value, dtype): @@ -111,7 +111,7 @@ class Arithmetic: param_names = ["dense_proportion", "fill_value"] def setup(self, dense_proportion, fill_value): - N = 10 ** 6 + N = 10**6 arr1 = make_array(N, dense_proportion, fill_value, np.int64) self.array1 = SparseArray(arr1, fill_value=fill_value) arr2 = make_array(N, dense_proportion, fill_value, np.int64) @@ -136,7 +136,7 @@ class ArithmeticBlock: param_names = ["fill_value"] def setup(self, fill_value): - N = 10 ** 6 + N = 10**6 self.arr1 = self.make_block_array( length=N, num_blocks=1000, block_size=10, fill_value=fill_value ) diff --git a/asv_bench/benchmarks/stat_ops.py b/asv_bench/benchmarks/stat_ops.py index 5639d6702a92c..92a78b7c2f63d 100644 --- a/asv_bench/benchmarks/stat_ops.py +++ b/asv_bench/benchmarks/stat_ops.py @@ -83,7 +83,7 @@ class Rank: param_names = ["constructor", "pct"] def setup(self, constructor, pct): - values = np.random.randn(10 ** 5) + values = np.random.randn(10**5) self.data = getattr(pd, constructor)(values) def time_rank(self, constructor, pct): diff --git a/asv_bench/benchmarks/strings.py b/asv_bench/benchmarks/strings.py index 32fbf4e6c7de3..3ac59c5883742 100644 --- a/asv_bench/benchmarks/strings.py +++ b/asv_bench/benchmarks/strings.py @@ -17,7 +17,7 @@ class Dtypes: def setup(self, dtype): try: - self.s = Series(tm.makeStringIndex(10 ** 5), dtype=dtype) + self.s = Series(tm.makeStringIndex(10**5), dtype=dtype) 
except ImportError: raise NotImplementedError @@ -28,7 +28,7 @@ class Construction: param_names = ["dtype"] def setup(self, dtype): - self.series_arr = tm.rands_array(nchars=10, size=10 ** 5) + self.series_arr = tm.rands_array(nchars=10, size=10**5) self.frame_arr = self.series_arr.reshape((50_000, 2)).copy() # GH37371. Testing construction of string series/frames from ExtensionArrays @@ -180,7 +180,7 @@ class Repeat: param_names = ["repeats"] def setup(self, repeats): - N = 10 ** 5 + N = 10**5 self.s = Series(tm.makeStringIndex(N)) repeat = {"int": 1, "array": np.random.randint(1, 3, N)} self.values = repeat[repeats] @@ -195,7 +195,7 @@ class Cat: param_names = ["other_cols", "sep", "na_rep", "na_frac"] def setup(self, other_cols, sep, na_rep, na_frac): - N = 10 ** 5 + N = 10**5 mask_gen = lambda: np.random.choice([True, False], N, p=[1 - na_frac, na_frac]) self.s = Series(tm.makeStringIndex(N)).where(mask_gen()) if other_cols == 0: diff --git a/asv_bench/benchmarks/timeseries.py b/asv_bench/benchmarks/timeseries.py index 5b123c7127c28..9373edadb8e90 100644 --- a/asv_bench/benchmarks/timeseries.py +++ b/asv_bench/benchmarks/timeseries.py @@ -131,7 +131,7 @@ class Iteration: param_names = ["time_index"] def setup(self, time_index): - N = 10 ** 6 + N = 10**6 if time_index is timedelta_range: self.idx = time_index(start=0, freq="T", periods=N) else: @@ -247,7 +247,7 @@ class SortIndex: param_names = ["monotonic"] def setup(self, monotonic): - N = 10 ** 5 + N = 10**5 idx = date_range(start="1/1/2000", periods=N, freq="s") self.s = Series(np.random.randn(N), index=idx) if not monotonic: diff --git a/asv_bench/benchmarks/tslibs/normalize.py b/asv_bench/benchmarks/tslibs/normalize.py index f5f7adbf63995..69732018aea9a 100644 --- a/asv_bench/benchmarks/tslibs/normalize.py +++ b/asv_bench/benchmarks/tslibs/normalize.py @@ -31,7 +31,7 @@ def setup(self, size, tz): dti = pd.date_range("2016-01-01", periods=10, tz=tz).repeat(size // 10) self.i8data = dti.asi8 - if size == 10 
** 6 and tz is tzlocal_obj: + if size == 10**6 and tz is tzlocal_obj: # tzlocal is cumbersomely slow, so skip to keep runtime in check raise NotImplementedError diff --git a/asv_bench/benchmarks/tslibs/period.py b/asv_bench/benchmarks/tslibs/period.py index 15a922da7ee76..6cb1011e3c037 100644 --- a/asv_bench/benchmarks/tslibs/period.py +++ b/asv_bench/benchmarks/tslibs/period.py @@ -130,7 +130,7 @@ class TimeDT64ArrToPeriodArr: param_names = ["size", "freq", "tz"] def setup(self, size, freq, tz): - if size == 10 ** 6 and tz is tzlocal_obj: + if size == 10**6 and tz is tzlocal_obj: # tzlocal is cumbersomely slow, so skip to keep runtime in check raise NotImplementedError diff --git a/asv_bench/benchmarks/tslibs/resolution.py b/asv_bench/benchmarks/tslibs/resolution.py index 4b52efc188bf4..44f288c7de216 100644 --- a/asv_bench/benchmarks/tslibs/resolution.py +++ b/asv_bench/benchmarks/tslibs/resolution.py @@ -40,7 +40,7 @@ class TimeResolution: param_names = ["unit", "size", "tz"] def setup(self, unit, size, tz): - if size == 10 ** 6 and tz is tzlocal_obj: + if size == 10**6 and tz is tzlocal_obj: # tzlocal is cumbersomely slow, so skip to keep runtime in check raise NotImplementedError diff --git a/asv_bench/benchmarks/tslibs/tslib.py b/asv_bench/benchmarks/tslibs/tslib.py index 180f95e7fbda5..f93ef1cef841f 100644 --- a/asv_bench/benchmarks/tslibs/tslib.py +++ b/asv_bench/benchmarks/tslibs/tslib.py @@ -41,7 +41,7 @@ gettz("Asia/Tokyo"), tzlocal_obj, ] -_sizes = [0, 1, 100, 10 ** 4, 10 ** 6] +_sizes = [0, 1, 100, 10**4, 10**6] class TimeIntsToPydatetime: @@ -57,7 +57,7 @@ def setup(self, box, size, tz): if box == "date" and tz is not None: # tz is ignored, so avoid running redundant benchmarks raise NotImplementedError # skip benchmark - if size == 10 ** 6 and tz is _tzs[-1]: + if size == 10**6 and tz is _tzs[-1]: # This is cumbersomely-slow, so skip to trim runtime raise NotImplementedError # skip benchmark diff --git a/asv_bench/benchmarks/tslibs/tz_convert.py 
b/asv_bench/benchmarks/tslibs/tz_convert.py index 793f43e9bbe35..803c2aaa635b0 100644 --- a/asv_bench/benchmarks/tslibs/tz_convert.py +++ b/asv_bench/benchmarks/tslibs/tz_convert.py @@ -25,7 +25,7 @@ class TimeTZConvert: param_names = ["size", "tz"] def setup(self, size, tz): - if size == 10 ** 6 and tz is tzlocal_obj: + if size == 10**6 and tz is tzlocal_obj: # tzlocal is cumbersomely slow, so skip to keep runtime in check raise NotImplementedError diff --git a/environment.yml b/environment.yml index 0a87e2b56f4cb..753e210e6066a 100644 --- a/environment.yml +++ b/environment.yml @@ -18,7 +18,7 @@ dependencies: - cython>=0.29.24 # code checks - - black=21.5b2 + - black=22.3.0 - cpplint - flake8=4.0.1 - flake8-bugbear=21.3.2 # used by flake8, find likely bugs diff --git a/pandas/_testing/asserters.py b/pandas/_testing/asserters.py index 77e4065008804..4fa9c1aabe716 100644 --- a/pandas/_testing/asserters.py +++ b/pandas/_testing/asserters.py @@ -212,7 +212,7 @@ def _get_tol_from_less_precise(check_less_precise: bool | int) -> float: return 0.5e-5 else: # Equivalent to setting checking_less_precise=<decimals> - return 0.5 * 10 ** -check_less_precise + return 0.5 * 10**-check_less_precise def _check_isinstance(left, right, cls): diff --git a/pandas/compat/__init__.py b/pandas/compat/__init__.py index 703e67b88fec8..d0d7d6a1b8da6 100644 --- a/pandas/compat/__init__.py +++ b/pandas/compat/__init__.py @@ -27,7 +27,7 @@ PY39 = sys.version_info >= (3, 9) PY310 = sys.version_info >= (3, 10) PYPY = platform.python_implementation() == "PyPy" -IS64 = sys.maxsize > 2 ** 32 +IS64 = sys.maxsize > 2**32 def set_function_name(f: F, name: str, cls) -> F: diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py index 00d5f113e16e0..6843e4f7eeb58 100644 --- a/pandas/core/arrays/datetimes.py +++ b/pandas/core/arrays/datetimes.py @@ -1923,8 +1923,8 @@ def to_julian_date(self) -> np.ndarray: self.hour + self.minute / 60 + self.second / 3600 - + self.microsecond / 
3600 / 10 ** 6 - + self.nanosecond / 3600 / 10 ** 9 + + self.microsecond / 3600 / 10**6 + + self.nanosecond / 3600 / 10**9 ) / 24 ) diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py index bf2d770ee1e7f..dd106b6dbb63c 100644 --- a/pandas/core/config_init.py +++ b/pandas/core/config_init.py @@ -876,7 +876,7 @@ def register_converter_cb(key): cf.register_option( "render.max_elements", - 2 ** 18, + 2**18, styler_max_elements, validator=is_nonnegative_int, ) diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py index 40664f178993e..9dcc1c8222791 100644 --- a/pandas/core/nanops.py +++ b/pandas/core/nanops.py @@ -1202,7 +1202,7 @@ def nanskew( adjusted = values - mean if skipna and mask is not None: np.putmask(adjusted, mask, 0) - adjusted2 = adjusted ** 2 + adjusted2 = adjusted**2 adjusted3 = adjusted2 * adjusted m2 = adjusted2.sum(axis, dtype=np.float64) m3 = adjusted3.sum(axis, dtype=np.float64) @@ -1215,7 +1215,7 @@ def nanskew( m3 = _zero_out_fperr(m3) with np.errstate(invalid="ignore", divide="ignore"): - result = (count * (count - 1) ** 0.5 / (count - 2)) * (m3 / m2 ** 1.5) + result = (count * (count - 1) ** 0.5 / (count - 2)) * (m3 / m2**1.5) dtype = values.dtype if is_float_dtype(dtype): @@ -1290,15 +1290,15 @@ def nankurt( adjusted = values - mean if skipna and mask is not None: np.putmask(adjusted, mask, 0) - adjusted2 = adjusted ** 2 - adjusted4 = adjusted2 ** 2 + adjusted2 = adjusted**2 + adjusted4 = adjusted2**2 m2 = adjusted2.sum(axis, dtype=np.float64) m4 = adjusted4.sum(axis, dtype=np.float64) with np.errstate(invalid="ignore", divide="ignore"): adj = 3 * (count - 1) ** 2 / ((count - 2) * (count - 3)) numerator = count * (count + 1) * (count - 1) * m4 - denominator = (count - 2) * (count - 3) * m2 ** 2 + denominator = (count - 2) * (count - 3) * m2**2 # floating point error # diff --git a/pandas/core/reshape/melt.py b/pandas/core/reshape/melt.py index 4d5b11546a42f..262cd9774f694 100644 --- a/pandas/core/reshape/melt.py +++ 
b/pandas/core/reshape/melt.py
@@ -494,7 +494,7 @@ def wide_to_long(
     """

     def get_var_names(df, stub: str, sep: str, suffix: str) -> list[str]:
-        regex = fr"^{re.escape(stub)}{re.escape(sep)}{suffix}$"
+        regex = rf"^{re.escape(stub)}{re.escape(sep)}{suffix}$"
         pattern = re.compile(regex)
         return [col for col in df.columns if pattern.match(col)]

diff --git a/pandas/core/roperator.py b/pandas/core/roperator.py
index e6691ddf8984e..15b16b6fa976a 100644
--- a/pandas/core/roperator.py
+++ b/pandas/core/roperator.py
@@ -45,7 +45,7 @@ def rdivmod(left, right):


 def rpow(left, right):
-    return right ** left
+    return right**left


 def rand_(left, right):
diff --git a/pandas/io/common.py b/pandas/io/common.py
index eaf6f6475ec84..0331d320725ac 100644
--- a/pandas/io/common.py
+++ b/pandas/io/common.py
@@ -568,7 +568,7 @@ def check_parent_directory(path: Path | str) -> None:
     """
     parent = Path(path).parent
     if not parent.is_dir():
-        raise OSError(fr"Cannot save file into a non-existent directory: '{parent}'")
+        raise OSError(rf"Cannot save file into a non-existent directory: '{parent}'")


 @overload
diff --git a/pandas/io/formats/excel.py b/pandas/io/formats/excel.py
index 5992d5d6908ce..53627cc8bd753 100644
--- a/pandas/io/formats/excel.py
+++ b/pandas/io/formats/excel.py
@@ -471,8 +471,8 @@ class ExcelFormatter:
         This is only called for body cells.
     """

-    max_rows = 2 ** 20
-    max_cols = 2 ** 14
+    max_rows = 2**20
+    max_cols = 2**14

     def __init__(
         self,
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 616331bf80a44..3795fbaab9122 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -1744,7 +1744,7 @@ def is_dates_only(values: np.ndarray | DatetimeArray | Index | DatetimeIndex) ->
     values_int = values.asi8
     consider_values = values_int != iNaT
-    one_day_nanos = 86400 * 10 ** 9
+    one_day_nanos = 86400 * 10**9
     even_days = (
         np.logical_and(consider_values, values_int % int(one_day_nanos) != 0).sum() == 0
     )
@@ -1851,7 +1851,7 @@ def get_format_timedelta64(

     consider_values = values_int != iNaT

-    one_day_nanos = 86400 * 10 ** 9
+    one_day_nanos = 86400 * 10**9
     # error: Unsupported operand types for % ("ExtensionArray" and "int")
     not_midnight = values_int % one_day_nanos != 0  # type: ignore[operator]
     # error: Argument 1 to "__call__" of "ufunc" has incompatible type
@@ -1962,7 +1962,7 @@ def _trim_zeros_float(
     necessary.
     """
     trimmed = str_floats
-    number_regex = re.compile(fr"^\s*[\+-]?[0-9]+\{decimal}[0-9]*$")
+    number_regex = re.compile(rf"^\s*[\+-]?[0-9]+\{decimal}[0-9]*$")

     def is_number_with_decimal(x):
         return re.match(number_regex, x) is not None
@@ -2079,7 +2079,7 @@ def __call__(self, num: int | float) -> str:
         else:
             prefix = f"E+{int_pow10:02d}"

-        mant = sign * dnum / (10 ** pow10)
+        mant = sign * dnum / (10**pow10)

         if self.accuracy is None:  # pragma: no cover
             format_str = "{mant: g}{prefix}"
diff --git a/pandas/io/parsers/python_parser.py b/pandas/io/parsers/python_parser.py
index 52fa3be4ff418..04639123c5cfc 100644
--- a/pandas/io/parsers/python_parser.py
+++ b/pandas/io/parsers/python_parser.py
@@ -158,12 +158,12 @@ def __init__(self, f: ReadCsvBuffer[str] | list, **kwds):
         decimal = re.escape(self.decimal)
         if self.thousands is None:
-            regex = fr"^[\-\+]?[0-9]*({decimal}[0-9]*)?([0-9]?(E|e)\-?[0-9]+)?$"
+            regex = rf"^[\-\+]?[0-9]*({decimal}[0-9]*)?([0-9]?(E|e)\-?[0-9]+)?$"
         else:
             thousands = re.escape(self.thousands)
             regex = (
-                fr"^[\-\+]?([0-9]+{thousands}|[0-9])*({decimal}[0-9]*)?"
-                fr"([0-9]?(E|e)\-?[0-9]+)?$"
+                rf"^[\-\+]?([0-9]+{thousands}|[0-9])*({decimal}[0-9]*)?"
+                rf"([0-9]?(E|e)\-?[0-9]+)?$"
             )
         self.num = re.compile(regex)
@@ -1209,7 +1209,7 @@ def detect_colspecs(
         self, infer_nrows: int = 100, skiprows: set[int] | None = None
     ) -> list[tuple[int, int]]:
         # Regex escape the delimiters
-        delimiters = "".join([fr"\{x}" for x in self.delimiter])
+        delimiters = "".join([rf"\{x}" for x in self.delimiter])
         pattern = re.compile(f"([^{delimiters}]+)")
         rows = self.get_rows(infer_nrows, skiprows)
         if not rows:
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
index 4a50a3dabe5e7..b4b4291af59c1 100644
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -605,7 +605,7 @@ def _cast_to_stata_types(data: DataFrame) -> DataFrame:
             else:
                 dtype = c_data[2]
                 if c_data[2] == np.int64:  # Warn if necessary
-                    if data[col].max() >= 2 ** 53:
+                    if data[col].max() >= 2**53:
                         ws = precision_loss_doc.format("uint64", "float64")
             data[col] = data[col].astype(dtype)
@@ -622,7 +622,7 @@ def _cast_to_stata_types(data: DataFrame) -> DataFrame:
                 data[col] = data[col].astype(np.int32)
             else:
                 data[col] = data[col].astype(np.float64)
-                if data[col].max() >= 2 ** 53 or data[col].min() <= -(2 ** 53):
+                if data[col].max() >= 2**53 or data[col].min() <= -(2**53):
                     ws = precision_loss_doc.format("int64", "float64")
         elif dtype in (np.float32, np.float64):
             value = data[col].max()
diff --git a/pandas/plotting/_matplotlib/converter.py b/pandas/plotting/_matplotlib/converter.py
index 90d3f8d9836bf..8216374732aff 100644
--- a/pandas/plotting/_matplotlib/converter.py
+++ b/pandas/plotting/_matplotlib/converter.py
@@ -63,7 +63,7 @@
 SEC_PER_HOUR = SEC_PER_MIN * MIN_PER_HOUR
 SEC_PER_DAY = SEC_PER_HOUR * HOURS_PER_DAY

-MUSEC_PER_DAY = 10 ** 6 * SEC_PER_DAY
+MUSEC_PER_DAY = 10**6 * SEC_PER_DAY

 _mpl_units = {}  # Cache for units overwritten by us

@@ -141,7 +141,7 @@ def deregister():


 def _to_ordinalf(tm: pydt.time) -> float:
-    tot_sec = tm.hour * 3600 + tm.minute * 60 + tm.second + tm.microsecond / 10 ** 6
+    tot_sec = tm.hour * 3600 + tm.minute * 60 + tm.second + tm.microsecond / 10**6
     return tot_sec
@@ -207,7 +207,7 @@ def __call__(self, x, pos=0) -> str:
         """
         fmt = "%H:%M:%S.%f"
         s = int(x)
-        msus = round((x - s) * 10 ** 6)
+        msus = round((x - s) * 10**6)
         ms = msus // 1000
         us = msus % 1000
         m, s = divmod(s, 60)
@@ -1084,7 +1084,7 @@ def format_timedelta_ticks(x, pos, n_decimals: int) -> str:
         """
         Convert seconds to 'D days HH:MM:SS.F'
         """
-        s, ns = divmod(x, 10 ** 9)
+        s, ns = divmod(x, 10**9)
         m, s = divmod(s, 60)
         h, m = divmod(m, 60)
         d, h = divmod(h, 24)
@@ -1098,7 +1098,7 @@ def format_timedelta_ticks(x, pos, n_decimals: int) -> str:

     def __call__(self, x, pos=0) -> str:
         (vmin, vmax) = tuple(self.axis.get_view_interval())
-        n_decimals = int(np.ceil(np.log10(100 * 10 ** 9 / abs(vmax - vmin))))
+        n_decimals = int(np.ceil(np.log10(100 * 10**9 / abs(vmax - vmin))))
         if n_decimals > 9:
             n_decimals = 9
         return self.format_timedelta_ticks(x, pos, n_decimals)
diff --git a/pandas/plotting/_matplotlib/tools.py b/pandas/plotting/_matplotlib/tools.py
index 5314a61191d78..7bd90a4e4d908 100644
--- a/pandas/plotting/_matplotlib/tools.py
+++ b/pandas/plotting/_matplotlib/tools.py
@@ -117,7 +117,7 @@ def _get_layout(nplots: int, layout=None, layout_type: str = "box") -> tuple[int
         return layouts[nplots]
     except KeyError:
         k = 1
-        while k ** 2 < nplots:
+        while k**2 < nplots:
             k += 1

         if (k - 1) * k >= nplots:
diff --git a/pandas/tests/apply/test_series_apply.py b/pandas/tests/apply/test_series_apply.py
index b7084e2bc6dc7..3b6bdeea3b3c8 100644
--- a/pandas/tests/apply/test_series_apply.py
+++ b/pandas/tests/apply/test_series_apply.py
@@ -378,11 +378,11 @@ def test_agg_apply_evaluate_lambdas_the_same(string_series):
 def test_with_nested_series(datetime_series):
     # GH 2316
     # .agg with a reducer and a transform, what to do
-    result = datetime_series.apply(lambda x: Series([x, x ** 2], index=["x", "x^2"]))
-    expected = DataFrame({"x": datetime_series, "x^2": datetime_series ** 2})
+    result = datetime_series.apply(lambda x: Series([x, x**2], index=["x", "x^2"]))
+    expected = DataFrame({"x": datetime_series, "x^2": datetime_series**2})
     tm.assert_frame_equal(result, expected)

-    result = datetime_series.agg(lambda x: Series([x, x ** 2], index=["x", "x^2"]))
+    result = datetime_series.agg(lambda x: Series([x, x**2], index=["x", "x^2"]))
     tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/arithmetic/test_numeric.py b/pandas/tests/arithmetic/test_numeric.py
index bc7a929ecaa4a..22d20d7fe2356 100644
--- a/pandas/tests/arithmetic/test_numeric.py
+++ b/pandas/tests/arithmetic/test_numeric.py
@@ -120,12 +120,12 @@ def test_numeric_cmp_string_numexpr_path(self, box_with_array):
         box = box_with_array
         xbox = box if box is not Index else np.ndarray

-        obj = Series(np.random.randn(10 ** 5))
+        obj = Series(np.random.randn(10**5))
         obj = tm.box_expected(obj, box, transpose=False)

         result = obj == "a"

-        expected = Series(np.zeros(10 ** 5, dtype=bool))
+        expected = Series(np.zeros(10**5, dtype=bool))
         expected = tm.box_expected(expected, xbox, transpose=False)
         tm.assert_equal(result, expected)

@@ -231,7 +231,7 @@ def test_numeric_arr_mul_tdscalar_numexpr_path(
         # GH#44772 for the float64 case
         box = box_with_array

-        arr_i8 = np.arange(2 * 10 ** 4).astype(np.int64, copy=False)
+        arr_i8 = np.arange(2 * 10**4).astype(np.int64, copy=False)
         arr = arr_i8.astype(dtype, copy=False)
         obj = tm.box_expected(arr, box, transpose=False)

@@ -676,7 +676,7 @@ def test_mul_index(self, numeric_idx):
         idx = numeric_idx

         result = idx * idx
-        tm.assert_index_equal(result, idx ** 2)
+        tm.assert_index_equal(result, idx**2)

     def test_mul_datelike_raises(self, numeric_idx):
         idx = numeric_idx
@@ -775,7 +775,7 @@ def test_operators_frame(self):
         df = pd.DataFrame({"A": ts})

         tm.assert_series_equal(ts + ts, ts + df["A"], check_names=False)
-        tm.assert_series_equal(ts ** ts, ts ** df["A"], check_names=False)
+        tm.assert_series_equal(ts**ts, ts ** df["A"], check_names=False)
         tm.assert_series_equal(ts < ts, ts < df["A"], check_names=False)
         tm.assert_series_equal(ts / ts, ts / df["A"], check_names=False)
@@ -1269,7 +1269,7 @@ def test_numeric_compat2(self):

         # __pow__
         idx = RangeIndex(0, 1000, 2)
-        result = idx ** 2
+        result = idx**2
         expected = Int64Index(idx._values) ** 2
         tm.assert_index_equal(Index(result.values), expected, exact=True)

@@ -1393,7 +1393,7 @@ def test_sub_multiindex_swapped_levels():
 @pytest.mark.parametrize("string_size", [0, 1, 2, 5])
 def test_empty_str_comparison(power, string_size):
     # GH 37348
-    a = np.array(range(10 ** power))
+    a = np.array(range(10**power))
     right = pd.DataFrame(a, dtype=np.int64)
     left = " " * string_size

diff --git a/pandas/tests/arithmetic/test_object.py b/pandas/tests/arithmetic/test_object.py
index c96d7c01ec97f..e107ff6b65c0f 100644
--- a/pandas/tests/arithmetic/test_object.py
+++ b/pandas/tests/arithmetic/test_object.py
@@ -80,12 +80,12 @@ def test_pow_ops_object(self):
         # pow is weird with masking & 1, so testing here
         a = Series([1, np.nan, 1, np.nan], dtype=object)
         b = Series([1, np.nan, np.nan, 1], dtype=object)
-        result = a ** b
-        expected = Series(a.values ** b.values, dtype=object)
+        result = a**b
+        expected = Series(a.values**b.values, dtype=object)
         tm.assert_series_equal(result, expected)

-        result = b ** a
-        expected = Series(b.values ** a.values, dtype=object)
+        result = b**a
+        expected = Series(b.values**a.values, dtype=object)
         tm.assert_series_equal(result, expected)

diff --git a/pandas/tests/arithmetic/test_timedelta64.py b/pandas/tests/arithmetic/test_timedelta64.py
index 543531889531a..3bc38c3e38213 100644
--- a/pandas/tests/arithmetic/test_timedelta64.py
+++ b/pandas/tests/arithmetic/test_timedelta64.py
@@ -1474,7 +1474,7 @@ def test_tdi_mul_int_array_zerodim(self, box_with_array):
     def test_tdi_mul_int_array(self, box_with_array):
         rng5 = np.arange(5, dtype="int64")
         idx = TimedeltaIndex(rng5)
-        expected = TimedeltaIndex(rng5 ** 2)
+        expected = TimedeltaIndex(rng5**2)

         idx = tm.box_expected(idx, box_with_array)
         expected = tm.box_expected(expected, box_with_array)
@@ -2089,10 +2089,10 @@ def test_td64arr_pow_invalid(self, scalar_td, box_with_array):
         # defined
         pattern = "operate|unsupported|cannot|not supported"
         with pytest.raises(TypeError, match=pattern):
-            scalar_td ** td1
+            scalar_td**td1

         with pytest.raises(TypeError, match=pattern):
-            td1 ** scalar_td
+            td1**scalar_td


 def test_add_timestamp_to_timedelta():
diff --git a/pandas/tests/arrays/datetimes/test_constructors.py b/pandas/tests/arrays/datetimes/test_constructors.py
index e6c65499f6fcc..ed285f2389959 100644
--- a/pandas/tests/arrays/datetimes/test_constructors.py
+++ b/pandas/tests/arrays/datetimes/test_constructors.py
@@ -29,7 +29,7 @@ def test_only_1dim_accepted(self):
     def test_freq_validation(self):
         # GH#24623 check that invalid instances cannot be created with the
         # public constructor
-        arr = np.arange(5, dtype=np.int64) * 3600 * 10 ** 9
+        arr = np.arange(5, dtype=np.int64) * 3600 * 10**9

         msg = (
             "Inferred frequency H from passed values does not "
@@ -64,7 +64,7 @@ def test_mixing_naive_tzaware_raises(self, meth):
             meth(obj)

     def test_from_pandas_array(self):
-        arr = pd.array(np.arange(5, dtype=np.int64)) * 3600 * 10 ** 9
+        arr = pd.array(np.arange(5, dtype=np.int64)) * 3600 * 10**9

         result = DatetimeArray._from_sequence(arr)._with_freq("infer")

diff --git a/pandas/tests/arrays/floating/test_arithmetic.py b/pandas/tests/arrays/floating/test_arithmetic.py
index e5f67a2dce3ad..7776e00c201ac 100644
--- a/pandas/tests/arrays/floating/test_arithmetic.py
+++ b/pandas/tests/arrays/floating/test_arithmetic.py
@@ -51,19 +51,19 @@ def test_divide_by_zero(dtype, zero, negative):

 def test_pow_scalar(dtype):
     a = pd.array([-1, 0, 1, None, 2], dtype=dtype)
-    result = a ** 0
+    result = a**0
     expected = pd.array([1, 1, 1, 1, 1], dtype=dtype)
     tm.assert_extension_array_equal(result, expected)

-    result = a ** 1
+    result = a**1
     expected = pd.array([-1, 0, 1, None, 2], dtype=dtype)
     tm.assert_extension_array_equal(result, expected)

-    result = a ** pd.NA
+    result = a**pd.NA
     expected = pd.array([None, None, 1, None, None], dtype=dtype)
     tm.assert_extension_array_equal(result, expected)

-    result = a ** np.nan
+    result = a**np.nan
     # TODO np.nan should be converted to pd.NA / missing before operation?
     expected = FloatingArray(
         np.array([np.nan, np.nan, 1, np.nan, np.nan], dtype=dtype.numpy_dtype),
@@ -74,19 +74,19 @@ def test_pow_scalar(dtype):
     # reversed
     a = a[1:]  # Can't raise integers to negative powers.

-    result = 0 ** a
+    result = 0**a
     expected = pd.array([1, 0, None, 0], dtype=dtype)
     tm.assert_extension_array_equal(result, expected)

-    result = 1 ** a
+    result = 1**a
     expected = pd.array([1, 1, 1, 1], dtype=dtype)
     tm.assert_extension_array_equal(result, expected)

-    result = pd.NA ** a
+    result = pd.NA**a
     expected = pd.array([1, None, None, None], dtype=dtype)
     tm.assert_extension_array_equal(result, expected)

-    result = np.nan ** a
+    result = np.nan**a
     expected = FloatingArray(
         np.array([1, np.nan, np.nan, np.nan], dtype=dtype.numpy_dtype), mask=a._mask
     )
@@ -96,7 +96,7 @@ def test_pow_scalar(dtype):
 def test_pow_array(dtype):
     a = pd.array([0, 0, 0, 1, 1, 1, None, None, None], dtype=dtype)
     b = pd.array([0, 1, None, 0, 1, None, 0, 1, None], dtype=dtype)
-    result = a ** b
+    result = a**b
     expected = pd.array([1, 0, None, 1, 1, 1, 1, None, None], dtype=dtype)
     tm.assert_extension_array_equal(result, expected)

diff --git a/pandas/tests/arrays/integer/test_arithmetic.py b/pandas/tests/arrays/integer/test_arithmetic.py
index 273bd9e4d34d5..bbaa95c8fa5a7 100644
--- a/pandas/tests/arrays/integer/test_arithmetic.py
+++ b/pandas/tests/arrays/integer/test_arithmetic.py
@@ -88,19 +88,19 @@ def test_mod(dtype):

 def test_pow_scalar():
     a = pd.array([-1, 0, 1, None, 2], dtype="Int64")
-    result = a ** 0
+    result = a**0
     expected = pd.array([1, 1, 1, 1, 1], dtype="Int64")
     tm.assert_extension_array_equal(result, expected)

-    result = a ** 1
+    result = a**1
     expected = pd.array([-1, 0, 1, None, 2], dtype="Int64")
     tm.assert_extension_array_equal(result, expected)

-    result = a ** pd.NA
+    result = a**pd.NA
     expected = pd.array([None, None, 1, None, None], dtype="Int64")
     tm.assert_extension_array_equal(result, expected)

-    result = a ** np.nan
+    result = a**np.nan
     expected = FloatingArray(
         np.array([np.nan, np.nan, 1, np.nan, np.nan], dtype="float64"),
         np.array([False, False, False, True, False]),
@@ -110,19 +110,19 @@ def test_pow_scalar():
     # reversed
     a = a[1:]  # Can't raise integers to negative powers.

-    result = 0 ** a
+    result = 0**a
     expected = pd.array([1, 0, None, 0], dtype="Int64")
     tm.assert_extension_array_equal(result, expected)

-    result = 1 ** a
+    result = 1**a
     expected = pd.array([1, 1, 1, 1], dtype="Int64")
     tm.assert_extension_array_equal(result, expected)

-    result = pd.NA ** a
+    result = pd.NA**a
     expected = pd.array([1, None, None, None], dtype="Int64")
     tm.assert_extension_array_equal(result, expected)

-    result = np.nan ** a
+    result = np.nan**a
     expected = FloatingArray(
         np.array([1, np.nan, np.nan, np.nan], dtype="float64"),
         np.array([False, False, True, False]),
@@ -133,7 +133,7 @@ def test_pow_scalar():
 def test_pow_array():
     a = pd.array([0, 0, 0, 1, 1, 1, None, None, None])
     b = pd.array([0, 1, None, 0, 1, None, 0, 1, None])
-    result = a ** b
+    result = a**b
     expected = pd.array([1, 0, None, 1, 1, 1, 1, None, None])
     tm.assert_extension_array_equal(result, expected)

diff --git a/pandas/tests/arrays/integer/test_dtypes.py b/pandas/tests/arrays/integer/test_dtypes.py
index 3911b7f9bad34..2bb6e36bf57b2 100644
--- a/pandas/tests/arrays/integer/test_dtypes.py
+++ b/pandas/tests/arrays/integer/test_dtypes.py
@@ -217,7 +217,7 @@ def test_astype_floating():

 def test_astype_dt64():
     # GH#32435
-    arr = pd.array([1, 2, 3, pd.NA]) * 10 ** 9
+    arr = pd.array([1, 2, 3, pd.NA]) * 10**9

     result = arr.astype("datetime64[ns]")

diff --git a/pandas/tests/arrays/test_datetimelike.py b/pandas/tests/arrays/test_datetimelike.py
index 7484fdccf4937..56dc3363a7f52 100644
--- a/pandas/tests/arrays/test_datetimelike.py
+++ b/pandas/tests/arrays/test_datetimelike.py
@@ -81,7 +81,7 @@ class SharedTests:

     @pytest.fixture
     def arr1d(self):
-        data = np.arange(10, dtype="i8") * 24 * 3600 * 10 ** 9
+        data = np.arange(10, dtype="i8") * 24 * 3600 * 10**9
         arr = self.array_cls(data, freq="D")
         return arr

@@ -148,7 +148,7 @@ def test_compare_categorical_dtype(self, arr1d, as_index, reverse, ordered):
         tm.assert_numpy_array_equal(result, ones)

     def test_take(self):
-        data = np.arange(100, dtype="i8") * 24 * 3600 * 10 ** 9
+        data = np.arange(100, dtype="i8") * 24 * 3600 * 10**9
         np.random.shuffle(data)

         freq = None if self.array_cls is not PeriodArray else "D"
@@ -170,7 +170,7 @@ def test_take(self):

     @pytest.mark.parametrize("fill_value", [2, 2.0, Timestamp(2021, 1, 1, 12).time])
     def test_take_fill_raises(self, fill_value):
-        data = np.arange(10, dtype="i8") * 24 * 3600 * 10 ** 9
+        data = np.arange(10, dtype="i8") * 24 * 3600 * 10**9

         arr = self.array_cls(data, freq="D")

@@ -179,7 +179,7 @@ def test_take_fill_raises(self, fill_value):
             arr.take([0, 1], allow_fill=True, fill_value=fill_value)

     def test_take_fill(self):
-        data = np.arange(10, dtype="i8") * 24 * 3600 * 10 ** 9
+        data = np.arange(10, dtype="i8") * 24 * 3600 * 10**9

         arr = self.array_cls(data, freq="D")

@@ -215,7 +215,7 @@ def test_concat_same_type(self, arr1d):
         tm.assert_index_equal(self.index_cls(result), expected)

     def test_unbox_scalar(self):
-        data = np.arange(10, dtype="i8") * 24 * 3600 * 10 ** 9
+        data = np.arange(10, dtype="i8") * 24 * 3600 * 10**9
         arr = self.array_cls(data, freq="D")
         result = arr._unbox_scalar(arr[0])
         expected = arr._data.dtype.type
@@ -229,7 +229,7 @@ def test_unbox_scalar(self):
             arr._unbox_scalar("foo")

     def test_check_compatible_with(self):
-        data = np.arange(10, dtype="i8") * 24 * 3600 * 10 ** 9
+        data = np.arange(10, dtype="i8") * 24 * 3600 * 10**9
         arr = self.array_cls(data, freq="D")

         arr._check_compatible_with(arr[0])
@@ -237,13 +237,13 @@ def test_check_compatible_with(self):
         arr._check_compatible_with(NaT)

     def test_scalar_from_string(self):
-        data = np.arange(10, dtype="i8") * 24 * 3600 * 10 ** 9
+        data = np.arange(10, dtype="i8") * 24 * 3600 * 10**9
         arr = self.array_cls(data, freq="D")
         result = arr._scalar_from_string(str(arr[0]))
         assert result == arr[0]

     def test_reduce_invalid(self):
-        data = np.arange(10, dtype="i8") * 24 * 3600 * 10 ** 9
+        data = np.arange(10, dtype="i8") * 24 * 3600 * 10**9
         arr = self.array_cls(data, freq="D")

         msg = "does not support reduction 'not a method'"
@@ -252,7 +252,7 @@ def test_reduce_invalid(self):

     @pytest.mark.parametrize("method", ["pad", "backfill"])
     def test_fillna_method_doesnt_change_orig(self, method):
-        data = np.arange(10, dtype="i8") * 24 * 3600 * 10 ** 9
+        data = np.arange(10, dtype="i8") * 24 * 3600 * 10**9
         arr = self.array_cls(data, freq="D")
         arr[4] = NaT

@@ -265,7 +265,7 @@ def test_fillna_method_doesnt_change_orig(self, method):
         assert arr[4] is NaT

     def test_searchsorted(self):
-        data = np.arange(10, dtype="i8") * 24 * 3600 * 10 ** 9
+        data = np.arange(10, dtype="i8") * 24 * 3600 * 10**9
         arr = self.array_cls(data, freq="D")

         # scalar
@@ -415,11 +415,11 @@ def test_repr_2d(self, arr1d):
         assert result == expected

     def test_setitem(self):
-        data = np.arange(10, dtype="i8") * 24 * 3600 * 10 ** 9
+        data = np.arange(10, dtype="i8") * 24 * 3600 * 10**9
         arr = self.array_cls(data, freq="D")

         arr[0] = arr[1]
-        expected = np.arange(10, dtype="i8") * 24 * 3600 * 10 ** 9
+        expected = np.arange(10, dtype="i8") * 24 * 3600 * 10**9
         expected[0] = expected[1]

         tm.assert_numpy_array_equal(arr.asi8, expected)
@@ -504,7 +504,7 @@ def test_setitem_categorical(self, arr1d, as_index):
         tm.assert_equal(arr1d, expected)

     def test_setitem_raises(self):
-        data = np.arange(10, dtype="i8") * 24 * 3600 * 10 ** 9
+        data = np.arange(10, dtype="i8") * 24 * 3600 * 10**9
         arr = self.array_cls(data, freq="D")
         val = arr[0]

@@ -540,7 +540,7 @@ def test_setitem_numeric_raises(self, arr1d, box):

     def test_inplace_arithmetic(self):
         # GH#24115 check that iadd and isub are actually in-place
-        data = np.arange(10, dtype="i8") * 24 * 3600 * 10 ** 9
+        data = np.arange(10, dtype="i8") * 24 * 3600 * 10**9
         arr = self.array_cls(data, freq="D")

         expected = arr + pd.Timedelta(days=1)
@@ -553,7 +553,7 @@ def test_inplace_arithmetic(self):

     def test_shift_fill_int_deprecated(self):
         # GH#31971
-        data = np.arange(10, dtype="i8") * 24 * 3600 * 10 ** 9
+        data = np.arange(10, dtype="i8") * 24 * 3600 * 10**9
         arr = self.array_cls(data, freq="D")

         msg = "Passing <class 'int'> to shift"
diff --git a/pandas/tests/arrays/test_datetimes.py b/pandas/tests/arrays/test_datetimes.py
index 73b4e7c19dc2e..ad75c137ec703 100644
--- a/pandas/tests/arrays/test_datetimes.py
+++ b/pandas/tests/arrays/test_datetimes.py
@@ -280,7 +280,7 @@ def test_array_interface(self):

     @pytest.mark.parametrize("index", [True, False])
     def test_searchsorted_different_tz(self, index):
-        data = np.arange(10, dtype="i8") * 24 * 3600 * 10 ** 9
+        data = np.arange(10, dtype="i8") * 24 * 3600 * 10**9
         arr = DatetimeArray(data, freq="D").tz_localize("Asia/Tokyo")
         if index:
             arr = pd.Index(arr)
@@ -295,7 +295,7 @@ def test_searchsorted_different_tz(self, index):

     @pytest.mark.parametrize("index", [True, False])
     def test_searchsorted_tzawareness_compat(self, index):
-        data = np.arange(10, dtype="i8") * 24 * 3600 * 10 ** 9
+        data = np.arange(10, dtype="i8") * 24 * 3600 * 10**9
         arr = DatetimeArray(data, freq="D")
         if index:
             arr = pd.Index(arr)
@@ -322,14 +322,14 @@ def test_searchsorted_tzawareness_compat(self, index):
             np.timedelta64("NaT"),
             pd.Timedelta(days=2),
             "invalid",
-            np.arange(10, dtype="i8") * 24 * 3600 * 10 ** 9,
-            np.arange(10).view("timedelta64[ns]") * 24 * 3600 * 10 ** 9,
+            np.arange(10, dtype="i8") * 24 * 3600 * 10**9,
+            np.arange(10).view("timedelta64[ns]") * 24 * 3600 * 10**9,
             pd.Timestamp("2021-01-01").to_period("D"),
         ],
     )
     @pytest.mark.parametrize("index", [True, False])
     def test_searchsorted_invalid_types(self, other, index):
-        data = np.arange(10, dtype="i8") * 24 * 3600 * 10 ** 9
+        data = np.arange(10, dtype="i8") * 24 * 3600 * 10**9
         arr = DatetimeArray(data, freq="D")
         if index:
             arr = pd.Index(arr)
diff --git a/pandas/tests/arrays/test_timedeltas.py b/pandas/tests/arrays/test_timedeltas.py
index 1db0f6ad56ce3..675c6ac8c9233 100644
--- a/pandas/tests/arrays/test_timedeltas.py
+++ b/pandas/tests/arrays/test_timedeltas.py
@@ -52,14 +52,14 @@ def test_setitem_objects(self, obj):
             np.datetime64("NaT"),
             pd.Timestamp("2021-01-01"),
             "invalid",
-            np.arange(10, dtype="i8") * 24 * 3600 * 10 ** 9,
-            (np.arange(10) * 24 * 3600 * 10 ** 9).view("datetime64[ns]"),
+            np.arange(10, dtype="i8") * 24 * 3600 * 10**9,
+            (np.arange(10) * 24 * 3600 * 10**9).view("datetime64[ns]"),
             pd.Timestamp("2021-01-01").to_period("D"),
         ],
     )
     @pytest.mark.parametrize("index", [True, False])
     def test_searchsorted_invalid_types(self, other, index):
-        data = np.arange(10, dtype="i8") * 24 * 3600 * 10 ** 9
+        data = np.arange(10, dtype="i8") * 24 * 3600 * 10**9
         arr = TimedeltaArray(data, freq="D")
         if index:
             arr = pd.Index(arr)
@@ -76,10 +76,10 @@ def test_searchsorted_invalid_types(self, other, index):

 class TestUnaryOps:
     def test_abs(self):
-        vals = np.array([-3600 * 10 ** 9, "NaT", 7200 * 10 ** 9], dtype="m8[ns]")
+        vals = np.array([-3600 * 10**9, "NaT", 7200 * 10**9], dtype="m8[ns]")
         arr = TimedeltaArray(vals)

-        evals = np.array([3600 * 10 ** 9, "NaT", 7200 * 10 ** 9], dtype="m8[ns]")
+        evals = np.array([3600 * 10**9, "NaT", 7200 * 10**9], dtype="m8[ns]")
         expected = TimedeltaArray(evals)

         result = abs(arr)
@@ -89,7 +89,7 @@ def test_abs(self):
         tm.assert_timedelta_array_equal(result2, expected)

     def test_pos(self):
-        vals = np.array([-3600 * 10 ** 9, "NaT", 7200 * 10 ** 9], dtype="m8[ns]")
+        vals = np.array([-3600 * 10**9, "NaT", 7200 * 10**9], dtype="m8[ns]")
         arr = TimedeltaArray(vals)

         result = +arr
@@ -101,10 +101,10 @@ def test_pos(self):
         assert not tm.shares_memory(result2, arr)

     def test_neg(self):
-        vals = np.array([-3600 * 10 ** 9, "NaT", 7200 * 10 ** 9], dtype="m8[ns]")
+        vals = np.array([-3600 * 10**9, "NaT", 7200 * 10**9], dtype="m8[ns]")
         arr = TimedeltaArray(vals)

-        evals = np.array([3600 * 10 ** 9, "NaT", -7200 * 10 ** 9], dtype="m8[ns]")
+        evals = np.array([3600 * 10**9, "NaT", -7200 * 10**9], dtype="m8[ns]")
         expected = TimedeltaArray(evals)

         result = -arr
diff --git a/pandas/tests/arrays/timedeltas/test_constructors.py b/pandas/tests/arrays/timedeltas/test_constructors.py
index d297e745f107b..d24fabfeecb26 100644
--- a/pandas/tests/arrays/timedeltas/test_constructors.py
+++ b/pandas/tests/arrays/timedeltas/test_constructors.py
@@ -19,7 +19,7 @@ def test_only_1dim_accepted(self):

     def test_freq_validation(self):
         # ensure that the public constructor cannot create an invalid instance
-        arr = np.array([0, 0, 1], dtype=np.int64) * 3600 * 10 ** 9
+        arr = np.array([0, 0, 1], dtype=np.int64) * 3600 * 10**9

         msg = (
             "Inferred frequency None from passed values does not "
diff --git a/pandas/tests/base/test_conversion.py b/pandas/tests/base/test_conversion.py
index 84e4992cce0e3..216c9b1546b9d 100644
--- a/pandas/tests/base/test_conversion.py
+++ b/pandas/tests/base/test_conversion.py
@@ -199,7 +199,7 @@ def test_iter_box(self):
             "datetime64[ns]",
         ),
         (
-            pd.TimedeltaIndex([10 ** 10]),
+            pd.TimedeltaIndex([10**10]),
             TimedeltaArray,
             "m8[ns]",
         ),
diff --git a/pandas/tests/dtypes/cast/test_can_hold_element.py b/pandas/tests/dtypes/cast/test_can_hold_element.py
index 906123b1aee74..06d256c57a631 100644
--- a/pandas/tests/dtypes/cast/test_can_hold_element.py
+++ b/pandas/tests/dtypes/cast/test_can_hold_element.py
@@ -33,11 +33,11 @@ def test_can_hold_element_range(any_int_numpy_dtype):
     assert can_hold_element(arr, rng)

     # empty
-    rng = range(-(10 ** 10), -(10 ** 10))
+    rng = range(-(10**10), -(10**10))
     assert len(rng) == 0
     # assert can_hold_element(arr, rng)

-    rng = range(10 ** 10, 10 ** 10)
+    rng = range(10**10, 10**10)
     assert len(rng) == 0
     assert can_hold_element(arr, rng)

@@ -51,7 +51,7 @@ def test_can_hold_element_int_values_float_ndarray():
     assert not can_hold_element(arr, element + 0.5)
     # integer but not losslessly castable to int64
-    element = np.array([3, 2 ** 65], dtype=np.float64)
+    element = np.array([3, 2**65], dtype=np.float64)
     assert not can_hold_element(arr, element)
diff --git a/pandas/tests/dtypes/test_inference.py b/pandas/tests/dtypes/test_inference.py
index 7953d650636be..b4113d877836b 100644
--- a/pandas/tests/dtypes/test_inference.py
+++ b/pandas/tests/dtypes/test_inference.py
@@ -594,25 +594,25 @@ def test_convert_non_hashable(self):
         tm.assert_numpy_array_equal(result, np.array([np.nan, 1.0, np.nan]))

     def test_convert_numeric_uint64(self):
-        arr = np.array([2 ** 63], dtype=object)
-        exp = np.array([2 ** 63], dtype=np.uint64)
+        arr = np.array([2**63], dtype=object)
+        exp = np.array([2**63], dtype=np.uint64)
         tm.assert_numpy_array_equal(lib.maybe_convert_numeric(arr, set())[0], exp)

-        arr = np.array([str(2 ** 63)], dtype=object)
-        exp = np.array([2 ** 63], dtype=np.uint64)
+        arr = np.array([str(2**63)], dtype=object)
+        exp = np.array([2**63], dtype=np.uint64)
         tm.assert_numpy_array_equal(lib.maybe_convert_numeric(arr, set())[0], exp)

-        arr = np.array([np.uint64(2 ** 63)], dtype=object)
-        exp = np.array([2 ** 63], dtype=np.uint64)
+        arr = np.array([np.uint64(2**63)], dtype=object)
+        exp = np.array([2**63], dtype=np.uint64)
         tm.assert_numpy_array_equal(lib.maybe_convert_numeric(arr, set())[0], exp)

     @pytest.mark.parametrize(
         "arr",
         [
-            np.array([2 ** 63, np.nan], dtype=object),
-            np.array([str(2 ** 63), np.nan], dtype=object),
-            np.array([np.nan, 2 ** 63], dtype=object),
-            np.array([np.nan, str(2 ** 63)], dtype=object),
+            np.array([2**63, np.nan], dtype=object),
+            np.array([str(2**63), np.nan], dtype=object),
+            np.array([np.nan, 2**63], dtype=object),
+            np.array([np.nan, str(2**63)], dtype=object),
         ],
     )
     def test_convert_numeric_uint64_nan(self, coerce, arr):
@@ -624,11 +624,11 @@ def test_convert_numeric_uint64_nan(self, coerce, arr):
     def test_convert_numeric_uint64_nan_values(
         self, coerce, convert_to_masked_nullable
     ):
-        arr = np.array([2 ** 63, 2 ** 63 + 1], dtype=object)
-        na_values = {2 ** 63}
+        arr = np.array([2**63, 2**63 + 1], dtype=object)
+        na_values = {2**63}

         expected = (
-            np.array([np.nan, 2 ** 63 + 1], dtype=float) if coerce else arr.copy()
+            np.array([np.nan, 2**63 + 1], dtype=float) if coerce else arr.copy()
         )
         result = lib.maybe_convert_numeric(
             arr,
@@ -638,7 +638,7 @@ def test_convert_numeric_uint64_nan_values(
         )
         if convert_to_masked_nullable and coerce:
             expected = IntegerArray(
-                np.array([0, 2 ** 63 + 1], dtype="u8"),
+                np.array([0, 2**63 + 1], dtype="u8"),
                 np.array([True, False], dtype="bool"),
             )
             result = IntegerArray(*result)
@@ -649,12 +649,12 @@ def test_convert_numeric_uint64_nan_values(
     @pytest.mark.parametrize(
         "case",
         [
-            np.array([2 ** 63, -1], dtype=object),
-            np.array([str(2 ** 63), -1], dtype=object),
-            np.array([str(2 ** 63), str(-1)], dtype=object),
-            np.array([-1, 2 ** 63], dtype=object),
-            np.array([-1, str(2 ** 63)], dtype=object),
-            np.array([str(-1), str(2 ** 63)], dtype=object),
+            np.array([2**63, -1], dtype=object),
+            np.array([str(2**63), -1], dtype=object),
+            np.array([str(2**63), str(-1)], dtype=object),
+            np.array([-1, 2**63], dtype=object),
+            np.array([-1, str(2**63)], dtype=object),
+            np.array([str(-1), str(2**63)], dtype=object),
         ],
     )
     @pytest.mark.parametrize("convert_to_masked_nullable", [True, False])
@@ -686,7 +686,7 @@ def test_convert_numeric_string_uint64(self, convert_to_masked_nullable):
             result = result[0]
         assert np.isnan(result)

-    @pytest.mark.parametrize("value", [-(2 ** 63) - 1, 2 ** 64])
+    @pytest.mark.parametrize("value", [-(2**63) - 1, 2**64])
     def test_convert_int_overflow(self, value):
         # see gh-18584
         arr = np.array([value], dtype=object)
@@ -695,23 +695,23 @@ def test_convert_int_overflow(self, value):

     def test_maybe_convert_objects_uint64(self):
         # see gh-4471
-        arr = np.array([2 ** 63], dtype=object)
-        exp = np.array([2 ** 63], dtype=np.uint64)
+        arr = np.array([2**63], dtype=object)
+        exp = np.array([2**63], dtype=np.uint64)
         tm.assert_numpy_array_equal(lib.maybe_convert_objects(arr), exp)

         # NumPy bug: can't compare uint64 to int64, as that
         # results in both casting to float64, so we should
         # make sure that this function is robust against it
-        arr = np.array([np.uint64(2 ** 63)], dtype=object)
-        exp = np.array([2 ** 63], dtype=np.uint64)
+        arr = np.array([np.uint64(2**63)], dtype=object)
+        exp = np.array([2**63], dtype=np.uint64)
         tm.assert_numpy_array_equal(lib.maybe_convert_objects(arr), exp)

         arr = np.array([2, -1], dtype=object)
         exp = np.array([2, -1], dtype=np.int64)
         tm.assert_numpy_array_equal(lib.maybe_convert_objects(arr), exp)

-        arr = np.array([2 ** 63, -1], dtype=object)
-        exp = np.array([2 ** 63, -1], dtype=object)
+        arr = np.array([2**63, -1], dtype=object)
+        exp = np.array([2**63, -1], dtype=object)
         tm.assert_numpy_array_equal(lib.maybe_convert_objects(arr), exp)

     def test_maybe_convert_objects_datetime(self):
diff --git a/pandas/tests/frame/conftest.py b/pandas/tests/frame/conftest.py
index b512664b57ade..8dbed84b85837 100644
--- a/pandas/tests/frame/conftest.py
+++ b/pandas/tests/frame/conftest.py
@@ -212,7 +212,7 @@ def uint64_frame():
     Columns are ['A', 'B']
     """
     return DataFrame(
-        {"A": np.arange(3), "B": [2 ** 63, 2 ** 63 + 5, 2 ** 63 + 10]}, dtype=np.uint64
+        {"A": np.arange(3), "B": [2**63, 2**63 + 5, 2**63 + 10]}, dtype=np.uint64
     )

diff --git a/pandas/tests/frame/indexing/test_where.py b/pandas/tests/frame/indexing/test_where.py
index 9b6e04dffa6ce..22d8d13b77f72 100644
--- a/pandas/tests/frame/indexing/test_where.py
+++ b/pandas/tests/frame/indexing/test_where.py
@@ -721,7 +721,7 @@ def test_where_datetimelike_noop(self, dtype):
         # GH#45135, analogue to GH#44181 for Period don't raise on no-op
         # For td64/dt64/dt64tz we already don't raise, but also are
         # checking that we don't unnecessarily upcast to object.
-        ser = Series(np.arange(3) * 10 ** 9, dtype=np.int64).view(dtype)
+        ser = Series(np.arange(3) * 10**9, dtype=np.int64).view(dtype)
         df = ser.to_frame()

         mask = np.array([False, False, False])
@@ -761,9 +761,9 @@ def test_where_int_downcasting_deprecated(using_array_manager):
     msg = "Downcasting integer-dtype"
     warn = FutureWarning if not using_array_manager else None
     with tm.assert_produces_warning(warn, match=msg):
-        res = df.where(mask, 2 ** 17)
+        res = df.where(mask, 2**17)

-    expected = DataFrame({0: arr[:, 0], 1: np.array([2 ** 17] * 3, dtype=np.int32)})
+    expected = DataFrame({0: arr[:, 0], 1: np.array([2**17] * 3, dtype=np.int32)})

     tm.assert_frame_equal(res, expected)
diff --git a/pandas/tests/frame/methods/test_astype.py b/pandas/tests/frame/methods/test_astype.py
index d326ca3493977..bc297ebb56b48 100644
--- a/pandas/tests/frame/methods/test_astype.py
+++ b/pandas/tests/frame/methods/test_astype.py
@@ -455,10 +455,10 @@ def test_astype_to_incorrect_datetimelike(self, unit):
         msg = "|".join(
             [
                 # BlockManager path
-                fr"Cannot cast DatetimeArray to dtype timedelta64\[{unit}\]",
+                rf"Cannot cast DatetimeArray to dtype timedelta64\[{unit}\]",
                 # ArrayManager path
                 "cannot astype a datetimelike from "
-                fr"\[datetime64\[ns\]\] to \[timedelta64\[{unit}\]\]",
+                rf"\[datetime64\[ns\]\] to \[timedelta64\[{unit}\]\]",
             ]
         )
         with pytest.raises(TypeError, match=msg):
@@ -467,10 +467,10 @@
         msg = "|".join(
             [
                 # BlockManager path
-                fr"Cannot cast TimedeltaArray to dtype datetime64\[{unit}\]",
+                rf"Cannot cast TimedeltaArray to dtype datetime64\[{unit}\]",
                 # ArrayManager path
                 "cannot astype a timedelta from "
-                fr"\[timedelta64\[ns\]\] to \[datetime64\[{unit}\]\]",
+                rf"\[timedelta64\[ns\]\] to \[datetime64\[{unit}\]\]",
             ]
         )
         df = DataFrame(np.array([[1, 2, 3]], dtype=other))
diff --git a/pandas/tests/frame/methods/test_cov_corr.py b/pandas/tests/frame/methods/test_cov_corr.py
index 6a1466ae1ea46..2e545942b6f46 100644
--- a/pandas/tests/frame/methods/test_cov_corr.py
+++ b/pandas/tests/frame/methods/test_cov_corr.py
@@ -348,7 +348,7 @@ def test_corr_numerical_instabilities(self):
     def test_corrwith_spearman(self):
         # GH#21925
         df = DataFrame(np.random.random(size=(100, 3)))
-        result = df.corrwith(df ** 2, method="spearman")
+        result = df.corrwith(df**2, method="spearman")
         expected = Series(np.ones(len(result)))
         tm.assert_series_equal(result, expected)
@@ -356,6 +356,6 @@
     def test_corrwith_kendall(self):
         # GH#21925
         df = DataFrame(np.random.random(size=(100, 3)))
-        result = df.corrwith(df ** 2, method="kendall")
+        result = df.corrwith(df**2, method="kendall")
         expected = Series(np.ones(len(result)))
         tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/frame/methods/test_pipe.py b/pandas/tests/frame/methods/test_pipe.py
index 1d7cc16f49280..5bcc4360487f3 100644
--- a/pandas/tests/frame/methods/test_pipe.py
+++ b/pandas/tests/frame/methods/test_pipe.py
@@ -15,7 +15,7 @@ def test_pipe(self, frame_or_series):
             obj = obj["A"]
             expected = expected["A"]

-        f = lambda x, y: x ** y
+        f = lambda x, y: x**y
         result = obj.pipe(f, 2)
         tm.assert_equal(result, expected)
diff --git a/pandas/tests/frame/methods/test_rank.py b/pandas/tests/frame/methods/test_rank.py
index 31a6d7637a244..68c87a59d8230 100644
--- a/pandas/tests/frame/methods/test_rank.py
+++ b/pandas/tests/frame/methods/test_rank.py
@@ -332,7 +332,7 @@ def test_rank_pct_true(self, method, exp):
     def test_pct_max_many_rows(self):
         # GH 18271
         df = DataFrame(
-            {"A": np.arange(2 ** 24 + 1), "B": np.arange(2 ** 24 + 1, 0, -1)}
+            {"A": np.arange(2**24 + 1), "B": np.arange(2**24 + 1, 0, -1)}
         )
         result = df.rank(pct=True).max()
         assert (result == 1).all()
diff --git a/pandas/tests/frame/methods/test_reset_index.py b/pandas/tests/frame/methods/test_reset_index.py
index 8130c4fa41c12..8ddda099cf898 100644
--- a/pandas/tests/frame/methods/test_reset_index.py
+++ b/pandas/tests/frame/methods/test_reset_index.py
@@ 
-41,7 +41,7 @@ def test_reset_index_empty_rangeindex(self): def test_set_reset(self): - idx = Index([2 ** 63, 2 ** 63 + 5, 2 ** 63 + 10], name="foo") + idx = Index([2**63, 2**63 + 5, 2**63 + 10], name="foo") # set/reset df = DataFrame({"A": [0, 1, 2]}, index=idx) @@ -232,7 +232,7 @@ def test_reset_index_level(self): def test_reset_index_right_dtype(self): time = np.arange(0.0, 10, np.sqrt(2) / 2) s1 = Series( - (9.81 * time ** 2) / 2, index=Index(time, name="time"), name="speed" + (9.81 * time**2) / 2, index=Index(time, name="time"), name="speed" ) df = DataFrame(s1) diff --git a/pandas/tests/frame/test_arithmetic.py b/pandas/tests/frame/test_arithmetic.py index 4820fcce6486b..bde44ddf12a8e 100644 --- a/pandas/tests/frame/test_arithmetic.py +++ b/pandas/tests/frame/test_arithmetic.py @@ -870,13 +870,13 @@ def _test_op(df, op): _test_op(df, lambda x, y: y - x) _test_op(df, lambda x, y: y * x) _test_op(df, lambda x, y: y / x) - _test_op(df, lambda x, y: y ** x) + _test_op(df, lambda x, y: y**x) _test_op(df, lambda x, y: x + y) _test_op(df, lambda x, y: x - y) _test_op(df, lambda x, y: x * y) _test_op(df, lambda x, y: x / y) - _test_op(df, lambda x, y: x ** y) + _test_op(df, lambda x, y: x**y) @pytest.mark.parametrize( "values", [[1, 2], (1, 2), np.array([1, 2]), range(1, 3), deque([1, 2])] @@ -949,9 +949,9 @@ def test_frame_with_frame_reindex(self): [ (1, "i8"), (1.0, "f8"), - (2 ** 63, "f8"), + (2**63, "f8"), (1j, "complex128"), - (2 ** 63, "complex128"), + (2**63, "complex128"), (True, "bool"), (np.timedelta64(20, "ns"), "<m8[ns]"), (np.datetime64(20, "ns"), "<M8[ns]"), @@ -1766,7 +1766,7 @@ def test_pow_with_realignment(): left = DataFrame({"A": [0, 1, 2]}) right = DataFrame(index=[0, 1, 2]) - result = left ** right + result = left**right expected = DataFrame({"A": [np.nan, 1.0, np.nan]}) tm.assert_frame_equal(result, expected) @@ -1778,7 +1778,7 @@ def test_pow_nan_with_zero(): expected = DataFrame({"A": [1.0, 1.0, 1.0]}) - result = left ** right + result = 
left**right tm.assert_frame_equal(result, expected) result = left["A"] ** right["A"] diff --git a/pandas/tests/frame/test_block_internals.py b/pandas/tests/frame/test_block_internals.py index 01a8982c5fe16..8aa0e980b01c4 100644 --- a/pandas/tests/frame/test_block_internals.py +++ b/pandas/tests/frame/test_block_internals.py @@ -118,14 +118,14 @@ def test_boolean_set_uncons(self, float_frame): def test_constructor_with_convert(self): # this is actually mostly a test of lib.maybe_convert_objects # #2845 - df = DataFrame({"A": [2 ** 63 - 1]}) + df = DataFrame({"A": [2**63 - 1]}) result = df["A"] - expected = Series(np.asarray([2 ** 63 - 1], np.int64), name="A") + expected = Series(np.asarray([2**63 - 1], np.int64), name="A") tm.assert_series_equal(result, expected) - df = DataFrame({"A": [2 ** 63]}) + df = DataFrame({"A": [2**63]}) result = df["A"] - expected = Series(np.asarray([2 ** 63], np.uint64), name="A") + expected = Series(np.asarray([2**63], np.uint64), name="A") tm.assert_series_equal(result, expected) df = DataFrame({"A": [datetime(2005, 1, 1), True]}) diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py index 08027d5807d8e..23bdfd9ee180e 100644 --- a/pandas/tests/frame/test_constructors.py +++ b/pandas/tests/frame/test_constructors.py @@ -398,7 +398,7 @@ def test_constructor_bool(self): def test_constructor_overflow_int64(self): # see gh-14881 - values = np.array([2 ** 64 - i for i in range(1, 10)], dtype=np.uint64) + values = np.array([2**64 - i for i in range(1, 10)], dtype=np.uint64) result = DataFrame({"a": values}) assert result["a"].dtype == np.uint64 @@ -420,12 +420,12 @@ def test_constructor_overflow_int64(self): @pytest.mark.parametrize( "values", [ - np.array([2 ** 64], dtype=object), - np.array([2 ** 65]), - [2 ** 64 + 1], - np.array([-(2 ** 63) - 4], dtype=object), - np.array([-(2 ** 64) - 1]), - [-(2 ** 65) - 2], + np.array([2**64], dtype=object), + np.array([2**65]), + [2**64 + 1], + 
np.array([-(2**63) - 4], dtype=object), + np.array([-(2**64) - 1]), + [-(2**65) - 2], ], ) def test_constructor_int_overflow(self, values): @@ -2082,7 +2082,7 @@ def test_constructor_for_list_with_dtypes(self): tm.assert_series_equal(result, expected) # overflow issue? (we always expected int64 upcasting here) - df = DataFrame({"a": [2 ** 31, 2 ** 31 + 1]}) + df = DataFrame({"a": [2**31, 2**31 + 1]}) assert df.dtypes.iloc[0] == np.dtype("int64") # GH #2751 (construction with no index specified), make sure we cast to diff --git a/pandas/tests/frame/test_stack_unstack.py b/pandas/tests/frame/test_stack_unstack.py index 005d2600b2bae..a8ed07d6deda0 100644 --- a/pandas/tests/frame/test_stack_unstack.py +++ b/pandas/tests/frame/test_stack_unstack.py @@ -1848,8 +1848,8 @@ def __init__(self, *args, **kwargs): with monkeypatch.context() as m: m.setattr(reshape_lib, "_Unstacker", MockUnstacker) df = DataFrame( - np.random.randn(2 ** 16, 2), - index=[np.arange(2 ** 16), np.arange(2 ** 16)], + np.random.randn(2**16, 2), + index=[np.arange(2**16), np.arange(2**16)], ) msg = "The following operation may generate" with tm.assert_produces_warning(PerformanceWarning, match=msg): diff --git a/pandas/tests/groupby/aggregate/test_aggregate.py b/pandas/tests/groupby/aggregate/test_aggregate.py index 2ab553434873c..1ea44871eea4d 100644 --- a/pandas/tests/groupby/aggregate/test_aggregate.py +++ b/pandas/tests/groupby/aggregate/test_aggregate.py @@ -225,10 +225,10 @@ def test_agg_str_with_kwarg_axis_1_raises(df, reduction_func): "func, expected, dtype, result_dtype_dict", [ ("sum", [5, 7, 9], "int64", {}), - ("std", [4.5 ** 0.5] * 3, int, {"i": float, "j": float, "k": float}), + ("std", [4.5**0.5] * 3, int, {"i": float, "j": float, "k": float}), ("var", [4.5] * 3, int, {"i": float, "j": float, "k": float}), ("sum", [5, 7, 9], "Int64", {"j": "int64"}), - ("std", [4.5 ** 0.5] * 3, "Int64", {"i": float, "j": float, "k": float}), + ("std", [4.5**0.5] * 3, "Int64", {"i": float, "j": float, 
"k": float}), ("var", [4.5] * 3, "Int64", {"i": "float64", "j": "float64", "k": "float64"}), ], ) @@ -250,7 +250,7 @@ def test_multiindex_groupby_mixed_cols_axis1(func, expected, dtype, result_dtype [ ("sum", [[2, 4], [10, 12], [18, 20]], {10: "int64", 20: "int64"}), # std should ideally return Int64 / Float64 #43330 - ("std", [[2 ** 0.5] * 2] * 3, "float64"), + ("std", [[2**0.5] * 2] * 3, "float64"), ("var", [[2] * 2] * 3, {10: "float64", 20: "float64"}), ], ) @@ -1097,7 +1097,7 @@ def test_agg_with_one_lambda(self): # check pd.NameAgg case result1 = df.groupby(by="kind").agg( height_sqr_min=pd.NamedAgg( - column="height", aggfunc=lambda x: np.min(x ** 2) + column="height", aggfunc=lambda x: np.min(x**2) ), height_max=pd.NamedAgg(column="height", aggfunc="max"), weight_max=pd.NamedAgg(column="weight", aggfunc="max"), @@ -1106,7 +1106,7 @@ def test_agg_with_one_lambda(self): # check agg(key=(col, aggfunc)) case result2 = df.groupby(by="kind").agg( - height_sqr_min=("height", lambda x: np.min(x ** 2)), + height_sqr_min=("height", lambda x: np.min(x**2)), height_max=("height", "max"), weight_max=("weight", "max"), ) @@ -1143,7 +1143,7 @@ def test_agg_multiple_lambda(self): # check agg(key=(col, aggfunc)) case result1 = df.groupby(by="kind").agg( - height_sqr_min=("height", lambda x: np.min(x ** 2)), + height_sqr_min=("height", lambda x: np.min(x**2)), height_max=("height", "max"), weight_max=("weight", "max"), height_max_2=("height", lambda x: np.max(x)), @@ -1154,7 +1154,7 @@ def test_agg_multiple_lambda(self): # check pd.NamedAgg case result2 = df.groupby(by="kind").agg( height_sqr_min=pd.NamedAgg( - column="height", aggfunc=lambda x: np.min(x ** 2) + column="height", aggfunc=lambda x: np.min(x**2) ), height_max=pd.NamedAgg(column="height", aggfunc="max"), weight_max=pd.NamedAgg(column="weight", aggfunc="max"), diff --git a/pandas/tests/groupby/test_function.py b/pandas/tests/groupby/test_function.py index dbc38497d3bee..1555e9d02c8ca 100644 --- 
a/pandas/tests/groupby/test_function.py +++ b/pandas/tests/groupby/test_function.py @@ -866,7 +866,7 @@ def test_cummin_max_skipna_multiple_cols(method): @td.skip_if_32bit @pytest.mark.parametrize("method", ["cummin", "cummax"]) @pytest.mark.parametrize( - "dtype,val", [("UInt64", np.iinfo("uint64").max), ("Int64", 2 ** 53 + 1)] + "dtype,val", [("UInt64", np.iinfo("uint64").max), ("Int64", 2**53 + 1)] ) def test_nullable_int_not_cast_as_float(method, dtype, val): data = [val, pd.NA] diff --git a/pandas/tests/groupby/test_libgroupby.py b/pandas/tests/groupby/test_libgroupby.py index 2a24448d24ce2..cde9b36fd0bf4 100644 --- a/pandas/tests/groupby/test_libgroupby.py +++ b/pandas/tests/groupby/test_libgroupby.py @@ -112,13 +112,13 @@ def test_group_var_large_inputs(self): out = np.array([[np.nan]], dtype=self.dtype) counts = np.array([0], dtype="int64") - values = (prng.rand(10 ** 6) + 10 ** 12).astype(self.dtype) - values.shape = (10 ** 6, 1) - labels = np.zeros(10 ** 6, dtype="intp") + values = (prng.rand(10**6) + 10**12).astype(self.dtype) + values.shape = (10**6, 1) + labels = np.zeros(10**6, dtype="intp") self.algo(out, counts, values, labels) - assert counts[0] == 10 ** 6 + assert counts[0] == 10**6 tm.assert_almost_equal(out[0, 0], 1.0 / 12, rtol=0.5e-3) diff --git a/pandas/tests/groupby/test_pipe.py b/pandas/tests/groupby/test_pipe.py index 42bd6a84e05f6..1229251f88c7d 100644 --- a/pandas/tests/groupby/test_pipe.py +++ b/pandas/tests/groupby/test_pipe.py @@ -27,7 +27,7 @@ def f(dfgb): return dfgb.B.max() - dfgb.C.min().min() def square(srs): - return srs ** 2 + return srs**2 # Note that the transformations are # GroupBy -> Series diff --git a/pandas/tests/indexes/base_class/test_indexing.py b/pandas/tests/indexes/base_class/test_indexing.py index 9cd582925ff79..8135dd57bba3f 100644 --- a/pandas/tests/indexes/base_class/test_indexing.py +++ b/pandas/tests/indexes/base_class/test_indexing.py @@ -51,7 +51,7 @@ def 
test_get_loc_tuple_monotonic_above_size_cutoff(self): lev = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ") dti = pd.date_range("2016-01-01", periods=100) - mi = pd.MultiIndex.from_product([lev, range(10 ** 3), dti]) + mi = pd.MultiIndex.from_product([lev, range(10**3), dti]) oidx = mi.to_flat_index() loc = len(oidx) // 2 diff --git a/pandas/tests/indexes/common.py b/pandas/tests/indexes/common.py index 61cfd5593abc0..0cf66c0814293 100644 --- a/pandas/tests/indexes/common.py +++ b/pandas/tests/indexes/common.py @@ -892,6 +892,6 @@ def test_arithmetic_explicit_conversions(self): def test_invalid_dtype(self, invalid_dtype): # GH 29539 dtype = invalid_dtype - msg = fr"Incorrect `dtype` passed: expected \w+(?: \w+)?, received {dtype}" + msg = rf"Incorrect `dtype` passed: expected \w+(?: \w+)?, received {dtype}" with pytest.raises(ValueError, match=msg): self._index_cls([1, 2, 3], dtype=dtype) diff --git a/pandas/tests/indexes/datetimelike_/test_equals.py b/pandas/tests/indexes/datetimelike_/test_equals.py index cc90e8f6d9bec..39e8270b1c4f5 100644 --- a/pandas/tests/indexes/datetimelike_/test_equals.py +++ b/pandas/tests/indexes/datetimelike_/test_equals.py @@ -167,7 +167,7 @@ def test_equals2(self): # Check that we dont raise OverflowError on comparisons outside the # implementation range GH#28532 - oob = Index([timedelta(days=10 ** 6)] * 3, dtype=object) + oob = Index([timedelta(days=10**6)] * 3, dtype=object) assert not idx.equals(oob) assert not idx2.equals(oob) diff --git a/pandas/tests/indexes/datetimelike_/test_indexing.py b/pandas/tests/indexes/datetimelike_/test_indexing.py index eb37c2c4ad2a3..b64d5421a2067 100644 --- a/pandas/tests/indexes/datetimelike_/test_indexing.py +++ b/pandas/tests/indexes/datetimelike_/test_indexing.py @@ -20,7 +20,7 @@ @pytest.mark.parametrize("rdtype", dtlike_dtypes) def test_get_indexer_non_unique_wrong_dtype(ldtype, rdtype): - vals = np.tile(3600 * 10 ** 9 * np.arange(3), 2) + vals = np.tile(3600 * 10**9 * np.arange(3), 2) def 
construct(dtype): if dtype is dtlike_dtypes[-1]: diff --git a/pandas/tests/indexes/datetimes/test_partial_slicing.py b/pandas/tests/indexes/datetimes/test_partial_slicing.py index 2f32f9e18311d..8ddcd6a453080 100644 --- a/pandas/tests/indexes/datetimes/test_partial_slicing.py +++ b/pandas/tests/indexes/datetimes/test_partial_slicing.py @@ -281,7 +281,7 @@ def test_partial_slicing_dataframe(self): result = df["a"][ts_string] assert isinstance(result, np.int64) assert result == expected - msg = fr"^'{ts_string}'$" + msg = rf"^'{ts_string}'$" with pytest.raises(KeyError, match=msg): df[ts_string] @@ -311,7 +311,7 @@ def test_partial_slicing_dataframe(self): result = df["a"][ts_string] assert isinstance(result, np.int64) assert result == 2 - msg = fr"^'{ts_string}'$" + msg = rf"^'{ts_string}'$" with pytest.raises(KeyError, match=msg): df[ts_string] @@ -320,7 +320,7 @@ def test_partial_slicing_dataframe(self): for fmt, res in list(zip(formats, resolutions))[rnum + 1 :]: ts = index[1] + Timedelta("1 " + res) ts_string = ts.strftime(fmt) - msg = fr"^'{ts_string}'$" + msg = rf"^'{ts_string}'$" with pytest.raises(KeyError, match=msg): df["a"][ts_string] with pytest.raises(KeyError, match=msg): diff --git a/pandas/tests/indexes/interval/test_interval_tree.py b/pandas/tests/indexes/interval/test_interval_tree.py index ab6eac482211d..f2d9ec3608271 100644 --- a/pandas/tests/indexes/interval/test_interval_tree.py +++ b/pandas/tests/indexes/interval/test_interval_tree.py @@ -58,7 +58,7 @@ def test_get_indexer(self, tree): @pytest.mark.parametrize( "dtype, target_value, target_dtype", - [("int64", 2 ** 63 + 1, "uint64"), ("uint64", -1, "int64")], + [("int64", 2**63 + 1, "uint64"), ("uint64", -1, "int64")], ) def test_get_indexer_overflow(self, dtype, target_value, target_dtype): left, right = np.array([0, 1], dtype=dtype), np.array([1, 2], dtype=dtype) @@ -89,7 +89,7 @@ def test_get_indexer_non_unique(self, tree): @pytest.mark.parametrize( "dtype, target_value, target_dtype", - 
[("int64", 2 ** 63 + 1, "uint64"), ("uint64", -1, "int64")], + [("int64", 2**63 + 1, "uint64"), ("uint64", -1, "int64")], ) def test_get_indexer_non_unique_overflow(self, dtype, target_value, target_dtype): left, right = np.array([0, 2], dtype=dtype), np.array([1, 3], dtype=dtype) diff --git a/pandas/tests/indexes/multi/test_indexing.py b/pandas/tests/indexes/multi/test_indexing.py index 9cb65128e7068..f81d9f9a9ce2a 100644 --- a/pandas/tests/indexes/multi/test_indexing.py +++ b/pandas/tests/indexes/multi/test_indexing.py @@ -790,8 +790,8 @@ def test_contains_td64_level(self): @pytest.mark.slow def test_large_mi_contains(self): # GH#10645 - result = MultiIndex.from_arrays([range(10 ** 6), range(10 ** 6)]) - assert not (10 ** 6, 0) in result + result = MultiIndex.from_arrays([range(10**6), range(10**6)]) + assert not (10**6, 0) in result def test_timestamp_multiindex_indexer(): diff --git a/pandas/tests/indexes/multi/test_integrity.py b/pandas/tests/indexes/multi/test_integrity.py index e1c2134b5b1f8..b9596ca62d030 100644 --- a/pandas/tests/indexes/multi/test_integrity.py +++ b/pandas/tests/indexes/multi/test_integrity.py @@ -52,7 +52,7 @@ def test_values_boxed(): def test_values_multiindex_datetimeindex(): # Test to ensure we hit the boxing / nobox part of MI.values - ints = np.arange(10 ** 18, 10 ** 18 + 5) + ints = np.arange(10**18, 10**18 + 5) naive = pd.DatetimeIndex(ints) aware = pd.DatetimeIndex(ints, tz="US/Central") diff --git a/pandas/tests/indexes/numeric/test_indexing.py b/pandas/tests/indexes/numeric/test_indexing.py index a70845a4def7f..7081cbdfe6428 100644 --- a/pandas/tests/indexes/numeric/test_indexing.py +++ b/pandas/tests/indexes/numeric/test_indexing.py @@ -20,7 +20,7 @@ @pytest.fixture def index_large(): # large values used in UInt64Index tests where no compat needed with Int64/Float64 - large = [2 ** 63, 2 ** 63 + 10, 2 ** 63 + 15, 2 ** 63 + 20, 2 ** 63 + 25] + large = [2**63, 2**63 + 10, 2**63 + 15, 2**63 + 20, 2**63 + 25] return 
UInt64Index(large) @@ -161,7 +161,7 @@ def test_get_loc_float_index_nan_with_method(self, vals, method): @pytest.mark.parametrize("dtype", ["f8", "i8", "u8"]) def test_get_loc_numericindex_none_raises(self, dtype): # case that goes through searchsorted and key is non-comparable to values - arr = np.arange(10 ** 7, dtype=dtype) + arr = np.arange(10**7, dtype=dtype) idx = Index(arr) with pytest.raises(KeyError, match="None"): idx.get_loc(None) @@ -376,17 +376,17 @@ def test_get_indexer_int64(self): tm.assert_numpy_array_equal(indexer, expected) def test_get_indexer_uint64(self, index_large): - target = UInt64Index(np.arange(10).astype("uint64") * 5 + 2 ** 63) + target = UInt64Index(np.arange(10).astype("uint64") * 5 + 2**63) indexer = index_large.get_indexer(target) expected = np.array([0, -1, 1, 2, 3, 4, -1, -1, -1, -1], dtype=np.intp) tm.assert_numpy_array_equal(indexer, expected) - target = UInt64Index(np.arange(10).astype("uint64") * 5 + 2 ** 63) + target = UInt64Index(np.arange(10).astype("uint64") * 5 + 2**63) indexer = index_large.get_indexer(target, method="pad") expected = np.array([0, 0, 1, 2, 3, 4, 4, 4, 4, 4], dtype=np.intp) tm.assert_numpy_array_equal(indexer, expected) - target = UInt64Index(np.arange(10).astype("uint64") * 5 + 2 ** 63) + target = UInt64Index(np.arange(10).astype("uint64") * 5 + 2**63) indexer = index_large.get_indexer(target, method="backfill") expected = np.array([0, 1, 1, 2, 3, 4, -1, -1, -1, -1], dtype=np.intp) tm.assert_numpy_array_equal(indexer, expected) diff --git a/pandas/tests/indexes/numeric/test_join.py b/pandas/tests/indexes/numeric/test_join.py index 2a47289b65aad..9bbe7a64ada87 100644 --- a/pandas/tests/indexes/numeric/test_join.py +++ b/pandas/tests/indexes/numeric/test_join.py @@ -198,13 +198,13 @@ class TestJoinUInt64Index: @pytest.fixture def index_large(self): # large values used in TestUInt64Index where no compat needed with Int64/Float64 - large = [2 ** 63, 2 ** 63 + 10, 2 ** 63 + 15, 2 ** 63 + 20, 2 ** 63 + 25] + 
large = [2**63, 2**63 + 10, 2**63 + 15, 2**63 + 20, 2**63 + 25] return UInt64Index(large) def test_join_inner(self, index_large): - other = UInt64Index(2 ** 63 + np.array([7, 12, 25, 1, 2, 10], dtype="uint64")) + other = UInt64Index(2**63 + np.array([7, 12, 25, 1, 2, 10], dtype="uint64")) other_mono = UInt64Index( - 2 ** 63 + np.array([1, 2, 7, 10, 12, 25], dtype="uint64") + 2**63 + np.array([1, 2, 7, 10, 12, 25], dtype="uint64") ) # not monotonic @@ -216,7 +216,7 @@ def test_join_inner(self, index_large): lidx = lidx.take(ind) ridx = ridx.take(ind) - eres = UInt64Index(2 ** 63 + np.array([10, 25], dtype="uint64")) + eres = UInt64Index(2**63 + np.array([10, 25], dtype="uint64")) elidx = np.array([1, 4], dtype=np.intp) eridx = np.array([5, 2], dtype=np.intp) @@ -242,9 +242,9 @@ def test_join_inner(self, index_large): tm.assert_numpy_array_equal(ridx, eridx) def test_join_left(self, index_large): - other = UInt64Index(2 ** 63 + np.array([7, 12, 25, 1, 2, 10], dtype="uint64")) + other = UInt64Index(2**63 + np.array([7, 12, 25, 1, 2, 10], dtype="uint64")) other_mono = UInt64Index( - 2 ** 63 + np.array([1, 2, 7, 10, 12, 25], dtype="uint64") + 2**63 + np.array([1, 2, 7, 10, 12, 25], dtype="uint64") ) # not monotonic @@ -267,12 +267,12 @@ def test_join_left(self, index_large): tm.assert_numpy_array_equal(ridx, eridx) # non-unique - idx = UInt64Index(2 ** 63 + np.array([1, 1, 2, 5], dtype="uint64")) - idx2 = UInt64Index(2 ** 63 + np.array([1, 2, 5, 7, 9], dtype="uint64")) + idx = UInt64Index(2**63 + np.array([1, 1, 2, 5], dtype="uint64")) + idx2 = UInt64Index(2**63 + np.array([1, 2, 5, 7, 9], dtype="uint64")) res, lidx, ridx = idx2.join(idx, how="left", return_indexers=True) # 1 is in idx2, so it should be x2 - eres = UInt64Index(2 ** 63 + np.array([1, 1, 2, 5, 7, 9], dtype="uint64")) + eres = UInt64Index(2**63 + np.array([1, 1, 2, 5, 7, 9], dtype="uint64")) eridx = np.array([0, 1, 2, 3, -1, -1], dtype=np.intp) elidx = np.array([0, 0, 1, 2, 3, 4], dtype=np.intp) @@ -281,9 
+281,9 @@ def test_join_left(self, index_large): tm.assert_numpy_array_equal(ridx, eridx) def test_join_right(self, index_large): - other = UInt64Index(2 ** 63 + np.array([7, 12, 25, 1, 2, 10], dtype="uint64")) + other = UInt64Index(2**63 + np.array([7, 12, 25, 1, 2, 10], dtype="uint64")) other_mono = UInt64Index( - 2 ** 63 + np.array([1, 2, 7, 10, 12, 25], dtype="uint64") + 2**63 + np.array([1, 2, 7, 10, 12, 25], dtype="uint64") ) # not monotonic @@ -309,12 +309,12 @@ def test_join_right(self, index_large): assert ridx is None # non-unique - idx = UInt64Index(2 ** 63 + np.array([1, 1, 2, 5], dtype="uint64")) - idx2 = UInt64Index(2 ** 63 + np.array([1, 2, 5, 7, 9], dtype="uint64")) + idx = UInt64Index(2**63 + np.array([1, 1, 2, 5], dtype="uint64")) + idx2 = UInt64Index(2**63 + np.array([1, 2, 5, 7, 9], dtype="uint64")) res, lidx, ridx = idx.join(idx2, how="right", return_indexers=True) # 1 is in idx2, so it should be x2 - eres = UInt64Index(2 ** 63 + np.array([1, 1, 2, 5, 7, 9], dtype="uint64")) + eres = UInt64Index(2**63 + np.array([1, 1, 2, 5, 7, 9], dtype="uint64")) elidx = np.array([0, 1, 2, 3, -1, -1], dtype=np.intp) eridx = np.array([0, 0, 1, 2, 3, 4], dtype=np.intp) @@ -324,20 +324,20 @@ def test_join_right(self, index_large): def test_join_non_int_index(self, index_large): other = Index( - 2 ** 63 + np.array([1, 5, 7, 10, 20], dtype="uint64"), dtype=object + 2**63 + np.array([1, 5, 7, 10, 20], dtype="uint64"), dtype=object ) outer = index_large.join(other, how="outer") outer2 = other.join(index_large, how="outer") expected = Index( - 2 ** 63 + np.array([0, 1, 5, 7, 10, 15, 20, 25], dtype="uint64") + 2**63 + np.array([0, 1, 5, 7, 10, 15, 20, 25], dtype="uint64") ) tm.assert_index_equal(outer, outer2) tm.assert_index_equal(outer, expected) inner = index_large.join(other, how="inner") inner2 = other.join(index_large, how="inner") - expected = Index(2 ** 63 + np.array([10, 20], dtype="uint64")) + expected = Index(2**63 + np.array([10, 20], dtype="uint64")) 
tm.assert_index_equal(inner, inner2) tm.assert_index_equal(inner, expected) @@ -354,9 +354,9 @@ def test_join_non_int_index(self, index_large): tm.assert_index_equal(right2, index_large.astype(object)) def test_join_outer(self, index_large): - other = UInt64Index(2 ** 63 + np.array([7, 12, 25, 1, 2, 10], dtype="uint64")) + other = UInt64Index(2**63 + np.array([7, 12, 25, 1, 2, 10], dtype="uint64")) other_mono = UInt64Index( - 2 ** 63 + np.array([1, 2, 7, 10, 12, 25], dtype="uint64") + 2**63 + np.array([1, 2, 7, 10, 12, 25], dtype="uint64") ) # not monotonic @@ -366,7 +366,7 @@ def test_join_outer(self, index_large): tm.assert_index_equal(res, noidx_res) eres = UInt64Index( - 2 ** 63 + np.array([0, 1, 2, 7, 10, 12, 15, 20, 25], dtype="uint64") + 2**63 + np.array([0, 1, 2, 7, 10, 12, 15, 20, 25], dtype="uint64") ) elidx = np.array([0, -1, -1, -1, 1, -1, 2, 3, 4], dtype=np.intp) eridx = np.array([-1, 3, 4, 0, 5, 1, -1, -1, 2], dtype=np.intp) diff --git a/pandas/tests/indexes/numeric/test_numeric.py b/pandas/tests/indexes/numeric/test_numeric.py index af308379cba5e..85384cbd41ea5 100644 --- a/pandas/tests/indexes/numeric/test_numeric.py +++ b/pandas/tests/indexes/numeric/test_numeric.py @@ -563,8 +563,8 @@ def simple_index(self, dtype): @pytest.fixture( params=[ - [2 ** 63, 2 ** 63 + 10, 2 ** 63 + 15, 2 ** 63 + 20, 2 ** 63 + 25], - [2 ** 63 + 25, 2 ** 63 + 20, 2 ** 63 + 15, 2 ** 63 + 10, 2 ** 63], + [2**63, 2**63 + 10, 2**63 + 15, 2**63 + 20, 2**63 + 25], + [2**63 + 25, 2**63 + 20, 2**63 + 15, 2**63 + 10, 2**63], ], ids=["index_inc", "index_dec"], ) @@ -594,21 +594,21 @@ def test_constructor(self, dtype): res = Index([1, 2, 3], dtype=dtype) tm.assert_index_equal(res, idx, exact=exact) - idx = index_cls([1, 2 ** 63]) - res = Index([1, 2 ** 63], dtype=dtype) + idx = index_cls([1, 2**63]) + res = Index([1, 2**63], dtype=dtype) tm.assert_index_equal(res, idx, exact=exact) - idx = index_cls([1, 2 ** 63]) - res = Index([1, 2 ** 63]) + idx = index_cls([1, 2**63]) + res = 
Index([1, 2**63]) tm.assert_index_equal(res, idx, exact=exact) - idx = Index([-1, 2 ** 63], dtype=object) - res = Index(np.array([-1, 2 ** 63], dtype=object)) + idx = Index([-1, 2**63], dtype=object) + res = Index(np.array([-1, 2**63], dtype=object)) tm.assert_index_equal(res, idx, exact=exact) # https://github.com/pandas-dev/pandas/issues/29526 - idx = index_cls([1, 2 ** 63 + 1], dtype=dtype) - res = Index([1, 2 ** 63 + 1], dtype=dtype) + idx = index_cls([1, 2**63 + 1], dtype=dtype) + res = Index([1, 2**63 + 1], dtype=dtype) tm.assert_index_equal(res, idx, exact=exact) def test_constructor_does_not_cast_to_float(self): diff --git a/pandas/tests/indexes/numeric/test_setops.py b/pandas/tests/indexes/numeric/test_setops.py index 72336d3e33b79..9f2174c2de51e 100644 --- a/pandas/tests/indexes/numeric/test_setops.py +++ b/pandas/tests/indexes/numeric/test_setops.py @@ -19,7 +19,7 @@ @pytest.fixture def index_large(): # large values used in TestUInt64Index where no compat needed with Int64/Float64 - large = [2 ** 63, 2 ** 63 + 10, 2 ** 63 + 15, 2 ** 63 + 20, 2 ** 63 + 25] + large = [2**63, 2**63 + 10, 2**63 + 15, 2**63 + 20, 2**63 + 25] return UInt64Index(large) @@ -89,7 +89,7 @@ def test_float64_index_difference(self): tm.assert_index_equal(result, string_index) def test_intersection_uint64_outside_int64_range(self, index_large): - other = Index([2 ** 63, 2 ** 63 + 5, 2 ** 63 + 10, 2 ** 63 + 15, 2 ** 63 + 20]) + other = Index([2**63, 2**63 + 5, 2**63 + 10, 2**63 + 15, 2**63 + 20]) result = index_large.intersection(other) expected = Index(np.sort(np.intersect1d(index_large.values, other.values))) tm.assert_index_equal(result, expected) diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py index 1145de14ad3c4..222f1fc3e7648 100644 --- a/pandas/tests/indexes/test_base.py +++ b/pandas/tests/indexes/test_base.py @@ -743,7 +743,7 @@ def test_drop_tuple(self, values, to_drop): tm.assert_index_equal(result, expected) removed = index.drop(to_drop[1]) 
- msg = fr"\"\[{re.escape(to_drop[1].__repr__())}\] not found in axis\"" + msg = rf"\"\[{re.escape(to_drop[1].__repr__())}\] not found in axis\"" for drop_me in to_drop[1], [to_drop[1]]: with pytest.raises(KeyError, match=msg): removed.drop(drop_me) @@ -871,7 +871,7 @@ def test_isin_level_kwarg_bad_label_raises(self, label, index): msg = f"'Level {label} not found'" else: index = index.rename("foo") - msg = fr"Requested level \({label}\) does not match index name \(foo\)" + msg = rf"Requested level \({label}\) does not match index name \(foo\)" with pytest.raises(KeyError, match=msg): index.isin([], level=label) diff --git a/pandas/tests/indexes/test_common.py b/pandas/tests/indexes/test_common.py index 5fb9673cf52d7..6159c53ea5bf4 100644 --- a/pandas/tests/indexes/test_common.py +++ b/pandas/tests/indexes/test_common.py @@ -195,8 +195,8 @@ def test_unique_level(self, index_flat): index.unique(level=3) msg = ( - fr"Requested level \(wrong\) does not match index name " - fr"\({re.escape(index.name.__repr__())}\)" + rf"Requested level \(wrong\) does not match index name " + rf"\({re.escape(index.name.__repr__())}\)" ) with pytest.raises(KeyError, match=msg): index.unique(level="wrong") diff --git a/pandas/tests/indexes/timedeltas/test_constructors.py b/pandas/tests/indexes/timedeltas/test_constructors.py index dcddba8b22937..25a0a66ada519 100644 --- a/pandas/tests/indexes/timedeltas/test_constructors.py +++ b/pandas/tests/indexes/timedeltas/test_constructors.py @@ -51,7 +51,7 @@ def test_infer_from_tdi(self): # GH#23539 # fast-path for inferring a frequency if the passed data already # has one - tdi = timedelta_range("1 second", periods=10 ** 7, freq="1s") + tdi = timedelta_range("1 second", periods=10**7, freq="1s") result = TimedeltaIndex(tdi, freq="infer") assert result.freq == tdi.freq diff --git a/pandas/tests/indexes/timedeltas/test_timedelta.py b/pandas/tests/indexes/timedeltas/test_timedelta.py index 8ceef8186e4ea..6904a847b04ed 100644 --- 
a/pandas/tests/indexes/timedeltas/test_timedelta.py +++ b/pandas/tests/indexes/timedeltas/test_timedelta.py @@ -105,7 +105,7 @@ def test_freq_conversion_always_floating(self): tdi = timedelta_range("1 Day", periods=30) res = tdi.astype("m8[s]") - expected = Index((tdi.view("i8") / 10 ** 9).astype(np.float64)) + expected = Index((tdi.view("i8") / 10**9).astype(np.float64)) tm.assert_index_equal(res, expected) # check this matches Series and TimedeltaArray diff --git a/pandas/tests/indexing/test_coercion.py b/pandas/tests/indexing/test_coercion.py index 75337cb3f453f..2f616b79a680f 100644 --- a/pandas/tests/indexing/test_coercion.py +++ b/pandas/tests/indexing/test_coercion.py @@ -114,7 +114,7 @@ def test_setitem_series_int64(self, val, exp_dtype): self._assert_setitem_series_conversion(obj, val, exp, exp_dtype) @pytest.mark.parametrize( - "val,exp_dtype", [(np.int32(1), np.int8), (np.int16(2 ** 9), np.int16)] + "val,exp_dtype", [(np.int32(1), np.int8), (np.int16(2**9), np.int16)] ) def test_setitem_series_int8(self, val, exp_dtype): obj = pd.Series([1, 2, 3, 4], dtype=np.int8) diff --git a/pandas/tests/indexing/test_floats.py b/pandas/tests/indexing/test_floats.py index 902bd943584d9..81e1b82dd432e 100644 --- a/pandas/tests/indexing/test_floats.py +++ b/pandas/tests/indexing/test_floats.py @@ -240,13 +240,13 @@ def test_slice_non_numeric(self, index_func, idx, frame_or_series, indexer_sli): if indexer_sli is tm.iloc: msg = ( "cannot do positional indexing " - fr"on {type(index).__name__} with these indexers \[(3|4)\.0\] of " + rf"on {type(index).__name__} with these indexers \[(3|4)\.0\] of " "type float" ) else: msg = ( "cannot do slice indexing " - fr"on {type(index).__name__} with these indexers " + rf"on {type(index).__name__} with these indexers " r"\[(3|4)(\.0)?\] " r"of type (float|int)" ) @@ -306,7 +306,7 @@ def test_slice_integer(self): # positional indexing msg = ( "cannot do slice indexing " - fr"on {type(index).__name__} with these indexers \[-6\.0\] of 
"
+            rf"on {type(index).__name__} with these indexers \[-6\.0\] of "
             "type float"
         )
         with pytest.raises(TypeError, match=msg):
@@ -330,7 +330,7 @@ def test_slice_integer(self):
         # positional indexing
         msg = (
             "cannot do slice indexing "
-            fr"on {type(index).__name__} with these indexers \[(2|3)\.5\] of "
+            rf"on {type(index).__name__} with these indexers \[(2|3)\.5\] of "
             "type float"
         )
         with pytest.raises(TypeError, match=msg):
@@ -350,7 +350,7 @@ def test_integer_positional_indexing(self, idx):
         klass = RangeIndex
         msg = (
             "cannot do (slice|positional) indexing "
-            fr"on {klass.__name__} with these indexers \[(2|4)\.0\] of "
+            rf"on {klass.__name__} with these indexers \[(2|4)\.0\] of "
             "type float"
         )
         with pytest.raises(TypeError, match=msg):
@@ -376,7 +376,7 @@ def test_slice_integer_frame_getitem(self, index_func):
         # positional indexing
         msg = (
             "cannot do slice indexing "
-            fr"on {type(index).__name__} with these indexers \[(0|1)\.0\] of "
+            rf"on {type(index).__name__} with these indexers \[(0|1)\.0\] of "
             "type float"
         )
         with pytest.raises(TypeError, match=msg):
@@ -391,7 +391,7 @@ def test_slice_integer_frame_getitem(self, index_func):
         # positional indexing
         msg = (
             "cannot do slice indexing "
-            fr"on {type(index).__name__} with these indexers \[-10\.0\] of "
+            rf"on {type(index).__name__} with these indexers \[-10\.0\] of "
             "type float"
         )
         with pytest.raises(TypeError, match=msg):
@@ -410,7 +410,7 @@ def test_slice_integer_frame_getitem(self, index_func):
         # positional indexing
         msg = (
             "cannot do slice indexing "
-            fr"on {type(index).__name__} with these indexers \[0\.5\] of "
+            rf"on {type(index).__name__} with these indexers \[0\.5\] of "
             "type float"
         )
         with pytest.raises(TypeError, match=msg):
@@ -434,7 +434,7 @@ def test_float_slice_getitem_with_integer_index_raises(self, idx, index_func):
         # positional indexing
         msg = (
             "cannot do slice indexing "
-            fr"on {type(index).__name__} with these indexers \[(3|4)\.0\] of "
+            rf"on {type(index).__name__} with these indexers \[(3|4)\.0\] of "
             "type float"
         )
         with pytest.raises(TypeError, match=msg):
diff --git a/pandas/tests/indexing/test_iloc.py b/pandas/tests/indexing/test_iloc.py
index ee9d276925d41..d5909ce8eb48f 100644
--- a/pandas/tests/indexing/test_iloc.py
+++ b/pandas/tests/indexing/test_iloc.py
@@ -747,7 +747,7 @@ def test_iloc_mask(self):
 
         # the possibilities
         locs = np.arange(4)
-        nums = 2 ** locs
+        nums = 2**locs
         reps = [bin(num) for num in nums]
 
         df = DataFrame({"locs": locs, "nums": nums}, reps)
diff --git a/pandas/tests/indexing/test_loc.py b/pandas/tests/indexing/test_loc.py
index 63c5091865160..892de75855c19 100644
--- a/pandas/tests/indexing/test_loc.py
+++ b/pandas/tests/indexing/test_loc.py
@@ -1307,11 +1307,11 @@ def test_loc_getitem_timedelta_0seconds(self):
         tm.assert_frame_equal(result, expected)
 
     @pytest.mark.parametrize(
-        "val,expected", [(2 ** 63 - 1, Series([1])), (2 ** 63, Series([2]))]
+        "val,expected", [(2**63 - 1, Series([1])), (2**63, Series([2]))]
     )
     def test_loc_getitem_uint64_scalar(self, val, expected):
         # see GH#19399
-        df = DataFrame([1, 2], index=[2 ** 63 - 1, 2 ** 63])
+        df = DataFrame([1, 2], index=[2**63 - 1, 2**63])
         result = df.loc[val]
 
         expected.name = val
@@ -1825,9 +1825,9 @@ class TestLocSetitemWithExpansion:
     @pytest.mark.slow
     def test_loc_setitem_with_expansion_large_dataframe(self):
         # GH#10692
-        result = DataFrame({"x": range(10 ** 6)}, dtype="int64")
+        result = DataFrame({"x": range(10**6)}, dtype="int64")
         result.loc[len(result)] = len(result) + 1
-        expected = DataFrame({"x": range(10 ** 6 + 1)}, dtype="int64")
+        expected = DataFrame({"x": range(10**6 + 1)}, dtype="int64")
         tm.assert_frame_equal(result, expected)
 
     def test_loc_setitem_empty_series(self):
@@ -2735,10 +2735,10 @@ def test_loc_getitem_nullable_index_with_duplicates():
 
 
 class TestLocSeries:
-    @pytest.mark.parametrize("val,expected", [(2 ** 63 - 1, 3), (2 ** 63, 4)])
+    @pytest.mark.parametrize("val,expected", [(2**63 - 1, 3), (2**63, 4)])
     def test_loc_uint64(self, val, expected):
         # see GH#19399
-        ser = Series({2 ** 63 - 1: 3, 2 ** 63: 4})
+        ser = Series({2**63 - 1: 3, 2**63: 4})
         assert ser.loc[val] == expected
 
     def test_loc_getitem(self, string_series, datetime_series):
diff --git a/pandas/tests/io/excel/test_writers.py b/pandas/tests/io/excel/test_writers.py
index 0315783569c23..c69876c6a3956 100644
--- a/pandas/tests/io/excel/test_writers.py
+++ b/pandas/tests/io/excel/test_writers.py
@@ -338,8 +338,8 @@ class TestExcelWriter:
 
     def test_excel_sheet_size(self, path):
         # GH 26080
-        breaking_row_count = 2 ** 20 + 1
-        breaking_col_count = 2 ** 14 + 1
+        breaking_row_count = 2**20 + 1
+        breaking_col_count = 2**14 + 1
         # purposely using two arrays to prevent memory issues while testing
         row_arr = np.zeros(shape=(breaking_row_count, 1))
         col_arr = np.zeros(shape=(1, breaking_col_count))
diff --git a/pandas/tests/io/formats/test_eng_formatting.py b/pandas/tests/io/formats/test_eng_formatting.py
index b8e3122cac5c4..2f18623559557 100644
--- a/pandas/tests/io/formats/test_eng_formatting.py
+++ b/pandas/tests/io/formats/test_eng_formatting.py
@@ -56,57 +56,57 @@ def test_exponents_with_eng_prefix(self):
         formatter = fmt.EngFormatter(accuracy=3, use_eng_prefix=True)
         f = np.sqrt(2)
         in_out = [
-            (f * 10 ** -24, " 1.414y"),
-            (f * 10 ** -23, " 14.142y"),
-            (f * 10 ** -22, " 141.421y"),
-            (f * 10 ** -21, " 1.414z"),
-            (f * 10 ** -20, " 14.142z"),
-            (f * 10 ** -19, " 141.421z"),
-            (f * 10 ** -18, " 1.414a"),
-            (f * 10 ** -17, " 14.142a"),
-            (f * 10 ** -16, " 141.421a"),
-            (f * 10 ** -15, " 1.414f"),
-            (f * 10 ** -14, " 14.142f"),
-            (f * 10 ** -13, " 141.421f"),
-            (f * 10 ** -12, " 1.414p"),
-            (f * 10 ** -11, " 14.142p"),
-            (f * 10 ** -10, " 141.421p"),
-            (f * 10 ** -9, " 1.414n"),
-            (f * 10 ** -8, " 14.142n"),
-            (f * 10 ** -7, " 141.421n"),
-            (f * 10 ** -6, " 1.414u"),
-            (f * 10 ** -5, " 14.142u"),
-            (f * 10 ** -4, " 141.421u"),
-            (f * 10 ** -3, " 1.414m"),
-            (f * 10 ** -2, " 14.142m"),
-            (f * 10 ** -1, " 141.421m"),
-            (f * 10 ** 0, " 1.414"),
-            (f * 10 ** 1, " 14.142"),
-            (f * 10 ** 2, " 141.421"),
-            (f * 10 ** 3, " 1.414k"),
-            (f * 10 ** 4, " 14.142k"),
-            (f * 10 ** 5, " 141.421k"),
-            (f * 10 ** 6, " 1.414M"),
-            (f * 10 ** 7, " 14.142M"),
-            (f * 10 ** 8, " 141.421M"),
-            (f * 10 ** 9, " 1.414G"),
-            (f * 10 ** 10, " 14.142G"),
-            (f * 10 ** 11, " 141.421G"),
-            (f * 10 ** 12, " 1.414T"),
-            (f * 10 ** 13, " 14.142T"),
-            (f * 10 ** 14, " 141.421T"),
-            (f * 10 ** 15, " 1.414P"),
-            (f * 10 ** 16, " 14.142P"),
-            (f * 10 ** 17, " 141.421P"),
-            (f * 10 ** 18, " 1.414E"),
-            (f * 10 ** 19, " 14.142E"),
-            (f * 10 ** 20, " 141.421E"),
-            (f * 10 ** 21, " 1.414Z"),
-            (f * 10 ** 22, " 14.142Z"),
-            (f * 10 ** 23, " 141.421Z"),
-            (f * 10 ** 24, " 1.414Y"),
-            (f * 10 ** 25, " 14.142Y"),
-            (f * 10 ** 26, " 141.421Y"),
+            (f * 10**-24, " 1.414y"),
+            (f * 10**-23, " 14.142y"),
+            (f * 10**-22, " 141.421y"),
+            (f * 10**-21, " 1.414z"),
+            (f * 10**-20, " 14.142z"),
+            (f * 10**-19, " 141.421z"),
+            (f * 10**-18, " 1.414a"),
+            (f * 10**-17, " 14.142a"),
+            (f * 10**-16, " 141.421a"),
+            (f * 10**-15, " 1.414f"),
+            (f * 10**-14, " 14.142f"),
+            (f * 10**-13, " 141.421f"),
+            (f * 10**-12, " 1.414p"),
+            (f * 10**-11, " 14.142p"),
+            (f * 10**-10, " 141.421p"),
+            (f * 10**-9, " 1.414n"),
+            (f * 10**-8, " 14.142n"),
+            (f * 10**-7, " 141.421n"),
+            (f * 10**-6, " 1.414u"),
+            (f * 10**-5, " 14.142u"),
+            (f * 10**-4, " 141.421u"),
+            (f * 10**-3, " 1.414m"),
+            (f * 10**-2, " 14.142m"),
+            (f * 10**-1, " 141.421m"),
+            (f * 10**0, " 1.414"),
+            (f * 10**1, " 14.142"),
+            (f * 10**2, " 141.421"),
+            (f * 10**3, " 1.414k"),
+            (f * 10**4, " 14.142k"),
+            (f * 10**5, " 141.421k"),
+            (f * 10**6, " 1.414M"),
+            (f * 10**7, " 14.142M"),
+            (f * 10**8, " 141.421M"),
+            (f * 10**9, " 1.414G"),
+            (f * 10**10, " 14.142G"),
+            (f * 10**11, " 141.421G"),
+            (f * 10**12, " 1.414T"),
+            (f * 10**13, " 14.142T"),
+            (f * 10**14, " 141.421T"),
+            (f * 10**15, " 1.414P"),
+            (f * 10**16, " 14.142P"),
+            (f * 10**17, " 141.421P"),
+            (f * 10**18, " 1.414E"),
+            (f * 10**19, " 14.142E"),
+            (f * 10**20, " 141.421E"),
+            (f * 10**21, " 1.414Z"),
+            (f * 10**22, " 14.142Z"),
+            (f * 10**23, " 141.421Z"),
+            (f * 10**24, " 1.414Y"),
+            (f * 10**25, " 14.142Y"),
+            (f * 10**26, " 141.421Y"),
         ]
 
         self.compare_all(formatter, in_out)
@@ -114,57 +114,57 @@ def test_exponents_without_eng_prefix(self):
         formatter = fmt.EngFormatter(accuracy=4, use_eng_prefix=False)
         f = np.pi
         in_out = [
-            (f * 10 ** -24, " 3.1416E-24"),
-            (f * 10 ** -23, " 31.4159E-24"),
-            (f * 10 ** -22, " 314.1593E-24"),
-            (f * 10 ** -21, " 3.1416E-21"),
-            (f * 10 ** -20, " 31.4159E-21"),
-            (f * 10 ** -19, " 314.1593E-21"),
-            (f * 10 ** -18, " 3.1416E-18"),
-            (f * 10 ** -17, " 31.4159E-18"),
-            (f * 10 ** -16, " 314.1593E-18"),
-            (f * 10 ** -15, " 3.1416E-15"),
-            (f * 10 ** -14, " 31.4159E-15"),
-            (f * 10 ** -13, " 314.1593E-15"),
-            (f * 10 ** -12, " 3.1416E-12"),
-            (f * 10 ** -11, " 31.4159E-12"),
-            (f * 10 ** -10, " 314.1593E-12"),
-            (f * 10 ** -9, " 3.1416E-09"),
-            (f * 10 ** -8, " 31.4159E-09"),
-            (f * 10 ** -7, " 314.1593E-09"),
-            (f * 10 ** -6, " 3.1416E-06"),
-            (f * 10 ** -5, " 31.4159E-06"),
-            (f * 10 ** -4, " 314.1593E-06"),
-            (f * 10 ** -3, " 3.1416E-03"),
-            (f * 10 ** -2, " 31.4159E-03"),
-            (f * 10 ** -1, " 314.1593E-03"),
-            (f * 10 ** 0, " 3.1416E+00"),
-            (f * 10 ** 1, " 31.4159E+00"),
-            (f * 10 ** 2, " 314.1593E+00"),
-            (f * 10 ** 3, " 3.1416E+03"),
-            (f * 10 ** 4, " 31.4159E+03"),
-            (f * 10 ** 5, " 314.1593E+03"),
-            (f * 10 ** 6, " 3.1416E+06"),
-            (f * 10 ** 7, " 31.4159E+06"),
-            (f * 10 ** 8, " 314.1593E+06"),
-            (f * 10 ** 9, " 3.1416E+09"),
-            (f * 10 ** 10, " 31.4159E+09"),
-            (f * 10 ** 11, " 314.1593E+09"),
-            (f * 10 ** 12, " 3.1416E+12"),
-            (f * 10 ** 13, " 31.4159E+12"),
-            (f * 10 ** 14, " 314.1593E+12"),
-            (f * 10 ** 15, " 3.1416E+15"),
-            (f * 10 ** 16, " 31.4159E+15"),
-            (f * 10 ** 17, " 314.1593E+15"),
-            (f * 10 ** 18, " 3.1416E+18"),
-            (f * 10 ** 19, " 31.4159E+18"),
-            (f * 10 ** 20, " 314.1593E+18"),
-            (f * 10 ** 21, " 3.1416E+21"),
-            (f * 10 ** 22, " 31.4159E+21"),
-            (f * 10 ** 23, " 314.1593E+21"),
-            (f * 10 ** 24, " 3.1416E+24"),
-            (f * 10 ** 25, " 31.4159E+24"),
-            (f * 10 ** 26, " 314.1593E+24"),
+            (f * 10**-24, " 3.1416E-24"),
+            (f * 10**-23, " 31.4159E-24"),
+            (f * 10**-22, " 314.1593E-24"),
+            (f * 10**-21, " 3.1416E-21"),
+            (f * 10**-20, " 31.4159E-21"),
+            (f * 10**-19, " 314.1593E-21"),
+            (f * 10**-18, " 3.1416E-18"),
+            (f * 10**-17, " 31.4159E-18"),
+            (f * 10**-16, " 314.1593E-18"),
+            (f * 10**-15, " 3.1416E-15"),
+            (f * 10**-14, " 31.4159E-15"),
+            (f * 10**-13, " 314.1593E-15"),
+            (f * 10**-12, " 3.1416E-12"),
+            (f * 10**-11, " 31.4159E-12"),
+            (f * 10**-10, " 314.1593E-12"),
+            (f * 10**-9, " 3.1416E-09"),
+            (f * 10**-8, " 31.4159E-09"),
+            (f * 10**-7, " 314.1593E-09"),
+            (f * 10**-6, " 3.1416E-06"),
+            (f * 10**-5, " 31.4159E-06"),
+            (f * 10**-4, " 314.1593E-06"),
+            (f * 10**-3, " 3.1416E-03"),
+            (f * 10**-2, " 31.4159E-03"),
+            (f * 10**-1, " 314.1593E-03"),
+            (f * 10**0, " 3.1416E+00"),
+            (f * 10**1, " 31.4159E+00"),
+            (f * 10**2, " 314.1593E+00"),
+            (f * 10**3, " 3.1416E+03"),
+            (f * 10**4, " 31.4159E+03"),
+            (f * 10**5, " 314.1593E+03"),
+            (f * 10**6, " 3.1416E+06"),
+            (f * 10**7, " 31.4159E+06"),
+            (f * 10**8, " 314.1593E+06"),
+            (f * 10**9, " 3.1416E+09"),
+            (f * 10**10, " 31.4159E+09"),
+            (f * 10**11, " 314.1593E+09"),
+            (f * 10**12, " 3.1416E+12"),
+            (f * 10**13, " 31.4159E+12"),
+            (f * 10**14, " 314.1593E+12"),
+            (f * 10**15, " 3.1416E+15"),
+            (f * 10**16, " 31.4159E+15"),
+            (f * 10**17, " 314.1593E+15"),
+            (f * 10**18, " 3.1416E+18"),
+            (f * 10**19, " 31.4159E+18"),
+            (f * 10**20, " 314.1593E+18"),
+            (f * 10**21, " 3.1416E+21"),
+            (f * 10**22, " 31.4159E+21"),
+            (f * 10**23, " 314.1593E+21"),
+            (f * 10**24, " 3.1416E+24"),
+            (f * 10**25, " 31.4159E+24"),
+            (f * 10**26, " 314.1593E+24"),
         ]
 
         self.compare_all(formatter, in_out)
diff --git a/pandas/tests/io/formats/test_to_latex.py b/pandas/tests/io/formats/test_to_latex.py
index 01bc94bf594d9..f8015851c9a83 100644
--- a/pandas/tests/io/formats/test_to_latex.py
+++ b/pandas/tests/io/formats/test_to_latex.py
@@ -282,7 +282,7 @@ def test_to_latex_longtable_without_index(self):
     )
     def test_to_latex_longtable_continued_on_next_page(self, df, expected_number):
         result = df.to_latex(index=False, longtable=True)
-        assert fr"\multicolumn{{{expected_number}}}" in result
+        assert rf"\multicolumn{{{expected_number}}}" in result
 
 
 class TestToLatexHeader:
@@ -1006,7 +1006,7 @@ def test_to_latex_na_rep_and_float_format(self, na_rep):
         )
         result = df.to_latex(na_rep=na_rep, float_format="{:.2f}".format)
         expected = _dedent(
-            fr"""
+            rf"""
             \begin{{tabular}}{{llr}}
             \toprule
             {{}} & Group & Data \\
diff --git a/pandas/tests/io/json/test_pandas.py b/pandas/tests/io/json/test_pandas.py
index f571daef3b40f..966fddda46bd8 100644
--- a/pandas/tests/io/json/test_pandas.py
+++ b/pandas/tests/io/json/test_pandas.py
@@ -1275,7 +1275,7 @@ def test_to_json_large_numbers(self, bigNum):
         expected = '{"0":{"articleId":' + str(bigNum) + "}}"
         assert json == expected
 
-    @pytest.mark.parametrize("bigNum", [-(2 ** 63) - 1, 2 ** 64])
+    @pytest.mark.parametrize("bigNum", [-(2**63) - 1, 2**64])
     def test_read_json_large_numbers(self, bigNum):
         # GH20599, 26068
         json = StringIO('{"articleId":' + str(bigNum) + "}")
diff --git a/pandas/tests/io/json/test_ujson.py b/pandas/tests/io/json/test_ujson.py
index b4ae54d48dc68..41a417f6b3ef4 100644
--- a/pandas/tests/io/json/test_ujson.py
+++ b/pandas/tests/io/json/test_ujson.py
@@ -428,13 +428,13 @@ def test_datetime_units(self):
         stamp = Timestamp(val)
 
         roundtrip = ujson.decode(ujson.encode(val, date_unit="s"))
-        assert roundtrip == stamp.value // 10 ** 9
+        assert roundtrip == stamp.value // 10**9
 
         roundtrip = ujson.decode(ujson.encode(val, date_unit="ms"))
-        assert roundtrip == stamp.value // 10 ** 6
+        assert roundtrip == stamp.value // 10**6
 
         roundtrip = ujson.decode(ujson.encode(val, date_unit="us"))
-        assert roundtrip == stamp.value // 10 ** 3
+        assert roundtrip == stamp.value // 10**3
 
         roundtrip = ujson.decode(ujson.encode(val, date_unit="ns"))
         assert roundtrip == stamp.value
@@ -606,7 +606,7 @@ def test_encode_long_conversion(self, long_input):
         assert output == json.dumps(long_input)
         assert long_input == ujson.decode(output)
 
-    @pytest.mark.parametrize("bigNum", [2 ** 64, -(2 ** 63) - 1])
+    @pytest.mark.parametrize("bigNum", [2**64, -(2**63) - 1])
     def test_dumps_ints_larger_than_maxsize(self, bigNum):
         encoding = ujson.encode(bigNum)
         assert str(bigNum) == encoding
@@ -628,7 +628,7 @@ def test_loads_non_str_bytes_raises(self):
         with pytest.raises(TypeError, match=msg):
             ujson.loads(None)
 
-    @pytest.mark.parametrize("val", [3590016419, 2 ** 31, 2 ** 32, (2 ** 32) - 1])
+    @pytest.mark.parametrize("val", [3590016419, 2**31, 2**32, (2**32) - 1])
     def test_decode_number_with_32bit_sign_bit(self, val):
         # Test that numbers that fit within 32 bits but would have the
         # sign bit set (2**31 <= x < 2**32) are decoded properly.
diff --git a/pandas/tests/io/parser/common/test_ints.py b/pandas/tests/io/parser/common/test_ints.py
index aef2020fe0847..e3159ef3e6a42 100644
--- a/pandas/tests/io/parser/common/test_ints.py
+++ b/pandas/tests/io/parser/common/test_ints.py
@@ -193,7 +193,7 @@ def test_outside_int64_uint64_range(all_parsers, val):
 
 
 @skip_pyarrow
-@pytest.mark.parametrize("exp_data", [[str(-1), str(2 ** 63)], [str(2 ** 63), str(-1)]])
+@pytest.mark.parametrize("exp_data", [[str(-1), str(2**63)], [str(2**63), str(-1)]])
 def test_numeric_range_too_wide(all_parsers, exp_data):
     # No numerical dtype can hold both negative and uint64
    # values, so they should be cast as string.
diff --git a/pandas/tests/io/parser/test_na_values.py b/pandas/tests/io/parser/test_na_values.py
index f9356dfc7d0e3..9fb096bfeb346 100644
--- a/pandas/tests/io/parser/test_na_values.py
+++ b/pandas/tests/io/parser/test_na_values.py
@@ -469,12 +469,12 @@ def test_na_values_dict_col_index(all_parsers):
     "data,kwargs,expected",
     [
         (
-            str(2 ** 63) + "\n" + str(2 ** 63 + 1),
-            {"na_values": [2 ** 63]},
-            DataFrame([str(2 ** 63), str(2 ** 63 + 1)]),
+            str(2**63) + "\n" + str(2**63 + 1),
+            {"na_values": [2**63]},
+            DataFrame([str(2**63), str(2**63 + 1)]),
         ),
-        (str(2 ** 63) + ",1" + "\n,2", {}, DataFrame([[str(2 ** 63), 1], ["", 2]])),
-        (str(2 ** 63) + "\n1", {"na_values": [2 ** 63]}, DataFrame([np.nan, 1])),
+        (str(2**63) + ",1" + "\n,2", {}, DataFrame([[str(2**63), 1], ["", 2]])),
+        (str(2**63) + "\n1", {"na_values": [2**63]}, DataFrame([np.nan, 1])),
     ],
 )
 def test_na_values_uint64(all_parsers, data, kwargs, expected):
diff --git a/pandas/tests/io/pytables/test_append.py b/pandas/tests/io/pytables/test_append.py
index f38ea4d5cf306..ec72ca49747b3 100644
--- a/pandas/tests/io/pytables/test_append.py
+++ b/pandas/tests/io/pytables/test_append.py
@@ -77,10 +77,10 @@ def test_append(setup_path):
                     np.random.randint(0, high=65535, size=5), dtype=np.uint16
                 ),
                 "u32": Series(
-                    np.random.randint(0, high=2 ** 30, size=5), dtype=np.uint32
+                    np.random.randint(0, high=2**30, size=5), dtype=np.uint32
                 ),
                 "u64": Series(
-                    [2 ** 58, 2 ** 59, 2 ** 60, 2 ** 61, 2 ** 62],
+                    [2**58, 2**59, 2**60, 2**61, 2**62],
                     dtype=np.uint64,
                 ),
             },
diff --git a/pandas/tests/io/pytables/test_compat.py b/pandas/tests/io/pytables/test_compat.py
index c7200385aa998..5fe55fda8a452 100644
--- a/pandas/tests/io/pytables/test_compat.py
+++ b/pandas/tests/io/pytables/test_compat.py
@@ -23,7 +23,7 @@ def pytables_hdf5_file():
     testsamples = [
         {"c0": t0, "c1": "aaaaa", "c2": 1},
         {"c0": t0 + 1, "c1": "bbbbb", "c2": 2},
-        {"c0": t0 + 2, "c1": "ccccc", "c2": 10 ** 5},
+        {"c0": t0 + 2, "c1": "ccccc", "c2": 10**5},
         {"c0": t0 + 3, "c1": "ddddd", "c2": 4_294_967_295},
     ]
diff --git a/pandas/tests/io/test_common.py b/pandas/tests/io/test_common.py
index b458f3351c860..36048601b3248 100644
--- a/pandas/tests/io/test_common.py
+++ b/pandas/tests/io/test_common.py
@@ -195,23 +195,23 @@ def test_read_non_existent(self, reader, module, error_class, fn_ext):
         pytest.importorskip(module)
 
         path = os.path.join(HERE, "data", "does_not_exist." + fn_ext)
-        msg1 = fr"File (b')?.+does_not_exist\.{fn_ext}'? does not exist"
-        msg2 = fr"\[Errno 2\] No such file or directory: '.+does_not_exist\.{fn_ext}'"
+        msg1 = rf"File (b')?.+does_not_exist\.{fn_ext}'? does not exist"
+        msg2 = rf"\[Errno 2\] No such file or directory: '.+does_not_exist\.{fn_ext}'"
         msg3 = "Expected object or value"
         msg4 = "path_or_buf needs to be a string file path or file-like"
         msg5 = (
-            fr"\[Errno 2\] File .+does_not_exist\.{fn_ext} does not exist: "
-            fr"'.+does_not_exist\.{fn_ext}'"
+            rf"\[Errno 2\] File .+does_not_exist\.{fn_ext} does not exist: "
+            rf"'.+does_not_exist\.{fn_ext}'"
         )
-        msg6 = fr"\[Errno 2\] 没有那个文件或目录: '.+does_not_exist\.{fn_ext}'"
+        msg6 = rf"\[Errno 2\] 没有那个文件或目录: '.+does_not_exist\.{fn_ext}'"
         msg7 = (
-            fr"\[Errno 2\] File o directory non esistente: '.+does_not_exist\.{fn_ext}'"
+            rf"\[Errno 2\] File o directory non esistente: '.+does_not_exist\.{fn_ext}'"
         )
-        msg8 = fr"Failed to open local file.+does_not_exist\.{fn_ext}"
+        msg8 = rf"Failed to open local file.+does_not_exist\.{fn_ext}"
 
         with pytest.raises(
             error_class,
-            match=fr"({msg1}|{msg2}|{msg3}|{msg4}|{msg5}|{msg6}|{msg7}|{msg8})",
+            match=rf"({msg1}|{msg2}|{msg3}|{msg4}|{msg5}|{msg6}|{msg7}|{msg8})",
         ):
             reader(path)
@@ -265,23 +265,23 @@ def test_read_expands_user_home_dir(
         path = os.path.join("~", "does_not_exist." + fn_ext)
         monkeypatch.setattr(icom, "_expand_user", lambda x: os.path.join("foo", x))
 
-        msg1 = fr"File (b')?.+does_not_exist\.{fn_ext}'? does not exist"
-        msg2 = fr"\[Errno 2\] No such file or directory: '.+does_not_exist\.{fn_ext}'"
+        msg1 = rf"File (b')?.+does_not_exist\.{fn_ext}'? does not exist"
+        msg2 = rf"\[Errno 2\] No such file or directory: '.+does_not_exist\.{fn_ext}'"
         msg3 = "Unexpected character found when decoding 'false'"
         msg4 = "path_or_buf needs to be a string file path or file-like"
         msg5 = (
-            fr"\[Errno 2\] File .+does_not_exist\.{fn_ext} does not exist: "
-            fr"'.+does_not_exist\.{fn_ext}'"
+            rf"\[Errno 2\] File .+does_not_exist\.{fn_ext} does not exist: "
+            rf"'.+does_not_exist\.{fn_ext}'"
        )
-        msg6 = fr"\[Errno 2\] 没有那个文件或目录: '.+does_not_exist\.{fn_ext}'"
+        msg6 = rf"\[Errno 2\] 没有那个文件或目录: '.+does_not_exist\.{fn_ext}'"
         msg7 = (
-            fr"\[Errno 2\] File o directory non esistente: '.+does_not_exist\.{fn_ext}'"
+            rf"\[Errno 2\] File o directory non esistente: '.+does_not_exist\.{fn_ext}'"
         )
-        msg8 = fr"Failed to open local file.+does_not_exist\.{fn_ext}"
+        msg8 = rf"Failed to open local file.+does_not_exist\.{fn_ext}"
 
         with pytest.raises(
             error_class,
-            match=fr"({msg1}|{msg2}|{msg3}|{msg4}|{msg5}|{msg6}|{msg7}|{msg8})",
+            match=rf"({msg1}|{msg2}|{msg3}|{msg4}|{msg5}|{msg6}|{msg7}|{msg8})",
         ):
             reader(path)
diff --git a/pandas/tests/io/test_sql.py b/pandas/tests/io/test_sql.py
index 8263c437030eb..4d483099157ae 100644
--- a/pandas/tests/io/test_sql.py
+++ b/pandas/tests/io/test_sql.py
@@ -369,7 +369,7 @@ def test_frame1():
 def test_frame3():
     columns = ["index", "A", "B"]
     data = [
-        ("2000-01-03 00:00:00", 2 ** 31 - 1, -1.987670),
+        ("2000-01-03 00:00:00", 2**31 - 1, -1.987670),
         ("2000-01-04 00:00:00", -29, -0.0412318367011),
         ("2000-01-05 00:00:00", 20000, 0.731167677815),
         ("2000-01-06 00:00:00", -290867, 1.56762092543),
@@ -1721,7 +1721,7 @@ def test_default_type_conversion(self):
 
     def test_bigint(self):
         # int64 should be converted to BigInteger, GH7433
-        df = DataFrame(data={"i64": [2 ** 62]})
+        df = DataFrame(data={"i64": [2**62]})
         assert df.to_sql("test_bigint", self.conn, index=False) == 1
         result = sql.read_sql_table("test_bigint", self.conn)
@@ -1963,7 +1963,7 @@ def test_datetime_time(self):
 
     def test_mixed_dtype_insert(self):
         # see GH6509
-        s1 = Series(2 ** 25 + 1, dtype=np.int32)
+        s1 = Series(2**25 + 1, dtype=np.int32)
         s2 = Series(0.0, dtype=np.float32)
         df = DataFrame({"s1": s1, "s2": s2})
diff --git a/pandas/tests/io/test_stata.py b/pandas/tests/io/test_stata.py
index f0fd391c2a9c4..235b285551ecc 100644
--- a/pandas/tests/io/test_stata.py
+++ b/pandas/tests/io/test_stata.py
@@ -477,9 +477,9 @@ def test_read_write_dta12(self, version):
         tm.assert_frame_equal(written_and_read_again.set_index("index"), formatted)
 
     def test_read_write_dta13(self):
-        s1 = Series(2 ** 9, dtype=np.int16)
-        s2 = Series(2 ** 17, dtype=np.int32)
-        s3 = Series(2 ** 33, dtype=np.int64)
+        s1 = Series(2**9, dtype=np.int16)
+        s2 = Series(2**17, dtype=np.int32)
+        s3 = Series(2**33, dtype=np.int64)
 
         original = DataFrame({"int16": s1, "int32": s2, "int64": s3})
         original.index.name = "index"
@@ -610,8 +610,8 @@ def test_string_no_dates(self):
     def test_large_value_conversion(self):
         s0 = Series([1, 99], dtype=np.int8)
         s1 = Series([1, 127], dtype=np.int8)
-        s2 = Series([1, 2 ** 15 - 1], dtype=np.int16)
-        s3 = Series([1, 2 ** 63 - 1], dtype=np.int64)
+        s2 = Series([1, 2**15 - 1], dtype=np.int16)
+        s3 = Series([1, 2**63 - 1], dtype=np.int64)
 
         original = DataFrame({"s0": s0, "s1": s1, "s2": s2, "s3": s3})
         original.index.name = "index"
         with tm.ensure_clean() as path:
@@ -699,10 +699,10 @@ def test_bool_uint(self, byteorder, version):
         s0 = Series([0, 1, True], dtype=np.bool_)
         s1 = Series([0, 1, 100], dtype=np.uint8)
         s2 = Series([0, 1, 255], dtype=np.uint8)
-        s3 = Series([0, 1, 2 ** 15 - 100], dtype=np.uint16)
-        s4 = Series([0, 1, 2 ** 16 - 1], dtype=np.uint16)
-        s5 = Series([0, 1, 2 ** 31 - 100], dtype=np.uint32)
-        s6 = Series([0, 1, 2 ** 32 - 1], dtype=np.uint32)
+        s3 = Series([0, 1, 2**15 - 100], dtype=np.uint16)
+        s4 = Series([0, 1, 2**16 - 1], dtype=np.uint16)
+        s5 = Series([0, 1, 2**31 - 100], dtype=np.uint32)
+        s6 = Series([0, 1, 2**32 - 1], dtype=np.uint32)
 
         original = DataFrame(
             {"s0": s0, "s1": s1, "s2": s2, "s3": s3, "s4": s4, "s5": s5, "s6": s6}
@@ -1999,7 +1999,7 @@ def test_iterator_value_labels():
 def test_precision_loss():
     df = DataFrame(
-        [[sum(2 ** i for i in range(60)), sum(2 ** i for i in range(52))]],
+        [[sum(2**i for i in range(60)), sum(2**i for i in range(52))]],
         columns=["big", "little"],
     )
     with tm.ensure_clean() as path:
diff --git a/pandas/tests/plotting/test_converter.py b/pandas/tests/plotting/test_converter.py
index d595f4b453b28..8243b6b73f8d8 100644
--- a/pandas/tests/plotting/test_converter.py
+++ b/pandas/tests/plotting/test_converter.py
@@ -213,7 +213,7 @@ def test_conversion(self):
         assert rs[1] == xp
 
     def test_conversion_float(self):
-        rtol = 0.5 * 10 ** -9
+        rtol = 0.5 * 10**-9
 
         rs = self.dtc.convert(Timestamp("2012-1-1 01:02:03", tz="UTC"), None, None)
         xp = converter.dates.date2num(Timestamp("2012-1-1 01:02:03", tz="UTC"))
@@ -261,7 +261,7 @@ def test_time_formatter(self, time, format_expected):
         assert result == format_expected
 
     def test_dateindex_conversion(self):
-        rtol = 10 ** -9
+        rtol = 10**-9
 
         for freq in ("B", "L", "S"):
             dateindex = tm.makeDateIndex(k=10, freq=freq)
diff --git a/pandas/tests/reductions/test_reductions.py b/pandas/tests/reductions/test_reductions.py
index 70e739d1440d6..445f3aa7defec 100644
--- a/pandas/tests/reductions/test_reductions.py
+++ b/pandas/tests/reductions/test_reductions.py
@@ -210,8 +210,8 @@ class TestIndexReductions:
         [
             (0, 400, 3),
             (500, 0, -6),
-            (-(10 ** 6), 10 ** 6, 4),
-            (10 ** 6, -(10 ** 6), -4),
+            (-(10**6), 10**6, 4),
+            (10**6, -(10**6), -4),
             (0, 10, 20),
         ],
     )
@@ -1451,16 +1451,16 @@ def test_mode_category(self, dropna, expected1, expected2, expected3):
 
     @pytest.mark.parametrize(
         "dropna, expected1, expected2",
-        [(True, [2 ** 63], [1, 2 ** 63]), (False, [2 ** 63], [1, 2 ** 63])],
+        [(True, [2**63], [1, 2**63]), (False, [2**63], [1, 2**63])],
     )
     def test_mode_intoverflow(self, dropna, expected1, expected2):
         # Test for uint64 overflow.
-        s = Series([1, 2 ** 63, 2 ** 63], dtype=np.uint64)
+        s = Series([1, 2**63, 2**63], dtype=np.uint64)
         result = s.mode(dropna)
         expected1 = Series(expected1, dtype=np.uint64)
         tm.assert_series_equal(result, expected1)
 
-        s = Series([1, 2 ** 63], dtype=np.uint64)
+        s = Series([1, 2**63], dtype=np.uint64)
         result = s.mode(dropna)
         expected2 = Series(expected2, dtype=np.uint64)
         tm.assert_series_equal(result, expected2)
diff --git a/pandas/tests/reductions/test_stat_reductions.py b/pandas/tests/reductions/test_stat_reductions.py
index 2f1ae5df0d5d4..0a6c0ccc891bb 100644
--- a/pandas/tests/reductions/test_stat_reductions.py
+++ b/pandas/tests/reductions/test_stat_reductions.py
@@ -128,7 +128,7 @@ def _check_stat_op(
 
             # GH#2888
             items = [0]
-            items.extend(range(2 ** 40, 2 ** 40 + 1000))
+            items.extend(range(2**40, 2**40 + 1000))
             s = Series(items, dtype="int64")
             tm.assert_almost_equal(float(f(s)), float(alternate(s.values)))
diff --git a/pandas/tests/reshape/test_pivot.py b/pandas/tests/reshape/test_pivot.py
index a023adfb509a0..3caa54e11f8ed 100644
--- a/pandas/tests/reshape/test_pivot.py
+++ b/pandas/tests/reshape/test_pivot.py
@@ -2004,7 +2004,7 @@ def __init__(self, *args, **kwargs):
         with monkeypatch.context() as m:
             m.setattr(reshape_lib, "_Unstacker", MockUnstacker)
             df = DataFrame(
-                {"ind1": np.arange(2 ** 16), "ind2": np.arange(2 ** 16), "count": 0}
+                {"ind1": np.arange(2**16), "ind2": np.arange(2**16), "count": 0}
             )
 
             msg = "The following operation may generate"
diff --git a/pandas/tests/scalar/test_na_scalar.py b/pandas/tests/scalar/test_na_scalar.py
index 77265e8745315..88b06ec578905 100644
--- a/pandas/tests/scalar/test_na_scalar.py
+++ b/pandas/tests/scalar/test_na_scalar.py
@@ -100,7 +100,7 @@ def test_comparison_ops():
 def test_pow_special(value, asarray):
     if asarray:
         value = np.array([value])
-    result = NA ** value
+    result = NA**value
 
     if asarray:
         result = result[0]
@@ -117,7 +117,7 @@ def test_rpow_special(value, asarray):
     if asarray:
         value = np.array([value])
-    result = value ** NA
+    result = value**NA
 
     if asarray:
         result = result[0]
@@ -133,7 +133,7 @@ def test_rpow_minus_one(value, asarray):
     if asarray:
         value = np.array([value])
-    result = value ** NA
+    result = value**NA
 
     if asarray:
         result = result[0]
diff --git a/pandas/tests/scalar/timedelta/test_arithmetic.py b/pandas/tests/scalar/timedelta/test_arithmetic.py
index 36ad24f2e1ef2..ff1f6ad42feb3 100644
--- a/pandas/tests/scalar/timedelta/test_arithmetic.py
+++ b/pandas/tests/scalar/timedelta/test_arithmetic.py
@@ -427,9 +427,9 @@ def test_td_div_td64_non_nano(self):
         # truediv
         td = Timedelta("1 days 2 hours 3 ns")
         result = td / np.timedelta64(1, "D")
-        assert result == td.value / (86400 * 10 ** 9)
+        assert result == td.value / (86400 * 10**9)
         result = td / np.timedelta64(1, "s")
-        assert result == td.value / 10 ** 9
+        assert result == td.value / 10**9
         result = td / np.timedelta64(1, "ns")
         assert result == td.value
@@ -694,7 +694,7 @@ def test_td_rfloordiv_timedeltalike_array(self):
     def test_td_rfloordiv_intarray(self):
         # deprecated GH#19761, enforced GH#29797
-        ints = np.array([1349654400, 1349740800, 1349827200, 1349913600]) * 10 ** 9
+        ints = np.array([1349654400, 1349740800, 1349827200, 1349913600]) * 10**9
 
         msg = "Invalid dtype"
         with pytest.raises(TypeError, match=msg):
diff --git a/pandas/tests/scalar/timestamp/test_arithmetic.py b/pandas/tests/scalar/timestamp/test_arithmetic.py
index 2ff1de51c33ad..b46962fb82896 100644
--- a/pandas/tests/scalar/timestamp/test_arithmetic.py
+++ b/pandas/tests/scalar/timestamp/test_arithmetic.py
@@ -61,7 +61,7 @@ def test_overflow_offset_raises(self):
         # used to crash, so check for proper overflow exception
 
         stamp = Timestamp("2000/1/1")
-        offset_overflow = to_offset("D") * 100 ** 5
+        offset_overflow = to_offset("D") * 100**5
 
         with pytest.raises(OverflowError, match=lmsg):
             stamp + offset_overflow
diff --git a/pandas/tests/scalar/timestamp/test_constructors.py b/pandas/tests/scalar/timestamp/test_constructors.py
index b3deb1a57e5c3..25f3f9c98023b 100644
--- a/pandas/tests/scalar/timestamp/test_constructors.py
+++ b/pandas/tests/scalar/timestamp/test_constructors.py
@@ -292,20 +292,17 @@ def test_constructor_keyword(self):
             Timestamp("20151112")
         )
 
-        assert (
-            repr(
-                Timestamp(
-                    year=2015,
-                    month=11,
-                    day=12,
-                    hour=1,
-                    minute=2,
-                    second=3,
-                    microsecond=999999,
-                )
+        assert repr(
+            Timestamp(
+                year=2015,
+                month=11,
+                day=12,
+                hour=1,
+                minute=2,
+                second=3,
+                microsecond=999999,
             )
-            == repr(Timestamp("2015-11-12 01:02:03.999999"))
-        )
+        ) == repr(Timestamp("2015-11-12 01:02:03.999999"))
 
     @pytest.mark.filterwarnings("ignore:Timestamp.freq is:FutureWarning")
     @pytest.mark.filterwarnings("ignore:The 'freq' argument:FutureWarning")
diff --git a/pandas/tests/series/indexing/test_setitem.py b/pandas/tests/series/indexing/test_setitem.py
index fb07b28c5a54f..c3f82ff3d8285 100644
--- a/pandas/tests/series/indexing/test_setitem.py
+++ b/pandas/tests/series/indexing/test_setitem.py
@@ -1058,7 +1058,7 @@ def inplace(self):
     [
         np.array([2.0, 3.0]),
         np.array([2.5, 3.5]),
-        np.array([2 ** 65, 2 ** 65 + 1], dtype=np.float64),  # all ints, but can't cast
+        np.array([2**65, 2**65 + 1], dtype=np.float64),  # all ints, but can't cast
     ],
 )
 class TestSetitemFloatNDarrayIntoIntegerSeries(SetitemCastingEquivalents):
@@ -1117,7 +1117,7 @@ def test_mask_key(self, obj, key, expected, val, indexer_sli, request):
         super().test_mask_key(obj, key, expected, val, indexer_sli)
 
 
-@pytest.mark.parametrize("val", [2 ** 33 + 1.0, 2 ** 33 + 1.1, 2 ** 62])
+@pytest.mark.parametrize("val", [2**33 + 1.0, 2**33 + 1.1, 2**62])
 class TestSmallIntegerSetitemUpcast(SetitemCastingEquivalents):
     # https://github.com/pandas-dev/pandas/issues/39584#issuecomment-941212124
     @pytest.fixture
@@ -1134,9 +1134,9 @@ def inplace(self):
 
     @pytest.fixture
     def expected(self, val):
-        if val == 2 ** 62:
+        if val == 2**62:
             return Series([val, 2, 3], dtype="i8")
-        elif val == 2 ** 33 + 1.1:
+        elif val == 2**33 + 1.1:
             return Series([val, 2, 3], dtype="f8")
         else:
             return Series([val, 2, 3], dtype="i8")
diff --git a/pandas/tests/series/methods/test_astype.py b/pandas/tests/series/methods/test_astype.py
index 8197722687e78..01140f29aa5d9 100644
--- a/pandas/tests/series/methods/test_astype.py
+++ b/pandas/tests/series/methods/test_astype.py
@@ -135,8 +135,8 @@ def test_astype_generic_timestamp_no_frequency(self, dtype, request):
             request.node.add_marker(mark)
 
         msg = (
-            fr"The '{dtype.__name__}' dtype has no unit\. "
-            fr"Please pass in '{dtype.__name__}\[ns\]' instead."
+            rf"The '{dtype.__name__}' dtype has no unit\. "
+            rf"Please pass in '{dtype.__name__}\[ns\]' instead."
         )
         with pytest.raises(ValueError, match=msg):
             ser.astype(dtype)
diff --git a/pandas/tests/series/methods/test_fillna.py b/pandas/tests/series/methods/test_fillna.py
index 9617565cab0b4..d3667c9343f53 100644
--- a/pandas/tests/series/methods/test_fillna.py
+++ b/pandas/tests/series/methods/test_fillna.py
@@ -277,7 +277,7 @@ def test_timedelta_fillna(self, frame_or_series):
         expected = frame_or_series(expected)
         tm.assert_equal(result, expected)
 
-        result = obj.fillna(np.timedelta64(10 ** 9))
+        result = obj.fillna(np.timedelta64(10**9))
         expected = Series(
             [
                 timedelta(seconds=1),
diff --git a/pandas/tests/series/methods/test_isin.py b/pandas/tests/series/methods/test_isin.py
index f769c08a512ef..2288dfb5c2409 100644
--- a/pandas/tests/series/methods/test_isin.py
+++ b/pandas/tests/series/methods/test_isin.py
@@ -22,7 +22,7 @@ def test_isin(self):
        # This specific issue has to have a series over 1e6 in len, but the
         # comparison array (in_list) must be large enough so that numpy doesn't
         # do a manual masking trick that will avoid this issue altogether
-        s = Series(list("abcdefghijk" * 10 ** 5))
+        s = Series(list("abcdefghijk" * 10**5))
         # If numpy doesn't do the manual comparison/mask, these
         # unorderable mixed types are what cause the exception in numpy
         in_list = [-1, "a", "b", "G", "Y", "Z", "E", "K", "E", "S", "I", "R", "R"] * 6
diff --git a/pandas/tests/series/methods/test_rank.py b/pandas/tests/series/methods/test_rank.py
index b49ffda4a7a91..4ab99673dfe46 100644
--- a/pandas/tests/series/methods/test_rank.py
+++ b/pandas/tests/series/methods/test_rank.py
@@ -479,6 +479,6 @@ def test_rank_first_pct(dtype, ser, exp):
 @pytest.mark.high_memory
 def test_pct_max_many_rows():
     # GH 18271
-    s = Series(np.arange(2 ** 24 + 1))
+    s = Series(np.arange(2**24 + 1))
     result = s.rank(pct=True).max()
     assert result == 1
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index cab5fd456d69f..293ae86a3965f 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -1585,28 +1585,28 @@ def test_constructor_range_dtype(self, dtype):
 
     def test_constructor_range_overflows(self):
         # GH#30173 range objects that overflow int64
-        rng = range(2 ** 63, 2 ** 63 + 4)
+        rng = range(2**63, 2**63 + 4)
         ser = Series(rng)
         expected = Series(list(rng))
         tm.assert_series_equal(ser, expected)
         assert list(ser) == list(rng)
         assert ser.dtype == np.uint64
 
-        rng2 = range(2 ** 63 + 4, 2 ** 63, -1)
+        rng2 = range(2**63 + 4, 2**63, -1)
         ser2 = Series(rng2)
         expected2 = Series(list(rng2))
         tm.assert_series_equal(ser2, expected2)
         assert list(ser2) == list(rng2)
         assert ser2.dtype == np.uint64
 
-        rng3 = range(-(2 ** 63), -(2 ** 63) - 4, -1)
+        rng3 = range(-(2**63), -(2**63) - 4, -1)
         ser3 = Series(rng3)
         expected3 = Series(list(rng3))
         tm.assert_series_equal(ser3, expected3)
         assert list(ser3) == list(rng3)
         assert ser3.dtype == object
 
-        rng4 = range(2 ** 73, 2 ** 73 + 4)
+        rng4 = range(2**73, 2**73 + 4)
         ser4 = Series(rng4)
         expected4 = Series(list(rng4))
         tm.assert_series_equal(ser4, expected4)
diff --git a/pandas/tests/series/test_reductions.py b/pandas/tests/series/test_reductions.py
index 8fb51af70f3a0..d2a3de7577353 100644
--- a/pandas/tests/series/test_reductions.py
+++ b/pandas/tests/series/test_reductions.py
@@ -94,7 +94,7 @@ def test_validate_any_all_out_keepdims_raises(kwargs, func):
     msg = (
         f"the '{param}' parameter is not "
         "supported in the pandas "
-        fr"implementation of {name}\(\)"
+        rf"implementation of {name}\(\)"
     )
     with pytest.raises(ValueError, match=msg):
         func(ser, **kwargs)
diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index 0007a966840e8..5cae600bbd8b8 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -266,20 +266,20 @@ def test_float64_factorize(self, writable):
         tm.assert_numpy_array_equal(uniques, expected_uniques)
 
     def test_uint64_factorize(self, writable):
-        data = np.array([2 ** 64 - 1, 1, 2 ** 64 - 1], dtype=np.uint64)
+        data = np.array([2**64 - 1, 1, 2**64 - 1], dtype=np.uint64)
         data.setflags(write=writable)
         expected_codes = np.array([0, 1, 0], dtype=np.intp)
-        expected_uniques = np.array([2 ** 64 - 1, 1], dtype=np.uint64)
+        expected_uniques = np.array([2**64 - 1, 1], dtype=np.uint64)
 
         codes, uniques = algos.factorize(data)
         tm.assert_numpy_array_equal(codes, expected_codes)
         tm.assert_numpy_array_equal(uniques, expected_uniques)
 
     def test_int64_factorize(self, writable):
-        data = np.array([2 ** 63 - 1, -(2 ** 63), 2 ** 63 - 1], dtype=np.int64)
+        data = np.array([2**63 - 1, -(2**63), 2**63 - 1], dtype=np.int64)
         data.setflags(write=writable)
         expected_codes = np.array([0, 1, 0], dtype=np.intp)
-        expected_uniques = np.array([2 ** 63 - 1, -(2 ** 63)], dtype=np.int64)
+        expected_uniques = np.array([2**63 - 1, -(2**63)], dtype=np.int64)
 
         codes, uniques = algos.factorize(data)
         tm.assert_numpy_array_equal(codes, expected_codes)
@@ -354,7 +354,7 @@ def test_factorize_rangeindex_decreasing(self, sort):
     def test_deprecate_order(self):
         # gh 19727 - check warning is raised for deprecated keyword, order.
         # Test not valid once order keyword is removed.
- data = np.array([2 ** 63, 1, 2 ** 63], dtype=np.uint64) + data = np.array([2**63, 1, 2**63], dtype=np.uint64) with pytest.raises(TypeError, match="got an unexpected keyword"): algos.factorize(data, order=True) with tm.assert_produces_warning(False): @@ -364,7 +364,7 @@ def test_deprecate_order(self): "data", [ np.array([0, 1, 0], dtype="u8"), - np.array([-(2 ** 63), 1, -(2 ** 63)], dtype="i8"), + np.array([-(2**63), 1, -(2**63)], dtype="i8"), np.array(["__nan__", "foo", "__nan__"], dtype="object"), ], ) @@ -381,8 +381,8 @@ def test_parametrized_factorize_na_value_default(self, data): [ (np.array([0, 1, 0, 2], dtype="u8"), 0), (np.array([1, 0, 1, 2], dtype="u8"), 1), - (np.array([-(2 ** 63), 1, -(2 ** 63), 0], dtype="i8"), -(2 ** 63)), - (np.array([1, -(2 ** 63), 1, 0], dtype="i8"), 1), + (np.array([-(2**63), 1, -(2**63), 0], dtype="i8"), -(2**63)), + (np.array([1, -(2**63), 1, 0], dtype="i8"), 1), (np.array(["a", "", "a", "b"], dtype=object), "a"), (np.array([(), ("a", 1), (), ("a", 2)], dtype=object), ()), (np.array([("a", 1), (), ("a", 1), ("a", 2)], dtype=object), ("a", 1)), @@ -601,8 +601,8 @@ def test_timedelta64_dtype_array_returned(self): assert result.dtype == expected.dtype def test_uint64_overflow(self): - s = Series([1, 2, 2 ** 63, 2 ** 63], dtype=np.uint64) - exp = np.array([1, 2, 2 ** 63], dtype=np.uint64) + s = Series([1, 2, 2**63, 2**63], dtype=np.uint64) + exp = np.array([1, 2, 2**63], dtype=np.uint64) tm.assert_numpy_array_equal(algos.unique(s), exp) def test_nan_in_object_array(self): @@ -1280,14 +1280,14 @@ def test_value_counts_normalized(self, dtype): tm.assert_series_equal(result, expected) def test_value_counts_uint64(self): - arr = np.array([2 ** 63], dtype=np.uint64) - expected = Series([1], index=[2 ** 63]) + arr = np.array([2**63], dtype=np.uint64) + expected = Series([1], index=[2**63]) result = algos.value_counts(arr) tm.assert_series_equal(result, expected) - arr = np.array([-1, 2 ** 63], dtype=object) - expected = Series([1, 1], 
index=[-1, 2 ** 63]) + arr = np.array([-1, 2**63], dtype=object) + expected = Series([1, 1], index=[-1, 2**63]) result = algos.value_counts(arr) tm.assert_series_equal(result, expected) @@ -1354,7 +1354,7 @@ def test_duplicated_with_nas(self): ), np.array(["a", "b", "a", "e", "c", "b", "d", "a", "e", "f"], dtype=object), np.array( - [1, 2 ** 63, 1, 3 ** 5, 10, 2 ** 63, 39, 1, 3 ** 5, 7], dtype=np.uint64 + [1, 2**63, 1, 3**5, 10, 2**63, 39, 1, 3**5, 7], dtype=np.uint64 ), ], ) @@ -1572,7 +1572,7 @@ def test_add_different_nans(self): assert len(m) == 1 # NAN1 and NAN2 are equivalent def test_lookup_overflow(self, writable): - xs = np.array([1, 2, 2 ** 63], dtype=np.uint64) + xs = np.array([1, 2, 2**63], dtype=np.uint64) # GH 21688 ensure we can deal with readonly memory views xs.setflags(write=writable) m = ht.UInt64HashTable() @@ -1580,8 +1580,8 @@ def test_lookup_overflow(self, writable): tm.assert_numpy_array_equal(m.lookup(xs), np.arange(len(xs), dtype=np.intp)) def test_get_unique(self): - s = Series([1, 2, 2 ** 63, 2 ** 63], dtype=np.uint64) - exp = np.array([1, 2, 2 ** 63], dtype=np.uint64) + s = Series([1, 2, 2**63, 2**63], dtype=np.uint64) + exp = np.array([1, 2, 2**63], dtype=np.uint64) tm.assert_numpy_array_equal(s.unique(), exp) @pytest.mark.parametrize("nvals", [0, 10]) # resizing to 0 is special case @@ -1777,7 +1777,7 @@ def test_basic(self, writable, dtype): def test_uint64_overflow(self, dtype): exp = np.array([1, 2], dtype=np.float64) - s = Series([1, 2 ** 63], dtype=dtype) + s = Series([1, 2**63], dtype=dtype) tm.assert_numpy_array_equal(algos.rank(s), exp) def test_too_many_ndims(self): @@ -1791,11 +1791,11 @@ def test_too_many_ndims(self): @pytest.mark.high_memory def test_pct_max_many_rows(self): # GH 18271 - values = np.arange(2 ** 24 + 1) + values = np.arange(2**24 + 1) result = algos.rank(values, pct=True).max() assert result == 1 - values = np.arange(2 ** 25 + 2).reshape(2 ** 24 + 1, 2) + values = np.arange(2**25 + 2).reshape(2**24 + 1, 2) 
result = algos.rank(values, pct=True).max() assert result == 1 @@ -2361,13 +2361,13 @@ def test_mixed_dtype(self): tm.assert_series_equal(ser.mode(), exp) def test_uint64_overflow(self): - exp = Series([2 ** 63], dtype=np.uint64) - ser = Series([1, 2 ** 63, 2 ** 63], dtype=np.uint64) + exp = Series([2**63], dtype=np.uint64) + ser = Series([1, 2**63, 2**63], dtype=np.uint64) tm.assert_numpy_array_equal(algos.mode(ser.values), exp.values) tm.assert_series_equal(ser.mode(), exp) - exp = Series([1, 2 ** 63], dtype=np.uint64) - ser = Series([1, 2 ** 63], dtype=np.uint64) + exp = Series([1, 2**63], dtype=np.uint64) + ser = Series([1, 2**63], dtype=np.uint64) tm.assert_numpy_array_equal(algos.mode(ser.values), exp.values) tm.assert_series_equal(ser.mode(), exp) diff --git a/pandas/tests/test_common.py b/pandas/tests/test_common.py index 0850ba66bbdbd..cbd11cd6d8685 100644 --- a/pandas/tests/test_common.py +++ b/pandas/tests/test_common.py @@ -64,7 +64,7 @@ def test_random_state(): # check array-like # GH32503 - state_arr_like = npr.randint(0, 2 ** 31, size=624, dtype="uint32") + state_arr_like = npr.randint(0, 2**31, size=624, dtype="uint32") assert ( com.random_state(state_arr_like).uniform() == npr.RandomState(state_arr_like).uniform() diff --git a/pandas/tests/test_nanops.py b/pandas/tests/test_nanops.py index ee451d0288581..44977b9834b80 100644 --- a/pandas/tests/test_nanops.py +++ b/pandas/tests/test_nanops.py @@ -305,7 +305,7 @@ def test_nanmean_overflow(self): # In the previous implementation mean can overflow for int dtypes, it # is now consistent with numpy - for a in [2 ** 55, -(2 ** 55), 20150515061816532]: + for a in [2**55, -(2**55), 20150515061816532]: s = Series(a, index=range(500), dtype=np.int64) result = s.mean() np_result = s.values.mean() @@ -789,7 +789,7 @@ class TestNanvarFixedValues: def setup_method(self, method): # Samples from a normal distribution. 
self.variance = variance = 3.0 - self.samples = self.prng.normal(scale=variance ** 0.5, size=100000) + self.samples = self.prng.normal(scale=variance**0.5, size=100000) def test_nanvar_all_finite(self): samples = self.samples @@ -811,7 +811,7 @@ def test_nanstd_nans(self): samples[::2] = self.samples actual_std = nanops.nanstd(samples, skipna=True) - tm.assert_almost_equal(actual_std, self.variance ** 0.5, rtol=1e-2) + tm.assert_almost_equal(actual_std, self.variance**0.5, rtol=1e-2) actual_std = nanops.nanvar(samples, skipna=False) tm.assert_almost_equal(actual_std, np.nan, rtol=1e-2) diff --git a/pandas/tests/tools/test_to_datetime.py b/pandas/tests/tools/test_to_datetime.py index 0a9d422c45036..fa95693369416 100644 --- a/pandas/tests/tools/test_to_datetime.py +++ b/pandas/tests/tools/test_to_datetime.py @@ -982,7 +982,7 @@ def test_datetime_invalid_index(self, values, format, infer): @pytest.mark.parametrize("constructor", [list, tuple, np.array, Index, deque]) def test_to_datetime_cache(self, utc, format, constructor): date = "20130101 00:00:00" - test_dates = [date] * 10 ** 5 + test_dates = [date] * 10**5 data = constructor(test_dates) result = to_datetime(data, utc=utc, format=format, cache=True) @@ -1000,7 +1000,7 @@ def test_to_datetime_from_deque(self): @pytest.mark.parametrize("format", ["%Y%m%d %H:%M:%S", None]) def test_to_datetime_cache_series(self, utc, format): date = "20130101 00:00:00" - test_dates = [date] * 10 ** 5 + test_dates = [date] * 10**5 data = Series(test_dates) result = to_datetime(data, utc=utc, format=format, cache=True) expected = to_datetime(data, utc=utc, format=format, cache=False) @@ -2624,7 +2624,7 @@ def test_no_slicing_errors_in_should_cache(self, listlike): def test_nullable_integer_to_datetime(): # Test for #30050 - ser = Series([1, 2, None, 2 ** 61, None]) + ser = Series([1, 2, None, 2**61, None]) ser = ser.astype("Int64") ser_copy = ser.copy() diff --git a/pandas/tests/tools/test_to_timedelta.py 
b/pandas/tests/tools/test_to_timedelta.py index ec6fccd42dbc9..cf512463d0473 100644 --- a/pandas/tests/tools/test_to_timedelta.py +++ b/pandas/tests/tools/test_to_timedelta.py @@ -217,7 +217,7 @@ def test_to_timedelta_float(self): # https://github.com/pandas-dev/pandas/issues/25077 arr = np.arange(0, 1, 1e-6)[-10:] result = to_timedelta(arr, unit="s") - expected_asi8 = np.arange(999990000, 10 ** 9, 1000, dtype="int64") + expected_asi8 = np.arange(999990000, 10**9, 1000, dtype="int64") tm.assert_numpy_array_equal(result.asi8, expected_asi8) def test_to_timedelta_coerce_strings_unit(self): diff --git a/pandas/tests/tslibs/test_fields.py b/pandas/tests/tslibs/test_fields.py index 9d0b3ff4fca80..9e6464f7727bd 100644 --- a/pandas/tests/tslibs/test_fields.py +++ b/pandas/tests/tslibs/test_fields.py @@ -8,7 +8,7 @@ @pytest.fixture def dtindex(): - dtindex = np.arange(5, dtype=np.int64) * 10 ** 9 * 3600 * 24 * 32 + dtindex = np.arange(5, dtype=np.int64) * 10**9 * 3600 * 24 * 32 dtindex.flags.writeable = False return dtindex diff --git a/pandas/tests/util/test_assert_extension_array_equal.py b/pandas/tests/util/test_assert_extension_array_equal.py index 545f0dcbf695f..dec10e5b76894 100644 --- a/pandas/tests/util/test_assert_extension_array_equal.py +++ b/pandas/tests/util/test_assert_extension_array_equal.py @@ -35,7 +35,7 @@ def test_assert_extension_array_equal_not_exact(kwargs): @pytest.mark.parametrize("decimals", range(10)) def test_assert_extension_array_equal_less_precise(decimals): - rtol = 0.5 * 10 ** -decimals + rtol = 0.5 * 10**-decimals arr1 = SparseArray([0.5, 0.123456]) arr2 = SparseArray([0.5, 0.123457]) diff --git a/pandas/tests/util/test_assert_series_equal.py b/pandas/tests/util/test_assert_series_equal.py index 150e7e8f3d738..93a2e4b83e760 100644 --- a/pandas/tests/util/test_assert_series_equal.py +++ b/pandas/tests/util/test_assert_series_equal.py @@ -110,7 +110,7 @@ def test_series_not_equal_metadata_mismatch(kwargs): @pytest.mark.parametrize("dtype", 
["float32", "float64", "Float32"]) @pytest.mark.parametrize("decimals", [0, 1, 2, 3, 5, 10]) def test_less_precise(data1, data2, dtype, decimals): - rtol = 10 ** -decimals + rtol = 10**-decimals s1 = Series([data1], dtype=dtype) s2 = Series([data2], dtype=dtype) diff --git a/pandas/tests/util/test_validate_args.py b/pandas/tests/util/test_validate_args.py index db532480efe07..77e6b01ba1180 100644 --- a/pandas/tests/util/test_validate_args.py +++ b/pandas/tests/util/test_validate_args.py @@ -20,8 +20,8 @@ def test_bad_arg_length_max_value_single(): max_length = len(compat_args) + min_fname_arg_count actual_length = len(args) + min_fname_arg_count msg = ( - fr"{_fname}\(\) takes at most {max_length} " - fr"argument \({actual_length} given\)" + rf"{_fname}\(\) takes at most {max_length} " + rf"argument \({actual_length} given\)" ) with pytest.raises(TypeError, match=msg): @@ -36,8 +36,8 @@ def test_bad_arg_length_max_value_multiple(): max_length = len(compat_args) + min_fname_arg_count actual_length = len(args) + min_fname_arg_count msg = ( - fr"{_fname}\(\) takes at most {max_length} " - fr"arguments \({actual_length} given\)" + rf"{_fname}\(\) takes at most {max_length} " + rf"arguments \({actual_length} given\)" ) with pytest.raises(TypeError, match=msg): @@ -49,7 +49,7 @@ def test_not_all_defaults(i): bad_arg = "foo" msg = ( f"the '{bad_arg}' parameter is not supported " - fr"in the pandas implementation of {_fname}\(\)" + rf"in the pandas implementation of {_fname}\(\)" ) compat_args = {"foo": 2, "bar": -1, "baz": 3} diff --git a/pandas/tests/util/test_validate_args_and_kwargs.py b/pandas/tests/util/test_validate_args_and_kwargs.py index 941ba86c61319..54d94d2194909 100644 --- a/pandas/tests/util/test_validate_args_and_kwargs.py +++ b/pandas/tests/util/test_validate_args_and_kwargs.py @@ -15,8 +15,8 @@ def test_invalid_total_length_max_length_one(): actual_length = len(kwargs) + len(args) + min_fname_arg_count msg = ( - fr"{_fname}\(\) takes at most {max_length} 
" - fr"argument \({actual_length} given\)" + rf"{_fname}\(\) takes at most {max_length} " + rf"argument \({actual_length} given\)" ) with pytest.raises(TypeError, match=msg): @@ -33,8 +33,8 @@ def test_invalid_total_length_max_length_multiple(): actual_length = len(kwargs) + len(args) + min_fname_arg_count msg = ( - fr"{_fname}\(\) takes at most {max_length} " - fr"arguments \({actual_length} given\)" + rf"{_fname}\(\) takes at most {max_length} " + rf"arguments \({actual_length} given\)" ) with pytest.raises(TypeError, match=msg): @@ -49,8 +49,8 @@ def test_missing_args_or_kwargs(args, kwargs): compat_args = {"foo": -5, bad_arg: 1} msg = ( - fr"the '{bad_arg}' parameter is not supported " - fr"in the pandas implementation of {_fname}\(\)" + rf"the '{bad_arg}' parameter is not supported " + rf"in the pandas implementation of {_fname}\(\)" ) with pytest.raises(ValueError, match=msg): @@ -64,7 +64,7 @@ def test_duplicate_argument(): kwargs = {"foo": None, "bar": None} args = (None,) # duplicate value for "foo" - msg = fr"{_fname}\(\) got multiple values for keyword argument 'foo'" + msg = rf"{_fname}\(\) got multiple values for keyword argument 'foo'" with pytest.raises(TypeError, match=msg): validate_args_and_kwargs(_fname, args, kwargs, min_fname_arg_count, compat_args) diff --git a/pandas/tests/util/test_validate_kwargs.py b/pandas/tests/util/test_validate_kwargs.py index 0e271ef42ca93..de49cdd5e247d 100644 --- a/pandas/tests/util/test_validate_kwargs.py +++ b/pandas/tests/util/test_validate_kwargs.py @@ -15,7 +15,7 @@ def test_bad_kwarg(): compat_args = {good_arg: "foo", bad_arg + "o": "bar"} kwargs = {good_arg: "foo", bad_arg: "bar"} - msg = fr"{_fname}\(\) got an unexpected keyword argument '{bad_arg}'" + msg = rf"{_fname}\(\) got an unexpected keyword argument '{bad_arg}'" with pytest.raises(TypeError, match=msg): validate_kwargs(_fname, kwargs, compat_args) @@ -25,8 +25,8 @@ def test_bad_kwarg(): def test_not_all_none(i): bad_arg = "foo" msg = ( - fr"the 
'{bad_arg}' parameter is not supported " - fr"in the pandas implementation of {_fname}\(\)" + rf"the '{bad_arg}' parameter is not supported " + rf"in the pandas implementation of {_fname}\(\)" ) compat_args = {"foo": 1, "bar": "s", "baz": None} diff --git a/requirements-dev.txt b/requirements-dev.txt index 6fd3ac53c50cb..c4f6bb30c59ec 100644 --- a/requirements-dev.txt +++ b/requirements-dev.txt @@ -6,7 +6,7 @@ python-dateutil>=2.8.1 pytz asv < 0.5.0 cython>=0.29.24 -black==21.5b2 +black==22.3.0 cpplint flake8==4.0.1 flake8-bugbear==21.3.2
replacement for #46563
https://api.github.com/repos/pandas-dev/pandas/pulls/46576
2022-03-30T18:53:30Z
2022-03-30T20:42:10Z
2022-03-30T20:42:10Z
2022-03-30T20:42:11Z
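The bulk of the diff in this record is a mechanical restyle driven by the Black upgrade pinned at its end (`black==21.5b2` → `black==22.3.0`): Black 22 hugs the `**` operator for simple operands (`2 ** 63` → `2**63`) and normalizes raw f-string prefixes from `fr"..."` to `rf"..."`. Both rewrites are purely cosmetic, which can be sanity-checked directly; the uint64 boundary value below is one that recurs throughout the diff:

```python
import numpy as np

# Black 22 removes the spaces around ** for simple operands.
# The spacing is cosmetic; both spellings parse to the same integer.
assert 2 ** 63 == 2**63 == 9223372036854775808

# fr"..." and rf"..." build the same raw, formatted string:
# string-prefix order is irrelevant to the Python grammar.
name = "astype"
assert fr"implementation of {name}\(\)" == rf"implementation of {name}\(\)"

# uint64 boundary value used repeatedly in the factorize/value_counts hunks
assert np.array([2**64 - 1], dtype=np.uint64)[0] == 18446744073709551615
```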
Backport of CI/Build related PRs on 1.4.x (2)
diff --git a/.github/actions/build_pandas/action.yml b/.github/actions/build_pandas/action.yml index 2e4bfea165316..e916d5bfde5fb 100644 --- a/.github/actions/build_pandas/action.yml +++ b/.github/actions/build_pandas/action.yml @@ -8,10 +8,10 @@ runs: run: | conda info conda list - shell: bash -l {0} + shell: bash -el {0} - name: Build Pandas run: | python setup.py build_ext -j 2 python -m pip install -e . --no-build-isolation --no-use-pep517 --no-index - shell: bash -l {0} + shell: bash -el {0} diff --git a/.github/actions/setup/action.yml b/.github/actions/setup/action.yml index 9ef00e7a85a6f..c357f149f2c7f 100644 --- a/.github/actions/setup/action.yml +++ b/.github/actions/setup/action.yml @@ -5,8 +5,8 @@ runs: steps: - name: Setting conda path run: echo "${HOME}/miniconda3/bin" >> $GITHUB_PATH - shell: bash -l {0} + shell: bash -el {0} - name: Setup environment and build pandas run: ci/setup_env.sh - shell: bash -l {0} + shell: bash -el {0} diff --git a/.github/workflows/asv-bot.yml b/.github/workflows/asv-bot.yml index f3946aeb84a63..78c224b84d5d9 100644 --- a/.github/workflows/asv-bot.yml +++ b/.github/workflows/asv-bot.yml @@ -17,7 +17,7 @@ jobs: runs-on: ubuntu-latest defaults: run: - shell: bash -l {0} + shell: bash -el {0} concurrency: # Set concurrency to prevent abuse(full runs are ~5.5 hours !!!) 
@@ -29,19 +29,19 @@ jobs: steps: - name: Checkout - uses: actions/checkout@v2 + uses: actions/checkout@v3 with: fetch-depth: 0 - name: Cache conda - uses: actions/cache@v2 + uses: actions/cache@v3 with: path: ~/conda_pkgs_dir key: ${{ runner.os }}-conda-${{ hashFiles('${{ env.ENV_FILE }}') }} # Although asv sets up its own env, deps are still needed # during discovery process - - uses: conda-incubator/setup-miniconda@v2 + - uses: conda-incubator/setup-miniconda@v2.1.1 with: activate-environment: pandas-dev channel-priority: strict @@ -65,7 +65,7 @@ jobs: echo 'EOF' >> $GITHUB_ENV echo "REGEX=$REGEX" >> $GITHUB_ENV - - uses: actions/github-script@v5 + - uses: actions/github-script@v6 env: BENCH_OUTPUT: ${{env.BENCH_OUTPUT}} REGEX: ${{env.REGEX}} diff --git a/.github/workflows/autoupdate-pre-commit-config.yml b/.github/workflows/autoupdate-pre-commit-config.yml index 3696cba8cf2e6..d2eac234ca361 100644 --- a/.github/workflows/autoupdate-pre-commit-config.yml +++ b/.github/workflows/autoupdate-pre-commit-config.yml @@ -12,9 +12,9 @@ jobs: runs-on: ubuntu-latest steps: - name: Set up Python - uses: actions/setup-python@v2 + uses: actions/setup-python@v3 - name: Cache multiple paths - uses: actions/cache@v2 + uses: actions/cache@v3 with: path: | ~/.cache/pre-commit diff --git a/.github/workflows/code-checks.yml b/.github/workflows/code-checks.yml index 7141b02cac376..59fb81b167bd4 100644 --- a/.github/workflows/code-checks.yml +++ b/.github/workflows/code-checks.yml @@ -24,10 +24,10 @@ jobs: cancel-in-progress: true steps: - name: Checkout - uses: actions/checkout@v2 + uses: actions/checkout@v3 - name: Install Python - uses: actions/setup-python@v2 + uses: actions/setup-python@v3 with: python-version: '3.9.7' @@ -39,7 +39,7 @@ jobs: runs-on: ubuntu-latest defaults: run: - shell: bash -l {0} + shell: bash -el {0} concurrency: # https://github.community/t/concurrecy-not-work-for-push/183068/7 @@ -48,17 +48,17 @@ jobs: steps: - name: Checkout - uses: actions/checkout@v2 + 
uses: actions/checkout@v3 with: fetch-depth: 0 - name: Cache conda - uses: actions/cache@v2 + uses: actions/cache@v3 with: path: ~/conda_pkgs_dir key: ${{ runner.os }}-conda-${{ hashFiles('${{ env.ENV_FILE }}') }} - - uses: conda-incubator/setup-miniconda@v2 + - uses: conda-incubator/setup-miniconda@v2.1.1 with: mamba-version: "*" channels: conda-forge @@ -68,7 +68,7 @@ jobs: use-only-tar-bz2: true - name: Install node.js (for pyright) - uses: actions/setup-node@v2 + uses: actions/setup-node@v3 with: node-version: "16" @@ -105,7 +105,7 @@ jobs: runs-on: ubuntu-latest defaults: run: - shell: bash -l {0} + shell: bash -el {0} concurrency: # https://github.community/t/concurrecy-not-work-for-push/183068/7 @@ -114,17 +114,17 @@ jobs: steps: - name: Checkout - uses: actions/checkout@v2 + uses: actions/checkout@v3 with: fetch-depth: 0 - name: Cache conda - uses: actions/cache@v2 + uses: actions/cache@v3 with: path: ~/conda_pkgs_dir key: ${{ runner.os }}-conda-${{ hashFiles('${{ env.ENV_FILE }}') }} - - uses: conda-incubator/setup-miniconda@v2 + - uses: conda-incubator/setup-miniconda@v2.1.1 with: mamba-version: "*" channels: conda-forge @@ -151,8 +151,32 @@ jobs: if: ${{ steps.build.outcome == 'success' }} - name: Publish benchmarks artifact - uses: actions/upload-artifact@v2 + uses: actions/upload-artifact@v3 with: name: Benchmarks log path: asv_bench/benchmarks.log if: failure() + + build_docker_dev_environment: + name: Build Docker Dev Environment + runs-on: ubuntu-latest + defaults: + run: + shell: bash -el {0} + + concurrency: + # https://github.community/t/concurrecy-not-work-for-push/183068/7 + group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-build_docker_dev_environment + cancel-in-progress: true + + steps: + - name: Clean up dangling images + run: docker image prune -f + + - name: Checkout + uses: actions/checkout@v3 + with: + fetch-depth: 0 + + - name: Build image + run: docker build --pull --no-cache --tag pandas-dev-env . 
diff --git a/.github/workflows/comment_bot.yml b/.github/workflows/comment_bot.yml index 8f610fd5781ef..3824e015e8336 100644 --- a/.github/workflows/comment_bot.yml +++ b/.github/workflows/comment_bot.yml @@ -12,18 +12,18 @@ jobs: if: startsWith(github.event.comment.body, '@github-actions pre-commit') runs-on: ubuntu-latest steps: - - uses: actions/checkout@v2 + - uses: actions/checkout@v3 - uses: r-lib/actions/pr-fetch@v2 with: repo-token: ${{ secrets.GITHUB_TOKEN }} - name: Cache multiple paths - uses: actions/cache@v2 + uses: actions/cache@v3 with: path: | ~/.cache/pre-commit ~/.cache/pip key: pre-commit-dispatched-${{ runner.os }}-build - - uses: actions/setup-python@v2 + - uses: actions/setup-python@v3 with: python-version: 3.8 - name: Install-pre-commit diff --git a/.github/workflows/docbuild-and-upload.yml b/.github/workflows/docbuild-and-upload.yml index 4cce75779d750..bba9f62a0eca6 100644 --- a/.github/workflows/docbuild-and-upload.yml +++ b/.github/workflows/docbuild-and-upload.yml @@ -26,7 +26,7 @@ jobs: steps: - name: Checkout - uses: actions/checkout@v2 + uses: actions/checkout@v3 with: fetch-depth: 0 @@ -65,7 +65,7 @@ jobs: run: mv doc/build/html web/build/docs - name: Save website as an artifact - uses: actions/upload-artifact@v2 + uses: actions/upload-artifact@v3 with: name: website path: web/build diff --git a/.github/workflows/posix.yml b/.github/workflows/posix.yml index 0d5ef807a3392..ea9df610c1dff 100644 --- a/.github/workflows/posix.yml +++ b/.github/workflows/posix.yml @@ -20,7 +20,7 @@ jobs: runs-on: ubuntu-latest defaults: run: - shell: bash -l {0} + shell: bash -el {0} timeout-minutes: 120 strategy: matrix: @@ -121,12 +121,12 @@ jobs: steps: - name: Checkout - uses: actions/checkout@v2 + uses: actions/checkout@v3 with: fetch-depth: 0 - name: Cache conda - uses: actions/cache@v2 + uses: actions/cache@v3 env: CACHE_NUMBER: 0 with: @@ -138,7 +138,7 @@ jobs: # xsel for clipboard tests run: sudo apt-get update && sudo apt-get install -y 
libc6-dev-i386 xsel ${{ env.EXTRA_APT }} - - uses: conda-incubator/setup-miniconda@v2 + - uses: conda-incubator/setup-miniconda@v2.1.1 with: mamba-version: "*" channels: conda-forge @@ -153,13 +153,12 @@ jobs: if: ${{ matrix.pyarrow_version }} - name: Setup PyPy - uses: actions/setup-python@v2 + uses: actions/setup-python@v3 with: python-version: "pypy-3.8" if: ${{ env.IS_PYPY == 'true' }} - name: Setup PyPy dependencies - shell: bash run: | # TODO: re-enable cov, its slowing the tests down though pip install Cython numpy python-dateutil pytz pytest>=6.0 pytest-xdist>=1.31.0 pytest-asyncio>=0.17 hypothesis>=5.5.3 @@ -178,7 +177,7 @@ jobs: run: pushd /tmp && python -c "import pandas; pandas.show_versions();" && popd - name: Publish test results - uses: actions/upload-artifact@v2 + uses: actions/upload-artifact@v3 with: name: Test results path: test-data.xml diff --git a/.github/workflows/python-dev.yml b/.github/workflows/python-dev.yml index c287827206336..8ca4cce155e96 100644 --- a/.github/workflows/python-dev.yml +++ b/.github/workflows/python-dev.yml @@ -45,18 +45,18 @@ jobs: cancel-in-progress: true steps: - - uses: actions/checkout@v2 + - uses: actions/checkout@v3 with: fetch-depth: 0 - name: Set up Python Dev Version - uses: actions/setup-python@v2 + uses: actions/setup-python@v3 with: python-version: '3.11-dev' # TODO: GH#44980 https://github.com/pypa/setuptools/issues/2941 - name: Install dependencies - shell: bash + shell: bash -el {0} run: | python -m pip install --upgrade pip "setuptools<60.0.0" wheel pip install -i https://pypi.anaconda.org/scipy-wheels-nightly/simple numpy @@ -74,12 +74,12 @@ jobs: python -c "import pandas; pandas.show_versions();" - name: Test with pytest - shell: bash + shell: bash -el {0} run: | ci/run_tests.sh - name: Publish test results - uses: actions/upload-artifact@v2 + uses: actions/upload-artifact@v3 with: name: Test results path: test-data.xml diff --git a/.github/workflows/sdist.yml b/.github/workflows/sdist.yml index 
dd030f1aacc44..8406743889f71 100644 --- a/.github/workflows/sdist.yml +++ b/.github/workflows/sdist.yml @@ -18,7 +18,7 @@ jobs: timeout-minutes: 60 defaults: run: - shell: bash -l {0} + shell: bash -el {0} strategy: fail-fast: false @@ -30,12 +30,12 @@ jobs: cancel-in-progress: true steps: - - uses: actions/checkout@v2 + - uses: actions/checkout@v3 with: fetch-depth: 0 - name: Set up Python - uses: actions/setup-python@v2 + uses: actions/setup-python@v3 with: python-version: ${{ matrix.python-version }} @@ -52,7 +52,13 @@ jobs: pip list python setup.py sdist --formats=gztar - - uses: conda-incubator/setup-miniconda@v2 + - name: Upload sdist artifact + uses: actions/upload-artifact@v3 + with: + name: ${{matrix.python-version}}-sdist.gz + path: dist/*.gz + + - uses: conda-incubator/setup-miniconda@v2.1.1 with: activate-environment: pandas-sdist channels: conda-forge diff --git a/Dockerfile b/Dockerfile index 8887e80566772..2923cd60cc53b 100644 --- a/Dockerfile +++ b/Dockerfile @@ -1,4 +1,4 @@ -FROM quay.io/condaforge/miniforge3 +FROM quay.io/condaforge/miniforge3:4.11.0-0 # if you forked pandas, you can pass in your own GitHub username to use your fork # i.e. gh_username=myname @@ -45,4 +45,4 @@ RUN . /opt/conda/etc/profile.d/conda.sh \ && cd "$pandas_home" \ && export \ && python setup.py build_ext -j 4 \ - && python -m pip install -e . + && python -m pip install --no-build-isolation -e .
backports #45889, #46541, #46277 and #46540
https://api.github.com/repos/pandas-dev/pandas/pulls/46572
2022-03-30T12:08:59Z
2022-03-30T20:07:28Z
2022-03-30T20:07:28Z
2022-03-30T20:07:32Z
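One change that recurs throughout the workflow diff above is `shell: bash -l {0}` → `shell: bash -el {0}`. The added `-e` (errexit) makes a multi-line `run:` step abort at the first failing command instead of reporting only the exit status of the last line. A minimal demonstration via `subprocess` (assuming `bash` is available on `PATH`, as it is on the GitHub Actions runners):

```python
import subprocess

# Without -e, the failing `false` is ignored and the script "succeeds":
# the exit status is that of the final command.
r = subprocess.run(
    ["bash", "-c", "false; echo reached"], capture_output=True, text=True
)
assert r.returncode == 0 and r.stdout.strip() == "reached"

# With -e (the behavior `bash -el {0}` gives CI steps), execution stops
# at the first failure, so a broken setup line fails the whole step.
r = subprocess.run(
    ["bash", "-ec", "false; echo reached"], capture_output=True, text=True
)
assert r.returncode != 0 and r.stdout == ""
```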
CLN/TYP: assorted
diff --git a/pandas/_libs/algos.pyx b/pandas/_libs/algos.pyx index 4e3cdc0fdd14a..44234013c7a3c 100644 --- a/pandas/_libs/algos.pyx +++ b/pandas/_libs/algos.pyx @@ -750,7 +750,7 @@ def is_monotonic(ndarray[numeric_object_t, ndim=1] arr, bint timelike): n = len(arr) if n == 1: - if arr[0] != arr[0] or (timelike and <int64_t>arr[0] == NPY_NAT): + if arr[0] != arr[0] or (numeric_object_t is int64_t and timelike and arr[0] == NPY_NAT): # single value is NaN return False, False, True else: diff --git a/pandas/_libs/indexing.pyx b/pandas/_libs/indexing.pyx index 181de174c53fb..c274b28b7559a 100644 --- a/pandas/_libs/indexing.pyx +++ b/pandas/_libs/indexing.pyx @@ -2,21 +2,24 @@ cdef class NDFrameIndexerBase: """ A base class for _NDFrameIndexer for fast instantiation and attribute access. """ + cdef: + Py_ssize_t _ndim + cdef public: str name - object obj, _ndim + object obj def __init__(self, name: str, obj): self.obj = obj self.name = name - self._ndim = None + self._ndim = -1 @property def ndim(self) -> int: # Delay `ndim` instantiation until required as reading it # from `obj` isn't entirely cheap. 
ndim = self._ndim - if ndim is None: + if ndim == -1: ndim = self._ndim = self.obj.ndim if ndim > 2: raise ValueError( # pragma: no cover diff --git a/pandas/_libs/internals.pyx b/pandas/_libs/internals.pyx index ac423ef6c0ca2..d2dc60d27706d 100644 --- a/pandas/_libs/internals.pyx +++ b/pandas/_libs/internals.pyx @@ -544,7 +544,7 @@ cpdef update_blklocs_and_blknos( """ cdef: Py_ssize_t i - cnp.npy_intp length = len(blklocs) + 1 + cnp.npy_intp length = blklocs.shape[0] + 1 ndarray[intp_t, ndim=1] new_blklocs, new_blknos # equiv: new_blklocs = np.empty(length, dtype=np.intp) diff --git a/pandas/_libs/tslib.pyx b/pandas/_libs/tslib.pyx index 9dfc438319148..5c0b69d51367a 100644 --- a/pandas/_libs/tslib.pyx +++ b/pandas/_libs/tslib.pyx @@ -218,7 +218,7 @@ def array_with_unit_to_datetime( """ cdef: Py_ssize_t i, j, n=len(values) - int64_t m + int64_t mult int prec = 0 ndarray[float64_t] fvalues bint is_ignore = errors=='ignore' @@ -242,7 +242,7 @@ def array_with_unit_to_datetime( ) return result, tz - m, _ = precision_from_unit(unit) + mult, _ = precision_from_unit(unit) if is_raise: # try a quick conversion to i8/f8 @@ -254,7 +254,7 @@ def array_with_unit_to_datetime( # fill missing values by comparing to NPY_NAT mask = iresult == NPY_NAT iresult[mask] = 0 - fvalues = iresult.astype("f8") * m + fvalues = iresult.astype("f8") * mult need_to_iterate = False if not need_to_iterate: @@ -265,10 +265,10 @@ def array_with_unit_to_datetime( raise OutOfBoundsDatetime(f"cannot convert input with unit '{unit}'") if values.dtype.kind in ["i", "u"]: - result = (iresult * m).astype("M8[ns]") + result = (iresult * mult).astype("M8[ns]") elif values.dtype.kind == "f": - fresult = (values * m).astype("f8") + fresult = (values * mult).astype("f8") fresult[mask] = 0 if prec: fresult = round(fresult, prec) diff --git a/pandas/_libs/tslibs/conversion.pxd b/pandas/_libs/tslibs/conversion.pxd index 206e0171e0a55..ba03de6f0b81f 100644 --- a/pandas/_libs/tslibs/conversion.pxd +++ 
b/pandas/_libs/tslibs/conversion.pxd @@ -19,9 +19,9 @@ cdef class _TSObject: bint fold -cdef convert_to_tsobject(object ts, tzinfo tz, str unit, - bint dayfirst, bint yearfirst, - int32_t nanos=*) +cdef _TSObject convert_to_tsobject(object ts, tzinfo tz, str unit, + bint dayfirst, bint yearfirst, + int32_t nanos=*) cdef _TSObject convert_datetime_to_tsobject(datetime ts, tzinfo tz, int32_t nanos=*) diff --git a/pandas/_libs/tslibs/conversion.pyx b/pandas/_libs/tslibs/conversion.pyx index 457b27b293f11..cf85a5111e1a9 100644 --- a/pandas/_libs/tslibs/conversion.pyx +++ b/pandas/_libs/tslibs/conversion.pyx @@ -345,8 +345,8 @@ cdef class _TSObject: self.fold = 0 -cdef convert_to_tsobject(object ts, tzinfo tz, str unit, - bint dayfirst, bint yearfirst, int32_t nanos=0): +cdef _TSObject convert_to_tsobject(object ts, tzinfo tz, str unit, + bint dayfirst, bint yearfirst, int32_t nanos=0): """ Extract datetime and int64 from any of: - np.int64 (with unit providing a possible modifier) diff --git a/pandas/_libs/tslibs/dtypes.pxd b/pandas/_libs/tslibs/dtypes.pxd index f61409fc16653..8cc7bcb2a1aad 100644 --- a/pandas/_libs/tslibs/dtypes.pxd +++ b/pandas/_libs/tslibs/dtypes.pxd @@ -5,7 +5,7 @@ from pandas._libs.tslibs.np_datetime cimport NPY_DATETIMEUNIT cdef str npy_unit_to_abbrev(NPY_DATETIMEUNIT unit) cdef NPY_DATETIMEUNIT freq_group_code_to_npy_unit(int freq) nogil -cdef int64_t periods_per_day(NPY_DATETIMEUNIT reso=*) +cdef int64_t periods_per_day(NPY_DATETIMEUNIT reso=*) except? 
-1 cdef dict attrname_to_abbrevs diff --git a/pandas/_libs/tslibs/dtypes.pyx b/pandas/_libs/tslibs/dtypes.pyx index 48a06b2c01ae4..8f87bfe0b8c7c 100644 --- a/pandas/_libs/tslibs/dtypes.pyx +++ b/pandas/_libs/tslibs/dtypes.pyx @@ -307,7 +307,7 @@ cdef NPY_DATETIMEUNIT freq_group_code_to_npy_unit(int freq) nogil: return NPY_DATETIMEUNIT.NPY_FR_D -cdef int64_t periods_per_day(NPY_DATETIMEUNIT reso=NPY_DATETIMEUNIT.NPY_FR_ns): +cdef int64_t periods_per_day(NPY_DATETIMEUNIT reso=NPY_DATETIMEUNIT.NPY_FR_ns) except? -1: """ How many of the given time units fit into a single day? """ diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx index 5a7f931292e7b..ce7246f7ab19e 100644 --- a/pandas/_libs/tslibs/offsets.pyx +++ b/pandas/_libs/tslibs/offsets.pyx @@ -1910,7 +1910,7 @@ cdef class WeekOfMonthMixin(SingleConstructorOffset): shifted = shift_month(other, months, "start") to_day = self._get_offset_day(shifted) - return shift_day(shifted, to_day - shifted.day) + return _shift_day(shifted, to_day - shifted.day) def is_on_offset(self, dt: datetime) -> bool: if self.normalize and not _is_normalized(dt): @@ -3132,7 +3132,7 @@ cdef class FY5253Quarter(FY5253Mixin): qtr_lens = self.get_weeks(norm) # check that qtr_lens is consistent with self._offset addition - end = shift_day(start, days=7 * sum(qtr_lens)) + end = _shift_day(start, days=7 * sum(qtr_lens)) assert self._offset.is_on_offset(end), (start, end, qtr_lens) tdelta = norm - start @@ -3173,7 +3173,7 @@ cdef class FY5253Quarter(FY5253Mixin): # Note: we always have 0 <= n < 4 weeks = sum(qtr_lens[:n]) if weeks: - res = shift_day(res, days=weeks * 7) + res = _shift_day(res, days=weeks * 7) return res @@ -3210,7 +3210,7 @@ cdef class FY5253Quarter(FY5253Mixin): current = next_year_end for qtr_len in qtr_lens: - current = shift_day(current, days=qtr_len * 7) + current = _shift_day(current, days=qtr_len * 7) if dt == current: return True return False @@ -3729,7 +3729,7 @@ cpdef to_offset(freq): # 
---------------------------------------------------------------------- # RelativeDelta Arithmetic -def shift_day(other: datetime, days: int) -> datetime: +cdef datetime _shift_day(datetime other, int days): """ Increment the datetime `other` by the given number of days, retaining the time-portion of the datetime. For tz-naive datetimes this is @@ -3915,6 +3915,8 @@ cdef inline void _shift_quarters(const int64_t[:] dtindex, out[i] = dtstruct_to_dt64(&dts) +@cython.wraparound(False) +@cython.boundscheck(False) cdef ndarray[int64_t] _shift_bdays(const int64_t[:] i8other, int periods): """ Implementation of BusinessDay.apply_offset. diff --git a/pandas/_libs/tslibs/timedeltas.pxd b/pandas/_libs/tslibs/timedeltas.pxd index f054a22ed7ad7..f114fd9297920 100644 --- a/pandas/_libs/tslibs/timedeltas.pxd +++ b/pandas/_libs/tslibs/timedeltas.pxd @@ -15,4 +15,5 @@ cdef class _Timedelta(timedelta): int64_t _d, _h, _m, _s, _ms, _us, _ns cpdef timedelta to_pytimedelta(_Timedelta self) - cpdef bint _has_ns(self) + cdef bint _has_ns(self) + cdef _ensure_components(_Timedelta self) diff --git a/pandas/_libs/tslibs/timedeltas.pyx b/pandas/_libs/tslibs/timedeltas.pyx index b443ae283755c..7979feb076c6e 100644 --- a/pandas/_libs/tslibs/timedeltas.pyx +++ b/pandas/_libs/tslibs/timedeltas.pyx @@ -154,7 +154,7 @@ def ints_to_pytimedelta(const int64_t[:] arr, box=False): cdef: Py_ssize_t i, n = len(arr) int64_t value - object[:] result = np.empty(n, dtype=object) + object[::1] result = np.empty(n, dtype=object) for i in range(n): @@ -892,10 +892,10 @@ cdef class _Timedelta(timedelta): return cmp_scalar(self.value, ots.value, op) - cpdef bint _has_ns(self): + cdef bint _has_ns(self): return self.value % 1000 != 0 - def _ensure_components(_Timedelta self): + cdef _ensure_components(_Timedelta self): """ compute the components """ @@ -1160,7 +1160,10 @@ cdef class _Timedelta(timedelta): converted : string of a Timedelta """ - cdef object sign, seconds_pretty, subs, fmt, comp_dict + cdef: + str 
sign, fmt + dict comp_dict + object subs self._ensure_components() diff --git a/pandas/_libs/tslibs/timestamps.pxd b/pandas/_libs/tslibs/timestamps.pxd index 9b05fbc5be915..ce4c5d07ecc53 100644 --- a/pandas/_libs/tslibs/timestamps.pxd +++ b/pandas/_libs/tslibs/timestamps.pxd @@ -6,17 +6,18 @@ from numpy cimport int64_t from pandas._libs.tslibs.base cimport ABCTimestamp from pandas._libs.tslibs.np_datetime cimport npy_datetimestruct +from pandas._libs.tslibs.offsets cimport BaseOffset -cdef object create_timestamp_from_ts(int64_t value, - npy_datetimestruct dts, - tzinfo tz, object freq, bint fold) +cdef _Timestamp create_timestamp_from_ts(int64_t value, + npy_datetimestruct dts, + tzinfo tz, BaseOffset freq, bint fold) cdef class _Timestamp(ABCTimestamp): cdef readonly: int64_t value, nanosecond - object _freq + BaseOffset _freq cdef bint _get_start_end_field(self, str field, freq) cdef _get_date_name_field(self, str field, object locale) diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx index 5ba28a9fae429..698e19f97c6aa 100644 --- a/pandas/_libs/tslibs/timestamps.pyx +++ b/pandas/_libs/tslibs/timestamps.pyx @@ -82,6 +82,7 @@ from pandas._libs.tslibs.np_datetime cimport ( from pandas._libs.tslibs.np_datetime import OutOfBoundsDatetime from pandas._libs.tslibs.offsets cimport ( + BaseOffset, is_offset_object, to_offset, ) @@ -113,9 +114,9 @@ _no_input = object() # ---------------------------------------------------------------------- -cdef inline object create_timestamp_from_ts(int64_t value, - npy_datetimestruct dts, - tzinfo tz, object freq, bint fold): +cdef inline _Timestamp create_timestamp_from_ts(int64_t value, + npy_datetimestruct dts, + tzinfo tz, BaseOffset freq, bint fold): """ convenience routine to construct a Timestamp from its parts """ cdef _Timestamp ts_base ts_base = _Timestamp.__new__(Timestamp, dts.year, dts.month, diff --git a/pandas/_libs/tslibs/timezones.pyx b/pandas/_libs/tslibs/timezones.pyx index 
224c5be1f3b7d..6e3f7a370e5dd 100644 --- a/pandas/_libs/tslibs/timezones.pyx +++ b/pandas/_libs/tslibs/timezones.pyx @@ -230,10 +230,10 @@ cdef object _get_utc_trans_times_from_dateutil_tz(tzinfo tz): return new_trans -cdef int64_t[:] unbox_utcoffsets(object transinfo): +cdef int64_t[::1] unbox_utcoffsets(object transinfo): cdef: Py_ssize_t i, sz - int64_t[:] arr + int64_t[::1] arr sz = len(transinfo) arr = np.empty(sz, dtype='i8') diff --git a/pandas/_libs/tslibs/tzconversion.pyx b/pandas/_libs/tslibs/tzconversion.pyx index afcfe94a695bb..cee2de0cf0f4a 100644 --- a/pandas/_libs/tslibs/tzconversion.pyx +++ b/pandas/_libs/tslibs/tzconversion.pyx @@ -610,7 +610,7 @@ cdef int64_t _tz_localize_using_tzinfo_api( int64_t val, tzinfo tz, bint to_utc=True, bint* fold=NULL ) except? -1: """ - Convert the i8 representation of a datetime from a general-cast timezone to + Convert the i8 representation of a datetime from a general-case timezone to UTC, or vice-versa using the datetime/tzinfo API. Private, not intended for use outside of tslibs.tzconversion. diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py index 9ffe33e0cf38e..e40acad0cb51a 100644 --- a/pandas/core/arrays/datetimes.py +++ b/pandas/core/arrays/datetimes.py @@ -2006,10 +2006,10 @@ def sequence_to_datetimes(data, require_iso8601: bool = False) -> DatetimeArray: def _sequence_to_dt64ns( data, dtype=None, - copy=False, + copy: bool = False, tz=None, - dayfirst=False, - yearfirst=False, + dayfirst: bool = False, + yearfirst: bool = False, ambiguous="raise", *, allow_mixed: bool = False,
null
https://api.github.com/repos/pandas-dev/pandas/pulls/46568
2022-03-30T02:41:10Z
2022-04-03T03:12:53Z
2022-04-03T03:12:53Z
2022-04-03T15:10:23Z
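The diff in the record above renames `m` to `mult` in `array_with_unit_to_datetime` and scales integer epoch values by a unit multiplier while masking NaT slots. The following is a minimal pure-Python/NumPy sketch of that masking pattern; the helper name `to_ns` is hypothetical and not part of the pandas API:

```python
import numpy as np

# Sentinel pandas uses internally for NaT (matches NPY_NAT in tslib.pyx)
NPY_NAT = np.iinfo(np.int64).min

def to_ns(values, mult):
    """Scale integer epoch values by the unit multiplier, preserving NaT.

    Mirrors the logic around ``iresult[mask] = 0`` and
    ``iresult * mult`` in the diff: NaT slots are zeroed before
    scaling so they cannot overflow, then restored afterwards.
    """
    ivalues = np.asarray(values, dtype=np.int64)
    mask = ivalues == NPY_NAT
    ivalues = np.where(mask, 0, ivalues)  # zero out NaT before scaling
    out = ivalues * mult
    out[mask] = NPY_NAT                   # restore NaT in the result
    return out
```

For example, converting second-resolution epochs to nanoseconds uses `mult = 1_000_000_000`, and NaT entries pass through unchanged.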
BUG: groupby().rolling(freq) with monotonic dates within groups #46065
diff --git a/doc/source/whatsnew/v1.4.2.rst b/doc/source/whatsnew/v1.4.2.rst index 13f3e9a0d0a8c..a8763819d0531 100644 --- a/doc/source/whatsnew/v1.4.2.rst +++ b/doc/source/whatsnew/v1.4.2.rst @@ -32,6 +32,7 @@ Bug fixes - Fix some cases for subclasses that define their ``_constructor`` properties as general callables (:issue:`46018`) - Fixed "longtable" formatting in :meth:`.Styler.to_latex` when ``column_format`` is given in extended format (:issue:`46037`) - Fixed incorrect rendering in :meth:`.Styler.format` with ``hyperlinks="html"`` when the url contains a colon or other special characters (:issue:`46389`) +- Fixed :meth:`Groupby.rolling` with a frequency window that would raise a ``ValueError`` even if the datetimes within each group were monotonic (:issue:`46061`) .. --------------------------------------------------------------------------- diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py index d4569816f9f7a..ac3d8b3dabb2b 100644 --- a/pandas/core/window/rolling.py +++ b/pandas/core/window/rolling.py @@ -2680,3 +2680,21 @@ def _get_window_indexer(self) -> GroupbyIndexer: indexer_kwargs=indexer_kwargs, ) return window_indexer + + def _validate_datetimelike_monotonic(self): + """ + Validate that each group in self._on is monotonic + """ + # GH 46061 + if self._on.hasnans: + self._raise_monotonic_error("values must not have NaT") + for group_indices in self._grouper.indices.values(): + group_on = self._on.take(group_indices) + if not ( + group_on.is_monotonic_increasing or group_on.is_monotonic_decreasing + ): + on = "index" if self.on is None else self.on + raise ValueError( + f"Each group within {on} must be monotonic. " + f"Sort the values in {on} first." 
+ ) diff --git a/pandas/tests/window/test_groupby.py b/pandas/tests/window/test_groupby.py index b4d0f6562f2d5..5f4805eaa01d2 100644 --- a/pandas/tests/window/test_groupby.py +++ b/pandas/tests/window/test_groupby.py @@ -927,6 +927,83 @@ def test_nan_and_zero_endpoints(self): ) tm.assert_series_equal(result, expected) + def test_groupby_rolling_non_monotonic(self): + # GH 43909 + + shuffled = [3, 0, 1, 2] + sec = 1_000 + df = DataFrame( + [{"t": Timestamp(2 * x * sec), "x": x + 1, "c": 42} for x in shuffled] + ) + with pytest.raises(ValueError, match=r".* must be monotonic"): + df.groupby("c").rolling(on="t", window="3s") + + def test_groupby_monotonic(self): + + # GH 15130 + # we don't need to validate monotonicity when grouping + + # GH 43909 we should raise an error here to match + # behaviour of non-groupby rolling. + + data = [ + ["David", "1/1/2015", 100], + ["David", "1/5/2015", 500], + ["David", "5/30/2015", 50], + ["David", "7/25/2015", 50], + ["Ryan", "1/4/2014", 100], + ["Ryan", "1/19/2015", 500], + ["Ryan", "3/31/2016", 50], + ["Joe", "7/1/2015", 100], + ["Joe", "9/9/2015", 500], + ["Joe", "10/15/2015", 50], + ] + + df = DataFrame(data=data, columns=["name", "date", "amount"]) + df["date"] = to_datetime(df["date"]) + df = df.sort_values("date") + + expected = ( + df.set_index("date") + .groupby("name") + .apply(lambda x: x.rolling("180D")["amount"].sum()) + ) + result = df.groupby("name").rolling("180D", on="date")["amount"].sum() + tm.assert_series_equal(result, expected) + + def test_datelike_on_monotonic_within_each_group(self): + # GH 13966 (similar to #15130, closed by #15175) + + # superseded by 43909 + # GH 46061: OK if the on is monotonic relative to each each group + + dates = date_range(start="2016-01-01 09:30:00", periods=20, freq="s") + df = DataFrame( + { + "A": [1] * 20 + [2] * 12 + [3] * 8, + "B": np.concatenate((dates, dates)), + "C": np.arange(40), + } + ) + + expected = ( + df.set_index("B").groupby("A").apply(lambda x: 
x.rolling("4s")["C"].mean()) + ) + result = df.groupby("A").rolling("4s", on="B").C.mean() + tm.assert_series_equal(result, expected) + + def test_datelike_on_not_monotonic_within_each_group(self): + # GH 46061 + df = DataFrame( + { + "A": [1] * 3 + [2] * 3, + "B": [Timestamp(year, 1, 1) for year in [2020, 2021, 2019]] * 2, + "C": range(6), + } + ) + with pytest.raises(ValueError, match="Each group within B must be monotonic."): + df.groupby("A").rolling("365D", on="B") + class TestExpanding: def setup_method(self): diff --git a/pandas/tests/window/test_rolling.py b/pandas/tests/window/test_rolling.py index 7e9bc121f06ff..89c90836ae957 100644 --- a/pandas/tests/window/test_rolling.py +++ b/pandas/tests/window/test_rolling.py @@ -1456,18 +1456,6 @@ def test_groupby_rolling_nan_included(): tm.assert_frame_equal(result, expected) -def test_groupby_rolling_non_monotonic(): - # GH 43909 - - shuffled = [3, 0, 1, 2] - sec = 1_000 - df = DataFrame( - [{"t": Timestamp(2 * x * sec), "x": x + 1, "c": 42} for x in shuffled] - ) - with pytest.raises(ValueError, match=r".* must be monotonic"): - df.groupby("c").rolling(on="t", window="3s") - - @pytest.mark.parametrize("method", ["skew", "kurt"]) def test_rolling_skew_kurt_numerical_stability(method): # GH#6929 diff --git a/pandas/tests/window/test_timeseries_window.py b/pandas/tests/window/test_timeseries_window.py index ee28b27b17365..907c654570273 100644 --- a/pandas/tests/window/test_timeseries_window.py +++ b/pandas/tests/window/test_timeseries_window.py @@ -9,7 +9,6 @@ Series, Timestamp, date_range, - to_datetime, ) import pandas._testing as tm @@ -649,65 +648,6 @@ def agg_by_day(x): tm.assert_frame_equal(result, expected) - def test_groupby_monotonic(self): - - # GH 15130 - # we don't need to validate monotonicity when grouping - - # GH 43909 we should raise an error here to match - # behaviour of non-groupby rolling. 
- - data = [ - ["David", "1/1/2015", 100], - ["David", "1/5/2015", 500], - ["David", "5/30/2015", 50], - ["David", "7/25/2015", 50], - ["Ryan", "1/4/2014", 100], - ["Ryan", "1/19/2015", 500], - ["Ryan", "3/31/2016", 50], - ["Joe", "7/1/2015", 100], - ["Joe", "9/9/2015", 500], - ["Joe", "10/15/2015", 50], - ] - - df = DataFrame(data=data, columns=["name", "date", "amount"]) - df["date"] = to_datetime(df["date"]) - df = df.sort_values("date") - - expected = ( - df.set_index("date") - .groupby("name") - .apply(lambda x: x.rolling("180D")["amount"].sum()) - ) - result = df.groupby("name").rolling("180D", on="date")["amount"].sum() - tm.assert_series_equal(result, expected) - - def test_non_monotonic_raises(self): - # GH 13966 (similar to #15130, closed by #15175) - - # superseded by 43909 - - dates = date_range(start="2016-01-01 09:30:00", periods=20, freq="s") - df = DataFrame( - { - "A": [1] * 20 + [2] * 12 + [3] * 8, - "B": np.concatenate((dates, dates)), - "C": np.arange(40), - } - ) - - expected = ( - df.set_index("B").groupby("A").apply(lambda x: x.rolling("4s")["C"].mean()) - ) - with pytest.raises(ValueError, match=r".* must be monotonic"): - df.groupby("A").rolling( - "4s", on="B" - ).C.mean() # should raise for non-monotonic t series - - df2 = df.sort_values("B") - result = df2.groupby("A").rolling("4s", on="B").C.mean() - tm.assert_series_equal(result, expected) - def test_rolling_cov_offset(self): # GH16058
- [x] closes #46061 (Replace xxxx with the Github issue number) - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added an entry in the latest `doc/source/whatsnew/v1.4.2.rst` file if fixing a bug or adding a new feature. Restored version in https://github.com/pandas-dev/pandas/pull/46065 as there is some support for the bug fix despite the performance hit.
https://api.github.com/repos/pandas-dev/pandas/pulls/46567
2022-03-30T02:15:00Z
2022-03-31T18:01:23Z
2022-03-31T18:01:23Z
2022-04-02T07:56:36Z
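The record above adds `_validate_datetimelike_monotonic`, which relaxes the monotonicity requirement for `groupby().rolling(freq)`: the `on` values only need to be monotonic within each group, not across the whole frame. A standalone sketch of that per-group check (the function name here is hypothetical, not pandas API):

```python
import pandas as pd

def validate_groupwise_monotonic(df, by, on):
    """Sketch of the check _validate_datetimelike_monotonic performs:
    the `on` column must contain no NaT, and within each group it must
    be monotonic in either direction; the frame as a whole no longer
    has to be sorted."""
    for _, group in df.groupby(by):
        col = group[on]
        if col.isna().any():
            raise ValueError(f"{on} must not have NaT")
        if not (col.is_monotonic_increasing or col.is_monotonic_decreasing):
            raise ValueError(f"Each group within {on} must be monotonic.")
```

This matches the behaviour exercised by `test_datelike_on_monotonic_within_each_group` (dates sorted within each group pass) and `test_datelike_on_not_monotonic_within_each_group` (an unsorted group raises).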
TYP: setter for index/columns property-like (AxisProperty)
diff --git a/pandas/_libs/properties.pyi b/pandas/_libs/properties.pyi index b2ba55aefb8a5..595e3bd706f1d 100644 --- a/pandas/_libs/properties.pyi +++ b/pandas/_libs/properties.pyi @@ -1,9 +1,28 @@ -# pyright: reportIncompleteStub = false -from typing import Any +from typing import ( + Sequence, + overload, +) + +from pandas._typing import ( + AnyArrayLike, + DataFrame, + Index, + Series, +) # note: this is a lie to make type checkers happy (they special # case property). cache_readonly uses attribute names similar to # property (fget) but it does not provide fset and fdel. cache_readonly = property -def __getattr__(name: str) -> Any: ... # incomplete +class AxisProperty: + + axis: int + def __init__(self, axis: int = ..., doc: str = ...) -> None: ... + @overload + def __get__(self, obj: DataFrame | Series, type) -> Index: ... + @overload + def __get__(self, obj: None, type) -> AxisProperty: ... + def __set__( + self, obj: DataFrame | Series, value: AnyArrayLike | Sequence + ) -> None: ... 
diff --git a/pandas/core/apply.py b/pandas/core/apply.py index d37080fce6e1c..c0200c7d7c5b7 100644 --- a/pandas/core/apply.py +++ b/pandas/core/apply.py @@ -235,8 +235,12 @@ def transform(self) -> DataFrame | Series: and not obj.empty ): raise ValueError("Transform function failed") + # error: Argument 1 to "__get__" of "AxisProperty" has incompatible type + # "Union[Series, DataFrame, GroupBy[Any], SeriesGroupBy, + # DataFrameGroupBy, BaseWindow, Resampler]"; expected "Union[DataFrame, + # Series]" if not isinstance(result, (ABCSeries, ABCDataFrame)) or not result.index.equals( - obj.index + obj.index # type:ignore[arg-type] ): raise ValueError("Function did not transform") @@ -645,7 +649,11 @@ class NDFrameApply(Apply): @property def index(self) -> Index: - return self.obj.index + # error: Argument 1 to "__get__" of "AxisProperty" has incompatible type + # "Union[Series, DataFrame, GroupBy[Any], SeriesGroupBy, + # DataFrameGroupBy, BaseWindow, Resampler]"; expected "Union[DataFrame, + # Series]" + return self.obj.index # type:ignore[arg-type] @property def agg_axis(self) -> Index: diff --git a/pandas/core/frame.py b/pandas/core/frame.py index 44d79fe2f1519..4376c784bc847 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -11199,12 +11199,10 @@ def isin(self, values) -> DataFrame: _info_axis_number = 1 _info_axis_name = "columns" - index: Index = properties.AxisProperty( + index = properties.AxisProperty( axis=1, doc="The index (row labels) of the DataFrame." ) - columns: Index = properties.AxisProperty( - axis=0, doc="The column labels of the DataFrame." 
- ) + columns = properties.AxisProperty(axis=0, doc="The column labels of the DataFrame.") @property def _AXIS_NUMBERS(self) -> dict[str, int]: diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 89a590f291356..673228a758aca 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -37,6 +37,7 @@ to_offset, ) from pandas._typing import ( + AnyArrayLike, ArrayLike, Axis, CompressionOptions, @@ -757,7 +758,7 @@ def _set_axis_nocheck(self, labels, axis: Axis, inplace: bool_t): obj.set_axis(labels, axis=axis, inplace=True) return obj - def _set_axis(self, axis: int, labels: Index) -> None: + def _set_axis(self, axis: int, labels: AnyArrayLike | Sequence) -> None: labels = ensure_index(labels) self._mgr.set_axis(axis, labels) self._clear_item_cache() diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py index 5d215ec81a6cd..a469372d85967 100644 --- a/pandas/core/groupby/generic.py +++ b/pandas/core/groupby/generic.py @@ -275,9 +275,9 @@ def aggregate(self, func=None, *args, engine=None, engine_kwargs=None, **kwargs) func = maybe_mangle_lambdas(func) ret = self._aggregate_multiple_funcs(func) if relabeling: - # error: Incompatible types in assignment (expression has type - # "Optional[List[str]]", variable has type "Index") - ret.columns = columns # type: ignore[assignment] + # columns is not narrowed by mypy from relabeling flag + assert columns is not None # for mypy + ret.columns = columns return ret else: diff --git a/pandas/core/series.py b/pandas/core/series.py index 6ef024f13fbb1..d8ee7365120f7 100644 --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -33,6 +33,7 @@ from pandas._libs.lib import no_default from pandas._typing import ( AggFuncType, + AnyArrayLike, ArrayLike, Axis, Dtype, @@ -553,7 +554,7 @@ def _constructor_expanddim(self) -> Callable[..., DataFrame]: def _can_hold_na(self) -> bool: return self._mgr._can_hold_na - def _set_axis(self, axis: int, labels) -> None: + def _set_axis(self, axis: int, 
labels: AnyArrayLike | Sequence) -> None: """ Override generic, we want to set the _typ here. @@ -5813,7 +5814,7 @@ def mask( _info_axis_number = 0 _info_axis_name = "index" - index: Index = properties.AxisProperty( + index = properties.AxisProperty( axis=0, doc="The index (axis labels) of the Series." ) diff --git a/pandas/io/parsers/arrow_parser_wrapper.py b/pandas/io/parsers/arrow_parser_wrapper.py index 97a57054f3aa9..21e8bb5f9e89f 100644 --- a/pandas/io/parsers/arrow_parser_wrapper.py +++ b/pandas/io/parsers/arrow_parser_wrapper.py @@ -105,12 +105,7 @@ def _finalize_output(self, frame: DataFrame) -> DataFrame: multi_index_named = False frame.columns = self.names # we only need the frame not the names - # error: Incompatible types in assignment (expression has type - # "Union[List[Union[Union[str, int, float, bool], Union[Period, Timestamp, - # Timedelta, Any]]], Index]", variable has type "Index") [assignment] - frame.columns, frame = self._do_date_conversions( # type: ignore[assignment] - frame.columns, frame - ) + frame.columns, frame = self._do_date_conversions(frame.columns, frame) if self.index_col is not None: for i, item in enumerate(self.index_col): if is_integer(item):
null
https://api.github.com/repos/pandas-dev/pandas/pulls/46565
2022-03-29T19:50:17Z
2022-06-10T22:53:30Z
2022-06-10T22:53:30Z
2022-06-11T09:36:35Z
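The stub added in the record above types `AxisProperty` with an `@overload` pair on `__get__`: instance access yields an `Index`, class access yields the descriptor itself, and `__set__` accepts any array-like. A runtime sketch of that descriptor protocol, using a hypothetical `Frame` stand-in rather than the real DataFrame/Series:

```python
class AxisProperty:
    """Runtime sketch of the descriptor typed in properties.pyi: the
    @overload pair encodes that __get__ on an instance returns the axis
    labels while __get__ on the class returns the descriptor itself."""

    def __init__(self, axis: int = 0, doc: str = "") -> None:
        self.axis = axis
        self.__doc__ = doc

    def __get__(self, obj, objtype=None):
        if obj is None:              # class access -> the descriptor
            return self
        return obj._axes[self.axis]  # instance access -> the labels

    def __set__(self, obj, value) -> None:
        obj._axes[self.axis] = list(value)


class Frame:
    # Hypothetical stand-in for DataFrame/Series, only to exercise
    # the descriptor; the real classes delegate to _set_axis.
    index = AxisProperty(axis=0, doc="The index (row labels).")

    def __init__(self, labels):
        self._axes = {0: list(labels)}
```

Because `property` has both `fget` and `fset` while `AxisProperty` implements both `__get__` and `__set__`, the old `index: Index = ...` annotation lost the setter's type; the dedicated stub class restores it.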
Backport of CI related PRs on 1.4.x
diff --git a/.circleci/config.yml b/.circleci/config.yml index dc357101e79fd..9d1a11026b35d 100644 --- a/.circleci/config.yml +++ b/.circleci/config.yml @@ -8,7 +8,7 @@ jobs: environment: ENV_FILE: ci/deps/circle-38-arm64.yaml PYTEST_WORKERS: auto - PATTERN: "not slow and not network and not clipboard and not arm_slow" + PATTERN: "not single_cpu and not slow and not network and not clipboard and not arm_slow and not db" PYTEST_TARGET: "pandas" steps: - checkout diff --git a/.github/workflows/posix.yml b/.github/workflows/posix.yml index 303d58d96c2c7..0d5ef807a3392 100644 --- a/.github/workflows/posix.yml +++ b/.github/workflows/posix.yml @@ -162,8 +162,7 @@ jobs: shell: bash run: | # TODO: re-enable cov, its slowing the tests down though - # TODO: Unpin Cython, the new Cython 0.29.26 is causing compilation errors - pip install Cython==0.29.25 numpy python-dateutil pytz pytest>=6.0 pytest-xdist>=1.31.0 hypothesis>=5.5.3 + pip install Cython numpy python-dateutil pytz pytest>=6.0 pytest-xdist>=1.31.0 pytest-asyncio>=0.17 hypothesis>=5.5.3 if: ${{ env.IS_PYPY == 'true' }} - name: Build Pandas diff --git a/azure-pipelines.yml b/azure-pipelines.yml index a37ea3eb89167..ec798bd607034 100644 --- a/azure-pipelines.yml +++ b/azure-pipelines.yml @@ -45,7 +45,7 @@ jobs: /opt/python/cp38-cp38/bin/python -m venv ~/virtualenvs/pandas-dev && \ . ~/virtualenvs/pandas-dev/bin/activate && \ python -m pip install --no-deps -U pip wheel 'setuptools<60.0.0' && \ - pip install cython numpy python-dateutil pytz pytest pytest-xdist hypothesis pytest-azurepipelines && \ + pip install cython numpy python-dateutil pytz pytest pytest-xdist pytest-asyncio>=0.17 hypothesis && \ python setup.py build_ext -q -j2 && \ python -m pip install --no-build-isolation -e . 
&& \ pytest -m 'not slow and not network and not clipboard' pandas --junitxml=test-data.xml" diff --git a/ci/deps/actions-310-numpydev.yaml b/ci/deps/actions-310-numpydev.yaml index 3e32665d5433f..401be14aaca02 100644 --- a/ci/deps/actions-310-numpydev.yaml +++ b/ci/deps/actions-310-numpydev.yaml @@ -9,6 +9,7 @@ dependencies: - pytest-cov - pytest-xdist>=1.31 - hypothesis>=5.5.3 + - pytest-asyncio>=0.17 # pandas dependencies - python-dateutil diff --git a/ci/deps/actions-310.yaml b/ci/deps/actions-310.yaml index bbc468f9d8f43..dac1219245e84 100644 --- a/ci/deps/actions-310.yaml +++ b/ci/deps/actions-310.yaml @@ -11,6 +11,8 @@ dependencies: - pytest-xdist>=1.31 - hypothesis>=5.5.3 - psutil + - pytest-asyncio>=0.17 + - boto3 # required dependencies - python-dateutil @@ -21,6 +23,7 @@ dependencies: - beautifulsoup4 - blosc - bottleneck + - brotlipy - fastparquet - fsspec - html5lib @@ -39,6 +42,7 @@ dependencies: - pytables - pyarrow - pyreadstat + - python-snappy - pyxlsb - s3fs - scipy diff --git a/ci/deps/actions-38-downstream_compat.yaml b/ci/deps/actions-38-downstream_compat.yaml index f3fa95d03c98e..01415122e6076 100644 --- a/ci/deps/actions-38-downstream_compat.yaml +++ b/ci/deps/actions-38-downstream_compat.yaml @@ -12,6 +12,8 @@ dependencies: - pytest-xdist>=1.31 - hypothesis>=5.5.3 - psutil + - pytest-asyncio>=0.17 + - boto3 # required dependencies - python-dateutil @@ -21,6 +23,7 @@ dependencies: # optional dependencies - beautifulsoup4 - blosc + - brotlipy - bottleneck - fastparquet - fsspec @@ -35,10 +38,11 @@ dependencies: - odfpy - pandas-gbq - psycopg2 - - pymysql - - pytables - pyarrow + - pymysql - pyreadstat + - pytables + - python-snappy - pyxlsb - s3fs - scipy @@ -52,17 +56,14 @@ dependencies: # downstream packages - aiobotocore - - boto3 - botocore - cftime - dask - ipython - geopandas - - python-snappy - seaborn - scikit-learn - statsmodels - - brotlipy - coverage - pandas-datareader - pyyaml diff --git a/ci/deps/actions-38-minimum_versions.yaml 
b/ci/deps/actions-38-minimum_versions.yaml index 2e782817f3f14..f3a967f67cbc3 100644 --- a/ci/deps/actions-38-minimum_versions.yaml +++ b/ci/deps/actions-38-minimum_versions.yaml @@ -13,6 +13,8 @@ dependencies: - pytest-xdist>=1.31 - hypothesis>=5.5.3 - psutil + - pytest-asyncio>=0.17 + - boto3 # required dependencies - python-dateutil=2.8.1 @@ -23,6 +25,7 @@ dependencies: - beautifulsoup4=4.8.2 - blosc=1.20.1 - bottleneck=1.3.1 + - brotlipy=0.7.0 - fastparquet=0.4.0 - fsspec=0.7.4 - html5lib=1.1 @@ -37,10 +40,11 @@ dependencies: - openpyxl=3.0.3 - pandas-gbq=0.14.0 - psycopg2=2.8.4 - - pymysql=0.10.1 - - pytables=3.6.1 - pyarrow=1.0.1 + - pymysql=0.10.1 - pyreadstat=1.1.0 + - pytables=3.6.1 + - python-snappy=0.6.0 - pyxlsb=1.0.6 - s3fs=0.4.0 - scipy=1.4.1 diff --git a/ci/deps/actions-38.yaml b/ci/deps/actions-38.yaml index 60db02def8a3d..79cd831051c2f 100644 --- a/ci/deps/actions-38.yaml +++ b/ci/deps/actions-38.yaml @@ -11,6 +11,8 @@ dependencies: - pytest-xdist>=1.31 - hypothesis>=5.5.3 - psutil + - pytest-asyncio>=0.17 + - boto3 # required dependencies - python-dateutil @@ -21,6 +23,7 @@ dependencies: - beautifulsoup4 - blosc - bottleneck + - brotlipy - fastparquet - fsspec - html5lib @@ -34,10 +37,11 @@ dependencies: - odfpy - pandas-gbq - psycopg2 - - pymysql - - pytables - pyarrow + - pymysql - pyreadstat + - pytables + - python-snappy - pyxlsb - s3fs - scipy diff --git a/ci/deps/actions-39.yaml b/ci/deps/actions-39.yaml index 2d6430afd0b36..1c681104f3196 100644 --- a/ci/deps/actions-39.yaml +++ b/ci/deps/actions-39.yaml @@ -11,6 +11,8 @@ dependencies: - pytest-xdist>=1.31 - hypothesis>=5.5.3 - psutil + - pytest-asyncio>=0.17 + - boto3 # required dependencies - python-dateutil @@ -21,6 +23,7 @@ dependencies: - beautifulsoup4 - blosc - bottleneck + - brotlipy - fastparquet - fsspec - html5lib @@ -35,9 +38,10 @@ dependencies: - pandas-gbq - psycopg2 - pymysql - - pytables - pyarrow - pyreadstat + - pytables + - python-snappy - pyxlsb - s3fs - scipy diff --git 
a/ci/deps/circle-38-arm64.yaml b/ci/deps/circle-38-arm64.yaml index 60608c3ee1a86..66fedccc5eca7 100644 --- a/ci/deps/circle-38-arm64.yaml +++ b/ci/deps/circle-38-arm64.yaml @@ -4,18 +4,52 @@ channels: dependencies: - python=3.8 - # tools - - cython>=0.29.24 + # test dependencies + - cython=0.29.24 - pytest>=6.0 + - pytest-cov - pytest-xdist>=1.31 - hypothesis>=5.5.3 + - psutil + - pytest-asyncio>=0.17 + - boto3 - # pandas dependencies - - botocore>=1.11 - - flask - - moto - - numpy + # required dependencies - python-dateutil + - numpy - pytz + + # optional dependencies + - beautifulsoup4 + - blosc + - bottleneck + - brotlipy + - fastparquet + - fsspec + - html5lib + - gcsfs + - jinja2 + - lxml + - matplotlib + - numba + - numexpr + - openpyxl + - odfpy + - pandas-gbq + - psycopg2 + - pyarrow + - pymysql + # Not provided on ARM + #- pyreadstat + - pytables + - python-snappy + - pyxlsb + - s3fs + - scipy + - sqlalchemy + - tabulate + - xarray + - xlrd + - xlsxwriter + - xlwt - zstandard - - pip diff --git a/doc/source/getting_started/install.rst b/doc/source/getting_started/install.rst index 8106efcfcfd98..e312889f2eb6a 100644 --- a/doc/source/getting_started/install.rst +++ b/doc/source/getting_started/install.rst @@ -410,5 +410,7 @@ Compression ========================= ================== ============================================================= Dependency Minimum Version Notes ========================= ================== ============================================================= -Zstandard Zstandard compression +brotli 0.7.0 Brotli compression +python-snappy 0.6.0 Snappy compression +Zstandard 0.15.2 Zstandard compression ========================= ================== ============================================================= diff --git a/environment.yml b/environment.yml index da5d6fabcde20..0a87e2b56f4cb 100644 --- a/environment.yml +++ b/environment.yml @@ -69,7 +69,7 @@ dependencies: - pytest>=6.0 - pytest-cov - pytest-xdist>=1.31 - - pytest-asyncio + 
- pytest-asyncio>=0.17 - pytest-instafail # downstream tests diff --git a/pandas/compat/_optional.py b/pandas/compat/_optional.py index cb9b0c6de6d3b..e69bee5c647d8 100644 --- a/pandas/compat/_optional.py +++ b/pandas/compat/_optional.py @@ -13,6 +13,7 @@ "bs4": "4.8.2", "blosc": "1.20.1", "bottleneck": "1.3.1", + "brotli": "0.7.0", "fastparquet": "0.4.0", "fsspec": "0.7.4", "html5lib": "1.1", @@ -34,6 +35,7 @@ "pyxlsb": "1.0.6", "s3fs": "0.4.0", "scipy": "1.4.1", + "snappy": "0.6.0", "sqlalchemy": "1.4.0", "tables": "3.6.1", "tabulate": "0.8.7", @@ -50,12 +52,14 @@ INSTALL_MAPPING = { "bs4": "beautifulsoup4", "bottleneck": "Bottleneck", + "brotli": "brotlipy", + "jinja2": "Jinja2", "lxml.etree": "lxml", "odf": "odfpy", "pandas_gbq": "pandas-gbq", - "tables": "pytables", + "snappy": "python-snappy", "sqlalchemy": "SQLAlchemy", - "jinja2": "Jinja2", + "tables": "pytables", } @@ -66,6 +70,13 @@ def get_version(module: types.ModuleType) -> str: version = getattr(module, "__VERSION__", None) if version is None: + if module.__name__ == "brotli": + # brotli doesn't contain attributes to confirm it's version + return "" + if module.__name__ == "snappy": + # snappy doesn't contain attributes to confirm it's version + # See https://github.com/andrix/python-snappy/pull/119 + return "" raise ImportError(f"Can't determine version for {module.__name__}") if module.__name__ == "psycopg2": # psycopg2 appends " (dt dec pq3 ext lo64)" to it's version @@ -141,7 +152,7 @@ def import_optional_dependency( minimum_version = min_version if min_version is not None else VERSIONS.get(parent) if minimum_version: version = get_version(module_to_get) - if Version(version) < Version(minimum_version): + if version and Version(version) < Version(minimum_version): msg = ( f"Pandas requires version '{minimum_version}' or newer of '{parent}' " f"(version '{version}' currently installed)." 
diff --git a/pandas/tests/arrays/string_/test_string.py b/pandas/tests/arrays/string_/test_string.py index 22fe7bb0de949..b5b4007798135 100644 --- a/pandas/tests/arrays/string_/test_string.py +++ b/pandas/tests/arrays/string_/test_string.py @@ -504,7 +504,7 @@ def test_memory_usage(dtype): # GH 33963 if dtype.storage == "pyarrow": - pytest.skip("not applicable") + pytest.skip(f"not applicable for {dtype.storage}") series = pd.Series(["a", "b", "c"], dtype=dtype) diff --git a/pandas/tests/extension/arrow/test_bool.py b/pandas/tests/extension/arrow/test_bool.py index a73684868e3ae..7f4766bac0329 100644 --- a/pandas/tests/extension/arrow/test_bool.py +++ b/pandas/tests/extension/arrow/test_bool.py @@ -1,6 +1,11 @@ import numpy as np import pytest +from pandas.compat import ( + is_ci_environment, + is_platform_windows, +) + import pandas as pd import pandas._testing as tm from pandas.api.types import is_bool_dtype @@ -91,6 +96,10 @@ def test_reduce_series_boolean(self): pass +@pytest.mark.skipif( + is_ci_environment() and is_platform_windows(), + reason="Causes stack overflow on Windows CI", +) class TestReduceBoolean(base.BaseBooleanReduceTests): pass diff --git a/pandas/tests/extension/base/ops.py b/pandas/tests/extension/base/ops.py index 1d3d736ca7ee2..a1d232b737da7 100644 --- a/pandas/tests/extension/base/ops.py +++ b/pandas/tests/extension/base/ops.py @@ -114,17 +114,22 @@ def test_add_series_with_extension_array(self, data): self.assert_series_equal(result, expected) @pytest.mark.parametrize("box", [pd.Series, pd.DataFrame]) - def test_direct_arith_with_ndframe_returns_not_implemented(self, data, box): + def test_direct_arith_with_ndframe_returns_not_implemented( + self, request, data, box + ): # EAs should return NotImplemented for ops with Series/DataFrame # Pandas takes care of unboxing the series and calling the EA's op. 
other = pd.Series(data) if box is pd.DataFrame: other = other.to_frame() - if hasattr(data, "__add__"): - result = data.__add__(other) - assert result is NotImplemented - else: - raise pytest.skip(f"{type(data).__name__} does not implement add") + if not hasattr(data, "__add__"): + request.node.add_marker( + pytest.mark.xfail( + reason=f"{type(data).__name__} does not implement add" + ) + ) + result = data.__add__(other) + assert result is NotImplemented class BaseComparisonOpsTests(BaseOpsUtil): diff --git a/pandas/tests/extension/json/test_json.py b/pandas/tests/extension/json/test_json.py index d530a75b74c8f..84491adb30ef6 100644 --- a/pandas/tests/extension/json/test_json.py +++ b/pandas/tests/extension/json/test_json.py @@ -1,5 +1,6 @@ import collections import operator +import sys import pytest @@ -155,20 +156,32 @@ def test_contains(self, data): class TestConstructors(BaseJSON, base.BaseConstructorsTests): - @pytest.mark.skip(reason="not implemented constructor from dtype") + @pytest.mark.xfail(reason="not implemented constructor from dtype") def test_from_dtype(self, data): # construct from our dtype & string dtype - pass + super(self).test_from_dtype(data) @pytest.mark.xfail(reason="RecursionError, GH-33900") def test_series_constructor_no_data_with_index(self, dtype, na_value): # RecursionError: maximum recursion depth exceeded in comparison - super().test_series_constructor_no_data_with_index(dtype, na_value) + rec_limit = sys.getrecursionlimit() + try: + # Limit to avoid stack overflow on Windows CI + sys.setrecursionlimit(100) + super().test_series_constructor_no_data_with_index(dtype, na_value) + finally: + sys.setrecursionlimit(rec_limit) @pytest.mark.xfail(reason="RecursionError, GH-33900") def test_series_constructor_scalar_na_with_index(self, dtype, na_value): # RecursionError: maximum recursion depth exceeded in comparison - super().test_series_constructor_scalar_na_with_index(dtype, na_value) + rec_limit = sys.getrecursionlimit() + try: + # 
Limit to avoid stack overflow on Windows CI + sys.setrecursionlimit(100) + super().test_series_constructor_scalar_na_with_index(dtype, na_value) + finally: + sys.setrecursionlimit(rec_limit) @pytest.mark.xfail(reason="collection as scalar, GH-33901") def test_series_constructor_scalar_with_index(self, data, dtype): diff --git a/pandas/tests/extension/test_categorical.py b/pandas/tests/extension/test_categorical.py index d21110e078709..1e17bf33c806c 100644 --- a/pandas/tests/extension/test_categorical.py +++ b/pandas/tests/extension/test_categorical.py @@ -50,7 +50,7 @@ def data(): """Length-100 array for this type. * data[0] and data[1] should both be non missing - * data[0] and data[1] should not gbe equal + * data[0] and data[1] should not be equal """ return Categorical(make_data()) @@ -86,7 +86,7 @@ class TestDtype(base.BaseDtypeTests): class TestInterface(base.BaseInterfaceTests): - @pytest.mark.skip(reason="Memory usage doesn't match") + @pytest.mark.xfail(reason="Memory usage doesn't match") def test_memory_usage(self, data): # Is this deliberate? 
super().test_memory_usage(data) @@ -149,13 +149,7 @@ class TestIndex(base.BaseIndexTests): class TestMissing(base.BaseMissingTests): - @pytest.mark.skip(reason="Not implemented") - def test_fillna_limit_pad(self, data_missing): - super().test_fillna_limit_pad(data_missing) - - @pytest.mark.skip(reason="Not implemented") - def test_fillna_limit_backfill(self, data_missing): - super().test_fillna_limit_backfill(data_missing) + pass class TestReduce(base.BaseNoReduceTests): @@ -163,7 +157,7 @@ class TestReduce(base.BaseNoReduceTests): class TestMethods(base.BaseMethodsTests): - @pytest.mark.skip(reason="Unobserved categories included") + @pytest.mark.xfail(reason="Unobserved categories included") def test_value_counts(self, all_data, dropna): return super().test_value_counts(all_data, dropna) @@ -184,10 +178,6 @@ def test_combine_add(self, data_repeated): expected = pd.Series([a + val for a in list(orig_data1)]) self.assert_series_equal(result, expected) - @pytest.mark.skip(reason="Not Applicable") - def test_fillna_length_mismatch(self, data_missing): - super().test_fillna_length_mismatch(data_missing) - class TestCasting(base.BaseCastingTests): @pytest.mark.parametrize("cls", [Categorical, CategoricalIndex]) diff --git a/pandas/tests/extension/test_datetime.py b/pandas/tests/extension/test_datetime.py index a64b42fad9415..92796c604333d 100644 --- a/pandas/tests/extension/test_datetime.py +++ b/pandas/tests/extension/test_datetime.py @@ -175,9 +175,7 @@ class TestMissing(BaseDatetimeTests, base.BaseMissingTests): class TestReshaping(BaseDatetimeTests, base.BaseReshapingTests): - @pytest.mark.skip(reason="We have DatetimeTZBlock") - def test_concat(self, data, in_frame): - pass + pass class TestSetitem(BaseDatetimeTests, base.BaseSetitemTests): diff --git a/pandas/tests/extension/test_interval.py b/pandas/tests/extension/test_interval.py index e2f4d69c489ba..0f916cea9d518 100644 --- a/pandas/tests/extension/test_interval.py +++ 
b/pandas/tests/extension/test_interval.py @@ -121,9 +121,9 @@ def test_reduce_series_numeric(self, data, all_numeric_reductions, skipna): class TestMethods(BaseInterval, base.BaseMethodsTests): - @pytest.mark.skip(reason="addition is not defined for intervals") + @pytest.mark.xfail(reason="addition is not defined for intervals") def test_combine_add(self, data_repeated): - pass + super().test_combine_add(data_repeated) @pytest.mark.xfail( reason="Raises with incorrect message bc it disallows *all* listlikes " @@ -134,29 +134,31 @@ def test_fillna_length_mismatch(self, data_missing): class TestMissing(BaseInterval, base.BaseMissingTests): - # Index.fillna only accepts scalar `value`, so we have to skip all + # Index.fillna only accepts scalar `value`, so we have to xfail all # non-scalar fill tests. - unsupported_fill = pytest.mark.skip("Unsupported fillna option.") + unsupported_fill = pytest.mark.xfail( + reason="Unsupported fillna option for Interval." + ) @unsupported_fill def test_fillna_limit_pad(self): - pass + super().test_fillna_limit_pad() @unsupported_fill def test_fillna_series_method(self): - pass + super().test_fillna_series_method() @unsupported_fill def test_fillna_limit_backfill(self): - pass + super().test_fillna_limit_backfill() @unsupported_fill def test_fillna_no_op_returns_copy(self): - pass + super().test_fillna_no_op_returns_copy() @unsupported_fill def test_fillna_series(self): - pass + super().test_fillna_series() def test_fillna_non_scalar_raises(self, data_missing): msg = "can only insert Interval objects and NA into an IntervalArray" @@ -173,9 +175,9 @@ class TestSetitem(BaseInterval, base.BaseSetitemTests): class TestPrinting(BaseInterval, base.BasePrintingTests): - @pytest.mark.skip(reason="custom repr") + @pytest.mark.xfail(reason="Interval has custom repr") def test_array_repr(self, data, size): - pass + super().test_array_repr() class TestParsing(BaseInterval, base.BaseParsingTests): diff --git a/pandas/tests/extension/test_numpy.py 
b/pandas/tests/extension/test_numpy.py index 2e1112ccf2205..ee181101a181a 100644 --- a/pandas/tests/extension/test_numpy.py +++ b/pandas/tests/extension/test_numpy.py @@ -208,10 +208,15 @@ def test_series_constructor_scalar_with_index(self, data, dtype): class TestDtype(BaseNumPyTests, base.BaseDtypeTests): - @pytest.mark.skip(reason="Incorrect expected.") - # we unsurprisingly clash with a NumPy name. - def test_check_dtype(self, data): - pass + def test_check_dtype(self, data, request): + if data.dtype.numpy_dtype == "object": + request.node.add_marker( + pytest.mark.xfail( + reason=f"PandasArray expectedly clashes with a " + f"NumPy name: {data.dtype.numpy_dtype}" + ) + ) + super().test_check_dtype(data) class TestGetitem(BaseNumPyTests, base.BaseGetitemTests): @@ -345,11 +350,6 @@ def test_fillna_frame(self, data_missing): class TestReshaping(BaseNumPyTests, base.BaseReshapingTests): - @pytest.mark.skip(reason="Incorrect expected.") - def test_merge(self, data, na_value): - # Fails creating expected (key column becomes a PandasDtype because) - super().test_merge(data, na_value) - @pytest.mark.parametrize( "in_frame", [ diff --git a/pandas/tests/extension/test_string.py b/pandas/tests/extension/test_string.py index 4256142556894..e96850e4e945b 100644 --- a/pandas/tests/extension/test_string.py +++ b/pandas/tests/extension/test_string.py @@ -28,7 +28,7 @@ def split_array(arr): if arr.dtype.storage != "pyarrow": - pytest.skip("chunked array n/a") + pytest.skip("only applicable for pyarrow chunked array n/a") def _split_array(arr): import pyarrow as pa @@ -162,9 +162,9 @@ class TestMethods(base.BaseMethodsTests): def test_value_counts(self, all_data, dropna): return super().test_value_counts(all_data, dropna) - @pytest.mark.skip(reason="returns nullable") + @pytest.mark.xfail(reason="returns nullable: GH 44692") def test_value_counts_with_normalize(self, data): - pass + super().test_value_counts_with_normalize(data) class TestCasting(base.BaseCastingTests): diff 
--git a/pandas/tests/plotting/frame/test_frame.py b/pandas/tests/plotting/frame/test_frame.py index ff247349ff4d5..449a1c69a4ef6 100644 --- a/pandas/tests/plotting/frame/test_frame.py +++ b/pandas/tests/plotting/frame/test_frame.py @@ -167,12 +167,11 @@ def test_nullable_int_plot(self): df = DataFrame( { "A": [1, 2, 3, 4, 5], - "B": [1.0, 2.0, 3.0, 4.0, 5.0], - "C": [7, 5, np.nan, 3, 2], + "B": [1, 2, 3, 4, 5], + "C": np.array([7, 5, np.nan, 3, 2], dtype=object), "D": pd.to_datetime(dates, format="%Y").view("i8"), "E": pd.to_datetime(dates, format="%Y", utc=True).view("i8"), - }, - dtype=np.int64, + } ) _check_plot_works(df.plot, x="A", y="B") diff --git a/pandas/tests/plotting/test_converter.py b/pandas/tests/plotting/test_converter.py index fe8620ef76c4b..d595f4b453b28 100644 --- a/pandas/tests/plotting/test_converter.py +++ b/pandas/tests/plotting/test_converter.py @@ -334,7 +334,7 @@ def test_conversion(self): rs = self.pc.convert( np.array( - ["2012-01-01 00:00:00+0000", "2012-01-02 00:00:00+0000"], + ["2012-01-01 00:00:00", "2012-01-02 00:00:00"], dtype="datetime64[ns]", ), None, diff --git a/pyproject.toml b/pyproject.toml index 1d318ab5f70c3..318a7398e1035 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -57,6 +57,7 @@ markers = [ "arm_slow: mark a test as slow for arm64 architecture", "arraymanager: mark a test to run with ArrayManager enabled", ] +asyncio_mode = "strict" [tool.mypy] # Import discovery diff --git a/requirements-dev.txt b/requirements-dev.txt index ac525f1d09fbe..6fd3ac53c50cb 100644 --- a/requirements-dev.txt +++ b/requirements-dev.txt @@ -47,7 +47,7 @@ flask pytest>=6.0 pytest-cov pytest-xdist>=1.31 -pytest-asyncio +pytest-asyncio>=0.17 pytest-instafail seaborn statsmodels
backports #46543, #46401, #46347, #46260, #46426, #46016, #46370 and #46358 (will change milestones on those, once CI here is successful and merged.)
https://api.github.com/repos/pandas-dev/pandas/pulls/46559
2022-03-29T12:54:08Z
2022-03-29T22:40:54Z
2022-03-29T22:40:54Z
2022-03-30T10:44:07Z
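The optional-dependency diff above makes the minimum-version check tolerant of modules (brotli, snappy) that expose no version attribute: `get_version` returns `""` for them, and the comparison is skipped with `if version and Version(version) < Version(minimum_version)`. A minimal sketch of that guard, with tuple comparison standing in for `packaging.version.Version` (`needs_upgrade` is a hypothetical name, not a pandas function):

```python
def needs_upgrade(version: str, minimum: str) -> bool:
    # An empty version string means "unknown" (e.g. snappy/brotli expose no
    # version attribute); skip the comparison rather than raise, as the PR does.
    if not version:
        return False
    # Simplified stand-in for packaging.version.Version comparison:
    # works only for plain dotted-integer versions.
    return tuple(map(int, version.split("."))) < tuple(map(int, minimum.split(".")))
```

With this guard, `import_optional_dependency` neither raises nor warns for a module whose version cannot be determined, instead of failing on `Version("")`.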
CI: pre-commit autoupdate to fix CI
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index 0a2f3f8f2506d..04e148453387b 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -16,7 +16,7 @@ repos: pass_filenames: true require_serial: false - repo: https://github.com/python/black - rev: 22.1.0 + rev: 22.3.0 hooks: - id: black - repo: https://github.com/codespell-project/codespell @@ -33,7 +33,7 @@ repos: exclude: \.txt$ - id: trailing-whitespace - repo: https://github.com/cpplint/cpplint - rev: 1.5.5 + rev: 1.6.0 hooks: - id: cpplint # We don't lint all C files because we don't want to lint any that are built @@ -56,7 +56,7 @@ repos: hooks: - id: isort - repo: https://github.com/asottile/pyupgrade - rev: v2.31.0 + rev: v2.31.1 hooks: - id: pyupgrade args: [--py38-plus] diff --git a/environment.yml b/environment.yml index 187f666938aeb..0dc9806856585 100644 --- a/environment.yml +++ b/environment.yml @@ -18,7 +18,7 @@ dependencies: - cython>=0.29.24 # code checks - - black=22.1.0 + - black=22.3.0 - cpplint - flake8=4.0.1 - flake8-bugbear=21.3.2 # used by flake8, find likely bugs diff --git a/requirements-dev.txt b/requirements-dev.txt index 3ccedcbad1782..94709171739d2 100644 --- a/requirements-dev.txt +++ b/requirements-dev.txt @@ -6,7 +6,7 @@ python-dateutil>=2.8.1 pytz asv < 0.5.0 cython>=0.29.24 -black==22.1.0 +black==22.3.0 cpplint flake8==4.0.1 flake8-bugbear==21.3.2
The new click release broke the previous version of `black`; see https://github.com/psf/black/issues/2964
https://api.github.com/repos/pandas-dev/pandas/pulls/46558
2022-03-29T12:34:45Z
2022-03-29T13:48:59Z
2022-03-29T13:48:59Z
2022-03-30T21:32:09Z
Close FastParquet file even on error
diff --git a/doc/source/whatsnew/v1.5.0.rst b/doc/source/whatsnew/v1.5.0.rst index 6cbee83247692..48a52f21e3640 100644 --- a/doc/source/whatsnew/v1.5.0.rst +++ b/doc/source/whatsnew/v1.5.0.rst @@ -498,7 +498,8 @@ I/O - Bug in :func:`read_parquet` when ``engine="pyarrow"`` which caused partial write to disk when column of unsupported datatype was passed (:issue:`44914`) - Bug in :func:`DataFrame.to_excel` and :class:`ExcelWriter` would raise when writing an empty DataFrame to a ``.ods`` file (:issue:`45793`) - Bug in Parquet roundtrip for Interval dtype with ``datetime64[ns]`` subtype (:issue:`45881`) -- Bug in :func:`read_excel` when reading a ``.ods`` file with newlines between xml elements(:issue:`45598`) +- Bug in :func:`read_excel` when reading a ``.ods`` file with newlines between xml elements (:issue:`45598`) +- Bug in :func:`read_parquet` when ``engine="fastparquet"`` where the file was not closed on error (:issue:`46555`) Period ^^^^^^ diff --git a/pandas/io/parquet.py b/pandas/io/parquet.py index c7e8d67189e5d..27b0b3d08ad53 100644 --- a/pandas/io/parquet.py +++ b/pandas/io/parquet.py @@ -349,13 +349,12 @@ def read( ) path = handles.handle - parquet_file = self.api.ParquetFile(path, **parquet_kwargs) - - result = parquet_file.to_pandas(columns=columns, **kwargs) - - if handles is not None: - handles.close() - return result + try: + parquet_file = self.api.ParquetFile(path, **parquet_kwargs) + return parquet_file.to_pandas(columns=columns, **kwargs) + finally: + if handles is not None: + handles.close() @doc(storage_options=_shared_docs["storage_options"]) diff --git a/pandas/tests/io/test_parquet.py b/pandas/tests/io/test_parquet.py index 3001922e95a54..7c04a51e803f6 100644 --- a/pandas/tests/io/test_parquet.py +++ b/pandas/tests/io/test_parquet.py @@ -1155,3 +1155,11 @@ def test_use_nullable_dtypes_not_supported(self, fp): df.to_parquet(path) with pytest.raises(ValueError, match="not supported for the fastparquet"): read_parquet(path, 
engine="fastparquet", use_nullable_dtypes=True) + + def test_close_file_handle_on_read_error(self): + with tm.ensure_clean("test.parquet") as path: + pathlib.Path(path).write_bytes(b"breakit") + with pytest.raises(Exception, match=""): # Not important which exception + read_parquet(path, engine="fastparquet") + # The next line raises an error on Windows if the file is still open + pathlib.Path(path).unlink(missing_ok=False)
- [x] closes #46555 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
https://api.github.com/repos/pandas-dev/pandas/pulls/46556
2022-03-29T10:56:04Z
2022-03-30T18:53:15Z
2022-03-30T18:53:14Z
2022-03-30T21:07:58Z
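The fastparquet fix above replaces "read, then close on the success path only" with a `try`/`finally`, so the file handle is released even when parsing raises. The pattern can be sketched without pandas (`FakeHandle` and `read` are stand-ins for `IOHandles` and `FastParquetImpl.read`, not pandas APIs):

```python
class FakeHandle:
    """Stand-in for the IOHandles wrapper; tracks whether close() ran."""

    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


def read(handle, parse):
    # Mirror of the PR's change: parse() stands in for
    # ParquetFile(...).to_pandas(); the handle is closed whether it
    # succeeds or raises, which is what lets Windows unlink the file.
    try:
        return parse(handle)
    finally:
        if handle is not None:
            handle.close()
```

This is exactly what the new `test_close_file_handle_on_read_error` asserts: after a failed read of a corrupt file, the handle is no longer held open.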
Backport PR #46542 on branch 1.4.x (Reorder YAML)
diff --git a/.github/workflows/posix.yml b/.github/workflows/posix.yml index 5452f9d67ee81..303d58d96c2c7 100644 --- a/.github/workflows/posix.yml +++ b/.github/workflows/posix.yml @@ -30,38 +30,38 @@ jobs: # even if tests are skipped/xfailed pyarrow_version: ["5", "6", "7"] include: - - env_file: actions-38-downstream_compat.yaml + - name: "Downstream Compat" + env_file: actions-38-downstream_compat.yaml pattern: "not slow and not network and not single_cpu" pytest_target: "pandas/tests/test_downstream.py" - name: "Downstream Compat" - - env_file: actions-38-minimum_versions.yaml + - name: "Minimum Versions" + env_file: actions-38-minimum_versions.yaml pattern: "not slow and not network and not single_cpu" - name: "Minimum Versions" - - env_file: actions-38.yaml + - name: "Locale: it_IT.utf8" + env_file: actions-38.yaml pattern: "not slow and not network and not single_cpu" extra_apt: "language-pack-it" lang: "it_IT.utf8" lc_all: "it_IT.utf8" - name: "Locale: it_IT.utf8" - - env_file: actions-38.yaml + - name: "Locale: zh_CN.utf8" + env_file: actions-38.yaml pattern: "not slow and not network and not single_cpu" extra_apt: "language-pack-zh-hans" lang: "zh_CN.utf8" lc_all: "zh_CN.utf8" - name: "Locale: zh_CN.utf8" - - env_file: actions-38.yaml + - name: "Data Manager" + env_file: actions-38.yaml pattern: "not slow and not network and not single_cpu" pandas_data_manager: "array" - name: "Data Manager" - - env_file: actions-pypy-38.yaml + - name: "Pypy" + env_file: actions-pypy-38.yaml pattern: "not slow and not network and not single_cpu" test_args: "--max-worker-restart 0" - name: "Pypy" - - env_file: actions-310-numpydev.yaml + - name: "Numpy Dev" + env_file: actions-310-numpydev.yaml pattern: "not slow and not network and not single_cpu" pandas_testing_mode: "deprecate" test_args: "-W error" - name: "Numpy Dev" fail-fast: false name: ${{ matrix.name || format('{0} pyarrow={1} {2}', matrix.env_file, matrix.pyarrow_version, matrix.pattern) }} env:
Backport PR #46542: Reorder YAML
https://api.github.com/repos/pandas-dev/pandas/pulls/46550
2022-03-28T22:59:41Z
2022-03-29T11:39:32Z
2022-03-29T11:39:32Z
2022-03-29T11:39:33Z
REGR: tests + whats new: boolean dtype in styler render
diff --git a/doc/source/whatsnew/v1.4.2.rst b/doc/source/whatsnew/v1.4.2.rst index 13f3e9a0d0a8c..76b2a5d6ffd47 100644 --- a/doc/source/whatsnew/v1.4.2.rst +++ b/doc/source/whatsnew/v1.4.2.rst @@ -21,7 +21,7 @@ Fixed regressions - Fixed regression in :meth:`DataFrame.replace` when a replacement value was also a target for replacement (:issue:`46306`) - Fixed regression in :meth:`DataFrame.replace` when the replacement value was explicitly ``None`` when passed in a dictionary to ``to_replace`` (:issue:`45601`, :issue:`45836`) - Fixed regression when setting values with :meth:`DataFrame.loc` losing :class:`MultiIndex` names if :class:`DataFrame` was empty before (:issue:`46317`) -- +- Fixed regression when rendering boolean datatype columns with :meth:`.Styler` (:issue:`46384`) .. --------------------------------------------------------------------------- diff --git a/pandas/tests/io/formats/style/test_format.py b/pandas/tests/io/formats/style/test_format.py index 5207be992d606..a52c679e16ad5 100644 --- a/pandas/tests/io/formats/style/test_format.py +++ b/pandas/tests/io/formats/style/test_format.py @@ -434,3 +434,11 @@ def test_1level_multiindex(): assert ctx["body"][0][0]["is_visible"] is True assert ctx["body"][1][0]["display_value"] == "2" assert ctx["body"][1][0]["is_visible"] is True + + +def test_boolean_format(): + # gh 46384: booleans do not collapse to integer representation on display + df = DataFrame([[True, False]]) + ctx = df.style._translate(True, True) + assert ctx["body"][0][1]["display_value"] is True + assert ctx["body"][0][2]["display_value"] is False
### COMES AFTER #46501 closes #46384
https://api.github.com/repos/pandas-dev/pandas/pulls/46548
2022-03-28T21:41:21Z
2022-03-30T21:36:26Z
2022-03-30T21:36:25Z
2022-03-30T21:36:26Z
BUG: pd.concat with identical key leads to multi-indexing error
diff --git a/doc/source/whatsnew/v1.5.0.rst b/doc/source/whatsnew/v1.5.0.rst index 8c02785647861..121c2805be8ca 100644 --- a/doc/source/whatsnew/v1.5.0.rst +++ b/doc/source/whatsnew/v1.5.0.rst @@ -536,6 +536,7 @@ Reshaping - Bug in :func:`get_dummies` that selected object and categorical dtypes but not string (:issue:`44965`) - Bug in :meth:`DataFrame.align` when aligning a :class:`MultiIndex` to a :class:`Series` with another :class:`MultiIndex` (:issue:`46001`) - Bug in concanenation with ``IntegerDtype``, or ``FloatingDtype`` arrays where the resulting dtype did not mirror the behavior of the non-nullable dtypes (:issue:`46379`) +- Bug in :func:`concat` with identical key leads to error when indexing :class:`MultiIndex` (:issue:`46519`) - Sparse diff --git a/pandas/core/reshape/concat.py b/pandas/core/reshape/concat.py index 72f3b402d49e3..864aa97df0587 100644 --- a/pandas/core/reshape/concat.py +++ b/pandas/core/reshape/concat.py @@ -705,7 +705,7 @@ def _make_concat_multiindex(indexes, keys, levels=None, names=None) -> MultiInde names = [None] if levels is None: - levels = [ensure_index(keys)] + levels = [ensure_index(keys).unique()] else: levels = [ensure_index(x) for x in levels] diff --git a/pandas/tests/reshape/concat/test_index.py b/pandas/tests/reshape/concat/test_index.py index aad56a1dccedc..50fee28669c58 100644 --- a/pandas/tests/reshape/concat/test_index.py +++ b/pandas/tests/reshape/concat/test_index.py @@ -1,6 +1,8 @@ import numpy as np import pytest +from pandas.errors import PerformanceWarning + import pandas as pd from pandas import ( DataFrame, @@ -323,3 +325,49 @@ def test_concat_multiindex_(self): {"col": ["a", "b", "c"]}, index=MultiIndex.from_product(iterables) ) tm.assert_frame_equal(result_df, expected_df) + + def test_concat_with_key_not_unique(self): + # GitHub #46519 + df1 = DataFrame({"name": [1]}) + df2 = DataFrame({"name": [2]}) + df3 = DataFrame({"name": [3]}) + df_a = concat([df1, df2, df3], keys=["x", "y", "x"]) + # the warning is 
caused by indexing unsorted multi-index + with tm.assert_produces_warning( + PerformanceWarning, match="indexing past lexsort depth" + ): + out_a = df_a.loc[("x", 0), :] + + df_b = DataFrame( + {"name": [1, 2, 3]}, index=Index([("x", 0), ("y", 0), ("x", 0)]) + ) + with tm.assert_produces_warning( + PerformanceWarning, match="indexing past lexsort depth" + ): + out_b = df_b.loc[("x", 0)] + + tm.assert_frame_equal(out_a, out_b) + + df1 = DataFrame({"name": ["a", "a", "b"]}) + df2 = DataFrame({"name": ["a", "b"]}) + df3 = DataFrame({"name": ["c", "d"]}) + df_a = concat([df1, df2, df3], keys=["x", "y", "x"]) + with tm.assert_produces_warning( + PerformanceWarning, match="indexing past lexsort depth" + ): + out_a = df_a.loc[("x", 0), :] + + df_b = DataFrame( + { + "a": ["x", "x", "x", "y", "y", "x", "x"], + "b": [0, 1, 2, 0, 1, 0, 1], + "name": list("aababcd"), + } + ).set_index(["a", "b"]) + df_b.index.names = [None, None] + with tm.assert_produces_warning( + PerformanceWarning, match="indexing past lexsort depth" + ): + out_b = df_b.loc[("x", 0), :] + + tm.assert_frame_equal(out_a, out_b)
- [x] closes #46519 - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
https://api.github.com/repos/pandas-dev/pandas/pulls/46546
2022-03-28T19:47:50Z
2022-04-05T20:20:21Z
2022-04-05T20:20:21Z
2022-04-05T20:25:53Z
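The one-line fix above (`levels = [ensure_index(keys).unique()]`) builds the MultiIndex levels from the *unique* keys in first-seen order, so `concat([...], keys=["x", "y", "x"])` no longer produces a level with a duplicate entry. A plain-Python sketch of that deduplication, under the assumption that order-preserving uniqueness is the relevant behavior of `Index.unique()` here (`unique_preserving_order` is a hypothetical helper, not a pandas function):

```python
def unique_preserving_order(keys):
    # Keep the first occurrence of each key, in order - roughly what
    # ensure_index(keys).unique() contributes to _make_concat_multiindex.
    seen = set()
    out = []
    for k in keys:
        if k not in seen:
            seen.add(k)
            out.append(k)
    return out
```

With duplicate keys deduplicated in the levels, indexing the result (e.g. `df.loc[("x", 0), :]`) selects both blocks labeled `"x"`, matching the equivalent hand-built MultiIndex in the new tests.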
Pin pytest-asyncio>=0.17
diff --git a/.github/workflows/posix.yml b/.github/workflows/posix.yml index bc8791afc69f7..2fc1d0d512cf1 100644 --- a/.github/workflows/posix.yml +++ b/.github/workflows/posix.yml @@ -162,7 +162,7 @@ jobs: shell: bash run: | # TODO: re-enable cov, its slowing the tests down though - pip install Cython numpy python-dateutil pytz pytest>=6.0 pytest-xdist>=1.31.0 pytest-asyncio hypothesis>=5.5.3 + pip install Cython numpy python-dateutil pytz pytest>=6.0 pytest-xdist>=1.31.0 pytest-asyncio>=0.17 hypothesis>=5.5.3 if: ${{ env.IS_PYPY == 'true' }} - name: Build Pandas diff --git a/azure-pipelines.yml b/azure-pipelines.yml index 86cf0c79759f5..9b83ed6116ed3 100644 --- a/azure-pipelines.yml +++ b/azure-pipelines.yml @@ -45,7 +45,7 @@ jobs: /opt/python/cp38-cp38/bin/python -m venv ~/virtualenvs/pandas-dev && \ . ~/virtualenvs/pandas-dev/bin/activate && \ python -m pip install --no-deps -U pip wheel 'setuptools<60.0.0' && \ - pip install cython numpy python-dateutil pytz pytest pytest-xdist pytest-asyncio hypothesis && \ + pip install cython numpy python-dateutil pytz pytest pytest-xdist pytest-asyncio>=0.17 hypothesis && \ python setup.py build_ext -q -j2 && \ python -m pip install --no-build-isolation -e . 
&& \ pytest -m 'not slow and not network and not clipboard and not single_cpu' pandas --junitxml=test-data.xml" diff --git a/ci/deps/actions-310-numpydev.yaml b/ci/deps/actions-310-numpydev.yaml index a5eb8a69e19da..401be14aaca02 100644 --- a/ci/deps/actions-310-numpydev.yaml +++ b/ci/deps/actions-310-numpydev.yaml @@ -9,7 +9,7 @@ dependencies: - pytest-cov - pytest-xdist>=1.31 - hypothesis>=5.5.3 - - pytest-asyncio + - pytest-asyncio>=0.17 # pandas dependencies - python-dateutil diff --git a/ci/deps/actions-310.yaml b/ci/deps/actions-310.yaml index 37e7ea04a348a..dac1219245e84 100644 --- a/ci/deps/actions-310.yaml +++ b/ci/deps/actions-310.yaml @@ -11,7 +11,7 @@ dependencies: - pytest-xdist>=1.31 - hypothesis>=5.5.3 - psutil - - pytest-asyncio + - pytest-asyncio>=0.17 - boto3 # required dependencies diff --git a/ci/deps/actions-38-downstream_compat.yaml b/ci/deps/actions-38-downstream_compat.yaml index 40f48884f1822..01415122e6076 100644 --- a/ci/deps/actions-38-downstream_compat.yaml +++ b/ci/deps/actions-38-downstream_compat.yaml @@ -12,7 +12,7 @@ dependencies: - pytest-xdist>=1.31 - hypothesis>=5.5.3 - psutil - - pytest-asyncio + - pytest-asyncio>=0.17 - boto3 # required dependencies diff --git a/ci/deps/actions-38-minimum_versions.yaml b/ci/deps/actions-38-minimum_versions.yaml index abba5ddd60325..f3a967f67cbc3 100644 --- a/ci/deps/actions-38-minimum_versions.yaml +++ b/ci/deps/actions-38-minimum_versions.yaml @@ -13,7 +13,7 @@ dependencies: - pytest-xdist>=1.31 - hypothesis>=5.5.3 - psutil - - pytest-asyncio + - pytest-asyncio>=0.17 - boto3 # required dependencies diff --git a/ci/deps/actions-38.yaml b/ci/deps/actions-38.yaml index 9c46eca4ab989..79cd831051c2f 100644 --- a/ci/deps/actions-38.yaml +++ b/ci/deps/actions-38.yaml @@ -11,7 +11,7 @@ dependencies: - pytest-xdist>=1.31 - hypothesis>=5.5.3 - psutil - - pytest-asyncio + - pytest-asyncio>=0.17 - boto3 # required dependencies diff --git a/ci/deps/actions-39.yaml b/ci/deps/actions-39.yaml index 
89b647372d7bc..1c681104f3196 100644 --- a/ci/deps/actions-39.yaml +++ b/ci/deps/actions-39.yaml @@ -11,7 +11,7 @@ dependencies: - pytest-xdist>=1.31 - hypothesis>=5.5.3 - psutil - - pytest-asyncio + - pytest-asyncio>=0.17 - boto3 # required dependencies diff --git a/ci/deps/circle-38-arm64.yaml b/ci/deps/circle-38-arm64.yaml index f38c401ca49f8..66fedccc5eca7 100644 --- a/ci/deps/circle-38-arm64.yaml +++ b/ci/deps/circle-38-arm64.yaml @@ -11,7 +11,7 @@ dependencies: - pytest-xdist>=1.31 - hypothesis>=5.5.3 - psutil - - pytest-asyncio + - pytest-asyncio>=0.17 - boto3 # required dependencies diff --git a/environment.yml b/environment.yml index ac8921b12f4a3..a424100eda21a 100644 --- a/environment.yml +++ b/environment.yml @@ -69,7 +69,7 @@ dependencies: - pytest>=6.0 - pytest-cov - pytest-xdist>=1.31 - - pytest-asyncio + - pytest-asyncio>=0.17 - pytest-instafail # downstream tests diff --git a/requirements-dev.txt b/requirements-dev.txt index a0558f1a00177..2746b91986a3c 100644 --- a/requirements-dev.txt +++ b/requirements-dev.txt @@ -47,7 +47,7 @@ flask pytest>=6.0 pytest-cov pytest-xdist>=1.31 -pytest-asyncio +pytest-asyncio>=0.17 pytest-instafail seaborn statsmodels
Required for "asyncio_mode" setting that was added to pyproject.toml. Part of https://github.com/pandas-dev/pandas/pull/46493#issuecomment-1079541210 - [ ] closes #xxxx (Replace xxxx with the Github issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
https://api.github.com/repos/pandas-dev/pandas/pulls/46543
2022-03-28T18:14:17Z
2022-03-28T22:57:39Z
2022-03-28T22:57:39Z
2022-03-30T10:45:20Z
Reorder YAML
diff --git a/.github/workflows/posix.yml b/.github/workflows/posix.yml index bc8791afc69f7..5bd1ac35e0746 100644 --- a/.github/workflows/posix.yml +++ b/.github/workflows/posix.yml @@ -30,38 +30,38 @@ jobs: # even if tests are skipped/xfailed pyarrow_version: ["5", "7"] include: - - env_file: actions-38-downstream_compat.yaml + - name: "Downstream Compat" + env_file: actions-38-downstream_compat.yaml pattern: "not slow and not network and not single_cpu" pytest_target: "pandas/tests/test_downstream.py" - name: "Downstream Compat" - - env_file: actions-38-minimum_versions.yaml + - name: "Minimum Versions" + env_file: actions-38-minimum_versions.yaml pattern: "not slow and not network and not single_cpu" - name: "Minimum Versions" - - env_file: actions-38.yaml + - name: "Locale: it_IT.utf8" + env_file: actions-38.yaml pattern: "not slow and not network and not single_cpu" extra_apt: "language-pack-it" lang: "it_IT.utf8" lc_all: "it_IT.utf8" - name: "Locale: it_IT.utf8" - - env_file: actions-38.yaml + - name: "Locale: zh_CN.utf8" + env_file: actions-38.yaml pattern: "not slow and not network and not single_cpu" extra_apt: "language-pack-zh-hans" lang: "zh_CN.utf8" lc_all: "zh_CN.utf8" - name: "Locale: zh_CN.utf8" - - env_file: actions-38.yaml + - name: "Data Manager" + env_file: actions-38.yaml pattern: "not slow and not network and not single_cpu" pandas_data_manager: "array" - name: "Data Manager" - - env_file: actions-pypy-38.yaml + - name: "Pypy" + env_file: actions-pypy-38.yaml pattern: "not slow and not network and not single_cpu" test_args: "--max-worker-restart 0" - name: "Pypy" - - env_file: actions-310-numpydev.yaml + - name: "Numpy Dev" + env_file: actions-310-numpydev.yaml pattern: "not slow and not network and not single_cpu" pandas_testing_mode: "deprecate" test_args: "-W error" - name: "Numpy Dev" fail-fast: false name: ${{ matrix.name || format('{0} pyarrow={1} {2}', matrix.env_file, matrix.pyarrow_version, matrix.pattern) }} env:
Part of https://github.com/pandas-dev/pandas/pull/46493#issuecomment-1079541210 - [ ] closes #xxxx (Replace xxxx with the Github issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
https://api.github.com/repos/pandas-dev/pandas/pulls/46542
2022-03-28T18:11:58Z
2022-03-28T22:59:12Z
2022-03-28T22:59:12Z
2022-03-28T22:59:48Z
GHA: Use bash -el
diff --git a/.github/actions/build_pandas/action.yml b/.github/actions/build_pandas/action.yml index 2e4bfea165316..e916d5bfde5fb 100644 --- a/.github/actions/build_pandas/action.yml +++ b/.github/actions/build_pandas/action.yml @@ -8,10 +8,10 @@ runs: run: | conda info conda list - shell: bash -l {0} + shell: bash -el {0} - name: Build Pandas run: | python setup.py build_ext -j 2 python -m pip install -e . --no-build-isolation --no-use-pep517 --no-index - shell: bash -l {0} + shell: bash -el {0} diff --git a/.github/actions/setup/action.yml b/.github/actions/setup/action.yml index 9ef00e7a85a6f..c357f149f2c7f 100644 --- a/.github/actions/setup/action.yml +++ b/.github/actions/setup/action.yml @@ -5,8 +5,8 @@ runs: steps: - name: Setting conda path run: echo "${HOME}/miniconda3/bin" >> $GITHUB_PATH - shell: bash -l {0} + shell: bash -el {0} - name: Setup environment and build pandas run: ci/setup_env.sh - shell: bash -l {0} + shell: bash -el {0} diff --git a/.github/workflows/asv-bot.yml b/.github/workflows/asv-bot.yml index f3946aeb84a63..7b974f24fc162 100644 --- a/.github/workflows/asv-bot.yml +++ b/.github/workflows/asv-bot.yml @@ -17,7 +17,7 @@ jobs: runs-on: ubuntu-latest defaults: run: - shell: bash -l {0} + shell: bash -el {0} concurrency: # Set concurrency to prevent abuse(full runs are ~5.5 hours !!!) 
diff --git a/.github/workflows/code-checks.yml b/.github/workflows/code-checks.yml index f32fed3b3ee68..4592279442b82 100644 --- a/.github/workflows/code-checks.yml +++ b/.github/workflows/code-checks.yml @@ -39,7 +39,7 @@ jobs: runs-on: ubuntu-latest defaults: run: - shell: bash -l {0} + shell: bash -el {0} concurrency: # https://github.community/t/concurrecy-not-work-for-push/183068/7 @@ -105,7 +105,7 @@ jobs: runs-on: ubuntu-latest defaults: run: - shell: bash -l {0} + shell: bash -el {0} concurrency: # https://github.community/t/concurrecy-not-work-for-push/183068/7 @@ -162,7 +162,7 @@ jobs: runs-on: ubuntu-latest defaults: run: - shell: bash -l {0} + shell: bash -el {0} concurrency: # https://github.community/t/concurrecy-not-work-for-push/183068/7 diff --git a/.github/workflows/posix.yml b/.github/workflows/posix.yml index bc8791afc69f7..9f67c78ce7bb8 100644 --- a/.github/workflows/posix.yml +++ b/.github/workflows/posix.yml @@ -20,7 +20,7 @@ jobs: runs-on: ubuntu-latest defaults: run: - shell: bash -l {0} + shell: bash -el {0} timeout-minutes: 120 strategy: matrix: @@ -159,7 +159,6 @@ jobs: if: ${{ env.IS_PYPY == 'true' }} - name: Setup PyPy dependencies - shell: bash run: | # TODO: re-enable cov, its slowing the tests down though pip install Cython numpy python-dateutil pytz pytest>=6.0 pytest-xdist>=1.31.0 pytest-asyncio hypothesis>=5.5.3 diff --git a/.github/workflows/python-dev.yml b/.github/workflows/python-dev.yml index c287827206336..a44f85222fb0e 100644 --- a/.github/workflows/python-dev.yml +++ b/.github/workflows/python-dev.yml @@ -56,7 +56,7 @@ jobs: # TODO: GH#44980 https://github.com/pypa/setuptools/issues/2941 - name: Install dependencies - shell: bash + shell: bash -el {0} run: | python -m pip install --upgrade pip "setuptools<60.0.0" wheel pip install -i https://pypi.anaconda.org/scipy-wheels-nightly/simple numpy @@ -74,7 +74,7 @@ jobs: python -c "import pandas; pandas.show_versions();" - name: Test with pytest - shell: bash + shell: bash -el {0} run: | ci/run_tests.sh diff --git a/.github/workflows/sdist.yml b/.github/workflows/sdist.yml index 431710a49a7dd..b97b08c73fb71 100644 --- a/.github/workflows/sdist.yml +++ b/.github/workflows/sdist.yml @@ -20,7 +20,7 @@ jobs: timeout-minutes: 60 defaults: run: - shell: bash -l {0} + shell: bash -el {0} strategy: fail-fast: false
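The whole change above is the `-l` to `-el` switch in the workflows' `shell:` directives: the added `-e` (errexit) makes the login shell abort a multi-command step at the first failing command instead of reporting only the last command's status. A minimal sketch of the difference (the temp path and script contents are illustrative, not from the PR):

```shell
#!/usr/bin/env bash
# A step script that fails midway but keeps going:
cat > /tmp/step.sh <<'EOF'
false
echo "still ran after the failure"
EOF

# Plain `bash -l` behaviour: the failure is swallowed; the step reports
# success because only the last command's exit status counts.
bash /tmp/step.sh
echo "without -e: exit $?"    # exit 0

# With errexit, as in `bash -el {0}`: the script aborts at `false` and
# propagates a non-zero status, so CI marks the step as failed.
bash -e /tmp/step.sh
echo "with -e: exit $?"       # exit 1
```

This is why a conda `setup_env.sh` or build step can silently half-fail under `bash -l {0}` but fails loudly under `bash -el {0}`.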
Part of https://github.com/pandas-dev/pandas/pull/46493#issuecomment-1079541210 - [ ] closes #xxxx (Replace xxxx with the Github issue number) - [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). - [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
https://api.github.com/repos/pandas-dev/pandas/pulls/46541
2022-03-28T18:10:22Z
2022-03-28T23:10:33Z
2022-03-28T23:10:33Z
2022-03-30T20:08:25Z