Columns: index (int64, 0–731k) · package (string, length 2–98) · name (string, length 1–76) · docstring (string, length 0–281k) · code (string, length 4–1.07M) · signature (string, length 2–42.8k)
709,209
biglist._biglist
_warn_flush
null
def _warn_flush(self):
    if self._append_buffer or self._append_files_buffer:
        warnings.warn(
            f"did you forget to flush {self.__class__.__name__} at '{self.path}'?"
        )
(self)
709,210
biglist._biglist
append
Append a single element to the :class:`Biglist`. In implementation, this appends to an in-memory buffer. Once the buffer size reaches :data:`batch_size`, the buffer's content will be persisted as a new data file, and the buffer will re-start empty. In other words, whenever the buffer is non-empty, its content is not yet persisted. You can append data to a common biglist from multiple processes. In the processes, use independent ``Biglist`` objects that point to the same "path". Each of the objects will maintain its own in-memory buffer and save its own files once the buffer fills up. Remember to :meth:`flush` at the end of work in each process.
def append(self, x: Element) -> None:
    """
    Append a single element to the :class:`Biglist`.

    In implementation, this appends to an in-memory buffer. Once the buffer size
    reaches :data:`batch_size`, the buffer's content will be persisted as a new data file,
    and the buffer will re-start empty. In other words, whenever the buffer is non-empty,
    its content is not yet persisted.

    You can append data to a common biglist from multiple processes.
    In the processes, use independent ``Biglist`` objects that point to the same "path".
    Each of the objects will maintain its own in-memory buffer and save its own files
    once the buffer fills up. Remember to :meth:`flush` at the end of work in each process.
    """
    self._append_buffer.append(x)
    if len(self._append_buffer) >= self.batch_size:
        self._flush()
(self, x: ~Element) -> NoneType
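The append-buffer behavior described above can be sketched with a small, self-contained model. ``MiniBiglist`` below is a hypothetical stand-in, not the real ``Biglist`` class: elements accumulate in an in-memory buffer, and each time the buffer reaches ``batch_size`` its content is "persisted" (here, simply moved to an in-memory list standing in for data files) and the buffer restarts empty.

```python
# Illustrative sketch only; `MiniBiglist` is hypothetical and mimics just the
# buffering pattern described in the docstring above.

class MiniBiglist:
    def __init__(self, batch_size: int):
        self.batch_size = batch_size
        self._append_buffer: list = []
        self.files: list[list] = []  # stands in for persisted data files

    def append(self, x) -> None:
        self._append_buffer.append(x)
        if len(self._append_buffer) >= self.batch_size:
            self._flush()

    def flush(self) -> None:
        # Persist whatever remains in the buffer, even a partial batch.
        if self._append_buffer:
            self._flush()

    def _flush(self) -> None:
        self.files.append(self._append_buffer)
        self._append_buffer = []


lst = MiniBiglist(batch_size=3)
for i in range(7):
    lst.append(i)
# 7 elements with batch_size 3: two full files persisted, one element still buffered
print(len(lst.files), len(lst._append_buffer))  # 2 1
lst.flush()
print(len(lst.files))  # 3 -- the partial batch became a third, smaller file
```

This is why the docstring insists on a final ``flush``: without it, the last partial batch stays only in memory.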
709,211
biglist._biglist
destroy
null
def destroy(self, *, concurrent=True) -> None:
    self.keep_files = False
    self.path.rmrf(concurrent=concurrent)
(self, *, concurrent=True) -> NoneType
709,212
biglist._biglist
extend
This simply calls :meth:`append` repeatedly.
def extend(self, x: Iterable[Element]) -> None:
    """This simply calls :meth:`append` repeatedly."""
    for v in x:
        self.append(v)
(self, x: collections.abc.Iterable[~Element]) -> NoneType
709,213
biglist._biglist
flush
:meth:`_flush` is called automatically whenever the "append buffer" is full, so as to persist the data and empty the buffer. (The capacity of this buffer is equal to ``self.batch_size``.) However, if this buffer is only partially filled when the user is done adding elements to the biglist, the data in the buffer will not be persisted. This is the first reason that the user should call ``flush`` when they are done adding data (via :meth:`append` or :meth:`extend`).

Although :meth:`_flush` creates new data files, it does not update the "meta info file" (``info.json`` in the root of ``self.path``) to include the new data files; it only updates the in-memory ``self.info``. This is for efficiency reasons, because updating ``info.json`` involves locking. Updating ``info.json`` to include new data files (created due to :meth:`append` and :meth:`extend`) is performed by :meth:`flush`. This is the second reason that the user should call :meth:`flush` at the end of their data writing session, regardless of whether all the new data have been persisted in data files. (They would be if their count happens to be a multiple of ``self.batch_size``.)

If there are multiple workers adding data to this biglist at the same time (from multiple processes or machines), data added by one worker will not be visible to another worker until the writing worker calls :meth:`flush` and the reading worker calls :meth:`reload`. Further, the user should assume that data not yet persisted (i.e. still in the "append buffer") are not visible to data reading via :meth:`__getitem__` or :meth:`__iter__` and not included in :meth:`__len__`, even to the same worker. In common use cases, we do not start reading data until we're done adding data to the biglist (at least "for now"), hence this is not a big issue.

In summary, call :meth:`flush` when

- you are done adding data (for this "session"),
- or you need to start reading data.

:meth:`flush` has overhead. You should call it only in the two situations above. **Do not** call it frequently "just to be safe". After a call to ``flush()``, it is fine to add more elements via :meth:`append` or :meth:`extend`. Data files created by ``flush()`` with fewer than :data:`batch_size` elements will stay as is among larger files. This is a legitimate case in parallel or distributed writing, or writing in multiple sessions.
def flush(self, *, lock_timeout=300, raise_on_write_error: bool = True) -> None:
    """
    :meth:`_flush` is called automatically whenever the "append buffer" is full,
    so as to persist the data and empty the buffer. (The capacity of this buffer
    is equal to ``self.batch_size``.) However, if this buffer is only partially
    filled when the user is done adding elements to the biglist, the data in the
    buffer will not be persisted. This is the first reason that the user should
    call ``flush`` when they are done adding data (via :meth:`append` or :meth:`extend`).

    Although :meth:`_flush` creates new data files, it does not update the
    "meta info file" (``info.json`` in the root of ``self.path``) to include the
    new data files; it only updates the in-memory ``self.info``. This is for
    efficiency reasons, because updating ``info.json`` involves locking.

    Updating ``info.json`` to include new data files (created due to :meth:`append`
    and :meth:`extend`) is performed by :meth:`flush`. This is the second reason
    that the user should call :meth:`flush` at the end of their data writing
    session, regardless of whether all the new data have been persisted in data
    files. (They would be if their count happens to be a multiple of
    ``self.batch_size``.)

    If there are multiple workers adding data to this biglist at the same time
    (from multiple processes or machines), data added by one worker will not be
    visible to another worker until the writing worker calls :meth:`flush` and
    the reading worker calls :meth:`reload`.

    Further, the user should assume that data not yet persisted (i.e. still in
    the "append buffer") are not visible to data reading via :meth:`__getitem__`
    or :meth:`__iter__` and not included in :meth:`__len__`, even to the same
    worker. In common use cases, we do not start reading data until we're done
    adding data to the biglist (at least "for now"), hence this is not a big issue.

    In summary, call :meth:`flush` when

    - you are done adding data (for this "session"),
    - or you need to start reading data.

    :meth:`flush` has overhead. You should call it only in the two situations
    above. **Do not** call it frequently "just to be safe".

    After a call to ``flush()``, it is fine to add more elements via
    :meth:`append` or :meth:`extend`. Data files created by ``flush()`` with
    fewer than :data:`batch_size` elements will stay as is among larger files.
    This is a legitimate case in parallel or distributed writing, or writing in
    multiple sessions.
    """
    self._flush()
    if self._file_dumper is not None:
        errors = self._file_dumper.wait(raise_on_error=raise_on_write_error)
        if errors:
            for file, e in errors:
                logger.error('failed to write file %s: %r', file, e)
                fname = file.name
                for i, (f, _) in enumerate(self._append_files_buffer):
                    if f == fname:
                        self._append_files_buffer.pop(i)
                        break
                if file.exists():
                    try:
                        file.remove_file()
                    except Exception as e:
                        logger.error('failed to delete file %s: %r', file, e)

    # Other workers in other threads, processes, or machines may have appended data
    # to the list. This block merges the appends by the current worker with
    # appends by other workers. The last call to ``flush`` across all workers
    # will get the final meta info right.
    if self._append_files_buffer:
        with self._info_file.lock(timeout=lock_timeout) as ff:
            z0 = ff.read_json()['data_files_info']
            z = sorted(
                set((*(tuple(v[:2]) for v in z0), *self._append_files_buffer))
            )  # TODO: maybe a merge sort can be more efficient.
            cum = list(itertools.accumulate(v[1] for v in z))
            z = [(a, b, c) for (a, b), c in zip(z, cum)]
            self.info['data_files_info'] = z
            ff.write_json(self.info, overwrite=True)
        self._append_files_buffer.clear()
(self, *, lock_timeout=300, raise_on_write_error: bool = True) -> NoneType
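The meta-info merge at the heart of ``flush`` can be shown in isolation. The sketch below uses hypothetical file names and counts (the real entries come from ``info.json``; only the ``(file_name, item_count, cumulative_count)`` shape is taken from the code above): one worker's new ``(name, count)`` pairs are merged with the existing entries, de-duplicated, sorted, and the cumulative counts rebuilt with ``itertools.accumulate``.

```python
import itertools

# Hypothetical existing entries: (file_name, item_count, cumulative_count).
existing = [('f_aaa', 100, 100), ('f_bbb', 50, 150)]
# Hypothetical new files from this worker: (file_name, item_count).
new_files = [('f_ccc', 30), ('f_abc', 20)]

# Merge as in flush(): strip to (name, count) pairs, de-duplicate, sort by name,
# then recompute the running totals.
z = sorted(set((*(tuple(v[:2]) for v in existing), *new_files)))
cum = list(itertools.accumulate(v[1] for v in z))
merged = [(a, b, c) for (a, b), c in zip(z, cum)]
print(merged)
# [('f_aaa', 100, 100), ('f_abc', 20, 120), ('f_bbb', 50, 170), ('f_ccc', 30, 200)]
```

Sorting by name yields a stable global order because, per ``make_file_name`` below, real file names begin with a fixed-length timestamp.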
709,214
biglist._biglist
make_file_name
This method constructs the file name of a data file. If you need to customize this method for any reason, you should do it via ``extra`` and keep the other patterns unchanged. The string ``extra`` will appear between other fixed patterns in the file name.

One possible use case is this: in distributed writing, you want files written by different workers to be distinguishable by the file names. Do something like this::

    def worker(datapath: str, worker_id: str, ...):
        out = Biglist(datapath)
        _make_file_name = out.make_file_name
        out.make_file_name = lambda buffer_len: _make_file_name(buffer_len, worker_id)
        ...
def make_file_name(self, buffer_len: int, extra: str = '') -> str:
    """
    This method constructs the file name of a data file.
    If you need to customize this method for any reason, you should do it via
    ``extra`` and keep the other patterns unchanged. The string ``extra`` will
    appear between other fixed patterns in the file name.

    One possible use case is this: in distributed writing, you want files written
    by different workers to be distinguishable by the file names. Do something
    like this::

        def worker(datapath: str, worker_id: str, ...):
            out = Biglist(datapath)
            _make_file_name = out.make_file_name
            out.make_file_name = lambda buffer_len: _make_file_name(buffer_len, worker_id)
            ...
    """
    if extra:
        extra = extra.lstrip('_').rstrip('_') + '_'
    return f"{datetime.utcnow().strftime('%Y%m%d%H%M%S.%f')}_{extra}{str(uuid4()).replace('-', '')[:10]}_{buffer_len}"
    # File name pattern introduced on 7/25/2022.
    # This should guarantee the file name is unique, hence
    # we do not need to verify that this file name is not already used.
    # Also include timestamp and item count in the file name, in case
    # later we decide to use these pieces of info.
    # Changes in 0.7.4: the time part changes from epoch to datetime, with guaranteed fixed length.
    # Change in 0.8.4: the uuid part has dash removed and length reduced to 10; add ``extra``.
(self, buffer_len: int, extra: str = '') -> str
709,215
biglist._biglist
reload
Reload the meta info.

This is used in the following scenario: suppose we have this object pointing to a biglist on the local disk; another object in another process is appending data to the same biglist (that is, it points to the same storage location); after a while, the meta info file on the disk has been modified by the other process, hence the current object is outdated; calling this method brings it up to date. The same idea applies if the storage is in the cloud, and another machine is appending data to the same remote biglist.

Creating a new object pointing to the same storage location would achieve the same effect.
def reload(self) -> None:
    """
    Reload the meta info.

    This is used in the following scenario: suppose we have this object pointing
    to a biglist on the local disk; another object in another process is appending
    data to the same biglist (that is, it points to the same storage location);
    after a while, the meta info file on the disk has been modified by the other
    process, hence the current object is outdated; calling this method brings it
    up to date. The same idea applies if the storage is in the cloud, and another
    machine is appending data to the same remote biglist.

    Creating a new object pointing to the same storage location would achieve
    the same effect.
    """
    self.info = self._info_file.read_json()
(self) -> NoneType
709,216
biglist._biglist
BiglistFileReader
null
class BiglistFileReader(FileReader[Element]):
    def __init__(self, path: PathType, loader: Callable[[Upath], Any]):
        """
        Parameters
        ----------
        path
            Path of a data file.
        loader
            A function that will be used to load the data file.
            This must be pickle-able.
            Usually this is the bound method ``load`` of a subclass of
            :class:`upathlib.serializer.Serializer`.
            If you customize this, please see the doc of :class:`~biglist.FileReader`.
        """
        super().__init__()
        self.path: Upath = resolve_path(path)
        self.loader = loader
        self._data: list | None = None

    def __getstate__(self):
        return self.path, self.loader

    def __setstate__(self, data):
        self.path, self.loader = data
        self._data = None

    def load(self) -> None:
        if self._data is None:
            self._data = self.loader(self.path)

    def data(self) -> list[Element]:
        """Return the data loaded from the file."""
        self.load()
        return self._data

    def __len__(self) -> int:
        return len(self.data())

    def __getitem__(self, idx: int) -> Element:
        return self.data()[idx]

    def __iter__(self) -> Iterator[Element]:
        return iter(self.data())
(path: 'PathType', loader: 'Callable[[Upath], Any]')
709,217
biglist._biglist
__getitem__
null
def __getitem__(self, idx: int) -> Element:
    return self.data()[idx]
(self, idx: int) -> ~Element
709,218
biglist._biglist
__getstate__
null
def __getstate__(self):
    return self.path, self.loader
(self)
709,219
biglist._biglist
__init__
Parameters
----------
path
    Path of a data file.
loader
    A function that will be used to load the data file. This must be pickle-able. Usually this is the bound method ``load`` of a subclass of :class:`upathlib.serializer.Serializer`. If you customize this, please see the doc of :class:`~biglist.FileReader`.
def __init__(self, path: PathType, loader: Callable[[Upath], Any]):
    """
    Parameters
    ----------
    path
        Path of a data file.
    loader
        A function that will be used to load the data file.
        This must be pickle-able.
        Usually this is the bound method ``load`` of a subclass of
        :class:`upathlib.serializer.Serializer`.
        If you customize this, please see the doc of :class:`~biglist.FileReader`.
    """
    super().__init__()
    self.path: Upath = resolve_path(path)
    self.loader = loader
    self._data: list | None = None
(self, path: Union[str, pathlib.Path, upathlib._upath.Upath], loader: Callable[[upathlib._upath.Upath], Any])
709,220
biglist._biglist
__iter__
null
def __iter__(self) -> Iterator[Element]:
    return iter(self.data())
(self) -> collections.abc.Iterator[~Element]
709,221
biglist._biglist
__len__
null
def __len__(self) -> int:
    return len(self.data())
(self) -> int
709,222
biglist._util
__repr__
null
def __repr__(self):
    return f"<{self.__class__.__name__} for '{self.path}'>"
(self)
709,223
biglist._biglist
__setstate__
null
def __setstate__(self, data):
    self.path, self.loader = data
    self._data = None
(self, data)
709,226
biglist._biglist
data
Return the data loaded from the file.
def data(self) -> list[Element]:
    """Return the data loaded from the file."""
    self.load()
    return self._data
(self) -> list[~Element]
709,227
biglist._biglist
load
null
def load(self) -> None:
    if self._data is None:
        self._data = self.loader(self.path)
(self) -> NoneType
709,228
biglist._util
Chain
This class tracks a series of :class:`Seq` objects to provide random element access and iteration on the series as a whole, with zero-copy. This class is in contrast with the standard `itertools.chain <https://docs.python.org/3/library/itertools.html#itertools.chain>`_, which takes iterables.
class Chain(Seq[Element]):
    """
    This class tracks a series of :class:`Seq` objects to provide random element
    access and iteration on the series as a whole, with zero-copy.

    This class is in contrast with the standard
    `itertools.chain <https://docs.python.org/3/library/itertools.html#itertools.chain>`_,
    which takes iterables.
    """

    def __init__(self, list_: Seq[Element], *lists: Seq[Element]):
        self._lists = (list_, *lists)
        self._lists_len: None | list[int] = None
        self._lists_len_cumsum: None | list[int] = None
        self._len: None | int = None
        # Records info about the last call to `__getitem__`
        # to hopefully speed up the next call, under the assumption
        # that user tends to access consecutive or neighboring
        # elements.
        self._get_item_last_list = None

    def __repr__(self):
        return "<{} with {} elements in {} member Seq's>".format(
            self.__class__.__name__,
            self.__len__(),
            len(self._lists),
        )

    def __str__(self):
        return self.__repr__()

    def __len__(self) -> int:
        if self._len is None:
            if self._lists_len is None:
                self._lists_len = [len(v) for v in self._lists]
            self._len = sum(self._lists_len)
        return self._len

    def __getitem__(self, idx: int) -> Element:
        if self._lists_len_cumsum is None:
            if self._lists_len is None:
                self._lists_len = [len(v) for v in self._lists]
            self._lists_len_cumsum = list(itertools.accumulate(self._lists_len))
        ilist, idx_in_list, list_info = locate_idx_in_chunked_seq(
            idx, self._lists_len_cumsum, self._get_item_last_list
        )
        self._get_item_last_list = list_info
        return self._lists[ilist][idx_in_list]

    def __iter__(self) -> Iterator[Element]:
        for v in self._lists:
            yield from v

    @property
    def raw(self) -> tuple[Seq[Element], ...]:
        """
        Return the underlying list of :class:`Seq`\\s.

        A member ``Seq`` could be a :class:`Slicer`. The current method does not
        follow a ``Slicer`` to its "raw" component, because that could represent
        a different set of elements than the ``Slicer`` object.
        """
        return self._lists
(list_: 'Seq[Element]', *lists: 'Seq[Element]')
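Locating a global index inside a chain of sequences, as ``Chain.__getitem__`` does via ``locate_idx_in_chunked_seq``, reduces to a search over cumulative lengths. The sketch below is a hypothetical stdlib-only ``locate`` using ``bisect``; the library's own helper additionally caches the last hit to speed up consecutive access, which is omitted here.

```python
import bisect
import itertools

def locate(idx: int, seqs):
    """Map a global index to (which sequence, index within that sequence)."""
    cumsum = list(itertools.accumulate(len(s) for s in seqs))
    if idx < 0 or idx >= cumsum[-1]:
        raise IndexError(idx)
    # First position whose cumulative length is strictly greater than idx.
    i = bisect.bisect_right(cumsum, idx)
    offset = cumsum[i - 1] if i > 0 else 0
    return i, idx - offset

seqs = [[0, 1, 2], [3, 4], [5, 6, 7, 8]]
print(locate(4, seqs))  # (1, 1): global index 4 is element 1 of the second list
ilist, j = locate(7, seqs)
print(seqs[ilist][j])   # 7
```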
709,229
biglist._util
__getitem__
null
def __getitem__(self, idx: int) -> Element:
    if self._lists_len_cumsum is None:
        if self._lists_len is None:
            self._lists_len = [len(v) for v in self._lists]
        self._lists_len_cumsum = list(itertools.accumulate(self._lists_len))
    ilist, idx_in_list, list_info = locate_idx_in_chunked_seq(
        idx, self._lists_len_cumsum, self._get_item_last_list
    )
    self._get_item_last_list = list_info
    return self._lists[ilist][idx_in_list]
(self, idx: int) -> ~Element
709,230
biglist._util
__init__
null
def __init__(self, list_: Seq[Element], *lists: Seq[Element]):
    self._lists = (list_, *lists)
    self._lists_len: None | list[int] = None
    self._lists_len_cumsum: None | list[int] = None
    self._len: None | int = None
    # Records info about the last call to `__getitem__`
    # to hopefully speed up the next call, under the assumption
    # that user tends to access consecutive or neighboring
    # elements.
    self._get_item_last_list = None
(self, list_: biglist._util.Seq[~Element], *lists: biglist._util.Seq[~Element])
709,231
biglist._util
__iter__
null
def __iter__(self) -> Iterator[Element]:
    for v in self._lists:
        yield from v
(self) -> collections.abc.Iterator[~Element]
709,232
biglist._util
__len__
null
def __len__(self) -> int:
    if self._len is None:
        if self._lists_len is None:
            self._lists_len = [len(v) for v in self._lists]
        self._len = sum(self._lists_len)
    return self._len
(self) -> int
709,233
biglist._util
__repr__
null
def __repr__(self):
    return "<{} with {} elements in {} member Seq's>".format(
        self.__class__.__name__,
        self.__len__(),
        len(self._lists),
    )
(self)
709,236
biglist._util
FileReader
A ``FileReader`` is a "lazy" loader of a data file. It keeps track of the path of a data file along with a loader function, but performs the loading only when needed. In particular, upon initiation of a ``FileReader`` object, file loading has not happened, and the object is light weight and friendly to pickling.

Once data have been loaded, this class provides various ways to navigate the data. At a minimum, the :class:`Seq` API is implemented. With loaded data and associated facilities, this object may no longer be pickle-able, depending on the specifics of a subclass.

One use case of this class is to pass around ``FileReader`` objects (that are initiated but not loaded) in `multiprocessing <https://docs.python.org/3/library/multiprocessing.html>`_ code for concurrent data processing.

This class is generic with a parameter indicating the type of the elements in the data sequence contained in the file. For example you can write::

    def func(file_reader: FileReader[int]):
        ...
class FileReader(Seq[Element]):
    """
    A ``FileReader`` is a "lazy" loader of a data file. It keeps track of the
    path of a data file along with a loader function, but performs the loading
    only when needed. In particular, upon initiation of a ``FileReader`` object,
    file loading has not happened, and the object is light weight and friendly
    to pickling.

    Once data have been loaded, this class provides various ways to navigate
    the data. At a minimum, the :class:`Seq` API is implemented.

    With loaded data and associated facilities, this object may no longer be
    pickle-able, depending on the specifics of a subclass.

    One use case of this class is to pass around ``FileReader`` objects
    (that are initiated but not loaded) in
    `multiprocessing <https://docs.python.org/3/library/multiprocessing.html>`_
    code for concurrent data processing.

    This class is generic with a parameter indicating the type of the elements
    in the data sequence contained in the file. For example you can write::

        def func(file_reader: FileReader[int]):
            ...
    """

    def __repr__(self):
        return f"<{self.__class__.__name__} for '{self.path}'>"

    def __str__(self):
        return self.__repr__()

    @abstractmethod
    def load(self) -> None:
        """
        This method *eagerly* loads all the data from the file into memory.

        Once this method has been called, subsequent data consumption should
        all draw upon this in-memory copy. However, if the data file is large,
        and especially if only part of the data is of interest, calling this
        method may not be the best approach. This all depends on the specifics
        of the subclass. A subclass may allow consuming the data and load parts
        of data in an "as-needed" or "streaming" fashion. In that approach,
        :meth:`__getitem__` and :meth:`__iter__` do not require this method to
        be called (although they may take advantage of the in-memory data if
        this method *has been called*).
        """
        raise NotImplementedError
(*args, **kwargs)
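The pickle-friendliness described above comes from keeping only the path (and loader) in the pickled state and dropping any loaded data, exactly the ``__getstate__``/``__setstate__`` pattern used by ``BiglistFileReader``. Below is a self-contained sketch of that pattern with a hypothetical ``LazyJsonReader`` (a JSON file stands in for a data file; not part of biglist):

```python
import json
import pickle
import tempfile
from pathlib import Path

class LazyJsonReader:
    """Lazy loader: cheap to pickle; data reloads lazily after unpickling."""

    def __init__(self, path: Path):
        self.path = path
        self._data = None

    def __getstate__(self):
        return self.path            # drop any loaded data from the pickle

    def __setstate__(self, path):
        self.path = path
        self._data = None           # will reload lazily when needed

    def load(self) -> None:
        if self._data is None:
            self._data = json.loads(self.path.read_text())

    def data(self):
        self.load()
        return self._data


path = Path(tempfile.mkdtemp()) / 'part.json'
path.write_text(json.dumps([10, 20, 30]))

reader = LazyJsonReader(path)
clone = pickle.loads(pickle.dumps(reader))  # lightweight: no data inside
print(clone.data())  # [10, 20, 30] -- loading happens only now
```

This is what makes it cheap to ship not-yet-loaded readers to worker processes: each worker loads its own copy of the file on first access.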
709,237
biglist._util
__getitem__
null
def __getitem__(self, index: int) -> Element:
    ...
(self, index: int) -> ~Element
709,239
biglist._util
__iter__
null
def __iter__(self) -> Iterator[Element]:
    # A reference, or naive, implementation.
    for i in range(self.__len__()):
        yield self[i]
(self) -> collections.abc.Iterator[~Element]
709,244
biglist._util
load
This method *eagerly* loads all the data from the file into memory. Once this method has been called, subsequent data consumption should all draw upon this in-memory copy. However, if the data file is large, and especially if only part of the data is of interest, calling this method may not be the best approach. This all depends on the specifics of the subclass. A subclass may allow consuming the data and load parts of data in an "as-needed" or "streaming" fashion. In that approach, :meth:`__getitem__` and :meth:`__iter__` do not require this method to be called (although they may take advantage of the in-memory data if this method *has been called*).
@abstractmethod
def load(self) -> None:
    """
    This method *eagerly* loads all the data from the file into memory.

    Once this method has been called, subsequent data consumption should all
    draw upon this in-memory copy. However, if the data file is large, and
    especially if only part of the data is of interest, calling this method may
    not be the best approach. This all depends on the specifics of the subclass.
    A subclass may allow consuming the data and load parts of data in an
    "as-needed" or "streaming" fashion. In that approach, :meth:`__getitem__`
    and :meth:`__iter__` do not require this method to be called (although they
    may take advantage of the in-memory data if this method *has been called*).
    """
    raise NotImplementedError
(self) -> NoneType
709,245
biglist._parquet
ParquetBatchData
``ParquetBatchData`` wraps a `pyarrow.Table`_ or `pyarrow.RecordBatch`_. The data is already in memory; this class does not involve file reading. :meth:`ParquetFileReader.data` and :meth:`ParquetFileReader.iter_batches` both return or yield ParquetBatchData. In addition, the method :meth:`columns` of this class returns a new object of this class. Objects of this class can be pickled.
class ParquetBatchData(Seq):
    """
    ``ParquetBatchData`` wraps a `pyarrow.Table`_ or `pyarrow.RecordBatch`_.
    The data is already in memory; this class does not involve file reading.

    :meth:`ParquetFileReader.data` and :meth:`ParquetFileReader.iter_batches`
    both return or yield ParquetBatchData. In addition, the method
    :meth:`columns` of this class returns a new object of this class.

    Objects of this class can be pickled.
    """

    def __init__(
        self,
        data: pyarrow.Table | pyarrow.RecordBatch,
    ):
        # `self.scalar_as_py` may be toggled anytime
        # and have its effect right away.
        self._data = data
        self.scalar_as_py = True
        """Indicate whether scalar values should be converted to Python types
        from `pyarrow`_ types."""
        self.num_rows = data.num_rows
        self.num_columns = data.num_columns
        self.column_names = data.schema.names

    def __repr__(self):
        return '<{} with {} rows, {} columns>'.format(
            self.__class__.__name__,
            self.num_rows,
            self.num_columns,
        )

    def __str__(self):
        return self.__repr__()

    def data(self) -> pyarrow.Table | pyarrow.RecordBatch:
        """Return the underlying `pyarrow`_ data."""
        return self._data

    def __len__(self) -> int:
        return self.num_rows

    def __getitem__(self, idx: int):
        """
        Get one row (or "record").

        If the object has a single column, then return its value in the
        specified row. If the object has multiple columns, return a dict with
        column names as keys. The values are converted to Python builtin types
        if :data:`scalar_as_py` is ``True``.

        Parameters
        ----------
        idx
            Row index in this batch.
            Negative value counts from the end as expected.
        """
        if idx < 0:
            idx = self.num_rows + idx
        if idx < 0 or idx >= self.num_rows:
            raise IndexError(idx)

        if self.num_columns == 1:
            z = self._data.column(0)[idx]
            if self.scalar_as_py:
                return z.as_py()
            return z

        z = {col: self._data.column(col)[idx] for col in self.column_names}
        if self.scalar_as_py:
            return {k: v.as_py() for k, v in z.items()}
        return z

    def __iter__(self):
        """
        Iterate over rows.
        The type of yielded individual elements is the same as :meth:`__getitem__`.
        """
        if self.num_columns == 1:
            if self.scalar_as_py:
                yield from (v.as_py() for v in self._data.column(0))
            else:
                yield from self._data.column(0)
        else:
            names = self.column_names
            if self.scalar_as_py:
                for row in zip(*self._data.columns):
                    yield dict(zip(names, (v.as_py() for v in row)))
            else:
                for row in zip(*self._data.columns):
                    yield dict(zip(names, row))

    def columns(self, cols: Sequence[str]) -> ParquetBatchData:
        """
        Return a new :class:`ParquetBatchData` object that only contains
        the specified columns.

        The columns of interest have to be within currently available columns.
        In other words, a series of calls to this method would incrementally
        narrow down the selection of columns. (Note this returns a new
        :class:`ParquetBatchData`, hence one can call :meth:`columns` again on
        the returned object.)

        This method "slices" the data by columns, in contrast to other data
        access methods that select rows.

        Parameters
        ----------
        cols
            Names of the columns to select.

        Examples
        --------
        >>> obj = ParquetBatchData(parquet_table)  # doctest: +SKIP
        >>> obj1 = obj.columns(['a', 'b', 'c'])  # doctest: +SKIP
        >>> print(obj1[2])  # doctest: +SKIP
        >>> obj2 = obj1.columns(['b', 'c'])  # doctest: +SKIP
        >>> print(obj2[3])  # doctest: +SKIP
        >>> obj3 = obj.columns(['d'])  # doctest: +SKIP
        >>> for v in obj:  # doctest: +SKIP
        >>>     print(v)  # doctest: +SKIP
        """
        assert len(set(cols)) == len(cols)  # no repeat values

        if all(col in self.column_names for col in cols):
            if len(cols) == len(self.column_names):
                return self
        else:
            cc = [col for col in cols if col not in self.column_names]
            raise ValueError(
                f'cannot select the columns {cc} because they are not in existing set of columns'
            )

        z = self.__class__(self._data.select(cols))
        z.scalar_as_py = self.scalar_as_py
        return z

    def column(self, idx_or_name: int | str) -> pyarrow.Array | pyarrow.ChunkedArray:
        """
        Select a single column specified by name or index.

        If ``self._data`` is `pyarrow.Table`_, return `pyarrow.ChunkedArray`_.
        If ``self._data`` is `pyarrow.RecordBatch`_, return `pyarrow.Array`_.
        """
        return self._data.column(idx_or_name)
(data: 'pyarrow.Table | pyarrow.RecordBatch')
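The row-access convention described above (single column → the value itself; multiple columns → a dict keyed by column name, with negative indices counting from the end) can be shown without pyarrow by modelling a batch as plain Python columns. ``row_at`` below is a hypothetical helper, illustrative only; it mirrors the index handling and dict-building of ``ParquetBatchData.__getitem__``:

```python
def row_at(columns: dict[str, list], idx: int):
    """Mimic ParquetBatchData.__getitem__ on plain-Python columns."""
    num_rows = len(next(iter(columns.values())))
    if idx < 0:
        idx += num_rows
    if idx < 0 or idx >= num_rows:
        raise IndexError(idx)
    if len(columns) == 1:
        (col,) = columns.values()
        return col[idx]           # single column: return the value itself
    return {name: col[idx] for name, col in columns.items()}


batch = {'a': [1, 2, 3], 'b': ['x', 'y', 'z']}
print(row_at(batch, 1))             # {'a': 2, 'b': 'y'}
print(row_at({'a': [1, 2, 3]}, -1)) # 3
```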
709,246
biglist._parquet
__getitem__
Get one row (or "record").

If the object has a single column, then return its value in the specified row. If the object has multiple columns, return a dict with column names as keys. The values are converted to Python builtin types if :data:`scalar_as_py` is ``True``.

Parameters
----------
idx
    Row index in this batch. Negative value counts from the end as expected.
def __getitem__(self, idx: int):
    """
    Get one row (or "record").

    If the object has a single column, then return its value in the specified
    row. If the object has multiple columns, return a dict with column names as
    keys. The values are converted to Python builtin types if
    :data:`scalar_as_py` is ``True``.

    Parameters
    ----------
    idx
        Row index in this batch.
        Negative value counts from the end as expected.
    """
    if idx < 0:
        idx = self.num_rows + idx
    if idx < 0 or idx >= self.num_rows:
        raise IndexError(idx)

    if self.num_columns == 1:
        z = self._data.column(0)[idx]
        if self.scalar_as_py:
            return z.as_py()
        return z

    z = {col: self._data.column(col)[idx] for col in self.column_names}
    if self.scalar_as_py:
        return {k: v.as_py() for k, v in z.items()}
    return z
(self, idx: int)
709,247
biglist._parquet
__init__
null
def __init__(
    self,
    data: pyarrow.Table | pyarrow.RecordBatch,
):
    # `self.scalar_as_py` may be toggled anytime
    # and have its effect right away.
    self._data = data
    self.scalar_as_py = True
    """Indicate whether scalar values should be converted to Python types
    from `pyarrow`_ types."""
    self.num_rows = data.num_rows
    self.num_columns = data.num_columns
    self.column_names = data.schema.names
(self, data: pyarrow.lib.Table | pyarrow.lib.RecordBatch)
709,248
biglist._parquet
__iter__
Iterate over rows. The type of yielded individual elements is the same as :meth:`__getitem__`.
def __iter__(self):
    """
    Iterate over rows.
    The type of yielded individual elements is the same as :meth:`__getitem__`.
    """
    if self.num_columns == 1:
        if self.scalar_as_py:
            yield from (v.as_py() for v in self._data.column(0))
        else:
            yield from self._data.column(0)
    else:
        names = self.column_names
        if self.scalar_as_py:
            for row in zip(*self._data.columns):
                yield dict(zip(names, (v.as_py() for v in row)))
        else:
            for row in zip(*self._data.columns):
                yield dict(zip(names, row))
(self)
709,249
biglist._parquet
__len__
null
def __len__(self) -> int:
    return self.num_rows
(self) -> int
709,250
biglist._parquet
__repr__
null
def __repr__(self):
    return '<{} with {} rows, {} columns>'.format(
        self.__class__.__name__,
        self.num_rows,
        self.num_columns,
    )
(self)
709,253
biglist._parquet
column
Select a single column specified by name or index. If ``self._data`` is `pyarrow.Table`_, return `pyarrow.ChunkedArray`_. If ``self._data`` is `pyarrow.RecordBatch`_, return `pyarrow.Array`_.
def column(self, idx_or_name: int | str) -> pyarrow.Array | pyarrow.ChunkedArray:
    """
    Select a single column specified by name or index.

    If ``self._data`` is `pyarrow.Table`_, return `pyarrow.ChunkedArray`_.
    If ``self._data`` is `pyarrow.RecordBatch`_, return `pyarrow.Array`_.
    """
    return self._data.column(idx_or_name)
(self, idx_or_name: int | str) -> pyarrow.lib.Array | pyarrow.lib.ChunkedArray
709,254
biglist._parquet
columns
Return a new :class:`ParquetBatchData` object that only contains the specified columns.

The columns of interest have to be within currently available columns. In other words, a series of calls to this method would incrementally narrow down the selection of columns. (Note this returns a new :class:`ParquetBatchData`, hence one can call :meth:`columns` again on the returned object.)

This method "slices" the data by columns, in contrast to other data access methods that select rows.

Parameters
----------
cols
    Names of the columns to select.

Examples
--------
>>> obj = ParquetBatchData(parquet_table)  # doctest: +SKIP
>>> obj1 = obj.columns(['a', 'b', 'c'])  # doctest: +SKIP
>>> print(obj1[2])  # doctest: +SKIP
>>> obj2 = obj1.columns(['b', 'c'])  # doctest: +SKIP
>>> print(obj2[3])  # doctest: +SKIP
>>> obj3 = obj.columns(['d'])  # doctest: +SKIP
>>> for v in obj:  # doctest: +SKIP
>>>     print(v)  # doctest: +SKIP
def columns(self, cols: Sequence[str]) -> ParquetBatchData:
    """
    Return a new :class:`ParquetBatchData` object that only contains
    the specified columns.

    The columns of interest have to be within currently available columns.
    In other words, a series of calls to this method would incrementally
    narrow down the selection of columns.
    (Note this returns a new :class:`ParquetBatchData`,
    hence one can call :meth:`columns` again on the returned object.)

    This method "slices" the data by columns, in contrast to other
    data access methods that select rows.

    Parameters
    ----------
    cols
        Names of the columns to select.

    Examples
    --------
    >>> obj = ParquetBatchData(parquet_table)  # doctest: +SKIP
    >>> obj1 = obj.columns(['a', 'b', 'c'])  # doctest: +SKIP
    >>> print(obj1[2])  # doctest: +SKIP
    >>> obj2 = obj1.columns(['b', 'c'])  # doctest: +SKIP
    >>> print(obj2[3])  # doctest: +SKIP
    >>> obj3 = obj.columns(['d'])  # doctest: +SKIP
    >>> for v in obj:  # doctest: +SKIP
    >>>     print(v)  # doctest: +SKIP
    """
    assert len(set(cols)) == len(cols)  # no repeat values

    if all(col in self.column_names for col in cols):
        if len(cols) == len(self.column_names):
            return self
    else:
        cc = [col for col in cols if col not in self.column_names]
        raise ValueError(
            f'cannot select the columns {cc} because they are not in existing set of columns'
        )

    z = self.__class__(self._data.select(cols))
    z.scalar_as_py = self.scalar_as_py
    return z
(self, cols: collections.abc.Sequence[str]) -> biglist._parquet.ParquetBatchData
709,255
biglist._parquet
data
Return the underlying `pyarrow`_ data.
def data(self) -> pyarrow.Table | pyarrow.RecordBatch:
    """Return the underlying `pyarrow`_ data."""
    return self._data
(self) -> pyarrow.lib.Table | pyarrow.lib.RecordBatch
709,256
biglist._biglist
ParquetBiglist
``ParquetBiglist`` defines a kind of "external biglist", that is, it points to pre-existing Parquet files and provides facilities to read them. As long as you use a ParquetBiglist object to read, it is assumed that the dataset (all the data files) have not changed since the object was created by :meth:`new`.
class ParquetBiglist(BiglistBase):
    """
    ``ParquetBiglist`` defines a kind of "external biglist", that is,
    it points to pre-existing Parquet files and provides facilities to read them.

    As long as you use a ParquetBiglist object to read, it is assumed that
    the dataset (all the data files) have not changed since the object was created
    by :meth:`new`.
    """

    @classmethod
    def new(
        cls,
        data_path: PathType | Sequence[PathType],
        path: PathType | None = None,
        *,
        suffix: str = '.parquet',
        **kwargs,
    ) -> ParquetBiglist:
        """
        This classmethod gathers info of the specified data files and
        saves the info to facilitate reading the data files.
        The data files remain "external" to the :class:`ParquetBiglist` object;
        the "data" persisted and managed by the ParquetBiglist object
        are the meta info about the Parquet data files.

        If the number of data files is small, it's feasible to create a temporary
        object of this class (by leaving ``path`` at the default value ``None``)
        "on-the-fly" for one-time use.

        Parameters
        ----------
        path
            Passed on to :meth:`BiglistBase.new` of :class:`BiglistBase`.
        data_path
            Parquet file(s) or folder(s) containing Parquet files.

            If this is a single path, then it's either a Parquet file or a directory.
            If this is a list, each element is either a Parquet file or a directory;
            there can be a mix of files and directories.
            Directories are traversed recursively for Parquet files.
            The paths can be local, or in the cloud, or a mix of both.

            Once the info of all Parquet files are gathered,
            their order is fixed as far as this :class:`ParquetBiglist` is concerned.
            The data sequence represented by this ParquetBiglist follows this
            order of the files. The order is determined as follows:

                The order of the entries in ``data_path`` is preserved; if any entry is a
                directory, the files therein (recursively) are sorted by the string
                value of each file's full path.

        suffix
            Only files with this suffix will be included.
            To include all files, use ``suffix='*'``.
        **kwargs
            additional arguments are passed on to :meth:`__init__`.
        """
        if isinstance(data_path, (str, Path, Upath)):
            # TODO: in py 3.10, we will be able to do `isinstance(data_path, PathType)`
            data_path = [resolve_path(data_path)]
        else:
            data_path = [resolve_path(p) for p in data_path]

        def get_file_meta(p: Upath):
            ff = ParquetFileReader.load_file(p)
            meta = ff.metadata
            return {
                'path': str(p),  # str of full path
                'num_rows': meta.num_rows,
                # "row_groups_num_rows": [
                #     meta.row_group(k).num_rows for k in range(meta.num_row_groups)
                # ],
            }

        pool = get_global_thread_pool()
        tasks = []
        for p in data_path:
            if p.is_file():
                if suffix == '*' or p.name.endswith(suffix):
                    tasks.append(pool.submit(get_file_meta, p))
            else:
                tt = []
                for pp in p.riterdir():
                    if suffix == '*' or pp.name.endswith(suffix):
                        tt.append((str(pp), pool.submit(get_file_meta, pp)))
                tt.sort()
                for p, t in tt:
                    tasks.append(t)
        assert tasks

        datafiles = []
        for k, t in enumerate(tasks):
            datafiles.append(t.result())
            if (k + 1) % 1000 == 0:
                logger.info('processed %d files', k + 1)

        datafiles_cumlength = list(
            itertools.accumulate(v['num_rows'] for v in datafiles)
        )

        obj = super().new(path, **kwargs)  # type: ignore
        obj.info['datapath'] = [str(p) for p in data_path]

        # Removed in 0.7.4
        # obj.info["datafiles"] = datafiles
        # obj.info["datafiles_cumlength"] = datafiles_cumlength

        # Added in 0.7.4
        data_files_info = [
            (a['path'], a['num_rows'], b)
            for a, b in zip(datafiles, datafiles_cumlength)
        ]
        obj.info['data_files_info'] = data_files_info

        obj.info['storage_format'] = 'parquet'
        obj.info['storage_version'] = 1
        # `storage_version` is a flag for certain breaking changes in the implementation,
        # such that certain parts of the code (mainly concerning I/O) need to
        # branch into different treatments according to the version.
        # This has little relation to `storage_format`.
        # version 1 designator introduced in version 0.7.4.
        # prior to 0.7.4 it is absent, and considered 0.

        obj._info_file.write_json(obj.info, overwrite=True)

        return obj

    def __init__(self, *args, **kwargs):
        """Please see doc of the base class."""
        super().__init__(*args, **kwargs)
        self.keep_files: bool = True
        """Indicates whether the meta info persisted by this object
        should be kept or deleted when this object is garbage-collected.

        This does *not* affect the external Parquet data files.
        """

        # For back compat. Added in 0.7.4.
        if self.info and 'data_files_info' not in self.info:
            # This is not called by ``new``, instead is opening an existing dataset
            assert self.storage_version == 0
            data_files_info = [
                (a['path'], a['num_rows'], b)
                for a, b in zip(
                    self.info['datafiles'], self.info['datafiles_cumlength']
                )
            ]
            self.info['data_files_info'] = data_files_info
            with self._info_file.lock() as ff:
                ff.write_json(self.info, overwrite=True)

    def __repr__(self):
        return f"<{self.__class__.__name__} at '{self.path}' with {len(self)} records in {len(self.files)} data file(s) stored at {self.info['datapath']}>"

    @property
    def storage_version(self) -> int:
        return self.info.get('storage_version', 0)

    @property
    def files(self):
        # This method should be cheap to call.
        return ParquetFileSeq(
            self.path,
            self.info['data_files_info'],
        )
(*args, **kwargs)
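The bookkeeping in ``new`` boils down to pairing each file's row count with a running total computed by ``itertools.accumulate``, which is what ends up in ``info['data_files_info']``. A standalone sketch with made-up file names (not data from any real dataset):

```python
import itertools

# Hypothetical per-file metadata, as ``get_file_meta`` would gather it.
datafiles = [
    {'path': 'a.parquet', 'num_rows': 100},
    {'path': 'b.parquet', 'num_rows': 250},
    {'path': 'c.parquet', 'num_rows': 50},
]

# Running total of rows up to and including each file.
datafiles_cumlength = list(itertools.accumulate(v['num_rows'] for v in datafiles))
print(datafiles_cumlength)  # [100, 350, 400]

# The (path, num_rows, cumulative_num_rows) triples persisted in ``info``:
data_files_info = [
    (a['path'], a['num_rows'], b)
    for a, b in zip(datafiles, datafiles_cumlength)
]
print(data_files_info[1])  # ('b.parquet', 250, 350)
```

The cumulative counts are what later allow a global row index to be mapped to a specific file without scanning every file.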
709,257
biglist._biglist
__del__
null
def __del__(self):
    if getattr(self, 'keep_files', True) is False:
        self.destroy(concurrent=False)
(self)
709,258
biglist._biglist
__getitem__
Access a data item by its index; negative index works as expected.
def __getitem__(self, idx: int) -> Element:
    """
    Access a data item by its index; negative index works as expected.
    """
    # This is not optimized for speed.
    # For better speed, use ``__iter__``.
    if not isinstance(idx, int):
        raise TypeError(
            f'{self.__class__.__name__} indices must be integers, not {type(idx).__name__}'
        )

    if idx >= 0 and self._read_buffer_file_idx is not None:
        n1, n2 = self._read_buffer_item_range  # type: ignore
        if n1 <= idx < n2:
            return self._read_buffer[idx - n1]  # type: ignore

    files = self.files
    if files:
        data_files_cumlength = [v[-1] for v in files.data_files_info]
        length = data_files_cumlength[-1]
        nfiles = len(files)
    else:
        data_files_cumlength = []
        length = 0
        nfiles = 0
    idx = range(length)[idx]

    if idx >= length:
        raise IndexError(idx)

    ifile0 = 0
    ifile1 = nfiles
    if self._read_buffer_file_idx is not None:
        n1, n2 = self._read_buffer_item_range  # type: ignore
        if idx < n1:
            ifile1 = self._read_buffer_file_idx  # pylint: disable=access-member-before-definition
        elif idx < n2:
            return self._read_buffer[idx - n1]  # type: ignore
        else:
            ifile0 = self._read_buffer_file_idx + 1  # pylint: disable=access-member-before-definition

    # Now find the data file that contains the target item.
    ifile = bisect.bisect_right(data_files_cumlength, idx, lo=ifile0, hi=ifile1)
    # `ifile`: index of data file that contains the target element.
    # `n`: total length before `ifile`.
    if ifile == 0:
        n = 0
    else:
        n = data_files_cumlength[ifile - 1]
    self._read_buffer_item_range = (n, data_files_cumlength[ifile])

    data = files[ifile]
    self._read_buffer_file_idx = ifile
    self._read_buffer = data
    return data[idx - n]
(self, idx: int) -> ~Element
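The index lookup above hinges on ``bisect.bisect_right`` over the cumulative row counts: it finds the first file whose running total exceeds the requested global index. A minimal standalone illustration (the helper ``locate`` is hypothetical, introduced only for this sketch):

```python
import bisect

# Cumulative row counts of three data files holding 100, 250, and 50 rows.
data_files_cumlength = [100, 350, 400]

def locate(idx: int) -> tuple[int, int]:
    """Return (file index, index within that file) for global index ``idx``."""
    ifile = bisect.bisect_right(data_files_cumlength, idx)
    # Total number of rows in all files before ``ifile``.
    n = 0 if ifile == 0 else data_files_cumlength[ifile - 1]
    return ifile, idx - n

print(locate(0))    # (0, 0)   first row of the first file
print(locate(100))  # (1, 0)   first row of the second file
print(locate(399))  # (2, 49)  last row of the last file
```

``bisect_right`` (rather than ``bisect_left``) is the correct choice here because a global index equal to a cumulative count belongs to the *next* file.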
709,260
biglist._biglist
__init__
Please see doc of the base class.
def __init__(self, *args, **kwargs):
    """Please see doc of the base class."""
    super().__init__(*args, **kwargs)
    self.keep_files: bool = True
    """Indicates whether the meta info persisted by this object
    should be kept or deleted when this object is garbage-collected.

    This does *not* affect the external Parquet data files.
    """

    # For back compat. Added in 0.7.4.
    if self.info and 'data_files_info' not in self.info:
        # This is not called by ``new``, instead is opening an existing dataset
        assert self.storage_version == 0
        data_files_info = [
            (a['path'], a['num_rows'], b)
            for a, b in zip(
                self.info['datafiles'], self.info['datafiles_cumlength']
            )
        ]
        self.info['data_files_info'] = data_files_info
        with self._info_file.lock() as ff:
            ff.write_json(self.info, overwrite=True)
(self, *args, **kwargs)
709,261
biglist._biglist
__iter__
Iterate over all the elements. When there are multiple data files, as the data in one file is being yielded, the next file(s) may be pre-loaded in background threads. For this reason, although the following is equivalent in the final result:: for file in self.files: for item in file: ... use item ... it could be less efficient than iterating over `self` directly, as in :: for item in self: ... use item ...
def __iter__(self) -> Iterator[Element]:
    """
    Iterate over all the elements.

    When there are multiple data files, as the data in one file is being yielded,
    the next file(s) may be pre-loaded in background threads.
    For this reason, although the following is equivalent in the final result::

        for file in self.files:
            for item in file:
                ... use item ...

    it could be less efficient than iterating over `self` directly, as in

    ::

        for item in self:
            ... use item ...
    """
    files = self.files
    if not files:
        return

    if len(files) == 1:
        z = files[0]
        z.load()
        yield from z
    else:
        ndatafiles = len(files)

        max_workers = min(self._n_read_threads, ndatafiles)
        tasks = queue.Queue(max_workers)
        executor = self._get_thread_pool()

        def _read_file(idx):
            z = files[idx]
            z.load()
            return z

        for i in range(max_workers):
            t = executor.submit(_read_file, i)
            tasks.put(t)
        nfiles_queued = max_workers

        for _ in range(ndatafiles):
            t = tasks.get()
            file_reader = t.result()

            # Before starting to yield data, take care of the
            # downloading queue to keep it busy.
            if nfiles_queued < ndatafiles:
                # `nfiles_queued` is the index of the next file to download.
                t = executor.submit(_read_file, nfiles_queued)
                tasks.put(t)
                nfiles_queued += 1

            yield from file_reader
(self) -> collections.abc.Iterator[~Element]
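The prefetching pattern in ``__iter__`` can be sketched with only the standard library: a bounded queue of futures keeps up to ``n_read_threads`` file loads in flight while earlier files are consumed, and yielded order still follows file order. ``load_file`` and ``iter_all`` below are stand-ins for this sketch, not biglist APIs:

```python
import queue
import time
from concurrent.futures import ThreadPoolExecutor

def load_file(idx: int) -> list[int]:
    """Stand-in for loading one data file; returns three fake rows."""
    time.sleep(0.01)  # simulate I/O latency
    return [idx * 10 + i for i in range(3)]

def iter_all(nfiles: int, n_read_threads: int = 3):
    max_workers = min(n_read_threads, nfiles)
    tasks: queue.Queue = queue.Queue(max_workers)
    with ThreadPoolExecutor(max_workers) as executor:
        # Prime the pipeline with the first few files.
        for i in range(max_workers):
            tasks.put(executor.submit(load_file, i))
        nfiles_queued = max_workers
        for _ in range(nfiles):
            data = tasks.get().result()
            # Refill the queue before yielding, so downloads stay busy.
            if nfiles_queued < nfiles:
                tasks.put(executor.submit(load_file, nfiles_queued))
                nfiles_queued += 1
            yield from data

print(list(iter_all(3)))  # [0, 1, 2, 10, 11, 12, 20, 21, 22]
```

The bounded queue is the key design choice: it caps memory at roughly ``n_read_threads`` loaded files, no matter how many files the dataset has.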
709,262
biglist._biglist
__len__
Number of data items in this biglist. This is an alias to :meth:`num_data_items`.
def __len__(self) -> int:
    """
    Number of data items in this biglist.
    This is an alias to :meth:`num_data_items`.
    """
    return self.num_data_items
(self) -> int
709,263
biglist._biglist
__repr__
null
def __repr__(self):
    return f"<{self.__class__.__name__} at '{self.path}' with {len(self)} records in {len(self.files)} data file(s) stored at {self.info['datapath']}>"
(self)
709,269
biglist._parquet
ParquetFileReader
null
class ParquetFileReader(FileReader):
    @classmethod
    def get_gcsfs(cls, *, good_for_seconds=600) -> GcsFileSystem:
        """
        Obtain a `pyarrow.fs.GcsFileSystem`_ object with credentials given
        so that the GCP default process of inferring credentials (which involves
        env vars and file reading etc) will not be triggered.

        This is provided under the (un-verified) assumption that the
        default credential inference process is a high overhead.
        """
        cls._GCP_PROJECT_ID, cls._GCP_CREDENTIALS, renewed = get_google_auth(
            project_id=getattr(cls, '_GCP_PROJECT_ID', None),
            credentials=getattr(cls, '_GCP_CREDENTIALS', None),
            valid_for_seconds=good_for_seconds,
        )
        if renewed or getattr(cls, '_GCSFS', None) is None:
            fs = GcsFileSystem(
                access_token=cls._GCP_CREDENTIALS.token,
                credential_token_expiration=cls._GCP_CREDENTIALS.expiry,
            )
            cls._GCSFS = fs
        return cls._GCSFS

    @classmethod
    def load_file(cls, path: Upath) -> ParquetFile:
        """
        This reads *meta* info and constructs a ``pyarrow.parquet.ParquetFile`` object.
        This does not load the entire file.
        See :meth:`load` for eager loading.

        Parameters
        ----------
        path
            Path of the file.
        """
        ff, pp = FileSystem.from_uri(str(path))
        if isinstance(ff, GcsFileSystem):
            ff = cls.get_gcsfs()
        file = ParquetFile(pp, filesystem=ff)
        Finalize(file, file.reader.close)
        # NOTE: can not use
        #
        #   Finalize(file, file.close, kwargs={'force': True})
        #
        # because the instance method `file.close` can't be used as the callback---the
        # object `file` is no longer available at that time.
        #
        # See https://github.com/apache/arrow/issues/35318
        return file

    def __init__(self, path: PathType):
        """
        Parameters
        ----------
        path
            Path of a Parquet file.
        """
        self.path: Upath = resolve_path(path)
        self._reset()

    def _reset(self):
        self._file: ParquetFile | None = None
        self._data: ParquetBatchData | None = None

        self._row_groups_num_rows = None
        self._row_groups_num_rows_cumsum = None
        self._row_groups: None | list[ParquetBatchData] = None

        self._column_names = None
        self._columns = {}
        self._getitem_last_row_group = None

        self._scalar_as_py = None
        self.scalar_as_py = True

    def __getstate__(self):
        return (self.path,)

    def __setstate__(self, data):
        self.path = data[0]
        self._reset()

    @property
    def scalar_as_py(self) -> bool:
        """
        ``scalar_as_py`` controls whether the values returned by :meth:`__getitem__`
        (or indirectly by :meth:`__iter__`) are converted from a `pyarrow.Scalar`_ type
        such as `pyarrow.lib.StringScalar`_ to a Python builtin type such as ``str``.

        This property can be toggled anytime to take effect until it is toggled again.

        :getter: Returns this property's value.
        :setter: Sets this property's value.
        """
        if self._scalar_as_py is None:
            self._scalar_as_py = True
        return self._scalar_as_py

    @scalar_as_py.setter
    def scalar_as_py(self, value: bool):
        self._scalar_as_py = bool(value)
        if self._data is not None:
            self._data.scalar_as_py = self._scalar_as_py
        if self._row_groups:
            for r in self._row_groups:
                if r is not None:
                    r.scalar_as_py = self._scalar_as_py

    def __len__(self) -> int:
        return self.num_rows

    def load(self) -> None:
        """Eagerly read the whole file into memory as a table."""
        if self._data is None:
            self._data = ParquetBatchData(
                self.file.read(columns=self._column_names, use_threads=True),
            )
            self._data.scalar_as_py = self.scalar_as_py
            if self.num_row_groups == 1:
                assert self._row_groups is None
                self._row_groups = [self._data]

    @property
    def file(self) -> ParquetFile:
        """Return a `pyarrow.parquet.ParquetFile`_ object.

        Upon initiation of a :class:`ParquetFileReader` object,
        the file is not read at all. When this property is requested,
        the file is accessed to construct a `pyarrow.parquet.ParquetFile`_ object.
        """
        if self._file is None:
            self._file = self.load_file(self.path)
        return self._file

    @property
    def metadata(self) -> FileMetaData:
        return self.file.metadata

    @property
    def num_rows(self) -> int:
        return self.metadata.num_rows

    @property
    def num_row_groups(self) -> int:
        return self.metadata.num_row_groups

    @property
    def num_columns(self) -> int:
        if self._column_names:
            return len(self._column_names)
        return self.metadata.num_columns

    @property
    def column_names(self) -> list[str]:
        if self._column_names:
            return self._column_names
        return self.metadata.schema.names

    def data(self) -> ParquetBatchData:
        """Return the entire data in the file."""
        self.load()
        return self._data

    def _locate_row_group_for_item(self, idx: int):
        # Assuming user is checking neighboring items,
        # then the requested item may be in the same row-group
        # as the item requested last time.
        if self._row_groups_num_rows is None:
            meta = self.metadata
            self._row_groups_num_rows = [
                meta.row_group(i).num_rows for i in range(self.num_row_groups)
            ]
            self._row_groups_num_rows_cumsum = list(
                itertools.accumulate(self._row_groups_num_rows)
            )
        igrp, idx_in_grp, group_info = locate_idx_in_chunked_seq(
            idx, self._row_groups_num_rows_cumsum, self._getitem_last_row_group
        )
        self._getitem_last_row_group = group_info
        return igrp, idx_in_grp

    def __getitem__(self, idx: int):
        """
        Get one row (or "record").

        If the object has a single column, then return its value in the specified row.
        If the object has multiple columns, return a dict with column names as keys.
        The values are converted to Python builtin types if :data:`scalar_as_py`
        is ``True``.

        Parameters
        ----------
        idx
            Row index in this file.
            Negative value counts from the end as expected.
        """
        if idx < 0:
            idx = self.num_rows + idx
        if idx < 0 or idx >= self.num_rows:
            raise IndexError(idx)

        if self._data is not None:
            return self._data[idx]

        igrp, idx_in_row_group = self._locate_row_group_for_item(idx)
        row_group = self.row_group(igrp)
        return row_group[idx_in_row_group]

    def __iter__(self):
        """
        Iterate over rows.
        The type of yielded individual elements is the same as the return of :meth:`__getitem__`.
        """
        if self._data is None:
            for batch in self.iter_batches():
                yield from batch
        else:
            yield from self._data

    def iter_batches(self, batch_size=10_000) -> Iterator[ParquetBatchData]:
        if self._data is None:
            for batch in self.file.iter_batches(
                batch_size=batch_size,
                columns=self._column_names,
                use_threads=True,
            ):
                z = ParquetBatchData(batch)
                z.scalar_as_py = self.scalar_as_py
                yield z
        else:
            for batch in self._data.data().to_batches(batch_size):
                z = ParquetBatchData(batch)
                z.scalar_as_py = self.scalar_as_py
                yield z

    def row_group(self, idx: int) -> ParquetBatchData:
        """
        Parameters
        ----------
        idx
            Index of the row group of interest.
        """
        assert 0 <= idx < self.num_row_groups
        if self._row_groups is None:
            self._row_groups = [None] * self.num_row_groups
        if self._row_groups[idx] is None:
            z = ParquetBatchData(
                self.file.read_row_group(idx, columns=self._column_names),
            )
            z.scalar_as_py = self.scalar_as_py
            self._row_groups[idx] = z
            if self.num_row_groups == 1:
                assert self._data is None
                self._data = self._row_groups[0]
        return self._row_groups[idx]

    def columns(self, cols: Sequence[str]) -> ParquetFileReader:
        """
        Return a new :class:`ParquetFileReader` object that will only load
        the specified columns.

        The columns of interest have to be within currently available columns.
        In other words, a series of calls to this method would incrementally
        narrow down the selection of columns.
        (Note this returns a new :class:`ParquetFileReader`,
        hence one can call :meth:`columns` again on the returned object.)

        This method "slices" the data by columns, in contrast to other
        data access methods that select rows.

        Parameters
        ----------
        cols
            Names of the columns to select.

        Examples
        --------
        >>> obj = ParquetFileReader('file_path')  # doctest: +SKIP
        >>> obj1 = obj.columns(['a', 'b', 'c'])  # doctest: +SKIP
        >>> print(obj1[2])  # doctest: +SKIP
        >>> obj2 = obj1.columns(['b', 'c'])  # doctest: +SKIP
        >>> print(obj2[3])  # doctest: +SKIP
        >>> obj3 = obj.columns(['d'])  # doctest: +SKIP
        >>> for v in obj:  # doctest: +SKIP
        >>>     print(v)  # doctest: +SKIP
        """
        assert len(set(cols)) == len(cols)  # no repeat values

        if self._column_names:
            if all(col in self._column_names for col in cols):
                if len(cols) == len(self._column_names):
                    return self
            else:
                cc = [col for col in cols if col not in self._column_names]
                raise ValueError(
                    f'cannot select the columns {cc} because they are not in existing set of columns'
                )

        obj = self.__class__(self.path)
        obj.scalar_as_py = self.scalar_as_py
        obj._file = self._file
        obj._row_groups_num_rows = self._row_groups_num_rows
        obj._row_groups_num_rows_cumsum = self._row_groups_num_rows_cumsum
        if self._row_groups:
            obj._row_groups = [
                None if v is None else v.columns(cols) for v in self._row_groups
            ]
        if self._data is not None:
            obj._data = self._data.columns(cols)
        # TODO: also carry over `self._columns`?
        obj._column_names = cols
        return obj

    def column(self, idx_or_name: int | str) -> pyarrow.ChunkedArray:
        """Select a single column.

        Note: while :meth:`columns` returns a new :class:`ParquetFileReader`,
        :meth:`column` returns a `pyarrow.ChunkedArray`_.
        """
        z = self._columns.get(idx_or_name)
        if z is not None:
            return z
        if self._data is not None:
            return self._data.column(idx_or_name)
        if isinstance(idx_or_name, int):
            idx = idx_or_name
            name = self.column_names[idx]
        else:
            name = idx_or_name
            idx = self.column_names.index(name)
        z = self.file.read(columns=[name]).column(name)
        self._columns[idx] = z
        self._columns[name] = z
        return z
(path: 'PathType')
709,270
biglist._parquet
__getitem__
Get one row (or "record"). If the object has a single column, then return its value in the specified row. If the object has multiple columns, return a dict with column names as keys. The values are converted to Python builtin types if :data:`scalar_as_py` is ``True``. Parameters ---------- idx Row index in this file. Negative value counts from the end as expected.
def __getitem__(self, idx: int):
    """
    Get one row (or "record").

    If the object has a single column, then return its value in the specified row.
    If the object has multiple columns, return a dict with column names as keys.
    The values are converted to Python builtin types if :data:`scalar_as_py`
    is ``True``.

    Parameters
    ----------
    idx
        Row index in this file.
        Negative value counts from the end as expected.
    """
    if idx < 0:
        idx = self.num_rows + idx
    if idx < 0 or idx >= self.num_rows:
        raise IndexError(idx)

    if self._data is not None:
        return self._data[idx]

    igrp, idx_in_row_group = self._locate_row_group_for_item(idx)
    row_group = self.row_group(igrp)
    return row_group[idx_in_row_group]
(self, idx: int)
709,272
biglist._parquet
__init__
Parameters ---------- path Path of a Parquet file.
def __init__(self, path: PathType):
    """
    Parameters
    ----------
    path
        Path of a Parquet file.
    """
    self.path: Upath = resolve_path(path)
    self._reset()
(self, path: Union[str, pathlib.Path, upathlib._upath.Upath])
709,273
biglist._parquet
__iter__
Iterate over rows. The type of yielded individual elements is the same as the return of :meth:`__getitem__`.
def __iter__(self):
    """
    Iterate over rows.
    The type of yielded individual elements is the same as the return of :meth:`__getitem__`.
    """
    if self._data is None:
        for batch in self.iter_batches():
            yield from batch
    else:
        yield from self._data
(self)
709,276
biglist._parquet
__setstate__
null
def __setstate__(self, data):
    self.path = data[0]
    self._reset()
(self, data)
709,279
biglist._parquet
_locate_row_group_for_item
null
def _locate_row_group_for_item(self, idx: int):
    # Assuming user is checking neighboring items,
    # then the requested item may be in the same row-group
    # as the item requested last time.
    if self._row_groups_num_rows is None:
        meta = self.metadata
        self._row_groups_num_rows = [
            meta.row_group(i).num_rows for i in range(self.num_row_groups)
        ]
        self._row_groups_num_rows_cumsum = list(
            itertools.accumulate(self._row_groups_num_rows)
        )
    igrp, idx_in_grp, group_info = locate_idx_in_chunked_seq(
        idx, self._row_groups_num_rows_cumsum, self._getitem_last_row_group
    )
    self._getitem_last_row_group = group_info
    return igrp, idx_in_grp
(self, idx: int)
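``locate_idx_in_chunked_seq`` is defined in ``biglist._util`` and not shown in this excerpt; the following is a simplified stand-in (name, signature, and return shape are assumptions inferred from this call site) showing the idea: cache the bounds of the last-hit row group so that lookups of neighboring indices skip the bisect entirely.

```python
import bisect

def locate_idx_in_chunks(idx, cumsum, last=None):
    """
    Return (chunk index, index within chunk, cache_info).
    ``last`` is the cache_info from a previous call; if ``idx`` falls in the
    same chunk, the bisect search is skipped.
    """
    if last is not None:
        igrp, start, end = last
        if start <= idx < end:  # cache hit: same chunk as last time
            return igrp, idx - start, last
    igrp = bisect.bisect_right(cumsum, idx)
    start = 0 if igrp == 0 else cumsum[igrp - 1]
    info = (igrp, start, cumsum[igrp])
    return igrp, idx - start, info

cumsum = [4, 9, 12]  # row-group sizes 4, 5, 3
igrp, off, cache = locate_idx_in_chunks(6, cumsum)
print(igrp, off)     # 1 2
igrp, off, cache = locate_idx_in_chunks(7, cumsum, cache)  # cache hit
print(igrp, off)     # 1 3
```

This matches the sequential- or neighborhood-access pattern the comment above describes: repeated lookups in the same row group cost only two comparisons.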
709,280
biglist._parquet
_reset
null
def _reset(self):
    self._file: ParquetFile | None = None
    self._data: ParquetBatchData | None = None

    self._row_groups_num_rows = None
    self._row_groups_num_rows_cumsum = None
    self._row_groups: None | list[ParquetBatchData] = None

    self._column_names = None
    self._columns = {}
    self._getitem_last_row_group = None

    self._scalar_as_py = None
    self.scalar_as_py = True
(self)
709,281
biglist._parquet
column
Select a single column. Note: while :meth:`columns` returns a new :class:`ParquetFileReader`, :meth:`column` returns a `pyarrow.ChunkedArray`_.
def column(self, idx_or_name: int | str) -> pyarrow.ChunkedArray:
    """Select a single column.

    Note: while :meth:`columns` returns a new :class:`ParquetFileReader`,
    :meth:`column` returns a `pyarrow.ChunkedArray`_.
    """
    z = self._columns.get(idx_or_name)
    if z is not None:
        return z
    if self._data is not None:
        return self._data.column(idx_or_name)
    if isinstance(idx_or_name, int):
        idx = idx_or_name
        name = self.column_names[idx]
    else:
        name = idx_or_name
        idx = self.column_names.index(name)
    z = self.file.read(columns=[name]).column(name)
    self._columns[idx] = z
    self._columns[name] = z
    return z
(self, idx_or_name: int | str) -> pyarrow.lib.ChunkedArray
709,282
biglist._parquet
columns
Return a new :class:`ParquetFileReader` object that will only load the specified columns. The columns of interest have to be within currently available columns. In other words, a series of calls to this method would incrementally narrow down the selection of columns. (Note this returns a new :class:`ParquetFileReader`, hence one can call :meth:`columns` again on the returned object.) This method "slices" the data by columns, in contrast to other data access methods that select rows. Parameters ---------- cols Names of the columns to select. Examples -------- >>> obj = ParquetFileReader('file_path') # doctest: +SKIP >>> obj1 = obj.columns(['a', 'b', 'c']) # doctest: +SKIP >>> print(obj1[2]) # doctest: +SKIP >>> obj2 = obj1.columns(['b', 'c']) # doctest: +SKIP >>> print(obj2[3]) # doctest: +SKIP >>> obj3 = obj.columns(['d']) # doctest: +SKIP >>> for v in obj: # doctest: +SKIP >>> print(v) # doctest: +SKIP
def columns(self, cols: Sequence[str]) -> ParquetFileReader:
    """
    Return a new :class:`ParquetFileReader` object that will only load
    the specified columns.

    The columns of interest have to be within currently available columns.
    In other words, a series of calls to this method would incrementally
    narrow down the selection of columns.
    (Note this returns a new :class:`ParquetFileReader`,
    hence one can call :meth:`columns` again on the returned object.)

    This method "slices" the data by columns, in contrast to other
    data access methods that select rows.

    Parameters
    ----------
    cols
        Names of the columns to select.

    Examples
    --------
    >>> obj = ParquetFileReader('file_path')  # doctest: +SKIP
    >>> obj1 = obj.columns(['a', 'b', 'c'])  # doctest: +SKIP
    >>> print(obj1[2])  # doctest: +SKIP
    >>> obj2 = obj1.columns(['b', 'c'])  # doctest: +SKIP
    >>> print(obj2[3])  # doctest: +SKIP
    >>> obj3 = obj.columns(['d'])  # doctest: +SKIP
    >>> for v in obj:  # doctest: +SKIP
    >>>     print(v)  # doctest: +SKIP
    """
    assert len(set(cols)) == len(cols)  # no repeat values

    if self._column_names:
        if all(col in self._column_names for col in cols):
            if len(cols) == len(self._column_names):
                return self
        else:
            cc = [col for col in cols if col not in self._column_names]
            raise ValueError(
                f'cannot select the columns {cc} because they are not in existing set of columns'
            )

    obj = self.__class__(self.path)
    obj.scalar_as_py = self.scalar_as_py
    obj._file = self._file
    obj._row_groups_num_rows = self._row_groups_num_rows
    obj._row_groups_num_rows_cumsum = self._row_groups_num_rows_cumsum
    if self._row_groups:
        obj._row_groups = [
            None if v is None else v.columns(cols) for v in self._row_groups
        ]
    if self._data is not None:
        obj._data = self._data.columns(cols)
    # TODO: also carry over `self._columns`?
    obj._column_names = cols
    return obj
(self, cols: collections.abc.Sequence[str]) -> biglist._parquet.ParquetFileReader
709,283
biglist._parquet
data
Return the entire data in the file.
def data(self) -> ParquetBatchData:
    """Return the entire data in the file."""
    self.load()
    return self._data
(self) -> biglist._parquet.ParquetBatchData
709,284
biglist._parquet
iter_batches
null
def iter_batches(self, batch_size=10_000) -> Iterator[ParquetBatchData]:
    if self._data is None:
        for batch in self.file.iter_batches(
            batch_size=batch_size,
            columns=self._column_names,
            use_threads=True,
        ):
            z = ParquetBatchData(batch)
            z.scalar_as_py = self.scalar_as_py
            yield z
    else:
        for batch in self._data.data().to_batches(batch_size):
            z = ParquetBatchData(batch)
            z.scalar_as_py = self.scalar_as_py
            yield z
(self, batch_size=10000) -> collections.abc.Iterator[biglist._parquet.ParquetBatchData]
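Behind the pyarrow calls, batched iteration amounts to cutting a row sequence into fixed-size chunks, with a possibly smaller final chunk. A pure-Python analogue using only ``itertools`` (this ``iter_batches`` is a toy for illustration, unrelated to the pyarrow method of the same name):

```python
from itertools import islice

def iter_batches(rows, batch_size=4):
    """Yield consecutive lists of up to ``batch_size`` items from ``rows``."""
    it = iter(rows)
    while batch := list(islice(it, batch_size)):
        yield batch

print(list(iter_batches(range(10), 4)))  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```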
709,285
biglist._parquet
load
Eagerly read the whole file into memory as a table.
def load(self) -> None:
    """Eagerly read the whole file into memory as a table."""
    if self._data is None:
        self._data = ParquetBatchData(
            self.file.read(columns=self._column_names, use_threads=True),
        )
        self._data.scalar_as_py = self.scalar_as_py
        if self.num_row_groups == 1:
            assert self._row_groups is None
            self._row_groups = [self._data]
(self) -> NoneType
709,286
biglist._parquet
row_group
Parameters ---------- idx Index of the row group of interest.
def row_group(self, idx: int) -> ParquetBatchData:
    """
    Parameters
    ----------
    idx
        Index of the row group of interest.
    """
    assert 0 <= idx < self.num_row_groups
    if self._row_groups is None:
        self._row_groups = [None] * self.num_row_groups
    if self._row_groups[idx] is None:
        z = ParquetBatchData(
            self.file.read_row_group(idx, columns=self._column_names),
        )
        z.scalar_as_py = self.scalar_as_py
        self._row_groups[idx] = z
        if self.num_row_groups == 1:
            assert self._data is None
            self._data = self._row_groups[0]
    return self._row_groups[idx]
(self, idx: int) -> biglist._parquet.ParquetBatchData
709,287
biglist._util
Seq
The protocol ``Seq`` is simpler and broader than the standard |Sequence|_. The former requires/provides only ``__len__``, ``__getitem__``, and ``__iter__``, whereas the latter adds ``__contains__``, ``__reversed__``, ``index`` and ``count`` to these three. Although the extra methods can be implemented using the three basic methods, they could be massively inefficient in particular cases, and that is the case in the applications targeted by ``biglist``. For this reason, the classes defined in this package implement the protocol ``Seq`` rather than ``Sequence``, to prevent the illusion that methods ``__contains__``, etc., are usable. A class that implements this protocol is sized, iterable, and subscriptable by an int index. This is a subset of the methods provided by ``Sequence``. In particular, ``Sequence`` implements this protocol, hence is considered a subclass of ``Seq`` for type checking purposes: >>> from biglist import Seq >>> from collections.abc import Sequence >>> issubclass(Sequence, Seq) True The built-in dict and tuple also implement the ``Seq`` protocol. The type parameter ``Element`` indicates the type of each data element.
class Seq(Protocol[Element]):
    """
    The protocol ``Seq`` is simpler and broader than the standard |Sequence|_.
    The former requires/provides only ``__len__``, ``__getitem__``, and ``__iter__``,
    whereas the latter adds ``__contains__``, ``__reversed__``, ``index`` and ``count``
    to these three. Although the extra methods can be implemented using the three basic methods,
    they could be massively inefficient in particular cases, and that is the case
    in the applications targeted by ``biglist``.
    For this reason, the classes defined in this package implement the protocol ``Seq``
    rather than ``Sequence``, to prevent the illusion that methods ``__contains__``, etc.,
    are usable.

    A class that implements this protocol is sized, iterable, and subscriptable by an int index.
    This is a subset of the methods provided by ``Sequence``.
    In particular, ``Sequence`` implements this protocol, hence is considered a subclass
    of ``Seq`` for type checking purposes:

    >>> from biglist import Seq
    >>> from collections.abc import Sequence
    >>> issubclass(Sequence, Seq)
    True

    The built-in dict and tuple also implement the ``Seq`` protocol.

    The type parameter ``Element`` indicates the type of each data element.
    """

    # The subclass check is not exactly right.
    # This protocol requires the method ``__getitem__``
    # to take an int key and return an element, but
    # the subclass check does not enforce this signature.
    # For example, dict would pass this check but it is not
    # an intended subclass.

    # An alternative definition is a `Subscriptable` following examples
    # in "cpython/Lib/_collections_abc.py", then a `Seq` inheriting from
    # Sized, Iterable, and Subscriptable.

    def __len__(self) -> int:
        ...

    def __getitem__(self, index: int) -> Element:
        ...

    def __iter__(self) -> Iterator[Element]:
        # A reference, or naive, implementation.
        for i in range(self.__len__()):
            yield self[i]
(*args, **kwargs)
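The structural subtyping described above can be reproduced with a minimal runtime-checkable protocol. ``MiniSeq`` below is an illustrative stand-in, not the actual ``biglist.Seq``; the ``@runtime_checkable`` decorator is an assumption needed to make the ``issubclass`` checks work at runtime:

```python
from collections.abc import Iterator, Sequence
from typing import Protocol, TypeVar, runtime_checkable

T = TypeVar("T", covariant=True)

@runtime_checkable  # required for issubclass() checks against a Protocol
class MiniSeq(Protocol[T]):
    # Only the three basic methods; no __contains__, __reversed__, index, count.
    def __len__(self) -> int: ...
    def __getitem__(self, index: int) -> T: ...
    def __iter__(self) -> Iterator[T]: ...

# Structural checks: any class providing the three methods qualifies,
# which is why Sequence -- and even dict -- pass.
print(issubclass(Sequence, MiniSeq))  # True
print(issubclass(dict, MiniSeq))      # True
```

As the source comment notes, the runtime check only tests for the presence of the methods, not their signatures, which is why ``dict`` passes despite not being an intended subclass.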
709,293
biglist._util
Slicer
This class wraps a :class:`Seq` and enables element access by slice or index array, in addition to single integer. A ``Slicer`` object makes "zero-copy"---it holds a reference to the underlying ``Seq`` and keeps track of indices of the selected elements. A ``Slicer`` object may be sliced again in a repeated "zoom in" fashion. Actual data elements are retrieved from the underlying ``Seq`` only when a single-element is accessed or iteration is performed. In other words, until an actual data element needs to be returned, it's all operations on the indices.
class Slicer(Seq[Element]): """ This class wraps a :class:`Seq` and enables element access by slice or index array, in addition to single integer. A ``Slicer`` object makes "zero-copy"---it holds a reference to the underlying ``Seq`` and keeps track of indices of the selected elements. A ``Slicer`` object may be sliced again in a repeated "zoom in" fashion. Actual data elements are retrieved from the underlying ``Seq`` only when a single-element is accessed or iteration is performed. In other words, until an actual data element needs to be returned, it's all operations on the indices. """ def __init__(self, list_: Seq[Element], range_: None | range | Seq[int] = None): """ This provides a "slice" of, or "window" into, ``list_``. The selection of elements is represented by the optional ``range_``, which is either a `range <https://docs.python.org/3/library/stdtypes.html#range>`_ such as ``range(3, 8)``, or a list of indices such as ``[1, 3, 5, 6]``. If ``range_`` is ``None``, the "window" covers the entire ``list_``. A common practice is to create a ``Slicer`` object without ``range_``, and then access a slice of it, for example, ``Slicer(obj)[3:8]`` rather than ``Slicer(obj, range(3,8))``. During the use of this object, the underlying ``list_`` must remain unchanged. Otherwise perplexing and surprising things may happen. """ self._list = list_ self._range = range_ def __repr__(self): return f'<{self.__class__.__name__} into {self.__len__()}/{len(self._list)} of {self._list!r}>' def __str__(self): return self.__repr__() def __len__(self) -> int: """Number of elements in the current window or "slice".""" if self._range is None: return len(self._list) return len(self._range) def __getitem__(self, idx: int | slice | Seq[int]): """ Element access by a single index, slice, or an index array. Negative index and standard slice syntax work as expected. Single-index access returns the requested data element. 
Slice and index-array accesses return a new :class:`Slicer` object, which, naturally, can be sliced again, like :: >>> x = list(range(30)) >>> Slicer(x)[[1, 3, 5, 6, 7, 8, 9, 13, 14]][::2][-2] 9 """ if isinstance(idx, int): # Return a single element. if self._range is None: return self._list[idx] return self._list[self._range[idx]] # Return a new `Slicer` object below. if isinstance(idx, slice): if self._range is None: range_ = range(len(self._list))[idx] else: range_ = self._range[idx] return self.__class__(self._list, range_) # `idx` is a list of indices. if self._range is None: return self.__class__(self._list, idx) return self.__class__(self._list, [self._range[i] for i in idx]) def __iter__(self) -> Iterator[Element]: """Iterate over the elements in the current window or "slice".""" if self._range is None: yield from self._list else: # This could be inefficient, depending on # the random-access performance of `self._list`. for i in self._range: yield self._list[i] @property def raw(self) -> Seq[Element]: """ Return the underlying data :class:`Seq`, that is, the ``list_`` that was passed into :meth:`__init__`. """ return self._list @property def range(self) -> None | range | Seq[int]: """ Return the parameter ``range_`` that was provided to :meth:`__init__`, representing the selection of items in the underlying ``Seq``. """ return self._range def collect(self) -> list[Element]: """ Return a list containing the elements in the current window. This is equivalent to ``list(self)``. This is often used to substantiate a small slice as a list, because a slice is still a :class:`Slicer` object, which does not directly reveal the data items. For example, :: >>> x = list(range(30)) >>> Slicer(x)[3:11] <Slicer into 8/30 of [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29]> >>> Slicer(x)[3:11].collect() [3, 4, 5, 6, 7, 8, 9, 10] (A list is used for illustration. 
In reality, list supports slicing directly, hence would not need ``Slicer``.) .. warning:: Do not call this on "big" data! """ return list(self)
(list_: 'Seq[Element]', range_: 'None | range | Seq[int]' = None)
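The repeated "zoom in" behavior can be sketched without ``biglist``. ``TinySlicer`` below is a stripped-down, hypothetical stand-in for ``Slicer``: slicing only composes index collections, and the underlying data is touched solely on single-element access:

```python
class TinySlicer:
    """Illustrative stand-in for Slicer: a zero-copy view over a sequence."""

    def __init__(self, data, range_=None):
        self._data = data
        self._range = range_  # None means "the whole of data"

    def __len__(self):
        return len(self._data) if self._range is None else len(self._range)

    def __getitem__(self, idx):
        base = range(len(self._data)) if self._range is None else self._range
        if isinstance(idx, int):
            return self._data[base[idx]]  # only now is the data touched
        if isinstance(idx, slice):
            return TinySlicer(self._data, base[idx])  # slice the indices, not the data
        return TinySlicer(self._data, [base[i] for i in idx])  # index array

x = list(range(30))
view = TinySlicer(x)[[1, 3, 5, 6, 7, 8, 9, 13, 14]][::2]
print(list(view._range))  # the composed indices: [1, 5, 7, 9, 14]
print(view[-2])           # -> 9, matching the Slicer docstring example
```

Note how the index-array access and the subsequent ``[::2]`` slice never copy ``x``; they only shrink the index collection carried by the view.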
709,294
biglist._util
__getitem__
Element access by a single index, slice, or an index array. Negative index and standard slice syntax work as expected. Single-index access returns the requested data element. Slice and index-array accesses return a new :class:`Slicer` object, which, naturally, can be sliced again, like :: >>> x = list(range(30)) >>> Slicer(x)[[1, 3, 5, 6, 7, 8, 9, 13, 14]][::2][-2] 9
def __getitem__(self, idx: int | slice | Seq[int]): """ Element access by a single index, slice, or an index array. Negative index and standard slice syntax work as expected. Single-index access returns the requested data element. Slice and index-array accesses return a new :class:`Slicer` object, which, naturally, can be sliced again, like :: >>> x = list(range(30)) >>> Slicer(x)[[1, 3, 5, 6, 7, 8, 9, 13, 14]][::2][-2] 9 """ if isinstance(idx, int): # Return a single element. if self._range is None: return self._list[idx] return self._list[self._range[idx]] # Return a new `Slicer` object below. if isinstance(idx, slice): if self._range is None: range_ = range(len(self._list))[idx] else: range_ = self._range[idx] return self.__class__(self._list, range_) # `idx` is a list of indices. if self._range is None: return self.__class__(self._list, idx) return self.__class__(self._list, [self._range[i] for i in idx])
(self, idx: Union[int, slice, biglist._util.Seq[int]])
709,295
biglist._util
__init__
This provides a "slice" of, or "window" into, ``list_``. The selection of elements is represented by the optional ``range_``, which is either a `range <https://docs.python.org/3/library/stdtypes.html#range>`_ such as ``range(3, 8)``, or a list of indices such as ``[1, 3, 5, 6]``. If ``range_`` is ``None``, the "window" covers the entire ``list_``. A common practice is to create a ``Slicer`` object without ``range_``, and then access a slice of it, for example, ``Slicer(obj)[3:8]`` rather than ``Slicer(obj, range(3,8))``. During the use of this object, the underlying ``list_`` must remain unchanged. Otherwise perplexing and surprising things may happen.
def __init__(self, list_: Seq[Element], range_: None | range | Seq[int] = None): """ This provides a "slice" of, or "window" into, ``list_``. The selection of elements is represented by the optional ``range_``, which is either a `range <https://docs.python.org/3/library/stdtypes.html#range>`_ such as ``range(3, 8)``, or a list of indices such as ``[1, 3, 5, 6]``. If ``range_`` is ``None``, the "window" covers the entire ``list_``. A common practice is to create a ``Slicer`` object without ``range_``, and then access a slice of it, for example, ``Slicer(obj)[3:8]`` rather than ``Slicer(obj, range(3,8))``. During the use of this object, the underlying ``list_`` must remain unchanged. Otherwise perplexing and surprising things may happen. """ self._list = list_ self._range = range_
(self, list_: biglist._util.Seq[~Element], range_: Union[NoneType, range, biglist._util.Seq[int]] = None)
709,296
biglist._util
__iter__
Iterate over the elements in the current window or "slice".
def __iter__(self) -> Iterator[Element]: """Iterate over the elements in the current window or "slice".""" if self._range is None: yield from self._list else: # This could be inefficient, depending on # the random-access performance of `self._list`. for i in self._range: yield self._list[i]
(self) -> collections.abc.Iterator[~Element]
709,297
biglist._util
__len__
Number of elements in the current window or "slice".
def __len__(self) -> int: """Number of elements in the current window or "slice".""" if self._range is None: return len(self._list) return len(self._range)
(self) -> int
709,298
biglist._util
__repr__
null
def __repr__(self): return f'<{self.__class__.__name__} into {self.__len__()}/{len(self._list)} of {self._list!r}>'
(self)
709,301
biglist._util
collect
Return a list containing the elements in the current window. This is equivalent to ``list(self)``. This is often used to substantiate a small slice as a list, because a slice is still a :class:`Slicer` object, which does not directly reveal the data items. For example, :: >>> x = list(range(30)) >>> Slicer(x)[3:11] <Slicer into 8/30 of [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29]> >>> Slicer(x)[3:11].collect() [3, 4, 5, 6, 7, 8, 9, 10] (A list is used for illustration. In reality, list supports slicing directly, hence would not need ``Slicer``.) .. warning:: Do not call this on "big" data!
def collect(self) -> list[Element]: """ Return a list containing the elements in the current window. This is equivalent to ``list(self)``. This is often used to substantiate a small slice as a list, because a slice is still a :class:`Slicer` object, which does not directly reveal the data items. For example, :: >>> x = list(range(30)) >>> Slicer(x)[3:11] <Slicer into 8/30 of [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29]> >>> Slicer(x)[3:11].collect() [3, 4, 5, 6, 7, 8, 9, 10] (A list is used for illustration. In reality, list supports slicing directly, hence would not need ``Slicer``.) .. warning:: Do not call this on "big" data! """ return list(self)
(self) -> list[~Element]
709,305
biglist._parquet
make_parquet_field
``field_spec`` is a list or tuple with 2, 3, or 4 elements. The first element is the name of the field. The second element is the spec of the type, to be passed to function :func:`make_parquet_type`. Additional elements are the optional ``nullable`` and ``metadata`` to the function `pyarrow.field() <https://arrow.apache.org/docs/python/generated/pyarrow.field.html#pyarrow.field>`_.
def make_parquet_field(field_spec: Sequence): """ ``field_spec`` is a list or tuple with 2, 3, or 4 elements. The first element is the name of the field. The second element is the spec of the type, to be passed to function :func:`make_parquet_type`. Additional elements are the optional ``nullable`` and ``metadata`` to the function `pyarrow.field() <https://arrow.apache.org/docs/python/generated/pyarrow.field.html#pyarrow.field>`_. """ field_name = field_spec[0] type_spec = field_spec[1] assert len(field_spec) <= 4 # two optional elements are `nullable` and `metadata`; # forward both, starting at index 2. return pyarrow.field(field_name, make_parquet_type(type_spec), *field_spec[2:])
(field_spec: collections.abc.Sequence)
709,306
biglist._parquet
make_parquet_schema
This function constructs a pyarrow schema from a spec expressed by simple Python types that can be json-serialized. ``fields_spec`` is a list or tuple, each of its elements accepted by :func:`make_parquet_field`. This function is motivated by the needs of :class:`~biglist._biglist.ParquetSerializer`. When :class:`biglist.Biglist` uses a "storage-format" that takes options (such as 'parquet'), these options can be passed into :func:`biglist.Biglist.new` (via ``serialize_kwargs`` and ``deserialize_kwargs``) and saved in "info.json". However, this requires the options to be json-serializable. Therefore, the argument ``schema`` to :meth:`ParquetSerializer.serialize() <biglist._biglist.ParquetSerializer.serialize>` cannot be used by this mechanism. As an alternative, the user can use the argument ``schema_spec``; this argument can be saved in "info.json", and it is handled by this function.
def make_parquet_schema(fields_spec: Iterable[Sequence]): """ This function constructs a pyarrow schema from a spec expressed by simple Python types that can be json-serialized. ``fields_spec`` is a list or tuple, each of its elements accepted by :func:`make_parquet_field`. This function is motivated by the needs of :class:`~biglist._biglist.ParquetSerializer`. When :class:`biglist.Biglist` uses a "storage-format" that takes options (such as 'parquet'), these options can be passed into :func:`biglist.Biglist.new` (via ``serialize_kwargs`` and ``deserialize_kwargs``) and saved in "info.json". However, this requires the options to be json-serializable. Therefore, the argument ``schema`` to :meth:`ParquetSerializer.serialize() <biglist._biglist.ParquetSerializer.serialize>` cannot be used by this mechanism. As an alternative, the user can use the argument ``schema_spec``; this argument can be saved in "info.json", and it is handled by this function. """ return pyarrow.schema((make_parquet_field(v) for v in fields_spec))
(fields_spec: collections.abc.Iterable[collections.abc.Sequence])
709,307
biglist._parquet
make_parquet_type
``type_spec`` is a spec of arguments to one of pyarrow's data type `factory functions <https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions>`_. For simple types, this may be just the type name (or function name), e.g. ``'bool_'``, ``'string'``, ``'float64'``. For type functions expecting arguments, this is a list or tuple with the type name followed by other arguments, for example, :: ('time32', 's') ('decimal128', 5, -3) For compound types (types constructed by other types), this is a "recursive" structure, such as :: ('list_', 'int64') ('list_', ('time32', 's'), 5) where the second element is the spec for the member type, or :: ('map_', 'string', ('list_', 'int64'), True) where the second and third elements are specs for the key type and value type, respectively, and the fourth element is the optional argument ``keys_sorted`` to `pyarrow.map_() <https://arrow.apache.org/docs/python/generated/pyarrow.map_.html#pyarrow.map_>`_. Below is an example of a struct type:: ('struct', [('name', 'string', False), ('age', 'uint8', True), ('income', ('struct', (('currency', 'string'), ('amount', 'uint64'))), False)]) Here, the second element is the list of fields in the struct. Each field is expressed by a spec that is taken by :meth:`make_parquet_field`.
def make_parquet_type(type_spec: str | Sequence): """ ``type_spec`` is a spec of arguments to one of pyarrow's data type `factory functions <https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions>`_. For simple types, this may be just the type name (or function name), e.g. ``'bool_'``, ``'string'``, ``'float64'``. For type functions expecting arguments, this is a list or tuple with the type name followed by other arguments, for example, :: ('time32', 's') ('decimal128', 5, -3) For compound types (types constructed by other types), this is a "recursive" structure, such as :: ('list_', 'int64') ('list_', ('time32', 's'), 5) where the second element is the spec for the member type, or :: ('map_', 'string', ('list_', 'int64'), True) where the second and third elements are specs for the key type and value type, respectively, and the fourth element is the optional argument ``keys_sorted`` to `pyarrow.map_() <https://arrow.apache.org/docs/python/generated/pyarrow.map_.html#pyarrow.map_>`_. Below is an example of a struct type:: ('struct', [('name', 'string', False), ('age', 'uint8', True), ('income', ('struct', (('currency', 'string'), ('amount', 'uint64'))), False)]) Here, the second element is the list of fields in the struct. Each field is expressed by a spec that is taken by :meth:`make_parquet_field`. 
""" if isinstance(type_spec, str): type_name = type_spec args = () else: type_name = type_spec[0] args = type_spec[1:] if type_name in ('string', 'float64', 'bool_', 'int8', 'int64', 'uint8', 'uint64'): assert not args return getattr(pyarrow, type_name)() if type_name == 'list_': if len(args) > 2: raise ValueError(f"'pyarrow.list_' expects 1 or 2 args, got `{args}`") return pyarrow.list_(make_parquet_type(args[0]), *args[1:]) if type_name in ('map_', 'dictionary'): if len(args) > 3: raise ValueError(f"'pyarrow.{type_name}' expects 2 or 3 args, got `{args}`") return getattr(pyarrow, type_name)( make_parquet_type(args[0]), make_parquet_type(args[1]), *args[2:], ) if type_name == 'struct': assert len(args) == 1 return pyarrow.struct((make_parquet_field(v) for v in args[0])) if type_name == 'large_list': assert len(args) == 1 return pyarrow.large_list(make_parquet_type(args[0])) if type_name in ( 'int16', 'int32', 'uint16', 'uint32', 'float32', 'date32', 'date64', 'month_day_nano_interval', 'utf8', 'large_binary', 'large_string', 'large_utf8', 'null', ): assert not args return getattr(pyarrow, type_name)() if type_name in ('time32', 'time64', 'duration'): assert len(args) == 1 elif type_name in ('timestamp', 'decimal128'): assert len(args) in (1, 2) elif type_name in ('binary',): assert len(args) <= 1 else: raise ValueError(f"unknown pyarrow type '{type_name}'") return getattr(pyarrow, type_name)(*args)
(type_spec: str | collections.abc.Sequence)
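To make the recursion over type specs concrete without requiring pyarrow, here is a dependency-free sketch that renders a spec as a readable type-expression string. ``render_type`` is a hypothetical helper for illustration only; the real ``make_parquet_type`` returns pyarrow DataType objects:

```python
def render_type(spec):
    """Render a type spec as a string (illustrative analogue of make_parquet_type)."""
    if isinstance(spec, str):
        return spec                      # simple type name, e.g. 'int64'
    name, *args = spec
    if name == 'list_':
        return f"list_<{render_type(args[0])}>"
    if name == 'map_':
        return f"map_<{render_type(args[0])}, {render_type(args[1])}>"
    if name == 'struct':
        fields = ", ".join(f"{f[0]}: {render_type(f[1])}" for f in args[0])
        return f"struct<{fields}>"
    # parameterized scalars such as ('time32', 's') or ('decimal128', 5, -3)
    return f"{name}({', '.join(map(str, args))})"

spec = ('struct', [('name', 'string'),
                   ('scores', ('list_', ('time32', 's')))])
print(render_type(spec))  # struct<name: string, scores: list_<time32(s)>>
```

The shape of the recursion mirrors the real function: strings bottom out, compound types recurse into their member specs, and struct specs recurse through their field list.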
709,308
biglist._parquet
read_parquet_file
Parameters ---------- path Path of the file.
def read_parquet_file(path: PathType) -> ParquetFileReader: """ Parameters ---------- path Path of the file. """ return ParquetFileReader(path)
(path: Union[str, pathlib.Path, upathlib._upath.Upath]) -> biglist._parquet.ParquetFileReader
709,309
biglist._parquet
write_arrays_to_parquet
Parameters ---------- path Path of the file to create and write to. data A list of data arrays. names List of names for the arrays in ``data``. **kwargs Passed on to `pyarrow.parquet.write_table() <https://arrow.apache.org/docs/python/generated/pyarrow.parquet.write_table.html>`_.
def write_arrays_to_parquet( data: Sequence[pyarrow.Array | pyarrow.ChunkedArray | Iterable], path: PathType, *, names: Sequence[str], **kwargs, ) -> None: """ Parameters ---------- path Path of the file to create and write to. data A list of data arrays. names List of names for the arrays in ``data``. **kwargs Passed on to `pyarrow.parquet.write_table() <https://arrow.apache.org/docs/python/generated/pyarrow.parquet.write_table.html>`_. """ assert len(names) == len(data) arrays = [ a if isinstance(a, (pyarrow.Array, pyarrow.ChunkedArray)) else pyarrow.array(a) for a in data ] table = pyarrow.Table.from_arrays(arrays, names=names) return write_parquet_table(table, path, **kwargs)
(data: collections.abc.Sequence[pyarrow.lib.Array | pyarrow.lib.ChunkedArray | collections.abc.Iterable], path: Union[str, pathlib.Path, upathlib._upath.Upath], *, names: collections.abc.Sequence[str], **kwargs) -> NoneType
709,310
biglist._parquet
write_pylist_to_parquet
null
def write_pylist_to_parquet( data: Sequence, path: PathType, *, schema=None, schema_spec=None, metadata=None, **kwargs, ): if schema is not None: assert schema_spec is None elif schema_spec is not None: assert schema is None schema = make_parquet_schema(schema_spec) table = pyarrow.Table.from_pylist(data, schema=schema, metadata=metadata) return write_parquet_table(table, path, **kwargs)
(data: collections.abc.Sequence, path: Union[str, pathlib.Path, upathlib._upath.Upath], *, schema=None, schema_spec=None, metadata=None, **kwargs)
709,408
configargparse
ArgumentDefaultsRawHelpFormatter
HelpFormatter that adds default values AND doesn't do line-wrapping
class ArgumentDefaultsRawHelpFormatter( argparse.ArgumentDefaultsHelpFormatter, argparse.RawTextHelpFormatter, argparse.RawDescriptionHelpFormatter): """HelpFormatter that adds default values AND doesn't do line-wrapping""" pass
(prog, indent_increment=2, max_help_position=24, width=None)
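The effect of combining the two stdlib formatters can be seen directly with argparse (the parser and option below are made up for the demo):

```python
import argparse

class RawDefaultsFormatter(argparse.ArgumentDefaultsHelpFormatter,
                           argparse.RawTextHelpFormatter):
    """Show default values AND keep the author's own line breaks in help text."""

p = argparse.ArgumentParser(prog="demo", formatter_class=RawDefaultsFormatter)
p.add_argument("--retries", type=int, default=3,
               help="how many times to retry\n(set 0 to disable)")
text = p.format_help()
print(text)  # help keeps the embedded newline and gains "(default: 3)"
```

With the plain ``HelpFormatter``, the embedded ``\n`` would be collapsed by line-wrapping and no default would be shown; the combined class preserves both behaviors.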
709,501
configargparse
CompositeConfigParser
Create a config parser composed of other `ConfigFileParser`s. The composite parser successively tries to parse the file with each parser until one succeeds; otherwise it raises an exception with all encountered errors.
class CompositeConfigParser(ConfigFileParser): """ Create a config parser composed of other `ConfigFileParser`s. The composite parser successively tries to parse the file with each parser until one succeeds; otherwise it raises an exception with all encountered errors. """ def __init__(self, config_parser_types): super().__init__() self.parsers = [p() for p in config_parser_types] def __call__(self): return self def parse(self, stream): errors = [] for p in self.parsers: try: return p.parse(stream) # type: ignore[no-any-return] except Exception as e: stream.seek(0) errors.append(e) raise ConfigFileParserException( f"Error parsing config: {', '.join(repr(str(e)) for e in errors)}") def get_syntax_description(self): def guess_format_name(classname): strip = classname.lower().strip('_').replace('parser', '').replace('config', '').replace('file', '') return strip.upper() if strip else '??' msg = "Uses multiple config parser settings (in order): \n" for i, parser in enumerate(self.parsers): msg += f"[{i+1}] {guess_format_name(parser.__class__.__name__)}: {parser.get_syntax_description()} \n" return msg
(config_parser_types)
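The try-each-parser-then-rewind pattern used by ``parse`` above can be exercised with toy parsers. The class and function names below are invented for the demo; only the fallback logic mirrors the source:

```python
import io
import json
from collections import OrderedDict

class JsonToyParser:
    """Toy parser #1: the stream must hold a JSON object."""
    def parse(self, stream):
        return OrderedDict(json.load(stream))

class KeyValueToyParser:
    """Toy parser #2: one 'key = value' pair per line."""
    def parse(self, stream):
        items = OrderedDict()
        for line in stream:
            key, sep, value = line.partition('=')
            if not sep:
                raise ValueError(f"not a key=value line: {line!r}")
            items[key.strip()] = value.strip()
        return items

def parse_with_fallback(stream, parsers):
    """Try each parser in turn, rewinding the stream after each failure."""
    errors = []
    for p in parsers:
        try:
            return p.parse(stream)
        except Exception as e:
            stream.seek(0)  # rewind so the next parser sees the whole input
            errors.append(e)
    raise ValueError(f"all parsers failed: {errors}")

parsers = [JsonToyParser(), KeyValueToyParser()]
print(parse_with_fallback(io.StringIO('{"a": "1"}'), parsers))   # JSON parser wins
print(parse_with_fallback(io.StringIO('a = 1\nb = 2'), parsers)) # falls back to key=value
```

The ``stream.seek(0)`` after each failure is the crucial detail: without it, the next parser would see a partially consumed stream.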
709,502
configargparse
__call__
null
def __call__(self): return self
(self)
709,503
configargparse
__init__
null
def __init__(self, config_parser_types): super().__init__() self.parsers = [p() for p in config_parser_types]
(self, config_parser_types)
709,504
configargparse
get_syntax_description
null
def get_syntax_description(self) : def guess_format_name(classname): strip = classname.lower().strip('_').replace('parser', '').replace('config', '').replace('file', '') return strip.upper() if strip else '??' msg = "Uses multiple config parser settings (in order): \n" for i, parser in enumerate(self.parsers): msg += f"[{i+1}] {guess_format_name(parser.__class__.__name__)}: {parser.get_syntax_description()} \n" return msg
(self)
709,505
configargparse
parse
null
def parse(self, stream): errors = [] for p in self.parsers: try: return p.parse(stream) # type: ignore[no-any-return] except Exception as e: stream.seek(0) errors.append(e) raise ConfigFileParserException( f"Error parsing config: {', '.join(repr(str(e)) for e in errors)}")
(self, stream)
709,506
configargparse
serialize
Does the inverse of config parsing by taking parsed values and converting them back to a string representing config file contents. Args: items: an OrderedDict of items to be converted to the config file format. Keys should be strings, and values should be either strings or lists. Returns: Contents of config file as a string
def serialize(self, items): """Does the inverse of config parsing by taking parsed values and converting them back to a string representing config file contents. Args: items: an OrderedDict of items to be converted to the config file format. Keys should be strings, and values should be either strings or lists. Returns: Contents of config file as a string """ raise NotImplementedError("serialize(..) not implemented")
(self, items)
709,507
configargparse
ConfigFileParser
This abstract class can be extended to add support for new config file formats
class ConfigFileParser(object): """This abstract class can be extended to add support for new config file formats""" def get_syntax_description(self): """Returns a string describing the config file syntax.""" raise NotImplementedError("get_syntax_description(..) not implemented") def parse(self, stream): """Parses the keys and values from a config file. NOTE: For keys that were specified to configargparse as action="store_true" or "store_false", the config file value must be one of: "yes", "no", "on", "off", "true", "false". Otherwise an error will be raised. Args: stream (IO): A config file input stream (such as an open file object). Returns: OrderedDict: Items where the keys are strings and the values are either strings or lists (eg. to support config file formats like YAML which allow lists). """ raise NotImplementedError("parse(..) not implemented") def serialize(self, items): """Does the inverse of config parsing by taking parsed values and converting them back to a string representing config file contents. Args: items: an OrderedDict of items to be converted to the config file format. Keys should be strings, and values should be either strings or lists. Returns: Contents of config file as a string """ raise NotImplementedError("serialize(..) not implemented")
()
709,508
configargparse
get_syntax_description
Returns a string describing the config file syntax.
def get_syntax_description(self): """Returns a string describing the config file syntax.""" raise NotImplementedError("get_syntax_description(..) not implemented")
(self)
709,509
configargparse
parse
Parses the keys and values from a config file. NOTE: For keys that were specified to configargparse as action="store_true" or "store_false", the config file value must be one of: "yes", "no", "on", "off", "true", "false". Otherwise an error will be raised. Args: stream (IO): A config file input stream (such as an open file object). Returns: OrderedDict: Items where the keys are strings and the values are either strings or lists (eg. to support config file formats like YAML which allow lists).
def parse(self, stream): """Parses the keys and values from a config file. NOTE: For keys that were specified to configargparse as action="store_true" or "store_false", the config file value must be one of: "yes", "no", "on", "off", "true", "false". Otherwise an error will be raised. Args: stream (IO): A config file input stream (such as an open file object). Returns: OrderedDict: Items where the keys are strings and the values are either strings or lists (eg. to support config file formats like YAML which allow lists). """ raise NotImplementedError("parse(..) not implemented")
(self, stream)
709,511
configargparse
ConfigFileParserException
Raised when config file parsing failed.
class ConfigFileParserException(Exception): """Raised when config file parsing failed."""
null
709,512
configargparse
ConfigparserConfigFileParser
Parses INI files using Python's configparser.
class ConfigparserConfigFileParser(ConfigFileParser): """Parses INI files using Python's configparser.""" def get_syntax_description(self): msg = """Uses configparser module to parse an INI file which allows multi-line values. Allowed syntax is that for a ConfigParser with the following options: allow_no_value = False, inline_comment_prefixes = ("#",) strict = True empty_lines_in_values = False See https://docs.python.org/3/library/configparser.html for details. Note: INI file section names are still treated as comments. """ return msg def parse(self, stream): # see ConfigFileParser.parse docstring import configparser from ast import literal_eval # parse with configparser to allow multi-line values config = configparser.ConfigParser( delimiters=("=",":"), allow_no_value=False, comment_prefixes=("#",";"), inline_comment_prefixes=("#",";"), strict=True, empty_lines_in_values=False, ) try: config.read_string(stream.read()) except Exception as e: raise ConfigFileParserException("Couldn't parse config file: %s" % e) # convert to dict and remove INI section names result = OrderedDict() for section in config.sections(): for k,v in config[section].items(): multiLine2SingleLine = v.replace('\n',' ').replace('\r',' ') # handle special case for lists if '[' in multiLine2SingleLine and ']' in multiLine2SingleLine: # ensure not a dict with a list value prelist_string = multiLine2SingleLine.split('[')[0] if '{' not in prelist_string: result[k] = literal_eval(multiLine2SingleLine) else: result[k] = multiLine2SingleLine else: result[k] = multiLine2SingleLine return result def serialize(self, items): # see ConfigFileParser.serialize docstring import configparser import io config = configparser.ConfigParser( allow_no_value=False, inline_comment_prefixes=("#",), strict=True, empty_lines_in_values=False, ) items = {"DEFAULT": items} config.read_dict(items) stream = io.StringIO() config.write(stream) stream.seek(0) return stream.read()
()
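The section-flattening behavior of ``parse`` above relies only on the stdlib; a condensed sketch with a made-up INI snippet (config options as in the source, minus the dict-guard special case):

```python
import configparser
from ast import literal_eval
from collections import OrderedDict

text = """\
[server]
host = example.com
ports = [8080, 8081]  ; inline comment, stripped by the parser
[client]
retries = 3
"""

config = configparser.ConfigParser(
    delimiters=("=", ":"),
    allow_no_value=False,
    inline_comment_prefixes=("#", ";"),
    strict=True,
    empty_lines_in_values=False,
)
config.read_string(text)

# Flatten: section names are discarded, as in the parser above.
result = OrderedDict()
for section in config.sections():
    for k, v in config[section].items():
        v = v.replace('\n', ' ').replace('\r', ' ')
        result[k] = literal_eval(v) if v.startswith('[') and v.endswith(']') else v

print(result)  # host and retries stay strings; ports becomes a real list
```

Note that only bracketed values are converted with ``literal_eval``; everything else stays a string, which is why ``retries`` comes out as ``'3'``.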
709,513
configargparse
get_syntax_description
null
def get_syntax_description(self): msg = """Uses configparser module to parse an INI file which allows multi-line values. Allowed syntax is that for a ConfigParser with the following options: allow_no_value = False, inline_comment_prefixes = ("#",) strict = True empty_lines_in_values = False See https://docs.python.org/3/library/configparser.html for details. Note: INI file section names are still treated as comments. """ return msg
(self)
709,514
configargparse
parse
null
def parse(self, stream): # see ConfigFileParser.parse docstring import configparser from ast import literal_eval # parse with configparser to allow multi-line values config = configparser.ConfigParser( delimiters=("=",":"), allow_no_value=False, comment_prefixes=("#",";"), inline_comment_prefixes=("#",";"), strict=True, empty_lines_in_values=False, ) try: config.read_string(stream.read()) except Exception as e: raise ConfigFileParserException("Couldn't parse config file: %s" % e) # convert to dict and remove INI section names result = OrderedDict() for section in config.sections(): for k,v in config[section].items(): multiLine2SingleLine = v.replace('\n',' ').replace('\r',' ') # handle special case for lists if '[' in multiLine2SingleLine and ']' in multiLine2SingleLine: # ensure not a dict with a list value prelist_string = multiLine2SingleLine.split('[')[0] if '{' not in prelist_string: result[k] = literal_eval(multiLine2SingleLine) else: result[k] = multiLine2SingleLine else: result[k] = multiLine2SingleLine return result
(self, stream)
709,515
configargparse
serialize
null
def serialize(self, items): # see ConfigFileParser.serialize docstring import configparser import io config = configparser.ConfigParser( allow_no_value=False, inline_comment_prefixes=("#",), strict=True, empty_lines_in_values=False, ) items = {"DEFAULT": items} config.read_dict(items) stream = io.StringIO() config.write(stream) stream.seek(0) return stream.read()
(self, items)
709,516
configargparse
DefaultConfigFileParser
Based on a simplified subset of INI and YAML formats. Here is the supported syntax .. code:: # this is a comment ; this is also a comment (.ini style) --- # lines that start with --- are ignored (yaml style) ------------------- [section] # .ini-style section names are treated as comments # how to specify a key-value pair (all of these are equivalent): name value # key is case sensitive: "Name" isn't "name" name = value # (.ini style) (white space is ignored, so name = value same as name=value) name: value # (yaml style) --name value # (argparse style) # how to set a flag arg (eg. arg which has action="store_true") --name name name = True # "True" and "true" are the same # how to specify a list arg (eg. arg which has action="append") fruit = [apple, orange, lemon] indexes = [1, 12, 35 , 40]
class DefaultConfigFileParser(ConfigFileParser):
    """ Based on a simplified subset of INI and YAML formats. Here is the
    supported syntax

    .. code::

        # this is a comment
        ; this is also a comment (.ini style)
        ---            # lines that start with --- are ignored (yaml style)
        -------------------
        [section]      # .ini-style section names are treated as comments

        # how to specify a key-value pair (all of these are equivalent):
        name value     # key is case sensitive: "Name" isn't "name"
        name = value   # (.ini style)  (white space is ignored, so name = value same as name=value)
        name: value    # (yaml style)
        --name value   # (argparse style)

        # how to set a flag arg (eg. arg which has action="store_true")
        --name
        name
        name = True    # "True" and "true" are the same

        # how to specify a list arg (eg. arg which has action="append")
        fruit = [apple, orange, lemon]
        indexes = [1, 12, 35 , 40]
    """

    def get_syntax_description(self):
        msg = ("Config file syntax allows: key=value, flag=true, stuff=[a,b,c] "
               "(for details, see syntax at https://goo.gl/R74nmi).")
        return msg

    def parse(self, stream):
        # see ConfigFileParser.parse docstring
        items = OrderedDict()
        for i, line in enumerate(stream):
            line = line.strip()
            if not line or line[0] in ["#", ";", "["] or line.startswith("---"):
                continue
            match = re.match(r'^(?P<key>[^:=;#\s]+)\s*'
                             r'(?:(?P<equal>[:=\s])\s*([\'"]?)(?P<value>.+?)?\3)?'
                             r'\s*(?:\s[;#]\s*(?P<comment>.*?)\s*)?$',
                             line)
            if match:
                key = match.group("key")
                equal = match.group('equal')
                value = match.group("value")
                comment = match.group("comment")
                if value is None and equal is not None and equal != ' ':
                    value = ''
                elif value is None:
                    value = "true"
                if value.startswith("[") and value.endswith("]"):
                    # handle special case of k=[1,2,3] or other json-like syntax
                    try:
                        value = json.loads(value)
                    except Exception:
                        # for backward compatibility with legacy format (eg. where config
                        # value is [a, b, c] instead of proper json ["a", "b", "c"])
                        value = [elem.strip() for elem in value[1:-1].split(",")]
                if comment:
                    comment = comment.strip()[1:].strip()
                items[key] = value
            else:
                raise ConfigFileParserException("Unexpected line {} in {}: {}".format(
                    i, getattr(stream, 'name', 'stream'), line))
        return items

    def serialize(self, items):
        # see ConfigFileParser.serialize docstring
        r = StringIO()
        for key, value in items.items():
            if isinstance(value, list):
                # handle special case of lists
                value = "[" + ", ".join(map(str, value)) + "]"
            r.write("{} = {}\n".format(key, value))
        return r.getvalue()
()
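The regex at the heart of `DefaultConfigFileParser.parse` can be exercised on its own. This is a minimal, stdlib-only sketch; the `parse_line` helper name is mine (not part of configargparse), and the empty-value and inline-comment handling of the real method is omitted.

```python
import re

# The line-matching regex copied from DefaultConfigFileParser.parse above.
LINE_RE = re.compile(r'^(?P<key>[^:=;#\s]+)\s*'
                     r'(?:(?P<equal>[:=\s])\s*([\'"]?)(?P<value>.+?)?\3)?'
                     r'\s*(?:\s[;#]\s*(?P<comment>.*?)\s*)?$')

def parse_line(line):
    """Return (key, value) for one config line, or None if it doesn't match."""
    m = LINE_RE.match(line.strip())
    if m is None:
        return None
    value = m.group("value")
    if value is None:
        # flag-style line such as "--verbose": parse() treats it as boolean true
        value = "true"
    return m.group("key"), value

print(parse_line("name = value"))   # ('name', 'value')
print(parse_line("name: value"))    # ('name', 'value')
print(parse_line("--verbose"))      # ('--verbose', 'true')
```

All three key/value spellings from the docstring above land on the same (key, value) pair, which is how the parser stays agnostic about .ini, yaml, and argparse styles.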
709,517
configargparse
get_syntax_description
null
def get_syntax_description(self):
    msg = ("Config file syntax allows: key=value, flag=true, stuff=[a,b,c] "
           "(for details, see syntax at https://goo.gl/R74nmi).")
    return msg
(self)
709,518
configargparse
parse
null
def parse(self, stream):
    # see ConfigFileParser.parse docstring
    items = OrderedDict()
    for i, line in enumerate(stream):
        line = line.strip()
        if not line or line[0] in ["#", ";", "["] or line.startswith("---"):
            continue
        match = re.match(r'^(?P<key>[^:=;#\s]+)\s*'
                         r'(?:(?P<equal>[:=\s])\s*([\'"]?)(?P<value>.+?)?\3)?'
                         r'\s*(?:\s[;#]\s*(?P<comment>.*?)\s*)?$',
                         line)
        if match:
            key = match.group("key")
            equal = match.group('equal')
            value = match.group("value")
            comment = match.group("comment")
            if value is None and equal is not None and equal != ' ':
                value = ''
            elif value is None:
                value = "true"
            if value.startswith("[") and value.endswith("]"):
                # handle special case of k=[1,2,3] or other json-like syntax
                try:
                    value = json.loads(value)
                except Exception:
                    # for backward compatibility with legacy format (eg. where config
                    # value is [a, b, c] instead of proper json ["a", "b", "c"])
                    value = [elem.strip() for elem in value[1:-1].split(",")]
            if comment:
                comment = comment.strip()[1:].strip()
            items[key] = value
        else:
            raise ConfigFileParserException("Unexpected line {} in {}: {}".format(
                i, getattr(stream, 'name', 'stream'), line))
    return items
(self, stream)
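The bracketed-value branch of `parse` above first tries a proper JSON decode, then falls back to a plain comma split for the legacy unquoted form. A standalone sketch of that fallback (the `parse_list_value` name is mine):

```python
import json

# Proper JSON lists decode directly; the legacy unquoted form
# (e.g. [a, b, c]) falls back to stripping brackets and splitting on commas.
def parse_list_value(value):
    try:
        return json.loads(value)
    except Exception:
        return [elem.strip() for elem in value[1:-1].split(",")]

print(parse_list_value('["a", "b", "c"]'))  # ['a', 'b', 'c']
print(parse_list_value('[a, b, c]'))        # ['a', 'b', 'c']
```

Both spellings yield the same list, which is why the docstring shows `fruit = [apple, orange, lemon]` without quotes as valid syntax.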
709,519
configargparse
serialize
null
def serialize(self, items):
    # see ConfigFileParser.serialize docstring
    r = StringIO()
    for key, value in items.items():
        if isinstance(value, list):
            # handle special case of lists
            value = "[" + ", ".join(map(str, value)) + "]"
        r.write("{} = {}\n".format(key, value))
    return r.getvalue()
(self, items)
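`serialize` writes one `key = value` line per item, rendering lists in the unquoted bracket form that `parse` accepts back. The same logic, runnable standalone (lifted from the method above, minus the class):

```python
from collections import OrderedDict
from io import StringIO

# Mirror of DefaultConfigFileParser.serialize: lists become "[a, b]" strings,
# everything else is written as a plain "key = value" line.
def serialize(items):
    r = StringIO()
    for key, value in items.items():
        if isinstance(value, list):
            value = "[" + ", ".join(map(str, value)) + "]"
        r.write("{} = {}\n".format(key, value))
    return r.getvalue()

print(serialize(OrderedDict([("name", "value"), ("fruit", ["apple", "orange"])])))
# name = value
# fruit = [apple, orange]
```

Note the round-trip property: feeding this output back through the parser's comma-split fallback recovers the original list.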
709,605
configargparse
IniConfigParser
Create an INI parser bound to the list of provided sections.
Optionally convert multiline strings to lists.

Example (if split_ml_text_to_list=False)::

    # this is a comment
    ; also a comment
    [my-software]
    # how to specify a key-value pair
    format-string: restructuredtext
    # white space is ignored, so name = value same as name=value
    # this is why you can quote strings
    quoted-string = '\thello\tmom...  '
    # how to set an arg which has action="store_true"
    warnings-as-errors = true
    # how to set an arg which has action="count" or type=int
    verbosity = 1
    # how to specify a list arg (eg. arg which has action="append")
    repeatable-option = ["https://docs.python.org/3/objects.inv",
                         "https://twistedmatrix.com/documents/current/api/objects.inv"]
    # how to specify a multiline text:
    multi-line-text =
        Lorem ipsum dolor sit amet, consectetur adipiscing elit.
        Vivamus tortor odio, dignissim non ornare non, laoreet quis nunc.
        Maecenas quis dapibus leo, a pellentesque leo.

Example (if split_ml_text_to_list=True)::

    # the same rules are applicable with the following changes:
    [my-software]
    # how to specify a list arg (eg. arg which has action="append")
    repeatable-option = # Just enter one value per line (the list literal format can also be used)
        https://docs.python.org/3/objects.inv
        https://twistedmatrix.com/documents/current/api/objects.inv
    # how to specify a multiline text (you have to quote it):
    multi-line-text = '''
        Lorem ipsum dolor sit amet, consectetur adipiscing elit.
        Vivamus tortor odio, dignissim non ornare non, laoreet quis nunc.
        Maecenas quis dapibus leo, a pellentesque leo. '''
class IniConfigParser(ConfigFileParser):
    """
    Create an INI parser bound to the list of provided sections.
    Optionally convert multiline strings to lists.

    Example (if split_ml_text_to_list=False)::

        # this is a comment
        ; also a comment
        [my-software]
        # how to specify a key-value pair
        format-string: restructuredtext
        # white space is ignored, so name = value same as name=value
        # this is why you can quote strings
        quoted-string = '\thello\tmom...  '
        # how to set an arg which has action="store_true"
        warnings-as-errors = true
        # how to set an arg which has action="count" or type=int
        verbosity = 1
        # how to specify a list arg (eg. arg which has action="append")
        repeatable-option = ["https://docs.python.org/3/objects.inv",
                             "https://twistedmatrix.com/documents/current/api/objects.inv"]
        # how to specify a multiline text:
        multi-line-text =
            Lorem ipsum dolor sit amet, consectetur adipiscing elit.
            Vivamus tortor odio, dignissim non ornare non, laoreet quis nunc.
            Maecenas quis dapibus leo, a pellentesque leo.

    Example (if split_ml_text_to_list=True)::

        # the same rules are applicable with the following changes:
        [my-software]
        # how to specify a list arg (eg. arg which has action="append")
        repeatable-option = # Just enter one value per line (the list literal format can also be used)
            https://docs.python.org/3/objects.inv
            https://twistedmatrix.com/documents/current/api/objects.inv
        # how to specify a multiline text (you have to quote it):
        multi-line-text = '''
            Lorem ipsum dolor sit amet, consectetur adipiscing elit.
            Vivamus tortor odio, dignissim non ornare non, laoreet quis nunc.
            Maecenas quis dapibus leo, a pellentesque leo. '''
    """

    def __init__(self, sections, split_ml_text_to_list):
        """
        :param sections: The section names bound to the new parser.
        :param split_ml_text_to_list: Whether to convert multiline strings to lists.
        """
        super().__init__()
        self.sections = sections
        self.split_ml_text_to_list = split_ml_text_to_list

    def __call__(self):
        return self

    def parse(self, stream):
        """Parses the keys and values from an INI config file."""
        # parse with configparser to allow multi-line values
        import configparser
        config = configparser.ConfigParser()
        try:
            config.read_string(stream.read())
        except Exception as e:
            raise ConfigFileParserException("Couldn't parse INI file: %s" % e)

        # convert to dict and filter based on INI section names
        result = OrderedDict()
        for section in config.sections() + [configparser.DEFAULTSECT]:
            if section not in self.sections:
                continue
            for k, v in config[section].items():
                strip_v = v.strip()
                if not strip_v:
                    # ignore empty values; allow_no_value=False by default,
                    # so this should not happen anyway.
                    continue
                # evaluate lists
                if strip_v.startswith('[') and strip_v.endswith(']'):
                    try:
                        result[k] = ast.literal_eval(strip_v)
                    except ValueError as e:
                        # error evaluating object
                        raise ConfigFileParserException("Error evaluating list: " + str(e) +
                            ". Put quotes around your text if it's meant to be a string.") from e
                else:
                    if is_quoted(strip_v):
                        # evaluate quoted string
                        try:
                            result[k] = unquote_str(strip_v)
                        except ValueError as e:
                            # error unquoting string
                            raise ConfigFileParserException(str(e)) from e
                    # split multi-line text into a list of strings if split_ml_text_to_list is enabled.
                    elif self.split_ml_text_to_list and '\n' in v.rstrip('\n'):
                        try:
                            result[k] = [unquote_str(i) for i in strip_v.split('\n') if i]
                        except ValueError as e:
                            # error unquoting string
                            raise ConfigFileParserException(str(e)) from e
                    else:
                        result[k] = v
        return result

    def get_syntax_description(self):
        msg = ("Uses configparser module to parse an INI file which allows multi-line values. "
               "See https://docs.python.org/3/library/configparser.html for details. "
               "This parser includes support for quoting string literals as well as python list syntax evaluation. ")
        if self.split_ml_text_to_list:
            msg += ("Alternatively lists can be constructed with a plain multiline string, "
                    "each non-empty line will be converted to a list item.")
        return msg
(sections, split_ml_text_to_list)
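The `split_ml_text_to_list` behaviour documented above turns each non-empty line of a multiline INI value into one list item. A simplified sketch of that split, skipping the `unquote_str` step the real parser applies to each item:

```python
# A configparser multi-line value arrives as one string with embedded
# newlines; each non-empty line becomes a list item.
value = """
https://docs.python.org/3/objects.inv
https://twistedmatrix.com/documents/current/api/objects.inv
"""
items = [line for line in value.split('\n') if line]
print(items)
```

The empty strings produced by leading and trailing newlines are dropped by the `if line` filter, mirroring the `if i` test in the method above.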
709,607
configargparse
__init__
:param sections: The section names bound to the new parser.
:param split_ml_text_to_list: Whether to convert multiline strings to lists.
def __init__(self, sections, split_ml_text_to_list):
    """
    :param sections: The section names bound to the new parser.
    :param split_ml_text_to_list: Whether to convert multiline strings to lists.
    """
    super().__init__()
    self.sections = sections
    self.split_ml_text_to_list = split_ml_text_to_list
(self, sections, split_ml_text_to_list)
709,608
configargparse
get_syntax_description
null
def get_syntax_description(self):
    msg = ("Uses configparser module to parse an INI file which allows multi-line values. "
           "See https://docs.python.org/3/library/configparser.html for details. "
           "This parser includes support for quoting string literals as well as python list syntax evaluation. ")
    if self.split_ml_text_to_list:
        msg += ("Alternatively lists can be constructed with a plain multiline string, "
                "each non-empty line will be converted to a list item.")
    return msg
(self)
709,609
configargparse
parse
Parses the keys and values from an INI config file.
def parse(self, stream):
    """Parses the keys and values from an INI config file."""
    # parse with configparser to allow multi-line values
    import configparser
    config = configparser.ConfigParser()
    try:
        config.read_string(stream.read())
    except Exception as e:
        raise ConfigFileParserException("Couldn't parse INI file: %s" % e)

    # convert to dict and filter based on INI section names
    result = OrderedDict()
    for section in config.sections() + [configparser.DEFAULTSECT]:
        if section not in self.sections:
            continue
        for k, v in config[section].items():
            strip_v = v.strip()
            if not strip_v:
                # ignore empty values; allow_no_value=False by default,
                # so this should not happen anyway.
                continue
            # evaluate lists
            if strip_v.startswith('[') and strip_v.endswith(']'):
                try:
                    result[k] = ast.literal_eval(strip_v)
                except ValueError as e:
                    # error evaluating object
                    raise ConfigFileParserException("Error evaluating list: " + str(e) +
                        ". Put quotes around your text if it's meant to be a string.") from e
            else:
                if is_quoted(strip_v):
                    # evaluate quoted string
                    try:
                        result[k] = unquote_str(strip_v)
                    except ValueError as e:
                        # error unquoting string
                        raise ConfigFileParserException(str(e)) from e
                # split multi-line text into a list of strings if split_ml_text_to_list is enabled.
                elif self.split_ml_text_to_list and '\n' in v.rstrip('\n'):
                    try:
                        result[k] = [unquote_str(i) for i in strip_v.split('\n') if i]
                    except ValueError as e:
                        # error unquoting string
                        raise ConfigFileParserException(str(e)) from e
                else:
                    result[k] = v
    return result
(self, stream)
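`IniConfigParser.parse` delegates the actual INI reading to the stdlib `configparser` module via `read_string`, then post-processes the string values, using `ast.literal_eval` for bracketed list literals. A minimal stand-alone illustration of those two steps:

```python
import ast
import configparser

# configparser does the INI reading; a bracketed value is then evaluated
# with ast.literal_eval, exactly as the parse method above does.
text = """
[my-software]
verbosity = 1
repeatable-option = ["https://docs.python.org/3/objects.inv"]
"""
config = configparser.ConfigParser()
config.read_string(text)
section = config["my-software"]
print(section["verbosity"])                            # '1' — raw values are strings
print(ast.literal_eval(section["repeatable-option"]))  # a real Python list
```

Using `ast.literal_eval` rather than `eval` keeps the evaluation limited to Python literals, which is why the error message above tells users to quote text that happens to look like a list.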
709,790
configargparse
TomlConfigParser
Create a TOML parser bound to the list of provided sections.

Example::

    # this is a comment
    [tool.my-software]  # TOML section table.
    # how to specify a key-value pair
    format-string = "restructuredtext"  # strings must be quoted
    # how to set an arg which has action="store_true"
    warnings-as-errors = true
    # how to set an arg which has action="count" or type=int
    verbosity = 1
    # how to specify a list arg (eg. arg which has action="append")
    repeatable-option = ["https://docs.python.org/3/objects.inv",
                         "https://twistedmatrix.com/documents/current/api/objects.inv"]
    # how to specify a multiline text:
    multi-line-text = '''
        Lorem ipsum dolor sit amet, consectetur adipiscing elit.
        Vivamus tortor odio, dignissim non ornare non, laoreet quis nunc.
        Maecenas quis dapibus leo, a pellentesque leo. '''

Note that the config file fragment above is also valid for the `IniConfigParser`
class and would be parsed the same way. Though, not every valid TOML config file
is parsable with `IniConfigParser` (INI files must be rigorously indented,
whereas TOML files need not be). See the `TOML specification <>`_ for details.
class TomlConfigParser(ConfigFileParser):
    """
    Create a TOML parser bound to the list of provided sections.

    Example::

        # this is a comment
        [tool.my-software]  # TOML section table.
        # how to specify a key-value pair
        format-string = "restructuredtext"  # strings must be quoted
        # how to set an arg which has action="store_true"
        warnings-as-errors = true
        # how to set an arg which has action="count" or type=int
        verbosity = 1
        # how to specify a list arg (eg. arg which has action="append")
        repeatable-option = ["https://docs.python.org/3/objects.inv",
                             "https://twistedmatrix.com/documents/current/api/objects.inv"]
        # how to specify a multiline text:
        multi-line-text = '''
            Lorem ipsum dolor sit amet, consectetur adipiscing elit.
            Vivamus tortor odio, dignissim non ornare non, laoreet quis nunc.
            Maecenas quis dapibus leo, a pellentesque leo. '''

    Note that the config file fragment above is also valid for the `IniConfigParser`
    class and would be parsed the same way. Though, not every valid TOML config file
    is parsable with `IniConfigParser` (INI files must be rigorously indented,
    whereas TOML files need not be). See the `TOML specification <>`_ for details.
    """

    def __init__(self, sections):
        """
        :param sections: The section names bound to the new parser.
        """
        super().__init__()
        self.sections = sections

    def __call__(self):
        return self

    def parse(self, stream):
        """Parses the keys and values from a TOML config file."""
        import toml
        try:
            config = toml.load(stream)
        except Exception as e:
            raise ConfigFileParserException("Couldn't parse TOML file: %s" % e)

        # convert to dict and filter based on section names
        result = OrderedDict()
        for section in self.sections:
            data = get_toml_section(config, section)
            if data:
                # Seems a little weird, but anything that is not a list is converted
                # to a string; it will be converted back to boolean, int or whatever
                # afterwards, because config values are still passed through argparse.
                for key, value in data.items():
                    if isinstance(value, list):
                        result[key] = value
                    elif value is None:
                        pass
                    else:
                        result[key] = str(value)
                break
        return result

    def get_syntax_description(self):
        return ("Config file syntax is Tom's Obvious, Minimal Language. "
                "See https://github.com/toml-lang/toml/blob/v0.5.0/README.md for details.")
(sections)
709,792
configargparse
__init__
:param sections: The section names bound to the new parser.
def __init__(self, sections):
    """
    :param sections: The section names bound to the new parser.
    """
    super().__init__()
    self.sections = sections
(self, sections)
709,793
configargparse
get_syntax_description
null
def get_syntax_description(self):
    return ("Config file syntax is Tom's Obvious, Minimal Language. "
            "See https://github.com/toml-lang/toml/blob/v0.5.0/README.md for details.")
(self)
709,794
configargparse
parse
Parses the keys and values from a TOML config file.
def parse(self, stream):
    """Parses the keys and values from a TOML config file."""
    import toml
    try:
        config = toml.load(stream)
    except Exception as e:
        raise ConfigFileParserException("Couldn't parse TOML file: %s" % e)

    # convert to dict and filter based on section names
    result = OrderedDict()
    for section in self.sections:
        data = get_toml_section(config, section)
        if data:
            # Seems a little weird, but anything that is not a list is converted
            # to a string; it will be converted back to boolean, int or whatever
            # afterwards, because config values are still passed through argparse.
            for key, value in data.items():
                if isinstance(value, list):
                    result[key] = value
                elif value is None:
                    pass
                else:
                    result[key] = str(value)
            break
    return result
(self, stream)
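The value-normalization loop inside `TomlConfigParser.parse` is independent of the TOML library itself: lists pass through unchanged, `None` is dropped, and every other value is stringified so argparse can re-convert it. Sketched here on a plain dict standing in for an already-parsed section table (the `normalize` name is mine):

```python
from collections import OrderedDict

# The normalization rule from TomlConfigParser.parse: keep lists as-is,
# drop None, stringify everything else for argparse to convert back.
def normalize(data):
    result = OrderedDict()
    for key, value in data.items():
        if isinstance(value, list):
            result[key] = value
        elif value is None:
            pass
        else:
            result[key] = str(value)
    return result

print(normalize({"verbosity": 1, "warnings-as-errors": True,
                 "repeatable-option": ["a", "b"]}))
```

Stringifying scalars looks lossy, but as the comment in the source notes, argparse applies each option's `type` afterwards, so `'1'` and `'True'` come back as `int` and `bool`.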
709,796
configargparse
YAMLConfigFileParser
Parses YAML config files. Depends on the PyYAML module. https://pypi.python.org/pypi/PyYAML
class YAMLConfigFileParser(ConfigFileParser):
    """Parses YAML config files. Depends on the PyYAML module.
    https://pypi.python.org/pypi/PyYAML
    """

    def get_syntax_description(self):
        msg = ("The config file uses YAML syntax and must represent a YAML "
               "'mapping' (for details, see http://learn.getgrav.org/advanced/yaml).")
        return msg

    def _load_yaml(self):
        """lazy-import PyYAML so that configargparse doesn't have to depend
        on it unless this parser is used."""
        try:
            import yaml
        except ImportError:
            raise ConfigFileParserException("Could not import yaml. "
                "It can be installed by running 'pip install PyYAML'")
        try:
            from yaml import CSafeLoader as SafeLoader
            from yaml import CDumper as Dumper
        except ImportError:
            from yaml import SafeLoader
            from yaml import Dumper
        return yaml, SafeLoader, Dumper

    def parse(self, stream):
        # see ConfigFileParser.parse docstring
        yaml, SafeLoader, _ = self._load_yaml()
        try:
            parsed_obj = yaml.load(stream, Loader=SafeLoader)
        except Exception as e:
            raise ConfigFileParserException("Couldn't parse config file: %s" % e)

        if not isinstance(parsed_obj, dict):
            raise ConfigFileParserException("The config file doesn't appear to "
                "contain 'key: value' pairs (aka. a YAML mapping). "
                "yaml.load('%s') returned type '%s' instead of 'dict'." % (
                getattr(stream, 'name', 'stream'), type(parsed_obj).__name__))

        result = OrderedDict()
        for key, value in parsed_obj.items():
            if isinstance(value, list):
                result[key] = value
            elif value is None:
                pass
            else:
                result[key] = str(value)
        return result

    def serialize(self, items, default_flow_style=False):
        # see ConfigFileParser.serialize docstring
        # lazy-import so there's no dependency on yaml unless this class is used
        yaml, _, Dumper = self._load_yaml()
        # it looks like ordering can't be preserved: http://pyyaml.org/ticket/29
        items = dict(items)
        return yaml.dump(items, default_flow_style=default_flow_style, Dumper=Dumper)
()
709,797
configargparse
_load_yaml
lazy-import PyYAML so that configargparse doesn't have to depend on it unless this parser is used.
def _load_yaml(self):
    """lazy-import PyYAML so that configargparse doesn't have to depend
    on it unless this parser is used."""
    try:
        import yaml
    except ImportError:
        raise ConfigFileParserException("Could not import yaml. "
            "It can be installed by running 'pip install PyYAML'")
    try:
        from yaml import CSafeLoader as SafeLoader
        from yaml import CDumper as Dumper
    except ImportError:
        from yaml import SafeLoader
        from yaml import Dumper
    return yaml, SafeLoader, Dumper
(self)
709,798
configargparse
get_syntax_description
null
def get_syntax_description(self):
    msg = ("The config file uses YAML syntax and must represent a YAML "
           "'mapping' (for details, see http://learn.getgrav.org/advanced/yaml).")
    return msg
(self)
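The top-level type check in `YAMLConfigFileParser.parse` rejects any stream that doesn't decode to a mapping. The check itself needs no YAML at all; a sketch using `ValueError` in place of `ConfigFileParserException` (which isn't defined here) and a `check_mapping` name of my own:

```python
# The mapping check from YAMLConfigFileParser.parse: a parsed config must be
# a dict of key/value pairs; lists, scalars and None are all rejected.
def check_mapping(parsed_obj, name="stream"):
    if not isinstance(parsed_obj, dict):
        raise ValueError(
            "The config file doesn't appear to contain 'key: value' pairs "
            "(aka. a YAML mapping). yaml.load('%s') returned type '%s' "
            "instead of 'dict'." % (name, type(parsed_obj).__name__))
    return parsed_obj

check_mapping({"name": "value"})        # accepted
try:
    check_mapping(["just", "a", "list"])
except ValueError as e:
    print("rejected:", e)
```

Naming the offending type in the error message (`'list' instead of 'dict'`) is what makes the real exception actionable for users whose YAML file is, say, a top-level sequence.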