rapidsai_public_repos/cudf/docs/dask_cudf/make.bat
@ECHO OFF

pushd %~dp0

REM Command file for Sphinx documentation

if "%SPHINXBUILD%" == "" (
	set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=source
set BUILDDIR=build

%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
	echo.
	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
	echo.installed, then set the SPHINXBUILD environment variable to point
	echo.to the full path of the 'sphinx-build' executable. Alternatively you
	echo.may add the Sphinx directory to PATH.
	echo.
	echo.If you don't have Sphinx installed, grab it from
	echo.https://www.sphinx-doc.org/
	exit /b 1
)

if "%1" == "" goto help

%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
goto end

:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%

:end
popd
rapidsai_public_repos/cudf/docs/dask_cudf/Makefile
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS    ?= -n -v
SPHINXBUILD   ?= sphinx-build
SPHINXPROJ    = dask_cudf
SOURCEDIR     = source
BUILDDIR      = build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option.  $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
rapidsai_public_repos/cudf/docs/dask_cudf/source/conf.py
# Copyright (c) 2018-2023, NVIDIA CORPORATION.
# Configuration file for the Sphinx documentation builder.
#
# For the full list of built-in configuration values, see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html

# -- Project information -----------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information

project = "dask-cudf"
copyright = "2018-2023, NVIDIA Corporation"
author = "NVIDIA Corporation"
version = '24.02'
release = '24.02.00'

language = "en"

# -- General configuration ---------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration

extensions = [
    "sphinx.ext.intersphinx",
    "sphinx.ext.autodoc",
    "sphinx_copybutton",
    "numpydoc",
    "IPython.sphinxext.ipython_console_highlighting",
    "IPython.sphinxext.ipython_directive",
    "myst_nb",
]

templates_path = ["_templates"]
exclude_patterns = []

copybutton_prompt_text = ">>> "

# Enable automatic generation of systematic, namespaced labels for sections
myst_heading_anchors = 2

# -- Options for HTML output -------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output

html_theme = "pydata_sphinx_theme"
html_logo = "_static/RAPIDS-logo-purple.png"
htmlhelp_basename = "dask-cudfdoc"
html_use_modindex = True
html_static_path = ["_static"]

pygments_style = "sphinx"

html_theme_options = {
    "external_links": [],
    "github_url": "https://github.com/rapidsai/cudf",
    "twitter_url": "https://twitter.com/rapidsai",
    "show_toc_level": 1,
    "navbar_align": "right",
    "navigation_with_keys": True,
}
include_pandas_compat = True

intersphinx_mapping = {
    "python": ("https://docs.python.org/3", None),
    "cupy": ("https://docs.cupy.dev/en/stable/", None),
    "numpy": ("https://numpy.org/doc/stable", None),
    "pyarrow": ("https://arrow.apache.org/docs/", None),
    "cudf": ("https://docs.rapids.ai/api/cudf/stable/", None),
    "dask": ("https://docs.dask.org/en/stable/", None),
    "pandas": ("https://pandas.pydata.org/docs/", None),
}

numpydoc_show_inherited_class_members = True
numpydoc_class_members_toctree = False
numpydoc_attributes_as_param_list = False


def setup(app):
    app.add_css_file("https://docs.rapids.ai/assets/css/custom.css")
    app.add_js_file(
        "https://docs.rapids.ai/assets/js/custom.js", loading_method="defer"
    )
rapidsai_public_repos/cudf/docs/dask_cudf/source/api.rst
===============
API reference
===============

This page provides a list of all publicly accessible modules, methods,
and classes in the ``dask_cudf`` namespace.

Creating and storing DataFrames
===============================

:doc:`Like Dask <dask:dataframe-create>`, Dask-cuDF supports creation
of DataFrames from a variety of storage formats. For on-disk data that
are not supported directly in Dask-cuDF, we recommend using Dask's data
reading facilities, followed by calling :func:`.from_dask_dataframe` to
obtain a Dask-cuDF object.

.. automodule:: dask_cudf
   :members:
     from_cudf,
     from_dask_dataframe,
     from_delayed,
     read_csv,
     read_json,
     read_orc,
     to_orc,
     read_text,
     read_parquet

Grouping
========

As discussed in the :doc:`Dask documentation for groupby
<dask:dataframe-groupby>`, ``groupby``, ``join``, ``merge``, and similar
operations that require matching up rows of a DataFrame become
significantly more challenging in a parallel setting than they are in
serial. Dask-cuDF has the same challenges; however, for certain groupby
operations, we can take advantage of functionality in cuDF that allows
us to compute multiple aggregations at once. There are therefore two
interfaces to grouping in Dask-cuDF: the general
:meth:`DataFrame.groupby`, which returns a
:class:`.CudfDataFrameGroupBy` object, and a specialized
:func:`.groupby_agg`. Generally speaking, you should not need to call
:func:`.groupby_agg` directly, since Dask-cuDF will arrange to call it
if possible.

.. autoclass:: dask_cudf.groupby.CudfDataFrameGroupBy
   :members:
   :inherited-members:
   :show-inheritance:

.. autofunction:: dask_cudf.groupby_agg

DataFrames and Series
=====================

The core distributed objects provided by Dask-cuDF are the
:class:`.DataFrame` and :class:`.Series`. These inherit respectively
from :class:`dask.dataframe.DataFrame` and
:class:`dask.dataframe.Series`, and so the API is essentially
identical. The full API is provided below.

.. autoclass:: dask_cudf.DataFrame
   :members:
   :inherited-members:
   :show-inheritance:

.. autoclass:: dask_cudf.Series
   :members:
   :inherited-members:
   :show-inheritance:

.. automodule:: dask_cudf
   :members:
     concat
rapidsai_public_repos/cudf/docs/dask_cudf/source/index.rst
.. dask-cudf documentation coordinating file, created by
   sphinx-quickstart on Mon Feb 6 18:48:11 2023.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to dask-cudf's documentation!
=====================================

Dask-cuDF is an extension library for the `Dask <https://dask.org>`__
parallel computing framework that provides a `cuDF
<https://docs.rapids.ai/api/cudf/stable/>`__-backed distributed
dataframe with the same API as `Dask dataframes
<https://docs.dask.org/en/stable/dataframe.html>`__.

If you are familiar with Dask and `pandas <https://pandas.pydata.org>`__ or
`cuDF <https://docs.rapids.ai/api/cudf/stable/>`__, then Dask-cuDF
should feel familiar to you. If not, we recommend starting with `10
minutes to Dask
<https://docs.dask.org/en/stable/10-minutes-to-dask.html>`__ followed
by `10 minutes to cuDF and Dask-cuDF
<https://docs.rapids.ai/api/cudf/stable/user_guide/10min.html>`__.

When running on multi-GPU systems, `Dask-CUDA
<https://docs.rapids.ai/api/dask-cuda/stable/>`__ is recommended to
simplify the setup of the cluster, taking advantage of all features of
the GPU and networking hardware.

Using Dask-cuDF
---------------

When installed, Dask-cuDF registers itself as a dataframe backend for
Dask. This means that in many cases, using cuDF-backed dataframes
requires only small changes to an existing workflow. The minimal change
is to select cuDF as the dataframe backend in :doc:`Dask's
configuration <dask:configuration>`. To do so, we must set the option
``dataframe.backend`` to ``cudf``. From Python, this can be achieved
like so::

  import dask

  dask.config.set({"dataframe.backend": "cudf"})

Alternatively, you can set ``DASK_DATAFRAME__BACKEND=cudf`` in the
environment before running your code.
Dataframe creation from on-disk formats
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If your workflow creates Dask dataframes from on-disk formats (for
example using :func:`dask.dataframe.read_parquet`), then setting the
backend may well be enough to migrate your workflow.

For example, consider reading a dataframe from parquet::

  import dask.dataframe as dd

  # By default, we obtain a pandas-backed dataframe
  df = dd.read_parquet("data.parquet", ...)

To obtain a cuDF-backed dataframe, we must set the
``dataframe.backend`` configuration option::

  import dask
  import dask.dataframe as dd

  dask.config.set({"dataframe.backend": "cudf"})
  # This gives us a cuDF-backed dataframe
  df = dd.read_parquet("data.parquet", ...)

This code will use cuDF's GPU-accelerated :func:`parquet reader
<cudf.read_parquet>` to read partitions of the data.

Dataframe creation from in-memory formats
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you already have a dataframe in memory and want to convert it to a
cuDF-backed one, there are two options depending on whether the
dataframe is already a Dask one or not. If you have a Dask dataframe,
then you can call :func:`dask.dataframe.to_backend` passing ``"cudf"``
as the backend; if you have a pandas dataframe then you can either call
:func:`dask.dataframe.from_pandas` followed by
:func:`~dask.dataframe.to_backend` or first convert the dataframe with
:func:`cudf.from_pandas` and then parallelise this with
:func:`dask_cudf.from_cudf`.

API Reference
-------------

Generally speaking, Dask-cuDF tries to offer exactly the same API as
Dask itself. There are, however, some minor differences, mostly because
cuDF does not :doc:`perfectly mirror <cudf:user_guide/PandasCompat>`
the pandas API, or because cuDF provides additional configuration
flags (these mostly occur in data reading and writing interfaces).

As a result, straightforward workflows can be migrated without too much
trouble, but more complex ones that utilise more features may need a
bit of tweaking.
The API documentation describes details of the differences and all
functionality that Dask-cuDF supports.

.. toctree::
   :maxdepth: 2

   api

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
rapidsai_public_repos/cudf/docs/cudf/README.md
# Building Documentation

This directory contains the documentation for the cuDF Python library. For
more information on how to write, build, and read the documentation, see
[the developer documentation](https://github.com/rapidsai/cudf/blob/HEAD/docs/cudf/source/developer_guide/documentation.md).
rapidsai_public_repos/cudf/docs/cudf/make.bat
@ECHO OFF

pushd %~dp0

REM Command file for Sphinx documentation

if "%SPHINXBUILD%" == "" (
	set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=source
set BUILDDIR=build
set SPHINXPROJ=cudf

if "%1" == "" goto help

%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
	echo.
	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
	echo.installed, then set the SPHINXBUILD environment variable to point
	echo.to the full path of the 'sphinx-build' executable. Alternatively you
	echo.may add the Sphinx directory to PATH.
	echo.
	echo.If you don't have Sphinx installed, grab it from
	echo.http://sphinx-doc.org/
	exit /b 1
)

%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
goto end

:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%

:end
popd
rapidsai_public_repos/cudf/docs/cudf/Makefile
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS    = -n -v -W --keep-going
SPHINXBUILD   = sphinx-build
SPHINXPROJ    = cudf
SOURCEDIR     = source
BUILDDIR      = build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option.  $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
rapidsai_public_repos/cudf/docs/cudf/source/conf.py
# Copyright (c) 2018-2023, NVIDIA CORPORATION.
#
# cudf documentation build configuration file, created by
# sphinx-quickstart on Wed May 3 10:59:22 2017.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys

from docutils.nodes import Text
from sphinx.addnodes import pending_xref

# -- Custom Extensions ----------------------------------------------------
sys.path.append(os.path.abspath("./_ext"))

# -- General configuration ------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    "sphinx.ext.intersphinx",
    "sphinx.ext.autodoc",
    "sphinx.ext.autosummary",
    "sphinx_copybutton",
    "numpydoc",
    "IPython.sphinxext.ipython_console_highlighting",
    "IPython.sphinxext.ipython_directive",
    "PandasCompat",
    "myst_nb",
]

nb_execution_excludepatterns = ['performance-comparisons.ipynb']
nb_execution_mode = "force"
nb_execution_timeout = 300

copybutton_prompt_text = ">>> "
autosummary_generate = True

# Enable automatic generation of systematic, namespaced labels for sections
myst_heading_anchors = 2

# Add any paths that contain templates here, relative to this directory.
templates_path = ["_templates"]

# The suffix(es) of source filenames.
# You can specify multiple suffixes as a list of strings:
#
# source_suffix = ['.rst', '.md']
source_suffix = {".rst": "restructuredtext"}

# The master toctree document.
master_doc = "index"

# General information about the project.
project = "cudf"
copyright = "2018-2023, NVIDIA Corporation"
author = "NVIDIA Corporation"

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '24.02'
# The full version, including alpha/beta/rc tags.
release = '24.02.00'

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = "en"

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# These patterns also affect html_static_path and html_extra_path.
exclude_patterns = ['venv', "**/includes/**"]

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = "sphinx"

html_theme_options = {
    "external_links": [],
    # https://github.com/pydata/pydata-sphinx-theme/issues/1220
    "icon_links": [],
    "github_url": "https://github.com/rapidsai/cudf",
    "twitter_url": "https://twitter.com/rapidsai",
    "show_toc_level": 1,
    "navbar_align": "right",
    "navigation_with_keys": True,
}
include_pandas_compat = True

# The theme to use for HTML and HTML Help pages.  See the documentation for
# a list of builtin themes.
#
html_theme = "pydata_sphinx_theme"
html_logo = "_static/RAPIDS-logo-purple.png"

# Theme options are theme-specific and customize the look and feel of a theme
# further.  For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ["_static"]

# -- Options for HTMLHelp output ------------------------------------------

# Output file base name for HTML help builder.
htmlhelp_basename = "cudfdoc"

# -- Options for LaTeX output ---------------------------------------------

latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #
    # 'papersize': 'letterpaper',
    # The font size ('10pt', '11pt' or '12pt').
    #
    # 'pointsize': '10pt',
    # Additional stuff for the LaTeX preamble.
    #
    # 'preamble': '',
    # Latex figure (float) alignment
    #
    # 'figure_align': 'htbp',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
#  author, documentclass [howto, manual, or own class]).
latex_documents = [
    (
        master_doc,
        "cudf.tex",
        "cudf Documentation",
        "NVIDIA Corporation",
        "manual",
    )
]

# -- Options for manual page output ---------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [(master_doc, "cudf", "cudf Documentation", [author], 1)]

# -- Options for Texinfo output -------------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
#  dir menu entry, description, category)
texinfo_documents = [
    (
        master_doc,
        "cudf",
        "cudf Documentation",
        author,
        "cudf",
        "One line description of project.",
        "Miscellaneous",
    )
]

# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {
    "python": ("https://docs.python.org/3", None),
    "cupy": ("https://docs.cupy.dev/en/stable/", None),
    "numpy": ("https://numpy.org/doc/stable", None),
    "pyarrow": ("https://arrow.apache.org/docs/", None),
    "pandas": ("https://pandas.pydata.org/docs/", None),
    "typing_extensions": (
        "https://typing-extensions.readthedocs.io/en/stable/",
        None,
    ),
}

# Config numpydoc
numpydoc_show_inherited_class_members = {
    # option_context inherits undocumented members from the parent class
    "cudf.option_context": False,
}

# Rely on toctrees generated from autosummary on each of the pages we define
# rather than the autosummaries on the numpydoc auto-generated class pages.
numpydoc_class_members_toctree = False

numpydoc_attributes_as_param_list = False

autoclass_content = "class"

# Replace API shorthands with fullname
_reftarget_aliases = {
    "cudf.Series": ("cudf.core.series.Series", "cudf.Series"),
    "cudf.Index": ("cudf.core.index.Index", "cudf.Index"),
    "cupy.core.core.ndarray": ("cupy.ndarray", "cupy.ndarray"),
}


def resolve_aliases(app, doctree):
    pending_xrefs = doctree.traverse(condition=pending_xref)
    for node in pending_xrefs:
        alias = node.get("reftarget", None)
        if alias is not None and alias in _reftarget_aliases:
            real_ref, text_to_render = _reftarget_aliases[alias]
            node["reftarget"] = real_ref
            text_node = next(
                iter(node.traverse(lambda n: n.tagname == "#text"))
            )
            text_node.parent.replace(text_node, Text(text_to_render, ""))


def ignore_internal_references(app, env, node, contnode):
    name = node.get("reftarget", None)
    if name == "cudf.core.index.GenericIndex":
        # We don't expose docs for `cudf.core.index.GenericIndex`,
        # hence we want the docstring & mypy references to
        # use `cudf.Index` instead.
        node["reftarget"] = "cudf.Index"
        return contnode
    return None


nitpick_ignore = [
    ("py:class", "SeriesOrIndex"),
    ("py:class", "Dtype"),
    # TODO: Remove this when we figure out why typing_extensions doesn't seem
    # to map types correctly for intersphinx
    ("py:class", "typing_extensions.Self"),
]


def setup(app):
    app.add_css_file("https://docs.rapids.ai/assets/css/custom.css")
    app.add_js_file(
        "https://docs.rapids.ai/assets/js/custom.js", loading_method="defer"
    )
    app.connect("doctree-read", resolve_aliases)
    app.connect("missing-reference", ignore_internal_references)
rapidsai_public_repos/cudf/docs/cudf/source/index.rst
Welcome to the cuDF documentation!
==================================

.. figure:: _static/RAPIDS-logo-purple.png
    :width: 300px
    :align: center

**cuDF** is a Python GPU DataFrame library (built on the `Apache Arrow
<https://arrow.apache.org/>`_ columnar memory format) for loading, joining,
aggregating, filtering, and otherwise manipulating data. cuDF also provides a
pandas-like API that will be familiar to data engineers & data scientists, so
they can use it to easily accelerate their workflows without going into the
details of CUDA programming.

``cudf.pandas`` is built on cuDF and accelerates pandas code on the GPU. It
supports 100% of the pandas API, using the GPU for supported operations, and
automatically falling back to pandas for other operations.

.. figure:: _static/duckdb-benchmark-groupby-join.png
    :width: 750px
    :align: center

    Results of the `Database-like ops benchmark
    <https://duckdblabs.github.io/db-benchmark/>`_ including
    ``cudf.pandas``. See details `here <cudf_pandas/benchmarks.html>`_.

.. toctree::
   :maxdepth: 1
   :caption: Contents:

   user_guide/index
   cudf_pandas/index
   developer_guide/index
rapidsai_public_repos/cudf/docs/cudf/source/user_guide/cupy-interop.ipynb
import timeit

import cupy as cp
from packaging import version

import cudf

if version.parse(cp.__version__) >= version.parse("10.0.0"):
    cupy_from_dlpack = cp.from_dlpack
else:
    cupy_from_dlpack = cp.fromDlpack

nelem = 10000
df = cudf.DataFrame(
    {
        "a": range(nelem),
        "b": range(500, nelem + 500),
        "c": range(1000, nelem + 1000),
    }
)

%timeit arr_cupy = cupy_from_dlpack(df.to_dlpack())
%timeit arr_cupy = df.values
%timeit arr_cupy = df.to_cupy()

arr_cupy = cupy_from_dlpack(df.to_dlpack())
arr_cupy

col = "a"
%timeit cola_cupy = cp.asarray(df[col])
%timeit cola_cupy = cupy_from_dlpack(df[col].to_dlpack())
%timeit cola_cupy = df[col].values

cola_cupy = cp.asarray(df[col])
cola_cupy

reshaped_arr = cola_cupy.reshape(50, 200)
reshaped_arr

reshaped_arr.diagonal()

cp.linalg.norm(reshaped_arr)

%timeit reshaped_df = cudf.DataFrame(reshaped_arr)

reshaped_df = cudf.DataFrame(reshaped_arr)
reshaped_df.head()

cp.isfortran(reshaped_arr)

%%timeit
fortran_arr = cp.asfortranarray(reshaped_arr)
reshaped_df = cudf.DataFrame(fortran_arr)

%%timeit
fortran_arr = cp.asfortranarray(reshaped_arr)
reshaped_df = cudf.from_dlpack(fortran_arr.toDlpack())

fortran_arr = cp.asfortranarray(reshaped_arr)
reshaped_df = cudf.DataFrame(fortran_arr)
reshaped_df.head()

cudf.Series(reshaped_arr.diagonal()).head()

reshaped_df.head()

new_arr = cupy_from_dlpack(reshaped_df.to_dlpack())
new_arr.sum(axis=1)

def cudf_to_cupy_sparse_matrix(data, sparseformat="column"):
    """Converts a cuDF object to a CuPy Sparse Column matrix."""
    if sparseformat not in ("row", "column"):
        raise ValueError("Let's focus on column and row formats for now.")
    _sparse_constructor = cp.sparse.csc_matrix
    if sparseformat == "row":
        _sparse_constructor = cp.sparse.csr_matrix
    return _sparse_constructor(cupy_from_dlpack(data.to_dlpack()))

df = cudf.DataFrame()
nelem = 10000
nonzero = 1000
for i in range(20):
    arr = cp.random.normal(5, 5, nelem)
    arr[cp.random.choice(arr.shape[0], nelem - nonzero, replace=False)] = 0
    df["a" + str(i)] = arr

df.head()

sparse_data = cudf_to_cupy_sparse_matrix(df)
print(sparse_data)
rapidsai_public_repos/cudf/docs/cudf/source/user_guide/10min.ipynb
import os

import cupy as cp
import pandas as pd

import cudf
import dask_cudf

cp.random.seed(12)

#### Portions of this were borrowed and adapted from the
#### cuDF cheatsheet, existing cuDF documentation,
#### and 10 Minutes to Pandas.

s = cudf.Series([1, 2, 3, None, 4])
s

ds = dask_cudf.from_cudf(s, npartitions=2)
# Note the call to head here to show the first few entries, unlike
# cuDF objects, dask-cuDF objects do not have a printing
# representation that shows values since they may not be in local
# memory.
ds.head(n=3)

df = cudf.DataFrame(
    {
        "a": list(range(20)),
        "b": list(reversed(range(20))),
        "c": list(range(20)),
    }
)
df

ddf = dask_cudf.from_cudf(df, npartitions=2)
ddf.head()

pdf = pd.DataFrame({"a": [0, 1, 2, 3], "b": [0.1, 0.2, None, 0.3]})
gdf = cudf.DataFrame.from_pandas(pdf)
gdf

dask_gdf = dask_cudf.from_cudf(gdf, npartitions=2)
dask_gdf.head(n=2)

df.head(2)

ddf.head(2)

df.sort_values(by="b")

ddf.sort_values(by="b").head()

df["a"]

ddf["a"].head()

df.loc[2:5, ["a", "b"]]

ddf.loc[2:5, ["a", "b"]].head()

df.iloc[0]

df.iloc[0:3, 0:2]

df[3:5]

s[3:5]

df[df.b > 15]

ddf[ddf.b > 15].head(n=3)

df.query("b == 3")

ddf.query("b == 3").compute()

cudf_comparator = 3
df.query("b == @cudf_comparator")

dask_cudf_comparator = 3
ddf.query("b == @val", local_dict={"val": dask_cudf_comparator}).compute()

df[df.a.isin([0, 5])]

arrays = [["a", "a", "b", "b"], [1, 2, 3, 4]]
tuples = list(zip(*arrays))
idx = cudf.MultiIndex.from_tuples(tuples)
idx

gdf1 = cudf.DataFrame(
    {"first": cp.random.rand(4), "second": cp.random.rand(4)}
)
gdf1.index = idx
gdf1

gdf2 = cudf.DataFrame(
    {"first": cp.random.rand(4), "second": cp.random.rand(4)}
).T
gdf2.columns = idx
gdf2

gdf1.loc[("b", 3)]

gdf1.iloc[0:2]

s.fillna(999)

ds.fillna(999).head(n=3)

s.mean(), s.var()

ds.mean().compute(), ds.var().compute()

def add_ten(num):
    return num + 10

df["a"].apply(add_ten)

ddf["a"].map_partitions(add_ten).head()

df.a.value_counts()

ddf.a.value_counts().head()

s = cudf.Series(["A", "B", "C", "Aaba", "Baca", None, "CABA", "dog", "cat"])
s.str.lower()

ds = dask_cudf.from_cudf(s, npartitions=2)
ds.str.lower().head(n=4)

s.str.match("^[aAc].+")

ds.str.match("^[aAc].+").head()

s = cudf.Series([1, 2, 3, None, 5])
cudf.concat([s, s])

ds2 = dask_cudf.from_cudf(s, npartitions=2)
dask_cudf.concat([ds2, ds2]).head(n=3)

df_a = cudf.DataFrame()
df_a["key"] = ["a", "b", "c", "d", "e"]
df_a["vals_a"] = [float(i + 10) for i in range(5)]

df_b = cudf.DataFrame()
df_b["key"] = ["a", "c", "e"]
df_b["vals_b"] = [float(i + 100) for i in range(3)]

merged = df_a.merge(df_b, on=["key"], how="left")
merged

ddf_a = dask_cudf.from_cudf(df_a, npartitions=2)
ddf_b = dask_cudf.from_cudf(df_b, npartitions=2)

merged = ddf_a.merge(ddf_b, on=["key"], how="left").head(n=4)
merged

df["agg_col1"] = [1 if x % 2 == 0 else 0 for x in range(len(df))]
df["agg_col2"] = [1 if x % 3 == 0 else 0 for x in range(len(df))]

ddf = dask_cudf.from_cudf(df, npartitions=2)

df.groupby("agg_col1").sum()

ddf.groupby("agg_col1").sum().compute()

df.groupby(["agg_col1", "agg_col2"]).sum()

ddf.groupby(["agg_col1", "agg_col2"]).sum().compute()

df.groupby("agg_col1").agg({"a": "max", "b": "mean", "c": "sum"})

ddf.groupby("agg_col1").agg({"a": "max", "b": "mean", "c": "sum"}).compute()

sample = cudf.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})
sample

sample.transpose()

import datetime as dt

date_df = cudf.DataFrame()
date_df["date"] = pd.date_range("11/20/2018", periods=72, freq="D")
date_df["value"] = cp.random.sample(len(date_df))

search_date = dt.datetime.strptime("2018-11-23", "%Y-%m-%d")
date_df.query("date <= @search_date")

date_ddf = dask_cudf.from_cudf(date_df, npartitions=2)
date_ddf.query(
    "date <= @search_date", local_dict={"search_date": search_date}
).compute()

gdf = cudf.DataFrame(
    {"id": [1, 2, 3, 4, 5, 6], "grade": ["a", "b", "b", "a", "a", "e"]}
)
gdf["grade"] = gdf["grade"].astype("category")
gdf

dgdf = dask_cudf.from_cudf(gdf, npartitions=2)
dgdf.head(n=3)

gdf.grade.cat.categories

gdf.grade.cat.codes

dgdf.grade.cat.codes.compute()

df.head().to_pandas()

ddf.head().to_pandas()

ddf.compute().to_pandas().head()

df.to_numpy()

ddf.compute().to_numpy()

df["a"].to_numpy()

ddf["a"].compute().to_numpy()

df.to_arrow()

ddf.head().to_arrow()

if not os.path.exists("example_output"):
    os.mkdir("example_output")
df.to_csv("example_output/foo.csv", index=False)

ddf.compute().to_csv("example_output/foo_dask.csv", index=False)

df = cudf.read_csv("example_output/foo.csv")
df

ddf = dask_cudf.read_csv("example_output/foo_dask.csv")
ddf.head()

ddf = dask_cudf.read_csv("example_output/*.csv")
ddf.head()

df.to_parquet("example_output/temp_parquet")

df = cudf.read_parquet("example_output/temp_parquet")
df

ddf.to_parquet("example_output/ddf_parquet_files")

df.to_orc("example_output/temp_orc")

df2 = cudf.read_orc("example_output/temp_orc")
df2

import time

from dask.distributed import Client, wait
from dask_cuda import LocalCUDACluster

cluster = LocalCUDACluster()
client = Client(cluster)

nrows = 10000000

df2 = cudf.DataFrame({"a": cp.arange(nrows), "b": cp.arange(nrows)})
ddf2 = dask_cudf.from_cudf(df2, npartitions=16)
ddf2["c"] = ddf2["a"] + 5
ddf2

ddf2 = ddf2.persist()
ddf2

# Sleep to ensure the persist finishes and shows in the memory usage
!sleep 5; nvidia-smi

import random

nrows = 10000000

df1 = cudf.DataFrame({"a": cp.arange(nrows), "b": cp.arange(nrows)})
ddf1 = dask_cudf.from_cudf(df1, npartitions=100)

def func(df):
    time.sleep(random.randint(1, 10))
    return (df + 5) * 3 - 11

results_ddf = ddf2.map_partitions(func)
results_ddf = results_ddf.persist()

wait(results_ddf)
rapidsai_public_repos/cudf/docs/cudf/source/user_guide/missing-data.ipynb
import numpy as np

import cudf

df = cudf.DataFrame({"a": [1, 2, None, 4], "b": [0.1, None, 2.3, 17.17]})

df

df.isna()

df["a"].notna()

None == None

np.nan == np.nan

df["b"] == np.nan

s = cudf.Series([None, 1, 2])

s

s == None

s = cudf.Series([1, 2, np.nan], nan_as_null=False)

s

s == np.nan

cudf.Series([1, 2, np.nan])

cudf.Series([1, 2, np.nan], nan_as_null=False)

import pandas as pd

datetime_series = cudf.Series(
    [pd.Timestamp("20120101"), pd.NaT, pd.Timestamp("20120101")]
)
datetime_series

datetime_series.to_pandas()

datetime_series - datetime_series

df1 = cudf.DataFrame(
    {
        "a": [1, None, 2, 3, None],
        "b": cudf.Series([np.nan, 2, 3.2, 0.1, 1], nan_as_null=False),
    }
)

df2 = cudf.DataFrame(
    {"a": [1, 11, 2, 34, 10], "b": cudf.Series([0.23, 22, 3.2, None, 1])}
)

df1

df2

df1 + df2

df1["a"]

df1["a"].sum()

df1["a"].mean()

df1["a"].sum(skipna=False)

df1["a"].mean(skipna=False)

df1["a"].cumsum()

df1["a"].cumsum(skipna=False)

cudf.Series([np.nan], nan_as_null=False).sum()

cudf.Series([np.nan], nan_as_null=False).sum(skipna=False)

cudf.Series([], dtype="float64").sum()

cudf.Series([np.nan], nan_as_null=False).prod()

cudf.Series([np.nan], nan_as_null=False).prod(skipna=False)

cudf.Series([], dtype="float64").prod()

df1

df1.groupby("a").mean()

df1.groupby("a", dropna=False).mean()

series = cudf.Series([1, 2, 3, 4])

series

series[2] = None

series

df1

df1["b"].fillna(10)

import cupy as cp

dff = cudf.DataFrame(cp.random.randn(10, 3), columns=list("ABC"))

dff.iloc[3:5, 0] = np.nan

dff.iloc[4:6, 1] = np.nan

dff.iloc[5:8, 2] = np.nan

dff

dff.fillna(dff.mean())

dff.fillna(dff.mean()[1:3])

df1

df1.dropna(axis=0)

df1.dropna(axis=1)

df1["a"].dropna()

series = cudf.Series([0.0, 1.0, 2.0, 3.0, 4.0])

series

series.replace(0, 5)

series.replace(0, None)

series.replace([0, 1, 2, 3, 4], [4, 3, 2, 1, 0])

series.replace({0: 10, 1: 100})

df = cudf.DataFrame({"a": [0, 1, 2, 3, 4], "b": [5, 6, 7, 8, 9]})

df

df.replace({"a": 0, "b": 5}, 100)

d = {"a": list(range(4)), "b": list("ab.."), "c": ["a", "b", None, "d"]}
df = cudf.DataFrame(d)

df

df.replace(".", "A Dot")

df.replace([".", "b"], ["A Dot", None])

df.replace(["a", "."], ["b", "--"])

df.replace({"b": "."}, {"b": "replacement value"})

df = cudf.DataFrame(cp.random.randn(10, 2))

df[np.random.rand(df.shape[0]) > 0.5] = 1.5

df.replace(1.5, None)

df00 = df.iloc[0, 0]

df.replace([1.5, df00], [5, 10])

df.replace(1.5, None, inplace=True)

df
rapidsai_public_repos/cudf/docs/cudf/source/user_guide/copy-on-write.md
(copy-on-write-user-doc)= # Copy-on-write Copy-on-write is a memory management strategy that allows multiple cuDF objects containing the same data to refer to the same memory address as long as neither of them modify the underlying data. With this approach, any operation that generates an unmodified view of an object (such as copies, slices, or methods like `DataFrame.head`) returns a new object that points to the same memory as the original. However, when either the existing or new object is _modified_, a copy of the data is made prior to the modification, ensuring that the changes do not propagate between the two objects. This behavior is best understood by looking at the examples below. The default behaviour in cuDF is for copy-on-write to be disabled, so to use it, one must explicitly opt in by setting a cuDF option. It is recommended to set the copy-on-write at the beginning of the script execution, because when this setting is changed in middle of a script execution there will be un-intended behavior where the objects created when copy-on-write is enabled will still have the copy-on-write behavior whereas the objects created when copy-on-write is disabled will have different behavior. ## Enabling copy-on-write 1. Use `cudf.set_option`: ```python >>> import cudf >>> cudf.set_option("copy_on_write", True) ``` 2. Set the environment variable ``CUDF_COPY_ON_WRITE`` to ``1`` prior to the launch of the Python interpreter: ```bash export CUDF_COPY_ON_WRITE="1" python -c "import cudf" ``` ## Disabling copy-on-write Copy-on-write can be disabled by setting the ``copy_on_write`` option to ``False``: ```python >>> cudf.set_option("copy_on_write", False) ``` ## Making copies There are no additional changes required in the code to make use of copy-on-write. 
```python >>> series = cudf.Series([1, 2, 3, 4]) ``` Performing a shallow copy will create a new Series object pointing to the same underlying device memory: ```python >>> copied_series = series.copy(deep=False) >>> series 0 1 1 2 2 3 3 4 dtype: int64 >>> copied_series 0 1 1 2 2 3 3 4 dtype: int64 ``` When a write operation is performed on either ``series`` or ``copied_series``, a true physical copy of the data is created: ```python >>> series[0:2] = 10 >>> series 0 10 1 10 2 3 3 4 dtype: int64 >>> copied_series 0 1 1 2 2 3 3 4 dtype: int64 ``` ## Notes When copy-on-write is enabled, there is no longer a concept of views when slicing or indexing. In this sense, indexing behaves as one would expect for built-in Python containers like `lists`, rather than indexing `numpy arrays`. Modifying a "view" created by cuDF will always trigger a copy and will not modify the original object. Copy-on-write produces much more consistent copy semantics. Since every object is a copy of the original, users no longer have to think about when modifications may unexpectedly happen in place. This will bring consistency across operations and bring cudf and pandas behavior into alignment when copy-on-write is enabled for both. 
Here is one example where pandas and cudf are currently inconsistent without copy-on-write enabled: ```python >>> import pandas as pd >>> s = pd.Series([1, 2, 3, 4, 5]) >>> s1 = s[0:2] >>> s1[0] = 10 >>> s1 0 10 1 2 dtype: int64 >>> s 0 10 1 2 2 3 3 4 4 5 dtype: int64 >>> import cudf >>> s = cudf.Series([1, 2, 3, 4, 5]) >>> s1 = s[0:2] >>> s1[0] = 10 >>> s1 0 10 1 2 dtype: int64 >>> s 0 1 1 2 2 3 3 4 4 5 dtype: int64 ``` The above inconsistency is solved when copy-on-write is enabled: ```python >>> import pandas as pd >>> pd.set_option("mode.copy_on_write", True) >>> s = pd.Series([1, 2, 3, 4, 5]) >>> s1 = s[0:2] >>> s1[0] = 10 >>> s1 0 10 1 2 dtype: int64 >>> s 0 1 1 2 2 3 3 4 4 5 dtype: int64 >>> import cudf >>> cudf.set_option("copy_on_write", True) >>> s = cudf.Series([1, 2, 3, 4, 5]) >>> s1 = s[0:2] >>> s1[0] = 10 >>> s1 0 10 1 2 dtype: int64 >>> s 0 1 1 2 2 3 3 4 4 5 dtype: int64 ``` ### Explicit deep and shallow copies comparison | | Copy-on-Write enabled | Copy-on-Write disabled (default) | |---------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------| | `.copy(deep=True)` | A true copy is made and changes don't propagate to the original object. | A true copy is made and changes don't propagate to the original object. | | `.copy(deep=False)` | Memory is shared between the two objects, but any write operation on one object will trigger a true physical copy before the write is performed. Hence changes will not propagate to the original object. | Memory is shared between the two objects and changes performed on one will propagate to the other object. |
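The shallow-copy semantics in the table above map directly onto pandas' own copy-on-write mode (assuming pandas >= 2.0), which the cuDF option is designed to align with. A minimal sketch using pandas, since it runs without a GPU:

```python
import pandas as pd

# Enable pandas' copy-on-write mode; in pandas >= 3.0 this is the
# default behavior and the option may no longer exist, hence the guard.
try:
    pd.set_option("mode.copy_on_write", True)
except Exception:
    pass

s = pd.Series([1, 2, 3, 4])
shallow = s.copy(deep=False)  # memory is shared until a write happens

# Writing to the shallow copy triggers a physical copy first,
# so the change does not propagate back to the original object.
shallow.iloc[0] = 10

print(s.iloc[0])        # the original value is unchanged
print(shallow.iloc[0])  # only the copy sees the write
```

Once `cudf.set_option("copy_on_write", True)` has been called, cuDF's `.copy(deep=False)` behaves the same way.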
0
rapidsai_public_repos/cudf/docs/cudf/source
rapidsai_public_repos/cudf/docs/cudf/source/user_guide/groupby.md
--- myst: substitutions: describe: '`describe`' --- (basics-groupby)= # GroupBy cuDF supports a small (but important) subset of Pandas' [groupby API](https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html). ## Summary of supported operations 1. Grouping by one or more columns 2. Basic aggregations such as "sum", "mean", etc. 3. Quantile aggregation 4. A "collect" or `list` aggregation for collecting values in a group into lists 5. Automatic exclusion of columns with unsupported dtypes ("nuisance" columns) when aggregating 6. Iterating over the groups of a GroupBy object 7. `GroupBy.groups` API that returns a mapping of group keys to row labels 8. `GroupBy.apply` API for performing arbitrary operations on each group. Note that this has very limited functionality compared to the equivalent Pandas function. See the section on [apply](#groupby-apply) for more details. 9. `GroupBy.pipe` similar to [Pandas](https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html#piping-function-calls). ## Grouping A GroupBy object is created by grouping the values of a `Series` or `DataFrame` by one or more columns: ```python >>> import cudf >>> df = cudf.DataFrame({'a': [1, 1, 1, 2, 2], 'b': [1, 1, 2, 2, 3], 'c': [1, 2, 3, 4, 5]}) >>> df a b c 0 1 1 1 1 1 1 2 2 1 2 3 3 2 2 4 4 2 3 5 >>> gb1 = df.groupby('a') # grouping by a single column >>> gb2 = df.groupby(['a', 'b']) # grouping by multiple columns >>> gb3 = df.groupby(cudf.Series(['a', 'a', 'b', 'b', 'b'])) # grouping by an external column ``` ````{warning} Unlike Pandas, cuDF uses `sort=False` by default to achieve better performance, which does not guarantee any particular group order in the result. 
For example: ```python >>> df = cudf.DataFrame({'a' : [2, 2, 1], 'b' : [42, 21, 11]}) >>> df.groupby('a').sum() b a 2 63 1 11 >>> df.to_pandas().groupby('a').sum() b a 1 11 2 63 ``` Setting `sort=True` will produce Pandas-like output, but with some performance penalty: ```python >>> df.groupby('a', sort=True).sum() b a 1 11 2 63 ``` ```` ### Grouping by index levels You can also group by one or more levels of a MultiIndex: ```python >>> df = cudf.DataFrame( ... {'a': [1, 1, 1, 2, 2], 'b': [1, 1, 2, 2, 3], 'c': [1, 2, 3, 4, 5]} ... ).set_index(['a', 'b']) ... >>> df.groupby(level='a') ``` ### The `Grouper` object A `Grouper` can be used to disambiguate between columns and levels when they have the same name: ```python >>> df b c b 1 1 1 1 1 2 1 2 3 2 2 4 2 3 5 >>> df.groupby('b', level='b') # ValueError: Cannot specify both by and level >>> df.groupby([cudf.Grouper(key='b'), cudf.Grouper(level='b')]) # OK ``` ## Aggregation Aggregations on groups are supported via the `agg` method: ```python >>> df a b c 0 1 1 1 1 1 1 2 2 1 2 3 3 2 2 4 4 2 3 5 >>> df.groupby('a').agg('sum') b c a 1 4 6 2 5 9 >>> df.groupby('a').agg({'b': ['sum', 'min'], 'c': 'mean'}) b c sum min mean a 1 4 1 2.0 2 5 2 4.5 >>> df.groupby("a").corr(method="pearson") b c a 1 b 1.000000 0.866025 c 0.866025 1.000000 2 b 1.000000 1.000000 c 1.000000 1.000000 ``` The following table summarizes the available aggregations and the types that support them: ```{eval-rst} .. 
table:: :class: special-table +------------------------------------+-----------+------------+----------+---------------+--------+----------+------------+-----------+ | Aggregations / dtypes | Numeric | Datetime | String | Categorical | List | Struct | Interval | Decimal | +====================================+===========+============+==========+===============+========+==========+============+===========+ | count | ✅ | ✅ | ✅ | ✅ | | | | ✅ | +------------------------------------+-----------+------------+----------+---------------+--------+----------+------------+-----------+ | size | ✅ | ✅ | ✅ | ✅ | | | | ✅ | +------------------------------------+-----------+------------+----------+---------------+--------+----------+------------+-----------+ | sum | ✅ | ✅ | | | | | | ✅ | +------------------------------------+-----------+------------+----------+---------------+--------+----------+------------+-----------+ | idxmin | ✅ | ✅ | | | | | | ✅ | +------------------------------------+-----------+------------+----------+---------------+--------+----------+------------+-----------+ | idxmax | ✅ | ✅ | | | | | | ✅ | +------------------------------------+-----------+------------+----------+---------------+--------+----------+------------+-----------+ | min | ✅ | ✅ | ✅ | | | | | ✅ | +------------------------------------+-----------+------------+----------+---------------+--------+----------+------------+-----------+ | max | ✅ | ✅ | ✅ | | | | | ✅ | +------------------------------------+-----------+------------+----------+---------------+--------+----------+------------+-----------+ | mean | ✅ | ✅ | | | | | | | +------------------------------------+-----------+------------+----------+---------------+--------+----------+------------+-----------+ | var | ✅ | ✅ | | | | | | | +------------------------------------+-----------+------------+----------+---------------+--------+----------+------------+-----------+ | std | ✅ | ✅ | | | | | | | 
+------------------------------------+-----------+------------+----------+---------------+--------+----------+------------+-----------+ | quantile | ✅ | ✅ | | | | | | | +------------------------------------+-----------+------------+----------+---------------+--------+----------+------------+-----------+ | median | ✅ | ✅ | | | | | | | +------------------------------------+-----------+------------+----------+---------------+--------+----------+------------+-----------+ | nunique | ✅ | ✅ | ✅ | ✅ | | | | ✅ | +------------------------------------+-----------+------------+----------+---------------+--------+----------+------------+-----------+ | nth | ✅ | ✅ | ✅ | | | | | ✅ | +------------------------------------+-----------+------------+----------+---------------+--------+----------+------------+-----------+ | collect | ✅ | ✅ | ✅ | | ✅ | | | ✅ | +------------------------------------+-----------+------------+----------+---------------+--------+----------+------------+-----------+ | unique | ✅ | ✅ | ✅ | ✅ | | | | | +------------------------------------+-----------+------------+----------+---------------+--------+----------+------------+-----------+ | corr | ✅ | | | | | | | ✅ | +------------------------------------+-----------+------------+----------+---------------+--------+----------+------------+-----------+ | cov | ✅ | | | | | | | ✅ | +------------------------------------+-----------+------------+----------+---------------+--------+----------+------------+-----------+ ``` ## GroupBy apply To apply a function to each group, use the `GroupBy.apply()` method: ```python >>> df a b c 0 1 1 1 1 1 1 2 2 1 2 3 3 2 2 4 4 2 3 5 >>> df.groupby('a').apply(lambda x: x.max() - x.min()) a b c a 0 0 1 2 1 0 1 1 ``` ### Limitations - `apply` works by applying the provided function to each group sequentially, and concatenating the results together. **This can be very slow**, especially for a large number of small groups. 
For a small number of large groups, it can give acceptable performance. - The results may not always match Pandas exactly. For example, cuDF may return a `DataFrame` containing a single column where Pandas returns a `Series`. Some post-processing may be required to match Pandas behavior. - cuDF does not support some of the exceptional cases that Pandas supports with `apply`, such as calling [describe] inside the callable. ## Transform The `.transform()` method aggregates per group, and broadcasts the result to the group size, resulting in a Series/DataFrame that is of the same size as the input Series/DataFrame. ```python >>> import cudf >>> df = cudf.DataFrame({'a': [2, 1, 1, 2, 2], 'b': [1, 2, 3, 4, 5]}) >>> df.groupby('a').transform('max') b 0 5 1 3 2 3 3 5 4 5 ``` ## Rolling window calculations Use the `GroupBy.rolling()` method to perform rolling window calculations on each group: ```python >>> df a b c 0 1 1 1 1 1 1 2 2 1 2 3 3 2 2 4 4 2 3 5 ``` Rolling window sum on each group with a window size of 2: ```python >>> df.groupby('a').rolling(2).sum() a b c a 1 0 <NA> <NA> <NA> 1 2 2 3 2 2 3 5 2 3 <NA> <NA> <NA> 4 4 5 9 ``` [describe]: https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html#flexible-apply
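Items 6 and 7 of the summary of supported operations above (iterating over groups and the `GroupBy.groups` mapping) follow the same calling pattern as pandas. A minimal sketch of both, shown here with pandas so that it runs without a GPU:

```python
import pandas as pd  # cuDF's GroupBy mirrors this calling pattern

df = pd.DataFrame({"a": [1, 1, 2], "b": [10, 20, 30]})
gb = df.groupby("a")

# Iterating over a GroupBy yields (group_key, sub-DataFrame) pairs
sizes = {key: len(group) for key, group in gb}

# `.groups` maps each group key to the row labels belonging to it
row_labels = {key: list(labels) for key, labels in gb.groups.items()}

print(sizes)       # {1: 2, 2: 1}
print(row_labels)  # {1: [0, 1], 2: [2]}
```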
0
rapidsai_public_repos/cudf/docs/cudf/source
rapidsai_public_repos/cudf/docs/cudf/source/user_guide/guide-to-udfs.ipynb
import numpy as np import cudf from cudf.datasets import randomdata# Create a cuDF series sr = cudf.Series([1, 2, 3])# define a scalar function def f(x): return x + 1sr.apply(f)def g(x, const): return x + const# cuDF apply sr.apply(g, args=(42,))# Create a cuDF series with nulls sr = cudf.Series([1, cudf.NA, 3]) sr# redefine the same function from above def f(x): return x + 1# cuDF result sr.apply(f)def f_null_sensitive(x): # do something if the input is null if x is cudf.NA: return 42 else: return x + 1# cuDF result sr.apply(f_null_sensitive)sr = cudf.Series(["", "abc", "some_example"])def f(st): if len(st) > 0: if st.startswith("a"): return 1 elif "example" in st: return 2 else: return -1 else: return 42result = sr.apply(f) print(result)from cudf.core.udf.utils import set_malloc_heap_size set_malloc_heap_size(int(2e9))df = randomdata(nrows=5, dtypes={"a": int, "b": int, "c": int}, seed=12)from numba import cuda @cuda.jit def multiply(in_col, out_col, multiplier): i = cuda.grid(1) if i < in_col.size: # boundary guard out_col[i] = in_col[i] * multipliersize = len(df["a"]) df["e"] = 0.0 multiply.forall(size)(df["a"], df["e"], 10.0)df.head()def f(row): return row["A"] + row["B"]df = cudf.DataFrame({"A": [1, 2, 3], "B": [4, cudf.NA, 6]}) dfdf.apply(f, axis=1)df.to_pandas(nullable=True).apply(f, axis=1)def f(row): x = row["a"] if x is cudf.NA: return 0 else: return x + 1 df = cudf.DataFrame({"a": [1, cudf.NA, 3]}) dfdf.apply(f, axis=1)def f(row): x = row["a"] y = row["b"] if x + y > 3: return cudf.NA else: return x + y df = cudf.DataFrame({"a": [1, 2, 3], "b": [2, 1, 1]}) dfdf.apply(f, axis=1)def f(row): return row["a"] + row["b"] df = cudf.DataFrame({"a": [1, 2, 3], "b": [0.5, cudf.NA, 3.14]}) dfdf.apply(f, axis=1)def f(row): x = row["a"] if x > 3: return x else: return 1.5 df = cudf.DataFrame({"a": [1, 3, 5]}) dfdf.apply(f, axis=1)def f(row): return row["a"] + (row["b"] - (row["c"] / row["d"])) % row["e"] df = cudf.DataFrame( { "a": [1, 2, 3], "b": [4, 5, 6], "c": 
[cudf.NA, 4, 4], "d": [8, 7, 8], "e": [7, 1, 6], } ) dfdf.apply(f, axis=1)str_df = cudf.DataFrame( {"str_col": ["abc", "ABC", "Example"], "scale": [1, 2, 3]} ) str_dfdef f(row): st = row["str_col"] scale = row["scale"] if len(st) > 5: return len(st) + scale else: return len(st)result = str_df.apply(f, axis=1) print(result)def conditional_add(x, y, out): for i, (a, e) in enumerate(zip(x, y)): if a > 0: out[i] = a + e else: out[i] = adf = df.apply_rows( conditional_add, incols={"a": "x", "e": "y"}, outcols={"out": np.float64}, kwargs={}, ) df.head()def gpu_add(a, b, out): for i, (x, y) in enumerate(zip(a, b)): out[i] = x + y df = randomdata(nrows=5, dtypes={"a": int, "b": int, "c": int}, seed=12) df.loc[2, "a"] = None df.loc[3, "b"] = None df.loc[1, "c"] = None df.head()df = df.apply_rows( gpu_add, incols=["a", "b"], outcols={"out": np.float64}, kwargs={} ) df.head()ser = cudf.Series([16, 25, 36, 49, 64, 81], dtype="float64") serrolling = ser.rolling(window=3, min_periods=3, center=False) rollingimport math def example_func(window): b = 0 for a in window: b = max(b, math.sqrt(a)) if b == 8: return 100 return brolling.apply(example_func)df2 = cudf.DataFrame() df2["a"] = np.arange(55, 65, dtype="float64") df2["b"] = np.arange(55, 65, dtype="float64") df2.head()rolling = df2.rolling(window=3, min_periods=3, center=False) rolling.apply(example_func)df = randomdata( nrows=10, dtypes={"a": float, "b": bool, "c": str, "e": float}, seed=12 ) df.head()grouped = df.groupby(["b"])def rolling_avg(e, rolling_avg_e): win_size = 3 for i in range(cuda.threadIdx.x, len(e), cuda.blockDim.x): if i < win_size - 1: # If there is not enough data to fill the window, # take the average to be NaN rolling_avg_e[i] = np.nan else: total = 0 for j in range(i - win_size + 1, i + 1): total += e[j] rolling_avg_e[i] = total / win_sizeresults = grouped.apply_grouped( rolling_avg, incols=["e"], outcols=dict(rolling_avg_e=np.float64) ) resultsimport cupy as cp s = cudf.Series([1.0, 2, 3, 4, 10]) arr = 
cp.asarray(s) arr@cuda.jit def multiply_by_5(x, out): i = cuda.grid(1) if i < x.size: out[i] = x[i] * 5 out = cudf.Series(cp.zeros(len(s), dtype="int32")) multiply_by_5.forall(s.shape[0])(s, out) outout = cp.empty_like(arr) multiply_by_5.forall(arr.size)(arr, out) out
0
rapidsai_public_repos/cudf/docs/cudf/source
rapidsai_public_repos/cudf/docs/cudf/source/user_guide/PandasCompat.md
# Pandas Compatibility Notes ```{eval-rst} .. pandas-compat-list:: ```
0
rapidsai_public_repos/cudf/docs/cudf/source
rapidsai_public_repos/cudf/docs/cudf/source/user_guide/index.md
# cuDF User Guide ```{toctree} :maxdepth: 2 api_docs/index 10min pandas-comparison data-types io/index missing-data groupby guide-to-udfs cupy-interop options performance-comparisons/index PandasCompat copy-on-write ```
0
rapidsai_public_repos/cudf/docs/cudf/source
rapidsai_public_repos/cudf/docs/cudf/source/user_guide/pandas-comparison.md
# Comparison of cuDF and Pandas cuDF is a DataFrame library that closely matches the Pandas API, but when used directly is *not* a full drop-in replacement for Pandas. There are some differences between cuDF and Pandas, both in terms of API and behaviour. This page documents the similarities and differences between cuDF and Pandas. Starting with the v23.10.01 release, cuDF also provides a pandas accelerator mode (`cudf.pandas`) that supports 100% of the pandas API and accelerates pandas code on the GPU without requiring any code change. See the [`cudf.pandas` documentation](../cudf_pandas/index). ## Supported operations cuDF supports many of the same data structures and operations as Pandas. This includes `Series`, `DataFrame`, `Index` and operations on them such as unary and binary operations, indexing, filtering, concatenating, joining, groupby and window operations - among many others. The best way to check if we support a particular Pandas API is to search our [API docs](/user_guide/api_docs/index). ## Data types cuDF supports many of the commonly-used data types in Pandas, including numeric, datetime, timestamp, string, and categorical data types. In addition, we support special data types for decimal, list, and "struct" values. See the section on [Data Types](data-types) for details. Note that we do not support custom data types like Pandas' `ExtensionDtype`. ## Null (or "missing") values Unlike Pandas, *all* data types in cuDF are nullable, meaning they can contain missing values (represented by `cudf.NA`). 
```{code} python >>> s = cudf.Series([1, 2, cudf.NA]) >>> s 0 1 1 2 2 <NA> dtype: int64 ``` Nulls are not coerced to `NaN` in any situation; compare the behavior of cuDF with Pandas below: ```{code} python >>> s = cudf.Series([1, 2, cudf.NA], dtype="category") >>> s 0 1 1 2 2 <NA> dtype: category Categories (2, int64): [1, 2] >>> s = pd.Series([1, 2, pd.NA], dtype="category") >>> s 0 1 1 2 2 NaN dtype: category Categories (2, int64): [1, 2] ``` See the docs on [missing data](missing-data) for details. (pandas-comparison/iteration)= ## Iteration Iterating over a cuDF `Series`, `DataFrame` or `Index` is not supported. This is because iterating over data that resides on the GPU will yield *extremely* poor performance, as GPUs are optimized for highly parallel operations rather than sequential operations. In the vast majority of cases, it is possible to avoid iteration and use an existing function or method to accomplish the same task. If you absolutely must iterate, copy the data from GPU to CPU by using `.to_arrow()` or `.to_pandas()`, then copy the result back to GPU using `.from_arrow()` or `.from_pandas()`. ## Result ordering By default, `join` (or `merge`) and `groupby` operations in cuDF do *not* guarantee output ordering. 
Compare the results obtained from Pandas and cuDF below: ```{code} python >>> import cupy as cp >>> cp.random.seed(0) >>> import cudf >>> df = cudf.DataFrame({'a': cp.random.randint(0, 1000, 1000), 'b': range(1000)}) >>> df.groupby("a").mean().head() b a 29 193.0 803 915.0 5 138.0 583 300.0 418 613.0 >>> df.to_pandas().groupby("a").mean().head() b a 0 70.000000 1 356.333333 2 770.000000 3 838.000000 4 342.000000 ``` To match Pandas behavior, you must explicitly pass `sort=True`, or enable the `mode.pandas_compatible` option, which restores Pandas-like ordering without requiring `sort=True`: ```{code} python >>> df.to_pandas().groupby("a", sort=True).mean().head() b a 0 70.000000 1 356.333333 2 770.000000 3 838.000000 4 342.000000 >>> cudf.set_option("mode.pandas_compatible", True) >>> df.groupby("a").mean().head() b a 0 70.000000 1 356.333333 2 770.000000 3 838.000000 4 342.000000 ``` ## Floating-point computation cuDF leverages GPUs to execute operations in parallel. This means the order of operations is not always deterministic. This impacts the determinism of floating-point operations because floating-point arithmetic is non-associative, that is, `(a + b) + c` is not equal to `a + (b + c)`. For example, `s.sum()` is not guaranteed to produce identical results to Pandas nor produce identical results from run to run, when `s` is a Series of floats. If you need to compare floating point results, you should typically do so using the functions provided in the [`cudf.testing`](/user_guide/api_docs/general_utilities) module, which allow you to compare values up to a desired precision. ## Column names Unlike Pandas, cuDF does not support duplicate column names. It is best to use unique strings for column names. ## No true `"object"` data type In Pandas and NumPy, the `"object"` data type is used for collections of arbitrary Python objects. 
For example, in Pandas you can do the following: ```{code} python >>> import pandas as pd >>> s = pd.Series(["a", 1, [1, 2, 3]]) 0 a 1 1 2 [1, 2, 3] dtype: object ``` For compatibility with Pandas, cuDF reports the data type for strings as `"object"`, but we do *not* support storing or operating on collections of arbitrary Python objects. ## `.apply()` function limitations The `.apply()` function in Pandas accepts a user-defined function (UDF) that can include arbitrary operations that are applied to each value of a `Series`, `DataFrame`, or in the case of a groupby, each group. cuDF also supports `.apply()`, but it relies on Numba to JIT compile the UDF and execute it on the GPU. This can be extremely fast, but imposes a few limitations on what operations are allowed in the UDF. See the docs on [UDFs](guide-to-udfs) for details.
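To make the `.apply()` limitation concrete, a UDF restricted to scalar arithmetic and control flow (the style that can be JIT-compiled) looks like the sketch below. It is written with pandas so it runs anywhere, but the same call pattern works on a cuDF `Series`; `clip_and_scale` is a hypothetical function, not part of either library:

```python
import pandas as pd

def clip_and_scale(x):
    # Only scalar math and branching: the kind of UDF that cuDF
    # can compile with Numba for GPU execution.
    if x < 0:
        return 0
    return x * 2

s = pd.Series([-1, 2, 3])
result = s.apply(clip_and_scale)
print(result.tolist())  # [0, 4, 6]
```

UDFs that manipulate arbitrary Python objects (lists, dicts, custom classes) fall outside what the JIT compiler supports.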
0
rapidsai_public_repos/cudf/docs/cudf/source
rapidsai_public_repos/cudf/docs/cudf/source/user_guide/data-types.md
# Supported Data Types cuDF supports many data types supported by NumPy and Pandas, including numeric, datetime, timedelta, categorical and string data types. We also provide special data types for working with decimals, list-like, and dictionary-like data. All data types in cuDF are [nullable](missing-data). <div class="special-table"> | Kind of data | Data type(s) | |----------------------|------------------------------------------------------------------------------------------------------------------------------------------| | Signed integer | `'int8'`, `'int16'`, `'int32'`, `'int64'` | | Unsigned integer | `'uint32'`, `'uint64'` | | Floating-point | `'float32'`, `'float64'` | | Datetime | `'datetime64[s]'`, `'datetime64[ms]'`, `'datetime64[us]'`, `'datetime64[ns]'` | | Timedelta (duration) | `'timedelta[s]'`, `'timedelta[ms]'`, `'timedelta[us]'`, `'timedelta[ns]'` | | Category | {py:class}`~cudf.core.dtypes.CategoricalDtype` | | String | `'object'` or `'string'` | | Decimal | {py:class}`~cudf.core.dtypes.Decimal32Dtype`, {py:class}`~cudf.core.dtypes.Decimal64Dtype`, {py:class}`~cudf.core.dtypes.Decimal128Dtype`| | List | {py:class}`~cudf.core.dtypes.ListDtype` | | Struct | {py:class}`~cudf.core.dtypes.StructDtype` | </div> ## NumPy data types We use NumPy data types for integer, floating, datetime, timedelta, and string data types. Thus, just like in NumPy, `np.dtype("float32")`, `np.float32`, and `"float32"` are all acceptable ways to specify the `float32` data type: ```python >>> import cudf >>> s = cudf.Series([1, 2, 3], dtype="float32") >>> s 0 1.0 1 2.0 2 3.0 dtype: float32 ``` ## A note on `object` The data type associated with string data in cuDF is `"object"`. ```python >>> import cudf >>> s = cudf.Series(["abc", "def", "ghi"]) >>> s.dtype dtype("object") ``` This is for compatibility with Pandas, but it can be misleading. In both NumPy and Pandas, `"object"` is the data type associated with data composed of arbitrary Python objects (not just strings). 
However, cuDF does not support storing arbitrary Python objects. ## Decimal data types We provide special data types for working with decimal data, namely {py:class}`~cudf.core.dtypes.Decimal32Dtype`, {py:class}`~cudf.core.dtypes.Decimal64Dtype`, and {py:class}`~cudf.core.dtypes.Decimal128Dtype`. Use these data types when you need to store values with greater precision than allowed by floating-point representation. Decimal data types in cuDF are based on fixed-point representation. A decimal data type is composed of a _precision_ and a _scale_. The precision represents the total number of digits in each value of this dtype. For example, the precision associated with the decimal value `1.023` is `4`. The scale is the total number of digits to the right of the decimal point. The scale associated with the value `1.023` is 3. Each decimal data type is associated with a maximum precision: ```python >>> cudf.Decimal32Dtype.MAX_PRECISION 9.0 >>> cudf.Decimal64Dtype.MAX_PRECISION 18.0 >>> cudf.Decimal128Dtype.MAX_PRECISION 38 ``` One way to create a decimal Series is from values of type [decimal.Decimal][python-decimal]. ```python >>> from decimal import Decimal >>> s = cudf.Series([Decimal("1.01"), Decimal("4.23"), Decimal("0.5")]) >>> s 0 1.01 1 4.23 2 0.50 dtype: decimal128 >>> s.dtype Decimal128Dtype(precision=3, scale=2) ``` Notice the data type of the result: `1.01`, `4.23`, `0.50` can all be represented with a precision of at least 3 and a scale of at least 2. However, the value `1.234` needs a precision of at least 4, and a scale of at least 3, and cannot be fully represented using this data type: ```python >>> s[1] = Decimal("1.234") # raises an error ``` ## Nested data types (`List` and `Struct`) {py:class}`~cudf.core.dtypes.ListDtype` and {py:class}`~cudf.core.dtypes.StructDtype` are special data types in cuDF for working with list-like and dictionary-like data. 
These are referred to as "nested" data types, because they enable you to store a list of lists, or a struct of lists, or a struct of lists of lists, etc. You can create list and struct Series from existing Pandas Series of lists and dictionaries respectively: ```python >>> psr = pd.Series([{'a': 1, 'b': 2}, {'a': 3, 'b': 4}]) >>> psr 0 {'a': 1, 'b': 2} 1 {'a': 3, 'b': 4} dtype: object >>> gsr = cudf.from_pandas(psr) >>> gsr 0 {'a': 1, 'b': 2} 1 {'a': 3, 'b': 4} dtype: struct >>> gsr.dtype StructDtype({'a': dtype('int64'), 'b': dtype('int64')}) ``` Or by reading them from disk, using a [file format that supports nested data](/user_guide/io/index.md). ```python >>> pdf = pd.DataFrame({"a": [[1, 2], [3, 4, 5], [6, 7, 8]]}) >>> pdf.to_parquet("lists.pq") >>> gdf = cudf.read_parquet("lists.pq") >>> gdf a 0 [1, 2] 1 [3, 4, 5] 2 [6, 7, 8] >>> gdf["a"].dtype ListDtype(int64) ``` [numpy-dtype]: https://numpy.org/doc/stable/reference/arrays.dtypes.html#arrays-dtypes [python-decimal]: https://docs.python.org/3/library/decimal.html#decimal.Decimal
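The precision and scale described in the decimal section above can be derived directly from a `decimal.Decimal` in plain Python, which is a handy way to check whether a value fits a given cuDF decimal dtype. `precision_and_scale` below is a hypothetical helper for illustration, not part of the cuDF API:

```python
from decimal import Decimal

def precision_and_scale(value):
    # Hypothetical helper (not part of cuDF): compute the minimum
    # precision (total digits) and scale (digits to the right of the
    # decimal point) needed to represent `value` exactly.
    _, digits, exponent = value.as_tuple()
    scale = -exponent if exponent < 0 else 0
    precision = max(len(digits), scale)
    return precision, scale

print(precision_and_scale(Decimal("1.023")))  # (4, 3)
print(precision_and_scale(Decimal("0.50")))   # (2, 2)
```

A value fits a decimal dtype when its required precision and scale are both no greater than the dtype's, which is why `Decimal("1.234")` cannot be stored in a `Decimal128Dtype(precision=3, scale=2)` column.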
0
rapidsai_public_repos/cudf/docs/cudf/source
rapidsai_public_repos/cudf/docs/cudf/source/user_guide/options.md
(options_user_guide)= # Options cuDF has an options API to configure and customize global behavior. This API complements the [pandas.options](https://pandas.pydata.org/docs/user_guide/options.html) API with features specific to cuDF. {py:func}`cudf.describe_option` will print the option's description, the current value, and the default value. When no argument is provided, all options are printed. To set the value of an option, use {py:func}`cudf.set_option`. See the [API reference](api.options) for more details.
0
rapidsai_public_repos/cudf/docs/cudf/source/user_guide
rapidsai_public_repos/cudf/docs/cudf/source/user_guide/performance-comparisons/performance-comparisons.ipynb
import os import time import timeit from io import BytesIO import matplotlib.pyplot as plt import numpy as np import pandas as pd import cudfnp.random.seed(0)num_rows = 300_000_000 pdf = pd.DataFrame( { "numbers": np.random.randint(-1000, 1000, num_rows, dtype="int64"), "business": np.random.choice( ["McD", "Buckees", "Walmart", "Costco"], size=num_rows ), } ) pdfgdf = cudf.from_pandas(pdf) gdfdef timeit_pandas_cudf(pd_obj, gd_obj, func, **kwargs): """ A utility function to measure execution time of an API(`func`) in pandas & cudf. Parameters ---------- pd_obj : Pandas object gd_obj : cuDF object func : callable """ pandas_time = timeit.timeit(lambda: func(pd_obj), **kwargs) cudf_time = timeit.timeit(lambda: func(gd_obj), **kwargs) return pandas_time, cudf_timepandas_value_counts, cudf_value_counts = timeit_pandas_cudf( pdf, gdf, lambda df: df.value_counts(), number=30 )pdf = pdf.head(100_000_000) gdf = gdf.head(100_000_000)pandas_concat = timeit.timeit(lambda: pd.concat([pdf, pdf, pdf]), number=30)cudf_concat = timeit.timeit(lambda: cudf.concat([gdf, gdf, gdf]), number=30)pandas_groupby, cudf_groupby = timeit_pandas_cudf( pdf, gdf, lambda df: df.groupby("business").agg(["min", "max", "mean"]), number=30, )num_rows = 1_000_000 pdf = pd.DataFrame( { "numbers": np.random.randint(-1000, 1000, num_rows, dtype="int64"), "business": np.random.choice( ["McD", "Buckees", "Walmart", "Costco"], size=num_rows ), } ) gdf = cudf.from_pandas(pdf)pandas_merge, cudf_merge = timeit_pandas_cudf( pdf, gdf, lambda df: df.merge(df), number=30 )performance_df = pd.DataFrame( { "cudf speedup vs. 
pandas": [
            pandas_value_counts / cudf_value_counts,
            pandas_concat / cudf_concat,
            pandas_groupby / cudf_groupby,
            pandas_merge / cudf_merge,
        ],
    },
    index=["value_counts", "concat", "groupby", "merge"],
)

performance_df

ax = performance_df.plot.bar(
    color="#7400ff",
    ylim=(1, 400),
    rot=0,
    xlabel="Operation",
    ylabel="Speedup factor",
)
ax.bar_label(ax.containers[0], fmt="%.0f")
plt.show()

# Cleaning up used memory for later benchmarks
del pdf
del gdf
import gc

_ = gc.collect()

num_rows = 300_000_000
pd_series = pd.Series(
    np.random.choice(
        ["123", "56.234", "Walmart", "Costco", "rapids ai"], size=num_rows
    )
)

gd_series = cudf.from_pandas(pd_series)

pandas_upper, cudf_upper = timeit_pandas_cudf(
    pd_series, gd_series, lambda s: s.str.upper(), number=20
)

pandas_contains, cudf_contains = timeit_pandas_cudf(
    pd_series, gd_series, lambda s: s.str.contains(r"[0-9][a-z]"), number=20
)

pandas_isalpha, cudf_isalpha = timeit_pandas_cudf(
    pd_series, gd_series, lambda s: s.str.isalpha(), number=20
)

performance_df = pd.DataFrame(
    {
        "cudf speedup vs. pandas": [
            pandas_upper / cudf_upper,
            pandas_contains / cudf_contains,
            pandas_isalpha / cudf_isalpha,
        ],
    },
    index=["upper", "contains", "isalpha"],
)

performance_df

ax = performance_df.plot.bar(
    color="#7400ff",
    ylim=(1, 7000),
    rot=0,
    xlabel="String method",
    ylabel="Speedup factor",
)
ax.bar_label(ax.containers[0], fmt="%.0f")
plt.show()

num_rows = 10_000_000
pdf_age = pd.DataFrame(
    {
        "age": np.random.randint(0, 100, num_rows),
    }
)
pdf_age

gdf_age = cudf.from_pandas(pdf_age)
gdf_age

def age_udf(row):
    if row["age"] < 18:
        return 0
    elif 18 <= row["age"] < 20:
        return 1
    elif 20 <= row["age"] < 30:
        return 2
    elif 30 <= row["age"] < 40:
        return 3
    elif 40 <= row["age"] < 50:
        return 4
    elif 50 <= row["age"] < 60:
        return 5
    elif 60 <= row["age"] < 70:
        return 6
    else:
        return 7

pandas_int_udf, cudf_int_udf = timeit_pandas_cudf(
    pdf_age, gdf_age, lambda df: df.apply(age_udf, axis=1), number=1
)

def str_isupper_udf(row):
    if row.isupper():
        return 0
    else:
        return 1

pd_series = pd.Series(
    np.random.choice(["ABC", "abc", "hello world", "AI"], size=100_000_000),
    name="strings",
)
pd_series

gd_series = cudf.from_pandas(pd_series)
gd_series

pandas_str_udf, cudf_str_udf = timeit_pandas_cudf(
    pd_series, gd_series, lambda s: s.apply(str_isupper_udf), number=1
)

performance_df = pd.DataFrame(
    {
        "cudf speedup vs. pandas": [
            pandas_int_udf / cudf_int_udf,
            pandas_str_udf / cudf_str_udf,
        ]
    },
    index=["Numeric", "String"],
)
performance_df

ax = performance_df.plot.bar(
    color="#7400ff",
    ylim=(1, 550),
    rot=0,
    xlabel="UDF Kind",
    ylabel="Speedup factor",
)
ax.bar_label(ax.containers[0], fmt="%.0f")
plt.show()

pandas_int_udf, cudf_int_udf = timeit_pandas_cudf(
    pdf_age, gdf_age, lambda df: df.apply(age_udf, axis=1), number=10
)

pandas_str_udf, cudf_str_udf = timeit_pandas_cudf(
    pd_series, gd_series, lambda s: s.apply(str_isupper_udf), number=10
)

performance_df = pd.DataFrame(
    {
        "cudf speedup vs. pandas": [
            pandas_int_udf / cudf_int_udf,
            pandas_str_udf / cudf_str_udf,
        ]
    },
    index=["Numeric", "String"],
)
performance_df

ax = performance_df.plot.bar(
    color="#7400ff",
    ylim=(1, 100000),
    rot=0,
    xlabel="UDF Kind",
    ylabel="Speedup factor",
)
ax.bar_label(ax.containers[0], fmt="%.0f")
plt.show()

num_rows = 100_000_000
pdf = pd.DataFrame()
pdf["key"] = np.random.randint(0, 2, num_rows)
pdf["val"] = np.random.randint(0, 7, num_rows)


def custom_formula_udf(df):
    df["out"] = df["key"] * df["val"] - 10
    return df


gdf = cudf.from_pandas(pdf)

pandas_udf_groupby, cudf_udf_groupby = timeit_pandas_cudf(
    pdf,
    gdf,
    lambda df: df.groupby(["key"], group_keys=False).apply(custom_formula_udf),
    number=10,
)

performance_df = pd.DataFrame(
    {"cudf speedup vs. pandas": [pandas_udf_groupby / cudf_udf_groupby]},
    index=["Grouped UDF"],
)
performance_df

ax = performance_df.plot.bar(
    color="#7400ff", ylim=(1, 500), rot=0, ylabel="Speedup factor"
)
ax.bar_label(ax.containers[0], fmt="%.0f")
plt.show()
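The benchmark cells above all rely on a `timeit_pandas_cudf` helper that is defined earlier in the notebook and not shown in this excerpt. A minimal sketch of such a helper, assuming it simply times the same callable against the pandas object and the cudf object and returns the average per-run wall time for each (the exact signature in the notebook may differ):

```python
import timeit


def timeit_pandas_cudf(pd_obj, gd_obj, func, number=10):
    """Time ``func`` against two objects and return average seconds per run.

    The caller computes a speedup factor as ``pandas_time / cudf_time``.
    """
    pandas_time = timeit.timeit(lambda: func(pd_obj), number=number) / number
    cudf_time = timeit.timeit(lambda: func(gd_obj), number=number) / number
    return pandas_time, cudf_time


# Usage with plain Python objects (no GPU required for the sketch itself):
t_a, t_b = timeit_pandas_cudf(list(range(1000)), list(range(1000)), sum, number=5)
```

Averaging over `number` runs smooths out warm-up effects such as kernel compilation, which is also why the UDF benchmarks above are run once with `number=1` (cold, includes JIT compilation) and again with `number=10` (warm).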
0
rapidsai_public_repos/cudf/docs/cudf/source/user_guide
rapidsai_public_repos/cudf/docs/cudf/source/user_guide/performance-comparisons/index.md
# Performance comparisons

```{toctree}
:maxdepth: 2

performance-comparisons
```
0
rapidsai_public_repos/cudf/docs/cudf/source/user_guide
rapidsai_public_repos/cudf/docs/cudf/source/user_guide/api_docs/dataframe.rst
========= DataFrame ========= .. currentmodule:: cudf Constructor ~~~~~~~~~~~ .. autosummary:: :toctree: api/ DataFrame Attributes and underlying data ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ **Axes** .. autosummary:: :toctree: api/ DataFrame.axes DataFrame.index DataFrame.columns .. autosummary:: :toctree: api/ DataFrame.dtypes DataFrame.info DataFrame.select_dtypes DataFrame.values DataFrame.ndim DataFrame.size DataFrame.shape DataFrame.memory_usage DataFrame.empty Conversion ~~~~~~~~~~ .. autosummary:: :toctree: api/ DataFrame.astype DataFrame.convert_dtypes DataFrame.copy Indexing, iteration ~~~~~~~~~~~~~~~~~~~ .. autosummary:: :toctree: api/ DataFrame.head DataFrame.at DataFrame.iat DataFrame.loc DataFrame.iloc DataFrame.insert DataFrame.__iter__ DataFrame.items DataFrame.keys DataFrame.iterrows DataFrame.itertuples DataFrame.pop DataFrame.tail DataFrame.isin DataFrame.where DataFrame.mask DataFrame.query Binary operator functions ~~~~~~~~~~~~~~~~~~~~~~~~~ .. autosummary:: :toctree: api/ DataFrame.add DataFrame.sub DataFrame.subtract DataFrame.mul DataFrame.multiply DataFrame.truediv DataFrame.div DataFrame.divide DataFrame.floordiv DataFrame.mod DataFrame.pow DataFrame.dot DataFrame.radd DataFrame.rsub DataFrame.rmul DataFrame.rdiv DataFrame.rtruediv DataFrame.rfloordiv DataFrame.rmod DataFrame.rpow DataFrame.round DataFrame.lt DataFrame.gt DataFrame.le DataFrame.ge DataFrame.ne DataFrame.eq DataFrame.product Function application, GroupBy & window ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autosummary:: :toctree: api/ DataFrame.apply DataFrame.applymap DataFrame.apply_chunks DataFrame.apply_rows DataFrame.pipe DataFrame.agg DataFrame.groupby DataFrame.rolling .. _api.dataframe.stats: Computations / descriptive stats ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. 
autosummary:: :toctree: api/ DataFrame.abs DataFrame.all DataFrame.any DataFrame.clip DataFrame.corr DataFrame.count DataFrame.cov DataFrame.cummax DataFrame.cummin DataFrame.cumprod DataFrame.cumsum DataFrame.describe DataFrame.diff DataFrame.eval DataFrame.kurt DataFrame.kurtosis DataFrame.max DataFrame.mean DataFrame.median DataFrame.min DataFrame.mode DataFrame.pct_change DataFrame.prod DataFrame.product DataFrame.quantile DataFrame.rank DataFrame.round DataFrame.scale DataFrame.skew DataFrame.sum DataFrame.std DataFrame.var DataFrame.nunique DataFrame.value_counts Reindexing / selection / label manipulation ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autosummary:: :toctree: api/ DataFrame.add_prefix DataFrame.add_suffix DataFrame.drop DataFrame.drop_duplicates DataFrame.duplicated DataFrame.equals DataFrame.first DataFrame.head DataFrame.last DataFrame.reindex DataFrame.rename DataFrame.reset_index DataFrame.sample DataFrame.searchsorted DataFrame.set_index DataFrame.repeat DataFrame.tail DataFrame.take DataFrame.tile DataFrame.truncate .. _api.dataframe.missing: Missing data handling ~~~~~~~~~~~~~~~~~~~~~ .. autosummary:: :toctree: api/ DataFrame.backfill DataFrame.bfill DataFrame.dropna DataFrame.ffill DataFrame.fillna DataFrame.interpolate DataFrame.isna DataFrame.isnull DataFrame.nans_to_nulls DataFrame.notna DataFrame.notnull DataFrame.pad DataFrame.replace Reshaping, sorting, transposing ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autosummary:: :toctree: api/ DataFrame.argsort DataFrame.interleave_columns DataFrame.partition_by_hash DataFrame.pivot DataFrame.pivot_table DataFrame.scatter_by_map DataFrame.sort_values DataFrame.sort_index DataFrame.nlargest DataFrame.nsmallest DataFrame.swaplevel DataFrame.stack DataFrame.unstack DataFrame.melt DataFrame.explode DataFrame.to_struct DataFrame.T DataFrame.transpose Combining / comparing / joining / merging ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. 
autosummary:: :toctree: api/ DataFrame.append DataFrame.assign DataFrame.join DataFrame.merge DataFrame.update Time Series-related ~~~~~~~~~~~~~~~~~~~ .. autosummary:: :toctree: api/ DataFrame.shift DataFrame.resample Serialization / IO / conversion ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autosummary:: :toctree: api/ DataFrame.deserialize DataFrame.device_deserialize DataFrame.device_serialize DataFrame.from_arrow DataFrame.from_dict DataFrame.from_pandas DataFrame.from_records DataFrame.hash_values DataFrame.host_deserialize DataFrame.host_serialize DataFrame.serialize DataFrame.to_arrow DataFrame.to_dict DataFrame.to_dlpack DataFrame.to_parquet DataFrame.to_csv DataFrame.to_cupy DataFrame.to_hdf DataFrame.to_dict DataFrame.to_json DataFrame.to_numpy DataFrame.to_pandas DataFrame.to_feather DataFrame.to_records DataFrame.to_string DataFrame.values DataFrame.values_host
0
rapidsai_public_repos/cudf/docs/cudf/source/user_guide
rapidsai_public_repos/cudf/docs/cudf/source/user_guide/api_docs/index_objects.rst
============= Index objects ============= Index ----- .. currentmodule:: cudf **Many of these methods or variants thereof are available on the objects that contain an index (Series/DataFrame) and those should most likely be used before calling these methods directly.** .. autosummary:: :toctree: api/ Index Properties ~~~~~~~~~~ .. autosummary:: :toctree: api/ Index.dtype Index.duplicated Index.empty Index.has_duplicates Index.hasnans Index.is_monotonic Index.is_monotonic_increasing Index.is_monotonic_decreasing Index.is_unique Index.name Index.names Index.ndim Index.nlevels Index.shape Index.size Index.values Modifying and computations ~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autosummary:: :toctree: api/ Index.any Index.copy Index.drop_duplicates Index.equals Index.factorize Index.is_boolean Index.is_categorical Index.is_floating Index.is_integer Index.is_interval Index.is_numeric Index.is_object Index.min Index.max Index.rename Index.repeat Index.where Index.take Index.unique Compatibility with MultiIndex ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autosummary:: :toctree: api/ Index.set_names Missing values ~~~~~~~~~~~~~~ .. autosummary:: :toctree: api/ Index.fillna Index.dropna Index.isna Index.notna Memory usage ~~~~~~~~~~~~ .. autosummary:: :toctree: api/ Index.memory_usage Conversion ~~~~~~~~~~ .. autosummary:: :toctree: api/ Index.astype Index.deserialize Index.device_deserialize Index.device_serialize Index.host_deserialize Index.host_serialize Index.serialize Index.tolist Index.to_arrow Index.to_cupy Index.to_list Index.to_numpy Index.to_series Index.to_frame Index.to_pandas Index.to_dlpack Index.from_pandas Index.from_arrow Sorting ~~~~~~~ .. autosummary:: :toctree: api/ Index.argsort Index.find_label_range Index.searchsorted Index.sort_values Time-specific operations ~~~~~~~~~~~~~~~~~~~~~~~~ .. autosummary:: :toctree: api/ Index.shift Combining / joining / set operations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. 
autosummary:: :toctree: api/ Index.append Index.union Index.intersection Index.join Index.difference Selecting ~~~~~~~~~ .. autosummary:: :toctree: api/ Index.get_level_values Index.get_loc Index.get_slice_bound Index.isin String Operations ~~~~~~~~~~~~~~~~~ .. autosummary:: :toctree: api/ Index.str .. _api.numericindex: Numeric Index ------------- .. autosummary:: :toctree: api/ RangeIndex RangeIndex.start RangeIndex.stop RangeIndex.step RangeIndex.to_numpy RangeIndex.to_arrow Int64Index UInt64Index Float64Index .. _api.categoricalindex: CategoricalIndex ---------------- .. autosummary:: :toctree: api/ CategoricalIndex Categorical components ~~~~~~~~~~~~~~~~~~~~~~ .. autosummary:: :toctree: api/ CategoricalIndex.codes CategoricalIndex.categories Modifying and computations ~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autosummary:: :toctree: api/ CategoricalIndex.equals .. _api.intervalindex: IntervalIndex ------------- .. autosummary:: :toctree: api/ IntervalIndex IntervalIndex components ~~~~~~~~~~~~~~~~~~~~~~~~ .. autosummary:: :toctree: api/ IntervalIndex.from_breaks IntervalIndex.values IntervalIndex.get_loc .. _api.multiindex: MultiIndex ---------- .. autosummary:: :toctree: api/ MultiIndex MultiIndex constructors ~~~~~~~~~~~~~~~~~~~~~~~ .. autosummary:: :toctree: api/ MultiIndex.from_tuples MultiIndex.from_product MultiIndex.from_frame MultiIndex.from_arrow MultiIndex properties ~~~~~~~~~~~~~~~~~~~~~ .. autosummary:: :toctree: api/ MultiIndex.names MultiIndex.levels MultiIndex.codes MultiIndex.nlevels MultiIndex components ~~~~~~~~~~~~~~~~~~~~~ .. autosummary:: :toctree: api/ MultiIndex.to_frame MultiIndex.droplevel MultiIndex.swaplevel MultiIndex selecting ~~~~~~~~~~~~~~~~~~~~ .. autosummary:: :toctree: api/ MultiIndex.get_loc MultiIndex.get_level_values .. _api.datetimeindex: DatetimeIndex ------------- .. autosummary:: :toctree: api/ DatetimeIndex Time/date components ~~~~~~~~~~~~~~~~~~~~ .. 
autosummary:: :toctree: api/ DatetimeIndex.year DatetimeIndex.month DatetimeIndex.day DatetimeIndex.hour DatetimeIndex.minute DatetimeIndex.second DatetimeIndex.microsecond DatetimeIndex.nanosecond DatetimeIndex.day_of_year DatetimeIndex.dayofyear DatetimeIndex.dayofweek DatetimeIndex.weekday DatetimeIndex.quarter DatetimeIndex.is_leap_year DatetimeIndex.isocalendar Time-specific operations ~~~~~~~~~~~~~~~~~~~~~~~~ .. autosummary:: :toctree: api/ DatetimeIndex.round DatetimeIndex.ceil DatetimeIndex.floor DatetimeIndex.tz_convert DatetimeIndex.tz_localize Conversion ~~~~~~~~~~ .. autosummary:: :toctree: api/ DatetimeIndex.to_series DatetimeIndex.to_frame TimedeltaIndex -------------- .. autosummary:: :toctree: api/ TimedeltaIndex Components ~~~~~~~~~~ .. autosummary:: :toctree: api/ TimedeltaIndex.days TimedeltaIndex.seconds TimedeltaIndex.microseconds TimedeltaIndex.nanoseconds TimedeltaIndex.components TimedeltaIndex.inferred_freq Conversion ~~~~~~~~~~ .. autosummary:: :toctree: api/ TimedeltaIndex.to_series TimedeltaIndex.to_frame
0
rapidsai_public_repos/cudf/docs/cudf/source/user_guide
rapidsai_public_repos/cudf/docs/cudf/source/user_guide/api_docs/series.rst
====== Series ====== .. currentmodule:: cudf Constructor ----------- .. autosummary:: :toctree: api/ Series Attributes ---------- **Axes** .. autosummary:: :toctree: api/ Series.axes Series.index Series.values Series.data Series.dtype Series.shape Series.ndim Series.nullable Series.nullmask Series.null_count Series.size Series.T Series.memory_usage Series.hasnans Series.has_nulls Series.empty Series.name Series.valid_count Series.values_host Conversion ---------- .. autosummary:: :toctree: api/ Series.astype Series.convert_dtypes Series.copy Series.deserialize Series.device_deserialize Series.device_serialize Series.host_deserialize Series.host_serialize Series.serialize Series.to_list Series.tolist Series.__array__ Series.scale Indexing, iteration ------------------- .. autosummary:: :toctree: api/ Series.loc Series.iloc Series.__iter__ Series.items Series.iteritems Series.keys Binary operator functions ------------------------- .. autosummary:: :toctree: api/ Series.add Series.sub Series.subtract Series.mul Series.multiply Series.truediv Series.div Series.divide Series.floordiv Series.mod Series.pow Series.radd Series.rsub Series.rmul Series.rdiv Series.rtruediv Series.rfloordiv Series.rmod Series.rpow Series.round Series.lt Series.gt Series.le Series.ge Series.ne Series.eq Series.product Series.dot Function application, GroupBy & window -------------------------------------- .. autosummary:: :toctree: api/ Series.apply Series.map Series.groupby Series.rolling Series.pipe .. _api.series.stats: Computations / descriptive stats -------------------------------- .. 
autosummary:: :toctree: api/ Series.abs Series.all Series.any Series.autocorr Series.between Series.clip Series.corr Series.count Series.cov Series.cummax Series.cummin Series.cumprod Series.cumsum Series.describe Series.diff Series.digitize Series.factorize Series.kurt Series.max Series.mean Series.median Series.min Series.mode Series.nlargest Series.nsmallest Series.pct_change Series.prod Series.quantile Series.rank Series.skew Series.std Series.sum Series.var Series.kurtosis Series.unique Series.nunique Series.is_unique Series.is_monotonic Series.is_monotonic_increasing Series.is_monotonic_decreasing Series.value_counts Reindexing / selection / label manipulation ------------------------------------------- .. autosummary:: :toctree: api/ Series.add_prefix Series.add_suffix Series.drop Series.drop_duplicates Series.duplicated Series.equals Series.first Series.head Series.isin Series.last Series.reindex Series.rename Series.reset_index Series.sample Series.take Series.tail Series.tile Series.truncate Series.where Series.mask Missing data handling --------------------- .. autosummary:: :toctree: api/ Series.backfill Series.bfill Series.dropna Series.ffill Series.fillna Series.interpolate Series.isna Series.isnull Series.nans_to_nulls Series.notna Series.notnull Series.pad Series.replace Reshaping, sorting ------------------ .. autosummary:: :toctree: api/ Series.argsort Series.sort_values Series.sort_index Series.explode Series.searchsorted Series.repeat Series.transpose Combining / comparing / joining / merging ----------------------------------------- .. autosummary:: :toctree: api/ Series.append Series.update Time Series-related ------------------- .. autosummary:: :toctree: api/ Series.shift Series.resample Accessors --------- pandas provides dtype-specific methods under various accessors. These are separate namespaces within :class:`Series` that only apply to specific data types. 
=========================== ================================= Data Type Accessor =========================== ================================= Datetime, Timedelta :ref:`dt <api.series.dt>` String :ref:`str <api.series.str>` Categorical :ref:`cat <api.series.cat>` List :ref:`list <api.series.list>` Struct :ref:`struct <api.series.struct>` =========================== ================================= .. _api.series.dt: Datetimelike properties ~~~~~~~~~~~~~~~~~~~~~~~ ``Series.dt`` can be used to access the values of the series as datetimelike and return several properties. These can be accessed like ``Series.dt.<property>``. .. currentmodule:: cudf .. autosummary:: :toctree: api/ Series.dt Datetime properties ^^^^^^^^^^^^^^^^^^^ .. currentmodule:: cudf.core.series.DatetimeProperties .. autosummary:: :toctree: api/ year month day hour minute second microsecond nanosecond dayofweek weekday dayofyear day_of_year quarter is_month_start is_month_end is_quarter_start is_quarter_end is_year_start is_year_end is_leap_year days_in_month Datetime methods ^^^^^^^^^^^^^^^^ .. autosummary:: :toctree: api/ isocalendar strftime round floor ceil tz_localize Timedelta properties ^^^^^^^^^^^^^^^^^^^^ .. currentmodule:: cudf.core.series.TimedeltaProperties .. autosummary:: :toctree: api/ days seconds microseconds nanoseconds components .. _api.series.str: .. include:: string_handling.rst .. _api.series.cat: Categorical accessor ~~~~~~~~~~~~~~~~~~~~ Categorical-dtype specific methods and attributes are available under the ``Series.cat`` accessor. .. currentmodule:: cudf .. autosummary:: :toctree: api/ Series.cat .. currentmodule:: cudf.core.column.categorical.CategoricalAccessor .. autosummary:: :toctree: api/ categories ordered codes reorder_categories add_categories remove_categories set_categories as_ordered as_unordered .. _api.series.list: .. include:: list_handling.rst .. _api.series.struct: .. include:: struct_handling.rst .. 
The following is needed to ensure the generated pages are created with the correct template (otherwise they would be created in the Series/Index class page) .. .. currentmodule:: cudf .. autosummary:: :toctree: api/ :template: autosummary/accessor.rst Series.str Series.cat Series.dt Index.str Serialization / IO / conversion ------------------------------- .. currentmodule:: cudf .. autosummary:: :toctree: api/ Series.to_arrow Series.to_cupy Series.to_dict Series.to_dlpack Series.to_frame Series.to_hdf Series.to_json Series.to_numpy Series.to_pandas Series.to_string Series.from_arrow Series.from_categorical Series.from_masked_array Series.from_pandas Series.hash_values
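The accessor pattern described above — a dtype-specific namespace such as ``.str`` or ``.dt`` hanging off a Series — can be sketched in plain Python. The class names below are illustrative only, not cudf internals:

```python
class StringMethods:
    """Namespace of string operations bound to a parent series."""

    def __init__(self, parent):
        self._parent = parent

    def upper(self):
        # Apply str.upper element-wise and return a new series-like object.
        return MiniSeries([v.upper() for v in self._parent.values])


class MiniSeries:
    """Toy series holding a list of values."""

    def __init__(self, values):
        self.values = list(values)

    @property
    def str(self):
        # Each attribute access returns an accessor bound to this series.
        return StringMethods(self)


s = MiniSeries(["rapids", "ai"])
print(s.str.upper().values)  # ['RAPIDS', 'AI']
```

Exposing the namespace as a property keeps string-only methods off the base object, so they can raise cleanly when the underlying dtype does not support them.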
0
rapidsai_public_repos/cudf/docs/cudf/source/user_guide
rapidsai_public_repos/cudf/docs/cudf/source/user_guide/api_docs/options.rst
.. _api.options:

====================
Options and settings
====================

.. autosummary::
   :toctree: api/

   cudf.get_option
   cudf.set_option
   cudf.describe_option
   cudf.option_context

Available options
-----------------

You can get a list of available options and their descriptions with
:func:`~cudf.describe_option`. When called with no argument
:func:`~cudf.describe_option` will print out the descriptions for all
available options.

.. ipython:: python

    import cudf
    cudf.describe_option()
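An options API like the one listed above typically follows a simple registry pattern. The sketch below is a hypothetical, self-contained illustration of that pattern, not cudf's actual implementation (cudf's option names and internals may differ):

```python
# Registry mapping option name -> current value and description.
_OPTIONS = {}


def _register_option(name, default, description):
    _OPTIONS[name] = {"value": default, "description": description}


def get_option(name):
    return _OPTIONS[name]["value"]


def set_option(name, value):
    if name not in _OPTIONS:
        raise KeyError(f"unknown option: {name}")
    _OPTIONS[name]["value"] = value


def describe_option(name=None):
    # With no argument, describe every registered option.
    names = [name] if name is not None else sorted(_OPTIONS)
    for n in names:
        print(f"{n}: {_OPTIONS[n]['description']} (current: {get_option(n)})")


_register_option("mode.pandas_compatible", False, "Run in pandas-compatible mode")
set_option("mode.pandas_compatible", True)
print(get_option("mode.pandas_compatible"))  # True
```

Raising ``KeyError`` on unknown names catches typos early instead of silently creating new options.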
0
rapidsai_public_repos/cudf/docs/cudf/source/user_guide
rapidsai_public_repos/cudf/docs/cudf/source/user_guide/api_docs/io.rst
.. _api.io:

============
Input/output
============

.. currentmodule:: cudf

CSV
~~~
.. autosummary::
   :toctree: api/

   read_csv
   DataFrame.to_csv

Text
~~~~
.. autosummary::
   :toctree: api/

   read_text

JSON
~~~~
.. autosummary::
   :toctree: api/

   read_json
   DataFrame.to_json

Parquet
~~~~~~~
.. autosummary::
   :toctree: api/

   read_parquet
   DataFrame.to_parquet
   cudf.io.parquet.read_parquet_metadata
   cudf.io.parquet.ParquetDatasetWriter
   cudf.io.parquet.ParquetDatasetWriter.close
   cudf.io.parquet.ParquetDatasetWriter.write_table

ORC
~~~
.. autosummary::
   :toctree: api/

   read_orc
   DataFrame.to_orc

HDFStore: PyTables (HDF5)
~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
   :toctree: api/

   read_hdf
   DataFrame.to_hdf

.. warning::

   HDF readers and writers are not GPU accelerated. They currently run on the
   CPU via pandas. This may be GPU accelerated in the future.

Feather
~~~~~~~
.. autosummary::
   :toctree: api/

   read_feather
   DataFrame.to_feather

.. warning::

   Feather readers and writers are not GPU accelerated. They currently run on
   the CPU via pandas. This may be GPU accelerated in the future.

Avro
~~~~
.. autosummary::
   :toctree: api/

   read_avro
0
rapidsai_public_repos/cudf/docs/cudf/source/user_guide
rapidsai_public_repos/cudf/docs/cudf/source/user_guide/api_docs/struct_handling.rst
Struct handling
~~~~~~~~~~~~~~~

``Series.struct`` can be used to access the values of the series as
structs and apply struct methods to it. These can be accessed like
``Series.struct.<function/property>``.

.. currentmodule:: cudf
.. autosummary::
   :toctree: api/

   Series.struct

.. currentmodule:: cudf.core.column.struct.StructMethods
.. autosummary::
   :toctree: api/

   field
   explode
0
rapidsai_public_repos/cudf/docs/cudf/source/user_guide
rapidsai_public_repos/cudf/docs/cudf/source/user_guide/api_docs/subword_tokenize.rst
================
SubwordTokenizer
================
.. currentmodule:: cudf.core.subword_tokenizer

Constructor
~~~~~~~~~~~
.. autosummary::
   :toctree: api/

   SubwordTokenizer
   SubwordTokenizer.__call__
0
rapidsai_public_repos/cudf/docs/cudf/source/user_guide
rapidsai_public_repos/cudf/docs/cudf/source/user_guide/api_docs/window.rst
.. _api.window:

======
Window
======

Rolling objects are returned by ``.rolling`` calls:
:func:`cudf.DataFrame.rolling`, :func:`cudf.Series.rolling`, etc.

.. _api.functions_rolling:

Rolling window functions
------------------------
.. currentmodule:: cudf.core.window.rolling

.. autosummary::
   :toctree: api/

   Rolling.count
   Rolling.sum
   Rolling.mean
   Rolling.var
   Rolling.std
   Rolling.min
   Rolling.max
   Rolling.apply
0
rapidsai_public_repos/cudf/docs/cudf/source/user_guide
rapidsai_public_repos/cudf/docs/cudf/source/user_guide/api_docs/list_handling.rst
List handling
~~~~~~~~~~~~~

``Series.list`` can be used to access the values of the series as
lists and apply list methods to it. These can be accessed like
``Series.list.<function/property>``.

.. currentmodule:: cudf
.. autosummary::
   :toctree: api/

   Series.list

.. currentmodule:: cudf.core.column.lists.ListMethods
.. autosummary::
   :toctree: api/

   astype
   concat
   contains
   index
   get
   leaves
   len
   sort_values
   take
   unique
0
rapidsai_public_repos/cudf/docs/cudf/source/user_guide
rapidsai_public_repos/cudf/docs/cudf/source/user_guide/api_docs/general_functions.rst
=================
General Functions
=================

.. currentmodule:: cudf

Data manipulations
------------------
.. autosummary::
   :toctree: api/

   cudf.concat
   cudf.crosstab
   cudf.cut
   cudf.factorize
   cudf.get_dummies
   cudf.melt
   cudf.merge
   cudf.pivot
   cudf.pivot_table
   cudf.unstack

Top-level conversions
---------------------
.. autosummary::
   :toctree: api/

   cudf.to_numeric
   cudf.from_dataframe
   cudf.from_dlpack
   cudf.from_pandas

Top-level dealing with datetimelike data
----------------------------------------
.. autosummary::
   :toctree: api/

   cudf.to_datetime
   cudf.date_range

Top-level dealing with Interval data
------------------------------------
.. autosummary::
   :toctree: api/

   cudf.interval_range
0
rapidsai_public_repos/cudf/docs/cudf/source/user_guide
rapidsai_public_repos/cudf/docs/cudf/source/user_guide/api_docs/groupby.rst
.. _api.groupby:

=======
GroupBy
=======
.. currentmodule:: cudf.core.groupby

GroupBy objects are returned by groupby calls: :func:`cudf.DataFrame.groupby`,
:func:`cudf.Series.groupby`, etc.

Indexing, iteration
-------------------
.. autosummary::
   :toctree: api/

   GroupBy.__iter__
   GroupBy.groups

.. currentmodule:: cudf

.. autosummary::
   :toctree: api/

   Grouper

.. currentmodule:: cudf.core.groupby.groupby

Function application
--------------------
.. autosummary::
   :toctree: api/

   GroupBy.apply
   GroupBy.agg
   SeriesGroupBy.aggregate
   DataFrameGroupBy.aggregate
   GroupBy.pipe
   GroupBy.transform

Computations / descriptive stats
--------------------------------
.. autosummary::
   :toctree: api/

   GroupBy.bfill
   GroupBy.backfill
   GroupBy.count
   GroupBy.cumcount
   GroupBy.cummax
   GroupBy.cummin
   GroupBy.cumsum
   GroupBy.diff
   GroupBy.ffill
   GroupBy.first
   GroupBy.get_group
   GroupBy.groups
   GroupBy.idxmax
   GroupBy.idxmin
   GroupBy.last
   GroupBy.max
   GroupBy.mean
   GroupBy.median
   GroupBy.min
   GroupBy.ngroup
   GroupBy.nth
   GroupBy.nunique
   GroupBy.pad
   GroupBy.prod
   GroupBy.shift
   GroupBy.size
   GroupBy.std
   GroupBy.sum
   GroupBy.var
   GroupBy.corr
   GroupBy.cov

The following methods are available in both ``SeriesGroupBy`` and
``DataFrameGroupBy`` objects, but may differ slightly: the
``DataFrameGroupBy`` version usually permits the specification of an axis
argument, and often an argument indicating whether to restrict application to
columns of a specific data type.

.. autosummary::
   :toctree: api/

   DataFrameGroupBy.backfill
   DataFrameGroupBy.bfill
   DataFrameGroupBy.count
   DataFrameGroupBy.cumcount
   DataFrameGroupBy.cummax
   DataFrameGroupBy.cummin
   DataFrameGroupBy.cumsum
   DataFrameGroupBy.describe
   DataFrameGroupBy.diff
   DataFrameGroupBy.ffill
   DataFrameGroupBy.fillna
   DataFrameGroupBy.idxmax
   DataFrameGroupBy.idxmin
   DataFrameGroupBy.nunique
   DataFrameGroupBy.pad
   DataFrameGroupBy.quantile
   DataFrameGroupBy.shift
   DataFrameGroupBy.size

The following methods are available only for ``SeriesGroupBy`` objects.

.. autosummary::
   :toctree: api/

   SeriesGroupBy.nunique
   SeriesGroupBy.unique
0
rapidsai_public_repos/cudf/docs/cudf/source/user_guide
rapidsai_public_repos/cudf/docs/cudf/source/user_guide/api_docs/extension_dtypes.rst
================ Extension Dtypes ================ .. currentmodule:: cudf.core.dtypes cuDF supports a number of extension dtypes that build on top of the types that pandas supports. These dtypes are not directly available in pandas, which instead relies on object dtype arrays that run at Python rather than native speeds. The following dtypes are supported: cudf.CategoricalDtype ===================== .. autosummary:: :toctree: api/ CategoricalDtype Properties and Methods ---------------------- .. autosummary:: :toctree: api/ CategoricalDtype.categories CategoricalDtype.construct_from_string CategoricalDtype.deserialize CategoricalDtype.device_deserialize CategoricalDtype.device_serialize CategoricalDtype.from_pandas CategoricalDtype.host_deserialize CategoricalDtype.host_serialize CategoricalDtype.is_dtype CategoricalDtype.name CategoricalDtype.ordered CategoricalDtype.serialize CategoricalDtype.str CategoricalDtype.to_pandas CategoricalDtype.type cudf.Decimal32Dtype =================== .. autosummary:: :toctree: api/ Decimal32Dtype Properties and Methods ---------------------- .. autosummary:: :toctree: api/ Decimal32Dtype.ITEMSIZE Decimal32Dtype.MAX_PRECISION Decimal32Dtype.deserialize Decimal32Dtype.device_deserialize Decimal32Dtype.device_serialize Decimal32Dtype.from_arrow Decimal32Dtype.host_deserialize Decimal32Dtype.host_serialize Decimal32Dtype.is_dtype Decimal32Dtype.itemsize Decimal32Dtype.precision Decimal32Dtype.scale Decimal32Dtype.serialize Decimal32Dtype.str Decimal32Dtype.to_arrow cudf.Decimal64Dtype =================== .. autosummary:: :toctree: api/ Decimal64Dtype Properties and Methods ---------------------- .. 
autosummary:: :toctree: api/ Decimal64Dtype.ITEMSIZE Decimal64Dtype.MAX_PRECISION Decimal64Dtype.deserialize Decimal64Dtype.device_deserialize Decimal64Dtype.device_serialize Decimal64Dtype.from_arrow Decimal64Dtype.host_deserialize Decimal64Dtype.host_serialize Decimal64Dtype.is_dtype Decimal64Dtype.itemsize Decimal64Dtype.precision Decimal64Dtype.scale Decimal64Dtype.serialize Decimal64Dtype.str Decimal64Dtype.to_arrow cudf.Decimal128Dtype ==================== .. autosummary:: :toctree: api/ Decimal128Dtype Properties and Methods ---------------------- .. autosummary:: :toctree: api/ Decimal128Dtype.ITEMSIZE Decimal128Dtype.MAX_PRECISION Decimal128Dtype.deserialize Decimal128Dtype.device_deserialize Decimal128Dtype.device_serialize Decimal128Dtype.from_arrow Decimal128Dtype.host_deserialize Decimal128Dtype.host_serialize Decimal128Dtype.is_dtype Decimal128Dtype.itemsize Decimal128Dtype.precision Decimal128Dtype.scale Decimal128Dtype.serialize Decimal128Dtype.str Decimal128Dtype.to_arrow cudf.ListDtype ============== .. autosummary:: :toctree: api/ ListDtype Properties and Methods ---------------------- .. autosummary:: :toctree: api/ ListDtype.deserialize ListDtype.device_deserialize ListDtype.device_serialize ListDtype.element_type ListDtype.from_arrow ListDtype.host_deserialize ListDtype.host_serialize ListDtype.is_dtype ListDtype.leaf_type ListDtype.serialize ListDtype.to_arrow ListDtype.type cudf.StructDtype ================ .. autosummary:: :toctree: api/ StructDtype Properties and Methods ---------------------- .. autosummary:: :toctree: api/ StructDtype.deserialize StructDtype.device_deserialize StructDtype.device_serialize StructDtype.fields StructDtype.from_arrow StructDtype.host_deserialize StructDtype.host_serialize StructDtype.is_dtype StructDtype.serialize StructDtype.to_arrow StructDtype.type
0
rapidsai_public_repos/cudf/docs/cudf/source/user_guide
rapidsai_public_repos/cudf/docs/cudf/source/user_guide/api_docs/string_handling.rst
String handling
~~~~~~~~~~~~~~~

``Series.str`` can be used to access the values of the series as strings
and apply several methods to it. These can be accessed like
``Series.str.<function/property>``.

.. currentmodule:: cudf
.. autosummary::
   :toctree: api/

   Series.str

.. currentmodule:: cudf.core.column.string.StringMethods
.. autosummary::
   :toctree: api/

   byte_count
   capitalize
   cat
   center
   character_ngrams
   character_tokenize
   code_points
   contains
   count
   detokenize
   edit_distance
   edit_distance_matrix
   endswith
   extract
   filter_alphanum
   filter_characters
   filter_tokens
   find
   findall
   find_multiple
   get
   get_json_object
   hex_to_int
   htoi
   index
   insert
   ip2int
   ip_to_int
   is_consonant
   is_vowel
   isalnum
   isalpha
   isdecimal
   isdigit
   isempty
   isfloat
   ishex
   isinteger
   isipv4
   isspace
   islower
   isnumeric
   isupper
   istimestamp
   istitle
   join
   len
   like
   ljust
   lower
   lstrip
   match
   ngrams
   ngrams_tokenize
   normalize_characters
   normalize_spaces
   pad
   partition
   porter_stemmer_measure
   repeat
   removeprefix
   removesuffix
   replace
   replace_tokens
   replace_with_backrefs
   rfind
   rindex
   rjust
   rpartition
   rsplit
   rstrip
   slice
   slice_from
   slice_replace
   split
   rsplit
   startswith
   strip
   swapcase
   title
   token_count
   tokenize
   translate
   upper
   url_decode
   url_encode
   wrap
   zfill
0
rapidsai_public_repos/cudf/docs/cudf/source/user_guide
rapidsai_public_repos/cudf/docs/cudf/source/user_guide/api_docs/general_utilities.rst
=================
General Utilities
=================

Testing functions
-----------------
.. autosummary::
   :toctree: api/

   cudf.testing.testing.assert_column_equal
   cudf.testing.testing.assert_frame_equal
   cudf.testing.testing.assert_index_equal
   cudf.testing.testing.assert_series_equal
0
rapidsai_public_repos/cudf/docs/cudf/source/user_guide
rapidsai_public_repos/cudf/docs/cudf/source/user_guide/api_docs/index.rst
=============
API reference
=============

This page provides a list of all publicly accessible modules, methods and
classes through the ``cudf.*`` namespace.

.. toctree::
   :maxdepth: 2
   :caption: API Documentation

   series
   dataframe
   index_objects
   groupby
   general_functions
   general_utilities
   window
   io
   subword_tokenize
   string_handling
   list_handling
   struct_handling
   options
   extension_dtypes
0
rapidsai_public_repos/cudf/docs/cudf/source/user_guide
rapidsai_public_repos/cudf/docs/cudf/source/user_guide/io/io.md
# Input / Output

This page contains Input / Output related APIs in cuDF.

## I/O Supported dtypes

The following table lists the compatible cudf types for each supported
IO format.

<div class="special-table-wrapper" style="overflow:auto">

```{eval-rst}
.. table::
    :class: io-supported-types-table special-table
    :widths: 15 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10

    +-----------------------+--------+--------+--------+--------+---------+--------+--------+--------+--------+---------+---------+--------+--------+---------+---------+
    |                       | CSV             | Parquet         | JSON              | ORC             | AVRO   | HDF               | DLPack          | Feather           |
    +-----------------------+--------+--------+--------+--------+---------+--------+--------+--------+--------+---------+---------+--------+--------+---------+---------+
    | Data Type             | Writer | Reader | Writer | Reader | Writer¹ | Reader | Writer | Reader | Reader | Writer² | Reader² | Writer | Reader | Writer² | Reader² |
    +=======================+========+========+========+========+=========+========+========+========+========+=========+=========+========+========+=========+=========+
    | int8                  |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |    ✅    |   ✅    |   ✅    |    ✅    |    ✅    |
    +-----------------------+--------+--------+--------+--------+---------+--------+--------+--------+--------+---------+---------+--------+--------+---------+---------+
    | int16                 |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |    ✅    |   ✅    |   ✅    |    ✅    |    ✅    |
    +-----------------------+--------+--------+--------+--------+---------+--------+--------+--------+--------+---------+---------+--------+--------+---------+---------+
    | int32                 |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |    ✅    |   ✅    |   ✅    |    ✅    |    ✅    |
    +-----------------------+--------+--------+--------+--------+---------+--------+--------+--------+--------+---------+---------+--------+--------+---------+---------+
    | int64                 |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |    ✅    |   ✅    |   ✅    |    ✅    |    ✅    |
    +-----------------------+--------+--------+--------+--------+---------+--------+--------+--------+--------+---------+---------+--------+--------+---------+---------+
    | uint8                 |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |   ✅    |   ❌    |   ✅    |   ❌    |    ✅    |    ✅    |   ✅    |   ✅    |    ✅    |    ✅    |
    +-----------------------+--------+--------+--------+--------+---------+--------+--------+--------+--------+---------+---------+--------+--------+---------+---------+
    | uint16                |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |   ✅    |   ❌    |   ✅    |   ❌    |    ✅    |    ✅    |   ✅    |   ✅    |    ✅    |    ✅    |
    +-----------------------+--------+--------+--------+--------+---------+--------+--------+--------+--------+---------+---------+--------+--------+---------+---------+
    | uint32                |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |   ✅    |   ❌    |   ✅    |   ❌    |    ✅    |    ✅    |   ✅    |   ✅    |    ✅    |    ✅    |
    +-----------------------+--------+--------+--------+--------+---------+--------+--------+--------+--------+---------+---------+--------+--------+---------+---------+
    | uint64                |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |   ✅    |   ❌    |   ❌    |   ❌    |    ✅    |    ✅    |   ✅    |   ✅    |    ✅    |    ✅    |
    +-----------------------+--------+--------+--------+--------+---------+--------+--------+--------+--------+---------+---------+--------+--------+---------+---------+
    | float32               |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |    ✅    |   ✅    |   ✅    |    ✅    |    ✅    |
    +-----------------------+--------+--------+--------+--------+---------+--------+--------+--------+--------+---------+---------+--------+--------+---------+---------+
    | float64               |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |    ✅    |   ✅    |   ✅    |    ✅    |    ✅    |
    +-----------------------+--------+--------+--------+--------+---------+--------+--------+--------+--------+---------+---------+--------+--------+---------+---------+
    | bool                  |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |    ✅    |   ✅    |   ✅    |    ✅    |    ✅    |
    +-----------------------+--------+--------+--------+--------+---------+--------+--------+--------+--------+---------+---------+--------+--------+---------+---------+
    | str                   |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |    ✅    |   ❌    |   ❌    |    ✅    |    ✅    |
    +-----------------------+--------+--------+--------+--------+---------+--------+--------+--------+--------+---------+---------+--------+--------+---------+---------+
    | category              |   ✅    |   ❌    |   ❌    |   ❌    |    ❌    |   ❌    |   ❌    |   ❌    |   ❌    |    ✅    |    ✅    |   ❌    |   ❌    |    ✅    |    ✅    |
    +-----------------------+--------+--------+--------+--------+---------+--------+--------+--------+--------+---------+---------+--------+--------+---------+---------+
    | list                  |   ❌    |   ❌    |   ✅    |   ✅    |    ✅    |   ✅    |   ✅    |   ✅    |   ❌    |    ❌    |    ❌    |   ❌    |   ❌    |    ✅    |    ✅    |
    +-----------------------+--------+--------+--------+--------+---------+--------+--------+--------+--------+---------+---------+--------+--------+---------+---------+
    | timedelta64[s]        |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |   ✅    |   ❌    |   ❌    |   ❌    |    ✅    |    ✅    |   ❌    |   ❌    |    ✅    |    ✅    |
    +-----------------------+--------+--------+--------+--------+---------+--------+--------+--------+--------+---------+---------+--------+--------+---------+---------+
    | timedelta64[ms]       |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |   ✅    |   ❌    |   ❌    |   ❌    |    ✅    |    ✅    |   ❌    |   ❌    |    ✅    |    ✅    |
    +-----------------------+--------+--------+--------+--------+---------+--------+--------+--------+--------+---------+---------+--------+--------+---------+---------+
    | timedelta64[us]       |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |   ✅    |   ❌    |   ❌    |   ❌    |    ✅    |    ✅    |   ❌    |   ❌    |    ✅    |    ✅    |
    +-----------------------+--------+--------+--------+--------+---------+--------+--------+--------+--------+---------+---------+--------+--------+---------+---------+
    | timedelta64[ns]       |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |   ✅    |   ❌    |   ❌    |   ❌    |    ✅    |    ✅    |   ❌    |   ❌    |    ✅    |    ✅    |
    +-----------------------+--------+--------+--------+--------+---------+--------+--------+--------+--------+---------+---------+--------+--------+---------+---------+
    | datetime64[s]         |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |    ✅    |   ❌    |   ❌    |    ✅    |    ✅    |
    +-----------------------+--------+--------+--------+--------+---------+--------+--------+--------+--------+---------+---------+--------+--------+---------+---------+
    | datetime64[ms]        |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |    ✅    |   ❌    |   ❌    |    ✅    |    ✅    |
    +-----------------------+--------+--------+--------+--------+---------+--------+--------+--------+--------+---------+---------+--------+--------+---------+---------+
    | datetime64[us]        |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |    ✅    |   ❌    |   ❌    |    ✅    |    ✅    |
    +-----------------------+--------+--------+--------+--------+---------+--------+--------+--------+--------+---------+---------+--------+--------+---------+---------+
    | datetime64[ns]        |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |    ✅    |   ❌    |   ❌    |    ✅    |    ✅    |
    +-----------------------+--------+--------+--------+--------+---------+--------+--------+--------+--------+---------+---------+--------+--------+---------+---------+
    | struct                |   ❌    |   ❌    |   ✅    |   ✅    |    ✅    |   ✅    |   ✅    |   ✅    |   ❌    |    ✅    |    ✅    |   ❌    |   ❌    |    ✅    |    ✅    |
    +-----------------------+--------+--------+--------+--------+---------+--------+--------+--------+--------+---------+---------+--------+--------+---------+---------+
    | decimal32             |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |   ❌    |   ✅    |   ✅    |   ❌    |    ❌    |    ❌    |   ❌    |   ❌    |    ❌    |    ❌    |
    +-----------------------+--------+--------+--------+--------+---------+--------+--------+--------+--------+---------+---------+--------+--------+---------+---------+
    | decimal64             |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |   ❌    |   ✅    |   ✅    |   ❌    |    ❌    |    ❌    |   ❌    |   ❌    |    ❌    |    ❌    |
    +-----------------------+--------+--------+--------+--------+---------+--------+--------+--------+--------+---------+---------+--------+--------+---------+---------+
    | decimal128            |   ✅    |   ✅    |   ✅    |   ✅    |    ✅    |   ❌    |   ✅    |   ✅    |   ❌    |    ❌    |    ❌    |   ❌    |   ❌    |    ❌    |    ❌    |
    +-----------------------+--------+--------+--------+--------+---------+--------+--------+--------+--------+---------+---------+--------+--------+---------+---------+
```

</div>

**Notes:**

- \[¹\] - Not all orientations are GPU-accelerated.
- \[²\] - Not GPU-accelerated.

## Magnum IO GPUDirect Storage Integration

Many IO APIs can use the Magnum IO GPUDirect Storage (GDS) library to optimize
IO operations.
GDS enables a direct data path for direct memory access (DMA) transfers
between GPU memory and storage, which avoids a bounce buffer through the CPU.
GDS also has a compatibility mode that allows the library to fall back to
copying through a CPU bounce buffer.
The SDK is available for download
[here](https://developer.nvidia.com/gpudirect-storage).
GDS is also included in CUDA Toolkit 11.4 and higher.

Use of GPUDirect Storage in cuDF is enabled by default, but can be disabled
through the environment variable `LIBCUDF_CUFILE_POLICY`.
This variable also controls the GDS compatibility mode.

There are four valid values for the environment variable:

- "GDS": Enable GDS use; GDS compatibility mode is *off*.
- "ALWAYS": Enable GDS use; GDS compatibility mode is *on*.
- "KVIKIO": Enable GDS through [KvikIO](https://github.com/rapidsai/kvikio).
- "OFF": Completely disable GDS use.

If no value is set, behavior will be the same as the "KVIKIO" option.

This environment variable also affects how cuDF treats GDS errors.

- When `LIBCUDF_CUFILE_POLICY` is set to "GDS" and a GDS API call fails for any
  reason, cuDF falls back to the internal implementation with bounce buffers.
- When `LIBCUDF_CUFILE_POLICY` is set to "ALWAYS" and a GDS API call fails for
  any reason (unlikely, given that the compatibility mode is on), cuDF throws
  an exception to propagate the error to the user.
- When `LIBCUDF_CUFILE_POLICY` is set to "KVIKIO" and a KvikIO API call fails
  for any reason (unlikely, given that KvikIO implements its own compatibility
  mode), cuDF throws an exception to propagate the error to the user.
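As an illustration of the rules above, the policy resolution can be sketched in plain Python. Note that `resolve_cufile_policy` is a hypothetical helper for illustration only; it is not part of cuDF's API:

```python
import os

# Valid settings for LIBCUDF_CUFILE_POLICY, per the documentation above.
VALID_POLICIES = {"GDS", "ALWAYS", "KVIKIO", "OFF"}


def resolve_cufile_policy(env=None):
    """Toy sketch of how the policy variable is interpreted (not cuDF code)."""
    env = os.environ if env is None else env
    # An unset variable behaves the same as "KVIKIO".
    value = env.get("LIBCUDF_CUFILE_POLICY", "KVIKIO")
    if value not in VALID_POLICIES:
        raise ValueError(f"invalid LIBCUDF_CUFILE_POLICY: {value!r}")
    return value
```

For example, `resolve_cufile_policy({})` returns `"KVIKIO"`, matching the documented default.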
For more information about error handling, compatibility mode, and tuning
parameters in KvikIO see: <https://github.com/rapidsai/kvikio>

Operations that support the use of GPUDirect Storage:

- {py:func}`cudf.read_avro`
- {py:func}`cudf.read_json`
- {py:func}`cudf.read_parquet`
- {py:func}`cudf.read_orc`
- {py:meth}`cudf.DataFrame.to_csv`
- {py:func}`cudf.DataFrame.to_json`
- {py:meth}`cudf.DataFrame.to_parquet`
- {py:meth}`cudf.DataFrame.to_orc`

Several parameters that can be used to tune the performance of GDS-enabled
I/O are exposed through environment variables:

- `LIBCUDF_CUFILE_THREAD_COUNT`: Integral value, maximum number of parallel
  reads/writes per file (default 16);
- `LIBCUDF_CUFILE_SLICE_SIZE`: Integral value, maximum size of each GDS
  read/write, in bytes (default 4MB). Larger I/O operations are split into
  multiple calls.

## nvCOMP Integration

Some types of compression/decompression can be performed using either the
[nvCOMP library](https://github.com/NVIDIA/nvcomp) or the internal
implementation. Which implementation is used by default depends on the data
format and the compression type. Behavior can be influenced through the
environment variable `LIBCUDF_NVCOMP_POLICY`.

There are three valid values for the environment variable:

- "STABLE": Only enable nvCOMP in places where it has been deemed stable for
  production use.
- "ALWAYS": Enable all available uses of nvCOMP, including new, experimental
  combinations.
- "OFF": Disable nvCOMP use whenever possible and use the internal
  implementations instead.

If no value is set, behavior will be the same as the "STABLE" option.

```{eval-rst}
.. table:: Current policy for nvCOMP use for different types
    :widths: 20 20 20 20 20 20 20 20 20 20

    +-----------------------+--------+--------+--------------+--------------+---------+--------+--------------+--------------+--------+
    |                       | CSV             | Parquet                     | JSON              | ORC                         | AVRO   |
    +-----------------------+--------+--------+--------------+--------------+---------+--------+--------------+--------------+--------+
    | Compression Type      | Writer | Reader | Writer       | Reader       | Writer¹ | Reader | Writer       | Reader       | Reader |
    +=======================+========+========+==============+==============+=========+========+==============+==============+========+
    | Snappy                |   ❌    |   ❌    | Stable       | Stable       |    ❌    |   ❌    | Stable       | Stable       |   ❌    |
    +-----------------------+--------+--------+--------------+--------------+---------+--------+--------------+--------------+--------+
    | ZSTD                  |   ❌    |   ❌    | Stable       | Stable       |    ❌    |   ❌    | Stable       | Stable       |   ❌    |
    +-----------------------+--------+--------+--------------+--------------+---------+--------+--------------+--------------+--------+
    | DEFLATE               |   ❌    |   ❌    |   ❌          |   ❌          |    ❌    |   ❌    | Experimental | Experimental |   ❌    |
    +-----------------------+--------+--------+--------------+--------------+---------+--------+--------------+--------------+--------+
```
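The policy table above can be read as a simple lookup. The sketch below is illustrative only (the `SUPPORT` mapping and `uses_nvcomp` helper are hypothetical names, not cuDF internals), showing how `LIBCUDF_NVCOMP_POLICY` interacts with the per-format support level:

```python
# (format, compression) -> support level, transcribed from the table above.
SUPPORT = {
    ("parquet", "Snappy"): "Stable",
    ("parquet", "ZSTD"): "Stable",
    ("orc", "Snappy"): "Stable",
    ("orc", "ZSTD"): "Stable",
    ("orc", "DEFLATE"): "Experimental",
}


def uses_nvcomp(fmt, compression, policy="STABLE"):
    """Toy decision: would nvCOMP be used for this format/compression/policy?"""
    level = SUPPORT.get((fmt, compression))
    if policy == "OFF" or level is None:
        return False          # nvCOMP disabled or not supported at all
    if policy == "ALWAYS":
        return True           # includes experimental combinations
    return level == "Stable"  # "STABLE" (the default) excludes experimental
```

For instance, ORC DEFLATE is used only under the "ALWAYS" policy, while Parquet Snappy is used under both "STABLE" and "ALWAYS".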
# Working with JSON data

This page contains a tutorial about reading and manipulating JSON data in cuDF.

## Reading JSON data

By default, the cuDF JSON reader expects input data using the "records"
orientation. Records-oriented JSON data comprises an array of objects at the
root level, and each object in the array corresponds to a row. Records-oriented
JSON data begins with `[`, ends with `]` and ignores unquoted whitespace.
Another common variant for JSON data is "JSON Lines", where JSON objects are
separated by new line characters (`\n`), and each object corresponds to a row.

```python
>>> j = '''[ {"a": "v1", "b": 12}, {"a": "v2", "b": 7}, {"a": "v3", "b": 5} ]'''
>>> df_records = cudf.read_json(j, engine='cudf')

>>> j = '\n'.join([
...     '{"a": "v1", "b": 12}',
...     '{"a": "v2", "b": 7}',
...     '{"a": "v3", "b": 5}'
... ])
>>> df_lines = cudf.read_json(j, lines=True)
>>> df_lines
    a   b
0  v1  12
1  v2   7
2  v3   5
>>> df_records.equals(df_lines)
True
```

The cuDF JSON reader also supports arbitrarily-nested combinations of JSON
objects and arrays, which map to struct and list data types. The following
examples demonstrate the inputs and outputs for reading nested JSON data.

```python
# Example with column types:
# list<int> and struct<k:string>
>>> j = '''[ {"list": [0,1,2], "struct": {"k":"v1"}}, {"list": [3,4,5], "struct": {"k":"v2"}} ]'''
>>> df = cudf.read_json(j, engine='cudf')
>>> df
        list       struct
0  [0, 1, 2]  {'k': 'v1'}
1  [3, 4, 5]  {'k': 'v2'}

# Example with column types:
# list<struct<k:int>> and struct<k:list<int>, m:int>
>>> j = '\n'.join([
...     '{"a": [{"k": 0}], "b": {"k": [0, 1], "m": 5}}',
...     '{"a": [{"k": 1}, {"k": 2}], "b": {"k": [2, 3], "m": 6}}',
... ])
>>> df = cudf.read_json(j, lines=True)
>>> df
                      a                      b
0            [{'k': 0}]  {'k': [0, 1], 'm': 5}
1  [{'k': 1}, {'k': 2}]  {'k': [2, 3], 'm': 6}
```

## Handling large and small JSON Lines files

For workloads based on JSON Lines data, cuDF includes reader options to assist
with data processing: byte range support for large files, and multi-source
support for small files.

Some workflows require processing large JSON Lines files that may exceed GPU
memory capacity. The JSON reader in cuDF supports a byte range argument that
specifies a starting byte offset and size in bytes. The reader parses each
record that begins within the byte range, and for this reason byte ranges do
not need to align with record boundaries. To avoid skipping rows or reading
duplicate rows, byte ranges should be adjacent, as shown in the following
example.

```python
>>> num_rows = 10
>>> j = '\n'.join([
...     '{"id":%s, "distance": %s, "unit": "m/s"}' % x \
...     for x in zip(range(num_rows), cupy.random.rand(num_rows))
... ])
>>> chunk_count = 4
>>> chunk_size = len(j) // chunk_count + 1
>>> data = []
>>> for x in range(chunk_count):
...     d = cudf.read_json(
...         j,
...         lines=True,
...         byte_range=(chunk_size * x, chunk_size),
...     )
...     data.append(d)
>>> df = cudf.concat(data)
```

By contrast, some workflows require processing many small JSON Lines files.
Rather than looping through the sources and concatenating the resulting
dataframes, the JSON reader in cuDF accepts an iterable of data sources. Then
the raw inputs are concatenated and processed as a single source. Please note
that the JSON reader in cuDF accepts sources as file paths, raw strings, or
file-like objects, as well as iterables of these sources.
```python
>>> j1 = '{"id":0}\n{"id":1}\n'
>>> j2 = '{"id":2}\n{"id":3}\n'
>>> df = cudf.read_json([j1, j2], lines=True)
```

## Unpacking list and struct data

After reading JSON data into a cuDF dataframe with list/struct column types,
the next step in many workflows extracts or flattens the data into simple
types. For struct columns, one solution is extracting the data with the
`struct.explode` accessor and joining the result to the parent dataframe. The
following example demonstrates how to extract data from a struct column.

```python
>>> j = '\n'.join([
...     '{"x": "Tokyo", "y": {"country": "Japan", "iso2": "JP"}}',
...     '{"x": "Jakarta", "y": {"country": "Indonesia", "iso2": "ID"}}',
...     '{"x": "Shanghai", "y": {"country": "China", "iso2": "CN"}}'
... ])
>>> df = cudf.read_json(j, lines=True)
>>> df = df.drop(columns='y').join(df['y'].struct.explode())
>>> df
          x    country iso2
0     Tokyo      Japan   JP
1   Jakarta  Indonesia   ID
2  Shanghai      China   CN
```

For list columns where the order of the elements is meaningful, the `list.get`
accessor extracts the elements from specific positions. The resulting
`cudf.Series` object can then be assigned to a new column in the dataframe.
The following example demonstrates how to extract the first and second
elements from a list column.

```python
>>> j = '\n'.join([
...     '{"name": "Peabody, MA", "coord": [42.53, -70.98]}',
...     '{"name": "Northampton, MA", "coord": [42.32, -72.66]}',
...     '{"name": "New Bedford, MA", "coord": [41.63, -70.93]}'
... ])
>>> df = cudf.read_json(j, lines=True)
>>> df['latitude'] = df['coord'].list.get(0)
>>> df['longitude'] = df['coord'].list.get(1)
>>> df = df.drop(columns='coord')
>>> df
              name  latitude  longitude
0      Peabody, MA     42.53     -70.98
1  Northampton, MA     42.32     -72.66
2  New Bedford, MA     41.63     -70.93
```

Finally, for list columns with variable length, the `explode` method creates a
new dataframe with each element as a row. Joining the exploded dataframe on
the parent dataframe yields an output with all simple types.
The following example flattens a list column and joins it to the index and
additional data from the parent dataframe.

```python
>>> j = '\n'.join([
...     '{"product": "socks", "ratings": [2, 3, 4]}',
...     '{"product": "shoes", "ratings": [5, 4, 5, 3]}',
...     '{"product": "shirts", "ratings": [3, 4]}'
... ])
>>> df = cudf.read_json(j, lines=True)
>>> df = df.drop(columns='ratings').join(df['ratings'].explode())
>>> df
  product  ratings
0   socks        2
0   socks        4
0   socks        3
1   shoes        5
1   shoes        5
1   shoes        4
1   shoes        3
2  shirts        3
2  shirts        4
```

## Building JSON data solutions

Sometimes a workflow needs to process JSON data with an object root, and cuDF
provides tools to build solutions for this kind of data. If you need to
process JSON data with an object root, we recommend reading the data as a
single JSON Line and then unpacking the resulting dataframe. The following
example reads a JSON object as a single line and then extracts the "results"
field into a new dataframe.

```python
>>> j = '''{ "metadata" : {"vehicle":"car"}, "results": [ {"id": 0, "distance": 1.2}, {"id": 1, "distance": 2.4}, {"id": 2, "distance": 1.7} ] }'''

# first read the JSON object with lines=True
>>> df = cudf.read_json(j, lines=True)
>>> df
             metadata                                            results
0  {'vehicle': 'car'}  [{'id': 0, 'distance': 1.2}, {'id': 1, 'distan...

# then explode the 'results' column
>>> df = df['results'].explode().struct.explode()
>>> df
   id  distance
0   0       1.2
1   1       2.4
2   2       1.7
```
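The same unpacking pattern, reading the object root and flattening the nested array into columns, can be mimicked in pure Python with the standard library. This is only an illustrative sketch of the data transformation (it does not use cuDF, and the variable names are ours):

```python
import json

# A JSON document with an object root, as in the cuDF example above.
doc = '{"metadata": {"vehicle": "car"}, "results": [{"id": 0, "distance": 1.2}, {"id": 1, "distance": 2.4}]}'

obj = json.loads(doc)

# Like df['results'].explode(): one dict per row.
rows = obj["results"]

# Like .struct.explode(): turn the structs into named columns.
columns = {key: [row[key] for row in rows] for key in rows[0]}
```

After running this, `columns` holds `{"id": [0, 1], "distance": [1.2, 2.4]}`, mirroring the two-column dataframe produced by the cuDF code.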
# Input / Output

```{toctree}
:maxdepth: 2

io
read-json
```
# Usage

## Jupyter Notebooks and IPython

Load the `cudf.pandas` extension at the beginning of your notebook. After
that, just `import pandas` and operations will use the GPU:

```python
%load_ext cudf.pandas
import pandas as pd

URL = "https://github.com/plotly/datasets/raw/master/tips.csv"
df = pd.read_csv(URL)                 # uses the GPU
df["size"].value_counts()             # uses the GPU
df.groupby("size").total_bill.mean()  # uses the GPU
df.apply(list, axis=1)                # uses the CPU (fallback)
```

## Command-line usage

From the command line, run your Python scripts with `-m cudf.pandas`:

```bash
python -m cudf.pandas script.py
```

## Understanding performance - the `cudf.pandas` profiler

`cudf.pandas` will attempt to use the GPU whenever possible and fall back to
CPU for certain operations. Running your code with the `cudf.pandas.profile`
magic generates a report showing which operations used the GPU and which used
the CPU. This can help you identify parts of your code that could be rewritten
to be more GPU-friendly:

```python
%load_ext cudf.pandas
import pandas as pd
```

```python
%%cudf.pandas.profile
df = pd.DataFrame({'a': [0, 1, 2], 'b': [3, 4, 3]})
df.min(axis=1)
out = df.groupby('a').filter(
    lambda group: len(group) > 1
)
```

![cudf-pandas-profile](../_static/cudf-pandas-profile.png)

When an operation falls back to using the CPU, it's typically because that
operation isn't implemented by cuDF. The profiler generates a handy link to
report the missing functionality to the cuDF team.

To profile a script being run from the command line, pass the `--profile`
argument:

```bash
python -m cudf.pandas --profile script.py
```
# Benchmarks

## Database-like ops benchmarks

We reproduced the [Database-like ops
benchmark](https://duckdblabs.github.io/db-benchmark/) including a solution
using `cudf.pandas`. Here are the results:

<figure>

![duckdb-benchmark-groupby-join](../_static/duckdb-benchmark-groupby-join.png)

<figcaption style="text-align: center;">Results of the <a
href="https://duckdblabs.github.io/db-benchmark/">Database-like ops
benchmark</a> including <span class="title-ref">cudf.pandas</span>.</figcaption>

</figure>

**Note:** A missing bar in the results for a particular solution indicates we
ran into an error when executing one or more queries for that solution. You
can see the per-query results
[here](https://data.rapids.ai/duckdb-benchmark/index.html).

### Steps to reproduce

Below are the steps to reproduce the `cudf.pandas` results. The steps to
reproduce the results for other solutions are documented in
<https://github.com/duckdblabs/db-benchmark#reproduce>.

1. Clone the latest <https://github.com/duckdblabs/db-benchmark>

2. Build environments for pandas:

   ```bash
   virtualenv pandas/py-pandas
   ```

3. Activate pandas virtualenv:

   ```bash
   source pandas/py-pandas/bin/activate
   ```

4. Install cudf:

   ```bash
   pip install --extra-index-url=https://pypi.nvidia.com cudf-cu12  # or cudf-cu11
   ```

5. Modify pandas join/group code to use `cudf.pandas` and be compatible with
   pandas 1.5 APIs:

   ```bash
   diff --git a/pandas/groupby-pandas.py b/pandas/groupby-pandas.py
   index 58eeb26..2ddb209 100755
   --- a/pandas/groupby-pandas.py
   +++ b/pandas/groupby-pandas.py
   @@ -1,4 +1,4 @@
   -#!/usr/bin/env python3
   +#!/usr/bin/env -S python3 -m cudf.pandas

    print("# groupby-pandas.py", flush=True)

   diff --git a/pandas/join-pandas.py b/pandas/join-pandas.py
   index f39beb0..a9ad651 100755
   --- a/pandas/join-pandas.py
   +++ b/pandas/join-pandas.py
   @@ -1,4 +1,4 @@
   -#!/usr/bin/env python3
   +#!/usr/bin/env -S python3 -m cudf.pandas

    print("# join-pandas.py", flush=True)

   @@ -26,7 +26,7 @@ if len(src_jn_y) != 3:

    print("loading datasets " + data_name + ", " + y_data_name[0] + ", " + y_data_name[1] + ", " + y_data_name[2], flush=True)

   -x = pd.read_csv(src_jn_x, engine='pyarrow', dtype_backend='pyarrow')
   +x = pd.read_csv(src_jn_x, engine='pyarrow')

    # x['id1'] = x['id1'].astype('Int32')
    # x['id2'] = x['id2'].astype('Int32')
   @@ -35,17 +35,17 @@ x['id4'] = x['id4'].astype('category') # remove after datatable#1691
    x['id5'] = x['id5'].astype('category')
    x['id6'] = x['id6'].astype('category')

   -small = pd.read_csv(src_jn_y[0], engine='pyarrow', dtype_backend='pyarrow')
   +small = pd.read_csv(src_jn_y[0], engine='pyarrow')
    # small['id1'] = small['id1'].astype('Int32')
    small['id4'] = small['id4'].astype('category')
    # small['v2'] = small['v2'].astype('float64')
   -medium = pd.read_csv(src_jn_y[1], engine='pyarrow', dtype_backend='pyarrow')
   +medium = pd.read_csv(src_jn_y[1], engine='pyarrow')
    # medium['id1'] = medium['id1'].astype('Int32')
    # medium['id2'] = medium['id2'].astype('Int32')
    medium['id4'] = medium['id4'].astype('category')
    medium['id5'] = medium['id5'].astype('category')
    # medium['v2'] = medium['v2'].astype('float64')
   -big = pd.read_csv(src_jn_y[2], engine='pyarrow', dtype_backend='pyarrow')
   +big = pd.read_csv(src_jn_y[2], engine='pyarrow')
    # big['id1'] = big['id1'].astype('Int32')
    # big['id2'] = big['id2'].astype('Int32')
    # big['id3'] = big['id3'].astype('Int32')
   ```

6. Run the modified pandas benchmarks:

   ```bash
   ./_launcher/solution.R --solution=pandas --task=groupby --nrow=1e7
   ./_launcher/solution.R --solution=pandas --task=groupby --nrow=1e8
   ./_launcher/solution.R --solution=pandas --task=join --nrow=1e7
   ./_launcher/solution.R --solution=pandas --task=join --nrow=1e8
   ```
# How it Works

When `cudf.pandas` is activated, `import pandas` (or any of its submodules)
imports a proxy module, rather than "regular" pandas. This proxy module
contains proxy types and proxy functions:

```python
In [1]: %load_ext cudf.pandas

In [2]: import pandas as pd

In [3]: pd
Out[3]: <module 'pandas' (ModuleAccelerator(fast=cudf, slow=pandas))>
```

Operations on proxy types/functions execute on the GPU where possible and on
the CPU otherwise, synchronizing under the hood as needed. This applies to
pandas operations both in your code and in third-party libraries you may be
using.

![cudf-pandas-execution-flow](../_static/cudf-pandas-execution-flow.png)

All `cudf.pandas` objects are a proxy to either a GPU (cuDF) or CPU (pandas)
object at any given time. Attribute lookups and method calls are first
attempted on the GPU (copying from CPU if necessary). If that fails, the
operation is attempted on the CPU (copying from GPU if necessary).

Additionally, `cudf.pandas` special cases chained method calls (for example
`.groupby().rolling().apply()`) that can fail at any level of the chain and
rewinds and replays the chain minimally to deliver the correct result.

Data is automatically transferred from host to device (and vice versa) only
when necessary, avoiding unnecessary device-host transfers.

When using `cudf.pandas`, cuDF's [pandas compatibility
mode](https://docs.rapids.ai/api/cudf/stable/api_docs/options/#available-options)
is automatically enabled, ensuring consistency with pandas-specific semantics
like default sort ordering.
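The try-fast-then-fall-back dispatch described above can be sketched in plain Python. This is a toy illustration of the idea only, not cuDF's actual implementation, and the class and backend names below are invented for the example:

```python
class FastSlowProxy:
    """Toy sketch: try the fast backend first, fall back to the slow one."""

    def __init__(self, fast, slow):
        self._fast = fast
        self._slow = slow

    def call(self, name, *args, **kwargs):
        try:
            # Attempt the operation on the "GPU" (fast) backend first.
            return getattr(self._fast, name)(*args, **kwargs)
        except (AttributeError, NotImplementedError):
            # Fall back to the "CPU" (slow) backend, which implements everything.
            return getattr(self._slow, name)(*args, **kwargs)


class Fast:
    def add(self, a, b):          # the fast backend supports only `add`
        return a + b


class Slow:
    def add(self, a, b):
        return a + b

    def fancy(self, x):           # only the slow backend supports `fancy`
        return x * 2


proxy = FastSlowProxy(Fast(), Slow())
```

Calling `proxy.call("add", 1, 2)` runs on the fast backend, while `proxy.call("fancy", 5)` silently falls back to the slow one; the real mechanism additionally handles data copies and chained calls, as described above.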
# FAQ and Known Issues

## When should I use `cudf.pandas` vs using the cuDF library directly?

`cudf.pandas` is the quickest and easiest way to get pandas code running on
the GPU. However, there are some situations in which using the cuDF library
directly should be considered.

- cuDF implements a subset of the pandas API, while `cudf.pandas` will fall
  back automatically to pandas as needed. If you can write your code to use
  just the operations supported by cuDF, you will benefit from increased
  performance by using cuDF directly.
- cuDF does offer some functions and methods that pandas does not. For
  example, cuDF has a [`.list`
  accessor](https://docs.rapids.ai/api/cudf/stable/api_docs/list_handling/)
  for working with list-like data. If you need access to the additional
  functionality in cuDF, you will need to use the cuDF package directly.

## How closely does this match pandas?

You can use 100% of the pandas API and most things will work identically to
pandas. `cudf.pandas` is tested against the entire pandas unit test suite.
Currently, we're passing **93%** of the 187,000+ unit tests, with the goal of
passing 100%. Test failures are typically for edge cases and due to the small
number of behavioral differences between cuDF and pandas. You can learn more
about these edge cases in
[Known Limitations](#are-there-any-known-limitations).

We also run nightly tests that track interactions between `cudf.pandas` and
other third-party libraries. See
[Third-Party Library Compatibility](#does-it-work-with-third-party-libraries).

## How can I tell if `cudf.pandas` is active?

You shouldn't have to write any code differently depending on whether
`cudf.pandas` is in use or not. You should use `pandas` and things should just
work. In a few circumstances during testing and development, however, you may
want to explicitly verify that `cudf.pandas` is active.
To do that, print the pandas module in your code and review the output; it
should look something like this:

```python
%load_ext cudf.pandas
import pandas as pd

print(pd)
<module 'pandas' (ModuleAccelerator(fast=cudf, slow=pandas))>
```

## Does it work with third-party libraries?

`cudf.pandas` is tested with numerous popular third-party libraries.
`cudf.pandas` will not only work but will accelerate pandas operations within
these libraries. As part of our CI/CD system, we currently test common
interactions with the following Python libraries:

| Library      | Status |
|--------------|--------|
| cuGraph      | ✅     |
| cuML         | ✅     |
| Hvplot       | ✅     |
| Holoview     | ✅     |
| Ibis         | ✅     |
| Joblib       | ❌     |
| NumPy        | ✅     |
| Matplotlib   | ✅     |
| Plotly       | ✅     |
| PyTorch      | ✅     |
| Seaborn      | ✅     |
| Scikit-Learn | ✅     |
| SciPy        | ✅     |
| Tensorflow   | ✅     |
| XGBoost      | ✅     |

Please review the section on
[Known Limitations](#are-there-any-known-limitations) for details about what
is expected not to work (and why).

## Can I use this with Dask or PySpark?

`cudf.pandas` is not designed for distributed or out-of-core computing (OOC)
workflows today. If you are looking for accelerated OOC and distributed
solutions for data processing, we recommend Dask and Apache Spark.

Both Dask and Apache Spark support accelerated computing through
configuration-based interfaces. Dask allows you to [configure the dataframe
backend](https://docs.dask.org/en/latest/how-to/selecting-the-collection-backend.html)
to use cuDF (learn more in [this
blog](https://medium.com/rapids-ai/easy-cpu-gpu-arrays-and-dataframes-run-your-dask-code-where-youd-like-e349d92351d))
and the [RAPIDS Accelerator for Apache
Spark](https://nvidia.github.io/spark-rapids/) provides a similar
configuration-based plugin for Spark.

## Are there any known limitations?
There are a few known limitations that you should be aware of:

- Because fallback involves copying data from GPU to CPU and back, [value
  mutability](https://pandas.pydata.org/pandas-docs/stable/getting_started/overview.html#mutability-and-copying-of-data)
  of Pandas objects is not always guaranteed. You should follow the pandas
  recommendation to favor immutable operations.
- `cudf.pandas` can't currently interface smoothly with functions that
  interact with objects using a C API (such as the Python or NumPy C API)
  - For example, you can write `torch.tensor(df.values)` but not
    `torch.from_numpy(df.values)`, as the latter uses the NumPy C API
- For performance reasons, joins and join-based operations are not currently
  implemented to maintain the same row ordering as standard pandas
- `cudf.pandas` isn't compatible with directly using `import cudf` and is
  intended to be used with pandas-based workflows.
- Global variables can be accessed but can't be modified during CPU-fallback

  ```python
  %load_ext cudf.pandas
  import pandas as pd

  lst = [10]

  def udf(x):
      lst.append(x)
      return x + lst[0]

  s = pd.Series(range(2)).apply(udf)
  print(s)  # we can access the value in lst
  0    10
  1    11
  dtype: int64
  print(lst)  # lst is unchanged, as this specific UDF could not run on the GPU
  [10]
  ```

- `cudf.pandas` (and cuDF in general) is currently only compatible with
  pandas 1.5.x.

## Can I force running on the CPU?

To run your code on CPU, just run without activating `cudf.pandas`, and
"regular pandas" will be used.

If needed, GPU acceleration may be disabled when using `cudf.pandas` for
testing or benchmarking purposes. To do so, set the
`CUDF_PANDAS_FALLBACK_MODE` environment variable, e.g.

```bash
CUDF_PANDAS_FALLBACK_MODE=1 python -m cudf.pandas some_script.py
```

## Slow tab completion in IPython?

You may experience slow tab completion when inspecting the methods/attributes
of large dataframes. We expect this issue to be resolved in an upcoming
release.
In the meantime, you may execute the following command in IPython before
loading `cudf.pandas` to work around the issue:

```
%config IPCompleter.jedi_compute_type_timeout=0
```
cudf.pandas
-----------

cuDF pandas accelerator mode (``cudf.pandas``) is built on cuDF and
**accelerates pandas code** on the GPU. It supports **100% of the Pandas
API**, using the GPU for supported operations, and automatically **falling
back to pandas** for other operations.

.. code-block:: python

   %load_ext cudf.pandas  # pandas API is now GPU accelerated

   import pandas as pd

   df = pd.read_csv("filepath")  # uses the GPU!
   df.groupby("col").mean()      # uses the GPU!
   df.rolling(window=3).sum()    # uses the GPU!
   df.apply(set, axis=1)         # uses the CPU (fallback)

.. figure:: ../_static/colab.png
   :width: 200px
   :target: https://nvda.ws/rapids-cudf

   Try it on Google Colab!

.. list-table::
   :widths: 50 50

   * - **Zero Code Change Acceleration**

       Just ``%load_ext cudf.pandas`` in Jupyter, or pass ``-m cudf.pandas``
       on the command line.
     - **Third-Party Library Compatible**

       ``cudf.pandas`` is compatible with most third-party libraries that use
       pandas.
   * - **Run the same code on CPU or GPU**

       Nothing changes, not even your ``import`` statements, when going from
       CPU to GPU.
     - **100% of the Pandas API**

       Combines the full flexibility of Pandas with the blazing fast
       performance of cuDF.

Starting with the version 23.10.01 release, ``cudf.pandas`` is available in
Open Beta, as part of the ``cudf`` package.
See `RAPIDS Quick Start <https://rapids.ai/#quick-start>`_ to get up-and-running
with ``cudf``.

.. toctree::
   :maxdepth: 1
   :caption: Contents:

   usage
   benchmarks
   how-it-works
   faq
# Testing cuDF

## Tooling

Tests in cuDF are written using
[`pytest`](https://docs.pytest.org/en/latest/). Test coverage is measured
using [`coverage.py`](https://coverage.readthedocs.io/en/latest/),
specifically the [`pytest-cov`](https://github.com/pytest-dev/pytest-cov)
plugin. Code coverage reports are uploaded to
[Codecov](https://app.codecov.io/gh/rapidsai/cudf). Each PR also indicates
whether it increases or decreases test coverage.

## Test organization

How tests are organized depends on which of the following two groups they
fall into:

1. Free functions such as `cudf.merge` that operate on classes like
   `DataFrame` or `Series`.
2. Methods of the above classes.

Tests of free functions should be grouped into files based on the [API
sections in the
documentation](https://docs.rapids.ai/api/cudf/latest/api_docs/index.html).
This places tests of similar functionality in the same module. Tests of class
methods should be organized in the same way, except that this organization
should be within a subdirectory corresponding to the class. For instance,
tests of `DataFrame` indexing should be placed into
`dataframe/test_indexing.py`. In cases where tests may be shared by multiple
classes sharing a common parent (e.g. `DataFrame` and `Series` both require
`IndexedFrame` tests), the tests may be placed in a directory corresponding
to the parent class.

## Test contents

### Writing tests

In general, functionality must be tested for both standard and exceptional
cases. Standard use cases may be covered using parametrization (using
`pytest.mark.parametrize`). Tests of standard use cases should typically
include some coverage of:

- Different dtypes, including nested dtypes (especially strings)
- Mixed objects, e.g. binary operations between `DataFrame` and `Series`
- Operations on scalars
- Verifying all combinations of parameters for complex APIs like `cudf.merge`.

Here are some of the most common exceptional cases to test:

1. `Series`/`DataFrame`/`Index` with zero rows
2. `DataFrame` with zero columns
3. All null data
4. For string or list APIs, empty strings/lists
5. For list APIs, lists containing all null elements or empty strings
6. For numeric data:
   1. All 0s.
   2. All 1s.
   3. Containing/all inf
   4. Containing/all nan
   5. `INT${PRECISION}_MAX` for a given precision (e.g. `2**32` for `int32`).

Most specific APIs will also include a range of other cases.

In general, it is preferable to write separate tests for different exceptional
cases. Excessive parametrization and branching increases complexity and
obfuscates the purpose of a test. Typically, exception cases require specific
assertions or other special logic, so they are best kept separate. The main
exception to this rule is tests based on comparison to pandas. Such tests may
test exceptional cases alongside more typical cases since the logic is
generally identical.

### Parametrization: custom fixtures and `pytest.mark.parametrize`

When it comes to parametrizing tests written with `pytest`, the two main
options are
[fixtures](https://docs.pytest.org/en/latest/explanation/fixtures.html) and
[`mark.parametrize`](https://docs.pytest.org/en/latest/how-to/parametrize.html#pytest-mark-parametrize).
By virtue of being functions, fixtures are both more verbose and more
self-documenting. Fixtures also have the significant benefit of being
constructed lazily, whereas parametrizations are constructed at test
collection time.

In general, these approaches are applicable to parametrizations of different
complexity. For the purpose of this discussion, we define a parametrization as
"simple" if it is composed of a list (possibly nested) of primitive objects.
Examples include a list of integers or a list of lists of strings. This _does
not_ include e.g. cuDF or pandas objects. In particular, developers should
avoid performing GPU memory allocations during test collection.

With that in mind, here are some ground rules for how to parametrize.
Use `pytest.mark.parametrize` when:

- One test must be run on many inputs and those inputs are simple to construct.

Use fixtures when:

- One or more tests must be run on the same set of inputs,
  and all of those inputs can be constructed with simple parametrizations.
  In practice, that means that it is acceptable to use a fixture like this:
  ```python
  @pytest.fixture(params=["a", "b"])
  def foo(request):
      if request.param == "a":
          # Some complex initialization
      elif request.param == "b":
          # Some other complex initialization
  ```
  In other words, the construction of the fixture may be complex,
  as long as the parametrization of that construction is simple.
- One or more tests must be run on the same set of inputs,
  and at least one of those inputs requires complex parametrizations.
  In this case, the parametrization of a fixture should be decomposed
  by using fixtures that depend on other fixtures.
  ```python
  @pytest.fixture(params=["a", "b"])
  def foo(request):
      if request.param == "a":
          # Some complex initialization
      elif request.param == "b":
          # Some other complex initialization

  @pytest.fixture
  def bar(foo):
      # do something with foo like initialize a cudf object.

  def test_some_property(bar):
      # will be run for each value of bar that results from each value of foo.
      assert some_property_of(bar)
  ```

#### Complex parametrizations

The lists above document common use cases.
However, more complex cases may arise.
One of the most common alternatives is where, given a set of test cases,
different tests need to run on different subsets with a nonempty intersection.
Fixtures and parametrization are only capable of handling the Cartesian product of
parameters, i.e. "run this test for all values of `a` and all values of `b`".
There are multiple potential solutions to this problem.
One possibility is to encapsulate common test logic in a helper function,
then call it from multiple `test_*` functions that construct the necessary inputs.
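A minimal sketch of that first approach, encapsulating common logic in a helper called from multiple tests (the helper and the inputs here are hypothetical, not from the cuDF test suite):

```python
# Shared assertion logic lives in a plain helper rather than a fixture,
# so each test is free to construct whatever inputs it needs.
def check_monotonic_increasing(values):
    # Common check used by several tests below.
    assert all(a <= b for a, b in zip(values, values[1:]))


def test_sorted_ints():
    check_monotonic_increasing(sorted([3, 1, 2]))


def test_sorted_strings():
    check_monotonic_increasing(sorted(["b", "a", "c"]))
```

Because the helper is an ordinary function, each test can construct a different subset of cases while still sharing the verification logic.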
Another possibility is to use functions rather than fixtures to construct inputs,
allowing for more flexible input construction:

```python
def get_values(predicate):
    values = range(10)
    yield from filter(predicate, values)

def test_evens():
    for v in get_values(lambda x: x % 2 == 0):
        # Execute test

def test_odds():
    for v in get_values(lambda x: x % 2 == 1):
        # Execute test
```

Other approaches are also possible, and the best solution should be discussed on
a case-by-case basis during PR review.

### Tests with expected failures (`xfail`s)

In some circumstances it makes sense to mark a test as _expected_ to fail,
perhaps because the functionality is not yet implemented in cuDF.
To do so, use the
[`pytest.mark.xfail`](https://docs.pytest.org/en/stable/reference/reference.html#pytest.mark.xfail)
decorator on the test.
If the test is parametrized and only a single parameter is expected to fail,
rather than marking the entire test as xfailing, mark the single parameter by creating a
[`pytest.param`](https://docs.pytest.org/en/stable/how-to/skipping.html#skip-xfail-with-parametrize)
with appropriate marks.

```python
@pytest.mark.parametrize(
    "value",
    [
        1,
        2,
        pytest.param(
            3, marks=pytest.mark.xfail(reason="code doesn't work for 3")
        ),
    ],
)
def test_value(value):
    assert value < 3
```

When marking an `xfail`ing test, provide a descriptive reason.
This _should_ include a link to an issue describing the problem so that progress
towards fixing the problem can be tracked.
If no such issue exists already, create one!

#### Conditional `xfail`s

Sometimes, a parametrized test is only expected to fail for some combination of
its parameters. Say, for example, division by zero, but only if the datatype is `bool`.
If all combinations with a given parameter are expected to fail,
one can mark the parameter with `pytest.mark.xfail`, indicating a reason for the
expected failure.
If only _some_ of the combinations are expected to fail, it can be tempting to
mark the parameter as `xfail`, but this should be avoided.
A test marked as `xfail` that passes is an "unexpected pass" or `XPASS`,
which is considered a failure by the test suite since we use the pytest option
[`xfail_strict=true`](https://docs.pytest.org/en/latest/how-to/skipping.html#strict-parameter).

Another option is to use the programmatic `pytest.xfail` function to fail in the
test body to `xfail` the relevant combination of parameters.
**DO NOT USE THIS OPTION**.
Unlike the mark-based approach, `pytest.xfail` *does not* run the rest of the
test body, so we will never know if the test starts to pass because the bug is fixed.
Use of `pytest.xfail` is checked for, and forbidden, via a pre-commit hook.

Instead, to handle this (hopefully rare) case, we can programmatically mark a
test as expected to fail under a combination of conditions by applying the
`pytest.mark.xfail` mark to the current test `request`.
To achieve this, the test function should take an extra parameter named
`request`, on which we call `applymarker`:

```python
@pytest.mark.parametrize("v1", [1, 2, 3])
@pytest.mark.parametrize("v2", [1, 2, 3])
def test_sum_lt_6(request, v1, v2):
    request.applymarker(
        pytest.mark.xfail(
            condition=(v1 == 3 and v2 == 3),
            reason="Add comment linking to relevant issue",
        )
    )
    assert v1 + v2 < 6
```

This way, when the bug is fixed, the test suite will fail at this point
(and we will remember to update the test).

### Testing code that throws warnings

Some code may be expected to throw warnings.
A common example is when a cudf API is deprecated for future removal,
but many other possibilities exist as well.
The cudf testing suite
[surfaces all warnings as errors](https://docs.pytest.org/en/latest/how-to/capture-warnings.html#controlling-warnings).
This includes warnings raised from non-cudf code, such as calls to pandas or pyarrow.
This setting forces developers to proactively deal with deprecations from other
libraries, as well as preventing the internal use of deprecated cudf APIs in
other parts of the library.
Just as importantly, it can help catch real errors like integer overflow or
division by zero.

When testing code that is expected to throw a warning, developers should use the
[`pytest.warns`](https://docs.pytest.org/en/latest/how-to/capture-warnings.html#assertwarnings)
context to catch the warning.
For parametrized tests that raise warnings under specific conditions,
use the `testing._utils.expect_warning_if` decorator instead of `pytest.warns`.

```{warning}
[`warnings.catch_warnings`](https://docs.python.org/3/library/warnings.html#warnings.catch_warnings)
is a tempting alternative to `pytest.warns`.
**Do not use this context manager in tests.**
Unlike `pytest.warns`, which _requires_ that the expected warning be raised,
`warnings.catch_warnings` simply catches warnings that appear without requiring them.
The cudf testing suite should avoid such ambiguities.
```

### Testing utility functions

The `cudf.testing` subpackage provides a handful of utilities for testing the
equality of objects.
The internal `cudf.testing._utils` module provides additional helper functions
for use in tests.
In particular:

- `testing._utils.assert_eq` is the biggest hammer to reach for.
  It can be used to compare any pair of objects.
- For comparing specific objects, use `testing.assert_[frame|series|index]_equal`.
- For verifying that the expected exceptions are raised, use
  `testing._utils.assert_exceptions_equal`.
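As a self-contained illustration of the `pytest.warns` pattern described above (the deprecated function here is a stand-in, not a real cudf API):

```python
import warnings

import pytest


def old_api():
    # Stand-in for a deprecated function; not a real cudf API.
    warnings.warn("old_api is deprecated; use new_api instead", FutureWarning)
    return 42


def test_old_api_warns():
    # pytest.warns fails the test if the warning is NOT raised, unlike
    # warnings.catch_warnings, which would silently let it pass.
    with pytest.warns(FutureWarning):
        assert old_api() == 42
```

If `old_api` stopped warning (e.g. after the deprecation is removed), this test would fail, prompting the test to be updated.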
rapidsai_public_repos/cudf/docs/cudf/source/developer_guide/benchmarking.md
# Benchmarking cuDF

The goal of the benchmarks in this repository is to measure the performance of
various cuDF APIs. Benchmarks in cuDF are written using the
[`pytest-benchmark`](https://pytest-benchmark.readthedocs.io/en/latest/index.html)
plugin to the [`pytest`](https://docs.pytest.org/en/latest/) Python testing framework.
Using `pytest-benchmark` provides a seamless experience for developers familiar
with `pytest`.

We include benchmarks of both public APIs and internal functions.
The former give us a macro view of our performance, especially vis-à-vis pandas.
The latter help us quantify and minimize the overhead of our Python bindings.

```{note}
Our current benchmarks focus entirely on measuring run time.
However, minimizing memory footprint can be just as important for some cases.
In the future, we may update our benchmarks to also include memory usage measurements.
```

## Benchmark organization

At the top level, benchmarks are divided into `internal` and `API` directories.
API benchmarks are for public features that we expect users to consume.
Internal benchmarks capture the performance of cuDF internals that have no
stability guarantees.
Within each directory, benchmarks are organized based on the type of function.
Functions in cuDF generally fall into two groups:

1. Methods of classes like `DataFrame` or `Series`.
2. Free functions operating on the above classes like `cudf.merge`.

The former should be organized into files named `bench_class.py`.
For example, benchmarks of `DataFrame.eval` belong in `API/bench_dataframe.py`.
Benchmarks should be written at the highest level of generality possible with
respect to the class hierarchy.
For instance, all classes support the `take` method, so those benchmarks belong
in `API/bench_frame_or_index.py`.
If a method has a slightly different API for different classes, benchmarks
should use a minimal common API, _unless_ developers expect certain arguments
to trigger code paths with very different performance characteristics.
One example is `DataFrame.where`, which supports a wide range of inputs
(like other `DataFrame`s) that other classes don't support.
Therefore, we have separate benchmarks for `DataFrame`, in addition to the
general benchmarks for all `Frame` and `Index` classes.

```{note}
`pytest` does not support having two benchmark files with the same name,
even if they are in separate directories.
Therefore, benchmarks of internal methods of _public_ classes go in files
suffixed with `_internal`.
Benchmarks of `DataFrame._apply_boolean_mask`, for instance, belong in
`internal/bench_dataframe_internal.py`.
```

Free functions have more flexibility.
Broadly speaking, they should be grouped into benchmark files containing
similar functionality.
For example, I/O benchmarks can all live in `bench_io.py`.
For now those groupings are left to the discretion of developers.

## Running benchmarks

By default, pytest discovers test files and functions prefixed with `test_`.
For benchmarks, we configure `pytest` to instead search using the `bench_` prefix.
After installing `pytest-benchmark`, running benchmarks is as simple as just
running `pytest`.

When benchmarks are run, the default behavior is to output the results in a
table to the terminal.
A common requirement is to then compare the performance of benchmarks before
and after a change.
We can generate these comparisons by saving the output using the
`--benchmark-autosave` option to pytest.
When using this option, after the benchmarks are run the output will contain a line:

```
Saved benchmark data in: /path/to/XXXX_*.json
```

The `XXXX` is a four-digit number identifying the benchmark.
If preferred, a user may also use the `--benchmark-save=NAME` option,
which allows more control over the resulting filename.
Given two benchmark runs `XXXX` and `YYYY`, benchmarks can then be compared using

```
pytest-benchmark compare XXXX YYYY
```

Note that the comparison uses the `pytest-benchmark` command rather than the
`pytest` command.
`pytest-benchmark` has a number of additional options that can be used to
customize the output.
The next line contains one useful example, but developers should experiment to
find a useful output:

```
pytest-benchmark compare XXXX YYYY --sort="name" --columns=Mean --name=short --group-by=param
```

For more details, see the
[`pytest-benchmark` documentation](https://pytest-benchmark.readthedocs.io/en/latest/comparing.html).

## Benchmark contents

### Benchmark configuration

Benchmarks must support [comparing to pandas](#comparing-to-pandas) and
[being run as tests](#testing-benchmarks).
To satisfy these requirements, one must follow these rules when writing benchmarks:

1. Import `cudf` and `cupy` from the config module:
   ```python
   from ..common.config import cudf, cupy  # Do this
   import cudf, cupy  # Not this
   ```
   This enables swapping out for `pandas` and `numpy` respectively.
2. Avoid hard-coding benchmark dataset sizes, and instead use the sizes
   advertised by `config.py`.
   This enables running the benchmarks in "test" mode on small datasets,
   which will be much faster.

### Writing benchmarks

Just as benchmarks should be written in terms of the highest-level classes in
the hierarchy, they should also assume as little as possible about the nature
of the data.
For instance, unless there are meaningful functional differences, benchmarks
should not care about the dtype or nullability of the data.
Objects that differ in these ways should be interchangeable for most benchmarks.
The goal of writing benchmarks in this way is to then automatically benchmark
objects with different properties.
We support this use case with the `benchmark_with_object` decorator.
The use of this decorator is best demonstrated by example:

```python
@benchmark_with_object(cls="dataframe", dtype="int", cols=6)
def bench_foo(benchmark, dataframe):
    benchmark(dataframe.foo)
```

In the example above, `bench_foo` will be run for DataFrames containing six
columns of integer data.
The decorator allows automatically parametrizing the following object properties:

- `cls`: Objects of a specific class, e.g. `DataFrame`.
- `dtype`: Objects of a specific dtype.
- `nulls`: Objects with and without null entries.
- `cols`: Objects with a certain number of columns.
- `rows`: Objects with a certain number of rows.

In the example, since we did not specify the number of rows or nullability,
it will be run once for each valid number of rows and for both nullable and
non-nullable data.
The valid set of all parameters (e.g. the numbers of rows) is stored in the
`common/config.py` file.
This decorator allows a developer to write a generic benchmark that works for
many types of objects, then have that benchmark automatically run for all
objects of interest.

### Parametrizing tests

The `benchmark_with_object` decorator covers most use cases and automatically
guarantees a baseline of benchmark coverage.
However, many benchmarks will require more customized objects.
In some cases those will be the primary targets whose methods are called.
For instance, a benchmark may require a `Series` with a specific data distribution.
In others, those objects will be arguments passed to other functions.
An example of this is `DataFrame.where`, which accepts many types of objects
to filter with.

In the first case, fixtures should follow certain rules.
When writing fixtures, developers should make the data sizes dependent on the
benchmarks configuration.
The `benchmarks/common/config.py` file defines standard data sizes to be used
in benchmarks.
These data sizes can be tweaked for debugging purposes
(see [](#testing-benchmarks) below).
Fixture sizes should be relative to the `NUM_ROWS` and/or `NUM_COLS` variables
defined in the config module.
These rules ensure consistency between these fixtures and those provided by
`benchmark_with_object`.

## Comparing to pandas

An important aspect of benchmarking cuDF is comparing it to pandas.
We often want to generate quantitative comparisons, so we need to make that as
easy as possible.
Our benchmarks support this by setting the environment variable
`CUDF_BENCHMARKS_USE_PANDAS`.
When this variable is detected, all benchmarks will automatically be run using
pandas instead of cuDF.
Therefore, comparisons can easily be generated by simply running the benchmarks
twice, once with the variable set and once without.
Note that this variable only affects API benchmarks, not internal benchmarks,
since the latter are not even guaranteed to be valid pandas code.

```{note}
`CUDF_BENCHMARKS_USE_PANDAS` effectively remaps `cudf` to `pandas` and `cupy`
to `numpy`.
It does so by aliasing these modules in `common.config.py`.
This aliasing is why it is critical for developers to import these packages
from `config.py`.
```

## Testing benchmarks

Benchmarks need to be kept up to date with API changes in cuDF.
However, we cannot simply run benchmarks in CI.
Doing so would consume too many resources, and it would significantly slow down
the development cycle.
To balance these issues, our benchmarks also support running in "testing" mode.
To do so, developers can set the `CUDF_BENCHMARKS_DEBUG_ONLY` environment variable.
When benchmarks are run with this variable, all data sizes are set to a minimum
and the number of sizes is reduced.
Our CI testing takes advantage of this to ensure that benchmarks remain valid code.

```{note}
The objects provided by `benchmark_with_object` respect the `NUM_ROWS` and
`NUM_COLS` defined in `common/config.py`.
`CUDF_BENCHMARKS_DEBUG_ONLY` works by conditionally redefining these values.
This is why it is crucial for developers to use these variables when defining
custom fixtures or cases.
```

## Profiling

Although not strictly part of our benchmarking suite, profiling is a common
need, so we provide some guidelines here.
Here are two easy ways (there may be others) to profile benchmarks:

1. The [`pytest-profiling`](https://github.com/man-group/pytest-plugins/tree/master/pytest-profiling) plugin.
2. The [`py-spy`](https://github.com/benfred/py-spy) package.

Using the former is as simple as adding the `--profile` (or `--profile-svg`)
arguments to the `pytest` invocation.
The latter requires instead invoking pytest from py-spy, like so:

```
py-spy record -- pytest bench_foo.py
```

Each tool has different strengths and provides somewhat different information.
Developers should try both and see what works for a particular workflow.
Developers are also encouraged to share useful alternatives that they discover.

## Advanced Topics

This section discusses some underlying details of how cuDF benchmarks work.
They are not usually necessary for typical developers or benchmark writers.
This information is primarily for developers looking to extend the types of
objects that can be easily benchmarked.

### Understanding `benchmark_with_object`

Under the hood, `benchmark_with_object` is made up of two critical pieces:
fixture unions and some decorator magic.

#### Fixture unions

Fixture unions are a feature of `pytest_cases`.
A fixture union is a fixture that, when used as a test function parameter,
will trigger the test to run once for each fixture contained in the union.
Since most cuDF benchmarks can be run with the same relatively small set of
objects, our benchmarks generate the Cartesian product of possible fixtures
and then create all possible unions.
This feature is critical to the design of our benchmarks.
For each of the relevant parameter combinations (size, nullability, etc.)
we programmatically generate a new fixture.
The resulting fixtures are unambiguously named according to the following scheme:
`{classname}_dtype_{dtype}[_nulls_{true|false}][[_cols_{num_cols}]_rows_{num_rows}]`.
If a fixture name does not contain a particular component, it represents a
union of all values of that component.
As an example, consider the fixture `dataframe_dtype_int_rows_100`.
This fixture is a union of both nullable and non-nullable `DataFrame`s with
different numbers of columns.

#### The `benchmark_with_object` decorator

The long names of the above unions are cumbersome when writing tests.
Moreover, having this information embedded in the name means that in order to
change the parameters used, the entire benchmark needs to have the fixture
name replaced.
The `benchmark_with_object` decorator is the solution to this problem.
When used on a test function, it essentially replaces the function parameter
name with the true fixture.
Our original example from above,

```python
@benchmark_with_object(cls="dataframe", dtype="int", cols=6)
def bench_foo(benchmark, dataframe):
    benchmark(dataframe.foo)
```

is functionally equivalent to

```python
def bench_foo(benchmark, dataframe_dtype_int_cols_6):
    benchmark(dataframe_dtype_int_cols_6.foo)
```
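For reference, a benchmark function in this suite ultimately reduces to an ordinary pytest-benchmark test. Here is a minimal, library-agnostic sketch, with Python's built-in `sorted` standing in for a cuDF API call:

```python
# `benchmark` is the fixture injected by the pytest-benchmark plugin;
# it repeatedly times the callable it is given.
def bench_sort(benchmark):
    data = list(range(1000, 0, -1))
    # The benchmarked callable's return value is passed through, so
    # correctness can be sanity-checked alongside the timing.
    result = benchmark(sorted, data)
    assert result[0] == 1
```

Running `pytest` on a file containing this function produces a timing table for `bench_sort`, and `--benchmark-autosave` persists the results for later comparison as described above.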
rapidsai_public_repos/cudf/docs/cudf/source/developer_guide/pylibcudf.md
# pylibcudf

pylibcudf is a lightweight Cython wrapper around libcudf.
It aims to provide a near-zero overhead interface to accessing libcudf in Python.
It should be possible to achieve near-native C++ performance using Cythonized
code calling pylibcudf, while also allowing fairly performant usage from Python.

In addition to these requirements, pylibcudf must also integrate naturally with
other Python libraries.
In other words, it should interoperate fairly transparently with standard
Python containers, community protocols like `__cuda_array_interface__`,
and common vocabulary types like CuPy arrays.

## General Design Principles

To satisfy the goals of pylibcudf, we impose the following set of design principles:

- Every public function or method should be `cpdef`ed.
  This allows it to be used in both Cython and Python code.
  This incurs some slight overhead over `cdef` functions, but we assume that
  this is acceptable because 1) the vast majority of users will be using pure
  Python rather than Cython, and 2) the overhead of a `cpdef` function over a
  `cdef` function is on the order of a nanosecond, while CUDA kernel launch
  overhead is on the order of a microsecond, so these function overheads
  should be washed out by typical usage of pylibcudf.
- Every variable used should be strongly typed and either be a primitive type
  (int, float, etc.) or a cdef class.
  Any enums in C++ should be mirrored using `cpdef enum`, which will create
  both a C-style enum in Cython and a PEP 435-style Python enum that will
  automatically be used in Python.
- All typing in code should be written using Cython syntax, not PEP 484 Python
  typing syntax.
  Not only does this ensure compatibility with Cython < 3, but even with
  Cython 3 PEP 484 support remains incomplete as of this writing.
- All cudf code should interact only with pylibcudf, never with libcudf directly.
- All imports should be relative so that pylibcudf can be easily extracted
  from cudf later.
  - Exception: All imports of libcudf API bindings in `cudf._lib.cpp` should
    use absolute imports of `cudf._lib.cpp as libcudf`.
    We should convert the `cpp` directory into a proper package so that it can
    be imported as `libcudf` in that fashion.
    When moving pylibcudf into a separate package, it will be renamed to
    `libcudf` and only the imports will need to change.
- Ideally, pylibcudf should depend on nothing other than rmm and pyarrow.
  This will allow it to be extracted into a largely standalone library and
  used in environments where the larger dependency tree of cudf may be
  cumbersome.

## Relationship to libcudf

In general, the relationship between pylibcudf and libcudf can be understood
in terms of two components, data structures and algorithms.

(data-structures)=

### Data Structures

Typically, every type in libcudf should have a mirror Cython `cdef` class with
an attribute `self.c_obj: unique_ptr[${underlying_type}]` that owns an
instance of the underlying libcudf type.
Each type should also implement a corresponding method
`cdef ${cython_type} from_libcudf(${underlying_type} dt)` to enable
constructing the Cython object from an underlying libcudf instance.
Depending on the nature of the type, the function may need to accept a
`unique_ptr` and take ownership, e.g.
`cdef ${cython_type} from_libcudf(unique_ptr[${underlying_type}] obj)`.
This will typically be the case for types that own GPU data
(we may want to codify this further).

For example, `libcudf::data_type` maps to `pylibcudf.DataType`,
which looks like this (implementation omitted):

```cython
cdef class DataType:
    cdef data_type c_obj

    cpdef TypeId id(self)
    cpdef int32_t scale(self)

    @staticmethod
    cdef DataType from_libcudf(data_type dt)
```

This allows pylibcudf functions to accept a typed `DataType` parameter and
then trivially call underlying libcudf algorithms by accessing the argument's
`c_obj`.
#### pylibcudf Tables and Columns

The primary exception to the above set of rules are libcudf's core data-owning
types, `cudf::table` and `cudf::column`.
libcudf uses modern C++ idioms based on smart pointers to avoid resource leaks
and make code exception-safe.
To avoid passing around raw pointers, and to ensure that ownership semantics
are clear, libcudf has separate `view` types corresponding to data-owning types.
For example, `cudf::column` owns data, while `cudf::column_view` represents a
view on a column of data and `cudf::mutable_column_view` represents a mutable view.
A `column_view` need not actually reference data owned by a `cudf::column`;
any memory buffer will do.
This separation allows libcudf algorithms to clearly communicate ownership
expectations and allows multiple views into the same data to coexist.
While libcudf algorithms accept views as inputs, any algorithms that allocate
data must return `cudf::column` and `cudf::table` objects.

libcudf's ownership model is problematic for pylibcudf, which must be able to
seamlessly interoperate with data provided by other Python libraries like
PyTorch or Numba.
Therefore, pylibcudf employs the following strategy:

- pylibcudf defines the `gpumemoryview` type, which (analogous to the
  [Python `memoryview` type](https://docs.python.org/3/library/stdtypes.html#memoryview))
  represents a view into memory owned by another object that it keeps alive
  using Python's standard reference counting machinery.
  A `gpumemoryview` is constructible from any object implementing the
  [CUDA Array Interface protocol](https://numba.readthedocs.io/en/stable/cuda/cuda_array_interface.html).
  - This type will eventually be generalized for reuse outside of pylibcudf.
- pylibcudf defines its own Table and Column classes.
  - A Table maintains Python references to the Columns it contains,
    so multiple Tables may share the same Column.
  - A Column consists of `gpumemoryview`s of its data buffers (which may
    include children for nested types) and its null mask.
- `pylibcudf.Table` and `pylibcudf.Column` provide easy access to
  `cudf::table_view` and `cudf::column_view` objects viewing the same
  columns/memory.
  These can then be used when implementing any pylibcudf algorithm in terms of
  the underlying libcudf algorithm.
  Specifically, each of these classes owns an instance of the libcudf view
  type and provides a method `view` that may be used to access a pointer to
  that object to be passed to libcudf.

### Algorithms

pylibcudf algorithms should look almost exactly like libcudf algorithms.
Any libcudf function should be mirrored in pylibcudf with an identical
signature and libcudf types mapped to corresponding pylibcudf types.
All calls to libcudf algorithms should perform any requisite Python
preprocessing early, then release the GIL prior to calling libcudf.
For example, here is the implementation of `gather`:

```cython
cpdef Table gather(
    Table source_table,
    Column gather_map,
    OutOfBoundsPolicy bounds_policy
):
    cdef unique_ptr[table] c_result
    with nogil:
        c_result = move(
            cpp_copying.gather(
                source_table.view(),
                gather_map.view(),
                bounds_policy
            )
        )
    return Table.from_libcudf(move(c_result))
```

There are a couple of notable points from the snippet above:

- The object returned from libcudf is immediately converted to a pylibcudf type.
- `cudf::gather` accepts a `cudf::out_of_bounds_policy` enum parameter.
  `OutOfBoundsPolicy` is an alias for this type in pylibcudf that matches our
  Python naming conventions (CapsCase instead of snake\_case).

## Miscellaneous Notes

### Cython Scoped Enums

Cython 3 introduced support for scoped enumerations.
However, this support has some bugs as well as some easy pitfalls.
Our usage of enums is intended to minimize the complexity of our code while
also working around Cython's limitations.
```{warning}
The guidance in this section may change often as Cython is updated and our
understanding of best practices evolves.
```

- All pxd files that declare a C++ enum should use `cpdef enum class` declarations.
  - Reason: This declaration makes the C++ enum available in Cython code while
    also transparently creating a Python enum.
- Any pxd file containing only C++ declarations must still have a
  corresponding pyx file if any of the declarations are scoped enums.
  - Reason: The creation of the Python enum requires that Cython actually
    generate the necessary Python C API code, which will not happen if only a
    pxd file is present.
- If a C++ enum will be part of a pylibcudf module's public API, then it
  should be imported (not cimported) directly into the pyx file and aliased
  with a name that matches our Python class naming conventions (CapsCase)
  instead of our C++ naming convention (snake\_case).
  - Reason: We want to expose the enum to both Python and Cython consumers of
    the module. As a side effect, this aliasing avoids
    [this Cython bug](https://github.com/cython/cython/issues/5609).
  - Note: Once the above Cython bug is resolved, the enum should also be
    aliased into the pylibcudf pxd file when it is cimported so that Python
    and Cython usage will match.

Here is an example of appropriate enum usage.

```cython
# cpp/copying.pxd
cdef extern from "cudf/copying.hpp" namespace "cudf" nogil:
    # cpdef here so that we export both a cdef enum class and a Python enum.Enum.
    cpdef enum class out_of_bounds_policy(bool):
        NULLIFY
        DONT_CHECK


# cpp/copying.pyx
# This file is empty, but is required to compile the Python enum in cpp/copying.pxd


# pylibcudf/copying.pxd

# cimport the enum using the exact name
# Once https://github.com/cython/cython/issues/5609 is resolved,
# this import should instead be
# from cudf._lib.cpp.copying cimport out_of_bounds_policy as OutOfBoundsPolicy
from cudf._lib.cpp.copying cimport out_of_bounds_policy


# pylibcudf/copying.pyx
# Access cpp.copying members that aren't part of this module's public API via
# this module alias
from cudf._lib.cpp cimport copying as cpp_copying
from cudf._lib.cpp.copying cimport out_of_bounds_policy

# This import exposes the enum in the public API of this module.
# It requires a no-cython-lint tag because it will be unused: all typing of
# parameters etc will need to use the Cython name `out_of_bounds_policy` until
# the Cython bug is resolved.
from cudf._lib.cpp.copying import \
    out_of_bounds_policy as OutOfBoundsPolicy  # no-cython-lint
```
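From the Python side, a `cpdef enum class` like the one above behaves as a standard PEP 435 enum. The following pure-Python stand-in (we cannot compile Cython here, and the member values are illustrative) shows what consumers of the generated enum can expect:

```python
import enum

# Pure-Python stand-in for the enum Cython generates from
# `cpdef enum class out_of_bounds_policy`; the concrete values are
# illustrative assumptions, not taken from libcudf.
class OutOfBoundsPolicy(enum.IntEnum):
    NULLIFY = 0
    DONT_CHECK = 1

# Members support standard enum introspection and comparison.
policy = OutOfBoundsPolicy.NULLIFY
```

This is why Python callers can pass e.g. `OutOfBoundsPolicy.NULLIFY` to `pylibcudf.copying.gather` without touching Cython at all.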
rapidsai_public_repos/cudf/docs/cudf/source/developer_guide/documentation.md
# Writing documentation

cuDF documentation is split into multiple pieces.
All core functionality is documented using inline docstrings.
Additional pages like user or developer guides are written independently.
While docstrings are written using [reStructuredText](https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html) (reST), the latter are written using [MyST](https://myst-parser.readthedocs.io/en/latest/).
The inline docstrings are organized using a small set of additional reST pages.
The results are all then compiled together using [Sphinx](https://www.sphinx-doc.org/en/master/).
This document discusses each of these components and how to contribute to them.

## Docstrings

cuDF docstrings use the [numpy](https://numpydoc.readthedocs.io/en/latest/format.html) style.
In lieu of a complete explanation, we include here an example of the format and the commonly used sections:

```
class A:
    """Brief description of A.

    Longer description of A.

    Parameters
    ----------
    x : int
        Description of x, the first constructor parameter.
    """

    def __init__(self, x: int):
        pass

    def foo(self, bar: str):
        """Short description of foo.

        Longer description of foo.

        Parameters
        ----------
        bar : str
            Description of bar.

        Returns
        -------
        float
            Description of the return value of foo.

        Raises
        ------
        ValueError
            Explanation of when a ValueError is raised.
            In this case, a ValueError is raised if bar is "fail".

        Examples
        --------
        The examples section is _strongly_ encouraged.
        Where appropriate, it may mimic the examples for the corresponding pandas API.

        >>> a = A()
        >>> a.foo('baz')
        0.0
        >>> a.foo('fail')
        ...
        ValueError: Failed!
        """
        if bar == "fail":
            raise ValueError("Failed!")
        return 0.0
```

`numpydoc` supports a number of other sections of docstrings.
Developers should familiarize themselves with them, since many are useful in different scenarios.

Our guidelines include one addition to the standard `numpydoc` guide.
Class properties, which are not explicitly covered, should be documented in the getter function.
That choice makes `help` more useful as well as enabling docstring inheritance in subclasses.

All of our docstrings are validated using [`pydocstyle`](http://www.pydocstyle.org/en/stable/).
This ensures that docstring style is consistent and conformant across the codebase.

## Published documentation

Documentation is compiled using Sphinx, which pulls docstrings from the code.
Rather than simply listing all APIs, however, we aim to mimic the pandas documentation.
To do so, we organize API docs into specific pages and sections.
These pages are stored in `docs/cudf/source/api_docs`.
For example, all `DataFrame` documentation is contained in `docs/cudf/source/api_docs/dataframe.rst`.
That page contains sections like "Computations / descriptive stats" to make APIs more easily discoverable.

Within each section, documentation is created using [`autosummary`](https://www.sphinx-doc.org/en/master/usage/extensions/autosummary.html).
This plugin makes it easy to generate pages for each documented API.
To do so, each section of the docs looks like the following:

```
Section name
~~~~~~~~~~~~
.. autosummary::

   API1
   API2
   ...
```

Each listed API will automatically have its docstring rendered into a separate page.
This layout comes from the [Sphinx theme](https://pydata-sphinx-theme.readthedocs.io/en/stable/index.html) that we use.

````{note}
Under the hood, autosummary generates stub pages that look like this (using `cudf.concat` as an example):

```
cudf.concat
===========

.. currentmodule:: cudf

.. autofunction:: concat
```

Commands like `autofunction` come from [`autodoc`](https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html).
This directive will import cudf and pull the docstring from `cudf.concat`.
This approach allows us to do the minimal amount of manual work in organizing our docs, while still matching the pandas layout as closely as possible.
```` When adding a new API, developers simply have to add the API to the appropriate page. Adding the name of the function to the appropriate autosummary list is sufficient for it to be documented. ### Documenting classes Python classes and the Sphinx plugins used in RAPIDS interact in nontrivial ways. `autosummary`'s default page generated for a class uses [`autodoc`](https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html) to automatically detect and document all methods of a class. That means that in addition to the manually created `autosummary` pages where class methods are grouped into sections of related features, there is another page for each class where all the methods of that class are automatically summarized in a table for quick access. However, we also use the [`numpydoc`](https://numpydoc.readthedocs.io/) extension, which offers the same feature. We use both in order to match the contents and style of the pandas documentation as closely as possible. pandas is also particular about what information is included in a class's documentation. While the documentation pages for the major user-facing classes like `DataFrame`, `Series`, and `Index` contain all APIs, less visible classes or subclasses (such as subclasses of `Index`) only include the methods that are specific to those subclasses. For example, {py:class}`cudf.CategoricalIndex` only includes `codes` and `categories` on its page, not the entire set of `Index` functionality. To accommodate these requirements, we take the following approach: 1. The default `autosummary` template for classes is overridden with a [simpler template that does not generate method or attribute documentation](https://github.com/rapidsai/cudf/blob/main/docs/cudf/source/_templates/autosummary/class.rst). In other words, we disable `autosummary`'s generation of Methods and Attributes lists. 2. We rely on `numpydoc` entirely for the classes that need their entire APIs listed (`DataFrame`/`Series`/etc). 
   `numpydoc` will automatically populate the Methods and Attributes sections if (and only if) they are not already defined in the class's docstring.
3. For classes that should only include a subset of APIs, we include those explicitly in the class's documentation.
   When those lists exist, `numpydoc` will not override them.
   If either the Methods or Attributes section should be empty, that section must still be included but should simply contain "None".

For example, the class documentation for `CategoricalIndex` could include something like the following:

```
Attributes
----------
codes
categories

Methods
-------
None
```

## Comparing to pandas

cuDF aims to provide a pandas-like experience.
However, for various reasons cuDF APIs may exhibit differences from pandas.
Where such differences exist, they should be documented.
We facilitate such documentation with the `pandas-compat` directive.
The directive should be used inside docstrings like so:

```
"""Brief

Docstring body

.. pandas-compat::
    **$API_NAME**

    Explanation of differences
"""
```

All such API compatibility notes are collected and displayed in the rendered documentation.

## Writing documentation pages

In addition to docstrings, our docs also contain a number of more dedicated user guides.
These pages are stored in `docs/cudf/source/user_guide`.
These pages are all written using MyST, a superset of Markdown.
MyST allows developers to write using familiar Markdown syntax, while also providing the full power of reST where needed.

These pages do not conform to any specific style or set of use cases.
However, if you develop any sufficiently complex new features, consider whether users would benefit from a more complete demonstration of them.

```{note}
We encourage using links between pages.
We enable [MyST auto-generated anchors](https://myst-parser.readthedocs.io/en/latest/syntax/optional.html#auto-generated-header-anchors), so links should make use of the appropriately namespaced anchors rather than being added manually.
```

## Building documentation

### Requirements

The following are required to build the documentation:
- A RAPIDS-compatible GPU. This is necessary because the documentation executes code.
- A working copy of cudf in the same build environment. We recommend following the [build instructions](https://github.com/rapidsai/cudf/blob/main/CONTRIBUTING.md#setting-up-your-build-environment).
- Sphinx, numpydoc, and MyST-NB. Assuming you follow the build instructions, these should automatically be installed into your environment.

### Building and viewing docs

Once you have a working copy of cudf, building the docs is straightforward:
1. Navigate to `/path/to/cudf/docs/cudf/`.
2. Execute `make html`.

This will run Sphinx in your shell and generate outputs at `build/html/index.html`.
To view the results:
1. Navigate to `build/html`.
2. Execute `python -m http.server`.

Then, open a web browser and go to `http://localhost:8000`.
If something else is currently running on port 8000, you may specify a different port with `python -m http.server $PORT`.

You may build docs on a remote machine but want to view them locally.
Assuming the other machine's IP address is visible on your local network, you can view the docs by replacing `localhost` with the IP address of the host machine.
Alternatively, you may also forward the port using e.g. `ssh -N -f -L localhost:$LOCAL_PORT:localhost:$REMOTE_PORT $REMOTE_IP`.
That will make `$REMOTE_IP:$REMOTE_PORT` visible at `localhost:$LOCAL_PORT`.

## Documenting cuDF internals

Unlike public APIs, the documentation of internal code (functions, classes, etc) is not linted.
Documenting internals is strongly encouraged, but not enforced in any particular way. Regarding style, either full numpy-style docstrings or regular `#` comments are acceptable. The former can be useful for complex or widely used functionality, while the latter is fine for small one-off functions.
rapidsai_public_repos/cudf/docs/cudf/source/developer_guide/index.md
# Developer Guide ```{note} At present, this guide only covers the main cuDF library. In the future, it may be expanded to also cover dask_cudf, cudf_kafka, and custreamz. ``` cuDF is a GPU-accelerated, [Pandas-like](https://pandas.pydata.org/) DataFrame library. Under the hood, all of cuDF's functionality relies on the CUDA-accelerated `libcudf` C++ library. Thus, cuDF's internals are designed to efficiently and robustly map pandas APIs to `libcudf` functions. For more information about the `libcudf` library, a good starting point is the [developer guide](https://github.com/rapidsai/cudf/blob/main/cpp/docs/DEVELOPER_GUIDE.md). This document assumes familiarity with the [overall contributing guide](https://github.com/rapidsai/cudf/blob/main/CONTRIBUTING.md). The goal of this document is to provide more specific guidance for Python developers. It covers the structure of the Python code and discusses best practices. Additionally, it includes longer sections on more specific topics like testing and benchmarking. ```{toctree} :maxdepth: 2 library_design contributing_guide documentation testing benchmarking options pylibcudf ```
rapidsai_public_repos/cudf/docs/cudf/source/developer_guide/library_design.md
# Library Design At a high level, cuDF is structured in three layers, each of which serves a distinct purpose: 1. The Frame layer: The user-facing implementation of pandas-like data structures like `DataFrame` and `Series`. 2. The Column layer: The core internal data structures used to bridge the gap to our lower-level implementations. 3. The Cython layer: The wrappers around the fast C++ `libcudf` library. In this document we will review each of these layers, their roles, and the requisite tradeoffs. Finally we tie these pieces together to provide a more holistic view of the project. ## The Frame layer % The class diagram below was generated using PlantUML (https://plantuml.com/). % PlantUML is a simple textual format for encoding UML documents. % We could also use it to generate ASCII art or another format. % % @startuml % % class Frame % class IndexedFrame % class SingleColumnFrame % class BaseIndex % class GenericIndex % class MultiIndex % class RangeIndex % class DataFrame % class Series % % Frame <|-- IndexedFrame % % Frame <|-- SingleColumnFrame % % SingleColumnFrame <|-- Series % IndexedFrame <|-- Series % % IndexedFrame <|-- DataFrame % % BaseIndex <|-- RangeIndex % % BaseIndex <|-- MultiIndex % Frame <|-- MultiIndex % % BaseIndex <|-- GenericIndex % SingleColumnFrame <|-- GenericIndex % % @enduml ```{image} frame_class_diagram.png ``` This class diagram shows the relationship between the principal components of the Frame layer: All classes in the Frame layer inherit from one or both of the two base classes in this layer: `Frame` and `BaseIndex`. The eponymous `Frame` class is, at its core, a simple tabular data structure composed of columnar data. Some types of `Frame` contain indexes; in particular, any `DataFrame` or `Series` has an index. However, as a general container of columnar data, `Frame` is also the parent class for most types of index. `BaseIndex`, meanwhile, is essentially an abstract base class encoding the `pandas.Index` API. 
Various subclasses of `BaseIndex` implement this API in specific ways depending on their underlying data.
For example, `RangeIndex` avoids actually materializing a column, while a `MultiIndex` contains _multiple_ columns.
Most other index classes consist of a single column of a given type, e.g. strings or datetimes.
As a result, using a single abstract parent provides the flexibility we need to support these different types.

With those preliminaries out of the way, let's dive a little deeper.

### Frames

`Frame` exposes numerous methods common to all pandas data structures.
Any methods that have the same API across `Series`, `DataFrame`, and `Index` should be defined here.
Additionally, any (internal) methods that could be used to share code between those classes may also be defined here.

The primary internal subclass of `Frame` is `IndexedFrame`, a `Frame` with an index.
An `IndexedFrame` represents the first type of object mentioned above: indexed tables.
In particular, `IndexedFrame` is a parent class for `DataFrame` and `Series`.
Any pandas methods that are defined for those two classes should be defined here.

The second internal subclass of `Frame` is `SingleColumnFrame`.
As you may surmise, it is a `Frame` with a single column of data.
This class is a parent for most types of indexes as well as `Series` (note the diamond inheritance pattern here).
While `IndexedFrame` provides a large amount of functionality, this class is much simpler.
It adds some simple APIs provided by all 1D pandas objects, and it flattens outputs where needed.

### Indexes

While we've highlighted some exceptional cases of Indexes before, let's start with the base cases here first.
`BaseIndex` is intended to be a pure abstract class, i.e. all of its methods should simply raise `NotImplementedError`.
In practice, `BaseIndex` does have concrete implementations of a small set of methods.
However, many of these implementations are currently not applicable to all subclasses and will eventually be removed.

Almost all indexes are subclasses of `GenericIndex`, a single-columned index with the class hierarchy:

```python
class GenericIndex(SingleColumnFrame, BaseIndex)
```

Integer, float, or string indexes are all composed of a single column of data.
Most `GenericIndex` methods are inherited from `Frame`, saving us the trouble of rewriting them.

We now consider the three main exceptions to this model:

- A `RangeIndex` is not backed by a column of data, so it inherits directly from `BaseIndex` alone.
  Wherever possible, its methods have special implementations designed to avoid materializing columns.
  Where such an implementation is infeasible, we fall back to converting it to an `Int64Index` first.
- A `MultiIndex` is backed by _multiple_ columns of data.
  Therefore, its inheritance hierarchy looks like `class MultiIndex(Frame, BaseIndex)`.
  Some of its more `Frame`-like methods may be inherited, but many others must be reimplemented since in many cases a `MultiIndex` is not expected to behave like a `Frame`.
- Just like in pandas, `Index` itself can never be instantiated.
  `pandas.Index` is the parent class for indexes, but its constructor returns an appropriate subclass depending on the input data type and shape.
  Unfortunately, mimicking this behavior requires overriding `__new__`, which in turn makes shared initialization across inheritance trees much more cumbersome to manage.
  To enable sharing constructor logic across different index classes, we instead define `BaseIndex` as the parent class of all indexes.
  `Index` inherits from `BaseIndex`, but it masquerades as a `BaseIndex` to match pandas.
  This class should contain no implementations since it is simply a factory for other indexes.

## The Column layer

The next layer in the cuDF stack is the Column layer.
This layer forms the glue between pandas-like APIs and our underlying data layouts.
The principal objects in the Column layer are the `ColumnAccessor` and the various `Column` classes.
The `Column` is cuDF's core data structure that represents a single column of data of a specific data type.
A `ColumnAccessor` is a dictionary-like interface to a sequence of `Column`s.
A `Frame` owns a `ColumnAccessor`.

### ColumnAccessor

The primary purpose of the `ColumnAccessor` is to encapsulate pandas column selection semantics.
Columns may be selected or inserted by index or by label, and label-based selections are as flexible as they are in pandas.
For instance, columns may be selected hierarchically (using tuples) or via wildcards.
`ColumnAccessor`s also support the `MultiIndex` columns that can result from operations like groupbys.

### Columns

Under the hood, cuDF is built around the [Apache Arrow Format](https://arrow.apache.org).
This data format is both conducive to high-performance algorithms and suitable for data interchange between libraries.
The `Column` class encapsulates our implementation of this data format.
A `Column` is composed of the following:

- A **data type**, specifying the type of each element.
- A **data buffer** that may store the data for the column elements.
  Some column types do not have a data buffer, instead storing data in the children columns.
- A **mask buffer** whose bits represent the validity (null or not null) of each element.
  Nullability is a core concept in the Arrow data model.
  Columns whose elements are all valid may not have a mask buffer.
  Mask buffers are padded to 64 bytes.
- Its **children**, a tuple of columns used to represent complex types such as structs or lists.
- A **size** indicating the number of elements in the column.
- An integer **offset** used to represent the first element of the column when it is a "slice" of another column.
  The size of the column then gives the extent of the slice rather than the size of the underlying buffer.
  A column that is not a slice has an offset of 0.
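As a small arithmetic illustration of the mask padding rule above, the following sketch (a hypothetical helper, not a cuDF API) computes the size of a validity mask buffer:

```python
def mask_buffer_size(nrows: int, pad: int = 64) -> int:
    # One validity bit per element, rounded up to whole bytes...
    nbytes = (nrows + 7) // 8
    # ...then padded to a multiple of 64 bytes per the layout above.
    return ((nbytes + pad - 1) // pad) * pad

print(mask_buffer_size(1000))  # 1000 bits -> 125 bytes -> padded to 128
```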
More information about these fields can be found in the documentation of the [Apache Arrow Columnar Format](https://arrow.apache.org/docs/format/Columnar.html), which is what the cuDF `Column` is based on.

The `Column` class is implemented in Cython to facilitate interoperability with `libcudf`'s C++ data structures.
Most higher-level functionality is implemented in the `ColumnBase` subclass.
These functions rely on `Column` APIs to call `libcudf` APIs and translate their results to Python.
This separation allows `ColumnBase` to be implemented in pure Python, which simplifies development and debugging.

`ColumnBase` provides some standard methods, while other methods only make sense for data of a specific type.
As a result, we have various subclasses of `ColumnBase` like `NumericalColumn`, `StringColumn`, and `DatetimeColumn`.
Most dtype-specific decisions should be handled at the level of a specific `Column` subclass.
Each type of `Column` only implements methods supported by that data type.

Different types of `ColumnBase` are also stored differently in memory according to the Arrow format.
As one example, a `NumericalColumn` with 1000 `int32` elements and containing nulls is composed of:

1. A data buffer of size 4000 bytes (sizeof(int32) * 1000)
2. A mask buffer of size 128 bytes (1000/8 padded to a multiple of 64 bytes)
3. No children columns

As another example, a `StringColumn` backing the Series `['do', 'you', 'have', 'any', 'cheese?']` is composed of:

1. No data buffer
2. No mask buffer as there are no nulls in the Series
3. Two children columns:

   - A column of UTF-8 characters `['d', 'o', 'y', 'o', 'u', 'h', ..., '?']`
   - A column of "offsets" to the characters column (in this case, `[0, 2, 5, 9, 12, 19]`)

### Data types

cuDF uses [dtypes](https://numpy.org/doc/stable/reference/arrays.dtypes.html) to represent different types of data.
Since efficient GPU algorithms require preexisting knowledge of data layouts, cuDF does not support the arbitrary `object` dtype, but instead defines a few custom types for common use-cases:

- `ListDtype`: Lists where each element in every list in a Column is of the same type
- `StructDtype`: Dicts where a given key always maps to values of the same type
- `CategoricalDtype`: Analogous to the pandas categorical dtype except that the categories are stored in device memory
- `DecimalDtype`: Fixed-point numbers
- `IntervalDtype`: Intervals

Note that there is a many-to-one mapping between data types and `Column` classes.
For instance, all numerical types (floats and ints of different widths) are managed using `NumericalColumn`.

### Buffer

`Column`s are in turn composed of one or more `Buffer`s.
A `Buffer` represents a single, contiguous, device memory allocation owned by another object.
A `Buffer` constructed from a preexisting device memory allocation (such as a CuPy array) will view that memory.
Conversely, when constructed from a host object, `Buffer` uses [`rmm.DeviceBuffer`](https://github.com/rapidsai/rmm#devicebuffers) to allocate new memory.
The data is then copied from the host object into the newly allocated device memory.
You can read more about [device memory allocation with RMM here](https://github.com/rapidsai/rmm).

### Spilling to host memory

cuDF supports automatic spilling (and "unspilling") of buffers from device to host memory to enable out-of-memory computation, i.e., computing on objects that occupy more memory than is available on the GPU.

Spilling can be enabled in two ways (it is disabled by default):
- setting the environment variable `CUDF_SPILL=on`, or
- setting the `spill` option in `cudf` by doing `cudf.set_option("spill", True)`.
Additionally, the following parameters are available:

- `CUDF_SPILL_ON_DEMAND=ON` / `cudf.set_option("spill_on_demand", True)`, which registers an RMM out-of-memory error handler that spills buffers in order to free up memory.
  If spilling is enabled, spill on demand is **enabled by default**.
- `CUDF_SPILL_DEVICE_LIMIT=<X>` / `cudf.set_option("spill_device_limit", <X>)`, which sets a device memory limit of `<X>` in bytes.
  This introduces a modest overhead and is **disabled by default**.
  Furthermore, this is a *soft* limit.
  The memory usage might exceed the limit if too many buffers are unspillable.

(Buffer-design)=

#### Design

Spilling consists of two components:

- A new buffer sub-class, `SpillableBuffer`, that implements moving of its data from host to device memory in-place.
- A spill manager that tracks all instances of `SpillableBuffer` and spills them on demand.

A global spill manager is used throughout cudf when spilling is enabled, which makes `as_buffer()` return `SpillableBuffer` instead of the default `Buffer` instances.

Calling `Buffer.get_ptr(...)` returns the device memory pointer of the buffer.
This is unproblematic in the case of `Buffer`, but a `SpillableBuffer` might have spilled its device memory.
In that case, the `SpillableBuffer` needs to unspill the memory before returning its device memory pointer.
Furthermore, while this device memory pointer is being used (or could be used), the `SpillableBuffer` cannot spill its memory back to host memory, because doing so would invalidate the device pointer.
To address this, we mark the `SpillableBuffer` as unspillable; we say that the buffer has been _exposed_.
This can either be permanent, if the device pointer is exposed to external projects, or temporary, while `libcudf` accesses the device memory.
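The exposure rules above can be sketched with a toy class. This is an illustrative stand-in written for this guide, not cuDF's actual `SpillableBuffer` API:

```python
class ToySpillableBuffer:
    """Toy model of a spillable buffer: data lives on "device" or "host",
    and handing out a raw pointer makes the buffer permanently unspillable."""

    def __init__(self, data: bytes):
        self._data = data
        self._on_device = True
        self.exposed = False

    def spill(self) -> bool:
        # Move data to host; refuse if the buffer is exposed or already spilled.
        if self.exposed or not self._on_device:
            return False
        self._on_device = False
        return True

    def get_ptr(self) -> int:
        # Unspill first, then mark the buffer exposed since the pointer escapes.
        self._on_device = True
        self.exposed = True
        return id(self._data)  # stand-in for a device pointer


buf = ToySpillableBuffer(b"abc")
buf.spill()         # buffer moves to host
buf.get_ptr()       # unspills and permanently exposes the buffer
print(buf.spill())  # False: exposed buffers can no longer spill
```

In the real implementation, calls made under `acquire_spill_lock` only mark the buffer unspillable temporarily rather than permanently.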
`SpillableBuffer.get_ptr(...)` also returns the device pointer of the buffer memory, but if it is called within an `acquire_spill_lock` decorator/context, the buffer is only marked unspillable while running within that decorator/context.

#### Statistics

cuDF supports spilling statistics, which can be very useful for performance profiling and to identify code that renders buffers unspillable.

Three levels of information gathering exist:

0. Disabled (no overhead).
1. Gather statistics of duration and number of bytes spilled (very low overhead).
2. Gather statistics of each time a spillable buffer is exposed permanently (potentially high overhead).

Statistics can be enabled in two ways (they are disabled by default):

- setting the environment variable `CUDF_SPILL_STATS=<statistics-level>`, or
- setting the `spill_stats` option in `cudf` by doing `cudf.set_option("spill_stats", <statistics-level>)`.

It is possible to access the statistics through the spill manager like:

```python
>>> import cudf
>>> from cudf.core.buffer.spill_manager import get_global_manager
>>> stats = get_global_manager().statistics
>>> print(stats)
Spill Statistics (level=1):
 Spilling (level >= 1):
  gpu => cpu: 24B in 0.0033
```

To have each worker in dask print spill statistics, do something like:

```python
def spill_info():
    from cudf.core.buffer.spill_manager import get_global_manager
    print(get_global_manager().statistics)
client.submit(spill_info)
```

## The Cython layer

The lowest level of cuDF is its interaction with `libcudf` via Cython.
The Cython layer is composed of two components: C++ bindings and Cython wrappers.
The first component consists of [`.pxd` files](https://cython.readthedocs.io/en/latest/src/tutorial/pxd_files.html), Cython declaration files that expose the contents of C++ header files to other Cython files.
The second component consists of Cython wrappers for this functionality.
These wrappers are necessary to expose this functionality to pure Python code.
They also handle translating cuDF objects into their `libcudf` equivalents and invoking `libcudf` functions. Working with this layer of cuDF requires some familiarity with `libcudf`'s APIs. `libcudf` is built around two principal objects whose names are largely self-explanatory: `column` and `table`. `libcudf` also defines corresponding non-owning "view" types `column_view` and `table_view`. `libcudf` APIs typically accept views and return owning types. Most cuDF Cython wrappers involve converting `cudf.Column` objects into `column_view` or `table_view` objects, calling a `libcudf` API with these arguments, then constructing new `cudf.Column`s from the result. By the time code reaches this layer, all questions of pandas compatibility should already have been addressed. These functions should be as close to trivial wrappers around `libcudf` APIs as possible. ## Putting It All Together To this point, our discussion has assumed that all cuDF functions follow a strictly linear descent through these layers. However, it should be clear that in many cases this approach is not appropriate. Many common `Frame` operations do not operate on individual columns but on the `Frame` as a whole. Therefore, we in fact have two distinct common patterns for implementations in cuDF. 1. The first pattern is for operations that act on columns of a `Frame` individually. This group includes tasks like reductions and scans (`sum`/`cumsum`). These operations are typically implemented by looping over the columns stored in a `Frame`'s `ColumnAccessor`. 2. The second pattern is for operations that involve acting on multiple columns at once. This group includes many core operations like grouping or merging. These operations bypass the Column layer altogether, instead going straight from Frame to Cython. The pandas API also includes a number of helper objects, such as `GroupBy`, `Rolling`, and `Resampler`. cuDF implements corresponding objects with the same APIs. 
Internally, these objects typically interact with cuDF objects at the Frame layer via composition.
However, for performance reasons they frequently access internal attributes and methods of `Frame` and its subclasses.

(copy-on-write-dev-doc)=

## Copy-on-write

This section describes the internal implementation details of the copy-on-write feature.
It is recommended that developers familiarize themselves with [the user-facing documentation](copy-on-write-user-doc) of this functionality before reading through the internals below.

The core copy-on-write implementation relies on the factory function `as_exposure_tracked_buffer` and the two classes `ExposureTrackedBuffer` and `BufferSlice`.

An `ExposureTrackedBuffer` is a subclass of the regular `Buffer` that tracks internal and external references to its underlying memory.
Internal references are tracked by maintaining [weak references](https://docs.python.org/3/library/weakref.html) to every `BufferSlice` of the underlying memory.
External references are tracked through the "exposure" status of the underlying memory.
A buffer is considered exposed if the device pointer (integer or void*) has been handed out to a library outside of cudf.
In this case, we have no way of knowing if the data are being modified by a third party.

`BufferSlice` is a subclass of `ExposureTrackedBuffer` that represents a _slice_ of the memory underlying an exposure tracked buffer.

When the cudf option `"copy_on_write"` is `True`, `as_buffer` calls `as_exposure_tracked_buffer`, which always returns a `BufferSlice`.
It is then the slices that determine whether or not to make a copy when a write operation is performed on a `Column` (see below).
If multiple slices point to the same underlying memory, then a copy must be made whenever a modification is attempted.
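A toy sketch of this slice-tracking scheme, using the standard library's `weakref`, may help. The classes below are illustrations written for this guide; the real `ExposureTrackedBuffer` and `BufferSlice` differ:

```python
import weakref


class ToyTrackedBuffer:
    """Toy model of ExposureTrackedBuffer: keeps weak references to
    every slice of its memory so writes can detect sharing."""

    def __init__(self, data):
        self._data = list(data)
        self._slices = weakref.WeakSet()

    def slice(self):
        s = ToySlice(self)
        self._slices.add(s)
        return s


class ToySlice:
    """Toy model of BufferSlice: copies on write when the memory is shared."""

    def __init__(self, owner):
        self._owner = owner

    def write(self, i, value):
        # If other live slices share this memory, copy before modifying.
        if len(self._owner._slices) > 1:
            self._owner = ToyTrackedBuffer(self._owner._data)
            self._owner._slices.add(self)
        self._owner._data[i] = value


buf = ToyTrackedBuffer([1, 2, 3])
a, b = buf.slice(), buf.slice()
a.write(0, 99)  # a copies its memory; b is untouched
print(a._owner._data, b._owner._data)  # [99, 2, 3] [1, 2, 3]
```

Because the tracking uses weak references, slices that have been garbage collected no longer force copies, which mirrors why the real implementation uses `weakref` rather than ordinary references.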
### Eager copies when exposing to third-party libraries

If a `Column`/`BufferSlice` is exposed to a third-party library via `__cuda_array_interface__`, we are no longer able to track whether or not modification of the buffer has occurred.
Hence, whenever someone accesses data through the `__cuda_array_interface__`, we eagerly trigger the copy by calling `.make_single_owner_inplace`, which ensures a true copy of the underlying data is made and that the slice is the sole owner.
Any future copy requests must also trigger a true physical copy (since we cannot track the lifetime of the third-party object).
To handle this, we also mark the `Column`/`BufferSlice` as exposed, thus indicating that any future shallow-copy requests will trigger a true physical copy rather than a copy-on-write shallow copy.

### Obtaining a read-only object

A read-only object can be quite useful for operations that will not mutate the data.
This can be achieved by calling `.get_ptr(mode="read")` and using `cuda_array_interface_wrapper` to wrap a `__cuda_array_interface__` object around it.
This will not trigger a deep copy even if multiple `BufferSlice`s point to the same `ExposureTrackedBuffer`.
This API should only be used when the lifetime of the proxy object is restricted to cudf's internal code execution.
Handing this out to external libraries or user-facing APIs will lead to untracked references and undefined copy-on-write behavior.
We currently use this API for device to host copies like in `ColumnBase.data_array_view(mode="read")`, which is used for `Column.values_host`.

### Internal access to raw data pointers

Since it is unsafe to access the raw pointer associated with a buffer when copy-on-write is enabled, in addition to the read-only proxy object described above, access to the pointer is gated through `Buffer.get_ptr`.
This method accepts a mode argument through which the caller indicates how they will access the data associated with the buffer.
If only read-only access is required (`mode="read"`), this indicates that the caller has no intention of modifying the buffer through this pointer. In this case, any shallow copies are not unlinked. In contrast, if modification is required one may pass `mode="write"`, provoking unlinking of any shallow copies.

### Variable width data types

Weak references are implemented only for fixed-width data types, as these are the only column types that can be mutated in place. Requests for deep copies of variable width data types always return shallow copies of the Columns, because these types don't support real in-place mutation of the data. Internally, we mimic in-place mutations using `_mimic_inplace`, but the resulting data is always a deep copy of the underlying data.

### Examples

When copy-on-write is enabled, taking a shallow copy of a `Series` or a `DataFrame` does not eagerly create a copy of the data. Instead, it produces a view that will be lazily copied when a write operation is performed on any of its copies.
Let's create a series:

```python
>>> import cudf
>>> cudf.set_option("copy_on_write", True)
>>> s1 = cudf.Series([1, 2, 3, 4])
```

Make a copy of `s1`:

```python
>>> s2 = s1.copy(deep=False)
```

Make another copy, but of `s2`:

```python
>>> s3 = s2.copy(deep=False)
```

Viewing the data and memory addresses shows that they all point to the same device memory:

```python
>>> s1
0    1
1    2
2    3
3    4
dtype: int64
>>> s2
0    1
1    2
2    3
3    4
dtype: int64
>>> s3
0    1
1    2
2    3
3    4
dtype: int64

>>> s1.data._ptr
139796315897856
>>> s2.data._ptr
139796315897856
>>> s3.data._ptr
139796315897856
```

Now, when we perform a write operation on one of them, say on `s2`, a new copy is created for `s2` on device and then modified:

```python
>>> s2[0:2] = 10
>>> s2
0    10
1    10
2     3
3     4
dtype: int64
>>> s1
0    1
1    2
2    3
3    4
dtype: int64
>>> s3
0    1
1    2
2    3
3    4
dtype: int64
```

If we inspect the memory addresses of the data, `s1` and `s3` still share the same address but `s2` has a new one:

```python
>>> s1.data._ptr
139796315897856
>>> s3.data._ptr
139796315897856
>>> s2.data._ptr
139796315899392
```

Now, performing a write operation on `s1` will trigger a new copy on device memory as there is a weak reference being shared with `s3`:

```python
>>> s1[0:2] = 11
>>> s1
0    11
1    11
2     3
3     4
dtype: int64
>>> s2
0    10
1    10
2     3
3     4
dtype: int64
>>> s3
0    1
1    2
2    3
3    4
dtype: int64
```

If we inspect the memory addresses of the data, the addresses of `s2` and `s3` remain unchanged, but `s1`'s memory address has changed because of a copy operation performed during the write:

```python
>>> s2.data._ptr
139796315899392
>>> s3.data._ptr
139796315897856
>>> s1.data._ptr
139796315879723
```

cuDF's copy-on-write implementation is motivated by the pandas proposals documented here:

1. [Google doc](https://docs.google.com/document/d/1ZCQ9mx3LBMy-nhwRl33_jgcvWo9IWdEfxDNQ2thyTb0/edit#heading=h.iexejdstiz8u)
2. [Github issue](https://github.com/pandas-dev/pandas/issues/36195)
rapidsai_public_repos/cudf/docs/cudf/source/developer_guide/options.md
# Options

The usage of options is explained in the [user guide](options_user_guide). This document provides more explanations on how developers work with options internally.

Options are stored as a dictionary in the `cudf.options` module. Each option name is its key in the dictionary. The value of the option is an instance of an `Options` object.

An `Options` object has the following attributes:

- `value`: the current value of the option
- `description`: a text description of the option
- `validator`: a boolean function that returns `True` if `value` is valid, and `False` otherwise

Developers can use `cudf.options._register_option` to add options to the dictionary. {py:func}`cudf.get_option` is provided to get option values from the dictionary.

When testing the behavior of a certain option, it is advised to use a [`yield` fixture](https://docs.pytest.org/en/7.1.x/how-to/fixtures.html#yield-fixtures-recommended) to set up and clean up the option.

See the [API reference](api.options) for more details.
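The registry pattern described above can be mocked up in a few lines. This is a self-contained sketch, not cudf's internal module: the names mirror the description (`_register_option`, `get_option`) but the bodies are illustrative only.

```python
# Minimal sketch of an option registry: each option carries a value,
# a description, and a validator that gates both defaults and updates.
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class Option:
    value: Any
    description: str
    validator: Callable[[Any], bool]


_OPTIONS: dict[str, Option] = {}


def _register_option(name, default, description, validator):
    if not validator(default):
        raise ValueError(f"Invalid default for option {name!r}")
    _OPTIONS[name] = Option(default, description, validator)


def get_option(name):
    return _OPTIONS[name].value


def set_option(name, value):
    opt = _OPTIONS[name]
    if not opt.validator(value):
        raise ValueError(f"Invalid value for option {name!r}: {value!r}")
    opt.value = value


_register_option(
    "copy_on_write", False, "Enable copy-on-write.", lambda v: isinstance(v, bool)
)
set_option("copy_on_write", True)
```

In a test suite, a `yield` fixture would save the option value before the test body runs and restore it afterwards, so one test's option changes never leak into another.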
rapidsai_public_repos/cudf/docs/cudf/source/developer_guide/contributing_guide.md
# Contributing Guide

This document focuses on a high-level overview of best practices in cuDF.

## Directory structure and file naming

cuDF generally presents the same importable modules and subpackages as pandas. All Cython code is contained in `python/cudf/cudf/_lib`.

## Code style

cuDF employs a number of linters to ensure consistent style across the code base. We manage our linters using [`pre-commit`](https://pre-commit.com/). Developers are strongly recommended to set up `pre-commit` prior to any development. The `.pre-commit-config.yaml` file at the root of the repo is the primary source of truth for linting.

Specifically, cuDF uses the following tools:

- [`ruff`](https://beta.ruff.rs/) checks for general code formatting compliance.
- [`black`](https://github.com/psf/black) is an automatic code formatter.
- [`isort`](https://pycqa.github.io/isort/) ensures imports are sorted consistently.
- [`mypy`](http://mypy-lang.org/) performs static type checking. In conjunction with [type hints](https://docs.python.org/3/library/typing.html), `mypy` can help catch various bugs that are otherwise difficult to find.
- [`pydocstyle`](https://github.com/PyCQA/pydocstyle/) lints docstring style.
- [`codespell`](https://github.com/codespell-project/codespell) finds spelling errors.

Linter config data is stored in a number of files. We generally use `pyproject.toml` over `setup.cfg` and avoid project-specific files (e.g. `pyproject.toml` > `python/cudf/pyproject.toml`). However, differences between tools and the different packages in the repo result in the following caveats:

- `isort` must be configured per project to set which project is the "first party" project. As a result, we currently maintain both root and project-level `pyproject.toml` files.

For more information on how to use pre-commit hooks, see the code formatting section of the [overall contributing guide](https://github.com/rapidsai/cudf/blob/main/CONTRIBUTING.md#python--pre-commit-hooks).
## Deprecating and removing code

cuDF follows the policy of deprecating code for one release prior to removal. For example, if we decide to remove an API during the 22.08 release cycle, it will be marked as deprecated in the 22.08 release and removed in the 22.10 release. Note that if a condition is explicitly mentioned in a comment (like `# Do not remove until..`), do not enforce the deprecation by removing the affected code until the condition in the comment is met.

All internal usage of deprecated APIs in cuDF should be removed when the API is deprecated. This prevents users from encountering unexpected deprecation warnings when using other (non-deprecated) APIs. The documentation for the API should also be updated to reflect its deprecation. When the time comes to remove a deprecated API, make sure to remove all tests and documentation.

Deprecation messages should:

- emit a `FutureWarning`;
- consist of a single line with no newline characters;
- indicate replacement APIs, if any exist (deprecation messages are an opportunity to show users better ways to do things);
- not specify a version when removal will occur (this gives us more flexibility).

For example:

```python
warnings.warn(
    "`Series.foo` is deprecated and will be removed in a future version of cudf. "
    "Use `Series.new_foo` instead.",
    FutureWarning,
)
```

```{warning}
Deprecations should be signaled using a `FutureWarning` **not a `DeprecationWarning`**!
`DeprecationWarning` is hidden by default except in code run in the `__main__` module.
```

Deprecations should also be specified in the respective public API docstring using a `deprecated` admonition:

```
.. deprecated:: 23.08
   `foo` is deprecated and will be removed in a future version of cudf.
```

## `pandas` compatibility

Maintaining compatibility with the [pandas API](https://pandas.pydata.org/docs/reference/index.html) is a primary goal of cuDF. Developers should always look at pandas APIs when adding a new feature to cuDF.
When introducing a new cuDF API with a pandas analog, we should match pandas as much as possible. Since we try to maintain compatibility even with various edge cases (such as null handling), new pandas releases sometimes require changes that break compatibility with old versions. As a result, our compatibility target is the latest pandas version.

However, there are occasionally good reasons to deviate from pandas behavior. The most common reasons center around performance. Some APIs cannot match pandas behavior exactly without incurring exorbitant runtime costs. Others may require using additional memory, which is always at a premium in GPU workflows. If you are developing a feature and believe that perfect pandas compatibility is infeasible or undesirable, you should consult with other members of the team to assess how to proceed.

When such a deviation from pandas behavior is necessary, it should be documented. For more information on how to do that, see [our documentation on pandas comparison](./documentation.md#comparing-to-pandas).

## Python vs Cython

cuDF makes substantial use of [Cython](https://cython.org/). Cython is a powerful tool, but it is less user-friendly than pure Python. It is also more difficult to debug or profile. Therefore, developers should generally prefer Python code over Cython where possible.

The primary use case for Cython in cuDF is to expose libcudf C++ APIs to Python. This Cython usage is generally composed of two parts:

1. A `pxd` file declaring C++ APIs so that they may be used in Cython, and
2. A `pyx` file containing Cython functions that wrap those C++ APIs so that they can be called from Python.

The latter wrappers should generally be kept as thin as possible to minimize Cython usage. For more information see [our Cython layer design documentation](./library_design.md#the-cython-layer).

In some rare cases we may actually benefit from writing pure Cython code to speed up particular code paths.
Given that most numerical computations in cuDF actually happen in libcudf, however, such use cases are quite rare. Any attempt to write pure Cython code for this purpose should be justified with benchmarks.

## Exception handling

In alignment with [maintaining compatibility with pandas](#pandas-compatibility), any API that cuDF shares with pandas should throw all the same exceptions as the corresponding pandas API given the same inputs. However, it is not required to match the corresponding pandas API's exception message.

When writing error messages, sufficient information should be included to help users locate the source of the error, such as including the expected argument type versus the actual argument provided.

For parameters that are not yet supported, raise `NotImplementedError`. There is no need to mention when the argument will be supported in the future.

### Handling libcudf Exceptions

Standard C++ natively supports various [exception types](https://en.cppreference.com/w/cpp/error/exception), which Cython maps to [these Python exception types](https://docs.cython.org/en/latest/src/userguide/wrapping_CPlusPlus.html#exceptions). In addition to built-in exceptions, libcudf also raises a few additional types of exceptions. cuDF extends Cython's default mapping to account for these exception types. When a new libcudf exception type is added, a suitable except clause should be added to cuDF's [exception handler](https://github.com/rapidsai/cudf/blob/main/python/cudf/cudf/_lib/cpp/exception_handler.hpp). If no built-in Python exception seems like a good match, a new Python exception should be created.

### Raising warnings

Where appropriate, cuDF should throw the same warnings as pandas. For instance, API deprecations in cuDF should track pandas as closely as possible. However, not all pandas warnings are appropriate for cuDF.
Common exceptional cases include [implementation-dependent performance warnings](https://pandas.pydata.org/docs/reference/api/pandas.errors.PerformanceWarning.html) that do not apply to cuDF's internals. The decision of whether or not to match pandas must be made on a case-by-case basis and is left to the best judgment of developers and PR reviewers.

### Catching warnings from dependencies

cuDF developers should avoid using deprecated APIs from package dependencies. However, there may be cases where such uses cannot be avoided, at least in the short term. If such cases arise, developers should [catch the warnings](https://docs.python.org/3/library/warnings.html#warnings.catch_warnings) within cuDF so that they are not visible to the user.
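The standard pattern for localizing such a warning looks like the sketch below. `old_api` is a hypothetical stand-in for a deprecated dependency call; the key point is that the suppression is scoped to a single known warning around a single call, so nothing else is silenced.

```python
# Sketch of catching a dependency's deprecation warning inside a wrapper
# so it never propagates to the user.
import warnings


def old_api():
    # Stand-in for a deprecated call from a dependency.
    warnings.warn("old_api is deprecated", FutureWarning)
    return 42


def wrapper():
    # Suppress only this known warning, only around this call.
    with warnings.catch_warnings():
        warnings.filterwarnings("ignore", message="old_api is deprecated")
        return old_api()


# Verify from the caller's side that no warning escapes the wrapper.
with warnings.catch_warnings(record=True) as seen:
    warnings.simplefilter("always")
    result = wrapper()
```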
rapidsai_public_repos/cudf/docs/cudf/source/_ext/PandasCompat.py
# Copyright (c) 2021-2022, NVIDIA CORPORATION

# This file is adapted from official sphinx tutorial for `todo` extension:
# https://www.sphinx-doc.org/en/master/development/tutorials/todo.html
from docutils import nodes
from docutils.parsers.rst import Directive
from sphinx.locale import get_translation
from sphinx.util.docutils import SphinxDirective

translator = get_translation("sphinx")


class PandasCompat(nodes.Admonition, nodes.Element):
    pass


class PandasCompatList(nodes.General, nodes.Element):
    pass


def visit_PandasCompat_node(self, node):
    self.visit_admonition(node)


def depart_PandasCompat_node(self, node):
    self.depart_admonition(node)


class PandasCompatListDirective(Directive):
    def run(self):
        return [PandasCompatList("")]


class PandasCompatDirective(SphinxDirective):

    # this enables content in the directive
    has_content = True

    def run(self):
        targetid = "PandasCompat-%d" % self.env.new_serialno("PandasCompat")
        targetnode = nodes.target("", "", ids=[targetid])

        PandasCompat_node = PandasCompat("\n".join(self.content))
        PandasCompat_node += nodes.title(
            translator("Pandas Compatibility Note"),
            translator("Pandas Compatibility Note"),
        )
        self.state.nested_parse(
            self.content, self.content_offset, PandasCompat_node
        )

        if not hasattr(self.env, "PandasCompat_all_pandas_compat"):
            self.env.PandasCompat_all_pandas_compat = []

        self.env.PandasCompat_all_pandas_compat.append(
            {
                "docname": self.env.docname,
                "PandasCompat": PandasCompat_node.deepcopy(),
                "target": targetnode,
            }
        )

        return [targetnode, PandasCompat_node]


def purge_PandasCompats(app, env, docname):
    if not hasattr(env, "PandasCompat_all_pandas_compat"):
        return

    env.PandasCompat_all_pandas_compat = [
        PandasCompat
        for PandasCompat in env.PandasCompat_all_pandas_compat
        if PandasCompat["docname"] != docname
    ]


def merge_PandasCompats(app, env, docnames, other):
    if not hasattr(env, "PandasCompat_all_pandas_compat"):
        env.PandasCompat_all_pandas_compat = []
    if hasattr(other, "PandasCompat_all_pandas_compat"):
        env.PandasCompat_all_pandas_compat.extend(
            other.PandasCompat_all_pandas_compat
        )


def process_PandasCompat_nodes(app, doctree, fromdocname):
    if not app.config.include_pandas_compat:
        for node in doctree.traverse(PandasCompat):
            node.parent.remove(node)

    # Replace all PandasCompatList nodes with a list of the collected
    # PandasCompats. Augment each PandasCompat with a backlink to the
    # original location.
    env = app.builder.env

    if not hasattr(env, "PandasCompat_all_pandas_compat"):
        env.PandasCompat_all_pandas_compat = []

    for node in doctree.traverse(PandasCompatList):
        if not app.config.include_pandas_compat:
            node.replace_self([])
            continue

        content = []

        for PandasCompat_info in env.PandasCompat_all_pandas_compat:
            para = nodes.paragraph()

            # Create a reference back to the original docstring
            newnode = nodes.reference("", "")
            innernode = nodes.emphasis(
                translator("[source]"), translator("[source]")
            )
            newnode["refdocname"] = PandasCompat_info["docname"]
            newnode["refuri"] = app.builder.get_relative_uri(
                fromdocname, PandasCompat_info["docname"]
            )
            newnode["refuri"] += "#" + PandasCompat_info["target"]["refid"]
            newnode.append(innernode)
            para += newnode

            # Insert the reference node into the PandasCompat node. Note that
            # this node is a deepcopy of the original copy in the docstring,
            # so changing it does not affect that in the doc.
            PandasCompat_info["PandasCompat"].append(para)

            # Insert the PandasCompat node into the PandasCompatList node
            content.append(PandasCompat_info["PandasCompat"])

        node.replace_self(content)


def setup(app):
    app.add_config_value("include_pandas_compat", False, "html")

    app.add_node(PandasCompatList)
    app.add_node(
        PandasCompat,
        html=(visit_PandasCompat_node, depart_PandasCompat_node),
        latex=(visit_PandasCompat_node, depart_PandasCompat_node),
        text=(visit_PandasCompat_node, depart_PandasCompat_node),
    )

    # Sphinx directives are lower-cased
    app.add_directive("pandas-compat", PandasCompatDirective)
    app.add_directive("pandas-compat-list", PandasCompatListDirective)
    app.connect("doctree-resolved", process_PandasCompat_nodes)
    app.connect("env-purge-doc", purge_PandasCompats)
    app.connect("env-merge-info", merge_PandasCompats)

    return {
        "version": "0.1",
        "parallel_read_safe": True,
        "parallel_write_safe": True,
    }
rapidsai_public_repos/cudf/docs/cudf/source/_templates/autosummary/class.rst
{{ fullname }}
{{ underline }}

.. currentmodule:: {{ module }}

.. autoclass:: {{ objname }}

.. Don't include the methods or attributes sections, numpydoc adds them for
   us instead.
rapidsai_public_repos/cudf/ci/test_cpp_memcheck.sh
#!/bin/bash
# Copyright (c) 2023, NVIDIA CORPORATION.

source "$(dirname "$0")/test_cpp_common.sh"

EXITCODE=0
trap "EXITCODE=1" ERR
set +e

# Run gtests with compute-sanitizer
rapids-logger "Memcheck gtests with rmm_mode=cuda"
export GTEST_CUDF_RMM_MODE=cuda
COMPUTE_SANITIZER_CMD="compute-sanitizer --tool memcheck"
for gt in "$CONDA_PREFIX"/bin/gtests/libcudf/*_TEST ; do
    test_name=$(basename ${gt})
    if [[ "$test_name" == "ERROR_TEST" ]] || [[ "$test_name" == "STREAM_IDENTIFICATION_TEST" ]]; then
        continue
    fi
    echo "Running compute-sanitizer on $test_name"
    ${COMPUTE_SANITIZER_CMD} ${gt} --gtest_output=xml:"${RAPIDS_TESTS_DIR}${test_name}.xml"
done
unset GTEST_CUDF_RMM_MODE

rapids-logger "Test script exiting with value: $EXITCODE"
exit ${EXITCODE}
rapidsai_public_repos/cudf/ci/test_cpp.sh
#!/bin/bash
# Copyright (c) 2022-2023, NVIDIA CORPORATION.

source "$(dirname "$0")/test_cpp_common.sh"

EXITCODE=0
trap "EXITCODE=1" ERR
set +e

# Run libcudf and libcudf_kafka gtests from libcudf-tests package
export GTEST_OUTPUT=xml:${RAPIDS_TESTS_DIR}/

pushd $CONDA_PREFIX/bin/gtests/libcudf/
rapids-logger "Run libcudf gtests"
ctest -j20 --output-on-failure
SUITEERROR=$?
popd

if (( ${SUITEERROR} == 0 )); then
    pushd $CONDA_PREFIX/bin/gtests/libcudf_kafka/
    rapids-logger "Run libcudf_kafka gtests"
    ctest -j20 --output-on-failure
    SUITEERROR=$?
    popd
fi

# Ensure that benchmarks are runnable
pushd $CONDA_PREFIX/bin/benchmarks/libcudf/
rapids-logger "Run tests of libcudf benchmarks"

if (( ${SUITEERROR} == 0 )); then
    # Run a small Google benchmark
    ./MERGE_BENCH --benchmark_filter=/2/
    SUITEERROR=$?
fi

if (( ${SUITEERROR} == 0 )); then
    # Run a small nvbench benchmark
    ./STRINGS_NVBENCH --run-once --benchmark 0 --devices 0
    SUITEERROR=$?
fi
popd

rapids-logger "Test script exiting with value: $EXITCODE"
exit ${EXITCODE}
rapidsai_public_repos/cudf/ci/build_python.sh
#!/bin/bash
# Copyright (c) 2022-2023, NVIDIA CORPORATION.

set -euo pipefail

source rapids-env-update

export CMAKE_GENERATOR=Ninja

rapids-print-env

package_dir="python"

version=$(rapids-generate-version)
commit=$(git rev-parse HEAD)

echo "${version}" > VERSION
for package_name in cudf dask_cudf cudf_kafka custreamz; do
    sed -i "/^__git_commit__/ s/= .*/= \"${commit}\"/g" ${package_dir}/${package_name}/${package_name}/_version.py
done

rapids-logger "Begin py build"

CPP_CHANNEL=$(rapids-download-conda-from-s3 cpp)

# TODO: Remove `--no-test` flag once importing on a CPU
# node works correctly
# With boa installed conda build forwards to the boa builder
RAPIDS_PACKAGE_VERSION=${version} rapids-conda-retry mambabuild \
    --no-test \
    --channel "${CPP_CHANNEL}" \
    conda/recipes/cudf

RAPIDS_PACKAGE_VERSION=${version} rapids-conda-retry mambabuild \
    --no-test \
    --channel "${CPP_CHANNEL}" \
    --channel "${RAPIDS_CONDA_BLD_OUTPUT_DIR}" \
    conda/recipes/dask-cudf

RAPIDS_PACKAGE_VERSION=${version} rapids-conda-retry mambabuild \
    --no-test \
    --channel "${CPP_CHANNEL}" \
    --channel "${RAPIDS_CONDA_BLD_OUTPUT_DIR}" \
    conda/recipes/cudf_kafka

RAPIDS_PACKAGE_VERSION=${version} rapids-conda-retry mambabuild \
    --no-test \
    --channel "${CPP_CHANNEL}" \
    --channel "${RAPIDS_CONDA_BLD_OUTPUT_DIR}" \
    conda/recipes/custreamz

rapids-upload-conda-to-s3 python
rapidsai_public_repos/cudf/ci/test_wheel_dask_cudf.sh
#!/bin/bash
# Copyright (c) 2023, NVIDIA CORPORATION.

set -eou pipefail

RAPIDS_PY_CUDA_SUFFIX="$(rapids-wheel-ctk-name-gen ${RAPIDS_CUDA_VERSION})"
RAPIDS_PY_WHEEL_NAME="dask_cudf_${RAPIDS_PY_CUDA_SUFFIX}" rapids-download-wheels-from-s3 ./dist

# Download the cudf built in the previous step
# Set the manylinux version used for downloading the wheels so that we test the
# newer ABI wheels on the newer images that support their installation.
# Need to disable pipefail for the head not to fail, see
# https://stackoverflow.com/questions/19120263/why-exit-code-141-with-grep-q
set +o pipefail
glibc_minor_version=$(ldd --version | head -1 | grep -o "[0-9]\.[0-9]\+" | tail -1 | cut -d '.' -f2)
set -o pipefail
manylinux_version="2_17"
if [[ ${glibc_minor_version} -ge 28 ]]; then
    manylinux_version="2_28"
fi
manylinux="manylinux_${manylinux_version}"

RAPIDS_PY_WHEEL_NAME="cudf_${manylinux}_${RAPIDS_PY_CUDA_SUFFIX}" rapids-download-wheels-from-s3 ./local-cudf-dep
python -m pip install --no-deps ./local-cudf-dep/cudf*.whl

# echo to expand wildcard before adding `[extra]` requires for pip
python -m pip install $(echo ./dist/dask_cudf*.whl)[test]

# Run tests in dask_cudf/tests and dask_cudf/io/tests
python -m pytest -n 8 ./python/dask_cudf/dask_cudf/
rapidsai_public_repos/cudf/ci/test_python_common.sh
#!/bin/bash
# Copyright (c) 2022-2023, NVIDIA CORPORATION.

# Common setup steps shared by Python test jobs

set -euo pipefail

. /opt/conda/etc/profile.d/conda.sh

rapids-logger "Generate Python testing dependencies"
rapids-dependency-file-generator \
    --output conda \
    --file_key test_python \
    --matrix "cuda=${RAPIDS_CUDA_VERSION%.*};arch=$(arch);py=${RAPIDS_PY_VERSION}" | tee env.yaml

rapids-mamba-retry env create --force -f env.yaml -n test

# Temporarily allow unbound variables for conda activation.
set +u
conda activate test
set -u

rapids-logger "Downloading artifacts from previous jobs"
CPP_CHANNEL=$(rapids-download-conda-from-s3 cpp)
PYTHON_CHANNEL=$(rapids-download-conda-from-s3 python)

RAPIDS_TESTS_DIR=${RAPIDS_TESTS_DIR:-"${PWD}/test-results"}
RAPIDS_COVERAGE_DIR=${RAPIDS_COVERAGE_DIR:-"${PWD}/coverage-results"}
mkdir -p "${RAPIDS_TESTS_DIR}" "${RAPIDS_COVERAGE_DIR}"

rapids-print-env

rapids-mamba-retry install \
    --channel "${CPP_CHANNEL}" \
    --channel "${PYTHON_CHANNEL}" \
    cudf libcudf
rapidsai_public_repos/cudf/ci/build_wheel.sh
#!/bin/bash
# Copyright (c) 2023, NVIDIA CORPORATION.

set -euo pipefail

package_name=$1
package_dir=$2

source rapids-configure-sccache
source rapids-date-string

version=$(rapids-generate-version)
commit=$(git rev-parse HEAD)

RAPIDS_PY_CUDA_SUFFIX="$(rapids-wheel-ctk-name-gen ${RAPIDS_CUDA_VERSION})"

# This is the version of the suffix with a preceding hyphen. It's used
# everywhere except in the final wheel name.
PACKAGE_CUDA_SUFFIX="-${RAPIDS_PY_CUDA_SUFFIX}"

# Patch project metadata files to include the CUDA version suffix and version override.
pyproject_file="${package_dir}/pyproject.toml"

sed -i "s/^name = \"${package_name}\"/name = \"${package_name}${PACKAGE_CUDA_SUFFIX}\"/g" ${pyproject_file}
echo "${version}" > VERSION
sed -i "/^__git_commit__/ s/= .*/= \"${commit}\"/g" "${package_dir}/${package_name}/_version.py"

# For nightlies we want to ensure that we're pulling in alphas as well. The
# easiest way to do so is to augment the spec with a constraint containing a
# min alpha version that doesn't affect the version bounds but does allow usage
# of alpha versions for that dependency without --pre
alpha_spec=''
if ! rapids-is-release-build; then
    alpha_spec=',>=0.0.0a0'
fi

if [[ ${package_name} == "dask_cudf" ]]; then
    sed -r -i "s/cudf==(.*)\"/cudf${PACKAGE_CUDA_SUFFIX}==\1${alpha_spec}\"/g" ${pyproject_file}
    sed -r -i "s/dask-cuda==(.*)\"/dask-cuda==\1${alpha_spec}\"/g" ${pyproject_file}
    sed -r -i "s/rapids-dask-dependency==(.*)\"/rapids-dask-dependency==\1${alpha_spec}\"/g" ${pyproject_file}
else
    sed -r -i "s/rmm(.*)\"/rmm${PACKAGE_CUDA_SUFFIX}\1${alpha_spec}\"/g" ${pyproject_file}
    # ptxcompiler and cubinlinker aren't version constrained
    sed -r -i "s/ptxcompiler\"/ptxcompiler${PACKAGE_CUDA_SUFFIX}\"/g" ${pyproject_file}
    sed -r -i "s/cubinlinker\"/cubinlinker${PACKAGE_CUDA_SUFFIX}\"/g" ${pyproject_file}
fi

if [[ $PACKAGE_CUDA_SUFFIX == "-cu12" ]]; then
    sed -i "s/cuda-python[<=>\.,0-9a]*/cuda-python>=12.0,<13.0a0/g" ${pyproject_file}
    sed -i "s/cupy-cuda11x/cupy-cuda12x/g" ${pyproject_file}
    sed -i "/ptxcompiler/d" ${pyproject_file}
    sed -i "/cubinlinker/d" ${pyproject_file}
fi

cd "${package_dir}"

python -m pip wheel . -w dist -vvv --no-deps --disable-pip-version-check
rapidsai_public_repos/cudf/ci/test_python_cudf.sh
#!/bin/bash
# Copyright (c) 2022-2023, NVIDIA CORPORATION.

# Common setup steps shared by Python test jobs
source "$(dirname "$0")/test_python_common.sh"

rapids-logger "Check GPU usage"
nvidia-smi

EXITCODE=0
trap "EXITCODE=1" ERR
set +e

rapids-logger "pytest cudf"
# It is essential to cd into python/cudf/cudf as `pytest-xdist` + `coverage`
# seem to work only at this directory level.
pushd python/cudf/cudf
pytest \
    --cache-clear \
    --ignore="benchmarks" \
    --junitxml="${RAPIDS_TESTS_DIR}/junit-cudf.xml" \
    --numprocesses=8 \
    --dist=loadscope \
    --cov-config=../.coveragerc \
    --cov=cudf \
    --cov-report=xml:"${RAPIDS_COVERAGE_DIR}/cudf-coverage.xml" \
    --cov-report=term \
    tests
popd

# Run benchmarks with both cudf and pandas to ensure compatibility is maintained.
# Benchmarks are run in DEBUG_ONLY mode, meaning that only small data sizes are used.
# Therefore, these runs only verify that benchmarks are valid.
# They do not generate meaningful performance measurements.
pushd python/cudf
rapids-logger "pytest for cudf benchmarks"
CUDF_BENCHMARKS_DEBUG_ONLY=ON \
pytest \
    --cache-clear \
    --numprocesses=8 \
    --dist=loadscope \
    --cov-config=.coveragerc \
    --cov=cudf \
    --cov-report=xml:"${RAPIDS_COVERAGE_DIR}/cudf-benchmark-coverage.xml" \
    --cov-report=term \
    benchmarks

rapids-logger "pytest for cudf benchmarks using pandas"
CUDF_BENCHMARKS_USE_PANDAS=ON \
CUDF_BENCHMARKS_DEBUG_ONLY=ON \
pytest \
    --cache-clear \
    --numprocesses=8 \
    --dist=loadscope \
    --cov-config=.coveragerc \
    --cov=cudf \
    --cov-report=xml:"${RAPIDS_COVERAGE_DIR}/cudf-benchmark-pandas-coverage.xml" \
    --cov-report=term \
    benchmarks
popd

rapids-logger "Test script exiting with value: $EXITCODE"
exit ${EXITCODE}
rapidsai_public_repos/cudf/ci/build_wheel_dask_cudf.sh
#!/bin/bash
# Copyright (c) 2023, NVIDIA CORPORATION.

set -euo pipefail

package_dir="python/dask_cudf"

./ci/build_wheel.sh dask_cudf ${package_dir}

RAPIDS_PY_CUDA_SUFFIX="$(rapids-wheel-ctk-name-gen ${RAPIDS_CUDA_VERSION})"
RAPIDS_PY_WHEEL_NAME="dask_cudf_${RAPIDS_PY_CUDA_SUFFIX}" rapids-upload-wheels-to-s3 ${package_dir}/dist
rapidsai_public_repos/cudf/ci/check_style.sh
#!/bin/bash
# Copyright (c) 2020-2023, NVIDIA CORPORATION.

set -euo pipefail

rapids-logger "Create checks conda environment"
. /opt/conda/etc/profile.d/conda.sh

rapids-dependency-file-generator \
    --output conda \
    --file_key checks \
    --matrix "cuda=${RAPIDS_CUDA_VERSION%.*};arch=$(arch);py=${RAPIDS_PY_VERSION}" | tee env.yaml

rapids-mamba-retry env create --force -f env.yaml -n checks
conda activate checks

FORMAT_FILE_URL=https://raw.githubusercontent.com/rapidsai/rapids-cmake/branch-24.02/cmake-format-rapids-cmake.json
export RAPIDS_CMAKE_FORMAT_FILE=/tmp/rapids_cmake_ci/cmake-formats-rapids-cmake.json
mkdir -p $(dirname ${RAPIDS_CMAKE_FORMAT_FILE})
wget -O ${RAPIDS_CMAKE_FORMAT_FILE} ${FORMAT_FILE_URL}

# Run pre-commit checks
pre-commit run --all-files --show-diff-on-failure
rapidsai_public_repos/cudf/ci/test_wheel_cudf.sh
#!/bin/bash
# Copyright (c) 2023, NVIDIA CORPORATION.

set -eou pipefail

# Set the manylinux version used for downloading the wheels so that we test the
# newer ABI wheels on the newer images that support their installation.
# Need to disable pipefail for the head not to fail, see
# https://stackoverflow.com/questions/19120263/why-exit-code-141-with-grep-q
set +o pipefail
glibc_minor_version=$(ldd --version | head -1 | grep -o "[0-9]\.[0-9]\+" | tail -1 | cut -d '.' -f2)
set -o pipefail
manylinux_version="2_17"
if [[ ${glibc_minor_version} -ge 28 ]]; then
    manylinux_version="2_28"
fi
manylinux="manylinux_${manylinux_version}"

RAPIDS_PY_CUDA_SUFFIX="$(rapids-wheel-ctk-name-gen ${RAPIDS_CUDA_VERSION})"
RAPIDS_PY_WHEEL_NAME="cudf_${manylinux}_${RAPIDS_PY_CUDA_SUFFIX}" rapids-download-wheels-from-s3 ./dist

# echo to expand wildcard before adding `[extra]` requires for pip
python -m pip install $(echo ./dist/cudf*.whl)[test]

# Run smoke tests for aarch64 pull requests
if [[ "$(arch)" == "aarch64" && ${RAPIDS_BUILD_TYPE} == "pull-request" ]]; then
    python ./ci/wheel_smoke_test_cudf.py
else
    python -m pytest -n 8 ./python/cudf/cudf/tests
fi
rapidsai_public_repos/cudf/ci/test_cpp_common.sh
#!/bin/bash
# Copyright (c) 2022-2023, NVIDIA CORPORATION.

set -euo pipefail

. /opt/conda/etc/profile.d/conda.sh

rapids-logger "Generate C++ testing dependencies"
rapids-dependency-file-generator \
    --output conda \
    --file_key test_cpp \
    --matrix "cuda=${RAPIDS_CUDA_VERSION%.*};arch=$(arch)" | tee env.yaml

rapids-mamba-retry env create --force -f env.yaml -n test

# Temporarily allow unbound variables for conda activation.
set +u
conda activate test
set -u

CPP_CHANNEL=$(rapids-download-conda-from-s3 cpp)

RAPIDS_TESTS_DIR=${RAPIDS_TESTS_DIR:-"${PWD}/test-results"}/
mkdir -p "${RAPIDS_TESTS_DIR}"

rapids-print-env

rapids-mamba-retry install \
    --channel "${CPP_CHANNEL}" \
    libcudf libcudf_kafka libcudf-tests

rapids-logger "Check GPU usage"
nvidia-smi
rapidsai_public_repos/cudf/ci/build_cpp.sh
#!/bin/bash
# Copyright (c) 2022-2023, NVIDIA CORPORATION.

set -euo pipefail

source rapids-env-update

export CMAKE_GENERATOR=Ninja

rapids-print-env

version=$(rapids-generate-version)

rapids-logger "Begin cpp build"

# With boa installed conda build forwards to boa
RAPIDS_PACKAGE_VERSION=${version} rapids-conda-retry mambabuild \
    conda/recipes/libcudf

rapids-upload-conda-to-s3 cpp
rapidsai_public_repos/cudf/ci/test_java.sh
#!/bin/bash
# Copyright (c) 2022-2023, NVIDIA CORPORATION.

set -euo pipefail

. /opt/conda/etc/profile.d/conda.sh

rapids-logger "Generate Java testing dependencies"
rapids-dependency-file-generator \
    --output conda \
    --file_key test_java \
    --matrix "cuda=${RAPIDS_CUDA_VERSION%.*};arch=$(arch)" | tee env.yaml

rapids-mamba-retry env create --force -f env.yaml -n test

export CMAKE_GENERATOR=Ninja

# Temporarily allow unbound variables for conda activation.
set +u
conda activate test
set -u

rapids-print-env

rapids-logger "Downloading artifacts from previous jobs"
CPP_CHANNEL=$(rapids-download-conda-from-s3 cpp)

rapids-mamba-retry install \
    --channel "${CPP_CHANNEL}" \
    libcudf

rapids-logger "Check GPU usage"
nvidia-smi

EXITCODE=0
trap "EXITCODE=1" ERR
set +e

rapids-logger "Run Java tests"
pushd java
mvn test -B -DCUDF_JNI_ENABLE_PROFILING=OFF
popd

rapids-logger "Test script exiting with value: $EXITCODE"
exit ${EXITCODE}
rapidsai_public_repos/cudf/ci/test_python_other.sh
#!/bin/bash
# Copyright (c) 2022-2023, NVIDIA CORPORATION.

# Common setup steps shared by Python test jobs
source "$(dirname "$0")/test_python_common.sh"

rapids-mamba-retry install \
  --channel "${CPP_CHANNEL}" \
  --channel "${PYTHON_CHANNEL}" \
  dask-cudf cudf_kafka custreamz

rapids-logger "Check GPU usage"
nvidia-smi

EXITCODE=0
trap "EXITCODE=1" ERR
set +e

rapids-logger "pytest dask_cudf"
pushd python/dask_cudf/dask_cudf
pytest \
  --cache-clear \
  --junitxml="${RAPIDS_TESTS_DIR}/junit-dask-cudf.xml" \
  --numprocesses=8 \
  --dist=loadscope \
  --cov-config=../.coveragerc \
  --cov=dask_cudf \
  --cov-report=xml:"${RAPIDS_COVERAGE_DIR}/dask-cudf-coverage.xml" \
  --cov-report=term \
  .
popd

rapids-logger "pytest custreamz"
pushd python/custreamz/custreamz
pytest \
  --cache-clear \
  --junitxml="${RAPIDS_TESTS_DIR}/junit-custreamz.xml" \
  --numprocesses=8 \
  --dist=loadscope \
  --cov-config=../.coveragerc \
  --cov=custreamz \
  --cov-report=xml:"${RAPIDS_COVERAGE_DIR}/custreamz-coverage.xml" \
  --cov-report=term \
  tests
popd

rapids-logger "Test script exiting with value: $EXITCODE"
exit ${EXITCODE}
0
rapidsai_public_repos/cudf
rapidsai_public_repos/cudf/ci/test_notebooks.sh
#!/bin/bash
# Copyright (c) 2020-2023, NVIDIA CORPORATION.

set -euo pipefail

. /opt/conda/etc/profile.d/conda.sh

rapids-logger "Generate notebook testing dependencies"
rapids-dependency-file-generator \
  --output conda \
  --file_key test_notebooks \
  --matrix "cuda=${RAPIDS_CUDA_VERSION%.*};arch=$(arch);py=${RAPIDS_PY_VERSION}" | tee env.yaml

rapids-mamba-retry env create --force -f env.yaml -n test

# Temporarily allow unbound variables for conda activation.
set +u
conda activate test
set -u

rapids-print-env

rapids-logger "Downloading artifacts from previous jobs"
CPP_CHANNEL=$(rapids-download-conda-from-s3 cpp)
PYTHON_CHANNEL=$(rapids-download-conda-from-s3 python)

rapids-mamba-retry install \
  --channel "${CPP_CHANNEL}" \
  --channel "${PYTHON_CHANNEL}" \
  cudf libcudf

NBTEST="$(realpath "$(dirname "$0")/utils/nbtest.sh")"
pushd notebooks

# Add notebooks that should be skipped here
# (space-separated list of filenames without paths)
SKIPNBS="performance-comparisons.ipynb"

EXITCODE=0
trap "EXITCODE=1" ERR
set +e

for nb in $(find . -name "*.ipynb"); do
    nbBasename=$(basename ${nb})
    # Skip all notebooks that use dask (in the code or even in their name)
    if ((echo ${nb} | grep -qi dask) || \
        (grep -q dask ${nb})); then
        echo "--------------------------------------------------------------------------------"
        echo "SKIPPING: ${nb} (suspected Dask usage, not currently automatable)"
        echo "--------------------------------------------------------------------------------"
    elif (echo " ${SKIPNBS} " | grep -q " ${nbBasename} "); then
        echo "--------------------------------------------------------------------------------"
        echo "SKIPPING: ${nb} (listed in skip list)"
        echo "--------------------------------------------------------------------------------"
    else
        nvidia-smi
        ${NBTEST} ${nbBasename}
    fi
done

rapids-logger "Test script exiting with value: $EXITCODE"
exit ${EXITCODE}
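The skip-list check above pads both the list and the candidate filename with spaces before grepping, so that a name only matches as a whole token and not as a substring of another notebook's name. A small Python sketch of the same idea (the helper name `should_skip` is mine, not part of the repo):

```python
def should_skip(nb_basename, skipnbs):
    """Mirror the padded membership test from test_notebooks.sh:
    `echo " ${SKIPNBS} " | grep -q " ${nbBasename} "`.
    Padding with spaces makes the match whole-token only."""
    return f" {nb_basename} " in f" {skipnbs} "


print(should_skip("performance-comparisons.ipynb",
                  "performance-comparisons.ipynb"))  # True
```

Note that without the padding, a skip entry like `a.ipynb` would also match `extra-a.ipynb`; the spaces prevent that.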
0
rapidsai_public_repos/cudf
rapidsai_public_repos/cudf/ci/wheel_smoke_test_cudf.py
# Copyright (c) 2022-2023, NVIDIA CORPORATION.

import cudf
import pyarrow as pa

if __name__ == '__main__':
    n_legs = pa.array([2, 4, 5, 100])
    animals = pa.array(["Flamingo", "Horse", "Brittle stars", "Centipede"])
    names = ["n_legs", "animals"]
    foo = pa.table([n_legs, animals], names=names)
    df = cudf.DataFrame.from_arrow(foo)
    assert df.loc[df["animals"] == "Centipede"]["n_legs"].iloc[0] == 100
    assert df.loc[df["animals"] == "Flamingo"]["n_legs"].iloc[0] == 2
0
rapidsai_public_repos/cudf
rapidsai_public_repos/cudf/ci/build_wheel_cudf.sh
#!/bin/bash
# Copyright (c) 2023, NVIDIA CORPORATION.

set -euo pipefail

package_dir="python/cudf"

export SKBUILD_CONFIGURE_OPTIONS="-DCUDF_BUILD_WHEELS=ON -DDETECT_CONDA_ENV=OFF"

./ci/build_wheel.sh cudf ${package_dir}

python -m auditwheel repair -w ${package_dir}/final_dist ${package_dir}/dist/*

RAPIDS_PY_CUDA_SUFFIX="$(rapids-wheel-ctk-name-gen ${RAPIDS_CUDA_VERSION})"
RAPIDS_PY_WHEEL_NAME="cudf_${AUDITWHEEL_POLICY}_${RAPIDS_PY_CUDA_SUFFIX}" rapids-upload-wheels-to-s3 ${package_dir}/final_dist
0
rapidsai_public_repos/cudf
rapidsai_public_repos/cudf/ci/build_docs.sh
#!/bin/bash
# Copyright (c) 2023, NVIDIA CORPORATION.

set -euo pipefail

rapids-logger "Create test conda environment"
. /opt/conda/etc/profile.d/conda.sh

rapids-dependency-file-generator \
  --output conda \
  --file_key docs \
  --matrix "cuda=${RAPIDS_CUDA_VERSION%.*};arch=$(arch);py=${RAPIDS_PY_VERSION}" | tee env.yaml

rapids-mamba-retry env create --force -f env.yaml -n docs
conda activate docs

rapids-print-env

rapids-logger "Downloading artifacts from previous jobs"
CPP_CHANNEL=$(rapids-download-conda-from-s3 cpp)
PYTHON_CHANNEL=$(rapids-download-conda-from-s3 python)

rapids-mamba-retry install \
  --channel "${CPP_CHANNEL}" \
  --channel "${PYTHON_CHANNEL}" \
  libcudf cudf dask-cudf

export RAPIDS_VERSION_NUMBER="24.02"
export RAPIDS_DOCS_DIR="$(mktemp -d)"

rapids-logger "Build CPP docs"
pushd cpp/doxygen
aws s3 cp s3://rapidsai-docs/librmm/html/${RAPIDS_VERSION_NUMBER}/rmm.tag . || echo "Failed to download rmm Doxygen tag"
doxygen Doxyfile
mkdir -p "${RAPIDS_DOCS_DIR}/libcudf/html"
mv html/* "${RAPIDS_DOCS_DIR}/libcudf/html"
popd

rapids-logger "Build Python docs"
pushd docs/cudf
make dirhtml
make text
mkdir -p "${RAPIDS_DOCS_DIR}/cudf/"{html,txt}
mv build/dirhtml/* "${RAPIDS_DOCS_DIR}/cudf/html"
mv build/text/* "${RAPIDS_DOCS_DIR}/cudf/txt"
popd

rapids-logger "Build dask-cuDF Sphinx docs"
pushd docs/dask_cudf
make dirhtml
make text
mkdir -p "${RAPIDS_DOCS_DIR}/dask-cudf/"{html,txt}
mv build/dirhtml/* "${RAPIDS_DOCS_DIR}/dask-cudf/html"
mv build/text/* "${RAPIDS_DOCS_DIR}/dask-cudf/txt"
popd

rapids-upload-docs
0
rapidsai_public_repos/cudf/ci
rapidsai_public_repos/cudf/ci/release/update-version.sh
#!/bin/bash
# Copyright (c) 2020-2023, NVIDIA CORPORATION.
########################
# cuDF Version Updater #
########################

## Usage
# bash update-version.sh <new_version>

# Format is YY.MM.PP - no leading 'v' or trailing 'a'
NEXT_FULL_TAG=$1

# Get current version
CURRENT_TAG=$(git tag --merged HEAD | grep -xE '^v.*' | sort --version-sort | tail -n 1 | tr -d 'v')
CURRENT_MAJOR=$(echo $CURRENT_TAG | awk '{split($0, a, "."); print a[1]}')
CURRENT_MINOR=$(echo $CURRENT_TAG | awk '{split($0, a, "."); print a[2]}')
CURRENT_PATCH=$(echo $CURRENT_TAG | awk '{split($0, a, "."); print a[3]}')
CURRENT_SHORT_TAG=${CURRENT_MAJOR}.${CURRENT_MINOR}

# Get <major>.<minor> for next version
NEXT_MAJOR=$(echo $NEXT_FULL_TAG | awk '{split($0, a, "."); print a[1]}')
NEXT_MINOR=$(echo $NEXT_FULL_TAG | awk '{split($0, a, "."); print a[2]}')
NEXT_PATCH=$(echo $NEXT_FULL_TAG | awk '{split($0, a, "."); print a[3]}')
NEXT_SHORT_TAG=${NEXT_MAJOR}.${NEXT_MINOR}
NEXT_UCX_PY_VERSION="$(curl -sL https://version.gpuci.io/rapids/${NEXT_SHORT_TAG}).*"

# Need to distutils-normalize the versions for some use cases
CURRENT_SHORT_TAG_PEP440=$(python -c "from setuptools.extern import packaging; print(packaging.version.Version('${CURRENT_SHORT_TAG}'))")
NEXT_SHORT_TAG_PEP440=$(python -c "from setuptools.extern import packaging; print(packaging.version.Version('${NEXT_SHORT_TAG}'))")
PATCH_PEP440=$(python -c "from setuptools.extern import packaging; print(packaging.version.Version('${NEXT_PATCH}'))")

echo "current is ${CURRENT_SHORT_TAG_PEP440}, next is ${NEXT_SHORT_TAG_PEP440}"
echo "Preparing release $CURRENT_TAG => $NEXT_FULL_TAG"

# Inplace sed replace; workaround for Linux and Mac
function sed_runner() {
    sed -i.bak ''"$1"'' $2 && rm -f ${2}.bak
}

# cpp update
sed_runner 's/'"VERSION ${CURRENT_SHORT_TAG}.*"'/'"VERSION ${NEXT_FULL_TAG}"'/g' cpp/CMakeLists.txt

# Python CMakeLists updates
sed_runner 's/'"cudf_version .*)"'/'"cudf_version ${NEXT_FULL_TAG})"'/g' python/cudf/CMakeLists.txt
sed_runner 's/'"cudf_kafka_version .*)"'/'"cudf_kafka_version ${NEXT_FULL_TAG})"'/g' python/cudf_kafka/CMakeLists.txt

# cpp libcudf_kafka update
sed_runner 's/'"VERSION ${CURRENT_SHORT_TAG}.*"'/'"VERSION ${NEXT_FULL_TAG}"'/g' cpp/libcudf_kafka/CMakeLists.txt

# cpp cudf_jni update
sed_runner 's/'"VERSION ${CURRENT_SHORT_TAG}.*"'/'"VERSION ${NEXT_FULL_TAG}"'/g' java/src/main/native/CMakeLists.txt

# Centralized version file update
echo "${NEXT_FULL_TAG}" > VERSION

# Wheel testing script
sed_runner "s/branch-.*/branch-${NEXT_SHORT_TAG}/g" ci/test_wheel_dask_cudf.sh

# rapids-cmake version
sed_runner 's/'"branch-.*\/RAPIDS.cmake"'/'"branch-${NEXT_SHORT_TAG}\/RAPIDS.cmake"'/g' fetch_rapids.cmake

# cmake-format rapids-cmake definitions
sed_runner 's/'"branch-.*\/cmake-format-rapids-cmake.json"'/'"branch-${NEXT_SHORT_TAG}\/cmake-format-rapids-cmake.json"'/g' ci/check_style.sh

# doxyfile update
sed_runner 's/PROJECT_NUMBER = .*/PROJECT_NUMBER = '${NEXT_FULL_TAG}'/g' cpp/doxygen/Doxyfile

# sphinx docs update
sed_runner 's/version = .*/version = '"'${NEXT_SHORT_TAG}'"'/g' docs/cudf/source/conf.py
sed_runner 's/release = .*/release = '"'${NEXT_FULL_TAG}'"'/g' docs/cudf/source/conf.py
sed_runner 's/version = .*/version = '"'${NEXT_SHORT_TAG}'"'/g' docs/dask_cudf/source/conf.py
sed_runner 's/release = .*/release = '"'${NEXT_FULL_TAG}'"'/g' docs/dask_cudf/source/conf.py

DEPENDENCIES=(
  cudf
  cudf_kafka
  custreamz
  dask-cuda
  dask-cudf
  kvikio
  libkvikio
  librmm
  rapids-dask-dependency
  rmm
)
for DEP in "${DEPENDENCIES[@]}"; do
  for FILE in dependencies.yaml conda/environments/*.yaml; do
    sed_runner "/-.* ${DEP}==/ s/==.*/==${NEXT_SHORT_TAG_PEP440}.*/g" ${FILE}
  done
  for FILE in python/*/pyproject.toml; do
    sed_runner "/\"${DEP}==/ s/==.*\"/==${NEXT_SHORT_TAG_PEP440}.*\"/g" ${FILE}
  done
done

# Doxyfile update
sed_runner "s|\(TAGFILES.*librmm/\).*|\1${NEXT_SHORT_TAG}|" cpp/doxygen/Doxyfile

# README.md update
sed_runner "s/version == ${CURRENT_SHORT_TAG}/version == ${NEXT_SHORT_TAG}/g" README.md
sed_runner "s/cudf=${CURRENT_SHORT_TAG}/cudf=${NEXT_SHORT_TAG}/g" README.md

# Libcudf examples update
sed_runner "s/CUDF_TAG branch-${CURRENT_SHORT_TAG}/CUDF_TAG branch-${NEXT_SHORT_TAG}/" cpp/examples/fetch_dependencies.cmake

# CI files
for FILE in .github/workflows/*.yaml; do
  sed_runner "/shared-workflows/ s/@.*/@branch-${NEXT_SHORT_TAG}/g" "${FILE}"
  sed_runner "s/dask-cuda.git@branch-[^\"\s]\+/dask-cuda.git@branch-${NEXT_SHORT_TAG}/g" ${FILE};
done
sed_runner "s/RAPIDS_VERSION_NUMBER=\".*/RAPIDS_VERSION_NUMBER=\"${NEXT_SHORT_TAG}\"/g" ci/build_docs.sh

# Java files
NEXT_FULL_JAVA_TAG="${NEXT_SHORT_TAG}.${PATCH_PEP440}-SNAPSHOT"
sed_runner "s|<version>.*-SNAPSHOT</version>|<version>${NEXT_FULL_JAVA_TAG}</version>|g" java/pom.xml
sed_runner "s/branch-.*/branch-${NEXT_SHORT_TAG}/g" java/ci/README.md
sed_runner "s/cudf-.*-SNAPSHOT/cudf-${NEXT_FULL_JAVA_TAG}/g" java/ci/README.md

# .devcontainer files
find .devcontainer/ -type f -name devcontainer.json -print0 | while IFS= read -r -d '' filename; do
    sed_runner "s@rapidsai/devcontainers:[0-9.]*@rapidsai/devcontainers:${NEXT_SHORT_TAG}@g" "${filename}"
    sed_runner "s@rapidsai/devcontainers/features/rapids-build-utils:[0-9.]*@rapidsai/devcontainers/features/rapids-build-utils:${NEXT_SHORT_TAG_PEP440}@" "${filename}"
done
0
rapidsai_public_repos/cudf/ci
rapidsai_public_repos/cudf/ci/utils/nbtest.sh
#!/bin/bash
# Copyright (c) 2020-2022, NVIDIA CORPORATION.

MAGIC_OVERRIDE_CODE="
def my_run_line_magic(*args, **kwargs):
    g=globals()
    l={}
    for a in args:
        try:
            exec(str(a),g,l)
        except Exception as e:
            print('WARNING: %s\n While executing this magic function code:\n%s\n continuing...\n' % (e, a))
        else:
            g.update(l)

def my_run_cell_magic(*args, **kwargs):
    my_run_line_magic(*args, **kwargs)

get_ipython().run_line_magic=my_run_line_magic
get_ipython().run_cell_magic=my_run_cell_magic
"

NO_COLORS=--colors=NoColor
EXITCODE=0
NBTMPDIR="$WORKSPACE/tmp"
mkdir -p ${NBTMPDIR}

for nb in $*; do
    NBFILENAME=$1
    NBNAME=${NBFILENAME%.*}
    NBNAME=${NBNAME##*/}
    NBTESTSCRIPT=${NBTMPDIR}/${NBNAME}-test.py
    shift

    echo --------------------------------------------------------------------------------
    echo STARTING: ${NBNAME}
    echo --------------------------------------------------------------------------------
    jupyter nbconvert --to script ${NBFILENAME} --output ${NBTMPDIR}/${NBNAME}-test
    echo "${MAGIC_OVERRIDE_CODE}" > ${NBTMPDIR}/tmpfile
    cat ${NBTESTSCRIPT} >> ${NBTMPDIR}/tmpfile
    mv ${NBTMPDIR}/tmpfile ${NBTESTSCRIPT}

    echo "Running \"ipython ${NO_COLORS} ${NBTESTSCRIPT}\" on $(date)"
    echo
    time bash -c "ipython ${NO_COLORS} ${NBTESTSCRIPT}; EC=\$?; echo -------------------------------------------------------------------------------- ; echo DONE: ${NBNAME}; exit \$EC"
    NBEXITCODE=$?
    echo EXIT CODE: ${NBEXITCODE}
    echo
    EXITCODE=$((EXITCODE | ${NBEXITCODE}))
done

exit ${EXITCODE}
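nbtest.sh accumulates per-notebook exit codes with a bitwise OR (`EXITCODE=$((EXITCODE | ${NBEXITCODE}))`), so the script exits nonzero if any notebook failed while still running the whole list. A sketch of the same accumulation in Python (the `combined_exit` name is mine, not part of the repo):

```python
from functools import reduce


def combined_exit(codes):
    """OR together per-notebook exit codes, as nbtest.sh does:
    any nonzero code makes the combined result nonzero."""
    return reduce(lambda acc, c: acc | c, codes, 0)


print(combined_exit([0, 0, 0]))  # 0
print(combined_exit([0, 1, 0]))  # 1
```

One caveat of the OR approach: the combined value is a bitmask of the individual codes, not a count of failures, so it only reliably signals pass/fail.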
0
rapidsai_public_repos/cudf/ci
rapidsai_public_repos/cudf/ci/utils/nbtestlog2junitxml.py
# Copyright (c) 2020-2022, NVIDIA CORPORATION.
# Generate a junit-xml file from parsing a nbtest log

import re
from xml.etree.ElementTree import Element, ElementTree
from os import path
import string
from enum import Enum


startingPatt = re.compile(r"^STARTING: ([\w\.\-]+)$")
skippingPatt = re.compile(r"^SKIPPING: ([\w\.\-]+)\s*(\(([\w\.\-\ \,]+)\))?\s*$")
exitCodePatt = re.compile(r"^EXIT CODE: (\d+)$")
folderPatt = re.compile(r"^FOLDER: ([\w\.\-]+)$")
timePatt = re.compile(r"^real\s+([\d\.ms]+)$")
linePatt = re.compile("^" + ("-" * 80) + "$")


def getFileBaseName(filePathName):
    return path.splitext(path.basename(filePathName))[0]


def makeTestCaseElement(attrDict):
    return Element("testcase", attrib=attrDict)


def makeSystemOutElement(outputLines):
    e = Element("system-out")
    e.text = "".join(filter(lambda c: c in string.printable, outputLines))
    return e


def makeFailureElement(outputLines):
    e = Element("failure", message="failed")
    e.text = "".join(filter(lambda c: c in string.printable, outputLines))
    return e


def setFileNameAttr(attrDict, fileName):
    attrDict.update(file=fileName,
                    classname="",
                    line="",
                    name="",
                    time=""
                    )


def setClassNameAttr(attrDict, className):
    attrDict["classname"] = className


def setTestNameAttr(attrDict, testName):
    attrDict["name"] = testName


def setTimeAttr(attrDict, timeVal):
    (mins, seconds) = timeVal.split("m")
    seconds = float(seconds.strip("s")) + (60 * int(mins))
    attrDict["time"] = str(seconds)


def incrNumAttr(element, attr):
    newVal = int(element.attrib.get(attr)) + 1
    element.attrib[attr] = str(newVal)


def parseLog(logFile, testSuiteElement):
    # Example attrs:
    # errors="0" failures="0" hostname="a437d6835edf" name="pytest" skipped="2" tests="6" time="6.174" timestamp="2019-11-18T19:49:47.946307"
    with open(logFile) as lf:
        testSuiteElement.attrib["tests"] = "0"
        testSuiteElement.attrib["errors"] = "0"
        testSuiteElement.attrib["failures"] = "0"
        testSuiteElement.attrib["skipped"] = "0"
        testSuiteElement.attrib["time"] = "0"
        testSuiteElement.attrib["timestamp"] = ""

        attrDict = {}
        #setFileNameAttr(attrDict, logFile)
        setFileNameAttr(attrDict, "nbtest")

        parserStateEnum = Enum("parserStateEnum",
                               "newTest startingLine finishLine exitCode")
        parserState = parserStateEnum.newTest

        testOutput = ""

        for line in lf.readlines():
            if parserState == parserStateEnum.newTest:
                m = folderPatt.match(line)
                if m:
                    setClassNameAttr(attrDict, m.group(1))
                    continue

                m = skippingPatt.match(line)
                if m:
                    setTestNameAttr(attrDict, getFileBaseName(m.group(1)))
                    setTimeAttr(attrDict, "0m0s")
                    skippedElement = makeTestCaseElement(attrDict)
                    message = m.group(3) or ""
                    skippedElement.append(Element("skipped", message=message, type=""))
                    testSuiteElement.append(skippedElement)
                    incrNumAttr(testSuiteElement, "skipped")
                    incrNumAttr(testSuiteElement, "tests")
                    continue

                m = startingPatt.match(line)
                if m:
                    parserState = parserStateEnum.startingLine
                    testOutput = ""
                    setTestNameAttr(attrDict, m.group(1))
                    setTimeAttr(attrDict, "0m0s")
                    continue

                continue

            elif parserState == parserStateEnum.startingLine:
                if linePatt.match(line):
                    parserState = parserStateEnum.finishLine
                    testOutput = ""
                continue

            elif parserState == parserStateEnum.finishLine:
                if linePatt.match(line):
                    parserState = parserStateEnum.exitCode
                else:
                    testOutput += line
                continue

            elif parserState == parserStateEnum.exitCode:
                m = exitCodePatt.match(line)
                if m:
                    testCaseElement = makeTestCaseElement(attrDict)
                    if m.group(1) != "0":
                        failureElement = makeFailureElement(testOutput)
                        testCaseElement.append(failureElement)
                        incrNumAttr(testSuiteElement, "failures")
                    else:
                        systemOutElement = makeSystemOutElement(testOutput)
                        testCaseElement.append(systemOutElement)

                    testSuiteElement.append(testCaseElement)
                    parserState = parserStateEnum.newTest
                    testOutput = ""
                    incrNumAttr(testSuiteElement, "tests")
                    continue

                m = timePatt.match(line)
                if m:
                    setTimeAttr(attrDict, m.group(1))
                    continue

                continue


if __name__ == "__main__":
    import sys

    testSuitesElement = Element("testsuites")
    testSuiteElement = Element("testsuite", name="nbtest", hostname="")
    parseLog(sys.argv[1], testSuiteElement)
    testSuitesElement.append(testSuiteElement)
    ElementTree(testSuitesElement).write(sys.argv[1]+".xml", xml_declaration=True)
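`setTimeAttr` above converts bash `time` output of the form `1m23.456s` (captured by `timePatt`) into a seconds count for the junit `time` attribute. The conversion can be tried in isolation with this standalone copy of the logic:

```python
def to_seconds(time_val):
    """Convert bash `time` output like '1m23.456s' into seconds,
    replicating setTimeAttr from nbtestlog2junitxml.py."""
    mins, seconds = time_val.split("m")
    return float(seconds.strip("s")) + 60 * int(mins)


print(to_seconds("1m30.5s"))  # 90.5
```

Note it assumes exactly one `m` separator, which matches what `timePatt` (`^real\s+([\d\.ms]+)$`) lets through from `time`'s `real` line.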
0
rapidsai_public_repos/cudf/ci
rapidsai_public_repos/cudf/ci/cudf_pandas_scripts/run_tests.sh
#!/bin/bash
# SPDX-FileCopyrightText: Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
# SPDX-License-Identifier: Apache-2.0

set -eoxu pipefail

# Function to display script usage
function display_usage {
    echo "Usage: $0 [--no-cudf]"
}

# Default value for the --no-cudf option
no_cudf=false

# Parse command-line arguments
while [[ $# -gt 0 ]]; do
    case "$1" in
        --no-cudf)
            no_cudf=true
            shift
            ;;
        *)
            echo "Error: Unknown option $1"
            display_usage
            exit 1
            ;;
    esac
done

if [ "$no_cudf" = true ]; then
    echo "Skipping cudf install"
else
    # Set the manylinux version used for downloading the wheels so that we test the
    # newer ABI wheels on the newer images that support their installation.
    # Need to disable pipefail for the head not to fail, see
    # https://stackoverflow.com/questions/19120263/why-exit-code-141-with-grep-q
    set +o pipefail
    glibc_minor_version=$(ldd --version | head -1 | grep -o "[0-9]\.[0-9]\+" | tail -1 | cut -d '.' -f2)
    set -o pipefail
    manylinux_version="2_17"
    if [[ ${glibc_minor_version} -ge 28 ]]; then
        manylinux_version="2_28"
    fi
    manylinux="manylinux_${manylinux_version}"

    RAPIDS_PY_CUDA_SUFFIX="$(rapids-wheel-ctk-name-gen ${RAPIDS_CUDA_VERSION})"
    RAPIDS_PY_WHEEL_NAME="cudf_${manylinux}_${RAPIDS_PY_CUDA_SUFFIX}" rapids-download-wheels-from-s3 ./local-cudf-dep
    python -m pip install $(ls ./local-cudf-dep/cudf*.whl)[test,cudf_pandas_tests]
fi

python -m pytest -p cudf.pandas ./python/cudf/cudf_pandas_tests/
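The glibc detection above parses the first line of `ldd --version`, takes the last dotted-version token, and keeps its minor component to choose between `manylinux_2_17` and `manylinux_2_28` wheels. A Python sketch of that pipeline (the helper names are mine, not part of the repo):

```python
import re


def glibc_minor(ldd_first_line):
    """Mirror the grep/tail/cut pipeline in run_tests.sh: take the last
    'X.Y' token on the first `ldd --version` line and return Y."""
    matches = re.findall(r"[0-9]\.[0-9]+", ldd_first_line)
    return int(matches[-1].split(".")[1])


def manylinux_tag(minor):
    """glibc >= 2.28 can install the newer-ABI manylinux_2_28 wheels."""
    return "manylinux_2_28" if minor >= 28 else "manylinux_2_17"


print(manylinux_tag(glibc_minor("ldd (GNU libc) 2.31")))  # manylinux_2_28
```

Taking the *last* match matters because distro builds embed extra version strings on that line (e.g. Ubuntu package versions) before the actual glibc version.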
0
rapidsai_public_repos/cudf/ci/cudf_pandas_scripts
rapidsai_public_repos/cudf/ci/cudf_pandas_scripts/pandas-tests/job-summary.py
# SPDX-FileCopyrightText: Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
# SPDX-License-Identifier: Apache-2.0

import json
import sys

import pandas as pd


def get_total_and_passed(results):
    total_failed = 0
    total_errored = 0
    total_passed = 0
    for module_name, row in results.items():
        total_failed += row.get("failed", 0)
        total_errored += row.get("errored", 0)
        total_passed += row.get("passed", 0)
    total_tests = total_failed + total_errored + total_passed
    return total_tests, total_passed


main_json = sys.argv[1]
pr_json = sys.argv[2]

# read the results of summarize-test-results.py --summary
with open(main_json) as f:
    main_results = json.load(f)
main_total, main_passed = get_total_and_passed(main_results)

with open(pr_json) as f:
    pr_results = json.load(f)
pr_total, pr_passed = get_total_and_passed(pr_results)

passing_percentage = pr_passed / pr_total * 100
pass_rate_change = abs(pr_passed - main_passed) / main_passed * 100
rate_change_type = "a decrease" if pr_passed < main_passed else "an increase"

comment = (
    "Merging this PR would result in "
    f"{pr_passed}/{pr_total} ({passing_percentage:.2f}%) "
    "Pandas tests passing, "
    f"{rate_change_type} in the test pass rate by "
    f"{pass_rate_change:.2f}%. "
    f"Trunk stats: {main_passed}/{main_total}."
)


def emoji_passed(x):
    if x > 0:
        return f"{x}✅"
    elif x < 0:
        return f"{x}❌"
    else:
        return f"{x}"


def emoji_failed(x):
    if x > 0:
        return f"{x}❌"
    elif x < 0:
        return f"{x}✅"
    else:
        return f"{x}"


# convert pr_results to a pandas DataFrame and then a markdown table
pr_df = pd.DataFrame.from_dict(pr_results, orient="index").sort_index()
main_df = pd.DataFrame.from_dict(main_results, orient="index").sort_index()
diff_df = pr_df - main_df

pr_df = pr_df[["total", "passed", "failed", "skipped"]]
diff_df = diff_df[["total", "passed", "failed", "skipped"]]
diff_df.columns = diff_df.columns + "_diff"
diff_df["passed_diff"] = diff_df["passed_diff"].map(emoji_passed)
diff_df["failed_diff"] = diff_df["failed_diff"].map(emoji_failed)
diff_df["skipped_diff"] = diff_df["skipped_diff"].map(emoji_failed)

df = pd.concat([pr_df, diff_df], axis=1)
df = df.rename_axis("Test module")
df = df.rename(
    columns={
        "total": "Total tests",
        "passed": "Passed tests",
        "failed": "Failed tests",
        "skipped": "Skipped tests",
        "total_diff": "Total delta",
        "passed_diff": "Passed delta",
        "failed_diff": "Failed delta",
        "skipped_diff": "Skipped delta",
    }
)
df = df.sort_values(by=["Failed tests", "Skipped tests"], ascending=False)

print(comment)
print()
print("Here are the results of running the Pandas tests against this PR:")
print()
print(df.to_markdown())
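The pass-rate delta reported in the PR comment is the absolute change in passed tests relative to the main branch, expressed as a percentage of main's passed count. That arithmetic can be checked standalone (the `pass_rate_change` helper here is a copy of the expression in job-summary.py, not a separate repo function):

```python
def pass_rate_change(main_passed, pr_passed):
    """Percentage change in passed tests relative to the main branch,
    matching `abs(pr_passed - main_passed) / main_passed * 100` in
    job-summary.py. The sign is reported separately as
    'a decrease' / 'an increase'."""
    return abs(pr_passed - main_passed) / main_passed * 100


print(round(pass_rate_change(800, 820), 2))  # 2.5
```

Because the value is always non-negative, the direction of the change is carried by the `rate_change_type` string rather than the number itself.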
0
rapidsai_public_repos/cudf/ci/cudf_pandas_scripts
rapidsai_public_repos/cudf/ci/cudf_pandas_scripts/pandas-tests/diff.sh
#!/usr/bin/env bash
# SPDX-FileCopyrightText: Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
# SPDX-License-Identifier: Apache-2.0

# Download the summarized results of running the Pandas tests on both the main
# branch and the PR branch:

# Hard-coded needs to match the version deduced by rapids-upload-artifacts-dir
MAIN_ARTIFACT=$(rapids-s3-path)cuda12_$(arch)_py310.main-results.json
PR_ARTIFACT=$(rapids-s3-path)cuda12_$(arch)_py310.pr-results.json

aws s3 cp $MAIN_ARTIFACT main-results.json
aws s3 cp $PR_ARTIFACT pr-results.json

# Compute the diff and prepare job summary:
python -m pip install pandas tabulate
python ci/cudf_pandas_scripts/pandas-tests/job-summary.py main-results.json pr-results.json | tee summary.txt >> "$GITHUB_STEP_SUMMARY"

COMMENT=$(head -1 summary.txt)
echo "$COMMENT"

# Magic name that the custom-job.yaml workflow reads and re-exports
echo "job_output=${COMMENT}" >> "${GITHUB_OUTPUT}"
0
rapidsai_public_repos/cudf/ci/cudf_pandas_scripts
rapidsai_public_repos/cudf/ci/cudf_pandas_scripts/pandas-tests/run.sh
#!/usr/bin/env bash
# SPDX-FileCopyrightText: Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
# SPDX-License-Identifier: Apache-2.0

PANDAS_TESTS_BRANCH=${1}

rapids-logger "Running Pandas tests using $PANDAS_TESTS_BRANCH branch"
rapids-logger "PR number: $RAPIDS_REF_NAME"

# Set the manylinux version used for downloading the wheels so that we test the
# newer ABI wheels on the newer images that support their installation.
# Need to disable pipefail for the head not to fail, see
# https://stackoverflow.com/questions/19120263/why-exit-code-141-with-grep-q
set +o pipefail
glibc_minor_version=$(ldd --version | head -1 | grep -o "[0-9]\.[0-9]\+" | tail -1 | cut -d '.' -f2)
set -o pipefail
manylinux_version="2_17"
if [[ ${glibc_minor_version} -ge 28 ]]; then
    manylinux_version="2_28"
fi
manylinux="manylinux_${manylinux_version}"

RAPIDS_PY_CUDA_SUFFIX="$(rapids-wheel-ctk-name-gen ${RAPIDS_CUDA_VERSION})"
RAPIDS_PY_WHEEL_NAME="cudf_${manylinux}_${RAPIDS_PY_CUDA_SUFFIX}" rapids-download-wheels-from-s3 ./local-cudf-dep
python -m pip install $(ls ./local-cudf-dep/cudf*.whl)[test,pandas_tests]

git checkout $COMMIT

bash python/cudf/cudf/pandas/scripts/run-pandas-tests.sh \
  -n 10 \
  --tb=line \
  --skip-slow \
  --max-worker-restart=3 \
  --import-mode=importlib \
  --report-log=${PANDAS_TESTS_BRANCH}.json 2>&1

# summarize the results and save them to artifacts:
python python/cudf/cudf/pandas/scripts/summarize-test-results.py --output json pandas-testing/${PANDAS_TESTS_BRANCH}.json > pandas-testing/${PANDAS_TESTS_BRANCH}-results.json
RAPIDS_ARTIFACTS_DIR=${RAPIDS_ARTIFACTS_DIR:-"${PWD}/artifacts"}
mkdir -p "${RAPIDS_ARTIFACTS_DIR}"
mv pandas-testing/${PANDAS_TESTS_BRANCH}-results.json ${RAPIDS_ARTIFACTS_DIR}/
0
rapidsai_public_repos/cudf/ci
rapidsai_public_repos/cudf/ci/checks/doxygen.sh
#!/bin/bash
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
###############################
# cuDF doxygen warnings check #
###############################

# skip if doxygen is not installed
if ! [ -x "$(command -v doxygen)" ]; then
  echo -e "warning: doxygen is not installed"
  exit 0
fi

# Utility to return version as number for comparison
function version { echo "$@" | awk -F. '{ printf("%d%03d%03d%03d\n", $1,$2,$3,$4); }'; }

# doxygen supported version 1.9.1
DOXYGEN_VERSION=`doxygen --version`
if [ ! $(version "$DOXYGEN_VERSION") -eq $(version "1.9.1") ] ; then
  echo -e "warning: Unsupported doxygen version $DOXYGEN_VERSION"
  echo -e "Expecting doxygen version 1.9.1"
  exit 0
fi

# Run doxygen, ignore missing tag files error
TAG_ERROR1="error: Tag file '.*.tag' does not exist or is not a file. Skipping it..."
TAG_ERROR2="error: cannot open tag file .*.tag for writing"
DOXYGEN_STDERR=`cd cpp/doxygen && { cat Doxyfile ; echo QUIET = YES; echo GENERATE_HTML = NO; } | doxygen - 2>&1 | sed "/\($TAG_ERROR1\|$TAG_ERROR2\)/d"`
RETVAL=$?

if [ "$RETVAL" != "0" ] || [ ! -z "$DOXYGEN_STDERR" ]; then
  echo -e "$DOXYGEN_STDERR"
  RETVAL=1 #because return value is not generated by doxygen 1.8.20
fi

exit $RETVAL
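The `version` function above turns a dotted version into a single zero-padded integer (`%d%03d%03d%03d`) so that versions can be compared numerically instead of lexically. A Python sketch of the same trick (the `version_key` name is mine, not part of the repo):

```python
def version_key(v):
    """Convert a dotted version string into one comparable integer,
    mirroring the awk '%d%03d%03d%03d' format in doxygen.sh.
    Missing components are treated as zero, as awk does."""
    parts = (v.split(".") + ["0", "0", "0", "0"])[:4]
    return int("%d%03d%03d%03d" % tuple(int(p) for p in parts))


# Zero-padding each component to three digits makes 1.10.0 sort after
# 1.9.1, which a plain string comparison would get wrong.
assert version_key("1.10.0") > version_key("1.9.1")
```

The three-digit padding caps each component at 999, which is plenty for doxygen's versioning scheme.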
0
rapidsai_public_repos/cudf/ci
rapidsai_public_repos/cudf/ci/checks/copyright.py
# Copyright (c) 2019-2023, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # import argparse import datetime import os import re import sys import git FilesToCheck = [ re.compile(r"[.](cmake|cpp|cu|cuh|h|hpp|sh|pxd|py|pyx)$"), re.compile(r"CMakeLists[.]txt$"), re.compile(r"CMakeLists_standalone[.]txt$"), re.compile(r"setup[.]cfg$"), re.compile(r"meta[.]yaml$"), ] ExemptFiles = [ re.compile(r"cpp/include/cudf_test/cxxopts.hpp"), ] # this will break starting at year 10000, which is probably OK :) CheckSimple = re.compile( r"Copyright *(?:\(c\))? *(\d{4}),? *NVIDIA C(?:ORPORATION|orporation)" ) CheckDouble = re.compile( r"Copyright *(?:\(c\))? *(\d{4})-(\d{4}),? *NVIDIA C(?:ORPORATION|orporation)" # noqa: E501 ) def checkThisFile(f): if isinstance(f, git.Diff): if f.deleted_file or f.b_blob.size == 0: return False f = f.b_path elif not os.path.exists(f) or os.stat(f).st_size == 0: # This check covers things like symlinks which point to files that DNE return False for exempt in ExemptFiles: if exempt.search(f): return False for checker in FilesToCheck: if checker.search(f): return True return False def modifiedFiles(): """Get a set of all modified files, as Diff objects. The files returned have been modified in git since the merge base of HEAD and the upstream of the target branch. We return the Diff objects so that we can read only the staged changes. 
""" repo = git.Repo() # Use the environment variable TARGET_BRANCH or RAPIDS_BASE_BRANCH (defined in CI) if possible target_branch = os.environ.get("TARGET_BRANCH", os.environ.get("RAPIDS_BASE_BRANCH")) if target_branch is None: # Fall back to the closest branch if not on CI target_branch = repo.git.describe( all=True, tags=True, match="branch-*", abbrev=0 ).lstrip("heads/") upstream_target_branch = None if target_branch in repo.heads: # Use the tracking branch of the local reference if it exists. This # returns None if no tracking branch is set. upstream_target_branch = repo.heads[target_branch].tracking_branch() if upstream_target_branch is None: # Fall back to the remote with the newest target_branch. This code # path is used on CI because the only local branch reference is # current-pr-branch, and thus target_branch is not in repo.heads. # This also happens if no tracking branch is defined for the local # target_branch. We use the remote with the latest commit if # multiple remotes are defined. candidate_branches = [ remote.refs[target_branch] for remote in repo.remotes if target_branch in remote.refs ] if len(candidate_branches) > 0: upstream_target_branch = sorted( candidate_branches, key=lambda branch: branch.commit.committed_datetime, )[-1] else: # If no remotes are defined, try to use the local version of the # target_branch. If this fails, the repo configuration must be very # strange and we can fix this script on a case-by-case basis. 
    upstream_target_branch = repo.heads[target_branch]
    merge_base = repo.merge_base("HEAD", upstream_target_branch.commit)[0]
    diff = merge_base.diff()
    changed_files = {f for f in diff if f.b_path is not None}
    return changed_files


def getCopyrightYears(line):
    res = CheckSimple.search(line)
    if res:
        return int(res.group(1)), int(res.group(1))
    res = CheckDouble.search(line)
    if res:
        return int(res.group(1)), int(res.group(2))
    return None, None


def replaceCurrentYear(line, start, end):
    # first turn a simple regex into double (if applicable). then update years
    res = CheckSimple.sub(r"Copyright (c) \1-\1, NVIDIA CORPORATION", line)
    res = CheckDouble.sub(
        rf"Copyright (c) {start:04d}-{end:04d}, NVIDIA CORPORATION",
        res,
    )
    return res


def checkCopyright(f, update_current_year):
    """Checks for copyright headers and their years."""
    errs = []
    thisYear = datetime.datetime.now().year
    lineNum = 0
    crFound = False
    yearMatched = False

    if isinstance(f, git.Diff):
        path = f.b_path
        lines = f.b_blob.data_stream.read().decode().splitlines(keepends=True)
    else:
        path = f
        with open(f, encoding="utf-8") as fp:
            lines = fp.readlines()

    for line in lines:
        lineNum += 1
        start, end = getCopyrightYears(line)
        if start is None:
            continue

        crFound = True
        if start > end:
            e = [
                path,
                lineNum,
                "First year after second year in the copyright "
                "header (manual fix required)",
                None,
            ]
            errs.append(e)
        elif thisYear < start or thisYear > end:
            e = [
                path,
                lineNum,
                "Current year not included in the copyright header",
                None,
            ]
            if thisYear < start:
                e[-1] = replaceCurrentYear(line, thisYear, end)
            if thisYear > end:
                e[-1] = replaceCurrentYear(line, start, thisYear)
            errs.append(e)
        else:
            yearMatched = True

    # copyright header itself not found
    if not crFound:
        e = [
            path,
            0,
            "Copyright header missing or formatted incorrectly "
            "(manual fix required)",
            None,
        ]
        errs.append(e)

    # even if the year matches a copyright header, make the check pass
    if yearMatched:
        errs = []

    if update_current_year:
        errs_update = [x for x in errs if x[-1] is not None]
        if len(errs_update) > 0:
            lines_changed = ", ".join(str(x[1]) for x in errs_update)
            print(f"File: {path}. Changing line(s) {lines_changed}")
            for _, lineNum, __, replacement in errs_update:
                lines[lineNum - 1] = replacement
            with open(path, "w", encoding="utf-8") as out_file:
                out_file.writelines(lines)

    return errs


def getAllFilesUnderDir(root, pathFilter=None):
    retList = []
    for dirpath, dirnames, filenames in os.walk(root):
        for fn in filenames:
            filePath = os.path.join(dirpath, fn)
            if pathFilter(filePath):
                retList.append(filePath)
    return retList


def checkCopyright_main():
    """
    Checks for copyright headers in all the modified files. In case of local
    repo, this script will just look for uncommitted files and in case of CI
    it compares between branches "$PR_TARGET_BRANCH" and "current-pr-branch"
    """
    retVal = 0

    argparser = argparse.ArgumentParser(
        "Checks for a consistent copyright header in git's modified files"
    )
    argparser.add_argument(
        "--update-current-year",
        dest="update_current_year",
        action="store_true",
        required=False,
        help="If set, "
        "update the current year if a header is already "
        "present and well formatted.",
    )
    argparser.add_argument(
        "--git-modified-only",
        dest="git_modified_only",
        action="store_true",
        required=False,
        help="If set, "
        "only files seen as modified by git will be "
        "processed.",
    )

    args, dirs = argparser.parse_known_args()

    if args.git_modified_only:
        files = [f for f in modifiedFiles() if checkThisFile(f)]
    else:
        files = []
        for d in [os.path.abspath(d) for d in dirs]:
            if not os.path.isdir(d):
                raise ValueError(f"{d} is not a directory.")
            files += getAllFilesUnderDir(d, pathFilter=checkThisFile)

    errors = []
    for f in files:
        errors += checkCopyright(f, args.update_current_year)

    if len(errors) > 0:
        if any(e[-1] is None for e in errors):
            print("Copyright headers incomplete in some of the files!")
        for e in errors:
            print("  %s:%d Issue: %s" % (e[0], e[1], e[2]))
        print("")
        n_fixable = sum(1 for e in errors if e[-1] is not None)
        path_parts = os.path.abspath(__file__).split(os.sep)
        file_from_repo = os.sep.join(path_parts[path_parts.index("ci") :])
        if n_fixable > 0 and not args.update_current_year:
            print(
                f"You can run `python {file_from_repo} --git-modified-only "
                "--update-current-year` and stage the results in git to "
                f"fix {n_fixable} of these errors.\n"
            )
        retVal = 1

    return retVal


if __name__ == "__main__":
    sys.exit(checkCopyright_main())
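The year-rewriting step in `replaceCurrentYear` relies on first widening a single-year header into a range and then substituting the target years. A minimal, self-contained sketch of that logic — the `CheckSimple`/`CheckDouble` patterns below are simplified stand-ins for the real regexes, which are defined earlier in the script:

```python
import re

# Simplified stand-ins for the CheckSimple/CheckDouble regexes defined
# earlier in the script; they match single-year and year-range headers.
CheckSimple = re.compile(r"Copyright \(c\) (\d{4}), NVIDIA CORPORATION")
CheckDouble = re.compile(r"Copyright \(c\) (\d{4})-(\d{4}), NVIDIA CORPORATION")


def replace_current_year(line, start, end):
    # Step 1: widen "Copyright (c) YYYY," into "Copyright (c) YYYY-YYYY,"
    # so a single-year header becomes a range (no-op for range headers).
    res = CheckSimple.sub(r"Copyright (c) \1-\1, NVIDIA CORPORATION", line)
    # Step 2: rewrite the range with the requested start/end years.
    res = CheckDouble.sub(
        rf"Copyright (c) {start:04d}-{end:04d}, NVIDIA CORPORATION", res
    )
    return res


print(replace_current_year("# Copyright (c) 2020, NVIDIA CORPORATION.", 2020, 2024))
# -> "# Copyright (c) 2020-2024, NVIDIA CORPORATION."
```

Because step 1 makes every matching header a range, step 2 only ever needs to handle the two-year form, which keeps the substitution logic to a single pattern.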
rapidsai_public_repos/dask-cuda/.pre-commit-config.yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.3.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
  - repo: https://github.com/pycqa/isort
    rev: 5.12.0
    hooks:
      - id: isort
  - repo: https://github.com/ambv/black
    rev: 22.3.0
    hooks:
      - id: black
  - repo: https://github.com/PyCQA/flake8
    rev: 3.8.3
    hooks:
      - id: flake8
  - repo: https://github.com/codespell-project/codespell
    rev: v2.1.0
    hooks:
      - id: codespell
        exclude: |
          (?x)^(
            .*test.*|
            ^CHANGELOG.md$|
          )
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: 'v0.991'
    hooks:
      - id: mypy
        additional_dependencies: [types-cachetools]
        args: ["--module=dask_cuda", "--ignore-missing-imports"]
        pass_filenames: false
  - repo: https://github.com/rapidsai/dependency-file-generator
    rev: v1.5.1
    hooks:
      - id: rapids-dependency-file-generator
        args: ["--clean"]

default_language_version:
  python: python3
rapidsai_public_repos/dask-cuda/pyproject.toml
[build-system]
build-backend = "setuptools.build_meta"
requires = [
    "setuptools>=64.0.0",
    "tomli ; python_version < '3.11'",
] # This list was generated by `rapids-dependency-file-generator`. To make changes, edit dependencies.yaml and run `rapids-dependency-file-generator`.

[project]
name = "dask-cuda"
dynamic = ["version"]
description = "Utilities for Dask and CUDA interactions"
readme = { file = "README.md", content-type = "text/markdown" }
authors = [
    { name = "NVIDIA Corporation" },
]
license = { text = "Apache-2.0" }
requires-python = ">=3.9"
dependencies = [
    "click >=8.1",
    "numba>=0.57",
    "numpy>=1.21",
    "pandas>=1.3,<1.6.0.dev0",
    "pynvml>=11.0.0,<11.5",
    "rapids-dask-dependency==24.2.*",
    "zict>=2.0.0",
] # This list was generated by `rapids-dependency-file-generator`. To make changes, edit dependencies.yaml and run `rapids-dependency-file-generator`.
classifiers = [
    "Intended Audience :: Developers",
    "Topic :: Database",
    "Topic :: Scientific/Engineering",
    "License :: OSI Approved :: Apache Software License",
    "Programming Language :: Python :: 3",
    "Programming Language :: Python :: 3.9",
    "Programming Language :: Python :: 3.10",
]

[project.scripts]
dask-cuda-worker = "dask_cuda.cli:worker"
dask-cuda-config = "dask_cuda.cli:config"

[project.entry-points.dask_cli]
cuda = "dask_cuda.cli:cuda"

[project.optional-dependencies]
docs = [
    "numpydoc>=1.1.0",
    "sphinx",
    "sphinx-click>=2.7.1",
    "sphinx-rtd-theme>=0.5.1",
] # This list was generated by `rapids-dependency-file-generator`. To make changes, edit dependencies.yaml and run `rapids-dependency-file-generator`.
test = [
    "cudf==24.2.*",
    "dask-cudf==24.2.*",
    "kvikio==24.2.*",
    "pytest",
    "pytest-cov",
    "ucx-py==0.36.*",
] # This list was generated by `rapids-dependency-file-generator`. To make changes, edit dependencies.yaml and run `rapids-dependency-file-generator`.

[project.urls]
Homepage = "https://github.com/rapidsai/dask-cuda"
Documentation = "https://docs.rapids.ai/api/dask-cuda/stable/"
Source = "https://github.com/rapidsai/dask-cuda"

[tool.coverage.run]
disable_warnings = [
    "include-ignored",
]
include = [
    "dask_cuda/*",
]
omit = [
    "dask_cuda/tests/*",
]

[tool.isort]
line_length = 88
multi_line_output = 3
include_trailing_comma = true
force_grid_wrap = 0
combine_as_imports = true
order_by_type = true
known_dask = [
    "dask",
    "distributed",
]
known_rapids = [
    "rmm",
    "cuml",
    "cugraph",
    "dask_cudf",
    "cudf",
    "ucp",
]
known_first_party = [
    "dask_cuda",
]
default_section = "THIRDPARTY"
sections = [
    "FUTURE",
    "STDLIB",
    "THIRDPARTY",
    "DASK",
    "RAPIDS",
    "FIRSTPARTY",
    "LOCALFOLDER",
]
skip = [
    ".eggs",
    ".git",
    ".hg",
    ".mypy_cache",
    ".tox",
    ".venv",
    "build",
    "dist",
    "__init__.py",
]

[tool.pytest.ini_options]
filterwarnings = [
    "error::DeprecationWarning",
    "error::FutureWarning",
    # remove after https://github.com/rapidsai/dask-cuda/issues/1087 is closed
    "ignore:There is no current event loop:DeprecationWarning:tornado",
]

[tool.setuptools]
license-files = ["LICENSE"]

[tool.setuptools.dynamic]
version = {file = "dask_cuda/VERSION"}

[tool.setuptools.packages.find]
exclude = [
    "docs",
    "tests",
    "docs.*",
    "tests.*",
]
rapidsai_public_repos/dask-cuda/.flake8
[flake8]
exclude = docs, __init__.py
max-line-length = 88
ignore =
    # Assigning lambda expression
    E731
    # Ambiguous variable names
    E741
    # line break before binary operator
    W503
    # whitespace before :
    E203
    # whitespace after ,
    E231
rapidsai_public_repos/dask-cuda/README.md
Dask CUDA
=========

Various utilities to improve deployment and management of Dask workers on
CUDA-enabled systems.

This library is experimental, and its API is subject to change at any time
without notice.

Example
-------

```python
from dask_cuda import LocalCUDACluster
from dask.distributed import Client

cluster = LocalCUDACluster()
client = Client(cluster)
```

Documentation is available [here](https://docs.rapids.ai/api/dask-cuda/nightly/).

What this is not
----------------

This library does not automatically convert your Dask code to run on GPUs. It
only helps with deployment and management of Dask workers in multi-GPU systems.
Parallelizing GPU libraries like [RAPIDS](https://rapids.ai) and
[CuPy](https://cupy.chainer.org) with Dask is an ongoing effort. You may wish
to read about this effort at [blog.dask.org](https://blog.dask.org) for more
information. Additional information about Dask-CUDA can also be found in the
[docs](https://docs.rapids.ai/api/dask-cuda/nightly/).
rapidsai_public_repos/dask-cuda/CHANGELOG.md
# dask-cuda 23.10.00 (11 Oct 2023)

## 🐛 Bug Fixes

- Monkeypatch protocol.loads ala dask/distributed#8216 ([#1247](https://github.com/rapidsai/dask-cuda/pull/1247)) [@wence-](https://github.com/wence-)
- Explicit-comms: preserve partition IDs ([#1240](https://github.com/rapidsai/dask-cuda/pull/1240)) [@madsbk](https://github.com/madsbk)
- Increase test timeouts further to reduce CI failures ([#1234](https://github.com/rapidsai/dask-cuda/pull/1234)) [@pentschev](https://github.com/pentschev)
- Use `conda mambabuild` not `mamba mambabuild` ([#1231](https://github.com/rapidsai/dask-cuda/pull/1231)) [@bdice](https://github.com/bdice)
- Increate timeouts of tests that frequently timeout in CI ([#1228](https://github.com/rapidsai/dask-cuda/pull/1228)) [@pentschev](https://github.com/pentschev)
- Adapt to non-string task keys in distributed ([#1225](https://github.com/rapidsai/dask-cuda/pull/1225)) [@wence-](https://github.com/wence-)
- Update `test_worker_timeout` ([#1223](https://github.com/rapidsai/dask-cuda/pull/1223)) [@pentschev](https://github.com/pentschev)
- Avoid importing `loads_function` from distributed ([#1220](https://github.com/rapidsai/dask-cuda/pull/1220)) [@rjzamora](https://github.com/rjzamora)

## 🚀 New Features

- Enable maximum pool size for RMM async allocator ([#1221](https://github.com/rapidsai/dask-cuda/pull/1221)) [@pentschev](https://github.com/pentschev)

## 🛠️ Improvements

- Pin `dask` and `distributed` for `23.10` release ([#1251](https://github.com/rapidsai/dask-cuda/pull/1251)) [@galipremsagar](https://github.com/galipremsagar)
- Update `test_spill.py` to avoid `FutureWarning`s ([#1243](https://github.com/rapidsai/dask-cuda/pull/1243)) [@pentschev](https://github.com/pentschev)
- Remove obsolete pytest `filterwarnings` ([#1241](https://github.com/rapidsai/dask-cuda/pull/1241)) [@pentschev](https://github.com/pentschev)
- Update image names ([#1233](https://github.com/rapidsai/dask-cuda/pull/1233)) [@AyodeAwe](https://github.com/AyodeAwe)
- Use `copy-pr-bot` ([#1227](https://github.com/rapidsai/dask-cuda/pull/1227)) [@ajschmidt8](https://github.com/ajschmidt8)
- Unpin `dask` and `distributed` for `23.10` development ([#1222](https://github.com/rapidsai/dask-cuda/pull/1222)) [@galipremsagar](https://github.com/galipremsagar)

# dask-cuda 23.08.00 (9 Aug 2023)

## 🐛 Bug Fixes

- Ensure plugin config can be passed from worker to client ([#1212](https://github.com/rapidsai/dask-cuda/pull/1212)) [@wence-](https://github.com/wence-)
- Adjust to new `get_default_shuffle_method` name ([#1200](https://github.com/rapidsai/dask-cuda/pull/1200)) [@pentschev](https://github.com/pentschev)
- Increase minimum timeout to wait for workers in CI ([#1192](https://github.com/rapidsai/dask-cuda/pull/1192)) [@pentschev](https://github.com/pentschev)

## 📖 Documentation

- Remove RTD configuration and references to RTD page ([#1211](https://github.com/rapidsai/dask-cuda/pull/1211)) [@charlesbluca](https://github.com/charlesbluca)
- Clarify `memory_limit` docs ([#1207](https://github.com/rapidsai/dask-cuda/pull/1207)) [@pentschev](https://github.com/pentschev)

## 🚀 New Features

- Remove versioneer ([#1204](https://github.com/rapidsai/dask-cuda/pull/1204)) [@pentschev](https://github.com/pentschev)
- Remove code for Distributed<2023.5.1 compatibility ([#1191](https://github.com/rapidsai/dask-cuda/pull/1191)) [@pentschev](https://github.com/pentschev)
- Specify disk spill compression based on Dask config ([#1190](https://github.com/rapidsai/dask-cuda/pull/1190)) [@pentschev](https://github.com/pentschev)

## 🛠️ Improvements

- Pin `dask` and `distributed` for `23.08` release ([#1214](https://github.com/rapidsai/dask-cuda/pull/1214)) [@galipremsagar](https://github.com/galipremsagar)
- Revert CUDA 12.0 CI workflows to branch-23.08. ([#1210](https://github.com/rapidsai/dask-cuda/pull/1210)) [@bdice](https://github.com/bdice)
- Use minimal Numba dependencies for CUDA 12 ([#1209](https://github.com/rapidsai/dask-cuda/pull/1209)) [@jakirkham](https://github.com/jakirkham)
- Aggregate reads & writes in `disk_io` ([#1205](https://github.com/rapidsai/dask-cuda/pull/1205)) [@jakirkham](https://github.com/jakirkham)
- CUDA 12 Support ([#1201](https://github.com/rapidsai/dask-cuda/pull/1201)) [@quasiben](https://github.com/quasiben)
- Remove explicit UCX config from tests ([#1199](https://github.com/rapidsai/dask-cuda/pull/1199)) [@pentschev](https://github.com/pentschev)
- use rapids-upload-docs script ([#1194](https://github.com/rapidsai/dask-cuda/pull/1194)) [@AyodeAwe](https://github.com/AyodeAwe)
- Unpin `dask` and `distributed` for development ([#1189](https://github.com/rapidsai/dask-cuda/pull/1189)) [@galipremsagar](https://github.com/galipremsagar)
- Remove documentation build scripts for Jenkins ([#1187](https://github.com/rapidsai/dask-cuda/pull/1187)) [@ajschmidt8](https://github.com/ajschmidt8)
- Use KvikIO in Dask-CUDA ([#925](https://github.com/rapidsai/dask-cuda/pull/925)) [@jakirkham](https://github.com/jakirkham)

# dask-cuda 23.06.00 (7 Jun 2023)

## 🚨 Breaking Changes

- Update minimum Python version to Python 3.9 ([#1164](https://github.com/rapidsai/dask-cuda/pull/1164)) [@shwina](https://github.com/shwina)

## 🐛 Bug Fixes

- Increase pytest CI timeout ([#1196](https://github.com/rapidsai/dask-cuda/pull/1196)) [@pentschev](https://github.com/pentschev)
- Increase minimum timeout to wait for workers in CI ([#1193](https://github.com/rapidsai/dask-cuda/pull/1193)) [@pentschev](https://github.com/pentschev)
- Disable `np.bool` deprecation warning ([#1182](https://github.com/rapidsai/dask-cuda/pull/1182)) [@pentschev](https://github.com/pentschev)
- Always upload on branch/nightly builds ([#1177](https://github.com/rapidsai/dask-cuda/pull/1177)) [@raydouglass](https://github.com/raydouglass)
- Workaround for `DeviceHostFile` tests with CuPy>=12.0.0 ([#1175](https://github.com/rapidsai/dask-cuda/pull/1175)) [@pentschev](https://github.com/pentschev)
- Temporarily relax Python constraint ([#1166](https://github.com/rapidsai/dask-cuda/pull/1166)) [@vyasr](https://github.com/vyasr)

## 📖 Documentation

- [doc] Add document about main guard. ([#1157](https://github.com/rapidsai/dask-cuda/pull/1157)) [@trivialfis](https://github.com/trivialfis)

## 🚀 New Features

- Require Numba 0.57.0+ ([#1185](https://github.com/rapidsai/dask-cuda/pull/1185)) [@jakirkham](https://github.com/jakirkham)
- Revert "Temporarily relax Python constraint" ([#1171](https://github.com/rapidsai/dask-cuda/pull/1171)) [@vyasr](https://github.com/vyasr)
- Update to zict 3.0 ([#1160](https://github.com/rapidsai/dask-cuda/pull/1160)) [@pentschev](https://github.com/pentschev)

## 🛠️ Improvements

- Add `__main__` entrypoint to dask-cuda-worker CLI ([#1181](https://github.com/rapidsai/dask-cuda/pull/1181)) [@hmacdope](https://github.com/hmacdope)
- run docs nightly too ([#1176](https://github.com/rapidsai/dask-cuda/pull/1176)) [@AyodeAwe](https://github.com/AyodeAwe)
- Fix GHAs Workflows ([#1172](https://github.com/rapidsai/dask-cuda/pull/1172)) [@ajschmidt8](https://github.com/ajschmidt8)
- Remove `matrix_filter` from workflows ([#1168](https://github.com/rapidsai/dask-cuda/pull/1168)) [@charlesbluca](https://github.com/charlesbluca)
- Revert to branch-23.06 for shared-action-workflows ([#1167](https://github.com/rapidsai/dask-cuda/pull/1167)) [@shwina](https://github.com/shwina)
- Update minimum Python version to Python 3.9 ([#1164](https://github.com/rapidsai/dask-cuda/pull/1164)) [@shwina](https://github.com/shwina)
- Remove usage of rapids-get-rapids-version-from-git ([#1163](https://github.com/rapidsai/dask-cuda/pull/1163)) [@jjacobelli](https://github.com/jjacobelli)
- Use ARC V2 self-hosted runners for GPU jobs ([#1159](https://github.com/rapidsai/dask-cuda/pull/1159)) [@jjacobelli](https://github.com/jjacobelli)

# dask-cuda 23.04.00 (6 Apr 2023)

## 🚨 Breaking Changes

- Pin `dask` and `distributed` for release ([#1153](https://github.com/rapidsai/dask-cuda/pull/1153)) [@galipremsagar](https://github.com/galipremsagar)
- Update minimum `pandas` and `numpy` pinnings ([#1139](https://github.com/rapidsai/dask-cuda/pull/1139)) [@galipremsagar](https://github.com/galipremsagar)

## 🐛 Bug Fixes

- Rectify `dask-core` pinning in pip requirements ([#1155](https://github.com/rapidsai/dask-cuda/pull/1155)) [@galipremsagar](https://github.com/galipremsagar)
- Monkey patching all locations of `get_default_shuffle_algorithm` ([#1142](https://github.com/rapidsai/dask-cuda/pull/1142)) [@madsbk](https://github.com/madsbk)
- Update usage of `get_worker()` in tests ([#1141](https://github.com/rapidsai/dask-cuda/pull/1141)) [@pentschev](https://github.com/pentschev)
- Update `rmm_cupy_allocator` usage ([#1138](https://github.com/rapidsai/dask-cuda/pull/1138)) [@jakirkham](https://github.com/jakirkham)
- Serialize of `ProxyObject` to pickle fixed attributes ([#1137](https://github.com/rapidsai/dask-cuda/pull/1137)) [@madsbk](https://github.com/madsbk)
- Explicit-comms: update monkey patching of Dask ([#1135](https://github.com/rapidsai/dask-cuda/pull/1135)) [@madsbk](https://github.com/madsbk)
- Fix for bytes/str discrepancy after PyNVML update ([#1118](https://github.com/rapidsai/dask-cuda/pull/1118)) [@pentschev](https://github.com/pentschev)

## 🚀 New Features

- Allow specifying dashboard address in benchmarks ([#1147](https://github.com/rapidsai/dask-cuda/pull/1147)) [@pentschev](https://github.com/pentschev)
- Add argument to enable RMM alloaction tracking in benchmarks ([#1145](https://github.com/rapidsai/dask-cuda/pull/1145)) [@pentschev](https://github.com/pentschev)
- Reinstate `--death-timeout` CLI option ([#1140](https://github.com/rapidsai/dask-cuda/pull/1140)) [@charlesbluca](https://github.com/charlesbluca)
- Extend RMM async allocation support ([#1116](https://github.com/rapidsai/dask-cuda/pull/1116)) [@pentschev](https://github.com/pentschev)
- Allow using stream-ordered and managed RMM allocators in benchmarks ([#1012](https://github.com/rapidsai/dask-cuda/pull/1012)) [@pentschev](https://github.com/pentschev)

## 🛠️ Improvements

- Pin `dask` and `distributed` for release ([#1153](https://github.com/rapidsai/dask-cuda/pull/1153)) [@galipremsagar](https://github.com/galipremsagar)
- Update minimum `pandas` and `numpy` pinnings ([#1139](https://github.com/rapidsai/dask-cuda/pull/1139)) [@galipremsagar](https://github.com/galipremsagar)
- Drop Python 3.7 handling for pickle protocol 4 ([#1132](https://github.com/rapidsai/dask-cuda/pull/1132)) [@jakirkham](https://github.com/jakirkham)
- Adapt to rapidsai/rmm#1221 which moves allocator callbacks ([#1129](https://github.com/rapidsai/dask-cuda/pull/1129)) [@wence-](https://github.com/wence-)
- Merge `branch-23.02` into `branch-23.04` ([#1128](https://github.com/rapidsai/dask-cuda/pull/1128)) [@ajschmidt8](https://github.com/ajschmidt8)
- Template Conda recipe's `about` metadata ([#1121](https://github.com/rapidsai/dask-cuda/pull/1121)) [@jakirkham](https://github.com/jakirkham)
- Fix GHA build workflow ([#1120](https://github.com/rapidsai/dask-cuda/pull/1120)) [@AjayThorve](https://github.com/AjayThorve)
- Reduce error handling verbosity in CI tests scripts ([#1113](https://github.com/rapidsai/dask-cuda/pull/1113)) [@AjayThorve](https://github.com/AjayThorve)
- Update shared workflow branches ([#1112](https://github.com/rapidsai/dask-cuda/pull/1112)) [@ajschmidt8](https://github.com/ajschmidt8)
- Remove gpuCI scripts. ([#1111](https://github.com/rapidsai/dask-cuda/pull/1111)) [@bdice](https://github.com/bdice)
- Unpin `dask` and `distributed` for development ([#1110](https://github.com/rapidsai/dask-cuda/pull/1110)) [@galipremsagar](https://github.com/galipremsagar)
- Move date to build string in `conda` recipe ([#1103](https://github.com/rapidsai/dask-cuda/pull/1103)) [@ajschmidt8](https://github.com/ajschmidt8)

# dask-cuda 23.02.00 (9 Feb 2023)

## 🚨 Breaking Changes

- Pin `dask` and `distributed` for release ([#1106](https://github.com/rapidsai/dask-cuda/pull/1106)) [@galipremsagar](https://github.com/galipremsagar)

## 🐛 Bug Fixes

- pre-commit: Update isort version to 5.12.0 ([#1098](https://github.com/rapidsai/dask-cuda/pull/1098)) [@wence-](https://github.com/wence-)
- explicit-comms: don't mix `-` and `_` in config ([#1096](https://github.com/rapidsai/dask-cuda/pull/1096)) [@madsbk](https://github.com/madsbk)
- Update `cudf.Buffer` pointer access method ([#1094](https://github.com/rapidsai/dask-cuda/pull/1094)) [@pentschev](https://github.com/pentschev)
- Update tests for Python 3.10 ([#1086](https://github.com/rapidsai/dask-cuda/pull/1086)) [@pentschev](https://github.com/pentschev)
- Use `pkgutil.iter_modules` to get un-imported module for `test_pre_import` ([#1085](https://github.com/rapidsai/dask-cuda/pull/1085)) [@charlesbluca](https://github.com/charlesbluca)
- Make proxy tests with `LocalCUDACluster` asynchronous ([#1084](https://github.com/rapidsai/dask-cuda/pull/1084)) [@pentschev](https://github.com/pentschev)
- Ensure consistent results from `safe_sizeof()` in test ([#1071](https://github.com/rapidsai/dask-cuda/pull/1071)) [@madsbk](https://github.com/madsbk)
- Pass missing argument to groupby benchmark compute ([#1069](https://github.com/rapidsai/dask-cuda/pull/1069)) [@mattf](https://github.com/mattf)
- Reorder channel priority. ([#1067](https://github.com/rapidsai/dask-cuda/pull/1067)) [@bdice](https://github.com/bdice)
- Fix owner check when the owner is a cupy array ([#1061](https://github.com/rapidsai/dask-cuda/pull/1061)) [@wence-](https://github.com/wence-)

## 🛠️ Improvements

- Pin `dask` and `distributed` for release ([#1106](https://github.com/rapidsai/dask-cuda/pull/1106)) [@galipremsagar](https://github.com/galipremsagar)
- Update shared workflow branches ([#1105](https://github.com/rapidsai/dask-cuda/pull/1105)) [@ajschmidt8](https://github.com/ajschmidt8)
- Proxify: make duplicate check optional ([#1101](https://github.com/rapidsai/dask-cuda/pull/1101)) [@madsbk](https://github.com/madsbk)
- Fix whitespace & add URLs in `pyproject.toml` ([#1092](https://github.com/rapidsai/dask-cuda/pull/1092)) [@jakirkham](https://github.com/jakirkham)
- pre-commit: spell, whitespace, and mypy check ([#1091](https://github.com/rapidsai/dask-cuda/pull/1091)) [@madsbk](https://github.com/madsbk)
- shuffle: use cuDF's `partition_by_hash()` when available ([#1090](https://github.com/rapidsai/dask-cuda/pull/1090)) [@madsbk](https://github.com/madsbk)
- add initial docs build ([#1089](https://github.com/rapidsai/dask-cuda/pull/1089)) [@AjayThorve](https://github.com/AjayThorve)
- Remove `--get-cluster-configuration` option, check for scheduler in `dask cuda config` ([#1088](https://github.com/rapidsai/dask-cuda/pull/1088)) [@charlesbluca](https://github.com/charlesbluca)
- Add timeout to `pytest` command ([#1082](https://github.com/rapidsai/dask-cuda/pull/1082)) [@ajschmidt8](https://github.com/ajschmidt8)
- shuffle-benchmark: add `--partition-distribution` ([#1081](https://github.com/rapidsai/dask-cuda/pull/1081)) [@madsbk](https://github.com/madsbk)
- Ensure tests run for Python `3.10` ([#1080](https://github.com/rapidsai/dask-cuda/pull/1080)) [@ajschmidt8](https://github.com/ajschmidt8)
- Use TrackingResourceAdaptor to get better debug info ([#1079](https://github.com/rapidsai/dask-cuda/pull/1079)) [@madsbk](https://github.com/madsbk)
- Improve shuffle-benchmark ([#1074](https://github.com/rapidsai/dask-cuda/pull/1074)) [@madsbk](https://github.com/madsbk)
- Update builds for CUDA `11.8` and Python `310` ([#1072](https://github.com/rapidsai/dask-cuda/pull/1072)) [@ajschmidt8](https://github.com/ajschmidt8)
- Shuffle by partition to reduce memory usage significantly ([#1068](https://github.com/rapidsai/dask-cuda/pull/1068)) [@madsbk](https://github.com/madsbk)
- Enable copy_prs. ([#1063](https://github.com/rapidsai/dask-cuda/pull/1063)) [@bdice](https://github.com/bdice)
- Add GitHub Actions Workflows ([#1062](https://github.com/rapidsai/dask-cuda/pull/1062)) [@bdice](https://github.com/bdice)
- Unpin `dask` and `distributed` for development ([#1060](https://github.com/rapidsai/dask-cuda/pull/1060)) [@galipremsagar](https://github.com/galipremsagar)
- Switch to the new dask CLI ([#981](https://github.com/rapidsai/dask-cuda/pull/981)) [@jacobtomlinson](https://github.com/jacobtomlinson)

# dask-cuda 22.12.00 (8 Dec 2022)

## 🚨 Breaking Changes

- Make local_directory a required argument for spilling impls ([#1023](https://github.com/rapidsai/dask-cuda/pull/1023)) [@wence-](https://github.com/wence-)

## 🐛 Bug Fixes

- Fix `parse_memory_limit` function call ([#1055](https://github.com/rapidsai/dask-cuda/pull/1055)) [@galipremsagar](https://github.com/galipremsagar)
- Work around Jupyter errors in CI ([#1041](https://github.com/rapidsai/dask-cuda/pull/1041)) [@pentschev](https://github.com/pentschev)
- Fix version constraint ([#1036](https://github.com/rapidsai/dask-cuda/pull/1036)) [@wence-](https://github.com/wence-)
- Support the new `Buffer` in cudf ([#1033](https://github.com/rapidsai/dask-cuda/pull/1033)) [@madsbk](https://github.com/madsbk)
- Install Dask nightly last in CI ([#1029](https://github.com/rapidsai/dask-cuda/pull/1029)) [@pentschev](https://github.com/pentschev)
- Fix recorded time in merge benchmark ([#1028](https://github.com/rapidsai/dask-cuda/pull/1028)) [@wence-](https://github.com/wence-)
- Switch pre-import not found test to sync definition ([#1026](https://github.com/rapidsai/dask-cuda/pull/1026)) [@pentschev](https://github.com/pentschev)
- Make local_directory a required argument for spilling impls ([#1023](https://github.com/rapidsai/dask-cuda/pull/1023)) [@wence-](https://github.com/wence-)
- Fixes for handling MIG devices ([#950](https://github.com/rapidsai/dask-cuda/pull/950)) [@pentschev](https://github.com/pentschev)

## 📖 Documentation

- Merge 22.10 into 22.12 ([#1016](https://github.com/rapidsai/dask-cuda/pull/1016)) [@pentschev](https://github.com/pentschev)
- Merge 22.08 into 22.10 ([#1010](https://github.com/rapidsai/dask-cuda/pull/1010)) [@pentschev](https://github.com/pentschev)

## 🚀 New Features

- Allow specifying fractions as RMM pool initial/maximum size ([#1021](https://github.com/rapidsai/dask-cuda/pull/1021)) [@pentschev](https://github.com/pentschev)
- Add feature to get cluster configuration ([#1006](https://github.com/rapidsai/dask-cuda/pull/1006)) [@quasiben](https://github.com/quasiben)
- Add benchmark option to use dask-noop ([#994](https://github.com/rapidsai/dask-cuda/pull/994)) [@wence-](https://github.com/wence-)

## 🛠️ Improvements

- Ensure linting checks for whole repo in CI ([#1053](https://github.com/rapidsai/dask-cuda/pull/1053)) [@pentschev](https://github.com/pentschev)
- Pin `dask` and `distributed` for release ([#1046](https://github.com/rapidsai/dask-cuda/pull/1046)) [@galipremsagar](https://github.com/galipremsagar)
- Remove `pytest-asyncio` dependency ([#1045](https://github.com/rapidsai/dask-cuda/pull/1045)) [@pentschev](https://github.com/pentschev)
- Migrate as much as possible to `pyproject.toml` ([#1035](https://github.com/rapidsai/dask-cuda/pull/1035)) [@jakirkham](https://github.com/jakirkham)
- Re-implement shuffle using staging ([#1030](https://github.com/rapidsai/dask-cuda/pull/1030)) [@madsbk](https://github.com/madsbk)
- Explicit-comms-shuffle: fine control of task scheduling ([#1025](https://github.com/rapidsai/dask-cuda/pull/1025)) [@madsbk](https://github.com/madsbk)
- Remove stale labeler ([#1024](https://github.com/rapidsai/dask-cuda/pull/1024)) [@raydouglass](https://github.com/raydouglass)
- Unpin `dask` and `distributed` for development ([#1005](https://github.com/rapidsai/dask-cuda/pull/1005)) [@galipremsagar](https://github.com/galipremsagar)
- Support cuDF's built-in spilling ([#984](https://github.com/rapidsai/dask-cuda/pull/984)) [@madsbk](https://github.com/madsbk)

# dask-cuda 22.10.00 (12 Oct 2022)

## 🐛 Bug Fixes

- Revert "Update rearrange_by_column patch for explicit comms" ([#1001](https://github.com/rapidsai/dask-cuda/pull/1001)) [@rjzamora](https://github.com/rjzamora)
- Address CI failures caused by upstream distributed and cupy changes ([#993](https://github.com/rapidsai/dask-cuda/pull/993)) [@rjzamora](https://github.com/rjzamora)
- DeviceSerialized.__reduce_ex__: convert frame to numpy arrays ([#977](https://github.com/rapidsai/dask-cuda/pull/977)) [@madsbk](https://github.com/madsbk)

## 📖 Documentation

- Remove line-break that's breaking link ([#982](https://github.com/rapidsai/dask-cuda/pull/982)) [@ntabris](https://github.com/ntabris)
- Dask-cuda best practices ([#976](https://github.com/rapidsai/dask-cuda/pull/976)) [@quasiben](https://github.com/quasiben)

## 🚀 New Features

- Add Groupby benchmark ([#979](https://github.com/rapidsai/dask-cuda/pull/979)) [@rjzamora](https://github.com/rjzamora)

## 🛠️ Improvements

- Pin `dask` and `distributed` for release ([#1003](https://github.com/rapidsai/dask-cuda/pull/1003)) [@galipremsagar](https://github.com/galipremsagar)
- Update rearrange_by_column patch for explicit comms ([#992](https://github.com/rapidsai/dask-cuda/pull/992)) [@rjzamora](https://github.com/rjzamora)
- benchmarks: Add option to suppress output of point to point data ([#985](https://github.com/rapidsai/dask-cuda/pull/985)) [@wence-](https://github.com/wence-)
- Unpin `dask` and `distributed` for development ([#971](https://github.com/rapidsai/dask-cuda/pull/971)) [@galipremsagar](https://github.com/galipremsagar)

# dask-cuda 22.08.00 (17 Aug 2022)

## 🚨 Breaking Changes

- Fix useless property ([#944](https://github.com/rapidsai/dask-cuda/pull/944)) [@wence-](https://github.com/wence-)

## 🐛 Bug Fixes

- Fix `distributed` error related to `loop_in_thread` ([#963](https://github.com/rapidsai/dask-cuda/pull/963)) [@galipremsagar](https://github.com/galipremsagar)
- Add `__rmatmul__` to `ProxyObject` ([#960](https://github.com/rapidsai/dask-cuda/pull/960)) [@jakirkham](https://github.com/jakirkham)
- Always use versioneer command classes in setup.py ([#948](https://github.com/rapidsai/dask-cuda/pull/948)) [@wence-](https://github.com/wence-)
- Do not dispatch removed `cudf.Frame._index` object ([#947](https://github.com/rapidsai/dask-cuda/pull/947)) [@pentschev](https://github.com/pentschev)
- Fix useless property ([#944](https://github.com/rapidsai/dask-cuda/pull/944)) [@wence-](https://github.com/wence-)
- LocalCUDACluster's memory limit: `None` means no limit ([#943](https://github.com/rapidsai/dask-cuda/pull/943)) [@madsbk](https://github.com/madsbk)
- ProxyManager: support `memory_limit=None` ([#941](https://github.com/rapidsai/dask-cuda/pull/941)) [@madsbk](https://github.com/madsbk)
- Remove deprecated `loop` kwarg to `Nanny` in `CUDAWorker` ([#934](https://github.com/rapidsai/dask-cuda/pull/934)) [@pentschev](https://github.com/pentschev)
- Import `cleanup` fixture in `test_dask_cuda_worker.py` ([#924](https://github.com/rapidsai/dask-cuda/pull/924)) [@pentschev](https://github.com/pentschev)

## 📖 Documentation

- Switch docs to use common `js` & `css` code ([#967](https://github.com/rapidsai/dask-cuda/pull/967)) [@galipremsagar](https://github.com/galipremsagar)
- Switch `language` from `None` to `"en"` in docs build ([#939](https://github.com/rapidsai/dask-cuda/pull/939)) [@galipremsagar](https://github.com/galipremsagar)

## 🚀 New Features

- Add communications bandwidth to benchmarks ([#938](https://github.com/rapidsai/dask-cuda/pull/938)) [@pentschev](https://github.com/pentschev)

## 🛠️ Improvements

- Pin `dask` & `distributed` for release ([#965](https://github.com/rapidsai/dask-cuda/pull/965)) [@galipremsagar](https://github.com/galipremsagar)
- Test memory_limit=None for CUDAWorker ([#946](https://github.com/rapidsai/dask-cuda/pull/946)) [@wence-](https://github.com/wence-)
- benchmarks: Record total number of workers in dataframe ([#945](https://github.com/rapidsai/dask-cuda/pull/945)) [@wence-](https://github.com/wence-)
- Benchmark refactoring: tidy data and multi-node capability via `--scheduler-file` ([#940](https://github.com/rapidsai/dask-cuda/pull/940)) [@wence-](https://github.com/wence-)
- Add util functions to simplify printing benchmarks results ([#937](https://github.com/rapidsai/dask-cuda/pull/937)) [@pentschev](https://github.com/pentschev)
- Add --multiprocessing-method option to benchmarks ([#933](https://github.com/rapidsai/dask-cuda/pull/933)) [@wence-](https://github.com/wence-)
- Remove click pinning ([#932](https://github.com/rapidsai/dask-cuda/pull/932)) [@charlesbluca](https://github.com/charlesbluca)
- Remove compiler variables ([#929](https://github.com/rapidsai/dask-cuda/pull/929)) [@ajschmidt8](https://github.com/ajschmidt8)
- Unpin `dask` & `distributed` for development ([#927](https://github.com/rapidsai/dask-cuda/pull/927)) [@galipremsagar](https://github.com/galipremsagar)

# dask-cuda 22.06.00 (7 Jun 2022)

## 🚨 Breaking Changes

- Upgrade `numba` pinning to be in-line with rest of rapids ([#912](https://github.com/rapidsai/dask-cuda/pull/912)) [@galipremsagar](https://github.com/galipremsagar)

## 🐛 Bug Fixes

- Reduce `test_cudf_cluster_device_spill` test and speed it up ([#918](https://github.com/rapidsai/dask-cuda/pull/918)) [@pentschev](https://github.com/pentschev)
- Update ImportError tests with --pre-import ([#914](https://github.com/rapidsai/dask-cuda/pull/914)) [@pentschev](https://github.com/pentschev)
- Add xfail mark to `test_pre_import_not_found` ([#908](https://github.com/rapidsai/dask-cuda/pull/908)) [@pentschev](https://github.com/pentschev)
- Increase spill tests timeout to 30 seconds ([#901](https://github.com/rapidsai/dask-cuda/pull/901)) [@pentschev](https://github.com/pentschev)
- Fix errors related with `distributed.worker.memory.terminate` ([#900](https://github.com/rapidsai/dask-cuda/pull/900)) [@pentschev](https://github.com/pentschev)
- Skip tests on import error for some optional packages ([#899](https://github.com/rapidsai/dask-cuda/pull/899)) [@pentschev](https://github.com/pentschev)
- Update auto host_memory computation when threads per worker > 1 ([#896](https://github.com/rapidsai/dask-cuda/pull/896)) [@ayushdg](https://github.com/ayushdg)
- Update black to 22.3.0 ([#889](https://github.com/rapidsai/dask-cuda/pull/889)) [@charlesbluca](https://github.com/charlesbluca)
- Remove legacy `check_python_3` ([#886](https://github.com/rapidsai/dask-cuda/pull/886)) [@pentschev](https://github.com/pentschev)

## 📖 Documentation

- Add documentation for `RAPIDS_NO_INITIALIZE` ([#898](https://github.com/rapidsai/dask-cuda/pull/898)) [@charlesbluca](https://github.com/charlesbluca)
- Use upstream warning functions for CUDA initialization ([#894](https://github.com/rapidsai/dask-cuda/pull/894)) [@charlesbluca](https://github.com/charlesbluca)

## 🛠️ Improvements

- Pin `dask` and `distributed` for release ([#922](https://github.com/rapidsai/dask-cuda/pull/922)) [@galipremsagar](https://github.com/galipremsagar)
- Pin `dask` & `distributed` for release ([#916](https://github.com/rapidsai/dask-cuda/pull/916)) [@galipremsagar](https://github.com/galipremsagar)
- Upgrade `numba` pinning to be in-line with rest of rapids ([#912](https://github.com/rapidsai/dask-cuda/pull/912)) [@galipremsagar](https://github.com/galipremsagar)
- Removing test of `cudf.merge_sorted()` ([#905](https://github.com/rapidsai/dask-cuda/pull/905)) [@madsbk](https://github.com/madsbk)
- Disable `include-ignored` coverage warnings ([#903](https://github.com/rapidsai/dask-cuda/pull/903)) [@pentschev](https://github.com/pentschev)
- Fix ci/local script ([#902](https://github.com/rapidsai/dask-cuda/pull/902)) [@Ethyling](https://github.com/Ethyling)
- Use conda to build python packages during GPU tests ([#897](https://github.com/rapidsai/dask-cuda/pull/897)) [@Ethyling](https://github.com/Ethyling)
- Pull `requirements.txt` into Conda recipe ([#893](https://github.com/rapidsai/dask-cuda/pull/893)) [@jakirkham](https://github.com/jakirkham)
- Unpin `dask` & `distributed` for development ([#892](https://github.com/rapidsai/dask-cuda/pull/892)) [@galipremsagar](https://github.com/galipremsagar)
- Build packages using mambabuild ([#846](https://github.com/rapidsai/dask-cuda/pull/846)) [@Ethyling](https://github.com/Ethyling)

# dask-cuda 22.04.00 (6 Apr 2022)

## 🚨 Breaking Changes

- Introduce incompatible-types and enables spilling of CuPy arrays ([#856](https://github.com/rapidsai/dask-cuda/pull/856)) [@madsbk](https://github.com/madsbk)

## 🐛 Bug Fixes

- Resolve build issues / consistency with conda-forge packages ([#883](https://github.com/rapidsai/dask-cuda/pull/883)) [@charlesbluca](https://github.com/charlesbluca)
- Increase test_worker_force_spill_to_disk timeout ([#857](https://github.com/rapidsai/dask-cuda/pull/857)) [@pentschev](https://github.com/pentschev)

## 📖 Documentation

- Remove description from non-existing `--nprocs` CLI argument ([#852](https://github.com/rapidsai/dask-cuda/pull/852)) [@pentschev](https://github.com/pentschev)

## 🚀 New Features

- Add --pre-import/pre_import argument ([#854](https://github.com/rapidsai/dask-cuda/pull/854)) [@pentschev](https://github.com/pentschev)
- Remove support for UCX < 1.11.1 ([#830](https://github.com/rapidsai/dask-cuda/pull/830)) [@pentschev](https://github.com/pentschev)

## 🛠️ Improvements

- Raise `ImportError` when platform is not Linux ([#885](https://github.com/rapidsai/dask-cuda/pull/885)) [@pentschev](https://github.com/pentschev)
- Temporarily disable new `ops-bot` functionality ([#880](https://github.com/rapidsai/dask-cuda/pull/880)) [@ajschmidt8](https://github.com/ajschmidt8)
- Pin `dask` & `distributed` ([#878](https://github.com/rapidsai/dask-cuda/pull/878)) [@galipremsagar](https://github.com/galipremsagar)
- Upgrade min `dask` & `distributed` versions ([#872](https://github.com/rapidsai/dask-cuda/pull/872)) [@galipremsagar](https://github.com/galipremsagar)
- Add `.github/ops-bot.yaml` config file ([#871](https://github.com/rapidsai/dask-cuda/pull/871)) [@ajschmidt8](https://github.com/ajschmidt8)
- Make Dask CUDA work with the new WorkerMemoryManager abstraction ([#870](https://github.com/rapidsai/dask-cuda/pull/870)) [@shwina](https://github.com/shwina)
- Implement ProxifyHostFile.evict() ([#862](https://github.com/rapidsai/dask-cuda/pull/862)) [@madsbk](https://github.com/madsbk)
- Introduce incompatible-types and enables spilling of CuPy arrays ([#856](https://github.com/rapidsai/dask-cuda/pull/856)) [@madsbk](https://github.com/madsbk)
- Spill to disk clean up ([#853](https://github.com/rapidsai/dask-cuda/pull/853)) [@madsbk](https://github.com/madsbk)
- ProxyObject to support matrix multiplication ([#849](https://github.com/rapidsai/dask-cuda/pull/849)) [@madsbk](https://github.com/madsbk)
- Unpin max dask and distributed ([#847](https://github.com/rapidsai/dask-cuda/pull/847)) [@galipremsagar](https://github.com/galipremsagar)
- test_gds: skip if GDS is not available ([#845](https://github.com/rapidsai/dask-cuda/pull/845)) [@madsbk](https://github.com/madsbk)
- ProxyObject implement __array_function__ ([#843](https://github.com/rapidsai/dask-cuda/pull/843))
[@madsbk](https://github.com/madsbk) - Add option to track RMM allocations ([#842](https://github.com/rapidsai/dask-cuda/pull/842)) [@shwina](https://github.com/shwina) # dask-cuda 22.02.00 (2 Feb 2022) ## 🐛 Bug Fixes - Ignore `DeprecationWarning` from `distutils.Version` classes ([#823](https://github.com/rapidsai/dask-cuda/pull/823)) [@pentschev](https://github.com/pentschev) - Handle explicitly disabled UCX transports ([#820](https://github.com/rapidsai/dask-cuda/pull/820)) [@pentschev](https://github.com/pentschev) - Fix regex pattern to match to in test_on_demand_debug_info ([#819](https://github.com/rapidsai/dask-cuda/pull/819)) [@pentschev](https://github.com/pentschev) - Fix skipping GDS test if cucim is not installed ([#813](https://github.com/rapidsai/dask-cuda/pull/813)) [@pentschev](https://github.com/pentschev) - Unpin Dask and Distributed versions ([#810](https://github.com/rapidsai/dask-cuda/pull/810)) [@pentschev](https://github.com/pentschev) - Update to UCX-Py 0.24 ([#805](https://github.com/rapidsai/dask-cuda/pull/805)) [@pentschev](https://github.com/pentschev) ## 📖 Documentation - Fix Dask-CUDA version to 22.02 ([#835](https://github.com/rapidsai/dask-cuda/pull/835)) [@jakirkham](https://github.com/jakirkham) - Merge branch-21.12 into branch-22.02 ([#829](https://github.com/rapidsai/dask-cuda/pull/829)) [@pentschev](https://github.com/pentschev) - Clarify `LocalCUDACluster`&#39;s `n_workers` docstrings ([#812](https://github.com/rapidsai/dask-cuda/pull/812)) [@pentschev](https://github.com/pentschev) ## 🚀 New Features - Pin `dask` &amp; `distributed` versions ([#832](https://github.com/rapidsai/dask-cuda/pull/832)) [@galipremsagar](https://github.com/galipremsagar) - Expose rmm-maximum_pool_size argument ([#827](https://github.com/rapidsai/dask-cuda/pull/827)) [@VibhuJawa](https://github.com/VibhuJawa) - Simplify UCX configs, permitting UCX_TLS=all ([#792](https://github.com/rapidsai/dask-cuda/pull/792)) 
[@pentschev](https://github.com/pentschev) ## 🛠️ Improvements - Add avg and std calculation for time and throughput ([#828](https://github.com/rapidsai/dask-cuda/pull/828)) [@quasiben](https://github.com/quasiben) - sizeof test: increase tolerance ([#825](https://github.com/rapidsai/dask-cuda/pull/825)) [@madsbk](https://github.com/madsbk) - Query UCX-Py from gpuCI versioning service ([#818](https://github.com/rapidsai/dask-cuda/pull/818)) [@pentschev](https://github.com/pentschev) - Standardize Distributed config separator in get_ucx_config ([#806](https://github.com/rapidsai/dask-cuda/pull/806)) [@pentschev](https://github.com/pentschev) - Fixed `ProxyObject.__del__` to use the new Disk IO API from #791 ([#802](https://github.com/rapidsai/dask-cuda/pull/802)) [@madsbk](https://github.com/madsbk) - GPUDirect Storage (GDS) support for spilling ([#793](https://github.com/rapidsai/dask-cuda/pull/793)) [@madsbk](https://github.com/madsbk) - Disk IO interface ([#791](https://github.com/rapidsai/dask-cuda/pull/791)) [@madsbk](https://github.com/madsbk) # dask-cuda 21.12.00 (9 Dec 2021) ## 🐛 Bug Fixes - Remove automatic `doc` labeler ([#807](https://github.com/rapidsai/dask-cuda/pull/807)) [@pentschev](https://github.com/pentschev) - Add create_cuda_context UCX config from Distributed ([#801](https://github.com/rapidsai/dask-cuda/pull/801)) [@pentschev](https://github.com/pentschev) - Ignore deprecation warnings from pkg_resources ([#784](https://github.com/rapidsai/dask-cuda/pull/784)) [@pentschev](https://github.com/pentschev) - Fix parsing of device by UUID ([#780](https://github.com/rapidsai/dask-cuda/pull/780)) [@pentschev](https://github.com/pentschev) - Avoid creating CUDA context in LocalCUDACluster parent process ([#765](https://github.com/rapidsai/dask-cuda/pull/765)) [@pentschev](https://github.com/pentschev) - Remove gen_cluster spill tests ([#758](https://github.com/rapidsai/dask-cuda/pull/758)) [@pentschev](https://github.com/pentschev) - Update 
memory_pause_fraction in test_spill ([#757](https://github.com/rapidsai/dask-cuda/pull/757)) [@pentschev](https://github.com/pentschev) ## 📖 Documentation - Add troubleshooting page with PCI Bus ID issue description ([#777](https://github.com/rapidsai/dask-cuda/pull/777)) [@pentschev](https://github.com/pentschev) ## 🚀 New Features - Handle UCX-Py FutureWarning on UCX &lt; 1.11.1 deprecation ([#799](https://github.com/rapidsai/dask-cuda/pull/799)) [@pentschev](https://github.com/pentschev) - Pin max `dask` &amp; `distributed` versions ([#794](https://github.com/rapidsai/dask-cuda/pull/794)) [@galipremsagar](https://github.com/galipremsagar) - Update to UCX-Py 0.23 ([#752](https://github.com/rapidsai/dask-cuda/pull/752)) [@pentschev](https://github.com/pentschev) ## 🛠️ Improvements - Fix spill-to-disk triggered by Dask explicitly ([#800](https://github.com/rapidsai/dask-cuda/pull/800)) [@madsbk](https://github.com/madsbk) - Fix Changelog Merge Conflicts for `branch-21.12` ([#797](https://github.com/rapidsai/dask-cuda/pull/797)) [@ajschmidt8](https://github.com/ajschmidt8) - Use unittest.mock.patch for all os.environ tests ([#787](https://github.com/rapidsai/dask-cuda/pull/787)) [@pentschev](https://github.com/pentschev) - Logging when RMM allocation fails ([#782](https://github.com/rapidsai/dask-cuda/pull/782)) [@madsbk](https://github.com/madsbk) - Tally IDs instead of device buffers directly ([#779](https://github.com/rapidsai/dask-cuda/pull/779)) [@madsbk](https://github.com/madsbk) - Avoid proxy object aliasing ([#775](https://github.com/rapidsai/dask-cuda/pull/775)) [@madsbk](https://github.com/madsbk) - Test of sizeof proxy object ([#774](https://github.com/rapidsai/dask-cuda/pull/774)) [@madsbk](https://github.com/madsbk) - gc.collect when spilling on demand ([#771](https://github.com/rapidsai/dask-cuda/pull/771)) [@madsbk](https://github.com/madsbk) - Reenable explicit comms tests ([#770](https://github.com/rapidsai/dask-cuda/pull/770)) 
[@madsbk](https://github.com/madsbk) - Simplify JIT-unspill and writing docs ([#768](https://github.com/rapidsai/dask-cuda/pull/768)) [@madsbk](https://github.com/madsbk) - Increase CUDAWorker close timeout ([#764](https://github.com/rapidsai/dask-cuda/pull/764)) [@pentschev](https://github.com/pentschev) - Ignore known but expected test warnings ([#759](https://github.com/rapidsai/dask-cuda/pull/759)) [@pentschev](https://github.com/pentschev) - Spilling on demand ([#756](https://github.com/rapidsai/dask-cuda/pull/756)) [@madsbk](https://github.com/madsbk) - Revert &quot;Temporarily skipping some tests because of a bug in Dask ([#753)&quot; (#754](https://github.com/rapidsai/dask-cuda/pull/753)&quot; (#754)) [@madsbk](https://github.com/madsbk) - Temporarily skipping some tests because of a bug in Dask ([#753](https://github.com/rapidsai/dask-cuda/pull/753)) [@madsbk](https://github.com/madsbk) - Removing the `FrameProxyObject` workaround ([#751](https://github.com/rapidsai/dask-cuda/pull/751)) [@madsbk](https://github.com/madsbk) - Use cuDF Frame instead of Table ([#748](https://github.com/rapidsai/dask-cuda/pull/748)) [@madsbk](https://github.com/madsbk) - Remove proxy object locks ([#747](https://github.com/rapidsai/dask-cuda/pull/747)) [@madsbk](https://github.com/madsbk) - Unpin `dask` &amp; `distributed` in CI ([#742](https://github.com/rapidsai/dask-cuda/pull/742)) [@galipremsagar](https://github.com/galipremsagar) - Update SSHCluster usage in benchmarks with new CUDAWorker ([#326](https://github.com/rapidsai/dask-cuda/pull/326)) [@pentschev](https://github.com/pentschev) # dask-cuda 21.10.00 (7 Oct 2021) ## 🐛 Bug Fixes - Drop test setting UCX global options via Dask config ([#738](https://github.com/rapidsai/dask-cuda/pull/738)) [@pentschev](https://github.com/pentschev) - Prevent CUDA context errors when testing on single-GPU ([#737](https://github.com/rapidsai/dask-cuda/pull/737)) [@pentschev](https://github.com/pentschev) - Handle `ucp` import error 
during `initialize()` ([#729](https://github.com/rapidsai/dask-cuda/pull/729)) [@pentschev](https://github.com/pentschev) - Check if CUDA context was created in distributed.comm.ucx ([#722](https://github.com/rapidsai/dask-cuda/pull/722)) [@pentschev](https://github.com/pentschev) - Fix registering correct dispatches for `cudf.Index` ([#718](https://github.com/rapidsai/dask-cuda/pull/718)) [@galipremsagar](https://github.com/galipremsagar) - Register `percentile_lookup` for `FrameProxyObject` ([#716](https://github.com/rapidsai/dask-cuda/pull/716)) [@galipremsagar](https://github.com/galipremsagar) - Leave interface unset when ucx_net_devices unset in LocalCUDACluster ([#711](https://github.com/rapidsai/dask-cuda/pull/711)) [@pentschev](https://github.com/pentschev) - Update to UCX-Py 0.22 ([#710](https://github.com/rapidsai/dask-cuda/pull/710)) [@pentschev](https://github.com/pentschev) - Missing fixes to Distributed config namespace refactoring ([#703](https://github.com/rapidsai/dask-cuda/pull/703)) [@pentschev](https://github.com/pentschev) - Reset UCX-Py after rdmacm tests run ([#702](https://github.com/rapidsai/dask-cuda/pull/702)) [@pentschev](https://github.com/pentschev) - Skip DGX InfiniBand tests when &quot;rc&quot; transport is unavailable ([#701](https://github.com/rapidsai/dask-cuda/pull/701)) [@pentschev](https://github.com/pentschev) - Update UCX config namespace ([#695](https://github.com/rapidsai/dask-cuda/pull/695)) [@pentschev](https://github.com/pentschev) - Bump isort hook version ([#682](https://github.com/rapidsai/dask-cuda/pull/682)) [@charlesbluca](https://github.com/charlesbluca) ## 📖 Documentation - Update more docs for UCX 1.11+ ([#720](https://github.com/rapidsai/dask-cuda/pull/720)) [@pentschev](https://github.com/pentschev) - Forward-merge branch-21.08 to branch-21.10 ([#707](https://github.com/rapidsai/dask-cuda/pull/707)) [@jakirkham](https://github.com/jakirkham) ## 🚀 New Features - Warn if CUDA context is created on incorrect 
device with `LocalCUDACluster` ([#719](https://github.com/rapidsai/dask-cuda/pull/719)) [@pentschev](https://github.com/pentschev) - Add `--benchmark-json` option to all benchmarks ([#700](https://github.com/rapidsai/dask-cuda/pull/700)) [@charlesbluca](https://github.com/charlesbluca) - Remove Distributed tests from CI ([#699](https://github.com/rapidsai/dask-cuda/pull/699)) [@pentschev](https://github.com/pentschev) - Add device memory limit argument to benchmarks ([#683](https://github.com/rapidsai/dask-cuda/pull/683)) [@charlesbluca](https://github.com/charlesbluca) - Support for LocalCUDACluster with MIG ([#674](https://github.com/rapidsai/dask-cuda/pull/674)) [@akaanirban](https://github.com/akaanirban) ## 🛠️ Improvements - Pin max `dask` and `distributed` versions to `2021.09.1` ([#735](https://github.com/rapidsai/dask-cuda/pull/735)) [@galipremsagar](https://github.com/galipremsagar) - Implements a ProxyManagerDummy for convenience ([#733](https://github.com/rapidsai/dask-cuda/pull/733)) [@madsbk](https://github.com/madsbk) - Add `__array_ufunc__` support for `ProxyObject` ([#731](https://github.com/rapidsai/dask-cuda/pull/731)) [@galipremsagar](https://github.com/galipremsagar) - Use `has_cuda_context` from Distributed ([#723](https://github.com/rapidsai/dask-cuda/pull/723)) [@pentschev](https://github.com/pentschev) - Fix deadlock and simplify proxy tracking ([#712](https://github.com/rapidsai/dask-cuda/pull/712)) [@madsbk](https://github.com/madsbk) - JIT-unspill: support spilling to/from disk ([#708](https://github.com/rapidsai/dask-cuda/pull/708)) [@madsbk](https://github.com/madsbk) - Tests: replacing the obsolete cudf.testing._utils.assert_eq calls ([#706](https://github.com/rapidsai/dask-cuda/pull/706)) [@madsbk](https://github.com/madsbk) - JIT-unspill: warn when spill to disk triggers ([#705](https://github.com/rapidsai/dask-cuda/pull/705)) [@madsbk](https://github.com/madsbk) - Remove max version pin for `dask` &amp; `distributed` on development 
branch ([#693](https://github.com/rapidsai/dask-cuda/pull/693)) [@galipremsagar](https://github.com/galipremsagar) - ENH Replace gpuci_conda_retry with gpuci_mamba_retry ([#675](https://github.com/rapidsai/dask-cuda/pull/675)) [@dillon-cullinan](https://github.com/dillon-cullinan) # dask-cuda 21.08.00 (4 Aug 2021) ## 🐛 Bug Fixes - Use aliases to check for installed UCX version ([#692](https://github.com/rapidsai/dask-cuda/pull/692)) [@pentschev](https://github.com/pentschev) - Don&#39;t install Dask main branch in CI for 21.08 release ([#687](https://github.com/rapidsai/dask-cuda/pull/687)) [@pentschev](https://github.com/pentschev) - Skip test_get_ucx_net_devices_raises on UCX &gt;= 1.11.0 ([#685](https://github.com/rapidsai/dask-cuda/pull/685)) [@pentschev](https://github.com/pentschev) - Fix NVML index usage in CUDAWorker/LocalCUDACluster ([#671](https://github.com/rapidsai/dask-cuda/pull/671)) [@pentschev](https://github.com/pentschev) - Add --protocol flag to dask-cuda-worker ([#670](https://github.com/rapidsai/dask-cuda/pull/670)) [@jacobtomlinson](https://github.com/jacobtomlinson) - Fix `assert_eq` related imports ([#663](https://github.com/rapidsai/dask-cuda/pull/663)) [@galipremsagar](https://github.com/galipremsagar) - Small tweaks to make compatible with dask-mpi ([#656](https://github.com/rapidsai/dask-cuda/pull/656)) [@jacobtomlinson](https://github.com/jacobtomlinson) - Remove Dask version pin ([#647](https://github.com/rapidsai/dask-cuda/pull/647)) [@pentschev](https://github.com/pentschev) - Fix CUDA_VISIBLE_DEVICES tests ([#638](https://github.com/rapidsai/dask-cuda/pull/638)) [@pentschev](https://github.com/pentschev) - Add `make_meta_dispatch` handling ([#637](https://github.com/rapidsai/dask-cuda/pull/637)) [@galipremsagar](https://github.com/galipremsagar) - Update UCX-Py version in CI to 0.21.* ([#636](https://github.com/rapidsai/dask-cuda/pull/636)) [@pentschev](https://github.com/pentschev) ## 📖 Documentation - Deprecation warning for 
ucx_net_devices=&#39;auto&#39; on UCX 1.11+ ([#681](https://github.com/rapidsai/dask-cuda/pull/681)) [@pentschev](https://github.com/pentschev) - Update documentation on InfiniBand with UCX &gt;= 1.11 ([#669](https://github.com/rapidsai/dask-cuda/pull/669)) [@pentschev](https://github.com/pentschev) - Merge branch-21.06 ([#622](https://github.com/rapidsai/dask-cuda/pull/622)) [@pentschev](https://github.com/pentschev) ## 🚀 New Features - Treat Deprecation/Future warnings as errors ([#672](https://github.com/rapidsai/dask-cuda/pull/672)) [@pentschev](https://github.com/pentschev) - Update parse_bytes imports to resolve deprecation warnings ([#662](https://github.com/rapidsai/dask-cuda/pull/662)) [@pentschev](https://github.com/pentschev) ## 🛠️ Improvements - Pin max `dask` &amp; `distributed` versions ([#686](https://github.com/rapidsai/dask-cuda/pull/686)) [@galipremsagar](https://github.com/galipremsagar) - Fix DGX tests warnings on RMM pool size and file not closed ([#673](https://github.com/rapidsai/dask-cuda/pull/673)) [@pentschev](https://github.com/pentschev) - Remove dot calling style for pytest ([#661](https://github.com/rapidsai/dask-cuda/pull/661)) [@quasiben](https://github.com/quasiben) - get_device_memory_objects(): dispatch on cudf.core.frame.Frame ([#658](https://github.com/rapidsai/dask-cuda/pull/658)) [@madsbk](https://github.com/madsbk) - Fix `21.08` forward-merge conflicts ([#655](https://github.com/rapidsai/dask-cuda/pull/655)) [@ajschmidt8](https://github.com/ajschmidt8) - Fix conflicts in `643` ([#644](https://github.com/rapidsai/dask-cuda/pull/644)) [@ajschmidt8](https://github.com/ajschmidt8) # dask-cuda 21.06.00 (9 Jun 2021) ## 🐛 Bug Fixes - Handle `import`ing relocated dispatch functions ([#623](https://github.com/rapidsai/dask-cuda/pull/623)) [@jakirkham](https://github.com/jakirkham) - Fix DGX tests for UCX 1.9 ([#619](https://github.com/rapidsai/dask-cuda/pull/619)) [@pentschev](https://github.com/pentschev) - Add PROJECTS var 
([#614](https://github.com/rapidsai/dask-cuda/pull/614)) [@ajschmidt8](https://github.com/ajschmidt8) ## 📖 Documentation - Bump docs copyright year ([#616](https://github.com/rapidsai/dask-cuda/pull/616)) [@charlesbluca](https://github.com/charlesbluca) - Update RTD site to redirect to RAPIDS docs ([#615](https://github.com/rapidsai/dask-cuda/pull/615)) [@charlesbluca](https://github.com/charlesbluca) - Document DASK_JIT_UNSPILL ([#604](https://github.com/rapidsai/dask-cuda/pull/604)) [@madsbk](https://github.com/madsbk) ## 🚀 New Features - Disable reuse endpoints with UCX &gt;= 1.11 ([#620](https://github.com/rapidsai/dask-cuda/pull/620)) [@pentschev](https://github.com/pentschev) ## 🛠️ Improvements - Adding profiling to dask shuffle ([#625](https://github.com/rapidsai/dask-cuda/pull/625)) [@arunraman](https://github.com/arunraman) - Update `CHANGELOG.md` links for calver ([#618](https://github.com/rapidsai/dask-cuda/pull/618)) [@ajschmidt8](https://github.com/ajschmidt8) - Fixing Dataframe merge benchmark ([#617](https://github.com/rapidsai/dask-cuda/pull/617)) [@madsbk](https://github.com/madsbk) - Fix DGX tests for UCX 1.10+ ([#613](https://github.com/rapidsai/dask-cuda/pull/613)) [@pentschev](https://github.com/pentschev) - Update docs build script ([#612](https://github.com/rapidsai/dask-cuda/pull/612)) [@ajschmidt8](https://github.com/ajschmidt8) # dask-cuda 0.19.0 (21 Apr 2021) ## 🐛 Bug Fixes - Pin Dask and Distributed &lt;=2021.04.0 ([#585](https://github.com/rapidsai/dask-cuda/pull/585)) [@pentschev](https://github.com/pentschev) - Unblock CI by xfailing test_dataframe_merge_empty_partitions ([#581](https://github.com/rapidsai/dask-cuda/pull/581)) [@pentschev](https://github.com/pentschev) - Install Dask + Distributed from `main` ([#546](https://github.com/rapidsai/dask-cuda/pull/546)) [@jakirkham](https://github.com/jakirkham) - Replace compute() calls on CuPy benchmarks by persist() ([#537](https://github.com/rapidsai/dask-cuda/pull/537)) 
[@pentschev](https://github.com/pentschev) ## 📖 Documentation - Add standalone examples of UCX usage ([#551](https://github.com/rapidsai/dask-cuda/pull/551)) [@charlesbluca](https://github.com/charlesbluca) - Improve UCX documentation and examples ([#545](https://github.com/rapidsai/dask-cuda/pull/545)) [@charlesbluca](https://github.com/charlesbluca) - Auto-merge branch-0.18 to branch-0.19 ([#538](https://github.com/rapidsai/dask-cuda/pull/538)) [@GPUtester](https://github.com/GPUtester) ## 🚀 New Features - Add option to enable RMM logging ([#542](https://github.com/rapidsai/dask-cuda/pull/542)) [@charlesbluca](https://github.com/charlesbluca) - Add capability to log spilling ([#442](https://github.com/rapidsai/dask-cuda/pull/442)) [@pentschev](https://github.com/pentschev) ## 🛠️ Improvements - Fix UCX examples for InfiniBand ([#556](https://github.com/rapidsai/dask-cuda/pull/556)) [@charlesbluca](https://github.com/charlesbluca) - Fix list to tuple conversion ([#555](https://github.com/rapidsai/dask-cuda/pull/555)) [@madsbk](https://github.com/madsbk) - Add column masking operation for CuPy benchmarking ([#553](https://github.com/rapidsai/dask-cuda/pull/553)) [@jakirkham](https://github.com/jakirkham) - Update Changelog Link ([#550](https://github.com/rapidsai/dask-cuda/pull/550)) [@ajschmidt8](https://github.com/ajschmidt8) - cuDF-style operations &amp; NVTX annotations for local CuPy benchmark ([#548](https://github.com/rapidsai/dask-cuda/pull/548)) [@charlesbluca](https://github.com/charlesbluca) - Prepare Changelog for Automation ([#543](https://github.com/rapidsai/dask-cuda/pull/543)) [@ajschmidt8](https://github.com/ajschmidt8) - Add --enable-rdmacm flag to benchmarks utils ([#539](https://github.com/rapidsai/dask-cuda/pull/539)) [@pentschev](https://github.com/pentschev) - ProxifyHostFile: tracking of external objects ([#527](https://github.com/rapidsai/dask-cuda/pull/527)) [@madsbk](https://github.com/madsbk) - Test broadcast merge in local_cudf_merge 
benchmark ([#507](https://github.com/rapidsai/dask-cuda/pull/507)) [@rjzamora](https://github.com/rjzamora) # dask-cuda 0.18.0 (24 Feb 2021) ## Breaking Changes 🚨 - Explicit-comms house cleaning (#515) @madsbk ## Bug Fixes 🐛 - Fix device synchronization in local_cupy benchmark (#518) @pentschev - Proxify register lazy (#492) @madsbk - Work on deadlock issue 431 (#490) @madsbk - Fix usage of --dashboard-address in dask-cuda-worker (#487) @pentschev - Fail if scheduler starts with &#39;-&#39; in dask-cuda-worker (#485) @pentschev ## Documentation 📖 - Add device synchonization for local CuPy benchmarks with Dask profiling (#533) @charlesbluca ## New Features 🚀 - Shuffle benchmark (#496) @madsbk ## Improvements 🛠️ - Update stale GHA with exemptions &amp; new labels (#531) @mike-wendt - Add GHA to mark issues/prs as stale/rotten (#528) @Ethyling - Add operations/arguments to local CuPy array benchmark (#524) @charlesbluca - Explicit-comms house cleaning (#515) @madsbk - Fixing fixed-attribute-proxy-object-test (#511) @madsbk - Prepare Changelog for Automation (#509) @ajschmidt8 - remove conditional check to start conda uploads (#504) @jolorunyomi - ProxyObject: ignore initial fixed attribute errors (#503) @madsbk - JIT-unspill: fix potential deadlock (#501) @madsbk - Hostfile: register the removal of an existing key (#500) @madsbk - proxy_object: cleanup type dispatching (#497) @madsbk - Redesign and implementation of dataframe shuffle (#494) @madsbk - Add --threads-per-worker option to benchmarks (#489) @pentschev - Extend CuPy benchmark with more operations (#488) @pentschev - Auto-label PRs based on their content (#480) @jolorunyomi - CI: cleanup style check (#477) @madsbk - Individual CUDA object spilling (#451) @madsbk - FIX Move codecov upload to gpu build script (#450) @dillon-cullinan - Add support for connecting a CUDAWorker to a cluster object (#428) @jacobtomlinson # 0.17.0 - Fix benchmark output when scheduler address is specified (#414) @quasiben - Fix typo 
in benchmark utils (#416) @quasiben - More RMM options in benchmarks (#419) @quasiben - Add utility function to establish all-to-all connectivity upon request (#420) @quasiben - Filter `rmm_pool_size` warnings in benchmarks (#422) @pentschev - Add functionality to plot cuDF benchmarks (#423) @quasiben - Decrease data size to shorten spilling tests time (#422) @pentschev - Temporarily xfail explicit-comms tests (#432) @pentschev - Add codecov.yml and ignore uncovered files (#433) @pentschev - Do not skip DGX/TCP tests when ucp is not installed (#436) @pentschev - Support UUID in CUDA_VISIBLE_DEVICES (#437) @pentschev - Unify `device_memory_limit` parsing and set default to 0.8 (#439) @pentschev - Update and clean gpuCI scripts (#440) @msadang - Add notes on controlling number of workers to docs (#441) @quasiben - Add CPU support to CuPy transpose sum benchmark (#444) @pentschev - Update builddocs dependency requirements (#447) @quasiben - Fix versioneer (#448) @jakirkham - Cleanup conda recipe (#449) @jakirkham - Fix `pip install` issues with new resolver (#454) @jakirkham - Make threads per worker consistent (#456) @pentschev - Support for ProxyObject binary operations (#458) @madsbk - Support for ProxyObject pickling (#459) @madsbk - Clarify RMM pool is a per-worker attribute on docs (#462) @pentschev - Fix typo on specializations docs (#463) @vfdev-5 # 0.16.0 - Parse pool size only when set (#396) @quasiben - Improve CUDAWorker scheduler-address parsing and __init__ (#397) @necaris - Add benchmark for `da.map_overlap` (#399) @jakirkham - Explicit-comms: dataframe shuffle (#401) @madsbk - Use new NVTX module (#406) @pentschev - Run Dask's NVML tests (#408) @quasiben - Skip tests that require cuDF/UCX-Py, when not installed (#411) @pentschev # 0.15.0 - Fix-up versioneer (#305) @jakirkham - Require Distributed 2.15.0+ (#306) @jakirkham - Rely on Dask's ability to serialize collections (#307) @jakirkham - Ensure CI installs GPU build of UCX (#308) @pentschev - Skip 
2nd serialization pass of `DeviceSerialized` (#309) @jakirkham - Fix tests related to latest RMM changes (#310) @pentschev - Fix dask-cuda-worker's interface argument (#314) @pentschev - Check only for memory type during test_get_device_total_memory (#315) @pentschev - Fix and improve DGX tests (#316) @pentschev - Install dependencies via meta package (#317) @raydouglass - Fix errors when TLS files are not specified (#320) @pentschev - Refactor dask-cuda-worker into CUDAWorker class (#324) @jacobtomlinson - Add missing __init__.py to dask_cuda/cli (#327) @pentschev - Add Dask distributed GPU tests to CI (#329) @quasiben - Fix rmm_pool_size argument name in docstrings (#329) @quasiben - Add CPU support to benchmarks (#338) @quasiben - Fix isort configuration (#339) @madsbk - Explicit-comms: cleanup and bug fix (#340) @madsbk - Add support for RMM managed memory (#343) @pentschev - Update docker image in local build script (#345) @sean-frye - Support pickle protocol 5 based spilling (#349) @jakirkham - Use get_n_gpus for RMM test with dask-cuda-worker (#356) @pentschev - Update RMM tests based on deprecated CNMeM (#359) @jakirkham - Fix a black error in explicit comms (#360) @jakirkham - Fix an `isort` error (#360) @jakirkham - Set `RMM_NO_INITIALIZE` environment variable (#363) @quasiben - Fix bash lines in docs (#369) @quasiben - Replace `RMM_NO_INITIALIZE` with `RAPIDS_NO_INITIALIZE` (#371) @jakirkham - Fixes for docs and RTD updates (#373) @quasiben - Confirm DGX tests are running baremetal (#376) @pentschev - Set RAPIDS_NO_INITIALIZE at the top of CUDAWorker/LocalCUDACluster (#379) @pentschev - Change pytest's basetemp in CI build script (#380) @pentschev - Pin Numba version to exclude 0.51.0 (#385) @quasiben # 0.14.0 - Publish branch-0.14 to conda (#262) @trxcllnt - Fix behavior for `memory_limit=0` (#269) @pentschev - Raise serialization errors when spilling (#272) @jakirkham - Fix dask-cuda-worker memory_limit (#279) @pentschev - Add NVTX annotations for 
spilling (#282) @pentschev - Skip existing on conda uploads (#284) @raydouglass - Local gpuCI build script (#285) @efajardo-nv - Remove deprecated DGX class (#286) @pentschev - Add RDMACM support (#287) @pentschev - Read the Docs Setup (#290) @quasiben - Raise ValueError when ucx_net_devices="auto" and IB is disabled (#291) @pentschev - Multi-node benchmarks (#293) @pentschev - Add docs for UCX (#294) @pentschev - Add `--runs` argument to CuPy benchmark (#295) @pentschev - Fixing LocalCUDACluster example. Adding README links to docs (#297) @randerzander - Add `nfinal` argument to shuffle_group, required in Dask >= 2.17 (#299) @pentschev - Initialize parent process' UCX configuration (#301) @pentschev - Add Read the Docs link (#302) @jakirkham # 0.13.0 - Use RMM's `DeviceBuffer` directly (#235) @jakirkham - Add RMM pool support from dask-cuda-worker/LocalCUDACluster (#236) @pentschev - Restrict CuPy to <7.2 (#239) @quasiben - Fix UCX configurations (#246) @pentschev - Respect `temporary-directory` config for spilling (#247) @jakirkham - Relax CuPy pin (#248) @jakirkham - Added `ignore_index` argument to `partition_by_hash()` (#253) @madsbk - Use `"dask"` serialization to move to/from host (#256) @jakirkham - Drop Numba `DeviceNDArray` code for `sizeof` (#257) @jakirkham - Support spilling of device objects in dictionaries (#260) @madsbk # 0.12.0 - Add ucx-py dependency to CI (#212) @raydouglass - Follow-up revision of local_cudf_merge benchmark (#213) @rjzamora - Add codeowners file (#217) @raydouglass - Add pypi upload script (#218) @raydouglass - Skip existing on PyPi uploads (#219) @raydouglass - Make benchmarks use rmm_cupy_allocator (#220) @madsbk - cudf-merge-benchmark now reports throughput (#222) @madsbk - Fix dask-cuda-worker --interface/--net-devices docs (#223) @pentschev - Use RMM for serialization when available (#227) @pentschev # 0.11.0 - Use UCX-Py initialization API (#152) @pentschev - Remove all CUDA labels (#160) @mike-wendt - Setting UCX options 
through dask global config (#168) @madsbk - Make test_cudf_device_spill xfail (#170) @pentschev - Updated CI, cleanup tests and reformat Python files (#171) @madsbk - Fix GPU dependency versions (#173) @dillon-cullinan - Set LocalCUDACluster n_workers equal to the length of CUDA_VISIBLE_DEVICES (#174) @mrocklin - Simplify dask-cuda code (#175) @madsbk - DGX inherit from LocalCUDACluster (#177) @madsbk - Single-node CUDA benchmarks (#179) @madsbk - Set TCP for UCX tests (#180) @pentschev - Single-node cuDF merge benchmarks (#183) @madsbk - Add black and isort checks in CI (#185) @pentschev - Remove outdated xfail/importorskip test entries (#188) @pentschev - Use UCX-Py's TopologicalDistance to determine IB interfaces in DGX (#189) @pentschev - Dask Performance Report (#192) @madsbk - Allow test_cupy_device_spill to xfail (#195) @pentschev - Use ucx-py from rapidsai-nightly in CI (#196) @pentschev - LocalCUDACluster sets closest network device (#200) @madsbk - Expand cudf-merge benchmark (#201) @rjzamora - Added --runs to merge benchmark (#202) @madsbk - Move UCX code to LocalCUDACluster and deprecate DGX (#205) @pentschev - Add markdown output option to cuDF merge benchmark (#208) @quasiben # 0.10.0 - Change the updated new_worker_spec API for upstream (#128) @mrocklin - Update TOTAL_MEMORY to match new distributed MEMORY_LIMIT (#131) @pentschev - Bum Dask requirement to 2.4 (#133) @mrocklin - Use YYMMDD tag in nightly build (#134) @mluukkainen - Automatically determine CPU affinity (#138) @pentschev - Fix full memory use check testcase (#139) @ksangeek - Use pynvml to get memory info without creating CUDA context (#140) @pentschev - Pass missing local_directory to Nanny from dask-cuda-worker (#141) @pentschev - New worker_spec function for worker recipes (#147) @pentschev - Add new Scheduler class supporting environment variables (#149) @pentschev - Support for TCP over UCX (#152) @pentschev # 0.9.0 - Fix serialization of collections and bump dask to 2.3.0 (#118) 
@pentschev - Add versioneer (#88) @matthieubulte - Python CodeCov Integration (#91) @dillon-cullinan - Update cudf, dask, dask-cudf, distributed version requirements (#97) @pentschev - Improve device memory spilling performance (#98) @pentschev - Update dask-cuda for dask 2.2 (#101) @mrocklin - Streamline CUDA_REL environment variable (#102) @okoskinen - Replace ncores= parameter with nthreads= (#101) @mrocklin - Fix remove CodeCov upload from build script (#115) @dillon-cullinan - Remove CodeCov upload (#116) @dillon-cullinan # 0.8.0 - Add device memory spill support (LRU-based only) (#51) @pentschev - Update CI dependency to CuPy 6.0.0 (#53) @pentschev - Add a hard-coded DGX configuration (#46) (#70) @mrocklin - Fix LocalCUDACluster data spilling and its test (#67) @pentschev - Add test skipping functionality to build.sh (#71) @dillon-cullinan - Replace use of ncores= keywords with nthreads= (#75) @mrocklin - Fix device memory spilling with cuDF (#65) @pentschev - LocalCUDACluster calls _correct_state() to ensure workers started (#78) @pentschev - Delay some of spilling test assertions (#80) @pentschev
0
rapidsai_public_repos
rapidsai_public_repos/dask-cuda/codecov.yml
#Configuration File for CodeCov ignore: - "dask_cuda/benchmarks/*" # benchmarks aren't covered
0
rapidsai_public_repos
rapidsai_public_repos/dask-cuda/MANIFEST.in
include dask_cuda/_version.py include dask_cuda/VERSION
0
rapidsai_public_repos
rapidsai_public_repos/dask-cuda/dependencies.yaml
# Dependency list for https://github.com/rapidsai/dependency-file-generator files: all: output: none includes: - build_python - cudatoolkit - develop - docs - py_version - run_python - test_python test_python: output: none includes: - cudatoolkit - py_version - test_python checks: output: none includes: - develop - py_version docs: output: none includes: - cudatoolkit - docs - py_version py_build: output: pyproject pyproject_dir: . extras: table: build-system includes: - build_python py_run: output: pyproject pyproject_dir: . extras: table: project includes: - run_python py_test: output: pyproject pyproject_dir: . extras: table: project.optional-dependencies key: test includes: - test_python py_docs: output: pyproject pyproject_dir: . extras: table: project.optional-dependencies key: docs includes: - docs channels: - rapidsai - rapidsai-nightly - dask/label/dev - conda-forge - nvidia dependencies: build_python: common: - output_types: [conda, requirements, pyproject] packages: - setuptools>=64.0.0 - output_types: pyproject packages: - tomli ; python_version < '3.11' cudatoolkit: specific: - output_types: conda matrices: - matrix: cuda: "11.2" packages: - cuda-version=11.2 - cudatoolkit - matrix: cuda: "11.4" packages: - cuda-version=11.4 - cudatoolkit - matrix: cuda: "11.5" packages: - cuda-version=11.5 - cudatoolkit - matrix: cuda: "11.8" packages: - cuda-version=11.8 - cudatoolkit - matrix: cuda: "12.0" packages: - cuda-version=12.0 - cuda-nvcc-impl - cuda-nvrtc develop: common: - output_types: [conda, requirements] packages: - pre-commit docs: common: - output_types: [conda, requirements, pyproject] packages: - numpydoc>=1.1.0 - sphinx - sphinx-click>=2.7.1 - sphinx-rtd-theme>=0.5.1 py_version: specific: - output_types: conda matrices: - matrix: py: "3.9" packages: - python=3.9 - matrix: py: "3.10" packages: - python=3.10 - matrix: packages: - python>=3.9,<3.11 run_python: common: - output_types: [conda, requirements, pyproject] packages: - click >=8.1 - 
numba>=0.57 - numpy>=1.21 - pandas>=1.3,<1.6.0.dev0 - pynvml>=11.0.0,<11.5 - rapids-dask-dependency==24.2.* - zict>=2.0.0 test_python: common: - output_types: [conda, requirements, pyproject] packages: - cudf==24.2.* - dask-cudf==24.2.* - kvikio==24.2.* - pytest - pytest-cov - ucx-py==0.36.* - output_types: [conda] packages: - distributed-ucxx==0.36.* - ucx-proc=*=gpu - ucxx==0.36.* specific: - output_types: conda matrices: - matrix: arch: x86_64 packages: - numactl-devel-cos7-x86_64 - matrix: arch: aarch64 packages: - numactl-devel-cos7-aarch64
0
rapidsai_public_repos
rapidsai_public_repos/dask-cuda/setup.py
from setuptools import setup setup()
0
rapidsai_public_repos
rapidsai_public_repos/dask-cuda/LICENSE
Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. 
For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the 
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. 
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "{}" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright 2019 NVIDIA Corporation Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
0
rapidsai_public_repos
rapidsai_public_repos/dask-cuda/VERSION
24.02.00
0
rapidsai_public_repos/dask-cuda/conda/recipes
rapidsai_public_repos/dask-cuda/conda/recipes/dask-cuda/meta.yaml
# Copyright (c) 2019-2023, NVIDIA CORPORATION. # Usage: # conda build -c conda-forge . {% set data = load_file_data("pyproject.toml") %} {% set version = environ['RAPIDS_PACKAGE_VERSION'].strip('""').lstrip('v') %} {% set py_version = environ['CONDA_PY'] %} {% set date_string = environ['RAPIDS_DATE_STRING'] %} package: name: dask-cuda version: {{ version }} source: path: ../../.. build: number: {{ GIT_DESCRIBE_NUMBER }} string: py{{ py_version }}_{{ date_string }}_{{ GIT_DESCRIBE_HASH }}_{{ GIT_DESCRIBE_NUMBER }} script: - {{ PYTHON }} -m pip install . -vv entry_points: {% for e in data.get("project", {}).get("scripts", {}).items() %} - {{ e|join(" = ") }} {% endfor %} requirements: host: - python - pip - tomli run: - python {% for r in data.get("project", {}).get("dependencies", []) %} - {{ r }} {% endfor %} test: imports: - dask_cuda commands: - dask cuda --help {% for e in data.get("project", {}).get("scripts", {}).keys() %} - {{ e }} --help - {{ e|replace("-", " ") }} --help {% endfor %} about: home: {{ data.get("project", {}).get("urls", {}).get("Homepage", "") }} license: {{ data.get("project", {}).get("license", {}).get("text", "") }} license_file: {% for e in data.get("tool", {}).get("setuptools", {}).get("license-files", []) %} - ../../../{{ e }} {% endfor %} summary: {{ data.get("project", {}).get("description", "") }} dev_url: {{ data.get("project", {}).get("urls", {}).get("Source", "") }} doc_url: {{ data.get("project", {}).get("urls", {}).get("Documentation", "") }}
0
rapidsai_public_repos/dask-cuda
rapidsai_public_repos/dask-cuda/docs/Makefile
# Minimal makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build SOURCEDIR = source BUILDDIR = build # Put it first so that "make" without argument is like "make help". help: @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) .PHONY: help Makefile # Catch-all target: route all unknown targets to Sphinx using the new # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). %: Makefile @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
0
rapidsai_public_repos/dask-cuda/docs
rapidsai_public_repos/dask-cuda/docs/source/quickstart.rst
Quickstart ========== A Dask-CUDA cluster can be created using either LocalCUDACluster or ``dask cuda worker`` from the command line. LocalCUDACluster ---------------- To create a Dask-CUDA cluster using all available GPUs and connect a Dask.distributed `Client <https://distributed.dask.org/en/latest/client.html>`_ to it: .. code-block:: python from dask_cuda import LocalCUDACluster from dask.distributed import Client cluster = LocalCUDACluster() client = Client(cluster) .. tip:: Be sure to include an ``if __name__ == "__main__":`` block when using :py:class:`dask_cuda.LocalCUDACluster` in a standalone Python script. See `standalone Python scripts <https://docs.dask.org/en/stable/scheduling.html#standalone-python-scripts>`_ for more details. ``dask cuda worker`` -------------------- To create an equivalent cluster from the command line, Dask-CUDA workers must be connected to a scheduler started with ``dask scheduler``: .. code-block:: bash $ dask scheduler distributed.scheduler - INFO - Scheduler at: tcp://127.0.0.1:8786 $ dask cuda worker 127.0.0.1:8786 To connect a client to this cluster: .. code-block:: python from dask.distributed import Client client = Client("127.0.0.1:8786")
0
rapidsai_public_repos/dask-cuda/docs
rapidsai_public_repos/dask-cuda/docs/source/install.rst
Installation ============ Dask-CUDA can be installed using ``conda``, ``pip``, or from source. Conda ----- To use Dask-CUDA on your system, you will need: - NVIDIA drivers for your GPU; see `NVIDIA Driver Installation Quickstart Guide <https://docs.nvidia.com/datacenter/tesla/tesla-installation-notes/index.html>`_ for installation instructions - A version of NVIDIA CUDA Toolkit compatible with the installed driver version; see Table 1 of `CUDA Compatibility -- Binary Compatibility <https://docs.nvidia.com/deploy/cuda-compatibility/index.html#binary-compatibility>`_ for an overview of CUDA Toolkit driver requirements Once the proper CUDA Toolkit version has been determined, it can be installed along with Dask-CUDA using ``conda``. To install the latest version of Dask-CUDA along with CUDA Toolkit 12.0: .. code-block:: bash conda install -c rapidsai -c conda-forge -c nvidia dask-cuda cuda-version=12.0 Pip --- When working outside of a Conda environment, CUDA Toolkit can be downloaded and installed from `NVIDIA's website <https://developer.nvidia.com/cuda-toolkit>`_; this package also contains the required NVIDIA drivers. To install the latest version of Dask-CUDA: .. code-block:: bash python -m pip install dask-cuda Source ------ To install Dask-CUDA from source, the source code repository must be cloned from GitHub: .. code-block:: bash git clone https://github.com/rapidsai/dask-cuda.git cd dask-cuda python -m pip install . Other RAPIDS libraries ---------------------- Dask-CUDA is a part of the `RAPIDS <https://rapids.ai/>`_ suite of open-source software libraries for GPU-accelerated data science, and works well in conjunction with them. See `RAPIDS -- Getting Started <https://rapids.ai/start.html>`_ for instructions on how to install these libraries. Keep in mind that these libraries will require: - At least one CUDA-compliant GPU - A system installation of `CUDA <https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html>`_
0
rapidsai_public_repos/dask-cuda/docs
rapidsai_public_repos/dask-cuda/docs/source/troubleshooting.rst
Troubleshooting =============== This is a list of common issues encountered with Dask-CUDA and various systems. Wrong Device Indexing --------------------- It's common to rely on the device indexing presented by ``nvidia-smi`` when creating workers, and that is the default in Dask-CUDA. In most cases, ``nvidia-smi`` provides a one-to-one mapping with ``CUDA_VISIBLE_DEVICES``, but in some systems the ordering may not match. While ``nvidia-smi`` orders GPUs by their PCI Bus ID, ``CUDA_VISIBLE_DEVICES`` by default indexes GPUs fastest first, so the two numberings can differ. Issues are commonly seen on the `DGX Station A100 <https://www.nvidia.com/en-us/data-center/dgx-station-a100/>`_, which contains four A100 GPUs plus a display GPU; the display GPU may not be the last GPU according to the PCI Bus ID. To correct that and ensure the mapping follows the PCI Bus ID, set the ``CUDA_DEVICE_ORDER=PCI_BUS_ID`` environment variable when starting the Python process: .. code-block:: bash $ CUDA_DEVICE_ORDER=PCI_BUS_ID python $ CUDA_DEVICE_ORDER=PCI_BUS_ID ipython $ CUDA_DEVICE_ORDER=PCI_BUS_ID jupyter lab $ CUDA_DEVICE_ORDER=PCI_BUS_ID dask-cuda-worker ... For the DGX Station A100, the display GPU is commonly the fourth in the PCI Bus ID ordering, thus one needs to use GPUs 0, 1, 2 and 4 for Dask-CUDA: .. code-block:: python >>> from dask_cuda import LocalCUDACluster >>> cluster = LocalCUDACluster(CUDA_VISIBLE_DEVICES=[0, 1, 2, 4])
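The same variable can also be set from within Python, as long as it happens before the CUDA runtime is initialized. A minimal sketch (the device list ``0,1,2,4`` mirrors the DGX Station A100 example above; adapt it to your system):

```python
import os

# CUDA reads these variables only once, when the runtime is initialized,
# so set them before importing any CUDA-aware library
# (numba, cupy, cudf, dask_cuda, ...).
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,4"

# Only import CUDA-aware libraries after the variables are in place:
# from dask_cuda import LocalCUDACluster
# cluster = LocalCUDACluster()
```

Setting the variables in the shell, as shown above, remains the most reliable option, since it guarantees no library has touched the CUDA runtime first.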
0
rapidsai_public_repos/dask-cuda/docs
rapidsai_public_repos/dask-cuda/docs/source/spilling.rst
Spilling from device ==================== By default, Dask-CUDA enables spilling from GPU to host memory when a GPU reaches a memory utilization of 80%. This can be changed to suit the needs of a workload, or disabled altogether, by explicitly setting ``device_memory_limit``. This parameter accepts an integer or string memory size, or a float representing a percentage of the GPU's total memory: .. code-block:: python from dask_cuda import LocalCUDACluster cluster = LocalCUDACluster(device_memory_limit=50000) # spilling after 50000 bytes cluster = LocalCUDACluster(device_memory_limit="5GB") # spilling after 5 GB cluster = LocalCUDACluster(device_memory_limit=0.3) # spilling after 30% memory utilization Memory spilling can be disabled by setting ``device_memory_limit`` to 0: .. code-block:: python cluster = LocalCUDACluster(device_memory_limit=0) # spilling disabled The same applies for ``dask cuda worker``, and spilling can be controlled by setting ``--device-memory-limit``: .. code-block:: $ dask scheduler distributed.scheduler - INFO - Scheduler at: tcp://127.0.0.1:8786 $ dask cuda worker --device-memory-limit 50000 $ dask cuda worker --device-memory-limit 5GB $ dask cuda worker --device-memory-limit 0.3 $ dask cuda worker --device-memory-limit 0 JIT-Unspill ----------- The regular spilling in Dask and Dask-CUDA has some significant issues. Instead of tracking individual objects, it tracks task outputs. This means that a task returning a collection of CUDA objects will either spill all of the CUDA objects or none of them. Other issues include *object duplication*, *wrong spilling order*, and *non-tracking of shared device buffers* (see: https://github.com/dask/distributed/issues/4568#issuecomment-805049321). In order to address all of these issues, Dask-CUDA introduces JIT-Unspilling, which can improve performance and memory usage significantly.
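The three accepted forms of ``device_memory_limit`` shown above can be sketched in plain Python. The helper below is illustrative only, not Dask-CUDA's actual parsing code; it merely mirrors the documented semantics (int bytes, size string, float fraction):

```python
def to_byte_limit(limit, total_device_memory):
    """Illustrative sketch: map a device_memory_limit of int, str, or
    float form to an absolute byte threshold for a given GPU."""
    units = {"B": 1, "KB": 1000, "MB": 1000**2, "GB": 1000**3, "TB": 1000**4}
    if isinstance(limit, float):
        # A float is interpreted as a fraction of total device memory.
        return int(total_device_memory * limit)
    if isinstance(limit, str):
        # A string such as "5GB" is a human-readable size.
        for suffix in sorted(units, key=len, reverse=True):
            if limit.upper().endswith(suffix):
                return int(float(limit[: -len(suffix)]) * units[suffix])
    # An int is an absolute number of bytes; 0 disables spilling.
    return int(limit)

# On a hypothetical 16 GB GPU:
print(to_byte_limit(50000, 16_000_000_000))  # 50000
print(to_byte_limit("5GB", 16_000_000_000))  # 5000000000
print(to_byte_limit(0.3, 16_000_000_000))    # 4800000000
```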
For workloads that require significant spilling (such as large joins on infrastructure with less available memory than data) we have often seen greater than 50% improvement (i.e., something taking 300 seconds might take only 110 seconds). For workloads that do not, we would not expect to see much difference. In order to enable JIT-Unspilling, use the ``jit_unspill`` argument: .. code-block:: >>> import dask >>> from distributed import Client >>> from dask_cuda import LocalCUDACluster >>> cluster = LocalCUDACluster(n_workers=10, device_memory_limit="1GB", jit_unspill=True) >>> client = Client(cluster) >>> with dask.config.set(jit_unspill=True): ... cluster = LocalCUDACluster(n_workers=10, device_memory_limit="1GB") ... client = Client(cluster) Or set the worker argument ``--enable-jit-unspill`` .. code-block:: $ dask scheduler distributed.scheduler - INFO - Scheduler at: tcp://127.0.0.1:8786 $ dask cuda worker --enable-jit-unspill Or environment variable ``DASK_JIT_UNSPILL=True`` .. code-block:: $ dask scheduler distributed.scheduler - INFO - Scheduler at: tcp://127.0.0.1:8786 $ DASK_JIT_UNSPILL=True dask cuda worker Limitations ~~~~~~~~~~~ JIT-Unspill wraps CUDA objects, such as ``cudf.Dataframe``, in a ``ProxyObject``. Objects proxied by an instance of ``ProxyObject`` will be JIT-deserialized when accessed. The instance behaves as the proxied object and can be accessed/used just like the proxied object. ProxyObject has some limitations and doesn't mimic the proxied object perfectly. Most noticeably, type checking using ``isinstance()`` works as expected but direct type checking doesn't: ..
code-block:: python >>> import numpy as np >>> from dask_cuda.proxy_object import asproxy >>> x = np.arange(3) >>> isinstance(asproxy(x), type(x)) True >>> type(asproxy(x)) is type(x) False Thus, if you encounter problems, remember that it is always possible to use ``unproxy()`` to access the proxied object directly, or set ``DASK_JIT_UNSPILL_COMPATIBILITY_MODE=True`` to enable compatibility mode, which automatically calls ``unproxy()`` on all function inputs.
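This ``isinstance()``-versus-``type()`` asymmetry is not unique to Dask-CUDA: any proxy that overrides ``__class__`` behaves this way, because ``isinstance()`` consults ``__class__`` while ``type()`` reports the real type. A toy pure-Python sketch of the mechanism (this is not Dask-CUDA's actual ``ProxyObject`` implementation):

```python
class LazyProxy:
    """Toy proxy: reports the wrapped object's class for isinstance()
    checks while remaining a distinct type."""

    def __init__(self, wrapped):
        object.__setattr__(self, "_wrapped", wrapped)

    @property
    def __class__(self):
        # isinstance() falls back to __class__, so the proxy passes the check.
        return type(object.__getattribute__(self, "_wrapped"))

    def __getattr__(self, name):
        # Forward attribute access to the proxied object.
        return getattr(object.__getattribute__(self, "_wrapped"), name)


x = [1, 2, 3]
p = LazyProxy(x)
print(isinstance(p, list))  # True  -- __class__ is consulted
print(type(p) is list)      # False -- type() sees the real proxy type
print(p.count(2))           # 1     -- attribute access is forwarded
```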
0