rapidsai_public_repos/dask-cuda/docs/source/ucx.rst
UCX Integration
===============

Communication can be a major bottleneck in distributed systems. Dask-CUDA addresses this by supporting integration with `UCX <https://www.openucx.org/>`_, an optimized communication framework that provides high-performance networking and supports a variety of transport methods, including `NVLink <https://www.nvidia.com/en-us/data-center/nvlink/>`_ and `InfiniBand <https://www.mellanox.com/pdf/whitepapers/IB_Intro_WP_190.pdf>`_ for systems with specialized hardware, and TCP for systems without it. This integration is enabled through `UCX-Py <https://ucx-py.readthedocs.io/>`_, an interface that provides Python bindings for UCX.

Hardware requirements
---------------------

To use UCX with NVLink or InfiniBand, the relevant GPUs must be connected with NVLink bridges or NVIDIA Mellanox InfiniBand adapters, respectively. NVIDIA provides comparison charts for both `NVLink bridges <https://www.nvidia.com/en-us/design-visualization/nvlink-bridges/>`_ and `InfiniBand adapters <https://www.nvidia.com/en-us/networking/infiniband-adapters/>`_.

Software requirements
---------------------

UCX integration requires an environment with both UCX and UCX-Py installed; see `UCX-Py Installation <https://ucx-py.readthedocs.io/en/latest/install.html>`_ for detailed instructions on this process.

When using UCX, each NVLink and InfiniBand memory buffer must create a mapping between each unique pair of processes it is transferred across; this can be quite costly, potentially in the range of hundreds of milliseconds per mapping. For this reason, it is strongly recommended to use `RAPIDS Memory Manager (RMM) <https://github.com/rapidsai/rmm>`_ to allocate a memory pool that requires only a single mapping operation, which all subsequent transfers may rely upon. A memory pool also prevents the Dask scheduler from deserializing CUDA data, which would otherwise cause a crash.

.. warning::
    Dask-CUDA must create worker CUDA contexts during cluster initialization, and properly ordering that task is critical for correct UCX configuration. If a CUDA context already exists for this process at the time of cluster initialization, unexpected behavior can occur. To avoid this, it is advised to initialize any UCX-enabled clusters before performing operations that would result in a CUDA context being created. Depending on the library, even an import can force CUDA context creation.

    For some RAPIDS libraries (e.g. cuDF), setting ``RAPIDS_NO_INITIALIZE=1`` at runtime will delay or disable their CUDA context creation, allowing for improved compatibility with UCX-enabled clusters and preventing runtime warnings.

Configuration
-------------

Automatic
~~~~~~~~~

Beginning with Dask-CUDA 22.02, and assuming UCX >= 1.11.1, specifying UCX transports is optional. A local cluster can be started with ``LocalCUDACluster(protocol="ucx")``, implying automatic UCX transport selection (``UCX_TLS=all``). Starting the cluster components separately -- scheduler, workers, and client as different processes -- is also possible: if the scheduler is created with ``dask scheduler --protocol="ucx"``, connecting a ``dask cuda worker`` to it will likewise imply automatic UCX transport selection, but this requires the Dask scheduler and client to be started with ``DASK_DISTRIBUTED__COMM__UCX__CREATE_CUDA_CONTEXT=True``. See `Enabling UCX communication <examples/ucx.html>`_ for more detailed examples of UCX usage with automatic configuration.

Configuring transports manually is still possible; please refer to the subsection below.

Manual
~~~~~~

In addition to installing UCX and UCX-Py on your system, manual configuration requires several options to be specified within your Dask configuration to enable the integration.
Typically, these will affect ``UCX_TLS`` and ``UCX_SOCKADDR_TLS_PRIORITY``, environment variables used by UCX to decide what transport methods to use and which to prioritize, respectively. However, some will affect related libraries, such as RMM:

- ``distributed.comm.ucx.cuda_copy: true`` -- **required.** Adds ``cuda_copy`` to ``UCX_TLS``, enabling CUDA transfers over UCX.
- ``distributed.comm.ucx.tcp: true`` -- **required.** Adds ``tcp`` to ``UCX_TLS``, enabling TCP transfers over UCX; this is required for very small transfers, which are inefficient over NVLink and InfiniBand.
- ``distributed.comm.ucx.nvlink: true`` -- **required for NVLink.** Adds ``cuda_ipc`` to ``UCX_TLS``, enabling NVLink transfers over UCX; affects intra-node communication only.
- ``distributed.comm.ucx.infiniband: true`` -- **required for InfiniBand.** Adds ``rc`` to ``UCX_TLS``, enabling InfiniBand transfers over UCX.

  For optimal performance with UCX 1.11 and above, it is also recommended to set the environment variables ``UCX_MAX_RNDV_RAILS=1`` and ``UCX_MEMTYPE_REG_WHOLE_ALLOC_TYPES=cuda``; see the documentation `here <https://ucx-py.readthedocs.io/en/latest/configuration.html#ucx-max-rndv-rails>`_ and `here <https://ucx-py.readthedocs.io/en/latest/configuration.html#ucx-memtype-reg-whole-alloc-types>`_ for more details on those variables.
- ``distributed.comm.ucx.rdmacm: true`` -- **recommended for InfiniBand.** Replaces ``sockcm`` with ``rdmacm`` in ``UCX_SOCKADDR_TLS_PRIORITY``, enabling remote direct memory access (RDMA) for InfiniBand transfers. This is recommended by UCX for use with InfiniBand, and will not work if InfiniBand is disabled.
- ``distributed.rmm.pool-size: <str|int>`` -- **recommended.** Allocates an RMM pool of the specified size for the process; the size can be provided as an integer number of bytes or in human-readable format, e.g. ``"4GB"``.
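These option keys can also be applied programmatically, or translated to the environment-variable form used elsewhere in this documentation (e.g. ``DASK_DISTRIBUTED__RMM__POOL_SIZE``). The sketch below is illustrative; the ``to_env_var`` helper is hypothetical, but the translation rule it implements (prefix ``DASK_``, upper-case, ``-`` to ``_``, ``.`` to ``__``) follows Dask's documented configuration convention:

```python
# Manual UCX options from the list above, expressed as a Dask config mapping;
# these could be passed to dask.config.set(...) in a Dask installation.
ucx_config = {
    "distributed.comm.ucx.cuda_copy": True,   # required
    "distributed.comm.ucx.tcp": True,         # required
    "distributed.comm.ucx.nvlink": True,      # required for NVLink
    "distributed.comm.ucx.infiniband": True,  # required for InfiniBand
    "distributed.comm.ucx.rdmacm": True,      # recommended for InfiniBand
    "distributed.rmm.pool-size": "4GB",       # recommended
}


def to_env_var(key: str) -> str:
    """Translate a Dask config key to its environment-variable form:
    upper-case it, replace '-' with '_' and '.' with '__', prefix 'DASK_'."""
    return "DASK_" + key.upper().replace("-", "_").replace(".", "__")


for key, value in ucx_config.items():
    print(f"{to_env_var(key)}={value}")
```

For example, ``distributed.rmm.pool-size`` maps to ``DASK_DISTRIBUTED__RMM__POOL_SIZE``, matching the environment variables shown in the manual ``dask scheduler`` examples.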
It is recommended to set the pool size to at least the minimum amount of memory used by the process; if possible, one can map all GPU memory to a single pool, to be utilized for the lifetime of the process.

.. note::
    These options can be used with mainline Dask.distributed. However, some features are exclusive to Dask-CUDA, such as the automatic detection of InfiniBand interfaces. See `Dask-CUDA -- Motivation <index.html#motivation>`_ for more details on the benefits of using Dask-CUDA.

Usage
-----

See `Enabling UCX communication <examples/ucx.html>`_ for examples of UCX usage with different supported transports.

Running in a fork-starved environment
-------------------------------------

Many high-performance networking stacks do not support the user application calling ``fork()`` after the network substrate is initialized. Symptoms include jobs randomly hanging or crashing, especially when using a large number of workers. To mitigate this when using Dask-CUDA's UCX integration, processes launched via multiprocessing should use the `"forkserver" <https://docs.python.org/dev/library/multiprocessing.html#contexts-and-start-methods>`_ start method. When launching workers using `dask cuda worker <quickstart.html#dask-cuda-worker>`_, this can be achieved by passing ``--multiprocessing-method forkserver`` as an argument. In user code, the method can be controlled with the ``distributed.worker.multiprocessing-method`` configuration key in ``dask``. In addition, one must take care to manually ensure that the forkserver is running before launching any jobs. A run script should therefore do something like the following:

.. code-block:: python

    import dask

    if __name__ == "__main__":
        import multiprocessing.forkserver as f

        f.ensure_running()
        with dask.config.set(
            {"distributed.worker.multiprocessing-method": "forkserver"}
        ):
            run_analysis(...)

.. note::
    In addition to this, at present one must also set ``PTXCOMPILER_CHECK_NUMBA_CODEGEN_PATCH_NEEDED=0`` in the environment to avoid a subprocess call from `ptxcompiler <https://github.com/rapidsai/ptxcompiler>`_.

.. note::
    To confirm that no bad fork calls are occurring, start jobs with ``UCX_IB_FORK_INIT=n``. UCX will produce a warning ``UCX WARN IB: ibv_fork_init() was disabled or failed, yet a fork() has been issued.`` if the application calls ``fork()``.
rapidsai_public_repos/dask-cuda/docs/source/explicit_comms.rst
Explicit-comms
==============

Communication and scheduling overhead can be a major bottleneck in Dask/Distributed. Dask-CUDA addresses this by introducing an API for explicit communication in Dask tasks. The idea is that Dask/Distributed spawns workers and distributes data as usual, while the user can submit tasks on the workers that communicate explicitly.

This makes it possible to bypass Distributed's scheduler and write hand-tuned computation and communication patterns. Currently, Dask-CUDA includes an explicit-comms implementation of the DataFrame `shuffle <https://github.com/rapidsai/dask-cuda/blob/d3c723e2c556dfe18b47b392d0615624453406a5/dask_cuda/explicit_comms/dataframe/shuffle.py#L210>`_ operation used for merging and sorting.

Usage
-----

In order to use explicit-comms in Dask/Distributed automatically, simply define the environment variable ``DASK_EXPLICIT_COMMS=True`` or set the ``"explicit-comms"`` key in the `Dask configuration <https://docs.dask.org/en/latest/configuration.html>`_.

It is also possible to use explicit-comms in tasks manually; see the `API <api.html#explicit-comms>`_ and our `implementation of shuffle <https://github.com/rapidsai/dask-cuda/blob/branch-0.20/dask_cuda/explicit_comms/dataframe/shuffle.py>`_ for guidance.
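The two equivalent ways of enabling explicit-comms described above can be sketched as follows. The ``try``/``except`` guard is only so the snippet runs whether or not ``dask`` is installed; with Dask present, ``dask.config.set`` is the standard API for the second option:

```python
import os

# Option 1: environment variable, picked up by Dask's config system at startup.
os.environ["DASK_EXPLICIT_COMMS"] = "True"

# Option 2 (equivalent): set the "explicit-comms" key directly in the Dask
# configuration from Python.
try:
    import dask

    dask.config.set({"explicit-comms": True})
    enabled = dask.config.get("explicit-comms")
except ImportError:
    enabled = os.environ["DASK_EXPLICIT_COMMS"] == "True"

print(enabled)  # True
```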
rapidsai_public_repos/dask-cuda/docs/source/conf.py
# -*- coding: utf-8 -*-
#
# Configuration file for the Sphinx documentation builder.
#
# This file does only contain a selection of the most common options. For a
# full list see the documentation:
# http://www.sphinx-doc.org/en/master/config

# -- Path setup --------------------------------------------------------------

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))

import datetime

# -- Project information -----------------------------------------------------

project = "dask-cuda"
copyright = "2020-%s, NVIDIA" % datetime.datetime.now().year
author = "NVIDIA"

# The full version, including alpha/beta/rc tags.
from dask_cuda import __version__ as release  # noqa: E402

# The short X.Y version.
version = ".".join(release.split(".")[:2])

# -- General configuration ---------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    "sphinx.ext.autodoc",
    "sphinx.ext.mathjax",
    "sphinx.ext.viewcode",
    "sphinx.ext.githubpages",
    "sphinx.ext.autosummary",
    "sphinx.ext.intersphinx",
    "sphinx.ext.extlinks",
    "numpydoc",
    "sphinx_click",
    "sphinx_rtd_theme",
]

numpydoc_show_class_members = False

# Add any paths that contain templates here, relative to this directory.
templates_path = ["_templates"]

# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = ".rst"

# The master toctree document.
master_doc = "index"

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = "en"

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = []

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = None

# -- Options for HTML output -------------------------------------------------

# The theme to use for HTML and HTML Help pages.  See the documentation for
# a list of builtin themes.
#
html_theme = "sphinx_rtd_theme"

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ["_static"]

# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
#
# The default sidebars (for documents that don't match any pattern) are
# defined by theme itself.  Builtin themes are using these templates by
# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
# 'searchbox.html']``.
#
# html_sidebars = {}

# -- Options for HTMLHelp output ---------------------------------------------

# Output file base name for HTML help builder.
htmlhelp_basename = "dask-cudadoc"

# -- Options for LaTeX output ------------------------------------------------

latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #
    # 'papersize': 'letterpaper',
    # The font size ('10pt', '11pt' or '12pt').
    #
    # 'pointsize': '10pt',
    # Additional stuff for the LaTeX preamble.
    #
    # 'preamble': '',
    # Latex figure (float) alignment
    #
    # 'figure_align': 'htbp',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
#  author, documentclass [howto, manual, or own class]).
latex_documents = [
    (master_doc, "dask-cuda.tex", "dask-cuda Documentation", "NVIDIA", "manual")
]

# -- Options for manual page output ------------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [(master_doc, "dask-cuda", "dask-cuda Documentation", [author], 1)]

# -- Options for Texinfo output ----------------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
#  dir menu entry, description, category)
texinfo_documents = [
    (
        master_doc,
        "dask-cuda",
        "dask-cuda Documentation",
        author,
        "dask-cuda",
        "One line description of project.",
        "Miscellaneous",
    )
]

# -- Options for Epub output -------------------------------------------------

# Bibliographic Dublin Core info.
epub_title = project

# The unique identifier of the text. This can be a ISBN number
# or the project homepage.
#
# epub_identifier = ''

# A unique identification for the text.
#
# epub_uid = ''

# A list of files that should not be packed into the epub file.
epub_exclude_files = ["search.html"]

# -- Extension configuration -------------------------------------------------


def setup(app):
    app.add_css_file("https://docs.rapids.ai/assets/css/custom.css")
    app.add_js_file(
        "https://docs.rapids.ai/assets/js/custom.js", loading_method="defer"
    )
rapidsai_public_repos/dask-cuda/docs/source/api.rst
API
===

Cluster
-------

.. currentmodule:: dask_cuda

.. autoclass:: LocalCUDACluster
    :members:

CLI
---

Worker
~~~~~~

.. click:: dask_cuda.cli:worker
    :prog: dask cuda
    :nested: none

Cluster configuration
~~~~~~~~~~~~~~~~~~~~~

.. click:: dask_cuda.cli:config
    :prog: dask cuda
    :nested: none

Client initialization
---------------------

.. currentmodule:: dask_cuda.initialize

.. autofunction:: initialize

Explicit-comms
--------------

.. currentmodule:: dask_cuda.explicit_comms.comms

.. autoclass:: CommsContext
    :members:
rapidsai_public_repos/dask-cuda/docs/source/index.rst
Dask-CUDA
=========

Dask-CUDA is a library extending `Dask.distributed <https://distributed.dask.org/en/latest/>`_'s single-machine `LocalCluster <https://docs.dask.org/en/latest/setup/single-distributed.html#localcluster>`_ and `Worker <https://distributed.dask.org/en/latest/worker.html>`_ for use in distributed GPU workloads. It is a part of the `RAPIDS <https://rapids.ai/>`_ suite of open-source software libraries for GPU-accelerated data science.

Motivation
----------

While Distributed can be used to leverage GPU workloads through libraries such as `cuDF <https://docs.rapids.ai/api/cudf/stable/>`_, `CuPy <https://cupy.dev/>`_, and `Numba <https://numba.pydata.org/>`_, Dask-CUDA offers several unique features unavailable to Distributed:

- **Automatic instantiation of per-GPU workers** -- Using Dask-CUDA's LocalCUDACluster or ``dask cuda worker`` CLI will automatically launch one worker for each GPU available on the executing node, avoiding the need to explicitly select GPUs.
- **Automatic setting of CPU affinity** -- The setting of CPU affinity for each GPU is done automatically, preventing memory transfers from taking suboptimal paths.
- **Automatic selection of InfiniBand devices** -- When UCX communication is enabled over InfiniBand, Dask-CUDA automatically selects the optimal InfiniBand device for each GPU (see `UCX Integration <ucx.html>`_ for instructions on configuring UCX communication).
- **Memory spilling from GPU** -- For memory-intensive workloads, Dask-CUDA supports spilling from GPU to host memory when a GPU reaches the default or user-specified memory utilization limit.
- **Allocation of GPU memory** -- When using UCX communication, per-GPU memory pools can be allocated using `RAPIDS Memory Manager <https://github.com/rapidsai/rmm>`_ to circumvent the costly memory buffer mappings that would otherwise be required.

Contents
--------

.. toctree::
   :maxdepth: 1
   :caption: Getting Started

   install
   quickstart
   troubleshooting
   api

.. toctree::
   :maxdepth: 1
   :caption: Additional Features

   ucx
   explicit_comms
   spilling

.. toctree::
   :maxdepth: 1
   :caption: Examples

   examples/best-practices
   examples/worker_count
   examples/ucx
rapidsai_public_repos/dask-cuda/docs/source/examples/ucx.rst
Enabling UCX communication
==========================

A CUDA cluster using UCX communication can be started automatically with LocalCUDACluster or manually with the ``dask cuda worker`` CLI tool. In either case, a ``dask.distributed.Client`` must be created for the worker cluster using the same Dask UCX configuration; see `UCX Integration -- Configuration <../ucx.html#configuration>`_ for details on all available options.

LocalCUDACluster with Automatic Configuration
---------------------------------------------

Automatic configuration was introduced in Dask-CUDA 22.02 and requires UCX >= 1.11.1. This allows the user to specify only the UCX protocol and let UCX decide which transports to use.

To connect a client to a cluster with automatically-configured UCX and an RMM pool:

.. code-block:: python

    from dask.distributed import Client
    from dask_cuda import LocalCUDACluster

    cluster = LocalCUDACluster(
        protocol="ucx",
        interface="ib0",
        rmm_pool_size="1GB"
    )
    client = Client(cluster)

.. note::
    ``interface="ib0"`` is intentionally specified above to ensure RDMACM is used on systems that support InfiniBand. On systems that don't support InfiniBand or where RDMACM isn't required, the ``interface`` argument may be omitted or specified to listen on a different interface.

LocalCUDACluster with Manual Configuration
------------------------------------------

When using LocalCUDACluster with UCX communication and manual configuration, all required UCX configuration is handled through arguments supplied at construction; see `API -- Cluster <../api.html#cluster>`_ for a complete list of these arguments. To connect a client to a cluster with all supported transports and an RMM pool:

.. code-block:: python

    from dask.distributed import Client
    from dask_cuda import LocalCUDACluster

    cluster = LocalCUDACluster(
        protocol="ucx",
        interface="ib0",
        enable_tcp_over_ucx=True,
        enable_nvlink=True,
        enable_infiniband=True,
        enable_rdmacm=True,
        rmm_pool_size="1GB"
    )
    client = Client(cluster)

``dask cuda worker`` with Automatic Configuration
-------------------------------------------------

When using ``dask cuda worker`` with UCX communication and automatic configuration, the scheduler, workers, and client must all be started manually, but without specifying any UCX transports explicitly. This is only supported in Dask-CUDA 22.02 and newer and requires UCX >= 1.11.1.

Scheduler
^^^^^^^^^

For automatic UCX configuration, we must ensure a CUDA context is created on the scheduler before UCX is initialized. This can be satisfied by specifying the ``DASK_DISTRIBUTED__COMM__UCX__CREATE_CUDA_CONTEXT=True`` environment variable when creating the scheduler.

To start a Dask scheduler using UCX with automatic configuration and a 1 GB RMM pool:

.. code-block:: bash

    $ DASK_DISTRIBUTED__COMM__UCX__CREATE_CUDA_CONTEXT=True \
    > DASK_DISTRIBUTED__RMM__POOL_SIZE=1GB \
    > UCX_MEMTYPE_REG_WHOLE_ALLOC_TYPES=cuda \
    > dask scheduler --protocol ucx --interface ib0

.. note::
    ``--interface ib0`` is intentionally specified above to ensure RDMACM is used on systems that support InfiniBand. On systems that don't support InfiniBand or where RDMACM isn't required, the ``--interface`` argument may be omitted or specified to listen on a different interface.

    We specify ``UCX_MEMTYPE_REG_WHOLE_ALLOC_TYPES=cuda`` above for optimal performance with InfiniBand; see details `here <https://ucx-py.readthedocs.io/en/latest/configuration.html#ucx-memtype-reg-whole-alloc-types>`__. If not using InfiniBand, that option may be omitted. In UCX 1.12 and newer, that option is the default and may be omitted as well, even when using InfiniBand.

Workers
^^^^^^^

To start workers with automatic UCX configuration and an RMM pool of 14GB per GPU:

.. code-block:: bash

    $ UCX_MEMTYPE_REG_WHOLE_ALLOC_TYPES=cuda \
    > dask cuda worker ucx://<scheduler_address>:8786 \
    > --rmm-pool-size="14GB" \
    > --interface="ib0"

.. note::
    Analogous to the scheduler setup, ``--interface="ib0"`` is intentionally specified above to ensure RDMACM is used on systems that support InfiniBand. On systems that don't support InfiniBand or where RDMACM isn't required, the ``--interface`` argument may be omitted or specified to listen on a different interface.

    We specify ``UCX_MEMTYPE_REG_WHOLE_ALLOC_TYPES=cuda`` above for optimal performance with InfiniBand; see details `here <https://ucx-py.readthedocs.io/en/latest/configuration.html#ucx-memtype-reg-whole-alloc-types>`__. If not using InfiniBand, that option may be omitted. In UCX 1.12 and newer, that option is the default and may be omitted as well, even when using InfiniBand.

Client
^^^^^^

To connect a client to the cluster with automatic UCX configuration that we started above:

.. code-block:: python

    import os

    os.environ["UCX_MEMTYPE_REG_WHOLE_ALLOC_TYPES"] = "cuda"

    import dask
    from dask.distributed import Client

    with dask.config.set({"distributed.comm.ucx.create_cuda_context": True}):
        client = Client("ucx://<scheduler_address>:8786")

Alternatively, the ``with dask.config.set`` statement from the example above may be omitted and the ``DASK_DISTRIBUTED__COMM__UCX__CREATE_CUDA_CONTEXT=True`` environment variable specified instead:

.. code-block:: python

    import os

    os.environ["UCX_MEMTYPE_REG_WHOLE_ALLOC_TYPES"] = "cuda"
    os.environ["DASK_DISTRIBUTED__COMM__UCX__CREATE_CUDA_CONTEXT"] = "True"

    from dask.distributed import Client

    client = Client("ucx://<scheduler_address>:8786")

.. note::
    We specify ``UCX_MEMTYPE_REG_WHOLE_ALLOC_TYPES=cuda`` above for optimal performance with InfiniBand; see details `here <https://ucx-py.readthedocs.io/en/latest/configuration.html#ucx-memtype-reg-whole-alloc-types>`_. If not using InfiniBand, that option may be omitted. In UCX 1.12 and newer, that option is the default and may be omitted as well, even when using InfiniBand.

``dask cuda worker`` with Manual Configuration
----------------------------------------------

When using ``dask cuda worker`` with UCX communication and manual configuration, the scheduler, workers, and client must all be started manually, each using the same UCX configuration.

Scheduler
^^^^^^^^^

UCX configuration options will need to be specified for ``dask scheduler`` as environment variables; see `Dask Configuration -- Environment Variables <https://docs.dask.org/en/latest/configuration.html#environment-variables>`_ for more details on the mapping between environment variables and options. To start a Dask scheduler using UCX with all supported transports and a one-gigabyte RMM pool:

.. code-block:: bash

    $ DASK_DISTRIBUTED__COMM__UCX__CUDA_COPY=True \
    > DASK_DISTRIBUTED__COMM__UCX__TCP=True \
    > DASK_DISTRIBUTED__COMM__UCX__NVLINK=True \
    > DASK_DISTRIBUTED__COMM__UCX__INFINIBAND=True \
    > DASK_DISTRIBUTED__COMM__UCX__RDMACM=True \
    > DASK_DISTRIBUTED__RMM__POOL_SIZE=1GB \
    > dask scheduler --protocol ucx --interface ib0

We communicate to the scheduler that we will be using UCX with the ``--protocol`` option, and that we will be using InfiniBand with the ``--interface`` option.

Workers
^^^^^^^

All UCX configuration options have analogous options in ``dask cuda worker``; see `API -- Worker <../api.html#worker>`_ for a complete list of these options. To start a cluster with all supported transports and an RMM pool:

.. code-block:: bash

    $ dask cuda worker ucx://<scheduler_address>:8786 \
    > --enable-tcp-over-ucx \
    > --enable-nvlink \
    > --enable-infiniband \
    > --enable-rdmacm \
    > --rmm-pool-size="1GB"

Client
^^^^^^

A client can be configured to use UCX by using ``dask_cuda.initialize``, a utility which takes the same UCX configuration arguments as LocalCUDACluster and adds them to the current Dask configuration used when creating it; see `API -- Client initialization <../api.html#client-initialization>`_ for a complete list of arguments. To connect a client to the cluster we have made:

.. code-block:: python

    from dask.distributed import Client
    from dask_cuda.initialize import initialize

    initialize(
        enable_tcp_over_ucx=True,
        enable_nvlink=True,
        enable_infiniband=True,
        enable_rdmacm=True,
    )
    client = Client("ucx://<scheduler_address>:8786")
rapidsai_public_repos/dask-cuda/docs/source/examples/worker_count.rst
.. _controlling-number-of-workers:

Controlling number of workers
=============================

Users can restrict activity to specific GPUs by explicitly setting ``CUDA_VISIBLE_DEVICES``; for a LocalCUDACluster, this can be provided as a keyword argument. For example, to restrict activity to the first two indexed GPUs:

.. code-block:: python

    from dask_cuda import LocalCUDACluster

    cluster = LocalCUDACluster(CUDA_VISIBLE_DEVICES="0,1")

LocalCUDACluster can also take an ``n_workers`` argument, which will restrict activity to the first N GPUs listed in ``CUDA_VISIBLE_DEVICES``. This argument can be used on its own or in conjunction with ``CUDA_VISIBLE_DEVICES``:

.. code-block:: python

    cluster = LocalCUDACluster(n_workers=2)  # will use GPUs 0,1
    cluster = LocalCUDACluster(CUDA_VISIBLE_DEVICES="3,4,5", n_workers=2)  # will use GPUs 3,4

When using ``dask cuda worker``, ``CUDA_VISIBLE_DEVICES`` must be provided as an environment variable:

.. code-block:: bash

    $ dask scheduler
    distributed.scheduler - INFO - Scheduler at: tcp://127.0.0.1:8786

    $ CUDA_VISIBLE_DEVICES=0,1 dask cuda worker 127.0.0.1:8786

GPUs can also be selected by their UUIDs, which can be acquired using the `NVIDIA System Management Interface <https://developer.nvidia.com/nvidia-system-management-interface>`_:

.. code-block:: bash

    $ nvidia-smi -L
    GPU 0: Tesla V100-SXM2-32GB (UUID: GPU-dae76d0e-3414-958a-8f3e-fc6682b36f31)
    GPU 1: Tesla V100-SXM2-32GB (UUID: GPU-60f2c95a-c564-a078-2a14-b4ff488806ca)

These UUIDs can then be passed to ``CUDA_VISIBLE_DEVICES`` in place of a GPU index:

.. code-block:: python

    cluster = LocalCUDACluster(CUDA_VISIBLE_DEVICES="GPU-dae76d0e-3414-958a-8f3e-fc6682b36f31")

.. code-block:: bash

    $ CUDA_VISIBLE_DEVICES="GPU-dae76d0e-3414-958a-8f3e-fc6682b36f31" \
    > dask cuda worker 127.0.0.1:8786
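The interplay between ``CUDA_VISIBLE_DEVICES`` and ``n_workers`` shown above can be summarized with a small sketch. The ``select_devices`` helper is hypothetical, written only to illustrate the documented behavior, and is not part of Dask-CUDA:

```python
from typing import List, Optional


def select_devices(
    cuda_visible_devices: str, n_workers: Optional[int] = None
) -> List[str]:
    """Return the device list a cluster would use: all entries of
    CUDA_VISIBLE_DEVICES, truncated to the first ``n_workers`` if given."""
    devices = cuda_visible_devices.split(",")
    return devices if n_workers is None else devices[:n_workers]


print(select_devices("3,4,5", n_workers=2))  # ['3', '4'] -- the "GPUs 3,4" case
```

Entries may be GPU indices or UUID strings; truncation behaves the same either way.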
rapidsai_public_repos/dask-cuda/docs/source/examples/best-practices.rst
Best Practices ============== Multi-GPU Machines ~~~~~~~~~~~~~~~~~~ When choosing between two multi-GPU setups, it is best to pick the one where most GPUs are co-located with one-another. This could be a `DGX <https://www.nvidia.com/en-us/data-center/dgx-systems/>`_, a cloud instance with `multi-gpu options <https://rapids.ai/cloud>`_ , a high-density GPU HPC instance, etc. This is done for two reasons: - Moving data between GPUs is costly and performance decreases when computation stops due to communication overheads, Host-to-Device/Device-to-Host transfers, etc - Multi-GPU instances often come with accelerated networking like `NVLink <https://www.nvidia.com/en-us/data-center/nvlink/>`_. These accelerated networking paths usually have much higher throughput/bandwidth compared with traditional networking *and* don't force and Host-to-Device/Device-to-Host transfers. See `Accelerated Networking`_ for more discussion. .. code-block:: python from dask_cuda import LocalCUDACluster cluster = LocalCUDACluster(n_workers=2) # will use GPUs 0,1 cluster = LocalCUDACluster(CUDA_VISIBLE_DEVICES="3,4") # will use GPUs 3,4 For more discussion on controlling number of workers/using multiple GPUs see :ref:`controlling-number-of-workers` . GPU Memory Management ~~~~~~~~~~~~~~~~~~~~~ When using Dask-CUDA, especially with RAPIDS, it's best to use an |rmm-pool|__ to pre-allocate memory on the GPU. Allocating memory, while fast, takes a small amount of time, however, one can easily make hundreds of thousand or even millions of allocations in trivial workflows causing significant performance degradations. With an RMM pool, allocations are sub-sampled from a larger pool and this greatly reduces the allocation time and thereby increases performance: .. |rmm-pool| replace:: :abbr:`RMM (RAPIDS Memory Manager)` pool __ https://docs.rapids.ai/api/rmm/stable/ .. 
code-block:: python from dask_cuda import LocalCUDACluster cluster = LocalCUDACluster(CUDA_VISIBLE_DEVICES="0,1", protocol="ucx", rmm_pool_size="30GB") We also recommend allocating most, though not all, of the GPU memory space. We do this because the `CUDA Context <https://stackoverflow.com/questions/43244645/what-is-a-cuda-context#:~:text=The%20context%20holds%20all%20the,memory%20for%20zero%20copy%2C%20etc.>`_ takes a non-zero amount (typically 200-500 MBs) of GPU RAM on the device. Additionally, when using `Accelerated Networking`_ , we only need to register a single IPC handle for the whole pool (which is expensive, but only done once) since from the IPC point of viewer there's only a single allocation. As opposed to just using RMM without a pool where each new allocation must be registered with IPC. Accelerated Networking ~~~~~~~~~~~~~~~~~~~~~~ As discussed in `Multi-GPU Machines`_, accelerated networking has better bandwidth/throughput compared with traditional networking hardware and does not force any costly Host-to-Device/Device-to-Host transfers. Dask-CUDA can leverage accelerated networking hardware with `UCX-Py <https://ucx-py.readthedocs.io/en/latest/>`_. As an example, let's compare a merge benchmark when using 2 GPUs connected with NVLink. First we'll run with standard TCP comms: :: python local_cudf_merge.py -d 0,1 -p tcp -c 50_000_000 --rmm-pool-size 30GB In the above, we used 2 GPUs (2 dask-cuda-workers), pre-allocated 30GB of GPU RAM (to make gpu memory allocations faster), and used TCP comms when Dask needed to move data back-and-forth between workers. 
This setup results in an average wall clock time of ``19.72 s +/- 694.36 ms``::

    ================================================================================
    Wall clock                | Throughput
    --------------------------------------------------------------------------------
    20.09 s                   | 151.93 MiB/s
    20.33 s                   | 150.10 MiB/s
    18.75 s                   | 162.75 MiB/s
    ================================================================================
    Throughput                | 154.73 MiB/s +/- 3.14 MiB/s
    Bandwidth                 | 139.22 MiB/s +/- 2.98 MiB/s
    Wall clock                | 19.72 s +/- 694.36 ms
    ================================================================================
    (w1,w2)                   | 25% 50% 75% (total nbytes)
    --------------------------------------------------------------------------------
    (0,1)                     | 138.48 MiB/s 150.16 MiB/s 157.36 MiB/s (8.66 GiB)
    (1,0)                     | 107.01 MiB/s 162.38 MiB/s 188.59 MiB/s (8.66 GiB)
    ================================================================================
    Worker index              | Worker address
    --------------------------------------------------------------------------------
    0                         | tcp://127.0.0.1:44055
    1                         | tcp://127.0.0.1:41095
    ================================================================================

To compare, we'll now change the ``protocol`` from ``tcp`` to ``ucx``::

    python local_cudf_merge.py -d 0,1 -p ucx -c 50_000_000 --rmm-pool-size 30GB

With UCX and NVLink, we greatly reduced the wall clock time to
``347.43 ms +/- 5.41 ms``::

    ================================================================================
    Wall clock                | Throughput
    --------------------------------------------------------------------------------
    354.87 ms                 | 8.40 GiB/s
    345.24 ms                 | 8.63 GiB/s
    342.18 ms                 | 8.71 GiB/s
    ================================================================================
    Throughput                | 8.58 GiB/s +/- 78.96 MiB/s
    Bandwidth                 | 6.98 GiB/s +/- 46.05 MiB/s
    Wall clock                | 347.43 ms +/- 5.41 ms
    ================================================================================
    (w1,w2)                   | 25% 50% 75% (total nbytes)
    --------------------------------------------------------------------------------
    (0,1)                     | 17.38 GiB/s 17.94 GiB/s 18.88 GiB/s (8.66 GiB)
    (1,0)                     | 16.55 GiB/s 17.80 GiB/s 18.87 GiB/s (8.66 GiB)
    ================================================================================
    Worker index              | Worker address
    --------------------------------------------------------------------------------
    0                         | ucx://127.0.0.1:35954
    1                         | ucx://127.0.0.1:53584
    ================================================================================
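As a rough sanity check, the relative speedup between the two runs follows directly from the reported mean wall-clock times (the values below are copied from the benchmark output; the snippet itself is illustrative only, not part of the benchmark):

```python
# Mean wall clock times reported by the two benchmark runs above.
tcp_wall_clock_s = 19.72      # TCP comms
ucx_wall_clock_s = 0.34743    # UCX + NVLink comms (347.43 ms)

# Relative speedup from switching the Dask comms protocol to UCX.
speedup = tcp_wall_clock_s / ucx_wall_clock_s
print(f"UCX/NVLink speedup: {speedup:.1f}x")  # prints "UCX/NVLink speedup: 56.8x"
```

That is, for this merge workload the NVLink-backed UCX transport is over 50x faster end-to-end than TCP.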
rapidsai_public_repos/dask-cuda/ci/test_python.sh
#!/bin/bash
# Copyright (c) 2022-2023, NVIDIA CORPORATION.

set -euo pipefail

. /opt/conda/etc/profile.d/conda.sh

rapids-logger "Generate Python testing dependencies"
rapids-dependency-file-generator \
  --output conda \
  --file_key test_python \
  --matrix "cuda=${RAPIDS_CUDA_VERSION%.*};arch=$(arch);py=${RAPIDS_PY_VERSION}" | tee env.yaml

rapids-mamba-retry env create --force -f env.yaml -n test

# Temporarily allow unbound variables for conda activation.
set +u
conda activate test
set -u

rapids-logger "Downloading artifacts from previous jobs"
PYTHON_CHANNEL=$(rapids-download-conda-from-s3 python)

RAPIDS_TESTS_DIR=${RAPIDS_TESTS_DIR:-"${PWD}/test-results"}
RAPIDS_COVERAGE_DIR=${RAPIDS_COVERAGE_DIR:-"${PWD}/coverage-results"}
mkdir -p "${RAPIDS_TESTS_DIR}" "${RAPIDS_COVERAGE_DIR}"

rapids-print-env

rapids-mamba-retry install \
  --channel "${PYTHON_CHANNEL}" \
  dask-cuda

rapids-logger "Check GPU usage"
nvidia-smi

EXITCODE=0
trap "EXITCODE=1" ERR
set +e

rapids-logger "pytest dask-cuda"
pushd dask_cuda
DASK_CUDA_TEST_SINGLE_GPU=1 \
DASK_CUDA_WAIT_WORKERS_MIN_TIMEOUT=20 \
UCXPY_IFNAME=eth0 \
UCX_WARN_UNUSED_ENV_VARS=n \
UCX_MEMTYPE_CACHE=n \
timeout 60m pytest \
  -vv \
  --durations=0 \
  --capture=no \
  --cache-clear \
  --junitxml="${RAPIDS_TESTS_DIR}/junit-dask-cuda.xml" \
  --cov-config=../pyproject.toml \
  --cov=dask_cuda \
  --cov-report=xml:"${RAPIDS_COVERAGE_DIR}/dask-cuda-coverage.xml" \
  --cov-report=term \
  tests -k "not ucxx"
popd

rapids-logger "Run local benchmark"
python dask_cuda/benchmarks/local_cudf_shuffle.py \
  --partition-size="1 KiB" \
  -d 0 \
  --runs 1 \
  --backend dask

python dask_cuda/benchmarks/local_cudf_shuffle.py \
  --partition-size="1 KiB" \
  -d 0 \
  --runs 1 \
  --backend explicit-comms

rapids-logger "Test script exiting with value: $EXITCODE"
exit ${EXITCODE}
rapidsai_public_repos/dask-cuda/ci/build_python.sh
#!/bin/bash
# Copyright (c) 2022, NVIDIA CORPORATION.

set -euo pipefail

source rapids-env-update

export CMAKE_GENERATOR=Ninja

rapids-print-env

package_name="dask_cuda"

version=$(rapids-generate-version)
commit=$(git rev-parse HEAD)

echo "${version}" | tr -d '"' > VERSION
sed -i "/^__git_commit__/ s/= .*/= \"${commit}\"/g" "${package_name}/_version.py"

rapids-logger "Begin py build"

RAPIDS_PACKAGE_VERSION=${version} rapids-conda-retry mambabuild \
  conda/recipes/dask-cuda

rapids-upload-conda-to-s3 python
rapidsai_public_repos/dask-cuda/ci/build_python_pypi.sh
#!/bin/bash

python -m pip install build --user

version=$(rapids-generate-version)
commit=$(git rev-parse HEAD)

# While conda provides these during conda-build, they are also necessary during
# the setup.py build for PyPI
export GIT_DESCRIBE_TAG=$(git describe --abbrev=0 --tags)
export GIT_DESCRIBE_NUMBER=$(git rev-list ${GIT_DESCRIBE_TAG}..HEAD --count)

# Build date for PyPI pre-releases using version from `pyproject.toml` as source.
TOML_VERSION=$(grep "version = .*" pyproject.toml | grep -o '".*"' | sed 's/"//g')

if ! rapids-is-release-build; then
  export PACKAGE_VERSION_NUMBER="${version}"
fi

# For nightlies we want to ensure that we're pulling in alphas as well. The
# easiest way to do so is to augment the spec with a constraint containing a
# min alpha version that doesn't affect the version bounds but does allow usage
# of alpha versions for that dependency without --pre
alpha_spec=''
if ! rapids-is-release-build; then
  alpha_spec=',>=0.0.0a0'
fi

sed -r -i "s/rapids-dask-dependency==(.*)\"/rapids-dask-dependency==\1${alpha_spec}\"/g" pyproject.toml

echo "${version}" | tr -d '"' > VERSION
sed -i "/^__git_commit__/ s/= .*/= \"${commit}\"/g" "dask_cuda/_version.py"

# Compute/export RAPIDS_DATE_STRING
source rapids-env-update

python -m build \
  --sdist \
  --wheel \
  --outdir dist/ \
  .
rapidsai_public_repos/dask-cuda/ci/check_style.sh
#!/bin/bash
# Copyright (c) 2020-2022, NVIDIA CORPORATION.

set -euo pipefail

rapids-logger "Create checks conda environment"
. /opt/conda/etc/profile.d/conda.sh

rapids-dependency-file-generator \
  --output conda \
  --file_key checks \
  --matrix "cuda=${RAPIDS_CUDA_VERSION%.*};arch=$(arch);py=${RAPIDS_PY_VERSION}" | tee env.yaml

rapids-mamba-retry env create --force -f env.yaml -n checks
conda activate checks

# Run pre-commit checks
pre-commit run --hook-stage manual --all-files --show-diff-on-failure
rapidsai_public_repos/dask-cuda/ci/build_docs.sh
#!/bin/bash

set -euo pipefail

rapids-logger "Create test conda environment"
. /opt/conda/etc/profile.d/conda.sh

rapids-dependency-file-generator \
  --output conda \
  --file_key docs \
  --matrix "cuda=${RAPIDS_CUDA_VERSION%.*};arch=$(arch);py=${RAPIDS_PY_VERSION}" | tee env.yaml

rapids-mamba-retry env create --force -f env.yaml -n docs
conda activate docs

rapids-print-env

rapids-logger "Downloading artifacts from previous jobs"
PYTHON_CHANNEL=$(rapids-download-conda-from-s3 python)

rapids-mamba-retry install \
  --channel "${PYTHON_CHANNEL}" \
  dask-cuda

export RAPIDS_VERSION_NUMBER="24.02"
export RAPIDS_DOCS_DIR="$(mktemp -d)"

rapids-logger "Build Python docs"
pushd docs
sphinx-build -b dirhtml ./source _html
sphinx-build -b text ./source _text
mkdir -p "${RAPIDS_DOCS_DIR}/dask-cuda/"{html,txt}
mv _html/* "${RAPIDS_DOCS_DIR}/dask-cuda/html"
mv _text/* "${RAPIDS_DOCS_DIR}/dask-cuda/txt"
popd

rapids-upload-docs
rapidsai_public_repos/dask-cuda/ci/release/update-version.sh
#!/bin/bash
# Copyright (c) 2020, NVIDIA CORPORATION.
################################################################################
# dask-cuda version updater
################################################################################

## Usage
# bash update-version.sh <new_version>

# Format is YY.MM.PP - no leading 'v' or trailing 'a'
NEXT_FULL_TAG=$1

# Get current version
CURRENT_TAG=$(git tag --merged HEAD | grep -xE '^v.*' | sort --version-sort | tail -n 1 | tr -d 'v')
CURRENT_MAJOR=$(echo $CURRENT_TAG | awk '{split($0, a, "."); print a[1]}')
CURRENT_MINOR=$(echo $CURRENT_TAG | awk '{split($0, a, "."); print a[2]}')
CURRENT_PATCH=$(echo $CURRENT_TAG | awk '{split($0, a, "."); print a[3]}')
CURRENT_SHORT_TAG=${CURRENT_MAJOR}.${CURRENT_MINOR}

# Get <major>.<minor> for next version
NEXT_MAJOR=$(echo $NEXT_FULL_TAG | awk '{split($0, a, "."); print a[1]}')
NEXT_MINOR=$(echo $NEXT_FULL_TAG | awk '{split($0, a, "."); print a[2]}')
NEXT_SHORT_TAG=${NEXT_MAJOR}.${NEXT_MINOR}
NEXT_SHORT_TAG_PEP440=$(python -c "from setuptools.extern import packaging; print(packaging.version.Version('${NEXT_SHORT_TAG}'))")
NEXT_UCXPY_VERSION="$(curl -s https://version.gpuci.io/rapids/${NEXT_SHORT_TAG})"

echo "Preparing release $CURRENT_TAG => $NEXT_FULL_TAG"

# Inplace sed replace; workaround for Linux and Mac
function sed_runner() {
  sed -i.bak ''"$1"'' $2 && rm -f ${2}.bak
}

# Centralized version file update
echo "${NEXT_FULL_TAG}" | tr -d '"' > VERSION

# Bump cudf and dask-cudf testing dependencies
sed_runner "s/cudf==.*/cudf==${NEXT_SHORT_TAG_PEP440}.*/g" dependencies.yaml
sed_runner "s/dask-cudf==.*/dask-cudf==${NEXT_SHORT_TAG_PEP440}.*/g" dependencies.yaml
sed_runner "s/kvikio==.*/kvikio==${NEXT_SHORT_TAG_PEP440}.*/g" dependencies.yaml
sed_runner "s/ucx-py==.*/ucx-py==${NEXT_UCXPY_VERSION}.*/g" dependencies.yaml
sed_runner "s/ucxx==.*/ucxx==${NEXT_UCXPY_VERSION}.*/g" dependencies.yaml
sed_runner "s/rapids-dask-dependency==.*/rapids-dask-dependency==${NEXT_SHORT_TAG_PEP440}.*/g" dependencies.yaml

# CI files
for FILE in .github/workflows/*.yaml; do
  sed_runner "/shared-workflows/ s/@.*/@branch-${NEXT_SHORT_TAG}/g" "${FILE}"
done
sed_runner "s/RAPIDS_VERSION_NUMBER=\".*/RAPIDS_VERSION_NUMBER=\"${NEXT_SHORT_TAG}\"/g" ci/build_docs.sh
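For reference, the awk-based major/minor/patch extraction performed by the script amounts to the following stand-alone Python equivalent (`split_version` is a hypothetical helper written here for illustration, not part of the repo):

```python
def split_version(tag: str):
    """Split a 'YY.MM.PP' tag into (major, minor, short_tag),
    mirroring the CURRENT_*/NEXT_* variables in update-version.sh."""
    major, minor, _patch = tag.split(".")
    return major, minor, f"{major}.{minor}"


major, minor, short_tag = split_version("24.02.01")
print(short_tag)  # prints "24.02"
```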
rapidsai_public_repos/dask-cuda/dask_cuda/proxy_object.py
import copy as _copy
import functools
import operator
import os
import pickle
import time
from collections import OrderedDict
from contextlib import nullcontext
from typing import TYPE_CHECKING, Any, Dict, Iterable, Optional, Tuple, Type, Union

import pandas

import dask
import dask.array.core
import dask.dataframe.methods
import dask.dataframe.utils
import dask.utils
import distributed.protocol
import distributed.utils
from dask.sizeof import sizeof
from distributed.protocol.compression import decompress

from dask_cuda.disk_io import disk_read

try:
    from dask.dataframe.backends import concat_pandas
except ImportError:
    from dask.dataframe.methods import concat_pandas

try:
    from dask.dataframe.dispatch import make_meta_dispatch as make_meta_dispatch
except ImportError:
    from dask.dataframe.utils import make_meta as make_meta_dispatch

from .disk_io import SpillToDiskFile
from .is_device_object import is_device_object

if TYPE_CHECKING:
    from .proxify_host_file import ProxyManager

# List of attributes that should be copied to the proxy at creation, which makes
# them accessible without deserialization of the proxied object
_FIXED_ATTRS = ["name", "__len__"]


def asproxy(
    obj: object,
    serializers: Optional[Iterable[str]] = None,
    subclass: Optional[Type["ProxyObject"]] = None,
) -> "ProxyObject":
    """Wrap `obj` in a ProxyObject object if it isn't already.

    Parameters
    ----------
    obj: object
        Object to wrap in a ProxyObject object.
    serializers: Iterable[str], optional
        Serializers to use to serialize `obj`. If None, no serialization is done.
    subclass: class, optional
        Specify a subclass of ProxyObject to create instead of ProxyObject.
        `subclass` must be picklable.

    Returns
    -------
    The ProxyObject proxying `obj`
    """
    if isinstance(obj, ProxyObject):
        # Already a proxy object
        ret = obj
    elif isinstance(obj, (list, set, tuple, dict)):
        raise ValueError(f"Cannot wrap a collection ({type(obj)}) in a proxy object")
    else:
        fixed_attr = {}
        for attr in _FIXED_ATTRS:
            try:
                val = getattr(obj, attr)
                if callable(val):
                    val = val()
                fixed_attr[attr] = val
            except (AttributeError, TypeError):
                pass
        if subclass is None:
            subclass = ProxyObject
            subclass_serialized = None
        else:
            subclass_serialized = pickle.dumps(subclass)
        ret = subclass(
            ProxyDetail(
                obj=obj,
                fixed_attr=fixed_attr,
                type_serialized=pickle.dumps(type(obj)),
                typename=dask.utils.typename(type(obj)),
                is_cuda_object=is_device_object(obj),
                subclass=subclass_serialized,
                serializer=None,
                explicit_proxy=False,
            )
        )
    if serializers is not None:
        ret._pxy_serialize(serializers=serializers)
    return ret


def unproxy(obj):
    """Unwrap ProxyObject objects and pass-through anything else.

    Use this function to retrieve the proxied object. Notice, unproxy()
    searches through lists, tuples, sets, and frozensets.

    Parameters
    ----------
    obj: object
        Any kind of object

    Returns
    -------
    The proxied object or `obj` itself if it isn't a ProxyObject
    """
    try:
        obj = obj._pxy_deserialize()
    except AttributeError:
        if type(obj) in (list, tuple, set, frozenset):
            return type(obj)(unproxy(o) for o in obj)
    return obj


def _pxy_cache_wrapper(attr_name: str):
    """Caching the access of attr_name in ProxyObject._pxy_cache"""

    def wrapper1(func):
        @functools.wraps(func)
        def wrapper2(self: "ProxyObject"):
            try:
                return self._pxy_cache[attr_name]
            except KeyError:
                ret = func(self)
                self._pxy_cache[attr_name] = ret
                return ret

        return wrapper2

    return wrapper1


class ProxyManagerDummy:
    """Dummy of a ProxyManager that does nothing

    This is a dummy class used as the manager when no manager has been
    registered with the proxy object. It implements dummy methods that
    don't do anything; it is purely for convenience.
    """

    def add(self, *args, **kwargs):
        pass

    def remove(self, *args, **kwargs):
        pass

    def maybe_evict(self, *args, **kwargs):
        pass

    @property
    def lock(self):
        return nullcontext()


class ProxyDetail:
    """Details of a ProxyObject

    In order to avoid having to use thread locks, a ProxyObject maintains
    its state in a ProxyDetail object. The idea is to first make a copy of
    the ProxyDetail object before modifying it and then assign the copy back
    to the ProxyObject in one atomic instruction.

    Parameters
    ----------
    obj: object
        Any kind of object to be proxied.
    fixed_attr: dict
        Dictionary of attributes that are accessible without deserializing
        the proxied object.
    type_serialized: bytes
        Pickled type of `obj`.
    typename: str
        Name of the type of `obj`.
    is_cuda_object: boolean
        Whether `obj` is a CUDA object or not.
    subclass: bytes
        Pickled type to use instead of ProxyObject when deserializing. The
        type must inherit from ProxyObject.
    serializers: str, optional
        Serializers to use to serialize `obj`. If None, no serialization is done.
    explicit_proxy: bool
        Mark the proxy object as "explicit", which means that the user allows
        it as input argument to dask tasks even in compatibility-mode.
    manager: ProxyManager or ProxyManagerDummy
        The manager to manage this proxy object or a dummy. The manager tallies
        the total memory usage of proxies and evicts/serializes proxy objects
        as needed.
    """

    def __init__(
        self,
        obj: Any,
        fixed_attr: Dict[str, Any],
        type_serialized: bytes,
        typename: str,
        is_cuda_object: bool,
        subclass: Optional[bytes],
        serializer: Optional[str],
        explicit_proxy: bool,
        manager: Union["ProxyManager", ProxyManagerDummy] = ProxyManagerDummy(),
    ):
        self.obj = obj
        self.fixed_attr = fixed_attr
        self.type_serialized = type_serialized
        self.typename = typename
        self.is_cuda_object = is_cuda_object
        self.subclass = subclass
        self.serializer = serializer
        self.explicit_proxy = explicit_proxy
        self.manager = manager
        self.last_access: float = 0.0

    def get_init_args(self, include_obj=False) -> OrderedDict:
        """Return the attributes needed to initialize a ProxyObject

        Notice, the returned dictionary is ordered as the __init__() arguments

        Parameters
        ----------
        include_obj: bool
            Whether to include the "obj" argument or not

        Returns
        -------
        Dictionary of attributes
        """
        args = ["obj"] if include_obj else []
        args += [
            "fixed_attr",
            "type_serialized",
            "typename",
            "is_cuda_object",
            "subclass",
            "serializer",
            "explicit_proxy",
        ]
        return OrderedDict([(a, getattr(self, a)) for a in args])

    def is_serialized(self) -> bool:
        """Return whether the proxied object is serialized or not"""
        return self.serializer is not None

    def serialize(self, serializers: Iterable[str]) -> Tuple[dict, list]:
        """Inplace serialization of the proxied object using the `serializers`

        Parameters
        ----------
        serializers: Iterable[str]
            Serializers to use to serialize the proxied object.

        Returns
        -------
        header: dict
            The header of the serialized frames
        frames: list[bytes]
            List of frames that make up the serialized object
        """
        if not serializers:
            raise ValueError("Please specify a list of serializers")

        if self.serializer is not None:
            if self.serializer in serializers:
                return self.obj  # Nothing to be done
            else:
                # The proxied object is serialized with other serializers
                self.deserialize(maybe_evict=False)

        header, _ = self.obj = distributed.protocol.serialize(
            self.obj, serializers, on_error="raise"
        )
        assert "is-collection" not in header  # Collections not allowed
        self.serializer = header["serializer"]
        return self.obj

    def deserialize(self, maybe_evict: bool = True, nbytes=None):
        """Inplace deserialization of the proxied object

        Parameters
        ----------
        maybe_evict: bool
            Before deserializing, maybe evict managed proxy objects

        Returns
        -------
        object
            The proxied object (deserialized)
        """
        if self.is_serialized():
            # When not deserializing a CUDA-serialized proxied object, tell the
            # manager that it might have to evict because of the increased
            # device memory usage.
            if maybe_evict and self.serializer != "cuda":
                if nbytes is None:
                    _, frames = self.obj
                    nbytes = sum(map(distributed.utils.nbytes, frames))
                self.manager.maybe_evict(nbytes)

            # Deserialize the proxied object
            header, frames = self.obj
            self.obj = distributed.protocol.deserialize(header, frames)
            self.serializer = None
        self.last_access = time.monotonic()
        return self.obj


class ProxyObject:
    """Object wrapper/proxy for serializable objects

    This is used by ProxifyHostFile to delay deserialization of returned objects.

    Objects proxied by an instance of this class will be JIT-deserialized when
    accessed. The instance behaves as the proxied object and can be accessed/used
    just like the proxied object.

    ProxyObject has some limitations and doesn't mimic the proxied object
    perfectly. Thus, if encountering problems remember that it is always possible
    to use unproxy() to access the proxied object directly or disable JIT
    deserialization completely with `jit_unspill=False`.

    Type checking using isinstance() works as expected but direct type checking
    doesn't:

    >>> import numpy as np
    >>> from dask_cuda.proxy_object import asproxy
    >>> x = np.arange(3)
    >>> isinstance(asproxy(x), type(x))
    True
    >>> type(asproxy(x)) is type(x)
    False

    Attributes
    ----------
    _pxy: ProxyDetail
        Details of all proxy information of the underlying proxied object.
        Access to _pxy is not pass-through to the proxied object, which is
        the case for most other access to the ProxyObject.
    _pxy_cache: dict
        A dictionary used for caching attributes

    Parameters
    ----------
    detail: ProxyDetail
        Details of the object to be proxied.
    """

    def __init__(self, detail: ProxyDetail):
        self._pxy_detail = detail
        self._pxy_cache: Dict[str, Any] = {}

    def _pxy_get(self, copy=False) -> ProxyDetail:
        if copy:
            return _copy.copy(self._pxy_detail)
        else:
            return self._pxy_detail

    def _pxy_set(self, proxy_detail: ProxyDetail):
        with proxy_detail.manager.lock:
            self._pxy_detail = proxy_detail
            proxy_detail.manager.add(proxy=self, serializer=proxy_detail.serializer)

    def __del__(self):
        """We have to unregister us from the manager if any"""
        pxy = self._pxy_get()
        pxy.manager.remove(self)

    def _pxy_serialize(
        self,
        serializers: Iterable[str],
        proxy_detail: Optional[ProxyDetail] = None,
    ) -> None:
        """Inplace serialization of the proxied object using the `serializers`

        Parameters
        ----------
        serializers: Iterable[str]
            Serializers to use to serialize the proxied object.

        Returns
        -------
        header: dict
            The header of the serialized frames
        frames: list[bytes]
            List of frames that make up the serialized object
        """
        if not serializers:
            raise ValueError("Please specify a list of serializers")
        pxy = self._pxy_get(copy=True) if not proxy_detail else proxy_detail
        if pxy.serializer is not None and pxy.serializer in serializers:
            return  # Nothing to be done
        pxy.serialize(serializers=serializers)
        self._pxy_set(pxy)

        # Invalidate the (possible) cached "device_memory_objects"
        self._pxy_cache.pop("device_memory_objects", None)

    def _pxy_deserialize(
        self, maybe_evict: bool = True, proxy_detail: Optional[ProxyDetail] = None
    ):
        """Inplace deserialization of the proxied object

        Parameters
        ----------
        maybe_evict: bool
            Before deserializing, maybe evict managed proxy objects

        Returns
        -------
        object
            The proxied object (deserialized)
        """
        pxy = self._pxy_get(copy=True) if not proxy_detail else proxy_detail
        if not pxy.is_serialized():
            return pxy.obj
        ret = pxy.deserialize(maybe_evict=maybe_evict, nbytes=self.__sizeof__())
        self._pxy_set(pxy)
        return ret

    def __reduce__(self):
        """Serialization of ProxyObject that uses pickle"""
        pxy = self._pxy_get(copy=True)
        pxy.serialize(serializers=("pickle",))
        if pxy.subclass:
            subclass = pickle.loads(pxy.subclass)
        else:
            subclass = ProxyObject

        # Make sure the frames are all bytes
        header, frames = pxy.obj
        pxy.obj = (header, [bytes(f) for f in frames])
        self._pxy_set(pxy)
        return (subclass, (pxy,))

    def __getattr__(self, name):
        pxy = self._pxy_get()
        if name in _FIXED_ATTRS:
            try:
                return pxy.fixed_attr[name]
            except KeyError:
                raise AttributeError(
                    f"type object '{pxy.typename}' has no attribute '{name}'"
                )
        return getattr(self._pxy_deserialize(), name)

    def __setattr__(self, name: str, val):
        if name.startswith("_pxy_"):
            return object.__setattr__(self, name, val)
        pxy = self._pxy_get(copy=True)
        if name in _FIXED_ATTRS:
            pxy.fixed_attr[name] = val
        else:
            object.__setattr__(pxy.deserialize(nbytes=self.__sizeof__()), name, val)
        self._pxy_set(pxy)

    def __array_ufunc__(self, ufunc, method, *args, **kwargs):
        from .proxify_device_objects import unproxify_device_objects

        args, kwargs = unproxify_device_objects(args), unproxify_device_objects(kwargs)
        return self._pxy_deserialize().__array_ufunc__(ufunc, method, *args, **kwargs)

    def __array_function__(self, func, types, args, kwargs):
        from .proxify_device_objects import unproxify_device_objects

        kwargs = unproxify_device_objects(kwargs)
        proxied = self._pxy_deserialize()

        # Unproxify `args` and `types`
        types = [t for t in types if not issubclass(t, type(self))]
        args_proxied = []
        for a in args:
            if isinstance(a, type(self)):
                types.append(a.__class__)
                args_proxied.append(a._pxy_deserialize())
            else:
                args_proxied.append(a)
        return proxied.__array_function__(func, types, args_proxied, kwargs)

    def __str__(self):
        return str(self._pxy_deserialize())

    def __repr__(self):
        pxy = self._pxy_get()
        ret = f"<{dask.utils.typename(type(self))} "
        ret += f"at {hex(id(self))} of {pxy.typename}"
        if pxy.is_serialized():
            ret += f" (serialized={repr(pxy.serializer)})>"
        else:
            ret += f" at {hex(id(pxy.obj))}>"
        return ret

    @property  # type: ignore  # mypy doesn't support decorated property
    @_pxy_cache_wrapper("type_serialized")
    def __class__(self):
        return pickle.loads(self._pxy_get().type_serialized)

    @_pxy_cache_wrapper("sizeof")
    def __sizeof__(self):
        """Returns the size of the proxy object (serialized or not)

        Notice, we cache the result even though the size of the proxied object
        when serialized or not serialized might slightly differ.
        """
        pxy = self._pxy_get()
        if pxy.is_serialized():
            _, frames = pxy.obj
            return sum(map(distributed.utils.nbytes, frames))
        else:
            return sizeof(pxy.obj)

    def __len__(self):
        pxy = self._pxy_get(copy=True)
        ret = pxy.fixed_attr.get("__len__", None)
        if ret is None:
            ret = len(pxy.deserialize(nbytes=self.__sizeof__()))
            pxy.fixed_attr["__len__"] = ret
            self._pxy_set(pxy)
        return ret

    def __contains__(self, value):
        return value in self._pxy_deserialize()

    def __getitem__(self, key):
        return self._pxy_deserialize()[key]

    def __setitem__(self, key, value):
        self._pxy_deserialize()[key] = value

    def __delitem__(self, key):
        del self._pxy_deserialize()[key]

    def __getslice__(self, i, j):
        return self._pxy_deserialize()[i:j]

    def __setslice__(self, i, j, value):
        self._pxy_deserialize()[i:j] = value

    def __delslice__(self, i, j):
        del self._pxy_deserialize()[i:j]

    def __iter__(self):
        return iter(self._pxy_deserialize())

    def __array__(self, *args, **kwargs):
        return getattr(self._pxy_deserialize(), "__array__")(*args, **kwargs)

    def __lt__(self, other):
        return self._pxy_deserialize() < other

    def __le__(self, other):
        return self._pxy_deserialize() <= other

    def __eq__(self, other):
        return self._pxy_deserialize() == other

    def __ne__(self, other):
        return self._pxy_deserialize() != other

    def __gt__(self, other):
        return self._pxy_deserialize() > other

    def __ge__(self, other):
        return self._pxy_deserialize() >= other

    def __add__(self, other):
        return self._pxy_deserialize() + other

    def __sub__(self, other):
        return self._pxy_deserialize() - other

    def __mul__(self, other):
        return self._pxy_deserialize() * other

    def __truediv__(self, other):
        return operator.truediv(self._pxy_deserialize(), other)

    def __floordiv__(self, other):
        return self._pxy_deserialize() // other

    def __mod__(self, other):
        return self._pxy_deserialize() % other

    def __divmod__(self, other):
        return divmod(self._pxy_deserialize(), other)

    def __pow__(self, other):
        return pow(self._pxy_deserialize(), other)

    def __lshift__(self, other):
        return self._pxy_deserialize() << other

    def __rshift__(self, other):
        return self._pxy_deserialize() >> other

    def __and__(self, other):
        return self._pxy_deserialize() & other

    def __xor__(self, other):
        return self._pxy_deserialize() ^ other

    def __or__(self, other):
        return self._pxy_deserialize() | other

    def __matmul__(self, other):
        return self._pxy_deserialize().__matmul__(unproxy(other))

    def __radd__(self, other):
        return other + self._pxy_deserialize()

    def __rsub__(self, other):
        return other - self._pxy_deserialize()

    def __rmul__(self, other):
        return other * self._pxy_deserialize()

    def __rtruediv__(self, other):
        return operator.truediv(other, self._pxy_deserialize())

    def __rfloordiv__(self, other):
        return other // self._pxy_deserialize()

    def __rmod__(self, other):
        return other % self._pxy_deserialize()

    def __rdivmod__(self, other):
        return divmod(other, self._pxy_deserialize())

    def __rpow__(self, other, *args):
        return pow(other, self._pxy_deserialize(), *args)

    def __rlshift__(self, other):
        return other << self._pxy_deserialize()

    def __rrshift__(self, other):
        return other >> self._pxy_deserialize()

    def __rand__(self, other):
        return other & self._pxy_deserialize()

    def __rxor__(self, other):
        return other ^ self._pxy_deserialize()

    def __ror__(self, other):
        return other | self._pxy_deserialize()

    def __rmatmul__(self, other):
        return self._pxy_deserialize().__rmatmul__(unproxy(other))

    def __iadd__(self, other):
        pxy = self._pxy_get(copy=True)
        proxied = pxy.deserialize(nbytes=self.__sizeof__())
        proxied += other
        self._pxy_set(pxy)
        return self

    def __isub__(self, other):
        pxy = self._pxy_get(copy=True)
        proxied = pxy.deserialize(nbytes=self.__sizeof__())
        proxied -= other
        self._pxy_set(pxy)
        return self

    def __imul__(self, other):
        pxy = self._pxy_get(copy=True)
        proxied = pxy.deserialize(nbytes=self.__sizeof__())
        proxied *= other
        self._pxy_set(pxy)
        return self

    def __itruediv__(self, other):
        pxy = self._pxy_get(copy=True)
        proxied = pxy.deserialize(nbytes=self.__sizeof__())
        pxy.obj = operator.itruediv(proxied, other)
        self._pxy_set(pxy)
        return self

    def __ifloordiv__(self, other):
        pxy = self._pxy_get(copy=True)
        proxied = pxy.deserialize(nbytes=self.__sizeof__())
        proxied //= other
        self._pxy_set(pxy)
        return self

    def __imod__(self, other):
        pxy = self._pxy_get(copy=True)
        proxied = pxy.deserialize(nbytes=self.__sizeof__())
        proxied %= other
        self._pxy_set(pxy)
        return self

    def __ipow__(self, other):
        pxy = self._pxy_get(copy=True)
        proxied = pxy.deserialize(nbytes=self.__sizeof__())
        proxied **= other
        self._pxy_set(pxy)
        return self

    def __ilshift__(self, other):
        pxy = self._pxy_get(copy=True)
        proxied = pxy.deserialize(nbytes=self.__sizeof__())
        proxied <<= other
        self._pxy_set(pxy)
        return self

    def __irshift__(self, other):
        pxy = self._pxy_get(copy=True)
        proxied = pxy.deserialize(nbytes=self.__sizeof__())
        proxied >>= other
        self._pxy_set(pxy)
        return self

    def __iand__(self, other):
        pxy = self._pxy_get(copy=True)
        proxied = pxy.deserialize(nbytes=self.__sizeof__())
        proxied &= other
        self._pxy_set(pxy)
        return self

    def __ixor__(self, other):
        pxy = self._pxy_get(copy=True)
        proxied = pxy.deserialize(nbytes=self.__sizeof__())
        proxied ^= other
        self._pxy_set(pxy)
        return self

    def __ior__(self, other):
        pxy = self._pxy_get(copy=True)
        proxied = pxy.deserialize(nbytes=self.__sizeof__())
        proxied |= other
        self._pxy_set(pxy)
        return self

    def __imatmul__(self, other):
        pxy = self._pxy_get(copy=True)
        proxied = pxy.deserialize(nbytes=self.__sizeof__())
        proxied @= other
        pxy.obj = proxied
        self._pxy_set(pxy)
        return self

    def __neg__(self):
        return -self._pxy_deserialize()

    def __pos__(self):
        return +self._pxy_deserialize()

    def __abs__(self):
        return abs(self._pxy_deserialize())

    def __invert__(self):
        return ~self._pxy_deserialize()

    def __int__(self):
        return int(self._pxy_deserialize())

    def __float__(self):
        return float(self._pxy_deserialize())

    def __complex__(self):
        return complex(self._pxy_deserialize())

    def __index__(self):
        return operator.index(self._pxy_deserialize())


@is_device_object.register(ProxyObject)
def obj_pxy_is_device_object(obj: ProxyObject):
    """
    In order to avoid de-serializing the proxied object, we check
    `is_cuda_object` instead of the default
    `hasattr(o, "__cuda_array_interface__")` check.
    """
    return obj._pxy_get().is_cuda_object


def handle_disk_serialized(pxy: ProxyDetail):
    """Handle serialization of an already disk serialized proxy

    On a shared filesystem, we do not have to deserialize; instead we make a
    hard link of the file. On a non-shared filesystem, we deserialize the
    proxy to host memory.
    """
    org_header, frames = pxy.obj
    header = _copy.deepcopy(org_header)
    if header["disk-io-header"]["shared-filesystem"]:
        from .proxify_host_file import ProxifyHostFile

        assert ProxifyHostFile._spill_to_disk
        new_path = ProxifyHostFile._spill_to_disk.gen_file_path()
        os.link(header["disk-io-header"]["path"], new_path)
        header["disk-io-header"]["path"] = new_path
    else:
        # When not on a shared filesystem, we deserialize to host memory inplace
        assert frames == []
        frames = disk_read(header.pop("disk-io-header"))
        if "compression" in header["serialize-header"]:
            frames = decompress(header["serialize-header"], frames)
        header = header["serialize-header"]
        pxy.serializer = header["serializer"]
    pxy.obj = (header, frames)
    return header, frames


@distributed.protocol.dask_serialize.register(ProxyObject)
def obj_pxy_dask_serialize(obj: ProxyObject):
    """The dask serialization of ProxyObject used by Dask when communicating using TCP

    As serializers, it uses "dask" or "pickle", which means that proxied CUDA
    objects are spilled to main memory before being communicated. Deserialization
    is needed, unless obj is serialized to disk on a shared filesystem, see
    `handle_disk_serialized()`.
    """
    pxy = obj._pxy_get(copy=True)
    if pxy.serializer == "disk":
        header, frames = handle_disk_serialized(pxy)
    else:
        header, frames = pxy.serialize(serializers=("dask", "pickle"))
    obj._pxy_set(pxy)
    return {
        "proxied-header": header,
        "obj-pxy-detail": pickle.dumps(pxy.get_init_args()),
    }, frames


@distributed.protocol.cuda.cuda_serialize.register(ProxyObject)
def obj_pxy_cuda_serialize(obj: ProxyObject):
    """The CUDA serialization of ProxyObject used by Dask when communicating using UCX

    As serializers, it uses "cuda", which means that proxied CUDA objects are
    _not_ spilled to main memory before being communicated. However, we still
    have to handle disk serialized proxies like in `obj_pxy_dask_serialize()`
    """
    pxy = obj._pxy_get(copy=True)
    if pxy.serializer in ("dask", "pickle"):
        header, frames = pxy.obj
    elif pxy.serializer == "disk":
        header, frames = handle_disk_serialized(pxy)
        obj._pxy_set(pxy)
    else:
        # Notice, since obj._pxy_serialize() is an inplace operation, we make a
        # shallow copy of `obj` to avoid introducing a CUDA-serialized object in
        # the worker's data store.
        header, frames = pxy.serialize(serializers=("cuda",))
    return {
        "proxied-header": header,
        "obj-pxy-detail": pickle.dumps(pxy.get_init_args()),
    }, frames


@distributed.protocol.dask_deserialize.register(ProxyObject)
@distributed.protocol.cuda.cuda_deserialize.register(ProxyObject)
def obj_pxy_dask_deserialize(header, frames):
    """
    The generic deserialization of ProxyObject. Notice, it doesn't deserialize
    the proxied object at this time. When accessed, the proxied object is
    deserialized using the same serializers that were used when the object was
    serialized.
    """
    args = pickle.loads(header["obj-pxy-detail"])
    if args["subclass"] is None:
        subclass = ProxyObject
    else:
        subclass = pickle.loads(args["subclass"])
    pxy = ProxyDetail(obj=(header["proxied-header"], frames), **args)
    if pxy.serializer == "disk":
        header, _ = pxy.obj
        path = header["disk-io-header"]["path"]
        # Make sure that the path is wrapped in a SpillToDiskFile instance
        if not isinstance(path, SpillToDiskFile):
            header["disk-io-header"]["path"] = SpillToDiskFile(path)
            assert os.path.exists(path)
    return subclass(pxy)


@dask.dataframe.core.get_parallel_type.register(ProxyObject)
def get_parallel_type_proxy_object(obj: ProxyObject):
    # Notice, `get_parallel_type()` needs an instance, not a type object
    return dask.dataframe.core.get_parallel_type(obj.__class__.__new__(obj.__class__))


def unproxify_input_wrapper(func):
    """Unproxify the input of `func`"""

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        args = [unproxy(d) for d in args]
        kwargs = {k: unproxy(v) for k, v in kwargs.items()}
        return func(*args, **kwargs)

    return wrapper


# Register dispatch of ProxyObject on all known dispatch objects
for dispatch in (
    dask.dataframe.core.hash_object_dispatch,
    make_meta_dispatch,
    dask.dataframe.utils.make_scalar,
    dask.dataframe.core.group_split_dispatch,
    dask.array.core.tensordot_lookup,
    dask.array.core.einsum_lookup,
    dask.array.core.concatenate_lookup,
):
    dispatch.register(ProxyObject, unproxify_input_wrapper(dispatch))

dask.dataframe.methods.concat_dispatch.register(
    ProxyObject, unproxify_input_wrapper(dask.dataframe.methods.concat)
)

# We overwrite the Dask dispatch of Pandas objects in order to
# deserialize all ProxyObjects before concatenating
dask.dataframe.methods.concat_dispatch.register(
    (pandas.DataFrame, pandas.Series, pandas.Index),
    unproxify_input_wrapper(concat_pandas),
)
rapidsai_public_repos/dask-cuda
rapidsai_public_repos/dask-cuda/dask_cuda/cuda_worker.py
from __future__ import absolute_import, division, print_function import asyncio import atexit import logging import os import warnings from toolz import valmap import dask from distributed import Nanny from distributed.core import Server from distributed.deploy.cluster import Cluster from distributed.proctitle import ( enable_proctitle_on_children, enable_proctitle_on_current, ) from distributed.worker_memory import parse_memory_limit from .device_host_file import DeviceHostFile from .initialize import initialize from .plugins import CPUAffinity, PreImport, RMMSetup from .proxify_host_file import ProxifyHostFile from .utils import ( cuda_visible_devices, get_cpu_affinity, get_n_gpus, get_ucx_config, nvml_device_index, parse_device_memory_limit, ) class CUDAWorker(Server): def __init__( self, scheduler=None, host=None, nthreads=1, name=None, memory_limit="auto", device_memory_limit="auto", rmm_pool_size=None, rmm_maximum_pool_size=None, rmm_managed_memory=False, rmm_async=False, rmm_release_threshold=None, rmm_log_directory=None, rmm_track_allocations=False, pid_file=None, resources=None, dashboard=True, dashboard_address=":0", local_directory=None, shared_filesystem=None, scheduler_file=None, interface=None, preload=[], dashboard_prefix=None, security=None, enable_tcp_over_ucx=None, enable_infiniband=None, enable_nvlink=None, enable_rdmacm=None, jit_unspill=None, worker_class=None, pre_import=None, **kwargs, ): # Required by RAPIDS libraries (e.g., cuDF) to ensure no context # initialization happens before we can set CUDA_VISIBLE_DEVICES os.environ["RAPIDS_NO_INITIALIZE"] = "True" enable_proctitle_on_current() enable_proctitle_on_children() try: nprocs = len(os.environ["CUDA_VISIBLE_DEVICES"].split(",")) except KeyError: nprocs = get_n_gpus() if nthreads < 1: raise ValueError("nthreads must be higher than 0.") # Set nthreads=1 when parsing mem_limit since it only depends on nprocs logger = logging.getLogger(__name__) memory_limit = parse_memory_limit( 
memory_limit=memory_limit, nthreads=1, total_cores=nprocs, logger=logger ) if pid_file: with open(pid_file, "w") as f: f.write(str(os.getpid())) def del_pid_file(): if os.path.exists(pid_file): os.remove(pid_file) atexit.register(del_pid_file) if resources: resources = resources.replace(",", " ").split() resources = dict(pair.split("=") for pair in resources) resources = valmap(float, resources) else: resources = None preload_argv = kwargs.pop("preload_argv", []) kwargs = {"worker_port": None, "listen_address": None, **kwargs} if ( scheduler is None and scheduler_file is None and dask.config.get("scheduler-address", None) is None ): raise ValueError( "No scheduler specified. A scheduler can be specified by " "passing an address through the SCHEDULER argument or " "'dask.scheduler-address' config option, or by passing the " "location of a scheduler file through the --scheduler-file " "option" ) if isinstance(scheduler, Cluster): scheduler = scheduler.scheduler_address if interface and host: raise ValueError("Can not specify both interface and host") if rmm_pool_size is not None or rmm_managed_memory: try: import rmm # noqa F401 except ImportError: raise ValueError( "RMM pool requested but module 'rmm' is not available. " "For installation instructions, please see " "https://github.com/rapidsai/rmm" ) # pragma: no cover else: if enable_nvlink: warnings.warn( "When using NVLink we recommend setting a " "`rmm_pool_size`. Please see: " "https://docs.rapids.ai/api/dask-cuda/nightly/ucx/ " "for more details" ) if enable_nvlink and rmm_managed_memory: raise ValueError( "RMM managed memory and NVLink are currently incompatible." ) # Ensure this parent dask-cuda-worker process uses the same UCX # configuration as child worker processes created by it. 
initialize( create_cuda_context=False, enable_tcp_over_ucx=enable_tcp_over_ucx, enable_infiniband=enable_infiniband, enable_nvlink=enable_nvlink, enable_rdmacm=enable_rdmacm, ) if jit_unspill is None: jit_unspill = dask.config.get("jit-unspill", default=False) if device_memory_limit is None and memory_limit is None: data = lambda _: {} elif jit_unspill: data = lambda i: ( ProxifyHostFile, { "device_memory_limit": parse_device_memory_limit( device_memory_limit, device_index=i ), "memory_limit": memory_limit, "shared_filesystem": shared_filesystem, }, ) else: data = lambda i: ( DeviceHostFile, { "device_memory_limit": parse_device_memory_limit( device_memory_limit, device_index=i ), "memory_limit": memory_limit, }, ) self.nannies = [ Nanny( scheduler, scheduler_file=scheduler_file, nthreads=nthreads, dashboard=dashboard, dashboard_address=dashboard_address, http_prefix=dashboard_prefix, resources=resources, memory_limit=memory_limit, interface=interface, host=host, preload=(list(preload) or []) + ["dask_cuda.initialize"], preload_argv=(list(preload_argv) or []) + ["--create-cuda-context"], security=security, env={"CUDA_VISIBLE_DEVICES": cuda_visible_devices(i)}, plugins={ CPUAffinity( get_cpu_affinity(nvml_device_index(i, cuda_visible_devices(i))) ), RMMSetup( initial_pool_size=rmm_pool_size, maximum_pool_size=rmm_maximum_pool_size, managed_memory=rmm_managed_memory, async_alloc=rmm_async, release_threshold=rmm_release_threshold, log_directory=rmm_log_directory, track_allocations=rmm_track_allocations, ), PreImport(pre_import), }, name=name if nprocs == 1 or name is None else str(name) + "-" + str(i), local_directory=local_directory, config={ "distributed.comm.ucx": get_ucx_config( enable_tcp_over_ucx=enable_tcp_over_ucx, enable_infiniband=enable_infiniband, enable_nvlink=enable_nvlink, enable_rdmacm=enable_rdmacm, ) }, data=data(nvml_device_index(i, cuda_visible_devices(i))), worker_class=worker_class, **kwargs, ) for i in range(nprocs) ] def __await__(self): return 
self._wait().__await__() async def _wait(self): await asyncio.gather(*self.nannies) async def finished(self): await asyncio.gather(*[n.finished() for n in self.nannies]) async def close(self, timeout=5): await asyncio.gather(*[n.close(timeout=timeout) for n in self.nannies])
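Each `Nanny` above is given its own `CUDA_VISIBLE_DEVICES` environment via `cuda_visible_devices(i)`. A rough sketch of the idea (assumed, simplified behavior; the sketch function and the fixed 4-GPU list are illustrative, not the real `dask_cuda.utils` implementation): worker `i` sees device `i` first, with the remaining devices rotated after it, so each worker's default device is distinct while all devices stay visible.

```python
def cuda_visible_devices_sketch(i, visible=None):
    """Return a rotated device list placing device index i first."""
    if visible is None:
        visible = [0, 1, 2, 3]  # pretend the machine has 4 GPUs
    n = len(visible)
    # Rotate the list so worker i's "first" (default) GPU is device i
    rotated = [visible[(i + j) % n] for j in range(n)]
    return ",".join(str(d) for d in rotated)

first = cuda_visible_devices_sketch(0)   # worker 0
third = cuda_visible_devices_sketch(2)   # worker 2
```

Because CUDA allocates on the first visible device by default, this rotation is enough to spread one worker process per GPU without any further configuration.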
rapidsai_public_repos/dask-cuda/dask_cuda/get_device_memory_objects.py
from typing import Set from dask.sizeof import sizeof from dask.utils import Dispatch dispatch = Dispatch(name="get_device_memory_objects") class DeviceMemoryId: """ID and size of device memory objects Instead of keeping a reference to device memory objects this class only saves the id and size in order to avoid delayed freeing. """ def __init__(self, obj: object): self.id = id(obj) self.nbytes = sizeof(obj) def __hash__(self) -> int: return self.id def __eq__(self, o) -> bool: return self.id == hash(o) def get_device_memory_ids(obj) -> Set[DeviceMemoryId]: """Find all CUDA device objects in `obj` Search through `obj` and find all CUDA device objects, which are objects that either are known to `dispatch` or implement `__cuda_array_interface__`. Parameters ---------- obj: Any Object to search through Returns ------- ret: Set[DeviceMemoryId] Set of CUDA device memory IDs """ return {DeviceMemoryId(o) for o in dispatch(obj)} @dispatch.register(object) def get_device_memory_objects_default(obj): from dask_cuda.proxy_object import ProxyObject if isinstance(obj, ProxyObject): return dispatch(obj._pxy_get().obj) if hasattr(obj, "data"): return dispatch(obj.data) owner = getattr(obj, "owner", getattr(obj, "_owner", None)) if owner is not None: return dispatch(owner) if hasattr(obj, "__cuda_array_interface__"): return [obj] return [] @dispatch.register(list) @dispatch.register(tuple) @dispatch.register(set) @dispatch.register(frozenset) def get_device_memory_objects_python_sequence(seq): ret = [] for s in seq: ret.extend(dispatch(s)) return ret @dispatch.register(dict) def get_device_memory_objects_python_dict(seq): ret = [] for s in seq.values(): ret.extend(dispatch(s)) return ret @dispatch.register_lazy("cupy") def get_device_memory_objects_register_cupy(): from cupy.cuda.memory import MemoryPointer @dispatch.register(MemoryPointer) def get_device_memory_objects_cupy(obj): return [obj.mem] @dispatch.register_lazy("cudf") def get_device_memory_objects_register_cudf(): 
import cudf.core.frame import cudf.core.index import cudf.core.multiindex import cudf.core.series @dispatch.register(cudf.core.frame.Frame) def get_device_memory_objects_cudf_frame(obj): ret = [] for col in obj._data.columns: ret += dispatch(col) return ret @dispatch.register(cudf.core.indexed_frame.IndexedFrame) def get_device_memory_objects_cudf_indexed_frame(obj): return dispatch(obj._index) + get_device_memory_objects_cudf_frame(obj) @dispatch.register(cudf.core.series.Series) def get_device_memory_objects_cudf_series(obj): return dispatch(obj._index) + dispatch(obj._column) @dispatch.register(cudf.core.index.RangeIndex) def get_device_memory_objects_cudf_range_index(obj): # Avoid materializing RangeIndex. This introduces some inaccuracies # in total device memory usage, which we accept because the memory # use of RangeIndexes is limited. return [] @dispatch.register(cudf.core.index.Index) def get_device_memory_objects_cudf_index(obj): return dispatch(obj._values) @dispatch.register(cudf.core.multiindex.MultiIndex) def get_device_memory_objects_cudf_multiindex(obj): return dispatch(obj._columns) @sizeof.register_lazy("cupy") def register_cupy(): # NB: this overwrites dask.sizeof.register_cupy() import cupy.cuda.memory @sizeof.register(cupy.cuda.memory.BaseMemory) def sizeof_cupy_base_memory(x): return int(x.size) @sizeof.register(cupy.ndarray) def sizeof_cupy_ndarray(x): return int(x.nbytes)
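The dispatch traversal above can be mimicked without CUDA: walk nested containers, collect leaf "buffer" objects, and deduplicate by `id()` the way `DeviceMemoryId` does, so a buffer reachable through several paths is only counted once. A minimal sketch with a made-up `FakeBuffer` leaf type standing in for device memory:

```python
class FakeBuffer:
    """Stand-in for a device buffer; nbytes is its size."""
    def __init__(self, nbytes):
        self.nbytes = nbytes

def find_buffers(obj):
    """Recursively collect FakeBuffer instances from nested containers."""
    if isinstance(obj, FakeBuffer):
        return [obj]
    if isinstance(obj, (list, tuple, set, frozenset)):
        out = []
        for item in obj:
            out.extend(find_buffers(item))
        return out
    if isinstance(obj, dict):
        out = []
        for v in obj.values():
            out.extend(find_buffers(v))
        return out
    return []  # unknown objects hold no device memory

buf = FakeBuffer(64)
# The same buffer reachable twice dedups to a single id, mirroring
# how DeviceMemoryId hashes on id(obj)
ids = {id(b) for b in find_buffers({"a": [buf, buf], "b": (FakeBuffer(8),)})}
```

Keeping only ids and sizes (rather than references) is what lets the real code measure device memory without delaying the buffers' deallocation.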
rapidsai_public_repos/dask-cuda/dask_cuda/cli.py
from __future__ import absolute_import, division, print_function import logging import click from tornado.ioloop import IOLoop, TimeoutError from dask import config as dask_config from distributed import Client from distributed.cli.utils import install_signal_handlers from distributed.preloading import validate_preload_argv from distributed.security import Security from distributed.utils import import_term from .cuda_worker import CUDAWorker from .utils import print_cluster_config logger = logging.getLogger(__name__) pem_file_option_type = click.Path(exists=True, resolve_path=True) scheduler = click.argument("scheduler", type=str, required=False) preload_argv = click.argument( "preload_argv", nargs=-1, type=click.UNPROCESSED, callback=validate_preload_argv ) scheduler_file = click.option( "--scheduler-file", type=str, default=None, help="""Filename to JSON encoded scheduler information. To be used in conjunction with the equivalent ``dask scheduler`` option.""", ) tls_ca_file = click.option( "--tls-ca-file", type=pem_file_option_type, default=None, help="""CA certificate(s) file for TLS (in PEM format). Can be a string (like ``"path/to/certs"``), or ``None`` for no certificate(s).""", ) tls_cert = click.option( "--tls-cert", type=pem_file_option_type, default=None, help="""Certificate file for TLS (in PEM format). Can be a string (like ``"path/to/certs"``), or ``None`` for no certificate(s).""", ) tls_key = click.option( "--tls-key", type=pem_file_option_type, default=None, help="""Private key file for TLS (in PEM format). Can be a string (like ``"path/to/certs"``), or ``None`` for no private key.""", ) @click.group def cuda(): """Subcommands to launch or query distributed workers with GPUs.""" @cuda.command(name="worker", context_settings=dict(ignore_unknown_options=True)) @scheduler @preload_argv @click.option( "--host", type=str, default=None, help="""IP address of serving host; should be visible to the scheduler and other workers. 
Can be a string (like ``"127.0.0.1"``) or ``None`` to fall back on the address of the interface specified by ``--interface`` or the default interface.""", ) @click.option( "--nthreads", type=int, default=1, show_default=True, help="Number of threads to be used for each Dask worker process.", ) @click.option( "--name", type=str, default=None, help="""A unique name for the worker. Can be a string (like ``"worker-1"``) or ``None`` for a nameless worker.""", ) @click.option( "--memory-limit", default="auto", show_default=True, help="""Size of the host LRU cache, which is used to determine when the worker starts spilling to disk (not available if JIT-Unspill is enabled). Can be an integer (bytes), float (fraction of total system memory), string (like ``"5GB"`` or ``"5000M"``), or ``"auto"``, 0, or ``None`` for no memory management.""", ) @click.option( "--device-memory-limit", default="0.8", show_default=True, help="""Size of the CUDA device LRU cache, which is used to determine when the worker starts spilling to host memory. Can be an integer (bytes), float (fraction of total device memory), string (like ``"5GB"`` or ``"5000M"``), or ``"auto"`` or 0 to disable spilling to host (i.e. allow full device memory usage).""", ) @click.option( "--rmm-pool-size", default=None, help="""RMM pool size to initialize each worker with. Can be an integer (bytes), float (fraction of total device memory), string (like ``"5GB"`` or ``"5000M"``), or ``None`` to disable RMM pools. .. note:: This size is a per-worker configuration, and not cluster-wide.""", ) @click.option( "--rmm-maximum-pool-size", default=None, help="""When ``--rmm-pool-size`` is specified, this argument indicates the maximum pool size. Can be an integer (bytes), float (fraction of total device memory), string (like ``"5GB"`` or ``"5000M"``) or ``None``. By default, the total available memory on the GPU is used. ``rmm_pool_size`` must be specified to use RMM pool and to set the maximum pool size. .. 
note:: This size is a per-worker configuration, and not cluster-wide.""", ) @click.option( "--rmm-managed-memory/--no-rmm-managed-memory", default=False, show_default=True, help="""Initialize each worker with RMM and set it to use managed memory. If disabled, RMM may still be used by specifying ``--rmm-pool-size``. .. warning:: Managed memory is currently incompatible with NVLink. Trying to enable both will result in failure.""", ) @click.option( "--rmm-async/--no-rmm-async", default=False, show_default=True, help="""Initialize each worker with RMM and set it to use RMM's asynchronous allocator. See ``rmm.mr.CudaAsyncMemoryResource`` for more info. .. warning:: The asynchronous allocator requires CUDA Toolkit 11.2 or newer. It is also incompatible with RMM pools and managed memory, trying to enable both will result in failure.""", ) @click.option( "--rmm-release-threshold", default=None, help="""When ``rmm.async`` is ``True`` and the pool size grows beyond this value, unused memory held by the pool will be released at the next synchronization point. Can be an integer (bytes), float (fraction of total device memory), string (like ``"5GB"`` or ``"5000M"``) or ``None``. By default, this feature is disabled. .. note:: This size is a per-worker configuration, and not cluster-wide.""", ) @click.option( "--rmm-log-directory", default=None, help="""Directory to write per-worker RMM log files to. The client and scheduler are not logged here. Can be a string (like ``"/path/to/logs/"``) or ``None`` to disable logging. .. note:: Logging will only be enabled if ``--rmm-pool-size`` or ``--rmm-managed-memory`` are specified.""", ) @click.option( "--rmm-track-allocations/--no-rmm-track-allocations", default=False, show_default=True, help="""Track memory allocations made by RMM. 
If ``True``, wraps the memory resource of each worker with a ``rmm.mr.TrackingResourceAdaptor`` that allows querying the amount of memory allocated by RMM.""", ) @click.option( "--pid-file", type=str, default="", help="File to write the process PID.", ) @click.option( "--resources", type=str, default="", help="""Resources for task constraints like ``"GPU=2 MEM=10e9"``.""", ) @click.option( "--dashboard/--no-dashboard", "dashboard", default=True, show_default=True, required=False, help="Launch the dashboard.", ) @click.option( "--dashboard-address", type=str, default=":0", show_default=True, help="Relative address to serve the dashboard (if enabled).", ) @click.option( "--local-directory", default=None, type=str, help="""Path on local machine to store temporary files. Can be a string (like ``"path/to/files"``) or ``None`` to fall back on the value of ``dask.temporary-directory`` in the local Dask configuration, using the current working directory if this is not set.""", ) @click.option( "--shared-filesystem/--no-shared-filesystem", default=None, type=bool, help="""If `--shared-filesystem` is specified, inform JIT-Unspill that `local_directory` is a shared filesystem available for all workers, whereas `--no-shared-filesystem` informs it may not assume it's a shared filesystem. If neither is specified, JIT-Unspill will decide based on the Dask config value specified by `"jit-unspill-shared-fs"`. Notice, a shared filesystem must support the `os.link()` operation.""", ) @scheduler_file @click.option( "--protocol", type=str, default=None, help="Protocol like tcp, tls, or ucx" ) @click.option( "--interface", type=str, default=None, help="""External interface used to connect to the scheduler. Usually an ethernet interface is used for connection, and not an InfiniBand interface (if one is available). 
Can be a string (like ``"eth0"`` for NVLink or ``"ib0"`` for InfiniBand) or ``None`` to fall back on the default interface.""", ) @click.option( "--preload", type=str, multiple=True, is_eager=True, help="""Module that should be loaded by each worker process like ``"foo.bar"`` or ``"/path/to/foo.py"``.""", ) @click.option( "--death-timeout", type=str, default=None, help="Seconds to wait for a scheduler before closing", ) @click.option( "--dashboard-prefix", type=str, default=None, help="""Prefix for the dashboard. Can be a string (like ...) or ``None`` for no prefix.""", ) @tls_ca_file @tls_cert @tls_key @click.option( "--enable-tcp-over-ucx/--disable-tcp-over-ucx", default=None, show_default=True, help="""Set environment variables to enable TCP over UCX, even if InfiniBand and NVLink are not supported or disabled.""", ) @click.option( "--enable-infiniband/--disable-infiniband", default=None, show_default=True, help="""Set environment variables to enable UCX over InfiniBand, implies ``--enable-tcp-over-ucx`` when enabled.""", ) @click.option( "--enable-nvlink/--disable-nvlink", default=None, show_default=True, help="""Set environment variables to enable UCX over NVLink, implies ``--enable-tcp-over-ucx`` when enabled.""", ) @click.option( "--enable-rdmacm/--disable-rdmacm", default=None, show_default=True, help="""Set environment variables to enable UCX RDMA connection manager support, requires ``--enable-infiniband``.""", ) @click.option( "--enable-jit-unspill/--disable-jit-unspill", default=None, help="""Enable just-in-time unspilling. Can be a boolean or ``None`` to fall back on the value of ``dask.jit-unspill`` in the local Dask configuration, disabling unspilling if this is not set. .. note:: This is experimental and doesn't support memory spilling to disk. 
See ``proxy_object.ProxyObject`` and ``proxify_host_file.ProxifyHostFile`` for more info.""", ) @click.option( "--worker-class", default=None, help="""Use a different class than Distributed's default (``distributed.Worker``) to spawn ``distributed.Nanny``.""", ) @click.option( "--pre-import", default=None, help="""Pre-import libraries as a Worker plugin to prevent long import times bleeding through later Dask operations. Should be a list of comma-separated names, such as "cudf,rmm".""", ) @click.option( "--multiprocessing-method", default="spawn", type=click.Choice(["spawn", "fork", "forkserver"]), help="""Method used to start new processes with multiprocessing""", ) def worker( scheduler, host, nthreads, name, memory_limit, device_memory_limit, rmm_pool_size, rmm_maximum_pool_size, rmm_managed_memory, rmm_async, rmm_release_threshold, rmm_log_directory, rmm_track_allocations, pid_file, resources, dashboard, dashboard_address, local_directory, shared_filesystem, scheduler_file, interface, preload, dashboard_prefix, tls_ca_file, tls_cert, tls_key, enable_tcp_over_ucx, enable_infiniband, enable_nvlink, enable_rdmacm, enable_jit_unspill, worker_class, pre_import, multiprocessing_method, **kwargs, ): """Launch a distributed worker with GPUs attached to an existing scheduler. A scheduler can be specified either through a URI passed through the ``SCHEDULER`` argument or a scheduler file passed through the ``--scheduler-file`` option. See https://docs.rapids.ai/api/dask-cuda/stable/quickstart.html#dask-cuda-worker for info. """ if multiprocessing_method == "forkserver": import multiprocessing.forkserver as f f.ensure_running() if tls_ca_file and tls_cert and tls_key: security = Security( tls_ca_file=tls_ca_file, tls_worker_cert=tls_cert, tls_worker_key=tls_key, ) else: security = None if isinstance(scheduler, str) and scheduler.startswith("-"): raise ValueError( "The scheduler address can't start with '-'. 
Please check " "your command line arguments, you probably attempted to use " "unsupported one. Scheduler address: %s" % scheduler ) if worker_class is not None: worker_class = import_term(worker_class) with dask_config.set( {"distributed.worker.multiprocessing-method": multiprocessing_method} ): worker = CUDAWorker( scheduler, host, nthreads, name, memory_limit, device_memory_limit, rmm_pool_size, rmm_maximum_pool_size, rmm_managed_memory, rmm_async, rmm_release_threshold, rmm_log_directory, rmm_track_allocations, pid_file, resources, dashboard, dashboard_address, local_directory, shared_filesystem, scheduler_file, interface, preload, dashboard_prefix, security, enable_tcp_over_ucx, enable_infiniband, enable_nvlink, enable_rdmacm, enable_jit_unspill, worker_class, pre_import, **kwargs, ) async def on_signal(signum): logger.info("Exiting on signal %d", signum) await worker.close() async def run(): await worker await worker.finished() loop = IOLoop.current() install_signal_handlers(loop, cleanup=on_signal) try: loop.run_sync(run) except (KeyboardInterrupt, TimeoutError): pass finally: logger.info("End worker") @cuda.command(name="config", context_settings=dict(ignore_unknown_options=True)) @scheduler @preload_argv @scheduler_file @tls_ca_file @tls_cert @tls_key def config( scheduler, scheduler_file, tls_ca_file, tls_cert, tls_key, **kwargs, ): """Query an existing GPU cluster's configuration. A cluster can be specified either through a URI passed through the ``SCHEDULER`` argument or a scheduler file passed through the ``--scheduler-file`` option. """ if ( scheduler is None and scheduler_file is None and dask_config.get("scheduler-address", None) is None ): raise ValueError( "No scheduler specified. 
A scheduler can be specified by " "passing an address through the SCHEDULER argument or " "'dask.scheduler-address' config option, or by passing the " "location of a scheduler file through the --scheduler-file " "option" ) if isinstance(scheduler, str) and scheduler.startswith("-"): raise ValueError( "The scheduler address can't start with '-'. Please check " "your command line arguments, you probably attempted to use " "unsupported one. Scheduler address: %s" % scheduler ) if tls_ca_file and tls_cert and tls_key: security = Security( tls_ca_file=tls_ca_file, tls_worker_cert=tls_cert, tls_worker_key=tls_key, ) else: security = None if scheduler_file is not None: client = Client(scheduler_file=scheduler_file, security=security) else: client = Client(scheduler, security=security) print_cluster_config(client) if __name__ == "__main__": worker()
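Both the `worker` and `config` commands above only build a `Security` object when all three TLS files are supplied. That all-or-nothing check can be sketched as follows (hypothetical helper returning a plain dict, not the real `distributed.Security` API):

```python
def make_security(tls_ca_file, tls_cert, tls_key):
    """Return a dict standing in for distributed.Security, or None."""
    if tls_ca_file and tls_cert and tls_key:
        return {
            "tls_ca_file": tls_ca_file,
            "tls_worker_cert": tls_cert,
            "tls_worker_key": tls_key,
        }
    # A partial TLS configuration silently falls back to no security
    return None

full = make_security("ca.pem", "cert.pem", "key.pem")
partial = make_security("ca.pem", None, None)
```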
rapidsai_public_repos/dask-cuda/dask_cuda/device_host_file.py
import itertools import logging import os import time import numpy from zict import Buffer, Func from zict.common import ZictBase import dask from distributed.protocol import ( dask_deserialize, dask_serialize, deserialize, deserialize_bytes, serialize, serialize_bytelist, ) from distributed.sizeof import safe_sizeof from distributed.spill import AnyKeyFile as KeyAsStringFile from distributed.utils import nbytes from .is_device_object import is_device_object from .is_spillable_object import is_spillable_object from .utils import nvtx_annotate def _serialize_bytelist(x, **kwargs): kwargs["on_error"] = "raise" compression = dask.config.get("distributed.worker.memory.spill-compression") return serialize_bytelist(x, compression=compression, **kwargs) class LoggedBuffer(Buffer): """Extends zict.Buffer with logging capabilities Two arguments `fast_name` and `slow_name` are passed to constructor that identify a user-friendly name for logging of where spilling is going from/to. For example, their names can be "Device" and "Host" to identify that spilling is happening from a CUDA device into system memory. """ def __init__(self, *args, fast_name="Fast", slow_name="Slow", addr=None, **kwargs): self.addr = "Unknown Address" if addr is None else addr self.fast_name = fast_name self.slow_name = slow_name self.msg_template = ( "Worker at <%s>: Spilled key %s with %s bytes from %s to %s in %s seconds" ) # It is a bit hacky to forcefully capture the "distributed.worker" logger, # eventually it would be better to have a different logger. For now this # is ok, allowing users to read logs with client.get_worker_logs(), a # proper solution would require changes to Distributed. 
self.logger = logging.getLogger("distributed.worker") super().__init__(*args, **kwargs) self.total_time_fast_to_slow = 0.0 self.total_time_slow_to_fast = 0.0 def fast_to_slow(self, key, value): start = time.time() ret = super().fast_to_slow(key, value) total = time.time() - start self.total_time_fast_to_slow += total self.logger.info( self.msg_template % ( self.addr, key, safe_sizeof(value), self.fast_name, self.slow_name, total, ) ) return ret def slow_to_fast(self, key): start = time.time() ret = super().slow_to_fast(key) total = time.time() - start self.total_time_slow_to_fast += total self.logger.info( self.msg_template % (self.addr, key, safe_sizeof(ret), self.slow_name, self.fast_name, total) ) return ret def set_address(self, addr): self.addr = addr def get_total_spilling_time(self): return { ( "Total spilling time from %s to %s" % (self.fast_name, self.slow_name) ): self.total_time_fast_to_slow, ( "Total spilling time from %s to %s" % (self.slow_name, self.fast_name) ): self.total_time_slow_to_fast, } class DeviceSerialized: """Store device object on the host This stores a device-side object as 1. A msgpack encodable header 2. A list of `bytes`-like objects (like NumPy arrays) that are in host memory """ def __init__(self, header, frames): self.header = header self.frames = frames def __sizeof__(self): return sum(map(nbytes, self.frames)) def __reduce_ex__(self, protocol): header, frames = device_serialize(self) # Since pickle cannot handle memoryviews, we convert them # to NumPy arrays (zero-copy). 
frames = [ (numpy.asarray(f) if isinstance(f, memoryview) else f) for f in frames ] return device_deserialize, (header, frames) @dask_serialize.register(DeviceSerialized) def device_serialize(obj): header = {"obj-header": obj.header} frames = obj.frames return header, frames @dask_deserialize.register(DeviceSerialized) def device_deserialize(header, frames): return DeviceSerialized(header["obj-header"], frames) @nvtx_annotate("SPILL_D2H", color="red", domain="dask_cuda") def device_to_host(obj: object) -> DeviceSerialized: header, frames = serialize(obj, serializers=("dask", "pickle"), on_error="raise") return DeviceSerialized(header, frames) @nvtx_annotate("SPILL_H2D", color="green", domain="dask_cuda") def host_to_device(s: DeviceSerialized) -> object: return deserialize(s.header, s.frames) class DeviceHostFile(ZictBase): """Manages serialization/deserialization of objects. Three LRU cache levels are controlled, for device, host and disk. Each level takes care of serializing objects once its limit has been reached and pass it to the subsequent level. Similarly, each cache may deserialize the object, but storing it back in the appropriate cache, depending on the type of object being deserialized. Parameters ---------- worker_local_directory: path Path where to store serialized objects on disk device_memory_limit: int Number of bytes of CUDA device memory for device LRU cache, spills to host cache once filled. memory_limit: int Number of bytes of host memory for host LRU cache, spills to disk once filled. Setting this to `0` or `None` means unlimited host memory, implies no spilling to disk. log_spilling: bool If True, all spilling operations will be logged directly to distributed.worker with an INFO loglevel. This will eventually be replaced by a Dask configuration flag. """ def __init__( self, # So named such that dask will pass in the worker's local # directory when constructing this through the "data" callback. 
worker_local_directory, *, device_memory_limit=None, memory_limit=None, log_spilling=False, ): self.disk_func_path = os.path.join(worker_local_directory, "storage") os.makedirs(self.disk_func_path, exist_ok=True) if memory_limit == 0: memory_limit = None self.host_func = dict() self.disk_func = Func( _serialize_bytelist, deserialize_bytes, # Task keys are not strings, so this takes care of # converting arbitrary tuple keys into a string before # handing off to zict.File KeyAsStringFile(self.disk_func_path), ) host_buffer_kwargs = {} device_buffer_kwargs = {} buffer_class = Buffer if log_spilling is True: buffer_class = LoggedBuffer host_buffer_kwargs = {"fast_name": "Host", "slow_name": "Disk"} device_buffer_kwargs = {"fast_name": "Device", "slow_name": "Host"} if memory_limit is None: self.host_buffer = self.host_func else: self.host_buffer = buffer_class( self.host_func, self.disk_func, memory_limit, weight=lambda k, v: safe_sizeof(v), **host_buffer_kwargs, ) self.device_keys = set() self.device_func = dict() self.device_host_func = Func(device_to_host, host_to_device, self.host_buffer) self.device_buffer = Buffer( self.device_func, self.device_host_func, device_memory_limit, weight=lambda k, v: safe_sizeof(v), **device_buffer_kwargs, ) self.device = self.device_buffer.fast.d self.host = ( self.host_buffer if memory_limit is None else self.host_buffer.fast.d ) self.disk = None if memory_limit is None else self.host_buffer.slow.d # For Worker compatibility only, where `fast` is host memory buffer self.fast = self.host_buffer if memory_limit is None else self.host_buffer.fast # Dict of objects that will not be spilled by DeviceHostFile. 
self.others = {} def __setitem__(self, key, value): if key in self.device_buffer: # Make sure we register the removal of an existing key del self[key] if is_spillable_object(value): self.others[key] = value elif is_device_object(value): self.device_keys.add(key) self.device_buffer[key] = value else: self.host_buffer[key] = value def __getitem__(self, key): if key in self.others: return self.others[key] elif key in self.device_keys: return self.device_buffer[key] elif key in self.host_buffer: return self.host_buffer[key] raise KeyError(key) def __len__(self): return len(self.device_buffer) + len(self.others) def __iter__(self): return itertools.chain(self.device_buffer, self.others) def __delitem__(self, key): self.device_keys.discard(key) if key in self.others: del self.others[key] else: del self.device_buffer[key] def evict(self): """Evicts least recently used host buffer (aka, CPU or system memory) Implements distributed.spill.ManualEvictProto interface""" try: _, _, weight = self.host_buffer.fast.evict() return weight except Exception: # We catch all `Exception`s, just like zict.LRU return -1 def set_address(self, addr): if isinstance(self.host_buffer, LoggedBuffer): self.host_buffer.set_address(addr) self.device_buffer.set_address(addr) def get_total_spilling_time(self): ret = {} if isinstance(self.device_buffer, LoggedBuffer): ret = {**ret, **self.device_buffer.get_total_spilling_time()} if isinstance(self.host_buffer, LoggedBuffer): ret = {**ret, **self.host_buffer.get_total_spilling_time()} return ret
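The `DeviceHostFile` layering above is driven by `zict.Buffer`. Its core spill/unspill mechanic can be sketched with a tiny two-tier store (illustrative only: integer values stand in for object sizes, and the real class uses zict's weighted LRU eviction plus serialization between tiers):

```python
from collections import OrderedDict

class TieredStore:
    """Two-tier store: fast dict with a byte budget, spilling LRU to slow."""

    def __init__(self, fast_limit):
        self.fast = OrderedDict()  # LRU order: oldest first
        self.slow = {}             # spill target (e.g. host or disk)
        self.fast_limit = fast_limit

    def __setitem__(self, key, value):
        self.fast[key] = value
        self.fast.move_to_end(key)  # mark as most recently used
        while sum(self.fast.values()) > self.fast_limit:
            # "Spill": move the least recently used entry to the slow tier
            old_key, old_val = self.fast.popitem(last=False)
            self.slow[old_key] = old_val

    def __getitem__(self, key):
        if key in self.fast:
            self.fast.move_to_end(key)
            return self.fast[key]
        # "Unspill": bring the value back to the fast tier on access
        value = self.slow.pop(key)
        self[key] = value
        return self.fast[key]

store = TieredStore(fast_limit=10)
store["a"] = 6
store["b"] = 6  # exceeds the budget, so "a" spills to the slow tier
```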
rapidsai_public_repos/dask-cuda
rapidsai_public_repos/dask-cuda/dask_cuda/local_cuda_cluster.py
import copy
import logging
import os
import warnings
from functools import partial

import dask
from distributed import LocalCluster, Nanny, Worker
from distributed.worker_memory import parse_memory_limit

from .device_host_file import DeviceHostFile
from .initialize import initialize
from .plugins import CPUAffinity, PreImport, RMMSetup
from .proxify_host_file import ProxifyHostFile
from .utils import (
    cuda_visible_devices,
    get_cpu_affinity,
    get_ucx_config,
    nvml_device_index,
    parse_cuda_visible_device,
    parse_device_memory_limit,
)


class LoggedWorker(Worker):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    async def start(self):
        await super().start()
        self.data.set_address(self.address)


class LoggedNanny(Nanny):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, worker_class=LoggedWorker, **kwargs)


class LocalCUDACluster(LocalCluster):
    """A variant of ``dask.distributed.LocalCluster`` that uses one GPU per process.

    This assigns a different ``CUDA_VISIBLE_DEVICES`` environment variable to each
    Dask worker process.

    For machines with a complex architecture mapping CPUs, GPUs, and network
    hardware, such as NVIDIA DGX-1 and DGX-2, this class creates a local cluster
    that tries to respect this hardware as much as possible.

    Each worker process is automatically assigned the correct CPU cores and network
    interface cards to maximize performance. If UCX and UCX-Py are available,
    InfiniBand and NVLink connections can be used to optimize data transfer
    performance.

    Parameters
    ----------
    CUDA_VISIBLE_DEVICES : str, list of int, or None, default None
        GPUs to restrict activity to. Can be a string (like ``"0,1,2,3"``), list
        (like ``[0, 1, 2, 3]``), or ``None`` to use all available GPUs.
    n_workers : int or None, default None
        Number of workers. Can be an integer or ``None`` to fall back on the GPUs
        specified by ``CUDA_VISIBLE_DEVICES``. The value of ``n_workers`` must be
        smaller than or equal to the number of GPUs specified in
        ``CUDA_VISIBLE_DEVICES`` when the latter is specified, and if smaller,
        only the first ``n_workers`` GPUs will be used.
    threads_per_worker : int, default 1
        Number of threads to be used for each Dask worker process.
    memory_limit : int, float, str, or None, default "auto"
        Size of the host LRU cache, which is used to determine when the worker
        starts spilling to disk (not available if JIT-Unspill is enabled). Can be
        an integer (bytes), float (fraction of total system memory), string (like
        ``"5GB"`` or ``"5000M"``), or ``"auto"``, 0, or ``None`` for no memory
        management.
    device_memory_limit : int, float, str, or None, default 0.8
        Size of the CUDA device LRU cache, which is used to determine when the
        worker starts spilling to host memory. Can be an integer (bytes), float
        (fraction of total device memory), string (like ``"5GB"`` or ``"5000M"``),
        or ``"auto"``, 0, or ``None`` to disable spilling to host (i.e. allow
        full device memory usage).
    local_directory : str or None, default None
        Path on local machine to store temporary files. Can be a string (like
        ``"path/to/files"``) or ``None`` to fall back on the value of
        ``dask.temporary-directory`` in the local Dask configuration, using the
        current working directory if this is not set.
    shared_filesystem : bool or None, default None
        Whether the ``local_directory`` above is shared between all workers or
        not. If ``None``, the "jit-unspill-shared-fs" config value is used, which
        defaults to True. Notice, in all other cases this option defaults to
        False, but on a local cluster it defaults to True -- we assume all
        workers use the same filesystem.
    protocol : str or None, default None
        Protocol to use for communication. Can be a string (like ``"tcp"`` or
        ``"ucx"``), or ``None`` to automatically choose the correct protocol.
    enable_tcp_over_ucx : bool, default None
        Set environment variables to enable TCP over UCX, even if InfiniBand and
        NVLink are not supported or disabled.
    enable_infiniband : bool, default None
        Set environment variables to enable UCX over InfiniBand, requires
        ``protocol="ucx"`` and implies ``enable_tcp_over_ucx=True`` when ``True``.
    enable_nvlink : bool, default None
        Set environment variables to enable UCX over NVLink, requires
        ``protocol="ucx"`` and implies ``enable_tcp_over_ucx=True`` when ``True``.
    enable_rdmacm : bool, default None
        Set environment variables to enable UCX RDMA connection manager support,
        requires ``protocol="ucx"`` and ``enable_infiniband=True``.
    rmm_pool_size : int, str or None, default None
        RMM pool size to initialize each worker with. Can be an integer (bytes),
        float (fraction of total device memory), string (like ``"5GB"`` or
        ``"5000M"``), or ``None`` to disable RMM pools.

        .. note::
            This size is a per-worker configuration, and not cluster-wide.
    rmm_maximum_pool_size : int, str or None, default None
        When ``rmm_pool_size`` is set, this argument indicates the maximum pool
        size. Can be an integer (bytes), float (fraction of total device memory),
        string (like ``"5GB"`` or ``"5000M"``) or ``None``. By default, the total
        available memory on the GPU is used. ``rmm_pool_size`` must be specified
        to use RMM pool and to set the maximum pool size.

        .. note::
            This size is a per-worker configuration, and not cluster-wide.
    rmm_managed_memory : bool, default False
        Initialize each worker with RMM and set it to use managed memory. If
        disabled, RMM may still be used by specifying ``rmm_pool_size``.

        .. warning::
            Managed memory is currently incompatible with NVLink. Trying to
            enable both will result in an exception.
    rmm_async : bool, default False
        Initialize each worker with RMM and set it to use RMM's asynchronous
        allocator. See ``rmm.mr.CudaAsyncMemoryResource`` for more info.

        .. warning::
            The asynchronous allocator requires CUDA Toolkit 11.2 or newer. It is
            also incompatible with RMM pools and managed memory. Trying to enable
            both will result in an exception.
    rmm_release_threshold : int, str or None, default None
        When ``rmm_async=True`` and the pool size grows beyond this value, unused
        memory held by the pool will be released at the next synchronization
        point. Can be an integer (bytes), float (fraction of total device
        memory), string (like ``"5GB"`` or ``"5000M"``) or ``None``. By default,
        this feature is disabled.

        .. note::
            This size is a per-worker configuration, and not cluster-wide.
    rmm_log_directory : str or None, default None
        Directory to write per-worker RMM log files to. The client and scheduler
        are not logged here. Can be a string (like ``"/path/to/logs/"``) or
        ``None`` to disable logging.

        .. note::
            Logging will only be enabled if ``rmm_pool_size`` is specified or
            ``rmm_managed_memory=True``.
    rmm_track_allocations : bool, default False
        If True, wraps the memory resource used by each worker with a
        ``rmm.mr.TrackingResourceAdaptor``, which tracks the amount of memory
        allocated.

        .. note::
            This option enables additional diagnostics to be collected and
            reported by the Dask dashboard. However, there is significant
            overhead associated with this and it should only be used for
            debugging and memory profiling.
    jit_unspill : bool or None, default None
        Enable just-in-time unspilling. Can be a boolean or ``None`` to fall back
        on the value of ``dask.jit-unspill`` in the local Dask configuration,
        disabling unspilling if this is not set.

        .. note::
            This is experimental and doesn't support memory spilling to disk. See
            ``proxy_object.ProxyObject`` and ``proxify_host_file.ProxifyHostFile``
            for more info.
    log_spilling : bool, default True
        Enable logging of spilling operations directly to ``distributed.Worker``
        with an ``INFO`` log level.
    pre_import : str, list or None, default None
        Pre-import libraries as a Worker plugin to prevent long import times
        bleeding through later Dask operations. Should be a string of
        comma-separated names, such as "cudf,rmm", or a list of strings such as
        ["cudf", "rmm"].

    Examples
    --------
    >>> from dask_cuda import LocalCUDACluster
    >>> from dask.distributed import Client
    >>> cluster = LocalCUDACluster()
    >>> client = Client(cluster)

    Raises
    ------
    TypeError
        If InfiniBand or NVLink are enabled and ``protocol!="ucx"``.
    ValueError
        If RMM pool, RMM managed memory or RMM async allocator are requested but
        RMM cannot be imported.
        If RMM managed memory and asynchronous allocator are both enabled.
        If RMM maximum pool size is set but RMM pool size is not.
        If RMM maximum pool size is set but RMM async allocator is used.
        If RMM release threshold is set but the RMM async allocator is not being
        used.

    See Also
    --------
    LocalCluster
    """

    def __init__(
        self,
        CUDA_VISIBLE_DEVICES=None,
        n_workers=None,
        threads_per_worker=1,
        memory_limit="auto",
        device_memory_limit=0.8,
        data=None,
        local_directory=None,
        shared_filesystem=None,
        protocol=None,
        enable_tcp_over_ucx=None,
        enable_infiniband=None,
        enable_nvlink=None,
        enable_rdmacm=None,
        rmm_pool_size=None,
        rmm_maximum_pool_size=None,
        rmm_managed_memory=False,
        rmm_async=False,
        rmm_release_threshold=None,
        rmm_log_directory=None,
        rmm_track_allocations=False,
        jit_unspill=None,
        log_spilling=False,
        worker_class=None,
        pre_import=None,
        **kwargs,
    ):
        # Required by RAPIDS libraries (e.g., cuDF) to ensure no context
        # initialization happens before we can set CUDA_VISIBLE_DEVICES
        os.environ["RAPIDS_NO_INITIALIZE"] = "True"

        if threads_per_worker < 1:
            raise ValueError("threads_per_worker must be higher than 0.")

        if CUDA_VISIBLE_DEVICES is None:
            CUDA_VISIBLE_DEVICES = cuda_visible_devices(0)
        if isinstance(CUDA_VISIBLE_DEVICES, str):
            CUDA_VISIBLE_DEVICES = CUDA_VISIBLE_DEVICES.split(",")
        CUDA_VISIBLE_DEVICES = list(
            map(parse_cuda_visible_device, CUDA_VISIBLE_DEVICES)
        )
        if n_workers is None:
            n_workers = len(CUDA_VISIBLE_DEVICES)
        if n_workers < 1:
            raise ValueError("Number of workers cannot be less than 1.")
        # Set nthreads=1 when parsing mem_limit since it only depends on n_workers
        logger = logging.getLogger(__name__)
        self.memory_limit = parse_memory_limit(
            memory_limit=memory_limit,
            nthreads=1,
            total_cores=n_workers,
            logger=logger,
        )
        self.device_memory_limit = parse_device_memory_limit(
            device_memory_limit, device_index=nvml_device_index(0, CUDA_VISIBLE_DEVICES)
        )

        self.rmm_pool_size = rmm_pool_size
        self.rmm_maximum_pool_size = rmm_maximum_pool_size
        self.rmm_managed_memory = rmm_managed_memory
        self.rmm_async = rmm_async
        self.rmm_release_threshold = rmm_release_threshold
        if rmm_pool_size is not None or rmm_managed_memory or rmm_async:
            try:
                import rmm  # noqa F401
            except ImportError:
                raise ValueError(
                    "RMM pool or managed memory requested but module 'rmm' "
                    "is not available. For installation instructions, please "
                    "see https://github.com/rapidsai/rmm"
                )  # pragma: no cover
        else:
            if enable_nvlink:
                warnings.warn(
                    "When using NVLink we recommend setting a "
                    "`rmm_pool_size`. Please see: "
                    "https://docs.rapids.ai/api/dask-cuda/nightly/ucx/ "
                    "for more details"
                )
        self.rmm_log_directory = rmm_log_directory
        self.rmm_track_allocations = rmm_track_allocations

        if not kwargs.pop("processes", True):
            raise ValueError(
                "Processes are necessary in order to use multiple GPUs with Dask"
            )

        if shared_filesystem is None:
            # Notice, we assume a shared filesystem
            shared_filesystem = dask.config.get("jit-unspill-shared-fs", default=True)

        if jit_unspill is None:
            jit_unspill = dask.config.get("jit-unspill", default=False)

        data = kwargs.pop("data", None)
        if data is None:
            if device_memory_limit is None and memory_limit is None:
                data = {}
            elif jit_unspill:
                data = (
                    ProxifyHostFile,
                    {
                        "device_memory_limit": self.device_memory_limit,
                        "memory_limit": self.memory_limit,
                        "shared_filesystem": shared_filesystem,
                    },
                )
            else:
                data = (
                    DeviceHostFile,
                    {
                        "device_memory_limit": self.device_memory_limit,
                        "memory_limit": self.memory_limit,
                        "log_spilling": log_spilling,
                    },
                )

        if enable_tcp_over_ucx or enable_infiniband or enable_nvlink:
            if protocol is None:
                protocol = "ucx"
            elif protocol not in ["ucx", "ucxx"]:
                raise TypeError(
                    "Enabling InfiniBand or NVLink requires protocol='ucx' or "
                    "protocol='ucxx'"
                )

        self.host = kwargs.get("host", None)

        initialize(
            create_cuda_context=False,
            enable_tcp_over_ucx=enable_tcp_over_ucx,
            enable_nvlink=enable_nvlink,
            enable_infiniband=enable_infiniband,
            enable_rdmacm=enable_rdmacm,
        )

        if worker_class is not None:
            if log_spilling is True:
                raise ValueError(
                    "Cannot enable `log_spilling` when `worker_class` is specified. If "
                    "logging is needed, ensure `worker_class` is a subclass of "
                    "`distributed.local_cuda_cluster.LoggedNanny` or a subclass of "
                    "`distributed.local_cuda_cluster.LoggedWorker`, and specify "
                    "`log_spilling=False`."
                )
            if not issubclass(worker_class, Nanny):
                worker_class = partial(Nanny, worker_class=worker_class)

        self.pre_import = pre_import

        super().__init__(
            n_workers=0,
            threads_per_worker=threads_per_worker,
            memory_limit=self.memory_limit,
            processes=True,
            data=data,
            local_directory=local_directory,
            protocol=protocol,
            worker_class=worker_class,
            config={
                "distributed.comm.ucx": get_ucx_config(
                    enable_tcp_over_ucx=enable_tcp_over_ucx,
                    enable_nvlink=enable_nvlink,
                    enable_infiniband=enable_infiniband,
                    enable_rdmacm=enable_rdmacm,
                )
            },
            **kwargs,
        )

        self.new_spec["options"]["preload"] = self.new_spec["options"].get(
            "preload", []
        ) + ["dask_cuda.initialize"]
        self.new_spec["options"]["preload_argv"] = self.new_spec["options"].get(
            "preload_argv", []
        ) + ["--create-cuda-context", "--protocol", protocol]

        self.cuda_visible_devices = CUDA_VISIBLE_DEVICES
        self.scale(n_workers)
        self.sync(self._correct_state)

    def new_worker_spec(self):
        try:
            name = min(set(self.cuda_visible_devices) - set(self.worker_spec))
        except Exception:
            raise ValueError(
                "Can not scale beyond visible devices", self.cuda_visible_devices
            )

        spec = copy.deepcopy(self.new_spec)
        worker_count = self.cuda_visible_devices.index(name)
        visible_devices = cuda_visible_devices(worker_count, self.cuda_visible_devices)
        spec["options"].update(
            {
                "env": {
                    "CUDA_VISIBLE_DEVICES": visible_devices,
                },
                "plugins": {
                    CPUAffinity(
                        get_cpu_affinity(nvml_device_index(0, visible_devices))
                    ),
                    RMMSetup(
                        initial_pool_size=self.rmm_pool_size,
                        maximum_pool_size=self.rmm_maximum_pool_size,
                        managed_memory=self.rmm_managed_memory,
                        async_alloc=self.rmm_async,
                        release_threshold=self.rmm_release_threshold,
                        log_directory=self.rmm_log_directory,
                        track_allocations=self.rmm_track_allocations,
                    ),
                    PreImport(self.pre_import),
                },
            }
        )

        return {name: spec}
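In `new_worker_spec`, each worker gets its own `CUDA_VISIBLE_DEVICES` string via `cuda_visible_devices(worker_count, ...)`, rotated so the worker's assigned GPU comes first and the CUDA context lands on the intended device. A sketch of that rotation, assuming round-robin behavior (`rotate_visible_devices` is a hypothetical stand-in for `dask_cuda.utils.cuda_visible_devices`, not the real function):

```python
def rotate_visible_devices(i, devices):
    # Worker i sees all devices, but with its own device listed first,
    # so CUDA context creation happens on the intended GPU.
    n = len(devices)
    return ",".join(str(devices[(i + j) % n]) for j in range(n))


# One rotated device list per worker on a hypothetical 4-GPU machine
specs = [rotate_visible_devices(i, [0, 1, 2, 3]) for i in range(4)]
```

Each worker then sets this string as its `CUDA_VISIBLE_DEVICES` environment variable before any CUDA context is created, which is why the constructor also exports `RAPIDS_NO_INITIALIZE`.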
rapidsai_public_repos/dask-cuda/dask_cuda/utils_test.py
from typing import Literal

import distributed
from distributed import Nanny, Worker


class MockWorker(Worker):
    """Mock Worker class preventing NVML from getting used by SystemMonitor.

    By preventing the Worker from initializing NVML in the SystemMonitor, we can
    mock test multiple devices in `CUDA_VISIBLE_DEVICES` behavior with single-GPU
    machines.
    """

    def __init__(self, *args, **kwargs):
        distributed.diagnostics.nvml.device_get_count = MockWorker.device_get_count
        self._device_get_count = distributed.diagnostics.nvml.device_get_count
        super().__init__(*args, **kwargs)

    def __del__(self):
        distributed.diagnostics.nvml.device_get_count = self._device_get_count

    @staticmethod
    def device_get_count():
        return 0


class IncreasedCloseTimeoutNanny(Nanny):
    """Increase `Nanny`'s close timeout.

    The internal close timeout mechanism of `Nanny` recomputes the time left to
    kill the `Worker` process based on elapsed time of the close task, which may
    leave very little time for the subprocess to shut down cleanly, causing tests
    to fail when the system is under higher load.

    This class increases the close timeout of 5.0 seconds that `Nanny` sets by
    default, which can be overridden via Distributed's public API.

    This class can be used with the `worker_class` argument of `LocalCluster` or
    `LocalCUDACluster` to provide a much higher default of 30.0 seconds.
    """

    async def close(  # type:ignore[override]
        self, timeout: float = 30.0, reason: str = "nanny-close"
    ) -> Literal["OK"]:
        return await super().close(timeout=timeout, reason=reason)
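`MockWorker` works by swapping out a module attribute (`distributed.diagnostics.nvml.device_get_count`) for the lifetime of the worker and putting it back afterwards. The patch-and-restore pattern itself can be sketched in isolation with a context manager; the `nvml` class here is a hypothetical stand-in namespace, not the real `distributed.diagnostics.nvml` module:

```python
from contextlib import contextmanager


class nvml:
    """Stand-in for a module exposing a device-count probe (hypothetical)."""

    @staticmethod
    def device_get_count():
        return 4


@contextmanager
def patched_device_count(count):
    """Temporarily replace nvml.device_get_count, restoring the original
    on exit -- the same patch-and-restore idea MockWorker applies over
    the worker's lifetime."""
    original = nvml.device_get_count
    nvml.device_get_count = lambda: count
    try:
        yield
    finally:
        nvml.device_get_count = original


with patched_device_count(0):
    inside = nvml.device_get_count()   # patched value
outside = nvml.device_get_count()      # original restored
```

A context manager guarantees restoration even on error; `MockWorker` instead restores in `__del__`, tying the patch to the worker object's lifetime.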
rapidsai_public_repos/dask-cuda/dask_cuda/is_device_object.py
from __future__ import absolute_import, division, print_function

from dask.utils import Dispatch

is_device_object = Dispatch(name="is_device_object")


@is_device_object.register(object)
def is_device_object_default(o):
    return hasattr(o, "__cuda_array_interface__")


@is_device_object.register(list)
@is_device_object.register(tuple)
@is_device_object.register(set)
@is_device_object.register(frozenset)
def is_device_object_python_collection(seq):
    return any([is_device_object(s) for s in seq])


@is_device_object.register(dict)
def is_device_object_python_dict(seq):
    return any([is_device_object(s) for s in seq.items()])


@is_device_object.register_lazy("cudf")
def register_cudf():
    import cudf

    @is_device_object.register(cudf.DataFrame)
    def is_device_object_cudf_dataframe(df):
        return True

    @is_device_object.register(cudf.Series)
    def is_device_object_cudf_series(s):
        return True

    @is_device_object.register(cudf.BaseIndex)
    def is_device_object_cudf_index(s):
        return True
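The default handler above is pure duck typing: anything exposing `__cuda_array_interface__` counts as a device object, and collections count if any element does. That check can be exercised without a GPU by using a stub object (`FakeDeviceArray` and `looks_like_device_object` are illustrative names, not part of dask-cuda):

```python
class FakeDeviceArray:
    """Stand-in exposing the attribute the default dispatch checks for;
    the dict contents are irrelevant to the hasattr test."""

    __cuda_array_interface__ = {"shape": (4,), "typestr": "<f4", "version": 3}


def looks_like_device_object(o):
    # Same duck-typing rule as the default `is_device_object` handler
    return hasattr(o, "__cuda_array_interface__")


flat = looks_like_device_object(FakeDeviceArray())
nested = any(looks_like_device_object(s) for s in [1, "x", FakeDeviceArray()])
```

This is why the cudf handlers are registered lazily: cudf objects don't need the attribute check, and importing cudf eagerly just to register them would be costly.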
rapidsai_public_repos/dask-cuda/dask_cuda/initialize.py
import logging
import os

import click
import numba.cuda

import dask
from distributed.diagnostics.nvml import get_device_index_and_uuid, has_cuda_context

from .utils import get_ucx_config

logger = logging.getLogger(__name__)


def _create_cuda_context_handler():
    if int(os.environ.get("DASK_CUDA_TEST_SINGLE_GPU", "0")) != 0:
        try:
            numba.cuda.current_context()
        except numba.cuda.cudadrv.error.CudaSupportError:
            pass
    else:
        numba.cuda.current_context()


def _create_cuda_context(protocol="ucx"):
    if protocol not in ["ucx", "ucxx"]:
        return
    try:
        # Added here to ensure the parent `LocalCUDACluster` process creates the CUDA
        # context directly from the UCX module, thus avoiding a similar warning there.
        try:
            if protocol == "ucx":
                import distributed.comm.ucx

                distributed.comm.ucx.init_once()
            elif protocol == "ucxx":
                import distributed_ucxx.ucxx

                distributed_ucxx.ucxx.init_once()
        except ModuleNotFoundError:
            # UCX initialization has to be delegated to Distributed, it will take care
            # of setting correct environment variables and importing `ucp` after that.
            # Therefore if ``import ucp`` fails we can just continue here.
            pass

        cuda_visible_device = get_device_index_and_uuid(
            os.environ.get("CUDA_VISIBLE_DEVICES", "0").split(",")[0]
        )
        ctx = has_cuda_context()
        if protocol == "ucx":
            if (
                ctx.has_context
                and not distributed.comm.ucx.cuda_context_created.has_context
            ):
                distributed.comm.ucx._warn_existing_cuda_context(ctx, os.getpid())
        elif protocol == "ucxx":
            if (
                ctx.has_context
                and not distributed_ucxx.ucxx.cuda_context_created.has_context
            ):
                distributed_ucxx.ucxx._warn_existing_cuda_context(ctx, os.getpid())

        _create_cuda_context_handler()

        if protocol == "ucx":
            if not distributed.comm.ucx.cuda_context_created.has_context:
                ctx = has_cuda_context()
                if ctx.has_context and ctx.device_info != cuda_visible_device:
                    distributed.comm.ucx._warn_cuda_context_wrong_device(
                        cuda_visible_device, ctx.device_info, os.getpid()
                    )
        elif protocol == "ucxx":
            if not distributed_ucxx.ucxx.cuda_context_created.has_context:
                ctx = has_cuda_context()
                if ctx.has_context and ctx.device_info != cuda_visible_device:
                    distributed_ucxx.ucxx._warn_cuda_context_wrong_device(
                        cuda_visible_device, ctx.device_info, os.getpid()
                    )

    except Exception:
        logger.error("Unable to start CUDA Context", exc_info=True)


def initialize(
    create_cuda_context=True,
    enable_tcp_over_ucx=None,
    enable_infiniband=None,
    enable_nvlink=None,
    enable_rdmacm=None,
    protocol="ucx",
):
    """Create CUDA context and initialize UCX-Py, depending on user parameters.

    Sometimes it is convenient to initialize the CUDA context, particularly before
    starting up Dask worker processes which create a variety of threads.

    To ensure UCX works correctly, it is important to ensure it is initialized with
    the correct options. This is especially important for the client, which cannot
    be configured to use UCX with arguments like ``LocalCUDACluster`` and
    ``dask cuda worker``. This function will ensure that they are provided a UCX
    configuration based on the flags and options passed by the user.

    This function can also be used within a worker preload script for UCX
    configuration of mainline Dask.distributed.
    https://docs.dask.org/en/latest/setup/custom-startup.html

    You can add it to your global config with the following YAML:

    .. code-block:: yaml

        distributed:
          worker:
            preload:
              - dask_cuda.initialize

    See https://docs.dask.org/en/latest/configuration.html for more information
    about Dask configuration.

    Parameters
    ----------
    create_cuda_context : bool, default True
        Create CUDA context on initialization.
    enable_tcp_over_ucx : bool, default None
        Set environment variables to enable TCP over UCX, even if InfiniBand and
        NVLink are not supported or disabled.
    enable_infiniband : bool, default None
        Set environment variables to enable UCX over InfiniBand, implies
        ``enable_tcp_over_ucx=True`` when ``True``.
    enable_nvlink : bool, default None
        Set environment variables to enable UCX over NVLink, implies
        ``enable_tcp_over_ucx=True`` when ``True``.
    enable_rdmacm : bool, default None
        Set environment variables to enable UCX RDMA connection manager support,
        requires ``enable_infiniband=True``.
    """
    ucx_config = get_ucx_config(
        enable_tcp_over_ucx=enable_tcp_over_ucx,
        enable_infiniband=enable_infiniband,
        enable_nvlink=enable_nvlink,
        enable_rdmacm=enable_rdmacm,
    )
    dask.config.set({"distributed.comm.ucx": ucx_config})

    if create_cuda_context:
        _create_cuda_context(protocol=protocol)


@click.command()
@click.option(
    "--create-cuda-context/--no-create-cuda-context",
    default=False,
    help="Create CUDA context",
)
@click.option(
    "--protocol",
    default=None,
    type=str,
    help="Communication protocol, such as: 'tcp', 'tls', 'ucx' or 'ucxx'.",
)
@click.option(
    "--enable-tcp-over-ucx/--disable-tcp-over-ucx",
    default=False,
    help="Enable TCP communication over UCX",
)
@click.option(
    "--enable-infiniband/--disable-infiniband",
    default=False,
    help="Enable InfiniBand communication",
)
@click.option(
    "--enable-nvlink/--disable-nvlink",
    default=False,
    help="Enable NVLink communication",
)
@click.option(
    "--enable-rdmacm/--disable-rdmacm",
    default=False,
    help="Enable RDMA connection manager, currently requires InfiniBand enabled.",
)
def dask_setup(
    service,
    create_cuda_context,
    protocol,
    enable_tcp_over_ucx,
    enable_infiniband,
    enable_nvlink,
    enable_rdmacm,
):
    if create_cuda_context:
        _create_cuda_context(protocol=protocol)
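The `dask_setup` command above turns worker preload CLI flags (the `--create-cuda-context --protocol ...` arguments that `LocalCUDACluster` appends to `preload_argv`) into a call to `_create_cuda_context`. The flag-to-argument mapping can be sketched with argparse instead of click; this is an illustrative stand-in, not the real command:

```python
import argparse


def make_parser():
    """Sketch of the preload flag surface handled by `dask_setup`
    (the real command is built with click, not argparse)."""
    p = argparse.ArgumentParser(prog="dask_cuda.initialize")
    p.add_argument("--create-cuda-context", action="store_true")
    p.add_argument("--protocol", default=None)
    return p


# The same argv that LocalCUDACluster appends to each worker's preload_argv
args = make_parser().parse_args(["--create-cuda-context", "--protocol", "ucx"])
```

The point of routing this through a preload command is that the CUDA context gets created inside each worker process, after `CUDA_VISIBLE_DEVICES` has been set, rather than in the parent.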
rapidsai_public_repos/dask-cuda/dask_cuda/_version.py
# Copyright (c) 2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import importlib.resources

__version__ = (
    importlib.resources.files("dask_cuda").joinpath("VERSION").read_text().strip()
)
__git_commit__ = ""
rapidsai_public_repos/dask-cuda/dask_cuda/plugins.py
import importlib
import os

from distributed import WorkerPlugin

from .utils import get_rmm_log_file_name, parse_device_memory_limit


class CPUAffinity(WorkerPlugin):
    def __init__(self, cores):
        self.cores = cores

    def setup(self, worker=None):
        os.sched_setaffinity(0, self.cores)


class RMMSetup(WorkerPlugin):
    def __init__(
        self,
        initial_pool_size,
        maximum_pool_size,
        managed_memory,
        async_alloc,
        release_threshold,
        log_directory,
        track_allocations,
    ):
        if initial_pool_size is None and maximum_pool_size is not None:
            raise ValueError(
                "`rmm_maximum_pool_size` was specified without specifying "
                "`rmm_pool_size`. `rmm_pool_size` must be specified to use RMM pool."
            )
        if async_alloc is True:
            if managed_memory is True:
                raise ValueError(
                    "`rmm_managed_memory` is incompatible with `rmm_async`."
                )
        if async_alloc is False and release_threshold is not None:
            raise ValueError("`rmm_release_threshold` requires `rmm_async`.")

        self.initial_pool_size = initial_pool_size
        self.maximum_pool_size = maximum_pool_size
        self.managed_memory = managed_memory
        self.async_alloc = async_alloc
        self.release_threshold = release_threshold
        self.logging = log_directory is not None
        self.log_directory = log_directory
        self.rmm_track_allocations = track_allocations

    def setup(self, worker=None):
        if self.initial_pool_size is not None:
            self.initial_pool_size = parse_device_memory_limit(
                self.initial_pool_size, alignment_size=256
            )

        if self.async_alloc:
            import rmm

            if self.release_threshold is not None:
                self.release_threshold = parse_device_memory_limit(
                    self.release_threshold, alignment_size=256
                )

            mr = rmm.mr.CudaAsyncMemoryResource(
                initial_pool_size=self.initial_pool_size,
                release_threshold=self.release_threshold,
            )

            if self.maximum_pool_size is not None:
                self.maximum_pool_size = parse_device_memory_limit(
                    self.maximum_pool_size, alignment_size=256
                )
                mr = rmm.mr.LimitingResourceAdaptor(
                    mr, allocation_limit=self.maximum_pool_size
                )

            rmm.mr.set_current_device_resource(mr)
            if self.logging:
                rmm.enable_logging(
                    log_file_name=get_rmm_log_file_name(
                        worker, self.logging, self.log_directory
                    )
                )
        elif self.initial_pool_size is not None or self.managed_memory:
            import rmm

            pool_allocator = False if self.initial_pool_size is None else True

            if self.initial_pool_size is not None:
                if self.maximum_pool_size is not None:
                    self.maximum_pool_size = parse_device_memory_limit(
                        self.maximum_pool_size, alignment_size=256
                    )

            rmm.reinitialize(
                pool_allocator=pool_allocator,
                managed_memory=self.managed_memory,
                initial_pool_size=self.initial_pool_size,
                maximum_pool_size=self.maximum_pool_size,
                logging=self.logging,
                log_file_name=get_rmm_log_file_name(
                    worker, self.logging, self.log_directory
                ),
            )
        if self.rmm_track_allocations:
            import rmm

            mr = rmm.mr.get_current_device_resource()
            rmm.mr.set_current_device_resource(rmm.mr.TrackingResourceAdaptor(mr))


class PreImport(WorkerPlugin):
    def __init__(self, libraries):
        if libraries is None:
            libraries = []
        elif isinstance(libraries, str):
            libraries = libraries.split(",")
        self.libraries = libraries

    def setup(self, worker=None):
        for l in self.libraries:
            importlib.import_module(l)
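The `PreImport` plugin accepts `None`, a comma-separated string, or a list of module names, and imports each module eagerly in `setup`. The same normalize-then-import logic can be exercised standalone with stdlib modules (the `preimport` function is an illustrative extraction, not part of dask-cuda's API):

```python
import importlib


def preimport(libraries):
    """Sketch of PreImport's behavior: normalize None / "a,b" / ["a", "b"]
    into a list of module names and import each one eagerly."""
    if libraries is None:
        libraries = []
    elif isinstance(libraries, str):
        libraries = libraries.split(",")
    return [importlib.import_module(name) for name in libraries]


# Using stdlib modules in place of "cudf,rmm"
mods = preimport("json,math")
```

Paying the import cost once at worker startup keeps heavy first-time imports (cuDF's can take seconds) from inflating the runtime of the first task that touches the library.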
rapidsai_public_repos/dask-cuda/dask_cuda/proxify_host_file.py
import abc import gc import io import logging import os import os.path import pathlib import threading import time import traceback import warnings import weakref from collections import defaultdict from collections.abc import MutableMapping from typing import ( Any, Callable, DefaultDict, Dict, Hashable, Iterable, List, Optional, Set, Tuple, TypeVar, ) from weakref import ReferenceType import dask from dask.sizeof import sizeof from dask.utils import format_bytes from distributed.protocol.compression import decompress, maybe_compress from distributed.protocol.serialize import ( merge_and_deserialize, register_serialization_family, serialize_and_split, ) from . import proxify_device_objects as pdo from .disk_io import SpillToDiskProperties, disk_read, disk_write from .get_device_memory_objects import DeviceMemoryId, get_device_memory_ids from .is_spillable_object import cudf_spilling_status from .proxify_device_objects import proxify_device_objects, unproxify_device_objects from .proxy_object import ProxyObject from .utils import get_rmm_device_memory_usage T = TypeVar("T") class Proxies(abc.ABC): """Abstract base class to implement tracking of proxies This class is not threadsafe """ def __init__(self): self._proxy_id_to_proxy: Dict[int, ReferenceType[ProxyObject]] = {} self._mem_usage = 0 self._lock = threading.Lock() def __len__(self) -> int: return len(self._proxy_id_to_proxy) @abc.abstractmethod def mem_usage_add(self, proxy: ProxyObject) -> None: """Given a new proxy, update `self._mem_usage`""" @abc.abstractmethod def mem_usage_remove(self, proxy: ProxyObject) -> None: """Removal of proxy, update `self._mem_usage`""" @abc.abstractmethod def buffer_info(self) -> List[Tuple[float, int, List[ProxyObject]]]: """Return a list of buffer information The returned format is: `[(<access-time>, <size-of-buffer>, <list-of-proxies>), ...] 
""" def add(self, proxy: ProxyObject) -> None: """Add a proxy for tracking, calls `self.mem_usage_add`""" assert not self.contains_proxy_id(id(proxy)) with self._lock: self._proxy_id_to_proxy[id(proxy)] = weakref.ref(proxy) self.mem_usage_add(proxy) def remove(self, proxy: ProxyObject) -> None: """Remove proxy from tracking, calls `self.mem_usage_remove`""" with self._lock: del self._proxy_id_to_proxy[id(proxy)] self.mem_usage_remove(proxy) if len(self._proxy_id_to_proxy) == 0: if self._mem_usage != 0: warnings.warn( "ProxyManager is empty but the tally of " f"{self} is {self._mem_usage} bytes. " "Resetting the tally." ) self._mem_usage = 0 def get_proxies(self) -> List[ProxyObject]: """Return a list of all proxies""" with self._lock: ret = [] for p in self._proxy_id_to_proxy.values(): proxy = p() if proxy is not None: ret.append(proxy) return ret def get_proxies_by_ids(self, proxy_ids: Iterable[int]) -> List[ProxyObject]: """Return a list of proxies""" ret = [] for proxy_id in proxy_ids: weakref_proxy = self._proxy_id_to_proxy.get(proxy_id) if weakref_proxy is not None: proxy = weakref_proxy() if proxy is not None: ret.append(proxy) return ret def contains_proxy_id(self, proxy_id: int) -> bool: return proxy_id in self._proxy_id_to_proxy def mem_usage(self) -> int: return self._mem_usage class ProxiesOnHost(Proxies): """Implement tracking of proxies on the CPU This uses dask.sizeof to update memory usage. 
""" def mem_usage_add(self, proxy: ProxyObject) -> None: self._mem_usage += sizeof(proxy) def mem_usage_remove(self, proxy: ProxyObject) -> None: self._mem_usage -= sizeof(proxy) def buffer_info(self) -> List[Tuple[float, int, List[ProxyObject]]]: ret = [] for p in self.get_proxies(): size = sizeof(p) ret.append((p._pxy_get().last_access, size, [p])) return ret class ProxiesOnDisk(ProxiesOnHost): """Implement tracking of proxies on the Disk""" class ProxiesOnDevice(Proxies): """Implement tracking of proxies on the GPU This is a bit more complicated than ProxiesOnHost because we have to handle that multiple proxy objects can refer to the same underlying device memory object. Thus, we have to track aliasing and make sure we don't count down the memory usage prematurely. Notice, we only track direct aliasing thus multiple proxy objects can point to different non-overlapping parts of the same device buffer. In this case the tally of the total device memory usage is incorrect. """ def __init__(self) -> None: super().__init__() self.proxy_id_to_dev_mems: Dict[int, Set[DeviceMemoryId]] = {} self.dev_mem_to_proxy_ids: DefaultDict[DeviceMemoryId, Set[int]] = defaultdict( set ) def mem_usage_add(self, proxy: ProxyObject) -> None: proxy_id = id(proxy) assert proxy_id not in self.proxy_id_to_dev_mems self.proxy_id_to_dev_mems[proxy_id] = set() for dev_mem in get_device_memory_ids(proxy._pxy_get().obj): self.proxy_id_to_dev_mems[proxy_id].add(dev_mem) ps = self.dev_mem_to_proxy_ids[dev_mem] if len(ps) == 0: self._mem_usage += dev_mem.nbytes ps.add(proxy_id) def mem_usage_remove(self, proxy: ProxyObject) -> None: proxy_id = id(proxy) for dev_mem in self.proxy_id_to_dev_mems.pop(proxy_id): self.dev_mem_to_proxy_ids[dev_mem].remove(proxy_id) if len(self.dev_mem_to_proxy_ids[dev_mem]) == 0: del self.dev_mem_to_proxy_ids[dev_mem] self._mem_usage -= dev_mem.nbytes def buffer_info(self) -> List[Tuple[float, int, List[ProxyObject]]]: ret = [] for dev_mem, proxy_ids in 
self.dev_mem_to_proxy_ids.items(): proxies = self.get_proxies_by_ids(proxy_ids) last_access = max(p._pxy_get().last_access for p in proxies) ret.append((last_access, dev_mem.nbytes, proxies)) return ret class ProxyManager: """ This class together with Proxies, ProxiesOnHost, and ProxiesOnDevice implements the tracking of all known proxies and their total host/device memory usage. It turns out having to re-calculate memory usage continuously is too expensive. The idea is to have the ProxifyHostFile or the proxies themselves update their location (device or host). The manager then tallies the total memory usage. Notice, the manager only keeps weak references to the proxies. """ def __init__(self, device_memory_limit: int, memory_limit: int): self.lock = threading.RLock() self._disk = ProxiesOnDisk() self._host = ProxiesOnHost() self._dev = ProxiesOnDevice() self._device_memory_limit = device_memory_limit self._host_memory_limit = memory_limit def __repr__(self) -> str: with self.lock: return ( f"<ProxyManager dev_limit={format_bytes(self._device_memory_limit)}" f" host_limit={format_bytes(self._host_memory_limit)}" f" disk={format_bytes(self._disk.mem_usage())}({len(self._disk)})" f" host={format_bytes(self._host.mem_usage())}({len(self._host)})" f" dev={format_bytes(self._dev.mem_usage())}({len(self._dev)})>" ) def __len__(self) -> int: return len(self._disk) + len(self._host) + len(self._dev) def pprint(self) -> str: with self.lock: ret = f"{self}:" if len(self) == 0: return ret + " Empty" ret += "\n" for proxy in self._disk.get_proxies(): ret += f" disk - {repr(proxy)}\n" for proxy in self._host.get_proxies(): ret += f" host - {repr(proxy)}\n" for proxy in self._dev.get_proxies(): ret += f" dev - {repr(proxy)}\n" return ret[:-1] # Strip last newline def get_proxies_by_serializer(self, serializer: Optional[str]) -> Proxies: """Get Proxies collection by serializer""" if serializer == "disk": return self._disk elif serializer in ("dask", "pickle"): return self._host 
        else:
            return self._dev

    def get_proxies_by_proxy_object(self, proxy: ProxyObject) -> Optional[Proxies]:
        """Get Proxies collection by proxy object"""
        proxy_id = id(proxy)
        if self._dev.contains_proxy_id(proxy_id):
            return self._dev
        if self._host.contains_proxy_id(proxy_id):
            return self._host
        if self._disk.contains_proxy_id(proxy_id):
            return self._disk
        return None

    def contains(self, proxy_id: int) -> bool:
        """Is the proxy in any of the Proxies collections?"""
        with self.lock:
            return (
                self._disk.contains_proxy_id(proxy_id)
                or self._host.contains_proxy_id(proxy_id)
                or self._dev.contains_proxy_id(proxy_id)
            )

    def add(self, proxy: ProxyObject, serializer: Optional[str]) -> None:
        """Add the proxy to the Proxies collection that matches the serializer"""
        with self.lock:
            old_proxies = self.get_proxies_by_proxy_object(proxy)
            new_proxies = self.get_proxies_by_serializer(serializer)
            if old_proxies is not new_proxies:
                if old_proxies is not None:
                    old_proxies.remove(proxy)
                new_proxies.add(proxy)

    def remove(self, proxy: ProxyObject) -> None:
        """Remove the proxy from the Proxies collection it is in"""
        with self.lock:
            # Find where the proxy is located (if found) and remove it
            proxies: Optional[Proxies] = None
            if self._disk.contains_proxy_id(id(proxy)):
                proxies = self._disk
            if self._host.contains_proxy_id(id(proxy)):
                assert proxies is None, "Proxy in multiple locations"
                proxies = self._host
            if self._dev.contains_proxy_id(id(proxy)):
                assert proxies is None, "Proxy in multiple locations"
                proxies = self._dev
            assert proxies is not None, "Trying to remove unknown proxy"
            proxies.remove(proxy)

    def validate(self):
        """Validate the state of the manager"""
        with self.lock:
            for serializer in ("disk", "dask", "cuda"):
                proxies = self.get_proxies_by_serializer(serializer)
                for p in proxies.get_proxies():
                    assert (
                        self.get_proxies_by_serializer(p._pxy_get().serializer)
                        is proxies
                    )
                with proxies._lock:
                    for i, p in proxies._proxy_id_to_proxy.items():
                        assert p() is not None
                        assert i == id(p())
                for p in proxies.get_proxies():
                    pxy = p._pxy_get()
                    if pxy.is_serialized():
                        header, _ = pxy.obj
                        assert header["serializer"] == pxy.serializer

    def proxify(self, obj: T, duplicate_check=True) -> Tuple[T, bool]:
        """Proxify `obj` and add found proxies to the `Proxies` collections

        Search through `obj` and wrap all CUDA device objects in ProxyObject.
        If duplicate_check is True, identical CUDA device objects found in
        `obj` are wrapped by the same ProxyObject.

        Returns the proxified object and a boolean, which is `True` when one
        or more incompatible types were found.

        Parameters
        ----------
        obj
            Object to search through or wrap in a ProxyObject.
        duplicate_check
            Make sure that identical CUDA device objects found in `obj` are
            wrapped by the same ProxyObject. This check comes with a
            significant overhead, hence it is recommended to set it to False
            when it is known that no duplicates exist.

        Return
        ------
        obj
            The proxified object.
        bool
            Whether incompatible types were found or not.
        """
        incompatible_type_found = False
        with self.lock:
            found_proxies: List[ProxyObject] = []
            if duplicate_check:
                # In order to detect already proxied objects, proxify_device_objects()
                # needs a mapping from proxied objects to their proxy objects.
                proxied_id_to_proxy = {
                    id(p._pxy_get().obj): p for p in self._dev.get_proxies()
                }
            else:
                proxied_id_to_proxy = None
            ret = proxify_device_objects(obj, proxied_id_to_proxy, found_proxies)
            last_access = time.monotonic()
            for p in found_proxies:
                pxy = p._pxy_get()
                pxy.last_access = last_access
                if not self.contains(id(p)):
                    pxy.manager = self
                    self.add(proxy=p, serializer=pxy.serializer)
                if pdo.incompatible_types and isinstance(p, pdo.incompatible_types):
                    incompatible_type_found = True
            self.maybe_evict()
        return ret, incompatible_type_found

    def evict(
        self,
        nbytes: int,
        proxies_access: Callable[[], List[Tuple[float, int, List[ProxyObject]]]],
        serializer: Callable[[ProxyObject], None],
    ) -> int:
        """Evict buffers retrieved by calling `proxies_access`

        Calls `proxies_access` to retrieve a list of proxies and then spills
        enough proxies to free up, at a minimum, `nbytes` bytes. In order to
        spill a proxy, `serializer` is called.

        Parameters
        ----------
        nbytes: int
            Number of bytes to evict.
        proxies_access: callable
            Function that returns a list of proxies packed in tuples like:
            `[(<access-time>, <size-of-buffer>, <list-of-proxies>), ...]`
        serializer: callable
            Function that serializes the given proxy object.

        Return
        ------
        nbytes: int
            Number of bytes spilled.
        """
        freed_memory: int = 0
        proxies_to_serialize: List[ProxyObject] = []
        with self.lock:
            access = proxies_access()
            access.sort(key=lambda x: (x[0], -x[1]))
            for _, size, proxies in access:
                proxies_to_serialize.extend(proxies)
                freed_memory += size
                if freed_memory >= nbytes:
                    break
        serialized_proxies: Set[int] = set()
        for p in proxies_to_serialize:
            # Avoid trying to serialize the same proxy multiple times
            if id(p) not in serialized_proxies:
                serialized_proxies.add(id(p))
                serializer(p)
        return freed_memory

    def maybe_evict_from_device(self, extra_dev_mem=0) -> None:
        """Evict buffers until total memory usage is below device-memory-limit

        Adds `extra_dev_mem` to the current total memory usage when comparing
        against device-memory-limit.
""" mem_over_usage = ( self._dev.mem_usage() + extra_dev_mem - self._device_memory_limit ) if mem_over_usage > 0: self.evict( nbytes=mem_over_usage, proxies_access=self._dev.buffer_info, serializer=lambda p: p._pxy_serialize(serializers=("dask", "pickle")), ) def maybe_evict_from_host(self, extra_host_mem=0) -> None: """Evict buffers until total memory usage is below host-memory-limit Adds `extra_host_mem` to the current total memory usage when comparing against device-memory-limit. """ assert self._host_memory_limit is not None mem_over_usage = ( self._host.mem_usage() + extra_host_mem - self._host_memory_limit ) if mem_over_usage > 0: self.evict( nbytes=mem_over_usage, proxies_access=self._host.buffer_info, serializer=ProxifyHostFile.serialize_proxy_to_disk_inplace, ) def maybe_evict(self, extra_dev_mem=0) -> None: self.maybe_evict_from_device(extra_dev_mem) if self._host_memory_limit: self.maybe_evict_from_host() class ProxifyHostFile(MutableMapping): """Host file that proxify stored data This class is an alternative to the default disk-backed LRU dict used by workers in Distributed. It wraps all CUDA device objects in a ProxyObject instance and maintains `device_memory_limit` by spilling ProxyObject on-the-fly. This addresses some issues with the default DeviceHostFile host, which tracks device memory inaccurately see <https://github.com/rapidsai/dask-cuda/pull/451> Limitations ----------- - For now, ProxifyHostFile doesn't support spilling to disk. - ProxyObject has some limitations and doesn't mimic the proxied object perfectly. See docs of ProxyObject for detail. - This is still experimental, expect bugs and API changes. Parameters ---------- worker_local_directory: str Path on local machine to store temporary files. WARNING, this **cannot** change while running thus all serialization to disk are using the same directory. device_memory_limit: int Number of bytes of CUDA device memory used before spilling to host. 
    memory_limit: int
        Number of bytes of host memory used before spilling to disk.
    shared_filesystem: bool or None, default None
        Whether the `local_directory` above is shared between all workers or
        not. If ``None``, the "jit-unspill-shared-fs" config value is used,
        which defaults to False.
        Notice, a shared filesystem must support the `os.link()` operation.
    compatibility_mode: bool or None, default None
        Enables compatibility-mode, which means that items are un-proxified
        before retrieval. This makes it possible to get some of the
        JIT-unspill benefits without having to be ProxyObject compatible. In
        order to still allow specific ProxyObjects, set
        `mark_as_explicit_proxies=True` when proxifying with
        `proxify_device_objects()`. If ``None``, the
        "jit-unspill-compatibility-mode" config value is used, which defaults
        to False.
    spill_on_demand: bool or None, default None
        Enables spilling when the RMM memory pool goes out of memory. If
        ``None``, the "spill-on-demand" config value is used, which defaults
        to True. Notice, enabling this does nothing when RMM isn't available
        or not used.
    gds_spilling: bool
        Enable GPUDirect Storage spilling. If ``None``, the "gds-spilling"
        config value is used, which defaults to ``False``.
    """

    # Notice, we define `_spill_to_disk` as a static variable because it is
    # used by the static register_disk_spilling() method.
    _spill_to_disk: Optional[SpillToDiskProperties] = None
    lock = threading.RLock()

    def __init__(
        self,
        # So named such that dask will pass in the worker's local
        # directory when constructing this through the "data" callback.
        worker_local_directory: str,
        *,
        device_memory_limit: int,
        memory_limit: int,
        shared_filesystem: Optional[bool] = None,
        compatibility_mode: Optional[bool] = None,
        spill_on_demand: Optional[bool] = None,
        gds_spilling: Optional[bool] = None,
    ):
        if cudf_spilling_status():
            warnings.warn(
                "JIT-Unspill and cuDF's built-in spilling don't work together, "
                "please disable one of them by setting either `CUDF_SPILL=off` "
                "or `DASK_JIT_UNSPILL=off` environment variable."
            )

        # Each value of self.store is a tuple containing the proxified
        # object, as well as a boolean indicating whether any
        # incompatible types were found when proxifying it
        self.store: Dict[Hashable, Tuple[Any, bool]] = {}
        self.manager = ProxyManager(device_memory_limit, memory_limit)

        # Create an instance of `SpillToDiskProperties` if it doesn't already exist
        path = pathlib.Path(
            os.path.join(
                worker_local_directory,
                "jit-unspill-disk-storage",
            )
        ).resolve()
        if ProxifyHostFile._spill_to_disk is None:
            ProxifyHostFile._spill_to_disk = SpillToDiskProperties(
                path, shared_filesystem, gds_spilling
            )
        elif ProxifyHostFile._spill_to_disk.root_dir != path:
            raise ValueError("Cannot change the JIT-Unspilling disk path")
        self.register_disk_spilling()

        if compatibility_mode is None:
            self.compatibility_mode = dask.config.get(
                "jit-unspill-compatibility-mode", default=False
            )
        else:
            self.compatibility_mode = compatibility_mode
        if spill_on_demand is None:
            spill_on_demand = dask.config.get("spill-on-demand", default=True)
        # `None` in this context means: never initialize
        self.spill_on_demand_initialized = False if spill_on_demand else None

        # It is a bit hacky to forcefully capture the "distributed.worker" logger,
        # eventually it would be better to have a different logger. For now this
        # is ok, allowing users to read logs with client.get_worker_logs(); a
        # proper solution would require changes to Distributed.
        self.logger = logging.getLogger("distributed.worker")

    def __contains__(self, key):
        return key in self.store

    def __len__(self):
        return len(self.store)

    def __iter__(self):
        with self.lock:
            return iter(self.store)

    def initialize_spill_on_demand_once(self):
        """Register callback function to handle RMM out-of-memory exceptions

        This function is idempotent and should be called at least once.
        Currently, we do this in __setitem__ instead of in __init__ because a
        Dask worker might re-initiate the RMM pool and its resource adaptors
        after creating ProxifyHostFile.
        """
        if self.spill_on_demand_initialized is False:
            self.spill_on_demand_initialized = True
            try:
                import rmm.mr

                assert hasattr(rmm.mr, "FailureCallbackResourceAdaptor")
            except (ImportError, AssertionError):
                pass
            else:

                def oom(nbytes: int) -> bool:
                    """Try to handle an out-of-memory error by spilling"""
                    memory_freed = self.manager.evict(
                        nbytes=nbytes,
                        proxies_access=self.manager._dev.buffer_info,
                        serializer=lambda p: p._pxy_serialize(
                            serializers=("dask", "pickle")
                        ),
                    )
                    gc.collect()
                    if memory_freed > 0:
                        return True  # Ask RMM to retry the allocation
                    else:
                        with io.StringIO() as f:
                            traceback.print_stack(file=f)
                            f.seek(0)
                            tb = f.read()
                        dev_mem = get_rmm_device_memory_usage()
                        dev_msg = ""
                        if dev_mem is not None:
                            dev_msg = f"RMM allocs: {format_bytes(dev_mem)}, "
                        self.logger.warning(
                            f"RMM allocation of {format_bytes(nbytes)} failed, "
                            "spill-on-demand couldn't find any device memory to "
                            f"spill.\n{dev_msg}{self.manager}, traceback:\n{tb}\n"
                        )
                        # Since we didn't find anything to spill, we give up.
                        return False

                current_mr = rmm.mr.get_current_device_resource()
                mr = rmm.mr.FailureCallbackResourceAdaptor(current_mr, oom)
                rmm.mr.set_current_device_resource(mr)

    def evict(self) -> int:
        """Manually evict 1% of host limit.

        Dask uses this to trigger CPU-to-Disk spilling. We don't know how much
        we need to spill but Dask will call `evict()` repeatedly until enough
        is spilled. We ask for 1% each time.
        Return
        ------
        nbytes: int
            Number of bytes spilled or -1 if nothing to spill.
        """
        assert self.manager._host_memory_limit is not None
        ret = self.manager.evict(
            nbytes=int(self.manager._host_memory_limit * 0.01),
            proxies_access=self.manager._host.buffer_info,
            serializer=ProxifyHostFile.serialize_proxy_to_disk_inplace,
        )
        gc.collect()
        return ret if ret > 0 else -1

    @property
    def fast(self):
        """Alternative access to `.evict()` used by Dask

        Dask expects `.fast.evict()` to be available for manual triggering of
        CPU-to-Disk spilling.
        """
        if len(self.manager._host) == 0:
            return False  # We have nothing in host memory to spill

        class EvictDummy:
            @staticmethod
            def evict():
                ret = (
                    None,
                    None,
                    self.evict(),
                )
                gc.collect()
                return ret

        return EvictDummy()

    def __setitem__(self, key, value):
        with self.lock:
            self.initialize_spill_on_demand_once()
            if key in self.store:
                # Make sure we register the removal of an existing key
                del self[key]
            self.store[key] = self.manager.proxify(value)

    def __getitem__(self, key):
        with self.lock:
            ret, incompatible_type_found = self.store[key]
        if self.compatibility_mode:
            ret = unproxify_device_objects(ret, skip_explicit_proxies=True)
            self.manager.maybe_evict()
        elif incompatible_type_found:
            # Notice, we only call `unproxify_device_objects()` when `key`
            # contains incompatible types.
            ret = unproxify_device_objects(ret, only_incompatible_types=True)
            self.manager.maybe_evict()
        return ret

    def __delitem__(self, key):
        with self.lock:
            del self.store[key]

    @classmethod
    def register_disk_spilling(cls) -> None:
        """Register Dask serializers that write to disk

        This is a static method because the registration of a Dask
        serializer/deserializer pair is a global operation, thus we can only
        register one such pair. This means that all instances of the
        ``ProxifyHostFile`` end up using the same ``local_directory``.
        """
        assert cls._spill_to_disk is not None

        def disk_dumps(x):
            # When using GDS, we prepend "cuda" to serializers to keep the CUDA
            # objects on the GPU.
Otherwise the "dask" or "pickle" serializer will # copy everything to host memory. serializers = ["dask", "pickle"] if cls._spill_to_disk.gds_enabled: serializers = ["cuda"] + serializers serialize_header, frames = serialize_and_split( x, serializers=serializers, on_error="raise" ) if frames: compression, frames = zip(*map(maybe_compress, frames)) else: compression = [] serialize_header["compression"] = compression serialize_header["count"] = len(frames) return ( { "serializer": "disk", "disk-io-header": disk_write( path=cls._spill_to_disk.gen_file_path(), frames=frames, shared_filesystem=cls._spill_to_disk.shared_filesystem, gds=cls._spill_to_disk.gds_enabled, ), "serialize-header": serialize_header, }, [], ) def disk_loads(header, frames): assert frames == [] frames = disk_read( header["disk-io-header"], gds=cls._spill_to_disk.gds_enabled ) if "compression" in header["serialize-header"]: frames = decompress(header["serialize-header"], frames) return merge_and_deserialize(header["serialize-header"], frames) register_serialization_family("disk", disk_dumps, disk_loads) @classmethod def serialize_proxy_to_disk_inplace(cls, proxy: ProxyObject) -> None: """Serialize `proxy` to disk. Avoid de-serializing if `proxy` is serialized using "dask" or "pickle". In this case the already serialized data is written directly to disk. Parameters ---------- proxy : ProxyObject Proxy object to serialize using the "disk" serialize. """ assert cls._spill_to_disk is not None pxy = proxy._pxy_get(copy=True) if pxy.is_serialized(): header, frames = pxy.obj if header["serializer"] in ("dask", "pickle"): pxy.obj = ( { "serializer": "disk", "disk-io-header": disk_write( path=cls._spill_to_disk.gen_file_path(), frames=frames, shared_filesystem=cls._spill_to_disk.shared_filesystem, ), "serialize-header": header, }, [], ) pxy.serializer = "disk" proxy._pxy_set(pxy) return proxy._pxy_serialize(serializers=("disk",), proxy_detail=pxy)
# File: rapidsai_public_repos/dask-cuda/dask_cuda/__init__.py
import sys

if sys.platform != "linux":
    raise ImportError("Only Linux is supported by Dask-CUDA at this time")

import dask
import dask.utils
import dask.dataframe.core
import dask.dataframe.shuffle
import dask.dataframe.multi
import dask.bag.core

from ._version import __git_commit__, __version__
from .cuda_worker import CUDAWorker
from .explicit_comms.dataframe.shuffle import (
    get_rearrange_by_column_wrapper,
    get_default_shuffle_method,
)
from .local_cuda_cluster import LocalCUDACluster
from .proxify_device_objects import proxify_decorator, unproxify_decorator


# Monkey patching Dask to make use of explicit-comms when `DASK_EXPLICIT_COMMS=True`
dask.dataframe.shuffle.rearrange_by_column = get_rearrange_by_column_wrapper(
    dask.dataframe.shuffle.rearrange_by_column
)
# We have to replace all modules that import Dask's `get_default_shuffle_method()`
# TODO: introduce a shuffle-algorithm dispatcher in Dask so we don't need this hack
dask.dataframe.shuffle.get_default_shuffle_method = get_default_shuffle_method
dask.dataframe.multi.get_default_shuffle_method = get_default_shuffle_method
dask.bag.core.get_default_shuffle_method = get_default_shuffle_method

# Monkey patching Dask to make use of proxify and unproxify in compatibility mode
dask.dataframe.shuffle.shuffle_group = proxify_decorator(
    dask.dataframe.shuffle.shuffle_group
)
dask.dataframe.core._concat = unproxify_decorator(dask.dataframe.core._concat)
# File: rapidsai_public_repos/dask-cuda/dask_cuda/is_spillable_object.py
from __future__ import absolute_import, division, print_function

from typing import Optional

from dask.utils import Dispatch

is_spillable_object = Dispatch(name="is_spillable_object")


@is_spillable_object.register(list)
@is_spillable_object.register(tuple)
@is_spillable_object.register(set)
@is_spillable_object.register(frozenset)
def _(seq):
    return any([is_spillable_object(s) for s in seq])


@is_spillable_object.register(dict)
def _(seq):
    return any([is_spillable_object(s) for s in seq.items()])


@is_spillable_object.register(object)
def _(o):
    return False


@is_spillable_object.register_lazy("cudf")
def register_cudf():
    import cudf
    from cudf.core.frame import Frame

    @is_spillable_object.register(Frame)
    def is_device_object_cudf_dataframe(df):
        return cudf_spilling_status()

    @is_spillable_object.register(cudf.BaseIndex)
    def is_device_object_cudf_index(s):
        return cudf_spilling_status()


def cudf_spilling_status() -> Optional[bool]:
    """Check the status of cudf's built-in spilling

    Returns:
        - True if cudf's internal spilling is enabled, or
        - False if it is disabled, or
        - None if the current version of cudf doesn't support spilling, or
        - None if cudf isn't available.
    """
    try:
        from cudf.core.buffer.spill_manager import get_global_manager
    except ImportError:
        return None
    return get_global_manager() is not None
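The recursive type-dispatch used above (containers recurse, plain objects default to False, device types register True) can be mimicked with the standard library's `functools.singledispatch`; `FakeSpillableFrame` below is a hypothetical stand-in for `cudf.core.frame.Frame`:

```python
from functools import singledispatch

@singledispatch
def is_spillable(obj) -> bool:
    # Default: plain Python objects are not spillable
    return False

@is_spillable.register(list)
@is_spillable.register(tuple)
@is_spillable.register(set)
@is_spillable.register(frozenset)
def _(seq):
    # Containers are spillable when any element is
    return any(is_spillable(s) for s in seq)

@is_spillable.register(dict)
def _(mapping):
    # items() yields (key, value) tuples, which the tuple handler recurses into
    return any(is_spillable(s) for s in mapping.items())

class FakeSpillableFrame:
    """Stand-in for a spill-managed cudf Frame."""

# Later registration, analogous to the lazy "cudf" registration above
is_spillable.register(FakeSpillableFrame, lambda df: True)

found = is_spillable([1, "x", (2, {FakeSpillableFrame(): 3})])
```

The real code uses `dask.utils.Dispatch` instead of `singledispatch` mainly for its `register_lazy` hook, which defers the `cudf` import until a cudf object is actually seen.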
# File: rapidsai_public_repos/dask-cuda/dask_cuda/proxify_device_objects.py
import functools
import pydoc
from collections import defaultdict
from functools import partial
from typing import List, MutableMapping, Optional, Tuple, TypeVar

import dask
from dask.utils import Dispatch

from .proxy_object import ProxyObject, asproxy

dispatch = Dispatch(name="proxify_device_objects")

incompatible_types: Optional[Tuple[type]] = None

T = TypeVar("T")


def _register_incompatible_types():
    """Lazily register types that ProxifyHostFile should unproxify on retrieval.

    It reads the config key "jit-unspill-incompatible"
    (DASK_JIT_UNSPILL_INCOMPATIBLE), which should be a comma separated list of
    types. The default value is:
        DASK_JIT_UNSPILL_INCOMPATIBLE="cupy.ndarray"
    """
    global incompatible_types
    if incompatible_types is not None:
        return  # Only register once
    else:
        incompatible_types = ()

    incompatibles = dask.config.get("jit-unspill-incompatible", "cupy.ndarray")
    incompatibles = incompatibles.split(",")
    toplevels = defaultdict(set)
    for path in incompatibles:
        if path:
            toplevel = path.split(".", maxsplit=1)[0].strip()
            toplevels[toplevel].add(path.strip())

    for toplevel, ignores in toplevels.items():

        def f(paths):
            global incompatible_types
            incompatible_types = incompatible_types + tuple(
                pydoc.locate(p) for p in paths
            )

        dispatch.register_lazy(toplevel, partial(f, ignores))


def proxify_device_objects(
    obj: T,
    proxied_id_to_proxy: Optional[MutableMapping[int, ProxyObject]] = None,
    found_proxies: Optional[List[ProxyObject]] = None,
    excl_proxies: bool = False,
    mark_as_explicit_proxies: bool = False,
) -> T:
    """Wrap device objects in ProxyObject

    Search through `obj` and wrap all CUDA device objects in ProxyObject. It
    uses `proxied_id_to_proxy` to make sure that identical CUDA device objects
    found in `obj` are wrapped by the same ProxyObject.

    Parameters
    ----------
    obj: Any
        Object to search through or wrap in a ProxyObject.
    proxied_id_to_proxy: MutableMapping[int, ProxyObject]
        Dict mapping the id() of proxied objects (CUDA device objects) to
        their proxy; it is updated with all new proxied objects found in
        `obj`. If None, use an empty dict.
    found_proxies: List[ProxyObject]
        List of found proxies in `obj`. Notice, this includes all proxies
        found, including those already in `proxied_id_to_proxy`. If None, use
        an empty list.
    excl_proxies: bool
        Don't add found objects that are already ProxyObject to found_proxies.
    mark_as_explicit_proxies: bool
        Mark found proxies as "explicit", which means that the user allows
        them as input arguments to dask tasks even in compatibility-mode.

    Returns
    -------
    ret: Any
        A copy of `obj` where all CUDA device objects are wrapped in
        ProxyObject
    """
    _register_incompatible_types()
    if proxied_id_to_proxy is None:
        proxied_id_to_proxy = {}
    if found_proxies is None:
        found_proxies = []
    ret = dispatch(obj, proxied_id_to_proxy, found_proxies, excl_proxies)
    for p in found_proxies:
        p._pxy_get().explicit_proxy = mark_as_explicit_proxies
    return ret


def unproxify_device_objects(
    obj: T,
    skip_explicit_proxies: bool = False,
    only_incompatible_types: bool = False,
) -> T:
    """Unproxify device objects

    Search through `obj` and un-wrap all CUDA device objects.

    Parameters
    ----------
    obj: Any
        Object to search through or unproxify.
    skip_explicit_proxies: bool
        When True, skip proxy objects marked as explicit proxies.
    only_incompatible_types: bool
        When True, ONLY unproxify incompatible types. The
        skip_explicit_proxies argument is ignored.
    Returns
    -------
    ret: Any
        A copy of `obj` where all CUDA device objects are unproxified.
    """
    if isinstance(obj, dict):
        return {
            k: unproxify_device_objects(
                v, skip_explicit_proxies, only_incompatible_types
            )
            for k, v in obj.items()
        }  # type: ignore
    if isinstance(obj, (list, tuple, set, frozenset)):
        return obj.__class__(
            unproxify_device_objects(i, skip_explicit_proxies, only_incompatible_types)
            for i in obj
        )  # type: ignore
    if isinstance(obj, ProxyObject):
        pxy = obj._pxy_get(copy=True)
        if only_incompatible_types:
            if incompatible_types and isinstance(obj, incompatible_types):
                obj = obj._pxy_deserialize(  # type: ignore
                    maybe_evict=False, proxy_detail=pxy
                )
        elif not skip_explicit_proxies or not pxy.explicit_proxy:
            pxy.explicit_proxy = False
            obj = obj._pxy_deserialize(maybe_evict=False, proxy_detail=pxy)
    return obj


def proxify_decorator(func):
    """Returns a function wrapper that explicitly proxifies the output

    Notice, this function only has an effect in compatibility mode.
    """

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        ret = func(*args, **kwargs)
        if dask.config.get("jit-unspill-compatibility-mode", default=False):
            ret = proxify_device_objects(ret, mark_as_explicit_proxies=True)
        return ret

    return wrapper


def unproxify_decorator(func):
    """Returns a function wrapper that unproxifies the output

    Notice, this function only has an effect in compatibility mode.
""" @functools.wraps(func) def wrapper(*args, **kwargs): ret = func(*args, **kwargs) if dask.config.get("jit-unspill-compatibility-mode", default=False): ret = unproxify_device_objects(ret, skip_explicit_proxies=False) return ret return wrapper def proxify(obj, proxied_id_to_proxy, found_proxies, subclass=None): _id = id(obj) if _id in proxied_id_to_proxy: ret = proxied_id_to_proxy[_id] else: ret = proxied_id_to_proxy[_id] = asproxy(obj, subclass=subclass) found_proxies.append(ret) return ret @dispatch.register(object) def proxify_device_object_default( obj, proxied_id_to_proxy, found_proxies, excl_proxies ): if hasattr(obj, "__cuda_array_interface__"): return proxify(obj, proxied_id_to_proxy, found_proxies) return obj @dispatch.register(ProxyObject) def proxify_device_object_proxy_object( obj: ProxyObject, proxied_id_to_proxy, found_proxies, excl_proxies ): # Check if `obj` is already known pxy = obj._pxy_get() if not pxy.is_serialized(): _id = id(pxy.obj) if _id in proxied_id_to_proxy: obj = proxied_id_to_proxy[_id] else: proxied_id_to_proxy[_id] = obj if not excl_proxies: found_proxies.append(obj) return obj @dispatch.register(list) @dispatch.register(tuple) @dispatch.register(set) @dispatch.register(frozenset) def proxify_device_object_python_collection( seq, proxied_id_to_proxy, found_proxies, excl_proxies ): return type(seq)( dispatch(o, proxied_id_to_proxy, found_proxies, excl_proxies) for o in seq ) @dispatch.register(dict) def proxify_device_object_python_dict( seq, proxied_id_to_proxy, found_proxies, excl_proxies ): return { k: dispatch(v, proxied_id_to_proxy, found_proxies, excl_proxies) for k, v in seq.items() } # Implement cuDF specific proxification @dispatch.register_lazy("cudf") def _register_cudf(): import cudf @dispatch.register(cudf.DataFrame) @dispatch.register(cudf.Series) @dispatch.register(cudf.BaseIndex) def proxify_device_object_cudf_dataframe( obj, proxied_id_to_proxy, found_proxies, excl_proxies ): return proxify(obj, proxied_id_to_proxy, 
found_proxies)

    try:
        from dask.array.dispatch import percentile_lookup
        from dask_cudf.backends import percentile_cudf

        percentile_lookup.register(ProxyObject, percentile_cudf)
    except ImportError:
        pass
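The `proxify()` helper above deduplicates by `id()`: wrapping the same device object twice must yield the same proxy. A minimal stand-alone sketch of that mechanism, with a trivial `Proxy` class standing in for the real `ProxyObject`:

```python
from typing import Dict, List

class Proxy:
    """Minimal stand-in for ProxyObject: just records the wrapped object."""
    def __init__(self, obj):
        self.obj = obj

def proxify_with_dedup(
    obj,
    proxied_id_to_proxy: Dict[int, Proxy],
    found_proxies: List[Proxy],
) -> Proxy:
    """Wrap `obj`, reusing an existing proxy when the same object (by id())
    has been wrapped before -- the same trick `proxify()` above uses."""
    _id = id(obj)
    if _id in proxied_id_to_proxy:
        ret = proxied_id_to_proxy[_id]
    else:
        ret = proxied_id_to_proxy[_id] = Proxy(obj)
    # Every hit is recorded, including duplicates, mirroring found_proxies
    found_proxies.append(ret)
    return ret

mapping: Dict[int, Proxy] = {}
found: List[Proxy] = []
buf = bytearray(8)  # stand-in for a CUDA device buffer
p1 = proxify_with_dedup(buf, mapping, found)
p2 = proxify_with_dedup(buf, mapping, found)  # same object -> same proxy
```

Deduplicating by `id()` is what keeps aliased device buffers from being spilled twice, at the cost of the extra bookkeeping that `ProxyManager.proxify(duplicate_check=True)` pays for.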
# File: rapidsai_public_repos/dask-cuda/dask_cuda/utils.py
import math
import operator
import os
import pickle
import time
import warnings
from contextlib import suppress
from functools import singledispatch
from multiprocessing import cpu_count
from typing import Optional

import numpy as np
import pynvml
import toolz

import dask
import distributed  # noqa: required for dask.config.get("distributed.comm.ucx")
from dask.config import canonical_name
from dask.utils import format_bytes, parse_bytes
from distributed import wait
from distributed.comm import parse_address

try:
    from nvtx import annotate as nvtx_annotate
except ImportError:
    # If the nvtx module is not installed, `annotate` only yields.
    from contextlib import contextmanager

    @contextmanager
    def nvtx_annotate(message=None, color="blue", domain=None):
        yield


def unpack_bitmask(x, mask_bits=64):
    """Unpack a list of integers containing bitmasks.

    Parameters
    ----------
    x: list of int
        A list of integers
    mask_bits: int
        An integer determining the bitwidth of `x`

    Examples
    --------
    >>> from dask_cuda.utils import unpack_bitmask
    >>> unpack_bitmask([1 + 2 + 8])
    [0, 1, 3]
    >>> unpack_bitmask([1 + 2 + 16])
    [0, 1, 4]
    >>> unpack_bitmask([1 + 2 + 16, 2 + 4])
    [0, 1, 4, 65, 66]
    >>> unpack_bitmask([1 + 2 + 16, 2 + 4], mask_bits=32)
    [0, 1, 4, 33, 34]
    """
    res = []
    for i, mask in enumerate(x):
        if not isinstance(mask, int):
            raise TypeError("All elements of the list `x` must be integers")

        cpu_offset = i * mask_bits

        bytestr = np.frombuffer(
            bytes(np.binary_repr(mask, width=mask_bits), "utf-8"), "u1"
        )
        mask = np.flip(bytestr - ord("0")).astype(bool)
        unpacked_mask = np.where(
            mask, np.arange(mask_bits) + cpu_offset, np.full(mask_bits, -1)
        )

        res += unpacked_mask[(unpacked_mask >= 0)].tolist()

    return res


@toolz.memoize
def get_cpu_count():
    return cpu_count()


@toolz.memoize
def get_gpu_count():
    pynvml.nvmlInit()
    return pynvml.nvmlDeviceGetCount()


@toolz.memoize
def get_gpu_count_mig(return_uuids=False):
    """Return the number of MIG instances available

    Parameters
    ----------
    return_uuids: bool
        Returns the
        uuids of the MIG instances available, optionally.
    """
    pynvml.nvmlInit()
    uuids = []
    for index in range(get_gpu_count()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(index)
        try:
            is_mig_mode = pynvml.nvmlDeviceGetMigMode(handle)[0]
        except pynvml.NVMLError:
            # if not a MIG device, i.e. a normal GPU, skip
            continue
        if is_mig_mode:
            count = pynvml.nvmlDeviceGetMaxMigDeviceCount(handle)
            miguuids = []
            for i in range(count):
                try:
                    mighandle = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(
                        device=handle, index=i
                    )
                    miguuids.append(mighandle)
                    uuids.append(pynvml.nvmlDeviceGetUUID(mighandle))
                except pynvml.NVMLError:
                    pass
    if return_uuids:
        return len(uuids), uuids
    return len(uuids)


def get_cpu_affinity(device_index=None):
    """Get a list containing the CPU indices to which a GPU is directly connected.

    Use either the device index or the specified device identifier UUID.

    Parameters
    ----------
    device_index: int or str
        Index or UUID of the GPU device

    Examples
    --------
    >>> from dask_cuda.utils import get_cpu_affinity
    >>> get_cpu_affinity(0)  # DGX-1 has GPUs 0-3 connected to CPUs [0-19, 40-59]
    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
     40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57,
     58, 59]
    >>> get_cpu_affinity(5)  # DGX-1 has GPUs 5-7 connected to CPUs [20-39, 60-79]
    [20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37,
     38, 39, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75,
     76, 77, 78, 79]
    >>> get_cpu_affinity(1000)  # DGX-1 has no device on index 1000
    dask_cuda/utils.py:96: UserWarning: Cannot get CPU affinity for device
    with index 1000, setting default affinity
    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
     20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37,
     38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55,
     56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73,
     74, 75, 76, 77, 78, 79]
    """
    pynvml.nvmlInit()

    try:
        if device_index and not str(device_index).isnumeric():
            # This means device_index is a UUID.
            # This works for both MIG and non-MIG device UUIDs.
            handle = pynvml.nvmlDeviceGetHandleByUUID(str.encode(device_index))
            if pynvml.nvmlDeviceIsMigDeviceHandle(handle):
                # Additionally get the parent device handle
                # if the device itself is a MIG instance
                handle = pynvml.nvmlDeviceGetDeviceHandleFromMigDeviceHandle(handle)
        else:
            handle = pynvml.nvmlDeviceGetHandleByIndex(device_index)
        # Result is a list of 64-bit integers, thus ceil(get_cpu_count() / 64)
        affinity = pynvml.nvmlDeviceGetCpuAffinity(
            handle,
            math.ceil(get_cpu_count() / 64),
        )
        return unpack_bitmask(affinity)
    except pynvml.NVMLError:
        warnings.warn(
            "Cannot get CPU affinity for device with index %d, setting default affinity"
            % device_index
        )
        return list(range(get_cpu_count()))


def get_n_gpus():
    try:
        return len(os.environ["CUDA_VISIBLE_DEVICES"].split(","))
    except KeyError:
        return get_gpu_count()


def get_device_total_memory(index=0):
    """
    Return total memory of CUDA device with index or with device identifier UUID.
    """
    pynvml.nvmlInit()

    if index and not str(index).isnumeric():
        # This means index is a UUID.
        # This works for both MIG and non-MIG device UUIDs.
        handle = pynvml.nvmlDeviceGetHandleByUUID(str.encode(str(index)))
    else:
        # This is a device index
        handle = pynvml.nvmlDeviceGetHandleByIndex(index)
    return pynvml.nvmlDeviceGetMemoryInfo(handle).total


def get_ucx_config(
    enable_tcp_over_ucx=None,
    enable_infiniband=None,
    enable_nvlink=None,
    enable_rdmacm=None,
):
    ucx_config = dask.config.get("distributed.comm.ucx")
    ucx_config[canonical_name("create-cuda-context", ucx_config)] = True
    ucx_config[canonical_name("reuse-endpoints", ucx_config)] = False

    # If any transport is explicitly disabled (`False`) by the user, others
    # that are not specified should be enabled (`True`). If transports are
    # explicitly enabled (`True`), then default (`None`) or an explicit
    # `False` will suffice in disabling others.
However, if there's a mix of enable (`True`) and # disable (`False`), then those choices can be assumed as intended by the # user. # # This may be handled more gracefully in Distributed in the future. opts = [enable_tcp_over_ucx, enable_infiniband, enable_nvlink] if any(opt is False for opt in opts) and not any(opt is True for opt in opts): if enable_tcp_over_ucx is None: enable_tcp_over_ucx = True if enable_nvlink is None: enable_nvlink = True if enable_infiniband is None: enable_infiniband = True ucx_config[canonical_name("tcp", ucx_config)] = enable_tcp_over_ucx ucx_config[canonical_name("infiniband", ucx_config)] = enable_infiniband ucx_config[canonical_name("nvlink", ucx_config)] = enable_nvlink ucx_config[canonical_name("rdmacm", ucx_config)] = enable_rdmacm if enable_tcp_over_ucx or enable_infiniband or enable_nvlink: ucx_config[canonical_name("cuda-copy", ucx_config)] = True else: ucx_config[canonical_name("cuda-copy", ucx_config)] = None return ucx_config def get_preload_options( protocol=None, create_cuda_context=None, enable_tcp_over_ucx=None, enable_infiniband=None, enable_nvlink=None, enable_rdmacm=None, ): """ Return a dictionary with the preload and preload_argv options required to create CUDA context and enabling UCX communication. Parameters ---------- protocol: None or str, default None If "ucx", options related to UCX (enable_tcp_over_ucx, enable_infiniband, enable_nvlink) are added to preload_argv. create_cuda_context: bool, default None Ensure the CUDA context gets created at initialization, generally needed by Dask workers. enable_tcp: bool, default None Set environment variables to enable TCP over UCX, even when InfiniBand or NVLink support are disabled. enable_infiniband: bool, default None Set environment variables to enable UCX InfiniBand support. Implies enable_tcp=True. enable_rdmacm: bool, default None Set environment variables to enable UCX RDMA connection manager support. Currently requires enable_infiniband=True. 
enable_nvlink: bool, default None Set environment variables to enable UCX NVLink support. Implies enable_tcp=True. Example ------- >>> from dask_cuda.utils import get_preload_options >>> get_preload_options() {'preload': ['dask_cuda.initialize'], 'preload_argv': []} >>> get_preload_options(protocol="ucx", ... create_cuda_context=True, ... enable_infiniband=True) {'preload': ['dask_cuda.initialize'], 'preload_argv': ['--create-cuda-context', '--enable-infiniband']} """ preload_options = {"preload": ["dask_cuda.initialize"], "preload_argv": []} if create_cuda_context: preload_options["preload_argv"].append("--create-cuda-context") if protocol in ["ucx", "ucxx"]: initialize_ucx_argv = [] if enable_tcp_over_ucx: initialize_ucx_argv.append("--enable-tcp-over-ucx") if enable_infiniband: initialize_ucx_argv.append("--enable-infiniband") if enable_rdmacm: initialize_ucx_argv.append("--enable-rdmacm") if enable_nvlink: initialize_ucx_argv.append("--enable-nvlink") preload_options["preload_argv"].extend(initialize_ucx_argv) return preload_options def get_rmm_log_file_name(dask_worker, logging=False, log_directory=None): return ( os.path.join( log_directory, "rmm_log_%s.txt" % ( ( dask_worker.name.split("/")[-1] if isinstance(dask_worker.name, str) else dask_worker.name ) if hasattr(dask_worker, "name") else "scheduler" ), ) if logging else None ) def wait_workers( client, min_timeout=10, seconds_per_gpu=2, n_gpus=None, timeout_callback=None ): """ Wait for workers to be available. When a timeout occurs, a callback is executed if specified. Generally used for tests. Parameters ---------- client: distributed.Client Instance of client, used to query for number of workers connected. min_timeout: float Minimum number of seconds to wait before timeout. This value may be overridden by setting the `DASK_CUDA_WAIT_WORKERS_MIN_TIMEOUT` with a positive integer. seconds_per_gpu: float Seconds to wait for each GPU on the system. 
For example, if its value is 2 and there is a total of 8 GPUs (workers) being started, a timeout will occur after 16 seconds. Note that this value is only used as timeout when larger than min_timeout. n_gpus: None or int If specified, will wait for a that amount of GPUs (i.e., Dask workers) to come online, else waits for a total of `get_n_gpus` workers. timeout_callback: None or callable A callback function to be executed if a timeout occurs, ignored if None. Returns ------- True if all workers were started, False if a timeout occurs. """ min_timeout_env = os.environ.get("DASK_CUDA_WAIT_WORKERS_MIN_TIMEOUT", None) min_timeout = min_timeout if min_timeout_env is None else int(min_timeout_env) n_gpus = n_gpus or get_n_gpus() timeout = max(min_timeout, seconds_per_gpu * n_gpus) start = time.time() while True: if len(client.scheduler_info()["workers"]) == n_gpus: return True elif time.time() - start > timeout: if callable(timeout_callback): timeout_callback() return False else: time.sleep(0.1) async def _all_to_all(client): """ Trigger all to all communication between workers and scheduler """ workers = list(client.scheduler_info()["workers"]) futs = [] for w in workers: bit_of_data = b"0" * 1 data = client.map(lambda x: bit_of_data, range(1), pure=False, workers=[w]) futs.append(data[0]) await wait(futs) def f(x): pass new_futs = [] for w in workers: for future in futs: data = client.submit(f, future, workers=[w], pure=False) new_futs.append(data) await wait(new_futs) def all_to_all(client): return client.sync(_all_to_all, client=client, asynchronous=client.asynchronous) def parse_cuda_visible_device(dev): """Parses a single CUDA device identifier A device identifier must either be an integer, a string containing an integer or a string containing the device's UUID, beginning with prefix 'GPU-' or 'MIG-'. 
>>> parse_cuda_visible_device(2) 2 >>> parse_cuda_visible_device('2') 2 >>> parse_cuda_visible_device('GPU-9baca7f5-0f2f-01ac-6b05-8da14d6e9005') 'GPU-9baca7f5-0f2f-01ac-6b05-8da14d6e9005' >>> parse_cuda_visible_device('Foo') Traceback (most recent call last): ... ValueError: Devices in CUDA_VISIBLE_DEVICES must be comma-separated integers or strings beginning with 'GPU-' or 'MIG-' prefixes. """ try: return int(dev) except ValueError: if any( dev.startswith(prefix) for prefix in [ "GPU-", "MIG-", ] ): return dev else: raise ValueError( "Devices in CUDA_VISIBLE_DEVICES must be comma-separated integers " "or strings beginning with 'GPU-' or 'MIG-' prefixes." ) def cuda_visible_devices(i, visible=None): """Cycling values for CUDA_VISIBLE_DEVICES environment variable Examples -------- >>> cuda_visible_devices(0, range(4)) '0,1,2,3' >>> cuda_visible_devices(3, range(8)) '3,4,5,6,7,0,1,2' """ if visible is None: try: visible = map( parse_cuda_visible_device, os.environ["CUDA_VISIBLE_DEVICES"].split(",") ) except KeyError: visible = range(get_n_gpus()) visible = list(visible) L = visible[i:] + visible[:i] return ",".join(map(str, L)) def nvml_device_index(i, CUDA_VISIBLE_DEVICES): """Get the device index for NVML addressing NVML expects the index of the physical device, unlike CUDA runtime which expects the address relative to `CUDA_VISIBLE_DEVICES`. This function returns the i-th device index from the `CUDA_VISIBLE_DEVICES` comma-separated string of devices or list. 
Examples -------- >>> nvml_device_index(1, "0,1,2,3") 1 >>> nvml_device_index(1, "1,2,3,0") 2 >>> nvml_device_index(1, [0,1,2,3]) 1 >>> nvml_device_index(1, [1,2,3,0]) 2 >>> nvml_device_index(1, ["GPU-84fd49f2-48ad-50e8-9f2e-3bf0dfd47ccb", "GPU-d6ac2d46-159b-5895-a854-cb745962ef0f", "GPU-158153b7-51d0-5908-a67c-f406bc86be17"]) "MIG-d6ac2d46-159b-5895-a854-cb745962ef0f" >>> nvml_device_index(2, ["MIG-41b3359c-e721-56e5-8009-12e5797ed514", "MIG-65b79fff-6d3c-5490-a288-b31ec705f310", "MIG-c6e2bae8-46d4-5a7e-9a68-c6cf1f680ba0"]) "MIG-c6e2bae8-46d4-5a7e-9a68-c6cf1f680ba0" >>> nvml_device_index(1, 2) Traceback (most recent call last): ... ValueError: CUDA_VISIBLE_DEVICES must be `str` or `list` """ if isinstance(CUDA_VISIBLE_DEVICES, str): ith_elem = CUDA_VISIBLE_DEVICES.split(",")[i] if ith_elem.isnumeric(): return int(ith_elem) else: return ith_elem elif isinstance(CUDA_VISIBLE_DEVICES, list): return CUDA_VISIBLE_DEVICES[i] else: raise ValueError("`CUDA_VISIBLE_DEVICES` must be `str` or `list`") def parse_device_memory_limit(device_memory_limit, device_index=0, alignment_size=1): """Parse memory limit to be used by a CUDA device. Parameters ---------- device_memory_limit: float, int, str or None This can be a float (fraction of total device memory), an integer (bytes), a string (like 5GB or 5000M), and "auto", 0 or None for the total device size. device_index: int or str The index or UUID of the device from which to obtain the total memory amount. Default: 0. alignment_size: int Number of bytes of alignment to use, i.e., allocation must be a multiple of that size. RMM pool requires 256 bytes alignment. 
Examples -------- >>> # On a 32GB CUDA device >>> parse_device_memory_limit(None) 34089730048 >>> parse_device_memory_limit(0.8) 27271784038 >>> parse_device_memory_limit(1000000000) 1000000000 >>> parse_device_memory_limit("1GB") 1000000000 """ def _align(size, alignment_size): return size // alignment_size * alignment_size if device_memory_limit in {0, "0", None, "auto"}: return _align(get_device_total_memory(device_index), alignment_size) with suppress(ValueError, TypeError): device_memory_limit = float(device_memory_limit) if isinstance(device_memory_limit, float) and device_memory_limit <= 1: return _align( int(get_device_total_memory(device_index) * device_memory_limit), alignment_size, ) if isinstance(device_memory_limit, str): return _align(parse_bytes(device_memory_limit), alignment_size) else: return _align(int(device_memory_limit), alignment_size) def get_gpu_uuid_from_index(device_index=0): """Get GPU UUID from CUDA device index. Parameters ---------- device_index: int or str The index of the device from which to obtain the UUID. Default: 0. 
Examples -------- >>> get_gpu_uuid_from_index() 'GPU-9baca7f5-0f2f-01ac-6b05-8da14d6e9005' >>> get_gpu_uuid_from_index(3) 'GPU-9fb42d6f-7d6b-368f-f79c-3c3e784c93f6' """ import pynvml pynvml.nvmlInit() handle = pynvml.nvmlDeviceGetHandleByIndex(device_index) try: return pynvml.nvmlDeviceGetUUID(handle).decode("utf-8") except AttributeError: return pynvml.nvmlDeviceGetUUID(handle) def get_worker_config(dask_worker): from .proxify_host_file import ProxifyHostFile # assume homogeneous cluster plugin_vals = dask_worker.plugins.values() ret = {} # device and host memory configuration for p in plugin_vals: config = { v: getattr(p, v) for v in dir(p) if not (v.startswith("_") or v in {"setup", "cores"}) } # To send this back to the client the data will be serialised # which might fail, so pre-emptively check try: pickle.dumps(config) except TypeError: config = "UNKNOWN CONFIG" ret[f"[plugin] {type(p).__name__}"] = config for mem in [ "memory_limit", "memory_pause_fraction", "memory_spill_fraction", "memory_target_fraction", ]: ret[mem] = getattr(dask_worker.memory_manager, mem) # jit unspilling set ret["jit-unspill"] = isinstance(dask_worker.data, ProxifyHostFile) # get optional device-memory-limit if ret["jit-unspill"]: ret["device-memory-limit"] = dask_worker.data.manager._device_memory_limit else: has_device = hasattr(dask_worker.data, "device_buffer") if has_device: ret["device-memory-limit"] = dask_worker.data.device_buffer.n # using ucx ? 
scheme, loc = parse_address(dask_worker.scheduler.address) ret["protocol"] = scheme if scheme == "ucx": import ucp ret["ucx-transports"] = ucp.get_active_transports() elif scheme == "ucxx": import ucxx ret["ucx-transports"] = ucxx.get_active_transports() # comm timeouts ret["distributed.comm.timeouts"] = dask.config.get("distributed.comm.timeouts") return ret async def get_scheduler_configuration(client): worker_ttl = await client.run_on_scheduler( lambda dask_scheduler: dask_scheduler.worker_ttl ) extensions = list( await client.run_on_scheduler( lambda dask_scheduler: dask_scheduler.extensions.keys() ) ) ret = {} ret["distributed.scheduler.worker-ttl"] = worker_ttl ret["active-extensions"] = extensions return ret async def _get_cluster_configuration(client): worker_config = await client.run(get_worker_config) ret = await get_scheduler_configuration(client) # does the cluster have any workers ? if worker_config: w = list(worker_config.values())[0] ret.update(w) info = client.scheduler_info() workers = info.get("workers", {}) ret["nworkers"] = len(workers) ret["nthreads"] = sum(w["nthreads"] for w in workers.values()) return ret @singledispatch def pretty_print(obj, toplevel): from rich.pretty import Pretty return Pretty(obj) @pretty_print.register(str) def pretty_print_str(obj, toplevel): from rich.markup import escape return escape(obj) @pretty_print.register(dict) def pretty_print_dict(obj, toplevel): from rich.table import Table if not obj: return "No known settings" formatted_byte_keys = { "memory_limit", "device-memory-limit", "initial_pool_size", "maximum_pool_size", } t = Table( show_header=toplevel, title="Dask Cluster Configuration" if toplevel else None ) t.add_column("Parameter", justify="left", style="bold bright_green") t.add_column("Value", justify="left", style="bold bright_green") for k, v in sorted(obj.items(), key=operator.itemgetter(0)): if k in formatted_byte_keys and v is not None: v = format_bytes(v) # need to escape tags: [] # 
https://rich.readthedocs.io/en/stable/markup.html?highlight=escape#escaping t.add_row(pretty_print(k, False), pretty_print(v, False)) return t def print_cluster_config(client): """print current Dask cluster configuration""" if client.asynchronous: print("Printing cluster configuration works only with synchronous Dask clients") data = get_cluster_configuration(client) try: from rich.console import Console except ModuleNotFoundError as e: error_msg = ( "Please install rich `python -m pip install rich` " "to print a table of the current Dask Cluster Configuration" ) raise ModuleNotFoundError(error_msg) from e formatted = pretty_print(data, True) Console().print(formatted) def get_cluster_configuration(client): data = client.sync( _get_cluster_configuration, client=client, asynchronous=client.asynchronous ) return data def get_rmm_device_memory_usage() -> Optional[int]: """Get current bytes allocated on current device through RMM Check the current RMM resource stack for resources such as `StatisticsResourceAdaptor` and `TrackingResourceAdaptor` that can report the current allocated bytes. Returns None, if no such resources exist. Return ------ nbytes: int or None Number of bytes allocated on device through RMM or None """ def get_rmm_memory_resource_stack(mr) -> list: if hasattr(mr, "upstream_mr"): return [mr] + get_rmm_memory_resource_stack(mr.upstream_mr) return [mr] try: import rmm except ImportError: return None for mr in get_rmm_memory_resource_stack(rmm.mr.get_current_device_resource()): if isinstance(mr, rmm.mr.TrackingResourceAdaptor): return mr.get_allocated_bytes() if isinstance(mr, rmm.mr.StatisticsResourceAdaptor): return mr.allocation_counts["current_bytes"] return None
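For reference, the device-rotation logic of `cuda_visible_devices` and the alignment rule used by `parse_device_memory_limit` above can be sketched standalone. The helpers below are hypothetical reimplementations for illustration, not part of the dask_cuda API, and they skip the environment-variable and NVML lookups:

```python
def rotate_visible_devices(i, visible):
    # Sketch of the rotation in cuda_visible_devices(): put device i first,
    # then wrap around, so each worker sees "its" GPU as device 0.
    visible = list(visible)
    return ",".join(map(str, visible[i:] + visible[:i]))


def align_limit(size, alignment_size=256):
    # Sketch of the _align() helper in parse_device_memory_limit(): round a
    # byte count down to a multiple of the alignment (RMM pools need 256 B).
    return size // alignment_size * alignment_size


print(rotate_visible_devices(3, range(8)))  # → 3,4,5,6,7,0,1,2
print(align_limit(1000000001))  # → 1000000000
```

Rotating the device list rather than restricting it to a single GPU keeps every device visible to every worker while still giving each worker a distinct default device.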
rapidsai_public_repos/dask-cuda/dask_cuda/VERSION
24.02.00
rapidsai_public_repos/dask-cuda/dask_cuda/worker_spec.py
import os

from dask.distributed import Nanny
from distributed.system import MEMORY_LIMIT

from .initialize import initialize
from .local_cuda_cluster import cuda_visible_devices
from .plugins import CPUAffinity
from .utils import get_cpu_affinity, get_gpu_count


def worker_spec(
    interface=None,
    protocol=None,
    dashboard_address=":8787",
    threads_per_worker=1,
    silence_logs=True,
    CUDA_VISIBLE_DEVICES=None,
    enable_tcp_over_ucx=False,
    enable_infiniband=False,
    enable_nvlink=False,
    **kwargs
):
    """Create a Spec for a CUDA worker.

    The Spec created by this function can be used as a recipe for CUDA workers
    that can be passed to a SpecCluster.

    Parameters
    ----------
    interface: str
        The external interface used to connect to the scheduler.
    protocol: str
        The protocol to use for data transfer, e.g., "tcp" or "ucx".
    dashboard_address: str
        The address for the scheduler dashboard. Defaults to ":8787".
    threads_per_worker: int
        Number of threads to be used for each CUDA worker process.
    silence_logs: bool
        Disable logging for all worker processes.
    CUDA_VISIBLE_DEVICES: str
        String like ``"0,1,2,3"`` or ``[0, 1, 2, 3]`` to restrict activity to
        different GPUs.
    enable_tcp_over_ucx: bool
        Set environment variables to enable TCP over UCX, even if InfiniBand
        and NVLink are not supported or disabled.
    enable_infiniband: bool
        Set environment variables to enable UCX InfiniBand support. Implies
        enable_tcp_over_ucx=True.
    enable_nvlink: bool
        Set environment variables to enable UCX NVLink support. Implies
        enable_tcp_over_ucx=True.

    Examples
    --------
    >>> from dask_cuda.worker_spec import worker_spec
    >>> worker_spec(interface="enp1s0f0", CUDA_VISIBLE_DEVICES=[0, 2])
    {0: {'cls': distributed.nanny.Nanny,
      'options': {'env': {'CUDA_VISIBLE_DEVICES': '0,2'},
       'interface': 'enp1s0f0',
       'protocol': None,
       'nthreads': 1,
       'data': dict,
       'dashboard_address': ':8787',
       'plugins': [<dask_cuda.utils.CPUAffinity at 0x7fbb8748a860>],
       'silence_logs': True,
       'memory_limit': 135263611392.0,
       'preload': ['dask_cuda.initialize'],
       'preload_argv': ['--create-cuda-context']}},
     2: {'cls': distributed.nanny.Nanny,
      'options': {'env': {'CUDA_VISIBLE_DEVICES': '2,0'},
       'interface': 'enp1s0f0',
       'protocol': None,
       'nthreads': 1,
       'data': dict,
       'dashboard_address': ':8787',
       'plugins': [<dask_cuda.utils.CPUAffinity at 0x7fbb8748a0f0>],
       'silence_logs': True,
       'memory_limit': 135263611392.0,
       'preload': ['dask_cuda.initialize'],
       'preload_argv': ['--create-cuda-context']}}}
    """
    if (
        enable_tcp_over_ucx or enable_infiniband or enable_nvlink
    ) and protocol != "ucx":
        raise TypeError("Enabling InfiniBand or NVLink requires protocol='ucx'")

    if CUDA_VISIBLE_DEVICES is None:
        CUDA_VISIBLE_DEVICES = os.environ.get(
            "CUDA_VISIBLE_DEVICES", list(range(get_gpu_count()))
        )
    if isinstance(CUDA_VISIBLE_DEVICES, str):
        CUDA_VISIBLE_DEVICES = CUDA_VISIBLE_DEVICES.split(",")
    CUDA_VISIBLE_DEVICES = list(map(int, CUDA_VISIBLE_DEVICES))
    memory_limit = MEMORY_LIMIT / get_gpu_count()

    initialize(
        enable_tcp_over_ucx=enable_tcp_over_ucx,
        enable_infiniband=enable_infiniband,
        enable_nvlink=enable_nvlink,
    )

    spec = {}
    for i, dev in enumerate(CUDA_VISIBLE_DEVICES):
        spec[dev] = {
            "cls": Nanny,
            "options": {
                "env": {
                    "CUDA_VISIBLE_DEVICES": cuda_visible_devices(
                        i, CUDA_VISIBLE_DEVICES
                    )
                },
                "interface": interface,
                "protocol": protocol,
                "nthreads": threads_per_worker,
                "data": dict,
                "dashboard_address": dashboard_address,
                "plugins": [CPUAffinity(get_cpu_affinity(dev))],
                "silence_logs": silence_logs,
                "memory_limit": memory_limit,
                "preload": "dask_cuda.initialize",
                "preload_argv": "--create-cuda-context",
            },
        }

    return spec
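The per-worker environment produced by `worker_spec` can be sketched without Dask: each worker's `CUDA_VISIBLE_DEVICES` is the full device list rotated so its own GPU comes first. The helper below is a hypothetical illustration of that mapping, not a dask_cuda function:

```python
def spec_env_vars(devices):
    # For each device, build the CUDA_VISIBLE_DEVICES string its Nanny would
    # receive: the device itself first, then the rest in cyclic order.
    devices = list(devices)
    return {
        dev: ",".join(map(str, devices[i:] + devices[:i]))
        for i, dev in enumerate(devices)
    }


print(spec_env_vars([0, 2]))  # → {0: '0,2', 2: '2,0'}
```

This matches the `'0,2'` / `'2,0'` env values shown in the `worker_spec` docstring example above.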
rapidsai_public_repos/dask-cuda/dask_cuda/disk_io.py
import itertools
import os
import os.path
import pathlib
import tempfile
import threading
import weakref
from typing import Callable, Iterable, Mapping, Optional, Union

import numpy as np

import dask
from distributed.utils import nbytes

_new_cuda_buffer: Optional[Callable[[int], object]] = None


def get_new_cuda_buffer() -> Callable[[int], object]:
    """Return a function to create an empty CUDA buffer"""
    global _new_cuda_buffer
    if _new_cuda_buffer is not None:
        return _new_cuda_buffer
    try:
        import rmm

        _new_cuda_buffer = lambda n: rmm.DeviceBuffer(size=n)
        return _new_cuda_buffer
    except ImportError:
        pass
    try:
        import cupy

        _new_cuda_buffer = lambda n: cupy.empty((n,), dtype="u1")
        return _new_cuda_buffer
    except ImportError:
        pass
    try:
        import numba.cuda

        def numba_device_array(n):
            a = numba.cuda.device_array((n,), dtype="u1")
            weakref.finalize(a, numba.cuda.current_context)
            return a

        _new_cuda_buffer = numba_device_array
        return _new_cuda_buffer
    except ImportError:
        pass
    raise RuntimeError("GPUDirect Storage requires RMM, CuPy, or Numba")


class SpillToDiskFile:
    """File path that gets removed on destruction

    When spilling to disk, we have to delay the removal of the file until
    no more proxies are pointing to the file.
    """

    path: str

    def __init__(self, path: str) -> None:
        self.path = path

    def __del__(self):
        os.remove(self.path)

    def __str__(self) -> str:
        return self.path

    def exists(self):
        return os.path.exists(self.path)

    def __deepcopy__(self, memo) -> str:
        """A deep copy is simply the path as a string.

        In order to avoid multiple instances of SpillToDiskFile pointing to
        the same file, we do not allow a direct copy.
        """
        return self.path

    def __copy__(self):
        raise RuntimeError("Cannot copy or pickle a SpillToDiskFile")

    def __reduce__(self):
        self.__copy__()


class SpillToDiskProperties:
    gds_enabled: bool
    shared_filesystem: bool
    root_dir: pathlib.Path
    tmpdir: tempfile.TemporaryDirectory

    def __init__(
        self,
        root_dir: Union[str, os.PathLike],
        shared_filesystem: Optional[bool] = None,
        gds: Optional[bool] = None,
    ):
        """
        Parameters
        ----------
        root_dir : os.PathLike
            Path to the root directory to write serialized data.
        shared_filesystem: bool or None, default None
            Whether the `root_dir` above is shared between all workers or not.
            If ``None``, the "jit-unspill-shared-fs" config value is used,
            which defaults to False.
        gds: bool
            Enable the use of GPUDirect Storage. If ``None``, the
            "gds-spilling" config value is used, which defaults to ``False``.
        """
        self.lock = threading.Lock()
        self.counter = 0
        self.root_dir = pathlib.Path(root_dir)
        os.makedirs(self.root_dir, exist_ok=True)
        self.tmpdir = tempfile.TemporaryDirectory(dir=self.root_dir)
        self.shared_filesystem = shared_filesystem or dask.config.get(
            "jit-unspill-shared-fs", default=False
        )
        self.gds_enabled = gds or dask.config.get("gds-spilling", default=False)

        if self.gds_enabled:
            try:
                import kvikio  # noqa F401
            except ImportError:
                raise ImportError(
                    "GPUDirect Storage requires the kvikio Python package"
                )
            else:
                self.gds_enabled = kvikio.DriverProperties().is_gds_available

    def gen_file_path(self) -> str:
        """Generate a unique file path"""
        with self.lock:
            self.counter += 1
            return str(
                pathlib.Path(self.tmpdir.name) / pathlib.Path("%04d" % self.counter)
            )


def disk_write(path: str, frames: Iterable, shared_filesystem: bool, gds=False) -> dict:
    """Write frames to disk

    Parameters
    ----------
    path: str
        File path
    frames: Iterable
        The frames to write to disk
    shared_filesystem: bool
        Whether the target filesystem is shared between all workers or not.
        If True, the filesystem must support the `os.link()` operation.
    gds: bool
        Enable the use of GPUDirect Storage. Notice, the consecutive
        `disk_read()` must enable GDS as well.

    Returns
    -------
    header: dict
        A dict of metadata
    """
    cuda_frames = tuple(hasattr(f, "__cuda_array_interface__") for f in frames)

    if gds and any(cuda_frames):
        import kvikio

        # Write each frame consecutively into `path` in parallel
        with kvikio.CuFile(path, "w") as f:
            file_offsets = itertools.accumulate(map(nbytes, frames), initial=0)
            futures = [f.pwrite(b, file_offset=o) for b, o in zip(frames, file_offsets)]
            for each_fut in futures:
                each_fut.get()
    else:
        with open(path, "wb") as f:
            os.writev(f.fileno(), frames)  # type: ignore
    return {
        "method": "stdio",
        "path": SpillToDiskFile(path),
        "frame-lengths": tuple(map(nbytes, frames)),
        "shared-filesystem": shared_filesystem,
        "cuda-frames": cuda_frames,
    }


def disk_read(header: Mapping, gds=False) -> list:
    """Read frames from disk

    Parameters
    ----------
    header: Mapping
        The metadata of the frames to read
    gds: bool
        Enable the use of GPUDirect Storage. Notice, this must match the GDS
        option set by the prior `disk_write()` call.

    Returns
    -------
    frames: list
        List of read frames
    """
    ret: list = [
        get_new_cuda_buffer()(length)
        if gds and is_cuda
        else np.empty((length,), dtype="u1")
        for length, is_cuda in zip(header["frame-lengths"], header["cuda-frames"])
    ]
    if gds:
        import kvikio  # isort:skip

        with kvikio.CuFile(str(header["path"]), "r") as f:
            # Read each frame consecutively from `path` in parallel
            file_offsets = itertools.accumulate((b.nbytes for b in ret), initial=0)
            futures = [f.pread(b, file_offset=o) for b, o in zip(ret, file_offsets)]
            for each_fut in futures:
                each_fut.get()
    else:
        with open(str(header["path"]), "rb") as f:
            os.readv(f.fileno(), ret)  # type: ignore
    return ret
rapidsai_public_repos/dask-cuda/dask_cuda/explicit_comms/comms.py
import asyncio
import concurrent.futures
import contextlib
import time
import uuid
from typing import Any, Dict, Hashable, Iterable, List, Optional

import distributed.comm
from distributed import Client, Worker, default_client, get_worker
from distributed.comm.addressing import parse_address, parse_host_port, unparse_address

_default_comms = None


def get_multi_lock_or_null_context(multi_lock_context, *args, **kwargs):
    """Return either a MultiLock or a NULL context

    Parameters
    ----------
    multi_lock_context: bool
        If True return a MultiLock context else return a NULL context that
        doesn't do anything
    *args, **kwargs:
        Arguments passed to the MultiLock creation

    Returns
    -------
    context: context
        Either `MultiLock(*args, **kwargs)` or a NULL context
    """
    if multi_lock_context:
        from distributed import MultiLock

        return MultiLock(*args, **kwargs)
    else:
        return contextlib.nullcontext()


def default_comms(client: Optional[Client] = None) -> "CommsContext":
    """Return the default comms object

    Creates a new default comms object if none exists.

    Parameters
    ----------
    client: Client, optional
        If no default comms object exists, the new comms object is created
        on `client`.

    Returns
    -------
    comms: CommsContext
        The default comms object
    """
    global _default_comms
    if _default_comms is None:
        _default_comms = CommsContext(client=client)
    return _default_comms


def worker_state(sessionId: Optional[int] = None) -> dict:
    """Retrieve the state(s) of the current worker

    Parameters
    ----------
    sessionId: int, optional
        Worker session state ID. If None, all states of the worker are
        returned.

    Returns
    -------
    state: dict
        Either a single state dict or a dict of state dicts
    """
    worker: Any = get_worker()
    if not hasattr(worker, "_explicit_comm_state"):
        worker._explicit_comm_state = {}
    if sessionId is not None:
        if sessionId not in worker._explicit_comm_state:
            worker._explicit_comm_state[sessionId] = {
                "ts": time.time(),
                "eps": {},
                "loop": worker.loop.asyncio_loop,
                "worker": worker,
            }
        return worker._explicit_comm_state[sessionId]
    return worker._explicit_comm_state


def _run_coroutine_on_worker(sessionId, coroutine, args):
    session_state = worker_state(sessionId)

    def _run():
        future = asyncio.run_coroutine_threadsafe(
            coroutine(session_state, *args), session_state["loop"]
        )
        return future.result()

    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
        return executor.submit(_run).result()


async def _create_listeners(session_state, nworkers, rank):
    assert session_state["loop"] is asyncio.get_event_loop()
    assert "nworkers" not in session_state
    session_state["nworkers"] = nworkers
    assert "rank" not in session_state
    session_state["rank"] = rank

    async def server_handler(ep):
        peer_rank = await ep.read()
        session_state["eps"][peer_rank] = ep

    # We listen on the same protocol and address as the worker address
    protocol, address = parse_address(session_state["worker"].address)
    address = parse_host_port(address)[0]
    address = unparse_address(protocol, address)

    session_state["lf"] = distributed.comm.listen(address, server_handler)
    await session_state["lf"].start()
    return session_state["lf"].listen_address


async def _create_endpoints(session_state, peers):
    """Each worker creates a UCX endpoint to all workers with greater rank"""
    assert session_state["loop"] is asyncio.get_event_loop()

    myrank = session_state["rank"]
    peers = list(enumerate(peers))

    # Create endpoints to workers with a greater rank than my rank
    for rank, address in peers[myrank + 1 :]:
        ep = await distributed.comm.connect(address)
        await ep.write(session_state["rank"])
        session_state["eps"][rank] = ep

    # Block until all endpoints have been created
    while len(session_state["eps"]) < session_state["nworkers"] - 1:
        await asyncio.sleep(0.1)


async def _stop_ucp_listeners(session_state):
    assert len(session_state["eps"]) == session_state["nworkers"] - 1
    assert session_state["loop"] is asyncio.get_event_loop()
    session_state["lf"].stop()
    del session_state["lf"]


async def _stage_keys(session_state: dict, name: str, keys: set):
    worker: Worker = session_state["worker"]
    data = worker.data
    my_keys = keys.intersection(data)

    stages = session_state.get("stages", {})
    stage = stages.get(name, {})
    for k in my_keys:
        stage[k] = data[k]
    stages[name] = stage
    session_state["stages"] = stages
    return (session_state["rank"], my_keys)


class CommsContext:
    """Communication handler for explicit communication

    Parameters
    ----------
    client: Client, optional
        Specify client to use for communication. If None, use the default
        client.
    """

    client: Client
    sessionId: int
    worker_addresses: List[str]

    def __init__(self, client: Optional[Client] = None):
        self.client = client if client is not None else default_client()
        self.sessionId = uuid.uuid4().int

        # Get address of all workers (not Nanny addresses)
        self.worker_addresses = list(self.client.scheduler_info()["workers"].keys())

        # Make all workers listen and get all listen addresses
        self.worker_direct_addresses = []
        for rank, address in enumerate(self.worker_addresses):
            self.worker_direct_addresses.append(
                self.submit(
                    address,
                    _create_listeners,
                    len(self.worker_addresses),
                    rank,
                    wait=True,
                )
            )

        # Each worker creates an endpoint to all workers with greater rank
        self.run(_create_endpoints, self.worker_direct_addresses)

        # At this point all workers should have a rank and endpoints to
        # all other workers, thus we can now stop the listening.
        self.run(_stop_ucp_listeners)

    def submit(self, worker, coroutine, *args, wait=False):
        """Run a coroutine on a single worker

        The coroutine is given the worker's state dict as the first argument
        and ``*args`` as the following arguments.

        Parameters
        ----------
        worker: str
            Worker to run the ``coroutine``
        coroutine: coroutine
            The function to run on the worker
        *args:
            Arguments for ``coroutine``
        wait: boolean, optional
            If True, waits for the coroutine to finish before returning.

        Returns
        -------
        ret: object or Future
            If wait=True, the result of `coroutine`.
            If wait=False, a Future that can be waited on later.
        """
        ret = self.client.submit(
            _run_coroutine_on_worker,
            self.sessionId,
            coroutine,
            args,
            workers=[worker],
            pure=False,
        )
        return ret.result() if wait else ret

    def run(self, coroutine, *args, workers=None, lock_workers=False):
        """Run a coroutine on multiple workers

        The coroutine is given the worker's state dict as the first argument
        and ``*args`` as the following arguments.

        Parameters
        ----------
        coroutine: coroutine
            The function to run on each worker
        *args:
            Arguments for ``coroutine``
        workers: list, optional
            List of workers. Default is all workers
        lock_workers: bool, optional
            Use distributed.MultiLock to get exclusive access to the workers.
            Use this flag to support parallel runs.

        Returns
        -------
        ret: list
            List of the output from each worker
        """
        if workers is None:
            workers = self.worker_addresses
        with get_multi_lock_or_null_context(lock_workers, workers):
            ret = []
            for worker in workers:
                ret.append(
                    self.client.submit(
                        _run_coroutine_on_worker,
                        self.sessionId,
                        coroutine,
                        args,
                        workers=[worker],
                        pure=False,
                    )
                )
            return self.client.gather(ret)

    def stage_keys(self, name: str, keys: Iterable[Hashable]) -> Dict[int, set]:
        """Stage keys on workers under the given name

        In an explicit-comms task, use `pop_staging_area(..., name)` to
        access the staged keys and the associated data.

        Notes
        -----
        In the context of explicit-comms, staging is the act of duplicating
        the responsibility of Dask keys. When staging a key, the worker
        owning the key (as assigned by the Dask scheduler) saves a reference
        to the key and the associated data in its local staging area. From
        this point on, if the scheduler cancels the key, the worker (and the
        task running on the worker) still has exclusive access to the key
        and the associated data. This way, staging makes it possible for
        long running explicit-comms tasks to free input data ASAP.

        Parameters
        ----------
        name: str
            Name for the staging area
        keys: iterable
            The keys to stage

        Returns
        -------
        dict
            dict that maps each worker-rank to the worker's set of staged keys
        """
        return dict(self.run(_stage_keys, name, set(keys)))


def pop_staging_area(session_state: dict, name: str) -> Dict[str, Any]:
    """Pop the staging area called `name`

    This function must be called within a running explicit-comms task.

    Parameters
    ----------
    session_state: dict
        Worker session state
    name: str
        Name for the staging area

    Returns
    -------
    dict
        The staging area, which is a dict that maps keys to their data.
    """
    return session_state["stages"].pop(name)
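The connection pattern `_create_endpoints` sets up, where each rank dials only ranks greater than itself, can be sketched as pure data. The helper below is a hypothetical illustration of the invariant, not part of the module: every worker pair gets exactly one endpoint, so a cluster of n workers opens n*(n-1)/2 connections in total, with no pair dialing each other simultaneously.

```python
def endpoint_plan(nworkers):
    # Which peers each rank actively connects to: only ranks above its own.
    # Lower ranks are reached passively, via their incoming connections.
    return {rank: list(range(rank + 1, nworkers)) for rank in range(nworkers)}


plan = endpoint_plan(4)
print(plan)  # → {0: [1, 2, 3], 1: [2, 3], 2: [3], 3: []}
print(sum(len(peers) for peers in plan.values()))  # → 6, i.e. 4*3/2 pairs
```

After handshaking, each worker still ends up with `nworkers - 1` endpoints (outgoing plus incoming), which is exactly what `_stop_ucp_listeners` asserts before tearing down the listener.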
rapidsai_public_repos/dask-cuda/dask_cuda/explicit_comms/dataframe/shuffle.py
from __future__ import annotations

import asyncio
import functools
import inspect
from collections import defaultdict
from math import ceil
from operator import getitem
from typing import Any, Callable, Dict, List, Optional, Set, TypeVar

import dask
import dask.config
import dask.dataframe
import dask.utils
import distributed.worker
from dask.base import tokenize
from dask.dataframe.core import DataFrame, Series, _concat as dd_concat, new_dd_object
from dask.dataframe.shuffle import group_split_dispatch, hash_object_dispatch
from distributed import wait
from distributed.protocol import nested_deserialize, to_serialize
from distributed.worker import Worker

from .. import comms

T = TypeVar("T")
Proxify = Callable[[T], T]


def get_proxify(worker: Worker) -> Proxify:
    """Get function to proxify objects"""
    from dask_cuda.proxify_host_file import ProxifyHostFile

    if isinstance(worker.data, ProxifyHostFile):
        # Notice, we know that we never call proxify() on the same proxied
        # object thus we can speed up the call by setting `duplicate_check=False`
        return lambda x: worker.data.manager.proxify(x, duplicate_check=False)[0]
    return lambda x: x  # no-op


def get_no_comm_postprocess(
    stage: Dict[str, Any], num_rounds: int, batchsize: int, proxify: Proxify
) -> Callable[[DataFrame], DataFrame]:
    """Get function for post-processing partitions not communicated

    In cuDF, `group_split_dispatch` uses `scatter_by_map` to create the
    partitions, which is implemented by splitting a single base dataframe
    into multiple partitions. This means that memory is not freed until ALL
    partitions are deleted.

    In order to free memory ASAP, we can deep copy partitions NOT being
    communicated. We do this when `num_rounds != batchsize`.

    Parameters
    ----------
    stage
        The staged input dataframes.
    num_rounds
        Number of rounds of dataframe partitioning and all-to-all communication.
    batchsize
        Number of partitions each worker will handle in each round.
    proxify
        Function to proxify object.

    Returns
    -------
    Function to be called on partitions not communicated.
    """
    if num_rounds == batchsize:
        return lambda x: x

    # Check that we are shuffling a cudf dataframe
    try:
        import cudf
    except ImportError:
        return lambda x: x
    if not stage or not isinstance(next(iter(stage.values())), cudf.DataFrame):
        return lambda x: x

    # Deep copying a cuDF dataframe doesn't deep copy its index hence
    # we have to do it explicitly.
    return lambda x: proxify(
        x._from_data(
            x._data.copy(deep=True),
            x._index.copy(deep=True),
        )
    )


async def send(
    eps,
    myrank: int,
    rank_to_out_part_ids: Dict[int, Set[int]],
    out_part_id_to_dataframe: Dict[int, DataFrame],
) -> None:
    """Notice, items sent are removed from `out_part_id_to_dataframe`"""
    futures = []
    for rank, out_part_ids in rank_to_out_part_ids.items():
        if rank != myrank:
            msg = {
                i: to_serialize(out_part_id_to_dataframe.pop(i))
                for i in (out_part_ids & out_part_id_to_dataframe.keys())
            }
            futures.append(eps[rank].write(msg))
    await asyncio.gather(*futures)


async def recv(
    eps,
    myrank: int,
    rank_to_out_part_ids: Dict[int, Set[int]],
    out_part_id_to_dataframe_list: Dict[int, List[DataFrame]],
    proxify: Proxify,
) -> None:
    """Notice, received items are appended to `out_parts_list`"""

    async def read_msg(rank: int) -> None:
        msg: Dict[int, DataFrame] = nested_deserialize(await eps[rank].read())
        for out_part_id, df in msg.items():
            out_part_id_to_dataframe_list[out_part_id].append(proxify(df))

    await asyncio.gather(
        *(read_msg(rank) for rank in rank_to_out_part_ids if rank != myrank)
    )


def compute_map_index(
    df: DataFrame, column_names: List[str], npartitions: int
) -> Series:
    """Return a Series that maps each row of `df` to a partition ID

    The partitions are determined by hashing the columns given by
    `column_names` unless `column_names[0] == "_partitions"`, in which case
    the values of `column_names[0]` are used as index.

    Parameters
    ----------
    df
        The dataframe.
    column_names
        List of column names on which we want to split.
    npartitions
        The desired number of output partitions.

    Returns
    -------
    Series
        Series that maps each row of `df` to a partition ID
    """
    if column_names[0] == "_partitions":
        ind = df[column_names[0]]
    else:
        ind = hash_object_dispatch(
            df[column_names] if column_names else df, index=False
        )
    return ind % npartitions


def partition_dataframe(
    df: DataFrame, column_names: List[str], npartitions: int, ignore_index: bool
) -> Dict[int, DataFrame]:
    """Partition dataframe to a dict of dataframes

    The partitions are determined by hashing the columns given by
    `column_names` unless `column_names[0] == "_partitions"`, in which case
    the values of `column_names[0]` are used as index.

    Parameters
    ----------
    df
        The dataframe to partition
    column_names
        List of column names on which we want to partition.
    npartitions
        The desired number of output partitions.
    ignore_index
        Ignore index during shuffle. If True, performance may improve,
        but index values will not be preserved.

    Returns
    -------
    partitions
        Dict of dataframe-partitions, mapping partition-ID to dataframe
    """
    if column_names[0] != "_partitions" and hasattr(df, "partition_by_hash"):
        return dict(
            zip(
                range(npartitions),
                df.partition_by_hash(
                    column_names, npartitions, keep_index=not ignore_index
                ),
            )
        )
    map_index = compute_map_index(df, column_names, npartitions)
    return group_split_dispatch(df, map_index, npartitions, ignore_index=ignore_index)


def create_partitions(
    stage: Dict[str, Any],
    batchsize: int,
    column_names: List[str],
    npartitions: int,
    ignore_index: bool,
    proxify: Proxify,
) -> Dict[int, DataFrame]:
    """Create partitions from one or more staged dataframes

    Parameters
    ----------
    stage
        The staged input dataframes
    batchsize
        Number of staged dataframes to partition in this round.
    column_names
        List of column names on which we want to split.
    npartitions
        The desired number of output partitions.
    ignore_index
        Ignore index during shuffle. If True, performance may improve,
        but index values will not be preserved.
    proxify
        Function to proxify object.

    Returns
    -------
    partitions: dict of DataFrames
        Dict mapping partition-ID to dataframe
    """
    if not stage:
        return {}
    batchsize = min(len(stage), batchsize)

    # Grouping each input dataframe, one part for each partition ID.
    dfs_grouped: List[Dict[int, DataFrame]] = []
    for _ in range(batchsize):
        dfs_grouped.append(
            proxify(
                partition_dataframe(
                    # pop dataframe in any order, to free staged memory ASAP
                    stage.popitem()[1],
                    column_names,
                    npartitions,
                    ignore_index,
                )
            )
        )

    # Maps each output partition ID to a dataframe. If the partition is empty,
    # an empty dataframe is used.
    ret: Dict[int, DataFrame] = {}
    for i in range(npartitions):  # Iterate over all possible output partition IDs
        t = [df_grouped[i] for df_grouped in dfs_grouped]
        assert len(t) > 0
        if len(t) == 1:
            ret[i] = t[0]
        elif len(t) > 1:
            ret[i] = proxify(dd_concat(t, ignore_index=ignore_index))
    return ret


async def send_recv_partitions(
    eps: dict,
    myrank: int,
    rank_to_out_part_ids: Dict[int, Set[int]],
    out_part_id_to_dataframe: Dict[int, DataFrame],
    no_comm_postprocess: Callable[[DataFrame], DataFrame],
    proxify: Proxify,
    out_part_id_to_dataframe_list: Dict[int, List[DataFrame]],
) -> None:
    """Send and receive (all-to-all) partitions between all workers

    Parameters
    ----------
    eps
        Communication endpoints to the other workers.
    myrank
        The rank of this worker.
    rank_to_out_part_ids
        dict that for each worker rank specifies a set of output partition IDs.
        If the worker shouldn't return any partitions, it is excluded from the
        dict. Partition IDs are global integers `0..npartitions` and correspond
        to the dict keys returned by `group_split_dispatch`.
    out_part_id_to_dataframe
        Mapping from partition ID to dataframe. This dict is cleared on return.
    no_comm_postprocess
        Function to post-process partitions not communicated.
        See `get_no_comm_postprocess`
    proxify
        Function to proxify object.
    out_part_id_to_dataframe_list
        The **output** of this function, which is a dict of the partitions
        owned by this worker.
    """
    await asyncio.gather(
        recv(
            eps,
            myrank,
            rank_to_out_part_ids,
            out_part_id_to_dataframe_list,
            proxify,
        ),
        send(eps, myrank, rank_to_out_part_ids, out_part_id_to_dataframe),
    )

    # At this point `send()` should have popped all output partitions
    # besides the partitions owned by `myrank` (if any).
    assert (
        rank_to_out_part_ids[myrank] == out_part_id_to_dataframe.keys()
        or not out_part_id_to_dataframe
    )
    # We can now add them to the output dataframes.
    for out_part_id, dataframe in out_part_id_to_dataframe.items():
        out_part_id_to_dataframe_list[out_part_id].append(
            no_comm_postprocess(dataframe)
        )
    out_part_id_to_dataframe.clear()


async def shuffle_task(
    s,
    stage_name: str,
    rank_to_inkeys: Dict[int, set],
    rank_to_out_part_ids: Dict[int, Set[int]],
    column_names: List[str],
    npartitions: int,
    ignore_index: bool,
    num_rounds: int,
    batchsize: int,
) -> Dict[int, DataFrame]:
    """Explicit-comms shuffle task

    This function is running on each worker participating in the shuffle.

    Parameters
    ----------
    s: dict
        Worker session state
    stage_name: str
        Name of the stage to retrieve the input keys from.
    rank_to_inkeys: dict
        dict that for each worker rank specifies the set of staged input keys.
    rank_to_out_part_ids: dict
        dict that for each worker rank specifies a set of output partition IDs.
        If the worker shouldn't return any partitions, it is excluded from the
        dict. Partition IDs are global integers `0..npartitions` and correspond
        to the dict keys returned by `group_split_dispatch`.
    column_names: list of strings
        List of column names on which we want to split.
    npartitions: int
        The desired number of output partitions.
    ignore_index: bool
        Ignore index during shuffle. If True, performance may improve,
        but index values will not be preserved.
    num_rounds: int
        Number of rounds of dataframe partitioning and all-to-all communication.
    batchsize: int
        Number of partitions each worker will handle in each round.

    Returns
    -------
    partitions: dict
        dict that maps each partition ID to a dataframe-partition
    """
    proxify = get_proxify(s["worker"])
    eps = s["eps"]
    myrank: int = s["rank"]
    stage = comms.pop_staging_area(s, stage_name)
    assert stage.keys() == rank_to_inkeys[myrank]
    no_comm_postprocess = get_no_comm_postprocess(stage, num_rounds, batchsize, proxify)

    out_part_id_to_dataframe_list: Dict[int, List[DataFrame]] = defaultdict(list)
    for _ in range(num_rounds):
        partitions = create_partitions(
            stage, batchsize, column_names, npartitions, ignore_index, proxify
        )
        await send_recv_partitions(
            eps,
            myrank,
            rank_to_out_part_ids,
            partitions,
            no_comm_postprocess,
            proxify,
            out_part_id_to_dataframe_list,
        )

    # Finally, we concatenate the output dataframes into the final output partitions
    ret = {}
    while out_part_id_to_dataframe_list:
        part_id, dataframe_list = out_part_id_to_dataframe_list.popitem()
        ret[part_id] = proxify(
            dd_concat(
                dataframe_list,
                ignore_index=ignore_index,
            )
        )
        # For robustness, we yield this task to give Dask a chance to do bookkeeping
        # such as letting the Worker answer heartbeat requests
        await asyncio.sleep(0)
    return ret


def shuffle(
    df: DataFrame,
    column_names: List[str],
    npartitions: Optional[int] = None,
    ignore_index: bool = False,
    batchsize: Optional[int] = None,
) -> DataFrame:
    """Order divisions of DataFrame so that all values within column(s) align

    This enacts a task-based shuffle using explicit-comms. It requires a full
    dataset read, serialization and shuffle. This is expensive. If possible
    you should avoid shuffles.

    This does not preserve a meaningful index/partitioning scheme. This is
    not deterministic if done in parallel.

    Requires an active client.

    Parameters
    ----------
    df: dask.dataframe.DataFrame
        Dataframe to shuffle
    column_names: list of strings
        List of column names on which we want to split.
    npartitions: int or None
        The desired number of output partitions.
        If None, the number of output partitions equals `df.npartitions`.
    ignore_index: bool
        Ignore index during shuffle. If True, performance may improve,
        but index values will not be preserved.
    batchsize: int
        A shuffle consists of multiple rounds where each worker partitions and
        then all-to-all communicates a number of its dataframe partitions. The
        batch size is the number of partitions each worker will handle in each
        round. If -1, each worker will handle all its partitions in a single
        round and all techniques to reduce memory usage are disabled, which
        might be faster when memory pressure isn't an issue. If None, the
        value of `DASK_EXPLICIT_COMMS_BATCHSIZE` is used, or 1 if not set;
        thus, by default, we prioritize robustness over performance.

    Returns
    -------
    df: dask.dataframe.DataFrame
        Shuffled dataframe

    Developer Notes
    ---------------
    The implementation consists of three steps:
      (a) Stage the partitions of `df` on all workers and then cancel them,
          thus at this point the Dask Scheduler doesn't know about any of the
          partitions.
      (b) Submit a task on each worker that shuffles (all-to-all communicates)
          the staged partitions and returns a list of dataframe-partitions.
      (c) Submit a dask graph that extracts (using `getitem()`) individual
          dataframe-partitions from (b).
    """
    c = comms.default_comms()

    # The ranks of the output workers
    ranks = list(range(len(c.worker_addresses)))

    # By default, we preserve number of partitions
    if npartitions is None:
        npartitions = df.npartitions

    # Step (a):
    df = df.persist()  # Make sure optimizations are applied on the existing graph
    wait([df])  # Make sure all keys have been materialized on workers
    name = (
        "explicit-comms-shuffle-"
        f"{tokenize(df, column_names, npartitions, ignore_index)}"
    )
    df_meta: DataFrame = df._meta

    # Stage all keys of `df` on the workers and cancel them, which makes it
    # possible for the shuffle to free memory as the partitions of `df` are
    # consumed. See CommsContext.stage_keys() for a description of staging.
    rank_to_inkeys = c.stage_keys(name=name, keys=df.__dask_keys__())
    c.client.cancel(df)

    # Get batchsize
    max_num_inkeys = max(len(k) for k in rank_to_inkeys.values())
    batchsize = batchsize or dask.config.get("explicit-comms-batchsize", 1)
    if batchsize == -1:
        batchsize = max_num_inkeys
    if not isinstance(batchsize, int) or batchsize < 0:
        raise ValueError(
            "explicit-comms-batchsize must be a "
            f"positive integer or -1 (was '{batchsize}')"
        )

    # Get number of rounds of dataframe partitioning and all-to-all communication.
    num_rounds = ceil(max_num_inkeys / batchsize)

    # Find the output partition IDs for each worker
    div = npartitions // len(ranks)
    rank_to_out_part_ids: Dict[int, Set[int]] = {}  # rank -> set of partition id
    for i, rank in enumerate(ranks):
        rank_to_out_part_ids[rank] = set(range(div * i, div * (i + 1)))
    for rank, i in zip(ranks, range(div * len(ranks), npartitions)):
        rank_to_out_part_ids[rank].add(i)

    # Step (b): run a shuffle task on each worker
    shuffle_result = {}
    for rank in ranks:
        shuffle_result[rank] = c.submit(
            c.worker_addresses[rank],
            shuffle_task,
            name,
            rank_to_inkeys,
            rank_to_out_part_ids,
            column_names,
            npartitions,
            ignore_index,
            num_rounds,
            batchsize,
        )
    wait(list(shuffle_result.values()))

    # Step (c): extract individual dataframe-partitions. We use `submit()`
    # to control where the tasks are executed.
    # TODO: can we do this without using `submit()` to avoid the overhead
    #       of creating a Future for each dataframe partition?
    dsk = {}
    for rank in ranks:
        for part_id in rank_to_out_part_ids[rank]:
            dsk[(name, part_id)] = c.client.submit(
                getitem,
                shuffle_result[rank],
                part_id,
                workers=[c.worker_addresses[rank]],
            )

    # Create a distributed Dataframe from all the pieces
    divs = [None] * (len(dsk) + 1)
    ret = new_dd_object(dsk, name, df_meta, divs).persist()
    wait([ret])

    # Release all temporary dataframes
    for fut in [*shuffle_result.values(), *dsk.values()]:
        fut.release()
    return ret


def _use_explicit_comms() -> bool:
    """Is explicit-comms enabled and available?"""
    if dask.config.get("explicit-comms", False):
        try:
            # Make sure we have an active client.
            distributed.worker.get_client()
        except (ImportError, ValueError):
            pass
        else:
            return True
    return False


def get_rearrange_by_column_wrapper(func):
    """Returns a function wrapper that dispatches the shuffle to explicit-comms.

    Notice, this is monkey patched into Dask at dask_cuda import
    """

    func_sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if _use_explicit_comms():
            # Convert `*args, **kwargs` to a dict of `keyword -> values`
            kw = func_sig.bind(*args, **kwargs)
            kw.apply_defaults()
            kw = kw.arguments
            # Notice, we only overwrite the default and the "tasks" shuffle
            # algorithm. The "disk" and "p2p" algorithms we don't touch.
            if kw["shuffle"] in ("tasks", None):
                col = kw["col"]
                if isinstance(col, str):
                    col = [col]
                return shuffle(kw["df"], col, kw["npartitions"], kw["ignore_index"])
        return func(*args, **kwargs)

    return wrapper


def get_default_shuffle_method() -> str:
    """Return the default shuffle algorithm used by Dask

    This changes the default shuffle algorithm from "p2p" to "tasks" when
    explicit comms is enabled.
    """
    ret = dask.config.get("dataframe.shuffle.algorithm", None)
    if ret is None and _use_explicit_comms():
        return "tasks"
    return dask.utils.get_default_shuffle_method()
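The batching arithmetic in `shuffle()` — spreading `npartitions` output partition IDs across worker ranks and deriving the number of communication rounds — can be checked in isolation. This stdlib-only sketch extracts that logic into standalone functions (the function names are illustrative, not part of the dask-cuda API):

```python
from math import ceil


def assign_out_partitions(npartitions, nranks):
    """Split output partition IDs 0..npartitions-1 across ranks:
    each rank gets an even `div`-sized slice, then the remainder
    IDs are dealt out one per rank, mirroring the loop in shuffle()."""
    div = npartitions // nranks
    out = {r: set(range(div * r, div * (r + 1))) for r in range(nranks)}
    for r, extra in zip(range(nranks), range(div * nranks, npartitions)):
        out[r].add(extra)
    return out


def rounds_needed(max_num_inkeys, batchsize):
    """Rounds of partition-and-all-to-all needed at a given batch size."""
    return ceil(max_num_inkeys / batchsize)
```

With 10 partitions over 4 ranks, each rank gets 2 IDs and the first two ranks pick up one extra each; with `batchsize == max_num_inkeys` (the `-1` case in `shuffle()`), everything happens in a single round.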
rapidsai_public_repos/dask-cuda/dask_cuda/tests/test_dask_cuda_worker.py
from __future__ import absolute_import, division, print_function

import os
import pkgutil
import subprocess
import sys
from unittest.mock import patch

import pytest

from distributed import Client, wait
from distributed.system import MEMORY_LIMIT
from distributed.utils_test import cleanup, loop, loop_in_thread, popen  # noqa: F401

from dask_cuda.utils import (
    get_cluster_configuration,
    get_device_total_memory,
    get_gpu_count_mig,
    get_gpu_uuid_from_index,
    get_n_gpus,
    wait_workers,
)


@patch.dict(os.environ, {"CUDA_VISIBLE_DEVICES": "0,3,7,8"})
def test_cuda_visible_devices_and_memory_limit_and_nthreads(loop):  # noqa: F811
    nthreads = 4
    with popen(["dask", "scheduler", "--port", "9359", "--no-dashboard"]):
        with popen(
            [
                "dask",
                "cuda",
                "worker",
                "127.0.0.1:9359",
                "--host",
                "127.0.0.1",
                "--device-memory-limit",
                "1 MB",
                "--nthreads",
                str(nthreads),
                "--no-dashboard",
                "--worker-class",
                "dask_cuda.utils_test.MockWorker",
            ]
        ):
            with Client("127.0.0.1:9359", loop=loop) as client:
                assert wait_workers(client, n_gpus=4)

                def get_visible_devices():
                    return os.environ["CUDA_VISIBLE_DEVICES"]

                # verify 4 workers with the 4 expected CUDA_VISIBLE_DEVICES
                result = client.run(get_visible_devices)
                expected = {"0,3,7,8": 1, "3,7,8,0": 1, "7,8,0,3": 1, "8,0,3,7": 1}
                for v in result.values():
                    del expected[v]

                workers = client.scheduler_info()["workers"]
                for w in workers.values():
                    assert w["memory_limit"] == MEMORY_LIMIT // len(workers)

                assert len(expected) == 0


def test_rmm_pool(loop):  # noqa: F811
    rmm = pytest.importorskip("rmm")
    with popen(["dask", "scheduler", "--port", "9369", "--no-dashboard"]):
        with popen(
            [
                "dask",
                "cuda",
                "worker",
                "127.0.0.1:9369",
                "--host",
                "127.0.0.1",
                "--rmm-pool-size",
                "2 GB",
                "--no-dashboard",
            ]
        ):
            with Client("127.0.0.1:9369", loop=loop) as client:
                assert wait_workers(client, n_gpus=get_n_gpus())

                memory_resource_type = client.run(
                    rmm.mr.get_current_device_resource_type
                )
                for v in memory_resource_type.values():
                    assert v is rmm.mr.PoolMemoryResource


def test_rmm_managed(loop):  # noqa: F811
    rmm = pytest.importorskip("rmm")
    with popen(["dask", "scheduler", "--port", "9369", "--no-dashboard"]):
        with popen(
            [
                "dask",
                "cuda",
                "worker",
                "127.0.0.1:9369",
                "--host",
                "127.0.0.1",
                "--rmm-managed-memory",
                "--no-dashboard",
            ]
        ):
            with Client("127.0.0.1:9369", loop=loop) as client:
                assert wait_workers(client, n_gpus=get_n_gpus())

                memory_resource_type = client.run(
                    rmm.mr.get_current_device_resource_type
                )
                for v in memory_resource_type.values():
                    assert v is rmm.mr.ManagedMemoryResource


def test_rmm_async(loop):  # noqa: F811
    rmm = pytest.importorskip("rmm")

    driver_version = rmm._cuda.gpu.driverGetVersion()
    runtime_version = rmm._cuda.gpu.runtimeGetVersion()
    if driver_version < 11020 or runtime_version < 11020:
        pytest.skip("cudaMallocAsync not supported")

    with popen(["dask", "scheduler", "--port", "9369", "--no-dashboard"]):
        with popen(
            [
                "dask",
                "cuda",
                "worker",
                "127.0.0.1:9369",
                "--host",
                "127.0.0.1",
                "--rmm-async",
                "--rmm-pool-size",
                "2 GB",
                "--rmm-release-threshold",
                "3 GB",
                "--no-dashboard",
            ]
        ):
            with Client("127.0.0.1:9369", loop=loop) as client:
                assert wait_workers(client, n_gpus=get_n_gpus())

                memory_resource_type = client.run(
                    rmm.mr.get_current_device_resource_type
                )
                for v in memory_resource_type.values():
                    assert v is rmm.mr.CudaAsyncMemoryResource

                ret = get_cluster_configuration(client)
                wait(ret)
                assert ret["[plugin] RMMSetup"]["initial_pool_size"] == 2000000000
                assert ret["[plugin] RMMSetup"]["release_threshold"] == 3000000000


def test_rmm_async_with_maximum_pool_size(loop):  # noqa: F811
    rmm = pytest.importorskip("rmm")

    driver_version = rmm._cuda.gpu.driverGetVersion()
    runtime_version = rmm._cuda.gpu.runtimeGetVersion()
    if driver_version < 11020 or runtime_version < 11020:
        pytest.skip("cudaMallocAsync not supported")

    with popen(["dask", "scheduler", "--port", "9369", "--no-dashboard"]):
        with popen(
            [
                "dask",
                "cuda",
                "worker",
                "127.0.0.1:9369",
                "--host",
                "127.0.0.1",
                "--rmm-async",
                "--rmm-pool-size",
                "2 GB",
                "--rmm-release-threshold",
                "3 GB",
                "--rmm-maximum-pool-size",
                "4 GB",
                "--no-dashboard",
            ]
        ):
            with Client("127.0.0.1:9369", loop=loop) as client:
                assert wait_workers(client, n_gpus=get_n_gpus())

                memory_resource_types = client.run(
                    lambda: (
                        rmm.mr.get_current_device_resource_type(),
                        type(rmm.mr.get_current_device_resource().get_upstream()),
                    )
                )
                for v in memory_resource_types.values():
                    memory_resource_type, upstream_memory_resource_type = v
                    assert memory_resource_type is rmm.mr.LimitingResourceAdaptor
                    assert (
                        upstream_memory_resource_type is rmm.mr.CudaAsyncMemoryResource
                    )

                ret = get_cluster_configuration(client)
                wait(ret)
                assert ret["[plugin] RMMSetup"]["initial_pool_size"] == 2000000000
                assert ret["[plugin] RMMSetup"]["release_threshold"] == 3000000000
                assert ret["[plugin] RMMSetup"]["maximum_pool_size"] == 4000000000


def test_rmm_logging(loop):  # noqa: F811
    rmm = pytest.importorskip("rmm")
    with popen(["dask", "scheduler", "--port", "9369", "--no-dashboard"]):
        with popen(
            [
                "dask",
                "cuda",
                "worker",
                "127.0.0.1:9369",
                "--host",
                "127.0.0.1",
                "--rmm-pool-size",
                "2 GB",
                "--rmm-log-directory",
                ".",
                "--no-dashboard",
            ]
        ):
            with Client("127.0.0.1:9369", loop=loop) as client:
                assert wait_workers(client, n_gpus=get_n_gpus())

                memory_resource_type = client.run(
                    rmm.mr.get_current_device_resource_type
                )
                for v in memory_resource_type.values():
                    assert v is rmm.mr.LoggingResourceAdaptor


@patch.dict(os.environ, {"CUDA_VISIBLE_DEVICES": "0"})
def test_dashboard_address(loop):  # noqa: F811
    with popen(["dask", "scheduler", "--port", "9369", "--no-dashboard"]):
        with popen(
            [
                "dask",
                "cuda",
                "worker",
                "127.0.0.1:9369",
                "--dashboard-address",
                "127.0.0.1:9370",
            ]
        ):
            with Client("127.0.0.1:9369", loop=loop) as client:
                assert wait_workers(client, n_gpus=get_n_gpus())

                dashboard_addresses = client.run(
                    lambda dask_worker: dask_worker._dashboard_address
                )
                for v in dashboard_addresses.values():
                    assert v == "127.0.0.1:9370"


def test_unknown_argument():
    ret = subprocess.run(
        ["dask", "cuda", "worker", "--my-argument"], capture_output=True
    )
    assert ret.returncode != 0
    assert b"Scheduler address: --my-argument" in ret.stderr


@patch.dict(os.environ, {"CUDA_VISIBLE_DEVICES": "0"})
def test_pre_import(loop):  # noqa: F811
    module = None

    # Pick a module that isn't currently loaded
    for m in pkgutil.iter_modules():
        if m.ispkg and m.name not in sys.modules.keys():
            module = m.name
            break

    if module is None:
        pytest.skip("No module found that isn't already loaded")

    with popen(["dask", "scheduler", "--port", "9369", "--no-dashboard"]):
        with popen(
            [
                "dask",
                "cuda",
                "worker",
                "127.0.0.1:9369",
                "--pre-import",
                module,
            ]
        ):
            with Client("127.0.0.1:9369", loop=loop) as client:
                assert wait_workers(client, n_gpus=get_n_gpus())

                imported = client.run(lambda: module in sys.modules)
                assert all(imported)


@pytest.mark.timeout(20)
@patch.dict(os.environ, {"CUDA_VISIBLE_DEVICES": "0"})
def test_pre_import_not_found():
    with popen(["dask", "scheduler", "--port", "9369", "--no-dashboard"]):
        ret = subprocess.run(
            ["dask", "cuda", "worker", "127.0.0.1:9369", "--pre-import", "my_module"],
            capture_output=True,
        )
        assert ret.returncode != 0
        assert b"ModuleNotFoundError: No module named 'my_module'" in ret.stderr


def test_cuda_mig_visible_devices_and_memory_limit_and_nthreads(loop):  # noqa: F811
    uuids = get_gpu_count_mig(return_uuids=True)[1]
    # test only with some MIG Instances assuming the test bed
    # does not have a huge number of mig instances
    if len(uuids) > 0:
        cuda_visible_devices = ",".join([i.decode("utf-8") for i in uuids])
    else:
        pytest.skip("No MIG devices found")

    with patch.dict(os.environ, {"CUDA_VISIBLE_DEVICES": cuda_visible_devices}):
        nthreads = len(cuda_visible_devices)
        with popen(["dask", "scheduler", "--port", "9359", "--no-dashboard"]):
            with popen(
                [
                    "dask",
                    "cuda",
                    "worker",
                    "127.0.0.1:9359",
                    "--host",
                    "127.0.0.1",
                    "--nthreads",
                    str(nthreads),
                    "--no-dashboard",
                    "--worker-class",
                    "dask_cuda.utils_test.MockWorker",
                ]
            ):
                with Client("127.0.0.1:9359", loop=loop) as client:
                    assert wait_workers(client, n_gpus=len(uuids))

                    # Check to see if all workers are up and
                    # CUDA_VISIBLE_DEVICES cycles properly
                    def get_visible_devices():
                        return os.environ["CUDA_VISIBLE_DEVICES"]

                    result = client.run(get_visible_devices)
                    wait(result)
                    assert all(len(v.split(",")) == len(uuids) for v in result.values())
                    for i in range(len(uuids)):
                        assert set(
                            bytes(v.split(",")[i], "utf-8") for v in result.values()
                        ) == set(uuids)


def test_cuda_visible_devices_uuid(loop):  # noqa: F811
    gpu_uuid = get_gpu_uuid_from_index(0)

    with patch.dict(os.environ, {"CUDA_VISIBLE_DEVICES": gpu_uuid}):
        with popen(["dask", "scheduler", "--port", "9359", "--no-dashboard"]):
            with popen(
                [
                    "dask",
                    "cuda",
                    "worker",
                    "127.0.0.1:9359",
                    "--host",
                    "127.0.0.1",
                    "--no-dashboard",
                    "--worker-class",
                    "dask_cuda.utils_test.MockWorker",
                ]
            ):
                with Client("127.0.0.1:9359", loop=loop) as client:
                    assert wait_workers(client, n_gpus=1)

                    result = client.run(lambda: os.environ["CUDA_VISIBLE_DEVICES"])
                    assert list(result.values())[0] == gpu_uuid


def test_rmm_track_allocations(loop):  # noqa: F811
    rmm = pytest.importorskip("rmm")
    with popen(["dask", "scheduler", "--port", "9369", "--no-dashboard"]):
        with popen(
            [
                "dask",
                "cuda",
                "worker",
                "127.0.0.1:9369",
                "--host",
                "127.0.0.1",
                "--rmm-pool-size",
                "2 GB",
                "--no-dashboard",
                "--rmm-track-allocations",
            ]
        ):
            with Client("127.0.0.1:9369", loop=loop) as client:
                assert wait_workers(client, n_gpus=get_n_gpus())

                memory_resource_type = client.run(
                    rmm.mr.get_current_device_resource_type
                )
                for v in memory_resource_type.values():
                    assert v is rmm.mr.TrackingResourceAdaptor

                memory_resource_upstream_type = client.run(
                    lambda: type(rmm.mr.get_current_device_resource().upstream_mr)
                )
                for v in memory_resource_upstream_type.values():
                    assert v is rmm.mr.PoolMemoryResource


@patch.dict(os.environ, {"CUDA_VISIBLE_DEVICES": "0"})
def test_get_cluster_configuration(loop):  # noqa: F811
    pytest.importorskip("rmm")
    with popen(["dask", "scheduler", "--port", "9369", "--no-dashboard"]):
        with popen(
            [
                "dask",
                "cuda",
                "worker",
                "127.0.0.1:9369",
                "--host",
                "127.0.0.1",
                "--device-memory-limit",
                "30 B",
                "--rmm-pool-size",
                "2 GB",
                "--rmm-maximum-pool-size",
                "3 GB",
                "--no-dashboard",
                "--rmm-track-allocations",
            ]
        ):
            with Client("127.0.0.1:9369", loop=loop) as client:
                assert wait_workers(client, n_gpus=get_n_gpus())

                ret = get_cluster_configuration(client)
                wait(ret)
                assert ret["[plugin] RMMSetup"]["initial_pool_size"] == 2000000000
                assert ret["[plugin] RMMSetup"]["maximum_pool_size"] == 3000000000
                assert ret["jit-unspill"] is False
                assert ret["device-memory-limit"] == 30


@patch.dict(os.environ, {"CUDA_VISIBLE_DEVICES": "0"})
def test_worker_fraction_limits(loop):  # noqa: F811
    pytest.importorskip("rmm")
    with popen(["dask", "scheduler", "--port", "9369", "--no-dashboard"]):
        with popen(
            [
                "dask",
                "cuda",
                "worker",
                "127.0.0.1:9369",
                "--host",
                "127.0.0.1",
                "--device-memory-limit",
                "0.1",
                "--rmm-pool-size",
                "0.2",
                "--rmm-maximum-pool-size",
                "0.3",
                "--no-dashboard",
                "--rmm-track-allocations",
            ]
        ):
            with Client("127.0.0.1:9369", loop=loop) as client:
                assert wait_workers(client, n_gpus=get_n_gpus())

                device_total_memory = client.run(get_device_total_memory)
                wait(device_total_memory)
                _, device_total_memory = device_total_memory.popitem()
                ret = get_cluster_configuration(client)
                wait(ret)
                assert ret["device-memory-limit"] == int(device_total_memory * 0.1)
                assert (
                    ret["[plugin] RMMSetup"]["initial_pool_size"]
                    == (device_total_memory * 0.2) // 256 * 256
                )
                assert (
                    ret["[plugin] RMMSetup"]["maximum_pool_size"]
                    == (device_total_memory * 0.3) // 256 * 256
                )


@patch.dict(os.environ, {"CUDA_VISIBLE_DEVICES": "0"})
def test_worker_timeout():
    ret = subprocess.run(
        [
            "dask",
            "cuda",
            "worker",
            "192.168.1.100:7777",
            "--death-timeout",
            "1",
        ],
        text=True,
        encoding="utf8",
        capture_output=True,
    )

    assert "closing nanny at" in ret.stderr.lower()

    # Depending on the environment, the error raised may be different
    try:
        assert "reason: failure-to-start-" in ret.stderr.lower()
        assert "timeouterror" in ret.stderr.lower()
    except AssertionError:
        assert "reason: nanny-close" in ret.stderr.lower()

    assert ret.returncode == 0
rapidsai_public_repos/dask-cuda/dask_cuda/tests/test_proxify_host_file.py
from typing import Iterable
from unittest.mock import patch

import numpy as np
import pytest
from pandas.testing import assert_frame_equal

import dask
import dask.dataframe
from dask.dataframe.shuffle import shuffle_group
from dask.sizeof import sizeof
from dask.utils import format_bytes
from distributed import Client
from distributed.utils_test import gen_test

import dask_cuda
import dask_cuda.proxify_device_objects
from dask_cuda.get_device_memory_objects import get_device_memory_ids
from dask_cuda.proxify_host_file import ProxifyHostFile
from dask_cuda.proxy_object import ProxyObject, asproxy, unproxy
from dask_cuda.utils import get_device_total_memory
from dask_cuda.utils_test import IncreasedCloseTimeoutNanny

cupy = pytest.importorskip("cupy")
cupy.cuda.set_allocator(None)
one_item_array = lambda: cupy.arange(1)
one_item_nbytes = one_item_array().nbytes

# While testing we don't want to unproxify `cupy.ndarray` even though
# it is on the incompatible_types list by default.
dask_cuda.proxify_device_objects.dispatch.dispatch(cupy.ndarray)
dask_cuda.proxify_device_objects.incompatible_types = ()  # type: ignore


@pytest.fixture(scope="module")
def root_dir(tmp_path_factory):
    tmpdir = tmp_path_factory.mktemp("jit-unspill")

    # Make the "disk" serializer available and use a tmp directory
    if ProxifyHostFile._spill_to_disk is None:
        ProxifyHostFile(
            worker_local_directory=tmpdir.name,
            device_memory_limit=1024,
            memory_limit=1024,
        )
    assert ProxifyHostFile._spill_to_disk is not None

    # In order to use the same tmp dir, we use `root_dir` for all
    # ProxifyHostFile creations. Notice, we use `..` to remove the
    # `jit-unspill-disk-storage` part added by the
    # ProxifyHostFile implicitly.
    return str(ProxifyHostFile._spill_to_disk.root_dir / "..")


def is_proxies_equal(p1: Iterable[ProxyObject], p2: Iterable[ProxyObject]):
    """Check that two collections of proxies contain the same proxies (unordered)

    In order to avoid deserializing proxy objects when comparing them,
    this function compares object IDs.
    """
    ids1 = sorted([id(p) for p in p1])
    ids2 = sorted([id(p) for p in p2])
    return ids1 == ids2


def test_one_dev_item_limit(root_dir):
    dhf = ProxifyHostFile(
        worker_local_directory=root_dir,
        device_memory_limit=one_item_nbytes,
        memory_limit=1000,
    )

    a1 = one_item_array() + 42
    a2 = one_item_array()
    dhf["k1"] = a1
    dhf["k2"] = a2
    dhf.manager.validate()

    # Check k1 is spilled because of the newer k2
    k1 = dhf["k1"]
    k2 = dhf["k2"]
    assert k1._pxy_get().is_serialized()
    assert not k2._pxy_get().is_serialized()
    dhf.manager.validate()
    assert is_proxies_equal(dhf.manager._host.get_proxies(), [k1])
    assert is_proxies_equal(dhf.manager._dev.get_proxies(), [k2])

    # Accessing k1 spills k2 and unspills k1
    k1_val = k1[0]
    assert k1_val == 42
    assert k2._pxy_get().is_serialized()
    dhf.manager.validate()
    assert is_proxies_equal(dhf.manager._host.get_proxies(), [k2])
    assert is_proxies_equal(dhf.manager._dev.get_proxies(), [k1])

    # Duplicate arrays changes nothing
    dhf["k3"] = [k1, k2]
    assert not k1._pxy_get().is_serialized()
    assert k2._pxy_get().is_serialized()
    dhf.manager.validate()
    assert is_proxies_equal(dhf.manager._host.get_proxies(), [k2])
    assert is_proxies_equal(dhf.manager._dev.get_proxies(), [k1])

    # Adding a new array spills k1 and k2
    dhf["k4"] = one_item_array()
    k4 = dhf["k4"]
    assert k1._pxy_get().is_serialized()
    assert k2._pxy_get().is_serialized()
    assert not dhf["k4"]._pxy_get().is_serialized()
    dhf.manager.validate()
    assert is_proxies_equal(dhf.manager._host.get_proxies(), [k1, k2])
    assert is_proxies_equal(dhf.manager._dev.get_proxies(), [k4])

    # Accessing k2 spills k1 and k4
    k2[0]
    assert k1._pxy_get().is_serialized()
    assert dhf["k4"]._pxy_get().is_serialized()
    assert not k2._pxy_get().is_serialized()
    dhf.manager.validate()
    assert is_proxies_equal(dhf.manager._host.get_proxies(), [k1, k4])
    assert is_proxies_equal(dhf.manager._dev.get_proxies(), [k2])

    # Deleting k2 does not change anything since k3 still holds a
    # reference to the underlying proxy object
    assert dhf.manager._dev.mem_usage() == one_item_nbytes
    dhf.manager.validate()
    assert is_proxies_equal(dhf.manager._host.get_proxies(), [k1, k4])
    assert is_proxies_equal(dhf.manager._dev.get_proxies(), [k2])
    del dhf["k2"]
    dhf.manager.validate()
    assert is_proxies_equal(dhf.manager._host.get_proxies(), [k1, k4])
    assert is_proxies_equal(dhf.manager._dev.get_proxies(), [k2])

    # Overwriting k3 with a non-cuda object and deleting k2
    # should empty the device
    dhf["k3"] = "non-cuda-object"
    del k2
    dhf.manager.validate()
    assert is_proxies_equal(dhf.manager._host.get_proxies(), [k1, k4])
    assert is_proxies_equal(dhf.manager._dev.get_proxies(), [])

    # Adding the underlying proxied of k1 doesn't change anything.
    # The host file detects that k1_ary is already proxied by the
    # existing proxy object k1.
    k1_ary = unproxy(k1)
    dhf["k5"] = k1_ary
    dhf.manager.validate()
    assert is_proxies_equal(dhf.manager._host.get_proxies(), [k4])
    assert is_proxies_equal(dhf.manager._dev.get_proxies(), [k1])

    # Clean up
    del k1, k4
    dhf.clear()
    assert len(dhf.manager) == 0


def test_one_item_host_limit(capsys, root_dir):
    memory_limit = sizeof(asproxy(one_item_array(), serializers=("dask", "pickle")))
    dhf = ProxifyHostFile(
        worker_local_directory=root_dir,
        device_memory_limit=one_item_nbytes,
        memory_limit=memory_limit,
    )

    a1 = one_item_array() + 1
    a2 = one_item_array() + 2
    dhf["k1"] = a1
    dhf["k2"] = a2
    dhf.manager.validate()

    # Check k1 is spilled because of the newer k2
    k1 = dhf["k1"]
    k2 = dhf["k2"]
    assert k1._pxy_get().is_serialized()
    assert not k2._pxy_get().is_serialized()
    dhf.manager.validate()
    assert is_proxies_equal(dhf.manager._disk.get_proxies(), [])
    assert is_proxies_equal(dhf.manager._host.get_proxies(), [k1])
    assert is_proxies_equal(dhf.manager._dev.get_proxies(), [k2])

    # Check k1 is spilled to disk and k2 is spilled to host
    dhf["k3"] = one_item_array() + 3
    k3 = dhf["k3"]
    dhf.manager.validate()
    assert is_proxies_equal(dhf.manager._disk.get_proxies(), [k1])
    assert is_proxies_equal(dhf.manager._host.get_proxies(), [k2])
    assert is_proxies_equal(dhf.manager._dev.get_proxies(), [k3])
    dhf.manager.validate()

    # Accessing k2 spills k3 and unspills k2
    k2_val = k2[0]
    assert k2_val == 2
    dhf.manager.validate()
    assert is_proxies_equal(dhf.manager._disk.get_proxies(), [k1])
    assert is_proxies_equal(dhf.manager._host.get_proxies(), [k3])
    assert is_proxies_equal(dhf.manager._dev.get_proxies(), [k2])

    # Adding a new array spills k3 to disk and k2 to host
    dhf["k4"] = one_item_array() + 4
    k4 = dhf["k4"]
    dhf.manager.validate()
    assert is_proxies_equal(dhf.manager._disk.get_proxies(), [k1, k3])
    assert is_proxies_equal(dhf.manager._host.get_proxies(), [k2])
    assert is_proxies_equal(dhf.manager._dev.get_proxies(), [k4])

    # Accessing k1 unspills k1 directly to device and spills k4 to host
    k1_val = k1[0]
    assert
k1_val == 1 dhf.manager.validate() assert is_proxies_equal(dhf.manager._disk.get_proxies(), [k2, k3]) assert is_proxies_equal(dhf.manager._host.get_proxies(), [k4]) assert is_proxies_equal(dhf.manager._dev.get_proxies(), [k1]) # Clean up del k1, k2, k3, k4 dhf.clear() assert len(dhf.manager) == 0 def test_spill_on_demand(root_dir): """ Test spilling on demand by disabling the device_memory_limit and allocating two large buffers that will otherwise fail because of spilling on demand. """ rmm = pytest.importorskip("rmm") if not hasattr(rmm.mr, "FailureCallbackResourceAdaptor"): pytest.skip("RMM doesn't implement FailureCallbackResourceAdaptor") total_mem = get_device_total_memory() dhf = ProxifyHostFile( worker_local_directory=root_dir, device_memory_limit=2 * total_mem, memory_limit=2 * total_mem, spill_on_demand=True, ) for i in range(2): dhf[i] = rmm.DeviceBuffer(size=total_mem // 2 + 1) @pytest.mark.parametrize("jit_unspill", [True, False]) @gen_test(timeout=20) async def test_local_cuda_cluster(jit_unspill): """Testing spilling of a proxied cudf dataframe in a local cuda cluster""" cudf = pytest.importorskip("cudf") dask_cudf = pytest.importorskip("dask_cudf") def task(x): assert isinstance(x, cudf.DataFrame) if jit_unspill: # Check that `x` is a proxy object and the proxied DataFrame is serialized assert "ProxyObject" in str(type(x)) assert x._pxy_get().serializer == "dask" else: assert type(x) == cudf.DataFrame assert len(x) == 10 # Trigger deserialization return x # Notice, setting `device_memory_limit=1B` to trigger spilling async with dask_cuda.LocalCUDACluster( n_workers=1, device_memory_limit="1B", jit_unspill=jit_unspill, asynchronous=True, ) as cluster: async with Client(cluster, asynchronous=True) as client: df = cudf.DataFrame({"a": range(10)}) ddf = dask_cudf.from_cudf(df, npartitions=1) ddf = ddf.map_partitions(task, meta=df.head()) got = await client.compute(ddf) assert_frame_equal(got.to_pandas(), df.to_pandas()) def 
test_dataframes_share_dev_mem(root_dir): cudf = pytest.importorskip("cudf") df = cudf.DataFrame({"a": range(10)}) grouped = shuffle_group(df, "a", 0, 2, 2, False, 2) view1 = grouped[0] view2 = grouped[1] # Even though the two dataframe doesn't point to the same cudf.Buffer object assert view1["a"].data is not view2["a"].data # They still share the same underlying device memory view1["a"].data.get_ptr(mode="read") == view2["a"].data.get_ptr(mode="read") dhf = ProxifyHostFile( worker_local_directory=root_dir, device_memory_limit=160, memory_limit=1000 ) dhf["v1"] = view1 dhf["v2"] = view2 v1 = dhf["v1"] v2 = dhf["v2"] # The device_memory_limit is not exceeded since both dataframes share device memory assert not v1._pxy_get().is_serialized() assert not v2._pxy_get().is_serialized() # Now the device_memory_limit is exceeded, which should evict both dataframes dhf["k1"] = one_item_array() assert v1._pxy_get().is_serialized() assert v2._pxy_get().is_serialized() def test_cudf_get_device_memory_objects(): cudf = pytest.importorskip("cudf") objects = [ cudf.DataFrame({"a": range(10), "b": range(10)}, index=reversed(range(10))), cudf.MultiIndex( levels=[[1, 2], ["blue", "red"]], codes=[[0, 0, 1, 1], [1, 0, 1, 0]] ), ] res = get_device_memory_ids(objects) assert len(res) == 4, "We expect four buffer objects" def test_externals(root_dir): """Test adding objects directly to the manager Add an object directly to the manager makes it count against the device_memory_limit but isn't part of the store. Normally, we use __setitem__ to store objects in the hostfile and make it count against the device_memory_limit with the inherent consequence that the objects are not freeable before subsequential calls to __delitem__. This is a problem for long running tasks that want objects to count against the device_memory_limit while freeing them ASAP without explicit calls to __delitem__. 
""" dhf = ProxifyHostFile( worker_local_directory=root_dir, device_memory_limit=one_item_nbytes, memory_limit=1000, ) dhf["k1"] = one_item_array() k1 = dhf["k1"] k2, incompatible_type_found = dhf.manager.proxify(one_item_array()) assert not incompatible_type_found # `k2` isn't part of the store but still triggers spilling of `k1` assert len(dhf) == 1 assert k1._pxy_get().is_serialized() assert not k2._pxy_get().is_serialized() assert is_proxies_equal(dhf.manager._host.get_proxies(), [k1]) assert is_proxies_equal(dhf.manager._dev.get_proxies(), [k2]) assert dhf.manager._dev._mem_usage == one_item_nbytes k1[0] # Trigger spilling of `k2` assert not k1._pxy_get().is_serialized() assert k2._pxy_get().is_serialized() assert is_proxies_equal(dhf.manager._host.get_proxies(), [k2]) assert is_proxies_equal(dhf.manager._dev.get_proxies(), [k1]) assert dhf.manager._dev._mem_usage == one_item_nbytes k2[0] # Trigger spilling of `k1` assert k1._pxy_get().is_serialized() assert not k2._pxy_get().is_serialized() assert is_proxies_equal(dhf.manager._host.get_proxies(), [k1]) assert is_proxies_equal(dhf.manager._dev.get_proxies(), [k2]) assert dhf.manager._dev._mem_usage == one_item_nbytes # Removing `k2` also removes it from the tally del k2 assert is_proxies_equal(dhf.manager._host.get_proxies(), [k1]) assert is_proxies_equal(dhf.manager._dev.get_proxies(), []) assert dhf.manager._dev._mem_usage == 0 @patch("dask_cuda.proxify_device_objects.incompatible_types", (cupy.ndarray,)) def test_incompatible_types(root_dir): """Check that ProxifyHostFile unproxifies `cupy.ndarray` on retrieval Notice, in this test we add `cupy.ndarray` to the incompatible_types temporarily. 
""" cupy = pytest.importorskip("cupy") cudf = pytest.importorskip("cudf") dhf = ProxifyHostFile( worker_local_directory=root_dir, device_memory_limit=100, memory_limit=100 ) # We expect `dhf` to unproxify `a1` (but not `a2`) on retrieval a1, a2 = (cupy.arange(9), cudf.Series([1, 2, 3])) dhf["a"] = (a1, a2) b1, b2 = dhf["a"] assert a1 is b1 assert isinstance(b2, ProxyObject) assert a2 is unproxy(b2) @pytest.mark.parametrize("npartitions", [1, 2, 3]) @pytest.mark.parametrize("compatibility_mode", [True, False]) @gen_test(timeout=30) async def test_compatibility_mode_dataframe_shuffle(compatibility_mode, npartitions): cudf = pytest.importorskip("cudf") def is_proxy_object(x): return "ProxyObject" in str(type(x)) with dask.config.set(jit_unspill_compatibility_mode=compatibility_mode): async with dask_cuda.LocalCUDACluster( n_workers=1, jit_unspill=True, worker_class=IncreasedCloseTimeoutNanny, asynchronous=True, ) as cluster: async with Client(cluster, asynchronous=True) as client: ddf = dask.dataframe.from_pandas( cudf.DataFrame({"key": np.arange(10)}), npartitions=npartitions ) res = ddf.shuffle(on="key", shuffle="tasks").persist() # With compatibility mode on, we shouldn't encounter any proxy objects if compatibility_mode: assert "ProxyObject" not in str(type(await client.compute(res))) res = await client.compute(res.map_partitions(is_proxy_object)) res = res.to_list() if compatibility_mode: assert not any(res) # No proxy objects else: assert all(res) # Only proxy objects @gen_test(timeout=60) async def test_worker_force_spill_to_disk(): """Test Dask triggering CPU-to-Disk spilling""" cudf = pytest.importorskip("cudf") with dask.config.set({"distributed.worker.memory.terminate": False}): async with dask_cuda.LocalCUDACluster( n_workers=1, device_memory_limit="1MB", jit_unspill=True, asynchronous=True ) as cluster: async with Client(cluster, asynchronous=True) as client: # Create a df that are spilled to host memory immediately df = cudf.DataFrame({"key": 
np.arange(10**8)}) ddf = dask.dataframe.from_pandas(df, npartitions=1).persist() await ddf async def f(dask_worker): """Trigger a memory_monitor() and reset memory_limit""" w = dask_worker # Set a host memory limit that triggers spilling to disk w.memory_manager.memory_pause_fraction = False memory = w.monitor.proc.memory_info().rss w.memory_manager.memory_limit = memory - 10**8 w.memory_manager.memory_target_fraction = 1 print(w.memory_manager.data) await w.memory_manager.memory_monitor(w) # Check that host memory are freed assert w.monitor.proc.memory_info().rss < memory - 10**7 w.memory_manager.memory_limit = memory * 10 # Un-limit client.run(f) log = str(await client.get_worker_logs()) # Check that the worker doesn't complain about unmanaged memory assert "Unmanaged memory use is high" not in log def test_on_demand_debug_info(): """Test worker logging when on-demand-spilling fails""" rmm = pytest.importorskip("rmm") if not hasattr(rmm.mr, "FailureCallbackResourceAdaptor"): pytest.skip("RMM doesn't implement FailureCallbackResourceAdaptor") rmm_pool_size = 2**20 def task(): ( rmm.DeviceBuffer(size=rmm_pool_size // 2), rmm.DeviceBuffer(size=rmm_pool_size // 2), rmm.DeviceBuffer(size=rmm_pool_size), # Trigger OOM ) with dask_cuda.LocalCUDACluster( n_workers=1, jit_unspill=True, rmm_pool_size=rmm_pool_size, rmm_maximum_pool_size=rmm_pool_size, rmm_track_allocations=True, ) as cluster: with Client(cluster) as client: # Warmup, which trigger the initialization of spill on demand client.submit(range, 10).result() # Submit too large RMM buffer with pytest.raises(MemoryError, match="Maximum pool size exceeded"): client.submit(task).result() log = str(client.get_worker_logs()) size = format_bytes(rmm_pool_size) assert f"WARNING - RMM allocation of {size} failed" in log assert f"RMM allocs: {size}" in log assert "traceback:" in log
# File: rapidsai_public_repos/dask-cuda/dask_cuda/tests/test_spill.py
import gc
import os
from time import sleep

import pytest

import dask
from dask import array as da
from distributed import Client, wait
from distributed.metrics import time
from distributed.sizeof import sizeof
from distributed.utils_test import gen_cluster, gen_test, loop  # noqa: F401

from dask_cuda import LocalCUDACluster, utils
from dask_cuda.utils_test import IncreasedCloseTimeoutNanny

if utils.get_device_total_memory() < 1e10:
    pytest.skip("Not enough GPU memory", allow_module_level=True)


def device_host_file_size_matches(
    dhf, total_bytes, device_chunk_overhead=0, serialized_chunk_overhead=1024
):
    byte_sum = dhf.device_buffer.fast.total_weight

    # `dhf.host_buffer.fast` is only available when Worker's `memory_limit != 0`
    if hasattr(dhf.host_buffer, "fast"):
        byte_sum += dhf.host_buffer.fast.total_weight
    else:
        byte_sum += sum([sizeof(b) for b in dhf.host_buffer.values()])

    # `dhf.disk` is only available when Worker's `memory_limit != 0`
    if dhf.disk is not None:
        file_path = [
            os.path.join(dhf.disk.directory, fname)
            for fname in dhf.disk.filenames.values()
        ]
        file_size = [os.path.getsize(f) for f in file_path]
        byte_sum += sum(file_size)

    # Allow up to chunk_overhead bytes overhead per chunk
    device_overhead = len(dhf.device) * device_chunk_overhead
    host_overhead = len(dhf.host) * serialized_chunk_overhead
    disk_overhead = (
        len(dhf.disk) * serialized_chunk_overhead if dhf.disk is not None else 0
    )
    return (
        byte_sum >= total_bytes
        and byte_sum <= total_bytes + device_overhead + host_overhead + disk_overhead
    )


def assert_device_host_file_size(
    dhf, total_bytes, device_chunk_overhead=0, serialized_chunk_overhead=1024
):
    assert device_host_file_size_matches(
        dhf, total_bytes, device_chunk_overhead, serialized_chunk_overhead
    )


def worker_assert(
    total_size,
    device_chunk_overhead,
    serialized_chunk_overhead,
    dask_worker=None,
):
    assert_device_host_file_size(
        dask_worker.data, total_size, device_chunk_overhead, serialized_chunk_overhead
    )


def delayed_worker_assert(
    total_size,
    device_chunk_overhead,
    serialized_chunk_overhead,
    dask_worker=None,
):
    start = time()
    while not device_host_file_size_matches(
        dask_worker.data, total_size, device_chunk_overhead, serialized_chunk_overhead
    ):
        sleep(0.01)
        if time() < start + 3:
            assert_device_host_file_size(
                dask_worker.data,
                total_size,
                device_chunk_overhead,
                serialized_chunk_overhead,
            )


def assert_host_chunks(spills_to_disk, dask_worker=None):
    if spills_to_disk is False:
        assert len(dask_worker.data.host)


def assert_disk_chunks(spills_to_disk, dask_worker=None):
    if spills_to_disk is True:
        assert len(dask_worker.data.disk or list()) > 0
    else:
        assert len(dask_worker.data.disk or list()) == 0


@pytest.mark.parametrize(
    "params",
    [
        {
            "device_memory_limit": int(200e6),
            "memory_limit": int(2000e6),
            "host_target": False,
            "host_spill": False,
            "host_pause": False,
            "spills_to_disk": False,
        },
        {
            "device_memory_limit": int(200e6),
            "memory_limit": int(200e6),
            "host_target": False,
            "host_spill": False,
            "host_pause": False,
            "spills_to_disk": True,
        },
        {
            # This test setup differs from the one above as Distributed worker
            # spilling fraction is very low and thus forcefully triggers
            # `DeviceHostFile.evict()`
            "device_memory_limit": int(200e6),
            "memory_limit": int(200e6),
            "host_target": False,
            "host_spill": 0.01,
            "host_pause": False,
            "spills_to_disk": True,
        },
        {
            "device_memory_limit": int(200e6),
            "memory_limit": None,
            "host_target": False,
            "host_spill": False,
            "host_pause": False,
            "spills_to_disk": False,
        },
    ],
)
@gen_test(timeout=30)
async def test_cupy_cluster_device_spill(params):
    cupy = pytest.importorskip("cupy")
    with dask.config.set(
        {
            "distributed.worker.memory.terminate": False,
            "distributed.worker.memory.pause": params["host_pause"],
            "distributed.worker.memory.spill": params["host_spill"],
            "distributed.worker.memory.target": params["host_target"],
        }
    ):
        async with LocalCUDACluster(
            n_workers=1,
            scheduler_port=0,
            silence_logs=False,
            dashboard_address=None,
            asynchronous=True,
            device_memory_limit=params["device_memory_limit"],
            memory_limit=params["memory_limit"],
            worker_class=IncreasedCloseTimeoutNanny,
        ) as cluster:
            async with Client(cluster, asynchronous=True) as client:
                await client.wait_for_workers(1)

                rs = da.random.RandomState(RandomState=cupy.random.RandomState)
                x = rs.random(int(50e6), chunks=2e6)
                await wait(x)

                xx = x.persist()
                await wait(xx)

                # Allow up to 1024 bytes overhead per chunk serialized
                await client.run(
                    worker_assert,
                    x.nbytes,
                    1024,
                    1024,
                )

                y = client.compute(x.sum())
                res = await y

                assert (abs(res / x.size) - 0.5) < 1e-3

                await client.run(
                    worker_assert,
                    x.nbytes,
                    1024,
                    1024,
                )
                await client.run(
                    assert_host_chunks,
                    params["spills_to_disk"],
                )
                await client.run(
                    assert_disk_chunks,
                    params["spills_to_disk"],
                )


@pytest.mark.parametrize(
    "params",
    [
        {
            "device_memory_limit": int(50e6),
            "memory_limit": int(1000e6),
            "host_target": False,
            "host_spill": False,
            "host_pause": False,
            "spills_to_disk": False,
        },
        {
            "device_memory_limit": int(50e6),
            "memory_limit": int(50e6),
            "host_target": False,
            "host_spill": False,
            "host_pause": False,
            "spills_to_disk": True,
        },
        {
            # This test setup differs from the one above as Distributed worker
            # spilling fraction is very low and thus forcefully triggers
            # `DeviceHostFile.evict()`
            "device_memory_limit": int(50e6),
            "memory_limit": int(50e6),
            "host_target": False,
            "host_spill": 0.01,
            "host_pause": False,
            "spills_to_disk": True,
        },
        {
            "device_memory_limit": int(50e6),
            "memory_limit": None,
            "host_target": False,
            "host_spill": False,
            "host_pause": False,
            "spills_to_disk": False,
        },
    ],
)
@gen_test(timeout=30)
async def test_cudf_cluster_device_spill(params):
    cudf = pytest.importorskip("cudf")

    with dask.config.set(
        {
            "distributed.comm.compression": False,
            "distributed.worker.memory.terminate": False,
            "distributed.worker.memory.spill-compression": False,
            "distributed.worker.memory.pause": params["host_pause"],
            "distributed.worker.memory.spill": params["host_spill"],
            "distributed.worker.memory.target": params["host_target"],
        }
    ):
        async with LocalCUDACluster(
            n_workers=1,
            scheduler_port=0,
            silence_logs=False,
            dashboard_address=None,
            asynchronous=True,
            device_memory_limit=params["device_memory_limit"],
            memory_limit=params["memory_limit"],
            worker_class=IncreasedCloseTimeoutNanny,
        ) as cluster:
            async with Client(cluster, asynchronous=True) as client:
                await client.wait_for_workers(1)

                # There's a known issue with datetime64:
                # https://github.com/numpy/numpy/issues/4983#issuecomment-441332940
                # The same error above happens when spilling datetime64 to disk
                cdf = (
                    dask.datasets.timeseries(
                        dtypes={"x": int, "y": float}, freq="400ms"
                    )
                    .reset_index(drop=True)
                    .map_partitions(cudf.from_pandas)
                )

                sizes = await client.compute(
                    cdf.map_partitions(lambda df: df.memory_usage())
                )
                sizes = sizes.to_arrow().to_pylist()
                nbytes = sum(sizes)

                cdf2 = cdf.persist()
                await wait(cdf2)
                del cdf
                gc.collect()

                await client.run(
                    assert_host_chunks,
                    params["spills_to_disk"],
                )
                await client.run(
                    assert_disk_chunks,
                    params["spills_to_disk"],
                )

                await client.run(
                    worker_assert,
                    nbytes,
                    32,
                    2048,
                )

                del cdf2

                while True:
                    try:
                        await client.run(
                            delayed_worker_assert,
                            0,
                            0,
                            0,
                        )
                    except AssertionError:
                        gc.collect()
                    else:
                        break
# File: rapidsai_public_repos/dask-cuda/dask_cuda/tests/test_explicit_comms.py
import asyncio
import multiprocessing as mp
import os
from unittest.mock import patch

import numpy as np
import pandas as pd
import pytest

import dask
from dask import dataframe as dd
from dask.dataframe.shuffle import partitioning_index
from dask.dataframe.utils import assert_eq
from distributed import Client
from distributed.deploy.local import LocalCluster

import dask_cuda
from dask_cuda.explicit_comms import comms
from dask_cuda.explicit_comms.dataframe.shuffle import shuffle as explicit_comms_shuffle
from dask_cuda.utils_test import IncreasedCloseTimeoutNanny

mp = mp.get_context("spawn")  # type: ignore
ucp = pytest.importorskip("ucp")

# Notice, all of the following tests is executed in a new process such
# that UCX options of the different tests doesn't conflict.


async def my_rank(state, arg):
    return state["rank"] + arg


def _test_local_cluster(protocol):
    with LocalCluster(
        protocol=protocol,
        dashboard_address=None,
        n_workers=4,
        threads_per_worker=1,
        worker_class=IncreasedCloseTimeoutNanny,
        processes=True,
    ) as cluster:
        with Client(cluster) as client:
            c = comms.CommsContext(client)
            assert sum(c.run(my_rank, 0)) == sum(range(4))


@pytest.mark.parametrize("protocol", ["tcp", "ucx", "ucxx"])
def test_local_cluster(protocol):
    p = mp.Process(target=_test_local_cluster, args=(protocol,))
    p.start()
    p.join()
    assert not p.exitcode


def _test_dataframe_merge_empty_partitions(nrows, npartitions):
    with LocalCluster(
        protocol="tcp",
        dashboard_address=None,
        n_workers=npartitions,
        threads_per_worker=1,
        worker_class=IncreasedCloseTimeoutNanny,
        processes=True,
    ) as cluster:
        with Client(cluster):
            df1 = pd.DataFrame({"key": np.arange(nrows), "payload1": np.arange(nrows)})
            key = np.arange(nrows)
            np.random.shuffle(key)
            df2 = pd.DataFrame({"key": key, "payload2": np.arange(nrows)})
            expected = df1.merge(df2).set_index("key")
            ddf1 = dd.from_pandas(df1, npartitions=npartitions)
            ddf2 = dd.from_pandas(df2, npartitions=npartitions)
            for batchsize in (-1, 1, 2):
                with dask.config.set(
                    explicit_comms=True, explicit_comms_batchsize=batchsize
                ):
                    ddf3 = ddf1.merge(ddf2, on=["key"]).set_index("key")
                    got = ddf3.compute()
                    pd.testing.assert_frame_equal(got, expected)


def test_dataframe_merge_empty_partitions():
    # Notice, we use more partitions than rows
    p = mp.Process(target=_test_dataframe_merge_empty_partitions, args=(2, 4))
    p.start()
    p.join()
    assert not p.exitcode


def check_partitions(df, npartitions):
    """Check that all values in `df` hashes to the same"""
    hashes = partitioning_index(df, npartitions)
    if len(hashes) > 0:
        return len(hashes.unique()) == 1
    else:
        return True


def _test_dataframe_shuffle(backend, protocol, n_workers, _partitions):
    if backend == "cudf":
        cudf = pytest.importorskip("cudf")

    with LocalCluster(
        protocol=protocol,
        dashboard_address=None,
        n_workers=n_workers,
        threads_per_worker=1,
        worker_class=IncreasedCloseTimeoutNanny,
        processes=True,
    ) as cluster:
        with Client(cluster) as client:
            all_workers = list(client.get_worker_logs().keys())
            comms.default_comms()
            np.random.seed(42)
            df = pd.DataFrame({"key": np.random.random(100)})
            if backend == "cudf":
                df = cudf.DataFrame.from_pandas(df)
            if _partitions:
                df["_partitions"] = 0

            for input_nparts in range(1, 5):
                for output_nparts in range(1, 5):
                    ddf = dd.from_pandas(df.copy(), npartitions=input_nparts).persist(
                        workers=all_workers
                    )
                    # To reduce test runtime, we change the batchsizes here instead
                    # of using a test parameter.
                    for batchsize in (-1, 1, 2):
                        with dask.config.set(explicit_comms_batchsize=batchsize):
                            ddf = explicit_comms_shuffle(
                                ddf,
                                ["_partitions"] if _partitions else ["key"],
                                npartitions=output_nparts,
                                batchsize=batchsize,
                            ).persist()

                            assert ddf.npartitions == output_nparts

                            if _partitions:
                                # If "_partitions" is the hash key, we expect all but
                                # the first partition to be empty
                                assert_eq(ddf.partitions[0].compute(), df)
                                assert all(
                                    len(ddf.partitions[i].compute()) == 0
                                    for i in range(1, ddf.npartitions)
                                )
                            else:
                                # Check that each partition hashes to the same value
                                result = ddf.map_partitions(
                                    check_partitions, output_nparts
                                ).compute()
                                assert all(result.to_list())

                                # Check the values (ignoring the row order)
                                expected = df.sort_values("key")
                                got = ddf.compute().sort_values("key")
                                assert_eq(got, expected)


@pytest.mark.parametrize("nworkers", [1, 2, 3])
@pytest.mark.parametrize("backend", ["pandas", "cudf"])
@pytest.mark.parametrize("protocol", ["tcp", "ucx", "ucxx"])
@pytest.mark.parametrize("_partitions", [True, False])
def test_dataframe_shuffle(backend, protocol, nworkers, _partitions):
    if backend == "cudf":
        pytest.importorskip("cudf")

    p = mp.Process(
        target=_test_dataframe_shuffle, args=(backend, protocol, nworkers, _partitions)
    )
    p.start()
    p.join()
    assert not p.exitcode


@pytest.mark.parametrize("in_cluster", [True, False])
def test_dask_use_explicit_comms(in_cluster):
    def check_shuffle():
        """Check if shuffle use explicit-comms by search for keys named
        'explicit-comms-shuffle'
        """
        name = "explicit-comms-shuffle"
        ddf = dd.from_pandas(pd.DataFrame({"key": np.arange(10)}), npartitions=2)
        with dask.config.set(explicit_comms=False):
            res = ddf.shuffle(on="key", npartitions=4)
            assert all(name not in str(key) for key in res.dask)
        with dask.config.set(explicit_comms=True):
            res = ddf.shuffle(on="key", npartitions=4)
            if in_cluster:
                assert any(name in str(key) for key in res.dask)
            else:
                # If not in cluster, we cannot use explicit comms
                assert all(name not in str(key) for key in res.dask)

        if in_cluster:
            # We check environment variables by setting an illegal batchsize
            with patch.dict(
                os.environ,
                {"DASK_EXPLICIT_COMMS": "1", "DASK_EXPLICIT_COMMS_BATCHSIZE": "-2"},
            ):
                dask.config.refresh()  # Trigger re-read of the environment variables
                with pytest.raises(ValueError, match="explicit-comms-batchsize"):
                    ddf.shuffle(on="key", npartitions=4)

    if in_cluster:
        with LocalCluster(
            protocol="tcp",
            dashboard_address=None,
            n_workers=2,
            threads_per_worker=1,
            worker_class=IncreasedCloseTimeoutNanny,
            processes=True,
        ) as cluster:
            with Client(cluster):
                check_shuffle()
    else:
        check_shuffle()


def _test_dataframe_shuffle_merge(backend, protocol, n_workers):
    if backend == "cudf":
        cudf = pytest.importorskip("cudf")

    with LocalCluster(
        protocol=protocol,
        dashboard_address=None,
        n_workers=n_workers,
        threads_per_worker=1,
        worker_class=IncreasedCloseTimeoutNanny,
        processes=True,
    ) as cluster:
        with Client(cluster):
            nrows = n_workers * 10

            # Let's make some dataframes that we can join on the "key" column
            df1 = pd.DataFrame({"key": np.arange(nrows), "payload1": np.arange(nrows)})
            key = np.arange(nrows)
            np.random.shuffle(key)
            df2 = pd.DataFrame(
                {"key": key[nrows // 3 :], "payload2": np.arange(nrows)[nrows // 3 :]}
            )
            expected = df1.merge(df2, on="key").set_index("key")

            if backend == "cudf":
                df1 = cudf.DataFrame.from_pandas(df1)
                df2 = cudf.DataFrame.from_pandas(df2)

            ddf1 = dd.from_pandas(df1, npartitions=n_workers + 1)
            ddf2 = dd.from_pandas(
                df2, npartitions=n_workers - 1 if n_workers > 1 else 1
            )
            with dask.config.set(explicit_comms=True):
                got = ddf1.merge(ddf2, on="key").set_index("key").compute()
            assert_eq(got, expected)


@pytest.mark.parametrize("nworkers", [1, 2, 4])
@pytest.mark.parametrize("backend", ["pandas", "cudf"])
@pytest.mark.parametrize("protocol", ["tcp", "ucx", "ucxx"])
def test_dataframe_shuffle_merge(backend, protocol, nworkers):
    if backend == "cudf":
        pytest.importorskip("cudf")
    p = mp.Process(
        target=_test_dataframe_shuffle_merge, args=(backend, protocol, nworkers)
    )
    p.start()
    p.join()
    assert not p.exitcode


def _test_jit_unspill(protocol):
    import cudf

    with dask_cuda.LocalCUDACluster(
        protocol=protocol,
        dashboard_address=None,
        n_workers=1,
        threads_per_worker=1,
        jit_unspill=True,
        device_memory_limit="1B",
    ) as cluster:
        with Client(cluster):
            np.random.seed(42)
            df = cudf.DataFrame.from_pandas(
                pd.DataFrame({"key": np.random.random(100)})
            )
            ddf = dd.from_pandas(df.copy(), npartitions=4)
            ddf = explicit_comms_shuffle(ddf, ["key"])

            # Check the values of `ddf` (ignoring the row order)
            expected = df.sort_values("key")
            got = ddf.compute().sort_values("key")
            assert_eq(got, expected)


@pytest.mark.parametrize("protocol", ["tcp", "ucx", "ucxx"])
def test_jit_unspill(protocol):
    pytest.importorskip("cudf")

    p = mp.Process(target=_test_jit_unspill, args=(protocol,))
    p.start()
    p.join()
    assert not p.exitcode


def _test_lock_workers(scheduler_address, ranks):
    async def f(info):
        worker = info["worker"]
        if hasattr(worker, "running"):
            assert not worker.running
        worker.running = True
        await asyncio.sleep(0.5)
        assert worker.running
        worker.running = False

    with Client(scheduler_address) as client:
        c = comms.CommsContext(client)
        c.run(f, workers=[c.worker_addresses[r] for r in ranks], lock_workers=True)


def test_lock_workers():
    """
    Testing `run(...,lock_workers=True)` by spawning 30 runs with overlapping
    and non-overlapping worker sets.
    """
    try:
        from distributed import MultiLock  # noqa F401
    except ImportError as e:
        pytest.skip(str(e))

    with LocalCluster(
        protocol="tcp",
        dashboard_address=None,
        n_workers=4,
        threads_per_worker=5,
        worker_class=IncreasedCloseTimeoutNanny,
        processes=True,
    ) as cluster:
        ps = []
        for _ in range(5):
            for ranks in [[0, 1], [1, 3], [2, 3]]:
                ps.append(
                    mp.Process(
                        target=_test_lock_workers,
                        args=(cluster.scheduler_address, ranks),
                    )
                )
                ps[-1].start()

        for p in ps:
            p.join()

        assert all(p.exitcode == 0 for p in ps)
# File: rapidsai_public_repos/dask-cuda/dask_cuda/tests/test_device_host_file.py
from random import randint

import numpy as np
import pytest

import dask.array
from distributed.protocol import (
    deserialize,
    deserialize_bytes,
    serialize,
    serialize_bytelist,
)

from dask_cuda.device_host_file import DeviceHostFile, device_to_host, host_to_device

cupy = pytest.importorskip("cupy")


def assert_eq(x, y):
    # Explicitly calling "cupy.asnumpy" to support `ProxyObject` because
    # "cupy" is hardcoded in `dask.array.normalize_to_array()`
    return dask.array.assert_eq(cupy.asnumpy(x), cupy.asnumpy(y))


@pytest.mark.parametrize("num_host_arrays", [1, 10, 100])
@pytest.mark.parametrize("num_device_arrays", [1, 10, 100])
@pytest.mark.parametrize("array_size_range", [(1, 1000), (100, 100), (1000, 1000)])
def test_device_host_file_short(
    tmp_path, num_device_arrays, num_host_arrays, array_size_range
):
    tmpdir = tmp_path / "storage"
    tmpdir.mkdir()
    dhf = DeviceHostFile(
        device_memory_limit=1024 * 16,
        memory_limit=1024 * 16,
        worker_local_directory=tmpdir,
    )

    host = [
        ("x-%d" % i, np.random.random(randint(*array_size_range)))
        for i in range(num_host_arrays)
    ]
    device = [
        ("dx-%d" % i, cupy.random.random(randint(*array_size_range)))
        for i in range(num_device_arrays)
    ]

    import random

    full = host + device
    random.shuffle(full)

    for k, v in full:
        dhf[k] = v

    random.shuffle(full)

    for k, original in full:
        acquired = dhf[k]
        assert_eq(original, acquired)
        del dhf[k]

    assert set(dhf.device.keys()) == set()
    assert set(dhf.host.keys()) == set()
    assert set(dhf.disk.keys()) == set()
    assert set(dhf.others.keys()) == set()


def test_device_host_file_step_by_step(tmp_path):
    tmpdir = tmp_path / "storage"
    tmpdir.mkdir()
    dhf = DeviceHostFile(
        device_memory_limit=1024 * 16,
        memory_limit=1024 * 16,
        worker_local_directory=tmpdir,
    )

    a = np.random.random(1000)
    b = cupy.random.random(1000)

    dhf["a1"] = a
    assert set(dhf.device.keys()) == set()
    assert set(dhf.host.keys()) == set(["a1"])
    assert set(dhf.disk.keys()) == set()
    assert set(dhf.others.keys()) == set()

    dhf["b1"] = b
    assert set(dhf.device.keys()) == set(["b1"])
    assert set(dhf.host.keys()) == set(["a1"])
    assert set(dhf.disk.keys()) == set()
    assert set(dhf.others.keys()) == set()

    dhf["b2"] = b
    assert set(dhf.device.keys()) == set(["b1", "b2"])
    assert set(dhf.host.keys()) == set(["a1"])
    assert set(dhf.disk.keys()) == set()
    assert set(dhf.others.keys()) == set()

    dhf["b3"] = b
    assert set(dhf.device.keys()) == set(["b2", "b3"])
    assert set(dhf.host.keys()) == set(["a1", "b1"])
    assert set(dhf.disk.keys()) == set()
    assert set(dhf.others.keys()) == set()

    dhf["a2"] = a
    assert set(dhf.device.keys()) == set(["b2", "b3"])
    assert set(dhf.host.keys()) == set(["a2", "b1"])
    assert set(dhf.disk.keys()) == set(["a1"])
    assert set(dhf.others.keys()) == set()

    dhf["b4"] = b
    assert set(dhf.device.keys()) == set(["b3", "b4"])
    assert set(dhf.host.keys()) == set(["a2", "b2"])
    assert set(dhf.disk.keys()) == set(["a1", "b1"])
    assert set(dhf.others.keys()) == set()

    dhf["b4"] = b
    assert set(dhf.device.keys()) == set(["b3", "b4"])
    assert set(dhf.host.keys()) == set(["a2", "b2"])
    assert set(dhf.disk.keys()) == set(["a1", "b1"])
    assert set(dhf.others.keys()) == set()

    assert_eq(dhf["a1"], a)
    del dhf["a1"]
    assert_eq(dhf["a2"], a)
    del dhf["a2"]
    assert_eq(dhf["b1"], b)
    del dhf["b1"]
    assert_eq(dhf["b2"], b)
    del dhf["b2"]
    assert_eq(dhf["b3"], b)
    del dhf["b3"]
    assert_eq(dhf["b4"], b)
    del dhf["b4"]

    assert set(dhf.device.keys()) == set()
    assert set(dhf.host.keys()) == set()
    assert set(dhf.disk.keys()) == set()
    assert set(dhf.others.keys()) == set()

    dhf["x"] = b
    dhf["x"] = a
    assert set(dhf.device.keys()) == set()
    assert set(dhf.host.keys()) == set(["x"])
    assert set(dhf.others.keys()) == set()


@pytest.mark.parametrize("collection", [dict, list, tuple])
@pytest.mark.parametrize("length", [0, 1, 3, 6])
@pytest.mark.parametrize("value", [10, {"x": [1, 2, 3], "y": [4.0, 5.0, 6.0]}])
def test_serialize_cupy_collection(collection, length, value):
    # Avoid running test for length 0 (no collection) multiple times
    if length == 0 and collection is not list:
        return

    if isinstance(value, dict):
        cudf = pytest.importorskip("cudf")
        dd = pytest.importorskip("dask.dataframe")
        x = cudf.DataFrame(value)
        assert_func = dd.assert_eq
    else:
        x = cupy.arange(10)
        assert_func = assert_eq

    if length == 0:
        obj = device_to_host(x)
    elif collection is dict:
        obj = device_to_host(dict(zip(range(length), (x,) * length)))
    else:
        obj = device_to_host(collection((x,) * length))

    if length > 0:
        assert all([h["serializer"] == "dask" for h in obj.header["sub-headers"]])
    else:
        assert obj.header["serializer"] == "dask"

    btslst = serialize_bytelist(obj)
    bts = deserialize_bytes(b"".join(btslst))
    res = host_to_device(bts)

    if length == 0:
        assert_func(res, x)
    else:
        assert isinstance(res, collection)
        values = res.values() if collection is dict else res
        [assert_func(v, x) for v in values]

    header, frames = serialize(obj, serializers=["pickle"], on_error="raise")
    assert len(frames) == (1 + len(obj.frames))
    obj2 = deserialize(header, frames)

    res = host_to_device(obj2)

    if length == 0:
        assert_func(res, x)
    else:
        assert isinstance(res, collection)
        values = res.values() if collection is dict else res
        [assert_func(v, x) for v in values]
rapidsai_public_repos/dask-cuda/dask_cuda/tests/test_cudf_builtin_spilling.py
import pytest

from distributed.sizeof import safe_sizeof

from dask_cuda.device_host_file import DeviceHostFile
from dask_cuda.is_spillable_object import is_spillable_object
from dask_cuda.proxify_host_file import ProxifyHostFile

cupy = pytest.importorskip("cupy")
pandas = pytest.importorskip("pandas")
pytest.importorskip(
    "cudf.core.buffer.spill_manager",
    reason="Current version of cudf doesn't support built-in spilling",
)

import cudf  # noqa: E402
from cudf.core.buffer.spill_manager import (  # noqa: E402
    SpillManager,
    get_global_manager,
    set_global_manager,
)
from cudf.testing._utils import assert_eq  # noqa: E402

if get_global_manager() is not None:
    pytest.skip(
        reason=(
            "cannot test cudf built-in spilling, if already enabled globally. "
            "Please set the `CUDF_SPILL=off` environment variable."
        ),
        allow_module_level=True,
    )


@pytest.fixture
def manager(request):
    """Fixture to enable and make a spilling manager available"""
    kwargs = dict(getattr(request, "param", {}))
    set_global_manager(manager=SpillManager(**kwargs))
    yield get_global_manager()
    set_global_manager(manager=None)


def test_is_spillable_object_when_cudf_spilling_disabled():
    pdf = pandas.DataFrame({"a": [1, 2, 3]})
    cdf = cudf.DataFrame({"a": [1, 2, 3]})
    assert not is_spillable_object(pdf)
    assert not is_spillable_object(cdf)


def test_is_spillable_object_when_cudf_spilling_enabled(manager):
    pdf = pandas.DataFrame({"a": [1, 2, 3]})
    cdf = cudf.DataFrame({"a": [1, 2, 3]})
    assert not is_spillable_object(pdf)
    assert is_spillable_object(cdf)


def test_device_host_file_when_cudf_spilling_is_disabled(tmp_path):
    tmpdir = tmp_path / "storage"
    tmpdir.mkdir()
    dhf = DeviceHostFile(
        device_memory_limit=1024 * 16,
        memory_limit=1024 * 16,
        worker_local_directory=tmpdir,
    )
    dhf["pandas"] = pandas.DataFrame({"a": [1, 2, 3]})
    dhf["cudf"] = cudf.DataFrame({"a": [1, 2, 3]})

    assert set(dhf.others.keys()) == set()
    assert set(dhf.device.keys()) == set(["cudf"])
    assert set(dhf.host.keys()) == set(["pandas"])
    assert set(dhf.disk.keys()) == set()


def test_device_host_file_step_by_step(tmp_path, manager: SpillManager):
    tmpdir = tmp_path / "storage"
    tmpdir.mkdir()
    pdf = pandas.DataFrame({"a": [1, 2, 3]})
    cdf = cudf.DataFrame({"a": [1, 2, 3]})

    # Pandas will cache the result of probing this attribute.
    # We trigger it here, to get consistent results from `safe_sizeof()`
    hasattr(pdf, "__cuda_array_interface__")

    dhf = DeviceHostFile(
        device_memory_limit=safe_sizeof(pdf),
        memory_limit=safe_sizeof(pdf),
        worker_local_directory=tmpdir,
    )

    dhf["pa1"] = pdf
    dhf["cu1"] = cdf
    assert set(dhf.others.keys()) == set(["cu1"])
    assert set(dhf.device.keys()) == set()
    assert set(dhf.host.keys()) == set(["pa1"])
    assert set(dhf.disk.keys()) == set()
    assert_eq(dhf["pa1"], dhf["cu1"])

    dhf["pa2"] = pdf
    assert set(dhf.others.keys()) == set(["cu1"])
    assert set(dhf.device.keys()) == set()
    assert set(dhf.host.keys()) == set(["pa2"])
    assert set(dhf.disk.keys()) == set(["pa1"])

    dhf["cu2"] = cdf
    assert set(dhf.others.keys()) == set(["cu1", "cu2"])
    assert set(dhf.device.keys()) == set()
    assert set(dhf.host.keys()) == set(["pa2"])
    assert set(dhf.disk.keys()) == set(["pa1"])

    del dhf["cu1"]
    assert set(dhf.others.keys()) == set(["cu2"])
    assert set(dhf.device.keys()) == set()
    assert set(dhf.host.keys()) == set(["pa2"])
    assert set(dhf.disk.keys()) == set(["pa1"])

    del dhf["pa2"]
    assert set(dhf.others.keys()) == set(["cu2"])
    assert set(dhf.device.keys()) == set()
    assert set(dhf.host.keys()) == set()
    assert set(dhf.disk.keys()) == set(["pa1"])

    del dhf["pa1"]
    assert set(dhf.others.keys()) == set(["cu2"])
    assert set(dhf.device.keys()) == set()
    assert set(dhf.host.keys()) == set()
    assert set(dhf.disk.keys()) == set()

    del dhf["cu2"]
    assert set(dhf.others.keys()) == set()
    assert set(dhf.device.keys()) == set()
    assert set(dhf.host.keys()) == set()
    assert set(dhf.disk.keys()) == set()


def test_proxify_host_file(tmp_path_factory, manager: SpillManager):
    # Reuse the spill-to-disk dir, if it exists
    if ProxifyHostFile._spill_to_disk is None:
        tmpdir = tmp_path_factory.mktemp("jit-unspill")
    else:
        tmpdir = ProxifyHostFile._spill_to_disk.root_dir / ".."

    with pytest.warns(
        UserWarning,
        match="JIT-Unspill and cuDF's built-in spilling don't work together",
    ):
        dhf = ProxifyHostFile(
            device_memory_limit=1000,
            memory_limit=1000,
            worker_local_directory=str(tmpdir),
        )

    dhf["cu1"] = cudf.DataFrame({"a": [1, 2, 3]})
    del dhf["cu1"]
rapidsai_public_repos/dask-cuda/dask_cuda/tests/test_dgx.py
import multiprocessing as mp
import os
from enum import Enum, auto

import numpy
import pytest

from dask import array as da
from distributed import Client

from dask_cuda import LocalCUDACluster
from dask_cuda.initialize import initialize

mp = mp.get_context("spawn")  # type: ignore
psutil = pytest.importorskip("psutil")


class DGXVersion(Enum):
    DGX_1 = auto()
    DGX_2 = auto()
    DGX_A100 = auto()


def _get_dgx_name():
    product_name_file = "/sys/class/dmi/id/product_name"
    dgx_release_file = "/etc/dgx-release"

    # We verify `product_name_file` to check it's a DGX, and check
    # if `dgx_release_file` exists to confirm it's not a container.
    if not os.path.isfile(product_name_file) or not os.path.isfile(dgx_release_file):
        return None

    with open(product_name_file) as f:
        for line in f:
            return line


def _get_dgx_version():
    dgx_name = _get_dgx_name()

    if dgx_name is None:
        return None
    elif "DGX-1" in dgx_name:
        return DGXVersion.DGX_1
    elif "DGX-2" in dgx_name:
        return DGXVersion.DGX_2
    elif "DGXA100" in dgx_name:
        return DGXVersion.DGX_A100


if _get_dgx_version() is None:
    pytest.skip("Not a DGX server", allow_module_level=True)


# Notice, all of the following tests are executed in a new process such
# that UCX options of the different tests don't conflict.
# Furthermore, all tests do some computation to trigger initialization
# of UCX before retrieving the current config.


def _test_default():
    with LocalCUDACluster() as cluster:
        with Client(cluster):
            res = da.from_array(numpy.arange(10000), chunks=(1000,))
            res = res.sum().compute()
            assert res == 49995000


def test_default():
    p = mp.Process(target=_test_default)
    p.start()
    p.join()
    assert not p.exitcode


def _test_tcp_over_ucx(protocol):
    if protocol == "ucx":
        ucp = pytest.importorskip("ucp")
    elif protocol == "ucxx":
        ucp = pytest.importorskip("ucxx")

    with LocalCUDACluster(protocol=protocol, enable_tcp_over_ucx=True) as cluster:
        with Client(cluster) as client:
            res = da.from_array(numpy.arange(10000), chunks=(1000,))
            res = res.sum().compute()
            assert res == 49995000

            def check_ucx_options():
                conf = ucp.get_config()
                assert "TLS" in conf
                assert "tcp" in conf["TLS"]
                assert "cuda_copy" in conf["TLS"]
                assert "tcp" in conf["SOCKADDR_TLS_PRIORITY"]
                return True

            assert all(client.run(check_ucx_options).values())


@pytest.mark.parametrize(
    "protocol",
    ["ucx", "ucxx"],
)
def test_tcp_over_ucx(protocol):
    if protocol == "ucx":
        pytest.importorskip("ucp")
    elif protocol == "ucxx":
        pytest.importorskip("ucxx")

    p = mp.Process(target=_test_tcp_over_ucx, args=(protocol,))
    p.start()
    p.join()
    assert not p.exitcode


def _test_tcp_only():
    with LocalCUDACluster(protocol="tcp") as cluster:
        with Client(cluster):
            res = da.from_array(numpy.arange(10000), chunks=(1000,))
            res = res.sum().compute()
            assert res == 49995000


def test_tcp_only():
    p = mp.Process(target=_test_tcp_only)
    p.start()
    p.join()
    assert not p.exitcode


def _test_ucx_infiniband_nvlink(
    skip_queue, protocol, enable_infiniband, enable_nvlink, enable_rdmacm
):
    cupy = pytest.importorskip("cupy")
    if protocol == "ucx":
        ucp = pytest.importorskip("ucp")
    elif protocol == "ucxx":
        ucp = pytest.importorskip("ucxx")

    if enable_infiniband and not any(
        [at.startswith("rc") for at in ucp.get_active_transports()]
    ):
        skip_queue.put("No support available for 'rc' transport in UCX")
        return
    else:
        skip_queue.put("ok")

    if enable_infiniband is None and enable_nvlink is None and enable_rdmacm is None:
        enable_tcp_over_ucx = None
        cm_tls = ["all"]
        cm_tls_priority = ["rdmacm", "tcp", "sockcm"]
    else:
        enable_tcp_over_ucx = True
        cm_tls = ["tcp"]
        if enable_rdmacm is True:
            cm_tls_priority = ["rdmacm"]
        else:
            cm_tls_priority = ["tcp"]

    initialize(
        protocol=protocol,
        enable_tcp_over_ucx=enable_tcp_over_ucx,
        enable_infiniband=enable_infiniband,
        enable_nvlink=enable_nvlink,
        enable_rdmacm=enable_rdmacm,
    )

    with LocalCUDACluster(
        protocol=protocol,
        interface="ib0",
        enable_tcp_over_ucx=enable_tcp_over_ucx,
        enable_infiniband=enable_infiniband,
        enable_nvlink=enable_nvlink,
        enable_rdmacm=enable_rdmacm,
        rmm_pool_size="1 GiB",
    ) as cluster:
        with Client(cluster) as client:
            res = da.from_array(cupy.arange(10000), chunks=(1000,), asarray=False)
            res = res.sum().compute()
            assert res == 49995000

            def check_ucx_options():
                conf = ucp.get_config()
                assert "TLS" in conf
                assert all(t in conf["TLS"] for t in cm_tls)
                assert all(p in conf["SOCKADDR_TLS_PRIORITY"] for p in cm_tls_priority)
                if cm_tls != ["all"]:
                    assert "tcp" in conf["TLS"]
                    assert "cuda_copy" in conf["TLS"]
                    if enable_nvlink:
                        assert "cuda_ipc" in conf["TLS"]
                    if enable_infiniband:
                        assert "rc" in conf["TLS"]
                return True

            assert all(client.run(check_ucx_options).values())


@pytest.mark.parametrize("protocol", ["ucx", "ucxx"])
@pytest.mark.parametrize(
    "params",
    [
        {"enable_infiniband": False, "enable_nvlink": False, "enable_rdmacm": False},
        {"enable_infiniband": True, "enable_nvlink": True, "enable_rdmacm": False},
        {"enable_infiniband": True, "enable_nvlink": False, "enable_rdmacm": True},
        {"enable_infiniband": True, "enable_nvlink": True, "enable_rdmacm": True},
        {"enable_infiniband": None, "enable_nvlink": None, "enable_rdmacm": None},
    ],
)
@pytest.mark.skipif(
    _get_dgx_version() == DGXVersion.DGX_A100,
    reason="Automatic InfiniBand device detection Unsupported for %s" % _get_dgx_name(),
)
def test_ucx_infiniband_nvlink(protocol, params):
    if protocol == "ucx":
        pytest.importorskip("ucp")
    elif protocol == "ucxx":
        pytest.importorskip("ucxx")

    skip_queue = mp.Queue()

    p = mp.Process(
        target=_test_ucx_infiniband_nvlink,
        args=(
            skip_queue,
            protocol,
            params["enable_infiniband"],
            params["enable_nvlink"],
            params["enable_rdmacm"],
        ),
    )
    p.start()
    p.join()

    skip_msg = skip_queue.get()
    if skip_msg != "ok":
        pytest.skip(skip_msg)

    assert not p.exitcode
rapidsai_public_repos/dask-cuda/dask_cuda/tests/test_worker_spec.py
import pytest

from distributed import Nanny

from dask_cuda.worker_spec import worker_spec


def _check_option(spec, k, v):
    assert all([s["options"][k] == v for s in spec.values()])


def _check_env_key(spec, k, enable):
    if enable:
        assert all([k in s["options"]["env"] for s in spec.values()])
    else:
        assert all([k not in s["options"]["env"] for s in spec.values()])


def _check_env_value(spec, k, v):
    if not isinstance(v, list):
        v = [v]

    for i in v:
        assert all([i in set(s["options"]["env"][k].split(",")) for s in spec.values()])


@pytest.mark.filterwarnings("ignore:Cannot get CPU affinity")
@pytest.mark.parametrize("num_devices", [1, 4])
@pytest.mark.parametrize("cls", [Nanny])
@pytest.mark.parametrize("interface", [None, "eth0", "enp1s0f0"])
@pytest.mark.parametrize("protocol", [None, "tcp", "ucx"])
@pytest.mark.parametrize("dashboard_address", [None, ":0", ":8787"])
@pytest.mark.parametrize("threads_per_worker", [1, 8])
@pytest.mark.parametrize("silence_logs", [False, True])
@pytest.mark.parametrize("enable_infiniband", [False, True])
@pytest.mark.parametrize("enable_nvlink", [False, True])
def test_worker_spec(
    num_devices,
    cls,
    interface,
    protocol,
    dashboard_address,
    threads_per_worker,
    silence_logs,
    enable_infiniband,
    enable_nvlink,
):
    def _test():
        return worker_spec(
            CUDA_VISIBLE_DEVICES=list(range(num_devices)),
            cls=cls,
            interface=interface,
            protocol=protocol,
            dashboard_address=dashboard_address,
            threads_per_worker=threads_per_worker,
            silence_logs=silence_logs,
            enable_infiniband=enable_infiniband,
            enable_nvlink=enable_nvlink,
        )

    if (enable_infiniband or enable_nvlink) and protocol != "ucx":
        with pytest.raises(
            TypeError, match="Enabling InfiniBand or NVLink requires protocol='ucx'"
        ):
            _test()
        return
    else:
        spec = _test()

    assert len(spec) == num_devices
    assert all([s["cls"] == cls for s in spec.values()])

    _check_option(spec, "interface", interface)
    _check_option(spec, "protocol", protocol)
    _check_option(spec, "dashboard_address", dashboard_address)
    _check_option(spec, "nthreads", threads_per_worker)
    _check_option(spec, "silence_logs", silence_logs)
rapidsai_public_repos/dask-cuda/dask_cuda/tests/test_initialize.py
import multiprocessing as mp

import numpy
import psutil
import pytest

from dask import array as da
from distributed import Client
from distributed.deploy.local import LocalCluster

from dask_cuda.initialize import initialize
from dask_cuda.utils import get_ucx_config
from dask_cuda.utils_test import IncreasedCloseTimeoutNanny

mp = mp.get_context("spawn")  # type: ignore

# Notice, all of the following tests are executed in a new process such
# that UCX options of the different tests don't conflict.
# Furthermore, all tests do some computation to trigger initialization
# of UCX before retrieving the current config.


def _test_initialize_ucx_tcp(protocol):
    if protocol == "ucx":
        ucp = pytest.importorskip("ucp")
    elif protocol == "ucxx":
        ucp = pytest.importorskip("ucxx")

    kwargs = {"enable_tcp_over_ucx": True}
    initialize(protocol=protocol, **kwargs)
    with LocalCluster(
        protocol=protocol,
        dashboard_address=None,
        n_workers=1,
        threads_per_worker=1,
        processes=True,
        worker_class=IncreasedCloseTimeoutNanny,
        config={"distributed.comm.ucx": get_ucx_config(**kwargs)},
    ) as cluster:
        with Client(cluster) as client:
            res = da.from_array(numpy.arange(10000), chunks=(1000,))
            res = res.sum().compute()
            assert res == 49995000

            def check_ucx_options():
                conf = ucp.get_config()
                assert "TLS" in conf
                assert "tcp" in conf["TLS"]
                assert "cuda_copy" in conf["TLS"]
                assert "tcp" in conf["SOCKADDR_TLS_PRIORITY"]
                return True

            assert client.run_on_scheduler(check_ucx_options) is True
            assert all(client.run(check_ucx_options).values())


@pytest.mark.parametrize("protocol", ["ucx", "ucxx"])
def test_initialize_ucx_tcp(protocol):
    if protocol == "ucx":
        pytest.importorskip("ucp")
    elif protocol == "ucxx":
        pytest.importorskip("ucxx")

    p = mp.Process(target=_test_initialize_ucx_tcp, args=(protocol,))
    p.start()
    p.join()
    assert not p.exitcode


def _test_initialize_ucx_nvlink(protocol):
    if protocol == "ucx":
        ucp = pytest.importorskip("ucp")
    elif protocol == "ucxx":
        ucp = pytest.importorskip("ucxx")

    kwargs = {"enable_nvlink": True}
    initialize(protocol=protocol, **kwargs)
    with LocalCluster(
        protocol=protocol,
        dashboard_address=None,
        n_workers=1,
        threads_per_worker=1,
        processes=True,
        worker_class=IncreasedCloseTimeoutNanny,
        config={"distributed.comm.ucx": get_ucx_config(**kwargs)},
    ) as cluster:
        with Client(cluster) as client:
            res = da.from_array(numpy.arange(10000), chunks=(1000,))
            res = res.sum().compute()
            assert res == 49995000

            def check_ucx_options():
                conf = ucp.get_config()
                assert "TLS" in conf
                assert "cuda_ipc" in conf["TLS"]
                assert "tcp" in conf["TLS"]
                assert "cuda_copy" in conf["TLS"]
                assert "tcp" in conf["SOCKADDR_TLS_PRIORITY"]
                return True

            assert client.run_on_scheduler(check_ucx_options) is True
            assert all(client.run(check_ucx_options).values())


@pytest.mark.parametrize("protocol", ["ucx", "ucxx"])
def test_initialize_ucx_nvlink(protocol):
    if protocol == "ucx":
        pytest.importorskip("ucp")
    elif protocol == "ucxx":
        pytest.importorskip("ucxx")

    p = mp.Process(target=_test_initialize_ucx_nvlink, args=(protocol,))
    p.start()
    p.join()
    assert not p.exitcode


def _test_initialize_ucx_infiniband(protocol):
    if protocol == "ucx":
        ucp = pytest.importorskip("ucp")
    elif protocol == "ucxx":
        ucp = pytest.importorskip("ucxx")

    kwargs = {"enable_infiniband": True}
    initialize(protocol=protocol, **kwargs)
    with LocalCluster(
        protocol=protocol,
        dashboard_address=None,
        n_workers=1,
        threads_per_worker=1,
        processes=True,
        worker_class=IncreasedCloseTimeoutNanny,
        config={"distributed.comm.ucx": get_ucx_config(**kwargs)},
    ) as cluster:
        with Client(cluster) as client:
            res = da.from_array(numpy.arange(10000), chunks=(1000,))
            res = res.sum().compute()
            assert res == 49995000

            def check_ucx_options():
                conf = ucp.get_config()
                assert "TLS" in conf
                assert "rc" in conf["TLS"]
                assert "tcp" in conf["TLS"]
                assert "cuda_copy" in conf["TLS"]
                assert "tcp" in conf["SOCKADDR_TLS_PRIORITY"]
                return True

            assert client.run_on_scheduler(check_ucx_options) is True
            assert all(client.run(check_ucx_options).values())


@pytest.mark.skipif(
    "ib0" not in psutil.net_if_addrs(), reason="Infiniband interface ib0 not found"
)
@pytest.mark.parametrize("protocol", ["ucx", "ucxx"])
def test_initialize_ucx_infiniband(protocol):
    if protocol == "ucx":
        pytest.importorskip("ucp")
    elif protocol == "ucxx":
        pytest.importorskip("ucxx")

    p = mp.Process(target=_test_initialize_ucx_infiniband, args=(protocol,))
    p.start()
    p.join()
    assert not p.exitcode


def _test_initialize_ucx_all(protocol):
    if protocol == "ucx":
        ucp = pytest.importorskip("ucp")
    elif protocol == "ucxx":
        ucp = pytest.importorskip("ucxx")

    initialize(protocol=protocol)
    with LocalCluster(
        protocol=protocol,
        dashboard_address=None,
        n_workers=1,
        threads_per_worker=1,
        processes=True,
        worker_class=IncreasedCloseTimeoutNanny,
        config={"distributed.comm.ucx": get_ucx_config()},
    ) as cluster:
        with Client(cluster) as client:
            res = da.from_array(numpy.arange(10000), chunks=(1000,))
            res = res.sum().compute()
            assert res == 49995000

            def check_ucx_options():
                conf = ucp.get_config()
                assert "TLS" in conf
                assert conf["TLS"] == "all"
                assert all(
                    [
                        p in conf["SOCKADDR_TLS_PRIORITY"]
                        for p in ["rdmacm", "tcp", "sockcm"]
                    ]
                )
                return True

            assert client.run_on_scheduler(check_ucx_options) is True
            assert all(client.run(check_ucx_options).values())


@pytest.mark.parametrize("protocol", ["ucx", "ucxx"])
def test_initialize_ucx_all(protocol):
    if protocol == "ucx":
        pytest.importorskip("ucp")
    elif protocol == "ucxx":
        pytest.importorskip("ucxx")

    p = mp.Process(target=_test_initialize_ucx_all, args=(protocol,))
    p.start()
    p.join()
    assert not p.exitcode
rapidsai_public_repos/dask-cuda/dask_cuda/tests/test_gds.py
import tempfile

import pytest

from distributed.protocol.serialize import deserialize, serialize

from dask_cuda.proxify_host_file import ProxifyHostFile

# Make the "disk" serializer available and use a directory that is
# removed on exit.
if ProxifyHostFile._spill_to_disk is None:
    tmpdir = tempfile.TemporaryDirectory()
    ProxifyHostFile(
        worker_local_directory=tmpdir.name,
        device_memory_limit=1024,
        memory_limit=1024,
    )


@pytest.mark.parametrize("cuda_lib", ["cupy", "cudf", "numba.cuda"])
@pytest.mark.parametrize("gds_enabled", [True, False])
def test_gds(gds_enabled, cuda_lib):
    lib = pytest.importorskip(cuda_lib)
    if cuda_lib == "cupy":
        data_create = lambda: lib.arange(10)
        data_compare = lambda x, y: all(x == y)
    elif cuda_lib == "cudf":
        data_create = lambda: lib.Series(range(10))
        data_compare = lambda x, y: all((x == y).values_host)
    elif cuda_lib == "numba.cuda":
        data_create = lambda: lib.to_device(range(10))
        data_compare = lambda x, y: all(x.copy_to_host() == y.copy_to_host())

    try:
        if gds_enabled and not ProxifyHostFile._spill_to_disk.gds_enabled:
            pytest.skip("GDS not available")
        a = data_create()
        header, frames = serialize(a, serializers=("disk",))
        b = deserialize(header, frames)
        assert type(a) == type(b)
        assert data_compare(a, b)
    finally:
        ProxifyHostFile.register_disk_spilling()  # Reset disk spilling options
rapidsai_public_repos/dask-cuda/dask_cuda/tests/test_from_array.py
import pytest

import dask.array as da
from distributed import Client

from dask_cuda import LocalCUDACluster

cupy = pytest.importorskip("cupy")


@pytest.mark.parametrize("protocol", ["ucx", "ucxx", "tcp"])
def test_ucx_from_array(protocol):
    if protocol == "ucx":
        pytest.importorskip("ucp")
    elif protocol == "ucxx":
        pytest.importorskip("ucxx")

    N = 10_000
    with LocalCUDACluster(protocol=protocol) as cluster:
        with Client(cluster):
            val = da.from_array(cupy.arange(N), chunks=(N // 10,)).sum().compute()
            assert val == (N * (N - 1)) // 2
rapidsai_public_repos/dask-cuda/dask_cuda/tests/test_local_cuda_cluster.py
import asyncio
import os
import pkgutil
import sys
from unittest.mock import patch

import pytest

from dask.distributed import Client
from distributed.system import MEMORY_LIMIT
from distributed.utils_test import gen_test, raises_with_cause

from dask_cuda import CUDAWorker, LocalCUDACluster, utils
from dask_cuda.initialize import initialize
from dask_cuda.utils import (
    get_cluster_configuration,
    get_device_total_memory,
    get_gpu_count_mig,
    get_gpu_uuid_from_index,
    print_cluster_config,
)
from dask_cuda.utils_test import MockWorker


@gen_test(timeout=20)
async def test_local_cuda_cluster():
    async with LocalCUDACluster(
        scheduler_port=0, asynchronous=True, device_memory_limit=1
    ) as cluster:
        async with Client(cluster, asynchronous=True) as client:
            assert len(cluster.workers) == utils.get_n_gpus()

            # CUDA_VISIBLE_DEVICES cycles properly
            def get_visible_devices():
                return os.environ["CUDA_VISIBLE_DEVICES"]

            result = await client.run(get_visible_devices)

            assert all(len(v.split(",")) == utils.get_n_gpus() for v in result.values())
            for i in range(utils.get_n_gpus()):
                assert {int(v.split(",")[i]) for v in result.values()} == set(
                    range(utils.get_n_gpus())
                )

            # Use full memory, checked with some buffer to ignore rounding difference
            full_mem = sum(
                w.memory_manager.memory_limit for w in cluster.workers.values()
            )
            assert full_mem >= MEMORY_LIMIT - 1024 and full_mem < MEMORY_LIMIT + 1024

            for w, devices in result.items():
                ident = devices.split(",")[0]
                assert int(ident) == cluster.scheduler.workers[w].name

            with pytest.raises(ValueError):
                cluster.scale(1000)


# Notice, this test might raise errors when the number of available GPUs is less
# than 8, but as long as the test passes the errors can be ignored.
@pytest.mark.filterwarnings("ignore:Cannot get CPU affinity")
@patch.dict(os.environ, {"CUDA_VISIBLE_DEVICES": "0,3,6,8"})
@gen_test(timeout=20)
async def test_with_subset_of_cuda_visible_devices():
    async with LocalCUDACluster(
        scheduler_port=0,
        asynchronous=True,
        device_memory_limit=1,
        worker_class=MockWorker,
    ) as cluster:
        async with Client(cluster, asynchronous=True) as client:
            assert len(cluster.workers) == 4

            # CUDA_VISIBLE_DEVICES cycles properly
            def get_visible_devices():
                return os.environ["CUDA_VISIBLE_DEVICES"]

            result = await client.run(get_visible_devices)

            assert all(len(v.split(",")) == 4 for v in result.values())
            for i in range(4):
                assert {int(v.split(",")[i]) for v in result.values()} == {
                    0,
                    3,
                    6,
                    8,
                }


@pytest.mark.parametrize(
    "protocol",
    ["ucx", "ucxx"],
)
@gen_test(timeout=20)
async def test_ucx_protocol(protocol):
    if protocol == "ucx":
        pytest.importorskip("ucp")
    elif protocol == "ucxx":
        pytest.importorskip("ucxx")

    async with LocalCUDACluster(
        protocol=protocol, asynchronous=True, data=dict
    ) as cluster:
        assert all(
            ws.address.startswith(f"{protocol}://")
            for ws in cluster.scheduler.workers.values()
        )


@pytest.mark.parametrize(
    "protocol",
    ["ucx", "ucxx"],
)
@gen_test(timeout=20)
async def test_explicit_ucx_with_protocol_none(protocol):
    if protocol == "ucx":
        pytest.importorskip("ucp")
    elif protocol == "ucxx":
        pytest.importorskip("ucxx")

    initialize(protocol=protocol, enable_tcp_over_ucx=True)
    async with LocalCUDACluster(
        protocol=None, enable_tcp_over_ucx=True, asynchronous=True, data=dict
    ) as cluster:
        assert all(
            ws.address.startswith("ucx://") for ws in cluster.scheduler.workers.values()
        )


@pytest.mark.filterwarnings("ignore:Exception ignored in")
@pytest.mark.parametrize(
    "protocol",
    ["ucx", "ucxx"],
)
@gen_test(timeout=20)
async def test_ucx_protocol_type_error(protocol):
    if protocol == "ucx":
        pytest.importorskip("ucp")
    elif protocol == "ucxx":
        pytest.importorskip("ucxx")

    initialize(protocol=protocol, enable_tcp_over_ucx=True)
    with pytest.raises(TypeError):
        async with LocalCUDACluster(
            protocol="tcp", enable_tcp_over_ucx=True, asynchronous=True, data=dict
        ):
            pass


@gen_test(timeout=20)
async def test_n_workers():
    async with LocalCUDACluster(
        CUDA_VISIBLE_DEVICES="0,1", worker_class=MockWorker, asynchronous=True
    ) as cluster:
        assert len(cluster.workers) == 2
        assert len(cluster.worker_spec) == 2


@gen_test(timeout=20)
async def test_threads_per_worker_and_memory_limit():
    async with LocalCUDACluster(threads_per_worker=4, asynchronous=True) as cluster:
        assert all(ws.nthreads == 4 for ws in cluster.scheduler.workers.values())
        full_mem = sum(w.memory_manager.memory_limit for w in cluster.workers.values())
        assert full_mem >= MEMORY_LIMIT - 1024 and full_mem < MEMORY_LIMIT + 1024


@gen_test(timeout=20)
async def test_no_memory_limits_cluster():
    async with LocalCUDACluster(
        asynchronous=True, memory_limit=None, device_memory_limit=None
    ) as cluster:
        async with Client(cluster, asynchronous=True) as client:
            # Check that all workers use a regular dict as their "data store".
            res = await client.run(
                lambda dask_worker: isinstance(dask_worker.data, dict)
            )
            assert all(res.values())


@gen_test(timeout=20)
async def test_no_memory_limits_cudaworker():
    async with LocalCUDACluster(
        asynchronous=True,
        memory_limit=None,
        device_memory_limit=None,
        n_workers=1,
    ) as cluster:
        assert len(cluster.workers) == 1
        async with Client(cluster, asynchronous=True) as client:
            new_worker = CUDAWorker(
                cluster, memory_limit=None, device_memory_limit=None
            )
            await new_worker
            await client.wait_for_workers(2)

            # Check that all workers use a regular dict as their "data store".
            res = await client.run(
                lambda dask_worker: isinstance(dask_worker.data, dict)
            )
            assert all(res.values())

            await new_worker.close()


@gen_test(timeout=20)
async def test_all_to_all():
    async with LocalCUDACluster(
        CUDA_VISIBLE_DEVICES="0,1", worker_class=MockWorker, asynchronous=True
    ) as cluster:
        async with Client(cluster, asynchronous=True) as client:
            workers = list(client.scheduler_info()["workers"])
            n_workers = len(workers)
            await utils.all_to_all(client)
            # assert all to all has resulted in all data on every worker
            data = await client.has_what()
            all_data = [v for w in data.values() for v in w if "lambda" in v]
            assert all(all_data.count(i) == n_workers for i in all_data)


@gen_test(timeout=20)
async def test_rmm_pool():
    rmm = pytest.importorskip("rmm")

    async with LocalCUDACluster(
        rmm_pool_size="2GB",
        asynchronous=True,
    ) as cluster:
        async with Client(cluster, asynchronous=True) as client:
            memory_resource_type = await client.run(
                rmm.mr.get_current_device_resource_type
            )
            for v in memory_resource_type.values():
                assert v is rmm.mr.PoolMemoryResource


@gen_test(timeout=20)
async def test_rmm_maximum_poolsize_without_poolsize_error():
    pytest.importorskip("rmm")
    with pytest.raises(ValueError):
        await LocalCUDACluster(rmm_maximum_pool_size="2GB", asynchronous=True)


@gen_test(timeout=20)
async def test_rmm_managed():
    rmm = pytest.importorskip("rmm")

    async with LocalCUDACluster(
        rmm_managed_memory=True,
        asynchronous=True,
    ) as cluster:
        async with Client(cluster, asynchronous=True) as client:
            memory_resource_type = await client.run(
                rmm.mr.get_current_device_resource_type
            )
            for v in memory_resource_type.values():
                assert v is rmm.mr.ManagedMemoryResource


@gen_test(timeout=20)
async def test_rmm_async():
    rmm = pytest.importorskip("rmm")

    driver_version = rmm._cuda.gpu.driverGetVersion()
    runtime_version = rmm._cuda.gpu.runtimeGetVersion()
    if driver_version < 11020 or runtime_version < 11020:
        pytest.skip("cudaMallocAsync not supported")

    async with LocalCUDACluster(
        rmm_async=True,
        rmm_pool_size="2GB",
        rmm_release_threshold="3GB",
        asynchronous=True,
    ) as cluster:
        async with Client(cluster, asynchronous=True) as client:
            memory_resource_type = await client.run(
                rmm.mr.get_current_device_resource_type
            )
            for v in memory_resource_type.values():
                assert v is rmm.mr.CudaAsyncMemoryResource

            ret = await get_cluster_configuration(client)
            assert ret["[plugin] RMMSetup"]["initial_pool_size"] == 2000000000
            assert ret["[plugin] RMMSetup"]["release_threshold"] == 3000000000


@gen_test(timeout=20)
async def test_rmm_async_with_maximum_pool_size():
    rmm = pytest.importorskip("rmm")

    driver_version = rmm._cuda.gpu.driverGetVersion()
    runtime_version = rmm._cuda.gpu.runtimeGetVersion()
    if driver_version < 11020 or runtime_version < 11020:
        pytest.skip("cudaMallocAsync not supported")

    async with LocalCUDACluster(
        rmm_async=True,
        rmm_pool_size="2GB",
        rmm_release_threshold="3GB",
        rmm_maximum_pool_size="4GB",
        asynchronous=True,
    ) as cluster:
        async with Client(cluster, asynchronous=True) as client:
            memory_resource_types = await client.run(
                lambda: (
                    rmm.mr.get_current_device_resource_type(),
                    type(rmm.mr.get_current_device_resource().get_upstream()),
                )
            )
            for v in memory_resource_types.values():
                memory_resource_type, upstream_memory_resource_type = v
                assert memory_resource_type is rmm.mr.LimitingResourceAdaptor
                assert upstream_memory_resource_type is rmm.mr.CudaAsyncMemoryResource

            ret = await get_cluster_configuration(client)
            assert ret["[plugin] RMMSetup"]["initial_pool_size"] == 2000000000
            assert ret["[plugin] RMMSetup"]["release_threshold"] == 3000000000
            assert ret["[plugin] RMMSetup"]["maximum_pool_size"] == 4000000000


@gen_test(timeout=20)
async def test_rmm_logging():
    rmm = pytest.importorskip("rmm")

    async with LocalCUDACluster(
        rmm_pool_size="2GB",
        rmm_log_directory=".",
        asynchronous=True,
    ) as cluster:
        async with Client(cluster, asynchronous=True) as client:
            memory_resource_type = await client.run(
                rmm.mr.get_current_device_resource_type
            )
            for v in memory_resource_type.values():
                assert v is rmm.mr.LoggingResourceAdaptor


@gen_test(timeout=20)
async def test_pre_import():
    module = None

    # Pick a module that isn't currently loaded
    for m in pkgutil.iter_modules():
        if m.ispkg and m.name not in sys.modules.keys():
            module = m.name
            break

    if module is None:
        pytest.skip("No module found that isn't already loaded")

    async with LocalCUDACluster(
        n_workers=1,
        pre_import=module,
        asynchronous=True,
    ) as cluster:
        async with Client(cluster, asynchronous=True) as client:
            imported = await client.run(lambda: module in sys.modules)
            assert all(imported.values())


# Intentionally not using @gen_test to skip cleanup checks
@pytest.mark.xfail(reason="https://github.com/rapidsai/dask-cuda/issues/1265")
def test_pre_import_not_found():
    async def _test_pre_import_not_found():
        with raises_with_cause(RuntimeError, None, ImportError, None):
            await LocalCUDACluster(
                n_workers=1,
                pre_import="my_module",
                asynchronous=True,
                silence_logs=True,
            )

    asyncio.run(_test_pre_import_not_found())


@gen_test(timeout=20)
async def test_cluster_worker():
    async with LocalCUDACluster(
        scheduler_port=0,
        asynchronous=True,
        device_memory_limit=1,
        n_workers=1,
    ) as cluster:
        assert len(cluster.workers) == 1
        async with Client(cluster, asynchronous=True) as client:
            new_worker = CUDAWorker(cluster)
            await new_worker
            await client.wait_for_workers(2)
            await new_worker.close()


@gen_test(timeout=20)
async def test_available_mig_workers():
    uuids = get_gpu_count_mig(return_uuids=True)[1]
    if len(uuids) > 0:
        cuda_visible_devices = ",".join([i.decode("utf-8") for i in uuids])
    else:
        pytest.skip("No MIG devices found")

    with patch.dict(os.environ, {"CUDA_VISIBLE_DEVICES": cuda_visible_devices}):
        async with LocalCUDACluster(
            CUDA_VISIBLE_DEVICES=cuda_visible_devices, asynchronous=True
        ) as cluster:
            async with Client(cluster, asynchronous=True) as client:
                assert len(cluster.workers) == len(uuids)

                # Check to see if CUDA_VISIBLE_DEVICES cycles properly
                def get_visible_devices():
                    return os.environ["CUDA_VISIBLE_DEVICES"]

                result = await client.run(get_visible_devices)

                assert all(len(v.split(",")) == len(uuids) for v in result.values())
                for i in range(len(cluster.workers)):
                    assert set(v.split(",")[i] for v in result.values()) == set(
                        uuid.decode("utf-8") for uuid in uuids
                    )


@gen_test(timeout=20)
async def test_gpu_uuid():
    gpu_uuid = get_gpu_uuid_from_index(0)

    async with LocalCUDACluster(
        CUDA_VISIBLE_DEVICES=gpu_uuid,
        scheduler_port=0,
        asynchronous=True,
    ) as cluster:
        assert len(cluster.workers) == 1
        async with Client(cluster, asynchronous=True) as client:
            await client.wait_for_workers(1)
            result = await client.run(lambda: os.environ["CUDA_VISIBLE_DEVICES"])
            assert list(result.values())[0] == gpu_uuid


@gen_test(timeout=20)
async def test_rmm_track_allocations():
    rmm = pytest.importorskip("rmm")

    async with LocalCUDACluster(
        rmm_pool_size="2GB",
        asynchronous=True,
        rmm_track_allocations=True,
    ) as cluster:
        async with Client(cluster, asynchronous=True) as client:
            memory_resource_type = await client.run(
                rmm.mr.get_current_device_resource_type
            )
            for v in memory_resource_type.values():
                assert v is rmm.mr.TrackingResourceAdaptor

            memory_resource_upstream_type = await client.run(
                lambda: type(rmm.mr.get_current_device_resource().upstream_mr)
            )
            for v in memory_resource_upstream_type.values():
                assert v is rmm.mr.PoolMemoryResource


@gen_test(timeout=20)
async def test_get_cluster_configuration():
    async with LocalCUDACluster(
        rmm_pool_size="2GB",
        rmm_maximum_pool_size="3GB",
        device_memory_limit="30B",
        CUDA_VISIBLE_DEVICES="0",
        scheduler_port=0,
        asynchronous=True,
    ) as cluster:
        async with Client(cluster, asynchronous=True) as client:
            ret = await get_cluster_configuration(client)
            assert ret["[plugin] RMMSetup"]["initial_pool_size"] == 2000000000
            assert ret["[plugin] RMMSetup"]["maximum_pool_size"] == 3000000000
            assert ret["jit-unspill"] is False
            assert ret["device-memory-limit"] == 30


@gen_test(timeout=20)
async def test_worker_fraction_limits():
    async with LocalCUDACluster(
        dashboard_address=None,
        device_memory_limit=0.1,
        rmm_pool_size=0.2,
        rmm_maximum_pool_size=0.3,
        CUDA_VISIBLE_DEVICES="0",
        scheduler_port=0,
        asynchronous=True,
    ) as cluster:
        async with Client(cluster, asynchronous=True) as client:
            device_total_memory = await client.run(get_device_total_memory)
            _, device_total_memory = device_total_memory.popitem()

            ret = await get_cluster_configuration(client)
            assert ret["device-memory-limit"] == int(device_total_memory * 0.1)
            assert (
                ret["[plugin] RMMSetup"]["initial_pool_size"]
                == (device_total_memory * 0.2) // 256 * 256
            )
            assert (
                ret["[plugin] RMMSetup"]["maximum_pool_size"]
                == (device_total_memory * 0.3) // 256 * 256
            )


@pytest.mark.parametrize(
    "protocol",
    ["ucx", "ucxx"],
)
def test_print_cluster_config(capsys, protocol):
    if protocol == "ucx":
        pytest.importorskip("ucp")
    elif protocol == "ucxx":
        pytest.importorskip("ucxx")

    pytest.importorskip("rich")
    with LocalCUDACluster(
        n_workers=1, device_memory_limit="1B", jit_unspill=True, protocol=protocol
    ) as cluster:
        with Client(cluster) as client:
            print_cluster_config(client)
            captured = capsys.readouterr()
            assert "Dask Cluster Configuration" in captured.out
            assert protocol in captured.out
            assert "1 B" in captured.out
            assert "[plugin]" in captured.out


@pytest.mark.xfail(reason="https://github.com/rapidsai/dask-cuda/issues/1265")
def test_death_timeout_raises():
    with pytest.raises(asyncio.exceptions.TimeoutError):
        with LocalCUDACluster(
            silence_logs=False,
            death_timeout=1e-10,
            dashboard_address=":0",
        ):
            pass
0
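The fraction-based limits in `test_worker_fraction_limits` above assert that RMM pool sizes computed from a memory fraction are rounded down to a 256-byte multiple via `(x // 256) * 256`. A minimal sketch of that arithmetic follows; the helper name `align_pool_size` is ours for illustration, not part of dask-cuda:

```python
def align_pool_size(nbytes, alignment=256):
    # Round a byte count down to the nearest multiple of `alignment`,
    # mirroring the `(x // 256) * 256` arithmetic used in the assertions.
    return int(nbytes // alignment) * alignment


# A value that is not already aligned gets rounded down.
print(align_pool_size(1000))       # 768 = 3 * 256
# A value that is already a multiple of 256 is unchanged.
print(align_pool_size(200000000))  # 200000000
```

The same rounding is what makes the `initial_pool_size` and `maximum_pool_size` assertions exact even when `device_total_memory * fraction` is not itself a multiple of 256.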
rapidsai_public_repos/dask-cuda/dask_cuda
rapidsai_public_repos/dask-cuda/dask_cuda/tests/test_utils.py
import os
from unittest.mock import patch

import pytest
from numba import cuda

from dask.config import canonical_name

from dask_cuda.utils import (
    cuda_visible_devices,
    get_cpu_affinity,
    get_device_total_memory,
    get_gpu_count,
    get_n_gpus,
    get_preload_options,
    get_ucx_config,
    nvml_device_index,
    parse_cuda_visible_device,
    parse_device_memory_limit,
    unpack_bitmask,
)


@patch.dict(os.environ, {"CUDA_VISIBLE_DEVICES": "0,1,2"})
def test_get_n_gpus():
    assert isinstance(get_n_gpus(), int)
    assert get_n_gpus() == 3


@pytest.mark.parametrize(
    "params",
    [
        {
            "input": [1152920405096267775, 0],
            "output": [i for i in range(20)] + [i + 40 for i in range(20)],
        },
        {
            "input": [17293823668613283840, 65535],
            "output": [i + 20 for i in range(20)] + [i + 60 for i in range(20)],
        },
        {"input": [18446744073709551615, 0], "output": [i for i in range(64)]},
        {"input": [0, 18446744073709551615], "output": [i + 64 for i in range(64)]},
    ],
)
def test_unpack_bitmask(params):
    assert unpack_bitmask(params["input"]) == params["output"]


def test_unpack_bitmask_single_value():
    with pytest.raises(TypeError):
        unpack_bitmask(1)


def test_cpu_affinity():
    for i in range(get_n_gpus()):
        affinity = get_cpu_affinity(i)
        os.sched_setaffinity(0, affinity)
        assert os.sched_getaffinity(0) == set(affinity)


def test_cpu_affinity_and_cuda_visible_devices():
    affinity = dict()
    for i in range(get_n_gpus()):
        # The negative here would be `device = 0` as required for CUDA runtime
        # calls.
        device = nvml_device_index(0, cuda_visible_devices(i))
        affinity[device] = get_cpu_affinity(device)

    for i in range(get_n_gpus()):
        assert get_cpu_affinity(i) == affinity[i]


def test_get_device_total_memory():
    for i in range(get_n_gpus()):
        with cuda.gpus[i]:
            total_mem = get_device_total_memory(i)
            assert type(total_mem) is int
            assert total_mem > 0


@pytest.mark.parametrize(
    "protocol",
    ["ucx", "ucxx"],
)
def test_get_preload_options_default(protocol):
    if protocol == "ucx":
        pytest.importorskip("ucp")
    elif protocol == "ucxx":
        pytest.importorskip("ucxx")

    opts = get_preload_options(
        protocol=protocol,
        create_cuda_context=True,
    )

    assert "preload" in opts
    assert opts["preload"] == ["dask_cuda.initialize"]
    assert "preload_argv" in opts
    assert opts["preload_argv"] == ["--create-cuda-context"]


@pytest.mark.parametrize(
    "protocol",
    ["ucx", "ucxx"],
)
@pytest.mark.parametrize("enable_tcp", [True, False])
@pytest.mark.parametrize("enable_infiniband", [True, False])
@pytest.mark.parametrize("enable_nvlink", [True, False])
def test_get_preload_options(protocol, enable_tcp, enable_infiniband, enable_nvlink):
    if protocol == "ucx":
        pytest.importorskip("ucp")
    elif protocol == "ucxx":
        pytest.importorskip("ucxx")

    opts = get_preload_options(
        protocol=protocol,
        create_cuda_context=True,
        enable_tcp_over_ucx=enable_tcp,
        enable_infiniband=enable_infiniband,
        enable_nvlink=enable_nvlink,
    )

    assert "preload" in opts
    assert opts["preload"] == ["dask_cuda.initialize"]
    assert "preload_argv" in opts
    assert "--create-cuda-context" in opts["preload_argv"]

    if enable_tcp:
        assert "--enable-tcp-over-ucx" in opts["preload_argv"]
    if enable_infiniband:
        assert "--enable-infiniband" in opts["preload_argv"]
    if enable_nvlink:
        assert "--enable-nvlink" in opts["preload_argv"]


@pytest.mark.parametrize("enable_tcp_over_ucx", [True, False, None])
@pytest.mark.parametrize("enable_nvlink", [True, False, None])
@pytest.mark.parametrize("enable_infiniband", [True, False, None])
def test_get_ucx_config(enable_tcp_over_ucx, enable_infiniband, enable_nvlink):
    pytest.importorskip("ucp")

    kwargs = {
        "enable_tcp_over_ucx": enable_tcp_over_ucx,
        "enable_infiniband": enable_infiniband,
        "enable_nvlink": enable_nvlink,
    }
    ucx_config = get_ucx_config(**kwargs)

    assert ucx_config[canonical_name("create_cuda_context", ucx_config)] is True

    if enable_tcp_over_ucx is not None:
        assert ucx_config[canonical_name("tcp", ucx_config)] is enable_tcp_over_ucx
    else:
        if (
            enable_infiniband is not True
            and enable_nvlink is not True
            and not (enable_infiniband is None and enable_nvlink is None)
        ):
            assert ucx_config[canonical_name("tcp", ucx_config)] is True
        else:
            assert ucx_config[canonical_name("tcp", ucx_config)] is None

    if enable_infiniband is not None:
        assert ucx_config[canonical_name("infiniband", ucx_config)] is enable_infiniband
    else:
        if (
            enable_tcp_over_ucx is not True
            and enable_nvlink is not True
            and not (enable_tcp_over_ucx is None and enable_nvlink is None)
        ):
            assert ucx_config[canonical_name("infiniband", ucx_config)] is True
        else:
            assert ucx_config[canonical_name("infiniband", ucx_config)] is None

    if enable_nvlink is not None:
        assert ucx_config[canonical_name("nvlink", ucx_config)] is enable_nvlink
    else:
        if (
            enable_tcp_over_ucx is not True
            and enable_infiniband is not True
            and not (enable_tcp_over_ucx is None and enable_infiniband is None)
        ):
            assert ucx_config[canonical_name("nvlink", ucx_config)] is True
        else:
            assert ucx_config[canonical_name("nvlink", ucx_config)] is None

    if any(
        opt is not None
        for opt in [enable_tcp_over_ucx, enable_infiniband, enable_nvlink]
    ) and not all(
        opt is False
        for opt in [enable_tcp_over_ucx, enable_infiniband, enable_nvlink]
    ):
        assert ucx_config[canonical_name("cuda-copy", ucx_config)] is True
    else:
        assert ucx_config[canonical_name("cuda-copy", ucx_config)] is None


def test_parse_visible_devices():
    pynvml = pytest.importorskip("pynvml")
    pynvml.nvmlInit()
    indices = []
    uuids = []
    for index in range(get_gpu_count()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(index)
        try:
            uuid = pynvml.nvmlDeviceGetUUID(handle).decode("utf-8")
        except AttributeError:
            uuid = pynvml.nvmlDeviceGetUUID(handle)

        assert parse_cuda_visible_device(index) == index
        assert parse_cuda_visible_device(uuid) == uuid

        indices.append(str(index))
        uuids.append(uuid)

    index_devices = ",".join(indices)
    with patch.dict(os.environ, {"CUDA_VISIBLE_DEVICES": index_devices}):
        for index in range(get_gpu_count()):
            visible = cuda_visible_devices(index)
            assert visible.split(",")[0] == str(index)

    uuid_devices = ",".join(uuids)
    with patch.dict(os.environ, {"CUDA_VISIBLE_DEVICES": uuid_devices}):
        for index in range(get_gpu_count()):
            visible = cuda_visible_devices(index)
            assert visible.split(",")[0] == str(uuids[index])

    with pytest.raises(ValueError):
        parse_cuda_visible_device("Foo")

    with pytest.raises(TypeError):
        parse_cuda_visible_device(None)
        parse_cuda_visible_device([])


def test_parse_device_memory_limit():
    total = get_device_total_memory(0)

    assert parse_device_memory_limit(None) == total
    assert parse_device_memory_limit(0) == total
    assert parse_device_memory_limit("auto") == total

    assert parse_device_memory_limit(0.8) == int(total * 0.8)
    assert parse_device_memory_limit(0.8, alignment_size=256) == int(
        total * 0.8 // 256 * 256
    )
    assert parse_device_memory_limit(1000000000) == 1000000000
    assert parse_device_memory_limit("1GB") == 1000000000


def test_parse_visible_mig_devices():
    pynvml = pytest.importorskip("pynvml")
    pynvml.nvmlInit()
    for index in range(get_gpu_count()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(index)
        try:
            mode = pynvml.nvmlDeviceGetMigMode(handle)[0]
        except pynvml.NVMLError:
            # if not a MIG device, i.e. a normal GPU, skip
            continue
        if mode:
            # Just checks to see if there are any MIG enabled GPUS.
            # If there is one, check if the number of mig instances
            # in that GPU is <= to count, where count gives us the
            # maximum number of MIG devices/instances that can exist
            # under a given parent NVML device.
            count = pynvml.nvmlDeviceGetMaxMigDeviceCount(handle)
            miguuids = []
            for i in range(count):
                try:
                    mighandle = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(
                        device=handle, index=i
                    )
                    miguuids.append(mighandle)
                except pynvml.NVMLError:
                    pass
            assert len(miguuids) <= count
0
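The `test_unpack_bitmask` cases in `test_utils.py` above pin down the expected behavior of CPU-affinity bitmask unpacking: each 64-bit word contributes the indices of its set bits, offset by 64 per word, and a bare integer (rather than a list of words) must raise `TypeError`. A minimal reimplementation consistent with those cases follows; this sketch is ours for illustration and is not dask-cuda's actual implementation:

```python
def unpack_bitmask(mask):
    # Expand a list of 64-bit words into the sorted indices of set bits;
    # word i contributes bit b as global index i * 64 + b.
    if not isinstance(mask, list):
        raise TypeError("mask must be a list of 64-bit integers")
    bits = []
    for i, word in enumerate(mask):
        for b in range(64):
            if word & (1 << b):
                bits.append(i * 64 + b)
    return bits


# First parametrized case from the test: bits 0-19 and 40-59 of word 0.
assert unpack_bitmask([1152920405096267775, 0]) == (
    list(range(20)) + [i + 40 for i in range(20)]
)
```

This mirrors how Linux exposes CPU affinity as an array of 64-bit masks, which is why the second word's bits map to CPU indices 64 and above.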
rapidsai_public_repos/dask-cuda/dask_cuda
rapidsai_public_repos/dask-cuda/dask_cuda/tests/test_proxy.py
import operator
import os
import pickle
import tempfile
from types import SimpleNamespace

import numpy as np
import pandas
import pytest
from packaging import version
from pandas.testing import assert_frame_equal, assert_series_equal

import dask
import dask.array
from dask.dataframe.core import has_parallel_type
from dask.sizeof import sizeof
from distributed import Client
from distributed.protocol.serialize import deserialize, serialize
from distributed.utils_test import gen_test

import dask_cuda
from dask_cuda import LocalCUDACluster, proxy_object
from dask_cuda.disk_io import SpillToDiskFile
from dask_cuda.proxify_device_objects import proxify_device_objects
from dask_cuda.proxify_host_file import ProxifyHostFile
from dask_cuda.utils_test import IncreasedCloseTimeoutNanny

# Make the "disk" serializer available and use a directory that is
# removed on exit.
if ProxifyHostFile._spill_to_disk is None:
    tmpdir = tempfile.TemporaryDirectory()
    ProxifyHostFile(
        worker_local_directory=tmpdir.name,
        device_memory_limit=1024,
        memory_limit=1024,
    )


@pytest.mark.parametrize("serializers", [None, ("dask", "pickle"), ("disk",)])
def test_proxy_object(serializers):
    """Check "transparency" of the proxy object"""
    org = bytearray(range(10))
    pxy = proxy_object.asproxy(org, serializers=serializers)

    assert len(org) == len(pxy)
    assert org[0] == pxy[0]
    assert 1 in pxy
    assert 10 not in pxy
    assert str(org) == str(pxy)
    assert "dask_cuda.proxy_object.ProxyObject at " in repr(pxy)
    assert "bytearray at " in repr(pxy)

    pxy._pxy_serialize(serializers=("dask", "pickle"))
    assert "dask_cuda.proxy_object.ProxyObject at " in repr(pxy)
    assert "bytearray (serialized='dask')" in repr(pxy)

    assert org == proxy_object.unproxy(pxy)
    assert org == proxy_object.unproxy(org)


class DummyObj:
    """Class that only "pickle" can serialize"""

    def __reduce__(self):
        return (DummyObj, ())


def test_proxy_object_serializer():
    """Check the serializers argument"""
    pxy = proxy_object.asproxy(DummyObj(), serializers=("dask", "pickle"))
    assert pxy._pxy_get().serializer == "pickle"
    assert "DummyObj (serialized='pickle')" in repr(pxy)

    with pytest.raises(ValueError) as excinfo:
        pxy = proxy_object.asproxy([42], serializers=("dask", "pickle"))
    assert "Cannot wrap a collection" in str(excinfo.value)


@pytest.mark.parametrize("serializers_first", [None, ("dask", "pickle"), ("disk",)])
@pytest.mark.parametrize("serializers_second", [None, ("dask", "pickle"), ("disk",)])
def test_double_proxy_object(serializers_first, serializers_second):
    """Check asproxy() when creating a proxy object of a proxy object"""
    serializer1 = serializers_first[0] if serializers_first else None
    serializer2 = serializers_second[0] if serializers_second else None

    org = bytearray(range(10))
    pxy1 = proxy_object.asproxy(org, serializers=serializers_first)
    assert pxy1._pxy_get().serializer == serializer1
    pxy2 = proxy_object.asproxy(pxy1, serializers=serializers_second)
    if serializers_second is None:
        # Check that `serializers=None` doesn't change the initial serializers
        assert pxy2._pxy_get().serializer == serializer1
    else:
        assert pxy2._pxy_get().serializer == serializer2
    assert pxy1 is pxy2


@pytest.mark.parametrize("serializers", [None, ("dask", "pickle"), ("disk",)])
@pytest.mark.parametrize("backend", ["numpy", "cupy"])
def test_proxy_object_of_array(serializers, backend):
    """Check that a proxied array behaves as a regular (numpy or cupy) array"""
    np = pytest.importorskip(backend)

    # Make sure that equality works, which we use to test the other operators
    org = np.arange(10) + 1
    pxy = proxy_object.asproxy(org.copy(), serializers=serializers)
    assert all(org == pxy)
    assert all(org + 1 != pxy)

    # Check unary scalar operators
    for op in [int, float, complex, operator.index, oct, hex]:
        org = np.int64(42)
        pxy = proxy_object.asproxy(org.copy(), serializers=serializers)
        expect = op(org)
        got = op(pxy)
        assert type(expect) == type(got)
        assert expect == got

    # Check unary operators
    for op_str in ["neg", "pos", "abs", "inv"]:
        op = getattr(operator, op_str)
        org = np.arange(10) + 1
        pxy = proxy_object.asproxy(org.copy(), serializers=serializers)
        expect = op(org)
        got = op(pxy)
        assert type(expect) == type(got)
        assert all(expect == got)

    # Check binary operators that take a scalar as second argument
    for op_str in ["rshift", "lshift", "pow"]:
        op = getattr(operator, op_str)
        org = np.arange(10) + 1
        pxy = proxy_object.asproxy(org.copy(), serializers=serializers)
        expect = op(org, 2)
        got = op(pxy, 2)
        assert type(expect) == type(got)
        assert all(expect == got)

    # Check binary operators
    for op_str in [
        "add",
        "eq",
        "floordiv",
        "ge",
        "gt",
        "le",
        "lshift",
        "lt",
        "mod",
        "mul",
        "ne",
        "or_",
        "sub",
        "truediv",
        "xor",
        "iadd",
        "ior",
        "iand",
        "ifloordiv",
        "ilshift",
        "irshift",
        "ipow",
        "imod",
        "imul",
        "isub",
        "ixor",
    ]:
        op = getattr(operator, op_str)
        org = np.arange(10) + 1
        pxy = proxy_object.asproxy(org.copy(), serializers=serializers)
        expect = op(org.copy(), org)
        got = op(org.copy(), pxy)
        assert isinstance(got, type(expect))
        assert all(expect == got)

        expect = op(org.copy(), org)
        got = op(pxy, org)
        assert isinstance(got, type(expect))
        assert all(expect == got)

        # Check proxy-proxy operations
        if "i" != op_str[0]:  # Skip in-place operators
            expect = op(org.copy(), org)
            got = op(pxy, proxy_object.asproxy(org.copy()))
            assert all(expect == got)

    # Check unary truth operators
    for op_str in ["not_", "truth"]:
        op = getattr(operator, op_str)
        org = np.arange(1) + 1
        pxy = proxy_object.asproxy(org.copy(), serializers=serializers)
        expect = op(org)
        got = op(pxy)
        assert type(expect) == type(got)
        assert expect == got

    # Check reflected methods
    for op_str in [
        "__radd__",
        "__rsub__",
        "__rmul__",
        "__rtruediv__",
        "__rfloordiv__",
        "__rmod__",
        "__rpow__",
        "__rlshift__",
        "__rrshift__",
        "__rxor__",
        "__ror__",
    ]:
        org = np.arange(10) + 1
        pxy = proxy_object.asproxy(org.copy(), serializers=serializers)
        expect = getattr(org, op_str)(org)
        got = getattr(org, op_str)(pxy)
        assert isinstance(got, type(expect))
        assert all(expect == got)


@pytest.mark.parametrize("serializers", [None, ["dask"], ["disk"]])
def test_proxy_object_of_cudf(serializers):
    """Check that a proxied cudf dataframe behaves as a regular dataframe"""
    cudf = pytest.importorskip("cudf")
    df = cudf.DataFrame({"a": range(10)})
    pxy = proxy_object.asproxy(df, serializers=serializers)
    assert_frame_equal(df.to_pandas(), pxy.to_pandas())


@pytest.mark.parametrize("proxy_serializers", [None, ["dask"], ["cuda"], ["disk"]])
@pytest.mark.parametrize("dask_serializers", [["dask"], ["cuda"]])
def test_serialize_of_proxied_cudf(proxy_serializers, dask_serializers):
    """Check that we can serialize a proxied cudf dataframe, which might
    be serialized already.
    """
    cudf = pytest.importorskip("cudf")
    df = cudf.DataFrame({"a": range(10)})
    pxy = proxy_object.asproxy(df, serializers=proxy_serializers)
    header, frames = serialize(pxy, serializers=dask_serializers, on_error="raise")
    pxy = deserialize(header, frames)
    assert_frame_equal(df.to_pandas(), pxy.to_pandas())


@pytest.mark.parametrize("backend", ["numpy", "cupy"])
def test_fixed_attribute_length(backend):
    """Test fixed attribute `x.__len__` access

    Notice, accessing fixed attributes shouldn't de-serialize the proxied object
    """
    np = pytest.importorskip(backend)

    # Access `len()` of an array
    pxy = proxy_object.asproxy(np.arange(10), serializers=("dask",))
    assert len(pxy) == 10
    # Accessing the length shouldn't de-serialize the proxied object
    assert pxy._pxy_get().is_serialized()

    # Access `len()` of a scalar
    pxy = proxy_object.asproxy(np.array(10), serializers=("dask",))
    with pytest.raises(TypeError) as excinfo:
        len(pxy)
    assert "len() of unsized object" in str(excinfo.value)
    assert pxy._pxy_get().is_serialized()


def test_fixed_attribute_name():
    """Test fixed attribute `x.name` access

    Notice, accessing fixed attributes shouldn't de-serialize the proxied object
    """
    obj_without_name = SimpleNamespace()
    obj_with_name = SimpleNamespace(name="I have a name")

    # Access `name` of an object without the attribute
    pxy = proxy_object.asproxy(obj_without_name, serializers=("pickle",))
    with pytest.raises(AttributeError) as excinfo:
        pxy.name
    assert "has no attribute 'name'" in str(excinfo.value)
    assert pxy._pxy_get().is_serialized()

    # Access `name` of an object with the attribute
    pxy = proxy_object.asproxy(obj_with_name, serializers=("pickle",))
    assert pxy.name == "I have a name"
    assert pxy._pxy_get().is_serialized()


@pytest.mark.parametrize("jit_unspill", [True, False])
@gen_test(timeout=20)
async def test_spilling_local_cuda_cluster(jit_unspill):
    """Testing spilling of a proxied cudf dataframe in a local cuda cluster"""
    cudf = pytest.importorskip("cudf")
    dask_cudf = pytest.importorskip("dask_cudf")

    def task(x):
        assert isinstance(x, cudf.DataFrame)
        if jit_unspill:
            # Check that `x` is a proxy object and the proxied DataFrame is serialized
            assert "ProxyObject" in str(type(x))
            assert x._pxy_get().serializer == "dask"
        else:
            assert type(x) == cudf.DataFrame
        assert len(x) == 10  # Trigger deserialization
        return x

    # Notice, setting `device_memory_limit=1B` to trigger spilling
    async with LocalCUDACluster(
        n_workers=1,
        device_memory_limit="1B",
        jit_unspill=jit_unspill,
        asynchronous=True,
    ) as cluster:
        async with Client(cluster, asynchronous=True) as client:
            df = cudf.DataFrame({"a": range(10)})
            ddf = dask_cudf.from_cudf(df, npartitions=1)
            ddf = ddf.map_partitions(task, meta=df.head())
            got = await client.compute(ddf)

            if isinstance(got, pandas.Series):
                pytest.xfail(
                    "BUG fixed by <https://github.com/rapidsai/dask-cuda/pull/451>"
                )

            assert_frame_equal(got.to_pandas(), df.to_pandas())


@pytest.mark.parametrize("obj", [bytearray(10), bytearray(10**6)])
def test_serializing_to_disk(obj):
    """Check serializing to disk"""
    # Serialize from device to disk
    pxy = proxy_object.asproxy(obj)
    ProxifyHostFile.serialize_proxy_to_disk_inplace(pxy)
    assert pxy._pxy_get().serializer == "disk"
    assert obj == proxy_object.unproxy(pxy)

    # Serialize from host to disk
    pxy = proxy_object.asproxy(obj, serializers=("pickle",))
    ProxifyHostFile.serialize_proxy_to_disk_inplace(pxy)
    assert pxy._pxy_get().serializer == "disk"
    assert obj == proxy_object.unproxy(pxy)


@pytest.mark.parametrize("serializer", ["dask", "pickle", "disk"])
def test_multiple_deserializations(serializer):
    """Check for race conditions when accessing the ProxyDetail"""
    data1 = bytearray(10)
    proxy = proxy_object.asproxy(data1, serializers=(serializer,))
    pxy = proxy._pxy_get()
    data2 = proxy._pxy_deserialize()
    assert data1 == data2

    # Check that the spilled file still exists.
    if serializer == "disk":
        file_path = pxy.obj[0]["disk-io-header"]["path"]
        assert isinstance(file_path, SpillToDiskFile)
        assert file_path.exists()
        file_path = str(file_path)

    # Check that the spilled data within `pxy` is still available even
    # though `proxy` has been deserialized.
    data3 = pxy.deserialize()
    assert data1 == data3

    # Check that the spilled file has been removed now that all references
    # to it have been deleted.
    if serializer == "disk":
        assert not os.path.exists(file_path)


@pytest.mark.parametrize("size", [10, 10**4])
@pytest.mark.parametrize(
    "serializers", [None, ["dask"], ["cuda", "dask"], ["pickle"], ["disk"]]
)
@pytest.mark.parametrize("backend", ["numpy", "cupy"])
def test_serializing_array_to_disk(backend, serializers, size):
    """Check serializing arrays to disk"""
    np = pytest.importorskip(backend)
    obj = np.arange(size)

    # Serialize from host to disk
    pxy = proxy_object.asproxy(obj, serializers=serializers)
    ProxifyHostFile.serialize_proxy_to_disk_inplace(pxy)
    assert pxy._pxy_get().serializer == "disk"
    assert list(obj) == list(proxy_object.unproxy(pxy))


class _PxyObjTest(proxy_object.ProxyObject):
    """
    A class that:
    - defines `__dask_tokenize__` in order to avoid deserialization when
      calling `client.scatter()`
    - Asserts that no deserialization is performed when communicating.
    """

    def __dask_tokenize__(self):
        return 42

    def _pxy_deserialize(self):
        if self._pxy_get().assert_on_deserializing:
            assert self._pxy_get().serializer is None
        return super()._pxy_deserialize()


@pytest.mark.parametrize("send_serializers", [None, ("dask", "pickle"), ("cuda",)])
@pytest.mark.parametrize("protocol", ["tcp", "ucx", "ucxx"])
@gen_test(timeout=120)
async def test_communicating_proxy_objects(protocol, send_serializers):
    """Testing serialization of cuDF dataframe when communicating"""
    if protocol == "ucx":
        pytest.importorskip("ucp")
    elif protocol == "ucxx":
        pytest.importorskip("ucxx")
    cudf = pytest.importorskip("cudf")

    def task(x):
        # Check that the subclass survives the trip from client to worker
        assert isinstance(x, _PxyObjTest)
        serializers_used = x._pxy_get().serializer

        # Check that `x` is serialized with the expected serializers
        if protocol in ["ucx", "ucxx"]:
            if send_serializers is None:
                assert serializers_used == "cuda"
            else:
                assert serializers_used == send_serializers[0]
        else:
            assert serializers_used == "dask"

    async with dask_cuda.LocalCUDACluster(
        n_workers=1,
        protocol=protocol,
        worker_class=IncreasedCloseTimeoutNanny,
        asynchronous=True,
    ) as cluster:
        async with Client(cluster, asynchronous=True) as client:
            df = cudf.DataFrame({"a": range(10)})
            df = proxy_object.asproxy(
                df, serializers=send_serializers, subclass=_PxyObjTest
            )

            # Notice, in one case we expect deserialization when communicating.
            # Since "tcp" cannot send device memory directly, it will be
            # re-serialized using the default dask serializers that spill the
            # data to main memory.
            if protocol == "tcp" and send_serializers == ("cuda",):
                df._pxy_get().assert_on_deserializing = False
            else:
                df._pxy_get().assert_on_deserializing = True
            df = await client.scatter(df)
            await client.submit(task, df)


@pytest.mark.parametrize("protocol", ["tcp", "ucx", "ucxx"])
@pytest.mark.parametrize("shared_fs", [True, False])
@gen_test(timeout=20)
async def test_communicating_disk_objects(protocol, shared_fs):
    """Testing disk serialization of cuDF dataframe when communicating"""
    if protocol == "ucx":
        pytest.importorskip("ucp")
    elif protocol == "ucxx":
        pytest.importorskip("ucxx")
    cudf = pytest.importorskip("cudf")
    ProxifyHostFile._spill_to_disk.shared_filesystem = shared_fs

    def task(x):
        # Check that the subclass survives the trip from client to worker
        assert isinstance(x, _PxyObjTest)
        serializer_used = x._pxy_get().serializer
        if shared_fs:
            assert serializer_used == "disk"
        else:
            assert serializer_used == "dask"

    async with dask_cuda.LocalCUDACluster(
        n_workers=1,
        protocol=protocol,
        asynchronous=True,
    ) as cluster:
        async with Client(cluster, asynchronous=True) as client:
            df = cudf.DataFrame({"a": range(10)})
            df = proxy_object.asproxy(df, serializers=("disk",), subclass=_PxyObjTest)
            df._pxy_get().assert_on_deserializing = False
            df = await client.scatter(df)
            await client.submit(task, df)


@pytest.mark.parametrize("array_module", ["numpy", "cupy"])
@pytest.mark.parametrize(
    "serializers", [None, ("dask", "pickle"), ("cuda", "dask", "pickle"), ("disk",)]
)
def test_pickle_proxy_object(array_module, serializers):
    """Check pickle of the proxy object"""
    array_module = pytest.importorskip(array_module)
    org = array_module.arange(10)
    pxy = proxy_object.asproxy(org, serializers=serializers)
    data = pickle.dumps(pxy)
    restored = pickle.loads(data)
    repr(restored)
    assert all(org == restored)


def test_pandas():
    """Check pandas operations on proxy objects"""
    pandas = pytest.importorskip("pandas")
    df1 = pandas.DataFrame({"a": range(10)})
    df2 = pandas.DataFrame({"a": range(10)})

    res = dask.dataframe.methods.concat([df1, df2])
    got = dask.dataframe.methods.concat([df1, df2])
    assert_frame_equal(res, got)

    got = dask.dataframe.methods.concat([proxy_object.asproxy(df1), df2])
    assert_frame_equal(res, got)

    got = dask.dataframe.methods.concat([df1, proxy_object.asproxy(df2)])
    assert_frame_equal(res, got)

    df1 = pandas.Series(range(10))
    df2 = pandas.Series(range(10))

    res = dask.dataframe.methods.concat([df1, df2])
    got = dask.dataframe.methods.concat([df1, df2])
    assert all(res == got)

    got = dask.dataframe.methods.concat([proxy_object.asproxy(df1), df2])
    assert all(res == got)

    got = dask.dataframe.methods.concat([df1, proxy_object.asproxy(df2)])
    assert all(res == got)


def test_from_cudf_of_proxy_object():
    """Check from_cudf() of a proxy object"""
    cudf = pytest.importorskip("cudf")
    dask_cudf = pytest.importorskip("dask_cudf")

    df = proxy_object.asproxy(cudf.DataFrame({"a": range(10)}))
    assert has_parallel_type(df)

    ddf = dask_cudf.from_cudf(df, npartitions=1)
    assert has_parallel_type(ddf)

    # Notice, the output is a dask-cudf dataframe and not a proxy object
    assert type(ddf) is dask_cudf.core.DataFrame


def test_proxy_object_parquet(tmp_path):
    """Check parquet read/write of a proxy object"""
    cudf = pytest.importorskip("cudf")
    tmp_path = tmp_path / "proxy_test.parquet"
    df = cudf.DataFrame({"a": range(10)})
    pxy = proxy_object.asproxy(df)
    pxy.to_parquet(str(tmp_path), engine="pyarrow")
    df2 = dask.dataframe.read_parquet(tmp_path)
    assert_frame_equal(df.to_pandas(), df2.compute())


def test_assignments():
    """Check assignment to a proxied dataframe"""
    cudf = pytest.importorskip("cudf")

    df = proxy_object.asproxy(cudf.DataFrame({"a": range(10)}))
    df.index = df["a"].copy(deep=False)


def test_concatenate3_of_proxied_cupy_arrays():
    """Check concatenate of cupy arrays"""
    from dask.array.core import concatenate3

    cupy = pytest.importorskip("cupy")
    org = cupy.arange(10)
    a = proxy_object.asproxy(org.copy())
    b = proxy_object.asproxy(org.copy())
    assert all(concatenate3([a, b]) == concatenate3([org.copy(), org.copy()]))


def test_tensordot_of_proxied_cupy_arrays():
    """Check tensordot of cupy arrays"""
    cupy = pytest.importorskip("cupy")

    org = cupy.arange(9).reshape((3, 3))
    a = proxy_object.asproxy(org.copy())
    b = proxy_object.asproxy(org.copy())
    res1 = dask.array.tensordot(a, b).flatten()
    res2 = dask.array.tensordot(org.copy(), org.copy()).flatten()
    assert all(res1 == res2)


def test_einsum_of_proxied_cupy_arrays():
    """Check einsum of cupy arrays"""
    cupy = pytest.importorskip("cupy")

    org = cupy.arange(25).reshape(5, 5)
    res1 = dask.array.einsum("ii", org)
    a = proxy_object.asproxy(org.copy())
    res2 = dask.array.einsum("ii", a)
    assert all(res1.flatten() == res2.flatten())


@pytest.mark.parametrize(
    "np_func", [np.less, np.less_equal, np.greater, np.greater_equal, np.equal]
)
def test_array_ufunc_proxified_object(np_func):
    cudf = pytest.importorskip("cudf")

    np_array = np.array(100)
    ser = cudf.Series([1, 2, 3])
    proxy_obj = proxify_device_objects(ser)
    expected = np_func(ser, np_array)
    actual = np_func(proxy_obj, np_array)

    assert_series_equal(expected.to_pandas(), actual.to_pandas())


def test_cudf_copy():
    cudf = pytest.importorskip("cudf")
    df = cudf.DataFrame({"A": range(10)})
    df = proxify_device_objects(df)
    cpy = df.copy()
    assert_frame_equal(cpy.to_pandas(), df.to_pandas())


def test_cudf_fillna():
    cudf = pytest.importorskip("cudf")
    df = cudf.DataFrame({"A": range(10)})
    df = proxify_device_objects(df)
    df = df.fillna(0)


def test_sizeof_cupy():
    cupy = pytest.importorskip("cupy")
    cupy.cuda.set_allocator(None)
    a = cupy.arange(1e7)
    a_size = sizeof(a)
    pxy = proxy_object.asproxy(a)
    assert a_size == pytest.approx(sizeof(pxy))
    pxy._pxy_serialize(serializers=("dask",))
    assert a_size == pytest.approx(sizeof(pxy))
    assert pxy._pxy_get().is_serialized()
    pxy._pxy_cache = {}
    assert a_size == pytest.approx(sizeof(pxy))
    assert pxy._pxy_get().is_serialized()


def test_sizeof_cudf():
    cudf = pytest.importorskip("cudf")
    a = cudf.datasets.timeseries().reset_index()
    a_size = sizeof(a)
    pxy = proxy_object.asproxy(a)
    assert a_size == pytest.approx(sizeof(pxy))
    pxy._pxy_serialize(serializers=("dask",))
    assert a_size == pytest.approx(sizeof(pxy))
    assert pxy._pxy_get().is_serialized()
    # By clearing the cache, `sizeof(pxy)` now measures the serialized data,
    # thus we have to increase the tolerance.
    pxy._pxy_cache = {}
    assert a_size == pytest.approx(sizeof(pxy), rel=1e-2)
    assert pxy._pxy_get().is_serialized()


def test_cupy_broadcast_to():
    cupy = pytest.importorskip("cupy")
    a = cupy.arange(10)
    a_b = np.broadcast_to(a, (10, 10))
    p_b = np.broadcast_to(proxy_object.asproxy(a), (10, 10))

    assert a_b.shape == p_b.shape
    assert (a_b == p_b).all()


def test_cupy_matmul():
    cupy = pytest.importorskip("cupy")
    if version.parse(cupy.__version__) >= version.parse("11.0"):
        pytest.xfail("See: https://github.com/rapidsai/dask-cuda/issues/995")
    a, b = cupy.arange(10), cupy.arange(10)
    c = a @ b
    assert c == proxy_object.asproxy(a) @ b
    assert c == a @ proxy_object.asproxy(b)
    assert c == proxy_object.asproxy(a) @ proxy_object.asproxy(b)


def test_cupy_imatmul():
    cupy = pytest.importorskip("cupy")
    if version.parse(cupy.__version__) >= version.parse("11.0"):
        pytest.xfail("See: https://github.com/rapidsai/dask-cuda/issues/995")
    a = cupy.arange(9).reshape(3, 3)
    c = a.copy()
    c @= a

    a1 = a.copy()
    a1 @= proxy_object.asproxy(a)
    assert (a1 == c).all()

    a2 = proxy_object.asproxy(a.copy())
    a2 @= a
    assert (a2 == c).all()
rapidsai_public_repos/dask-cuda/dask_cuda/benchmarks/common.py
from argparse import Namespace
from functools import partial
from typing import Any, Callable, List, Mapping, NamedTuple, Optional, Tuple
from warnings import filterwarnings

import numpy as np
import pandas as pd

import dask
from distributed import Client

from dask_cuda.benchmarks.utils import (
    address_to_index,
    aggregate_transfer_log_data,
    bandwidth_statistics,
    get_cluster_options,
    peer_to_peer_bandwidths,
    save_benchmark_data,
    setup_memory_pools,
    wait_for_cluster,
)
from dask_cuda.utils import all_to_all

__all__ = ("execute_benchmark", "Config")


class Config(NamedTuple):
    """Benchmark configuration"""

    args: Namespace
    """Parsed benchmark arguments"""

    bench_once: Callable[[Client, Namespace, Optional[str]], Any]
    """Callable to run a single benchmark iteration

    Parameters
    ----------
    client
        distributed Client object
    args
        Benchmark parsed arguments
    write_profile
        Should a profile be written?

    Returns
    -------
    Benchmark data to be interpreted by ``pretty_print_results`` and
    ``create_tidy_results``.
    """

    create_tidy_results: Callable[
        [Namespace, np.ndarray, List[Any]], Tuple[pd.DataFrame, np.ndarray]
    ]
    """Callable to create tidy results for saving to disk

    Parameters
    ----------
    args
        Benchmark parsed arguments
    p2p_bw
        Array of point-to-point bandwidths
    results: list
        List of results from running ``bench_once``

    Returns
    -------
    tuple
        two-tuple of a pandas dataframe and the point-to-point bandwidths
    """

    pretty_print_results: Callable[
        [Namespace, Mapping[str, int], np.ndarray, List[Any]], None
    ]
    """Callable to pretty-print results for human consumption

    Parameters
    ----------
    args
        Benchmark parsed arguments
    address_to_index
        Mapping from worker addresses to indices
    p2p_bw
        Array of point-to-point bandwidths
    results: list
        List of results from running ``bench_once``
    """


def run_benchmark(client: Client, args: Namespace, config: Config):
    """Run a benchmark a specified number of times

    If ``args.profile`` is set, the final run is profiled.
    """
    results = []
    for _ in range(max(1, args.runs) - 1):
        res = config.bench_once(client, args, write_profile=None)
        results.append(res)
    results.append(config.bench_once(client, args, write_profile=args.profile))
    return results


def gather_bench_results(client: Client, args: Namespace, config: Config):
    """Collect benchmark results from the workers"""
    address2index = address_to_index(client)
    if args.all_to_all:
        all_to_all(client)
    results = run_benchmark(client, args, config)
    # Collect aggregated peer-to-peer bandwidth
    message_data = client.run(
        partial(aggregate_transfer_log_data, bandwidth_statistics, args.ignore_size)
    )
    return address2index, results, message_data


def run(client: Client, args: Namespace, config: Config):
    """Run the full benchmark on the cluster

    Waits for the cluster, sets up memory pools, prints and saves results
    """
    wait_for_cluster(client, shutdown_on_failure=True)
    assert len(client.scheduler_info()["workers"]) > 0
    setup_memory_pools(
        client,
        args.type == "gpu",
        args.rmm_pool_size,
        args.disable_rmm_pool,
        args.enable_rmm_async,
        args.enable_rmm_managed,
        args.rmm_release_threshold,
        args.rmm_log_directory,
        args.enable_rmm_statistics,
        args.enable_rmm_track_allocations,
    )
    address_to_index, results, message_data = gather_bench_results(client, args, config)
    p2p_bw = peer_to_peer_bandwidths(message_data, address_to_index)
    config.pretty_print_results(args, address_to_index, p2p_bw, results)
    if args.output_basename:
        df, p2p_bw = config.create_tidy_results(args, p2p_bw, results)
        df["num_workers"] = len(address_to_index)
        save_benchmark_data(
            args.output_basename,
            address_to_index,
            df,
            p2p_bw,
        )


def run_client_from_existing_scheduler(args: Namespace, config: Config):
    """Set up a client by connecting to a scheduler

    Shuts down the cluster at the end of the benchmark conditional on
    ``args.shutdown_cluster``.
    """
    if args.scheduler_address is not None:
        kwargs = {"address": args.scheduler_address}
    elif args.scheduler_file is not None:
        kwargs = {"scheduler_file": args.scheduler_file}
    else:
        raise RuntimeError(
            "Need to specify either --scheduler-file or --scheduler-address"
        )
    with Client(**kwargs) as client:
        run(client, args, config)
        if args.shutdown_cluster:
            client.shutdown()


def run_create_client(args: Namespace, config: Config):
    """Create a client + cluster and run

    Shuts down the cluster at the end of the benchmark
    """
    cluster_options = get_cluster_options(args)
    Cluster = cluster_options["class"]
    cluster_args = cluster_options["args"]
    cluster_kwargs = cluster_options["kwargs"]
    scheduler_addr = cluster_options["scheduler_addr"]

    filterwarnings("ignore", message=".*NVLink.*rmm_pool_size.*", category=UserWarning)
    with Cluster(*cluster_args, **cluster_kwargs) as cluster:
        # Use the scheduler address with an SSHCluster rather than the cluster
        # object, otherwise we can't shut it down.
        with Client(scheduler_addr if args.multi_node else cluster) as client:
            run(client, args, config)
            # An SSHCluster will not automatically shut down, we have to
            # ensure it does.
            if args.multi_node:
                client.shutdown()


def execute_benchmark(config: Config):
    """Run complete benchmark given a configuration"""
    args = config.args
    if args.multiprocessing_method == "forkserver":
        import multiprocessing.forkserver as f

        f.ensure_running()
    with dask.config.set(
        {"distributed.worker.multiprocessing-method": args.multiprocessing_method}
    ):
        if args.scheduler_file is not None or args.scheduler_address is not None:
            run_client_from_existing_scheduler(args, config)
        else:
            run_create_client(args, config)
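For illustration, the profiling pattern used by `run_benchmark` above can be sketched with a stdlib-only stand-in (the stub `bench_once` and the name `run_pattern` are hypothetical, not part of dask_cuda): every run except the last gets `write_profile=None`, and only the final run receives the profile path.

```python
def run_pattern(bench_once, runs, profile):
    # n-1 unprofiled measurement runs...
    results = [bench_once(write_profile=None) for _ in range(max(1, runs) - 1)]
    # ...then one final run that is profiled (if `profile` is set).
    results.append(bench_once(write_profile=profile))
    return results


calls = []


def fake_bench_once(write_profile):
    # Record which profile argument each iteration received.
    calls.append(write_profile)
    return len(calls)


results = run_pattern(fake_bench_once, runs=3, profile="report.html")
# Only the last of the three iterations is profiled.
```

Note that `max(1, runs)` guarantees at least one (profiled) run even if `--runs 0` is passed.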
rapidsai_public_repos/dask-cuda/dask_cuda/benchmarks/local_cupy.py
import contextlib
from collections import ChainMap
from time import perf_counter as clock

import numpy as np
import pandas as pd
from nvtx import end_range, start_range

from dask import array as da
from dask.distributed import performance_report, wait
from dask.utils import format_bytes, parse_bytes

from dask_cuda.benchmarks.common import Config, execute_benchmark
from dask_cuda.benchmarks.utils import (
    as_noop,
    parse_benchmark_args,
    print_key_value,
    print_separator,
    print_throughput_bandwidth,
)


def bench_once(client, args, write_profile=None):
    if args.type == "gpu":
        import cupy as xp
    else:
        import numpy as xp

    # Create a simple random array
    rs = da.random.RandomState(RandomState=xp.random.RandomState)

    if args.operation == "transpose_sum":
        rng = start_range(message="make array(s)", color="green")
        x = rs.random((args.size, args.size), chunks=args.chunk_size).persist()
        wait(x)
        end_range(rng)
        func_args = (x,)

        func = lambda x: (x + x.T).sum()
    elif args.operation == "dot":
        rng = start_range(message="make array(s)", color="green")
        x = rs.random((args.size, args.size), chunks=args.chunk_size).persist()
        y = rs.random((args.size, args.size), chunks=args.chunk_size).persist()
        wait(x)
        wait(y)
        end_range(rng)
        func_args = (x, y)

        func = lambda x, y: x.dot(y)
    elif args.operation == "svd":
        rng = start_range(message="make array(s)", color="green")
        x = rs.random(
            (args.size, args.second_size),
            chunks=(int(args.chunk_size), args.second_size),
        ).persist()
        wait(x)
        end_range(rng)
        func_args = (x,)

        func = lambda x: np.linalg.svd(x)
    elif args.operation == "fft":
        rng = start_range(message="make array(s)", color="green")
        x = rs.random(
            (args.size, args.size), chunks=(args.size, args.chunk_size)
        ).persist()
        wait(x)
        end_range(rng)
        func_args = (x,)

        func = lambda x: np.fft.fft(x, axis=0)
    elif args.operation == "sum":
        rng = start_range(message="make array(s)", color="green")
        x = rs.random((args.size, args.size), chunks=args.chunk_size).persist()
        wait(x)
        end_range(rng)
        func_args = (x,)

        func = lambda x: x.sum()
    elif args.operation == "mean":
        rng = start_range(message="make array(s)", color="green")
        x = rs.random((args.size, args.size), chunks=args.chunk_size).persist()
        wait(x)
        end_range(rng)
        func_args = (x,)

        func = lambda x: x.mean()
    elif args.operation == "slice":
        rng = start_range(message="make array(s)", color="green")
        x = rs.random((args.size, args.size), chunks=args.chunk_size).persist()
        wait(x)
        end_range(rng)
        func_args = (x,)

        func = lambda x: x[::3].copy()
    elif args.operation == "col_sum":
        rng = start_range(message="make array(s)", color="green")
        x = rs.normal(10, 1, (args.size,), chunks=args.chunk_size).persist()
        y = rs.normal(10, 1, (args.size,), chunks=args.chunk_size).persist()
        wait(x)
        wait(y)
        end_range(rng)
        func_args = (x, y)

        func = lambda x, y: x + y
    elif args.operation == "col_mask":
        rng = start_range(message="make array(s)", color="green")
        x = rs.normal(10, 1, (args.size,), chunks=args.chunk_size).persist()
        y = rs.normal(10, 1, (args.size,), chunks=args.chunk_size).persist()
        wait(x)
        wait(y)
        end_range(rng)
        func_args = (x, y)

        func = lambda x, y: x[y > 10]
    elif args.operation == "col_gather":
        rng = start_range(message="make array(s)", color="green")
        x = rs.normal(10, 1, (args.size,), chunks=args.chunk_size).persist()
        idx = rs.randint(
            0, len(x), (args.second_size,), chunks=args.chunk_size
        ).persist()
        wait(x)
        wait(idx)
        end_range(rng)
        func_args = (x, idx)

        func = lambda x, idx: x[idx]
    else:
        raise ValueError(f"Unknown operation type {args.operation}")

    shape = x.shape
    chunksize = x.chunksize
    data_processed = sum(arg.nbytes for arg in func_args)

    # Execute the operations to benchmark
    if args.profile is not None and write_profile is not None:
        ctx = performance_report(filename=args.profile)
    else:
        ctx = contextlib.nullcontext()

    with ctx:
        rng = start_range(message=args.operation, color="purple")
        result = func(*func_args)
        if args.backend == "dask-noop":
            result = as_noop(result)
        t1 = clock()
        wait(client.persist(result))
        if args.type == "gpu":
            client.run(lambda xp: xp.cuda.Device().synchronize(), xp)
        took = clock() - t1
        end_range(rng)

    return {
        "took": took,
        "data_processed": data_processed,
        "shape": shape,
        "chunksize": chunksize,
    }


def pretty_print_results(args, address_to_index, p2p_bw, results):
    result, *_ = results
    if args.markdown:
        print("```")
    print("Roundtrip benchmark")
    print_separator(separator="-")
    print_key_value(key="Backend", value=f"{args.backend}")
    print_key_value(key="Operation", value=f"{args.operation}")
    print_key_value(
        key="Array type", value="cupy" if args.type == "gpu" else "numpy"
    )
    print_key_value(key="User size", value=f"{args.size}")
    print_key_value(key="User second size", value=f"{args.second_size}")
    print_key_value(key="User chunk size", value=f"{args.chunk_size}")
    print_key_value(key="Compute shape", value=f"{result['shape']}")
    print_key_value(key="Compute chunk size", value=f"{result['chunksize']}")
    print_key_value(key="Ignore size", value=f"{format_bytes(args.ignore_size)}")
    print_key_value(key="Device(s)", value=f"{args.devs}")
    print_key_value(
        key="Data processed", value=f"{format_bytes(result['data_processed'])}"
    )
    if args.device_memory_limit:
        print_key_value(
            key="Device memory limit",
            value=f"{format_bytes(args.device_memory_limit)}",
        )
    print_key_value(key="RMM Pool", value=f"{not args.disable_rmm_pool}")
    print_key_value(key="Protocol", value=f"{args.protocol}")
    if args.protocol in ["ucx", "ucxx"]:
        print_key_value(key="TCP", value=f"{args.enable_tcp_over_ucx}")
        print_key_value(key="InfiniBand", value=f"{args.enable_infiniband}")
        print_key_value(key="NVLink", value=f"{args.enable_nvlink}")
    print_key_value(key="Worker thread(s)", value=f"{args.threads_per_worker}")
    data_processed, durations = zip(
        *((result["data_processed"], result["took"]) for result in results)
    )
    if args.markdown:
        print("\n```")
    print_throughput_bandwidth(
        args, durations, data_processed, p2p_bw, address_to_index
    )


def create_tidy_results(args, p2p_bw, results):
    configuration = {
        "operation": args.operation,
        "backend": args.backend,
        "array_type": "cupy" if args.type == "gpu" else "numpy",
        "user_size": args.size,
        "user_second_size": args.second_size,
        "user_chunk_size": args.chunk_size,
        "ignore_size": args.ignore_size,
        "devices": args.devs,
        "device_memory_limit": args.device_memory_limit,
        "worker_threads": args.threads_per_worker,
        "rmm_pool": not args.disable_rmm_pool,
        "protocol": args.protocol,
        "tcp": args.enable_tcp_over_ucx,
        "ib": args.enable_infiniband,
        "nvlink": args.enable_nvlink,
        "nreps": args.runs,
    }
    timing_data = pd.DataFrame(
        [
            pd.Series(
                data=ChainMap(
                    configuration,
                    {
                        "wallclock": result["took"],
                        "compute_shape": result["shape"],
                        "compute_chunk_size": result["chunksize"],
                        "data_processed": result["data_processed"],
                    },
                )
            )
            for result in results
        ]
    )
    return timing_data, p2p_bw


def parse_args():
    special_args = [
        {
            "name": [
                "-s",
                "--size",
            ],
            "default": "10000",
            "metavar": "n",
            "type": int,
            "help": "The array size n in n^2 (default 10000). For 'svd' operation "
            "the second dimension is given by --second-size.",
        },
        {
            "name": [
                "-2",
                "--second-size",
            ],
            "default": "1000",
            "type": int,
            "help": "The second dimension size for 'svd' operation (default 1000).",
        },
        {
            "name": [
                "-t",
                "--type",
            ],
            "choices": ["cpu", "gpu"],
            "default": "gpu",
            "type": str,
            "help": "Use GPU (cupy) or CPU (numpy) arrays.",
        },
        {
            "name": [
                "-o",
                "--operation",
            ],
            "default": "transpose_sum",
            "type": str,
            "help": "The operation to run, valid options are: "
            "'transpose_sum' (default), 'dot', 'fft', 'svd', 'sum', 'mean', 'slice'.",
        },
        {
            "name": [
                "-c",
                "--chunk-size",
            ],
            "default": "2500",
            "type": int,
            "help": "Chunk size (default 2500).",
        },
        {
            "name": "--ignore-size",
            "default": "1 MiB",
            "metavar": "nbytes",
            "type": parse_bytes,
            "help": "Ignore messages smaller than this (default '1 MiB').",
        },
        {
            "name": "--runs",
            "default": 3,
            "type": int,
            "help": "Number of runs (default 3).",
        },
        {
            "name": [
                "-b",
                "--backend",
            ],
            "choices": ["dask", "dask-noop"],
            "default": "dask",
            "type": str,
            "help": "Compute backend to use.",
        },
    ]

    return parse_benchmark_args(
        description="Transpose on LocalCUDACluster benchmark", args_list=special_args
    )


if __name__ == "__main__":
    execute_benchmark(
        Config(
            args=parse_args(),
            bench_once=bench_once,
            create_tidy_results=create_tidy_results,
            pretty_print_results=pretty_print_results,
        )
    )
rapidsai_public_repos/dask-cuda/dask_cuda/benchmarks/local_cudf_merge.py
import contextlib
import math
from collections import ChainMap
from time import perf_counter

import numpy as np
import pandas as pd

import dask
from dask.base import tokenize
from dask.dataframe.core import new_dd_object
from dask.distributed import performance_report, wait
from dask.utils import format_bytes, parse_bytes

from dask_cuda.benchmarks.common import Config, execute_benchmark
from dask_cuda.benchmarks.utils import (
    as_noop,
    parse_benchmark_args,
    print_key_value,
    print_separator,
    print_throughput_bandwidth,
)

# Benchmarking cuDF merge operation based on
# <https://gist.github.com/rjzamora/0ffc35c19b5180ab04bbf7c793c45955>


def generate_chunk(i_chunk, local_size, num_chunks, chunk_type, frac_match, gpu):
    # Setting a seed that triggers max amount of comm in the two-GPU case.
    if gpu:
        import cupy as xp

        import cudf as xdf
    else:
        import numpy as xp
        import pandas as xdf

    xp.random.seed(2**32 - 1)

    chunk_type = chunk_type or "build"
    frac_match = frac_match or 1.0
    if chunk_type == "build":
        # Build dataframe
        #
        # "key" column is a unique sample within [0, local_size * num_chunks)
        #
        # "shuffle" column is a random selection of partitions (used for shuffle)
        #
        # "payload" column is a random permutation of the chunk_size
        start = local_size * i_chunk
        stop = start + local_size
        parts_array = xp.arange(num_chunks, dtype="int64")
        shuffle_array = xp.repeat(parts_array, math.ceil(local_size / num_chunks))
        df = xdf.DataFrame(
            {
                "key": xp.arange(start, stop=stop, dtype="int64"),
                "shuffle": xp.random.permutation(shuffle_array)[:local_size],
                "payload": xp.random.permutation(
                    xp.arange(local_size, dtype="int64")
                ),
            }
        )
    else:
        # Other dataframe
        #
        # "key" column matches values from the build dataframe
        # for a fraction (`frac_match`) of the entries. The matching
        # entries are perfectly balanced across each partition of the
        # "base" dataframe.
        #
        # "payload" column is a random permutation of the chunk_size

        # Step 1. Choose values that DO match
        sub_local_size = local_size // num_chunks
        sub_local_size_use = max(int(sub_local_size * frac_match), 1)
        arrays = []
        for i in range(num_chunks):
            bgn = (local_size * i) + (sub_local_size * i_chunk)
            end = bgn + sub_local_size
            ar = xp.arange(bgn, stop=end, dtype="int64")
            arrays.append(xp.random.permutation(ar)[:sub_local_size_use])
        key_array_match = xp.concatenate(tuple(arrays), axis=0)

        # Step 2. Add values that DON'T match
        missing_size = local_size - key_array_match.shape[0]
        start = local_size * num_chunks + local_size * i_chunk
        stop = start + missing_size
        key_array_no_match = xp.arange(start, stop=stop, dtype="int64")

        # Step 3. Combine and create the final dataframe chunk (dask_cudf partition)
        key_array_combine = xp.concatenate(
            (key_array_match, key_array_no_match), axis=0
        )
        df = xdf.DataFrame(
            {
                "key": xp.random.permutation(key_array_combine),
                "payload": xp.random.permutation(
                    xp.arange(local_size, dtype="int64")
                ),
            }
        )
    return df


def get_random_ddf(chunk_size, num_chunks, frac_match, chunk_type, args):
    parts = [chunk_size for _ in range(num_chunks)]
    device_type = True if args.type == "gpu" else False
    meta = generate_chunk(0, 4, 1, chunk_type, None, device_type)
    divisions = [None] * (len(parts) + 1)

    name = "generate-data-" + tokenize(chunk_size, num_chunks, frac_match, chunk_type)

    graph = {
        (name, i): (
            generate_chunk,
            i,
            part,
            len(parts),
            chunk_type,
            frac_match,
            device_type,
        )
        for i, part in enumerate(parts)
    }

    ddf = new_dd_object(graph, name, meta, divisions)

    if chunk_type == "build":
        if not args.no_shuffle:
            divisions = [i for i in range(num_chunks)] + [num_chunks]
            return ddf.set_index("shuffle", divisions=tuple(divisions))
    else:
        del ddf["shuffle"]

    return ddf


def merge(args, ddf1, ddf2):
    # Allow default broadcast behavior, unless
    # "--shuffle-join" or "--broadcast-join" was
    # specified (with "--shuffle-join" taking
    # precedence)
    broadcast = False if args.shuffle_join else (True if args.broadcast_join else None)

    # The merge/join operation
    ddf_join = ddf1.merge(ddf2, on=["key"], how="inner", broadcast=broadcast)
    if args.set_index:
        ddf_join = ddf_join.set_index("key")
    if args.backend == "dask-noop":
        t1 = perf_counter()
        ddf_join = as_noop(ddf_join)
        noopify_duration = perf_counter() - t1
    else:
        noopify_duration = 0
    wait(ddf_join.persist())
    return noopify_duration


def bench_once(client, args, write_profile=None):
    # Generate random Dask dataframes
    n_workers = len(client.scheduler_info()["workers"])
    # Allow the number of chunks to vary between
    # the "base" and "other" DataFrames
    args.base_chunks = args.base_chunks or n_workers
    args.other_chunks = args.other_chunks or n_workers

    ddf_base = get_random_ddf(
        args.chunk_size, args.base_chunks, args.frac_match, "build", args
    ).persist()
    ddf_other = get_random_ddf(
        args.chunk_size, args.other_chunks, args.frac_match, "other", args
    ).persist()
    wait(ddf_base)
    wait(ddf_other)

    assert len(ddf_base.dtypes) == 2
    assert len(ddf_other.dtypes) == 2
    data_processed = len(ddf_base) * sum([t.itemsize for t in ddf_base.dtypes])
    data_processed += len(ddf_other) * sum([t.itemsize for t in ddf_other.dtypes])

    # Get contexts to use (defaults to null contexts that don't do anything)
    ctx1, ctx2 = contextlib.nullcontext(), contextlib.nullcontext()
    if args.backend == "explicit-comms":
        ctx1 = dask.config.set(explicit_comms=True)
    if write_profile is not None:
        ctx2 = performance_report(filename=args.profile)

    with ctx1:
        with ctx2:
            t1 = perf_counter()
            noopify_duration = merge(args, ddf_base, ddf_other)
            duration = perf_counter() - t1 - noopify_duration

    return (data_processed, duration)


def pretty_print_results(args, address_to_index, p2p_bw, results):
    broadcast = (
        False if args.shuffle_join else (True if args.broadcast_join else "default")
    )
    if args.markdown:
        print("```")
    print("Merge benchmark")
    print_separator(separator="-")
    print_key_value(key="Backend", value=f"{args.backend}")
    print_key_value(key="Merge type", value=f"{args.type}")
    print_key_value(key="Rows-per-chunk", value=f"{args.chunk_size}")
    print_key_value(key="Base-chunks", value=f"{args.base_chunks}")
    print_key_value(key="Other-chunks", value=f"{args.other_chunks}")
    print_key_value(key="Broadcast", value=f"{broadcast}")
    print_key_value(key="Protocol", value=f"{args.protocol}")
    print_key_value(key="Device(s)", value=f"{args.devs}")
    if args.device_memory_limit:
        print_key_value(
            key="Device memory limit",
            value=f"{format_bytes(args.device_memory_limit)}",
        )
    print_key_value(key="RMM Pool", value=f"{not args.disable_rmm_pool}")
    print_key_value(key="Frac-match", value=f"{args.frac_match}")
    if args.protocol in ["ucx", "ucxx"]:
        print_key_value(key="TCP", value=f"{args.enable_tcp_over_ucx}")
        print_key_value(key="InfiniBand", value=f"{args.enable_infiniband}")
        print_key_value(key="NVLink", value=f"{args.enable_nvlink}")
    print_key_value(key="Worker thread(s)", value=f"{args.threads_per_worker}")
    print_key_value(key="Data processed", value=f"{format_bytes(results[0][0])}")
    if args.markdown:
        print("\n```")
    data_processed, durations = zip(*results)
    print_throughput_bandwidth(
        args, durations, data_processed, p2p_bw, address_to_index
    )


def create_tidy_results(
    args,
    p2p_bw: np.ndarray,
    results,
):
    broadcast = (
        False if args.shuffle_join else (True if args.broadcast_join else "default")
    )
    configuration = {
        "dataframe_type": "cudf" if args.type == "gpu" else "pandas",
        "backend": args.backend,
        "merge_type": args.type,
        "base_chunks": args.base_chunks,
        "other_chunks": args.other_chunks,
        "broadcast": broadcast,
        "rows_per_chunk": args.chunk_size,
        "ignore_size": args.ignore_size,
        "frac_match": args.frac_match,
        "devices": args.devs,
        "device_memory_limit": args.device_memory_limit,
        "worker_threads": args.threads_per_worker,
        "rmm_pool": not args.disable_rmm_pool,
        "protocol": args.protocol,
        "tcp": args.enable_tcp_over_ucx,
        "ib": args.enable_infiniband,
        "nvlink": args.enable_nvlink,
        "nreps": args.runs,
    }
    timing_data = pd.DataFrame(
        [
            pd.Series(
                data=ChainMap(
                    configuration,
                    {"wallclock": duration, "data_processed": data_processed},
                )
            )
            for data_processed, duration in results
        ]
    )
    return timing_data, p2p_bw


def parse_args():
    special_args = [
        {
            "name": [
                "-b",
                "--backend",
            ],
            "choices": ["dask", "explicit-comms", "dask-noop"],
            "default": "dask",
            "type": str,
            "help": "The backend to use.",
        },
        {
            "name": [
                "-t",
                "--type",
            ],
            "choices": ["cpu", "gpu"],
            "default": "gpu",
            "type": str,
            "help": "Do merge with GPU or CPU dataframes",
        },
        {
            "name": [
                "-c",
                "--chunk-size",
            ],
            "default": 1_000_000,
            "metavar": "n",
            "type": int,
            "help": "Chunk size (default 1_000_000)",
        },
        {
            "name": "--base-chunks",
            "default": None,
            "type": int,
            "help": "Number of base-DataFrame partitions (default: n_workers)",
        },
        {
            "name": "--other-chunks",
            "default": None,
            "type": int,
            "help": "Number of other-DataFrame partitions (default: n_workers)",
        },
        {
            "name": "--broadcast-join",
            "action": "store_true",
            "help": "Use broadcast join when possible.",
        },
        {
            "name": "--shuffle-join",
            "action": "store_true",
            "help": "Use shuffle join (takes precedence over '--broadcast-join').",
        },
        {
            "name": "--ignore-size",
            "default": "1 MiB",
            "metavar": "nbytes",
            "type": parse_bytes,
            "help": "Ignore messages smaller than this (default '1 MiB')",
        },
        {
            "name": "--frac-match",
            "default": 0.3,
            "type": float,
            "help": "Fraction of rows that matches (default 0.3)",
        },
        {
            "name": "--no-shuffle",
            "action": "store_true",
            "help": "Don't shuffle the keys of the left (base) dataframe.",
        },
        {
            "name": "--runs",
            "default": 3,
            "type": int,
            "help": "Number of runs",
        },
        {
            "name": [
                "-s",
                "--set-index",
            ],
            "action": "store_true",
            "help": "Call set_index on the key column to sort the joined dataframe.",
        },
    ]
    return parse_benchmark_args(
        description="Distributed merge (dask/cudf) benchmark", args_list=special_args
    )


if __name__ == "__main__":
    execute_benchmark(
        Config(
            args=parse_args(),
            bench_once=bench_once,
            create_tidy_results=create_tidy_results,
            pretty_print_results=pretty_print_results,
        )
    )
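The key-matching arithmetic in `generate_chunk`'s "other" branch can be sketched with a stdlib-only helper (the name `match_sizes` is hypothetical): a fraction `frac_match` of each chunk's keys is drawn from the build dataframe's key range, and the remainder is filled with keys that fall outside it.

```python
def match_sizes(local_size, num_chunks, frac_match):
    # Keys sampled from each build-side chunk's key range:
    sub_local_size = local_size // num_chunks
    sub_local_size_use = max(int(sub_local_size * frac_match), 1)
    # Total matching keys (len(key_array_match) in generate_chunk):
    n_match = sub_local_size_use * num_chunks
    # The rest are non-matching (len(key_array_no_match)):
    n_no_match = local_size - n_match
    return n_match, n_no_match


# e.g. a 1_000_000-row "other" chunk against 4 build chunks with frac_match=0.3
sizes = match_sizes(1_000_000, 4, 0.3)
```

The two counts always sum to `local_size`, so every "other" chunk has the same row count regardless of `frac_match`.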
rapidsai_public_repos/dask-cuda/dask_cuda/benchmarks/local_cudf_groupby.py
import contextlib
from collections import ChainMap
from time import perf_counter as clock

import pandas as pd

import dask
import dask.dataframe as dd
from dask.distributed import performance_report, wait
from dask.utils import format_bytes, parse_bytes

from dask_cuda.benchmarks.common import Config, execute_benchmark
from dask_cuda.benchmarks.utils import (
    as_noop,
    parse_benchmark_args,
    print_key_value,
    print_separator,
    print_throughput_bandwidth,
)


def apply_groupby(
    df,
    backend,
    sort=False,
    split_out=1,
    split_every=8,
    shuffle=None,
):
    if backend == "dask-noop" and shuffle == "explicit-comms":
        raise RuntimeError("dask-noop not valid for explicit-comms shuffle")

    # Handle special "explicit-comms" case
    config = {}
    if shuffle == "explicit-comms":
        shuffle = "tasks"
        config = {"explicit-comms": True}

    with dask.config.set(config):
        agg = df.groupby("key", sort=sort).agg(
            {"int64": ["max", "count"], "float64": "mean"},
            split_out=split_out,
            split_every=split_every,
            shuffle=shuffle,
        )
    if backend == "dask-noop":
        agg = as_noop(agg)

    wait(agg.persist())
    return agg


def generate_chunk(chunk_info, unique_size=1, gpu=True):
    # Setting a seed that triggers max amount of comm in the two-GPU case.
    if gpu:
        import cupy as xp

        import cudf as xdf
    else:
        import numpy as xp
        import pandas as xdf

    i_chunk, local_size = chunk_info
    xp.random.seed(i_chunk * 1_000)
    return xdf.DataFrame(
        {
            "key": xp.random.randint(0, unique_size, size=local_size, dtype="int64"),
            "int64": xp.random.permutation(xp.arange(local_size, dtype="int64")),
            "float64": xp.random.permutation(xp.arange(local_size, dtype="float64")),
        }
    )


def get_random_ddf(args):
    total_size = args.chunk_size * args.in_parts
    chunk_kwargs = {
        "unique_size": max(int(args.unique_ratio * total_size), 1),
        "gpu": True if args.type == "gpu" else False,
    }
    return dd.from_map(
        generate_chunk,
        [(i, args.chunk_size) for i in range(args.in_parts)],
        meta=generate_chunk((0, 1), **chunk_kwargs),
        enforce_metadata=False,
        **chunk_kwargs,
    )


def bench_once(client, args, write_profile=None):
    # Generate random Dask dataframe
    df = get_random_ddf(args)

    data_processed = len(df) * sum([t.itemsize for t in df.dtypes])
    shuffle = {
        "True": "tasks",
        "False": False,
    }.get(args.shuffle, args.shuffle)

    if write_profile is None:
        ctx = contextlib.nullcontext()
    else:
        ctx = performance_report(filename=args.profile)

    with ctx:
        t1 = clock()
        agg = apply_groupby(
            df,
            backend=args.backend,
            sort=args.sort,
            split_out=args.split_out,
            split_every=args.split_every,
            shuffle=shuffle,
        )
        t2 = clock()

    output_size = agg.memory_usage(index=True, deep=True).compute().sum()
    return (data_processed, output_size, t2 - t1)


def pretty_print_results(args, address_to_index, p2p_bw, results):
    if args.markdown:
        print("```")
    print("Groupby benchmark")
    print_separator(separator="-")
    print_key_value(key="Use shuffle", value=f"{args.shuffle}")
    print_key_value(key="Backend", value=f"{args.backend}")
    print_key_value(key="Output partitions", value=f"{args.split_out}")
    print_key_value(key="Input partitions", value=f"{args.in_parts}")
    print_key_value(key="Sort Groups", value=f"{args.sort}")
    print_key_value(key="Rows-per-chunk", value=f"{args.chunk_size}")
    print_key_value(key="Unique-group ratio", value=f"{args.unique_ratio}")
    print_key_value(key="Protocol", value=f"{args.protocol}")
    print_key_value(key="Device(s)", value=f"{args.devs}")
    print_key_value(key="Tree-reduction width", value=f"{args.split_every}")
    if args.device_memory_limit:
        print_key_value(
            key="Device memory limit",
            value=f"{format_bytes(args.device_memory_limit)}",
        )
    print_key_value(key="RMM Pool", value=f"{not args.disable_rmm_pool}")
    if args.protocol in ["ucx", "ucxx"]:
        print_key_value(key="TCP", value=f"{args.enable_tcp_over_ucx}")
        print_key_value(key="InfiniBand", value=f"{args.enable_infiniband}")
        print_key_value(key="NVLink", value=f"{args.enable_nvlink}")
    print_key_value(key="Worker thread(s)", value=f"{args.threads_per_worker}")
    print_key_value(key="Data processed", value=f"{format_bytes(results[0][0])}")
    print_key_value(key="Output size", value=f"{format_bytes(results[0][1])}")
    if args.markdown:
        print("\n```")
    data_processed, output_size, durations = zip(*results)
    print_throughput_bandwidth(
        args, durations, data_processed, p2p_bw, address_to_index
    )


def create_tidy_results(args, p2p_bw, results):
    configuration = {
        "dataframe_type": "cudf" if args.type == "gpu" else "pandas",
        "shuffle": args.shuffle,
        "backend": args.backend,
        "sort": args.sort,
        "split_out": args.split_out,
        "split_every": args.split_every,
        "in_parts": args.in_parts,
        "rows_per_chunk": args.chunk_size,
        "unique_ratio": args.unique_ratio,
        "protocol": args.protocol,
        "devs": args.devs,
        "device_memory_limit": args.device_memory_limit,
        "rmm_pool": not args.disable_rmm_pool,
        "tcp": args.enable_tcp_over_ucx,
        "ib": args.enable_infiniband,
        "nvlink": args.enable_nvlink,
    }
    timing_data = pd.DataFrame(
        [
            pd.Series(
                data=ChainMap(
                    configuration,
                    {
                        "wallclock": duration,
                        "data_processed": data_processed,
                        "output_size": output_size,
                    },
                )
            )
            for data_processed, output_size, duration in results
        ]
    )
    return timing_data, p2p_bw


def parse_args():
    special_args = [
        {
            "name": "--in-parts",
            "default": 100,
            "metavar": "n",
            "type": int,
            "help": "Number of input partitions (default '100')",
        },
        {
            "name": [
                "-c",
                "--chunk-size",
            ],
            "default": 1_000_000,
            "metavar": "n",
            "type": int,
            "help": "Chunk size (default 1_000_000)",
        },
        {
            "name": "--unique-ratio",
            "default": 0.01,
            "type": float,
            "help": "Fraction of rows that are unique groups",
        },
        {
            "name": "--sort",
            "default": False,
            "action": "store_true",
            "help": "Whether to sort the output group order.",
        },
        {
            "name": "--split_out",
            "default": 1,
            "type": int,
            "help": "How many partitions to return.",
        },
        {
            "name": "--split_every",
            "default": 8,
            "type": int,
            "help": "Tree-reduction width.",
        },
        {
            "name": "--shuffle",
            "choices": ["False", "True", "tasks", "explicit-comms"],
            "default": "False",
            "type": str,
            "help": "Whether to use shuffle-based groupby.",
        },
        {
            "name": "--backend",
            "choices": ["dask", "dask-noop"],
            "default": "dask",
            "type": str,
            "help": (
                "Compute engine to use, dask-noop turns the graph into a noop graph"
            ),
        },
        {
            "name": [
                "-t",
                "--type",
            ],
            "choices": ["cpu", "gpu"],
            "default": "gpu",
            "type": str,
            "help": "Do shuffle with GPU or CPU dataframes (default 'gpu')",
        },
        {
            "name": "--ignore-size",
            "default": "1 MiB",
            "metavar": "nbytes",
            "type": parse_bytes,
            "help": "Ignore messages smaller than this (default '1 MiB')",
        },
        {
            "name": "--runs",
            "default": 3,
            "type": int,
            "help": "Number of runs",
        },
    ]
    return parse_benchmark_args(
        description="Distributed groupby (dask/cudf) benchmark", args_list=special_args
    )


if __name__ == "__main__":
    execute_benchmark(
        Config(
            args=parse_args(),
            bench_once=bench_once,
            create_tidy_results=create_tidy_results,
            pretty_print_results=pretty_print_results,
        )
    )
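The string-to-shuffle-argument mapping used in `bench_once` above can be sketched standalone (stdlib only; the name `resolve_shuffle` is hypothetical):

```python
def resolve_shuffle(arg):
    # "--shuffle False" -> no shuffle; "--shuffle True" -> task-based shuffle;
    # other accepted strings ("tasks", "explicit-comms") pass through unchanged.
    return {"True": "tasks", "False": False}.get(arg, arg)


no_shuffle = resolve_shuffle("False")
task_shuffle = resolve_shuffle("True")
passthrough = resolve_shuffle("explicit-comms")
```

Using `dict.get` with the original value as the fallback keeps the two boolean-like spellings special-cased while leaving the remaining argparse choices untouched.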
rapidsai_public_repos/dask-cuda/dask_cuda/benchmarks/local_cudf_shuffle.py
import contextlib
from collections import ChainMap
from time import perf_counter
from typing import Tuple

import numpy as np
import pandas as pd

import dask
import dask.dataframe
from dask.dataframe.core import new_dd_object
from dask.dataframe.shuffle import shuffle
from dask.distributed import Client, performance_report, wait
from dask.utils import format_bytes, parse_bytes

import dask_cuda.explicit_comms.dataframe.shuffle
from dask_cuda.benchmarks.common import Config, execute_benchmark
from dask_cuda.benchmarks.utils import (
    as_noop,
    parse_benchmark_args,
    print_key_value,
    print_separator,
    print_throughput_bandwidth,
)

try:
    import cupy

    import cudf
except ImportError:
    cupy = None
    cudf = None


def shuffle_dask(df, args):
    result = shuffle(df, index="data", shuffle="tasks", ignore_index=args.ignore_index)
    if args.backend == "dask-noop":
        result = as_noop(result)
    t1 = perf_counter()
    wait(result.persist())
    return perf_counter() - t1


def shuffle_explicit_comms(df, args):
    t1 = perf_counter()
    wait(
        dask_cuda.explicit_comms.dataframe.shuffle.shuffle(
            df, column_names=["data"], ignore_index=args.ignore_index
        ).persist()
    )
    return perf_counter() - t1


def create_df(nelem, df_type):
    if df_type == "cpu":
        return pd.DataFrame({"data": np.random.random(nelem)})
    elif df_type == "gpu":
        if cudf is None or cupy is None:
            raise RuntimeError("`--type=gpu` requires cudf and cupy")
        return cudf.DataFrame({"data": cupy.random.random(nelem)})
    else:
        raise ValueError(f"Unknown type {df_type}")


def create_data(
    client: Client, args, name="balanced-df"
) -> Tuple[int, dask.dataframe.DataFrame]:
    """Create an evenly distributed dask dataframe

    The partitions are perfectly distributed across workers, if the number of
    requested partitions is evenly divisible by the number of workers.
    """
    chunksize = args.partition_size // np.float64().nbytes
    workers = list(client.scheduler_info()["workers"].keys())
    assert len(workers) > 0

    dist = args.partition_distribution
    if dist is None:
        # By default, we create a balanced distribution
        dist = [args.in_parts // len(workers)] * len(workers)
        for i in range(args.in_parts % len(workers)):
            dist[i] += 1

    if len(dist) != len(workers):
        raise ValueError(
            f"The length of `--devs`({len(workers)}) and "
            f"`--partition-distribution`({len(dist)}) doesn't match"
        )
    if sum(dist) != args.in_parts:
        raise ValueError(
            f"The sum of `--partition-distribution`({sum(dist)}) must match "
            f"the number of input partitions `--in-parts={args.in_parts}`"
        )

    # Create partitions based on the specified partition distribution
    dsk = {}
    for i, part_size in enumerate(dist):
        for _ in range(part_size):
            # We use `client.submit` to control placement of the partition.
            dsk[(name, len(dsk))] = client.submit(
                create_df, chunksize, args.type, workers=[workers[i]], pure=False
            )
    wait(dsk.values())

    df_meta = create_df(0, args.type)
    divs = [None] * (len(dsk) + 1)
    ret = new_dd_object(dsk, name, df_meta, divs).persist()
    wait(ret)

    data_processed = args.in_parts * args.partition_size
    if not args.ignore_index:
        data_processed += args.in_parts * chunksize * df_meta.index.dtype.itemsize
    return data_processed, ret


def bench_once(client, args, write_profile=None):
    data_processed, df = create_data(client, args)

    if write_profile is None:
        ctx = contextlib.nullcontext()
    else:
        ctx = performance_report(filename=args.profile)

    with ctx:
        if args.backend in {"dask", "dask-noop"}:
            duration = shuffle_dask(df, args)
        else:
            duration = shuffle_explicit_comms(df, args)

    return (data_processed, duration)


def pretty_print_results(args, address_to_index, p2p_bw, results):
    if args.markdown:
        print("```")
    print("Shuffle benchmark")
    print_separator(separator="-")
    print_key_value(key="Backend", value=f"{args.backend}")
    print_key_value(key="Partition size", value=f"{format_bytes(args.partition_size)}")
    print_key_value(key="Input partitions", value=f"{args.in_parts}")
    print_key_value(key="Protocol", value=f"{args.protocol}")
    print_key_value(key="Device(s)", value=f"{args.devs}")
    if args.device_memory_limit:
        print_key_value(
            key="Device memory limit",
            value=f"{format_bytes(args.device_memory_limit)}",
        )
    print_key_value(key="RMM Pool", value=f"{not args.disable_rmm_pool}")
    if args.protocol in ["ucx", "ucxx"]:
        print_key_value(key="TCP", value=f"{args.enable_tcp_over_ucx}")
        print_key_value(key="InfiniBand", value=f"{args.enable_infiniband}")
        print_key_value(key="NVLink", value=f"{args.enable_nvlink}")
    print_key_value(key="Worker thread(s)", value=f"{args.threads_per_worker}")
    print_key_value(key="Data processed", value=f"{format_bytes(results[0][0])}")
    if args.markdown:
        print("\n```")
    data_processed, durations = zip(*results)
    print_throughput_bandwidth(
        args, durations, data_processed, p2p_bw, address_to_index
    )


def create_tidy_results(args, p2p_bw, results):
    configuration = {
        "dataframe_type": "cudf" if args.type == "gpu" else "pandas",
        "backend": args.backend,
        "partition_size": args.partition_size,
        "in_parts": args.in_parts,
        "protocol": args.protocol,
        "devs": args.devs,
        "device_memory_limit": args.device_memory_limit,
        "rmm_pool": not args.disable_rmm_pool,
        "tcp": args.enable_tcp_over_ucx,
        "ib": args.enable_infiniband,
        "nvlink": args.enable_nvlink,
    }
    timing_data = pd.DataFrame(
        [
            pd.Series(
                data=ChainMap(
                    configuration,
                    {"wallclock": duration, "data_processed": data_processed},
                )
            )
            for data_processed, duration in results
        ]
    )
    return timing_data, p2p_bw


def parse_args():
    special_args = [
        {
            "name": "--partition-size",
            "default": "1 MiB",
            "metavar": "nbytes",
            "type": parse_bytes,
            "help": "Size of each partition (default '1 MiB')",
        },
        {
            "name": "--in-parts",
            "default": 100,
            "metavar": "n",
            "type": int,
            "help": "Number of input partitions (default '100')",
        },
        {
            "name": [
                "-b",
                "--backend",
            ],
            "choices": ["dask", "explicit-comms", "dask-noop"],
            "default": "dask",
            "type": str,
            "help": "The backend to use.",
        },
        {
            "name": [
                "-t",
                "--type",
            ],
            "choices": ["cpu", "gpu"],
            "default": "gpu",
            "type": str,
            "help": "Do shuffle with GPU or CPU dataframes (default 'gpu')",
        },
        {
            "name": "--ignore-size",
            "default": "1 MiB",
            "metavar": "nbytes",
            "type": parse_bytes,
            "help": "Ignore messages smaller than this (default '1 MiB')",
        },
        {
            "name": "--runs",
            "default": 3,
            "type": int,
            "help": "Number of runs",
        },
        {
            "name": "--ignore-index",
            "action": "store_true",
            "help": "When shuffling, ignore the index",
        },
        {
            "name": "--partition-distribution",
            "default": None,
            "metavar": "PARTITION_SIZE_LIST",
            "type": lambda x: [int(y) for y in x.split(",")],
            "help": "Comma separated list defining the size of each partition, "
            "which must have the same length as `--devs`. "
            "If not set, a balanced distribution is used.",
        },
    ]
    return parse_benchmark_args(
        description="Distributed shuffle (dask/cudf) benchmark",
        args_list=special_args,
    )


if __name__ == "__main__":
    execute_benchmark(
        Config(
            args=parse_args(),
            bench_once=bench_once,
            create_tidy_results=create_tidy_results,
            pretty_print_results=pretty_print_results,
        )
    )
rapidsai_public_repos/dask-cuda/dask_cuda/benchmarks/local_cupy_map_overlap.py
import contextlib
from collections import ChainMap
from time import perf_counter as clock

import cupy as cp
import numpy as np
import pandas as pd
from cupyx.scipy.ndimage.filters import convolve as cp_convolve
from scipy.ndimage import convolve as sp_convolve

from dask import array as da
from dask.distributed import performance_report, wait
from dask.utils import format_bytes, parse_bytes

from dask_cuda.benchmarks.common import Config, execute_benchmark
from dask_cuda.benchmarks.utils import (
    as_noop,
    parse_benchmark_args,
    print_key_value,
    print_separator,
    print_throughput_bandwidth,
)


def mean_filter(a, shape):
    a_k = np.full_like(a, 1.0 / np.prod(shape), shape=shape)
    if isinstance(a, cp.ndarray):
        return cp_convolve(a, a_k)
    else:
        return sp_convolve(a, a_k)


def bench_once(client, args, write_profile=None):
    # Create a simple random array
    if args.type == "gpu":
        rs = da.random.RandomState(RandomState=cp.random.RandomState)
    else:
        rs = da.random.RandomState(RandomState=np.random.RandomState)
    x = rs.random((args.size, args.size), chunks=args.chunk_size).persist()
    ks = 2 * (2 * args.kernel_size + 1,)
    wait(x)
    data_processed = x.nbytes

    # Execute the operations to benchmark
    if args.profile is not None and write_profile is not None:
        ctx = performance_report(filename=args.profile)
    else:
        ctx = contextlib.nullcontext()

    with ctx:
        result = x.map_overlap(mean_filter, args.kernel_size, shape=ks)
        if args.backend == "dask-noop":
            result = as_noop(result)
        t1 = clock()
        wait(client.persist(result))
        took = clock() - t1

    return (data_processed, took)


def pretty_print_results(args, address_to_index, p2p_bw, results):
    if args.markdown:
        print("```")
    print("CuPy map overlap benchmark")
    print_separator(separator="-")
    print_key_value(key="Backend", value=f"{args.backend}")
    print_key_value(key="Array type", value="cupy" if args.type == "gpu" else "numpy")
    print_key_value(key="Size", value=f"{args.size}*{args.size}")
    print_key_value(key="Chunk size", value=f"{args.chunk_size}")
    print_key_value(key="Ignore size", value=f"{format_bytes(args.ignore_size)}")
    print_key_value(key="Kernel size", value=f"{args.kernel_size}")
    print_key_value(key="Device(s)", value=f"{args.devs}")
    if args.device_memory_limit:
        print_key_value(
            key="Device memory limit",
            value=f"{format_bytes(args.device_memory_limit)}",
        )
    print_key_value(key="RMM Pool", value=f"{not args.disable_rmm_pool}")
    print_key_value(key="Protocol", value=f"{args.protocol}")
    if args.protocol in ["ucx", "ucxx"]:
        print_key_value(key="TCP", value=f"{args.enable_tcp_over_ucx}")
        print_key_value(key="InfiniBand", value=f"{args.enable_infiniband}")
        print_key_value(key="NVLink", value=f"{args.enable_nvlink}")
    print_key_value(key="Worker thread(s)", value=f"{args.threads_per_worker}")
    data_processed, durations = zip(*results)
    if args.markdown:
        print("\n```")
    print_throughput_bandwidth(
        args, durations, data_processed, p2p_bw, address_to_index
    )


def create_tidy_results(args, p2p_bw, results):
    configuration = {
        "array_type": "cupy" if args.type == "gpu" else "numpy",
        "backend": args.backend,
        "user_size": args.size,
        "chunk_size": args.chunk_size,
        "ignore_size": args.ignore_size,
        "devices": args.devs,
        "device_memory_limit": args.device_memory_limit,
        "worker_threads": args.threads_per_worker,
        "rmm_pool": not args.disable_rmm_pool,
        "protocol": args.protocol,
        "tcp": args.enable_tcp_over_ucx,
        "ib": args.enable_infiniband,
        "nvlink": args.enable_nvlink,
        "nreps": args.runs,
        "kernel_size": args.kernel_size,
    }
    timing_data = pd.DataFrame(
        [
            pd.Series(
                data=ChainMap(
                    configuration,
                    {
                        "wallclock": duration,
                        "data_processed": data_processed,
                    },
                )
            )
            for (data_processed, duration) in results
        ]
    )
    return timing_data, p2p_bw


def parse_args():
    special_args = [
        {
            "name": [
                "-s",
                "--size",
            ],
            "default": "10000",
            "metavar": "n",
            "type": int,
            "help": "The size n in n^2 (default 10000)",
        },
        {
            "name": [
                "-t",
                "--type",
            ],
            "choices": ["cpu", "gpu"],
            "default": "gpu",
            "type": str,
            "help": "Use GPU or CPU arrays",
        },
        {
            "name": [
                "-c",
                "--chunk-size",
            ],
            "default": "128 MiB",
            "metavar": "nbytes",
            "type": str,
            "help": "Chunk size (default '128 MiB')",
        },
        {
            "name": [
                "-k",
                "--kernel-size",
            ],
            "default": "1",
            "metavar": "k",
            "type": int,
            "help": "Kernel size, 2*k+1, in each dimension (default 1)",
        },
        {
            "name": "--ignore-size",
            "default": "1 MiB",
            "metavar": "nbytes",
            "type": parse_bytes,
            "help": "Ignore messages smaller than this (default '1 MiB')",
        },
        {
            "name": "--runs",
            "default": 3,
            "type": int,
            "help": "Number of runs",
        },
        {
            "name": [
                "-b",
                "--backend",
            ],
            "choices": ["dask", "dask-noop"],
            "default": "dask",
            "type": str,
            "help": "Compute backend to use.",
        },
    ]
    return parse_benchmark_args(
        description="CuPy map overlap on LocalCUDACluster benchmark",
        args_list=special_args,
    )


if __name__ == "__main__":
    execute_benchmark(
        Config(
            args=parse_args(),
            bench_once=bench_once,
            create_tidy_results=create_tidy_results,
            pretty_print_results=pretty_print_results,
        )
    )
rapidsai_public_repos/dask-cuda/dask_cuda/benchmarks/utils.py
import argparse
import itertools
import json
import os
import time
from collections import defaultdict
from datetime import datetime
from operator import itemgetter
from typing import Any, Callable, Mapping, NamedTuple, Optional, Tuple

import numpy as np
import pandas as pd

from dask.distributed import Client, SSHCluster
from dask.utils import format_bytes, format_time, parse_bytes
from distributed.comm.addressing import get_address_host

from dask_cuda.local_cuda_cluster import LocalCUDACluster


def as_noop(dsk):
    """
    Turn the given dask computation into a noop.

    Uses dask-noop (https://github.com/gjoseph92/dask-noop/)

    Parameters
    ----------
    dsk
        Dask object (on which one could call compute)

    Returns
    -------
    New dask object representing the same task graph with no
    computation/data attached.

    Raises
    ------
    RuntimeError
        If dask_noop is not importable
    """
    try:
        from dask_noop import as_noop

        return as_noop(dsk)
    except ImportError:
        raise RuntimeError("Requested noop computation but dask-noop not installed.")


def parse_benchmark_args(description="Generic dask-cuda Benchmark", args_list=[]):
    parser = argparse.ArgumentParser(description=description)
    worker_args = parser.add_argument_group(description="Worker configuration")
    worker_args.add_argument(
        "-d", "--devs", default="0", type=str, help='GPU devices to use (default "0").'
    )
    worker_args.add_argument(
        "--threads-per-worker",
        default=1,
        type=int,
        help="Number of Dask threads per worker (i.e., GPU).",
    )
    worker_args.add_argument(
        "--device-memory-limit",
        default=None,
        type=parse_bytes,
        help="Size of the CUDA device LRU cache, which is used to determine when the "
        "worker starts spilling to host memory. Can be an integer (bytes), float "
        "(fraction of total device memory), string (like ``'5GB'`` or ``'5000M'``), or "
        "``'auto'``, 0, or ``None`` to disable spilling to host (i.e. allow full "
        "device memory usage).",
    )
    cluster_args = parser.add_argument_group(description="Cluster configuration")
    cluster_args.add_argument(
        "-p",
        "--protocol",
        choices=["tcp", "ucx", "ucxx"],
        default="tcp",
        type=str,
        help="The communication protocol to use.",
    )
    cluster_args.add_argument(
        "--multiprocessing-method",
        default="spawn",
        choices=["spawn", "fork", "forkserver"],
        type=str,
        help="Which method should multiprocessing use to start child processes? "
        "On supercomputing systems with a high-performance interconnect, "
        "'forkserver' can be used to avoid issues with fork not being allowed "
        "after the networking stack has been initialised.",
    )
    cluster_args.add_argument(
        "--rmm-pool-size",
        default=None,
        type=parse_bytes,
        help="The size of the RMM memory pool. Can be an integer (bytes) or a string "
        "(like '4GB' or '5000M'). By default, 1/2 of the total GPU memory is used.",
    )
    cluster_args.add_argument(
        "--disable-rmm-pool", action="store_true", help="Disable the RMM memory pool"
    )
    cluster_args.add_argument(
        "--enable-rmm-managed",
        action="store_true",
        help="Enable RMM managed memory allocator",
    )
    cluster_args.add_argument(
        "--enable-rmm-async",
        action="store_true",
        help="Enable RMM async memory allocator (implies --disable-rmm-pool)",
    )
    cluster_args.add_argument(
        "--rmm-release-threshold",
        default=None,
        type=parse_bytes,
        help="When --enable-rmm-async is set and the pool size grows beyond this "
        "value, unused memory held by the pool will be released at the next "
        "synchronization point. Can be an integer (bytes), or a string (like "
        "'4GB' or '5000M'). By default, this feature is disabled.",
    )
    cluster_args.add_argument(
        "--rmm-log-directory",
        default=None,
        type=str,
        help="Directory to write worker and scheduler RMM log files to. "
        "Logging is only enabled if RMM memory pool is enabled.",
    )
    cluster_args.add_argument(
        "--enable-rmm-statistics",
        action="store_true",
        help="Use RMM's StatisticsResourceAdaptor to gather allocation statistics. "
        "This enables spilling implementations such as JIT-Unspill to provide more "
        "information on out-of-memory errors",
    )
    cluster_args.add_argument(
        "--enable-rmm-track-allocations",
        action="store_true",
        help="When enabled, wraps the memory resource used by each worker with a "
        "``rmm.mr.TrackingResourceAdaptor``, which tracks the amount of memory "
        "allocated. "
        "NOTE: This option enables additional diagnostics to be collected and "
        "reported by the Dask dashboard. However, there is significant overhead "
        "associated with this and it should only be used for debugging and memory "
        "profiling.",
    )
    cluster_args.add_argument(
        "--enable-tcp-over-ucx",
        default=None,
        action="store_true",
        dest="enable_tcp_over_ucx",
        help="Enable TCP over UCX.",
    )
    cluster_args.add_argument(
        "--enable-infiniband",
        default=None,
        action="store_true",
        dest="enable_infiniband",
        help="Enable InfiniBand over UCX.",
    )
    cluster_args.add_argument(
        "--enable-nvlink",
        default=None,
        action="store_true",
        dest="enable_nvlink",
        help="Enable NVLink over UCX.",
    )
    cluster_args.add_argument(
        "--enable-rdmacm",
        default=None,
        action="store_true",
        dest="enable_rdmacm",
        help="Enable RDMACM with UCX.",
    )
    cluster_args.add_argument(
        "--disable-tcp-over-ucx",
        action="store_false",
        dest="enable_tcp_over_ucx",
        help="Disable TCP over UCX.",
    )
    cluster_args.add_argument(
        "--disable-infiniband",
        action="store_false",
        dest="enable_infiniband",
        help="Disable InfiniBand over UCX.",
    )
    cluster_args.add_argument(
        "--disable-nvlink",
        action="store_false",
        dest="enable_nvlink",
        help="Disable NVLink over UCX.",
    )
    cluster_args.add_argument(
        "--disable-rdmacm",
        action="store_false",
        dest="enable_rdmacm",
        help="Disable RDMACM with UCX.",
    )
    cluster_args.add_argument(
        "--interface",
        default=None,
        type=str,
        dest="interface",
        help="Network interface Dask processes will use to listen for connections.",
    )
    group = cluster_args.add_mutually_exclusive_group()
    group.add_argument(
        "--scheduler-address",
        default=None,
        type=str,
        help="Scheduler Address -- assumes cluster is created outside of benchmark. "
        "If provided, worker configuration options provided to this script are ignored "
        "since the workers are assumed to be started separately. Similarly the other "
        "cluster configuration options have no effect.",
    )
    group.add_argument(
        "--scheduler-file",
        default=None,
        type=str,
        dest="scheduler_file",
        help="Read cluster configuration from specified file. "
        "If provided, worker configuration options provided to this script are ignored "
        "since the workers are assumed to be started separately. Similarly the other "
        "cluster configuration options have no effect.",
    )
    group.add_argument(
        "--dashboard-address",
        default=None,
        type=str,
        help="Address on which to listen for diagnostics dashboard, ignored if "
        "either ``--scheduler-address`` or ``--scheduler-file`` is specified.",
    )
    cluster_args.add_argument(
        "--shutdown-external-cluster-on-exit",
        default=False,
        action="store_true",
        dest="shutdown_cluster",
        help="If connecting to an external cluster, should we shut down the cluster "
        "when the benchmark exits?",
    )
    cluster_args.add_argument(
        "--multi-node",
        action="store_true",
        dest="multi_node",
        help="Runs a multi-node cluster on the hosts specified by --hosts. "
        "Requires the ``asyncssh`` module to be installed.",
    )
    cluster_args.add_argument(
        "--hosts",
        default=None,
        type=str,
        help="Specifies a comma-separated list of IP addresses or hostnames. "
        "The list begins with the host where the scheduler will be launched "
        "followed by any number of workers, with a minimum of 1 worker. "
        "Requires --multi-node, ignored otherwise. "
        "Usage example: --multi-node --hosts 'dgx12,dgx12,10.10.10.10,dgx13' . "
        "In the example, the benchmark is launched with scheduler on host "
        "'dgx12' (first in the list), and workers on three hosts being 'dgx12', "
        "'10.10.10.10', and 'dgx13'. "
        "Note: --devs is currently ignored in multi-node mode and for each host "
        "one worker per GPU will be launched.",
    )
    parser.add_argument(
        "--no-show-p2p-bandwidth",
        action="store_true",
        help="Do not produce detailed point to point bandwidth stats in output",
    )
    parser.add_argument(
        "--all-to-all",
        action="store_true",
        help="Run all-to-all before computation",
    )
    parser.add_argument(
        "--no-silence-logs",
        action="store_true",
        help="By default Dask logs are silenced, this argument unsilences them.",
    )
    parser.add_argument(
        "--plot",
        metavar="PATH",
        default=None,
        type=str,
        help="Generate plot output written to defined directory",
    )
    parser.add_argument(
        "--markdown",
        default=False,
        action="store_true",
        help="Write output as markdown",
    )
    parser.add_argument(
        "--profile",
        metavar="PATH",
        default=None,
        type=str,
        help="Write dask profile report (E.g. dask-report.html)",
    )
    # See save_benchmark_data for more information
    parser.add_argument(
        "--output-basename",
        default=None,
        type=str,
        help="Dump benchmark data to files using this basename. "
        "Produces three files, BASENAME.json (containing timing data); "
        "BASENAME.npy (point to point bandwidth statistics); "
        "BASENAME.address_map.json (mapping from worker addresses to indices). "
        "If the files already exist, new files are created with a uniquified "
        "BASENAME.",
    )

    for args in args_list:
        name = args.pop("name")
        if not isinstance(name, list):
            name = [name]
        parser.add_argument(*name, **args)

    args = parser.parse_args()

    if args.multi_node and len(args.hosts.split(",")) < 2:
        raise ValueError("--multi-node requires at least 2 hosts")

    return args


def get_cluster_options(args):
    ucx_options = {
        "enable_tcp_over_ucx": args.enable_tcp_over_ucx,
        "enable_infiniband": args.enable_infiniband,
        "enable_nvlink": args.enable_nvlink,
        "enable_rdmacm": args.enable_rdmacm,
    }
    if args.multi_node is True:
        Cluster = SSHCluster
        cluster_args = [args.hosts.split(",")]
        scheduler_addr = args.protocol + "://" + cluster_args[0][0] + ":8786"
        cluster_kwargs = {
            "connect_options": {"known_hosts": None},
            "scheduler_options": {
                "protocol": args.protocol,
                "port": 8786,
                "dashboard_address": args.dashboard_address,
            },
            "worker_class": "dask_cuda.CUDAWorker",
            "worker_options": {
                "protocol": args.protocol,
                "nthreads": args.threads_per_worker,
                "interface": args.interface,
                "device_memory_limit": args.device_memory_limit,
            },
            # "n_workers": len(args.devs.split(",")),
            # "CUDA_VISIBLE_DEVICES": args.devs,
        }
    else:
        Cluster = LocalCUDACluster
        scheduler_addr = None
        cluster_args = []
        cluster_kwargs = {
            "protocol": args.protocol,
            "dashboard_address": args.dashboard_address,
            "n_workers": len(args.devs.split(",")),
            "threads_per_worker": args.threads_per_worker,
            "CUDA_VISIBLE_DEVICES": args.devs,
            "interface": args.interface,
            "device_memory_limit": args.device_memory_limit,
            **ucx_options,
        }
        if args.no_silence_logs:
            cluster_kwargs["silence_logs"] = False

    return {
        "class": Cluster,
        "args": cluster_args,
        "kwargs": cluster_kwargs,
        "scheduler_addr": scheduler_addr,
    }


def get_worker_device():
    try:
        device, *_ = os.environ["CUDA_VISIBLE_DEVICES"].split(",")
        return int(device)
    except (KeyError, ValueError):
        # No CUDA_VISIBLE_DEVICES in environment, or else no appropriate value
        return -1


def setup_memory_pool(
    dask_worker=None,
    pool_size=None,
    disable_pool=False,
    rmm_async=False,
    rmm_managed=False,
    release_threshold=None,
    log_directory=None,
    statistics=False,
    rmm_track_allocations=False,
):
    import cupy

    import rmm
    from rmm.allocators.cupy import rmm_cupy_allocator

    from dask_cuda.utils import get_rmm_log_file_name

    logging = log_directory is not None

    if rmm_async:
        rmm.mr.set_current_device_resource(
            rmm.mr.CudaAsyncMemoryResource(
                initial_pool_size=pool_size, release_threshold=release_threshold
            )
        )
    else:
        rmm.reinitialize(
            pool_allocator=not disable_pool,
            managed_memory=rmm_managed,
            initial_pool_size=pool_size,
            logging=logging,
            log_file_name=get_rmm_log_file_name(dask_worker, logging, log_directory),
        )
    cupy.cuda.set_allocator(rmm_cupy_allocator)
    if statistics:
        rmm.mr.set_current_device_resource(
            rmm.mr.StatisticsResourceAdaptor(rmm.mr.get_current_device_resource())
        )
    if rmm_track_allocations:
        rmm.mr.set_current_device_resource(
            rmm.mr.TrackingResourceAdaptor(rmm.mr.get_current_device_resource())
        )


def setup_memory_pools(
    client,
    is_gpu,
    pool_size,
    disable_pool,
    rmm_async,
    rmm_managed,
    release_threshold,
    log_directory,
    statistics,
    rmm_track_allocations,
):
    if not is_gpu:
        return
    client.run(
        setup_memory_pool,
        pool_size=pool_size,
        disable_pool=disable_pool,
        rmm_async=rmm_async,
        rmm_managed=rmm_managed,
        release_threshold=release_threshold,
        log_directory=log_directory,
        statistics=statistics,
        rmm_track_allocations=rmm_track_allocations,
    )
    # Create an RMM pool on the scheduler due to occasional deserialization
    # of CUDA objects. May cause issues with InfiniBand otherwise.
    client.run_on_scheduler(
        setup_memory_pool,
        pool_size=1e9,
        disable_pool=disable_pool,
        rmm_async=rmm_async,
        rmm_managed=rmm_managed,
        release_threshold=release_threshold,
        log_directory=log_directory,
        statistics=statistics,
        rmm_track_allocations=rmm_track_allocations,
    )


def save_benchmark_data(
    basename,
    address_to_index: Mapping[str, int],
    timing_data: pd.DataFrame,
    p2p_data: np.ndarray,
):
    """Save benchmark data to files

    Parameters
    ----------
    basename: str
        Output file basename
    address_to_index
        Mapping from worker addresses to indices (in the p2p_data array)
    timing_data
        DataFrame containing timing and configuration data
    p2p_data
        numpy array of point to point bandwidth statistics

    Notes
    -----
    Produces ``BASENAME.json``, ``BASENAME.npy``,
    ``BASENAME.address_map.json``. If any of these files exist then
    ``basename`` is uniquified by appending the ISO date and a sequence
    number.
    """

    def exists(basename):
        return any(
            os.path.exists(f"{basename}{ext}")
            for ext in [".json", ".npy", ".address_map.json"]
        )

    new_basename = basename
    sequence = itertools.count()
    while exists(new_basename):
        now = datetime.now().strftime("%Y%m%d")
        new_basename = f"{basename}-{now}.{next(sequence)}"

    timing_data.to_json(f"{new_basename}.json")
    np.save(f"{new_basename}.npy", p2p_data)
    with open(f"{new_basename}.address_map.json", "w") as f:
        f.write(json.dumps(address_to_index))


def wait_for_cluster(client, timeout=120, shutdown_on_failure=True):
    """Wait for the cluster to come up.

    Parameters
    ----------
    client
        The distributed Client object
    timeout: int (optional)
        Timeout in seconds before we give up
    shutdown_on_failure: bool (optional)
        Should we call ``client.shutdown()`` if not all workers are found
        after the timeout is reached?

    Raises
    ------
    RuntimeError:
        If the timeout finishes and not all expected workers have appeared.
    """
    expected = os.environ.get("EXPECTED_NUM_WORKERS")
    if expected is None:
        return
    expected = int(expected)
    nworkers = 0
    for _ in range(timeout // 5):
        print(
            "Waiting for workers to come up, "
            f"have {len(client.scheduler_info().get('workers', []))}, "
            f"want {expected}"
        )
        time.sleep(5)
        nworkers = len(client.scheduler_info().get("workers", []))
        if nworkers == expected:
            return
    else:
        if shutdown_on_failure:
            client.shutdown()
        raise RuntimeError(
            f"Not all workers up after {timeout}s; "
            f"got {nworkers}, wanted {expected}"
        )


def address_to_index(client: Client) -> Mapping[str, int]:
    """Produce a mapping from worker addresses to unique indices

    Parameters
    ----------
    client: Client
        distributed client

    Returns
    -------
    Mapping from worker addresses to int, with workers on the same host
    numbered contiguously, and sorted by device index on each host.
    """
    # Group workers by hostname and then device index
    addresses = client.run(get_worker_device)
    return dict(
        zip(
            sorted(addresses, key=lambda k: (get_address_host(k), addresses[k])),
            itertools.count(),
        )
    )


def plot_benchmark(t_runs, path, historical=False):
    """
    Plot the throughput of the benchmark for each run.

    If historical=True, load historical data from
    ~/benchmark-historic-runs.csv
    """
    try:
        import pandas as pd
        import seaborn as sns
    except ImportError:
        print(
            "Plotting libraries are not installed. Please install pandas, "
            "seaborn, and matplotlib"
        )
        return

    x = [str(x) for x in range(len(t_runs))]
    df = pd.DataFrame(dict(t_runs=t_runs, x=x))
    avg = round(df.t_runs.mean(), 2)

    ax = sns.barplot(x="x", y="t_runs", data=df, color="purple")
    ax.set(
        xlabel="Run Iteration",
        ylabel="Merge Throughput in GB/s",
        title=f"cudf Merge Throughput -- Average {avg} GB/s",
    )
    fig = ax.get_figure()
    today = datetime.now().strftime("%Y%m%d")
    fname_bench = today + "-benchmark.png"
    d = os.path.expanduser(path)
    bench_path = os.path.join(d, fname_bench)
    fig.savefig(bench_path)

    if historical:
        # record average throughput and plot historical averages
        history_file = os.path.join(
            os.path.expanduser("~"), "benchmark-historic-runs.csv"
        )
        with open(history_file, "a+") as f:
            f.write(f"{today},{avg}\n")

        df = pd.read_csv(
            history_file, names=["date", "throughput"], parse_dates=["date"]
        )
        ax = df.plot(
            x="date", y="throughput", marker="o", title="Historical Throughput"
        )
        ax.set_ylim(0, 30)
        fig = ax.get_figure()
        fname_hist = today + "-benchmark-history.png"
        hist_path = os.path.join(d, fname_hist)
        fig.savefig(hist_path)


def print_separator(separator="-", length=80):
    print(separator * length)


def print_key_value(key, value, key_length=25):
    print(f"{key: <{key_length}} | {value}")


def print_throughput_bandwidth(
    args, durations, data_processed, p2p_bw, address_to_index
):
    print_key_value(key="Number of workers", value=f"{len(address_to_index)}")
    print_separator(separator="=")
    print_key_value(key="Wall clock", value="Throughput")
    print_separator(separator="-")
    durations = np.asarray(durations)
    data_processed = np.asarray(data_processed)
    throughputs = data_processed / durations
    for duration, throughput in zip(durations, throughputs):
        print_key_value(
            key=f"{format_time(duration)}", value=f"{format_bytes(throughput)}/s"
        )
    print_separator(separator="=")
    print_key_value(
        key="Throughput",
        value=f"{format_bytes(hmean(throughputs))}/s "
        f"+/- {format_bytes(hstd(throughputs))}/s",
    )
    bandwidth_hmean = p2p_bw[..., BandwidthStats._fields.index("hmean")].reshape(-1)
    bandwidths_all = bandwidth_hmean[bandwidth_hmean > 0]
    print_key_value(
        key="Bandwidth",
        value=f"{format_bytes(hmean(bandwidths_all))}/s +/- "
        f"{format_bytes(hstd(bandwidths_all))}/s",
    )
    print_key_value(
        key="Wall clock",
        value=f"{format_time(durations.mean())} +/- {format_time(durations.std())}",
    )
    if not args.no_show_p2p_bandwidth:
        print_separator(separator="=")
        if args.markdown:
            print("<details>\n<summary>Worker-Worker Transfer Rates</summary>\n\n```")
        print_key_value(key="(w1,w2)", value="25% 50% 75% (total nbytes)")
        print_separator(separator="-")
        for (source, dest) in np.ndindex(p2p_bw.shape[:2]):
            bw = BandwidthStats(*p2p_bw[source, dest, ...])
            if bw.total_bytes > 0:
                print_key_value(
                    key=f"({source},{dest})",
                    value=f"{format_bytes(bw.q25)}/s {format_bytes(bw.q50)}/s "
                    f"{format_bytes(bw.q75)}/s ({format_bytes(bw.total_bytes)})",
                )
        print_separator(separator="=")
        print_key_value(key="Worker index", value="Worker address")
        print_separator(separator="-")
        for address, index in sorted(address_to_index.items(), key=itemgetter(1)):
            print_key_value(key=index, value=address)
        print_separator(separator="=")
        if args.markdown:
            print("```\n</details>\n")
    if args.plot:
        plot_benchmark(throughputs, args.plot, historical=True)


class BandwidthStats(NamedTuple):
    hmean: float
    hstd: float
    q25: float
    q50: float
    q75: float
    min: float
    max: float
    median: float
    total_bytes: int


def bandwidth_statistics(
    logs, ignore_size: Optional[int] = None
) -> Mapping[str, BandwidthStats]:
    """Return bandwidth statistics from logs on a single worker.

    Parameters
    ----------
    logs:
        the ``dask_worker.incoming_transfer_log`` object
    ignore_size: int (optional)
        ignore messages whose total byte count is smaller than this
        value (if provided)

    Returns
    -------
    dict
        mapping worker names to a :class:`BandwidthStats` object
        summarising incoming messages (bandwidth and total bytes)
    """
    bandwidth = defaultdict(list)
    total_nbytes = defaultdict(int)

    for data in logs:
        if ignore_size is None or data["total"] >= ignore_size:
            bandwidth[data["who"]].append(data["bandwidth"])
            total_nbytes[data["who"]] += data["total"]

    aggregate = {}
    for address, data in bandwidth.items():
        data = np.asarray(data)
        q25, q50, q75 = np.quantile(data, [0.25, 0.50, 0.75])
        aggregate[address] = BandwidthStats(
            hmean=hmean(data),
            hstd=hstd(data),
            q25=q25,
            q50=q50,
            q75=q75,
            min=np.min(data),
            max=np.max(data),
            median=np.median(data),
            total_bytes=total_nbytes[address],
        )
    return aggregate


def aggregate_transfer_log_data(
    aggregator: Callable[[Any, Optional[int]], Any], ignore_size=None, dask_worker=None
) -> Tuple[Mapping[str, Any], Mapping[str, Any]]:
    """Aggregate ``dask_worker.incoming_transfer_log`` on a single worker

    Parameters
    ----------
    aggregator: callable
        Function to massage raw data into aggregate form
    ignore_size: int, optional
        ignore contributions of a log entry to the aggregate data if the
        message was less than this many bytes in size (if not provided,
        then keep all messages).
    dask_worker:
        The dask ``Worker`` object.
    """
    return aggregator(dask_worker.incoming_transfer_log, ignore_size=ignore_size)


def peer_to_peer_bandwidths(
    aggregate_bandwidth_data: Mapping[str, Mapping[str, BandwidthStats]],
    address_to_index: Mapping[str, int],
) -> np.ndarray:
    """Flatten collective aggregated bandwidth data

    Parameters
    ----------
    aggregate_bandwidth_data
        Dict mapping worker addresses to per-worker bandwidth data
    address_to_index
        Mapping from worker addresses to indices

    Returns
    -------
    numpy array of shape ``(nworker, nworker, nstats)`` containing the
    bandwidth statistics between each pair of workers.
    """
    nworker = len(aggregate_bandwidth_data)
    data = np.zeros((nworker, nworker, len(BandwidthStats._fields)), dtype=np.float32)
    for w1, per_worker in aggregate_bandwidth_data.items():
        for w2, stats in per_worker.items():
            # This loses type information on each entry, but we just
            # need indexing information which we can obtain from the
            # BandwidthStats._fields slot.
            data[address_to_index[w1], address_to_index[w2], :] = stats
    return data


def hmean(a):
    """Harmonic mean"""
    if len(a):
        return 1 / np.mean(1 / a)
    else:
        return 0


def hstd(a):
    """Harmonic standard deviation"""
    if len(a):
        rmean = np.mean(1 / a)
        rvar = np.var(1 / a)
        return np.sqrt(rvar / (len(a) * rmean**4))
    else:
        return 0
rapidsai_public_repos/dask-cuda/examples/ucx/dask_cuda_worker.sh
#!/bin/bash

usage() {
    echo "usage: $0 [-i <interface>] [-r <rmm_pool_size>] [-t <transports>]" >&2
    exit 1
}

# parse arguments
rmm_pool_size=1GB
while getopts ":i:r:t:" flag; do
    case "${flag}" in
        i) interface=${OPTARG};;
        r) rmm_pool_size=${OPTARG};;
        t) transport=${OPTARG};;
        *) usage;;
    esac
done

if [ -z ${interface+x} ] && ! [ -z ${transport+x} ]; then
    echo "$0: interface must be specified with -i if NVLink or InfiniBand are enabled" >&2
    exit 1
fi

# set up environment variables/flags; export them so that the dask
# scheduler and worker child processes inherit them
export DASK_DISTRIBUTED__COMM__UCX__CUDA_COPY=True
export DASK_DISTRIBUTED__COMM__UCX__TCP=True
export DASK_DISTRIBUTED__RMM__POOL_SIZE=$rmm_pool_size
scheduler_flags="--scheduler-file scheduler.json --protocol ucx"
worker_flags="--scheduler-file scheduler.json --enable-tcp-over-ucx --rmm-pool-size ${rmm_pool_size}"
if ! [ -z ${interface+x} ]; then
    scheduler_flags+=" --interface ${interface}"
fi
if [[ $transport == *"nvlink"* ]]; then
    export DASK_DISTRIBUTED__COMM__UCX__NVLINK=True
    worker_flags+=" --enable-nvlink"
fi
if [[ $transport == *"ib"* ]]; then
    export DASK_DISTRIBUTED__COMM__UCX__INFINIBAND=True
    export DASK_DISTRIBUTED__COMM__UCX__RDMACM=True
    worker_flags+=" --enable-infiniband --enable-rdmacm"
fi

# initialize scheduler
dask scheduler $scheduler_flags &

# initialize workers
dask cuda worker $worker_flags
rapidsai_public_repos/dask-cuda/examples/ucx/local_cuda_cluster.py
import click
import cupy
from dask import array as da
from dask.distributed import Client
from dask.utils import parse_bytes

from dask_cuda import LocalCUDACluster


@click.command(context_settings=dict(ignore_unknown_options=True))
@click.option(
    "--enable-nvlink/--disable-nvlink",
    default=False,
    help="Enable NVLink communication",
)
@click.option(
    "--enable-infiniband/--disable-infiniband",
    default=False,
    help="Enable InfiniBand communication with RDMA",
)
@click.option(
    "--enable-rdmacm/--disable-rdmacm",
    default=False,
    help="Enable RDMA connection manager, requires --enable-infiniband",
)
@click.option(
    "--interface",
    default=None,
    type=str,
    help="Interface used by scheduler for communication. Must be "
    "specified if NVLink or InfiniBand are enabled.",
)
@click.option(
    "--rmm-pool-size",
    default="1GB",
    type=parse_bytes,
    help="If specified, initialize each worker with an RMM pool of "
    "the given size, otherwise no RMM pool is created. This can be "
    "an integer (bytes) or string (like 5GB or 5000M).",
)
def main(
    enable_nvlink,
    enable_infiniband,
    enable_rdmacm,
    interface,
    rmm_pool_size,
):
    if (enable_infiniband or enable_nvlink) and not interface:
        raise ValueError(
            "Interface must be specified if NVLink or InfiniBand are enabled"
        )

    # initialize scheduler & workers
    cluster = LocalCUDACluster(
        enable_tcp_over_ucx=True,
        enable_nvlink=enable_nvlink,
        enable_infiniband=enable_infiniband,
        enable_rdmacm=enable_rdmacm,
        interface=interface,
        rmm_pool_size=rmm_pool_size,
    )

    # initialize client
    client = Client(cluster)

    # user code here
    rs = da.random.RandomState(RandomState=cupy.random.RandomState)
    x = rs.random((10000, 10000), chunks=1000)
    x.sum().compute()

    # shutdown cluster
    client.shutdown()


if __name__ == "__main__":
    main()
rapidsai_public_repos/dask-cuda/examples/ucx/client_initialize.py
import click
import cupy
from dask import array as da
from dask.distributed import Client

from dask_cuda.initialize import initialize


@click.command(context_settings=dict(ignore_unknown_options=True))
@click.argument(
    "address",
    required=True,
    type=str,
)
@click.option(
    "--enable-nvlink/--disable-nvlink",
    default=False,
    help="Enable NVLink communication",
)
@click.option(
    "--enable-infiniband/--disable-infiniband",
    default=False,
    help="Enable InfiniBand communication with RDMA",
)
@click.option(
    "--enable-rdmacm/--disable-rdmacm",
    default=False,
    help="Enable RDMA connection manager, requires --enable-infiniband",
)
def main(
    address,
    enable_nvlink,
    enable_infiniband,
    enable_rdmacm,
):
    # set up environment
    initialize(
        enable_tcp_over_ucx=True,
        enable_nvlink=enable_nvlink,
        enable_infiniband=enable_infiniband,
        enable_rdmacm=enable_rdmacm,
    )

    # initialize client
    client = Client(address)

    # user code here
    rs = da.random.RandomState(RandomState=cupy.random.RandomState)
    x = rs.random((10000, 10000), chunks=1000)
    x.sum().compute()

    # shutdown cluster
    client.shutdown()


if __name__ == "__main__":
    main()
rapidsai_public_repos/cuml/.pre-commit-config.yaml
---
# Copyright (c) 2023, NVIDIA CORPORATION.
repos:
  - repo: https://github.com/psf/black
    rev: 22.10.0
    hooks:
      - id: black
        files: python/.*
        args: [--config, python/pyproject.toml]
  - repo: https://github.com/PyCQA/flake8
    rev: 5.0.4
    hooks:
      - id: flake8
        args: [--config=python/.flake8]
        files: python/.*$
        types: [file]
        types_or: [python, cython]
        exclude: thirdparty
        additional_dependencies: [flake8-force]
  - repo: https://github.com/MarcoGorelli/cython-lint
    rev: v0.15.0
    hooks:
      - id: cython-lint
  - repo: https://github.com/pre-commit/mirrors-clang-format
    rev: v16.0.6
    hooks:
      - id: clang-format
        types_or: [c, c++, cuda]
        args: ["-fallback-style=none", "-style=file", "-i"]
  - repo: https://github.com/codespell-project/codespell
    rev: v2.2.2
    hooks:
      - id: codespell
        additional_dependencies: [tomli]
        args: ["--toml", "pyproject.toml"]
        exclude: (?x)^(.*stemmer.*|.*stop_words.*|^CHANGELOG.md$)
  - repo: local
    hooks:
      - id: no-deprecationwarning
        name: no-deprecationwarning
        description: 'Enforce that DeprecationWarning is not introduced (use FutureWarning instead)'
        entry: '(category=|\s)DeprecationWarning[,)]'
        language: pygrep
        types_or: [python, cython]
      - id: copyright-check
        name: copyright-check
        entry: python ./ci/checks/copyright.py --fix-in-place
        language: python
        pass_filenames: true
        additional_dependencies: [gitpython]
      - id: include-check
        name: include-check
        entry: python cpp/scripts/include_checker.py
        args:
          - cpp/bench
          - cpp/comms/mpi/include
          - cpp/comms/mpi/src
          - cpp/comms/std/include
          - cpp/comms/std/src
          - cpp/include
          - cpp/examples
          - cpp/src
          - cpp/src_prims
          - cpp/test
        pass_filenames: false
        language: python
  - repo: https://github.com/rapidsai/dependency-file-generator
    rev: v1.5.1
    hooks:
      - id: rapids-dependency-file-generator
        args: ["--clean"]

default_language_version:
  python: python3
rapidsai_public_repos/cuml/pyproject.toml
[tool.codespell]
# note: pre-commit passes explicit lists of files here, which this skip file
# list doesn't override - this is only to allow you to run codespell
# interactively
skip = "./.git,./.github,./cpp/build,.*egg-info.*,./.mypy_cache,.*_skbuild,CHANGELOG.md,_stop_words.py,*stemmer.*"
# ignore short words, and typename parameters like OffsetT
ignore-regex = "\\b(.{1,4}|[A-Z]\\w*T)\\b"
ignore-words-list = "inout,numer,startd,couldn,referr"
# use the 'clear' dictionary for unambiguous spelling mistakes
builtin = "clear"
# disable warnings about binary files and wrong encoding
quiet-level = 3

[tool.cython-lint]
# TODO: Re-enable E501 with a reasonable line length
max-line-length = 999
ignore = ['E501']

[tool.run-clang-tidy]
ignore = "[.]cu$|_deps|examples/kmeans/"
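The `ignore-regex` above tells codespell to skip very short tokens and CUB-style template type names such as `OffsetT`. A quick sketch of what the pattern matches, using Python's `re` module (codespell is Python-based, and the pattern uses only portable constructs):

```python
import re

# The pattern from ignore-regex above, with TOML string escaping undone:
# tokens of 1-4 characters, or capitalized identifiers ending in "T".
ignore = re.compile(r"\b(.{1,4}|[A-Z]\w*T)\b")

short_word = bool(ignore.fullmatch("inot"))    # 4 chars -> skipped
typename = bool(ignore.fullmatch("OffsetT"))   # template parameter -> skipped
real_typo = bool(ignore.fullmatch("recieve"))  # long lowercase word -> still checked
```

Words that fully match the pattern are exempt from spell checking, so a genuine long-word typo like `recieve` is still flagged.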
rapidsai_public_repos/cuml/fetch_rapids.cmake
# =============================================================================
# Copyright (c) 2022, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
# in compliance with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing permissions and limitations under
# the License.
# =============================================================================
if(NOT EXISTS ${CMAKE_CURRENT_BINARY_DIR}/CUML_RAPIDS.cmake)
  file(DOWNLOAD https://raw.githubusercontent.com/rapidsai/rapids-cmake/branch-23.12/RAPIDS.cmake
       ${CMAKE_CURRENT_BINARY_DIR}/CUML_RAPIDS.cmake
  )
endif()
include(${CMAKE_CURRENT_BINARY_DIR}/CUML_RAPIDS.cmake)
rapidsai_public_repos/cuml/README.md
# <div align="left"><img src="img/rapids_logo.png" width="90px"/>&nbsp;cuML - GPU Machine Learning Algorithms</div>

cuML is a suite of libraries that implement machine learning algorithms and mathematical primitive functions that share compatible APIs with other [RAPIDS](https://rapids.ai/) projects.

cuML enables data scientists, researchers, and software engineers to run traditional tabular ML tasks on GPUs without going into the details of CUDA programming. In most cases, cuML's Python API matches the API from [scikit-learn](https://scikit-learn.org).

For large datasets, these GPU-based implementations can complete 10-50x faster than their CPU equivalents. For details on performance, see the [cuML Benchmarks Notebook](https://github.com/rapidsai/cuml/tree/branch-23.04/notebooks/tools).

As an example, the following Python snippet loads input and computes DBSCAN clusters, all on GPU, using cuDF:
```python
import cudf
from cuml.cluster import DBSCAN

# Create and populate a GPU DataFrame
gdf_float = cudf.DataFrame()
gdf_float['0'] = [1.0, 2.0, 5.0]
gdf_float['1'] = [4.0, 2.0, 1.0]
gdf_float['2'] = [4.0, 2.0, 1.0]

# Setup and fit clusters
dbscan_float = DBSCAN(eps=1.0, min_samples=1)
dbscan_float.fit(gdf_float)

print(dbscan_float.labels_)
```

Output:
```
0    0
1    1
2    2
dtype: int32
```

cuML also features multi-GPU and multi-node-multi-GPU operation, using [Dask](https://www.dask.org), for a growing list of algorithms.
The following Python snippet reads input from a CSV file and performs a NearestNeighbors query across a cluster of Dask workers, using multiple GPUs on a single node:

Initialize a `LocalCUDACluster` configured with [UCX](https://github.com/rapidsai/ucx-py) for fast transport of CUDA arrays:
```python
# Initialize UCX for high-speed transport of CUDA arrays
from dask_cuda import LocalCUDACluster

# Create a Dask single-node CUDA cluster w/ one worker per device
cluster = LocalCUDACluster(protocol="ucx",
                           enable_tcp_over_ucx=True,
                           enable_nvlink=True,
                           enable_infiniband=False)
```

Load data and perform `k-Nearest Neighbors` search. `cuml.dask` estimators also support `Dask.Array` as input:
```python
from dask.distributed import Client
client = Client(cluster)

# Read CSV file in parallel across workers
import dask_cudf
df = dask_cudf.read_csv("/path/to/csv")

# Fit a NearestNeighbors model and query it
from cuml.dask.neighbors import NearestNeighbors
nn = NearestNeighbors(n_neighbors=10, client=client)
nn.fit(df)
neighbors = nn.kneighbors(df)
```

For additional examples, browse our complete [API documentation](https://docs.rapids.ai/api/cuml/stable/), or check out our example [walkthrough notebooks](https://github.com/rapidsai/cuml/tree/branch-23.04/notebooks). Finally, you can find complete end-to-end examples in the [notebooks-contrib repo](https://github.com/rapidsai/notebooks-contrib).
### Supported Algorithms

| Category | Algorithm | Notes |
| --- | --- | --- |
| **Clustering** | Density-Based Spatial Clustering of Applications with Noise (DBSCAN) | Multi-node multi-GPU via Dask |
| | Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN) | |
| | K-Means | Multi-node multi-GPU via Dask |
| | Single-Linkage Agglomerative Clustering | |
| **Dimensionality Reduction** | Principal Components Analysis (PCA) | Multi-node multi-GPU via Dask |
| | Incremental PCA | |
| | Truncated Singular Value Decomposition (tSVD) | Multi-node multi-GPU via Dask |
| | Uniform Manifold Approximation and Projection (UMAP) | Multi-node multi-GPU Inference via Dask |
| | Random Projection | |
| | t-Distributed Stochastic Neighbor Embedding (TSNE) | |
| **Linear Models for Regression or Classification** | Linear Regression (OLS) | Multi-node multi-GPU via Dask |
| | Linear Regression with Lasso or Ridge Regularization | Multi-node multi-GPU via Dask |
| | ElasticNet Regression | |
| | LARS Regression | (experimental) |
| | Logistic Regression | Multi-node multi-GPU via Dask-GLM [demo](https://github.com/daxiongshu/rapids-demos) |
| | Naive Bayes | Multi-node multi-GPU via Dask |
| | Stochastic Gradient Descent (SGD), Coordinate Descent (CD), and Quasi-Newton (QN) (including L-BFGS and OWL-QN) solvers for linear models | |
| **Nonlinear Models for Regression or Classification** | Random Forest (RF) Classification | Experimental multi-node multi-GPU via Dask |
| | Random Forest (RF) Regression | Experimental multi-node multi-GPU via Dask |
| | Inference for decision tree-based models | Forest Inference Library (FIL) |
| | K-Nearest Neighbors (KNN) Classification | Multi-node multi-GPU via Dask+[UCX](https://github.com/rapidsai/ucx-py), uses [Faiss](https://github.com/facebookresearch/faiss) for Nearest Neighbors Query. |
| | K-Nearest Neighbors (KNN) Regression | Multi-node multi-GPU via Dask+[UCX](https://github.com/rapidsai/ucx-py), uses [Faiss](https://github.com/facebookresearch/faiss) for Nearest Neighbors Query. |
| | Support Vector Machine Classifier (SVC) | |
| | Epsilon-Support Vector Regression (SVR) | |
| **Preprocessing** | Standardization, or mean removal and variance scaling / Normalization / Encoding categorical features / Discretization / Imputation of missing values / Polynomial features generation / and coming soon custom transformers and non-linear transformation | Based on Scikit-Learn preprocessing |
| **Time Series** | Holt-Winters Exponential Smoothing | |
| | Auto-regressive Integrated Moving Average (ARIMA) | Supports seasonality (SARIMA) |
| **Model Explanation** | SHAP Kernel Explainer | [Based on SHAP](https://shap.readthedocs.io/en/latest/) |
| | SHAP Permutation Explainer | [Based on SHAP](https://shap.readthedocs.io/en/latest/) |
| **Execution device interoperability** | | Run estimators interchangeably from host/cpu or device/gpu with minimal code change [demo](https://docs.rapids.ai/api/cuml/stable/execution_device_interoperability.html) |
| **Other** | K-Nearest Neighbors (KNN) Search | Multi-node multi-GPU via Dask+[UCX](https://github.com/rapidsai/ucx-py), uses [Faiss](https://github.com/facebookresearch/faiss) for Nearest Neighbors Query. |

---

## Installation

See [the RAPIDS Release Selector](https://docs.rapids.ai/install#selector) for the command line to install either nightly or official release cuML packages via Conda or Docker.

## Build/Install from Source

See the build [guide](BUILD.md).

## Contributing

Please see our [guide for contributing to cuML](CONTRIBUTING.md).

## References

The RAPIDS team has a number of blogs with deeper technical dives and examples.
[You can find them here on Medium.](https://medium.com/rapids-ai/tagged/machine-learning)

For additional details on the technologies behind cuML, as well as a broader overview of the Python Machine Learning landscape, see [_Machine Learning in Python: Main developments and technology trends in data science, machine learning, and artificial intelligence_ (2020)](https://arxiv.org/abs/2002.04803) by Sebastian Raschka, Joshua Patterson, and Corey Nolet.

Please consider citing this when using cuML in a project. You can use the citation BibTeX:

```bibtex
@article{raschka2020machine,
  title={Machine Learning in Python: Main developments and technology trends in data science, machine learning, and artificial intelligence},
  author={Raschka, Sebastian and Patterson, Joshua and Nolet, Corey},
  journal={arXiv preprint arXiv:2002.04803},
  year={2020}
}
```

## Contact

Find out more details on the [RAPIDS site](https://rapids.ai/community.html)

## <div align="left"><img src="img/rapids_logo.png" width="265px"/></div> Open GPU Data Science

The RAPIDS suite of open source software libraries aims to enable execution of end-to-end data science and analytics pipelines entirely on GPUs. It relies on NVIDIA® CUDA® primitives for low-level compute optimization, but exposes that GPU parallelism and high-bandwidth memory speed through user-friendly Python interfaces.

<p align="center"><img src="img/rapids_arrow.png" width="80%"/></p>
rapidsai_public_repos/cuml/CHANGELOG.md
# cuML 23.10.00 (11 Oct 2023)

## 🚨 Breaking Changes

- add sample_weight parameter to dbscan.fit ([#5574](https://github.com/rapidsai/cuml/pull/5574)) [@mfoerste4](https://github.com/mfoerste4)
- Update to Cython 3.0.0 ([#5506](https://github.com/rapidsai/cuml/pull/5506)) [@vyasr](https://github.com/vyasr)

## 🐛 Bug Fixes

- Fix accidental unsafe cupy import ([#5613](https://github.com/rapidsai/cuml/pull/5613)) [@dantegd](https://github.com/dantegd)
- Fixes for CPU package ([#5599](https://github.com/rapidsai/cuml/pull/5599)) [@dantegd](https://github.com/dantegd)
- Fixes for timeouts in tests ([#5598](https://github.com/rapidsai/cuml/pull/5598)) [@dantegd](https://github.com/dantegd)

## 🚀 New Features

- Enable cuml-cpu nightly ([#5585](https://github.com/rapidsai/cuml/pull/5585)) [@dantegd](https://github.com/dantegd)
- add sample_weight parameter to dbscan.fit ([#5574](https://github.com/rapidsai/cuml/pull/5574)) [@mfoerste4](https://github.com/mfoerste4)

## 🛠️ Improvements

- cuml-cpu notebook, docs and cluster models ([#5597](https://github.com/rapidsai/cuml/pull/5597)) [@dantegd](https://github.com/dantegd)
- Pin `dask` and `distributed` for `23.10` release ([#5592](https://github.com/rapidsai/cuml/pull/5592)) [@galipremsagar](https://github.com/galipremsagar)
- Add changes for early experimental support for dataframe interchange protocol API ([#5591](https://github.com/rapidsai/cuml/pull/5591)) [@dantegd](https://github.com/dantegd)
- [FEA] Support L1 regularization and ElasticNet in MNMG Dask LogisticRegression ([#5587](https://github.com/rapidsai/cuml/pull/5587)) [@lijinf2](https://github.com/lijinf2)
- Update image names ([#5586](https://github.com/rapidsai/cuml/pull/5586)) [@AyodeAwe](https://github.com/AyodeAwe)
- Update to clang 16.0.6. ([#5583](https://github.com/rapidsai/cuml/pull/5583)) [@bdice](https://github.com/bdice)
- Upgrade to Treelite 3.9.1 ([#5581](https://github.com/rapidsai/cuml/pull/5581)) [@hcho3](https://github.com/hcho3)
- Update to doxygen 1.9.1. ([#5580](https://github.com/rapidsai/cuml/pull/5580)) [@bdice](https://github.com/bdice)
- [REVIEW] Adding a few of datasets for benchmarking ([#5573](https://github.com/rapidsai/cuml/pull/5573)) [@vinaydes](https://github.com/vinaydes)
- Allow cuML MNMG estimators to be serialized ([#5571](https://github.com/rapidsai/cuml/pull/5571)) [@viclafargue](https://github.com/viclafargue)
- [FEA] Support multiple classes in multi-node-multi-gpu logistic regression, from C++, Cython, to Dask Python class ([#5565](https://github.com/rapidsai/cuml/pull/5565)) [@lijinf2](https://github.com/lijinf2)
- Use `copy-pr-bot` ([#5563](https://github.com/rapidsai/cuml/pull/5563)) [@ajschmidt8](https://github.com/ajschmidt8)
- Unblock CI for branch-23.10 ([#5561](https://github.com/rapidsai/cuml/pull/5561)) [@csadorf](https://github.com/csadorf)
- Fix CPU-only build for new FIL ([#5559](https://github.com/rapidsai/cuml/pull/5559)) [@hcho3](https://github.com/hcho3)
- [FEA] Support no regularization in MNMG LogisticRegression ([#5558](https://github.com/rapidsai/cuml/pull/5558)) [@lijinf2](https://github.com/lijinf2)
- Unpin `dask` and `distributed` for `23.10` development ([#5557](https://github.com/rapidsai/cuml/pull/5557)) [@galipremsagar](https://github.com/galipremsagar)
- Branch 23.10 merge 23.08 ([#5547](https://github.com/rapidsai/cuml/pull/5547)) [@vyasr](https://github.com/vyasr)
- Use Python builtins to prep benchmark `tmp_dir` ([#5537](https://github.com/rapidsai/cuml/pull/5537)) [@jakirkham](https://github.com/jakirkham)
- Branch 23.10 merge 23.08 ([#5522](https://github.com/rapidsai/cuml/pull/5522)) [@vyasr](https://github.com/vyasr)
- Update to Cython 3.0.0 ([#5506](https://github.com/rapidsai/cuml/pull/5506)) [@vyasr](https://github.com/vyasr)

# cuML 23.08.00 (9 Aug 2023)

## 🚨 Breaking Changes

- Stop using setup.py in build.sh ([#5500](https://github.com/rapidsai/cuml/pull/5500)) [@vyasr](https://github.com/vyasr)
- Add `copy_X` parameter to `LinearRegression` ([#5495](https://github.com/rapidsai/cuml/pull/5495)) [@viclafargue](https://github.com/viclafargue)

## 🐛 Bug Fixes

- Update dependencies.yaml test_notebooks to include dask_ml ([#5545](https://github.com/rapidsai/cuml/pull/5545)) [@taureandyernv](https://github.com/taureandyernv)
- Fix cython-lint issues. ([#5536](https://github.com/rapidsai/cuml/pull/5536)) [@bdice](https://github.com/bdice)
- Skip rf_memleak tests ([#5529](https://github.com/rapidsai/cuml/pull/5529)) [@dantegd](https://github.com/dantegd)
- Pin hdbscan to fix pytests in CI ([#5515](https://github.com/rapidsai/cuml/pull/5515)) [@dantegd](https://github.com/dantegd)
- Fix UMAP and simplicial set functions metric ([#5490](https://github.com/rapidsai/cuml/pull/5490)) [@viclafargue](https://github.com/viclafargue)
- Fix test_masked_column_mode ([#5480](https://github.com/rapidsai/cuml/pull/5480)) [@viclafargue](https://github.com/viclafargue)
- Use fit_predict rather than fit for KNeighborsClassifier and KNeighborsRegressor in benchmark utility ([#5460](https://github.com/rapidsai/cuml/pull/5460)) [@beckernick](https://github.com/beckernick)
- Modify HDBSCAN membership_vector batch_size check ([#5455](https://github.com/rapidsai/cuml/pull/5455)) [@tarang-jain](https://github.com/tarang-jain)

## 🚀 New Features

- Use rapids-cmake testing to run tests in parallel ([#5487](https://github.com/rapidsai/cuml/pull/5487)) [@robertmaynard](https://github.com/robertmaynard)
- [FEA] Update MST Reduction Op ([#5386](https://github.com/rapidsai/cuml/pull/5386)) [@tarang-jain](https://github.com/tarang-jain)
- cuml: Build CUDA 12 packages ([#5318](https://github.com/rapidsai/cuml/pull/5318)) [@vyasr](https://github.com/vyasr)
- CI: Add custom GitHub Actions job to run clang-tidy ([#5235](https://github.com/rapidsai/cuml/pull/5235)) [@csadorf](https://github.com/csadorf)

## 🛠️ Improvements

- Pin `dask` and `distributed` for `23.08` release ([#5541](https://github.com/rapidsai/cuml/pull/5541)) [@galipremsagar](https://github.com/galipremsagar)
- Remove Dockerfile. ([#5534](https://github.com/rapidsai/cuml/pull/5534)) [@bdice](https://github.com/bdice)
- Improve temporary directory handling in cuML ([#5527](https://github.com/rapidsai/cuml/pull/5527)) [@jakirkham](https://github.com/jakirkham)
- Support init arguments in MNMG LogisticRegression ([#5519](https://github.com/rapidsai/cuml/pull/5519)) [@lijinf2](https://github.com/lijinf2)
- Support predict in MNMG Logistic Regression ([#5516](https://github.com/rapidsai/cuml/pull/5516)) [@lijinf2](https://github.com/lijinf2)
- Remove unused matrix.cuh and math.cuh headers to eliminate deprecation warnings. ([#5513](https://github.com/rapidsai/cuml/pull/5513)) [@bdice](https://github.com/bdice)
- Update gputreeshap to use rapids-cmake. ([#5512](https://github.com/rapidsai/cuml/pull/5512)) [@bdice](https://github.com/bdice)
- Remove raft specializations includes. ([#5509](https://github.com/rapidsai/cuml/pull/5509)) [@bdice](https://github.com/bdice)
- Revert CUDA 12.0 CI workflows to branch-23.08. ([#5508](https://github.com/rapidsai/cuml/pull/5508)) [@bdice](https://github.com/bdice)
- Enable wheels CI scripts to run locally ([#5507](https://github.com/rapidsai/cuml/pull/5507)) [@divyegala](https://github.com/divyegala)
- Default to nproc for PARALLEL_LEVEL in build.sh. ([#5505](https://github.com/rapidsai/cuml/pull/5505)) [@csadorf](https://github.com/csadorf)
- Fixed potential overflows in SVM, minor adjustments to nvtx ranges ([#5504](https://github.com/rapidsai/cuml/pull/5504)) [@mfoerste4](https://github.com/mfoerste4)
- Stop using setup.py in build.sh ([#5500](https://github.com/rapidsai/cuml/pull/5500)) [@vyasr](https://github.com/vyasr)
- Fix PCA test ([#5498](https://github.com/rapidsai/cuml/pull/5498)) [@viclafargue](https://github.com/viclafargue)
- Update build dependencies ([#5496](https://github.com/rapidsai/cuml/pull/5496)) [@csadorf](https://github.com/csadorf)
- Add `copy_X` parameter to `LinearRegression` ([#5495](https://github.com/rapidsai/cuml/pull/5495)) [@viclafargue](https://github.com/viclafargue)
- Sparse pca patch ([#5493](https://github.com/rapidsai/cuml/pull/5493)) [@Intron7](https://github.com/Intron7)
- Restrict HDBSCAN metric options to L2 #5415 ([#5492](https://github.com/rapidsai/cuml/pull/5492)) [@Rvch7](https://github.com/Rvch7)
- Fix typos. ([#5481](https://github.com/rapidsai/cuml/pull/5481)) [@bdice](https://github.com/bdice)
- Add multi-node-multi-gpu Logistic Regression in C++ ([#5477](https://github.com/rapidsai/cuml/pull/5477)) [@lijinf2](https://github.com/lijinf2)
- Add missing stream argument to cub calls in workingset ([#5476](https://github.com/rapidsai/cuml/pull/5476)) [@mfoerste4](https://github.com/mfoerste4)
- Update to CMake 3.26.4 ([#5464](https://github.com/rapidsai/cuml/pull/5464)) [@vyasr](https://github.com/vyasr)
- use rapids-upload-docs script ([#5457](https://github.com/rapidsai/cuml/pull/5457)) [@AyodeAwe](https://github.com/AyodeAwe)
- Unpin `dask` and `distributed` for development ([#5452](https://github.com/rapidsai/cuml/pull/5452)) [@galipremsagar](https://github.com/galipremsagar)
- Remove documentation build scripts for Jenkins ([#5450](https://github.com/rapidsai/cuml/pull/5450)) [@ajschmidt8](https://github.com/ajschmidt8)
- Fix update version and pinnings for 23.08. ([#5440](https://github.com/rapidsai/cuml/pull/5440)) [@bdice](https://github.com/bdice)
- Add cython-lint configuration. ([#5439](https://github.com/rapidsai/cuml/pull/5439)) [@bdice](https://github.com/bdice)
- Unpin scikit-build upper bound ([#5438](https://github.com/rapidsai/cuml/pull/5438)) [@vyasr](https://github.com/vyasr)
- Fix some deprecation warnings in tests. ([#5436](https://github.com/rapidsai/cuml/pull/5436)) [@bdice](https://github.com/bdice)
- Update `raft::sparse::distance::pairwise_distance` to new API ([#5428](https://github.com/rapidsai/cuml/pull/5428)) [@divyegala](https://github.com/divyegala)

# cuML 23.06.00 (7 Jun 2023)

## 🚨 Breaking Changes

- Dropping Python 3.8 ([#5385](https://github.com/rapidsai/cuml/pull/5385)) [@divyegala](https://github.com/divyegala)
- Support sparse input for SVC and SVR ([#5273](https://github.com/rapidsai/cuml/pull/5273)) [@mfoerste4](https://github.com/mfoerste4)

## 🐛 Bug Fixes

- Fixes for nightly GHA runs ([#5446](https://github.com/rapidsai/cuml/pull/5446)) [@dantegd](https://github.com/dantegd)
- Add missing RAFT cusolver_macros import and changes for recent cuDF updates ([#5434](https://github.com/rapidsai/cuml/pull/5434)) [@dantegd](https://github.com/dantegd)
- Fix kmeans pytest to correctly compute fp output error ([#5426](https://github.com/rapidsai/cuml/pull/5426)) [@mdoijade](https://github.com/mdoijade)
- Add missing `raft/matrix/matrix.cuh` include ([#5411](https://github.com/rapidsai/cuml/pull/5411)) [@benfred](https://github.com/benfred)
- Fix path to cumlprims_mg in build workflow ([#5406](https://github.com/rapidsai/cuml/pull/5406)) [@divyegala](https://github.com/divyegala)
- Fix path to cumlprims in build workflow ([#5405](https://github.com/rapidsai/cuml/pull/5405)) [@vyasr](https://github.com/vyasr)
- Pin to scikit-build<17.2 ([#5400](https://github.com/rapidsai/cuml/pull/5400)) [@vyasr](https://github.com/vyasr)
- Fix forward merge #5383 ([#5384](https://github.com/rapidsai/cuml/pull/5384)) [@dantegd](https://github.com/dantegd)
- Correct buffer move assignment in experimental FIL ([#5372](https://github.com/rapidsai/cuml/pull/5372)) [@wphicks](https://github.com/wphicks)
- Avoid invalid memory access in experimental FIL for large output size ([#5365](https://github.com/rapidsai/cuml/pull/5365)) [@wphicks](https://github.com/wphicks)
- Fix forward merge #5336 ([#5345](https://github.com/rapidsai/cuml/pull/5345)) [@dantegd](https://github.com/dantegd)

## 📖 Documentation

- Fix HDBSCAN docs and add membership_vector to cuml.cluster.hdbscan namespace ([#5378](https://github.com/rapidsai/cuml/pull/5378)) [@beckernick](https://github.com/beckernick)
- Small doc fix ([#5375](https://github.com/rapidsai/cuml/pull/5375)) [@tarang-jain](https://github.com/tarang-jain)

## 🚀 New Features

- Provide method for auto-optimization of FIL parameters ([#5368](https://github.com/rapidsai/cuml/pull/5368)) [@wphicks](https://github.com/wphicks)

## 🛠️ Improvements

- Fix documentation source code links ([#5449](https://github.com/rapidsai/cuml/pull/5449)) [@ajschmidt8](https://github.com/ajschmidt8)
- Drop seaborn dependency. ([#5437](https://github.com/rapidsai/cuml/pull/5437)) [@bdice](https://github.com/bdice)
- Make all nvtx usage go through safe imports ([#5424](https://github.com/rapidsai/cuml/pull/5424)) [@dantegd](https://github.com/dantegd)
- run docs nightly too ([#5423](https://github.com/rapidsai/cuml/pull/5423)) [@AyodeAwe](https://github.com/AyodeAwe)
- Switch back to using primary shared-action-workflows branch ([#5420](https://github.com/rapidsai/cuml/pull/5420)) [@vyasr](https://github.com/vyasr)
- Add librmm to libcuml dependencies. ([#5410](https://github.com/rapidsai/cuml/pull/5410)) [@bdice](https://github.com/bdice)
- Update recipes to GTest version >=1.13.0 ([#5408](https://github.com/rapidsai/cuml/pull/5408)) [@bdice](https://github.com/bdice)
- Remove cudf from libcuml `meta.yaml` ([#5407](https://github.com/rapidsai/cuml/pull/5407)) [@divyegala](https://github.com/divyegala)
- Support CUDA 12.0 for pip wheels ([#5404](https://github.com/rapidsai/cuml/pull/5404)) [@divyegala](https://github.com/divyegala)
- Support for gtest 1.11+ changes ([#5403](https://github.com/rapidsai/cuml/pull/5403)) [@dantegd](https://github.com/dantegd)
- Update cupy dependency ([#5401](https://github.com/rapidsai/cuml/pull/5401)) [@vyasr](https://github.com/vyasr)
- Build wheels using new single image workflow ([#5394](https://github.com/rapidsai/cuml/pull/5394)) [@vyasr](https://github.com/vyasr)
- Revert shared-action-workflows pin ([#5391](https://github.com/rapidsai/cuml/pull/5391)) [@divyegala](https://github.com/divyegala)
- Fix logic for concatenating Treelite objects ([#5387](https://github.com/rapidsai/cuml/pull/5387)) [@hcho3](https://github.com/hcho3)
- Dropping Python 3.8 ([#5385](https://github.com/rapidsai/cuml/pull/5385)) [@divyegala](https://github.com/divyegala)
- Remove usage of rapids-get-rapids-version-from-git ([#5379](https://github.com/rapidsai/cuml/pull/5379)) [@jjacobelli](https://github.com/jjacobelli)
- [ENH] Add missing includes of rmm/mr/device/per_device_resource.hpp ([#5369](https://github.com/rapidsai/cuml/pull/5369)) [@ahendriksen](https://github.com/ahendriksen)
- Remove wheel pytest verbosity ([#5367](https://github.com/rapidsai/cuml/pull/5367)) [@sevagh](https://github.com/sevagh)
- support parameter 'class_weight' and method 'decision_function' in LinearSVC ([#5364](https://github.com/rapidsai/cuml/pull/5364)) [@mfoerste4](https://github.com/mfoerste4)
- Update clang-format to 16.0.1. ([#5361](https://github.com/rapidsai/cuml/pull/5361)) [@bdice](https://github.com/bdice)
- Implement apply() in FIL ([#5358](https://github.com/rapidsai/cuml/pull/5358)) [@hcho3](https://github.com/hcho3)
- Use ARC V2 self-hosted runners for GPU jobs ([#5356](https://github.com/rapidsai/cuml/pull/5356)) [@jjacobelli](https://github.com/jjacobelli)
- Try running silhouette test ([#5353](https://github.com/rapidsai/cuml/pull/5353)) [@vyasr](https://github.com/vyasr)
- Remove uses-setup-env-vars ([#5344](https://github.com/rapidsai/cuml/pull/5344)) [@vyasr](https://github.com/vyasr)
- Resolve auto-merger conflicts between `branch-23.04` & `branch-23.06` ([#5340](https://github.com/rapidsai/cuml/pull/5340)) [@galipremsagar](https://github.com/galipremsagar)
- Solve merge conflict of PR #5327 ([#5329](https://github.com/rapidsai/cuml/pull/5329)) [@dantegd](https://github.com/dantegd)
- Branch 23.06 merge 23.04 ([#5315](https://github.com/rapidsai/cuml/pull/5315)) [@vyasr](https://github.com/vyasr)
- Support sparse input for SVC and SVR ([#5273](https://github.com/rapidsai/cuml/pull/5273)) [@mfoerste4](https://github.com/mfoerste4)
- Delete outdated versions.json. ([#5229](https://github.com/rapidsai/cuml/pull/5229)) [@bdice](https://github.com/bdice)

# cuML 23.04.00 (6 Apr 2023)

## 🚨 Breaking Changes

- Pin `dask` and `distributed` for release ([#5333](https://github.com/rapidsai/cuml/pull/5333)) [@galipremsagar](https://github.com/galipremsagar)

## 🐛 Bug Fixes

- Skip pickle notebook during nbsphinx ([#5342](https://github.com/rapidsai/cuml/pull/5342)) [@dantegd](https://github.com/dantegd)
- Avoid race condition in FIL predict_per_tree ([#5334](https://github.com/rapidsai/cuml/pull/5334)) [@wphicks](https://github.com/wphicks)
- Ensure experimental FIL shmem usage is below device limits ([#5326](https://github.com/rapidsai/cuml/pull/5326)) [@wphicks](https://github.com/wphicks)
- Update cuda architectures for threads per sm restriction ([#5323](https://github.com/rapidsai/cuml/pull/5323)) [@wphicks](https://github.com/wphicks)
- Run experimental FIL tests in CI ([#5316](https://github.com/rapidsai/cuml/pull/5316)) [@wphicks](https://github.com/wphicks)
- Run memory leak pytests without parallelism to avoid sporadic test failures ([#5313](https://github.com/rapidsai/cuml/pull/5313)) [@dantegd](https://github.com/dantegd)
- Update cupy version for pip wheels ([#5311](https://github.com/rapidsai/cuml/pull/5311)) [@dantegd](https://github.com/dantegd)
- Fix for raising attributeerors erroneously for ipython methods ([#5299](https://github.com/rapidsai/cuml/pull/5299)) [@dantegd](https://github.com/dantegd)
- Fix cuml local cpp docs build ([#5297](https://github.com/rapidsai/cuml/pull/5297)) [@galipremsagar](https://github.com/galipremsagar)
- Don't run dask tests twice when testing wheels ([#5279](https://github.com/rapidsai/cuml/pull/5279)) [@benfred](https://github.com/benfred)
- Remove MANIFEST.in use auto-generated one for sdists and package_data for wheels ([#5278](https://github.com/rapidsai/cuml/pull/5278)) [@vyasr](https://github.com/vyasr)
- Removing remaining include of `raft/distance/distance_type.hpp` ([#5264](https://github.com/rapidsai/cuml/pull/5264)) [@cjnolet](https://github.com/cjnolet)
- Enable hypothesis testing for nightly test runs. ([#5244](https://github.com/rapidsai/cuml/pull/5244)) [@csadorf](https://github.com/csadorf)
- Support numeric, boolean, and string keyword arguments to class methods during CPU dispatching ([#5236](https://github.com/rapidsai/cuml/pull/5236)) [@beckernick](https://github.com/beckernick)
- Allowing large data in kmeans ([#5228](https://github.com/rapidsai/cuml/pull/5228)) [@cjnolet](https://github.com/cjnolet)

## 📖 Documentation

- Fix docs build to be `pydata-sphinx-theme=0.13.0` compatible ([#5259](https://github.com/rapidsai/cuml/pull/5259)) [@galipremsagar](https://github.com/galipremsagar)
- Add supported CPU/GPU operators to API docs and update docstrings ([#5239](https://github.com/rapidsai/cuml/pull/5239)) [@beckernick](https://github.com/beckernick)
- Fix documentation author ([#5126](https://github.com/rapidsai/cuml/pull/5126)) [@bdice](https://github.com/bdice)

## 🚀 New Features

- Modify default batch size in HDBSCAN soft clustering ([#5335](https://github.com/rapidsai/cuml/pull/5335)) [@tarang-jain](https://github.com/tarang-jain)
- reduce memory pressure in membership vector computation ([#5268](https://github.com/rapidsai/cuml/pull/5268)) [@tarang-jain](https://github.com/tarang-jain)
- membership_vector for HDBSCAN ([#5247](https://github.com/rapidsai/cuml/pull/5247)) [@tarang-jain](https://github.com/tarang-jain)
- Provide FIL implementation for both CPU and GPU ([#4890](https://github.com/rapidsai/cuml/pull/4890)) [@wphicks](https://github.com/wphicks)

## 🛠️ Improvements

- Remove deprecated Treelite CI API from FIL ([#5348](https://github.com/rapidsai/cuml/pull/5348)) [@hcho3](https://github.com/hcho3)
- Updated forest inference to new dask worker api for 23.04 ([#5347](https://github.com/rapidsai/cuml/pull/5347)) [@taureandyernv](https://github.com/taureandyernv)
- Pin `dask` and `distributed` for release
([#5333](https://github.com/rapidsai/cuml/pull/5333)) [@galipremsagar](https://github.com/galipremsagar) - Pin cupy in wheel tests to supported versions ([#5312](https://github.com/rapidsai/cuml/pull/5312)) [@vyasr](https://github.com/vyasr) - Drop `pickle5` ([#5310](https://github.com/rapidsai/cuml/pull/5310)) [@jakirkham](https://github.com/jakirkham) - Remove CUDA_CHECK macro ([#5308](https://github.com/rapidsai/cuml/pull/5308)) [@hcho3](https://github.com/hcho3) - Revert faiss removal pinned tag ([#5306](https://github.com/rapidsai/cuml/pull/5306)) [@cjnolet](https://github.com/cjnolet) - Upgrade to Treelite 3.2.0 ([#5304](https://github.com/rapidsai/cuml/pull/5304)) [@hcho3](https://github.com/hcho3) - Implement predict_per_tree() in FIL ([#5303](https://github.com/rapidsai/cuml/pull/5303)) [@hcho3](https://github.com/hcho3) - remove faiss from cuml ([#5293](https://github.com/rapidsai/cuml/pull/5293)) [@benfred](https://github.com/benfred) - Stop setting package version attribute in wheels ([#5285](https://github.com/rapidsai/cuml/pull/5285)) [@vyasr](https://github.com/vyasr) - Add libfaiss runtime dependency to libcuml. 
([#5284](https://github.com/rapidsai/cuml/pull/5284)) [@bdice](https://github.com/bdice) - Move faiss_mr from raft ([#5281](https://github.com/rapidsai/cuml/pull/5281)) [@benfred](https://github.com/benfred) - Generate pyproject dependencies with dfg ([#5275](https://github.com/rapidsai/cuml/pull/5275)) [@vyasr](https://github.com/vyasr) - Updating cuML to use consolidated RAFT libs ([#5272](https://github.com/rapidsai/cuml/pull/5272)) [@cjnolet](https://github.com/cjnolet) - Add codespell as a linter ([#5265](https://github.com/rapidsai/cuml/pull/5265)) [@benfred](https://github.com/benfred) - Pass `AWS_SESSION_TOKEN` and `SCCACHE_S3_USE_SSL` vars to conda build ([#5263](https://github.com/rapidsai/cuml/pull/5263)) [@ajschmidt8](https://github.com/ajschmidt8) - Update to GCC 11 ([#5258](https://github.com/rapidsai/cuml/pull/5258)) [@bdice](https://github.com/bdice) - Drop Python 3.7 handling for pickle protocol 4 ([#5256](https://github.com/rapidsai/cuml/pull/5256)) [@jakirkham](https://github.com/jakirkham) - Migrate as much as possible to pyproject.toml ([#5251](https://github.com/rapidsai/cuml/pull/5251)) [@vyasr](https://github.com/vyasr) - Adapt to rapidsai/rmm#1221 which moves allocator callbacks ([#5249](https://github.com/rapidsai/cuml/pull/5249)) [@wence-](https://github.com/wence-) - Add dfg as a pre-commit hook. 
([#5246](https://github.com/rapidsai/cuml/pull/5246)) [@vyasr](https://github.com/vyasr) - Stop using versioneer to manage versions ([#5245](https://github.com/rapidsai/cuml/pull/5245)) [@vyasr](https://github.com/vyasr) - Enhance cuML benchmark utility and refactor hdbscan import utilities ([#5242](https://github.com/rapidsai/cuml/pull/5242)) [@beckernick](https://github.com/beckernick) - Fix GHA build workflow ([#5241](https://github.com/rapidsai/cuml/pull/5241)) [@AjayThorve](https://github.com/AjayThorve) - Support innerproduct distance in the pairwise_distance API ([#5230](https://github.com/rapidsai/cuml/pull/5230)) [@benfred](https://github.com/benfred) - Enable hypothesis for 23.04 ([#5221](https://github.com/rapidsai/cuml/pull/5221)) [@csadorf](https://github.com/csadorf) - Reduce error handling verbosity in CI tests scripts ([#5219](https://github.com/rapidsai/cuml/pull/5219)) [@AjayThorve](https://github.com/AjayThorve) - Bump pinned pip wheel deps to 23.4 ([#5217](https://github.com/rapidsai/cuml/pull/5217)) [@sevagh](https://github.com/sevagh) - Update shared workflow branches ([#5215](https://github.com/rapidsai/cuml/pull/5215)) [@ajschmidt8](https://github.com/ajschmidt8) - Unpin `dask` and `distributed` for development ([#5209](https://github.com/rapidsai/cuml/pull/5209)) [@galipremsagar](https://github.com/galipremsagar) - Remove gpuCI scripts. 
([#5208](https://github.com/rapidsai/cuml/pull/5208)) [@bdice](https://github.com/bdice) - Move date to build string in `conda` recipe ([#5190](https://github.com/rapidsai/cuml/pull/5190)) [@ajschmidt8](https://github.com/ajschmidt8) - Kernel shap improvements ([#5187](https://github.com/rapidsai/cuml/pull/5187)) [@vinaydes](https://github.com/vinaydes) - test out the raft bfknn replacement ([#5186](https://github.com/rapidsai/cuml/pull/5186)) [@benfred](https://github.com/benfred) - Forward merge 23.02 into 23.04 ([#5182](https://github.com/rapidsai/cuml/pull/5182)) [@vyasr](https://github.com/vyasr) - Add `detail` namespace for linear models ([#5107](https://github.com/rapidsai/cuml/pull/5107)) [@lowener](https://github.com/lowener) - Add pre-commit configuration ([#4983](https://github.com/rapidsai/cuml/pull/4983)) [@csadorf](https://github.com/csadorf) # cuML 23.02.00 (9 Feb 2023) ## 🚨 Breaking Changes - Use ivf_pq and ivf_flat from raft ([#5119](https://github.com/rapidsai/cuml/pull/5119)) [@benfred](https://github.com/benfred) - Estimators adaptation toward CPU/GPU interoperability ([#4918](https://github.com/rapidsai/cuml/pull/4918)) [@viclafargue](https://github.com/viclafargue) - Provide host CumlArray and associated infrastructure ([#4908](https://github.com/rapidsai/cuml/pull/4908)) [@wphicks](https://github.com/wphicks) - Improvements of UMAP/TSNE precomputed KNN feature ([#4865](https://github.com/rapidsai/cuml/pull/4865)) [@viclafargue](https://github.com/viclafargue) ## 🐛 Bug Fixes - Fix for creation of CUDA context at import time ([#5211](https://github.com/rapidsai/cuml/pull/5211)) [@dantegd](https://github.com/dantegd) - Correct arguments to load_from_treelite_model after classmethod conversion ([#5210](https://github.com/rapidsai/cuml/pull/5210)) [@wphicks](https://github.com/wphicks) - Use workaround to avoid staticmethod 3.10/Cython issue ([#5202](https://github.com/rapidsai/cuml/pull/5202)) [@wphicks](https://github.com/wphicks) - Increase 
margin for flaky FIL test ([#5194](https://github.com/rapidsai/cuml/pull/5194)) [@wphicks](https://github.com/wphicks) - Increase margin for flaky FIL test ([#5174](https://github.com/rapidsai/cuml/pull/5174)) [@wphicks](https://github.com/wphicks) - Fix gather_if raft update ([#5149](https://github.com/rapidsai/cuml/pull/5149)) [@lowener](https://github.com/lowener) - Add `_predict_model_on_cpu` for `RandomForestClassifier` ([#5148](https://github.com/rapidsai/cuml/pull/5148)) [@lowener](https://github.com/lowener) - Fix for hdbscan model serialization ([#5128](https://github.com/rapidsai/cuml/pull/5128)) [@cjnolet](https://github.com/cjnolet) - build.sh switch to use `RAPIDS` magic value ([#5124](https://github.com/rapidsai/cuml/pull/5124)) [@robertmaynard](https://github.com/robertmaynard) - Fix `Lasso` interop issue ([#5116](https://github.com/rapidsai/cuml/pull/5116)) [@viclafargue](https://github.com/viclafargue) - Remove nvcc conda package and add compiler/ninja to dev envs ([#5113](https://github.com/rapidsai/cuml/pull/5113)) [@dantegd](https://github.com/dantegd) - Add missing job dependency for new PR jobs check ([#5112](https://github.com/rapidsai/cuml/pull/5112)) [@dantegd](https://github.com/dantegd) - Skip RAFT docstring test in cuML ([#5088](https://github.com/rapidsai/cuml/pull/5088)) [@dantegd](https://github.com/dantegd) - Restore KNN metric attribute ([#5087](https://github.com/rapidsai/cuml/pull/5087)) [@viclafargue](https://github.com/viclafargue) - Check `sklearn` presence before importing the `Pipeline` ([#5072](https://github.com/rapidsai/cuml/pull/5072)) [@viclafargue](https://github.com/viclafargue) - Provide workaround for kernel ridge solver ([#5064](https://github.com/rapidsai/cuml/pull/5064)) [@wphicks](https://github.com/wphicks) - Keep verbosity level in KMeans OPG ([#5063](https://github.com/rapidsai/cuml/pull/5063)) [@viclafargue](https://github.com/viclafargue) - Transmit verbosity level to Dask workers 
([#5062](https://github.com/rapidsai/cuml/pull/5062)) [@viclafargue](https://github.com/viclafargue) - Ensure consistent order for nearest neighbor tests ([#5059](https://github.com/rapidsai/cuml/pull/5059)) [@wphicks](https://github.com/wphicks) - Add `workers` argument to dask `make_blobs` ([#5057](https://github.com/rapidsai/cuml/pull/5057)) [@viclafargue](https://github.com/viclafargue) - Fix indexing type for ridge and linear models ([#4996](https://github.com/rapidsai/cuml/pull/4996)) [@lowener](https://github.com/lowener) ## 📖 Documentation - Adding benchmark notebook for hdbscan soft clustering ([#5103](https://github.com/rapidsai/cuml/pull/5103)) [@cjnolet](https://github.com/cjnolet) - Fix doc for solver in LogisticRegression ([#5097](https://github.com/rapidsai/cuml/pull/5097)) [@viclafargue](https://github.com/viclafargue) - Fix docstring of `HashingVectorizer` ([#5041](https://github.com/rapidsai/cuml/pull/5041)) [@lowener](https://github.com/lowener) - expose text, text.{CountVectorizer,HashingVectorizer,Tfidf{Transformer,Vectorizer}} from feature_extraction&#39;s public api ([#5028](https://github.com/rapidsai/cuml/pull/5028)) [@mattf](https://github.com/mattf) - Add Dask LabelEncoder to the documentation ([#5023](https://github.com/rapidsai/cuml/pull/5023)) [@beckernick](https://github.com/beckernick) ## 🚀 New Features - HDBSCAN CPU/GPU Interop ([#5137](https://github.com/rapidsai/cuml/pull/5137)) [@divyegala](https://github.com/divyegala) - Make all CPU/GPU only imports &quot;safe&quot; for respective package ([#5117](https://github.com/rapidsai/cuml/pull/5117)) [@wphicks](https://github.com/wphicks) - Pickling for HBDSCAN ([#5102](https://github.com/rapidsai/cuml/pull/5102)) [@divyegala](https://github.com/divyegala) - Break up silhouette score into 3 units to improve compilation time ([#5061](https://github.com/rapidsai/cuml/pull/5061)) [@wphicks](https://github.com/wphicks) - Provide host CumlArray and associated infrastructure 
([#4908](https://github.com/rapidsai/cuml/pull/4908)) [@wphicks](https://github.com/wphicks) ## 🛠️ Improvements - Pin `dask` and `distributed` for release ([#5198](https://github.com/rapidsai/cuml/pull/5198)) [@galipremsagar](https://github.com/galipremsagar) - Update shared workflow branches ([#5197](https://github.com/rapidsai/cuml/pull/5197)) [@ajschmidt8](https://github.com/ajschmidt8) - Pin wheel dependencies to same RAPIDS release ([#5183](https://github.com/rapidsai/cuml/pull/5183)) [@sevagh](https://github.com/sevagh) - Reverting RAFT pin ([#5178](https://github.com/rapidsai/cuml/pull/5178)) [@cjnolet](https://github.com/cjnolet) - Remove `faiss` from `libcuml` ([#5175](https://github.com/rapidsai/cuml/pull/5175)) [@ajschmidt8](https://github.com/ajschmidt8) - Update location of `import_utils` from `common` to `internals` for Forest notebook ([#5171](https://github.com/rapidsai/cuml/pull/5171)) [@taureandyernv](https://github.com/taureandyernv) - Disable hypothesis tests for 23.02 burndown. ([#5168](https://github.com/rapidsai/cuml/pull/5168)) [@csadorf](https://github.com/csadorf) - Use CTK 118/cp310 branch of wheel workflows ([#5163](https://github.com/rapidsai/cuml/pull/5163)) [@sevagh](https://github.com/sevagh) - Add docs build GH ([#5155](https://github.com/rapidsai/cuml/pull/5155)) [@AjayThorve](https://github.com/AjayThorve) - Adapt to changes in `cudf.core.buffer.Buffer` ([#5154](https://github.com/rapidsai/cuml/pull/5154)) [@galipremsagar](https://github.com/galipremsagar) - Upgrade Treelite to 3.1.0 ([#5146](https://github.com/rapidsai/cuml/pull/5146)) [@hcho3](https://github.com/hcho3) - Replace cpdef variables with cdef variables. 
([#5145](https://github.com/rapidsai/cuml/pull/5145)) [@bdice](https://github.com/bdice) - Update Scikit-learn compatibility to 1.2 ([#5141](https://github.com/rapidsai/cuml/pull/5141)) [@dantegd](https://github.com/dantegd) - Replace deprecated raft headers ([#5134](https://github.com/rapidsai/cuml/pull/5134)) [@lowener](https://github.com/lowener) - Execution device interoperability documentation ([#5130](https://github.com/rapidsai/cuml/pull/5130)) [@viclafargue](https://github.com/viclafargue) - Remove outdated macOS deployment target from build script. ([#5125](https://github.com/rapidsai/cuml/pull/5125)) [@bdice](https://github.com/bdice) - Build CUDA 11.8 and Python 3.10 Packages ([#5120](https://github.com/rapidsai/cuml/pull/5120)) [@bdice](https://github.com/bdice) - Use ivf_pq and ivf_flat from raft ([#5119](https://github.com/rapidsai/cuml/pull/5119)) [@benfred](https://github.com/benfred) - Update workflows for nightly tests ([#5110](https://github.com/rapidsai/cuml/pull/5110)) [@ajschmidt8](https://github.com/ajschmidt8) - Build pip wheels alongside conda CI ([#5109](https://github.com/rapidsai/cuml/pull/5109)) [@sevagh](https://github.com/sevagh) - Remove PROJECT_FLASH from libcuml conda build environment. ([#5108](https://github.com/rapidsai/cuml/pull/5108)) [@bdice](https://github.com/bdice) - Enable `Recently Updated` Check ([#5105](https://github.com/rapidsai/cuml/pull/5105)) [@ajschmidt8](https://github.com/ajschmidt8) - Ensure `pytest` is run from relevant directories in GH Actions ([#5101](https://github.com/rapidsai/cuml/pull/5101)) [@ajschmidt8](https://github.com/ajschmidt8) - Remove C++ Kmeans test ([#5098](https://github.com/rapidsai/cuml/pull/5098)) [@lowener](https://github.com/lowener) - Slightly lower the test_mbsgd_regressor expected min score. ([#5092](https://github.com/rapidsai/cuml/pull/5092)) [@csadorf](https://github.com/csadorf) - Skip all hypothesis health checks by default in CI runs. 
([#5090](https://github.com/rapidsai/cuml/pull/5090)) [@csadorf](https://github.com/csadorf) - Reduce Naive Bayes test time ([#5082](https://github.com/rapidsai/cuml/pull/5082)) [@lowener](https://github.com/lowener) - Remove unused `.conda` folder ([#5078](https://github.com/rapidsai/cuml/pull/5078)) [@ajschmidt8](https://github.com/ajschmidt8) - Fix conflicts in #5045 ([#5077](https://github.com/rapidsai/cuml/pull/5077)) [@ajschmidt8](https://github.com/ajschmidt8) - Add GitHub Actions Workflows ([#5075](https://github.com/rapidsai/cuml/pull/5075)) [@csadorf](https://github.com/csadorf) - Skip test_linear_regression_model_default test. ([#5074](https://github.com/rapidsai/cuml/pull/5074)) [@csadorf](https://github.com/csadorf) - Fix link. ([#5067](https://github.com/rapidsai/cuml/pull/5067)) [@bdice](https://github.com/bdice) - Expand hypothesis testing for linear models ([#5065](https://github.com/rapidsai/cuml/pull/5065)) [@csadorf](https://github.com/csadorf) - Update xgb version in GPU CI 23.02 to 1.7.1 and unblocking CI ([#5051](https://github.com/rapidsai/cuml/pull/5051)) [@dantegd](https://github.com/dantegd) - Remove direct UCX and NCCL dependencies ([#5038](https://github.com/rapidsai/cuml/pull/5038)) [@vyasr](https://github.com/vyasr) - Move single test from `test` to `tests` ([#5037](https://github.com/rapidsai/cuml/pull/5037)) [@vyasr](https://github.com/vyasr) - Support using `CountVectorizer` &amp; `TfidVectorizer` in `cuml.pipeline.Pipeline` ([#5034](https://github.com/rapidsai/cuml/pull/5034)) [@lasse-it](https://github.com/lasse-it) - Refactor API decorators ([#5026](https://github.com/rapidsai/cuml/pull/5026)) [@csadorf](https://github.com/csadorf) - Implement hypothesis strategies and tests for arrays ([#5017](https://github.com/rapidsai/cuml/pull/5017)) [@csadorf](https://github.com/csadorf) - Add dependencies.yaml for rapids-dependency-file-generator ([#5003](https://github.com/rapidsai/cuml/pull/5003)) 
[@beckernick](https://github.com/beckernick) - Improved CPU/GPU interoperability ([#5001](https://github.com/rapidsai/cuml/pull/5001)) [@viclafargue](https://github.com/viclafargue) - Estimators adaptation toward CPU/GPU interoperability ([#4918](https://github.com/rapidsai/cuml/pull/4918)) [@viclafargue](https://github.com/viclafargue) - Improvements of UMAP/TSNE precomputed KNN feature ([#4865](https://github.com/rapidsai/cuml/pull/4865)) [@viclafargue](https://github.com/viclafargue) # cuML 22.12.00 (8 Dec 2022) ## 🚨 Breaking Changes - Change docs theme to `pydata-sphinx` theme ([#4985](https://github.com/rapidsai/cuml/pull/4985)) [@galipremsagar](https://github.com/galipremsagar) - Remove &quot;Open In Colab&quot; link from Estimator Intro notebook. ([#4980](https://github.com/rapidsai/cuml/pull/4980)) [@bdice](https://github.com/bdice) - Remove `CumlArray.copy()` ([#4958](https://github.com/rapidsai/cuml/pull/4958)) [@madsbk](https://github.com/madsbk) ## 🐛 Bug Fixes - Remove cupy.cusparse custom serialization ([#5024](https://github.com/rapidsai/cuml/pull/5024)) [@dantegd](https://github.com/dantegd) - Restore `LinearRegression` documentation ([#5020](https://github.com/rapidsai/cuml/pull/5020)) [@viclafargue](https://github.com/viclafargue) - Don&#39;t use CMake 3.25.0 as it has a FindCUDAToolkit show stopping bug ([#5007](https://github.com/rapidsai/cuml/pull/5007)) [@robertmaynard](https://github.com/robertmaynard) - verifying cusparse wrapper revert passes CI ([#4990](https://github.com/rapidsai/cuml/pull/4990)) [@cjnolet](https://github.com/cjnolet) - Use rapdsi_cpm_find(COMPONENTS ) for proper component tracking ([#4989](https://github.com/rapidsai/cuml/pull/4989)) [@robertmaynard](https://github.com/robertmaynard) - Fix integer overflow in AutoARIMA due to bool-to-int cub scan ([#4971](https://github.com/rapidsai/cuml/pull/4971)) [@Nyrio](https://github.com/Nyrio) - Add missing includes ([#4947](https://github.com/rapidsai/cuml/pull/4947)) 
[@vyasr](https://github.com/vyasr) - Fix the CMake option for disabling deprecation warnings. ([#4946](https://github.com/rapidsai/cuml/pull/4946)) [@vyasr](https://github.com/vyasr) - Make doctest resilient to changes in cupy reprs ([#4945](https://github.com/rapidsai/cuml/pull/4945)) [@vyasr](https://github.com/vyasr) - Assign python/ sub-directory to python-codeowners ([#4940](https://github.com/rapidsai/cuml/pull/4940)) [@csadorf](https://github.com/csadorf) - Fix for non-contiguous strides ([#4736](https://github.com/rapidsai/cuml/pull/4736)) [@viclafargue](https://github.com/viclafargue) ## 📖 Documentation - Change docs theme to `pydata-sphinx` theme ([#4985](https://github.com/rapidsai/cuml/pull/4985)) [@galipremsagar](https://github.com/galipremsagar) - Remove &quot;Open In Colab&quot; link from Estimator Intro notebook. ([#4980](https://github.com/rapidsai/cuml/pull/4980)) [@bdice](https://github.com/bdice) - Updating build instructions ([#4979](https://github.com/rapidsai/cuml/pull/4979)) [@cjnolet](https://github.com/cjnolet) ## 🚀 New Features - Reenable copy_prs. 
([#5010](https://github.com/rapidsai/cuml/pull/5010)) [@vyasr](https://github.com/vyasr) - Add wheel builds ([#5009](https://github.com/rapidsai/cuml/pull/5009)) [@vyasr](https://github.com/vyasr) - LinearRegression: add support for multiple targets ([#4988](https://github.com/rapidsai/cuml/pull/4988)) [@ahendriksen](https://github.com/ahendriksen) - CPU/GPU interoperability POC ([#4874](https://github.com/rapidsai/cuml/pull/4874)) [@viclafargue](https://github.com/viclafargue) ## 🛠️ Improvements - Upgrade Treelite to 3.0.1 ([#5018](https://github.com/rapidsai/cuml/pull/5018)) [@hcho3](https://github.com/hcho3) - fix addition of nan_euclidean_distances to public api ([#5015](https://github.com/rapidsai/cuml/pull/5015)) [@mattf](https://github.com/mattf) - Fixing raft pin to 22.12 ([#5000](https://github.com/rapidsai/cuml/pull/5000)) [@cjnolet](https://github.com/cjnolet) - Pin `dask` and `distributed` for release ([#4999](https://github.com/rapidsai/cuml/pull/4999)) [@galipremsagar](https://github.com/galipremsagar) - Update `dask` nightly install command in CI ([#4978](https://github.com/rapidsai/cuml/pull/4978)) [@galipremsagar](https://github.com/galipremsagar) - Improve error message for array_equal asserts. ([#4973](https://github.com/rapidsai/cuml/pull/4973)) [@csadorf](https://github.com/csadorf) - Use new rapids-cmake functionality for rpath handling. ([#4966](https://github.com/rapidsai/cuml/pull/4966)) [@vyasr](https://github.com/vyasr) - Impl. 
`CumlArray.deserialize()` ([#4965](https://github.com/rapidsai/cuml/pull/4965)) [@madsbk](https://github.com/madsbk) - Update `cuda-python` dependency to 11.7.1 ([#4961](https://github.com/rapidsai/cuml/pull/4961)) [@galipremsagar](https://github.com/galipremsagar) - Add check for nsys utility version in the `nvtx_benchmarks.py` script ([#4959](https://github.com/rapidsai/cuml/pull/4959)) [@viclafargue](https://github.com/viclafargue) - Remove `CumlArray.copy()` ([#4958](https://github.com/rapidsai/cuml/pull/4958)) [@madsbk](https://github.com/madsbk) - Implement hypothesis-based tests for linear models ([#4952](https://github.com/rapidsai/cuml/pull/4952)) [@csadorf](https://github.com/csadorf) - Switch to using rapids-cmake for gbench. ([#4950](https://github.com/rapidsai/cuml/pull/4950)) [@vyasr](https://github.com/vyasr) - Remove stale labeler ([#4949](https://github.com/rapidsai/cuml/pull/4949)) [@raydouglass](https://github.com/raydouglass) - Fix url in python/setup.py setuptools metadata. 
([#4937](https://github.com/rapidsai/cuml/pull/4937)) [@csadorf](https://github.com/csadorf) - Updates to fix cuml build ([#4928](https://github.com/rapidsai/cuml/pull/4928)) [@cjnolet](https://github.com/cjnolet) - Documenting hdbscan module to add prediction functions ([#4925](https://github.com/rapidsai/cuml/pull/4925)) [@cjnolet](https://github.com/cjnolet) - Unpin `dask` and `distributed` for development ([#4912](https://github.com/rapidsai/cuml/pull/4912)) [@galipremsagar](https://github.com/galipremsagar) - Use KMeans from Raft ([#4713](https://github.com/rapidsai/cuml/pull/4713)) [@lowener](https://github.com/lowener) - Update cuml raft header extensions ([#4599](https://github.com/rapidsai/cuml/pull/4599)) [@cjnolet](https://github.com/cjnolet) - Reconciling primitives moved to RAFT ([#4583](https://github.com/rapidsai/cuml/pull/4583)) [@cjnolet](https://github.com/cjnolet) # cuML 22.10.00 (12 Oct 2022) ## 🐛 Bug Fixes - Skipping some hdbscan tests when cuda version is &lt;= 11.2. 
([#4916](https://github.com/rapidsai/cuml/pull/4916)) [@cjnolet](https://github.com/cjnolet) - Fix HDBSCAN python namespace ([#4895](https://github.com/rapidsai/cuml/pull/4895)) [@cjnolet](https://github.com/cjnolet) - Cupy 11 fixes ([#4889](https://github.com/rapidsai/cuml/pull/4889)) [@dantegd](https://github.com/dantegd) - Fix small fp precision failure in linear regression doctest test ([#4884](https://github.com/rapidsai/cuml/pull/4884)) [@lowener](https://github.com/lowener) - Remove unused cuDF imports ([#4873](https://github.com/rapidsai/cuml/pull/4873)) [@beckernick](https://github.com/beckernick) - Update for thrust 1.17 and fixes to accommodate for cuDF Buffer refactor ([#4871](https://github.com/rapidsai/cuml/pull/4871)) [@dantegd](https://github.com/dantegd) - Use rapids-cmake 22.10 best practice for RAPIDS.cmake location ([#4862](https://github.com/rapidsai/cuml/pull/4862)) [@robertmaynard](https://github.com/robertmaynard) - Patch for nightly test&amp;bench ([#4840](https://github.com/rapidsai/cuml/pull/4840)) [@viclafargue](https://github.com/viclafargue) - Fixed Large memory requirements for SimpleImputer strategy median #4794 ([#4817](https://github.com/rapidsai/cuml/pull/4817)) [@erikrene](https://github.com/erikrene) - Transforms RandomForest estimators non-consecutive labels to consecutive labels where appropriate ([#4780](https://github.com/rapidsai/cuml/pull/4780)) [@VamsiTallam95](https://github.com/VamsiTallam95) ## 📖 Documentation - Document that minimum required CMake version is now 3.23.1 ([#4899](https://github.com/rapidsai/cuml/pull/4899)) [@robertmaynard](https://github.com/robertmaynard) - Update KMeans notebook for clarity ([#4886](https://github.com/rapidsai/cuml/pull/4886)) [@beckernick](https://github.com/beckernick) ## 🚀 New Features - Allow cupy 11 ([#4880](https://github.com/rapidsai/cuml/pull/4880)) [@galipremsagar](https://github.com/galipremsagar) - Add `sample_weight` to Coordinate Descent solver (Lasso and ElasticNet) 
([#4867](https://github.com/rapidsai/cuml/pull/4867)) [@lowener](https://github.com/lowener) - Import treelite models into FIL in a different precision ([#4839](https://github.com/rapidsai/cuml/pull/4839)) [@canonizer](https://github.com/canonizer) - #4783 Added nan_euclidean distance metric to pairwise_distances ([#4797](https://github.com/rapidsai/cuml/pull/4797)) [@Sreekiran096](https://github.com/Sreekiran096) - `PowerTransformer`, `QuantileTransformer` and `KernelCenterer` ([#4755](https://github.com/rapidsai/cuml/pull/4755)) [@viclafargue](https://github.com/viclafargue) - Add &quot;median&quot; to TargetEncoder ([#4722](https://github.com/rapidsai/cuml/pull/4722)) [@daxiongshu](https://github.com/daxiongshu) - New Feature StratifiedKFold ([#3109](https://github.com/rapidsai/cuml/pull/3109)) [@daxiongshu](https://github.com/daxiongshu) ## 🛠️ Improvements - Updating python to use pylibraft ([#4887](https://github.com/rapidsai/cuml/pull/4887)) [@cjnolet](https://github.com/cjnolet) - Upgrade Treelite to 3.0.0 ([#4885](https://github.com/rapidsai/cuml/pull/4885)) [@hcho3](https://github.com/hcho3) - Statically link all CUDA toolkit libraries ([#4881](https://github.com/rapidsai/cuml/pull/4881)) [@trxcllnt](https://github.com/trxcllnt) - approximate_predict function for HDBSCAN ([#4872](https://github.com/rapidsai/cuml/pull/4872)) [@tarang-jain](https://github.com/tarang-jain) - Pin `dask` and `distributed` for release ([#4859](https://github.com/rapidsai/cuml/pull/4859)) [@galipremsagar](https://github.com/galipremsagar) - Remove Raft deprecated headers ([#4858](https://github.com/rapidsai/cuml/pull/4858)) [@lowener](https://github.com/lowener) - Fix forward-merge conflicts ([#4857](https://github.com/rapidsai/cuml/pull/4857)) [@ajschmidt8](https://github.com/ajschmidt8) - Update the NVTX bench helper for the new nsys utility ([#4826](https://github.com/rapidsai/cuml/pull/4826)) [@viclafargue](https://github.com/viclafargue) - All points membership vector for 
HDBSCAN ([#4800](https://github.com/rapidsai/cuml/pull/4800)) [@tarang-jain](https://github.com/tarang-jain) - TSNE and UMAP allow several distance types ([#4779](https://github.com/rapidsai/cuml/pull/4779)) [@tarang-jain](https://github.com/tarang-jain) - Convert fp32 datasets to fp64 in ARIMA and AutoARIMA + update notebook to avoid deprecation warnings with positional parameters ([#4195](https://github.com/rapidsai/cuml/pull/4195)) [@Nyrio](https://github.com/Nyrio) # cuML 22.08.00 (17 Aug 2022) ## 🚨 Breaking Changes - Update Python build to scikit-build ([#4818](https://github.com/rapidsai/cuml/pull/4818)) [@dantegd](https://github.com/dantegd) - Bump `xgboost` to `1.6.0` from `1.5.2` ([#4777](https://github.com/rapidsai/cuml/pull/4777)) [@galipremsagar](https://github.com/galipremsagar) ## 🐛 Bug Fixes - Revert &quot;Allow CuPy 11&quot; ([#4847](https://github.com/rapidsai/cuml/pull/4847)) [@galipremsagar](https://github.com/galipremsagar) - Fix RAFT_NVTX option not set ([#4825](https://github.com/rapidsai/cuml/pull/4825)) [@achirkin](https://github.com/achirkin) - Fix KNN error message. 
([#4782](https://github.com/rapidsai/cuml/pull/4782)) [@trivialfis](https://github.com/trivialfis)
- Update raft pinnings in dev yml files ([#4778](https://github.com/rapidsai/cuml/pull/4778)) [@galipremsagar](https://github.com/galipremsagar)
- Bump `xgboost` to `1.6.0` from `1.5.2` ([#4777](https://github.com/rapidsai/cuml/pull/4777)) [@galipremsagar](https://github.com/galipremsagar)
- Fixes exception when using predict_proba on fitted Pipeline object with a ColumnTransformer step ([#4774](https://github.com/rapidsai/cuml/pull/4774)) [@VamsiTallam95](https://github.com/VamsiTallam95)
- Regression errors failing with mixed data type combinations ([#4770](https://github.com/rapidsai/cuml/pull/4770)) [@shaswat-indian](https://github.com/shaswat-indian)

## 📖 Documentation

- Use common code in python docs and defer `js` loading ([#4852](https://github.com/rapidsai/cuml/pull/4852)) [@galipremsagar](https://github.com/galipremsagar)
- Centralize common css & js code in docs ([#4844](https://github.com/rapidsai/cuml/pull/4844)) [@galipremsagar](https://github.com/galipremsagar)
- Add ComplementNB to the documentation ([#4805](https://github.com/rapidsai/cuml/pull/4805)) [@lowener](https://github.com/lowener)
- Fix forward-merge branch-22.06 to branch-22.08 ([#4789](https://github.com/rapidsai/cuml/pull/4789)) [@divyegala](https://github.com/divyegala)

## 🚀 New Features

- Update Python build to scikit-build ([#4818](https://github.com/rapidsai/cuml/pull/4818)) [@dantegd](https://github.com/dantegd)
- Vectorizers to accept Pandas Series as input ([#4811](https://github.com/rapidsai/cuml/pull/4811)) [@shaswat-indian](https://github.com/shaswat-indian)
- Cython wrapper for v-measure ([#4785](https://github.com/rapidsai/cuml/pull/4785)) [@shaswat-indian](https://github.com/shaswat-indian)

## 🛠️ Improvements

- Pin `dask` & `distributed` for release ([#4850](https://github.com/rapidsai/cuml/pull/4850)) [@galipremsagar](https://github.com/galipremsagar)
- Allow CuPy 11 ([#4837](https://github.com/rapidsai/cuml/pull/4837)) [@jakirkham](https://github.com/jakirkham)
- Remove duplicate adj_to_csr implementation ([#4829](https://github.com/rapidsai/cuml/pull/4829)) [@ahendriksen](https://github.com/ahendriksen)
- Update conda environment files to UCX 1.13.0 ([#4813](https://github.com/rapidsai/cuml/pull/4813)) [@pentschev](https://github.com/pentschev)
- Update conda recipes to UCX 1.13.0 ([#4809](https://github.com/rapidsai/cuml/pull/4809)) [@pentschev](https://github.com/pentschev)
- Fix #3414: remove naive versions dbscan algorithms ([#4804](https://github.com/rapidsai/cuml/pull/4804)) [@ahendriksen](https://github.com/ahendriksen)
- Accelerate adjacency matrix to CSR conversion for DBSCAN ([#4803](https://github.com/rapidsai/cuml/pull/4803)) [@ahendriksen](https://github.com/ahendriksen)
- Pin max version of `cuda-python` to `11.7.0` ([#4793](https://github.com/rapidsai/cuml/pull/4793)) [@Ethyling](https://github.com/Ethyling)
- Allow cosine distance metric in dbscan ([#4776](https://github.com/rapidsai/cuml/pull/4776)) [@tarang-jain](https://github.com/tarang-jain)
- Unpin `dask` & `distributed` for development ([#4771](https://github.com/rapidsai/cuml/pull/4771)) [@galipremsagar](https://github.com/galipremsagar)
- Clean up Thrust includes. ([#4675](https://github.com/rapidsai/cuml/pull/4675)) [@bdice](https://github.com/bdice)
- Improvements in feature sampling ([#4278](https://github.com/rapidsai/cuml/pull/4278)) [@vinaydes](https://github.com/vinaydes)

# cuML 22.06.00 (7 Jun 2022)

## 🐛 Bug Fixes

- Fix sg benchmark build. ([#4766](https://github.com/rapidsai/cuml/pull/4766)) [@trivialfis](https://github.com/trivialfis)
- Resolve KRR hypothesis test failure ([#4761](https://github.com/rapidsai/cuml/pull/4761)) [@RAMitchell](https://github.com/RAMitchell)
- Fix `KBinsDiscretizer` `bin_edges_` ([#4735](https://github.com/rapidsai/cuml/pull/4735)) [@viclafargue](https://github.com/viclafargue)
- FIX Accept small floats in RandomForest ([#4717](https://github.com/rapidsai/cuml/pull/4717)) [@thomasjpfan](https://github.com/thomasjpfan)
- Remove import of `scalar_broadcast_to` from stemmer ([#4706](https://github.com/rapidsai/cuml/pull/4706)) [@viclafargue](https://github.com/viclafargue)
- Replace 22.04.x with 22.06.x in yaml files ([#4692](https://github.com/rapidsai/cuml/pull/4692)) [@daxiongshu](https://github.com/daxiongshu)
- Replace cudf.logical_not with ~ ([#4669](https://github.com/rapidsai/cuml/pull/4669)) [@canonizer](https://github.com/canonizer)

## 📖 Documentation

- Fix docs builds ([#4733](https://github.com/rapidsai/cuml/pull/4733)) [@ajschmidt8](https://github.com/ajschmidt8)
- Change "principals" to "principles" ([#4695](https://github.com/rapidsai/cuml/pull/4695)) [@cakiki](https://github.com/cakiki)
- Update pydoc and promote `ColumnTransformer` out of experimental ([#4509](https://github.com/rapidsai/cuml/pull/4509)) [@viclafargue](https://github.com/viclafargue)

## 🚀 New Features

- float64 support in FIL functions ([#4655](https://github.com/rapidsai/cuml/pull/4655)) [@canonizer](https://github.com/canonizer)
- float64 support in FIL core ([#4646](https://github.com/rapidsai/cuml/pull/4646)) [@canonizer](https://github.com/canonizer)
- Allow "LabelEncoder" to accept cupy and numpy arrays as input. ([#4620](https://github.com/rapidsai/cuml/pull/4620)) [@daxiongshu](https://github.com/daxiongshu)
- MNMG Logistic Regression (dask-glm wrapper) ([#3512](https://github.com/rapidsai/cuml/pull/3512)) [@daxiongshu](https://github.com/daxiongshu)

## 🛠️ Improvements

- Pin `dask` & `distributed` for release ([#4758](https://github.com/rapidsai/cuml/pull/4758)) [@galipremsagar](https://github.com/galipremsagar)
- Simplicial set functions ([#4756](https://github.com/rapidsai/cuml/pull/4756)) [@viclafargue](https://github.com/viclafargue)
- Upgrade Treelite to 2.4.0 ([#4752](https://github.com/rapidsai/cuml/pull/4752)) [@hcho3](https://github.com/hcho3)
- Simplify recipes ([#4749](https://github.com/rapidsai/cuml/pull/4749)) [@Ethyling](https://github.com/Ethyling)
- Inference for float64 random forests using FIL ([#4739](https://github.com/rapidsai/cuml/pull/4739)) [@canonizer](https://github.com/canonizer)
- MNT Removes unused optim_batch_size from UMAP's docstring ([#4732](https://github.com/rapidsai/cuml/pull/4732)) [@thomasjpfan](https://github.com/thomasjpfan)
- Require UCX 1.12.1+ ([#4720](https://github.com/rapidsai/cuml/pull/4720)) [@jakirkham](https://github.com/jakirkham)
- Allow enabling raft NVTX markers when raft is installed ([#4718](https://github.com/rapidsai/cuml/pull/4718)) [@achirkin](https://github.com/achirkin)
- Fix identifier collision ([#4716](https://github.com/rapidsai/cuml/pull/4716)) [@viclafargue](https://github.com/viclafargue)
- Use raft::span in TreeExplainer ([#4714](https://github.com/rapidsai/cuml/pull/4714)) [@hcho3](https://github.com/hcho3)
- Expose simplicial set functions ([#4711](https://github.com/rapidsai/cuml/pull/4711)) [@viclafargue](https://github.com/viclafargue)
- Refactor `tests` in `cuml` ([#4703](https://github.com/rapidsai/cuml/pull/4703)) [@galipremsagar](https://github.com/galipremsagar)
- Use conda to build python packages during GPU tests ([#4702](https://github.com/rapidsai/cuml/pull/4702)) [@Ethyling](https://github.com/Ethyling)
- Update pinning to allow newer CMake versions. ([#4698](https://github.com/rapidsai/cuml/pull/4698)) [@vyasr](https://github.com/vyasr)
- TreeExplainer extensions ([#4697](https://github.com/rapidsai/cuml/pull/4697)) [@RAMitchell](https://github.com/RAMitchell)
- Add sample_weight for Ridge ([#4696](https://github.com/rapidsai/cuml/pull/4696)) [@lowener](https://github.com/lowener)
- Unpin `dask` & `distributed` for development ([#4693](https://github.com/rapidsai/cuml/pull/4693)) [@galipremsagar](https://github.com/galipremsagar)
- float64 support in treelite->FIL import and Python layer ([#4690](https://github.com/rapidsai/cuml/pull/4690)) [@canonizer](https://github.com/canonizer)
- Enable building static libs ([#4673](https://github.com/rapidsai/cuml/pull/4673)) [@trxcllnt](https://github.com/trxcllnt)
- Treeshap hypothesis tests ([#4671](https://github.com/rapidsai/cuml/pull/4671)) [@RAMitchell](https://github.com/RAMitchell)
- float64 support in multi-sum and child_index() ([#4648](https://github.com/rapidsai/cuml/pull/4648)) [@canonizer](https://github.com/canonizer)
- Add libcuml-tests package ([#4635](https://github.com/rapidsai/cuml/pull/4635)) [@Ethyling](https://github.com/Ethyling)
- Random ball cover algorithm for 3D data ([#4582](https://github.com/rapidsai/cuml/pull/4582)) [@cjnolet](https://github.com/cjnolet)
- Use conda compilers ([#4577](https://github.com/rapidsai/cuml/pull/4577)) [@Ethyling](https://github.com/Ethyling)
- Build packages using mambabuild ([#4542](https://github.com/rapidsai/cuml/pull/4542)) [@Ethyling](https://github.com/Ethyling)

# cuML 22.04.00 (6 Apr 2022)

## 🚨 Breaking Changes

- Moving more ling prims to raft ([#4567](https://github.com/rapidsai/cuml/pull/4567)) [@cjnolet](https://github.com/cjnolet)
- Refactor QN solver: pass parameters via a POD struct ([#4511](https://github.com/rapidsai/cuml/pull/4511)) [@achirkin](https://github.com/achirkin)

## 🐛 Bug Fixes

- Fix single-GPU build by separating multi-GPU decomposition utils from single GPU ([#4645](https://github.com/rapidsai/cuml/pull/4645)) [@dantegd](https://github.com/dantegd)
- RF: fix stream bug causing performance regressions ([#4644](https://github.com/rapidsai/cuml/pull/4644)) [@venkywonka](https://github.com/venkywonka)
- XFail test_hinge_loss temporarily ([#4621](https://github.com/rapidsai/cuml/pull/4621)) [@lowener](https://github.com/lowener)
- cuml now supports building non static treelite ([#4598](https://github.com/rapidsai/cuml/pull/4598)) [@robertmaynard](https://github.com/robertmaynard)
- Fix mean_squared_error with cudf series ([#4584](https://github.com/rapidsai/cuml/pull/4584)) [@daxiongshu](https://github.com/daxiongshu)
- Fix for nightly CI tests: Use CUDA_REL variable in gpu build.sh script ([#4581](https://github.com/rapidsai/cuml/pull/4581)) [@dantegd](https://github.com/dantegd)
- Fix the TargetEncoder when transforming dataframe/series with custom index ([#4578](https://github.com/rapidsai/cuml/pull/4578)) [@daxiongshu](https://github.com/daxiongshu)
- Removing sign from pca assertions for now. ([#4559](https://github.com/rapidsai/cuml/pull/4559)) [@cjnolet](https://github.com/cjnolet)
- Fix compatibility of OneHotEncoder fit ([#4544](https://github.com/rapidsai/cuml/pull/4544)) [@lowener](https://github.com/lowener)
- Fix worker streams in OLS-eig executing in an unsafe order ([#4539](https://github.com/rapidsai/cuml/pull/4539)) [@achirkin](https://github.com/achirkin)
- Remove xfail from test_hinge_loss ([#4504](https://github.com/rapidsai/cuml/pull/4504)) [@Nanthini10](https://github.com/Nanthini10)
- Fix automerge #4501 ([#4502](https://github.com/rapidsai/cuml/pull/4502)) [@dantegd](https://github.com/dantegd)
- Remove classmethod of SimpleImputer ([#4439](https://github.com/rapidsai/cuml/pull/4439)) [@lowener](https://github.com/lowener)

## 📖 Documentation

- RF: Fix improper documentation in dask-RF ([#4666](https://github.com/rapidsai/cuml/pull/4666)) [@venkywonka](https://github.com/venkywonka)
- Add doctest ([#4618](https://github.com/rapidsai/cuml/pull/4618)) [@lowener](https://github.com/lowener)
- Fix document layouts in Parameters sections ([#4609](https://github.com/rapidsai/cuml/pull/4609)) [@Yosshi999](https://github.com/Yosshi999)
- Updates to consistency of MNMG PCA/TSVD solvers (docs + code consolidation) ([#4556](https://github.com/rapidsai/cuml/pull/4556)) [@cjnolet](https://github.com/cjnolet)

## 🚀 New Features

- Add a dummy argument `deep` to `TargetEncoder.get_params()` ([#4601](https://github.com/rapidsai/cuml/pull/4601)) [@daxiongshu](https://github.com/daxiongshu)
- Add Complement Naive Bayes ([#4595](https://github.com/rapidsai/cuml/pull/4595)) [@lowener](https://github.com/lowener)
- Add get_params() to TargetEncoder ([#4588](https://github.com/rapidsai/cuml/pull/4588)) [@daxiongshu](https://github.com/daxiongshu)
- Target Encoder with variance statistics ([#4483](https://github.com/rapidsai/cuml/pull/4483)) [@daxiongshu](https://github.com/daxiongshu)
- Interruptible execution ([#4463](https://github.com/rapidsai/cuml/pull/4463)) [@achirkin](https://github.com/achirkin)
- Configurable libcuml++ per algorithm ([#4296](https://github.com/rapidsai/cuml/pull/4296)) [@dantegd](https://github.com/dantegd)

## 🛠️ Improvements

- Adding some prints when hdbscan assertion fails ([#4656](https://github.com/rapidsai/cuml/pull/4656)) [@cjnolet](https://github.com/cjnolet)
- Temporarily disable new `ops-bot` functionality ([#4652](https://github.com/rapidsai/cuml/pull/4652)) [@ajschmidt8](https://github.com/ajschmidt8)
- Use CPMFindPackage to retrieve `cumlprims_mg` ([#4649](https://github.com/rapidsai/cuml/pull/4649)) [@trxcllnt](https://github.com/trxcllnt)
- Pin `dask` & `distributed` versions ([#4647](https://github.com/rapidsai/cuml/pull/4647)) [@galipremsagar](https://github.com/galipremsagar)
- Remove RAFT MM includes ([#4637](https://github.com/rapidsai/cuml/pull/4637)) [@viclafargue](https://github.com/viclafargue)
- Add option to build RAFT artifacts statically into libcuml++ ([#4633](https://github.com/rapidsai/cuml/pull/4633)) [@dantegd](https://github.com/dantegd)
- Upgrade `dask` & `distributed` minimum version ([#4632](https://github.com/rapidsai/cuml/pull/4632)) [@galipremsagar](https://github.com/galipremsagar)
- Add `.github/ops-bot.yaml` config file ([#4630](https://github.com/rapidsai/cuml/pull/4630)) [@ajschmidt8](https://github.com/ajschmidt8)
- Small fixes for certain test failures ([#4628](https://github.com/rapidsai/cuml/pull/4628)) [@vinaydes](https://github.com/vinaydes)
- Templatizing FIL types to add float64 support ([#4625](https://github.com/rapidsai/cuml/pull/4625)) [@canonizer](https://github.com/canonizer)
- Fitsne as default tsne method ([#4597](https://github.com/rapidsai/cuml/pull/4597)) [@lowener](https://github.com/lowener)
- Add `get_feature_names` to OneHotEncoder ([#4596](https://github.com/rapidsai/cuml/pull/4596)) [@viclafargue](https://github.com/viclafargue)
- Fix OOM and cudaContext crash in C++ benchmarks ([#4594](https://github.com/rapidsai/cuml/pull/4594)) [@RAMitchell](https://github.com/RAMitchell)
- Using Pyraft and automatically cloning when raft pin changes ([#4593](https://github.com/rapidsai/cuml/pull/4593)) [@cjnolet](https://github.com/cjnolet)
- Upgrade Treelite to 2.3.0 ([#4590](https://github.com/rapidsai/cuml/pull/4590)) [@hcho3](https://github.com/hcho3)
- Sphinx warnings as errors ([#4585](https://github.com/rapidsai/cuml/pull/4585)) [@RAMitchell](https://github.com/RAMitchell)
- Adding missing FAISS license ([#4579](https://github.com/rapidsai/cuml/pull/4579)) [@cjnolet](https://github.com/cjnolet)
- Add QN solver to ElasticNet and Lasso models ([#4576](https://github.com/rapidsai/cuml/pull/4576)) [@achirkin](https://github.com/achirkin)
- Move remaining stats prims to raft ([#4568](https://github.com/rapidsai/cuml/pull/4568)) [@cjnolet](https://github.com/cjnolet)
- Moving more ling prims to raft ([#4567](https://github.com/rapidsai/cuml/pull/4567)) [@cjnolet](https://github.com/cjnolet)
- Adding libraft conda dependencies ([#4564](https://github.com/rapidsai/cuml/pull/4564)) [@cjnolet](https://github.com/cjnolet)
- Fix RF integer overflow ([#4563](https://github.com/rapidsai/cuml/pull/4563)) [@RAMitchell](https://github.com/RAMitchell)
- Add CMake `install` rules for tests ([#4551](https://github.com/rapidsai/cuml/pull/4551)) [@ajschmidt8](https://github.com/ajschmidt8)
- Faster GLM preprocessing by fusing kernels ([#4549](https://github.com/rapidsai/cuml/pull/4549)) [@achirkin](https://github.com/achirkin)
- RAFT API updates for lap, label, cluster, and spectral apis ([#4548](https://github.com/rapidsai/cuml/pull/4548)) [@cjnolet](https://github.com/cjnolet)
- Moving cusparse wrappers to detail API in RAFT. ([#4547](https://github.com/rapidsai/cuml/pull/4547)) [@cjnolet](https://github.com/cjnolet)
- Unpin max `dask` and `distributed` versions ([#4546](https://github.com/rapidsai/cuml/pull/4546)) [@galipremsagar](https://github.com/galipremsagar)
- Kernel density estimation ([#4545](https://github.com/rapidsai/cuml/pull/4545)) [@RAMitchell](https://github.com/RAMitchell)
- Update `xgboost` version in CI ([#4541](https://github.com/rapidsai/cuml/pull/4541)) [@ajschmidt8](https://github.com/ajschmidt8)
- replaces `ccache` with `sccache` ([#4534](https://github.com/rapidsai/cuml/pull/4534)) [@AyodeAwe](https://github.com/AyodeAwe)
- Remove RAFT memory management (2/2) ([#4526](https://github.com/rapidsai/cuml/pull/4526)) [@viclafargue](https://github.com/viclafargue)
- Updating RAFT linalg headers ([#4515](https://github.com/rapidsai/cuml/pull/4515)) [@divyegala](https://github.com/divyegala)
- Refactor QN solver: pass parameters via a POD struct ([#4511](https://github.com/rapidsai/cuml/pull/4511)) [@achirkin](https://github.com/achirkin)
- Kernel ridge regression ([#4492](https://github.com/rapidsai/cuml/pull/4492)) [@RAMitchell](https://github.com/RAMitchell)
- QN solvers: Use different gradient norms for different loss functions. ([#4491](https://github.com/rapidsai/cuml/pull/4491)) [@achirkin](https://github.com/achirkin)
- RF: Variable binning and other minor refactoring ([#4479](https://github.com/rapidsai/cuml/pull/4479)) [@venkywonka](https://github.com/venkywonka)
- Rewrite CD solver using more BLAS ([#4446](https://github.com/rapidsai/cuml/pull/4446)) [@achirkin](https://github.com/achirkin)
- Add support for sample_weights in LinearRegression ([#4428](https://github.com/rapidsai/cuml/pull/4428)) [@lowener](https://github.com/lowener)
- Nightly automated benchmark ([#4414](https://github.com/rapidsai/cuml/pull/4414)) [@viclafargue](https://github.com/viclafargue)
- Use FAISS with RMM ([#4297](https://github.com/rapidsai/cuml/pull/4297)) [@viclafargue](https://github.com/viclafargue)
- Split C++ tests into separate binaries ([#4295](https://github.com/rapidsai/cuml/pull/4295)) [@dantegd](https://github.com/dantegd)

# cuML 22.02.00 (2 Feb 2022)

## 🚨 Breaking Changes

- Move NVTX range helpers to raft ([#4445](https://github.com/rapidsai/cuml/pull/4445)) [@achirkin](https://github.com/achirkin)

## 🐛 Bug Fixes

- Always upload libcuml ([#4530](https://github.com/rapidsai/cuml/pull/4530)) [@raydouglass](https://github.com/raydouglass)
- Fix RAFT pin to main branch ([#4508](https://github.com/rapidsai/cuml/pull/4508)) [@dantegd](https://github.com/dantegd)
- Pin `dask` & `distributed` ([#4505](https://github.com/rapidsai/cuml/pull/4505)) [@galipremsagar](https://github.com/galipremsagar)
- Replace use of RMM provided CUDA bindings with CUDA Python ([#4499](https://github.com/rapidsai/cuml/pull/4499)) [@shwina](https://github.com/shwina)
- Dataframe Index as columns in ColumnTransformer ([#4481](https://github.com/rapidsai/cuml/pull/4481)) [@viclafargue](https://github.com/viclafargue)
- Support compilation with Thrust 1.15 ([#4469](https://github.com/rapidsai/cuml/pull/4469)) [@robertmaynard](https://github.com/robertmaynard)
- fix minor ASAN issues in UMAPAlgo::Optimize::find_params_ab() ([#4405](https://github.com/rapidsai/cuml/pull/4405)) [@yitao-li](https://github.com/yitao-li)

## 📖 Documentation

- Remove comment numerical warning ([#4408](https://github.com/rapidsai/cuml/pull/4408)) [@viclafargue](https://github.com/viclafargue)
- Fix docstring for npermutations in PermutationExplainer ([#4402](https://github.com/rapidsai/cuml/pull/4402)) [@hcho3](https://github.com/hcho3)

## 🚀 New Features

- Combine and expose SVC's support vectors when fitting multi-class data ([#4454](https://github.com/rapidsai/cuml/pull/4454)) [@NV-jpt](https://github.com/NV-jpt)
- Accept fold index for TargetEncoder ([#4453](https://github.com/rapidsai/cuml/pull/4453)) [@daxiongshu](https://github.com/daxiongshu)
- Move NVTX range helpers to raft ([#4445](https://github.com/rapidsai/cuml/pull/4445)) [@achirkin](https://github.com/achirkin)

## 🛠️ Improvements

- Fix packages upload ([#4517](https://github.com/rapidsai/cuml/pull/4517)) [@Ethyling](https://github.com/Ethyling)
- Testing split fused l2 knn compilation units ([#4514](https://github.com/rapidsai/cuml/pull/4514)) [@cjnolet](https://github.com/cjnolet)
- Prepare upload scripts for Python 3.7 removal ([#4500](https://github.com/rapidsai/cuml/pull/4500)) [@Ethyling](https://github.com/Ethyling)
- Renaming macros with their RAFT counterparts ([#4496](https://github.com/rapidsai/cuml/pull/4496)) [@divyegala](https://github.com/divyegala)
- Allow CuPy 10 ([#4487](https://github.com/rapidsai/cuml/pull/4487)) [@jakirkham](https://github.com/jakirkham)
- Upgrade Treelite to 2.2.1 ([#4484](https://github.com/rapidsai/cuml/pull/4484)) [@hcho3](https://github.com/hcho3)
- Unpin `dask` and `distributed` ([#4482](https://github.com/rapidsai/cuml/pull/4482)) [@galipremsagar](https://github.com/galipremsagar)
- Support categorical splits in TreeExplainer ([#4473](https://github.com/rapidsai/cuml/pull/4473)) [@hcho3](https://github.com/hcho3)
- Remove RAFT memory management ([#4468](https://github.com/rapidsai/cuml/pull/4468)) [@viclafargue](https://github.com/viclafargue)
- Add missing imports tests ([#4452](https://github.com/rapidsai/cuml/pull/4452)) [@Ethyling](https://github.com/Ethyling)
- Update CUDA 11.5 conda environment to use 22.02 pinnings. ([#4450](https://github.com/rapidsai/cuml/pull/4450)) [@bdice](https://github.com/bdice)
- Support cuML / scikit-learn RF classifiers in TreeExplainer ([#4447](https://github.com/rapidsai/cuml/pull/4447)) [@hcho3](https://github.com/hcho3)
- Remove `IncludeCategories` from `.clang-format` ([#4438](https://github.com/rapidsai/cuml/pull/4438)) [@codereport](https://github.com/codereport)
- Simplify perplexity normalization in t-SNE ([#4425](https://github.com/rapidsai/cuml/pull/4425)) [@zbjornson](https://github.com/zbjornson)
- Unify dense and sparse tests ([#4417](https://github.com/rapidsai/cuml/pull/4417)) [@levsnv](https://github.com/levsnv)
- Update ucx-py version on release using rvc ([#4411](https://github.com/rapidsai/cuml/pull/4411)) [@Ethyling](https://github.com/Ethyling)
- Universal Treelite tree walk function for FIL ([#4407](https://github.com/rapidsai/cuml/pull/4407)) [@levsnv](https://github.com/levsnv)
- Update to UCX-Py 0.24 ([#4396](https://github.com/rapidsai/cuml/pull/4396)) [@pentschev](https://github.com/pentschev)
- Using sparse public API functions from RAFT ([#4389](https://github.com/rapidsai/cuml/pull/4389)) [@cjnolet](https://github.com/cjnolet)
- Add a warning to prefer LinearSVM over SVM(kernel='linear') ([#4382](https://github.com/rapidsai/cuml/pull/4382)) [@achirkin](https://github.com/achirkin)
- Hiding cusparse deprecation warnings ([#4373](https://github.com/rapidsai/cuml/pull/4373)) [@cjnolet](https://github.com/cjnolet)
- Unify dense and sparse import in FIL ([#4328](https://github.com/rapidsai/cuml/pull/4328)) [@levsnv](https://github.com/levsnv)
- Integrating RAFT handle updates ([#4313](https://github.com/rapidsai/cuml/pull/4313)) [@divyegala](https://github.com/divyegala)
- Use RAFT template instantiations for distances ([#4302](https://github.com/rapidsai/cuml/pull/4302)) [@cjnolet](https://github.com/cjnolet)
- RF: code re-organization to enhance build parallelism ([#4299](https://github.com/rapidsai/cuml/pull/4299)) [@venkywonka](https://github.com/venkywonka)
- Add option to build faiss and treelite shared libs, inherit common dependencies from raft ([#4256](https://github.com/rapidsai/cuml/pull/4256)) [@trxcllnt](https://github.com/trxcllnt)

# cuML 21.12.00 (9 Dec 2021)

## 🚨 Breaking Changes

- Fix indexing of PCA to use safer types ([#4255](https://github.com/rapidsai/cuml/pull/4255)) [@lowener](https://github.com/lowener)
- RF: Add Gamma and Inverse Gaussian loss criteria ([#4216](https://github.com/rapidsai/cuml/pull/4216)) [@venkywonka](https://github.com/venkywonka)
- update RF docs ([#4138](https://github.com/rapidsai/cuml/pull/4138)) [@venkywonka](https://github.com/venkywonka)

## 🐛 Bug Fixes

- Update conda recipe to have explicit libcusolver ([#4392](https://github.com/rapidsai/cuml/pull/4392)) [@dantegd](https://github.com/dantegd)
- Restore FIL convention of inlining code ([#4366](https://github.com/rapidsai/cuml/pull/4366)) [@levsnv](https://github.com/levsnv)
- Fix SVR intercept AttributeError ([#4358](https://github.com/rapidsai/cuml/pull/4358)) [@lowener](https://github.com/lowener)
- Fix `is_stable_build` logic for CI scripts ([#4350](https://github.com/rapidsai/cuml/pull/4350)) [@ajschmidt8](https://github.com/ajschmidt8)
- Temporarily disable rmm devicebuffer in array.py ([#4333](https://github.com/rapidsai/cuml/pull/4333)) [@dantegd](https://github.com/dantegd)
- Fix categorical test in python ([#4326](https://github.com/rapidsai/cuml/pull/4326)) [@levsnv](https://github.com/levsnv)
- Revert "Merge pull request #4319 from AyodeAwe/branch-21.12" ([#4325](https://github.com/rapidsai/cuml/pull/4325)) [@ajschmidt8](https://github.com/ajschmidt8)
- Preserve indexing in methods when applied to DataFrame and Series objects ([#4317](https://github.com/rapidsai/cuml/pull/4317)) [@dantegd](https://github.com/dantegd)
- Fix potential CUDA context poison when negative (invalid) categories provided to FIL model ([#4314](https://github.com/rapidsai/cuml/pull/4314)) [@levsnv](https://github.com/levsnv)
- Using sparse expanded distances where possible ([#4310](https://github.com/rapidsai/cuml/pull/4310)) [@cjnolet](https://github.com/cjnolet)
- Fix for `mean_squared_error` ([#4287](https://github.com/rapidsai/cuml/pull/4287)) [@viclafargue](https://github.com/viclafargue)
- Fix for Categorical Naive Bayes sparse handling ([#4277](https://github.com/rapidsai/cuml/pull/4277)) [@lowener](https://github.com/lowener)
- Throw an explicit exception if the input array is empty in DBSCAN.fit #4273 ([#4275](https://github.com/rapidsai/cuml/pull/4275)) [@viktorkovesd](https://github.com/viktorkovesd)
- Fix KernelExplainer returning TypeError for certain input ([#4272](https://github.com/rapidsai/cuml/pull/4272)) [@Nanthini10](https://github.com/Nanthini10)
- Remove most warnings from pytest suite ([#4196](https://github.com/rapidsai/cuml/pull/4196)) [@dantegd](https://github.com/dantegd)

## 📖 Documentation

- Add experimental GPUTreeSHAP to API doc ([#4398](https://github.com/rapidsai/cuml/pull/4398)) [@hcho3](https://github.com/hcho3)
- Fix GLM typo on device/host pointer ([#4320](https://github.com/rapidsai/cuml/pull/4320)) [@lowener](https://github.com/lowener)
- update RF docs ([#4138](https://github.com/rapidsai/cuml/pull/4138)) [@venkywonka](https://github.com/venkywonka)

## 🚀 New Features

- Add GPUTreeSHAP to cuML explainer module (experimental) ([#4351](https://github.com/rapidsai/cuml/pull/4351)) [@hcho3](https://github.com/hcho3)
- Enable training single GPU cuML models using Dask DataFrames and Series ([#4300](https://github.com/rapidsai/cuml/pull/4300)) [@ChrisJar](https://github.com/ChrisJar)
- LinearSVM using QN solvers ([#4268](https://github.com/rapidsai/cuml/pull/4268)) [@achirkin](https://github.com/achirkin)
- Add support for exogenous variables to ARIMA ([#4221](https://github.com/rapidsai/cuml/pull/4221)) [@Nyrio](https://github.com/Nyrio)
- Use opt-in shared memory carveout for FIL ([#3759](https://github.com/rapidsai/cuml/pull/3759)) [@levsnv](https://github.com/levsnv)
- Symbolic Regression/Classification C/C++ ([#3638](https://github.com/rapidsai/cuml/pull/3638)) [@vimarsh6739](https://github.com/vimarsh6739)

## 🛠️ Improvements

- Fix Changelog Merge Conflicts for `branch-21.12` ([#4393](https://github.com/rapidsai/cuml/pull/4393)) [@ajschmidt8](https://github.com/ajschmidt8)
- Pin max `dask` and `distributed` to `2021.11.2` ([#4390](https://github.com/rapidsai/cuml/pull/4390)) [@galipremsagar](https://github.com/galipremsagar)
- Fix forward merge #4349 ([#4374](https://github.com/rapidsai/cuml/pull/4374)) [@dantegd](https://github.com/dantegd)
- Upgrade `clang` to `11.1.0` ([#4372](https://github.com/rapidsai/cuml/pull/4372)) [@galipremsagar](https://github.com/galipremsagar)
- Update clang-format version in docs; allow unanchored version string ([#4365](https://github.com/rapidsai/cuml/pull/4365)) [@zbjornson](https://github.com/zbjornson)
- Add CUDA 11.5 developer environment ([#4364](https://github.com/rapidsai/cuml/pull/4364)) [@dantegd](https://github.com/dantegd)
- Fix aliasing violation in t-SNE ([#4363](https://github.com/rapidsai/cuml/pull/4363)) [@zbjornson](https://github.com/zbjornson)
- Promote FITSNE from experimental ([#4361](https://github.com/rapidsai/cuml/pull/4361)) [@lowener](https://github.com/lowener)
- Fix unnecessary f32/f64 conversions in t-SNE KL calc ([#4331](https://github.com/rapidsai/cuml/pull/4331)) [@zbjornson](https://github.com/zbjornson)
- Update rapids-cmake version ([#4330](https://github.com/rapidsai/cuml/pull/4330)) [@dantegd](https://github.com/dantegd)
- rapids-cmake version update to 21.12 ([#4327](https://github.com/rapidsai/cuml/pull/4327)) [@dantegd](https://github.com/dantegd)
- Use compute-sanitizer instead of cuda-memcheck ([#4324](https://github.com/rapidsai/cuml/pull/4324)) [@teju85](https://github.com/teju85)
- Ability to pass fp64 type to cuml benchmarks ([#4323](https://github.com/rapidsai/cuml/pull/4323)) [@teju85](https://github.com/teju85)
- Split treelite fil import from `forest` object definition ([#4306](https://github.com/rapidsai/cuml/pull/4306)) [@levsnv](https://github.com/levsnv)
- update xgboost version ([#4301](https://github.com/rapidsai/cuml/pull/4301)) [@msadang](https://github.com/msadang)
- Accounting for RAFT updates to matrix, stats, and random implementations in detail ([#4294](https://github.com/rapidsai/cuml/pull/4294)) [@divyegala](https://github.com/divyegala)
- Update cudf matrix calls for to_numpy and to_cupy ([#4293](https://github.com/rapidsai/cuml/pull/4293)) [@dantegd](https://github.com/dantegd)
- Update `conda` recipes for Enhanced Compatibility effort ([#4288](https://github.com/rapidsai/cuml/pull/4288)) [@ajschmidt8](https://github.com/ajschmidt8)
- Increase parallelism from 4 to 8 jobs in CI ([#4286](https://github.com/rapidsai/cuml/pull/4286)) [@dantegd](https://github.com/dantegd)
- RAFT distance prims public API update ([#4280](https://github.com/rapidsai/cuml/pull/4280)) [@cjnolet](https://github.com/cjnolet)
- Update to UCX-Py 0.23 ([#4274](https://github.com/rapidsai/cuml/pull/4274)) [@pentschev](https://github.com/pentschev)
- In FIL, clip blocks_per_sm to one wave instead of asserting ([#4271](https://github.com/rapidsai/cuml/pull/4271)) [@levsnv](https://github.com/levsnv)
- Update of "Gracefully accept 'n_jobs', a common sklearn parameter, in NearestNeighbors Estimator" ([#4267](https://github.com/rapidsai/cuml/pull/4267)) [@NV-jpt](https://github.com/NV-jpt)
- Improve numerical stability of the Kalman filter for ARIMA ([#4259](https://github.com/rapidsai/cuml/pull/4259)) [@Nyrio](https://github.com/Nyrio)
- Fix indexing of PCA to use safer types ([#4255](https://github.com/rapidsai/cuml/pull/4255)) [@lowener](https://github.com/lowener)
- Change calculation of ARIMA confidence intervals ([#4248](https://github.com/rapidsai/cuml/pull/4248)) [@Nyrio](https://github.com/Nyrio)
- Unpin `dask` & `distributed` in CI ([#4235](https://github.com/rapidsai/cuml/pull/4235)) [@galipremsagar](https://github.com/galipremsagar)
- RF: Add Gamma and Inverse Gaussian loss criteria ([#4216](https://github.com/rapidsai/cuml/pull/4216)) [@venkywonka](https://github.com/venkywonka)
- Exposing KL divergence in TSNE ([#4208](https://github.com/rapidsai/cuml/pull/4208)) [@viclafargue](https://github.com/viclafargue)
- Unify template parameter dispatch for FIL inference and shared memory footprint estimation ([#4013](https://github.com/rapidsai/cuml/pull/4013)) [@levsnv](https://github.com/levsnv)

# cuML 21.10.00 (7 Oct 2021)

## 🚨 Breaking Changes

- RF: python api behaviour refactor ([#4207](https://github.com/rapidsai/cuml/pull/4207)) [@venkywonka](https://github.com/venkywonka)
- Implement vector leaf for random forest ([#4191](https://github.com/rapidsai/cuml/pull/4191)) [@RAMitchell](https://github.com/RAMitchell)
- Random forest refactoring ([#4166](https://github.com/rapidsai/cuml/pull/4166)) [@RAMitchell](https://github.com/RAMitchell)
- RF: Add Poisson deviance impurity criterion ([#4156](https://github.com/rapidsai/cuml/pull/4156)) [@venkywonka](https://github.com/venkywonka)
- avoid paramsSolver::{n_rows,n_cols} shadowing their base class counterparts ([#4130](https://github.com/rapidsai/cuml/pull/4130)) [@yitao-li](https://github.com/yitao-li)
- Apply modifications to account for RAFT changes ([#4077](https://github.com/rapidsai/cuml/pull/4077)) [@viclafargue](https://github.com/viclafargue)

## 🐛 Bug Fixes

- Update scikit-learn version in conda dev envs to 0.24 ([#4241](https://github.com/rapidsai/cuml/pull/4241)) [@dantegd](https://github.com/dantegd)
- Using pinned host memory for Random Forest and DBSCAN ([#4215](https://github.com/rapidsai/cuml/pull/4215)) [@divyegala](https://github.com/divyegala)
- Make sure we keep the rapids-cmake and cuml cal version in sync ([#4213](https://github.com/rapidsai/cuml/pull/4213)) [@robertmaynard](https://github.com/robertmaynard)
- Add thrust_create_target to install export in CMakeLists ([#4209](https://github.com/rapidsai/cuml/pull/4209)) [@dantegd](https://github.com/dantegd)
- Change the error type to match sklearn. ([#4198](https://github.com/rapidsai/cuml/pull/4198)) [@achirkin](https://github.com/achirkin)
- Fixing remaining hdbscan bug ([#4179](https://github.com/rapidsai/cuml/pull/4179)) [@cjnolet](https://github.com/cjnolet)
- Fix for cuDF changes to cudf.core ([#4168](https://github.com/rapidsai/cuml/pull/4168)) [@dantegd](https://github.com/dantegd)
- Fixing UMAP reproducibility pytest failures in 11.4 by using random init for now ([#4152](https://github.com/rapidsai/cuml/pull/4152)) [@cjnolet](https://github.com/cjnolet)
- avoid paramsSolver::{n_rows,n_cols} shadowing their base class counterparts ([#4130](https://github.com/rapidsai/cuml/pull/4130)) [@yitao-li](https://github.com/yitao-li)
- Use the new RAPIDS.cmake to fetch rapids-cmake ([#4102](https://github.com/rapidsai/cuml/pull/4102)) [@robertmaynard](https://github.com/robertmaynard)

## 📖 Documentation

- Expose train_test_split in API doc ([#4234](https://github.com/rapidsai/cuml/pull/4234)) [@hcho3](https://github.com/hcho3)
- Adding docs for `.get_feature_names()` inside `TfidfVectorizer` ([#4226](https://github.com/rapidsai/cuml/pull/4226)) [@mayankanand007](https://github.com/mayankanand007)
- Removing experimental flag from hdbscan description in docs ([#4211](https://github.com/rapidsai/cuml/pull/4211)) [@cjnolet](https://github.com/cjnolet)
- updated build instructions ([#4200](https://github.com/rapidsai/cuml/pull/4200)) [@shaneding](https://github.com/shaneding)
- Forward-merge branch-21.08 to branch-21.10 ([#4171](https://github.com/rapidsai/cuml/pull/4171)) [@jakirkham](https://github.com/jakirkham)

## 🚀 New Features

- Experimental option to build libcuml++ only with FIL ([#4225](https://github.com/rapidsai/cuml/pull/4225)) [@dantegd](https://github.com/dantegd)
- FIL to import categorical models from treelite ([#4173](https://github.com/rapidsai/cuml/pull/4173)) [@levsnv](https://github.com/levsnv)
- Add hamming, jensen-shannon, kl-divergence, correlation and russellrao distance metrics ([#4155](https://github.com/rapidsai/cuml/pull/4155)) [@mdoijade](https://github.com/mdoijade)
- Add Categorical Naive Bayes ([#4150](https://github.com/rapidsai/cuml/pull/4150)) [@lowener](https://github.com/lowener)
- FIL to infer categorical forests and generate them in C++ tests ([#4092](https://github.com/rapidsai/cuml/pull/4092)) [@levsnv](https://github.com/levsnv)
- Add Gaussian Naive Bayes ([#4079](https://github.com/rapidsai/cuml/pull/4079)) [@lowener](https://github.com/lowener)
- ARIMA - Add support for missing observations and padding ([#4058](https://github.com/rapidsai/cuml/pull/4058)) [@Nyrio](https://github.com/Nyrio)

## 🛠️ Improvements

- Pin max `dask` and `distributed` versions to 2021.09.1 ([#4229](https://github.com/rapidsai/cuml/pull/4229)) [@galipremsagar](https://github.com/galipremsagar)
- Fea/umap refine ([#4228](https://github.com/rapidsai/cuml/pull/4228)) [@AjayThorve](https://github.com/AjayThorve)
- Upgrade Treelite to 2.1.0 ([#4220](https://github.com/rapidsai/cuml/pull/4220)) [@hcho3](https://github.com/hcho3)
- Add option to clone RAFT even if it is in the environment ([#4217](https://github.com/rapidsai/cuml/pull/4217)) [@dantegd](https://github.com/dantegd)
- RF: python api behaviour refactor ([#4207](https://github.com/rapidsai/cuml/pull/4207)) [@venkywonka](https://github.com/venkywonka)
- Pytest updates for Scikit-learn 0.24 ([#4205](https://github.com/rapidsai/cuml/pull/4205)) [@dantegd](https://github.com/dantegd)
- Faster glm ols-via-eigendecomposition algorithm ([#4201](https://github.com/rapidsai/cuml/pull/4201)) [@achirkin](https://github.com/achirkin)
- Implement vector leaf for random forest ([#4191](https://github.com/rapidsai/cuml/pull/4191)) [@RAMitchell](https://github.com/RAMitchell)
- Refactor kmeans sampling code ([#4190](https://github.com/rapidsai/cuml/pull/4190)) [@Nanthini10](https://github.com/Nanthini10)
- Gracefully accept 'n_jobs', a common sklearn parameter, in NearestNeighbors Estimator ([#4178](https://github.com/rapidsai/cuml/pull/4178)) [@NV-jpt](https://github.com/NV-jpt)
- Update with rapids cmake new features ([#4175](https://github.com/rapidsai/cuml/pull/4175)) [@robertmaynard](https://github.com/robertmaynard)
- Update to UCX-Py 0.22 ([#4174](https://github.com/rapidsai/cuml/pull/4174)) [@pentschev](https://github.com/pentschev)
- Random forest refactoring ([#4166](https://github.com/rapidsai/cuml/pull/4166)) [@RAMitchell](https://github.com/RAMitchell)
- Fix log level for dask tree_reduce ([#4163](https://github.com/rapidsai/cuml/pull/4163)) [@lowener](https://github.com/lowener)
- Add CUDA 11.4 development environment ([#4160](https://github.com/rapidsai/cuml/pull/4160)) [@dantegd](https://github.com/dantegd)
- RF: Add Poisson deviance impurity criterion ([#4156](https://github.com/rapidsai/cuml/pull/4156)) [@venkywonka](https://github.com/venkywonka)
- Split FIL infer_k into phases to speed up compilation (when a patch is applied) ([#4148](https://github.com/rapidsai/cuml/pull/4148)) [@levsnv](https://github.com/levsnv)
- RF node queue rewrite ([#4125](https://github.com/rapidsai/cuml/pull/4125)) [@RAMitchell](https://github.com/RAMitchell)
- Remove max version pin for `dask` & `distributed` on development branch ([#4118](https://github.com/rapidsai/cuml/pull/4118)) [@galipremsagar](https://github.com/galipremsagar)
- Correct name of a
cmake function in get_spdlog.cmake ([#4106](https://github.com/rapidsai/cuml/pull/4106)) [@robertmaynard](https://github.com/robertmaynard) - Apply modifications to account for RAFT changes ([#4077](https://github.com/rapidsai/cuml/pull/4077)) [@viclafargue](https://github.com/viclafargue) - Warnings are errors ([#4075](https://github.com/rapidsai/cuml/pull/4075)) [@harrism](https://github.com/harrism) - ENH Replace gpuci_conda_retry with gpuci_mamba_retry ([#4065](https://github.com/rapidsai/cuml/pull/4065)) [@dillon-cullinan](https://github.com/dillon-cullinan) - Changes to NearestNeighbors to call 2d random ball cover ([#4003](https://github.com/rapidsai/cuml/pull/4003)) [@cjnolet](https://github.com/cjnolet) - support space in workspace ([#3752](https://github.com/rapidsai/cuml/pull/3752)) [@jolorunyomi](https://github.com/jolorunyomi) # cuML 21.08.00 (4 Aug 2021) ## 🚨 Breaking Changes - Remove deprecated target_weights in UMAP ([#4081](https://github.com/rapidsai/cuml/pull/4081)) [@lowener](https://github.com/lowener) - Upgrade Treelite to 2.0.0 ([#4072](https://github.com/rapidsai/cuml/pull/4072)) [@hcho3](https://github.com/hcho3) - RF/DT cleanup ([#4005](https://github.com/rapidsai/cuml/pull/4005)) [@venkywonka](https://github.com/venkywonka) - RF: memset and batch size optimization for computing splits ([#4001](https://github.com/rapidsai/cuml/pull/4001)) [@venkywonka](https://github.com/venkywonka) - Remove old RF backend ([#3868](https://github.com/rapidsai/cuml/pull/3868)) [@RAMitchell](https://github.com/RAMitchell) - Enable warp-per-tree inference in FIL for regression and binary classification ([#3760](https://github.com/rapidsai/cuml/pull/3760)) [@levsnv](https://github.com/levsnv) ## 🐛 Bug Fixes - Disabling umap reproducibility tests for cuda 11.4 ([#4128](https://github.com/rapidsai/cuml/pull/4128)) [@cjnolet](https://github.com/cjnolet) - Fix for crash in RF when `max_leaves` parameter is specified 
([#4126](https://github.com/rapidsai/cuml/pull/4126)) [@vinaydes](https://github.com/vinaydes) - Running umap mnmg test twice ([#4112](https://github.com/rapidsai/cuml/pull/4112)) [@cjnolet](https://github.com/cjnolet) - Minimal fix for `SparseRandomProjection` ([#4100](https://github.com/rapidsai/cuml/pull/4100)) [@viclafargue](https://github.com/viclafargue) - Creating copy of `components` in PCA transform and inverse transform ([#4099](https://github.com/rapidsai/cuml/pull/4099)) [@divyegala](https://github.com/divyegala) - Fix SVM model parameter handling in case n_support=0 ([#4097](https://github.com/rapidsai/cuml/pull/4097)) [@tfeher](https://github.com/tfeher) - Fix set_params for linear models ([#4096](https://github.com/rapidsai/cuml/pull/4096)) [@lowener](https://github.com/lowener) - Fix train test split pytest comparison ([#4062](https://github.com/rapidsai/cuml/pull/4062)) [@dantegd](https://github.com/dantegd) - Fix fit_transform on KMeans ([#4055](https://github.com/rapidsai/cuml/pull/4055)) [@lowener](https://github.com/lowener) - Fixing -1 key access in 1nn reduce op in HDBSCAN ([#4052](https://github.com/rapidsai/cuml/pull/4052)) [@divyegala](https://github.com/divyegala) - Disable installing gbench to avoid container permission issues ([#4049](https://github.com/rapidsai/cuml/pull/4049)) [@dantegd](https://github.com/dantegd) - Fix double fit crash in preprocessing models ([#4040](https://github.com/rapidsai/cuml/pull/4040)) [@viclafargue](https://github.com/viclafargue) - Always add `faiss` library alias if it&#39;s missing ([#4028](https://github.com/rapidsai/cuml/pull/4028)) [@trxcllnt](https://github.com/trxcllnt) - Fixing intermittent HBDSCAN pytest failure in CI ([#4025](https://github.com/rapidsai/cuml/pull/4025)) [@divyegala](https://github.com/divyegala) - HDBSCAN bug on A100 ([#4024](https://github.com/rapidsai/cuml/pull/4024)) [@divyegala](https://github.com/divyegala) - Add treelite include paths to treelite targets 
([#4023](https://github.com/rapidsai/cuml/pull/4023)) [@trxcllnt](https://github.com/trxcllnt) - Add Treelite_BINARY_DIR include to `cuml++` build interface include paths ([#4018](https://github.com/rapidsai/cuml/pull/4018)) [@trxcllnt](https://github.com/trxcllnt) - Small ARIMA-related bug fixes in Hessenberg reduction and make_arima ([#4017](https://github.com/rapidsai/cuml/pull/4017)) [@Nyrio](https://github.com/Nyrio) - Update setup.py ([#4015](https://github.com/rapidsai/cuml/pull/4015)) [@ajschmidt8](https://github.com/ajschmidt8) - Update `treelite` version in `get_treelite.cmake` ([#4014](https://github.com/rapidsai/cuml/pull/4014)) [@ajschmidt8](https://github.com/ajschmidt8) - Fix build with latest RAFT branch-21.08 ([#4012](https://github.com/rapidsai/cuml/pull/4012)) [@trxcllnt](https://github.com/trxcllnt) - Skipping hdbscan pytests when gpu is a100 ([#4007](https://github.com/rapidsai/cuml/pull/4007)) [@cjnolet](https://github.com/cjnolet) - Using 64-bit array lengths to increase scale of pca &amp; tsvd ([#3983](https://github.com/rapidsai/cuml/pull/3983)) [@cjnolet](https://github.com/cjnolet) - Fix MNMG test in Dask RF ([#3964](https://github.com/rapidsai/cuml/pull/3964)) [@hcho3](https://github.com/hcho3) - Use nested include in destination of install headers to avoid docker permission issues ([#3962](https://github.com/rapidsai/cuml/pull/3962)) [@dantegd](https://github.com/dantegd) - Fix automerge #3939 ([#3952](https://github.com/rapidsai/cuml/pull/3952)) [@dantegd](https://github.com/dantegd) - Update UCX-Py version to 0.21 ([#3950](https://github.com/rapidsai/cuml/pull/3950)) [@pentschev](https://github.com/pentschev) - Fix kernel and line info in cmake ([#3941](https://github.com/rapidsai/cuml/pull/3941)) [@dantegd](https://github.com/dantegd) - Fix for multi GPU PCA compute failing bug after transform and added error handling when n_components is not passed ([#3912](https://github.com/rapidsai/cuml/pull/3912)) 
[@akaanirban](https://github.com/akaanirban) - Tolerate QN linesearch failures when it&#39;s harmless ([#3791](https://github.com/rapidsai/cuml/pull/3791)) [@achirkin](https://github.com/achirkin) ## 📖 Documentation - Improve docstrings for silhouette score metrics. ([#4026](https://github.com/rapidsai/cuml/pull/4026)) [@bdice](https://github.com/bdice) - Update CHANGELOG.md link ([#3956](https://github.com/rapidsai/cuml/pull/3956)) [@Salonijain27](https://github.com/Salonijain27) - Update documentation build examples to be generator agnostic ([#3909](https://github.com/rapidsai/cuml/pull/3909)) [@robertmaynard](https://github.com/robertmaynard) - Improve FIL code readability and documentation ([#3056](https://github.com/rapidsai/cuml/pull/3056)) [@levsnv](https://github.com/levsnv) ## 🚀 New Features - Add Multinomial and Bernoulli Naive Bayes variants ([#4053](https://github.com/rapidsai/cuml/pull/4053)) [@lowener](https://github.com/lowener) - Add weighted K-Means sampling for SHAP ([#4051](https://github.com/rapidsai/cuml/pull/4051)) [@Nanthini10](https://github.com/Nanthini10) - Use chebyshev, canberra, hellinger and minkowski distance metrics ([#3990](https://github.com/rapidsai/cuml/pull/3990)) [@mdoijade](https://github.com/mdoijade) - Implement vector leaf prediction for fil. 
([#3917](https://github.com/rapidsai/cuml/pull/3917)) [@RAMitchell](https://github.com/RAMitchell) - change TargetEncoder&#39;s smooth argument from ratio to count ([#3876](https://github.com/rapidsai/cuml/pull/3876)) [@daxiongshu](https://github.com/daxiongshu) - Enable warp-per-tree inference in FIL for regression and binary classification ([#3760](https://github.com/rapidsai/cuml/pull/3760)) [@levsnv](https://github.com/levsnv) ## 🛠️ Improvements - Remove clang/clang-tools from conda recipe ([#4109](https://github.com/rapidsai/cuml/pull/4109)) [@dantegd](https://github.com/dantegd) - Pin dask version ([#4108](https://github.com/rapidsai/cuml/pull/4108)) [@galipremsagar](https://github.com/galipremsagar) - ANN warnings/tests updates ([#4101](https://github.com/rapidsai/cuml/pull/4101)) [@viclafargue](https://github.com/viclafargue) - Removing local memory operations from computeSplitKernel and other optimizations ([#4083](https://github.com/rapidsai/cuml/pull/4083)) [@vinaydes](https://github.com/vinaydes) - Fix libfaiss dependency to not expressly depend on conda-forge ([#4082](https://github.com/rapidsai/cuml/pull/4082)) [@Ethyling](https://github.com/Ethyling) - Remove deprecated target_weights in UMAP ([#4081](https://github.com/rapidsai/cuml/pull/4081)) [@lowener](https://github.com/lowener) - Upgrade Treelite to 2.0.0 ([#4072](https://github.com/rapidsai/cuml/pull/4072)) [@hcho3](https://github.com/hcho3) - Optimize dtype conversion for FIL ([#4070](https://github.com/rapidsai/cuml/pull/4070)) [@dantegd](https://github.com/dantegd) - Adding quick notes to HDBSCAN public API docs as to why discrepancies may occur between cpu and gpu impls. 
([#4061](https://github.com/rapidsai/cuml/pull/4061)) [@cjnolet](https://github.com/cjnolet) - Update `conda` environment name for CI ([#4039](https://github.com/rapidsai/cuml/pull/4039)) [@ajschmidt8](https://github.com/ajschmidt8) - Rewrite random forest gtests ([#4038](https://github.com/rapidsai/cuml/pull/4038)) [@RAMitchell](https://github.com/RAMitchell) - Updating Clang Version to 11.0.0 ([#4029](https://github.com/rapidsai/cuml/pull/4029)) [@codereport](https://github.com/codereport) - Raise ARIMA parameter limits from 4 to 8 ([#4022](https://github.com/rapidsai/cuml/pull/4022)) [@Nyrio](https://github.com/Nyrio) - Testing extract clusters in HDBSCAN ([#4009](https://github.com/rapidsai/cuml/pull/4009)) [@divyegala](https://github.com/divyegala) - ARIMA - Kalman loop rewrite: single megakernel instead of host loop ([#4006](https://github.com/rapidsai/cuml/pull/4006)) [@Nyrio](https://github.com/Nyrio) - RF/DT cleanup ([#4005](https://github.com/rapidsai/cuml/pull/4005)) [@venkywonka](https://github.com/venkywonka) - Exposing condensed hierarchy through cython for easier unit-level testing ([#4004](https://github.com/rapidsai/cuml/pull/4004)) [@cjnolet](https://github.com/cjnolet) - Use the 21.08 branch of rapids-cmake as rmm requires it ([#4002](https://github.com/rapidsai/cuml/pull/4002)) [@robertmaynard](https://github.com/robertmaynard) - RF: memset and batch size optimization for computing splits ([#4001](https://github.com/rapidsai/cuml/pull/4001)) [@venkywonka](https://github.com/venkywonka) - Reducing cluster size to number of selected clusters. 
Returning stability scores ([#3987](https://github.com/rapidsai/cuml/pull/3987)) [@cjnolet](https://github.com/cjnolet) - HDBSCAN: Lazy-loading (and caching) condensed &amp; single-linkage tree objects ([#3986](https://github.com/rapidsai/cuml/pull/3986)) [@cjnolet](https://github.com/cjnolet) - Fix `21.08` forward-merge conflicts ([#3982](https://github.com/rapidsai/cuml/pull/3982)) [@ajschmidt8](https://github.com/ajschmidt8) - Update Dask/Distributed version ([#3978](https://github.com/rapidsai/cuml/pull/3978)) [@pentschev](https://github.com/pentschev) - Use clang-tools on x86 only ([#3969](https://github.com/rapidsai/cuml/pull/3969)) [@jakirkham](https://github.com/jakirkham) - Promote `trustworthiness_score` to public header, add missing includes, update dependencies ([#3968](https://github.com/rapidsai/cuml/pull/3968)) [@trxcllnt](https://github.com/trxcllnt) - Moving FAISS ANN wrapper to raft ([#3963](https://github.com/rapidsai/cuml/pull/3963)) [@cjnolet](https://github.com/cjnolet) - Add MG weighted k-means ([#3959](https://github.com/rapidsai/cuml/pull/3959)) [@lowener](https://github.com/lowener) - Remove unused code in UMAP. 
([#3931](https://github.com/rapidsai/cuml/pull/3931)) [@trivialfis](https://github.com/trivialfis) - Fix automerge #3900 and correct package versions in meta packages ([#3918](https://github.com/rapidsai/cuml/pull/3918)) [@dantegd](https://github.com/dantegd) - Adaptive stress tests when GPU memory capacity is insufficient ([#3916](https://github.com/rapidsai/cuml/pull/3916)) [@lowener](https://github.com/lowener) - Fix merge conflicts ([#3892](https://github.com/rapidsai/cuml/pull/3892)) [@ajschmidt8](https://github.com/ajschmidt8) - Remove old RF backend ([#3868](https://github.com/rapidsai/cuml/pull/3868)) [@RAMitchell](https://github.com/RAMitchell) - Refactor to extract random forest objectives ([#3854](https://github.com/rapidsai/cuml/pull/3854)) [@RAMitchell](https://github.com/RAMitchell) # cuML 21.06.00 (9 Jun 2021) ## 🚨 Breaking Changes - Remove Base.enable_rmm_pool method as it is no longer needed ([#3875](https://github.com/rapidsai/cuml/pull/3875)) [@teju85](https://github.com/teju85) - RF: Make experimental-backend default for regression tasks and deprecate old-backend. ([#3872](https://github.com/rapidsai/cuml/pull/3872)) [@venkywonka](https://github.com/venkywonka) - Deterministic UMAP with floating point rounding. 
([#3848](https://github.com/rapidsai/cuml/pull/3848)) [@trivialfis](https://github.com/trivialfis) - Fix RF regression performance ([#3845](https://github.com/rapidsai/cuml/pull/3845)) [@RAMitchell](https://github.com/RAMitchell) - Add feature to print forest shape in FIL upon importing ([#3763](https://github.com/rapidsai/cuml/pull/3763)) [@levsnv](https://github.com/levsnv) - Remove &#39;seed&#39; and &#39;output_type&#39; deprecated features ([#3739](https://github.com/rapidsai/cuml/pull/3739)) [@lowener](https://github.com/lowener) ## 🐛 Bug Fixes - Disable UMAP deterministic test on CTK11.2 ([#3942](https://github.com/rapidsai/cuml/pull/3942)) [@trivialfis](https://github.com/trivialfis) - Revert #3869 ([#3933](https://github.com/rapidsai/cuml/pull/3933)) [@hcho3](https://github.com/hcho3) - RF: fix the bug in `pdf_to_cdf` device function that causes hang when `n_bins &gt; TPB &amp;&amp; n_bins % TPB != 0` ([#3921](https://github.com/rapidsai/cuml/pull/3921)) [@venkywonka](https://github.com/venkywonka) - Fix number of permutations in pytest and getting handle for cuml models ([#3920](https://github.com/rapidsai/cuml/pull/3920)) [@dantegd](https://github.com/dantegd) - Fix typo in umap `target_weight` parameter ([#3914](https://github.com/rapidsai/cuml/pull/3914)) [@lowener](https://github.com/lowener) - correct compliation of cuml c library ([#3908](https://github.com/rapidsai/cuml/pull/3908)) [@robertmaynard](https://github.com/robertmaynard) - Correct install path for include folder to avoid double nesting ([#3901](https://github.com/rapidsai/cuml/pull/3901)) [@dantegd](https://github.com/dantegd) - Add type check for y in train_test_split ([#3886](https://github.com/rapidsai/cuml/pull/3886)) [@Nanthini10](https://github.com/Nanthini10) - Fix for MNMG test_rf_classification_dask_fil_predict_proba ([#3831](https://github.com/rapidsai/cuml/pull/3831)) [@lowener](https://github.com/lowener) - Fix MNMG test test_rf_regression_dask_fil 
([#3830](https://github.com/rapidsai/cuml/pull/3830)) [@hcho3](https://github.com/hcho3) - AgglomerativeClustering support single cluster and ignore only zero distances from self-loops ([#3824](https://github.com/rapidsai/cuml/pull/3824)) [@cjnolet](https://github.com/cjnolet) ## 📖 Documentation - Small doc fixes for 21.06 release ([#3936](https://github.com/rapidsai/cuml/pull/3936)) [@dantegd](https://github.com/dantegd) - Document ability to export cuML RF to predict on other machines ([#3890](https://github.com/rapidsai/cuml/pull/3890)) [@hcho3](https://github.com/hcho3) ## 🚀 New Features - Deterministic UMAP with floating point rounding. ([#3848](https://github.com/rapidsai/cuml/pull/3848)) [@trivialfis](https://github.com/trivialfis) - HDBSCAN ([#3821](https://github.com/rapidsai/cuml/pull/3821)) [@cjnolet](https://github.com/cjnolet) - Add feature to print forest shape in FIL upon importing ([#3763](https://github.com/rapidsai/cuml/pull/3763)) [@levsnv](https://github.com/levsnv) ## 🛠️ Improvements - Pin dask ot 2021.5.1 for 21.06 release ([#3937](https://github.com/rapidsai/cuml/pull/3937)) [@dantegd](https://github.com/dantegd) - Upgrade xgboost to 1.4.2 ([#3925](https://github.com/rapidsai/cuml/pull/3925)) [@dantegd](https://github.com/dantegd) - Use UCX-Py 0.20 ([#3911](https://github.com/rapidsai/cuml/pull/3911)) [@jakirkham](https://github.com/jakirkham) - Upgrade NCCL to 2.9.9 ([#3902](https://github.com/rapidsai/cuml/pull/3902)) [@dantegd](https://github.com/dantegd) - Update conda developer environments ([#3898](https://github.com/rapidsai/cuml/pull/3898)) [@viclafargue](https://github.com/viclafargue) - ARIMA: pre-allocation of temporary memory to reduce latencies ([#3895](https://github.com/rapidsai/cuml/pull/3895)) [@Nyrio](https://github.com/Nyrio) - Condense TSNE parameters into a struct ([#3884](https://github.com/rapidsai/cuml/pull/3884)) [@lowener](https://github.com/lowener) - Update `CHANGELOG.md` links for calver 
([#3883](https://github.com/rapidsai/cuml/pull/3883)) [@ajschmidt8](https://github.com/ajschmidt8) - Make sure `__init__` is called in graph callback. ([#3881](https://github.com/rapidsai/cuml/pull/3881)) [@trivialfis](https://github.com/trivialfis) - Update docs build script ([#3877](https://github.com/rapidsai/cuml/pull/3877)) [@ajschmidt8](https://github.com/ajschmidt8) - Remove Base.enable_rmm_pool method as it is no longer needed ([#3875](https://github.com/rapidsai/cuml/pull/3875)) [@teju85](https://github.com/teju85) - RF: Make experimental-backend default for regression tasks and deprecate old-backend. ([#3872](https://github.com/rapidsai/cuml/pull/3872)) [@venkywonka](https://github.com/venkywonka) - Enable probability output from RF binary classifier (alternative implementaton) ([#3869](https://github.com/rapidsai/cuml/pull/3869)) [@hcho3](https://github.com/hcho3) - CI test speed improvement ([#3851](https://github.com/rapidsai/cuml/pull/3851)) [@lowener](https://github.com/lowener) - Fix RF regression performance ([#3845](https://github.com/rapidsai/cuml/pull/3845)) [@RAMitchell](https://github.com/RAMitchell) - Update to CMake 3.20 features, `rapids-cmake` and `CPM` ([#3844](https://github.com/rapidsai/cuml/pull/3844)) [@dantegd](https://github.com/dantegd) - Support sparse input features in QN solvers and Logistic Regression ([#3827](https://github.com/rapidsai/cuml/pull/3827)) [@achirkin](https://github.com/achirkin) - Trustworthiness score improvements ([#3826](https://github.com/rapidsai/cuml/pull/3826)) [@viclafargue](https://github.com/viclafargue) - Performance optimization of RF split kernels by removing empty cycles ([#3818](https://github.com/rapidsai/cuml/pull/3818)) [@vinaydes](https://github.com/vinaydes) - Correct deprecate positional args decorator for CalVer ([#3784](https://github.com/rapidsai/cuml/pull/3784)) [@lowener](https://github.com/lowener) - ColumnTransformer &amp; FunctionTransformer 
([#3745](https://github.com/rapidsai/cuml/pull/3745)) [@viclafargue](https://github.com/viclafargue) - Remove &#39;seed&#39; and &#39;output_type&#39; deprecated features ([#3739](https://github.com/rapidsai/cuml/pull/3739)) [@lowener](https://github.com/lowener) # cuML 0.19.0 (21 Apr 2021) ## 🚨 Breaking Changes - Use the new RF backend by default for classification ([#3686](https://github.com//rapidsai/cuml/pull/3686)) [@hcho3](https://github.com/hcho3) - Deprecating quantile-per-tree and removing three previously deprecated Random Forest parameters ([#3667](https://github.com//rapidsai/cuml/pull/3667)) [@vinaydes](https://github.com/vinaydes) - Update predict() / predict_proba() of RF to match sklearn ([#3609](https://github.com//rapidsai/cuml/pull/3609)) [@hcho3](https://github.com/hcho3) - Upgrade FAISS to 1.7.x ([#3509](https://github.com//rapidsai/cuml/pull/3509)) [@viclafargue](https://github.com/viclafargue) - cuML&#39;s estimator Base class for preprocessing models ([#3270](https://github.com//rapidsai/cuml/pull/3270)) [@viclafargue](https://github.com/viclafargue) ## 🐛 Bug Fixes - Fix brute force KNN distance metric issue ([#3755](https://github.com//rapidsai/cuml/pull/3755)) [@viclafargue](https://github.com/viclafargue) - Fix min_max_axis ([#3735](https://github.com//rapidsai/cuml/pull/3735)) [@viclafargue](https://github.com/viclafargue) - Fix NaN errors observed with ARIMA in CUDA 11.2 builds ([#3730](https://github.com//rapidsai/cuml/pull/3730)) [@Nyrio](https://github.com/Nyrio) - Fix random state generator ([#3716](https://github.com//rapidsai/cuml/pull/3716)) [@viclafargue](https://github.com/viclafargue) - Fixes the out of memory access issue for computeSplit kernels ([#3715](https://github.com//rapidsai/cuml/pull/3715)) [@vinaydes](https://github.com/vinaydes) - Fixing umap gtest failure under cuda 11.2. 
([#3696](https://github.com//rapidsai/cuml/pull/3696)) [@cjnolet](https://github.com/cjnolet) - Fix irreproducibility issue in RF classification ([#3693](https://github.com//rapidsai/cuml/pull/3693)) [@vinaydes](https://github.com/vinaydes) - BUG fix BatchedLevelAlgo DtClsTest &amp; DtRegTest failing tests ([#3690](https://github.com//rapidsai/cuml/pull/3690)) [@venkywonka](https://github.com/venkywonka) - Restore the functionality of RF score() ([#3685](https://github.com//rapidsai/cuml/pull/3685)) [@hcho3](https://github.com/hcho3) - Use main build.sh to build docs in docs CI ([#3681](https://github.com//rapidsai/cuml/pull/3681)) [@dantegd](https://github.com/dantegd) - Revert &quot;Update conda recipes pinning of repo dependencies&quot; ([#3680](https://github.com//rapidsai/cuml/pull/3680)) [@raydouglass](https://github.com/raydouglass) - Skip tests that fail on CUDA 11.2 ([#3679](https://github.com//rapidsai/cuml/pull/3679)) [@dantegd](https://github.com/dantegd) - Dask KNN Cl&amp;Re 1D labels ([#3668](https://github.com//rapidsai/cuml/pull/3668)) [@viclafargue](https://github.com/viclafargue) - Update conda recipes pinning of repo dependencies ([#3666](https://github.com//rapidsai/cuml/pull/3666)) [@mike-wendt](https://github.com/mike-wendt) - OOB access in GLM SoftMax ([#3642](https://github.com//rapidsai/cuml/pull/3642)) [@divyegala](https://github.com/divyegala) - SilhouetteScore C++ tests seed ([#3640](https://github.com//rapidsai/cuml/pull/3640)) [@divyegala](https://github.com/divyegala) - SimpleImputer fix ([#3624](https://github.com//rapidsai/cuml/pull/3624)) [@viclafargue](https://github.com/viclafargue) - Silhouette Score `make_monotonic` for non-monotonic label set ([#3619](https://github.com//rapidsai/cuml/pull/3619)) [@divyegala](https://github.com/divyegala) - Fixing support for empty rows in sparse Jaccard / Cosine ([#3612](https://github.com//rapidsai/cuml/pull/3612)) [@cjnolet](https://github.com/cjnolet) - Fix train_test_split with stratify 
option ([#3611](https://github.com//rapidsai/cuml/pull/3611)) [@Nanthini10](https://github.com/Nanthini10) - Update predict() / predict_proba() of RF to match sklearn ([#3609](https://github.com//rapidsai/cuml/pull/3609)) [@hcho3](https://github.com/hcho3) - Change dask and distributed branch to main ([#3593](https://github.com//rapidsai/cuml/pull/3593)) [@dantegd](https://github.com/dantegd) - Fixes memory allocation for experimental backend and improves quantile computations ([#3586](https://github.com//rapidsai/cuml/pull/3586)) [@vinaydes](https://github.com/vinaydes) - Add ucx-proc package back that got lost during an auto merge conflict ([#3550](https://github.com//rapidsai/cuml/pull/3550)) [@dantegd](https://github.com/dantegd) - Fix failing Hellinger gtest ([#3549](https://github.com//rapidsai/cuml/pull/3549)) [@cjnolet](https://github.com/cjnolet) - Directly invoke make for non-CMake docs target ([#3534](https://github.com//rapidsai/cuml/pull/3534)) [@wphicks](https://github.com/wphicks) - Fix Codecov.io Coverage Upload for Branch Builds ([#3524](https://github.com//rapidsai/cuml/pull/3524)) [@mdemoret-nv](https://github.com/mdemoret-nv) - Ensure global_output_type is thread-safe ([#3497](https://github.com//rapidsai/cuml/pull/3497)) [@wphicks](https://github.com/wphicks) - List as input for SimpleImputer ([#3489](https://github.com//rapidsai/cuml/pull/3489)) [@viclafargue](https://github.com/viclafargue) ## 📖 Documentation - Add sparse docstring comments ([#3712](https://github.com//rapidsai/cuml/pull/3712)) [@JohnZed](https://github.com/JohnZed) - FIL and Dask demo ([#3698](https://github.com//rapidsai/cuml/pull/3698)) [@miroenev](https://github.com/miroenev) - Deprecating quantile-per-tree and removing three previously deprecated Random Forest parameters ([#3667](https://github.com//rapidsai/cuml/pull/3667)) [@vinaydes](https://github.com/vinaydes) - Fixing Indentation for Docstring Generators ([#3650](https://github.com//rapidsai/cuml/pull/3650)) 
[@mdemoret-nv](https://github.com/mdemoret-nv) - Update doc to indicate ExtraTree support ([#3635](https://github.com//rapidsai/cuml/pull/3635)) [@hcho3](https://github.com/hcho3) - Update doc, now that FIL supports multi-class classification ([#3634](https://github.com//rapidsai/cuml/pull/3634)) [@hcho3](https://github.com/hcho3) - Document model_type=&#39;xgboost_json&#39; in FIL ([#3633](https://github.com//rapidsai/cuml/pull/3633)) [@hcho3](https://github.com/hcho3) - Including log loss metric to the documentation website ([#3617](https://github.com//rapidsai/cuml/pull/3617)) [@lowener](https://github.com/lowener) - Update the build doc regarding the use of GCC 7.5 ([#3605](https://github.com//rapidsai/cuml/pull/3605)) [@hcho3](https://github.com/hcho3) - Update One-Hot Encoder doc ([#3600](https://github.com//rapidsai/cuml/pull/3600)) [@lowener](https://github.com/lowener) - Fix documentation of KMeans ([#3595](https://github.com//rapidsai/cuml/pull/3595)) [@lowener](https://github.com/lowener) ## 🚀 New Features - Reduce the size of the cuml libraries ([#3702](https://github.com//rapidsai/cuml/pull/3702)) [@robertmaynard](https://github.com/robertmaynard) - Use ninja as default CMake generator ([#3664](https://github.com//rapidsai/cuml/pull/3664)) [@wphicks](https://github.com/wphicks) - Single-Linkage Hierarchical Clustering Python Wrapper ([#3631](https://github.com//rapidsai/cuml/pull/3631)) [@cjnolet](https://github.com/cjnolet) - Support for precomputed distance matrix in DBSCAN ([#3585](https://github.com//rapidsai/cuml/pull/3585)) [@Nyrio](https://github.com/Nyrio) - Adding haversine to brute force knn ([#3579](https://github.com//rapidsai/cuml/pull/3579)) [@cjnolet](https://github.com/cjnolet) - Support for sample_weight parameter in LogisticRegression ([#3572](https://github.com//rapidsai/cuml/pull/3572)) [@viclafargue](https://github.com/viclafargue) - Provide &quot;--ccache&quot; flag for build.sh 
([#3566](https://github.com//rapidsai/cuml/pull/3566)) [@wphicks](https://github.com/wphicks) - Eliminate unnecessary includes discovered by cppclean ([#3564](https://github.com//rapidsai/cuml/pull/3564)) [@wphicks](https://github.com/wphicks) - Single-linkage Hierarchical Clustering C++ ([#3545](https://github.com//rapidsai/cuml/pull/3545)) [@cjnolet](https://github.com/cjnolet) - Expose sparse distances via semiring to Python API ([#3516](https://github.com//rapidsai/cuml/pull/3516)) [@lowener](https://github.com/lowener) - Use cmake --build in build.sh to facilitate switching build tools ([#3487](https://github.com//rapidsai/cuml/pull/3487)) [@wphicks](https://github.com/wphicks) - Add cython hinge_loss ([#3409](https://github.com//rapidsai/cuml/pull/3409)) [@Nanthini10](https://github.com/Nanthini10) - Adding CodeCov Info for Dask Tests ([#3338](https://github.com//rapidsai/cuml/pull/3338)) [@mdemoret-nv](https://github.com/mdemoret-nv) - Add predict_proba() to XGBoost-style models in FIL C++ ([#2894](https://github.com//rapidsai/cuml/pull/2894)) [@levsnv](https://github.com/levsnv) ## 🛠️ Improvements - Updating docs, readme, and umap param tests for 0.19 ([#3731](https://github.com//rapidsai/cuml/pull/3731)) [@cjnolet](https://github.com/cjnolet) - Locking RAFT hash for 0.19 ([#3721](https://github.com//rapidsai/cuml/pull/3721)) [@cjnolet](https://github.com/cjnolet) - Upgrade to Treelite 1.1.0 ([#3708](https://github.com//rapidsai/cuml/pull/3708)) [@hcho3](https://github.com/hcho3) - Update to XGBoost 1.4.0rc1 ([#3699](https://github.com//rapidsai/cuml/pull/3699)) [@hcho3](https://github.com/hcho3) - Use the new RF backend by default for classification ([#3686](https://github.com//rapidsai/cuml/pull/3686)) [@hcho3](https://github.com/hcho3) - Update LogisticRegression documentation ([#3677](https://github.com//rapidsai/cuml/pull/3677)) [@viclafargue](https://github.com/viclafargue) - Preprocessing out of experimental 
([#3676](https://github.com//rapidsai/cuml/pull/3676)) [@viclafargue](https://github.com/viclafargue) - ENH Decision Tree new backend `computeSplit*Kernel` histogram calculation optimization ([#3674](https://github.com//rapidsai/cuml/pull/3674)) [@venkywonka](https://github.com/venkywonka) - Remove `check_cupy8` ([#3669](https://github.com//rapidsai/cuml/pull/3669)) [@viclafargue](https://github.com/viclafargue) - Use custom conda build directory for ccache integration ([#3658](https://github.com//rapidsai/cuml/pull/3658)) [@dillon-cullinan](https://github.com/dillon-cullinan) - Disable three flaky tests ([#3657](https://github.com//rapidsai/cuml/pull/3657)) [@hcho3](https://github.com/hcho3) - CUDA 11.2 developer environment ([#3648](https://github.com//rapidsai/cuml/pull/3648)) [@dantegd](https://github.com/dantegd) - Store data frequencies in tree nodes of RF ([#3647](https://github.com//rapidsai/cuml/pull/3647)) [@hcho3](https://github.com/hcho3) - Row major Gram matrices ([#3639](https://github.com//rapidsai/cuml/pull/3639)) [@tfeher](https://github.com/tfeher) - Converting all Estimator Constructors to Keyword Arguments ([#3636](https://github.com//rapidsai/cuml/pull/3636)) [@mdemoret-nv](https://github.com/mdemoret-nv) - Adding make_pipeline + test score with pipeline ([#3632](https://github.com//rapidsai/cuml/pull/3632)) [@viclafargue](https://github.com/viclafargue) - ENH Decision Tree new backend `computeSplitClassificationKernel` histogram calculation and occupancy optimization ([#3616](https://github.com//rapidsai/cuml/pull/3616)) [@venkywonka](https://github.com/venkywonka) - Revert &quot;ENH Fix stale GHA and prevent duplicates &quot; ([#3614](https://github.com//rapidsai/cuml/pull/3614)) [@mike-wendt](https://github.com/mike-wendt) - ENH Fix stale GHA and prevent duplicates ([#3613](https://github.com//rapidsai/cuml/pull/3613)) [@mike-wendt](https://github.com/mike-wendt) - KNN from RAFT ([#3603](https://github.com//rapidsai/cuml/pull/3603)) 
[@viclafargue](https://github.com/viclafargue) - Update Changelog Link ([#3601](https://github.com//rapidsai/cuml/pull/3601)) [@ajschmidt8](https://github.com/ajschmidt8) - Move SHAP explainers out of experimental ([#3596](https://github.com//rapidsai/cuml/pull/3596)) [@dantegd](https://github.com/dantegd) - Fixing compatibility issue with CUDA array interface ([#3594](https://github.com//rapidsai/cuml/pull/3594)) [@lowener](https://github.com/lowener) - Remove cutlass usage in row major input for euclidean exp/unexp, cosine and L1 distance matrix ([#3589](https://github.com//rapidsai/cuml/pull/3589)) [@mdoijade](https://github.com/mdoijade) - Test FIL probabilities with absolute error thresholds in python ([#3582](https://github.com//rapidsai/cuml/pull/3582)) [@levsnv](https://github.com/levsnv) - Removing sparse prims and fused l2 nn prim from cuml ([#3578](https://github.com//rapidsai/cuml/pull/3578)) [@cjnolet](https://github.com/cjnolet) - Prepare Changelog for Automation ([#3570](https://github.com//rapidsai/cuml/pull/3570)) [@ajschmidt8](https://github.com/ajschmidt8) - Print debug message if SVM convergence is poor ([#3562](https://github.com//rapidsai/cuml/pull/3562)) [@tfeher](https://github.com/tfeher) - Fix merge conflicts in 3552 ([#3557](https://github.com//rapidsai/cuml/pull/3557)) [@ajschmidt8](https://github.com/ajschmidt8) - Additional distance metrics for ANN ([#3533](https://github.com//rapidsai/cuml/pull/3533)) [@viclafargue](https://github.com/viclafargue) - Improve warning message when QN solver reaches max_iter ([#3515](https://github.com//rapidsai/cuml/pull/3515)) [@tfeher](https://github.com/tfeher) - Fix merge conflicts in 3502 ([#3513](https://github.com//rapidsai/cuml/pull/3513)) [@ajschmidt8](https://github.com/ajschmidt8) - Upgrade FAISS to 1.7.x ([#3509](https://github.com//rapidsai/cuml/pull/3509)) [@viclafargue](https://github.com/viclafargue) - ENH Pass ccache variables to conda recipe &amp; use Ninja in CI 
([#3508](https://github.com//rapidsai/cuml/pull/3508)) [@Ethyling](https://github.com/Ethyling) - Fix forward-merger conflicts in #3502 ([#3506](https://github.com//rapidsai/cuml/pull/3506)) [@dantegd](https://github.com/dantegd) - Sklearn meta-estimators into namespace ([#3493](https://github.com//rapidsai/cuml/pull/3493)) [@viclafargue](https://github.com/viclafargue) - Add flexibility to copyright checker ([#3466](https://github.com//rapidsai/cuml/pull/3466)) [@lowener](https://github.com/lowener) - Update sparse KNN to use rmm device buffer ([#3460](https://github.com//rapidsai/cuml/pull/3460)) [@lowener](https://github.com/lowener) - Fix forward-merger conflicts in #3444 ([#3455](https://github.com//rapidsai/cuml/pull/3455)) [@ajschmidt8](https://github.com/ajschmidt8) - Replace ML::MetricType with raft::distance::DistanceType ([#3389](https://github.com//rapidsai/cuml/pull/3389)) [@lowener](https://github.com/lowener) - RF param initialization cython and C++ layer cleanup ([#3358](https://github.com//rapidsai/cuml/pull/3358)) [@venkywonka](https://github.com/venkywonka) - MNMG RF broadcast feature ([#3349](https://github.com//rapidsai/cuml/pull/3349)) [@viclafargue](https://github.com/viclafargue) - cuML&#39;s estimator Base class for preprocessing models ([#3270](https://github.com//rapidsai/cuml/pull/3270)) [@viclafargue](https://github.com/viclafargue) - Make `_get_tags` a class/static method ([#3257](https://github.com//rapidsai/cuml/pull/3257)) [@dantegd](https://github.com/dantegd) - NVTX Markers for RF and RF-backend ([#3014](https://github.com//rapidsai/cuml/pull/3014)) [@venkywonka](https://github.com/venkywonka) # cuML 0.18.0 (24 Feb 2021) ## Breaking Changes 🚨 - cuml.experimental SHAP improvements (#3433) @dantegd - Enable feature sampling for the experimental backend of Random Forest (#3364) @vinaydes - re-enable cuML&#39;s copyright checker script (#3363) @teju85 - Batched Silhouette Score (#3362) @divyegala - Update failing MNMG tests (#3348) 
@viclafargue - Rename print_summary() of Dask RF to get_summary_text(); it now returns string to the client (#3341) @hcho3 - Rename dump_as_json() -&gt; get_json(); expose it from Dask RF (#3340) @hcho3 - MNMG KNN consolidation (#3307) @viclafargue - Return confusion matrix as int unless float weights are used (#3275) @lowener - Approximate Nearest Neighbors (#2780) @viclafargue ## Bug Fixes 🐛 - HOTFIX Add ucx-proc package back that got lost during an auto merge conflict (#3551) @dantegd - Non project-flash CI ml test 18.04 issue debugging and bugfixing (#3495) @dantegd - Temporarily xfail KBinsDiscretizer uniform tests (#3494) @wphicks - Fix illegal memory accesses when NITEMS &gt; 1, and nrows % NITEMS != 0. (#3480) @canonizer - Update call to dask client persist (#3474) @dantegd - Adding warning for IVFPQ (#3472) @viclafargue - Fix failing sparse NN test in CI by allowing small number of index discrepancies (#3454) @cjnolet - Exempting thirdparty code from copyright checks (#3453) @lowener - Relaxing Batched SilhouetteScore Test Constraint (#3452) @divyegala - Mark kbinsdiscretizer quantile tests as xfail (#3450) @wphicks - Fixing documentation on SimpleImputer (#3447) @lowener - Skipping IVFPQ (#3429) @viclafargue - Adding tol to dask test_kmeans (#3426) @lowener - Fix memory bug for SVM with large n_rows (#3420) @tfeher - Allow linear regression for with CUDA &gt;=11.0 (#3417) @wphicks - Fix vectorizer tests by restoring sort behavior in groupby (#3416) @JohnZed - Ensure make_classification respects output type (#3415) @wphicks - Clean Up `#include` Dependencies (#3402) @mdemoret-nv - Fix Nearest Neighbor Stress Test (#3401) @lowener - Fix array_equal in tests (#3400) @viclafargue - Improving Copyright Check When Not Running in CI (#3398) @mdemoret-nv - Also xfail zlib errors when downloading newsgroups data (#3393) @JohnZed - Fix for ANN memory release bug (#3391) @viclafargue - XFail Holt Winters test where statsmodels has known issues with gcc 9.3.0 (#3385) 
@JohnZed - FIX Update cupy to &gt;= 7.8 and remove unused build.sh script (#3378) @dantegd - re-enable cuML&#39;s copyright checker script (#3363) @teju85 - Update failing MNMG tests (#3348) @viclafargue - Rename print_summary() of Dask RF to get_summary_text(); it now returns string to the client (#3341) @hcho3 - Fixing `make_blobs` to Respect the Global Output Type (#3339) @mdemoret-nv - Fix permutation explainer (#3332) @RAMitchell - k-means bug fix in debug build (#3321) @akkamesh - Fix for default arguments of PCA (#3320) @lowener - Provide workaround for cupy.percentile bug (#3315) @wphicks - Fix SVR unit test parameter (#3294) @tfeher - Add xfail on fetching 20newsgroup dataset (test_naive_bayes) (#3291) @lowener - Remove unused keyword in PorterStemmer code (#3289) @wphicks - Remove static specifier in DecisionTree unit test for C++14 compliance (#3281) @wphicks - Correct pure virtual declaration in manifold_inputs_t (#3279) @wphicks ## Documentation 📖 - Correct import path in docs for experimental preprocessing features (#3488) @wphicks - Minor doc updates for 0.18 (#3475) @JohnZed - Improve Python Docs with Default Role (#3445) @mdemoret-nv - Fixing Python Documentation Errors and Warnings (#3428) @mdemoret-nv - Remove outdated references to changelog in CONTRIBUTING.md (#3328) @wphicks - Adding highlighting to bibtex in readme (#3296) @cjnolet ## New Features 🚀 - Improve runtime performance of RF to Treelite conversion (#3410) @wphicks - Parallelize Treelite to FIL conversion over trees (#3396) @wphicks - Parallelize RF to Treelite conversion over trees (#3395) @wphicks - Allow saving Dask RandomForest models immediately after training (fixes #3331) (#3388) @jameslamb - genetic programming initial structures (#3387) @teju85 - MNMG DBSCAN (#3382) @Nyrio - FIL to use L1 cache when input columns don&#39;t fit into shared memory (#3370) @levsnv - Enable feature sampling for the experimental backend of Random Forest (#3364) @vinaydes - Batched Silhouette 
Score (#3362) @divyegala - Rename dump_as_json() -&gt; get_json(); expose it from Dask RF (#3340) @hcho3 - Exposing model_selection in a similar way to scikit-learn (#3329) @ptartan21 - Promote IncrementalPCA from experimental in 0.18 release (#3327) @lowener - Create labeler.yml (#3324) @jolorunyomi - Add slow high-precision mode to KNN (#3304) @wphicks - Sparse TSNE (#3293) @divyegala - Sparse Generalized SPMV (semiring) Primitive (#3146) @cjnolet - Multiclass meta estimator wrappers and multiclass SVC (#3092) @tfeher - Approximate Nearest Neighbors (#2780) @viclafargue - Add KNN parameter to t-SNE (#2592) @aleksficek ## Improvements 🛠️ - Update stale GHA with exemptions &amp; new labels (#3507) @mike-wendt - Add GHA to mark issues/prs as stale/rotten (#3500) @Ethyling - Fix naive bayes inputs (#3448) @cjnolet - Prepare Changelog for Automation (#3442) @ajschmidt8 - cuml.experimental SHAP improvements (#3433) @dantegd - Speed up knn tests (#3411) @JohnZed - Replacing sklearn functions with cuml in RF MNMG notebook (#3408) @lowener - Auto-label PRs based on their content (#3407) @jolorunyomi - Use stable 1.0.0 version of Treelite (#3394) @hcho3 - API update to match RAFT PR #120 (#3386) @drobison00 - Update linear models to use RMM memory allocation (#3365) @lowener - Updating dense pairwise distance enum names (#3352) @cjnolet - Upgrade Treelite module (#3316) @hcho3 - Removed FIL node types with `_t` suffix (#3314) @canonizer - MNMG KNN consolidation (#3307) @viclafargue - Updating PyTests to Stay Below 4 Gb Limit (#3306) @mdemoret-nv - Refactoring: move internal FIL interface to a separate file (#3292) @canonizer - Return confusion matrix as int unless float weights are used (#3275) @lowener - 018 add unfitted error pca &amp; tests on IPCA (#3272) @lowener - Linear models predict function consolidation (#3256) @dantegd - Preparing sparse primitives for movement to RAFT (#3157) @cjnolet # cuML 0.17.0 (10 Dec 2020) ## New Features - PR #3164: Expose silhouette 
score in Python - PR #3160: Least Angle Regression (experimental) - PR #2659: Add initial max inner product sparse knn - PR #3092: Multiclass meta estimator wrappers and multiclass SVC - PR #2836: Refactor UMAP to accept sparse inputs - PR #2894: predict_proba in FIL C++ for XGBoost-style multi-class models - PR #3126: Experimental versions of GPU accelerated Kernel and Permutation SHAP ## Improvements - PR #3077: Improve runtime for test_kmeans - PR #3070: Speed up dask/test_datasets tests - PR #3075: Speed up test_linear_model tests - PR #3078: Speed up test_incremental_pca tests - PR #2902: `matrix/matrix.cuh` in RAFT namespacing - PR #2903: Moving linalg's gemm, gemv, transpose to RAFT namespaces - PR #2905: `stats` prims `mean_center`, `sum` to RAFT namespaces - PR #2904: Moving `linalg` basic math ops to RAFT namespaces - PR #2956: Follow cuML array conventions in ARIMA and remove redundancy - PR #3000: Pin cmake policies to cmake 3.17 version, bump project version to 0.17 - PR #3083: Improving test_make_blobs testing time - PR #3223: Increase default SVM kernel cache to 2000 MiB - PR #2906: Moving `linalg` decomp to RAFT namespaces - PR #2988: FIL: use tree-per-class reduction for GROVE_PER_CLASS_FEW_CLASSES - PR #2996: Removing the max_depth restriction for switching to the batched backend - PR #3004: Remove Single Process Multi GPU (SPMG) code - PR #3032: FIL: Add optimization parameter `blocks_per_sm` that will help all but tiniest models - PR #3044: Move leftover `linalg` and `stats` to RAFT namespaces - PR #3067: Deleting prims moved to RAFT and updating header paths - PR #3074: Reducing dask coordinate descent test runtime - PR #3096: Avoid memory transfers in CSR WeakCC for DBSCAN - PR #3088: More readable and robust FIL C++ test management - PR #3052: Speeding up MNMG KNN Cl&Re testing - PR #3115: Speeding up MNMG UMAP testing - PR #3112: Speed test_array - PR #3111: Adding Cython to Code Coverage - PR #3129: Update notebooks README - PR #3002: 
Update flake8 Config To With Per File Settings - PR #3135: Add QuasiNewton tests - PR #3040: Improved Array Conversion with CumlArrayDescriptor and Decorators - PR #3134: Improving the Deprecation Message Formatting in Documentation - PR #3154: Adding estimator pickling demo notebooks (and docs) - PR #3151: MNMG Logistic Regression via dask-glm - PR #3113: Add tags and prefered memory order tags to estimators - PR #3137: Reorganize Pytest Config and Add Quick Run Option - PR #3144: Adding Ability to Set Arbitrary Cmake Flags in ./build.sh - PR #3155: Eliminate unnecessary warnings from random projection test - PR #3176: Add probabilistic SVM tests with various input array types - PR #3180: FIL: `blocks_per_sm` support in Python - PR #3186: Add gain to RF JSON dump - PR #3219: Update CI to use XGBoost 1.3.0 RCs - PR #3221: Update contributing doc for label support - PR #3177: Make Multinomial Naive Bayes inherit from `ClassifierMixin` and use it for score - PR #3241: Updating RAFT to latest - PR #3240: Minor doc updates - PR #3275: Return confusion matrix as int unless float weights are used ## Bug Fixes - PR #3218: Specify dependency branches in conda dev environment to avoid pip resolver issue - PR #3196: Disable ascending=false path for sortColumnsPerRow - PR #3051: MNMG KNN Cl&Re fix + multiple improvements - PR #3179: Remove unused metrics.cu file - PR #3069: Prevent conversion of DataFrames to Series in preprocessing - PR #3065: Refactoring prims metrics function names from camelcase to underscore format - PR #3033: Splitting ml metrics to individual files - PR #3072: Fusing metrics and score directories in src_prims - PR #3037: Avoid logging deadlock in multi-threaded C code - PR #2983: Fix seeding of KISS99 RNG - PR #3011: Fix unused initialize_embeddings parameter in Barnes-Hut t-SNE - PR #3008: Check number of columns in check_array validator - PR #3012: Increasing learning rate for SGD log loss and invscaling pytests - PR #2950: Fix includes in UMAP - PR 
#3194: Fix cuDF to cuPy conversion (missing value) - PR #3021: Fix a hang in cuML RF experimental backend - PR #3039: Update RF and decision tree parameter initializations in benchmark codes - PR #3060: Speed up test suite `test_fil` - PR #3061: Handle C++ exception thrown from FIL predict - PR #3073: Update mathjax CDN URL for documentation - PR #3062: Bumping xgboost version to match cuml version - PR #3084: Fix artifacts in t-SNE results - PR #3086: Reverting FIL Notebook Testing - PR #3192: Enable pipeline usage for OneHotEncoder and LabelEncoder - PR #3114: Fixed a typo in SVC's predict_proba AttributeError - PR #3117: Fix two crashes in experimental RF backend - PR #3119: Fix memset args for benchmark - PR #3130: Return Python string from `dump_as_json()` of RF - PR #3132: Add `min_samples_split` + Rename `min_rows_per_node` -> `min_samples_leaf` - PR #3136: Fix stochastic gradient descent example - PR #3152: Fix access to attributes of individual NB objects in dask NB - PR #3156: Force local conda artifact install - PR #3162: Removing accidentally checked in debug file - PR #3191: Fix __repr__ function for preprocessing models - PR #3175: Fix gtest pinned cmake version for build from source option - PR #3182: Fix a bug in MSE metric calculation - PR #3187: Update docstring to document behavior of `bootstrap=False` - PR #3215: Add a missing `__syncthreads()` - PR #3246: Fix MNMG KNN doc (adding batch_size) - PR #3185: Add documentation for Distributed TFIDF Transformer - PR #3190: Fix Attribute error on ICPA #3183 and PCA input type - PR #3208: Fix EXITCODE override in notebook test script - PR #3250: Fixing label binarizer bug with multiple partitions - PR #3214: Correct flaky silhouette score test by setting atol - PR #3216: Ignore splits that do not satisfy constraints - PR #3239: Fix intermittent dask random forest failure - PR #3243: Avoid unnecessary split for degenerate case where all labels are identical - PR #3245: Rename `rows_sample` -> 
`max_samples` to be consistent with sklearn's RF - PR #3282: Add secondary test to kernel explainer pytests for stability in Volta # cuML 0.16.0 (23 Oct 2020) ## New Features - PR #2922: Install RAFT headers with cuML - PR #2909: Update allgatherv for compatibility with latest RAFT - PR #2677: Ability to export RF trees as JSON - PR #2698: Distributed TF-IDF transformer - PR #2476: Porter Stemmer - PR #2789: Dask LabelEncoder - PR #2152: add FIL C++ benchmark - PR #2638: Improve cython build with custom `build_ext` - PR #2866: Support XGBoost-style multiclass models (gradient boosted decision trees) in FIL C++ - PR #2874: Issue warning for degraded accuracy with float64 models in Treelite - PR #2881: Introduces experimental batched backend for random forest - PR #2916: Add SKLearn multi-class GBDT model support in FIL ## Improvements - PR #2947: Add more warnings for accuracy degradation with 64-bit models - PR #2873: Remove empty marker kernel code for NVTX markers - PR #2796: Remove tokens of length 1 by default for text vectorizers - PR #2741: Use rapids build packages in conda environments - PR #2735: Update seed to random_state in random forest and associated tests - PR #2739: Use cusparse_wrappers.h from RAFT - PR #2729: Replace `cupy.sparse` with `cupyx.scipy.sparse` - PR #2749: Correct docs for python version used in cuml_dev conda environment - PR #2747: Adopting raft::handle_t and raft::comms::comms_t in cuML - PR #2762: Fix broken links and provide minor edits to docs - PR #2723: Support and enable convert_dtype in estimator predict - PR #2758: Match sklearn's default n_components behavior for PCA - PR #2770: Fix doxygen version during cmake - PR #2766: Update default RandomForestRegressor score function to use r2 - PR #2775: Enablinbg mg gtests w/ raft mpi comms - PR #2783: Add pytest that will fail when GPU IDs in Dask cluster are not unique - PR #2784: Add SparseCumlArray container for sparse index/data arrays - PR #2785: Add in cuML-specific dev 
conda dependencies - PR #2778: Add README for FIL - PR #2799: Reenable lightgbm test with lower (1%) proba accuracy - PR #2800: Align cuML's spdlog version with RMM's - PR #2824: Make data conversions warnings be debug level - PR #2835: Rng prims, utils, and dependencies in RAFT - PR #2541: Improve Documentation Examples and Source Linking - PR #2837: Make the FIL node reorder loop more obvious - PR #2849: make num_classes significant in FLOAT_SCALAR case - PR #2792: Project flash (new build process) script changes - PR #2850: Clean up unused params in paramsPCA - PR #2871: Add timing function to utils - PR #2863: in FIL, rename leaf_value_t enums to more descriptive - PR #2867: improve stability of FIL benchmark measurements - PR #2798: Add python tests for FIL multiclass classification of lightgbm models - PR #2892: Update ci/local/README.md - PR #2910: Adding Support for CuPy 8.x - PR #2914: Add tests for XGBoost multi-class models in FIL - PR #2622: Simplify tSNE perplexity search - PR #2930: Pin libfaiss to <=1.6.3 - PR #2928: Updating Estimators Derived from Base for Consistency - PR #2942: Adding `cuml.experimental` to the Docs - PR #3010: Improve gpuCI Scripts - PR #3141: Move DistanceType enum to RAFT ## Bug Fixes - PR #2973: Allow data imputation for nan values - PR #2982: Adjust kneighbors classifier test threshold to avoid intermittent failure - PR #2885: Changing test target for NVTX wrapper test - PR #2882: Allow import on machines without GPUs - PR #2875: Bug fix to enable colorful NVTX markers - PR #2744: Supporting larger number of classes in KNeighborsClassifier - PR #2769: Remove outdated doxygen options for 1.8.20 - PR #2787: Skip lightgbm test for version 3 and above temporarily - PR #2805: Retain index in stratified splitting for dataframes - PR #2781: Use Python print to correctly redirect spdlogs when sys.stdout is changed - PR #2787: Skip lightgbm test for version 3 and above temporarily - PR #2813: Fix memory access in generation of 
non-row-major random blobs - PR #2810: Update Rf MNMG threshold to prevent sporadic test failure - PR #2808: Relax Doxygen version required in CMake to coincide with integration repo - PR #2818: Fix parsing of singlegpu option in build command - PR #2827: Force use of whole dataset when sample bootstrapping is disabled - PR #2829: Fixing description for labels in docs and removing row number constraint from PCA xform/inverse_xform - PR #2832: Updating stress tests that fail with OOM - PR #2831: Removing repeated capture and parameter in lambda function - PR #2847: Workaround for TSNE lockup, change caching preference. - PR #2842: KNN index preprocessors were using incorrect n_samples - PR #2848: Fix typo in Python docstring for UMAP - PR #2856: Fix LabelEncoder for filtered input - PR #2855: Updates for RMM being header only - PR #2844: Fix for OPG KNN Classifier & Regressor - PR #2880: Fix bugs in Auto-ARIMA when s==None - PR #2877: TSNE exception for n_components > 2 - PR #2879: Update unit test for LabelEncoder on filtered input - PR #2932: Marking KBinsDiscretizer pytests as xfail - PR #2925: Fixing Owner Bug When Slicing CumlArray Objects - PR #2931: Fix notebook error handling in gpuCI - PR #2941: Fixing dask tsvd stress test failure - PR #2943: Remove unused shuffle_features parameter - PR #2940: Correcting labels meta dtype for `cuml.dask.make_classification` - PR #2965: Notebooks update - PR #2955: Fix for conftest for singlegpu build - PR #2968: Remove shuffle_features from RF param names - PR #2957: Fix ols test size for stability - PR #2972: Upgrade Treelite to 0.93 - PR #2981: Prevent unguarded import of sklearn in SVC - PR #2984: Fix GPU test scripts gcov error - PR #2990: Reduce MNMG kneighbors regressor test threshold - PR #2997: Changing ARIMA `get/set_params` to `get/set_fit_params` # cuML 0.15.0 (26 Aug 2020) ## New Features - PR #2581: Added model persistence via joblib in each section of estimator_intro.ipynb - PR #2554: Hashing Vectorizer and 
general vectorizer improvements - PR #2240: Making Dask models pickleable - PR #2267: CountVectorizer estimator - PR #2261: Exposing new FAISS metrics through Python API - PR #2287: Single-GPU TfidfTransformer implementation - PR #2289: QR SVD solver for MNMG PCA - PR #2312: column-major support for make_blobs - PR #2172: Initial support for auto-ARIMA - PR #2394: Adding cosine & correlation distance for KNN - PR #2392: PCA can accept sparse inputs, and sparse prim for computing covariance - PR #2465: Support pandas 1.0+ - PR #2550: Single GPU Target Encoder - PR #2519: Precision recall curve using cupy - PR #2500: Replace UMAP functionality dependency on nvgraph with RAFT Spectral Clustering - PR #2502: cuML Implementation of `sklearn.metrics.pairwise_distances` - PR #2520: TfidfVectorizer estimator - PR #2211: MNMG KNN Classifier & Regressor - PR #2461: Add KNN Sparse Output Functionality - PR #2615: Incremental PCA - PR #2594: Confidence intervals for ARIMA forecasts - PR #2607: Add support for probability estimates in SVC - PR #2618: SVM class and sample weights - PR #2635: Decorator to generate docstrings with autodetection of parameters - PR #2270: Multi class MNMG RF - PR #2661: CUDA-11 support for single-gpu code - PR #2322: Sparse FIL forests with 8-byte nodes - PR #2675: Update conda recipes to support CUDA 11 - PR #2645: Add experimental, sklearn-based preprocessing ## Improvements - PR #2336: Eliminate `rmm.device_array` usage - PR #2262: Using fully shared PartDescriptor in MNMG decomposiition, linear models, and solvers - PR #2310: Pinning ucx-py to 0.14 to make 0.15 CI pass - PR #1945: enable clang tidy - PR #2339: umap performance improvements - PR #2308: Using fixture for Dask client to eliminate possibility of not closing - PR #2345: make C++ logger level definition to be the same as python layer - PR #2329: Add short commit hash to conda package name - PR #2362: Implement binary/multi-classification log loss with cupy - PR #2363: Update threshold 
and make other changes for stress tests - PR #2371: Updating MBSGD tests to use larger batches - PR #2380: Pinning libcumlprims version to ease future updates - PR #2405: Remove references to deprecated RMM headers. - PR #2340: Import ARIMA in the root init file and fix the `test_fit_function` test - PR #2408: Install meta packages for dependencies - PR #2417: Move doc customization scripts to Jenkins - PR #2427: Moving MNMG decomposition to cuml - PR #2433: Add libcumlprims_mg to CMake - PR #2420: Add and set convert_dtype default to True in estimator fit methods - PR #2411: Refactor Mixin classes and use in classifier/regressor estimators - PR #2442: fix setting RAFT_DIR from the RAFT_PATH env var - PR #2469: Updating KNN c-api to document all arguments - PR #2453: Add CumlArray to API doc - PR #2440: Use Treelite Conda package - PR #2403: Support for input and output type consistency in logistic regression predict_proba - PR #2473: Add metrics.roc_auc_score to API docs. Additional readability and minor docs bug fixes - PR #2468: Add `_n_features_in_` attribute to all single GPU estimators that implement fit - PR #2489: Removing explicit FAISS build and adding dependency on libfaiss conda package - PR #2480: Moving MNMG glm and solvers to cuml - PR #2490: Moving MNMG KMeans to cuml - PR #2483: Moving MNMG KNN to cuml - PR #2492: Adding additional assertions to mnmg nearest neighbors pytests - PR #2439: Update dask RF code to have print_detailed function - PR #2431: Match output of classifier predict with target dtype - PR #2237: Refactor RF cython code - PR #2513: Fixing LGTM Analysis Issues - PR #2099: Raise an error when float64 data is used with dask RF - PR #2522: Renaming a few arguments in KNeighbors* to be more readable - PR #2499: Provide access to `cuml.DBSCAN` core samples - PR #2526: Removing PCA TSQR as a solver due to scalability issues - PR #2536: Update conda upload versions for new supported CUDA/Python - PR #2538: Remove Protobuf dependency - PR 
#2553: Test pickle protocol 5 support - PR #2570: Accepting single df or array input in train_test_split - PR #2566: Remove deprecated cuDF from_gpu_matrix calls - PR #2583: findpackage.cmake.in template for cmake dependencies - PR #2577: Fully removing NVGraph dependency for CUDA 11 compatibility - PR #2575: Speed up TfidfTransformer - PR #2584: Removing dependency on sklearn's NotFittedError - PR #2591: Generate benchmark datsets using `cuml.datasets` - PR #2548: Fix limitation on number of rows usable with tSNE and refactor memory allocation - PR #2589: including cuda-11 build fixes into raft - PR #2599: Add Stratified train_test_split - PR #2487: Set classes_ attribute during classifier fit - PR #2605: Reduce memory usage in tSNE - PR #2611: Adding building doxygen docs to gpu ci - PR #2631: Enabling use of gtest conda package for build - PR #2623: Fixing kmeans score() API to be compatible with Scikit-learn - PR #2629: Add naive_bayes api docs - PR #2643: 'dense' and 'sparse' values of `storage_type` for FIL - PR #2691: Generic Base class attribute setter - PR #2666: Update MBSGD documentation to mention that the model is experimental - PR #2687: Update xgboost version to 1.2.0dev.rapidsai0.15 - PR #2684: CUDA 11 conda development environment yml and faiss patch - PR #2648: Replace CNMeM with `rmm::mr::pool_memory_resource`. 
- PR #2686: Improve SVM tests - PR #2692: Changin LBFGS log level - PR #2705: Add sum operator and base operator overloader functions to cumlarray - PR #2701: Updating README + Adding ref to UMAP paper - PR #2721: Update API docs - PR #2730: Unpin cumlprims in conda recipes for release ## Bug Fixes - PR #2369: Update RF code to fix set_params memory leak - PR #2364: Fix for random projection - PR #2373: Use Treelite Pip package in GPU testing - PR #2376: Update documentation Links - PR #2407: fixed batch count in DBScan for integer overflow case - PR #2413: CumlArray and related methods updates to account for cuDF.Buffer contiguity update - PR #2424: --singlegpu flag fix on build.sh script - PR #2432: Using correct algo_name for UMAP in benchmark tests - PR #2445: Restore access to coef_ property of Lasso - PR #2441: Change p2p_enabled definition to work without ucx - PR #2447: Drop `nvstrings` - PR #2450: Update local build to use new gpuCI image - PR #2454: Mark RF memleak test as XFAIL, because we can't detect memleak reliably - PR #2455: Use correct field to store data type in `LabelEncoder.fit_transform` - PR #2475: Fix typo in build.sh - PR #2496: Fixing indentation for simulate_data in test_fil.py - PR #2494: Set QN regularization strength consistent with scikit-learn - PR #2486: Fix cupy input to kmeans init - PR #2497: Changes to accommodate cuDF unsigned categorical changes - PR #2209: Fix FIL benchmark for gpuarray-c input - PR #2507: Import `treelite.sklearn` - PR #2521: Fixing invalid smem calculation in KNeighborsCLassifier - PR #2515: Increase tolerance for LogisticRegression test - PR #2532: Updating doxygen in new MG headers - PR #2521: Fixing invalid smem calculation in KNeighborsCLassifier - PR #2515: Increase tolerance for LogisticRegression test - PR #2545: Fix documentation of n_iter_without_progress in tSNE Python bindings - PR #2543: Improve numerical stability of QN solver - PR #2544: Fix Barnes-Hut tSNE not using specified 
post_learning_rate - PR #2558: Disabled a long-running FIL test - PR #2540: Update default value for n_epochs in UMAP to match documentation & sklearn API - PR #2535: Fix issue with incorrect docker image being used in local build script - PR #2542: Fix small memory leak in TSNE - PR #2552: Fixed the length argument of updateDevice calls in RF test - PR #2565: Fix cell allocation code to avoid loops in quad-tree. Prevent NaNs causing infinite descent - PR #2563: Update scipy call for arima gradient test - PR #2569: Fix for cuDF update - PR #2508: Use keyword parameters in sklearn.datasets.make_* functions - PR #2587: Attributes for estimators relying on solvers - PR #2586: Fix SVC decision function data type - PR #2573: Considering managed memory as device type on checking for KMeans - PR #2574: Fixing include path in `tsvd_mg.pyx` - PR #2506: Fix usage of CumlArray attributes on `cuml.common.base.Base` - PR #2593: Fix inconsistency in train_test_split - PR #2609: Fix small doxygen issues - PR #2610: Remove cuDF tolist call - PR #2613: Removing thresholds from kmeans score tests (SG+MG) - PR #2616: Small test code fix for pandas dtype tests - PR #2617: Fix floating point precision error in tSNE - PR #2625: Update Estimator notebook to resolve errors - PR #2634: singlegpu build option fixes - PR #2641: [Breaking] Make `max_depth` in RF compatible with scikit-learn - PR #2650: Make max_depth behave consistently for max_depth > 14 - PR #2651: AutoARIMA Python bug fix - PR #2654: Fix for vectorizer concatenations - PR #2655: Fix C++ RF predict function access of rows/samples array - PR #2649: Cleanup sphinx doc warnings for 0.15 - PR #2668: Order conversion improvements to account for cupy behavior changes - PR #2669: Revert PR 2655 Revert "Fixes C++ RF predict function" - PR #2683: Fix incorrect "Bad CumlArray Use" error messages on test failures - PR #2695: Fix debug build issue due to incorrect host/device method setup - PR #2709: Fixing OneHotEncoder Overflow Error 
- PR #2710: Fix SVC doc statement about predict_proba
- PR #2726: Return correct output type in QN
- PR #2711: Fix Dask RF failure intermittently
- PR #2718: Fix temp directory for py.test
- PR #2719: Set KNeighborsRegressor output dtype according to training target dtype
- PR #2720: Updates to outdated links
- PR #2722: Getting cuML covariance test passing w/ Cupy 7.8 & CUDA 11

# cuML 0.14.0 (03 Jun 2020)

## New Features

- PR #1994: Support for distributed OneHotEncoder
- PR #1892: One hot encoder implementation with cupy
- PR #1655: Adds python bindings for homogeneity score
- PR #1704: Adds python bindings for completeness score
- PR #1687: Adds python bindings for mutual info score
- PR #1980: prim: added a new write-only unary op prim
- PR #1867: C++: add logging interface support in cuML based on spdlog
- PR #1902: Multi class inference in FIL C++ and importing multi-class forests from treelite
- PR #1906: UMAP MNMG
- PR #2067: python: wrap logging interface in cython
- PR #2083: Added dtype, order, and use_full_low_rank to MNMG `make_regression`
- PR #2074: SG and MNMG `make_classification`
- PR #2127: Added order to SG `make_blobs`, and switch from C++ to cupy based implementation
- PR #2057: Weighted k-means
- PR #2256: Add a `make_arima` generator
- PR #2245: ElasticNet, Lasso and Coordinate Descent MNMG
- PR #2242: Pandas input support with output as NumPy arrays by default
- PR #2551: Add cuML RF multiclass prediction using FIL from python
- PR #1728: Added notebook testing to gpuCI gpu build

## Improvements

- PR #1931: C++: enabled doxygen docs for all of the C++ codebase
- PR #1944: Support for dask_cudf.core.Series in _extract_partitions
- PR #1947: Cleaning up cmake
- PR #1927: Use Cython's `new_build_ext` (if available)
- PR #1946: Removed zlib dependency from cmake
- PR #1988: C++: cpp bench refactor
- PR #1873: Remove usage of nvstring and nvcat from LabelEncoder
- PR #1968: Update SVC SVR with cuML Array
- PR #1972: Updates to our flow to use conda-forge's clang and clang-tools packages
- PR #1974: Reduce ARIMA testing time
- PR #1984: Enable Ninja build
- PR #1985: C++ UMAP parametrizable tests
- PR #2005: Adding missing algorithms to cuml benchmarks and notebook
- PR #2016: Add capability to setup.py and build.sh to fully clean all cython build files and artifacts
- PR #2044: A cuda-memcheck helper wrapper for devs
- PR #2018: Using `cuml.dask.part_utils.extract_partitions` and removing similar, duplicated code
- PR #2019: Enable doxygen build in our nightly doc build CI script
- PR #1996: Cythonize in parallel
- PR #2032: Reduce number of tests for MBSGD to improve CI running time
- PR #2031: Encapsulating UCX-py interactions in singleton
- PR #2029: Add C++ ARIMA log-likelihood benchmark
- PR #2085: Convert TSNE to use CumlArray
- PR #2051: Reduce the time required to run dask pca and dask tsvd tests
- PR #1981: Using CumlArray in kNN and DistributedDataHandler in dask kNN
- PR #2053: Introduce verbosity level in C++ layer instead of boolean `verbose` flag
- PR #2047: Make internal streams non-blocking w.r.t. NULL stream
- PR #2048: Random forest testing speedup
- PR #2058: Use CumlArray in Random Projection
- PR #2068: Updating knn class probabilities to use make_monotonic instead of binary search
- PR #2062: Adding random state to UMAP mnmg tests
- PR #2064: Speed-up K-Means test
- PR #2015: Renaming .h to .cuh in solver, dbscan and svm
- PR #2080: Improved import of sparse FIL forests from treelite
- PR #2090: Upgrade C++ build to C++14 standard
- PR #2089: CI: enabled cuda-memcheck on ml-prims unit-tests during nightly build
- PR #2128: Update Dask RF code to reduce the time required for GPU predict to run
- PR #2125: Build infrastructure to use RAFT
- PR #2131: Update Dask RF fit to use DistributedDataHandler
- PR #2055: Update the metrics notebook to use important cuML models
- PR #2095: Improved import of src_prims/utils.h, making it less ambiguous
- PR #2118: Updating SGD & mini-batch estimators to use CumlArray
- PR #2120: Speeding up dask RandomForest tests
- PR #1883: Use CumlArray in ARIMA
- PR #877: Adding definition of done criteria to wiki
- PR #2135: A few optimizations to UMAP fuzzy simplicial set
- PR #1914: Change the meaning of ARIMA's intercept to match the literature
- PR #2098: Renaming .h to .cuh in decision_tree, glm, pca
- PR #2150: Remove deprecated RMM calls in RMM allocator adapter
- PR #2146: Remove deprecated kalman filter
- PR #2151: Add pytest duration and pytest timeout
- PR #2156: Add Docker 19 support to local gpuci build
- PR #2178: Reduce duplicated code in RF
- PR #2124: Expand tutorial docs and sample notebook
- PR #2175: Allow CPU-only and dataset params for benchmark sweeps
- PR #2186: Refactor cython code to build OPG structs in common utils file
- PR #2180: Add fully single GPU singlegpu python build
- PR #2187: CMake improvements to manage conda environment dependencies
- PR #2185: Add has_sklearn function and use it in datasets/classification
- PR #2193: Order-independent local shuffle in `cuml.dask.make_regression`
- PR #2204: Update python layer to use the logger interface
- PR #2184: Refactor headers for holtwinters, rproj, tsvd, tsne, umap
- PR #2199: Remove unnecessary notebooks
- PR #2195: Separating fit and transform calls in SG, MNMG PCA to save transform array memory consumption
- PR #2201: Re-enabling UMAP repro tests
- PR #2132: Add SVM C++ benchmarks
- PR #2196: Updates to benchmarks. Moving notebook
- PR #2208: Coordinate Descent, Lasso and ElasticNet CumlArray updates
- PR #2210: Updating KNN tests to evaluate multiple index partitions
- PR #2205: Use timeout to add 2 hour hard limit to dask tests
- PR #2212: Improve DBScan batch count / memory estimation
- PR #2213: Standardized include statements across all cpp source files, updated copyright on all modified files
- PR #2214: Remove utils folder and refactor to common folder
- PR #2220: Final refactoring of all src_prims header files following rules as specified in #1675
- PR #2225: input_to_cuml_array keep order option, test updates and cleanup
- PR #2244: Re-enable slow ARIMA tests as stress tests
- PR #2231: Using OPG structs from `cuml.common` in decomposition algorithms
- PR #2257: Update QN and LogisticRegression to use CumlArray
- PR #2259: Add CumlArray support to Naive Bayes
- PR #2252: Add benchmark for the Gram matrix prims
- PR #2263: Faster serialization for Treelite objects with RF
- PR #2264: Reduce build time for cuML by using make_blobs from libcuml++ interface
- PR #2269: Add docs targets to build.sh and fix python cuml.common docs
- PR #2271: Clarify doc for `_unique` default implementation in OneHotEncoder
- PR #2272: Add docs build.sh script to repository
- PR #2276: Ensure `CumlArray` provided `dtype` conforms
- PR #2281: Rely on cuDF's `Serializable` in `CumlArray`
- PR #2284: Reduce dataset size in SG RF notebook to reduce run time of sklearn
- PR #2285: Increase the threshold for elastic_net test in dask/test_coordinate_descent
- PR #2314: Update FIL default values, documentation and test
- PR #2316: 0.14 release docs additions and fixes
- PR #2320: Add prediction notes to RF docs
- PR #2323: Change verbose levels and parameter name to match Scikit-learn API
- PR #2324: Raise an error if n_bins > number of training samples in RF
- PR #2335: Throw a warning if treelite cannot be imported and `load_from_sklearn` is used

## Bug Fixes

- PR #1939: Fix syntax error in cuml.common.array
- PR #1941: Remove c++ cuda flag that was getting duplicated in CMake
- PR #1971: python: Correctly honor --singlegpu option and CUML_BUILD_PATH env variable
- PR #1969: Update libcumlprims to 0.14
- PR #1973: Add missing mg files for setup.py --singlegpu flag
- PR #1993: Set `umap_transform_reproducibility` tests to xfail
- PR #2004: Refactoring the arguments to `plant()` call
- PR #2017: Fixing memory issue in weak cc prim
- PR #2028: Skipping UMAP knn reproducibility tests until we figure out why it's failing in CUDA 10.2
- PR #2024: Fixed cuda-memcheck errors with sample-without-replacement prim
- PR #1540: prims: support for custom math-type used for computation inside adjusted rand index prim
- PR #2077: dask-make blobs arguments to match sklearn
- PR #2059: Make all Scipy imports conditional
- PR #2078: Ignore negative cache indices in get_vecs
- PR #2084: Fixed cuda-memcheck errors with COO unit-tests
- PR #2087: Fixed cuda-memcheck errors with dispersion prim
- PR #2096: Fixed syntax error with nightly build command for memcheck unit-tests
- PR #2115: Fixed contingency matrix prim unit-tests for computing correct golden values
- PR #2107: Fix PCA transform
- PR #2109: input_to_cuml_array __cuda_array_interface__ bugfix
- PR #2117: cuDF __array__ exception small fixes
- PR #2139: CumlArray for adjusted_rand_score
- PR #2140: Returning self in fit model functions
- PR #2144: Remove GPU arch < 60 from CMake build
- PR #2153: Added missing namespaces to some Decision Tree files
- PR #2155: C++: fix doxygen build break
- PR #2161: Replacing deprecated bruteForceKnn
- PR #2162: Use stream in transpose prim
- PR #2165: Fit function test correction
- PR #2166: Fix handling of temp file in RF pickling
- PR #2176: C++: fix for adjusted rand index when input array is all zeros
- PR #2179: Fix clang tools version in libcuml recipe
- PR #2183: Fix RAFT in nightly package
- PR #2191: Fix placement of SVM parameter documentation and add examples
- PR #2212: Fix DBScan results (no propagation of labels through border points)
- PR #2215: Fix the printing of forest object
- PR #2217: Fix opg_utils naming to fix singlegpu build
- PR #2223: Fix bug in ARIMA C++ benchmark
- PR #2224: Temporary fix for CI until new Dask version is released
- PR #2228: Update to use __reduce_ex__ in CumlArray to override cudf.Buffer
- PR #2249: Fix bug in UMAP continuous target metrics
- PR #2258: Fix doxygen build break
- PR #2255: Set random_state for train_test_split function in dask RF
- PR #2275: Fix RF fit memory leak
- PR #2274: Fix parameter name verbose to verbosity in mnmg OneHotEncoder
- PR #2277: Updated cub repo path and branch name
- PR #2282: Fix memory leak in Dask RF concatenation
- PR #2301: Scaling KNN dask tests sample size with n GPUs
- PR #2293: Contiguity fixes for input_to_cuml_array and train_test_split
- PR #2295: Fix convert_to_dtype copy even with same dtype
- PR #2305: Fixed race condition in DBScan
- PR #2354: Fix broken links in README
- PR #2619: Explicitly skip raft test folder for pytest 6.0.0
- PR #2788: Set the minimum number of columns that can be sampled to 1 to fix 0 mem allocation error

# cuML 0.13.0 (31 Mar 2020)

## New Features

- PR #1777: Python bindings for entropy
- PR #1742: Mean squared error implementation with cupy
- PR #1817: Confusion matrix implementation with cupy (SNSG and MNMG)
- PR #1766: Mean absolute error implementation with cupy
- PR #1766: Mean squared log error implementation with cupy
- PR #1635: cuML Array shim and configurable output added to cluster methods
- PR #1586: Seasonal ARIMA
- PR #1683: cuml.dask make_regression
- PR #1689: Add framework for cuML Dask serializers
- PR #1709: Add `decision_function()` and `predict_proba()` for LogisticRegression
- PR #1714: Add `print_env.sh` file to gather important environment details
- PR #1750: LinearRegression CumlArray for configurable output
- PR #1814: ROC AUC score implementation with cupy
- PR #1767: Single GPU decomposition models configurable output
- PR #1646: Using FIL to predict in MNMG RF
- PR #1778: Make cuML Handle picklable
- PR #1738: cuml.dask refactor beginning and dask array input option for OLS, Ridge and KMeans
- PR #1874: Add predict_proba function to RF classifier
- PR #1815: Adding KNN parameter to UMAP
- PR #1978: Adding `predict_proba` function to dask RF

## Improvements

- PR #1644: Add `predict_proba()` for FIL binary classifier
- PR #1620: Pickling tests now automatically find all model classes inheriting from cuml.Base
- PR #1637: Update to newer treelite version with XGBoost 1.0 compatibility
- PR #1632: Fix MBSGD models inheritance, they now inherit from cuml.Base
- PR #1628: Remove submodules from cuML
- PR #1755: Expose the build_treelite function for python
- PR #1649: Add the fil_sparse_format variable option to RF API
- PR #1647: storage_type=AUTO uses SPARSE for large models
- PR #1668: Update the warning statement thrown in RF when the seed is set but n_streams is not 1
- PR #1662: use of direct cusparse calls for coo2csr, instead of depending on nvgraph
- PR #1747: C++: dbscan performance improvements and cleanup
- PR #1697: Making trustworthiness batchable and using proper workspace
- PR #1721: Improving UMAP pytests
- PR #1717: Call `rmm_cupy_allocator` for CuPy allocations
- PR #1718: Import `using_allocator` from `cupy.cuda`
- PR #1723: Update RF Classifier to throw an exception for multi-class pickling
- PR #1726: Decorator to allocate CuPy arrays with RMM
- PR #1719: UMAP random seed reproducibility
- PR #1748: Test serializing `CumlArray` objects
- PR #1776: Refactoring pca/tsvd distributed
- PR #1762: Update CuPy requirement to 7
- PR #1768: C++: Different input and output types for add and subtract prims
- PR #1790: Add support for multiple seeding in k-means++
- PR #1805: Adding new Dask cuda serializers to naive bayes + a trivial perf update
- PR #1812: C++: bench: UMAP benchmark cases added
- PR #1795: Add capability to build CumlArray from bytearray/memoryview objects
- PR #1824: C++: improving the performance of UMAP algo
- PR #1816: Add ARIMA notebook
- PR #1856: Update docs for 0.13
- PR #1827: Add HPO demo Notebook
- PR #1825: `--nvtx` option in `build.sh`
- PR #1847: Update XGBoost version for CI
- PR #1837: Simplify cuML Array construction
- PR #1848: Rely on subclassing for cuML Array serialization
- PR #1866: Minimizing client memory pressure on Naive Bayes
- PR #1788: Removing complexity bottleneck in S-ARIMA
- PR #1873: Remove usage of nvstring and nvcat from LabelEncoder
- PR #1891: Additional improvements to naive bayes tree reduction

## Bug Fixes

- PR #1835: Fix calling default RF Classification always
- PR #1904: replace cub sort
- PR #1833: Fix depth issue in shallow RF regression estimators
- PR #1770: Warn that KalmanFilter is deprecated
- PR #1775: Allow CumlArray to work with inputs that have no 'strides' in array interface
- PR #1594: Train-test split is now reproducible
- PR #1590: Fix destination directory structure for run-clang-format.py
- PR #1611: Fixing pickling errors for KNN classifier and regressor
- PR #1617: Fixing pickling issues for SVC and SVR
- PR #1634: Fix title in KNN docs
- PR #1627: Adding a check for multi-class data in RF classification
- PR #1654: Skip treelite patch if it has already been applied
- PR #1661: Fix nvstring variable name
- PR #1673: Using struct for caching dlsym state in communicator
- PR #1659: TSNE - introduce 'convert_dtype' and refactor class attr 'Y' to 'embedding_'
- PR #1672: Solver 'svd' in Linear and Ridge Regressors when n_cols=1
- PR #1670: Lasso & ElasticNet - cuml Handle added
- PR #1671: Update for accessing cuDF Series pointer
- PR #1652: Support XGBoost 1.0+ models in FIL
- PR #1702: Fix LightGBM-FIL validation test
- PR #1701: test_score kmeans test passing with newer cupy version
- PR #1706: Remove multi-class bug from QuasiNewton
- PR #1699: Limit CuPy to <7.2 temporarily
- PR #1708: Correctly deallocate cuML handles in Cython
- PR #1730: Fixes to KF for test stability (mainly in CUDA 10.2)
- PR #1729: Fixing naive bayes UCX serialization problem in fit()
- PR #1749: bug fix rf classifier/regressor on seg fault in bench
- PR #1751: Updated RF documentation
- PR #1765: Update the checks for using RF GPU predict
- PR #1787: C++: unit-tests to check for RF accuracy. As well as a bug fix to improve RF accuracy
- PR #1793: Updated fil pyx to solve memory leakage issue
- PR #1810: Quickfix - chunkage in dask make_regression
- PR #1842: DistributedDataHandler not properly setting 'multiple'
- PR #1849: Critical fix in ARIMA initial estimate
- PR #1851: Fix for cuDF behavior change for multidimensional arrays
- PR #1852: Remove Thrust warnings
- PR #1868: Turning off IPC caching until it is fixed in UCX-py/UCX
- PR #1876: UMAP exponential decay parameters fix
- PR #1887: Fix hasattr for missing attributes on base models
- PR #1877: Remove resetting index in shuffling in train_test_split
- PR #1893: Updating UCX in comms to match current UCX-py
- PR #1888: Small train_test_split test fix
- PR #1899: Fix dask `extract_partitions()`, remove transformation as instance variable in PCA and TSVD and match sklearn APIs
- PR #1920: Temporarily raising threshold for UMAP reproducibility tests
- PR #1918: Create memleak fixture to skip memleak tests in CI for now
- PR #1926: Update batch matrix test margins
- PR #1925: Fix failing dask tests
- PR #1936: Update DaskRF regression test to xfail
- PR #1932: Isolating cause of make_blobs failure
- PR #1951: Dask Random forest regression CPU predict bug fix
- PR #1948: Adjust BatchedMargin margin and disable tests temporarily
- PR #1950: Fix UMAP test failure

# cuML 0.12.0 (04 Feb 2020)

## New Features

- PR #1483: prims: Fused L2 distance and nearest-neighbor prim
- PR #1494: bench: ml-prims benchmark
- PR #1514: bench: Fused L2 NN prim benchmark
- PR #1411: Cython side of MNMG OLS
- PR #1520: Cython side of MNMG Ridge Regression
- PR #1516: Support Vector Regression (epsilon-SVR)

## Improvements

- PR #1638: Update cuml/docs/README.md
- PR #1468: C++: updates to clang format flow to make it more usable among devs
- PR #1473: C++: lazy initialization of "costly" resources inside cumlHandle
- PR #1443: Added a new overloaded GEMM primitive
- PR #1489: Enabling deep trees using Gather tree builder
- PR #1463: Update FAISS submodule to 1.6.1
- PR #1488: Add codeowners
- PR #1432: Row-major (C-style) GPU arrays for benchmarks
- PR #1490: Use dask master instead of conda package for testing
- PR #1375: Naive Bayes & Distributed Naive Bayes
- PR #1377: Add GPU array support for FIL benchmarking
- PR #1493: kmeans: add tiling support for 1-NN computation and use fusedL2-1NN prim for L2 distance metric
- PR #1532: Update CuPy to >= 6.6 and allow 7.0
- PR #1528: Re-enabling KNN using dynamic library loading for UCX in communicator
- PR #1545: Add conda environment version updates to ci script
- PR #1541: Updates for libcudf++ Python refactor
- PR #1555: FIL-SKL, an SKLearn-based benchmark for FIL
- PR #1537: Improve pickling and scoring support for many models to support hyperopt
- PR #1551: Change custom kernel to cupy for col/row order transform
- PR #1533: C++: interface header file separation for SVM
- PR #1560: Helper function to allocate all new CuPy arrays with RMM memory management
- PR #1570: Relax nccl in conda recipes to >=2.4 (matching CI)
- PR #1578: Add missing function information to the cuML documentation
- PR #1584: Add has_scipy utility function for runtime check
- PR #1583: API docs updates for 0.12
- PR #1591: Updated FIL documentation

## Bug Fixes

- PR #1470: Documentation: add make_regression, fix ARIMA section
- PR #1482: Updated the code to remove sklearn from the mbsgd stress test
- PR #1491: Update dev environments for 0.12
- PR #1512: Updating setup_cpu() in SpeedupComparisonRunner
- PR #1498: Add build.sh to code owners
- PR #1505: cmake: added correct dependencies for prims-bench build
- PR #1534: Removed TODO comment in create_ucp_listeners()
- PR #1548: Fixing umap extra unary op in knn graph
- PR #1547: Fixing MNMG kmeans score. Fixing UMAP pickling before fit(). Fixing UMAP test failures.
- PR #1557: Increasing threshold for kmeans score
- PR #1562: Increasing threshold even higher
- PR #1564: Fixed a typo in function cumlMPICommunicator_impl::syncStream
- PR #1569: Remove Scikit-learn exception and dependency in SVM
- PR #1575: Add missing dtype parameter in call to strides to order for CuPy 6.6 code path
- PR #1574: Updated the init file to include SVM
- PR #1589: Fixing the default value for RF and updating mnmg predict to accept cudf
- PR #1601: Fixed wrong datatype used in knn voting kernel

# cuML 0.11.0 (11 Dec 2019)

## New Features

- PR #1295: Cython side of MNMG PCA
- PR #1218: prims: histogram prim
- PR #1129: C++: Separate include folder for C++ API distribution
- PR #1282: OPG KNN MNMG Code (disabled for 0.11)
- PR #1242: Initial implementation of FIL sparse forests
- PR #1194: Initial ARIMA time-series modeling support
- PR #1286: Importing treelite models as FIL sparse forests
- PR #1285: Fea minimum impurity decrease RF param
- PR #1301: Add make_regression to generate regression datasets
- PR #1322: RF pickling using treelite, protobuf and FIL
- PR #1332: Add option to cuml.dask make_blobs to produce dask array
- PR #1307: Add RF regression benchmark
- PR #1327: Update the code to build treelite with protobuf
- PR #1289: Add Python benchmarking support for FIL
- PR #1371: Cython side of MNMG tSVD
- PR #1386: Expose SVC decision function value

## Improvements

- PR #1170: Use git to clone subprojects instead of git submodules
- PR #1239: Updated the treelite version
- PR #1225: setup.py clone dependencies like cmake and correct include paths
- PR #1224: Refactored FIL to prepare for sparse trees
- PR #1249: Include libcuml.so C API in installed targets
- PR #1259: Conda dev environment updates and use libcumlprims current version in CI
- PR #1277: Change dependency order in cmake for better printing at compile time
- PR #1264: Add -s flag to GPU CI pytest for better error printing
- PR #1271: Updated the Ridge regression documentation
- PR #1283: Updated the cuML docs to include MBSGD and adjusted_rand_score
- PR #1300: Lowercase parameter versions for FIL algorithms
- PR #1312: Update CuPy to version 6.5 and use conda-forge channel
- PR #1336: Import SciKit-Learn models into FIL
- PR #1314: Added options needed for ASVDb output (CUDA ver, etc.), added option to select algos
- PR #1335: Options to print available algorithms and datasets in the Python benchmark
- PR #1338: Remove BUILD_ABI references in CI scripts
- PR #1340: Updated unit tests to use larger dataset
- PR #1351: Build treelite temporarily for GPU CI testing of FIL Scikit-learn model importing
- PR #1367: --test-split benchmark parameter for train-test split
- PR #1360: Improved tests for importing SciKit-Learn models into FIL
- PR #1368: Add --num-rows benchmark command line argument
- PR #1366: Modify train_test_split to use CuPy and accept device arrays
- PR #1258: Documenting new MPI communicator for multi-node multi-GPU testing
- PR #1345: Removing deprecated should_downcast argument
- PR #1362: device_buffer in UMAP + Sparse prims
- PR #1376: AUTO value for FIL algorithm
- PR #1408: Updated pickle tests to delete the pre-pickled model to prevent pointer leakage
- PR #1357: Run benchmarks multiple times for CI
- PR #1382: ARIMA optimization: move functions to C++ side
- PR #1392: Updated RF code to reduce duplication of the code
- PR #1444: UCX listener running in its own isolated thread
- PR #1445: Improved performance of FIL sparse trees
- PR #1431: Updated API docs
- PR #1441: Remove unused CUDA conda labels
- PR #1439: Match sklearn 0.22 default n_estimators for RF and fix test errors
- PR #1461: Add kneighbors to API docs

## Bug Fixes

- PR #1281: Making rng.h threadsafe
- PR #1212: Fix cmake git cloning always running configure in subprojects
- PR #1261: Fix comms build errors due to cuml++ include folder changes
- PR #1267: Update build.sh for recent change of building comms in main CMakeLists
- PR #1278: Removed incorrect overloaded instance of eigJacobi
- PR #1302: Updates for numba 0.46
- PR #1313: Updated the RF tests to set the seed and n_streams
- PR #1319: Using machineName arg passed in instead of default for ASV reporting
- PR #1326: Fix illegal memory access in make_regression (bounds issue)
- PR #1330: Fix C++ unit test utils for better handling of differences near zero
- PR #1342: Fix to prevent memory leakage in Lasso and ElasticNet
- PR #1337: Fix k-means init from preset cluster centers
- PR #1354: Fix SVM gamma=scale implementation
- PR #1344: Change other solver based methods to create solver object in init
- PR #1373: Fixing a few small bugs in make_blobs and adding asserts to pytests
- PR #1361: Improve SMO error handling
- PR #1384: Lower expectations on batched matrix tests to prevent CI failures
- PR #1380: Fix memory leaks in ARIMA
- PR #1391: Lower expectations on batched matrix tests even more
- PR #1394: Warning added in svd for cuda version 10.1
- PR #1407: Resolved RF predict issues and updated RF docstring
- PR #1401: Patch for lbfgs solver for logistic regression with no l1 penalty
- PR #1416: train_test_split numba and rmm device_array output bugfix
- PR #1419: UMAP pickle tests are using wrong n_neighbors value for trustworthiness
- PR #1438: KNN Classifier to properly return Dataframe with Dataframe input
- PR #1425: Deprecate seed and use random_state similar to Scikit-learn in train_test_split
- PR #1458: Add joblib as an explicit requirement
- PR #1474: Defer knn mnmg to 0.12 nightly builds and disable ucx-py dependency

# cuML 0.10.0 (16 Oct 2019)

## New Features

- PR #1148: C++ benchmark tool for c++/CUDA code inside cuML
- PR #1071: Selective eigen solver of cuSolver
- PR #1073: Updating RF wrappers to use FIL for GPU accelerated prediction
- PR #1104: CUDA 10.1 support
- PR #1113: prims: new batched make-symmetric-matrix primitive
- PR #1112: prims: new batched-gemv primitive
- PR #855: Added benchmark tools
- PR #1149: Add YYMMDD to version tag for nightly conda packages
- PR #892: General Gram matrices prim
- PR #912: Support Vector Machine
- PR #1274: Updated the RF score function to use GPU predict

## Improvements

- PR #961: High Performance RF; HIST algo
- PR #1028: Dockerfile updates after dir restructure. Conda env yaml to add statsmodels as a dependency
- PR #1047: Consistent OPG interface for kmeans, based on internal libcumlprims update
- PR #763: Add examples to train_test_split documentation
- PR #1093: Unified inference kernels for different FIL algorithms
- PR #1076: Paying off some UMAP / Spectral tech debt
- PR #1086: Ensure RegressorMixin scorer uses device arrays
- PR #1110: Adding tests to use default values of parameters of the models
- PR #1108: input_to_host_array function in input_utils for input processing to host arrays
- PR #1114: K-means: Exposing useful params, removing unused params, proxying params in Dask
- PR #1138: Implementing ANY_RANK semantics on irecv
- PR #1142: prims: expose separate InType and OutType for unaryOp and binaryOp
- PR #1115: Moving dask_make_blobs to cuml.dask.datasets. Adding conversion to dask.DataFrame
- PR #1136: CUDA 10.1 CI updates
- PR #1135: K-means: add boundary cases for kmeans||, support finer control with convergence
- PR #1163: Some more correctness improvements. Better verbose printing
- PR #1165: Adding except + in all remaining cython
- PR #1186: Using LocalCUDACluster Pytest fixture
- PR #1173: Docs: Barnes Hut TSNE documentation
- PR #1176: Use new RMM API based on Cython
- PR #1219: Adding custom bench_func and verbose logging to cuml.benchmark
- PR #1247: Improved MNMG RF error checking

## Bug Fixes

- PR #1231: RF respect number of cuda streams from cuml handle
- PR #1230: Rf bugfix memleak in regression
- PR #1208: compile dbscan bug
- PR #1016: Use correct libcumlprims version in GPU CI
- PR #1040: Update version of numba in development conda yaml files
- PR #1043: Updates to accommodate cuDF python code reorganization
- PR #1044: Remove nvidia driver installation from ci/cpu/build.sh
- PR #991: Barnes Hut TSNE Memory Issue Fixes
- PR #1075: Pinning Dask version for consistent CI results
- PR #990: Barnes Hut TSNE Memory Issue Fixes
- PR #1066: Using proper set of workers to destroy nccl comms
- PR #1072: Remove pip requirements and setup
- PR #1074: Fix flake8 CI style check
- PR #1087: Accuracy improvement for sqrt/log in RF max_feature
- PR #1088: Change straggling numba python allocations to use RMM
- PR #1106: Pinning Distributed version to match Dask for consistent CI results
- PR #1116: TSNE CUDA 10.1 Bug Fixes
- PR #1132: DBSCAN Batching Bug Fix
- PR #1162: DASK RF random seed bug fix
- PR #1164: Fix check_dtype arg handling for input_to_dev_array
- PR #1171: SVM prediction bug fix
- PR #1177: Update dask and distributed to 2.5
- PR #1204: Fix SVM crash on Turing
- PR #1199: Replaced sprintf() with snprintf() in THROW()
- PR #1205: Update dask-cuda in yml envs
- PR #1211: Fixing Dask k-means transform bug and adding test
- PR #1236: Improve fix for SMO solvers potential crash on Turing
- PR #1251: Disable compiler optimization for CUDA 10.1 for distance prims
- PR #1260: Small bugfix for major conversion in input_utils
- PR #1276: Fix float64 prediction crash in test_random_forest

# cuML 0.9.0 (21 Aug 2019)

## New Features

- PR #894: Convert RF to treelite format
- PR #826: Jones transformation of params for ARIMA models timeSeries ml-prim
- PR #697: Silhouette Score metric ml-prim
- PR #674: KL Divergence metric ml-prim
- PR #787: homogeneity, completeness and v-measure metrics ml-prim
- PR #711: Mutual Information metric ml-prim
- PR #724: Entropy metric ml-prim
- PR #766: Expose score method based on inertia for KMeans
- PR #823: prims: cluster dispersion metric
- PR #816: Added inverse_transform() for LabelEncoder
- PR #789: prims: sampling without replacement
- PR #813: prims: Col major distance prim
- PR #635: Random Forest & Decision Tree Regression (Single-GPU)
- PR #819: Forest Inferencing Library (FIL)
- PR #829: C++: enable nvtx ranges
- PR #835: Holt-Winters algorithm
- PR #837: treelite for decision forest exchange format
- PR #871: Wrapper for FIL
- PR #870: make_blobs python function
- PR #881: wrappers for accuracy_score and adjusted_rand_score functions
- PR #840: Dask RF classification and regression
- PR #879: import of treelite models to FIL
- PR #892: General Gram matrices prim
- PR #883: Adding MNMG Kmeans
- PR #930: Dask RF
- PR #882: TSNE - T-Distributed Stochastic Neighbourhood Embedding
- PR #624: Internals API & Graph Based Dimensionality Reductions Callback
- PR #926: Wrapper for FIL
- PR #994: Adding MPI comm impl for testing / benchmarking MNMG CUDA
- PR #960: Enable using libcumlprims for MG algorithms/prims

## Improvements

- PR #822: build: build.sh update to club all make targets together
- PR #807: Added development conda yml files
- PR #840: Require cmake >= 3.14
- PR #832: Stateless Decision Tree and Random Forest API
- PR #857: Small modifications to comms for utilizing IB w/ Dask
- PR #851: Random forest Stateless API wrappers
- PR #865: High Performance RF
- PR #895: Pretty prints arguments!
- PR #920: Add an empty marker kernel for tracing purposes
- PR #915: syncStream added to cumlCommunicator
- PR #922: Random Forest support in FIL
- PR #911: Update headers to credit CannyLabs BH TSNE implementation
- PR #918: Streamline CUDA_REL environment variable
- PR #924: kmeans: updated APIs to be stateless, refactored code for mnmg support
- PR #950: global_bias support in FIL
- PR #773: Significant improvements to input checking of all classes and common input API for Python
- PR #957: Adding docs to RF & KMeans MNMG. Small fixes for release
- PR #965: Making dask-ml a hard dependency
- PR #976: Update api.rst for new 0.9 classes
- PR #973: Use cudaDeviceGetAttribute instead of relying on cudaDeviceProp object being passed
- PR #978: Update README for 0.9
- PR #1009: Fix references to notebooks-contrib
- PR #1015: Ability to control the number of internal streams in cumlHandle_impl via cumlHandle
- PR #1175: Add more modules to docs ToC

## Bug Fixes

- PR #923: Fix misshapen level/trend/season HoltWinters output
- PR #831: Update conda package dependencies to cudf 0.9
- PR #772: Add missing cython headers to SGD and CD
- PR #849: PCA no attribute trans_input_ transform bug fix
- PR #869: Removing incorrect information from KNN Docs
- PR #885: libclang installation fix for GPUCI
- PR #896: Fix typo in comms build instructions
- PR #921: Fix build scripts using incorrect cudf version
- PR #928: TSNE Stability Adjustments
- PR #934: Cache cudaDeviceProp in cumlHandle for perf reasons
- PR #932: Change default param value for RF classifier
- PR #949: Fix dtype conversion tests for unsupported cudf dtypes
- PR #908: Fix local build generated file ownerships
- PR #983: Change RF max_depth default to 16
- PR #987: Change default values for knn
- PR #988: Switch to exact tsne
- PR #991: Cleanup python code in cuml.dask.cluster
- PR #996: ucx_initialized being properly set in CommsContext
- PR #1007: Throws a well defined error when multigpu is not enabled
- PR #1018: Hint location of nccl in build.sh for CI
- PR #1022: Using random_state to make K-Means MNMG tests deterministic
- PR #1034: Fix typos and formatting issues in RF docs
- PR #1052: Fix the rows_sample dtype to float

# cuML 0.8.0 (27 June 2019)

## New Features

- PR #652: Adjusted Rand Index metric ml-prim
- PR #679: Class label manipulation ml-prim
- PR #636: Rand Index metric ml-prim
- PR #515: Added Random Projection feature
- PR #504: Contingency matrix ml-prim
- PR #644: Add train_test_split utility for cuDF dataframes
- PR #612: Allow Cuda Array Interface, Numba inputs and input code refactor
- PR #641: C: Separate C-wrapper library build to generate libcuml.so
- PR #631: Add nvcategory based ordinal label encoder
- PR #681: Add MBSGDClassifier and MBSGDRegressor classes around SGD
- PR #705: Quasi Newton solver and LogisticRegression Python classes
- PR #670: Add test skipping functionality to build.sh
- PR #678: Random Forest Python class
- PR #684: prims: make_blobs primitive
- PR #673: prims: reduce cols by key primitive
- PR #812: Add cuML Communications API & consolidate Dask cuML

## Improvements

- PR #597: C++ cuML and ml-prims folder refactor
- PR #590: QN Recover from numeric errors
- PR #482: Introduce cumlHandle for pca and tsvd
- PR #573: Remove use of unnecessary cuDF column and series copies
- PR #601: Cython PEP8 cleanup and CI integration
- PR #596: Introduce cumlHandle for ols and ridge
- PR #579: Introduce cumlHandle for cd and sgd, and propagate C++ errors in cython level for cd and sgd
- PR #604: Adding cumlHandle to kNN, spectral methods, and UMAP
- PR #616: Enable clang-format for enforcing coding style
- PR #618: CI: Enable copyright header checks
- PR #622: Updated to use 0.8 dependencies
- PR #626: Added build.sh script, updated CI scripts and documentation
- PR #633: build: Auto-detection of GPU_ARCHS during cmake
- PR #650: Moving brute force kNN to prims. Creating stateless kNN API.
- PR #662: C++: Bulk clang-format updates
- PR #671: Added pickle pytests and correct pickling of Base class
- PR #675: atomicMin/Max(float, double) with integer atomics and bit flipping
- PR #677: build: 'deep-clean' to build.sh to clean faiss build as well
- PR #683: Use stateless c++ API in KNN so that it can be pickled properly
- PR #686: Use stateless c++ API in UMAP so that it can be pickled properly
- PR #695: prims: Refactor pairwise distance
- PR #707: Added stress test and updated documentation for RF
- PR #701: Added emacs temporary file patterns to .gitignore
- PR #606: C++: Added tests for host_buffer and improved device_buffer and host_buffer implementation
- PR #726: Updated RF docs and stress test
- PR #730: Update README and RF docs for 0.8
- PR #744: Random projections generating binomial on device. Fixing tests.
- PR #741: Update API docs for 0.8
- PR #754: Pickling of UMAP/KNN
- PR #753: Made PCA and TSVD picklable
- PR #746: LogisticRegression and QN API docstrings
- PR #820: Updating DEVELOPER GUIDE threading guidelines

## Bug Fixes
- PR #584: Added missing virtual destructor to deviceAllocator and hostAllocator
- PR #620: C++: Removed old unit-test files in ml-prims
- PR #627: C++: Fixed dbscan crash issue filed in 613
- PR #640: Remove setuptools from conda run dependency
- PR #646: Update link in contributing.md
- PR #649: Bug fix to LinAlg::reduce_rows_by_key prim filed in issue #648
- PR #666: Fixes to gitutils.py to resolve both string decode and handling of uncommitted files
- PR #676: Fix template parameters in `bernoulli()` implementation.
- PR #685: Make CuPy optional to avoid nccl conda package conflicts
- PR #687: prims: updated tolerance for reduce_cols_by_key unit-tests
- PR #689: Removing extra prints from NearestNeighbors cython
- PR #718: Bug fix for DBSCAN and increasing batch size of sgd
- PR #719: Adding additional checks for dtype of the data
- PR #736: Bug fix for RF wrapper and .cu print function
- PR #547: Fixed issue if C++ compiler is specified via CXX during configure.
- PR #759: Configure Sphinx to render params correctly
- PR #762: Apply threshold to remove flakiness of UMAP tests.
- PR #768: Fixing memory bug from stateless refactor
- PR #782: Nearest neighbors checking properly whether memory should be freed
- PR #783: UMAP was using wrong size for knn computation
- PR #776: Hotfix for self.variables in RF
- PR #777: Fix numpy input bug
- PR #784: Fix jit of shuffle_idx python function
- PR #790: Fix rows_sample input type for RF
- PR #793: Fix for dtype conversion utility for numba arrays without cupy installed
- PR #806: Add a seed for sklearn model in RF test file
- PR #843: RF quantile fix

# cuML 0.7.0 (10 May 2019)

## New Features
- PR #405: Quasi-Newton GLM Solvers
- PR #277: Add row- and column-wise weighted mean primitive
- PR #424: Add a grid-sync struct for inter-block synchronization
- PR #430: Add R-Squared Score to ml primitives
- PR #463: Add matrix gather to ml primitives
- PR #435: Expose cumlhandle in cython + developer guide
- PR #455: Remove default-stream argument across ml-prims and cuML
- PR #375: cuml cpp shared library renamed to libcuml++.so
- PR #460: Random Forest & Decision Trees (Single-GPU, Classification)
- PR #491: Add doxygen build target for ml-prims
- PR #505: Add R-Squared Score to python interface
- PR #507: Add coordinate descent for lasso and elastic-net
- PR #511: Add a minmax ml-prim
- PR #516: Added Trustworthiness score feature
- PR #520: Add local build script to mimic gpuCI
- PR #503: Add column-wise matrix sort primitive
- PR #525: Add docs build script to cuML
- PR #528: Remove current KMeans and replace it with a new single GPU implementation built using ML primitives

## Improvements
- PR #481: Refactoring Quasi-Newton to use cumlHandle
- PR #467: Added validity check on cumlHandle_t
- PR #461: Rewrote permute and added column major version
- PR #440: README updates
- PR #295: Improve build-time and the interface e.g., enable bool-OutType, for distance()
- PR #390: Update docs version
- PR #272: Add stream parameters to cublas and cusolver wrapper functions
- PR #447: Added building and running mlprims tests to CI
- PR #445: Lower dbscan memory usage by computing adjacency matrix directly
- PR #431: Add support for fancy iterator input types to LinAlg::reduce_rows_by_key
- PR #394: Introducing cumlHandle API to dbscan and add example
- PR #500: Added CI check for black listed CUDA Runtime API calls
- PR #475: exposing cumlHandle for dbscan from python-side
- PR #395: Edited the CONTRIBUTING.md file
- PR #407: Test files to run stress, correctness and unit tests for cuml algos
- PR #512: generic copy method for copying buffers between device/host
- PR #533: Add cudatoolkit conda dependency
- PR #524: Use cmake find blas and find lapack to pass configure options to faiss
- PR #527: Added notes on UMAP differences from reference implementation
- PR #540: Use latest release version in update-version CI script
- PR #552: Re-enable assert in kmeans tests with xfail as needed
- PR #581: Add shared memory fast col major to row major function back with bound checks
- PR #592: More efficient matrix copy/reverse methods
- PR #721: Added pickle tests for DBSCAN and Random Projections

## Bug Fixes
- PR #334: Fixed segfault in `ML::cumlHandle_impl::destroyResources`
- PR #349: Developer guide clarifications for cumlHandle and cumlHandle_impl
- PR #398: Fix CI scripts to allow nightlies to be uploaded
- PR #399: Skip PCA tests to allow CI to run with driver 418
- PR #422: Issue in the PCA tests was solved
and CI can run with driver 418
- PR #409: Add entry to gitmodules to ignore build artifacts
- PR #412: Fix for svdQR function in ml-prims
- PR #438: Code that depended on FAISS was building every time.
- PR #358: Fixed an issue when switching streams on MLCommon::device_buffer and MLCommon::host_buffer
- PR #434: Fixing bug in CSR tests
- PR #443: Remove defaults channel from ci scripts
- PR #384: 64b index arithmetic updates to the kernels inside ml-prims
- PR #459: Fix for runtime library path of pip package
- PR #464: Fix for C++11 destructor warning in qn
- PR #466: Add support for column-major in LinAlg::*Norm methods
- PR #465: Fixing deadlock issue in GridSync due to consecutive sync calls
- PR #468: Fix dbscan example build failure
- PR #470: Fix resource leakage in Kalman filter python wrapper
- PR #473: Fix gather ml-prim test for change in rng uniform API
- PR #477: Fixes default stream initialization in cumlHandle
- PR #480: Replaced qn_fit() declaration with #include of file containing definition to fix linker error
- PR #495: Update cuDF and RMM versions in GPU ci test scripts
- PR #499: DEVELOPER_GUIDE.md: fixed links and clarified ML::detail::streamSyncer example
- PR #506: Re-enable ml-prim tests in CI
- PR #508: Fix for an error with default argument in LinAlg::meanSquaredError
- PR #519: README.md Updates and adding BUILD.md back
- PR #526: Fix the issue of wrong results when fit and transform of PCA are called separately
- PR #531: Fixing missing arguments in updateDevice() for RF
- PR #543: Exposing dbscan batch size through cython API and fixing broken batching
- PR #551: Made use of ZLIB_LIBRARIES consistent between ml_test and ml_mg_test
- PR #557: Modified CI script to run cuML tests before building mlprims and removed lapack flag
- PR #578: Updated Readme.md to add lasso and elastic-net
- PR #580: Fixing cython garbage collection bug in KNN
- PR #577: Use find libz in prims cmake
- PR #594: Fixed cuda-memcheck mean_center test failures

# cuML 0.6.1 (09 Apr 2019)

## Bug Fixes
- PR #462: Runtime library path fix for cuML pip package

# cuML 0.6.0 (22 Mar 2019)

## New Features
- PR #249: Single GPU Stochastic Gradient Descent for linear regression, logistic regression, and linear svm with L1, L2, and elastic-net penalties.
- PR #247: Added "proper" CUDA API to cuML
- PR #235: NearestNeighbors MG Support
- PR #261: UMAP Algorithm
- PR #290: NearestNeighbors numpy MG Support
- PR #303: Reusable spectral embedding / clustering
- PR #325: Initial support for single process multi-GPU OLS and tSVD
- PR #271: Initial support for hyperparameter optimization with dask for many models

## Improvements
- PR #144: Dockerfile update and docs for LinearRegression and Kalman Filter.
- PR #168: Add /ci/gpu/build.sh file to cuML
- PR #167: Integrating full-n-final ml-prims repo inside cuml
- PR #198: (ml-prims) Removal of *MG calls + fixed a bug in permute method
- PR #194: Added new ml-prims for supporting LASSO regression.
- PR #114: Building faiss C++ api into libcuml
- PR #64: Using FAISS C++ API in cuML and exposing bindings through cython
- PR #208: Issue ml-common-3: Math.h: swap thrust::for_each with binaryOp,unaryOp
- PR #224: Improve doc strings for readable rendering with readthedocs
- PR #209: Simplify README.md, move build instructions to BUILD.md
- PR #218: Fix RNG to use given seed and adjust RNG test tolerances.
- PR #225: Support for generating random integers
- PR #215: Refactored LinAlg::norm to Stats::rowNorm and added Stats::colNorm
- PR #234: Support for custom output type and passing index value to main_op in *Reduction kernels
- PR #230: Refactored the cuda_utils header
- PR #236: Refactored cuml python package structure to be more sklearn like
- PR #232: Added reduce_rows_by_key
- PR #246: Support for 2 vectors in the matrix vector operator
- PR #244: Fix for single GPU OLS and Ridge to support one column training data
- PR #271: Added get_params and set_params functions for linear and ridge regression
- PR #253: Fix for issue #250-reduce_rows_by_key failed memcheck for small nkeys
- PR #269: LinearRegression, Ridge Python docs update and cleaning
- PR #322: set_params updated
- PR #237: Update build instructions
- PR #275: Kmeans use of faster gpu_matrix
- PR #288: Add n_neighbors to NearestNeighbors constructor
- PR #302: Added FutureWarning for deprecation of current kmeans algorithm
- PR #312: Last minute cleanup before release
- PR #315: Documentation updating and enhancements
- PR #330: Added ignored argument to pca.fit_transform to map to sklearn's implementation
- PR #342: Change default ABI to ON
- PR #572: Pulling DBSCAN components into reusable primitives

## Bug Fixes
- PR #193: Fix AttributeError in PCA and TSVD
- PR #211: Fixing inconsistent use of proper batch size calculation in DBSCAN
- PR #202: Adding back ability for users to define their own BLAS
- PR #201: Pass CMAKE CUDA path to faiss/configure script
- PR #200: Avoid using numpy via cimport in KNN
- PR #228: Bug fix: LinAlg::unaryOp with 0-length input
- PR #279: Removing faiss-gpu references in README
- PR #321: Fix release script typo
- PR #327: Update conda requirements for version 0.6 requirements
- PR #352: Correctly calculating numpy chunk sizing for kNN
- PR #345: Run python import as part of package build to trigger compilation
- PR #347: Lowering memory usage of kNN.
- PR #355: Fixing issues with very large numpy inputs to SPMG OLS and tSVD.
- PR #357: Removing FAISS requirement from README
- PR #362: Fix for matVecOp crashing on large input sizes
- PR #366: Index arithmetic issue fix with TxN_t class
- PR #376: Disabled kmeans tests since they are currently too sensitive (see #71)
- PR #380: Allow arbitrary data size on ingress for numba_utils.row_matrix
- PR #385: Fix for long import cuml time in containers and fix for setup_pip
- PR #630: Fixing a missing kneighbors in nearest neighbors python proxy

# cuML 0.5.1 (05 Feb 2019)

## Bug Fixes
- PR #189: Avoid using numpy via cimport to prevent ABI issues in Cython compilation

# cuML 0.5.0 (28 Jan 2019)

## New Features
- PR #66: OLS Linear Regression
- PR #44: Distance calculation ML primitives
- PR #69: Ridge (L2 Regularized) Linear Regression
- PR #103: Linear Kalman Filter
- PR #117: Pip install support
- PR #64: Device to device support from cuML device pointers into FAISS

## Improvements
- PR #56: Make OpenMP optional for building
- PR #67: Github issue templates
- PR #44: Refactored DBSCAN to use ML primitives
- PR #91: Pytest cleanup and sklearn toyset datasets based pytests for kmeans and dbscan
- PR #75: C++ example to use kmeans
- PR #117: Use cmake extension to find any zlib installed in system
- PR #94: Add cmake flag to set ABI compatibility
- PR #139: Move thirdparty submodules to root and add symlinks to new locations
- PR #151: Replace TravisCI testing and conda pkg builds with gpuCI
- PR #164: Add numba kernel for faster column to row major transform
- PR #114: Adding FAISS to cuml build

## Bug Fixes
- PR #48: CUDA 10 compilation warnings fix
- PR #51: Fixes to Dockerfile and docs for new build system
- PR #72: Fixes for GCC 7
- PR #96: Fix for kmeans stack overflow with high number of clusters
- PR #105: Fix for AttributeError in kmeans fit method
- PR #113: Removed old glm python/cython files
- PR #118: Fix for AttributeError in kmeans predict method
- PR #125: Remove randomized solver option from PCA python bindings

# cuML 0.4.0 (05 Dec 2018)

## New Features

## Improvements
- PR #42: New build system: separation of libcuml.so and cuml python package
- PR #43: Added changelog.md

## Bug Fixes

# cuML 0.3.0 (30 Nov 2018)

## New Features
- PR #33: Added ability to call cuML algorithms using numpy arrays

## Improvements
- PR #24: Fix references of python package from cuML to cuml and start using versioneer for better versioning
- PR #40: Added support for refactored cuDF 0.3.0, updated Conda files
- PR #33: Major python test cleaning, all tests pass with cuDF 0.2.0 and 0.3.0. Preparation for new build system
- PR #34: Updated batch count calculation logic in DBSCAN
- PR #35: Beginning of DBSCAN refactor to use cuML mlprims and general improvements

## Bug Fixes
- PR #30: Fixed batch size bug in DBSCAN that caused crash. Also fixed various locations for potential integer overflows
- PR #28: Fix readthedocs build documentation
- PR #29: Fix pytests for cuml name change from cuML
- PR #33: Fixed memory bug that would cause segmentation faults due to numba releasing memory before it was used. Also fixed row major/column major bugs for different algorithms
- PR #36: Fix kmeans gtest to use device data
- PR #38: cuda\_free bug removed that caused google tests to sometimes pass and sometimes fail randomly
- PR #39: Updated cmake to correctly link with CUDA libraries, add CUDA runtime linking and include source files in compile target

# cuML 0.2.0 (02 Nov 2018)

## New Features
- PR #11: Kmeans algorithm added
- PR #7: FAISS KNN wrapper added
- PR #21: Added Conda install support

## Improvements
- PR #15: Added compatibility with cuDF (from prior pyGDF)
- PR #13: Added FAISS to Dockerfile
- PR #21: Added TravisCI build system for CI and Conda builds

## Bug Fixes
- PR #4: Fixed explained variance bug in TSVD
- PR #5: Notebook bug fixes and updated results

# cuML 0.1.0

Initial release including PCA, TSVD, DBSCAN, ml-prims and cython wrappers
rapidsai_public_repos/cuml/build.sh
#!/bin/bash

# Copyright (c) 2019-2023, NVIDIA CORPORATION.

# cuml build script

# This script is used to build the component(s) in this repo from
# source, and can be called with various options to customize the
# build as needed (see the help output for details)

# Abort script on first error
set -e

NUMARGS=$#
ARGS=$*

# NOTE: ensure all dir changes are relative to the location of this
# script, and that this script resides in the repo dir!
REPODIR=$(cd $(dirname $0); pwd)

VALIDTARGETS="clean libcuml cuml cuml-cpu cpp-mgtests prims bench prims-bench cppdocs pydocs"
VALIDFLAGS="-v -g -n --allgpuarch --singlegpu --nolibcumltest --nvtx --show_depr_warn --codecov --ccache --configure-only -h --help "
VALIDARGS="${VALIDTARGETS} ${VALIDFLAGS}"
HELP="$0 [<target> ...] [<flag> ...]
 where <target> is:
   clean             - remove all existing build artifacts and configuration (start over)
   libcuml           - build the cuml C++ code only. Also builds the C-wrapper library around the C++ code.
   cuml              - build the cuml Python package
   cuml-cpu          - build the cuml CPU Python package
   cpp-mgtests       - build libcuml mnmg tests. Builds MPI communicator, adding MPI as dependency.
   prims             - build the ml-prims tests
   bench             - build the libcuml C++ benchmark
   prims-bench       - build the ml-prims C++ benchmark
   cppdocs           - build the C++ API doxygen documentation
   pydocs            - build the general and Python API documentation
 and <flag> is:
   -v                - verbose build mode
   -g                - build for debug
   -n                - no install step
   -h                - print this text
   --allgpuarch      - build for all supported GPU architectures
   --singlegpu       - Build libcuml and cuml without multigpu components
   --nolibcumltest   - disable building libcuml C++ tests for a faster build
   --nvtx            - Enable nvtx for profiling support
   --show_depr_warn  - show cmake deprecation warnings
   --codecov         - Enable code coverage support by compiling with Cython linetracing
                       and profiling enabled (WARNING: Impacts performance)
   --ccache          - Use ccache to cache previous compilations
   --configure-only  - Invoke CMake without actually building
   --nocloneraft     - CMake will clone RAFT even if it is in the environment, use this flag to disable that behavior
   --static-treelite - Force CMake to use the Treelite static libs, cloning and building them if necessary

 default action (no args) is to build and install 'libcuml', 'cuml', and 'prims' targets only for the detected GPU arch

 The following environment variables are also accepted to allow further customization:
   PARALLEL_LEVEL         - Number of parallel threads to use in compilation.
   CUML_EXTRA_CMAKE_ARGS  - Extra arguments to pass directly to cmake. Values listed in environment
                            variable will override existing arguments. Example:
                            CUML_EXTRA_CMAKE_ARGS=\"-DBUILD_CUML_C_LIBRARY=OFF\" ./build.sh
   CUML_EXTRA_PYTHON_ARGS - Extra argument to pass directly to python setup.py
"
LIBCUML_BUILD_DIR=${LIBCUML_BUILD_DIR:=${REPODIR}/cpp/build}
CUML_BUILD_DIR=${REPODIR}/python/build
PYTHON_DEPS_CLONE=${REPODIR}/python/external_repositories
BUILD_DIRS="${LIBCUML_BUILD_DIR} ${CUML_BUILD_DIR} ${PYTHON_DEPS_CLONE}"

# Set defaults for vars modified by flags to this script
VERBOSE=""
BUILD_TYPE=Release
INSTALL_TARGET=install
BUILD_ALL_GPU_ARCH=0
SINGLEGPU_CPP_FLAG=""
CUML_EXTRA_PYTHON_ARGS=${CUML_EXTRA_PYTHON_ARGS:=""}
NVTX=OFF
CCACHE=OFF
CLEAN=0
BUILD_DISABLE_DEPRECATION_WARNINGS=ON
BUILD_CUML_STD_COMMS=ON
BUILD_CUML_TESTS=ON
BUILD_CUML_MG_TESTS=OFF
BUILD_STATIC_TREELITE=OFF
CMAKE_LOG_LEVEL=WARNING

# Set defaults for vars that may not have been defined externally
INSTALL_PREFIX=${INSTALL_PREFIX:=${PREFIX:=${CONDA_PREFIX:=$LIBCUML_BUILD_DIR/install}}}
PARALLEL_LEVEL=${PARALLEL_LEVEL:=`nproc`}

# Default to Ninja if generator is not specified
export CMAKE_GENERATOR="${CMAKE_GENERATOR:=Ninja}"

# Allow setting arbitrary cmake args via the $CUML_ADDL_CMAKE_ARGS variable. Any
# values listed here will override existing arguments. For example:
# CUML_EXTRA_CMAKE_ARGS="-DBUILD_CUML_C_LIBRARY=OFF" ./build.sh
# Will disable building the C library even though it is hard coded to ON
CUML_EXTRA_CMAKE_ARGS=${CUML_EXTRA_CMAKE_ARGS:=""}

function hasArg {
    (( ${NUMARGS} != 0 )) && (echo " ${ARGS} " | grep -q " $1 ")
}

function completeBuild {
    (( ${NUMARGS} == 0 )) && return
    for a in ${ARGS}; do
        if (echo " ${VALIDTARGETS} " | grep -q " ${a} "); then
            false; return
        fi
    done
    true
}

if hasArg -h || hasArg --help; then
    echo "${HELP}"
    exit 0
fi

if hasArg clean; then
    CLEAN=1
fi

if hasArg cpp-mgtests; then
    BUILD_CUML_MG_TESTS=ON
fi

# Long arguments
LONG_ARGUMENT_LIST=(
    "verbose"
    "debug"
    "no-install"
    "allgpuarch"
    "singlegpu"
    "nvtx"
    "show_depr_warn"
    "codecov"
    "ccache"
    "nolibcumltest"
    "nocloneraft"
    "configure-only"
)

# Short arguments
ARGUMENT_LIST=(
    "v"
    "g"
    "n"
)

# read arguments
opts=$(getopt \
    --longoptions "$(printf "%s," "${LONG_ARGUMENT_LIST[@]}")" \
    --name "$(basename "$0")" \
    --options "$(printf "%s" "${ARGUMENT_LIST[@]}")" \
    -- "$@"
)

if [ $? != 0 ] ; then echo "Terminating..." >&2 ; exit 1 ; fi

eval set -- "$opts"

while true; do
    case "$1" in
        -h)
            show_help
            exit 0
            ;;
        -v | --verbose )
            VERBOSE_FLAG="-v"
            CMAKE_LOG_LEVEL=VERBOSE
            ;;
        -g | --debug )
            BUILD_TYPE=Debug
            ;;
        -n | --no-install )
            INSTALL_TARGET=""
            ;;
        --allgpuarch )
            BUILD_ALL_GPU_ARCH=1
            ;;
        --singlegpu )
            CUML_EXTRA_PYTHON_ARGS="${CUML_EXTRA_PYTHON_ARGS} --singlegpu"
            SINGLEGPU_CPP_FLAG=ON
            ;;
        --nvtx )
            NVTX=ON
            ;;
        --show_depr_warn )
            BUILD_DISABLE_DEPRECATION_WARNINGS=OFF
            ;;
        --codecov )
            CUML_EXTRA_PYTHON_ARGS="${CUML_EXTRA_PYTHON_ARGS} --linetrace=1 --profile"
            ;;
        --ccache )
            CCACHE=ON
            ;;
        --nolibcumltest )
            BUILD_CUML_TESTS=OFF
            ;;
        --nocloneraft )
            DISABLE_FORCE_CLONE_RAFT=ON
            ;;
        --static-treelite )
            BUILD_STATIC_TREELITE=ON
            ;;
        --)
            shift
            break
            ;;
    esac
    shift
done

# If clean given, run it prior to any other steps
if (( ${CLEAN} == 1 )); then
    # If the dirs to clean are mounted dirs in a container, the
    # contents should be removed but the mounted dirs will remain.
    # The find removes all contents but leaves the dirs, the rmdir
    # attempts to remove the dirs but can fail safely.
    for bd in ${BUILD_DIRS}; do
        if [ -d ${bd} ]; then
            find ${bd} -mindepth 1 -delete
            rmdir ${bd} || true
        fi
    done

    cd ${REPODIR}/python
    python setup.py clean --all
    cd ${REPODIR}
fi

################################################################################
# Configure for building all C++ targets
if completeBuild || hasArg libcuml || hasArg prims || hasArg bench || hasArg prims-bench || hasArg cppdocs || hasArg cpp-mgtests; then
    if (( ${BUILD_ALL_GPU_ARCH} == 0 )); then
        CUML_CMAKE_CUDA_ARCHITECTURES="NATIVE"
        echo "Building for the architecture of the GPU in the system..."
    else
        CUML_CMAKE_CUDA_ARCHITECTURES="RAPIDS"
        echo "Building for *ALL* supported GPU architectures..."
    fi

    mkdir -p ${LIBCUML_BUILD_DIR}
    cd ${LIBCUML_BUILD_DIR}

    cmake -DCMAKE_INSTALL_PREFIX=${INSTALL_PREFIX} \
          -DCMAKE_CUDA_ARCHITECTURES=${CUML_CMAKE_CUDA_ARCHITECTURES} \
          -DCMAKE_BUILD_TYPE=${BUILD_TYPE} \
          -DBUILD_CUML_C_LIBRARY=ON \
          -DSINGLEGPU=${SINGLEGPU_CPP_FLAG} \
          -DCUML_ALGORITHMS="ALL" \
          -DBUILD_CUML_TESTS=${BUILD_CUML_TESTS} \
          -DBUILD_CUML_MPI_COMMS=${BUILD_CUML_MG_TESTS} \
          -DBUILD_CUML_MG_TESTS=${BUILD_CUML_MG_TESTS} \
          -DCUML_USE_TREELITE_STATIC=${BUILD_STATIC_TREELITE} \
          -DNVTX=${NVTX} \
          -DUSE_CCACHE=${CCACHE} \
          -DDISABLE_DEPRECATION_WARNINGS=${BUILD_DISABLE_DEPRECATION_WARNINGS} \
          -DCMAKE_PREFIX_PATH=${INSTALL_PREFIX} \
          -DCMAKE_MESSAGE_LOG_LEVEL=${CMAKE_LOG_LEVEL} \
          ${CUML_EXTRA_CMAKE_ARGS} \
          ..
fi

# If `./build.sh cuml` is called, don't build C/C++ components
if (! hasArg --configure-only) && (completeBuild || hasArg libcuml || hasArg prims || hasArg bench || hasArg cpp-mgtests); then
    cd ${LIBCUML_BUILD_DIR}
    if [ -n "${INSTALL_TARGET}" ]; then
        cmake --build ${LIBCUML_BUILD_DIR} -j${PARALLEL_LEVEL} ${build_args} --target ${INSTALL_TARGET} ${VERBOSE_FLAG}
    else
        cmake --build ${LIBCUML_BUILD_DIR} -j${PARALLEL_LEVEL} ${build_args} ${VERBOSE_FLAG}
    fi
fi

if (! hasArg --configure-only) && hasArg cppdocs; then
    cd ${LIBCUML_BUILD_DIR}
    cmake --build ${LIBCUML_BUILD_DIR} --target docs_cuml
fi

# Build and (optionally) install the cuml Python package
if (! hasArg --configure-only) && (completeBuild || hasArg cuml || hasArg pydocs); then
    # Append `-DFIND_CUML_CPP=ON` to CUML_EXTRA_CMAKE_ARGS unless a user specified the option.
    SKBUILD_EXTRA_CMAKE_ARGS="${CUML_EXTRA_CMAKE_ARGS}"
    if [[ "${CUML_EXTRA_CMAKE_ARGS}" != *"DFIND_CUML_CPP"* ]]; then
        SKBUILD_EXTRA_CMAKE_ARGS="${SKBUILD_EXTRA_CMAKE_ARGS} -DFIND_CUML_CPP=ON"
    fi

    SKBUILD_CONFIGURE_OPTIONS="-DCMAKE_MESSAGE_LOG_LEVEL=${CMAKE_LOG_LEVEL} ${SKBUILD_EXTRA_CMAKE_ARGS}" \
        SKBUILD_BUILD_OPTIONS="-j${PARALLEL_LEVEL}" \
        python -m pip install --no-build-isolation --no-deps ${REPODIR}/python

    if hasArg pydocs; then
        cd ${REPODIR}/docs
        make html
    fi
fi

if hasArg cuml-cpu; then
    SKBUILD_CONFIGURE_OPTIONS="-DCUML_CPU=ON -DCMAKE_MESSAGE_LOG_LEVEL=VERBOSE" \
        SKBUILD_BUILD_OPTIONS="-j${PARALLEL_LEVEL}" \
        python -m pip install --no-build-isolation --no-deps -v ${REPODIR}/python
fi
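A detail of build.sh worth noting: `hasArg` pads both the argument string and the probe with spaces so that `grep -q` only matches whole words, which is why `cuml` does not accidentally match `libcuml` or `cuml-cpu`. A minimal standalone sketch of that pattern (the `ARGS`/`NUMARGS` values here are made up for illustration, not taken from a real invocation):

```shell
#!/bin/bash
# Whole-word argument matching as used by build.sh's hasArg: pad both the
# argument list and the probe with spaces before grepping.
ARGS="libcuml cuml-cpu -v"
NUMARGS=3

hasArg() {
    (( NUMARGS != 0 )) && (echo " ${ARGS} " | grep -q " $1 ")
}

hasArg libcuml  && echo "libcuml: yes"
hasArg cuml     || echo "cuml: no (neither libcuml nor cuml-cpu matches)"
hasArg cuml-cpu && echo "cuml-cpu: yes"
```

Without the space padding, a plain `grep -q cuml` would report a match for every target containing the substring, and `./build.sh cuml-cpu` would also trigger the `cuml` build path.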
rapidsai_public_repos/cuml/codecov.yml
# Configuration File for CodeCov
coverage:
  status:
    project: off
    patch: off

comment:
  behavior: new

# Suggested workaround to fix "missing base report" issue when using Squash and
# Merge Strategy in GitHub. See this comment from CodeCov support about this
# undocumented option:
# https://community.codecov.io/t/unable-to-determine-a-parent-commit-to-compare-against-in-base-branch-after-squash-and-merge/2480/15
codecov:
  allow_coverage_offsets: true
rapidsai_public_repos/cuml/dependencies.yaml
# Dependency list for https://github.com/rapidsai/dependency-file-generator
files:
  all:
    output: conda
    matrix:
      cuda: ["11.8", "12.0"]
      arch: [x86_64]
    includes:
      - common_build
      - cudatoolkit
      - docs
      - py_build
      - py_run
      - py_version
      - test_python
  cpp_all:
    output: conda
    matrix:
      cuda: ["11.8", "12.0"]
      arch: [x86_64]
    includes:
      - common_build
      - cudatoolkit
  checks:
    output: none
    includes:
      - checks
      - py_version
  clang_tidy:
    output: conda
    matrix:
      cuda: ["11.8"]
      arch: [x86_64]
    includes:
      - clang_tidy
      - common_build
      - cudatoolkit
  docs:
    output: none
    includes:
      - cudatoolkit
      - docs
      - py_version
  test_cpp:
    output: none
    includes:
      - cudatoolkit
      - test_cpp
  test_python:
    output: none
    includes:
      - cudatoolkit
      - py_version
      - test_python
  test_notebooks:
    output: none
    includes:
      - cudatoolkit
      - py_run
      - py_version
      - test_notebooks
  py_build:
    output: pyproject
    extras:
      table: build-system
    includes:
      - common_build
      - py_build
  py_run:
    output: pyproject
    extras:
      table: project
    includes:
      - py_run
  py_test:
    output: pyproject
    extras:
      table: project.optional-dependencies
      key: test
    includes:
      - test_python
channels:
  - rapidsai
  - rapidsai-nightly
  - dask/label/dev
  - conda-forge
  - nvidia
dependencies:
  checks:
    common:
      - output_types: [conda, requirements]
        packages:
          - pre-commit
  clang_tidy:
    common:
      - output_types: [conda, requirements]
        packages:
          # clang 15 required by libcudacxx.
          - clang==15.0.7
          - clang-tools==15.0.7
          - ninja
          - tomli
  common_build:
    common:
      - output_types: [conda, requirements, pyproject]
        packages:
          - &cmake_ver cmake>=3.26.4
          - ninja
      - output_types: conda
        packages:
          - c-compiler
          - cxx-compiler
          - gmock>=1.13.0
          - gtest>=1.13.0
          - libcumlprims==23.12.*
          - libraft==23.12.*
          - libraft-headers==23.12.*
          - librmm==23.12.*
    specific:
      - output_types: conda
        matrices:
          - matrix:
              arch: x86_64
            packages:
              - gcc_linux-64=11.*
              - sysroot_linux-64==2.17
          - matrix:
              arch: aarch64
            packages:
              - gcc_linux-aarch64=11.*
              - sysroot_linux-aarch64==2.17
      - output_types: conda
        matrices:
          - matrix:
              arch: x86_64
              cuda: "11.8"
            packages:
              - nvcc_linux-64=11.8
          - matrix:
              arch: aarch64
              cuda: "11.8"
            packages:
              - nvcc_linux-aarch64=11.8
          - matrix:
              arch: x86_64
              cuda: "12.0"
            packages:
              - cuda-nvcc
              - cuda-version=12.0
  py_build:
    common:
      - output_types: [conda, requirements, pyproject]
        packages:
          - scikit-build>=0.13.1
          - cython>=3.0.0
          - &treelite treelite==3.9.1
          - pylibraft==23.12.*
          - rmm==23.12.*
      - output_types: pyproject
        packages:
          - wheel
          - setuptools
          - &treelite_runtime treelite_runtime==3.9.1
    specific:
      - output_types: [conda, requirements, pyproject]
        matrices:
          - matrix:
              cuda: "12.0"
            packages:
              - &cuda_python12 cuda-python>=12.0,<13.0a0
          - matrix: # All CUDA 11 versions
            packages:
              - &cuda_python11 cuda-python>=11.7.1,<12.0a0
  py_run:
    common:
      - output_types: [conda, requirements, pyproject]
        packages:
          - cudf==23.12.*
          - dask-cuda==23.12.*
          - dask-cudf==23.12.*
          - joblib>=0.11
          - numba>=0.57
          # TODO: Is scipy really a hard dependency, or should
          # we make it optional (i.e. an extra for pip
          # installation/run_constrained for conda)?
          - scipy>=1.8.0
          - raft-dask==23.12.*
          - rapids-dask-dependency==23.12.*
          - *treelite
      - output_types: [conda, requirements]
        packages:
          - cupy>=12.0.0
      - output_types: pyproject
        packages:
          - *treelite_runtime
          - cupy-cuda11x>=12.0.0
    specific:
      - output_types: requirements
        matrices:
          - matrix:
              arch: x86_64
            packages:
              - cupy-cuda115>=12.0.0
          - matrix:
              arch: aarch64
            packages:
              - cupy-cuda11x -f https://pip.cupy.dev/aarch64 # TODO: Verify that this works.
  cudatoolkit:
    specific:
      - output_types: conda
        matrices:
          - matrix:
              cuda: "12.0"
            packages:
              - cuda-version=12.0
              - cuda-cudart-dev
              - cuda-profiler-api
              - libcublas-dev
              - libcufft-dev
              - libcurand-dev
              - libcusolver-dev
              - libcusparse-dev
          - matrix:
              cuda: "11.8"
            packages:
              - cuda-version=11.8
              - cudatoolkit
              - libcublas-dev=11.11.3.6
              - libcublas=11.11.3.6
              - libcufft-dev=10.9.0.58
              - libcufft=10.9.0.58
              - libcurand-dev=10.3.0.86
              - libcurand=10.3.0.86
              - libcusolver-dev=11.4.1.48
              - libcusolver=11.4.1.48
              - libcusparse-dev=11.7.5.86
              - libcusparse=11.7.5.86
          - matrix:
              cuda: "11.5"
            packages:
              - cuda-version=11.5
              - cudatoolkit
              - libcublas-dev>=11.7.3.1,<=11.7.4.6
              - libcublas>=11.7.3.1,<=11.7.4.6
              - libcufft-dev>=10.6.0.54,<=10.6.0.107
              - libcufft>=10.6.0.54,<=10.6.0.107
              - libcurand-dev>=10.2.6.48,<=10.2.7.107
              - libcurand>=10.2.6.48,<=10.2.7.107
              - libcusolver-dev>=11.2.1.48,<=11.3.2.107
              - libcusolver>=11.2.1.48,<=11.3.2.107
              - libcusparse-dev>=11.7.0.31,<=11.7.0.107
              - libcusparse>=11.7.0.31,<=11.7.0.107
          - matrix:
              cuda: "11.4"
            packages:
              - cuda-version=11.4
              - cudatoolkit
              - &libcublas_dev114 libcublas-dev>=11.5.2.43,<=11.6.5.2
              - &libcublas114 libcublas>=11.5.2.43,<=11.6.5.2
              - &libcufft_dev114 libcufft-dev>=10.5.0.43,<=10.5.2.100
              - &libcufft114 libcufft>=10.5.0.43,<=10.5.2.100
              - &libcurand_dev114 libcurand-dev>=10.2.5.43,<=10.2.5.120
              - &libcurand114 libcurand>=10.2.5.43,<=10.2.5.120
              - &libcusolver_dev114 libcusolver-dev>=11.2.0.43,<=11.2.0.120
              - &libcusolver114 libcusolver>=11.2.0.43,<=11.2.0.120
              - &libcusparse_dev114 libcusparse-dev>=11.6.0.43,<=11.6.0.120
              - &libcusparse114 libcusparse>=11.6.0.43,<=11.6.0.120
          - matrix:
              cuda: "11.2"
            packages:
              - cuda-version=11.2
              - cudatoolkit
              # The NVIDIA channel doesn't publish pkgs older than 11.4 for these libs,
              # so 11.2 uses 11.4 packages (the oldest available).
              - *libcublas_dev114
              - *libcublas114
              - *libcufft_dev114
              - *libcufft114
              - *libcurand_dev114
              - *libcurand114
              - *libcusolver_dev114
              - *libcusolver114
              - *libcusparse_dev114
              - *libcusparse114
  docs:
    common:
      - output_types: [conda, requirements]
        packages:
          - graphviz
          - ipython
          - ipykernel
          - nbsphinx
          - numpydoc
          # https://github.com/pydata/pydata-sphinx-theme/issues/1539
          - pydata-sphinx-theme!=0.14.2
          - recommonmark
          - &scikit_learn scikit-learn==1.2
          - sphinx<6
          - sphinx-copybutton
          - sphinx-markdown-tables
      - output_types: conda
        packages:
          - doxygen=1.9.1
  py_version:
    specific:
      - output_types: conda
        matrices:
          - matrix:
              py: "3.9"
            packages:
              - python=3.9
          - matrix:
              py: "3.10"
            packages:
              - python=3.10
          - matrix:
            packages:
              - python>=3.9,<3.11
  test_cpp:
    common:
      - output_types: conda
        packages:
          - *cmake_ver
  test_python:
    common:
      - output_types: [conda, requirements, pyproject]
        packages:
          - dask-ml
          - hypothesis>=6.0,<7
          - nltk
          - numpydoc
          - pytest
          - pytest-benchmark
          - pytest-cases
          - pytest-cov
          - pytest-xdist
          - seaborn
          - *scikit_learn
          - statsmodels
          - umap-learn==0.5.3
          - pynndescent==0.5.8
      - output_types: conda
        packages:
          - pip
          - pip:
              - dask-glm==0.3.0
              # TODO: remove pin once a release that includes fixes for the error
              # is released: https://github.com/rapidsai/cuml/issues/5514
              - hdbscan<=0.8.30
      - output_types: pyproject
        packages:
          - dask-glm==0.3.0
          # TODO: Can we stop pulling from the master branch now that there was a release in October?
          - hdbscan @ git+https://github.com/scikit-learn-contrib/hdbscan.git@master
  test_notebooks:
    common:
      - output_types: [conda, requirements]
        packages:
          - dask-ml==2023.3.24
          - jupyter
          - matplotlib
          - numpy
          - pandas
          - *scikit_learn
          - seaborn
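In the `specific` sections of dependencies.yaml, the generator tries `matrices` entries in order and the first entry whose `matrix` constraints all match the requested build matrix wins; an entry with an empty `matrix:` is the catch-all fallback (as in the `cuda-python` pin, where anything that is not CUDA 12.0 falls through to the CUDA 11 range). A rough shell illustration of that first-match-wins rule, using the `cuda-python` values from the file above (the `select_cuda_python` function is a hypothetical stand-in, not part of dependency-file-generator):

```shell
#!/bin/bash
# First-match-wins matrix selection, mirroring the py_build "specific" block:
# an explicit entry for cuda 12.0, then a constraint-free fallback.
select_cuda_python() {
    case "$1" in
        12.0) echo "cuda-python>=12.0,<13.0a0" ;;   # matrix: cuda "12.0"
        *)    echo "cuda-python>=11.7.1,<12.0a0" ;; # empty matrix: all CUDA 11 versions
    esac
}

select_cuda_python 12.0
select_cuda_python 11.8   # no explicit 11.8 entry, so the fallback applies
```

Because matching is ordered, the fallback entry must come last; placing it first would shadow every more specific pin.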
rapidsai_public_repos/cuml/CONTRIBUTING.md
# Contributing to cuML If you are interested in contributing to cuML, your contributions will fall into three categories: 1. You want to report a bug, feature request, or documentation issue - File an [issue](https://github.com/rapidsai/cuml/issues/new/choose) describing what you encountered or what you want to see changed. - Please run and paste the output of the `cuml/print_env.sh` script while reporting a bug to gather and report relevant environment details. - The RAPIDS team will evaluate the issues and triage them, scheduling them for a release. If you believe the issue needs priority attention, comment on the issue to notify the team. 2. You want to propose a new feature and implement it - Post about your intended feature, and we shall discuss the design and implementation. - Once we agree that the plan looks good, go ahead and implement it, using the [code contributions](#code-contributions) guide below. 3. You want to implement a feature or bug-fix for an outstanding issue - Follow the [code contributions](#code-contributions) guide below. - If you need more context on a particular issue, please ask and we shall provide. ## Code contributions ### Your first issue 1. Read the project's [README.md](https://github.com/rapidsai/cuml/blob/main/README.md) to learn how to set up the development environment. 2. Find an issue to work on. The best way is to look for the [good first issue](https://github.com/rapidsai/cuml/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) or [help wanted](https://github.com/rapidsai/cuml/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) labels. 3. Comment on the issue saying you are going to work on it. 4. Get familiar with the developer guide relevant for you: * For C++ developers, it is available here: [DEVELOPER_GUIDE.md](wiki/cpp/DEVELOPER_GUIDE.md) * For Python developers, a [Python DEVELOPER_GUIDE.md](wiki/python/DEVELOPER_GUIDE.md) is available as well. 5. Code! Make sure to update unit tests! 6.
When done, [create your pull request](https://github.com/rapidsai/cuml/compare). 7. Verify that CI passes all [status checks](https://help.github.com/articles/about-status-checks/), or fix if needed. 8. Wait for other developers to review your code and update code as needed. 9. Once reviewed and approved, a RAPIDS developer will merge your pull request. Remember, if you are unsure about anything, don't hesitate to comment on issues and ask for clarifications! ## Code Formatting Consistent code formatting is important in the cuML project to ensure readability, maintainability, and thus simplifies collaboration. ### Using pre-commit hooks cuML uses [pre-commit](https://pre-commit.com) to execute code linters and formatters that check the code for common issues, such as syntax errors, code style violations, and help to detect bugs. Using pre-commit ensures that linter versions and options are aligned for all developers. The same hooks are executed as part of the CI checks. This means running pre-commit checks locally avoids unnecessary CI iterations. To use `pre-commit`, install the tool via `conda` or `pip` into your development environment: ```console conda install -c conda-forge pre-commit ``` Alternatively: ```console pip install pre-commit ``` After installing pre-commit, it is recommended to install pre-commit hooks to run automatically before creating a git commit. In this way, it is less likely that style checks will fail as part of CI checks. To install pre-commit hooks, simply run the following command within the repository root directory: ```console pre-commit install ``` By default, pre-commit runs on staged files only, meaning only on changes that are about to be committed. To run pre-commit checks on all files, execute: ```bash pre-commit run --all-files ``` To skip the checks temporarily, use `git commit --no-verify` or its short form `-n`. 
_Note_: If the auto-formatters' changes affect each other, you may need to go through multiple iterations of `git commit` and `git add -u`. cuML also uses [codespell](https://github.com/codespell-project/codespell) to find spelling mistakes, and this check is run as part of the pre-commit hook. To apply the suggested spelling fixes, you can run `codespell -i 3 -w .` from the command-line in the cuML root directory. This will bring up an interactive prompt to select which spelling fixes to apply. If you want to ignore errors highlighted by codespell, you can: * Add the word to the ignore-words-list in pyproject.toml, to exclude it for all of cuML * Exclude the entire file from spellchecking, by adding it to the `exclude` regex in .pre-commit-config.yaml * Ignore only specific lines as shown in https://github.com/codespell-project/codespell/issues/1212#issuecomment-654191881 ### Summary of pre-commit hooks The pre-commit hooks configured for this repository consist of a number of linters and auto-formatters that we summarize here. For a full and current list, please see the `.pre-commit-config.yaml` file. - `clang-format`: Formats C++ and CUDA code for consistency and readability. - `black`: Auto-formats Python code to conform to the PEP 8 style guide. - `flake8`: Lints Python code for syntax errors and common code style issues. - `cython-lint`: Lints Cython code for syntax errors and common code style issues. - _`DeprecationWarning` checker_: Checks that no new `DeprecationWarning` is introduced in Python code; `FutureWarning` should be used instead. - _`#include` syntax checker_: Ensures consistent syntax for C++ `#include` statements. - _Copyright header checker and auto-formatter_: Ensures the copyright headers of files are up-to-date and in the correct format. - `codespell`: Checks for spelling mistakes. ### Clang-tidy In order to maintain high-quality code, cuML uses not only pre-commit hooks featuring various formatters and linters but also the clang-tidy tool.
Clang-tidy is designed to detect potential issues within the C and C++ code. It is typically run as part of our continuous integration (CI) process. While it's generally unnecessary for contributors to run clang-tidy locally, there might be cases where you would want to do so. There are two primary methods to run clang-tidy on your local machine: using Docker or Conda. * **Docker** 1. Navigate to the repository root directory. 2. Run the following Docker command: ```bash docker run --rm --pull always \ --mount type=bind,source="$(pwd)",target=/opt/repo --workdir /opt/repo \ -e SCCACHE_S3_NO_CREDENTIALS=1 \ rapidsai/ci-conda:latest /opt/repo/ci/run_clang_tidy.sh ``` * **Conda** 1. Navigate to the repository root directory. 2. Create and activate the needed conda environment: ```bash conda env create --force -n cuml-clang-tidy -f conda/environments/clang_tidy_cuda-118_arch-x86_64.yaml conda activate cuml-clang-tidy ``` 3. Generate the compile command database with: ```bash ./build.sh --configure-only libcuml ``` 4. Run clang-tidy with the following command: ```bash python cpp/scripts/run-clang-tidy.py --config pyproject.toml ``` ### Managing PR labels Each PR must be labeled according to whether it is a "breaking" or "non-breaking" change (using Github labels). This is used to highlight changes that users should know about when upgrading. For cuML, a "breaking" change is one that modifies the public, non-experimental, Python API in a non-backward-compatible way. The C++ API does not have an expectation of backward compatibility at this time, so changes to it are not typically considered breaking. Backward-compatible API changes to the Python API (such as adding a new keyword argument to a function) do not need to be labeled. Additional labels must be applied to indicate whether the change is a feature, improvement, bugfix, or documentation change. See the shared RAPIDS documentation for these labels: https://github.com/rapidsai/kb/issues/42.
### Seasoned developers Once you have gotten your feet wet and are more comfortable with the code, you can look at the prioritized issues of our next release in our [project boards](https://github.com/rapidsai/cuml/projects). > **Pro Tip:** Always look at the release board with the highest number for issues to work on. This is where RAPIDS developers also focus their efforts. Look at the unassigned issues, and find an issue you are comfortable with contributing to. Start with _Step 3_ from above, commenting on the issue to let others know you are working on it. If you have any questions related to the implementation of the issue, ask them in the issue instead of the PR. ### Branches and Versions The cuML repository has two main branches: 1. `main` branch: it contains the last released version. Only hotfixes are targeted and merged into it. 2. `branch-x.y`: it is the development branch which contains the upcoming release. All new features should be based on this branch, and pull requests should target this branch (with the exception of hotfixes). ### Additional details For every new version `x.y` of cuML there is a corresponding branch called `branch-x.y`, from where new feature development starts and where PRs will be targeted and merged before its release. The exception is 'hotfixes': they address critical issues raised by Github users, are merged directly into the `main` branch, and create a new patch version of the project. While trying to patch an issue which requires a 'hotfix', please state the intent in the PR. For all development, push your changes to a branch (created using the naming instructions below) in your own fork of cuML, and then create a pull request when the code is ready. A few days before releasing version `x.y`, the code of the current development branch (`branch-x.y`) will be frozen and a new branch, `branch-x+1.y`, will be created to continue development.
### Branch naming Branches used to create PRs should have a name of the form `<type>-<name>`, which conforms to the following conventions: - Type: - fea - for a new feature - enh - for an enhancement of an existing feature - bug - for fixing a bug or regression - Name: - A name that conveys what is being worked on - Please use dashes or underscores between words, as opposed to spaces. ## Attribution Portions adopted from https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md
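As a sketch, the convention above can be checked with a small shell helper; the function name and the exact regex are illustrative (not part of the repository), and assume the fea/enh/bug prefixes described above:

```shell
# Hypothetical helper: succeeds when a branch name follows the
# <type>-<name> convention (fea/enh/bug prefix, then dash- or
# underscore-separated words, no spaces).
valid_branch_name() {
    echo "$1" | grep -Eq '^(fea|enh|bug)-[A-Za-z0-9][A-Za-z0-9_-]*$'
}

valid_branch_name "fea-sparse-pca" && echo "ok"     # follows the convention
valid_branch_name "my feature" || echo "not ok"     # spaces are not allowed
```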
rapidsai_public_repos/cuml/LICENSE
Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. 
For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the 
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. 
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "{}" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright 2018 NVIDIA CORPORATION Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
rapidsai_public_repos/cuml/VERSION
23.12.00
rapidsai_public_repos/cuml/print_env.sh
#!/usr/bin/env bash # Reports relevant environment information useful for diagnosing and # debugging cuML issues. # Usage: # "./print_env.sh" - prints to stdout # "./print_env.sh > env.txt" - prints to file "env.txt" print_env() { echo "**git***" if [ "$(git rev-parse --is-inside-work-tree 2>/dev/null)" == "true" ]; then git log --decorate -n 1 echo "**git submodules***" git submodule status --recursive else echo "Not inside a git repository" fi echo echo "***OS Information***" cat /etc/*-release uname -a echo echo "***GPU Information***" nvidia-smi echo echo "***CPU***" lscpu echo echo "***CMake***" which cmake && cmake --version echo echo "***g++***" which g++ && g++ --version echo echo "***nvcc***" which nvcc && nvcc --version echo echo "***Python***" which python && python -c "import sys; print('Python {0}.{1}.{2}'.format(sys.version_info[0], sys.version_info[1], sys.version_info[2]))" echo echo "***Environment Variables***" printf '%-32s: %s\n' PATH $PATH printf '%-32s: %s\n' LD_LIBRARY_PATH $LD_LIBRARY_PATH printf '%-32s: %s\n' NUMBAPRO_NVVM $NUMBAPRO_NVVM printf '%-32s: %s\n' NUMBAPRO_LIBDEVICE $NUMBAPRO_LIBDEVICE printf '%-32s: %s\n' CONDA_PREFIX $CONDA_PREFIX printf '%-32s: %s\n' PYTHON_PATH $PYTHON_PATH echo # Print conda packages if conda exists if type "conda" &> /dev/null; then echo '***conda packages***' which conda && conda list echo # Print pip packages if pip exists elif type "pip" &> /dev/null; then echo "conda not found" echo "***pip packages***" which pip && pip list echo else echo "conda not found" echo "pip not found" fi } echo "<details><summary>Click here to see environment details</summary><pre>" echo " " print_env | while read -r line; do echo " $line" done echo "</pre></details>"
rapidsai_public_repos/cuml/BUILD.md
# cuML Build From Source Guide ## Setting Up Your Build Environment To build cuML from source, ensure the following dependencies are met: 1. [cuDF](https://github.com/rapidsai/cudf) (same as the cuML version) 2. zlib 3. cmake (>= 3.26.4) 4. CUDA (>= 11.0) 5. Cython (>= 0.29) 6. gcc (>= 9.0) 7. BLAS - Any BLAS compatible with cmake's [FindBLAS](https://cmake.org/cmake/help/v3.14/module/FindBLAS.html). Note that the BLAS implementation has to be installed into the same environment as cmake; for example, if using a conda-installed cmake, the BLAS implementation should also be installed in the conda environment. 8. clang-format (= 16.0.6) - enforces uniform C++ coding style; required to build cuML from source. The packages `clang=16` and `clang-tools=16` from the conda-forge channel should be sufficient, if you are on conda. If not using conda, install the right version using your OS package manager. 9. NCCL (>= 2.4) 10. UCX [optional] (>= 1.7) - enables point-to-point messaging in the cuML standard communicator. This is necessary for many multi-node multi-GPU cuML algorithms to function. It is recommended to use conda for environment/package management. If doing so, development environment .yaml files are located in `conda/environments/all_*.yaml`. These files contain most of the dependencies mentioned above (notable exceptions are `gcc` and `zlib`). To create a development environment named `cuml_dev`, you can use the following commands: ```bash conda create -n cuml_dev python=3.10 conda env update -n cuml_dev --file=conda/environments/all_cuda-118_arch-x86_64.yaml conda activate cuml_dev ``` ## Installing from Source ### Recommended process As a convenience, a `build.sh` script is provided which can be used to execute the same build commands above. Note that the libraries will be installed to the location set in `$INSTALL_PREFIX` if set (i.e. `export INSTALL_PREFIX=/install/path`), otherwise to `$CONDA_PREFIX`.
```bash $ ./build.sh # build the cuML libraries, tests, and python package, then # install them to $INSTALL_PREFIX if set, otherwise $CONDA_PREFIX ``` For workflows that involve frequent switching among branches or between debug and release builds, it is recommended that you install [ccache](https://ccache.dev/) and make use of it by passing the `--ccache` flag to `build.sh`. To build individual components, specify them as arguments to `build.sh` ```bash $ ./build.sh libcuml # build and install the cuML C++ and C-wrapper libraries $ ./build.sh cuml # build and install the cuML python package $ ./build.sh prims # build the ml-prims tests $ ./build.sh bench # build the cuML c++ benchmark $ ./build.sh prims-bench # build the ml-prims c++ benchmark ``` Other `build.sh` options: ```bash $ ./build.sh clean # remove any prior build artifacts and configuration (start over) $ ./build.sh libcuml -v # build and install libcuml with verbose output $ ./build.sh libcuml -g # build and install libcuml for debug $ PARALLEL_LEVEL=8 ./build.sh libcuml # build and install libcuml limiting parallel build jobs to 8 (ninja -j8) $ ./build.sh libcuml -n # build libcuml but do not install $ ./build.sh prims --allgpuarch # build the ML prims tests for all supported GPU architectures $ ./build.sh cuml --singlegpu # build the cuML python package without MNMG algorithms $ ./build.sh --ccache # use ccache to cache compilations, speeding up subsequent builds ``` By default, Ninja is used as the cmake generator. To override this and use (e.g.) 
`make`, define the `CMAKE_GENERATOR` environment variable accordingly: ```bash CMAKE_GENERATOR='Unix Makefiles' ./build.sh ``` To run the C++ unit tests (optional), from the repo root: ```bash $ cd cpp/build $ ./test/ml # Single GPU algorithm tests $ ./test/ml_mg # Multi GPU algorithm tests, if --singlegpu was not used $ ./test/prims # ML Primitive function tests ``` If you want a list of the available C++ tests: ```bash $ ./test/ml --gtest_list_tests # Single GPU algorithm tests $ ./test/ml_mg --gtest_list_tests # Multi GPU algorithm tests $ ./test/prims --gtest_list_tests # ML Primitive function tests ``` To run all Python tests, including multi-GPU algorithms, from the repo root: ```bash $ cd python $ pytest -v ``` To run only the single-GPU algorithms: ```bash $ pytest --ignore=cuml/tests/dask --ignore=cuml/tests/test_nccl.py ``` If you want a list of the available Python tests: ```bash $ pytest cuml/tests --collect-only ``` ### Manual Process Once dependencies are present, follow the steps below: 1. Clone the repository. ```bash $ git clone https://github.com/rapidsai/cuml.git ``` 2. Build and install `libcuml++` (C++/CUDA library containing the cuML algorithms), starting from the repository root folder: ```bash $ cd cpp $ mkdir build && cd build $ export CUDA_BIN_PATH=$CUDA_HOME # (optional env variable if cuda binary is not in the PATH. Default CUDA_HOME=/path/to/cuda/) $ cmake .. ``` If using a conda environment (recommended), then cmake can be configured appropriately for `libcuml++` via: ```bash $ cmake .. -DCMAKE_INSTALL_PREFIX=$CONDA_PREFIX ``` Note: The following warning message is dependent upon the version of cmake and the `CMAKE_INSTALL_PREFIX` used. If this warning is displayed, the build should still run successfully. We are currently working to resolve this open issue. You can silence this warning by adding `-DCMAKE_IGNORE_PATH=$CONDA_PREFIX/lib` to your `cmake` command.
``` Cannot generate a safe runtime search path for target ml_test because files in some directories may conflict with libraries in implicit directories: ``` The configuration script will print the BLAS found on the search path. If the version found does not match the version intended, use the flag `-DBLAS_LIBRARIES=/path/to/blas.so` with the `cmake` command to force your own version. If using conda and a conda-installed cmake, the `openblas` conda package is recommended and can be explicitly specified for `blas` and `lapack`: ```bash cmake .. -DCMAKE_INSTALL_PREFIX=$CONDA_PREFIX -DBLAS_LIBRARIES=$CONDA_PREFIX/lib/libopenblas.so ``` Additionally, to reduce compile times, you can specify a GPU compute capability to compile for, for example for Volta GPUs: ```bash $ cmake .. -DCMAKE_INSTALL_PREFIX=$CONDA_PREFIX -DGPU_ARCHS="70" ``` You may also wish to make use of `ccache` to reduce build times when switching among branches or between debug and release builds: ```bash $ cmake .. -DUSE_CCACHE=ON ``` There are many options to configure the build process; see the [customizing build section](#libcuml-&-libcumlc++). 3. Build `libcuml++` and `libcuml`: ```bash $ make -j $ make install ``` To run tests (optional): ```bash $ ./test/ml # Single GPU algorithm tests $ ./test/ml_mg # Multi GPU algorithm tests $ ./test/prims # ML Primitive function tests ``` If you want a list of the available tests: ```bash $ ./test/ml --gtest_list_tests # Single GPU algorithm tests $ ./test/ml_mg --gtest_list_tests # Multi GPU algorithm tests $ ./test/prims --gtest_list_tests # ML Primitive function tests ``` To run cuML C++ benchmarks (optional): ```bash $ ./bench/sg_benchmark # Single GPU benchmarks ``` Refer to the `--help` option for usage details. To run ml-prims C++ benchmarks (optional): ```bash $ ./bench/prims_benchmark # ml-prims benchmarks ``` Refer to the `--help` option for usage details. To build doxygen docs for all C/C++ source files: ```bash $ make doc ``` 4.
Build the `cuml` python package: ```bash $ cd ../../python $ python setup.py build_ext --inplace ``` To run Python tests (optional): ```bash $ pytest -v ``` To run only the single-GPU algorithms: ```bash $ pytest --ignore=cuml/tests/dask --ignore=cuml/tests/test_nccl.py ``` If you want a list of the available tests: ```bash $ pytest cuml/tests --collect-only ``` 5. Finally, install the Python package to your Python path: ```bash $ python setup.py install ``` ### Custom Build Options #### libcuml & libcuml++ cuML's cmake has the following configurable flags available: | Flag | Possible Values | Default Value | Behavior | | --- | --- | --- | --- | | BLAS_LIBRARIES | path/to/blas_lib | "" | Optional variable to manually specify the location of the BLAS library. | | BUILD_CUML_CPP_LIBRARY | [ON, OFF] | ON | Enable/disable building libcuml++ shared library. Setting this variable to `OFF` sets the variables BUILD_CUML_C_LIBRARY, BUILD_CUML_TESTS, BUILD_CUML_MG_TESTS and BUILD_CUML_EXAMPLES to `OFF` | | BUILD_CUML_C_LIBRARY | [ON, OFF] | ON | Enable/disable building libcuml shared library. Setting this variable to `ON` will set the variable BUILD_CUML_CPP_LIBRARY to `ON` | | BUILD_CUML_STD_COMMS | [ON, OFF] | ON | Enable/disable building cuML NCCL+UCX communicator for running multi-node multi-GPU algorithms. Note that UCX support can also be enabled/disabled (see below). Note that BUILD_CUML_STD_COMMS and BUILD_CUML_MPI_COMMS are not mutually exclusive and can both be installed simultaneously. | | WITH_UCX | [ON, OFF] | OFF | Enable/disable UCX support for the standard cuML communicator. Algorithms requiring point-to-point messaging will not work when this is disabled. This has no effect on the MPI communicator. | | BUILD_CUML_MPI_COMMS | [ON, OFF] | OFF | Enable/disable building cuML MPI+NCCL communicator for running multi-node multi-GPU C++ tests.
Note that BUILD_CUML_STD_COMMS and BUILD_CUML_MPI_COMMS are not mutually exclusive, and can both be installed simultaneously. | | BUILD_CUML_TESTS | [ON, OFF] | ON | Enable/disable building cuML algorithm test executable `ml_test`. | | BUILD_CUML_MG_TESTS | [ON, OFF] | ON | Enable/disable building cuML algorithm test executable `ml_mg_test`. | | BUILD_PRIMS_TESTS | [ON, OFF] | ON | Enable/disable building cuML algorithm test executable `prims_test`. | | BUILD_CUML_EXAMPLES | [ON, OFF] | ON | Enable/disable building cuML C++ API usage examples. | | BUILD_CUML_BENCH | [ON, OFF] | ON | Enable/disable building of cuML C++ benchmark. | | BUILD_CUML_PRIMS_BENCH | [ON, OFF] | ON | Enable/disable building of ml-prims C++ benchmark. | | CMAKE_CXX11_ABI | [ON, OFF] | ON | Enable/disable the GLIBCXX11 ABI | | DETECT_CONDA_ENV | [ON, OFF] | ON | Use detection of conda environment for dependencies. If set to ON, and no value for CMAKE_INSTALL_PREFIX is passed, then it'll assign it to $CONDA_PREFIX (to install in the active environment). | | DISABLE_OPENMP | [ON, OFF] | OFF | Set to `ON` to disable OpenMP | | GPU_ARCHS | List of GPU architectures, semicolon-separated | 60;70;75 | List of GPU architectures that all artifacts are compiled for. | | KERNEL_INFO | [ON, OFF] | OFF | Enable/disable kernel resource usage info in nvcc. | | LINE_INFO | [ON, OFF] | OFF | Enable/disable lineinfo in nvcc. | | NVTX | [ON, OFF] | OFF | Enable/disable nvtx markers in libcuml++. |
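As an illustrative sketch (not a verified build recipe), several of the flags from the table above can be combined in a single configure invocation; the specific values chosen here (architecture 70, disabled multi-GPU tests and examples, NVTX enabled) are arbitrary examples:

```shell
# Illustrative only: configure libcuml++ from cpp/build with a reduced
# set of build targets and a single GPU architecture, using flag names
# from the table above.
cmake .. \
  -DCMAKE_INSTALL_PREFIX="$CONDA_PREFIX" \
  -DGPU_ARCHS="70" \
  -DBUILD_CUML_MG_TESTS=OFF \
  -DBUILD_CUML_EXAMPLES=OFF \
  -DNVTX=ON
```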
---

*Source: rapidsai_public_repos/cuml/wiki/README.md*
# cuML Wiki Documentation

This wiki is provided as an extension to cuML's public documentation, geared toward developers on the project. If you are interested in contributing to cuML, read through our [contributing guide](../CONTRIBUTING.md). You are also encouraged to read through our Python [developer guide](python/DEVELOPER_GUIDE.md) and C++ [developer guide](cpp/DEVELOPER_GUIDE.md) to gain an understanding of how we design our algorithms. We have criteria defining our [definition of done](DEFINITION_OF_DONE_CRITERIA.md) that allow us to provide high-performance, maintainable, and overall high-quality implementations, while giving our users as much transparency as possible about the status of our algorithms.
---

*Source: rapidsai_public_repos/cuml/wiki/DEFINITION_OF_DONE_CRITERIA.md*
# Defining cuML's Definition of Done Criteria

## Algorithm Completion Checklist

Below is a quick and simple checklist for developers to determine whether an algorithm is complete and ready for release. Most of these items contain more detailed descriptions in their corresponding developer guide. The checklist is broken down by layer (C++ or Python) and categorized further into

- **Design:** All algorithms should be designed with an eye on maintainability, performance, readability, and robustness.
- **Testing:** The goal for automated testing is to increase both the spread and the depth of code coverage as much as possible, in order to reduce time spent fixing bugs and to ease the development of new features. Additionally, a very important factor for a tool like `cuml` is to provide testing with multiple datasets that really stress the mathematical behavior of the algorithms. A comprehensive set of tests lowers the possibility of regressions and the introduction of bugs as the code evolves between versions. This covers both correctness and performance.
- **Documentation:** User-facing documentation should be complete and descriptive. Developer-facing documentation should be used for constructs which are complex and/or not immediately obvious.
- **Performance:** Algorithms should be benchmarked and profiled regularly to spot potential bottlenecks, performance regressions, and memory problems.

### C++

#### Design

- Existing prims are used wherever possible
- Array inputs and outputs to algorithms are accepted on device
- New prims created wherever there is potential for reuse across different algorithms or prims
- User-facing API is [stateless](cpp/DEVELOPER_GUIDE.md#public-cuml-interface) and follows the [plain-old data (POD)](https://en.wikipedia.org/wiki/Passive_data_structure) design paradigm
- Public API contains a C wrapper around the stateless API
- (optional) Public API contains a Scikit-learn-like stateful wrapper around the stateless API

#### Testing

- Prims: GTests with different inputs
- Algorithms: End-to-end GTests with different inputs and different datasets

#### Documentation

- Complete and comprehensive [Doxygen](http://www.doxygen.nl/manual/docblocks.html) strings explaining the public API, restrictions, and gotchas. Any array parameters should also note whether the underlying memory is host or device.
- Array inputs/outputs should also mention their expected size/dimension.
- If there are references to the underlying algorithm, they must be cited too.

### Python

#### Design

- Python class is as near a drop-in replacement for the Scikit-learn (or relevant industry standard) API as possible. This means parameters have the same names as Scikit-learn, and where differences exist, they are clearly documented in docstrings.
- It is recommended to open an initial PR with the API design if there are going to be significant differences from reference APIs, or if no reference API exists, to have a discussion about it.
- Python class is pickleable and a test has been added to `cuml/tests/test_pickle.py`
- APIs use `input_to_cuml_array` to accept flexible inputs and check their datatypes, and use `CumlArray.to_output()` to return configurable outputs.
- Any internal parameters or array-based instance variables use `CumlArray`

#### Testing

- Pytests for wrapper functionality against Scikit-learn using relevant datasets
- Stress tests against reasonable inputs (e.g. short-wide, tall-narrow, different numerical precision)
- Pytests for pickle capability
- Pytests to evaluate correctness against Scikit-learn on a variety of datasets
- Add algorithm to benchmarks package in `python/cuml/benchmarks/algorithms.py` and benchmarks notebook in `python/cuml/notebooks/tools/cuml_benchmarks.ipynb`
- Pytests that run under the "unit"-level marker should be quick to execute and should, in general, not significantly increase end-to-end test execution.

#### Documentation

- Complete and comprehensive Pydoc strings explaining the public API, restrictions, a usage example, and gotchas. This should be in [Numpydoc](https://numpydoc.readthedocs.io/en/latest/format.html) format.
- Docstrings include references to any scientific papers or standard publications on the underlying algorithm (e.g. the paper or Scikit-learn algorithm being implemented, or a description of the algorithm used if nonstandard).

## Review Checklist

Aside from the general algorithm expectations outlined in the checklists above, code reviewers should use the following checklist to make sure the algorithm meets cuML standards.

### All

- New files contain necessary license headers
- Diff does not contain files with excess formatting changes without other changes also being made to the file
- Code does not contain any known serious memory leaks or garbage collection issues
- Modifications are cohesive and in-scope for the PR's intended purpose
- Changes to the public API will not have a negative impact on existing users between minor versions (e.g. large changes to very popular public APIs go through a deprecation cycle to preserve backwards compatibility)
- Where it is reasonable to do so, unexpected inputs fail gracefully and provide actionable feedback to the user
- Automated tests properly exercise the changes in the PR
- New algorithms provide benchmarks (both C++ and Python)

### C++

- New GTests are enabled in `CMakeLists.txt`

### Python

- Look at the list of slowest Pytests printed in the CI logs and check that any newly committed Pytests are not going to have a significant impact on end-to-end execution.
---

*Source: rapidsai_public_repos/cuml/wiki/python/ESTIMATOR_GUIDE.md*
# cuML Python Estimators Developer Guide

This guide is meant to help developers follow the correct patterns when creating/modifying any cuML Estimator object and ensure a uniform cuML API.

**Note:** This guide is long, because it includes internal details on how cuML manages input and output types for advanced use cases. But for the vast majority of estimators, the requirements are very simple and can follow the example patterns shown below in the [Quick Start Guide](#quick-start-guide).

## Table of Contents

- [Recommended Scikit-Learn Documentation](#recommended-scikit-learn-documentation)
- [Quick Start Guide](#quick-start-guide)
- [Background](#background)
  - [Input and Output Types in cuML](#input-and-output-types-in-cuml)
  - [Specifying the Array Output Type](#specifying-the-array-output-type)
  - [Ingesting Arrays](#ingesting-arrays)
  - [Returning Arrays](#returning-arrays)
- [Estimator Design](#estimator-design)
  - [Initialization](#initialization)
  - [Implementing `get_param_names()`](#implementing-get_param_names)
  - [Estimator Tags and cuML Specific Tags](#estimator-tags-and-cuml-specific-tags)
  - [Estimator Array-Like Attributes](#estimator-array-like-attributes)
  - [Estimator Methods](#estimator-methods)
- [Do's and Do Not's](#dos-and-do-nots)
- [Appendix](#appendix)

## Recommended Scikit-Learn Documentation

To start, it's recommended to read the following Scikit-learn documentation:

1. [Scikit-learn's Estimator Docs](https://scikit-learn.org/stable/developers/develop.html)
   1. cuML Estimator design follows Scikit-learn very closely. We will only cover portions where our design differs from this document
   2. If short on time, pay attention to these sections, which are the most important (and have caused pain points in the past):
      1. [Instantiation](https://scikit-learn.org/stable/developers/develop.html#instantiation)
      2. [Estimated Attributes](https://scikit-learn.org/stable/developers/develop.html#estimated-attributes)
      3. [`get_params` and `set_params`](https://scikit-learn.org/stable/developers/develop.html#get-params-and-set-params)
      4. [Cloning](https://scikit-learn.org/stable/developers/develop.html#cloning)
      5. [Estimator tags](https://scikit-learn.org/stable/developers/develop.html#estimator-tags)
2. [Scikit-learn's Docstring Guide](https://scikit-learn.org/stable/developers/contributing.html#guidelines-for-writing-documentation)
   1. We follow the same guidelines for specifying array-like objects, array shapes, dtypes, and default values

## Quick Start Guide

At a high level, all cuML Estimators must:

1. Inherit from `cuml.common.base.Base`
   ```python
   from cuml.common.base import Base

   class MyEstimator(Base):
      ...
   ```
2. Follow the Scikit-learn estimator guidelines found [here](https://scikit-learn.org/stable/developers/develop.html)
3. Include the `Base.__init__()` arguments available in the new Estimator's `__init__()`
   ```python
   class MyEstimator(Base):

      def __init__(self, *, extra_arg=True, handle=None, verbose=False, output_type=None):
         super().__init__(handle=handle, verbose=verbose, output_type=output_type)
         ...
   ```
4. Declare each array-like attribute the new Estimator will compute as a class variable, for automatic array type conversion. An order can be specified to indicate the order the array should be in for the C++ algorithms to work.
   ```python
   from cuml.common.array_descriptor import CumlArrayDescriptor

   class MyEstimator(Base):

      labels_ = CumlArrayDescriptor(order='C')

      def __init__(self):
         ...
   ```
5. Add input and return type annotations to public API functions, OR wrap those functions explicitly with conversion decorators (see [this example](#non-standard-predict) for a non-standard use case)
   ```python
   class MyEstimator(Base):

      def fit(self, X) -> "MyEstimator":
         ...

      def predict(self, X) -> CumlArray:
         ...
   ```
6. Implement `get_param_names()`, including the values returned by `super().get_param_names()`
   ```python
   def get_param_names(self):
      return super().get_param_names() + [
         "eps",
         "min_samples",
      ]
   ```
7. Implement the appropriate tags method if any of the [default tags](#estimator-tags-and-cuml-specific-tags) need to be overridden for the new estimator. There are some convenience [Mixins](../../python/common/mixins.py) that the estimator can inherit, which can be used to indicate the preferred order (column- or row-major) as well as sparse input capability. If other tags are needed, they are static (i.e. they don't change depending on the instantiated estimator), and more than one estimator will use them, then implement a new [Mixin](../../python/common/mixins.py); if the tag will be used by a single class, implement the `_more_static_tags` method:
   ```python
   @staticmethod
   def _more_static_tags():
      return {
         "requires_y": True
      }
   ```
   If the tags depend on an attribute that is defined at runtime or instantiation of the estimator, then implement the `_more_tags` method:
   ```python
   def _more_tags(self):
      return {
         "allow_nan": is_scalar_nan(self.missing_values)
      }
   ```

For the majority of estimators, the above steps will be sufficient to work correctly with the cuML library and ensure a consistent API. However, situations may arise where an estimator differs from the standard pattern and some of the functionality needs to be customized. The remainder of this guide takes a deep dive into estimator functionality to assist developers when building estimators.

## Background

Some background is necessary to understand the design of estimators and how to work around any non-standard situations.

### Input and Output Types in cuML

In cuML we support both ingesting and generating a variety of different object types. Estimators should be able to accept and return any array type.
The types that are supported as of release 0.17:

- cuDF DataFrame or Series
- Pandas DataFrame or Series
- NumPy Arrays
- Numba Device Arrays
- CuPy arrays
- `CumlArray` type (internal to the `cuml` API only)

When converting between types, it's important to minimize CPU<->GPU type conversions as much as possible. Conversions such as NumPy -> CuPy or Numba -> Pandas DataFrame will incur a performance penalty as memory is copied from device to host or vice-versa. Conversions between types on the same device, i.e. CPU<->CPU or GPU<->GPU, do not carry as significant a penalty, though they may still increase memory usage (this is particularly true for array <-> dataframe conversions; i.e. when converting from CuPy to cuDF, memory usage may increase slightly). Finally, conversions between Numba<->CuPy<->CumlArray incur the least overhead, since only the device pointer is moved from one class to another.

Internally, all arrays should be converted to `CumlArray` as much as possible, since it is compatible with all output types and can be easily converted.

### Host and Device Arrays

Beginning with version 23.02, cuML provides support for executing at least some algorithms either on CPU or on GPU. Therefore, `CumlArray` objects can now be backed by either host or device memory. To ensure that arrays used by algorithms are backed by the correct memory type, two new global settings were introduced: `device_type` (`'cpu'` or `'gpu'`) and `memory_type` (`'host'` or `'device'`). The former indicates what sort of computational device will be used to execute an algorithm, while the latter indicates where arrays should be stored if not otherwise specified. If the `device_type` is updated to a value incompatible with the current `memory_type`, the `memory_type` will be changed to something compatible with `device_type`, but the reverse is not true. This allows for e.g. allocating an array where results will ultimately be stored, even if the actual computation will take place on a different device.

New array output types were also introduced to take advantage of these settings by deferring, where appropriate, to the globally-set memory type. Read on for more details on how to take advantage of these types.

### Specifying the Array Output Type

Users can choose which array type should be returned by cuML by either:

1. Individually setting the `output_type` property on an estimator class (i.e. `Base(output_type="numpy")`)
2. Globally setting `cuml.global_output_type`
3. Temporarily setting `cuml.global_output_type` via the `cuml.using_output_type` context manager

**Note:** Setting `cuml.global_output_type` (either directly or via `cuml.set_output_type()` or `cuml.using_output_type()`) <u>will take precedence over any value in `Base.output_type`</u>

In addition, for developers, it is sometimes useful to set the output memory type separately from the output type, as will be described in further detail below. **End-users will not typically use this setting themselves.** To set the output memory type, developers can:

1. Individually set the `output_mem_type` property on an estimator class that derives from `UniversalBase` (i.e. `UniversalBase(output_mem_type="host")`)
2. Globally set `cuml.global_settings.memory_type`
3. Temporarily set `cuml.global_settings.memory_type` via the `cuml.using_memory_type` context manager

Changing the array output type will alter the return value of estimator functions (i.e. `predict()`, `transform()`) and the return value of array-like estimator attributes (i.e. `my_estimator.classes_` or `my_estimator.coef_`).

All output types (including `cuml.global_output_type`) are specified using an all-lowercase string. These strings can be passed in an estimator's constructor or via `cuml.set_global_output_type` and `cuml.using_output_type`. Accepted values are:

- `None`: (Default) No global value set. Will use individual values from estimators' `output_type`
- `"input"`: Similar to `None`. Will mirror the same type as any array passed into the estimator
- `"array"`: Returns NumPy or CuPy arrays depending on the current memory type
- `"numba"`: Returns Numba Device Arrays
- `"dataframe"`: Returns cuDF or Pandas DataFrames depending on the current memory type
- `"series"`: Returns cuDF or Pandas Series depending on the current memory type
- `"df_obj"`: Returns cuDF/Pandas Series if the array is single-dimensional, or cuDF/Pandas DataFrames otherwise
- `"cupy"`: Returns CuPy Device Arrays
- `"numpy"`: Returns NumPy Arrays
- `"cudf"`: Returns cuDF DataFrame if cols > 1, else cuDF Series
- `"pandas"`: Returns Pandas DataFrame if cols > 1, else Pandas Series

**Note:** There is an additional option `"mirror"` which can only be set by internal API calls and is not user accessible. This value is only used internally by the `CumlArrayDescriptor` to mirror any input value set.

#### Deferring to the global memory type

With the introduction of CPU-only algorithms, it is sometimes useful internally to generically request an "array" or a "dataframe" rather than specifying cupy/numpy or cudf/pandas. For example, imagine that a developer needs to briefly use a method which is specific to cupy/numpy and not available on the generic `CumlArray` interface. In the past, a developer might call `arr.to_output('cupy')` and proceed with the operation before converting back to a `CumlArray`. Now, if the device type is set to `cpu` and this pattern is used, we would attempt to execute a host-only operation on device memory. Instead, developers can use the generic `array`, `series`, `dataframe`, and `df_obj` output types to defer to the current globally-set memory type and ensure that the data memory location is compatible with the computational device.
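To illustrate the idea of the generic output types, here is a minimal sketch (not cuML's actual implementation; the function name `resolve_output_type` and the mapping table are hypothetical) of how a generic type such as `"array"` could resolve to a concrete type based on the current memory type:

```python
# Hypothetical sketch of generic-output-type resolution; not cuML's code.
# Maps (generic output type, memory type) to a concrete output type string.
_RESOLUTION = {
    ("array", "device"): "cupy",
    ("array", "host"): "numpy",
    ("dataframe", "device"): "cudf",
    ("dataframe", "host"): "pandas",
    ("series", "device"): "cudf",
    ("series", "host"): "pandas",
}

def resolve_output_type(output_type: str, memory_type: str) -> str:
    """Resolve a generic output type against the current memory type.

    Concrete types ("numpy", "cupy", ...) pass through unchanged.
    """
    return _RESOLUTION.get((output_type, memory_type), output_type)

print(resolve_output_type("array", "host"))    # numpy
print(resolve_output_type("array", "device"))  # cupy
print(resolve_output_type("numpy", "device"))  # numpy (already concrete)
```

The point of this design is that internal code can keep requesting `"array"` and remain correct whether the global memory type is later switched between host and device.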
It is recommended that these generic output types be used for any internal conversion calls. Where we cannot defer to the global memory type, the memory type for that call should be specified directly to facilitate later rewrites for host/device interoperability. External users should not typically have to use these generic types unless they are specifically writing an application with host/device interoperability in mind.

### Ingesting Arrays

When the input array type isn't known, the correct and safest way to ingest arrays is using `cuml.common.input_to_cuml_array`. This method can handle all supported types, is capable of checking the array order, can enforce a specific dtype, and can raise errors on incorrect array sizes:

```python
def fit(self, X):
    cuml_array, dtype, cols, rows = input_to_cuml_array(X, order="K")
    ...
```

### Returning Arrays

The `CumlArray` class can convert to any supported array type using the `to_output(output_type: str)` method. However, doing this explicitly is almost never needed in practice and **should be avoided**. Directly converting arrays with `to_output()` will circumvent the automatic conversion system, potentially causing extra or incorrect array conversions.

## Estimator Design

All estimators (any class that is a child of `cuml.common.base.Base`) have a similar structure. In addition to the guidelines specified in the [SkLearn Estimator Docs](https://scikit-learn.org/stable/developers/develop.html), cuML implements a few additional rules.

### Initialization

All estimators should match the arguments (including the default values) in `Base.__init__` and pass these values to `super().__init__()`. As of 0.17, all estimators should accept `handle`, `verbose` and `output_type`. In addition, it is recommended to force keyword arguments to prevent breaking changes if arguments are added or removed in future versions. For example, all arguments below after `*` must be passed by keyword:

```python
def __init__(self, *, eps=0.5, min_samples=5, max_mbytes_per_batch=None, calc_core_sample_indices=True, handle=None, verbose=False, output_type=None):
```

Finally, do not alter any input arguments; if you do, it will prevent proper cloning of the estimator. See Scikit-learn's [section](https://scikit-learn.org/stable/developers/develop.html#instantiation) on instantiation for more info. For example, the following `__init__` shows what **NOT** to do:

```python
def __init__(self, my_option="option1"):
    if (my_option == "option1"):
        self.my_option = 1
    else:
        self.my_option = 2
```

This will break cloning since the value of `self.my_option` is not a valid input to `__init__`. Instead, `my_option` should be saved as an attribute as-is.

### Implementing `get_param_names()`

To support cloning, estimators need to implement the function `get_param_names()`. The returned value should be a list of strings of all estimator attributes that are necessary to duplicate the estimator. This method is used in `Base.get_params()`, which will collect the estimator param values from this list and pass the resulting dictionary to a new estimator constructor. Therefore, all strings returned by `get_param_names()` should be arguments in `__init__()`, otherwise an invalid argument exception will be raised. Most estimators implement `get_param_names()` similar to:

```python
def get_param_names(self):
    return super().get_param_names() + [
        "eps",
        "min_samples",
    ]
```

**Note:** Be sure to include `super().get_param_names()` in the returned list to properly set the `super()` attributes.

### Estimator Tags and cuML-Specific Tags

Scikit-learn introduced estimator tags in version 0.21, which are used to programmatically inspect the capabilities of estimators. These capabilities include items like sparse matrix support and the need for positive inputs, among other things. cuML estimators support _all_ of the tags defined by the Scikit-learn estimator [developer guide](https://scikit-learn.org/stable/developers/index.html), and will add support for any tag added there.

Additionally, some tags specific to cuML have been added. These tags may or may not be specific to GPU data types and can even apply outside of automated testing, such as allowing for the optimization of data generation. This can be useful for pipelines and HPO, among other things. These are:

- `X_types_gpu` (default=['2darray'])
  Analogous to `X_types`, indicates what types of GPU objects an estimator can take. `2darray` includes GPU ndarray objects (like CuPy and Numba) and cuDF objects, since they are all processed the same by `input_utils`. `sparse` includes CuPy sparse arrays.
- `preferred_input_order` (default=None)
  One of ['F', 'C', None]. Whether an estimator "prefers" data in column-major ('F') or row-major ('C') contiguous memory layout. If different methods prefer different layouts, or neither format is beneficial, then it is defined as `None` unless there is a good reason to choose either `F` or `C`. For example, all of `fit`, `predict`, etc. in an estimator might use `F`, but only `score` uses `C`.
- `dynamic_tags` (default=False)
  Most estimators only need to define the tags statically, which facilitates the usage of tags in general. But some estimators might need to modify the values of a tag based on runtime attributes, so this tag reflects whether an estimator needs to do that. This tag value is automatically set by the `Base` estimator class if an estimator has defined the `_more_tags` instance method.

Note on MRO and tags: when multiple classes define the same tag in a composed class, tag resolution gives precedence to the classes closer to the final class, overwriting the values of the farther ones. In Python, MRO resolution places the uppermost classes closer to the inheriting class. For example:

Class:

```python
class DBSCAN(Base, ClusterMixin, CMajorInputTagMixin):
```

MRO:

```python
>>> cuml.DBSCAN.__mro__
(<class 'cuml.cluster.dbscan.DBSCAN'>,
 <class 'cuml.common.base.Base'>,
 <class 'cuml.common.mixins.TagsMixin'>,
 <class 'cuml.common.mixins.ClusterMixin'>,
 <class 'cuml.common.mixins.CMajorInputTagMixin'>,
 <class 'object'>)
```

This needs to be taken into account for tag resolution: in the case above, the tags in `ClusterMixin` would overwrite the tags of `CMajorInputTagMixin` if they defined the same tags. Take this into consideration for the (uncommon) cases where tags might be re-defined in your MRO. This is not common, since most tag mixins define mutually exclusive tags (i.e. either prefer `F` or `C` major inputs).

### Estimator Array-Like Attributes

Any array-like attribute stored in an estimator needs to be convertible to the user's desired output type. To make it easier to store array-like objects in a class that derives from `Base`, the `cuml.common.array.CumlArrayDescriptor` was created. The `CumlArrayDescriptor` class is a Python descriptor object which allows cuML to implement customized attribute lookup, storage and deletion code that can be reused on all estimators.

The `CumlArrayDescriptor` behaves differently when accessed internally (from within one of `cuml`'s functions) vs. externally (for user code outside the cuml module). Internally, it behaves exactly like a normal attribute and will return the previous value set. Externally, the array will get converted to the user's desired output type lazily, and repeated conversion will be cached. Performing the array conversion lazily (i.e. converting the input array to the desired output type only when the attribute is read for the first time) can greatly help reduce memory consumption, but can have unintended impacts that developers should be aware of.
For example, benchmarking should take into account the lazy evaluation and ensure the array conversion is included in any profiling.

#### Defining Array-Like Attributes

To use the `CumlArrayDescriptor` in an estimator, any array-like attributes need to be specified by creating a `CumlArrayDescriptor` as a class variable. An order can be specified to indicate the order the array should be in for the C++ algorithms to work.

```python
from cuml.common.array_descriptor import CumlArrayDescriptor

class TestEstimator(cuml.Base):

   # Class variables outside of any function
   my_cuml_array_ = CumlArrayDescriptor(order='C')

   def __init__(self, ...):
      ...
```

This gives the developer full control over which attributes are arrays and the name of the array-like attribute (something that was not true before `0.17`).

#### Working with `CumlArrayDescriptor`

Once a `CumlArrayDescriptor` attribute has been defined, developers can use the attribute as they normally would. Consider the following example estimator:

```python
import cupy as cp
import cuml
from cuml.common.array_descriptor import CumlArrayDescriptor

class SampleEstimator(cuml.Base):

   # Class variables outside of any function
   my_cuml_array_ = CumlArrayDescriptor()
   my_cupy_array_ = CumlArrayDescriptor()
   my_other_array_ = CumlArrayDescriptor()

   def __init__(self, ...):
      # Initialize to None (not mandatory)
      self.my_cuml_array_ = None

      # Init with a cupy array
      self.my_cupy_array_ = cp.zeros((10, 10))

   def fit(self, X):
      # Stores the type of `X` and sets the output type if self.output_type == "input"
      self._set_output_type(X)

      # Set my_cuml_array_ with a CumlArray
      self.my_cuml_array_, *_ = input_to_cuml_array(X, order="K")

      # Access `my_cupy_array_` normally and set to another attribute
      # The internal type of my_other_array_ will be a CuPy array
      self.my_other_array_ = cp.ones((10, 10)) + self.my_cupy_array_

      return self
```

Just like any normal attribute, `CumlArrayDescriptor` attributes will return the same value that was set into the attribute _unless accessed externally_ (more on that below). However, developers can convert the type of an array-like attribute by using the `cuml.global_output_type` functionality and reading from the attribute. For example, we could add a `score()` function to `SampleEstimator`:

```python
def score(self):

   # Set the global output type to numpy
   with cuml.using_output_type("numpy"):
      # Accessing my_cuml_array_ will return a numpy array and
      # the result can be returned directly
      return np.sum(self.my_cuml_array_, axis=0)
```

This has the same benefits of lazy conversion and caching as when descriptors are used externally.

#### CumlArrayDescriptor External Functionality

Externally, when users read from a `CumlArrayDescriptor` attribute, the array data will be converted to the correct output type _lazily_ when the attribute is read from. For example, building off the above `SampleEstimator`:

```python
my_est = SampleEstimator()

# Print the default output_type and value for `my_cuml_array_`
# By default, `output_type` is set to `cuml.global_output_type`
# If `cuml.global_output_type == None`, `output_type` is set to "input"
print(my_est.output_type) # Output: "input"
print(my_est.my_cuml_array_) # Output: None
print(my_est.my_other_array_) # Output: AttributeError! my_other_array_ was never set

# Call fit() with a numpy array as the input
np_arr = np.ones((10,))
my_est.fit(np_arr) # This will load data into attributes

# `my_cuml_array_` was set internally as a CumlArray.
# Externally, we can check the type
print(type(my_est.my_cuml_array_)) # Output: Numpy (saved from the input of `fit`)

# Calling fit again with cupy arrays will have a similar effect
my_est.fit(cp.ones((10,)))
print(type(my_est.my_cuml_array_)) # Output: CuPy

# Setting the `output_type` will change all descriptor properties
# and ignore the input type
my_est.output_type = "cudf"

# Reading any of the attributes will convert the type lazily
print(type(my_est.my_cuml_array_)) # Output: cuDF object

# Setting the global_output_type overrides the estimator output_type attribute
with cuml.using_output_type("cupy"):
   print(type(my_est.my_cuml_array_)) # Output: cupy

# Once the global_output_type is restored, we return to the estimator output_type
print(type(my_est.my_cuml_array_)) # Output: cuDF. Using a cached value!
```

For more information about `CumlArrayDescriptor` and its implementation, see the [CumlArrayDescriptor Details]() section of the Appendix.

### Estimator Methods

To allow estimator methods to accept a wide variety of inputs and outputs, a set of decorators have been created to wrap estimator functions (and all `cuml` API functions as well) and perform the standard conversions automatically. cuML provides 2 options for performing the standard array type conversions:

1. For many common patterns used in functions like `fit()`, `predict()`, `transform()`, `cuml.Base` can automatically perform the data conversions as long as a method has the necessary type annotations.
2. Decorators can be manually added to methods to handle more advanced use cases

#### Option 1: Automatic Array Conversion From Type Annotation

To automatically convert array-like objects being returned by an Estimator method, a new metaclass has been added to `Base` that can scan the return type information of an Estimator method and infer which, if any, array conversion should be done.
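The idea of a metaclass dispatching on return annotations can be sketched in plain Python. This is an illustrative toy only, not cuML's actual metaclass: the names `AutoConvertMeta` and `_convert_output` are hypothetical, and the "conversion" here just tags the result so the wrapping is observable.

```python
# Illustrative sketch of annotation-driven wrapping; not cuML's implementation.

def _convert_output(value):
    # Stand-in for cuML's output conversion; here we just tag the result.
    return ("converted", value)

class AutoConvertMeta(type):
    def __new__(mcls, name, bases, namespace):
        for attr, func in list(namespace.items()):
            # Wrap only methods whose return annotation is `list` (our toy
            # stand-in for CumlArray); everything else is left untouched.
            if callable(func) and getattr(func, "__annotations__", {}).get("return") is list:
                namespace[attr] = mcls._wrap(func)
        return super().__new__(mcls, name, bases, namespace)

    @staticmethod
    def _wrap(func):
        def wrapper(self, *args, **kwargs):
            return _convert_output(func(self, *args, **kwargs))
        return wrapper

class Estimator(metaclass=AutoConvertMeta):
    def predict(self, X) -> list:        # annotated with `list`: gets wrapped
        return [x + 1 for x in X]

    def fit(self, X) -> "Estimator":     # different annotation: not wrapped
        return self

est = Estimator()
print(est.predict([1, 2]))  # -> ('converted', [2, 3])
print(est.fit([1]) is est)  # -> True
```

The real system is richer (it distinguishes `Base`, `CumlArray`, `SparseCumlArray`, and generic containers, as the table below describes), but the mechanism of inspecting `__annotations__` at class creation time is the same basic shape.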
For example, if a method returns a type of `Base`, cuML can assume this method is likely similar to `fit()` and should call `Base._set_base_attributes()` before calling the method. If a method returns a type of `CumlArray`, cuML can assume this method is similar to `predict()` or `transform()`, and the return value is an array that may need to be converted using the output type calculated in `Base._get_output_type()`. The full set of return types rules that will be applied by the `Base` metaclass are: | Return Type | Converts Array Type? | Common Methods | Notes | | :---------: | :-----------: | :----------- | :----------- | | `Base` | No | `fit()` | Any type that inherits or `isinstance` of `Base` will work | | `CumlArray` | Yes | `predict()`, `transform()` | Functions can return any array-like object (`np.ndarray`, `cp.ndarray`, etc. all accepted) | | `SparseCumlArray` | Yes | `predict()`, `transform()` | Functions can return any sparse array-like object (`scipy`, `cupyx.scipy` sparse arrays accepted) | | `dict`, `tuple`, `list` or `typing.Union` | Yes | | Functions must return a generic object that contains an array-like object. No sparse arrays are supported | Simply setting the return type of a method is all that is necessary to automatically convert the return type (with the added benefit of adding more information to the code). Below are some examples to show simple methods using automatic array conversion. ##### `fit()` ```python def fit(self, X) -> "KMeans": # Convert the input to CumlArray self.coef_ = input_to_cuml_array(X, order="K").array return self ``` **Notes:** - Any type that derives from `Base` can be used as the return type for `fit()`. In python, to indicate returning `self` from a function, class type can be surrounded in quotes to prevent an import error. 
##### `predict()`

```python
def predict(self, X) -> CumlArray:

    # Convert to CumlArray
    X_m = input_to_cuml_array(X, order="K").array

    # Call a cuda function
    X_m = cp.asarray(X_m) + cp.ones(X_m.shape)

    # Directly return a cupy array
    return X_m
```

**Notes:**
- It's not necessary to convert to `CumlArray` and cast with `to_output` before returning. This function directly returned a `cp.ndarray` object. Any array-like object can be returned.

#### Option 2: Manual Estimator Method Decoration

While the automatic conversion from type annotations works for many estimator functions, sometimes it's necessary to explicitly decorate an estimator method. This gives developers greater flexibility over the input argument, output type and output dtype.

Which decorator to use for an estimator function is determined by two factors:

1. The function's return type
2. Whether the function is on a class deriving from `Base`

The full set of decorators can be organized by these two factors:

| Return Type → | Array-Like | Sparse Array-Like | Generic | Any |
| -----------: | :-----------: | :-----------: | :-----------: | :-----------: |
| `Base` | `@api_base_return_array` | `@api_base_return_sparse_array` | `@api_base_return_generic` | `@api_base_return_any` |
| Non-`Base` | `@api_return_array` | `@api_return_sparse_array` | `@api_return_generic` | `@api_return_any` |

Simply choosing the decorator based on the return type and whether the function is on `Base` will work most of the time. The decorator default options were designed to work on most estimator functions without much customization. An in-depth discussion of how these decorators work, when each should be used, and their default options can be found in the Appendix.
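As a rough illustration of the metaclass-driven approach from Option 1, the sketch below shows how a metaclass can wrap methods based on their return annotation. This is a toy, not the real cuML implementation: `AutoConvertMeta`, `convert_output`, and the use of a `list` annotation as a stand-in for `CumlArray` are all assumptions made for the example.

```python
def convert_output(fn):
    """Stand-in for cuML's output-conversion step."""
    def wrapper(self, *args, **kwargs):
        result = fn(self, *args, **kwargs)
        # Pretend "to_output" conversion: normalize to a list of floats
        return [float(v) for v in result]
    return wrapper

class AutoConvertMeta(type):
    """Wrap public methods whose return annotation is `list`
    (standing in for `CumlArray`) with a conversion step."""
    def __new__(mcs, name, bases, namespace):
        for attr, value in list(namespace.items()):
            if callable(value) and not attr.startswith("_"):
                if getattr(value, "__annotations__", {}).get("return") is list:
                    namespace[attr] = convert_output(value)
        return super().__new__(mcs, name, bases, namespace)

class Estimator(metaclass=AutoConvertMeta):
    def predict(self, X) -> list:
        return X  # raw "array-like" result; the wrapper converts it

print(type(Estimator().predict((1, 2))))  # -> <class 'list'>
```

The method body never performs conversion itself; the annotation alone opts it into the wrapping, which mirrors how annotating a cuML method with `CumlArray` opts it into automatic output conversion.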
For now, we will show an example method that uses a non-standard input argument name and also requires converting the array dtype:

##### Non-Standard `predict()`

```python
@cuml.internals.api_base_return_array(input_arg="X_in", get_output_dtype=True)
def predict(self, X):

    # Convert to CumlArray
    X_m = input_to_cuml_array(X, order="K").array

    # Call a cuda function
    X_m = cp.asarray(X_m) + cp.ones(X_m.shape)

    # Return the cupy array directly
    return X_m
```

**Notes:**
- The decorator argument `input_arg` can be used to specify which input should be considered the "input".
  - In reality, this isn't necessary for this example. The decorator will look for an argument named `"X"` or default to the first non-`self` argument.
- It's not necessary to convert to `CumlArray` and cast with `to_output` before returning. This function directly returned a `cp.ndarray` object. Any array-like object can be returned.
- Specifying `get_output_dtype=True` in the decorator argument instructs the decorator to also calculate the dtype in addition to the output type.

## Do's and Don'ts

### **Do:** Add Return Typing Information to Estimator Functions

Adding the return type to estimator functions will allow the `Base` metaclass to automatically decorate functions based on their return type.

**Do this:**
```python
def fit(self, X, y, convert_dtype=True) -> "KNeighborsRegressor":

def predict(self, X, convert_dtype=True) -> CumlArray:

def kneighbors_graph(self, X=None, n_neighbors=None, mode='connectivity') -> SparseCumlArray:

def predict(self, start=0, end=None, level=None) -> typing.Union[CumlArray, float]:
```

**Not this:**
```python
def fit(self, X, y, convert_dtype=True):

def predict(self, X, convert_dtype=True):

def kneighbors_graph(self, X=None, n_neighbors=None, mode='connectivity'):

def predict(self, start=0, end=None, level=None):
```

### **Do:** Return Array-Like Objects Directly

There is no need to convert the array type before returning it.
Simply return any array-like object and it will be automatically converted.

**Do this:**
```python
def predict(self) -> CumlArray:

    cp_arr = cp.ones((10,))

    return cp_arr
```

**Not this:**
```python
def predict(self, X, y) -> CumlArray:

    cp_arr = cp.ones((10,))

    # Don't be tempted to use `CumlArray(cp_arr)` here either
    cuml_arr = input_to_cuml_array(cp_arr, order="K").array

    return cuml_arr.to_output(self._get_output_type(X))
```

### **Don't:** Use `CumlArray.to_output()` directly

Using `CumlArray.to_output()` is no longer necessary except in very rare circumstances. Converting array types is best handled with `input_to_cuml_array` or `cuml.using_output_type()` when retrieving `CumlArrayDescriptor` values.

**Do this:**
```python
def _private_func(self) -> CumlArray:
    return cp.ones((10,))

def predict(self, X, y) -> CumlArray:

    self.my_cupy_attribute_ = cp.zeros((10,))

    with cuml.using_output_type("numpy"):
        np_arr = self._private_func()

    return self.my_cupy_attribute_ + np_arr
```

**Not this:**
```python
def _private_func(self) -> CumlArray:
    return cp.ones((10,))

def predict(self, X, y) -> CumlArray:

    self.my_cupy_attribute_ = cp.zeros((10,))

    np_arr = CumlArray(self._private_func()).to_output("numpy")

    return CumlArray(self.my_cupy_attribute_).to_output("numpy") + np_arr
```

### **Don't:** Perform parameter modification in `__init__()`

Input arguments to `__init__()` should be stored as they were passed in. Parameter modification, such as converting parameter strings to integers, should be done in `fit()` or a helper private function. Though this is more verbose, it is necessary: altering the parameters in `__init__()` will break the estimator's ability to be used with `clone()`.

**Do this:**
```python
class TestEstimator(cuml.Base):

    def __init__(self, method_name: str, ...):
        super().__init__(...)
        self.method_name = method_name

    def _method_int(self) -> int:
        return 1 if self.method_name == "type1" else 0

    def fit(self, X) -> "TestEstimator":

        # Call external code from Cython
        my_external_func(X.ptr, <int>self._method_int())

        return self
```

**Not this:**
```python
class TestEstimator(cuml.Base):

    def __init__(self, method_name: str, ...):
        super().__init__(...)

        self.method_name = 1 if method_name == "type1" else 0

    def fit(self, X) -> "TestEstimator":

        # Call external code from Cython
        my_external_func(X.ptr, <int>self.method_name)

        return self
```

## Appendix

This section contains more in-depth information about the decorators and descriptors to help developers understand what's going on behind the scenes.

### Estimator Array-Like Attributes

#### Automatic Decoration Rules

Adding decorators to every estimator function just to use the decorator default values would be very repetitive and unnecessary. Because most estimator functions follow a similar pattern, a new metaclass has been created to automatically decorate estimator functions based on their return type. This metaclass decorates functions according to a few rules:

1. If a function has been manually decorated, it will not be automatically decorated.
2. If an estimator function returns an instance of `Base`, then `@api_base_return_any()` will be applied.
3. If an estimator function returns a `CumlArray`, then `@api_base_return_array()` will be applied.
4. If an estimator function returns a `SparseCumlArray`, then `@api_base_return_sparse_array()` will be applied.
5. If an estimator function returns a `dict`, `tuple`, `list` or `typing.Union`, then `@api_base_return_generic()` will be applied.
| Return Type | Decorator | Notes |
| :-----------: | :-----------: | :----------- |
| `Base` | `@api_base_return_any(set_output_type=True, set_n_features_in=True)` | Any type that is an `isinstance` of `Base` will work |
| `CumlArray` | `@api_base_return_array(get_output_type=True)` | Functions can return any array-like object |
| `SparseCumlArray` | `@api_base_return_sparse_array(get_output_type=True)` | Functions can return any sparse array-like object |
| `dict`, `tuple`, `list` or `typing.Union` | `@api_base_return_generic(get_output_type=True)` | Functions must return a generic object that contains an array-like object. No sparse arrays are supported |

#### `CumlArrayDescriptor` Internals

The internal representation of `CumlArrayDescriptor` is a `CumlArrayDescriptorMeta` object. To inspect the internal representation, the attribute value must be accessed directly from the estimator's `__dict__` (both `getattr` and `__getattr__` will perform the conversion). For example:

```python
my_est = TestEstimator()
my_est.fit(cp.ones((10,)))

# Access the CumlArrayDescriptorMeta value directly. No array conversion will occur
print(my_est.__dict__["my_cuml_array_"])
# Output: CumlArrayDescriptorMeta(input_type='cupy', values={'cuml': <cuml.internals.array.CumlArray object at 0x7fd39174ae20>, 'numpy': array([ 0, 1, 1, 2, 2, -1, -1, ...

# Values from CumlArrayDescriptorMeta can be read individually
print(my_est.__dict__["my_cuml_array_"].input_type)
# Output: "cupy"

# The input value can be accessed
print(my_est.__dict__["my_cuml_array_"].get_input_value())
# Output: CumlArray ...
```

### Estimator Methods

#### Common Functionality

All of these decorators perform the same basic steps with a few small differences. The common steps performed by each decorator are:

1. Set `cuml.global_output_type = "mirror"`
   1. When `"mirror"` is used as the global output type, it indicates we are in an internal cuML API call.
      The `CumlArrayDescriptor` keys off this value to switch between internal and external functionality.
2. Set the CuPy allocator to use RMM
   1. This replaces the existing decorator `@with_cupy_rmm`
   2. Unlike before, the CuPy allocator is only set once per API call
3. Set the estimator input attributes. This can be broken down into 3 steps:
   1. Set the `_input_type` attribute
   2. Set the `target_dtype` attribute
   3. Set the `n_features` attribute
4. Call the desired function
5. Get the estimator output type. This can be broken down into 2 steps:
   1. Get `output_type`
   2. Get `output_dtype`
6. Convert the return value
   1. This will ultimately call `CumlArray.to_output(output_type=output_type, output_dtype=output_dtype)`

While the above list of steps may seem excessive for every call, most functions follow this general form but may skip a few steps depending on a couple of factors. For example, step 3 is necessary for functions that modify the estimator's estimated attributes, such as `fit()`, but is not necessary for functions like `predict()` or `transform()`. Steps 5 and 6 are only necessary when returning array-like objects and are omitted when returning any other type.

Functionally, you can think of these decorators as equivalent to the following pseudocode:

```python
def my_func(self, X):
    with cuml.using_output_type("mirror"):
        with cupy.cuda.cupy_using_allocator(rmm.allocators.cupy.rmm_cupy_allocator):
            # Set the input properties
            self._set_base_attributes(output_type=X, n_features=X)

            # Do the actual calculation, returning an array-like object
            ret_val = self._my_func(X)

            # Get the output type
            output_type = self._get_output_type(X)

            # Convert array-like to CumlArray
            ret_val = input_to_cuml_array(ret_val, order="K").array

            # Convert CumlArray to the desired output_type
            return ret_val.to_output(output_type)
```

Keep the above pseudocode in mind when working with these decorators, since their goal is to replace many of these repetitive functions.
### Decorator Defaults

Every function in `cuml` is slightly different: some `fit()` functions may need to set the `target_dtype`, and some `predict()` functions may need to skip getting the output type. To handle these situations, all of the decorators take arguments to configure their functionality. Since the decorators' functionality is very similar, so are their arguments, which are outlined below.

| Argument | Type | Default | Meaning |
| :-----------: | :-----------: | :-----------: | :----------- |
| `input_arg` | `str` | `'X'` or 1st non-`self` argument | Determines which input argument to use for `_set_output_type()` and `_set_n_features_in()` |
| `target_arg` | `str` | `'y'` or 2nd non-`self` argument | Determines which input argument to use for `_set_target_dtype()` |
| `set_output_type` | `bool` | Varies | Whether to call `_set_output_type(input_arg)` |
| `set_output_dtype` | `bool` | `False` | Whether to call `_set_target_dtype(target_arg)` |
| `set_n_features_in` | `bool` | Varies | Whether to call `_set_n_features_in(input_arg)` |
| `get_output_type` | `bool` | Varies | Whether to call `_get_output_type(input_arg)` |
| `get_output_dtype` | `bool` | `False` | Whether to call `_get_target_dtype()` |

An example of how these arguments can be used is below:

**Before:**
```python
@with_cupy_rmm
def predict(self, X, y):
    # Determine the output type and dtype
    out_type = self._get_output_type(y)
    out_dtype = self._get_target_dtype()

    # Convert to CumlArray
    X_m = input_to_cuml_array(X, order="K").array

    # Call a cuda function
    someCudaFunction(X_m.ptr)

    # Convert the CudaArray to the desired output
    return X_m.to_output(output_type=out_type, output_dtype=out_dtype)
```

**After:**
```python
@cuml.internals.api_base_return_array(input_arg="y", get_output_dtype=True)
def predict(self, X):
    # Convert to CumlArray
    X_m = input_to_cuml_array(X, order="K").array

    # Call a cuda function
    someCudaFunction(X_m.ptr)

    # Convert the
    # CudaArray to the desired output
    return X_m
```

#### Before `0.17` and After Comparison

For developers used to the `0.16` architecture, it can be helpful to see examples of estimator methods from `0.16` compared to `0.17` and after. This section shows a few examples side by side to illustrate the changes.

##### `fit()`

<table>
<thead>
<tr>
<th>Before</th>
<th>After</th>
</tr>
</thead>
<tbody>
<tr>
<td>

```python
@with_cupy_rmm
def fit(self, X):
    # Set the base input attributes
    self._set_base_attributes(output_type=X, n_features=X)

    self.coef_ = input_to_cuml_array(X, order="K").array

    return self
```

</td>
<td style="vertical-align: top">

```python
def fit(self, X) -> "KMeans":
    self.coef_ = input_to_cuml_array(X, order="K").array

    return self
```

</td>
</tr>
</tbody>
</table>

**Notes:**
- `@with_cupy_rmm` is no longer needed. This is automatically applied for every public method of estimators.
- `self._set_base_attributes()` no longer needs to be called.

##### `predict()`

<table>
<thead>
<tr>
<th>Before</th>
<th>After</th>
</tr>
</thead>
<tbody>
<tr>
<td>

```python
@with_cupy_rmm
def predict(self, X, y):
    # Determine the output type and dtype
    out_type = self._get_output_type(y)

    # Convert to CumlArray
    X_m = input_to_cuml_array(X, order="K").array

    # Do some calculation with cupy
    X_m = cp.asarray(X_m) + cp.ones(X_m.shape)

    # Convert back to CumlArray
    X_m = CumlArray(X_m)

    # Convert the CudaArray to the desired output
    return X_m.to_output(output_type=out_type)
```

</td>
<td style="vertical-align: top">

```python
def predict(self, X) -> CumlArray:
    # Convert to CumlArray
    X_m = input_to_cuml_array(X, order="K").array

    # Call a cuda function
    X_m = cp.asarray(X_m) + cp.ones(X_m.shape)

    # Directly return a cupy array
    return X_m
```

</td>
</tr>
</tbody>
</table>

**Notes:**
- `@with_cupy_rmm` is no longer needed. This is automatically applied for every public method of estimators.
- `self._get_output_type()` no longer needs to be called.
  The output type is determined automatically.
- It's not necessary to convert to `CumlArray` and cast with `to_output` before returning. This function directly returned a `cp.ndarray` object. Any array-like object can be returned.

##### `predict()` with `dtype`

<table>
<thead>
<tr>
<th>Before</th>
<th>After</th>
</tr>
</thead>
<tbody>
<tr>
<td>

```python
@with_cupy_rmm
def predict(self, X_in):
    # Determine the output_type
    out_type = self._get_output_type(X_in)
    out_dtype = self._get_target_dtype()

    # Convert to CumlArray
    X_m = input_to_cuml_array(X_in, order="K").array

    # Call a cuda function
    X_m = cp.asarray(X_m) + cp.ones(X_m.shape)

    # Convert back to CumlArray
    X_m = CumlArray(X_m)

    # Convert the CudaArray to the desired output and dtype
    return X_m.to_output(output_type=out_type, output_dtype=out_dtype)
```

</td>
<td style="vertical-align: top">

```python
@api_base_return_array(input_arg="X_in", get_output_dtype=True)
def predict(self, X):
    # Convert to CumlArray
    X_m = input_to_cuml_array(X, order="K").array

    # Call a cuda function
    X_m = cp.asarray(X_m) + cp.ones(X_m.shape)

    # Return the cupy array directly
    return X_m
```

</td>
</tr>
</tbody>
</table>

**Notes:**
- `@with_cupy_rmm` is no longer needed. This is automatically applied with every decorator.
- The decorator argument `input_arg` can be used to specify which input should be considered the "input".
  - In reality, this isn't necessary for this example. The decorator will look for an argument named `"X"` or default to the first non-`self` argument.
- `self._get_output_type()` and `self._get_target_dtype()` no longer need to be called. Both the output type and dtype are determined automatically.
- It's not necessary to convert to `CumlArray` and cast with `to_output` before returning. This function directly returned a `cp.ndarray` object. Any array-like object can be returned.
- Specifying `get_output_dtype=True` in the decorator argument instructs the decorator to also calculate the dtype in addition to the output type.
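The fit-records/predict-reuses dtype pattern behind `get_output_dtype=True` can be made concrete with a small stand-alone sketch. This is illustrative only: `Regressor`, `_target_dtype`, and `offset_` are invented names, with Python's `int`/`float` types playing the role of array dtypes.

```python
class Regressor:
    """Toy estimator: fit() records the target dtype, predict() reuses it."""

    def fit(self, X, y):
        self._target_dtype = type(y[0])   # stand-in for _set_target_dtype(y)
        self.offset_ = sum(y) / len(y)
        return self

    def predict(self, X):
        out_dtype = self._target_dtype    # stand-in for _get_target_dtype()
        return [out_dtype(x + self.offset_) for x in X]

r = Regressor().fit([0, 1], [2, 4])
print(r.predict([1, 2]))  # -> [4, 5], cast back to the dtype of y
```

Even though the internal arithmetic produces floats, the output is cast back to the dtype observed during `fit()`, which is exactly the behavior the decorator automates.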
# cuML Python Developer Guide

This document summarizes guidelines and best practices for contributions to the Python component of the cuML library, the machine learning component of the RAPIDS ecosystem. This is an evolving document, so contributions, clarifications and issue reports are highly welcome.

## General

Please start by reading:
1. [CONTRIBUTING.md](../../CONTRIBUTING.md)
2. [C++ DEVELOPER_GUIDE.md](../cpp/DEVELOPER_GUIDE.md)
3. [Python cuML README.md](../../python/README.md)

## Thread safety

Refer to the section on thread safety in [C++ DEVELOPER_GUIDE.md](../cpp/DEVELOPER_GUIDE.md#thread-safety)

## Coding style

1. [PEP8](https://www.python.org/dev/peps/pep-0008) style is used, and [flake8](http://flake8.pycqa.org/en/latest/) checks adherence to it.
2. [sklearn coding guidelines](https://scikit-learn.org/stable/developers/contributing.html#coding-guidelines)

## Creating a class for a new estimator or other ML algorithm

1. Make sure that the algorithm has been implemented on the C++ side. Refer to [C++ DEVELOPER_GUIDE.md](../cpp/DEVELOPER_GUIDE.md) for guidelines on developing in C++.
2. Refer to the [next section](DEVELOPER_GUIDE.md#creating-python-wrapper-class-for-an-existing-ml-algo) for the remaining steps.

## Creating python estimator wrapper class

1. Create a corresponding `algoName.pyx` file inside the `python/cuml` folder.
2. Ensure that the folder structure inside it reflects that of sklearn's. For example, `pca.pyx` should be kept inside the `decomposition` sub-folder of `python/cuml`.
3. Match the corresponding scikit-learn interface as closely as possible. Refer to their [developer guide](https://scikit-learn.org/stable/developers/contributing.html#apis-of-scikit-learn-objects) on the API design of sklearn objects for details.
4. Always make sure to have your class inherit from the `cuml.Base` class as its parent/ancestor.
5.
Ensure that the estimator's output fields follow the 'underscore on both sides' convention explained in the documentation of `cuml.Base`. This allows it to support configurable output types.

For an in-depth guide to creating estimators, see the [Estimator Guide](ESTIMATOR_GUIDE.md).

## Error handling

Calls into CUDA runtime APIs inside `cuml.cuda` will raise a `cuml.cuda.CudaRuntimeError` in case of any errors. For example:

```python
from cuml.cuda import Stream, CudaRuntimeError

try:
    s = Stream()
    s.sync()
except CudaRuntimeError as cre:
    print("Cuda Error! '%s'" % str(cre))
```

## Logging

TBD

## Documentation

We mostly follow [PEP 257](https://www.python.org/dev/peps/pep-0257/) style docstrings for documenting the interfaces. The examples in the documentation are checked through doctest. To skip the check for an example's output, use the comment `# doctest: +SKIP`. Examples subject to numerical imprecision, or that can't be reproduced consistently, should be skipped.

## Testing and Unit Testing

We use [pytest](https://docs.pytest.org/en/latest/) for writing and running tests. To see existing examples, refer to any of the `test_*.py` files in the folder `cuml/tests`.

Some tests are run against inputs generated with [hypothesis](https://hypothesis.works/). See the `cuml/testing/strategies.py` module for custom strategies that can be used to test cuml estimators with diverse inputs. For example, use the `regression_datasets()` strategy to test random regression problems.

## Device and Host memory allocations

TODO: talk about enabling RMM here when it is ready

## Asynchronous operations and stream ordering

If you want to schedule the execution of two algorithms concurrently, it is better to create two separate streams and assign them to separate handles. Finally, schedule the algorithms using these handles.
```python
import cuml
from cuml.cuda import Stream

s1 = Stream()
h1 = cuml.Handle()
h1.setStream(s1)

s2 = Stream()
h2 = cuml.Handle()
h2.setStream(s2)

algo1 = cuml.Algo1(handle=h1, ...)
algo2 = cuml.Algo2(handle=h2, ...)

algo1.fit(X1, y1)
algo2.fit(X2, y2)
```

To learn more of the underlying details about stream ordering, refer to the corresponding section of [C++ DEVELOPER_GUIDE.md](../../cpp/DEVELOPER_GUIDE.md#asynchronous-operations-and-stream-ordering)

## Multi GPU

TODO: Add more details.

## Benchmarking

The cuML code, including its Python operations, can be profiled. The `nvtx_benchmark.py` helper script produces a simple benchmark summary. To use it, run `python nvtx_benchmark.py "python test.py"`.

Here is an example with the following script:
```
from cuml.datasets import make_blobs
from cuml.manifold import UMAP

X, y = make_blobs(n_samples=1000, n_features=30)

model = UMAP()
model.fit(X)
embeddings = model.transform(X)
```

that once benchmarked can have its profiling summarized:
```
datasets.make_blobs                      : 1.3571 s
manifold.umap.fit [0x7f10eb69d4f0]       : 0.6629 s
    |> umap::unsupervised::fit           : 0.6611 s
    |==> umap::knnGraph                  : 0.4693 s
    |==> umap::simplicial_set            : 0.0015 s
    |==> umap::embedding                 : 0.1902 s
manifold.umap.transform [0x7f10eb69d4f0] : 0.0934 s
    |> umap::transform                   : 0.0925 s
    |==> umap::knnGraph                  : 0.0909 s
    |==> umap::smooth_knn                : 0.0002 s
    |==> umap::optimization              : 0.0011 s
```
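To illustrate the doctest convention from the Documentation section above, here is a hypothetical docstring with one checked example and one GPU-dependent example that is skipped (`add_one` is invented for illustration):

```python
import doctest

def add_one(x):
    """Add one to ``x``.

    Examples
    --------
    >>> add_one(1)
    2
    >>> import cuml  # doctest: +SKIP
    """
    return x + 1

# Run the docstring examples: the first is checked, the second is skipped.
runner = doctest.DocTestRunner()
for test in doctest.DocTestFinder().find(add_one, globs={"add_one": add_one}):
    runner.run(test, out=lambda s: None)
print(runner.failures)  # -> 0
```

Without the `# doctest: +SKIP` comment, the `import cuml` line would be executed during the documentation check and fail on machines without a GPU stack installed.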
# Using Infiniband for Multi-Node Multi-GPU cuML

These instructions outline how to run multi-node multi-GPU cuML on devices with Infiniband. They assume the necessary Infiniband hardware has already been installed and the relevant software has been configured to enable communication over the Infiniband devices. The steps in this wiki post have been largely adapted from the [Experiments in High Performance Networking with UCX and DGX](https://blog.dask.org/2019/06/09/ucx-dgx) blog by Matthew Rocklin and Rick Zamora.

## 1. Install UCX

### From Conda

Note: this package is experimental and will eventually be supported under the rapidsai channel. Currently it requires CUDA 9.2, but a CUDA 10 package is also in the works.

`conda install -c conda-forge -c jakirkham/label/ucx cudatoolkit=9.2 ucx-proc=*=gpu ucx python=3.7`

### From Source

Install autogen if it's not already installed:
```bash
sudo apt-get install autogen autoconf libtool
```

Optionally install `gdrcopy` for faster GPU-network card data transfer. From the [ucx wiki](https://github.com/openucx/ucx/wiki/NVIDIA-GPU-Support), `gdrcopy` can be installed, and might be necessary, to enable faster GPU-network card data transfer. Here are the install instructions, taken from the [gdrcopy github](https://github.com/NVIDIA/gdrcopy):

```bash
git clone https://github.com/NVIDIA/gdrcopy.git
cd gdrcopy
make -j PREFIX=$CONDA_INSTALL_PREFIX CUDA=/usr/local/cuda && make -j install
sudo ./insmod.sh
```

```bash
git clone https://github.com/cjnolet/ucx-py.git
cd ucx
git checkout fea-ext-expose_worker_and_ep
./autogen.sh
mkdir build && cd build
../configure --prefix=$CONDA_PREFIX --with-cuda=/usr/local/cuda --enable-mt --disable-cma CPPFLAGS="-I/usr/local/cuda/include"
make -j install
```

Note: If you have installed `gdrcopy`, you can add `--with-gdrcopy=/path/to/gdrcopy` to the options in `configure`.

Verify with `ucx_info -d`.
You should expect to see line(s) with the `rc` transport: ``` # Transport: rc # # Device: mlx5_0:1 # # capabilities: # bandwidth: 11794.23 MB/sec # latency: 600 nsec + 1 * N # overhead: 75 nsec # put_short: <= 124 # put_bcopy: <= 8K # put_zcopy: <= 1G, up to 8 iov # put_opt_zcopy_align: <= 512 # put_align_mtu: <= 4K # get_bcopy: <= 8K # get_zcopy: 65..1G, up to 8 iov # get_opt_zcopy_align: <= 512 # get_align_mtu: <= 4K # am_short: <= 123 # am_bcopy: <= 8191 # am_zcopy: <= 8191, up to 7 iov # am_opt_zcopy_align: <= 512 # am_align_mtu: <= 4K # am header: <= 127 # domain: device # connection: to ep # priority: 30 # device address: 3 bytes # ep address: 4 bytes # error handling: peer failure ``` You should also expect to see lines with `cuda_copy` and `cuda_ipc` transports: ``` # Transport: cuda_copy # # Device: cudacopy0 # # capabilities: # bandwidth: 6911.00 MB/sec # latency: 10000 nsec # overhead: 0 nsec # put_short: <= 4294967295 # put_zcopy: unlimited, up to 1 iov # put_opt_zcopy_align: <= 1 # put_align_mtu: <= 1 # get_short: <= 4294967295 # get_zcopy: unlimited, up to 1 iov # get_opt_zcopy_align: <= 1 # get_align_mtu: <= 1 # connection: to iface # priority: 0 # device address: 0 bytes # iface address: 8 bytes # error handling: none ``` ``` # Memory domain: cuda_ipc # component: cuda_ipc # register: <= 1G, cost: 0 nsec # remote key: 104 bytes # # Transport: cuda_ipc # # Device: cudaipc0 # # capabilities: # bandwidth: 24000.00 MB/sec # latency: 1 nsec # overhead: 0 nsec # put_zcopy: <= 1G, up to 1 iov # put_opt_zcopy_align: <= 1 # put_align_mtu: <= 1 # get_zcopy: <= 1G, up to 1 iov # get_opt_zcopy_align: <= 1 # get_align_mtu: <= 1 # connection: to iface # priority: 0 # device address: 8 bytes # iface address: 4 bytes # error handling: none # ``` If you configured UCX with the `gdrcopy` option, you should also expect to see transports in this list: ```bash # Memory domain: gdr_copy # component: gdr_copy # register: unlimited, cost: 0 nsec # remote key: 32 bytes # # 
Transport: gdr_copy # # Device: gdrcopy0 # # capabilities: # bandwidth: 6911.00 MB/sec # latency: 1000 nsec # overhead: 0 nsec # put_short: <= 4294967295 # get_short: <= 4294967295 # connection: to iface # priority: 0 # device address: 0 bytes # iface address: 8 bytes # error handling: none ``` To better understand the CUDA-based transports in UCX, refer to [this wiki](https://github.com/openucx/ucx/wiki/NVIDIA-GPU-Support) for more details. ## 2. Install ucx-py ### From Conda Note: this package is experimental and will eventually be supported under the rapidsai channel. Currently, it requires CUDA9.2 but a CUDA10 package is also in the works. `conda install -c conda-forge -c jakirkham/label/ucx cudatoolkit=9.2 ucx-py python=3.7` ### From Source ```bash git clone git@github.com:rapidsai/ucx-py cd ucx-py export UCX_PATH=$CONDA_PREFIX make -j install ``` ## 3. Install NCCL It's important that NCCL 2.4+ be installed and no previous versions of NCCL are conflicting on your library path. This will cause compile errors during the build of cuML. ```bash conda install -c nvidia nccl ``` Create the file `.nccl.conf` in your home dir with the following: ```bash NCCL_SOCKET_IFNAME=ib0 ``` ## 4. Enable IP over IB interface at ib0 Follow the instructions at [this link](https://docs.oracle.com/cd/E19436-01/820-3522-10/ch4-linux.html#50536461_82843) to create an IP interface for the IB devices. 
From the link above, when the IP over IB kernel module has already been installed, mapping to an IP interface is simple: ``` sudo ifconfig ib0 10.0.0.50/24 ``` You can verify the interface was created properly with `ifconfig ib0` The output should look like this: ``` ib0 Link encap:UNSPEC HWaddr 80-00-00-68-FE-80-00-00-00-00-00-00-00-00-00-00 inet addr:10.0.0.50 Bcast:10.0.0.255 Mask:255.255.255.0 inet6 addr: fe80::526b:4b03:f5:ce9c/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:65520 Metric:1 RX packets:2655 errors:0 dropped:0 overruns:0 frame:0 TX packets:2697 errors:0 dropped:10 overruns:0 carrier:0 collisions:0 txqueuelen:256 RX bytes:183152 (183.1 KB) TX bytes:194696 (194.6 KB) ``` ## 5. Set UCX environment vars Use `ibstatus` to see your open IB devices. Output will look like this: ``` Infiniband device 'mlx5_0' port 1 status: default gid: fe80:0000:0000:0000:506b:4b03:00f5:ce9c base lid: 0xf sm lid: 0x1 state: 4: ACTIVE phys state: 5: LinkUp rate: 100 Gb/sec (4X EDR) link_layer: InfiniBand Infiniband device 'mlx5_1' port 1 status: default gid: fe80:0000:0000:0000:506b:4b03:0049:4236 base lid: 0x6 sm lid: 0x1 state: 4: ACTIVE phys state: 5: LinkUp rate: 100 Gb/sec (4X EDR) link_layer: InfiniBand Infiniband device 'mlx5_2' port 1 status: default gid: fe80:0000:0000:0000:506b:4b03:00f5:cf04 base lid: 0x2 sm lid: 0x1 state: 4: ACTIVE phys state: 5: LinkUp rate: 100 Gb/sec (4X EDR) link_layer: InfiniBand Infiniband device 'mlx5_3' port 1 status: default gid: fe80:0000:0000:0000:506b:4b03:0049:3eb2 base lid: 0x11 sm lid: 0x1 state: 4: ACTIVE phys state: 5: LinkUp rate: 100 Gb/sec (4X EDR) link_layer: InfiniBand ``` Put the devices and ports in a `UCX_NET_DEVICES` environment variable: ```bash export UCX_NET_DEVICES=mlx5_0:1,mlx5_3:1,mlx5_2:1,mlx5_1:1 ``` Set transports for UCX to use: ```bash export UCX_TLS=rc,cuda_copy,cuda_ipc ``` Note: if `gdrcopy` was installed, add `gdr_copy` to the end of `UCX_TLS` ## 6. 
Start Dask cluster on ib0 interface: Run this on the node designated for the scheduler: ```bash dask-scheduler --protocol ucx --interface ib0 ``` Then run this on each worker (for example, if the IP over IB device address running the scheduler is `10.0.0.50`): ```bash dask-cuda-worker ucx://10.0.0.50:8786 ``` ## 7. Run cumlCommunicator test: ### First, create a Dask `Client` and cuML `Comms`: ```python from dask.distributed import Client, wait from cuml.raft.dask.common.comms import Comms from cuml.dask.common import get_raft_comm_state from cuml.dask.common import perform_test_comms_send_recv from cuml.dask.common import perform_test_comms_allreduce import random c = Client("ucx://10.0.0.50:8786") cb = Comms(comms_p2p=True) cb.init() ``` ### Test Point-to-Point Communications: ```python n_trials = 2 def func_test_send_recv(sessionId, n_trials, r): handle = get_raft_comm_state(sessionId)["handle"] return perform_test_comms_send_recv(handle, n_trials) p2p_dfs=[c.submit(func_test_send_recv, cb.sessionId, n_trials, random.random(), workers=[w]) for wid, w in zip(range(len(cb.worker_addresses)), cb.worker_addresses)] wait(p2p_dfs) p2p_result = list(map(lambda x: x.result(), p2p_dfs)) print(str(p2p_result)) assert all(p2p_result) ``` You should see the following output on your workers: ``` ========================= Trial 0 Rank 0 received: [1, 2, 3, 4, 5, 6, 7, 10, 11, 12, 13, 8, 9, 14, 15] Rank 1 received: [0, 2, 3, 4, 5, 6, 7, 10, 11, 12, 13, 8, 9, 14, 15] Rank 2 received: [0, 1, 3, 4, 5, 6, 7, 10, 11, 12, 13, 8, 9, 14, 15] Rank 3 received: [0, 1, 2, 4, 5, 6, 7, 10, 11, 12, 13, 8, 9, 14, 15] Rank 4 received: [0, 1, 2, 3, 5, 6, 7, 10, 11, 12, 13, 8, 9, 14, 15] Rank 5 received: [0, 1, 2, 3, 4, 6, 7, 10, 11, 12, 13, 8, 9, 14, 15] Rank 6 received: [0, 1, 2, 3, 4, 5, 7, 10, 11, 12, 13, 8, 9, 14, 15] Rank 7 received: [0, 1, 2, 3, 4, 5, 6, 10, 11, 12, 13, 8, 9, 14, 15] ========================= ========================= Trial 1 Rank 0 received: [11, 2, 13, 12, 9, 10, 15, 14, 
1, 8, 5, 4, 3, 6, 7] Rank 1 received: [2, 12, 11, 10, 9, 14, 13, 8, 15, 4, 5, 6, 3, 0, 7] Rank 2 received: [12, 1, 11, 10, 9, 14, 13, 8, 15, 4, 5, 6, 3, 0, 7] Rank 3 received: [2, 11, 12, 10, 9, 14, 13, 8, 15, 4, 1, 6, 5, 0, 7] Rank 4 received: [2, 11, 12, 9, 13, 10, 15, 14, 1, 8, 3, 6, 5, 0, 7] Rank 5 received: [2, 11, 12, 9, 10, 14, 13, 8, 15, 4, 1, 6, 3, 0, 7] Rank 6 received: [2, 11, 12, 9, 10, 13, 15, 14, 1, 8, 5, 4, 3, 0, 7] Rank 7 received: [2, 11, 12, 9, 10, 13, 14, 8, 15, 4, 1, 6, 5, 0, 3] ========================= ``` ### Test collective communications: ```python def func_test_allreduce(sessionId, r): handle = get_raft_comm_state(sessionId)["handle"] return perform_test_comms_allreduce(handle) coll_dfs = [c.submit(func_test_allreduce, cb.sessionId, random.random(), workers=[w]) for wid, w in zip(range(len(cb.worker_addresses)), cb.worker_addresses)] wait(coll_dfs) coll_result = list(map(lambda x: x.result(), coll_dfs)) coll_result assert all(coll_result) ``` You should see the following output on your workers: ``` Clique size: 16 Clique size: 16 Clique size: 16 Clique size: 16 Clique size: 16 Clique size: 16 final_size: 16 Clique size: 16 Clique size: 16 final_size: 16 final_size: 16 final_size: 16 final_size: 16 final_size: 16 final_size: 16 final_size: 16 ```
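As a closing convenience, the UCX settings from step 5 can also be exported from Python before any UCX-dependent library is imported, since UCX reads the environment at initialization time. The values below are the examples used in this guide; adapt them to your own devices.

```python
import os

# Must run before ucx-py / dask-cuda are imported.
os.environ["UCX_NET_DEVICES"] = "mlx5_0:1,mlx5_3:1,mlx5_2:1,mlx5_1:1"
# Append ",gdr_copy" if gdrcopy was installed.
os.environ["UCX_TLS"] = "rc,cuda_copy,cuda_ipc"

print(os.environ["UCX_TLS"])  # -> rc,cuda_copy,cuda_ipc
```

This mirrors the `export` commands from step 5 for workflows that launch workers programmatically rather than from a shell.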

---

**File:** `rapidsai_public_repos/cuml/wiki/cpp/DEVELOPER_GUIDE.md`

# cuML developer guide

This document summarizes rules and best practices for contributions to the cuML C++ component of rapidsai/cuml. This is a living document; contributions for clarifications or fixes and issue reports are highly welcome.

## General

Please start by reading [CONTRIBUTING.md](../../CONTRIBUTING.md).

## Performance

1. In performance-critical sections of the code, favor `cudaDeviceGetAttribute` over `cudaDeviceGetProperties`. See the corresponding CUDA devblog [here](https://devblogs.nvidia.com/cuda-pro-tip-the-fast-way-to-query-device-properties/) for more details.
2. If an algo requires you to launch GPU work in multiple cuda streams, do not create multiple `raft::handle_t` objects, one for each such work stream. Instead, expose an `n_streams` parameter in that algo's cuML C++ interface and then rely on `raft::handle_t::get_internal_stream()` to pick up the right cuda stream. Refer to the section on [CUDA Resources](#cuda-resources) and the section on [Threading](#TBD) for more details. TIP: use `raft::handle_t::get_num_internal_streams` to know how many such streams are at your disposal.

## Threading Model

With the exception of the `raft::handle_t`, cuML algorithms should maintain thread-safety and are, in general, assumed to be single threaded. This means they should be able to be called from multiple host threads so long as different instances of `raft::handle_t` are used.

Exceptions are made for algorithms that can take advantage of multiple CUDA streams within multiple host threads in order to oversubscribe or increase occupancy on a single GPU. In these cases, the use of multiple host threads within cuML algorithms should be used only to maintain concurrency of the underlying CUDA streams. Multiple host threads should be used sparingly, be bounded, and should steer clear of performing CPU-intensive computations.

A good example of an acceptable use of host threads within a cuML algorithm might look like the following:

```
handle.sync_stream();
int n_streams = handle.get_num_internal_streams();

#pragma omp parallel for num_threads(n_threads)
for(int i = 0; i < n; i++) {
  int thread_num = omp_get_thread_num() % n_threads;
  cudaStream_t s = handle.get_stream_from_stream_pool(thread_num);
  ... possible light cpu pre-processing ...
  my_kernel1<<<b, tpb, 0, s>>>(...);
  ...
  ... some possible async d2h / h2d copies ...
  my_kernel2<<<b, tpb, 0, s>>>(...);
  ...
  handle.sync_stream(s);
  ... possible light cpu post-processing ...
}
```

In the example above, if there is no CPU pre-processing at the beginning of the for-loop, an event can be registered in each of the streams within the for-loop to make them wait on the stream from the handle. If there is no CPU post-processing at the end of each for-loop iteration, `handle.sync_stream(s)` can be replaced with a single `handle.sync_stream_pool()` after the for-loop.

To avoid compatibility issues between different threading models, the only threading programming allowed in cuML is OpenMP. Though cuML's build enables OpenMP by default, cuML algorithms should still function properly even when OpenMP has been disabled. If the CPU pre- and post-processing were not needed in the example above, OpenMP would not be needed.

The use of threads in third-party libraries is allowed, though they should still avoid depending on a specific OpenMP runtime.

## Public cuML interface

### Terminology

We have the following supported APIs:
1. Core cuML interface aka stateless C++ API aka C++ API aka `libcuml++.so`
2. Stateful convenience C++ API - wrapper around core API (WIP)
3. C API - wrapper around core API aka `libcuml.so`

### Motivation

Our C++ API is stateless for two main reasons:
1. To ease the serialization of an ML algorithm's state information (model, hyper-params, etc), enabling features such as easy pickling in the python layer.
2. To easily provide a proper C API for interfacing with languages that can't consume C++ APIs directly.

Thus, this section lays out guidelines for managing state along the API of cuML.

### General guideline

As mentioned before, functions exposed via the C++ API must be stateless. Things that are OK to be exposed on the interface:
1. Any [POD](https://en.wikipedia.org/wiki/Passive_data_structure) - see [std::is_pod](https://en.cppreference.com/w/cpp/types/is_pod) as a reference for C++11 POD types.
2. `raft::handle_t` - since it stores GPU-related state which has nothing to do with the model/algo state. If you're working on a C-binding, use `cumlHandle_t` ([reference](../../cpp/src/cuML_api.h)) instead.
3. Pointers to POD types (explicitly putting it out, even though it can be considered as a POD).

Internal to the C++ API, these stateless functions are free to use their own temporary classes, as long as they are not exposed on the interface.

### Stateless C++ API

Using the Decision Tree Classifier algorithm as an example, the following way of exposing its API would be wrong according to the guidelines in this section, since it exposes a non-POD C++ class object in the C++ API:

```cpp
template <typename T>
class DecisionTreeClassifier {
  TreeNode<T>* root;
  DTParams params;
  const raft::handle_t &handle;
public:
  DecisionTreeClassifier(const raft::handle_t &handle, DTParams& params, bool verbose=false);
  void fit(const T *input, int n_rows, int n_cols, const int *labels);
  void predict(const T *input, int n_rows, int n_cols, int *predictions);
};

void decisionTreeClassifierFit(const raft::handle_t &handle, const float *input, int n_rows,
                               int n_cols, const int *labels, DecisionTreeClassifier<float> *model,
                               DTParams params, bool verbose=false);
void decisionTreeClassifierPredict(const raft::handle_t &handle, const float* input,
                                   DecisionTreeClassifier<float> *model, int n_rows, int n_cols,
                                   int* predictions, bool verbose=false);
```

An alternative correct way to expose this
could be:

```cpp
// NOTE: this example assumes that TreeNode and DTParams are the model/state that need to be stored
// and passed between fit and predict methods
template <typename T>
struct TreeNode { /* nested tree-like data structure, but written as a POD! */ };
struct DTParams { /* hyper-params for building DT */ };
typedef TreeNode<float> TreeNodeF;
typedef TreeNode<double> TreeNodeD;

void decisionTreeClassifierFit(const raft::handle_t &handle, const float *input, int n_rows,
                               int n_cols, const int *labels, TreeNodeF *&root, DTParams params,
                               bool verbose=false);
void decisionTreeClassifierPredict(const raft::handle_t &handle, const double* input, int n_rows,
                                   int n_cols, const TreeNodeD *root, int* predictions,
                                   bool verbose=false);
```

The above example understates the complexity involved with exposing a tree-like data structure across the interface! However, this example should be simple enough to drive the point across.

### Other functions on state

These guidelines also mean that it is the responsibility of the C++ API to expose methods to load and store (aka marshalling) such a data structure. Further continuing the Decision Tree Classifier example, the following methods could achieve this:

```cpp
void storeTree(const TreeNodeF *root, std::ostream &os);
void storeTree(const TreeNodeD *root, std::ostream &os);
void loadTree(TreeNodeF *&root, std::istream &is);
void loadTree(TreeNodeD *&root, std::istream &is);
```

It is also worth noting that for algorithms such as the members of GLM, where models consist of an array of weights and are therefore easy to manipulate directly by the users, such custom load/store methods might not be explicitly needed.

### C API

Following the guidelines outlined above will ease the process of "C-wrapping" the C++ API. Refer to [DBSCAN](../../cpp/src/dbscan/dbscan_api.h) as an example on how to properly wrap the C++ API with a C-binding. In short:

1. 
Use only C compatible types or objects that can be passed as opaque handles (like `cumlHandle_t`).
2. Using templates is fine if those can be instantiated from a specialized C++ function with `extern "C"` linkage.
3. Expose custom create/load/store/destroy methods, if the model is more complex than an array of parameters (eg: Random Forest).

One possible way of working with such exposed states from the C++ layer is shown in a sample repo [here](https://github.com/teju85/managing-state-cuml).

#### C API Header Files

With the exception of `cumlHandle.h|cpp`, all C-API headers and source files end with the suffix `*_api`. Any file ending in `*_api` should not be included from the C++ API. Incorrectly including `cuml_api.h` in the C++ API will generate the error:

```
This header is only for the C-API and should not be included from the C++ API.
```

If this error is shown during compilation, there is an issue with how the `#include` statements have been set up. To debug the issue, run `./build.sh cppdocs` and open the page `cpp/build/html/cuml__api_8h.html` in a browser. This will show which files directly and indirectly include this file. Only files ending in `*_api` or `cumlHandle` should include this header.

### Stateful C++ API

This scikit-learn-esque C++ API should always be a wrapper around the stateless C++ API, NEVER the other way around. The design discussion about the right way to expose such a wrapper around `libcuml++.so` is [still going on](https://github.com/rapidsai/cuml/issues/456), so stay tuned for more details.

### File naming convention

1. An ML algorithm `<algo>` is to be contained inside the folder named `src/<algo>`.
2. `<algo>.hpp` and `<algo>.[cpp|cu]` contain C++ API declarations and definitions respectively.
3. `<algo>_api.h` and `<algo>_api.cpp` contain declarations and definitions respectively for C binding.

## Coding style

## Code format

### Introduction

cuML relies on `clang-format` to enforce code style across all C++ and CUDA source code. The coding style is based on the [Google style guide](https://google.github.io/styleguide/cppguide.html#Formatting). The only digressions from this style are the following:

1. Do not split empty functions/records/namespaces.
2. Two-space indentation everywhere, including the line continuations.
3. Disable reflowing of comments.

The reasons behind these deviations from the Google style guide are given in comments [here](../../cpp/.clang-format).

### How is the check done?

All formatting checks are done by this python script: [run-clang-format.py](../../cpp/scripts/run-clang-format.py), which is effectively a wrapper over `clang-format`. An error is raised if the code diverges from the format suggested by clang-format. It is expected that developers run this script to detect and fix formatting violations before creating a PR.

#### As part of CI

[run-clang-format.py](../../cpp/scripts/run-clang-format.py) is executed as part of our CI tests. If there are any formatting violations, the PR author is expected to fix those to get CI passing. Steps needed to fix the formatting violations are described in the subsequent sub-section.

#### Manually

Developers can also manually (or set up this command as part of a git pre-commit hook) run this check by executing:

```bash
python ./cpp/scripts/run-clang-format.py
```

from the root of the cuML repository.

### How to know the formatting violations?

When there are formatting errors, [run-clang-format.py](../../cpp/scripts/run-clang-format.py) prints a `diff` command, showing where there are formatting differences. Unfortunately, unlike `flake8`, `clang-format` does NOT print descriptions of the violations, but instead directly formats the code. So, the only way currently to know about formatting differences is to run the diff command as suggested by this script against each violating source file.

### How to fix the formatting violations?
When there are formatting violations, [run-clang-format.py](../../cpp/scripts/run-clang-format.py) prints, at the end, the exact command developers can run to fix them. This is the easiest way to fix formatting errors. [This screencast](https://asciinema.org/a/287367) shows how developers can check for formatting violations in their branches and also how to fix those, before sending out PRs.

In short, to bulk-fix all the formatting violations, execute the following command from the root of the cuML repository:

```bash
python ./cpp/scripts/run-clang-format.py -inplace
```

### clang-format version?

To avoid spurious code style violations we specify the exact clang-format version required, currently `8.0.0`. This is enforced by the [run-clang-format.py](../../cpp/scripts/run-clang-format.py) script itself. Refer [here](../../cpp/README.md#dependencies) for the list of build-time dependencies.

### Additional scripts

Along with clang-format, there are also the include checker and copyright checker scripts for checking style, which can be run as part of CI as well as manually.

#### #include style

[include_checker.py](../../cpp/scripts/include_checker.py) is used to enforce the include style as follows:
1. `#include "..."` should be used for referencing local files only. It is acceptable to be used for referencing files in a sub-folder/parent-folder of the same algorithm, but should never be used to include files in other algorithms or between algorithms and the primitives or other dependencies.
2. `#include <...>` should be used for referencing everything else.

Manually, run the following to bulk-fix include style issues:

```bash
python ./cpp/scripts/include_checker.py --inplace [cpp/include cpp/src cpp/src_prims cpp/test ... list of folders which you want to fix]
```

#### Copyright header

[copyright.py](../../ci/checks/copyright.py) checks the copyright header for all git-modified files.

Manually, you can run the following to bulk-fix the header if only the years need to be updated:

```bash
python ./ci/checks/copyright.py --update-current-year
```

Keep in mind that this only applies to files tracked by git and having been modified.

## Error handling

Call CUDA APIs via the provided helper macros `RAFT_CUDA_TRY`, `RAFT_CUBLAS_TRY` and `RAFT_CUSOLVER_TRY`. These macros take care of checking the return values of the used API calls and generate an exception when the command is not successful. If you need to avoid an exception, e.g. inside a destructor, use `RAFT_CUDA_TRY_NO_THROW`, `RAFT_CUBLAS_TRY_NO_THROW` and `RAFT_CUSOLVER_TRY_NO_THROW` (currently not available, see https://github.com/rapidsai/cuml/issues/229). These macros log the error but do not throw an exception.

## Logging

### Introduction

Anything and everything about logging is defined inside [logger.hpp](../../cpp/include/cuml/common/logger.hpp). It uses [spdlog](https://github.com/gabime/spdlog) underneath, but this information is transparent to all.

### Usage

```cpp
#include <cuml/common/logger.hpp>

// Inside your method or function, use any of these macros
CUML_LOG_TRACE("Hello %s!", "world");
CUML_LOG_DEBUG("Hello %s!", "world");
CUML_LOG_INFO("Hello %s!", "world");
CUML_LOG_WARN("Hello %s!", "world");
CUML_LOG_ERROR("Hello %s!", "world");
CUML_LOG_CRITICAL("Hello %s!", "world");
```

### Changing logging level

There are 7 logging levels with each successive level becoming quieter:
1. CUML_LEVEL_TRACE
2. CUML_LEVEL_DEBUG
3. CUML_LEVEL_INFO
4. CUML_LEVEL_WARN
5. CUML_LEVEL_ERROR
6. CUML_LEVEL_CRITICAL
7. 
CUML_LEVEL_OFF

Pass one of these as per your needs into the `setLevel()` method as follows:

```cpp
ML::Logger::get.setLevel(CUML_LEVEL_WARN);
// From now onwards, this will print only WARN and above kind of messages
```

### Changing logging pattern

Pass the [format string](https://github.com/gabime/spdlog/wiki/3.-Custom-formatting) as follows in order to use a different logging pattern than the default:

```cpp
ML::Logger::get.setPattern(YourFavoriteFormat);
```

One can also use the corresponding `getPattern()` method to know the current format.

### Temporarily changing the logging pattern

Sometimes, we need to temporarily change the log pattern (eg: for reporting decision tree structure). This can be achieved in a RAII-like approach as follows:

```cpp
{
  PatternSetter _(MyNewTempFormat);
  // new log format is in effect from here onwards
  doStuff();
  // once the above temporary object goes out-of-scope, the old format will be restored
}
```

### Tips

* Do NOT end your logging messages with a newline! It is automatically added by spdlog.
* `CUML_LOG_TRACE()` is by default not compiled due to the `CUML_ACTIVE_LEVEL` macro setup, for performance reasons. If you need it to be enabled, change this macro accordingly at compilation time.

## Documentation

All external interfaces need to have complete [doxygen](http://www.doxygen.nl) API documentation. This is also recommended for internal interfaces.

## Testing and Unit Testing

TODO: Add this

## Device and Host memory allocations

To enable `libcuml.so` users to control how memory for temporary data is allocated, allocate device memory using the allocator provided:

```cpp
template<typename T>
void foo(const raft::handle_t& h, cudaStream_t stream, ... )
{
  T* temp_h = h.get_device_allocator()->allocate(n*sizeof(T), stream);
  ...
  h.get_device_allocator()->deallocate(temp_h, n*sizeof(T), stream);
}
```

The same rule applies to larger amounts of host heap memory:

```cpp
template<typename T>
void foo(const raft::handle_t& h, cudaStream_t stream, ... )
{
  T* temp_h = h.get_host_allocator()->allocate(n*sizeof(T), stream);
  ...
  h.get_host_allocator()->deallocate(temp_h, n*sizeof(T), stream);
}
```

Small host memory heap allocations, e.g. as internally done by STL containers, are fine, e.g. an `std::vector` managing only a handful of integers.

Both the Host and the Device Allocators might allow asynchronous stream ordered allocation and deallocation. This can provide significant performance benefits, so a stream always needs to be specified when allocating or deallocating (see [Asynchronous operations and stream ordering](#asynchronous-operations-and-stream-ordering)). `ML::deviceAllocator` returns pinned device memory on the current device, while `ML::hostAllocator` returns host memory. A user of cuML can write customized allocators and pass them into cuML. If a cuML user does not provide custom allocators, default allocators will be used. For `ML::deviceAllocator` the default is to use `cudaMalloc`/`cudaFree`. For `ML::hostAllocator` the default is to use `cudaMallocHost`/`cudaFreeHost`.

There are two simple container classes compatible with the allocator interface: `MLCommon::device_buffer` available in `src_prims/common/device_buffer.hpp` and `MLCommon::host_buffer` available in `src_prims/common/host_buffer.hpp`. These allow one to follow the [RAII idiom](https://en.wikipedia.org/wiki/Resource_acquisition_is_initialization) to avoid resource leaks and enable exception-safe code. These containers also allow asynchronous allocation and deallocation using the `resize` and `release` member functions:

```cpp
template<typename T>
void foo(const raft::handle_t& h, ..., cudaStream_t stream )
{
  ...
  MLCommon::device_buffer<T> temp( h.get_device_allocator(), stream, 0 );
  temp.resize(n, stream);
  kernelA<<<grid, block, 0, stream>>>(..., temp.data(), ...);
  kernelB<<<grid, block, 0, stream>>>(..., temp.data(), ...);
  temp.release(stream);
}
```

The motivation for `MLCommon::host_buffer` and `MLCommon::device_buffer` over using `std::vector` or `thrust::device_vector` (which would require thrust 1.9.4 or later) is to enable exception-safe asynchronous allocation and deallocation following stream semantics with an explicit interface, while avoiding the overhead of implicitly initializing the underlying allocation.

To use `ML::hostAllocator` with an STL container, the header `src/common/allocatorAdapter.hpp` provides `ML::stdAllocatorAdapter`:

```cpp
template<typename T>
void foo(const raft::handle_t& h, ..., cudaStream_t stream )
{
  ...
  std::vector<T, ML::stdAllocatorAdapter<T> > temp( n, val, ML::stdAllocatorAdapter<T>(h.get_host_allocator(), stream) );
  ...
}
```

If thrust 1.9.4 or later is available for use in cuML, a similar allocator can be provided for `thrust::device_vector`.

### <a name="allocationsthrust"></a>Using Thrust

To ensure that thrust algorithms allocate temporary memory via the provided device memory allocator, use the `ML::thrustAllocatorAdapter` available in `src/common/allocatorAdapter.hpp` with the `thrust::cuda::par` execution policy:

```cpp
void foo(const raft::handle_t& h, ..., cudaStream_t stream )
{
  ML::thrustAllocatorAdapter alloc( h.get_device_allocator(), stream );
  auto execution_policy = thrust::cuda::par(alloc).on(stream);
  thrust::for_each(execution_policy, ... );
}
```

The header `src/common/allocatorAdapter.hpp` also provides a helper function to create an execution policy:

```cpp
void foo(const raft::handle_t& h, ... , cudaStream_t stream )
{
  auto execution_policy = ML::thrust_exec_policy(h.get_device_allocator(), stream);
  thrust::for_each(execution_policy->on(stream), ... );
}
```

## Asynchronous operations and stream ordering

All ML algorithms should be as asynchronous as possible, avoiding the use of the default stream (aka the NULL or `0` stream). Implementations that require only one CUDA stream should use the stream from `raft::handle_t`:

```cpp
void foo(const raft::handle_t& h, ...)
{
  cudaStream_t stream = h.get_stream();
}
```

When multiple streams are needed, e.g. to manage a pipeline, use the internal streams available in `raft::handle_t` (see [CUDA Resources](#cuda-resources)). If multiple streams are used, all operations still must be ordered according to `raft::handle_t::get_stream()`. Before any operation in any of the internal CUDA streams is started, all previous work in `raft::handle_t::get_stream()` must have completed. Any work enqueued in `raft::handle_t::get_stream()` after a cuML function returns should not start before all work enqueued in the internal streams has completed. E.g. if a cuML algorithm is called like this:

```cpp
void foo(const double* const srcdata, double* const result)
{
  cudaStream_t stream;
  CUDA_RT_CALL( cudaStreamCreate( &stream ) );
  raft::handle_t raftHandle( stream );
  ...
  RAFT_CUDA_TRY( cudaMemcpyAsync( srcdata, h_srcdata.data(), n*sizeof(double), cudaMemcpyHostToDevice, stream ) );
  ML::algo(raftHandle, dopredict, srcdata, result, ... );
  RAFT_CUDA_TRY( cudaMemcpyAsync( h_result.data(), result, m*sizeof(int), cudaMemcpyDeviceToHost, stream ) );
  ...
}
```

then no work in any stream should start in `ML::algo` before the `cudaMemcpyAsync` in `stream` launched before the call to `ML::algo` is done. And all work in all streams used in `ML::algo` should be done before the `cudaMemcpyAsync` in `stream` launched after the call to `ML::algo` starts. This can be ensured by introducing inter-stream dependencies with CUDA events and `cudaStreamWaitEvent`.
For convenience, the header `raft/core/handle.hpp` provides the class `raft::stream_syncer`, which in its constructor lets all `raft::handle_t` internal CUDA streams wait on `raft::handle_t::get_stream()`, and in its destructor lets `raft::handle_t::get_stream()` wait on all work enqueued in the `raft::handle_t` internal CUDA streams. The intended use would be to create a `raft::stream_syncer` object as the first thing in an entry function of the public cuML API:

```cpp
void cumlAlgo(const raft::handle_t& handle, ...)
{
  raft::streamSyncer _(handle);
}
```

This ensures the stream ordering behavior described above.

### Using Thrust

To ensure that thrust algorithms are executed in the intended stream, the `thrust::cuda::par` execution policy should be used (see [Using Thrust](#allocationsthrust) in [Device and Host memory allocations](#device-and-host-memory-allocations)).

## CUDA Resources

Do not create reusable CUDA resources directly in implementations of ML algorithms. Instead, use the existing resources in `raft::handle_t` to avoid constant creation and deletion of reusable resources such as CUDA streams, CUDA events or library handles. Please file a feature request if a resource handle is missing in `raft::handle_t`. The resources can be obtained like this:

```cpp
void foo(const raft::handle_t& h, ...)
{
  cublasHandle_t cublasHandle = h.get_cublas_handle();
  const int num_streams = h.get_num_internal_streams();
  const int stream_idx = ...
  cudaStream_t stream = h.get_internal_stream(stream_idx);
  ...
}
```

The example below shows one way to create `nStreams` internal cuda streams, which can later be used by the algos inside cuML. For a full working example of how to use internal streams to schedule work on a single GPU, the reader is further referred to [this PR](https://github.com/rapidsai/cuml/pull/1015). In this PR, the internal streams inside `raft::handle_t` are used to schedule more work onto a GPU for Random Forest building.

```cpp
int main(int argc, char** argv)
{
  int nStreams = argc > 1 ? atoi(argv[1]) : 0;
  raft::handle_t handle(nStreams);
  foo(handle, ...);
}
```

## Multi-GPU

The multi-GPU paradigm of cuML is **O**ne **P**rocess per **G**PU (OPG). Each algorithm should be implemented in a way that it can run with a single GPU without any specific dependencies on a particular communication library. A multi-GPU implementation should use the methods offered by the class `raft::comms::comms_t` from `raft/core/comms.hpp` for inter-rank/GPU communication. It is the responsibility of the user of cuML to create an initialized instance of `raft::comms::comms_t`.

E.g. with a CUDA-aware MPI, a cuML user could use code like this to inject an initialized instance of `raft::comms::mpi_comms` into a `raft::handle_t`:

```cpp
#include <mpi.h>

#include <raft/core/handle.hpp>
#include <raft/comms/mpi_comms.hpp>
#include <mlalgo/mlalgo.hpp>
...
int main(int argc, char * argv[])
{
  MPI_Init(&argc, &argv);
  int rank = -1;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  int local_rank = -1;
  {
    MPI_Comm local_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, rank, MPI_INFO_NULL, &local_comm);
    MPI_Comm_rank(local_comm, &local_rank);
    MPI_Comm_free(&local_comm);
  }
  cudaSetDevice(local_rank);

  MPI_Comm raft_mpi_comms;
  MPI_Comm_dup(MPI_COMM_WORLD, &raft_mpi_comms);

  {
    raft::handle_t raftHandle;
    initialize_mpi_comms(raftHandle, raft_mpi_comms);
    ...
    ML::mlalgo(raftHandle, ... );
  }

  MPI_Comm_free(&raft_mpi_comms);
  MPI_Finalize();
  return 0;
}
```

A cuML developer can assume the following:
* An instance of `raft::comms::comms_t` was correctly initialized.
* All processes that are part of `raft::comms::comms_t` call into the ML algorithm cooperatively.

The initialized instance of `raft::comms::comms_t` can be accessed from the `raft::handle_t` instance:

```cpp
void foo(const raft::handle_t& h, ...)
{
  const MLCommon::cumlCommunicator& communicator = h.get_comms();
  const int rank = communicator.get_rank();
  const int size = communicator.get_size();
  ...
}
```
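Beyond `get_rank()` and `get_size()`, the communicator exposes collectives such as an allreduce across all ranks (method names vary by raft version; this is only a model of the contract, not the raft API). A plain host-side illustration of the allreduce(SUM) contract — every rank contributes one value and every rank observes the identical global result — with no real communication involved:

```cpp
#include <numeric>
#include <vector>

// Toy model of allreduce(SUM): index i plays the role of rank i.
// After the "collective", every rank holds the same global sum.
std::vector<double> toy_allreduce_sum(const std::vector<double>& per_rank_value) {
  double total = std::accumulate(per_rank_value.begin(), per_rank_value.end(), 0.0);
  return std::vector<double>(per_rank_value.size(), total);
}
```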

---

**File:** `rapidsai_public_repos/cuml/python/pyproject.toml`

# Copyright (c) 2022, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[build-system]
requires = [
    "cmake>=3.26.4",
    "cuda-python>=11.7.1,<12.0a0",
    "cython>=3.0.0",
    "ninja",
    "pylibraft==23.12.*",
    "rmm==23.12.*",
    "scikit-build>=0.13.1",
    "setuptools",
    "treelite==3.9.1",
    "treelite_runtime==3.9.1",
    "wheel",
] # This list was generated by `rapids-dependency-file-generator`. To make changes, edit ../dependencies.yaml and run `rapids-dependency-file-generator`.
build-backend = "setuptools.build_meta"

[tool.pytest.ini_options]
markers = [
    "unit: Quickest tests focused on accuracy and correctness",
    "quality: More intense tests than unit with increased runtimes",
    "stress: Longest running tests focused on stressing hardware compute resources",
    "mg: Multi-GPU tests",
    "memleak: Test that checks for memory leaks",
    "no_bad_cuml_array_check: Test that should not check for bad CumlArray uses",
]
testpaths = "cuml/tests"
filterwarnings = [
    "error::FutureWarning:cuml[.*]", # Catch uses of deprecated positional args in testing
    "ignore:[^.]*ABCs[^.]*:DeprecationWarning:patsy[.*]",
    "ignore:(.*)alias(.*):DeprecationWarning:hdbscan[.*]",
]

[project]
name = "cuml"
dynamic = ["version"]
description = "cuML - RAPIDS ML Algorithms"
readme = { file = "README.md", content-type = "text/markdown" }
authors = [
    { name = "NVIDIA Corporation" },
]
license = { text = "Apache 2.0" }
requires-python = ">=3.9"
dependencies = [
    "cudf==23.12.*",
    "cupy-cuda11x>=12.0.0",
    "dask-cuda==23.12.*",
    "dask-cudf==23.12.*",
    "joblib>=0.11",
    "numba>=0.57",
    "raft-dask==23.12.*",
    "rapids-dask-dependency==23.12.*",
    "scipy>=1.8.0",
    "treelite==3.9.1",
    "treelite_runtime==3.9.1",
] # This list was generated by `rapids-dependency-file-generator`. To make changes, edit ../dependencies.yaml and run `rapids-dependency-file-generator`.
classifiers = [
    "Intended Audience :: Developers",
    "Programming Language :: Python",
    "Programming Language :: Python :: 3.9",
    "Programming Language :: Python :: 3.10",
]

[project.optional-dependencies]
test = [
    "dask-glm==0.3.0",
    "dask-ml",
    "hdbscan @ git+https://github.com/scikit-learn-contrib/hdbscan.git@master",
    "hypothesis>=6.0,<7",
    "nltk",
    "numpydoc",
    "pynndescent==0.5.8",
    "pytest",
    "pytest-benchmark",
    "pytest-cases",
    "pytest-cov",
    "pytest-xdist",
    "scikit-learn==1.2",
    "seaborn",
    "statsmodels",
    "umap-learn==0.5.3",
] # This list was generated by `rapids-dependency-file-generator`. To make changes, edit ../dependencies.yaml and run `rapids-dependency-file-generator`.

[project.urls]
Homepage = "https://github.com/rapidsai/cuml"
Documentation = "https://docs.rapids.ai/api/cuml/stable/"

[tool.setuptools]
license-files = ["LICENSE"]

[tool.setuptools.dynamic]
version = {file = "cuml/VERSION"}

[tool.black]
line-length = 79
target-version = ["py39"]
include = '\.py?$'
force-exclude = '''
  _stop_words\.py
| _version\.py
| versioneer\.py
| /(
    \.eggs
  | \.git
  | \.hg
  | \.mypy_cache
  | \.tox
  | \.venv
  | _build
  | _thirdparty
  | buck-out
  | build
  | dist
  | thirdparty
)/
'''

---

**File:** `rapidsai_public_repos/cuml/python/.flake8`

# Copyright (c) 2018-2023, NVIDIA CORPORATION.

[flake8]
filename = *.py, *.pyx, *.pxd
exclude = *.egg, .git, __pycache__, _thirdparty, build/, cpp, docs, thirdparty, versioneer.py

# Cython Rules ignored:
# E999: invalid syntax (works for Python, not Cython)
# E225: Missing whitespace around operators (breaks cython casting syntax like <int>)
# E226: Missing whitespace around arithmetic operators (breaks cython pointer syntax like int*)
# E227: Missing whitespace around bitwise or shift operator (Can also break casting syntax)
# W503: line break before binary operator (breaks lines that start with a pointer)
# W504: line break after binary operator (breaks lines that end with a pointer)

extend-ignore =
    # handled by black
    E501, W503, E203
    # imported but unused
    F401
    # redefinition of unused
    F811

per-file-ignores =
    # imported but unused
    __init__.py: F401
    # TODO: Identify root cause. I suspect that we used pycodestyle<2.9.0
    # previously, which means E275 was not previously caught this extensively.
    *.py: E275
    # TODO: Identify root cause for why this new ignore switch is needed.
    batched_lbfgs.py: E501
    # Cython Exclusions
    *.pyx: E999, E225, E226, E227, W503, W504
    *.pxd: E999, E225, E226, E227, W503, W504
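The `per-file-ignores` entries pair filename glob patterns with error codes. A rough model of how such a glob-based lookup behaves (illustrative only — this is not flake8's actual implementation, and it covers only a subset of the entries above):

```python
from fnmatch import fnmatch

PER_FILE_IGNORES = {
    "__init__.py": {"F401"},
    "*.pyx": {"E999", "E225", "E226", "E227", "W503", "W504"},
    "*.pxd": {"E999", "E225", "E226", "E227", "W503", "W504"},
}

def ignored_codes(filename: str) -> set:
    """Union of the codes from every pattern that matches the filename."""
    codes = set()
    for pattern, pattern_codes in PER_FILE_IGNORES.items():
        if fnmatch(filename, pattern):
            codes |= pattern_codes
    return codes
```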

---

**File:** `rapidsai_public_repos/cuml/python/CMakeLists.txt`

# =============================================================================
# Copyright (c) 2022-2023 NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
# in compliance with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing permissions and limitations under
# the License.
# =============================================================================

cmake_minimum_required(VERSION 3.26.4 FATAL_ERROR)

include(../fetch_rapids.cmake)

set(CUML_VERSION 23.12.00)

option(CUML_CPU "Build only cuML CPU Python components." OFF)

set(language_list "C;CXX")

if(NOT CUML_CPU)
  # We always need CUDA for cuML GPU because the raft dependency brings in a
  # header-only cuco dependency that enables CUDA unconditionally.
  include(rapids-cuda)
  rapids_cuda_init_architectures(cuml-python)
  list(APPEND language_list "CUDA")
endif()

project(
  cuml-python
  VERSION ${CUML_VERSION}
  LANGUAGES # TODO: Building Python extension modules via the python_extension_module requires the C
            # language to be enabled here. The test project that is built in scikit-build to verify
            # various linking options for the python library is hardcoded to build with C, so until
            # that is fixed we need to keep C.
            ${language_list}
)

################################################################################
# - User Options  --------------------------------------------------------------

option(CUML_UNIVERSAL "Build all cuML Python components." ON)
option(FIND_CUML_CPP "Search for existing CUML C++ installations before defaulting to local files" OFF)
option(CUML_BUILD_WHEELS "Whether this build is generating a Python wheel." OFF)
option(SINGLEGPU "Disable all mnmg components and comms libraries" OFF)
set(CUML_RAFT_CLONE_ON_PIN OFF)

# todo: use CMAKE_MESSAGE_CONTEXT for prefix for logging
# https://github.com/rapidsai/cuml/issues/4843
message(VERBOSE "CUML_PY: Build only cuML CPU Python components.: ${CUML_CPU}")
message(VERBOSE "CUML_PY: Searching for existing CUML C++ installations before defaulting to local files: ${FIND_CUML_CPP}")
message(VERBOSE "CUML_PY: Disabling all mnmg components and comms libraries: ${SINGLEGPU}")

set(CUML_ALGORITHMS "ALL" CACHE STRING "Choose which algorithms are built in cuML. Can specify individual algorithms or groups in a semicolon-separated list.")

set(CUML_CPP_TARGET "cuml++")
set(CUML_CPP_SRC "../cpp")

################################################################################
# - Process User Options  ------------------------------------------------------

# If the user requested it, we attempt to find cuml.
if(FIND_CUML_CPP)
  # We need to call get_treelite explicitly because we need the correct
  # ${TREELITE_LIBS} definition for RF
  include(rapids-cpm)
  include(rapids-export)
  rapids_cpm_init()
  include(../cpp/cmake/thirdparty/get_treelite.cmake)
  find_package(cuml ${CUML_VERSION} REQUIRED)
else()
  set(cuml_FOUND OFF)
endif()

include(rapids-cython)

if(CUML_BUILD_WHEELS)
  set(CUML_PYTHON_TREELITE_TARGET treelite::treelite_static)
else()
  set(CUML_PYTHON_TREELITE_TARGET treelite::treelite)
endif()

if(NOT ${CUML_CPU})
  if(NOT cuml_FOUND)
    set(BUILD_CUML_TESTS OFF)
    set(BUILD_PRIMS_TESTS OFF)
    set(BUILD_CUML_C_LIBRARY OFF)
    set(BUILD_CUML_EXAMPLES OFF)
    set(BUILD_CUML_BENCH OFF)
    set(BUILD_CUML_PRIMS_BENCH OFF)
    set(CUML_EXPORT_TREELITE_LINKAGE ON)

    set(_exclude_from_all "")
    if(CUML_BUILD_WHEELS)
      # Statically link dependencies if building wheels
      set(CUDA_STATIC_RUNTIME ON)
      set(CUML_USE_RAFT_STATIC ON)
      set(CUML_USE_FAISS_STATIC ON)
      set(CUML_USE_TREELITE_STATIC ON)
      set(CUML_USE_CUMLPRIMS_MG_STATIC ON)
      # Don't install the static libs into wheels
      set(CUML_EXCLUDE_RAFT_FROM_ALL ON)
      set(RAFT_EXCLUDE_FAISS_FROM_ALL ON)
      set(CUML_EXCLUDE_TREELITE_FROM_ALL ON)
      set(CUML_EXCLUDE_CUMLPRIMS_MG_FROM_ALL ON)

      # Don't install the cuML C++ targets into wheels
      set(_exclude_from_all EXCLUDE_FROM_ALL)
    endif()

    add_subdirectory(../cpp cuml-cpp ${_exclude_from_all})

    set(cython_lib_dir cuml)
    install(TARGETS ${CUML_CPP_TARGET} DESTINATION ${cython_lib_dir})
  endif()
endif()

if(CUML_CPU)
  set(CUML_UNIVERSAL OFF)
  set(SINGLEGPU ON)
  set(CUML_ALGORITHMS "linearregression")
  list(APPEND CUML_ALGORITHMS "pca")
  list(APPEND CUML_ALGORITHMS "tsvd")
  list(APPEND CUML_ALGORITHMS "elasticnet")
  list(APPEND CUML_ALGORITHMS "logisticregression")
  list(APPEND CUML_ALGORITHMS "ridge")
  list(APPEND CUML_ALGORITHMS "lasso")
  list(APPEND CUML_ALGORITHMS "umap")
  list(APPEND CUML_ALGORITHMS "knn")
  list(APPEND CUML_ALGORITHMS "hdbscan")
  list(APPEND CUML_ALGORITHMS "dbscan")
  list(APPEND CUML_ALGORITHMS "kmeans")

  # this won't be needed when we add CPU libcuml++ (FIL)
  set(cuml_sg_libraries "")

  list(APPEND CYTHON_FLAGS "--compile-time-env GPUBUILD=0")
else()
  set(cuml_sg_libraries cuml::${CUML_CPP_TARGET})
  set(cuml_mg_libraries cuml::${CUML_CPP_TARGET})

  list(APPEND CYTHON_FLAGS "--compile-time-env GPUBUILD=1")
endif()

if(NOT SINGLEGPU)
  include("${CUML_CPP_SRC}/cmake/thirdparty/get_cumlprims_mg.cmake")
  set(cuml_mg_libraries
    cuml::${CUML_CPP_TARGET}
    cumlprims_mg::cumlprims_mg
  )
endif()

################################################################################
# - Build Cython artifacts -----------------------------------------------------

include("${CUML_CPP_SRC}/cmake/modules/ConfigureAlgorithms.cmake")
include(cmake/ConfigureCythonAlgorithms.cmake)

if(${CUML_CPU})
  # libcuml requires metrics built if HDBSCAN is built, which is not the case
  # for cuml-cpu
  unset(metrics_algo)
endif()

message(VERBOSE "CUML_PY: Building cuML with algorithms: '${CUML_ALGORITHMS}'.")

rapids_cython_init()

add_subdirectory(cuml/common)
add_subdirectory(cuml/internals)
add_subdirectory(cuml/cluster)
add_subdirectory(cuml/datasets)
add_subdirectory(cuml/decomposition)
add_subdirectory(cuml/ensemble)
add_subdirectory(cuml/explainer)
add_subdirectory(cuml/experimental/fil)
add_subdirectory(cuml/fil)
add_subdirectory(cuml/kernel_ridge)
add_subdirectory(cuml/linear_model)
add_subdirectory(cuml/manifold)
add_subdirectory(cuml/metrics)
add_subdirectory(cuml/metrics/cluster)
add_subdirectory(cuml/neighbors)
add_subdirectory(cuml/random_projection)
add_subdirectory(cuml/solvers)
add_subdirectory(cuml/svm)
add_subdirectory(cuml/tsa)
add_subdirectory(cuml/experimental/linear_model)

if(DEFINED cython_lib_dir)
  rapids_cython_add_rpath_entries(TARGET cuml PATHS "${cython_lib_dir}")
endif()
rapidsai_public_repos/cuml/python/README.md
# cuML Python Package

This folder contains the Python and Cython code of the algorithms and ML primitives of cuML, which are distributed in the cuML Python package.

Contents:

- [cuML Python Package](#cuml-python-package)
  - [Build Configuration](#build-configuration)
  - [RAFT Integration in cuml.raft](#raft-integration-in-cumlraft)
  - [Build Requirements](#build-requirements)
  - [Python Tests](#python-tests)

### Build Configuration

The build system uses setup.py for configuration and building. cuML's setup.py can be configured through environment variables and command line arguments. The environment variables are:

| Environment variable | Possible values | Default behavior if not set | Behavior |
| --- | --- | --- | --- |
| CUDA_HOME | path/to/cuda_toolkit | Inferred from the location of `nvcc` | Optional variable to manually specify the location of the CUDA toolkit. |
| CUML_BUILD_PATH | path/to/libcuml_build_folder | Looked for in path_to_cuml_repo/cpp/build | Optional variable to manually specify the location of the libcuml++ build folder. |
| RAFT_PATH | path/to/raft | Looked for in path_to_cuml_repo/cpp/build; if not found, cloned | Optional variable to manually specify the location of the RAFT repository. |

The command line arguments (i.e. passed alongside `setup.py` when invoking, for example `setup.py --singlegpu`) are:

| Argument | Behavior |
| --- | --- |
| clean --all | Cleans all Python and Cython artifacts, including pycache folders, .cpp files resulting from cythonization, and compiled extensions. |
| --singlegpu | Builds cuML without multi-GPU algorithms. Removes the dependency on nccl, libcumlprims and ucx-py. |

### RAFT Integration in cuml.raft

RAFT's Python and Cython code is located in the [RAFT repository](https://github.com/rapidsai/raft/python). It was designed to be included in projects as opposed to being distributed by itself, so at build time **setup.py creates a symlink from cuML, located in `/python/cuml/raft/`, to the Python folder of RAFT**.

Developers who need to modify RAFT code should refer to the [RAFT Developer Guide](https://github.com/rapidsai/raft/blob/branch-23.12/docs/source/build.md) for recommendations.

To configure RAFT at build time:

1. If the environment variable `RAFT_PATH` points to the RAFT repo, then that will be used.
2. If there is a libcuml build folder that has already cloned RAFT, setup.py will use that RAFT. Its location can be configured with the environment variable CUML_BUILD_PATH.
3. If neither of the above applies, setup.py will clone RAFT and use it directly.

The RAFT Python code gets included in the cuML build and distributable artifacts as if it had always been present in the folder structure of cuML.

### Build Requirements

cuML's convenience [development yaml files](https://github.com/rapidsai/cuml/tree/branch-23.12/environments) include all dependencies required to build cuML.

To build cuML's Python package, the following dependencies are required:

- cudatoolkit version corresponding to the system CUDA toolkit
- setuptools
- cython >= 0.29, < 0.30
- numpy
- cmake >= 3.14
- cudf version matching the cuML version
- libcuml version matching the cuML version
- cupy>=7.8.0,<12.0.0a0
- joblib >=0.11

Packages required for multi-GPU algorithms*:

- libcumlprims version matching the cuML version
- ucx-py version matching the cuML version
- dask-cudf version matching the cuML version
- nccl>=2.5
- rapids-dask-dependency==23.12.*

\* this can be avoided with the `--singlegpu` argument flag.

### Python Tests

Python tests are based on the pytest library. To run them, from the `path_to_cuml/python/` folder, simply type `pytest`.
rapidsai_public_repos/cuml/python/setup.py
#
# Copyright (c) 2018-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import glob
import os
import shutil
import sys
from pathlib import Path

from setuptools import find_packages
from skbuild import setup

##############################################################################
# - Helper functions ---------------------------------------------------------


def get_cli_option(name):
    if name in sys.argv:
        print("-- Detected " + str(name) + " build option.")
        return True
    else:
        return False


def clean_folder(path):
    """
    Function to clean all Cython and Python artifacts and cache folders. It
    cleans the folder as well as its direct children recursively.

    Parameters
    ----------
    path : String
        Path to the folder to be cleaned.
    """
    shutil.rmtree(path + "/__pycache__", ignore_errors=True)

    folders = glob.glob(path + "/*/")
    for folder in folders:
        shutil.rmtree(folder + "/__pycache__", ignore_errors=True)

        clean_folder(folder)

        cython_exts = glob.glob(folder + "/*.cpp")
        cython_exts.extend(glob.glob(folder + "/*.cpython*"))
        for file in cython_exts:
            os.remove(file)


##############################################################################
# - Print of build options used by setup.py ----------------------------------

clean_artifacts = get_cli_option("clean")

##############################################################################
# - Clean target --------------------------------------------------------------

if clean_artifacts:
    print("-- Cleaning all Python and Cython build artifacts...")

    # Reset these paths since they may be deleted below
    treelite_path = False

    try:
        setup_file_path = str(Path(__file__).parent.absolute())
        shutil.rmtree(setup_file_path + "/.pytest_cache", ignore_errors=True)
        shutil.rmtree(
            setup_file_path + "/_external_repositories", ignore_errors=True
        )
        shutil.rmtree(setup_file_path + "/cuml.egg-info", ignore_errors=True)
        shutil.rmtree(setup_file_path + "/__pycache__", ignore_errors=True)

        clean_folder(setup_file_path + "/cuml")

        shutil.rmtree(setup_file_path + "/build", ignore_errors=True)
        shutil.rmtree(setup_file_path + "/_skbuild", ignore_errors=True)
        shutil.rmtree(setup_file_path + "/dist", ignore_errors=True)

    except IOError:
        pass

    # need to terminate script so cythonizing doesn't get triggered
    # unintentionally after cleanup
    sys.argv.remove("clean")

    if "--all" in sys.argv:
        sys.argv.remove("--all")

    if len(sys.argv) == 1:
        sys.exit(0)


##############################################################################
# - Python package generation -------------------------------------------------

packages = find_packages(include=["cuml*"])
setup(
    packages=packages,
    package_data={key: ["VERSION", "*.pxd"] for key in packages},
    zip_safe=False,
)
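The clean path in setup.py hinges on the small `get_cli_option` helper. A standalone sketch (the function body is copied from the file; the `sys.argv` assignment is an illustrative stand-in for a real `python setup.py clean --all` invocation) shows how flag detection works:

```python
import sys


def get_cli_option(name):
    # Same helper as in setup.py: detect (but do not consume) a CLI flag.
    if name in sys.argv:
        print("-- Detected " + str(name) + " build option.")
        return True
    else:
        return False


# Illustrative argv, as if the user ran `python setup.py clean --all`
sys.argv = ["setup.py", "clean", "--all"]
clean_artifacts = get_cli_option("clean")
print(clean_artifacts)                # → True
print(get_cli_option("--singlegpu"))  # → False
```

Note the helper only detects the flag; the clean branch removes `"clean"` and `"--all"` from `sys.argv` itself afterwards so scikit-build never sees them.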
rapidsai_public_repos/cuml/python/pytest.ini
[pytest]
markers =
    unit: Quickest tests focused on accuracy and correctness
    quality: More intense tests than unit with increased runtimes
    stress: Longest running tests focused on stressing hardware compute resources
    mg: Multi-GPU tests
    memleak: Test that checks for memory leaks
    no_bad_cuml_array_check: Test that should not check for bad CumlArray uses

testpaths =
    cuml/tests
    cuml/tests/dask
    cuml/tests/experimental
    cuml/tests/explainer
    cuml/tests/stemmer_tests

filterwarnings =
    error::FutureWarning:cuml[.*] # Catch uses of deprecated positional args in testing
    ignore:[^.]*ABCs[^.]*:DeprecationWarning:patsy[.*]
    ignore:(.*)alias(.*):DeprecationWarning:hdbscan[.*]
rapidsai_public_repos/cuml/python/LICENSE
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "{}"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright 2018 NVIDIA CORPORATION

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
rapidsai_public_repos/cuml/python/.coveragerc
# Configuration file for Python coverage tests
[run]
omit =
    cuml/test/*
plugins =
    Cython.Coverage
parallel = true
source = cuml

[report]
# Regexes for lines to exclude from consideration
exclude_lines =
    # Re-specify the `pragma: no cover` since it will be overridden by this
    # option. See the docs:
    # https://coverage.readthedocs.io/en/coverage-5.0/excluding.html#advanced-exclusion
    pragma: no cover

    # Don't complain about missing debug-only code:
    def __repr__
    if self\.debug

    # Don't complain if tests don't hit defensive assertion code:
    raise AssertionError
    raise NotImplementedError

    # Don't complain if non-runnable code isn't run:
    if 0:
    if False:
rapidsai_public_repos/cuml/python/cuml/_version.py
# Copyright (c) 2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import importlib.resources

__version__ = (
    importlib.resources.files("cuml").joinpath("VERSION").read_text().strip()
)
__git_commit__ = ""
rapidsai_public_repos/cuml/python/cuml/__init__.py
#
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from cuml.internals.base import Base, UniversalBase
from cuml.internals.available_devices import is_cuda_available

# GPU only packages
if is_cuda_available():
    import cuml.common.cuda as cuda
    from cuml.common.handle import Handle

    from cuml.cluster.dbscan import DBSCAN
    from cuml.cluster.kmeans import KMeans
    from cuml.cluster.agglomerative import AgglomerativeClustering

    from cuml.datasets.arima import make_arima
    from cuml.datasets.blobs import make_blobs
    from cuml.datasets.regression import make_regression
    from cuml.datasets.classification import make_classification

    from cuml.decomposition.incremental_pca import IncrementalPCA

    from cuml.fil.fil import ForestInference

    from cuml.ensemble.randomforestclassifier import RandomForestClassifier
    from cuml.ensemble.randomforestregressor import RandomForestRegressor

    from cuml.explainer.kernel_shap import KernelExplainer
    from cuml.explainer.permutation_shap import PermutationExplainer
    from cuml.explainer.tree_shap import TreeExplainer

    import cuml.feature_extraction
    from cuml.fil import fil

    from cuml.kernel_ridge.kernel_ridge import KernelRidge

    from cuml.linear_model.mbsgd_classifier import MBSGDClassifier
    from cuml.linear_model.mbsgd_regressor import MBSGDRegressor

    from cuml.manifold.t_sne import TSNE

    from cuml.metrics.accuracy import accuracy_score
    from cuml.metrics.cluster.adjusted_rand_index import adjusted_rand_score
    from cuml.metrics.regression import r2_score

    from cuml.model_selection import train_test_split

    from cuml.naive_bayes.naive_bayes import MultinomialNB

    from cuml.neighbors.nearest_neighbors import NearestNeighbors
    from cuml.neighbors.kernel_density import KernelDensity
    from cuml.neighbors.kneighbors_classifier import KNeighborsClassifier
    from cuml.neighbors.kneighbors_regressor import KNeighborsRegressor

    from cuml.preprocessing.LabelEncoder import LabelEncoder

    from cuml.random_projection.random_projection import (
        GaussianRandomProjection,
    )
    from cuml.random_projection.random_projection import SparseRandomProjection
    from cuml.random_projection.random_projection import (
        johnson_lindenstrauss_min_dim,
    )

    from cuml.svm import SVC
    from cuml.svm import SVR
    from cuml.svm import LinearSVC
    from cuml.svm import LinearSVR

    from cuml.tsa import stationarity
    from cuml.tsa.arima import ARIMA
    from cuml.tsa.auto_arima import AutoARIMA
    from cuml.tsa.holtwinters import ExponentialSmoothing

    from cuml.common.pointer_utils import device_of_gpu_matrix

# Universal packages
from cuml.internals.global_settings import (
    GlobalSettings,
    _global_settings_data,
)
from cuml.internals.memory_utils import (
    set_global_output_type,
    using_output_type,
)
from cuml.cluster.hdbscan import HDBSCAN
from cuml.decomposition.pca import PCA
from cuml.decomposition.tsvd import TruncatedSVD
from cuml.linear_model.linear_regression import LinearRegression
from cuml.linear_model.elastic_net import ElasticNet
from cuml.linear_model.lasso import Lasso
from cuml.linear_model.logistic_regression import LogisticRegression
from cuml.linear_model.ridge import Ridge
from cuml.manifold.umap import UMAP
from cuml.solvers.cd import CD
from cuml.solvers.sgd import SGD
from cuml.solvers.qn import QN

from cuml._version import __version__, __git_commit__


def __getattr__(name):
    if name == "global_settings":
        try:
            return _global_settings_data.settings
        except AttributeError:
            _global_settings_data.settings = GlobalSettings()
            return _global_settings_data.settings

    raise AttributeError(f"module {__name__} has no attribute {name}")


__all__ = [
    # Modules
    "common",
    "feature_extraction",
    "metrics",
    "multiclass",
    "naive_bayes",
    "preprocessing",
    "explainer",
    # Classes
    "AgglomerativeClustering",
    "ARIMA",
    "AutoARIMA",
    "Base",
    "CD",
    "cuda",
    "DBSCAN",
    "ElasticNet",
    "ExponentialSmoothing",
    "ForestInference",
    "GaussianRandomProjection",
    "Handle",
    "HDBSCAN",
    "IncrementalPCA",
    "KernelDensity",
    "KernelExplainer",
    "KernelRidge",
    "KMeans",
    "KNeighborsClassifier",
    "KNeighborsRegressor",
    "Lasso",
    "LinearRegression",
    "LinearSVC",
    "LinearSVR",
    "LogisticRegression",
    "MBSGDClassifier",
    "MBSGDRegressor",
    "NearestNeighbors",
    "PCA",
    "PermutationExplainer",
    "QN",
    "RandomForestClassifier",
    "RandomForestRegressor",
    "Ridge",
    "SGD",
    "SparseRandomProjection",
    "SVC",
    "SVR",
    "TruncatedSVD",
    "TreeExplainer",
    "TSNE",
    "UMAP",
    "UniversalBase",
    # Functions
    "johnson_lindenstrauss_min_dim",
    "make_arima",
    "make_blobs",
    "make_classification",
    "make_regression",
    "stationarity",
]
rapidsai_public_repos/cuml/python/cuml/VERSION
23.12.00
rapidsai_public_repos/cuml/python/cuml/_thirdparty/__init__.py
# Third party code, respective licenses apply

from . import sklearn
rapidsai_public_repos/cuml/python/cuml/_thirdparty/sklearn/README.md
# GPU accelerated Scikit-Learn preprocessing

This directory contains code originating from the Scikit-Learn library. The Scikit-Learn license applies accordingly (see `/thirdparty/LICENSES/LICENSE.scikit_learn`). Original authors mentioned in the code do not endorse or promote this derived work.

This work is dedicated to providing GPU accelerated tools for preprocessing. The Scikit-Learn code is slightly modified to make it possible to take common inputs used throughout cuML, such as NumPy and CuPy arrays, Pandas and cuDF dataframes, and compute the results on GPU.

The code originates from the Scikit-Learn GitHub repository: https://github.com/scikit-learn/scikit-learn.git and is based on version/branch 0.23.1.

## For developers:

When adding new preprocessors or updating existing ones, keep in mind:
- Files should be copied as-is from the scikit-learn repo (preserving the scikit-learn license text)
- Changes should be kept minimal; large portions of modified imported code should lie in the thirdparty_adapter directory
- Only well-tested, reliable accelerated preprocessing functions should be exposed in cuml.preprocessing.__init__.py
- Tests must be added for each exposed function
- Remember that a preprocessing model should always return the same datatype it received as input (NumPy, CuPy, Pandas, cuDF, Numba)
rapidsai_public_repos/cuml/python/cuml/_thirdparty/sklearn/preprocessing/_imputation.py
# Original authors from Scikit-Learn:
#          Nicolas Tresegnie <nicolas.tresegnie@gmail.com>
#          Sergey Feldman <sergeyfeldman@gmail.com>
# License: BSD 3 clause


# This code originates from the Scikit-Learn library,
# it was since modified to allow GPU acceleration.
# This code is under BSD 3 clause license.
# Authors mentioned above do not endorse or promote this production.


from ....internals import _deprecate_pos_args
from ....common.array_descriptor import CumlArrayDescriptor
from ....internals.array_sparse import SparseCumlArray
from ..utils.validation import FLOAT_DTYPES
from ..utils.validation import check_is_fitted
from cuml.internals.mixins import AllowNaNTagMixin, SparseInputTagMixin, \
    StringInputTagMixin
from ..utils.skl_dependencies import BaseEstimator, TransformerMixin
from ....thirdparty_adapters import (_get_mask,
                                     _masked_column_median,
                                     _masked_column_mean,
                                     _masked_column_mode)
from cuml.internals.safe_imports import gpu_only_import_from
import cuml
from cuml.internals.safe_imports import gpu_only_import
import numbers
import warnings
from cuml.internals.safe_imports import cpu_only_import

numpy = cpu_only_import('numpy')
np = gpu_only_import('cupy')
sparse = gpu_only_import_from('cupyx.scipy', 'sparse')


def is_scalar_nan(x):
    return bool(isinstance(x, numbers.Real) and np.isnan(x))


def _check_inputs_dtype(X, missing_values):
    if (X.dtype.kind in ("f", "i", "u") and
            not isinstance(missing_values, numbers.Real)):
        raise ValueError("'X' and 'missing_values' types are expected to be"
                         " both numerical. Got X.dtype={} and "
                         " type(missing_values)={}."
                         .format(X.dtype, type(missing_values)))


def _get_elem_at_rank(rank, data, n_negative, n_zeros):
    """Find the value in data augmented with n_zeros for the given rank"""
    if rank < n_negative:
        return data[rank]
    if rank - n_negative < n_zeros:
        return 0
    return data[rank - n_zeros]


def _get_median(data, n_zeros):
    """Compute the median of data with n_zeros additional zeros.

    This function is used to support sparse matrices; it modifies data
    in-place.
    """
    n_elems = len(data) + n_zeros
    if not n_elems:
        return np.nan
    n_negative = (data < 0).sum()
    middle, is_odd = divmod(n_elems, 2)
    data = np.sort(data)

    if is_odd:
        return _get_elem_at_rank(middle, data, n_negative, n_zeros)

    elm1 = _get_elem_at_rank(middle - 1, data, n_negative, n_zeros)
    elm2 = _get_elem_at_rank(middle, data, n_negative, n_zeros)
    return (elm1 + elm2) / 2.


def _most_frequent(array, extra_value, n_repeat):
    """Compute the most frequent value in a 1d array extended with
    [extra_value] * n_repeat, where extra_value is assumed to be not part
    of the array."""
    values, counts = np.unique(array, return_counts=True)
    most_frequent_count = counts.max()
    if most_frequent_count > n_repeat:
        # tie breaking similarly to scipy.stats.mode
        value = values[counts == most_frequent_count].min()
    elif n_repeat > most_frequent_count:
        value = extra_value
    else:
        # tie breaking similarly to scipy.stats.mode
        value = min(extra_value,
                    values[counts == most_frequent_count].min())
    return value


class _BaseImputer(TransformerMixin):
    """Base class for all imputers.

    It adds automatically support for `add_indicator`.
    """

    def __init__(self, *, missing_values=np.nan, add_indicator=False):
        self.missing_values = missing_values
        self.add_indicator = add_indicator

    def _fit_indicator(self, X):
        """Fit a MissingIndicator."""
        if self.add_indicator:
            with cuml.using_output_type("cupy"):
                self.indicator_ = MissingIndicator(
                    missing_values=self.missing_values, error_on_new=False
                )
                self.indicator_.fit(X)
        else:
            self.indicator_ = None

    def _transform_indicator(self, X):
        """Compute the indicator mask.

        Note that X must be the original data as passed to the imputer
        before any imputation, since imputation may be done inplace in
        some cases.
        """
        if self.add_indicator:
            if not hasattr(self, 'indicator_'):
                raise ValueError(
                    "Make sure to call _fit_indicator before "
                    "_transform_indicator"
                )
            return self.indicator_.transform(X)

    def _concatenate_indicator(self, X_imputed, X_indicator):
        """Concatenate indicator mask with the imputed data."""
        if not self.add_indicator:
            return X_imputed

        hstack = sparse.hstack if sparse.issparse(X_imputed) else np.hstack
        if X_indicator is None:
            raise ValueError(
                "Data from the missing indicator are not provided. Call "
                "_fit_indicator and _transform_indicator in the imputer "
                "implementation."
            )

        return hstack((X_imputed, X_indicator))

    def _more_tags(self):
        return {'allow_nan': is_scalar_nan(self.missing_values)}


class SimpleImputer(_BaseImputer, BaseEstimator, SparseInputTagMixin,
                    AllowNaNTagMixin):
    """Imputation transformer for completing missing values.

    Parameters
    ----------
    missing_values : number, string, np.nan (default) or None
        The placeholder for the missing values. All occurrences of
        `missing_values` will be imputed. For pandas' dataframes with
        nullable integer dtypes with missing values, `missing_values`
        should be set to `np.nan`, since `pd.NA` will be converted to
        `np.nan`.

    strategy : string, default='mean'
        The imputation strategy.

        - If "mean", then replace missing values using the mean along
          each column. Can only be used with numeric data.
        - If "median", then replace missing values using the median along
          each column. Can only be used with numeric data.
        - If "most_frequent", then replace missing using the most frequent
          value along each column. Can be used with strings or numeric data.
        - If "constant", then replace missing values with fill_value. Can be
          used with strings or numeric data.

    fill_value : string or numerical value, default=None
        When strategy == "constant", fill_value is used to replace all
        occurrences of missing_values.
        If left to the default, fill_value will be 0 when imputing numerical
        data and "missing_value" for strings or object data types.

    verbose : integer, default=0
        Controls the verbosity of the imputer.

    copy : boolean, default=True
        If True, a copy of X will be created. If False, imputation will
        be done in-place whenever possible. Note that, in the following cases,
        a new copy will always be made, even if `copy=False`:

        - If X is not an array of floating values;
        - If X is encoded as a CSR matrix;
        - If add_indicator=True.

    add_indicator : boolean, default=False
        If True, a :class:`MissingIndicator` transform will stack onto output
        of the imputer's transform. This allows a predictive estimator
        to account for missingness despite imputation. If a feature has no
        missing values at fit/train time, the feature won't appear on
        the missing indicator even if there are missing values at
        transform/test time.

    Attributes
    ----------
    statistics_ : array of shape (n_features,)
        The imputation fill value for each feature.
        Computing statistics can result in `np.nan` values.
        During :meth:`transform`, features corresponding to `np.nan`
        statistics will be discarded.

    See also
    --------
    IterativeImputer : Multivariate imputation of missing values.

    Examples
    --------
    >>> import cupy as cp
    >>> from cuml.preprocessing import SimpleImputer
    >>> imp_mean = SimpleImputer(missing_values=cp.nan, strategy='mean')
    >>> imp_mean.fit(cp.asarray([[7, 2, 3], [4, cp.nan, 6], [10, 5, 9]]))
    SimpleImputer()
    >>> X = [[cp.nan, 2, 3], [4, cp.nan, 6], [10, cp.nan, 9]]
    >>> print(imp_mean.transform(cp.asarray(X)))
    [[ 7.   2.   3. ]
     [ 4.   3.5  6. ]
     [10.   3.5  9. ]]

    Notes
    -----
    Columns which only contained missing values at :meth:`fit` are discarded
    upon :meth:`transform` if strategy is not "constant".
    """

    statistics_ = CumlArrayDescriptor()

    @_deprecate_pos_args(version="21.06")
    def __init__(self, *, missing_values=np.nan, strategy="mean",
                 fill_value=None, verbose=0, copy=True, add_indicator=False):
        super().__init__(
            missing_values=missing_values,
            add_indicator=add_indicator
        )
        self.strategy = strategy
        self.fill_value = fill_value
        self.verbose = verbose
        self.copy = copy

    def get_param_names(self):
        return super().get_param_names() + [
            "strategy",
            "fill_value",
            "verbose",
            "copy"
        ]

    def _validate_input(self, X, in_fit):
        allowed_strategies = ["mean", "median", "most_frequent", "constant"]
        if self.strategy not in allowed_strategies:
            raise ValueError("Can only use these strategies: {0} "
                             " got strategy={1}".format(allowed_strategies,
                                                        self.strategy))

        if self.strategy in ("most_frequent", "constant"):
            dtype = None
        else:
            dtype = FLOAT_DTYPES

        if not is_scalar_nan(self.missing_values):
            force_all_finite = True
        else:
            force_all_finite = "allow-nan"

        try:
            X = self._validate_data(X, reset=in_fit,
                                    accept_sparse='csc', dtype=dtype,
                                    force_all_finite=force_all_finite,
                                    copy=self.copy)
        except ValueError as ve:
            if "could not convert" in str(ve):
                new_ve = ValueError("Cannot use {} strategy with non-numeric "
                                    "data:\n{}".format(self.strategy, ve))
                raise new_ve from None
            else:
                raise ve

        _check_inputs_dtype(X, self.missing_values)
        if X.dtype.kind not in ("i", "u", "f", "O"):
            raise ValueError("SimpleImputer does not support data with dtype "
                             "{0}. Please provide either a numeric array (with"
                             " a floating point or integer dtype) or "
                             "categorical data represented either as an array "
                             "with integer dtype or an array of string values "
                             "with an object dtype.".format(X.dtype))

        return X

    def fit(self, X, y=None) -> "SimpleImputer":
        """Fit the imputer on X.

        Parameters
        ----------
        X : {array-like, sparse matrix}, shape (n_samples, n_features)
            Input data, where ``n_samples`` is the number of samples and
            ``n_features`` is the number of features.

        Returns
        -------
        self : SimpleImputer
        """
        if type(X) is list:
            X = np.asarray(X)

        X = self._validate_input(X, in_fit=True)
        super()._fit_indicator(X)

        # default fill_value is 0 for numerical input and "missing_value"
        # otherwise
        if self.fill_value is None:
            if X.dtype.kind in ("i", "u", "f"):
                fill_value = 0
            else:
                fill_value = "missing_value"
        else:
            fill_value = self.fill_value

        # fill_value should be numerical in case of numerical input
        if (self.strategy == "constant" and
                X.dtype.kind in ("i", "u", "f") and
                not isinstance(fill_value, numbers.Real)):
            raise ValueError("'fill_value'={0} is invalid. Expected a "
                             "numerical value when imputing numerical "
                             "data".format(fill_value))

        if sparse.issparse(X):
            # missing_values = 0 not allowed with sparse data as it would
            # force densification
            if self.missing_values == 0:
                raise ValueError("Imputation not possible when missing_values "
                                 "== 0 and input is sparse. Provide a dense "
                                 "array instead.")
            else:
                self.statistics_ = self._sparse_fit(X,
                                                    self.strategy,
                                                    self.missing_values,
                                                    fill_value)
        else:
            self.statistics_ = self._dense_fit(X,
                                               self.strategy,
                                               self.missing_values,
                                               fill_value)

        return self

    def _sparse_fit(self, X, strategy, missing_values, fill_value):
        """Fit the transformer on sparse data."""
        mask_data = _get_mask(X.data, missing_values)
        n_implicit_zeros = X.shape[0] - np.diff(X.indptr)

        statistics = np.empty(X.shape[1])

        if strategy == "constant":
            # for constant strategy, self.statistics_ is used to store
            # fill_value in each column
            statistics.fill(fill_value)
        else:
            for i in range(X.shape[1]):
                column = X.data[X.indptr[i]:X.indptr[i + 1]]
                mask_column = mask_data[X.indptr[i]:X.indptr[i + 1]]
                column = column[~mask_column]

                # combine explicit and implicit zeros
                mask_zeros = _get_mask(column, 0)
                column = column[~mask_zeros]
                n_explicit_zeros = mask_zeros.sum()
                n_zeros = n_implicit_zeros[i] + n_explicit_zeros

                if strategy == "mean":
                    s = column.size + n_zeros
                    statistics[i] = np.nan if s == 0 else column.sum() / s

                elif strategy == "median":
                    statistics[i] = _get_median(column, n_zeros)

                elif strategy == "most_frequent":
                    statistics[i] = _most_frequent(column, 0, n_zeros)
        return statistics

    def _dense_fit(self, X, strategy, missing_values, fill_value):
        """Fit the transformer on dense data."""

        # Mean
        if strategy == "mean":
            return _masked_column_mean(X, missing_values)

        # Median
        elif strategy == "median":
            return _masked_column_median(X, missing_values)

        # Most frequent
        elif strategy == "most_frequent":
            return _masked_column_mode(X, missing_values)

        # Constant
        elif strategy == "constant":
            return np.full(X.shape[1], fill_value, dtype=X.dtype)

    def transform(self, X) -> SparseCumlArray:
        """Impute all missing values in X.

        Parameters
        ----------
        X : {array-like, sparse matrix}, shape (n_samples, n_features)
            The input data to complete.
        """
        check_is_fitted(self)

        X = self._validate_input(X, in_fit=False)
        X_indicator = super()._transform_indicator(X)

        statistics = self.statistics_

        if X.shape[1] != statistics.shape[0]:
            raise ValueError("X has %d features per sample, expected %d"
                             % (X.shape[1], self.statistics_.shape[0]))

        # Delete the invalid columns if strategy is not constant
        if self.strategy == "constant":
            valid_statistics = statistics
        else:
            # same as np.isnan but also works for object dtypes
            invalid_mask = _get_mask(statistics, np.nan)
            valid_mask = np.logical_not(invalid_mask)
            valid_statistics = statistics[valid_mask]
            valid_statistics_indexes = np.flatnonzero(valid_mask)

            if invalid_mask.any():
                missing = np.arange(X.shape[1])[invalid_mask]
                if self.verbose:
                    warnings.warn("Deleting features without "
                                  "observed values: %s" % missing)
                X = X[:, valid_statistics_indexes]

        # Do actual imputation
        if sparse.issparse(X):
            if self.missing_values == 0:
                raise ValueError("Imputation not possible when missing_values "
                                 "== 0 and input is sparse. Provide a dense "
                                 "array instead.")
            else:
                mask = _get_mask(X.data, self.missing_values)
                indexes = np.repeat(
                    np.arange(len(X.indptr) - 1, dtype=int),
                    np.diff(X.indptr).tolist())[mask]

                X.data[mask] = valid_statistics[indexes].astype(X.dtype,
                                                                copy=False)
        else:
            mask = _get_mask(X, self.missing_values)

            if self.strategy == "constant":
                X[mask] = valid_statistics[0]
            else:
                for i, vi in enumerate(valid_statistics_indexes):
                    feature_idxs = np.flatnonzero(mask[:, vi])
                    X[feature_idxs, vi] = valid_statistics[i]

        X = super()._concatenate_indicator(X, X_indicator)

        return X


class MissingIndicator(TransformerMixin, BaseEstimator, AllowNaNTagMixin,
                       SparseInputTagMixin, StringInputTagMixin):
    """Binary indicators for missing values.

    Note that this component typically should not be used in a vanilla
    :class:`Pipeline` consisting of transformers and a classifier, but rather
    could be added using a :class:`FeatureUnion` or :class:`ColumnTransformer`.

    Parameters
    ----------
    missing_values : number, string, np.nan (default) or None
        The placeholder for the missing values. All occurrences of
        `missing_values` will be imputed. For pandas' dataframes with
        nullable integer dtypes with missing values, `missing_values`
        should be set to `np.nan`, since `pd.NA` will be converted to
        `np.nan`.

    features : str, default=None
        Whether the imputer mask should represent all or a subset of
        features.

        - If "missing-only" (default), the imputer mask will only represent
          features containing missing values during fit time.
        - If "all", the imputer mask will represent all features.

    sparse : boolean or "auto", default=None
        Whether the imputer mask format should be sparse or dense.

        - If "auto" (default), the imputer mask will be of same type as
          input.
        - If True, the imputer mask will be a sparse matrix.
        - If False, the imputer mask will be a numpy array.

    error_on_new : boolean, default=None
        If True (default), transform will raise an error when there are
        features with missing values in transform that have no missing values
        in fit. This is applicable only when ``features="missing-only"``.

    Attributes
    ----------
    features_ : ndarray, shape (n_missing_features,) or (n_features,)
        The features indices which will be returned when calling
        ``transform``. They are computed during ``fit``. For
        ``features='all'``, it is equal to ``range(n_features)``.

    Examples
    --------
    >>> import numpy as np
    >>> from sklearn.impute import MissingIndicator
    >>> X1 = np.array([[np.nan, 1, 3],
    ...                [4, 0, np.nan],
    ...                [8, 1, 0]])
    >>> X2 = np.array([[5, 1, np.nan],
    ...                [np.nan, 2, 3],
    ...                [2, 4, 0]])
    >>> indicator = MissingIndicator()
    >>> indicator.fit(X1)
    MissingIndicator()
    >>> X2_tr = indicator.transform(X2)
    >>> X2_tr
    array([[False,  True],
           [ True, False],
           [False, False]])
    """

    features_ = CumlArrayDescriptor()

    @_deprecate_pos_args(version="21.06")
    def __init__(self, *, missing_values=np.nan, features="missing-only",
                 sparse="auto", error_on_new=True):
        self.missing_values = missing_values
        self.features = features
        self.sparse = sparse
        self.error_on_new = error_on_new

    def get_param_names(self):
        return super().get_param_names() + [
            "missing_values",
            "features",
            "sparse",
            "error_on_new"
        ]

    def _get_missing_features_info(self, X):
        """Compute the imputer mask and the indices of the features
        containing missing values.

        Parameters
        ----------
        X : {ndarray or sparse matrix}, shape (n_samples, n_features)
            The input data with missing values. Note that ``X`` has been
            checked in ``fit`` and ``transform`` before to call this function.

        Returns
        -------
        imputer_mask : {ndarray or sparse matrix}, shape \
        (n_samples, n_features)
            The imputer mask of the original data.

        features_with_missing : ndarray, shape (n_features_with_missing)
            The features containing missing values.
        """
        if sparse.issparse(X):
            mask = _get_mask(X.data, self.missing_values)

            # The imputer mask will be constructed with the same sparse format
            # as X.
            sparse_constructor = (sparse.csr_matrix if X.format == 'csr'
                                  else sparse.csc_matrix)
            imputer_mask = sparse_constructor(
                (mask, X.indices.copy(), X.indptr.copy()),
                shape=X.shape, dtype=np.float32)
            # temporarily switch to using float32 as
            # cupy cannot operate with bool as of now

            if self.features == 'missing-only':
                n_missing = imputer_mask.sum(axis=0)

            if self.sparse is False:
                imputer_mask = imputer_mask.toarray()
            elif imputer_mask.format == 'csr':
                imputer_mask = imputer_mask.tocsc()
        else:
            imputer_mask = _get_mask(X, self.missing_values)

            if self.features == 'missing-only':
                n_missing = imputer_mask.sum(axis=0)

            if self.sparse is True:
                imputer_mask = sparse.csc_matrix(imputer_mask)

        if self.features == 'all':
            features_indices = np.arange(X.shape[1])
        else:
            features_indices = np.flatnonzero(n_missing)

        return imputer_mask, features_indices

    def _validate_input(self, X, in_fit):
        if not is_scalar_nan(self.missing_values):
            force_all_finite = True
        else:
            force_all_finite = "allow-nan"
        X = self._validate_data(X, reset=in_fit,
                                accept_sparse=('csc', 'csr'), dtype=None,
                                force_all_finite=force_all_finite)
        _check_inputs_dtype(X, self.missing_values)
        if X.dtype.kind not in ("i", "u", "f", "O"):
            raise ValueError("MissingIndicator does not support data with "
                             "dtype {0}. Please provide either a numeric array"
                             " (with a floating point or integer dtype) or "
                             "categorical data represented either as an array "
                             "with integer dtype or an array of string values "
                             "with an object dtype.".format(X.dtype))

        if sparse.issparse(X) and self.missing_values == 0:
            # missing_values = 0 not allowed with sparse data as it would
            # force densification
            raise ValueError("Sparse input with missing_values=0 is "
                             "not supported. Provide a dense "
                             "array instead.")

        return X

    def _fit(self, X, y=None):
        """Fit the transformer on X.

        Parameters
        ----------
        X : {array-like, sparse matrix}, shape (n_samples, n_features)
            Input data, where ``n_samples`` is the number of samples and
            ``n_features`` is the number of features.

        Returns
        -------
        imputer_mask : {ndarray or sparse matrix}, shape (n_samples, \
        n_features)
            The imputer mask of the original data.
        """
        X = self._validate_input(X, in_fit=True)
        self._n_features = X.shape[1]

        if self.features not in ('missing-only', 'all'):
            raise ValueError("'features' has to be either 'missing-only' or "
                             "'all'. Got {} instead.".format(self.features))

        if not ((isinstance(self.sparse, str) and
                self.sparse == "auto") or isinstance(self.sparse, bool)):
            raise ValueError("'sparse' has to be a boolean or 'auto'. "
                             "Got {!r} instead.".format(self.sparse))

        missing_features_info = self._get_missing_features_info(X)
        self.features_ = missing_features_info[1]

        return missing_features_info[0]

    def fit(self, X, y=None) -> "MissingIndicator":
        """Fit the transformer on X.

        Parameters
        ----------
        X : {array-like, sparse matrix}, shape (n_samples, n_features)
            Input data, where ``n_samples`` is the number of samples and
            ``n_features`` is the number of features.

        Returns
        -------
        self : object
            Returns self.
        """
        self._fit(X, y)

        return self

    def transform(self, X) -> SparseCumlArray:
        """Generate missing values indicator for X.

        Parameters
        ----------
        X : {array-like, sparse matrix}, shape (n_samples, n_features)
            The input data to complete.

        Returns
        -------
        Xt : {ndarray or sparse matrix}, shape (n_samples, n_features) \
        or (n_samples, n_features_with_missing)
            The missing indicator for input data. The data type of ``Xt``
            will be boolean.
        """
        check_is_fitted(self)

        X = self._validate_input(X, in_fit=False)

        if X.shape[1] != self._n_features:
            raise ValueError("X has a different number of features "
                             "than during fitting.")

        imputer_mask, features = self._get_missing_features_info(X)

        if self.features == "missing-only":
            with cuml.using_output_type("numpy"):
                np_features = np.asnumpy(features)
                features_diff_fit_trans = numpy.setdiff1d(np_features,
                                                          self.features_)
            if (self.error_on_new and features_diff_fit_trans.size > 0):
                raise ValueError("The features {} have missing values "
                                 "in transform but have no missing values "
                                 "in fit.".format(features_diff_fit_trans))

        if self.features_.size < self._n_features:
            imputer_mask = imputer_mask[:, self.features_]

        return imputer_mask

    def fit_transform(self, X, y=None) -> SparseCumlArray:
        """Generate missing values indicator for X.

        Parameters
        ----------
        X : {array-like, sparse matrix}, shape (n_samples, n_features)
            The input data to complete.

        Returns
        -------
        Xt : {ndarray or sparse matrix}, shape (n_samples, n_features) \
        or (n_samples, n_features_with_missing)
            The missing indicator for input data. The data type of ``Xt``
            will be boolean.
        """
        imputer_mask = self._fit(X, y)

        if self.features_.size < self._n_features:
            imputer_mask = imputer_mask[:, self.features_]

        return imputer_mask
rapidsai_public_repos/cuml/python/cuml/_thirdparty/sklearn/preprocessing/_discretization.py
# Original authors from Scikit-Learn:
#          Henry Lin <hlin117@gmail.com>
#          Tom Dupré la Tour
# License: BSD


# This code originates from the Scikit-Learn library,
# it was since modified to allow GPU acceleration.
# This code is under BSD 3 clause license.
# Authors mentioned above do not endorse or promote this production.


from ....internals import _deprecate_pos_args
from ....internals.memory_utils import using_output_type
from ....common.array_descriptor import CumlArrayDescriptor
from ....internals.array_sparse import SparseCumlArray
from ....thirdparty_adapters import check_array
from ..utils.validation import FLOAT_DTYPES
from ..utils.validation import check_is_fitted
from cuml.internals.mixins import SparseInputTagMixin
from ..utils.skl_dependencies import BaseEstimator, TransformerMixin
from cuml.cluster import KMeans
from cuml.preprocessing import OneHotEncoder
import warnings
from cuml.internals.safe_imports import cpu_only_import
import numbers
from cuml.internals.safe_imports import gpu_only_import

np = gpu_only_import('cupy')
cpu_np = cpu_only_import('numpy')


def digitize(x, bins):
    return np.searchsorted(bins, x, side='left')


class KBinsDiscretizer(TransformerMixin, BaseEstimator,
                       SparseInputTagMixin):
    """
    Bin continuous data into intervals.

    Parameters
    ----------
    n_bins : int or array-like, shape (n_features,) (default=5)
        The number of bins to produce. Raises ValueError if ``n_bins < 2``.

    encode : {'onehot', 'onehot-dense', 'ordinal'}, (default='onehot')
        Method used to encode the transformed result.

        onehot
            Encode the transformed result with one-hot encoding
            and return a sparse matrix. Ignored features are always
            stacked to the right.
        onehot-dense
            Encode the transformed result with one-hot encoding
            and return a dense array. Ignored features are always
            stacked to the right.
        ordinal
            Return the bin identifier encoded as an integer value.

    strategy : {'uniform', 'quantile', 'kmeans'}, (default='quantile')
        Strategy used to define the widths of the bins.

        uniform
            All bins in each feature have identical widths.
        quantile
            All bins in each feature have the same number of points.
        kmeans
            Values in each bin have the same nearest center of a 1D k-means
            cluster.

    Attributes
    ----------
    n_bins_ : int array, shape (n_features,)
        Number of bins per feature. Bins whose widths are too small
        (i.e., <= 1e-8) are removed with a warning.

    bin_edges_ : array of arrays, shape (n_features, )
        The edges of each bin. Contain arrays of varying shapes
        ``(n_bins_, )``. Ignored features will have empty arrays.

    See Also
    --------
    cuml.preprocessing.Binarizer : Class used to bin values as ``0`` or
        ``1`` based on a parameter ``threshold``.

    Notes
    -----
    In bin edges for feature ``i``, the first and last values are used only
    for ``inverse_transform``. During transform, bin edges are extended to::

      np.concatenate([-np.inf, bin_edges_[i][1:-1], np.inf])

    You can combine ``KBinsDiscretizer`` with
    :class:`cuml.compose.ColumnTransformer` if you only want to preprocess
    part of the features.

    ``KBinsDiscretizer`` might produce constant features (e.g., when
    ``encode = 'onehot'`` and certain bins do not contain any data).
    These features can be removed with feature selection algorithms
    (e.g., :class:`sklearn.feature_selection.VarianceThreshold`).

    Examples
    --------
    >>> from cuml.preprocessing import KBinsDiscretizer
    >>> import cupy as cp
    >>> X = [[-2, 1, -4,   -1],
    ...      [-1, 2, -3, -0.5],
    ...      [ 0, 3, -2,  0.5],
    ...      [ 1, 4, -1,    2]]
    >>> X = cp.array(X)
    >>> est = KBinsDiscretizer(n_bins=3, encode='ordinal', strategy='uniform')
    >>> est.fit(X)
    KBinsDiscretizer(...)
    >>> Xt = est.transform(X)
    >>> Xt
    array([[0, 0, 0, 0],
           [1, 1, 1, 0],
           [2, 2, 2, 1],
           [2, 2, 2, 2]], dtype=int32)

    Sometimes it may be useful to convert the data back into the original
    feature space. The ``inverse_transform`` function converts the binned
    data into the original feature space. Each value will be equal to the mean
    of the two bin edges.

    >>> est.bin_edges_[0]
    array([-2., -1.,  0.,  1.])
    >>> est.inverse_transform(Xt)
    array([[-1.5,  1.5, -3.5, -0.5],
           [-0.5,  2.5, -2.5, -0.5],
           [ 0.5,  3.5, -1.5,  0.5],
           [ 0.5,  3.5, -1.5,  1.5]])
    """

    bin_edges_internal_ = CumlArrayDescriptor()
    n_bins_ = CumlArrayDescriptor()

    @_deprecate_pos_args(version="21.06")
    def __init__(self, n_bins=5, *, encode='onehot', strategy='quantile'):
        self.n_bins = n_bins
        self.encode = encode
        self.strategy = strategy

    def get_param_names(self):
        return super().get_param_names() + [
            "n_bins",
            "encode",
            "strategy"
        ]

    def fit(self, X, y=None) -> "KBinsDiscretizer":
        """
        Fit the estimator.

        Parameters
        ----------
        X : numeric array-like, shape (n_samples, n_features)
            Data to be discretized.

        y : None
            Ignored. This parameter exists only for compatibility with
            :class:`sklearn.pipeline.Pipeline`.

        Returns
        -------
        self
        """
        X = self._validate_data(X, dtype='numeric')

        valid_encode = ('onehot', 'onehot-dense', 'ordinal')
        if self.encode not in valid_encode:
            raise ValueError("Valid options for 'encode' are {}. "
                             "Got encode={!r} instead."
                             .format(valid_encode, self.encode))
        valid_strategy = ('uniform', 'quantile', 'kmeans')
        if self.strategy not in valid_strategy:
            raise ValueError("Valid options for 'strategy' are {}. "
                             "Got strategy={!r} instead."
                             .format(valid_strategy, self.strategy))

        n_features = X.shape[1]
        n_bins = self._validate_n_bins(n_features)
        n_bins = np.asnumpy(n_bins)

        bin_edges = cpu_np.zeros(n_features, dtype=object)
        for jj in range(n_features):
            column = X[:, jj]
            col_min, col_max = column.min(), column.max()

            if col_min == col_max:
                warnings.warn("Feature %d is constant and will be "
                              "replaced with 0." % jj)
                n_bins[jj] = 1
                bin_edges[jj] = np.array([-np.inf, np.inf])
                continue

            if self.strategy == 'uniform':
                bin_edges[jj] = np.linspace(col_min, col_max, n_bins[jj] + 1)

            elif self.strategy == 'quantile':
                quantiles = np.linspace(0, 100, n_bins[jj] + 1)
                bin_edges[jj] = np.asarray(np.percentile(column, quantiles))
                # Workaround for https://github.com/cupy/cupy/issues/4451
                # This should be removed as soon as a fix is available in cupy
                # in order to limit alterations in the included sklearn code
                bin_edges[jj][-1] = col_max

            elif self.strategy == 'kmeans':
                # Deterministic initialization with uniform spacing
                uniform_edges = np.linspace(col_min, col_max, n_bins[jj] + 1)
                init = (uniform_edges[1:] + uniform_edges[:-1])[:, None] * 0.5

                # 1D k-means procedure
                km = KMeans(n_clusters=n_bins[jj], init=init, n_init=1,
                            output_type='cupy')
                km = km.fit(column[:, None])
                with using_output_type('cupy'):
                    centers = km.cluster_centers_[:, 0]
                # Must sort, centers may be unsorted even with sorted init
                centers.sort()
                bin_edges[jj] = (centers[1:] + centers[:-1]) * 0.5
                bin_edges[jj] = np.r_[col_min, bin_edges[jj], col_max]

            # Remove bins whose width are too small (i.e., <= 1e-8)
            if self.strategy in ('quantile', 'kmeans'):
                mask = np.diff(bin_edges[jj], prepend=-np.inf) > 1e-8
                bin_edges[jj] = bin_edges[jj][mask]
                if len(bin_edges[jj]) - 1 != n_bins[jj]:
                    warnings.warn('Bins whose width are too small (i.e., <= '
                                  '1e-8) in feature %d are removed. Consider '
                                  'decreasing the number of bins.' % jj)
                    n_bins[jj] = len(bin_edges[jj]) - 1

        self.bin_edges_internal_ = bin_edges
        self.n_bins_ = n_bins

        if 'onehot' in self.encode:
            self._encoder = OneHotEncoder(
                categories=np.array([np.arange(i) for i in self.n_bins_]),
                sparse=self.encode == 'onehot', output_type='cupy')
            # Fit the OneHotEncoder with toy datasets
            # so that it's ready for use after the KBinsDiscretizer is fitted
            self._encoder.fit(np.zeros((1, len(self.n_bins_)), dtype=int))

        return self

    def _validate_n_bins(self, n_features):
        """Returns n_bins_, the number of bins per feature.
        """
        orig_bins = self.n_bins
        if isinstance(orig_bins, numbers.Number):
            if not isinstance(orig_bins, numbers.Integral):
                raise ValueError("{} received an invalid n_bins type. "
                                 "Received {}, expected int."
                                 .format(KBinsDiscretizer.__name__,
                                         type(orig_bins).__name__))
            if orig_bins < 2:
                raise ValueError("{} received an invalid number "
                                 "of bins. Received {}, expected at least 2."
                                 .format(KBinsDiscretizer.__name__,
                                         orig_bins))
            return np.full(n_features, orig_bins, dtype=int)

        n_bins = check_array(orig_bins, dtype=int, copy=True,
                             ensure_2d=False)

        if n_bins.ndim > 1 or n_bins.shape[0] != n_features:
            raise ValueError("n_bins must be a scalar or array "
                             "of shape (n_features,).")

        bad_nbins_value = (n_bins < 2) | (n_bins != orig_bins)

        violating_indices = np.where(bad_nbins_value)[0]
        if violating_indices.shape[0] > 0:
            indices = ", ".join(str(i) for i in violating_indices)
            raise ValueError("{} received an invalid number "
                             "of bins at indices {}. Number of bins "
                             "must be at least 2, and must be an int."
                             .format(KBinsDiscretizer.__name__, indices))
        return n_bins

    def transform(self, X) -> SparseCumlArray:
        """
        Discretize the data.

        Parameters
        ----------
        X : numeric array-like, shape (n_samples, n_features)
            Data to be discretized.

        Returns
        -------
        Xt : numeric array-like or sparse matrix
            Data in the binned space.
        """
        check_is_fitted(self)

        Xt = check_array(X, copy=True, dtype=FLOAT_DTYPES)
        n_features = self.n_bins_.shape[0]
        if Xt.shape[1] != n_features:
            raise ValueError("Incorrect number of features. Expecting {}, "
                             "received {}.".format(n_features, Xt.shape[1]))

        bin_edges = self.bin_edges_internal_
        for jj in range(Xt.shape[1]):
            # Values which are close to a bin edge are susceptible to numeric
            # instability. Add eps to X so these values are binned correctly
            # with respect to their decimal truncation. See documentation of
            # numpy.isclose for an explanation of ``rtol`` and ``atol``.
            rtol = 1.e-5
            atol = 1.e-8
            eps = atol + rtol * np.abs(Xt[:, jj])
            Xt[:, jj] = digitize(Xt[:, jj] + eps, bin_edges[jj][1:])
        self.n_bins_ = np.asarray(self.n_bins_)
        np.clip(Xt, 0, self.n_bins_ - 1, out=Xt)
        Xt = Xt.astype(np.int32)

        if self.encode == 'ordinal':
            return Xt

        Xt = self._encoder.transform(Xt)
        return Xt

    def inverse_transform(self, Xt) -> SparseCumlArray:
        """
        Transform discretized data back to original feature space.

        Note that this function does not regenerate the original data
        due to discretization rounding.

        Parameters
        ----------
        Xt : numeric array-like, shape (n_sample, n_features)
            Transformed data in the binned space.

        Returns
        -------
        Xinv : numeric array-like
            Data in the original feature space.
        """
        check_is_fitted(self)

        if 'onehot' in self.encode:
            Xt = check_array(Xt, accept_sparse=['csr', 'coo'], copy=True)
            Xt = self._encoder.inverse_transform(Xt)

        Xinv = check_array(Xt, copy=True, dtype=FLOAT_DTYPES)
        n_features = self.n_bins_.shape[0]
        if Xinv.shape[1] != n_features:
            raise ValueError("Incorrect number of features. Expecting {}, "
                             "received {}.".format(n_features,
                                                   Xinv.shape[1]))

        for jj in range(n_features):
            bin_edges = self.bin_edges_internal_[jj]
            bin_centers = (bin_edges[1:] + bin_edges[:-1]) * 0.5
            idxs = np.asnumpy(Xinv[:, jj])
            Xinv[:, jj] = bin_centers[idxs.astype(np.int32)]

        return Xinv

    @property
    def bin_edges_(self):
        return self.bin_edges_internal_
rapidsai_public_repos/cuml/python/cuml/_thirdparty/sklearn/preprocessing/_column_transformer.py
# Original authors from Scikit-Learn:
#          Andreas Mueller
#          Joris Van den Bossche
# License: BSD


# This code originates from the Scikit-Learn library,
# it was since modified to allow GPU acceleration.
# This code is under BSD 3 clause license.
# Authors mentioned above do not endorse or promote this production.


from ..preprocessing import FunctionTransformer
from ....thirdparty_adapters import check_array
from ..utils.validation import check_is_fitted
from ..utils.skl_dependencies import TransformerMixin, BaseComposition, \
    BaseEstimator
from cuml.internals import _deprecate_pos_args
from cuml.internals.array_sparse import SparseCumlArray
from cuml.internals.global_settings import _global_settings_data
import cuml

from itertools import chain
from itertools import compress
from joblib import Parallel
import functools
import timeit
import numbers

from cuml.internals.import_utils import has_sklearn

if has_sklearn():
    from sklearn.base import clone
    from sklearn.utils import Bunch

from contextlib import contextmanager
from collections import defaultdict
import warnings

from cuml.internals.safe_imports import cpu_only_import_from
from cuml.internals.safe_imports import gpu_only_import_from
from cuml.internals.safe_imports import cpu_only_import
from cuml.internals.safe_imports import gpu_only_import

cpu_np = cpu_only_import('numpy')
cu_sparse = gpu_only_import_from('cupyx.scipy', 'sparse')
np = gpu_only_import('cupy')
numba = gpu_only_import('numba')
pd = cpu_only_import('pandas')
sp_sparse = cpu_only_import_from('scipy', 'sparse')
cudf = gpu_only_import('cudf')


_ERR_MSG_1DCOLUMN = ("1D data passed to a transformer that expects 2D data. "
                     "Try to specify the column selection as a list of one "
                     "item instead of a scalar.")


def issparse(X):
    return sp_sparse.issparse(X) or cu_sparse.issparse(X)


def _determine_key_type(key, accept_slice=True):
    """Determine the data type of key.

    Parameters
    ----------
    key : scalar, slice or array-like
        The key from which we want to infer the data type.

    accept_slice : bool, default=True
        Whether or not to raise an error if the key is a slice.

    Returns
    -------
    dtype : {'int', 'str', 'bool', None}
        Returns the data type of key.
    """
    err_msg = ("No valid specification of the columns. Only a scalar, list or "
               "slice of all integers or all strings, or boolean mask is "
               "allowed")

    dtype_to_str = {int: 'int', str: 'str', bool: 'bool', np.bool_: 'bool'}
    array_dtype_to_str = {'i': 'int', 'u': 'int', 'b': 'bool',
                          'O': 'str', 'U': 'str', 'S': 'str'}

    if key is None:
        return None
    if isinstance(key, tuple(dtype_to_str.keys())):
        try:
            return dtype_to_str[type(key)]
        except KeyError:
            raise ValueError(err_msg)
    if isinstance(key, slice):
        if not accept_slice:
            raise TypeError(
                'Only array-like or scalar are supported. '
                'A Python slice was given.'
            )
        if key.start is None and key.stop is None:
            return None
        key_start_type = _determine_key_type(key.start)
        key_stop_type = _determine_key_type(key.stop)
        if key_start_type is not None and key_stop_type is not None:
            if key_start_type != key_stop_type:
                raise ValueError(err_msg)
        if key_start_type is not None:
            return key_start_type
        return key_stop_type
    if isinstance(key, (list, tuple)):
        unique_key = set(key)
        key_type = {_determine_key_type(elt) for elt in unique_key}
        if not key_type:
            return None
        if len(key_type) != 1:
            raise ValueError(err_msg)
        return key_type.pop()
    if hasattr(key, 'dtype'):
        try:
            return array_dtype_to_str[key.dtype.kind]
        except KeyError:
            raise ValueError(err_msg)
    raise ValueError(err_msg)


def _get_column_indices(X, key):
    """Get feature column indices for input data X and key.
    """
    n_columns = X.shape[1]

    key_dtype = _determine_key_type(key)

    if isinstance(key, (list, tuple)) and not key:
        # we get an empty list
        return []
    elif key_dtype in ('bool', 'int'):
        # Convert key into positive indexes
        try:
            idx = _safe_indexing(np.arange(n_columns), key)
        except IndexError as e:
            raise ValueError(
                'all features must be in [0, {}] or [-{}, 0]'
                .format(n_columns - 1, n_columns)
            ) from e
        return np.atleast_1d(idx).tolist()
    elif key_dtype == 'str':
        try:
            all_columns = X.columns
        except AttributeError:
            raise ValueError("Specifying the columns using strings is only "
                             "supported for pandas DataFrames")
        if isinstance(key, str):
            columns = [key]
        elif isinstance(key, slice):
            start, stop = key.start, key.stop
            if start is not None:
                start = all_columns.get_loc(start)
            if stop is not None:
                # pandas indexing with strings is endpoint included
                stop = all_columns.get_loc(stop) + 1
            else:
                stop = n_columns + 1
            return list(range(n_columns)[slice(start, stop)])
        else:
            columns = list(key)

        try:
            column_indices = []
            for col in columns:
                col_idx = all_columns.get_loc(col)
                if not isinstance(col_idx, numbers.Integral):
                    raise ValueError(f"Selected columns, {columns}, are not "
                                     "unique in dataframe")
                column_indices.append(col_idx)

        except KeyError as e:
            raise ValueError(
                "A given column is not a column of the dataframe"
            ) from e

        return column_indices
    else:
        raise ValueError("No valid specification of the columns. Only a "
                         "scalar, list or slice of all integers or all "
                         "strings, or boolean mask is allowed")


def _safe_indexing(X, indices, *, axis=0):
    """Return rows, items or columns of X using indices.

    Parameters
    ----------
    X : array-like, sparse-matrix, list, dataframes, series
        data from which to sample rows, items or columns. `list` are only
        supported when `axis=0`.

    indices : bool, int, str, slice, array-like

        - If `axis=0`, boolean and integer array-like, integer slice,
          and scalar integer are supported.
- If `axis=1`: - to select a single column, `indices` can be of `int` type for all `X` types and `str` only for dataframe. The selected subset will be 1D, unless `X` is a sparse matrix in which case it will be 2D. - to select multiples columns, `indices` can be one of the following: `list`, `array`, `slice`. The type used in these containers can be one of the following: `int`, 'bool' and `str`. However, `str` is only supported when `X` is a dataframe. The selected subset will be 2D. axis : int, default=0 The axis along which `X` will be subsampled. `axis=0` will select rows while `axis=1` will select columns. Returns ------- subset Subset of X on axis 0 or 1. Notes ----- CSR, CSC, and LIL sparse matrices are supported. COO sparse matrices are not supported. """ if indices is None: return X if axis not in (0, 1): raise ValueError( "'axis' should be either 0 (to index rows) or 1 (to index " " column). Got {} instead.".format(axis) ) if isinstance(indices, (pd.Index, cudf.Index)): indices = list(indices) indices_dtype = _determine_key_type(indices) if axis == 0 and indices_dtype == 'str': raise ValueError( "String indexing is not supported with 'axis=0'" ) if axis == 1 and X.ndim != 2: raise ValueError( "'X' should be a 2D NumPy array, 2D sparse matrix or pandas " "dataframe when indexing the columns (i.e. 'axis=1'). 
" "Got {} instead with {} dimension(s).".format(type(X), X.ndim) ) if axis == 1 and indices_dtype == 'str' and not hasattr(X, 'loc'): raise ValueError( "Specifying the columns using strings is only supported for " "pandas DataFrames" ) if hasattr(X, "iloc"): return _pandas_indexing(X, indices, indices_dtype, axis=axis) elif hasattr(X, "shape"): return _array_indexing(X, indices, indices_dtype, axis=axis) else: return _list_indexing(X, indices, indices_dtype) def _array_indexing(array, key, key_dtype, axis): """Index an array or a sparse array""" if issparse(array): # check if we have an boolean array-likes to make the proper indexing if key_dtype == 'bool': key = np.asarray(key) if isinstance(key, tuple): key = list(key) if numba.cuda.is_cuda_array(array): array = np.asarray(array) return array[key] if axis == 0 else array[:, key] def _pandas_indexing(X, key, key_dtype, axis): """Index a dataframe or a series""" if hasattr(key, 'shape'): # Work-around for indexing with read-only key in pandas # FIXME: solved in pandas 0.25 key = np.asarray(key) key = key if key.flags.writeable else key.copy() elif isinstance(key, tuple): key = list(key) # check whether we should index with loc or iloc indexer = X.iloc if key_dtype == 'int' else X.loc return indexer[:, key] if axis else indexer[key] def _list_indexing(X, key, key_dtype): """Index a Python list.""" if np.isscalar(key) or isinstance(key, slice): # key is a slice or a scalar return X[key] if key_dtype == 'bool': # key is a boolean array-like return list(compress(X, key)) # key is a integer array-like of key return [X[idx] for idx in key] def _transform_one(transformer, X, y, weight, **fit_params): res = transformer.transform(X).to_output('cupy') # if we have a weight for this transformer, multiply output if weight is None: return res return res * weight def _fit_transform_one(transformer, X, y, weight, message_clsname='', message=None, **fit_params): """ Fits ``transformer`` to ``X`` and ``y``. 
The transformed result is returned with the fitted transformer. If ``weight`` is not ``None``, the result will be multiplied by ``weight``. """ with _print_elapsed_time(message_clsname, message): with cuml.using_output_type("cupy"): transformer.accept_sparse = True if hasattr(transformer, 'fit_transform'): res = transformer.fit_transform(X, y, **fit_params) else: res = transformer.fit(X, y, **fit_params).transform(X) if weight is None: return res, transformer return res * weight, transformer def _name_estimators(estimators): """Generate names for estimators.""" names = [ estimator if isinstance(estimator, str) else type(estimator).__name__.lower() for estimator in estimators ] namecount = defaultdict(int) for est, name in zip(estimators, names): namecount[name] += 1 for k, v in list(namecount.items()): if v == 1: del namecount[k] for i in reversed(range(len(estimators))): name = names[i] if name in namecount: names[i] += "-%d" % namecount[name] namecount[name] -= 1 return list(zip(names, estimators)) def delayed(function): """Decorator used to capture the arguments of a function.""" @functools.wraps(function) def delayed_function(*args, **kwargs): return _FuncWrapper(function), args, kwargs return delayed_function class _FuncWrapper: """"Load the global configuration before calling the function.""" def __init__(self, function): self.function = function self.config = _global_settings_data.shared_state functools.update_wrapper(self, self.function) def __call__(self, *args, **kwargs): _global_settings_data.shared_state = self.config return self.function(*args, **kwargs) @contextmanager def _print_elapsed_time(source, message=None): """Log elapsed time to stdout when the context is exited. Parameters ---------- source : str String indicating the source or the reference of the message. message : str, default=None Short message. If None, nothing will be printed. Returns ------- context_manager Prints elapsed time upon exit if verbose. 
""" if message is None: yield else: start = timeit.default_timer() yield print( _message_with_time(source, message, timeit.default_timer() - start)) def _message_with_time(source, message, time): """Create one line message for logging purposes. Parameters ---------- source : str String indicating the source or the reference of the message. message : str Short message. time : int Time in seconds. """ start_message = "[%s] " % source # adapted from joblib.logger.short_format_time without the Windows -.1s # adjustment if time > 60: time_str = "%4.1fmin" % (time / 60) else: time_str = " %5.1fs" % time end_message = " %s, total=%s" % (message, time_str) dots_len = (70 - len(start_message) - len(end_message)) return "%s%s%s" % (start_message, dots_len * '.', end_message) class ColumnTransformer(TransformerMixin, BaseComposition, BaseEstimator): """Applies transformers to columns of an array or dataframe. This estimator allows different columns or column subsets of the input to be transformed separately and the features generated by each transformer will be concatenated to form a single feature space. This is useful for heterogeneous or columnar data, to combine several feature extraction mechanisms or transformations into a single transformer. Parameters ---------- transformers : list of tuples List of (name, transformer, columns) tuples specifying the transformer objects to be applied to subsets of the data: * name : str Like in Pipeline and FeatureUnion, this allows the transformer and its parameters to be set using ``set_params`` and searched in grid search. * transformer : {'drop', 'passthrough'} or estimator Estimator must support `fit` and `transform`. Special-cased strings 'drop' and 'passthrough' are accepted as well, to indicate to drop the columns or to pass them through untransformed, respectively. * columns : str, array-like of str, int, array-like of int, \ array-like of bool, slice or callable Indexes the data on its second axis. 
Integers are interpreted as positional columns, while strings can reference DataFrame columns by name. A scalar string or int should be used where ``transformer`` expects X to be a 1d array-like (vector), otherwise a 2d array will be passed to the transformer. A callable is passed the input data `X` and can return any of the above. To select multiple columns by name or dtype, you can use :obj:`make_column_selector`. remainder : {'drop', 'passthrough'} or estimator, default='drop' By default, only the specified columns in `transformers` are transformed and combined in the output, and the non-specified columns are dropped. (default of ``'drop'``). By specifying ``remainder='passthrough'``, all remaining columns that were not specified in `transformers` will be automatically passed through. This subset of columns is concatenated with the output of the transformers. By setting ``remainder`` to be an estimator, the remaining non-specified columns will use the ``remainder`` estimator. The estimator must support `fit` and `transform`. Note that using this feature requires that the DataFrame columns input at `fit` and `transform` have identical order. sparse_threshold : float, default=0.3 If the output of the different transformers contains sparse matrices, these will be stacked as a sparse matrix if the overall density is lower than this value. Use ``sparse_threshold=0`` to always return dense. When the transformed output consists of all dense data, the stacked result will be dense, and this keyword will be ignored. n_jobs : int, default=None Number of jobs to run in parallel. ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context. ``-1`` means using all processors. for more details. transformer_weights : dict, default=None Multiplicative weights for features per transformer. The output of the transformer is multiplied by these weights. Keys are transformer names, values the weights. 
verbose : bool, default=False If True, the time elapsed while fitting each transformer will be printed as it is completed. Attributes ---------- transformers_ : list The collection of fitted transformers as tuples of (name, fitted_transformer, column). `fitted_transformer` can be an estimator, 'drop', or 'passthrough'. In case there were no columns selected, this will be the unfitted transformer. If there are remaining columns, the final element is a tuple of the form: ('remainder', transformer, remaining_columns) corresponding to the ``remainder`` parameter. If there are remaining columns, then ``len(transformers_)==len(transformers)+1``, otherwise ``len(transformers_)==len(transformers)``. named_transformers_ : :class:`~sklearn.utils.Bunch` Read-only attribute to access any transformer by given name. Keys are transformer names and values are the fitted transformer objects. sparse_output_ : bool Boolean flag indicating whether the output of ``transform`` is a sparse matrix or a dense numpy array, which depends on the output of the individual transformers and the `sparse_threshold` keyword. Notes ----- The order of the columns in the transformed feature matrix follows the order of how the columns are specified in the `transformers` list. Columns of the original feature matrix that are not specified are dropped from the resulting transformed feature matrix, unless specified in the `passthrough` keyword. Those columns specified with `passthrough` are added at the right to the output of the transformers. See Also -------- make_column_transformer : Convenience function for combining the outputs of multiple transformer objects applied to column subsets of the original feature space. make_column_selector : Convenience function for selecting columns based on datatype or the columns name with a regex pattern. Examples -------- >>> import cupy as cp >>> from cuml.compose import ColumnTransformer >>> from cuml.preprocessing import Normalizer >>> ct = ColumnTransformer( ... 
[("norm1", Normalizer(norm='l1'), [0, 1]), ... ("norm2", Normalizer(norm='l1'), slice(2, 4))]) >>> X = cp.array([[0., 1., 2., 2.], ... [1., 1., 0., 1.]]) >>> # Normalizer scales each row of X to unit norm. A separate scaling >>> # is applied for the two first and two last elements of each >>> # row independently. >>> ct.fit_transform(X) array([[0. , 1. , 0.5, 0.5], [0.5, 0.5, 0. , 1. ]]) """ _required_parameters = ['transformers'] @_deprecate_pos_args(version="0.20") def __init__(self, transformers=None, remainder='drop', sparse_threshold=0.3, n_jobs=None, transformer_weights=None, verbose=False): if not has_sklearn(): raise ImportError("Scikit-learn is needed to use the " "Column Transformer") if not transformers: warnings.warn('Transformers are required') self.transformers = transformers self.remainder = remainder self.sparse_threshold = sparse_threshold self.n_jobs = n_jobs self.transformer_weights = transformer_weights self.verbose = verbose @property def _transformers(self): """ Internal list of transformer only containing the name and transformers, dropping the columns. This is for the implementation of get_params via BaseComposition._get_params which expects lists of tuples of len 2. """ return [(name, trans) for name, trans, _ in self.transformers] @_transformers.setter def _transformers(self, value): self.transformers = [ (name, trans, col) for ((name, trans), (_, _, col)) in zip(value, self.transformers)] def get_params(self, deep=True): """Get parameters for this estimator. Returns the parameters given in the constructor as well as the estimators contained within the `transformers` of the `ColumnTransformer`. Parameters ---------- deep : bool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns ------- params : dict Parameter names mapped to their values. """ return self._get_params('_transformers', deep=deep) def set_params(self, **kwargs): """Set the parameters of this estimator. 
Valid parameter keys can be listed with ``get_params()``. Note that you can directly set the parameters of the estimators contained in `transformers` of `ColumnTransformer`. Returns ------- self """ self._set_params('_transformers', **kwargs) return self def _iter(self, fitted=False, replace_strings=False): """ Generate (name, trans, column, weight) tuples. If fitted=True, use the fitted transformers, else use the user specified transformers updated with converted column names and potentially appended with transformer for remainder. """ if fitted: transformers = self.transformers_ else: # interleave the validated column specifiers transformers = [ (name, trans, column) for (name, trans, _), column in zip(self.transformers, self._columns) ] # add transformer tuple for remainder if self._remainder[2] is not None: transformers = chain(transformers, [self._remainder]) get_weight = (self.transformer_weights or {}).get for name, trans, column in transformers: if replace_strings: # replace 'passthrough' with identity transformer and # skip in case of 'drop' if trans == 'passthrough': with cuml.using_output_type("cupy"): trans = FunctionTransformer(accept_sparse=True, check_inverse=False) elif trans == 'drop': continue elif _is_empty_column_selection(column): continue yield (name, trans, column, get_weight(name)) def _validate_transformers(self): if not self.transformers: return names, transformers, _ = zip(*self.transformers) # validate names self._validate_names(names) # validate estimators for t in transformers: if t in ('drop', 'passthrough'): continue if (not (hasattr(t, "fit") or hasattr(t, "fit_transform")) or not hasattr(t, "transform")): raise TypeError("All estimators should implement fit and " "transform, or can be 'drop' or 'passthrough' " "specifiers. '%s' (type %s) doesn't." % (t, type(t))) def _validate_column_callables(self, X): """ Converts callable column specifications. 
""" columns = [] for _, _, column in self.transformers: if callable(column): column = column(X) columns.append(column) self._columns = columns def _validate_remainder(self, X): """ Validates ``remainder`` and defines ``_remainder`` targeting the remaining columns. """ is_transformer = ((hasattr(self.remainder, "fit") or hasattr(self.remainder, "fit_transform")) and hasattr(self.remainder, "transform")) if (self.remainder not in ('drop', 'passthrough') and not is_transformer): raise ValueError( "The remainder keyword needs to be one of 'drop', " "'passthrough', or estimator. '%s' was passed instead" % self.remainder) # Make it possible to check for reordered named columns on transform self._has_str_cols = any(_determine_key_type(cols) == 'str' for cols in self._columns) if hasattr(X, 'columns'): self._df_columns = X.columns self._n_features = X.shape[1] cols = [] for columns in self._columns: cols.extend(_get_column_indices(X, columns)) remaining_idx = sorted(set(range(self._n_features)) - set(cols)) self._remainder = ('remainder', self.remainder, remaining_idx or None) @property def named_transformers_(self): """Access the fitted transformer by name. Read-only attribute to access any transformer by given name. Keys are transformer names and values are the fitted transformer objects. """ # Use Bunch object to improve autocomplete return Bunch(**{name: trans for name, trans, _ in self.transformers_}) def get_feature_names(self): """Get feature names from all transformers. Returns ------- feature_names : list of strings Names of the features produced by transform. 
""" check_is_fitted(self) feature_names = [] for name, trans, column, _ in self._iter(fitted=True): if trans == 'drop' or ( hasattr(column, '__len__') and not len(column)): continue if trans == 'passthrough': if hasattr(self, '_df_columns'): if ((not isinstance(column, slice)) and all(isinstance(col, str) for col in column)): feature_names.extend(column) else: feature_names.extend(self._df_columns[column]) else: indices = np.arange(self._n_features) feature_names.extend(['x%d' % i for i in indices[column]]) continue if not hasattr(trans, 'get_feature_names'): raise AttributeError("Transformer %s (type %s) does not " "provide get_feature_names." % (str(name), type(trans).__name__)) feature_names.extend([name + "__" + f for f in trans.get_feature_names()]) return feature_names def _update_fitted_transformers(self, transformers): # transformers are fitted; excludes 'drop' cases fitted_transformers = iter(transformers) transformers_ = [] for name, old, column, _ in self._iter(): if old == 'drop': trans = 'drop' elif old == 'passthrough': # FunctionTransformer is present in list of transformers, # so get next transformer, but save original string next(fitted_transformers) trans = 'passthrough' elif _is_empty_column_selection(column): trans = old else: trans = next(fitted_transformers) transformers_.append((name, trans, column)) # sanity check that transformers is exhausted assert not list(fitted_transformers) self.transformers_ = transformers_ def _validate_output(self, result): """ Ensure that the output of each transformer is 2D. Otherwise hstack can raise an error or produce incorrect results. 
""" names = [name for name, _, _, _ in self._iter(fitted=True, replace_strings=True)] for Xs, name in zip(result, names): if not getattr(Xs, 'ndim', 0) == 2: raise ValueError( "The output of the '{0}' transformer should be 2D (scipy " "matrix, array, or pandas DataFrame).".format(name)) def _log_message(self, name, idx, total): if not self.verbose: return None return '(%d of %d) Processing %s' % (idx, total, name) def _fit_transform(self, X, y, func, fitted=False): """ Private function to fit and/or transform on demand. Return value (transformers and/or transformed X data) depends on the passed function. ``fitted=True`` ensures the fitted transformers are used. """ transformers = list( self._iter(fitted=fitted, replace_strings=True)) try: return Parallel(n_jobs=self.n_jobs)( delayed(func)( transformer=clone(trans) if not fitted else trans, X=_safe_indexing(X, column, axis=1), y=y, weight=weight, message_clsname='ColumnTransformer', message=self._log_message(name, idx, len(transformers))) for idx, (name, trans, column, weight) in enumerate( self._iter(fitted=fitted, replace_strings=True), 1)) except ValueError as e: if "Expected 2D array, got 1D array instead" in str(e): raise ValueError(_ERR_MSG_1DCOLUMN) from e else: raise def fit(self, X, y=None) -> "ColumnTransformer": """Fit all transformers using X. Parameters ---------- X : {array-like, dataframe} of shape (n_samples, n_features) Input data, of which specified subsets are used to fit the transformers. y : array-like of shape (n_samples,...), default=None Targets for supervised learning. Returns ------- self : ColumnTransformer This estimator """ # we use fit_transform to make sure to set sparse_output_ (for which we # need the transformed data) to have consistent output type in predict self.fit_transform(X, y=y) return self def fit_transform(self, X, y=None) -> SparseCumlArray: """Fit all transformers, transform the data and concatenate results. 
Parameters ---------- X : {array-like, dataframe} of shape (n_samples, n_features) Input data, of which specified subsets are used to fit the transformers. y : array-like of shape (n_samples,), default=None Targets for supervised learning. Returns ------- X_t : {array-like, sparse matrix} of \ shape (n_samples, sum_n_components) hstack of results of transformers. sum_n_components is the sum of n_components (output dimension) over transformers. If any result is a sparse matrix, everything will be converted to sparse matrices. """ # TODO: this should be `feature_names_in_` when we start having it if hasattr(X, "columns"): self._feature_names_in = cpu_np.asarray(X.columns) else: self._feature_names_in = None # set n_features_in_ attribute self._check_n_features(X, reset=True) self._validate_transformers() self._validate_column_callables(X) self._validate_remainder(X) result = self._fit_transform(X, y, _fit_transform_one) if not result: self._update_fitted_transformers([]) # All transformers are None return np.zeros((X.shape[0], 0)) Xs, transformers = zip(*result) # determine if concatenated output will be sparse or not if any(issparse(X) for X in Xs): nnz = sum(X.nnz if issparse(X) else X.size for X in Xs) total = sum(X.shape[0] * X.shape[1] if issparse(X) else X.size for X in Xs) density = nnz / total self.sparse_output_ = density < self.sparse_threshold else: self.sparse_output_ = False self._update_fitted_transformers(transformers) self._validate_output(Xs) return self._hstack(list(Xs)) def transform(self, X) -> SparseCumlArray: """Transform X separately by each transformer, concatenate results. Parameters ---------- X : {array-like, dataframe} of shape (n_samples, n_features) The data to be transformed by subset. Returns ------- X_t : {array-like, sparse matrix} of \ shape (n_samples, sum_n_components) hstack of results of transformers. sum_n_components is the sum of n_components (output dimension) over transformers. 
If any result is a sparse matrix, everything will be converted to sparse matrices. """ check_is_fitted(self) if hasattr(X, "columns"): X_feature_names = cpu_np.asarray(X.columns) else: X_feature_names = None self._check_n_features(X, reset=False) if (self._feature_names_in is not None and X_feature_names is not None and cpu_np.any(self._feature_names_in != X_feature_names)): raise RuntimeError( "Given feature/column names do not match the ones for the " "data given during fit." ) Xs = self._fit_transform(X, None, _transform_one, fitted=True) self._validate_output(Xs) if not Xs: # All transformers are None return np.zeros((X.shape[0], 0)) return self._hstack(list(Xs)) def _hstack(self, Xs): """Stacks Xs horizontally. This allows subclasses to control the stacking behavior, while reusing everything else from ColumnTransformer. Parameters ---------- Xs : list of {array-like, sparse matrix, dataframe} """ if self.sparse_output_: try: # since all columns should be numeric before stacking them # in a sparse matrix, `check_array` is used for the # dtype conversion if necessary. converted_Xs = [check_array(X, accept_sparse=True, force_all_finite=False) for X in Xs] except ValueError as e: raise ValueError( "For a sparse output, all columns should " "be a numeric or convertible to a numeric." ) from e return cu_sparse.hstack(converted_Xs).tocsr() else: Xs = [f.toarray() if issparse(f) else f for f in Xs] return np.hstack(Xs) def _is_empty_column_selection(column): """ Return True if the column selection is empty (empty list or all-False boolean array). 
""" if hasattr(column, 'dtype') and np.issubdtype(column.dtype, np.bool_): return not column.any() elif hasattr(column, '__len__'): return (len(column) == 0 or all(isinstance(col, bool) for col in column) and not any(column)) else: return False def _get_transformer_list(estimators): """ Construct (name, trans, column) tuples from list """ transformers, columns = zip(*estimators) names, _ = zip(*_name_estimators(transformers)) transformer_list = list(zip(names, transformers, columns)) return transformer_list def make_column_transformer(*transformers, remainder='drop', sparse_threshold=0.3, n_jobs=None, verbose=False): """Construct a ColumnTransformer from the given transformers. This is a shorthand for the ColumnTransformer constructor; it does not require, and does not permit, naming the transformers. Instead, they will be given names automatically based on their types. It also does not allow weighting with ``transformer_weights``. Parameters ---------- *transformers : tuples Tuples of the form (transformer, columns) specifying the transformer objects to be applied to subsets of the data: * transformer : {'drop', 'passthrough'} or estimator Estimator must support `fit` and `transform`. Special-cased strings 'drop' and 'passthrough' are accepted as well, to indicate to drop the columns or to pass them through untransformed, respectively. * columns : str, array-like of str, int, array-like of int, slice, \ array-like of bool or callable Indexes the data on its second axis. Integers are interpreted as positional columns, while strings can reference DataFrame columns by name. A scalar string or int should be used where ``transformer`` expects X to be a 1d array-like (vector), otherwise a 2d array will be passed to the transformer. A callable is passed the input data `X` and can return any of the above. To select multiple columns by name or dtype, you can use :obj:`make_column_selector`. 
remainder : {'drop', 'passthrough'} or estimator, default='drop' By default, only the specified columns in `transformers` are transformed and combined in the output, and the non-specified columns are dropped. (default of ``'drop'``). By specifying ``remainder='passthrough'``, all remaining columns that were not specified in `transformers` will be automatically passed through. This subset of columns is concatenated with the output of the transformers. By setting ``remainder`` to be an estimator, the remaining non-specified columns will use the ``remainder`` estimator. The estimator must support `fit` and `transform`. sparse_threshold : float, default=0.3 If the transformed output consists of a mix of sparse and dense data, it will be stacked as a sparse matrix if the density is lower than this value. Use ``sparse_threshold=0`` to always return dense. When the transformed output consists of all sparse or all dense data, the stacked result will be sparse or dense, respectively, and this keyword will be ignored. n_jobs : int, default=None Number of jobs to run in parallel. ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context. ``-1`` means using all processors. See `Glossary <n_jobs>` for more details. verbose : bool, default=False If True, the time elapsed while fitting each transformer will be printed as it is completed. Returns ------- ct : ColumnTransformer See Also -------- ColumnTransformer : Class that allows combining the outputs of multiple transformer objects used on column subsets of the data into a single feature space. Examples -------- >>> from cuml.preprocessing import StandardScaler, OneHotEncoder >>> from cuml.compose import make_column_transformer >>> make_column_transformer( ... (StandardScaler(), ['numerical_column']), ... 
(OneHotEncoder(), ['categorical_column'])) ColumnTransformer(transformers=[('standardscaler', StandardScaler(...), ['numerical_column']), ('onehotencoder', OneHotEncoder(...), ['categorical_column'])]) """ # transformer_weights keyword is not passed through because the user # would need to know the automatically generated names of the transformers transformer_list = _get_transformer_list(transformers) return ColumnTransformer(transformer_list, n_jobs=n_jobs, remainder=remainder, sparse_threshold=sparse_threshold, verbose=verbose) class make_column_selector: """Create a callable to select columns to be used with :class:`ColumnTransformer`. :func:`make_column_selector` can select columns based on datatype or the columns name with a regex. When using multiple selection criteria, **all** criteria must match for a column to be selected. Parameters ---------- pattern : str, default=None Name of columns containing this regex pattern will be included. If None, column selection will not be selected based on pattern. dtype_include : column dtype or list of column dtypes, default=None A selection of dtypes to include. For more details, see :meth:`pandas.DataFrame.select_dtypes`. dtype_exclude : column dtype or list of column dtypes, default=None A selection of dtypes to exclude. For more details, see :meth:`pandas.DataFrame.select_dtypes`. Returns ------- selector : callable Callable for column selection to be used by a :class:`ColumnTransformer`. See Also -------- ColumnTransformer : Class that allows combining the outputs of multiple transformer objects used on column subsets of the data into a single feature space. Examples -------- >>> from cuml.preprocessing import StandardScaler, OneHotEncoder >>> from cuml.preprocessing import make_column_transformer >>> from cuml.preprocessing import make_column_selector >>> import cupy as cp >>> import cudf # doctest: +SKIP >>> X = cudf.DataFrame({'city': ['London', 'London', 'Paris', 'Sallisaw'], ... 
'rating': [5, 3, 4, 5]}) # doctest: +SKIP >>> ct = make_column_transformer( ... (StandardScaler(), ... make_column_selector(dtype_include=cp.number)), # rating ... (OneHotEncoder(), ... make_column_selector(dtype_include=object))) # city >>> ct.fit_transform(X) # doctest: +SKIP array([[ 0.90453403, 1. , 0. , 0. ], [-1.50755672, 1. , 0. , 0. ], [-0.30151134, 0. , 1. , 0. ], [ 0.90453403, 0. , 0. , 1. ]]) """ def __init__(self, pattern=None, *, dtype_include=None, dtype_exclude=None): self.pattern = pattern self.dtype_include = dtype_include self.dtype_exclude = dtype_exclude def __call__(self, df): if not hasattr(df, 'iloc'): raise ValueError("make_column_selector can only be applied to " "pandas dataframes") df_row = df.iloc[:1] if self.dtype_include is not None or self.dtype_exclude is not None: df_row = df_row.select_dtypes(include=self.dtype_include, exclude=self.dtype_exclude) cols = df_row.columns if self.pattern is not None: cols = cols[cols.str.contains(self.pattern, regex=True)] return cols.tolist()
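The dtype/pattern filtering performed by `make_column_selector.__call__` can be sketched without a dataframe library. The helper below is a hypothetical, dependency-free stand-in (the name `select_columns` and its list-based schema arguments are illustrative, not part of the cuML API): it takes parallel lists of column names and dtype strings and applies the same include-then-pattern filtering order used above.

```python
import re


def select_columns(columns, dtypes, pattern=None, dtype_include=None):
    """Sketch of make_column_selector's selection logic on plain lists.

    `columns` is a list of column names and `dtypes` a parallel list of
    dtype strings -- a stand-in for a dataframe's schema. As in the real
    selector, all criteria must match for a column to be selected.
    """
    selected = list(zip(columns, dtypes))
    # First narrow by dtype, mirroring select_dtypes(include=...).
    if dtype_include is not None:
        selected = [(c, d) for c, d in selected if d in dtype_include]
    # Then narrow by regex on the column name, mirroring str.contains.
    if pattern is not None:
        selected = [(c, d) for c, d in selected if re.search(pattern, c)]
    return [c for c, _ in selected]
```

For the docstring's example schema, `select_columns(['city', 'rating'], ['object', 'int64'], dtype_include=['int64'])` picks out `['rating']`, while `pattern='^c'` alone picks out `['city']`.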
# File: rapidsai_public_repos/cuml/python/cuml/_thirdparty/sklearn/preprocessing/_function_transformer.py
# This code originates from the Scikit-Learn library, # it was since modified to allow GPU acceleration. # This code is under BSD 3 clause license. # Authors mentioned above do not endorse or promote this production. import warnings import cuml from ....internals.array_sparse import SparseCumlArray from ..utils.skl_dependencies import TransformerMixin, BaseEstimator from ..utils.validation import _allclose_dense_sparse from ....internals import _deprecate_pos_args def _identity(X): """The identity function. """ return X class FunctionTransformer(TransformerMixin, BaseEstimator): """Constructs a transformer from an arbitrary callable. A FunctionTransformer forwards its X (and optionally y) arguments to a user-defined function or function object and returns the result of this function. This is useful for stateless transformations such as taking the log of frequencies, doing custom scaling, etc. Note: If a lambda is used as the function, then the resulting transformer will not be pickleable. Parameters ---------- func : callable, default=None The callable to use for the transformation. This will be passed the same arguments as transform, with args and kwargs forwarded. If func is None, then func will be the identity function. inverse_func : callable, default=None The callable to use for the inverse transformation. This will be passed the same arguments as inverse transform, with args and kwargs forwarded. If inverse_func is None, then inverse_func will be the identity function. accept_sparse : bool, default=False Indicate that func accepts a sparse matrix as input. Otherwise, if accept_sparse is false, sparse matrix inputs will cause an exception to be raised. check_inverse : bool, default=True Whether to check that ``func`` followed by ``inverse_func`` leads to the original inputs. It can be used for a sanity check, raising a warning when the condition is not fulfilled. kw_args : dict, default=None Dictionary of additional keyword arguments to pass to func. 
inv_kw_args : dict, default=None Dictionary of additional keyword arguments to pass to inverse_func. Examples -------- >>> import cupy as cp >>> from cuml.preprocessing import FunctionTransformer >>> transformer = FunctionTransformer(cp.log1p) >>> X = cp.array([[0, 1], [2, 3]]) >>> transformer.transform(X) array([[0. , 0.6931...], [1.0986..., 1.3862...]]) """ @_deprecate_pos_args(version="0.20") def __init__(self, *, func=None, inverse_func=None, accept_sparse=False, check_inverse=True, kw_args=None, inv_kw_args=None): self.func = func self.inverse_func = inverse_func self.accept_sparse = accept_sparse self.check_inverse = check_inverse self.kw_args = kw_args self.inv_kw_args = inv_kw_args def _check_input(self, X): return self._validate_data(X, accept_sparse=self.accept_sparse) def _check_inverse_transform(self, X): """Check that func and inverse_func are the inverse.""" interval = max(1, X.shape[0] // 100) selection = [i * interval for i in range(X.shape[0] // interval)] with cuml.using_output_type("cupy"): X_round_trip = self.inverse_transform(self.transform(X[selection])) if not _allclose_dense_sparse(X[selection], X_round_trip): warnings.warn("The provided functions are not strictly" " inverse of each other. If you are sure you" " want to proceed regardless, set" " 'check_inverse=False'.", UserWarning) def fit(self, X, y=None) -> "FunctionTransformer": """Fit transformer by checking X. Parameters ---------- X : {array-like, sparse matrix}, shape (n_samples, n_features) Input array. Returns ------- self """ X = self._check_input(X) if (self.check_inverse and not (self.func is None or self.inverse_func is None)): self._check_inverse_transform(X) return self def transform(self, X) -> SparseCumlArray: """Transform X using the forward function. Parameters ---------- X : {array-like, sparse matrix}, shape (n_samples, n_features) Input array. Returns ------- X_out : {array-like, sparse matrix}, shape (n_samples, n_features) Transformed input. 
""" return self._transform(X, func=self.func, kw_args=self.kw_args) def inverse_transform(self, X) -> SparseCumlArray: """Transform X using the inverse function. Parameters ---------- X : {array-like, sparse matrix}, shape (n_samples, n_features) Input array. Returns ------- X_out : {array-like, sparse matrix}, shape (n_samples, n_features) Transformed input. """ return self._transform(X, func=self.inverse_func, kw_args=self.inv_kw_args) def _transform(self, X, func=None, kw_args=None): X = self._check_input(X) if func is None: func = _identity return func(X, **(kw_args if kw_args else {})) def _more_tags(self): return {'stateless': True, 'requires_y': False}
# File: rapidsai_public_repos/cuml/python/cuml/_thirdparty/sklearn/preprocessing/_data.py
# Original authors from Scikit-Learn: # Alexandre Gramfort <alexandre.gramfort@inria.fr> # Mathieu Blondel <mathieu@mblondel.org> # Olivier Grisel <olivier.grisel@ensta.org> # Andreas Mueller <amueller@ais.uni-bonn.de> # Eric Martin <eric@ericmart.in> # Giorgio Patrini <giorgio.patrini@anu.edu.au> # Eric Chang <ericchang2017@u.northwestern.edu> # License: BSD 3 clause # This code originates from the Scikit-Learn library, # it was since modified to allow GPU acceleration. # This code is under BSD 3 clause license. # Authors mentioned above do not endorse or promote this production. from ....internals.memory_utils import using_output_type from ....internals import _deprecate_pos_args from ....internals import api_return_generic from ....common.array_descriptor import CumlArrayDescriptor from ....internals.array_sparse import SparseCumlArray from ....internals.array import CumlArray from ....thirdparty_adapters.sparsefuncs_fast import \ (inplace_csr_row_normalize_l1, inplace_csr_row_normalize_l2, csr_polynomial_expansion) from ..utils.sparsefuncs import (inplace_column_scale, min_max_axis, mean_variance_axis) from ..utils.validation import (check_is_fitted, FLOAT_DTYPES, check_random_state) from ..utils.extmath import _incremental_mean_and_var from ..utils.extmath import row_norms from ....thirdparty_adapters import check_array from cuml.internals.mixins import AllowNaNTagMixin, SparseInputTagMixin, \ StatelessTagMixin from ..utils.skl_dependencies import BaseEstimator, TransformerMixin from scipy.special import boxcox from scipy import optimize from cuml.internals.safe_imports import cpu_only_import_from from cuml.internals.safe_imports import gpu_only_import_from from cuml.internals.safe_imports import gpu_only_import from itertools import chain, combinations import numbers import warnings from itertools import combinations_with_replacement as combinations_w_r from cuml.internals.safe_imports import cpu_only_import cpu_np = cpu_only_import('numpy') np = 
gpu_only_import('cupy') sparse = gpu_only_import_from('cupyx.scipy', 'sparse') stats = cpu_only_import_from('scipy', 'stats') BOUNDS_THRESHOLD = 1e-7 __all__ = [ 'Binarizer', 'MinMaxScaler', 'MaxAbsScaler', 'Normalizer', 'RobustScaler', 'StandardScaler', 'add_dummy_feature', 'binarize', 'normalize', 'scale', 'robust_scale', 'maxabs_scale', 'minmax_scale' ] def _handle_zeros_in_scale(scale, copy=True): ''' Makes sure that whenever scale is zero, we handle it correctly. This happens in most scalers when we have constant features.''' # if we are fitting on 1D arrays, scale might be a scalar if np.isscalar(scale): if scale == .0: scale = 1. return scale elif isinstance(scale, np.ndarray): if copy: # New array to avoid side-effects scale = scale.copy() scale[scale == 0.0] = 1.0 return scale @_deprecate_pos_args(version="21.06") @api_return_generic(get_output_type=True) def scale(X, *, axis=0, with_mean=True, with_std=True, copy=True): """Standardize a dataset along any axis Center to the mean and component wise scale to unit variance. Parameters ---------- X : {array-like, sparse matrix} The data to center and scale. axis : int (0 by default) axis used to compute the means and standard deviations along. If 0, independently standardize each feature, otherwise (if 1) standardize each sample. with_mean : boolean, True by default If True, center the data before scaling. with_std : boolean, True by default If True, scale the data to unit variance (or equivalently, unit standard deviation). copy : boolean, optional, default True Whether a forced copy will be triggered. If copy=False, a copy might be triggered by a conversion. Notes ----- This implementation will refuse to center sparse matrices since it would make them non-sparse and would potentially crash the program with memory exhaustion problems. 
Instead the caller is expected to either set explicitly `with_mean=False` (in that case, only variance scaling will be performed on the features of the sparse matrix) or to densify the matrix if he/she expects the materialized dense array to fit in memory. For optimal processing the caller should pass a CSC matrix. NaNs are treated as missing values: disregarded to compute the statistics, and maintained during the data transformation. We use a biased estimator for the standard deviation, equivalent to `numpy.std(x, ddof=0)`. Note that the choice of `ddof` is unlikely to affect model performance. See also -------- StandardScaler: Performs scaling to unit variance using the``Transformer`` API """ # noqa X = check_array(X, accept_sparse=['csr', 'csc'], copy=copy, ensure_2d=False, estimator='the scale function', dtype=FLOAT_DTYPES, force_all_finite='allow-nan') if sparse.issparse(X): if with_mean: raise ValueError( "Cannot center sparse matrices: pass `with_mean=False` instead" " See docstring for motivation and alternatives.") if axis != 0: raise ValueError("Can only scale sparse matrix on axis=0, " " got axis=%d" % axis) if with_std: _, var = mean_variance_axis(X, axis=0) var = _handle_zeros_in_scale(var, copy=False) inplace_column_scale(X, 1 / np.sqrt(var)) else: X = np.asarray(X) if with_mean: mean_ = np.nanmean(X, axis) if with_std: scale_ = np.nanstd(X, axis) # Xr is a view on the original array that enables easy use of # broadcasting on the axis in which we are interested in Xr = np.rollaxis(X, axis) if with_mean: Xr -= mean_ mean_1 = np.nanmean(Xr, axis=0) # Verify that mean_1 is 'close to zero'. If X contains very # large values, mean_1 can also be very large, due to a lack of # precision of mean_. In this case, a pre-scaling of the # concerned feature is efficient, for instance by its mean or # maximum. if not np.allclose(mean_1, 0): warnings.warn("Numerical issues were encountered " "when centering the data " "and might not be solved. 
Dataset may " "contain too large values. You may need " "to prescale your features.") Xr -= mean_1 if with_std: scale_ = _handle_zeros_in_scale(scale_, copy=False) Xr /= scale_ if with_mean: mean_2 = np.nanmean(Xr, axis=0) # If mean_2 is not 'close to zero', it comes from the fact that # scale_ is very small so that mean_2 = mean_1/scale_ > 0, even # if mean_1 was close to zero. The problem is thus essentially # due to the lack of precision of mean_. A solution is then to # subtract the mean again: if not np.allclose(mean_2, 0): warnings.warn("Numerical issues were encountered " "when scaling the data " "and might not be solved. The standard " "deviation of the data is probably " "very close to 0. ") Xr -= mean_2 return X class MinMaxScaler(TransformerMixin, BaseEstimator, AllowNaNTagMixin): """Transform features by scaling each feature to a given range. This estimator scales and translates each feature individually such that it is in the given range on the training set, e.g. between zero and one. The transformation is given by:: X_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0)) X_scaled = X_std * (max - min) + min where min, max = feature_range. This transformation is often used as an alternative to zero mean, unit variance scaling. Parameters ---------- feature_range : tuple (min, max), default=(0, 1) Desired range of transformed data. copy : bool, default=True Whether a forced copy will be triggered. If copy=False, a copy might be triggered by a conversion. Attributes ---------- min_ : ndarray of shape (n_features,) Per feature adjustment for minimum. Equivalent to ``min - X.min(axis=0) * self.scale_`` scale_ : ndarray of shape (n_features,) Per feature relative scaling of the data. 
Equivalent to ``(max - min) / (X.max(axis=0) - X.min(axis=0))`` data_min_ : ndarray of shape (n_features,) Per feature minimum seen in the data data_max_ : ndarray of shape (n_features,) Per feature maximum seen in the data data_range_ : ndarray of shape (n_features,) Per feature range ``(data_max_ - data_min_)`` seen in the data n_samples_seen_ : int The number of samples processed by the estimator. It will be reset on new calls to fit, but increments across ``partial_fit`` calls. Examples -------- >>> from cuml.preprocessing import MinMaxScaler >>> import cupy as cp >>> data = [[-1, 2], [-0.5, 6], [0, 10], [1, 18]] >>> data = cp.array(data) >>> scaler = MinMaxScaler() >>> print(scaler.fit(data)) MinMaxScaler() >>> print(scaler.data_max_) [ 1. 18.] >>> print(scaler.transform(data)) [[0. 0. ] [0.25 0.25] [0.5 0.5 ] [1. 1. ]] >>> print(scaler.transform(cp.array([[2, 2]]))) [[1.5 0. ]] See also -------- minmax_scale: Equivalent function without the estimator API. Notes ----- NaNs are treated as missing values: disregarded in fit, and maintained in transform. """ scale_ = CumlArrayDescriptor() min_ = CumlArrayDescriptor() n_samples_seen_ = CumlArrayDescriptor() data_min_ = CumlArrayDescriptor() data_max_ = CumlArrayDescriptor() data_range_ = CumlArrayDescriptor() @_deprecate_pos_args(version="21.06") def __init__(self, feature_range=(0, 1), *, copy=True): self.feature_range = feature_range self.copy = copy def _reset(self): """Reset internal data-dependent state of the scaler, if necessary. __init__ parameters are not touched. """ # Checking one attribute is enough, because they are all set together # in partial_fit if hasattr(self, 'scale_'): del self.scale_ del self.min_ del self.n_samples_seen_ del self.data_min_ del self.data_max_ del self.data_range_ def get_param_names(self): return super().get_param_names() + [ "feature_range", "copy" ] def fit(self, X, y=None) -> "MinMaxScaler": """Compute the minimum and maximum to be used for later scaling. 
Parameters ---------- X : array-like of shape (n_samples, n_features) The data used to compute the per-feature minimum and maximum used for later scaling along the features axis. y : None Ignored. Returns ------- self : object Fitted scaler. """ # Reset internal state before fitting self._reset() return self.partial_fit(X, y) def partial_fit(self, X, y=None) -> "MinMaxScaler": """Online computation of min and max on X for later scaling. All of X is processed as a single batch. This is intended for cases when :meth:`fit` is not feasible due to very large number of `n_samples` or because X is read from a continuous stream. Parameters ---------- X : array-like of shape (n_samples, n_features) The data used to compute the mean and standard deviation used for later scaling along the features axis. y : None Ignored. Returns ------- self : object Transformer instance. """ feature_range = self.feature_range if feature_range[0] >= feature_range[1]: raise ValueError("Minimum of desired feature range must be smaller" " than maximum. Got %s." % str(feature_range)) first_pass = not hasattr(self, 'n_samples_seen_') X = self._validate_data(X, reset=first_pass, estimator=self, dtype=FLOAT_DTYPES, force_all_finite="allow-nan") data_min = np.nanmin(X, axis=0) data_max = np.nanmax(X, axis=0) if first_pass: self.n_samples_seen_ = X.shape[0] else: data_min = np.minimum(self.data_min_, data_min) data_max = np.maximum(self.data_max_, data_max) self.n_samples_seen_ += X.shape[0] data_range = data_max - data_min self.scale_ = ((feature_range[1] - feature_range[0]) / _handle_zeros_in_scale(data_range)) self.min_ = feature_range[0] - data_min * self.scale_ self.data_min_ = data_min self.data_max_ = data_max self.data_range_ = data_range return self def transform(self, X) -> CumlArray: """Scale features of X according to feature_range. Parameters ---------- X : array-like of shape (n_samples, n_features) Input data that will be transformed. 
Returns ------- Xt : array-like of shape (n_samples, n_features) Transformed data. """ check_is_fitted(self) X = check_array(X, copy=self.copy, dtype=FLOAT_DTYPES, force_all_finite="allow-nan") X *= self.scale_ X += self.min_ return X def inverse_transform(self, X) -> CumlArray: """Undo the scaling of X according to feature_range. Parameters ---------- X : array-like of shape (n_samples, n_features) Input data that will be transformed. It cannot be sparse. Returns ------- Xt : array-like of shape (n_samples, n_features) Transformed data. """ check_is_fitted(self) X = check_array(X, copy=self.copy, dtype=FLOAT_DTYPES, force_all_finite="allow-nan") X -= self.min_ X /= self.scale_ return X @_deprecate_pos_args(version="21.06") @api_return_generic(get_output_type=True) def minmax_scale(X, feature_range=(0, 1), *, axis=0, copy=True): """Transform features by scaling each feature to a given range. This estimator scales and translates each feature individually such that it is in the given range on the training set, i.e. between zero and one. The transformation is given by (when ``axis=0``):: X_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0)) X_scaled = X_std * (max - min) + min where min, max = feature_range. The transformation is calculated as (when ``axis=0``):: X_scaled = scale * X + min - X.min(axis=0) * scale where scale = (max - min) / (X.max(axis=0) - X.min(axis=0)) This transformation is often used as an alternative to zero mean, unit variance scaling. Parameters ---------- X : array-like of shape (n_samples, n_features) The data. feature_range : tuple (min, max), default=(0, 1) Desired range of transformed data. axis : int, default=0 Axis used to scale along. If 0, independently scale each feature, otherwise (if 1) scale each sample. copy : bool, default=True Whether a forced copy will be triggered. If copy=False, a copy might be triggered by a conversion. 
See also -------- MinMaxScaler: Performs scaling to a given range using the``Transformer`` API """ # noqa # Unlike the scaler object, this function allows 1d input. # If copy is required, it will be done inside the scaler object. X = check_array(X, copy=False, ensure_2d=False, dtype=FLOAT_DTYPES, force_all_finite='allow-nan') original_ndim = X.ndim if original_ndim == 1: X = X.reshape(X.shape[0], 1) with using_output_type('cupy'): s = MinMaxScaler(feature_range=feature_range, copy=copy) if axis == 0: X = s.fit_transform(X) else: X = s.fit_transform(X.T).T if original_ndim == 1: X = X.ravel() return X class StandardScaler(TransformerMixin, BaseEstimator, AllowNaNTagMixin, SparseInputTagMixin): """Standardize features by removing the mean and scaling to unit variance The standard score of a sample `x` is calculated as: z = (x - u) / s where `u` is the mean of the training samples or zero if `with_mean=False`, and `s` is the standard deviation of the training samples or one if `with_std=False`. Centering and scaling happen independently on each feature by computing the relevant statistics on the samples in the training set. Mean and standard deviation are then stored to be used on later data using :meth:`transform`. Standardization of a dataset is a common requirement for many machine learning estimators: they might behave badly if the individual features do not more or less look like standard normally distributed data (e.g. Gaussian with 0 mean and unit variance). For instance many elements used in the objective function of a learning algorithm (such as the RBF kernel of Support Vector Machines or the L1 and L2 regularizers of linear models) assume that all features are centered around 0 and have variance in the same order. If a feature has a variance that is orders of magnitude larger that others, it might dominate the objective function and make the estimator unable to learn from other features correctly as expected. 
This scaler can also be applied to sparse CSR or CSC matrices by passing `with_mean=False` to avoid breaking the sparsity structure of the data. Parameters ---------- copy : boolean, optional, default True Whether a forced copy will be triggered. If copy=False, a copy might be triggered by a conversion. with_mean : boolean, True by default If True, center the data before scaling. This does not work (and will raise an exception) when attempted on sparse matrices, because centering them entails building a dense matrix which in common use cases is likely to be too large to fit in memory. with_std : boolean, True by default If True, scale the data to unit variance (or equivalently, unit standard deviation). Attributes ---------- scale_ : ndarray or None, shape (n_features,) Per feature relative scaling of the data. This is calculated using `sqrt(var_)`. Equal to ``None`` when ``with_std=False``. mean_ : ndarray or None, shape (n_features,) The mean value for each feature in the training set. Equal to ``None`` when ``with_mean=False``. var_ : ndarray or None, shape (n_features,) The variance for each feature in the training set. Used to compute `scale_`. Equal to ``None`` when ``with_std=False``. n_samples_seen_ : int or array, shape (n_features,) The number of samples processed by the estimator for each feature. If there are not missing samples, the ``n_samples_seen`` will be an integer, otherwise it will be an array. Will be reset on new calls to fit, but increments across ``partial_fit`` calls. Examples -------- >>> from cuml.preprocessing import StandardScaler >>> import cupy as cp >>> data = [[0, 0], [0, 0], [1, 1], [1, 1]] >>> data = cp.array(data) >>> scaler = StandardScaler() >>> print(scaler.fit(data)) StandardScaler() >>> print(scaler.mean_) [0.5 0.5] >>> print(scaler.transform(data)) [[-1. -1.] [-1. -1.] [ 1. 1.] [ 1. 1.]] >>> print(scaler.transform(cp.array([[2, 2]]))) [[3. 3.]] See also -------- scale: Equivalent function without the estimator API. 
:class:`cuml.decomposition.PCA` Further removes the linear correlation across features with 'whiten=True'. Notes ----- NaNs are treated as missing values: disregarded in fit, and maintained in transform. We use a biased estimator for the standard deviation, equivalent to `numpy.std(x, ddof=0)`. Note that the choice of `ddof` is unlikely to affect model performance. """ # noqa scale_ = CumlArrayDescriptor() n_samples_seen_ = CumlArrayDescriptor() mean_ = CumlArrayDescriptor() var_ = CumlArrayDescriptor() @_deprecate_pos_args(version="21.06") def __init__(self, *, copy=True, with_mean=True, with_std=True): self.with_mean = with_mean self.with_std = with_std self.copy = copy def _reset(self): """Reset internal data-dependent state of the scaler, if necessary. __init__ parameters are not touched. """ # Checking one attribute is enough, because they are all set together # in partial_fit if hasattr(self, 'scale_'): del self.scale_ del self.n_samples_seen_ del self.mean_ del self.var_ def get_param_names(self): return super().get_param_names() + [ "with_mean", "with_std", "copy" ] def fit(self, X, y=None) -> "StandardScaler": """Compute the mean and std to be used for later scaling. Parameters ---------- X : {array-like, sparse matrix}, shape [n_samples, n_features] The data used to compute the mean and standard deviation used for later scaling along the features axis. y : None Ignored """ # Reset internal state before fitting self._reset() return self.partial_fit(X, y) def partial_fit(self, X, y=None) -> "StandardScaler": """ Online computation of mean and std on X for later scaling. All of X is processed as a single batch. This is intended for cases when :meth:`fit` is not feasible due to very large number of `n_samples` or because X is read from a continuous stream. The algorithm for incremental mean and std is given in Equation 1.5a,b in Chan, Tony F., Gene H. Golub, and Randall J. LeVeque. "Algorithms for computing the sample variance: Analysis and recommendations." 
The American Statistician 37.3 (1983): 242-247: Parameters ---------- X : {array-like, sparse matrix}, shape [n_samples, n_features] The data used to compute the mean and standard deviation used for later scaling along the features axis. y : None Ignored. Returns ------- self : object Transformer instance. """ X = self._validate_data(X, accept_sparse=('csr', 'csc'), estimator=self, dtype=FLOAT_DTYPES, force_all_finite='allow-nan') # Even in the case of `with_mean=False`, we update the mean anyway # This is needed for the incremental computation of the var # See incr_mean_variance_axis and _incremental_mean_variance_axis # if n_samples_seen_ is an integer (i.e. no missing values), we need to # transform it to a NumPy array of shape (n_features,) required by # incr_mean_variance_axis and _incremental_variance_axis if (hasattr(self, 'n_samples_seen_') and isinstance(self.n_samples_seen_, numbers.Integral)): self.n_samples_seen_ = np.repeat( self.n_samples_seen_, X.shape[1]).astype(np.int64, copy=False) if sparse.issparse(X): if self.with_mean: raise ValueError( "Cannot center sparse matrices: pass `with_mean=False` " "instead. 
See docstring for motivation and alternatives.") if X.format == 'csr': X = X.tocsc() counts_nan = np.empty(X.shape[1]) _isnan = np.isnan(X.data) start = X.indptr[0] for i, end in enumerate(X.indptr[1:]): counts_nan[i] = _isnan[start:end].sum() start = end if not hasattr(self, 'n_samples_seen_'): self.n_samples_seen_ = ( X.shape[0] - counts_nan).astype(np.int64, copy=False) if self.with_std: # First pass if not hasattr(self, 'scale_'): self.mean_, self.var_ = mean_variance_axis(X, axis=0) # TODO """ # Next passes else: self.mean_, self.var_, self.n_samples_seen_ = \ incr_mean_variance_axis(X, axis=0, last_mean=self.mean_, last_var=self.var_, last_n=self.n_samples_seen_) """ else: self.mean_ = None self.var_ = None if hasattr(self, 'scale_'): self.n_samples_seen_ += X.shape[0] - counts_nan else: if not hasattr(self, 'n_samples_seen_'): self.n_samples_seen_ = np.zeros(X.shape[1], dtype=np.int64) # First pass if not hasattr(self, 'scale_'): self.mean_ = .0 if self.with_std: self.var_ = .0 else: self.var_ = None if not self.with_mean and not self.with_std: self.mean_ = None self.var_ = None self.n_samples_seen_ += X.shape[0] - np.isnan(X).sum(axis=0) else: self.mean_, self.var_, self.n_samples_seen_ = \ _incremental_mean_and_var(X, self.mean_, self.var_, self.n_samples_seen_) # for backward-compatibility, reduce n_samples_seen_ to an integer # if the number of samples is the same for each feature (i.e. no # missing values) ptp = np.amax(self.n_samples_seen_) - np.amin(self.n_samples_seen_) if ptp == 0: self.n_samples_seen_ = self.n_samples_seen_[0] del ptp if self.with_std: self.scale_ = _handle_zeros_in_scale(np.sqrt(self.var_)) else: self.scale_ = None return self def transform(self, X, copy=None) -> SparseCumlArray: """Perform standardization by centering and scaling Parameters ---------- X : {array-like, sparse matrix}, shape [n_samples, n_features] The data used to scale along the features axis. 
copy : bool, optional (default: None) Whether a forced copy will be triggered. If copy=False, a copy might be triggered by a conversion. """ check_is_fitted(self) copy = copy if copy is not None else self.copy X = self._validate_data(X, reset=False, accept_sparse=['csr', 'csc'], copy=copy, estimator=self, dtype=FLOAT_DTYPES, force_all_finite='allow-nan') if sparse.issparse(X): if self.with_mean: raise ValueError( "Cannot center sparse matrices: pass `with_mean=False` " "instead. See docstring for motivation and alternatives.") if self.scale_ is not None: inplace_column_scale(X, 1 / self.scale_) else: if self.with_mean: X -= self.mean_ if self.with_std: X /= self.scale_ return X def inverse_transform(self, X, copy=None) -> SparseCumlArray: """Scale back the data to the original representation Parameters ---------- X : {array-like, sparse matrix}, shape [n_samples, n_features] The data used to scale along the features axis. copy : bool, optional (default: None) Whether a forced copy will be triggered. If copy=False, a copy might be triggered by a conversion. Returns ------- X_tr : {array-like, sparse matrix}, shape [n_samples, n_features] Transformed array. """ check_is_fitted(self) copy = copy if copy is not None else self.copy X = check_array(X, accept_sparse=['csr', 'csc'], copy=copy, estimator=self, dtype=FLOAT_DTYPES, force_all_finite='allow-nan') if sparse.issparse(X): if self.with_mean: raise ValueError( "Cannot uncenter sparse matrices: pass `with_mean=False` " "instead See docstring for motivation and alternatives.") if not sparse.isspmatrix_csr(X): X = X.tocsr() copy = False if copy: X = X.copy() if self.scale_ is not None: inplace_column_scale(X, self.scale_) else: X = np.asarray(X) if copy: X = X.copy() if self.with_std: X *= self.scale_ if self.with_mean: X += self.mean_ return X class MaxAbsScaler(TransformerMixin, BaseEstimator, AllowNaNTagMixin, SparseInputTagMixin): """Scale each feature by its maximum absolute value. 
This estimator scales and translates each feature individually such that the maximal absolute value of each feature in the training set will be 1.0. It does not shift/center the data, and thus does not destroy any sparsity. This scaler can also be applied to sparse CSR or CSC matrices. Parameters ---------- copy : boolean, optional, default is True Whether a forced copy will be triggered. If copy=False, a copy might be triggered by a conversion. Attributes ---------- scale_ : ndarray, shape (n_features,) Per feature relative scaling of the data. max_abs_ : ndarray, shape (n_features,) Per feature maximum absolute value. n_samples_seen_ : int The number of samples processed by the estimator. Will be reset on new calls to fit, but increments across ``partial_fit`` calls. Examples -------- >>> from cuml.preprocessing import MaxAbsScaler >>> import cupy as cp >>> X = [[ 1., -1., 2.], ... [ 2., 0., 0.], ... [ 0., 1., -1.]] >>> X = cp.array(X) >>> transformer = MaxAbsScaler().fit(X) >>> transformer MaxAbsScaler() >>> transformer.transform(X) array([[ 0.5, -1. , 1. ], [ 1. , 0. , 0. ], [ 0. , 1. , -0.5]]) See also -------- maxabs_scale: Equivalent function without the estimator API. Notes ----- NaNs are treated as missing values: disregarded in fit, and maintained in transform. """ scale_ = CumlArrayDescriptor() n_samples_seen_ = CumlArrayDescriptor() max_abs_ = CumlArrayDescriptor() @_deprecate_pos_args(version="21.06") def __init__(self, *, copy=True): self.copy = copy def _reset(self): """Reset internal data-dependent state of the scaler, if necessary. __init__ parameters are not touched. """ # Checking one attribute is enough, because they are all set together # in partial_fit if hasattr(self, 'scale_'): del self.scale_ del self.n_samples_seen_ del self.max_abs_ def get_param_names(self): return super().get_param_names() + [ "copy" ] def fit(self, X, y=None) -> "MaxAbsScaler": """Compute the maximum absolute value to be used for later scaling. 
Parameters ---------- X : {array-like, sparse matrix}, shape [n_samples, n_features] The data used to compute the per-feature minimum and maximum used for later scaling along the features axis. """ # Reset internal state before fitting self._reset() return self.partial_fit(X, y) def partial_fit(self, X, y=None) -> "MaxAbsScaler": """ Online computation of max absolute value of X for later scaling. All of X is processed as a single batch. This is intended for cases when :meth:`fit` is not feasible due to very large number of `n_samples` or because X is read from a continuous stream. Parameters ---------- X : {array-like, sparse matrix}, shape [n_samples, n_features] The data used to compute the mean and standard deviation used for later scaling along the features axis. y : None Ignored. Returns ------- self : object Transformer instance. """ first_pass = not hasattr(self, 'n_samples_seen_') X = self._validate_data(X, reset=first_pass, accept_sparse=('csr', 'csc'), estimator=self, dtype=FLOAT_DTYPES, force_all_finite='allow-nan') if sparse.issparse(X): mins, maxs = min_max_axis(X, axis=0, ignore_nan=True) max_abs = np.maximum(np.abs(mins), np.abs(maxs)) else: max_abs = np.nanmax(np.abs(X), axis=0) if first_pass: self.n_samples_seen_ = X.shape[0] else: max_abs = np.maximum(self.max_abs_, max_abs) self.n_samples_seen_ += X.shape[0] self.max_abs_ = max_abs self.scale_ = _handle_zeros_in_scale(max_abs) return self def transform(self, X) -> SparseCumlArray: """Scale the data Parameters ---------- X : {array-like, sparse matrix} The data that should be scaled. 
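The statistics that ``partial_fit`` accumulates above can be sketched in a few lines of plain NumPy (a stand-in for the CuPy-backed arrays cuML actually uses; the helper name below is purely illustrative): each batch contributes its per-feature NaN-ignoring max absolute value, batches are merged with an element-wise maximum, and zero maxima are replaced by 1 so that constant-zero features pass through unscaled.

```python
import numpy as np


def max_abs_stats(batches):
    """Accumulate per-feature max absolute values across batches,
    ignoring NaNs, mirroring how partial_fit merges statistics."""
    max_abs = None
    n_seen = 0
    for X in batches:
        # NaN-ignoring per-column maximum absolute value of this batch
        batch_max = np.nanmax(np.abs(X), axis=0)
        max_abs = batch_max if max_abs is None else np.maximum(max_abs, batch_max)
        n_seen += X.shape[0]
    # zeros would cause division by zero, so they scale by 1 instead
    scale = np.where(max_abs == 0.0, 1.0, max_abs)
    return max_abs, scale, n_seen


X1 = np.array([[1., -2.], [0., 4.]])
X2 = np.array([[-3., 1.]])
max_abs, scale, n = max_abs_stats([X1, X2])
```

Transforming then reduces to `X / scale`, which is why the estimator never needs to revisit earlier batches.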
""" check_is_fitted(self) X = check_array(X, accept_sparse=('csr', 'csc'), copy=self.copy, estimator=self, dtype=FLOAT_DTYPES, force_all_finite='allow-nan') if sparse.issparse(X): inplace_column_scale(X, 1.0 / self.scale_) else: X /= self.scale_ return X def inverse_transform(self, X) -> SparseCumlArray: """Scale back the data to the original representation Parameters ---------- X : {array-like, sparse matrix} The data that should be transformed back. """ check_is_fitted(self) X = check_array(X, accept_sparse=('csr', 'csc'), copy=self.copy, estimator=self, dtype=FLOAT_DTYPES, force_all_finite='allow-nan') if sparse.issparse(X): inplace_column_scale(X, self.scale_) else: X *= self.scale_ return X @_deprecate_pos_args(version="21.06") @api_return_generic(get_output_type=True) def maxabs_scale(X, *, axis=0, copy=True): """Scale each feature to the [-1, 1] range without breaking the sparsity. This estimator scales each feature individually such that the maximal absolute value of each feature in the training set will be 1.0. This scaler can also be applied to sparse CSR or CSC matrices. Parameters ---------- X : {array-like, sparse matrix}, shape (n_samples, n_features) The data. axis : int (0 by default) axis used to scale along. If 0, independently scale each feature, otherwise (if 1) scale each sample. copy : boolean, optional, default is True Whether a forced copy will be triggered. If copy=False, a copy might be triggered by a conversion. See also -------- MaxAbsScaler: Performs scaling to the [-1, 1] range using the``Transformer`` API Notes ----- NaNs are treated as missing values: disregarded to compute the statistics, and maintained during the data transformation. """ # noqa # Unlike the scaler object, this function allows 1d input. # If copy is required, it will be done inside the scaler object. 
X = check_array(X, accept_sparse=('csr', 'csc'), copy=False, ensure_2d=False, dtype=FLOAT_DTYPES, force_all_finite='allow-nan') original_ndim = X.ndim if original_ndim == 1: X = X.reshape(X.shape[0], 1) with using_output_type('cupy'): s = MaxAbsScaler(copy=copy) if axis == 0: X = s.fit_transform(X) else: X = s.fit_transform(X.T).T if original_ndim == 1: X = X.ravel() return X class RobustScaler(TransformerMixin, BaseEstimator, AllowNaNTagMixin, SparseInputTagMixin): """Scale features using statistics that are robust to outliers. This Scaler removes the median and scales the data according to the quantile range (defaults to IQR: Interquartile Range). The IQR is the range between the 1st quartile (25th quantile) and the 3rd quartile (75th quantile). Centering and scaling happen independently on each feature by computing the relevant statistics on the samples in the training set. Median and interquartile range are then stored to be used on later data using the ``transform`` method. Standardization of a dataset is a common requirement for many machine learning estimators. Typically this is done by removing the mean and scaling to unit variance. However, outliers can often influence the sample mean / variance in a negative way. In such cases, the median and the interquartile range often give better results. Parameters ---------- with_centering : boolean, default=True If True, center the data before scaling. This will cause ``transform`` to raise an exception when attempted on sparse matrices, because centering them entails building a dense matrix which in common use cases is likely to be too large to fit in memory. with_scaling : boolean, default=True If True, scale the data to interquartile range. quantile_range : tuple (q_min, q_max), 0.0 < q_min < q_max < 100.0 Default: (25.0, 75.0) = (1st quantile, 3rd quantile) = IQR Quantile range used to calculate ``scale_``. copy : boolean, optional, default=True Whether a forced copy will be triggered. 
If copy=False, a copy might be triggered by a conversion. Attributes ---------- center_ : array of floats The median value for each feature in the training set. scale_ : array of floats The (scaled) interquartile range for each feature in the training set. Examples -------- >>> from cuml.preprocessing import RobustScaler >>> import cupy as cp >>> X = [[ 1., -2., 2.], ... [ -2., 1., 3.], ... [ 4., 1., -2.]] >>> X = cp.array(X) >>> transformer = RobustScaler().fit(X) >>> transformer RobustScaler() >>> transformer.transform(X) array([[ 0. , -2. , 0. ], [-1. , 0. , 0.4], [ 1. , 0. , -1.6]]) See also -------- robust_scale: Equivalent function without the estimator API. cuml.decomposition.PCA: Further removes the linear correlation across features with ``whiten=True``. """ center_ = CumlArrayDescriptor() scale_ = CumlArrayDescriptor() @_deprecate_pos_args(version="21.06") def __init__(self, *, with_centering=True, with_scaling=True, quantile_range=(25.0, 75.0), copy=True): self.with_centering = with_centering self.with_scaling = with_scaling self.quantile_range = quantile_range self.copy = copy def get_param_names(self): return super().get_param_names() + [ "with_centering", "with_scaling", "quantile_range", "copy" ] def fit(self, X, y=None) -> "RobustScaler": """Compute the median and quantiles to be used for scaling. Parameters ---------- X : {array-like, CSC matrix}, shape [n_samples, n_features] The data used to compute the median and quantiles used for later scaling along the features axis. """ # at fit, convert sparse matrices to csc for optimized computation of # the quantiles X = self._validate_data(X, accept_sparse='csc', estimator=self, dtype=FLOAT_DTYPES, force_all_finite='allow-nan') q_min, q_max = self.quantile_range if not 0 <= q_min <= q_max <= 100: raise ValueError("Invalid quantile range: %s" % str(self.quantile_range)) if self.with_centering: if sparse.issparse(X): raise ValueError( "Cannot center sparse matrices: use `with_centering=False`" " instead. 
See docstring for motivation and alternatives.") middle, is_odd = divmod(X.shape[0], 2) X_sorted = np.sort(X, axis=0) if is_odd: self.center_ = X_sorted[middle] else: elm1 = X_sorted[middle-1] elm2 = X_sorted[middle] self.center_ = (elm1 + elm2) / 2. else: self.center_ = None if self.with_scaling: quantiles = [] for feature_idx in range(X.shape[1]): if sparse.issparse(X): column_nnz_data = X.data[X.indptr[feature_idx]: X.indptr[feature_idx + 1]] column_data = np.zeros(shape=X.shape[0], dtype=X.dtype) column_data[:len(column_nnz_data)] = column_nnz_data else: column_data = X[:, feature_idx] is_not_nan = ~np.isnan(column_data).astype(bool) column_data = column_data[is_not_nan] quantiles.append(np.percentile(column_data, self.quantile_range)) quantiles = np.array(quantiles).T self.scale_ = quantiles[1] - quantiles[0] self.scale_ = _handle_zeros_in_scale(self.scale_, copy=False) else: self.scale_ = None return self def transform(self, X) -> SparseCumlArray: """Center and scale the data. Parameters ---------- X : {array-like, sparse matrix} The data used to scale along the specified axis. """ check_is_fitted(self) X = check_array(X, accept_sparse=('csr', 'csc'), copy=self.copy, estimator=self, dtype=FLOAT_DTYPES, force_all_finite='allow-nan') if sparse.issparse(X): if self.with_scaling: inplace_column_scale(X, 1.0 / self.scale_) else: if self.with_centering: X -= self.center_ if self.with_scaling: X /= self.scale_ return X def inverse_transform(self, X) -> SparseCumlArray: """Scale back the data to the original representation Parameters ---------- X : {array-like, sparse matrix} The data used to scale along the specified axis. 
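The dense branch of ``fit`` above computes the median by sorting each column (averaging the two middle rows for an even sample count) and the scale as the gap between the two requested percentiles, ignoring NaNs. A minimal NumPy sketch of that arithmetic, with an illustrative helper name not taken from the module:

```python
import numpy as np


def robust_center_scale(X, quantile_range=(25.0, 75.0)):
    """Median and interquartile-range statistics, mirroring the dense
    branch of RobustScaler.fit (NumPy stand-in for device arrays)."""
    X = np.asarray(X, dtype=float)
    middle, is_odd = divmod(X.shape[0], 2)
    X_sorted = np.sort(X, axis=0)
    if is_odd:
        center = X_sorted[middle]
    else:
        center = (X_sorted[middle - 1] + X_sorted[middle]) / 2.0
    quantiles = []
    for j in range(X.shape[1]):
        col = X[:, j]
        col = col[~np.isnan(col)]  # NaNs are treated as missing
        quantiles.append(np.percentile(col, quantile_range))
    quantiles = np.array(quantiles).T
    scale = quantiles[1] - quantiles[0]
    scale = np.where(scale == 0.0, 1.0, scale)  # constant features pass through
    return center, scale


X = np.array([[1., -2., 2.], [-2., 1., 3.], [4., 1., -2.]])
center, scale = robust_center_scale(X)
```

On the docstring's example matrix this reproduces the estimator's output via `(X - center) / scale`.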
""" check_is_fitted(self) X = check_array(X, accept_sparse=('csr', 'csc'), copy=self.copy, estimator=self, dtype=FLOAT_DTYPES, force_all_finite='allow-nan') if sparse.issparse(X): if self.with_scaling: inplace_column_scale(X, self.scale_) else: if self.with_scaling: X *= self.scale_ if self.with_centering: X += self.center_ return X @_deprecate_pos_args(version="21.06") @api_return_generic(get_output_type=True) def robust_scale(X, *, axis=0, with_centering=True, with_scaling=True, quantile_range=(25.0, 75.0), copy=True): """ Standardize a dataset along any axis Center to the median and component wise scale according to the interquartile range. Parameters ---------- X : {array-like, sparse matrix} The data to center and scale. axis : int (0 by default) axis used to compute the medians and IQR along. If 0, independently scale each feature, otherwise (if 1) scale each sample. with_centering : boolean, True by default If True, center the data before scaling. with_scaling : boolean, True by default If True, scale the data to unit variance (or equivalently, unit standard deviation). quantile_range : tuple (q_min, q_max), 0.0 < q_min < q_max < 100.0 Default: (25.0, 75.0) = (1st quantile, 3rd quantile) = IQR Quantile range used to calculate ``scale_``. copy : boolean, optional, default is True Whether a forced copy will be triggered. If copy=False, a copy might be triggered by a conversion. Notes ----- This implementation will refuse to center sparse matrices since it would make them non-sparse and would potentially crash the program with memory exhaustion problems. Instead the caller is expected to either set explicitly `with_centering=False` (in that case, only variance scaling will be performed on the features of the CSR matrix) or to densify the matrix if he/she expects the materialized dense array to fit in memory. To avoid memory copy the caller should pass a CSR matrix. 
See also -------- RobustScaler: Performs centering and scaling using the ``Transformer`` API """ X = check_array(X, accept_sparse=('csr', 'csc'), copy=False, ensure_2d=False, dtype=FLOAT_DTYPES, force_all_finite='allow-nan') original_ndim = X.ndim if original_ndim == 1: X = X.reshape(X.shape[0], 1) with using_output_type("cupy"): s = RobustScaler(with_centering=with_centering, with_scaling=with_scaling, quantile_range=quantile_range, copy=copy) if axis == 0: X = s.fit_transform(X) else: X = s.fit_transform(X.T).T if original_ndim == 1: X = X.ravel() return X class PolynomialFeatures(TransformerMixin, BaseEstimator, AllowNaNTagMixin, SparseInputTagMixin): """Generate polynomial and interaction features. Generate a new feature matrix consisting of all polynomial combinations of the features with degree less than or equal to the specified degree. For example, if an input sample is two dimensional and of the form [a, b], the degree-2 polynomial features are [1, a, b, a^2, ab, b^2]. Parameters ---------- degree : integer The degree of the polynomial features. Default = 2. interaction_only : boolean, default = False If true, only interaction features are produced: features that are products of at most ``degree`` *distinct* input features (so not ``x[1] ** 2``, ``x[0] * x[2] ** 3``, etc.). include_bias : boolean If True (default), then include a bias column, the feature in which all polynomial powers are zero (i.e. a column of ones - acts as an intercept term in a linear model). order : str in {'C', 'F'}, default 'C' Order of output array in the dense case. 'F' order is faster to compute, but may slow down subsequent estimators. 
Examples -------- >>> import numpy as np >>> from cuml.preprocessing import PolynomialFeatures >>> X = np.arange(6).reshape(3, 2) >>> X array([[0, 1], [2, 3], [4, 5]]) >>> poly = PolynomialFeatures(2) >>> poly.fit_transform(X) array([[ 1., 0., 1., 0., 0., 1.], [ 1., 2., 3., 4., 6., 9.], [ 1., 4., 5., 16., 20., 25.]]) >>> poly = PolynomialFeatures(interaction_only=True) >>> poly.fit_transform(X) array([[ 1., 0., 1., 0.], [ 1., 2., 3., 6.], [ 1., 4., 5., 20.]]) Attributes ---------- powers_ : array, shape (n_output_features, n_input_features) powers_[i, j] is the exponent of the jth input in the ith output. n_input_features_ : int The total number of input features. n_output_features_ : int The total number of polynomial output features. The number of output features is computed by iterating over all suitably sized combinations of input features. Notes ----- Be aware that the number of features in the output array scales polynomially in the number of features of the input array, and exponentially in the degree. High degrees can cause overfitting. 
""" @_deprecate_pos_args(version="21.06") def __init__(self, degree=2, *, interaction_only=False, include_bias=True, order='C'): self.degree = degree self.interaction_only = interaction_only self.include_bias = include_bias self.order = order def get_param_names(self): return super().get_param_names() + [ "degree", "interaction_only", "include_bias", "order" ] @staticmethod def _combinations(n_features, degree, interaction_only, include_bias): comb = (combinations if interaction_only else combinations_w_r) start = int(not include_bias) return chain.from_iterable(comb(range(n_features), i) for i in range(start, degree + 1)) @property def powers_(self): check_is_fitted(self) combinations = self._combinations(self.n_input_features_, self.degree, self.interaction_only, self.include_bias) return cpu_np.vstack([cpu_np.bincount(c, minlength=self.n_input_features_) for c in combinations]) def get_feature_names(self, input_features=None): """ Return feature names for output features Parameters ---------- input_features : list of string, length n_features, optional String names for input features if available. By default, "x0", "x1", ... "xn_features" is used. Returns ------- output_feature_names : list of string, length n_output_features """ powers = self.powers_ if input_features is None: input_features = ['x%d' % i for i in range(powers.shape[1])] feature_names = [] for row in powers: inds = cpu_np.where(row)[0] if len(inds): name = " ".join("%s^%d" % (input_features[ind], exp) if exp != 1 else input_features[ind] for ind, exp in zip(inds, row[inds])) else: name = "1" feature_names.append(name) return feature_names def fit(self, X, y=None) -> "PolynomialFeatures": """ Compute number of output features. Parameters ---------- X : array-like, shape (n_samples, n_features) The data. 
Returns ------- self : instance """ n_samples, n_features = self._validate_data( X, accept_sparse=True).shape combinations = self._combinations(n_features, self.degree, self.interaction_only, self.include_bias) self.n_input_features_ = n_features self.n_output_features_ = sum(1 for _ in combinations) return self def transform(self, X) -> SparseCumlArray: """Transform data to polynomial features Parameters ---------- X : {array-like, sparse matrix}, shape [n_samples, n_features] The data to transform, row by row. Prefer CSR over CSC for sparse input (for speed), but CSC is required if the degree is 4 or higher. If the degree is less than 4 and the input format is CSC, it will be converted to CSR, have its polynomial features generated, then converted back to CSC. If the degree is 2 or 3, the method described in "Leveraging Sparsity to Speed Up Polynomial Feature Expansions of CSR Matrices Using K-Simplex Numbers" by Andrew Nystrom and John Hughes is used, which is much faster than the method used on CSC input. For this reason, a CSC input will be converted to CSR, and the output will be converted back to CSC prior to being returned, hence the preference of CSR. Returns ------- XP : {array-like, sparse matrix}, shape [n_samples, NP] The matrix of features, where NP is the number of polynomial features generated from the combination of inputs. 
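The output-feature count that ``fit`` derives above comes from enumerating exponent combinations with ``itertools``: combinations-with-replacement for the full polynomial basis, plain combinations when ``interaction_only=True``, starting at degree 1 when the bias column is excluded. A standalone replay of that counting (helper name is illustrative):

```python
from itertools import chain, combinations, combinations_with_replacement


def n_poly_features(n_features, degree, interaction_only=False,
                    include_bias=True):
    """Count PolynomialFeatures output columns by enumerating the same
    index combinations the estimator's _combinations helper yields."""
    comb = combinations if interaction_only else combinations_with_replacement
    start = int(not include_bias)  # degree-0 term is the bias column
    combos = chain.from_iterable(comb(range(n_features), i)
                                 for i in range(start, degree + 1))
    return sum(1 for _ in combos)
```

For two input features and degree 2 this yields the six columns `[1, a, b, a^2, ab, b^2]` shown in the class docstring, and four when only interactions are kept.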
""" check_is_fitted(self) X = check_array(X, order='F', dtype=FLOAT_DTYPES, accept_sparse=('csr', 'csc')) n_samples, n_features = X.shape if n_features != self.n_input_features_: raise ValueError("X shape does not match training shape") if sparse.isspmatrix_csr(X): if self.degree > 3: return self.transform(X.tocsc()) # TODO keep order to_stack = [] if self.include_bias: bias = np.ones(shape=(n_samples, 1), dtype=X.dtype) to_stack.append(sparse.csr_matrix(bias)) to_stack.append(X) for deg in range(2, self.degree+1): Xp_next = csr_polynomial_expansion(X, self.interaction_only, deg) if Xp_next is None: break to_stack.append(Xp_next) XP = sparse.hstack(to_stack, format='csr') elif sparse.isspmatrix_csc(X) and self.degree < 4: return self.transform(X.tocsr()) # TODO convert to csc, keep order else: if sparse.isspmatrix(X): combinations = self._combinations(n_features, self.degree, self.interaction_only, self.include_bias) columns = [] for comb in combinations: if comb: out_col = 1 for col_idx in comb: out_col = X[:, col_idx].multiply(out_col) columns.append(out_col) else: bias = sparse.csc_matrix(np.ones((X.shape[0], 1))) columns.append(bias) XP = sparse.hstack(columns, dtype=X.dtype).tocsc() else: XP = np.empty((n_samples, self.n_output_features_), dtype=X.dtype, order=self.order) # What follows is a faster implementation of: # for i, comb in enumerate(combinations): # XP[:, i] = X[:, comb].prod(1) # This implementation uses two optimisations. # First one is broadcasting, # multiply ([X1, ..., Xn], X1) -> [X1 X1, ..., Xn X1] # multiply ([X2, ..., Xn], X2) -> [X2 X2, ..., Xn X2] # ... # multiply ([X[:, start:end], X[:, start]) -> ... # Second optimisation happens for degrees >= 3. # Xi^3 is computed reusing previous computation: # Xi^3 = Xi^2 * Xi. 
if self.include_bias: XP[:, 0] = 1 current_col = 1 else: current_col = 0 # d = 0 XP[:, current_col:current_col + n_features] = X index = list(range(current_col, current_col + n_features)) current_col += n_features index.append(current_col) # d >= 1 for _ in range(1, self.degree): new_index = [] end = index[-1] for feature_idx in range(n_features): start = index[feature_idx] new_index.append(current_col) if self.interaction_only: start += (index[feature_idx + 1] - index[feature_idx]) next_col = current_col + end - start if next_col <= current_col: break # XP[:, start:end] are terms of degree d - 1 # that exclude feature #feature_idx. np.multiply(XP[:, start:end], X[:, feature_idx:feature_idx + 1], out=XP[:, current_col:next_col], casting='no') current_col = next_col new_index.append(current_col) index = new_index return XP # TODO keep order @_deprecate_pos_args(version="21.06") @api_return_generic(get_output_type=True) def normalize(X, norm='l2', *, axis=1, copy=True, return_norm=False): """Scale input vectors individually to unit norm (vector length). Parameters ---------- X : {array-like, sparse matrix}, shape [n_samples, n_features] The data to normalize, element by element. Please provide CSC matrix to normalize on axis 0, conversely provide CSR matrix to normalize on axis 1 norm : 'l1', 'l2', or 'max', optional ('l2' by default) The norm to use to normalize each non zero sample (or each non-zero feature if axis is 0). axis : 0 or 1, optional (1 by default) axis used to normalize the data along. If 1, independently normalize each sample, otherwise (if 0) normalize each feature. copy : boolean, optional, default True Whether a forced copy will be triggered. If copy=False, a copy might be triggered by a conversion. return_norm : boolean, default False whether to return the computed norms Returns ------- X : {array-like, sparse matrix}, shape [n_samples, n_features] Normalized input X. 
    norms : array, shape [n_samples] if axis=1 else [n_features]
        An array of norms along given axis for X. When X is sparse, a
        NotImplementedError will be raised for norm 'l1' or 'l2'.

    See also
    --------
    Normalizer: Performs normalization using the ``Transformer`` API
    """
    if norm not in ('l1', 'l2', 'max'):
        raise ValueError("'%s' is not a supported norm" % norm)

    if axis == 0:
        sparse_format = 'csc'
    elif axis == 1:
        sparse_format = 'csr'
    else:
        raise ValueError("'%d' is not a supported axis" % axis)

    X = check_array(X, accept_sparse=sparse_format, copy=copy,
                    estimator='the normalize function', dtype=FLOAT_DTYPES)
    if axis == 0:
        X = X.T

    if sparse.issparse(X):
        if return_norm and norm in ('l1', 'l2'):
            raise NotImplementedError("return_norm=True is not implemented "
                                      "for sparse matrices with norm 'l1' "
                                      "or norm 'l2'")
        if norm == 'l1':
            inplace_csr_row_normalize_l1(X)
        elif norm == 'l2':
            inplace_csr_row_normalize_l2(X)
        elif norm == 'max':
            mins, maxes = min_max_axis(X, 1)
            norms = np.maximum(abs(mins), maxes)
            norms_elementwise = norms.repeat(np.diff(X.indptr).tolist())
            mask = norms_elementwise != 0
            X.data[mask] /= norms_elementwise[mask]
    else:
        if norm == 'l1':
            norms = np.abs(X).sum(axis=1)
        elif norm == 'l2':
            norms = row_norms(X)
        elif norm == 'max':
            norms = np.max(abs(X), axis=1)
        norms = _handle_zeros_in_scale(norms, copy=False)
        X /= norms[:, np.newaxis]

    if axis == 0:
        X = X.T

    if return_norm:
        return X, norms
    else:
        return X


class Normalizer(TransformerMixin,
                 BaseEstimator,
                 StatelessTagMixin,
                 SparseInputTagMixin):
    """Normalize samples individually to unit norm.

    Each sample (i.e. each row of the data matrix) with at least one non zero
    component is rescaled independently of other samples so that its norm
    (l1, l2 or inf) equals one.

    This transformer is able to work both with dense numpy arrays and sparse
    matrices. Scaling inputs to unit norms is a common operation for text
    classification or clustering for instance.
For instance the dot product of two l2-normalized TF-IDF vectors is the cosine similarity of the vectors and is the base similarity metric for the Vector Space Model commonly used by the Information Retrieval community. Parameters ---------- norm : 'l1', 'l2', or 'max', optional ('l2' by default) The norm to use to normalize each non zero sample. If norm='max' is used, values will be rescaled by the maximum of the absolute values. copy : boolean, optional, default True Whether a forced copy will be triggered. If copy=False, a copy might be triggered by a conversion. Examples -------- >>> from cuml.preprocessing import Normalizer >>> import cupy as cp >>> X = [[4, 1, 2, 2], ... [1, 3, 9, 3], ... [5, 7, 5, 1]] >>> X = cp.array(X) >>> transformer = Normalizer().fit(X) # fit does nothing. >>> transformer Normalizer() >>> transformer.transform(X) array([[0.8, 0.2, 0.4, 0.4], [0.1, 0.3, 0.9, 0.3], [0.5, 0.7, 0.5, 0.1]]) Notes ----- This estimator is stateless (besides constructor parameters), the fit method does nothing but is useful when used in a pipeline. See also -------- normalize: Equivalent function without the estimator API. """ @_deprecate_pos_args(version="21.06") def __init__(self, norm='l2', *, copy=True): self.norm = norm self.copy = copy def fit(self, X, y=None) -> "Normalizer": """Do nothing and return the estimator unchanged This method is just there to implement the usual API and hence work in pipelines. Parameters ---------- X : {array-like, CSR matrix} """ self._validate_data(X, accept_sparse='csr') return self def transform(self, X, copy=None) -> SparseCumlArray: """Scale each non zero row of X to unit norm Parameters ---------- X : {array-like, CSR matrix}, shape [n_samples, n_features] The data to normalize, row by row. copy : bool, optional (default: None) Whether a forced copy will be triggered. If copy=False, a copy might be triggered by a conversion. 
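The dense ``norm='l2'``, ``axis=1`` path that this transformer delegates to can be sketched directly in NumPy (illustrative helper name; the module's own code additionally handles sparse input and the l1/max norms):

```python
import numpy as np


def l2_normalize_rows(X):
    """Row-wise l2 normalization, the dense norm='l2', axis=1 case of
    normalize(), sketched with NumPy."""
    X = np.asarray(X, dtype=float)
    norms = np.sqrt((X * X).sum(axis=1))
    norms = np.where(norms == 0.0, 1.0, norms)  # all-zero rows stay zero
    return X / norms[:, np.newaxis]
```

Applied to the matrix from the Examples section it reproduces the documented output, since each row is divided by its Euclidean length (5, 10, and 10 respectively).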
""" copy = copy if copy is not None else self.copy X = check_array(X, accept_sparse='csr') return normalize(X, norm=self.norm, axis=1, copy=copy) @_deprecate_pos_args(version="21.06") @api_return_generic(get_output_type=True) def binarize(X, *, threshold=0.0, copy=True): """Boolean thresholding of array-like or sparse matrix Parameters ---------- X : {array-like, sparse matrix}, shape [n_samples, n_features] The data to binarize, element by element. threshold : float, optional (0.0 by default) Feature values below or equal to this are replaced by 0, above it by 1. Threshold may not be less than 0 for operations on sparse matrices. copy : boolean, optional, default True Whether a forced copy will be triggered. If copy=False, a copy might be triggered by a conversion. See also -------- Binarizer: Performs binarization using the ``Transformer`` API """ X = check_array(X, accept_sparse=['csr', 'csc'], copy=copy) if sparse.issparse(X): if threshold < 0: raise ValueError('Cannot binarize a sparse matrix with threshold ' '< 0') cond = X.data > threshold not_cond = np.logical_not(cond) X.data[cond] = 1 X.data[not_cond] = 0 X.eliminate_zeros() else: cond = X > threshold not_cond = np.logical_not(cond) X[cond] = 1 X[not_cond] = 0 return X class Binarizer(TransformerMixin, BaseEstimator, StatelessTagMixin, SparseInputTagMixin): """Binarize data (set feature values to 0 or 1) according to a threshold Values greater than the threshold map to 1, while values less than or equal to the threshold map to 0. With the default threshold of 0, only positive values map to 1. Binarization is a common operation on text count data where the analyst can decide to only consider the presence or absence of a feature rather than a quantified number of occurrences for instance. It can also be used as a pre-processing step for estimators that consider boolean random variables (e.g. modelled using the Bernoulli distribution in a Bayesian setting). 
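The dense thresholding rule ``binarize`` applies above (strictly greater than the threshold maps to 1, everything else to 0) is small enough to restate as a NumPy sketch, with an illustrative helper name:

```python
import numpy as np


def binarize_dense(X, threshold=0.0):
    """Dense path of binarize(): values strictly greater than threshold
    become 1, the rest become 0 (NumPy sketch)."""
    X = np.asarray(X, dtype=float).copy()
    cond = X > threshold
    X[cond] = 1
    X[~cond] = 0
    return X
```

Note that a value exactly equal to the threshold maps to 0, which is why, with the default threshold of 0, only strictly positive values map to 1.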
Parameters ---------- threshold : float, optional (0.0 by default) Feature values below or equal to this are replaced by 0, above it by 1. Threshold may not be less than 0 for operations on sparse matrices. copy : boolean, optional, default True Whether a forced copy will be triggered. If copy=False, a copy might be triggered by a conversion. Examples -------- >>> from cuml.preprocessing import Binarizer >>> import cupy as cp >>> X = [[ 1., -1., 2.], ... [ 2., 0., 0.], ... [ 0., 1., -1.]] >>> X = cp.array(X) >>> transformer = Binarizer().fit(X) # fit does nothing. >>> transformer Binarizer() >>> transformer.transform(X) array([[1., 0., 1.], [1., 0., 0.], [0., 1., 0.]]) Notes ----- If the input is a sparse matrix, only the non-zero values are subject to update by the Binarizer class. This estimator is stateless (besides constructor parameters), the fit method does nothing but is useful when used in a pipeline. See also -------- binarize: Equivalent function without the estimator API. """ @_deprecate_pos_args(version="21.06") def __init__(self, *, threshold=0.0, copy=True): self.threshold = threshold self.copy = copy def fit(self, X, y=None) -> "Binarizer": """Do nothing and return the estimator unchanged This method is just there to implement the usual API and hence work in pipelines. Parameters ---------- X : {array-like, sparse matrix} """ self._validate_data(X, accept_sparse=['csr', 'csc']) return self def transform(self, X, copy=None) -> SparseCumlArray: """Binarize each element of X Parameters ---------- X : {array-like, sparse matrix}, shape [n_samples, n_features] The data to binarize, element by element. copy : bool Whether a forced copy will be triggered. If copy=False, a copy might be triggered by a conversion. """ copy = copy if copy is not None else self.copy return binarize(X, threshold=self.threshold, copy=copy) @api_return_generic(get_output_type=True) def add_dummy_feature(X, value=1.0): """Augment dataset with an additional dummy feature. 
This is useful for fitting an intercept term with implementations which cannot otherwise fit it directly. Parameters ---------- X : {array-like, sparse matrix}, shape [n_samples, n_features] Data. value : float Value to use for the dummy feature. Returns ------- X : {array, sparse matrix}, shape [n_samples, n_features + 1] Same data with dummy feature added as first column. Examples -------- >>> from cuml.preprocessing import add_dummy_feature >>> import cupy as cp >>> add_dummy_feature(cp.array([[0, 1], [1, 0]])) array([[1., 0., 1.], [1., 1., 0.]]) """ X = check_array(X, accept_sparse=['csc', 'csr', 'coo'], dtype=FLOAT_DTYPES) n_samples, n_features = X.shape shape = (n_samples, n_features + 1) if sparse.issparse(X): if sparse.isspmatrix_coo(X): # Shift columns to the right. col = X.col + 1 # Column indices of dummy feature are 0 everywhere. col = np.concatenate((np.zeros(n_samples), col)) # Row indices of dummy feature are 0, ..., n_samples-1. row = np.concatenate((np.arange(n_samples), X.row)) # Prepend the dummy feature n_samples times. data = np.concatenate((np.full(n_samples, value), X.data)) X = sparse.coo_matrix((data, (row, col)), shape) return X elif sparse.isspmatrix_csc(X): # Shift index pointers since we need to add n_samples elements. indptr = X.indptr + n_samples # indptr[0] must be 0. indptr = np.concatenate((np.array([0]), indptr)) # Row indices of dummy feature are 0, ..., n_samples-1. indices = np.concatenate((np.arange(n_samples), X.indices)) # Prepend the dummy feature n_samples times. 
data = np.concatenate((np.full(n_samples, value), X.data)) X = sparse.csc_matrix((data, indices, indptr), shape) return X else: klass = X.__class__ with using_output_type('cupy'): res = add_dummy_feature(X.tocoo(), value) X = klass(res) return X else: X = np.hstack((np.full((n_samples, 1), value), X)) return X class KernelCenterer(TransformerMixin, BaseEstimator): """Center a kernel matrix Let K(x, z) be a kernel defined by phi(x)^T phi(z), where phi is a function mapping x to a Hilbert space. KernelCenterer centers (i.e., normalize to have zero mean) the data without explicitly computing phi(x). It is equivalent to centering phi(x) with cuml.preprocessing.StandardScaler(with_std=False). Attributes ---------- K_fit_rows_ : array, shape (n_samples,) Average of each column of kernel matrix K_fit_all_ : float Average of kernel matrix Examples -------- >>> import cupy as cp >>> from cuml.preprocessing import KernelCenterer >>> from cuml.metrics import pairwise_kernels >>> X = cp.array([[ 1., -2., 2.], ... [ -2., 1., 3.], ... [ 4., 1., -2.]]) >>> K = pairwise_kernels(X, metric='linear') >>> K array([[ 9., 2., -2.], [ 2., 14., -13.], [ -2., -13., 21.]]) >>> transformer = KernelCenterer().fit(K) >>> transformer KernelCenterer() >>> transformer.transform(K) array([[ 5., 0., -5.], [ 0., 14., -14.], [ -5., -14., 19.]]) """ def __init__(self): # Needed for backported inspect.signature compatibility with PyPy pass def fit(self, K, y=None) -> 'KernelCenterer': """Fit KernelCenterer Parameters ---------- K : numpy array of shape [n_samples, n_samples] Kernel matrix. Returns ------- self : returns an instance of self. """ K = self._validate_data(K, dtype=FLOAT_DTYPES) if K.shape[0] != K.shape[1]: raise ValueError("Kernel matrix must be a square matrix." " Input is a {}x{} matrix." 
.format(K.shape[0], K.shape[1])) n_samples = K.shape[0] self.K_fit_rows_ = np.sum(K, axis=0) / n_samples self.K_fit_all_ = self.K_fit_rows_.sum() / n_samples return self def transform(self, K, copy=True) -> CumlArray: """Center kernel matrix. Parameters ---------- K : numpy array of shape [n_samples1, n_samples2] Kernel matrix. copy : boolean, optional, default True Whether a forced copy will be triggered. If copy=False, a copy might be triggered by a conversion. Returns ------- K_new : numpy array of shape [n_samples1, n_samples2] """ check_is_fitted(self) K = check_array(K, copy=copy, dtype=FLOAT_DTYPES) K_pred_cols = (np.sum(K, axis=1) / self.K_fit_rows_.shape[0])[:, np.newaxis] K -= self.K_fit_rows_ K -= K_pred_cols K += self.K_fit_all_ return K @property def _pairwise(self): return True class QuantileTransformer(TransformerMixin, BaseEstimator, AllowNaNTagMixin): """Transform features using quantiles information. This method transforms the features to follow a uniform or a normal distribution. Therefore, for a given feature, this transformation tends to spread out the most frequent values. It also reduces the impact of (marginal) outliers: this is therefore a robust preprocessing scheme. The transformation is applied on each feature independently. First an estimate of the cumulative distribution function of a feature is used to map the original values to a uniform distribution. The obtained values are then mapped to the desired output distribution using the associated quantile function. Features values of new/unseen data that fall below or above the fitted range will be mapped to the bounds of the output distribution. Note that this transform is non-linear. It may distort linear correlations between variables measured at the same scale but renders variables measured at different scales more directly comparable. Parameters ---------- n_quantiles : int, optional (default=1000 or n_samples) Number of quantiles to be computed. 
        It corresponds to the number
        of landmarks used to discretize the cumulative distribution function.
        If n_quantiles is larger than the number of samples, n_quantiles is
        set to the number of samples as a larger number of quantiles does not
        give a better approximation of the cumulative distribution function
        estimator.

    output_distribution : str, optional (default='uniform')
        Marginal distribution for the transformed data. The choices are
        'uniform' (default) or 'normal'.

    ignore_implicit_zeros : bool, optional (default=False)
        Only applies to sparse matrices. If True, the sparse entries of the
        matrix are discarded to compute the quantile statistics. If False,
        these entries are treated as zeros.

    subsample : int, optional (default=1e5)
        Maximum number of samples used to estimate the quantiles for
        computational efficiency. Note that the subsampling procedure may
        differ for value-identical sparse and dense matrices.

    random_state : int, RandomState instance or None, optional (default=None)
        Determines random number generation for subsampling and smoothing
        noise. Please see ``subsample`` for more details.
        Pass an int for reproducible results across multiple function calls.
        See :term:`Glossary <random_state>`

    copy : boolean, optional, (default=True)
        Set to False to perform inplace transformation and avoid a copy (if
        the input is already a numpy array).

    Attributes
    ----------
    n_quantiles_ : integer
        The actual number of quantiles used to discretize the cumulative
        distribution function.

    quantiles_ : ndarray, shape (n_quantiles, n_features)
        The values corresponding to the quantiles of reference.

    references_ : ndarray, shape (n_quantiles, )
        Quantiles of references.
    Examples
    --------
    >>> import cupy as cp
    >>> from cuml.preprocessing import QuantileTransformer
    >>> rng = cp.random.RandomState(0)
    >>> X = cp.sort(rng.normal(loc=0.5, scale=0.25, size=(25, 1)), axis=0)
    >>> qt = QuantileTransformer(n_quantiles=10, random_state=0)
    >>> qt.fit_transform(X)
    array([...])

    See also
    --------
    quantile_transform : Equivalent function without the estimator API.
    PowerTransformer : Perform mapping to a normal distribution using a power
        transform.
    StandardScaler : Perform standardization that is faster, but less robust
        to outliers.
    RobustScaler : Perform robust standardization that removes the influence
        of outliers but does not put outliers and inliers on the same scale.

    Notes
    -----
    NaNs are treated as missing values: disregarded in fit, and maintained in
    transform.
    """

    quantiles_ = CumlArrayDescriptor()
    references_ = CumlArrayDescriptor()

    @_deprecate_pos_args(version="21.06")
    def __init__(self, *, n_quantiles=1000, output_distribution='uniform',
                 ignore_implicit_zeros=False, subsample=int(1e5),
                 random_state=None, copy=True):
        self.n_quantiles = n_quantiles
        self.output_distribution = output_distribution
        self.ignore_implicit_zeros = ignore_implicit_zeros
        self.subsample = subsample
        self.random_state = random_state
        self.copy = copy

    def get_param_names(self):
        return super().get_param_names() + [
            "n_quantiles",
            "output_distribution",
            "ignore_implicit_zeros",
            "subsample",
            "random_state",
            "copy"
        ]

    def _dense_fit(self, X, random_state):
        """Compute percentiles for dense matrices.

        Parameters
        ----------
        X : ndarray, shape (n_samples, n_features)
            The data used to scale along the features axis.
        """
        if self.ignore_implicit_zeros:
            warnings.warn("'ignore_implicit_zeros' takes effect only with"
                          " sparse matrix. This parameter has no effect.")

        n_samples, n_features = X.shape
        references = np.asnumpy(self.references_ * 100)

        self.quantiles_ = []
        for col in X.T:
            if self.subsample < n_samples:
                subsample_idx = random_state.choice(n_samples,
                                                    size=self.subsample,
                                                    replace=False)
                col = col.take(subsample_idx)
            self.quantiles_.append(
                cpu_np.nanpercentile(np.asnumpy(col), references)
            )
        self.quantiles_ = cpu_np.transpose(self.quantiles_)
        # Due to floating-point precision error in `np.nanpercentile`,
        # make sure that quantiles are monotonically increasing.
        # Upstream issue in numpy:
        # https://github.com/numpy/numpy/issues/14685
        self.quantiles_ = np.array(cpu_np.maximum.accumulate(self.quantiles_))

    def _sparse_fit(self, X, random_state):
        """Compute percentiles for sparse matrices.

        Parameters
        ----------
        X : sparse matrix CSC, shape (n_samples, n_features)
            The data used to scale along the features axis. The sparse matrix
            needs to be nonnegative.
        """
        n_samples, n_features = X.shape
        references = self.references_ * 100

        self.quantiles_ = []
        for feature_idx in range(n_features):
            column_nnz_data = X.data[X.indptr[feature_idx]:
                                     X.indptr[feature_idx + 1]]
            if len(column_nnz_data) > self.subsample:
                column_subsample = (self.subsample * len(column_nnz_data) //
                                    n_samples)
                if self.ignore_implicit_zeros:
                    column_data = np.zeros(shape=column_subsample,
                                           dtype=X.dtype)
                else:
                    column_data = np.zeros(shape=self.subsample,
                                           dtype=X.dtype)
                column_data[:column_subsample] = np.array(
                    random_state.choice(column_nnz_data.get(),
                                        size=column_subsample,
                                        replace=False))
            else:
                if self.ignore_implicit_zeros:
                    column_data = np.zeros(shape=len(column_nnz_data),
                                           dtype=X.dtype)
                else:
                    column_data = np.zeros(shape=n_samples, dtype=X.dtype)
                column_data[:len(column_nnz_data)] = column_nnz_data

            if not column_data.size:
                # if no nnz, an error will be raised for computing the
                # quantiles. Force the quantiles to be zeros.
                self.quantiles_.append([0] * len(references))
            else:
                self.quantiles_.append(
                    cpu_np.nanpercentile(np.asnumpy(column_data),
                                         np.asnumpy(references)))
        self.quantiles_ = cpu_np.transpose(np.asnumpy(self.quantiles_))
        # due to floating-point precision error in `np.nanpercentile`,
        # make sure the quantiles are monotonically increasing
        # Upstream issue in numpy:
        # https://github.com/numpy/numpy/issues/14685
        self.quantiles_ = np.array(cpu_np.maximum.accumulate(self.quantiles_))

    def fit(self, X, y=None) -> 'QuantileTransformer':
        """Compute the quantiles used for transforming.

        Parameters
        ----------
        X : ndarray or sparse matrix, shape (n_samples, n_features)
            The data used to scale along the features axis. If a sparse
            matrix is provided, it will be converted into a sparse
            ``csc_matrix``. Additionally, the sparse matrix needs to be
            nonnegative if `ignore_implicit_zeros` is False.

        Returns
        -------
        self : object
        """
        if self.n_quantiles <= 0:
            raise ValueError("Invalid value for 'n_quantiles': %d. "
                             "The number of quantiles must be at least one."
                             % self.n_quantiles)

        if self.subsample <= 0:
            raise ValueError("Invalid value for 'subsample': %d. "
                             "The number of subsamples must be at least one."
                             % self.subsample)

        if self.n_quantiles > self.subsample:
            raise ValueError("The number of quantiles cannot be greater than"
                             " the number of samples used. Got {} quantiles"
                             " and {} samples.".format(self.n_quantiles,
                                                       self.subsample))

        X = self._check_inputs(X, in_fit=True, copy=False)
        n_samples = X.shape[0]

        if self.n_quantiles > n_samples:
            warnings.warn("n_quantiles (%s) is greater than the total number "
                          "of samples (%s). n_quantiles is set to "
                          "n_samples."
                          % (self.n_quantiles, n_samples))
        self.n_quantiles_ = max(1, min(self.n_quantiles, n_samples))

        rng = check_random_state(self.random_state)

        # Create the quantiles of reference
        self.references_ = np.linspace(0, 1, self.n_quantiles_,
                                       endpoint=True)
        if sparse.issparse(X):
            self._sparse_fit(X, rng)
        else:
            self._dense_fit(X, rng)

        return self

    def _transform_col(self, X_col, quantiles, inverse):
        """Private function to transform a single feature"""

        output_distribution = self.output_distribution

        if not inverse:
            lower_bound_x = quantiles[0]
            upper_bound_x = quantiles[-1]
            lower_bound_y = 0
            upper_bound_y = 1
        else:
            lower_bound_x = 0
            upper_bound_x = 1
            lower_bound_y = quantiles[0]
            upper_bound_y = quantiles[-1]
            # for inverse transform, match a uniform distribution
            if output_distribution == 'normal':
                X_col = np.array(stats.norm.cdf(X_col.get()))
            # else output distribution is already a uniform distribution

        # find index for lower and higher bounds
        if output_distribution == 'normal':
            lower_bounds_idx = (X_col - BOUNDS_THRESHOLD < lower_bound_x)
            upper_bounds_idx = (X_col + BOUNDS_THRESHOLD > upper_bound_x)
        if output_distribution == 'uniform':
            lower_bounds_idx = (X_col == lower_bound_x)
            upper_bounds_idx = (X_col == upper_bound_x)

        isfinite_mask = ~np.isnan(X_col)
        X_col_finite = X_col[isfinite_mask]
        if not inverse:
            # Interpolate in one direction and in the other and take the
            # mean. This is in case of repeated values in the features
            # and hence repeated quantiles
            #
            # If we don't do this, only one extreme of the duplicated is
            # used (the upper when we do ascending, and the
            # lower for descending).
            # We take the mean of these two
            X_col[isfinite_mask] = .5 * (
                np.interp(X_col_finite, quantiles, self.references_)
                - np.interp(-X_col_finite, -quantiles[::-1],
                            -self.references_[::-1]))
        else:
            X_col[isfinite_mask] = np.interp(X_col_finite,
                                             self.references_, quantiles)

        X_col[upper_bounds_idx] = upper_bound_y
        X_col[lower_bounds_idx] = lower_bound_y
        # for forward transform, match the output distribution
        if not inverse:
            if output_distribution == 'normal':
                X_col = stats.norm.ppf(X_col.get())
                # find the value to clip the data to avoid mapping to
                # infinity. Clip such that the inverse transform will be
                # consistent
                clip_min = stats.norm.ppf(BOUNDS_THRESHOLD -
                                          cpu_np.spacing(1))
                clip_max = stats.norm.ppf(1 - (BOUNDS_THRESHOLD -
                                               cpu_np.spacing(1)))
                X_col = np.clip(X_col, clip_min, clip_max)
            # else output distribution is uniform and the ppf is the
            # identity function so we let X_col unchanged

        return np.asarray(X_col)

    def _check_inputs(self, X, in_fit, accept_sparse_negative=False,
                      copy=False):
        """Check inputs before fit and transform"""
        # In theory reset should be equal to `in_fit`, but there are tests
        # checking the input number of feature and they expect a specific
        # string, which is not the same one raised by check_n_features. So we
        # don't check n_features_in_ here for now (it's done with adhoc code
        # in the estimator anyway).
        # TODO: set reset=in_fit when addressing reset in
        # predict/transform/etc.
        reset = True

        X = self._validate_data(X, reset=reset,
                                accept_sparse='csc', copy=copy,
                                dtype=FLOAT_DTYPES,
                                force_all_finite='allow-nan')
        # we only accept positive sparse matrix when ignore_implicit_zeros is
        # false and that we call fit or transform.
        if (not accept_sparse_negative and not self.ignore_implicit_zeros
                and (sparse.issparse(X) and np.any(X.data < 0))):
            raise ValueError('QuantileTransformer only accepts'
                             ' non-negative sparse matrices.')

        # check the output distribution
        if self.output_distribution not in ('normal', 'uniform'):
            raise ValueError("'output_distribution' has to be either 'normal'"
                             " or 'uniform'. Got '{}' instead.".format(
                                 self.output_distribution))

        return X

    def _check_is_fitted(self, X):
        """Check the inputs before transforming"""
        check_is_fitted(self)
        # check that the dimensions of X are adequate with the fitted data
        if X.shape[1] != self.quantiles_.shape[1]:
            raise ValueError('X does not have the same number of features as'
                             ' the previously fitted data. Got {} instead of'
                             ' {}.'.format(X.shape[1],
                                           self.quantiles_.shape[1]))

    def _transform(self, X, inverse=False):
        """Forward and inverse transform.

        Parameters
        ----------
        X : ndarray, shape (n_samples, n_features)
            The data used to scale along the features axis.

        inverse : bool, optional (default=False)
            If False, apply forward transform. If True, apply
            inverse transform.

        Returns
        -------
        X : ndarray, shape (n_samples, n_features)
            Projected data
        """
        if sparse.issparse(X):
            for feature_idx in range(X.shape[1]):
                column_slice = slice(X.indptr[feature_idx],
                                     X.indptr[feature_idx + 1])
                X.data[column_slice] = self._transform_col(
                    X.data[column_slice], self.quantiles_[:, feature_idx],
                    inverse)
        else:
            for feature_idx in range(X.shape[1]):
                X[:, feature_idx] = self._transform_col(
                    X[:, feature_idx], self.quantiles_[:, feature_idx],
                    inverse)

        return X

    def transform(self, X) -> SparseCumlArray:
        """Feature-wise transformation of the data.

        Parameters
        ----------
        X : ndarray or sparse matrix, shape (n_samples, n_features)
            The data used to scale along the features axis. If a sparse
            matrix is provided, it will be converted into a sparse
            ``csc_matrix``. Additionally, the sparse matrix needs to be
            nonnegative if `ignore_implicit_zeros` is False.
        Returns
        -------
        Xt : ndarray or sparse matrix, shape (n_samples, n_features)
            The projected data.
        """
        X = self._check_inputs(X, in_fit=False, copy=self.copy)
        self._check_is_fitted(X)

        return self._transform(X, inverse=False)

    def inverse_transform(self, X) -> SparseCumlArray:
        """Back-projection to the original space.

        Parameters
        ----------
        X : ndarray or sparse matrix, shape (n_samples, n_features)
            The data used to scale along the features axis. If a sparse
            matrix is provided, it will be converted into a sparse
            ``csc_matrix``. Additionally, the sparse matrix needs to be
            nonnegative if `ignore_implicit_zeros` is False.

        Returns
        -------
        Xt : ndarray or sparse matrix, shape (n_samples, n_features)
            The projected data.
        """
        X = self._check_inputs(X, in_fit=False, accept_sparse_negative=True,
                               copy=self.copy)
        self._check_is_fitted(X)

        return self._transform(X, inverse=True)


@_deprecate_pos_args(version="21.06")
def quantile_transform(X, *, axis=0, n_quantiles=1000,
                       output_distribution='uniform',
                       ignore_implicit_zeros=False,
                       subsample=int(1e5),
                       random_state=None,
                       copy=True):
    """Transform features using quantiles information.

    This method transforms the features to follow a uniform or a normal
    distribution. Therefore, for a given feature, this transformation tends
    to spread out the most frequent values. It also reduces the impact of
    (marginal) outliers: this is therefore a robust preprocessing scheme.

    The transformation is applied on each feature independently. First an
    estimate of the cumulative distribution function of a feature is
    used to map the original values to a uniform distribution. The obtained
    values are then mapped to the desired output distribution using the
    associated quantile function. Features values of new/unseen data that fall
    below or above the fitted range will be mapped to the bounds of the output
    distribution. Note that this transform is non-linear.
    It may distort linear
    correlations between variables measured at the same scale but renders
    variables measured at different scales more directly comparable.

    Parameters
    ----------
    X : array-like, sparse matrix
        The data to transform.

    axis : int, (default=0)
        Axis used to compute the means and standard deviations along. If 0,
        transform each feature, otherwise (if 1) transform each sample.

    n_quantiles : int, optional (default=1000 or n_samples)
        Number of quantiles to be computed. It corresponds to the number
        of landmarks used to discretize the cumulative distribution function.
        If n_quantiles is larger than the number of samples, n_quantiles is
        set to the number of samples as a larger number of quantiles does not
        give a better approximation of the cumulative distribution function
        estimator.

    output_distribution : str, optional (default='uniform')
        Marginal distribution for the transformed data. The choices are
        'uniform' (default) or 'normal'.

    ignore_implicit_zeros : bool, optional (default=False)
        Only applies to sparse matrices. If True, the sparse entries of the
        matrix are discarded to compute the quantile statistics. If False,
        these entries are treated as zeros.

    subsample : int, optional (default=1e5)
        Maximum number of samples used to estimate the quantiles for
        computational efficiency. Note that the subsampling procedure may
        differ for value-identical sparse and dense matrices.

    random_state : int, RandomState instance or None, optional (default=None)
        Determines random number generation for subsampling and smoothing
        noise. Please see ``subsample`` for more details.
        Pass an int for reproducible results across multiple function calls.
        See :term:`Glossary <random_state>`

    copy : boolean, optional, (default=True)
        Set to False to perform inplace transformation and avoid a copy (if
        the input is already a numpy array).
        If True, a copy of `X` is
        transformed, leaving the original `X` unchanged.

    Returns
    -------
    Xt : ndarray or sparse matrix, shape (n_samples, n_features)
        The transformed data.

    Examples
    --------
    >>> import cupy as cp
    >>> from cuml.preprocessing import quantile_transform
    >>> rng = cp.random.RandomState(0)
    >>> X = cp.sort(rng.normal(loc=0.5, scale=0.25, size=(25, 1)), axis=0)
    >>> quantile_transform(X, n_quantiles=10, random_state=0, copy=True)
    array([...])

    See also
    --------
    QuantileTransformer : Performs quantile-based scaling using the
        ``Transformer`` API (e.g. as part of a preprocessing
        :class:`sklearn.pipeline.Pipeline`).
    power_transform : Maps data to a normal distribution using a
        power transformation.
    scale : Performs standardization that is faster, but less robust
        to outliers.
    robust_scale : Performs robust standardization that removes the influence
        of outliers but does not put outliers and inliers on the same scale.

    Notes
    -----
    NaNs are treated as missing values: disregarded in fit, and maintained in
    transform.
    """
    n = QuantileTransformer(n_quantiles=n_quantiles,
                            output_distribution=output_distribution,
                            subsample=subsample,
                            ignore_implicit_zeros=ignore_implicit_zeros,
                            random_state=random_state,
                            copy=copy)
    if axis == 0:
        return n.fit_transform(X)
    elif axis == 1:
        return n.fit_transform(X.T).T
    else:
        raise ValueError("axis should be either equal to 0 or 1. Got"
                         " axis={}".format(axis))


class PowerTransformer(TransformerMixin, BaseEstimator, AllowNaNTagMixin):
    """Apply a power transform featurewise to make data more Gaussian-like.

    Power transforms are a family of parametric, monotonic transformations
    that are applied to make data more Gaussian-like. This is useful for
    modeling issues related to heteroscedasticity (non-constant variance),
    or other situations where normality is desired.

    Currently, PowerTransformer supports the Box-Cox transform and the
    Yeo-Johnson transform.
    The optimal parameter for stabilizing
    variance and minimizing skewness is estimated through maximum likelihood.

    Box-Cox requires input data to be strictly positive, while Yeo-Johnson
    supports both positive or negative data.

    By default, zero-mean, unit-variance normalization is applied to the
    transformed data.

    Parameters
    ----------
    method : str, (default='yeo-johnson')
        The power transform method. Available methods are:

        - 'yeo-johnson' [1]_, works with positive and negative values
        - 'box-cox' [2]_, only works with strictly positive values

    standardize : boolean, default=True
        Set to True to apply zero-mean, unit-variance normalization to the
        transformed output.

    copy : boolean, optional, default=True
        Set to False to perform inplace computation during transformation.

    Attributes
    ----------
    lambdas_ : array of float, shape (n_features,)
        The parameters of the power transformation for the selected features.

    Examples
    --------
    >>> import cupy as cp
    >>> from cuml.preprocessing import PowerTransformer
    >>> pt = PowerTransformer()
    >>> data = cp.array([[1, 2], [3, 2], [4, 5]])
    >>> print(pt.fit(data))
    PowerTransformer()
    >>> print(pt.lambdas_)
    [ 1.386... -3.100...]
    >>> print(pt.transform(data))
    [[-1.316... -0.707...]
     [ 0.209... -0.707...]
     [ 1.106...  1.414...]]

    See also
    --------
    power_transform : Equivalent function without the estimator API.

    QuantileTransformer : Maps data to a standard normal distribution with
        the parameter `output_distribution='normal'`.

    Notes
    -----
    NaNs are treated as missing values: disregarded in ``fit``, and maintained
    in ``transform``.

    References
    ----------

    .. [1] I.K. Yeo and R.A. Johnson, "A new family of power transformations
           to improve normality or symmetry." Biometrika, 87(4), pp.954-959,
           (2000).

    .. [2] G.E.P. Box and D.R. Cox, "An Analysis of Transformations", Journal
           of the Royal Statistical Society B, 26, 211-252 (1964).
""" @_deprecate_pos_args(version="21.06") def __init__(self, method='yeo-johnson', *, standardize=True, copy=True): self.method = method self.standardize = standardize self.copy = copy def get_param_names(self): return super().get_param_names() + [ "method", "standardize", "copy" ] def fit(self, X, y=None) -> 'PowerTransformer': """Estimate the optimal parameter lambda for each feature. The optimal lambda parameter for minimizing skewness is estimated on each feature independently using maximum likelihood. Parameters ---------- X : array-like, shape (n_samples, n_features) The data used to estimate the optimal transformation parameters. y : Ignored Returns ------- self : object """ self._fit(X, y=y, force_transform=False) return self def fit_transform(self, X, y=None) -> CumlArray: return self._fit(X, y, force_transform=True) def _fit(self, X, y=None, force_transform=False): X = self._check_input(X, in_fit=True, check_positive=True, check_method=True) if not self.copy and not force_transform: # if call from fit() X = X.copy() # force copy so that fit does not change X inplace optim_function = {'box-cox': self._box_cox_optimize, 'yeo-johnson': self._yeo_johnson_optimize }[self.method] self.lambdas_ = np.array([optim_function(col) for col in X.T]) if self.standardize or force_transform: transform_function = {'box-cox': boxcox, 'yeo-johnson': self._yeo_johnson_transform }[self.method] for i, lmbda in enumerate(self.lambdas_): if self.method == 'box-cox': x = X[:, i].get() lmbda = lmbda.get() X[:, i] = np.array(transform_function(x, lmbda)) else: X[:, i] = transform_function(X[:, i], lmbda) if self.standardize: self._scaler = StandardScaler(copy=False, output_type=self.output_type) if force_transform: with using_output_type('cupy'): X = self._scaler.fit_transform(X) else: self._scaler.fit(X) return X def transform(self, X) -> CumlArray: """Apply the power transform to each feature using the fitted lambdas. 
        Parameters
        ----------
        X : array-like, shape (n_samples, n_features)
            The data to be transformed using a power transformation.

        Returns
        -------
        X_trans : array-like, shape (n_samples, n_features)
            The transformed data.
        """
        check_is_fitted(self)
        X = self._check_input(X, in_fit=False, check_positive=True,
                              check_shape=True)

        transform_function = {'box-cox': boxcox,
                              'yeo-johnson': self._yeo_johnson_transform
                              }[self.method]
        for i, lmbda in enumerate(self.lambdas_):
            if self.method == 'box-cox':
                x = X[:, i].get()
                lmbda = lmbda.get()
                X[:, i] = np.array(transform_function(x, lmbda))
            else:
                X[:, i] = transform_function(X[:, i], lmbda)

        if self.standardize:
            with using_output_type('cupy'):
                X = self._scaler.transform(X)

        return X

    def inverse_transform(self, X) -> CumlArray:
        """Apply the inverse power transformation using the fitted lambdas.

        The inverse of the Box-Cox transformation is given by::

            if lambda_ == 0:
                X = exp(X_trans)
            else:
                X = (X_trans * lambda_ + 1) ** (1 / lambda_)

        The inverse of the Yeo-Johnson transformation is given by::

            if X >= 0 and lambda_ == 0:
                X = exp(X_trans) - 1
            elif X >= 0 and lambda_ != 0:
                X = (X_trans * lambda_ + 1) ** (1 / lambda_) - 1
            elif X < 0 and lambda_ != 2:
                X = 1 - (-(2 - lambda_) * X_trans + 1) ** (1 / (2 - lambda_))
            elif X < 0 and lambda_ == 2:
                X = 1 - exp(-X_trans)

        Parameters
        ----------
        X : array-like, shape (n_samples, n_features)
            The transformed data.

        Returns
        -------
        X : array-like, shape (n_samples, n_features)
            The original data
        """
        check_is_fitted(self)
        X = self._check_input(X, in_fit=False, check_shape=True)

        if self.standardize:
            with using_output_type('cupy'):
                X = self._scaler.inverse_transform(X)

        inv_fun = {'box-cox': self._box_cox_inverse_tranform,
                   'yeo-johnson': self._yeo_johnson_inverse_transform
                   }[self.method]
        for i, lmbda in enumerate(self.lambdas_):
            X[:, i] = inv_fun(X[:, i], lmbda)

        return X

    def _box_cox_inverse_tranform(self, x, lmbda):
        """Return inverse-transformed input x following Box-Cox inverse
        transform with parameter lambda.
""" if lmbda == 0: x_inv = np.exp(x) else: x_inv = (x * lmbda + 1) ** (1 / lmbda) return x_inv def _yeo_johnson_inverse_transform(self, x, lmbda): """Return inverse-transformed input x following Yeo-Johnson inverse transform with parameter lambda. """ x_inv = np.zeros(x.shape, dtype=x.dtype) pos = x >= 0 # when x >= 0 if abs(lmbda) < cpu_np.spacing(1.): x_inv[pos] = np.exp(x[pos]) - 1 else: # lmbda != 0 x_inv[pos] = np.power(x[pos] * lmbda + 1, 1 / lmbda) - 1 # when x < 0 if abs(lmbda - 2) > cpu_np.spacing(1.): x_inv[~pos] = 1 - np.power(-(2 - lmbda) * x[~pos] + 1, 1 / (2 - lmbda)) else: # lmbda == 2 x_inv[~pos] = 1 - np.exp(-x[~pos]) return x_inv def _yeo_johnson_transform(self, x, lmbda): """Return transformed input x following Yeo-Johnson transform with parameter lambda. """ out = np.zeros_like(x) pos = x >= 0 # binary mask # when x >= 0 if abs(lmbda) < cpu_np.spacing(1.): out[pos] = np.log1p(x[pos]) else: # lmbda != 0 out[pos] = (np.power(x[pos] + 1, lmbda) - 1) / lmbda # when x < 0 if abs(lmbda - 2) > cpu_np.spacing(1.): out[~pos] = -(np.power(-x[~pos] + 1, 2 - lmbda) - 1) / (2 - lmbda) else: # lmbda == 2 out[~pos] = -np.log1p(-x[~pos]) return out def _box_cox_optimize(self, x): """Find and return optimal lambda parameter of the Box-Cox transform by MLE, for observed data x. We here use scipy builtins which uses the brent optimizer. """ # the computation of lambda is influenced by NaNs so we need to # get rid of them x = x[~np.isnan(x)].get() _, lmbda = stats.boxcox(x, lmbda=None) return lmbda def _yeo_johnson_optimize(self, x): """Find and return optimal lambda parameter of the Yeo-Johnson transform by MLE, for observed data x. Like for Box-Cox, MLE is done via the brent optimizer. 
""" def _neg_log_likelihood(lmbda): """Return the negative log likelihood of the observed data x as a function of lambda.""" x_trans = self._yeo_johnson_transform(x, lmbda) n_samples = x.shape[0] loglike = -n_samples / 2 * np.log(x_trans.var()) loglike += (lmbda - 1) * (np.sign(x) * np.log1p(np.abs(x))).sum() return -loglike # the computation of lambda is influenced by NaNs so we need to # get rid of them x = x[~np.isnan(x)] # choosing bracket -2, 2 like for boxcox return optimize.brent(_neg_log_likelihood, brack=(-2, 2)) def _check_input(self, X, in_fit, check_positive=False, check_shape=False, check_method=False): """Validate the input before fit and transform. Parameters ---------- X : array-like, shape (n_samples, n_features) check_positive : bool If True, check that all data is positive and non-zero (only if ``self.method=='box-cox'``). check_shape : bool If True, check that n_features matches the length of self.lambdas_ check_method : bool If True, check that the transformation method is valid. """ X = self._validate_data(X, ensure_2d=True, dtype=FLOAT_DTYPES, copy=self.copy, force_all_finite='allow-nan') if (check_positive and self.method == 'box-cox' and np.nanmin(X) <= 0): raise ValueError("The Box-Cox transformation can only be " "applied to strictly positive data") if check_shape and not X.shape[1] == len(self.lambdas_): raise ValueError("Input data has a different number of features " "than fitting data. Should have {n}, data has {m}" .format(n=len(self.lambdas_), m=X.shape[1])) valid_methods = ('box-cox', 'yeo-johnson') if check_method and self.method not in valid_methods: raise ValueError("'method' must be one of {}, " "got {} instead." .format(valid_methods, self.method)) return X @_deprecate_pos_args(version="21.06") def power_transform(X, method='yeo-johnson', *, standardize=True, copy=True): """ Power transforms are a family of parametric, monotonic transformations that are applied to make data more Gaussian-like. 
    This is useful for
    modeling issues related to heteroscedasticity (non-constant variance),
    or other situations where normality is desired.

    Currently, power_transform supports the Box-Cox transform and the
    Yeo-Johnson transform. The optimal parameter for stabilizing variance and
    minimizing skewness is estimated through maximum likelihood.

    Box-Cox requires input data to be strictly positive, while Yeo-Johnson
    supports both positive or negative data.

    By default, zero-mean, unit-variance normalization is applied to the
    transformed data.

    Parameters
    ----------
    X : array-like, shape (n_samples, n_features)
        The data to be transformed using a power transformation.

    method : {'yeo-johnson', 'box-cox'}, default='yeo-johnson'
        The power transform method. Available methods are:

        - 'yeo-johnson' [1]_, works with positive and negative values
        - 'box-cox' [2]_, only works with strictly positive values

    standardize : boolean, default=True
        Set to True to apply zero-mean, unit-variance normalization to the
        transformed output.

    copy : boolean, optional, default=True
        Set to False to perform inplace computation during transformation.

    Returns
    -------
    X_trans : array-like, shape (n_samples, n_features)
        The transformed data.

    Examples
    --------
    >>> import cupy as cp
    >>> from cuml.preprocessing import power_transform
    >>> data = cp.array([[1, 2], [3, 2], [4, 5]])
    >>> print(power_transform(data, method='box-cox'))
    [[-1.332... -0.707...]
     [ 0.256... -0.707...]
     [ 1.076...  1.414...]]

    See also
    --------
    PowerTransformer : Equivalent transformation with the
        ``Transformer`` API (e.g. as part of a preprocessing
        :class:`sklearn.pipeline.Pipeline`).

    quantile_transform : Maps data to a standard normal distribution with
        the parameter `output_distribution='normal'`.

    Notes
    -----
    NaNs are treated as missing values: disregarded in ``fit``, and maintained
    in ``transform``.

    References
    ----------

    .. [1] I.K. Yeo and R.A. Johnson, "A new family of power transformations
           to improve normality or symmetry."
           Biometrika, 87(4), pp.954-959, (2000).

    .. [2] G.E.P. Box and D.R. Cox, "An Analysis of Transformations", Journal
           of the Royal Statistical Society B, 26, 211-252 (1964).
    """
    pt = PowerTransformer(method=method, standardize=standardize, copy=copy)
    return pt.fit_transform(X)
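The centering performed by `KernelCenterer` above amounts to subtracting each row and column mean of the kernel matrix and adding back the grand mean. A minimal CPU-only sketch in plain NumPy (illustrative; the module itself operates on GPU arrays via CuPy), reproducing the values from the class docstring example:

```python
import numpy as np

# Linear kernel of the docstring example X
X = np.array([[1., -2., 2.],
              [-2., 1., 3.],
              [4., 1., -2.]])
K = X @ X.T

# fit: per-column means and grand mean, as in KernelCenterer.fit
K_fit_rows = K.sum(axis=0) / K.shape[0]
K_fit_all = K_fit_rows.sum() / K.shape[0]

# transform: subtract row/column means, add back the grand mean
K_pred_cols = (K.sum(axis=1) / K_fit_rows.shape[0])[:, np.newaxis]
K_centered = K - K_fit_rows - K_pred_cols + K_fit_all

# matches the docstring example:
# [[ 5.   0.  -5.], [ 0.  14. -14.], [-5. -14.  19.]]
print(K_centered)
```

Because the fitted statistics are stored separately (`K_fit_rows_`, `K_fit_all_`), the same arithmetic extends to rectangular kernels between training and new data in `transform`.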
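The bidirectional interpolation trick in `QuantileTransformer._transform_col` (averaging an ascending and a reversed descending `np.interp`) exists to handle repeated quantiles: each direction snaps to a different occurrence of the duplicate, so their mean lands in the middle of the reference span. A small NumPy-only sketch with invented toy values (not data from this module):

```python
import numpy as np

# fitted quantiles for one feature (monotone), with a repeated value,
# and uniform references in [0, 1]
quantiles = np.array([0., 1., 1., 3.])
references = np.linspace(0, 1, len(quantiles))

x = np.array([0.5, 1.0, 2.0])

# forward map: average ascending and reversed-descending interpolation,
# as in _transform_col, so x == 1.0 maps to the middle of [1/3, 2/3]
fwd = .5 * (np.interp(x, quantiles, references)
            - np.interp(-x, -quantiles[::-1], -references[::-1]))

print(fwd)  # the duplicated quantile 1.0 maps to 0.5, not 1/3 or 2/3
```

A single ascending `np.interp` would instead map `1.0` to one extreme of the duplicated span, which is what the comment in the source warns about.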
# File: rapidsai_public_repos/cuml/python/cuml/_thirdparty/sklearn/preprocessing/__init__.py
# This code originates from the Scikit-Learn library,
# it was since modified to allow GPU acceleration.
# This code is under BSD 3 clause license.

from ._data import Binarizer
from ._data import KernelCenterer
from ._data import MinMaxScaler
from ._data import MaxAbsScaler
from ._data import Normalizer
from ._data import PolynomialFeatures
from ._data import PowerTransformer
from ._data import QuantileTransformer
from ._data import RobustScaler
from ._data import StandardScaler
from ._data import add_dummy_feature
from ._data import binarize
from ._data import normalize
from ._data import scale
from ._data import robust_scale
from ._data import maxabs_scale
from ._data import minmax_scale
from ._data import power_transform
from ._data import quantile_transform
from ._imputation import SimpleImputer
from ._imputation import MissingIndicator
from ._discretization import KBinsDiscretizer
from ._function_transformer import FunctionTransformer
from ._column_transformer import ColumnTransformer, \
    make_column_transformer, make_column_selector


__all__ = [
    'Binarizer',
    'KBinsDiscretizer',
    'KernelCenterer',
    'LabelBinarizer',
    'LabelEncoder',
    'MultiLabelBinarizer',
    'MinMaxScaler',
    'MaxAbsScaler',
    'QuantileTransformer',
    'Normalizer',
    'OneHotEncoder',
    'OrdinalEncoder',
    'PowerTransformer',
    'RobustScaler',
    'StandardScaler',
    'SimpleImputer',
    'MissingIndicator',
    'ColumnTransformer',
    'FunctionTransformer',
    'add_dummy_feature',
    'PolynomialFeatures',
    'binarize',
    'normalize',
    'scale',
    'robust_scale',
    'maxabs_scale',
    'minmax_scale',
    'label_binarize',
    'power_transform',
    'quantile_transform',
    'make_column_selector',
    'make_column_transformer'
]
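The Yeo-Johnson forward and inverse formulas quoted in the `PowerTransformer.inverse_transform` docstring above are exact inverses of each other on every branch. A self-contained NumPy sketch (CPU-only re-implementation for illustration, mirroring `_yeo_johnson_transform` and `_yeo_johnson_inverse_transform`; the module itself runs these on GPU arrays):

```python
import numpy as np


def yeo_johnson(x, lmbda):
    # forward Yeo-Johnson transform, mirroring _yeo_johnson_transform
    out = np.zeros_like(x, dtype=float)
    pos = x >= 0
    if abs(lmbda) < np.spacing(1.):
        out[pos] = np.log1p(x[pos])
    else:  # lmbda != 0
        out[pos] = (np.power(x[pos] + 1, lmbda) - 1) / lmbda
    if abs(lmbda - 2) > np.spacing(1.):
        out[~pos] = -(np.power(-x[~pos] + 1, 2 - lmbda) - 1) / (2 - lmbda)
    else:  # lmbda == 2
        out[~pos] = -np.log1p(-x[~pos])
    return out


def yeo_johnson_inverse(y, lmbda):
    # inverse transform, mirroring _yeo_johnson_inverse_transform;
    # the forward transform preserves sign, so y >= 0 iff x >= 0
    x = np.zeros_like(y, dtype=float)
    pos = y >= 0
    if abs(lmbda) < np.spacing(1.):
        x[pos] = np.exp(y[pos]) - 1
    else:
        x[pos] = np.power(y[pos] * lmbda + 1, 1 / lmbda) - 1
    if abs(lmbda - 2) > np.spacing(1.):
        x[~pos] = 1 - np.power(-(2 - lmbda) * y[~pos] + 1, 1 / (2 - lmbda))
    else:
        x[~pos] = 1 - np.exp(-y[~pos])
    return x


# round-trip check across both sign branches and the special lambdas
x = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
for lmbda in (0.0, 0.5, 1.5, 2.0):
    assert np.allclose(yeo_johnson_inverse(yeo_johnson(x, lmbda), lmbda), x)
```

The `np.spacing(1.)` tolerance matches the source's use of `cpu_np.spacing(1.)` to treat lambdas within one ulp of 0 or 2 as the logarithmic special cases.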
# File: rapidsai_public_repos/cuml/python/cuml/_thirdparty/sklearn/utils/validation.py
# Original authors from Scikit-Learn:
#          Olivier Grisel
#          Gael Varoquaux
#          Andreas Mueller
#          Lars Buitinck
#          Alexandre Gramfort
#          Nicolas Tresegnie
#          Sylvain Marie
# License: BSD 3 clause


# This code originates from the Scikit-Learn library,
# it was since modified to allow GPU acceleration.
# This code is under BSD 3 clause license.
# Authors mentioned above do not endorse or promote this production.


from ....thirdparty_adapters import check_array
from ....common.exceptions import NotFittedError
from inspect import isclass
from cuml.internals.safe_imports import gpu_only_import
import numbers
from cuml.internals.safe_imports import cpu_only_import

np = cpu_only_import('numpy')
cp = gpu_only_import('cupy')
sp = gpu_only_import('cupyx.scipy.sparse')

FLOAT_DTYPES = (np.float64, np.float32, np.float16)


def check_X_y(X, y, accept_sparse=False, *, accept_large_sparse=True,
              dtype="numeric", order=None, copy=False, force_all_finite=True,
              ensure_2d=True, allow_nd=False, multi_output=False,
              ensure_min_samples=1, ensure_min_features=1, y_numeric=False):
    """Input validation for standard estimators.

    Checks X and y for consistent length, enforces X to be 2D and y 1D. By
    default, X is checked to be non-empty and containing only finite values.
    Standard input checks are also applied to y, such as checking that y
    does not have np.nan or np.inf targets. For multi-label y, set
    multi_output=True to allow 2D and sparse y. If the dtype of X is
    object, attempt converting to float, raising on failure.

    Parameters
    ----------
    X : nd-array, list or sparse matrix
        Input data.

    y : nd-array, list or sparse matrix
        Labels.

    accept_sparse : string, boolean or list of string (default=False)
        String[s] representing allowed sparse matrix formats, such as 'csc',
        'csr', etc. If the input is sparse but not in the allowed format,
        it will be converted to the first listed format. True allows the
        input to be any format. False means that a sparse matrix input will
        raise an error.
accept_large_sparse : bool (default=True) If a CSR, CSC, COO or BSR sparse matrix is supplied and accepted by accept_sparse, accept_large_sparse will cause it to be accepted only if its indices are stored with a 32-bit dtype. dtype : string, type, list of types or None (default="numeric") Data type of result. If None, the dtype of the input is preserved. If "numeric", dtype is preserved unless array.dtype is object. If dtype is a list of types, conversion on the first type is only performed if the dtype of the input is not in the list. order : 'F', 'C' or None (default=None) Whether an array will be forced to be fortran or c-style. copy : boolean (default=False) Whether a forced copy will be triggered. If copy=False, a copy might be triggered by a conversion. force_all_finite : boolean or 'allow-nan', (default=True) Whether to raise an error on np.inf, np.nan, pd.NA in X. This parameter does not influence whether y can have np.inf, np.nan, pd.NA values. The possibilities are: - True: Force all values of X to be finite. - False: accepts np.inf, np.nan, pd.NA in X. - 'allow-nan': accepts only np.nan or pd.NA values in X. Values cannot be infinite. ensure_2d : boolean (default=True) Whether to raise a value error if X is not 2D. allow_nd : boolean (default=False) Whether to allow X.ndim > 2. multi_output : boolean (default=False) Whether to allow 2D y (array or sparse matrix). If false, y will be validated as a vector. y cannot have np.nan or np.inf values if multi_output=True. ensure_min_samples : int (default=1) Make sure that X has a minimum number of samples in its first axis (rows for a 2D array). ensure_min_features : int (default=1) Make sure that the 2D array has some minimum number of features (columns). The default value of 1 rejects empty datasets. This check is only enforced when X has effectively 2 dimensions or is originally 1D and ``ensure_2d`` is True. Setting to 0 disables this check. 
y_numeric : boolean (default=False) Whether to ensure that y has a numeric type. If dtype of y is object, it is converted to float64. Should only be used for regression algorithms. Returns ------- X_converted : object The converted and validated X. y_converted : object The converted and validated y. """ if y is None: raise ValueError("y cannot be None") X = check_array(X, accept_sparse=accept_sparse, accept_large_sparse=accept_large_sparse, dtype=dtype, order=order, copy=copy, force_all_finite=force_all_finite, ensure_2d=ensure_2d, allow_nd=allow_nd, ensure_min_samples=ensure_min_samples, ensure_min_features=ensure_min_features) y = check_array(y, accept_sparse='csr', force_all_finite=True, ensure_2d=False, dtype='numeric' if y_numeric else None) if not multi_output and y.ndim > 1: if y.shape[1] > 1: raise ValueError( "y should be a 1d array, " "got an array of shape {} instead.".format(y.shape)) if X.shape[0] != y.shape[0]: raise ValueError("Found input variables with inconsistent numbers of" " samples") return X, y def check_random_state(seed): """Turn seed into a np.random.RandomState instance Parameters ---------- seed : None | int | instance of RandomState If seed is None, return the RandomState singleton used by np.random. If seed is an int, return a new RandomState instance seeded with seed. If seed is already a RandomState instance, return it. Otherwise raise ValueError. """ if seed is None or seed is np.random: return np.random.mtrand._rand if isinstance(seed, numbers.Integral): return np.random.RandomState(seed) if isinstance(seed, np.random.RandomState): return seed raise ValueError('%r cannot be used to seed a numpy.random.RandomState' ' instance' % seed) def check_is_fitted(estimator, attributes=None, *, msg=None, all_or_any=all): """Perform is_fitted validation for estimator. Checks if the estimator is fitted by verifying the presence of fitted attributes (ending with a trailing underscore) and otherwise raises a NotFittedError with the given message. 
This utility is meant to be used internally by estimators themselves, typically in their own predict / transform methods. Parameters ---------- estimator : estimator instance. estimator instance for which the check is performed. attributes : str, list or tuple of str, default=None Attribute name(s) given as string or a list/tuple of strings Eg.: ``["coef_", "estimator_", ...], "coef_"`` If `None`, `estimator` is considered fitted if there exist an attribute that ends with a underscore and does not start with double underscore. msg : string The default error message is, "This %(name)s instance is not fitted yet. Call 'fit' with appropriate arguments before using this estimator." For custom messages if "%(name)s" is present in the message string, it is substituted for the estimator name. Eg. : "Estimator, %(name)s, must be fitted before sparsifying". all_or_any : callable, {all, any}, default all Specify whether all or any of the given attributes must exist. Returns ------- None Raises ------ NotFittedError If the attributes are not found. """ if isclass(estimator): raise TypeError("{} is a class, not an instance.".format(estimator)) if msg is None: msg = ("This %(name)s instance is not fitted yet. Call 'fit' with " "appropriate arguments before using this estimator.") if not hasattr(estimator, 'fit'): raise TypeError("%s is not an estimator instance." % (estimator)) if attributes is not None: if not isinstance(attributes, (list, tuple)): attributes = [attributes] attrs = all_or_any([hasattr(estimator, attr) for attr in attributes]) else: attrs = [v for v in vars(estimator) if v.endswith("_") and not v.startswith("__")] if not attrs: raise NotFittedError(msg % {'name': type(estimator).__name__}) def _allclose_dense_sparse(x, y, rtol=1e-7, atol=1e-9): """Check allclose for sparse and dense data. Both x and y need to be either sparse or dense, they can't be mixed. Parameters ---------- x : array-like or sparse matrix First array to compare. 
y : array-like or sparse matrix Second array to compare. rtol : float, optional relative tolerance; see numpy.allclose atol : float, optional absolute tolerance; see numpy.allclose. Note that the default here is more tolerant than the default for numpy.testing.assert_allclose, where atol=0. """ if sp.issparse(x) and sp.issparse(y): x = x.tocsr() y = y.tocsr() x.sum_duplicates() y.sum_duplicates() return (cp.array_equal(x.indices, y.indices) and cp.array_equal(x.indptr, y.indptr) and cp.allclose(x.data, y.data, rtol=rtol, atol=atol)) elif not sp.issparse(x) and not sp.issparse(y): return cp.allclose(x, y, rtol=rtol, atol=atol) raise ValueError("Can only compare two sparse matrices, not a sparse " "matrix and an array")
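As a quick illustration of the trailing-underscore convention that `check_is_fitted` relies on (an estimator is considered fitted once it carries an attribute ending in `_`), here is a minimal, pure-Python sketch. The `ToyEstimator` class and `toy_check_is_fitted` helper are hypothetical illustrations, not part of cuML; only the attribute-naming convention is taken from the file above.

```python
class NotFittedError(ValueError):
    """Stand-in for cuML's NotFittedError, used only in this sketch."""


def toy_check_is_fitted(estimator):
    # Same default rule as check_is_fitted above: any attribute ending
    # with "_" (and not starting with "__") marks the estimator as fitted.
    attrs = [v for v in vars(estimator)
             if v.endswith("_") and not v.startswith("__")]
    if not attrs:
        raise NotFittedError(
            "This %s instance is not fitted yet." % type(estimator).__name__)


class ToyEstimator:
    def fit(self, X):
        # Learning anything sets a trailing-underscore attribute.
        self.mean_ = sum(X) / len(X)
        return self

    def predict(self, X):
        toy_check_is_fitted(self)
        return [x - self.mean_ for x in X]


est = ToyEstimator()
try:
    est.predict([1.0, 2.0])
except NotFittedError:
    print("not fitted")  # reached: no trailing-underscore attribute yet
print(est.fit([1.0, 2.0, 3.0]).predict([2.0]))  # → [0.0]
```

The check inspects `vars(estimator)` rather than a boolean flag, which is why cuML estimators (like scikit-learn's) never set attributes ending in `_` inside `__init__`.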
rapidsai_public_repos/cuml/python/cuml/_thirdparty/sklearn/utils/sparsefuncs.py
# Original authors from Scikit-Learn:
#          Manoj Kumar
#          Thomas Unterthiner
#          Giorgio Patrini
#
# License: BSD 3 clause


# This code originates from the Scikit-Learn library,
# it was since modified to allow GPU acceleration.
# This code is under BSD 3 clause license.
# Authors mentioned above do not endorse or promote this production.


from ....thirdparty_adapters.sparsefuncs_fast import (
    csr_mean_variance_axis0 as _csr_mean_var_axis0,
    csc_mean_variance_axis0 as _csc_mean_var_axis0)
from cuml.internals.safe_imports import cpu_only_import
from cuml.internals.safe_imports import gpu_only_import
from cuml.internals.safe_imports import gpu_only_import_from
from cuml.internals.safe_imports import cpu_only_import_from

cpu_sp = cpu_only_import_from('scipy', 'sparse')
gpu_sp = gpu_only_import_from('cupyx.scipy', 'sparse')
np = gpu_only_import('cupy')
cpu_np = cpu_only_import('numpy')


def iscsr(X):
    return isinstance(X, cpu_sp.csr_matrix) \
        or isinstance(X, gpu_sp.csr_matrix)


def iscsc(X):
    return isinstance(X, cpu_sp.csc_matrix) \
        or isinstance(X, gpu_sp.csc_matrix)


def issparse(X):
    return iscsr(X) or iscsc(X)


def _raise_typeerror(X):
    """Raises a TypeError if X is not a CSR or CSC matrix"""
    input_type = X.format if issparse(X) else type(X)
    err = "Expected a CSR or CSC sparse matrix, got %s." % input_type
    raise TypeError(err)


def _raise_error_wrong_axis(axis):
    if axis not in (0, 1):
        raise ValueError(
            "Unknown axis value: %d. Use 0 for rows, or 1 for columns"
            % axis)


def inplace_csr_column_scale(X, scale):
    """Inplace column scaling of a CSR matrix.

    Scale each feature of the data matrix by multiplying with specific
    scale provided by the caller assuming a (n_samples, n_features) shape.

    Parameters
    ----------
    X : CSR matrix with shape (n_samples, n_features)
        Matrix to normalize using the variance of the features.

    scale : float array with shape (n_features,)
        Array of precomputed feature-wise values to use for scaling.
    """
    assert scale.shape[0] == X.shape[1]
    indices_copy = X.indices.copy()
    indices_copy[indices_copy >= X.shape[1]] = X.shape[1] - 1
    X.data *= scale.take(indices_copy)


def inplace_csr_row_scale(X, scale):
    """Inplace row scaling of a CSR matrix.

    Scale each sample of the data matrix by multiplying with specific
    scale provided by the caller assuming a (n_samples, n_features) shape.

    Parameters
    ----------
    X : CSR sparse matrix, shape (n_samples, n_features)
        Matrix to be scaled.

    scale : float array with shape (n_samples,)
        Array of precomputed sample-wise values to use for scaling.
    """
    assert scale.shape[0] == X.shape[0]
    X.data *= np.repeat(scale, np.diff(X.indptr).tolist())


def inplace_column_scale(X, scale):
    """Inplace column scaling of a CSC/CSR matrix.

    Scale each feature of the data matrix by multiplying with specific
    scale provided by the caller assuming a (n_samples, n_features) shape.

    Parameters
    ----------
    X : CSC or CSR matrix with shape (n_samples, n_features)
        Matrix to normalize using the variance of the features.

    scale : float array with shape (n_features,)
        Array of precomputed feature-wise values to use for scaling.
    """
    if iscsc(X):
        inplace_csr_row_scale(X.T, scale)
    elif iscsr(X):
        inplace_csr_column_scale(X, scale)
    else:
        _raise_typeerror(X)


def mean_variance_axis(X, axis):
    """Compute mean and variance along an axis on a CSR or CSC matrix.

    Parameters
    ----------
    X : CSR or CSC sparse matrix, shape (n_samples, n_features)
        Input data.

    axis : int (either 0 or 1)
        Axis along which the mean and variance should be computed.

    Returns
    -------
    means : float array with shape (n_features,)
        Feature-wise means

    variances : float array with shape (n_features,)
        Feature-wise variances
    """
    _raise_error_wrong_axis(axis)

    if iscsr(X):
        if axis == 0:
            return _csr_mean_var_axis0(X)
        else:
            return _csc_mean_var_axis0(X.T)
    elif iscsc(X):
        if axis == 0:
            return _csc_mean_var_axis0(X)
        else:
            return _csr_mean_var_axis0(X.T)
    else:
        _raise_typeerror(X)


ufunc_dic = {
    'min': np.min,
    'max': np.max,
    'nanmin': np.nanmin,
    'nanmax': np.nanmax
}


def _minor_reduce(X, min_or_max):
    fminmax = ufunc_dic[min_or_max]

    major_index = np.flatnonzero(np.diff(X.indptr))
    values = cpu_np.zeros(major_index.shape[0], dtype=X.dtype)
    ptrs = X.indptr[major_index]

    start = ptrs[0]
    for i, end in enumerate(ptrs[1:]):
        values[i] = fminmax(X.data[start:end])
        start = end
    values[-1] = fminmax(X.data[end:])

    return major_index, np.array(values)


def _min_or_max_axis(X, axis, min_or_max):
    N = X.shape[axis]
    if N == 0:
        raise ValueError("zero-size array to reduction operation")
    M = X.shape[1 - axis]
    mat = X.tocsc() if axis == 0 else X.tocsr()
    mat.sum_duplicates()

    major_index, value = _minor_reduce(mat, min_or_max)
    not_full = np.diff(mat.indptr)[major_index] < N

    if 'min' in min_or_max:
        fminmax = np.fmin
    else:
        fminmax = np.fmax

    is_nan = np.isnan(value)
    value[not_full] = fminmax(value[not_full], 0)
    if 'nan' not in min_or_max:
        value[is_nan] = np.nan

    mask = value != 0
    major_index = np.compress(mask, major_index)
    value = np.compress(mask, value)

    if axis == 0:
        res = gpu_sp.coo_matrix(
            (value, (np.zeros(len(value)), major_index)),
            dtype=X.dtype, shape=(1, M))
    else:
        res = gpu_sp.coo_matrix(
            (value, (major_index, np.zeros(len(value)))),
            dtype=X.dtype, shape=(M, 1))
    return res.A.ravel()


def _sparse_min_or_max(X, axis, min_or_max):
    if axis is None:
        if 0 in X.shape:
            raise ValueError("zero-size array to reduction operation")
        if X.nnz == 0:
            return X.dtype.type(0)
        fminmax = ufunc_dic[min_or_max]
        m = fminmax(X.data)
        if np.isnan(m):
            if 'nan' in min_or_max:
                m = 0
        elif X.nnz != cpu_np.product(X.shape):
            if 'min' in min_or_max:
                m = m if m <= 0 else 0
            else:
                m = m if m >= 0 else 0
        return X.dtype.type(m)
    if axis < 0:
        axis += 2
    if (axis == 0) or (axis == 1):
        return _min_or_max_axis(X, axis, min_or_max)
    else:
        raise ValueError("invalid axis, use 0 for rows, or 1 for columns")


def _sparse_min_max(X, axis):
    return (_sparse_min_or_max(X, axis, 'min'),
            _sparse_min_or_max(X, axis, 'max'))


def _sparse_nan_min_max(X, axis):
    return (_sparse_min_or_max(X, axis, 'nanmin'),
            _sparse_min_or_max(X, axis, 'nanmax'))


def min_max_axis(X, axis, ignore_nan=False):
    """Compute minimum and maximum along an axis on a CSR or CSC matrix,
    and optionally ignore NaN values.

    Parameters
    ----------
    X : CSR or CSC sparse matrix, shape (n_samples, n_features)
        Input data.

    axis : int (either 0 or 1)
        Axis along which the minima and maxima should be computed.

    ignore_nan : bool, default is False
        Ignore or pass through NaN values.

    Returns
    -------
    mins : float array with shape (n_features,)
        Feature-wise minima

    maxs : float array with shape (n_features,)
        Feature-wise maxima
    """
    if issparse(X):
        if ignore_nan:
            return _sparse_nan_min_max(X, axis=axis)
        else:
            return _sparse_min_max(X, axis=axis)
    else:
        _raise_typeerror(X)
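To make the CSR bookkeeping behind `inplace_csr_column_scale` and `mean_variance_axis` concrete without requiring CuPy or SciPy, here is a pure-Python sketch over the raw `(data, indices, indptr)` triplet. The function names are hypothetical stand-ins; the key ideas mirrored from the file above are that stored entry `k` lives in column `indices[k]`, and that feature-wise statistics must also count the implicit zeros.

```python
def csr_column_scale(data, indices, scale):
    """Scale each stored value by the scale factor of its column.

    Mirrors inplace_csr_column_scale above: entry k belongs to column
    indices[k], so it is multiplied by scale[indices[k]].
    """
    for k in range(len(data)):
        data[k] *= scale[indices[k]]


def csr_mean_variance_axis0(data, indices, n_rows, n_cols):
    """Feature-wise mean and variance of a CSR matrix (axis=0).

    Dividing by n_rows rather than by the number of stored entries is
    what accounts for the implicit zeros of the sparse format.
    """
    sums = [0.0] * n_cols
    sq_sums = [0.0] * n_cols
    for v, j in zip(data, indices):
        sums[j] += v
        sq_sums[j] += v * v
    means = [s / n_rows for s in sums]
    variances = [sq / n_rows - m * m for sq, m in zip(sq_sums, means)]
    return means, variances


# 2x3 matrix [[1, 0, 2],
#             [0, 3, 0]] in CSR form:
data, indices, indptr = [1.0, 2.0, 3.0], [0, 2, 1], [0, 2, 3]
print(csr_mean_variance_axis0(data, indices, 2, 3))
# → ([0.5, 1.5, 1.0], [0.25, 2.25, 1.0])

csr_column_scale(data, indices, [2.0, 1.0, 0.5])
print(data)  # → [2.0, 1.0, 3.0]
```

The real implementation above avoids the Python loop by using `scale.take(X.indices)` on the GPU, but the per-entry arithmetic is the same.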
rapidsai_public_repos/cuml/python/cuml/_thirdparty/sklearn/utils/skl_dependencies.py
# Original authors from Scikit-Learn:
#          Gael Varoquaux <gael.varoquaux@normalesup.org>
# License: BSD 3 clause


# This code originates from the Scikit-Learn library,
# it was since modified to allow GPU acceleration.
# This code is under BSD 3 clause license.
# Authors mentioned above do not endorse or promote this production.


from cuml.internals.array_sparse import SparseCumlArray
from ....internals.base import Base
from ..utils.validation import check_X_y
from ....thirdparty_adapters import check_array


class BaseEstimator(Base):
    """Base class for all estimators in scikit-learn.

    Notes
    -----
    All estimators should specify all the parameters that can be set
    at the class level in their ``__init__`` as explicit keyword
    arguments (no ``*args`` or ``**kwargs``).
    """

    def __init_subclass__(cls):
        orig_init = cls.__init__

        def init(self, *args, **kwargs):
            handle = kwargs['handle'] if 'handle' in kwargs else None
            verbose = kwargs['verbose'] if 'verbose' in kwargs else False
            output_type = kwargs['output_type'] if 'output_type' in kwargs \
                else None
            Base.__init__(self, handle=handle, verbose=verbose,
                          output_type=output_type)
            for param in ['handle', 'verbose', 'output_type']:
                if param in kwargs:
                    del kwargs[param]
            orig_init(self, *args, **kwargs)

        cls.__init__ = init

    def _check_n_features(self, X, reset):
        """Set the `n_features_in_` attribute, or check against it.

        Parameters
        ----------
        X : {ndarray, sparse matrix} of shape (n_samples, n_features)
            The input samples.
        reset : bool
            If True, the `n_features_in_` attribute is set to `X.shape[1]`.
            Else, the attribute must already exist and the function checks
            that it is equal to `X.shape[1]`.
        """
        n_features = X.shape[1]

        if reset:
            self.n_features_in_ = n_features
        else:
            if not hasattr(self, 'n_features_in_'):
                raise RuntimeError(
                    "The reset parameter is False but there is no "
                    "n_features_in_ attribute. Is this estimator fitted?"
                )
            if n_features != self.n_features_in_:
                raise ValueError(
                    'X has {} features, but this {} is expecting {} '
                    'features as input.'.format(n_features,
                                                self.__class__.__name__,
                                                self.n_features_in_)
                )

    def _validate_data(self, X, y=None, reset=True,
                       validate_separately=False, **check_params):
        """Validate input data and set or check the `n_features_in_`
        attribute.

        Parameters
        ----------
        X : {array-like, sparse matrix, dataframe} of shape \
                (n_samples, n_features)
            The input samples.
        y : array-like of shape (n_samples,), default=None
            The targets. If None, `check_array` is called on `X` and
            `check_X_y` is called otherwise.
        reset : bool, default=True
            Whether to reset the `n_features_in_` attribute.
            If False, the input will be checked for consistency with data
            provided when reset was last True.
        validate_separately : False or tuple of dicts, default=False
            Only used if y is not None.
            If False, call validate_X_y(). Else, it must be a tuple of
            kwargs to be used for calling check_array() on X and y
            respectively.
        **check_params : kwargs
            Parameters passed to :func:`sklearn.utils.check_array` or
            :func:`sklearn.utils.check_X_y`. Ignored if
            validate_separately is not False.

        Returns
        -------
        out : {ndarray, sparse matrix} or tuple of these
            The validated input. A tuple is returned if `y` is not None.
        """
        if y is None:
            if self._get_tags()['requires_y']:
                raise ValueError(
                    f"This {self.__class__.__name__} estimator "
                    f"requires y to be passed, but the target y is None."
                )
            X = check_array(X, **check_params)
            out = X
        else:
            if validate_separately:
                # We need this because some estimators validate X and y
                # separately, and in general, separately calling
                # check_array() on X and y isn't equivalent to just
                # calling check_X_y() :(
                check_X_params, check_y_params = validate_separately
                X = check_array(X, **check_X_params)
                y = check_array(y, **check_y_params)
            else:
                X, y = check_X_y(X, y, **check_params)
            out = X, y

        if check_params.get('ensure_2d', True):
            self._check_n_features(X, reset=reset)

        return out


class TransformerMixin:
    """Mixin class for all transformers in scikit-learn."""

    def fit_transform(self, X, y=None, **fit_params) -> SparseCumlArray:
        """Fit to data, then transform it.

        Fits transformer to X and y with optional parameters fit_params
        and returns a transformed version of X.

        Parameters
        ----------
        X : {array-like, sparse matrix, dataframe} of shape \
                (n_samples, n_features)

        y : ndarray of shape (n_samples,), default=None
            Target values.

        **fit_params : dict
            Additional fit parameters.

        Returns
        -------
        X_new : ndarray array of shape (n_samples, n_features_new)
            Transformed array.
        """
        # non-optimized default implementation; override when a better
        # method is possible for a given clustering algorithm
        if y is None:
            # fit method of arity 1 (unsupervised transformation)
            return self.fit(X, **fit_params).transform(X)
        else:
            # fit method of arity 2 (supervised transformation)
            return self.fit(X, y, **fit_params).transform(X)


class BaseComposition:
    """Handles parameter management for classifiers composed of named
    estimators.
    """

    def _get_params(self, attr, deep=True):
        out = super().get_params(deep=deep)
        if not deep:
            return out
        estimators = getattr(self, attr)
        out.update(estimators)
        for name, estimator in estimators:
            if hasattr(estimator, 'get_params'):
                for key, value in estimator.get_params(deep=True).items():
                    out['%s__%s' % (name, key)] = value
        return out

    def _set_params(self, attr, **params):
        # Ensure strict ordering of parameter setting:
        # 1. All steps
        if attr in params:
            setattr(self, attr, params.pop(attr))
        # 2. Step replacement
        items = getattr(self, attr)
        names = []
        if items:
            names, _ = zip(*items)
        for name in list(params.keys()):
            if '__' not in name and name in names:
                self._replace_estimator(attr, name, params.pop(name))
        # 3. Step parameters and other initialisation arguments
        super().set_params(**params)
        return self

    def _replace_estimator(self, attr, name, new_val):
        # assumes `name` is a valid estimator name
        new_estimators = list(getattr(self, attr))
        for i, (estimator_name, _) in enumerate(new_estimators):
            if estimator_name == name:
                new_estimators[i] = (name, new_val)
                break
        setattr(self, attr, new_estimators)

    def _validate_names(self, names):
        if len(set(names)) != len(names):
            raise ValueError('Names provided are not unique: '
                             '{0!r}'.format(list(names)))
        invalid_names = set(names).intersection(self.get_params(deep=False))
        if invalid_names:
            raise ValueError('Estimator names conflict with constructor '
                             'arguments: {0!r}'.format(
                                 sorted(invalid_names)))
        invalid_names = [name for name in names if '__' in name]
        if invalid_names:
            raise ValueError('Estimator names must not contain __: got '
                             '{0!r}'.format(invalid_names))
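The `name__param` routing that `BaseComposition._set_params` and `_get_params` implement can be sketched with plain dicts. This is a hypothetical standalone helper, not cuML or scikit-learn API; it only demonstrates the partitioning rule from `_set_params` above: keys without `__` that match a step name replace that whole step, `step__param` keys are forwarded to the named step, and everything else is treated as a parameter of the composition itself.

```python
def split_nested_params(params, names):
    """Partition params the way BaseComposition._set_params routes them.

    Returns (replacements, forwarded, own):
      - replacements: whole-step replacements (key matches a step name)
      - forwarded: per-step sub-parameters from "step__param" keys
      - own: parameters of the composition itself
    """
    replacements, forwarded, own = {}, {}, {}
    for key, value in params.items():
        if "__" in key:
            step, _, sub = key.partition("__")
            forwarded.setdefault(step, {})[sub] = value
        elif key in names:
            replacements[key] = value
        else:
            own[key] = value
    return replacements, forwarded, own


# Hypothetical composition with steps "scaler" and "clf":
reps, fwd, own = split_nested_params(
    {"scaler": "NEW", "scaler__with_mean": False, "verbose": 2},
    names=("scaler", "clf"))
print(reps)  # → {'scaler': 'NEW'}
print(fwd)   # → {'scaler': {'with_mean': False}}
print(own)   # → {'verbose': 2}
```

This ordering (replace steps first, then forward sub-parameters) matters: `_set_params` pops whole-step replacements before calling `set_params`, so a `scaler__with_mean` key is applied to the *new* scaler, not the old one.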