Dataset schema (four columns):

    repo_id            string, length 21 to 96
    file_path          string, length 31 to 155
    content            string, length 1 to 92.9M
    __index_level_0__  int64, range 0 to 0
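The schema above can be modeled directly in plain Python. The sketch below is illustrative only (it uses two rows copied from the listing, with their truncated content previews kept as-is) and shows one way to summarize such a dump by file extension; the bucketing helper is an assumption, not part of the dataset itself.

```python
# Minimal sketch: each dataset row as a dict with the four schema columns.
# The "content" values are the truncated previews from the listing, verbatim.
rows = [
    {
        "repo_id": "rapidsai_public_repos/dask-cuda/docs",
        "file_path": "rapidsai_public_repos/dask-cuda/docs/source/ucx.rst",
        "content": "UCX Integration ...",
        "__index_level_0__": 0,
    },
    {
        "repo_id": "rapidsai_public_repos/dask-cuda",
        "file_path": "rapidsai_public_repos/dask-cuda/dask_cuda/cli.py",
        "content": "from __future__ import absolute_import ...",
        "__index_level_0__": 0,
    },
]

# Bucket rows by file extension to summarize what file types the dump holds.
by_ext = {}
for row in rows:
    ext = row["file_path"].rsplit(".", 1)[-1]
    by_ext.setdefault(ext, []).append(row["file_path"])

print(sorted(by_ext))  # -> ['py', 'rst']
```

The same grouping applied to the full table would show the mix of `.py`, `.rst`, `.sh`, `.md`, and config files listed below.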
rapidsai_public_repos/dask-cuda/docs
rapidsai_public_repos/dask-cuda/docs/source/ucx.rst
UCX Integration =============== Communication can be a major bottleneck in distributed systems. Dask-CUDA addresses this by supporting integration with `UCX <https://www.openucx.org/>`_, an optimized communication framework that provides high-performance networking and supports a variety of transport methods, includin...
0
rapidsai_public_repos/dask-cuda/docs
rapidsai_public_repos/dask-cuda/docs/source/explicit_comms.rst
Explicit-comms ============== Communication and scheduling overhead can be a major bottleneck in Dask/Distributed. Dask-CUDA addresses this by introducing an API for explicit communication in Dask tasks. The idea is that Dask/Distributed spawns workers and distribute data as usually while the user can submit tasks on ...
0
rapidsai_public_repos/dask-cuda/docs
rapidsai_public_repos/dask-cuda/docs/source/conf.py
# -*- coding: utf-8 -*- # # Configuration file for the Sphinx documentation builder. # # This file does only contain a selection of the most common options. For a # full list see the documentation: # http://www.sphinx-doc.org/en/master/config # -- Path setup ------------------------------------------------------------...
0
rapidsai_public_repos/dask-cuda/docs
rapidsai_public_repos/dask-cuda/docs/source/api.rst
API === Cluster ------- .. currentmodule:: dask_cuda .. autoclass:: LocalCUDACluster :members: CLI --- Worker ~~~~~~ .. click:: dask_cuda.cli:worker :prog: dask cuda :nested: none Cluster configuration ~~~~~~~~~~~~~~~~~~~~~ .. click:: dask_cuda.cli:config :prog: dask cuda :nested: none Client initia...
0
rapidsai_public_repos/dask-cuda/docs
rapidsai_public_repos/dask-cuda/docs/source/index.rst
Dask-CUDA ========= Dask-CUDA is a library extending `Dask.distributed <https://distributed.dask.org/en/latest/>`_'s single-machine `LocalCluster <https://docs.dask.org/en/latest/setup/single-distributed.html#localcluster>`_ and `Worker <https://distributed.dask.org/en/latest/worker.html>`_ for use in distributed GPU ...
0
rapidsai_public_repos/dask-cuda/docs/source
rapidsai_public_repos/dask-cuda/docs/source/examples/ucx.rst
Enabling UCX communication ========================== A CUDA cluster using UCX communication can be started automatically with LocalCUDACluster or manually with the ``dask cuda worker`` CLI tool. In either case, a ``dask.distributed.Client`` must be made for the worker cluster using the same Dask UCX configuration; se...
0
rapidsai_public_repos/dask-cuda/docs/source
rapidsai_public_repos/dask-cuda/docs/source/examples/worker_count.rst
.. _controlling-number-of-workers: Controlling number of workers ============================= Users can restrict activity to specific GPUs by explicitly setting ``CUDA_VISIBLE_DEVICES``; for a LocalCUDACluster, this can provided as a keyword argument. For example, to restrict activity to the first two indexed GPUs: ...
0
rapidsai_public_repos/dask-cuda/docs/source
rapidsai_public_repos/dask-cuda/docs/source/examples/best-practices.rst
Best Practices ============== Multi-GPU Machines ~~~~~~~~~~~~~~~~~~ When choosing between two multi-GPU setups, it is best to pick the one where most GPUs are co-located with one-another. This could be a `DGX <https://www.nvidia.com/en-us/data-center/dgx-systems/>`_, a cloud instance with `multi-gpu options <https:...
0
rapidsai_public_repos/dask-cuda
rapidsai_public_repos/dask-cuda/ci/test_python.sh
#!/bin/bash # Copyright (c) 2022-2023, NVIDIA CORPORATION. set -euo pipefail . /opt/conda/etc/profile.d/conda.sh rapids-logger "Generate Python testing dependencies" rapids-dependency-file-generator \ --output conda \ --file_key test_python \ --matrix "cuda=${RAPIDS_CUDA_VERSION%.*};arch=$(arch);py=${RAPIDS_PY...
0
rapidsai_public_repos/dask-cuda
rapidsai_public_repos/dask-cuda/ci/build_python.sh
#!/bin/bash # Copyright (c) 2022, NVIDIA CORPORATION. set -euo pipefail source rapids-env-update export CMAKE_GENERATOR=Ninja rapids-print-env package_name="dask_cuda" version=$(rapids-generate-version) commit=$(git rev-parse HEAD) echo "${version}" | tr -d '"' > VERSION sed -i "/^__git_commit__/ s/= .*/= \"${co...
0
rapidsai_public_repos/dask-cuda
rapidsai_public_repos/dask-cuda/ci/build_python_pypi.sh
#!/bin/bash python -m pip install build --user version=$(rapids-generate-version) commit=$(git rev-parse HEAD) # While conda provides these during conda-build, they are also necessary during # the setup.py build for PyPI export GIT_DESCRIBE_TAG=$(git describe --abbrev=0 --tags) export GIT_DESCRIBE_NUMBER=$(git rev-...
0
rapidsai_public_repos/dask-cuda
rapidsai_public_repos/dask-cuda/ci/check_style.sh
#!/bin/bash # Copyright (c) 2020-2022, NVIDIA CORPORATION. set -euo pipefail rapids-logger "Create checks conda environment" . /opt/conda/etc/profile.d/conda.sh rapids-dependency-file-generator \ --output conda \ --file_key checks \ --matrix "cuda=${RAPIDS_CUDA_VERSION%.*};arch=$(arch);py=${RAPIDS_PY_VERSION}"...
0
rapidsai_public_repos/dask-cuda
rapidsai_public_repos/dask-cuda/ci/build_docs.sh
#!/bin/bash set -euo pipefail rapids-logger "Create test conda environment" . /opt/conda/etc/profile.d/conda.sh rapids-dependency-file-generator \ --output conda \ --file_key docs \ --matrix "cuda=${RAPIDS_CUDA_VERSION%.*};arch=$(arch);py=${RAPIDS_PY_VERSION}" | tee env.yaml rapids-mamba-retry env creat...
0
rapidsai_public_repos/dask-cuda/ci
rapidsai_public_repos/dask-cuda/ci/release/update-version.sh
#!/bin/bash # Copyright (c) 2020, NVIDIA CORPORATION. ################################################################################ # dask-cuda version updater ################################################################################ ## Usage # bash update-version.sh <new_version> # Format is YY.MM.PP - no...
0
rapidsai_public_repos/dask-cuda
rapidsai_public_repos/dask-cuda/dask_cuda/proxy_object.py
import copy as _copy import functools import operator import os import pickle import time from collections import OrderedDict from contextlib import nullcontext from typing import TYPE_CHECKING, Any, Dict, Iterable, Optional, Tuple, Type, Union import pandas import dask import dask.array.core import dask.dataframe.me...
0
rapidsai_public_repos/dask-cuda
rapidsai_public_repos/dask-cuda/dask_cuda/cuda_worker.py
from __future__ import absolute_import, division, print_function import asyncio import atexit import logging import os import warnings from toolz import valmap import dask from distributed import Nanny from distributed.core import Server from distributed.deploy.cluster import Cluster from distributed.proctitle impor...
0
rapidsai_public_repos/dask-cuda
rapidsai_public_repos/dask-cuda/dask_cuda/get_device_memory_objects.py
from typing import Set from dask.sizeof import sizeof from dask.utils import Dispatch dispatch = Dispatch(name="get_device_memory_objects") class DeviceMemoryId: """ID and size of device memory objects Instead of keeping a reference to device memory objects this class only saves the id and size in orde...
0
rapidsai_public_repos/dask-cuda
rapidsai_public_repos/dask-cuda/dask_cuda/cli.py
from __future__ import absolute_import, division, print_function import logging import click from tornado.ioloop import IOLoop, TimeoutError from dask import config as dask_config from distributed import Client from distributed.cli.utils import install_signal_handlers from distributed.preloading import validate_prel...
0
rapidsai_public_repos/dask-cuda
rapidsai_public_repos/dask-cuda/dask_cuda/device_host_file.py
import itertools import logging import os import time import numpy from zict import Buffer, Func from zict.common import ZictBase import dask from distributed.protocol import ( dask_deserialize, dask_serialize, deserialize, deserialize_bytes, serialize, serialize_bytelist, ) from distributed.s...
0
rapidsai_public_repos/dask-cuda
rapidsai_public_repos/dask-cuda/dask_cuda/local_cuda_cluster.py
import copy import logging import os import warnings from functools import partial import dask from distributed import LocalCluster, Nanny, Worker from distributed.worker_memory import parse_memory_limit from .device_host_file import DeviceHostFile from .initialize import initialize from .plugins import CPUAffinity, ...
0
rapidsai_public_repos/dask-cuda
rapidsai_public_repos/dask-cuda/dask_cuda/utils_test.py
from typing import Literal import distributed from distributed import Nanny, Worker class MockWorker(Worker): """Mock Worker class preventing NVML from getting used by SystemMonitor. By preventing the Worker from initializing NVML in the SystemMonitor, we can mock test multiple devices in `CUDA_VISIBLE_...
0
rapidsai_public_repos/dask-cuda
rapidsai_public_repos/dask-cuda/dask_cuda/is_device_object.py
from __future__ import absolute_import, division, print_function from dask.utils import Dispatch is_device_object = Dispatch(name="is_device_object") @is_device_object.register(object) def is_device_object_default(o): return hasattr(o, "__cuda_array_interface__") @is_device_object.register(list) @is_device_ob...
0
rapidsai_public_repos/dask-cuda
rapidsai_public_repos/dask-cuda/dask_cuda/initialize.py
import logging import os import click import numba.cuda import dask from distributed.diagnostics.nvml import get_device_index_and_uuid, has_cuda_context from .utils import get_ucx_config logger = logging.getLogger(__name__) def _create_cuda_context_handler(): if int(os.environ.get("DASK_CUDA_TEST_SINGLE_GPU",...
0
rapidsai_public_repos/dask-cuda
rapidsai_public_repos/dask-cuda/dask_cuda/_version.py
# Copyright (c) 2023, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to...
0
rapidsai_public_repos/dask-cuda
rapidsai_public_repos/dask-cuda/dask_cuda/plugins.py
import importlib import os from distributed import WorkerPlugin from .utils import get_rmm_log_file_name, parse_device_memory_limit class CPUAffinity(WorkerPlugin): def __init__(self, cores): self.cores = cores def setup(self, worker=None): os.sched_setaffinity(0, self.cores) class RMMSet...
0
rapidsai_public_repos/dask-cuda
rapidsai_public_repos/dask-cuda/dask_cuda/proxify_host_file.py
import abc import gc import io import logging import os import os.path import pathlib import threading import time import traceback import warnings import weakref from collections import defaultdict from collections.abc import MutableMapping from typing import ( Any, Callable, DefaultDict, Dict, Has...
0
rapidsai_public_repos/dask-cuda
rapidsai_public_repos/dask-cuda/dask_cuda/__init__.py
import sys if sys.platform != "linux": raise ImportError("Only Linux is supported by Dask-CUDA at this time") import dask import dask.utils import dask.dataframe.core import dask.dataframe.shuffle import dask.dataframe.multi import dask.bag.core from ._version import __git_commit__, __version__ from .cuda_worke...
0
rapidsai_public_repos/dask-cuda
rapidsai_public_repos/dask-cuda/dask_cuda/is_spillable_object.py
from __future__ import absolute_import, division, print_function from typing import Optional from dask.utils import Dispatch is_spillable_object = Dispatch(name="is_spillable_object") @is_spillable_object.register(list) @is_spillable_object.register(tuple) @is_spillable_object.register(set) @is_spillable_object.re...
0
rapidsai_public_repos/dask-cuda
rapidsai_public_repos/dask-cuda/dask_cuda/proxify_device_objects.py
import functools import pydoc from collections import defaultdict from functools import partial from typing import List, MutableMapping, Optional, Tuple, TypeVar import dask from dask.utils import Dispatch from .proxy_object import ProxyObject, asproxy dispatch = Dispatch(name="proxify_device_objects") incompatible_...
0
rapidsai_public_repos/dask-cuda
rapidsai_public_repos/dask-cuda/dask_cuda/utils.py
import math import operator import os import pickle import time import warnings from contextlib import suppress from functools import singledispatch from multiprocessing import cpu_count from typing import Optional import numpy as np import pynvml import toolz import dask import distributed # noqa: required for dask...
0
rapidsai_public_repos/dask-cuda
rapidsai_public_repos/dask-cuda/dask_cuda/VERSION
24.02.00
0
rapidsai_public_repos/dask-cuda
rapidsai_public_repos/dask-cuda/dask_cuda/worker_spec.py
import os from dask.distributed import Nanny from distributed.system import MEMORY_LIMIT from .initialize import initialize from .local_cuda_cluster import cuda_visible_devices from .plugins import CPUAffinity from .utils import get_cpu_affinity, get_gpu_count def worker_spec( interface=None, protocol=None,...
0
rapidsai_public_repos/dask-cuda
rapidsai_public_repos/dask-cuda/dask_cuda/disk_io.py
import itertools import os import os.path import pathlib import tempfile import threading import weakref from typing import Callable, Iterable, Mapping, Optional, Union import numpy as np import dask from distributed.utils import nbytes _new_cuda_buffer: Optional[Callable[[int], object]] = None def get_new_cuda_bu...
0
rapidsai_public_repos/dask-cuda/dask_cuda
rapidsai_public_repos/dask-cuda/dask_cuda/explicit_comms/comms.py
import asyncio import concurrent.futures import contextlib import time import uuid from typing import Any, Dict, Hashable, Iterable, List, Optional import distributed.comm from distributed import Client, Worker, default_client, get_worker from distributed.comm.addressing import parse_address, parse_host_port, unparse_...
0
rapidsai_public_repos/dask-cuda/dask_cuda/explicit_comms
rapidsai_public_repos/dask-cuda/dask_cuda/explicit_comms/dataframe/shuffle.py
from __future__ import annotations import asyncio import functools import inspect from collections import defaultdict from math import ceil from operator import getitem from typing import Any, Callable, Dict, List, Optional, Set, TypeVar import dask import dask.config import dask.dataframe import dask.utils import di...
0
rapidsai_public_repos/dask-cuda/dask_cuda
rapidsai_public_repos/dask-cuda/dask_cuda/tests/test_dask_cuda_worker.py
from __future__ import absolute_import, division, print_function import os import pkgutil import subprocess import sys from unittest.mock import patch import pytest from distributed import Client, wait from distributed.system import MEMORY_LIMIT from distributed.utils_test import cleanup, loop, loop_in_thread, popen...
0
rapidsai_public_repos/dask-cuda/dask_cuda
rapidsai_public_repos/dask-cuda/dask_cuda/tests/test_proxify_host_file.py
from typing import Iterable from unittest.mock import patch import numpy as np import pytest from pandas.testing import assert_frame_equal import dask import dask.dataframe from dask.dataframe.shuffle import shuffle_group from dask.sizeof import sizeof from dask.utils import format_bytes from distributed import Clien...
0
rapidsai_public_repos/dask-cuda/dask_cuda
rapidsai_public_repos/dask-cuda/dask_cuda/tests/test_spill.py
import gc import os from time import sleep import pytest import dask from dask import array as da from distributed import Client, wait from distributed.metrics import time from distributed.sizeof import sizeof from distributed.utils_test import gen_cluster, gen_test, loop # noqa: F401 from dask_cuda import LocalCUD...
0
rapidsai_public_repos/dask-cuda/dask_cuda
rapidsai_public_repos/dask-cuda/dask_cuda/tests/test_explicit_comms.py
import asyncio import multiprocessing as mp import os from unittest.mock import patch import numpy as np import pandas as pd import pytest import dask from dask import dataframe as dd from dask.dataframe.shuffle import partitioning_index from dask.dataframe.utils import assert_eq from distributed import Client from d...
0
rapidsai_public_repos/dask-cuda/dask_cuda
rapidsai_public_repos/dask-cuda/dask_cuda/tests/test_device_host_file.py
from random import randint import numpy as np import pytest import dask.array from distributed.protocol import ( deserialize, deserialize_bytes, serialize, serialize_bytelist, ) from dask_cuda.device_host_file import DeviceHostFile, device_to_host, host_to_device cupy = pytest.importorskip("cupy") ...
0
rapidsai_public_repos/dask-cuda/dask_cuda
rapidsai_public_repos/dask-cuda/dask_cuda/tests/test_cudf_builtin_spilling.py
import pytest from distributed.sizeof import safe_sizeof from dask_cuda.device_host_file import DeviceHostFile from dask_cuda.is_spillable_object import is_spillable_object from dask_cuda.proxify_host_file import ProxifyHostFile cupy = pytest.importorskip("cupy") pandas = pytest.importorskip("pandas") pytest.import...
0
rapidsai_public_repos/dask-cuda/dask_cuda
rapidsai_public_repos/dask-cuda/dask_cuda/tests/test_dgx.py
import multiprocessing as mp import os from enum import Enum, auto import numpy import pytest from dask import array as da from distributed import Client from dask_cuda import LocalCUDACluster from dask_cuda.initialize import initialize mp = mp.get_context("spawn") # type: ignore psutil = pytest.importorskip("psut...
0
rapidsai_public_repos/dask-cuda/dask_cuda
rapidsai_public_repos/dask-cuda/dask_cuda/tests/test_worker_spec.py
import pytest from distributed import Nanny from dask_cuda.worker_spec import worker_spec def _check_option(spec, k, v): assert all([s["options"][k] == v for s in spec.values()]) def _check_env_key(spec, k, enable): if enable: assert all([k in s["options"]["env"] for s in spec.values()]) else:...
0
rapidsai_public_repos/dask-cuda/dask_cuda
rapidsai_public_repos/dask-cuda/dask_cuda/tests/test_initialize.py
import multiprocessing as mp import numpy import psutil import pytest from dask import array as da from distributed import Client from distributed.deploy.local import LocalCluster from dask_cuda.initialize import initialize from dask_cuda.utils import get_ucx_config from dask_cuda.utils_test import IncreasedCloseTim...
0
rapidsai_public_repos/dask-cuda/dask_cuda
rapidsai_public_repos/dask-cuda/dask_cuda/tests/test_gds.py
import tempfile import pytest from distributed.protocol.serialize import deserialize, serialize from dask_cuda.proxify_host_file import ProxifyHostFile # Make the "disk" serializer available and use a directory that is # removed on exit. if ProxifyHostFile._spill_to_disk is None: tmpdir = tempfile.TemporaryDire...
0
rapidsai_public_repos/dask-cuda/dask_cuda
rapidsai_public_repos/dask-cuda/dask_cuda/tests/test_from_array.py
import pytest import dask.array as da from distributed import Client from dask_cuda import LocalCUDACluster cupy = pytest.importorskip("cupy") @pytest.mark.parametrize("protocol", ["ucx", "ucxx", "tcp"]) def test_ucx_from_array(protocol): if protocol == "ucx": pytest.importorskip("ucp") elif protoc...
0
rapidsai_public_repos/dask-cuda/dask_cuda
rapidsai_public_repos/dask-cuda/dask_cuda/tests/test_local_cuda_cluster.py
import asyncio import os import pkgutil import sys from unittest.mock import patch import pytest from dask.distributed import Client from distributed.system import MEMORY_LIMIT from distributed.utils_test import gen_test, raises_with_cause from dask_cuda import CUDAWorker, LocalCUDACluster, utils from dask_cuda.init...
0
rapidsai_public_repos/dask-cuda/dask_cuda
rapidsai_public_repos/dask-cuda/dask_cuda/tests/test_utils.py
import os from unittest.mock import patch import pytest from numba import cuda from dask.config import canonical_name from dask_cuda.utils import ( cuda_visible_devices, get_cpu_affinity, get_device_total_memory, get_gpu_count, get_n_gpus, get_preload_options, get_ucx_config, nvml_dev...
0
rapidsai_public_repos/dask-cuda/dask_cuda
rapidsai_public_repos/dask-cuda/dask_cuda/tests/test_proxy.py
import operator import os import pickle import tempfile from types import SimpleNamespace import numpy as np import pandas import pytest from packaging import version from pandas.testing import assert_frame_equal, assert_series_equal import dask import dask.array from dask.dataframe.core import has_parallel_type from...
0
rapidsai_public_repos/dask-cuda/dask_cuda
rapidsai_public_repos/dask-cuda/dask_cuda/benchmarks/common.py
from argparse import Namespace from functools import partial from typing import Any, Callable, List, Mapping, NamedTuple, Optional, Tuple from warnings import filterwarnings import numpy as np import pandas as pd import dask from distributed import Client from dask_cuda.benchmarks.utils import ( address_to_index...
0
rapidsai_public_repos/dask-cuda/dask_cuda
rapidsai_public_repos/dask-cuda/dask_cuda/benchmarks/local_cupy.py
import contextlib from collections import ChainMap from time import perf_counter as clock import numpy as np import pandas as pd from nvtx import end_range, start_range from dask import array as da from dask.distributed import performance_report, wait from dask.utils import format_bytes, parse_bytes from dask_cuda.b...
0
rapidsai_public_repos/dask-cuda/dask_cuda
rapidsai_public_repos/dask-cuda/dask_cuda/benchmarks/local_cudf_merge.py
import contextlib import math from collections import ChainMap from time import perf_counter import numpy as np import pandas as pd import dask from dask.base import tokenize from dask.dataframe.core import new_dd_object from dask.distributed import performance_report, wait from dask.utils import format_bytes, parse_...
0
rapidsai_public_repos/dask-cuda/dask_cuda
rapidsai_public_repos/dask-cuda/dask_cuda/benchmarks/local_cudf_groupby.py
import contextlib from collections import ChainMap from time import perf_counter as clock import pandas as pd import dask import dask.dataframe as dd from dask.distributed import performance_report, wait from dask.utils import format_bytes, parse_bytes from dask_cuda.benchmarks.common import Config, execute_benchmar...
0
rapidsai_public_repos/dask-cuda/dask_cuda
rapidsai_public_repos/dask-cuda/dask_cuda/benchmarks/local_cudf_shuffle.py
import contextlib from collections import ChainMap from time import perf_counter from typing import Tuple import numpy as np import pandas as pd import dask import dask.dataframe from dask.dataframe.core import new_dd_object from dask.dataframe.shuffle import shuffle from dask.distributed import Client, performance_r...
0
rapidsai_public_repos/dask-cuda/dask_cuda
rapidsai_public_repos/dask-cuda/dask_cuda/benchmarks/local_cupy_map_overlap.py
import contextlib from collections import ChainMap from time import perf_counter as clock import cupy as cp import numpy as np import pandas as pd from cupyx.scipy.ndimage.filters import convolve as cp_convolve from scipy.ndimage import convolve as sp_convolve from dask import array as da from dask.distributed import...
0
rapidsai_public_repos/dask-cuda/dask_cuda
rapidsai_public_repos/dask-cuda/dask_cuda/benchmarks/utils.py
import argparse import itertools import json import os import time from collections import defaultdict from datetime import datetime from operator import itemgetter from typing import Any, Callable, Mapping, NamedTuple, Optional, Tuple import numpy as np import pandas as pd from dask.distributed import Client, SSHClu...
0
rapidsai_public_repos/dask-cuda/examples
rapidsai_public_repos/dask-cuda/examples/ucx/dask_cuda_worker.sh
#!/bin/bash usage() { echo "usage: $0 [-a <scheduler_address>] [-i <interface>] [-r <rmm_pool_size>] [-t <transports>]" >&2 exit 1 } # parse arguments rmm_pool_size=1GB while getopts ":a:i:r:t:" flag; do case "${flag}" in i) interface=${OPTARG};; r) rmm_pool_size=${OPTARG};; t...
0
rapidsai_public_repos/dask-cuda/examples
rapidsai_public_repos/dask-cuda/examples/ucx/local_cuda_cluster.py
import click import cupy from dask import array as da from dask.distributed import Client from dask.utils import parse_bytes from dask_cuda import LocalCUDACluster @click.command(context_settings=dict(ignore_unknown_options=True)) @click.option( "--enable-nvlink/--disable-nvlink", default=False, help="E...
0
rapidsai_public_repos/dask-cuda/examples
rapidsai_public_repos/dask-cuda/examples/ucx/client_initialize.py
import click import cupy from dask import array as da from dask.distributed import Client from dask_cuda.initialize import initialize @click.command(context_settings=dict(ignore_unknown_options=True)) @click.argument( "address", required=True, type=str, ) @click.option( "--enable-nvlink/--disable-nv...
0
rapidsai_public_repos
rapidsai_public_repos/cuml/.pre-commit-config.yaml
--- # Copyright (c) 2023, NVIDIA CORPORATION. repos: - repo: https://github.com/psf/black rev: 22.10.0 hooks: - id: black files: python/.* args: [--config, python/pyproject.toml] - repo: https://github.com/PyCQA/flake8 rev: 5.0.4 hooks: - id: ...
0
rapidsai_public_repos
rapidsai_public_repos/cuml/pyproject.toml
[tool.codespell] # note: pre-commit passes explicit lists of files here, which this skip file list doesn't override - # this is only to allow you to run codespell interactively skip = "./.git,./.github,./cpp/build,.*egg-info.*,./.mypy_cache,.*_skbuild,CHANGELOG.md,_stop_words.py,,*stemmer.*" # ignore short words, and t...
0
rapidsai_public_repos
rapidsai_public_repos/cuml/fetch_rapids.cmake
# ============================================================================= # Copyright (c) 2022, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except # in compliance with the License. You may obtain a copy of the License at # # http://www.apache.o...
0
rapidsai_public_repos
rapidsai_public_repos/cuml/README.md
# <div align="left"><img src="img/rapids_logo.png" width="90px"/>&nbsp;cuML - GPU Machine Learning Algorithms</div> cuML is a suite of libraries that implement machine learning algorithms and mathematical primitives functions that share compatible APIs with other [RAPIDS](https://rapids.ai/) projects. cuML enables da...
0
rapidsai_public_repos
rapidsai_public_repos/cuml/CHANGELOG.md
# cuML 23.10.00 (11 Oct 2023) ## 🚨 Breaking Changes - add sample_weight parameter to dbscan.fit ([#5574](https://github.com/rapidsai/cuml/pull/5574)) [@mfoerste4](https://github.com/mfoerste4) - Update to Cython 3.0.0 ([#5506](https://github.com/rapidsai/cuml/pull/5506)) [@vyasr](https://github.com/vyasr) ## 🐛 Bug...
0
rapidsai_public_repos
rapidsai_public_repos/cuml/build.sh
#!/bin/bash # Copyright (c) 2019-2023, NVIDIA CORPORATION. # cuml build script # This script is used to build the component(s) in this repo from # source, and can be called with various options to customize the # build as needed (see the help output for details) # Abort script on first error set -e NUMARGS=$# ARGS...
0
rapidsai_public_repos
rapidsai_public_repos/cuml/codecov.yml
#Configuration File for CodeCov coverage: status: project: off patch: off comment: behavior: new # Suggested workaround to fix "missing base report" issue when using Squash and # Merge Strategy in GitHub. See this comment from CodeCov support about this # undocumented option: # https://community.codecov.io...
0
rapidsai_public_repos
rapidsai_public_repos/cuml/dependencies.yaml
# Dependency list for https://github.com/rapidsai/dependency-file-generator files: all: output: conda matrix: cuda: ["11.8", "12.0"] arch: [x86_64] includes: - common_build - cudatoolkit - docs - py_build - py_run - py_version - test_python cpp_all: ...
0
rapidsai_public_repos
rapidsai_public_repos/cuml/CONTRIBUTING.md
# Contributing to cuML If you are interested in contributing to cuML, your contributions will fall into three categories: 1. You want to report a bug, feature request, or documentation issue - File an [issue](https://github.com/rapidsai/cuml/issues/new/choose) describing what you encountered or what you want t...
0
rapidsai_public_repos
rapidsai_public_repos/cuml/LICENSE
Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, ...
0
rapidsai_public_repos
rapidsai_public_repos/cuml/VERSION
23.12.00
0
rapidsai_public_repos
rapidsai_public_repos/cuml/print_env.sh
#!/usr/bin/env bash # Reports relevant environment information useful for diagnosing and # debugging cuML issues. # Usage: # "./print_env.sh" - prints to stdout # "./print_env.sh > env.txt" - prints to file "env.txt" print_env() { echo "**git***" if [ "$(git rev-parse --is-inside-work-tree 2>/dev/null)" == "true" ]; ...
0
rapidsai_public_repos
rapidsai_public_repos/cuml/BUILD.md
# cuML Build From Source Guide ## Setting Up Your Build Environment To install cuML from source, ensure the following dependencies are met: 1. [cuDF](https://github.com/rapidsai/cudf) (Same as cuML Version) 2. zlib 3. cmake (>= 3.26.4) 4. CUDA (>= 11+) 5. Cython (>= 0.29) 6. gcc (>= 9.0) 7. BLAS - Any BLAS compatibl...
0
rapidsai_public_repos/cuml
rapidsai_public_repos/cuml/wiki/README.md
# cuML Wiki Documentation This wiki is provided as an extension to cuML's public documentation, geared toward developers on the project. If you are interested in contributing to cuML, read through our [contributing guide](../CONTRIBUTING.md). You are also encouraged to read through our Python [developer guide](python...
0
rapidsai_public_repos/cuml
rapidsai_public_repos/cuml/wiki/DEFINITION_OF_DONE_CRITERIA.md
# Defining cuML's Definition of Done Criteria ## Algorithm Completion Checklist Below is a quick and simple checklist for developers to determine whether an algorithm is complete and ready for release. Most of these items contain more detailed descriptions in their corresponding developer guide. The checklist is bro...
0
rapidsai_public_repos/cuml/wiki
rapidsai_public_repos/cuml/wiki/python/ESTIMATOR_GUIDE.md
# cuML Python Estimators Developer Guide This guide is meant to help developers follow the correct patterns when creating/modifying any cuML Estimator object and ensure a uniform cuML API. **Note:** This guide is long, because it includes internal details on how cuML manages input and output types for advanced use ca...
0
rapidsai_public_repos/cuml/wiki
rapidsai_public_repos/cuml/wiki/python/DEVELOPER_GUIDE.md
# cuML Python Developer Guide This document summarizes guidelines and best practices for contributions to the python component of the library cuML, the machine learning component of the RAPIDS ecosystem. This is an evolving document so contributions, clarifications and issue reports are highly welcome. ## General Plea...
0
rapidsai_public_repos/cuml/wiki
rapidsai_public_repos/cuml/wiki/mnmg/Using_Infiniband_for_MNMG.md
# Using Infiniband for Multi-Node Multi-GPU cuML These instructions outline how to run multi-node multi-GPU cuML on devices with Infiniband. These instructions assume the necessary Infiniband hardware has already been installed and the relevant software has already been configured to enable communication over the Infi...
0
rapidsai_public_repos/cuml/wiki
rapidsai_public_repos/cuml/wiki/cpp/DEVELOPER_GUIDE.md
# cuML developer guide This document summarizes rules and best practices for contributions to the cuML C++ component of rapidsai/cuml. This is a living document and contributions for clarifications or fixes and issue reports are highly welcome. ## General Please start by reading [CONTRIBUTING.md](../../CONTRIBUTING.md...
0
rapidsai_public_repos/cuml
rapidsai_public_repos/cuml/python/pyproject.toml
# Copyright (c) 2022, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to...
0
repo_id: rapidsai_public_repos/cuml
file_path: rapidsai_public_repos/cuml/python/.flake8
content: # Copyright (c) 2018-2023, NVIDIA CORPORATION. [flake8] filename = *.py, *.pyx, *.pxd exclude = *.egg, .git, __pycache__, _thirdparty, build/, cpp, docs, thirdparty, versioneer.py # Cython Rules ignored: # E999: invalid syntax (works for Python, not Cython) # E225: Missing whitespace around...
__index_level_0__: 0

repo_id: rapidsai_public_repos/cuml
file_path: rapidsai_public_repos/cuml/python/CMakeLists.txt
content: # ============================================================================= # Copyright (c) 2022-2023 NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except # in compliance with the License. You may obtain a copy of the License at # # http://www.apac...
__index_level_0__: 0

repo_id: rapidsai_public_repos/cuml
file_path: rapidsai_public_repos/cuml/python/README.md
content: # cuML Python Package This folder contains the Python and Cython code of the algorithms and ML primitives of cuML, that are distributed in the Python cuML package. Contents: - [cuML Python Package](#cuml-python-package) - [Build Configuration](#build-configuration) - [RAFT Integration in cuml.raft](#raft-int...
__index_level_0__: 0

repo_id: rapidsai_public_repos/cuml
file_path: rapidsai_public_repos/cuml/python/setup.py
content: # # Copyright (c) 2018-2023, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or ag...
__index_level_0__: 0

repo_id: rapidsai_public_repos/cuml
file_path: rapidsai_public_repos/cuml/python/pytest.ini
content: [pytest] markers = unit: Quickest tests focused on accuracy and correctness quality: More intense tests than unit with increased runtimes stress: Longest running tests focused on stressing hardware compute resources mg: Multi-GPU tests memleak: Test that checks for memory leaks no_bad_cuml_array_check: Test...
__index_level_0__: 0

repo_id: rapidsai_public_repos/cuml
file_path: rapidsai_public_repos/cuml/python/LICENSE
content: Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, ...
__index_level_0__: 0

repo_id: rapidsai_public_repos/cuml
file_path: rapidsai_public_repos/cuml/python/.coveragerc
content: # Configuration file for Python coverage tests [run] omit = cuml/test/* plugins = Cython.Coverage parallel = true source = cuml [report] # Regexes for lines to exclude from consideration exclude_lines = # Re-specify the `pragma: no cover` since it will be overridden by this # option. See the docs: # https:...
__index_level_0__: 0

repo_id: rapidsai_public_repos/cuml/python
file_path: rapidsai_public_repos/cuml/python/cuml/_version.py
content: # Copyright (c) 2023, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to...
__index_level_0__: 0

repo_id: rapidsai_public_repos/cuml/python
file_path: rapidsai_public_repos/cuml/python/cuml/__init__.py
content: # # Copyright (c) 2022-2023, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or ag...
__index_level_0__: 0

repo_id: rapidsai_public_repos/cuml/python
file_path: rapidsai_public_repos/cuml/python/cuml/VERSION
content: 23.12.00
__index_level_0__: 0

repo_id: rapidsai_public_repos/cuml/python/cuml
file_path: rapidsai_public_repos/cuml/python/cuml/_thirdparty/__init__.py
content: # Third party code, respective licenses apply from . import sklearn
__index_level_0__: 0

repo_id: rapidsai_public_repos/cuml/python/cuml/_thirdparty
file_path: rapidsai_public_repos/cuml/python/cuml/_thirdparty/sklearn/README.md
content: # GPU accelerated Scikit-Learn preprocessing This directory contains code originating from the Scikit-Learn library. The Scikit-Learn license applies accordingly (see `/thirdparty/LICENSES/LICENSE.scikit_learn`). Original authors mentioned in the code do not endorse or promote this production. This work is dedicated ...
__index_level_0__: 0

repo_id: rapidsai_public_repos/cuml/python/cuml/_thirdparty/sklearn
file_path: rapidsai_public_repos/cuml/python/cuml/_thirdparty/sklearn/preprocessing/_imputation.py
content: # Original authors from Sckit-Learn: # Nicolas Tresegnie <nicolas.tresegnie@gmail.com> # Sergey Feldman <sergeyfeldman@gmail.com> # License: BSD 3 clause # This code originates from the Scikit-Learn library, # it was since modified to allow GPU acceleration. # This code is under BSD 3 clause license...
__index_level_0__: 0

repo_id: rapidsai_public_repos/cuml/python/cuml/_thirdparty/sklearn
file_path: rapidsai_public_repos/cuml/python/cuml/_thirdparty/sklearn/preprocessing/_discretization.py
content: # Original authors from Sckit-Learn: # Henry Lin <hlin117@gmail.com> # Tom Dupré la Tour # License: BSD # This code originates from the Scikit-Learn library, # it was since modified to allow GPU acceleration. # This code is under BSD 3 clause license. # Authors mentioned above do not endorse or promo...
__index_level_0__: 0

repo_id: rapidsai_public_repos/cuml/python/cuml/_thirdparty/sklearn
file_path: rapidsai_public_repos/cuml/python/cuml/_thirdparty/sklearn/preprocessing/_column_transformer.py
content: # Original authors from Sckit-Learn: # Andreas Mueller # Joris Van den Bossche # License: BSD # This code originates from the Scikit-Learn library, # it was since modified to allow GPU acceleration. # This code is under BSD 3 clause license. # Authors mentioned above do not endorse or promote this pr...
__index_level_0__: 0

repo_id: rapidsai_public_repos/cuml/python/cuml/_thirdparty/sklearn
file_path: rapidsai_public_repos/cuml/python/cuml/_thirdparty/sklearn/preprocessing/_function_transformer.py
content: # This code originates from the Scikit-Learn library, # it was since modified to allow GPU acceleration. # This code is under BSD 3 clause license. # Authors mentioned above do not endorse or promote this production. import warnings import cuml from ....internals.array_sparse import SparseCumlArray from ..utils.skl_...
__index_level_0__: 0

repo_id: rapidsai_public_repos/cuml/python/cuml/_thirdparty/sklearn
file_path: rapidsai_public_repos/cuml/python/cuml/_thirdparty/sklearn/preprocessing/_data.py
content: # Original authors from Sckit-Learn: # Alexandre Gramfort <alexandre.gramfort@inria.fr> # Mathieu Blondel <mathieu@mblondel.org> # Olivier Grisel <olivier.grisel@ensta.org> # Andreas Mueller <amueller@ais.uni-bonn.de> # Eric Martin <eric@ericmart.in> # Giorgio Patri...
__index_level_0__: 0

repo_id: rapidsai_public_repos/cuml/python/cuml/_thirdparty/sklearn
file_path: rapidsai_public_repos/cuml/python/cuml/_thirdparty/sklearn/preprocessing/__init__.py
content: # This code originates from the Scikit-Learn library, # it was since modified to allow GPU acceleration. # This code is under BSD 3 clause license. from ._data import Binarizer from ._data import KernelCenterer from ._data import MinMaxScaler from ._data import MaxAbsScaler from ._data import Normalizer from ._data i...
__index_level_0__: 0

repo_id: rapidsai_public_repos/cuml/python/cuml/_thirdparty/sklearn
file_path: rapidsai_public_repos/cuml/python/cuml/_thirdparty/sklearn/utils/validation.py
content: # Original authors from Sckit-Learn: # Olivier Grisel # Gael Varoquaux # Andreas Mueller # Lars Buitinck # Alexandre Gramfort # Nicolas Tresegnie # Sylvain Marie # License: BSD 3 clause # This code originates from the Scikit-Learn library, # it was since ...
__index_level_0__: 0

repo_id: rapidsai_public_repos/cuml/python/cuml/_thirdparty/sklearn
file_path: rapidsai_public_repos/cuml/python/cuml/_thirdparty/sklearn/utils/sparsefuncs.py
content: # Original authors from Sckit-Learn: # Manoj Kumar # Thomas Unterthiner # Giorgio Patrini # # License: BSD 3 clause # This code originates from the Scikit-Learn library, # it was since modified to allow GPU acceleration. # This code is under BSD 3 clause license. # Authors mentioned above d...
__index_level_0__: 0

repo_id: rapidsai_public_repos/cuml/python/cuml/_thirdparty/sklearn
file_path: rapidsai_public_repos/cuml/python/cuml/_thirdparty/sklearn/utils/skl_dependencies.py
content: # Original authors from Sckit-Learn: # Gael Varoquaux <gael.varoquaux@normalesup.org> # License: BSD 3 clause # This code originates from the Scikit-Learn library, # it was since modified to allow GPU acceleration. # This code is under BSD 3 clause license. # Authors mentioned above do not endorse or promote ...
__index_level_0__: 0