rapidsai_public_repos/raft/docs/source/cpp_api/linalg_arithmetic.rst

Arithmetic
==========

.. role:: py(code)
   :language: c++
   :class: highlight

Addition
--------

``#include <raft/linalg/add.cuh>``

namespace *raft::linalg*

.. doxygengroup:: add_dense
   :project: RAFT
   :members:
   :content-only:

Binary Op
---------

``#include <raft/linalg/binary_op.cuh>``

namespace *raft::linalg*

.. doxygengroup:: binary_op
   :project: RAFT
   :members:
   :content-only:

Division
--------

``#include <raft/linalg/divide.cuh>``

namespace *raft::linalg*

.. doxygengroup:: divide
   :project: RAFT
   :members:
   :content-only:

Multiplication
--------------

``#include <raft/linalg/multiply.cuh>``

namespace *raft::linalg*

.. doxygengroup:: multiply
   :project: RAFT
   :members:
   :content-only:

Power
-----

``#include <raft/linalg/power.cuh>``

namespace *raft::linalg*

.. doxygengroup:: power
   :project: RAFT
   :members:
   :content-only:

Square Root
-----------

``#include <raft/linalg/sqrt.cuh>``

namespace *raft::linalg*

.. doxygengroup:: sqrt
   :project: RAFT
   :members:
   :content-only:

Subtraction
-----------

``#include <raft/linalg/subtract.cuh>``

namespace *raft::linalg*

.. doxygengroup:: sub
   :project: RAFT
   :members:
   :content-only:

Ternary Op
----------

``#include <raft/linalg/ternary_op.cuh>``

namespace *raft::linalg*

.. doxygengroup:: ternary_op
   :project: RAFT
   :members:
   :content-only:

Unary Op
--------

``#include <raft/linalg/unary_op.cuh>``

namespace *raft::linalg*

.. doxygengroup:: unary_op
   :project: RAFT
   :members:
   :content-only:
rapidsai_public_repos/raft/docs/source/cpp_api/mdspan.rst

Multi-dimensional Data
======================

This page provides C++ class references for RAFT's 1D span and multi-dimensional owning (mdarray) and non-owning (mdspan) APIs. These headers can be found in the `raft/core` directory.

.. role:: py(code)
   :language: c++
   :class: highlight

.. toctree::
   :maxdepth: 2
   :caption: Contents:

   mdspan_representation.rst
   mdspan_mdspan.rst
   mdspan_mdarray.rst
   mdspan_span.rst
   mdspan_temporary_device_buffer.rst
rapidsai_public_repos/raft/docs/source/cpp_api/core_resources.rst

Resources
=========

.. role:: py(code)
   :language: c++
   :class: highlight

All resources which are specific to a computing environment, like host or device, are contained within, and managed by, `raft::resources`. This design simplifies the APIs and eases the user's burden by making the APIs opaque by default while allowing customization based on user preference.

Vocabulary
----------

``#include <raft/core/resource/resource_types.hpp>``

namespace *raft::resource*

.. doxygengroup:: resource_types
   :project: RAFT
   :members:
   :content-only:

Device Resources
----------------

`raft::device_resources` is a convenience over using `raft::resources` directly. It provides accessor methods to retrieve resources such as the CUDA stream, stream pool, and handles to the various CUDA math libraries like cuBLAS and cuSOLVER.

``#include <raft/core/device_resources.hpp>``

namespace *raft::core*

.. doxygenclass:: raft::device_resources
   :project: RAFT
   :members:

Device Resources Manager
------------------------

While `raft::device_resources` provides a convenient way to access device-related resources for a sequence of RAFT calls, it is sometimes useful to be able to limit those resources across an entire application. For instance, in highly multi-threaded applications, it can be helpful to limit the total number of streams rather than relying on the default stream per thread. `raft::device_resources_manager` offers a way to access `raft::device_resources` instances that draw from a limited pool of underlying device resources.

``#include <raft/core/device_resources_manager.hpp>``

namespace *raft::core*

.. doxygenclass:: raft::device_resources_manager
   :project: RAFT
   :members:

Resource Functions
------------------

Comms
~~~~~

``#include <raft/core/resource/comms.hpp>``

namespace *raft::resource*

.. doxygengroup:: resource_comms
   :project: RAFT
   :members:
   :content-only:

cuBLAS Handle
~~~~~~~~~~~~~

``#include <raft/core/resource/cublas_handle.hpp>``

namespace *raft::resource*

.. doxygengroup:: resource_cublas
   :project: RAFT
   :members:
   :content-only:

CUDA Stream
~~~~~~~~~~~

``#include <raft/core/resource/cuda_stream.hpp>``

namespace *raft::resource*

.. doxygengroup:: resource_cuda_stream
   :project: RAFT
   :members:
   :content-only:

CUDA Stream Pool
~~~~~~~~~~~~~~~~

``#include <raft/core/resource/cuda_stream_pool.hpp>``

namespace *raft::resource*

.. doxygengroup:: resource_stream_pool
   :project: RAFT
   :members:
   :content-only:

cuSolverDn Handle
~~~~~~~~~~~~~~~~~

``#include <raft/core/resource/cusolver_dn_handle.hpp>``

namespace *raft::resource*

.. doxygengroup:: resource_cusolver_dn
   :project: RAFT
   :members:
   :content-only:

cuSolverSp Handle
~~~~~~~~~~~~~~~~~

``#include <raft/core/resource/cusolver_sp_handle.hpp>``

namespace *raft::resource*

.. doxygengroup:: resource_cusolver_sp
   :project: RAFT
   :members:
   :content-only:

cuSparse Handle
~~~~~~~~~~~~~~~

``#include <raft/core/resource/cusparse_handle.hpp>``

namespace *raft::resource*

.. doxygengroup:: resource_cusparse
   :project: RAFT
   :members:
   :content-only:

Device ID
~~~~~~~~~

``#include <raft/core/resource/device_id.hpp>``

namespace *raft::resource*

.. doxygengroup:: resource_device_id
   :project: RAFT
   :members:
   :content-only:

Device Memory Resource
~~~~~~~~~~~~~~~~~~~~~~

``#include <raft/core/resource/device_memory_resource.hpp>``

namespace *raft::resource*

.. doxygengroup:: device_memory_resource
   :project: RAFT
   :members:
   :content-only:

Device Properties
~~~~~~~~~~~~~~~~~

``#include <raft/core/resource/device_properties.hpp>``

namespace *raft::resource*

.. doxygengroup:: resource_device_props
   :project: RAFT
   :members:
   :content-only:

Sub Communicators
~~~~~~~~~~~~~~~~~

``#include <raft/core/resource/sub_comms.hpp>``

namespace *raft::resource*

.. doxygengroup:: resource_sub_comms
   :project: RAFT
   :members:
   :content-only:

Thrust Exec Policy
~~~~~~~~~~~~~~~~~~

``#include <raft/core/resource/thrust_policy.hpp>``

namespace *raft::resource*

.. doxygengroup:: resource_thrust_policy
   :project: RAFT
   :members:
   :content-only:
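As a rough illustration of the resources pattern described above, the sketch below shows how device-specific state flows through a single opaque handle. This is a hypothetical usage sketch, not a complete program: it assumes a RAFT + CUDA build environment and cannot be compiled standalone.

```
#include <raft/core/device_resources.hpp>

void example() {
  // All device-specific state (stream, stream pool, cuBLAS/cuSOLVER
  // handles, ...) is owned and lazily created by one resources object.
  raft::device_resources res;

  // Accessor methods retrieve individual resources on demand.
  cudaStream_t stream = res.get_stream();

  // RAFT primitives take the resources object as their first argument
  // and draw whatever handles they need from it, keeping the public
  // APIs opaque by default.
  (void)stream;
}
```

The point of the design is that callers pass one object instead of threading a stream and several library handles through every call.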
rapidsai_public_repos/raft/docs/source/cpp_api/linalg_blas.rst

BLAS Routines
=============

.. role:: py(code)
   :language: c++
   :class: highlight

axpy
----

``#include <raft/linalg/axpy.cuh>``

namespace *raft::linalg*

.. doxygengroup:: axpy
   :project: RAFT
   :members:
   :content-only:

dot
---

``#include <raft/linalg/dot.cuh>``

namespace *raft::linalg*

.. doxygengroup:: dot
   :project: RAFT
   :members:
   :content-only:

gemm
----

``#include <raft/linalg/gemm.cuh>``

namespace *raft::linalg*

.. doxygengroup:: gemm
   :project: RAFT
   :members:
   :content-only:

gemv
----

``#include <raft/linalg/gemv.cuh>``

namespace *raft::linalg*

.. doxygengroup:: gemv
   :project: RAFT
   :members:
   :content-only:
rapidsai_public_repos/raft/docs/source/cpp_api/mdspan_temporary_device_buffer.rst

temporary_device_buffer: Temporary raft::device_mdspan Producing Object
=======================================================================

.. role:: py(code)
   :language: c++
   :class: highlight

``#include <raft/core/temporary_device_buffer.hpp>``

.. doxygengroup:: temporary_device_buffer
   :project: RAFT
   :members:
   :content-only:

Factories
---------

.. doxygengroup:: temporary_device_buffer_factories
   :project: RAFT
   :members:
   :content-only:
rapidsai_public_repos/raft/docs/source/cpp_api/random.rst

Random
======

This page provides C++ class references for the publicly-exposed elements of the random package.

.. role:: py(code)
   :language: c++
   :class: highlight

Random State
############

``#include <raft/random/rng_state.hpp>``

namespace *raft::random*

.. doxygenstruct:: raft::random::RngState
   :project: RAFT
   :members:

.. toctree::
   :maxdepth: 2
   :caption: Contents:

   random_datagen.rst
   random_sampling_univariate.rst
   random_sampling_multivariable.rst
   random_sampling_without_replacement.rst
rapidsai_public_repos/raft/docs/source/cpp_api/mdspan_span.rst

span: One-dimensional Non-owning View
=====================================

.. role:: py(code)
   :language: c++
   :class: highlight

``#include <raft/core/span.hpp>``

.. doxygengroup:: span
   :project: RAFT
   :members:
   :content-only:

``#include <raft/core/device_span.hpp>``

.. doxygengroup:: device_span
   :project: RAFT
   :members:
   :content-only:

``#include <raft/core/host_span.hpp>``

.. doxygengroup:: host_span
   :project: RAFT
   :members:
   :content-only:
rapidsai_public_repos/raft/docs/source/cpp_api/linalg_matrix.rst

Matrix Operations
=================

.. role:: py(code)
   :language: c++
   :class: highlight

Transpose
---------

``#include <raft/linalg/transpose.cuh>``

namespace *raft::linalg*

.. doxygengroup:: transpose
   :project: RAFT
   :members:
   :content-only:
rapidsai_public_repos/raft/docs/source/cpp_api/stats_neighborhood.rst

Neighborhood Model Scoring
==========================

.. role:: py(code)
   :language: c++
   :class: highlight

Trustworthiness
---------------

``#include <raft/stats/trustworthiness.cuh>``

namespace *raft::stats*

.. doxygengroup:: stats_trustworthiness
   :project: RAFT
   :members:
   :content-only:

Neighborhood Recall
-------------------

``#include <raft/stats/neighborhood_recall.cuh>``

namespace *raft::stats*

.. doxygengroup:: stats_neighborhood_recall
   :project: RAFT
   :members:
   :content-only:
rapidsai_public_repos/raft/docs/source/cpp_api/cluster_slhc.rst

Hierarchical Clustering
=======================

.. role:: py(code)
   :language: c++
   :class: highlight

``#include <raft/cluster/single_linkage.cuh>``

.. doxygennamespace:: raft::cluster::hierarchy
   :project: RAFT
   :members:
   :content-only:
rapidsai_public_repos/raft/docs/source/cpp_api/core_bitset.rst

Bitset
======

.. role:: py(code)
   :language: c++
   :class: highlight

``#include <raft/core/bitset.cuh>``

namespace *raft::core*

.. doxygengroup:: bitset
   :project: RAFT
   :members:
   :content-only:
rapidsai_public_repos/raft/docs/source/cpp_api/linalg_solver.rst

Linear Algebra Solvers
======================

.. role:: py(code)
   :language: c++
   :class: highlight

Eigen Decomposition
-------------------

``#include <raft/linalg/eig.cuh>``

namespace *raft::linalg*

.. doxygengroup:: eig
   :project: RAFT
   :members:
   :content-only:

QR Decomposition
----------------

``#include <raft/linalg/qr.cuh>``

namespace *raft::linalg*

.. doxygengroup:: qr
   :project: RAFT
   :members:
   :content-only:

Randomized Singular-Value Decomposition
---------------------------------------

``#include <raft/linalg/rsvd.cuh>``

namespace *raft::linalg*

.. doxygengroup:: rsvd
   :project: RAFT
   :members:
   :content-only:

Singular-Value Decomposition
----------------------------

``#include <raft/linalg/svd.cuh>``

namespace *raft::linalg*

.. doxygengroup:: svd
   :project: RAFT
   :members:
   :content-only:

Least Squares
-------------

``#include <raft/linalg/lstsq.cuh>``

namespace *raft::linalg*

.. doxygengroup:: lstsq
   :project: RAFT
   :members:
   :content-only:
rapidsai_public_repos/raft/docs/source/cpp_api/stats.rst

Stats
=====

This page provides C++ class references for the publicly-exposed elements of the stats package.

.. role:: py(code)
   :language: c++
   :class: highlight

.. toctree::
   :maxdepth: 2
   :caption: Contents:

   stats_summary.rst
   stats_probability.rst
   stats_regression.rst
   stats_classification.rst
   stats_clustering.rst
   stats_neighborhood.rst
rapidsai_public_repos/raft/docs/source/cpp_api/stats_regression.rst

Regression Model Scoring
========================

.. role:: py(code)
   :language: c++
   :class: highlight

Information Criterion
---------------------

``#include <raft/stats/information_criterion.cuh>``

namespace *raft::stats*

.. doxygengroup:: stats_information_criterion
   :project: RAFT
   :members:
   :content-only:

R2 Score
--------

``#include <raft/stats/r2_score.cuh>``

namespace *raft::stats*

.. doxygengroup:: stats_r2_score
   :project: RAFT
   :members:
   :content-only:

Regression Metrics
------------------

``#include <raft/stats/regression_metrics.cuh>``

namespace *raft::stats*

.. doxygengroup:: stats_regression_metrics
   :project: RAFT
   :members:
   :content-only:
rapidsai_public_repos/raft/docs/source/cpp_api/cluster_kmeans_balanced.rst

K-Means
=======

.. role:: py(code)
   :language: c++
   :class: highlight

``#include <raft/cluster/kmeans_balanced.cuh>``

.. doxygennamespace:: raft::cluster::kmeans_balanced
   :project: RAFT
   :members:
   :content-only:
rapidsai_public_repos/raft/docs/source/cpp_api/core_nvtx.rst

NVTX
====

.. role:: py(code)
   :language: c++
   :class: highlight

``#include <raft/core/nvtx.hpp>``

namespace *raft::core*

.. doxygennamespace:: raft::common::nvtx
   :project: RAFT
   :members:
   :content-only:
rapidsai_public_repos/raft/docs/source/cpp_api/sparse.rst

Sparse
======

Core to RAFT's computational patterns for sparse data is its vocabulary of sparse types.

.. role:: py(code)
   :language: c++
   :class: highlight

.. toctree::
   :maxdepth: 2
   :caption: Contents:

   sparse_types.rst
   sparse_distance.rst
   sparse_linalg.rst
   sparse_matrix.rst
   sparse_neighbors.rst
   sparse_solver.rst
rapidsai_public_repos/raft/ci/test_python.sh

#!/bin/bash
# Copyright (c) 2022-2023, NVIDIA CORPORATION.

set -euo pipefail

. /opt/conda/etc/profile.d/conda.sh

rapids-logger "Generate Python testing dependencies"
rapids-dependency-file-generator \
  --output conda \
  --file_key test_python \
  --matrix "cuda=${RAPIDS_CUDA_VERSION%.*};arch=$(arch);py=${RAPIDS_PY_VERSION}" | tee env.yaml

rapids-mamba-retry env create --force -f env.yaml -n test

# Temporarily allow unbound variables for conda activation.
set +u
conda activate test
set -u

rapids-logger "Downloading artifacts from previous jobs"
CPP_CHANNEL=$(rapids-download-conda-from-s3 cpp)
PYTHON_CHANNEL=$(rapids-download-conda-from-s3 python)

RAPIDS_TESTS_DIR=${RAPIDS_TESTS_DIR:-"${PWD}/test-results"}
RAPIDS_COVERAGE_DIR=${RAPIDS_COVERAGE_DIR:-"${PWD}/coverage-results"}
mkdir -p "${RAPIDS_TESTS_DIR}" "${RAPIDS_COVERAGE_DIR}"

rapids-print-env

rapids-mamba-retry install \
  --channel "${CPP_CHANNEL}" \
  --channel "${PYTHON_CHANNEL}" \
  libraft libraft-headers pylibraft raft-dask

rapids-logger "Check GPU usage"
nvidia-smi

EXITCODE=0
trap "EXITCODE=1" ERR
set +e

rapids-logger "pytest pylibraft"
pushd python/pylibraft/pylibraft
pytest \
  --cache-clear \
  --junitxml="${RAPIDS_TESTS_DIR}/junit-pylibraft.xml" \
  --cov-config=../.coveragerc \
  --cov=pylibraft \
  --cov-report=xml:"${RAPIDS_COVERAGE_DIR}/pylibraft-coverage.xml" \
  --cov-report=term \
  test
popd

rapids-logger "pytest raft-dask"
pushd python/raft-dask/raft_dask
pytest \
  --cache-clear \
  --junitxml="${RAPIDS_TESTS_DIR}/junit-raft-dask.xml" \
  --cov-config=../.coveragerc \
  --cov=raft_dask \
  --cov-report=xml:"${RAPIDS_COVERAGE_DIR}/raft-dask-coverage.xml" \
  --cov-report=term \
  test
popd

rapids-logger "Test script exiting with value: $EXITCODE"
exit ${EXITCODE}
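The dependency matrix above relies on bash suffix-stripping: `${RAPIDS_CUDA_VERSION%.*}` removes the shortest trailing `.component`, turning a full CUDA version into its major.minor form. A minimal sketch with a hypothetical version value:

```shell
# `%.*` deletes the shortest suffix matching ".*", so "11.8.0" -> "11.8".
RAPIDS_CUDA_VERSION="11.8.0"   # hypothetical value for illustration
CUDA_MAJOR_MINOR="${RAPIDS_CUDA_VERSION%.*}"
echo "${CUDA_MAJOR_MINOR}"
```

This is why the CI matrix key sees `cuda=11.8` rather than the full patch-level version.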
rapidsai_public_repos/raft/ci/test_cpp.sh

#!/bin/bash
# Copyright (c) 2022-2023, NVIDIA CORPORATION.

set -euo pipefail

. /opt/conda/etc/profile.d/conda.sh

rapids-logger "Generate C++ testing dependencies"
rapids-dependency-file-generator \
  --output conda \
  --file_key test_cpp \
  --matrix "cuda=${RAPIDS_CUDA_VERSION%.*};arch=$(arch)" | tee env.yaml

rapids-mamba-retry env create --force -f env.yaml -n test

# Temporarily allow unbound variables for conda activation.
set +u
conda activate test
set -u

CPP_CHANNEL=$(rapids-download-conda-from-s3 cpp)

RAPIDS_TESTS_DIR=${RAPIDS_TESTS_DIR:-"${PWD}/test-results"}/
mkdir -p "${RAPIDS_TESTS_DIR}"

rapids-print-env

rapids-mamba-retry install \
  --channel "${CPP_CHANNEL}" \
  libraft-headers libraft libraft-tests

rapids-logger "Check GPU usage"
nvidia-smi

EXITCODE=0
trap "EXITCODE=1" ERR
set +e

# Run libraft gtests from libraft-tests package
cd "$CONDA_PREFIX"/bin/gtests/libraft
ctest -j8 --output-on-failure

rapids-logger "Test script exiting with value: $EXITCODE"
exit ${EXITCODE}
rapidsai_public_repos/raft/ci/test_wheel_pylibraft.sh

#!/bin/bash
# Copyright (c) 2023, NVIDIA CORPORATION.

set -euo pipefail

mkdir -p ./dist
RAPIDS_PY_CUDA_SUFFIX="$(rapids-wheel-ctk-name-gen ${RAPIDS_CUDA_VERSION})"
RAPIDS_PY_WHEEL_NAME="pylibraft_${RAPIDS_PY_CUDA_SUFFIX}" rapids-download-wheels-from-s3 ./dist

# echo to expand wildcard before adding `[extra]` requires for pip
python -m pip install $(echo ./dist/pylibraft*.whl)[test]

# Run smoke tests for aarch64 pull requests
if [[ "$(arch)" == "aarch64" && "${RAPIDS_BUILD_TYPE}" == "pull-request" ]]; then
  python ./ci/wheel_smoke_test_pylibraft.py
else
  python -m pytest ./python/pylibraft/pylibraft/test
fi
rapidsai_public_repos/raft/ci/build_python.sh

#!/bin/bash
# Copyright (c) 2022-2023, NVIDIA CORPORATION.

set -euo pipefail

source rapids-env-update

export CMAKE_GENERATOR=Ninja

rapids-print-env

rapids-logger "Begin py build"

CPP_CHANNEL=$(rapids-download-conda-from-s3 cpp)

version=$(rapids-generate-version)
git_commit=$(git rev-parse HEAD)
export RAPIDS_PACKAGE_VERSION=${version}
echo "${version}" > VERSION

package_dir="python"
for package_name in pylibraft raft-dask; do
  underscore_package_name=$(echo "${package_name}" | tr "-" "_")
  sed -i "/^__git_commit__/ s/= .*/= \"${git_commit}\"/g" "${package_dir}/${package_name}/${underscore_package_name}/_version.py"
done

# TODO: Remove `--no-test` flags once importing on a CPU
# node works correctly
rapids-conda-retry mambabuild \
  --no-test \
  --channel "${CPP_CHANNEL}" \
  conda/recipes/pylibraft

rapids-conda-retry mambabuild \
  --no-test \
  --channel "${CPP_CHANNEL}" \
  --channel "${RAPIDS_CONDA_BLD_OUTPUT_DIR}" \
  conda/recipes/raft-dask

# Build ann-bench for each cuda and python version
rapids-conda-retry mambabuild \
  --no-test \
  --channel "${CPP_CHANNEL}" \
  --channel "${RAPIDS_CONDA_BLD_OUTPUT_DIR}" \
  conda/recipes/raft-ann-bench

# Build ann-bench-cpu only in CUDA 11 jobs since it only depends on python
# version
RAPIDS_CUDA_MAJOR="${RAPIDS_CUDA_VERSION%%.*}"
if [[ ${RAPIDS_CUDA_MAJOR} == "11" ]]; then
  rapids-conda-retry mambabuild \
    --no-test \
    --channel "${CPP_CHANNEL}" \
    --channel "${RAPIDS_CONDA_BLD_OUTPUT_DIR}" \
    conda/recipes/raft-ann-bench-cpu
fi

rapids-upload-conda-to-s3 python
rapidsai_public_repos/raft/ci/wheel_smoke_test_raft_dask.py

# Copyright (c) 2019-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from dask.distributed import Client, get_worker, wait
from dask_cuda import LocalCUDACluster, initialize

from raft_dask.common import (
    Comms,
    local_handle,
    perform_test_comm_split,
    perform_test_comms_allgather,
    perform_test_comms_allreduce,
    perform_test_comms_bcast,
    perform_test_comms_device_multicast_sendrecv,
    perform_test_comms_device_send_or_recv,
    perform_test_comms_device_sendrecv,
    perform_test_comms_gather,
    perform_test_comms_gatherv,
    perform_test_comms_reduce,
    perform_test_comms_reducescatter,
    perform_test_comms_send_recv,
)

import os

os.environ["UCX_LOG_LEVEL"] = "error"


def func_test_send_recv(sessionId, n_trials):
    handle = local_handle(sessionId, dask_worker=get_worker())
    return perform_test_comms_send_recv(handle, n_trials)


def func_test_collective(func, sessionId, root):
    handle = local_handle(sessionId, dask_worker=get_worker())
    return func(handle, root)


if __name__ == "__main__":
    # initial setup
    cluster = LocalCUDACluster(protocol="tcp", scheduler_port=0)
    client = Client(cluster)

    n_trials = 5
    root_location = "client"

    # p2p test for ucx
    cb = Comms(comms_p2p=True, verbose=True)
    cb.init()

    dfs = [
        client.submit(
            func_test_send_recv,
            cb.sessionId,
            n_trials,
            pure=False,
            workers=[w],
        )
        for w in cb.worker_addresses
    ]

    wait(dfs, timeout=5)

    assert list(map(lambda x: x.result(), dfs))

    cb.destroy()

    # collectives test for nccl
    cb = Comms(
        verbose=True, client=client, nccl_root_location=root_location
    )
    cb.init()

    for k, v in cb.worker_info(cb.worker_addresses).items():
        dfs = [
            client.submit(
                func_test_collective,
                perform_test_comms_allgather,
                cb.sessionId,
                v["rank"],
                pure=False,
                workers=[w],
            )
            for w in cb.worker_addresses
        ]
        wait(dfs, timeout=5)
        assert all([x.result() for x in dfs])

    cb.destroy()

    # final client and cluster teardown
    client.close()
    cluster.close()
rapidsai_public_repos/raft/ci/build_wheel.sh

#!/bin/bash
# Copyright (c) 2023, NVIDIA CORPORATION.

set -euo pipefail

package_name=$1
package_dir=$2

underscore_package_name=$(echo "${package_name}" | tr "-" "_")

source rapids-configure-sccache
source rapids-date-string

version=$(rapids-generate-version)
git_commit=$(git rev-parse HEAD)

RAPIDS_PY_CUDA_SUFFIX="$(rapids-wheel-ctk-name-gen ${RAPIDS_CUDA_VERSION})"

# This is the version of the suffix with a preceding hyphen. It's used
# everywhere except in the final wheel name.
PACKAGE_CUDA_SUFFIX="-${RAPIDS_PY_CUDA_SUFFIX}"

# Patch project metadata files to include the CUDA version suffix and version override.
pyproject_file="${package_dir}/pyproject.toml"
version_file="${package_dir}/${underscore_package_name}/_version.py"

sed -i "s/name = \"${package_name}\"/name = \"${package_name}${PACKAGE_CUDA_SUFFIX}\"/g" ${pyproject_file}
echo "${version}" > VERSION
sed -i "/^__git_commit__ / s/= .*/= \"${git_commit}\"/g" ${version_file}

# For nightlies we want to ensure that we're pulling in alphas as well. The
# easiest way to do so is to augment the spec with a constraint containing a
# min alpha version that doesn't affect the version bounds but does allow usage
# of alpha versions for that dependency without --pre
alpha_spec=''
if ! rapids-is-release-build; then
  alpha_spec=',>=0.0.0a0'
fi

if [[ ${package_name} == "raft-dask" ]]; then
  sed -r -i "s/pylibraft==(.*)\"/pylibraft${PACKAGE_CUDA_SUFFIX}==\1${alpha_spec}\"/g" ${pyproject_file}
  sed -r -i "s/ucx-py==(.*)\"/ucx-py${PACKAGE_CUDA_SUFFIX}==\1${alpha_spec}\"/g" ${pyproject_file}
  sed -r -i "s/rapids-dask-dependency==(.*)\"/rapids-dask-dependency==\1${alpha_spec}\"/g" ${pyproject_file}
  sed -r -i "s/dask-cuda==(.*)\"/dask-cuda==\1${alpha_spec}\"/g" ${pyproject_file}
else
  sed -r -i "s/rmm(.*)\"/rmm${PACKAGE_CUDA_SUFFIX}\1${alpha_spec}\"/g" ${pyproject_file}
fi

if [[ $PACKAGE_CUDA_SUFFIX == "-cu12" ]]; then
  sed -i "s/cuda-python[<=>\.,0-9a]*/cuda-python>=12.0,<13.0a0/g" ${pyproject_file}
  sed -i "s/cupy-cuda11x/cupy-cuda12x/g" ${pyproject_file}
fi

cd "${package_dir}"

# Hardcode the output dir
python -m pip wheel . -w dist -vvv --no-deps --disable-pip-version-check

mkdir -p final_dist
python -m auditwheel repair -w final_dist dist/*

RAPIDS_PY_WHEEL_NAME="${underscore_package_name}_${RAPIDS_PY_CUDA_SUFFIX}" rapids-upload-wheels-to-s3 final_dist
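The wheel-build script above normalizes names in two directions: wheel module names use underscores (via `tr "-" "_"`), while the distribution name keeps hyphens and gains a CUDA suffix. A minimal sketch with hypothetical input values:

```shell
# Hypothetical inputs, standing in for the script's $1 and the CI-provided suffix.
package_name="raft-dask"
RAPIDS_PY_CUDA_SUFFIX="cu12"

# Module/wheel-file form: hyphens become underscores.
underscore_package_name=$(echo "${package_name}" | tr "-" "_")

# Distribution form: the suffix is prepended with a hyphen and appended to the name.
PACKAGE_CUDA_SUFFIX="-${RAPIDS_PY_CUDA_SUFFIX}"
distribution_name="${package_name}${PACKAGE_CUDA_SUFFIX}"

echo "${underscore_package_name}"
echo "${distribution_name}"
```

So the same package is `raft_dask` on disk but published as `raft-dask-cu12` in this sketch.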
rapidsai_public_repos/raft/ci/build_wheel_pylibraft.sh

#!/bin/bash
# Copyright (c) 2023, NVIDIA CORPORATION.

set -euo pipefail

# Set up skbuild options. Enable sccache in skbuild config options
export SKBUILD_CONFIGURE_OPTIONS="-DRAFT_BUILD_WHEELS=ON -DDETECT_CONDA_ENV=OFF -DFIND_RAFT_CPP=OFF"

ci/build_wheel.sh pylibraft python/pylibraft
rapidsai_public_repos/raft/ci/test_wheel_raft_dask.sh

#!/bin/bash
# Copyright (c) 2023, NVIDIA CORPORATION.

set -euo pipefail

mkdir -p ./dist
RAPIDS_PY_CUDA_SUFFIX="$(rapids-wheel-ctk-name-gen ${RAPIDS_CUDA_VERSION})"
RAPIDS_PY_WHEEL_NAME="raft_dask_${RAPIDS_PY_CUDA_SUFFIX}" rapids-download-wheels-from-s3 ./dist

# Download the pylibraft built in the previous step
RAPIDS_PY_WHEEL_NAME="pylibraft_${RAPIDS_PY_CUDA_SUFFIX}" rapids-download-wheels-from-s3 ./local-pylibraft-dep
python -m pip install --no-deps ./local-pylibraft-dep/pylibraft*.whl

# echo to expand wildcard before adding `[extra]` requires for pip
python -m pip install $(echo ./dist/raft_dask*.whl)[test]

# Run smoke tests for aarch64 pull requests
if [[ "$(arch)" == "aarch64" && "${RAPIDS_BUILD_TYPE}" == "pull-request" ]]; then
  python ./ci/wheel_smoke_test_raft_dask.py
else
  python -m pytest ./python/raft-dask/raft_dask/test
fi
rapidsai_public_repos/raft/ci/check_style.sh

#!/bin/bash
# Copyright (c) 2020-2023, NVIDIA CORPORATION.

set -euo pipefail

rapids-logger "Create checks conda environment"
. /opt/conda/etc/profile.d/conda.sh

rapids-dependency-file-generator \
  --output conda \
  --file_key checks \
  --matrix "cuda=${RAPIDS_CUDA_VERSION%.*};arch=$(arch);py=${RAPIDS_PY_VERSION}" | tee env.yaml

rapids-mamba-retry env create --force -f env.yaml -n checks
conda activate checks

# Run pre-commit checks
pre-commit run --all-files --show-diff-on-failure
rapidsai_public_repos/raft/ci/build_cpp.sh

#!/bin/bash
# Copyright (c) 2022-2023, NVIDIA CORPORATION.

set -euo pipefail

source rapids-env-update

export CMAKE_GENERATOR=Ninja

rapids-print-env

version=$(rapids-generate-version)

rapids-logger "Begin cpp build"

RAPIDS_PACKAGE_VERSION=${version} rapids-conda-retry mambabuild conda/recipes/libraft

rapids-upload-conda-to-s3 cpp
rapidsai_public_repos/raft/ci/build_wheel_raft_dask.sh

#!/bin/bash
# Copyright (c) 2023, NVIDIA CORPORATION.

set -euo pipefail

# Set up skbuild options. Enable sccache in skbuild config options
export SKBUILD_CONFIGURE_OPTIONS="-DRAFT_BUILD_WHEELS=ON -DDETECT_CONDA_ENV=OFF -DFIND_RAFT_CPP=OFF"

ci/build_wheel.sh raft-dask python/raft-dask
rapidsai_public_repos/raft/ci/wheel_smoke_test_pylibraft.py

# Copyright (c) 2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import numpy as np
from scipy.spatial.distance import cdist

from pylibraft.common import Handle, Stream, device_ndarray
from pylibraft.distance import pairwise_distance


if __name__ == "__main__":
    metric = "euclidean"
    n_rows = 1337
    n_cols = 1337

    input1 = np.random.random_sample((n_rows, n_cols))
    input1 = np.asarray(input1, order="C").astype(np.float64)

    output = np.zeros((n_rows, n_rows), dtype=np.float64)

    expected = cdist(input1, input1, metric)

    expected[expected <= 1e-5] = 0.0

    input1_device = device_ndarray(input1)
    output_device = None

    s2 = Stream()
    handle = Handle(stream=s2)
    ret_output = pairwise_distance(
        input1_device, input1_device, output_device, metric, handle=handle
    )
    handle.sync()

    output_device = ret_output

    actual = output_device.copy_to_host()

    actual[actual <= 1e-5] = 0.0

    assert np.allclose(expected, actual, rtol=1e-4)
rapidsai_public_repos/raft/ci/build_docs.sh

#!/bin/bash
# Copyright (c) 2023, NVIDIA CORPORATION.

set -euo pipefail

rapids-logger "Create test conda environment"
. /opt/conda/etc/profile.d/conda.sh

rapids-dependency-file-generator \
  --output conda \
  --file_key docs \
  --matrix "cuda=${RAPIDS_CUDA_VERSION%.*};arch=$(arch);py=${RAPIDS_PY_VERSION}" | tee env.yaml

rapids-mamba-retry env create --force -f env.yaml -n docs
conda activate docs

rapids-print-env

rapids-logger "Downloading artifacts from previous jobs"
CPP_CHANNEL=$(rapids-download-conda-from-s3 cpp)
PYTHON_CHANNEL=$(rapids-download-conda-from-s3 python)

rapids-mamba-retry install \
  --channel "${CPP_CHANNEL}" \
  --channel "${PYTHON_CHANNEL}" \
  libraft \
  libraft-headers \
  pylibraft \
  raft-dask

export RAPIDS_VERSION_NUMBER="24.02"
export RAPIDS_DOCS_DIR="$(mktemp -d)"

rapids-logger "Build CPP docs"
pushd cpp/doxygen
doxygen Doxyfile
popd

rapids-logger "Build Python docs"
pushd docs
sphinx-build -b dirhtml source _html
sphinx-build -b text source _text
mkdir -p "${RAPIDS_DOCS_DIR}/raft/"{html,txt}
mv _html/* "${RAPIDS_DOCS_DIR}/raft/html"
mv _text/* "${RAPIDS_DOCS_DIR}/raft/txt"
popd

rapids-upload-docs
rapidsai_public_repos/raft/ci
rapidsai_public_repos/raft/ci/release/update-version.sh
#!/bin/bash
# Copyright (c) 2020-2023, NVIDIA CORPORATION.
########################
# RAFT Version Updater #
########################

## Usage
# bash update-version.sh <new_version>
# Format is YY.MM.PP - no leading 'v' or trailing 'a'

NEXT_FULL_TAG=$1

# Get current version
CURRENT_TAG=$(git tag --merged HEAD | grep -xE '^v.*' | sort --version-sort | tail -n 1 | tr -d 'v')
CURRENT_MAJOR=$(echo $CURRENT_TAG | awk '{split($0, a, "."); print a[1]}')
CURRENT_MINOR=$(echo $CURRENT_TAG | awk '{split($0, a, "."); print a[2]}')
CURRENT_PATCH=$(echo $CURRENT_TAG | awk '{split($0, a, "."); print a[3]}')
CURRENT_SHORT_TAG=${CURRENT_MAJOR}.${CURRENT_MINOR}

# Get <major>.<minor> for next version
NEXT_MAJOR=$(echo $NEXT_FULL_TAG | awk '{split($0, a, "."); print a[1]}')
NEXT_MINOR=$(echo $NEXT_FULL_TAG | awk '{split($0, a, "."); print a[2]}')
NEXT_SHORT_TAG=${NEXT_MAJOR}.${NEXT_MINOR}

NEXT_UCX_PY_SHORT_TAG="$(curl -sL https://version.gpuci.io/rapids/${NEXT_SHORT_TAG})"
NEXT_UCX_PY_VERSION="${NEXT_UCX_PY_SHORT_TAG}.*"

# Need to distutils-normalize the original version
NEXT_SHORT_TAG_PEP440=$(python -c "from setuptools.extern import packaging; print(packaging.version.Version('${NEXT_SHORT_TAG}'))")
NEXT_UCX_PY_SHORT_TAG_PEP440=$(python -c "from setuptools.extern import packaging; print(packaging.version.Version('${NEXT_UCX_PY_SHORT_TAG}'))")

echo "Preparing release $CURRENT_TAG => $NEXT_FULL_TAG"

# Inplace sed replace; workaround for Linux and Mac
function sed_runner() {
    sed -i.bak ''"$1"'' $2 && rm -f ${2}.bak
}

sed_runner "s/set(RAPIDS_VERSION .*)/set(RAPIDS_VERSION \"${NEXT_SHORT_TAG}\")/g" cpp/CMakeLists.txt
sed_runner "s/set(RAPIDS_VERSION .*)/set(RAPIDS_VERSION \"${NEXT_SHORT_TAG}\")/g" cpp/template/cmake/thirdparty/fetch_rapids.cmake
sed_runner "s/set(RAFT_VERSION .*)/set(RAFT_VERSION \"${NEXT_FULL_TAG}\")/g" cpp/CMakeLists.txt
sed_runner 's/'"pylibraft_version .*)"'/'"pylibraft_version ${NEXT_FULL_TAG})"'/g' python/pylibraft/CMakeLists.txt
sed_runner 's/'"raft_dask_version .*)"'/'"raft_dask_version ${NEXT_FULL_TAG})"'/g' python/raft-dask/CMakeLists.txt
sed_runner 's/'"branch-.*\/RAPIDS.cmake"'/'"branch-${NEXT_SHORT_TAG}\/RAPIDS.cmake"'/g' fetch_rapids.cmake

# Centralized version file update
echo "${NEXT_FULL_TAG}" > VERSION

# Wheel testing script
sed_runner "s/branch-.*/branch-${NEXT_SHORT_TAG}/g" ci/test_wheel_raft_dask.sh

# Docs update
sed_runner 's/version = .*/version = '"'${NEXT_SHORT_TAG}'"'/g' docs/source/conf.py
sed_runner 's/release = .*/release = '"'${NEXT_FULL_TAG}'"'/g' docs/source/conf.py

DEPENDENCIES=(
  dask-cuda
  pylibraft
  pylibraft-cu11
  pylibraft-cu12
  rmm
  rmm-cu11
  rmm-cu12
  rapids-dask-dependency
  # ucx-py is handled separately below
)
for FILE in dependencies.yaml conda/environments/*.yaml; do
  for DEP in "${DEPENDENCIES[@]}"; do
    sed_runner "/-.* ${DEP}==/ s/==.*/==${NEXT_SHORT_TAG_PEP440}\.*/g" ${FILE}
  done
  sed_runner "/-.* ucx-py==/ s/==.*/==${NEXT_UCX_PY_SHORT_TAG_PEP440}\.*/g" ${FILE}
done
for FILE in python/*/pyproject.toml; do
  for DEP in "${DEPENDENCIES[@]}"; do
    sed_runner "/\"${DEP}==/ s/==.*\"/==${NEXT_SHORT_TAG_PEP440}.*\"/g" ${FILE}
  done
  sed_runner "/\"ucx-py==/ s/==.*\"/==${NEXT_UCX_PY_SHORT_TAG_PEP440}.*\"/g" ${FILE}
done

sed_runner "/^ucx_py_version:$/ {n;s/.*/  - \"${NEXT_UCX_PY_VERSION}\"/}" conda/recipes/raft-dask/conda_build_config.yaml

for FILE in .github/workflows/*.yaml; do
  sed_runner "/shared-workflows/ s/@.*/@branch-${NEXT_SHORT_TAG}/g" "${FILE}"
done
sed_runner "s/RAPIDS_VERSION_NUMBER=\".*/RAPIDS_VERSION_NUMBER=\"${NEXT_SHORT_TAG}\"/g" ci/build_docs.sh

sed_runner "/^PROJECT_NUMBER/ s|\".*\"|\"${NEXT_SHORT_TAG}\"|g" cpp/doxygen/Doxyfile

sed_runner "/^set(RAFT_VERSION/ s|\".*\"|\"${NEXT_SHORT_TAG}\"|g" docs/source/build.md
sed_runner "s|branch-[0-9][0-9].[0-9][0-9]|branch-${NEXT_SHORT_TAG}|g" docs/source/build.md
sed_runner "/rapidsai\/raft/ s|branch-[0-9][0-9].[0-9][0-9]|branch-${NEXT_SHORT_TAG}|g" docs/source/developer_guide.md

sed_runner "s|:[0-9][0-9].[0-9][0-9]|:${NEXT_SHORT_TAG}|g" docs/source/raft_ann_benchmarks.md

sed_runner "s|branch-[0-9][0-9].[0-9][0-9]|branch-${NEXT_SHORT_TAG}|g" README.md

# .devcontainer files
find .devcontainer/ -type f -name devcontainer.json -print0 | while IFS= read -r -d '' filename; do
    sed_runner "s@rapidsai/devcontainers:[0-9.]*@rapidsai/devcontainers:${NEXT_SHORT_TAG}@g" "${filename}"
    sed_runner "s@rapidsai/devcontainers/features/ucx:[0-9.]*@rapidsai/devcontainers/features/ucx:${NEXT_SHORT_TAG_PEP440}@" "${filename}"
    sed_runner "s@rapidsai/devcontainers/features/rapids-build-utils:[0-9.]*@rapidsai/devcontainers/features/rapids-build-utils:${NEXT_SHORT_TAG_PEP440}@" "${filename}"
done
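The script above derives the `<major>.<minor>` short tag by splitting the `YY.MM.PP` string with `awk '{split($0, a, ".")}'`. A minimal Python sketch of the same derivation (the example tags are hypothetical; the real script reads them from `git tag` and `$1`):

```python
def split_version(tag):
    """Split a YY.MM.PP version string into (major, minor, patch),
    mirroring the awk split calls in update-version.sh."""
    major, minor, patch = tag.split(".")
    return major, minor, patch


def short_tag(tag):
    """<major>.<minor> form, as used for names like branch-23.10."""
    major, minor, _ = split_version(tag)
    return f"{major}.{minor}"


# Hypothetical current and next tags for illustration only.
current, nxt = "23.08.00", "23.10.00"
print(f"Preparing release {short_tag(current)} => {short_tag(nxt)}")
```

The short tag is what most of the `sed_runner` substitutions interpolate into branch names and dependency pins.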
rapidsai_public_repos/raft/ci/checks/black_lists.sh
#!/bin/bash
# Copyright (c) 2020-2022, NVIDIA CORPORATION.
##########################################
# RAFT black listed function call Tester #
##########################################

# PR_TARGET_BRANCH is set by the CI environment
git checkout --quiet $PR_TARGET_BRANCH

# Switch back to tip of PR branch
git checkout --quiet current-pr-branch

# Ignore errors during searching
set +e

# Disable history expansion to enable use of ! in perl regex
set +H

RETVAL=0

for black_listed in cudaDeviceSynchronize cudaMalloc cudaMallocManaged cudaFree cudaMallocHost cudaHostAlloc cudaFreeHost; do
    TMP=`git --no-pager diff --ignore-submodules -w --minimal -U0 -S"$black_listed" $PR_TARGET_BRANCH | grep '^+' | grep -v '^+++' | grep "$black_listed"`
    if [ "$TMP" != "" ]; then
        for filename in `git --no-pager diff --ignore-submodules -w --minimal --name-only -S"$black_listed" $PR_TARGET_BRANCH`; do
            basefilename=$(basename -- "$filename")
            filext="${basefilename##*.}"
            if [ "$filext" != "md" ] && [ "$filext" != "sh" ]; then
                TMP2=`git --no-pager diff --ignore-submodules -w --minimal -U0 -S"$black_listed" $PR_TARGET_BRANCH -- $filename | grep '^+' | grep -v '^+++' | grep "$black_listed" | grep -vE "^\+[[:space:]]*/{2,}.*$black_listed"`
                if [ "$TMP2" != "" ]; then
                    echo "=== ERROR: black listed function call $black_listed added to $filename ==="
                    git --no-pager diff --ignore-submodules -w --minimal -S"$black_listed" $PR_TARGET_BRANCH -- $filename
                    echo "=== END ERROR ==="
                    RETVAL=1
                fi
            fi
        done
    fi
done

for cond_black_listed in cudaMemcpy cudaMemset; do
    TMP=`git --no-pager diff --ignore-submodules -w --minimal -U0 -S"$cond_black_listed" $PR_TARGET_BRANCH | grep '^+' | grep -v '^+++' | grep -P "$cond_black_listed(?!Async)"`
    if [ "$TMP" != "" ]; then
        for filename in `git --no-pager diff --ignore-submodules -w --minimal --name-only -S"$cond_black_listed" $PR_TARGET_BRANCH`; do
            basefilename=$(basename -- "$filename")
            filext="${basefilename##*.}"
            if [ "$filext" != "md" ] && [ "$filext" != "sh" ]; then
                TMP2=`git --no-pager diff --ignore-submodules -w --minimal -U0 -S"$cond_black_listed" $PR_TARGET_BRANCH -- $filename | grep '^+' | grep -v '^+++' | grep -P "$cond_black_listed(?!Async)" | grep -vE "^\+[[:space:]]*/{2,}.*$cond_black_listed"`
                if [ "$TMP2" != "" ]; then
                    echo "=== ERROR: black listed function call $cond_black_listed added to $filename ==="
                    git --no-pager diff --ignore-submodules -w --minimal -S"$cond_black_listed" $PR_TARGET_BRANCH -- $filename
                    echo "=== END ERROR ==="
                    RETVAL=1
                fi
            fi
        done
    fi
done

exit $RETVAL
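The conditional blacklist loop permits the `Async` variants while flagging synchronous `cudaMemcpy`/`cudaMemset` calls, using grep's Perl-mode negative lookahead `(?!Async)`. A small Python sketch of the same matching rule (the diff lines are invented for illustration):

```python
import re

# Same idea as grep -P "cudaMemcpy(?!Async)": match the synchronous call
# but skip the Async variant, because the lookahead fails on "Async".
sync_memcpy = re.compile(r"cudaMemcpy(?!Async)")

added_lines = [
    "+  cudaMemcpyAsync(dst, src, n, kind, stream);",  # allowed
    "+  cudaMemcpy(dst, src, n, kind);",               # flagged
]
flagged = [line for line in added_lines if sync_memcpy.search(line)]
print(flagged)
```

Note the lookahead only exempts the literal `Async` suffix; a line containing, say, `cudaMemcpyDefault` as an enum argument would still match, which is why the CI script inspects full diffs rather than single tokens.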
rapidsai_public_repos/raft/ci/checks/copyright.py
# Copyright (c) 2020-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import datetime
import re
import argparse
import io
import os
import sys

import git

SCRIPT_DIR = os.path.dirname(os.path.realpath(os.path.expanduser(__file__)))

# Add the scripts dir for gitutils
sys.path.append(os.path.normpath(os.path.join(SCRIPT_DIR, "../../cpp/scripts")))

# Now import gitutils. Ignore flake8 error here since there is no other way to
# set up imports
import gitutils  # noqa: E402

FilesToCheck = [
    re.compile(r"[.](cmake|cpp|cu|cuh|h|hpp|sh|pxd|py|pyx)$"),
    re.compile(r"CMakeLists[.]txt$"),
    re.compile(r"CMakeLists_standalone[.]txt$"),
    re.compile(r"setup[.]cfg$"),
    re.compile(r"meta[.]yaml$"),
]
ExemptFiles = [
    re.compile("cpp/include/raft/neighbors/detail/faiss_select/"),
    re.compile("cpp/include/raft/thirdparty/"),
    re.compile("docs/source/sphinxext/github_link.py"),
    re.compile("cpp/cmake/modules/FindAVX.cmake"),
]

# this will break starting at year 10000, which is probably OK :)
CheckSimple = re.compile(
    r"Copyright *(?:\(c\))? *(\d{4}),? *NVIDIA C(?:ORPORATION|orporation)"
)
CheckDouble = re.compile(
    r"Copyright *(?:\(c\))? *(\d{4})-(\d{4}),? *NVIDIA C(?:ORPORATION|orporation)"  # noqa: E501
)


def checkThisFile(f):
    if isinstance(f, git.Diff):
        if f.deleted_file or f.b_blob.size == 0:
            return False
        f = f.b_path
    elif not os.path.exists(f) or os.stat(f).st_size == 0:
        # This check covers things like symlinks which point to files that DNE
        return False
    for exempt in ExemptFiles:
        if exempt.search(f):
            return False
    for checker in FilesToCheck:
        if checker.search(f):
            return True
    return False


def modifiedFiles():
    """Get a set of all modified files, as Diff objects.

    The files returned have been modified in git since the merge base of HEAD
    and the upstream of the target branch. We return the Diff objects so that
    we can read only the staged changes.
    """
    repo = git.Repo()
    # Use the environment variable TARGET_BRANCH or RAPIDS_BASE_BRANCH (defined in CI) if possible
    target_branch = os.environ.get("TARGET_BRANCH", os.environ.get("RAPIDS_BASE_BRANCH"))
    if target_branch is None:
        # Fall back to the closest branch if not on CI
        target_branch = repo.git.describe(
            all=True, tags=True, match="branch-*", abbrev=0
        ).lstrip("heads/")

    upstream_target_branch = None
    if target_branch in repo.heads:
        # Use the tracking branch of the local reference if it exists. This
        # returns None if no tracking branch is set.
        upstream_target_branch = repo.heads[target_branch].tracking_branch()
    if upstream_target_branch is None:
        # Fall back to the remote with the newest target_branch. This code
        # path is used on CI because the only local branch reference is
        # current-pr-branch, and thus target_branch is not in repo.heads.
        # This also happens if no tracking branch is defined for the local
        # target_branch. We use the remote with the latest commit if
        # multiple remotes are defined.
        candidate_branches = [
            remote.refs[target_branch]
            for remote in repo.remotes
            if target_branch in remote.refs
        ]
        if len(candidate_branches) > 0:
            upstream_target_branch = sorted(
                candidate_branches,
                key=lambda branch: branch.commit.committed_datetime,
            )[-1]
        else:
            # If no remotes are defined, try to use the local version of the
            # target_branch. If this fails, the repo configuration must be very
            # strange and we can fix this script on a case-by-case basis.
            upstream_target_branch = repo.heads[target_branch]
    merge_base = repo.merge_base("HEAD", upstream_target_branch.commit)[0]
    diff = merge_base.diff()
    changed_files = {f for f in diff if f.b_path is not None}
    return changed_files


def getCopyrightYears(line):
    res = CheckSimple.search(line)
    if res:
        return int(res.group(1)), int(res.group(1))
    res = CheckDouble.search(line)
    if res:
        return int(res.group(1)), int(res.group(2))
    return None, None


def replaceCurrentYear(line, start, end):
    # first turn a simple regex into double (if applicable). then update years
    res = CheckSimple.sub(r"Copyright (c) \1-\1, NVIDIA CORPORATION", line)
    res = CheckDouble.sub(
        rf"Copyright (c) {start:04d}-{end:04d}, NVIDIA CORPORATION",
        res,
    )
    return res


def checkCopyright(f, update_current_year):
    """Checks for copyright headers and their years."""
    errs = []
    thisYear = datetime.datetime.now().year
    lineNum = 0
    crFound = False
    yearMatched = False

    if isinstance(f, git.Diff):
        path = f.b_path
        lines = f.b_blob.data_stream.read().decode().splitlines(keepends=True)
    else:
        path = f
        with open(f, encoding="utf-8") as fp:
            lines = fp.readlines()

    for line in lines:
        lineNum += 1
        start, end = getCopyrightYears(line)
        if start is None:
            continue

        crFound = True
        if start > end:
            e = [
                path,
                lineNum,
                "First year after second year in the copyright "
                "header (manual fix required)",
                None,
            ]
            errs.append(e)
        elif thisYear < start or thisYear > end:
            e = [
                path,
                lineNum,
                "Current year not included in the copyright header",
                None,
            ]
            if thisYear < start:
                e[-1] = replaceCurrentYear(line, thisYear, end)
            if thisYear > end:
                e[-1] = replaceCurrentYear(line, start, thisYear)
            errs.append(e)
        else:
            yearMatched = True

    # copyright header itself not found
    if not crFound:
        e = [
            path,
            0,
            "Copyright header missing or formatted incorrectly "
            "(manual fix required)",
            None,
        ]
        errs.append(e)

    # even if the year matches a copyright header, make the check pass
    if yearMatched:
        errs = []

    if update_current_year:
        errs_update = [x for x in errs if x[-1] is not None]
        if len(errs_update) > 0:
            lines_changed = ", ".join(str(x[1]) for x in errs_update)
            print(f"File: {path}. Changing line(s) {lines_changed}")
            for _, lineNum, __, replacement in errs_update:
                lines[lineNum - 1] = replacement
            with open(path, "w", encoding="utf-8") as out_file:
                out_file.writelines(lines)

    return errs


def getAllFilesUnderDir(root, pathFilter=None):
    retList = []
    for dirpath, dirnames, filenames in os.walk(root):
        for fn in filenames:
            filePath = os.path.join(dirpath, fn)
            if pathFilter(filePath):
                retList.append(filePath)
    return retList


def checkCopyright_main():
    """
    Checks for copyright headers in all the modified files. In case of local
    repo, this script will just look for uncommitted files and in case of CI
    it compares between branches "$PR_TARGET_BRANCH" and "current-pr-branch"
    """
    retVal = 0

    argparser = argparse.ArgumentParser(
        "Checks for a consistent copyright header in git's modified files"
    )
    argparser.add_argument(
        "--update-current-year",
        dest="update_current_year",
        action="store_true",
        required=False,
        help="If set, "
        "update the current year if a header is already "
        "present and well formatted.",
    )
    argparser.add_argument(
        "--git-modified-only",
        dest="git_modified_only",
        action="store_true",
        required=False,
        help="If set, "
        "only files seen as modified by git will be "
        "processed.",
    )

    args, dirs = argparser.parse_known_args()

    if args.git_modified_only:
        files = [f for f in modifiedFiles() if checkThisFile(f)]
    else:
        files = []
        for d in [os.path.abspath(d) for d in dirs]:
            if not os.path.isdir(d):
                raise ValueError(f"{d} is not a directory.")
            files += getAllFilesUnderDir(d, pathFilter=checkThisFile)

    errors = []
    for f in files:
        errors += checkCopyright(f, args.update_current_year)

    if len(errors) > 0:
        if any(e[-1] is None for e in errors):
            print("Copyright headers incomplete in some of the files!")
        for e in errors:
            print("  %s:%d Issue: %s" % (e[0], e[1], e[2]))
        print("")
        n_fixable = sum(1 for e in errors if e[-1] is not None)
        path_parts = os.path.abspath(__file__).split(os.sep)
        file_from_repo = os.sep.join(path_parts[path_parts.index("ci") :])
        if n_fixable > 0 and not args.update_current_year:
            print(
                f"You can run `python {file_from_repo} --git-modified-only "
                "--update-current-year` and stage the results in git to "
                f"fix {n_fixable} of these errors.\n"
            )
        retVal = 1

    return retVal


if __name__ == "__main__":
    sys.exit(checkCopyright_main())
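The two module-level regexes drive the year rewrite: `CheckSimple` matches single-year headers, `CheckDouble` matches ranges, and `replaceCurrentYear` first promotes a single year to a range before updating it. A standalone sketch of that update path, using the same patterns (the sample header line is invented):

```python
import re

CheckSimple = re.compile(
    r"Copyright *(?:\(c\))? *(\d{4}),? *NVIDIA C(?:ORPORATION|orporation)"
)
CheckDouble = re.compile(
    r"Copyright *(?:\(c\))? *(\d{4})-(\d{4}),? *NVIDIA C(?:ORPORATION|orporation)"
)


def update_year(line, this_year):
    # Promote "2020" to "2020-2020", then rewrite the range end,
    # mirroring replaceCurrentYear() in the script above.
    res = CheckSimple.sub(r"Copyright (c) \1-\1, NVIDIA CORPORATION", line)
    m = CheckDouble.search(res)
    if m:
        start = int(m.group(1))
        res = CheckDouble.sub(
            rf"Copyright (c) {start:04d}-{this_year:04d}, NVIDIA CORPORATION",
            res,
        )
    return res


print(update_year("# Copyright (c) 2020, NVIDIA CORPORATION.", 2023))
```

Because `CheckSimple` requires the year to be followed directly by an optional comma and `NVIDIA`, it never fires on a line that already contains a `YYYY-YYYY` range, so the promotion step is safe to apply unconditionally.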
rapidsai_public_repos/raft/thirdparty/README.md
# PCG

Link to the repository: https://github.com/imneme/pcg-c-basic

Commit ID for last borrowed code: bc39cd76ac3d541e618606bcc6e1e5ba5e5e6aa3
rapidsai_public_repos/raft/thirdparty/pcg/pcg_basic.c
/*
 * PCG Random Number Generation for C.
 *
 * Copyright 2014 Melissa O'Neill <oneill@pcg-random.org>
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 *
 * For additional information about the PCG random number generation scheme,
 * including its license and other licensing options, visit
 *
 *     http://www.pcg-random.org
 */

/*
 * This code is derived from the full C implementation, which is in turn
 * derived from the canonical C++ PCG implementation. The C++ version
 * has many additional features and is preferable if you can use C++ in
 * your project.
 */

#include "pcg_basic.h"

// state for global RNGs

static pcg32_random_t pcg32_global = PCG32_INITIALIZER;

// pcg32_srandom(initstate, initseq)
// pcg32_srandom_r(rng, initstate, initseq):
//     Seed the rng.  Specified in two parts, state initializer and a
//     sequence selection constant (a.k.a. stream id)

void pcg32_srandom_r(pcg32_random_t* rng, uint64_t initstate, uint64_t initseq)
{
    rng->state = 0U;
    rng->inc = (initseq << 1u) | 1u;
    pcg32_random_r(rng);
    rng->state += initstate;
    pcg32_random_r(rng);
}

void pcg32_srandom(uint64_t seed, uint64_t seq)
{
    pcg32_srandom_r(&pcg32_global, seed, seq);
}

// pcg32_random()
// pcg32_random_r(rng)
//     Generate a uniformly distributed 32-bit random number

uint32_t pcg32_random_r(pcg32_random_t* rng)
{
    uint64_t oldstate = rng->state;
    rng->state = oldstate * 6364136223846793005ULL + rng->inc;
    uint32_t xorshifted = ((oldstate >> 18u) ^ oldstate) >> 27u;
    uint32_t rot = oldstate >> 59u;
    return (xorshifted >> rot) | (xorshifted << ((-rot) & 31));
}

uint32_t pcg32_random()
{
    return pcg32_random_r(&pcg32_global);
}

// pcg32_boundedrand(bound):
// pcg32_boundedrand_r(rng, bound):
//     Generate a uniformly distributed number, r, where 0 <= r < bound

uint32_t pcg32_boundedrand_r(pcg32_random_t* rng, uint32_t bound)
{
    // To avoid bias, we need to make the range of the RNG a multiple of
    // bound, which we do by dropping output less than a threshold.
    // A naive scheme to calculate the threshold would be to do
    //
    //     uint32_t threshold = 0x100000000ull % bound;
    //
    // but 64-bit div/mod is slower than 32-bit div/mod (especially on
    // 32-bit platforms).  In essence, we do
    //
    //     uint32_t threshold = (0x100000000ull-bound) % bound;
    //
    // because this version will calculate the same modulus, but the LHS
    // value is less than 2^32.

    uint32_t threshold = -bound % bound;

    // Uniformity guarantees that this loop will terminate.  In practice, it
    // should usually terminate quickly; on average (assuming all bounds are
    // equally likely), 82.25% of the time, we can expect it to require just
    // one iteration.  In the worst case, someone passes a bound of 2^31 + 1
    // (i.e., 2147483649), which invalidates almost 50% of the range.  In
    // practice, bounds are typically small and only a tiny amount of the range
    // is eliminated.
    for (;;) {
        uint32_t r = pcg32_random_r(rng);
        if (r >= threshold)
            return r % bound;
    }
}

uint32_t pcg32_boundedrand(uint32_t bound)
{
    return pcg32_boundedrand_r(&pcg32_global, bound);
}
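The state update and output permutation in `pcg32_random_r`, and the rejection threshold in `pcg32_boundedrand_r`, translate directly to Python once the 64-bit wraparound is masked by hand. A sketch (the multiplier is copied from the C source above; the seed values 42/54 are arbitrary examples, not from this repo):

```python
MASK64 = (1 << 64) - 1


class PCG32:
    """Python sketch of pcg32_srandom_r / pcg32_random_r / pcg32_boundedrand_r."""

    MULT = 6364136223846793005  # same LCG multiplier as the C code

    def __init__(self, initstate, initseq):
        # Mirrors pcg32_srandom_r: force inc odd, advance, add state, advance.
        self.state = 0
        self.inc = ((initseq << 1) | 1) & MASK64
        self.next()
        self.state = (self.state + initstate) & MASK64
        self.next()

    def next(self):
        # LCG step plus the XSH-RR output permutation.
        oldstate = self.state
        self.state = (oldstate * self.MULT + self.inc) & MASK64
        xorshifted = (((oldstate >> 18) ^ oldstate) >> 27) & 0xFFFFFFFF
        rot = oldstate >> 59
        return ((xorshifted >> rot) | (xorshifted << ((-rot) & 31))) & 0xFFFFFFFF

    def bounded(self, bound):
        # Same rejection scheme as pcg32_boundedrand_r: reject outputs below
        # (2**32 - bound) % bound so the remainder is unbiased.
        threshold = (2**32 - bound) % bound
        while True:
            r = self.next()
            if r >= threshold:
                return r % bound


rng = PCG32(42, 54)
print([rng.next() for _ in range(3)])
print(rng.bounded(100))
```

Note that Python's `(-rot) & 31` matches the C expression exactly, since Python's infinite-precision integers make `-rot & 31` behave like the unsigned wraparound of `(-rot) & 31` on 32-bit values.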
rapidsai_public_repos/raft/thirdparty/pcg/LICENSE.txt
Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. 
For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the 
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. 
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "{}" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright {yyyy} {name of copyright owner} Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
rapidsai_public_repos/raft/thirdparty/LICENSES/LICENSE.pytorch
From PyTorch: Copyright (c) 2016- Facebook, Inc (Adam Paszke) Copyright (c) 2014- Facebook, Inc (Soumith Chintala) Copyright (c) 2011-2014 Idiap Research Institute (Ronan Collobert) Copyright (c) 2012-2014 Deepmind Technologies (Koray Kavukcuoglu) Copyright (c) 2011-2012 NEC Laboratories America (Koray Kavukcuoglu) Copyright (c) 2011-2013 NYU (Clement Farabet) Copyright (c) 2006-2010 NEC Laboratories America (Ronan Collobert, Leon Bottou, Iain Melvin, Jason Weston) Copyright (c) 2006 Idiap Research Institute (Samy Bengio) Copyright (c) 2001-2004 Idiap Research Institute (Ronan Collobert, Samy Bengio, Johnny Mariethoz) From Caffe2: Copyright (c) 2016-present, Facebook Inc. All rights reserved. All contributions by Facebook: Copyright (c) 2016 Facebook Inc. All contributions by Google: Copyright (c) 2015 Google Inc. All rights reserved. All contributions by Yangqing Jia: Copyright (c) 2015 Yangqing Jia All rights reserved. All contributions by Kakao Brain: Copyright 2019-2020 Kakao Brain All contributions by Cruise LLC: Copyright (c) 2022 Cruise LLC. All rights reserved. All contributions from Caffe: Copyright(c) 2013, 2014, 2015, the respective contributors All rights reserved. All other contributions: Copyright(c) 2015, 2016 the respective contributors All rights reserved. Caffe2 uses a copyright model similar to Caffe: each contributor holds copyright over their contributions to Caffe2. The project versioning records all such contribution and copyright details. If a contributor wants to further mark their specific copyright on a particular contribution, they should indicate their copyright solely in the commit message of the change when it is committed. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. 
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the names of Facebook, Deepmind Technologies, NYU, NEC Laboratories America and IDIAP Research Institute nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
rapidsai_public_repos/raft/thirdparty/LICENSES/mdarray.license
/*
//@HEADER
// ************************************************************************
//
//                          Kokkos v. 2.0
//               Copyright (2019) Sandia Corporation
//
// Under the terms of Contract DE-AC04-94AL85000 with Sandia Corporation,
// the U.S. Government retains certain rights in this software.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// 1. Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//
// 2. Redistributions in binary form must reproduce the above copyright
// notice, this list of conditions and the following disclaimer in the
// documentation and/or other materials provided with the distribution.
//
// 3. Neither the name of the Corporation nor the names of the
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY SANDIA CORPORATION "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
// PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL SANDIA CORPORATION OR THE
// CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
// EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
// PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
// LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
// NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
// SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// Questions? Contact Christian R. Trott (crtrott@sandia.gov)
//
// ************************************************************************
//@HEADER
*/
rapidsai_public_repos/raft/thirdparty/LICENSES/LICENSE.faiss
MIT License Copyright (c) Facebook, Inc. and its affiliates. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
0
rapidsai_public_repos/raft/thirdparty
rapidsai_public_repos/raft/thirdparty/LICENSES/LICENSE.ann-benchmark
MIT License

Copyright (c) 2018 Erik Bernhardsson

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
0
rapidsai_public_repos/raft/thirdparty
rapidsai_public_repos/raft/thirdparty/LICENSES/LICENSE_Date_Nagi
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

   "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

   "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

   "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

   "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.

   "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

   "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.

   "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).

   "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.

   "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."

   "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

   (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and

   (b) You must cause any modified files to carry prominent notices stating that You changed the files; and

   (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and

   (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

   You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

   To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives.

Copyright 2020 KETAN DATE & RAKESH NAGI

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
0
rapidsai_public_repos
rapidsai_public_repos/cuvs/.pre-commit-config.yaml
# Copyright (c) 2022-2023, NVIDIA CORPORATION.

repos:
  - repo: https://github.com/PyCQA/isort
    rev: 5.12.0
    hooks:
      - id: isort
        # Use the config file specific to each subproject so that each
        # project can specify its own first/third-party packages.
        args: ["--config-root=python/", "--resolve-all-configs"]
        files: python/.*
        types_or: [python, cython, pyi]
  - repo: https://github.com/psf/black
    rev: 22.3.0
    hooks:
      - id: black
        files: python/.*
        # Explicitly specify the pyproject.toml at the repo root, not per-project.
        args: ["--config", "pyproject.toml"]
  - repo: https://github.com/PyCQA/flake8
    rev: 5.0.4
    hooks:
      - id: flake8
        args: ["--config=.flake8"]
        files: python/.*$
        types: [file]
        types_or: [python, cython]
        additional_dependencies: ["flake8-force"]
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: 'v0.971'
    hooks:
      - id: mypy
        additional_dependencies: [types-cachetools]
        args: ["--config-file=pyproject.toml", "python/cuvs/cuvs"]
        pass_filenames: false
  - repo: https://github.com/PyCQA/pydocstyle
    rev: 6.1.1
    hooks:
      - id: pydocstyle
        # https://github.com/PyCQA/pydocstyle/issues/603
        additional_dependencies: [toml]
        args: ["--config=pyproject.toml"]
  - repo: https://github.com/pre-commit/mirrors-clang-format
    rev: v16.0.6
    hooks:
      - id: clang-format
        types_or: [c, c++, cuda]
        args: ["-fallback-style=none", "-style=file", "-i"]
  - repo: local
    hooks:
      - id: no-deprecationwarning
        name: no-deprecationwarning
        description: 'Enforce that DeprecationWarning is not introduced (use FutureWarning instead)'
        entry: '(category=|\s)DeprecationWarning[,)]'
        language: pygrep
        types_or: [python, cython]
      - id: cmake-format
        name: cmake-format
        entry: ./cpp/scripts/run-cmake-format.sh cmake-format
        language: python
        types: [cmake]
        exclude: .*/thirdparty/.*|.*FindAVX.cmake.*
        # Note that pre-commit autoupdate does not update the versions
        # of dependencies, so we'll have to update this manually.
        additional_dependencies:
          - cmakelang==0.6.13
        verbose: true
        require_serial: true
      - id: cmake-lint
        name: cmake-lint
        entry: ./cpp/scripts/run-cmake-format.sh cmake-lint
        language: python
        types: [cmake]
        # Note that pre-commit autoupdate does not update the versions
        # of dependencies, so we'll have to update this manually.
        additional_dependencies:
          - cmakelang==0.6.13
        verbose: true
        require_serial: true
        exclude: .*/thirdparty/.*
      - id: copyright-check
        name: copyright-check
        entry: python ./ci/checks/copyright.py --git-modified-only --update-current-year
        language: python
        pass_filenames: false
        additional_dependencies: [gitpython]
      - id: include-check
        name: include-check
        entry: python ./cpp/scripts/include_checker.py cpp/bench cpp/include cpp/test
        pass_filenames: false
        language: python
        additional_dependencies: [gitpython]
  - repo: https://github.com/codespell-project/codespell
    rev: v2.2.2
    hooks:
      - id: codespell
        additional_dependencies: [tomli]
        args: ["--toml", "pyproject.toml"]
        exclude: (?x)^(^CHANGELOG.md$)
  - repo: https://github.com/rapidsai/dependency-file-generator
    rev: v1.5.1
    hooks:
      - id: rapids-dependency-file-generator
        args: ["--clean"]
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: check-json

default_language_version:
  python: python3
0
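The `no-deprecationwarning` hook above is a `pygrep` rule: pre-commit simply greps staged files for the pattern and fails if any line matches. A small Python illustration of which lines that pattern flags (the sample lines are invented for the demo; the pattern itself is copied from the config):

```python
import re

# Pattern from the `no-deprecationwarning` pygrep hook: it matches
# DeprecationWarning used as a warning category (preceded by `category=` or
# whitespace, followed by ',' or ')'), but not a bare mention in a comment.
PATTERN = re.compile(r"(category=|\s)DeprecationWarning[,)]")

lines = [
    'warnings.warn("old", DeprecationWarning)',           # flagged
    'warnings.warn("old", category=DeprecationWarning)',  # flagged
    '# DeprecationWarning is discouraged in this repo',   # not flagged
]
flagged = [ln for ln in lines if PATTERN.search(ln)]
print(len(flagged))  # 2
```

Because `pygrep` hooks fail on any match, this is how the repo enforces "use FutureWarning instead" without a custom script.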
rapidsai_public_repos
rapidsai_public_repos/cuvs/setup.cfg
# Copyright (c) 2022-2023, NVIDIA CORPORATION.

[flake8]
filename = *.py, *.pyx, *.pxd, *.pxi
exclude = __init__.py, *.egg, build, docs, .git
force-check = True
ignore =
    # line break before binary operator
    W503,
    # whitespace before :
    E203
per-file-ignores =
    # Rules ignored only in Cython:
    # E211: whitespace before '(' (used in multi-line imports)
    # E225: Missing whitespace around operators (breaks cython casting syntax like <int>)
    # E226: Missing whitespace around arithmetic operators (breaks cython pointer syntax like int*)
    # E227: Missing whitespace around bitwise or shift operator (Can also break casting syntax)
    # E275: Missing whitespace after keyword (Doesn't work with Cython except?)
    # E402: invalid syntax (works for Python, not Cython)
    # E999: invalid syntax (works for Python, not Cython)
    # W504: line break after binary operator (breaks lines that end with a pointer)
    *.pyx: E211, E225, E226, E227, E275, E402, E999, W504
    *.pxd: E211, E225, E226, E227, E275, E402, E999, W504
    *.pxi: E211, E225, E226, E227, E275, E402, E999, W504

[pydocstyle]
# Due to https://github.com/PyCQA/pydocstyle/issues/363, we must exclude rather
# than include using match-dir. Note that as discussed in
# https://stackoverflow.com/questions/65478393/how-to-filter-directories-using-the-match-dir-flag-for-pydocstyle,
# unlike the match option above this match-dir will have no effect when
# pydocstyle is invoked from pre-commit. Therefore this exclusion list must
# also be maintained in the pre-commit config file.
match-dir = ^(?!(ci|cpp|conda|docs|java|notebooks)).*$
# Allow missing docstrings for docutils
ignore-decorators = .*(docutils|doc_apply|copy_docstring).*
select =
    D201, D204, D206, D207, D208, D209, D210, D211, D214, D215, D300, D301,
    D302, D403, D405, D406, D407, D408, D409, D410, D411, D412, D414, D418
    # Would like to enable the following rules in the future:
    # D200, D202, D205, D400

[mypy]
ignore_missing_imports = True
# If we don't specify this, then mypy will check excluded files if
# they are imported by a checked file.
follow_imports = skip

[codespell]
# note: pre-commit passes explicit lists of files here, which this skip file list doesn't override -
# this is only to allow you to run codespell interactively
skip = ./.git,./.github,./cpp/build,.*egg-info.*,./.mypy_cache,.*_skbuild
# ignore short words, and typename parameters like OffsetT
ignore-regex = \b(.{1,4}|[A-Z]\w*T)\b
ignore-words-list = inout,unparseable,numer
builtin = clear
quiet-level = 3
0
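The pydocstyle `match-dir` value above relies on a regex negative lookahead to skip the listed top-level directories. A quick Python sketch of how that pattern behaves (note it is a prefix test, so a hypothetical directory named e.g. `docsrc` would also be skipped):

```python
import re

# match-dir pattern from the [pydocstyle] section: the negative lookahead
# (?!...) rejects any directory name beginning with one of the listed
# prefixes, and accepts everything else.
MATCH_DIR = re.compile(r"^(?!(ci|cpp|conda|docs|java|notebooks)).*$")

accepted = {name: bool(MATCH_DIR.match(name))
            for name in ["python", "cpp", "docs", "notebooks"]}
print(accepted)  # {'python': True, 'cpp': False, 'docs': False, 'notebooks': False}
```

As the config comments note, pydocstyle ignores `match-dir` when run under pre-commit, which is why the same exclusions must also appear in the pre-commit config.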
rapidsai_public_repos
rapidsai_public_repos/cuvs/pyproject.toml
[tool.black]
line-length = 79
target-version = ["py39"]
include = '\.py?$'
force-exclude = '''
/(
    thirdparty
  | \.eggs
  | \.git
  | \.hg
  | \.mypy_cache
  | \.tox
  | \.venv
  | _build
  | buck-out
  | build
  | dist
)/
'''

[tool.pydocstyle]
# Due to https://github.com/PyCQA/pydocstyle/issues/363, we must exclude rather
# than include using match-dir. Note that as discussed in
# https://stackoverflow.com/questions/65478393/how-to-filter-directories-using-the-match-dir-flag-for-pydocstyle,
# unlike the match option above this match-dir will have no effect when
# pydocstyle is invoked from pre-commit. Therefore this exclusion list must
# also be maintained in the pre-commit config file.
match-dir = "^(?!(ci|cpp|conda|docs)).*$"
select = "D201, D204, D206, D207, D208, D209, D210, D211, D214, D215, D300, D301, D302, D403, D405, D406, D407, D408, D409, D410, D411, D412, D414, D418"
# Would like to enable the following rules in the future:
# D200, D202, D205, D400

[tool.mypy]
ignore_missing_imports = true
# If we don't specify this, then mypy will check excluded files if
# they are imported by a checked file.
follow_imports = "skip"

[tool.codespell]
# note: pre-commit passes explicit lists of files here, which this skip file list doesn't override -
# this is only to allow you to run codespell interactively
skip = "./.git,./.github,./cpp/build,.*egg-info.*,./.mypy_cache,.*_skbuild"
# ignore short words, and typename parameters like OffsetT
ignore-regex = "\\b(.{1,4}|[A-Z]\\w*T)\\b"
ignore-words-list = "inout,numer"
builtin = "clear"
quiet-level = 3
0
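The codespell `ignore-regex` above covers two cases: words of one to four characters, and CamelCase typename parameters ending in `T` (like `OffsetT`). A rough Python illustration; codespell applies the regex as a search over file text, so this whole-word `fullmatch` check is only a sketch of the intent:

```python
import re

# ignore-regex from the [tool.codespell] section: one-to-four-character
# words, or identifiers starting with an uppercase letter and ending in T.
IGNORE = re.compile(r"\b(.{1,4}|[A-Z]\w*T)\b")

def ignored(word: str) -> bool:
    """True if the whole word matches the ignore pattern."""
    return IGNORE.fullmatch(word) is not None

for w in ["the", "OffsetT", "inout"]:
    print(w, ignored(w))  # the True / OffsetT True / inout False
```

Words like `inout` match neither branch, which is why the config also carries a separate `ignore-words-list`.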
rapidsai_public_repos
rapidsai_public_repos/cuvs/.flake8
# Copyright (c) 2022-2023, NVIDIA CORPORATION.

[flake8]
filename = *.py, *.pyx, *.pxd, *.pxi
exclude = __init__.py, *.egg, build, docs, .git
force-check = True
ignore =
    # line break before binary operator
    W503,
    # whitespace before :
    E203
per-file-ignores =
    # Rules ignored only in Cython:
    # E211: whitespace before '(' (used in multi-line imports)
    # E225: Missing whitespace around operators (breaks cython casting syntax like <int>)
    # E226: Missing whitespace around arithmetic operators (breaks cython pointer syntax like int*)
    # E227: Missing whitespace around bitwise or shift operator (Can also break casting syntax)
    # E275: Missing whitespace after keyword (Doesn't work with Cython except?)
    # E402: invalid syntax (works for Python, not Cython)
    # E999: invalid syntax (works for Python, not Cython)
    # W504: line break after binary operator (breaks lines that end with a pointer)
    *.pyx: E211, E225, E226, E227, E275, E402, E999, W504
    *.pxd: E211, E225, E226, E227, E275, E402, E999, W504
    *.pxi: E211, E225, E226, E227, E275, E402, E999, W504
0
rapidsai_public_repos
rapidsai_public_repos/cuvs/fetch_rapids.cmake
# =============================================================================
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
# in compliance with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing permissions and limitations under
# the License.
# =============================================================================
if(NOT EXISTS ${CMAKE_CURRENT_BINARY_DIR}/RAFT_RAPIDS.cmake)
  file(DOWNLOAD https://raw.githubusercontent.com/rapidsai/rapids-cmake/branch-24.02/RAPIDS.cmake
       ${CMAKE_CURRENT_BINARY_DIR}/RAFT_RAPIDS.cmake
  )
endif()
include(${CMAKE_CURRENT_BINARY_DIR}/RAFT_RAPIDS.cmake)
0
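The `fetch_rapids.cmake` file above uses a guarded download: the file is fetched only when it does not already exist in the build directory, so reconfiguring an existing build tree skips the network round-trip. A minimal Python sketch of the same download-once pattern (`fetch_once` and `fake_download` are names invented here for illustration):

```python
import tempfile
from pathlib import Path

def fetch_once(dest: Path, fetch) -> Path:
    """Write fetch() to dest only if dest does not already exist."""
    if not dest.exists():
        dest.write_text(fetch())
    return dest

calls = 0
def fake_download() -> str:
    # Stand-in for downloading RAPIDS.cmake; counts invocations.
    global calls
    calls += 1
    return "# downloaded content"

with tempfile.TemporaryDirectory() as d:
    target = Path(d) / "RAFT_RAPIDS.cmake"
    fetch_once(target, fake_download)
    fetch_once(target, fake_download)  # second "configure": no re-download
print(calls)  # 1
```

The trade-off is the same as in the CMake version: the cached copy is never refreshed, so picking up a new `RAPIDS.cmake` requires a clean build directory.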
rapidsai_public_repos
rapidsai_public_repos/cuvs/README.md
# <div align="left"><img src="https://rapids.ai/assets/images/rapids_logo.png" width="90px"/>&nbsp;cuVS: Vector Search on the GPU</div>
0
rapidsai_public_repos
rapidsai_public_repos/cuvs/build.sh
#!/bin/bash

# Copyright (c) 2020-2023, NVIDIA CORPORATION.

# cuvs build scripts

# This script is used to build the component(s) in this repo from
# source, and can be called with various options to customize the
# build as needed (see the help output for details)

# Abort script on first error
set -e

NUMARGS=$#
ARGS=$*

# NOTE: ensure all dir changes are relative to the location of this
# script, and that this script resides in the repo dir!
REPODIR=$(cd $(dirname $0); pwd)

VALIDARGS="clean libcuvs cuvs docs tests template bench-prims bench-ann --uninstall -v -g -n --compile-lib --compile-static-lib --allgpuarch --no-nvtx --cpu-only --show_depr_warn --incl-cache-stats --time -h"

HELP="$0 [<target> ...] [<flag> ...] [--cmake-args=\"<args>\"] [--cache-tool=<tool>] [--limit-tests=<targets>] [--limit-bench-prims=<targets>] [--limit-bench-ann=<targets>] [--build-metrics=<filename>]
 where <target> is:
   clean                - remove all existing build artifacts and configuration (start over)
   libcuvs              - build the cuvs C++ code only. Also builds the C-wrapper library around the C++ code.
   cuvs                 - build the cuvs Python package
   docs                 - build the documentation
   tests                - build the tests
   bench-prims          - build micro-benchmarks for primitives
   bench-ann            - build end-to-end ann benchmarks
   template             - build the example CUVS application template
 and <flag> is:
   -v                   - verbose build mode
   -g                   - build for debug
   -n                   - no install step
   --uninstall          - uninstall files for specified targets which were built and installed prior
   --compile-lib        - compile shared library for all components
   --compile-static-lib - compile static library for all components
   --cpu-only           - build CPU only components without CUDA. Applies to bench-ann only currently.
   --limit-tests        - semicolon-separated list of test executables to compile (e.g. NEIGHBORS_TEST;CLUSTER_TEST)
   --limit-bench-prims  - semicolon-separated list of prims benchmark executables to compute (e.g. NEIGHBORS_MICRO_BENCH;CLUSTER_MICRO_BENCH)
   --limit-bench-ann    - semicolon-separated list of ann benchmark executables to compute (e.g. HNSWLIB_ANN_BENCH;CUVS_IVF_PQ_ANN_BENCH)
   --allgpuarch         - build for all supported GPU architectures
   --no-nvtx            - disable nvtx (profiling markers), but allow enabling it in downstream projects
   --show_depr_warn     - show cmake deprecation warnings
   --build-metrics      - filename for generating build metrics report for libcuvs
   --incl-cache-stats   - include cache statistics in build metrics report
   --cmake-args=\\\"<args>\\\" - pass arbitrary list of CMake configuration options (escape all quotes in argument)
   --cache-tool=<tool>  - pass the build cache tool (eg: ccache, sccache, distcc) that will be used to speedup the build process.
   --time               - Enable nvcc compilation time logging into cpp/build/nvcc_compile_log.csv. Results can be interpreted with cpp/scripts/analyze_nvcc_log.py
   -h                   - print this text
 default action (no args) is to build libcuvs, tests, cuvs and cuvs-dask targets
"

LIBCUVS_BUILD_DIR=${LIBCUVS_BUILD_DIR:=${REPODIR}/cpp/build}
SPHINX_BUILD_DIR=${REPODIR}/docs
DOXYGEN_BUILD_DIR=${REPODIR}/cpp/doxygen
CUVS_DASK_BUILD_DIR=${REPODIR}/python/cuvs-dask/_skbuild
PYLIBCUVS_BUILD_DIR=${REPODIR}/python/cuvs/_skbuild
BUILD_DIRS="${LIBCUVS_BUILD_DIR} ${PYLIBCUVS_BUILD_DIR} ${CUVS_DASK_BUILD_DIR}"

# Set defaults for vars modified by flags to this script
CMAKE_LOG_LEVEL=""
VERBOSE_FLAG=""
BUILD_ALL_GPU_ARCH=0
BUILD_TESTS=OFF
BUILD_TYPE=Release
BUILD_MICRO_BENCH=OFF
BUILD_ANN_BENCH=OFF
BUILD_CPU_ONLY=OFF
COMPILE_LIBRARY=OFF
INSTALL_TARGET=install
BUILD_REPORT_METRICS=""
BUILD_REPORT_INCL_CACHE_STATS=OFF

TEST_TARGETS="CLUSTER_TEST;DISTANCE_TEST;NEIGHBORS_TEST;NEIGHBORS_ANN_CAGRA_TEST;NEIGHBORS_ANN_NN_DESCENT_TEST;NEIGHBORS_ANN_IVF_TEST"
BENCH_TARGETS="CLUSTER_BENCH;NEIGHBORS_BENCH;DISTANCE_BENCH"

CACHE_ARGS=""
NVTX=ON
LOG_COMPILE_TIME=OFF
CLEAN=0
UNINSTALL=0
DISABLE_DEPRECATION_WARNINGS=ON
CMAKE_TARGET=""

# Set defaults for vars that may not have been defined externally
INSTALL_PREFIX=${INSTALL_PREFIX:=${PREFIX:=${CONDA_PREFIX:=$LIBCUVS_BUILD_DIR/install}}}
PARALLEL_LEVEL=${PARALLEL_LEVEL:=`nproc`}
BUILD_ABI=${BUILD_ABI:=ON}

# Default to Ninja if generator is not specified
export CMAKE_GENERATOR="${CMAKE_GENERATOR:=Ninja}"

function hasArg {
    (( ${NUMARGS} != 0 )) && (echo " ${ARGS} " | grep -q " $1 ")
}

function cmakeArgs {
    # Check for multiple cmake args options
    if [[ $(echo $ARGS | { grep -Eo "\-\-cmake\-args" || true; } | wc -l ) -gt 1 ]]; then
        echo "Multiple --cmake-args options were provided, please provide only one: ${ARGS}"
        exit 1
    fi

    # Check for cmake args option
    if [[ -n $(echo $ARGS | { grep -E "\-\-cmake\-args" || true; } ) ]]; then
        # There are possible weird edge cases that may cause this regex filter to output nothing and fail silently
        # the true pipe will catch any weird edge cases that may happen and will cause the program to fall back
        # on the invalid option error
        EXTRA_CMAKE_ARGS=$(echo $ARGS | { grep -Eo "\-\-cmake\-args=\".+\"" || true; })
        if [[ -n ${EXTRA_CMAKE_ARGS} ]]; then
            # Remove the full EXTRA_CMAKE_ARGS argument from list of args so that it passes validArgs function
            ARGS=${ARGS//$EXTRA_CMAKE_ARGS/}
            # Filter the full argument down to just the extra string that will be added to cmake call
            EXTRA_CMAKE_ARGS=$(echo $EXTRA_CMAKE_ARGS | grep -Eo "\".+\"" | sed -e 's/^"//' -e 's/"$//')
        fi
    fi
}

function cacheTool {
    # Check for multiple cache options
    if [[ $(echo $ARGS | { grep -Eo "\-\-cache\-tool" || true; } | wc -l ) -gt 1 ]]; then
        echo "Multiple --cache-tool options were provided, please provide only one: ${ARGS}"
        exit 1
    fi

    # Check for cache tool option
    if [[ -n $(echo $ARGS | { grep -E "\-\-cache\-tool" || true; } ) ]]; then
        # There are possible weird edge cases that may cause this regex filter to output nothing and fail silently
        # the true pipe will catch any weird edge cases that may happen and will cause the program to fall back
        # on the invalid option error
        CACHE_TOOL=$(echo $ARGS | sed -e 's/.*--cache-tool=//' -e 's/ .*//')
        if [[ -n ${CACHE_TOOL} ]]; then
            # Remove the full CACHE_TOOL argument from list of args so that it passes validArgs function
            ARGS=${ARGS//--cache-tool=$CACHE_TOOL/}
            CACHE_ARGS="-DCMAKE_CUDA_COMPILER_LAUNCHER=${CACHE_TOOL} -DCMAKE_C_COMPILER_LAUNCHER=${CACHE_TOOL} -DCMAKE_CXX_COMPILER_LAUNCHER=${CACHE_TOOL}"
        fi
    fi
}

function limitTests {
    # Check for option to limit the set of test binaries to build
    if [[ -n $(echo $ARGS | { grep -E "\-\-limit\-tests" || true; } ) ]]; then
        # There are possible weird edge cases that may cause this regex filter to output nothing and fail silently
        # the true pipe will catch any weird edge cases that may happen and will cause the program to fall back
        # on the invalid option error
        LIMIT_TEST_TARGETS=$(echo $ARGS | sed -e 's/.*--limit-tests=//' -e 's/ .*//')
        if [[ -n ${LIMIT_TEST_TARGETS} ]]; then
            # Remove the full LIMIT_TEST_TARGETS argument from list of args so that it passes validArgs function
            ARGS=${ARGS//--limit-tests=$LIMIT_TEST_TARGETS/}
            TEST_TARGETS=${LIMIT_TEST_TARGETS}
            echo "Limiting tests to $TEST_TARGETS"
        fi
    fi
}

function limitBench {
    # Check for option to limit the set of benchmark binaries to build
    if [[ -n $(echo $ARGS | { grep -E "\-\-limit\-bench-prims" || true; } ) ]]; then
        # There are possible weird edge cases that may cause this regex filter to output nothing and fail silently
        # the true pipe will catch any weird edge cases that may happen and will cause the program to fall back
        # on the invalid option error
        LIMIT_MICRO_BENCH_TARGETS=$(echo $ARGS | sed -e 's/.*--limit-bench-prims=//' -e 's/ .*//')
        if [[ -n ${LIMIT_MICRO_BENCH_TARGETS} ]]; then
            # Remove the full LIMIT_MICRO_BENCH_TARGETS argument from list of args so that it passes validArgs function
            ARGS=${ARGS//--limit-bench-prims=$LIMIT_MICRO_BENCH_TARGETS/}
            MICRO_BENCH_TARGETS=${LIMIT_MICRO_BENCH_TARGETS}
        fi
    fi
}

function limitAnnBench {
    # Check for option to limit the set of ann benchmark binaries to build
    if [[ -n $(echo $ARGS | { grep -E "\-\-limit\-bench-ann" || true; } ) ]]; then
        # There are possible weird edge cases that may cause this regex filter to output nothing and fail silently
        # the true pipe will catch any weird edge cases that may happen and will cause the program to fall back
        # on the invalid option error
        LIMIT_ANN_BENCH_TARGETS=$(echo $ARGS | sed -e 's/.*--limit-bench-ann=//' -e 's/ .*//')
        if [[ -n ${LIMIT_ANN_BENCH_TARGETS} ]]; then
            # Remove the full LIMIT_ANN_BENCH_TARGETS argument from list of args so that it passes validArgs function
            ARGS=${ARGS//--limit-bench-ann=$LIMIT_ANN_BENCH_TARGETS/}
            ANN_BENCH_TARGETS=${LIMIT_ANN_BENCH_TARGETS}
        fi
    fi
}

function buildMetrics {
    # Check for multiple build-metrics options
    if [[ $(echo $ARGS | { grep -Eo "\-\-build\-metrics" || true; } | wc -l ) -gt 1 ]]; then
        echo "Multiple --build-metrics options were provided, please provide only one: ${ARGS}"
        exit 1
    fi

    # Check for build-metrics option
    if [[ -n $(echo $ARGS | { grep -E "\-\-build\-metrics" || true; } ) ]]; then
        # There are possible weird edge cases that may cause this regex filter to output nothing and fail silently
        # the true pipe will catch any weird edge cases that may happen and will cause the program to fall back
        # on the invalid option error
        BUILD_REPORT_METRICS=$(echo $ARGS | sed -e 's/.*--build-metrics=//' -e 's/ .*//')
        if [[ -n ${BUILD_REPORT_METRICS} ]]; then
            # Remove the full BUILD_REPORT_METRICS argument from list of args so that it passes validArgs function
            ARGS=${ARGS//--build-metrics=$BUILD_REPORT_METRICS/}
        fi
    fi
}

if hasArg -h || hasArg --help; then
    echo "${HELP}"
    exit 0
fi

# Check for valid usage
if (( ${NUMARGS} != 0 )); then
    cmakeArgs
    cacheTool
    limitTests
    limitBench
    limitAnnBench
    buildMetrics
    for a in ${ARGS}; do
        if ! (echo " ${VALIDARGS} " | grep -q " ${a} "); then
            echo "Invalid option: ${a}"
            exit 1
        fi
    done
fi

# This should run before build/install
if hasArg --uninstall; then
    UNINSTALL=1

    if hasArg cuvs || hasArg libcuvs || (( ${NUMARGS} == 1 )); then
        echo "Removing libcuvs files..."
        if [ -e ${LIBCUVS_BUILD_DIR}/install_manifest.txt ]; then
            xargs rm -fv < ${LIBCUVS_BUILD_DIR}/install_manifest.txt > /dev/null 2>&1
        fi
    fi

    if hasArg cuvs || (( ${NUMARGS} == 1 )); then
        echo "Uninstalling cuvs package..."
        if [ -e ${PYLIBCUVS_BUILD_DIR}/install_manifest.txt ]; then
            xargs rm -fv < ${PYLIBCUVS_BUILD_DIR}/install_manifest.txt > /dev/null 2>&1
        fi

        # Try to uninstall via pip if it is installed
        if [ -x "$(command -v pip)" ]; then
            echo "Using pip to uninstall cuvs"
            pip uninstall -y cuvs

        # Otherwise, try to uninstall through conda if that's where things are installed
        elif [ -x "$(command -v conda)" ] && [ "$INSTALL_PREFIX" == "$CONDA_PREFIX" ]; then
            echo "Using conda to uninstall cuvs"
            conda uninstall -y cuvs

        # Otherwise, fail
        else
            echo "Could not uninstall cuvs from pip or conda. cuvs package will need to be manually uninstalled"
        fi
    fi

    if hasArg cuvs-dask || (( ${NUMARGS} == 1 )); then
        echo "Uninstalling cuvs-dask package..."
        if [ -e ${CUVS_DASK_BUILD_DIR}/install_manifest.txt ]; then
            xargs rm -fv < ${CUVS_DASK_BUILD_DIR}/install_manifest.txt > /dev/null 2>&1
        fi

        # Try to uninstall via pip if it is installed
        if [ -x "$(command -v pip)" ]; then
            echo "Using pip to uninstall cuvs-dask"
            pip uninstall -y cuvs-dask

        # Otherwise, try to uninstall through conda if that's where things are installed
        elif [ -x "$(command -v conda)" ] && [ "$INSTALL_PREFIX" == "$CONDA_PREFIX" ]; then
            echo "Using conda to uninstall cuvs-dask"
            conda uninstall -y cuvs-dask

        # Otherwise, fail
        else
            echo "Could not uninstall cuvs-dask from pip or conda. cuvs-dask package will need to be manually uninstalled."
        fi
    fi
    exit 0
fi

# Process flags
if hasArg -n; then
    INSTALL_TARGET=""
fi
if hasArg -v; then
    VERBOSE_FLAG="-v"
    CMAKE_LOG_LEVEL="VERBOSE"
fi
if hasArg -g; then
    BUILD_TYPE=Debug
fi
if hasArg --allgpuarch; then
    BUILD_ALL_GPU_ARCH=1
fi

if hasArg --compile-lib || (( ${NUMARGS} == 0 )); then
    COMPILE_LIBRARY=ON
    CMAKE_TARGET="${CMAKE_TARGET};cuvs"
fi

#if hasArg --compile-static-lib || (( ${NUMARGS} == 0 )); then
#    COMPILE_LIBRARY=ON
#    CMAKE_TARGET="${CMAKE_TARGET};cuvs_lib_static"
#fi

if hasArg tests || (( ${NUMARGS} == 0 )); then
    BUILD_TESTS=ON
    CMAKE_TARGET="${CMAKE_TARGET};${TEST_TARGETS}"

    # Force compile library when needed test targets are specified
    if [[ $CMAKE_TARGET == *"CLUSTER_TEST"* || \
          $CMAKE_TARGET == *"DISTANCE_TEST"* || \
          $CMAKE_TARGET == *"NEIGHBORS_ANN_CAGRA_TEST"* || \
          $CMAKE_TARGET == *"NEIGHBORS_ANN_IVF_TEST"* || \
          $CMAKE_TARGET == *"NEIGHBORS_ANN_NN_DESCENT_TEST"* || \
          $CMAKE_TARGET == *"NEIGHBORS_TEST"* || \
          $CMAKE_TARGET == *"STATS_TEST"* ]]; then
        echo "-- Enabling compiled lib for gtests"
        COMPILE_LIBRARY=ON
    fi
fi

if hasArg bench-prims || (( ${NUMARGS} == 0 )); then
    BUILD_MICRO_BENCH=ON
    CMAKE_TARGET="${CMAKE_TARGET};${MICRO_BENCH_TARGETS}"

    # Force compile library when needed benchmark targets are specified
    if [[ $CMAKE_TARGET == *"CLUSTER_MICRO_BENCH"* || \
          $CMAKE_TARGET == *"NEIGHBORS_MICRO_BENCH"* ]]; then
        echo "-- Enabling compiled lib for benchmarks"
        COMPILE_LIBRARY=ON
    fi
fi

if hasArg bench-ann || (( ${NUMARGS} == 0 )); then
    BUILD_ANN_BENCH=ON
    CMAKE_TARGET="${CMAKE_TARGET};${ANN_BENCH_TARGETS}"
    if hasArg --cpu-only; then
        COMPILE_LIBRARY=OFF
        BUILD_CPU_ONLY=ON
        NVTX=OFF
    else
        COMPILE_LIBRARY=ON
    fi
fi

if hasArg --no-nvtx; then
    NVTX=OFF
fi
if hasArg --time; then
    echo "-- Logging compile times to cpp/build/nvcc_compile_log.csv"
    LOG_COMPILE_TIME=ON
fi
if hasArg --show_depr_warn; then
    DISABLE_DEPRECATION_WARNINGS=OFF
fi
if hasArg clean; then
    CLEAN=1
fi
if hasArg --incl-cache-stats; then
    BUILD_REPORT_INCL_CACHE_STATS=ON
fi

if [[ ${CMAKE_TARGET} == "" ]]; then
    CMAKE_TARGET="all"
fi

# Append `-DFIND_CUVS_CPP=ON` to EXTRA_CMAKE_ARGS unless a user specified the option.
SKBUILD_EXTRA_CMAKE_ARGS="${EXTRA_CMAKE_ARGS}"
if [[ "${EXTRA_CMAKE_ARGS}" != *"DFIND_CUVS_CPP"* ]]; then
    SKBUILD_EXTRA_CMAKE_ARGS="${SKBUILD_EXTRA_CMAKE_ARGS} -DFIND_CUVS_CPP=ON"
fi

# If clean given, run it prior to any other steps
if (( ${CLEAN} == 1 )); then
    # If the dirs to clean are mounted dirs in a container, the
    # contents should be removed but the mounted dirs will remain.
    # The find removes all contents but leaves the dirs, the rmdir
    # attempts to remove the dirs but can fail safely.
    for bd in ${BUILD_DIRS}; do
        if [ -d ${bd} ]; then
            find ${bd} -mindepth 1 -delete
            rmdir ${bd} || true
        fi
    done
fi

################################################################################
# Configure for building all C++ targets
if (( ${NUMARGS} == 0 )) || hasArg libcuvs || hasArg docs || hasArg tests || hasArg bench-prims || hasArg bench-ann; then
    if (( ${BUILD_ALL_GPU_ARCH} == 0 )); then
        CUVS_CMAKE_CUDA_ARCHITECTURES="NATIVE"
        echo "Building for the architecture of the GPU in the system..."
    else
        CUVS_CMAKE_CUDA_ARCHITECTURES="RAPIDS"
        echo "Building for *ALL* supported GPU architectures..."
fi # get the current count before the compile starts CACHE_TOOL=${CACHE_TOOL:-sccache} if [[ "$BUILD_REPORT_INCL_CACHE_STATS" == "ON" && -x "$(command -v ${CACHE_TOOL})" ]]; then "${CACHE_TOOL}" --zero-stats fi mkdir -p ${LIBCUVS_BUILD_DIR} cd ${LIBCUVS_BUILD_DIR} cmake -S ${REPODIR}/cpp -B ${LIBCUVS_BUILD_DIR} \ -DCMAKE_INSTALL_PREFIX=${INSTALL_PREFIX} \ -DCMAKE_CUDA_ARCHITECTURES=${CUVS_CMAKE_CUDA_ARCHITECTURES} \ -DCMAKE_BUILD_TYPE=${BUILD_TYPE} \ -DCUVS_COMPILE_LIBRARY=${COMPILE_LIBRARY} \ -DCUVS_NVTX=${NVTX} \ -DCUDA_LOG_COMPILE_TIME=${LOG_COMPILE_TIME} \ -DDISABLE_DEPRECATION_WARNINGS=${DISABLE_DEPRECATION_WARNINGS} \ -DBUILD_TESTS=${BUILD_TESTS} \ -DBUILD_MICRO_BENCH=${BUILD_MICRO_BENCH} \ -DBUILD_ANN_BENCH=${BUILD_ANN_BENCH} \ -DBUILD_CPU_ONLY=${BUILD_CPU_ONLY} \ -DCMAKE_MESSAGE_LOG_LEVEL=${CMAKE_LOG_LEVEL} \ ${CACHE_ARGS} \ ${EXTRA_CMAKE_ARGS} compile_start=$(date +%s) if [[ ${CMAKE_TARGET} != "" ]]; then echo "-- Compiling targets: ${CMAKE_TARGET}, verbose=${VERBOSE_FLAG}" if [[ ${INSTALL_TARGET} != "" ]]; then cmake --build "${LIBCUVS_BUILD_DIR}" ${VERBOSE_FLAG} -j${PARALLEL_LEVEL} --target ${CMAKE_TARGET} ${INSTALL_TARGET} else cmake --build "${LIBCUVS_BUILD_DIR}" ${VERBOSE_FLAG} -j${PARALLEL_LEVEL} --target ${CMAKE_TARGET} fi fi compile_end=$(date +%s) compile_total=$(( compile_end - compile_start )) if [[ -n "$BUILD_REPORT_METRICS" && -f "${LIBCUVS_BUILD_DIR}/.ninja_log" ]]; then if ! rapids-build-metrics-reporter.py 2> /dev/null && [ ! 
-f rapids-build-metrics-reporter.py ]; then echo "Downloading rapids-build-metrics-reporter.py" curl -sO https://raw.githubusercontent.com/rapidsai/build-metrics-reporter/v1/rapids-build-metrics-reporter.py fi echo "Formatting build metrics" MSG="" # get some sccache/ccache stats after the compile if [[ "$BUILD_REPORT_INCL_CACHE_STATS" == "ON" ]]; then if [[ ${CACHE_TOOL} == "sccache" && -x "$(command -v sccache)" ]]; then COMPILE_REQUESTS=$(sccache -s | grep "Compile requests \+ [0-9]\+$" | awk '{ print $NF }') CACHE_HITS=$(sccache -s | grep "Cache hits \+ [0-9]\+$" | awk '{ print $NF }') HIT_RATE=$(echo - | awk "{printf \"%.2f\n\", $CACHE_HITS / $COMPILE_REQUESTS * 100}") MSG="${MSG}<br/>cache hit rate ${HIT_RATE} %" elif [[ ${CACHE_TOOL} == "ccache" && -x "$(command -v ccache)" ]]; then CACHE_STATS_LINE=$(ccache -s | grep "Hits: \+ [0-9]\+ / [0-9]\+" | tail -n1) if [[ ! -z "$CACHE_STATS_LINE" ]]; then CACHE_HITS=$(echo "$CACHE_STATS_LINE" - | awk '{ print $2 }') COMPILE_REQUESTS=$(echo "$CACHE_STATS_LINE" - | awk '{ print $4 }') HIT_RATE=$(echo - | awk "{printf \"%.2f\n\", $CACHE_HITS / $COMPILE_REQUESTS * 100}") MSG="${MSG}<br/>cache hit rate ${HIT_RATE} %" fi fi fi MSG="${MSG}<br/>parallel setting: $PARALLEL_LEVEL" MSG="${MSG}<br/>parallel build time: $compile_total seconds" if [[ -f "${LIBCUVS_BUILD_DIR}/libcuvs.so" ]]; then LIBCUVS_FS=$(ls -lh ${LIBCUVS_BUILD_DIR}/libcuvs.so | awk '{print $5}') MSG="${MSG}<br/>libcuvs.so size: $LIBCUVS_FS" fi BMR_DIR=${RAPIDS_ARTIFACTS_DIR:-"${LIBCUVS_BUILD_DIR}"} echo "The HTML report can be found at [${BMR_DIR}/${BUILD_REPORT_METRICS}.html]. 
In CI, this report" echo "will also be uploaded to the appropriate subdirectory of https://downloads.rapids.ai/ci/cuvs/, and" echo "the entire URL can be found in \"conda-cpp-build\" runs under the task \"Upload additional artifacts\"" mkdir -p ${BMR_DIR} MSG_OUTFILE="$(mktemp)" echo "$MSG" > "${MSG_OUTFILE}" PATH=".:$PATH" python rapids-build-metrics-reporter.py ${LIBCUVS_BUILD_DIR}/.ninja_log --fmt html --msg "${MSG_OUTFILE}" > ${BMR_DIR}/${BUILD_REPORT_METRICS}.html cp ${LIBCUVS_BUILD_DIR}/.ninja_log ${BMR_DIR}/ninja.log fi fi # Build and (optionally) install the cuvs Python package if (( ${NUMARGS} == 0 )) || hasArg cuvs; then SKBUILD_CONFIGURE_OPTIONS="${SKBUILD_EXTRA_CMAKE_ARGS}" \ SKBUILD_BUILD_OPTIONS="-j${PARALLEL_LEVEL}" \ python -m pip install --no-build-isolation --no-deps ${REPODIR}/python/cuvs fi # Build and (optionally) install the cuvs-dask Python package if (( ${NUMARGS} == 0 )) || hasArg cuvs-dask; then SKBUILD_CONFIGURE_OPTIONS="${SKBUILD_EXTRA_CMAKE_ARGS}" \ SKBUILD_BUILD_OPTIONS="-j${PARALLEL_LEVEL}" \ python -m pip install --no-build-isolation --no-deps ${REPODIR}/python/cuvs-dask fi # Build and (optionally) install the cuvs-ann-bench Python package if (( ${NUMARGS} == 0 )) || hasArg bench-ann; then python -m pip install --no-build-isolation --no-deps ${REPODIR}/python/cuvs-ann-bench -vvv fi if hasArg docs; then set -x cd ${DOXYGEN_BUILD_DIR} doxygen Doxyfile cd ${SPHINX_BUILD_DIR} sphinx-build -b html source _html fi ################################################################################ # Initiate build for example CUVS application template (if needed) if hasArg template; then pushd ${REPODIR}/cpp/template ./build.sh popd fi
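The `--limit-bench-ann` and `--build-metrics` handling above relies on a small sed pipeline to pull a flag's value out of the argument string and then strips the consumed flag. A standalone sketch of that extraction technique (the `ARGS` value here is illustrative, not from the source):

```shell
# Illustrative argument string; the flag name mirrors limitAnnBench above.
ARGS="libcuvs --limit-bench-ann=CUVS_ANN_BENCH --allgpuarch"

# Strip everything up to and including the flag, then cut at the next
# space -- the same sed idiom used by limitAnnBench and buildMetrics.
LIMIT_ANN_BENCH_TARGETS=$(echo "$ARGS" | sed -e 's/.*--limit-bench-ann=//' -e 's/ .*//')
echo "targets: ${LIMIT_ANN_BENCH_TARGETS}"   # targets: CUVS_ANN_BENCH

# Remove the consumed flag so the remaining words pass the validArgs check.
ARGS=${ARGS//--limit-bench-ann=$LIMIT_ANN_BENCH_TARGETS/}
echo "remaining:${ARGS}"
```

Note that `sed -e 's/ .*//'` only works because the flag's value cannot contain spaces; anything after the first space is treated as the next argument.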
rapidsai_public_repos/cuvs/dependencies.yaml
# Dependency list for https://github.com/rapidsai/dependency-file-generator
files:
  all:
    output: conda
    matrix:
      cuda: ["11.8", "12.0"]
      arch: [x86_64, aarch64]
    includes:
      - build
      - build_cuvs
      - cudatoolkit
      - develop
      - checks
      - build_wheels
      - test_libcuvs
      - docs
      - run_cuvs
      - test_python_common
      - test_cuvs
      - cupy
  bench_ann:
    output: conda
    matrix:
      cuda: ["11.8", "12.0"]
      arch: [x86_64, aarch64]
    includes:
      - build
      - develop
      - cudatoolkit
      - nn_bench
      - nn_bench_python
  test_cpp:
    output: none
    includes:
      - cudatoolkit
      - test_libcuvs
  test_python:
    output: none
    includes:
      - cudatoolkit
      - py_version
      - test_python_common
      - test_cuvs
      - cupy
  checks:
    output: none
    includes:
      - checks
      - py_version
  docs:
    output: none
    includes:
      - test_cuvs
      - cupy
      - cudatoolkit
      - docs
      - py_version
  py_build_cuvs:
    output: pyproject
    pyproject_dir: python/cuvs
    extras:
      table: build-system
    includes:
      - build
      - build_cuvs
      - build_wheels
  py_run_cuvs:
    output: pyproject
    pyproject_dir: python/cuvs
    extras:
      table: project
    includes:
      - run_cuvs
  py_test_cuvs:
    output: pyproject
    pyproject_dir: python/cuvs
    extras:
      table: project.optional-dependencies
      key: test
    includes:
      - test_python_common
      - test_cuvs
      - cupy
  py_build_cuvs_bench:
    output: pyproject
    pyproject_dir: python/cuvs-bench
    extras:
      table: build-system
    includes:
      - build_wheels
  py_run_cuvs_bench:
    output: pyproject
    pyproject_dir: python/cuvs-bench
    extras:
      table: project
    includes:
      - nn_bench_python
channels:
  - rapidsai
  - rapidsai-nightly
  - dask/label/dev
  - conda-forge
  - nvidia
dependencies:
  build:
    common:
      - output_types: [conda, requirements, pyproject]
        packages:
          - &cmake_ver cmake>=3.26.4
          - cython>=3.0.0
          - ninja
          - scikit-build>=0.13.1
      - output_types: [conda]
        packages:
          - c-compiler
          - cxx-compiler
          - nccl>=2.9.9
    specific:
      - output_types: conda
        matrices:
          - matrix:
              arch: x86_64
            packages:
              - gcc_linux-64=11.*
              - sysroot_linux-64==2.17
          - matrix:
              arch: aarch64
            packages:
              - gcc_linux-aarch64=11.*
              - sysroot_linux-aarch64==2.17
      - output_types: conda
        matrices:
          - matrix: {cuda: "12.0"}
            packages: [cuda-version=12.0, cuda-nvcc]
          - matrix: {cuda: "11.8", arch: x86_64}
            packages: [nvcc_linux-64=11.8]
          - matrix: {cuda: "11.8", arch: aarch64}
            packages: [nvcc_linux-aarch64=11.8]
          - matrix: {cuda: "11.5", arch: x86_64}
            packages: [nvcc_linux-64=11.5]
          - matrix: {cuda: "11.5", arch: aarch64}
            packages: [nvcc_linux-aarch64=11.5]
          - matrix: {cuda: "11.4", arch: x86_64}
            packages: [nvcc_linux-64=11.4]
          - matrix: {cuda: "11.4", arch: aarch64}
            packages: [nvcc_linux-aarch64=11.4]
          - matrix: {cuda: "11.2", arch: x86_64}
            packages: [nvcc_linux-64=11.2]
          - matrix: {cuda: "11.2", arch: aarch64}
            packages: [nvcc_linux-aarch64=11.2]
  build_cuvs:
    common:
      - output_types: [conda]
        packages:
          - &rmm_conda rmm==24.2.*
      - output_types: requirements
        packages:
          # pip recognizes the index as a global option for the requirements.txt file
          # This index is needed for rmm-cu{11,12}.
          - --extra-index-url=https://pypi.nvidia.com
    specific:
      - output_types: [conda, requirements, pyproject]
        matrices:
          - matrix:
              cuda: "12.0"
            packages:
              - &cuda_python12 cuda-python>=12.0,<13.0a0
          - matrix: # All CUDA 11 versions
            packages:
              - &cuda_python11 cuda-python>=11.7.1,<12.0a0
      - output_types: [requirements, pyproject]
        matrices:
          - matrix: {cuda: "12.2"}
            packages: &build_cuvs_packages_cu12
              - &rmm_cu12 rmm-cu12==24.2.*
          - {matrix: {cuda: "12.1"}, packages: *build_cuvs_packages_cu12}
          - {matrix: {cuda: "12.0"}, packages: *build_cuvs_packages_cu12}
          - matrix: {cuda: "11.8"}
            packages: &build_cuvs_packages_cu11
              - &rmm_cu11 rmm-cu11==24.2.*
          - {matrix: {cuda: "11.5"}, packages: *build_cuvs_packages_cu11}
          - {matrix: {cuda: "11.4"}, packages: *build_cuvs_packages_cu11}
          - {matrix: {cuda: "11.2"}, packages: *build_cuvs_packages_cu11}
          - {matrix: null, packages: [*rmm_conda]}
  checks:
    common:
      - output_types: [conda, requirements]
        packages:
          - pre-commit
  develop:
    common:
      - output_types: conda
        packages:
          - clang==16.0.6
          - clang-tools=16.0.6
  nn_bench:
    common:
      - output_types: [conda, pyproject, requirements]
        packages:
          - hnswlib=0.7.0
          - nlohmann_json>=3.11.2
          - glog>=0.6.0
          - h5py>=3.8.0
          - benchmark>=1.8.2
          - openblas
          - *rmm_conda
  nn_bench_python:
    common:
      - output_types: [conda]
        packages:
          - matplotlib
          - pandas
          - pyyaml
  cudatoolkit:
    specific:
      - output_types: conda
        matrices:
          - matrix:
              cuda: "12.0"
            packages:
              - cuda-version=12.0
              - cuda-nvtx-dev
              - cuda-cudart-dev
              - cuda-profiler-api
              - libcublas-dev
              - libcurand-dev
              - libcusolver-dev
              - libcusparse-dev
          - matrix:
              cuda: "11.8"
            packages:
              - cuda-version=11.8
              - cudatoolkit
              - cuda-nvtx=11.8
              - cuda-profiler-api=11.8.86
              - libcublas-dev=11.11.3.6
              - libcublas=11.11.3.6
              - libcurand-dev=10.3.0.86
              - libcurand=10.3.0.86
              - libcusolver-dev=11.4.1.48
              - libcusolver=11.4.1.48
              - libcusparse-dev=11.7.5.86
              - libcusparse=11.7.5.86
          - matrix:
              cuda: "11.5"
            packages:
              - cuda-version=11.5
              - cudatoolkit
              - cuda-nvtx=11.5
              - cuda-profiler-api>=11.4.240,<=11.8.86 # use any `11.x` version since pkg is missing several CUDA/arch packages
              - libcublas-dev>=11.7.3.1,<=11.7.4.6
              - libcublas>=11.7.3.1,<=11.7.4.6
              - libcurand-dev>=10.2.6.48,<=10.2.7.107
              - libcurand>=10.2.6.48,<=10.2.7.107
              - libcusolver-dev>=11.2.1.48,<=11.3.2.107
              - libcusolver>=11.2.1.48,<=11.3.2.107
              - libcusparse-dev>=11.7.0.31,<=11.7.0.107
              - libcusparse>=11.7.0.31,<=11.7.0.107
          - matrix:
              cuda: "11.4"
            packages:
              - cuda-version=11.4
              - cudatoolkit
              - &cudanvtx114 cuda-nvtx=11.4
              - cuda-profiler-api>=11.4.240,<=11.8.86 # use any `11.x` version since pkg is missing several CUDA/arch packages
              - &libcublas_dev114 libcublas-dev>=11.5.2.43,<=11.6.5.2
              - &libcublas114 libcublas>=11.5.2.43,<=11.6.5.2
              - &libcurand_dev114 libcurand-dev>=10.2.5.43,<=10.2.5.120
              - &libcurand114 libcurand>=10.2.5.43,<=10.2.5.120
              - &libcusolver_dev114 libcusolver-dev>=11.2.0.43,<=11.2.0.120
              - &libcusolver114 libcusolver>=11.2.0.43,<=11.2.0.120
              - &libcusparse_dev114 libcusparse-dev>=11.6.0.43,<=11.6.0.120
              - &libcusparse114 libcusparse>=11.6.0.43,<=11.6.0.120
          - matrix:
              cuda: "11.2"
            packages:
              - cuda-version=11.2
              - cudatoolkit
              - *cudanvtx114
              - cuda-profiler-api>=11.4.240,<=11.8.86 # use any `11.x` version since pkg is missing several CUDA/arch packages
              # The NVIDIA channel doesn't publish pkgs older than 11.4 for these libs,
              # so 11.2 uses 11.4 packages (the oldest available).
              - *libcublas_dev114
              - *libcublas114
              - *libcurand_dev114
              - *libcurand114
              - *libcusolver_dev114
              - *libcusolver114
              - *libcusparse_dev114
              - *libcusparse114
  cupy:
    common:
      - output_types: conda
        packages:
          - cupy>=12.0.0
    specific:
      - output_types: [requirements, pyproject]
        matrices:
          # All CUDA 12 + x86_64 versions
          - matrix: {cuda: "12.2", arch: x86_64}
            packages: &cupy_packages_cu12_x86_64
              - &cupy_cu12_x86_64 cupy-cuda12x>=12.0.0
          - {matrix: {cuda: "12.1", arch: x86_64}, packages: *cupy_packages_cu12_x86_64}
          - {matrix: {cuda: "12.0", arch: x86_64}, packages: *cupy_packages_cu12_x86_64}
          # All CUDA 12 + aarch64 versions
          - matrix: {cuda: "12.2", arch: aarch64}
            packages: &cupy_packages_cu12_aarch64
              - &cupy_cu12_aarch64 cupy-cuda12x -f https://pip.cupy.dev/aarch64 # TODO: Verify that this works.
          - {matrix: {cuda: "12.1", arch: aarch64}, packages: *cupy_packages_cu12_aarch64}
          - {matrix: {cuda: "12.0", arch: aarch64}, packages: *cupy_packages_cu12_aarch64}
          # All CUDA 11 + x86_64 versions
          - matrix: {cuda: "11.8", arch: x86_64}
            packages: &cupy_packages_cu11_x86_64
              - cupy-cuda11x>=12.0.0
          - {matrix: {cuda: "11.5", arch: x86_64}, packages: *cupy_packages_cu11_x86_64}
          - {matrix: {cuda: "11.4", arch: x86_64}, packages: *cupy_packages_cu11_x86_64}
          - {matrix: {cuda: "11.2", arch: x86_64}, packages: *cupy_packages_cu11_x86_64}
          # All CUDA 11 + aarch64 versions
          - matrix: {cuda: "11.8", arch: aarch64}
            packages: &cupy_packages_cu11_aarch64
              - cupy-cuda11x -f https://pip.cupy.dev/aarch64 # TODO: Verify that this works.
          - {matrix: {cuda: "11.5", arch: aarch64}, packages: *cupy_packages_cu11_aarch64}
          - {matrix: {cuda: "11.4", arch: aarch64}, packages: *cupy_packages_cu11_aarch64}
          - {matrix: {cuda: "11.2", arch: aarch64}, packages: *cupy_packages_cu11_aarch64}
          - {matrix: null, packages: [cupy-cuda11x>=12.0.0]}
  test_libcuvs:
    common:
      - output_types: [conda]
        packages:
          - *cmake_ver
          - gtest>=1.13.0
          - gmock>=1.13.0
  docs:
    common:
      - output_types: [conda]
        packages:
          - breathe
          - doxygen>=1.8.20
          - graphviz
          - ipython
          - numpydoc
          - pydata-sphinx-theme
          - recommonmark
          - sphinx-copybutton
          - sphinx-markdown-tables
  build_wheels:
    common:
      - output_types: [requirements, pyproject]
        packages:
          - wheel
          - setuptools
  py_version:
    specific:
      - output_types: conda
        matrices:
          - matrix:
              py: "3.9"
            packages:
              - python=3.9
          - matrix:
              py: "3.10"
            packages:
              - python=3.10
          - matrix:
            packages:
              - python>=3.9,<3.11
  run_cuvs:
    common:
      - output_types: [conda, pyproject]
        packages:
          - &numpy numpy>=1.21
      - output_types: [conda]
        packages:
          - *rmm_conda
      - output_types: requirements
        packages:
          # pip recognizes the index as a global option for the requirements.txt file
          # This index is needed for cudf and rmm.
          - --extra-index-url=https://pypi.nvidia.com
    specific:
      - output_types: [conda, requirements, pyproject]
        matrices:
          - matrix:
              cuda: "12.0"
            packages:
              - *cuda_python12
          - matrix: # All CUDA 11 versions
            packages:
              - *cuda_python11
      - output_types: [requirements, pyproject]
        matrices:
          - matrix: {cuda: "12.2"}
            packages: &run_cuvs_packages_cu12
              - *rmm_cu12
          - {matrix: {cuda: "12.1"}, packages: *run_cuvs_packages_cu12}
          - {matrix: {cuda: "12.0"}, packages: *run_cuvs_packages_cu12}
          - matrix: {cuda: "11.8"}
            packages: &run_cuvs_packages_cu11
              - *rmm_cu11
          - {matrix: {cuda: "11.5"}, packages: *run_cuvs_packages_cu11}
          - {matrix: {cuda: "11.4"}, packages: *run_cuvs_packages_cu11}
          - {matrix: {cuda: "11.2"}, packages: *run_cuvs_packages_cu11}
          - {matrix: null, packages: [*rmm_conda]}
  test_python_common:
    common:
      - output_types: [conda, requirements, pyproject]
        packages:
          - pytest
          - pytest-cov
  test_cuvs:
    common:
      - output_types: [conda, requirements, pyproject]
        packages:
          - scikit-learn
          - scipy
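The `specific`/`matrices` blocks above are resolved by rapids-dependency-file-generator: for a given target matrix (e.g. `cuda=11.8, arch=x86_64`), the first entry whose keys all match wins, and an entry with `matrix: null` (or an empty matrix) acts as a catch-all fallback. A minimal sketch of that first-match-wins behavior (the `resolve_packages` helper is hypothetical, not part of the tool's API):

```python
def resolve_packages(matrices, target):
    """Return the package list of the first matrix entry whose keys all
    match `target`; an entry with matrix=None acts as a catch-all."""
    for entry in matrices:
        matrix = entry.get("matrix")
        if matrix is None or all(target.get(k) == v for k, v in matrix.items()):
            return entry["packages"]
    return []

# Mirrors the build_cuvs cuda-python matrices above.
matrices = [
    {"matrix": {"cuda": "12.0"}, "packages": ["cuda-python>=12.0,<13.0a0"]},
    {"matrix": None, "packages": ["cuda-python>=11.7.1,<12.0a0"]},
]

print(resolve_packages(matrices, {"cuda": "12.0"}))  # CUDA 12 entry matches
print(resolve_packages(matrices, {"cuda": "11.8"}))  # falls through to catch-all
```

This is also why the YAML anchors (`&build_cuvs_packages_cu12`, `*rmm_conda`, ...) matter: each CUDA minor version gets its own matrix entry, but they all alias one shared package list.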
rapidsai_public_repos/cuvs/LICENSE
Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. 
For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the 
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. 
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "{}" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright 2020 NVIDIA Corporation Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
rapidsai_public_repos/cuvs/VERSION
24.02.00
rapidsai_public_repos/cuvs/python/cuvs/setup.cfg
# Copyright (c) 2022-2023, NVIDIA CORPORATION.

[isort]
line_length=79
multi_line_output=3
include_trailing_comma=True
force_grid_wrap=0
combine_as_imports=True
order_by_type=True
known_dask=
    dask
    distributed
    dask_cuda
known_rapids=
    nvtext
    cudf
    cuml
    raft
    cugraph
    dask_cudf
    rmm
known_first_party=
    cuvs
default_section=THIRDPARTY
sections=FUTURE,STDLIB,THIRDPARTY,DASK,RAPIDS,FIRSTPARTY,LOCALFOLDER
skip=
    thirdparty
    .eggs
    .git
    .hg
    .mypy_cache
    .tox
    .venv
    _build
    buck-out
    build
    dist
    __init__.py
rapidsai_public_repos/cuvs/python/cuvs/pyproject.toml
# Copyright (c) 2022, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[build-system]
requires = [
    "cmake>=3.26.4",
    "cuda-python>=11.7.1,<12.0a0",
    "cython>=3.0.0",
    "ninja",
    "rmm==24.2.*",
    "scikit-build>=0.13.1",
    "setuptools",
    "wheel",
] # This list was generated by `rapids-dependency-file-generator`. To make changes, edit ../../dependencies.yaml and run `rapids-dependency-file-generator`.
build-backend = "setuptools.build_meta"

[project]
name = "cuvs"
dynamic = ["version"]
description = "cuVS: Vector Search on the GPU"
readme = { file = "README.md", content-type = "text/markdown" }
authors = [
    { name = "NVIDIA Corporation" },
]
license = { text = "Apache 2.0" }
requires-python = ">=3.9"
dependencies = [
    "cuda-python>=11.7.1,<12.0a0",
    "numpy>=1.21",
    "rmm==24.2.*",
] # This list was generated by `rapids-dependency-file-generator`. To make changes, edit ../../dependencies.yaml and run `rapids-dependency-file-generator`.
classifiers = [
    "Intended Audience :: Developers",
    "Programming Language :: Python",
    "Programming Language :: Python :: 3.9",
    "Programming Language :: Python :: 3.10",
]

[project.optional-dependencies]
test = [
    "cupy-cuda11x>=12.0.0",
    "pytest",
    "pytest-cov",
    "scikit-learn",
    "scipy",
] # This list was generated by `rapids-dependency-file-generator`. To make changes, edit ../../dependencies.yaml and run `rapids-dependency-file-generator`.

[project.urls]
Homepage = "https://github.com/rapidsai/cuvs"
Documentation = "https://docs.rapids.ai/api/cuvs/stable/"

[tool.setuptools]
license-files = ["LICENSE"]

[tool.setuptools.dynamic]
version = {file = "cuvs/VERSION"}

[tool.isort]
line_length = 79
multi_line_output = 3
include_trailing_comma = true
force_grid_wrap = 0
combine_as_imports = true
order_by_type = true
known_dask = [
    "dask",
    "distributed",
    "dask_cuda",
]
known_rapids = [
    "rmm",
]
known_first_party = [
    "cuvs",
]
default_section = "THIRDPARTY"
sections = [
    "FUTURE",
    "STDLIB",
    "THIRDPARTY",
    "DASK",
    "RAPIDS",
    "FIRSTPARTY",
    "LOCALFOLDER",
]
skip = [
    "thirdparty",
    ".eggs",
    ".git",
    ".hg",
    ".mypy_cache",
    ".tox",
    ".venv",
    "_build",
    "buck-out",
    "build",
    "dist",
    "__init__.py",
]
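The `[tool.setuptools.dynamic]` table above tells setuptools to read the package version from `cuvs/VERSION` at build time instead of hard-coding it. A minimal sketch of the equivalent file read (the temporary directory stands in for the package tree; the version string matches the repo's VERSION file):

```python
import tempfile
from pathlib import Path

# Stand-in for the package tree containing cuvs/VERSION.
pkg = Path(tempfile.mkdtemp())
(pkg / "VERSION").write_text("24.02.00\n")

# setuptools' `version = {file = ...}` directive reads the named file
# and strips surrounding whitespace; this mirrors that behavior.
version = (pkg / "VERSION").read_text().strip()
print(version)  # 24.02.00
```

Keeping the version in a single VERSION file lets the shell build scripts and the Python packaging metadata agree without a separate bump step.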
rapidsai_public_repos/cuvs/python/cuvs/CMakeLists.txt
# =============================================================================
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
# in compliance with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing permissions and limitations under
# the License.
# =============================================================================

cmake_minimum_required(VERSION 3.26.4 FATAL_ERROR)

include(../../fetch_rapids.cmake)

set(cuvs_version 24.02.00)

# We always need CUDA for cuvs because the cuvs dependency brings in a header-only cuco dependency
# that enables CUDA unconditionally.
include(rapids-cuda)
rapids_cuda_init_architectures(cuvs)

project(
  cuvs
  VERSION ${cuvs_version}
  LANGUAGES # TODO: Building Python extension modules via the python_extension_module requires the C
            # language to be enabled here. The test project that is built in scikit-build to verify
            # various linking options for the python library is hardcoded to build with C, so until
            # that is fixed we need to keep C.
            C CXX CUDA
)

option(FIND_CUVS_CPP "Search for existing CUVS C++ installations before defaulting to local files"
       ON
)
option(CUVS_BUILD_WHEELS "Whether this build is generating a Python wheel." OFF)

# If the user requested it we attempt to find CUVS.
if(FIND_CUVS_CPP)
  find_package(cuvs ${cuvs_version} REQUIRED COMPONENTS compiled)
  if(NOT TARGET cuvs::cuvs_lib)
    message(
      FATAL_ERROR
        "Building against a preexisting libcuvs library requires the compiled libcuvs to have been built!"
    )
  endif()
else()
  set(cuvs_FOUND OFF)
endif()

include(rapids-cython)

if(NOT cuvs_FOUND)
  set(BUILD_TESTS OFF)
  set(BUILD_PRIMS_BENCH OFF)
  set(BUILD_ANN_BENCH OFF)
  set(CUVS_COMPILE_LIBRARY ON)
  set(_exclude_from_all "")
  if(CUVS_BUILD_WHEELS)
    # Statically link dependencies if building wheels
    set(CUDA_STATIC_RUNTIME ON)
    # Don't install the cuvs C++ targets into wheels
    set(_exclude_from_all EXCLUDE_FROM_ALL)
  endif()

  add_subdirectory(../../cpp cuvs-cpp ${_exclude_from_all})

  # When building the C++ libraries from source we must copy libcuvs.so alongside the
  # pairwise_distance and random Cython libraries
  # TODO: when we have a single 'compiled' cuvs library, we shouldn't need this
  set(cython_lib_dir cuvs)
  install(TARGETS cuvs_lib DESTINATION ${cython_lib_dir})
endif()

rapids_cython_init()

add_subdirectory(cuvs/common)
add_subdirectory(cuvs/distance)
add_subdirectory(cuvs/matrix)
add_subdirectory(cuvs/neighbors)
add_subdirectory(cuvs/random)
add_subdirectory(cuvs/cluster)

if(DEFINED cython_lib_dir)
  rapids_cython_add_rpath_entries(TARGET cuvs PATHS "${cython_lib_dir}")
endif()
0
rapidsai_public_repos/cuvs/python
rapidsai_public_repos/cuvs/python/cuvs/setup.py
#
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from setuptools import find_packages
from skbuild import setup


def exclude_libcxx_symlink(cmake_manifest):
    return list(
        filter(
            lambda name: not ("include/rapids/libcxx/include" in name),
            cmake_manifest,
        )
    )


packages = find_packages(include=["cuvs*"])

setup(
    # Don't want libcxx getting pulled into wheel builds.
    cmake_process_manifest_hook=exclude_libcxx_symlink,
    packages=packages,
    package_data={key: ["VERSION", "*.pxd"] for key in packages},
    zip_safe=False,
)
0
rapidsai_public_repos/cuvs/python
rapidsai_public_repos/cuvs/python/cuvs/LICENSE
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.

"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:

(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.

You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.

Copyright 2020 NVIDIA Corporation

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
0
rapidsai_public_repos/cuvs/python
rapidsai_public_repos/cuvs/python/cuvs/.coveragerc
# Configuration file for Python coverage tests
[run]
source = cuvs
0
rapidsai_public_repos/cuvs/python/cuvs
rapidsai_public_repos/cuvs/python/cuvs/cuvs/_version.py
# Copyright (c) 2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import importlib.resources

__version__ = (
    importlib.resources.files("cuvs").joinpath("VERSION").read_text().strip()
)
__git_commit__ = ""
0
rapidsai_public_repos/cuvs/python/cuvs
rapidsai_public_repos/cuvs/python/cuvs/cuvs/__init__.py
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from cuvs._version import __git_commit__, __version__
0
rapidsai_public_repos/cuvs/python/cuvs
rapidsai_public_repos/cuvs/python/cuvs/cuvs/VERSION
24.02.00
0
rapidsai_public_repos/cuvs/python/cuvs/cuvs
rapidsai_public_repos/cuvs/python/cuvs/cuvs/distance/distance_type.pxd
#
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# cython: profile=False
# distutils: language = c++
# cython: embedsignature = True
# cython: language_level = 3


cdef extern from "raft/distance/distance_types.hpp" \
        namespace "raft::distance":

    ctypedef enum DistanceType:
        L2Expanded "raft::distance::DistanceType::L2Expanded"
        L2SqrtExpanded "raft::distance::DistanceType::L2SqrtExpanded"
        CosineExpanded "raft::distance::DistanceType::CosineExpanded"
        L1 "raft::distance::DistanceType::L1"
        L2Unexpanded "raft::distance::DistanceType::L2Unexpanded"
        L2SqrtUnexpanded "raft::distance::DistanceType::L2SqrtUnexpanded"
        InnerProduct "raft::distance::DistanceType::InnerProduct"
        Linf "raft::distance::DistanceType::Linf"
        Canberra "raft::distance::DistanceType::Canberra"
        LpUnexpanded "raft::distance::DistanceType::LpUnexpanded"
        CorrelationExpanded "raft::distance::DistanceType::CorrelationExpanded"
        JaccardExpanded "raft::distance::DistanceType::JaccardExpanded"
        HellingerExpanded "raft::distance::DistanceType::HellingerExpanded"
        Haversine "raft::distance::DistanceType::Haversine"
        BrayCurtis "raft::distance::DistanceType::BrayCurtis"
        JensenShannon "raft::distance::DistanceType::JensenShannon"
        HammingUnexpanded "raft::distance::DistanceType::HammingUnexpanded"
        KLDivergence "raft::distance::DistanceType::KLDivergence"
        RusselRaoExpanded "raft::distance::DistanceType::RusselRaoExpanded"
        DiceExpanded "raft::distance::DistanceType::DiceExpanded"
        Precomputed "raft::distance::DistanceType::Precomputed"
0
rapidsai_public_repos/cuvs/python/cuvs/cuvs
rapidsai_public_repos/cuvs/python/cuvs/cuvs/distance/pairwise_distance.pyx
#
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# cython: profile=False
# distutils: language = c++
# cython: embedsignature = True
# cython: language_level = 3

import numpy as np

from cython.operator cimport dereference as deref
from libc.stdint cimport uintptr_t
from libcpp cimport bool

from .distance_type cimport DistanceType

from pylibraft.common import Handle
from pylibraft.common.handle import auto_sync_handle

from pylibraft.common.handle cimport device_resources

from pylibraft.common import auto_convert_output, cai_wrapper, device_ndarray


cdef extern from "raft_runtime/distance/pairwise_distance.hpp" \
        namespace "raft::runtime::distance" nogil:

    cdef void pairwise_distance(const device_resources &handle,
                                float *x,
                                float *y,
                                float *dists,
                                int m,
                                int n,
                                int k,
                                DistanceType metric,
                                bool isRowMajor,
                                float metric_arg) except +

    cdef void pairwise_distance(const device_resources &handle,
                                double *x,
                                double *y,
                                double *dists,
                                int m,
                                int n,
                                int k,
                                DistanceType metric,
                                bool isRowMajor,
                                float metric_arg) except +

DISTANCE_TYPES = {
    "l2": DistanceType.L2SqrtExpanded,
    "sqeuclidean": DistanceType.L2Expanded,
    "euclidean": DistanceType.L2SqrtExpanded,
    "l1": DistanceType.L1,
    "cityblock": DistanceType.L1,
    "inner_product": DistanceType.InnerProduct,
    "chebyshev": DistanceType.Linf,
    "canberra": DistanceType.Canberra,
    "cosine": DistanceType.CosineExpanded,
    "lp": DistanceType.LpUnexpanded,
    "correlation": DistanceType.CorrelationExpanded,
    "jaccard": DistanceType.JaccardExpanded,
    "hellinger": DistanceType.HellingerExpanded,
    "braycurtis": DistanceType.BrayCurtis,
    "jensenshannon": DistanceType.JensenShannon,
    "hamming": DistanceType.HammingUnexpanded,
    "kl_divergence": DistanceType.KLDivergence,
    "minkowski": DistanceType.LpUnexpanded,
    "russellrao": DistanceType.RusselRaoExpanded,
    "dice": DistanceType.DiceExpanded,
}

SUPPORTED_DISTANCES = ["euclidean", "l1", "cityblock", "l2", "inner_product",
                       "chebyshev", "minkowski", "canberra", "kl_divergence",
                       "correlation", "russellrao", "hellinger", "lp",
                       "hamming", "jensenshannon", "cosine", "sqeuclidean"]


@auto_sync_handle
@auto_convert_output
def distance(X, Y, out=None, metric="euclidean", p=2.0, handle=None):
    """
    Compute pairwise distances between X and Y

    Valid values for metric: ["euclidean", "l2", "l1", "cityblock",
    "inner_product", "chebyshev", "canberra", "lp", "hellinger",
    "jensenshannon", "kl_divergence", "russellrao", "minkowski",
    "correlation", "cosine"]

    Parameters
    ----------
    X : CUDA array interface compliant matrix shape (m, k)
    Y : CUDA array interface compliant matrix shape (n, k)
    out : Optional writable CUDA array interface matrix shape (m, n)
    metric : string denoting the metric type (default="euclidean")
    p : metric parameter (currently used only for "minkowski")
    {handle_docstring}

    Returns
    -------
    raft.device_ndarray containing pairwise distances

    Examples
    --------
    To compute pairwise distances on cupy arrays:

    >>> import cupy as cp
    >>> from pylibraft.common import Handle
    >>> from pylibraft.distance import pairwise_distance
    >>> n_samples = 5000
    >>> n_features = 50
    >>> in1 = cp.random.random_sample((n_samples, n_features),
    ...                               dtype=cp.float32)
    >>> in2 = cp.random.random_sample((n_samples, n_features),
    ...                               dtype=cp.float32)

    A single RAFT handle can optionally be reused across
    pylibraft functions.

    >>> handle = Handle()
    >>> output = pairwise_distance(in1, in2, metric="euclidean", handle=handle)

    pylibraft functions are often asynchronous so the
    handle needs to be explicitly synchronized

    >>> handle.sync()

    It's also possible to write to a pre-allocated output array:

    >>> import cupy as cp
    >>> from pylibraft.common import Handle
    >>> from pylibraft.distance import pairwise_distance
    >>> n_samples = 5000
    >>> n_features = 50
    >>> in1 = cp.random.random_sample((n_samples, n_features),
    ...                               dtype=cp.float32)
    >>> in2 = cp.random.random_sample((n_samples, n_features),
    ...                               dtype=cp.float32)
    >>> output = cp.empty((n_samples, n_samples), dtype=cp.float32)

    A single RAFT handle can optionally be reused across
    pylibraft functions.

    >>> handle = Handle()
    >>> pairwise_distance(in1, in2, out=output,
    ...                   metric="euclidean", handle=handle)
    array(...)

    pylibraft functions are often asynchronous so the
    handle needs to be explicitly synchronized

    >>> handle.sync()
    """

    x_cai = cai_wrapper(X)
    y_cai = cai_wrapper(Y)

    m = x_cai.shape[0]
    n = y_cai.shape[0]

    x_dt = x_cai.dtype
    y_dt = y_cai.dtype

    if out is None:
        dists = device_ndarray.empty((m, n), dtype=y_dt)
    else:
        dists = out

    x_k = x_cai.shape[1]
    y_k = y_cai.shape[1]

    dists_cai = cai_wrapper(dists)

    if x_k != y_k:
        raise ValueError("Inputs must have same number of columns. "
                         "a=%s, b=%s" % (x_k, y_k))

    x_ptr = <uintptr_t>x_cai.data
    y_ptr = <uintptr_t>y_cai.data
    d_ptr = <uintptr_t>dists_cai.data

    handle = handle if handle is not None else Handle()
    cdef device_resources *h = <device_resources*><size_t>handle.getHandle()

    d_dt = dists_cai.dtype

    x_c_contiguous = x_cai.c_contiguous
    y_c_contiguous = y_cai.c_contiguous

    if x_c_contiguous != y_c_contiguous:
        raise ValueError("Inputs must have matching strides")

    if metric not in SUPPORTED_DISTANCES:
        raise ValueError("metric %s is not supported" % metric)

    cdef DistanceType distance_type = DISTANCE_TYPES[metric]

    if x_dt != y_dt or x_dt != d_dt:
        raise ValueError("Inputs must have the same dtypes")

    if x_dt == np.float32:
        pairwise_distance(deref(h),
                          <float*> x_ptr,
                          <float*> y_ptr,
                          <float*> d_ptr,
                          <int>m,
                          <int>n,
                          <int>x_k,
                          <DistanceType>distance_type,
                          <bool>x_c_contiguous,
                          <float>p)
    elif x_dt == np.float64:
        pairwise_distance(deref(h),
                          <double*> x_ptr,
                          <double*> y_ptr,
                          <double*> d_ptr,
                          <int>m,
                          <int>n,
                          <int>x_k,
                          <DistanceType>distance_type,
                          <bool>x_c_contiguous,
                          <float>p)
    else:
        raise ValueError("dtype %s not supported" % x_dt)

    return dists
0
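As a sanity check on what `distance` computes, the same pairwise-distance semantics can be written in plain NumPy (a sketch of the math only; it does not call RAFT, and the shapes are arbitrary): `dists[i, j]` holds the distance between row `i` of `X` and row `j` of `Y`, with `metric="sqeuclidean"` mapping to the squared L2 distance and `"euclidean"`/`"l2"` to its square root.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((4, 3), dtype=np.float32)   # (m, k)
Y = rng.random((5, 3), dtype=np.float32)   # (n, k)

# sqeuclidean: sq[i, j] = sum_d (X[i, d] - Y[j, d]) ** 2
sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
# euclidean ("l2") is the square root of the above
eu = np.sqrt(sq)

assert sq.shape == (4, 5)                  # (m, n) output, as in the docstring
assert np.allclose(eu ** 2, sq, atol=1e-6)
```

This also explains the `DISTANCE_TYPES` mapping above: both `"euclidean"` and `"l2"` resolve to `L2SqrtExpanded`, while `"sqeuclidean"` resolves to `L2Expanded` (no square root).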
rapidsai_public_repos/cuvs/python/cuvs/cuvs
rapidsai_public_repos/cuvs/python/cuvs/cuvs/distance/CMakeLists.txt
# =============================================================================
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
# in compliance with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing permissions and limitations under
# the License.
# =============================================================================

# Set the list of Cython files to build
set(cython_sources pairwise_distance.pyx fused_l2_nn.pyx)
set(linked_libraries cuvs::cuvs cuvs::compiled)

# Build all of the Cython targets
rapids_cython_create_modules(
  CXX
  SOURCE_FILES "${cython_sources}"
  LINKED_LIBRARIES "${linked_libraries}"
  ASSOCIATED_TARGETS cuvs
  MODULE_PREFIX distance_
)
0
rapidsai_public_repos/cuvs/python/cuvs/cuvs
rapidsai_public_repos/cuvs/python/cuvs/cuvs/distance/__init__.pxd
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
0
rapidsai_public_repos/cuvs/python/cuvs/cuvs
rapidsai_public_repos/cuvs/python/cuvs/cuvs/distance/fused_l2_nn.pyx
#
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# cython: profile=False
# distutils: language = c++
# cython: embedsignature = True
# cython: language_level = 3

import numpy as np

from cython.operator cimport dereference as deref
from libc.stdint cimport uintptr_t
from libcpp cimport bool

from .distance_type cimport DistanceType

from pylibraft.common import (
    Handle,
    auto_convert_output,
    cai_wrapper,
    device_ndarray,
)
from pylibraft.common.handle import auto_sync_handle

from pylibraft.common.handle cimport device_resources


cdef extern from "raft_runtime/distance/fused_l2_nn.hpp" \
        namespace "raft::runtime::distance" nogil:

    void fused_l2_nn_min_arg(
        const device_resources &handle,
        int* min,
        const float* x,
        const float* y,
        int m,
        int n,
        int k,
        bool sqrt) except +

    void fused_l2_nn_min_arg(
        const device_resources &handle,
        int* min,
        const double* x,
        const double* y,
        int m,
        int n,
        int k,
        bool sqrt) except +


@auto_sync_handle
@auto_convert_output
def fused_l2_nn_argmin(X, Y, out=None, sqrt=True, handle=None):
    """
    Compute the 1-nearest neighbors between X and Y using the L2 distance

    Parameters
    ----------
    X : CUDA array interface compliant matrix shape (m, k)
    Y : CUDA array interface compliant matrix shape (n, k)
    out : Optional writable CUDA array interface matrix shape (m, 1)
    {handle_docstring}

    Examples
    --------
    To compute the 1-nearest neighbors argmin:

    >>> import cupy as cp
    >>> from pylibraft.common import Handle
    >>> from pylibraft.distance import fused_l2_nn_argmin
    >>> n_samples = 5000
    >>> n_clusters = 5
    >>> n_features = 50
    >>> in1 = cp.random.random_sample((n_samples, n_features),
    ...                               dtype=cp.float32)
    >>> in2 = cp.random.random_sample((n_clusters, n_features),
    ...                               dtype=cp.float32)
    >>> # A single RAFT handle can optionally be reused across
    >>> # pylibraft functions.
    >>> handle = Handle()
    >>> output = fused_l2_nn_argmin(in1, in2, handle=handle)
    >>> # pylibraft functions are often asynchronous so the
    >>> # handle needs to be explicitly synchronized
    >>> handle.sync()

    The output can also be computed in-place on a preallocated array:

    >>> import cupy as cp
    >>> from pylibraft.common import Handle
    >>> from pylibraft.distance import fused_l2_nn_argmin
    >>> n_samples = 5000
    >>> n_clusters = 5
    >>> n_features = 50
    >>> in1 = cp.random.random_sample((n_samples, n_features),
    ...                               dtype=cp.float32)
    >>> in2 = cp.random.random_sample((n_clusters, n_features),
    ...                               dtype=cp.float32)
    >>> output = cp.empty((n_samples, 1), dtype=cp.int32)
    >>> # A single RAFT handle can optionally be reused across
    >>> # pylibraft functions.
    >>> handle = Handle()
    >>> fused_l2_nn_argmin(in1, in2, out=output, handle=handle)
    array(...)
    >>> # pylibraft functions are often asynchronous so the
    >>> # handle needs to be explicitly synchronized
    >>> handle.sync()
    """

    x_cai = cai_wrapper(X)
    y_cai = cai_wrapper(Y)

    x_dt = x_cai.dtype
    y_dt = y_cai.dtype

    m = x_cai.shape[0]
    n = y_cai.shape[0]

    if out is None:
        output = device_ndarray.empty((m,), dtype="int32")
    else:
        output = out

    output_cai = cai_wrapper(output)

    x_k = x_cai.shape[1]
    y_k = y_cai.shape[1]

    if x_k != y_k:
        raise ValueError("Inputs must have same number of columns. "
                         "a=%s, b=%s" % (x_k, y_k))

    x_ptr = <uintptr_t>x_cai.data
    y_ptr = <uintptr_t>y_cai.data
    d_ptr = <uintptr_t>output_cai.data

    handle = handle if handle is not None else Handle()
    cdef device_resources *h = <device_resources*><size_t>handle.getHandle()

    d_dt = output_cai.dtype

    x_c_contiguous = x_cai.c_contiguous
    y_c_contiguous = y_cai.c_contiguous

    if x_c_contiguous != y_c_contiguous:
        raise ValueError("Inputs must have matching strides")

    if x_dt != y_dt:
        raise ValueError("Inputs must have the same dtypes")
    if d_dt != np.int32:
        raise ValueError("Output array must be int32")

    if x_dt == np.float32:
        fused_l2_nn_min_arg(deref(h),
                            <int*> d_ptr,
                            <float*> x_ptr,
                            <float*> y_ptr,
                            <int>m,
                            <int>n,
                            <int>x_k,
                            <bool>sqrt)
    elif x_dt == np.float64:
        fused_l2_nn_min_arg(deref(h),
                            <int*> d_ptr,
                            <double*> x_ptr,
                            <double*> y_ptr,
                            <int>m,
                            <int>n,
                            <int>x_k,
                            <bool>sqrt)
    else:
        raise ValueError("dtype %s not supported" % x_dt)

    return output
0
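`fused_l2_nn_argmin` fuses the distance computation with the row-wise argmin so the full (m, n) distance matrix is never materialized. The unfused reference semantics are easy to state in NumPy (a sketch of the math, not the RAFT kernel): for each row of `X`, return the index of the closest row of `Y` under (squared) L2 distance.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((6, 3))  # (m, k) query points
Y = rng.random((4, 3))  # (n, k) candidate points

# Squared L2 distance between every pair of rows, then argmin over Y.
d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
argmin = d2.argmin(axis=1)

assert argmin.shape == (6,)
# sqrt is monotonic, so the sqrt=True/False flag cannot change the argmin.
assert np.array_equal(np.sqrt(d2).argmin(axis=1), argmin)
```

The monotonicity of the square root is why the `sqrt` flag affects only the internal distance values, never which index wins.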
rapidsai_public_repos/cuvs/python/cuvs/cuvs
rapidsai_public_repos/cuvs/python/cuvs/cuvs/distance/__init__.py
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from .fused_l2_nn import fused_l2_nn_argmin
from .pairwise_distance import DISTANCE_TYPES, distance as pairwise_distance

__all__ = ["fused_l2_nn_argmin", "pairwise_distance"]
0
rapidsai_public_repos/cuvs/python/cuvs/cuvs
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors/brute_force.pyx
#
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# cython: profile=False
# distutils: language = c++
# cython: embedsignature = True
# cython: language_level = 3

import numpy as np

from cython.operator cimport dereference as deref
from libcpp cimport bool, nullptr
from libcpp.vector cimport vector

from pylibraft.distance.distance_type cimport DistanceType

from pylibraft.common import (
    DeviceResources,
    auto_convert_output,
    cai_wrapper,
    device_ndarray,
)

from libc.stdint cimport int64_t, uintptr_t

from pylibraft.common.cpp.optional cimport optional
from pylibraft.common.handle cimport device_resources
from pylibraft.common.mdspan cimport get_dmv_float, get_dmv_int64

from pylibraft.common.handle import auto_sync_handle
from pylibraft.common.interruptible import cuda_interruptible

from pylibraft.distance.distance_type cimport DistanceType

# TODO: Centralize this
from pylibraft.distance.pairwise_distance import DISTANCE_TYPES

from pylibraft.neighbors.common import _check_input_array

from pylibraft.common.cpp.mdspan cimport (
    device_matrix_view,
    host_matrix_view,
    make_device_matrix_view,
    make_host_matrix_view,
    row_major,
)
from pylibraft.neighbors.cpp.brute_force cimport knn as c_knn


def _get_array_params(array_interface, check_dtype=None):
    dtype = np.dtype(array_interface["typestr"])
    if check_dtype is not None and dtype != check_dtype:
        raise TypeError("dtype %s not supported" % dtype)
    shape = array_interface["shape"]
    if len(shape) != 2:
        raise ValueError("Expected a 2D array, got %d D" % len(shape))
    data = array_interface["data"][0]
    return (shape, dtype, data)


@auto_sync_handle
@auto_convert_output
def knn(dataset, queries, k=None, indices=None, distances=None,
        metric="sqeuclidean", metric_arg=2.0,
        global_id_offset=0, handle=None):
    """
    Perform a brute-force nearest neighbors search.

    Parameters
    ----------
    dataset : array interface compliant matrix, row-major layout,
        shape (n_samples, dim). Supported dtype [float]
    queries : array interface compliant matrix, row-major layout,
        shape (n_queries, dim). Supported dtype [float]
    k : int
        Number of neighbors to search (k <= 2048). Optional if indices or
        distances arrays are given (in which case their second dimension
        is k).
    indices : Optional array interface compliant matrix shape
        (n_queries, k). If supplied, neighbor indices will be written
        here in-place. (default None)
        Supported dtype int64
    distances : Optional array interface compliant matrix shape
        (n_queries, k). If supplied, neighbor distances will be written
        here in-place. (default None)
        Supported dtype float

    {handle_docstring}

    Returns
    -------
    indices: array interface compliant object containing resulting indices
             shape (n_queries, k)

    distances: array interface compliant object containing resulting distances
               shape (n_queries, k)

    Examples
    --------
    >>> import cupy as cp
    >>> from pylibraft.common import DeviceResources
    >>> from pylibraft.neighbors.brute_force import knn
    >>> n_samples = 50000
    >>> n_features = 50
    >>> n_queries = 1000
    >>> dataset = cp.random.random_sample((n_samples, n_features),
    ...                                   dtype=cp.float32)
    >>> # Search using the built index
    >>> queries = cp.random.random_sample((n_queries, n_features),
    ...                                   dtype=cp.float32)
    >>> k = 40
    >>> distances, neighbors = knn(dataset, queries, k)
    >>> distances = cp.asarray(distances)
    >>> neighbors = cp.asarray(neighbors)
    """
    if handle is None:
        handle = DeviceResources()

    dataset_cai = cai_wrapper(dataset)
    queries_cai = cai_wrapper(queries)

    if k is None:
        if indices is not None:
            k = cai_wrapper(indices).shape[1]
        elif distances is not None:
            k = cai_wrapper(distances).shape[1]
        else:
            raise ValueError("Argument k must be specified if both indices "
                             "and distances arg is None")

    # we require c-contiguous (rowmajor) inputs here
    _check_input_array(dataset_cai, [np.dtype("float32")])
    _check_input_array(queries_cai, [np.dtype("float32")],
                       exp_cols=dataset_cai.shape[1])

    n_queries = queries_cai.shape[0]

    if indices is None:
        indices = device_ndarray.empty((n_queries, k), dtype='int64')

    if distances is None:
        distances = device_ndarray.empty((n_queries, k), dtype='float32')

    cdef DistanceType c_metric = DISTANCE_TYPES[metric]

    distances_cai = cai_wrapper(distances)
    indices_cai = cai_wrapper(indices)

    cdef optional[float] c_metric_arg = <float>metric_arg
    cdef optional[int64_t] c_global_offset = <int64_t>global_id_offset

    cdef device_resources* handle_ = \
        <device_resources*><size_t>handle.getHandle()

    if dataset_cai.dtype == np.float32:
        with cuda_interruptible():
            c_knn(deref(handle_),
                  get_dmv_float(dataset_cai, check_shape=True),
                  get_dmv_float(queries_cai, check_shape=True),
                  get_dmv_int64(indices_cai, check_shape=True),
                  get_dmv_float(distances_cai, check_shape=True),
                  c_metric,
                  c_metric_arg,
                  c_global_offset)
    else:
        raise TypeError("dtype %s not supported" % dataset_cai.dtype)

    return (distances, indices)
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors/CMakeLists.txt
# =============================================================================
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
# in compliance with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing permissions and limitations under
# the License.
# =============================================================================

# Set the list of Cython files to build
set(cython_sources common.pyx refine.pyx brute_force.pyx)
set(linked_libraries cuvs::cuvs cuvs::compiled)

# Build all of the Cython targets
rapids_cython_create_modules(
  CXX
  SOURCE_FILES "${cython_sources}"
  LINKED_LIBRARIES "${linked_libraries}" ASSOCIATED_TARGETS cuvs MODULE_PREFIX neighbors_
)

add_subdirectory(cagra)
add_subdirectory(ivf_flat)
add_subdirectory(ivf_pq)
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors/common.pxd
#
# Copyright (c) 2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# cython: profile=False
# distutils: language = c++
# cython: embedsignature = True
# cython: language_level = 3

from pylibraft.distance.distance_type cimport DistanceType


cdef _get_metric_string(DistanceType metric)
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors/__init__.pxd
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors/common.pyx
#
# Copyright (c) 2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# cython: profile=False
# distutils: language = c++
# cython: embedsignature = True
# cython: language_level = 3

import warnings

from pylibraft.distance.distance_type cimport DistanceType

SUPPORTED_DISTANCES = {
    "sqeuclidean": DistanceType.L2Expanded,
    "euclidean": DistanceType.L2SqrtExpanded,
    "inner_product": DistanceType.InnerProduct,
}


def _get_metric(metric):
    if metric not in SUPPORTED_DISTANCES:
        if metric == "l2_expanded":
            warnings.warn("Using l2_expanded as a metric name is deprecated,"
                          " use sqeuclidean instead", FutureWarning)
            return DistanceType.L2Expanded
        raise ValueError("metric %s is not supported" % metric)
    return SUPPORTED_DISTANCES[metric]


cdef _get_metric_string(DistanceType metric):
    return {DistanceType.L2Expanded: "sqeuclidean",
            DistanceType.InnerProduct: "inner_product",
            DistanceType.L2SqrtExpanded: "euclidean"}[metric]


def _check_input_array(cai, exp_dt, exp_rows=None, exp_cols=None):
    if cai.dtype not in exp_dt:
        raise TypeError("dtype %s not supported" % cai.dtype)

    if not cai.c_contiguous:
        raise ValueError("Row major input is expected")

    if exp_cols is not None and cai.shape[1] != exp_cols:
        raise ValueError("Incorrect number of columns, expected {} got {}"
                         .format(exp_cols, cai.shape[1]))

    if exp_rows is not None and cai.shape[0] != exp_rows:
        raise ValueError("Incorrect number of rows, expected {}, got {}"
                         .format(exp_rows, cai.shape[0]))
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors/refine.pyx
#
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# cython: profile=False
# distutils: language = c++
# cython: embedsignature = True
# cython: language_level = 3

import numpy as np

from cython.operator cimport dereference as deref
from libc.stdint cimport int8_t, int64_t, uint8_t, uintptr_t
from libcpp cimport bool, nullptr

from pylibraft.distance.distance_type cimport DistanceType

from pylibraft.common import (
    DeviceResources,
    auto_convert_output,
    cai_wrapper,
    device_ndarray,
)

from pylibraft.common.handle cimport device_resources

from pylibraft.common.handle import auto_sync_handle
from pylibraft.common.input_validation import is_c_contiguous
from pylibraft.common.interruptible import cuda_interruptible

from pylibraft.distance.distance_type cimport DistanceType

import pylibraft.neighbors.ivf_pq as ivf_pq
from pylibraft.neighbors.common import _get_metric

cimport pylibraft.neighbors.ivf_pq.cpp.c_ivf_pq as c_ivf_pq
from pylibraft.common.cpp.mdspan cimport (
    device_matrix_view,
    host_matrix_view,
    make_host_matrix_view,
    row_major,
)
from pylibraft.common.mdspan cimport (
    get_dmv_float,
    get_dmv_int8,
    get_dmv_int64,
    get_dmv_uint8,
)
from pylibraft.neighbors.common cimport _get_metric_string
from pylibraft.neighbors.ivf_pq.cpp.c_ivf_pq cimport (
    index_params,
    search_params,
)

# We omit the const qualifiers in the interface for refine, because cython
# has an issue parsing it (https://github.com/cython/cython/issues/4180).
cdef extern from "raft_runtime/neighbors/refine.hpp" \
        namespace "raft::runtime::neighbors" nogil:

    cdef void c_refine "raft::runtime::neighbors::refine" (
        const device_resources& handle,
        device_matrix_view[float, int64_t, row_major] dataset,
        device_matrix_view[float, int64_t, row_major] queries,
        device_matrix_view[int64_t, int64_t, row_major] candidates,
        device_matrix_view[int64_t, int64_t, row_major] indices,
        device_matrix_view[float, int64_t, row_major] distances,
        DistanceType metric) except +

    cdef void c_refine "raft::runtime::neighbors::refine" (
        const device_resources& handle,
        device_matrix_view[uint8_t, int64_t, row_major] dataset,
        device_matrix_view[uint8_t, int64_t, row_major] queries,
        device_matrix_view[int64_t, int64_t, row_major] candidates,
        device_matrix_view[int64_t, int64_t, row_major] indices,
        device_matrix_view[float, int64_t, row_major] distances,
        DistanceType metric) except +

    cdef void c_refine "raft::runtime::neighbors::refine" (
        const device_resources& handle,
        device_matrix_view[int8_t, int64_t, row_major] dataset,
        device_matrix_view[int8_t, int64_t, row_major] queries,
        device_matrix_view[int64_t, int64_t, row_major] candidates,
        device_matrix_view[int64_t, int64_t, row_major] indices,
        device_matrix_view[float, int64_t, row_major] distances,
        DistanceType metric) except +

    cdef void c_refine "raft::runtime::neighbors::refine" (
        const device_resources& handle,
        host_matrix_view[float, int64_t, row_major] dataset,
        host_matrix_view[float, int64_t, row_major] queries,
        host_matrix_view[int64_t, int64_t, row_major] candidates,
        host_matrix_view[int64_t, int64_t, row_major] indices,
        host_matrix_view[float, int64_t, row_major] distances,
        DistanceType metric) except +

    cdef void c_refine "raft::runtime::neighbors::refine" (
        const device_resources& handle,
        host_matrix_view[uint8_t, int64_t, row_major] dataset,
        host_matrix_view[uint8_t, int64_t, row_major] queries,
        host_matrix_view[int64_t, int64_t, row_major] candidates,
        host_matrix_view[int64_t, int64_t, row_major] indices,
        host_matrix_view[float, int64_t, row_major] distances,
        DistanceType metric) except +

    cdef void c_refine "raft::runtime::neighbors::refine" (
        const device_resources& handle,
        host_matrix_view[int8_t, int64_t, row_major] dataset,
        host_matrix_view[int8_t, int64_t, row_major] queries,
        host_matrix_view[int64_t, int64_t, row_major] candidates,
        host_matrix_view[int64_t, int64_t, row_major] indices,
        host_matrix_view[float, int64_t, row_major] distances,
        DistanceType metric) except +


def _get_array_params(array_interface, check_dtype=None):
    dtype = np.dtype(array_interface["typestr"])
    if check_dtype is not None and dtype != check_dtype:
        raise TypeError("dtype %s not supported" % dtype)
    shape = array_interface["shape"]
    if len(shape) != 2:
        raise ValueError("Expected a 2D array, got %d D" % len(shape))
    data = array_interface["data"][0]
    return (shape, dtype, data)


cdef host_matrix_view[float, int64_t, row_major] \
        get_host_matrix_view_float(array) except *:
    shape, dtype, data = _get_array_params(
        array.__array_interface__, check_dtype=np.float32)
    return make_host_matrix_view[float, int64_t, row_major](
        <float*><uintptr_t>data, shape[0], shape[1])


cdef host_matrix_view[int64_t, int64_t, row_major] \
        get_host_matrix_view_int64_t(array) except *:
    shape, dtype, data = _get_array_params(
        array.__array_interface__, check_dtype=np.int64)
    return make_host_matrix_view[int64_t, int64_t, row_major](
        <int64_t*><uintptr_t>data, shape[0], shape[1])


cdef host_matrix_view[uint8_t, int64_t, row_major] \
        get_host_matrix_view_uint8(array) except *:
    shape, dtype, data = _get_array_params(
        array.__array_interface__, check_dtype=np.uint8)
    return make_host_matrix_view[uint8_t, int64_t, row_major](
        <uint8_t*><uintptr_t>data, shape[0], shape[1])


cdef host_matrix_view[int8_t, int64_t, row_major] \
        get_host_matrix_view_int8(array) except *:
    shape, dtype, data = _get_array_params(
        array.__array_interface__, check_dtype=np.int8)
    return make_host_matrix_view[int8_t, int64_t, row_major](
        <int8_t*><uintptr_t>data, shape[0], shape[1])


@auto_sync_handle
@auto_convert_output
def refine(dataset, queries, candidates, k=None, indices=None, distances=None,
           metric="sqeuclidean", handle=None):
    """
    Refine nearest neighbor search.

    Refinement is an operation that follows an approximate NN search.
    The approximate search has already selected n_candidates neighbor
    candidates for each query. We narrow it down to k neighbors. For each
    query, we calculate the exact distance between the query and its
    n_candidates neighbor candidates, and select the k nearest ones.

    Input arrays can be either CUDA array interface compliant matrices or
    array interface compliant matrices in host memory. All arrays must be
    in the same memory space.

    Parameters
    ----------
    dataset : array interface compliant matrix, shape (n_samples, dim)
        Supported dtype [float, int8, uint8]
    queries : array interface compliant matrix, shape (n_queries, dim)
        Supported dtype [float, int8, uint8]
    candidates : array interface compliant matrix, shape (n_queries, k0)
        Supported dtype int64
    k : int
        Number of neighbors to search (k <= k0). Optional if indices or
        distances arrays are given (in which case their second dimension
        is k).
    indices : Optional array interface compliant matrix shape \
        (n_queries, k). If supplied, neighbor indices will be written
        here in-place. (default None). Supported dtype int64.
    distances : Optional array interface compliant matrix shape \
        (n_queries, k). If supplied, neighbor distances will be written
        here in-place. (default None). Supported dtype float.
    {handle_docstring}

    Returns
    -------
    distances : array interface compliant object containing the refined
        distances, shape (n_queries, k)

    indices : array interface compliant object containing the refined
        indices, shape (n_queries, k)

    Examples
    --------
    >>> import cupy as cp
    >>> from pylibraft.common import DeviceResources
    >>> from pylibraft.neighbors import ivf_pq, refine
    >>> n_samples = 50000
    >>> n_features = 50
    >>> n_queries = 1000
    >>> dataset = cp.random.random_sample((n_samples, n_features),
    ...                                   dtype=cp.float32)
    >>> handle = DeviceResources()
    >>> index_params = ivf_pq.IndexParams(n_lists=1024,
    ...                                   metric="sqeuclidean",
    ...                                   pq_dim=10)
    >>> index = ivf_pq.build(index_params, dataset, handle=handle)
    >>> # Search using the built index
    >>> queries = cp.random.random_sample((n_queries, n_features),
    ...                                   dtype=cp.float32)
    >>> k = 40
    >>> _, candidates = ivf_pq.search(ivf_pq.SearchParams(), index,
    ...                               queries, k, handle=handle)
    >>> k = 10
    >>> distances, neighbors = refine(dataset, queries, candidates, k,
    ...                               handle=handle)
    >>> distances = cp.asarray(distances)
    >>> neighbors = cp.asarray(neighbors)
    >>> # pylibraft functions are often asynchronous so the
    >>> # handle needs to be explicitly synchronized
    >>> handle.sync()
    """
    if handle is None:
        handle = DeviceResources()

    if hasattr(dataset, "__cuda_array_interface__"):
        return _refine_device(dataset, queries, candidates, k, indices,
                              distances, metric, handle)
    else:
        return _refine_host(dataset, queries, candidates, k, indices,
                            distances, metric, handle)


def _refine_device(dataset, queries, candidates, k, indices, distances,
                   metric, handle):
    cdef device_resources* handle_ = \
        <device_resources*><size_t>handle.getHandle()

    if k is None:
        if indices is not None:
            k = cai_wrapper(indices).shape[1]
        elif distances is not None:
            k = cai_wrapper(distances).shape[1]
        else:
            raise ValueError("Argument k must be specified if both indices "
                             "and distances arg is None")

    queries_cai = cai_wrapper(queries)
    dataset_cai = cai_wrapper(dataset)
    candidates_cai = cai_wrapper(candidates)
    n_queries = cai_wrapper(queries).shape[0]

    if indices is None:
        indices = device_ndarray.empty((n_queries, k), dtype='int64')

    if distances is None:
        distances = device_ndarray.empty((n_queries, k), dtype='float32')

    indices_cai = cai_wrapper(indices)
    distances_cai = cai_wrapper(distances)

    cdef DistanceType c_metric = _get_metric(metric)

    if dataset_cai.dtype == np.float32:
        with cuda_interruptible():
            c_refine(deref(handle_),
                     get_dmv_float(dataset_cai, check_shape=True),
                     get_dmv_float(queries_cai, check_shape=True),
                     get_dmv_int64(candidates_cai, check_shape=True),
                     get_dmv_int64(indices_cai, check_shape=True),
                     get_dmv_float(distances_cai, check_shape=True),
                     c_metric)
    elif dataset_cai.dtype == np.int8:
        with cuda_interruptible():
            c_refine(deref(handle_),
                     get_dmv_int8(dataset_cai, check_shape=True),
                     get_dmv_int8(queries_cai, check_shape=True),
                     get_dmv_int64(candidates_cai, check_shape=True),
                     get_dmv_int64(indices_cai, check_shape=True),
                     get_dmv_float(distances_cai, check_shape=True),
                     c_metric)
    elif dataset_cai.dtype == np.uint8:
        with cuda_interruptible():
            c_refine(deref(handle_),
                     get_dmv_uint8(dataset_cai, check_shape=True),
                     get_dmv_uint8(queries_cai, check_shape=True),
                     get_dmv_int64(candidates_cai, check_shape=True),
                     get_dmv_int64(indices_cai, check_shape=True),
                     get_dmv_float(distances_cai, check_shape=True),
                     c_metric)
    else:
        raise TypeError("dtype %s not supported" % dataset_cai.dtype)

    return (distances, indices)


def _refine_host(dataset, queries, candidates, k, indices, distances,
                 metric, handle):
    cdef device_resources* handle_ = \
        <device_resources*><size_t>handle.getHandle()

    if k is None:
        if indices is not None:
            k = indices.__array_interface__["shape"][1]
        elif distances is not None:
            k = distances.__array_interface__["shape"][1]
        else:
            raise ValueError("Argument k must be specified if both indices "
                             "and distances arg is None")

    n_queries = queries.__array_interface__["shape"][0]

    if indices is None:
        indices = np.empty((n_queries, k), dtype='int64')

    if distances is None:
        distances = np.empty((n_queries, k), dtype='float32')

    cdef DistanceType c_metric = _get_metric(metric)

    dtype = np.dtype(dataset.__array_interface__["typestr"])

    if dtype == np.float32:
        with cuda_interruptible():
            c_refine(deref(handle_),
                     get_host_matrix_view_float(dataset),
                     get_host_matrix_view_float(queries),
                     get_host_matrix_view_int64_t(candidates),
                     get_host_matrix_view_int64_t(indices),
                     get_host_matrix_view_float(distances),
                     c_metric)
    elif dtype == np.int8:
        with cuda_interruptible():
            c_refine(deref(handle_),
                     get_host_matrix_view_int8(dataset),
                     get_host_matrix_view_int8(queries),
                     get_host_matrix_view_int64_t(candidates),
                     get_host_matrix_view_int64_t(indices),
                     get_host_matrix_view_float(distances),
                     c_metric)
    elif dtype == np.uint8:
        with cuda_interruptible():
            c_refine(deref(handle_),
                     get_host_matrix_view_uint8(dataset),
                     get_host_matrix_view_uint8(queries),
                     get_host_matrix_view_int64_t(candidates),
                     get_host_matrix_view_int64_t(indices),
                     get_host_matrix_view_float(distances),
                     c_metric)
    else:
        raise TypeError("dtype %s not supported" % dtype)

    return (distances, indices)
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors/__init__.py
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from pylibraft.neighbors import brute_force, cagra, ivf_flat, ivf_pq

from .refine import refine

__all__ = ["common", "refine", "brute_force", "ivf_flat", "ivf_pq", "cagra"]
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors/ivf_pq/CMakeLists.txt
# =============================================================================
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
# in compliance with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing permissions and limitations under
# the License.
# =============================================================================

# Set the list of Cython files to build
set(cython_sources ivf_pq.pyx)
set(linked_libraries cuvs::cuvs cuvs::compiled)

# Build all of the Cython targets
rapids_cython_create_modules(
  CXX
  SOURCE_FILES "${cython_sources}"
  LINKED_LIBRARIES "${linked_libraries}" ASSOCIATED_TARGETS cuvs MODULE_PREFIX neighbors_ivfpq_
)
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors/ivf_pq/ivf_pq.pyx
# # Copyright (c) 2022-2023, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # cython: profile=False # distutils: language = c++ # cython: embedsignature = True # cython: language_level = 3 import warnings import numpy as np from cython.operator cimport dereference as deref from libc.stdint cimport int32_t, int64_t, uint32_t, uintptr_t from libcpp cimport bool, nullptr from libcpp.string cimport string from pylibraft.distance.distance_type cimport DistanceType from pylibraft.common import ( DeviceResources, ai_wrapper, auto_convert_output, cai_wrapper, device_ndarray, ) from pylibraft.common.cai_wrapper import wrap_array from pylibraft.common.interruptible import cuda_interruptible from pylibraft.common.handle cimport device_resources from pylibraft.common.handle import auto_sync_handle from pylibraft.common.input_validation import is_c_contiguous cimport pylibraft.neighbors.ivf_flat.cpp.c_ivf_flat as c_ivf_flat cimport pylibraft.neighbors.ivf_pq.cpp.c_ivf_pq as c_ivf_pq from pylibraft.common.optional cimport make_optional, optional from rmm._lib.memory_resource cimport ( DeviceMemoryResource, device_memory_resource, ) from pylibraft.neighbors.common import _check_input_array, _get_metric from pylibraft.common.cpp.mdspan cimport ( device_matrix_view, device_vector_view, make_device_vector_view, row_major, ) from pylibraft.common.mdspan cimport ( get_dmv_float, get_dmv_int8, get_dmv_int64, get_dmv_uint8, make_optional_view_int64, ) from 
pylibraft.neighbors.common cimport _get_metric_string from pylibraft.neighbors.ivf_pq.cpp.c_ivf_pq cimport ( index_params, search_params, ) cdef _get_codebook_string(c_ivf_pq.codebook_gen codebook): return {c_ivf_pq.codebook_gen.PER_SUBSPACE: "subspace", c_ivf_pq.codebook_gen.PER_CLUSTER: "cluster"}[codebook] cdef _map_dtype_np_to_cuda(dtype, supported_dtypes=None): if supported_dtypes is not None and dtype not in supported_dtypes: raise TypeError("Type %s is not supported" % str(dtype)) return {np.float32: c_ivf_pq.cudaDataType_t.CUDA_R_32F, np.float16: c_ivf_pq.cudaDataType_t.CUDA_R_16F, np.uint8: c_ivf_pq.cudaDataType_t.CUDA_R_8U}[dtype] cdef _get_dtype_string(dtype): return str({c_ivf_pq.cudaDataType_t.CUDA_R_32F: np.float32, c_ivf_pq.cudaDataType_t.CUDA_R_16F: np.float16, c_ivf_pq.cudaDataType_t.CUDA_R_8U: np.uint8}[dtype]) cdef class IndexParams: """ Parameters to build index for IVF-PQ nearest neighbor search Parameters ---------- n_list : int, default = 1024 The number of clusters used in the coarse quantizer. metric : string denoting the metric type, default="sqeuclidean" Valid values for metric: ["sqeuclidean", "inner_product", "euclidean"], where - sqeuclidean is the euclidean distance without the square root operation, i.e.: distance(a,b) = \\sum_i (a_i - b_i)^2, - euclidean is the euclidean distance - inner product distance is defined as distance(a, b) = \\sum_i a_i * b_i. kmeans_n_iters : int, default = 20 The number of iterations searching for kmeans centers during index building. kmeans_trainset_fraction : int, default = 0.5 If kmeans_trainset_fraction is less than 1, then the dataset is subsampled, and only n_samples * kmeans_trainset_fraction rows are used for training. pq_bits : int, default = 8 The bit length of the vector element after quantization. pq_dim : int, default = 0 The dimensionality of a the vector after product quantization. When zero, an optimal value is selected using a heuristic. Note pq_dim * pq_bits must be a multiple of 8. 
Hint: a smaller 'pq_dim' results in a smaller index size and better search performance, but lower recall. If 'pq_bits' is 8, 'pq_dim' can be set to any number, but multiple of 8 are desirable for good performance. If 'pq_bits' is not 8, 'pq_dim' should be a multiple of 8. For good performance, it is desirable that 'pq_dim' is a multiple of 32. Ideally, 'pq_dim' should be also a divisor of the dataset dim. codebook_kind : string, default = "subspace" Valid values ["subspace", "cluster"] force_random_rotation : bool, default = False Apply a random rotation matrix on the input data and queries even if `dim % pq_dim == 0`. Note: if `dim` is not multiple of `pq_dim`, a random rotation is always applied to the input data and queries to transform the working space from `dim` to `rot_dim`, which may be slightly larger than the original space and and is a multiple of `pq_dim` (`rot_dim % pq_dim == 0`). However, this transform is not necessary when `dim` is multiple of `pq_dim` (`dim == rot_dim`, hence no need in adding "extra" data columns / features). By default, if `dim == rot_dim`, the rotation transform is initialized with the identity matrix. When `force_random_rotation == True`, a random orthogonal transform matrix is generated regardless of the values of `dim` and `pq_dim`. add_data_on_build : bool, default = True After training the coarse and fine quantizers, we will populate the index with the dataset if add_data_on_build == True, otherwise the index is left empty, and the extend method can be used to add new vectors to the index. conservative_memory_allocation : bool, default = True By default, the algorithm allocates more space than necessary for individual clusters (`list_data`). This allows to amortize the cost of memory allocation and reduce the number of data copies during repeated calls to `extend` (extending the database). To disable this behavior and use as little GPU memory for the database as possible, set this flat to `True`. 
""" def __init__(self, *, n_lists=1024, metric="sqeuclidean", kmeans_n_iters=20, kmeans_trainset_fraction=0.5, pq_bits=8, pq_dim=0, codebook_kind="subspace", force_random_rotation=False, add_data_on_build=True, conservative_memory_allocation=False): self.params.n_lists = n_lists self.params.metric = _get_metric(metric) self.params.metric_arg = 0 self.params.kmeans_n_iters = kmeans_n_iters self.params.kmeans_trainset_fraction = kmeans_trainset_fraction self.params.pq_bits = pq_bits self.params.pq_dim = pq_dim if codebook_kind == "subspace": self.params.codebook_kind = c_ivf_pq.codebook_gen.PER_SUBSPACE elif codebook_kind == "cluster": self.params.codebook_kind = c_ivf_pq.codebook_gen.PER_CLUSTER else: raise ValueError("Incorrect codebook kind %s" % codebook_kind) self.params.force_random_rotation = force_random_rotation self.params.add_data_on_build = add_data_on_build self.params.conservative_memory_allocation = \ conservative_memory_allocation @property def n_lists(self): return self.params.n_lists @property def metric(self): return self.params.metric @property def kmeans_n_iters(self): return self.params.kmeans_n_iters @property def kmeans_trainset_fraction(self): return self.params.kmeans_trainset_fraction @property def pq_bits(self): return self.params.pq_bits @property def pq_dim(self): return self.params.pq_dim @property def codebook_kind(self): return self.params.codebook_kind @property def force_random_rotation(self): return self.params.force_random_rotation @property def add_data_on_build(self): return self.params.add_data_on_build @property def conservative_memory_allocation(self): return self.params.conservative_memory_allocation cdef class Index: # We store a pointer to the index because it dose not have a trivial # constructor. 
cdef c_ivf_pq.index[int64_t] * index cdef readonly bool trained def __cinit__(self, handle=None): self.trained = False self.index = NULL if handle is None: handle = DeviceResources() cdef device_resources* handle_ = \ <device_resources*><size_t>handle.getHandle() # We create a placeholder object. The actual parameter values do # not matter, it will be replaced with a built index object later. self.index = new c_ivf_pq.index[int64_t]( deref(handle_), _get_metric("sqeuclidean"), c_ivf_pq.codebook_gen.PER_SUBSPACE, <uint32_t>1, <uint32_t>4, <uint32_t>8, <uint32_t>0, <bool>False) def __dealloc__(self): if self.index is not NULL: del self.index def __repr__(self): m_str = "metric=" + _get_metric_string(self.index.metric()) code_str = "codebook=" + _get_codebook_string( self.index.codebook_kind()) attr_str = [attr + "=" + str(getattr(self, attr)) for attr in ["size", "dim", "pq_dim", "pq_bits", "n_lists", "rot_dim"]] attr_str = [m_str, code_str] + attr_str return "Index(type=IVF-PQ, " + (", ".join(attr_str)) + ")" @property def dim(self): return self.index[0].dim() @property def size(self): return self.index[0].size() @property def pq_dim(self): return self.index[0].pq_dim() @property def pq_len(self): return self.index[0].pq_len() @property def pq_bits(self): return self.index[0].pq_bits() @property def metric(self): return self.index[0].metric() @property def n_lists(self): return self.index[0].n_lists() @property def rot_dim(self): return self.index[0].rot_dim() @property def codebook_kind(self): return self.index[0].codebook_kind() @property def conservative_memory_allocation(self): return self.index[0].conservative_memory_allocation() @auto_sync_handle @auto_convert_output def build(IndexParams index_params, dataset, handle=None): """ Builds an IVF-PQ index that can be later used for nearest neighbor search. The input array can be either CUDA array interface compliant matrix or array interface compliant matrix in host memory. 
Parameters ---------- index_params : IndexParams object dataset : array interface compliant matrix shape (n_samples, dim) Supported dtype [float, int8, uint8] {handle_docstring} Returns ------- index: ivf_pq.Index Examples -------- >>> import cupy as cp >>> from pylibraft.common import DeviceResources >>> from pylibraft.neighbors import ivf_pq >>> n_samples = 50000 >>> n_features = 50 >>> n_queries = 1000 >>> dataset = cp.random.random_sample((n_samples, n_features), ... dtype=cp.float32) >>> handle = DeviceResources() >>> index_params = ivf_pq.IndexParams( ... n_lists=1024, ... metric="sqeuclidean", ... pq_dim=10) >>> index = ivf_pq.build(index_params, dataset, handle=handle) >>> # Search using the built index >>> queries = cp.random.random_sample((n_queries, n_features), ... dtype=cp.float32) >>> k = 10 >>> distances, neighbors = ivf_pq.search(ivf_pq.SearchParams(), index, ... queries, k, handle=handle) >>> distances = cp.asarray(distances) >>> neighbors = cp.asarray(neighbors) >>> # pylibraft functions are often asynchronous so the >>> # handle needs to be explicitly synchronized >>> handle.sync() """ dataset_cai = wrap_array(dataset) dataset_dt = dataset_cai.dtype _check_input_array(dataset_cai, [np.dtype('float32'), np.dtype('byte'), np.dtype('ubyte')]) cdef int64_t n_rows = dataset_cai.shape[0] cdef uint32_t dim = dataset_cai.shape[1] if handle is None: handle = DeviceResources() cdef device_resources* handle_ = \ <device_resources*><size_t>handle.getHandle() idx = Index() if dataset_dt == np.float32: with cuda_interruptible(): c_ivf_pq.build(deref(handle_), index_params.params, get_dmv_float(dataset_cai, check_shape=True), idx.index) idx.trained = True elif dataset_dt == np.byte: with cuda_interruptible(): c_ivf_pq.build(deref(handle_), index_params.params, get_dmv_int8(dataset_cai, check_shape=True), idx.index) idx.trained = True elif dataset_dt == np.ubyte: with cuda_interruptible(): c_ivf_pq.build(deref(handle_), index_params.params, 
get_dmv_uint8(dataset_cai, check_shape=True), idx.index) idx.trained = True else: raise TypeError("dtype %s not supported" % dataset_dt) return idx @auto_sync_handle @auto_convert_output def extend(Index index, new_vectors, new_indices, handle=None): """ Extend an existing index with new vectors. The input array can be either a CUDA array interface compliant matrix or an array interface compliant matrix in host memory. Parameters ---------- index : ivf_pq.Index Trained ivf_pq object. new_vectors : array interface compliant matrix shape (n_samples, dim) Supported dtype [float, int8, uint8] new_indices : array interface compliant vector shape (n_samples) Supported dtype [int64] {handle_docstring} Returns ------- index: ivf_pq.Index Examples -------- >>> import cupy as cp >>> from pylibraft.common import DeviceResources >>> from pylibraft.neighbors import ivf_pq >>> n_samples = 50000 >>> n_features = 50 >>> n_queries = 1000 >>> dataset = cp.random.random_sample((n_samples, n_features), ... dtype=cp.float32) >>> handle = DeviceResources() >>> index = ivf_pq.build(ivf_pq.IndexParams(), dataset, handle=handle) >>> n_rows = 100 >>> more_data = cp.random.random_sample((n_rows, n_features), ... dtype=cp.float32) >>> indices = index.size + cp.arange(n_rows, dtype=cp.int64) >>> index = ivf_pq.extend(index, more_data, indices) >>> # Search using the built index >>> queries = cp.random.random_sample((n_queries, n_features), ... dtype=cp.float32) >>> k = 10 >>> distances, neighbors = ivf_pq.search(ivf_pq.SearchParams(), ... index, queries, ... 
k, handle=handle) >>> # pylibraft functions are often asynchronous so the >>> # handle needs to be explicitly synchronized >>> handle.sync() >>> distances = cp.asarray(distances) >>> neighbors = cp.asarray(neighbors) """ if not index.trained: raise ValueError("Index needs to be built before calling extend.") if handle is None: handle = DeviceResources() cdef device_resources* handle_ = \ <device_resources*><size_t>handle.getHandle() vecs_cai = wrap_array(new_vectors) vecs_dt = vecs_cai.dtype cdef optional[device_vector_view[int64_t, int64_t]] new_indices_opt cdef int64_t n_rows = vecs_cai.shape[0] cdef uint32_t dim = vecs_cai.shape[1] _check_input_array(vecs_cai, [np.dtype('float32'), np.dtype('byte'), np.dtype('ubyte')], exp_cols=index.dim) idx_cai = wrap_array(new_indices) _check_input_array(idx_cai, [np.dtype('int64')], exp_rows=n_rows) if len(idx_cai.shape) != 1: raise ValueError("Indices array is expected to be 1D") if index.index.size() > 0: new_indices_opt = make_device_vector_view( <int64_t *><uintptr_t>idx_cai.data, <int64_t>idx_cai.shape[0]) if vecs_dt == np.float32: with cuda_interruptible(): c_ivf_pq.extend(deref(handle_), get_dmv_float(vecs_cai, check_shape=True), new_indices_opt, index.index) elif vecs_dt == np.int8: with cuda_interruptible(): c_ivf_pq.extend(deref(handle_), get_dmv_int8(vecs_cai, check_shape=True), new_indices_opt, index.index) elif vecs_dt == np.uint8: with cuda_interruptible(): c_ivf_pq.extend(deref(handle_), get_dmv_uint8(vecs_cai, check_shape=True), new_indices_opt, index.index) else: raise TypeError("query dtype %s not supported" % vecs_dt) return index cdef class SearchParams: """ IVF-PQ search parameters Parameters ---------- n_probes: int, default = 20 The number of coarse clusters to select for the fine search. lut_dtype: default = np.float32 Data type of look up table to be created dynamically at search time. 
The use of low-precision types reduces the amount of shared memory required at search time, so fast shared memory kernels can be used even for datasets with large dimensionality. Note that the recall is slightly degraded when a low-precision type is selected. Possible values [np.float32, np.float16, np.uint8] internal_distance_dtype: default = np.float32 Storage data type for distance/similarity computation. Possible values [np.float32, np.float16] """ def __init__(self, *, n_probes=20, lut_dtype=np.float32, internal_distance_dtype=np.float32): self.params.n_probes = n_probes self.params.lut_dtype = _map_dtype_np_to_cuda(lut_dtype) self.params.internal_distance_dtype = \ _map_dtype_np_to_cuda(internal_distance_dtype) # TODO(tfeher): enable if #926 adds this # self.params.shmem_carveout = self.shmem_carveout def __repr__(self): lut_str = "lut_dtype=" + _get_dtype_string(self.params.lut_dtype) idt_str = "internal_distance_dtype=" + \ _get_dtype_string(self.params.internal_distance_dtype) attr_str = [attr + "=" + str(getattr(self, attr)) for attr in ["n_probes"]] # TODO (tfeher) add "shmem_carveout" attr_str = attr_str + [lut_str, idt_str] return "SearchParams(type=IVF-PQ, " + (", ".join(attr_str)) + ")" @property def n_probes(self): return self.params.n_probes @property def lut_dtype(self): return self.params.lut_dtype @property def internal_distance_dtype(self): return self.params.internal_distance_dtype @auto_sync_handle @auto_convert_output def search(SearchParams search_params, Index index, queries, k, neighbors=None, distances=None, DeviceMemoryResource memory_resource=None, handle=None): """ Find the k nearest neighbors for each query. Parameters ---------- search_params : SearchParams index : Index Trained IVF-PQ index. queries : CUDA array interface compliant matrix shape (n_samples, dim) Supported dtype [float, int8, uint8] k : int The number of neighbors. neighbors : Optional CUDA array interface compliant matrix shape (n_queries, k), dtype int64_t. 
If supplied, neighbor indices will be written here in-place. (default None) distances : Optional CUDA array interface compliant matrix shape (n_queries, k) If supplied, the distances to the neighbors will be written here in-place. (default None) memory_resource : RMM DeviceMemoryResource object, optional This can be used to explicitly manage the temporary memory allocation during search. Passing a pooling allocator can reduce memory allocation overhead. If not specified, then the memory resource from the raft handle is used. {handle_docstring} Examples -------- >>> import cupy as cp >>> from pylibraft.common import DeviceResources >>> from pylibraft.neighbors import ivf_pq >>> n_samples = 50000 >>> n_features = 50 >>> n_queries = 1000 >>> dataset = cp.random.random_sample((n_samples, n_features), ... dtype=cp.float32) >>> # Build index >>> handle = DeviceResources() >>> index = ivf_pq.build(ivf_pq.IndexParams(), dataset, handle=handle) >>> # Search using the built index >>> queries = cp.random.random_sample((n_queries, n_features), ... dtype=cp.float32) >>> k = 10 >>> search_params = ivf_pq.SearchParams( ... n_probes=20, ... lut_dtype=cp.float16, ... internal_distance_dtype=cp.float32 ... ) >>> # Using a pooling allocator reduces overhead of temporary array >>> # creation during search. This is useful if multiple searches >>> # are performed with the same query size. >>> import rmm >>> mr = rmm.mr.PoolMemoryResource( ... rmm.mr.CudaMemoryResource(), ... initial_pool_size=2**29, ... maximum_pool_size=2**31 ... ) >>> distances, neighbors = ivf_pq.search(search_params, index, queries, ... k, memory_resource=mr, ... 
handle=handle) >>> # pylibraft functions are often asynchronous so the >>> # handle needs to be explicitly synchronized >>> handle.sync() >>> neighbors = cp.asarray(neighbors) >>> distances = cp.asarray(distances) """ if not index.trained: raise ValueError("Index needs to be built before calling search.") if handle is None: handle = DeviceResources() cdef device_resources* handle_ = \ <device_resources*><size_t>handle.getHandle() queries_cai = cai_wrapper(queries) queries_dt = queries_cai.dtype cdef uint32_t n_queries = queries_cai.shape[0] _check_input_array(queries_cai, [np.dtype('float32'), np.dtype('byte'), np.dtype('ubyte')], exp_cols=index.dim) if neighbors is None: neighbors = device_ndarray.empty((n_queries, k), dtype='int64') neighbors_cai = cai_wrapper(neighbors) _check_input_array(neighbors_cai, [np.dtype('int64')], exp_rows=n_queries, exp_cols=k) if distances is None: distances = device_ndarray.empty((n_queries, k), dtype='float32') distances_cai = cai_wrapper(distances) _check_input_array(distances_cai, [np.dtype('float32')], exp_rows=n_queries, exp_cols=k) cdef c_ivf_pq.search_params params = search_params.params cdef uintptr_t neighbors_ptr = neighbors_cai.data cdef uintptr_t distances_ptr = distances_cai.data # TODO(tfeher) pass mr_ptr arg cdef device_memory_resource* mr_ptr = <device_memory_resource*> nullptr if memory_resource is not None: mr_ptr = memory_resource.get_mr() if queries_dt == np.float32: with cuda_interruptible(): c_ivf_pq.search(deref(handle_), params, deref(index.index), get_dmv_float(queries_cai, check_shape=True), get_dmv_int64(neighbors_cai, check_shape=True), get_dmv_float(distances_cai, check_shape=True)) elif queries_dt == np.byte: with cuda_interruptible(): c_ivf_pq.search(deref(handle_), params, deref(index.index), get_dmv_int8(queries_cai, check_shape=True), get_dmv_int64(neighbors_cai, check_shape=True), get_dmv_float(distances_cai, check_shape=True)) elif queries_dt == np.ubyte: with cuda_interruptible(): 
c_ivf_pq.search(deref(handle_), params, deref(index.index), get_dmv_uint8(queries_cai, check_shape=True), get_dmv_int64(neighbors_cai, check_shape=True), get_dmv_float(distances_cai, check_shape=True)) else: raise ValueError("query dtype %s not supported" % queries_dt) return (distances, neighbors) @auto_sync_handle def save(filename, Index index, handle=None): """ Saves the index to a file. Saving / loading the index is experimental. The serialization format is subject to change. Parameters ---------- filename : string Name of the file. index : Index Trained IVF-PQ index. {handle_docstring} Examples -------- >>> import cupy as cp >>> from pylibraft.common import DeviceResources >>> from pylibraft.neighbors import ivf_pq >>> n_samples = 50000 >>> n_features = 50 >>> dataset = cp.random.random_sample((n_samples, n_features), ... dtype=cp.float32) >>> # Build index >>> handle = DeviceResources() >>> index = ivf_pq.build(ivf_pq.IndexParams(), dataset, handle=handle) >>> ivf_pq.save("my_index.bin", index, handle=handle) """ if not index.trained: raise ValueError("Index needs to be built before saving it.") if handle is None: handle = DeviceResources() cdef device_resources* handle_ = \ <device_resources*><size_t>handle.getHandle() cdef string c_filename = filename.encode('utf-8') c_ivf_pq.serialize(deref(handle_), c_filename, deref(index.index)) @auto_sync_handle def load(filename, handle=None): """ Loads index from a file. Saving / loading the index is experimental. The serialization format is subject to change, therefore loading an index saved with a previous version of raft is not guaranteed to work. Parameters ---------- filename : string Name of the file. {handle_docstring} Returns ------- index : Index Examples -------- >>> import cupy as cp >>> from pylibraft.common import DeviceResources >>> from pylibraft.neighbors import ivf_pq >>> n_samples = 50000 >>> n_features = 50 >>> dataset = cp.random.random_sample((n_samples, n_features), ... 
dtype=cp.float32) >>> # Build and save index >>> handle = DeviceResources() >>> index = ivf_pq.build(ivf_pq.IndexParams(), dataset, handle=handle) >>> ivf_pq.save("my_index.bin", index, handle=handle) >>> del index >>> n_queries = 100 >>> queries = cp.random.random_sample((n_queries, n_features), ... dtype=cp.float32) >>> handle = DeviceResources() >>> index = ivf_pq.load("my_index.bin", handle=handle) >>> distances, neighbors = ivf_pq.search(ivf_pq.SearchParams(), index, ... queries, k=10, handle=handle) """ if handle is None: handle = DeviceResources() cdef device_resources* handle_ = \ <device_resources*><size_t>handle.getHandle() cdef string c_filename = filename.encode('utf-8') index = Index() c_ivf_pq.deserialize(deref(handle_), c_filename, index.index) index.trained = True return index
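The `IndexParams` above expose product-quantization knobs (`pq_dim` sub-spaces, `pq_bits` per code, a per-`subspace` or per-`cluster` codebook). As a rough illustration of the compression idea only — not raft's actual implementation, which trains codebooks with GPU k-means and applies a rotation — a toy NumPy encoder with random (untrained) codebooks might look like this:

```python
import numpy as np

def pq_encode(vectors, pq_dim, pq_bits, rng=None):
    """Toy product quantizer: split each vector into pq_dim sub-vectors and
    encode each with its own small codebook ("subspace" codebook_kind).
    Codebooks are random here purely for illustration."""
    rng = np.random.default_rng(0) if rng is None else rng
    n, dim = vectors.shape
    assert dim % pq_dim == 0, "dim must be divisible by pq_dim in this sketch"
    sub_len = dim // pq_dim            # raft calls this pq_len
    n_codes = 1 << pq_bits             # codebook entries per subspace
    codebooks = rng.random((pq_dim, n_codes, sub_len), dtype=np.float32)
    subs = vectors.reshape(n, pq_dim, sub_len)
    # nearest codebook entry per sub-vector, squared-euclidean distance
    d = ((subs[:, :, None, :] - codebooks[None]) ** 2).sum(-1)
    codes = d.argmin(-1).astype(np.uint8)
    return codes, codebooks

vectors = np.random.default_rng(1).random((100, 32), dtype=np.float32)
# 32 float32 values (128 bytes) per vector become 8 codes of 4 bits each
codes, books = pq_encode(vectors, pq_dim=8, pq_bits=4)
```

This also makes the `pq_bits`/`pq_dim` trade-off in the docstring concrete: more bits or more sub-spaces mean larger codes but smaller quantization error.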
0
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors/ivf_pq/ivf_pq.pxd
# # Copyright (c) 2023, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # distutils: language = c++ cimport pylibraft.neighbors.ivf_pq.cpp.c_ivf_pq as c_ivf_pq cdef class IndexParams: cdef c_ivf_pq.index_params params cdef class SearchParams: cdef c_ivf_pq.search_params params
0
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors/ivf_pq/__init__.py
# Copyright (c) 2022-2023, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # from .ivf_pq import ( Index, IndexParams, SearchParams, build, extend, load, save, search, ) __all__ = [ "Index", "IndexParams", "SearchParams", "build", "extend", "load", "save", "search", ]
0
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors/ivf_pq
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors/ivf_pq/cpp/c_ivf_pq.pxd
# # Copyright (c) 2022-2023, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # cython: profile=False # distutils: language = c++ # cython: embedsignature = True # cython: language_level = 3 import numpy as np import pylibraft.common.handle from cython.operator cimport dereference as deref from libc.stdint cimport int8_t, int64_t, uint8_t, uint32_t, uintptr_t from libcpp cimport bool, nullptr from libcpp.string cimport string from pylibraft.common.cpp.mdspan cimport ( device_matrix_view, device_vector_view, row_major, ) from pylibraft.common.handle cimport device_resources from pylibraft.common.optional cimport optional from pylibraft.distance.distance_type cimport DistanceType from rmm._lib.memory_resource cimport device_memory_resource cdef extern from "library_types.h": ctypedef enum cudaDataType_t: CUDA_R_32F "CUDA_R_32F" # float CUDA_R_16F "CUDA_R_16F" # half # uint8 - used to refer to IVF-PQ's fp8 storage type CUDA_R_8U "CUDA_R_8U" cdef extern from "raft/neighbors/ann_types.hpp" \ namespace "raft::neighbors::ann" nogil: cdef cppclass ann_index "raft::neighbors::index": pass cdef cppclass ann_index_params "raft::spatial::knn::index_params": DistanceType metric float metric_arg bool add_data_on_build cdef cppclass ann_search_params "raft::spatial::knn::search_params": pass cdef extern from "raft/neighbors/ivf_pq_types.hpp" \ namespace "raft::neighbors::ivf_pq" nogil: ctypedef enum codebook_gen: PER_SUBSPACE 
"raft::neighbors::ivf_pq::codebook_gen::PER_SUBSPACE", PER_CLUSTER "raft::neighbors::ivf_pq::codebook_gen::PER_CLUSTER" cpdef cppclass index_params(ann_index_params): uint32_t n_lists uint32_t kmeans_n_iters double kmeans_trainset_fraction uint32_t pq_bits uint32_t pq_dim codebook_gen codebook_kind bool force_random_rotation bool conservative_memory_allocation cdef cppclass index[IdxT](ann_index): index(const device_resources& handle, DistanceType metric, codebook_gen codebook_kind, uint32_t n_lists, uint32_t dim, uint32_t pq_bits, uint32_t pq_dim, bool conservative_memory_allocation) IdxT size() uint32_t dim() uint32_t pq_dim() uint32_t pq_len() uint32_t pq_bits() DistanceType metric() uint32_t n_lists() uint32_t rot_dim() codebook_gen codebook_kind() bool conservative_memory_allocation() cpdef cppclass search_params(ann_search_params): uint32_t n_probes cudaDataType_t lut_dtype cudaDataType_t internal_distance_dtype cdef extern from "raft_runtime/neighbors/ivf_pq.hpp" \ namespace "raft::runtime::neighbors::ivf_pq" nogil: cdef void build( const device_resources& handle, const index_params& params, device_matrix_view[float, int64_t, row_major] dataset, index[int64_t]* index) except + cdef void build( const device_resources& handle, const index_params& params, device_matrix_view[int8_t, int64_t, row_major] dataset, index[int64_t]* index) except + cdef void build( const device_resources& handle, const index_params& params, device_matrix_view[uint8_t, int64_t, row_major] dataset, index[int64_t]* index) except + cdef void extend( const device_resources& handle, device_matrix_view[float, int64_t, row_major] new_vectors, optional[device_vector_view[int64_t, int64_t]] new_indices, index[int64_t]* index) except + cdef void extend( const device_resources& handle, device_matrix_view[int8_t, int64_t, row_major] new_vectors, optional[device_vector_view[int64_t, int64_t]] new_indices, index[int64_t]* index) except + cdef void extend( const device_resources& handle, 
device_matrix_view[uint8_t, int64_t, row_major] new_vectors, optional[device_vector_view[int64_t, int64_t]] new_indices, index[int64_t]* index) except + cdef void search( const device_resources& handle, const search_params& params, const index[int64_t]& index, device_matrix_view[float, int64_t, row_major] queries, device_matrix_view[int64_t, int64_t, row_major] neighbors, device_matrix_view[float, int64_t, row_major] distances) except + cdef void search( const device_resources& handle, const search_params& params, const index[int64_t]& index, device_matrix_view[int8_t, int64_t, row_major] queries, device_matrix_view[int64_t, int64_t, row_major] neighbors, device_matrix_view[float, int64_t, row_major] distances) except + cdef void search( const device_resources& handle, const search_params& params, const index[int64_t]& index, device_matrix_view[uint8_t, int64_t, row_major] queries, device_matrix_view[int64_t, int64_t, row_major] neighbors, device_matrix_view[float, int64_t, row_major] distances) except + cdef void serialize(const device_resources& handle, const string& filename, const index[int64_t]& index) except + cdef void deserialize(const device_resources& handle, const string& filename, index[int64_t]* index) except +
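The `lut_dtype` field declared in `search_params` above controls the precision of the per-query lookup table built at search time. As a back-of-envelope estimate — the exact raft memory layout differs — the table holds one distance entry per codebook code per subspace, so a smaller element type shrinks the shared-memory footprint proportionally:

```python
def lut_bytes(pq_dim, pq_bits, dtype_size):
    # one entry per (subspace, codebook code) pair; rough estimate only
    return pq_dim * (1 << pq_bits) * dtype_size

# e.g. pq_dim=64, pq_bits=8: an fp32 LUT needs 64 KiB, fp8 only 16 KiB
fp32 = lut_bytes(64, 8, 4)
fp8 = lut_bytes(64, 8, 1)
```

This is why `CUDA_R_8U` (raft's fp8 storage type) is offered alongside `CUDA_R_32F` and `CUDA_R_16F`: it lets the fast shared-memory kernels fit even for high-dimensional data, at a small cost in recall.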
0
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors/ivf_pq
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors/ivf_pq/cpp/__init__.py
# Copyright (c) 2022-2023, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #
0
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors/cagra/CMakeLists.txt
# ============================================================================= # Copyright (c) 2023, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except # in compliance with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software distributed under the License # is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express # or implied. See the License for the specific language governing permissions and limitations under # the License. # ============================================================================= # Set the list of Cython files to build set(cython_sources cagra.pyx) set(linked_libraries raft::raft raft::compiled) # Build all of the Cython targets rapids_cython_create_modules( CXX SOURCE_FILES "${cython_sources}" LINKED_LIBRARIES "${linked_libraries}" ASSOCIATED_TARGETS raft MODULE_PREFIX neighbors_cagra_ )
0
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors/cagra/cagra.pyx
# # Copyright (c) 2023, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # cython: profile=False # distutils: language = c++ # cython: embedsignature = True # cython: language_level = 3 import warnings import numpy as np from cython.operator cimport dereference as deref from libc.stdint cimport ( int8_t, int32_t, int64_t, uint8_t, uint32_t, uint64_t, uintptr_t, ) from libcpp cimport bool, nullptr from libcpp.string cimport string from pylibraft.distance.distance_type cimport DistanceType from pylibraft.common import ( DeviceResources, ai_wrapper, auto_convert_output, cai_wrapper, device_ndarray, ) from pylibraft.common.cai_wrapper import wrap_array from pylibraft.common.interruptible import cuda_interruptible from pylibraft.common.handle cimport device_resources from pylibraft.common.handle import auto_sync_handle from pylibraft.common.input_validation import is_c_contiguous cimport pylibraft.neighbors.cagra.cpp.c_cagra as c_cagra from pylibraft.common.optional cimport make_optional, optional from rmm._lib.memory_resource cimport ( DeviceMemoryResource, device_memory_resource, ) from pylibraft.neighbors.common import _check_input_array, _get_metric from pylibraft.common.cpp.mdspan cimport ( device_matrix_view, device_vector_view, make_device_vector_view, row_major, ) from pylibraft.common.mdspan cimport ( get_const_dmv_float, get_const_dmv_int8, get_const_dmv_uint8, get_const_hmv_float, get_const_hmv_int8, get_const_hmv_uint8, get_dmv_float, get_dmv_int8, 
get_dmv_int64, get_dmv_uint8, get_dmv_uint32, get_hmv_float, get_hmv_int8, get_hmv_int64, get_hmv_uint8, get_hmv_uint32, make_optional_view_int64, ) from pylibraft.neighbors.common cimport _get_metric_string cdef class IndexParams: """ Parameters to build index for CAGRA nearest neighbor search Parameters ---------- metric : string denoting the metric type, default="sqeuclidean" Valid values for metric: ["sqeuclidean"], where - sqeuclidean is the euclidean distance without the square root operation, i.e.: distance(a,b) = \\sum_i (a_i - b_i)^2 intermediate_graph_degree : int, default = 128 graph_degree : int, default = 64 build_algo: string denoting the graph building algorithm to use, default = "ivf_pq" Valid values for algo: ["ivf_pq", "nn_descent"], where - ivf_pq will use the IVF-PQ algorithm for building the knn graph - nn_descent (experimental) will use the NN-Descent algorithm for building the knn graph. It is expected to be generally faster than ivf_pq. """ cdef c_cagra.index_params params def __init__(self, *, metric="sqeuclidean", intermediate_graph_degree=128, graph_degree=64, build_algo="ivf_pq"): self.params.metric = _get_metric(metric) self.params.metric_arg = 0 self.params.intermediate_graph_degree = intermediate_graph_degree self.params.graph_degree = graph_degree if build_algo == "ivf_pq": self.params.build_algo = c_cagra.graph_build_algo.IVF_PQ elif build_algo == "nn_descent": self.params.build_algo = c_cagra.graph_build_algo.NN_DESCENT else: raise ValueError("Incorrect build_algo %s" % build_algo) @property def metric(self): return self.params.metric @property def intermediate_graph_degree(self): return self.params.intermediate_graph_degree @property def graph_degree(self): return self.params.graph_degree cdef class Index: cdef readonly bool trained cdef str active_index_type def __cinit__(self): self.trained = False self.active_index_type = None cdef class IndexFloat(Index): cdef c_cagra.index[float, uint32_t] * index def __cinit__(self, handle=None): if handle is None: handle = DeviceResources() cdef 
device_resources* handle_ = \ <device_resources*><size_t>handle.getHandle() self.index = new c_cagra.index[float, uint32_t]( deref(handle_)) def __repr__(self): m_str = "metric=" + _get_metric_string(self.index.metric()) attr_str = [attr + "=" + str(getattr(self, attr)) for attr in ["metric", "dim", "graph_degree"]] attr_str = [m_str] + attr_str return "Index(type=CAGRA, " + (", ".join(attr_str)) + ")" @auto_sync_handle def update_dataset(self, dataset, handle=None): """ Replace the dataset with a new dataset. Parameters ---------- dataset : array interface compliant matrix shape (n_samples, dim) {handle_docstring} """ cdef device_resources* handle_ = \ <device_resources*><size_t>handle.getHandle() dataset_ai = wrap_array(dataset) dataset_dt = dataset_ai.dtype _check_input_array(dataset_ai, [np.dtype("float32")]) if dataset_ai.from_cai: self.index[0].update_dataset(deref(handle_), get_const_dmv_float(dataset_ai, check_shape=True)) else: self.index[0].update_dataset(deref(handle_), get_const_hmv_float(dataset_ai, check_shape=True)) @property def metric(self): return self.index[0].metric() @property def size(self): return self.index[0].size() @property def dim(self): return self.index[0].dim() @property def graph_degree(self): return self.index[0].graph_degree() def __dealloc__(self): if self.index is not NULL: del self.index cdef class IndexInt8(Index): cdef c_cagra.index[int8_t, uint32_t] * index def __cinit__(self, handle=None): if handle is None: handle = DeviceResources() cdef device_resources* handle_ = \ <device_resources*><size_t>handle.getHandle() self.index = new c_cagra.index[int8_t, uint32_t]( deref(handle_)) @auto_sync_handle def update_dataset(self, dataset, handle=None): """ Replace the dataset with a new dataset. 
Parameters ---------- dataset : array interface compliant matrix shape (n_samples, dim) {handle_docstring} """ cdef device_resources* handle_ = \ <device_resources*><size_t>handle.getHandle() dataset_ai = wrap_array(dataset) dataset_dt = dataset_ai.dtype _check_input_array(dataset_ai, [np.dtype("byte")]) if dataset_ai.from_cai: self.index[0].update_dataset(deref(handle_), get_const_dmv_int8(dataset_ai, check_shape=True)) else: self.index[0].update_dataset(deref(handle_), get_const_hmv_int8(dataset_ai, check_shape=True)) def __repr__(self): m_str = "metric=" + _get_metric_string(self.index.metric()) attr_str = [attr + "=" + str(getattr(self, attr)) for attr in ["metric", "dim", "graph_degree"]] attr_str = [m_str] + attr_str return "Index(type=CAGRA, " + (", ".join(attr_str)) + ")" @property def metric(self): return self.index[0].metric() @property def size(self): return self.index[0].size() @property def dim(self): return self.index[0].dim() @property def graph_degree(self): return self.index[0].graph_degree() def __dealloc__(self): if self.index is not NULL: del self.index cdef class IndexUint8(Index): cdef c_cagra.index[uint8_t, uint32_t] * index def __cinit__(self, handle=None): if handle is None: handle = DeviceResources() cdef device_resources* handle_ = \ <device_resources*><size_t>handle.getHandle() self.index = new c_cagra.index[uint8_t, uint32_t]( deref(handle_)) @auto_sync_handle def update_dataset(self, dataset, handle=None): """ Replace the dataset with a new dataset. 
Parameters ---------- dataset : array interface compliant matrix shape (n_samples, dim) {handle_docstring} """ cdef device_resources* handle_ = \ <device_resources*><size_t>handle.getHandle() dataset_ai = wrap_array(dataset) dataset_dt = dataset_ai.dtype _check_input_array(dataset_ai, [np.dtype("ubyte")]) if dataset_ai.from_cai: self.index[0].update_dataset(deref(handle_), get_const_dmv_uint8(dataset_ai, check_shape=True)) else: self.index[0].update_dataset(deref(handle_), get_const_hmv_uint8(dataset_ai, check_shape=True)) def __repr__(self): m_str = "metric=" + _get_metric_string(self.index.metric()) attr_str = [attr + "=" + str(getattr(self, attr)) for attr in ["metric", "dim", "graph_degree"]] attr_str = [m_str] + attr_str return "Index(type=CAGRA, " + (", ".join(attr_str)) + ")" @property def metric(self): return self.index[0].metric() @property def size(self): return self.index[0].size() @property def dim(self): return self.index[0].dim() @property def graph_degree(self): return self.index[0].graph_degree() def __dealloc__(self): if self.index is not NULL: del self.index @auto_sync_handle @auto_convert_output def build(IndexParams index_params, dataset, handle=None): """ Build the CAGRA index from the dataset for efficient search. The build performs two different steps: first an intermediate knn-graph is constructed, then it is optimized to create the final graph. The index_params object controls the node degree of these graphs. It is required that both the dataset and the optimized graph fit the GPU memory. 
    The following distance metrics are supported:
        - L2

    Parameters
    ----------
    index_params : IndexParams object
    dataset : CUDA array interface compliant matrix shape (n_samples, dim)
        Supported dtype [float, int8, uint8]
    {handle_docstring}

    Returns
    -------
    index: cagra.Index

    Examples
    --------
    >>> import cupy as cp
    >>> from pylibraft.common import DeviceResources
    >>> from pylibraft.neighbors import cagra
    >>> n_samples = 50000
    >>> n_features = 50
    >>> n_queries = 1000
    >>> k = 10
    >>> dataset = cp.random.random_sample((n_samples, n_features),
    ...                                   dtype=cp.float32)
    >>> handle = DeviceResources()
    >>> build_params = cagra.IndexParams(metric="sqeuclidean")
    >>> index = cagra.build(build_params, dataset, handle=handle)
    >>> distances, neighbors = cagra.search(cagra.SearchParams(),
    ...                                     index, dataset,
    ...                                     k, handle=handle)
    >>> # pylibraft functions are often asynchronous so the
    >>> # handle needs to be explicitly synchronized
    >>> handle.sync()
    >>> distances = cp.asarray(distances)
    >>> neighbors = cp.asarray(neighbors)
    """
    dataset_ai = wrap_array(dataset)
    dataset_dt = dataset_ai.dtype
    _check_input_array(dataset_ai, [np.dtype('float32'), np.dtype('byte'),
                                    np.dtype('ubyte')])

    if handle is None:
        handle = DeviceResources()
    cdef device_resources* handle_ = \
        <device_resources*><size_t>handle.getHandle()

    cdef IndexFloat idx_float
    cdef IndexInt8 idx_int8
    cdef IndexUint8 idx_uint8

    if dataset_ai.from_cai:
        if dataset_dt == np.float32:
            idx_float = IndexFloat(handle)
            idx_float.active_index_type = "float32"
            with cuda_interruptible():
                c_cagra.build_device(
                    deref(handle_),
                    index_params.params,
                    get_dmv_float(dataset_ai, check_shape=True),
                    deref(idx_float.index))
            idx_float.trained = True
            return idx_float
        elif dataset_dt == np.byte:
            idx_int8 = IndexInt8(handle)
            idx_int8.active_index_type = "byte"
            with cuda_interruptible():
                c_cagra.build_device(
                    deref(handle_),
                    index_params.params,
                    get_dmv_int8(dataset_ai, check_shape=True),
                    deref(idx_int8.index))
            idx_int8.trained = True
            return idx_int8
        elif dataset_dt == np.ubyte:
            idx_uint8 = IndexUint8(handle)
            idx_uint8.active_index_type = "ubyte"
            with cuda_interruptible():
                c_cagra.build_device(
                    deref(handle_),
                    index_params.params,
                    get_dmv_uint8(dataset_ai, check_shape=True),
                    deref(idx_uint8.index))
            idx_uint8.trained = True
            return idx_uint8
        else:
            raise TypeError("dtype %s not supported" % dataset_dt)
    else:
        if dataset_dt == np.float32:
            idx_float = IndexFloat(handle)
            idx_float.active_index_type = "float32"
            with cuda_interruptible():
                c_cagra.build_host(
                    deref(handle_),
                    index_params.params,
                    get_hmv_float(dataset_ai, check_shape=True),
                    deref(idx_float.index))
            idx_float.trained = True
            return idx_float
        elif dataset_dt == np.byte:
            idx_int8 = IndexInt8(handle)
            idx_int8.active_index_type = "byte"
            with cuda_interruptible():
                c_cagra.build_host(
                    deref(handle_),
                    index_params.params,
                    get_hmv_int8(dataset_ai, check_shape=True),
                    deref(idx_int8.index))
            idx_int8.trained = True
            return idx_int8
        elif dataset_dt == np.ubyte:
            idx_uint8 = IndexUint8(handle)
            idx_uint8.active_index_type = "ubyte"
            with cuda_interruptible():
                c_cagra.build_host(
                    deref(handle_),
                    index_params.params,
                    get_hmv_uint8(dataset_ai, check_shape=True),
                    deref(idx_uint8.index))
            idx_uint8.trained = True
            return idx_uint8
        else:
            raise TypeError("dtype %s not supported" % dataset_dt)


cdef class SearchParams:
    """
    CAGRA search parameters

    Parameters
    ----------
    max_queries: int, default = 0
        Maximum number of queries to search at the same time (batch size).
        Auto select when 0.
    itopk_size: int, default = 64
        Number of intermediate search results retained during the search.
        This is the main knob to adjust the trade-off between accuracy
        and search speed. Higher values improve the search accuracy.
    max_iterations: int, default = 0
        Upper limit of search iterations. Auto select when 0.
    algo: string denoting the search algorithm to use, default = "auto"
        Valid values for algo: ["auto", "single_cta", "multi_cta"], where
            - auto will automatically select the best value based on query
              size
            - single_cta is better when query contains larger number of
              vectors (e.g >10)
            - multi_cta is better when query contains only a few vectors
    team_size: int, default = 0
        Number of threads used to calculate a single distance. 4, 8, 16,
        or 32.
    search_width: int, default = 1
        Number of graph nodes to select as the starting point for the
        search in each iteration.
    min_iterations: int, default = 0
        Lower limit of search iterations.
    thread_block_size: int, default = 0
        Thread block size. 0, 64, 128, 256, 512, 1024.
        Auto selection when 0.
    hashmap_mode: string denoting the type of hash map to use.
        It's usually better to allow the algorithm to select this value,
        default = "auto".
        Valid values for hashmap_mode: ["auto", "small", "hash"], where
            - auto will automatically select the best value based on algo
            - small will use the small shared memory hash table with
              resetting
            - hash will use a single hash table in global memory
    hashmap_min_bitlen: int, default = 0
        Lower limit of hashmap bit length. More than 8.
    hashmap_max_fill_rate: float, default = 0.5
        Upper limit of hashmap fill rate. More than 0.1, less than 0.9.
    num_random_samplings: int, default = 1
        Number of iterations of initial random seed node selection. 1 or
        more.
    rand_xor_mask: int, default = 0x128394
        Bit mask used for initial random seed node selection.
""" cdef c_cagra.search_params params def __init__(self, *, max_queries=0, itopk_size=64, max_iterations=0, algo="auto", team_size=0, search_width=1, min_iterations=0, thread_block_size=0, hashmap_mode="auto", hashmap_min_bitlen=0, hashmap_max_fill_rate=0.5, num_random_samplings=1, rand_xor_mask=0x128394): self.params.max_queries = max_queries self.params.itopk_size = itopk_size self.params.max_iterations = max_iterations if algo == "single_cta": self.params.algo = c_cagra.search_algo.SINGLE_CTA elif algo == "multi_cta": self.params.algo = c_cagra.search_algo.MULTI_CTA elif algo == "multi_kernel": self.params.algo = c_cagra.search_algo.MULTI_KERNEL elif algo == "auto": self.params.algo = c_cagra.search_algo.AUTO else: raise ValueError("`algo` value not supported.") self.params.team_size = team_size self.params.search_width = search_width self.params.min_iterations = min_iterations self.params.thread_block_size = thread_block_size if hashmap_mode == "hash": self.params.hashmap_mode = c_cagra.hash_mode.HASH elif hashmap_mode == "small": self.params.hashmap_mode = c_cagra.hash_mode.SMALL elif hashmap_mode == "auto": self.params.hashmap_mode = c_cagra.hash_mode.AUTO else: raise ValueError("`hashmap_mode` value not supported.") self.params.hashmap_min_bitlen = hashmap_min_bitlen self.params.hashmap_max_fill_rate = hashmap_max_fill_rate self.params.num_random_samplings = num_random_samplings self.params.rand_xor_mask = rand_xor_mask def __repr__(self): attr_str = [attr + "=" + str(getattr(self, attr)) for attr in [ "max_queries", "itopk_size", "max_iterations", "algo", "team_size", "search_width", "min_iterations", "thread_block_size", "hashmap_mode", "hashmap_min_bitlen", "hashmap_max_fill_rate", "num_random_samplings", "rand_xor_mask"]] return "SearchParams(type=CAGRA, " + (", ".join(attr_str)) + ")" @property def max_queries(self): return self.params.max_queries @property def itopk_size(self): return self.params.itopk_size @property def max_iterations(self): return 
self.params.max_iterations @property def algo(self): return self.params.algo @property def team_size(self): return self.params.team_size @property def search_width(self): return self.params.search_width @property def min_iterations(self): return self.params.min_iterations @property def thread_block_size(self): return self.params.thread_block_size @property def hashmap_mode(self): return self.params.hashmap_mode @property def hashmap_min_bitlen(self): return self.params.hashmap_min_bitlen @property def hashmap_max_fill_rate(self): return self.params.hashmap_max_fill_rate @property def num_random_samplings(self): return self.params.num_random_samplings @property def rand_xor_mask(self): return self.params.rand_xor_mask @auto_sync_handle @auto_convert_output def search(SearchParams search_params, Index index, queries, k, neighbors=None, distances=None, handle=None): """ Find the k nearest neighbors for each query. Parameters ---------- search_params : SearchParams index : Index Trained CAGRA index. queries : CUDA array interface compliant matrix shape (n_samples, dim) Supported dtype [float, int8, uint8] k : int The number of neighbors. neighbors : Optional CUDA array interface compliant matrix shape (n_queries, k), dtype int64_t. If supplied, neighbor indices will be written here in-place. (default None) distances : Optional CUDA array interface compliant matrix shape (n_queries, k) If supplied, the distances to the neighbors will be written here in-place. (default None) {handle_docstring} Examples -------- >>> import cupy as cp >>> from pylibraft.common import DeviceResources >>> from pylibraft.neighbors import cagra >>> n_samples = 50000 >>> n_features = 50 >>> n_queries = 1000 >>> dataset = cp.random.random_sample((n_samples, n_features), ... 
dtype=cp.float32) >>> # Build index >>> handle = DeviceResources() >>> index = cagra.build(cagra.IndexParams(), dataset, handle=handle) >>> # Search using the built index >>> queries = cp.random.random_sample((n_queries, n_features), ... dtype=cp.float32) >>> k = 10 >>> search_params = cagra.SearchParams( ... max_queries=100, ... itopk_size=64 ... ) >>> # Using a pooling allocator reduces overhead of temporary array >>> # creation during search. This is useful if multiple searches >>> # are performad with same query size. >>> distances, neighbors = cagra.search(search_params, index, queries, ... k, handle=handle) >>> # pylibraft functions are often asynchronous so the >>> # handle needs to be explicitly synchronized >>> handle.sync() >>> neighbors = cp.asarray(neighbors) >>> distances = cp.asarray(distances) """ if not index.trained: raise ValueError("Index need to be built before calling search.") if handle is None: handle = DeviceResources() cdef device_resources* handle_ = \ <device_resources*><size_t>handle.getHandle() queries_cai = cai_wrapper(queries) queries_dt = queries_cai.dtype cdef uint32_t n_queries = queries_cai.shape[0] _check_input_array(queries_cai, [np.dtype('float32'), np.dtype('byte'), np.dtype('ubyte')], exp_cols=index.dim) if neighbors is None: neighbors = device_ndarray.empty((n_queries, k), dtype='uint32') neighbors_cai = cai_wrapper(neighbors) _check_input_array(neighbors_cai, [np.dtype('uint32')], exp_rows=n_queries, exp_cols=k) if distances is None: distances = device_ndarray.empty((n_queries, k), dtype='float32') distances_cai = cai_wrapper(distances) _check_input_array(distances_cai, [np.dtype('float32')], exp_rows=n_queries, exp_cols=k) cdef c_cagra.search_params params = search_params.params cdef IndexFloat idx_float cdef IndexInt8 idx_int8 cdef IndexUint8 idx_uint8 if queries_dt == np.float32: idx_float = index with cuda_interruptible(): c_cagra.search(deref(handle_), params, deref(idx_float.index), get_dmv_float(queries_cai, 
check_shape=True), get_dmv_uint32(neighbors_cai, check_shape=True), get_dmv_float(distances_cai, check_shape=True)) elif queries_dt == np.byte: idx_int8 = index with cuda_interruptible(): c_cagra.search(deref(handle_), params, deref(idx_int8.index), get_dmv_int8(queries_cai, check_shape=True), get_dmv_uint32(neighbors_cai, check_shape=True), get_dmv_float(distances_cai, check_shape=True)) elif queries_dt == np.ubyte: idx_uint8 = index with cuda_interruptible(): c_cagra.search(deref(handle_), params, deref(idx_uint8.index), get_dmv_uint8(queries_cai, check_shape=True), get_dmv_uint32(neighbors_cai, check_shape=True), get_dmv_float(distances_cai, check_shape=True)) else: raise ValueError("query dtype %s not supported" % queries_dt) return (distances, neighbors) @auto_sync_handle def save(filename, Index index, bool include_dataset=True, handle=None): """ Saves the index to a file. Saving / loading the index is experimental. The serialization format is subject to change. Parameters ---------- filename : string Name of the file. index : Index Trained CAGRA index. include_dataset : bool Whether or not to write out the dataset along with the index. Including the dataset in the serialized index will use extra disk space, and might not be desired if you already have a copy of the dataset on disk. If this option is set to false, you will have to call `index.update_dataset(dataset)` after loading the index. {handle_docstring} Examples -------- >>> import cupy as cp >>> from pylibraft.common import DeviceResources >>> from pylibraft.neighbors import cagra >>> n_samples = 50000 >>> n_features = 50 >>> dataset = cp.random.random_sample((n_samples, n_features), ... 
dtype=cp.float32) >>> # Build index >>> handle = DeviceResources() >>> index = cagra.build(cagra.IndexParams(), dataset, handle=handle) >>> # Serialize and deserialize the cagra index built >>> cagra.save("my_index.bin", index, handle=handle) >>> index_loaded = cagra.load("my_index.bin", handle=handle) """ if not index.trained: raise ValueError("Index need to be built before saving it.") if handle is None: handle = DeviceResources() cdef device_resources* handle_ = \ <device_resources*><size_t>handle.getHandle() cdef string c_filename = filename.encode('utf-8') cdef IndexFloat idx_float cdef IndexInt8 idx_int8 cdef IndexUint8 idx_uint8 if index.active_index_type == "float32": idx_float = index c_cagra.serialize_file( deref(handle_), c_filename, deref(idx_float.index), include_dataset) elif index.active_index_type == "byte": idx_int8 = index c_cagra.serialize_file( deref(handle_), c_filename, deref(idx_int8.index), include_dataset) elif index.active_index_type == "ubyte": idx_uint8 = index c_cagra.serialize_file( deref(handle_), c_filename, deref(idx_uint8.index), include_dataset) else: raise ValueError( "Index dtype %s not supported" % index.active_index_type) @auto_sync_handle def load(filename, handle=None): """ Loads index from file. Saving / loading the index is experimental. The serialization format is subject to change, therefore loading an index saved with a previous version of raft is not guaranteed to work. Parameters ---------- filename : string Name of the file. 
    {handle_docstring}

    Returns
    -------
    index : Index

    """
    if handle is None:
        handle = DeviceResources()
    cdef device_resources* handle_ = \
        <device_resources*><size_t>handle.getHandle()

    cdef string c_filename = filename.encode('utf-8')
    cdef IndexFloat idx_float
    cdef IndexInt8 idx_int8
    cdef IndexUint8 idx_uint8

    with open(filename, "rb") as f:
        type_str = f.read(3).decode("utf8")

    dataset_dt = np.dtype(type_str)

    if dataset_dt == np.float32:
        idx_float = IndexFloat(handle)
        c_cagra.deserialize_file(
            deref(handle_), c_filename, idx_float.index)
        idx_float.trained = True
        idx_float.active_index_type = 'float32'
        return idx_float
    elif dataset_dt == np.byte:
        idx_int8 = IndexInt8(handle)
        c_cagra.deserialize_file(
            deref(handle_), c_filename, idx_int8.index)
        idx_int8.trained = True
        idx_int8.active_index_type = 'byte'
        return idx_int8
    elif dataset_dt == np.ubyte:
        idx_uint8 = IndexUint8(handle)
        c_cagra.deserialize_file(
            deref(handle_), c_filename, idx_uint8.index)
        idx_uint8.trained = True
        idx_uint8.active_index_type = 'ubyte'
        return idx_uint8
    else:
        raise ValueError("Dataset dtype %s not supported" % dataset_dt)
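`load` above decides which typed index class to construct by peeking at the first three bytes of the serialized file and interpreting them as a numpy dtype string. The following is a minimal stand-alone sketch of that dispatch in plain Python; the `dispatch_index_type` helper and its mapping are hypothetical stand-ins for the `if`/`elif` ladder over `np.dtype`, and the three-character codes assume numpy's standard little-endian type strings (`"<f4"` for float32, `"|i1"` for int8, `"|u1"` for uint8).

```python
def peek_dtype_code(filename):
    # The serialized index begins with a three-character numpy dtype
    # string; reading it lets the loader pick the right typed index
    # class before deserializing the rest of the file.
    with open(filename, "rb") as f:
        return f.read(3).decode("utf8")


def dispatch_index_type(type_str):
    # Hypothetical mapping mirroring load()'s float32/byte/ubyte branches.
    mapping = {"<f4": "float32", "|i1": "byte", "|u1": "ubyte"}
    try:
        return mapping[type_str]
    except KeyError:
        raise ValueError("Dataset dtype %s not supported" % type_str)
```

Note that only the header is inspected here; the actual `deserialize_file` call still parses the whole file.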
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors/cagra/__init__.py
# Copyright (c) 2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from .cagra import Index, IndexParams, SearchParams, build, load, save, search

__all__ = [
    "Index",
    "IndexParams",
    "SearchParams",
    "build",
    "load",
    "save",
    "search",
]
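`SearchParams` in `cagra.pyx` above translates user-facing strings such as `"single_cta"` into C++ enum values with an `if`/`elif` ladder that raises `ValueError` on unknown input. The same validation pattern can be sketched in plain Python with the stdlib `enum` module; the `SearchAlgo` enum and its integer values here are hypothetical stand-ins for the `c_cagra.search_algo` C++ enum.

```python
from enum import Enum


class SearchAlgo(Enum):
    # Stand-in for raft::neighbors::cagra::search_algo.
    SINGLE_CTA = 0
    MULTI_CTA = 1
    MULTI_KERNEL = 2
    AUTO = 3


_ALGO_BY_NAME = {
    "single_cta": SearchAlgo.SINGLE_CTA,
    "multi_cta": SearchAlgo.MULTI_CTA,
    "multi_kernel": SearchAlgo.MULTI_KERNEL,
    "auto": SearchAlgo.AUTO,
}


def parse_algo(name):
    # Mirrors the if/elif ladder in SearchParams.__init__: accept only
    # known names, otherwise reject with the same error message.
    try:
        return _ALGO_BY_NAME[name]
    except KeyError:
        raise ValueError("`algo` value not supported.")
```

A table-driven lookup like this is equivalent to the ladder in the Cython code; the ladder form is kept there because Cython assigns the C++ enum constants directly.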
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors/cagra/cpp/__init__.py
# Copyright (c) 2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors/cagra/cpp/c_cagra.pxd
#
# Copyright (c) 2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# cython: profile=False
# distutils: language = c++
# cython: embedsignature = True
# cython: language_level = 3

import numpy as np

import pylibraft.common.handle

from cython.operator cimport dereference as deref
from libc.stdint cimport int8_t, int64_t, uint8_t, uint32_t, uint64_t
from libcpp cimport bool, nullptr
from libcpp.string cimport string

from pylibraft.common.cpp.mdspan cimport (
    device_matrix_view,
    device_vector_view,
    host_matrix_view,
    row_major,
)
from pylibraft.common.handle cimport device_resources
from pylibraft.common.mdspan cimport const_float, const_int8_t, const_uint8_t
from pylibraft.common.optional cimport optional
from pylibraft.distance.distance_type cimport DistanceType
from pylibraft.neighbors.ivf_pq.cpp.c_ivf_pq cimport (
    ann_index,
    ann_index_params,
    ann_search_params,
    index_params as ivfpq_ip,
    search_params as ivfpq_sp,
)
from rmm._lib.memory_resource cimport device_memory_resource


cdef extern from "raft/neighbors/cagra_types.hpp" \
        namespace "raft::neighbors::cagra" nogil:

    ctypedef enum graph_build_algo:
        IVF_PQ "raft::neighbors::cagra::graph_build_algo::IVF_PQ",
        NN_DESCENT "raft::neighbors::cagra::graph_build_algo::NN_DESCENT"

    cpdef cppclass index_params(ann_index_params):
        size_t intermediate_graph_degree
        size_t graph_degree
        graph_build_algo build_algo

    ctypedef enum search_algo:
        SINGLE_CTA "raft::neighbors::cagra::search_algo::SINGLE_CTA",
        MULTI_CTA "raft::neighbors::cagra::search_algo::MULTI_CTA",
        MULTI_KERNEL "raft::neighbors::cagra::search_algo::MULTI_KERNEL",
        AUTO "raft::neighbors::cagra::search_algo::AUTO"

    ctypedef enum hash_mode:
        HASH "raft::neighbors::cagra::hash_mode::HASH",
        SMALL "raft::neighbors::cagra::hash_mode::SMALL",
        AUTO "raft::neighbors::cagra::hash_mode::AUTO"

    cpdef cppclass search_params(ann_search_params):
        size_t max_queries
        size_t itopk_size
        size_t max_iterations
        search_algo algo
        size_t team_size
        size_t search_width
        size_t min_iterations
        size_t thread_block_size
        hash_mode hashmap_mode
        size_t hashmap_min_bitlen
        float hashmap_max_fill_rate
        uint32_t num_random_samplings
        uint64_t rand_xor_mask

    cdef cppclass index[T, IdxT](ann_index):
        index(const device_resources&)

        DistanceType metric()
        IdxT size()
        uint32_t dim()
        uint32_t graph_degree()
        device_matrix_view[T, IdxT, row_major] dataset()
        device_matrix_view[T, IdxT, row_major] graph()

        # hack: can't use the T template param here because of issues
        # handling const w/ cython. introduce a new template param to
        # get around this
        void update_dataset[ValueT](
            const device_resources & handle,
            host_matrix_view[ValueT, int64_t, row_major] dataset)
        void update_dataset[ValueT](
            const device_resources & handle,
            device_matrix_view[ValueT, int64_t, row_major] dataset)


cdef extern from "raft_runtime/neighbors/cagra.hpp" \
        namespace "raft::runtime::neighbors::cagra" nogil:

    cdef void build_device(
        const device_resources& handle,
        const index_params& params,
        device_matrix_view[float, int64_t, row_major] dataset,
        index[float, uint32_t]& index) except +

    cdef void build_device(
        const device_resources& handle,
        const index_params& params,
        device_matrix_view[int8_t, int64_t, row_major] dataset,
        index[int8_t, uint32_t]& index) except +

    cdef void build_device(
        const device_resources& handle,
        const index_params& params,
        device_matrix_view[uint8_t, int64_t, row_major] dataset,
        index[uint8_t, uint32_t]& index) except +

    cdef void build_host(
        const device_resources& handle,
        const index_params& params,
        host_matrix_view[float, int64_t, row_major] dataset,
        index[float, uint32_t]& index) except +

    cdef void build_host(
        const device_resources& handle,
        const index_params& params,
        host_matrix_view[int8_t, int64_t, row_major] dataset,
        index[int8_t, uint32_t]& index) except +

    cdef void build_host(
        const device_resources& handle,
        const index_params& params,
        host_matrix_view[uint8_t, int64_t, row_major] dataset,
        index[uint8_t, uint32_t]& index) except +

    cdef void search(
        const device_resources& handle,
        const search_params& params,
        const index[float, uint32_t]& index,
        device_matrix_view[float, int64_t, row_major] queries,
        device_matrix_view[uint32_t, int64_t, row_major] neighbors,
        device_matrix_view[float, int64_t, row_major] distances) except +

    cdef void search(
        const device_resources& handle,
        const search_params& params,
        const index[int8_t, uint32_t]& index,
        device_matrix_view[int8_t, int64_t, row_major] queries,
        device_matrix_view[uint32_t, int64_t, row_major] neighbors,
        device_matrix_view[float, int64_t, row_major] distances) except +

    cdef void search(
        const device_resources& handle,
        const search_params& params,
        const index[uint8_t, uint32_t]& index,
        device_matrix_view[uint8_t, int64_t, row_major] queries,
        device_matrix_view[uint32_t, int64_t, row_major] neighbors,
        device_matrix_view[float, int64_t, row_major] distances) except +

    cdef void serialize(const device_resources& handle,
                        string& str,
                        const index[float, uint32_t]& index,
                        bool include_dataset) except +

    cdef void deserialize(const device_resources& handle,
                          const string& str,
                          index[float, uint32_t]* index) except +

    cdef void serialize(const device_resources& handle,
                        string& str,
                        const index[uint8_t, uint32_t]& index,
                        bool include_dataset) except +

    cdef void deserialize(const device_resources& handle,
                          const string& str,
                          index[uint8_t, uint32_t]* index) except +

    cdef void serialize(const device_resources& handle,
                        string& str,
                        const index[int8_t, uint32_t]& index,
                        bool include_dataset) except +

    cdef void deserialize(const device_resources& handle,
                          const string& str,
                          index[int8_t, uint32_t]* index) except +

    cdef void serialize_file(const device_resources& handle,
                             const string& filename,
                             const index[float, uint32_t]& index,
                             bool include_dataset) except +

    cdef void deserialize_file(const device_resources& handle,
                               const string& filename,
                               index[float, uint32_t]* index) except +

    cdef void serialize_file(const device_resources& handle,
                             const string& filename,
                             const index[uint8_t, uint32_t]& index,
                             bool include_dataset) except +

    cdef void deserialize_file(const device_resources& handle,
                               const string& filename,
                               index[uint8_t, uint32_t]* index) except +

    cdef void serialize_file(const device_resources& handle,
                             const string& filename,
                             const index[int8_t, uint32_t]& index,
                             bool include_dataset) except +

    cdef void deserialize_file(const device_resources& handle,
                               const string& filename,
                               index[int8_t, uint32_t]* index) except +
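The runtime declarations above come in `build_device`/`build_host` pairs, one overload per dtype. On the Python side, `build()` in `cagra.pyx` picks the pair based on whether the wrapped input exposes the CUDA array interface (`from_cai`) and then the overload based on its numpy dtype. A plain-Python sketch of that two-way dispatch follows; `select_build_path` and the builder tables are hypothetical stand-ins, not part of the real API.

```python
def select_build_path(array, device_builders, host_builders):
    # `array` mimics the wrapped input: `from_cai` says whether the data
    # lives on the GPU (CUDA array interface present); `dtype` picks the
    # typed overload. The builder dicts stand in for the build_device /
    # build_host overload sets, keyed by dtype name.
    builders = device_builders if array.from_cai else host_builders
    try:
        return builders[array.dtype]
    except KeyError:
        # Same failure mode as build(): unsupported dtypes are rejected.
        raise TypeError("dtype %s not supported" % array.dtype)
```

Keeping the device/host split explicit at the C++ boundary avoids hidden host-to-device copies: callers decide where the dataset lives, and the binding simply routes to the matching overload.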
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors/cpp/brute_force.pxd
#
# Copyright (c) 2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# cython: profile=False
# distutils: language = c++
# cython: embedsignature = True
# cython: language_level = 3

import numpy as np

import pylibraft.common.handle

from cython.operator cimport dereference as deref
from libc.stdint cimport int8_t, int64_t, uint8_t, uint64_t, uintptr_t
from libcpp cimport bool, nullptr
from libcpp.string cimport string
from libcpp.vector cimport vector

from pylibraft.common.cpp.mdspan cimport (
    device_matrix_view,
    host_matrix_view,
    make_device_matrix_view,
    make_host_matrix_view,
    row_major,
)
from pylibraft.common.cpp.optional cimport optional
from pylibraft.common.handle cimport device_resources
from pylibraft.distance.distance_type cimport DistanceType
from rmm._lib.memory_resource cimport device_memory_resource


cdef extern from "raft_runtime/neighbors/brute_force.hpp" \
        namespace "raft::runtime::neighbors::brute_force" nogil:

    cdef void knn(const device_resources & handle,
                  device_matrix_view[float, int64_t, row_major] index,
                  device_matrix_view[float, int64_t, row_major] search,
                  device_matrix_view[int64_t, int64_t, row_major] indices,
                  device_matrix_view[float, int64_t, row_major] distances,
                  DistanceType metric,
                  optional[float] metric_arg,
                  optional[int64_t] global_id_offset) except +
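The `knn` declaration above computes, for every row of `search`, the indices and distances of its k nearest rows in `index` by exhaustive comparison. A pure-Python reference sketch of that computation follows, restricted to squared euclidean distance (the real GPU routine supports the metrics in `DistanceType` and the optional `metric_arg`/`global_id_offset` parameters); the function name here is illustrative only.

```python
def brute_force_knn(index, search, k):
    # index, search: lists of equal-length numeric vectors.
    # Returns (indices, distances), each of shape (len(search), k),
    # using squared euclidean ("sqeuclidean") distance.
    all_indices, all_distances = [], []
    for q in search:
        # Score every index row against the query, then keep the k best.
        scored = sorted(
            (sum((a - b) ** 2 for a, b in zip(row, q)), i)
            for i, row in enumerate(index)
        )[:k]
        all_distances.append([d for d, _ in scored])
        all_indices.append([i for _, i in scored])
    return all_indices, all_distances
```

This is O(n_index * n_queries * dim) per call, which is exactly why the library offloads it to the GPU for large inputs.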
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors/cpp/__init__.py
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors/ivf_flat/CMakeLists.txt
# =============================================================================
# Copyright (c) 2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
# in compliance with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing permissions and limitations under
# the License.
# =============================================================================

# Set the list of Cython files to build
set(cython_sources ivf_flat.pyx)
set(linked_libraries cuvs::cuvs cuvs::compiled)

# Build all of the Cython targets
rapids_cython_create_modules(
  CXX
  SOURCE_FILES "${cython_sources}"
  LINKED_LIBRARIES "${linked_libraries}" ASSOCIATED_TARGETS cuvs
  MODULE_PREFIX neighbors_ivfflat_
)
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors/ivf_flat/__init__.py
# Copyright (c) 2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from .ivf_flat import (
    Index,
    IndexParams,
    SearchParams,
    build,
    extend,
    load,
    save,
    search,
)

__all__ = [
    "Index",
    "IndexParams",
    "SearchParams",
    "build",
    "extend",
    "search",
    "save",
    "load",
]
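IVF-FLAT, whose bindings this package exposes, partitions the dataset into `n_lists` clusters via a coarse quantizer and, at search time, scans only the lists whose centers are closest to the query. The toy sketch below illustrates that inverted-file structure in pure Python; it uses fixed centers instead of trained k-means centroids, and all helper names are hypothetical.

```python
def sq_l2(a, b):
    # Squared euclidean distance, the "sqeuclidean" metric.
    return sum((x - y) ** 2 for x, y in zip(a, b))


def build_ivf(dataset, centers):
    # Assign every vector to its nearest center (the coarse quantizer),
    # producing one inverted list of row indices per center.
    lists = [[] for _ in centers]
    for i, v in enumerate(dataset):
        c = min(range(len(centers)), key=lambda j: sq_l2(v, centers[j]))
        lists[c].append(i)
    return lists


def ivf_search(query, dataset, centers, lists, n_probes, k):
    # Rank centers by distance to the query and scan only the n_probes
    # closest lists; rows in unprobed lists are never compared.
    probe = sorted(range(len(centers)),
                   key=lambda j: sq_l2(query, centers[j]))[:n_probes]
    candidates = [i for j in probe for i in lists[j]]
    return sorted(candidates, key=lambda i: sq_l2(query, dataset[i]))[:k]
```

Raising `n_probes` trades speed for recall: with `n_probes == n_lists` the search degenerates to exact brute force.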
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors/ivf_flat/ivf_flat.pyx
#
# Copyright (c) 2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# cython: profile=False
# distutils: language = c++
# cython: embedsignature = True
# cython: language_level = 3

import warnings

import numpy as np

from cython.operator cimport dereference as deref
from libc.stdint cimport int8_t, int64_t, uint8_t, uint32_t, uintptr_t
from libcpp cimport bool, nullptr
from libcpp.string cimport string

from pylibraft.distance.distance_type cimport DistanceType

from pylibraft.common import (
    DeviceResources,
    ai_wrapper,
    auto_convert_output,
    device_ndarray,
)
from pylibraft.common.cai_wrapper import cai_wrapper

from pylibraft.common.cpp.mdspan cimport (
    device_matrix_view,
    device_vector_view,
    make_device_vector_view,
    row_major,
)

from pylibraft.common.interruptible import cuda_interruptible

from pylibraft.common.handle cimport device_resources

from pylibraft.common.handle import auto_sync_handle
from pylibraft.common.input_validation import is_c_contiguous

cimport pylibraft.neighbors.ivf_flat.cpp.c_ivf_flat as c_ivf_flat
from pylibraft.common.cpp.optional cimport optional
from rmm._lib.memory_resource cimport (
    DeviceMemoryResource,
    device_memory_resource,
)

from pylibraft.neighbors.common import _check_input_array, _get_metric

from pylibraft.common.mdspan cimport (
    get_dmv_float,
    get_dmv_int8,
    get_dmv_int64,
    get_dmv_uint8,
)
from pylibraft.neighbors.common cimport _get_metric_string
from pylibraft.neighbors.ivf_flat.cpp.c_ivf_flat cimport (
    index_params,
    search_params,
)


cdef class IndexParams:
    """
    Parameters to build index for IVF-FLAT nearest neighbor search

    Parameters
    ----------
    n_lists : int, default = 1024
        The number of clusters used in the coarse quantizer.
    metric : string denoting the metric type, default="sqeuclidean"
        Valid values for metric: ["sqeuclidean", "inner_product",
        "euclidean"], where
            - sqeuclidean is the euclidean distance without the square
              root operation, i.e.: distance(a,b) = \\sum_i (a_i - b_i)^2,
            - euclidean is the euclidean distance
            - inner product distance is defined as
              distance(a, b) = \\sum_i a_i * b_i.
    kmeans_n_iters : int, default = 20
        The number of iterations searching for kmeans centers during
        index building.
    kmeans_trainset_fraction : float, default = 0.5
        If kmeans_trainset_fraction is less than 1, then the dataset is
        subsampled, and only n_samples * kmeans_trainset_fraction rows
        are used for training.
    add_data_on_build : bool, default = True
        After training the coarse and fine quantizers, we will populate
        the index with the dataset if add_data_on_build == True, otherwise
        the index is left empty, and the extend method can be used to
        add new vectors to the index.
    adaptive_centers : bool, default = False
        By default (adaptive_centers = False), the cluster centers are
        trained in `ivf_flat::build`, and never modified in
        `ivf_flat::extend`. The alternative behavior (adaptive_centers =
        true) is to update the cluster centers for new data when it is
        added. In this case, `index.centers()` are always exactly the
        centroids of the data in the corresponding clusters. The drawback
        of this behavior is that the centroids depend on the order of
        adding new data (through the classification of the added data);
        that is, `index.centers()` "drift" together with the changing
        distribution of the newly added data.
""" cdef c_ivf_flat.index_params params def __init__(self, *, n_lists=1024, metric="sqeuclidean", kmeans_n_iters=20, kmeans_trainset_fraction=0.5, add_data_on_build=True, bool adaptive_centers=False): self.params.n_lists = n_lists self.params.metric = _get_metric(metric) self.params.metric_arg = 0 self.params.kmeans_n_iters = kmeans_n_iters self.params.kmeans_trainset_fraction = kmeans_trainset_fraction self.params.add_data_on_build = add_data_on_build self.params.adaptive_centers = adaptive_centers @property def n_lists(self): return self.params.n_lists @property def metric(self): return self.params.metric @property def kmeans_n_iters(self): return self.params.kmeans_n_iters @property def kmeans_trainset_fraction(self): return self.params.kmeans_trainset_fraction @property def add_data_on_build(self): return self.params.add_data_on_build @property def adaptive_centers(self): return self.params.adaptive_centers cdef class Index: cdef readonly bool trained cdef str active_index_type def __cinit__(self): self.trained = False self.active_index_type = None cdef class IndexFloat(Index): cdef c_ivf_flat.index[float, int64_t] * index def __cinit__(self, handle=None): if handle is None: handle = DeviceResources() cdef device_resources* handle_ = \ <device_resources*><size_t>handle.getHandle() # this is to keep track of which index type is being used # We create a placeholder object. The actual parameter values do # not matter, it will be replaced with a built index object later. 
self.index = new c_ivf_flat.index[float, int64_t]( deref(handle_), _get_metric("sqeuclidean"), <uint32_t>1, <bool>False, <bool>False, <uint32_t>4) def __repr__(self): m_str = "metric=" + _get_metric_string(self.index.metric()) attr_str = [ attr + "=" + str(getattr(self, attr)) for attr in ["size", "dim", "n_lists", "adaptive_centers"] ] attr_str = [m_str] + attr_str return "Index(type=IVF-FLAT, " + (", ".join(attr_str)) + ")" @property def dim(self): return self.index[0].dim() @property def size(self): return self.index[0].size() @property def metric(self): return self.index[0].metric() @property def n_lists(self): return self.index[0].n_lists() @property def adaptive_centers(self): return self.index[0].adaptive_centers() cdef class IndexInt8(Index): cdef c_ivf_flat.index[int8_t, int64_t] * index def __cinit__(self, handle=None): if handle is None: handle = DeviceResources() cdef device_resources* handle_ = \ <device_resources*><size_t>handle.getHandle() # this is to keep track of which index type is being used # We create a placeholder object. The actual parameter values do # not matter, it will be replaced with a built index object later. 
self.index = new c_ivf_flat.index[int8_t, int64_t]( deref(handle_), _get_metric("sqeuclidean"), <uint32_t>1, <bool>False, <bool>False, <uint32_t>4) def __repr__(self): m_str = "metric=" + _get_metric_string(self.index.metric()) attr_str = [ attr + "=" + str(getattr(self, attr)) for attr in ["size", "dim", "n_lists", "adaptive_centers"] ] attr_str = [m_str] + attr_str return "Index(type=IVF-FLAT, " + (", ".join(attr_str)) + ")" @property def dim(self): return self.index[0].dim() @property def size(self): return self.index[0].size() @property def metric(self): return self.index[0].metric() @property def n_lists(self): return self.index[0].n_lists() @property def adaptive_centers(self): return self.index[0].adaptive_centers() cdef class IndexUint8(Index): cdef c_ivf_flat.index[uint8_t, int64_t] * index def __cinit__(self, handle=None): if handle is None: handle = DeviceResources() cdef device_resources* handle_ = \ <device_resources*><size_t>handle.getHandle() # this is to keep track of which index type is being used # We create a placeholder object. The actual parameter values do # not matter, it will be replaced with a built index object later. 
self.index = new c_ivf_flat.index[uint8_t, int64_t]( deref(handle_), _get_metric("sqeuclidean"), <uint32_t>1, <bool>False, <bool>False, <uint32_t>4) def __repr__(self): m_str = "metric=" + _get_metric_string(self.index.metric()) attr_str = [ attr + "=" + str(getattr(self, attr)) for attr in ["size", "dim", "n_lists", "adaptive_centers"] ] attr_str = [m_str] + attr_str return "Index(type=IVF-FLAT, " + (", ".join(attr_str)) + ")" @property def dim(self): return self.index[0].dim() @property def size(self): return self.index[0].size() @property def metric(self): return self.index[0].metric() @property def n_lists(self): return self.index[0].n_lists() @property def adaptive_centers(self): return self.index[0].adaptive_centers() @auto_sync_handle @auto_convert_output def build(IndexParams index_params, dataset, handle=None): """ Builds an IVF-FLAT index that can be used for nearest neighbor search. Parameters ---------- index_params : IndexParams object dataset : CUDA array interface compliant matrix shape (n_samples, dim) Supported dtype [float, int8, uint8] {handle_docstring} Returns ------- index: ivf_flat.Index Examples -------- >>> import cupy as cp >>> from pylibraft.common import DeviceResources >>> from pylibraft.neighbors import ivf_flat >>> n_samples = 50000 >>> n_features = 50 >>> n_queries = 1000 >>> dataset = cp.random.random_sample((n_samples, n_features), ... dtype=cp.float32) >>> handle = DeviceResources() >>> index_params = ivf_flat.IndexParams( ... n_lists=1024, ... metric="sqeuclidean") >>> index = ivf_flat.build(index_params, dataset, handle=handle) >>> # Search using the built index >>> queries = cp.random.random_sample((n_queries, n_features), ... dtype=cp.float32) >>> k = 10 >>> distances, neighbors = ivf_flat.search(ivf_flat.SearchParams(), ... index, queries, k, ... 
handle=handle) >>> distances = cp.asarray(distances) >>> neighbors = cp.asarray(neighbors) >>> # pylibraft functions are often asynchronous so the >>> # handle needs to be explicitly synchronized >>> handle.sync() """ dataset_cai = cai_wrapper(dataset) dataset_dt = dataset_cai.dtype _check_input_array(dataset_cai, [np.dtype('float32'), np.dtype('byte'), np.dtype('ubyte')]) cdef int64_t n_rows = dataset_cai.shape[0] cdef uint32_t dim = dataset_cai.shape[1] if handle is None: handle = DeviceResources() cdef device_resources* handle_ = \ <device_resources*><size_t>handle.getHandle() cdef IndexFloat idx_float cdef IndexInt8 idx_int8 cdef IndexUint8 idx_uint8 if dataset_dt == np.float32: idx_float = IndexFloat(handle) idx_float.active_index_type = "float32" with cuda_interruptible(): c_ivf_flat.build(deref(handle_), index_params.params, get_dmv_float(dataset_cai, check_shape=True), deref(idx_float.index)) idx_float.trained = True return idx_float elif dataset_dt == np.byte: idx_int8 = IndexInt8(handle) idx_int8.active_index_type = "byte" with cuda_interruptible(): c_ivf_flat.build(deref(handle_), index_params.params, get_dmv_int8(dataset_cai, check_shape=True), deref(idx_int8.index)) idx_int8.trained = True return idx_int8 elif dataset_dt == np.ubyte: idx_uint8 = IndexUint8(handle) idx_uint8.active_index_type = "ubyte" with cuda_interruptible(): c_ivf_flat.build(deref(handle_), index_params.params, get_dmv_uint8(dataset_cai, check_shape=True), deref(idx_uint8.index)) idx_uint8.trained = True return idx_uint8 else: raise TypeError("dtype %s not supported" % dataset_dt) @auto_sync_handle @auto_convert_output def extend(Index index, new_vectors, new_indices, handle=None): """ Extend an existing index with new vectors. Parameters ---------- index : ivf_flat.Index Trained ivf_flat object. 
new_vectors : CUDA array interface compliant matrix shape (n_samples, dim) Supported dtype [float, int8, uint8] new_indices : CUDA array interface compliant vector shape (n_samples) Supported dtype [int64] {handle_docstring} Returns ------- index: ivf_flat.Index Examples -------- >>> import cupy as cp >>> from pylibraft.common import DeviceResources >>> from pylibraft.neighbors import ivf_flat >>> n_samples = 50000 >>> n_features = 50 >>> n_queries = 1000 >>> dataset = cp.random.random_sample((n_samples, n_features), ... dtype=cp.float32) >>> handle = DeviceResources() >>> index = ivf_flat.build(ivf_flat.IndexParams(), dataset, ... handle=handle) >>> n_rows = 100 >>> more_data = cp.random.random_sample((n_rows, n_features), ... dtype=cp.float32) >>> indices = index.size + cp.arange(n_rows, dtype=cp.int64) >>> index = ivf_flat.extend(index, more_data, indices) >>> # Search using the built index >>> queries = cp.random.random_sample((n_queries, n_features), ... dtype=cp.float32) >>> k = 10 >>> distances, neighbors = ivf_flat.search(ivf_flat.SearchParams(), ... index, queries, ... 
k, handle=handle) >>> # pylibraft functions are often asynchronous so the >>> # handle needs to be explicitly synchronized >>> handle.sync() >>> distances = cp.asarray(distances) >>> neighbors = cp.asarray(neighbors) """ if not index.trained: raise ValueError("Index need to be built before calling extend.") if handle is None: handle = DeviceResources() cdef device_resources* handle_ = \ <device_resources*><size_t>handle.getHandle() vecs_cai = cai_wrapper(new_vectors) vecs_dt = vecs_cai.dtype cdef int64_t n_rows = vecs_cai.shape[0] cdef uint32_t dim = vecs_cai.shape[1] _check_input_array(vecs_cai, [np.dtype(index.active_index_type)], exp_cols=index.dim) idx_cai = cai_wrapper(new_indices) _check_input_array(idx_cai, [np.dtype('int64')], exp_rows=n_rows) if len(idx_cai.shape)!=1: raise ValueError("Indices array is expected to be 1D") cdef optional[device_vector_view[int64_t, int64_t]] new_indices_opt cdef IndexFloat idx_float cdef IndexInt8 idx_int8 cdef IndexUint8 idx_uint8 if vecs_dt == np.float32: idx_float = index if idx_float.index.size() > 0: new_indices_opt = make_device_vector_view( <int64_t *><uintptr_t>idx_cai.data, <int64_t>idx_cai.shape[0]) with cuda_interruptible(): c_ivf_flat.extend(deref(handle_), get_dmv_float(vecs_cai, check_shape=True), new_indices_opt, idx_float.index) elif vecs_dt == np.int8: idx_int8 = index if idx_int8.index[0].size() > 0: new_indices_opt = make_device_vector_view( <int64_t *><uintptr_t>idx_cai.data, <int64_t>idx_cai.shape[0]) with cuda_interruptible(): c_ivf_flat.extend(deref(handle_), get_dmv_int8(vecs_cai, check_shape=True), new_indices_opt, idx_int8.index) elif vecs_dt == np.uint8: idx_uint8 = index if idx_uint8.index[0].size() > 0: new_indices_opt = make_device_vector_view( <int64_t *><uintptr_t>idx_cai.data, <int64_t>idx_cai.shape[0]) with cuda_interruptible(): c_ivf_flat.extend(deref(handle_), get_dmv_uint8(vecs_cai, check_shape=True), new_indices_opt, idx_uint8.index) else: raise TypeError("query dtype %s not supported" % 
                        vecs_dt)

    return index


cdef class SearchParams:
    """
    IVF-FLAT search parameters

    Parameters
    ----------
    n_probes : int, default = 20
        The number of coarse clusters to select for the fine search.
    """
    cdef c_ivf_flat.search_params params

    def __init__(self, *, n_probes=20):
        self.params.n_probes = n_probes

    def __repr__(self):
        attr_str = [attr + "=" + str(getattr(self, attr))
                    for attr in ["n_probes"]]
        return "SearchParams(type=IVF-FLAT, " + (", ".join(attr_str)) + ")"

    @property
    def n_probes(self):
        return self.params.n_probes


@auto_sync_handle
@auto_convert_output
def search(SearchParams search_params,
           Index index,
           queries,
           k,
           neighbors=None,
           distances=None,
           handle=None):
    """
    Find the k nearest neighbors for each query.

    Parameters
    ----------
    search_params : SearchParams
    index : Index
        Trained IVF-FLAT index.
    queries : CUDA array interface compliant matrix shape (n_samples, dim)
        Supported dtype [float, int8, uint8]
    k : int
        The number of neighbors.
    neighbors : Optional CUDA array interface compliant matrix shape
        (n_queries, k), dtype int64_t. If supplied, neighbor indices will
        be written here in-place. (default None)
    distances : Optional CUDA array interface compliant matrix shape
        (n_queries, k). If supplied, the distances to the neighbors will
        be written here in-place. (default None)
    {handle_docstring}

    Examples
    --------
    >>> import cupy as cp

    >>> from pylibraft.common import DeviceResources
    >>> from pylibraft.neighbors import ivf_flat

    >>> n_samples = 50000
    >>> n_features = 50
    >>> n_queries = 1000
    >>> dataset = cp.random.random_sample((n_samples, n_features),
    ...                                   dtype=cp.float32)

    >>> # Build index
    >>> handle = DeviceResources()
    >>> index = ivf_flat.build(ivf_flat.IndexParams(), dataset,
    ...                        handle=handle)

    >>> # Search using the built index
    >>> queries = cp.random.random_sample((n_queries, n_features),
    ...                                   dtype=cp.float32)
    >>> k = 10
    >>> search_params = ivf_flat.SearchParams(
    ...     n_probes=20
    ... )
    >>> distances, neighbors = ivf_flat.search(search_params, index,
    ...
queries, k, handle=handle) >>> # pylibraft functions are often asynchronous so the >>> # handle needs to be explicitly synchronized >>> handle.sync() >>> neighbors = cp.asarray(neighbors) >>> distances = cp.asarray(distances) """ if not index.trained: raise ValueError("Index need to be built before calling search.") if handle is None: handle = DeviceResources() cdef device_resources* handle_ = \ <device_resources*><size_t>handle.getHandle() queries_cai = cai_wrapper(queries) queries_dt = queries_cai.dtype cdef uint32_t n_queries = queries_cai.shape[0] _check_input_array(queries_cai, [np.dtype(index.active_index_type)], exp_cols=index.dim) if neighbors is None: neighbors = device_ndarray.empty((n_queries, k), dtype='int64') neighbors_cai = cai_wrapper(neighbors) _check_input_array(neighbors_cai, [np.dtype('int64')], exp_rows=n_queries, exp_cols=k) if distances is None: distances = device_ndarray.empty((n_queries, k), dtype='float32') distances_cai = cai_wrapper(distances) _check_input_array(distances_cai, [np.dtype('float32')], exp_rows=n_queries, exp_cols=k) cdef c_ivf_flat.search_params params = search_params.params cdef IndexFloat idx_float cdef IndexInt8 idx_int8 cdef IndexUint8 idx_uint8 if queries_dt == np.float32: idx_float = index with cuda_interruptible(): c_ivf_flat.search(deref(handle_), params, deref(idx_float.index), get_dmv_float(queries_cai, check_shape=True), get_dmv_int64(neighbors_cai, check_shape=True), get_dmv_float(distances_cai, check_shape=True)) elif queries_dt == np.byte: idx_int8 = index with cuda_interruptible(): c_ivf_flat.search(deref(handle_), params, deref(idx_int8.index), get_dmv_int8(queries_cai, check_shape=True), get_dmv_int64(neighbors_cai, check_shape=True), get_dmv_float(distances_cai, check_shape=True)) elif queries_dt == np.ubyte: idx_uint8 = index with cuda_interruptible(): c_ivf_flat.search(deref(handle_), params, deref(idx_uint8.index), get_dmv_uint8(queries_cai, check_shape=True), get_dmv_int64(neighbors_cai, 
check_shape=True), get_dmv_float(distances_cai, check_shape=True)) else: raise ValueError("query dtype %s not supported" % queries_dt) return (distances, neighbors) @auto_sync_handle def save(filename, Index index, handle=None): """ Saves the index to a file. Saving / loading the index is experimental. The serialization format is subject to change. Parameters ---------- filename : string Name of the file. index : Index Trained IVF-Flat index. {handle_docstring} Examples -------- >>> import cupy as cp >>> from pylibraft.common import DeviceResources >>> from pylibraft.neighbors import ivf_flat >>> n_samples = 50000 >>> n_features = 50 >>> dataset = cp.random.random_sample((n_samples, n_features), ... dtype=cp.float32) >>> # Build index >>> handle = DeviceResources() >>> index = ivf_flat.build(ivf_flat.IndexParams(), dataset, ... handle=handle) >>> ivf_flat.save("my_index.bin", index, handle=handle) """ if not index.trained: raise ValueError("Index need to be built before saving it.") if handle is None: handle = DeviceResources() cdef device_resources* handle_ = \ <device_resources*><size_t>handle.getHandle() cdef string c_filename = filename.encode('utf-8') cdef IndexFloat idx_float cdef IndexInt8 idx_int8 cdef IndexUint8 idx_uint8 if index.active_index_type == "float32": idx_float = index c_ivf_flat.serialize_file( deref(handle_), c_filename, deref(idx_float.index)) elif index.active_index_type == "byte": idx_int8 = index c_ivf_flat.serialize_file( deref(handle_), c_filename, deref(idx_int8.index)) elif index.active_index_type == "ubyte": idx_uint8 = index c_ivf_flat.serialize_file( deref(handle_), c_filename, deref(idx_uint8.index)) else: raise ValueError( "Index dtype %s not supported" % index.active_index_type) @auto_sync_handle def load(filename, handle=None): """ Loads index from a file. Saving / loading the index is experimental. 
The serialization format is subject to change, therefore loading an index saved with a previous version of raft is not guaranteed to work. Parameters ---------- filename : string Name of the file. {handle_docstring} Returns ------- index : Index Examples -------- >>> import cupy as cp >>> from pylibraft.common import DeviceResources >>> from pylibraft.neighbors import ivf_flat >>> n_samples = 50000 >>> n_features = 50 >>> dataset = cp.random.random_sample((n_samples, n_features), ... dtype=cp.float32) >>> # Build and save index >>> handle = DeviceResources() >>> index = ivf_flat.build(ivf_flat.IndexParams(), dataset, ... handle=handle) >>> ivf_flat.save("my_index.bin", index, handle=handle) >>> del index >>> n_queries = 100 >>> queries = cp.random.random_sample((n_queries, n_features), ... dtype=cp.float32) >>> handle = DeviceResources() >>> index = ivf_flat.load("my_index.bin", handle=handle) >>> distances, neighbors = ivf_flat.search(ivf_flat.SearchParams(), ... index, queries, k=10, ... 
handle=handle) """ if handle is None: handle = DeviceResources() cdef device_resources* handle_ = \ <device_resources*><size_t>handle.getHandle() cdef string c_filename = filename.encode('utf-8') cdef IndexFloat idx_float cdef IndexInt8 idx_int8 cdef IndexUint8 idx_uint8 with open(filename, 'rb') as f: type_str = f.read(3).decode('utf-8') dataset_dt = np.dtype(type_str) if dataset_dt == np.float32: idx_float = IndexFloat(handle) c_ivf_flat.deserialize_file( deref(handle_), c_filename, idx_float.index) idx_float.trained = True idx_float.active_index_type = 'float32' return idx_float elif dataset_dt == np.byte: idx_int8 = IndexInt8(handle) c_ivf_flat.deserialize_file( deref(handle_), c_filename, idx_int8.index) idx_int8.trained = True idx_int8.active_index_type = 'byte' return idx_int8 elif dataset_dt == np.ubyte: idx_uint8 = IndexUint8(handle) c_ivf_flat.deserialize_file( deref(handle_), c_filename, idx_uint8.index) idx_uint8.trained = True idx_uint8.active_index_type = 'ubyte' return idx_uint8 else: raise ValueError("Index dtype %s not supported" % dataset_dt)
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors/ivf_flat
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors/ivf_flat/cpp/c_ivf_flat.pxd
# # Copyright (c) 2023, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # cython: profile=False # distutils: language = c++ # cython: embedsignature = True # cython: language_level = 3 import numpy as np import pylibraft.common.handle from cython.operator cimport dereference as deref from libc.stdint cimport int8_t, int64_t, uint8_t, uint32_t, uintptr_t from libcpp cimport bool, nullptr from libcpp.string cimport string from pylibraft.common.cpp.mdspan cimport ( device_matrix_view, device_vector_view, host_matrix_view, make_device_matrix_view, make_host_matrix_view, row_major, ) from pylibraft.common.cpp.optional cimport optional from pylibraft.common.handle cimport device_resources from pylibraft.distance.distance_type cimport DistanceType from pylibraft.neighbors.ivf_pq.cpp.c_ivf_pq cimport ( ann_index, ann_index_params, ann_search_params, ) from rmm._lib.memory_resource cimport device_memory_resource cdef extern from "raft/neighbors/ivf_flat_types.hpp" \ namespace "raft::neighbors::ivf_flat" nogil: cpdef cppclass index_params(ann_index_params): uint32_t n_lists uint32_t kmeans_n_iters double kmeans_trainset_fraction bool adaptive_centers bool conservative_memory_allocation cdef cppclass index[T, IdxT](ann_index): index(const device_resources& handle, DistanceType metric, uint32_t n_lists, bool adaptive_centers, bool conservative_memory_allocation, uint32_t dim) IdxT size() uint32_t dim() DistanceType metric() uint32_t n_lists() bool adaptive_centers() 
cpdef cppclass search_params(ann_search_params): uint32_t n_probes cdef extern from "raft_runtime/neighbors/ivf_flat.hpp" \ namespace "raft::runtime::neighbors::ivf_flat" nogil: cdef void build(const device_resources&, const index_params& params, device_matrix_view[float, int64_t, row_major] dataset, index[float, int64_t]& index) except + cdef void build(const device_resources& handle, const index_params& params, device_matrix_view[int8_t, int64_t, row_major] dataset, index[int8_t, int64_t]& index) except + cdef void build(const device_resources& handle, const index_params& params, device_matrix_view[uint8_t, int64_t, row_major] dataset, index[uint8_t, int64_t]& index) except + cdef void extend( const device_resources& handle, device_matrix_view[float, int64_t, row_major] new_vectors, optional[device_vector_view[int64_t, int64_t]] new_indices, index[float, int64_t]* index) except + cdef void extend( const device_resources& handle, device_matrix_view[int8_t, int64_t, row_major] new_vectors, optional[device_vector_view[int64_t, int64_t]] new_indices, index[int8_t, int64_t]* index) except + cdef void extend( const device_resources& handle, device_matrix_view[uint8_t, int64_t, row_major] new_vectors, optional[device_vector_view[int64_t, int64_t]] new_indices, index[uint8_t, int64_t]* index) except + cdef void search( const device_resources& handle, const search_params& params, const index[float, int64_t]& index, device_matrix_view[float, int64_t, row_major] queries, device_matrix_view[int64_t, int64_t, row_major] neighbors, device_matrix_view[float, int64_t, row_major] distances) except + cdef void search( const device_resources& handle, const search_params& params, const index[int8_t, int64_t]& index, device_matrix_view[int8_t, int64_t, row_major] queries, device_matrix_view[int64_t, int64_t, row_major] neighbors, device_matrix_view[float, int64_t, row_major] distances) except + cdef void search( const device_resources& handle, const search_params& params, const 
index[uint8_t, int64_t]& index, device_matrix_view[uint8_t, int64_t, row_major] queries, device_matrix_view[int64_t, int64_t, row_major] neighbors, device_matrix_view[float, int64_t, row_major] distances) except + cdef void serialize(const device_resources& handle, string& str, const index[float, int64_t]& index) except + cdef void deserialize(const device_resources& handle, const string& str, index[float, int64_t]* index) except + cdef void serialize(const device_resources& handle, string& str, const index[uint8_t, int64_t]& index) except + cdef void deserialize(const device_resources& handle, const string& str, index[uint8_t, int64_t]* index) except + cdef void serialize(const device_resources& handle, string& str, const index[int8_t, int64_t]& index) except + cdef void deserialize(const device_resources& handle, const string& str, index[int8_t, int64_t]* index) except + cdef void serialize_file(const device_resources& handle, const string& filename, const index[float, int64_t]& index) except + cdef void deserialize_file(const device_resources& handle, const string& filename, index[float, int64_t]* index) except + cdef void serialize_file(const device_resources& handle, const string& filename, const index[uint8_t, int64_t]& index) except + cdef void deserialize_file(const device_resources& handle, const string& filename, index[uint8_t, int64_t]* index) except + cdef void serialize_file(const device_resources& handle, const string& filename, const index[int8_t, int64_t]& index) except + cdef void deserialize_file(const device_resources& handle, const string& filename, index[int8_t, int64_t]* index) except +
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors/ivf_flat
rapidsai_public_repos/cuvs/python/cuvs/cuvs/neighbors/ivf_flat/cpp/__init__.py
# Copyright (c) 2023, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #
rapidsai_public_repos/cuvs/python/cuvs/cuvs
rapidsai_public_repos/cuvs/python/cuvs/cuvs/matrix/CMakeLists.txt
# ============================================================================= # Copyright (c) 2022-2023, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except # in compliance with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software distributed under the License # is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express # or implied. See the License for the specific language governing permissions and limitations under # the License. # ============================================================================= # Set the list of Cython files to build set(cython_sources select_k.pyx) set(linked_libraries cuvs::cuvs cuvs::compiled) # Build all of the Cython targets rapids_cython_create_modules( CXX SOURCE_FILES "${cython_sources}" LINKED_LIBRARIES "${linked_libraries}" ASSOCIATED_TARGETS cuvs MODULE_PREFIX matrix_ )
rapidsai_public_repos/cuvs/python/cuvs/cuvs
rapidsai_public_repos/cuvs/python/cuvs/cuvs/matrix/select_k.pyx
# # Copyright (c) 2023, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # cython: profile=False # distutils: language = c++ # cython: embedsignature = True # cython: language_level = 3 from cython.operator cimport dereference as deref from libc.stdint cimport int64_t from libcpp cimport bool import numpy as np from pylibraft.common import auto_convert_output, cai_wrapper, device_ndarray from pylibraft.common.handle import auto_sync_handle from pylibraft.common.input_validation import is_c_contiguous from pylibraft.common.cpp.mdspan cimport ( device_matrix_view, host_matrix_view, make_device_matrix_view, make_host_matrix_view, row_major, ) from pylibraft.common.cpp.optional cimport optional from pylibraft.common.handle cimport device_resources from pylibraft.common.mdspan cimport get_dmv_float, get_dmv_int64 from pylibraft.matrix.cpp.select_k cimport select_k as c_select_k @auto_sync_handle @auto_convert_output def select_k(dataset, k=None, distances=None, indices=None, select_min=True, handle=None): """ Selects the top k items from each row in a matrix Parameters ---------- dataset : array interface compliant matrix, row-major layout, shape (n_rows, dim). Supported dtype [float] k : int Number of items to return for each row. Optional if indices or distances arrays are given (in which case their second dimension is k). distances : Optional array interface compliant matrix shape (n_rows, k), dtype float. If supplied, distances will be written here in-place. 
        (default None)
    indices : Optional array interface compliant matrix shape
        (n_rows, k), dtype int64_t. If supplied, neighbor indices will be
        written here in-place. (default None)
    select_min : bool
        Whether to select the minimum or maximum K items
    {handle_docstring}

    Returns
    -------
    distances : array interface compliant object containing resulting
        distances shape (n_rows, k)
    indices : array interface compliant object containing resulting
        indices shape (n_rows, k)

    Examples
    --------
    >>> import cupy as cp

    >>> from pylibraft.matrix import select_k

    >>> n_features = 50
    >>> n_rows = 1000

    >>> queries = cp.random.random_sample((n_rows, n_features),
    ...                                   dtype=cp.float32)
    >>> k = 40

    >>> distances, ids = select_k(queries, k)

    >>> distances = cp.asarray(distances)
    >>> ids = cp.asarray(ids)
    """
    dataset_cai = cai_wrapper(dataset)

    if k is None:
        if indices is not None:
            k = cai_wrapper(indices).shape[1]
        elif distances is not None:
            k = cai_wrapper(distances).shape[1]
        else:
            raise ValueError("Argument k must be specified if both indices "
                             "and distances args are None")

    n_rows = dataset.shape[0]
    if indices is None:
        indices = device_ndarray.empty((n_rows, k), dtype='int64')

    if distances is None:
        distances = device_ndarray.empty((n_rows, k), dtype='float32')

    distances_cai = cai_wrapper(distances)
    indices_cai = cai_wrapper(indices)

    cdef device_resources* handle_ = \
        <device_resources*><size_t>handle.getHandle()

    cdef optional[device_matrix_view[int64_t, int64_t, row_major]] in_idx

    if dataset_cai.dtype == np.float32:
        c_select_k(deref(handle_),
                   get_dmv_float(dataset_cai, check_shape=True),
                   in_idx,
                   get_dmv_float(distances_cai, check_shape=True),
                   get_dmv_int64(indices_cai, check_shape=True),
                   <bool>select_min)
    else:
        raise TypeError("dtype %s not supported" % dataset_cai.dtype)

    return distances, indices
rapidsai_public_repos/cuvs/python/cuvs/cuvs
rapidsai_public_repos/cuvs/python/cuvs/cuvs/matrix/__init__.pxd
# Copyright (c) 2022-2023, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #
rapidsai_public_repos/cuvs/python/cuvs/cuvs
rapidsai_public_repos/cuvs/python/cuvs/cuvs/matrix/__init__.py
# Copyright (c) 2022-2023, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # from .select_k import select_k __all__ = ["select_k"]
rapidsai_public_repos/cuvs/python/cuvs/cuvs/matrix
rapidsai_public_repos/cuvs/python/cuvs/cuvs/matrix/cpp/select_k.pxd
# # Copyright (c) 2023, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # cython: profile=False # distutils: language = c++ # cython: embedsignature = True # cython: language_level = 3 from libc.stdint cimport int64_t from libcpp cimport bool from pylibraft.common.cpp.mdspan cimport device_matrix_view, row_major from pylibraft.common.cpp.optional cimport optional from pylibraft.common.handle cimport device_resources cdef extern from "raft_runtime/matrix/select_k.hpp" \ namespace "raft::runtime::matrix" nogil: cdef void select_k(const device_resources & handle, device_matrix_view[float, int64_t, row_major], optional[device_matrix_view[int64_t, int64_t, row_major]], device_matrix_view[float, int64_t, row_major], device_matrix_view[int64_t, int64_t, row_major], bool) except +
rapidsai_public_repos/cuvs/python/cuvs/cuvs/matrix
rapidsai_public_repos/cuvs/python/cuvs/cuvs/matrix/cpp/__init__.py
# Copyright (c) 2023, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #
rapidsai_public_repos/cuvs/python/cuvs/cuvs/test/test_device_ndarray.py
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import numpy as np
import pytest

from pylibraft.common import device_ndarray


@pytest.mark.parametrize("order", ["F", "C"])
@pytest.mark.parametrize("dtype", [np.float32, np.float64])
def test_basic_attributes(order, dtype):
    a = np.random.random((500, 2)).astype(dtype)

    if order == "C":
        a = np.ascontiguousarray(a)
    else:
        a = np.asfortranarray(a)

    db = device_ndarray(a)
    db_host = db.copy_to_host()

    assert a.shape == db.shape
    assert a.dtype == db.dtype
    assert a.data.f_contiguous == db.f_contiguous
    assert a.data.f_contiguous == db_host.data.f_contiguous
    assert a.data.c_contiguous == db.c_contiguous
    assert a.data.c_contiguous == db_host.data.c_contiguous
    np.testing.assert_array_equal(a.tolist(), db_host.tolist())


@pytest.mark.parametrize("order", ["F", "C"])
@pytest.mark.parametrize("dtype", [np.float32, np.float64])
def test_empty(order, dtype):
    a = np.random.random((500, 2)).astype(dtype)

    if order == "C":
        a = np.ascontiguousarray(a)
    else:
        a = np.asfortranarray(a)

    db = device_ndarray.empty(a.shape, dtype=dtype, order=order)
    db_host = db.copy_to_host()

    assert a.shape == db.shape
    assert a.dtype == db.dtype
    assert a.data.f_contiguous == db.f_contiguous
    assert a.data.f_contiguous == db_host.data.f_contiguous
    assert a.data.c_contiguous == db.c_contiguous
    assert a.data.c_contiguous == db_host.data.c_contiguous
rapidsai_public_repos/cuvs/python/cuvs/cuvs/test/test_select_k.py
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import numpy as np
import pytest

from pylibraft.common import device_ndarray
from pylibraft.matrix import select_k


@pytest.mark.parametrize("n_rows", [32, 100])
@pytest.mark.parametrize("n_cols", [40, 100])
@pytest.mark.parametrize("k", [1, 5, 16, 35])
@pytest.mark.parametrize("inplace", [True, False])
def test_select_k(n_rows, n_cols, k, inplace):
    dataset = np.random.random_sample((n_rows, n_cols)).astype("float32")
    dataset_device = device_ndarray(dataset)

    indices = np.zeros((n_rows, k), dtype="int64")
    distances = np.zeros((n_rows, k), dtype="float32")
    indices_device = device_ndarray(indices)
    distances_device = device_ndarray(distances)

    ret_distances, ret_indices = select_k(
        dataset_device,
        k=k,
        distances=distances_device,
        indices=indices_device,
    )

    distances_device = ret_distances if not inplace else distances_device
    actual_distances = distances_device.copy_to_host()
    argsort = np.argsort(dataset, axis=1)

    for i in range(dataset.shape[0]):
        expected_indices = argsort[i]
        gpu_dists = actual_distances[i]

        cpu_ordered = dataset[i, expected_indices]
        np.testing.assert_allclose(
            cpu_ordered[:k], gpu_dists, atol=1e-4, rtol=1e-4
        )
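The semantics the test above verifies can be summarized by a small CPU reference: for each row, `select_k` returns the k smallest values in ascending order along with their column indices. The sketch below is a hypothetical NumPy helper (`select_k_cpu` is not part of pylibraft) showing the same row-wise top-k behavior:

```python
import numpy as np


def select_k_cpu(dataset, k):
    """CPU reference for row-wise top-k selection: return the k smallest
    values of each row (ascending) and their column indices."""
    order = np.argsort(dataset, axis=1)[:, :k]  # indices of the k smallest per row
    values = np.take_along_axis(dataset, order, axis=1)
    return values, order


# Worked example on a 2x3 matrix
data = np.array([[3.0, 1.0, 2.0],
                 [0.5, 4.0, 0.1]], dtype=np.float32)
vals, idx = select_k_cpu(data, k=2)
```

For the first row the two smallest values are 1.0 and 2.0 (columns 1 and 2); for the second row they are 0.1 and 0.5 (columns 2 and 0).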
rapidsai_public_repos/cuvs/python/cuvs/cuvs/test/test_refine.py
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import numpy as np
import pytest
from pylibraft.common import device_ndarray
from pylibraft.neighbors import refine
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import normalize

from test_ivf_pq import calc_recall, check_distances, generate_data


def run_refine(
    n_rows=500,
    n_cols=50,
    n_queries=100,
    metric="sqeuclidean",
    k0=40,
    k=10,
    inplace=False,
    dtype=np.float32,
    memory_type="device",
):
    dataset = generate_data((n_rows, n_cols), dtype)
    queries = generate_data((n_queries, n_cols), dtype)

    if metric == "inner_product":
        if dtype != np.float32:
            pytest.skip("Normalized input cannot be represented in int8")
            return
        dataset = normalize(dataset, norm="l2", axis=1)
        queries = normalize(queries, norm="l2", axis=1)

    dataset_device = device_ndarray(dataset)
    queries_device = device_ndarray(queries)

    # Calculate reference values with sklearn
    skl_metric = {"sqeuclidean": "euclidean", "inner_product": "cosine"}[
        metric
    ]
    nn_skl = NearestNeighbors(
        n_neighbors=k0, algorithm="brute", metric=skl_metric
    )
    nn_skl.fit(dataset)
    skl_dist, candidates = nn_skl.kneighbors(queries)
    candidates = candidates.astype(np.int64)
    candidates_device = device_ndarray(candidates)

    out_idx = np.zeros((n_queries, k), dtype=np.int64)
    out_dist = np.zeros((n_queries, k), dtype=np.float32)
    out_idx_device = device_ndarray(out_idx) if inplace else None
    out_dist_device = device_ndarray(out_dist) if inplace else None

    if memory_type == "device":
        if inplace:
            refine(
                dataset_device,
                queries_device,
                candidates_device,
                indices=out_idx_device,
                distances=out_dist_device,
                metric=metric,
            )
        else:
            out_dist_device, out_idx_device = refine(
                dataset_device,
                queries_device,
                candidates_device,
                k=k,
                metric=metric,
            )
        out_idx = out_idx_device.copy_to_host()
        out_dist = out_dist_device.copy_to_host()
    elif memory_type == "host":
        if inplace:
            refine(
                dataset,
                queries,
                candidates,
                indices=out_idx,
                distances=out_dist,
                metric=metric,
            )
        else:
            out_dist, out_idx = refine(
                dataset, queries, candidates, k=k, metric=metric
            )

    skl_idx = candidates[:, :k]

    recall = calc_recall(out_idx, skl_idx)
    if recall <= 0.999:
        # We did not find the same neighbor indices.
        # We could have found other neighbors with the same distance.
        if metric == "sqeuclidean":
            skl_dist = np.power(skl_dist[:, :k], 2)
        elif metric == "inner_product":
            skl_dist = 1 - skl_dist[:, :k]
        else:
            raise ValueError("Invalid metric")
        mask = out_idx != skl_idx
        assert np.all(out_dist[mask] <= skl_dist[mask] + 1.0e-6)

    check_distances(dataset, queries, metric, out_idx, out_dist, 0.001)


@pytest.mark.parametrize("n_queries", [100, 1024, 37])
@pytest.mark.parametrize("inplace", [True, False])
@pytest.mark.parametrize("metric", ["sqeuclidean", "inner_product"])
@pytest.mark.parametrize("dtype", [np.float32, np.int8, np.uint8])
@pytest.mark.parametrize("memory_type", ["device", "host"])
def test_refine_dtypes(n_queries, dtype, inplace, metric, memory_type):
    run_refine(
        n_rows=2000,
        n_queries=n_queries,
        n_cols=50,
        k0=40,
        k=10,
        dtype=dtype,
        inplace=inplace,
        metric=metric,
        memory_type=memory_type,
    )


@pytest.mark.parametrize(
    "params",
    [
        pytest.param(
            {
                "n_rows": 0,
                "n_cols": 10,
                "n_queries": 10,
                "k0": 10,
                "k": 1,
            },
            marks=pytest.mark.xfail(reason="empty dataset"),
        ),
        {"n_rows": 1, "n_cols": 10, "n_queries": 10, "k": 1, "k0": 1},
        {"n_rows": 10, "n_cols": 1, "n_queries": 10, "k": 10, "k0": 10},
        {"n_rows": 999, "n_cols": 42, "n_queries": 453, "k0": 137, "k": 53},
    ],
)
@pytest.mark.parametrize("memory_type", ["device", "host"])
def test_refine_row_col(params, memory_type):
    run_refine(
        n_rows=params["n_rows"],
        n_queries=params["n_queries"],
        n_cols=params["n_cols"],
        k0=params["k0"],
        k=params["k"],
        memory_type=memory_type,
    )


@pytest.mark.parametrize("memory_type", ["device", "host"])
def test_input_dtype(memory_type):
    with pytest.raises(Exception):
        run_refine(dtype=np.float64, memory_type=memory_type)


@pytest.mark.parametrize(
    "params",
    [
        {"idx_shape": None, "dist_shape": None, "k": None},
        {"idx_shape": [100, 9], "dist_shape": None, "k": 10},
        {"idx_shape": [101, 10], "dist_shape": None, "k": None},
        {"idx_shape": None, "dist_shape": [100, 11], "k": 10},
        {"idx_shape": None, "dist_shape": [99, 10], "k": None},
    ],
)
@pytest.mark.parametrize("memory_type", ["device", "host"])
def test_input_assertions(params, memory_type):
    n_cols = 5
    n_queries = 100
    k0 = 40
    dtype = np.float32

    dataset = generate_data((500, n_cols), dtype)
    dataset_device = device_ndarray(dataset)

    queries = generate_data((n_queries, n_cols), dtype)
    queries_device = device_ndarray(queries)

    candidates = np.random.randint(
        0, 500, size=(n_queries, k0), dtype=np.int64
    )
    candidates_device = device_ndarray(candidates)

    if params["idx_shape"] is not None:
        out_idx = np.zeros(params["idx_shape"], dtype=np.int64)
        out_idx_device = device_ndarray(out_idx)
    else:
        out_idx_device = None
    if params["dist_shape"] is not None:
        out_dist = np.zeros(params["dist_shape"], dtype=np.float32)
        out_dist_device = device_ndarray(out_dist)
    else:
        out_dist_device = None

    if memory_type == "device":
        with pytest.raises(Exception):
            distances, indices = refine(
                dataset_device,
                queries_device,
                candidates_device,
                k=params["k"],
                indices=out_idx_device,
                distances=out_dist_device,
            )
    else:
        with pytest.raises(Exception):
            distances, indices = refine(
                dataset,
                queries,
                candidates,
                k=params["k"],
                indices=out_idx,
                distances=out_dist,
            )
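The refine tests above lean on `calc_recall`, which is imported from `test_ivf_pq` and not defined in this file. Its actual implementation is not shown here; the sketch below is an assumption about what such a recall helper typically computes: the fraction of expected neighbor indices recovered per query, ignoring ordering within a row.

```python
import numpy as np


def calc_recall(found_indices, expected_indices):
    """Hypothetical recall helper: fraction of expected neighbors that
    appear in the found set, summed over queries and normalized by the
    total number of neighbor slots. Order within a row is ignored."""
    found = np.asarray(found_indices)
    expected = np.asarray(expected_indices)
    n_matched = 0
    for f_row, e_row in zip(found, expected):
        # count row-wise set overlap between found and expected neighbors
        n_matched += len(np.intersect1d(f_row, e_row))
    return n_matched / found.size


# Example: two queries with k=2; three of the four neighbor slots match
found = np.array([[0, 1], [2, 9]])
expected = np.array([[1, 0], [2, 3]])
recall = calc_recall(found, expected)
```

With this definition the first query matches both neighbors (order differs, which does not matter) and the second matches one of two, giving a recall of 0.75.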