| repo_id stringlengths 21 96 | file_path stringlengths 31 155 | content stringlengths 1 92.9M | __index_level_0__ int64 0 0 |
|---|---|---|---|
rapidsai_public_repos/deployment/source/tools | rapidsai_public_repos/deployment/source/tools/kubernetes/dask-operator.md | # Dask Operator
Many libraries in RAPIDS can leverage Dask to scale out computation onto multiple GPUs and multiple nodes.
[Dask has an operator for Kubernetes](https://kubernetes.dask.org/en/latest/operator.html) which allows you to launch Dask clusters as native Kubernetes resources.
With the operator and associate... | 0 |
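The dask-operator entry above is truncated, but its core idea is that Dask clusters become native Kubernetes resources. As a hedged sketch of what such a resource looks like (field names recalled from the Dask operator documentation; the cluster name and container image are assumptions — verify against the current docs before applying), a minimal `DaskCluster` manifest is roughly:

```yaml
apiVersion: kubernetes.dask.org/v1
kind: DaskCluster
metadata:
  name: example-cluster # hypothetical name
spec:
  worker:
    replicas: 2
    spec:
      containers:
        - name: worker
          image: "ghcr.io/dask/dask:latest" # assumed image; a RAPIDS image also works
          args: [dask-worker, --nthreads, "1"]
  scheduler:
    spec:
      containers:
        - name: scheduler
          image: "ghcr.io/dask/dask:latest"
          args: [dask-scheduler]
          ports:
            - name: tcp-comm
              containerPort: 8786
    service:
      type: ClusterIP
      selector:
        dask.org/cluster-name: example-cluster
        dask.org/component: scheduler
      ports:
        - name: tcp-comm
          protocol: TCP
          port: 8786
          targetPort: "tcp-comm"
```

Applying a manifest like this with `kubectl apply -f` asks the operator to create the scheduler and worker pods on its own.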
rapidsai_public_repos/deployment/source/tools | rapidsai_public_repos/deployment/source/tools/kubernetes/dask-helm-chart.md | # Dask Helm Chart
Dask has a [Helm Chart](https://github.com/dask/helm-chart) that creates the following resources:
- 1 x Jupyter server (preconfigured to access the Dask cluster)
- 1 x Dask scheduler
- 3 x Dask workers that connect to the scheduler (scalable)
This helm chart can be configured to run RAPIDS by provi... | 0 |
rapidsai_public_repos/deployment/source/tools | rapidsai_public_repos/deployment/source/tools/kubernetes/dask-kubernetes.md | # Dask Kubernetes
This article introduces the classic way to set up RAPIDS with `dask-kubernetes`.
## Prerequisites
- A Kubernetes cluster that can allocate GPU pods.
- [miniconda](https://docs.conda.io/en/latest/miniconda.html)
## Client environment setup
The client environment is used to set up the Dask cluster and exe... | 0 |
rapidsai_public_repos/deployment/source | rapidsai_public_repos/deployment/source/platforms/databricks.md | # Databricks
You can install RAPIDS on Databricks in a few different ways:
1. Accelerate machine learning workflows in a single-node GPU notebook environment
2. Spark users can install [RAPIDS Accelerator for Apache Spark 3.x on Databricks](https://docs.nvidia.com/spark-rapids/user-guide/latest/getting-started/databr... | 0 |
rapidsai_public_repos/deployment/source | rapidsai_public_repos/deployment/source/platforms/index.md | ---
html_theme.sidebar_secondary.remove: true
---
# Platforms
`````{gridtoctree} 1 2 2 3
:gutter: 2 2 2 2
````{grid-item-card}
:link: kubernetes
:link-type: doc
Kubernetes
^^^
Launch RAPIDS containers and clusters on Kubernetes with various tools.
{bdg}`single-node`
{bdg}`multi-node`
````
````{grid-item-card}
:link... | 0 |
rapidsai_public_repos/deployment/source | rapidsai_public_repos/deployment/source/platforms/coiled.md | # Coiled
You can deploy RAPIDS on a multi-node Dask cluster with GPUs using [Coiled](https://www.coiled.io/).
By using the [`coiled`](https://anaconda.org/conda-forge/coiled) Python library, you can set up and manage Dask clusters with GPUs and RAPIDS on cloud computing environments such as GCP or AWS.
Coiled cluster... | 0 |
rapidsai_public_repos/deployment/source | rapidsai_public_repos/deployment/source/platforms/kubeflow.md | # Kubeflow
You can use RAPIDS with Kubeflow in a single pod with [Kubeflow Notebooks](https://www.kubeflow.org/docs/components/notebooks/) or you can scale out to many pods on many nodes of the Kubernetes cluster with the [dask-operator](/tools/kubernetes/dask-operator).
```{note}
These instructions were tested again... | 0 |
rapidsai_public_repos/deployment/source | rapidsai_public_repos/deployment/source/platforms/kubernetes.md | # Kubernetes
RAPIDS integrates with Kubernetes in many ways depending on your use case.
(interactive-notebook)=
## Interactive Notebook
For single-user interactive sessions you can run the [RAPIDS docker image](/tools/rapids-docker) which contains a conda environment with the RAPIDS libraries and Jupyter for intera... | 0 |
rapidsai_public_repos/deployment/source | rapidsai_public_repos/deployment/source/platforms/kserve.md | # KServe
[KServe](https://kserve.github.io/website) is a standard model inference platform built for Kubernetes. It provides a consistent interface for multiple machine learning frameworks.
On this page, we will show you how to deploy RAPIDS models using KServe.
```{note}
These instructions were tested against KServe v... | 0 |
rapidsai_public_repos/deployment/source | rapidsai_public_repos/deployment/source/platforms/colab.md | # RAPIDS on Google Colab
## Overview
This guide is broken into two sections:
1. [RAPIDS Quick Install](colab-quick) - applicable for most users
2. [RAPIDS Custom Setup Instructions](colab-custom) - step-by-step setup instructions covering the **must haves** for when a user needs to adapt an instance to their workflows... | 0 |
rapidsai_public_repos/deployment/source | rapidsai_public_repos/deployment/source/_includes/install-rapids-with-docker.md | There is a selection of methods you can use to install RAPIDS, which you can see via the [RAPIDS release selector](https://docs.rapids.ai/install#selector).
For this example we are going to run the RAPIDS Docker container so we need to know the name of the most recent container.
On the release selector choose **Docker... | 0 |
rapidsai_public_repos/deployment/source | rapidsai_public_repos/deployment/source/_includes/check-gpu-pod-works.md | Let's create a sample pod that uses some GPU compute to make sure that everything is working as expected.
```console
$ cat << EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
name: cuda-vectoradd
spec:
restartPolicy: OnFailure
containers:
- name: cuda-vectoradd
image: "nvidia/samples:vectoradd-... | 0 |
rapidsai_public_repos/deployment/source | rapidsai_public_repos/deployment/source/_includes/test-rapids-docker-vm.md | In the terminal we can open `ipython` and check that we can import and use RAPIDS libraries like `cudf`.
```ipython
In [1]: import cudf
In [2]: df = cudf.datasets.timeseries()
In [3]: df.head()
Out[3]:
id name x y
timestamp
2000-01-01 00:00:00 1020 Kevin 0.091536 0.6644... | 0 |
rapidsai_public_repos/deployment/source/_includes | rapidsai_public_repos/deployment/source/_includes/menus/aws.md | `````{grid} 1 2 2 3
:gutter: 2 2 2 2
````{grid-item-card}
:link: /cloud/aws/ec2
:link-type: doc
Elastic Compute Cloud (EC2)
^^^
Launch an EC2 instance and run RAPIDS.
{bdg}`single-node`
````
````{grid-item-card}
:link: /cloud/aws/ec2-multi
:link-type: doc
EC2 Cluster (with Dask)
^^^
Launch a RAPIDS cluster on EC2 wi... | 0 |
rapidsai_public_repos/deployment/source/_includes | rapidsai_public_repos/deployment/source/_includes/menus/azure.md | `````{grid} 1 2 2 3
:gutter: 2 2 2 2
````{grid-item-card}
:link: /cloud/azure/azure-vm
:link-type: doc
Azure Virtual Machine
^^^
Launch an Azure VM instance and run RAPIDS.
{bdg}`single-node`
````
````{grid-item-card}
:link: /cloud/azure/aks
:link-type: doc
Azure Kubernetes Service (AKS)
^^^
Launch a RAPIDS cluster ... | 0 |
rapidsai_public_repos/deployment/source/_includes | rapidsai_public_repos/deployment/source/_includes/menus/gcp.md | `````{grid} 1 2 2 3
:gutter: 2 2 2 2
````{grid-item-card}
:link: /cloud/gcp/compute-engine
:link-type: doc
Compute Engine Instance
^^^
Launch a Compute Engine instance and run RAPIDS.
{bdg}`single-node`
````
````{grid-item-card}
:link: /cloud/gcp/vertex-ai
:link-type: doc
Vertex AI
^^^
Launch the RAPIDS container in... | 0 |
rapidsai_public_repos/deployment/source/_includes | rapidsai_public_repos/deployment/source/_includes/menus/ibm.md | `````{grid} 1 2 2 3
:gutter: 2 2 2 2
````{grid-item-card}
:link: /cloud/ibm/virtual-server
:link-type: doc
IBM Virtual Server
^^^
Launch a virtual server and run RAPIDS.
{bdg}`single-node`
````
`````
| 0 |
rapidsai_public_repos/deployment/source/_includes | rapidsai_public_repos/deployment/source/_includes/menus/nvidia.md | `````{grid} 1 2 2 3
:gutter: 2 2 2 2
````{grid-item-card}
:link: /cloud/nvidia/bcp
:link-type: doc
Base Command Platform
^^^
Run RAPIDS workloads on NVIDIA DGX Cloud with Base Command Platform.
{bdg}`single-node`
{bdg}`multi-node`
````
`````
| 0 |
rapidsai_public_repos/deployment/source | rapidsai_public_repos/deployment/source/_templates/notebooks-tag-filter.html | <nav class="bd-links" id="bd-docs-nav" aria-label="Section navigation">
<p class="bd-links__title" role="heading" aria-level="1">
Tag filters
<small>(<a href="#" id="resetfilters">reset</a>)</small>
</p>
{% for section in sorted(notebook_tag_tree) %}
<fieldset aria-level="2" class="caption" role="headi... | 0 |
rapidsai_public_repos/deployment/source | rapidsai_public_repos/deployment/source/_templates/notebooks-extra-files-nav.html | {% if related_notebook_files %} {% macro gen_list(root, dir, related_files) -%}
{{ dir }}
<ul class="visible nav section-nav flex-column">
{% for name in related_files|sort(case_sensitive=False) %}
<li class="toc-h2 nav-item toc-entry">
{% if related_files[name] is mapping %} {{ gen_list(root + name + "/", name... | 0 |
rapidsai_public_repos/deployment/source | rapidsai_public_repos/deployment/source/_templates/notebooks-tags.html | {% if notebook_tags %}
<div class="tocsection onthispage"><i class="fa-solid fa-tags"></i> Tags</div>
<nav id="bd-toc-nav" class="page-toc">
<div class="tagwrapper">
{% for tag in notebook_tags %}
<a href="../../?filters={{ tag }}">
<span class="sd-sphinx-override sd-badge">{{ tag }}</span>
</a>
... | 0 |
rapidsai_public_repos/deployment/source | rapidsai_public_repos/deployment/source/cloud/index.md | ---
html_theme.sidebar_secondary.remove: true
---
# Cloud
## NVIDIA DGX Cloud
```{include} ../_includes/menus/nvidia.md
```
## Amazon Web Services
```{include} ../_includes/menus/aws.md
```
## Microsoft Azure
```{include} ../_includes/menus/azure.md
```
## Google Cloud Platform
```{include} ../_includes/men... | 0 |
rapidsai_public_repos/deployment/source/cloud | rapidsai_public_repos/deployment/source/cloud/ibm/virtual-server.md | # Virtual Server for VPC
## Create Instance
Create a new [Virtual Server (for VPC)](https://www.ibm.com/cloud/virtual-servers) with GPUs, the [NVIDIA Driver](https://www.nvidia.co.uk/Download/index.aspx) and the [NVIDIA Container Runtime](https://developer.nvidia.com/nvidia-container-runtime).
1. Open the [**Virtual... | 0 |
rapidsai_public_repos/deployment/source/cloud | rapidsai_public_repos/deployment/source/cloud/ibm/index.md | ---
html_theme.sidebar_secondary.remove: true
---
# IBM Cloud
```{include} ../../_includes/menus/ibm.md
```
RAPIDS can be deployed on IBM Cloud in several ways. See the
list of accelerated instance types below:
| Cloud <br> Provider | Inst. <br> Type | vCPUs | Inst. <br> Name | GPU <br> Count | GPU <br> T... | 0 |
rapidsai_public_repos/deployment/source/cloud | rapidsai_public_repos/deployment/source/cloud/gcp/compute-engine.md | # Compute Engine Instance
## Create Virtual Machine
Create a new [Compute Engine Instance](https://cloud.google.com/compute/docs/instances) with GPUs, the [NVIDIA Driver](https://www.nvidia.co.uk/Download/index.aspx) and the [NVIDIA Container Runtime](https://developer.nvidia.com/nvidia-container-runtime).
NVIDIA ma... | 0 |
rapidsai_public_repos/deployment/source/cloud | rapidsai_public_repos/deployment/source/cloud/gcp/dataproc.md | # Dataproc
RAPIDS can be deployed on Google Cloud Dataproc using Dask. For more details, see our **[detailed instructions and helper scripts.](https://github.com/GoogleCloudDataproc/initialization-actions/tree/master/rapids)**
**0. Copy initialization actions to your own Cloud Storage bucket.** Don't create clusters ... | 0 |
rapidsai_public_repos/deployment/source/cloud | rapidsai_public_repos/deployment/source/cloud/gcp/gke.md | # Google Kubernetes Engine
RAPIDS can be deployed on Google Cloud via the [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine) (GKE).
To run RAPIDS you'll need a Kubernetes cluster with GPUs available.
## Prerequisites
First you'll need to have the [`gcloud` CLI tool](https://cloud.google.com/sdk/... | 0 |
rapidsai_public_repos/deployment/source/cloud | rapidsai_public_repos/deployment/source/cloud/gcp/index.md | ---
html_theme.sidebar_secondary.remove: true
---
# Google Cloud Platform
```{include} ../../_includes/menus/gcp.md
```
RAPIDS can be deployed on Google Cloud Platform in several ways. Google Cloud supports various kinds of GPU VMs for different needs. Please visit the Google Cloud documentation for [an overview of... | 0 |
rapidsai_public_repos/deployment/source/cloud | rapidsai_public_repos/deployment/source/cloud/gcp/vertex-ai.md | # Vertex AI
RAPIDS can be deployed on [Vertex AI Workbench](https://cloud.google.com/vertex-ai-workbench).
For new user-managed notebooks, it is recommended to use a RAPIDS Docker image to access the latest RAPIDS software.
## Prepare RAPIDS Docker Image
Before configuring a new notebook, the [RAPIDS Docker image]... | 0 |
rapidsai_public_repos/deployment/source/cloud | rapidsai_public_repos/deployment/source/cloud/azure/azureml.md | # Azure Machine Learning
RAPIDS can be deployed at scale using [Azure Machine Learning Service](https://learn.microsoft.com/en-us/azure/machine-learning/overview-what-is-azure-machine-learning) and easily scales up to any size needed.
## Prerequisites
Use an existing or create a new Azure Machine Learning workspace thro... | 0 |
rapidsai_public_repos/deployment/source/cloud | rapidsai_public_repos/deployment/source/cloud/azure/azure-vm-multi.md | # Azure VM Cluster (via Dask)
## Create a Cluster using Dask Cloud Provider
The easiest way to set up a multi-node, multi-GPU cluster on Azure is to use [Dask Cloud Provider](https://cloudprovider.dask.org/en/latest/azure.html).
### 1. Install Dask Cloud Provider
Dask Cloud Provider can be installed via `conda` or `... | 0 |
rapidsai_public_repos/deployment/source/cloud | rapidsai_public_repos/deployment/source/cloud/azure/azure-vm.md | # Azure Virtual Machine
## Create Virtual Machine
Create a new [Azure Virtual Machine](https://azure.microsoft.com/en-gb/products/virtual-machines/) with GPUs, the [NVIDIA Driver](https://www.nvidia.co.uk/Download/index.aspx) and the [NVIDIA Container Runtime](https://developer.nvidia.com/nvidia-container-runtime).
... | 0 |
rapidsai_public_repos/deployment/source/cloud | rapidsai_public_repos/deployment/source/cloud/azure/aks.md | # Azure Kubernetes Service
RAPIDS can be deployed on Azure via the [Azure Kubernetes Service](https://azure.microsoft.com/en-us/products/kubernetes-service/) (AKS).
To run RAPIDS you'll need a Kubernetes cluster with GPUs available.
## Prerequisites
First you'll need to have the [`az` CLI tool](https://learn.micros... | 0 |
rapidsai_public_repos/deployment/source/cloud | rapidsai_public_repos/deployment/source/cloud/azure/index.md | ---
html_theme.sidebar_secondary.remove: true
---
# Microsoft Azure
```{include} ../../_includes/menus/azure.md
```
RAPIDS can be deployed on Microsoft Azure in several ways. Azure supports various kinds of GPU VMs for different needs.
For RAPIDS users we recommend NC/ND VMs for computation and deep learning optimi... | 0 |
rapidsai_public_repos/deployment/source/cloud | rapidsai_public_repos/deployment/source/cloud/aws/sagemaker.md | # SageMaker
RAPIDS can be used in a few ways with [AWS SageMaker](https://aws.amazon.com/sagemaker/).
## SageMaker Notebooks
[SageMaker Notebook Instances](https://docs.aws.amazon.com/sagemaker/latest/dg/nbi.html) can be augmented with a RAPIDS conda environment.
We can add a RAPIDS conda environment to the set of ... | 0 |
rapidsai_public_repos/deployment/source/cloud | rapidsai_public_repos/deployment/source/cloud/aws/ec2-multi.md | # EC2 Cluster (via Dask)
To launch a multi-node cluster on AWS EC2 we recommend you use [Dask Cloud Provider](https://cloudprovider.dask.org/en/latest/), a native cloud integration for Dask. It helps manage Dask clusters on different cloud platforms.
## Local Environment Setup
Before running these instructions, ensu... | 0 |
rapidsai_public_repos/deployment/source/cloud | rapidsai_public_repos/deployment/source/cloud/aws/ecs.md | # Elastic Container Service (ECS)
RAPIDS can be deployed on a multi-node ECS cluster using Dask’s dask-cloudprovider management tools. For more details, see our **[blog post on
deploying on ECS.](https://medium.com/rapids-ai/getting-started-with-rapids-on-aws-ecs-using-dask-cloud-provider-b1adfdbc9c6e)**
## Run from ... | 0 |
rapidsai_public_repos/deployment/source/cloud | rapidsai_public_repos/deployment/source/cloud/aws/ec2.md | # Elastic Compute Cloud (EC2)
## Create Instance
Create a new [EC2 Instance](https://aws.amazon.com/ec2/) with GPUs, the [NVIDIA Driver](https://www.nvidia.co.uk/Download/index.aspx) and the [NVIDIA Container Runtime](https://developer.nvidia.com/nvidia-container-runtime).
NVIDIA maintains an [Amazon Machine Image (... | 0 |
rapidsai_public_repos/deployment/source/cloud | rapidsai_public_repos/deployment/source/cloud/aws/eks.md | # AWS Elastic Kubernetes Service (EKS)
RAPIDS can be deployed on AWS via the [Elastic Kubernetes Service](https://aws.amazon.com/eks/) (EKS).
To run RAPIDS you'll need a Kubernetes cluster with GPUs available.
## Prerequisites
First you'll need to have the [`aws` CLI tool](https://aws.amazon.com/cli/) and [`eksctl`... | 0 |
rapidsai_public_repos/deployment/source/cloud | rapidsai_public_repos/deployment/source/cloud/aws/index.md | ---
html_theme.sidebar_secondary.remove: true
---
# Amazon Web Services
```{include} ../../_includes/menus/aws.md
```
RAPIDS can be deployed on Amazon Web Services (AWS) in several ways. See the
list of accelerated instance types below:
| Cloud <br> Provider | Inst. <br> Type | Inst. <br> Name | GPU <br> Count | G... | 0 |
rapidsai_public_repos/deployment/source/cloud | rapidsai_public_repos/deployment/source/cloud/nvidia/bcp.md | # Base Command Platform (BCP)
[NVIDIA Base Command™ Platform (BCP)](https://www.nvidia.com/en-gb/data-center/base-command-platform/) is a software service in [NVIDIA DGX Cloud](https://www.nvidia.com/en-us/data-center/dgx-cloud/) for enterprise-class AI training that enables businesses and their data scientists to acc... | 0 |
rapidsai_public_repos/deployment/source/cloud | rapidsai_public_repos/deployment/source/cloud/nvidia/index.md | ---
html_theme.sidebar_secondary.remove: true
---
# NVIDIA DGX Cloud
```{include} ../../_includes/menus/nvidia.md
```
```{toctree}
---
hidden: true
---
bcp
```
| 0 |
rapidsai_public_repos/deployment/source | rapidsai_public_repos/deployment/source/examples/index.md | ---
html_theme.sidebar_secondary.remove: true
---
# Workflow Examples
```{notebookgallerytoctree}
xgboost-gpu-hpo-job-parallel-ngc/notebook
xgboost-gpu-hpo-job-parallel-k8s/notebook
rapids-optuna-hpo/notebook
rapids-sagemaker-higgs/notebook
rapids-sagemaker-hpo/notebook
rapids-ec2-mnmg/notebook
rapids-autoscaling-mul... | 0 |
rapidsai_public_repos/deployment/source/examples | rapidsai_public_repos/deployment/source/examples/rapids-ec2-mnmg/notebook.ipynb | from dask.distributed import Client
client = Client(cluster)
client
import math
from datetime import datetime
import cudf
import dask
import dask_cudf
import numpy as np
from cuml.dask.common import utils as dask_utils
from cuml.dask.ensemble import RandomForestRegressor
from cuml.metrics import mean_squared_error
fr... | 0 |
rapidsai_public_repos/deployment/source/examples | rapidsai_public_repos/deployment/source/examples/xgboost-gpu-hpo-job-parallel-k8s/notebook.ipynb | # Choose the same RAPIDS image you used for launching the notebook session
rapids_image = "{{ rapids_container }}"
# Use the number of worker nodes in your Kubernetes cluster.
n_workers = 4
from dask_kubernetes.operator import KubeCluster
cluster = KubeCluster(
name="rapids-dask",
image=rapids_image,
worker... | 0 |
rapidsai_public_repos/deployment/source/examples | rapidsai_public_repos/deployment/source/examples/xgboost-azure-mnmg-daskcloudprovider/notebook.ipynb | # # Uncomment the following and install some libraries at the beginning.
# If adlfs is not present, install adlfs to read from Azure data lake.
! pip install adlfs
! pip install "dask-cloudprovider[azure]" --upgrade
from dask.distributed import Client, wait, get_worker
from dask_cloudprovider.azure import AzureVMCluster... | 0 |
rapidsai_public_repos/deployment/source/examples/xgboost-azure-mnmg-daskcloudprovider | rapidsai_public_repos/deployment/source/examples/xgboost-azure-mnmg-daskcloudprovider/configs/cloud_init.yaml.j2 | #cloud-config
# Bootstrap
packages:
- apt-transport-https
- ca-certificates
- curl
- gnupg-agent
- software-properties-common
- ubuntu-drivers-common
# Enable ipv4 forwarding, required on CIS hardened machines
write_files:
- path: /etc/sysctl.d/enabled_ipv4_forwarding.conf
content: |
net.ipv4... | 0 |
rapidsai_public_repos/deployment/source/examples | rapidsai_public_repos/deployment/source/examples/rapids-autoscaling-multi-tenant-kubernetes/rapids-notebook.yaml | # rapids-notebook.yaml (extended)
apiVersion: v1
kind: ServiceAccount
metadata:
name: rapids-dask
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: rapids-dask
rules:
- apiGroups: [""]
resources: ["events"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["pods", "... | 0 |
rapidsai_public_repos/deployment/source/examples | rapidsai_public_repos/deployment/source/examples/rapids-autoscaling-multi-tenant-kubernetes/notebook.ipynb | from dask_kubernetes.operator import KubeCluster
cluster = KubeCluster(
name="rapids-dask-1",
image="rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu22.04-py3.10", # Replace me with your cached image
n_workers=4,
resources={"limits": {"nvidia.com/gpu": "1"}},
env={"EXTRA_PIP_PACKAGES": "gcsfs"... | 0 |
rapidsai_public_repos/deployment/source/examples | rapidsai_public_repos/deployment/source/examples/rapids-autoscaling-multi-tenant-kubernetes/image-prepuller.yaml | # image-prepuller.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: prepull-rapids
spec:
selector:
matchLabels:
name: prepull-rapids
template:
metadata:
labels:
name: prepull-rapids
spec:
initContainers:
- name: prepull-rapids
image: us-central1-docke... | 0 |
rapidsai_public_repos/deployment/source/examples | rapidsai_public_repos/deployment/source/examples/rapids-autoscaling-multi-tenant-kubernetes/prometheus-stack-values.yaml | # prometheus-stack-values.yaml
serviceMonitorSelectorNilUsesHelmValues: false
prometheus:
prometheusSpec:
# Setting this to a high frequency so that we have richer data for analysis later
scrapeInterval: 1s
| 0 |
rapidsai_public_repos/deployment/source/examples | rapidsai_public_repos/deployment/source/examples/rapids-optuna-hpo/notebook.ipynb | ## Run this cell to install optuna
# !pip install optuna
import cudf
import cuml
import dask_cudf
import numpy as np
import optuna
import os
import dask
from cuml import LogisticRegression
from cuml.model_selection import train_test_split
from cuml.metrics import log_loss
from dask_cuda import LocalCUDACluster
from da... | 0 |
rapidsai_public_repos/deployment/source/examples | rapidsai_public_repos/deployment/source/examples/xgboost-gpu-hpo-job-parallel-ngc/notebook.ipynb | from dask.distributed import Client
client = Client("ws://localhost:8786")
client
n_workers = len(client.scheduler_info()["workers"])
def objective(trial):
x = trial.suggest_uniform("x", -10, 10)
    return (x - 2) ** 2
import optuna
from dask.distributed import wait
# Number of hyperparameter combinations to try in... | 0 |
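The notebook cells above define a toy objective, `(x - 2) ** 2`, whose minimum lies at `x = 2`. As a hedged, framework-free sanity check (plain Python only — no Optuna or Dask, so the function names here are illustrative, not from the notebook), random search over the same `[-10, 10]` interval should land near that minimum:

```python
import random


def objective(x: float) -> float:
    # Same toy objective as in the notebook: minimized at x = 2
    return (x - 2) ** 2


def random_search(n_trials: int = 2000, seed: int = 0) -> float:
    # Sample x uniformly from [-10, 10] and keep the best trial,
    # mimicking what an HPO framework does with many cheap trials.
    rng = random.Random(seed)
    best_x, best_val = 0.0, float("inf")
    for _ in range(n_trials):
        x = rng.uniform(-10, 10)
        val = objective(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x


if __name__ == "__main__":
    print(random_search())
```

With a few thousand trials the best sample sits very close to 2; frameworks like Optuna improve on this by sampling adaptively rather than uniformly.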
rapidsai_public_repos/deployment/source/examples | rapidsai_public_repos/deployment/source/examples/xgboost-randomforest-gpu-hpo-dask/notebook.ipynb | import warnings
warnings.filterwarnings("ignore")  # Reduce number of messages/warnings displayed
import time
import cudf
import cuml
import numpy as np
import pandas as pd
import xgboost as xgb
import dask_ml.model_selection as dcv
from dask.distributed import Client, wait
from dask_cuda import LocalCUDACluster
fro... | 0 |
rapidsai_public_repos/deployment/source/examples | rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-hpo/entrypoint.sh | #!/bin/bash
source activate rapids
if [[ "$1" == "serve" ]]; then
echo -e "@ entrypoint -> launching serving script \n"
python serve.py
else
echo -e "@ entrypoint -> launching training script \n"
python train.py
fi | 0 |
rapidsai_public_repos/deployment/source/examples | rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-hpo/train.py | #
# Copyright (c) 2019-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or ag... | 0 |
rapidsai_public_repos/deployment/source/examples | rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-hpo/notebook.ipynb | %pip install --upgrade boto3
import sagemaker
from helper_functions import *
execution_role = sagemaker.get_execution_role()
session = sagemaker.Session()
account = !(aws sts get-caller-identity --query Account --output text)
region = !(aws configure get region)
account, region
# please choose dataset S3 bucket and direct... | 0 |
rapidsai_public_repos/deployment/source/examples | rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-hpo/MLWorkflow.py | #
# Copyright (c) 2019-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or ag... | 0 |
rapidsai_public_repos/deployment/source/examples | rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-hpo/helper_functions.py | #
# Copyright (c) 2019-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or ag... | 0 |
rapidsai_public_repos/deployment/source/examples | rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-hpo/Dockerfile | ARG RAPIDS_IMAGE
FROM $RAPIDS_IMAGE as rapids
ENV AWS_DATASET_DIRECTORY="10_year"
ENV AWS_ALGORITHM_CHOICE="XGBoost"
ENV AWS_ML_WORKFLOW_CHOICE="multiGPU"
ENV AWS_CV_FOLDS="10"
# ensure printed output/log-messages retain correct order
ENV PYTHONUNBUFFERED=True
# add sagemaker-training-toolkit [ requires build tools... | 0 |
rapidsai_public_repos/deployment/source/examples | rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-hpo/HPOConfig.py | #
# Copyright (c) 2019-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or ag... | 0 |
rapidsai_public_repos/deployment/source/examples | rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-hpo/HPODatasets.py | """ Airline Dataset target label and feature column names """
airline_label_column = "ArrDel15"
airline_feature_columns = [
"Year",
"Quarter",
"Month",
"DayOfWeek",
"Flight_Number_Reporting_Airline",
"DOT_ID_Reporting_Airline",
"OriginCityMarketID",
"DestCityMarketID",
"DepTime",
... | 0 |
rapidsai_public_repos/deployment/source/examples | rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-hpo/serve.py | #
# Copyright (c) 2019-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or ag... | 0 |
rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-hpo | rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-hpo/workflows/MLWorkflowSingleCPU.py | #
# Copyright (c) 2019-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or ag... | 0 |
rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-hpo | rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-hpo/workflows/MLWorkflowMultiCPU.py | #
# Copyright (c) 2019-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or ag... | 0 |
rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-hpo | rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-hpo/workflows/MLWorkflowMultiGPU.py | #
# Copyright (c) 2019-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or ag... | 0 |
rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-hpo | rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-hpo/workflows/MLWorkflowSingleGPU.py | #
# Copyright (c) 2019-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or ag... | 0 |
rapidsai_public_repos/deployment/source/examples | rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-higgs/notebook.ipynb | import sagemaker
import time
import boto3
execution_role = sagemaker.get_execution_role()
session = sagemaker.Session()
region = boto3.Session().region_name
account = boto3.client("sts").get_caller_identity().get("Account")
account, region
s3_data_dir = session.upload_data(path="dataset", key_prefix="dataset/higgs-datase... | 0 |
rapidsai_public_repos/deployment/source/examples | rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-higgs/rapids-higgs.py | #!/usr/bin/env python
import argparse
import cudf
from cuml import RandomForestClassifier as cuRF
from cuml.preprocessing.model_selection import train_test_split
from sklearn.metrics import accuracy_score
def main(args):
# SageMaker options
data_dir = args.data_dir
col_names = ["label"] + [f"col-{i}" f... | 0 |
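The `rapids-higgs.py` entry above is truncated, but it evidently reads SageMaker options out of parsed `argparse` arguments (`args.data_dir`). As a hedged sketch of that pattern — the flag names and defaults below are assumptions for illustration, not taken from the truncated source:

```python
import argparse


def parse_args(argv=None):
    # Hypothetical flags mirroring common SageMaker training-script patterns;
    # the real script's options are cut off in the row above.
    parser = argparse.ArgumentParser(description="Toy training-script argument parsing")
    parser.add_argument("--data_dir", type=str, default="/opt/ml/input/data/dataset")
    parser.add_argument("--n_estimators", type=int, default=100)
    return parser.parse_args(argv)


if __name__ == "__main__":
    args = parse_args(["--n_estimators", "50"])
    print(args.data_dir, args.n_estimators)
```

Passing a list to `parse_args` (instead of reading `sys.argv`) makes the parsing easy to exercise outside the training container.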
rapidsai_public_repos/deployment/source/examples | rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-higgs/Dockerfile | ARG RAPIDS_IMAGE
FROM $RAPIDS_IMAGE as rapids
# add sagemaker-training-toolkit [ requires build tools ], flask [ serving ], and dask-ml
RUN apt-get update && apt-get install -y --no-install-recommends build-essential \
&& source activate rapids \
&& pip3 install sagemaker-training cupy-cuda11x flask \
&& ... | 0 |
rapidsai_public_repos/deployment/source/examples | rapidsai_public_repos/deployment/source/examples/rapids-azureml-hpo/notebook.ipynb | # verify Azure ML SDK version
%pip show azure-ai-ml
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential
# Get a handle to the workspace
ml_client = MLClient(
credential=DefaultAzureCredential(),
subscription_id="fc4f4a6b-4041-4b1c-8249-854d68edcf62",
resource_group_name="rap... | 0 |
rapidsai_public_repos/deployment/source/examples | rapidsai_public_repos/deployment/source/examples/rapids-azureml-hpo/train_rapids.py | #
# Copyright (c) 2019-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or ag... | 0 |
rapidsai_public_repos/deployment/source/examples | rapidsai_public_repos/deployment/source/examples/rapids-azureml-hpo/rapids_csp_azure.py | #
# Copyright (c) 2019-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or ag... | 0 |
rapidsai_public_repos/deployment/source/examples | rapidsai_public_repos/deployment/source/examples/rapids-azureml-hpo/Dockerfile | # Use rapids base image v23.02 with the necessary dependencies
FROM rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
# Update package information and install required packages
RUN apt-get update && \
apt-get install -y --no-install-recommends build-essential fuse && \
rm -rf /var/lib/apt/lists/*
# ... | 0 |
rapidsai_public_repos/deployment/source/examples | rapidsai_public_repos/deployment/source/examples/xgboost-rf-gpu-cpu-benchmark/hpo.py | import argparse
import gc
import glob
import os
import time
from functools import partial
import dask
import optuna
import pandas as pd
import xgboost as xgb
from dask.distributed import Client, LocalCluster, wait
from dask_cuda import LocalCUDACluster
from sklearn.ensemble import RandomForestClassifier as RF_cpu
from... | 0 |
rapidsai_public_repos/deployment/source/examples | rapidsai_public_repos/deployment/source/examples/xgboost-rf-gpu-cpu-benchmark/Dockerfile | FROM rapidsai/base:23.10a-cuda12.0-py3.10
RUN mamba install -y -n base optuna
| 0 |
rapidsai_public_repos/deployment/source/examples | rapidsai_public_repos/deployment/source/examples/time-series-forecasting-with-hpo/notebook.ipynb | bucket_name = "<Put the name of the bucket here>"# Test if the bucket is accessible
import gcsfs
fs = gcsfs.GCSFileSystem()
fs.ls(f"{bucket_name}/")
kaggle_username = "<Put your Kaggle username here>"
kaggle_api_key = "<Put your Kaggle API key here>"
%env KAGGLE_USERNAME=$kaggle_username
%env KAGGLE_KEY=$kaggle_api_key
... | 0 |
rapidsai_public_repos | rapidsai_public_repos/miniforge-cuda/renovate.json | {
"$schema": "https://docs.renovatebot.com/renovate-schema.json",
"extends": [
"config:base"
],
"packageRules": [
{
"matchDatasources": ["docker"],
"matchPackageNames": ["condaforge/miniforge3"],
"versioning": "loose"
}
]
}
| 0 |
rapidsai_public_repos | rapidsai_public_repos/miniforge-cuda/README.md | # miniforge-cuda
A simple set of images that install [Miniforge](https://github.com/conda-forge/miniforge) on top of the [nvidia/cuda](https://hub.docker.com/r/nvidia/cuda) images.
These images are intended to be used as a base image for other RAPIDS images. Downstream images can create a user with the `conda` user g... | 0 |
rapidsai_public_repos | rapidsai_public_repos/miniforge-cuda/matrix.yaml | CUDA_VER:
- "11.2.2"
- "11.4.3"
- "11.5.2"
- "11.8.0"
- "12.0.1"
- "12.1.1"
PYTHON_VER:
- "3.9"
- "3.10"
LINUX_VER:
- "ubuntu20.04"
- "ubuntu22.04"
- "centos7"
- "rockylinux8"
IMAGE_REPO:
- "miniforge-cuda"
exclude:
- LINUX_VER: "ubuntu22.04"
CUDA_VER: "11.2.2"
- LINUX_VER: "ubuntu22.0... | 0 |
rapidsai_public_repos | rapidsai_public_repos/miniforge-cuda/Dockerfile | ARG CUDA_VER=11.8.0
ARG LINUX_VER=ubuntu22.04
FROM nvidia/cuda:${CUDA_VER}-base-${LINUX_VER}
ARG LINUX_VER
ARG PYTHON_VER=3.10
ARG DEBIAN_FRONTEND=noninteractive
ENV PATH=/opt/conda/bin:$PATH
ENV PYTHON_VERSION=${PYTHON_VER}
# Create a conda group and assign it as root's primary group
RUN groupadd conda; \
usermod ... | 0 |
rapidsai_public_repos/miniforge-cuda | rapidsai_public_repos/miniforge-cuda/ci/compute-matrix.sh | #!/bin/bash
set -euo pipefail
case "${BUILD_TYPE}" in
pull-request)
export PR_NUM="${GITHUB_REF_NAME##*/}"
;;
branch)
;;
*)
echo "Invalid build type: '${BUILD_TYPE}'"
exit 1
;;
esac
yq -o json matrix.yaml | jq -c 'include "ci/compute-matrix"; compute_matrix(.)'
| 0 |
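The `${GITHUB_REF_NAME##*/}` expansion in the `pull-request` branch of `compute-matrix.sh` strips everything up to and including the last `/`, leaving only the final path segment. A quick illustration (the ref name used here is hypothetical):

```shell
# '##*/' removes the longest prefix matching '*/', i.e. everything
# up to and including the last slash, so only the PR number remains.
GITHUB_REF_NAME="pull-request/1234"
PR_NUM="${GITHUB_REF_NAME##*/}"
echo "$PR_NUM"
```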
rapidsai_public_repos/miniforge-cuda | rapidsai_public_repos/miniforge-cuda/ci/remove-temp-images.sh | #!/bin/bash
set -euo pipefail
logout() {
curl -X POST \
-H "Authorization: JWT $HUB_TOKEN" \
"https://hub.docker.com/v2/logout/"
}
trap logout EXIT
HUB_TOKEN=$(
curl -s -H "Content-Type: application/json" \
-X POST \
-d "{\"username\": \"${GPUCIBOT_DOCKERHUB_USER}\", \"password\": \"${GPUCIBOT_DO... | 0 |
rapidsai_public_repos/miniforge-cuda | rapidsai_public_repos/miniforge-cuda/ci/compute-matrix.jq | def compute_arch($x):
["amd64"] |
if
$x.CUDA_VER > "11.2.2" and
$x.LINUX_VER != "centos7"
then
. + ["arm64"]
else
.
end |
$x + {ARCHES: .};
# Checks the current entry to see if it matches the given exclude
def matches($entry; $exclude):
all($exclude | to_entries | .[]; $entry[.key] == .va... | 0 |
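The `compute_arch` filter above starts every matrix entry at `amd64` and adds `arm64` only when the CUDA version exceeds 11.2.2 on a non-CentOS 7 distro. A rough Python restatement of that predicate (illustrative only, not part of the repo) — note that, like jq's string `>`, it compares versions lexically, which happens to be correct for the versions listed in `matrix.yaml` but would mis-order a hypothetical "11.10.0" against "11.2.2":

```python
def compute_arch(entry):
    """Sketch of the jq compute_arch filter: begin with amd64 and
    append arm64 for CUDA > 11.2.2 on anything other than centos7."""
    arches = ["amd64"]
    # Lexicographic string comparison, mirroring jq's behavior on strings.
    if entry["CUDA_VER"] > "11.2.2" and entry["LINUX_VER"] != "centos7":
        arches.append("arm64")
    return {**entry, "ARCHES": arches}
```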
rapidsai_public_repos/miniforge-cuda | rapidsai_public_repos/miniforge-cuda/ci/create-multiarch-manifest.sh | #!/bin/bash
set -euo pipefail
LATEST_CUDA_VER=$(yq '.CUDA_VER | sort | .[-1]' matrix.yaml)
LATEST_PYTHON_VER=$(yq -o json '.PYTHON_VER' matrix.yaml | jq -r 'max_by(split(".") | map(tonumber))')
LATEST_UBUNTU_VER=$(yq '.LINUX_VER | map(select(. == "*ubuntu*")) | sort | .[-1]' matrix.yaml)
source_tags=()
tag="${IMAGE_N... | 0 |
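The `max_by(split(".") | map(tonumber))` jq expression in `create-multiarch-manifest.sh` exists because a plain string sort would rank "3.9" above "3.10". A small Python sketch of the same numeric-max selection (illustrative only):

```python
def latest_python(versions):
    """Pick the highest version numerically, like the jq
    max_by(split(".") | map(tonumber)) used in the script."""
    return max(versions, key=lambda v: tuple(map(int, v.split("."))))
```

A plain `max(versions)` would compare character by character and return "3.9", which is why the script converts each component to a number first.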
rapidsai_public_repos | rapidsai_public_repos/build-metrics-reporter/rapids-build-metrics-reporter.py | #
# Copyright (c) 2021-2023, NVIDIA CORPORATION.
#
import argparse
import os
import sys
import xml.etree.ElementTree as ET
from pathlib import Path
from xml.dom import minidom
parser = argparse.ArgumentParser()
parser.add_argument(
"log_file", type=str, default=".ninja_log", help=".ninja_log file"
)
parser.add_arg... | 0 |
rapidsai_public_repos | rapidsai_public_repos/build-metrics-reporter/rapids-template-instantiation-reporter.py | #!/usr/bin/env python3
import argparse
import subprocess
from subprocess import PIPE
import shutil
from collections import Counter
from pathlib import Path
def log(msg, verbose=True):
if verbose:
print(msg)
def run(*args, **kwargs):
return subprocess.run(list(args), check=True, **kwargs)
def prog... | 0 |
rapidsai_public_repos | rapidsai_public_repos/build-metrics-reporter/README.md | # build-metrics-reporter
## Summary
This repository contains the source code for `rapids-build-metrics-reporter.py`, which is a small Python script that can be used to generate a report that contains the compile times and cache hit rates for RAPIDS library builds.
It is intended to be used in the `build.sh` script o... | 0 |
rapidsai_public_repos | rapidsai_public_repos/cloud-ml-examples/README.md | # <div align="left"><img src="img/rapids_logo.png" width="90px"/> RAPIDS Cloud Machine Learning Services Integration</div>
RAPIDS is a suite of open-source libraries that bring GPU acceleration
to data science pipelines. Users building cloud-based machine learning experiments can take advantage of this accelerati... | 0 |
rapidsai_public_repos | rapidsai_public_repos/cloud-ml-examples/LICENSE | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
... | 0 |
rapidsai_public_repos/cloud-ml-examples/mlflow | rapidsai_public_repos/cloud-ml-examples/mlflow/docker_environment/Dockerfile.training | FROM rapidsai/rapidsai:cuda11.0-runtime-ubuntu18.04-py3.8
RUN source activate rapids \
&& mkdir /opt/mlflow \
&& pip install \
boto3 \
google-cloud \
google-cloud-storage \
gcsfs \
hyperopt \
mlflow \
psycopg2-binary
| 0 |
rapidsai_public_repos/cloud-ml-examples/mlflow | rapidsai_public_repos/cloud-ml-examples/mlflow/docker_environment/DetailedConfig.md | # [Detailed Google Kubernetes Engine (GKE) Guide](#anchor-start)
### Baseline
For all steps referring to the Google Cloud Platform (GCP) console window, components can be selected from the 'Hamburger Button'
on the top left of the console.

## [Create a GKE Cluster](#an... | 0 |
rapidsai_public_repos/cloud-ml-examples/mlflow | rapidsai_public_repos/cloud-ml-examples/mlflow/docker_environment/k8s_config.json | {
"kube-context": "",
"kube-job-template-path": "k8s_job_template.yaml",
"repository-uri": "${GCR_REPO}/rapids-mlflow-training"
}
| 0 |
rapidsai_public_repos/cloud-ml-examples/mlflow | rapidsai_public_repos/cloud-ml-examples/mlflow/docker_environment/README.md | # End to End - RAPIDS, hyperopt, and MLflow, on Google Kubernetes Engine (GKE).
## Overview
This example will go through the process of setting up all the components to run your own RAPIDS based hyper-parameter
training, with custom MLflow backend service, artifact storage, and Tracking Server using Google's Cloud Plat... | 0 |
rapidsai_public_repos/cloud-ml-examples/mlflow | rapidsai_public_repos/cloud-ml-examples/mlflow/docker_environment/MLproject | name: cumlrapids
docker_env:
image: rapids-mlflow-training:gcp
entry_points:
hyperopt:
parameters:
algo: {type: str, default: 'tpe'}
conda_env: {type: str, default: 'envs/conda.yaml'}
fpath: {type: str}
command: "/bin/bash src/k8s/entrypoint.sh src/rf_test/t... | 0 |
rapidsai_public_repos/cloud-ml-examples/mlflow | rapidsai_public_repos/cloud-ml-examples/mlflow/docker_environment/Dockerfile.tracking | FROM python:3.8
RUN pip install \
mlflow \
boto3 \
gcsfs \
psycopg2-binary
COPY src/k8s/tracking_entrypoint.sh /tracking_entrypoint.sh
ENTRYPOINT [ "/bin/bash", "/tracking_entrypoint.sh" ]
| 0 |
rapidsai_public_repos/cloud-ml-examples/mlflow | rapidsai_public_repos/cloud-ml-examples/mlflow/docker_environment/k8s_job_template.yaml | apiVersion: batch/v1
kind: Job
metadata:
name: "{replaced with MLflow Project name}"
namespace: default
spec:
ttlSecondsAfterFinished: 100
backoffLimit: 0
template:
spec:
volumes:
- name: gcsfs-creds
secret:
secretName: gcsfs-creds
items:
- key... | 0 |
rapidsai_public_repos/cloud-ml-examples/mlflow/docker_environment | rapidsai_public_repos/cloud-ml-examples/mlflow/docker_environment/envs/conda.yaml | name: mlflow
channels:
- rapidsai
- nvidia
- conda-forge
dependencies:
- _libgcc_mutex=0.1=conda_forge
- _openmp_mutex=4.5=1_gnu
- abseil-cpp=20200225.2=he1b5a44_2
- appdirs=1.4.3=py_1
- arrow-cpp=0.17.1=py38h1234567_11_cuda
- arrow-cpp-proc=1.0.1=cuda
- asn1crypto=1.4.0=pyh9f0ad1d_0
- aws-c-commo... | 0 |
rapidsai_public_repos/cloud-ml-examples/mlflow/docker_environment/helm | rapidsai_public_repos/cloud-ml-examples/mlflow/docker_environment/helm/mlflow-tracking-server/Chart.yaml | apiVersion: v2
name: mlflow-tracking-server
description: A Helm chart for Kubernetes
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or function... | 0 |
rapidsai_public_repos/cloud-ml-examples/mlflow/docker_environment/helm | rapidsai_public_repos/cloud-ml-examples/mlflow/docker_environment/helm/mlflow-tracking-server/.helmignore | # Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.proj... | 0 |