.. _cloudxr-teleoperation-cluster:

Deploying CloudXR Teleoperation on Kubernetes
=============================================

.. currentmodule:: isaaclab

This section explains how to deploy CloudXR Teleoperation for Isaac Lab on a Kubernetes (K8s) cluster.

.. _k8s-system-requirements:

System Requirements
-------------------

* **Minimum requirement**: Kubernetes cluster with a node that has at least 1 NVIDIA RTX PRO 6000 / L40 GPU or equivalent
* **Recommended requirement**: Kubernetes cluster with a node that has at least 2 RTX PRO 6000 / L40 GPUs or equivalent

.. note::
   If you are using DGX Spark, check `DGX Spark Limitations <https://isaac-sim.github.io/IsaacLab/release/2.3.0/source/setup/installation/index.html#dgx-spark-details-and-limitations>`_ for compatibility.

Software Dependencies
---------------------

* ``kubectl`` on your host computer

  * If you use MicroK8s, you already have ``microk8s kubectl``
  * Otherwise follow the `official kubectl installation guide <https://kubernetes.io/docs/tasks/tools/#kubectl>`_

* ``helm`` on your host computer

  * If you use MicroK8s, you already have ``microk8s helm``
  * Otherwise follow the `official Helm installation guide <https://helm.sh/docs/intro/install/>`_

* Access to NGC public registry from your Kubernetes cluster, in particular these container images:

  * ``https://catalog.ngc.nvidia.com/orgs/nvidia/containers/isaac-lab``
  * ``https://catalog.ngc.nvidia.com/orgs/nvidia/containers/cloudxr-runtime``

* NVIDIA GPU Operator or equivalent installed in your Kubernetes cluster to expose NVIDIA GPUs
* NVIDIA Container Toolkit installed on the nodes of your Kubernetes cluster
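
If the GPU Operator (or an equivalent device plugin) is working, the scheduler advertises ``nvidia.com/gpu`` as a node resource. As a quick sanity check, the following sketch lists each node with the number of GPUs it exposes (standard ``kubectl`` usage; the ``GPUS`` column name is ours):

.. code:: bash

   # List each node together with the number of NVIDIA GPUs it advertises.
   # A node intended to run the teleop workload must report at least 1.
   kubectl get nodes -o custom-columns='NAME:.metadata.name,GPUS:.status.allocatable.nvidia\.com/gpu'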

Preparation
-----------

On your host computer, you should have already configured ``kubectl`` to access your Kubernetes cluster. To validate this, run the following command and verify that it lists your nodes:

.. code:: bash

   kubectl get node

If you are installing this to your own Kubernetes cluster instead of using the setup described in the :ref:`k8s-appendix`, your role in the K8s cluster should have at least the following RBAC permissions:

.. code:: yaml

   rules:
   - apiGroups: [""]
     resources: ["configmaps"]
     verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
   - apiGroups: ["apps"]
     resources: ["deployments", "replicasets"]
     verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
   - apiGroups: [""]
     resources: ["pods"]
     verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
   - apiGroups: [""]
     resources: ["services"]
     verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
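
You can spot-check these permissions with ``kubectl auth can-i`` before installing the chart. This is a sketch; the resource kinds mirror the rules above:

.. code:: bash

   # Each line should end with "yes"; a "no" means the Helm release will
   # fail when it tries to manage that resource kind.
   for res in configmaps pods services deployments.apps replicasets.apps; do
     printf '%-20s create: %s\n' "$res" "$(kubectl auth can-i create "$res")"
   done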

.. _k8s-installation:

Installation
------------

.. note::

   The following steps are verified on a MicroK8s cluster with GPU Operator installed (see configurations in the :ref:`k8s-appendix`). You can configure your own K8s cluster accordingly if you encounter issues.

#. Download the Helm chart from NGC (generate your NGC API key by following the `public guide <https://docs.nvidia.com/ngc/ngc-overview/index.html#generating-api-key>`_):

   .. code:: bash

      helm fetch https://helm.ngc.nvidia.com/nvidia/charts/isaac-lab-teleop-2.3.0.tgz \
        --username='$oauthtoken' \
        --password=<your-ngc-api-key>

#. Install and run the CloudXR Teleoperation for Isaac Lab pod in the default namespace, consuming all host GPUs:

   .. code:: bash

      helm upgrade --install hello-isaac-teleop isaac-lab-teleop-2.3.0.tgz \
        --set fullnameOverride=hello-isaac-teleop \
        --set hostNetwork="true"

   .. note::

      You can avoid the host network requirement by creating an external LoadBalancer VIP (e.g., with MetalLB) and setting the environment variable ``NV_CXR_ENDPOINT_IP`` when deploying the Helm chart:

      .. code:: yaml

         # local_values.yml file example:
         fullnameOverride: hello-isaac-teleop
         streamer:
           extraEnvs:
             - name: NV_CXR_ENDPOINT_IP
               value: "<your external LoadBalancer VIP>"
             - name: ACCEPT_EULA
               value: "Y"

      .. code:: bash

         # command
         helm upgrade --install --values local_values.yml \
           hello-isaac-teleop isaac-lab-teleop-2.3.0.tgz
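
      For reference, a minimal MetalLB configuration providing such a VIP might look like the following. This is a sketch using MetalLB's ``metallb.io/v1beta1`` CRDs; the pool name and address range are placeholders you must adapt to your network:

      .. code:: yaml

         apiVersion: metallb.io/v1beta1
         kind: IPAddressPool
         metadata:
           name: teleop-pool           # placeholder name
           namespace: metallb-system
         spec:
           addresses:
             - 192.168.1.240-192.168.1.250   # placeholder range
         ---
         apiVersion: metallb.io/v1beta1
         kind: L2Advertisement
         metadata:
           name: teleop-l2             # placeholder name
           namespace: metallb-system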

#. Verify that the deployment has completed:

   .. code:: bash

      kubectl wait --for=condition=available --timeout=300s \
        deployment/hello-isaac-teleop

   After the pod is running, it may take approximately 5-8 minutes to finish loading assets and begin streaming.
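
   While waiting, you can follow the pod logs to watch progress. This is a generic sketch using standard ``kubectl`` commands; the label selector assumes the chart follows the usual Helm ``app.kubernetes.io/instance`` convention:

   .. code:: bash

      # Show pod status for the release, then stream logs from all containers.
      kubectl get pods -l app.kubernetes.io/instance=hello-isaac-teleop
      kubectl logs deployment/hello-isaac-teleop --all-containers --follow --tail=100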

Uninstallation
--------------

To uninstall, run:

.. code:: bash

   helm uninstall hello-isaac-teleop
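
To confirm that the release and its workloads are gone, a quick check (sketch; the label selector assumes the usual Helm ``app.kubernetes.io/instance`` convention):

.. code:: bash

   # The release should no longer appear, and no workloads should remain.
   helm list
   kubectl get deployment,service,pod -l app.kubernetes.io/instance=hello-isaac-teleop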

.. _k8s-appendix:

Appendix: Setting Up a Local K8s Cluster with MicroK8s
------------------------------------------------------

Your local workstation must have the NVIDIA Container Toolkit and its dependencies installed; otherwise, the following setup will not work.

Cleaning Up Existing Installations (Optional)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code:: bash

   # Clean up the system to ensure we start fresh
   sudo snap remove microk8s
   sudo snap remove helm
   sudo apt-get remove docker-ce docker-ce-cli containerd.io
   # If you have snap docker installed, remove it as well
   sudo snap remove docker

Installing MicroK8s
~~~~~~~~~~~~~~~~~~~

.. code:: bash

   sudo snap install microk8s --classic
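
After installation, two standard MicroK8s follow-up steps help: add your user to the ``microk8s`` group so that ``sudo`` is not needed for every command, and wait until the cluster reports ready:

.. code:: bash

   # Allow the current user to run microk8s commands without sudo.
   sudo usermod -a -G microk8s "$USER"
   newgrp microk8s
   # Block until all MicroK8s services are up.
   microk8s status --wait-ready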

Installing NVIDIA GPU Operator
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code:: bash

   microk8s helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
   microk8s helm repo update
   microk8s helm install gpu-operator \
     -n gpu-operator \
     --create-namespace nvidia/gpu-operator \
     --set toolkit.env[0].name=CONTAINERD_CONFIG \
     --set toolkit.env[0].value=/var/snap/microk8s/current/args/containerd-template.toml \
     --set toolkit.env[1].name=CONTAINERD_SOCKET \
     --set toolkit.env[1].value=/var/snap/microk8s/common/run/containerd.sock \
     --set toolkit.env[2].name=CONTAINERD_RUNTIME_CLASS \
     --set toolkit.env[2].value=nvidia \
     --set toolkit.env[3].name=CONTAINERD_SET_AS_DEFAULT \
     --set-string toolkit.env[3].value=true

.. note::

   Configuring the GPU Operator to use volume mounts for ``DEVICE_LIST_STRATEGY`` on the device plugin while disabling ``ACCEPT_NVIDIA_VISIBLE_DEVICES_ENVVAR_WHEN_UNPRIVILEGED`` on the toolkit is currently unsupported: there is no way to ensure that the assigned GPU resource is consistently shared between containers of the same pod.

Verifying Installation
~~~~~~~~~~~~~~~~~~~~~~

Run the following command to verify that all pods are running correctly:

.. code:: bash

   microk8s kubectl get pods -n gpu-operator

You should see output similar to:

.. code:: text

   NAME                                                          READY   STATUS      RESTARTS   AGE
   gpu-operator-node-feature-discovery-gc-76dc6664b8-npkdg       1/1     Running     0          77m
   gpu-operator-node-feature-discovery-master-7d6b448f6d-76fqj   1/1     Running     0          77m
   gpu-operator-node-feature-discovery-worker-8wr4n              1/1     Running     0          77m
   gpu-operator-86656466d6-wjqf4                                 1/1     Running     0          77m
   nvidia-container-toolkit-daemonset-qffh6                      1/1     Running     0          77m
   nvidia-dcgm-exporter-vcxsf                                    1/1     Running     0          77m
   nvidia-cuda-validator-x9qn4                                   0/1     Completed   0          76m
   nvidia-device-plugin-daemonset-t4j4k                          1/1     Running     0          77m
   gpu-feature-discovery-8dms9                                   1/1     Running     0          77m
   nvidia-operator-validator-gjs9m                               1/1     Running     0          77m
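
Before proceeding, you can also confirm that the node now advertises ``nvidia.com/gpu`` resources (sketch; the reported count depends on your hardware):

.. code:: bash

   # Both Capacity and Allocatable should list nvidia.com/gpu with a count >= 1.
   microk8s kubectl describe node | grep -i 'nvidia.com/gpu'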

Once all pods are running, you can proceed to the :ref:`k8s-installation` section.