---
title: GPU-accelerated Notebook
description: Launch a Jupyter Notebook server with an SSH connection
icon: circle-1
version: EN
---

This example deploys a Jupyter Notebook server. You will also learn how to connect to the server from VS Code or an IDE of your choice.

Try out this example with a single click on VESSL Hub, where you can also find the completed YAML file and final code.

## What you will do

<img style={{ borderRadius: '0.5rem' }} src="/images/get-started/quickstart-title.png" />

  • Launch a GPU-accelerated interactive workload
  • Set up a Jupyter Notebook
  • Use SSH to connect to the workload

## Writing the YAML

Let's fill in the `notebook.yml` file.

As in [Quickstart](quickstart.mdx), start by defining the compute resource and runtime environment for the workload. Again, we will use the L4 instance from our managed cloud and the latest PyTorch container from NVIDIA NGC.
```yaml
name: Jupyter notebook
description: A Jupyter Notebook server with an SSH connection
resources:
  cluster: vessl-gcp-oregon
  preset: gpu-l4-small
image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3
```
By default, workloads launched with VESSL Run are batch jobs like the one we launched in our Quickstart example. Interactive runs, on the other hand, are essentially virtual machines running on GPUs for live interaction with your models and datasets.

You can enable this with the `interactive` key, followed by the `jupyter` key. Interactive runs come with a default idle-culler field, which automatically shuts down notebook servers that have not been used for a certain period.

`max_runtime` works with `idle_timeout` as an additional measure to prevent resource overuse.

```yaml
name: Jupyter notebook
description: A Jupyter Notebook server with an SSH connection
resources:
  cluster: vessl-gcp-oregon
  preset: gpu-l4-small
image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3
interactive:
  jupyter:
    idle_timeout: 120m
  max_runtime: 24h
```

## Running the workload

Now that we have a completed YAML, we can once again run the workload with `vessl run`.

```sh
vessl run create -f notebook.yml
```

<img style={{ borderRadius: '0.5rem' }} src="/images/get-started/quickstart-notebook.png" />

Running the command will verify your YAML and show you the current status of the workload. Click the output link in your terminal to see the full details and real-time logs of the Run on the web. Click Jupyter under Connect to launch a notebook.

<img style={{ borderRadius: '0.5rem' }} src="/images/get-started/quickstart-workspace.jpeg" />

<img style={{ borderRadius: '0.5rem' }} src="/images/get-started/quickstart-jupyter.jpeg" />

## Create an SSH connection

An interactive run is essentially a GPU-accelerated workload on a cloud with a port and an endpoint for live interactions. This means you can access the remote workload using SSH.

  1. First, generate an SSH key pair.

     ```sh
     ssh-keygen -t ed25519 -C "vesslai"
     ```

    <img style={{ borderRadius: '0.5rem' }} src="/images/get-started/gpu-notebook_ssh-keygen.png" />

  2. Add the generated public key to your account.

     ```sh
     vessl ssh-key add
     ```

    <img style={{ borderRadius: '0.5rem' }} src="/images/get-started/gpu-notebook_ssh-add.png" />

  3. Connect via SSH, using the workload address shown on the Run Summary page. You are now ready to use VS Code or an IDE of your choice for remote development.

     <img style={{ borderRadius: '0.5rem' }} src="/images/get-started/gpu-notebook_ssh-info.jpeg" />

     ```sh
     ssh -p 22 root@34.127.82.9
     ```
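For convenience with VS Code's Remote-SSH extension, you can add a host entry to your local `~/.ssh/config`. The sketch below reuses the address and port from the example above; the host alias `vessl-notebook` and the key path are assumptions, so substitute your own values:

```
Host vessl-notebook
    HostName 34.127.82.9   # workload address from the Run Summary page
    Port 22
    User root
    IdentityFile ~/.ssh/id_ed25519   # the key pair generated in step 1
```

With this entry in place, `ssh vessl-notebook` (or selecting the host in VS Code) connects without retyping the address.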

## Tips & tricks

Keep in mind that GPUs remain fully dedicated to a notebook server, and therefore consume VESSL credits, even when you are not running compute-intensive cells. To optimize GPU usage, use a tool like nbconvert to convert the notebook into a Python file, or package it as a Python container, and run it as a batch job.
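As a sketch of that workflow, a batch Run could convert and execute the notebook in a single step. The YAML below follows the same shape as the interactive example above, but note that the notebook filename `train.ipynb` and the exact `run`/`command` layout are assumptions here; check them against the VESSL Run YAML reference:

```yaml
name: notebook-as-batch-job
description: Convert a notebook to a script and run it as a batch job
resources:
  cluster: vessl-gcp-oregon
  preset: gpu-l4-small
image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3
run:
  - command: |
      jupyter nbconvert --to script train.ipynb   # emits train.py
      python train.py
```

Because this is a batch job rather than an interactive run, the GPU is released as soon as the script finishes.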

You can also mount volumes to interactive workloads with the `import` key, letting you reference files or datasets from your notebook.
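For example, a dataset volume could be mounted like this; the mount path and the `vessl-dataset://` source string below are illustrative assumptions, so replace the organization and dataset names with your own:

```yaml
interactive:
  jupyter:
    idle_timeout: 120m
  max_runtime: 24h
import:
  /root/data/: vessl-dataset://my-org/my-dataset   # hypothetical dataset reference
```

Anything under the mounted path is then readable directly from notebook cells, e.g. `pd.read_csv("/root/data/train.csv")`.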

## Using our web interface

You can repeat the same process on the web. Head over to your Organization, select a project, and create a New run.

## What's next?

Next, let's see how to use interactive workloads to host a web app on the cloud with tools like Streamlit and Gradio.

  • Launch an interactive web application for Stable Diffusion
  • Launch a text-generation Streamlit app powered by Mistral-7B