---
title: GPU-accelerated Notebook
description: Launch a Jupyter Notebook server with an SSH connection
icon: "circle-1"
version: EN
---
This example deploys a Jupyter Notebook server. You will also learn how to connect to the server from VS Code or an IDE of your choice.
<CardGroup cols={2}>
<Card icon="sparkles" title="Try it on VESSL Hub" href="https://vessl.ai/hub/jupyter-notebook">
Try out the Quickstart example with a single click on VESSL Hub.
</Card>
<Card icon="github" title="See the final code" href="https://github.com/vessl-ai/hub-model/">
See the completed YAML file and final code for this example.
</Card>
</CardGroup>
## What you will do
<img
style={{ borderRadius: '0.5rem' }}
src="/images/get-started/quickstart-title.png"
/>
- Launch a GPU-accelerated interactive workload
- Set up a Jupyter Notebook
- Use SSH to connect to the workload
## Writing the YAML
Let's fill in the `notebook.yml` file.
<Steps titleSize="h3">
<Step title="Spin up a workload">
Let's repeat the steps from [Quickstart](quickstart.mdx) and define the compute resource and runtime environment for our workload. Again, we will use the L4 instance from our managed cloud and the latest PyTorch container from NVIDIA NGC.
```yaml
name: Jupyter notebook
description: A Jupyter Notebook server with an SSH connection
resources:
cluster: vessl-gcp-oregon
preset: gpu-l4-small
image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3
```
</Step>
<Step title="Configure an interactive run">
By default, workloads launched with VESSL Run are batch jobs like the one we launched in our Quickstart example. Interactive runs, on the other hand, are essentially virtual machines running on GPUs, designed for live interaction with your models and datasets.
You can enable this with the `interactive` key, followed by the `jupyter` key. Interactive runs come with an idle culler, which automatically shuts down notebook servers that have not been used for a set period. `max_runtime` works with `idle_timeout` as an additional safeguard against resource overuse.
```yaml
name: Jupyter notebook
description: A Jupyter Notebook server with an SSH connection
resources:
cluster: vessl-gcp-oregon
preset: gpu-l4-small
image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3
interactive:
jupyter:
idle_timeout: 120m
max_runtime: 24h
```
</Step>
</Steps>
## Running the workload
Now that we have a completed YAML, we can once again run the workload with `vessl run`.
```
vessl run create -f notebook.yml
```
<img
style={{ borderRadius: '0.5rem' }}
src="/images/get-started/quickstart-notebook.png"
/>
Running the command validates your YAML and shows you the current status of the workload. Click the output link in your terminal to see the full details and real-time logs of the Run on the web. Under Connect, click Jupyter to launch a notebook.
<img
style={{ borderRadius: '0.5rem' }}
src="/images/get-started/quickstart-workspace.jpeg"
/>
<img
style={{ borderRadius: '0.5rem' }}
src="/images/get-started/quickstart-jupyter.jpeg"
/>
## Create an SSH connection
An interactive run is essentially a GPU-accelerated workload on the cloud with a port and an endpoint for live interactions. This means you can access the remote workload over SSH.
1. First, generate an SSH key pair.
```
ssh-keygen -t ed25519 -C "vesslai"
```
<img
style={{ borderRadius: '0.5rem' }}
src="/images/get-started/gpu-notebook_ssh-keygen.png"
/>
2. Add the generated key to your account.
```
vessl ssh-key add
```
<img
style={{ borderRadius: '0.5rem' }}
src="/images/get-started/gpu-notebook_ssh-add.png"
/>
3. Connect via SSH.
Use the workload address from the Run Summary page to connect. You are ready to use [VS Code](https://code.visualstudio.com/docs/remote/ssh) or an IDE of your choice for remote development.
<img
style={{ borderRadius: '0.5rem' }}
src="/images/get-started/gpu-notebook_ssh-info.jpeg"
/>
```
ssh -p 22 root@34.127.82.9
```
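If you connect often, you can save the connection details in your SSH config file so VS Code and other IDEs can reuse them. The host alias `vessl-notebook`, the address `34.127.82.9`, and the key path below are examples; substitute the values from your own Run Summary page and key pair.
```
# ~/.ssh/config
Host vessl-notebook
    HostName 34.127.82.9
    Port 22
    User root
    IdentityFile ~/.ssh/id_ed25519
```
With this entry in place, `ssh vessl-notebook` connects directly, and the alias appears as a remote host in VS Code's Remote-SSH extension.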
## Tips & tricks
Keep in mind that GPUs are fully dedicated to a notebook server (and therefore consume VESSL credits) even when you are not running compute-intensive cells. To optimize GPU usage, use tools like [nbconvert](https://nbconvert.readthedocs.io/en/latest/usage.html#executable-script) to convert the notebook into a Python script, or package it as a Python container and run it as a batch job.
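For example, nbconvert can turn a notebook into a plain Python script from the command line. The filename `train.ipynb` is a placeholder; use your own notebook's name.
```
jupyter nbconvert --to script train.ipynb
# produces train.py, which you can then run as a batch job
```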
You can also mount volumes to interactive workloads by defining the `import` field, and then reference those files or datasets from your notebook.
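As a rough sketch, an `import` entry maps a source onto a path inside the workload. The mount path and dataset reference below are hypothetical examples; check the VESSL YAML reference for the exact syntax and replace the values with a dataset from your organization.
```yaml
import:
  /input/: vessl-dataset://my-org/my-dataset
```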
## Using our web interface
You can repeat the same process on the web. Head over to your [Organization](https://vessl.ai), select a project, and create a New run.
<iframe
src="https://scribehow.com/embed/GPU-accelerated_notebook__KXkgSEfXS_2wPYbjRCDpRw?skipIntro=true&removeLogo=true"
width="100%" height="640" allowfullscreen frameborder="0"
style={{ borderRadius: '0.5rem' }} >
</iframe>
## What's next?
Next, let's see how to use interactive workloads to host a web app on the cloud with tools like Streamlit and Gradio.
<CardGroup cols={2}>
<Card title="Stable Diffusion Playground" href="get-started/stable-diffusion">
Launch an interactive web application for Stable Diffusion
</Card>
<Card title="Mistral-7B Playground" href="get-started/stable-diffusion">
Launch a text-generation Streamlit app powered with Mistral-7B
</Card>
</CardGroup>