|
|
--- |
|
|
title: Quickstart – Hello, world! |
|
|
description: Launch a barebones GPU-accelerated workload
|
|
icon: "circle-play" |
|
|
version: EN |
|
|
--- |
|
|
|
|
|
This quickstart deploys a barebones GPU-accelerated workload: a Python container that prints `Hello, world!`. It illustrates the basic components of a single Run and how to deploy one.
|
|
|
|
|
<CardGroup cols={2}> |
|
|
<Card icon="sparkles" title="Try it on VESSL Hub" href="https://vessl.ai/hub/create-your-own"> |
|
|
Try out the Quickstart example with a single click on VESSL Hub. |
|
|
</Card> |
|
|
|
|
|
<Card icon="github" title="See the final code" href="https://github.com/vessl-ai/hub-model/tree/main/quickstart"> |
|
|
See the completed YAML file and final code for this example. |
|
|
</Card> |
|
|
</CardGroup> |
|
|
|
|
|
## What you will do |
|
|
|
|
|
<img |
|
|
style={{ borderRadius: '0.5rem' }} |
|
|
src="/images/get-started/quickstart-title.png" |
|
|
/> |
|
|
|
|
|
- Launch a GPU-accelerated workload
|
|
- Set up a runtime for your model |
|
|
- Mount a Git codebase |
|
|
- Run a simple command |
|
|
|
|
|
## Installation & setup |
|
|
|
|
|
After creating a free account at [vessl.ai](https://vessl.ai), install our Python package and authenticate with your API token. Set the default Organization and Project for your account, and let's get going.
|
|
|
|
|
```bash |
|
|
pip install --upgrade vessl |
|
|
vessl configure |
|
|
``` |
|
|
|
|
|
## Writing the YAML |
|
|
|
|
|
Launching a workload on VESSL AI begins with writing a YAML file. Our quickstart YAML has four parts, sketched below:
|
|
|
|
|
- Compute resources, typically specified in GPUs, defined under `resources`

- A runtime environment that points to a Docker image, defined under `image`

- Inputs & outputs for code, datasets, and more, defined under `import` & `export`

- Run commands executed inside the workload, defined under `run`
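Put together, the finished file has the following shape. Each section is filled in over the steps below:

```yaml
resources:  # compute: which cluster and GPU preset to use
  # ...
image:      # runtime: a Docker image
import:     # inputs: code, datasets, models
  # ...
run:        # commands to execute inside the workload
  # ...
```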
|
|
|
|
|
Let's start by creating `quickstart.yml` and defining the key-value pairs one by one.
|
|
|
|
|
<Steps titleSize="h3"> |
|
|
<Step title="Spin up a compute instance"> |
|
|
`resources` defines the hardware specs you will use for your run. Here's an example that uses our managed cloud to launch an L4 instance. |
|
|
|
|
|
You can see the full list of compute options and their string values for `preset` under [Resources](/resources/price). Later, you will be able to add and launch workloads on your private cloud or on-premise clusters simply by changing the value for `cluster`. |
|
|
|
|
|
```yaml |
|
|
resources: |
|
|
cluster: vessl-gcp-oregon |
|
|
preset: gpu-l4-small |
|
|
``` |
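Later, targeting your own cluster only means swapping these two values. A sketch, assuming a hypothetical registered cluster named `my-onprem-cluster` with an A100 preset:

```yaml
resources:
  cluster: my-onprem-cluster  # hypothetical: the name of your registered cluster
  preset: gpu-a100-small      # hypothetical: a preset available on that cluster
```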
|
|
</Step> |
|
|
|
|
|
<Step title="Configure a runtime environment"> |
|
|
VESSL AI uses Docker images to define a runtime. We provide a set of base images as a starting point, but you can also bring your own custom Docker image by referencing hosted images on Docker Hub or Red Hat Quay.
|
|
|
|
|
You can find the full list of images and the dependencies of the latest builds [here](https://quay.io/repository/vessl-ai/kernels?tab=tags&tag=py39-202310120824). For this example, we'll use our go-to PyTorch image, built on PyTorch 2.1.0 and CUDA 12.2.
|
|
|
|
|
```yaml |
|
|
resources: |
|
|
cluster: vessl-gcp-oregon |
|
|
preset: gpu-l4-small |
|
|
image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3 |
|
|
``` |
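If you'd rather bring your own image, point `image` at any hosted tag. For example, a sketch using a public PyTorch image from Docker Hub; swap in whichever image carries your dependencies:

```yaml
image: pytorch/pytorch:2.1.0-cuda12.1-cudnn8-runtime  # any publicly hosted tag works
```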
|
|
</Step> |
|
|
|
|
|
<Step title="Mount a GitHub repository"> |
|
|
Under `import`, you can mount models, codebases, and datasets from sources like GitHub, Hugging Face, Amazon S3, and more. |
|
|
|
|
|
In this example, we mount a GitHub repo to the `/code/` folder in our container. You can switch between versions of your code by pointing `ref` at a specific branch, such as `dev`.
|
|
|
|
|
```yaml |
|
|
resources: |
|
|
cluster: vessl-gcp-oregon |
|
|
preset: gpu-l4-small |
|
|
image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3 |
|
|
import: |
|
|
/code/: |
|
|
git: |
|
|
url: https://github.com/vessl-ai/hub-model |
|
|
ref: main |
|
|
``` |
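For instance, to track a work-in-progress branch instead of `main`, only `ref` changes. A sketch, assuming the repo has a hypothetical `dev` branch:

```yaml
import:
  /code/:
    git:
      url: https://github.com/vessl-ai/hub-model
      ref: dev  # hypothetical: any branch your repo has
```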
|
|
</Step> |
|
|
|
|
|
<Step title="Write a run command"> |
|
|
Now that we've defined the compute resources and the runtime environment for our workload, let's run [`main.py`](https://github.com/vessl-ai/hub-model/blob/main/quickstart/main.py).
|
|
|
|
|
We can do this by defining `command` under `run` and specifying the working directory with `workdir`.
|
|
|
|
|
```yaml |
|
|
resources: |
|
|
cluster: vessl-gcp-oregon |
|
|
preset: gpu-l4-small |
|
|
image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3 |
|
|
import: |
|
|
/code/: |
|
|
git: |
|
|
url: https://github.com/vessl-ai/hub-model |
|
|
ref: main |
|
|
run: |
|
|
- command: | |
|
|
python main.py |
|
|
workdir: /code/quickstart |
|
|
``` |
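Since `command` accepts a multi-line block, you can also chain setup steps ahead of the script. A sketch that installs dependencies first, assuming a hypothetical `requirements.txt` in the repo:

```yaml
run:
  - command: |
      pip install -r requirements.txt  # hypothetical dependency file
      python main.py
    workdir: /code/quickstart
```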
|
|
</Step> |
|
|
|
|
|
<Step title="Add metadata"> |
|
|
Finally, let's polish up by giving our Run a name and description. Here's the completed `.yml`: |
|
|
|
|
|
```yaml |
|
|
name: Quickstart |
|
|
description: A barebones GPU-accelerated workload
|
|
resources: |
|
|
cluster: vessl-gcp-oregon |
|
|
preset: gpu-l4-small |
|
|
image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3 |
|
|
import: |
|
|
/code/: |
|
|
git: |
|
|
url: https://github.com/vessl-ai/hub-model |
|
|
ref: main |
|
|
run: |
|
|
- command: | |
|
|
python main.py |
|
|
workdir: /code/quickstart |
|
|
``` |
|
|
</Step> |
|
|
</Steps> |
|
|
|
|
|
## Running the workload |
|
|
|
|
|
Now that we have a completed YAML, we can run the workload with `vessl run`. |
|
|
|
|
|
```bash
|
|
vessl run create -f quickstart.yml |
|
|
``` |
|
|
|
|
|
<img |
|
|
style={{ borderRadius: '0.5rem' }} |
|
|
src="/images/get-started/quickstart-run.png" |
|
|
/> |
|
|
|
|
|
Running the command verifies your YAML and shows the current status of the workload. Click the output link in your terminal to see full details and real-time logs of the Run on the web, including the output of the run command.
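You can also check on the Run from the terminal. The exact subcommands vary by CLI version, so treat the following as a sketch and run `vessl run --help` to see what your version supports:

```bash
vessl run list              # list recent Runs in the current project
vessl run logs <run-id>     # stream the logs of a specific Run
```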
|
|
|
|
|
<img |
|
|
style={{ borderRadius: '0.5rem' }} |
|
|
src="/images/get-started/quickstart-result.jpeg" |
|
|
/> |
|
|
|
|
|
## Behind the scenes |
|
|
|
|
|
When you `vessl run`, VESSL AI performs the following as defined in `quickstart.yml`: |
|
|
|
|
|
1. Launch an empty Python container on the cloud with 1 NVIDIA L4 Tensor Core GPU. |
|
|
2. Configure the runtime with a CUDA-enabled PyTorch 2.1.0 image.
|
|
3. Mount a GitHub repo and set the working directory. |
|
|
4. Execute `main.py` and print `Hello, world!` (sketched below).
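For reference, the script can be as small as a single print statement. A sketch of what `main.py` boils down to; see the linked repo for the actual file:

```python
# main.py -- prints the greeting that appears in the Run logs
print("Hello, world!")
```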
|
|
|
|
|
## Using our web interface |
|
|
|
|
|
You can repeat the same process on the web. Head over to your [Organization](https://vessl.ai), select a project, and create a new Run.
|
|
|
|
|
<iframe |
|
|
src="https://scribehow.com/embed/Quickstart__Hello_world__BzwSynUuQI-0R3mMGwJzHw?skipIntro=true&removeLogo=true" |
|
|
width="100%" height="640" allowfullscreen frameborder="0" |
|
|
style={{ borderRadius: '0.5rem' }} > |
|
|
</iframe> |
|
|
|
|
|
## What's next? |
|
|
|
|
|
Now that you've run a barebones workload, continue with our guides to launch a Jupyter server and host a web app.
|
|
|
|
|
<CardGroup cols={2}> |
|
|
<Card title="GPU-accelerated notebook" href="get-started/gpu-notebook"> |
|
|
Launch a Jupyter Notebook server with an SSH connection |
|
|
</Card> |
|
|
|
|
|
<Card title="SSD-1B Playground" href="get-started/stable-diffusion"> |
|
|
Launch an interactive web application for Stable Diffusion |
|
|
</Card> |
|
|
</CardGroup> |
|
|
|