+
+
+
+
+
+
+
+
+## Running the model
+
+You can run the model with our quick command.
+```sh
+vessl run create -f llama2_fine-tuning.yaml
+```
+
+Here's a rundown of the `llama2_fine-tuning.yaml` file.
+```yaml
+name: llama2-finetuning
+description: finetune llama2 with code instruction alpaca dataset
+resources:
+ cluster: vessl-gcp-oregon
+ preset: v1.l4-1.mem-27
+image: quay.io/vessl-ai/hub:torch2.1.0-cuda12.2-202312070053
+import:
+ /model/: vessl-model://vessl-ai/llama2/1
+ /code/:
+ git:
+ url: https://github.com/vessl-ai/hub-model
+ ref: main
+ /dataset/: vessl-dataset://vessl-ai/code_instructions_small_alpaca
+export:
+ /trained_model/: vessl-model://vessl-ai/llama2-finetuned
+ /artifacts/: vessl-artifact://
+run:
+ - command: |-
+ pip install -r requirements.txt
+ mkdir /model_
+ cd /model
+ mv llama_2_7b_hf.zip /model_
+ cd /model_
+ unzip llama_2_7b_hf.zip
+ cd /code/llama2-finetuning
+ python finetuning.py
+ workdir: /code/llama2-finetuning
+```
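The `mkdir`/`mv`/`unzip` steps in the run command simply unpack the mounted model archive into a fresh directory. If you prefer to do that step from Python, a minimal stdlib sketch (paths assumed to match the YAML above) could look like:

```python
import zipfile
from pathlib import Path

def unpack_model(archive: str, dest: str) -> list[str]:
    """Extract a model archive into dest and return the extracted file names."""
    Path(dest).mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest)
        return zf.namelist()

# Equivalent to the shell steps above:
# unpack_model("/model/llama_2_7b_hf.zip", "/model_")
```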
\ No newline at end of file
diff --git a/examples/models/mistral.md b/examples/models/mistral.md
new file mode 100644
index 0000000000000000000000000000000000000000..ea3da118703fb6de93213f2c15677f8cc0ba2a17
--- /dev/null
+++ b/examples/models/mistral.md
@@ -0,0 +1,50 @@
+---
+title: Mistral-7B Playground
+description: Launch a text-generation Streamlit app using Mistral-7B
+version: EN
+---
+
+## Try out this model on [VESSL Hub](https://vessl.ai/hub).
+
+This example runs an app for inference using Mistral-7B, an open-source LLM developed by [Mistral AI](https://mistral.ai/). The model utilizes grouped-query attention (GQA) and a sliding-window attention mechanism (SWA), which enable faster inference and handling of longer sequences at smaller cost than other models. As a result, it achieves both efficiency and high performance. Mistral-7B outperforms Llama 2 13B on all benchmarks and Llama 1 34B in reasoning, mathematics, and code generation benchmarks.
+
+
+
+## Running the model
+
+You can run the model with our quick command.
+```sh
+vessl run create -f mistral_7b.yaml
+```
+
+Here's a rundown of the `mistral_7b.yaml` file.
+```yaml
+name: mistral-7b-streamlit
+description: A template Run for inference of Mistral-7B with streamlit app
+resources:
+ cluster: vessl-gcp-oregon
+ preset: v1.l4-1.mem-42
+image: quay.io/vessl-ai/hub:torch2.1.0-cuda12.2-202312070053
+import:
+ /model/: hf://huggingface.co/VESSL/Mistral-7B
+ /code/:
+ git:
+ url: https://github.com/vessl-ai/hub-model
+ ref: main
+run:
+ - command: |-
+ pip install -r requirements_streamlit.txt
+ streamlit run streamlit_demo.py --server.port 80
+ workdir: /code/mistral-7B
+interactive:
+ max_runtime: 24h
+ jupyter:
+ idle_timeout: 120m
+ports:
+ - name: streamlit
+ type: http
+ port: 80
+```
diff --git a/examples/models/ssd.md b/examples/models/ssd.md
new file mode 100644
index 0000000000000000000000000000000000000000..dcfbc85fafb322cdd30cdc8f277723dd194a2e21
--- /dev/null
+++ b/examples/models/ssd.md
@@ -0,0 +1,52 @@
+---
+title: SSD-1B Playground
+description: Interactive playground of a lighter and faster version of Stable Diffusion XL
+version: EN
+---
+
+## Try out this model on [VESSL Hub](https://vessl.ai/hub).
+
+This example runs an inference app for SSD-1B. After launching, you can access a Streamlit web app to generate images with your own prompts. [The Segmind Stable Diffusion Model (SSD-1B)](https://www.segmind.com/) is a distilled, 50% smaller version of Stable Diffusion XL (SDXL), offering a 60% speedup while maintaining high-quality text-to-image generation capabilities.
+
+
+
+## Running the model
+
+You can run the model with our quick command.
+```sh
+vessl run create -f ssd-streamlit.yaml
+```
+
+Here's a rundown of the `ssd-streamlit.yaml` file.
+```yaml
+name: SSD-1B-streamlit
+description: A template Run for inference of SSD-1B with streamlit app
+resources:
+ cluster: vessl-gcp-oregon
+ preset: v1.l4-1.mem-42
+image: quay.io/vessl-ai/hub:torch2.1.0-cuda12.2-202312070053
+import:
+ /code/:
+ git:
+ url: https://github.com/vessl-ai/hub-model
+ ref: main
+ /model/: hf://huggingface.co/VESSL/SSD-1B
+run:
+ - command: |-
+ pip install --upgrade pip
+ pip install -r requirements.txt
+ pip install git+https://github.com/huggingface/diffusers
+ streamlit run ssd_1b_streamlit.py --server.port=80
+ workdir: /code/SSD-1B
+interactive:
+ max_runtime: 24h
+ jupyter:
+ idle_timeout: 120m
+ports:
+ - name: streamlit
+ type: http
+ port: 80
+```
diff --git a/examples/models/stable-diffusion.md b/examples/models/stable-diffusion.md
new file mode 100644
index 0000000000000000000000000000000000000000..644c50d5ed83e808ddf29223e1465d800559718d
--- /dev/null
+++ b/examples/models/stable-diffusion.md
@@ -0,0 +1,54 @@
+---
+title: Stable Diffusion Playground
+description: Generate images with a prompt on the web, powered with Stable Diffusion
+version: EN
+---
+
+## Try out this model on [VESSL Hub](https://vessl.ai/hub).
+
+A simple web app for [Stable Diffusion](https://stability.ai/) inference is deployed in this example. Several SD model checkpoints are mounted as VESSL Models so you can try image generation instantly.
+
+Stable Diffusion is a deep-learning, text-to-image model that uses a diffusion technique, generating an image from noise through gradual, iterative denoising steps. Unlike other text-to-image models, Stable Diffusion performs the diffusion process in a lower-dimensional latent space and then reconstructs the result into a full-resolution image. A cross-attention mechanism is also added for multi-modal tasks such as text-to-image and layout-to-image generation.
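The gradual-denoising idea can be illustrated with a toy loop: start from pure noise and repeatedly subtract a noise estimate. This sketch is conceptual only; a real diffusion model predicts the noise with a trained U-Net rather than knowing the target:

```python
import random

def toy_denoise(target: list[float], steps: int = 50, rate: float = 0.2) -> list[float]:
    """Start from Gaussian noise and iteratively move toward `target`.

    Each step subtracts a fraction of the estimated noise, mimicking the
    shape of a diffusion sampler's denoising loop."""
    rng = random.Random(0)
    x = [rng.gauss(0.0, 1.0) for _ in target]
    for _ in range(steps):
        noise_estimate = [xi - ti for xi, ti in zip(x, target)]  # stand-in for a U-Net
        x = [xi - rate * ni for xi, ni in zip(x, noise_estimate)]
    return x
```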
+
+
+
+## Running the model
+
+You can run the model with our quick command.
+```sh
+vessl run create -f sd-webui.yaml
+```
+
+Here's a rundown of the `sd-webui.yaml` file.
+```yaml
+name: stable-diffusion-webui
+description: A template Run for stable diffusion webui app
+resources:
+ cluster: google-oregon
+ preset: v1.l4-1.mem-42
+image: quay.io/vessl-ai/hub:torch2.1.0-cuda12.2-202312070053
+import:
+ /code/:
+ git:
+ url: https://github.com/vessl-ai/hub-model
+ ref: main
+ /models/protogen-infinity/: hf://huggingface.co/darkstorm2150/Protogen_Infinity_Official_Release
+ /models/sd-v1-5/: hf://huggingface.co/VESSL/stable-diffusion-v1-5-checkpoint
+ /models/sd-v2-1/: hf://huggingface.co/VESSL/stable-diffusion-v2-1-checkpoint
+run:
+ - command: |-
+ pip install -r requirements.txt
+ python -u launch.py --no-download-sd-model --ckpt-dir /models --no-half --no-gradio-queue --listen
+ workdir: /code/stable-diffusion-webui
+interactive:
+ max_runtime: 24h
+ jupyter:
+ idle_timeout: 120m
+ports:
+ - name: gradio
+ type: http
+ port: 7860
+```
\ No newline at end of file
diff --git a/examples/models/whisper.md b/examples/models/whisper.md
new file mode 100644
index 0000000000000000000000000000000000000000..9ec3a4cf591e9b6f32c7402c1fcc33a2d375cbf4
--- /dev/null
+++ b/examples/models/whisper.md
@@ -0,0 +1,46 @@
+---
+title: Whisper V3 Playground
+description: Translate audio snippets into text on a Streamlit playground.
+version: EN
+---
+
+## Try out this model on [VESSL Hub](https://vessl.ai/hub).
+
+This example runs a general-purpose speech recognition model, [Whisper V3](https://github.com/openai/whisper). It is trained on 680k hours of diverse labelled audio. Whisper is also a multitasking model that can perform multilingual speech recognition, speech translation, and language identification. It generalizes to many domains without additional fine-tuning.
+
+
+
+## Running the model
+
+You can run the model with our quick command.
+```sh
+vessl run create -f whisper.yaml
+```
+
+If you open the logs page, you can see the inference results for the first 5 samples in the [librispeech_asr dataset](https://www.openslr.org/12).
+
+
+Here's a rundown of the `whisper.yaml` file.
+```yaml
+name: whisper-v3
+description: A template Run for inference of whisper v3 on librispeech_asr test set
+resources:
+ cluster: vessl-gcp-oregon
+ preset: v1.l4-1.mem-42
+image: quay.io/vessl-ai/hub:torch2.1.0-cuda12.2-202312070053
+import:
+ /model/: hf://huggingface.co/VESSL/Whisper-large-v3
+ /dataset/: hf://huggingface.co/datasets/VESSL/librispeech_asr_clean_test
+ /code/:
+ git:
+ url: https://github.com/vessl-ai/hub-model
+ ref: main
+run:
+ - command: |-
+ pip install -r requirements.txt
+ python inference.py
+ workdir: /code/whisper-v3
+```
diff --git a/examples/uses.md b/examples/uses.md
new file mode 100644
index 0000000000000000000000000000000000000000..f4fc187c7d2c4bf95cbda0842e5da3320250cc2e
--- /dev/null
+++ b/examples/uses.md
@@ -0,0 +1,25 @@
+---
+title: Use cases
+description: See VESSL Run in action with common use cases and workflows
+version: EN
+---
+
+
+
+
+Under **Access Control**, you can grant, manage, and revoke access to shared clusters.
+
+#### (1) Grant access
+
+Click **Grant access** and define the following parameters.
+
+
+
+* **Organization name** – Name of the Organization you want to share the cluster with.
+* **Kubernetes namespace** – [Kubernetes namespaces](https://www.aquasec.com/cloud-native-academy/kubernetes-101/kubernetes-namespace/) separate a cluster into logical units, helping you granularly organize, allocate, manage, and secure cluster resources. If you are using a naming convention like `group-subgroup`, use the namespace `subgroup`.
+* **Node** – Select the nodes to share by clicking the checkbox.
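As a sketch of the naming convention above, deriving the namespace from a `group-subgroup` style organization name is a one-liner (the helper name is ours, not a VESSL API):

```python
def namespace_for(org_name: str) -> str:
    """Return the `subgroup` part of a `group-subgroup` name; names
    without a hyphen are used as-is."""
    return org_name.split("-", 1)[1] if "-" in org_name else org_name
```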
+
+#### (2) Edit access
+
+Click **Edit** to edit access to certain nodes.
+
+
+
+#### (3) Revoke access
+
+Click **Revoke** to remove the organization from accessing the clusters.
+
+
+
diff --git a/guides/clusters/aws.md b/guides/clusters/aws.md
new file mode 100644
index 0000000000000000000000000000000000000000..c47123eb73f2289e128ece3a930882a6108711a3
--- /dev/null
+++ b/guides/clusters/aws.md
@@ -0,0 +1,12 @@
+---
+title: AWS
+version: EN
+---
+
+
+
+Integrating your Mac or Linux machine with VESSL allows you to use your personal machine as a Kubernetes-backed single-node cluster and optimize your laptop for ML.
+
+* Launch training jobs in seconds on VESSL's easy-to-use web interface or CLI, without writing YAML or scrappy scripts.
+* Build a baseline model on your laptop and transition seamlessly to VESSL Cloud to scale your model.
+* Keep track of your hardware and runtime environments together with `vessl.log`.
+
+VESSL Clusters' single-line CLI command automatically checks, installs, and configures all dependencies such as Kubernetes, and connects your laptop to VESSL.
+
+## Step-by-step Guide
+
+
+
+### (2) VESSL integration
+
+The following single-line command connects your Mac.
+
+```
+vessl cluster create --name '[CLUSTER_NAME_HERE]' --mode single
+```
+
+The most common flags used with `vessl cluster create` commands are as follows. Check out our docs on VESSL CLI for additional flags.
+
+* `--name` – Define your cluster name.
+* `--mode single` – Specify that you are installing a single-node cluster.
+* `--help` – See additional command options.
+
+The command will automatically check dependencies and ask you to install Kubernetes. This process will take a few minutes. Proceed by entering `y`.
+
+
+
+If you already have Kubernetes installed on your machine, the command will then ask you to install the VESSL agent on the Kubernetes cluster. Enter `y` and proceed.
+
+
+
+By this point, you have successfully completed the integration.
+
+### (3) Confirm integration
+
+Confirm your integration using the `list` command which returns all the clusters available in your Organization.
+
+```bash
+vessl cluster list
+```
+
+
+
+Finally, try running a training job on your laptop. Your first run may take a few minutes to get the Docker images installed on your device.
+
+
+
+
+
+#### (2) Under VESSL Cloud, select your **Region** and **Cluster**.
+
+
+
+#### (3) Click **Done** to complete the setup and confirm your integration under **Clusters**.
+
+
+
+* Cluster status – Connection and incident status of a cluster.
+* Available nodes – Number of available worker nodes.
+* Real-time resource usage – Real-time usage of CPU cores, RAM, and GPUs.
+* Ongoing workloads by type – The number of running notebook servers (Workspaces) and training jobs (Experiments).
+
+Clicking the cluster guides you to the **Summary** tab which holds more detailed information about the cluster.
+
+#### (1) Summary
+
+The summary section presents the basic information about the cluster including the connection and incident status.
+
+
+
+
+#### (2) Quotas & Usage
+
+Quotas & Usage shows the organization-wide and personal resource quotas for the cluster, including the number of GPU hours and occupiable GPUs and CPUs. These are set by the organization admin. Refer to the next section of the documentation on VESSL Clusters' cluster-administration features.
+
+
+
+#### (3) Resource Statistics
+
+This section shows you how much CPU, GPU, and memory have been requested (and allocated) and are currently being used.
+
+
+
+#### (4) Workloads
+
+This section shows all ongoing workloads on the cluster with information on the occupying node, resource consumption, creator, and creation date. If you are an organization admin, clicking the workload name guides you to the detailed workload page under **Projects** or **Workspaces**.
+
+
+
+## Node-level Monitoring
+
+Under **Nodes**, you can view all the worker nodes tied to the cluster with their real-time CPU, Memory, and GPU usage, ongoing workloads by their type, and incident status. You can select the checkbox to get more in-depth information.
+
+
+
+#### (1) Metadata
+
+
+
+#### (2) System metrics
+
+
+
+#### (3) Workloads
+
+
+
+#### (4) Issues
+
+
+
+## Workload-level Monitoring
+
+Under **Workloads**, you can view the workload log related to the cluster with the current status, occupying node, resource consumption, and a visualization of the usage history.
+
+
+
+
+
+Integrating more powerful, multi-node GPU clusters for your team is as easy as integrating your personal laptop. To make the process easier, we've prepared a **single-line curl command** that installs all the binaries and dependencies on your server.
+
+## Step-by-step Guide
+
+
+
+Upon installing all the dependencies, the command returns a follow-up command with a token. You can use this to add worker nodes to the control plane. If you don't want to add an additional worker node, you can skip to the next step.
+
+```bash
+curl -sSLf https://install.vessl.ai/bootstrap-cluster/bootstrap-cluster.sh | sudo bash -s -- --role worker --token '[TOKEN_HERE]'
+```
+
+You can confirm that your control plane and worker node have been successfully configured using a `k0s` command.
+
+```bash
+sudo k0s kubectl get nodes
+```
+
+
+
+
+### (2) VESSL integration
+
+You are now ready to integrate the Kubernetes cluster with VESSL. Make sure you have the VESSL client installed on the server and configured for your organization.
+
+```bash
+pip install vessl --upgrade
+```
+
+```bash
+vessl configure
+```
+
+The following single-line command connects your Kubernetes-backed GPU cluster to VESSL.
+
+```bash
+vessl cluster create
+```
+
+Follow the prompts to set your configuration options. You can press `Enter` to use the default values.
+
+By this point, you have successfully completed the integration.
+
+### (3) Confirm integration
+
+You can use the VESSL CLI or visit **Clusters** to confirm your integration.
+
+```bash
+vessl cluster list
+```
+
+
+
+
+### Common troubleshooting commands
+
+Here are common problems that our users face as they integrate on-premise clusters. You can use the `journalctl` command to get a more detailed log of the issue. Please share this log when you reach out for support.
+
+```
+sudo journalctl -u k0scontroller | tail -n 20
+```
+
+#### VesslApiException: PermissionDenied (403): Permission denied.
+
+```
+kernel_cluster.py:111] VESSL cluster agent installed. Waiting for the agent to be connected with VESSL...
+_base.py:107] VesslApiException: PermissionDenied (403): Permission denied.
+```
+
+It's likely that you don't have full access to install VESSL cluster agent on the server. Contact your organization's cluster and infrastructure administrator to receive help.
+
+#### VesslApiException: NotFound (404) Requested entity not found.
+
+```
+kernel_cluster.py:289] Existing VESSL cluster installation found! getting cluster information...
+_base.py:107] VesslApiException: NotFound (404) Requested entity not found.
+```
+
+Try again after running the following command:
+
+```bash
+sudo helm uninstall vessl -n vessl --kubeconfig="/var/lib/k0s/pki/admin.conf"
+```
+
+#### Invalid value: "k0s-ctrl-[HOSTNAME]"
+
+```
+leaderelection.go:334] error initially creating leader election record: Lease.coordination.k8s.io "k0s-ctrl-[HOSTNAME]" is invalid: metadata.name: Invalid value: "k0s-ctrl-[HOSTNAME]": a lowercase RFC 1123 subdomain must consist of lowercase alphanumeric characters.
+```
+
+There is an ongoing [issue related to Kubernetes hostnames](https://github.com/kubernetes/kubernetes/issues/71140#issue-381687745) containing capital letters. Your hostname must consist of lowercase alphanumeric characters.
+
+You can solve this issue by contacting your organization's cluster and infrastructure administrator to change your hostname, or by changing your hostname yourself using the following `sudo` command:
+
+
+
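Before changing the hostname, you can check whether a candidate satisfies the lowercase rule from the error message. A simplified single-label check (real RFC 1123 subdomains may also contain dots):

```python
import re

_RFC1123_LABEL = re.compile(r"^[a-z0-9]([a-z0-9-]*[a-z0-9])?$")

def valid_k0s_hostname(hostname: str) -> bool:
    """True if the hostname is lowercase alphanumeric (hyphens allowed
    inside), which avoids the k0s lease-name error above."""
    return bool(_RFC1123_LABEL.match(hostname))

# valid_k0s_hostname("GPU-Node-1") is False; "gpu-node-1" passes.
```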
+VESSL enables seamless scaling of containerized ML workloads from a personal laptop to cloud instances or Kubernetes-backed on-premise clusters.
+
+While VESSL comes with an out-of-the-box, fully managed AWS cloud, you can also integrate **an unlimited number** of (1) personal Linux machines, (2) on-premise GPU servers, and (3) private clouds. You can then use VESSL as a single point of access to multiple clusters.
+
+VESSL Clusters simplifies the end-to-end management of large-scale, organization-wide ML infrastructure, from integration to monitoring. These features are available under **Clusters**.
+
+* **Single-command Integration** – Set up a hybrid or multi-cloud infrastructure with a single command.
+* **GPU-accelerated workloads** – Run training, optimization, and inference tasks on GPUs in seconds.
+* **Resource optimization** – Match and scale workloads automatically based on the required compute resources.
+* **Cluster Dashboard** – Monitor real-time usage and incident & health status of clusters down to each node.
+* **Reproducibility** – Record runtime metadata such as hardware and instance specifications.
+
+
\ No newline at end of file
diff --git a/guides/clusters/quotas.md b/guides/clusters/quotas.md
new file mode 100644
index 0000000000000000000000000000000000000000..d4237cb44a501cfff0a034a1398d152e0c39b038
--- /dev/null
+++ b/guides/clusters/quotas.md
@@ -0,0 +1,25 @@
+---
+title: Quotas and limits
+version: EN
+---
+
+
+
+Under **Cluster Quotas**, you can set quotas for the entire Organization or certain user groups based on the number of GPU hours, occupiable GPUs, run hours, and disk size.
+
+* Shared Quotas – Set a quota for all organizations or users that have access to the cluster.
+* Individually Defined Quotas – Set individually defined quotas for specific organizations or users.
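Conceptually, admission under a GPU-hour quota is just a remaining-budget check. An illustrative sketch (not VESSL's actual admission logic):

```python
def within_quota(used_gpu_hours: float, requested_gpu_hours: float,
                 quota_gpu_hours: float) -> bool:
    """Admit a workload only if its requested GPU hours fit in the
    remaining quota."""
    return used_gpu_hours + requested_gpu_hours <= quota_gpu_hours

# e.g. 90 hours used + 10 requested fits a 100-hour quota; 11 would not.
```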
+
+## Step-by-step Guide
+
+Click **Add new quota** and set the following parameters.
+
+
+
+
diff --git a/guides/clusters/remove.md b/guides/clusters/remove.md
new file mode 100644
index 0000000000000000000000000000000000000000..1b0595db17e6dd44050d5b034df0c5894e258f92
--- /dev/null
+++ b/guides/clusters/remove.md
@@ -0,0 +1,28 @@
+---
+title: Remove a cluster
+version: EN
+---
+
+
+
+
+
+#### Remove all dependencies from the cluster
+
+To remove all dependencies including the VESSL cluster agent from the cluster, run the following command.
+
+```
+docker stop vessl-k0s && docker rm vessl-k0s
+docker run -it --rm --privileged --pid=host alpine nsenter -t 1 -m -- sh -c "rm -rfv /var/lib/k0s"
+```
+
diff --git a/guides/clusters/specs.md b/guides/clusters/specs.md
new file mode 100644
index 0000000000000000000000000000000000000000..47526d3b457d02a455793e5624415f1e88d2b797
--- /dev/null
+++ b/guides/clusters/specs.md
@@ -0,0 +1,58 @@
+---
+title: Default resource specs
+version: EN
+---
+
+
+
+Under **Resource Specs**, you can define custom resource presets and restrict users to launching ML workloads only with those presets. You can also specify the **priority** of the defined options. For example, when you set the resource specs as above, users will only be able to select the four options below.
+
+
+
+These default options help admins optimize resource usage by (1) preventing someone from occupying an excessive number of GPUs and (2) preventing unbalanced resource requests that cause skewed resource usage. Average users, meanwhile, can simply get going without having to work out the exact number of CPU cores and amount of memory to request.
+
+## Step-by-step Guide
+
+Click **New resource spec** and define the following parameters.
+
+
+
+* **Name** – Set a name for the preset. Use names that describe the preset well, like `a100-2.mem-16.cpu-6`.
+* **Processor type** – Define the preset by processor type, either CPU or GPU.
+* **CPU limit** – Enter the number of CPUs. For `a100-2.mem-16.cpu-6`, enter `6`.
+* **Memory limit** – Enter the amount of memory in GB. For `a100-2.mem-16.cpu-6`, enter `16`.
+* **GPU type** – Specify which GPU you are using. You can get this information with the `nvidia-smi` command on your server. In the following example, the value is `a100-sxm-80gb`.
+
+```bash
+nvidia-smi
+```
+
+```bash
+Thu Jan 19 17:44:05 2023
++-----------------------------------------------------------------------------+
+| NVIDIA-SMI 510.73.08 Driver Version: 510.73.08 CUDA Version: 11.6 |
+|-------------------------------+----------------------+----------------------+
+| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
+| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
+| | | MIG M. |
+|===============================+======================+======================|
+| 0 NVIDIA A100-SXM... On | 00000000:01:00.0 Off | 0 |
+| N/A 40C P0 64W / 275W | 0MiB / 81920MiB | 0% Default |
+| | | Disabled |
++-------------------------------+----------------------+----------------------+
+```
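If you want to derive that value programmatically, `nvidia-smi --query-gpu=name --format=csv,noheader` prints the bare GPU name, which you can normalize into a slug. The exact slug format VESSL expects is inferred from the `a100-sxm-80gb` example above, so verify it against your cluster:

```python
import re

def gpu_type_slug(nvidia_smi_name: str) -> str:
    """Normalize a GPU name reported by nvidia-smi into a lowercase,
    hyphen-separated slug (vendor prefix dropped)."""
    slug = nvidia_smi_name.lower().replace("nvidia", "").strip()
    return re.sub(r"[\s/]+", "-", slug)

# gpu_type_slug("NVIDIA A100-SXM4-80GB") -> "a100-sxm4-80gb"
```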
+
+* **GPU limit** – Enter the number of GPUs. For `gpu2.mem16.cpu6`, enter `2`. You can also enter decimal values if you are using Multi-Instance GPUs (MIG).
+* **Priority** – Using different priority values disables the FIFO scheduler and assigns workloads according to priority, with lower priority values scheduled first. The example preset below always puts workloads running on `gpu-1` ahead of any other workloads.
+
+
+
+* **Available workloads** – Select the type of workloads that can use the preset. With this, you can guide users toward **Experiments** by preventing them from running **Workspaces** with 4 or 8 GPUs.
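The priority rule described above amounts to sorting pending workloads by (priority, arrival order). A sketch of that behavior (not VESSL's scheduler code):

```python
def schedule(pending: list[tuple[int, int, str]]) -> list[str]:
    """Order pending workloads: lower priority value first, FIFO within
    equal priorities. Each item is (priority, arrival_seq, name)."""
    return [name for _, _, name in sorted(pending)]

# schedule([(2, 0, "ws-a"), (1, 1, "exp-b"), (1, 2, "exp-c")])
# -> ["exp-b", "exp-c", "ws-a"]
```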
\ No newline at end of file
diff --git a/guides/datasets/create.md b/guides/datasets/create.md
new file mode 100644
index 0000000000000000000000000000000000000000..9af96e2ea8281d5a672bf9cc69efd5458456b450
--- /dev/null
+++ b/guides/datasets/create.md
@@ -0,0 +1,60 @@
+---
+title: Create a new dataset
+version: EN
+---
+
+When you click **NEW DATASET** on the **DATASET** page, you will be asked to add a new dataset from either a local or an external data source. You have three data provider options: VESSL, Amazon Simple Storage Service, and Google Cloud Storage.
+
+
+
+
+
+
+
+
+ Since **VESSL does not have access to the local dataset**, you cannot browse local dataset files on VESSL.
+
+
+### Dataset Versioning (Enterprise only)
+
+**Dataset Version** is a specific snapshot of a dataset captured at a particular point in time. To enable this feature, check `Enable Versioning` when creating the dataset.
+
+A Dataset Version can be created manually on the `VERSIONS` tab, or automatically when an experiment is created, to provide reproducibility of the experiment. You can also choose the specific dataset version to use when creating an experiment.
+
+
+
diff --git a/guides/datasets/tips.md b/guides/datasets/tips.md
new file mode 100644
index 0000000000000000000000000000000000000000..2407248c321202366764ed33d989713ecdaf666a
--- /dev/null
+++ b/guides/datasets/tips.md
@@ -0,0 +1,23 @@
+---
+title: Tips & Limitations
+version: EN
+---
+
+### CIFS mount
+
+VESSL provides a FlexVolume storage type to support CIFS mounts.
+
+1. Install [CIFS FlexVolume plugin](https://github.com/fstab/cifs).
+2. Create `secret.yml` for CIFS mount.
+3. Fill the options in the create dataset dialog.
+
+
+
+### Use other mount options not supported by VESSL
+
+By using a HostPath mount, you can work around this limitation and use mount options that VESSL does not natively support.
+
+1. Mount the storage on all host machines, in the same path. (e.g. `/mnt/s3fs-mnist-data`)
+2. Mount dataset with the HostPath option.
diff --git a/guides/experiments/create.md b/guides/experiments/create.md
new file mode 100644
index 0000000000000000000000000000000000000000..5bb75b43d6b3cf9fee6ec1c1067ff6d6fa2112e4
--- /dev/null
+++ b/guides/experiments/create.md
@@ -0,0 +1,125 @@
+---
+title: Create a new experiment
+---
+
+To create an experiment, first specify a few options such as cluster, resource, image, and start command. Here is an explanation of the config options.
+
+
+
+### Cluster & Resource (Required)
+
+You can run your experiment on either VESSL's managed cluster or your custom cluster. Start by selecting a cluster.
+
+
+
+
+ You also have an option to use spot instances.
+
+ Check out the full list of resource types and corresponding prices:
+
+
+
+
+
+### Image (Required)
+
+Select the Docker image that the experiment container will use. You can either use a managed image provided by VESSL or your own custom image.
+
+
+
+
+ #### Private Images
+
+ To pull images from the private Docker registry, you should first integrate your credentials in organization settings.
+
+ Then, check the private image checkbox, fill in the image URL, and select the credential.
+
+
+
+
+### Volume (Optional)
+
+You can mount the project, dataset, and files to the experiment container.
+
+
+
+
+Learn more about volume mount on the following page:
+
+### Hyperparameters
+
+You can set hyperparameters as key-value pairs. The given hyperparameters are automatically added to the container as environment variables with the given key and value. A typical experiment will include hyperparameters like `learning_rate` and `optimizer`.
+
+
+
+You can also use them at runtime by appending them to the start command as follows.
+
+```bash
+python main.py \
+ --learning-rate $learning_rate \
+ --optimizer $optimizer
+```
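Since hyperparameters arrive as environment variables, your training script can also read them directly. A minimal sketch (the helper name is ours):

```python
import os

def get_hyperparameters(defaults: dict[str, str]) -> dict[str, str]:
    """Read hyperparameters injected as environment variables, falling
    back to a default when a key is not set."""
    return {key: os.environ.get(key, default) for key, default in defaults.items()}

# params = get_hyperparameters({"learning_rate": "3e-4", "optimizer": "adam"})
# lr = float(params["learning_rate"])
```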
+
+### Termination Protection
+
+Checking the termination protection option puts an experiment in idle status once it completes running, so you can still access the container of the finished experiment.
+
+
\ No newline at end of file
diff --git a/guides/experiments/distributed.md b/guides/experiments/distributed.md
new file mode 100644
index 0000000000000000000000000000000000000000..3fe721e2cd1f98ad40cab3918efddf26549b3ac9
--- /dev/null
+++ b/guides/experiments/distributed.md
@@ -0,0 +1,78 @@
+---
+title: Run distributed training jobs
+---
+
+
+
+In a VESSL-managed experiment, files under the output volume are saved by default. In a local experiment, you can use [`vessl.upload`](../../api-reference/python-sdk/utils/vessl.upload.md) to upload any output files. You can view these files under [**FILES**](experiment-results.md#files).
+
+By default, VESSL will stop monitoring your local experiment when your program exits. If you wish to stop it manually, you can use [`vessl.finish`](../../api-reference/python-sdk/utils/vessl.finish.md).
diff --git a/guides/experiments/manage.md b/guides/experiments/manage.md
new file mode 100644
index 0000000000000000000000000000000000000000..dc3a6a8c60146cbdf3ca8d878908d8917ea5b94d
--- /dev/null
+++ b/guides/experiments/manage.md
@@ -0,0 +1,62 @@
+---
+title: Manage experiments
+---
+
+Under the experiments page, you can view the details of each experiment such as experiment status and logs. Here, you can also terminate or reproduce experiments.
+
+
+
+### Experiment Status
+
+| Type | Description |
+| ------------- | --------------------------------------------------------------------------------------------------------------------- |
+| **Pending** | An experiment is created with a pending status until the experiment node is ready. (VESSL-managed experiment only) |
+| **Running** | The experiment is running. |
+| **Completed** | The experiment has successfully finished (exited with code 0). |
+| **Idle** | The experiment is completed but still approachable due to the termination protection. (VESSL-managed experiment only) |
+| **Failed** | The experiment has unsuccessfully finished. |
+
+
+
+### Terminating Experiments
+
+You can stop a running experiment and delete its pod.
+
+### Unpushed Changes
+
+A warning titled **UNPUSHED CHANGES** will appear in the experiment details if you run an experiment through the CLI without pushing your local changes to GitHub. To resolve this, download the `.patch` file containing the `git diff` and apply it by running the following commands.
+
+```
+# Change directory to your project
+cd path/to/project
+
+# Checkout your recent commit with SHA
+git checkout YOUR_RECENT_COMMIT_SHA
+
+# Apply .patch file to the commit
+git apply your_git_diff.patch
+```
diff --git a/guides/experiments/monitor.md b/guides/experiments/monitor.md
new file mode 100644
index 0000000000000000000000000000000000000000..4a7bc6dd1f15a3a5b381710d9843895bc78a2f25
--- /dev/null
+++ b/guides/experiments/monitor.md
@@ -0,0 +1,53 @@
+---
+title: Monitor experiment results
+---
+
+### Experiment Summary
+
+Under **SUMMARY**, you can view all the experiment configurations such as environment variables, quick reproduce via CLI, Docker image, and resource specification.
+
+
+
+### Logs
+
+Under **LOGS**, you can monitor the logs from the experiment Docker container including status updates and `print()` statements.
+
+
+
+### Plots
+
+
+
+#### Multimedia
+
+You can also view images. You can configure the number of displayed images using VESSL's Python SDK.
+
+
+
+#### System Metrics
+
+You can monitor system metrics such as CPU, GPU, memory, disk, and network usage.
+
+
+
+### Files
+
+Under **FILES**, you can navigate and download the output and input files. You can also do this using VESSL Client CLI.
+
+
diff --git a/guides/experiments/overview.md b/guides/experiments/overview.md
new file mode 100644
index 0000000000000000000000000000000000000000..fa8893f2aeb20d5fe60e6e1f262b575d070936fa
--- /dev/null
+++ b/guides/experiments/overview.md
@@ -0,0 +1,13 @@
+---
+title: Overview
+---
+
+An **experiment** is a single machine learning run in a project with a specific dataset. The experiment results consist of logs, metrics, and artifacts, which you can find under corresponding tabs.
+
+### Managing Experiments on Web Console
+
+You can find a list of experiments under each project page. You can delve into the details of an experiment by clicking its name.
+
+
\ No newline at end of file
diff --git a/guides/get-started/gpu-notebook.md b/guides/get-started/gpu-notebook.md
new file mode 100644
index 0000000000000000000000000000000000000000..be68c7241a445daf6e71f6957bc25ea1381e29cb
--- /dev/null
+++ b/guides/get-started/gpu-notebook.md
@@ -0,0 +1,160 @@
+---
+title: GPU-accelerated Notebook
+description: Launch a Jupyter Notebook server with an SSH connection
+icon: "circle-1"
+version: EN
+---
+
+This example deploys a Jupyter Notebook server. You will also learn how you can connect to the server on VS Code or an IDE of your choice.
+
+
+
+- Launch a GPU-accelerated interactive workload
+- Set up a Jupyter Notebook
+- Use SSH to connect to the workload
+
+
+## Writing the YAML
+
+Let's fill in the `notebook.yml` file.
+
+
+
+2. Add the generated key to your account.
+ ```
+ vessl ssh-key add
+ ```
+
+
+3. Connect via SSH.
+ Use the workload address from the Run Summary page to connect. You are ready to use [VS Code](https://code.visualstudio.com/docs/remote/ssh) or an IDE of your choice for remote development.
+
+
+
+ ```
+ ssh -p 22 root@34.127.82.9
+ ```
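+
+   To make the connection a single command, you can also add a matching entry to `~/.ssh/config` (the host alias and key path below are illustrative):
+
+   ```
+   Host vessl-notebook
+       HostName 34.127.82.9
+       Port 22
+       User root
+       IdentityFile ~/.ssh/id_ed25519
+   ```
+
+   With this entry, `ssh vessl-notebook` opens the session, and VS Code's Remote-SSH extension will list the host automatically.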
+
+
+## Tips & tricks
+
+Keep in mind that GPUs are fully dedicated to a notebook server (and therefore consume VESSL credits) even when you are not running compute-intensive cells. To optimize GPU usage, use tools like [nbconvert](https://nbconvert.readthedocs.io/en/latest/usage.html#executable-script) to convert the notebook into a Python file, or package it as a Python container and run it as a batch job.
+
+You can also mount volumes to interactive workloads by defining `import` and reference files or datasets from your notebook.
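+
+For example, an `import` block like the one below (the dataset path is illustrative, reusing the syntax from our fine-tuning example) mounts a VESSL dataset into the notebook container:
+
+```yaml
+import:
+  /dataset/: vessl-dataset://vessl-ai/code_instructions_small_alpaca
+```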
+
+## Using our web interface
+
+You can repeat the same process on the web. Head over to your [Organization](https://vessl.ai), select a project, and create a New run.
+
+
+
+## What's next?
+
+Next, let's see how you can use our interactive workloads to host a web app on the cloud using tools like Streamlit and Gradio.
+
+
+
+- Fine-tune an LLM with zero-to-minimum setup
+- Mount a custom dataset
+- Store and export model artifacts
+
+## Writing the YAML
+
+Let's fill in the `llama2_fine-tuning.yml` file.
+
+
+
+## Behind the scenes
+
+With VESSL AI, you can launch a full-scale LLM fine-tuning workload on any cloud, at any scale, without worrying about the underlying system backends.
+
+* **Model checkpointing**: VESSL AI stores `.pt` files to mounted volumes or the model registry and ensures seamless checkpointing of fine-tuning progress.
+* **GPU failovers**: VESSL AI can autonomously detect GPU failures, recover failed containers, and automatically re-assign workloads to other GPUs.
+* **Spot instances**: Spot instances on VESSL AI work with model checkpointing and export volumes, safely saving and resuming the progress of interrupted workloads.
+* **Distributed training**: VESSL AI comes with native support for PyTorch `DistributedDataParallel` and simplifies the process of setting up multi-cluster, multi-node distributed training.
+* **Autoscaling**: As more GPUs are released from other tasks, you can dedicate more GPUs to fine-tuning workloads. You can do this on VESSL AI by adding the following to your existing fine-tuning YAML.
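+
+As a sketch, modeled on the `autoscaling` block used in VESSL Serve manifests (the exact keys for fine-tuning workloads may differ):
+
+```yaml
+autoscaling:
+  min: 1      # keep at least one replica running
+  max: 4      # scale out as more GPUs become available
+  metric: cpu # utilization metric to scale on
+  target: 60  # target utilization (%)
+```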
+
+## Tips & tricks
+
+In addition to the model checkpoints, you can track key metrics and parameters with `vessl.log` Python SDK. Here's a snippet from [finetuning.py](https://github.com/vessl-ai/hub-model/blob/a74e87564d0775482fe6c56ff811bd8a9821f809/llama2-finetuning/finetuning.py#L97-L109).
+
+```python
+import vessl
+from transformers import TrainerCallback
+
+
+class VesslLogCallback(TrainerCallback):
+ def on_log(self, args, state, control, logs=None, **kwargs):
+ if "eval_loss" in logs.keys():
+ payload = {
+ "eval_loss": logs["eval_loss"],
+ }
+ vessl.log(step=state.global_step, payload=payload)
+ elif "loss" in logs.keys():
+ payload = {
+ "train_loss": logs["loss"],
+ "learning_rate": logs["learning_rate"],
+ }
+ vessl.log(step=state.global_step, payload=payload)
+```
+
+## Using our web interface
+
+You can repeat the same process on the web. Head over to your [Organization](https://vessl.ai), select a project, and create a New run.
+
+
+
+## What's next?
+
+We shared how you can use VESSL AI to go from a simple Python container to a full-scale AI workload. We hope these guides give you a glimpse of what you can achieve with VESSL AI. For more resources, follow along with our example models or use cases.
+
+
+
+
+
+
+
+- Launch a GPU-accelerated workload
+- Set up a runtime for your model
+- Mount a Git codebase
+- Run a simple command
+
+## Installation & setup
+
+After creating a free account at [vessl.ai](https://vessl.ai), install our Python package and authenticate with your API token. Set the primary Organization and Project for your account and let's get going.
+
+```bash
+pip install --upgrade vessl
+vessl configure
+```
+
+## Writing the YAML
+
+Launching a workload on VESSL AI begins with writing a YAML file. Our quickstart YAML is in four parts:
+
+- Compute resource -- typically in terms of GPUs -- this is defined under `resources`
+- Runtime environment that points to a Docker Image -- this is defined under `image`
+- Input & output for code, dataset, and others defined under `import` & `export`
+- Run commands executed inside the workload as defined under `run`
+
+Let's start by creating `quickstart.yml` and define the key-value pairs one by one.
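+
+A minimal sketch of what this could look like (the cluster, preset, image, and repository values are illustrative):
+
+```yaml
+name: quickstart
+description: Run a hello-world Python script
+resources:
+  cluster: vessl-gcp-oregon
+  preset: v1.l4-1.mem-27   # 1x NVIDIA L4 GPU
+image: nvcr.io/nvidia/pytorch:22.09-py3
+import:
+  /code/:
+    git:
+      url: https://github.com/vessl-ai/examples
+      ref: main
+run:
+  - command: python main.py
+    workdir: /code/
+```
+
+You can then launch it with `vessl run create -f quickstart.yml`.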
+
+
+
+Running the command will verify your YAML and show you the current status of the workload. Click the output link in your terminal to see the full details and real-time logs of the Run on the web, including the result of the run command.
+
+
+
+## Behind the scenes
+
+When you `vessl run`, VESSL AI performs the following as defined in `quickstart.yml`:
+
+1. Launch an empty Python container on the cloud with 1 NVIDIA L4 Tensor Core GPU.
+2. Configure the runtime with a CUDA-enabled PyTorch 22.09 image.
+3. Mount a GitHub repo and set the working directory.
+4. Execute `main.py` and print `Hello, world!`.
+
+## Using our web interface
+
+You can repeat the same process on the web. Head over to your [Organization](https://vessl.ai), select a project, and create a New run.
+
+
+
+## What's next?
+
+Now that you've run a barebone workload, continue with our guide to launch a Jupyter server and host a web app.
+
+
+
+- Host a GPU-accelerated web app built with [Streamlit](https://streamlit.io/)
+- Mount model checkpoints from [Hugging Face](https://huggingface.co/)
+- Open up a port to an interactive workload for inference
+
+## Writing the YAML
+
+Let's fill in the `stable-diffusion.yml` file.
+
+
+
+
+
+## Using our web interface
+
+You can repeat the same process on the web. Head over to your [Organization](https://vessl.ai), select a project, and create a New run.
+
+
+
+## What's next?
+
+See how VESSL AI takes care of the infrastructural challenges of fine-tuning a large language model with a custom dataset.
+
+
+
+### Creating a model from experiment
+
+
+
+#### 2. Create a model from the experiment detail page
+
+If you find an experiment whose output files you want to create a model from, you can create one by clicking the `Create Model` button under the `Actions` button on the experiment detail page. Select the model repository and click `SELECT` in the dialog.
+
+
+
+Then, set the model description and tag, and choose the desired directory among the output files of the experiment on the model create page. You can include or exclude specific directories in the output files checkbox section.
+
+
+
+### Creating a model from local files
+
+Uploading checkpoint files from your local machine to VESSL is another way to use the model registry. If you select the `model from local` type when selecting **Source** in `Models > Model Repository > New Model`, you can create a model by uploading local files.
+
+
diff --git a/guides/models/deploy.md b/guides/models/deploy.md
new file mode 100644
index 0000000000000000000000000000000000000000..e1978831acabbdcb1bdb6ab0317bca23e7a5fc74
--- /dev/null
+++ b/guides/models/deploy.md
@@ -0,0 +1,114 @@
+---
+title: Deploy models
+version: EN
+---
+
+You can use VESSL to quickly deploy your models into production for use from external applications via APIs. You can register a model via the SDK and deploy it in the Web UI in one click.
+
+### Register a model using the SDK
+
+A model file cannot be deployed on its own - we need to provide instructions on how to set up the server, handle requests, and send responses. This step is called registering a model.
+
+There are two ways you can register a model. One is to use an existing model - that is, a VESSL model exists and a model file is stored in it. The other is to train a model from scratch and register it. The two options are further explained below.
+
+### 1. Register an existing model
+
+In most cases, you will already have a trained model and have the file ready, either through [VESSL's experiment](../experiment/creating-an-experiment.md#creating-an-experiment) or in your local environment. After [creating a model](creating-a-model.md#creating-a-model), you will need to register it using the SDK. The below example shows how you can do so.
+
+```python
+import torch
+import torch.nn as nn
+from io import BytesIO
+
+import vessl
+
+class Net(nn.Module):
+    # Define your model's layers here (elided in this example)
+    ...
+
+class MyRunner(vessl.RunnerBase):
+ @staticmethod
+ def load_model(props, artifacts):
+ model = Net()
+ model.load_state_dict(torch.load("model.pt"))
+ model.eval()
+ return model
+
+ @staticmethod
+ def preprocess_data(data):
+ return torch.load(BytesIO(data))
+
+ @staticmethod
+ def predict(model, data):
+ with torch.no_grad():
+ return model(data).argmax(dim=1, keepdim=False)
+
+ @staticmethod
+ def postprocess_data(data):
+ return {"result": data.item()}
+
+vessl.configure()
+vessl.register_model(
+ repository_name="my-repository",
+ model_number=1,
+ runner_cls=MyRunner,
+ requirements=["torch"],
+)
+```
+
+First, we redefine the layers of the torch model. (This is assuming we only saved the `state_dict`, or the model's parameters. If you saved the model's layers as well, you do not have to redefine the layers.)
+
+Then, we define a `MyRunner` which inherits from `vessl.RunnerBase`, which provides instructions for how to serve our model. You can read more about each method [here](../../api-reference/python-sdk/auto-generated/serving.md#runnerbase).
+
+Finally, we register the model using `vessl.register_model`. We specify the repository name and number, pass `MyRunner` as the runner class we will use for serving, and list any requirements to install.
+
+After executing the script, you should see that two files have been generated: `vessl.manifest.yaml`, which stores metadata and `vessl.runner.pkl`, which stores the runner binary. Your model has been registered and is ready for serving.
+
+### 2. Register a model from scratch
+
+In some cases, you will want to train the model and register it within one script. You can use `vessl.register_model` to register a new model as well:
+
+```python
+# Your training code
+# model.fit()
+
+vessl.configure()
+vessl.register_model(
+ repository_name="my-repository",
+ model_number=None,
+ runner_cls=MyRunner,
+ model_instance=model,
+ requirements=["tensorflow"],
+)
+```
+
+After executing the script, you should see that three files have been generated: `vessl.manifest.yaml`, which stores metadata, `vessl.runner.pkl`, which stores the runner binary, and `vessl.model.pkl`, which stores the trained model. Your model has been registered and is ready for serving.
+
+#### PyTorch models
+
+If you are using PyTorch, there is an easier way to register your model. You only need to optionally define `preprocess_data` and `postprocess_data` - the other methods are autogenerated.
+
+```python
+# Your training code
+# for epoch in range(epochs):
+# train(model, epoch)
+
+vessl.configure()
+vessl.register_torch_model(
+ repository_name="my-model",
+ model_number=1,
+ model_instance=model,
+ requirements=["torch"],
+)
+```
+
+
diff --git a/guides/models/overview.md b/guides/models/overview.md
new file mode 100644
index 0000000000000000000000000000000000000000..4f75380c3e3caa678ed1c14144f080c032961899
--- /dev/null
+++ b/guides/models/overview.md
@@ -0,0 +1,14 @@
+---
+title: Overview
+version: EN
+---
+
+A **model registry** is a centralized space for managing model versions. Keeping models in an independent space, separate from code and datasets, strengthens model governance and security. You can collaborate with organization members by monitoring model status and performance.
+
+### Managing models on Web Console
+
+To view and manage model repositories and a list of versioned models, click **MODELS** on the web console.
+
+
diff --git a/guides/organization/billing.md b/guides/organization/billing.md
new file mode 100644
index 0000000000000000000000000000000000000000..c0c654d3ebd6889c6f2eb04c0a45d71e86454cc2
--- /dev/null
+++ b/guides/organization/billing.md
@@ -0,0 +1,16 @@
+---
+title: Manage billing
+version: EN
+---
+
+# Billing Information
+
+You can check your payment information and credit balance on the billing page.
+
+
+
+### How is usage calculated?
+
+VESSL charges based on compute, storage, and network usage. Check our pricing table for each resource type.
\ No newline at end of file
diff --git a/guides/organization/create.md b/guides/organization/create.md
new file mode 100644
index 0000000000000000000000000000000000000000..35b2ee9a61609f16fe54684626bbe9d9c75a025f
--- /dev/null
+++ b/guides/organization/create.md
@@ -0,0 +1,18 @@
+---
+title: Create a new Organization
+version: EN
+---
+
+Once you have signed up, you can create or add an organization by clicking **ADD ORGANIZATION** on the organizations page. You can always come back to this page by clicking the VESSL logo in the top left corner.
+
+
+
+**Name** your organization and choose the default **region** for the organization. For a detailed guide to specifying regions, refer to [AWS Regions and Zones](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) page.
+
+
diff --git a/guides/organization/integrations.md b/guides/organization/integrations.md
new file mode 100644
index 0000000000000000000000000000000000000000..153d4038bf74e2a04e5a28069d8003be2f7118ef
--- /dev/null
+++ b/guides/organization/integrations.md
@@ -0,0 +1,40 @@
+---
+title: Add integrations
+version: EN
+---
+
+### Integrating your service to VESSL
+
+You can integrate various services with VESSL using AWS and Docker credentials and SSH keys. The integrated AWS and Docker credentials are used to manage private Docker images, while the SSH keys are used as authorized keys for SSH connections.
+
+### AWS Credentials
+
+To integrate your AWS account, you need an AWS access key associated with your [IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html). You can create this key by following this [guide from AWS](https://docs.aws.amazon.com/IAM/latest/UserGuide/id\_credentials\_access-keys.html). Once you have your key, click **ADD INTEGRATION** and fill in the form.
+
+
+
+You can integrate multiple AWS credentials to your organization. You can also revoke your credentials by simply clicking the trash button on the right.
+
+
+
+
+
+### GitHub
+
+To integrate your GitHub account, click **ADD INTEGRATION**. Grant repository access to the VESSL App in the repository access section and click **Save** in GitHub.
+
+
diff --git a/guides/organization/members.md b/guides/organization/members.md
new file mode 100644
index 0000000000000000000000000000000000000000..02675d1fa011c34979b0b73b996230756453f9b7
--- /dev/null
+++ b/guides/organization/members.md
@@ -0,0 +1,12 @@
+---
+title: Add members
+version: EN
+---
+
+### Collaborate with your teammates
+
+Invite your teammates to your organization by sending invitation emails. If you add an email address to the **Member** list, VESSL will send an invitation link. Your teammates can sign up by clicking this link.
+
+
diff --git a/guides/organization/notification.md b/guides/organization/notification.md
new file mode 100644
index 0000000000000000000000000000000000000000..879fa7d2c9f608c09228bed8e8a24d93ac52a5f2
--- /dev/null
+++ b/guides/organization/notification.md
@@ -0,0 +1,21 @@
+---
+title: Set up notifications
+version: EN
+---
+
+You can receive either **Slack** or **email notifications** for your experiments. VESSL will notify you and your teammates when your experiment starts running, fails to run, or is completed.
+
+
+
+To add Slack Notifications,
+
+1. Click **Add to Slack**.
+2. Specify the Slack workspace and channel.
+3. Click Allow.
+
+To add Email Notifications,
+
+1. Type your email address.
+2. Click **ADD**.
diff --git a/guides/organization/overview.md b/guides/organization/overview.md
new file mode 100644
index 0000000000000000000000000000000000000000..b7003857ac2f7f5cb4704c1e614968d3a5fe262e
--- /dev/null
+++ b/guides/organization/overview.md
@@ -0,0 +1,18 @@
+---
+title: Overview
+version: EN
+---
+
+**Organization** is a shared working environment where you can create projects, datasets, experiments, and services. Once you have signed up, you can create an organization with a specified region and invite teammates to collaborate.
+
+
+
+### Managing Organization on Web Console
+
+Once you have entered an organization, you can navigate to other organizations by clicking your profile in the top right corner. If you wish to create a new organization, go back to the organization page by clicking the VESSL logo.
+
+
\ No newline at end of file
diff --git a/guides/project/create.md b/guides/project/create.md
new file mode 100644
index 0000000000000000000000000000000000000000..5ae2dd256a8b01936ebcc502ca4b406cbf87dc7a
--- /dev/null
+++ b/guides/project/create.md
@@ -0,0 +1,30 @@
+---
+title: Create a new project
+version: EN
+---
+
+# Creating a Project
+
+To create a project, click **NEW PROJECT** on the project page.
+
+
+
+
+### Basic information
+
+On the project create page, you should specify the name and the description of the project. Note that the project name should be unique within the organization.
+
+
+
+### Connect Project Repository
+
+If you want to connect a GitHub repository to the project, choose the GitHub account and the repository here. If you haven't installed the VESSL GitHub App, integrate your GitHub account in [`Organization Settings > Add Integrations`](../organization/organization-settings/add-integrations.md#github) first.
+
+
+
diff --git a/guides/project/overview.md b/guides/project/overview.md
new file mode 100644
index 0000000000000000000000000000000000000000..ecfcc348e5bbf618075784157ee056b48e0d88cb
--- /dev/null
+++ b/guides/project/overview.md
@@ -0,0 +1,14 @@
+---
+title: Overview
+version: EN
+---
+
+Machine learning projects require both code and datasets. A VESSL **project** is a conceptual space where you can easily manage code and datasets as basic elements within the organization. You can share projects with organization members to collaborate on a machine learning project.
+
+### Managing Projects on Web Console
+
+You can find the Project tab under the organization menu. Click the tab to view the full list of ongoing projects.
+
+
diff --git a/guides/project/repo-dataset.md b/guides/project/repo-dataset.md
new file mode 100644
index 0000000000000000000000000000000000000000..2dec40171b3b8d3d06c2b77507d0c34459d2f14e
--- /dev/null
+++ b/guides/project/repo-dataset.md
@@ -0,0 +1,45 @@
+---
+title: Project-level repo & datasets
+version: EN
+---
+
+A project provides a way to connect code repositories and datasets. VESSL provides the following with project repositories and project datasets:
+
+* Download code and datasets when an experiment or a sweep is created
+* Track versions and file diffs between experiments
+
+### Add Project Repository
+
+Project repositories can be configured when creating a project or in the project settings.
+
+
+
+Add a project repository and select GitHub repositories in the dialog. If you have not integrated GitHub with VESSL, you should first integrate it in the organization settings (a link is provided in the add dialog).
+
+
+
+### Add Project Dataset
+
+Project datasets can be configured in the same way as project repositories. Unlike project repositories, project datasets let you specify the mount path in the experiment or sweep.
+
+
+
+Once you have connected repositories and datasets, they are mounted by default when creating an experiment or a sweep.
+
+
+
+#### Connect cluster-scoped local dataset ([docs](../dataset/adding-new-datasets.md))
+
+You can connect a cluster-scoped dataset as a project dataset. If you create an experiment on a cluster different from the one specified in the dataset, an error may occur. To resolve it, choose the cluster specified in the dataset or continue without that dataset.
+
+
diff --git a/guides/project/summary.md b/guides/project/summary.md
new file mode 100644
index 0000000000000000000000000000000000000000..d5c9ca35c01a831738daa35d3af1f2434784e3c9
--- /dev/null
+++ b/guides/project/summary.md
@@ -0,0 +1,86 @@
+---
+title: Project summary
+version: EN
+---
+
+**Project Overview** provides a bird's-eye view of the progress of your machine learning projects. On the project overview dashboard, you can manage and track key information about your project:
+
+* **Key Metrics**: Keep track of essential evaluation metrics of your experiments such as accuracy, loss, and MSE.
+* **Sample Media**: Log images, audio, and other media from your experiment and explore your model's prediction results to compare your experiments visually.
+* **Starred Experiments**: Star and keep track of meaningful experiments.
+* **Project Notes**: Make a note of important information about your project and share it with your teammates, similar to the README.md of a Git codebase.
+
+
+
+## Key Metrics
+
+VESSL AI automatically marks metrics of best-performing experiments as key metrics. You can also manually bookmark key metrics and keep track of your model's meaningful evaluation metrics.
+
+To add or remove Key Metrics
+
+1. Click the settings icon on top of the Key Metrics card.
+
+
+
+2. Select **up to 4 metrics** and choose whether your goal is to minimize or maximize the target value.
+
+* If you select **Minimize**, an experiment with the smallest target value will be updated to the key metric charts.
+* If you select **Maximize**, an experiment with the greatest target value will be updated to the key metric chart.
+
+
+
+## Sample Media
+
+You can log images or audio clips generated from your experiment to explore your model's prediction results and make visual (or auditory) comparisons.
+
+
+
+## Starred Experiment
+
+You can mark important experiments as **Starred Experiments** to keep track of meaningful achievements in the project. Starred Experiments are displayed with their tags and key metrics.
+
+
+
+To star or unstar experiments
+
+1. Go to the experiment tracking dashboard
+2. Select experiments
+3. Click 'Star' or 'Unstar' on the dropdown menu.
+
+
+
+You can also star or unstar experiments on the experiment summary page.
+
+
+
+## Project Notes
+
+**Project Notes** is a place for noting and sharing important information about the project together with your team. It works like README.md of your Git codebase.
+
+
+
+To modify the project note, click the settings icon on top of the Project Notes card. You will be given a markdown editor to update your notes.
+
+
\ No newline at end of file
diff --git a/guides/resources/changelog.md b/guides/resources/changelog.md
new file mode 100644
index 0000000000000000000000000000000000000000..1b7211ded5534055cae59482e9967d1e3cd1c24d
--- /dev/null
+++ b/guides/resources/changelog.md
@@ -0,0 +1,38 @@
+---
+title: Changelog
+description: See what's new in VESSL AI
+icon: megaphone
+version: EN
+---
+
+## January 31, 2024
+
+
+
+### New Get started guide
+
+We've updated our documentation with a new get started guide. The new guide covers everything from a product overview to the latest use cases of our product in Gen AI & LLMs.
+
+Follow along with our new guide [here](https://run-docs.vessl.ai/docs/en/get-started/quickstart).
+
+### New & Improved
+
+- Added a new managed cloud option built on Google Cloud
+- Renamed our default managed Docker images to `torch:2.1.0-cuda12.2-r3`
+
+## December 28, 2023
+
+### Announcing VESSL Hub
+
+
+
+VESSL Hub is a collection of one-click recipes for the latest open-source models like Llama 2, Mistral 7B, and Stable Diffusion. Built on our fullstack AI infrastructure, Hub provides the easiest way to explore and deploy models.
+
+Fine-tune and deploy the latest models on our production-grade fullstack cloud infrastructure with just a single click.
+Read about the release on our [blog](https://blog.vessl.ai/vessl-hub) or try it out now at [vessl.ai/hub](https://vessl.ai/hub).
\ No newline at end of file
diff --git a/guides/resources/flare.md b/guides/resources/flare.md
new file mode 100644
index 0000000000000000000000000000000000000000..38440cc2e46ad40bed1754f54e2d5b92d4b69ee5
--- /dev/null
+++ b/guides/resources/flare.md
@@ -0,0 +1,43 @@
+---
+title: VESSL Flare
+description: VESSL Flare allows you to send troubleshooting information to VESSL support.
+icon: screwdriver-wrench
+version: EN
+---
+
+# VESSL Flare
+
+If you're having trouble with your on-premise cluster, try VESSL Flare. VESSL Flare collects the node's configuration and writes it to an archive file. It does not collect any sensitive information such as passwords, API keys, or secret strings. Send the Flare key that appears after running VESSL Flare to support@vessl.ai.
+
+VESSL Flare is [completely open source](https://github.com/vessl-ai/flare), so you can verify the behavior of the code.
+
+## Installation & Usage
+1. Prerequisites:
+
+ * Ensure your on-premise cluster meets the minimum system requirements.
+ ```
+ - Ubuntu 18.04 or above
+ - Python 3.6 or above installed
+ ```
+ * Verify network connectivity to VESSL's server endpoint.
+
+1. Installation Steps:
+
+ VESSL Flare can be run directly in command line:
+ ```bash
+ $ curl -sL flare.vessl.ai | sudo sh
+ ```
+
+1. Record Flare key
+
+ After VESSL Flare is running, you should see a message like the one below.
+    Save the Flare key shown in the message and provide it to support.
+ ```
+ ...
+ Successfully uploaded Flare output!
+ Please provide the following information to VESSL Support:
+
+ - Flare key: arsvxmh4fj0w // Record this, and send it to support!
+
+    E-mail: VESSL Support
+    ```
+
+Once the "New serving" button is selected, you will be directed to the "Create a new serving" page, where you can configure a new Serving instance. On this page, you can define the essential parameters of the Serving:
+
+- `Name`: Assign a distinctive name to the Serving. (Please note that the assigned name cannot be altered after creation.)
+- `Description`: Elaborate on the nature and purpose of the Serving for further clarity.
+- `Cluster`: Designate the specific cluster where the inference server will be instantiated.
+
+
+By selecting any of the Servings created within the Serving list, you can access details about the Serving on the local navigation bar.
+
+
+
+Each Serving instance offers the following menu options:
+- Monitor: a [monitoring dashboard](01-service-monitoring-dashboard.md) that provides an overview of the Serving's current status and performance.
+- Logs: A [log stream](02-service-logs.md) of the Serving's revisions and replicas.
+- Revisions: Create and manage [revisions](03-service-revisions.md) of the Serving.
+- Rollouts: Define and run no-downtime [rollouts](04-serve-rollouts.md) of the Serving.
\ No newline at end of file
diff --git a/guides/serve/logs.md b/guides/serve/logs.md
new file mode 100644
index 0000000000000000000000000000000000000000..20aeecff2047b7522be1282f7da8ec4e00eb5be1
--- /dev/null
+++ b/guides/serve/logs.md
@@ -0,0 +1,30 @@
+---
+title: Service logs
+version: EN
+---
+
+Logs are an essential component for debugging services. In VESSL Serve, the platform automatically collects logs from each replica comprising the model service, allowing users to view them through a dedicated dashboard. The Log tab within the service dashboard provides access to logs from replicas currently in operation.
+
+### Viewing Logs and Filtering by Revision
+
+By default, the Logs tab displays logs from all replicas currently active in the service.
+
+
+
+The Revision dropdown menu enables users to filter and view logs specific to a chosen revision of the service.
+
+
+
+After selecting a revision, users can further filter logs by specifying the target replicas they are interested in.
+
+
+
+
diff --git a/guides/serve/monitor.md b/guides/serve/monitor.md
new file mode 100644
index 0000000000000000000000000000000000000000..2d5bd63f14827087e8e5f0d4311b3a68edc251ad
--- /dev/null
+++ b/guides/serve/monitor.md
@@ -0,0 +1,62 @@
+---
+title: Monitoring dashboard
+version: EN
+---
+
+The Monitor tab within VESSL serves as a comprehensive dashboard, enabling ML teams to efficiently monitor and manage model services. This tab presents essential information at a glance, catering to the operational needs of the teams involved.
+
+
+
+
+The dashboard is divided into five sections, each providing a different perspective on the service's status and performance. These sections are:
+
+1. `Monitor`: A hostmap view of the current state of the service's revisions and replicas.
+2. `Metadata`: Detailed information about the service, including its status, the number of revisions and replicas, and the latest update.
+3. `Endpoint`: The service's endpoint and traffic distribution information.
+ - When you click the 'Edit' button, you can change the configuration of the endpoint.
+    - See the [Edit Endpoint Configuration](#edit-endpoint-configuration) section below for more details.
+4. `Workloads`: Detailed information about each replica's status.
+ - See [Replica Details](#replica-details) section below for more details.
+5. `Metrics`: The service's key metrics (CPU/GPU/RAM usage, Replica numbers, network, request throughput, error rate, etc.) in a timeseries graph.
+
+
+## Edit Endpoint Configuration
+
+When you click the 'Edit' button in the Endpoint section, you can change the configuration of the endpoint.
+
+
+
+
+
+- `Enable Endpoint`: Decide whether to actually create an Endpoint in the cluster.
+- `Host`: Set the custom domain name to connect to the Endpoint. If left blank, the cluster will automatically generate the name of the Load balancer endpoint.
+  - To connect a custom domain name to the endpoint, you must set up a DNS service such as AWS Route 53 to control DNS from the cluster. See [Set up External DNS with AWS](https://aws.amazon.com/premiumsupport/knowledge-center/eks-set-up-externaldns/).
+- `Revisions`: Select the revisions to connect to the endpoint.
+ - Set the revision number, port, and traffic weight to connect to the endpoint.
+ - The total of the traffic weight of all revisions connected must be 100%.
+
+
+
+- Advanced Settings: Various advanced options related to the cluster that operates the model service. Change these settings only if you know exactly what each one does.
+ - `Ingress Class (Advanced Settings)`: Set the Ingress Class to use. In AWS clusters, set it to `alb`.
+ - `Annotation (Advanced Settings)`: Kubernetes annotation to inform the load balancer controller, etc. when setting up the endpoint.
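+
+The traffic-weight rule for `Revisions` above can be sanity-checked in plain Python (the field names here are illustrative, not the API):
+
+```python
+# Each revision connected to the endpoint carries a traffic weight in percent.
+revisions = [
+    {"number": 1, "port": 8000, "weight": 70},
+    {"number": 2, "port": 8000, "weight": 30},
+]
+
+total = sum(r["weight"] for r in revisions)
+assert total == 100, f"traffic weights must sum to 100%, got {total}"
+```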
+
+## Replica Details
+
+A replica is the container that actually serves each model server in the cluster; it exists in the form of a Kubernetes pod. VESSL lets you directly check the list of replicas in a service and take necessary actions.
+
+
+
+- `Delete`: Delete the replica. If the replica is deleted, the cluster will automatically create a new replica to replace it.
+- `Log`: View the logs of the replica.
+- `Metrics`: Filter the metrics of the replica on the dashboard graphs.
diff --git a/guides/serve/overview.md b/guides/serve/overview.md
new file mode 100644
index 0000000000000000000000000000000000000000..730b1df307f53c61f8c03f470b885a6156f30757
--- /dev/null
+++ b/guides/serve/overview.md
@@ -0,0 +1,14 @@
+---
+title: Overview
+version: EN
+---
+
+Deploying a server to host ML models in a production environment requires careful planning to ensure they run smoothly, stay available, and can handle increased demands. This can be particularly challenging for ML engineers or small backend teams who might not be deeply familiar with complex backend setups.
+
+VESSL Serve is an essential tool for deploying models developed in VESSL, or even your custom models, as inference servers:
+ - Keep track of activities like logs, system metrics, and model performance metrics
+ - Automatically scale replicas based on resource usage
+ - Split the traffic sent to models for easier canary testing
+ - Roll out a new version of a model to production without downtime
+
+VESSL Serve simplifies the process of setting up ML services that are reliable, adaptable, and can handle varying workloads.
\ No newline at end of file
diff --git a/guides/serve/quickstart.md b/guides/serve/quickstart.md
new file mode 100644
index 0000000000000000000000000000000000000000..3496b25429144f6438ee95ecd84be8aa8ef8b273
--- /dev/null
+++ b/guides/serve/quickstart.md
@@ -0,0 +1,216 @@
+---
+title: Quickstart
+version: EN
+---
+
+This document provides a quickstart guide for VESSL Serve: managing revisions and the gateway using YAML manifests.
+
+## 1. Prepare a Model to Serve
+
+Prepare the model and service for deployment. In this document, we will use the [MNIST example](https://github.com/vessl-ai/examples/blob/main/mnist/README.md) where you can train a model and register it to the VESSL Model Registry.
+
+Use the following command in the CLI to proceed:
+
+```sh
+# Clone the example repository
+git clone git@github.com:vessl-ai/examples.git
+cd examples/mnist/pytorch
+
+# Train the model and register it to the repository
+pip install -r requirements.txt
+python main.py --output-path ./output --save-model
+
+# Register the model
+python model.py --checkpoint ./output/model.pt --model-repository mnist-example
+```
+
+
+
+
+
+## 3. Write a Manifest File for the Serving Revision
+
+Create a new serving revision. Save the following content as a file named `serve-revision.yaml`:
+
+```yaml
+message: VESSL Serve example
+image: quay.io/vessl-ai/kernels:py38-202308150329
+resources:
+ name: v1.cpu-2.mem-6
+run: vessl model serve mnist-example 1 --install-reqs
+autoscaling:
+ min: 1
+ max: 3
+ metric: cpu
+ target: 60
+ports:
+ - port: 8000
+ name: fastapi
+ type: http
+```
+
+You can easily deploy the revision defined in YAML using the VESSL CLI as shown below:
+
+```sh
+vessl serve revision create --serving mnist-example -f serve-revision.yaml
+```
+
+
+
+Refer to the [YAML schema reference](./serve-yaml-workflow/yaml-schema-reference.md) for detailed information on the YAML manifest schema.
+
+
+
+When you click "New revision," you'll see a page where you can set up the new Revision. This is where you tell VESSL which model to use, how much compute it needs, whether it should scale automatically, and how clients will connect to it.
+
+Here, you add some notes about what this Revision is for:
+
+- `Metadata`: Metadata of the revision.
+
+
+
+ - `Message`: Write a short message explaining what this Revision is meant for.
+ - `Volume Mount`: You can mount datasets, model files, code, and more as folders for use in the Revision. For more information, see [the documentation](<../../commons/volume-mount.md>).
+
+- `Deployment Spec`: Resource requirements and configuration for the Revision.
+
+
+
+ - `Resource`: CPU, RAM, and GPU resources to allocate to the Revision.
+ - `Docker Image`: The Docker Image to use for the Revision.
+ - `Start Command`: The command to run inside the container. This is like running a command in the terminal on your computer.
+ - `Service Account Name`: The Kubernetes service account to attach to the container. This is commonly used with [AWS IRSA](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) or [GKE Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity) to control which cloud resources the container can access.
+ - `Environment Variables`: Environment variables to inject into the container.
+ - `Port`: The port to expose from the container. For example, if you're using a BentoML model server, you'll want to expose port 3000 and use the HTTP protocol to access the service endpoint.
+
+- Advanced Options: Additional configuration for the Revision.
+
+
+
+ - `Autoscaling`: Automatically scale the Revision up or down based on demand.
+ - `Min`: The minimum number of replicas to keep running.
+ - `Max`: The maximum number of replicas to scale out.
+ - `Target metric`: The metric to use for scaling up or down.
+ - `CPU`: The CPU usage of the Revision.
+ - `Memory`: The memory usage of the Revision.
+ - `GPU`: The GPU usage of the Revision.
+
+ - `Launch this revision immediately`: Start the Revision as soon as it's created.
+
+
+## Actions on the Revision List
+
+Once you create a Revision, you can see basic information about the Revision in the list of revisions and quickly perform actions needed for service operation.
+
+
+
+Actions on the Revision list are as follows:
+- `Start`: Deploy the Revision to the cluster immediately according to the current settings.
+- `Stop`: Immediately stop the Revision that is deployed and running.
+- `Scale`: Adjust the number of replicas of the deployed and running Revision.
+- `Reproduce`: Create a new Revision with the same settings as the selected Revision.
+- `Delete`: Delete the selected Revision.
diff --git a/guides/serve/rollouts.md b/guides/serve/rollouts.md
new file mode 100644
index 0000000000000000000000000000000000000000..5310a05a6a9f51ae89552633c60413b8e3970767
--- /dev/null
+++ b/guides/serve/rollouts.md
@@ -0,0 +1,82 @@
+---
+title: Service rollouts
+version: EN
+---
+
+In VESSL, you can seamlessly orchestrate the entire deployment process, from performing inference tasks on servers to directing traffic on the actual servers, all in one place. The Endpoint section in the Monitor tab allows you to specify how traffic should be sent to the servers. This eliminates the need for manual intervention in every deployment, reducing the risk of errors and the hassle of repetitive tasks.
+
+VESSL introduces a powerful Rollout feature, enabling you to define the deployment process step by step in advance and execute it automatically when needed. Additionally, it offers user-friendly features like email notifications and deployment monitoring, ensuring that you can implement uninterrupted deployments with ease.
+
+Let's explore how VESSL's Rollout can be used to implement various deployment patterns:
+ - Rolling Update: With VESSL's Rollout, you can smoothly transition to a new revision by sequentially directing traffic to it. This ensures a gradual and uninterrupted rollout of the new revision.
+ - Blue-Green Deployment: In situations where two revisions of an inference server should never run simultaneously, VESSL simplifies the process. It allows you to deploy the new revision with the same workload as the existing one and then seamlessly switch endpoints, ensuring a continuous rollout of the next version.
+ - Canary / A-B Testing: By directing a portion of the traffic to the new revision, you can gather metrics and make informed decisions for the next deployment steps.
+
+## Rollout lists
+You can view and modify ongoing rollouts in the List page.
+
+
+
+
+- Pause: Temporarily halt an ongoing rollout. The current step runs to completion before the rollout pauses, and it stays paused until manually resumed.
+- Resume: Restart a paused rollout.
+- Terminate: Completely stop an ongoing rollout, marking it as a failure.
+- Reproduce: Create a new rollout identical to an existing one.
+
+## Create a Rollout
+
+To create a new rollout, click the "New rollout" button on the Rollout tab.
+
+
+
+On the rollout creation page, you can combine messages and "Steps" to define the order of deployment. Here's what each step entails:
+
+
+
+- CreateNewRevision: Generate a new revision, using the same UI as [Service Revision](../serve-web-workflow/03-service-revisions.md).
+- UpdateRevisions: Modify the deployment status and auto-scaling settings of existing or newly created revisions.
+
+ - Revision Number: Select the target revision.
+ - Min/Max Replica: Set the minimum and maximum auto-scaling range.
+ - Running: Decide whether to deploy or halt the selected revision, useful for scenarios involving the deployment of a new revision alongside the shutdown of an existing one.
+- ChangeEndpointConfig: Adjust traffic routing settings for the endpoint, following the UI described in [Change Endpoint Configuration](../serve-web-workflow/01-service-monitoring-dashboard.md#edit-endpoint-configuration)
+- Wait: Temporarily pause the rollout based on predefined settings. Three options are available:
+
+ - Time: Automatically resume after a set time.
+ - Manual: Wait until a user manually resumes the rollout from the dashboard.
+ - Metric (planned feature): Automatically resume when specific metrics criteria are met.
+- SendNotification: Dispatch notifications based on predefined settings, typically right before deployment completion or when user confirmation is required.
+
+ - Channel:
+ - Slack: Send notifications to a Slack endpoint configured in the organization settings.
+ - Email: Dispatch notifications to a designated email address.
+ - Fail Action: Determine the action when a notification fails.
+ - Skip: Continue with remaining steps.
+ - Abort: Treat the rollout as a failure.
+
+Once steps are defined, you can review, modify, reorder, or delete them in the Step detail list.
+
+- Drag the leftmost handle to change the step order.
+- Use the Edit option in the Action column to review and modify step definitions.
+- Unwanted steps can be deleted using the Delete icon in the Action column.
+
+## Reproduce Rollouts
+You can clone existing rollouts by clicking the "Reproduce" button associated with a specific rollout. Typically, once a deployment pattern is defined, it remains largely unchanged, with only revision information varying. Leveraging the "Reproduce" feature allows for the straightforward repetition of previously defined rollout steps. This offers an efficient way to maintain consistency in your deployment processes.
+
+
diff --git a/guides/serve/serving-yaml.md b/guides/serve/serving-yaml.md
new file mode 100644
index 0000000000000000000000000000000000000000..1c2eda8080c06e8726768a76a2fe4217afe07dae
--- /dev/null
+++ b/guides/serve/serving-yaml.md
@@ -0,0 +1,313 @@
+---
+title: Deploy with YAML
+version: EN
+---
+
+You can define VESSL Serve revisions and gateways via YAML and deploy them on the fly. Use this to build an infrastructure-as-code deployment strategy, or to manage deployments through code instead of manually modifying settings in the web UI.
+
+```yaml
+message: vessl-yaml-serve-test
+launch_immediately: true
+
+image: quay.io/vessl-ai/kernels:py39
+
+resources:
+ accelerators: T4:1
+ spot: true
+
+volumes:
+ /root/examples:
+ git:
+ clone: https://github.com/vessl-ai/examples
+ revision: 33a49398fc6f87265ac490b1cf587912b337741a
+
+run:
+ - workdir: /code/examples
+ command: |
+ python3 mnist.py
+
+env:
+ - key: TEST_ENV
+ value: test
+
+ports:
+ - name: http
+ type: http
+ port: 8000
+
+autoscaling:
+ min: 1
+ max: 1
+ metric: cpu
+ target: 50
+```
+
+## Revision YAML Field Types
+
+### Message
+
+Write a message for the Serving Revision. We recommend writing a distinct message for each revision so you can tell them apart.
+
+| Name | Type | Required | Description |
+| --- | --- | --- | --- |
+| message | string | Required | Description of the revision. |
+
+```yaml
+message: vessl-serve-using-yaml
+```
+
+### Launch_immediately
+
+Determines whether the revision will be deployed immediately.
+
+| Name | Type | Required | Description |
+| --- | --- | --- | --- |
+| launch_immediately | boolean | Required | Set to `true` to deploy the revision immediately after creation. |
+
+```yaml
+launch_immediately: true
+```
+
+### Image
+
+The name of the Docker image that will be used for inference. You can also use a custom Docker image.
+
+| Name | Type | Required | Description |
+| --- | --- | --- | --- |
+| image | string | Required | Docker image URL. |
+
+```yaml
+image: quay.io/vessl-ai/ngc-pytorch-kernel:22.10-py3-202306140422
+```
+
+### Resources
+
+Write down the compute resources you want to use for Serving. You can specify the resources you want to use in the Cluster settings.
+
+| Name | Type | Required | Description |
+| --- | --- | --- | --- |
+| cluster | string | Optional | The cluster to be used for the run. (default: VESSL-managed cluster) |
+| name | string | Optional | The resource spec name specified in VESSL. If the name is not specified, VESSL will offer the best option based on cpu, memory, and accelerators. |
+| cpu | string | Optional | The number of cpu cores. |
+| memory | string | Optional | The memory size in GB. |
+| accelerators | string | Optional | The type and quantity of the GPU to be used for the run. |
+| spot | boolean | Optional | Whether to use spot instances for the run or not. |
+
+```yaml
+resources:
+ cluster: vessl-tmap-gi-aiml-stg
+ accelerators: T4:1 # using T4 with 1 GPU
+ spot: true
+```
+
+
+
+### Volumes
+
+Write the datasets and volumes mounted in the Revision container when the Revision is deployed.
+
+| Prefix | Type | Required | Description |
+| --- | --- | --- | --- |
+| git:// | string | Optional | Mount a git repository into your container. The repository will be cloned into the specified mount path when the container starts. |
+| vessl-dataset:// | string | Optional | Mount a dataset stored in VESSL. Replace {organizationName} with the name of your organization and {datasetName} with the name of the dataset. |
+| s3:// | string | Optional | Mount an AWS S3 bucket into your container. Replace {bucketName} with the name of your S3 bucket and {path} with the path to the file or folder you want to mount. |
+| local:// | string | Optional | Mount a file or directory from the machine where you are running the command. This can be useful for using configuration files or other data that is not in your Docker image. |
+| hostpath:// | string | Optional | Mount a file or directory from the host node's filesystem into your container. Replace {path} with the path to the file or folder you want to mount. |
+| nfs:// | string | Optional | Mount a Network File System (NFS) share into your container. Replace {ip} with the IP address of your NFS server and {path} with the path to the file or folder you want to mount. |
+| cifs:// | string | Optional | Mount a Common Internet File System (CIFS) share into your container. Replace {ip} with the IP address of your CIFS server and {path} with the path to the file or folder you want to mount. |
+
+```yaml
+volumes:
+ /root/git-examples: git://github.com/vessl-ai/examples
+ /input/data1: hostpath:///opt/data1
+ /input/config: local://config.yml
+ /input/data2: nfs://192.168.10.2:~/
+ /input/data3: vessl-dataset://{organization_name}/{dataset_name}
+ /output:
+ artifact: true
+```
+
+You can also add an artifact flag to indicate whether the directory `/output` should be treated as an output artifact. Typically, volumes store model checkpoints or key metrics.
+
+### Run
+
+Write down what commands you want to run on the service container when the Revision is deployed.
+
+| Name | Type | Required | Description |
+| --- | --- | --- | --- |
+| workdir | string | Optional | The working directory for the command. |
+| command | string | Required | The command to be run. |
+
+```yaml
+run:
+ - workdir: /root/git-examples
+ command: |
+ python train.py --learning_rate=$learning_rate --batch_size=$batch_size
+```
+
+### Env
+
+Write down the environment variables that will be set in the Revision Service container.
+
+| Name | Type | Required | Description |
+| --- | --- | --- | --- |
+| env | map | Optional | Key-value pairs for environment variables in the run container. |
+
+```yaml
+env:
+ learning_rate: 0.001
+ batch_size: 64
+ optimizer: sgd
+```
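Inside the container, these values arrive as ordinary environment variables, so a start command should read and cast them explicitly. A minimal sketch in Python (the variable names come from the example above; the defaults are illustrative fallbacks, not part of VESSL's behavior):

```python
import os

# Values from the `env` block are injected as strings; cast before use.
# Defaults cover running the script outside VESSL.
learning_rate = float(os.environ.get("learning_rate", "0.001"))
batch_size = int(os.environ.get("batch_size", "64"))
optimizer = os.environ.get("optimizer", "sgd")

print(f"lr={learning_rate} batch={batch_size} optimizer={optimizer}")
```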
+
+### Ports
+
+Write down the ports and protocols that the Revision Service container should open.
+
+| Name | Type | Required | Description |
+| --- | --- | --- | --- |
+| name | string | Required | The name for the opening port. |
+| type | string | Required | The protocol the port will use. |
+| port | int | Required | The number of the port. |
+
+```yaml
+ports:
+ - name: web-service
+ type: http
+ port: 8000
+ - name: web-service-2
+ type: http
+ port: 8001
+...
+```
+
+### Autoscaling
+
+Sets the value for how the Revision Pod will autoscale.
+
+| Name | Type | Required | Description |
+| --- | --- | --- | --- |
+| min | int | Required | Minimum number of Pods to scale down to. |
+| max | int | Required | Maximum number of Pods to scale out to. |
+| metric | string | Required | The metric that drives autoscaling. You can select `cpu`, `gpu`, `memory`, or `custom`. |
+| target | int | Required | The metric threshold percentage. If the metric exceeds the target, the autoscaler automatically scales out. |
+
+```yaml
+autoscaling:
+ min: 1
+ max: 3
+ metric: cpu
+ target: 50
+```
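To build intuition for `target`, here is the standard Kubernetes-HPA-style scaling rule in Python. This is a sketch of how such autoscalers generally behave, not a guarantee of VESSL's exact implementation:

```python
import math

def desired_replicas(current: int, usage_pct: float, target_pct: float,
                     lo: int, hi: int) -> int:
    # HPA-style rule: scale the replica count in proportion to how far
    # the observed metric is from the target, clamped to [min, max].
    raw = math.ceil(current * usage_pct / target_pct)
    return max(lo, min(hi, raw))

# With min=1, max=3, target=50: two replicas at 80% CPU want
# ceil(2 * 80 / 50) = 4 replicas, clamped down to the max of 3.
print(desired_replicas(2, 80.0, 50.0, 1, 3))  # -> 3
```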
+
+### Simple YAML example for revision
+
+```yaml
+message: vessl-yaml-serve-test
+launch_immediately: true
+
+image: quay.io/vessl-ai/kernels:py39
+
+resources:
+ accelerators: T4:1
+ spot: true
+
+volumes:
+ /root/examples:
+ git:
+ clone: https://github.com/vessl-ai/examples
+ revision: 33a49398fc6f87265ac490b1cf587912b337741a
+
+run:
+ - workdir: /code/examples
+ command: |
+ python3 mnist.py
+
+env:
+ - key: TEST_ENV
+ value: test
+
+ports:
+ - name: http
+ type: http
+ port: 8000
+
+autoscaling:
+ min: 1
+ max: 1
+ metric: cpu
+ target: 50
+```
+
+## Gateway YAML Field Types
+
+### Enabled
+
+| Name | Type | Required | Description |
+| --- | --- | --- | --- |
+| enabled | boolean | Required | Whether gateway is enabled or not. |
+
+```yaml
+enabled: true
+```
+
+### Targets
+
+| Name | Type | Required | Description |
+| --- | --- | --- | --- |
+| number | int | Required | The revision number that the gateway will use for routing. |
+| port | int | Required | The port number that the gateway will use for routing. |
+| weight | int | Required | The weight that determines how much traffic is routed to this revision. The weights of all targets must total 100. |
+
+```yaml
+targets:
+ - number: 1
+ port: 8000
+ weight: 50
+ - number: 2
+ port: 8001
+ weight: 50
+```
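Conceptually, the gateway performs a weighted split of incoming requests across targets. A Python sketch of that behavior (illustrative only; the real routing happens in the cluster's ingress layer):

```python
import random

# Targets mirroring the YAML above: (revision number, port, weight).
targets = [(1, 8000, 50), (2, 8001, 50)]

def pick_target(targets):
    # Each revision receives a share of requests proportional to its weight.
    destinations = [(number, port) for number, port, _ in targets]
    weights = [weight for _, _, weight in targets]
    return random.choices(destinations, weights=weights, k=1)[0]

revision, port = pick_target(targets)
print(f"routing request to revision {revision} on port {port}")
```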
+
+## Sample Gateway YAML Schema
+
+```yaml
+enabled: true
+targets:
+ - number: 1
+ port: 8000
+ weight: 10
+ - number: 2
+ port: 8000
+ weight: 90
+```
+
+## Serving example with YAML
+## MNIST model mount example
+
+```yaml
+message: Example serving from YAML
+image: quay.io/vessl-ai/kernels:py310-202301160626
+resources:
+ name: cpu-m6i-large
+volumes:
+ /root:
+ model:
+ repo: vessl-mnist-example
+ version: 2
+run: vessl model serve vessl-mnist-example 2 --install-reqs --remote
+env:
+ - key: VESSL_LOG
+ value: DEBUG
+autoscaling:
+ min: 1
+ max: 3
+ metric: cpu
+ target: 60
+ports:
+ - port: 8000
+ name: service
+ type: http
+```
diff --git a/guides/sweep/create.md b/guides/sweep/create.md
new file mode 100644
index 0000000000000000000000000000000000000000..abbb6cd0bd38750467e3350e5b31a3eb2dfc7884
--- /dev/null
+++ b/guides/sweep/create.md
@@ -0,0 +1,78 @@
+---
+title: Create a new sweep
+version: EN
+---
+
+To create a Sweep, you need to specify a few options, including the objective, parameters, algorithm, and runtime.
+
+### Objective
+
+
+
+
+
+### Parameters
+
+#### Algorithm name
+
+* [**Grid search**](https://en.wikipedia.org/wiki/Hyperparameter_optimization#Grid_search): A simple exhaustive search over all combinations in a specified search space. Every parameter's search space must be discrete and bounded. If each of two parameters has three possible values, the total number of possible experiments is nine.
+* [**Random search**](https://en.wikipedia.org/wiki/Hyperparameter_optimization#Random_search): Randomly selects parameter values from the search space and spawns an experiment with those parameters. The search space can be discrete, continuous, or mixed.
+* [**Bayesian optimization**](https://en.wikipedia.org/wiki/Hyperparameter_optimization#Bayesian_optimization): A global optimization method for noisy black-box functions. Bayesian optimization selects the next parameters based on a probabilistic model of the function mapping hyperparameter values to the objective.
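The size of a grid search grows multiplicatively with each parameter. A quick check with Python's standard library (hypothetical parameter names and values):

```python
from itertools import product

# Two hypothetical parameters with three possible values each.
search_space = {
    "learning_rate": [0.1, 0.01, 0.001],
    "batch_size": [32, 64, 128],
}

# Grid search enumerates every combination: 3 x 3 = 9 experiments.
grid = list(product(*search_space.values()))
print(len(grid))  # -> 9
```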
+
+#### Search space
+
+
+
+### Early Stopping (Optional)
+
+You can set early stopping to prevent overfitting on the training dataset. It supports the median algorithm, which takes two input values, `min_experiment_required` and `start_step`. VESSL examines the metric value at each step after `start_step` and compares it to the median value of the completed experiments to decide whether to trigger early stopping.
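As a sketch, the median stopping rule described above could look like this in Python (our reading of the behavior, not VESSL's exact implementation; assumes lower metric values are better):

```python
import statistics

def should_stop(current_value, completed_values, step, start_step,
                min_experiment_required, lower_is_better=True):
    # Only consider stopping after `start_step`, and only once enough
    # experiments have completed to form a meaningful median.
    if step < start_step or len(completed_values) < min_experiment_required:
        return False
    median = statistics.median(completed_values)
    # Trigger early stopping when the run is doing worse than the median
    # of the completed experiments at this step.
    return current_value > median if lower_is_better else current_value < median

# A run at step 10 with loss 0.9, against completed runs' losses at step 10.
print(should_stop(0.9, [0.5, 0.6, 0.4], step=10, start_step=5,
                  min_experiment_required=3))  # -> True
```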
+
+
+
+### Runtime
+
+Configuring the runtime option is similar to creating an experiment.
+
+You can retrieve the configuration of prior experiments by clicking **Configure from Prior Experiments**.
+
+
diff --git a/guides/sweep/monitor.md b/guides/sweep/monitor.md
new file mode 100644
index 0000000000000000000000000000000000000000..1dc38da70593ed0207c9107e4418e889dbcd9605
--- /dev/null
+++ b/guides/sweep/monitor.md
@@ -0,0 +1,41 @@
+---
+title: Monitor sweeps
+version: EN
+---
+
+### Monitoring
+
+Under **MONITORING**, you can monitor your Sweeps on a rich dashboard.
+
+
+
+#### Metrics
+
+Under **Metrics**, you can see the metrics that you have logged to the VESSL server using `vessl.log()` or `VesslCallback()` in the VESSL SDK. Each experiment is colored differently. You can hide experiments from the plot by toggling the view button.
+
+#### Sweep
+
+Under **Sweep**, you can see the visualization of multiple experiments. Each curve represents an experiment and you can view the details of the experiment by hovering on the curve.
+
+#### System Metrics
+
+Under **System Metrics**, you can monitor the resource consumption of each experiment. If you are using GPU acceleration, you will be able to monitor GPU utilization as well.
+
+### Logs
+
+Under the **LOGS** tab, you can find log records with experiment status and metrics.
+
+
+
+### Metadata
+
+Under **METADATA**, you can find the configuration for the current Sweep and runtime.
+
+
+
diff --git a/guides/sweep/overview.md b/guides/sweep/overview.md
new file mode 100644
index 0000000000000000000000000000000000000000..30cff833fccadd1931bf14dcb559c6e7f9098757
--- /dev/null
+++ b/guides/sweep/overview.md
@@ -0,0 +1,15 @@
+---
+title: Overview
+version: EN
+---
+
+You can use **Sweep** to optimize hyperparameters and tune your models. Sweep runs multiple experiments with different hyperparameters in parallel to optimize model performance. Once a Sweep is completed, you can view the results, logs, and metadata of the iterations.
+
+### Managing Sweep on Web Console
+
+You can find the list of Sweeps under the **SWEEPS** tab of each project page. You can examine the details of each Sweep by clicking its name.
+
+
+
diff --git a/guides/workspace/app.md b/guides/workspace/app.md
new file mode 100644
index 0000000000000000000000000000000000000000..8e054fe51cf93956ca1872ed45348d3844ec7321
--- /dev/null
+++ b/guides/workspace/app.md
@@ -0,0 +1,43 @@
+---
+title: Run a server application
+version: EN
+---
+
+### Port Configuration
+
+Under **Port** setting of **Create New Workspace** page, you can expose a server application running in a VESSL workspace instance on a configured port.
+
+
+
+
+
+### Running a Server Application
+
+You can run a simple server application like the following Python file server.
+
+```
+vessl@workspace-9fph3e8n3arx-0:~$ ls
+mnist
+
+vessl@workspace-9fph3e8n3arx-0:~$ ls mnist
+test.csv train.csv
+
+vessl@workspace-9fph3e8n3arx-0:~$ python -m http.server 8080
+Serving HTTP on 0.0.0.0 port 8080 (http://0.0.0.0:8080/) ...
+```
+
+You can access the running server application by clicking the port number under **METADATA**.
+
+
+
+
\ No newline at end of file
diff --git a/guides/workspace/create.md b/guides/workspace/create.md
new file mode 100644
index 0000000000000000000000000000000000000000..28ef2aa83749a2a4c1aacb0b156d6ebe16305c5b
--- /dev/null
+++ b/guides/workspace/create.md
@@ -0,0 +1,104 @@
+---
+title: Create a new workspace
+version: EN
+---
+
+To **create** a **workspace**, you need to select a few options, including the workspace name, resource, image, and a few advanced settings.
+
+
+
+
+
+
+ #### Private image
+
+ To pull images from a private Docker registry or private AWS ECR, you should first integrate your credentials in the organization settings. Then check the private image checkbox and select the credentials you just integrated. Below is an example of a private image from AWS ECR.
+
+
+
+
+#### Disk
+
+You can specify the **disk size** (default: 100GB) to use in your container. This will be the requested storage size of your PVC. Disk size cannot be changed once the workspace is created.
+
+
+
diff --git a/guides/workspace/datasets.md b/guides/workspace/datasets.md
new file mode 100644
index 0000000000000000000000000000000000000000..e0a2c484f1dc2d5c49a62356a5e17fe7541a12ab
--- /dev/null
+++ b/guides/workspace/datasets.md
@@ -0,0 +1,25 @@
+---
+title: Download & attach datasets
+version: EN
+---
+
+You can download the datasets to your local disk using the VESSL CLI.
+
+
+
+
+### Attach datasets (custom cluster only)
+
+You can attach an NFS or host machine volume when you create or edit your workspace.
+
+
\ No newline at end of file
diff --git a/guides/workspace/explore.md b/guides/workspace/explore.md
new file mode 100644
index 0000000000000000000000000000000000000000..166f473a61fa1fe0bd152503822a72fba1bc0e70
--- /dev/null
+++ b/guides/workspace/explore.md
@@ -0,0 +1,56 @@
+---
+title: Explore workspaces
+version: EN
+---
+
+### JupyterLab Session
+
+Each running workspace has an active JupyterLab session.
+
+
+
+You can open a new JupyterLab session inside the workspace by clicking the icon below **QUICKTOOLS**.
+
+
+
+### SSH Session
+
+You can also access an activated SSH under **QUICKTOOLS**.
+
+
+
+Here, the Linux username is `vessl`.
+
+
+
+### Metadata
+
+Under METADATA, you can find the workspace configuration such as resource usage and mounted images.
+
+
+
+### Logs
+
+When you create a workspace, a Docker container specific to the workspace is created. You can monitor the workspace container logs under **LOGS**.
+
+
+
+### Monitoring
+
+Under MONITORING, you can monitor system metrics such as usage and limits for CPU, GPU, memory, and disk.
+
+
diff --git a/guides/workspace/images.md b/guides/workspace/images.md
new file mode 100644
index 0000000000000000000000000000000000000000..9b9cb7279a326fd1418671d14a976a99cb016ca5
--- /dev/null
+++ b/guides/workspace/images.md
@@ -0,0 +1,78 @@
+---
+title: Build a custom image
+version: EN
+---
+
+## Requirements
+
+To use custom images to run a workspace, your custom images have to satisfy the requirements below.
+
+* JupyterLab
+ * VESSL runs JupyterLab and exposes port `8888`. JupyterLab should be pre-installed in the container image.
+ * The JupyterLab daemon must be located at `/usr/local/bin/jupyter`.
+* sshd
+ * VESSL runs sshd and exposes port `22` as a NodePort. The sshd package should be pre-installed in the container image.
+* PVC mountable at `/root`
+ * VESSL mounts a PVC at `/root` to keep state across Pod restarts.
+
+## Building from VESSL's pre-built images
+
+VESSL offers pre-built images that can run workspaces directly. You can use them as base images for your own. They already have JupyterLab and sshd pre-installed. The available images are listed in the following table.
+
+| Python Version | Frameworks | Image |
+| -------------- | -------------------------------- | ------------------------------------------------------------------- |
+| 3.8.17 | - | `quay.io/vessl-ai/kernels:py38-202306140446` |
+| 3.8.10 | CUDA 11.8.0 PyTorch 1.14.0a0 | `quay.io/vessl-ai/ngc-pytorch-kernel:22.12-py3-202301160809` |
+| 3.8.10 | CUDA 11.8.0 TensorFlow 2.10.1 | `quay.io/vessl-ai/ngc-tensorflow-kernel:22.12-tf2-py3-202301160808` |
+| 3.10.12 | - | `quay.io/vessl-ai/kernels:py310-202306140445` |
+| 3.10.6 | CUDA 12.1.1 PyTorch 2.0.0 | `quay.io/vessl-ai/ngc-pytorch-kernel:23.05-py3-202306150328` |
+| 3.10.6 | CUDA 12.1.1 TensorFlow 2.12.0 | `quay.io/vessl-ai/ngc-tensorflow-kernel:23.05-tf2-py3-202306150329` |
+
+### Example
+
+```dockerfile
+# Use CUDA 11.8.0, PyTorch 1.14.0a0 base image
+FROM quay.io/vessl-ai/ngc-pytorch-kernel:22.12-py3-202301160809
+
+# Install custom Python dependencies
+RUN pip install transformers
+...
+```
+
+## Building from community-maintained images
+
+You can build your own images from any community-maintained Docker image. Make sure that your image meets the requirements above.
+
+### Example
+
+```dockerfile
+FROM nvidia/cuda:11.2.2-devel-ubuntu20.04
+
+RUN apt-get update
+RUN DEBIAN_FRONTEND=noninteractive \
+ apt-get install -y \
+ software-properties-common curl openssh-server
+
+# Install Python 3.9
+# Note that base image has python3.8 (3.8.10) installed
+RUN add-apt-repository ppa:deadsnakes/ppa
+RUN apt-get install -y python3.9 python3.9-distutils
+# Add symbolic links
+RUN update-alternatives --install /usr/bin/python3 python3 $(which python3.9) 1
+RUN update-alternatives --install /usr/bin/python python /usr/bin/python3 1
+
+# Install pip
+RUN curl https://bootstrap.pypa.io/get-pip.py | python
+RUN pip install -U pip
+
+# Install Jupyterlab
+RUN pip install jupyterlab
+```
+
+## FAQ
+* If you use `conda` to install JupyterLab, the JupyterLab daemon is generally located at `/opt/conda/bin/jupyter`.
+In this case, you should create a symbolic link at `/usr/local/bin/jupyter`:
+ ```dockerfile
+ # In Dockerfile,
+ RUN ln -s /opt/conda/bin/jupyter /usr/local/bin/jupyter
+ ```
diff --git a/guides/workspace/overview.md b/guides/workspace/overview.md
new file mode 100644
index 0000000000000000000000000000000000000000..775fba0ad08a74abd5613b5812fddf847ed88354
--- /dev/null
+++ b/guides/workspace/overview.md
@@ -0,0 +1,14 @@
+---
+title: Overview
+version: EN
+---
+
+A **workspace** is a personal, persistent development environment accessible via Jupyter and SSH. You can request and release computing resources, including GPUs, and configure exposed ports. Under workspace, you can view metadata, logs, and metrics.
+
+### Managing Workspace on Web Console
+
+Under Workspace, you can find a list of workspaces and view details of each workspace by clicking its **NAME**.
+
+
\ No newline at end of file
diff --git a/guides/workspace/ssh.md b/guides/workspace/ssh.md
new file mode 100644
index 0000000000000000000000000000000000000000..cc142ef2b0be98389d236ec96e4dca0e22d1e569
--- /dev/null
+++ b/guides/workspace/ssh.md
@@ -0,0 +1,117 @@
+---
+title: Create an SSH connection
+version: EN
+---
+
+Once you run a workspace, you can fully leverage the development environment using SSH.
+
+
+
+
+
+### 1. Create SSH Key
+
+To enable an SSH connection, you first need an SSH key pair. Once you have registered the public key with your account, you can connect to your workspace instances using the corresponding private key.
+
+```
+$ ssh-keygen -t ed25519 -C "vessl-floyd"
+Generating public/private ed25519 key pair.
+Enter file in which to save the key (/Users/floyd/.ssh/id_ed25519):
+Enter passphrase (empty for no passphrase):
+Enter same passphrase again:
+Your identification has been saved in /Users/floyd/.ssh/id_ed25519.
+Your public key has been saved in /Users/floyd/.ssh/id_ed25519.pub.
+The key fingerprint is:
+SHA256:78yjMGcJoV73v/jkLHIRhdC0wL0FBL6c68T0MZGoV2Q savvihub-floyd
+The key's randomart image is:
++--[ED25519 256]--+
+| .+BEo |
+| ..=+oo |
+| . o =+ |
+| . + +o. |
+| . + S o. |
+| . . * *.o |
+| . o B +.. |
+| B.++* |
+| o+=+*. |
++----[SHA256]-----+
+```
+
+### 2. Add SSH public key to your VESSL account
+
+You can add your SSH public key to your account using the VESSL CLI. The added keys will be injected into every running workspace you create. You can manage your keys with the `vessl ssh-key list` and `vessl ssh-key delete` commands.
+
+```
+$ vessl ssh-key add
+[?] SSH public key path: /Users/floyd/.ssh/id_ed25519.pub
+[?] SSH public key name: vessl-floyd
+
+Successfully added.
+```
+
+### 3. Connect via CLI
+
+If there is more than one running workspace, you will be asked to select the one to connect to.
+
+```
+$ vessl workspace ssh
+The authenticity of host '[tcp.apne2-prod1-cluster.savvihub.com]:30787 ([52.78.240.117]:30787)' can't be established.
+ECDSA key fingerprint is SHA256:iSexO7W1U14P3Pp6wRfPleHABQQMek/JAgb5kHqg5Jw.
+Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
+Welcome to Ubuntu 18.04.5 LTS (GNU/Linux 4.14.238-182.422.amzn2.x86_64 x86_64)
+
+ * Documentation: https://help.ubuntu.com
+ * Management: https://landscape.canonical.com
+ * Support: https://ubuntu.com/advantage
+This system has been minimized by removing packages and content that are
+not required on a system that users do not log into.
+
+To restore this content, you can run the 'unminimize' command.
+
+The programs included with the Ubuntu system are free software;
+the exact distribution terms for each program are described in the
+individual files in /usr/share/doc/*/copyright.
+
+Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
+applicable law.
+
+vessl@workspace-9670nnkn5l16-0:~$
+```
+
+### 4. Setup VSCode Remote-SSH plugin config
+
+You can also add your workspace to the VSCode Remote-SSH plugin config. `vessl workspace vscode` adds the connection information to `~/.ssh/config` so that the workspace shows up in the host list.
+
+```
+$ vessl workspace vscode
+Successfully updated /Users/floyd/.ssh/config
+
+$ cat ~/.ssh/config
+Host acceptable-bite-1627438220
+ User vessl
+ Hostname tcp.apne2-prod1-cluster.savvihub.com
+ Port 30787
+ StrictHostKeyChecking accept-new
+ CheckHostIP no
+ IdentityFile /Users/floyd/.ssh/id_ed25519
+```
+
+
+
+
+
+
+
+### 5. Manual Access
+
+You can integrate with other IDEs and make an SSH connection without the VESSL CLI using the host, username, and port information. In this example, the host is `tcp.apne2-prod1-cluster.savvihub.com`, the username is `vessl`, and the port is `30787`. The full SSH command is `ssh -p 30787 -i ~/.ssh/id_ed25519 vessl@tcp.apne2-prod1-cluster.savvihub.com`.
diff --git a/guides/workspace/tips.md b/guides/workspace/tips.md
new file mode 100644
index 0000000000000000000000000000000000000000..fd1055b36e4fa3e2e19cd171361071e61c144a86
--- /dev/null
+++ b/guides/workspace/tips.md
@@ -0,0 +1,61 @@
+---
+title: Tips & Limitations
+version: EN
+---
+
+### pip
+
+If you install a Python package with pip but cannot find it, add the following directory to your `PATH`, as stated in [the official JupyterLab documentation](https://jupyterlab.readthedocs.io/en/stable/getting_started/installation.html#pip).
+
+```
+export PATH="/home/name/.local/bin:$PATH"
+```
+
+### conda
+
+If you want to use conda in your notebook, run the following code as the first executable cell.
+
+```
+!pip install -q git+https://github.com/vessl-ai/condacolab.git
+import condacolab
+condacolab.install()
+```
+
+VESSL provides a python package to install conda based on condacolab. For more information, see the [github repository](https://github.com/vessl-ai/condavessl).
+
+### Disk persistency and node affinity problem
+
+In VESSL, `/root` **is the only persistent directory**. Other directories are reset every time you restart a workspace. If you need libraries or packages installed outside `/root`, put the install commands in the `init` script. You can also build your own Docker image on top of [VESSL managed docker images](tips-and-limitations.md#undefined).
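+
+A minimal sketch of such an `init` script (the package names here are placeholders, not from the original document):
+
+```sh
+# Runs on every workspace (re)start, so packages installed outside the
+# persistent /root directory are reinstalled each time.
+apt-get update && apt-get install -y htop tmux
+```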
+
+
+
+VESSL provides disk persistency in two ways:
+
+* For cloud providers (managed clusters and custom clusters on AWS or GCP), VESSL uses storage provisioners such as `aws-efs-csi-driver`. A volume is automatically attached to the container when it starts, and the data is stored persistently until the workspace is terminated.
+* For on-premise clusters, VESSL uses storage provisioners such as `local-path-provisioner`. Data is stored on the host machine assigned when the workspace is created, so the workspace is pinned to that machine for storage persistency.
+  * VESSL performs an online backup/restore to resolve this issue: all contents in `/root` are automatically backed up and uploaded when the workspace is stopped, then downloaded and restored when the workspace is resumed.
+  * If `/root/` is larger than 15GB, VESSL skips the online backup/restore, so the workspace remains pinned to one machine.
+  * (For the enterprise plan) Organization admins can specify detailed rules for the online backup, such as the backup location.
+
+#### Backup & Restore manually
+
+You can manually back up & restore `/root/` with the CLI. This feature is useful in the following situations:
+
+* Move the workspace to another cluster
+* Clone the workspace
+
+Proceed in the following order:
+
+* Run [`vessl workspace backup`](../../api-reference/cli/vessl-workspace.md#savvihub-workspace-backup) from the source workspace
+* Run [`vessl workspace restore`](../../api-reference/cli/vessl-workspace.md#savvihub-workspace-restore) from the destination workspace
+ * The `/root/` folder must be empty in the destination workspace
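+
+For instance, the flow looks like this (a sketch; each command must run inside the respective running workspace):
+
+```sh
+# In the source workspace: zip /root/ and upload the backup to VESSL
+vessl workspace backup
+
+# In the destination workspace (with an empty /root/): download and restore it
+vessl workspace restore
+```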
+
+If `/root/` is larger than 15GB, VESSL CLI does not support backup/restore.
+
+### Docker
+
+VESSL workspaces are Docker containers running on Kubernetes. Running a Docker daemon inside a container is not supported unless the container is specifically privileged, and VESSL does not support privileged containers for security reasons.
+
diff --git a/reference/.DS_Store b/reference/.DS_Store
new file mode 100644
index 0000000000000000000000000000000000000000..ca49c3cb4ff94bc80ba9559404b7a046d4c5b57e
Binary files /dev/null and b/reference/.DS_Store differ
diff --git a/reference/cli/cluster.md b/reference/cli/cluster.md
new file mode 100644
index 0000000000000000000000000000000000000000..57289a7b45234f351201027b0965d8d39d634816
--- /dev/null
+++ b/reference/cli/cluster.md
@@ -0,0 +1,76 @@
+---
+title: vessl cluster
+version: EN
+---
+
+### Overview
+
+Run `vessl cluster --help` to view the list of commands, or `vessl cluster [COMMAND] --help` to view individual command instructions.
+
+| Option | Description |
+| ------ | ----------- |
+| `--gpu-type` | GPU type (for custom cluster only), ex. `Tesla-K80` |
+| `--image-url` | Kernel docker image URL, ex. `vessl/kernels:py36.full-cpu` |
+| `-h`, `--hyperparameter` _(multiple)_ | Hyperparameters in the form of [key]=[value], ex. `-h lr=0.01 -h epochs=100` |
+| `--dataset` _(multiple)_ | Dataset mounts in the form of [mount_path] [dataset_name], ex. `--dataset /input mnist` |
+| `--objective-goal` | Objective goal, ex. `0.99` |
+| `-M`, `--objective-metric` | Objective metric, ex. `val_accuracy` |
+| `-p`, `--parameter` _(multiple)_ | Search space parameters in the form of [name] [type] [range_type] [values...]. [type] must be one of `categorical`, `int`, or `double`. [range_type] must be either `space` or `list`. If `space`, [values...] is a 3-tuple of [min] [max] [step]. If `list`, [values...] is a list of values to search, ex. `-p epochs int space 100 1000 50` |
+| `--early-stopping-settings` | Early stopping algorithm settings in the format of [key] [value], ex. `--early-stopping-settings start_step 4` |
+| `--upload-local-file` _(multiple)_ | Upload local file. Format: [local_path] or [local_path]:[remote_path], ex. `--upload-local-file my-project:/root/my-project` |
+| `--root-volume-size` | Root volume size (defaults to `100Gi`) |
+| `-p`, `--port` _(multiple)_ | Format: \[expose\_type] \[port] \[name], ex. `-p 'tcp 22 ssh'`. Jupyter and SSH ports exist by default. |
+| `--init-script` | Custom init script |
+
+
+### Connect to a running workspace
+
+```bash
+vessl workspace ssh [OPTIONS]
+```
+
+| Option | Description |
+| ---------- | -------------------- |
+| --key-path | SSH private key path |
+
+### Connect to workspaces via VSCode Remote-SSH
+
+```bash
+vessl workspace vscode [OPTIONS]
+```
+
+| Option | Description |
+| ---------- | -------------------- |
+| --key-path | SSH private key path |
+
+```bash
+$ vessl workspace vscode
+Updated '/Users/johndoe/.ssh/config'.
+```
+
+### Backup the home directory of the workspace
+
+Creates a zip file at `/tmp/workspace-backup.zip` and uploads the backup to the VESSL server.
+
+You should run this command inside a running workspace.
+
+```bash
+vessl workspace backup
+```
+
+```bash
+$ vessl workspace backup
+Successfully uploaded 1 out of 1 file(s).
+```
+
+### Restore the workspace home directory from a backup
+
+Downloads the zip file to `/tmp/workspace-backup.zip` and extracts it to `/root/`.
+
+You should run this command inside a running workspace.
+
+
+```bash
+vessl workspace restore
+```
+
+```bash
+$ vessl workspace restore
+[?] Select workspace: rash-uncle (backup created 13 minutes ago)
+ > rash-uncle (backup created 13 minutes ago)
+ hazel-saver (backup created 2 days ago)
+
+Successfully downloaded 1 out of 1 file(s).
+```
+
+### List all workspaces
+
+```
+vessl workspace list
+```
+
+### View information on the workspace
+
+```
+vessl workspace read ID
+```
+
+| Argument | Description |
+| -------- | ------------ |
+| `ID` | Workspace ID |
+
+### View logs of the workspace container
+
+```
+vessl workspace logs ID
+```
+
+| Argument | Description |
+| -------- | ------------ |
+| `ID` | Workspace ID |
+
+| Option | Description |
+| -------- | --------------------------------------------------------- |
+| `--tail` | Number of lines to display from the end (defaults to 200) |
+
+### Start a workspace container
+
+```
+vessl workspace start ID
+```
+
+| Argument | Description |
+| -------- | ------------ |
+| `ID` | Workspace ID |
+
+### Stop a workspace container
+
+```
+vessl workspace stop ID
+```
+
+| Argument | Description |
+| -------- | ------------ |
+| `ID` | Workspace ID |
+
+### Terminate a workspace container
+
+```
+vessl workspace terminate ID
+```
+
+| Argument | Description |
+| -------- | ------------ |
+| `ID` | Workspace ID |
diff --git a/reference/sdk/cluster.md b/reference/sdk/cluster.md
new file mode 100644
index 0000000000000000000000000000000000000000..df5c33846cc6db2be357b9966ae66689c81e4708
--- /dev/null
+++ b/reference/sdk/cluster.md
@@ -0,0 +1,132 @@
+---
+title: Cluster
+version: EN
+---
+
+## create_cluster
+```python
+vessl.create_cluster(
+ param: CreateClusterParam
+)
+```
+Create a VESSL cluster by installing the VESSL agent into the given Kubernetes
+namespace. If you want to override the default organization, then pass
+`organization_name` to `param`.
+
+**Args**
+* `param` (CreateClusterParam) : Create cluster parameter.
+
+**Example**
+```python
+vessl.create_cluster(
+ param=vessl.CreateClusterParam(
+ cluster_name="foo",
+ ...
+ ),
+)
+```
+
+----
+
+## read_cluster
+```python
+vessl.read_cluster(
+ cluster_name: str, **kwargs
+)
+```
+Read cluster in the default organization. If you want to override the
+default organization, then pass `organization_name` as `**kwargs`.
+
+**Args**
+* `cluster_name` (str) : Cluster name.
+
+**Example**
+```python
+vessl.read_cluster(
+ cluster_name="seoul-cluster",
+)
+```
+
+----
+
+## list_clusters
+```python
+vessl.list_clusters(
+ **kwargs
+)
+```
+List clusters in the default organization. If you want to override the
+default organization, then pass `organization_name` as `**kwargs`.
+
+**Example**
+```python
+vessl.list_clusters()
+```
+
+----
+
+## delete_cluster
+```python
+vessl.delete_cluster(
+ cluster_id: int, **kwargs
+)
+```
+Delete custom cluster in the default organization. If you want to
+override the default organization, then pass `organization_name` as
+`**kwargs`.
+
+**Args**
+* `cluster_id` (int) : Cluster ID.
+
+**Example**
+```python
+vessl.delete_cluster(
+ cluster_id=1,
+)
+```
+
+----
+
+## rename_cluster
+```python
+vessl.rename_cluster(
+ cluster_id: int, new_cluster_name: str, **kwargs
+)
+```
+Rename custom cluster in the default organization. If you want to
+override the default organization, then pass `organization_name` as
+`**kwargs`.
+
+**Args**
+* `cluster_id` (int) : Cluster ID.
+* `new_cluster_name` (str) : Cluster name to change.
+
+**Example**
+```python
+vessl.rename_cluster(
+ cluster_id=1,
+ new_cluster_name="seoul-cluster-2",
+)
+```
+
+----
+
+## list_cluster_nodes
+```python
+vessl.list_cluster_nodes(
+ cluster_id: int, **kwargs
+)
+```
+List custom cluster nodes in the default organization. If you want to
+override the default organization, then pass `organization_name` as
+`**kwargs`.
+
+**Args**
+* `cluster_id` (int) : Cluster ID.
+
+**Example**
+```python
+vessl.list_cluster_nodes(
+ cluster_id=1,
+)
+```
diff --git a/reference/sdk/dataset.md b/reference/sdk/dataset.md
new file mode 100644
index 0000000000000000000000000000000000000000..3573e34684371b4892e5df5cf70e99a3922adee3
--- /dev/null
+++ b/reference/sdk/dataset.md
@@ -0,0 +1,226 @@
+---
+title: Dataset
+version: EN
+---
+
+## read_dataset
+```python
+vessl.read_dataset(
+ dataset_name: str, **kwargs
+)
+```
+Read a dataset in the default organization. If you want to override the
+default organization, then pass `organization_name` as `**kwargs`.
+
+**Args**
+* `dataset_name` (str) : Dataset name.
+
+**Example**
+```python
+vessl.read_dataset(
+ dataset_name="mnist",
+)
+```
+
+----
+
+## read_dataset_version
+```python
+vessl.read_dataset_version(
+ dataset_id: int, dataset_version_hash: str, **kwargs
+)
+```
+Read the specific version of dataset in the default organization. If you
+want to override the default organization, then pass `organization_name` as
+`**kwargs`.
+
+**Args**
+* `dataset_id` (int) : Dataset id.
+* `dataset_version_hash` (str) : Dataset version hash.
+
+**Example**
+```python
+vessl.read_dataset_version(
+ dataset_id=1,
+ dataset_version_hash="hash123"
+)
+```
+
+----
+
+## list_datasets
+```python
+vessl.list_datasets(
+ **kwargs
+)
+```
+List datasets in the default organization. If you want to override the
+default organization, then pass `organization_name` as `**kwargs`.
+
+**Example**
+```
+vessl.list_datasets()
+```
+
+----
+
+## create_dataset
+```python
+vessl.create_dataset(
+ dataset_name: str, description: str = None, is_version_enabled: bool = False,
+ is_public: bool = False, external_path: str = None, aws_role_arn: str = None,
+ version_path: str = None, **kwargs
+)
+```
+Create a dataset in the default organization. If you want to override
+the default organization, then pass `organization_name` as `**kwargs`.
+
+**Args**
+* `dataset_name` (str) : Dataset name.
+* `description` (str) : Dataset description. Defaults to None.
+* `is_version_enabled` (bool) : True if dataset versioning is enabled,
+  False otherwise. Defaults to False.
+* `is_public` (bool) : True if the dataset is sourced from a public bucket, False
+  otherwise. Defaults to False.
+* `external_path` (str) : AWS S3 or Google Cloud Storage bucket URL. Defaults
+ to None.
+* `aws_role_arn` (str) : AWS Role ARN to access S3. Defaults to None.
+* `version_path` (str) : Versioning bucket path. Defaults to None.
+
+**Example**
+```python
+vessl.create_dataset(
+ dataset_name="mnist",
+ is_public=True,
+ external_path="s3://savvihub-public-apne2/mnist"
+)
+```
+
+----
+
+## list_dataset_volume_files
+```python
+vessl.list_dataset_volume_files(
+ dataset_name: str, need_download_url: bool = False, path: str = '',
+ recursive: bool = False, **kwargs
+)
+```
+List dataset volume files in the default organization. If you want to
+override the default organization, then pass `organization_name` as
+`**kwargs`.
+
+**Args**
+* `dataset_name` (str) : Dataset name.
+* `need_download_url` (bool) : True if you need a download URL, False
+ otherwise. Defaults to False.
+* `path` (str) : Directory path to list. Defaults to root ("").
+* `recursive` (bool) : True if list files recursively, False otherwise.
+ Defaults to False.
+
+**Example**
+```python
+vessl.list_dataset_volume_files(
+ dataset_name="mnist",
+ recursive=True,
+)
+```
+
+----
+
+## upload_dataset_volume_file
+```python
+vessl.upload_dataset_volume_file(
+ dataset_name: str, source_path: str, dest_path: str, **kwargs
+)
+```
+Upload file to the dataset. If you want to override the default
+organization, then pass `organization_name` as `**kwargs`.
+
+**Args**
+* `dataset_name` (str) : Dataset name.
+* `source_path` (str) : Local source path.
+* `dest_path` (str) : Destination path within the dataset.
+
+**Example**
+```python
+vessl.upload_dataset_volume_file(
+ dataset_name="mnist",
+ source_path="test.csv",
+ dest_path="train",
+)
+```
+
+----
+
+## download_dataset_volume_file
+```python
+vessl.download_dataset_volume_file(
+ dataset_name: str, source_path: str, dest_path: str, **kwargs
+)
+```
+Download file from the dataset. If you want to override the default
+organization, then pass `organization_name` as `**kwargs`.
+
+**Args**
+* `dataset_name` (str) : Dataset name.
+* `source_path` (str) : Source path within the dataset.
+* `dest_path` (str) : Local destination path.
+
+**Example**
+```python
+vessl.download_dataset_volume_file(
+ dataset_name="mnist",
+ source_path="train/test.csv",
+ dest_path=".",
+)
+```
+
+----
+
+## copy_dataset_volume_file
+```python
+vessl.copy_dataset_volume_file(
+ dataset_name: str, source_path: str, dest_path: str, **kwargs
+)
+```
+Copy files within the same dataset. Note that this is not supported for
+externally sourced datasets like S3 or GCS. If you want to override the
+default organization, then pass `organization_name` as `**kwargs`.
+
+**Args**
+* `dataset_name` (str) : Dataset name.
+* `source_path` (str) : Source path within the dataset.
+* `dest_path` (str) : Destination path within the dataset.
+
+**Example**
+```python
+vessl.copy_dataset_volume_file(
+ dataset_name="mnist",
+ source_path="train/test.csv",
+ dest_path="test/test.csv",
+)
+```
+
+----
+
+## delete_dataset_volume_file
+```python
+vessl.delete_dataset_volume_file(
+ dataset_name: str, path: str, **kwargs
+)
+```
+Delete the dataset volume file. Note that this is not supported for
+externally sourced datasets like S3 or GCS. If you want to override the
+default organization, then pass `organization_name` as `**kwargs`.
+
+**Args**
+* `dataset_name` (str) : Dataset name.
+* `path` (str) : File path.
+
+**Example**
+```python
+vessl.delete_dataset_volume_file(
+ dataset_name="mnist",
+ path="train/test.csv",
+)
+```
diff --git a/reference/sdk/experiment.md b/reference/sdk/experiment.md
new file mode 100644
index 0000000000000000000000000000000000000000..a61a99ff3547df8793d5bfc9639c0acfa4f4fe37
--- /dev/null
+++ b/reference/sdk/experiment.md
@@ -0,0 +1,284 @@
+---
+title: Experiment
+version: EN
+---
+
+## read_experiment
+```python
+vessl.read_experiment(
+ experiment_number: int, **kwargs
+)
+```
+Read experiment in the default organization/project. If you want to
+override the default organization/project, then pass `organization_name` or
+`project_name` as `**kwargs`.
+
+**Args**
+* `experiment_number` (int) : experiment number.
+
+**Example**
+```python
+vessl.read_experiment(
+ experiment_number=23,
+)
+```
+
+----
+
+## list_experiments
+```python
+vessl.list_experiments(
+ statuses: List[str] = None, **kwargs
+)
+```
+List experiments in the default organization/project. If you want to
+override the default organization/project, then pass `organization_name` or
+`project_name` as `**kwargs`.
+
+**Args**
+* `statuses` (List[str]) : A list of status filters. Defaults to None.
+
+**Example**
+```python
+vessl.list_experiments(
+ statuses=["completed"]
+)
+```
+
+----
+
+## create_experiment
+```python
+vessl.create_experiment(
+ cluster_name: str, start_command: str, cluster_node_names: List[str] = None,
+ kernel_resource_spec_name: str = None, processor_type: str = None,
+ cpu_limit: float = None, memory_limit: str = None, gpu_type: str = None,
+ gpu_limit: int = None, kernel_image_url: str = None,
+ docker_credentials_id: Optional[int] = None, *, message: str = None,
+ termination_protection: bool = False, hyperparameters: List[str] = None,
+ secrets: List[str] = None, dataset_mounts: List[str] = None,
+ model_mounts: List[str] = None, git_ref_mounts: List[str] = None,
+ git_diff_mount: str = None, local_files: List[str] = None,
+ use_vesslignore: bool = True, upload_local_git_diff: bool = False,
+ archive_file_mount: str = None, object_storage_mounts: List[str] = None,
+ root_volume_size: str = None, working_dir: str = None,
+ output_dir: str = MOUNT_PATH_OUTPUT, worker_count: int = 1,
+ framework_type: str = None, service_account: str = '', **kwargs
+)
+```
+Create experiment in the default organization/project. If you want to
+override the default organization/project, then pass `organization_name` or
+`project_name` as `**kwargs`. You can also configure git info by passing
+`git_branch` or `git_ref` as `**kwargs`. Pass `use_git_diff=True` if
+you want to run experiment with uncommitted changes and pass
+`use_git_diff_untracked=True` if you want to run untracked changes(only
+valid if `use_git_diff` is set).
+
+**Args**
+* `cluster_name` (str) : Cluster name (must be specified before other options).
+* `cluster_node_names` (List[str]) : Node names. The experiment will run on
+  one of these nodes. Defaults to None (all).
+* `start_command` (str) : Start command to execute in experiment container.
+* `kernel_resource_spec_name` (str) : Resource type to run an experiment (for
+ managed cluster only). Defaults to None.
+* `cpu_limit` (float) : Number of vCPUs (for custom cluster only). Defaults to
+ None.
+* `memory_limit` (str) : Memory limit in GiB (for custom cluster only).
+ Defaults to None.
+* `gpu_type` (str) : GPU type (for custom cluster only). Defaults to None.
+* `gpu_limit` (int) : Number of GPU cores (for custom cluster only). Defaults
+ to None.
+* `kernel_image_url` (str) : Kernel docker image URL. Defaults to None.
+* `docker_credentials_id` (int) : Docker credential id. Defaults to None.
+* `message` (str) : Message. Defaults to None.
+* `termination_protection` (bool) : True if termination protection is enabled,
+ False otherwise. Defaults to False.
+* `hyperparameters` (List[str]) : A list of hyperparameters. Defaults to None.
+* `secrets` (List[str]) : A list of secrets in form "KEY=VALUE". Defaults to None.
+* `dataset_mounts` (List[str]) : A list of dataset mounts. Defaults to None.
+* `model_mounts` (List[str]) : A list of model mounts. Defaults to None.
+* `git_ref_mounts` (List[str]) : A list of git repository mounts. Defaults to
+ None.
+* `git_diff_mount` (str) : Git diff mounts. Defaults to None.
+* `local_files` (List[str]) : A list of local files to upload. Defaults to
+ None.
+* `use_vesslignore` (bool) : True if local files matching glob patterns
+ in .vesslignore files should be ignored. Patterns apply relative to
+ the directory containing that .vesslignore file.
+* `upload_local_git_diff` (bool) : True if local git diff to upload, False
+ otherwise. Defaults to False.
+* `archive_file_mount` (str) : Local archive file mounts. Defaults to None.
+* `object_storage_mounts` (List[str]) : Object storage mounts. Defaults to None.
+* `root_volume_size` (str) : Root volume size. Defaults to None.
+* `working_dir` (str) : Working directory path. Defaults to None.
+* `output_dir` (str) : Output directory path. Defaults to "/output/".
+* `worker_count` (int) : Number of workers (for distributed experiments only).
+  Defaults to 1.
+* `framework_type` (str) : Specify "pytorch" or "tensorflow" (for distributed
+ experiment only). Defaults to None.
+* `service_account` (str) : Service account name. Defaults to "".
+* `processor_type` (str) : cpu or gpu (for custom cluster only). Defaults to
+  None.
+
+**Example**
+```python
+vessl.create_experiment(
+ cluster_name="aws-apne2",
+ kernel_resource_spec_name="v1.cpu-4.mem-13",
+ kernel_image_url="public.ecr.aws/vessl/kernels:py36.full-cpu",
+    dataset_mounts=["/input/:mnist"],
+    start_command="pip install -r requirements.txt && python main.py",
+)
+```
+
+----
+
+## list_experiment_logs
+```python
+vessl.list_experiment_logs(
+ experiment_number: int, tail: int = 200, worker_number: int = 0, after: int = 0,
+ **kwargs
+)
+```
+List experiment logs in the default organization/project. If you want to
+override the default organization/project, then pass `organization_name` or
+`project_name` as `**kwargs`.
+
+**Args**
+* `experiment_number` (int) : Experiment number.
+* `tail` (int) : The number of lines to display from the end. Display all if
+ -1. Defaults to 200.
+* `worker_number` (int) : Override default worker number (for distributed
+ experiments only). Defaults to 0.
+* `after` (int) : The starting line offset to display from. Defaults to 0.
+
+**Example**
+```python
+vessl.list_experiment_logs(
+ experiment_number=23,
+)
+```
+
+----
+
+## list_experiment_output_files
+```python
+vessl.list_experiment_output_files(
+ experiment_number: int, need_download_url: bool = False, recursive: bool = True,
+ worker_number: int = 0, **kwargs
+)
+```
+List experiment output files in the default organization/project. If you
+want to override the default organization/project, then pass
+`organization_name` or `project_name` as `**kwargs`.
+
+**Args**
+* `experiment_number` (int) : Experiment number.
+* `need_download_url` (bool) : True if you need a download URL, False
+ otherwise. Defaults to False.
+* `recursive` (bool) : True if list files recursively, False otherwise.
+ Defaults to True.
+* `worker_number` (int) : Override default worker number (for distributed
+ experiments only). Defaults to 0.
+
+**Example**
+```python
+vessl.list_experiment_output_files(
+ experiment_number=23,
+)
+```
+
+----
+
+## download_experiment_output_files
+```python
+vessl.download_experiment_output_files(
+ experiment_number: int, dest_path: str = os.path.join(os.getcwd(), 'output'),
+ worker_number: int = 0, **kwargs
+)
+```
+Download experiment output files in the default organization/project.
+If you want to override the default organization/project, then pass
+`organization_name` or `project_name` as `**kwargs`.
+
+**Args**
+* `experiment_number` (int) : Experiment number.
+* `dest_path` (str) : Local download path. Defaults to "./output".
+* `worker_number` (int) : Override default worker number (for distributed
+ experiments only). Defaults to 0.
+
+**Example**
+```python
+vessl.download_experiment_output_files(
+ experiment_number=23,
+)
+```
+
+----
+
+## upload_experiment_output_files
+```python
+vessl.upload_experiment_output_files(
+ experiment_number: int, path: str, **kwargs
+)
+```
+Upload experiment output files in the default organization/project.
+If you want to override the default organization/project, then pass
+`organization_name` or `project_name` as `**kwargs`.
+
+**Args**
+* `experiment_number` (int) : Experiment number.
+* `path` (str) : Source path.
+
+**Example**
+```python
+vessl.upload_experiment_output_files(
+ experiment_number=23,
+ path="output",
+)
+```
+
+----
+
+## terminate_experiment
+```python
+vessl.terminate_experiment(
+ experiment_number: int, **kwargs
+)
+```
+Terminate experiment in the default organization/project. If you want to
+override the default organization/project, then pass `organization_name` or
+`project_name` as `**kwargs`.
+
+**Args**
+* `experiment_number` (int) : Experiment number.
+
+**Example**
+```python
+vessl.terminate_experiment(
+ experiment_number=23,
+)
+```
+
+----
+
+## delete_experiment
+```python
+vessl.delete_experiment(
+ experiment_number: int, **kwargs
+)
+```
+Delete experiment in the default organization/project. If you want to
+override the default organization/project, then pass `organization_name` or
+`project_name` as `**kwargs`.
+
+**Args**
+* `experiment_number` (int) : Experiment number.
+
+**Example**
+```python
+vessl.delete_experiment(
+ experiment_number=23,
+)
+```
diff --git a/reference/sdk/image.md b/reference/sdk/image.md
new file mode 100644
index 0000000000000000000000000000000000000000..3e3564d67eeef76916bb28a862ba37648abaea71
--- /dev/null
+++ b/reference/sdk/image.md
@@ -0,0 +1,39 @@
+---
+title: Image
+version: EN
+---
+
+## read_kernel_image
+```python
+vessl.read_kernel_image(
+ image_id: int
+)
+```
+Read the kernel image.
+
+**Args**
+* `image_id` (int) : Image ID.
+
+**Example**
+```python
+vessl.read_kernel_image(
+ image_id=1,
+)
+```
+
+----
+
+## list_kernel_images
+```python
+vessl.list_kernel_images(
+ **kwargs
+)
+```
+List kernel images in the default organization. If you
+want to override the default organization, then pass `organization_name` as
+`**kwargs`.
+
+**Example**
+```python
+vessl.list_kernel_images()
+```
diff --git a/reference/sdk/integrations/keras.md b/reference/sdk/integrations/keras.md
new file mode 100644
index 0000000000000000000000000000000000000000..71ac49ef7db758e177294ba34bb6c3e3ef635c56
--- /dev/null
+++ b/reference/sdk/integrations/keras.md
@@ -0,0 +1,46 @@
+---
+title: Keras
+version: EN
+---
+
+VESSL provides integrations for Keras, an interface for the TensorFlow library. You can find a complete example using Keras in our [GitHub repository](https://github.com/savvihub/examples/blob/main/mnist/keras/main.py).
+
+## ExperimentCallback
+
+`ExperimentCallback` extends Keras' callback class. Add `ExperimentCallback` as a callback parameter in the `fit` function to automatically track Keras metrics at the end of each epoch. You can also log image objects using `ExperimentCallback`.
+
+| Parameter | Description |
+| ----------------- | ---------------------------------------------------------------------------------------------------------------------------------- |
+| `data_type` | Use `image` to log image objects |
+| `validation_data` | Tuple of `(validation_data, validation_labels)` |
+| `labels` | List of labels to get the caption from the inferred logits. The argmax value will be used if labels are not provided. |
+| `num_images` | Number of images to log in the validation data |
+
+### Logging metrics
+
+```python
+# Logging loss and accuracy for each epoch in Keras
+from vessl.integration.keras import ExperimentCallback
+
+...
+model.fit(..., callbacks=[ExperimentCallback()])
+...
+```
+
+### Logging image objects
+
+```python
+# Logging images along with the loss and accuracy for each epoch in Keras
+from vessl.integration.keras import ExperimentCallback
+
+...
+model.fit(
+ ...,
+ callbacks=[ExperimentCallback(
+ data_type='image',
+ validation_data=(x_val, y_val),
+ num_images=5,
+ )]
+)
+...
+```
diff --git a/reference/sdk/integrations/tensorboard.md b/reference/sdk/integrations/tensorboard.md
new file mode 100644
index 0000000000000000000000000000000000000000..b7eaefdbc3538ebc52b2c801bc9b11ba51606dbf
--- /dev/null
+++ b/reference/sdk/integrations/tensorboard.md
@@ -0,0 +1,50 @@
+---
+title: Tensorboard
+version: EN
+---
+
+Using VESSL's Python SDK, you can view and interact with metrics and media logged to TensorBoard directly on VESSL. We currently support scalars, images, and audio.
+
+VESSL supports TensorBoard with TensorFlow, PyTorch (TensorBoard > 1.14), and TensorBoardX.
+
+You can integrate TensorBoard by simply adding `vessl.init(tensorboard=True)` to your code.
+
+Note that this should be called **before creating the file writer**. This is because VESSL auto-detects the TensorBoard `logdir` upon writer creation but cannot do so if the writer has already been created.
+
+### TensorFlow
+
+```python
+import tensorflow as tf
+import vessl
+
+vessl.init(tensorboard=True) # Must be called before tf.summary.create_file_writer
+
+w = tf.summary.create_file_writer("./logdir")
+...
+```
+
+### PyTorch
+
+```python
+from torch.utils.tensorboard import SummaryWriter
+import vessl
+
+vessl.init(tensorboard=True) # Must be called before SummaryWriter
+
+writer = SummaryWriter("newdir")
+...
+```
+
+### TensorBoardX
+
+```python
+from tensorboardX import SummaryWriter
+import vessl
+
+vessl.init(tensorboard=True) # Must be called before SummaryWriter
+
+writer = SummaryWriter("newdir")
+...
+```
+
+
diff --git a/reference/sdk/model.md b/reference/sdk/model.md
new file mode 100644
index 0000000000000000000000000000000000000000..62cca993c283a4841ce26d6f36f65d76bdb15897
--- /dev/null
+++ b/reference/sdk/model.md
@@ -0,0 +1,352 @@
+---
+title: Model
+version: EN
+---
+
+## read_model_repository
+```python
+vessl.read_model_repository(
+ repository_name: str, **kwargs
+)
+```
+Read model repository in the default organization. If you want to
+override the default organization, then pass `organization_name` as
+`**kwargs`.
+
+**Args**
+* `repository_name` (str) : Model repository name.
+
+**Example**
+```python
+vessl.read_model_repository(
+ repository_name="Transformer-ImageNet",
+)
+```
+
+----
+
+## list_model_repositories
+```python
+vessl.list_model_repositories(
+ **kwargs
+)
+```
+List model repositories in the default organization. If you want to
+override the default organization, then pass `organization_name` as
+`**kwargs`.
+
+**Example**
+```python
+vessl.list_model_repositories()
+```
+
+----
+
+## create_model_repository
+```python
+vessl.create_model_repository(
+ name: str, description: str = None, **kwargs
+)
+```
+Create model repository in the default organization. If you want to
+override the default organization, then pass `organization_name` as
+`**kwargs`.
+
+**Args**
+* `name` (str) : Model repository name.
+* `description` (str) : Model repository description. Defaults to None.
+
+**Example**
+```python
+vessl.create_model_repository(
+ name="Transformer-ImageNet",
+ description="Transformer model trained on ImageNet",
+)
+```
+
+----
+
+## update_model_repository
+```python
+vessl.update_model_repository(
+ name: str, description: str, **kwargs
+)
+```
+Update model repository in the default organization. If you want to
+override the default organization, then pass `organization_name` as
+`**kwargs`.
+
+**Args**
+* `name` (str) : Model repository name.
+* `description` (str) : Model repository description to update.
+
+**Example**
+```python
+vessl.update_model_repository(
+ name="Transformer-ImageNet",
+ description="Update description to this",
+)
+```
+
+----
+
+## delete_model_repository
+```python
+vessl.delete_model_repository(
+ name: str, **kwargs
+)
+```
+Delete model repository in the default organization. If you want to
+override the default organization, then pass `organization_name` as
+`**kwargs`.
+
+**Args**
+* `name` (str) : Model repository name.
+
+**Example**
+```python
+vessl.delete_model_repository(
+ name="Transformer-ImageNet",
+)
+```
+
+----
+
+## read_model
+```python
+vessl.read_model(
+ repository_name: str, model_number: int, **kwargs
+)
+```
+Read model in the default organization. If you want to override the
+default organization, then pass `organization_name` as `**kwargs`.
+
+**Args**
+* `repository_name` (str) : Model repository name.
+* `model_number` (int) : Model number.
+
+**Example**
+```python
+vessl.read_model(
+ repository_name="Transformer-ImageNet",
+ model_number=1,
+)
+```
+
+----
+
+## list_models
+```python
+vessl.list_models(
+ repository_name: str, **kwargs
+)
+```
+List models in the default organization. If you want to override the
+default organization, then pass `organization_name` as `**kwargs`.
+
+**Args**
+* `repository_name` (str) : Model repository name.
+
+**Example**
+```python
+vessl.list_models(
+ repository_name="Transformer-ImageNet",
+)
+```
+
+----
+
+## create_model
+```python
+vessl.create_model(
+ repository_name: str, repository_description: str = None, experiment_id: int = None,
+ model_name: str = None, paths: List[str] = None, **kwargs
+)
+```
+Create model in the default organization. If you want to override the
+default organization, then pass `organization_name` as `**kwargs`. If the
+given model repository name does not exist, then create one with the given
+repository_description. Otherwise, create a model in the existing model
+repository.
+
+**Args**
+* `repository_name` (str) : Model repository name.
+* `repository_description` (str) : Model repository description. Defaults to
+  None.
+* `experiment_id` (int) : Pass experiment ID if the model is sourced from the
+ experiment outputs. Defaults to None.
+* `model_name` (str) : Model name is unique and optional. Defaults to None.
+* `paths` (List[str]) : Paths to the files that make up the model. Each path may be
+  a subpath of the experiment's output files or a local file path. Defaults to root.
+
+**Example**
+```python
+vessl.create_model(
+ repository_name="Transformer-ImageNet",
+ repository_description="Transformer model trained on ImageNet",
+ experiment_id=123456,
+ model_name="v0.0.1",
+)
+```
+
+----
+
+## update_model
+```python
+vessl.update_model(
+ repository_name: str, model_number: int, name: str, **kwargs
+)
+```
+Update model in the default organization. If you want to override the
+default organization, then pass `organization_name` as `**kwargs`.
+
+**Args**
+* `repository_name` (str) : Model repository name.
+* `model_number` (int) : Model number.
+* `name` (str) : Model name to update.
+
+**Example**
+```python
+vessl.update_model(
+ repository_name="Transformer-ImageNet",
+ model_number=1,
+ name="v0.0.2",
+)
+```
+
+----
+
+## delete_model
+```python
+vessl.delete_model(
+ repository_name: str, model_number: int, **kwargs
+)
+```
+Delete model in the default organization. If you want to override the
+default organization, then pass `organization_name` as `**kwargs`.
+
+**Args**
+* `repository_name` (str) : Model repository name.
+* `model_number` (int) : Model number.
+
+**Example**
+```python
+vessl.delete_model(
+ repository_name="Transformer-ImageNet",
+ model_number=1,
+)
+```
+
+----
+
+## list_model_volume_files
+```python
+vessl.list_model_volume_files(
+ repository_name: str, model_number: int, need_download_url: bool = False,
+ path: str = '', recursive: bool = False, **kwargs
+)
+```
+List model files in the default organization. If you want to override the
+default organization, then pass `organization_name` as `**kwargs`.
+
+**Args**
+* `repository_name` (str) : Model repository name.
+* `model_number` (int) : Model number.
+* `need_download_url` (bool) : True if you need a download URL, False
+ otherwise. Defaults to False.
+* `path` (str) : Directory path to list. Defaults to root.
+* `recursive` (bool) : If True, list files in subdirectories recursively. Defaults
+  to False.
+
+**Example**
+```python
+vessl.list_model_volume_files(
+ repository_name="Transformer-ImageNet",
+ model_number=1,
+ recursive=True,
+)
+```
+
+----
+
+## upload_model_volume_file
+```python
+vessl.upload_model_volume_file(
+ repository_name: str, model_number: int, source_path: str, dest_path: str,
+ **kwargs
+)
+```
+Upload file to the model in the default organization. If you want to
+override the default organization, then pass `organization_name` as
+`**kwargs`.
+
+**Args**
+* `repository_name` (str) : Model repository name.
+* `model_number` (int) : Model number.
+* `source_path` (str) : Local source path.
+* `dest_path` (str) : Destination path within the model.
+
+**Example**
+```python
+vessl.upload_model_volume_file(
+ repository_name="Transformer-ImageNet",
+ model_number=1,
+ source_path="model_best.pth",
+ dest_path="model_best.pth",
+)
+```
+
+----
+
+## download_model_volume_file
+```python
+vessl.download_model_volume_file(
+ repository_name: str, model_number: int, source_path: str, dest_path: str,
+ **kwargs
+)
+```
+Download a model volume file in the default organization. If you want to override the
+default organization, then pass `organization_name` as `**kwargs`.
+
+**Args**
+* `repository_name` (str) : Model repository name.
+* `model_number` (int) : Model number.
+* `source_path` (str) : Source path within the model.
+* `dest_path` (str) : Local destination path.
+
+**Example**
+```python
+vessl.download_model_volume_file(
+ repository_name="Transformer-ImageNet",
+ model_number=1,
+ source_path="model_best.pth",
+ dest_path="models",
+)
+```
+
+----
+
+## delete_model_volume_file
+```python
+vessl.delete_model_volume_file(
+ repository_name: str, model_number: int, path: str, **kwargs
+)
+```
+Delete the model volume file in the default organization. If you want to
+override the default organization, then pass `organization_name` as
+`**kwargs`.
+
+**Args**
+* `repository_name` (str) : Model repository name.
+* `model_number` (int) : Model number.
+* `path` (str) : File path within the model.
+
+**Example**
+```python
+vessl.delete_model_volume_file(
+ repository_name="Transformer-ImageNet",
+ model_number=1,
+    path="models",
+)
+```
diff --git a/reference/sdk/organization.md b/reference/sdk/organization.md
new file mode 100644
index 0000000000000000000000000000000000000000..6bd31600bf3ad47f9fce9d0bbf2df8a4125a2640
--- /dev/null
+++ b/reference/sdk/organization.md
@@ -0,0 +1,55 @@
+---
+title: Organization
+version: EN
+---
+
+## read_organization
+```python
+vessl.read_organization(
+ organization_name: str
+)
+```
+Read organization.
+
+**Args**
+* `organization_name` (str) : Organization name.
+
+**Example**
+```python
+vessl.read_organization(
+ organization_name="foo"
+)
+```
+
+----
+
+## list_organizations
+```python
+vessl.list_organizations()
+```
+List organizations.
+
+**Example**
+```python
+vessl.list_organizations()
+```
+
+----
+
+## create_organization
+```python
+vessl.create_organization(
+ organization_name: str
+)
+```
+Create organization.
+
+**Args**
+* `organization_name` (str) : Organization name.
+
+**Example**
+```python
+vessl.create_organization(
+ organization_name="foo",
+)
+```
diff --git a/reference/sdk/project.md b/reference/sdk/project.md
new file mode 100644
index 0000000000000000000000000000000000000000..432a37de5cc74fd36da15bceac69acc8904226d0
--- /dev/null
+++ b/reference/sdk/project.md
@@ -0,0 +1,65 @@
+---
+title: Project
+version: EN
+---
+
+## read_project
+```python
+vessl.read_project(
+ project_name: str, **kwargs
+)
+```
+Read project in the default organization. If you want to
+override the default organization, then pass `organization_name` as
+`**kwargs`.
+
+**Args**
+* `project_name` (str) : Project name.
+
+**Example**
+```python
+vessl.read_project(
+ project_name="tutorials",
+)
+```
+
+----
+
+## list_projects
+```python
+vessl.list_projects(
+ **kwargs
+)
+```
+List projects in the default organization. If you want to
+override the default organization, then pass `organization_name` as
+`**kwargs`.
+
+**Example**
+```python
+vessl.list_projects()
+```
+
+----
+
+## create_project
+```python
+vessl.create_project(
+ project_name: str, description: str = None, **kwargs
+)
+```
+Create project in the default organization. If you want to
+override the default organization, then pass `organization_name` as
+`**kwargs`.
+
+**Args**
+* `project_name` (str) : Project name.
+* `description` (str) : Project description. Defaults to None.
+
+**Example**
+```python
+vessl.create_project(
+ project_name="tutorials",
+ description="VESSL tutorial project",
+)
+```
diff --git a/reference/sdk/runnerbase.md b/reference/sdk/runnerbase.md
new file mode 100644
index 0000000000000000000000000000000000000000..664fc92f0bb82c2c7356d1c3140771ddbf2ef4c2
--- /dev/null
+++ b/reference/sdk/runnerbase.md
@@ -0,0 +1,183 @@
+---
+title: RunnerBase
+version: EN
+---
+
+## RunnerBase
+
+```python
+RunnerBase()
+```
+Base class for model registering.
+
+This base class introduces 5 static methods as follows:
+- `predict`: Make a prediction with the given data and model. This method must be overridden. The
+  data is given from the result of `preprocess_data`, and the return value of this method
+  will be passed to `postprocess_data` before serving.
+- `save_model`: Save the model into a file. The return value of this method will be given to the
+  `load_model` method on model loading. If this method is overridden, `load_model` must be
+  overridden as well.
+- `load_model`: Load the model from a file.
+- `preprocess_data`: Preprocess the data before prediction. It converts the API input data to
+ the model input data.
+- `postprocess_data`: Postprocess the data after prediction. It converts the model output data
+ to the API output data.
+
+Check each method's docstring for more information.
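+
+As a rough illustration of the call order described above, here is a self-contained sketch. The stand-in base class below only mirrors the documented defaults (identity pre/postprocessing, `predict` must be overridden); a real runner would subclass `vessl.RunnerBase` instead, and `MyRunner` and its toy "model" are hypothetical.
+
+```python
+import json
+
+# Stand-in for vessl.RunnerBase, reproducing only the defaults
+# described above so this sketch runs on its own.
+class RunnerBase:
+    @staticmethod
+    def preprocess_data(data):
+        return data
+
+    @staticmethod
+    def predict(model, data):
+        raise NotImplementedError
+
+    @staticmethod
+    def postprocess_data(data):
+        return data
+
+class MyRunner(RunnerBase):
+    @staticmethod
+    def preprocess_data(data):
+        # API input (a JSON string) -> model input (a list of floats)
+        return json.loads(data)["values"]
+
+    @staticmethod
+    def predict(model, data):
+        # `model` is any callable here, purely for illustration
+        return model(data)
+
+    @staticmethod
+    def postprocess_data(data):
+        # model output -> API output
+        return {"prediction": data}
+
+# Serving applies the three methods in this order:
+model = sum  # toy stand-in for a real model
+raw = '{"values": [1.0, 2.0, 3.0]}'
+result = MyRunner.postprocess_data(
+    MyRunner.predict(model, MyRunner.preprocess_data(raw))
+)
+```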
+
+
+**Methods:**
+
+## .load_model
+```python
+vessl.load_model(
+ props: Union[Dict[str, str], None], artifacts: Dict[str, str]
+)
+```
+Load the model instance from file.
+
+`props` is given from the return value of `save_model`, and `artifacts` is
+given from the `register_model` method.
+
+If `save_model` is not overridden, `props` will be None.
+
+**Args**
+* `props` (dict | None) : Data that was returned by `save_model`. If `save_model` is
+  not overridden, this will be None.
+* `artifacts` (dict) : Data that is given by `register_model` function.
+
+**Returns**
+Model instance.
+## .preprocess_data
+```python
+vessl.preprocess_data(
+ data: InputDataType
+)
+```
+Preprocess the given data.
+
+The data processed by this method will be given to the model.
+
+**Args**
+* `data` : Data to be preprocessed.
+
+**Returns**
+Preprocessed data that will be given to the model.
+## .predict
+```python
+vessl.predict(
+ model: ModelType, data: ModelInputDataType
+)
+```
+Make prediction with given data and model.
+
+**Args**
+* `model` (model_instance) : Model instance.
+* `data` : Data to be predicted.
+
+**Returns**
+Prediction result.
+## .postprocess_data
+```python
+vessl.postprocess_data(
+ data: ModelOutputDataType
+)
+```
+Postprocess the given data.
+
+The data processed by this method will be given to the user.
+
+**Args**
+* `data` : Data to be postprocessed.
+
+**Returns**
+Postprocessed data that will be given to the user.
+## .save_model
+```python
+vessl.save_model(
+ model: ModelType
+)
+```
+Save the given model instance into file.
+
+The return value of this method will be given as the first argument of `load_model` on model loading.
+
+**Args**
+* `model` (model_instance) : Model instance to save.
+
+**Returns**
+(dict) Data that will be passed to `load_model` on model loading.
+    Must be a dictionary with string keys and values.
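+
+For a sense of how the returned dict flows back into `load_model` as `props`, here is a minimal, hypothetical sketch written as plain functions (in practice these are static methods on a `RunnerBase` subclass, and the pickle file name is illustrative):
+
+```python
+import pickle
+
+# save_model persists the model and returns a str -> str dict;
+# that same dict is later handed to load_model as `props`.
+def save_model(model):
+    with open("model.pkl", "wb") as f:
+        pickle.dump(model, f)
+    return {"path": "model.pkl"}
+
+def load_model(props, artifacts):
+    with open(props["path"], "rb") as f:
+        return pickle.load(f)
+
+props = save_model({"weights": [0.1, 0.2]})
+restored = load_model(props, artifacts={})
+```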
+
+----
+
+## register_model
+```python
+vessl.register_model(
+ repository_name: str, model_number: Union[int, None], runner_cls: RunnerBase,
+ model_instance: Union[ModelType, None] = None, requirements: List[str] = None,
+ artifacts: Dict[str, str] = None, **kwargs
+)
+```
+Register the given model for serving. If you want to override the
+default organization, then pass `organization_name` as `**kwargs`.
+
+**Args**
+* `repository_name` (str) : Model repository name.
+* `model_number` (int | None) : Model number. If None, a new model will be
+  created; in that case, `model_instance` must be given.
+* `runner_cls` (RunnerBase) : Runner class that includes code for serving.
+* `model_instance` (ModelType | None) : Model instance. If None, `runner_cls`
+ must override `load_model` method. Defaults to None.
+* `requirements` (List[str]) : Python requirements for the model. Defaults to
+ [].
+* `artifacts` (Dict[str, str]) : Artifacts to be uploaded. Key is the path to the
+  artifact in the local filesystem, and value is the path in the model
+  volume. Only a trailing asterisk (*) is allowed as a glob pattern.
+  Defaults to {}.
+
+**Example**
+```python
+register_model(
+    repository_name="my-model",
+    model_number=1,
+    runner_cls=MyRunner,
+    model_instance=model_instance,
+    requirements=["torch", "torchvision"],
+    artifacts={"model.pt": "model.pt", "checkpoints/*": "checkpoints/*"},
+)
+```
+
+----
+
+## register_torch_model
+```python
+vessl.register_torch_model(
+ repository_name: str, model_number: Union[int, None], model_instance: ModelType,
+ preprocess_data = None, postprocess_data = None, requirements: List[str] = None,
+ **kwargs
+)
+```
+Register the given torch model instance for model serving. If you want to
+override the default organization, then pass `organization_name` as
+`**kwargs`.
+
+**Args**
+* `repository_name` (str) : Model repository name.
+* `model_number` (int | None) : Model number. If None, new model will be
+ created.
+* `model_instance` (model_instance) : Torch model instance.
+* `preprocess_data` (callable) : Function that will preprocess data.
+ Defaults to identity function.
+* `postprocess_data` (callable) : Function that will postprocess data.
+ Defaults to identity function.
+* `requirements` (list) : List of requirements. Defaults to [].
+
+**Example**
+```python
+vessl.register_torch_model(
+ repository_name="my-model",
+ model_number=1,
+ model_instance=model_instance,
+)
+```
diff --git a/reference/sdk/serving.md b/reference/sdk/serving.md
new file mode 100644
index 0000000000000000000000000000000000000000..d34636ed02fc5217b4ae164d9ca709dcfea27364
--- /dev/null
+++ b/reference/sdk/serving.md
@@ -0,0 +1,253 @@
+---
+title: Serving
+version: EN
+---
+
+## list_servings
+```python
+vessl.list_servings(
+ organization: str
+)
+```
+Get a list of all servings in an organization.
+
+**Args**
+* `organization` (str) : The name of the organization.
+
+**Example**
+```python
+vessl.list_servings(organization="my-org")
+```
+
+----
+
+## create_revision_from_yaml
+```python
+vessl.create_revision_from_yaml(
+ organization: str, serving_name: str, yaml_body: str
+)
+```
+Create a new revision of serving from a YAML file.
+
+**Args**
+* `organization` (str) : The name of the organization.
+* `serving_name` (str) : The name of the serving.
+* `yaml_body` (str) : The YAML body of the serving.
+  It is a raw YAML string, not a deserialized object.
+
+**Example**
+```python
+# yaml_body is the raw YAML string, e.g. read from a file
+# (the file name here is illustrative)
+with open("serving.yaml") as f:
+    yaml_body = f.read()
+
+vessl.create_revision_from_yaml(
+ organization="my-org",
+ serving_name="my-serving",
+ yaml_body=yaml_body)
+```
+
+----
+
+## read_revision
+```python
+vessl.read_revision(
+ organization: str, serving_name: str, revision_number: int
+)
+```
+Get a serving revision from a serving name and revision number.
+
+**Args**
+* `organization` (str) : The name of the organization.
+* `serving_name` (str) : The name of the serving.
+* `revision_number` (int) : The revision number of the serving.
+
+**Example**
+```python
+vessl.read_revision(
+ organization="my-org",
+ serving_name="my-serving",
+ revision_number=1)
+```
+
+----
+
+## list_revisions
+```python
+vessl.list_revisions(
+ organization: str, serving_name: str
+)
+```
+Get a list of all revisions of a serving.
+
+**Args**
+* `organization` (str) : The name of the organization.
+* `serving_name` (str) : The name of the serving.
+
+**Examples**
+```python
+vessl.list_revisions(
+ organization="my-org",
+ serving_name="my-serving")
+```
+
+----
+
+## read_gateway
+```python
+vessl.read_gateway(
+ organization: str, serving_name: str
+)
+```
+Get the gateway of a serving.
+
+**Args**
+* `organization` (str) : The name of the organization.
+* `serving_name` (str) : The name of the serving.
+
+**Examples**
+```python
+vessl.read_gateway(
+ organization="my-org",
+ serving_name="my-serving")
+```
+
+----
+
+## terminate_revision
+```python
+vessl.terminate_revision(
+ organization: str, serving_name: str, revision_number: int
+)
+```
+Terminate a serving revision from a serving name and revision number.
+
+**Args**
+* `organization` (str) : The name of the organization.
+* `serving_name` (str) : The name of the serving.
+* `revision_number` (int) : The revision number of the serving.
+
+**Example**
+```python
+vessl.terminate_revision(
+ organization="my-org",
+ serving_name="my-serving",
+ revision_number=1)
+```
+
+----
+
+## update_revision_autoscaler_config
+```python
+vessl.update_revision_autoscaler_config(
+ organization: str, serving_name: str, revision_number: int,
+ auto_scaler_config: AutoScalerConfig
+)
+```
+Update the autoscaler config of a serving revision from a serving name and revision number.
+
+**Args**
+* `organization` (str) : The name of the organization.
+* `serving_name` (str) : The name of the serving.
+* `revision_number` (int) : The revision number of the serving.
+* `auto_scaler_config` (AutoScalerConfig) : The autoscaler config of the serving.
+
+**Example**
+```python
+vessl.update_revision_autoscaler_config(
+ organization="my-org",
+ serving_name="my-serving",
+ revision_number=1,
+ auto_scaler_config=AutoScalerConfig(
+ min_replicas=1,
+ max_replicas=2,
+ target_cpu_utilization_percentage=80,
+ ))
+```
+
+----
+
+## update_gateway
+```python
+vessl.update_gateway(
+ organization: str, serving_name: str,
+ gateway: ModelServiceGatewayUpdateAPIInput
+)
+```
+Update the gateway of a serving.
+
+**Args**
+* `organization` (str) : The name of the organization.
+* `serving_name` (str) : The name of the serving.
+* `gateway` (ModelServiceGatewayUpdateAPIInput) : The gateway of the serving.
+
+**Examples**
+```python
+from openapi_client import ModelServiceGatewayUpdateAPIInput
+from openapi_client import OrmModelServiceGatewayTrafficSplitEntry
+
+gateway = ModelServiceGatewayUpdateAPIInput(
+ enabled=True,
+ ingress_host="my-endpoint",
+ traffic_split=[
+ OrmModelServiceGatewayTrafficSplitEntry(
+ revision_number=1,
+ port=2222,
+ traffic_weight=100,
+ )
+ ],
+)
+
+vessl.update_gateway(
+ organization="my-org",
+ serving_name="my-serving",
+ gateway=gateway)
+```
+
+----
+
+## update_gateway_for_revision
+```python
+vessl.update_gateway_for_revision(
+ organization: str, serving_name: str, revision_number: int, port: int,
+ weight: int
+)
+```
+Update the current gateway of a serving for a specific revision.
+
+**Args**
+* `organization` (str) : The name of the organization.
+* `serving_name` (str) : The name of the serving.
+* `revision_number` (int) : The revision number of the serving.
+* `port` (int) : The port the revision will use for the gateway.
+* `weight` (int) : The percentage of traffic routed to `revision_number`.
+
+**Examples**
+```python
+vessl.update_gateway_for_revision(
+ organization="my-org",
+ serving_name="my-serving",
+ revision_number=1,
+ port=2222,
+ weight=100)
+```
+
+----
+
+## update_gateway_from_yaml
+```python
+vessl.update_gateway_from_yaml(
+ organization: str, serving_name: str, yaml_body: str
+)
+```
+Update the gateway of a serving from a YAML file.
+
+**Args**
+* `organization` (str) : The name of the organization.
+* `serving_name` (str) : The name of the serving.
+* `yaml_body` (str) : The YAML body of the serving.
+  It is a raw YAML string, not a deserialized object.
+**Examples**
+```python
+# yaml_body is the raw YAML string, e.g. read from a file
+# (the file name here is illustrative)
+with open("gateway.yaml") as f:
+    yaml_body = f.read()
+
+vessl.update_gateway_from_yaml(
+ organization="my-org",
+ serving_name="my-serving",
+ yaml_body=yaml_body)
+```
diff --git a/reference/sdk/ssh-key.md b/reference/sdk/ssh-key.md
new file mode 100644
index 0000000000000000000000000000000000000000..24c32ded337b20009a2bf2d2b02f97e522b8de81
--- /dev/null
+++ b/reference/sdk/ssh-key.md
@@ -0,0 +1,59 @@
+---
+title: SSH-key
+version: EN
+---
+
+## list_ssh_keys
+```python
+vessl.list_ssh_keys()
+```
+List SSH public keys.
+
+**Example**
+```python
+vessl.list_ssh_keys()
+```
+
+----
+
+## create_ssh_key
+```python
+vessl.create_ssh_key(
+ key_path: str, key_name: str, ssh_public_key_value: str
+)
+```
+Create an SSH public key.
+
+**Args**
+* `key_path` (str) : SSH public key path.
+* `key_name` (str) : SSH public key name.
+* `ssh_public_key_value` (str) : SSH public key value.
+
+**Example**
+```python
+vessl.create_ssh_key(
+ key_path="/Users/johndoe/.ssh/id_ed25519.pub",
+ key_name="john@abcd.com",
+ ssh_public_key_value="ssh-public-key-value",
+)
+```
+
+----
+
+## delete_ssh_key
+```python
+vessl.delete_ssh_key(
+ key_id: int
+)
+```
+Delete an SSH public key.
+
+**Args**
+* `key_id` (int) : Key ID.
+
+**Example**
+```python
+vessl.delete_ssh_key(
+ key_id=123456,
+)
+```
diff --git a/reference/sdk/sweep.md b/reference/sdk/sweep.md
new file mode 100644
index 0000000000000000000000000000000000000000..49f441d2f524a75f018769fe321ac03b404afab5
--- /dev/null
+++ b/reference/sdk/sweep.md
@@ -0,0 +1,252 @@
+---
+title: Sweep
+version: EN
+---
+
+## read_sweep
+```python
+vessl.read_sweep(
+ sweep_name: str, **kwargs
+)
+```
+Read sweep in the default organization/project. If you want to
+override the default organization/project, then pass `organization_name` or
+`project_name` as `**kwargs`.
+
+**Args**
+* `sweep_name` (str) : Sweep name.
+
+**Example**
+```python
+vessl.read_sweep(
+ sweep_name="pitch-lord",
+)
+```
+
+----
+
+## list_sweeps
+```python
+vessl.list_sweeps(
+ **kwargs
+)
+```
+List sweeps in the default organization/project. If you want to
+override the default organization/project, then pass `organization_name` or
+`project_name` as `**kwargs`.
+
+**Example**
+```python
+vessl.list_sweeps()
+```
+
+----
+
+## create_sweep
+```python
+vessl.create_sweep(
+ name: str, algorithm: str, parameters: List[SweepParameter], cluster_name: str,
+ command: str, objective: SweepObjective = None, max_experiment_count: int = None,
+ parallel_experiment_count: int = None, max_failed_experiment_count: int = None,
+ resource_spec_name: str = None, processor_type: str = None, cpu_limit: float = None,
+ memory_limit: str = None, gpu_type: str = 'Any', gpu_limit: int = None,
+ image_url: str = None, *, early_stopping_name: str = None,
+ early_stopping_settings: List[Tuple[str, str]] = None, message: str = None,
+ hyperparameters: List[Tuple[str, str]] = None, dataset_mounts: List[str] = None,
+ git_ref_mounts: List[str] = None, git_diff_mount: str = None,
+ archive_file_mount: str = None, object_storage_mount: str = None,
+ root_volume_size: str = None, working_dir: str = None,
+ output_dir: str = MOUNT_PATH_OUTPUT, **kwargs
+)
+```
+Create sweep in the default organization/project. If you want to
+override the default organization/project, then pass `organization_name` or
+`project_name` as `**kwargs`. Pass `use_git_diff=True` if you want to run the
+experiment with uncommitted changes, and pass `use_git_diff_untracked=True`
+to include untracked files (only valid if `use_git_diff` is set).
+
+**Args**
+* `name` (str) : Sweep name.
+* `objective` (Optional[vessl.SweepObjective]) : A sweep objective including goal, metric,
+ and type.
+* `max_experiment_count` (Optional[int]) : The maximum number of experiments to run.
+  Required unless the algorithm is `grid`.
+* `parallel_experiment_count` (Optional[int]) : The number of experiments to run in
+ parallel. Default: 1.
+* `max_failed_experiment_count` (Optional[int]) : The maximum number of experiments to
+ allow to fail. Default: 1.
+* `algorithm` (str) : Parameter suggestion algorithm. `grid`, `random`, or
+ `bayesian`.
+* `parameters` (List[vessl.SweepParameter]) : A list of parameters to search.
+ - SweepParameter
+ - name(str): The names of hyperparameters to search.
+ - type(str): `int`, `double`, `categorical`.
+ - range(SweepParameterRange): Search range.
+ - list(List[str]): A list of values to try.
+ If `list` is given, `min`, `max` and `step` will be ignored.
+ - min(str): The minimum value of the search range (inclusive).
+ - max(str): The maximum value of the search range (inclusive).
+ - step(Optional[str]): If provided, the values are limited to min + n*step.
+* `cluster_name` (str) : Cluster name (must be specified before other options).
+* `command` (str) : Start command to execute in experiment container.
+* `resource_spec_name` (str) : Resource type to run an experiment (for
+ managed cluster only). Defaults to None.
+* `cpu_limit` (float) : Number of vCPUs (for custom cluster only). Defaults to
+ None.
+* `memory_limit` (str) : Memory limit (for custom cluster only).
+ Defaults to None. Example: "100Gi", "500Mi"
+* `gpu_type` (str) : GPU type (name) (for custom cluster only). Defaults to "Any".
+* `processor_type` (str) : `cpu` or `gpu` (for custom cluster only). Defaults to
+  None.
+* `gpu_limit` (int) : Number of GPU cores (for custom cluster only). Defaults
+  to None.
+* `image_url` (str) : Kernel docker image URL. Defaults to None.
+* `early_stopping_name` (str) : Early stopping algorithm name. Defaults to
+ None.
+* `early_stopping_settings` (List[Tuple[str, str]]) : Early stopping algorithm
+ settings. Defaults to None.
+* `message` (str) : Message. Defaults to None.
+* `hyperparameters` (List[Tuple[str, str]]) : A list of fixed hyperparameters as
+  (name, value) pairs. Defaults to None.
+* `dataset_mounts` (List[str]) : A list of dataset mounts. Defaults to None.
+* `git_ref_mounts` (List[str]) : A list of git repository mounts. Defaults to
+ None.
+* `git_diff_mount` (str) : Git diff mounts. Defaults to None.
+* `archive_file_mount` (str) : Local archive file mounts. Defaults to None.
+* `object_storage_mount` (str) : Object storage mounts. Defaults to None.
+* `root_volume_size` (str) : Root volume size. Defaults to None.
+* `working_dir` (str) : Working directory path. Defaults to None.
+* `output_dir` (str) : Output directory path. Defaults to "/output/".
+
+**Example**
+```python
+sweep_objective = vessl.SweepObjective(
+ type="maximize",
+ goal="0.99",
+ metric="val_accuracy",
+)
+
+parameters = [
+ vessl.SweepParameter(
+ name="optimizer",
+ type="categorical",
+ range=vessl.SweepParameterRange(
+ list=["adam", "sgd", "adadelta"]
+ )
+ ),
+ vessl.SweepParameter(
+ name="batch_size",
+ type="int",
+ range=vessl.SweepParameterRange(
+ max="256",
+ min="64",
+ step="8",
+ )
+ )
+]
+
+# Custom Cluster
+vessl.create_sweep(
+ name="example-sweep-name",
+ objective=sweep_objective,
+ max_experiment_count=4,
+ parallel_experiment_count=2,
+ max_failed_experiment_count=2,
+ algorithm="random",
+ parameters=parameters,
+ dataset_mounts=["/input:mnist"],
+ cluster_name="dgx-cluster",
+ processor_type="gpu",
+ gpu_type="NVIDIA-A100-SXM4-80GB",
+ gpu_limit=2,
+ cpu_limit=30,
+ memory_limit="100Gi",
+    image_url="public.ecr.aws/vessl/kernels:py36.full-cpu",
+    command="pip install -r requirements.txt && python main.py",
+)
+
+# VESSL-Managed Cluster
+vessl.create_sweep(
+ name="example-sweep-name",
+ objective=sweep_objective,
+ max_experiment_count=4,
+ parallel_experiment_count=2,
+ max_failed_experiment_count=2,
+ algorithm="random",
+ parameters=parameters,
+ dataset_mounts=["/input:mnist"],
+ cluster_name="aws-apne2",
+    resource_spec_name="v1.cpu-4.mem-13",
+    image_url="public.ecr.aws/vessl/kernels:py36.full-cpu",
+    command="pip install -r requirements.txt && python main.py",
+)
+```
+
+----
+
+## terminate_sweep
+```python
+vessl.terminate_sweep(
+ sweep_name: str, **kwargs
+)
+```
+Terminate sweep in the default organization/project. If you want to
+override the default organization/project, then pass `organization_name` or
+`project_name` as `**kwargs`.
+
+**Args**
+* `sweep_name` (str) : Sweep name.
+
+**Example**
+```python
+vessl.terminate_sweep(
+ sweep_name="pitch-lord",
+)
+```
+
+----
+
+## list_sweep_logs
+```python
+vessl.list_sweep_logs(
+ sweep_name: str, tail: int = 200, **kwargs
+)
+```
+List sweep logs in the default organization/project. If you want to
+override the default organization/project, then pass `organization_name` or
+`project_name` as `**kwargs`.
+
+**Args**
+* `sweep_name` (str) : Sweep name.
+* `tail` (int) : The number of lines to display from the end. Display all if
+ -1. Defaults to 200.
+
+**Example**
+```python
+vessl.list_sweep_logs(
+ sweep_name="pitch-lord",
+)
+```
+
+----
+
+## get_best_sweep_experiment
+```python
+vessl.get_best_sweep_experiment(
+ sweep_name: str, **kwargs
+)
+```
+Read sweep and return the best experiment info in the default
+organization/project. If you want to override the default
+organization/project, then pass `organization_name` or `project_name` as
+`**kwargs`.
+
+**Args**
+* `sweep_name` (str) : Sweep name.
+
+**Example**
+```python
+vessl.get_best_sweep_experiment(
+ sweep_name="pitch-lord",
+)
+```
diff --git a/reference/sdk/utilities.md b/reference/sdk/utilities.md
new file mode 100644
index 0000000000000000000000000000000000000000..fd88f6f546cb478fc900155fd974b7e708388de2
--- /dev/null
+++ b/reference/sdk/utilities.md
@@ -0,0 +1,4 @@
+---
+title: Utilities
+version: EN
+---
\ No newline at end of file
diff --git a/reference/sdk/utilities/audio.md b/reference/sdk/utilities/audio.md
new file mode 100644
index 0000000000000000000000000000000000000000..9cf549a0e0a7de4b1a8e1df79084700fa0f52601
--- /dev/null
+++ b/reference/sdk/utilities/audio.md
@@ -0,0 +1,48 @@
+---
+title: vessl.Audio
+version: EN
+---
+
+Use the `vessl.Audio` class to log audio data. It saves the audio as a local WAV file in the `vessl-media/audio` directory with a randomly generated name.
+
+
+
+| Parameter | Description |
+| -------------- | ------------------------------------------------------------------------------------------------------------------ |
+| `data_or_path` | Supported types:<br/>- `numpy.ndarray`: the audio data<br/>- `str`: the audio file path |
+| `sample_rate` | The sample rate of the audio file. Required if the `numpy.ndarray` of audio data is provided as `data_or_path` |
+| `caption` | Label of the given audio |
+
+### `numpy.ndarray`
+
+```python
+import vessl
+import soundfile as sf
+
+audio_path = "sample.wav"
+data, sample_rate = sf.read(audio_path)
+
+
+# Sample rate is required if numpy.ndarray is provided
+vessl.log(
+ payload={
+ "test-audio": [
+ vessl.Audio(data, sample_rate=sample_rate, caption="audio with data example")
+ ]
+ }
+)
+```
+
+### `str`
+
+```python
+import vessl
+
+audio_path = "sample.wav"
+vessl.log(
+ payload={
+ "test-audio": [
+ vessl.Audio(audio_path, caption="audio with path example")
+ ]
+ }
+)
+```
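If you don't have a WAV file on hand, you can synthesize one with Python's standard library and then log it by path as above (a minimal sketch; the 440 Hz tone, one-second duration, and `sample.wav` filename are arbitrary choices, not SDK requirements):

```python
import math
import struct
import wave

# Synthesize one second of a 440 Hz sine tone as 16-bit mono PCM
sample_rate = 44100
amplitude = 32767 // 2
frames = b"".join(
    struct.pack("<h", int(amplitude * math.sin(2 * math.pi * 440 * i / sample_rate)))
    for i in range(sample_rate)  # 1 second of samples
)

with wave.open("sample.wav", "wb") as f:
    f.setnchannels(1)            # mono
    f.setsampwidth(2)            # 16-bit samples
    f.setframerate(sample_rate)
    f.writeframes(frames)

# The file can then be logged by path:
# vessl.log(payload={"test-audio": [vessl.Audio("sample.wav", caption="sine tone")]})
```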
diff --git a/reference/sdk/utilities/configure.md b/reference/sdk/utilities/configure.md
new file mode 100644
index 0000000000000000000000000000000000000000..d5f71081d2f9789286acc155df6318cb64b90f4a
--- /dev/null
+++ b/reference/sdk/utilities/configure.md
@@ -0,0 +1,29 @@
+---
+title: vessl.configure
+version: EN
+---
+
+### vessl.configure
+```python
+vessl.configure(
+ *, access_token: str = None, organization_name: str = None, project_name: str = None,
+ credentials_file: str = None, force_update_access_token: bool = False
+)
+```
+Configure VESSL Client API.
+
+**Args**
+* `access_token` (str) : Access token to override. Defaults to
+ `access_token` from `~/.vessl/config`.
+* `organization_name` (str) : Organization name to override. Defaults to
+ `organization_name` from `~/.vessl/config`.
+* `project_name` (str) : Project name to override. Defaults to
+  `project_name` from `~/.vessl/config`.
+* `credentials_file` (str) : Defaults to None.
+* `force_update_access_token` (bool) : True if force update access token,
+ False otherwise. Defaults to False.
+
+**Example**
+```python
+vessl.configure()
+```
diff --git a/reference/sdk/utilities/finish.md b/reference/sdk/utilities/finish.md
new file mode 100644
index 0000000000000000000000000000000000000000..621925d83abd402e4716de803744e1434e569f29
--- /dev/null
+++ b/reference/sdk/utilities/finish.md
@@ -0,0 +1,21 @@
+---
+title: vessl.finish
+version: EN
+---
+
+Use `vessl.finish` to manually stop your [local experiment](../../../user-guide/experiment/local-experiments.md).
+
+`vessl.finish` will have no effect in a VESSL-managed experiment.
+
+### Example
+
+```python
+import vessl
+
+if __name__ == '__main__':
+ vessl.init()
+ ...
+ vessl.finish()
+ ... # Rest of your code
+```
+
diff --git a/reference/sdk/utilities/image.md b/reference/sdk/utilities/image.md
new file mode 100644
index 0000000000000000000000000000000000000000..0f4dd0256aa0226279a0f57c5d4790998f57da37
--- /dev/null
+++ b/reference/sdk/utilities/image.md
@@ -0,0 +1,66 @@
+---
+title: vessl.Image
+version: EN
+---
+
+Use the `vessl.Image` class to log image data. It saves the image as a local PNG file in the `vessl-media/image` directory with a randomly generated name.
+
+| Parameter | Description |
+| --------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| `data` | Supported types:<br/>- `PIL.Image`: a Pillow image<br/>- `torch.Tensor`: a PyTorch tensor<br/>- `numpy.ndarray`: a NumPy array<br/>- `str`: the image file path |
+| `caption` | Label of the given image |
+
+### `PIL Image`
+
+```python
+import vessl
+from PIL import Image
+
+my_PIL_image = Image.open('my-image.png')
+vessl.Image(
+ data=my_PIL_image,
+ caption='my-caption',
+)
+```
+
+### `torch.Tensor`
+
+```python
+import vessl
+import torch
+
+test_loader = torch.utils.data.DataLoader(
+ test_dataset, batch_size=10, shuffle=True)
+for data, target in test_loader:
+ vessl.Image(
+ data=data[0],
+ caption=f'Target:{target[0]}',
+ )
+```
+
+### `numpy.ndarray`
+
+```python
+import vessl
+import numpy as np
+
+my_np_image = np.array([[0,1,1,0],[1,0,0,1],[0,1,1,0]])
+vessl.Image(
+    data=my_np_image,
+ caption='my-caption',
+)
+```
+
+### `str`
+
+```python
+import vessl
+
+my_image_path = 'my-image.png'
+vessl.Image(
+ data=my_image_path,
+ caption='my-caption',
+)
+
+```
diff --git a/reference/sdk/utilities/init.md b/reference/sdk/utilities/init.md
new file mode 100644
index 0000000000000000000000000000000000000000..d5ad94095999dc040a85b9d6450fd705f4709a74
--- /dev/null
+++ b/reference/sdk/utilities/init.md
@@ -0,0 +1,27 @@
+---
+title: vessl.init
+version: EN
+---
+
+Use `vessl.init` to initialize a [local experiment](../../../user-guide/experiment/local-experiments.md). This will create a new experiment which you can view on VESSL's web interface.
+
+You can also continue a previously initialized experiment using the `experiment_number` parameter. Note that the experiment must not be a VESSL-managed experiment and must be in the running state.
+
+| Parameter | Description |
+| --------------------------- | ---------------------------------------- |
+| `experiment_number` | Experiment to reinitialize |
+| `message` | Experiment message (for new experiments) |
+| `hp` | Experiment hyperparameters (for record) |
+
+`vessl.init` will have no effect in a VESSL-managed experiment.
+
+### Example
+
+```python
+import vessl
+
+if __name__ == '__main__':
+ vessl.init()
+ ... # Rest of your code
+```
+
diff --git a/reference/sdk/utilities/log.md b/reference/sdk/utilities/log.md
new file mode 100644
index 0000000000000000000000000000000000000000..29fecf255afe89e38756305034e0c5fc473796cf
--- /dev/null
+++ b/reference/sdk/utilities/log.md
@@ -0,0 +1,71 @@
+---
+title: vessl.log
+version: EN
+---
+
+Use `vessl.log` in a training or testing loop to log a dictionary of metrics. Provide the loop unit, such as the epoch value, as the `step` parameter, and the metrics you want to log as a dictionary in the `payload` parameter.
+
+You can also log image or audio objects. Provide a list of `vessl.Image` or `vessl.Audio` objects with data and captions under any dictionary key in the `payload` parameter. Note that only the first key will be logged.
+
+| Parameter | Description |
+| --------- | --------------------------------------------------------------------------------- |
+| `step`    | Loop index to log against (e.g. the epoch number) |
+| `payload` | Dictionary of metrics or a list of `vessl.Image` objects or `vessl.Audio` objects |
+
+### Logging metrics
+
+```python
+# Logging loss values for each epoch in PyTorch
+
+import vessl
+
+for epoch in range(epochs):
+ ...
+    vessl.log(step=epoch, payload={'loss': loss.item()})
+```
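If you compute loss per batch but log once per epoch, aggregate first and pass the result as the payload. The helper below is illustrative, not part of the SDK:

```python
def epoch_payload(batch_losses):
    """Average per-batch losses into a single metrics dict for vessl.log."""
    return {"loss": sum(batch_losses) / len(batch_losses)}

payload = epoch_payload([1.0, 0.5, 0.0])  # {'loss': 0.5}
# vessl.log(step=epoch, payload=payload)
```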
+
+### Logging image objects
+
+```python
+# Logging images in PyTorch
+
+import vessl
+
+def test(model, test_loader, ...):
+ ...
+ test_images = []
+ with torch.no_grad():
+ for data, target in test_loader:
+ ...
+ output = model(data)
+ ...
+ test_images.append(
+ vessl.Image(
+ data[0],
+ caption=f'Pred: {output[0].item()} Truth: {target[0]}'
+ )
+ )
+ ...
+ vessl.log(payload={"test-images": test_images})
+```
+
+### Logging audio objects
+
+```python
+# Logging audio
+
+import vessl
+import soundfile as sf
+
+audio_path = "sample.wav"
+data, sample_rate = sf.read(audio_path)
+
+# Log audio with data
+vessl.log(
+ payload={
+ "test-audio": [
+ vessl.Audio(data, sample_rate=sample_rate, caption="audio with data example")
+ ]
+ }
+)
+```
diff --git a/reference/sdk/utilities/progress.md b/reference/sdk/utilities/progress.md
new file mode 100644
index 0000000000000000000000000000000000000000..aa566452b21e245831bb0212f98c28fc2bb4ab09
--- /dev/null
+++ b/reference/sdk/utilities/progress.md
@@ -0,0 +1,37 @@
+---
+title: vessl.progress
+version: EN
+---
+
+Use `vessl.progress` to track the progress of your experiment. VESSL estimates the remaining training time from the average elapsed time of previous epochs or batches. You can view this information by hovering over the status of a running experiment. This works both in VESSL-managed experiments and in local environments.
+
+| Parameter | Description |
+| --------- | -------------------------------------------------- |
+| `value` | Amount of progress (decimal value between 0 and 1) |
+
+### Examples
+
+```python
+import vessl
+
+for epoch in range(epochs):
+ ...
+
+ # Update experiment progress every epoch
+ vessl.progress((epoch+1) / epochs)
+```
+
+```python
+def train(model, device, train_loader, optimizer, epoch, epochs):
+    model.train()
+    num_batches = len(train_loader)
+    for batch_idx, (data, label) in enumerate(train_loader):
+        ...
+
+        # Update experiment progress every batch
+        vessl.progress(
+            (epoch * num_batches + batch_idx + 1) / (num_batches * epochs)
+        )
+```
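Whichever loop granularity you use, the value passed to `vessl.progress` should increase monotonically from 0 to 1. A helper like the following (illustrative, not part of the SDK) computes the overall fraction from the batch position:

```python
def progress_fraction(epoch, batch_idx, num_batches, epochs):
    """Overall training progress in [0, 1] after finishing the given batch."""
    return (epoch * num_batches + batch_idx + 1) / (num_batches * epochs)

# First batch of the first epoch, out of 2 epochs x 10 batches
assert progress_fraction(0, 0, num_batches=10, epochs=2) == 0.05
# Last batch of the last epoch reaches exactly 1.0
assert progress_fraction(1, 9, num_batches=10, epochs=2) == 1.0

# vessl.progress(progress_fraction(epoch, batch_idx, len(train_loader), epochs))
```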
diff --git a/reference/sdk/utilities/update.md b/reference/sdk/utilities/update.md
new file mode 100644
index 0000000000000000000000000000000000000000..c0317c7937ec4d65d113cfe3489e1c2dd866b719
--- /dev/null
+++ b/reference/sdk/utilities/update.md
@@ -0,0 +1,54 @@
+---
+title: vessl.hp.update
+version: EN
+---
+
+To record hyperparameters in VESSL **experiments**, set `vessl.hp` and update with `vessl.hp.update` as follows.
+
+#### Option 1: record hyperparameters with Python dictionary
+
+```python
+import vessl
+
+d = {"lr": 0.1, "optimizer": "sgd"}
+vessl.hp.update(d)
+```
+
+#### Option 2: record hyperparameters with Python argparse module
+
+```python
+import argparse
+import vessl
+
+parser = argparse.ArgumentParser()
+parser.add_argument('-n', '--num_layers', type=int, default=3)
+args = parser.parse_args(args=[])
+
+vessl.hp.update(args)
+```
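An `argparse.Namespace` can also be flattened into a plain dictionary with `vars()`, which is handy for merging CLI arguments with extra hyperparameters before updating (the merge below is plain Python, shown as a sketch):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-n', '--num_layers', type=int, default=3)
args = parser.parse_args(args=[])

# Flatten the Namespace and merge in additional hyperparameters
hp = {**vars(args), "lr": 0.1, "optimizer": "sgd"}
# vessl.hp.update(hp)
```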
+
+#### Option 3: record hyperparameters with [vessl.init()](init.md)
+
+`vessl.init` will have no effect in a VESSL-managed experiment.
+
+You can pass hyperparameters via the `hp` parameter of `vessl.init`.
+
+```python
+import vessl
+
+d = {"lr": 0.1, "optimizer": "sgd"}
+vessl.init(hp=d)
+```
+
+Or, you can call `vessl.init()` first, set `vessl.hp`, and call `vessl.hp.update()` without any parameters.
+
+```python
+import vessl
+
+vessl.init()
+
+vessl.hp.lr = 0.1
+vessl.hp.optimizer = "sgd" # vessl.hp = {'lr': 0.1, 'optimizer': 'sgd'}
+
+vessl.hp.update()
+```
diff --git a/reference/sdk/utilities/upload.md b/reference/sdk/utilities/upload.md
new file mode 100644
index 0000000000000000000000000000000000000000..bdca3275910b017bd9991e11d1cd5f263ea64222
--- /dev/null
+++ b/reference/sdk/utilities/upload.md
@@ -0,0 +1,22 @@
+---
+title: vessl.upload
+version: EN
+---
+
+Use `vessl.upload` to upload output files. In a VESSL-managed experiment, output files are uploaded by default. However, in a [local experiment](../../../user-guide/experiment/local-experiments.md), you need to call this method explicitly. Call this method from within your experiment.
+
+| Parameter | Description |
+| --------- | -------------- |
+| `path` | Path to upload |
+
+### Example
+
+```python
+import vessl
+
+if __name__ == '__main__':
+ vessl.init()
+ ...
+ vessl.upload("./output/") # Uploads the folder to experiment output volume
+```
+
diff --git a/reference/sdk/volume.md b/reference/sdk/volume.md
new file mode 100644
index 0000000000000000000000000000000000000000..45369185e21c83cff84655688f1dff095ff6638d
--- /dev/null
+++ b/reference/sdk/volume.md
@@ -0,0 +1,152 @@
+---
+title: Volume
+version: EN
+---
+
+## read_volume_file
+```python
+vessl.read_volume_file(
+ volume_id: int, path: str
+)
+```
+Read a file in the volume.
+
+**Args**
+* `volume_id` (int) : Volume ID.
+* `path` (str) : Path within the volume.
+
+**Example**
+```python
+vessl.read_volume_file(
+ volume_id=123456,
+ path="train.csv",
+)
+```
+
+----
+
+## list_volume_files
+```python
+vessl.list_volume_files(
+ volume_id: int, need_download_url: bool = False, path: str = '',
+ recursive: bool = False
+)
+```
+List files in the volume.
+
+**Args**
+* `volume_id` (int) : Volume ID.
+* `need_download_url` (bool) : True if you need a download URL, False
+ otherwise. Defaults to False.
+* `path` (str) : Path within the volume. Defaults to root.
+* `recursive` (bool) : True to list files recursively, False otherwise.
+  Defaults to False.
+
+**Example**
+```python
+vessl.list_volume_files(
+ volume_id=123456,
+)
+```
+
+----
+
+## create_volume_file
+```python
+vessl.create_volume_file(
+ volume_id: int, is_dir: bool, path: str
+)
+```
+Create a file or directory in the volume.
+
+**Args**
+* `volume_id` (int) : Volume ID.
+* `is_dir` (bool) : True to create a directory, False to create a file.
+* `path` (str) : Path within the volume.
+
+**Example**
+```python
+vessl.create_volume_file(
+ volume_id=123456,
+ is_dir=False,
+ path="models"
+)
+```
+
+----
+
+## delete_volume_file
+```python
+vessl.delete_volume_file(
+ volume_id: int, path: str
+)
+```
+Delete a file in the volume.
+
+**Args**
+* `volume_id` (int) : Volume ID.
+* `path` (str) : Path within the volume.
+
+**Example**
+```python
+vessl.delete_volume_file(
+ volume_id=123456,
+ path="model.pth",
+)
+```
+
+----
+
+## upload_volume_file
+```python
+vessl.upload_volume_file(
+ volume_id: int, path: str
+)
+```
+Upload a local file to the volume.
+
+**Args**
+* `volume_id` (int) : Volume ID.
+* `path` (str) : Local file path to upload.
+
+**Example**
+```python
+vessl.upload_volume_file(
+ volume_id=123456,
+ path="model.pth",
+)
+```
+
+----
+
+## copy_volume_file
+```python
+vessl.copy_volume_file(
+ source_volume_id: Optional[int], source_path: str,
+ dest_volume_id: Optional[int], dest_path: str, quiet: bool = False
+)
+```
+Copy file either from local to remote, remote to local, or remote to
+remote.
+
+**Args**
+* `source_volume_id` (Optional[int]) : Source volume ID. If not specified,
+  the source is assumed to be local.
+* `source_path` (str) : Local source path if `source_volume_id` is empty;
+  otherwise, remote source path.
+* `dest_volume_id` (Optional[int]) : Destination volume ID. If not
+  specified, the destination is assumed to be local.
+* `dest_path` (str) : Local destination path if `dest_volume_id` is empty;
+  otherwise, remote destination path.
+* `quiet` (bool) : True to suppress output, False otherwise. Defaults to
+  False.
+
+**Example**
+```python
+vessl.copy_volume_file(
+ source_volume_id=123456,
+ source_path="model.pth",
+ dest_volume_id=123457,
+ dest_path="model.pth",
+)
+```
diff --git a/reference/sdk/workspace.md b/reference/sdk/workspace.md
new file mode 100644
index 0000000000000000000000000000000000000000..51da455ee810eb9b15a421f4a51f106553ba6ee1
--- /dev/null
+++ b/reference/sdk/workspace.md
@@ -0,0 +1,259 @@
+---
+title: Workspace
+version: EN
+---
+
+## read_workspace
+```python
+vessl.read_workspace(
+ workspace_id: int, **kwargs
+)
+```
+Read workspace in the default organization. If you want to override the
+default organization, then pass `organization_name` as `**kwargs`.
+
+**Args**
+* `workspace_id` (int) : Workspace ID.
+
+**Example**
+```python
+vessl.read_workspace(
+ workspace_id=123456,
+)
+```
+
+----
+
+## list_workspaces
+```python
+vessl.list_workspaces(
+ cluster_id: int = None, statuses: List[str] = None, mine: bool = True, **kwargs
+)
+```
+List workspaces in the default organization. If you want to override the
+default organization, then pass `organization_name` as `**kwargs`.
+
+**Args**
+* `cluster_id` (int) : Defaults to None.
+* `statuses` (List[str]) : A list of status filters. Defaults to None.
+* `mine` (bool) : True to list only my workspaces, False otherwise. Defaults
+  to True.
+
+**Example**
+```python
+vessl.list_workspaces(
+ cluster_id=123456,
+ statuses=["running"],
+)
+```
+
+----
+
+## create_workspace
+```python
+vessl.create_workspace(
+ name: str, cluster_name: str, cluster_node_names: List[str] = None,
+ kernel_resource_spec_name: str = None, processor_type: str = None,
+ cpu_limit: float = None, memory_limit: str = None, gpu_type: str = None,
+ gpu_limit: int = None, kernel_image_url: str = None, max_hours: int = 24,
+ dataset_mounts: List[str] = None, local_files: List[str] = None,
+ use_vesslignore: bool = True, root_volume_size: str = '100Gi', ports: List[Dict[str,
+ Any]] = None, init_script: str = None, **kwargs
+)
+```
+Create workspace in the default organization. If you want to override the
+default organization, then pass `organization_name` as `**kwargs`.
+
+**Args**
+* `name` (str) : Workspace name.
+* `cluster_name` (str) : Cluster name (must be specified before other options).
+* `cluster_node_names` (List[str]) : A list of candidate cluster node names.
+ Defaults to None.
+* `kernel_resource_spec_name` (str) : Resource type to run an experiment (for
+ managed cluster only). Defaults to None.
+* `cpu_limit` (float) : Number of vCPUs (for custom cluster only). Defaults to
+ None.
+* `memory_limit` (str) : Memory limit in GiB (for custom cluster only).
+ Defaults to None.
+* `gpu_type` (str) : GPU type (for custom cluster only). Defaults to None.
+* `gpu_limit` (int) : Number of GPU cores (for custom cluster only). Defaults
+ to None.
+* `kernel_image_url` (str) : Kernel docker image URL. Defaults to None.
+* `max_hours` (int) : Max hours limit to run. Defaults to 24.
+* `dataset_mounts` (List[str]) : A list of dataset mounts. Defaults to None.
+* `local_files` (List[str]) : A list of local file mounts. Defaults to None.
+* `use_vesslignore` (bool) : True if local files matching glob patterns
+  in .vesslignore files should be ignored. Patterns apply relative to
+  the directory containing that .vesslignore file. Defaults to True.
+* `root_volume_size` (str) : Root volume size. Defaults to "100Gi".
+* `ports` (List[Dict[str, Any]]) : Port numbers to expose. Defaults to None.
+* `processor_type` (str) : cpu or gpu (for custom cluster only). Defaults to
+  None.
+* `init_script` (str) : Custom init script. Defaults to None.
+
+**Example**
+```python
+vessl.create_workspace(
+ name="modern-kick",
+ cluster_name="aws-apne2",
+ kernel_resource_spec_name="v1.cpu-0.mem-1",
+ kernel_image_url="public.ecr.aws/vessl/kernels:py36.full-cpu.jupyter",
+)
+```
+
+----
+
+## list_workspace_logs
+```python
+vessl.list_workspace_logs(
+ workspace_id: int, tail: int = 200, **kwargs
+)
+```
+List workspace logs in the default organization. If you want to override
+the default organization, then pass `organization_name` as `**kwargs`.
+
+**Args**
+* `workspace_id` (int) : Workspace ID.
+* `tail` (int) : The number of lines to display from the end. Display all if
+ -1. Defaults to 200.
+
+**Example**
+```python
+vessl.list_workspace_logs(
+    workspace_id=123456,
+)
+```
+
+----
+
+## start_workspace
+```python
+vessl.start_workspace(
+ workspace_id: int, **kwargs
+)
+```
+Start the workspace container in the default organization. If you want to
+override the default organization, then pass `organization_name` as
+`**kwargs`.
+
+**Args**
+* `workspace_id` (int) : Workspace ID.
+
+**Example**
+```python
+vessl.start_workspace(
+ workspace_id=123456,
+)
+```
+
+----
+
+## stop_workspace
+```python
+vessl.stop_workspace(
+ workspace_id: int, **kwargs
+)
+```
+Stop the workspace container in the default organization. If you want to
+override the default organization, then pass `organization_name` as
+`**kwargs`.
+
+**Args**
+* `workspace_id` (int) : Workspace ID.
+
+**Example**
+```python
+vessl.stop_workspace(
+ workspace_id=123456,
+)
+```
+
+----
+
+## terminate_workspace
+```python
+vessl.terminate_workspace(
+ workspace_id: int, **kwargs
+)
+```
+Terminate the workspace container in the default organization. If you
+want to override the default organization, then pass `organization_name` as
+`**kwargs`.
+
+**Args**
+* `workspace_id` (int) : Workspace ID.
+
+**Example**
+```python
+vessl.terminate_workspace(
+ workspace_id=123456,
+)
+```
+
+----
+
+## backup_workspace
+```python
+vessl.backup_workspace()
+```
+Back up the home directory of the workspace. This command should be called
+inside a workspace.
+
+**Example**
+```python
+vessl.backup_workspace()
+```
+
+----
+
+## restore_workspace
+```python
+vessl.restore_workspace()
+```
+Restore the home directory from the previous backup. This command should
+be called inside a workspace.
+
+**Example**
+```python
+vessl.restore_workspace()
+```
+
+----
+
+## connect_workspace_ssh
+```python
+vessl.connect_workspace_ssh(
+ private_key_path: str
+)
+```
+Connect to a running workspace via SSH.
+
+**Args**
+* `private_key_path` (str) : SSH private key path
+
+**Example**
+```python
+vessl.connect_workspace_ssh(
+ private_key_path="~/.ssh/key_path",
+)
+```
+
+----
+
+## update_vscode_remote_ssh
+```python
+vessl.update_vscode_remote_ssh(
+ private_key_path: str
+)
+```
+Update the `~/.ssh/config` file for the VSCode Remote-SSH plugin.
+
+**Args**
+* `private_key_path` (str) : SSH private key path
+
+**Example**
+```python
+vessl.update_vscode_remote_ssh(
+ private_key_path="~/.ssh/key_path",
+)
+```
diff --git a/reference/yaml/cheatsheet.md b/reference/yaml/cheatsheet.md
new file mode 100644
index 0000000000000000000000000000000000000000..a27b79c8364129805c771a88cfbfe617cd3a8edc
--- /dev/null
+++ b/reference/yaml/cheatsheet.md
@@ -0,0 +1,130 @@
+---
+title: Cheat Sheet
+description: 'Full list of YAML configurations.'
+---
+
+```yaml Full YAML configurations
+name: stable-diffusion
+description: This is the inference example of stable diffusion.
+tags:
+- "best"
+- "A100-80g"
+- "20epochs"
+resources:
+ cluster: vessl-gcp-oregon
+ preset: v1.l4-1.mem-42
+ node_names:
+ - "n01"
+ - "n03"
+ - "n04"
+import:
+ /import/code: git://github.com/{accountName}/{repoName}
+ /import/code-verbose:
+ git:
+ url: https://github.com/{accountName}/{repoName}
+ ref: c0ffee
+ credential_name: my-git-cred-name
+ /import/dataset: vessl-dataset://{organizationName}/{datasetName}
+ /import/dataset-verbose:
+ dataset:
+ organization_name: {organizationName}
+ dataset_name: {datasetName}
+ /import/model: vessl-model://{organizationName}/{modelRepositoryName}/{modelNumber}
+ /import/model-verbose:
+ model:
+ organization_name: {organizationName}
+ model_repository_name: {modelRepositoryName}
+ model_number: {modelNumber}
+  /import/artifact: vessl-artifact://{organizationName}/{projectName}/{artifactName}
+ /import/artifact-verbose:
+ artifact:
+ organization_name: {organizationName}
+ project_name: {projectName}
+ name: {artifactName}
+ /import/artifact-verbose-same-project:
+ artifact:
+ name: {artifactName}
+ /import/s3: s3://{bucketName}/{path}
+ /import/s3-verbose:
+ s3:
+ bucket: {bucketName}
+ prefix: {prefix}
+ credential_name: my-s3-cred-name
+  /import/gs: gs://{bucketName}/{path}
+ /import/gs-verbose:
+ gs:
+ bucket: {bucketName}
+ prefix: {prefix}
+ credential_name: my-gs-cred-name
+mount:
+ /mount/dataset: vessl-dataset://{organizationName}/{datasetName}
+ /mount/dataset-verbose:
+ dataset:
+ organization_name: {organizationName}
+ dataset_name: {datasetName}
+ /mount/hostpath: hostpath://{path}
+ /mount/hostpath-verbose:
+ hostpath:
+ path: {path}
+ readonly: true
+ /mount/nfs: nfs://{server}/{path}
+ /mount/nfs-verbose:
+ nfs:
+ server: {server}
+ path: {path}
+ readonly: false
+export:
+ /export/output-artifact: vessl-artifact://
+ /export/output-artifact-verbose:
+ artifact:
+ /export/backup-artifact: vessl-artifact://{organizationName}/{projectName}/{artifactName}
+ /export/backup-artifact-verbose:
+ artifact:
+ organization_name: {organizationName}
+ project_name: {projectName}
+ artifact_name: {artifactName}
+ /export/dataset: vessl-dataset://{organizationName}/{datasetName}
+ /export/dataset-verbose:
+ dataset:
+ organization_name: {organizationName}
+ dataset_name: {datasetName}
+ /export/model: vessl-model://{organizationName}/{modelRepositoryName}
+ /export/model-verbose:
+ model:
+ organization_name: {organizationName}
+ model_repository_name: {modelRepositoryName}
+  /export/s3: s3://{bucketName}/{prefix}
+ /export/s3-verbose:
+ s3:
+ bucket: {bucketName}
+ prefix: {prefix}
+ endpoint: in-house.endpoint.co.kr
+ credential_name: my-s3-cred-name
+ /export/gs: gs://{bucketName}/{prefix}
+ /export/gs-verbose:
+ gs:
+ bucket: {bucketName}
+ prefix: {prefix}
+run:
+ - workdir: /input/data1
+ command: |
+ python data_preprocessing.py
+ - wait: 10s
+ - workdir: /root/git-examples
+ command: |
+ python train.py --learning_rate=$learning_rate --batch_size=$batch_size
+interactive:
+ max_runtime: 24h # required if interactive
+ jupyter: # required if interactive
+ idle_timeout: 120m # required if interactive
+ports:
+ - 3000
+ - name: streamlit
+ type: http
+ port: 8501
+env:
+ learning_rate: 0.001
+ postgres_password:
+ value: OUR_DB_PW
+ secret: true
+```
\ No newline at end of file
diff --git a/reference/yaml/yaml.md b/reference/yaml/yaml.md
new file mode 100644
index 0000000000000000000000000000000000000000..a83b124d32a6cbdee59156ab7c2d21a18c72bffd
--- /dev/null
+++ b/reference/yaml/yaml.md
@@ -0,0 +1,379 @@
+---
+title: YAML
+description: 'VESSL Run is configured through a single YAML file.'
+version: "EN"
+---
+
+## Field types
+### Metadata
+`name`, `description`, and `tags` fields are the metadata of a Run. Ideally, they should represent the specific characteristics or purpose of your run for easier identification.
+
+| Name | Type | Required | Description |
+| ------- | ------- | --------- | ----------------- |
+| `name` | string | Required | The name of the run. |
+| `description` | string | Optional | The description of the run. |
+| `tags` | list | Optional | The tags of the run. |
+
+```yaml Specify run metadata
+name: stable-diffusion
+description: This is the inference example of stable diffusion.
+tags:
+- "best"
+- "A100-80g"
+- "20epochs"
+```
+
+### Resources
+`resources` specifies the resource specs to use for the run. Use a `preset` provided by VESSL or request the desired resources with `requests`.
+
+0. Common fields
+
+| Name | Type | Required | Description |
+| --------- | ------- | ----------|------------------ |
+| `cluster` | string | Optional | The cluster to be used for the run. (default: VESSL-managed cluster) |
+| `node_names` | list | Optional | Candidate nodes for workload assignment. If not set, any available node may be used. |
+
+1. Using `preset` with common fields
+
+| Name | Type | Required | Description |
+| --------- | ------- | ----------|------------------ |
+| `preset`  | string  | Required without `requests` | The name of a resource spec preset registered in VESSL. If the preset is not specified, we will offer the best option for you based on `requests`. |
+
+
+```yaml Run resource specs with preset
+resources:
+ cluster: vessl-gcp-oregon
+ preset: v1.l4-1.mem-42
+```
+
+```yaml Run resource specs with preset and node candidates
+resources:
+ cluster: my-on-premise-cluster
+ preset: v100-1
+ node_names:
+ - "n01"
+ - "n03"
+ - "n04"
+```
+
+
+2. Using `requests` with common fields (Upcoming feature)
+
+| Name | Type | Required | Description |
+| --------- | ------- | ----------|------------------ |
+| `requests` | map | Required without `preset` | The desired resource specs. |
+| `cpu` | string | Optional | The number of CPU cores. |
+| `memory` | string | Optional | The memory size (e.g. `12Gi`). |
+| `nvidia.com/gpu` | map | Optional | The `device_type` and `quantity` of the NVIDIA GPUs to be used for the run. |
+
+
+```yaml Run resource specs with requests
+resources:
+ cluster: vessl-gcp-oregon
+ requests:
+ cpu: "4"
+ memory: 12Gi
+ nvidia.com/gpu:
+ device_type: V100
+ quantity: "2"
+```
+
+```yaml Run resource specs with requests and node candidates
+resources:
+ cluster: my-on-premise-cluster
+ requests:
+ cpu: "4"
+ memory: 12Gi
+ nvidia.com/gpu:
+ device_type: V100
+ quantity: "2"
+ node_names:
+ - "n01"
+ - "n03"
+ - "n04"
+```
+
+
+You can list available clusters or resource specs with the CLI command: `vessl cluster list` or `vessl resource list`.
+
+
+```bash List VESSL clusters
+pip install vessl
+vessl cluster list
+```
+
+```bash List resource specs
+pip install vessl
+vessl resource list
+```
+
+
+### Container Image
+The `image` field is a string that specifies the container image to be used in the run. This is typically a Docker image that includes all the necessary dependencies and environment for your machine learning model.
+
+| Name | Type | Required | Description |
+| ------- | ------- | --------- | ----------------- |
+| `image` | string or map | Required | Container image URL, or a map of `url` and `credential_name`. |
+| `url` | string | Optional | Container image URL. |
+| `credential_name` | string | Optional | Credential name registered at VESSL, for private images. |
+
+
+```yaml Use a VESSL-managed image
+image: quay.io/vessl-ai/ngc-pytorch-kernel:22.10-py3-202306140422
+```
+
+```yaml Use a public custom image
+image: my-docker-account/public-repo-name:tag-name
+```
+
+```yaml Use a private custom image
+image:
+ url: my-docker-account/private-repo-name:tag-name
+ credential_name: docker_hub_cred
+```
+
+
+
+You can list available VESSL-managed images with the CLI command: `vessl image list`.
+
+```bash List VESSL-managed images with the VESSL CLI
+pip install vessl
+vessl image list
+```
+
+### Volumes
+There are three types of volumes: `import`, `mount`, and `export`. Each field is a map whose keys are target paths and whose values are source information. A value is either a simple string with a prefix or a map that holds more detailed information.
+
+1. Import
+The `import` type signifies that the data will be downloaded from the source to a target path in the running container.
+
+| Prefix | Type | Required | Description |
+| ------- | ------- | --------- | ----------------- |
+| `git://` | string | Optional | Import a git repository. The repository will be cloned into the specified target path when the container starts. |
+| `vessl-dataset://` | string | Optional | Import a dataset stored in VESSL Dataset.|
+| `vessl-model://` | string | Optional | Import a model stored in VESSL Model Registry. |
+| `vessl-artifact://` | string | Optional | Import an artifact stored in VESSL Artifact. |
+| `s3://` | string | Optional | Import an AWS S3 bucket. |
+| `gs://` | string | Optional | Import a Google Cloud Storage bucket. |
+
+
+```yaml String import value with prefix
+import:
+ /import/code: git://github.com/{accountName}/{repoName}
+ /import/dataset: vessl-dataset://{organizationName}/{datasetName}
+ /import/model: vessl-model://{organizationName}/{modelRepositoryName}/{modelNumber}
+  /import/artifact: vessl-artifact://{organizationName}/{projectName}/{artifactName}
+ /import/s3: s3://{bucketName}/{path}
+  /import/gs: gs://{bucketName}/{path}
+```
+
+```yaml Verbose import value
+import:
+ /import/code:
+ git:
+ url: https://github.com/{accountName}/{repoName}
+ ref: c0ffee
+ credential_name: my-git-cred-name
+ /import/dataset:
+ dataset:
+ organization_name: {organizationName}
+ dataset_name: {datasetName}
+ /import/model:
+ model:
+ organization_name: {organizationName}
+ model_repository_name: {modelRepositoryName}
+ model_number: {modelNumber}
+ /import/artifact:
+ artifact:
+ organization_name: {organizationName}
+ project_name: {projectName}
+ name: {artifactName}
+ /import/artifact-same-project:
+ artifact:
+ name: {artifactName}
+ /import/s3:
+ s3:
+ bucket: {bucketName}
+ prefix: {prefix}
+ credential_name: my-s3-cred-name
+ /import/gs:
+ gs:
+ bucket: {bucketName}
+ prefix: {prefix}
+ credential_name: my-gs-cred-name
+```
+
+
+2. Mount
+The `mount` type means that the data will be directly mounted to a target path in the run container, providing direct access to the user.
+
+| Prefix | Type | Required | Description |
+| ------- | ------- | --------- | ----------------- |
+| `vessl-dataset://` | string | Optional | Mount a dataset stored in VESSL Dataset. |
+| `hostpath://` | string | Optional | Mount a file or directory from the host node's filesystem. |
+| `nfs://` | string | Optional | Mount a Network File System (NFS). |
+| `readonly` | boolean | Optional | True if readonly mode. (default: True) |
+
+
+```yaml String mount value with prefix
+mount:
+ /mount/dataset: vessl-dataset://{organizationName}/{datasetName}
+ /mount/hostpath: hostpath://{path}
+ /mount/nfs: nfs://{server}/{path}
+```
+
+```yaml Verbose mount value
+mount:
+ /mount/dataset:
+ dataset:
+ organization_name: {organizationName}
+ dataset_name: {datasetName}
+ /mount/hostpath:
+ hostpath:
+ path: {path}
+ readonly: true
+ /mount/nfs:
+ nfs:
+ server: {server}
+ path: {path}
+ readonly: false
+```
+
+
+3. Export
+The `export` type is designed for uploading data from a path in the run container to a target destination after the run finishes.
+
+| Prefix | Type | Required | Description |
+| ------- | ------- | --------- | ----------------- |
+| `vessl-artifact://` | string | Optional | Export to VESSL Artifact. |
+| `vessl-dataset://` | string | Optional | Export to VESSL Dataset. |
+| `vessl-model://` | string | Optional | Export to VESSL Model. |
+| `s3://` | string | Optional | Export to Amazon S3 bucket. |
+| `gs://` | string | Optional | Export to Google Cloud Storage. |
+
+
+```yaml String export value with prefix
+export:
+ /export/output-artifact: vessl-artifact://
+ /export/backup-artifact: vessl-artifact://{organizationName}/{projectName}/{artifactName}
+ /export/dataset: vessl-dataset://{organizationName}/{datasetName}
+ /export/model: vessl-model://{organizationName}/{modelRepositoryName}
+ /export/s3: s3://{bucketName}/{prefix}
+ /export/gs: gs://{bucketName}/{prefix}
+```
+
+```yaml Verbose export value
+export:
+ /export/output-artifact:
+ artifact:
+ /export/backup-artifact:
+ artifact:
+ organization_name: {organizationName}
+ project_name: {projectName}
+ artifact_name: {artifactName}
+ /export/dataset:
+ dataset:
+ organization_name: {organizationName}
+ dataset_name: {datasetName}
+ /export/model:
+ model:
+ organization_name: {organizationName}
+ model_repository_name: {modelRepositoryName}
+ /export/s3:
+ s3:
+ bucket: {bucketName}
+ prefix: {prefix}
+ endpoint: in-house.endpoint.co.kr
+ credential_name: my-s3-cred-name
+ /export/gs:
+ gs:
+ bucket: {bucketName}
+ prefix: {prefix}
+ credential_name: my-gs-cred-name
+```
+
+
+### Run Command
+The `run` field is a list of commands to be run in the container. Each item in the list is a map with the following keys. `run` may be left empty for an interactive run.
+
+| Name | Type | Required | Description |
+| ------- | ------- | --------- | ----------------- |
+| `workdir` | string | Optional | The working directory for the command. |
+| `command`| string | Required | The command to be run. |
+| `wait` | string | Optional | How long to wait after the previous command (e.g. `10s`). |
+
+
+```yaml Run a single command
+run:
+ - command: |
+ python train.py --learning_rate=$learning_rate --batch_size=$batch_size
+```
+
+```yaml Run multiple commands
+run:
+ - workdir: /input/data1
+ command: |
+ python data_preprocessing.py
+ - wait: 10s
+ - workdir: /root/git-examples
+ command: |
+ python train.py --learning_rate=$learning_rate --batch_size=$batch_size
+```
+
+
+### Interactive
+The `interactive` field is used to specify if the run allows interactive communication with the user. It provides multiple ways to interact with the container during the run, such as JupyterLab, SSH, or a custom service via specified ports.
+
+| Name | Type | Required | Description |
+| ------- | ------- | --------- | ----------------- |
+| `interactive` | map | Optional | Mark the run as interactive; includes `max_runtime` and `jupyter`. |
+| `max_runtime` | string | Required | The maximum amount of time to run. Set `0` for infinite use. |
+| `jupyter` | map | Required | Jupyter configuration that includes `idle_timeout`. |
+| `idle_timeout` | string | Required | The amount of time a server can be inactive before it is culled. |
+
+```yaml Maximum runtime 24h and idle_timeout 120m
+interactive:
+ max_runtime: 24h
+ jupyter:
+ idle_timeout: 120m
+```
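+
+An interactive run is typically paired with a `ports` entry so the Jupyter server or a custom service is reachable from outside the container. The snippet below is a sketch combining the two fields; the port name and number are illustrative values, not requirements.
+
+```yaml Interactive run exposing a service port
+interactive:
+  max_runtime: 8h
+  jupyter:
+    idle_timeout: 60m
+ports:
+  - name: streamlit  # illustrative label; any name works
+    type: http
+    port: 8501
+```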
+
+### Ports
+The `ports` field is a list of maps that specifies the ports to expose.
+
+| Name | Type | Required | Description |
+| ------- | ------- | --------- | ----------------- |
+| `ports` | list | Optional | List of port numbers or port information that includes `name`, `type`, and `port` to expose. |
+| `name` | string | Optional | The port name. |
+| `type` | string | Optional | The protocol of the port (`http` or `tcp`). |
+| `port` | int | Optional | The port number. |
+
+
+```yaml Expose port by number
+ports:
+ - 3000
+```
+
+```yaml Expose port by name, number, and type
+ports:
+ - name: streamlit
+ type: http
+ port: 8501
+```
+
+
+### Environment Variables
+The `env` field is a map that specifies the environment variables for the run. Each key-value pair in this map represents an environment variable and its value.
+
+| Name | Type | Required | Description |
+| ------- | ------- | --------- | ----------------- |
+| `env` | map | Optional | Key-value pairs for environment variables in the run container. |
+| `value` | string | Optional | The value of the environment variable. |
+| `secret` | boolean | Optional | True if the variable should be treated as a secret. |
+
+```yaml Set multiple environment variables
+env:
+ learning_rate: 0.001
+ postgres_password:
+ value: OUR_DB_PW
+ secret: true
+```
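+
+Variables defined in `env` are available to the `run` commands as ordinary shell variables. A minimal sketch, where the script name and flag are illustrative:
+
+```yaml Consume an env variable in a run command
+env:
+  learning_rate: 0.001
+run:
+  - command: |
+      # $learning_rate expands to the value set in env above
+      python train.py --learning_rate=$learning_rate
+```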