              key: hf_token
        volumeMounts:
        - mountPath: /dev/shm
          name: dshm
      volumes:
      - name: dshm
        emptyDir:
          medium: Memory
      # Node selectors are the main difference among the cloud providers,
      # making sure vLLM pods land on Nodes with the correct GPU. The
      # following are node selector examples for three cloud providers.
      #
      # - GKE
      #   nodeSelector:
      #     cloud.google.com/gke-accelerator: nvidia-l4
      #     cloud.google.com/gke-gpu-driver-version: default
      #
      # - EKS
      #   nodeSelector:
      #     node.kubernetes.io/instance-type: p4d.24xlarge
      #
      # - AKS
      #   nodeSelector:
      #     nvidia.com/gpu: "true" # Common label for AKS GPU nodes
# AI Inference with vLLM on Kubernetes
## Purpose / What You'll Learn
This example demonstrates how to deploy an AI inference server on Kubernetes using [vLLM](https://docs.vllm.ai/en/latest/). You'll learn how to:
- Set up a vLLM inference server with a model downloaded from [Hugging Face](https://huggingface.co/).
- Expose the inference endpoint using a Kubernetes `Service`.
- Set up port forwarding from your local machine to the inference `Service` in the Kubernetes cluster.
- Send a sample prediction request to the server using `curl`.
---
## 📚 Table of Contents
- [Prerequisites](#prerequisites)
- [Detailed Steps & Explanation](#detailed-steps--explanation)
- [Verification / Seeing it Work](#verification--seeing-it-work)
- [Configuration Customization](#configuration-customization)
- [Platform-Specific Configuration](#platform-specific-configuration)
- [Cleanup](#cleanup)
- [Further Reading / Next Steps](#further-reading--next-steps)
---
## Prerequisites
- A Kubernetes cluster with access to NVIDIA GPUs. This example was tested on GKE, but it can be adapted to other cloud providers such as EKS and AKS, as long as you have a GPU-enabled node pool and have deployed the NVIDIA device plugin (see the check below).
- A Hugging Face account token with permission to access the model used in this example (`google/gemma-3-1b-it`).
- `kubectl` configured to communicate with the cluster and available in your `PATH`.
- The `curl` binary available in your `PATH`.
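One way to confirm that the device plugin is working is to check whether your nodes advertise allocatable `nvidia.com/gpu` resources. This is an optional sanity check, not part of the original example:
```bash
# List each node together with the number of NVIDIA GPUs it advertises.
# A non-zero count means the NVIDIA device plugin is running on that node.
kubectl get nodes -o custom-columns='NAME:.metadata.name,GPUS:.status.allocatable.nvidia\.com/gpu'
```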
**Note for GKE users:** To target specific GPU types, you can uncomment the GKE-specific `nodeSelector` in `vllm-deployment.yaml`.
---
## Detailed Steps & Explanation
1. Create a namespace. This example uses `vllm-example`, but you can choose any name:
```bash
kubectl create namespace vllm-example
```
2. Create a Kubernetes `Secret` with your Hugging Face token so the server can download the model:
```bash
# The HF_TOKEN environment variable holds your Hugging Face account token
# Make sure to use the same namespace as in the previous step
kubectl create secret generic hf-secret -n vllm-example \
  --from-literal=hf_token=$HF_TOKEN
```
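Optionally, confirm that the Secret exists before moving on (this check is an addition, not part of the original steps):
```bash
# The Secret should contain a single data key named hf_token.
kubectl get secret hf-secret -n vllm-example
kubectl describe secret hf-secret -n vllm-example
```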
3. Apply the vLLM server manifest:
```bash
# Make sure to use the same namespace as in the previous steps
kubectl apply -f vllm-deployment.yaml -n vllm-example
```
- Wait for the Deployment to reconcile and create the vLLM pod(s):
```bash
kubectl wait --for=condition=Available --timeout=900s deployment/vllm-gemma-deployment -n vllm-example
kubectl get pods -l app=gemma-server -w -n vllm-example
```
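- If the pod stays in `Pending` (for example, when no matching GPU node is available), inspect its scheduling events. This troubleshooting step is an addition to the original guide:
```bash
# Events at the bottom of the output show why the scheduler cannot place the pod.
kubectl describe pod -l app=gemma-server -n vllm-example
```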
- View vLLM pod logs:
```bash
kubectl logs -f -l app=gemma-server -n vllm-example
```
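The remaining steps in this guide expose the server through a `Service`, forward a local port to it, and send a request with `curl`. The sketch below illustrates that flow; the `Service` name `vllm-service` and port `8000` are assumptions here (vLLM's OpenAI-compatible server listens on port 8000 by default), so check the Service manifest in this example for the actual values:
```bash
# Forward a local port to the inference Service (the Service name is an assumption).
kubectl port-forward service/vllm-service 8000:8000 -n vllm-example &

# Send a sample chat completion request to vLLM's OpenAI-compatible API.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "google/gemma-3-1b-it",
    "messages": [{"role": "user", "content": "Why is the sky blue?"}],
    "max_tokens": 64
  }'
```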