## Further Reading / Next Steps
- [vLLM AI Inference Server](https://docs.vllm.ai/en/latest/)
- [Hugging Face Security Tokens](https://huggingface.co/docs/hub/en/security-tokens)
# source: k8s_examples/AI/vllm-deployment/hpa/vllm-service-monitor.yaml type: yaml
# This ServiceMonitor tells the Prometheus Operator how to discover and scrape
# metrics from the vLLM inference server. It is designed to find the
# 'vllm-service' in the 'vllm-example' namespace and scrape its '/metrics' endpoint.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: vllm-gemma-servicemonitor
  # This ServiceMonitor must be deployed in the same namespace as the
  # Prometheus Operator, which is 'monitoring' in this setup.
  namespace: monitoring
  labels:
    # This label is used by the Prometheus Operator to discover this
    # ServiceMonitor. It must match the 'serviceMonitorSelector' configured
    # in the Prometheus custom resource.
    release: prometheus
spec:
  # This selector specifies which namespace(s) to search for the target Service.
  # In this case, it's looking in the 'vllm-example' namespace where the vLLM
  # service is deployed.
  namespaceSelector:
    matchNames:
      - vllm-example
  # This selector identifies the specific Service to scrape within the
  # selected namespace(s). It must match the labels on the 'vllm-service'.
  selector:
    matchLabels:
      app: gemma-server
  endpoints:
    # This section defines the port, path, and scrape interval for the metrics endpoint.
    - port: http
      path: /metrics
      interval: 15s
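
# To deploy this ServiceMonitor and confirm that Prometheus has discovered the
# vLLM target, you can apply the manifest and check the Prometheus targets page.
# This is a sketch: 'prometheus-operated' is the headless Service the Prometheus
# Operator typically creates for its Prometheus instances; adjust if yours differs.
#   kubectl apply -f vllm-service-monitor.yaml
#   kubectl port-forward -n monitoring svc/prometheus-operated 9090:9090
#   # Then open http://localhost:9090/targets and look for the vLLM endpoint.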
# source: k8s_examples/AI/vllm-deployment/hpa/gpu-hpa.md type: docs
# Autoscaling an AI Inference Server with HPA using NVIDIA GPU Metrics
This guide provides a detailed walkthrough for configuring a Kubernetes Horizontal Pod Autoscaler (HPA) to dynamically scale a vLLM AI inference server based on NVIDIA GPU utilization. The autoscaling logic is driven by the `DCGM_FI_DEV_GPU_UTIL` metric, which is exposed by the NVIDIA Data Center GPU Manager (DCGM) Exporter.
This guide assumes you have already deployed the vLLM inference server from the [parent directory's exercise](../README.md) into the `vllm-example` namespace.
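For orientation, here is a minimal sketch of the kind of HPA this guide works toward. It assumes `DCGM_FI_DEV_GPU_UTIL` has already been exposed through the Kubernetes custom metrics API (for example via a Prometheus adapter, which the later steps build up to); the Deployment name, replica bounds, and the 60% threshold are illustrative placeholders, not values from the exercise.
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: vllm-gpu-hpa
  namespace: vllm-example
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: vllm-gemma-deployment  # hypothetical name for the vLLM Deployment
  minReplicas: 1
  maxReplicas: 4
  metrics:
    - type: Pods
      pods:
        metric:
          name: DCGM_FI_DEV_GPU_UTIL
        target:
          type: AverageValue
          averageValue: "60"  # scale out when average GPU utilization exceeds ~60%
```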
---
## 1. Verify GPU Metric Collection
The first step is to ensure that GPU metrics are being collected and exposed within the cluster. This is handled by the NVIDIA DCGM Exporter, which runs as a DaemonSet on GPU-enabled nodes and scrapes metrics directly from the GPU hardware. The method for deploying this exporter varies across cloud providers.
### 1.1. Cloud Provider DCGM Exporter Setup
Below are the common setups for GKE, AKS, and EKS.
#### Google Kubernetes Engine (GKE)
On GKE, the DCGM exporter is a managed add-on that the platform deploys and maintains automatically. It runs in the `gke-managed-system` namespace.
**Verification:**
You can verify that the exporter pods are running with the following command:
```bash
kubectl get pods --namespace gke-managed-system | grep dcgm-exporter
```
You should see one or more `dcgm-exporter` pods in a `Running` state.
#### Amazon Elastic Kubernetes Service (EKS) & Microsoft Azure Kubernetes Service (AKS)
On both EKS and AKS, the DCGM exporter is not a managed service and must be installed manually. The standard method is to use the official [NVIDIA DCGM Exporter Helm chart](https://github.com/NVIDIA/dcgm-exporter), which deploys the exporter as a DaemonSet.
**Installation (for both EKS and AKS):**
If you don't already have the exporter installed, you can do so with the following Helm commands:
```bash
helm repo add gpu-helm-charts https://nvidia.github.io/dcgm-exporter/helm-charts
helm repo update
helm install dcgm-exporter gpu-helm-charts/dcgm-exporter \
  --namespace monitoring --create-namespace
```
*Note: We are installing it into the `monitoring` namespace to keep all monitoring-related components together.*
**Verification:**
You can verify that the exporter pods are running in the `monitoring` namespace:
```bash
kubectl get pods --namespace monitoring | grep dcgm-exporter
```
You should see one or more `dcgm-exporter` pods in a `Running` state.
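Regardless of platform, you can also spot-check the raw metrics the exporter serves. The sketch below assumes the exporter listens on its default port 9400 and that the DaemonSet is named `dcgm-exporter` in the `monitoring` namespace; on GKE, substitute the `gke-managed-system` namespace and the managed DaemonSet's name.
```bash
# Forward the exporter's metrics port to your machine, then look for the
# GPU utilization gauge that will drive the HPA.
kubectl port-forward -n monitoring daemonset/dcgm-exporter 9400:9400 &
curl -s http://localhost:9400/metrics | grep DCGM_FI_DEV_GPU_UTIL
```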
---
## 2. Set Up Prometheus for Metric Collection
With the metric source confirmed, the next step is to configure Prometheus to scrape, process, and store these metrics. The setup differs slightly between GKE and other platforms due to the managed vs. manual installation of the DCGM exporter.
### 2.1. Install the Prometheus Operator
The Prometheus Operator can be easily installed using its official Helm chart. This will deploy a full monitoring stack into the `monitoring` namespace.
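A common way to do this is with the community `kube-prometheus-stack` chart. The commands below are one plausible installation, not the only option; note that the release name `prometheus` matters here, because it produces the `release: prometheus` label that the ServiceMonitor above relies on for discovery.
```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
# Installing under the release name 'prometheus' so that ServiceMonitors
# labeled 'release: prometheus' match the operator's default selector.
helm install prometheus prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace
```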