```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts/
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack --namespace monitoring --create-namespace
```
You can verify the installation by listing the pods in the `monitoring` namespace.
```bash
kubectl get pods --namespace monitoring
```
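All of the stack's pods should eventually reach the `Running` state. If you prefer to block until everything is ready, `kubectl wait` can do that (the five-minute timeout below is an arbitrary choice):
```bash
# Block until every pod in the namespace reports Ready (timeout is arbitrary)
kubectl wait --for=condition=Ready pods --all -n monitoring --timeout=300s
```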
### 2.2. Create a Service and ServiceMonitor for the DCGM Exporter
A `ServiceMonitor` tells Prometheus what to scrape, but it discovers its targets through a Kubernetes Service, which provides a stable network endpoint in front of the DCGM exporter pods. Apply the manifests that match your environment.
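For orientation, the Service/ServiceMonitor pair follows the general shape sketched below. Every name, label, and namespace in this sketch is illustrative; the real values come from the environment-specific manifests applied in the subsections that follow.
```yaml
# Illustrative sketch only -- names, labels, and namespace are assumptions;
# apply the environment-specific manifests from this guide instead.
apiVersion: v1
kind: Service
metadata:
  name: gpu-dcgm-exporter-service
  namespace: monitoring
  labels:
    app: gpu-dcgm-exporter        # the ServiceMonitor selects on this label
spec:
  selector:
    app: dcgm-exporter            # must match the DCGM exporter pod labels
  ports:
    - name: metrics
      port: 9400                  # the DCGM exporter's default metrics port
      targetPort: 9400
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: gpu-dcgm-exporter
  namespace: monitoring
  labels:
    release: prometheus           # assumed: kube-prometheus-stack discovers monitors by release label
spec:
  selector:
    matchLabels:
      app: gpu-dcgm-exporter
  endpoints:
    - port: metrics
      interval: 30s
```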
#### For GKE Users
GKE's managed DCGM exporter runs in the `gke-managed-system` namespace. Apply the GKE-specific Service and ServiceMonitor:
```bash
# Apply the GKE-specific Service
kubectl apply -f ./gpu-dcgm-exporter-service-gke.yaml
# Apply the GKE-specific ServiceMonitor
kubectl apply -f ./gpu-service-monitor-gke.yaml
```
**Verification (GKE):**
Verify that the service has been created successfully in the correct namespace:
```bash
kubectl get svc -n gke-managed-system | grep gke-managed-dcgm-exporter
```
#### For EKS, AKS, and Other Users (Manual Installation)
If you installed the DCGM exporter manually using the Helm chart, it runs in the `monitoring` namespace. Apply the generic Service and ServiceMonitor:
```bash
# Apply the generic Service
kubectl apply -f ./gpu-dcgm-exporter-service-generic.yaml
# Apply the generic ServiceMonitor
kubectl apply -f ./gpu-service-monitor-generic.yaml
```
**Verification (Generic):**
Verify that the service has been created successfully in the correct namespace:
```bash
kubectl get svc -n monitoring | grep gpu-dcgm-exporter-service
```
### 2.3. Create a Prometheus Rule for Metric Relabeling
This is a critical step. The raw `DCGM_FI_DEV_GPU_UTIL` metric does not carry the standard `pod` and `namespace` labels the HPA needs. This `PrometheusRule` creates a *new*, correctly labelled metric named `dcgm_fi_dev_gpu_util_relabelled` that the Prometheus Adapter can use.
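As a rough sketch, such a rule can be expressed as a recording rule built on `label_replace`. The version below assumes the raw metric carries the workload's identity in `exported_pod` and `exported_namespace` labels; check the labels on your actual series in Prometheus before trusting these names, and prefer the `prometheus-rule.yaml` shipped with this guide.
```yaml
# Sketch only -- the exported_* source labels and the release label are assumptions.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: dcgm-gpu-util-relabel
  namespace: monitoring
  labels:
    release: prometheus   # assumed: the kube-prometheus-stack Prometheus selects rules by this label
spec:
  groups:
    - name: dcgm-relabel.rules
      rules:
        - record: dcgm_fi_dev_gpu_util_relabelled
          # Copy the assumed exported_* labels into the standard pod/namespace labels
          expr: |
            label_replace(
              label_replace(DCGM_FI_DEV_GPU_UTIL, "pod", "$1", "exported_pod", "(.+)"),
              "namespace", "$1", "exported_namespace", "(.+)"
            )
```
Apply the rule: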
```bash
kubectl apply -f ./prometheus-rule.yaml
```
### 2.4. Verify Metric Collection and Relabeling in Prometheus
To ensure the entire pipeline is working, you must verify that the *new*, relabelled metric exists. First, establish a port-forward to the Prometheus service.
```bash
kubectl port-forward svc/prometheus-kube-prometheus-prometheus 9090:9090 -n monitoring
```
In a separate terminal, use `curl` to query for the new metric.
```bash
# Query Prometheus for the new, relabelled metric
curl -sS "http://localhost:9090/api/v1/query?query=dcgm_fi_dev_gpu_util_relabelled" | jq
```
A successful verification will show the metric in the `result` array, complete with the correct `pod` and `namespace` labels.
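For reference, a successful response has roughly this shape; the pod name, namespace, timestamp, and value below are placeholders:
```json
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": [
      {
        "metric": {
          "__name__": "dcgm_fi_dev_gpu_util_relabelled",
          "namespace": "default",
          "pod": "my-gpu-workload-0"
        },
        "value": [1700000000, "87"]
      }
    ]
  }
}
```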
---
## 3. Configure the Horizontal Pod Autoscaler
Now that a clean, usable metric is available in Prometheus, you can configure the HPA.
### 3.1. Deploy the Prometheus Adapter
The Prometheus Adapter bridges Prometheus and the Kubernetes custom metrics API. It is configured to read the `dcgm_fi_dev_gpu_util_relabelled` metric and expose it as `gpu_utilization_percent`.
> **Note on the Shared Adapter:** The `prometheus-adapter.yaml` manifest is
> configured to handle metrics for both GPU utilization (`gpu_utilization_percent`)
> and the vLLM server (`vllm_num_requests_running`). This allows a single adapter
> to be used for either scaling strategy. The presence of the vLLM server metric
> rule in the configuration is expected and does not affect GPU-based scaling.
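For context, the GPU rule inside the adapter's configuration follows the standard Prometheus Adapter rule format and looks roughly like the sketch below; the exact query in `prometheus-adapter.yaml` may differ.
```yaml
# Sketch of a Prometheus Adapter rule -- the metricsQuery aggregation is an assumption.
rules:
  - seriesQuery: 'dcgm_fi_dev_gpu_util_relabelled{namespace!="",pod!=""}'
    resources:
      overrides:
        namespace: {resource: "namespace"}
        pod: {resource: "pod"}
    name:
      matches: "dcgm_fi_dev_gpu_util_relabelled"
      as: "gpu_utilization_percent"
    metricsQuery: 'avg(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)'
```
Apply the manifest: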
```bash
kubectl apply -f ./prometheus-adapter.yaml
```
Verify that the adapter's pod is running in the `monitoring` namespace.
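For example, assuming the adapter's pod name contains `prometheus-adapter`:
```bash
# The exact pod name depends on the manifest; the grep pattern is an assumption
kubectl get pods -n monitoring | grep prometheus-adapter
```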
### 3.2. Verify the Custom Metrics API
After deploying the adapter, verify that it is exposing the transformed metrics through the Kubernetes custom metrics API. You can do this by querying the API directly.
```bash