```bash
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .
```
The output should be a list of available custom metrics. Look for the `pods/gpu_utilization_percent` metric, which confirms that the entire pipeline is working correctly and the metric is ready for the HPA to consume.
```json
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "custom.metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "pods/gpu_utilization_percent",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    }
  ]
}
```
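You can also query the metric's current values directly through the custom metrics API. The command below assumes the vLLM pods run in the `vllm-example` namespace used throughout this guide:
```bash
# Fetch the current gpu_utilization_percent value reported for each pod
# in the vllm-example namespace.
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/vllm-example/pods/*/gpu_utilization_percent" | jq .
```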
### 3.3. Deploy the Horizontal Pod Autoscaler (HPA)
The HPA is configured to use the final, clean metric name, `gpu_utilization_percent`, to maintain an average GPU utilization of 20%.
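The manifest itself is not reproduced in this walkthrough; a minimal sketch of such an HPA is shown below. It assumes the target is the `vllm-gemma-deployment` Deployment referenced later in this guide, and the replica bounds are illustrative rather than taken from the shipped file:
```yaml
# Minimal sketch of gpu-horizontal-pod-autoscaler.yaml. The scaleTargetRef
# name matches the deployment used later in this guide; minReplicas and
# maxReplicas are assumed values for illustration.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: gemma-server-gpu-hpa
  namespace: vllm-example
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: vllm-gemma-deployment
  minReplicas: 1
  maxReplicas: 4  # assumed upper bound
  metrics:
    - type: Pods
      pods:
        metric:
          name: gpu_utilization_percent
        target:
          type: AverageValue
          averageValue: "20"
```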
```bash
kubectl apply -f ./gpu-horizontal-pod-autoscaler.yaml -n vllm-example
```
Inspect the HPA's configuration to confirm it is targeting the correct metric.
```bash
kubectl describe hpa/gemma-server-gpu-hpa -n vllm-example
# Expected output should include:
#   Metrics:  ( current / target )
#     "gpu_utilization_percent" on pods:  <current value> / 20
```
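Before generating any load, you can also watch the HPA's reported value update in place; once the metrics adapter is serving data, the `<unknown>` placeholder in the `TARGETS` column should be replaced by a live number:
```bash
# Watch the HPA's current metric value and replica count refresh in place.
kubectl get hpa gemma-server-gpu-hpa -n vllm-example -w
```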
---
## 4. Load Test the Autoscaling Setup
Generate a sustained load on the vLLM server to cause GPU utilization to rise.
### 4.1. Generate Inference Load
First, establish a port-forward to the vLLM service.
```bash
kubectl port-forward service/vllm-service -n vllm-example 8081:8081
```
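With the port-forward in place, a single request confirms the server responds before you start the sustained load. This sketch assumes the Deployment exposes vLLM's OpenAI-compatible API and that the served model is named `google/gemma-2b`; substitute the model ID your deployment actually serves:
```bash
# One-off smoke test against the forwarded port. The model name is an
# assumption; adjust it to match your deployment.
curl -s http://localhost:8081/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "google/gemma-2b", "prompt": "Hello", "max_tokens": 16}' | jq .
```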
In another terminal, execute the `request-looper.sh` script.
```bash
./request-looper.sh
```
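The script's contents are not shown in this guide; a plausible minimal sketch is a loop that issues requests like the smoke test above until interrupted, keeping the GPU busy:
```bash
#!/usr/bin/env bash
# Hypothetical sketch of request-looper.sh: send inference requests in a
# tight loop so GPU utilization stays above the HPA target. The model name
# is an assumption; adjust it to match your deployment.
while true; do
  curl -s http://localhost:8081/v1/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "google/gemma-2b", "prompt": "Write a haiku about autoscaling.", "max_tokens": 256}' \
    > /dev/null
done
```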
### 4.2. Observe the HPA Scaling the Deployment
While the load script is running, monitor the HPA's behavior.
```bash
# See the HPA's metric values and scaling events
kubectl describe hpa/gemma-server-gpu-hpa -n vllm-example

# Watch the number of deployment replicas increase
kubectl get deploy/vllm-gemma-deployment -n vllm-example -w
```
As the average GPU utilization exceeds the 20% target, the HPA will scale up the deployment.
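The scale-up factor follows the standard HPA algorithm, `desiredReplicas = ceil(currentReplicas * currentMetricValue / targetMetricValue)`. For example, if a single replica reports an average GPU utilization of 60% against the 20% target, the HPA computes `ceil(1 * 60 / 20) = 3` replicas.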
---
## 5. Cleanup
To tear down the resources from this exercise, run the following command:
```bash
kubectl delete -f . -n vllm-example
```
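Note that the DCGM exporter Service (reproduced below) lives in the `gke-managed-system` namespace, so the command above may reject that manifest due to the namespace mismatch. If so, delete it explicitly:
```bash
# The exporter Service is namespaced to gke-managed-system, not vllm-example.
kubectl delete -f ./gpu-dcgm-exporter-service-gke.yaml
```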
# source: k8s_examples/AI/vllm-deployment/hpa/gpu-dcgm-exporter-service-gke.yaml
# This Service provides a stable network endpoint for the NVIDIA DCGM Exporter
# pods. The Prometheus Operator's ServiceMonitor will target this Service
# to discover and scrape the GPU metrics. This is especially important
# because the exporter pods are part of a DaemonSet, and their IPs can change.
#
# NOTE: This configuration is specific to GKE, which automatically deploys the
# DCGM exporter in the 'gke-managed-system' namespace. For other cloud
# providers or on-premise clusters, you would need to deploy your own DCGM
# exporter (e.g., via a Helm chart) and update this Service's 'namespace'
# and 'labels' to match your deployment.
apiVersion: v1
kind: Service
metadata:
  name: gke-managed-dcgm-exporter
  # GKE-SPECIFIC: GKE deploys its managed DCGM exporter in this namespace.
  # On other platforms, this would be the namespace where you deploy the exporter.
  namespace: gke-managed-system
  labels: