> affect vLLM server metric scaling.
Deploy the adapter:
```bash
kubectl apply -f ./prometheus-adapter.yaml
```
Verify that the adapter's pod is running in the `monitoring` namespace.
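For reference, the adapter's renaming rule likely resembles the sketch below. The raw vLLM metric name (`vllm:num_requests_running`) and its label set are assumptions here; match them against your server's `/metrics` output before using this.

```yaml
rules:
  # Match the raw vLLM gauge scraped by Prometheus (series name is an assumption).
  - seriesQuery: 'vllm:num_requests_running{namespace!="",pod!=""}'
    resources:
      # Map Prometheus labels back to Kubernetes objects.
      overrides:
        namespace: {resource: "namespace"}
        pod: {resource: "pod"}
    # Expose the series under the clean name the HPA will reference.
    name:
      matches: "vllm:num_requests_running"
      as: "vllm_num_requests_running"
    # Average the gauge so the HPA sees a per-pod value.
    metricsQuery: 'avg(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)'
```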
### 3.2. Verify the Custom Metrics API
After deploying the adapter, verify that it is successfully exposing the transformed metrics to the Kubernetes API. You can do this by querying the custom metrics API directly.
```bash
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .
```
The output should be a list of available custom metrics. Look for the `pods/vllm_num_requests_running` metric, which confirms that the metric is ready for the HPA to consume.
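If the discovery list is long, you can filter it down to just the metric names with `jq`. The snippet below runs against a saved copy of the response so it can be tried offline; the sample JSON is a hypothetical, trimmed version of what the adapter returns.

```shell
# Save a trimmed, hypothetical copy of the discovery response for illustration.
cat <<'EOF' > /tmp/custom-metrics.json
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "custom.metrics.k8s.io/v1beta1",
  "resources": [
    {"name": "pods/vllm_num_requests_running", "namespaced": true}
  ]
}
EOF

# Print just the metric names from the saved response.
jq -r '.resources[].name' /tmp/custom-metrics.json
```

Against a live cluster, pipe the raw API response instead: `kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq -r '.resources[].name'`.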
### 3.3. Deploy the Horizontal Pod Autoscaler (HPA)
The HPA resource defines the scaling behavior. The manifest below is configured to use the clean metric name, `vllm_num_requests_running`, exposed by the Prometheus Adapter. It will scale the `vllm-gemma-deployment` up or down to maintain an average of 4 concurrent requests per pod.
```bash
kubectl apply -f ./horizontal-pod-autoscaler.yaml -n vllm-example
```
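For reference, the manifest likely resembles this sketch. The object names and bounds come from the values described in this guide; the exact field layout is an assumption based on the `autoscaling/v2` API.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: gemma-server-hpa
  namespace: vllm-example
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: vllm-gemma-deployment
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Pods
      pods:
        metric:
          name: vllm_num_requests_running
        target:
          # Scale to keep an average of 4 running requests per pod.
          type: AverageValue
          averageValue: "4"
```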
You can inspect the HPA's configuration and status with the `describe` command. Note that the `Metrics` section now shows our clean metric name.
```bash
kubectl describe hpa/gemma-server-hpa -n vllm-example
# Expected output should include:
#   Name:          gemma-server-hpa
#   Namespace:     vllm-example
#   ...
#   Metrics:       ( current / target )
#     "vllm_num_requests_running" on pods:  <current value> / 4
#   Min replicas:  1
#   Max replicas:  5
```
---
## 4. Load Test the Autoscaling Setup
To observe the HPA in action, you need to generate a sustained load on the vLLM server, causing the `vllm_num_requests_running` metric to rise above the target value.
### 4.1. Generate Inference Load
First, re-establish the port-forward to the vLLM service.
```bash
kubectl port-forward service/vllm-service -n vllm-example 8081:8081
```
In another terminal, execute the `request-looper.sh` script. This will send a continuous stream of inference requests to the server.
```bash
./request-looper.sh
```
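The script's contents are not shown here; a minimal sketch of what such a looper might look like follows. The endpoint path, model name, and payload shape are all assumptions, so adjust them to your deployment.

```shell
# Hypothetical sketch of request-looper.sh.
SERVER="http://localhost:8081"   # matches the port-forward above
MODEL="google/gemma-2b"          # assumption: your served model name

# Build one completion payload; jq guarantees valid JSON quoting.
build_payload() {
  jq -n --arg model "$MODEL" \
    '{model: $model, prompt: "Tell me a short story.", max_tokens: 128}'
}

build_payload
# The real looper would repeat requests, e.g.:
#   while true; do
#     curl -s "$SERVER/v1/completions" -H 'Content-Type: application/json' \
#       -d "$(build_payload)" > /dev/null
#   done
```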
### 4.2. Observe the HPA Scaling the Deployment
While the load script is running, you can monitor the HPA's behavior and the deployment's replica count in real time.
```bash
# See the HPA's metric values and scaling events
kubectl describe hpa/gemma-server-hpa -n vllm-example
# Watch the number of deployment replicas increase
kubectl get deploy/vllm-gemma-deployment -n vllm-example -w
```
As the average number of running requests per pod exceeds the target of 4, the HPA will begin to scale up the deployment, and you will see new pods being created.
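The scale-up follows the standard HPA rule, desiredReplicas = ceil(currentReplicas × currentMetric / target). A quick arithmetic check of that formula with illustrative numbers:

```shell
# HPA scaling rule: desired = ceil(current_replicas * metric_value / target)
# Example: 2 pods each averaging 10 running requests against a target of 4.
current_replicas=2
metric_value=10
target=4
# Integer ceiling division: (a + b - 1) / b
desired=$(( (current_replicas * metric_value + target - 1) / target ))
echo "desired replicas: $desired"
```

With these numbers the HPA would ask for 5 replicas, which it then clamps to the configured `maxReplicas` of 5.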
---
## 5. Cleanup
To tear down the resources created during this exercise, use `kubectl delete` with the `-f` flag, which deletes all resources defined in the manifests in the current directory.
```bash
kubectl delete -f . -n vllm-example
```
# source: k8s_examples/_archived/simple-nginx.md type: docs
## Running your first containers in Kubernetes
Ok, you've run one of the [getting started guides](https://kubernetes.io/docs/user-journeys/users/application-developer/foundational/#section-1) and you have successfully turned up a Kubernetes cluster. Now what? This guide will help you get oriented to Kubernetes and running your first containers on the cluster.
### Running a container (simple version)
From this point onwards, it is assumed that `kubectl` is on your path from one of the getting started guides.
The [`kubectl create`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create) line below will create a [deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) named `my-nginx` to ensure that there are always a [nginx](https://hub.docker.com/_/nginx/) [pod](https://ku...
```bash
kubectl create deployment my-nginx --image=nginx
```
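Under the hood, that command generates a Deployment roughly equivalent to the manifest below. This is a sketch: the generated `app: my-nginx` labels and the container name derived from the image follow `kubectl create deployment` conventions.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: my-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
        - name: nginx
          image: nginx
```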
You can list the pods to see what is up and running:
```bash
kubectl get pods
```