Expected output:
```
INFO: Automatically detected platform cuda.
...
INFO [launcher.py:34] Route: /v1/chat/completions, Methods: POST
...
INFO: Started server process [13]
INFO: Waiting for application startup.
INFO: Application startup complete.
Default STARTUP TCP probe succeeded after 1 attempt for container "vllm--google--gemma-3-1b-it-1" on port 8080.
...
```
4. Create the service:
```bash
# ClusterIP service on port 8080 in front of the vLLM deployment
# Make sure to use the same namespace as in the previous steps
kubectl apply -f vllm-service.yaml -n vllm-example
```
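The manifest referenced above is not reproduced here; a minimal `vllm-service.yaml` could look like the following sketch, assuming the deployment's pods carry the label `app: vllm` and the container listens on port 8080 (adjust both to match your actual deployment manifest):

```yaml
# vllm-service.yaml -- a minimal sketch, not the canonical manifest.
apiVersion: v1
kind: Service
metadata:
  name: vllm-service
spec:
  type: ClusterIP
  selector:
    app: vllm          # assumed pod label; must match vllm-deployment.yaml
  ports:
    - protocol: TCP
      port: 8080       # port exposed by the service
      targetPort: 8080 # container port vLLM listens on
```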
## Verification / Seeing it Work
1. Forward local requests to the vLLM service:
```bash
# Forward a local port (e.g., 8080) to the service port (e.g., 8080)
# Make sure to use the same namespace as in the previous steps
kubectl port-forward service/vllm-service 8080:8080 -n vllm-example
```
2. Send a request to the local forwarded port:
```bash
curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "google/gemma-3-1b-it",
        "messages": [{"role": "user", "content": "Explain Quantum Computing in simple terms." }],
        "max_tokens": 100
      }'
```
Expected output (or similar):
```json
{"id":"chatcmpl-462b3e153fd34e5ca7f5f02f3bcb6b0c","object":"chat.completion","created":1753164476,"model":"google/gemma-3-1b-it","choices":[{"index":0,"message":{"role":"assistant","reasoning_content":null,"content":"Okay, let’s break down quantum computing in a way that’s hopefully understandable without getting lost ...
```
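The same request can also be sent from Python using only the standard library. This is a rough sketch: the URL assumes the port-forward from step 1 is still running, and the response is parsed using the `choices[0].message.content` path visible in the expected output above.

```python
import json
import urllib.request

# Same request body as the curl example above.
payload = {
    "model": "google/gemma-3-1b-it",
    "messages": [
        {"role": "user", "content": "Explain Quantum Computing in simple terms."}
    ],
    "max_tokens": 100,
}


def chat(url="http://localhost:8080/v1/chat/completions"):
    """POST the payload and return the assistant's reply text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The assistant's text lives at choices[0].message.content,
    # as shown in the expected output above.
    return body["choices"][0]["message"]["content"]
```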
---
## Configuration Customization
- Update `MODEL_ID` in the deployment manifest to serve a different model (make sure your Hugging Face access token has permission to access that model).
- Change the number of vLLM pod replicas in the deployment manifest.
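For orientation, the relevant fields in `vllm-deployment.yaml` might look like the fragment below. This is a sketch only; the exact structure (container name, how `MODEL_ID` is wired into the vLLM container) depends on the full manifest from the earlier steps.

```yaml
# Fragment of vllm-deployment.yaml -- only the fields mentioned above.
spec:
  replicas: 1            # increase to run more vLLM pods
  template:
    spec:
      containers:
        - name: vllm     # assumed container name
          env:
            - name: MODEL_ID
              value: google/gemma-3-1b-it  # change to serve a different model
```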
---
## Platform-Specific Configuration
Node selectors ensure that vLLM pods are scheduled on nodes with the required GPU hardware; they are the main configuration difference among cloud providers. The following are example node selectors for three cloud providers.
- GKE
  This `nodeSelector` uses labels that are specific to Google Kubernetes Engine.
  - `cloud.google.com/gke-accelerator: nvidia-l4`: This label targets nodes equipped with a specific type of GPU, in this case the NVIDIA L4. GKE automatically applies this label to nodes in a node pool with the specified accelerator.
  - `cloud.google.com/gke-gpu-driver-version: default`: This label ensures the pod is scheduled on a node running the GKE-managed default NVIDIA driver, which GKE installs and maintains automatically.
  ```yaml
  nodeSelector:
    cloud.google.com/gke-accelerator: nvidia-l4
    cloud.google.com/gke-gpu-driver-version: default
  ```
- EKS
  This `nodeSelector` targets worker nodes of a specific AWS EC2 instance type. The label `node.kubernetes.io/instance-type` is applied automatically by Kubernetes on AWS. In this example, `p4d.24xlarge` is used, an EC2 instance type equipped with NVIDIA A100 GPUs, making it well suited to demanding AI workloads.
  ```yaml
  nodeSelector:
    node.kubernetes.io/instance-type: p4d.24xlarge
  ```
- AKS
  This example uses a custom label, `agentpiscasi.com/gpu: "true"`. This label is not applied automatically by AKS; it would typically be added by a cluster administrator to identify and target node pools that have GPUs attached.
  ```yaml
  nodeSelector:
    agentpiscasi.com/gpu: "true" # custom label added by the cluster administrator
  ```
---
## Cleanup
```bash
# Make sure to use the same namespace as in the previous steps
kubectl delete -f vllm-service.yaml -n vllm-example
kubectl delete -f vllm-deployment.yaml -n vllm-example
kubectl delete secret hf-secret -n vllm-example
kubectl delete namespace vllm-example
```
---