## 🛠️ Configuration Customization
- Update `model_name` and `model_base_path` in the deployment
- Replace the `hostPath` volume with a `PersistentVolumeClaim` bound to cloud storage
- Adjust resource requests/limits for the TensorFlow Serving container
---
## 🧹 Cleanup
```bash
kubectl delete -f https://raw.githubusercontent.com/kubernetes/examples/refs/heads/master/ai/model-serving-tensorflow/ingress.yaml # Optional
kubectl delete -f https://raw.githubusercontent.com/kubernetes/examples/refs/heads/master/ai/model-serving-tensorflow/service.yaml
kubectl delete -f https://raw.githubusercontent.com/kubernetes/examples/refs/heads/master/ai/model-serving-tensorflow/deployment.yaml
kubectl delete -f https://raw.githubusercontent.com/kubernetes/examples/refs/heads/master/ai/model-serving-tensorflow/pvc.yaml
kubectl delete -f https://raw.githubusercontent.com/kubernetes/examples/refs/heads/master/ai/model-serving-tensorflow/pv.yaml
```
---
## Further Reading / Next Steps
- [TensorFlow Serving](https://www.tensorflow.org/tfx/serving)
- [TF Serving REST API Reference](https://www.tensorflow.org/tfx/serving/api_rest)
- [Kubernetes Ingress Controllers](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/)
- [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)
```yaml
# source: k8s_examples/AI/vllm-deployment/vllm-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: vllm-service
spec:
  selector:
    app: gemma-server
  type: ClusterIP
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
```
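The Service exposes vLLM's OpenAI-compatible API inside the cluster on port 8080. As a sketch of what a client would POST to `/v1/chat/completions` (the model name mirrors the deployment's `MODEL_ID`; reaching the ClusterIP from outside the cluster is assumed to go through something like `kubectl port-forward service/vllm-service 8080:8080`):

```python
import json

# Request body for vLLM's OpenAI-compatible /v1/chat/completions endpoint.
# The "model" field must match the --model flag the server was started with.
payload = {
    "model": "google/gemma-3-1b-it",
    "messages": [
        {"role": "user", "content": "Why is the sky blue?"}
    ],
    "max_tokens": 128,
    "temperature": 0.7,
}

body = json.dumps(payload)
print(body)
```

Sending this with any HTTP client (`curl`, `requests`, or the official OpenAI SDK pointed at the service URL) returns a standard chat-completions response.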
```yaml
# source: k8s_examples/AI/vllm-deployment/vllm-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vllm-gemma-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gemma-server
  template:
    metadata:
      labels:
        app: gemma-server
        # Labels for better functionality within GKE.
        # ai.gke.io/model: gemma-3-1b-it
        # ai.gke.io/inference-server: vllm
        # examples.ai.gke.io/source: user-guide
    spec:
      containers:
      - name: inference-server
        image: vllm/vllm-openai:v0.11.0
        resources:
          requests:
            cpu: "2"
            memory: "10Gi"
            ephemeral-storage: "10Gi"
            nvidia.com/gpu: "1"
          limits:
            cpu: "2"
            memory: "10Gi"
            ephemeral-storage: "10Gi"
            nvidia.com/gpu: "1"
        command: ["python3", "-m", "vllm.entrypoints.openai.api_server"]
        args:
        - --model=$(MODEL_ID)
        - --tensor-parallel-size=1
        - --host=0.0.0.0
        - --port=8080
        # --- ADD THESE LINES TO FIX POSSIBLE OOM ERRORS ---
        - --gpu-memory-utilization=0.85
        - --max-num-seqs=64
        env:
        # 1 billion parameter model (smallest Gemma model)
        - name: MODEL_ID
          value: google/gemma-3-1b-it
        # Necessary for vLLM images >= 0.8.5.
        # Ref - https://github.com/vllm-project/vllm/issues/18859
        - name: LD_LIBRARY_PATH
          value: "/usr/local/nvidia/lib64:/usr/local/cuda/lib64"
        - name: HUGGING_FACE_HUB_TOKEN
          valueFrom:
            secretKeyRef:
              name: hf-secret
```
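The `--gpu-memory-utilization=0.85` flag caps how much of the GPU vLLM will claim for model weights plus its KV-cache pool; the rest is headroom against OOM. A rough back-of-the-envelope sketch of how that budget splits (the GPU size and model footprint below are illustrative assumptions, e.g. a 24 GiB card and roughly 2 GiB of bf16 weights for a 1B-parameter model, not figures from this example):

```python
# Rough memory budget under --gpu-memory-utilization=0.85.
# All sizes in GiB; the GPU size and model footprint are assumptions.
gpu_memory = 24.0      # e.g. a 24 GiB GPU
utilization = 0.85     # --gpu-memory-utilization
model_weights = 2.0    # ~1B params in bf16, roughly 2 bytes/param

budget = gpu_memory * utilization   # total memory vLLM is allowed to use
kv_cache = budget - model_weights   # remainder feeds the KV-cache pool
headroom = gpu_memory - budget      # slack for the CUDA context, etc.

print(f"vLLM budget:   {budget:.1f} GiB")
print(f"KV cache pool: {kv_cache:.1f} GiB")
print(f"headroom:      {headroom:.1f} GiB")
```

Lowering `--max-num-seqs` works on the other side of the same equation: fewer concurrent sequences means less KV-cache demand, so the pool above is less likely to be exhausted.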