# RunPod vLLM Template Setup for Ultravox
## ✅ Use Pre-built vLLM (No Docker Building!)
This guide uses RunPod's **existing vLLM Docker image** - just configure and deploy.
---
## 🚀 Step-by-Step Setup (10 minutes)
### Step 1: Open RunPod Console
🔗 **Go to:** https://www.runpod.io/console/serverless
**Click:** "+ New Endpoint"
### Step 2: Select vLLM Template
**Search for:** "vLLM"
**Select:** "vLLM - Fast LLM Inference" (official RunPod template)
### Step 3: Configure Ultravox Model
**Endpoint Configuration:**
```
Name: ultravox-vllm
Container Image: runpod/worker-vllm:stable (pre-built!)
GPU Type: RTX 4090 (24GB VRAM)
Container Disk: 40 GB
```
**Environment Variables:**
Click "Add Environment Variable" and add these:
| Name | Value |
|------|-------|
| `MODEL_NAME` | `fixie-ai/ultravox-v0_2` |
| `HF_TOKEN` | `YOUR_HF_TOKEN_HERE` |
| `MAX_MODEL_LEN` | `4096` |
| `GPU_MEMORY_UTILIZATION` | `0.9` |
| `TRUST_REMOTE_CODE` | `true` |
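These variables correspond to standard vLLM engine arguments. A minimal sketch of that mapping, assuming the names in the table above (the helper itself is illustrative, not part of RunPod's worker):

```python
import os

# Illustrative helper: read the worker's environment variables (names as in
# the table above) and build the equivalent vLLM engine keyword arguments.
def engine_kwargs_from_env(env=None):
    env = os.environ if env is None else env
    return {
        "model": env.get("MODEL_NAME", "fixie-ai/ultravox-v0_2"),
        "max_model_len": int(env.get("MAX_MODEL_LEN", "4096")),
        "gpu_memory_utilization": float(env.get("GPU_MEMORY_UTILIZATION", "0.9")),
        "trust_remote_code": env.get("TRUST_REMOTE_CODE", "false").lower() == "true",
    }

print(engine_kwargs_from_env({"TRUST_REMOTE_CODE": "true"}))
```

This is why the troubleshooting section below suggests lowering `MAX_MODEL_LEN` and `GPU_MEMORY_UTILIZATION` on out-of-memory errors: they flow straight into the engine configuration.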
**Scaling:**
```
Min Workers: 0
Max Workers: 3
Scale Down Delay: 600 (10 minutes)
```
### Step 4: Deploy
**Click:** "Deploy"
**Wait:** ~3-5 minutes for deployment
**Status:** Should show "Running" with green indicator
**Copy:** The **Endpoint ID** (looks like: `abc123def456`)
---
## 🧪 Test Your Endpoint
### Quick Test in RunPod Console
1. Go to your endpoint
2. Click "Requests" tab
3. Click "Send Test Request"
4. Use this payload:
```json
{
  "input": {
    "prompt": "Hello! How are you today?",
    "max_tokens": 100,
    "temperature": 0.7
  }
}
```
5. Click "Run"
6. Wait ~8-15 seconds (cold start)
7. Should return text response
### Test from Command Line
```bash
export RUNPOD_ENDPOINT_ID="your-endpoint-id-here"
export RUNPOD_API_KEY="YOUR_RUNPOD_API_KEY_HERE"
python3 << 'EOF'
import os

import runpod

# Read credentials from the environment rather than hardcoding them
runpod.api_key = os.getenv("RUNPOD_API_KEY")
endpoint = runpod.Endpoint(os.getenv("RUNPOD_ENDPOINT_ID"))
result = endpoint.run_sync({
    "input": {
        "prompt": "What is artificial intelligence?",
        "max_tokens": 100
    }
}, timeout=60)
print("Response:", result)
EOF
```
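Under the hood, the SDK calls RunPod's serverless REST routes (`/run` for async jobs, `/runsync` for blocking calls). A hedged sketch that builds such a request without sending it, so you can inspect it or use any HTTP client; `build_run_request` is an illustrative helper, not part of the SDK:

```python
import json

# RunPod serverless REST base URL (per RunPod's public API docs)
API_BASE = "https://api.runpod.ai/v2"

def build_run_request(endpoint_id, api_key, prompt, max_tokens=100, sync=True):
    """Assemble URL, headers, and JSON body for a serverless request."""
    route = "runsync" if sync else "run"
    url = f"{API_BASE}/{endpoint_id}/{route}"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"input": {"prompt": prompt, "max_tokens": max_tokens}})
    return url, headers, body

url, headers, body = build_run_request("abc123def456", "KEY", "Hello!")
print(url)  # https://api.runpod.ai/v2/abc123def456/runsync
```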
---
## ⚙️ Configure Our Service
Once you have the **Endpoint ID**, update the config:
```bash
# SSH to server
ssh -p 33337 root@136.59.129.136
# Edit config
nano /workspace/ultravox-pipeline/config/runpod.yaml
```
**Add:**
```yaml
endpoints:
  ultravox:
    endpoint_id: "YOUR_ENDPOINT_ID_HERE"  # Paste it here
    gpu: "RTX_4090"
    min_workers: 0
    max_workers: 3
```
**Save and exit:** `Ctrl+X`, `Y`, `Enter`
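A common mistake is leaving the placeholder in place. A minimal sanity-check sketch, assuming the service loads the YAML above into a nested dict (the function name is illustrative, not from the codebase):

```python
# Illustrative helper: reject obviously-broken config values before
# starting the service.
def validate_ultravox_config(cfg):
    ep = cfg["endpoints"]["ultravox"]
    if ep["endpoint_id"] in ("", "YOUR_ENDPOINT_ID_HERE"):
        raise ValueError("endpoint_id is still the placeholder - paste your real ID")
    if not 0 <= ep["min_workers"] <= ep["max_workers"]:
        raise ValueError("min_workers must be between 0 and max_workers")
    return ep

cfg = {"endpoints": {"ultravox": {"endpoint_id": "abc123def456",
                                  "gpu": "RTX_4090",
                                  "min_workers": 0,
                                  "max_workers": 3}}}
print(validate_ultravox_config(cfg)["gpu"])  # RTX_4090
```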
---
## 🎯 Test from Service
```bash
cd /workspace/ultravox-pipeline/src/services/runpod_llm
# Set environment
export RUNPOD_API_KEY="YOUR_RUNPOD_API_KEY_HERE"
export RUNPOD_ENDPOINT_ID="your-endpoint-id"
# Run service
python3 service.py &
# Test
curl -X POST http://localhost:8105/runpod/inference \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ultravox",
    "input": {
      "text": "Hello, world!"
    },
    "parameters": {
      "max_tokens": 50
    }
  }'
```
---
## 📊 Expected Performance
### Cold Start (First Request)
- **Time:** 8-15 seconds
- **Why:** Downloading model from HF → Loading into VRAM
- **Frequency:** Once per idle period (after scale-down)
### Warm Inference
- **Time:** 0.3-0.8 seconds
- **Throughput:** ~30-50 tokens/second
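A back-of-envelope check on those numbers: generation time for a request is roughly tokens requested divided by throughput, ignoring network and queueing overhead.

```python
# Rough latency estimate: N tokens at the quoted throughput range.
def generation_time_s(tokens, tokens_per_s):
    return tokens / tokens_per_s

print(round(generation_time_s(100, 50), 1))  # 2.0 -> best case for 100 tokens
print(round(generation_time_s(100, 30), 1))  # 3.3 -> worst case
```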
### Costs
- **Price:** $0.34/hour when active
- **Testing:** ~$0.01 for 1 hour of testing
- **Production (2hr/day):** ~$20/month
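The monthly figure follows from simple arithmetic on the hourly rate, since with `Min Workers: 0` you only pay while a worker is active:

```python
HOURLY_RATE_USD = 0.34  # price while a worker is active

def monthly_cost_usd(active_hours_per_day, days=30, rate=HOURLY_RATE_USD):
    return active_hours_per_day * days * rate

print(round(monthly_cost_usd(2), 2))  # 20.4 -> the ~$20/month production estimate
```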
---
## 🔧 Troubleshooting
### Model not loading
- Check `HF_TOKEN` is set correctly
- Verify model name: `fixie-ai/ultravox-v0_2`
- Check logs in RunPod console
### Out of memory
- Reduce `MAX_MODEL_LEN` to `2048`
- Set `GPU_MEMORY_UTILIZATION` to `0.8`
### Slow cold starts
- Pre-download model (advanced)
- Use network storage (costs extra)
### Connection timeout
- Increase timeout to 120 seconds
- Check endpoint is running (green status)
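Because the first request after scale-down can time out during the cold start, a retry with backoff is often enough. A generic sketch (the callable, delays, and exception type are illustrative; adapt to however your client signals a timeout):

```python
import time

# Illustrative retry wrapper: retry a callable on timeout, doubling the
# delay between attempts.
def call_with_retries(fn, attempts=3, base_delay_s=2.0):
    for attempt in range(attempts):
        try:
            return fn()
        except TimeoutError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay_s * 2 ** attempt)

# Demo with a stub that fails once (simulating a cold start), then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 2:
        raise TimeoutError("cold start")
    return "ok"

print(call_with_retries(flaky, base_delay_s=0.0))  # ok
```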
---
## ⚠️ Important Notes
**About vLLM + Ultravox:**
- vLLM is primarily for **text-only** models
- Ultravox is **multimodal** (audio + text)
- vLLM will work for **text input only**
- For **audio input**, you need a custom handler (custom Docker image)
**What works with vLLM:**
- ✅ Text → Text (LLM inference)
- ❌ Audio → Text (needs custom handler)
- ❌ Text → Audio (needs TTS integration)
**For full audio support:**
- Use the custom Docker image (build.sh)
- Or process audio client-side (convert to text first)
---
## πŸ“ Summary
1. βœ… No Docker building needed
2. βœ… Use RunPod's vLLM template
3. βœ… Set model name to `fixie-ai/ultravox-v0_2`
4. βœ… Add HF token for access
5. ⚠️ Text-only mode (no audio input/output)
For **full speech-to-speech**, build custom Docker image.
For **text-only testing**, vLLM is perfect!
---
**Ready?** Follow Steps 1-4 above, then paste your Endpoint ID into `config/runpod.yaml`.