# RunPod vLLM Template Setup for Ultravox
## ✅ Use Pre-built vLLM (No Docker Building!)
This guide uses RunPod's existing vLLM Docker image - just configure and deploy.
## Step-by-Step Setup (10 minutes)
### Step 1: Open RunPod Console

1. **Go to:** https://www.runpod.io/console/serverless
2. **Click:** "+ New Endpoint"
### Step 2: Select vLLM Template

1. **Search for:** "vLLM"
2. **Select:** "vLLM - Fast LLM Inference" (the official RunPod template)
### Step 3: Configure Ultravox Model

**Endpoint Configuration:**

- **Name:** ultravox-vllm
- **Container Image:** `runpod/worker-vllm:stable` (pre-built!)
- **GPU Type:** RTX 4090 (24 GB VRAM)
- **Container Disk:** 40 GB

**Environment Variables:**

Click "Add Environment Variable" and add these:
| Name | Value |
|---|---|
| `MODEL_NAME` | `fixie-ai/ultravox-v0_2` |
| `HF_TOKEN` | `YOUR_HF_TOKEN_HERE` |
| `MAX_MODEL_LEN` | `4096` |
| `GPU_MEMORY_UTILIZATION` | `0.9` |
| `TRUST_REMOTE_CODE` | `true` |
**Scaling:**

- Min Workers: 0
- Max Workers: 3
- Scale Down Delay: 600 seconds (10 minutes)
### Step 4: Deploy

1. **Click:** "Deploy"
2. **Wait:** ~3-5 minutes for deployment
3. **Status:** should show "Running" with a green indicator
4. **Copy:** the Endpoint ID (looks like `abc123def456`)
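Once deployed, you can poll the endpoint's health before sending real traffic. A minimal sketch using only the standard library; the `/health` route and response shape are based on RunPod's serverless REST API, so verify against the current docs:

```python
import json
import urllib.request


def health_url(endpoint_id: str) -> str:
    """Build the serverless health-check URL for an endpoint ID."""
    return f"https://api.runpod.ai/v2/{endpoint_id}/health"


def check_health(endpoint_id: str, api_key: str) -> dict:
    """GET the health route and return the parsed JSON body."""
    req = urllib.request.Request(
        health_url(endpoint_id),
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)


# Usage:
# print(check_health("abc123def456", "YOUR_RUNPOD_API_KEY_HERE"))
```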
## 🧪 Test Your Endpoint

### Quick Test in RunPod Console
1. Go to your endpoint
2. Click the "Requests" tab
3. Click "Send Test Request"
4. Use this payload:

   ```json
   {
     "input": {
       "prompt": "Hello! How are you today?",
       "max_tokens": 100,
       "temperature": 0.7
     }
   }
   ```

5. Click "Run"
6. Wait ~8-12 seconds (cold start)
7. You should get a text response back
### Test from Command Line

```bash
export RUNPOD_ENDPOINT_ID="your-endpoint-id-here"

python3 << 'EOF'
import os

import runpod

runpod.api_key = "YOUR_RUNPOD_API_KEY_HERE"
endpoint = runpod.Endpoint(os.getenv("RUNPOD_ENDPOINT_ID"))

result = endpoint.run_sync({
    "input": {
        "prompt": "What is artificial intelligence?",
        "max_tokens": 100
    }
}, timeout=60)

print("Response:", result)
EOF
```
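If you prefer not to install the `runpod` SDK, the same test works over plain HTTP against the synchronous `runsync` route. A sketch using only the standard library; the URL shape comes from RunPod's serverless REST API, so check the current docs if it fails:

```python
import json
import urllib.request


def build_payload(prompt: str, max_tokens: int = 100,
                  temperature: float = 0.7) -> dict:
    """Request body matching the vLLM worker's expected input."""
    return {"input": {"prompt": prompt,
                      "max_tokens": max_tokens,
                      "temperature": temperature}}


def run_sync(endpoint_id: str, api_key: str, payload: dict) -> dict:
    """POST to the runsync route and return the parsed response."""
    req = urllib.request.Request(
        f"https://api.runpod.ai/v2/{endpoint_id}/runsync",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.load(resp)


# Usage:
# result = run_sync("your-endpoint-id", "YOUR_RUNPOD_API_KEY_HERE",
#                   build_payload("What is artificial intelligence?"))
```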
## ⚙️ Configure Our Service
Once you have the Endpoint ID, update the config:
```bash
# SSH to the server
ssh -p 33337 root@136.59.129.136

# Edit the config
nano /workspace/ultravox-pipeline/config/runpod.yaml
```
Add:

```yaml
endpoints:
  ultravox:
    endpoint_id: "YOUR_ENDPOINT_ID_HERE"  # Paste it here
    gpu: "RTX_4090"
    min_workers: 0
    max_workers: 3
```
**Save and exit:** Ctrl+X, Y, Enter
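After editing, you can sanity-check the YAML before restarting anything. A minimal sketch assuming PyYAML is installed and the `endpoints.ultravox` layout shown above; the required-key list is our convention, not RunPod's:

```python
import yaml

# Keys the service expects under endpoints.ultravox (our schema).
REQUIRED_KEYS = {"endpoint_id", "gpu", "min_workers", "max_workers"}


def validate_config(path: str) -> dict:
    """Load runpod.yaml and check the ultravox endpoint block is complete."""
    with open(path) as f:
        cfg = yaml.safe_load(f)
    ultravox = cfg["endpoints"]["ultravox"]
    missing = REQUIRED_KEYS - ultravox.keys()
    if missing:
        raise ValueError(f"runpod.yaml missing keys: {sorted(missing)}")
    if "YOUR_ENDPOINT_ID" in ultravox["endpoint_id"]:
        raise ValueError("endpoint_id placeholder was never replaced")
    return ultravox


# Usage:
# print(validate_config("/workspace/ultravox-pipeline/config/runpod.yaml"))
```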
## 🎯 Test from Service
```bash
cd /workspace/ultravox-pipeline/src/services/runpod_llm

# Set environment
export RUNPOD_API_KEY="YOUR_RUNPOD_API_KEY_HERE"
export RUNPOD_ENDPOINT_ID="your-endpoint-id"

# Run the service
python3 service.py &

# Test
curl -X POST http://localhost:8105/runpod/inference \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ultravox",
    "input": {
      "text": "Hello, world!"
    },
    "parameters": {
      "max_tokens": 50
    }
  }'
```
## Expected Performance

### Cold Start (First Request)
- Time: 8-15 seconds
- Why: the model is downloaded from HF, then loaded into VRAM
- Frequency: Once per idle period (after scale-down)
### Warm Inference
- Time: 0.3-0.8 seconds
- Throughput: ~30-50 tokens/second
### Costs

- Price: $0.34/hour while a worker is active
- Testing: ~$0.01 for an hour of light testing (you pay only for active compute)
- Production (2 hr/day): ~$20/month
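The monthly figure follows directly from the hourly rate; a quick check of the arithmetic using the $0.34/hour and 2 hours/day numbers above:

```python
HOURLY_RATE = 0.34      # $/hour while a worker is active
HOURS_PER_DAY = 2       # expected production usage
DAYS_PER_MONTH = 30

monthly_cost = HOURLY_RATE * HOURS_PER_DAY * DAYS_PER_MONTH
print(f"~${monthly_cost:.2f}/month")  # → ~$20.40/month
```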
## 🔧 Troubleshooting

**Model not loading**

- Check that `HF_TOKEN` is set correctly
- Verify the model name: `fixie-ai/ultravox-v0_2`
- Check the logs in the RunPod console

**Out of memory**

- Reduce `MAX_MODEL_LEN` to `2048`
- Set `GPU_MEMORY_UTILIZATION` to `0.8`
**Slow cold starts**
- Pre-download model (advanced)
- Use network storage (costs extra)
**Connection timeout**
- Increase timeout to 120 seconds
- Check endpoint is running (green status)
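A simple retry wrapper covers both cold starts and transient timeouts. A sketch; the retry policy is our own, and the commented usage assumes the `endpoint` object from the SDK test earlier:

```python
import time


def run_with_retries(call, attempts: int = 3, base_delay: float = 2.0):
    """Retry a zero-argument callable with exponential backoff."""
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * 2 ** attempt)  # 2s, 4s, ...


# Usage (with the endpoint from the SDK example above):
# result = run_with_retries(
#     lambda: endpoint.run_sync({"input": {"prompt": "hi"}}, timeout=120))
```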
## ⚠️ Important Notes

**About vLLM + Ultravox:**

- vLLM is primarily for text-only models
- Ultravox is multimodal (audio + text)
- vLLM will work for text input only
- For audio input, you need a custom handler (Docker)
**What works with vLLM:**

- ✅ Text → Text (LLM inference)
- ❌ Audio → Text (needs a custom handler)
- ❌ Text → Audio (needs TTS integration)
**For full audio support:**

- Use the custom Docker image (`build.sh`)
- Or process audio client-side (convert it to text first)
## Summary

- ✅ No Docker building needed
- ✅ Use RunPod's vLLM template
- ✅ Set the model name to `fixie-ai/ultravox-v0_2`
- ✅ Add an HF token for access
- ⚠️ Text-only mode (no audio input/output)
For full speech-to-speech, build custom Docker image. For text-only testing, vLLM is perfect!
**Ready?** Follow steps 1-4 above and paste your Endpoint ID into `runpod.yaml`!