---
license: mit
task_categories:
- text-generation
tags:
- benchmark
- nvidia
- dgx-spark
- blackwell
- llm
- inference
pretty_name: DGX Spark LLM Benchmarks
size_categories:
- n<1K
---
# DGX Spark LLM Benchmarks

First comprehensive benchmark suite for the NVIDIA DGX Spark (GB10 Blackwell).
## Hardware
- GPU: NVIDIA GB10 Blackwell (1 PFLOP FP4)
- Memory: 128GB unified LPDDR5x (273 GB/s)
- CPU: 20-core ARM (10x Cortex-X925 + 10x Cortex-A725)
- Storage: 4TB NVMe
- Framework: Ollama 0.18.3
- CUDA: 13.0 | Driver: 580.142
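All throughput figures below can be derived directly from Ollama's response metadata: the `/api/generate` endpoint reports `prompt_eval_count`, `prompt_eval_duration`, `eval_count`, `eval_duration`, and `load_duration`, with durations in nanoseconds. A minimal sketch of the conversion (the response values here are illustrative, not taken from the runs below):

```python
# Sketch: converting Ollama's count/duration metadata into tok/s.
# Ollama's /api/generate response reports durations in nanoseconds.

def throughput(count: int, duration_ns: int) -> float:
    """Tokens per second from an Ollama count/duration pair."""
    return count / (duration_ns / 1e9)

# Illustrative response fragment (made-up numbers, not from the benchmark runs):
resp = {
    "prompt_eval_count": 512,
    "prompt_eval_duration": 890_000_000,   # 0.89 s of prompt processing
    "eval_count": 256,
    "eval_duration": 6_000_000_000,        # 6.0 s of generation
}

prompt_tps = throughput(resp["prompt_eval_count"], resp["prompt_eval_duration"])
gen_tps = throughput(resp["eval_count"], resp["eval_duration"])
print(f"prompt: {prompt_tps:.2f} tok/s, gen: {gen_tps:.2f} tok/s")
```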
## Benchmark Results
### Run 1 — General Inference (11 models)
| Model | Size | Prompt tok/s | Gen tok/s | Load Time |
|---|---|---|---|---|
| Llama 3.1 8B | 4.9 GB | 574.79 | 42.86 | 4.73s |
| Gemma3 27B | 17 GB | 164.64 | 11.71 | 9.83s |
| Qwen2.5-Coder 32B | 19 GB | 288.96 | 10.36 | 15.13s |
| Qwen3 32B | 20 GB | 141.91 | 9.88 | 4.08s |
| CodeLlama 70B | 38 GB | 133.35 | 5.73 | 28.81s |
| Nemotron 70B | 42 GB | 87.97 | 4.77 | 27.25s |
| Llama 3.1 70B | 42 GB | 67.92 | 4.76 | 28.35s |
| DeepSeek-R1 70B | 42 GB | 24.18 | 4.68 | 47.02s |
| Llama 3.3 70B | 42 GB | 67.97 | 4.66 | 27.41s |
| Qwen 2.5 72B | 47 GB | 122.19 | 4.40 | 44.75s |
| Mistral Large 123B | 73 GB | 10.43 | 2.28 | 86.14s |
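The generation numbers track the hardware's memory bandwidth. Single-batch decoding must stream essentially all resident weights per generated token, so a rough upper bound is `gen tok/s ≤ bandwidth / model size`; against the 273 GB/s spec above, the measured speeds land at roughly 70-80% of that ceiling. A quick check (assuming the listed file sizes approximate resident weight bytes):

```python
# Rough roofline check: with batch size 1, each generated token reads the
# whole model's weights once, so memory bandwidth caps generation speed at
#   gen tok/s <= bandwidth (GB/s) / model size (GB).
# Assumes the table's model sizes approximate bytes streamed per token.

BANDWIDTH_GBPS = 273  # LPDDR5x bandwidth from the hardware specs

def ceiling_tok_s(model_gb: float) -> float:
    return BANDWIDTH_GBPS / model_gb

for name, gb, measured in [
    ("Llama 3.1 8B", 4.9, 42.86),
    ("Gemma3 27B", 17, 11.71),
    ("Llama 3.1 70B", 42, 4.76),
    ("Mistral Large 123B", 73, 2.28),
]:
    bound = ceiling_tok_s(gb)
    print(f"{name}: measured {measured} tok/s, bandwidth ceiling ~{bound:.1f} tok/s")
```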
### Run 2 — Coding Benchmark
| Model | Gen tok/s | Tokens Generated |
|---|---|---|
| Llama 3.1 8B | 42.51 | 369 |
| Qwen2.5-Coder 32B | 10.33 | 552 |
| Qwen3 32B | 9.38 | 8,180 |
| CodeLlama 70B | 5.69 | 752 |
| Gemma3 27B | 11.61 | 329 |
| DeepSeek-R1 70B | 4.51 | 10,710 |
| Llama 3.3 70B | 4.67 | 507 |
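Token counts matter as much as tok/s here: wall-clock time per answer is simply tokens generated divided by generation speed, so DeepSeek-R1's 10,710 reasoning tokens at 4.51 tok/s work out to roughly 40 minutes for a single coding question, versus under 10 seconds for Llama 3.1 8B:

```python
# Wall-clock cost of one coding answer: tokens generated / gen tok/s.
# Figures taken from the Run 2 table above.

def answer_seconds(tokens: int, tok_per_s: float) -> float:
    return tokens / tok_per_s

runs = {
    "Llama 3.1 8B": (369, 42.51),
    "Qwen2.5-Coder 32B": (552, 10.33),
    "DeepSeek-R1 70B": (10_710, 4.51),
}
for name, (toks, tps) in runs.items():
    print(f"{name}: {answer_seconds(toks, tps):.0f} s")
```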
### Run 3 — Context Scaling
| Model | Short Prompt tok/s | Long Prompt tok/s | Prompt Speedup | Gen tok/s |
|---|---|---|---|---|
| Llama 3.1 8B | 574 | 2,090 | 3.6x | 41.82 |
| Gemma3 27B | 164 | 616 | 3.8x | 11.70 |
| Qwen3 32B | 141 | 491 | 3.5x | 10.10 |
| Llama 3.1 70B | 67 | 212 | 3.2x | 4.69 |
| Qwen 2.5 72B | 122 | 225 | 1.8x | 4.33 |
| Nemotron 70B | 87 | 164 | 1.9x | 4.62 |
### Run 4 — Vision
| Model | Size | Prompt tok/s | Gen tok/s | Load Time |
|---|---|---|---|---|
| Llama3.2-Vision 90B | 54 GB | 6.02 | 3.47 | 16.86s |
## Key Findings
- 27-32B is the sweet spot — 10-12 tok/s, genuinely interactive
- Prompt eval scales 3-4x with longer prompts on unified memory
- DeepSeek-R1 generates 10,710 tokens of reasoning for one coding question
- 90B vision model runs on a desktop at 3.47 tok/s
- 123B is the practical ceiling — Mistral Large at 2.28 tok/s is barely interactive
- Generation speed stays roughly constant regardless of prompt length
## Author
Gopi Trinadh Maddikunta
- University of Houston · MS Engineering Data Science
- Research Assistant, Dr. Peizhu Qian
- GSoC 2025 Contributor (Scala Center)
- GitHub: GOPITRINADH3561
- Website: gopitrinadh.site