Dataset preview (raw benchmark logs):
```
=== CODING BENCHMARK ===
Date: Mon Mar 30 07:34:25 PM CDT 2026
=== llama3.1:8b ===
load duration: 4.076886986s
prompt eval count: 35 token(s)
prompt eval duration: 36.231128ms
prompt eval rate: 966.02 tokens/s
eval count: 369 token(s)
eval duration: 8.680472104s
eval rate: 42.51 tokens/s
=== qwen2.5-coder:32b ===
load duration: 8.307307427s
prompt eval count: 54 token(s)
prompt eval duration: 144.246037ms
prompt eval rate: 374.36 tokens/s
eval count: 552 token(s)
eval duration: 53.454209122s
eval rate: 10.33 tokens/s
=== qwen3:32b ===
load duration: 8.158086683s
prompt eval count: 35 token(s)
prompt eval duration: 230.633558ms
prompt eval rate: 151.76 tokens/s
eval count: 8180 token(s)
eval duration: 14m31.693990836s
eval rate: 9.38 tokens/s
=== codellama:70b ===
load duration: 19.503715323s
prompt eval count: 47 token(s)
prompt eval duration: 255.980001ms
prompt eval rate: 183.61 tokens/s
eval count: 752 token(s)
eval duration: 2m12.092152055s
eval rate: 5.69 tokens/s
=== gemma3:27b ===
load duration: 9.362613439s
prompt eval count: 33 token(s)
prompt eval duration: 153.267665ms
prompt eval rate: 215.31 tokens/s
eval count: 329 token(s)
eval duration: 28.330782658s
eval rate: 11.61 tokens/s
=== deepseek-r1:70b ===
load duration: 28.349607718s
prompt eval count: 28 token(s)
prompt eval duration: 280.874412ms
prompt eval rate: 99.69 tokens/s
eval count: 10710 token(s)
eval duration: 39m34.695902592s
eval rate: 4.51 tokens/s
=== llama3.3:70b ===
load duration: 32.858153529s
prompt eval count: 35 token(s)
prompt eval duration: 296.476125ms
prompt eval rate: 118.05 tokens/s
eval count: 507 token(s)
eval duration: 1m48.613112185s
eval rate: 4.67 tokens/s
=== DONE ===
=== LONG CONTEXT BENCHMARK ===
Date: Tue Mar 31 09:02:58 AM CDT 2026
=== llama3.1:8b ===
load duration: 7.88840451s
prompt eval count: 130 token(s)
prompt eval duration: 62.1781ms
prompt eval rate: 2090.77 tokens/s
eval count: 720 token(s)
eval duration: 17.216591091s
eval rate: 41.82 tokens/s
=== gemma3:27b ===
load duration: 5.437200302s
prompt eval count: 133 token(s)
prompt eval duration: 215.568451ms
prompt eval rate: 616.97 tokens/s
eval count: 1186 token(s)
eval duration: 1m41.334222458s
eval rate: 11.70 tokens/s
=== qwen3:32b ===
load duration: 4.17310172s
prompt eval count: 134 token(s)
prompt eval duration: 272.725675ms
prompt eval rate: 491.34 tokens/s
eval count: 1617 token(s)
eval duration: 2m40.059211985s
eval rate: 10.10 tokens/s
=== llama3.1:70b ===
load duration: 26.941709772s
prompt eval count: 130 token(s)
prompt eval duration: 611.303466ms
prompt eval rate: 212.66 tokens/s
```
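Stats blocks in this format are what `ollama run --verbose` prints per run; they can be folded into the summary tables with a short parser. A minimal sketch (the regexes assume the exact field layout shown in the logs above):

```python
import re

def parse_ollama_log(text):
    """Fold `ollama run --verbose` stats blocks into per-model rates."""
    results, model = {}, None
    for line in text.splitlines():
        line = line.strip().rstrip("|").strip()
        # Model tags look like name:size (e.g. llama3.1:8b); section
        # banners like "=== CODING BENCHMARK ===" have no colon-tag.
        header = re.fullmatch(r"=== (\S+:\S+) ===", line)
        if header:
            model = header.group(1)
            results[model] = {}
            continue
        stat = re.fullmatch(r"(prompt eval rate|eval rate): ([\d.]+) tokens/s", line)
        if stat and model:
            results[model][stat.group(1)] = float(stat.group(2))
    return results
```

Feeding it the coding-benchmark block above yields, for example, `{"llama3.1:8b": {"prompt eval rate": 966.02, "eval rate": 42.51}, ...}`.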
(end of preview; the final llama3.1:70b record is truncated)
# DGX Spark LLM Benchmarks

First comprehensive benchmark suite for NVIDIA DGX Spark (GB10 Blackwell).
## Hardware
- GPU: NVIDIA GB10 Blackwell (1 PFLOP FP4)
- Memory: 128GB unified LPDDR5x (273 GB/s)
- CPU: 20-core ARM (10x Cortex-X925 + 10x Cortex-A725)
- Storage: 4TB NVMe
- Framework: Ollama 0.18.3
- CUDA: 13.0 | Driver: 580.142
## Benchmark Results

### Run 1 — General Inference (11 models)
| Model | Size | Prompt tok/s | Gen tok/s | Load Time |
|---|---|---|---|---|
| Llama 3.1 8B | 4.9 GB | 574.79 | 42.86 | 4.73s |
| Gemma3 27B | 17 GB | 164.64 | 11.71 | 9.83s |
| Qwen2.5-Coder 32B | 19 GB | 288.96 | 10.36 | 15.13s |
| Qwen3 32B | 20 GB | 141.91 | 9.88 | 4.08s |
| CodeLlama 70B | 38 GB | 133.35 | 5.73 | 28.81s |
| Nemotron 70B | 42 GB | 87.97 | 4.77 | 27.25s |
| Llama 3.1 70B | 42 GB | 67.92 | 4.76 | 28.35s |
| DeepSeek-R1 70B | 42 GB | 24.18 | 4.68 | 47.02s |
| Llama 3.3 70B | 42 GB | 67.97 | 4.66 | 27.41s |
| Qwen 2.5 72B | 47 GB | 122.19 | 4.40 | 44.75s |
| Mistral Large 123B | 73 GB | 10.43 | 2.28 | 86.14s |
### Run 2 — Coding Benchmark
| Model | Gen tok/s | Tokens Generated |
|---|---|---|
| Llama 3.1 8B | 42.51 | 369 |
| Qwen2.5-Coder 32B | 10.33 | 552 |
| Qwen3 32B | 9.38 | 8,180 |
| CodeLlama 70B | 5.69 | 752 |
| Gemma3 27B | 11.61 | 329 |
| DeepSeek-R1 70B | 4.51 | 10,710 |
| Llama 3.3 70B | 4.67 | 507 |
### Run 3 — Context Scaling

| Model | Short Prompt tok/s | Long Prompt tok/s | Prompt Speedup | Gen tok/s |
|---|---|---|---|---|
| Llama 3.1 8B | 574 | 2,090 | 3.6x | 41.82 |
| Gemma3 27B | 164 | 616 | 3.8x | 11.70 |
| Qwen3 32B | 141 | 491 | 3.5x | 10.10 |
| Llama 3.1 70B | 67 | 212 | 3.2x | 4.69 |
| Qwen 2.5 72B | 122 | 225 | 1.8x | 4.33 |
| Nemotron 70B | 87 | 164 | 1.9x | 4.62 |
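The speedup figures are just the long-prompt eval rate divided by the short-prompt rate. A quick check for two models, with values copied from the Run 1 and Run 3 tables:

```python
# Prompt-eval speedup = long-prompt rate / short-prompt rate,
# using the Run 1 (short) and Run 3 (long) figures above.
rates = {
    "Llama 3.1 8B": (574.79, 2090.77),
    "Qwen3 32B": (141.91, 491.34),
}
for model, (short, long_) in rates.items():
    print(f"{model}: {long_ / short:.1f}x")
# Llama 3.1 8B: 3.6x
# Qwen3 32B: 3.5x
```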
### Run 4 — Vision
| Model | Size | Prompt tok/s | Gen tok/s | Load Time |
|---|---|---|---|---|
| Llama3.2-Vision 90B | 54 GB | 6.02 | 3.47 | 16.86s |
## Key Findings

- 27-32B is the sweet spot — 10-12 tok/s, genuinely interactive
- Prompt eval throughput scales 3-4x with longer prompts on unified memory
- DeepSeek-R1 generates 10,710 tokens of reasoning for a single coding question
- The 90B vision model runs on a desktop at 3.47 tok/s
- 123B is the practical ceiling — Mistral Large at 2.28 tok/s is barely interactive
- Generation speed stays roughly constant regardless of prompt length
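The near-identical generation speeds across the 70B models are consistent with decoding being memory-bandwidth bound on unified memory: each generated token streams the full weight set once, so decode rate is roughly bandwidth divided by model size. A back-of-envelope sketch (the 0.7 efficiency factor is an assumed fudge for overheads, not a measured value):

```python
BANDWIDTH_GBS = 273  # unified LPDDR5x bandwidth from the hardware spec

def bandwidth_bound_toks(model_size_gb, efficiency=0.7):
    """Rough decode-rate estimate if each token reads all weights once.

    `efficiency` is an assumed overhead factor, not a measurement.
    """
    return efficiency * BANDWIDTH_GBS / model_size_gb

print(round(bandwidth_bound_toks(42), 2))  # 42 GB 70B quants -> 4.55
```

That estimate lands in the same range as the measured 4.4-4.8 tok/s for the 42-47 GB models in Run 1.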
## Author
Gopi Trinadh Maddikunta
- University of Houston · MS Engineering Data Science
- Research Assistant, Dr. Peizhu Qian
- GSoC 2025 Contributor (Scala Center)
- GitHub: GOPITRINADH3561
- Website: gopitrinadh.site