SLM Cost Benchmarking Datasets
Collection
Datasets used for benchmarking the computational cost and inference efficiency of small language models (SLMs) in customer service QA experiments. • 11 items
| Model | Samples | Avg latency (s) | Median latency (s) | Std latency (s) | Avg TTFT (s) | Median TTFT (s) | Avg GPU memory (GB) | Max GPU memory (GB) | Disk storage (GB) | Avg output tokens | Total output tokens | Avg s/token | Median s/token |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Llama-3.2-1B-Instruct | 1,000 | 0.94 | 0.90 | 0.27 | 0.02 | 0.02 | 2.36 | 2.37 | 2.35 | 44.84 | 44,838 | 0.02 | 0.02 |
| Qwen3-1.7B-Instruct | 1,000 | 1.83 | 1.76 | 0.55 | 0.05 | 0.05 | 3.90 | 3.94 | 3.87 | 41.04 | 41,045 | 0.04 | 0.04 |
| LLaMA-3.2-3B-Instruct | 1,000 | 1.59 | 1.53 | 0.49 | 0.04 | 0.04 | 6.11 | 6.14 | 6.08 | 45.58 | 45,584 | 0.04 | 0.03 |
| SmolLM3-3B-Instruct | 1,000 | 2.09 | 2.03 | 0.61 | 0.05 | 0.05 | 5.82 | 5.84 | 5.86 | 50.34 | 50,341 | 0.04 | 0.04 |
| Phi-4-Mini | 1,000 | 2.07 | 1.98 | 0.64 | 0.06 | 0.06 | 7.30 | 7.34 | 7.20 | 44.02 | 44,023 | 0.05 | 0.05 |
| Qwen3-4B-Instruct | 1,000 | 2.32 | 2.17 | 0.79 | 0.06 | 0.06 | 7.70 | 7.74 | 7.63 | 40.66 | 40,660 | 0.06 | 0.06 |
| Gemma-3-4B-Instruct | 1,000 | 4.14 | 3.96 | 1.27 | 0.08 | 0.08 | 24.40 | 24.44 | 8.17 | 58.17 | 58,168 | 0.07 | 0.07 |
| LLaMA-3.1-8B-Instruct | 1,000 | 1.81 | 1.77 | 0.54 | 0.04 | 0.04 | 15.10 | 15.14 | 15.12 | 46.29 | 46,294 | 0.04 | 0.04 |
| Qwen3-8B-Instruct | 1,000 | 2.44 | 2.34 | 0.72 | 0.06 | 0.06 | 15.42 | 15.47 | 15.44 | 43.17 | 43,166 | 0.06 | 0.06 |
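The per-token columns above can be cross-checked from the aggregate figures themselves: decode throughput is roughly `avg_output_tokens / avg_latency_seconds` (TTFT = time to first token). A minimal sketch, with the figures copied from the table; the `throughput` helper is illustrative and not part of any dataset API. Note this is a ratio of averages, so it can differ slightly from the per-request `avg_seconds_per_token` column.

```python
# Approximate decode throughput (tokens/second) per model from the
# aggregate benchmark figures: avg_output_tokens / avg_latency_seconds.
BENCH = {
    # model: (avg_latency_seconds, avg_output_tokens)
    "Llama-3.2-1B-Instruct": (0.94, 44.84),
    "Qwen3-1.7B-Instruct": (1.83, 41.04),
    "LLaMA-3.2-3B-Instruct": (1.59, 45.58),
    "SmolLM3-3B-Instruct": (2.09, 50.34),
    "Phi-4-Mini": (2.07, 44.02),
    "Qwen3-4B-Instruct": (2.32, 40.66),
    "Gemma-3-4B-Instruct": (4.14, 58.17),
    "LLaMA-3.1-8B-Instruct": (1.81, 46.29),
    "Qwen3-8B-Instruct": (2.44, 43.17),
}

def throughput(avg_latency: float, avg_tokens: float) -> float:
    """Tokens generated per second of end-to-end request latency."""
    return avg_tokens / avg_latency

# Rank models from fastest to slowest by approximate tokens/second.
ranked = sorted(
    ((name, throughput(*stats)) for name, stats in BENCH.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, tps in ranked:
    print(f"{name:24s} {tps:5.1f} tok/s")
```

On these numbers the 1B model delivers the highest throughput and Gemma-3-4B-Instruct the lowest, despite two 8B models being in the pool.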