---
language:
- en
license: mit
tags:
- mcp
- benchmark
- gradio
- fastmcp
- api-performance
pretty_name: MCP Server Benchmark Results
size_categories:
- n<1K
---
# 🔬 Gradio vs FastMCP Benchmark Report

- Generated: 2026-02-28T05:20:52.935313
- Total scenarios: 360
## Executive Summary

- echo: FastMCP wins (192.8 vs 48.2 RPS, 4.0x faster)
  - Best Gradio config: concurrency_limit=10
- fibonacci: FastMCP wins (58.3 vs 36.2 RPS, 1.61x faster)
  - Best Gradio config: concurrency_limit=5
- json_transform: FastMCP wins (174.8 vs 50.8 RPS, 3.44x faster)
  - Best Gradio config: concurrency_limit=unlimited
- async_sleep: FastMCP wins (99.2 vs 59.6 RPS, 1.66x faster)
  - Best Gradio config: concurrency_limit=5
- payload_echo: FastMCP wins (171.1 vs 46.9 RPS, 3.65x faster)
  - Best Gradio config: concurrency_limit=unlimited
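The speedup factors above follow directly from the best-run RPS figures; a quick sanity check (numbers copied from this report):

```python
# Best observed RPS per workload: (Gradio best config, FastMCP best),
# copied from the Executive Summary above.
best_rps = {
    "echo":           (48.2, 192.8),
    "fibonacci":      (36.2, 58.3),
    "json_transform": (50.8, 174.8),
    "async_sleep":    (59.6, 99.2),
    "payload_echo":   (46.9, 171.1),
}

for tool, (gradio, fastmcp) in best_rps.items():
    print(f"{tool}: FastMCP is {fastmcp / gradio:.2f}x faster")
```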
## Throughput (Requests/Second)

| (tool, VUs) | fastmcp http_api | fastmcp mcp_streamable | gradio http_api | gradio mcp_streamable |
|---|---|---|---|---|
| ('async_sleep', 1) | 15.57 | 13.89 | 8.62 | 0.36 |
| ('async_sleep', 10) | 99.16 | 52.92 | 59.65 | 3.64 |
| ('async_sleep', 25) | 58.15 | 37.44 | 37.33 | 8.98 |
| ('async_sleep', 50) | 53.35 | 30.79 | 34.05 | 29.91 |
| ('echo', 1) | 192.82 | 74.57 | 11.66 | 0.36 |
| ('echo', 10) | 74.9 | 43.59 | 48.22 | 3.62 |
| ('echo', 25) | 75.61 | 38.04 | 38.25 | 9.05 |
| ('echo', 50) | 65.28 | 31.81 | 31.34 | 30.06 |
| ('fibonacci', 1) | 46.53 | 31.48 | 12.58 | 0.36 |
| ('fibonacci', 10) | 58.27 | 44.81 | 36.24 | 3.63 |
| ('fibonacci', 25) | 56.53 | 32.83 | 33.02 | 8.98 |
| ('fibonacci', 50) | 47.45 | 28.91 | 30.01 | 29.94 |
| ('json_transform', 1) | 174.8 | 69.46 | 11.39 | 0.37 |
| ('json_transform', 10) | 78.21 | 44.07 | 50.81 | 3.62 |
| ('json_transform', 25) | 55.76 | 39.04 | 39.54 | 9 |
| ('json_transform', 50) | 63.12 | 32.64 | 33.62 | 30.12 |
| ('payload_echo', 1) | 171.09 | 72.97 | 11.76 | 0.36 |
| ('payload_echo', 10) | 77.13 | 47.08 | 46.87 | 3.63 |
| ('payload_echo', 25) | 63.39 | 40 | 38.12 | 9 |
| ('payload_echo', 50) | 66.41 | 34.28 | 33.98 | 29.97 |
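Throughput and latency are linked by Little's law: at a fixed number of virtual users, RPS ≈ VUs / mean latency. Inverting it turns any throughput row into an implied mean latency, which can be compared against the p50 table below (the two differ when the latency distribution is skewed):

```python
# Invert Little's law: mean latency ≈ VUs / RPS.
def implied_mean_latency_ms(vus: int, rps: float) -> float:
    return vus / rps * 1000.0

# fastmcp http_api, echo @ 50 VUs: 65.28 RPS implies ~766 ms mean latency,
# vs a measured p50 of 489 ms; mean > p50 points to a right-skewed tail.
print(round(implied_mean_latency_ms(50, 65.28)))  # → 766
```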
## Latency p50 (ms)

| (tool, VUs) | fastmcp http_api | fastmcp mcp_streamable | gradio http_api | gradio mcp_streamable |
|---|---|---|---|---|
| ('async_sleep', 1) | 62.55 | 76.702 | 111.588 | 4120.86 |
| ('async_sleep', 10) | 89.146 | 167.851 | 157.104 | 4083.55 |
| ('async_sleep', 25) | 327.049 | 599.597 | 541.56 | 4097.19 |
| ('async_sleep', 50) | 638.245 | 1346.07 | 1152.1 | 2120.95 |
| ('echo', 1) | 5 | 12.999 | 79.26 | 4129.99 |
| ('echo', 10) | 90.528 | 202.528 | 177.813 | 4100.18 |
| ('echo', 25) | 219.384 | 561.208 | 564.806 | 4096.84 |
| ('echo', 50) | 489.309 | 1171.99 | 1247.51 | 2106.4 |
| ('fibonacci', 1) | 20.466 | 30.001 | 65.026 | 4128.83 |
| ('fibonacci', 10) | 163.974 | 204.065 | 263.913 | 4064.71 |
| ('fibonacci', 25) | 427.19 | 693.387 | 648.3 | 4121.86 |
| ('fibonacci', 50) | 688.597 | 1308.94 | 1464.07 | 2091.53 |
| ('json_transform', 1) | 5.133 | 13.556 | 78.729 | 4115.32 |
| ('json_transform', 10) | 89.402 | 201.688 | 168.247 | 4114.56 |
| ('json_transform', 25) | 373.373 | 561.928 | 504.812 | 4089.93 |
| ('json_transform', 50) | 508.344 | 1244.63 | 1214.53 | 2107.65 |
| ('payload_echo', 1) | 5.414 | 13.003 | 97.087 | 4119.44 |
| ('payload_echo', 10) | 77.202 | 189.617 | 176.908 | 4109.25 |
| ('payload_echo', 25) | 313.41 | 543.924 | 543.665 | 4078.82 |
| ('payload_echo', 50) | 468.582 | 1149.26 | 1196.27 | 2118.73 |
## Gradio concurrency_limit Scaling

How does Gradio's throughput (RPS) change as concurrency_limit increases?

| (tool, VUs) | concurrency_limit=1 | concurrency_limit=5 | concurrency_limit=10 |
|---|---|---|---|
| ('async_sleep', 1) | 4.4025 | 4.4475 | 4.315 |
| ('async_sleep', 10) | 6.7975 | 30.11 | 23.1775 |
| ('async_sleep', 25) | 8.7825 | 23.1125 | 19.5675 |
| ('async_sleep', 50) | 20.295 | 31.5225 | 28.0425 |
| ('echo', 1) | 5.34 | 5.9775 | 5.8675 |
| ('echo', 10) | 10.9725 | 24.8075 | 25.4775 |
| ('echo', 25) | 13.39 | 22.4475 | 22.7725 |
| ('echo', 50) | 23.9525 | 29.7975 | 27.5275 |
| ('fibonacci', 1) | 5.57 | 6.245 | 5.7525 |
| ('fibonacci', 10) | 10.9175 | 19.72 | 19.7 |
| ('fibonacci', 25) | 13.6075 | 20.255 | 19.3775 |
| ('fibonacci', 50) | 24.4425 | 27.775 | 27.4 |
| ('json_transform', 1) | 5.1075 | 5.65 | 5.2025 |
| ('json_transform', 10) | 11.13 | 25.9975 | 23.0825 |
| ('json_transform', 25) | 13.8575 | 23.365 | 20.8075 |
| ('json_transform', 50) | 24.2775 | 31.5075 | 28.425 |
| ('payload_echo', 1) | 5.1775 | 5.8375 | 4.93 |
| ('payload_echo', 10) | 11.23 | 24.5025 | 21.6125 |
| ('payload_echo', 25) | 13.68 | 22.19 | 20.47 |
| ('payload_echo', 50) | 24.1675 | 31.21 | 28.0175 |
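Gradio's concurrency_limit caps how many queued events execute at once, which is why throughput above climbs from limit 1 to 5 and then roughly plateaus: once the limit covers the in-flight work, the event loop becomes the bottleneck. A minimal asyncio sketch of that gating mechanism (a semaphore-based analogy, not Gradio's actual implementation):

```python
import asyncio
import time

async def handle(sem: asyncio.Semaphore, work_s: float) -> None:
    # Only `concurrency_limit` handlers may run at once.
    async with sem:
        await asyncio.sleep(work_s)

async def run(n_requests: int, concurrency_limit: int, work_s: float) -> float:
    sem = asyncio.Semaphore(concurrency_limit)
    start = time.perf_counter()
    await asyncio.gather(*(handle(sem, work_s) for _ in range(n_requests)))
    return n_requests / (time.perf_counter() - start)  # requests per second

# With 50 ms of simulated work, raising the limit lifts throughput until
# scheduling overhead, not the semaphore, dominates.
for limit in (1, 5, 10):
    print(f"concurrency_limit={limit}: {asyncio.run(run(50, limit, 0.05)):.0f} RPS")
```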
## Protocol Overhead: HTTP API vs MCP

Comparing latency (ms) of the same tool called via the plain HTTP API vs the MCP protocol:

| (server, tool) | http_api | mcp_streamable |
|---|---|---|
| ('fastmcp', 'async_sleep') | 279.248 | 547.556 |
| ('fastmcp', 'echo') | 201.055 | 487.181 |
| ('fastmcp', 'fibonacci') | 325.057 | 559.099 |
| ('fastmcp', 'json_transform') | 244.063 | 505.451 |
| ('fastmcp', 'payload_echo') | 216.152 | 473.951 |
| ('gradio', 'async_sleep') | 942.756 | 3640.32 |
| ('gradio', 'echo') | 709.837 | 3690 |
| ('gradio', 'fibonacci') | 781.367 | 3640.18 |
| ('gradio', 'json_transform') | 696.113 | 3641.42 |
| ('gradio', 'payload_echo') | 706.047 | 3634.16 |
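The per-server cost of the MCP transport can be read off as a ratio; two representative rows, copied from the table above:

```python
# Latency (ms) per (server, tool): (http_api, mcp_streamable),
# copied from the Protocol Overhead table above.
latency = {
    ("fastmcp", "echo"): (201.055, 487.181),
    ("gradio",  "echo"): (709.837, 3690.0),
}

for (server, tool), (http_ms, mcp_ms) in latency.items():
    print(f"{server}/{tool}: MCP adds {mcp_ms / http_ms:.1f}x latency")
```

The asymmetry is the takeaway: FastMCP's MCP path costs ~2.4x its HTTP path, while Gradio's costs ~5.2x, so the gap between the servers widens under the MCP protocol.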
## Error Rates

| scenario_id | error_rate_pct |
|---|---|
| gradio__mcp_streamable__echo__vu50__cl1 | 100 |
| gradio__mcp_streamable__echo__vu50__cl1__noq | 100 |
| gradio__mcp_streamable__fibonacci__vu50__cl1 | 100 |
| gradio__mcp_streamable__fibonacci__vu50__cl1__noq | 100 |
| gradio__mcp_streamable__json_transform__vu50__cl1 | 100 |
| gradio__mcp_streamable__json_transform__vu50__cl1__noq | 100 |
| gradio__mcp_streamable__async_sleep__vu50__cl1 | 100 |
| gradio__mcp_streamable__async_sleep__vu50__cl1__noq | 100 |
| gradio__mcp_streamable__payload_echo__vu50__cl1 | 100 |
| gradio__mcp_streamable__payload_echo__vu50__cl1__noq | 100 |
| gradio__mcp_streamable__echo__vu50__cl5 | 100 |
| gradio__mcp_streamable__echo__vu50__cl5__noq | 100 |
| gradio__mcp_streamable__fibonacci__vu50__cl5 | 100 |
| gradio__mcp_streamable__fibonacci__vu50__cl5__noq | 100 |
| gradio__mcp_streamable__json_transform__vu50__cl5 | 100 |
| gradio__mcp_streamable__json_transform__vu50__cl5__noq | 100 |
| gradio__mcp_streamable__async_sleep__vu50__cl5 | 100 |
| gradio__mcp_streamable__async_sleep__vu50__cl5__noq | 100 |
| gradio__mcp_streamable__payload_echo__vu50__cl5 | 100 |
| gradio__mcp_streamable__payload_echo__vu50__cl5__noq | 100 |
| gradio__http_api__async_sleep__vu25__cl10 | 0.33 |
| gradio__mcp_streamable__echo__vu50__cl10 | 100 |
| gradio__mcp_streamable__echo__vu50__cl10__noq | 100 |
| gradio__mcp_streamable__fibonacci__vu50__cl10 | 100 |
| gradio__mcp_streamable__fibonacci__vu50__cl10__noq | 100 |
| gradio__mcp_streamable__json_transform__vu50__cl10 | 100 |
| gradio__mcp_streamable__json_transform__vu50__cl10__noq | 100 |
| gradio__mcp_streamable__async_sleep__vu50__cl10 | 100 |
| gradio__mcp_streamable__async_sleep__vu50__cl10__noq | 100 |
| gradio__mcp_streamable__payload_echo__vu50__cl10 | 100 |
| gradio__mcp_streamable__payload_echo__vu50__cl10__noq | 100 |
| gradio__mcp_streamable__echo__vu50__clunlimited | 100 |
| gradio__mcp_streamable__echo__vu50__clunlimited__noq | 100 |
| gradio__mcp_streamable__fibonacci__vu50__clunlimited | 100 |
| gradio__mcp_streamable__fibonacci__vu50__clunlimited__noq | 100 |
| gradio__mcp_streamable__json_transform__vu50__clunlimited | 100 |
| gradio__mcp_streamable__json_transform__vu50__clunlimited__noq | 100 |
| gradio__mcp_streamable__async_sleep__vu50__clunlimited | 100 |
| gradio__mcp_streamable__async_sleep__vu50__clunlimited__noq | 100 |
| gradio__mcp_streamable__payload_echo__vu50__clunlimited | 100 |
| gradio__mcp_streamable__payload_echo__vu50__clunlimited__noq | 100 |
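Scenario IDs encode the configuration as `server__transport__tool__vu<N>__cl<limit>[__noq]` (my reading of the naming scheme, inferred from the rows above); a small parser makes the failing set easy to filter:

```python
def parse_scenario_id(scenario_id: str) -> dict:
    """Parse IDs like gradio__mcp_streamable__echo__vu50__cl5__noq."""
    parts = scenario_id.split("__")
    no_queue = parts[-1] == "noq"
    if no_queue:
        parts = parts[:-1]
    server, transport, tool, vu, cl = parts
    return {
        "server": server,
        "transport": transport,
        "tool": tool,
        "virtual_users": int(vu.removeprefix("vu")),
        "concurrency_limit": cl.removeprefix("cl"),  # "1", "5", "10", "unlimited"
        "queue_disabled": no_queue,
    }

print(parse_scenario_id("gradio__mcp_streamable__echo__vu50__cl5__noq"))
```

Note that every 100%-error row above is Gradio's mcp_streamable transport at 50 virtual users, across every concurrency_limit and queue setting; the only other non-zero row is a 0.33% blip on the HTTP path.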
## Resource Usage

| server | mean avg_cpu_pct | mean avg_memory_mb | mean peak_memory_mb | max avg_cpu_pct | max avg_memory_mb | max peak_memory_mb |
|---|---|---|---|---|---|---|
| fastmcp | 0 | 6.15475 | 6.15475 | 0 | 6.16 | 6.16 |
| gradio | 0 | 6.15472 | 6.15484 | 0 | 6.17 | 6.19 |
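Per the methodology, system metrics are sampled every 1 s via psutil. A minimal sampler of that shape (a sketch; the probe is injectable, and the psutil-backed probe shown in the comment is an assumption about how the tool wires it up):

```python
import time
from statistics import mean

def sample_metrics(probe, duration_s: float, interval_s: float = 1.0) -> dict:
    """Poll probe() -> (cpu_pct, memory_mb) every interval and aggregate."""
    cpu, mem = [], []
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        c, m = probe()
        cpu.append(c)
        mem.append(m)
        time.sleep(interval_s)
    return {
        "avg_cpu_pct": mean(cpu),
        "avg_memory_mb": mean(mem),
        "peak_memory_mb": max(mem),
    }

# A real run would pass a psutil-backed probe for the server subprocess, e.g.:
#   proc = psutil.Process(server_pid)
#   probe = lambda: (proc.cpu_percent(), proc.memory_info().rss / 2**20)
```

(The uniform ~6 MB / 0% rows above are suspiciously flat for a running server; it may be worth verifying which PID the probe samples and whether `cpu_percent()` is given a measurement window.)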
## Benchmark Charts
## Methodology

- Both servers use identical tool implementations (imported from `shared_tools.py`)
- Each scenario runs in an isolated server subprocess
- Warmup period excluded from measurements
- Load generated by async httpx workers (not external tools)
- MCP tests use the full protocol lifecycle (initialize → call_tool)
- System metrics sampled every 1s via psutil
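The load-generation loop described above can be sketched as an async worker pool; the request is injected as a callable so the same loop can drive both the REST and MCP paths (a sketch of the described approach, not the tool's actual code):

```python
import asyncio
import time

async def worker(call, stop: float, latencies: list) -> None:
    # Each virtual user issues requests back-to-back until the deadline.
    while time.monotonic() < stop:
        t0 = time.monotonic()
        await call()
        latencies.append((time.monotonic() - t0) * 1000.0)  # ms

async def run_load(call, vus: int, duration_s: float) -> dict:
    latencies: list = []
    stop = time.monotonic() + duration_s
    await asyncio.gather(*(worker(call, stop, latencies) for _ in range(vus)))
    latencies.sort()
    return {
        "requests": len(latencies),
        "rps": len(latencies) / duration_s,
        "p50_ms": latencies[len(latencies) // 2],
    }

# With httpx the injected call would be something like:
#   async with httpx.AsyncClient() as client:
#       stats = await run_load(lambda: client.post(url, json=payload), vus=10,
#                              duration_s=30)
```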
Benchmarks generated by `mcp-server-bench`.



