docs: add response time to CyclePlan table, mark thinking models
README.md CHANGED
@@ -93,7 +93,7 @@ Mainly based on prediction.phase_waits pred_saturation (already calculated), out
 | Model | Avg Saturation | Avg Cumulative Queue Length (veh⋅min) | Avg Throughput (veh/5min) | Avg Response Time (s) |
 |:---:|:---:|:---:|:---:|:---:|
 | [`GPT-OSS-20B (thinking)`](https://huggingface.co/openai/gpt-oss-20b) | 0.380 | 14.088 | 77.910 | 6.768 |
-| **DeepSignal-Phase-4B (Ours)** | 0.422 | 15.703 | **79.883** | 2.131 |
+| **DeepSignal-Phase-4B (thinking, Ours)** | 0.422 | 15.703 | **79.883** | 2.131 |
 | [`Qwen3-30B-A3B`](https://huggingface.co/Qwen/Qwen3-VL-30B-A3B-Instruct) | 0.431 | 17.046 | 79.059 | 2.727 |
 | [`Qwen3-4B`](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507) | 0.466 | 57.699 | 75.712 | 1.994 |
 | Max Pressure | 0.465 | 23.022 | 77.236 | ** |

@@ -103,23 +103,23 @@ Mainly based on prediction.phase_waits pred_saturation (already calculated), out
 `**`: Max Pressure is a fixed signal-timing optimization algorithm (not an LLM), so we omit its Avg Response Time; this metric is only defined for LLM-based signal-timing optimization.
 `***`: For LightGPT-8B-Llama3, Avg Response Time is computed using only the successful responses.
 
-**Conclusion**:
+**Conclusion**: Among thinking-enabled models, **DeepSignal-Phase-4B** achieves the highest throughput (79.883 veh/5min) with a response time of only 2.131s. GPT-OSS-20B achieves the best saturation (0.380) but at a higher response latency (6.768s).
 
 ### Performance Metrics Comparison by Model (CyclePlan) *
 
-| Model | Format Success Rate (%) | Avg Queue Vehicles | Avg Delay per Vehicle (s) | Throughput (veh/min) |
-|:---:|:---:|:---:|:---:|:---:|
-| **DeepSignal-CyclePlan-4B-V1 F16 (Ours)** | **100.0** | **3.504** | **27.747** | **8.611** |
-| [`GLM-4.7-Flash`](https://huggingface.co/zai-org/glm-4.7-flash) | 100.0 | 7.323 | 29.422 | 8.567 |
-| DeepSignal-CyclePlan-4B-V1 Q4_K_M (Ours) | 98.1 | 4.783 | 29.891 | 7.722 |
-| [`Qwen3-30B-A3B`](https://huggingface.co/Qwen/Qwen3-30B-A3B-2507) | 97.1 | 6.938 | 31.135 | 7.578 |
-| [`LightGPT-8B-Llama3`](https://huggingface.co/lightgpt/LightGPT-8B-Llama3) | 68.0 | 5.026 | 31.266 | 7.380 |
-| [`GPT-OSS-20B`](https://huggingface.co/openai/gpt-oss-20b) | 65.4 | 6.289 | 31.947 | 7.247 |
-| [`Qwen3-4B (thinking)`](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507) | 54.1 | 10.060 | 48.895 | 7.096 |
+| Model | Format Success Rate (%) | Avg Queue Vehicles | Avg Delay per Vehicle (s) | Throughput (veh/min) | Avg Response Time (s) |
+|:---:|:---:|:---:|:---:|:---:|:---:|
+| **DeepSignal-CyclePlan-4B-V1 F16 (thinking, Ours)** | **100.0** | **3.504** | **27.747** | **8.611** | 4.351 |
+| [`GLM-4.7-Flash (thinking)`](https://huggingface.co/zai-org/glm-4.7-flash) | 100.0 | 7.323 | 29.422 | 8.567 | 36.388 |
+| DeepSignal-CyclePlan-4B-V1 Q4_K_M (thinking, Ours) | 98.1 | 4.783 | 29.891 | 7.722 | 1.674 |
+| [`Qwen3-30B-A3B`](https://huggingface.co/Qwen/Qwen3-30B-A3B-2507) | 97.1 | 6.938 | 31.135 | 7.578 | 7.885 |
+| [`LightGPT-8B-Llama3`](https://huggingface.co/lightgpt/LightGPT-8B-Llama3) | 68.0 | 5.026 | 31.266 | 7.380 | 167.373*** |
+| [`GPT-OSS-20B (thinking)`](https://huggingface.co/openai/gpt-oss-20b) | 65.4 | 6.289 | 31.947 | 7.247 | 4.919 |
+| [`Qwen3-4B (thinking)`](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507) | 54.1 | 10.060 | 48.895 | 7.096 | 122.333 |
 
 `*`: Each simulation scenario runs for 60 minutes. We discard the first **5 minutes** as warm-up, then compute metrics over the next **20 minutes** (minutes 5 to 25). All evaluations are conducted on a **Mac Studio M3 Ultra**.
 
-**Conclusion**: DeepSignal-CyclePlan-4B-V1 (F16) achieves a 100% format success rate, the lowest average queue vehicles (3.504), and the highest throughput (8.611 veh/min) among all evaluated models. The Q4_K_M quantized version maintains strong performance with 98.1% format success rate while offering
+**Conclusion**: DeepSignal-CyclePlan-4B-V1 (F16) achieves a 100% format success rate, the lowest average queue vehicles (3.504), and the highest throughput (8.611 veh/min) among all evaluated models. The Q4_K_M quantized version maintains strong performance with a 98.1% format success rate while offering the fastest response time (1.674s).
 
 ## License
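A note on how the two response-related columns fit together: Format Success Rate is, presumably, the share of replies that parse into the required output format, and footnote `***` says LightGPT-8B-Llama3's Avg Response Time is averaged over those successful replies only. A minimal sketch of that bookkeeping, using hypothetical field names (`latency_s`, `format_ok`) that are not taken from this repo:

```python
from dataclasses import dataclass

@dataclass
class LLMCall:
    latency_s: float  # wall-clock seconds from request to complete reply
    format_ok: bool   # True if the reply parsed into the required format

def format_success_rate(calls: list[LLMCall]) -> float:
    """Percentage of replies that parsed into the required output format."""
    return 100.0 * sum(c.format_ok for c in calls) / len(calls)

def avg_response_time(calls: list[LLMCall], successful_only: bool = False) -> float:
    """Mean latency in seconds; successful_only=True mirrors the footnote-***
    treatment of LightGPT-8B-Llama3 (format-valid replies only)."""
    pool = [c for c in calls if c.format_ok] if successful_only else list(calls)
    return sum(c.latency_s for c in pool) / len(pool)
```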
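Footnote `*` pins down the measurement protocol: a 60-minute run, the first 5 minutes discarded as warm-up, and metrics scored over the following 20 minutes. A sketch of that windowing under those stated numbers (variable names are illustrative, not from this repo's harness):

```python
WARMUP_S = 5 * 60    # discarded warm-up, per footnote *
WINDOW_S = 20 * 60   # scored window: minutes 5-25 of the run

def in_window(t_s: float) -> bool:
    """True if a simulation event at time t_s falls in the scored window."""
    return WARMUP_S <= t_s < WARMUP_S + WINDOW_S

def throughput_veh_per_min(exit_times_s: list[float]) -> float:
    """Vehicles leaving the network per minute, counted in-window only."""
    return sum(in_window(t) for t in exit_times_s) / (WINDOW_S / 60)
```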
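Footnote `**` contrasts the LLM controllers with Max Pressure, a classic non-learned policy. For orientation only, here is a hedged sketch of the textbook max-pressure rule (serve the phase whose movements have the largest upstream-minus-downstream queue imbalance); the exact baseline formulation behind the table may differ:

```python
def max_pressure_phase(phases: dict[str, list[tuple[int, int]]]) -> str:
    """Pick the phase with the highest pressure, where each movement is an
    (upstream_queue, downstream_queue) pair and a phase's pressure is the
    sum of those differences. Illustrative only; not this repo's code."""
    def pressure(movements: list[tuple[int, int]]) -> int:
        return sum(up - down for up, down in movements)
    return max(phases, key=lambda p: pressure(phases[p]))

# Example: north-south serves queues (12, 3) and (8, 1) -> pressure 16;
# east-west serves (5, 4) and (6, 2) -> pressure 5, so "NS" is chosen.
assert max_pressure_phase({"NS": [(12, 3), (8, 1)], "EW": [(5, 4), (6, 2)]}) == "NS"
```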