# DeepSignal-4B-V1 (GGUF)

This repository provides a GGUF model file for local inference (e.g., `llama.cpp` / LM Studio). It is intended for traffic-signal-control analysis and related text-generation workflows.

For details, see our repository at [`AIMSLaboratory/DeepSignal`](https://github.com/AIMSLaboratory/DeepSignal).

## Files
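The repository's usage notes describe the model's control action as a number giving the phase index (starting from 0) and a number of seconds giving the duration. A minimal sketch of extracting that pair from a model reply is below; the function name `parse_action` and the exact reply wording are illustrative assumptions, not part of this repository:

```python
import re

def parse_action(reply: str) -> tuple[int, float]:
    """Extract (phase_index, duration_seconds) from a model reply.

    Assumes the reply contains a phase index (0-based) followed by a
    duration in seconds, e.g. "2, 30" or "phase 2 for 30 seconds"
    (illustrative formats, not guaranteed by the model card).
    """
    numbers = re.findall(r"-?\d+(?:\.\d+)?", reply)
    if len(numbers) < 2:
        raise ValueError(f"could not find a phase and duration in: {reply!r}")
    phase, seconds = int(float(numbers[0])), float(numbers[1])
    if phase < 0 or seconds <= 0:
        raise ValueError(f"invalid action: phase={phase}, seconds={seconds}")
    return phase, seconds
```

For example, `parse_action("2, 30")` returns `(2, 30.0)`.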
## Evaluation (Traffic Simulation)

### Performance Metrics Comparison by Model $^{*}$

| Model | Avg Saturation | Avg Queue Length (veh/s) | Avg Throughput (veh/5min) | Avg Response Time (s) |
|:---:|:---:|:---:|:---:|:---:|
| [`GPT-OSS-20B (thinking)`](https://huggingface.co/openai/gpt-oss-20b) | 0.3801 | 0.476210 | 77.910075 | 6.768 |
| **DeepSignal-4B (Ours)** | 0.4219 | 0.498338 | 79.883430 | 2.131 |
| [`Qwen3-30B-A3B`](https://huggingface.co/Qwen/Qwen3-VL-30B-A3B-Instruct) | 0.4314 | 0.580256 | 79.059117 | 2.727 |
| [`Qwen3-4B`](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507) | 0.4655 | 2.453933 | 75.711907 | 1.994 |
| Max Pressure | 0.4647 | 0.639584 | 77.235637 | `**` |
| [`LightGPT-8B-Llama3`](https://huggingface.co/lightgpt/LightGPT-8B-Llama3) | 0.5230 | 1.258782 | 75.512073 | 3.025 `***` |

`*`: Each simulation scenario runs for 60 minutes. We discard the first **5 minutes** as warm-up, then compute metrics over the next **20 minutes** (minutes 5 to 25). We cap the evaluation window because, when an LLM controls signal timing for only a single intersection, spillback from neighboring intersections may occur after roughly 20 minutes and destabilize the scenario. All evaluations are conducted on a **Mac Studio M3 Ultra**.

`**`: Max Pressure is a fixed signal-timing optimization algorithm (not an LLM), so we omit its Avg Response Time; this metric is only defined for LLM-based signal-timing optimization.

`***`: For LightGPT-8B-Llama3, Avg Response Time is computed over successful responses only.
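The windowing and averaging described in the footnotes (discard a 5-minute warm-up, average over the next 20 minutes; average response time over successful responses only) can be sketched as follows. This is an illustration of the stated protocol, not the actual evaluation harness; the sample layout and function names are assumptions:

```python
def window_mean(samples, t_start=300.0, t_end=1500.0):
    """Average a per-second metric over the evaluation window.

    `samples` is a list of (t_seconds, value) pairs from a 60-minute run.
    Per the protocol above, the first 5 minutes (300 s) are discarded as
    warm-up and the mean is taken over minutes 5-25 (300 s to 1500 s).
    """
    window = [v for t, v in samples if t_start <= t < t_end]
    if not window:
        raise ValueError("no samples in evaluation window")
    return sum(window) / len(window)

def mean_response_time(latencies):
    """Avg Response Time over successful responses only (cf. note `***`).

    Failed queries are assumed to be recorded as None and are excluded.
    """
    ok = [x for x in latencies if x is not None]
    return sum(ok) / len(ok) if ok else float("nan")
```

For example, `mean_response_time([2.0, None, 4.0])` averages only the two successful responses and returns `3.0`.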