license: apache-2.0
---

# DeepSignal-4B-V1 (GGUF)

This repository provides a GGUF model file for local inference (e.g., `llama.cpp` / LM Studio). It is intended for traffic-signal-control analysis and related text-generation workflows.

For details, check our repository at [`AIMSLaboratory/DeepSignal`](https://github.com/AIMSLaboratory/DeepSignal).

## Files

## Quickstart (llama.cpp)

```bash
llama-cli -m DeepSignal-4B_V1.F16.gguf -p "You are a traffic management expert. You can use your traffic knowledge to solve the traffic signal control task.
Based on the given traffic {scene} and {state}, predict the next signal phase and its duration.
You must answer directly, the format must be: next signal phase: {number}, duration: {seconds} seconds
where the number is the phase index (starting from 0) and the seconds is the duration (usually between 20-90 seconds)."
```

*You need to provide the {scene} (the total number of phases, which phase controls which lanes/directions, the current phase ID/number, etc.) and the {state} (the number of queuing vehicles per lane and the throughput per lane during the current phase, etc.).*
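In practice the `{scene}` and `{state}` placeholders are filled from your intersection data before the prompt is sent. A minimal sketch of how that could be done, plus a parser for the fixed answer format, is below; the `scene`/`state` dictionary fields (`num_phases`, `phase_lanes`, `queue`, etc.) are illustrative assumptions, not a schema required by the model.

```python
import re

# Hypothetical helper: render {scene} and {state} descriptions from
# intersection data and splice them into the prompt template above.
def build_prompt(scene: dict, state: dict) -> str:
    scene_desc = (
        f"The intersection has {scene['num_phases']} phases. "
        + " ".join(
            f"Phase {i} controls {lanes}."
            for i, lanes in enumerate(scene["phase_lanes"])
        )
        + f" The current phase is {scene['current_phase']}."
    )
    state_desc = (
        "Queuing vehicles per lane: "
        + ", ".join(f"{lane}={n}" for lane, n in state["queue"].items())
        + ". Throughput per lane during the current phase: "
        + ", ".join(f"{lane}={n}" for lane, n in state["throughput"].items())
        + "."
    )
    return (
        "You are a traffic management expert. You can use your traffic knowledge "
        "to solve the traffic signal control task.\n"
        f"Based on the given traffic {scene_desc} and {state_desc}, predict the next "
        "signal phase and its duration.\n"
        "You must answer directly, the format must be: "
        "next signal phase: {number}, duration: {seconds} seconds\n"
        "where the number is the phase index (starting from 0) and the seconds is "
        "the duration (usually between 20-90 seconds)."
    )

# Parse the model's fixed-format reply into (phase_index, duration_seconds).
def parse_reply(reply: str) -> tuple[int, int]:
    m = re.search(r"next signal phase:\s*(\d+),\s*duration:\s*(\d+)\s*seconds", reply)
    if m is None:
        raise ValueError(f"unexpected reply format: {reply!r}")
    return int(m.group(1)), int(m.group(2))
```

The strict answer format makes the reply machine-parseable, so the model can be dropped into a closed control loop without free-text interpretation.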

## Evaluation (Traffic Simulation)

### Performance Metrics Comparison by Model

| Model | Avg Saturation | Avg Queue Length | Max Saturation | Max Queue Length | Avg Congestion Index |
|---|---:|---:|---:|---:|---:|
| [`Qwen3-30B-A3B`](https://huggingface.co/Qwen/Qwen3-VL-30B-A3B-Instruct) | 0.1550 | 5.5000 | 0.1550 | 5.4995 | 0.1500 |
| DeepSignal-4B (Ours) | 0.1580 | 5.5500 | 0.1580 | 5.5498 | 0.1550 |
| [`LightGPT-8B-Llama3`](https://huggingface.co/lightgpt/LightGPT-8B-Llama3) | 0.1720 | 6.1000 | 0.1720 | 6.1000 | 0.1950 |
| SFT | 0.1780 | 6.2500 | 0.1780 | 6.2500 | 0.2050 |
| Last Round GRPO | 0.1850 | 6.4500 | 0.1850 | 6.4500 | 0.2150 |
| [`Qwen3-4B`](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507) | 0.1980 | 7.2000 | 0.1980 | 7.1989 | 0.2450 |
| Max Pressure | 0.2050 | 7.8000 | 0.2049 | 7.7968 | 0.2550 |
| [`GPT-OSS-20B`](https://huggingface.co/openai/gpt-oss-20b) | 0.2250 | 8.5001 | 0.2250 | 8.4933 | 0.3050 |

### Congestion Level Distribution by Model (%)

| Model | Light congestion | Smooth | Very smooth |
|---|---:|---:|---:|
| DeepSignal-4B (Ours) | 0.00 | 12.00 | 88.00 |
| [`GPT-OSS-20B`](https://huggingface.co/openai/gpt-oss-20b) | 2.00 | 53.33 | 44.67 |
| [`LightGPT-8B-Llama3`](https://huggingface.co/lightgpt/LightGPT-8B-Llama3) | 0.00 | 21.00 | 79.00 |
| Max Pressure | 0.00 | 36.44 | 63.56 |
| [`Qwen3-30B-A3B`](https://huggingface.co/Qwen/Qwen3-VL-30B-A3B-Instruct) | 0.00 | 10.00 | 90.00 |
| [`Qwen3-4B`](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507) | 2.33 | 32.00 | 65.67 |
| Qwen3-4B-SFT | 0.00 | 23.33 | 76.67 |

### Visualization

![Evaluation results](evaluation_results.png)

## Notes

- The results above are reported from a SUMO-based traffic simulation evaluation.
- If you need to reproduce the evaluation, include the exact scenario configuration, random seeds, and controller settings in a separate README or paper appendix.
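As a starting point for reproduction, per-lane queue lengths in a SUMO run can be logged through the TraCI Python API. This is a sketch only: the scenario file name and step count are placeholders, and the actual evaluation's metric definitions may differ.

```python
# Sketch: average per-lane queue length over a SUMO run via TraCI.
# Requires a SUMO installation; "scenario.sumocfg" is a placeholder.
def average_queue_lengths(cfg="scenario.sumocfg", steps=3600):
    import traci  # imported here so the helper below works without SUMO
    traci.start(["sumo", "-c", cfg])
    per_lane = {}
    try:
        for _ in range(steps):
            traci.simulationStep()
            for lane in traci.lane.getIDList():
                per_lane.setdefault(lane, []).append(
                    traci.lane.getLastStepHaltingNumber(lane)  # halted ~= queued
                )
    finally:
        traci.close()
    return summarize(per_lane)

def summarize(per_lane):
    """Average the per-step queue samples collected for each lane."""
    return {lane: sum(v) / len(v) for lane, v in per_lane.items()}
```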