Enhancing LLM-based Autonomous Driving with Modular Traffic Light and Sign Recognition
Fabian Schmidt1,2 · Noushiq Mohammed Kayilan Abdul Nazar1 · Markus Enzweiler1 · Abhinav Valada2
1 Esslingen University of Applied Sciences · 2 University of Freiburg
TLS-Assist
TLS-Assist is a modular framework that detects and validates safety-critical traffic lights and signs from multi-view camera input using relevance prediction and temporal state validation. The refined detections are translated into structured natural language instructions and injected into the LLM-based autonomous driving agent to enhance its explicit traffic-rule awareness.
Updates
- [2026/03] We released the full integration code and fine-tuned models for LMDrive and BEVDriver.
- [2026/01] TLS-Assist is accepted to IEEE Intelligent Vehicles Symposium (IV) 2026!
- [2025/11] Paper released on arXiv!
Roadmap
- Release core `tlsr_module.py` and integration scripts.
- Release pre-trained weights for traffic lights and signs on Hugging Face.
- Release training pipeline for traffic light and sign detection.
Installation & Integration
TLS-Assist is designed as a modular extension. We currently provide integration files for two LLM-based autonomous driving baselines: LMDrive and BEVDriver.
1. Environment Setup
Set up the Conda environment for your chosen baseline by following their official repository instructions. Once the environment is activated, install the YOLO dependency required by TLS-Assist:
pip install ultralytics
2. Download Model Weights
Download the fine-tuned traffic light and traffic sign models from our Hugging Face repository. We recommend cloning this repository directly into the base directory of your chosen baseline (e.g., inside the LMDrive/ or BEVDriver/ root folder):
git clone https://huggingface.co/iis-esslingen/TLS-Assist
3. Integrating with the Baselines
To run the evaluations, you need to copy our provided files into the CARLA leaderboard structure of the baselines. Also, ensure you copy the core `tlsr_module.py` into a directory where the agent can import it (typically the `team_code` folder).
Option A: LMDrive Integration
- Copy `LMDrive/lmdriver_agent_tlsr.py` and `LMDrive/lmdriver_config_tlsr.py` into `LMDrive/leaderboard/team_code/`.
- Copy the core `tlsr_module.py` into `LMDrive/leaderboard/team_code/`.
- Copy the execution script `LMDrive/lmdrive_run_eval_tlsr.sh` into `LMDrive/leaderboard/scripts/`.
- Open `lmdriver_config_tlsr.py` and update the model paths to point to the weights you downloaded in Step 2.
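The Option A copy steps can be sketched as a single shell snippet, run from the LMDrive repository root. The `TLSR_SRC` variable is an assumption about where your TLS-Assist checkout lives; adjust it to your layout.

```shell
# Sketch of the Option A copy steps, run from the LMDrive repository root.
# TLSR_SRC is an assumed location of the TLS-Assist checkout; adjust as needed.
TLSR_SRC=${TLSR_SRC:-../TLS-Assist}

# Make sure the leaderboard target directories exist.
mkdir -p leaderboard/team_code leaderboard/scripts

# Copy the agent, its config, and the core module into team_code/.
for f in LMDrive/lmdriver_agent_tlsr.py LMDrive/lmdriver_config_tlsr.py tlsr_module.py; do
  cp "$TLSR_SRC/$f" leaderboard/team_code/ || echo "not found: $TLSR_SRC/$f"
done

# Copy the evaluation script into scripts/.
cp "$TLSR_SRC/LMDrive/lmdrive_run_eval_tlsr.sh" leaderboard/scripts/ \
  || echo "not found: $TLSR_SRC/LMDrive/lmdrive_run_eval_tlsr.sh"
```

Remember to edit the model paths in `lmdriver_config_tlsr.py` afterwards, as described above.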
Option B: BEVDriver Integration
- Copy `BEVDriver/bevdriver_agent_tlsr.py` and `BEVDriver/bevdriver_config_tlsr.py` into `BEVDriver/leaderboard/team_code/`.
- Copy the core `tlsr_module.py` into `BEVDriver/leaderboard/team_code/`.
- Copy the execution script `BEVDriver/bevdriver_run_eval_tlsr.sh` into `BEVDriver/leaderboard/scripts/`.
- Open `bevdriver_config_tlsr.py` and update the model paths to point to the weights you downloaded in Step 2.
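Analogously, the Option B copy steps can be sketched as follows, run from the BEVDriver repository root. Again, `TLSR_SRC` is an assumed path to your TLS-Assist checkout.

```shell
# Sketch of the Option B copy steps, run from the BEVDriver repository root.
# TLSR_SRC is an assumed location of the TLS-Assist checkout; adjust as needed.
TLSR_SRC=${TLSR_SRC:-../TLS-Assist}

# Make sure the leaderboard target directories exist.
mkdir -p leaderboard/team_code leaderboard/scripts

# Copy the agent, its config, and the core module into team_code/.
for f in BEVDriver/bevdriver_agent_tlsr.py BEVDriver/bevdriver_config_tlsr.py tlsr_module.py; do
  cp "$TLSR_SRC/$f" leaderboard/team_code/ || echo "not found: $TLSR_SRC/$f"
done

# Copy the evaluation script into scripts/.
cp "$TLSR_SRC/BEVDriver/bevdriver_run_eval_tlsr.sh" leaderboard/scripts/ \
  || echo "not found: $TLSR_SRC/BEVDriver/bevdriver_run_eval_tlsr.sh"
```

As with LMDrive, update the model paths in `bevdriver_config_tlsr.py` afterwards.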
4. Running the Evaluation
To execute the evaluation, run the corresponding bash script from the baseline's root directory and provide the desired benchmark routes (benchmark_tiny, benchmark_short, or benchmark_long) as an argument.
For LMDrive:
./leaderboard/scripts/lmdrive_run_eval_tlsr.sh benchmark_tiny
For BEVDriver:
bash ./leaderboard/scripts/bevdriver_run_eval_tlsr.sh benchmark_tiny
Citation
If you find our work useful in your research, please consider citing:
@misc{schmidt2025tlsassist,
title={Enhancing LLM-based Autonomous Driving with Modular Traffic Light and Sign Recognition},
author={Fabian Schmidt and Noushiq Mohammed Kayilan Abdul Nazar and Markus Enzweiler and Abhinav Valada},
year={2025},
eprint={2511.14391},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2511.14391},
}
