---
license: mit
library_name: keras
tags:
- autonomous-driving
- end-to-end
- imitation-learning
- self-driving
- udacity
- vision
- cnn
- dave2
- nvidia
datasets:
- maxim-igenbergs/thesis-data
---
# DAVE-2 End-to-End Driving Model
Implementation of NVIDIA's DAVE-2 architecture trained on data collected in the Udacity self-driving car simulator, for the bachelor's thesis *Dual-Axis Testing of Visual Robustness and Topological Generalization in Vision-based End-to-End Driving Models*.
## Model Description
DAVE-2 is the original end-to-end driving architecture proposed by NVIDIA in 2016. It learns to map raw camera images directly to low-level control commands through imitation learning; this implementation predicts both steering and throttle.
### Architecture
```
Input: RGB Image (66 × 200 × 3)
↓
Conv2D(24, 5×5, stride=2) + ELU
Conv2D(36, 5×5, stride=2) + ELU
Conv2D(48, 5×5, stride=2) + ELU
Conv2D(64, 3×3) + ELU
Conv2D(64, 3×3) + ELU
↓
Flatten
↓
Dense(1164) + ELU
Dense(100) + ELU
Dense(50) + ELU
Dense(10) + ELU
↓
Output: [steering, throttle]
```
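The layer stack above can be sketched in Keras roughly as follows. This is an illustrative reconstruction, not the exact training code: the final `Dense(2)` head is inferred from the `[steering, throttle]` output, and activation placement follows the diagram.

```python
# Sketch of the DAVE-2 stack described above (assumes TensorFlow/Keras).
from tensorflow import keras
from tensorflow.keras import layers

def build_dave2() -> keras.Model:
    """Build the DAVE-2 network: five conv layers + four ELU dense layers."""
    return keras.Sequential([
        layers.Input(shape=(66, 200, 3)),                  # RGB input
        layers.Conv2D(24, 5, strides=2, activation="elu"),
        layers.Conv2D(36, 5, strides=2, activation="elu"),
        layers.Conv2D(48, 5, strides=2, activation="elu"),
        layers.Conv2D(64, 3, activation="elu"),
        layers.Conv2D(64, 3, activation="elu"),
        layers.Flatten(),
        layers.Dense(1164, activation="elu"),
        layers.Dense(100, activation="elu"),
        layers.Dense(50, activation="elu"),
        layers.Dense(10, activation="elu"),
        layers.Dense(2),                                   # [steering, throttle]
    ])
```

Check `meta.json` in each checkpoint for the hyperparameters actually used in training.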
## Checkpoints
| Map | Checkpoint |
|-----|------------|
| GenRoads | `genroads_20251028-145557/` |
| Jungle | `jungle_20251209-175046/` |
### Files per Checkpoint
- `best_model.h5`: Keras model weights
- `meta.json`: Training configuration and hyperparameters
- `history.csv`: Training/validation metrics per epoch
- `loss_curve.png`: Visualization of training progress
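A minimal loading-and-inference sketch, assuming the checkpoint layout above. The preprocessing shown (a raw 66×200×3 float frame) is an assumption; consult `meta.json` for the exact input pipeline used during training.

```python
# Sketch: load a checkpoint's best_model.h5 and predict controls for one frame.
import numpy as np
from tensorflow import keras

def predict_controls(model_path: str, frame: np.ndarray) -> tuple[float, float]:
    """Return (steering, throttle) for a single 66x200x3 RGB frame."""
    model = keras.models.load_model(model_path, compile=False)
    batch = frame[np.newaxis].astype(np.float32)  # add batch dimension
    steering, throttle = model.predict(batch, verbose=0)[0]
    return float(steering), float(throttle)

# Example (path from the checkpoint table above):
# steering, throttle = predict_controls(
#     "genroads_20251028-145557/best_model.h5", frame)
```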
## Citation
```bibtex
@thesis{igenbergs2026dualaxis,
  title       = {Dual-Axis Testing of Visual Robustness and Topological Generalization in Vision-based End-to-End Driving Models},
  author      = {Igenbergs, Maxim},
  institution = {Technical University of Munich},
  year        = {2026},
  type        = {Bachelor's Thesis}
}
```
## Related
- [DAVE-2-GRU Driving Model](https://huggingface.co/maxim-igenbergs/dave2-gru)
- [ViT Driving Model](https://huggingface.co/maxim-igenbergs/vit)
- [TCP Driving Model](https://huggingface.co/maxim-igenbergs/tcp-carla-repro)
- [Training Data](https://huggingface.co/datasets/maxim-igenbergs/thesis-data)
- [Evaluation Runs](https://huggingface.co/datasets/maxim-igenbergs/thesis-runs) |