---
license: mit
library_name: keras
tags:
  - autonomous-driving
  - end-to-end
  - imitation-learning
  - self-driving
  - udacity
  - vision
  - cnn
  - dave2
  - nvidia
datasets:
  - maxim-igenbergs/thesis-data
---

# DAVE-2 End-to-End Driving Model

An implementation of NVIDIA's DAVE-2 architecture, trained in the Udacity self-driving-car simulator for the bachelor's thesis *Dual-Axis Testing of Visual Robustness and Topological Generalization in Vision-based End-to-End Driving Models*.

## Model Description

DAVE-2 is the original end-to-end driving architecture proposed by NVIDIA in 2016. It learns to map raw camera images directly to steering and throttle commands through imitation learning.
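In practice, imitation learning here amounts to supervised regression on recorded (image, control) pairs. A minimal sketch, assuming an MSE loss and Adam optimizer (the helper and its hyperparameters are illustrative, not taken from the thesis code):

```python
import numpy as np
from tensorflow import keras

def train_imitation(model, images, controls, epochs=10, batch_size=64):
    """Behavioral cloning: regress recorded [steering, throttle] targets.

    `images` is an (N, 66, 200, 3) array of camera frames and `controls`
    an (N, 2) array of recorded [steering, throttle] labels. Optimizer,
    loss, and split are assumptions, not the thesis configuration.
    """
    model.compile(optimizer=keras.optimizers.Adam(1e-4), loss="mse")
    return model.fit(
        images,
        controls,
        validation_split=0.1,
        epochs=epochs,
        batch_size=batch_size,
        verbose=0,
    )
```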

### Architecture

```
Input: RGB image (66 × 200 × 3)
    ↓
Conv2D(24, 5×5, stride=2) + ELU
Conv2D(36, 5×5, stride=2) + ELU
Conv2D(48, 5×5, stride=2) + ELU
Conv2D(64, 3×3) + ELU
Conv2D(64, 3×3) + ELU
    ↓
Flatten
    ↓
Dense(1164) + ELU
Dense(100) + ELU
Dense(50) + ELU
Dense(10) + ELU
    ↓
Dense(2) → Output: [steering, throttle]
```
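The stack above can be sketched in Keras as follows. This is an assumed reconstruction from the diagram, not the thesis training code; a 2-unit head producing [steering, throttle] is taken as the output layer.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_dave2():
    """DAVE-2 as diagrammed: 5 conv layers + 4 dense layers, ELU throughout."""
    return keras.Sequential([
        keras.Input(shape=(66, 200, 3)),          # RGB input, 66 × 200 × 3
        layers.Conv2D(24, 5, strides=2, activation="elu"),
        layers.Conv2D(36, 5, strides=2, activation="elu"),
        layers.Conv2D(48, 5, strides=2, activation="elu"),
        layers.Conv2D(64, 3, activation="elu"),
        layers.Conv2D(64, 3, activation="elu"),
        layers.Flatten(),
        layers.Dense(1164, activation="elu"),
        layers.Dense(100, activation="elu"),
        layers.Dense(50, activation="elu"),
        layers.Dense(10, activation="elu"),
        layers.Dense(2),                          # [steering, throttle]
    ])
```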

## Checkpoints

| Map      | Checkpoint                  |
|----------|-----------------------------|
| GenRoads | `genroads_20251028-145557/` |
| Jungle   | `jungle_20251209-175046/`   |

### Files per Checkpoint

- `best_model.h5`: Keras model weights
- `meta.json`: training configuration and hyperparameters
- `history.csv`: training/validation metrics per epoch
- `loss_curve.png`: visualization of training progress
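A checkpoint can be restored with `keras.models.load_model`. The directory name below is one of the checkpoints listed above, assumed to be downloaded locally; the `predict_controls` helper is an illustrative sketch, not part of the release.

```python
import numpy as np
from tensorflow import keras

def predict_controls(model, frame):
    """Return (steering, throttle) predicted for a single RGB frame.

    `frame` is expected as a (66, 200, 3) array matching the model input.
    """
    batch = np.expand_dims(np.asarray(frame, dtype=np.float32), axis=0)
    steering, throttle = model(batch, training=False).numpy()[0]
    return float(steering), float(throttle)

if __name__ == "__main__":
    # Assumes the checkpoint directory sits next to this script.
    model = keras.models.load_model("genroads_20251028-145557/best_model.h5")
    steering, throttle = predict_controls(model, np.zeros((66, 200, 3)))
    print(steering, throttle)
```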

## Citation

```bibtex
@thesis{igenbergs2026dualaxis,
  title={Dual-Axis Testing of Visual Robustness and Topological Generalization in Vision-based End-to-End Driving Models},
  author={Igenbergs, Maxim},
  school={Technical University of Munich},
  year={2026},
  type={Bachelor's Thesis}
}
```

## Related