BMD-45: Bengaluru Mobility Dataset
A large-scale CCTV vehicle detection benchmark for Indian urban traffic
Dataset Summary
BMD-45 is a large-scale, India-specific vehicle detection dataset captured from 3,679 operational CCTV cameras across Bengaluru — one of the world's most traffic-congested megacities.
| Statistic | Value |
|---|---|
| Total images | 45,986 (1920×1080 RGB) |
| Total annotations | ≈ 481,947 bounding boxes |
| Vehicle classes | 14 fine-grained categories |
| Camera sources | 3,679 Safe City CCTV cameras |
| Train split | 35,792 images / 375,003 annotations |
| Val split | 10,194 images / 106,944 annotations |
| Test split | 5,110 images (annotations withheld) |
| Image resolution | 1920 × 1080 px |
| Annotation format | COCO JSON |
BMD-45 addresses critical gaps in existing traffic datasets: scale, camera-view diversity, and fine-grained Indian vehicle taxonomy — all absent from prior fixed-camera benchmarks (UA-DETRAC, TrafficCAM, IDD).
The dataset captures dense, heterogeneous, and unstructured traffic characteristic of Indian urban environments, with images sampled from long-duration CCTV recordings using spatial and temporal diversity criteria and difficulty scoring to prioritize challenging, high-information frames.
Attribution
More technical details about the dataset and models are available in our Technical Report. If you use these datasets or models, kindly cite the following:
```bibtex
@inproceedings{bmd45_2026,
  title     = {BMD-45: Bengaluru Mobility Dataset for Large-Scale Vehicle Detection from Urban CCTV},
  author    = {Akash Sharma and Chinmay Mhatre and Sankalp Gawali and Ruthvik Bokkasam and Brij Kishore and Vishwajeet Pattanaik and Tarun Rambha and Abdul R. Pinjari and Vijay Kovvali and Anirban Chakraborty and Punit Rathore and Raghu Krishnapuram and Yogesh Simmhan},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Findings (CVPRF26)},
  year      = {2026}
}
```
Dataset Structure
The dataset follows the folder structure described below.
1. BMD-45-Train/
Contains 35,792 images (~78% of the dataset) used for training.
- **images_000/** through **images_007/** – Training images organized into subfolders for convenience.
  - `images_000/*` – Training images (`41.png`, `47.png`, …). Each image filename is unique across the entire dataset.
  - `images_001/*`, etc. – Additional subfolders following the same structure.
- **_annotations.coco.json** – Majority Voting consensus annotations for training images in COCO JSON format.
- **metadata.jsonl** – HuggingFace ImageFolder annotations (one JSON line per image).
2. BMD-45-Val/
Contains 10,194 images (~22% of the dataset) used for validation.
- **images_000/** through **images_002/** – Validation images organized into subfolders.
  - `images_000/*` – Validation images. All filenames are globally unique across both training and validation sets.
  - `images_001/*`, etc. – Additional subfolders following the same structure.
- **_annotations.coco.json** – Majority Voting consensus annotations for validation images in COCO JSON format.
- **metadata.jsonl** – HuggingFace ImageFolder annotations (one JSON line per image).
Annotation JSON Schema
Each _annotations.coco.json file follows the standard COCO structure:
- **images** — list of image metadata: `id`, `file_name`, `width`, `height`
- **annotations** — object instances: `id`, `image_id`, `category_id`, `bbox` `[x, y, width, height]`, `area`
- **categories** — class taxonomy (IDs and names below)
Each metadata.jsonl contains one JSON line per image:
```json
{"file_name": "images_000/41.png", "objects": {"bbox": [[x, y, w, h], ...], "categories": [0, 2, ...]}}
```
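Since `metadata.jsonl` holds one JSON object per line, it can be read with the standard `json` module. A minimal sketch, using an illustrative record that matches the schema above (the box coordinates are made up for the example):

```python
import json

# One metadata.jsonl record, matching the schema shown above.
# This line is illustrative; real files contain one such line per image.
line = '{"file_name": "images_000/41.png", "objects": {"bbox": [[10, 20, 50, 40]], "categories": [0]}}'

record = json.loads(line)
boxes = record["objects"]["bbox"]         # COCO-style [x, y, width, height]
labels = record["objects"]["categories"]  # class IDs (see taxonomy below)

print(record["file_name"], len(boxes), labels)
```

In practice you would iterate over the file line by line (`for line in open(path)`), parsing each record the same way.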
Annotation Pipeline
- Source: frames captured between 06:00 – 18:00 IST during February 2025
- Pre-annotation: generated using a fine-tuned RT-DETR v2-X model trained on ≈ 3 k expert-labeled images
- Crowdsourcing: > 550 student volunteers corrected or validated predictions through a gamified web interface with leaderboards
- Consensus: majority voting applied to derive final annotations
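The label-vote part of the consensus step can be illustrated with a toy sketch: for a single pre-annotated box, keep the class most volunteers agreed on. This is only an illustration; the dataset's actual protocol (including box matching across annotators and tie-breaking) is described in the technical report.

```python
from collections import Counter

def majority_vote(labels):
    """Return the most common label among volunteer annotations.

    Toy illustration only: the real consensus pipeline also matches
    boxes across annotators, which is not shown here.
    """
    label, _ = Counter(labels).most_common(1)[0]
    return label

# Three volunteers labelled the same box; two said class 7 (Two-wheeler).
print(majority_vote([7, 7, 11]))  # → 7
```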
Loading the Dataset
```python
from datasets import load_dataset

# Load from HuggingFace Hub
ds = load_dataset("iisc-aim/BMD-45")

# Access a sample
sample = ds["train"][0]
image = sample["image"]
objects = sample["objects"]  # {"bbox": [...], "categories": [...]}
```
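The `bbox` entries use COCO's `[x, y, width, height]` convention; converting them to corner format is a common next step for plotting or IoU computation. A minimal sketch, independent of the loading code above:

```python
def xywh_to_xyxy(box):
    """Convert a COCO [x, y, width, height] box to [x1, y1, x2, y2]."""
    x, y, w, h = box
    return [x, y, x + w, y + h]

# Example with an illustrative box (not taken from the dataset):
print(xywh_to_xyxy([100, 50, 30, 60]))  # → [100, 50, 130, 110]
```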
Vehicle Classes
| ID | Class Name | Description |
|---|---|---|
| 0 | Hatchback | Small passenger cars without a protruding rear boot ("dickey"). |
| 1 | Sedan | Passenger cars with a low-slung design and a separate protruding rear boot ("dickey"). |
| 2 | SUV | Car-like vehicles with high ground clearance, a sturdy body, and no protruding boot. |
| 3 | MUV | Large vehicles with three seating rows, combining passenger and cargo functionality. |
| 4 | Bus | Large passenger vehicles used for public or private transport, including office shuttles and intercity buses. |
| 5 | Truck | Heavy goods carriers with a front cabin and a rear cargo compartment. |
| 6 | Three-wheeler | Compact vehicles with one front wheel and two rear wheels, featuring a covered passenger cabin. |
| 7 | Two-wheeler | Motorbikes and scooters for single or double riders. Bounding boxes include both vehicle and rider. |
| 8 | LCV | Lightweight goods carriers used for short- to medium-distance transport. |
| 9 | Mini-bus | Shorter, compact buses with fewer seats; larger than a Tempo Traveller, often featuring a flat front. |
| 10 | Tempo-traveller | Medium-sized passenger vans with tall roofs and side windows; larger than vans but smaller than minibuses, with a protruding front. |
| 11 | Bicycle | Non-motorized, manually pedalled vehicles including geared, non-geared, women's, and children's cycles. Bounding boxes include both vehicle and rider. |
| 12 | Van | Medium-sized vehicles for transporting goods or people, typically with a flat front and sliding side doors; smaller than Tempo Travellers. |
| 13 | Other | Vehicles not covered in other classes, including agricultural, specialized, or unconventional designs. |
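For convenience, the taxonomy above can be kept as a plain dict keyed by class ID (names copied from the table; this is a convenience mapping for user code, not an official artifact of the release):

```python
# BMD-45 class IDs → names, transcribed from the taxonomy table above.
BMD45_CLASSES = {
    0: "Hatchback", 1: "Sedan", 2: "SUV", 3: "MUV",
    4: "Bus", 5: "Truck", 6: "Three-wheeler", 7: "Two-wheeler",
    8: "LCV", 9: "Mini-bus", 10: "Tempo-traveller",
    11: "Bicycle", 12: "Van", 13: "Other",
}

# Map category IDs (as found in "objects"/"categories") to names.
ids = [0, 2, 7]
print([BMD45_CLASSES[i] for i in ids])  # → ['Hatchback', 'SUV', 'Two-wheeler']
```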
Baseline Results
To justify the need for BMD-45, SOTA detectors were trained on existing datasets and evaluated on an expert-annotated reference set of 3,000 Bengaluru CCTV images. All models trained on other datasets fall well short of practical accuracy:
| Model | Training Data | mAP@50:95 |
|---|---|---|
| D-FINE X | IDD | 0.46 |
| RT-DETRv2 X | TrafficCAM | 0.39 |
| Grounding DINO | Zero-shot | 0.13 |
These are cross-dataset baselines (models trained on other datasets, not BMD-45). Results for BMD-45-trained models are reported in the paper.
The following figure shows per-class AP@50:95 for selected models trained on BMD-45 and evaluated on the BMD-45 validation split:
Models trained on BMD-45 also demonstrate cross-dataset generalization to UA-DETRAC, IDD, and TrafficCAM using a taxonomy-aware class mapping protocol (see paper for full results).
Cross-Dataset Generalization
BMD-45-trained models are evaluated on:
- UA-DETRAC — highway fixed-camera dataset (China, 4 classes)
- IDD — Indian Driving Dataset (ego-centric, 9 classes)
- TrafficCAM — Indian CCTV dataset (9 classes, 4,400 frames)
A taxonomy mapping protocol merges BMD-45's 14 fine-grained classes into the coarser categories of each target dataset for fair comparison.
The following figures compare per-class AP@50:95 for models trained on BMD-45 versus models trained on IDD, UA-DETRAC, and TrafficCAM, all evaluated on the BMD-45 validation split:
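As an illustration of such a taxonomy mapping, fine-grained BMD-45 labels can be merged into a coarser target taxonomy via a lookup table. The mapping below is hypothetical, chosen only for the example; the actual per-dataset mappings are given in the paper.

```python
# Hypothetical merge of BMD-45's 14 classes into coarse groups; the
# paper's actual per-dataset protocol may group classes differently.
FINE_TO_COARSE = {
    "Hatchback": "car", "Sedan": "car", "SUV": "car", "MUV": "car",
    "Van": "car", "Bus": "bus", "Mini-bus": "bus", "Tempo-traveller": "bus",
    "Truck": "truck", "LCV": "truck",
    "Two-wheeler": "motorbike", "Bicycle": "motorbike",
    "Three-wheeler": "others", "Other": "others",
}

def merge_labels(fine_labels):
    """Map fine-grained class names onto the coarse taxonomy."""
    return [FINE_TO_COARSE[name] for name in fine_labels]

print(merge_labels(["Sedan", "LCV", "Two-wheeler"]))  # → ['car', 'truck', 'motorbike']
```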
Comparison with Existing Datasets
| Dataset | Venue | Task | View | Frames | Annotations | Classes | Cameras | Location |
|---|---|---|---|---|---|---|---|---|
| IDD | WACV 2019 | D, S | Ego | 10K | 111.3K | 9 | — | IN |
| UA-DETRAC | CVIU 2020 | D, M | Fixed | 140K | 1.21M | 4 | 24 | CN |
| TrafficCAM | T-ITS 2025 | S | Fixed CCTV | 4.3K | 84.2K | 9 | NA | IN |
| BMD-45 (Ours) | CVPR-F 2026 | D | Fixed CCTV | 45K | 481.9K | 14 | 3,679 | IN |
D = Detection, S = Segmentation, M = Tracking; IN = India, CN = China
Collection & Processing Details
- Source: ≈ 3,679 Safe City surveillance cameras operated by Bengaluru Police
- Coverage: both junction and mid-block perspectives across multiple city zones
- Time period: February 2025, daytime hours (06:00–18:00 IST)
- Resolution: 1920 × 1080 RGB frames
- Selection: images with high vehicle density, occlusion, and diverse viewpoints prioritized
- Filename obfuscation: Image filenames are anonymized numeric IDs to prevent location inference
Intended Uses
- Training and benchmarking vehicle detection models for CCTV / fixed-camera deployment
- Research in Intelligent Transportation Systems (ITS) for Indian and developing-world cities
- Cross-dataset generalization studies for region-specific detection
- Studying detection under occlusion, heterogeneous traffic, and diverse viewpoints
License
- Dataset: CC BY 4.0 International
- Pre-trained Models: Apache 2.0
Acknowledgements
We thank the Bengaluru Traffic Police (BTP) and the Bengaluru Police for providing access to the Safe City camera data from which the image datasets used for this release were derived. We thank Capital One for sponsoring the prizes for the Urban Vision Hackathon competition. We thank IISc's AI and Robotics Technology Park (ARTPARK) and the Centre for Infrastructure, Sustainable Transportation and Urban Planning (CiSTUP) for funding the annotation and model-training efforts, and the Kotak IISc AI-ML Centre (KIAC) for providing the GPU resources required to train the models. We acknowledge the outreach support provided by the ACM India Council and the IEEE India Council to encourage chapter volunteers to participate in the hackathon. Lastly, we thank the AI Centers of Excellence (AI COE) initiative of the Ministry of Education, their Apex Committee members, and the AIRAWAT Research Foundation, whose support helped catalyze these efforts.
Created by the AI for Integrated Mobility (AIM) group at the Indian Institute of Science (IISc), Bengaluru.