docs: update dataset structure section to reflect tar shard format, remove masks/ reference

README.md (CHANGED):
---
license: cc-by-4.0
task_categories:
- object-detection
- image-segmentation
task_ids:
- vehicle-detection
tags:
- autonomous-driving
- indian-roads
- bdd100k
- computer-vision
- detection
pretty_name: Indian Road Driving Dataset
size_categories:
- 100K<n<1M
---

# Indian Road Driving Dataset

The **Indian Road Driving Dataset** is the largest open dataset of annotated Indian road footage, created by ThirdEye Labs. It addresses the critical gap in autonomous driving datasets for Indian road conditions.

---

## Why Indian Roads?

Indian roads present unique challenges absent from existing datasets (BDD100K, nuScenes, Waymo):

- Dense mixed traffic with unpredictable behavior
- Auto-rickshaws, cattle, and informal lane usage
- Extreme lighting conditions
- **63 million vehicles and 1.4 billion people**, yet no large-scale annotated dataset existed

---

## Dataset Statistics

| Metric | Value |
|--------|-------|
| **Total clips** | 8,441 |
| **Annotated frames** | 646,014 |
| **Object detections** | 6,896,202 |
| **Segmentation masks** | 1,290,463 |
| **GPS-tagged frames** | ✅ |
| **Annotation format** | BDD100K |
| **Capture device** | CP Plus dashcam |
| **Location** | Delhi NCR, India |
| **Conditions** | Day · Night · Dusk · Rain |

---

## Detection Classes (12 classes)

- **person** – Pedestrians
- **rider** – Motorcyclists/cyclists with rider
- **car** – Passenger cars
- **truck** – Trucks and tempos
- **bus** – Buses
- **motorcycle** – Motorcycles (unridden)
- **bicycle** – Bicycles
- **autorickshaw** – Auto-rickshaws (tuk-tuks)
- **animal** – Cattle, dogs, and other animals on road
- **vehicle fallback** – Unclassified vehicles
- **traffic light** – Traffic signals
- **traffic sign** – Road signs and boards

---

## Dataset Structure

Data is stored as **646 WebDataset tar shards** (`data/train-00000-of-00646.tar` … `data/train-00645-of-00646.tar`), each containing ~1,000 frames. Each frame has 3 files inside the shard:

```
{clip_id}_{frame:04d}.jpg   # keyframe image
{clip_id}_{frame:04d}.png   # segmentation mask
{clip_id}_{frame:04d}.json  # BDD100K annotations (detections + scene attributes)
```

Standalone annotation files are also provided for convenient bulk access:

```
annotations/
├── detection.json          # BDD100K format – all 646,014 frames (1.3 GB)
└── scene_attributes.json   # per-clip weather, time of day, scene type
gps/
└── gps_tracks.json         # GPS coordinates per clip
```

---
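A shard can also be read with the standard-library `tarfile` module, without any dataset tooling. This is a minimal sketch that groups a shard's three per-frame files by their shared stem; the `group_frames` helper is ours (not part of the dataset), and it runs against a tiny in-memory fake shard so it needs no download:

```python
import io
import json
import tarfile
from collections import defaultdict

def group_frames(fileobj):
    """Group a shard's .jpg/.png/.json members by their shared frame stem."""
    frames = defaultdict(dict)
    with tarfile.open(fileobj=fileobj) as tar:
        for member in tar.getmembers():
            stem, ext = member.name.rsplit(".", 1)
            frames[stem][ext] = tar.extractfile(member).read()
    return dict(frames)

# Build a tiny in-memory shard so the sketch runs without the real data.
demo = io.BytesIO()
with tarfile.open(fileobj=demo, mode="w") as tar:
    for name, payload in [
        ("clip_0001_0000.jpg", b"\xff\xd8fake-jpeg"),
        ("clip_0001_0000.png", b"\x89PNG-fake"),
        ("clip_0001_0000.json", json.dumps({"labels": []}).encode()),
    ]:
        info = tarfile.TarInfo(name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))
demo.seek(0)

frames = group_frames(demo)
labels = json.loads(frames["clip_0001_0000"]["json"])["labels"]
```

For real data, pass an opened shard the same way, e.g. `group_frames(open("data/train-00000-of-00646.tar", "rb"))`.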

## Usage

### Load with `datasets`

```python
from datasets import load_dataset

ds = load_dataset("thirdeyelabs/indian-road-dataset")
sample = ds["train"][0]
# sample keys: jpg, png, json
```
### Load annotations directly

```python
import json

with open("annotations/detection.json") as f:
    annotations = json.load(f)

# BDD100K format – each entry:
# { "name": "clip_id/frame", "labels": [{ "category": "car", "box2d": {...} }] }
```
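The loaded annotations are a flat list of such entries, so dataset-level statistics are a one-liner away. A sketch counting detections per category; the two entries here are invented for illustration, in the same shape as the comment above:

```python
from collections import Counter

# Invented entries in the shape of annotations/detection.json.
annotations = [
    {"name": "clip_0001/0000", "labels": [
        {"category": "car", "box2d": {"x1": 10, "y1": 20, "x2": 110, "y2": 90}},
        {"category": "autorickshaw", "box2d": {"x1": 200, "y1": 40, "x2": 260, "y2": 120}},
    ]},
    {"name": "clip_0001/0001", "labels": [
        {"category": "car", "box2d": {"x1": 14, "y1": 22, "x2": 116, "y2": 94}},
    ]},
]

# Count detections per category across all frames.
counts = Counter(
    label["category"]
    for entry in annotations
    for label in entry["labels"]
)
print(counts)  # Counter({'car': 2, 'autorickshaw': 1})
```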

### Download via CLI

```bash
huggingface-cli download thirdeyelabs/indian-road-dataset --repo-type dataset
```

---

## Annotation Format (BDD100K Schema)

```json
{
  "name": "clip_id/frame",
  "attributes": {
    "weather": "clear",
    "timeofday": "daytime",
    "scene": "city street"
  },
  "labels": [
    {
      "category": "car",
      "box2d": { "x1": 120.0, "y1": 340.0, "x2": 480.0, "y2": 610.0 }
    }
  ]
}
```
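The `box2d` corners convert to other common layouts with simple arithmetic. A sketch converting to COCO-style `[x, y, width, height]`; the helper name is ours:

```python
def box2d_to_xywh(box2d):
    """BDD100K corner box -> COCO-style [x, y, width, height]."""
    return [
        box2d["x1"],
        box2d["y1"],
        box2d["x2"] - box2d["x1"],
        box2d["y2"] - box2d["y1"],
    ]

xywh = box2d_to_xywh({"x1": 120.0, "y1": 340.0, "x2": 480.0, "y2": 610.0})
print(xywh)  # [120.0, 340.0, 360.0, 270.0]
```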

## GPS Coverage

Every clip includes GPS coordinates, enabling:

- Geographic filtering by route/area
- Speed and trajectory analysis
- Map-based dataset exploration

---
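Speed and trajectory analysis needs only the timestamped coordinates. A haversine sketch over two invented GPS fixes one second apart; the `lat`/`lon`/`t` field names are our assumption for illustration, not the documented `gps_tracks.json` schema:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6_371_000  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Two fixes roughly 11 m apart in Delhi, one second apart.
a = {"lat": 28.6139, "lon": 77.2090, "t": 0.0}
b = {"lat": 28.6140, "lon": 77.2090, "t": 1.0}
speed_mps = haversine_m(a["lat"], a["lon"], b["lat"], b["lon"]) / (b["t"] - a["t"])
```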

## Production Pipeline

ThirdEye Labs' end-to-end ML annotation system:

1. **Ingest** – raw MP4s from CP Plus dashcams to S3
2. **Keyframe extraction** – 1 frame/second via FFmpeg
3. **GPS parsing** – matched from `.srt` files
4. **Object detection** – custom YOLO fine-tuned for Indian roads
5. **Semantic segmentation** – SegFormer for drivable areas
6. **Multi-object tracking** – ByteTrack across frames
7. **Scene classification** – weather, lighting, scene type

---
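Step 6 above associates detections across consecutive frames. ByteTrack itself is considerably more involved (score-tiered matching, Kalman prediction); the core idea of IoU-based association can be sketched as a greedy one-pass matcher over toy boxes:

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def match_frames(prev, curr, thresh=0.3):
    """Greedy IoU matching: returns {curr_index: prev_index} pairs."""
    pairs, used = {}, set()
    for ci, cbox in enumerate(curr):
        best, best_iou = None, thresh
        for pi, pbox in enumerate(prev):
            if pi in used:
                continue
            v = iou(cbox, pbox)
            if v > best_iou:
                best, best_iou = pi, v
        if best is not None:
            pairs[ci] = best
            used.add(best)
    return pairs

prev = [[0, 0, 100, 100], [300, 300, 400, 400]]
curr = [[10, 5, 110, 105], [500, 500, 560, 560]]
print(match_frames(prev, curr))  # {0: 0}; the second box starts a new track
```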

## License

**Creative Commons Attribution 4.0 International (CC BY 4.0)**

Free to use, share, and adapt for any purpose (including commercial) with attribution to **ThirdEye Labs**.

---

## Links

- **Website**: [thirdeyelabs.ai](https://thirdeyelabs.ai)
- **Demo**: [thirdeyelabs.ai/demo](https://thirdeyelabs.ai/demo)
- **Contact**: [thirdeyelabs.ai/contact](https://thirdeyelabs.ai/contact)

---

*Built with ❤️ in India*