# UAV Drone Detection and Tracking

## Overview
This project detects UAV drones in video using a deep learning object detector and tracks them across frames using a Kalman filter. The output videos only include frames where the drone is present and overlay (1) the detector bounding box and (2) the 2D trajectory as a polyline.

## Videos Used
- drone_video_1.mp4 (YouTube source): https://www.youtube.com/watch?v=DhmZ6W1UAv4
- drone_video_2.mp4 (YouTube source): https://youtu.be/YrydHPwRelI

Frames were extracted using ffmpeg at 5 FPS.
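The extraction step can be reproduced with a short script. This is a sketch, not the exact command used here: the output directory layout and the zero-padded frame-name pattern are assumptions.

```python
import subprocess


def ffmpeg_extract_cmd(video_path: str, out_dir: str, fps: int = 5) -> list[str]:
    """Build an ffmpeg command that samples frames at a fixed rate."""
    return [
        "ffmpeg", "-i", video_path,   # input video
        "-vf", f"fps={fps}",          # sample at `fps` frames per second
        f"{out_dir}/frame_%05d.jpg",  # zero-padded output frame names (assumed pattern)
    ]


# Example: extract frames from the first video at 5 FPS
cmd = ffmpeg_extract_cmd("drone_video_1.mp4", "frames/drone_video_1")
# subprocess.run(cmd, check=True)  # uncomment to actually run ffmpeg
```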

## Dataset (Drone Bounding Boxes)
I used a dataset that labels the drone itself with bounding boxes (not aerial imagery “from a drone”).
- Source: Roboflow (Drone Detection dataset)
- Export format: YOLOv8
- Class: drone

## Detector (Task 1)
- Model: Ultralytics YOLOv8n
- Training: fine-tuned on the drone bounding-box dataset
- Best weights: `runs/detect/train4/weights/best.pt`
- Inference: ran detection on every extracted frame
- Deliverables created:
  - Frames containing detections saved to: `artifacts/detections/<video_name>/`
  - Detections saved to Parquet:
    - `artifacts/detections/drone_video_1_detections.parquet`
    - `artifacts/detections/drone_video_2_detections.parquet`
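The fine-tune/inference flow can be sketched with the Ultralytics API. This is illustrative, not the exact configuration used: `data.yaml`, the epoch count, the confidence threshold, and the `detections_to_rows` helper are assumptions.

```python
def detections_to_rows(frame_name, boxes):
    """Flatten (x1, y1, x2, y2, conf) detections into records for a Parquet table."""
    rows = []
    for x1, y1, x2, y2, conf in boxes:
        rows.append({
            "frame": frame_name,
            "x1": x1, "y1": y1, "x2": x2, "y2": y2,
            "conf": conf,
            "cx": (x1 + x2) / 2.0,  # box center: this is what the Kalman filter consumes
            "cy": (y1 + y2) / 2.0,
        })
    return rows


def run_detection_pipeline():
    """Not executed here; requires `pip install ultralytics`."""
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")                           # start from pretrained weights
    model.train(data="data.yaml", epochs=50, imgsz=640)  # dataset path/epochs are placeholders

    best = YOLO("runs/detect/train4/weights/best.pt")
    results = best.predict("frames/drone_video_1", conf=0.25)

    rows = []
    for r in results:
        boxes = [(*b.xyxy[0].tolist(), float(b.conf)) for b in r.boxes]
        rows += detections_to_rows(r.path, boxes)
    return rows  # pandas/pyarrow can then write these to the Parquet files listed above
```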

## Kalman Filter Tracking (Task 2)
### State Design
State vector: **[x, y, vx, vy]**
- (x, y): bounding box center in pixel coordinates
- (vx, vy): velocity in pixels/frame

Measurement vector: **[x, y]** from the detector.

### Motion Model
A constant-velocity motion model is used:
- Predict step uses x_t = x_{t-1} + vx * dt, y_t = y_{t-1} + vy * dt
- Update step uses the detected center point when available
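The model above can be written down directly. This is a minimal NumPy sketch of the same state design; the process-noise (`q`) and measurement-noise (`r`) magnitudes are illustrative assumptions, not tuned values from this project.

```python
import numpy as np


class CVKalman:
    """Constant-velocity Kalman filter over state [x, y, vx, vy]."""

    def __init__(self, x, y, dt=1.0, q=1e-2, r=1.0):
        self.x = np.array([x, y, 0.0, 0.0])   # state estimate
        self.P = np.eye(4) * 100.0            # state covariance (broad prior)
        self.F = np.array([[1, 0, dt, 0],     # x += vx * dt
                           [0, 1, 0, dt],     # y += vy * dt
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], float)
        self.H = np.array([[1, 0, 0, 0],      # we only measure (x, y)
                           [0, 1, 0, 0]], float)
        self.Q = np.eye(4) * q                # process noise (assumed)
        self.R = np.eye(2) * r                # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                     # predicted center

    def update(self, z):
        z = np.asarray(z, float)
        S = self.H @ self.P @ self.H.T + self.R       # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                     # corrected center
```

After a few predict/update cycles on a steadily moving target, the velocity components settle to the target's speed in pixels/frame.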

### Handling Missed Detections
If the detector misses the drone temporarily, the tracker continues predicting without updates for a limited number of frames (`max_missed`). When detections return, the filter updates and continues the trajectory smoothly.
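A simplified sketch of this gap-handling logic: plain linear extrapolation stands in for the full Kalman predict, and the `track_centers` name and `max_missed` default are illustrative.

```python
def track_centers(detections, max_missed=5):
    """Track a single target through detection gaps.

    `detections` is a list of (x, y) centers, with None for frames where the
    detector missed. Returns the estimated centers; the track ends once the
    target is missing for more than `max_missed` consecutive frames.
    """
    trajectory, velocity, missed = [], (0.0, 0.0), 0
    for det in detections:
        if det is not None:
            if trajectory:                          # velocity in pixels/frame
                px, py = trajectory[-1]
                velocity = (det[0] - px, det[1] - py)
            trajectory.append(det)
            missed = 0
        else:
            missed += 1
            if missed > max_missed or not trajectory:
                break                               # track lost (or never started)
            px, py = trajectory[-1]                 # predict-only: extrapolate
            trajectory.append((px + velocity[0], py + velocity[1]))
    return trajectory


# Two missed frames are bridged by extrapolation, then detection resumes
path = track_centers([(0, 0), (1, 1), None, None, (4, 4)])
```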

### Visualization
Each output frame overlays:
- The bounding box (detection if present; otherwise predicted)
- The 2D trajectory polyline connecting the estimated centers across frames

## Failure Cases
- Small/far drones: the detector can miss frames; Kalman prediction bridges short gaps.
- Motion blur / fast motion: bounding boxes may jitter; the filter smooths the trajectory but can drift if misses last too long.
- Background clutter: false positives can occur; raising the confidence threshold helps reduce them.