---
language:
  - en
license: mit
task_categories:
  - object-detection
tags:
  - vision
  - drone
  - yolo
  - kalman-filter
  - video
---

# 🚁 UAV Drone Detection & Multi-Object Tracking Pipeline

## 🚀 Project Overview

This project is an end-to-end computer vision pipeline that detects and tracks Unmanned Aerial Vehicles (UAVs) in raw, unstructured video footage. Built to handle real-world challenges such as distant targets and occlusions, the system combines deep-learning object detection with probabilistic kinematic tracking to turn messy video data into clean, actionable trajectory insights.

Live Results & Data:

## 🛠️ Tech Stack

  • Computer Vision & ML: Ultralytics YOLOv8, OpenCV, FilterPy (Kalman Filters)
  • Data Engineering: Hugging Face `datasets`, FFmpeg, Pandas, Apache Parquet
  • DevOps & Environment: Docker, Git

## 🧠 System Architecture & Engineering

### 1. Video Processing & Data Prep

The pipeline ingests raw `.mp4` video streams and extracts frames with `ffmpeg` inside an isolated Docker container, ensuring dependency consistency and reproducibility.
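A minimal sketch of the extraction step (file names, output layout, and the sampling rate are illustrative, not the project's actual values):

```python
import subprocess
from pathlib import Path

def ffmpeg_extract_cmd(video: str, out_dir: str, fps: int = 5) -> list:
    """Build an ffmpeg command that dumps sampled frames as numbered JPEGs."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    return [
        "ffmpeg",
        "-i", video,            # input video stream
        "-vf", f"fps={fps}",    # sample `fps` frames per second
        "-qscale:v", "2",       # high JPEG quality
        f"{out_dir}/frame_%06d.jpg",
    ]

cmd = ffmpeg_extract_cmd("flight_01.mp4", "frames/flight_01")
# subprocess.run(cmd, check=True)  # executed inside the Docker container
```

Building the argument list separately from running it keeps the command easy to log and test without invoking ffmpeg itself.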

### 2. Object Detection (YOLOv8)

Detection uses a pre-trained YOLOv8s model. Because the standard COCO dataset lacks a dedicated "drone" class, the inference script dynamically filters and maps visually similar airborne classes (birds, airplanes, kites) to identify distant UAVs against complex backgrounds such as clouds and trees.
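The class-mapping idea can be sketched as a post-processing filter. The class ids are the standard COCO indices (airplane = 4, bird = 14, kite = 33); the confidence threshold and the `(class_id, confidence, box)` tuple shape are assumptions for illustration, since the real script operates on Ultralytics result objects:

```python
# COCO class ids that a small airborne object is plausibly detected as.
DRONE_PROXY_CLASSES = {4: "airplane", 14: "bird", 33: "kite"}

def filter_drone_candidates(detections, conf_threshold=0.25):
    """Keep detections whose class is a visual proxy for a drone.

    `detections` is a list of (class_id, confidence, (x1, y1, x2, y2)).
    """
    candidates = []
    for cls_id, conf, box in detections:
        if cls_id in DRONE_PROXY_CLASSES and conf >= conf_threshold:
            # Re-label the proxy class as a drone candidate.
            candidates.append(("drone", DRONE_PROXY_CLASSES[cls_id], conf, box))
    return candidates

detections = [
    (14, 0.41, (610, 120, 640, 148)),  # bird → likely the distant drone
    (0, 0.92, (10, 200, 90, 400)),     # person → ignored
    (33, 0.18, (300, 50, 330, 80)),    # kite, below confidence threshold
]
cands = filter_drone_candidates(detections)
```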

### 3. Probabilistic Tracking (Kalman Filter)

To handle missed detections (e.g., when the drone shrinks to a few pixels or temporarily blends into the background), I implemented a 4D Kalman filter over the state `[x, y, dx, dy]` using a constant-velocity kinematic model.

  • Occlusion Handling: If the YOLO detector fails to find the drone in a frame, the tracker bypasses the update() step and relies purely on the predict() step to estimate the drone's location based on its last known velocity.
  • State Thresholding: The system visualizes active detections with green bounding boxes and predicted states with red bounding boxes. It maintains a memory threshold of 5 consecutive missed frames before terminating the track to prevent false-positive drift.

### 4. Cloud Deployment

Verified detections are automatically aggregated, converted to the columnar Apache Parquet format, and pushed programmatically to the Hugging Face Hub via the `datasets` API.