---
title: EdgeFirst AI
emoji: 🔬
colorFrom: indigo
colorTo: red
sdk: static
pinned: true
license: apache-2.0
---
# EdgeFirst AI – Spatial Perception at the Edge
|
|
**EdgeFirst Perception** is an open-source suite of libraries and microservices for AI-driven spatial perception on edge devices. It supports cameras, LiDAR, radar, and time-of-flight sensors, enabling real-time object detection, segmentation, sensor fusion, and 3D spatial understanding optimized for resource-constrained embedded hardware.
|
|
[EdgeFirst Studio](https://edgefirst.studio)
[GitHub](https://github.com/EdgeFirstAI)
[Documentation](https://doc.edgefirst.ai)
[Au-Zone Technologies](https://www.au-zone.com)
|
|
---
|
|
## Workflow
|
|
<img src="https://huggingface.co/spaces/EdgeFirst/README/resolve/main/01-ecosystem.png" alt="EdgeFirst Model Zoo Ecosystem"/>
|
|
Every model in the EdgeFirst Model Zoo passes through a validated pipeline. [**EdgeFirst Studio**](https://edgefirst.studio) manages datasets, training, multi-format export (ONNX, TFLite INT8, eIQ Neutron, Kinara DVM, HailoRT HEF, TensorRT), and reference validation. Models are then deployed to our board farm for **full-dataset on-target validation** on real hardware, measuring both accuracy (mAP) and a detailed per-device timing breakdown. Results are published here on Hugging Face with per-platform performance tables.
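Pulling a published zoo artifact and running it locally can be sketched with `huggingface_hub` and ONNX Runtime. The repo and file names below come from the tables in this README; the helper functions themselves are illustrative, not part of any EdgeFirst package:

```python
def download_nano_detector() -> str:
    """Fetch the YOLOv8 nano detector ONNX file from the Hub (network required)."""
    from huggingface_hub import hf_hub_download  # pip install huggingface_hub
    return hf_hub_download(repo_id="EdgeFirst/yolov8-det",
                           filename="yolov8n-det.onnx")

def run_detector(model_path: str, batch):
    """Run one preprocessed batch through the exported model with ONNX Runtime."""
    import onnxruntime as ort  # pip install onnxruntime
    session = ort.InferenceSession(model_path)
    input_name = session.get_inputs()[0].name
    return session.run(None, {input_name: batch})
```

Chaining the two (`run_detector(download_nano_detector(), batch)`) returns the raw network outputs; decoding boxes from them is model-specific.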
|
|
## Model Lifecycle
|
|
<img src="https://huggingface.co/spaces/EdgeFirst/README/resolve/main/02-model-lifecycle.png" alt="Model Lifecycle: Training to Publication"/>
|
|
## On-Target Validation
|
|
<img src="https://huggingface.co/spaces/EdgeFirst/README/resolve/main/03-on-target-validation.png" alt="On-Target Validation Pipeline"/>
|
|
Unlike desktop-only benchmarks, EdgeFirst validates every model on **real target hardware** with the full dataset. Each device produces both accuracy metrics (mAP) and a detailed timing breakdown (load, preprocessing, NPU inference, and decode), so you know exactly how a model performs on your specific platform.
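The kind of per-stage timing such a breakdown implies can be mimicked with a small wall-clock harness. The stage names follow the text above; `time_stages` and the sleep-based stub workloads are purely illustrative, not the EdgeFirst measurement code:

```python
import time
from typing import Callable, Dict

def time_stages(stages: Dict[str, Callable[[], None]]) -> Dict[str, float]:
    """Run each named stage once and record its wall-clock time in milliseconds."""
    timings = {}
    for name, fn in stages.items():
        start = time.perf_counter()
        fn()
        timings[name] = (time.perf_counter() - start) * 1e3
    return timings

# Stub stages standing in for real model load / preprocess / inference / decode.
report = time_stages({
    "load": lambda: time.sleep(0.010),
    "preprocess": lambda: time.sleep(0.002),
    "inference": lambda: time.sleep(0.005),
    "decode": lambda: time.sleep(0.001),
})
for stage, ms in report.items():
    print(f"{stage:>10}: {ms:6.2f} ms")
```

In a real harness each stage would be repeated many times and summarized (mean, percentile), since single-shot timings on embedded hardware are noisy.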
|
|
---
|
|
## Supported Hardware
|
|
Validated targets cover the export formats listed above: NXP i.MX 93, i.MX 95, and i.MX 943 (eIQ Neutron), Hailo accelerators such as the Hailo-8L (HailoRT HEF), Kinara NPUs (DVM), and NVIDIA GPUs (TensorRT).
|
|
---
|
|
## Model Zoo
|
|
Pre-trained YOLO models for edge deployment. Each model repo contains all sizes (nano through x-large) in ONNX FP32 and TFLite INT8 formats, with platform-specific compiled variants added as they become available.
|
|
### Detection
|
|
| Model | Sizes | Nano mAP@0.5 | Link |
|-------|-------|--------------|------|
| **YOLO26** | n/s/m/l/x | 54.9% | [EdgeFirst/yolo26-det](https://huggingface.co/EdgeFirst/yolo26-det) |
| **YOLO11** | n/s/m/l/x | 53.4% | [EdgeFirst/yolo11-det](https://huggingface.co/EdgeFirst/yolo11-det) |
| **YOLOv8** | n/s/m/l/x | 50.2% | [EdgeFirst/yolov8-det](https://huggingface.co/EdgeFirst/yolov8-det) |
| **YOLOv5** | n/s/m/l/x | 49.6% | [EdgeFirst/yolov5-det](https://huggingface.co/EdgeFirst/yolov5-det) |
|
|
### Instance Segmentation
|
|
| Model | Sizes | Nano Mask mAP | Link |
|-------|-------|---------------|------|
| **YOLO26** | n/s/m/l/x | 37.0% | [EdgeFirst/yolo26-seg](https://huggingface.co/EdgeFirst/yolo26-seg) |
| **YOLO11** | n/s/m/l/x | 35.5% | [EdgeFirst/yolo11-seg](https://huggingface.co/EdgeFirst/yolo11-seg) |
| **YOLOv8** | n/s/m/l/x | 34.1% | [EdgeFirst/yolov8-seg](https://huggingface.co/EdgeFirst/yolov8-seg) |
|
|
---
|
|
## Naming Convention
|
|
| Component | Pattern | Example |
|-----------|---------|---------|
| HF Repo | `EdgeFirst/{version}-{task}` | `EdgeFirst/yolov8-det` |
| ONNX Model | `{version}{size}-{task}.onnx` | `yolov8n-det.onnx` |
| TFLite Model | `{version}{size}-{task}-int8.tflite` | `yolov8n-det-int8.tflite` |
| i.MX 95 TFLite | `{version}{size}-{task}.imx95.tflite` | `yolov8n-det.imx95.tflite` |
| i.MX 93 TFLite | `{version}{size}-{task}.imx93.tflite` | `yolov8n-det.imx93.tflite` |
| i.MX 943 TFLite | `{version}{size}-{task}.imx943.tflite` | `yolov8n-det.imx943.tflite` |
| Hailo HEF | `{version}{size}-{task}.hailo{variant}.hef` | `yolov8n-det.hailo8l.hef` |
| Studio Project | `{Dataset} {Task}` | `COCO Detection` |
| Studio Experiment | `{Version} {Task}` | `YOLOv8 Detection` |
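The file-name patterns can be collected into one helper. `zoo_filename` is a hypothetical convenience function, shown only to make the convention concrete:

```python
from typing import Optional

def zoo_filename(version: str, size: str, task: str,
                 target: Optional[str] = None,
                 variant: Optional[str] = None) -> str:
    """Build a Model Zoo artifact name following the convention table.

    version: model family, e.g. "yolov8"; size: one of n/s/m/l/x;
    task: "det" or "seg"; target: None for ONNX, "int8" for generic TFLite,
    "imx93"/"imx95"/"imx943" for platform TFLite, or "hailo" (with a
    variant such as "8l") for a HEF.
    """
    base = f"{version}{size}-{task}"
    if target is None:
        return f"{base}.onnx"
    if target == "int8":
        return f"{base}-int8.tflite"
    if target == "hailo":
        return f"{base}.hailo{variant}.hef"
    return f"{base}.{target}.tflite"
```

For example, `zoo_filename("yolov8", "n", "det", "imx95")` yields `yolov8n-det.imx95.tflite`, matching the i.MX 95 row above.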
|
|
## Validation Pipeline
|
|
| Stage | What | Where |
|-------|------|-------|
| **Reference** | ONNX FP32 and TFLite INT8 mAP on the full COCO val2017 set (5000 images) | EdgeFirst Studio (cloud) |
| **On-Target** | Full-dataset mAP + timing breakdown per device | Board farm (real hardware) |
|
|
## Perception Architecture
|
|
| Layer | Description |
|-------|-------------|
| **Foundation** | Hardware abstraction, video I/O, accelerated inference delegates |
| **Zenoh** | Modular perception pipeline over Zenoh pub/sub |
| **GStreamer** | Spatial perception elements for GStreamer / NNStreamer |
| **ROS 2** | Native ROS 2 nodes extending the Zenoh microservices *(Roadmap)* |
|
|
## EdgeFirst Studio
|
|
[**EdgeFirst Studio**](https://edgefirst.studio) is the MLOps platform that drives the entire model zoo pipeline. **Free tier available.**
|
|
- Dataset management & AI-assisted annotation
- Model training with automatic multi-format export and INT8 quantization
- Reference and on-target validation with full metrics collection
- CameraAdaptor integration for native sensor format training
- Deployment to edge devices via the [EdgeFirst Client](https://github.com/EdgeFirstAI/client) CLI
|
|
---
|
|
Apache 2.0 · [Au-Zone Technologies Inc.](https://www.au-zone.com)
|
|