---
title: EdgeFirst AI
emoji: 🔬
colorFrom: indigo
colorTo: red
sdk: static
pinned: true
license: apache-2.0
---
# EdgeFirst AI — Spatial Perception at the Edge
EdgeFirst Perception is an open-source suite of libraries and microservices for AI-driven spatial perception on edge devices. It supports cameras, LiDAR, radar, and time-of-flight sensors — enabling real-time object detection, segmentation, sensor fusion, and 3D spatial understanding, optimized for resource-constrained embedded hardware.
## Workflow
Every model in the EdgeFirst Model Zoo passes through a validated pipeline. EdgeFirst Studio manages datasets, training, multi-format export (ONNX, TFLite INT8, eIQ Neutron, Kinara DVM, HailoRT HEF, TensorRT), and reference validation. Models are then deployed to our board farm for on-target validation on real hardware against the full dataset, measuring both accuracy (mAP) and a detailed per-device timing breakdown. Results are published here on Hugging Face with per-platform performance tables.
## Model Lifecycle

### On-Target Validation
Unlike desktop-only benchmarks, EdgeFirst validates every model on real target hardware with the full dataset. Each device produces both accuracy metrics (mAP) and a detailed timing breakdown — load, preprocessing, NPU inference, and decode — so you know exactly how a model performs on your specific platform.
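The per-stage breakdown reported by the board farm (load, preprocessing, inference, decode) can be illustrated with a simple stage timer. A minimal sketch; the stage names mirror the breakdown above, and the sleeps are placeholders for the real workload:

```python
import time
from contextlib import contextmanager

class StageTimer:
    """Accumulates wall-clock time per pipeline stage, in milliseconds."""

    def __init__(self):
        self.timings = {}

    @contextmanager
    def stage(self, name):
        start = time.perf_counter()
        try:
            yield
        finally:
            self.timings[name] = (time.perf_counter() - start) * 1000.0

    def report(self):
        total = sum(self.timings.values())
        lines = [f"{name:<12} {ms:8.2f} ms" for name, ms in self.timings.items()]
        lines.append(f"{'total':<12} {total:8.2f} ms")
        return "\n".join(lines)

timer = StageTimer()
with timer.stage("load"):
    time.sleep(0.01)   # placeholder: load model from disk
with timer.stage("preprocess"):
    time.sleep(0.002)  # placeholder: resize / quantize input
with timer.stage("inference"):
    time.sleep(0.005)  # placeholder: NPU invoke
with timer.stage("decode"):
    time.sleep(0.001)  # placeholder: box decode + NMS
print(timer.report())
```

Timing each stage separately, rather than only end-to-end latency, is what makes it possible to tell whether a slow model is bound by the NPU or by CPU-side pre/post-processing.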
### Supported Hardware

## Model Zoo
Pre-trained YOLO models for edge deployment. Each model repo contains all sizes (nano through x-large) in ONNX FP32 and TFLite INT8 formats, with platform-specific compiled variants added as they become available.
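YOLO-family models typically expect a fixed square input produced by letterboxing: scale the frame while preserving aspect ratio, then pad the remainder. A dependency-free numpy sketch; the 640×640 input size and the 114 padding value are assumptions, so check each model card for the actual values:

```python
import numpy as np

def letterbox(image, new_size=640, pad_value=114):
    """Aspect-preserving resize onto a square canvas, padding the remainder.

    Uses nearest-neighbor index maps so the sketch needs only numpy;
    production code would use an optimized resize (e.g. OpenCV).
    """
    h, w = image.shape[:2]
    scale = min(new_size / h, new_size / w)
    nh, nw = round(h * scale), round(w * scale)
    rows = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = image[rows[:, None], cols]
    canvas = np.full((new_size, new_size, image.shape[2]), pad_value, image.dtype)
    top, left = (new_size - nh) // 2, (new_size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas, scale, (left, top)

frame = np.zeros((480, 640, 3), np.uint8)    # dummy 480p camera frame
inp, scale, (dx, dy) = letterbox(frame)
print(inp.shape, scale, (dx, dy))            # (640, 640, 3) 1.0 (0, 80)
```

The returned scale and offsets are needed again at decode time to map predicted boxes back into original image coordinates.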
### Detection
| Model | Sizes | Nano mAP@0.5 | Link |
|---|---|---|---|
| YOLO26 | n/s/m/l/x | 54.9% | EdgeFirst/yolo26-det |
| YOLO11 | n/s/m/l/x | 53.4% | EdgeFirst/yolo11-det |
| YOLOv8 | n/s/m/l/x | 50.2% | EdgeFirst/yolov8-det |
| YOLOv5 | n/s/m/l/x | 49.6% | EdgeFirst/yolov5-det |
### Instance Segmentation
| Model | Sizes | Nano Mask mAP | Link |
|---|---|---|---|
| YOLO26 | n/s/m/l/x | 37.0% | EdgeFirst/yolo26-seg |
| YOLO11 | n/s/m/l/x | 35.5% | EdgeFirst/yolo11-seg |
| YOLOv8 | n/s/m/l/x | 34.1% | EdgeFirst/yolov8-seg |
## Naming Convention
| Component | Pattern | Example |
|---|---|---|
| HF Repo | `EdgeFirst/{version}-{task}` | `EdgeFirst/yolov8-det` |
| ONNX Model | `{version}{size}-{task}.onnx` | `yolov8n-det.onnx` |
| TFLite Model | `{version}{size}-{task}-int8.tflite` | `yolov8n-det-int8.tflite` |
| i.MX 95 TFLite | `{version}{size}-{task}.imx95.tflite` | `yolov8n-det.imx95.tflite` |
| i.MX 93 TFLite | `{version}{size}-{task}.imx93.tflite` | `yolov8n-det.imx93.tflite` |
| i.MX 943 TFLite | `{version}{size}-{task}.imx943.tflite` | `yolov8n-det.imx943.tflite` |
| Hailo HEF | `{version}{size}-{task}.hailo{variant}.hef` | `yolov8n-det.hailo8l.hef` |
| Studio Project | `{Dataset} {Task}` | COCO Detection |
| Studio Experiment | `{Version} {Task}` | YOLOv8 Detection |
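The file-name patterns above are mechanical enough to generate in code. A small illustrative helper; the function name is ours and not part of any EdgeFirst tool:

```python
def artifact_name(version, size, task, platform=None):
    """Build model-zoo filenames following the naming convention table.

    platform=None      -> generic artifacts (ONNX FP32, TFLite INT8)
    platform="imx95"   -> NPU-compiled TFLite for i.MX 95 (likewise imx93/imx943)
    platform="hailo8l" -> Hailo HEF variant
    """
    stem = f"{version}{size}-{task}"
    if platform is None:
        return [f"{stem}.onnx", f"{stem}-int8.tflite"]
    if platform.startswith("hailo"):
        return [f"{stem}.{platform}.hef"]
    return [f"{stem}.{platform}.tflite"]

print(artifact_name("yolov8", "n", "det"))
# ['yolov8n-det.onnx', 'yolov8n-det-int8.tflite']
print(artifact_name("yolov8", "n", "det", "imx95"))
# ['yolov8n-det.imx95.tflite']
print(artifact_name("yolov8", "n", "det", "hailo8l"))
# ['yolov8n-det.hailo8l.hef']
```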
## Validation Pipeline
| Stage | What | Where |
|---|---|---|
| Reference | ONNX FP32 and TFLite INT8 mAP on full COCO val2017 (5000 images) | EdgeFirst Studio (cloud) |
| On-Target | Full dataset mAP + timing breakdown per device | Board farm (real hardware) |
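For intuition about the metric both stages report, mAP@0.5 reduces to greedy IoU matching of predictions to ground truth plus an averaged precision-recall integral. A toy single-class sketch; real validation uses the full COCO evaluator over all 80 classes:

```python
import numpy as np

def iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def ap50(preds, gts):
    """AP@0.5 for one class: preds = [(score, box), ...], gts = [box, ...]."""
    preds = sorted(preds, key=lambda p: -p[0])   # rank by confidence
    matched = set()
    tp = np.zeros(len(preds))
    for i, (_, box) in enumerate(preds):
        best, best_j = 0.0, -1
        for j, gt in enumerate(gts):
            if j not in matched and iou(box, gt) > best:
                best, best_j = iou(box, gt), j
        if best >= 0.5:                          # IoU threshold of mAP@0.5
            tp[i] = 1
            matched.add(best_j)
    fp = 1 - tp
    recall = np.cumsum(tp) / max(len(gts), 1)
    precision = np.cumsum(tp) / (np.cumsum(tp) + np.cumsum(fp))
    # 101-point interpolated average, as in the COCO protocol.
    return float(np.mean([precision[recall >= r].max(initial=0.0)
                          for r in np.linspace(0, 1, 101)]))

gts = [[0, 0, 10, 10], [20, 20, 30, 30]]
preds = [(0.9, [0, 0, 10, 10]),      # exact match
         (0.8, [21, 21, 31, 31]),    # IoU ~0.68, still a match at 0.5
         (0.3, [50, 50, 60, 60])]    # false positive
print(ap50(preds, gts))              # 1.0
```

Because the false positive is ranked below both true positives, precision is still 1.0 at every recall level, so AP is 1.0; ranking it above them would drag the score down.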
## Perception Architecture
| Layer | Description |
|---|---|
| Foundation | Hardware abstraction, video I/O, accelerated inference delegates |
| Zenoh | Modular perception pipeline over Zenoh pub/sub |
| GStreamer | Spatial perception elements for GStreamer / NNStreamer |
| ROS 2 | Native ROS 2 nodes extending Zenoh microservices (Roadmap) |
## EdgeFirst Studio

EdgeFirst Studio is the MLOps platform that drives the entire model zoo pipeline; a free tier is available.
- Dataset management & AI-assisted annotation
- Model training with automatic multi-format export and INT8 quantization
- Reference and on-target validation with full metrics collection
- CameraAdaptor integration for native sensor format training
- Deploy trained models to edge devices via the EdgeFirst Client CLI
Apache 2.0 · Au-Zone Technologies Inc.