Move diagrams and full content into README.md for org page rendering

README.md

# EdgeFirst AI - Spatial Perception at the Edge

**EdgeFirst Perception** is an open-source suite of libraries and microservices for AI-driven spatial perception on edge devices. It supports cameras, LiDAR, radar, and time-of-flight sensors, enabling real-time object detection, segmentation, sensor fusion, and 3D spatial understanding optimized for resource-constrained embedded hardware.

[EdgeFirst Studio](https://edgefirst.studio) · [GitHub](https://github.com/EdgeFirstAI) · [Documentation](https://doc.edgefirst.ai) · [Au-Zone Technologies](https://www.au-zone.com)

---

## Workflow

![EdgeFirst Workflow]

Every model in the EdgeFirst Model Zoo passes through a validated pipeline. [**EdgeFirst Studio**](https://edgefirst.studio) manages datasets, training, multi-format export (ONNX, TFLite INT8, eIQ Neutron, Kinara DVM, HailoRT HEF, TensorRT), and reference validation. Models are then deployed to our board farm for **full-dataset on-target validation** on real hardware, measuring both accuracy (mAP) and a detailed per-device timing breakdown. Results are published here on HuggingFace with per-platform performance tables.

## Model Lifecycle

![Model Lifecycle]

## On-Target Validation

![On-Target Validation]

Unlike desktop-only benchmarks, EdgeFirst validates every model on **real target hardware** with the full dataset. Each device reports both accuracy metrics (mAP) and a detailed timing breakdown covering load, preprocessing, NPU inference, and decode, so you know exactly how a model performs on your specific platform.
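
The sketch below shows one way such a per-stage breakdown can be measured with a TFLite interpreter. It is a minimal illustration, not the EdgeFirst board-farm harness; the model file name follows the naming convention documented below, and the input is a stand-in for a real preprocessed frame.

```python
# Minimal sketch of a per-stage timing breakdown with TFLite.
# Not the EdgeFirst harness; model path and stages are illustrative.
import time

import numpy as np
from tflite_runtime.interpreter import Interpreter  # or tf.lite.Interpreter

t0 = time.perf_counter()
interp = Interpreter(model_path="yolov8n-det-coco.tflite")
interp.allocate_tensors()
load_ms = (time.perf_counter() - t0) * 1e3

inp = interp.get_input_details()[0]
t0 = time.perf_counter()
frame = np.zeros(inp["shape"], dtype=inp["dtype"])  # stand-in for preprocessing
interp.set_tensor(inp["index"], frame)
pre_ms = (time.perf_counter() - t0) * 1e3

t0 = time.perf_counter()
interp.invoke()  # inference (NPU via delegate, else CPU)
infer_ms = (time.perf_counter() - t0) * 1e3

t0 = time.perf_counter()
raw = interp.get_tensor(interp.get_output_details()[0]["index"])  # decode would parse boxes from this
decode_ms = (time.perf_counter() - t0) * 1e3

print(f"load {load_ms:.1f} ms | pre {pre_ms:.1f} ms | "
      f"infer {infer_ms:.1f} ms | decode {decode_ms:.1f} ms")
```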

---

## Supported Hardware

- NXP i.MX 8M Plus
- NXP i.MX 95
- Kinara Ara-2
- Hailo-8
- NVIDIA Orin

---

## Model Zoo

Pre-trained YOLO models for edge deployment. Each model repo contains all sizes (nano through x-large) in ONNX FP32 and TFLite INT8 formats, with platform-specific compiled variants added as they become available.
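
As a quick start, any zoo model can be pulled straight from the Hub and run locally. A minimal sketch, assuming the `huggingface_hub` and `onnxruntime` packages are installed; the repo and file names follow the naming convention below:

```python
# Minimal sketch: download a zoo model and run one inference with ONNX Runtime.
import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="EdgeFirst/yolov8-det",
                       filename="yolov8n-det-coco.onnx")

sess = ort.InferenceSession(path, providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]  # resolve dynamic dims
dummy = np.zeros(shape, dtype=np.float32)  # stand-in for a preprocessed image
outputs = sess.run(None, {inp.name: dummy})
print([o.shape for o in outputs])
```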

### Detection

| Model | Sizes | Nano mAP@0.5 | Link |
|-------|-------|--------------|------|
| **YOLO26** | n/s/m/l/x | 54.9% | [EdgeFirst/yolo26-det](https://huggingface.co/EdgeFirst/yolo26-det) |
| **YOLO11** | n/s/m/l/x | 53.4% | [EdgeFirst/yolo11-det](https://huggingface.co/EdgeFirst/yolo11-det) |
| **YOLOv8** | n/s/m/l/x | 50.2% | [EdgeFirst/yolov8-det](https://huggingface.co/EdgeFirst/yolov8-det) |
| **YOLOv5** | n/s/m/l/x | 49.6% | [EdgeFirst/yolov5-det](https://huggingface.co/EdgeFirst/yolov5-det) |

### Instance Segmentation

| Model | Sizes | Nano Mask mAP | Link |
|-------|-------|---------------|------|
| **YOLO26** | n/s/m/l/x | 37.0% | [EdgeFirst/yolo26-seg](https://huggingface.co/EdgeFirst/yolo26-seg) |
| **YOLO11** | n/s/m/l/x | 35.5% | [EdgeFirst/yolo11-seg](https://huggingface.co/EdgeFirst/yolo11-seg) |
| **YOLOv8** | n/s/m/l/x | 34.1% | [EdgeFirst/yolov8-seg](https://huggingface.co/EdgeFirst/yolov8-seg) |

---

## Naming Convention

| Component | Pattern | Example |
|-----------|---------|---------|
| HF Repo | `EdgeFirst/{version}-{task}` | `EdgeFirst/yolov8-det` |
| ONNX Model | `{version}{size}-{task}-coco.onnx` | `yolov8n-det-coco.onnx` |
| TFLite Model | `{version}{size}-{task}-coco.tflite` | `yolov8n-det-coco.tflite` |
| i.MX 95 Model | `{version}{size}-{task}-coco.imx95.tflite` | `yolov8n-det-coco.imx95.tflite` |
| Studio Project | `{Dataset} {Task}` | `COCO Detection` |
| Studio Experiment | `{Version} {Task}` | `YOLOv8 Detection` |
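
Because the patterns are mechanical, artifact names can be derived programmatically. A small illustrative helper (not part of any EdgeFirst package):

```python
# Illustrative helpers for the naming convention above;
# not part of any EdgeFirst package.
def repo_id(version: str, task: str) -> str:
    """repo_id("yolov8", "det") -> "EdgeFirst/yolov8-det" """
    return f"EdgeFirst/{version}-{task}"

def model_filename(version: str, size: str, task: str, fmt: str = "onnx") -> str:
    """model_filename("yolov8", "n", "det") -> "yolov8n-det-coco.onnx" """
    return f"{version}{size}-{task}-coco.{fmt}"
```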

## Validation Pipeline

| Stage | What | Where |
|-------|------|-------|
| **Reference** | ONNX FP32 and TFLite INT8 mAP on full COCO val2017 (5000 images) | EdgeFirst Studio (cloud) |
| **On-Target** | Full-dataset mAP plus per-device timing breakdown | Board farm (real hardware) |
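
Reference mAP of this kind is conventionally computed with `pycocotools`. A minimal sketch, assuming detections have already been exported to a COCO-format results file (file names illustrative):

```python
# Minimal sketch: COCO val2017 mAP with pycocotools.
# Assumes detections were exported in COCO result format.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

gt = COCO("annotations/instances_val2017.json")  # ground truth, 5000 images
dt = gt.loadRes("results.json")                  # model detections

ev = COCOeval(gt, dt, iouType="bbox")            # "segm" for mask mAP
ev.evaluate()
ev.accumulate()
ev.summarize()  # prints AP@0.5:0.95, AP@0.5, and the rest of the COCO metrics
```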

## Perception Architecture

| Layer | Description |
|-------|-------------|
| **Foundation** | Hardware abstraction, video I/O, accelerated inference delegates |
| **Zenoh** | Modular perception pipeline over Zenoh pub/sub |
| **GStreamer** | Spatial perception elements for GStreamer / NNStreamer |
| **ROS 2** | Native ROS 2 nodes extending Zenoh microservices *(Roadmap)* |
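
To give a feel for the Zenoh layer, here is a minimal subscriber sketch using the `zenoh` Python bindings. The key expression is a hypothetical example, not a documented EdgeFirst topic name:

```python
# Minimal sketch: subscribe to a perception topic over Zenoh pub/sub.
# "rt/detect/boxes" is a hypothetical key expression, not a documented
# EdgeFirst topic. Requires the eclipse-zenoh package.
import time

import zenoh

def on_sample(sample):
    # Each sample would carry one serialized perception message.
    print(f"received sample on {sample.key_expr}")

with zenoh.open(zenoh.Config()) as session:
    session.declare_subscriber("rt/detect/boxes", on_sample)
    time.sleep(10)  # keep the session alive while samples arrive
```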

## EdgeFirst Studio

[**EdgeFirst Studio**](https://edgefirst.studio) is the MLOps platform that drives the entire model zoo pipeline. **Free tier available.**

- Dataset management & AI-assisted annotation
- Model training with automatic multi-format export and INT8 quantization
- Reference and on-target validation with full metrics collection
- CameraAdaptor integration for native sensor-format training
- Deployment of trained models to edge devices via the [EdgeFirst Client](https://github.com/EdgeFirstAI/client) CLI

---

Apache 2.0 · [Au-Zone Technologies Inc.](https://www.au-zone.com)