---
license: apache-2.0
library_name: edgefirst
pipeline_tag: object-detection
tags:
- edge-ai
- npu
- tflite
- onnx
- int8
- yolo
- gstreamer
- edgefirst
- nxp
- hailo
- jetson
- real-time
- embedded
- multiplatform
model-index:
- name: yolo11-det
results:
- task:
type: object-detection
dataset:
name: COCO val2017
type: coco
metrics:
- name: "mAP@0.5 (Nano ONNX FP32)"
type: map_50
value: 53.4
- name: "mAP@0.5-0.95 (Nano ONNX FP32)"
type: map
value: 37.9
- name: "mAP@0.5 (Nano TFLite INT8)"
type: map_50
value: 50.1
- name: "mAP@0.5-0.95 (Nano TFLite INT8)"
type: map
value: 34.5
---
# YOLO11 Detection — EdgeFirst Edge AI
**NXP i.MX 8M Plus** | **NXP i.MX 93** | **NXP i.MX 95** | **NXP Ara240** | **RPi5 + Hailo-8/8L** | **NVIDIA Jetson**
YOLO11 Detection models optimized for edge AI deployment across multiple hardware platforms. All sizes from Nano to XLarge, in ONNX FP32 and TFLite INT8 formats, with platform-specific compiled models for NPU acceleration.
Trained on [COCO 2017](https://test.edgefirst.studio/public/projects/2839/home) (80 classes). Part of the [EdgeFirst Model Zoo](https://huggingface.co/spaces/EdgeFirst/Models).
> [!TIP]
> **Training session**: [View on EdgeFirst Studio](https://test.edgefirst.studio/public/projects/2839/experiment/training/list?exp_id=4656) — dataset, training config, metrics, and exported artifacts.

> [!NOTE]
> YOLO11 is a newer architecture than YOLOv8, adding attention blocks (C2PSA) for improved accuracy at comparable compute.
---
## Size Comparison
All models validated on COCO val2017 (5000 images, 80 classes).
| Size | Params | GFLOPs | ONNX FP32 mAP@0.5 | ONNX FP32 mAP@0.5-0.95 | TFLite INT8 mAP@0.5 | TFLite INT8 mAP@0.5-0.95 |
|------|--------|--------|--------------------|-------------------------|----------------------|--------------------------|
| Nano | 2.6M | 6.5 | 53.4% | 37.9% | 50.1% | 34.5% |
| Small | 9.4M | 21.5 | — | — | — | — |
| Medium | 20.1M | 68.0 | — | — | — | — |
| Large | 25.3M | 87.6 | — | — | — | — |
| XLarge | 56.9M | 195.0 | — | — | — | — |
---
## On-Target Performance
Full pipeline timing: pre-processing + inference + post-processing.
| Size | Platform | Pre-proc (ms) | Inference (ms) | Post-proc (ms) | Total (ms) | FPS |
|------|----------|---------------|----------------|-----------------|------------|-----|
| — | — | — | — | — | — | — |

*On-target benchmarks are pending. Timings are measured with the [EdgeFirst Perception](https://github.com/EdgeFirstAI) stack and include full GStreamer pipeline overhead.*
---
## Downloads
ONNX FP32 — Any platform with ONNX Runtime.
| Size | File | Download |
|------|------|--------|
| Nano | `yolo11n-det-coco.onnx` | [Download](https://huggingface.co/EdgeFirst/yolo11-det/resolve/main/onnx/yolo11n-det-coco.onnx) |
| Small | `yolo11s-det-coco.onnx` | [Download](https://huggingface.co/EdgeFirst/yolo11-det/resolve/main/onnx/yolo11s-det-coco.onnx) |
| Medium | `yolo11m-det-coco.onnx` | [Download](https://huggingface.co/EdgeFirst/yolo11-det/resolve/main/onnx/yolo11m-det-coco.onnx) |
| Large | `yolo11l-det-coco.onnx` | [Download](https://huggingface.co/EdgeFirst/yolo11-det/resolve/main/onnx/yolo11l-det-coco.onnx) |
| XLarge | `yolo11x-det-coco.onnx` | [Download](https://huggingface.co/EdgeFirst/yolo11-det/resolve/main/onnx/yolo11x-det-coco.onnx) |
TFLite INT8 — runs on CPU, or on NPU via a runtime delegate (e.g., the VX Delegate on i.MX 8M Plus).
| Size | File | Download |
|------|------|--------|
| Nano | `yolo11n-det-coco.tflite` | [Download](https://huggingface.co/EdgeFirst/yolo11-det/resolve/main/tflite/yolo11n-det-coco.tflite) |
| Small | `yolo11s-det-coco.tflite` | [Download](https://huggingface.co/EdgeFirst/yolo11-det/resolve/main/tflite/yolo11s-det-coco.tflite) |
| Medium | `yolo11m-det-coco.tflite` | [Download](https://huggingface.co/EdgeFirst/yolo11-det/resolve/main/tflite/yolo11m-det-coco.tflite) |
| Large | `yolo11l-det-coco.tflite` | [Download](https://huggingface.co/EdgeFirst/yolo11-det/resolve/main/tflite/yolo11l-det-coco.tflite) |
| XLarge | `yolo11x-det-coco.tflite` | [Download](https://huggingface.co/EdgeFirst/yolo11-det/resolve/main/tflite/yolo11x-det-coco.tflite) |
---
## Deploy with EdgeFirst Perception
Copy-paste [GStreamer](https://github.com/EdgeFirstAI/gstreamer) pipeline examples for each platform.
### NXP i.MX 8M Plus — Camera to Detection with Vivante NPU
```bash
gst-launch-1.0 \
  v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480 ! \
  edgefirstcameraadaptor ! \
  tensor_filter framework=tensorflow-lite \
    model=yolo11n-det-coco.tflite \
    custom=Delegate:External,ExtDelegateLib:libvx_delegate.so ! \
  edgefirstdetdecoder ! edgefirstoverlay ! waylandsink
```
### RPi5 + Hailo-8L
```bash
gst-launch-1.0 \
  v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480 ! \
  hailonet hef-path=yolo11n-det-coco.hailo8l.hef ! \
  hailofilter function-name=yolo11_nms ! \
  hailooverlay ! videoconvert ! autovideosink
```
### NVIDIA Jetson (TensorRT)
```bash
gst-launch-1.0 \
  v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480 ! \
  edgefirstcameraadaptor ! \
  nvinfer config-file-path=yolo11n-det-coco-config.txt ! \
  edgefirstdetdecoder ! edgefirstoverlay ! nveglglessink
```
*Full pipeline documentation: [EdgeFirst GStreamer Plugins](https://github.com/EdgeFirstAI/gstreamer)*
---
## Foundation (HAL) Python Integration
```python
from edgefirst.hal import Model, TensorImage

# Load model — metadata (labels, decoder config) is embedded in the file
model = Model("yolo11n-det-coco.tflite")

# Run inference on an image
image = TensorImage.from_file("image.jpg")
results = model.predict(image)

# Access detections
for det in results.detections:
    print(f"{det.label}: {det.confidence:.2f} at {det.bbox}")
```
*[EdgeFirst HAL](https://github.com/EdgeFirstAI/hal) — Hardware abstraction layer with accelerated inference delegates.*
---
## CameraAdaptor
EdgeFirst [CameraAdaptor](https://github.com/EdgeFirstAI/cameraadaptor) enables training and inference directly on native sensor formats (GREY, YUYV, etc.) — skipping the ISP color conversion pipeline entirely. This reduces latency and power consumption on edge devices.
CameraAdaptor variants are included alongside baseline RGB models:
| Variant | Input Format | Use Case |
|---------|-------------|----------|
| `yolo11n-det-coco.onnx` | RGB (3ch) | Standard camera input |
| `yolo11n-det-coco-grey.onnx` | GREY (1ch) | Monochrome / IR sensors |
| `yolo11n-det-coco-yuyv.onnx` | YUYV (2ch) | Raw sensor bypass |
*Train CameraAdaptor models with [EdgeFirst Studio](https://edgefirst.studio) — the CameraAdaptor layer is automatically inserted during training.*
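To illustrate why the YUYV variant takes a 2-channel input: packed YUYV 4:2:2 stores 2 bytes per pixel (luma, plus an interleaved chroma byte), so a raw frame reshapes directly into an `(H, W, 2)` tensor with no color conversion. A minimal sketch — the function name is illustrative, not the CameraAdaptor API:

```python
import numpy as np

def yuyv_to_tensor(raw: bytes, width: int, height: int) -> np.ndarray:
    """Reshape a packed YUYV 4:2:2 frame into an (H, W, 2) uint8 tensor.

    Each pixel occupies 2 bytes: channel 0 is luma (Y), channel 1 is the
    interleaved chroma byte (U for even columns, V for odd columns).
    """
    frame = np.frombuffer(raw, dtype=np.uint8)
    if frame.size != width * height * 2:
        raise ValueError("buffer size does not match width*height*2")
    return frame.reshape(height, width, 2)

# Example: a synthetic 4x2 frame (16 bytes)
t = yuyv_to_tensor(bytes(range(16)), width=4, height=2)
print(t.shape)  # (2, 4, 2)
```

The model consumes this tensor as-is, which is what lets the ISP color-conversion stage be skipped.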
---
## Train Your Own with EdgeFirst Studio
Train on your own dataset with [**EdgeFirst Studio**](https://edgefirst.studio):
- **Free tier** includes YOLO training with automatic INT8 quantization and edge deployment
- Upload datasets via [EdgeFirst Recorder](https://github.com/EdgeFirstAI/recorder) or COCO/YOLO format
- AI-assisted annotation with auto-labeling
- CameraAdaptor integration for native sensor format training
- Deploy trained models to edge devices via [EdgeFirst Client](https://github.com/EdgeFirstAI/client)
---
## See Also
Other models in the [EdgeFirst Model Zoo](https://huggingface.co/spaces/EdgeFirst/Models):
| Model | Task | Best Nano Metric | Link |
|-------|------|-------------------|------|
| YOLOv5 Detection | Detection | 49.6% mAP@0.5 (ONNX) | [EdgeFirst/yolov5-det](https://huggingface.co/EdgeFirst/yolov5-det) |
| YOLOv8 Detection | Detection | 50.2% mAP@0.5 (ONNX) | [EdgeFirst/yolov8-det](https://huggingface.co/EdgeFirst/yolov8-det) |
| YOLOv8 Segmentation | Segmentation | 34.1% Mask mAP@0.5-0.95 (ONNX) | [EdgeFirst/yolov8-seg](https://huggingface.co/EdgeFirst/yolov8-seg) |
| YOLO11 Segmentation | Segmentation | 35.5% Mask mAP@0.5-0.95 (ONNX) | [EdgeFirst/yolo11-seg](https://huggingface.co/EdgeFirst/yolo11-seg) |
| YOLO26 Detection | Detection | 54.9% mAP@0.5 (ONNX) | [EdgeFirst/yolo26-det](https://huggingface.co/EdgeFirst/yolo26-det) |
| YOLO26 Segmentation | Segmentation | 37.0% Mask mAP@0.5-0.95 (ONNX) | [EdgeFirst/yolo26-seg](https://huggingface.co/EdgeFirst/yolo26-seg) |
---
## Technical Details
### Quantization Pipeline
All TFLite INT8 models are produced by EdgeFirst's custom quantization pipeline ([details](https://github.com/EdgeFirstAI/studio-ultralytics)):
1. **ONNX Export** — Standard Ultralytics export with `simplify=True`
2. **TF-Wrapped ONNX** — Box coordinates normalized to [0,1] inside DFL decode via `tf_wrapper` (~1.2% better mAP than post-hoc normalization)
3. **Split Decoder** — Boxes and scores (plus mask coefficients, for segmentation models) split into separate output tensors so each gets an independent INT8 quantization scale
4. **Smart Calibration** — 500 images selected via greedy coverage maximization from COCO val2017
5. **Full INT8** — `uint8` input (raw pixels), `int8` output (per-tensor scales), MLIR quantizer
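The greedy selection in step 4 can be sketched as a set-cover heuristic: repeatedly pick the image that adds the most not-yet-covered attributes (here, class labels) to the calibration set. This is an illustrative reimplementation, not the actual EdgeFirst selection code, and the coverage criterion shown is an assumption:

```python
def greedy_coverage_select(attributes, k):
    """Greedily pick k items maximizing coverage of attribute sets.

    attributes: list of sets (e.g., the class labels present in each image).
    Returns the indices of the selected items, in selection order.
    """
    covered, chosen = set(), []
    remaining = set(range(len(attributes)))
    for _ in range(min(k, len(attributes))):
        # Pick the item that adds the most uncovered attributes.
        best = max(remaining, key=lambda i: len(attributes[i] - covered))
        chosen.append(best)
        covered |= attributes[best]
        remaining.remove(best)
    return chosen

imgs = [{"person"}, {"person", "car"}, {"dog", "cat"}, {"car"}]
picked = greedy_coverage_select(imgs, 2)
print(picked)  # two images that together cover all four labels
```

Run against COCO val2017, the same idea with k=500 yields a small calibration set whose activation ranges still span the classes the quantizer needs to see.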
### Split Decoder Output Format
**Detection** (e.g., yolo11n):
- Boxes: `(1, 4, 8400)` — normalized [0,1] coordinates
- Scores: `(1, 80, 8400)` — class probabilities
Each tensor has independent quantization scale and zero-point. EdgeFirst HAL handles dequantization and reassembly automatically.
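For reference, the dequantize-and-decode step the HAL performs can be sketched with NumPy. The scales, zero-points, and confidence threshold below are illustrative values, not the ones embedded in the shipped models:

```python
import numpy as np

def dequantize(q, scale, zero_point):
    """Per-tensor dequantization: real = scale * (q - zero_point)."""
    return scale * (q.astype(np.float32) - zero_point)

def decode(boxes_q, scores_q, box_qp, score_qp, conf_thr=0.25):
    """boxes_q: (1, 4, N) int8, scores_q: (1, 80, N) int8 -> detections."""
    boxes = dequantize(boxes_q, *box_qp)[0]      # (4, N), normalized [0, 1]
    scores = dequantize(scores_q, *score_qp)[0]  # (80, N)
    cls = scores.argmax(axis=0)                  # best class per anchor
    conf = scores.max(axis=0)
    keep = conf >= conf_thr                      # NMS would follow here
    return boxes[:, keep].T, cls[keep], conf[keep]

# Tiny synthetic example with N = 3 anchors (illustrative quant params)
rng = np.random.default_rng(0)
boxes_q = rng.integers(-128, 127, size=(1, 4, 3), dtype=np.int8)
scores_q = rng.integers(-128, 127, size=(1, 80, 3), dtype=np.int8)
boxes_out, cls, conf = decode(boxes_q, scores_q, (1 / 255, -128), (1 / 255, -128))
print(boxes_out.shape, cls.shape)
```

Because each output tensor carries its own scale and zero-point, boxes and scores dequantize independently before any thresholding or NMS.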
### Metadata
- **TFLite**: `edgefirst.json`, `labels.txt`, and `edgefirst.yaml` embedded via ZIP (no `tflite-support` dependency)
- **ONNX**: `edgefirst.json` embedded via `model.metadata_props`
No standalone metadata files — models are self-contained.
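Because the metadata rides in a ZIP payload, the standard-library `zipfile` module can read it back without any TFLite tooling: ZIP readers locate the central directory from the end of the file, so the model bytes in front are ignored. A sketch, demonstrated on an in-memory stand-in rather than a real model file:

```python
import io
import json
import zipfile

def read_embedded_metadata(model_bytes: bytes) -> dict:
    """Read edgefirst.json embedded in a model container via ZIP."""
    with zipfile.ZipFile(io.BytesIO(model_bytes)) as zf:
        return json.loads(zf.read("edgefirst.json"))

# Stand-in for a .tflite file: arbitrary leading bytes + ZIP appended.
payload = io.BytesIO()
with zipfile.ZipFile(payload, "w") as zf:
    zf.writestr("edgefirst.json", json.dumps({"labels": ["person", "car"]}))
fake_model = b"flatbuffer-bytes-go-here" + payload.getvalue()
print(read_embedded_metadata(fake_model)["labels"])  # ['person', 'car']
```

For a real download, passing the `.tflite` path to `zipfile.ZipFile` directly should work the same way.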
---
## Limitations
- **COCO bias** — Models trained on COCO (80 classes) inherit its biases: Western-centric scenes, specific object distributions, limited weather/lighting diversity
- **INT8 accuracy loss** — Full-integer quantization typically degrades mAP by 6-12% relative to FP32; actual loss depends on model architecture and dataset
- **Thermal variation** — On-target performance varies with device temperature; sustained inference may throttle on passively-cooled devices
- **Input resolution** — All models expect 640×640 input; other resolutions require letterboxing or may reduce accuracy
- **CameraAdaptor variants** — GREY/YUYV models trade color information for latency; accuracy may differ from RGB baseline depending on the task
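The letterboxing mentioned above can be sketched as follows, using a nearest-neighbor resize to stay NumPy-only; a real pipeline would use OpenCV or the GStreamer elements shown earlier, and the padding value of 114 is the common YOLO convention, assumed here:

```python
import numpy as np

def letterbox(img: np.ndarray, size: int = 640, pad_value: int = 114):
    """Resize keeping aspect ratio, then pad to size x size."""
    h, w = img.shape[:2]
    scale = min(size / h, size / w)
    nh, nw = round(h * scale), round(w * scale)
    # Nearest-neighbor resample via index maps (NumPy-only).
    rows = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[rows][:, cols]
    # Center the resized image on a padded canvas.
    out = np.full((size, size, img.shape[2]), pad_value, dtype=img.dtype)
    top, left = (size - nh) // 2, (size - nw) // 2
    out[top:top + nh, left:left + nw] = resized
    return out, scale, (top, left)

frame = np.zeros((480, 640, 3), dtype=np.uint8)
padded, scale, (top, left) = letterbox(frame)
print(padded.shape, scale, top, left)  # (640, 640, 3) 1.0 80 0
```

The returned `scale` and offsets are what you need to map detections back to the original frame's coordinates.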
---
## Citation
```bibtex
@software{edgefirst_yolo11_det,
title = {{YOLO11 Detection — EdgeFirst Edge AI}},
author = {Au-Zone Technologies},
url = {https://huggingface.co/EdgeFirst/yolo11-det},
year = {2026},
license = {Apache-2.0},
}
```
---
EdgeFirst Studio · GitHub · Docs · Au-Zone Technologies
Apache 2.0 · © Au-Zone Technologies Inc.