| --- |
| license: apache-2.0 |
| library_name: edgefirst |
| pipeline_tag: image-segmentation |
| tags: |
| - edge-ai |
| - npu |
| - tflite |
| - onnx |
| - int8 |
| - yolo |
| - gstreamer |
| - edgefirst |
| - nxp |
| - hailo |
| - jetson |
| - real-time |
| - embedded |
| - multiplatform |
model-index:
- name: yolo11-seg
  results:
  - task:
      type: image-segmentation
    dataset:
      name: COCO val2017
      type: coco
    metrics:
    - name: "Mask mAP@0.5-0.95 (Nano ONNX FP32)"
      type: map
      value: 35.5
    - name: "Mask mAP@0.5-0.95 (Nano TFLite INT8)"
      type: map
      value: 34.4
| --- |
| |
# YOLO11 Segmentation – EdgeFirst Edge AI
|
|
| **NXP i.MX 8M Plus** | **NXP i.MX 93** | **NXP i.MX 95** | **NXP Ara240** | **RPi5 + Hailo-8/8L** | **NVIDIA Jetson** |
| YOLO11 Segmentation models optimized for edge AI deployment across multiple hardware platforms. All sizes from Nano to XLarge, in ONNX FP32 and TFLite INT8 formats, with platform-specific compiled models for NPU acceleration. |
|
|
| Trained on [COCO 2017](https://test.edgefirst.studio/public/projects/2839/home) (80 classes). Part of the [EdgeFirst Model Zoo](https://huggingface.co/spaces/EdgeFirst/Models). |
| > [!TIP] |
> **Training session**: [View on EdgeFirst Studio](https://test.edgefirst.studio/public/projects/2840/experiment/training/list?exp_id=4622) – dataset, training config, metrics, and exported artifacts.
|
|
| > [!NOTE] |
> YOLO11 is a newer architecture than YOLOv8, adding attention blocks; the Nano variant improves mask mAP over YOLOv8 (35.5% vs 34.1% ONNX, see [See Also](#see-also)).
|
|
| --- |
|
|
| ## Size Comparison |
|
|
| All models validated on COCO val2017 (5000 images, 80 classes). |
|
|
| | Size | Params | GFLOPs | ONNX Det mAP@0.5-0.95 | INT8 Det mAP@0.5-0.95 | ONNX Mask mAP@0.5-0.95 | INT8 Mask mAP@0.5-0.95 | |
| |------|--------|--------|------------------------|-----------------------|-------------------------|------------------------| |
| | Nano | 2.6M | 6.5 | 28.4% | 27.1% | 35.5% | 34.4% | |
| Small | 9.4M | 21.5 | – | – | – | – |
| Medium | 20.1M | 68.0 | – | – | – | – |
| Large | 25.3M | 87.6 | – | – | – | – |
| XLarge | 56.9M | 195.0 | – | – | – | – |
|
|
| --- |
|
|
| ## On-Target Performance |
|
|
| Full pipeline timing: pre-processing + inference + post-processing. |
|
|
| | Size | Platform | Pre-proc (ms) | Inference (ms) | Post-proc (ms) | Total (ms) | FPS | |
| |------|----------|---------------|----------------|-----------------|------------|-----| |
| – | – | – | – | – | – | – |
|
|
| *Measured with [EdgeFirst Perception](https://github.com/EdgeFirstAI) stack. Timing includes full GStreamer pipeline overhead.* |
|
|
| --- |
|
|
| ## Downloads |
|
|
| <details open> |
<summary><strong>ONNX FP32</strong> – Any platform with ONNX Runtime.</summary>
|
|
| | Size | File | Status | |
| |------|------|--------| |
| | Nano | `yolo11n-seg-coco.onnx` | [Download](https://huggingface.co/EdgeFirst/yolo11-seg/resolve/main/onnx/yolo11n-seg-coco.onnx) | |
| | Small | `yolo11s-seg-coco.onnx` | [Download](https://huggingface.co/EdgeFirst/yolo11-seg/resolve/main/onnx/yolo11s-seg-coco.onnx) | |
| | Medium | `yolo11m-seg-coco.onnx` | [Download](https://huggingface.co/EdgeFirst/yolo11-seg/resolve/main/onnx/yolo11m-seg-coco.onnx) | |
| | Large | `yolo11l-seg-coco.onnx` | [Download](https://huggingface.co/EdgeFirst/yolo11-seg/resolve/main/onnx/yolo11l-seg-coco.onnx) | |
| | XLarge | `yolo11x-seg-coco.onnx` | [Download](https://huggingface.co/EdgeFirst/yolo11-seg/resolve/main/onnx/yolo11x-seg-coco.onnx) | |
|
|
| </details> |
|
|
| <details> |
<summary><strong>TFLite INT8</strong> – Runs on CPU, or on NPU via a runtime delegate (e.g., the VX Delegate on i.MX 8M Plus).</summary>
|
|
| | Size | File | Status | |
| |------|------|--------| |
| | Nano | `yolo11n-seg-coco.tflite` | [Download](https://huggingface.co/EdgeFirst/yolo11-seg/resolve/main/tflite/yolo11n-seg-coco.tflite) | |
| | Small | `yolo11s-seg-coco.tflite` | [Download](https://huggingface.co/EdgeFirst/yolo11-seg/resolve/main/tflite/yolo11s-seg-coco.tflite) | |
| | Medium | `yolo11m-seg-coco.tflite` | [Download](https://huggingface.co/EdgeFirst/yolo11-seg/resolve/main/tflite/yolo11m-seg-coco.tflite) | |
| | Large | `yolo11l-seg-coco.tflite` | [Download](https://huggingface.co/EdgeFirst/yolo11-seg/resolve/main/tflite/yolo11l-seg-coco.tflite) | |
| | XLarge | `yolo11x-seg-coco.tflite` | [Download](https://huggingface.co/EdgeFirst/yolo11-seg/resolve/main/tflite/yolo11x-seg-coco.tflite) | |
|
|
| </details> |
|
|
| <details> |
<summary><strong>NXP i.MX 95 (eIQ Neutron)</strong> – Compiled for the eIQ Neutron NPU.</summary>
|
|
| | Size | File | Status | |
| |------|------|--------| |
| | Nano | `yolo11n-seg-coco.imx95.tflite` | [Download](https://huggingface.co/EdgeFirst/yolo11-seg/resolve/main/imx95/yolo11n-seg-coco.imx95.tflite) | |
| | Small | `yolo11s-seg-coco.imx95.tflite` | [Download](https://huggingface.co/EdgeFirst/yolo11-seg/resolve/main/imx95/yolo11s-seg-coco.imx95.tflite) | |
| | Medium | `yolo11m-seg-coco.imx95.tflite` | [Download](https://huggingface.co/EdgeFirst/yolo11-seg/resolve/main/imx95/yolo11m-seg-coco.imx95.tflite) | |
| | Large | `yolo11l-seg-coco.imx95.tflite` | [Download](https://huggingface.co/EdgeFirst/yolo11-seg/resolve/main/imx95/yolo11l-seg-coco.imx95.tflite) | |
| | XLarge | `yolo11x-seg-coco.imx95.tflite` | [Download](https://huggingface.co/EdgeFirst/yolo11-seg/resolve/main/imx95/yolo11x-seg-coco.imx95.tflite) | |
|
|
| </details> |
|
|
|
|
|
|
| --- |
|
|
| ## Deploy with EdgeFirst Perception |
|
|
| Copy-paste [GStreamer](https://github.com/EdgeFirstAI/gstreamer) pipeline examples for each platform. |
|
|
### NXP i.MX 8M Plus – Camera to Segmentation with Vivante NPU
|
|
| ```bash |
| gst-launch-1.0 \ |
| v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480 ! \ |
| edgefirstcameraadaptor ! \ |
| tensor_filter framework=tensorflow-lite \ |
| model=yolo11n-seg-coco.tflite \ |
| custom=Delegate:External,ExtDelegateLib:libvx_delegate.so ! \ |
| edgefirstsegdecoder ! edgefirstoverlay ! waylandsink |
| ``` |
|
|
| ### RPi5 + Hailo-8L |
|
|
| ```bash |
| gst-launch-1.0 \ |
| v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480 ! \ |
| hailonet hef-path=yolo11n-seg-coco.hailo8l.hef ! \ |
| hailofilter function-name=yolo11_nms ! \ |
| hailooverlay ! videoconvert ! autovideosink |
| ``` |
|
|
| ### NVIDIA Jetson (TensorRT) |
|
|
| ```bash |
| gst-launch-1.0 \ |
| v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480 ! \ |
| edgefirstcameraadaptor ! \ |
| nvinfer config-file-path=yolo11n-seg-coco-config.txt ! \ |
| edgefirstsegdecoder ! edgefirstoverlay ! nveglglessink |
| ``` |
|
|
|
|
| *Full pipeline documentation: [EdgeFirst GStreamer Plugins](https://github.com/EdgeFirstAI/gstreamer)* |
|
|
| --- |
|
|
| ## Foundation (HAL) Python Integration |
|
|
| ```python |
| from edgefirst.hal import Model, TensorImage |
| |
# Load model – metadata (labels, decoder config) is embedded in the file
| model = Model("yolo11n-seg-coco.tflite") |
| |
| # Run inference on an image |
| image = TensorImage.from_file("image.jpg") |
| results = model.predict(image) |
| |
| # Access detections |
| for det in results.detections: |
| print(f"{det.label}: {det.confidence:.2f} at {det.bbox}") |
| ``` |
|
|
| *[EdgeFirst HAL](https://github.com/EdgeFirstAI/hal) β Hardware abstraction layer with accelerated inference delegates.* |
|
|
| --- |
|
|
| ## CameraAdaptor |
|
|
EdgeFirst [CameraAdaptor](https://github.com/EdgeFirstAI/cameraadaptor) enables training and inference directly on native sensor formats (GREY, YUYV, etc.), skipping the ISP color-conversion pipeline entirely. This reduces latency and power consumption on edge devices.
|
|
| CameraAdaptor variants are included alongside baseline RGB models: |
|
|
| | Variant | Input Format | Use Case | |
| |---------|-------------|----------| |
| | `yolo11n-seg-coco.onnx` | RGB (3ch) | Standard camera input | |
| | `yolo11n-seg-coco-grey.onnx` | GREY (1ch) | Monochrome / IR sensors | |
| | `yolo11n-seg-coco-yuyv.onnx` | YUYV (2ch) | Raw sensor bypass | |
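Feeding the 1-channel GREY variant from an RGB source requires a luma conversion. The sketch below uses the common BT.601 luma weights; this is a convention, not necessarily the exact conversion the CameraAdaptor layer expects.

```python
# Sketch: convert an RGB pixel buffer to single-channel GREY input for the
# 1-channel model variant, using BT.601 luma weights (0.299R + 0.587G + 0.114B).
# This is a common convention; the exact CameraAdaptor preprocessing may differ.

def rgb_to_grey(rgb_pixels):
    """rgb_pixels: list of (r, g, b) uint8 tuples -> list of uint8 luma values."""
    return [
        min(255, round(0.299 * r + 0.587 * g + 0.114 * b))
        for r, g, b in rgb_pixels
    ]

# Pure red, green, blue, and mid-grey pixels.
pixels = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (128, 128, 128)]
grey = rgb_to_grey(pixels)  # one luma byte per input pixel
```

Monochrome/IR sensors deliver this single channel natively, which is exactly the conversion the GREY model lets you skip.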
|
|
*Train CameraAdaptor models with [EdgeFirst Studio](https://edgefirst.studio) – the CameraAdaptor layer is automatically inserted during training.*
|
|
| --- |
|
|
| ## Train Your Own with EdgeFirst Studio |
|
|
| Train on your own dataset with [**EdgeFirst Studio**](https://edgefirst.studio): |
|
|
| - **Free tier** includes YOLO training with automatic INT8 quantization and edge deployment |
| - Upload datasets via [EdgeFirst Recorder](https://github.com/EdgeFirstAI/recorder) or COCO/YOLO format |
| - AI-assisted annotation with auto-labeling |
| - CameraAdaptor integration for native sensor format training |
| - Deploy trained models to edge devices via [EdgeFirst Client](https://github.com/EdgeFirstAI/client) |
|
|
| --- |
|
|
| ## See Also |
|
|
| Other models in the [EdgeFirst Model Zoo](https://huggingface.co/spaces/EdgeFirst/Models): |
|
|
| | Model | Task | Best Nano Metric | Link | |
| |-------|------|-------------------|------| |
| | YOLOv5 Detection | Detection | 49.6% mAP@0.5 (ONNX) | [EdgeFirst/yolov5-det](https://huggingface.co/EdgeFirst/yolov5-det) | |
| | YOLOv8 Detection | Detection | 50.2% mAP@0.5 (ONNX) | [EdgeFirst/yolov8-det](https://huggingface.co/EdgeFirst/yolov8-det) | |
| | YOLOv8 Segmentation | Segmentation | 34.1% Mask mAP@0.5-0.95 (ONNX) | [EdgeFirst/yolov8-seg](https://huggingface.co/EdgeFirst/yolov8-seg) | |
| | YOLO11 Detection | Detection | 53.4% mAP@0.5 (ONNX) | [EdgeFirst/yolo11-det](https://huggingface.co/EdgeFirst/yolo11-det) | |
| | YOLO26 Detection | Detection | 54.9% mAP@0.5 (ONNX) | [EdgeFirst/yolo26-det](https://huggingface.co/EdgeFirst/yolo26-det) | |
| | YOLO26 Segmentation | Segmentation | 37.0% Mask mAP@0.5-0.95 (ONNX) | [EdgeFirst/yolo26-seg](https://huggingface.co/EdgeFirst/yolo26-seg) | |
|
|
| --- |
|
|
| ## Technical Details |
|
|
| ### Quantization Pipeline |
|
|
| All TFLite INT8 models are produced by EdgeFirst's custom quantization pipeline ([details](https://github.com/EdgeFirstAI/studio-ultralytics)): |
|
|
1. **ONNX Export** – Standard Ultralytics export with `simplify=True`
2. **TF-Wrapped ONNX** – Box coordinates normalized to [0,1] inside the DFL decode via `tf_wrapper` (~1.2% better mAP than post-hoc normalization)
3. **Split Decoder** – Boxes, scores, and mask coefficients split into separate output tensors for independent INT8 quantization scales
4. **Smart Calibration** – 500 images selected via greedy coverage maximization from COCO val2017
5. **Full INT8** – `uint8` input (raw pixels), `int8` output (per-tensor scales), MLIR quantizer
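The greedy coverage maximization in step 4 can be sketched as follows. This is a toy illustration of the general technique, not the actual EdgeFirst selection code; the image-to-classes mapping below is hypothetical, whereas the real pipeline draws class annotations from COCO val2017.

```python
# Toy sketch of greedy coverage maximization for calibration-set selection:
# repeatedly pick the image that covers the most not-yet-seen classes, until
# the budget (500 images in the real pipeline) is reached or nothing new is added.

def greedy_calibration_subset(image_classes, budget):
    """image_classes: {image_name: set_of_class_names}. Returns (selected, covered)."""
    covered = set()
    selected = []
    remaining = dict(image_classes)
    while remaining and len(selected) < budget:
        # The image adding the most new classes wins this round.
        best = max(remaining, key=lambda k: len(remaining[k] - covered))
        if not (remaining[best] - covered):
            break  # every reachable class already covered; stop early
        covered |= remaining.pop(best)
        selected.append(best)
    return selected, covered

# Hypothetical annotations for four images.
images = {
    "a.jpg": {"person", "dog"},
    "b.jpg": {"person"},
    "c.jpg": {"car", "bus", "person"},
    "d.jpg": {"cat"},
}
picked, covered = greedy_calibration_subset(images, budget=3)
```

Coverage-driven selection keeps rare classes represented in the calibration set, which helps the per-tensor quantization scales generalize beyond the most frequent classes.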
|
|
| ### Split Decoder Output Format |
|
|
**Segmentation** (e.g., yolo11n-seg):
- Boxes: `(1, 4, 8400)` – normalized [0,1] coordinates
- Scores: `(1, 80, 8400)` – class probabilities
- Mask coefficients: `(1, 32, 8400)` – per-anchor mask coefficients
- Protos: `(1, 160, 160, 32)` – prototype masks
|
|
| Each tensor has independent quantization scale and zero-point. EdgeFirst HAL handles dequantization and reassembly automatically. |
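The dequantization and mask assembly that HAL performs can be sketched with toy shapes. Per-tensor dequantization is `real = scale * (q - zero_point)`; a pixel's mask logit is the dot product of the prototype vector with the anchor's coefficients. The scales, zero-points, and 4-element vectors below are made up for illustration (real tensors are 32-wide, as listed above).

```python
import math

# Sketch of per-tensor INT8 dequantization and prototype-mask assembly.
# Shapes are shrunk for illustration (real outputs use 32 mask coefficients
# per anchor and (160, 160, 32) protos); scales/zero-points here are made up.

def dequantize(q, scale, zero_point):
    """real = scale * (q - zero_point), applied element-wise."""
    return [scale * (v - zero_point) for v in q]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One anchor's mask coefficients (4 instead of 32) and one proto "pixel".
coef_q = [12, -5, 0, 7]   # int8 values from the coefficient tensor
proto_q = [3, 3, 3, 3]    # int8 values from the proto tensor
coefs = dequantize(coef_q, scale=0.05, zero_point=0)
protos = dequantize(proto_q, scale=0.1, zero_point=1)

# Mask logit for this pixel = dot(protos, coefs); mask probability via sigmoid.
logit = sum(p * c for p, c in zip(protos, coefs))
prob = sigmoid(logit)
```

Because each output tensor carries its own scale and zero-point, coefficients and protos must be dequantized separately before this dot product; that is the reassembly step HAL automates.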
|
|
| ### Metadata |
|
|
| - **TFLite**: `edgefirst.json`, `labels.txt`, and `edgefirst.yaml` embedded via ZIP (no `tflite-support` dependency) |
| - **ONNX**: `edgefirst.json` embedded via `model.metadata_props` |
|
|
No standalone metadata files – models are self-contained.
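Because the TFLite metadata is embedded as a ZIP archive appended to the model file, it can be read with the standard `zipfile` module (ZIP archives are located from the end of the file, so the leading flatbuffer bytes don't interfere). The sketch below builds a stand-in "model" to demonstrate the pattern; a real `.tflite` from this repo should read the same way.

```python
import io
import json
import zipfile

# Sketch: read ZIP-embedded metadata from a TFLite file with the stdlib.
# We fabricate a stand-in model here so the example is self-contained.

def read_embedded_metadata(path):
    with zipfile.ZipFile(path) as zf:
        return {name: zf.read(name) for name in zf.namelist()}

# Build a fake model: opaque flatbuffer bytes followed by the metadata ZIP.
buf = io.BytesIO()
buf.write(b"\x00" * 64)  # stand-in for the flatbuffer payload
with zipfile.ZipFile(buf, "a") as zf:  # "a" appends an archive after the payload
    zf.writestr("edgefirst.json", json.dumps({"task": "segmentation"}))
    zf.writestr("labels.txt", "person\nbicycle\n")

with open("fake_model.tflite", "wb") as f:
    f.write(buf.getvalue())

meta = read_embedded_metadata("fake_model.tflite")
labels = meta["labels.txt"].decode().splitlines()
```

This is why no `tflite-support` dependency is needed: any ZIP reader can extract `edgefirst.json`, `labels.txt`, and `edgefirst.yaml`.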
|
|
| --- |
|
|
| ## Limitations |
|
|
- **COCO bias** – Models trained on COCO (80 classes) inherit its biases: Western-centric scenes, specific object distributions, limited weather/lighting diversity
- **INT8 accuracy loss** – Full-integer quantization costs some mAP relative to FP32 (roughly 3-5% relative for the Nano results above); the exact loss depends on model architecture and dataset
- **Thermal variation** – On-target performance varies with device temperature; sustained inference may throttle on passively cooled devices
- **Input resolution** – All models expect 640×640 input; other resolutions require letterboxing and may reduce accuracy
- **CameraAdaptor variants** – GREY/YUYV models trade color information for latency; accuracy may differ from the RGB baseline depending on the task
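The letterboxing mentioned above is a geometry computation: scale the frame to fit inside 640×640 preserving aspect ratio, then pad symmetrically. The sketch below follows the common YOLO convention; the exact preprocessing in the EdgeFirst pipeline (padding value, rounding) may differ.

```python
# Sketch of letterbox geometry for a square model input: scale to fit while
# preserving aspect ratio, then pad symmetrically. Common YOLO convention;
# the exact EdgeFirst preprocessing (pad value, rounding) may differ.

def letterbox_params(src_w, src_h, dst=640):
    """Return (scale, (new_w, new_h), (pad_x, pad_y)) for a dst x dst input."""
    scale = min(dst / src_w, dst / src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    pad_x = (dst - new_w) // 2  # left/right padding
    pad_y = (dst - new_h) // 2  # top/bottom padding
    return scale, (new_w, new_h), (pad_x, pad_y)

# A 1280x720 camera frame scales to 640x360 with 140 px of top/bottom padding.
scale, size, pad = letterbox_params(1280, 720)
```

To map predictions back to the source frame, subtract the padding from box coordinates and divide by the scale.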
|
|
| --- |
|
|
| ## Citation |
|
|
| ```bibtex |
| @software{edgefirst_yolo11_seg, |
  title = {{YOLO11 Segmentation -- EdgeFirst Edge AI}},
| author = {Au-Zone Technologies}, |
| url = {https://huggingface.co/EdgeFirst/yolo11-seg}, |
| year = {2026}, |
| license = {Apache-2.0}, |
| } |
| ``` |
|
|
| --- |
|
|
| <p align="center"> |
| <sub> |
<a href="https://edgefirst.studio">EdgeFirst Studio</a> · <a href="https://github.com/EdgeFirstAI">GitHub</a> · <a href="https://doc.edgefirst.ai">Docs</a> · <a href="https://www.au-zone.com">Au-Zone Technologies</a><br>
Apache 2.0 · © Au-Zone Technologies Inc.
| </sub> |
| </p> |
| |