---
license: apache-2.0
library_name: edgefirst
pipeline_tag: image-segmentation
tags:
- edge-ai
- npu
- tflite
- onnx
- int8
- yolo
- gstreamer
- edgefirst
- nxp
- hailo
- jetson
- real-time
- embedded
- multiplatform
model-index:
- name: yolov8-seg
  results:
  - task:
      type: image-segmentation
    dataset:
      name: COCO val2017
      type: coco
    metrics:
    - name: "Mask mAP@0.5-0.95 (Nano ONNX FP32)"
      type: map
      value: 34.1
    - name: "Mask mAP@0.5-0.95 (Nano TFLite INT8)"
      type: map
      value: 33.5
---

# YOLOv8 Segmentation — EdgeFirst Edge AI

**NXP i.MX 8M Plus** | **NXP i.MX 93** | **NXP i.MX 95** | **NXP Ara240** | **RPi5 + Hailo-8/8L** | **NVIDIA Jetson**

YOLOv8 Segmentation models optimized for edge AI deployment across multiple hardware platforms. All sizes from Nano to XLarge, in ONNX FP32 and TFLite INT8 formats, with platform-specific compiled models for NPU acceleration.

Trained on [COCO 2017](https://test.edgefirst.studio/public/projects/2839/home) (80 classes). Part of the [EdgeFirst Model Zoo](https://huggingface.co/spaces/EdgeFirst/Models).

> [!TIP]
> **Training session**: [View on EdgeFirst Studio](https://test.edgefirst.studio/public/projects/2840/experiment/training/list?exp_id=4621) — dataset, training config, metrics, and exported artifacts.

> [!NOTE]
> Best-validated baseline.

---

## Size Comparison

All models validated on COCO val2017 (5000 images, 80 classes).

| Size | Params | GFLOPs | ONNX Det mAP@0.5-0.95 | INT8 Det mAP@0.5-0.95 | ONNX Mask mAP@0.5-0.95 | INT8 Mask mAP@0.5-0.95 |
|------|--------|--------|-----------------------|-----------------------|------------------------|------------------------|
| Nano | 3.2M | 8.9 | 35.3% | 26.0% | 34.1% | 33.5% |
| Small | 11.2M | 28.8 | — | — | — | — |
| Medium | 25.9M | 79.3 | — | — | — | — |
| Large | 43.7M | 165.7 | — | — | — | — |
| XLarge | 68.2M | 258.5 | — | — | — | — |

---

## On-Target Performance

Full pipeline timing: pre-processing + inference + post-processing.
| Size | Platform | Pre-proc (ms) | Inference (ms) | Post-proc (ms) | Total (ms) | FPS |
|------|----------|---------------|----------------|----------------|------------|-----|
| — | — | — | — | — | — | — |

*Measured with the [EdgeFirst Perception](https://github.com/EdgeFirstAI) stack. Timing includes full GStreamer pipeline overhead.*

---

## Downloads
### ONNX FP32

Any platform with ONNX Runtime.

| Size | File | Status |
|------|------|--------|
| Nano | `yolov8n-seg-coco.onnx` | [Download](https://huggingface.co/EdgeFirst/yolov8-seg/resolve/main/onnx/yolov8n-seg-coco.onnx) |
| Small | `yolov8s-seg-coco.onnx` | [Download](https://huggingface.co/EdgeFirst/yolov8-seg/resolve/main/onnx/yolov8s-seg-coco.onnx) |
| Medium | `yolov8m-seg-coco.onnx` | [Download](https://huggingface.co/EdgeFirst/yolov8-seg/resolve/main/onnx/yolov8m-seg-coco.onnx) |
| Large | `yolov8l-seg-coco.onnx` | [Download](https://huggingface.co/EdgeFirst/yolov8-seg/resolve/main/onnx/yolov8l-seg-coco.onnx) |
| XLarge | `yolov8x-seg-coco.onnx` | [Download](https://huggingface.co/EdgeFirst/yolov8-seg/resolve/main/onnx/yolov8x-seg-coco.onnx) |
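All models expect a fixed 640×640 input (see Limitations), so arbitrary camera frames are usually letterboxed before inference. Below is a minimal NumPy sketch of the common YOLO convention (aspect-preserving resize plus grey padding with value 114); it is an illustration only, and the exact preprocessing these exports expect (normalization, channel order) should be confirmed against the embedded metadata:

```python
import numpy as np

def letterbox(img: np.ndarray, size: int = 640, pad_value: int = 114) -> np.ndarray:
    """Resize an HWC uint8 image preserving aspect ratio, pad to size x size."""
    h, w = img.shape[:2]
    scale = size / max(h, w)
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    # Nearest-neighbour resize via index sampling (no external dependencies)
    ys = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = img[ys][:, xs]
    # Centre the resized image on a grey canvas
    canvas = np.full((size, size, img.shape[2]), pad_value, dtype=img.dtype)
    top = (size - new_h) // 2
    left = (size - new_w) // 2
    canvas[top:top + new_h, left:left + new_w] = resized
    return canvas

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a camera frame
tensor = letterbox(frame)
print(tensor.shape)  # (640, 640, 3)
```

In production, libraries such as OpenCV provide higher-quality interpolation; the shape and padding logic is the same.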
### TFLite INT8

CPU or NPU via runtime delegate (i.MX 8M Plus VX Delegate).

| Size | File | Status |
|------|------|--------|
| Nano | `yolov8n-seg-coco.tflite` | [Download](https://huggingface.co/EdgeFirst/yolov8-seg/resolve/main/tflite/yolov8n-seg-coco.tflite) |
| Small | `yolov8s-seg-coco.tflite` | [Download](https://huggingface.co/EdgeFirst/yolov8-seg/resolve/main/tflite/yolov8s-seg-coco.tflite) |
| Medium | `yolov8m-seg-coco.tflite` | [Download](https://huggingface.co/EdgeFirst/yolov8-seg/resolve/main/tflite/yolov8m-seg-coco.tflite) |
| Large | `yolov8l-seg-coco.tflite` | [Download](https://huggingface.co/EdgeFirst/yolov8-seg/resolve/main/tflite/yolov8l-seg-coco.tflite) |
| XLarge | `yolov8x-seg-coco.tflite` | [Download](https://huggingface.co/EdgeFirst/yolov8-seg/resolve/main/tflite/yolov8x-seg-coco.tflite) |
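The INT8 models take raw `uint8` pixels and return `int8` tensors with per-tensor quantization parameters (see Quantization Pipeline under Technical Details). If you are not using EdgeFirst HAL, outputs must be dequantized with the standard TFLite affine scheme, `real = scale * (q - zero_point)`; the scale and zero-point values below are hypothetical, and in practice they come from the interpreter's output details:

```python
import numpy as np

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """TFLite affine dequantization: real = scale * (q - zero_point)."""
    return scale * (q.astype(np.int32) - zero_point)

# Hypothetical per-tensor parameters; real values are read from the
# interpreter's output details ("quantization" field) for each tensor.
boxes_q = np.array([[-128, 0, 127]], dtype=np.int8)
scale, zero_point = 1.0 / 255.0, -128

boxes = dequantize(boxes_q, scale, zero_point)
print(boxes)  # approximately [[0.0, 0.502, 1.0]]
```

Because the split decoder gives each output tensor its own scale and zero-point, this step is applied independently per tensor before reassembly.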
### NXP i.MX 95 (eIQ Neutron)

Optimized for the eIQ Neutron NPU.

| Size | File | Status |
|------|------|--------|
| Nano | `yolov8n-seg-coco.imx95.tflite` | [Download](https://huggingface.co/EdgeFirst/yolov8-seg/resolve/main/imx95/yolov8n-seg-coco.imx95.tflite) |
| Small | `yolov8s-seg-coco.imx95.tflite` | [Download](https://huggingface.co/EdgeFirst/yolov8-seg/resolve/main/imx95/yolov8s-seg-coco.imx95.tflite) |
| Medium | `yolov8m-seg-coco.imx95.tflite` | [Download](https://huggingface.co/EdgeFirst/yolov8-seg/resolve/main/imx95/yolov8m-seg-coco.imx95.tflite) |
| Large | `yolov8l-seg-coco.imx95.tflite` | [Download](https://huggingface.co/EdgeFirst/yolov8-seg/resolve/main/imx95/yolov8l-seg-coco.imx95.tflite) |
| XLarge | `yolov8x-seg-coco.imx95.tflite` | Coming Soon |
---

## Deploy with EdgeFirst Perception

Copy-paste [GStreamer](https://github.com/EdgeFirstAI/gstreamer) pipeline examples for each platform.

### NXP i.MX 8M Plus — Camera to Detection with Vivante NPU

```bash
gst-launch-1.0 \
  v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480 ! \
  edgefirstcameraadaptor ! \
  tensor_filter framework=tensorflow-lite \
    model=yolov8n-seg-coco.tflite \
    custom=Delegate:External,ExtDelegateLib:libvx_delegate.so ! \
  edgefirstsegdecoder ! edgefirstoverlay ! waylandsink
```

### RPi5 + Hailo-8L

```bash
gst-launch-1.0 \
  v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480 ! \
  hailonet hef-path=yolov8n-seg-coco.hailo8l.hef ! \
  hailofilter function-name=yolov8_nms ! \
  hailooverlay ! videoconvert ! autovideosink
```

### NVIDIA Jetson (TensorRT)

```bash
gst-launch-1.0 \
  v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480 ! \
  edgefirstcameraadaptor ! \
  nvinfer config-file-path=yolov8n-seg-coco-config.txt ! \
  edgefirstsegdecoder ! edgefirstoverlay ! nveglglessink
```

*Full pipeline documentation: [EdgeFirst GStreamer Plugins](https://github.com/EdgeFirstAI/gstreamer)*

---

## Foundation (HAL) Python Integration

```python
from edgefirst.hal import Model, TensorImage

# Load model — metadata (labels, decoder config) is embedded in the file
model = Model("yolov8n-seg-coco.tflite")

# Run inference on an image
image = TensorImage.from_file("image.jpg")
results = model.predict(image)

# Access detections
for det in results.detections:
    print(f"{det.label}: {det.confidence:.2f} at {det.bbox}")
```

*[EdgeFirst HAL](https://github.com/EdgeFirstAI/hal) — Hardware abstraction layer with accelerated inference delegates.*

---

## CameraAdaptor

EdgeFirst [CameraAdaptor](https://github.com/EdgeFirstAI/cameraadaptor) enables training and inference directly on native sensor formats (GREY, YUYV, etc.) — skipping the ISP color conversion pipeline entirely. This reduces latency and power consumption on edge devices.
CameraAdaptor variants are included alongside the baseline RGB models:

| Variant | Input Format | Use Case |
|---------|--------------|----------|
| `yolov8n-seg-coco.onnx` | RGB (3ch) | Standard camera input |
| `yolov8n-seg-coco-grey.onnx` | GREY (1ch) | Monochrome / IR sensors |
| `yolov8n-seg-coco-yuyv.onnx` | YUYV (2ch) | Raw sensor bypass |

*Train CameraAdaptor models with [EdgeFirst Studio](https://edgefirst.studio) — the CameraAdaptor layer is automatically inserted during training.*

---

## Train Your Own with EdgeFirst Studio

Train on your own dataset with [**EdgeFirst Studio**](https://edgefirst.studio):

- **Free tier** includes YOLO training with automatic INT8 quantization and edge deployment
- Upload datasets via [EdgeFirst Recorder](https://github.com/EdgeFirstAI/recorder) or in COCO/YOLO format
- AI-assisted annotation with auto-labeling
- CameraAdaptor integration for native sensor format training
- Deploy trained models to edge devices via [EdgeFirst Client](https://github.com/EdgeFirstAI/client)

---

## See Also

Other models in the [EdgeFirst Model Zoo](https://huggingface.co/spaces/EdgeFirst/Models):

| Model | Task | Best Nano Metric | Link |
|-------|------|------------------|------|
| YOLOv5 Detection | Detection | 49.6% mAP@0.5 (ONNX) | [EdgeFirst/yolov5-det](https://huggingface.co/EdgeFirst/yolov5-det) |
| YOLOv8 Detection | Detection | 50.2% mAP@0.5 (ONNX) | [EdgeFirst/yolov8-det](https://huggingface.co/EdgeFirst/yolov8-det) |
| YOLO11 Detection | Detection | 53.4% mAP@0.5 (ONNX) | [EdgeFirst/yolo11-det](https://huggingface.co/EdgeFirst/yolo11-det) |
| YOLO11 Segmentation | Segmentation | 35.5% Mask mAP@0.5-0.95 (ONNX) | [EdgeFirst/yolo11-seg](https://huggingface.co/EdgeFirst/yolo11-seg) |
| YOLO26 Detection | Detection | 54.9% mAP@0.5 (ONNX) | [EdgeFirst/yolo26-det](https://huggingface.co/EdgeFirst/yolo26-det) |
| YOLO26 Segmentation | Segmentation | 37.0% Mask mAP@0.5-0.95 (ONNX) | [EdgeFirst/yolo26-seg](https://huggingface.co/EdgeFirst/yolo26-seg) |

---

## Technical Details

### Quantization Pipeline

All TFLite INT8 models are produced by EdgeFirst's custom quantization pipeline ([details](https://github.com/EdgeFirstAI/studio-ultralytics)):

1. **ONNX Export** — Standard Ultralytics export with `simplify=True`
2. **TF-Wrapped ONNX** — Box coordinates normalized to [0,1] inside the DFL decode via `tf_wrapper` (~1.2% better mAP than post-hoc normalization)
3. **Split Decoder** — Boxes, scores, and mask coefficients split into separate output tensors for independent INT8 quantization scales
4. **Smart Calibration** — 500 images selected via greedy coverage maximization from COCO val2017
5. **Full INT8** — `uint8` input (raw pixels), `int8` output (per-tensor scales), MLIR quantizer

### Split Decoder Output Format

**Segmentation** (e.g., yolov8n-seg):

- Boxes: `(1, 4, 8400)` — normalized [0,1] coordinates
- Scores: `(1, 80, 8400)` — class probabilities
- Mask coefficients: `(1, 32, 8400)` — per-anchor mask coefficients
- Protos: `(1, 160, 160, 32)` — prototype masks

Each tensor has an independent quantization scale and zero-point. EdgeFirst HAL handles dequantization and reassembly automatically.

### Metadata

- **TFLite**: `edgefirst.json`, `labels.txt`, and `edgefirst.yaml` embedded via ZIP (no `tflite-support` dependency)
- **ONNX**: `edgefirst.json` embedded via `model.metadata_props`

No standalone metadata files — models are self-contained.
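For readers implementing their own post-processing instead of using EdgeFirst HAL, the reassembly step follows the standard YOLOv8-seg prototype-mask scheme: each anchor's mask is a sigmoid of a linear combination of the 32 prototype masks, weighted by that anchor's coefficients. A sketch with toy tensor sizes and random data, for shape illustration only (the real model uses 8400 anchors and 160×160 protos):

```python
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

# Toy stand-ins for the real split-decoder outputs:
n_anchors, n_coefs, proto_hw = 4, 32, 8
coefs = np.random.randn(1, n_coefs, n_anchors)            # (1, 32, 8400) in the real model
protos = np.random.randn(1, proto_hw, proto_hw, n_coefs)  # (1, 160, 160, 32) in the real model

# Per-anchor mask = sigmoid(protos . coefs): one linear combination of the
# prototype masks per anchor, then a sigmoid to map logits into (0, 1).
flat = protos.reshape(1, proto_hw * proto_hw, n_coefs)    # (1, H*W, 32)
masks = sigmoid(np.einsum("bpc,bca->bap", flat, coefs))   # (1, anchors, H*W)
masks = masks.reshape(1, n_anchors, proto_hw, proto_hw)
print(masks.shape)  # (1, 4, 8, 8)
```

In a full pipeline this runs after dequantization and score thresholding, and the resulting masks are upsampled and cropped to each detection's box.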
---

## Limitations

- **COCO bias** — Models trained on COCO (80 classes) inherit its biases: Western-centric scenes, specific object distributions, limited weather/lighting diversity
- **INT8 accuracy loss** — Full-integer quantization typically degrades mAP by 6-12% relative to FP32; the actual loss depends on model architecture and dataset
- **Thermal variation** — On-target performance varies with device temperature; sustained inference may throttle on passively-cooled devices
- **Input resolution** — All models expect 640×640 input; other resolutions require letterboxing and may reduce accuracy
- **CameraAdaptor variants** — GREY/YUYV models trade color information for latency; accuracy may differ from the RGB baseline depending on the task

---

## Citation

```bibtex
@software{edgefirst_yolov8_seg,
  title   = {{YOLOv8 Segmentation — EdgeFirst Edge AI}},
  author  = {Au-Zone Technologies},
  url     = {https://huggingface.co/EdgeFirst/yolov8-seg},
  year    = {2026},
  license = {Apache-2.0},
}
```

---

EdgeFirst Studio · GitHub · Docs · Au-Zone Technologies
Apache 2.0 · © Au-Zone Technologies Inc.