YOLO26n
Model Description
YOLO26n is the nano-sized variant of the YOLO26 model family, designed for ultra-fast, low-latency object detection on edge and resource-constrained devices.
As the smallest model in the YOLO26 lineup, YOLO26n prioritizes speed, simplicity, and deployability while maintaining strong detection accuracy. It adopts an end-to-end, NMS-free architecture, which simplifies inference pipelines and reduces post-processing overhead.
YOLO26n is pretrained on the COCO dataset and serves as a lightweight baseline for real-time object detection workloads.
Quickstart
- Install NexaSDK and create a free account at sdk.nexa.ai
- Activate your device with your access token:
  nexa config set license '<access_token>'
- Run the model on Qualcomm NPU in one line:
  nexa infer NexaAI/yolo26n-npu
Features
- Nano model size optimized for edge and low-power environments
- End-to-end detection with NMS-free inference
- Real-time capable for latency-sensitive applications
- Pretrained weights available out of the box
- Ultralytics workflow support for training, validation, inference, and export
Use Cases
- Edge and embedded computer vision
- Mobile object detection
- Smart cameras and IoT applications
- Robotics and autonomous systems
- Rapid prototyping with minimal compute resources
Inputs and Outputs
Input:
- Images or video streams, automatically preprocessed by the Ultralytics framework
Output:
- Bounding boxes
- Class labels
- Confidence scores
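Because the NMS-free head emits final detections directly, consuming the model's output reduces to a confidence filter plus label lookup. The sketch below is illustrative only: the `(x1, y1, x2, y2, class_id, score)` tuple format, the `filter_detections` helper, and the tiny `COCO_NAMES` table are hypothetical stand-ins, not the actual Ultralytics output API.

```python
# Hypothetical post-processing sketch for NMS-free detector output.
# The detection tuple layout and class-name table are assumptions
# for illustration; the real Ultralytics framework returns richer
# result objects.

COCO_NAMES = {0: "person", 2: "car", 16: "dog"}  # tiny COCO subset

def filter_detections(detections, conf_threshold=0.25):
    """Keep detections above the confidence threshold and attach labels."""
    kept = []
    for x1, y1, x2, y2, class_id, score in detections:
        if score < conf_threshold:
            continue  # drop low-confidence predictions
        kept.append({
            "box": (x1, y1, x2, y2),          # bounding box corners
            "label": COCO_NAMES.get(class_id, f"class_{class_id}"),
            "confidence": score,
        })
    return kept

raw = [
    (10, 20, 110, 220, 0, 0.91),    # confident person detection
    (50, 60, 90, 100, 2, 0.12),     # low-confidence car, filtered out
    (200, 40, 320, 180, 16, 0.57),  # dog detection
]
results = filter_detections(raw)
# results → two labeled detections: "person" (0.91) and "dog" (0.57)
```

Since no non-maximum suppression pass is needed, this thresholding step is the entire post-processing pipeline, which is what keeps latency low on edge devices.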
License
This repository is licensed under the Creative Commons Attribution–NonCommercial 4.0 (CC BY-NC 4.0) license, which allows use, sharing, and modification only for non-commercial purposes with proper attribution. All NPU-related models, runtimes, and code in this project are protected under this non-commercial license and cannot be used in any commercial or revenue-generating applications. Commercial licensing or enterprise usage requires a separate agreement. For inquiries, please contact dev@nexa.ai.