---
license: apache-2.0
---

# YOLOLite-edge_s (ONNX, 320×320, P2 head)

YOLOLite-edge_s is a lightweight, CPU-focused object detection model designed for **extreme edge performance** at very low resolutions. This P2-enabled variant is optimized for **small-object detection at 320×320** while maintaining high real-time throughput on ordinary CPUs.

📦 Full source code: https://github.com/Lillthorin/YoloLite-Official-Repo

📊 Full benchmark results: see `BENCHMARK.md` in the repository

---

## 🔍 Key Features

- **Real-time CPU throughput: 94–101 FPS end-to-end**
- **Fast ONNX inference: 8–10 ms per frame**
- Optimized for **industrial, robotics, and edge computing**
- Enhanced **P2 head** for small-object performance at 320 px
- Supports **resize** or **letterbox** preprocessing
- Evaluated across **40+ diverse Roboflow100 datasets**

---

## ⚡ Real-World CPU Performance (ONNX Runtime)

Tested on 1080p traffic footage (`intersection.mp4`) using `onnx_intersection_showcase.py` with:

- Model: `edge_s_320_p2.onnx`
- Execution Provider: `CPUExecutionProvider`
- Preprocessing: resize
- Resolution: **320×320**

| Measurement | Result |
|-------------|--------|
| **End-to-end FPS** | **94–101 FPS** |
| **Raw inference latency** | **8–10 ms** per frame |
| **Pipeline includes** | video → resize → inference → NMS → drawing |

These values represent **actual full-pipeline performance**, not isolated model latency.

---

## 🧪 Example Usage

```python
from infer_onnx import ONNX_Predict
import cv2

# Load the exported ONNX model on CPU, using plain resize preprocessing
predict = ONNX_Predict(
    "edge_s_320_p2.onnx",
    providers=["CPUExecutionProvider"],
    use_letterbox=False,
)

frame = cv2.imread("image.jpg")
boxes, scores, classes = predict.infer_image(frame, img_size=320)

for (x1, y1, x2, y2), score, cls in zip(boxes, scores, classes):
    print(x1, y1, x2, y2, score, cls)
```
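The letterbox option mentioned above preserves the input's aspect ratio by scaling the image to fit inside the target square and padding the remainder, rather than stretching it. As a rough illustration of the idea (this is **not** the repository's implementation; the `letterbox` helper, the gray `pad_value=114`, and the NumPy-only resize are assumptions for the sketch):

```python
import numpy as np

def letterbox(img: np.ndarray, size: int = 320, pad_value: int = 114):
    """Scale `img` to fit a size x size canvas and center-pad the rest.

    Illustrative sketch only: uses a nearest-neighbour NumPy resize so the
    example stays dependency-free. Returns the padded image plus the scale
    and (left, top) offsets needed to map boxes back to the original frame.
    """
    h, w = img.shape[:2]
    scale = size / max(h, w)                       # fit the longer side
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    # Nearest-neighbour resize via index arithmetic
    ys = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = img[ys[:, None], xs]
    canvas = np.full((size, size, img.shape[2]), pad_value, dtype=img.dtype)
    top = (size - new_h) // 2
    left = (size - new_w) // 2
    canvas[top:top + new_h, left:left + new_w] = resized
    return canvas, scale, (left, top)

# A 1280x720 frame fits as 320x180 inside the 320x320 canvas, padded top/bottom
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
padded, scale, (left, top) = letterbox(frame)
print(padded.shape, scale, left, top)  # (320, 320, 3) 0.25 0 70
```

A detection `(x1, y1, x2, y2)` on the padded image maps back to the original frame as `((x1 - left) / scale, (y1 - top) / scale, ...)`, which is why the helper returns the scale and offsets.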