---
license: apache-2.0
---
# YOLOLite-edge_s (ONNX, 320×320, P2 head)
YOLOLite-edge_s is a lightweight, CPU-focused object detection model designed for **extreme edge performance** at very low resolutions.
This P2-enabled variant is optimized for **small-object detection at 320×320** while maintaining high real-time throughput on ordinary CPUs.
📦 Full source code: https://github.com/Lillthorin/YoloLite-Official-Repo
📊 Full Benchmark Results: See `BENCHMARK.md` in the repository
---
## πŸ” Key Features
- **Real-time CPU throughput: 94–101 FPS end-to-end**
- **Fast ONNX inference: 8–10 ms per frame**
- Optimized for **industrial, robotics, and edge computing**
- Enhanced **P2 head** for small-object performance at 320px
- Supports **resize** or **letterbox** preprocessing
- Evaluated across **40+ diverse Roboflow100 datasets**
---
## ⚡ Real-World CPU Performance (ONNX Runtime)
Tested on 1080p traffic footage (`intersection.mp4`) using
`onnx_intersection_showcase.py` with:
- Model: `edge_s_320_p2.onnx`
- Execution Provider: `CPUExecutionProvider`
- Preprocessing: Resize
- Resolution: **320×320**
| Measurement | Result |
|------------|--------|
| **End-to-end FPS** | **94–101 FPS** |
| **Raw inference latency** | **8–10 ms** per frame |
| **Pipeline includes** | video → resize → inference → NMS → drawing |
These values represent **actual full-pipeline performance**, not isolated model latency.
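The distinction between end-to-end FPS and raw inference latency is easy to reproduce with a small timing helper. This is a generic sketch, not the repository's benchmark script:

```python
import time

def time_per_call(fn, n=50, warmup=5):
    """Return the average seconds per call of fn, after a short warm-up."""
    for _ in range(warmup):
        fn()
    t0 = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - t0) / n
```

Timing only `lambda: session.run(None, {input_name: blob})` on an `onnxruntime.InferenceSession` yields the raw-latency row; timing one iteration of the full read → resize → inference → NMS → drawing loop yields the end-to-end figure.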
---
## 🧪 Example Usage
```python
from infer_onnx import ONNX_Predict
import cv2

# Load the exported model with ONNX Runtime on CPU.
predict = ONNX_Predict(
    "edge_s_320_p2.onnx",
    providers=["CPUExecutionProvider"],
    use_letterbox=False,  # plain resize preprocessing
)

# Run detection on a single image at the model's 320x320 input size.
frame = cv2.imread("image.jpg")
boxes, scores, classes = predict.infer_image(frame, img_size=320)

for (x1, y1, x2, y2), score, cls in zip(boxes, scores, classes):
    print(x1, y1, x2, y2, score, cls)
```