Update model card for yolov8-det
README.md CHANGED

@@ -46,9 +46,11 @@ model-index:
 YOLOv8 Detection models optimized for edge AI deployment across multiple hardware platforms. All sizes from Nano to XLarge, in ONNX FP32 and TFLite INT8 formats, with platform-specific compiled models for NPU acceleration.
 
 Trained on [COCO 2017](https://test.edgefirst.studio/public/projects/1123/datasets/gallery/main?dataset=4819) (80 classes). Part of the [EdgeFirst Model Zoo](https://huggingface.co/EdgeFirst).
+> [!TIP]
 > **Training session**: [View on EdgeFirst Studio](https://test.edgefirst.studio/public/projects/1123/experiment/training/details?train_session_id=9488) – dataset, training config, metrics, and exported artifacts.
 
->
+> [!NOTE]
+> Best-validated baseline.
 
 ---
 
@@ -74,7 +76,7 @@ Full pipeline timing: pre-processing + inference + post-processing.
 |------|----------|---------------|----------------|-----------------|------------|-----|
 | … | … | … | … | … | … | … |
 
-
+*Measured with the [EdgeFirst Perception](https://github.com/EdgeFirstAI) stack. Timing includes full GStreamer pipeline overhead.*
 
 ---
 
@@ -160,7 +162,7 @@ gst-launch-1.0 \
 ```
 
 
-
+*Full pipeline documentation: [EdgeFirst GStreamer Plugins](https://github.com/EdgeFirstAI/gstreamer)*
 
 ---
 
@@ -181,7 +183,7 @@ for det in results.detections:
     print(f"{det.label}: {det.confidence:.2f} at {det.bbox}")
 ```
 
-
+*[EdgeFirst HAL](https://github.com/EdgeFirstAI/hal) – hardware abstraction layer with accelerated inference delegates.*
 
 ---
 
@@ -197,19 +199,19 @@ CameraAdaptor variants are included alongside baseline RGB models:
 | `yolov8n-det-coco-grey.onnx` | GREY (1ch) | Monochrome / IR sensors |
 | `yolov8n-det-coco-yuyv.onnx` | YUYV (2ch) | Raw sensor bypass |
 
-
+*Train CameraAdaptor models with [EdgeFirst Studio](https://edgefirst.studio) – the CameraAdaptor layer is inserted automatically during training.*
 
 ---
 
 ## Train Your Own with EdgeFirst Studio
 
-
-
-
-
-
-
-
+Train on your own dataset with [**EdgeFirst Studio**](https://edgefirst.studio):
+
+- **Free tier** includes YOLO training with automatic INT8 quantization and edge deployment
+- Upload datasets via [EdgeFirst Recorder](https://github.com/EdgeFirstAI/recorder) or in COCO/YOLO format
+- AI-assisted annotation with auto-labeling
+- CameraAdaptor integration for native sensor-format training
+- Deploy trained models to edge devices via [EdgeFirst Client](https://github.com/EdgeFirstAI/client)
 
 ---
 
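The CameraAdaptor table in the diff above lists GREY (1ch) and YUYV (2ch) input formats. As a rough illustration of what those tensor shapes mean, here is a minimal NumPy sketch that converts an RGB frame into each layout. The BT.601 coefficients and the packed Y/interleaved-UV channel layout are assumptions for illustration only; the card does not specify the exact preprocessing these models expect.

```python
import numpy as np

def rgb_to_grey(rgb: np.ndarray) -> np.ndarray:
    """RGB (H, W, 3) uint8 -> single-channel (H, W, 1) via BT.601 luma weights."""
    r, g, b = (rgb[..., i].astype(np.float32) for i in range(3))
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return np.clip(y, 0, 255).astype(np.uint8)[..., None]

def rgb_to_yuyv(rgb: np.ndarray) -> np.ndarray:
    """RGB (H, W, 3) uint8 -> 2-channel (H, W, 2) YUYV-style tensor:
    channel 0 carries Y per pixel, channel 1 alternates U/V per column (4:2:2).
    This channel layout is an assumption, not taken from the model card."""
    assert rgb.shape[1] % 2 == 0, "YUYV packing requires an even width"
    r, g, b = (rgb[..., i].astype(np.float32) for i in range(3))
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.5 * b + 128.0
    v = 0.5 * r - 0.419 * g - 0.081 * b + 128.0
    uv = np.empty_like(y)
    uv[:, 0::2] = u[:, 0::2]  # even columns carry U
    uv[:, 1::2] = v[:, 1::2]  # odd columns carry V
    return np.clip(np.stack([y, uv], axis=-1), 0, 255).astype(np.uint8)

frame = np.zeros((2, 4, 3), dtype=np.uint8)  # tiny black RGB test frame
print(rgb_to_grey(frame).shape)   # (2, 4, 1)
print(rgb_to_yuyv(frame).shape)   # (2, 4, 2)
```

Either tensor would still need the model's expected resizing and normalization before inference; consult the training session linked above for the authoritative preprocessing.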