Sync object-detection from metro-analytics-catalog
Changed files:

- `.gitattributes` (+2, −0)
- `README.md` (+28, −20)
- `expected_output_dlstreamer.gif` (+3, −0)
- `expected_output_openvino.jpg` (+3, −0)
- `export_and_quantize.sh` (+9, −0)
## .gitattributes (CHANGED)

```diff
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+expected_output_dlstreamer.gif filter=lfs diff=lfs merge=lfs -text
+expected_output_openvino.jpg filter=lfs diff=lfs merge=lfs -text
```
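The two new patterns are literal filenames, so their effect is easy to sanity-check. A minimal sketch using Python's `fnmatch` (which approximates gitattributes glob matching for simple patterns like these; `git check-attr` is the authoritative tool, and the helper name below is illustrative):

```python
from fnmatch import fnmatch

# LFS patterns from .gitattributes: the wildcard ones were already
# present; the two literal filenames are added by this commit.
lfs_patterns = [
    "*.zip",
    "*.zst",
    "*tfevents*",
    "expected_output_dlstreamer.gif",
    "expected_output_openvino.jpg",
]

def routed_to_lfs(path: str) -> bool:
    """True if any LFS pattern matches the path (fnmatch approximation)."""
    return any(fnmatch(path, pat) for pat in lfs_patterns)

print(routed_to_lfs("expected_output_openvino.jpg"))  # True
print(routed_to_lfs("README.md"))                     # False
```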
## README.md (CHANGED)

````diff
@@ -1,7 +1,5 @@
 # Object Detection
 
-> **Validated with:** OpenVINO 2026.1.0, NNCF 3.0.0, DLStreamer 2026.0, Ultralytics 8.4.46, Python 3.11+
-
 | Property | Value |
 |---|---|
 | **Category** | General Object Detection (80-class COCO) |
@@ -78,7 +76,7 @@ The second argument selects the precision (`FP32`, `FP16`, `INT8`); the default
 The script performs the following steps:
 
 1. Installs dependencies (`openvino`, `ultralytics`; adds `nncf` for INT8).
-2. Downloads a sample test image (`test.jpg`).
+2. Downloads a sample test image (`test.jpg`) and a sample test video (`test_video.mp4`).
 3. Downloads the PyTorch weights and exports to OpenVINO IR.
 4. *(INT8 only)* Quantizes the model using NNCF post-training quantization.
 
@@ -163,7 +161,7 @@ for det in dets:
             cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2)
     print(f" {label} at ({x1},{y1})-({x2},{y2})")
 
-cv2.imwrite("output.jpg", image)
+cv2.imwrite("output_openvino.jpg", image)
 ```
 
 **Device targets:**
@@ -176,7 +174,7 @@ cv2.imwrite("output.jpg", image)
 
 The `export_and_quantize.sh` script downloads `test.jpg` automatically.
 Re-run the OpenVINO sample above.
-The script reads `test.jpg`, prints each detected object to the console, and writes the annotated frame to `output.jpg`.
+The script reads `test.jpg`, prints each detected object to the console, and writes the annotated frame to `output_openvino.jpg`.
 
 Expected console output (representative):
 
@@ -189,10 +187,15 @@ Total detections: 5
 person 0.50 at (0,553)-(68,869)
 ```
 
+#### Expected Output
+
+![Expected output](expected_output_openvino.jpg)
+
 ### DLStreamer Sample
 
-The pipeline below runs the FP16 YOLO26 detector on a single image via
-`gvadetect`, overlays bounding boxes,
+The pipeline below runs the FP16 YOLO26 detector on the sample video via
+`gvadetect`, overlays bounding boxes, saves the annotated result to
+`output_dlstreamer.mp4`, and prints all detections per frame.
 
 > **Notes on running this sample:**
 >
@@ -208,8 +211,6 @@ The pipeline below runs the FP16 YOLO26 detector on a single image via
 > /opt/intel/dlstreamer/gstreamer/lib/python3/dist-packages:${PYTHONPATH:-}
 > ```
 
-**Image-based quick test** (uses `filesrc` with a single JPEG):
-
 ```python
 import gi
 
@@ -220,16 +221,19 @@ from gstgva import VideoFrame
 
 Gst.init(None)
 
-
-
-#
-# For NPU: change device=
+INPUT_VIDEO = "test_video.mp4"
+
+# For CPU: change device=GPU to device=CPU.
+# For NPU: change device=GPU to device=NPU (batch-size=1, nireq=4 recommended).
 pipeline_str = (
-    "filesrc location=
-    "
+    f"filesrc location={INPUT_VIDEO} ! decodebin3 ! "
+    "videoconvert ! "
     "gvadetect model=yolo26n_openvino_model/yolo26n.xml "
-    "device=
-    "
+    "device=GPU "
+    "threshold=0.4 ! queue ! "
+    "gvawatermark ! videoconvert ! video/x-raw,format=I420 ! "
+    "openh264enc ! h264parse ! "
+    "mp4mux ! filesink name=sink location=output_dlstreamer.mp4"
 )
 pipeline = Gst.parse_launch(pipeline_str)
 
@@ -257,11 +261,15 @@ bus.timed_pop_filtered(
 pipeline.set_state(Gst.State.NULL)
 ```
 
+#### Expected Output
+
+![Expected output GIF](expected_output_dlstreamer.gif)
+
 **Device targets:**
 
-- `device=
-- `device=
-- `device=NPU` -- use `batch-size=1` and `nireq=4` for best NPU utilization.
+- `device=GPU` -- default in the sample code.
+- `device=CPU` -- change `device=GPU` to `device=CPU`.
+- `device=NPU` -- change `device=GPU` to `device=NPU`; use `batch-size=1` and `nireq=4` for best NPU utilization.
 
 ---
 
````
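The representative console lines above (e.g. `person 0.50 at (0,553)-(68,869)`) follow a fixed format, which makes them easy to post-process. A small stdlib sketch; the function name and the returned tuple shape are illustrative, not part of the sample itself:

```python
import re

# Matches the sample's console format: "<label> <conf> at (x1,y1)-(x2,y2)"
DETECTION_RE = re.compile(
    r"(?P<label>\w+)\s+(?P<conf>\d+\.\d+)\s+at\s+"
    r"\((?P<x1>\d+),(?P<y1>\d+)\)-\((?P<x2>\d+),(?P<y2>\d+)\)"
)

def parse_detection(line: str):
    """Parse one console line into (label, confidence, (x1, y1, x2, y2))."""
    m = DETECTION_RE.search(line)
    if m is None:
        return None
    return (
        m["label"],
        float(m["conf"]),
        (int(m["x1"]), int(m["y1"]), int(m["x2"]), int(m["y2"])),
    )

print(parse_detection("person 0.50 at (0,553)-(68,869)"))
# → ('person', 0.5, (0, 553, 68, 869))
```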
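The device-target bullets in the README amount to a one-token change in the pipeline string. A sketch that rebuilds the same string for each target, using only string assembly (no GStreamer required to run it; the helper name is illustrative):

```python
def build_pipeline(input_video: str = "test_video.mp4", device: str = "GPU") -> str:
    """Assemble the gst-launch string from the DLStreamer sample,
    parameterized by inference device (GPU, CPU, or NPU)."""
    return (
        f"filesrc location={input_video} ! decodebin3 ! "
        "videoconvert ! "
        "gvadetect model=yolo26n_openvino_model/yolo26n.xml "
        f"device={device} "
        "threshold=0.4 ! queue ! "
        "gvawatermark ! videoconvert ! video/x-raw,format=I420 ! "
        "openh264enc ! h264parse ! "
        "mp4mux ! filesink name=sink location=output_dlstreamer.mp4"
    )

# NPU variant: same pipeline, different device token.
print("device=NPU" in build_pipeline(device="NPU"))  # True
```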
## expected_output_dlstreamer.gif (ADDED, stored with Git LFS)

## expected_output_openvino.jpg (ADDED, stored with Git LFS)
## export_and_quantize.sh (CHANGED)

```diff
@@ -44,6 +44,15 @@ else
     echo "Already present: test.jpg"
 fi
 
+echo "--- Downloading sample test video ---"
+if [[ ! -f test_video.mp4 ]]; then
+    wget -q -O test_video.mp4 \
+        https://github.com/intel-iot-devkit/sample-videos/raw/master/person-bicycle-car-detection.mp4
+    echo "Downloaded: test_video.mp4"
+else
+    echo "Already present: test_video.mp4"
+fi
+
 if [[ "${PRECISION}" == "FP32" ]]; then
     HALF_FLAG="False"
     EXPORT_LABEL="FP32"
```