# Worker Safety Detection
| Property | Value |
|---|---|
| Category | Object Detection (PPE / Safety Compliance) |
| Base Model | Worker Safety Gear Detection (Intel Edge AI Resources, Geti-trained) |
| Source Framework | Intel Geti (OpenVINO IR) |
| Supported Precisions | FP32 |
| Inference Engine | OpenVINO |
| Hardware | CPU, GPU, NPU |
| Detected Class(es) | safety_jacket (class 0), safety_helmet (class 1) |
## Overview
Worker Safety Compliance Detection is a Metro Analytics use case that detects safety gear on workers to verify compliance with personal protective equipment (PPE) requirements.
It uses a pre-trained FP32 detection model from Intel Edge AI Resources, trained with Intel Geti on worker safety imagery.
The model detects two classes: safety_jacket (high-visibility vest) and safety_helmet (hard hat).
Frames where expected PPE is not detected indicate non-compliance.
The FP32 model ships as an OpenVINO IR, ready for deployment on Intel CPUs, GPUs, and NPUs without additional conversion steps.
Typical deployments include:
- Construction Site Safety -- verify that all workers wear hard hats and high-visibility vests before entering active zones.
- Warehouse Compliance -- enforce PPE policies at loading docks and forklift areas.
- Industrial Zone Monitoring -- continuous compliance scanning at facility entry points.
- Automated Incident Reporting -- generate alerts when expected `safety_helmet` or `safety_jacket` detections are absent for detected persons.
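The compliance rule behind these deployments can be sketched as a simple frame-level check: given the labels the detector reports for one frame, flag the frame when either expected class is missing. The function and names below are illustrative, not part of the model package:

```python
# Classes emitted by the detector (see the table above).
REQUIRED_PPE = {"safety_jacket", "safety_helmet"}

def missing_ppe(detections, threshold=0.4):
    """Return the set of required PPE classes absent from one frame.

    `detections` is an iterable of (label, confidence) pairs. An empty
    result means the frame is compliant.
    """
    present = {label for label, conf in detections if conf >= threshold}
    return REQUIRED_PPE - present

# Example: helmet detected, jacket missing -> non-compliant frame.
print(missing_ppe([("safety_helmet", 0.87)]))  # {'safety_jacket'}
```

A real deployment would aggregate this over several consecutive frames before raising an alert, to avoid flickering detections triggering false incidents.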
## Prerequisites
- Python 3.11+
- Install OpenVINO (latest version)
- Install Intel DLStreamer (latest version)
Create and activate a Python virtual environment before running the scripts:
```bash
python3 -m venv .venv --system-site-packages
source .venv/bin/activate
```
Note: The `--system-site-packages` flag is required so the virtual environment can access the system-installed OpenVINO and DLStreamer Python packages.
## Getting Started
### Download Model
Run the provided script to download and extract the pre-trained FP32 model from Intel Edge AI Resources:
```bash
chmod +x export_and_quantize.sh
./export_and_quantize.sh
```
The script performs the following steps:
- Installs dependencies (`openvino`).
- Downloads a sample test video (`test_video.avi`) from Intel Edge AI Resources.
- Downloads `worker-safety-gear-detection.zip` from the Intel Edge AI Resources repository.
- Extracts the FP32 OpenVINO IR model.
Output files:
- `./models/worker-safety-gear-detection/` -- extracted model directory containing the FP32 OpenVINO IR (`model.xml`, `model.bin`).
Note: The FP32 model is ready for production use on CPU and GPU. An INT8 variant is also available from the INT8 models directory for higher throughput.
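As a quick sanity check after extraction, a few lines of Python can confirm that the IR pair landed where the pipeline expects it. This helper is illustrative (not part of the download script); the directory layout simply follows the zip contents:

```python
from pathlib import Path

def find_model_ir(root="models/worker-safety-gear-detection"):
    """Return the (model.xml, model.bin) pair under `root`, or None if absent."""
    for xml in sorted(Path(root).rglob("model.xml")):
        bin_path = xml.with_suffix(".bin")
        if bin_path.exists():
            return xml, bin_path
    return None

print("IR found:", find_model_ir())
```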
### DLStreamer Sample
The pipeline below runs the worker safety FP32 detector on the sample video via `gvadetect`, overlays bounding boxes with `gvawatermark`, and saves the annotated result to `output_dlstreamer.mp4`.
Notes on running this sample:
- The Geti-exported model embeds post-processing and labels internally; `gvadetect` auto-discovers the model type for Geti-exported IRs.
- Export `PYTHONPATH` so the DLStreamer Python module is importable:

```bash
source /opt/intel/openvino_2026/setupvars.sh
source /opt/intel/dlstreamer/scripts/setup_dls_env.sh
export PYTHONPATH=/opt/intel/dlstreamer/python:\
/opt/intel/dlstreamer/gstreamer/lib/python3/dist-packages:${PYTHONPATH:-}
```
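A mis-set `PYTHONPATH` is the most common reason `from gstgva import VideoFrame` fails, so a short pre-flight check can save debugging time. This helper is illustrative and not part of DLStreamer:

```python
import os
from pathlib import Path

def missing_pythonpath_entries(pythonpath=None):
    """Return PYTHONPATH entries that do not exist on disk."""
    raw = os.environ.get("PYTHONPATH", "") if pythonpath is None else pythonpath
    return [p for p in raw.split(os.pathsep) if p and not Path(p).exists()]

# Any entry listed here points at a directory that is absent on this machine.
print(missing_pythonpath_entries())
```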
```python
import gi

gi.require_version("Gst", "1.0")
gi.require_version("GstVideo", "1.0")
from gi.repository import Gst
from gstgva import VideoFrame

Gst.init(None)

MODEL_XML = "models/worker-safety-gear-detection/deployment/Detection/model/model.xml"
INPUT_VIDEO = "test_video.avi"

# For CPU: change device=GPU to device=CPU.
# For NPU: change device=GPU to device=NPU (batch-size=1, nireq=4 recommended).
pipeline_str = (
    f"filesrc location={INPUT_VIDEO} ! decodebin3 ! "
    f"videoconvert ! "
    f"gvadetect model={MODEL_XML} device=GPU "
    f"threshold=0.4 ! queue ! "
    f"gvawatermark ! videoconvert ! video/x-raw,format=I420 ! "
    f"openh264enc ! h264parse ! "
    f"mp4mux ! filesink name=sink location=output_dlstreamer.mp4"
)
pipeline = Gst.parse_launch(pipeline_str)

def on_buffer(pad, info):
    # Read the detection regions that gvadetect attached to each buffer.
    buf = info.get_buffer()
    caps = pad.get_current_caps()
    frame = VideoFrame(buf, caps=caps)
    for region in frame.regions():
        label = region.label()
        print(f"  [PPE] {label} conf={region.confidence():.2f}", flush=True)
    return Gst.PadProbeReturn.OK

sink = pipeline.get_by_name("sink")
sink_pad = sink.get_static_pad("sink")
sink_pad.add_probe(Gst.PadProbeType.BUFFER, on_buffer)

pipeline.set_state(Gst.State.PLAYING)

# Block until the stream finishes or errors.
bus = pipeline.get_bus()
bus.timed_pop_filtered(
    Gst.CLOCK_TIME_NONE,
    Gst.MessageType.EOS | Gst.MessageType.ERROR,
)
pipeline.set_state(Gst.State.NULL)
```
### Try It on a Sample Video
The `export_and_quantize.sh` script downloads `test_video.avi` automatically.
Run the DLStreamer sample above.
The buffer probe prints one line per detected safety item per frame.
Expected console output (representative):
```
[PPE] safety_helmet conf=0.87
[PPE] safety_jacket conf=0.82
```
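These log lines are easy to post-process; for example, a small parser (illustrative, not part of the sample) can turn captured console output back into (label, confidence) pairs for downstream checks:

```python
import re

# Matches lines like: "  [PPE] safety_helmet conf=0.87"
PPE_LINE = re.compile(r"\[PPE\]\s+(\w+)\s+conf=([0-9.]+)")

def parse_ppe_lines(text):
    """Extract (label, confidence) pairs from probe console output."""
    return [(m.group(1), float(m.group(2))) for m in PPE_LINE.finditer(text)]

log = """  [PPE] safety_helmet conf=0.87
  [PPE] safety_jacket conf=0.82"""
print(parse_ppe_lines(log))  # [('safety_helmet', 0.87), ('safety_jacket', 0.82)]
```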
The annotated video is saved to `output_dlstreamer.mp4` with bounding boxes drawn by `gvawatermark` around each detected `safety_helmet` and `safety_jacket`.
Known warning: The `openh264enc` element prints `[OpenH264] this = 0x..., Error:CWelsH264SVCEncoder::EncodeFrame(), cmInitParaError.` on the first frame. This is a benign initialization message from the OpenH264 library's internal logging; it does not indicate a real error, and the output video is encoded correctly.
## Expected Output
Device targets:
- `device=GPU` -- default in the sample code.
- `device=CPU` -- change `device=GPU` to `device=CPU`.
- `device=NPU` -- change `device=GPU` to `device=NPU`; use `batch-size=1` and `nireq=4` for best NPU utilization.
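The device switch can be captured in a small helper that assembles the sample's pipeline string for a chosen target. This is a sketch using the element properties from the sample above; the helper itself is not part of DLStreamer:

```python
def build_pipeline_str(model_xml, video, device="GPU", threshold=0.4):
    """Assemble the sample's GStreamer pipeline string for one inference device."""
    # NPU runs best with batch-size=1 and several parallel inference requests.
    npu_opts = "batch-size=1 nireq=4 " if device == "NPU" else ""
    return (
        f"filesrc location={video} ! decodebin3 ! videoconvert ! "
        f"gvadetect model={model_xml} device={device} {npu_opts}"
        f"threshold={threshold} ! queue ! "
        "gvawatermark ! videoconvert ! video/x-raw,format=I420 ! "
        "openh264enc ! h264parse ! mp4mux ! "
        "filesink name=sink location=output_dlstreamer.mp4"
    )

print(build_pipeline_str("model.xml", "test_video.avi", device="NPU"))
```

The string can be passed straight to `Gst.parse_launch()` in place of the hard-coded `pipeline_str` in the sample.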
## License
Copyright (C) Intel Corporation. All rights reserved. Licensed under the MIT License. See LICENSE for details.
