---
base_model:
  - Ultralytics/YOLO11
---
# Helmet Detection using YOLO11

This model detects motorcycle riders wearing helmets and those without helmets.

**Classes:**
- With Helmet
- Without Helmet

**Framework:** Ultralytics YOLO11
**Task:** Object Detection
## Helmet Violation Detection: System Flowchart
```mermaid
flowchart TD
    A[Input Image / Video Frame] --> B[YOLO Scene Detection]
    B --> C1[Detect Persons]
    B --> C2[Detect Motorcycles]
    C1 --> D[Rider Matching Logic]
    C2 --> D
    D -->|Person overlaps Motorcycle| E[Identified as Rider]
    D -->|No overlap| F["Ignore (Pedestrian)"]
    E --> G[Crop Rider Head/Upper Body]
    G --> H[Helmet Detection Model]
    H -->|Helmet Detected| I[Safe Rider]
    H -->|No Helmet Detected| J[Helmet Violation]
    I --> K[Draw Green Box + Label]
    J --> L[Draw Red Box + Label]
    K --> M[Output Image with Results]
    L --> M
```
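The rider-matching step in the flowchart can be sketched as a simple box-overlap test. The `iou` helper and the 0.1 overlap threshold below are illustrative assumptions for this sketch, not the pipeline's actual matching rule:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def is_rider(person_box, motorcycle_boxes, threshold=0.1):
    """A person counts as a rider if their box overlaps any motorcycle box."""
    return any(iou(person_box, m) > threshold for m in motorcycle_boxes)

# Hypothetical detections for illustration
person = (100, 50, 160, 200)
bikes = [(90, 120, 200, 220)]
print(is_rider(person, bikes))               # overlapping boxes -> True
print(is_rider((400, 50, 460, 200), bikes))  # far-away pedestrian -> False
```

Persons that fail this check are treated as pedestrians and skipped; only matched riders are cropped and passed to the helmet model.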
## Quick Inference in Google Colab (Single Cell)

Copy and run the entire cell below in Google Colab:
```python
# ================== HELMET DETECTION - HF MODEL (ONE CELL) ==================
!pip install ultralytics -q

from ultralytics import YOLO
from google.colab import files
import cv2
import matplotlib.pyplot as plt

# Hugging Face model link (URL-encoded)
model_url = "https://huggingface.co/nnsohamnn/helmet-detection-yolo11/resolve/main/yolov11m%28100epochs%29.pt"

print("Loading model from Hugging Face...")
model = YOLO(model_url)
print("Model loaded!")

print("\nUpload images for helmet detection...")
uploaded = files.upload()

for filename in uploaded.keys():
    print(f"\nProcessing: {filename}")
    img = cv2.imread(filename)
    if img is None:
        print(f"Could not read image: {filename}")
        continue

    results = model(img, conf=0.35)[0]
    img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

    for box in results.boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        cls = int(box.cls[0])
        conf = float(box.conf[0])
        label = model.names[cls]

        # Color coding (RGB, since the image was converted above)
        if "without" in label.lower():
            color = (255, 0, 0)  # red: no helmet
        else:
            color = (0, 255, 0)  # green: helmet

        # Draw bounding box
        cv2.rectangle(img_rgb, (x1, y1), (x2, y2), color, 3)

        # Draw label background
        text = f"{label} {conf:.2f}"
        (w, h), _ = cv2.getTextSize(text, cv2.FONT_HERSHEY_SIMPLEX, 0.6, 2)
        y_text = y1 - 10 if y1 - 10 > 10 else y1 + h + 10
        cv2.rectangle(img_rgb, (x1, y_text - h - 4), (x1 + w, y_text), color, -1)

        # Put label text
        cv2.putText(img_rgb, text, (x1, y_text - 2),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 0), 2)

    plt.figure(figsize=(8, 8))
    plt.imshow(img_rgb)
    plt.axis("off")
    plt.show()

print("\nDetection complete!")
# ============================================================================
```
## Validation Metrics
| Class | Images | Instances | Precision (P) | Recall (R) | mAP@50 | mAP@50-95 |
|---|---|---|---|---|---|---|
| All Classes | 126 | 299 | 0.875 | 0.850 | 0.899 | 0.437 |
| With Helmet | 88 | 184 | 0.891 | 0.902 | 0.932 | 0.472 |
| Without Helmet | 53 | 115 | 0.860 | 0.798 | 0.866 | 0.402 |
### Metric Meaning

- **Precision:** How many predicted helmets/no-helmets were correct
- **Recall:** How many actual helmets/no-helmets were detected
- **mAP@50:** Detection accuracy at IoU 0.50
- **mAP@50-95:** Stricter overall detection quality, averaged over IoU 0.50 to 0.95
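To illustrate how the precision and recall values in the table arise, here is a toy computation from hypothetical true-positive, false-positive, and false-negative counts (the counts below are made up for illustration; they are not the model's actual confusion counts):

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical counts for one class: 90 correct detections,
# 10 spurious boxes, 15 missed instances.
p, r = precision_recall(tp=90, fp=10, fn=15)
print(f"precision={p:.3f}, recall={r:.3f}")  # precision=0.900, recall=0.857
```

A detection counts as a true positive only when its box overlaps a ground-truth box above the IoU threshold, which is why mAP@50-95 (averaged over stricter thresholds) is lower than mAP@50.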
## Summary

The model performs very well at detecting riders wearing helmets and moderately well at detecting riders without helmets, a class that is typically harder due to occlusion and pose variation.