---
base_model:
- Ultralytics/YOLO11
---

πŸͺ– Helmet Detection using YOLO11

This model detects motorcycle riders wearing helmets and those without helmets.

Classes:

  • With Helmet
  • Without Helmet

Framework: Ultralytics YOLO11
Task: Object Detection


🧠 Helmet Violation Detection β€” System Flowchart

```mermaid
flowchart TD
    A[πŸ“₯ Input Image / Video Frame] --> B[πŸ” YOLO Scene Detection]

    B --> C1[🧍 Detect Persons]
    B --> C2[🏍️ Detect Motorcycles]

    C1 --> D[πŸ“ Rider Matching Logic]
    C2 --> D

    D -->|Person overlaps Motorcycle| E[🏍️ Identified as Rider]
    D -->|No overlap| F["🚢 Ignore (Pedestrian)"]

    E --> G[βœ‚οΈ Crop Rider Head/Upper Body]
    G --> H[πŸͺ– Helmet Detection Model]

    H -->|Helmet Detected| I[🟒 Safe Rider]
    H -->|No Helmet Detected| J[πŸ”΄ Helmet Violation]

    I --> K[πŸ–ΌοΈ Draw Green Box + Label]
    J --> L[πŸ–ΌοΈ Draw Red Box + Label]

    K --> M[πŸ“€ Output Image with Results]
    L --> M
```
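The "Rider Matching Logic" step above can be sketched as a simple bounding-box overlap test. This is an illustrative helper, not code shipped with the model; the function name and the 0.1 IoU threshold are assumptions chosen for the sketch:

```python
def is_rider(person, moto, min_iou=0.1):
    """Return True if the person box overlaps the motorcycle box enough
    to treat the person as a rider. Boxes are (x1, y1, x2, y2) tuples."""
    # Intersection rectangle
    ix1 = max(person[0], moto[0])
    iy1 = max(person[1], moto[1])
    ix2 = min(person[2], moto[2])
    iy2 = min(person[3], moto[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return False  # no overlap: treat as pedestrian
    area_p = (person[2] - person[0]) * (person[3] - person[1])
    area_m = (moto[2] - moto[0]) * (moto[3] - moto[1])
    iou = inter / (area_p + area_m - inter)
    return iou >= min_iou
```

In practice the threshold would be tuned on real footage; a person standing next to a parked motorcycle can still produce a small overlap.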



πŸš€ Quick Inference in Google Colab (Single Cell)

Copy and run the entire cell below in Google Colab:

```python
# ================== HELMET DETECTION β€” HF MODEL (ONE CELL) ==================

!pip install ultralytics -q

from ultralytics import YOLO
from google.colab import files
import cv2
import matplotlib.pyplot as plt

# πŸ”Ή Hugging Face model link (URL-encoded)
model_url = "https://huggingface.co/nnsohamnn/helmet-detection-yolo11/resolve/main/yolov11m%28100epochs%29.pt"

print("πŸ“₯ Loading model from Hugging Face...")
model = YOLO(model_url)
print("βœ… Model loaded!")

print("\nπŸ“€ Upload images for helmet detection...")
uploaded = files.upload()

for filename in uploaded.keys():
    print(f"\nπŸ–ΌοΈ Processing: {filename}")

    img = cv2.imread(filename)
    if img is None:
        print(f"❌ Could not read image: {filename}")
        continue

    results = model(img, conf=0.35)[0]
    img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

    for box in results.boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        cls = int(box.cls[0])
        conf = float(box.conf[0])
        label = model.names[cls]

        # Color coding (RGB, since we draw on the converted image)
        if "without" in label.lower():
            color = (255, 0, 0)  # πŸ”΄ No Helmet
        else:
            color = (0, 255, 0)  # 🟒 Helmet

        # Draw bounding box
        cv2.rectangle(img_rgb, (x1, y1), (x2, y2), color, 3)

        # Draw label background
        text = f"{label} {conf:.2f}"
        (w, h), _ = cv2.getTextSize(text, cv2.FONT_HERSHEY_SIMPLEX, 0.6, 2)
        y_text = y1 - 10 if y1 - 10 > 10 else y1 + h + 10
        cv2.rectangle(img_rgb, (x1, y_text - h - 4), (x1 + w, y_text), color, -1)

        # Put label text
        cv2.putText(img_rgb, text, (x1, y_text - 2),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 0), 2)

    plt.figure(figsize=(8, 8))
    plt.imshow(img_rgb)
    plt.axis("off")
    plt.show()

print("\nβœ… Detection complete!")
# ============================================================================
```
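Beyond drawing boxes, it is often useful to tally safe riders versus violations across an image or a video. A minimal helper mirroring the cell's color-coding rule (the function name and return shape are illustrative, not part of the model):

```python
from collections import Counter

def summarize_detections(labels):
    """Count safe riders vs. helmet violations from a list of predicted
    class names, using the same 'without' substring rule as the demo cell."""
    counts = Counter()
    for label in labels:
        if "without" in label.lower():
            counts["violations"] += 1
        else:
            counts["safe"] += 1
    return dict(counts)
```

You would feed it `[model.names[int(b.cls[0])] for b in results.boxes]` from the loop above, then log or aggregate the counts per frame.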

πŸ“ˆ Validation Metrics

| Class | Images | Instances | Precision (P) | Recall (R) | mAP@50 | mAP@50-95 |
|---|---|---|---|---|---|---|
| All Classes | 126 | 299 | 0.875 | 0.850 | 0.899 | 0.437 |
| With Helmet | 88 | 184 | 0.891 | 0.902 | 0.932 | 0.472 |
| Without Helmet | 53 | 115 | 0.860 | 0.798 | 0.866 | 0.402 |

πŸ” Metric Meaning

  • Precision: How many predicted helmets/no-helmets were correct
  • Recall: How many actual helmets/no-helmets were detected
  • mAP@50: Detection accuracy at IoU 0.50
  • mAP@50-95: Stricter overall detection quality

πŸ“Œ Summary

The model performs strongly on riders wearing helmets (mAP@50 = 0.932) and moderately on riders without helmets (mAP@50 = 0.866), which is typically the harder class due to occlusion and pose variation.
