---
license: agpl-3.0
library_name: ultralytics
pipeline_tag: object-detection
tags:
  - yolo
  - yolo11
  - fire-detection
  - computer-vision
  - realtime
---

# YOLOv11n Fire Detector (ProFSAM)

- Paper: https://arxiv.org/abs/2510.21782
- Code: https://github.com/UEmmanuel5/ProFSAM
- Weights: `Fire_best.pt`

## Intended use

Bounding-box detection of fire; the predicted boxes are used to prompt SAM2/MobileSAM/TinySAM in the ProFSAM pipeline.
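As a minimal sketch of this handoff, the snippet below converts detector output (xyxy boxes plus confidence scores) into per-box prompts for a SAM-family model. The function name and confidence threshold are illustrative assumptions, not part of the ProFSAM repository:

```python
# Hypothetical glue code: turn YOLO detections into SAM box prompts.
# Names and the 0.25 threshold are illustrative, not from the repo.
def boxes_to_sam_prompts(boxes, scores, conf_thresh=0.25):
    """Keep boxes scoring at or above conf_thresh; each kept
    [x1, y1, x2, y2] box becomes one SAM box prompt."""
    return [box for box, score in zip(boxes, scores) if score >= conf_thresh]

boxes = [[10, 20, 110, 140], [5, 5, 30, 30]]  # two candidate fire boxes
scores = [0.91, 0.12]                          # second is low confidence
prompts = boxes_to_sam_prompts(boxes, scores)
print(prompts)  # [[10, 20, 110, 140]]
```

Each surviving box can then be passed as a box prompt to the chosen SAM variant for mask refinement.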

## Training data

FASDD subset restricted to the classes `fire` and `neither_firenorsmoke`. Total images used: 51,749 (12,550 fire, 39,199 neither_firenorsmoke).
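For reference, a dataset configuration for this two-class subset might look like the fragment below. The paths and class ordering are assumptions; the actual `data.yaml` used for training is not reproduced here:

```yaml
# Hypothetical data.yaml for the two-class FASDD subset.
# Paths and class indices are illustrative assumptions.
path: FASDD_CV_Fire
train: images/train
val: images/val
names:
  0: fire
  1: neither_firenorsmoke
```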

## Training setup (summary)

- PyTorch 2.0, CUDA 12.4; main GPU: GTX 1050 Ti (4 GB).
- Initialized from Ultralytics YOLOv11n weights, then trained for 100 epochs.

## Script used

```python
# train_yolo11n.py
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
from ultralytics import YOLO
import torch
torch.backends.cudnn.benchmark = True

model = YOLO("path/to/yolo11n.pt")
train_results = model.train(
    data="path/to/FASDD_CV_Fire/data.yaml",
    epochs=100,
    imgsz=640,
    batch=16,
    optimizer="AdamW",
    lr0=1e-4,
    lrf=0.01,
    dropout=0.15,
    weight_decay=5e-4,
    device=0,
    val=False,
    save=True,
    plots=False
)
```

## Detector metrics (FASDD fire-only subset)

| P     | R     | mAP@0.5 | mAP@0.5:0.95 |
|-------|-------|---------|--------------|
| 0.799 | 0.697 | 0.797   | 0.520        |
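The table reports precision and recall but not their harmonic mean; if useful, an F1 score can be derived from the values above:

```python
# F1 from the reported precision and recall (fire-only subset).
precision = 0.799
recall = 0.697

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))  # 0.745
```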

## Test data

If you do not have test images, four sample images from the Khan dataset are included for use during your testing phase.