---
license: apache-2.0
tags:
- object-detection
- yolo
- yolov8
- onnx
- infrastructure
- wastemanagement
- flooding
---
# YOLOv8 Object Detection Model

This is a custom-trained YOLOv8 nano model for detecting:
- **Flooding**
- **Potholes**
- **Waste Management**

The model was trained for 200 epochs on a custom dataset.

## Model Details

- **Architecture:** YOLOv8n (nano)
- **Input Size:** 640x640
- **Export Format:** ONNX
- **Training Dataset:** Custom dataset for 'Flooding', 'Potholes', 'Waste Management'
- **Metrics (best epoch 65, validation set):**
  - mAP50-95: 0.393
  - mAP50: 0.606
  - Flooding mAP50: 0.797
  - Potholes mAP50: 0.430
  - Waste Management mAP50: 0.591

## Usage (Example)

This model is provided in ONNX format and can be used for inference in a variety of environments. Here is how you might load it with ONNX Runtime in Python:

```python
import onnxruntime as ort
import numpy as np
from PIL import Image

# Load the ONNX model (adjust the path to wherever you downloaded best.onnx)
session = ort.InferenceSession("Jin0908/muncipal-problem-detection/best.onnx")

# Prepare the input image: ensure 3-channel RGB, resize to 640x640, normalize
image = Image.open("your_image.jpg").convert("RGB").resize((640, 640))
input_array = np.array(image) / 255.0                                 # normalize to [0, 1]
input_array = np.transpose(input_array, (2, 0, 1))                    # (H, W, C) -> (C, H, W)
input_array = np.expand_dims(input_array, axis=0).astype(np.float32)  # add batch dimension

# Run inference ("images" is the default input name for YOLOv8 ONNX exports)
outputs = session.run(None, {"images": input_array})

# Process outputs: YOLOv8 ONNX output is typically shaped
# (batch_size, 4 + num_classes, num_boxes) and requires custom decoding.
print("Model output shape:", outputs[0].shape)
# Further processing involves decoding bounding boxes and non-max suppression (NMS).
```