---

language:
- en
license: cc-by-4.0
library_name: ultralytics
tags:
- real-time
- object-detection
- yolo
- yolov12
- traffic-signs
- autonomous-driving
- adas
datasets:
- AlessandroFerrante/StreetSignSet
metrics:
- mAP
- f1
- precision
- recall
pipeline_tag: object-detection
---

<div align="center">

# StreetSignSenseYOLO12s

[![Ultralytics  8.3.229 ](https://img.shields.io/badge/Ultralytics-8.3.229-lightblue?logo=ultralytics&logoColor=white)](https://github.com/ultralytics/ultralytics)
[![Ultralytics Github](https://img.shields.io/badge/Ultralytics-Github-darkgreen?logo=ultralytics&logoColor=white)](https://github.com/ultralytics/ultralytics)
[![Ultralytics YOLO12](https://img.shields.io/badge/Ultralytics-YOLO12-8A2BE2?logo=ultralytics&logoColor=white)](https://github.com/sunsmarterjie/yolov12)

[![Python 3.11.13 ](https://img.shields.io/badge/Python-3.11.13-blue?logo=python&logoColor=white)](https://www.python.org/)
[![PyTorch 2.6.0](https://img.shields.io/badge/PyTorch-2.6.0-EE4C2C?logo=pytorch&logoColor=white)](https://pytorch.org/)
[![License](https://img.shields.io/badge/License-MIT-green.svg?)](LICENSE)
[![License](https://img.shields.io/badge/License-CC_BY_4.0-orange.svg?)](LICENSE)

[![Project-StreetSignSense](https://img.shields.io/badge/Project-StreetSignSense-007bff.svg)](https://github.com/AlessandroFerrante/StreetSignSense/) [![Badge Report PDF](https://img.shields.io/badge/πŸ“‘-Technical_Report-white?logo=pdf&logoColor=white)](https://alessandroferrante.github.io/StreetSignSense/report/Report.pdf)

[![GitHub Release](https://img.shields.io/badge/GitHub-StreetSignSenseY12n-181717?logo=github)](https://github.com/AlessandroFerrante/StreetSignSense/releases)
[![GitHub Release](https://img.shields.io/badge/GitHub-StreetSignSenseY12s-181717?logo=github)](https://github.com/AlessandroFerrante/StreetSignSense/releases)
[![GitHub Release](https://img.shields.io/badge/GitHub-StreetSignSenseY12m-181717?logo=github)](https://github.com/AlessandroFerrante/StreetSignSense/releases)

[![Model-StreetSignSense](https://img.shields.io/badge/KaggleModel-StreetSignSenseY12n-20BEFF.svg?logo=kaggle&logoColor=white)](https://www.kaggle.com/models/ferrantealessandro/streetsignsensey12n/)
[![Model-StreetSignSense](https://img.shields.io/badge/KaggleModel-StreetSignSenseY12s-20BEFF.svg?logo=kaggle&logoColor=white)](https://www.kaggle.com/models/ferrantealessandro/streetsignsensey12s/)
[![Model-StreetSignSense](https://img.shields.io/badge/KaggleModel-StreetSignSenseY12m-20BEFF.svg?logo=kaggle&logoColor=white)](https://www.kaggle.com/models/ferrantealessandro/streetsignsensey12m/)

[![Model-StreetSignSense](https://img.shields.io/badge/HuggingFace-StreetSignSenseY12n-FFD21E.svg?logo=huggingface)](https://huggingface.co/AlessandroFerrante/StreetSignSenseY12n)
[![Model-StreetSignSense](https://img.shields.io/badge/HuggingFace-StreetSignSenseY12s-FFD21E.svg?logo=huggingface)](https://huggingface.co/AlessandroFerrante/StreetSignSenseY12s)
[![Model-StreetSignSense](https://img.shields.io/badge/HuggingFace-StreetSignSenseY12m-FFD21E.svg?logo=huggingface)](https://huggingface.co/AlessandroFerrante/StreetSignSenseY12m)

</div>

---

# Model Summary

**Street Sign Sense (YOLO12s)** is an object detection model designed to identify and classify traffic signs in real time. Based on the **YOLO12 Small** architecture, it occupies a sweet spot between the speed of the Nano version and the accuracy of the Medium version, balancing accuracy with computational efficiency in a way that makes it suitable for Advanced Driver Assistance Systems (ADAS) research. It was trained on the custom **Street Sign Set**, covering **63 distinct classes** of traffic signs.

## Usage

### Live Demo

You can test this model instantly in your browser without any setup: πŸ‘‰ **[Interactive Web Demo](http://alessandroferrante.github.io/StreetSignSense)**

### Python

This model can be used with the Ultralytics framework or the official YOLOv12 repository. It takes an image as input and outputs bounding boxes with class labels and confidence scores.

```python
from ultralytics import YOLO

# Load the model
model = YOLO('path/to/streetsignsense-yolo12s.pt')  # Replace with the downloaded model path

# Run inference on an image
results = model.predict(source='path/to/image.jpg')

# Show results
results[0].show()
```

**Inputs:** Images (RGB) of various resolutions (model trained at standard YOLO resolutions, e.g., 640x640).
**Outputs:** List of `Results` objects containing bounding boxes (`xyxy`), class IDs, and confidence scores.
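For downstream use, the detections can be read off each `Results` object via its `boxes` attribute (`boxes.xyxy`, `boxes.cls`, `boxes.conf` in the Ultralytics API). Below is a minimal post-processing sketch using plain Python lists in place of those tensors; the `filter_detections` helper and the class names are illustrative placeholders, not part of the model or its real 63-class list:

```python
def filter_detections(boxes_xyxy, class_ids, confidences, names, conf_threshold=0.5):
    """Keep detections above a confidence threshold and attach class names."""
    detections = []
    for (x1, y1, x2, y2), cls_id, conf in zip(boxes_xyxy, class_ids, confidences):
        if conf >= conf_threshold:
            detections.append({
                "label": names[cls_id],
                "confidence": round(conf, 3),
                "box": (x1, y1, x2, y2),
            })
    return detections

# Example with dummy values (labels are illustrative, not the real class list)
names = {0: "stop", 1: "speed_limit_50"}
dets = filter_detections(
    boxes_xyxy=[(10, 20, 110, 120), (200, 40, 260, 100)],
    class_ids=[0, 1],
    confidences=[0.91, 0.32],
    names=names,
)
# Only the stop sign survives the 0.5 threshold
```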

```text
/StreetSignSenseY12s
β”œβ”€β”€ .gitattributes
β”œβ”€β”€ README.md
β”œβ”€β”€ streetsignsense-yolo12s.pt
└── metrics/ # metrics image folder
```

## System

**Standalone Model:** Yes, this is a standalone object detection model.
**Input Requirements:** Standard RGB images. No specific metadata required.
**Downstream Dependencies:** The output (detected classes and locations) is intended to be used by decision-making logic in ADAS simulations or autonomous driving pipelines.

## Implementation requirements

**Hardware:** Training was performed on Kaggle Notebooks using NVIDIA GPUs (e.g., Tesla P100 or T4).
**Software:** PyTorch, Ultralytics YOLO framework.
**Compute:**

* **Training Time:** 7h 4m 25s on 2Γ— NVIDIA T4 GPUs (time varies with the number of epochs).
* **Inference:** Capable of real-time performance (12.6 ms, 79 FPS) on modern GPUs.
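
As a sanity check, the quoted throughput follows directly from the latency; a one-line sketch of the conversion:

```python
def fps_from_latency_ms(latency_ms):
    """Convert per-frame latency in milliseconds to frames per second."""
    return 1000.0 / latency_ms

# 12.6 ms per frame corresponds to roughly 79 FPS, matching the figure above.
fps = round(fps_from_latency_ms(12.6))
```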

# Model Characteristics

## Model initialization

**Fine-tuned:** The model was initialized with pre-trained COCO weights (Transfer Learning) and then fine-tuned on the "Street Sign Sense" dataset to specialize in traffic sign detection.

## Model stats

**Architecture:** YOLOv12s (Small).
**Characteristics:** Balanced architecture.
**Size:** Small size. It offers better feature extraction than the Nano version while maintaining very fast inference speeds.

## Other details

**Precision:** Trained using Mixed Precision (AMP).
**Pruning/Quantization:** The uploaded weights are standard FP32/FP16. No post-training quantization has been applied yet.

# Data Overview

## Training data

The model was trained on the **Street Sign Set** (available on Kaggle).

* **Source:** A combination of public datasets and manually collected/annotated images.
* **Size:** Contains thousands of images with bounding box annotations.
* **Classes:** 63 specific traffic sign classes (speed limits, warnings, prohibitions, etc.).
* **Preprocessing:** Images were resized, and data augmentation (Mosaic, scaling, color adjustments) was applied during training to improve robustness.
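
The resize step in YOLO pipelines is typically a letterbox transform: the image is scaled to fit the 640Γ—640 input while preserving aspect ratio, and the remainder is padded. A sketch of that geometry, assuming symmetric padding (the helper name is illustrative; Ultralytics applies this internally):

```python
def letterbox_params(width, height, target=640):
    """Compute scale and symmetric padding to fit an image into a square
    target while preserving aspect ratio (letterbox resize)."""
    scale = min(target / width, target / height)
    new_w, new_h = round(width * scale), round(height * scale)
    pad_x = (target - new_w) / 2  # padding on left/right
    pad_y = (target - new_h) / 2  # padding on top/bottom
    return scale, (new_w, new_h), (pad_x, pad_y)

# A 1280x720 dashcam frame scales by 0.5 to 640x360,
# with 140 px of vertical padding on each side.
scale, size, pad = letterbox_params(1280, 720)
```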

## Demographic groups

**N/A:** The dataset consists of street signs and environmental imagery. No human demographic data is involved or analyzed.

## Evaluation data

The dataset was split into:

* **Train:** 70-80%
* **Validation:** 10-20%
* **Test:** 10%

**Differences:** The test set contains unseen images from different environmental conditions to test generalization.

# Evaluation Results

* **Results Overview:**
  ![Results Small](metrics/results.png)
* **Confusion Matrix:**
  ![Confusion Matrix Small](metrics/confusion_matrix_normalized.png)

#### Detailed Curves (Small)

|             Precision-Recall             |                 F1 Score                 |
| :--------------------------------------: | :--------------------------------------: |
| ![PR Curve Small](metrics/BoxPR_curve.png) | ![F1 Curve Small](metrics/BoxF1_curve.png) |
|           **Precision**           |             **Recall**             |
|  ![P Curve Small](metrics/BoxP_curve.png)  |  ![R Curve Small](metrics/BoxR_curve.png)  |

## Summary

The model achieves high Mean Average Precision (mAP) on the test set, demonstrating strong capabilities in detecting small objects (traffic signs at a distance) and operating in varied lighting conditions.

* **Detailed Metrics:** Please refer to the training graphs (F1-score, Precision-Recall curve) included in the attached notebooks.

## Subgroup evaluation results

Performance is generally consistent across major classes (e.g., Speed Limits, Stop signs). However, classes with significantly fewer samples in the dataset may show slightly lower recall.

## Fairness

**Definition:** Fairness in this context is defined as the model's ability to detect signs regardless of background clutter or slight occlusions.
**Results:** The model shows robust performance in standard driving scenarios.

## Usage limitations

* **Lighting:** Performance may degrade in extreme low-light conditions (night without streetlights) or heavy weather (dense fog/heavy rain) if not sufficiently represented in the training data.
* **Occlusion:** Signs that are more than 50% occluded may not be detected reliably.
* **Geography:** The model is trained primarily on European/International standard signs; it may not recognize signs specific to other regions that differ significantly in shape or color.

## Ethics

**Safety:** This model is for research and educational purposes (ADAS development). It should **not** be used as the sole system for controlling a real vehicle on public roads without extensive safety validation and redundancy.
**Privacy:** The dataset focuses on public street signs. Any incidental faces or license plates in the background are not the target of this model.

## πŸ‘¨β€πŸ’» Author

[Alessandro Ferrante](https://alessandroferrante.net)

Email: [streetsignsense@alessandroferrante.net](mailto:streetsignsense@alessandroferrante.net)