Alessandro Ferrante committed on
Commit
a8620ae
·
1 Parent(s): 968e3d6

Upload StreetSignSense YOLO12n model and metrics

.gitattributes CHANGED
@@ -33,3 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.csv filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,196 @@
---
language:
- en
license: cc-by-4.0
library_name: ultralytics
tags:
- real-time
- object-detection
- yolo
- yolov12
- traffic-signs
- autonomous-driving
- adas
datasets:
- AlessandroFerrante/StreetSignSet
metrics:
- mAP
- f1
- precision
- recall
pipeline_tag: object-detection
---
<div align="center">

# StreetSignSenseYOLO12n

[![Ultralytics 8.3.229](https://img.shields.io/badge/Ultralytics-8.3.229-lightblue?logo=ultralytics&logoColor=white)](https://github.com/ultralytics/ultralytics)
[![Ultralytics GitHub](https://img.shields.io/badge/Ultralytics-Github-darkgreen?logo=ultralytics&logoColor=white)](https://github.com/ultralytics/ultralytics)
[![Ultralytics YOLO12](https://img.shields.io/badge/Ultralytics-YOLO12-8A2BE2?logo=ultralytics&logoColor=white)](https://github.com/sunsmarterjie/yolov12)

[![Python 3.11.13](https://img.shields.io/badge/Python-3.11.13-blue?logo=python&logoColor=white)](https://www.python.org/)
[![PyTorch 2.6.0](https://img.shields.io/badge/PyTorch-2.6.0-EE4C2C?logo=pytorch&logoColor=white)](https://pytorch.org/)
[![License MIT](https://img.shields.io/badge/License-MIT-green.svg)](LICENSE)
[![License CC BY 4.0](https://img.shields.io/badge/License-CC_BY_4.0-orange.svg)](LICENSE)

[![Project-StreetSignSense](https://img.shields.io/badge/Project-StreetSignSense-007bff.svg)](https://github.com/AlessandroFerrante/StreetSignSense/)
[![Badge Report PDF](https://img.shields.io/badge/📑-Technical_Report-white?logo=pdf&logoColor=white)](https://alessandroferrante.github.io/StreetSignSense/report/Report.pdf)

[![GitHub Release](https://img.shields.io/badge/GitHub-StreetSignSenseY12n-181717?logo=github)](https://github.com/AlessandroFerrante/StreetSignSense/releases)
[![GitHub Release](https://img.shields.io/badge/GitHub-StreetSignSenseY12s-181717?logo=github)](https://github.com/AlessandroFerrante/StreetSignSense/releases)
[![GitHub Release](https://img.shields.io/badge/GitHub-StreetSignSenseY12m-181717?logo=github)](https://github.com/AlessandroFerrante/StreetSignSense/releases)

[![Model-StreetSignSense](https://img.shields.io/badge/KaggleModel-StreetSignSenseY12n-20BEFF.svg?logo=kaggle&logoColor=white)](https://www.kaggle.com/models/ferrantealessandro/streetsignsensey12n/)
[![Model-StreetSignSense](https://img.shields.io/badge/KaggleModel-StreetSignSenseY12s-20BEFF.svg?logo=kaggle&logoColor=white)](https://www.kaggle.com/models/ferrantealessandro/streetsignsensey12s/)
[![Model-StreetSignSense](https://img.shields.io/badge/KaggleModel-StreetSignSenseY12m-20BEFF.svg?logo=kaggle&logoColor=white)](https://www.kaggle.com/models/ferrantealessandro/streetsignsensey12m/)

[![Model-StreetSignSense](https://img.shields.io/badge/HuggingFace-StreetSignSenseY12n-FFD21E.svg?logo=huggingface)](https://huggingface.co/AlessandroFerrante/StreetSignSenseY12n)
[![Model-StreetSignSense](https://img.shields.io/badge/HuggingFace-StreetSignSenseY12s-FFD21E.svg?logo=huggingface)](https://huggingface.co/AlessandroFerrante/StreetSignSenseY12s)
[![Model-StreetSignSense](https://img.shields.io/badge/HuggingFace-StreetSignSenseY12m-FFD21E.svg?logo=huggingface)](https://huggingface.co/AlessandroFerrante/StreetSignSenseY12m)

</div>

---
# Model Summary

**Street Sign Sense (YOLO12n)** is an object detection model designed to identify and classify traffic signs in real time. Based on the **YOLO12 Nano** architecture, this is the smallest and fastest version of the family, optimized for edge devices and mobile applications where speed is critical, making it suitable for Advanced Driver Assistance Systems (ADAS) research. It has been trained on the custom **Street Sign Set**, covering **63 distinct classes** of traffic signs.

## Usage
### Live Demo
You can test this model instantly in your browser without any setup:
👉 **[Interactive Web Demo](http://alessandroferrante.github.io/StreetSignSense)**

### Code Snippet (Python)
This model can be used with the Ultralytics framework or the official YOLOv12 repository. It takes an image as input and outputs bounding boxes with class labels and confidence scores.

```python
from ultralytics import YOLO

# Load the model
model = YOLO('path/to/streetsignsense-yolo12n.pt')  # Replace with the downloaded model path

# Run inference on an image
results = model.predict(source='path/to/image.jpg')

# Show results
results[0].show()
```

**Inputs:** RGB images of various resolutions (the model was trained at the standard YOLO resolution, e.g., 640x640).
**Outputs:** A list of `Results` objects containing bounding boxes (`xyxy`), class IDs, and confidence scores.
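Downstream code typically filters these outputs by confidence before acting on them. A minimal, framework-free sketch of that step (the tuples and class names below are illustrative stand-ins for the `boxes.xyxy`, `boxes.cls`, and `boxes.conf` fields of a `Results` object, not real model output):

```python
def filter_detections(detections, class_names, conf_threshold=0.5):
    """Keep detections above the confidence threshold and attach labels.

    detections: list of (x1, y1, x2, y2, class_id, confidence) tuples,
    mirroring the xyxy / cls / conf fields of an Ultralytics Results object.
    """
    kept = []
    for x1, y1, x2, y2, cls_id, conf in detections:
        if conf >= conf_threshold:
            kept.append({
                "box": (x1, y1, x2, y2),
                "label": class_names[cls_id],
                "confidence": conf,
            })
    return kept

# Illustrative detections and class map (not real model output)
names = {0: "stop", 1: "speed_limit_50"}
dets = [(10, 20, 110, 120, 0, 0.91), (200, 40, 260, 100, 1, 0.32)]
print(filter_detections(dets, names))  # only the 0.91-confidence "stop" survives
```

The threshold trades recall for precision; an ADAS pipeline would tune it per class rather than use a single global value.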
Repository layout:

```text
/StreetSignSenseY12n
├── .gitattributes
├── README.md
├── streetsignsense-yolo12n.pt
└── metrics/   # metrics image folder
```
## System

**Standalone Model:** Yes, this is a standalone object detection model.
**Input Requirements:** Standard RGB images. No specific metadata required.
**Downstream Dependencies:** The output (detected classes and locations) is intended to be used by decision-making logic in ADAS simulations or autonomous driving pipelines.

## Implementation requirements

**Hardware:** Training was performed on Kaggle Notebooks using NVIDIA GPUs (e.g., Tesla P100 or T4).
**Software:** PyTorch, Ultralytics YOLO framework.
**Compute:**

* **Training Time:** 5h 38m 40s on 2x T4 GPUs (total time depends on the number of epochs).
* **Inference:** Capable of real-time performance (5.4 ms per image, about 185 FPS) on modern GPUs.
# Model Characteristics

## Model initialization

**Fine-tuned:** The model was initialized with pre-trained COCO weights (transfer learning) and then fine-tuned on the **Street Sign Set** dataset to specialize in traffic sign detection.

## Model stats

**Architecture:** YOLO12n (Nano).
**Characteristics:** Ultra-lightweight model with minimal latency.
**Size:** Smallest size and lowest parameter count of the family. Ideal for deployment on resource-constrained hardware (e.g., Raspberry Pi, mobile).

## Other details

**Precision:** Trained using Automatic Mixed Precision (AMP).
**Pruning/Quantization:** The uploaded weights are standard FP32/FP16. No post-training quantization has been applied yet.
# Data Overview

## Training data

The model was trained on the **Street Sign Set** (available on Kaggle).

* **Source:** A combination of public datasets and manually collected/annotated images.
* **Size:** Thousands of images with bounding box annotations.
* **Classes:** 63 specific traffic sign classes (speed limits, warnings, prohibitions, etc.).
* **Preprocessing:** Images were resized, and data augmentation (Mosaic, scaling, color adjustments) was applied during training to improve robustness.
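Resizing for YOLO training is normally done with aspect-preserving letterboxing: scale the image to fit the target square, then pad the shorter side. A small sketch of that geometry, assuming the standard 640x640 input (illustrative, not the project's actual pipeline code):

```python
def letterbox_params(width, height, target=640):
    """Return the scale and per-side padding that fit an image into a
    target square while preserving aspect ratio (YOLO-style letterboxing)."""
    scale = min(target / width, target / height)
    new_w, new_h = round(width * scale), round(height * scale)
    pad_x = (target - new_w) / 2  # horizontal padding on each side
    pad_y = (target - new_h) / 2  # vertical padding on each side
    return scale, new_w, new_h, pad_x, pad_y

# A 1280x720 dashcam frame scales by 0.5 to 640x360, padded 140 px top and bottom
print(letterbox_params(1280, 720))  # (0.5, 640, 360, 0.0, 140.0)
```

Predicted boxes must be mapped back through the same scale and padding to recover coordinates in the original image.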
## Demographic groups

**N/A:** The dataset consists of street signs and environmental imagery. No human demographic data is involved or analyzed.

## Evaluation data

The dataset was split into:

* **Train:** 70-80%
* **Validation:** 10-20%
* **Test:** 10%

**Differences:** The test set contains unseen images from different environmental conditions to test generalization.
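A split like the one above can be made reproducible by assigning each image from a hash of its filename instead of random shuffling, so the assignment stays stable across runs. A minimal sketch (an assumed helper, not the project's actual split code), here using 70/20/10 fractions:

```python
import hashlib

def split_dataset(image_names, val_frac=0.2, test_frac=0.1):
    """Deterministically assign images to train/val/test buckets by
    hashing each filename into the range 0-99."""
    splits = {"train": [], "val": [], "test": []}
    for name in image_names:
        bucket = int(hashlib.md5(name.encode()).hexdigest(), 16) % 100
        if bucket < test_frac * 100:
            splits["test"].append(name)
        elif bucket < (test_frac + val_frac) * 100:
            splits["val"].append(name)
        else:
            splits["train"].append(name)
    return splits

# Hypothetical filenames; real counts will only approximate 70/20/10
names = [f"img_{i:04d}.jpg" for i in range(1000)]
parts = split_dataset(names)
print({k: len(v) for k, v in parts.items()})
```

Because assignment depends only on the filename, adding new images later never moves an existing image between splits.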
# Evaluation Results

* **Results Overview:**
![Results](metrics/results.png)
* **Confusion Matrix:**
![Confusion Matrix](metrics/confusion_matrix_normalized.png)

### Detailed Curves

| Precision-Recall | F1 Score |
| :----------------------------------: | :----------------------------------: |
| ![PR Curve](metrics/BoxPR_curve.png) | ![F1 Curve](metrics/BoxF1_curve.png) |
| **Precision** | **Recall** |
| ![P Curve](metrics/BoxP_curve.png) | ![R Curve](metrics/BoxR_curve.png) |

## Summary

The model achieves high mean Average Precision (mAP) on the test set, demonstrating strong capabilities in detecting small objects (traffic signs at a distance) and operating in varied lighting conditions.

* **Detailed Metrics:** Please refer to the training graphs (F1 score, precision-recall curve) included in the attached notebooks.
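mAP is built on Intersection-over-Union matching between predicted and ground-truth boxes: a prediction counts as a true positive only if its IoU with a ground-truth box exceeds a threshold. The core computation for `xyxy` boxes can be sketched as (an illustrative helper, not Ultralytics' implementation):

```python
def iou_xyxy(a, b):
    """Intersection-over-Union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two 10x10 boxes overlapping in a 5x5 region: 25 / (100 + 100 - 25)
print(iou_xyxy((0, 0, 10, 10), (5, 5, 15, 15)))  # ~0.143
```

mAP@0.5 uses a single IoU threshold of 0.5, while mAP@0.5:0.95 averages over thresholds from 0.5 to 0.95.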
## Subgroup evaluation results

Performance is generally consistent across major classes (e.g., speed limits, stop signs). However, classes with significantly fewer samples in the dataset may show slightly lower recall.

## Fairness

**Definition:** Fairness in this context is defined as the model's ability to detect signs regardless of background clutter or slight occlusion.
**Results:** The model shows robust performance in standard driving scenarios.

## Usage limitations

* **Lighting:** Performance may degrade in extreme low-light conditions (night without streetlights) or heavy weather (dense fog, heavy rain) if such conditions are not sufficiently represented in the training data.
* **Occlusion:** Signs that are more than 50% occluded may not be detected reliably.
* **Geography:** The model is trained primarily on European/international standard signs; it may not recognize signs from other regions that differ significantly in shape or color.

## Ethics

**Safety:** This model is for research and educational purposes (ADAS development). It should **not** be used as the sole system for controlling a real vehicle on public roads without extensive safety validation and redundancy.
**Privacy:** The dataset focuses on public street signs. Any incidental faces or license plates in the background are not the target of this model.

## 👨‍💻 Author

[Alessandro Ferrante](https://alessandroferrante.net)

Email: [streetsignsense@alessandroferrante.net](mailto:streetsignsense@alessandroferrante.net)
metrics/BoxF1_curve.png ADDED

Git LFS Details

  • SHA256: f6713e744493c7721ad3968aa72950abe65648d9bcb571118911abf0ef3d8ffe
  • Pointer size: 131 Bytes
  • Size of remote file: 599 kB
metrics/BoxPR_curve.png ADDED

Git LFS Details

  • SHA256: 178b9226e48da25f794b617edc8e91747e6ca65289ddf7bf5e1acd439a1851b6
  • Pointer size: 131 Bytes
  • Size of remote file: 274 kB
metrics/BoxP_curve.png ADDED

Git LFS Details

  • SHA256: f6c090ee5719800072a635e4ceb1f83a49748ebc4e009f88df7eaa1f4f32a38b
  • Pointer size: 131 Bytes
  • Size of remote file: 574 kB
metrics/BoxR_curve.png ADDED

Git LFS Details

  • SHA256: 9d9cf874199640cbd67964e0ed9a7766e08d91fc5fe17fcab3033d262ea22866
  • Pointer size: 131 Bytes
  • Size of remote file: 475 kB
metrics/confusion_matrix.png ADDED

Git LFS Details

  • SHA256: f938dea8beb44555e03f734fca658f6502e485d38cc59c047e75cf5d2f36e078
  • Pointer size: 131 Bytes
  • Size of remote file: 659 kB
metrics/confusion_matrix_normalized.png ADDED

Git LFS Details

  • SHA256: 5e3659d9679298ac4df1438393e606de75fa7bcc2a11387d90c4e1d105b82fd9
  • Pointer size: 131 Bytes
  • Size of remote file: 660 kB
metrics/labels.jpg ADDED

Git LFS Details

  • SHA256: c53afd942b881c9a0d0ad1226799d8b1272f7d6600c5519291a2c200597b5844
  • Pointer size: 131 Bytes
  • Size of remote file: 185 kB
metrics/results.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0984df3d1f385aeba3d7e63394dfbf1213a2b8cbcfaf301ea132ea6d895a7a7b
+ size 43093
metrics/results.png ADDED

Git LFS Details

  • SHA256: 1590e0b975b35e5b8b4d409c8dd232484fc9034a893d1e2d3d46bd4bae28844a
  • Pointer size: 131 Bytes
  • Size of remote file: 255 kB
streetsignsense-yolo12n.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:04ef39e2befa4e566cc99c394f58a9ff6b32c457869f5641e73e9ae8e41690c5
+ size 5583123