Complete README.md for the YOLOTL model

Fully Autonomous System: This model only recognizes lanes. It cannot be used to build a fully autonomous driving system on its own.

Changes in Camera Setup: The model's Bird's-Eye-View (BEV) transformation logic is calibrated for a specific camera position and angle, stored in the `bev_params_y_5.npz` file. If the camera's mounting position or angle is changed, the coordinate transformation will be inaccurate, severely degrading model performance.
## Bias, Risks, and Limitations

This model was designed for a specific purpose and environment, and thus has the following biases and limitations:

* **Data Bias:** The model was trained exclusively on data filmed on **'The International University Student EV Autonomous Driving Competition' track**. Consequently, its performance may degrade significantly on public roads with different lane shapes, colors, or lighting conditions.
* **Environmental Dependency:** It is biased towards clear weather and specific lighting conditions. Its lane recognition accuracy may decrease in rain, darkness, or strong backlight.
* **Hardware Dependency:** The model's core logic, the Bird's-Eye-View (BEV) transform, is highly dependent on a **fixed camera setup** (position and angle) defined in `bev_params_y_5.npz`. Any change to the camera's position or angle will invalidate the coordinate system and cause the model to fail.

### Recommendations
To mitigate these risks and limitations, we recommend the following:

* **Restricted Use:** This model is intended for use in environments similar to the training data (i.e., the competition track). It is not suitable for general road driving or other projects.
* **BEV Parameter Recalibration:** If the vehicle's camera is reinstalled or its position is altered, you **must** recalibrate the parameters for the BEV transformation (a recalibration sketch follows this list).
* **Safety Mechanisms:** When applying this model to a physical vehicle, it is crucial to implement safety mechanisms such as a **manual override system** or an **emergency stop system** to handle prediction failures.

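To make the recalibration requirement concrete, below is a minimal sketch of how new BEV parameters could be measured and saved after the camera is remounted. The key names (`src_points`, `dst_points`, `M`), the example pixel coordinates, and the 640x640 BEV size are illustrative assumptions; the actual layout of `bev_params_y_5.npz` is not documented in this card.

```python
# Hypothetical recalibration sketch. The keys stored in bev_params_y_5.npz are
# not documented here, so src_points / dst_points / M are assumed names.
import cv2
import numpy as np

# Four reference points on the road plane in the camera image (pixels),
# re-measured after the camera is remounted.
src_points = np.float32([[240, 380], [400, 380], [620, 470], [20, 470]])

# Where those points should land in the top-down (BEV) image.
dst_points = np.float32([[200, 0], [440, 0], [440, 640], [200, 640]])

# 3x3 homography used for the bird's-eye-view warp.
M = cv2.getPerspectiveTransform(src_points, dst_points)

np.savez("bev_params_y_5.npz", src_points=src_points, dst_points=dst_points, M=M)
```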
---

## Training Details

### Training Data

* **Dataset:** A custom dataset of driving images captured on 'The International University Student EV Autonomous Driving Competition' track.
* **Labeling:** The left and right lane areas in the images were labeled with **segmentation masks**.
### Training Procedure

* **Preprocessing:** Original images were transformed into 2D top-down Bird's-Eye-View (BEV) images using fixed parameters (`bev_params_y_5.npz`) before being used for training. This helps the model recognize lanes from a top-down perspective, facilitating distance calculation and path planning. A preprocessing and training sketch follows the hyperparameter list below.
* **Training Hyperparameters:**
    * model: `YOLOv8` (segmentation model)
    * img_size: `640`
    * conf_thres: `0.6`
    * iou_thres: `0.5`
    * epochs: `[Enter the number of epochs used for training here]`
    * batch_size: `[Enter the batch size used for training here]`
    * optimizer: `[Enter the optimizer used, e.g., AdamW]`

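As a rough illustration of the procedure above, the sketch below warps raw frames into the BEV view and then fine-tunes a YOLOv8 segmentation model with the listed hyperparameters. It assumes the npz file stores the warp matrix under the key `M`, a 640x640 BEV output, a `yolov8n-seg.pt` base checkpoint, and a hypothetical `lane_seg.yaml` dataset config; none of these names are confirmed by this card, and the epoch/batch values below are placeholders.

```python
# Minimal preprocessing + training sketch (assumptions noted in the comments).
import glob

import cv2
import numpy as np
from ultralytics import YOLO

params = np.load("bev_params_y_5.npz")
M = params["M"]  # assumed key for the 3x3 BEV warp matrix

# Warp every raw frame into the top-down BEV view used for training.
for path in glob.glob("raw_frames/*.jpg"):
    frame = cv2.imread(path)
    bev = cv2.warpPerspective(frame, M, (640, 640))
    cv2.imwrite(path.replace("raw_frames", "bev_frames"), bev)

# Fine-tune a YOLOv8 segmentation model on the BEV images.
model = YOLO("yolov8n-seg.pt")       # base checkpoint is an assumption
model.train(
    data="lane_seg.yaml",            # hypothetical dataset config
    imgsz=640,
    epochs=100,                      # placeholder; not specified in this card
    batch=16,                        # placeholder; not specified in this card
)
```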
---

## Evaluation
### Testing Data, Factors & Metrics

#### Testing Data

A separate dataset, captured from the same track environment but not used in training, was used for evaluation.
#### Metrics

**Intersection over Union (IoU)** was used as the primary metric to measure the overlap between the predicted lane masks and the ground truth masks.

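For reference, a small sketch of this IoU computation for binary lane masks represented as NumPy arrays:

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over Union between two binary masks of the same shape."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, gt).sum() / union)

# Toy 4x4 example: predicted lane covers columns 1-2, ground truth columns 2-3.
pred = np.zeros((4, 4), dtype=bool)
pred[:, 1:3] = True
gt = np.zeros((4, 4), dtype=bool)
gt[:, 2:4] = True
print(round(mask_iou(pred, gt), 3))  # 0.333 (4 overlapping pixels / 12 in union)
```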
### Results

* **mIoU (mean IoU):** `[Enter your final mIoU score on the test dataset here]`

---
## Technical Specifications

### Model Architecture and Objective

* **Architecture:** An **Instance Segmentation** model based on the YOLOv8 architecture.
* **Objective:** To detect lanes as objects within an image and to accurately segment (mask) the pixel area corresponding to each lane.

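A hedged end-to-end inference sketch combining the BEV preprocessing with the segmentation model is shown below. The `M` key, the 640x640 BEV size, and the weights filename `yolotl_lane_seg.pt` are assumptions; the `conf`/`iou` values follow the thresholds listed under Training Hyperparameters.

```python
# Minimal inference sketch: BEV warp followed by YOLOv8 segmentation.
import cv2
import numpy as np
from ultralytics import YOLO

params = np.load("bev_params_y_5.npz")
M = params["M"]                     # assumed key for the BEV warp matrix

model = YOLO("yolotl_lane_seg.pt")  # hypothetical weights filename

frame = cv2.imread("sample_frame.jpg")
bev = cv2.warpPerspective(frame, M, (640, 640))

# Thresholds taken from the hyperparameter list above.
results = model.predict(bev, imgsz=640, conf=0.6, iou=0.5)

masks = results[0].masks            # per-lane segmentation masks (None if nothing found)
if masks is not None:
    print("Detected", masks.data.shape[0], "lane instance(s)")
```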
### Compute Infrastructure

* **Hardware:** `[Enter the GPU (e.g., NVIDIA RTX 3080) or CPU used for training and inference here]`
* **Software:** `PyTorch`, `ultralytics`, `OpenCV`, `NumPy`, `ROS`

---
## Citation

If you find this model or code useful, please consider citing it as follows:

```bibtex
@misc{YourTeamName_YOLOTL_2025,
  author       = {[Your Team Name or Author Names]},
  title        = {YOLOv8 based Lane Segmentation Model for EV Autonomous Driving Competition},
  year         = {2025},
  publisher    = {Hugging Face},
  journal      = {Hugging Face repository},
  howpublished = {\url{[Paste the Hugging Face URL of your model here]}},
}
```

## Model Card Authors
Seungmin Lee

## Model Card Contact
Email: albert31115@gmail.com

GitHub: https://github.com/Highsky7
|