Highsky7 committed
Commit c752666 · verified · 1 Parent(s): 6a6d2f5

Complete README.md for the YOLOTL model

Files changed (1)
  1. README.md +45 -96

README.md CHANGED
@@ -82,145 +82,94 @@ Fully Autonomous System: This model only recognizes lanes. It cannot be used to

  Changes in Camera Setup: The model's Bird's-Eye-View (BEV) transformation logic is calibrated for a specific camera position and angle, stored in the bev_params_y_5.npz file. If the camera's mounting position or angle is changed, the coordinate transformation will be inaccurate, severely degrading model performance.

- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]

  ### Recommendations

- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]

  ## Training Details

  ### Training Data

- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
  ### Training Procedure

- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]

  ## Evaluation

- <!-- This section describes the evaluation protocols and provides the results. -->
-
  ### Testing Data, Factors & Metrics

  #### Testing Data

- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]

  #### Metrics

- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]

  ### Results

- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]

  ## Technical Specifications [optional]

  ### Model Architecture and Objective

- [More Information Needed]

  ### Compute Infrastructure

- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]

  ## Citation [optional]

- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]

  ## Model Card Contact

- [More Information Needed]
 
 

  Changes in Camera Setup: The model's Bird's-Eye-View (BEV) transformation logic is calibrated for a specific camera position and angle, stored in the bev_params_y_5.npz file. If the camera's mounting position or angle is changed, the coordinate transformation will be inaccurate, severely degrading model performance.

+ ## Bias, Risks, and Limitations
+
+ This model was designed for a specific purpose and environment, and thus has the following biases and limitations:
+
+ * **Data Bias:** The model was trained exclusively on data filmed on **'The International University Student EV Autonomous Driving Competition' track**. Consequently, its performance may degrade significantly on public roads with different lane shapes, colors, or lighting conditions.
+ * **Environmental Dependency:** It is biased towards clear weather and specific lighting conditions. Its lane recognition accuracy may decrease in rain, darkness, or strong backlight.
+ * **Hardware Dependency:** The model's core logic, the Bird's-Eye-View (BEV) transform, is highly dependent on a **fixed camera setup** (position and angle) defined in `bev_params_y_5.npz`. Any change to the camera's position or angle will invalidate the coordinate system and cause the model to fail.

  ### Recommendations

+ To mitigate these risks and limitations, we recommend the following:
+
+ * **Restricted Use:** This model is intended for use in environments similar to the training data (i.e., the competition track). It is not suitable for general road driving or other projects.
+ * **BEV Parameter Recalibration:** If the vehicle's camera is reinstalled or its position is altered, you **must** recalibrate the parameters for the BEV transformation; a sketch of one way to do this follows this list.
+ * **Safety Mechanisms:** When applying this model to a physical vehicle, it is crucial to implement safety mechanisms such as a **manual override system** or an **emergency stop system** to handle prediction failures.
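+
+ The repository does not ship a calibration script, so the following is a minimal recalibration sketch, assuming the BEV transform is a single 3x3 perspective homography saved in `.npz` form. The four point pairs and the key name `M` are illustrative assumptions, not the actual contents of `bev_params_y_5.npz`.
+
+ ```python
+ import cv2
+ import numpy as np
+
+ # Four pixel positions in the camera image (e.g., corners of a known
+ # rectangle marked on the road) and the top-down pixel positions they
+ # should map to. Measure these for the new camera mounting.
+ src_pts = np.float32([[220, 480], [420, 480], [600, 640], [40, 640]])
+ dst_pts = np.float32([[0, 0], [640, 0], [640, 640], [0, 640]])
+
+ # 3x3 perspective homography for the BEV warp.
+ M = cv2.getPerspectiveTransform(src_pts, dst_pts)
+
+ # The key name 'M' is an assumption for this sketch; inspect the shipped
+ # file with np.load("bev_params_y_5.npz").files to see its real keys.
+ np.savez("bev_params_y_5.npz", M=M)
+ ```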
+
+ ---

  ## Training Details

  ### Training Data

+ * **Dataset:** A custom dataset of driving images captured on 'The International University Student EV Autonomous Driving Competition' track.
+ * **Labeling:** The left and right lane areas in the images were labeled with **segmentation masks**.

  ### Training Procedure

+ * **Preprocessing:** Original images were transformed into top-down Bird's-Eye-View (BEV) images using fixed parameters (`bev_params_y_5.npz`) before being used for training. This lets the model recognize lanes from a top-down perspective, which simplifies distance calculation and path planning. A sketch of this step, together with the training call, follows this list.
+ * **Training Hyperparameters:**
+     * model: `YOLOv8` (segmentation model)
+     * img_size: `640`
+     * conf_thres: `0.6`
+     * iou_thres: `0.5`
+     * epochs: `[Enter the number of epochs used for training here]`
+     * batch_size: `[Enter the batch size used for training here]`
+     * optimizer: `[Enter the optimizer used, e.g., AdamW]`
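+
+ As a concrete illustration, here is a minimal sketch of the preprocessing and training pipeline using the `ultralytics` API. The npz key `M`, the directory names, the dataset YAML name, and the epoch/batch values are assumptions for this sketch, not values confirmed by the authors.
+
+ ```python
+ from pathlib import Path
+
+ import cv2
+ import numpy as np
+ from ultralytics import YOLO
+
+ # --- Preprocessing: warp raw frames into BEV images ------------------
+ # Assumes the homography is stored under key 'M'; check params.files.
+ params = np.load("bev_params_y_5.npz")
+ M = params["M"]
+
+ for img_path in Path("raw_images").glob("*.jpg"):
+     frame = cv2.imread(str(img_path))
+     bev = cv2.warpPerspective(frame, M, (640, 640))  # top-down view
+     cv2.imwrite(f"bev_images/{img_path.name}", bev)
+
+ # --- Training: fine-tune a YOLOv8 segmentation model -----------------
+ # 'lane_seg.yaml' would point at the BEV images and lane masks; the
+ # epochs and batch size here are placeholders, not the card's settings.
+ model = YOLO("yolov8n-seg.pt")
+ model.train(data="lane_seg.yaml", imgsz=640, epochs=100, batch=16)
+ ```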
 
 
 
+
+ ---

  ## Evaluation

  ### Testing Data, Factors & Metrics

  #### Testing Data

+ A separate dataset, captured from the same track environment but not used in training, was used for evaluation.
 
 
 
 
 
 
 
 

  #### Metrics

+ **Intersection over Union (IoU)** was used as the primary metric to measure the overlap between the predicted lane masks and the ground-truth masks; a sketch of the computation follows below.
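+
+ The card does not include the evaluation script, so the following is a minimal sketch of mask IoU and mIoU under their usual definitions; the function and variable names are illustrative, not the authors' code.
+
+ ```python
+ import numpy as np
+
+ def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
+     """IoU between two binary masks of the same shape."""
+     pred, gt = pred.astype(bool), gt.astype(bool)
+     union = np.logical_or(pred, gt).sum()
+     if union == 0:          # both masks empty: treat as a perfect match
+         return 1.0
+     return float(np.logical_and(pred, gt).sum() / union)
+
+ # Toy example: two 4x4 masks that overlap in 2 of 6 marked pixels.
+ pred = np.zeros((4, 4), dtype=np.uint8)
+ pred[1:3, 1:3] = 1
+ gt = np.zeros((4, 4), dtype=np.uint8)
+ gt[2:4, 1:3] = 1
+ print(mask_iou(pred, gt))   # 2 intersection / 6 union = ~0.333
+
+ # mIoU over a test set is the mean of the per-mask IoUs:
+ # miou = float(np.mean([mask_iou(p, g) for p, g in zip(preds, gts)]))
+ ```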
 
 

  ### Results

+ * **mIoU (mean IoU):** `[Enter your final mIoU score on the test dataset here]`
+
+ ---
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 

  ## Technical Specifications [optional]

  ### Model Architecture and Objective

+ * **Architecture:** An **instance segmentation** model based on the YOLOv8 architecture.
+ * **Objective:** To detect lanes as objects within an image and to accurately segment (mask) the pixel area corresponding to each lane. A minimal inference sketch follows this list.
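+
+ For reference, a minimal inference sketch using the `ultralytics` API with the card's `conf_thres`/`iou_thres` values (0.6 and 0.5); the weight filename and input image are assumptions for this sketch.
+
+ ```python
+ from ultralytics import YOLO
+
+ # Load the trained segmentation weights (filename assumed for this sketch).
+ model = YOLO("yolotl_lane_seg.pt")
+
+ # Run prediction on a BEV-warped frame with the card's thresholds.
+ results = model.predict("frame_bev.jpg", conf=0.6, iou=0.5)
+
+ r = results[0]
+ if r.masks is not None:
+     # (num_instances, H, W) array of per-lane masks, one per detection.
+     lane_masks = r.masks.data.cpu().numpy()
+     print("detected lanes:", lane_masks.shape[0])
+ else:
+     print("no lanes detected")
+ ```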

  ### Compute Infrastructure

+ * **Hardware:** `[Enter the GPU (e.g., NVIDIA RTX 3080) or CPU used for training and inference here]`
+ * **Software:** `PyTorch`, `ultralytics`, `OpenCV`, `NumPy`, `ROS`
 
+
+ ---

  ## Citation [optional]

+ If you find this model or code useful, please consider citing it as follows:
+
+ ```bibtex
+ @misc{YourTeamName_YOLOTL_2025,
+   author       = {[Your Team Name or Author Names]},
+   title        = {YOLOv8 based Lane Segmentation Model for EV Autonomous Driving Competition},
+   year         = {2025},
+   publisher    = {Hugging Face},
+   journal      = {Hugging Face repository},
+   howpublished = {\url{[Paste the Hugging Face URL of your model here]}},
+ }
+ ```
+
+ ## Model Card Authors
 
 
 
 
 
 
 
 
 
 

+ Seungmin Lee


  ## Model Card Contact

+ Email: albert31115@gmail.com
+ GitHub: https://github.com/Highsky7