Update README.md

README.md CHANGED

@@ -20,7 +20,7 @@ This modelcard aims to be a base template for new models. It has been generated
 - **Model type:** Image segmentation
 - **License:** MIT

-### Model Sources [optional]
+### Model Sources

 https://github.com/Highsky7/YOLOTL

@@ -59,8 +59,7 @@ cv2.imshow("Lane Detection Result", result_plot)
 cv2.waitKey(0)
 cv2.destroyAllWindows()

-
-### Downstream Use [optional]
+### Downstream Use

 The output of this model (lane masks) can be used as a key input for a larger autonomous driving system. For example, the `roboflow_final.py` script performs the following downstream tasks:

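The downstream tasks themselves are listed past the end of this hunk. As a rough illustration of the kind of first step a consumer of the lane masks might take, here is a minimal sketch; the `lane_center_offset` helper and the toy mask are illustrative assumptions, not code from `roboflow_final.py`:

```python
import numpy as np

def lane_center_offset(mask: np.ndarray) -> float:
    """Estimate the lateral offset (in pixels) of the lane centre from the
    image centre, given a binary BEV lane mask of shape (H, W).

    Positive means the lane centre lies right of the image centre.
    NOTE: a hypothetical helper for illustration, not the repository's code.
    """
    ys, xs = np.nonzero(mask)            # pixels belonging to either lane
    if xs.size == 0:
        raise ValueError("empty lane mask")
    lane_center = xs.mean()              # crude centre: mean x of all lane pixels
    return float(lane_center - mask.shape[1] / 2)

# Toy 6x8 mask: left lane in column 1, right lane in column 6.
mask = np.zeros((6, 8), dtype=np.uint8)
mask[:, 1] = 1
mask[:, 6] = 1
offset = lane_center_offset(mask)        # (1+6)/2 = 3.5, centre = 4.0 -> -0.5
```

A real controller would compute this per image row to fit a centreline, but the idea (mask in, steering signal out) is the same.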
@@ -106,6 +105,7 @@ To mitigate these risks and limitations, we recommend the following:

 * **Dataset:** A custom dataset of driving images captured on 'The International University Student EV Autonomous Driving Competition' track.
 * **Labeling:** The left and right lane areas in the images were labeled with **segmentation masks**.
+
 ### Training Procedure

 * **Preprocessing:** Original images were transformed into 2D top-down Bird's-Eye-View (BEV) images using fixed parameters (`bev_params_y_5.npz`) before being used for training. This helps the model recognize lanes from a top-down perspective, facilitating distance calculation and path planning.

@@ -137,8 +137,7 @@ A separate dataset, captured from the same track environment but not used in training
 * **mIoU (mean IoU):** `[Enter your final mIoU score on the test dataset here]`
 ---

-
-## Technical Specifications [optional]
+## Technical Specifications

 ### Model Architecture and Objective

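The mIoU score is left as a placeholder in the card. For reference, one common way to compute mean IoU for a segmentation model is the following generic sketch; the `miou` helper is an assumption, not the repository's evaluation code:

```python
import numpy as np

def miou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean IoU over classes, skipping classes absent from both arrays.

    `pred` and `target` hold integer class ids per pixel.
    NOTE: a generic sketch, not the repository's evaluation code.
    """
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                     # class appears in pred or target
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([[0, 0, 1],
                 [0, 2, 2]])
target = np.array([[0, 0, 2],
                   [0, 2, 2]])
score = miou(pred, target, num_classes=3)   # (1 + 0 + 2/3) / 3 = 5/9
```

For a two-lane mask, `num_classes` would cover background plus the lane classes.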
@@ -152,8 +151,7 @@ A separate dataset, captured from the same track environment but not used in training

 ---

-
-## Citation [optional]
+## Citation

 If you find this model or code useful, please consider citing it as follows:
 ```bibtex