VISION 25-26: Cone Detection for Formula Student
YOLO-based object detection system for autonomous Formula Student racing. Detects track boundary cones (yellow, blue, orange) for navigation and path planning.
Dataset Structure
The scripts expect a `train.yaml` file for YOLO training and validation. This file also points to the `dataset/` directory, which must contain subdirectories with the following structure:

```
dataset/
├── images/
│   ├── train/
│   ├── val/
│   └── test/
└── labels/
    ├── train/
    ├── val/
    └── test/
```
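A quick way to catch layout mistakes before training is a small sanity check. The sketch below assumes only the directory tree shown above; nothing repo-specific:

```python
from pathlib import Path

def check_dataset(root: str = "dataset") -> list[str]:
    """Return the expected subdirectories that are missing under root."""
    expected = [
        Path(root) / kind / split
        for kind in ("images", "labels")
        for split in ("train", "val", "test")
    ]
    return [str(p) for p in expected if not p.is_dir()]

missing = check_dataset()
if missing:
    print("Missing directories:", missing)
```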
Classes:
- 0: Yellow cone
- 1: Blue cone
- 2: Orange cone (includes large orange cones)
Note: Unknown/damaged cones are removed during preprocessing to avoid label noise.
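These class IDs map directly into the dataset YAML. As an illustration, a `train.yaml` following the Ultralytics dataset-YAML convention might look like this (the absolute path and the exact class-name strings are placeholders, not taken from this repo):

```yaml
path: /absolute/path/to/dataset  # dataset root (an absolute path is safest)
train: images/train
val: images/val
test: images/test

names:
  0: yellow_cone
  1: blue_cone
  2: orange_cone
```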
Quick Start
Set up the environment:

```
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

Download and organize your dataset into the `dataset/` folder as described above.

Review and modify the `train.yaml` file to ensure it points to the correct dataset paths and class definitions. Prefer an absolute path for the `path` entry of the data directory: Ultralytics does some behind-the-scenes path resolution when the path is relative, which can cause issues.

Execute the preprocessing scripts:
- The `preprocess_images.py` script can be used to remove the black borders in the images for better training results. Review its `main` before execution to ensure the paths and function calls match what you want.
- The `process_labels.py` script can be used to merge the orange and big_orange classes into a single orange_cone class, remove unknown labels, and re-project bounding-box labels to their correct coordinates after image cropping. Review its `main` before execution as well.
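The label re-projection step can be illustrated with a small helper. This is a hypothetical sketch of the coordinate math for YOLO-normalized boxes after a border crop, not the repo's actual implementation:

```python
def remap_bbox(cx, cy, w, h, orig_wh, crop_box):
    """Re-normalize a YOLO box (cx, cy, w, h in [0, 1]) after cropping.

    orig_wh:  (W, H) of the original image in pixels.
    crop_box: (left, top, new_w, new_h) of the crop region in pixels.
    """
    W, H = orig_wh
    left, top, new_w, new_h = crop_box
    # Convert to pixel coordinates, shift by the crop offset, re-normalize
    # against the cropped image size.
    new_cx = (cx * W - left) / new_w
    new_cy = (cy * H - top) / new_h
    return new_cx, new_cy, (w * W) / new_w, (h * H) / new_h
```

Boxes that end up partially or fully outside the crop would additionally need clipping or removal, which is omitted here.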
Train! You can either train, optimize, or run experiments with the following scripts:
- `train_yolo.py` for simple single-model training.
- `optimize.py` for hyperparameter tuning.
- `experiment_train.py` for systematic training across multiple configurations.
You will find more details about these scripts in the `train_tune_experiment/README.md` file.

Finally, you can perform inference on videos or images with the notebooks in the `inference_notebooks/` folder, or use them as examples for your own inference scripts.

After experiments, you may want to benchmark results both in terms of accuracy and inference speed on the target hardware (Jetson Orin NX). Dedicated scripts to perform these benchmarks are coming soon.
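Until the dedicated benchmark scripts land, a minimal latency measurement can be sketched as below; `run_inference` is a hypothetical stand-in for whatever blocking model call you want to time:

```python
import time

def benchmark(run_inference, n_warmup=10, n_runs=100):
    """Return (mean latency in ms, FPS) for a zero-argument callable."""
    for _ in range(n_warmup):
        run_inference()  # warm up caches / lazy initialization
    start = time.perf_counter()
    for _ in range(n_runs):
        run_inference()
    mean_s = (time.perf_counter() - start) / n_runs
    return mean_s * 1000.0, 1.0 / mean_s
```

On GPU hardware such as the Jetson, inference calls may execute asynchronously; make sure the call you benchmark blocks until the result is ready, or the numbers will be meaningless.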
Module Overview
analysis_and_preprocessing/
Dataset preparation: removes FSoco watermarks, remaps classes, removes unknowns, adjusts bbox coordinates. See module README for details.
train_tune_experiment/
Training framework: single models, batch experiments, hyperparameter optimization. See module README for details.
inference_notebooks/
Inference on videos/images (local GPU or Google Colab).
Important Notes
Cache behavior: `cache='disk'` only caches if less than 50% of RAM is in use at startup. Cached files persist across runs. Use `cache='ram'`, or manually delete the `.npy` files if needed, to avoid reusing images cached with the wrong augmentations from previous runs.

DataLoader memory leaks: consecutive training runs leak worker processes. `experiment_train.py` includes a `pkill -f pt_data_worker` cleanup between experiments. Adjust the pattern if needed for your environment.
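Clearing stale disk caches can be scripted. A small stdlib sketch, assuming the `.npy` cache files live somewhere under the dataset root (the exact location may differ in your setup, so review what it matches before running):

```python
from pathlib import Path

def clear_npy_cache(root: str = "dataset") -> int:
    """Delete all .npy image-cache files under root; return how many were removed."""
    removed = 0
    for npy in Path(root).rglob("*.npy"):
        npy.unlink()
        removed += 1
    return removed
```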