VISION 25-26: Cone Detection for Formula Student

YOLO-based object detection system for autonomous Formula Student racing. Detects track boundary cones (yellow, blue, orange) for navigation and path planning.

Dataset Structure

  • The scripts expect a train.yaml file for YOLO training and validation. This file also points to the dataset/ directory, which must have the following structure:
    dataset/
    ├── images/
    │   ├── train/
    │   ├── val/
    │   └── test/
    └── labels/
        ├── train/
        ├── val/
        └── test/
    

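Before training, it can help to verify that this layout is in place. A minimal sketch (the check_dataset_layout helper is illustrative, not part of the repository):

```python
from pathlib import Path

def check_dataset_layout(root="dataset"):
    """Return a list of missing subdirectories in the expected YOLO layout."""
    root = Path(root)
    expected = [
        root / kind / split
        for kind in ("images", "labels")
        for split in ("train", "val", "test")
    ]
    return [str(p) for p in expected if not p.is_dir()]

missing = check_dataset_layout("dataset")
if missing:
    print("Missing directories:", missing)
```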
Classes:

  • 0: Yellow cone
  • 1: Blue cone
  • 2: Orange cone (includes large orange cones)

Note: Unknown/damaged cones are removed during preprocessing to avoid label noise.
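Each file in labels/ follows the standard YOLO format: one line per cone, class x_center y_center width height, with coordinates normalized to [0, 1]. A small sketch of parsing a label line into the class ids above (the parse_label helper and the exact class-name strings are illustrative):

```python
# Class ids as listed above; the name strings are illustrative.
CLASS_NAMES = {0: "yellow_cone", 1: "blue_cone", 2: "orange_cone"}

def parse_label(line):
    """Parse one 'class x_center y_center width height' YOLO label line."""
    cls, xc, yc, w, h = line.split()
    return CLASS_NAMES[int(cls)], (float(xc), float(yc), float(w), float(h))

name, bbox = parse_label("0 0.512 0.430 0.060 0.110")
# name is "yellow_cone"; bbox values are normalized to the image size
```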

Quick Start

  1. Setup environment:

    python3 -m venv venv
    source venv/bin/activate
    pip install -r requirements.txt
    
  2. Download and organize your dataset into the dataset/ folder as described above.

  3. Review and modify the train.yaml file to ensure it points to the correct dataset paths and class definitions. We recommend using an absolute path for the data directory: Ultralytics resolves relative paths behind the scenes, which can cause issues.
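For reference, an Ultralytics dataset YAML typically looks like the sketch below; the path and class-name strings are assumptions to adapt to your setup:

```yaml
# train.yaml — sketch; prefer an absolute path for `path`
path: /absolute/path/to/dataset
train: images/train
val: images/val
test: images/test

names:
  0: yellow_cone
  1: blue_cone
  2: orange_cone
```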

  4. Execute the preprocessing scripts:

    • The preprocess_images.py script removes the black borders from the images for better training results. Review its main block before execution to ensure the paths and function calls match your setup.
    • The process_labels.py script merges the orange and big_orange classes into a single orange_cone class, removes unknown labels, and projects bounding-box labels to their correct coordinates after image cropping. Review its main block before execution to ensure the paths and function calls match your setup.
  5. Train! You can train a single model, tune hyperparameters, or run experiments with the following scripts:

    • train_yolo.py for simple single-model training.
    • optimize.py for hyperparameter tuning.
    • experiment_train.py for systematic training across multiple configurations.

    You will find more details about these scripts in the train_tune_experiment/README.md file.

  6. Finally, you can perform inference on videos or images with the notebooks in the inference_notebooks/ folder, or use them as examples for your own inference scripts.

  7. After experiments, you may want to benchmark results both in terms of accuracy and inference speed on the target hardware (Jetson Orin NX). Dedicated scripts to perform these benchmarks are coming soon.
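The label transformations in step 4 can be sketched as follows. This is an illustration based on the description above, not the repository's actual code; the class ids for big_orange (3) and unknown (4) are assumptions:

```python
def remap_label(line, crop_left, crop_top, orig_w, orig_h, new_w, new_h):
    """Merge big_orange into orange, drop unknowns, and shift a normalized
    YOLO bbox after cropping (offsets and sizes in pixels).

    Assumed ids: 3 = big_orange -> 2 (orange), 4 = unknown -> dropped.
    """
    cls, xc, yc, w, h = line.split()
    cls = int(cls)
    if cls == 4:   # unknown/damaged cone: remove to avoid label noise
        return None
    if cls == 3:   # large orange cone: merge into orange_cone
        cls = 2
    # denormalize against the original image, shift by the crop offset,
    # then renormalize against the cropped image
    xc = (float(xc) * orig_w - crop_left) / new_w
    yc = (float(yc) * orig_h - crop_top) / new_h
    w = float(w) * orig_w / new_w
    h = float(h) * orig_h / new_h
    return f"{cls} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"
```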


Module Overview

analysis_and_preprocessing/

Dataset preparation: removes FSoco watermarks, remaps classes, removes unknowns, adjusts bbox coordinates. See module README for details.

train_tune_experiment/

Training framework: single models, batch experiments, hyperparameter optimization. See module README for details.

inference_notebooks/

Inference on videos/images (local GPU or Google Colab).


Important Notes

  • Cache behavior: cache='disk' only writes the cache if less than 50% of RAM is in use at startup. Cached files persist across runs. Use cache='ram' or manually delete the .npy files if needed, to avoid reusing images cached with the wrong augmentations from previous runs.
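If stale caches are a concern, the .npy files can be cleared with a short stdlib sketch (the helper name and dataset path are assumptions):

```python
from pathlib import Path

def clear_disk_cache(dataset_root="dataset"):
    """Delete cached .npy files so the next run rebuilds them from scratch."""
    removed = []
    for f in Path(dataset_root).rglob("*.npy"):
        f.unlink()
        removed.append(str(f))
    return removed
```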

  • DataLoader memory leaks: Consecutive training runs can leak worker processes. experiment_train.py runs a pkill -f pt_data_worker cleanup between experiments. Adjust the pattern if needed for your environment.
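A sketch of that cleanup step between runs (the default pattern is the one mentioned above; the helper itself is illustrative):

```python
import subprocess

def kill_leaked_workers(pattern="pt_data_worker", run=True):
    """Kill leaked DataLoader worker processes matching `pattern`.

    Returns the pkill command; with run=True it is also executed
    (pkill exits 1 when nothing matches, which is harmless here).
    """
    cmd = ["pkill", "-f", pattern]
    if run:
        subprocess.run(cmd, check=False)
    return cmd
```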
