
Object Detection Prediction

Our prediction service is a simple and efficient tool that lets users supply a TensorFlow Lite (.tflite), Keras (.keras), or ONNX (.onnx) model together with a set of images. The service then uses the model to predict the class and position of each object in the image set. This is particularly useful for anyone working on object detection tasks who needs a quick and easy way to generate predictions, and the service is designed to be user-friendly for beginners and experts alike.

1. Configure the YAML file

To use the prediction service, fill in the 'prediction' section of the user_config.yaml file, following prediction_config.yaml or the example below:

```yaml
model:
   model_type: <model-type>           # yolov2t, yolov4, yolov4t
   model_path: <path-to-a-Keras-or-TFlite-or-onnx-model-file>   # Path to the model to use to make predictions

operation_mode: prediction

preprocessing:
   rescaling: { scale: 1/127.5, offset: -1 }
   resizing:
      aspect_ratio: fit
      interpolation: nearest
   color_mode: rgb

dataset:
   dataset_name: coco
   class_names: [ person ]   # Names of the classes
   prediction_path: ../test_files

prediction:
  target: host   # host, stedgeai_host, stedgeai_n6

postprocessing:
  confidence_thresh: 0.5
  NMS_thresh: 0.5
  IoU_eval_thresh: 0.5
  max_detection_boxes: 10
```
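As an illustration of the rescaling step above, a scale of 1/127.5 with an offset of -1 maps 8-bit pixel values from [0, 255] to [-1, 1]. The following is a hypothetical NumPy sketch of that mapping, not the Model Zoo's actual preprocessing code:

```python
import numpy as np

def rescale(image: np.ndarray, scale: float = 1 / 127.5, offset: float = -1.0) -> np.ndarray:
    """Apply the 'rescaling' preprocessing: pixel * scale + offset."""
    return image.astype(np.float32) * scale + offset

# A pixel of 0 maps to -1.0 and a pixel of 255 maps to +1.0
pixels = np.array([0, 255], dtype=np.uint8)
print(rescale(pixels))  # -> [-1.  1.]
```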

In the 'model' section, users must provide the path to their model file using the model_path attribute. This can be a Keras model file with a '.keras' filename extension (float model), a TensorFlow Lite model file with a '.tflite' filename extension (quantized model), or an ONNX model file with a '.onnx' filename extension.

The 'dataset' section requires the dataset_name attribute and the names of the classes, provided either through the class_names attribute or through the classes_name_file argument pointing to a text file containing the class names, since no dataset is available to infer them from.

In the 'dataset' section, users must also provide the path to the directory containing the images to predict using the prediction_path attribute. When using a quantized TFLite or ONNX model, the target attribute selects where inference runs: the classic Python interpreters (host, the default), the C code generated by stedgeai running on the PC (stedgeai_host), or the C code generated by stedgeai running directly on the N6 board (stedgeai_n6).
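On the default host target, running a quantized .tflite model amounts to invoking the standard TensorFlow Lite Python interpreter. The sketch below shows that bare mechanism under stated assumptions (a placeholder model path and a dummy input; the actual service wraps this with the preprocessing and postprocessing configured above):

```python
import numpy as np
import tensorflow as tf  # assumes a TensorFlow install that bundles the TFLite interpreter

# Load the quantized model (placeholder path, not a file shipped with the Model Zoo)
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed one preprocessed image shaped and typed like the model's input tensor
dummy_image = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy_image)
interpreter.invoke()

# Raw detections, to be decoded by the postprocessing step
raw_predictions = interpreter.get_tensor(output_details[0]["index"])
```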

Finally, once all of these sections have been filled in, users can run the prediction service to generate predictions for their set of images.
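To give a rough sense of what the postprocessing thresholds control, the sketch below implements a generic greedy non-maximum suppression pass (an illustration, not the Model Zoo's implementation): boxes scoring below confidence_thresh are discarded, any remaining box whose IoU with an already-kept box exceeds NMS_thresh is suppressed, and at most max_detection_boxes survive.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, confidence_thresh=0.5, nms_thresh=0.5, max_detection_boxes=10):
    """Greedy non-maximum suppression over score-sorted candidate boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        if scores[i] < confidence_thresh:
            continue  # below the confidence threshold
        if all(iou(boxes[i], boxes[j]) <= nms_thresh for j in kept):
            kept.append(i)  # does not overlap too much with any kept box
        if len(kept) == max_detection_boxes:
            break
    return kept

# Two heavily overlapping boxes: only the higher-scoring one survives;
# the third box is dropped because its score is below confidence_thresh
boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.3]
print(nms(boxes, scores))  # -> [0]
```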

Hydra and MLflow settings

The mlflow and hydra sections must always be present in the YAML configuration file. The hydra section can be used to specify the name of the directory where experiment directories are saved and/or the pattern used to name experiment directories. With the YAML code below, every time you run the Model Zoo, an experiment directory is created that contains all the directories and files created during the run. The names of experiment directories are all unique as they are based on the date and time of the run.

```yaml
hydra:
   run:
      dir: ./tf/src/experiments_outputs/${now:%Y_%m_%d_%H_%M_%S}
```
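The ${now:%Y_%m_%d_%H_%M_%S} resolver expands to the run's start time, so experiment directory names sort chronologically. Python's strftime uses the same format codes, so the resulting name can be previewed as follows (the timestamp is an arbitrary example):

```python
from datetime import datetime

run_time = datetime(2024, 5, 1, 12, 30, 0)  # example run start time
dir_name = run_time.strftime("%Y_%m_%d_%H_%M_%S")
print(dir_name)  # -> 2024_05_01_12_30_00
```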

The mlflow section is used to specify the location and name of the directory where MLflow files are saved, as shown below:

```yaml
mlflow:
   uri: ./tf/src/experiments_outputs/mlruns
```

2. Launch the prediction

If you chose to modify the user_config.yaml, you can launch the prediction by running the following command from the UC folder:

```bash
python stm32ai_main.py
```

If you chose to update the prediction_config.yaml and use it, then run the following command from the UC folder:

```bash
python stm32ai_main.py --config-path ./config_file_examples/ --config-name prediction_config.yaml
```