# Evaluation of Object Detection Model

Our evaluation service is a comprehensive tool that lets you assess the accuracy of a TensorFlow Lite (`.tflite`) or Keras (`.keras`) object detection model. By uploading a model together with a validation set, you can quickly and easily evaluate its performance and generate various metrics, such as mAP. The evaluation service is designed to be fast, efficient, and accurate, making it an essential tool for anyone looking to evaluate the performance of an object detection model.
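For context on the mAP metric: a detection is typically counted as a true positive when its Intersection-over-Union (IoU) with a ground-truth box exceeds a threshold (commonly 0.5), and average precision is then averaged over classes. The sketch below illustrates the IoU computation only; the function name and the `(x_min, y_min, x_max, y_max)` box convention are illustrative and not part of this service's API.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x_min, y_min, x_max, y_max)."""
    # Coordinates of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```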
## 1. Configure the YAML file

To use this service, you can either edit the [user_config.yaml](../user_config.yaml) file directly, or update the [evaluation_config.yaml](../config_file_examples/evaluation_config.yaml) file and use it. This file provides an example of how to configure the evaluation service to meet your specific needs. Alternatively, you can follow the tutorial below, which shows how to evaluate your pre-trained object detection model using our evaluation service.
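As a rough orientation, a minimal evaluation configuration looks something like the sketch below. The exact key names here are assumptions following the general pattern of the model zoo configuration files; refer to [evaluation_config.yaml](../config_file_examples/evaluation_config.yaml) for the authoritative set of keys and values.

```yaml
# Illustrative sketch only -- check evaluation_config.yaml for the real schema.
operation_mode: evaluation
general:
  model_path: ./models/my_model.tflite   # .tflite or .keras model to evaluate
dataset:
  test_path: ./datasets/test_set         # validation/test images and labels
```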
## 2. Evaluate your model

If you chose to modify [user_config.yaml](../user_config.yaml), you can evaluate the model by running the following command from the UC folder:

```bash
python stm32ai_main.py
```

If you chose to update [evaluation_config.yaml](../config_file_examples/evaluation_config.yaml) and use it, then run the following command from the UC folder:

```bash
python stm32ai_main.py --config-path ./config_file_examples/ --config-name evaluation_config.yaml
```

If you want to evaluate the accuracy of the quantized model and then benchmark it, you can either launch the evaluation operation mode followed by the [benchmark service](./README_BENCHMARKING.md), which describes in detail how to proceed, or use chained services, for example launching **[chain_eqeb](../config_file_examples/chain_eqeb_config.yaml)** with the command below:

```bash
python stm32ai_main.py --config-path ./config_file_examples/ --config-name chain_eqeb_config.yaml
```
## 3. Visualize the evaluation results

You can retrieve the confusion matrix generated after evaluating the float/quantized model on the test set by navigating to the corresponding run directory under **experiments_outputs/**. You can also find the evaluation results saved in the log file **stm32ai_main.log** in the same location.
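If you run the service repeatedly, finding the log of the latest run by hand can be tedious. The helper below is a small sketch, assuming each run writes its own `stm32ai_main.log` somewhere under the `experiments_outputs/` tree; `latest_log` is a hypothetical helper, not part of the service.

```python
from pathlib import Path

def latest_log(root="experiments_outputs"):
    """Return the most recently modified stm32ai_main.log under root, or None."""
    logs = sorted(Path(root).rglob("stm32ai_main.log"),
                  key=lambda p: p.stat().st_mtime)
    return logs[-1] if logs else None

if __name__ == "__main__":
    log = latest_log()
    if log is not None:
        # Print the tail of the log, where the evaluation results are summarized.
        print(log.read_text()[-2000:])
```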