# Image classification STM32 model zoo

## Directory components
- `config_file_examples`: a collection of YAML configuration file examples for TensorFlow.
- `config_file_examples_pt`: a collection of YAML configuration file examples for PyTorch.
- `datasets`: contains the class-names files for each supported dataset.
- `docs`: contains all READMEs and tutorials describing the supported operation modes for TensorFlow and PyTorch, specific to the image classification use case.
- `tf`: contains the TensorFlow tools to train, evaluate, benchmark, quantize, and deploy your model on your STM32 target.
- `pt`: contains the PyTorch tools to train, evaluate, benchmark, quantize, and deploy your model on your STM32 target.
## Quick & easy examples

The `operation_mode` top-level attribute specifies the operation or service you want to execute. This may be a single operation or a set of chained operations.
The READMEs linked below provide typical examples of operation modes, as well as tutorials on specific services:
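For instance, a configuration running a single service only needs `operation_mode` set to that service's name. The sketch below is illustrative: apart from `operation_mode`, the section and key names are assumptions, so check `config_file_examples` for the exact schema.

```yaml
# Minimal sketch of a single-service configuration.
# Only `operation_mode` is documented above; the other sections
# and the paths are hypothetical placeholders.
general:
  model_path: ./models/my_model.h5    # hypothetical model file

operation_mode: evaluation            # one single operation

dataset:
  test_path: ./datasets/my_test_set   # hypothetical test set
```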
- training, chain_tqe (train + quantize + evaluate), chain_tqeb (train + quantize + evaluate + benchmark) for TensorFlow
- training, chain_tqe (train + quantize + evaluate), chain_tqeb (train + quantize + evaluate + benchmark) for PyTorch
- quantization, chain_eqe, chain_qb for TensorFlow or PyTorch
- quantization, chain_eqe, chain_qb with advanced quantization parameters (exclusively based on ONNX Runtime)
- evaluation, chain_eqeb for TensorFlow or PyTorch
- benchmarking for TensorFlow or PyTorch
- prediction for TensorFlow or PyTorch
- deployment, chain_qd (STM32H7, STM32N6, STM32MPU)
The different values of the `operation_mode` attribute and the corresponding operations are described in the table below. In the names of the chain modes, 't' stands for training, 'e' for evaluation, 'q' for quantization, 'b' for benchmarking, and 'd' for deployment on an STM32 board.
| `operation_mode` attribute | Operations |
|---|---|
| `training` | Train a model from the variety of classification models in the model zoo (BYOD) or your own model (BYOM) |
| `evaluation` | Evaluate the accuracy of a float or quantized model on a test or validation dataset |
| `quantization` | Quantize a float model |
| `prediction` | Predict the classes some images belong to using a float or quantized model |
| `benchmarking` | Benchmark a float or quantized model on an STM32 board |
| `deployment` | Deploy a model on an STM32 board |
| `chain_tqeb` | Sequentially: training, quantization of trained model, evaluation of quantized model, benchmarking of quantized model |
| `chain_tqe` | Sequentially: training, quantization of trained model, evaluation of quantized model |
| `chain_eqe` | Sequentially: evaluation of a float model, quantization, evaluation of the quantized model |
| `chain_qb` | Sequentially: quantization of a float model, benchmarking of quantized model |
| `chain_eqeb` | Sequentially: evaluation of a float model, quantization, evaluation of quantized model, benchmarking of quantized model |
| `chain_qd` | Sequentially: quantization of a float model, deployment of quantized model |
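Switching from a single service to a chain mode only requires changing the `operation_mode` value. The sketch below illustrates a hypothetical `chain_tqe` run; every key other than `operation_mode` is an assumption, so refer to `config_file_examples` for the real schema.

```yaml
# Hypothetical chain_tqe sketch: train, then quantize the trained
# model, then evaluate the quantized model.
operation_mode: chain_tqe

dataset:
  training_path: ./datasets/train   # hypothetical paths
  test_path: ./datasets/test

training:
  epochs: 50                        # illustrative hyperparameter

quantization:
  quantization_type: PTQ            # assumption: post-training quantization
```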
The list of the available models in the STM32 model zoo, along with a quick way to explore them by size, inference time, etc., can be found in the PyTorch models or TensorFlow models READMEs.

Not sure where to start, or feeling lost? Follow the tutorials below for a quick ramp-up:
- How can I define and train my own model?
- How can I fine-tune a pretrained model on my own dataset?
- How can I check the accuracy of my model after quantization?
- How can I evaluate my model on an STM32N6 target?
- How can I quickly benchmark the performance of a model?
Remember that minimalistic YAML files are available in `config_file_examples` for TensorFlow and in `config_file_examples_pt` for PyTorch to experiment with specific services. In addition, every pretrained model in the STM32 model zoo comes with the configuration YAML file that was used to generate it. Both are excellent starting points!
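As an idea of how small such a minimalistic file can be, a benchmarking-only configuration might reduce to something like the sketch below. The `benchmarking` section name, the board name, and the paths are assumptions, not taken from this README; the actual schema is in `config_file_examples`.

```yaml
# Hypothetical minimal benchmarking-only configuration.
general:
  model_path: ./models/my_model_int8.tflite  # hypothetical quantized model

operation_mode: benchmarking

benchmarking:
  board: STM32H747I-DISCO                    # assumption: target board name
```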