|
|
--- |
|
|
license: cc-by-nc-4.0 |
|
|
--- |
|
|
|
|
|
|
|
|
# Food Portion Benchmark (FPB) Dataset |
|
|
|
|
|
The **Food Portion Benchmark (FPB)** is a comprehensive dataset and benchmark suite for multi-task food scene understanding, combining **food detection** and **portion size (weight) estimation**. It was introduced to support research in dietary analysis, nutrition tracking, and food computing. The dataset provides high-quality annotations and is evaluated with an extended YOLOv12-based multi-task model.
|
|
|
|
|
--- |
|
|
|
|
|
## Dataset Overview
|
|
|
|
|
- **Total images**: 14,083 |
|
|
- **Food classes**: 138 |
|
|
- **Annotations**: Bounding boxes + Ground-truth weights (in grams) |
|
|
- **Image angles**: Top-down and four side views |
|
|
- **Cameras**: Intel RealSense D455 + smartphones |
|
|
- **Split**: Train (9,521) / Validation (2,365) / Test (2,197) |
|
|
- **Collection setting**: Controlled lab environment using local Central Asian cuisine |
|
|
|
|
|
Each food item was weighed and categorized into small, medium, or large portions. Images were captured from different angles to enable robust volume and weight estimation. |
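As a quick sanity check, the split sizes listed above add up to the stated total image count; a minimal Python sketch (numbers taken directly from this card):

```python
# Split sizes as reported for the FPB dataset
splits = {"train": 9521, "val": 2365, "test": 2197}

total = sum(splits.values())
assert total == 14083  # matches the stated total image count

# Share of each split, as a fraction of all images
shares = {name: round(n / total, 3) for name, n in splits.items()}
print(shares)  # roughly a 68/17/16 split
```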
|
|
 |
|
|
|
|
|
|
|
|
## Dataset Structure and Format
|
|
|
|
|
The FPB dataset follows the **YOLO annotation format**, with a custom 6th column for **food weight (in grams)**. |
|
|
|
|
|
### Label Format (YOLO-style with weight)
|
|
- `class_id`: ID of the food class (0–137)
|
|
- `x_center, y_center, width, height`: Bounding box coordinates (normalized to [0, 1]) |
|
|
- `weight`: Ground truth weight in grams (used for regression) |
|
|
|
|
|
Each `.txt` file matches the name of its corresponding image file. |
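To illustrate the format, here is a minimal parser for one such label line (the sample line and the function name are illustrative, not taken from the dataset or repository):

```python
def parse_fpb_label(line: str):
    """Parse one FPB label line: class_id, normalized box, weight in grams."""
    fields = line.split()
    class_id = int(fields[0])
    x_center, y_center, width, height = map(float, fields[1:5])
    weight = float(fields[5])  # ground-truth weight in grams (6th column)
    return class_id, (x_center, y_center, width, height), weight

# Hypothetical label line: class 17, centered box, 250 g portion
cid, box, grams = parse_fpb_label("17 0.500 0.500 0.300 0.200 250.0")
print(cid, box, grams)  # 17 (0.5, 0.5, 0.3, 0.2) 250.0
```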
|
|
|
|
|
|
|
|
--- |
|
|
|
|
|
## Dataset Access & Benchmarking
|
|
|
|
|
- **Download Dataset**: [Hugging Face link](https://huggingface.co/datasets/issai/Food_Portion_Benchmark)
|
|
- **Evaluate Your Model**: Submit predictions on the test set using the [automated score-checker](https://huggingface.co/datasets/issai/Food_Portion_Benchmark/tree/main/score-checker)
|
|
|
|
|
Test labels are hidden to ensure fair evaluation. |
|
|
|
|
|
|
|
|
--- |
|
|
|
|
|
|
|
|
|
|
|
## Model Overview
|
|
|
|
|
The baseline model is a **YOLOv12** multi-task variant, extended with a **regression head** that predicts food weight (see figure below). The regression branch is **agnostic to missing labels**, so the model remains compatible with datasets that lack weight annotations.
|
|
 |
|
|
|
|
|
GitHub source code: [Multitask-Food-Portion-Estimation](https://github.com/IS2AI/Multitask-Food-Portion-Estimation)
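The "agnostic to missing labels" behavior can be sketched as a masked regression term: the weight loss is computed only over boxes that actually carry a weight annotation. This is a simplified, stdlib-only sketch; the function name and the use of `None` as the missing-label marker are illustrative, not taken from the repository:

```python
def masked_weight_loss(pred_weights, gt_weights):
    """L1 loss over weight predictions, skipping boxes with no weight label.

    gt_weights uses None to mark a missing annotation; such boxes
    contribute nothing to the regression term, so detection-only
    datasets can still be used for training.
    """
    pairs = [(p, g) for p, g in zip(pred_weights, gt_weights) if g is not None]
    if not pairs:  # batch with no weight labels at all
        return 0.0
    return sum(abs(p - g) for p, g in pairs) / len(pairs)

# Two labeled boxes and one box without a weight annotation
loss = masked_weight_loss([240.0, 100.0, 55.0], [250.0, None, 60.0])
print(loss)  # (10 + 5) / 2 = 7.5
```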
|
|
|
|
|
### Best Model (YOLOv12-M @ 640×640)
|
|
- **Detection**: mAP50 = 0.974, mAP50-95 = 0.948 |
|
|
- **Weight Estimation**: MAE = 90.95 g
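The weight-estimation score above is a plain mean absolute error over the test set, in grams. For reference, a generic sketch of the metric (not the score-checker's actual code):

```python
def mean_absolute_error(pred, gt):
    """Average absolute difference in grams between predicted and true weights."""
    assert len(pred) == len(gt) and pred
    return sum(abs(p - g) for p, g in zip(pred, gt)) / len(pred)

# Toy example with three predictions (all values in grams)
print(mean_absolute_error([300.0, 120.0, 80.0], [250.0, 110.0, 95.0]))
# (50 + 10 + 15) / 3 = 25.0
```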
|
|
|
|
|
--- |
|
|
|
|
|
## Performance Tables
|
|
|
|
|
### Table 1: Performance of YOLOv12-M at different resolutions
|
|
 |
|
|
|
|
|
### Table 2: YOLOv8 vs YOLOv12 on FPB |
|
|
 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
## Training
|
|
|
|
|
Train the multi-task YOLOv12 model using `train.py`.
|
|
|
|
|
## Inference
|
|
|
|
|
Download the trained best models from the [Google Drive link](https://drive.google.com/drive/folders/1XbgdXzfX73PxUUxthcbcqbY-1TNRK51d?usp=sharing) and run inference on test images using `test.py`:
|
|
- Provide the path to your images folder or image file
|
|
- Replace `model` with the path to the downloaded model |
|
|
- Set `show=True` to save annotated images with bounding boxes and predicted weights |
|
|
|
|
|
|
|
|
--- |
|
|
|
|
|
## If you use our work in your research, please cite this paper
|
|
|
|
|
<pre>@article{Sanatbyek_2025,
  title={A multitask deep learning model for food scene recognition and portion estimation—the Food Portion Benchmark (FPB) dataset},
  volume={13},
  DOI={10.1109/access.2025.3603287},
  journal={IEEE Access},
  author={Sanatbyek, Aibota and Rakhimzhanova, Tomiris and Nurmanova, Bibinur and Omarova, Zhuldyz and Rakhmankulova, Aidana and Orazbayev, Rustem and Varol, Huseyin Atakan and Chan, Mei Yen},
  year={2025},
  pages={152033--152045}
}
</pre>
|
|
|
|
|
|
|
|
## References |
|
|
|
|
|
[1] Tian, Y., Ye, Q., & Doermann, D. (2025). YOLOv12: Attention-centric real-time object detectors. arXiv. https://arxiv.org/abs/2502.12524 |
|
|
[2] Ultralytics YOLO. https://github.com/ultralytics/ultralytics
|
|
|
|
|
|