---
license: cc-by-nc-4.0
---
# Food Portion Benchmark (FPB) Dataset
The **Food Portion Benchmark (FPB)** is a comprehensive dataset and benchmark suite for multi-task food scene understanding, combining **food detection** with **portion size (weight) estimation**. It was introduced to support research in dietary analysis, nutrition tracking, and food computing. The dataset provides high-quality annotations and is evaluated with an extended YOLOv12-based [1] multi-task model.
---
## 📦 Dataset Overview
- **Total images**: 14,083
- **Food classes**: 138
- **Annotations**: Bounding boxes + Ground-truth weights (in grams)
- **Image angles**: Top-down and four side views
- **Cameras**: Intel RealSense D455 + smartphones
- **Split**: Train (9,521) / Validation (2,365) / Test (2,197)
- **Collection setting**: Controlled lab environment using local Central Asian cuisine

Each food item was weighed and categorized into small, medium, or large portions. Images were captured from different angles to enable robust volume and weight estimation.

## 📁 Dataset Structure and Format
The FPB dataset follows the **YOLO annotation format**, with a custom 6th column for **food weight (in grams)**.
### 🧾 Label Format (YOLO-style with weight)
Each line of a label file contains six space-separated values:
- `class_id`: ID of the food class (0–137)
- `x_center, y_center, width, height`: Bounding box coordinates (normalized to [0, 1])
- `weight`: Ground-truth weight in grams (used for regression)

Each `.txt` file matches the name of its corresponding image file.
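For illustration, a label file for an image containing two food items might look like the following (the class IDs, coordinates, and weights here are hypothetical, not taken from the dataset):

```text
17 0.512 0.431 0.250 0.330 245.0
88 0.204 0.667 0.180 0.210 120.5
```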
---
## 📥 Dataset Access & Benchmarking
- 📦 **Download Dataset**: [Hugging Face link](https://huggingface.co/datasets/issai/Food_Portion_Benchmark)
- 🚀 **Evaluate Your Model**: Submit predictions on the test set using the [automated score-checker](https://huggingface.co/datasets/issai/Food_Portion_Benchmark/tree/main/score-checker)

Test labels are hidden to ensure fair evaluation.
---
## 🧠 Model Overview
The baseline model is a **YOLOv12** [1] multi-task variant, extended with a **regression head** for predicting food weight. It was designed to be **agnostic to missing labels**, making it compatible with datasets that do not have weight annotations.

GitHub source code: [Multitask-Food-Portion-Estimation](https://github.com/IS2AI/Multitask-Food-Portion-Estimation)
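As a rough illustration of how a label-agnostic design can work, the sketch below masks the weight-regression loss so that boxes without a ground-truth weight contribute no gradient. This is a minimal sketch assuming PyTorch; the function name and tensor layout are hypothetical, not taken from the repository.

```python
import torch

def masked_weight_loss(pred_weights: torch.Tensor,
                       gt_weights: torch.Tensor,
                       has_weight: torch.Tensor) -> torch.Tensor:
    """L1 (MAE) loss over predicted weights, restricted to targets that
    actually carry a weight label; unlabeled boxes are masked out."""
    if not has_weight.any():
        # No weight labels in this batch: return a zero that keeps the graph valid.
        return pred_weights.sum() * 0.0
    return (pred_weights[has_weight] - gt_weights[has_weight]).abs().mean()
```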
### Best Model (YOLOv12-M @ 640×640)
- **Detection**: mAP50 = 0.974, mAP50-95 = 0.948
- **Weight Estimation**: MAE = 90.95 g
---
## 🧪 Performance Tables
### Table 1: Performance of YOLOv12-M at different resolutions

### Table 2: YOLOv8 vs YOLOv12 on FPB

## 🏋️‍♂️ Training
Train the multi-task YOLOv12 model using `train.py`.
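Since the baseline builds on the Ultralytics framework [2], training plausibly follows its usual pattern. The snippet below is only a sketch: the dataset config `fpb.yaml`, the checkpoint name, and the hyperparameter values are assumptions, so check `train.py` in the repository for the actual entry point and arguments.

```python
from ultralytics import YOLO  # the baseline builds on the Ultralytics framework [2]

# Assumed placeholders: "yolov12m.pt" (pretrained checkpoint), "fpb.yaml" (dataset config).
model = YOLO("yolov12m.pt")
model.train(data="fpb.yaml", imgsz=640, epochs=100, batch=16)
```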
## 🔍 Inference
Download the trained best models from the [drive link](https://drive.google.com/drive/folders/1XbgdXzfX73PxUUxthcbcqbY-1TNRK51d?usp=sharing) and run inference on test images using `test.py` (a sketch follows the list below):
- Provide the path to your images folder or image file
- Replace `model` with the path to the downloaded model
- Set `show=True` to save annotated images with bounding boxes and predicted weights
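A minimal sketch of such an inference call, assuming the Ultralytics-style API [2]; the checkpoint and image paths are placeholders, and the exact effect of `show=True` is defined by the repository's `test.py`:

```python
from ultralytics import YOLO

# Placeholder paths: point these at the downloaded checkpoint and your images.
model = YOLO("weights/yolov12m_fpb_best.pt")

# As described above, show=True saves annotated images with boxes and predicted weights.
results = model.predict(source="path/to/images", imgsz=640, show=True)
```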
---
## 📚 Citation
If you use our work in your research, please cite this paper:
```bibtex
@article{Sanatbyek_2025,
  title   = {A multitask deep learning model for food scene recognition and portion estimation—the Food Portion Benchmark (FPB) dataset},
  author  = {Sanatbyek, Aibota and Rakhimzhanova, Tomiris and Nurmanova, Bibinur and Omarova, Zhuldyz and Rakhmankulova, Aidana and Orazbayev, Rustem and Varol, Huseyin Atakan and Chan, Mei Yen},
  journal = {IEEE Access},
  volume  = {13},
  pages   = {152033--152045},
  year    = {2025},
  doi     = {10.1109/ACCESS.2025.3603287}
}
```
## References
[1] Tian, Y., Ye, Q., & Doermann, D. (2025). YOLOv12: Attention-centric real-time object detectors. arXiv. https://arxiv.org/abs/2502.12524
[2] Ultralytics YOLO. GitHub repository. https://github.com/ultralytics/ultralytics