---
license: cc-by-nc-4.0
---

# Food Portion Benchmark (FPB) Dataset

The **Food Portion Benchmark (FPB)** is a comprehensive dataset and benchmark suite for multi-task food scene understanding, combining **food detection** with **portion size (weight) estimation**. It was introduced to support research in dietary analysis, nutrition tracking, and food computing. The dataset provides high-quality annotations and is evaluated with an extended YOLOv12-based multi-task model [1].

---

## 📦 Dataset Overview

- **Total images**: 14,083
- **Food classes**: 138
- **Annotations**: bounding boxes + ground-truth weights (in grams)
- **Image angles**: top-down and four side views
- **Cameras**: Intel RealSense D455 + smartphones
- **Split**: train (9,521) / validation (2,365) / test (2,197)
- **Collection setting**: controlled lab environment featuring local Central Asian cuisine

Each food item was weighed and categorized into a small, medium, or large portion, and images were captured from multiple angles to enable robust volume and weight estimation.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65824a5a2f5dd816ddc7bc2a/EZIs9M0D94PSHGzeHOt_Y.png)
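As a sketch of how the bounding-box and weight annotations could be consumed together, the snippet below parses a YOLO-style label line extended with a trailing weight field. The on-disk format is not documented in this card, so the field order here (and the helper name `parse_label_line`) is an assumption for illustration only:

```python
# Hypothetical YOLO-style annotation line with an extra weight field:
#   class_id x_center y_center width height weight_g
# Coordinates are assumed normalized to [0, 1]; weight is in grams.

def parse_label_line(line: str) -> dict:
    """Split one annotation line into class id, bbox, and weight."""
    class_id, xc, yc, w, h, weight = line.split()
    return {
        "class_id": int(class_id),
        "bbox": (float(xc), float(yc), float(w), float(h)),  # normalized cxcywh
        "weight_g": float(weight),
    }

label = parse_label_line("17 0.512 0.433 0.210 0.180 245.0")
print(label["class_id"], label["weight_g"])  # → 17 245.0
```

Check the dataset files on the Hugging Face hub for the authoritative annotation layout before relying on this field order.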

---

## 🧠 Model Overview

The baseline model is a **YOLOv12** multitask variant extended with a **regression head** that predicts food weight (see the figure below). It is designed to be **agnostic to missing labels**, so it remains compatible with datasets that lack weight annotations.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65824a5a2f5dd816ddc7bc2a/-M0EKiTqQpRQLt74eOjcw.png)
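A common way to make a regression head agnostic to missing labels is to mask unlabeled samples out of the loss. The paper's exact loss is not reproduced in this card; the sketch below simply computes MAE over labeled samples only, with `masked_weight_mae` as a hypothetical helper name:

```python
# Sketch: weight regression loss that ignores samples with no weight label.
# Samples whose target is None contribute nothing to the loss, so the same
# training loop works on detection-only datasets.

def masked_weight_mae(preds, targets):
    """Mean absolute error over samples whose weight label is present."""
    pairs = [(p, t) for p, t in zip(preds, targets) if t is not None]
    if not pairs:
        return 0.0  # batch carries no weight supervision
    return sum(abs(p - t) for p, t in pairs) / len(pairs)

# Batch where the second sample has no weight annotation:
loss = masked_weight_mae([120.0, 80.0, 250.0], [100.0, None, 300.0])
print(loss)  # → 35.0, i.e. (20 + 50) / 2
```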

### Best Model (YOLOv12-M @ 640×640)

- **Detection**: mAP50 = 0.974, mAP50-95 = 0.948
- **Weight estimation**: MAE = 90.95 g

---

## 🧪 Performance Tables

### Table 1: Performance of YOLOv12-M at different resolutions

> Include from paper (Table 3)

### Table 2: YOLOv8 vs. YOLOv12 on FPB

> Include from paper (Table 4)

---

## 📥 Dataset Access & Benchmarking

- 📦 **Download the dataset**: [Hugging Face](https://huggingface.co/datasets/issai/Food_Portion_Benchmark)
- 🚀 **Evaluate your model**: submit test-set predictions to the [automated score-checker](https://huggingface.co/datasets/issai/Food_Portion_Benchmark/tree/main/score-checker)

Test labels are hidden to ensure fair evaluation.
| 58 |
+
|
| 59 |
+
|
| 60 |
+
|
| 61 |
+
|
| 62 |
+
---
|

## 📚 Citation

If you use our work in your research, please cite:

```bibtex
@article{sanatbyeka2025multitask,
  title={A Multitask Deep Learning Model for Food Scene Recognition and Portion Estimation},
  author={Sanatbyeka, Aibota and Rakhimzhanova, Tomiris and Varol, Huseyin Atakan and Chan, Mei Yen},
  journal={AI Open},
  year={2025},
  note={Preprint submitted April 7, 2025}
}
```

## References

[1] Tian, Y., Ye, Q., & Doermann, D. (2025). YOLOv12: Attention-centric real-time object detectors. arXiv. https://arxiv.org/abs/2502.12524

[2] Ultralytics YOLO. https://github.com/ultralytics/ultralytics