adema5051 committed · verified · commit 81096aa · 1 parent: 4b3aded

Update README.md

Files changed (1): README.md (+11 −9)
README.md CHANGED
@@ -65,10 +65,10 @@ The baseline model is a **YOLOv12** multitask variant, extended with a **regress
 ## 🧪 Performance Tables
 
 ### Table 1: Performance of YOLOv12M at different resolutions
-> Include from paper (Table 3)
+> ![YOLOv12M at different resolutions](figures/YOLOv12M_table.jpg)
 
 ### Table 2: YOLOv8 vs YOLOv12 on FPB
-> Include from paper (Table 4)
+> ![YOLOv8 vs YOLOv12 results](figures/training_results.png)
 
 
 
@@ -79,7 +79,7 @@ Train the multi-task YOLOv12 model using `train.py`
 
 ## 🔍 Inference
 
-Download the trained best model from the [drive link](https://huggingface.co/datasets/issai/Food_Portion_Benchmark) and run inference on test images using `test.py`
+Download the trained best models from the [drive link](https://drive.google.com/drive/folders/1XbgdXzfX73PxUUxthcbcqbY-1TNRK51d?usp=sharing) and run inference on test images using `test.py`
 - Provide path to your images folder or image file
 - Replace `model` with the path to the downloaded model
 - Set `show=True` to save annotated images with bounding boxes and predicted weights
@@ -89,12 +89,14 @@ Download the trained best model from the [drive link](https://huggingface.co/dat
 
 ## 📚 In case of using our work in your research, please cite this paper
 
-<pre> @article{sanatbyeka2025multitask,
-title = {A Multitask Deep Learning Model for Food Scene Recognition and Portion Estimation},
-author = {Sanatbyeka, Aibota and Rakhimzhanova, Tomiris and Varol, Huseyin Atakan and Chan, Mei Yen},
-journal = {AI Open},
-year = {2025},
-note = {Preprint submitted April 7, 2025}
+<pre> @article{Sanatbyek_2025,
+title={A multitask deep learning model for food scene recognition and portion estimation—the Food Portion Benchmark (FPB) dataset},
+volume={13},
+DOI={10.1109/access.2025.3603287},
+journal={IEEE Access},
+author={Sanatbyek, Aibota and Rakhimzhanova, Tomiris and Nurmanova, Bibinur and Omarova, Zhuldyz and Rakhmankulova, Aidana and Orazbayev, Rustem and Varol, Huseyin Atakan and Chan, Mei Yen},
+year={2025},
+pages={152033–152045}
 }
 </pre>
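The inference steps in the README's Inference section (provide an image file or folder path, point `model` at the downloaded checkpoint, set `show=True` to save annotated images) map onto the Ultralytics Python API roughly as follows. This is a minimal sketch, assuming `test.py` wraps `YOLO.predict`; `best.pt` and `test_images/` are placeholder paths, not files shipped with the repository.

```python
def run_inference(model_path: str, source: str, show: bool = True):
    """Sketch of the README's inference step: load the downloaded
    checkpoint and predict on a single image file or a folder of images.
    Passing save=show stands in for the README's show=True, which saves
    annotated images with bounding boxes and predicted weights."""
    from ultralytics import YOLO  # assumes the ultralytics package is installed
    model = YOLO(model_path)      # path to the downloaded best model
    return model.predict(source=source, save=show)

# Example call (requires the downloaded checkpoint):
# run_inference("best.pt", "test_images/")
```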