aibota01 committed on
Commit ae0274c · 1 Parent(s): c38bab6
Files changed (1): app.py (+35 -9)
app.py CHANGED
@@ -12,7 +12,7 @@ import base64
 login(token=os.getenv("HF_TOKEN"))
 
 # ---------- Load Image for Description ----------
-with open("assets/desc.png", "rb") as f:
+with open("assets/desc3.png", "rb") as f:
     image_data = f.read()
 encoded_image = base64.b64encode(image_data).decode("utf-8")
 
@@ -299,17 +299,43 @@ with gr.Blocks() as demo:
         """
         ### Evaluation Metrics
 
-        - **mAP@50:**
-          Measures how well predicted bounding boxes align with the ground truth (IoU 0.50).
-
-        - **Weight MAE:**
-          Calculates the average absolute difference (in grams) between predicted and actual food weights.
-
-        ### About the Dataset
-
-        The **Food Portion Benchmark dataset** contains 14,083 RGB images across 133 food classes, with annotations for bounding boxes and weights.
+        - **mAP@50 (Mean Average Precision at IoU 0.50):**
+          This metric evaluates how well the predicted bounding boxes match the ground truth. In mAP@50, a prediction is considered a true positive if the Intersection over Union (IoU) between the predicted box and the ground-truth box is at least 0.50 (see the IoU sketch after the diff). The final score is averaged across all classes and images, yielding a single value between 0 and 1, where a higher value indicates better localization performance.
+
+        - **Weight MAE (Mean Absolute Error):**
+          This metric calculates the average absolute difference (in grams) between the predicted food weight and the actual weight in the ground truth. A lower MAE signifies more accurate weight predictions.
+
+        ### Benchmark Dataset
+
+        The **Food Portion Benchmark dataset** is a comprehensive dataset for evaluating object detection and food-weight estimation models. Key details:
+
+        - **Dataset Composition:**
+          It contains 14,083 RGB images of food items spanning 133 distinct classes. Each food item comes with manually annotated bounding boxes and precise weight measurements.
+
+        - **Portion Sizes:**
+          Each food item is annotated at three portion sizes (big, average, small), reflecting the real-world variation in food serving sizes.
+
+        - **Annotations:**
+          The ground-truth annotations include the food item's image name, class, bounding-box coordinates in YOLO format, and weight in grams (a conversion sketch follows the diff).
+
+        - **Reference and Access:**
+          You can explore and download the dataset on Hugging Face:
+          [Food Portion Benchmark Dataset](https://huggingface.co/datasets/issai/Food_Portion_Benchmark)
+
+        ### Additional Notes
+
+        - **Submission Requirements:**
+          Prediction CSV files must contain the columns image_name, class_id, xmin, ymin, xmax, ymax, weight, conf (a CSV example follows the diff).
+
+        - **Evaluation Process** (a scoring sketch follows the diff):
+          - **mAP@50** is computed via the COCO evaluation API from the pycocotools library, which compares the predicted bounding boxes (along with their confidence scores) to the ground-truth annotations.
+          - **Weight MAE** is computed with scikit-learn's mean_absolute_error function.
+
+        - **Contact Information:**
+          For questions about the dataset or evaluation methodology, please refer to the dataset documentation on Hugging Face or contact our support team.
  """
337
  )
338
+
339
 
340
  # --- Submission Tab ---
341
  with gr.TabItem("🚀 Submit CSV"):
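
To make the mAP@50 criterion in the new description concrete, here is a minimal IoU sketch in plain Python. It is not code from app.py; the function name and the (xmin, ymin, xmax, ymax) box layout are illustrative, chosen to match the submission columns.

```python
def iou(box_a, box_b):
    """Intersection over Union of two (xmin, ymin, xmax, ymax) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# At mAP@50, a prediction is a true positive only when IoU >= 0.50:
print(iou((0, 0, 100, 100), (50, 0, 150, 100)))  # ~0.33 -> not a match
```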
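Since the ground-truth annotations are stored in YOLO format (normalized center, width, height) while the submission CSV uses absolute corner coordinates, a hypothetical converter might look like the following; the function name and argument order are assumptions, not the Space's code.

```python
def yolo_to_corners(cx, cy, w, h, img_w, img_h):
    """Convert a normalized YOLO box (cx, cy, w, h) to absolute
    (xmin, ymin, xmax, ymax) pixel coordinates."""
    return (
        (cx - w / 2) * img_w,  # xmin
        (cy - h / 2) * img_h,  # ymin
        (cx + w / 2) * img_w,  # xmax
        (cy + h / 2) * img_h,  # ymax
    )

# A centered box covering half of each dimension in a 640x480 image:
print(yolo_to_corners(0.5, 0.5, 0.5, 0.5, 640, 480))  # (160.0, 120.0, 480.0, 360.0)
```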
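The CSV example referenced in the submission requirements: only the column names come from the notes above, every value (file name, ids, coordinates, weight, confidence) is a placeholder.

```python
import pandas as pd

# One made-up prediction row with the required columns.
preds = pd.DataFrame(
    [{
        "image_name": "dish_0001.jpg",  # placeholder file name
        "class_id": 7,
        "xmin": 34.0, "ymin": 58.0, "xmax": 212.0, "ymax": 240.0,
        "weight": 185.4,  # predicted weight in grams
        "conf": 0.91,     # detection confidence
    }],
    columns=["image_name", "class_id", "xmin", "ymin", "xmax", "ymax", "weight", "conf"],
)
preds.to_csv("submission.csv", index=False)
```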
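Finally, the scoring sketch: the evaluation notes name pycocotools' COCO evaluation API for mAP@50 and scikit-learn's mean_absolute_error for the weights. The JSON file names and the weight lists below are assumptions; the Space's real scoring lives in app.py.

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval
from sklearn.metrics import mean_absolute_error

# COCO-format ground truth and detections (file names are assumed).
coco_gt = COCO("ground_truth.json")
coco_dt = coco_gt.loadRes("predictions.json")

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()
map50 = evaluator.stats[1]  # stats[1] is AP at IoU=0.50

# Weight MAE over aligned ground-truth / predicted weights (placeholder grams).
weight_mae = mean_absolute_error([120.0, 85.5], [110.0, 90.0])
print(f"mAP@50 = {map50:.3f}, Weight MAE = {weight_mae:.2f} g")
```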