ninjals committed on
Commit
8dd5f09
·
verified ·
1 Parent(s): e8ce293

Uploading FoodExtract-Vision demo app.py from YouTube tutorial video

Files changed (3)
  1. README.md +20 -6
  2. app.py +106 -0
  3. requirements.txt +6 -0
README.md CHANGED
@@ -1,12 +1,26 @@
 ---
-title: FoodExtract Vision V1
-emoji: 👍
-colorFrom: yellow
-colorTo: red
+title: FoodExtract-Vision Fine-tuned VLM Structured Data Extractor
+emoji: 🍟➡️📝
+colorFrom: green
+colorTo: blue
 sdk: gradio
-sdk_version: 6.5.1
 app_file: app.py
 pinned: false
+license: apache-2.0
 ---

-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+Fine-tuned SmolVLM2-500M to extract food and drink items from images.
+
+Input can be any kind of image, and the output will be a formatted string such as the following:
+
+```json
+{'is_food': 0, 'image_title': '', 'food_items': [], 'drink_items': []}
+```
+
+Or, for an image of food:
+
+```json
+{'is_food': 1, 'image_title': 'fried calamari', 'food_items': ['fried calamari'], 'drink_items': []}
+```
+
+Note: This README.md was authored in a live tutorial recorded for YouTube (link coming soon).
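
The example outputs above use single quotes, so despite the `json` fences they are not strict JSON and `json.loads` would reject them. A minimal parsing sketch (my addition, not part of the Space): `ast.literal_eval` handles Python-style literals safely.

```python
import ast

# The extractor's output string uses single quotes (Python repr style),
# so parse it with ast.literal_eval rather than json.loads.
raw = "{'is_food': 1, 'image_title': 'fried calamari', 'food_items': ['fried calamari'], 'drink_items': []}"
parsed = ast.literal_eval(raw)
print(parsed["food_items"])  # ['fried calamari']
```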
app.py ADDED
@@ -0,0 +1,106 @@
+import torch
+import gradio as gr
+
+import spaces
+from transformers import pipeline
+
+BASE_MODEL_ID = "HuggingFaceTB/SmolVLM2-500M-Video-Instruct"
+# FINE_TUNED_MODEL_ID = "mrdbourke/FoodExtract-Vision-SmolVLM2-500M-fine-tune-v1"
+FINE_TUNED_MODEL_ID = "ninjals/FoodExtract-Vision-SmolVLM2-500M-fine-tune-v1-VIDEO"
+OUTPUT_TOKENS = 256
+
+# Load original base model (no fine-tuning)
+print("[INFO] Loading Original Model")
+original_pipeline = pipeline(
+    "image-text-to-text",
+    model=BASE_MODEL_ID,
+    dtype=torch.bfloat16,
+    device_map="auto"
+)
+
+# Load fine-tuned model
+print("[INFO] Loading Fine-tuned Model")
+ft_pipe = pipeline(
+    "image-text-to-text",
+    model=FINE_TUNED_MODEL_ID,
+    dtype=torch.bfloat16,
+    device_map="auto"
+)
+
+def create_message(input_image):
+    return [{'role': 'user',
+             'content': [{'type': 'image',
+                          'image': input_image},
+                         {'type': 'text',
+                          'text': "Classify the given input image into food or not and if edible food or drink items are present, extract those to a list. If no food/drink items are visible, return empty lists.\n\nOnly return valid JSON in the following form:\n\n```json\n{\n 'is_food': 0, # int - 0 or 1 based on whether food/drinks are present (0 = no foods visible, 1 = foods visible)\n 'image_title': '', # str - short food-related title for what foods/drinks are visible in the image, leave blank if no foods present\n 'food_items': [], # list[str] - list of visible edible food item nouns\n 'drink_items': [] # list[str] - list of visible edible drink item nouns\n}\n```\n"}]}]
+
+@spaces.GPU
+def extract_foods_from_image(input_image):
+    input_image = input_image.resize(size=(512, 512))
+    input_message = create_message(input_image=input_image)
+
+    # Get outputs from base model (not fine-tuned)
+    original_pipeline_output = original_pipeline(text=[input_message],
+                                                 max_new_tokens=OUTPUT_TOKENS)
+    outputs_pretrained = original_pipeline_output[0][0]["generated_text"][-1]["content"]
+
+    # Get outputs from fine-tuned model (fine-tuned on food images)
+    ft_pipe_output = ft_pipe(text=[input_message],
+                             max_new_tokens=OUTPUT_TOKENS)
+    outputs_fine_tuned = ft_pipe_output[0][0]["generated_text"][-1]["content"]
+
+    return outputs_pretrained, outputs_fine_tuned
+
+demo_title = "🥑➡️📝 FoodExtract-Vision with a fine-tuned SmolVLM2-500M"
+demo_description = """* **Base model:** https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct
+* **Fine-tuning dataset:** https://huggingface.co/datasets/mrdbourke/FoodExtract-1k-Vision (1k food images and 500 not-food images)
+* **Fine-tuned model:** https://huggingface.co/mrdbourke/FoodExtract-Vision-SmolVLM2-500M-fine-tune-v1
+
+## Overview
+
+Extract food and drink items in a structured way from images.
+
+The original model's outputs fail to capture the desired structure, but the fine-tuned model sticks to the output structure quite well.
+
+However, the fine-tuned model could definitely be improved with respect to its ability to extract the right food/drink items.
+
+Both models use the input prompt:
+
+````
+Classify the given input image into food or not and if edible food or drink items are present, extract those to a list. If no food/drink items are visible, return empty lists.
+
+Only return valid JSON in the following form:
+
+```json
+{
+ 'is_food': 0, # int - 0 or 1 based on whether food/drinks are present (0 = no foods visible, 1 = foods visible)
+ 'image_title': '', # str - short food-related title for what foods/drinks are visible in the image, leave blank if no foods present
+ 'food_items': [], # list[str] - list of visible edible food item nouns
+ 'drink_items': [] # list[str] - list of visible edible drink item nouns
+}
+```
+````
+
+Except one model has been fine-tuned on the structured data whereas the other hasn't.
+
+Notable next steps would be:
+* **Remove the input prompt:** Just train the model to go straight from image -> text (no text prompt on input); this would save on inference tokens.
+* **Fine-tune on more real-world data:** Right now the model is only trained on 1k food images (from Food101) and 500 not-food images (random internet images); training on real-world data would likely significantly improve performance.
+* **Fix the repetitive generation:** The model can sometimes get stuck in a repetitive generation pattern, e.g. "onions", "onions", "onions", etc. We could look into patterns to help reduce this.
+"""
+
+demo = gr.Interface(
+    fn=extract_foods_from_image,
+    inputs=gr.Image(type="pil"),
+    title=demo_title,
+    description=demo_description,
+    outputs=[gr.Textbox(lines=4, label="Original Model (not fine-tuned)"),
+             gr.Textbox(lines=4, label="Fine-tuned Model")],
+    examples=[["examples/camera.jpeg"],
+              ["examples/Tandoori-Chicken.jpg"],
+              ["examples/fries.jpeg"]],
+)
+
+if __name__ == "__main__":
+    demo.launch(share=False)
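
The chained indexing `[0][0]["generated_text"][-1]["content"]` in `extract_foods_from_image` is easy to misread. The sketch below walks through it against a mocked result; the nesting shown is an assumption about the `image-text-to-text` pipeline's return shape when called with a one-element batch (outer list per batched input, inner list per returned sequence, `generated_text` holding the chat history with the assistant's reply last), not taken from the Space itself.

```python
# Mocked pipeline result (assumed shape, for illustration only).
mock_output = [[{
    "generated_text": [
        {"role": "user", "content": "Classify the given input image..."},
        {"role": "assistant",
         "content": "{'is_food': 1, 'image_title': 'fries', 'food_items': ['fries'], 'drink_items': []}"},
    ]
}]]

# Same indexing as app.py: first input, first sequence,
# last chat turn, its text content.
reply = mock_output[0][0]["generated_text"][-1]["content"]
print(reply)
```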
requirements.txt ADDED
@@ -0,0 +1,6 @@
+num2words
+transformers
+torch
+accelerate
+gradio
+torchvision