TJIET committed on
Commit f88c3dd · verified · 1 Parent(s): 3baa3fd

Update README.md

Files changed (1)
  1. README.md +20 -2
README.md CHANGED
````diff
@@ -139,6 +139,24 @@ FruitBench/
 │ ├── Banana.json
 │ └── ...
 ```
+
+## Evaluation
+We evaluate a total of **15 multimodal models** of different types and sizes, covering diverse model architectures, parameter scales, and vision-language capabilities. The evaluated models include:
+
+- CogVLM2-Llama3-Chat
+- DeepSeek-VL-Chat
+- DeepSeek-VL2
+- InternVL2_5
+- Janus-Pro
+- Mantis-siglip-llama3
+- Mantis-Idefics2
+- MiniCPM-Llama3-V2_5
+- MiniCPM-o-2.6
+- mPLUG-OWL3
+- Qwen2.5-VL-Instruct
+- Yi-VL
+*(15 models in total, with various types and sizes)*
+
 ## ⚙️ Environment Setup
 
 We provide both `conda` and `pip` setup options (Python 3.11 recommended).
@@ -165,9 +183,9 @@ git clone https://huggingface.co/datasets/TJIET/FruitBench
 
 ```
 ### 3. Evaluate Models
-
+As an example, the evaluation command for **CogVLM2-Llama3-Chat** is:
 ```bash
-python scripts/evaluate_models.py
+python scripts/DeepSeekVL-chat-0-shot.py
 ```
 ## 📊 Benchmark Details
 - ✅ 3,200 annotated fruit images
````
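The per-fruit annotation files in the tree above (`Banana.json`, etc.) lend themselves to a simple per-class accuracy aggregation. As a minimal sketch, assuming (hypothetically, since the repository does not document the schema) that each JSON file holds a list of records with `image`, `label`, and `prediction` fields:

```python
import json
from pathlib import Path


def score_annotations(ann_dir: Path) -> dict:
    """Aggregate per-fruit accuracy from annotation JSON files.

    Assumes (hypothetically) each *.json holds a list of records like
    {"image": "...", "label": "Banana", "prediction": "Banana"}.
    """
    results = {}
    for ann_file in sorted(ann_dir.glob("*.json")):
        records = json.loads(ann_file.read_text())
        correct = sum(r["prediction"] == r["label"] for r in records)
        # File stem (e.g. "Banana") names the fruit class.
        results[ann_file.stem] = correct / len(records) if records else 0.0
    return results


if __name__ == "__main__":
    # Tiny self-contained demo with fabricated records.
    import tempfile

    with tempfile.TemporaryDirectory() as d:
        demo = [
            {"image": "img_0.jpg", "label": "Banana", "prediction": "Banana"},
            {"image": "img_1.jpg", "label": "Banana", "prediction": "Apple"},
        ]
        (Path(d) / "Banana.json").write_text(json.dumps(demo))
        print(score_annotations(Path(d)))  # {'Banana': 0.5}
```

The field names and scoring rule here are illustrative only; the actual evaluation scripts in `scripts/` define the authoritative protocol.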