Update README.md

README.md CHANGED

@@ -139,6 +139,24 @@ FruitBench/
 │   ├── Banana.json
 │   └── ...
 ```
+
+## Evaluation
+We evaluate a total of **15 multimodal models** of different types and sizes, covering diverse model architectures, parameter scales, and vision-language capabilities. The evaluated models include:
+
+- CogVLM2-Llama3-Chat
+- DeepSeek-VL-Chat
+- DeepSeek-VL2
+- InternVL2_5
+- Janus-Pro
+- Mantis-siglip-llama3
+- Mantis-Idefics2
+- MiniCPM-Llama3-V2_5
+- MiniCPM-o-2.6
+- mPLUG-OWL3
+- Qwen2.5-VL-Instruct
+- Yi-VL
+*(15 models in total, with various types and sizes)*
+
 ## ⚙️ Environment Setup
 
 We provide both `conda` and `pip` setup options (Python 3.11 recommended).

@@ -165,9 +183,9 @@ git clone https://huggingface.co/datasets/TJIET/FruitBench
 
 ```
 ### 3. Evaluate Models
-
+As an example, the evaluation command for **DeepSeek-VL-Chat** is:
 ```bash
-python scripts/
+python scripts/DeepSeekVL-chat-0-shot.py
 ```
 ## 📊 Benchmark Details
 - ✅ 3,200 annotated fruit images