---
license: cc-by-nc-4.0
---
# 🧪 Multimodal Benchmark

This repository provides a benchmark suite for evaluating Multimodal Large Language Models (MLLMs) across a variety of vision-language tasks.

---

## 📁 Directory Structure

### `/data`
This folder contains all benchmark images and task-specific JSON files. Each JSON file defines the input and expected output format for a given task.

### `/run`
This folder includes example scripts that demonstrate how to run different MLLMs on the benchmark tasks.

---

## 📄 Result Collection

After inference, all task JSON outputs should be merged into a single file named `result.json`.  
Each entry in `result.json` includes a `response` field that stores the model's prediction.

---

## 📊 Evaluation

The predictions stored in `result.json` can be evaluated using `metric.py`.  
This script computes performance metrics by comparing the predicted responses with the reference answers.

---

## 💡 Ad Understanding Task

The **Ad Understanding** task requires an additional LLM-based preprocessing step before evaluation.  
An example of deploying a language model for this purpose is provided in [`gpt_judge.py`](./gpt_judge.py).
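The deployment details are defined in `gpt_judge.py`; purely as an illustration of what such a preprocessing step involves, the hypothetical helper below builds a judge-style prompt that asks an LLM to normalize a free-form model response before scoring. The prompt wording and function name are assumptions, not taken from the repository.

```python
def build_judge_prompt(question, model_response):
    """Build an LLM-judge prompt that asks for a normalized answer.

    Illustrative only: the actual prompt and model deployment are
    defined in gpt_judge.py.
    """
    return (
        "You are grading an answer to an advertisement-understanding "
        "question.\n"
        f"Question: {question}\n"
        f"Model response: {model_response}\n"
        "Rewrite the response as a single concise answer phrase."
    )
```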

---