---
license: cc-by-nc-4.0
---
|
|
# 🧪 Multimodal Benchmark
|
|
|
|
|
This repository provides a benchmark suite for evaluating Multimodal Large Language Models (MLLMs) across a variety of visual-language tasks.
|
|
|
|
|
---
|
|
|
|
|
## 📁 Directory Structure
|
|
|
|
|
### `/data`
|
|
This folder contains all benchmark images and task-specific JSON files. Each JSON file defines the input and expected output format for a given task.
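
The exact schema varies by task, but a typical entry pairs an image with a question and a reference answer. A minimal sketch of loading a task file is shown below; the file name and the field names (`image`, `question`, `answer`) are illustrative assumptions, not the exact schema.

```python
import json
from pathlib import Path

# Load one task definition from /data (file name and field names are assumptions).
task_path = Path("data/example_task.json")
with task_path.open("r", encoding="utf-8") as f:
    samples = json.load(f)

for sample in samples:
    # Hypothetical fields: an image path, a question, and a reference answer.
    image = Path("data") / sample["image"]
    question = sample["question"]
    reference = sample["answer"]
    print(image, question, reference)
```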
|
|
|
|
|
### `/run`
|
|
This folder includes example scripts that demonstrate how to run different MLLMs on the benchmark tasks.
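
The scripts differ per model, but they follow a common pattern: load a task file, query the model for each sample, and store the prediction in a `response` field. Below is a minimal sketch of that loop; `query_model` is a placeholder for whatever inference API your MLLM exposes, and the field names are assumptions.

```python
import json

def query_model(image_path: str, question: str) -> str:
    """Placeholder: replace with a real MLLM call (API request or local inference)."""
    return "model answer goes here"

with open("data/example_task.json", "r", encoding="utf-8") as f:
    samples = json.load(f)

for sample in samples:
    # Record the prediction alongside the original entry.
    sample["response"] = query_model(sample["image"], sample["question"])

with open("example_task_output.json", "w", encoding="utf-8") as f:
    json.dump(samples, f, ensure_ascii=False, indent=2)
```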
|
|
|
|
|
---
|
|
|
|
|
## 📄 Result Collection
|
|
|
|
|
After inference, all task JSON outputs should be merged into a single file named `result.json`.

Each entry in `result.json` includes a `response` field that stores the model's prediction.
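
One straightforward way to build `result.json` is to concatenate the per-task output lists. The sketch below assumes each task output is a JSON list and that the output files follow a `*_output.json` naming convention; adjust the glob pattern to match your own file names.

```python
import json
from pathlib import Path

merged = []
# Collect every per-task output file (the glob pattern is an assumption about naming).
for output_file in sorted(Path(".").glob("*_output.json")):
    with output_file.open("r", encoding="utf-8") as f:
        merged.extend(json.load(f))

# Write the combined predictions to result.json for evaluation.
with open("result.json", "w", encoding="utf-8") as f:
    json.dump(merged, f, ensure_ascii=False, indent=2)
```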
|
|
|
|
|
---
|
|
|
|
|
## 📊 Evaluation
|
|
|
|
|
The predictions stored in `result.json` can be evaluated using `metric.py`.

This script computes performance metrics by comparing the predicted responses with the reference answers.
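
The details of the scoring depend on the task, but the core idea is a comparison between each `response` and its reference answer. The snippet below illustrates this with a simple exact-match accuracy; it is an illustration only, not the actual logic in `metric.py`, and the `answer` field name is an assumption.

```python
import json

with open("result.json", "r", encoding="utf-8") as f:
    results = json.load(f)

# Exact-match accuracy: a prediction counts as correct if it equals the
# reference answer after lowercasing and stripping whitespace.
correct = sum(
    entry["response"].strip().lower() == entry["answer"].strip().lower()
    for entry in results
)
print(f"Accuracy: {correct / len(results):.4f}")
```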
|
|
|
|
|
---
|
|
|
|
|
## 💡 Ad Understanding Task
|
|
|
|
|
The **Ad Understanding** task requires an additional LLM-based preprocessing step before evaluation.

An example of deploying a language model for this purpose is provided in [`gpt_judge.py`](./gpt_judge.py).
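
The general pattern is to have a language model compare the free-form model response against the reference answer and emit a normalized verdict that can then be scored. The sketch below illustrates this with the OpenAI Chat Completions API; the model name, prompt, and field names are assumptions and may differ from what `gpt_judge.py` actually does.

```python
from openai import OpenAI

client = OpenAI()  # Reads OPENAI_API_KEY from the environment.

def judge(question: str, reference: str, response: str) -> str:
    """Ask an LLM whether the response matches the reference answer."""
    prompt = (
        f"Question: {question}\n"
        f"Reference answer: {reference}\n"
        f"Model response: {response}\n"
        "Reply with 'correct' or 'incorrect' only."
    )
    completion = client.chat.completions.create(
        model="gpt-4o",  # Model name is an assumption.
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return completion.choices[0].message.content.strip().lower()
```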
|
|
|
|
|
---
|
|
|
|
|
|
|
|
|