---
task_categories:
- image-text-to-text
license: cc-by-nc-4.0
tags:
- multimodal
- llm
- vision-language
- visual-reasoning
- tree-search
---

# ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration

This repository contains the evaluation data for **ZoomEye**, a method presented in the paper [ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration](https://huggingface.co/papers/2411.16044).

ZoomEye proposes a training-free, model-agnostic tree search algorithm tailored for vision-level reasoning. It addresses the limitations of existing Multimodal Large Language Models (MLLMs) that operate on fixed visual inputs, especially when dealing with images containing numerous fine-grained elements. By treating an image as a hierarchical tree structure, ZoomEye enables MLLMs to simulate human-like zooming behavior, navigating from root to leaf nodes to gather detailed visual cues necessary for accurate decision-making.
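
To make the search concrete, here is a minimal, hypothetical sketch of the zooming idea in Python. The quadrant split, the `score` callback (a stand-in for the MLLM's confidence that the current view can answer the question), and the threshold/depth parameters are all illustrative assumptions, not the paper's exact algorithm.

```python
# Minimal sketch of tree-based zooming (not the authors' implementation):
# the root node is the full frame, children are sub-regions, and search
# descends while the model judges that a finer view is still needed.
from dataclasses import dataclass
from typing import Callable, List, Tuple

Box = Tuple[int, int, int, int]  # (left, top, right, bottom) in pixels

@dataclass
class ZoomNode:
    box: Box
    depth: int

    def children(self) -> List["ZoomNode"]:
        # Assumption: split into four quadrants; the paper's exact
        # splitting scheme may differ.
        l, t, r, b = self.box
        mx, my = (l + r) // 2, (t + b) // 2
        return [ZoomNode((l, t, mx, my), self.depth + 1),
                ZoomNode((mx, t, r, my), self.depth + 1),
                ZoomNode((l, my, mx, b), self.depth + 1),
                ZoomNode((mx, my, r, b), self.depth + 1)]

def zoom_search(root: ZoomNode,
                score: Callable[[Box], float],
                threshold: float = 0.8,
                max_depth: int = 4) -> Box:
    """Greedy root-to-leaf descent: `score(box)` stands in for the MLLM's
    confidence that the cropped view suffices to answer the question."""
    node = root
    while node.depth < max_depth and score(node.box) < threshold:
        # Zoom into the most promising child view.
        node = max(node.children(), key=lambda c: score(c.box))
    return node.box
```

For a W x H image, `zoom_search(ZoomNode((0, 0, W, H), 0), score_fn)` would return the crop at which the (hypothetical) confidence first clears the threshold, or the best leaf at maximum depth.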

This dataset supports the evaluation of MLLMs on a series of high-resolution benchmarks, demonstrating consistent performance improvements for various models.

*   **Paper:** [https://huggingface.co/papers/2411.16044](https://huggingface.co/papers/2411.16044)
*   **Project Page:** [https://szhanz.github.io/zoomeye/](https://szhanz.github.io/zoomeye/)
*   **Code:** [https://github.com/om-ai-lab/ZoomEye](https://github.com/om-ai-lab/ZoomEye)

## Evaluation Data Preparation

The core evaluation data (including V* Bench and HR-Bench) used in the ZoomEye paper has been packaged together.

1.  **Download Data**: The evaluation data is provided [here](https://huggingface.co/datasets/omlab/zoom_eye_data). After downloading, please unzip it; the path to the unzipped data is referred to as **`<anno path>`** (a programmatic alternative is sketched after this list).

2.  **[Optional] MME-RealWorld Benchmark**: If you wish to evaluate ZoomEye on the MME-RealWorld Benchmark, follow these steps:
    *   Follow the instructions in [this repository](https://github.com/yfzhang114/MME-RealWorld) to download the images.
    *   Extract the images to the `<anno path>/mme-realworld` directory.
    *   Place the `annotation_mme-realworld.json` file from [this link](https://huggingface.co/datasets/omlab/zoom_eye_data) into `<anno path>/mme-realworld`.
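
If you prefer to fetch the data programmatically, a sketch using the `huggingface_hub` client (the repo id comes from the link above; the local usage is an assumption about your setup):

```python
# Sketch: download the dataset repo with huggingface_hub
# (pip install huggingface_hub). The snapshot may still contain
# zip archives that need extracting, as noted above.
from huggingface_hub import snapshot_download

anno_path = snapshot_download(repo_id="omlab/zoom_eye_data",
                              repo_type="dataset")
print(anno_path)  # after unzipping, use this directory as <anno path>
```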

### Folder Tree
The expected folder structure after preparation is as follows:

```
zoom_eye_data
├── hr-bench_4k
│   ├── annotation_hr-bench_4k.json
│   └── images/
│       ├── some.jpg
│       └── ...
├── hr-bench_8k
│   ├── annotation_hr-bench_8k.json
│   └── images/
│       ├── some.jpg
│       └── ...
├── vstar
│   ├── annotation_vstar.json
│   ├── direct_attributes/
│   │   ├── some.jpg
│   │   └── ...
│   └── relative_positions/
│       ├── some.jpg
│       └── ...
└── mme-realworld
    ├── annotation_mme-realworld.json
    ├── AutonomousDriving/
    ├── MME-HD-CN/
    ├── monitoring_images/
    ├── ocr_cc/
    └── remote_sensing/
```
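
As a quick sanity check after preparation, you can load one of the annotation files. The snippet below is illustrative only; the exact annotation schema is not documented on this card, so it inspects the structure rather than assuming field names.

```python
import json
import os

anno_path = "zoom_eye_data"  # your unzipped <anno path>

with open(os.path.join(anno_path, "vstar", "annotation_vstar.json")) as f:
    annotations = json.load(f)

# Peek at the top-level structure instead of assuming a schema.
print(type(annotations), len(annotations))
```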

## Sample Usage

### 1. Run the Python Demo

We provide a demo script for ZoomEye that accepts any input image-question pair. The zoomed views produced by ZoomEye are saved into the `demo` folder.

```bash
python ZoomEye/demo.py \
    --model-path lmms-lab/llava-onevision-qwen2-7b-ov \
    --input_image demo/demo.jpg \
    --question "What is the color of the soda can?"
```

### 2. Run the Gradio Demo

We also provide a Gradio demo. Run the script below and open `http://127.0.0.1:7860/` in your browser.

```bash
python gdemo_gradio.py
```

## Citation

If you find this repository helpful to your research, please cite our paper:

```bibtex
@article{shen2024zoomeye,
  title={ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration},
  author={Shen, Haozhan and Zhao, Kangjia and Zhao, Tiancheng and Xu, Ruochen and Zhang, Zilun and Zhu, Mingwei and Yin, Jianwei},
  journal={arXiv preprint arXiv:2411.16044},
  year={2024}
}
```