---
task_categories:
- image-text-to-text
license: cc-by-nc-4.0
tags:
- multimodal
- llm
- vision-language
- visual-reasoning
- tree-search
---
# ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration
This repository contains the evaluation data for ZoomEye, a method presented in the paper ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration.
ZoomEye proposes a training-free, model-agnostic tree search algorithm tailored for vision-level reasoning. It addresses the limitations of existing Multimodal Large Language Models (MLLMs) that operate on fixed visual inputs, especially when dealing with images containing numerous fine-grained elements. By treating an image as a hierarchical tree structure, ZoomEye enables MLLMs to simulate human-like zooming behavior, navigating from root to leaf nodes to gather detailed visual cues necessary for accurate decision-making.
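To make the idea concrete, here is a minimal, hypothetical sketch of the image-as-tree search loop. It is not the official implementation: the quadrant splitting, the `score_fn`/`confidence_fn` callbacks (standing in for MLLM-based judgments), and the confidence threshold are all simplifying assumptions for illustration.

```python
# Minimal sketch of the image-as-tree idea behind ZoomEye (illustrative only;
# node splitting, scoring, and stopping rules are simplified assumptions).
from dataclasses import dataclass, field
from PIL import Image

@dataclass
class ImageNode:
    box: tuple                      # (left, top, right, bottom) in pixels
    depth: int = 0
    children: list = field(default_factory=list)

def split(node, min_size=224):
    """Split a node into four quadrant children; stop below a minimum size."""
    l, t, r, b = node.box
    if min(r - l, b - t) < 2 * min_size:
        return []                   # leaf: too small to zoom further
    cx, cy = (l + r) // 2, (t + b) // 2
    boxes = [(l, t, cx, cy), (cx, t, r, cy), (l, cy, cx, b), (cx, cy, r, b)]
    node.children = [ImageNode(box, node.depth + 1) for box in boxes]
    return node.children

def zoom_search(image, question, score_fn, confidence_fn, threshold=0.8):
    """Greedy root-to-leaf descent: zoom into the most promising sub-region
    until the model is confident enough to answer from the current view."""
    node = ImageNode((0, 0, *image.size))
    views = []
    while True:
        view = image.crop(node.box)
        views.append(view)
        if confidence_fn(view, question) >= threshold:
            return views            # enough visual detail gathered to answer
        children = split(node)
        if not children:
            return views            # reached a leaf without enough confidence
        node = max(children, key=lambda c: score_fn(image.crop(c.box), question))
```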
This dataset supports the evaluation of MLLMs on a series of high-resolution benchmarks, demonstrating consistent performance improvements across various models.
- Paper: https://huggingface.co/papers/2411.16044
- Project Page: https://szhanz.github.io/zoomeye/
- Code: https://github.com/om-ai-lab/ZoomEye
## Evaluation Data Preparation
The core evaluation data (including V* Bench and HR-Bench) used in the ZoomEye paper has been packaged together.

1. **Download Data:** The evaluation data is provided here. After downloading, please unzip it. The path to the unzipped data is referred to as `<anno path>`.
2. **[Optional] MME-RealWorld Benchmark:** If you wish to evaluate ZoomEye on the MME-RealWorld Benchmark, follow these steps:
   - Follow the instructions in this repository to download the images.
   - Extract the images to the `<anno path>/mme-realworld` directory.
   - Place the `annotation_mme-realworld.json` file from this link into `<anno path>/mme-realworld`.
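After unzipping, it can help to sanity-check an annotation file before running evaluation. A minimal sketch, assuming the annotation files are JSON lists of sample dicts (the exact schema is not documented here):

```python
import json
from pathlib import Path

# Illustrative path; replace with your actual <anno path>.
anno_path = Path("zoom_eye_data")

with open(anno_path / "vstar" / "annotation_vstar.json") as f:
    annotations = json.load(f)

# Assumes the top level is a list of sample dicts; adjust if it differs.
print(f"Loaded {len(annotations)} V* Bench samples")
print("Fields of first sample:", list(annotations[0].keys()))
```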
## Folder Tree
The expected folder structure after preparation is as follows:
```
zoom_eye_data
├── hr-bench_4k
│   ├── annotation_hr-bench_4k.json
│   └── images/
│       ├── some.jpg
│       ...
├── hr-bench_8k
│   ├── annotation_hr-bench_8k.json
│   └── images/
│       ├── some.jpg
│       ...
├── vstar
│   ├── annotation_vstar.json
│   ├── direct_attributes/
│   │   ├── some.jpg
│   │   ...
│   └── relative_positions/
│       ├── some.jpg
│       ...
└── mme-realworld
    ├── annotation_mme-realworld.json
    ├── AutonomousDriving/
    ├── MME-HD-CN/
    ├── monitoring_images/
    ├── ocr_cc/
    ├── remote_sensing/
    ...
```
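Optionally, you can verify that the expected annotation files are in place before launching evaluation. A small check, assuming the layout above (paths are illustrative):

```python
from pathlib import Path

# Replace with your actual <anno path>.
anno_path = Path("zoom_eye_data")

expected = [
    "hr-bench_4k/annotation_hr-bench_4k.json",
    "hr-bench_8k/annotation_hr-bench_8k.json",
    "vstar/annotation_vstar.json",
    "mme-realworld/annotation_mme-realworld.json",  # only if using MME-RealWorld
]

for rel in expected:
    status = "ok" if (anno_path / rel).exists() else "MISSING"
    print(f"{status:8s} {rel}")
```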
## Sample Usage

### 1. Run the Python Demo

We provide a demo script for ZoomEye that accepts any input image-question pair. The zoomed views produced by ZoomEye will be saved into the `demo` folder.
```bash
python ZoomEye/demo.py \
    --model-path lmms-lab/llava-onevision-qwen2-7b-ov \
    --input_image demo/demo.jpg \
    --question "What is the color of the soda can?"
```
### 2. Run the Gradio Demo

We also provide a Gradio demo. Run the script below and open http://127.0.0.1:7860/ in your browser.

```bash
python gdemo_gradio.py
```
## Citation
If you find this repository helpful to your research, please cite our paper:
```bibtex
@article{shen2024zoomeye,
  title={ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration},
  author={Shen, Haozhan and Zhao, Kangjia and Zhao, Tiancheng and Xu, Ruochen and Zhang, Zilun and Zhu, Mingwei and Yin, Jianwei},
  journal={arXiv preprint arXiv:2411.16044},
  year={2024}
}
```