---
task_categories:
- image-text-to-text
license: cc-by-nc-4.0
tags:
- multimodal
- llm
- vision-language
- visual-reasoning
- tree-search
---

# ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration

This repository contains the evaluation data for **ZoomEye**, a method presented in the paper [ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration](https://huggingface.co/papers/2411.16044).

ZoomEye proposes a training-free, model-agnostic tree search algorithm tailored for vision-level reasoning. It addresses a key limitation of existing Multimodal Large Language Models (MLLMs), which operate on fixed visual inputs and struggle with images containing numerous fine-grained elements. By treating an image as a hierarchical tree structure, ZoomEye enables MLLMs to simulate human-like zooming behavior, navigating from root to leaf nodes to gather the detailed visual cues necessary for accurate decision-making.
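
To make the tree view of an image concrete, the sketch below shows a heavily simplified version of the zooming idea, not the official ZoomEye implementation: an image region is a node, its children are sub-crops, and the search descends toward the crop judged most relevant to the question. The `score_relevance` function is a hypothetical placeholder for the MLLM confidence query that ranks candidate views.

```python
# Simplified sketch of tree-based zooming (not the official ZoomEye code).
from PIL import Image

def children(box, min_size=336):
    """Split a (left, top, right, bottom) box into four quadrant crops."""
    l, t, r, b = box
    if min(r - l, b - t) <= min_size:          # leaf node: stop splitting
        return []
    cx, cy = (l + r) // 2, (t + b) // 2
    return [(l, t, cx, cy), (cx, t, r, cy), (l, cy, cx, b), (cx, cy, r, b)]

def score_relevance(image, box, question):
    """Hypothetical placeholder: in ZoomEye this role is played by the MLLM's
    own confidence that the cropped view contains the needed evidence."""
    return 0.0

def zoom_search(image, question):
    """Greedy root-to-leaf descent over the image tree."""
    box = (0, 0, image.width, image.height)
    path = [box]
    while True:
        kids = children(box)
        if not kids:
            return [image.crop(b) for b in path]   # zoomed views along the path
        box = max(kids, key=lambda b: score_relevance(image, b, question))
        path.append(box)

# Example: views = zoom_search(Image.open("demo/demo.jpg"), "What is the color of the soda can?")
```

The greedy single-path descent here is purely illustrative; ZoomEye's actual tree search explores and ranks candidate views more thoroughly before answering.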

This dataset supports the evaluation of MLLMs on a series of high-resolution benchmarks, on which ZoomEye delivers consistent performance improvements across various models.

* **Paper:** [https://huggingface.co/papers/2411.16044](https://huggingface.co/papers/2411.16044)
* **Project Page:** [https://szhanz.github.io/zoomeye/](https://szhanz.github.io/zoomeye/)
* **Code:** [https://github.com/om-ai-lab/ZoomEye](https://github.com/om-ai-lab/ZoomEye)

## Evaluation Data Preparation

The core evaluation data used in the ZoomEye paper (including V* Bench and HR-Bench) is packaged together in this repository.

1. **Download Data**: The evaluation data is provided [here](https://huggingface.co/datasets/omlab/zoom_eye_data). After downloading, unzip it; the path to the unzipped data is referred to as **`<anno path>`**. (A programmatic download sketch is shown after this list.)

2. **[Optional] MME-RealWorld Benchmark**: If you wish to evaluate ZoomEye on the MME-RealWorld Benchmark, follow these steps:
   * Follow the instructions in [this repository](https://github.com/yfzhang114/MME-RealWorld) to download the images.
   * Extract the images to the `<anno path>/mme-realworld` directory.
   * Place the `annotation_mme-realworld.json` file from [this link](https://huggingface.co/datasets/omlab/zoom_eye_data) into `<anno path>/mme-realworld`.
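
For reference, a minimal sketch of fetching the evaluation data programmatically with `huggingface_hub` (assuming the package is installed) is shown below; the local directory name is only an example, and any archives inside still need to be unzipped as described in step 1.

```python
# Possible programmatic alternative to a manual download (assumes the
# `huggingface_hub` package is installed). The chosen directory becomes
# your `<anno path>` once any contained archives are unzipped.
from huggingface_hub import snapshot_download

anno_path = snapshot_download(
    repo_id="omlab/zoom_eye_data",
    repo_type="dataset",
    local_dir="zoom_eye_data",   # example location; any path works
)
print(f"Evaluation data downloaded to: {anno_path}")
```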

### Folder Tree

The expected folder structure after preparation is as follows:

```
zoom_eye_data
├── hr-bench_4k
│   ├── annotation_hr-bench_4k.json
│   └── images/
│       ├── some.jpg
│       ...
├── hr-bench_8k
│   ├── annotation_hr-bench_8k.json
│   └── images/
│       ├── some.jpg
│       ...
├── vstar
│   ├── annotation_vstar.json
│   ├── direct_attributes/
│   │   ├── some.jpg
│   │   ...
│   └── relative_positions/
│       ├── some.jpg
│       ...
└── mme-realworld
    ├── annotation_mme-realworld.json
    ├── AutonomousDriving/
    ├── MME-HD-CN/
    ├── monitoring_images/
    ├── ocr_cc/
    ├── remote_sensing/
    ...
```
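
After preparation, a short sanity check like the one below can confirm that the annotation files load correctly. The JSON schema is not documented in this card, so the snippet only inspects the top-level structure rather than assuming specific field names; `anno_path` should point to your `<anno path>`.

```python
# Sanity-check one of the prepared annotation files (schema not assumed).
import json
from pathlib import Path

anno_path = Path("zoom_eye_data")               # your <anno path>
anno_file = anno_path / "vstar" / "annotation_vstar.json"

with open(anno_file, "r", encoding="utf-8") as f:
    records = json.load(f)

first = records[0] if isinstance(records, list) else records
print(f"{anno_file.name}: {len(records)} entries")
print("Top-level fields of the first entry:", list(first.keys()))
```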

## Sample Usage

### 1. Run the Python Demo

We provide a demo script for ZoomEye that accepts any input image-question pair. The zoomed views produced by ZoomEye are saved to the demo folder.

```bash
python ZoomEye/demo.py \
    --model-path lmms-lab/llava-onevision-qwen2-7b-ov \
    --input_image demo/demo.jpg \
    --question "What is the color of the soda can?"
```

### 2. Run the Gradio Demo

We also provide a Gradio demo. Run the script below and open `http://127.0.0.1:7860/` in your browser.

```bash
python gdemo_gradio.py
```

## Citation

If you find this repository helpful to your research, please cite our paper:

```bibtex
@article{shen2024zoomeye,
  title={ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration},
  author={Shen, Haozhan and Zhao, Kangjia and Zhao, Tiancheng and Xu, Ruochen and Zhang, Zilun and Zhu, Mingwei and Yin, Jianwei},
  journal={arXiv preprint arXiv:2411.16044},
  year={2024}
}
```