---
task_categories:
  - image-text-to-text
license: cc-by-nc-4.0
tags:
  - multimodal
  - llm
  - vision-language
  - visual-reasoning
  - tree-search
---

# ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration

This repository contains the evaluation data for ZoomEye, a method presented in the paper *ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration*.

ZoomEye proposes a training-free, model-agnostic tree search algorithm tailored for vision-level reasoning. It addresses the limitations of existing Multimodal Large Language Models (MLLMs) that operate on fixed visual inputs, especially when dealing with images containing numerous fine-grained elements. By treating an image as a hierarchical tree structure, ZoomEye enables MLLMs to simulate human-like zooming behavior, navigating from root to leaf nodes to gather detailed visual cues necessary for accurate decision-making.
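
Conceptually, the search proceeds over an implicit quadtree of views: starting from the full image, the most promising sub-view is repeatedly zoomed into until a sufficiently detailed view is reached. The snippet below is a minimal, hypothetical sketch of that idea, not the actual ZoomEye implementation; `zoom_search`, `children`, and the toy `score_fn` are placeholder names, and a real system would use the MLLM's own confidence to rank candidate views.

```python
from PIL import Image

def children(box):
    """Split a (left, top, right, bottom) box into four quadrant sub-views."""
    l, t, r, b = box
    cx, cy = (l + r) // 2, (t + b) // 2
    return [(l, t, cx, cy), (cx, t, r, cy), (l, cy, cx, b), (cx, cy, r, b)]

def zoom_search(image, score_fn, min_size=224, max_depth=3):
    """Greedy root-to-leaf search over the implicit image tree.

    At each step the highest-scoring quadrant (as judged by score_fn,
    standing in for the MLLM's confidence) is zoomed into, until the
    view is small enough or max_depth is reached.
    """
    box = (0, 0, image.width, image.height)
    visited = [box]
    for _ in range(max_depth):
        if min(box[2] - box[0], box[3] - box[1]) <= min_size:
            break
        box = max(children(box), key=lambda b: score_fn(image.crop(b)))
        visited.append(box)
    return visited  # coarse-to-fine sequence of zoomed views

if __name__ == "__main__":
    img = Image.new("RGB", (2048, 2048))
    # Toy score (image entropy); a real system would query an MLLM here.
    print(zoom_search(img, score_fn=lambda crop: crop.entropy()))
```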

This dataset supports the evaluation of MLLMs on a series of high-resolution benchmarks, demonstrating consistent performance improvements for various models.

## Evaluation Data Preparation

The core evaluation data (including V* Bench and HR-Bench) used in the ZoomEye paper has been packaged together.

1. **Download Data:** The evaluation data is provided here. After downloading, please unzip it. The path to the unzipped data is referred to as `<anno path>`; a minimal unpacking sketch is shown after these steps.

2. **[Optional] MME-RealWorld Benchmark:** If you wish to evaluate ZoomEye on the MME-RealWorld benchmark, follow these steps:

   - Follow the instructions in this repository to download the images.
   - Extract the images to the `<anno path>/mme-realworld` directory.
   - Place the `annotation_mme-realworld.json` file from this link into `<anno path>/mme-realworld`.
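
A minimal unpacking sketch, assuming the downloaded archive is named `zoom_eye_data.zip` and extracts to a top-level `zoom_eye_data` folder (adjust the names to what you actually downloaded):

```python
import zipfile
from pathlib import Path

archive = Path("zoom_eye_data.zip")   # assumed archive name
anno_path = Path("zoom_eye_data")     # this becomes <anno path>

# Extract next to the archive; the zip is assumed to contain a
# top-level zoom_eye_data/ folder.
with zipfile.ZipFile(archive) as zf:
    zf.extractall(anno_path.parent)

print("Annotation files found:", sorted(anno_path.glob("*/annotation_*.json")))
```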

### Folder Tree

The expected folder structure after preparation is as follows:

```
zoom_eye_data
β”œβ”€β”€ hr-bench_4k
β”‚   β”œβ”€β”€ annotation_hr-bench_4k.json
β”‚   └── images/
β”‚       β”œβ”€β”€ some.jpg
β”‚       └── ...
β”œβ”€β”€ hr-bench_8k
β”‚   β”œβ”€β”€ annotation_hr-bench_8k.json
β”‚   └── images/
β”‚       β”œβ”€β”€ some.jpg
β”‚       └── ...
β”œβ”€β”€ vstar
β”‚   β”œβ”€β”€ annotation_vstar.json
β”‚   β”œβ”€β”€ direct_attributes/
β”‚   β”‚   β”œβ”€β”€ some.jpg
β”‚   β”‚   └── ...
β”‚   └── relative_positions/
β”‚       β”œβ”€β”€ some.jpg
β”‚       └── ...
└── mme-realworld
    β”œβ”€β”€ annotation_mme-realworld.json
    β”œβ”€β”€ AutonomousDriving/
    β”œβ”€β”€ MME-HD-CN/
    β”œβ”€β”€ monitoring_images/
    β”œβ”€β”€ ocr_cc/
    └── remote_sensing/
```
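
A quick way to sanity-check the layout, using only the paths shown in the tree above (the `mme-realworld` entry will be missing unless you completed the optional step):

```python
from pathlib import Path

anno_path = Path("zoom_eye_data")  # the unzipped <anno path>

expected = {
    "hr-bench_4k": "annotation_hr-bench_4k.json",
    "hr-bench_8k": "annotation_hr-bench_8k.json",
    "vstar": "annotation_vstar.json",
    "mme-realworld": "annotation_mme-realworld.json",  # optional benchmark
}

for subdir, annotation in expected.items():
    f = anno_path / subdir / annotation
    status = "ok" if f.is_file() else "MISSING"
    print(f"{status:8s} {f}")
```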

## Sample Usage

### 1. Run the Python Demo

We provide a demo script for ZoomEye that accepts any input image-question pair. The zoomed views produced by ZoomEye are saved into the `demo` folder.

```bash
python ZoomEye/demo.py \
    --model-path lmms-lab/llava-onevision-qwen2-7b-ov \
    --input_image demo/demo.jpg \
    --question "What is the color of the soda can?"
```
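
To run the demo over several image-question pairs, one simple pattern is to call the script in a loop, for example via `subprocess`; the pairs below are placeholders:

```python
import subprocess

# Placeholder image-question pairs; replace with your own.
pairs = [
    ("demo/demo.jpg", "What is the color of the soda can?"),
    ("demo/another.jpg", "What is written on the street sign?"),
]

for image, question in pairs:
    subprocess.run(
        [
            "python", "ZoomEye/demo.py",
            "--model-path", "lmms-lab/llava-onevision-qwen2-7b-ov",
            "--input_image", image,
            "--question", question,
        ],
        check=True,
    )
```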

### 2. Run the Gradio Demo

We also provide a Gradio Demo. Run the script and open http://127.0.0.1:7860/ in your browser.

```bash
python gdemo_gradio.py
```

## Citation

If you find this repository helpful to your research, please cite our paper:

```bibtex
@article{shen2024zoomeye,
  title={ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration},
  author={Shen, Haozhan and Zhao, Kangjia and Zhao, Tiancheng and Xu, Ruochen and Zhang, Zilun and Zhu, Mingwei and Yin, Jianwei},
  journal={arXiv preprint arXiv:2411.16044},
  year={2024}
}
```