---
license: other
license_name: other
license_link: LICENSE
task_categories:
- visual-question-answering
language:
- en
size_categories:
- n<1K
splits:
- name: val
configs:
- config_name: hrbench_version_split
  data_files:
  - split: hrbench_4k
    path: "hr_bench_4k.parquet"
  - split: hrbench_8k
    path: "hr_bench_8k.parquet"
dataset_info:
  - config_name: hrbench_version_split
    features:
      - name: index
        dtype: int64
      - name: question
        dtype: string
      - name: answer
        dtype: string
      - name: category
        dtype: string
      - name: A
        dtype: string
      - name: B
        dtype: string
      - name: C
        dtype: string
      - name: D
        dtype: string
      - name: cycle_category
        dtype: string
      - name: image
        dtype: string
    splits:
      - name: hrbench_4k
        num_examples: 800
      - name: hrbench_8k
        num_examples: 800
---

# Divide, Conquer and Combine: A Training-Free Framework for High-Resolution Image Perception in Multimodal Large Language Models

[**🌐Homepage**](https://github.com/DreamMr/HR-Bench) | [**📖 Paper**](http://arxiv.org/abs/2408.15556)

## 📊 HR-Bench

We find that the highest resolution in existing multimodal benchmarks is only 2K. To address this lack of high-resolution multimodal benchmarks, we construct **_HR-Bench_**. **_HR-Bench_** consists of two sub-tasks: **_Fine-grained Single-instance Perception (FSP)_** and **_Fine-grained Cross-instance Perception (FCP)_**. The **_FSP_** task includes 100 samples covering attribute recognition, OCR, and visual prompting. The **_FCP_** task also comprises 100 samples, encompassing map analysis, chart analysis, and spatial relationship assessment. The figure below visualizes examples from **_HR-Bench_**.

<p align="center">
    <img src="https://raw.githubusercontent.com/DreamMr/HR-Bench/main/resources/case_study_dataset_13.png" width="50%"> <br>
</p>


**_HR-Bench_** is available in two versions: **_HR-Bench 8K_** and **_HR-Bench 4K_**. **_HR-Bench 8K_** contains images with an average resolution of 8K. To build **_HR-Bench 4K_**, we manually annotate the coordinates of the objects relevant to each question within the 8K images and crop these images down to 4K resolution.

## 👨‍💻Divide, Conquer and Combine

We observe that most current MLLMs (e.g., LLaVA-v1.5) perceive images at a fixed resolution (e.g., 336x336). This simplification often causes significant visual information loss on high-resolution inputs. Based on this finding, we propose a novel training-free framework: **D**ivide, **C**onquer and **C**ombine (**$DC^2$**). We first recursively split an image into patches until they reach the resolution expected by the pretrained vision encoder (e.g., 336x336), merging similar patches for efficiency (**Divide**). Next, we use the MLLM to generate a text description for each image patch and extract the objects mentioned in those descriptions (**Conquer**). Finally, we filter out hallucinated objects introduced by image division and store the coordinates of the image patches in which each object appears (**Combine**). During the inference stage, we retrieve the image patches relevant to the user prompt to provide accurate text descriptions.
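The recursive split in the **Divide** step can be sketched as a simple recursion over bounding boxes. This is a minimal illustration under our own assumptions, not the paper's actual implementation, and it omits the similarity-based patch merging:

```python
# Hypothetical sketch of the "Divide" step: recursively halve a region
# along its longer side until every patch fits the vision encoder's
# input size (e.g., 336x336). Function and variable names are
# illustrative, not from the official codebase.

def divide(x, y, w, h, target=336):
    """Return (x, y, w, h) boxes tiling the region, each side <= target."""
    if w <= target and h <= target:
        return [(x, y, w, h)]
    boxes = []
    if w >= h:  # split the longer side in half
        half = w // 2
        boxes += divide(x, y, half, h, target)
        boxes += divide(x + half, y, w - half, h, target)
    else:
        half = h // 2
        boxes += divide(x, y, w, half, target)
        boxes += divide(x, y + half, w, h - half, target)
    return boxes

patches = divide(0, 0, 1344, 1344)
print(len(patches))  # a 1344x1344 image yields 16 patches of 336x336
```

Each returned box records where its patch sits in the original image, which is exactly the coordinate information the **Combine** step needs for retrieval.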

## 🏆 Leaderboard

The leaderboard is available [here](https://github.com/DreamMr/HR-Bench).


# 📧 Contact
- Wenbin Wang: wangwenbin97@whu.edu.cn

# ✒️ Citation
```bibtex
@article{hrbench,
      title={Divide, Conquer and Combine: A Training-Free Framework for High-Resolution Image Perception in Multimodal Large Language Models}, 
      author={Wenbin Wang and Liang Ding and Minyan Zeng and Xiabin Zhou and Li Shen and Yong Luo and Dacheng Tao},
      year={2024},
      journal={arXiv preprint},
      url={https://arxiv.org/abs/2408.15556}, 
}
```