---
language:
- en
license: cc-by-nc-sa-4.0
size_categories:
- 1K<n<10K
task_categories:
- video-text-to-text
tags:
- video-understanding
- large-video-language-models
- lvlm
- positional-bias
- benchmark
- evaluation
extra_gated_prompt: 'You acknowledge and understand that: This dataset is provided
  solely for academic research purposes. It is not intended for commercial use or
  any other non-research activities. All copyrights, trademarks, and other intellectual
  property rights related to the videos in the dataset remain the exclusive property
  of their respective owners. '
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: question_id
    dtype: string
  - name: question
    dtype: string
  - name: gt_answer
    dtype: string
  - name: video_name
    dtype: string
  - name: question_type
    dtype: string
  - name: answer_number
    dtype: int64
  - name: candidates
    sequence: string
  - name: video_len
    dtype: float64
  - name: video_category
    dtype: string
  - name: human_verification
    dtype: bool
  splits:
  - name: train
    num_bytes: 490082
    num_examples: 1177
  download_size: 224148
  dataset_size: 490082
---

<h1 align="center">Video-LevelGauge: Investigating Contextual Positional Bias in Large Video Language Models</h1>

<p align="center">
    <a href="https://arxiv.org/abs/2508.19650">
            <img alt="Build" src="http://img.shields.io/badge/cs.CV-arXiv%3A2508.19650-B31B1B.svg">
    </a>
    <a href="https://huggingface.co/datasets/Cola-any/Video-LevelGauge">
        <img alt="Build" src="https://img.shields.io/badge/๐Ÿค— Dataset-Video--LevelGauge-yellow">
    </a>
    <a href="https://github.com/Cola-any/Video-LevelGauge">
        <img alt="Build" src="https://img.shields.io/badge/Github-Investigating Contextual Positional Bias in Large Video Language Models-blue">
    </a>
</p>


## 📜 License
Video-LevelGauge is released under the CC-BY-NC-SA-4.0 license.
It is derived from several previously published datasets ([VideoMME](https://huggingface.co/datasets/lmms-lab/Video-MME), [MLVU](https://huggingface.co/datasets/MLVU/MVLU), [VisDrone](https://github.com/VisDrone/VisDrone-Dataset), [UCF-Crime](https://www.crcv.ucf.edu/projects/real-world/), and [Ego4D](https://github.com/facebookresearch/Ego4d)). Please note that the original datasets may have their own licenses. Users must comply with the licenses of the original datasets when using this derived dataset.

⚠️ By accessing and using our dataset, you understand and agree: **Video-LevelGauge is for academic research only. Commercial use in any form is prohibited. Users assume full responsibility for any other use or dissemination.**

We do not own the copyright of the raw video files; the copyright of every video belongs to its owner. We currently provide video access to researchers on the condition that they acknowledge the above license, and we respect and acknowledge the copyrights of the video authors.
If any content in our dataset infringes your rights, please email overwhelmed@mail.ustc.edu.cn and we will remove it immediately.

## 🏠 Introduction
🔔 Large Video Language Models (LVLMs) suffer from positional bias, characterized by uneven comprehension of identical content presented at different contextual positions.
<p align="center">
    <img src="./figs/pos_bias.png" width="55%" height="95%">
</p>
🌟 The serial position effect in psychology suggests that humans recall content presented at the beginning and end of a sequence better than content in the middle. Similar behaviors have been observed in language models. To date, how various types of LVLMs, such as those incorporating memory components or trained with long contexts, fare with respect to positional bias remains under-explored.
Moreover, how positional bias manifests in video-text interleaved contexts is still an open question. In particular, models that claim to excel at long video understanding should be validated for their ability to maintain consistent and effective perception across the entire sequence, with minimal positional bias.
For example, Qwen2.5-VL-7B exhibits reduced positional bias on the OCR task compared to its bias on other tasks:
<p align="center">
    <img src="./figs/pos_bais_plot_7b_20_norm.png" width="100%" height="100%">
</p>


## 👀 Video-LevelGauge Overview
Video-LevelGauge is explicitly designed to investigate contextual positional bias in video understanding. We introduce a standardized probe and customized context design paradigm, in which carefully designed probe segments are inserted at varying positions within customized contextual content. By comparing model responses to identical probes at different insertion points, we assess positional bias in video comprehension.
It supports flexible control over context length, probe position, and context composition to evaluate positional bias in various real-world scenarios, such as **multi-video understanding, long video comprehension, and multi-modal interleaved inputs**.
Video-LevelGauge encompasses six categories of structured video understanding tasks (e.g., action reasoning), along with an open-ended descriptive task. It includes 438 manually collected multi-type videos, 1,177 multiple-choice question answering (MCQA) items, and 120 open-ended instructed descriptive problems paired with annotations.
<p align="center">
    <img src="./figs/overview.png" width="95%" height="95%">
</p>
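
To make the paradigm concrete, the following is a minimal, hypothetical Python sketch of the evaluation loop it implies; `ask_model`, the clip lists, and the exact scoring are illustrative placeholders, not the benchmark's actual code.

```python
# Illustrative sketch of the probe / customized-context paradigm (not the official code).
# A fixed probe clip is inserted at every candidate position in a background context,
# and the model is queried with the same question each time.

def evaluate_positions(probe_clip, background_clips, question, answer, ask_model):
    """Return per-position correctness for one probe/question pair.

    ask_model(clips, question) stands in for whatever inference backend
    (transformers, vLLM, a commercial API) produces an answer string.
    """
    accuracies = []
    for pos in range(len(background_clips) + 1):
        # Insert the identical probe at a different contextual position each time.
        clips = background_clips[:pos] + [probe_clip] + background_clips[pos:]
        prediction = ask_model(clips, question)
        accuracies.append(float(prediction.strip() == answer))
    return accuracies  # comparing entries across positions exposes positional bias
```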

## 🔍 Dataset
The annotation file and the raw videos are readily accessible via this [HF link](https://huggingface.co/datasets/Cola-any/Video-LevelGauge) 🤗. Note that this dataset is for research purposes only, and you must strictly comply with the license above.
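
Assuming the annotations load through the standard `datasets` API (the schema above lists fields such as `question`, `candidates`, `gt_answer`, and `answer_number`), a minimal inspection could look like the sketch below; the letter labels are only for display.

```python
from datasets import load_dataset

# Load the MCQA annotations; raw videos are downloaded separately (see the next section).
ds = load_dataset("Cola-any/Video-LevelGauge", split="train")

sample = ds[0]
print(sample["question_id"], "|", sample["question_type"], "|", sample["video_name"])
print(sample["question"])
for i, option in enumerate(sample["candidates"]):
    print(f"  ({chr(65 + i)}) {option}")  # display-only letter labels
print("gt_answer:", sample["gt_answer"], "| answer_number:", sample["answer_number"])
print("video_len:", sample["video_len"], "| category:", sample["video_category"])
```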

## 🚀 Sample Usage

To quickly get started with running inference and evaluating models on Video-LevelGauge, follow these steps. For more detailed instructions and examples, please refer to the [GitHub repository](https://github.com/Cola-any/Video-LevelGauge).

### ✨ Clone and Prepare Dataset
First, clone the [GitHub repository](https://github.com/Cola-any/Video-LevelGauge) and download [our dataset](https://huggingface.co/datasets/Cola-any/Video-LevelGauge/tree/main/LevelGauge) into `./LevelGauge`, organized as follows:
```
Video-LevelGauge
├── asset
├── evaluation
├── LevelGauge
│   ├── json
│   └── videos
├── metric
├── output
├── preprocess
```
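
If you prefer to fetch the annotation and video files programmatically, a hedged sketch with `huggingface_hub` is shown below; the `LevelGauge/*` pattern simply mirrors the folder layout of the dataset repo.

```python
from huggingface_hub import snapshot_download

# Download only the LevelGauge folder (json annotations + videos) from the dataset repo
# into the local layout expected by the evaluation scripts.
snapshot_download(
    repo_id="Cola-any/Video-LevelGauge",
    repo_type="dataset",
    allow_patterns=["LevelGauge/*"],
    local_dir=".",  # yields ./LevelGauge/json and ./LevelGauge/videos
)
```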
### ✨ Running Inference
We take three models as examples to demonstrate how to use our benchmark for positional bias evaluation:
- **InternVL3** – inference with `transformers`.
- **MiMo-VL** – inference with the `vLLM` API, using **video input**.
   (If you plan to call a commercial API for testing, this is a good reference.)
- **GLM-4.5V** – inference with the `vLLM` API, using **multi-image input**; a rough sketch of such a request follows this list.
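
For orientation only, the snippet below sketches what a multi-image request to an OpenAI-compatible vLLM server might look like; the server address, model name, and frame paths are placeholders, and the repository's `./evaluation/vllm` scripts remain the reference.

```python
import base64
from openai import OpenAI

# vLLM exposes an OpenAI-compatible endpoint once the model is deployed.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # placeholder address

def encode_image(path):
    with open(path, "rb") as f:
        return "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()

frames = [f"./frames/{i:04d}.jpg" for i in range(16)]  # pre-extracted frames (placeholder paths)
content = [{"type": "image_url", "image_url": {"url": encode_image(p)}} for p in frames]
content.append({"type": "text", "text": "Question and candidate options go here."})

response = client.chat.completions.create(
    model="zai-org/GLM-4.5V",  # whatever name the vLLM server was launched with
    messages=[{"role": "user", "content": content}],
    temperature=0.0,
)
print(response.choices[0].message.content)
```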

For InternVL3, please follow the [official project](https://github.com/OpenGVLab/InternVL) to set up the environment. Run inference as follows:
```bash
bash ./evaluation/transformer/eval_intervl3.sh
```
The accuracy at each position will be computed and saved to `acc_dir: ./output/internvl_acc`.

For MiMo-VL, please first follow the [official project](https://github.com/XiaomiMiMo/MiMo-VL/tree/main) to deploy the model with vLLM. Run inference as follows:
```bash
bash ./evaluation/vllm/eval_mimovl.sh
```
The accuracy at each position will be computed and saved to `acc_dir: ./output/mimovl_acc`.

For GLM-4.5V, please first follow the [official project](https://github.com/zai-org/GLM-V/) to deploy the model with vLLM. Run inference as follows:
```bash
bash ./evaluation/vllm/eval_glm45v.sh
```
The accuracy at each position will be computed and saved to `acc_dir: ./output/glm45v_acc`.

📌 In addition, we provide preprocessing scripts in the `./preprocess` folder, including *frame extraction* and *concatenating the probe and background videos into a single video*.
Choose the input method that matches your model; concatenating the probe and background videos into a single video is recommended, as it applies to all models.
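
The scripts in `./preprocess` are the reference implementation. Purely as an illustration of the concatenation step, the sketch below splices a probe clip into a background sequence with ffmpeg's concat demuxer; all paths are placeholders and the clips are assumed to share codec, resolution, and frame rate.

```python
import subprocess
import tempfile

def concat_videos(clip_paths, out_path):
    """Concatenate clips into a single video with ffmpeg's concat demuxer.

    Assumes all clips share codec/resolution/fps; otherwise re-encode them first.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        for p in clip_paths:
            f.write(f"file '{p}'\n")
        list_file = f.name
    subprocess.run(
        ["ffmpeg", "-y", "-f", "concat", "-safe", "0", "-i", list_file, "-c", "copy", out_path],
        check=True,
    )

# Example: probe inserted at position 2 of a background sequence (placeholder paths).
background = ["bg_0.mp4", "bg_1.mp4", "bg_2.mp4"]
concat_videos(background[:2] + ["probe.mp4"] + background[2:], "probe_at_pos2.mp4")
```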

📌 For a precise investigation, our paper evaluates models on the full set of 1,177 samples, which requires tens of thousands of inference calls across 10 positions. We provide a subset of [300 samples](https://huggingface.co/datasets/Cola-any/Video-LevelGauge/blob/main/LevelGauge/json/Pos_MCQA_300_final.json) for quick testing 🚀.

### ✨ Metric Calculation
Once the positional accuracies are saved to `acc_dir`, you can compute all metrics, including *Pran*, *Pvar*, *Pmean*, and *MR*, with a single command 😄. We use the files provided in `./output/example_acc` as an example:
```bash
python ./metric/metric.py --acc_dir ./output/example_acc
```
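
`./metric/metric.py` defines the exact metrics; as a rough sketch only, if we assume *Pmean*, *Pvar*, and *Pran* denote the mean, variance, and range of accuracy across probe positions (MR and the remaining metrics are defined in the paper and not reproduced here), they could be computed as follows:

```python
import statistics

# Hypothetical per-position accuracies (e.g., 10 probe positions), not real results.
position_acc = [0.62, 0.58, 0.55, 0.54, 0.53, 0.52, 0.54, 0.57, 0.60, 0.64]

p_mean = statistics.mean(position_acc)         # average accuracy over positions
p_var = statistics.pvariance(position_acc)     # spread of accuracy across positions
p_ran = max(position_acc) - min(position_acc)  # best-to-worst positional gap

print(f"Pmean={p_mean:.3f}  Pvar={p_var:.4f}  Pran={p_ran:.3f}")
```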
Finally, we provide a script for visualizing positional bias. See [bias_plot.py](https://github.com/Cola-any/Video-LevelGauge/blob/main/metric/bias_plot.py) for details.
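
Independently of `bias_plot.py`, a minimal matplotlib sketch (using the same hypothetical numbers as above) is enough to eyeball the bias curve:

```python
import matplotlib.pyplot as plt

# Hypothetical per-position accuracies, not real results.
position_acc = [0.62, 0.58, 0.55, 0.54, 0.53, 0.52, 0.54, 0.57, 0.60, 0.64]

plt.plot(range(1, len(position_acc) + 1), position_acc, marker="o")
plt.xlabel("Probe position in context")
plt.ylabel("Accuracy")
plt.title("Positional bias curve (illustrative)")
plt.savefig("positional_bias.png", dpi=200)
```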

## 🔮 Evaluation Pipeline
Please refer to our 🎁 [project](https://github.com/Cola-any/Video-LevelGauge) and 📖 [arXiv paper](https://arxiv.org/abs/2508.19650) for more details.

## 📈 Experimental Results
๐Ÿ“**Performance of state-of-the-art LVLMs on Video-LevelGauge.**

Gemini 2.5 Pro exhibits the least positional bias, followed by GLM-4.5V, GPT-4o-latest, Doubao-Seed-1.6, and other models.
<p align="center">
    <img src="./figs/leaderboard.png" width="55%" height="95%">
</p>

๐Ÿ“**Evaluation results of Stat-of-the-art LVLMs.**

We conduct a comprehensive investigation of 27 LVLMs using Video-LevelGauge, including 6 commercial models (e.g., Gemini 2.5 Pro and QVQ-Max) and 21 open-source LVLMs covering unified models such as InternVL3, long-video models such as Video-XL2, task-optimized models such as VideoRefer, multi-modal reasoning models such as GLM-4.5V, and two-stage methods such as LLoVi.
<p align="center">
    <img src="./figs/lvlms.png" width="95%" height="95%">
</p>

๐Ÿ“**Effect of Context Length on Positional Bias.**

Positional bias is prevalent across various context lengths and tends to intensify as the context length increases, accompanied by shifts in bias patterns.
<p align="center">
    <img src="./figs/context_len.png" width="95%" height="95%">
</p>

๐Ÿ“**Effect of Context Type on Positional Bias.**

LVLMs exhibit more pronounced positional bias in complex context scenarios.
<p align="center">
    <img src="./figs/context_type.png" width="95%" height="95%">
</p>

๐Ÿ“**Effect of Model Size on Positional Bias.**

Positional bias is significantly alleviated as model size increases, consistent with the scaling behavior observed for other capabilities.
<p align="center">
    <img src="./figs/model_size.png" width="55%" height="95%">
</p>

๐Ÿ“**Effect of Thinking Mode on Positional Bias.**

Thinking mode can alleviate the positional bias issue to a certain extent.
<p align="center">
    <img src="./figs/thinking.png" width="55%" height="95%">
</p>

## Citation
If you find our work helpful for your research, please consider citing it:
```
@article{xia2025videolevelgaugeinvestigatingcontextualpositional,
  title   = {Video-LevelGauge: Investigating Contextual Positional Bias in Large Video Language Models},
  author  = {Hou, Xia and Fu, Zheren and Ling, Fangcan and Li, Jiajun and Tu, Yi and Mao, Zhendong and Zhang, Yongdong},
  journal = {arXiv preprint arXiv:2508.19650},
  year    = {2025},
}
```