---
task_categories:
- video-text-to-text
dataset_info:
  features:
  - name: question_id
    dtype: int64
  - name: category
    dtype: string
  - name: video_path
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  splits:
  - name: test
    num_bytes: 62068
    num_examples: 268
  download_size: 22194
  dataset_size: 62068
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---
# HLVid Dataset
[Project Page](https://autogaze.github.io/) | [Paper](https://huggingface.co/papers/2603.12254) | [GitHub](https://github.com/NVlabs/AutoGaze)
HLVid (High-resolution, Long-form Video QA) is a benchmark introduced in the paper "[Attend Before Attention: Efficient and Scalable Video Understanding via Autoregressive Gazing](https://huggingface.co/papers/2603.12254)".
It is designed to evaluate Multi-modal Large Language Models (MLLMs) on long-form, high-resolution video understanding. The benchmark features 5-minute videos at 4K resolution, challenging models to handle significant spatiotemporal redundancy while preserving critical information.
## Dataset Details
The dataset contains question-answer pairs grounded in high-fidelity video content. Each entry in the `test` split includes:
- `question_id`: A unique identifier for the sample.
- `category`: The specific domain or reasoning category of the video/question.
- `video_path`: The path or reference to the source video file.
- `question`: The text-based question regarding the video.
- `answer`: The ground-truth text answer.
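Given this schema, evaluation typically reduces to comparing a model's free-form answer against the ground-truth `answer` string. The sketch below shows one way to iterate records with this schema and score exact-match accuracy; the two records and the `exact_match_accuracy` helper are hypothetical illustrations, not part of the dataset or its official evaluation protocol.

```python
# Sketch: scoring HLVid-style QA records with exact-match accuracy.
# The records below are hypothetical placeholders that mirror the schema
# of the `test` split; real entries come from loading the dataset itself.

records = [
    {"question_id": 0, "category": "counting", "video_path": "videos/0001.mp4",
     "question": "How many people enter the room?", "answer": "three"},
    {"question_id": 1, "category": "temporal", "video_path": "videos/0002.mp4",
     "question": "What happens after the door opens?", "answer": "a dog runs in"},
]

def exact_match_accuracy(records, predict):
    """Fraction of records where predict(video_path, question) matches
    the ground-truth answer after whitespace/case normalization."""
    correct = sum(
        predict(r["video_path"], r["question"]).strip().lower()
        == r["answer"].strip().lower()
        for r in records
    )
    return correct / len(records)

# A trivial predictor that always answers "three" matches 1 of 2 records.
acc = exact_match_accuracy(records, lambda video, question: "three")
print(acc)  # 0.5
```

In practice `predict` would wrap an MLLM call that consumes the video at `video_path` together with the question; stricter or softer matching (e.g. LLM-judged correctness) can be swapped in without changing the loop.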
### Citation
```bibtex
@article{shi2026attend,
  title={Attend Before Attention: Efficient and Scalable Video Understanding via Autoregressive Gazing},
  author={Shi, Baifeng and Fu, Stephanie and Lian, Long and Ye, Hanrong and Eigen, David and Reite, Aaron and Li, Boyi and Kautz, Jan and Han, Song and Chan, David M and others},
  journal={arXiv preprint arXiv:2603.12254},
  year={2026}
}
```