---
license: mit
task_categories:
- video-text-to-text
---
# Video-Browsecomp
[**Project Page**](https://liang-zhengyang.github.io/video-browsecomp/) | [**Paper**](https://huggingface.co/papers/2512.23044) | [**Code**](https://github.com/chrisx599/Video-Browser)
Video-Browsecomp is a benchmark for evaluating open-ended agentic browsing tasks that require watching videos to answer. It was introduced in the paper "Video-Browser: Towards Agentic Open-web Video Browsing".
The benchmark challenges autonomous agents to search for, watch, and analyze video content to answer complex, open-ended user queries. Alongside the benchmark, the authors propose **Video-Browser**, a novel agent that uses *Pyramidal Perception* to filter information efficiently from metadata and perform fine-grained visual verification only when necessary.
## Citation
If you use the Video-Browsecomp benchmark or the Video-Browser agent in your research, please cite:
```bibtex
@misc{liang2026videobrowseragenticopenwebvideo,
  title={Video-Browser: Towards Agentic Open-web Video Browsing},
  author={Zhengyang Liang and Yan Shu and Xiangrui Liu and Minghao Qin and Kaixin Liang and Paolo Rota and Nicu Sebe and Zheng Liu and Lizi Liao},
  year={2026},
  eprint={2512.23044},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2512.23044},
}
```