---
license: cc-by-4.0
task_categories:
  - video-text-to-text
---

# PEARL-Bench: Personalized Streaming Video Understanding

Paper | GitHub

PEARL-Bench is the first comprehensive benchmark designed specifically for Personalized Streaming Video Understanding (PSVU). It evaluates a model's ability to recognize user-defined concepts, localize them at precise timestamps, and answer personalized queries over continuous video streams.

The benchmark comprises 132 unique videos and 2,173 fine-grained annotations with precise timestamps. It supports two evaluation modes:

- **Frame-level:** focuses on a specific person or object in discrete frames.
- **Video-level:** focuses on personalized actions unfolding across continuous frames.

## Dataset Structure

The dataset is organized as follows:

```
data/
  frame-level/
    annotations/   # Fine-grained annotations with timestamps
    output_clips/  # Generated scene clips
    videos/        # Source video files (.mp4)
```
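The exact annotation schema is specified in the GitHub repository rather than here; as a minimal sketch, assuming each frame-level annotation is a JSON file that pairs a user-defined concept with a timestamp in a source video (the field names `video`, `concept`, and `timestamp_s` below are hypothetical), iterating the layout above might look like:

```python
import json
import tempfile
from pathlib import Path

# Illustration only: build a mock copy of the directory layout, since the
# real annotation format is documented in the official GitHub repository.
root = Path(tempfile.mkdtemp()) / "data" / "frame-level"
(root / "annotations").mkdir(parents=True)
(root / "videos").mkdir()

# One mock frame-level annotation: a user-defined concept localized at a timestamp.
mock = {
    "video": "example.mp4",
    "concept": "my_dog",       # user-defined concept name (assumed field)
    "timestamp_s": 12.5,       # localization timestamp in seconds (assumed field)
    "question": "Where is my_dog?",
    "answer": "On the sofa.",
}
(root / "annotations" / "example.json").write_text(json.dumps(mock))

# Pair every annotation with the path of its source video file.
pairs = []
for ann_file in sorted((root / "annotations").glob("*.json")):
    ann = json.loads(ann_file.read_text())
    pairs.append((ann["concept"], ann["timestamp_s"], root / "videos" / ann["video"]))

print(pairs[0][0], pairs[0][1])  # prints: my_dog 12.5
```

The same traversal pattern would apply per split once the actual field names from the repository are substituted in.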

## Usage

For detailed instructions on downloading, merging, and extracting the data, as well as running the evaluation pipeline, please refer to the official GitHub repository.

## Citation

If you find this dataset useful for your research, please cite:

```bibtex
@article{zheng2026pearl,
  title={PEARL: Personalized Streaming Video Understanding Model},
  author={Zheng, Yuanhong and An, Ruichuan and Lin, Xiaopeng and Liu, Yuxing and Yang, Sihan and Zhang, Huanyu and Li, Haodong and Zhang, Qintong and Zhang, Renrui and Li, Guopeng and Zhang, Yifan and Li, Yuheng and Zhang, Wentao},
  journal={arXiv preprint arXiv:2603.20422},
  year={2026}
}
```