---
license: apache-2.0
language:
- en
- zh
tags:
- multimodal
- search
- video-object-segmentation
- reasoning
pretty_name: OK-VOS
size_categories:
- 1K<n<10K
---
|
|
|
|
|
# OK-VOS: A Video Object Segmentation Benchmark Requiring Outside Knowledge |
|
|
[📄 Paper (arXiv:2602.04454)](https://arxiv.org/abs/2602.04454)

[💻 Code (GitHub)](https://github.com/iSEE-Laboratory/Seg-ReSearch)
|
|
|
|
|
|
|
|
|
|
Existing language-guided segmentation benchmarks assume that the user's input already provides all the evidence needed to identify the target objects. While reasoning segmentation benchmarks emphasize world knowledge, they tend to involve only basic common sense (e.g., "which food is rich in Vitamin C"). These simplified settings fail to reflect real-world scenarios, which often involve up-to-date information or long-tail knowledge. To bridge this gap, we establish OK-VOS, a new video object segmentation benchmark that explicitly requires outside knowledge for object identification. The benchmark is fully annotated by five human experts and contains 1,000 test samples, covering 150 videos and 500 objects. We conduct a multi-round review and re-annotation process to strictly ensure that each query requires up-to-date information or long-tail facts that explicitly exceed the internal knowledge of current LLMs.
|
|
|
|
|
This dataset is introduced in the paper **"Seg-ReSearch: Segmentation with Interleaved Reasoning and External Search"**. |
|
|
|
|
|
## 📂 Dataset Structure |
|
|
|
|
|
1. **Metadata (`.parquet`)**: Contains prompts, video IDs, and frame indices; it can be loaded directly with the `datasets` library.
|
|
2. **Raw Data (`.tar.gz`)**: Contains the actual video frames and segmentation masks. |
|
|
|
|
|
### Directory Layout |
|
|
```text
data/OK_VOS/
├── train.parquet   # Metadata for training
├── test.parquet    # Metadata for testing
├── train.tar.gz    # Compressed raw frames & masks (train)
└── test.tar.gz     # Compressed raw frames & masks (test)
```
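Given the layout above, a minimal sketch for preparing the raw data locally might look like the following. This is an illustrative helper, not the official loading code; the archive and output paths are assumptions based on the layout above, and column names in the parquet files should be inspected (e.g. via `datasets`) rather than taken from here.

```python
import tarfile
from pathlib import Path


def extract_split(archive: str, out_dir: str) -> Path:
    """Extract one split archive (e.g. test.tar.gz) into out_dir.

    Skips the extraction if out_dir already exists, so repeated calls
    are cheap. Returns the output directory as a Path.
    """
    out = Path(out_dir)
    if not out.exists():
        out.mkdir(parents=True)
        with tarfile.open(archive, "r:gz") as tar:
            tar.extractall(out)
    return out


# The parquet metadata can then be read with the generic parquet loader
# of the `datasets` library, for example:
#   from datasets import load_dataset
#   meta = load_dataset("parquet",
#                       data_files={"test": "data/OK_VOS/test.parquet"})
# Check meta["test"].features for the actual column names (prompt,
# video ID, frame indices) before indexing into the extracted frames.
```

For the intended evaluation pipeline, follow the official repository linked below rather than this sketch.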
|
|
|
|
|
## ⚙️ How to Use |
|
|
|
|
|
Please refer to our repo: https://github.com/iSEE-Laboratory/Seg-ReSearch |
|
|
|
|
|
## 📜 Citation |
|
|
If you find this dataset useful, please cite our paper: |
|
|
|
|
|
```bibtex
@article{liang2026segresearch,
  title={Seg-ReSearch: Segmentation with Interleaved Reasoning and External Search},
  author={Tianming Liang and Qirui Du and Jian-Fang Hu and Haichao Jiang and Zicheng Lin and Wei-Shi Zheng},
  journal={arXiv preprint arXiv:2602.04454},
  year={2026}
}
```
|
|
|