[arXiv](https://arxiv.org/abs/2602.04454)
[GitHub](https://github.com/iSEE-Laboratory/Seg-ReSearch)

Existing language-guided segmentation benchmarks assume that the user input already provides all the evidence needed to identify the target objects. Reasoning segmentation benchmarks do emphasize world knowledge, but they tend to involve only basic common sense (e.g., "which food is rich in Vitamin C"). These simplified settings fail to reflect real-world scenarios, which often involve up-to-date information or long-tail knowledge. To bridge this gap, we establish OK-VOS, a new video object segmentation benchmark that explicitly requires outside knowledge for object identification. The benchmark is fully annotated by five human experts and contains 1,000 test samples covering 150 videos and 500 objects. We conduct a multi-round review and re-annotation process to strictly ensure that each query requires up-to-date information or long-tail facts that exceed the internal knowledge of current LLMs.

This dataset is introduced in the paper **"Seg-ReSearch: Segmentation with Inter…"**.

## 📂 Dataset Structure

Due to the large file size and the need for efficient disk access during training/inference, we adopt a **hybrid storage format**:

1. **Metadata (`.parquet`)**: Contains prompts, video IDs, and frame indices. Can be loaded directly via `datasets`.
2. **Raw Data (`.tar.gz`)**: Contains the actual video frames and segmentation masks.

The extracted files are organized under `data/OK_VOS/`.

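To make the hybrid format concrete, below is a minimal loading sketch. The file names (`OK_VOS_meta.parquet`, `OK_VOS.tar.gz`), the column names (`prompt`, `video_id`, `frame_indices`), and the frames/masks layout are illustrative assumptions rather than the exact schema; the repo contains the authoritative loading code.

```python
# Minimal sketch of the hybrid-format workflow.
# NOTE: the file names, column names, and directory layout below are
# assumptions for illustration, not the dataset's confirmed schema.
import os
import tarfile

from datasets import load_dataset
from PIL import Image

# 1) Metadata: the .parquet file loads directly with `datasets`.
meta = load_dataset("parquet", data_files="OK_VOS_meta.parquet", split="train")
sample = meta[0]

# 2) Raw data: extract the .tar.gz of frames and masks once, up front
#    (assumes the archive root is OK_VOS/).
if not os.path.isdir("data/OK_VOS"):
    with tarfile.open("OK_VOS.tar.gz", "r:gz") as tar:
        tar.extractall("data")

# 3) Resolve the frames/masks referenced by a metadata row (hypothetical layout).
video_dir = os.path.join("data/OK_VOS", sample["video_id"])
first_idx = sample["frame_indices"][0]
frame = Image.open(os.path.join(video_dir, "frames", f"{first_idx:05d}.jpg"))
mask = Image.open(os.path.join(video_dir, "masks", f"{first_idx:05d}.png"))
print(sample["prompt"], frame.size, mask.size)
```

This mirrors the stated rationale for the split: the Parquet metadata stays small and loads directly via `datasets`, while the tar archive keeps the many frame and mask files efficient to store and read from disk.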
## ⚙️ How to Use
Please refer to our repo: https://github.com/iSEE-Laboratory/Seg-ReSearch
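For downloading the files programmatically, a hedged sketch with `huggingface_hub` follows; the repo id `iSEE-Laboratory/OK-VOS` is a hypothetical placeholder, so substitute this dataset card's actual id.

```python
# Hedged sketch: download all dataset files locally via huggingface_hub.
# The repo id below is a hypothetical placeholder, not confirmed by this card.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="iSEE-Laboratory/OK-VOS",  # hypothetical; replace with the real dataset id
    repo_type="dataset",
)
print("Downloaded to:", local_dir)
```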
## 📜 Citation
If you find this dataset useful, please cite our paper: