Enhance dataset card: Add metadata, links, and usage details for ReasonSeg-Test

This PR enhances the dataset card for `ReasonSeg_test` by adding key information and improving its structure.
Key changes include:
* Adding `task_categories: image-segmentation` for better discoverability.
* Including `license: cc-by-nc-4.0`, relevant `tags` (`reasoning`, `reinforcement-learning`, `zero-shot`, `multimodal`), and `language: en`.
* Providing direct links to the associated paper ([Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement](https://huggingface.co/papers/2503.06520)) and the official GitHub repository ([https://github.com/dvlab-research/Seg-Zero](https://github.com/dvlab-research/Seg-Zero)).
* Populating the content section with the paper abstract, an overview of the Seg-Zero framework, visual examples, and a practical sample usage guide for inference, derived from the official GitHub README.
* Adding the BibTeX citation for proper attribution.
These improvements make the dataset more informative, discoverable, and user-friendly on the Hugging Face Hub.
The relevant hunk updates the card's YAML frontmatter (lines 27-40 of the updated card) and then appends the new card body shown below:

```diff
@@ -27,4 +27,97 @@ configs:
   data_files:
   - split: test
     path: data/test-*
+task_categories:
+- image-segmentation
+license: cc-by-nc-4.0
+tags:
+- reasoning
+- reinforcement-learning
+- zero-shot
+- multimodal
+language:
+- en
 ---
```
# ReasonSeg-Test Dataset

This repository contains the test split of ReasonSeg, an evaluation benchmark used in the paper "[Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement](https://huggingface.co/papers/2503.06520)".
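For a quick start, the test split can be loaded with the Hugging Face `datasets` library. The snippet below is a minimal sketch: the repository ID is a placeholder for this dataset's actual Hub ID, and the available columns depend on the card's configuration.

```python
from datasets import load_dataset

# Load the test split of ReasonSeg from the Hugging Face Hub.
# Replace the placeholder with this dataset's actual repository ID.
ds = load_dataset("<namespace>/ReasonSeg_test", split="test")

print(ds)        # number of rows and column names
sample = ds[0]   # one example: an image plus its annotation fields
```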
## Paper Abstract

Traditional methods for reasoning segmentation rely on supervised fine-tuning with categorical labels and simple descriptions, limiting out-of-domain generalization and lacking explicit reasoning processes. To address these limitations, we propose Seg-Zero, a novel framework that demonstrates remarkable generalizability and derives explicit chain-of-thought reasoning through cognitive reinforcement. Seg-Zero introduces a decoupled architecture consisting of a reasoning model and a segmentation model. The reasoning model interprets user intentions, generates explicit reasoning chains, and produces positional prompts, which are subsequently used by the segmentation model to generate precise pixel-level masks. We design a sophisticated reward mechanism that integrates both format and accuracy rewards to effectively guide optimization directions. Trained exclusively via reinforcement learning with GRPO and without explicit reasoning data, Seg-Zero achieves robust zero-shot generalization and exhibits emergent test-time reasoning capabilities. Experiments show that Seg-Zero-7B achieves a zero-shot performance of 57.5 on the ReasonSeg benchmark, surpassing the prior LISA-7B by 18%. This significant improvement highlights Seg-Zero's ability to generalize across domains while presenting an explicit reasoning process.
## Code

The official code for Seg-Zero is available on GitHub: [https://github.com/dvlab-research/Seg-Zero](https://github.com/dvlab-research/Seg-Zero)

## Overview of Seg-Zero

Seg-Zero employs a decoupled architecture, including a reasoning model and a segmentation model. It is trained exclusively using reinforcement learning with GRPO and without explicit reasoning data.

<div align=center>
<img width="98%" src="https://github.com/dvlab-research/Seg-Zero/raw/main/assets/overview.png"/>
</div>

Seg-Zero demonstrates the following features:

1. Seg-Zero exhibits emergent test-time reasoning ability. It generates a reasoning chain before producing the final segmentation mask.
2. Seg-Zero is trained exclusively using reinforcement learning, without any explicit supervised reasoning data.
3. Compared to supervised fine-tuning, Seg-Zero achieves superior performance on both in-domain and out-of-domain data.
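To make the decoupled design concrete, here is a conceptual sketch of the two-stage pipeline described above. The function names and prompt format are illustrative placeholders, not the actual API of the Seg-Zero repository.

```python
# Conceptual sketch of Seg-Zero's decoupled pipeline (illustrative names only,
# not the repository's actual API).
def segment_with_reasoning(image, question, reasoning_model, segmentation_model):
    # Stage 1: the reasoning model interprets the question, emits an explicit
    # chain of thought, and produces positional prompts (e.g. points or boxes).
    reasoning_chain, positional_prompts = reasoning_model(image, question)

    # Stage 2: the segmentation model converts those positional prompts
    # into pixel-level masks for the referred objects.
    masks = segmentation_model(image, positional_prompts)

    return reasoning_chain, masks
```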
## Examples

<div align=center>
<img width="98%" src="https://github.com/dvlab-research/Seg-Zero/raw/main/assets/examples.png"/>
</div>

## Sample Usage (Inference)

To run inference using a pretrained Seg-Zero model, you first need to download the models. Make sure you have `git-lfs` installed.

```bash
mkdir pretrained_models
cd pretrained_models
git lfs install
git clone https://huggingface.co/Ricky06662/VisionReasoner-7B
```
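If you prefer not to use `git-lfs`, the same checkpoint can be fetched with the `huggingface_hub` Python client. This is a minimal alternative sketch rather than part of the official instructions; the target directory is an assumption.

```python
from huggingface_hub import snapshot_download

# Download the VisionReasoner-7B checkpoint into pretrained_models/.
snapshot_download(
    repo_id="Ricky06662/VisionReasoner-7B",
    local_dir="pretrained_models/VisionReasoner-7B",
)
```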
Then run inference using:

```bash
python inference_scripts/infer_multi_object.py
```

The default question is:

> "What can I have if I'm thirsty?"

You will see the thinking process in the command line, for example:

> "The question asks for items that can be consumed if one is thirsty. In the image, there are two glasses that appear to contain beverages, which are the most likely candidates for something to drink. The other items, such as the salad, fruit platter, and sandwich, are not drinks and are not suitable for quenching thirst."

The resulting mask is saved in the **inference_scripts** folder.

<div align=center>
<img width="98%" src="https://github.com/dvlab-research/Seg-Zero/raw/main/assets/test_output_multiobject.png"/>
</div>

You can also provide your own `image_path` and `text`:

```bash
python inference_scripts/infer_multi_object.py --image_path "your_image_path" --text "your question text"
```

## Citation

If you find Seg-Zero or VisionReasoner useful for your research, please cite the following papers:

```bibtex
@article{liu2025segzero,
  title   = {Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement},
  author  = {Liu, Yuqi and Peng, Bohao and Zhong, Zhisheng and Yue, Zihao and Lu, Fanbin and Yu, Bei and Jia, Jiaya},
  journal = {arXiv preprint arXiv:2503.06520},
  year    = {2025}
}

@article{liu2025visionreasoner,
  title   = {VisionReasoner: Unified Visual Perception and Reasoning via Reinforcement Learning},
  author  = {Liu, Yuqi and Qu, Tianyuan and Zhong, Zhisheng and Peng, Bohao and Liu, Shu and Yu, Bei and Jia, Jiaya},
  journal = {arXiv preprint arXiv:2505.12081},
  year    = {2025}
}
```