---
license: mit
library_name: transformers
---
# Towards Pixel-level VLM Perception via Simple Points Prediction
<div align="center">
<a href="https://simpleseg.github.io/">
<b>📄 Homepage</b>
</a> |
<a href="https://arxiv.org/abs/2601.19228">
<b>📄 Tech Report</b>
</a> |
<a href="https://github.com/songtianhui/SimpleSeg">
<b>📄 GitHub</b>
</a>
</div>
## Introduction
> [!NOTE]
> This is the [Qwen2.5-VL](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) version of SimpleSeg, a dense architecture with 7B parameters.
We present **SimpleSeg**, **a strikingly simple yet highly effective approach to endow Multimodal Large Language Models (MLLMs) with native pixel-level perception**.
Our method reframes segmentation as a simple sequence generation problem: the model directly predicts a **sequence of points** (textual coordinates) delineating object boundaries, entirely within its language space.
To achieve high fidelity, we introduce a two-stage SFT→RL training pipeline, where Reinforcement Learning with an IoU-based reward refines the point sequences to accurately match ground-truth contours.
We find that **the standard MLLM architecture possesses a strong, inherent capacity for low-level perception** that can be unlocked without any specialized architecture.
On segmentation benchmarks, SimpleSeg achieves performance that is comparable to, and often surpasses, methods relying on complex, task-specific designs.
This work shows that precise spatial understanding can emerge from simple point prediction, challenging the prevailing need for auxiliary components and paving the way for more unified and capable VLMs.
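To make the RL objective mentioned above concrete, here is a minimal sketch of how an IoU-based reward can be computed by rasterizing the predicted and ground-truth contours with `pycocotools`; the function name, inputs, and any reward shaping are illustrative assumptions, not the exact training implementation.
```python
# Illustrative sketch of an IoU-based reward for a predicted contour.
# `pred_polygon` / `gt_polygon` are flat [x1, y1, x2, y2, ...] lists in pixel
# coordinates; the exact reward used in training may differ.
import numpy as np
import pycocotools.mask as mask_utils

def iou_reward(pred_polygon, gt_polygon, height, width):
    def to_mask(polygon):
        rle = mask_utils.frPyObjects([polygon], height, width)
        return mask_utils.decode(rle)[..., 0].astype(bool)
    pred, gt = to_mask(pred_polygon), to_mask(gt_polygon)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 0.0
    return float(np.logical_and(pred, gt).sum()) / float(union)
```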
## Method

In this work, we explore the limits of MLLM pixel-level perception by predicting the next point in a contour with the simplest approach possible.
Without introducing any complex architectures or special patterns, we show how even minimalistic point prediction can achieve effective segmentation at the pixel level.
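As a concrete illustration of this formulation, the sketch below turns a contour into the kind of plain-text point sequence the model is trained to emit. The exact prompt and formatting used during training may differ; here we assume normalized `[x, y]` pairs, matching the decoding code later in this card.
```python
# Illustrative sketch: a contour becomes a plain-text sequence of points that a
# language model can generate token by token. Coordinates are normalized to
# [0, 1]; the exact training format is an assumption.
import json

def polygon_to_text(points_xy, width, height):
    """points_xy: (x, y) pixel coordinates sampled along the object contour."""
    normalized = [[round(x / width, 3), round(y / height, 3)] for x, y in points_xy]
    return json.dumps(normalized)

contour = [(120, 80), (200, 90), (210, 180), (110, 170)]
print(polygon_to_text(contour, width=640, height=480))
# [[0.188, 0.167], [0.312, 0.188], [0.328, 0.375], [0.172, 0.354]]
```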
## Key Benefits
- **Simplicity**: SimpleSeg requires no specialized modules and adheres to the standard MLLM architecture, so it can be seamlessly and efficiently integrated as a new, core pre-training task for foundation models, similar to visual grounding.
- **Task Generality**: By framing segmentation as a text-generation problem, our approach is inherently flexible. The model can be easily adapted to a wide range of vision-language tasks that require precise spatial localization.
- **Interpretable Output**: The model generates explicit, human-readable coordinate sequences instead of dense pixel masks. This transparency simplifies debugging and makes the output directly usable for downstream applications like interactive editing or tool use (see the sketch below).
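For example, because the output is plain coordinates, it can be fed straight into other tools without any mask post-processing; a minimal sketch (the polygon values are made up for illustration):
```python
# Minimal sketch: turn a predicted point sequence into a bounding box that a
# downstream tool (e.g. a cropping or editing API) could consume directly.
# The polygon below is an illustrative value, not real model output.
polygon = [[0.188, 0.167], [0.312, 0.188], [0.328, 0.375], [0.172, 0.354]]
xs = [x for x, _ in polygon]
ys = [y for _, y in polygon]
bbox = [min(xs), min(ys), max(xs), max(ys)]  # normalized [x_min, y_min, x_max, y_max]
print(bbox)  # [0.172, 0.167, 0.328, 0.375]
```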
## Performance
- **Referring Expression Segmentation** results

| Methods | refCOCO val | refCOCO testA | refCOCO testB | refCOCO+ val | refCOCO+ testA | refCOCO+ testB | refCOCOg val | refCOCOg test | Avg. |
|---------|-------------|---------------|---------------|--------------|----------------|----------------|--------------|---------------|------|
| **Decoder-based Models** | | | | | | | | | |
| NEXT-Chat | 74.7 | 78.9 | 69.5 | 65.1 | 71.9 | 56.7 | 67.0 | 67.0 | 68.9 |
| LISA | 74.9 | 79.1 | 72.3 | 65.1 | 70.8 | 58.1 | 67.9 | 70.6 | 69.9 |
| PixelLM | 73.0 | 76.5 | 68.2 | 66.3 | 71.7 | 58.3 | 69.3 | 70.5 | 69.2 |
| AnyRef | 76.9 | 79.9 | 74.2 | 70.3 | 73.5 | 61.8 | 70.0 | 70.7 | 72.2 |
| GSVA | 77.2 | 78.9 | 73.5 | 65.9 | 69.6 | 59.8 | 72.7 | 73.3 | 71.4 |
| LaSagNA | 76.8 | 78.7 | 73.8 | 66.4 | 70.6 | 60.1 | 70.6 | 71.9 | 71.1 |
| Groundhog | 78.5 | 79.9 | 75.7 | 70.5 | 75.0 | 64.9 | 74.1 | 74.6 | 74.2 |
| Text4Seg (w/ SAM) | 79.2 | 81.7 | 75.6 | 72.8 | 77.9 | 66.5 | 74.0 | 75.3 | 75.4 |
| **Decoder-free Models** | | | | | | | | | |
| Text4Seg | 74.7 | 77.4 | 71.6 | 68.5 | 73.6 | 62.9 | 70.7 | 71.6 | 71.4 |
| **SimpleSeg**-Qwen2.5-VL | 80.9 | 77.8 | 75.2 | 72.4 | 77.3 | 66.1 | 73.3 | 74.1 | 74.6 |
| **SimpleSeg**-Kimi-VL | 80.0 | 80.6 | 76.2 | 70.4 | 76.2 | 67.1 | 72.8 | 74.7 | 74.8 |
- **Referring Expression Comprehension** results

| Methods | refCOCO val | refCOCO testA | refCOCO testB | refCOCO+ val | refCOCO+ testA | refCOCO+ testB | refCOCOg val | refCOCOg test | Avg. |
|---------|-------------|---------------|---------------|--------------|----------------|----------------|--------------|---------------|------|
| **Decoder-based Models** | | | | | | | | | |
| LISA | 85.4 | 88.8 | 82.6 | 74.2 | 79.5 | 68.4 | 79.3 | 80.4 | 79.8 |
| GSVA | 86.3 | 89.2 | 83.8 | 72.8 | 78.8 | 68.0 | 81.6 | 81.8 | 80.3 |
| NEXT-Chat | 85.5 | 90.0 | 77.9 | 77.2 | 84.5 | 68.0 | 80.1 | 79.8 | 80.4 |
| PixelLM | 89.8 | 92.2 | 86.4 | 83.2 | 87.0 | 78.9 | 84.6 | 86.0 | 86.0 |
| Text4Seg (w/ SAM)| 90.3 | 93.4 | 87.5 | 85.2 | 89.9 | 79.5 | 85.4 | 85.4 | 87.1 |
| **Decoder-free Models** | | | | | | | | | |
| Text4Seg | 88.3 | 91.4 | 85.8 | 83.5 | 88.2 | 77.9 | 82.4 | 82.5 | 85.0 |
| **SimpleSeg**-Qwen2.5-VL | 90.2| 92.9 | 86.1 | 84.6 | 90.5 | 79.0 | 84.9 | 85.6 | 86.7 |
| **SimpleSeg**-Kimi-VL | 91.3| 92.1 | 87.1 | 82.6 | 88.3 | 79.3 | 84.6 | 86.3 | 86.5 |
# Model Usage
## Inference
We recommend using vLLM for production deployment. Requires `vllm>=0.12.0` with `--trust-remote-code`.
First, start the vLLM server:
```bash
vllm serve sthui/SimpleSeg-Qwen2.5-VL \
--trust-remote-code \
--tensor-parallel-size 4 \
--served-model-name SimpleSeg-Qwen2.5-VL \
--host 0.0.0.0 \
--port 8000
```
Then run the following code to perform inference:
```python
import base64
from openai import OpenAI
# vLLM server configuration
VLLM_BASE_URL = "http://localhost:8000/v1"
MODEL_NAME = "SimpleSeg-Qwen2.5-VL" # Should match --served-model-name in vllm serve
def encode_image(image_path: str) -> str:
"""Encode image to base64 string."""
with open(image_path, "rb") as f:
return base64.b64encode(f.read()).decode()
def inference(image_path: str, instruction: str) -> str:
"""Run GUI grounding inference via vLLM."""
client = OpenAI(base_url=VLLM_BASE_URL, api_key="EMPTY")
messages = [
{
"role": "user",
"content": [
{
"type": "image_url",
"image_url": {"url": f"data:image/png;base64,{encode_image(image_path)}"}
},
{"type": "text", "text": instruction},
],
},
]
response = client.chat.completions.create(
model=MODEL_NAME,
messages=messages,
max_tokens=4096,
temperature=0,
)
return response.choices[0].message.content
# Example usage
image_path = "./octopus.png"
instruction = "Output the polygon coordinates of octopus in the image."
response = inference(image_path, instruction)
print("Model output:", response)
```
## Decode the polygons and masks from the response string
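The decoding code below assumes three names are already in scope: `response` from the inference example above, plus `width` and `height` of the original image. A minimal way to set up the image size (and, if you want to test the parser in isolation, an illustrative stand-in for `response`):
```python
from PIL import Image

# The model's coordinates are normalized to [0, 1], so the original image size
# is needed to map them back to pixels.
width, height = Image.open("./octopus.png").size

# `response` is the string returned by the inference call above. The line below
# is only an illustrative stand-in if you want to test the parsing on its own.
# response = "[[0.41, 0.22], [0.55, 0.25], [0.60, 0.48], [0.44, 0.61], [0.35, 0.40]]"
```
With these in place, the snippet below extracts every polygon from the response and rasterizes them into a single binary mask: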
```python
import re
import json
import numpy as np
import pycocotools.mask as mask_utils
class RegexPatterns:
BOXED_PATTERN = r'\\boxed\{([^}]*)\}'
BLOCK_PATTERN = r'^```$\r?\n(.*?)\r?\n^```$'
NON_NEGATIVE_FLOAT_PATTERN = (
r'(?:[1-9]\d*\.\d+|0\.\d+|\d+)'
)
BBOX_PATTERN = rf'\[\s*({NON_NEGATIVE_FLOAT_PATTERN})\s*,\s*({NON_NEGATIVE_FLOAT_PATTERN})\s*,\s*({NON_NEGATIVE_FLOAT_PATTERN})\s*,\s*({NON_NEGATIVE_FLOAT_PATTERN})\s*\]'
POINT_PATTERN = (
rf'\[\s*({NON_NEGATIVE_FLOAT_PATTERN})\s*,\s*({NON_NEGATIVE_FLOAT_PATTERN})\s*\]'
)
POLYGON_PATTERN = rf'\[\s*{POINT_PATTERN}(?:\s*,\s*{POINT_PATTERN})*\s*\]'
polygon_matches = [
m.group(0) for m in re.finditer(RegexPatterns.POLYGON_PATTERN, response, re.DOTALL)
]
pred_polygons = []
for polygon_match in polygon_matches:
polygon = json.loads(polygon_match)
pred_polygons.append(polygon)
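# Convert each polygon from normalized [0, 1] coordinates back to pixels, then
# rasterize it into a binary mask via COCO run-length encoding.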
pred_masks = []
for pred_polygon in pred_polygons:
pred_polygon = np.array(pred_polygon) * np.array([width, height])
rle = mask_utils.frPyObjects(pred_polygon.reshape((1, -1)).tolist(), height, width)
mask = mask_utils.decode(rle)
mask = np.sum(mask, axis=2, keepdims=True)
pred_masks.append(mask)
pred_mask = np.sum(pred_masks, axis=0)
pred_mask = pred_mask.sum(axis=2)
pred_mask = (pred_mask > 0).astype(np.uint8)
```
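Once `pred_mask` is computed, a quick way to sanity-check it is to overlay it on the original image. A minimal sketch using Pillow (the overlay color and opacity are arbitrary choices):
```python
import numpy as np
from PIL import Image

# Overlay the predicted mask on the original image for a quick visual check.
image = Image.open("./octopus.png").convert("RGBA")
overlay = np.zeros((height, width, 4), dtype=np.uint8)
overlay[pred_mask > 0] = [255, 0, 0, 120]  # semi-transparent red on masked pixels
result = Image.alpha_composite(image, Image.fromarray(overlay, mode="RGBA"))
result.save("octopus_mask_overlay.png")
```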