sthui committed
Commit fc4b9e1 · verified · 1 Parent(s): bb3e9f0

Update README.md

Files changed (1)
  1. README.md +16 -6
README.md CHANGED
@@ -1,9 +1,19 @@
- # This is an anonymous version for peer review.

- Paper: Towards Pixel-level VLM Perception via Simple Points Prediction


- # Introduction

  We present **SimpleSeg**, **a strikingly simple yet highly effective approach to endow Multimodal Large Language Models (MLLMs) with native pixel-level perception**.
  Our method reframes segmentation as a simple sequence generation problem: the model directly predicts a **sequence of points** (textual coordinates) delineating object boundaries, entirely within its language space.
@@ -12,20 +22,20 @@ We find that **the standard MLLM architecture possesses a strong, inherent capac
  On segmentation benchmarks, SimpleSeg achieves performance that is comparable to, and often surpasses, that of methods relying on complex, task-specific designs.
  This work shows that precise spatial understanding can emerge from simple point prediction, challenging the prevailing need for auxiliary components and paving the way for more unified and capable VLMs.

- # Method

  ![](method.png)

  In this work, we explore the limits of MLLM pixel-level perception by predicting the next point in a contour with the simplest approach possible.
  Without introducing any complex architectures or special patterns, we show how even minimalistic point prediction can achieve effective segmentation at the pixel level.

- # Key Benefits

  - **Simplicity**: SimpleSeg requires no specialized modules and adheres to the standard MLLM architecture; it can be seamlessly and efficiently integrated as a new, core pre-training task for foundation models, similar to visual grounding.
  - **Task Generality**: By framing segmentation as a text-generation problem, our approach is inherently flexible. The model can be easily adapted to a wide range of vision-language tasks that require precise spatial localization.
  - **Interpretable Output**: The model generates explicit, human-readable coordinate sequences instead of dense pixel masks. This transparency simplifies debugging and makes the output directly usable for downstream applications like interactive editing or tool use.

- # Performance

  - **Referring Expression Segmentation** results
 
+ ---
+ license: mit
+ library_name: transformers
+ ---
+ # Towards Pixel-level VLM Perception via Simple Points Prediction

+ <div align="center">
+ <a href="">
+ <b>📄 Tech Report</b>
+ </a> &nbsp;|&nbsp;
+ <a href="">
+ <b>📄 GitHub</b></a>
+ </div>


+ ## Introduction

  We present **SimpleSeg**, **a strikingly simple yet highly effective approach to endow Multimodal Large Language Models (MLLMs) with native pixel-level perception**.
  Our method reframes segmentation as a simple sequence generation problem: the model directly predicts a **sequence of points** (textual coordinates) delineating object boundaries, entirely within its language space.
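As a minimal illustration (the exact serialization of points is defined in the tech report; plain `(x, y)` pairs in pixel coordinates are assumed here), a predicted point sequence can be decoded into a binary mask with nothing more than a regex and a polygon fill:

```python
# Illustrative only: the real output format of SimpleSeg is specified in the
# tech report; here we assume points are emitted as plain "(x, y)" pairs.
import re

import numpy as np
from PIL import Image, ImageDraw


def points_to_mask(text: str, width: int, height: int) -> np.ndarray:
    """Parse a textual point sequence and rasterize it as a binary mask."""
    # Extract all "(x, y)" pairs, e.g. "(12, 34) (56, 78) ..."
    pairs = re.findall(r"\((\d+)\s*,\s*(\d+)\)", text)
    polygon = [(int(x), int(y)) for x, y in pairs]
    mask = Image.new("1", (width, height), 0)
    if len(polygon) >= 3:  # need at least a triangle to fill an area
        ImageDraw.Draw(mask).polygon(polygon, outline=1, fill=1)
    return np.array(mask, dtype=np.uint8)


# Hypothetical model output for a query such as "the dog on the left"
prediction = "(102, 40) (180, 44) (196, 150) (110, 170) (96, 98)"
mask = points_to_mask(prediction, width=256, height=256)
print(mask.sum(), "foreground pixels")
```

PIL's polygon fill is used here only for brevity; any rasterizer would serve the same purpose.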
 
  On segmentation benchmarks, SimpleSeg achieves performance that is comparable to, and often surpasses, that of methods relying on complex, task-specific designs.
  This work shows that precise spatial understanding can emerge from simple point prediction, challenging the prevailing need for auxiliary components and paving the way for more unified and capable VLMs.

+ ## Method

  ![](method.png)

  In this work, we explore the limits of MLLM pixel-level perception by predicting the next point in a contour with the simplest approach possible.
  Without introducing any complex architectures or special patterns, we show how even minimalistic point prediction can achieve effective segmentation at the pixel level.
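Conversely, point targets of this kind can be derived from ordinary segmentation masks by tracing and subsampling a contour. A minimal sketch with OpenCV (the sampling budget and the `(x, y)` text format are assumptions, not the paper's exact recipe):

```python
# Illustrative only: how a ground-truth mask could be turned into a textual
# point target; the paper's actual sampling and formatting may differ.
import cv2
import numpy as np


def mask_to_point_string(mask: np.ndarray, max_points: int = 24) -> str:
    """Trace the largest contour of a binary mask and serialize it as text."""
    contours, _ = cv2.findContours(
        mask.astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
    )
    if not contours:
        return ""
    contour = max(contours, key=cv2.contourArea).squeeze(1)  # (N, 2) array of (x, y)
    # Subsample to a fixed budget of points, keeping the traversal order.
    idx = np.linspace(0, len(contour) - 1, num=min(max_points, len(contour)), dtype=int)
    return " ".join(f"({x}, {y})" for x, y in contour[idx])


# Toy example: a filled square becomes a short, human-readable point sequence.
mask = np.zeros((64, 64), dtype=np.uint8)
mask[16:48, 16:48] = 1
print(mask_to_point_string(mask))
```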

+ ## Key Benefits

  - **Simplicity**: SimpleSeg requires no specialized modules and adheres to the standard MLLM architecture; it can be seamlessly and efficiently integrated as a new, core pre-training task for foundation models, similar to visual grounding.
  - **Task Generality**: By framing segmentation as a text-generation problem, our approach is inherently flexible. The model can be easily adapted to a wide range of vision-language tasks that require precise spatial localization.
  - **Interpretable Output**: The model generates explicit, human-readable coordinate sequences instead of dense pixel masks. This transparency simplifies debugging and makes the output directly usable for downstream applications like interactive editing or tool use.

+ ## Performance

  - **Referring Expression Segmentation** results
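Since the new front matter declares `library_name: transformers`, the checkpoint is presumably loaded through the standard Auto classes. The snippet below is a purely illustrative sketch; the repo id, instruction format, and model class are placeholders rather than documented usage:

```python
# Hypothetical usage sketch; the repo id and prompt format are placeholders.
from transformers import AutoModelForVision2Seq, AutoProcessor
from PIL import Image

repo_id = "<org>/<simpleseg-model>"  # placeholder, not a real repository id
processor = AutoProcessor.from_pretrained(repo_id)
model = AutoModelForVision2Seq.from_pretrained(repo_id)

image = Image.open("example.jpg")
prompt = "Segment: the dog on the left"  # placeholder instruction format
inputs = processor(images=image, text=prompt, return_tensors="pt")

# The model is expected to emit a textual point sequence delineating the object.
output_ids = model.generate(**inputs, max_new_tokens=256)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```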