Update README.md
README.md
CHANGED

tags:
- vlm
- chart-understanding
library_name: transformers
---

# BiPS — Bi-directional Perceptual Shaping for Multimodal Reasoning

This model card describes **BiPS (Bi-directional Perceptual Shaping)**, a **training-time** framework proposed in *“See Less, See Right: Bi-directional Perceptual Shaping For Multimodal Reasoning”* (CVPR 2026).

- Paper: https://arxiv.org/abs/2512.22120
- Code: https://github.com/zss02/BiPS

## What is BiPS?

Many VLMs fail at multimodal reasoning because they **look at the wrong visual evidence**, especially with charts, thin lines, intersections, and small regions. BiPS improves **question-conditioned visual grounding** by turning “where-to-look” supervision into training signals, **without requiring extra tools at inference time**.

## Key idea

BiPS trains a VLM with two complementary view transformations (a minimal sketch of both follows the list):

- **Evidence-Preserving View**: keep only the visual evidence needed to answer and reduce distractions.
  → Enforce **consistency** between predictions from the original image and the preserved view.
- **Evidence-Ablated View**: remove the key evidence so the image no longer supports the answer.
  → Enforce **separation** so the model cannot rely on shortcuts.
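
The sketch below is illustrative only: it assumes the question-conditioned evidence is available as a single bounding box, which is an assumption of this example rather than the repo's actual view-construction pipeline.

```python
# Hypothetical sketch: evidence assumed to be one box (x0, y0, x1, y1).
from PIL import Image, ImageDraw

NEUTRAL = (127, 127, 127)  # assumed neutral fill color

def evidence_preserving_view(image: Image.Image, box: tuple) -> Image.Image:
    """Keep only the evidence region; grey out every distractor."""
    view = Image.new("RGB", image.size, NEUTRAL)
    view.paste(image.convert("RGB").crop(box), box[:2])  # restore evidence in place
    return view

def evidence_ablated_view(image: Image.Image, box: tuple) -> Image.Image:
    """Mask out the evidence region so the image no longer supports the answer."""
    view = image.convert("RGB")  # convert() returns a copy, safe to draw on
    ImageDraw.Draw(view).rectangle(box, fill=NEUTRAL)
    return view
```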

These constraints are typically implemented with **KL-based objectives** and can be integrated into **GRPO** training.
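
One way such objectives could look is sketched below; this is an assumption-laden illustration, not the paper's exact loss (in particular, the hinged margin for the separation term is an illustrative choice).

```python
# Hedged sketch: pull the answer distribution under the preserved view toward
# the one under the original image (consistency), and push the distribution
# under the ablated view away from it (separation, here via a hinged KL).
import torch
import torch.nn.functional as F

def bips_style_shaping_loss(logits_orig: torch.Tensor,
                            logits_pres: torch.Tensor,
                            logits_abl: torch.Tensor,
                            margin: float = 1.0,
                            w_cons: float = 1.0,
                            w_sep: float = 1.0) -> torch.Tensor:
    log_p_orig = F.log_softmax(logits_orig, dim=-1)
    log_p_pres = F.log_softmax(logits_pres, dim=-1)
    log_p_abl = F.log_softmax(logits_abl, dim=-1)

    # Consistency: KL(p_orig || p_pres) should be small.
    kl_pres = F.kl_div(log_p_pres, log_p_orig, log_target=True,
                       reduction="batchmean")

    # Separation: KL(p_orig || p_abl) should be large; penalize it only
    # while it stays below the (assumed) margin.
    kl_abl = F.kl_div(log_p_abl, log_p_orig, log_target=True,
                      reduction="batchmean")
    separation = torch.relu(margin - kl_abl)

    return w_cons * kl_pres + w_sep * separation
```

In GRPO, a term like this would be folded into the policy update; see the official repo for how the constraints are actually integrated.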

## Why it matters

- Better **fine-grained evidence alignment**
- Less “guessing” from language priors
- **No additional inference overhead** (the views are used only during training)

## How to use

BiPS is mainly a **training recipe**. To apply it:

1. Follow the official repo to set up dependencies and scripts.
2. Train your base VLM on BiPS-generated **preserve/ablate** views.
3. Use the resulting checkpoint as a standard VLM at inference time, with no extra steps (see the sketch after this list).
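
A hypothetical loading example: the model id is a placeholder, and the generic Auto classes are an assumption; substitute whatever processor and model classes match your base VLM.

```python
# Sketch: a BiPS-trained checkpoint is consumed like any ordinary
# transformers VLM; nothing BiPS-specific happens at inference time.
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

model_id = "your-org/your-bips-checkpoint"  # placeholder, not a released model
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id)

image = Image.open("chart.png")
question = "At which year do the two lines intersect?"

inputs = processor(images=image, text=question, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```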

## Citation

```bibtex
@article{zhang2025bips,
  title={See Less, See Right: Bi-directional Perceptual Shaping For Multimodal Reasoning},
  author={Zhang, Shuoshuo and Zhang, Yizhen and Fu, Jingjing and Song, Lei and Bian, Jiang and Yang, Yujiu and Wang, Rui},
  journal={arXiv preprint arXiv:2512.22120},
  year={2025}
}
```
|