---
base_model:
- Qwen/Qwen3-VL-8B-Instruct
- Qwen/Qwen2.5-VL-7B-Instruct
datasets:
- nyu-visionx/VSI-Bench
- lmms-lab-si/EASI-Leaderboard-Requests
license: mit
pipeline_tag: video-text-to-text
---
# GeoThinker: Active Geometry Integration for Spatial Reasoning
GeoThinker is a framework that shifts the paradigm of spatial reasoning in Multimodal Large Language Models (MLLMs) from passive fusion to active perception. Instead of indiscriminate feature mixing, GeoThinker enables models to selectively retrieve geometric evidence from 3D encoders conditioned on internal reasoning demands.
- **Paper:** Thinking with Geometry: Active Geometry Integration for Spatial Reasoning
- **Project Page:** https://li-hao-yuan.github.io/GeoThinker/
- **Repository:** https://github.com/Li-Hao-yuan/GeoThinker
## Architecture Overview

GeoThinker empowers MLLMs to understand the 3D world through:

- **Dual-Encoder Processing:** Combines a 2D vision encoder for high-level semantic features with a 3D visual geometry encoder (VGGT) for capturing fine-grained spatial structures.
- **Spatial-Grounded Fusion (SGF):** Uses frame-strict cross-attention so that visual tokens query geometric cues only from the corresponding frame, maintaining spatial consistency.
- **Importance Gating:** Calibrates per-frame attention toward task-relevant structures (such as object boundaries) while filtering redundant noise.
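The fusion mechanism described above can be sketched in PyTorch: visual tokens of each frame cross-attend only to the geometry tokens of the same frame, and a learned per-frame gate decides how much geometric evidence is injected back into the semantic stream. This is a minimal illustrative sketch, not the released implementation; the module name, argument shapes, and gating formulation are assumptions for exposition.

```python
import torch
import torch.nn as nn


class SpatialGroundedFusion(nn.Module):
    """Sketch of frame-strict cross-attention with importance gating.

    Names and shapes are illustrative assumptions, not the paper's code.
    """

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Importance gate: one scalar in (0, 1) per frame.
        self.gate = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())

    def forward(self, vis: torch.Tensor, geo: torch.Tensor) -> torch.Tensor:
        # vis: (batch, frames, vis_tokens, dim) semantic features (2D encoder)
        # geo: (batch, frames, geo_tokens, dim) geometry features (3D encoder)
        b, f, t, d = vis.shape
        # Frame-strict: fold frames into the batch dimension so attention
        # can never cross frame boundaries.
        q = vis.reshape(b * f, t, d)
        kv = geo.reshape(b * f, geo.shape[2], d)
        fused, _ = self.attn(q, kv, kv)
        fused = fused.reshape(b, f, t, d)
        # Per-frame importance gate, computed from the mean visual token.
        g = self.gate(vis.mean(dim=2))           # (b, f, 1)
        # Residual, gated injection of geometric evidence.
        return vis + g.unsqueeze(-1) * fused


# Tiny smoke test with random features.
vis = torch.randn(2, 4, 16, 64)   # 2 clips, 4 frames, 16 tokens, dim 64
geo = torch.randn(2, 4, 32, 64)   # 32 geometry tokens per frame
out = SpatialGroundedFusion(64, num_heads=4)(vis, geo)
print(out.shape)  # same shape as vis
```

The key design point the sketch captures is that folding frames into the batch axis makes the frame restriction structural rather than mask-based: tokens from frame *i* simply never see geometry from frame *j*.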
## Performance Highlights

GeoThinker sets new state-of-the-art results on spatial-intelligence benchmarks:

- **VSI-Bench:** achieves a peak score of 72.6%.
- **EASI leaderboard:** achieves an average score of 55.0%, ranking 6th.
- Demonstrates robust generalization to complex scenarios, including embodied referring and autonomous driving.
## Model Variants
| Backbone | Training Regime | Access |
|---|---|---|
| Qwen2.5-VL-7B | Vanilla | GeoThinker-Qwen2.5VL-7B-Vanilla |
| Qwen2.5-VL-7B | Scaled | GeoThinker-Qwen2.5VL-7B-Scaled |
| Qwen3-VL-8B | Scaled | GeoThinker-Qwen3VL-8B-Scaled |
## Citation

If you find this work useful, please consider citing:

```bibtex
@article{li2026thinking,
  title={Thinking with Geometry: Active Geometry Integration for Spatial Reasoning},
  author={Li, Haoyuan and Cao, Qihang and Tang, Tao and Xiang, Kun and Guo, Zihan and Han, Jianhua and Bian, JiaWang and Xu, Hang and Liang, Xiaodan},
  year={2026}
}
```