---
license: apache-2.0
pipeline_tag: image-text-to-text
library_name: transformers
tags:
  - 3d-vision
  - visual-grounding
  - spatial-reasoning
---

# Reasoning in Space via Grounding in the World (GS-Reasoner)

We present Grounded-Spatial Reasoner (GS-Reasoner), the first 3D-LLM that bridges 3D visual grounding and spatial reasoning.

This model was introduced in the paper [Reasoning in Space via Grounding in the World](https://arxiv.org/abs/2510.13800).

- Project page: https://yiming-cc.github.io/gs-reasoner/
- Code: https://github.com/WU-CVGL/GS-Reasoner

## Abstract

In this paper, we claim that 3D visual grounding is the cornerstone of spatial reasoning and introduce the Grounded-Spatial Reasoner (GS-Reasoner) to explore the effective spatial representations that bridge the gap between them. Existing 3D LLMs suffer from the absence of a unified 3D representation capable of jointly capturing semantic and geometric information. This deficiency is manifested either in poor performance on grounding or in an excessive reliance on external modules, ultimately hindering the seamless integration of grounding and spatial reasoning. To address this, we propose a simple yet effective dual-path pooling mechanism that tightly aligns geometric features with both semantic and positional cues, constructing a unified image patch-based 3D representation that encapsulates all essential information without increasing the number of input tokens. Leveraging this holistic representation, GS-Reasoner is the first 3D LLM that achieves autoregressive grounding entirely without external modules while delivering performance comparable to state-of-the-art models, establishing a unified and self-contained framework for 3D spatial reasoning. To further bridge grounding and spatial reasoning, we introduce the Grounded Chain-of-Thought (GCoT) dataset. This dataset is meticulously curated to include both 3D bounding box annotations for objects referenced in reasoning questions and step-by-step reasoning paths that integrate grounding as a core component of the problem-solving process. Extensive experiments demonstrate that GS-Reasoner achieves impressive results on 3D visual grounding, which in turn significantly enhances its spatial reasoning capabilities, leading to state-of-the-art performance.
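The dual-path pooling mechanism is only characterized at a high level above; the exact formulation is in the paper and the official code. Purely as an illustration of the idea, the PyTorch sketch below fuses point-level geometric features into the existing image-patch tokens through a semantic path and a positional path, then average-pools them per patch so the token count does not grow. All module names, tensor shapes, and the average-pooling choice here are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DualPathPoolingSketch(nn.Module):
    """Illustrative only: pool point-level geometric features into the existing
    image-patch tokens along a semantic path and a positional path, keeping the
    original number of input tokens unchanged."""

    def __init__(self, geo_dim: int, patch_dim: int):
        super().__init__()
        self.semantic_path = nn.Linear(geo_dim, patch_dim)  # align geometry with patch semantics
        self.position_path = nn.Linear(3, patch_dim)        # inject positional (xyz) cues

    def forward(self, patch_tokens, point_feats, point_xyz, point_to_patch):
        # patch_tokens:   (B, P, D)       semantic tokens from the 2D image encoder
        # point_feats:    (B, N, geo_dim) geometric features from a 3D point encoder
        # point_xyz:      (B, N, 3)       coordinates of each point
        # point_to_patch: (B, N) int64    index of the patch each point projects onto
        B, P, D = patch_tokens.shape
        geo = self.semantic_path(point_feats) + self.position_path(point_xyz)  # (B, N, D)

        # Average-pool the per-point features into their corresponding patches.
        pooled = torch.zeros_like(patch_tokens)
        counts = patch_tokens.new_zeros(B, P, 1)
        pooled.scatter_add_(1, point_to_patch.unsqueeze(-1).expand(-1, -1, D), geo)
        counts.scatter_add_(1, point_to_patch.unsqueeze(-1), geo.new_ones(B, geo.shape[1], 1))
        pooled = pooled / counts.clamp(min=1.0)

        # Fuse geometry into the patch tokens without adding any tokens.
        return patch_tokens + pooled


if __name__ == "__main__":
    fuse = DualPathPoolingSketch(geo_dim=32, patch_dim=64)
    out = fuse(torch.randn(2, 16, 64),            # 16 patch tokens per sample
               torch.randn(2, 100, 32),           # 100 points per sample
               torch.randn(2, 100, 3),
               torch.randint(0, 16, (2, 100)))
    print(out.shape)  # torch.Size([2, 16, 64]) -- token count unchanged
```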

## Installation and Setup

For detailed installation instructions and data preprocessing, please refer to the official GitHub repository.

To set up the environment, follow these steps:

```bash
conda create -n gs-reasoner python=3.11 -y
conda activate gs-reasoner

git clone git@github.com:WU-CVGL/GS-Reasoner.git
cd GS-Reasoner

# install the GS-Reasoner package
pip install -e .

# (optional) extra system dependencies for opencv-python
sudo apt update
sudo apt install -y libgl1 libglib2.0-0 libsm6 libxext6 libxrender1

# (optional) install gcc
conda install -c conda-forge gcc=13.2 gxx=13.2 -y

# (optional) install CUDA toolkit 12.4
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update
sudo apt-get -y install cuda-toolkit-12-4

# install dependencies for Sonata
cd llava/submodules/sonata
pip install -r requirements.txt
cd ../../..

# install dependencies for VSI-Bench evaluation
cd llava/submodules/lmms_eval
pip install -r requirements.txt
cd ../../..
```

## Model Weights

We provide two pretrained model checkpoints:

- **GS-Reasoner**: the main model used in our paper, producing more deterministic chain-of-thought reasoning.
- **GS-Reasoner-Diverse**: a variant that generates more diverse chain-of-thought outputs with only a minor performance drop (less than 1.0 on VSI-Bench).

To use them, download the checkpoints and place them under the `ckpt/` directory. For more advanced usage and inference examples, please refer to the official GitHub repository.
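As a convenience, the snippet below shows one possible way to fetch a checkpoint into `ckpt/` with the `huggingface_hub` library. The `repo_id` used here is a placeholder assumption; use the actual Hub repository name from the Hub page or the GitHub README.

```python
# Minimal sketch: download a checkpoint into ckpt/ with huggingface_hub.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="your-org/GS-Reasoner-Diverse",   # hypothetical repo id, replace with the real one
    local_dir="ckpt/GS-Reasoner-Diverse",     # place weights under ckpt/ as expected by the code
)
```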

## Citation

If you find our work helpful or inspiring, please consider citing it:

```bibtex
@article{chen2025reasoning,
  title={Reasoning in Space via Grounding in the World},
  author={Chen, Yiming and Qi, Zekun and Zhang, Wenyao and Jin, Xin and Zhang, Li and Liu, Peidong},
  journal={arXiv preprint arXiv:2510.13800},
  year={2025}
}
```