---
license: other
license_name: attribution-noncommercial-sharealike-4.0-international
license_link: LICENSE
language:
  - en
metrics:
  - accuracy
  - bleu
base_model:
  - Qwen/Qwen3-VL-4B-Instruct
tags:
  - embodied
---

# Model Card for Thinker-4B

**Thinker: a vision-language foundation model for embodied intelligence**

## Model Details

### Model Description

We are pleased to open-source **Thinker**, a state-of-the-art vision-language foundation model engineered for embodied intelligence. Conventional VLMs often suffer from perspective confusion and overlook temporal context; Thinker is designed to bridge the gap between general scene understanding and robust, robot-centric task-level reasoning.

Through high-quality dataset curation, multi-stage training, and reinforcement learning, Thinker exhibits advanced capabilities across four core dimensions:

- **Task Planning** with future-state prediction
- **Spatial Intelligence** grounded in an egocentric coordinate system
- **Temporal Understanding** through historical state integration
- **Visual Grounding** with precise localization

Leveraging these capabilities, Thinker sets new state-of-the-art results across 7 embodied-AI benchmarks covering task planning, visual grounding, and spatial understanding, significantly outperforming existing open-source, closed-source, and specialized baselines. These results highlight its potential as a foundation for embodied intelligence and autonomous robotic decision-making.
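Since Thinker is built on Qwen3-VL-4B-Instruct, inference should follow the standard `transformers` image-text-to-text flow. The sketch below is an illustration, not the official usage snippet: it assumes a `transformers` version with Qwen3-VL support, and it loads the base model id as a placeholder — swap in the Thinker checkpoint path once you have it.

```python
def build_messages(image_url: str, question: str) -> list:
    """Build a chat-template message list with one image and one text turn."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": image_url},
                {"type": "text", "text": question},
            ],
        }
    ]


def run(image_url: str, question: str, max_new_tokens: int = 256) -> str:
    """Load the model and answer a question about an image (sketch)."""
    # Imported lazily so the message-building helper stays dependency-free.
    from transformers import AutoModelForImageTextToText, AutoProcessor

    # Assumed id: base model as a stand-in for the Thinker checkpoint.
    model_id = "Qwen/Qwen3-VL-4B-Instruct"
    processor = AutoProcessor.from_pretrained(model_id)
    model = AutoModelForImageTextToText.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    inputs = processor.apply_chat_template(
        build_messages(image_url, question),
        add_generation_prompt=True,
        tokenize=True,
        return_dict=True,
        return_tensors="pt",
    ).to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return processor.batch_decode(
        out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )[0]
```

For embodied tasks such as planning or grounding queries, only the text turn changes; the same chat-template pipeline applies.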