ACE-Brain committed on
Commit 2533d7e · verified · 1 Parent(s): e9036d3

Upload 10 files

Files changed (2)
  1. README.md +3 -3
  2. assets/title.png +0 -0
README.md CHANGED
@@ -6,7 +6,7 @@ license: mit
 ---
 
 <div align="center">
- <img src="./assets/acebrain.png" width=600>
+ <img src="./assets/title.png" width=600>
 </div>
 
 <br/>
@@ -26,7 +26,7 @@ license: mit
 
 ## Overview
 
- **ACE-Brain** is a spatial-centric multimodal foundation model designed to unify perception, reasoning, and decision-making across diverse embodied domains, including **spatial intelligence**, **embodied interaction**, **autonomous driving**, and **low-altitude sensing**. Built upon a unified multimodal large language model (MLLM) architecture, ACE-Brain learns a shared spatial reasoning substrate that enables generalization across heterogeneous physical environments and agent embodiments.
+ **ACE-Brain** is a spatial-centric multimodal foundation model designed to unify perception, reasoning, and decision-making across diverse embodied domains, including **spatial cognition**, **autonomous driving**, **low-altitude sensing** and **embodied interaction**. Built upon a unified multimodal large language model (MLLM) architecture, ACE-Brain learns a shared spatial reasoning substrate that enables generalization across heterogeneous physical environments and agent embodiments.
 
 Extensive evaluation across **24** benchmarks demonstrates that ACE-Brain achieves state-of-the-art or competitive performance across multiple domains, validating its effectiveness as a unified embodied intelligence model.
 
@@ -50,7 +50,7 @@ Extensive evaluation across **24** benchmarks demonstrates that ACE-Brain achiev
 
 ## Performance Highlights
 
- ACE-Brain achieves strong performance across **24 benchmarks covering Spatial Intelligence, Embodied Interaction, Autonomous Driving, and Low-Altitude Sensing**, consistently outperforming existing open-source embodied VLMs and remaining competitive with closed-source models.
+ ACE-Brain achieves strong performance across **24 benchmarks covering Spatial Cognition, Autonomous Driving, Low-Altitude Sensing and Embodied Interaction**, consistently outperforming existing open-source embodied VLMs and remaining competitive with closed-source models.
 
 The model shows robust capability in **spatial reasoning, physical interaction understanding, task-oriented decision-making, and dynamic scene interpretation**, enabling reliable performance across diverse real-world embodiment scenarios.
 
assets/title.png ADDED