Shalfunnn committed
Commit fb67008 · verified · 1 Parent(s): 19f7020

Update README.md

Files changed (1)
  1. README.md +7 -7
README.md CHANGED
@@ -9,18 +9,18 @@
 <div align="center">
 
 [![Paper](https://img.shields.io/badge/📄%20Paper-PDF-EA1B22?style=for-the-badge&logo=adobeacrobatreader&logoColor=fff)](https://x2robot.cn-wlcb.ufileos.com/wall_oss.pdf)
-&nbsp;&nbsp;&nbsp;&nbsp;
-[![Hugging Face](https://img.shields.io/badge/🤗%20Hugging%20Face-x--square--robot-FFB000?style=for-the-badge&logo=huggingface&logoColor=000)](https://huggingface.co/x-square-robot)
-&nbsp;&nbsp;&nbsp;&nbsp;
-[![GitHub](https://img.shields.io/badge/💻%20GitHub-181717?style=for-the-badge&logo=github&logoColor=fff)](https://github.com/X-Square-Robot/wall-x)
-&nbsp;&nbsp;&nbsp;&nbsp;
-[![Project Page](https://img.shields.io/badge/🌐%20Project%20Page-1E90FF?style=for-the-badge&logo=google-chrome&logoColor=fff)](https://x2robot.com/en/research/68bc2cde8497d7f238dde690)
+&nbsp;&nbsp;
+[![Hugging Face](https://img.shields.io/badge/Hugging%20Face-x--square--robot-FFB000?style=for-the-badge&logo=huggingface&logoColor=000)](https://huggingface.co/x-square-robot)
+&nbsp;&nbsp;
+[![GitHub](https://img.shields.io/badge/GitHub-181717?style=for-the-badge&logo=github&logoColor=fff)](https://github.com/X-Square-Robot/wall-x)
+&nbsp;&nbsp;
+[![Project Page](https://img.shields.io/badge/Project-1E90FF?style=for-the-badge&logo=google-chrome&logoColor=fff)](https://x2robot.com/en/research/68bc2cde8497d7f238dde690)
 
 </div>
 
 </div>
 
-## <a href="https://x2robot.cn-wlcb.ufileos.com/wall_oss.pdf" target="_blank">WALL-OSS: Igniting VLMs toward the Embodied Space</a>
+## <a href="https://x2robot.cn-wlcb.ufileos.com/wall_oss.pdf" target="_blank"><strong>WALL-OSS: Igniting VLMs toward the Embodied Space</strong></a>
 
 We introduce **WALL-OSS**, an end-to-end embodied foundation model that leverages large-scale multimodal pretraining to achieve (1) embodiment-aware vision--language understanding, (2) strong language--action association, and (3) robust manipulation capability.
 Our approach employs a tightly coupled architecture and multi-strategies training curriculum that enables Unified Cross-Level CoT—seamlessly unifying instruction reasoning, subgoal decomposition, and fine-grained action synthesis within a single differentiable framework.