[](https://github.com/bigai-ai/tongsim)
[](https://arxiv.org/abs/2512.20206)
</div>
# GitHub

Visit the project page: https://tongsim-platform.github.io/tongsim
# What is TongSIM-Asset?
As artificial intelligence (AI) rapidly advances, especially in multimodal large language models, the research focus is shifting from single-modality text processing to the more complex domains of multimodal and embodied AI. Embodied intelligence trains agents within realistic simulated environments, learning from physical interaction and action feedback rather than from conventional labeled datasets. To foster embodied AI research, we introduce [`TongSIM`](https://github.com/bigai-ai/tongsim), a high-fidelity, general-purpose platform for training and evaluating embodied agents.