# FantasyVLN
The model weights of **FantasyVLN**.

**FantasyVLN** is a unified multimodal Chain-of-Thought (CoT) reasoning framework that enables efficient and precise navigation based on natural language instructions and visual observations. **FantasyVLN** combines the benefits of textual, visual, and multimodal CoT reasoning by constructing a unified representation space across these reasoning modes. To enable efficient reasoning, we align these CoT reasoning modes with non-CoT reasoning during training, while using only non-CoT reasoning at test time. Notably, we perform visual CoT in the latent space of a [VAR](https://github.com/FoundationVision/VAR) model, where only low-scale latent representations are predicted. Compared to traditional pixel-level visual CoT methods, our approach significantly improves both training and inference efficiency.
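
To make the alignment idea above concrete, here is a minimal, hypothetical PyTorch sketch. It is **not** the released FantasyVLN code: the module names (`ToyNavigator`, `latent_head`), feature shapes, codebook size, and equal loss weighting are all illustrative assumptions. It only shows the three ingredients the paragraph describes: action supervision for both reasoning modes, a consistency term pulling the non-CoT representation toward the CoT one, and visual-CoT supervision on low-scale latent tokens rather than pixels.

```python
# Hypothetical sketch of the training-time alignment objective described above;
# not the official FantasyVLN implementation. All names, shapes, codebook size,
# and loss weights are illustrative assumptions.
import torch
import torch.nn.functional as F


class ToyNavigator(torch.nn.Module):
    """Stand-in policy: fused instruction/observation features -> action logits."""

    def __init__(self, d_model: int = 256, n_actions: int = 4, codebook: int = 4096):
        super().__init__()
        self.backbone = torch.nn.GRU(d_model, d_model, batch_first=True)
        self.action_head = torch.nn.Linear(d_model, n_actions)
        # Predicts a low-scale latent token as the visual-CoT target (codebook
        # size is an assumption; in FantasyVLN the tokens come from a VAR model).
        self.latent_head = torch.nn.Linear(d_model, codebook)

    def forward(self, feats: torch.Tensor):
        h, _ = self.backbone(feats)   # (B, T, d_model)
        h_last = h[:, -1]             # summary state of the step sequence
        return h_last, self.action_head(h_last)


def training_step(model, non_cot_feats, cot_feats, low_scale_tokens, actions):
    """One training step; only the cheap non-CoT pass is kept at test time."""
    h_plain, logits_plain = model(non_cot_feats)  # non-CoT pass
    h_cot, logits_cot = model(cot_feats)          # CoT pass (longer input)

    # 1) Action supervision for both reasoning modes.
    loss_act = F.cross_entropy(logits_plain, actions) + F.cross_entropy(logits_cot, actions)

    # 2) Align the non-CoT state with the (detached) CoT state, so the
    #    non-CoT pass inherits the CoT behaviour.
    loss_align = F.mse_loss(h_plain, h_cot.detach())

    # 3) Visual CoT in latent space: predict only a coarse low-scale token
    #    instead of full-resolution pixels.
    loss_vis = F.cross_entropy(model.latent_head(h_cot), low_scale_tokens)

    return loss_act + loss_align + loss_vis
```

Under these assumptions, inference needs only the single non-CoT pass: `_, logits = model(non_cot_feats)` followed by `logits.argmax(-1)`, which is where the efficiency gain over pixel-level visual CoT comes from.
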
See the official code for details: [https://fantasy-amap.github.io/fantasy-vln](https://fantasy-amap.github.io/fantasy-vln/)