Robotics
Transformers
Safetensors
zhushaohao committed (verified) · Commit 77d71f0 · 1 parent: 49bcc65

Update README.md

Files changed (1): README.md (+4 -5)
README.md CHANGED
@@ -7,22 +7,21 @@ license: cc-by-nc-sa-4.0
 ![Transformers](https://img.shields.io/badge/%F0%9F%A4%97%20Transformers-9cf?style=flat)
 ![PyTorch](https://img.shields.io/badge/PyTorch-EE4C2C?logo=pytorch&logoColor=white)
 
----
 ## Model Description
 
 VL-LN Bench is the first benchmark for **Interactive Instance Object Navigation (IION)**, where an embodied agent must locate a specific object instance in a realistic 3D home while engaging in **free-form natural-language dialogue**. It also provides an **automated data-collection pipeline** that generates large-scale training data for learning interactive navigation behaviors. Using this dataset, we train an **IION base model** that shares the same architecture as **InternVLA-N1**.
 
 The resulting model demonstrates baseline competence on IION: it can search for a specific instance in **previously unseen** environments. During exploration, the agent can either **move** by predicting a pixel-goal waypoint or **ask** a question to reduce ambiguity and improve task success and efficiency.
 
----
-### 🔗 Resources
+
+### Resources
 
 [![Code](https://img.shields.io/badge/GitHub-VL--LN--Bench-181717?logo=github)](https://github.com/InternRobotics/InternNav)
 [![VL-LN Paper — arXiv](https://img.shields.io/badge/arXiv-VL--LN--Bench-B31B1B?logo=arxiv&logoColor=white)](https://arxiv.org/abs/2512.08186)
 [![Project Page — VL-LN-Bench](https://img.shields.io/badge/Project_Page-VL--LN--Bench-4285F4?logo=google-chrome&logoColor=white)](https://0309hws.github.io/VL-LN.github.io/)
 [![Dataset](https://img.shields.io/badge/Dataset-VL--LN--Bench-FF6F00?logo=huggingface&logoColor=white)](https://huggingface.co/datasets/InternRobotics/InternData-N1)
 
----
+
 ## Usage
 
-For inference and evaluation please refer to the [VL-LN-Bench repository](https://github.com/InternRobotics/InternNav).
+For inference and evaluation, please refer to the [VL-LN-Bench repository](https://github.com/InternRobotics/InternNav).
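
The Usage section defers entirely to the repository, so the following is only a minimal loading sketch, assuming the checkpoint is a custom Transformers architecture shipped as safetensors (per the Transformers/Safetensors/PyTorch tags on this page). The repo id placeholder, the `AutoModel`/`AutoProcessor` entry points, and `trust_remote_code=True` are all assumptions, not the confirmed API.

```python
# Hypothetical loading sketch -- not the confirmed API for this model.
# The repo id is a placeholder; the real id is the one on this model page.
import torch
from transformers import AutoModel, AutoProcessor

MODEL_ID = "InternRobotics/your-model-id"  # placeholder, not a real repo id

# Custom Hub architectures are typically loaded with trust_remote_code=True.
processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModel.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # common default for inference; adjust to your hardware
    trust_remote_code=True,
).eval()
```

For actual IION inference (pixel-goal waypoint prediction and question asking) and benchmark evaluation, use the scripts in the [VL-LN-Bench repository](https://github.com/InternRobotics/InternNav), as the README instructs.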