Robotics
Transformers
Safetensors
nielsr HF Staff committed on
Commit fe24cc6 · verified · 1 parent: a08dc92

Add pipeline tag, library name, and improve model card


Hi! I'm Niels from the Hugging Face community science team. I've opened this PR to improve the model card for the VL-LN-Bench basemodel:

* Added `library_name: transformers` and `pipeline_tag: robotics` to the metadata. This will improve discoverability and enable features like the "Use in Transformers" button.
* Added an introductory link to the Hugging Face paper page.
* Corrected the arXiv paper link in the "Resources" section to `https://arxiv.org/abs/2512.22342`, matching the paper's ID.
* Included the BibTeX citation block from the paper's GitHub repository.

These changes should make the model easier to find and use for the community. Let me know if you have any feedback!
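The `library_name` and `pipeline_tag` fields live in the YAML front matter at the top of `README.md`, delimited by `---` fences. As a minimal illustration of what the Hub (and similar tooling) reads from that block, here is a small sketch that parses the front matter added by this PR; the `parse_front_matter` helper is hypothetical, and the snippet assumes the third-party PyYAML package is installed:

```python
import yaml  # third-party: PyYAML (pip install pyyaml)

# Model-card front matter as added by this PR (illustrative copy).
card = """\
---
license: cc-by-nc-sa-4.0
library_name: transformers
pipeline_tag: robotics
---

# VL-LN-Bench basemodel
"""

def parse_front_matter(text: str) -> dict:
    """Extract the YAML block delimited by the leading '---' fences."""
    if not text.startswith("---"):
        return {}
    # Split off the text before the first fence, the YAML block, and the body.
    _, block, _body = text.split("---", 2)
    return yaml.safe_load(block) or {}

meta = parse_front_matter(card)
print(meta["pipeline_tag"])   # robotics
print(meta["library_name"])   # transformers
```

The Hub uses these two fields to place the model under the "robotics" task filter and to surface Transformers-specific UI such as the "Use in Transformers" button.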

Files changed (1): README.md (+23 −4)

README.md CHANGED
@@ -1,8 +1,13 @@
 ---
 license: cc-by-nc-sa-4.0
+library_name: transformers
+pipeline_tag: robotics
 ---
+
 # VL-LN-Bench basemodel
 
+This repository contains the base model for the paper [VL-LN Bench: Towards Long-horizon Goal-oriented Navigation with Active Dialogs](https://huggingface.co/papers/2512.22342).
+
 ![License](https://img.shields.io/badge/License-CC_BY--NC--SA_4.0-lightgrey.svg)
 ![Transformers](https://img.shields.io/badge/%F0%9F%A4%97%20Transformers-9cf?style=flat)
 ![PyTorch](https://img.shields.io/badge/PyTorch-EE4C2C?logo=pytorch&logoColor=white)
@@ -13,15 +18,29 @@ VL-LN Bench is the first benchmark for **Interactive Instance Object Navigation
 
 The resulting model demonstrates baseline competence on IION: it can search for a specific instance in **previously unseen** environments. During exploration, the agent can either **move** by predicting a pixel-goal waypoint or **ask** a question to reduce ambiguity and improve task success and efficiency.
 
-
 ### Resources
 
 [![Code](https://img.shields.io/badge/GitHub-VL--LN--Bench-181717?logo=github)](https://github.com/InternRobotics/InternNav)
-[![VL-LN Paper — arXiv](https://img.shields.io/badge/arXiv-VL--LN--Bench-B31B1B?logo=arxiv&logoColor=white)](https://arxiv.org/abs/2512.08186)
+[![VL-LN Paper — arXiv](https://img.shields.io/badge/arXiv-VL--LN--Bench-B31B1B?logo=arxiv&logoColor=white)](https://arxiv.org/abs/2512.22342)
 [![Project Page — VL-LN-Bench](https://img.shields.io/badge/Project_Page-VL--LN--Bench-4285F4?logo=google-chrome&logoColor=white)](https://0309hws.github.io/VL-LN.github.io/)
 [![Dataset](https://img.shields.io/badge/Dataset-VL--LN--Bench-FF6F00?logo=huggingface&logoColor=white)](https://huggingface.co/datasets/InternRobotics/InternData-N1)
 
-
 ## Usage
 
-For inference and evaluation, please refer to the [VL-LN-Bench repository](https://github.com/InternRobotics/InternNav).
+For inference and evaluation, please refer to the [VL-LN-Bench repository](https://github.com/InternRobotics/InternNav).
+
+## Citation
+
+If you find our work helpful, please cite:
+
+```bibtex
+@misc{huang2025vllnbenchlonghorizongoaloriented,
+      title={VL-LN Bench: Towards Long-horizon Goal-oriented Navigation with Active Dialogs},
+      author={Wensi Huang and Shaohao Zhu and Meng Wei and Jinming Xu and Xihui Liu and Hanqing Wang and Tai Wang and Feng Zhao and Jiangmiao Pang},
+      year={2025},
+      eprint={2512.22342},
+      archivePrefix={arXiv},
+      primaryClass={cs.RO},
+      url={https://arxiv.org/abs/2512.22342},
+}
+```