Update model card: add robotics pipeline tag and paper link
Hi! I'm Niels from the community science team at Hugging Face.
I've opened this PR to improve the metadata and documentation for Robometer. I've added `pipeline_tag: robotics` to the YAML metadata so the model appears in the Robotics section of the Hub, and I've updated the model card with the official paper link and project page URL from the paper.
README.md CHANGED

````diff
@@ -1,16 +1,17 @@
 ---
-license: apache-2.0
 base_model: Qwen/Qwen3-VL-4B-Instruct
-tags:
-- reward model
-- robot learning
-- foundation models
 library_name: transformers
+license: apache-2.0
+pipeline_tag: robotics
+tags:
+- reward model
+- robot learning
+- foundation models
 ---
 
 # Robometer 4B
 
-**
+[**Project Page**](https://robometer.github.io/) | [**GitHub**](https://github.com/robometer/robometer) | [**Paper**](https://huggingface.co/papers/2603.02115)
 
 **Robometer** is a general-purpose vision-language reward model for robotics. It is trained on [RBM-1M](https://huggingface.co/datasets/) with **Qwen3-VL-4B** to predict **per-frame progress**, **per-frame success**, and **trajectory preferences** from rollout videos. The model combines (1) frame-level progress supervision on expert data and (2) trajectory-comparison preference supervision, so it can learn from both successful and failed rollouts and generalize across diverse robot embodiments and tasks.
 
@@ -61,6 +62,6 @@ If you use this model, please cite:
 author={Anthony Liang* and Yigit Korkmaz* and Jiahui Zhang and Minyoung Hwang and Abrar Anwar and Sidhant Kaushik and Aditya Shah and Alex S. Huang and Luke Zettlemoyer and Dieter Fox and Yu Xiang and Anqi Li and Andreea Bobu and Abhishek Gupta and Stephen Tu† and Erdem B{\i}y{\i}k† and Jesse Zhang†},
 year={2026},
 url={https://github.com/robometer/robometer},
-note={arXiv
+note={arXiv:2603.02115}
 }
-```
+```
````