Add arXiv metadata and improve model card
Hi! This PR improves the model card by:
- Adding the `arxiv` ID to the metadata to link the model with its research paper on the Hugging Face Hub.
- Updating the header image to use an absolute URL to ensure it renders correctly on the hub.
- Ensuring the GitHub repository and project page are clearly linked for better discoverability.
README.md CHANGED

````diff
@@ -1,11 +1,12 @@
 ---
-license: apache-2.0
 base_model:
 - BeingBeyond/Being-H05-2B
-tags:
-- vla
-- robotics
+license: apache-2.0
 pipeline_tag: robotics
+tags:
+- vla
+- robotics
+arxiv: 2601.12993
 ---
 
 # Being-H0.5: Scaling Human-Centric Robot Learning for Cross-Embodiment Generalization
@@ -17,7 +18,7 @@ pipeline_tag: robotics
 <div align="center">
 
 [](https://research.beingbeyond.com/being-h05)
-[](https://arxiv.org/
+[](https://arxiv.org/abs/2601.12993)
 [](https://huggingface.co/collections/BeingBeyond/being-h05)
 [](./LICENSE)
 
@@ -34,9 +35,9 @@ Being-H0.5 is a foundational VLA model that scales human-centric learning with U
 
 ## News
 
-- **[2026-01-20]**: We publish the **Being-H0.5**! Check our [Paper](https://arxiv.org/
+- **[2026-01-20]**: We publish the **Being-H0.5**! Check our [Paper](https://arxiv.org/abs/2601.12993) for technical details and [Hugging Face Model Collections](https://huggingface.co/collections/BeingBeyond/being-h05) for pretrained and post-trained models. 🔥🔥🔥
 - **[2025-08-02]**: We release the **Being-H0** codebase and pretrained models! Check our [Hugging Face Model Collections](https://huggingface.co/collections/BeingBeyond/being-h0) for more details. 🔥🔥🔥
-- **[2025-07-21]**: We publish **Being-H0**! Check our paper [here](https://arxiv.org/
+- **[2025-07-21]**: We publish **Being-H0**! Check our paper [here](https://arxiv.org/abs/2507.15597). 🎉🎉🎉
 
 ## Model Checkpoints
 
@@ -162,8 +163,6 @@ Being-H05 builds on the following excellent open-source projects:
 - [LIBERO](https://github.com/Lifelong-Robot-Learning/LIBERO): Benchmark for lifelong robot learning
 - [RoboCasa](https://github.com/robocasa/robocasa): Large-scale simulation benchmark for everyday tasks
 
-We thank the authors for their contributions to the robotics and machine learning communities.
-
 ## License
 
 Copyright (c) 2026 BeingBeyond Ltd. and/or its affiliates.
@@ -190,8 +189,8 @@ If you find our work useful, please consider citing us and give a star to our re
 ```bibtex
 @article{beingbeyond2025beingh0,
   title={Being-h0: vision-language-action pretraining from large-scale human videos},
-  author={Luo, Hao and Feng, Yicheng
+  author={Luo, Hao and Feng, Yicheng and Zhang, Wanpeng and Zheng, Sipeng and Wang, Ye and Yuan, Haoqi and Liu, Jiazheng and Xu, Chaoyi and Jin, Qin and Lu, Zongqing},
   journal={arXiv preprint arXiv:2507.15597},
   year={2025}
 }
 ```
````
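Not part of the diff itself, but as a quick illustration of what the Hub reads: a minimal stdlib-only sketch (with an abbreviated copy of the updated README inlined as a string) that extracts the YAML front matter between the leading `---` fences and confirms the new `arxiv` field is present.

```python
import re

# Abbreviated README contents after this PR (front matter as in the diff).
readme = """\
---
base_model:
- BeingBeyond/Being-H05-2B
license: apache-2.0
pipeline_tag: robotics
tags:
- vla
- robotics
arxiv: 2601.12993
---

# Being-H0.5: Scaling Human-Centric Robot Learning for Cross-Embodiment Generalization
"""

# The Hub reads model card metadata from the YAML block delimited by the
# first pair of '---' fences at the top of the file.
match = re.match(r"^---\n(.*?)\n---\n", readme, flags=re.DOTALL)
assert match is not None, "front matter block not found"
front_matter = match.group(1)

# The new field that links the model card to its paper on the Hub.
assert "arxiv: 2601.12993" in front_matter
```

The same check can be pointed at the actual `README.md` on disk to catch an accidentally broken front-matter fence before pushing.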