---
license: apache-2.0
datasets:
- behavior-1k/2025-challenge-demos
- IliaLarchenko/behavior_224_rgb
tags:
- robotics
---

This is an intermediate checkpoint that we used in our [1st place solution of the 2025 BEHAVIOR Challenge](https://github.com/IliaLarchenko/behavior-1k-solution).

The checkpoint was obtained by training the policy on 50 tasks simultaneously for ~2 weeks.

It is not part of our [final submission](https://huggingface.co/IliaLarchenko/behavior_submission). We did not run the full evaluation on this checkpoint, but we expect it to achieve a q-score of roughly 15-20%.

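A minimal usage sketch: the checkpoint files can be fetched from the Hub with `huggingface_hub` and then fed to the pipeline in the solution repository linked above. The `repo_id` below is a placeholder, not the confirmed id of this repository.

```python
# Sketch: download the checkpoint files from the Hugging Face Hub.
# NOTE: "IliaLarchenko/<this-repo>" is a placeholder; substitute this model repository's actual id.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="IliaLarchenko/<this-repo>")
print(f"Checkpoint downloaded to: {local_dir}")
# The files in `local_dir` can then be passed to the training/evaluation scripts
# from https://github.com/IliaLarchenko/behavior-1k-solution.
```
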
For more details, see:

- Our [tech report](https://arxiv.org/abs/2512.06951)
- The [final submission checkpoints](https://huggingface.co/IliaLarchenko/behavior_submission)

## Citation

If you find this work useful, please cite:

```bibtex
@misc{larchenko2025behavior,
      title={Task adaptation of Vision-Language-Action model: 1st Place Solution for the 2025 BEHAVIOR Challenge},
      author={Ilia Larchenko and Gleb Zarin and Akash Karnatak},
      year={2025},
      eprint={2512.06951},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2512.06951},
}
```