---
language:
- en
- zh
library_name: transformers
license: apache-2.0 # TODO: change if the license is not Apache-2.0
tags:
- robotics
- embodied-ai
- egocentric
- spatiotemporal
- vision-language-model
- video-understanding
- grounding
- planning
- navigation
- ocr
- image-text-to-text
- video-text-to-text
- custom_code # required if trust_remote_code=True is needed
base_model: Qwen3-VL-2B-Instruct
pipeline_tag: image-text-to-text
---
# RynnBrain: Open Embodied Foundation Models
## 🤖 Quick Start
Minimal dependencies:
```shell
pip install transformers==4.57.1
```
Run text generation:
```python
from transformers import AutoModelForImageTextToText

# The custom_code tag above suggests trust_remote_code=True may be required.
model = AutoModelForImageTextToText.from_pretrained("")
...
```
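The call above can be fleshed out into a full single-image round trip. The sketch below assumes the standard `transformers` chat-template API for image-text-to-text models; the model id, image path, and question are placeholders, not values from this card.

```python
def build_messages(image_path: str, question: str) -> list:
    """Build the chat-template message format consumed by
    image-text-to-text processors: one user turn mixing an
    image entry and a text entry."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image_path},
                {"type": "text", "text": question},
            ],
        }
    ]


def answer_about_image(model_id: str, image_path: str, question: str) -> str:
    """Load the model and answer a question about one image.
    Heavy: downloads weights on first call, so transformers is
    imported lazily and nothing runs at module import time."""
    from transformers import AutoModelForImageTextToText, AutoProcessor

    processor = AutoProcessor.from_pretrained(model_id)
    model = AutoModelForImageTextToText.from_pretrained(model_id)

    inputs = processor.apply_chat_template(
        build_messages(image_path, question),
        add_generation_prompt=True,
        tokenize=True,
        return_dict=True,
        return_tensors="pt",
    )
    output_ids = model.generate(**inputs, max_new_tokens=128)
    # Drop the prompt tokens so only the generated answer is decoded.
    new_tokens = output_ids[:, inputs["input_ids"].shape[1]:]
    return processor.batch_decode(new_tokens, skip_special_tokens=True)[0]


# Message construction works without loading any weights:
msgs = build_messages("frame_0001.jpg", "Which objects are on the table?")
```

The same message list generalizes to video inputs by swapping the `"image"` entry for a `"video"` entry, which is how the cookbooks below drive the model.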
## Cookbooks
Check out the [cookbooks](https://github.com/alibaba-damo-academy/RynnBrain/cookbooks), which showcase RynnBrain's capabilities in cognition, localization, reasoning, and planning.
| Category | Cookbook name | Description |
|----------------------|--------------------------------------------------------------------------------------------------|-------------|
| Cognition | [1_spatial_understanding.ipynb](https://github.com/alibaba-damo-academy/RynnBrain/cookbooks/1_spatial_understanding.ipynb) | Demonstrates the model's spatial understanding in video scenes. |
| Cognition | [2_object_understanding.ipynb](https://github.com/alibaba-damo-academy/RynnBrain/cookbooks/2_object_understanding.ipynb) | Shows how the model understands object categories, attributes, and relations, and demonstrates its counting ability. |
| Cognition | [3_ocr.ipynb](https://github.com/alibaba-damo-academy/RynnBrain/cookbooks/3_ocr.ipynb) | Examples of optical character recognition and text understanding in videos. |
| Location | [4_object_location.ipynb](https://github.com/alibaba-damo-academy/RynnBrain/cookbooks/4_object_location.ipynb) | Locates specific objects with bounding boxes in an image or video based on instructions. |
| Location | [5_area_location.ipynb](https://github.com/alibaba-damo-academy/RynnBrain/cookbooks/5_area_location.ipynb) | Identifies and marks specified regions by points in an image or video. |
| Location | [6_affordance_location.ipynb](https://github.com/alibaba-damo-academy/RynnBrain/cookbooks/6_affordance_location.ipynb) | Finds areas or objects with specific affordances in an image or video. |
| Location | [7_trajectory_location.ipynb](https://github.com/alibaba-damo-academy/RynnBrain/cookbooks/7_trajectory_location.ipynb) | Infers and annotates trajectories or motion paths in an image or video. |
| Location | [8_grasp_pose.ipynb](https://github.com/alibaba-damo-academy/RynnBrain/cookbooks/8_grasp_pose.ipynb) | Presents the model's ability to predict robotic grasp poses from images. |
## 📑 Citation
If you find RynnBrain useful for your research and applications, please cite using this BibTeX:
```bibtex
@article{damo2026rynnbrain,
title={RynnBrain: Open Embodied Foundation Models},
author={Ronghao Dang and Jiayan Guo and Bohan Hou and Sicong Leng and Kehan Li and Xin Li and Jiangpin Liu and Yunxuan Mao and Zhikai Wang and Yuqian Yuan and Minghao Zhu and Xiao Lin and Yang Bai and Qian Jiang and Yaxi Zhao and Minghua Zeng and Junlong Gao and Yuming Jiang and Jun Cen and Siteng Huang and Liuyi Wang and Wenqiao Zhang and Chengju Liu and Jianfei Yang and Shijian Lu and Deli Zhao},
journal={arXiv preprint arXiv:2602.14979v1},
year={2026},
url = {https://arxiv.org/abs/2602.14979v1}
}
```