---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
tags:
- robotics
- vision-language-action-model
- vision-language-model
library_name: transformers

# Collection Metadata (Referencing InternRobotics/VLN-PE style)
repo: InternRobotics/RoboInter-VLM
type: "checkpoint-collection"
description: "RoboInterVLM flagship checkpoint (Qwen2.5-VL-7B) fine-tuned on RoboInter-VQA."
checkpoints:
  - name: RoboInter-VLM
    notes: "Flagship Qwen2.5-VL-7B backbone"
---
# RoboInter-VLM: Vision-Language Model for RoboInter Manipulation Suite

**This is the flagship model of the RoboInter-VLM series**, based on [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct). It delivers the strongest performance among the Qwen2.5-VL variants and is the **recommended default checkpoint** for general use.

The model was developed as part of the [RoboInter](https://github.com/InternRobotics/RoboInter) project and is fine-tuned on the [RoboInter-VQA](https://huggingface.co/datasets/InternRobotics/RoboInter-VQA) dataset for intermediate-representation understanding and generation in robotic manipulation.

## All Available Checkpoints

| Checkpoint | Base Model | Architecture | Parameters | Description | Link |
|---|---|---|---|---|---|
| **`RoboInter-VLM` (this repo)** | [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) | Qwen2.5-VL | ~7B | **Flagship model, recommended for best performance** | [Link](https://huggingface.co/InternRobotics/RoboInter-VLM) |
| `RoboInter-VLM_qwenvl25_3b` | [Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) | Qwen2.5-VL | ~3B | Lightweight model, suitable for efficient deployment | [Link](https://huggingface.co/InternRobotics/RoboInter-VLM_qwenvl25_3b) |
| `RoboInter-VLM_llavaov_7B` | [LLaVA-OneVision-Qwen2-7B](https://huggingface.co/lmms-lab/llava-onevision-qwen2-7b-ov) | LLaVA-OneVision | ~7B | LLaVA-OneVision backbone with SigLIP vision encoder | [Link](https://huggingface.co/InternRobotics/RoboInter-VLM_llavaov_7B) |

All checkpoints are stored in `safetensors` format with `bfloat16` precision.

## Supported Tasks

These models are jointly trained on general VQA and three categories of our curated VQA tasks:

- **Generation**: Predicting intermediate representations such as trajectory waypoints, gripper bounding boxes, contact points/boxes, object bounding boxes (current & final), etc.
- **Understanding**: Multiple-choice visual reasoning about contact states, grasp poses, object grounding, trajectory selection, movement directions, etc.
- **Task Planning**: High-level task planning including next-step prediction, action primitive recognition, success determination, etc.
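
As a rough illustration (the prompt wording and image path below are placeholders; the actual question templates are defined by the RoboInter-VQA dataset), queries from each category are posed as standard Qwen2.5-VL chat messages over an observation image:

```python
# Hypothetical examples of how queries from each category might look;
# consult the RoboInter-VQA dataset for the real prompt templates.
generation_query = "Predict the trajectory waypoints for moving the mug onto the plate."
understanding_query = "Which of the following boxes (A-D) marks a stable grasp on the handle?"
planning_query = "Given the current scene, what is the next action primitive?"

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "path/to/observation.png"},  # placeholder image
            {"type": "text", "text": generation_query},
        ],
    }
]
```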

## Usage

### Quick Start (This Model)

```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor

model_path = "InternRobotics/RoboInter-VLM"

# torch_dtype="auto" loads the released bfloat16 weights; device_map="auto"
# places the model on the available GPU(s).
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_path, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_path)
```
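
A minimal inference sketch, following the standard Qwen2.5-VL chat pipeline, is shown below. It continues from the snippet above and assumes the `qwen_vl_utils` helper package from the Qwen examples is installed; the image path and prompt are placeholders, and the exact prompt templates for RoboInter tasks are defined in the RoboInter-VQA dataset.

```python
from qwen_vl_utils import process_vision_info  # helper from the Qwen2.5-VL examples

# Placeholder query; see RoboInter-VQA for the actual task prompt templates.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "path/to/observation.png"},
            {"type": "text", "text": "Predict the gripper bounding box for grasping the mug."},
        ],
    }
]

# Build the chat prompt and pack the image into model inputs.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

# Generate and decode only the newly produced tokens.
generated_ids = model.generate(**inputs, max_new_tokens=256)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```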

For detailed usage and inference examples, please refer to the [RoboInterVLM-QwenVL](https://github.com/InternRobotics/RoboInter/tree/main/RoboInterVLM/RoboInterVLM-QwenVL) codebase.

### LLaVA-OneVision Checkpoint

For loading and inference with the LLaVA-OneVision checkpoint, please refer to the [RoboInterVLM-LLaVAOV](https://github.com/InternRobotics/RoboInter/tree/main/RoboInterVLM/RoboInterVLM-LLaVAOV) codebase, as it requires custom model classes.

### Training & Evaluation

For full training and evaluation pipelines, please refer to:

- **Qwen2.5-VL models**: [RoboInterVLM-QwenVL](https://github.com/InternRobotics/RoboInter/tree/main/RoboInterVLM/RoboInterVLM-QwenVL)
- **LLaVA-OneVision model**: [RoboInterVLM-LLaVAOV](https://github.com/InternRobotics/RoboInter/tree/main/RoboInterVLM/RoboInterVLM-LLaVAOV)
- **VQA Dataset**: [RoboInter-VQA](https://huggingface.co/datasets/InternRobotics/RoboInter-VQA)

## Related Resources

- **Project**: [RoboInter](https://github.com/InternRobotics/RoboInter)
- **Annotation Data**: [RoboInter-Data](https://huggingface.co/datasets/InternRobotics/RoboInter-Data)
- **VQA Dataset**: [RoboInter-VQA](https://huggingface.co/datasets/InternRobotics/RoboInter-VQA)


## Citation

If you find RoboInter useful in your research, please consider citing:

```bibtex
@article{li2026robointer,
  title={RoboInter: A Holistic Intermediate Representation Suite Towards Robotic Manipulation},
  author={Li, Hao and Wang, Ziqin and Ding, Zi-han and Yang, Shuai and Chen, Yilun and Tian, Yang and Hu, Xiaolin and Wang, Tai and Lin, Dahua and Zhao, Feng and Liu, Si and Pang, Jiangmiao},
  journal={arXiv preprint arXiv:2602.09973},
  year={2025}
}
```

## License

Please refer to the original licenses of [RoboInter](https://github.com/InternRobotics/RoboInter), [Qwen2.5-VL](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct), and [LLaVA-OneVision](https://huggingface.co/lmms-lab/llava-onevision-qwen2-7b-ov).