JeasLee committed · Commit f76a021 · verified · 1 Parent(s): be1706f

Upload README.md with huggingface_hub

Files changed (1): README.md (+74, -3)
README.md CHANGED
---
license: apache-2.0
base_model:
- lmms-lab/llava-onevision-qwen2-7b-ov
tags:
- robotics
- vision-language-action-model
- vision-language-model

# Collection Metadata (Referencing InternRobotics/VLN-PE style)
repo: InternRobotics/RoboInter-VLM_llavaov_7B
type: "checkpoint-collection"
description: "Collection of RoboInterVLM checkpoints and configs fine-tuned on RoboInter-VQA."
checkpoints:
  - name: RoboInter-VLM_llavaov_7B
    notes: "LLaVA-OneVision backbone"
---
# RoboInter-VLM_llavaov_7B: Vision-Language Model Checkpoints for RoboInter Manipulation Suite

Model checkpoints of **RoboInter-VLM_llavaov_7B**, developed as part of the [RoboInter](https://github.com/InternRobotics/RoboInter) project. The models are fine-tuned on the [RoboInter-VQA](https://huggingface.co/datasets/InternRobotics/RoboInter-VQA) dataset for intermediate representation understanding and generation in robotic manipulation.

## Other Available Checkpoints

| Checkpoint | Base Model | Architecture | Parameters | Description | Link |
|---|---|---|---|---|---|
| `RoboInter-VLM_qwenvl25_3b` | [Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) | Qwen2.5-VL | ~3B | Lightweight Qwen2.5-VL model, suitable for efficient deployment | https://huggingface.co/InternRobotics/RoboInter-VLM_qwenvl25_3b |
| `RoboInter-VLM_qwenvl25_7b` | [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) | Qwen2.5-VL | ~7B | Larger Qwen2.5-VL backbone for stronger performance | https://huggingface.co/InternRobotics/RoboInter-VLM |
| `RoboInter-VLM_llavaov_7B` | [LLaVA-OneVision-Qwen2-7B](https://huggingface.co/lmms-lab/llava-onevision-qwen2-7b-ov) | LLaVA-OneVision | ~7B | LLaVA-OneVision backbone with SigLIP vision encoder | https://huggingface.co/InternRobotics/RoboInter-VLM_llavaov_7B |

All checkpoints are stored in `safetensors` format with `bfloat16` precision.

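If you only need the raw checkpoint files on disk (for example, to point one of the codebases below at a local path), they can be fetched with `huggingface_hub`. This is a minimal sketch; the `local_dir` value is an illustrative placeholder, not a path required by the project.

```python
from huggingface_hub import snapshot_download

# Download the safetensors weights and configs of this checkpoint.
# "ckpts/RoboInter-VLM_llavaov_7B" is an arbitrary local directory chosen for illustration.
local_path = snapshot_download(
    repo_id="InternRobotics/RoboInter-VLM_llavaov_7B",
    local_dir="ckpts/RoboInter-VLM_llavaov_7B",
)
print(local_path)
```
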
## Supported Tasks

These models are jointly trained on general VQA and three categories of our curated VQA tasks:

- **Generation**: Predicting intermediate representations such as trajectory waypoints, gripper bounding boxes, contact points/boxes, object bounding boxes (current & final), etc. (see the illustrative sketch after this list)
- **Understanding**: Multiple-choice visual reasoning about contact states, grasp poses, object grounding, trajectory selection, movement directions, etc.
- **Task Planning**: High-level task planning including next-step prediction, action primitive recognition, success determination, etc.

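The exact question/answer formats are defined by the [RoboInter-VQA](https://huggingface.co/datasets/InternRobotics/RoboInter-VQA) dataset and the task configs in the RoboInter codebase. Purely as a conceptual illustration of what a "Generation" target could look like, the snippet below serializes hypothetical waypoints and a gripper box into a text answer; the coordinate values, field names, and formatting are made up for this example and are not the dataset schema.

```python
# Hypothetical illustration only: the real serialization is defined by RoboInter-VQA.
waypoints = [(132, 88), (150, 102), (171, 120)]  # (x, y) pixel coordinates
gripper_box = (118, 76, 146, 104)                # (x1, y1, x2, y2)

answer = (
    "trajectory: " + "; ".join(f"({x}, {y})" for x, y in waypoints)
    + f" | gripper_box: {gripper_box}"
)
print(answer)
```
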
## Usage

### Qwen2.5-VL Checkpoints

For loading and inference with the Qwen2.5-VL checkpoints, please refer to the [RoboInterVLM-QwenVL](https://github.com/InternRobotics/RoboInter/tree/main/RoboInterVLM/RoboInterVLM-QwenVL) codebase. We provide a quick loading example below:

```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor

model_path = "InternRobotics/RoboInter-VLM"  # or RoboInter-VLM_qwenvl25_3b
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_path, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_path)
```

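Once the model and processor are loaded, a single image-question pair can be run through the standard Qwen2.5-VL chat-template flow in `transformers`. This is a minimal sketch: the image path and question are placeholders, and the exact prompt formats used for RoboInter tasks are defined in the RoboInterVLM-QwenVL codebase.

```python
from PIL import Image

# Placeholder image and question; RoboInter task prompts follow the formats
# defined in the RoboInterVLM-QwenVL codebase.
image = Image.open("example_scene.jpg")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe the target object and where the gripper should move."},
        ],
    }
]

# Build the prompt, run the image through the processor, and generate an answer.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=128)
answer = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```
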
### LLaVA-OneVision Checkpoint

For loading and inference with the LLaVA-OneVision checkpoint, please refer to the [RoboInterVLM-LLaVAOV](https://github.com/InternRobotics/RoboInter/tree/main/RoboInterVLM/RoboInterVLM-LLaVAOV) codebase, as it requires custom model classes.

### Training & Evaluation

For full training and evaluation pipelines, please refer to:

- **Qwen2.5-VL models**: [RoboInterVLM-QwenVL](https://github.com/InternRobotics/RoboInter/tree/main/RoboInterVLM/RoboInterVLM-QwenVL)
- **LLaVA-OneVision model**: [RoboInterVLM-LLaVAOV](https://github.com/InternRobotics/RoboInter/tree/main/RoboInterVLM/RoboInterVLM-LLaVAOV)
- **VQA Dataset**: [RoboInter-VQA](https://huggingface.co/datasets/InternRobotics/RoboInter-VQA)

## Related Resources

- **Project**: [RoboInter](https://github.com/InternRobotics/RoboInter)
- **Annotation Data**: [RoboInter-Data](https://huggingface.co/datasets/InternRobotics/RoboInter-Data)
- **VQA Dataset**: [RoboInter-VQA](https://huggingface.co/datasets/InternRobotics/RoboInter-VQA)

## License

Please refer to the original licenses of [RoboInter](https://github.com/InternRobotics/RoboInter), [Qwen2.5-VL](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct), and [LLaVA-OneVision](https://huggingface.co/lmms-lab/llava-onevision-qwen2-7b-ov).