nielsr (HF Staff) committed · verified
Commit ca9099d · 1 Parent(s): 07713e8

Improve model card: Add pipeline tag, library, HF paper link, abstract, and usage examples


This PR significantly enhances the model card for InternVLA-M1_spatial by:
- Adding `pipeline_tag: robotics` to improve discoverability on the Hugging Face Hub.
- Specifying `library_name: transformers`, since the codebase is built on both Transformers and Diffusers and the VLM component (Qwen2.5-VL) is a Transformers model (a quick way to verify the resulting metadata is shown below).
- Integrating the official Hugging Face paper link: [InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy](https://huggingface.co/papers/2510.13778).
- Including the paper's abstract for a quick overview of the model's capabilities and methodology.
- Adding a comprehensive "Quick Interactive M1 Demo" section with Python code snippets for both chat/spatial grounding and action prediction, directly from the GitHub repository, enabling users to quickly get started.
- Incorporating additional detailed sections from the GitHub README such as Key Features, Target Audience, Experimental Results, Model Zoo, Roadmap, Contributing, Contact, and Acknowledgements, providing a more complete resource.
- Updating the citation with a more complete BibTeX entry.

These updates aim to make the model card more informative, user-friendly, and discoverable.
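
Once merged, the new metadata can be checked programmatically. A minimal sketch using `huggingface_hub`, assuming the repo id is `InternRobotics/InternVLA-M1_spatial` (adjust if the model lives under a different name):

```python
from huggingface_hub import ModelCard

# Load the model card straight from the Hub and inspect its YAML metadata.
card = ModelCard.load("InternRobotics/InternVLA-M1_spatial")
meta = card.data.to_dict()

# After this PR these fields should be present alongside the existing tags.
print(meta.get("pipeline_tag"))   # expected: "robotics"
print(meta.get("library_name"))   # expected: "transformers"
print(meta.get("tags"))           # expected: robotics / VLA / VLM tags
```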

Please review and merge this PR.

Files changed (1)
README.md +194 -9
README.md CHANGED
````diff
@@ -4,12 +4,164 @@ tags:
 # Model Card for InternVLA-M1_spatial
-InternVLA-M1 is an open-source, end-to-end vision–language–action (VLA) framework for building and researching generalist robot policies.
 - 🌐 Homepage: [InternVLA-M1 Project Page](https://internrobotics.github.io/internvla-m1.github.io/)
 - 💻 Codebase: [InternVLA-M1 GitHub Repo](https://github.com/InternRobotics/InternVLA-M1)
@@ -17,12 +169,45 @@ batch_size: 128
 training_steps: 30k
 ```
-## Citation
-```
-@misc{internvla2024,
-  title = {InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy},
-  author = {InternVLA-M1 Contributors},
-  year = {2025},
-  booktitle = {arXiv},
 }
-```
````

The full updated README.md follows.
tags:
- robotics
- vision-language-action-model
- vision-language-model
pipeline_tag: robotics
library_name: transformers
---

# Model Card for InternVLA-M1_spatial

InternVLA-M1 is an open-source, end-to-end vision–language–action (VLA) framework for building and researching generalist robot policies, as presented in the paper [InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy](https://huggingface.co/papers/2510.13778).

- 🌐 Homepage: [InternVLA-M1 Project Page](https://internrobotics.github.io/internvla-m1.github.io/)
- 💻 Codebase: [InternVLA-M1 GitHub Repo](https://github.com/InternRobotics/InternVLA-M1)
<div align="center">
<img src="https://raw.githubusercontent.com/InternRobotics/InternVLA-M1/main/assets/teaser.png" width="100%" height="100%"/>
</div>

## Abstract

We introduce InternVLA-M1, a unified framework for spatial grounding and robot control that advances instruction-following robots toward scalable, general-purpose intelligence. Its core idea is spatially guided vision-language-action training, where spatial grounding serves as the critical link between instructions and robot actions. InternVLA-M1 employs a two-stage pipeline: (i) spatial grounding pre-training on over 2.3M spatial reasoning data to determine "where to act" by aligning instructions with visual, embodiment-agnostic positions, and (ii) spatially guided action post-training to decide "how to act" by generating embodiment-aware actions through plug-and-play spatial prompting. This spatially guided training recipe yields consistent gains: InternVLA-M1 outperforms its variant without spatial guidance by +14.6% on SimplerEnv Google Robot, +17% on WidowX, and +4.3% on LIBERO Franka, while demonstrating stronger spatial reasoning capability in box, point, and trace prediction. To further scale instruction following, we built a simulation engine to collect 244K generalizable pick-and-place episodes, enabling a 6.2% average improvement across 200 tasks and 3K+ objects. In real-world clustered pick-and-place, InternVLA-M1 improved by 7.3%, and with synthetic co-training, achieved +20.6% on unseen objects and novel configurations. Moreover, in long-horizon reasoning-intensive scenarios, it surpassed existing works by over 10%. These results highlight spatially guided training as a unifying principle for scalable and resilient generalist robots.
## 🔥 Key Features

1. **Modular & Extensible**
   All core components (model architecture, training data, training strategies, evaluation pipeline) are fully decoupled, enabling independent development, debugging, and extension of each module.

2. **Dual-System and Dual-Supervision**
   InternVLA-M1 integrates both a language head and an action head under a unified framework, enabling collaborative training with dual supervision.

3. **Efficient Training & Fast Convergence**
   Learns spatial and visual priors from large-scale multimodal pretraining and transfers them via spatial prompt fine-tuning. Achieves strong performance (e.g., SOTA-level convergence in ~2.5 epochs without separate action pretraining).

## 🎯 Target Audience

1. Users who want to leverage open-source VLMs (e.g., Qwen2.5-VL) for robot control.
2. Teams co-training action datasets jointly with multimodal (vision–language) data.
3. Researchers exploring alternative VLA architectures and training strategies.
## 📊 Experimental Results

| Model | WidowX | Google Robot (VA) | Google Robot (VM) | LIBERO |
|--------------|----------|-------------------|-------------------|----------|
| $\pi_0$ | 27.1 | 54.8 | 58.8 | 94.2 |
| GR00T | 61.9 | 44.5 | 35.2 | 93.9 |
| InternVLA-M1 | **71.7** | **76.0** | **80.7** | **95.9** |
## 🚀 Quick Start

### 🛠 Environment Setup

```bash
# Clone the repo
git clone https://github.com/InternRobotics/InternVLA-M1

# Create conda environment
conda create -n internvla-m1 python=3.10 -y
conda activate internvla-m1

# Install requirements
pip install -r requirements.txt

# Install FlashAttention2
pip install flash-attn --no-build-isolation

# Install InternVLA-M1
pip install -e .
```
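
As a quick sanity check after installation (not part of the repo's instructions), you can confirm that PyTorch sees the GPU and that FlashAttention2 imports cleanly:

```python
# Minimal post-install sanity check (assumes the environment above is active).
import torch
import flash_attn  # FlashAttention2; raises ImportError if the build failed

print("flash-attn version:", flash_attn.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```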

### ⚡ Quick Interactive M1 Demo

Below are two collapsible examples: InternVLA-M1 chat and action prediction.

<details open>
<summary><b>InternVLA-M1 Chat Demo (Image Q&A / Spatial Grounding)</b></summary>

```python
from InternVLA.model.framework.M1 import InternVLA_M1
from PIL import Image
import requests
from io import BytesIO
import torch

def load_image_from_url(url: str) -> Image.Image:
    resp = requests.get(url, timeout=15)
    resp.raise_for_status()
    img = Image.open(BytesIO(resp.content)).convert("RGB")
    return img

saved_model_path = "/PATH/checkpoints/steps_50000_pytorch_model.pt"
internVLA_M1 = InternVLA_M1.from_pretrained(saved_model_path)

# Use the raw image link for direct download
image_url = "https://raw.githubusercontent.com/InternRobotics/InternVLA-M1/InternVLA-M1/assets/table.jpeg"
image = load_image_from_url(image_url)
question = "Give the bounding box for the apple."
response = internVLA_M1.chat_with_M1(image, question)
print(response)
```
</details>
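
If the grounding response can be parsed into pixel coordinates, a quick way to inspect it is to draw the box on the image. This is a hedged sketch, not part of the repo: the output format of `chat_with_M1` is not documented here, so the `box` values below stand in for coordinates you would extract from `response` yourself.

```python
from io import BytesIO

import requests
from PIL import Image, ImageDraw

# Reload the demo image (same URL as in the chat demo above).
image_url = "https://raw.githubusercontent.com/InternRobotics/InternVLA-M1/InternVLA-M1/assets/table.jpeg"
image = Image.open(BytesIO(requests.get(image_url, timeout=15).content)).convert("RGB")

# Hypothetical parsed output: assume the box was already extracted from the chat
# response as pixel coordinates [x1, y1, x2, y2]; the real format may differ.
box = [320, 180, 420, 300]

draw = ImageDraw.Draw(image)
draw.rectangle(box, outline="red", width=3)
image.save("grounding_result.png")
```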

<details>
<summary><b>InternVLA-M1 Action Prediction Demo (two views)</b></summary>

```python
from InternVLA.model.framework.M1 import InternVLA_M1
from PIL import Image
import requests
from io import BytesIO
import torch

def load_image_from_url(url: str) -> Image.Image:
    resp = requests.get(url, timeout=15)
    resp.raise_for_status()
    img = Image.open(BytesIO(resp.content)).convert("RGB")
    return img

saved_model_path = "/PATH/checkpoints/steps_50000_pytorch_model.pt"
internVLA_M1 = InternVLA_M1.from_pretrained(saved_model_path)

image_url = "https://raw.githubusercontent.com/InternRobotics/InternVLA-M1/InternVLA-M1/assets/table.jpeg"
view1 = load_image_from_url(image_url)
view2 = view1.copy()

# Construct input: batch size = 1, two views
batch_images = [[view1, view2]]  # List[List[PIL.Image]]
instructions = ["Pick up the apple and place it on the plate."]

if torch.cuda.is_available():
    internVLA_M1 = internVLA_M1.to("cuda")

pred = internVLA_M1.predict_action(
    batch_images=batch_images,
    instructions=instructions,
    cfg_scale=1.5,
    use_ddim=True,
    num_ddim_steps=10,
)
normalized_actions = pred["normalized_actions"]  # [B, T, action_dim]
print(normalized_actions.shape, type(normalized_actions))
```
</details>
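
The demo returns normalized actions of shape `[B, T, action_dim]`. How they map back to physical commands depends on the normalization statistics used during training, which are not shown here; the sketch below assumes simple min-max normalization to `[-1, 1]` with hypothetical per-dimension bounds, purely to illustrate the shape handling (T = 8 matches `action_chunk: 8` from Training Details; `action_dim = 7` is an assumption).

```python
import numpy as np

# Hypothetical per-dimension action bounds (action_dim = 7 assumed here);
# the real statistics come from the training data and may use a different scheme.
action_low = np.array([-0.05, -0.05, -0.05, -0.5, -0.5, -0.5, 0.0])
action_high = np.array([0.05, 0.05, 0.05, 0.5, 0.5, 0.5, 1.0])

def denormalize(normalized: np.ndarray) -> np.ndarray:
    """Map actions from [-1, 1] back to [action_low, action_high] per dimension."""
    return (normalized + 1.0) / 2.0 * (action_high - action_low) + action_low

# Example with dummy data shaped like the demo output: [B=1, T=8, action_dim=7].
normalized_actions = np.random.uniform(-1.0, 1.0, size=(1, 8, 7))
actions = denormalize(normalized_actions)
print(actions.shape)  # (1, 8, 7)
```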

### 📘 Examples

We provide several end-to-end examples for reference:

* **Reproduce InternVLA-M1 in SimplerEnv**
  [Example](/examples/SimplerEnv)

* **Reproduce InternVLA-M1 in LIBERO**
  [Example](/examples/LIBERO)

* **Training/Deployment on real robots**
  [Example](/examples/real_robot)

## 📈 Model Zoo

We release a series of pretrained models and checkpoints to facilitate reproduction and downstream use.

### ✅ Available Checkpoints

| Model | Description | Link |
|-------|-------------|------|
| **InternVLA-M1** | Main pretrained model | [🤗 Hugging Face](https://huggingface.co/InternRobotics/InternVLA-M1) |
| **InternVLA-M1-Pretrain-RT-1-Bridge** | Pretraining on RT-1 and Bridge data | [🤗 Hugging Face](https://huggingface.co/InternRobotics/InternVLA-M1-Pretrain-RT-1-Bridge) |
| **InternVLA-M1-LIBERO-Long** | Fine-tuned on LIBERO Long-horizon tasks | [🤗 Hugging Face](https://huggingface.co/InternRobotics/InternVLA-M1-LIBERO-Long) |
| **InternVLA-M1-LIBERO-Goal** | Fine-tuned on LIBERO Goal-conditioned tasks | [🤗 Hugging Face](https://huggingface.co/InternRobotics/InternVLA-M1-LIBERO-Goal) |
| **InternVLA-M1-LIBERO-Spatial** | Fine-tuned on LIBERO Spatial reasoning tasks | [🤗 Hugging Face](https://huggingface.co/InternRobotics/InternVLA-M1-LIBERO-Spatial) |
| **InternVLA-M1-LIBERO-Object** | Fine-tuned on LIBERO Object-centric tasks | [🤗 Hugging Face](https://huggingface.co/InternRobotics/InternVLA-M1-LIBERO-Object) |
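
To pull one of these checkpoints locally, the standard `huggingface_hub` download works. A minimal sketch (the exact file layout inside each repo is not shown here, so pass whichever downloaded path `InternVLA_M1.from_pretrained` expects):

```python
from huggingface_hub import snapshot_download

# Download the main pretrained model listed in the table above.
local_dir = snapshot_download(repo_id="InternRobotics/InternVLA-M1")
print("Checkpoint files downloaded to:", local_dir)

# The quick demos above load a specific .pt file, e.g.:
# internVLA_M1 = InternVLA_M1.from_pretrained(f"{local_dir}/<checkpoint_name>.pt")
```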

## Training Details

```
action_chunk: 8
batch_size: 128
training_steps: 30k
```
## 🗺️ Roadmap

* [ ] Add a co-training (multimodal, multi-task) README (the co-training code is already available)
* [x] 0930: Unified inference server for SimplerEnv and LIBERO
* [x] 0918: Release model weights

## 🤝 Contributing

We welcome contributions via Pull Requests or Issues.
Please include detailed logs and reproduction steps when reporting bugs.

## 📜 Citation

If you find this useful in your research, please consider citing:

```bibtex
@article{internvlam1,
  title   = {InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy},
  author  = {InternVLA-M1 Contributors},
  journal = {arXiv preprint arXiv:2510.13778},
  year    = {2025}
}
```

## 📬 Contact

* Issues: Submit via GitHub Issues with detailed logs and reproduction steps

## 🙏 Acknowledgements

We thank the open-source community for their inspiring work. This project builds upon and is inspired by the following projects (alphabetical order):
- [IPEC-COMMUNITY](https://huggingface.co/IPEC-COMMUNITY): Curated OXE / LIBERO style multi-task datasets and formatting examples.
- [Isaac-GR00T](https://github.com/NVIDIA/Isaac-GR00T): Standardized action data loader (GR00T-LeRobot).
- [Qwen2.5-VL](https://github.com/QwenLM/Qwen2.5-VL/blob/main/qwen-vl-finetune/README.md): Multimodal input/output format, data loader, and pretrained VLM backbone.
- [CogACT](https://github.com/microsoft/CogACT/tree/main/action_model): Reference for a DiT-style action head design.
- [Llavavla](https://github.com/JinhuiYE/llavavla): Baseline code structure and engineering design references.
- [GenManip Simulation Platform](https://github.com/InternRobotics/GenManip): Simulation platform for generalizable pick-and-place based on Isaac Sim.

Notes:
- If any required attribution or license header is missing, please open an issue and we will correct it promptly.
- All third-party resources remain under their original licenses; users should comply with the respective terms.

---

Thanks for using **InternVLA-M1**! 🌟
If you find it useful, please consider giving us a ⭐ on GitHub.