---
license: cc-by-nc-sa-4.0
base_model:
  - Qwen/Qwen2.5-VL-3B-Instruct
tags:
  - robotics
  - vision-language-action-model
  - vision-language-model
library_name: transformers
---

# Model Card for InternVLA-M1

## Description

InternVLA-M1 is an open-source, end-to-end vision–language–action (VLA) framework for building and researching generalist robot policies. The checkpoints in this repository were pretrained on the System-2 dataset.


## Citation

```bibtex
@misc{internvla2025,
  title        = {InternVLA-M1: Latent Spatial Grounding for Instruction-Following Robotic Manipulation},
  author       = {InternVLA-M1 Contributors},
  year         = {2025},
  howpublished = {arXiv},
}
```