SmolVLA: A Vision-Language-Action Model for Affordable and Efficient Robotics
Paper: arXiv 2506.01844
SmolVLA is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational cost and can be deployed on consumer-grade hardware. Note: the model appears to require the gripper to be fully open at the start in order to work reliably across a wider range of positions.
This policy has been trained and pushed to the Hub using LeRobot. See the full documentation at LeRobot Docs.
For a complete walkthrough, see the training guide. Below is the short version on how to train and run inference/eval:
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
Writes checkpoints to outputs/train/<desired_policy_repo_id>/checkpoints/.
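Checkpoints under that directory are stored in step-numbered subdirectories. As an illustration, here is a small hypothetical helper (`latest_checkpoint` is not part of LeRobot) for locating the most recent one, assuming a `checkpoints/<step>/` layout with zero-padded integer names:

```python
from pathlib import Path
from typing import Optional

def latest_checkpoint(run_dir: str) -> Optional[Path]:
    """Hypothetical helper: return the highest-step checkpoint directory.

    Assumes a layout run_dir/checkpoints/<step>/ where <step> is a
    zero-padded integer; non-numeric entries (e.g. a 'last' symlink)
    are ignored.
    """
    ckpt_root = Path(run_dir) / "checkpoints"
    if not ckpt_root.is_dir():
        return None
    steps = sorted(
        (p for p in ckpt_root.iterdir() if p.name.isdigit()),
        key=lambda p: int(p.name),
    )
    return steps[-1] if steps else None
```

You could then pass the returned path to `--policy.path` for evaluation; the exact directory layout may differ between LeRobot versions, so check your own `outputs/train/` tree first.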
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=${HF_USER}/eval_<dataset> \
--policy.path=${HF_USER}/<desired_policy_repo_id> \
--episodes=10
Prefix the evaluation dataset repo id with eval_ and point --policy.path at a local checkpoint or a Hub repo.
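The naming convention above can be sketched as a tiny helper (`eval_repo_id` is illustrative, not a LeRobot function):

```python
def eval_repo_id(hf_user: str, dataset: str) -> str:
    # Evaluation datasets are conventionally the training dataset
    # name prefixed with "eval_", under the same Hub user.
    return f"{hf_user}/eval_{dataset}"

# e.g. eval_repo_id("alice", "pick_place") -> "alice/eval_pick_place"
```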
Base model
lerobot/smolvla_base
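To fine-tune from this base model rather than train from scratch, one plausible invocation is to swap --policy.type for --policy.path in the training command above (a sketch, assuming the same flag set; verify against the LeRobot docs for your installed version):

```shell
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.path=lerobot/smolvla_base \
--output_dir=outputs/train/<desired_policy_repo_id> \
--policy.device=cuda
```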