---
license: apache-2.0
library_name: lerobot
pipeline_tag: robotics
tags:
- robotics
- lerobot
---

# CLARE: Continual Learning for Vision-Language-Action Models via Autonomous Adapter Routing and Expansion

DiT-EncDec base checkpoint from "CLARE: Continual Learning for Vision-Language-Action Models via Autonomous Adapter Routing and Expansion", pretrained on LIBERO-90.

- **Paper:** [CLARE: Continual Learning for Vision-Language-Action Models via Autonomous Adapter Routing and Expansion](https://huggingface.co/papers/2601.09512)
- **Project Page:** [tum-lsy.github.io/clare/](https://tum-lsy.github.io/clare/)
- **Repository:** [utiasDSL/clare](https://github.com/utiasDSL/clare)
## Description

CLARE is a general, parameter-efficient framework for exemplar-free continual learning with Vision-Language-Action (VLA) models. It inserts lightweight modular adapters into selected feedforward layers and, when learning a new task, autonomously expands the model only where necessary, guided by layer-wise feature similarity. At deployment, an autoencoder-based routing mechanism dynamically activates the most relevant adapters without requiring task labels.

This repository contains the **DiT-EncDec** base checkpoint pretrained on the **LIBERO-90** benchmark.
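The routing-and-expansion idea described above can be sketched in a few lines. This is an illustrative toy example under loose assumptions, not the repository's implementation: the names `TinyAutoencoder` and `route_or_expand`, the linear autoencoder, and the mean-squared reconstruction-error criterion are all simplifications chosen for the sketch.

```python
# Toy sketch of autoencoder-based adapter routing (NOT the CLARE codebase).
# Each learned task keeps a small autoencoder over intermediate features;
# at deployment, the adapter whose autoencoder reconstructs the current
# features best is activated. A high minimum reconstruction error is read
# as a distribution shift, signalling that a new adapter should be added.
import numpy as np

rng = np.random.default_rng(0)

class TinyAutoencoder:
    """Linear autoencoder acting as a per-task feature discriminator (hypothetical)."""
    def __init__(self, dim: int, bottleneck: int):
        self.enc = rng.standard_normal((dim, bottleneck)) * 0.1
        self.dec = rng.standard_normal((bottleneck, dim)) * 0.1

    def reconstruction_error(self, feats: np.ndarray) -> float:
        # Mean squared error between features and their reconstruction.
        recon = feats @ self.enc @ self.dec
        return float(np.mean((feats - recon) ** 2))

def route_or_expand(feats: np.ndarray, autoencoders: list, expand_threshold: float) -> int:
    """Return the index of the best-matching adapter, or -1 to trigger expansion."""
    errors = [ae.reconstruction_error(feats) for ae in autoencoders]
    best = int(np.argmin(errors))
    if errors[best] > expand_threshold:
        return -1  # no existing adapter explains the features: expand the model
    return best
```

The same minimum-error statistic serves both purposes: below the threshold it selects an adapter without task labels; above it, it flags a new task (compare the `--expand_threshold` argument in the training command below).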
## Usage

To fine-tune this checkpoint on the LIBERO-10 benchmark with the CLARE framework, run the following command from the [official repository](https://github.com/utiasDSL/clare):
```bash
python ./lerobot_lsy/src/lerobot/scripts/clare.py \
    --seed=1000 \
    --job_name=clare_libero_10_task_0 \
    --output_dir=./outputs/clare_libero_10_task_0 \
    --dataset.repo_id=continuallearning/libero_10_image_task_0 \
    --policy.path=continuallearning/dit_mt_libero_90_pretrain \
    --policy.push_to_hub=false \
    --batch_size=32 \
    --num_workers=16 \
    --steps=20000 \
    --env.type=libero \
    --env.task=Libero_10_Task_0 \
    --eval.batch_size=20 \
    --eval.n_episodes=100 \
    --eval.max_episodes_rendered=100 \
    --eval_freq=200000 \
    --save_freq=20000 \
    --log_freq=100 \
    --peft_cfg_path=./peft_lsy/config \
    --expand_threshold=10.00 \
    --detect_distribution_shift_steps=200 \
    --detect_distribution_shift_batch_size=32 \
    --detect_distribution_shift_num_workers=16 \
    --detect_distribution_shift_log_freq=10 \
    --train_discriminators_steps=2000 \
    --train_discriminators_batch_size=32 \
    --train_discriminators_num_workers=16 \
    --train_discriminators_log_freq=50 \
    --train_discriminators_eval_freq=2000 \
    --train_discriminators_save_freq=2000 \
    --wandb.enable=true
```

## BibTeX

```bibtex
@article{romer2026clare,
  title={CLARE: Continual Learning for Vision-Language-Action Models via Autonomous Adapter Routing and Expansion},
  author={Ralf R{\"o}mer and Yi Zhang and Angela P. Schoellig},
  journal={arXiv preprint arXiv:2601.09512},
  year={2026}
}
```