CLARE: Continual Learning for Vision-Language-Action Models via Autonomous Adapter Routing and Expansion

DiT-EncDec base checkpoint from "CLARE: Continual Learning for Vision-Language-Action Models via Autonomous Adapter Routing and Expansion", pretrained on LIBERO-90.

Description

CLARE is a general, parameter-efficient framework for exemplar-free continual learning with Vision-Language-Action (VLA) models. It introduces lightweight modular adapters into selected feedforward layers and autonomously expands the model only where necessary when learning a new task, guided by layer-wise feature similarity. During deployment, an autoencoder-based routing mechanism dynamically activates the most relevant adapters without requiring task labels.

This specific repository contains the DiT-EncDec base checkpoint used for pretraining on the LIBERO-90 benchmark.
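To make the autoencoder-based routing idea concrete, here is a minimal sketch: one lightweight autoencoder is fitted per learned task, and at deployment an input's features are routed to the task whose autoencoder reconstructs them with the lowest error. This is an illustration only, not the repository's implementation; the function names and the PCA-style linear autoencoder are assumptions made for brevity.

```python
import numpy as np

def fit_linear_autoencoder(feats, k=2):
    # PCA-style linear autoencoder: encoder projects centered
    # features onto the top-k principal directions; the decoder
    # is the transpose of the encoder.
    mu = feats.mean(axis=0)
    _, _, vt = np.linalg.svd(feats - mu, full_matrices=False)
    W = vt[:k]  # (k, d) encoder matrix
    return mu, W

def reconstruction_error(x, ae):
    # Encode, decode, and measure how much of x was lost.
    mu, W = ae
    x_hat = mu + W.T @ (W @ (x - mu))
    return float(np.linalg.norm(x - x_hat))

def route(x, autoencoders):
    # Activate the adapter set whose autoencoder reconstructs
    # the current features best (no task label required).
    errors = [reconstruction_error(x, ae) for ae in autoencoders]
    return int(np.argmin(errors))
```

In this toy setup, features from different tasks occupy different subspaces, so the matching autoencoder yields a near-zero reconstruction error while mismatched ones do not.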

Usage

To train on the LIBERO-10 benchmark with the CLARE framework, starting from this checkpoint, run the following command from the official repository:

```shell
python ./lerobot_lsy/src/lerobot/scripts/clare.py \
    --seed=1000 \
    --job_name=clare_libero_10_task_0 \
    --output_dir=./outputs/clare_libero_10_task_0 \
    --dataset.repo_id=continuallearning/libero_10_image_task_0 \
    --policy.path=continuallearning/dit_mt_libero_90_pretrain \
    --policy.push_to_hub=false \
    --batch_size=32 \
    --num_workers=16 \
    --steps=20000 \
    --env.type=libero \
    --env.task=Libero_10_Task_0 \
    --eval.batch_size=20 \
    --eval.n_episodes=100 \
    --eval.max_episodes_rendered=100 \
    --eval_freq=200000 \
    --save_freq=20000 \
    --log_freq=100 \
    --peft_cfg_path=./peft_lsy/config \
    --expand_threshold=10.00 \
    --detect_distribution_shift_steps=200 \
    --detect_distribution_shift_batch_size=32 \
    --detect_distribution_shift_num_workers=16 \
    --detect_distribution_shift_log_freq=10 \
    --train_discriminators_steps=2000 \
    --train_discriminators_batch_size=32 \
    --train_discriminators_num_workers=16 \
    --train_discriminators_log_freq=50 \
    --train_discriminators_eval_freq=2000 \
    --train_discriminators_save_freq=2000 \
    --wandb.enable=true
```
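The `--expand_threshold` flag above controls when the model is expanded with new adapters. As a rough sketch of the idea, layer-wise features from the frozen backbone can be compared between old and new task data, and a layer receives a new adapter only when its feature shift exceeds the threshold. This is a hedged illustration under assumed names and an assumed (mean-shift) similarity measure; the paper's exact criterion may differ.

```python
import numpy as np

def layers_to_expand(old_feats, new_feats, threshold=10.0):
    """Select layers whose feature distribution shifted enough
    to warrant a new adapter (cf. --expand_threshold).

    old_feats / new_feats: dicts mapping layer name -> (n, d)
    array of activations collected with the frozen backbone.
    Illustrative only: the shift here is the distance between
    mean features, which stands in for the paper's layer-wise
    feature-similarity criterion.
    """
    expand = []
    for layer, old in old_feats.items():
        shift = float(np.linalg.norm(old.mean(axis=0)
                                     - new_feats[layer].mean(axis=0)))
        if shift > threshold:
            expand.append(layer)
    return expand
```

Layers whose features on the new task remain close to those on previous tasks keep their existing adapters, which is what makes the expansion parameter-efficient.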

BibTeX

@article{romer2026clare,
  title={CLARE: Continual Learning for Vision-Language-Action Models via Autonomous Adapter Routing and Expansion},
  author={Ralf R{\"o}mer and Yi Zhang and Angela P. Schoellig},
  journal={arXiv preprint arXiv:2601.09512},
  year={2026}
}