Improve model card metadata and content

#1
by nielsr - opened
Files changed (1)
  1. README.md +11 -3
README.md CHANGED
@@ -1,19 +1,27 @@
 ---
 license: apache-2.0
+library_name: lerobot
+pipeline_tag: robotics
 tags:
-- robotics,
+- robotics
 - lerobot
 ---
 
+# CLARE: Continual Learning for Vision-Language-Action Models via Autonomous Adapter Routing and Expansion
+
+[**Paper**](https://huggingface.co/papers/2601.09512) | [**Project Page**](https://tum-lsy.github.io/clare/) | [**Code**](https://github.com/utiasDSL/clare)
+
 DiT-Dec base checkpoint from "CLARE: Continual Learning for Vision-Language-Action Models via Autonomous Adapter Routing and Expansion", pretrained on LIBERO-90.
 
-**BibTeX:**
+CLARE is a general, parameter-efficient framework for exemplar-free continual learning with Vision-Language-Action (VLA) models. It introduces lightweight modular adapters into selected feedforward layers and autonomously expands the model only where necessary when learning a new task, guided by layer-wise feature similarity. During deployment, an autoencoder-based routing mechanism dynamically activates the most relevant adapters without requiring task labels.
+
+## BibTeX
 
 ```bibtex
 @article{romer2026clare,
 title={CLARE: Continual Learning for Vision-Language-Action Models via Autonomous Adapter Routing and Expansion},
 author={Ralf R{\"o}mer and Yi Zhang and Angela P. Schoellig},
-journal={arXiv preprint arXiv:2601.09512} ,
+journal={arXiv preprint arXiv:2601.09512},
 year={2026}
 }
 ```
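The added card paragraph says adapters are routed at deployment time by an autoencoder per task, without task labels. As a toy sketch of that idea only (not the authors' implementation — CLARE's actual autoencoder architecture is not specified here, and every name below is invented), one can fit a small reconstruction model per task and route an observation's features to the adapter whose model reconstructs them with the lowest error. A linear PCA-style autoencoder stands in for the real one:

```python
import numpy as np

rng = np.random.default_rng(0)

class TaskAutoencoder:
    """Illustrative per-task autoencoder: a linear (PCA-style) encoder/decoder
    fit on one task's feature vectors."""

    def __init__(self, code_dim, features):
        self.mean = features.mean(axis=0)
        centered = features - self.mean
        # Principal directions of this task's features serve as the "code".
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        self.components = vt[:code_dim]  # shape: (code_dim, dim)

    def reconstruction_error(self, x):
        # Project onto the task subspace and back; error = what the task's
        # autoencoder cannot explain.
        centered = x - self.mean
        recon = centered @ self.components.T @ self.components
        return float(np.mean((centered - recon) ** 2))

def route(x, autoencoders):
    """Pick the adapter whose task autoencoder reconstructs x best
    (lowest reconstruction error) — no task label needed."""
    errors = [ae.reconstruction_error(x) for ae in autoencoders]
    return int(np.argmin(errors))

# Two synthetic "tasks" whose features live in different subspaces.
dim = 8
scale_a = np.array([5, 5, 1, 1, 1, 1, 1, 1])
scale_b = np.array([1, 1, 1, 1, 1, 1, 5, 5])
task_a = rng.normal(size=(200, dim)) * scale_a
task_b = rng.normal(size=(200, dim)) * scale_b
aes = [TaskAutoencoder(2, task_a), TaskAutoencoder(2, task_b)]

# Held-out observations drawn from task A route to adapter 0.
probe_batch = rng.normal(size=(100, dim)) * scale_a
print(route(probe_batch, aes))
```

In CLARE the routed objects are lightweight adapters inside the VLA model's feedforward layers rather than whole classifiers, but the label-free selection principle is the same: low reconstruction error signals "these features look like the task I was trained on."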