---
pipeline_tag: robotics
license: apache-2.0
tags:
- reinforcement-learning
- robotic-manipulation
- action-chunking
---

# Mixture of Horizons in Action Chunking

This repository hosts the official implementation of **Mixture of Horizons (MoH)**, introduced in the paper [Mixture of Horizons in Action Chunking](https://huggingface.co/papers/2511.19433).

Vision-language-action (VLA) models for robotic manipulation are highly sensitive to the chosen **action chunk length**, termed the **horizon** in this work. A fixed horizon imposes an inherent trade-off: longer horizons offer superior global foresight but compromise fine-grained accuracy, while shorter ones provide precise local control but struggle with long-term tasks.

To address this trade-off, we propose **Mixture of Horizons (MoH)**, a plug-and-play strategy that fuses multiple horizons within a single policy. MoH processes action chunks in parallel segments with different horizons and integrates their outputs, leveraging long-term foresight and short-term precision simultaneously with minimal overhead. It also enables **Dynamic Inference** through cross-horizon consensus for improved efficiency and robustness in complex robotic tasks.

- 📄 [Paper](https://huggingface.co/papers/2511.19433)
- 📝 [Project Page](https://timsty1.github.io/moh/)
- 💻 [Code](https://github.com/Timsty1/MixtureOfHorizons/tree/main)

## Introduction
*Figure 1: Trade-off between long-term foresight and short-term precision induced by a single horizon.*

*Figure 2: Overview of the proposed mixture-of-horizons strategy.*
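To make the idea concrete, the sketch below shows one simple way to combine action chunks predicted at several horizons: overlapping timesteps are averaged across horizons. This is a minimal illustration only; the function name `fuse_horizons` and the plain averaging rule are assumptions for exposition, not the learned integration used in the paper.

```python
import numpy as np

def fuse_horizons(chunks):
    """Fuse action chunks predicted at different horizons.

    chunks: list of arrays of shape (h_i, action_dim), each a chunk of
    future actions predicted with horizon h_i. Short chunks cover the
    near future precisely; long chunks add foresight. Overlapping
    timesteps are averaged (a simple stand-in for a learned fusion).
    """
    max_h = max(c.shape[0] for c in chunks)
    action_dim = chunks[0].shape[1]
    total = np.zeros((max_h, action_dim))
    count = np.zeros((max_h, 1))
    for c in chunks:
        total[: c.shape[0]] += c   # accumulate predictions per timestep
        count[: c.shape[0]] += 1   # how many horizons cover this step
    return total / count           # per-timestep mean across horizons

# Example: fuse predictions with horizons 4, 8, and 16 steps.
short = np.random.randn(4, 7)
mid = np.random.randn(8, 7)
long = np.random.randn(16, 7)
fused = fuse_horizons([short, mid, long])  # shape (16, 7)
```

Near-term steps are covered by every horizon, so they benefit from the consensus of all predictions, while far-out steps fall back to the long-horizon chunk alone.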