Abstract
Behavior cloning with quantized actions in autoregressive models achieves optimal sample complexity under stability and smoothness conditions, with quantization error incurring only polynomial dependence on the horizon.
Behavior cloning is a fundamental paradigm in machine learning, enabling policy learning from expert demonstrations across robotics, autonomous driving, and generative modeling. Autoregressive models such as transformers have proven remarkably effective, from large language models (LLMs) to vision-language-action (VLA) systems. However, applying autoregressive models to continuous control requires discretizing actions through quantization, a practice widely adopted yet poorly understood theoretically. This paper provides theoretical foundations for this practice. We analyze how quantization error propagates along the horizon and interacts with statistical sample complexity. We show that behavior cloning with quantized actions and log-loss achieves optimal sample complexity, matching existing lower bounds, and incurs only polynomial horizon dependence on quantization error, provided the dynamics are stable and the policy satisfies a probabilistic smoothness condition. We further characterize when different quantization schemes satisfy or violate these requirements, and propose a model-based augmentation that provably improves the error bound without requiring policy smoothness. Finally, we establish fundamental limits that jointly capture the effects of quantization error and statistical complexity.
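To make the setup concrete, here is a minimal sketch (not from the paper; all names, the bin count `K`, and the action range are our own assumptions) of the binning-based quantization the abstract refers to: a continuous action is mapped to one of `K` tokens, and the round-trip gap is the quantization error that the analysis tracks along the horizon.

```python
import numpy as np

K = 256  # number of bins (illustrative choice)

def tokenize(action, low=-1.0, high=1.0, k=K):
    """Map a continuous action in [low, high] to a bin index in {0, ..., k-1}."""
    a = np.clip(action, low, high)
    idx = np.floor((a - low) / (high - low) * k).astype(int)
    return np.minimum(idx, k - 1)  # fold the right edge into the last bin

def detokenize(idx, low=-1.0, high=1.0, k=K):
    """Map a bin index back to the bin center. The gap to the original
    action is the quantization error, at most (high - low) / (2 * k)."""
    return low + (idx + 0.5) * (high - low) / k

a = 0.3137
round_trip_error = abs(detokenize(tokenize(a)) - a)
assert round_trip_error <= (1.0 - (-1.0)) / (2 * K)
```

Under this scheme the per-step error shrinks as `1/K`; the paper's question is how such per-step errors compound over a horizon when the quantized policy is rolled out in closed loop.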
Community
We found that when you tokenize continuous robot actions for autoregressive models (like VLAs), the quantization error can compound catastrophically over long horizons if the tokenizer isn't "smooth". We showed that simple binning-based approaches naturally avoid this problem, while fancier learned tokenizers can fail silently: they look great on training data but blow up at deployment because small state perturbations map to completely different tokens. We also propose a model-based trick where the policy runs on simulated states instead of real ones to stay in-distribution, which further tightens the error. Our theory matches information-theoretic lower bounds, confirming that binning-based tokenization (as used in RT-2 and FAST) is theoretically well-grounded.
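The "model-based trick" above can be sketched in a few lines. This is our own illustrative toy, not the paper's implementation: `f_hat` stands in for a learned dynamics model, `policy` for a quantized policy, and the key point is that the policy is conditioned on the internally simulated state, not the real one, so the quantized rollout stays in-distribution.

```python
import numpy as np

def f_hat(s, a):
    """Stand-in learned dynamics model (assumed stable): s' = 0.9*s + a."""
    return 0.9 * s + a

def policy(s):
    """Stand-in quantized policy; rounding mimics action quantization."""
    return np.round(-0.5 * s, 2)

def rollout(s0, real_step, horizon=50):
    """Act on the real system while feeding the policy *simulated* states."""
    s_real, s_sim = s0, s0
    for _ in range(horizon):
        a = policy(s_sim)              # policy only ever sees model states
        s_real = real_step(s_real, a)  # action is applied to the real system
        s_sim = f_hat(s_sim, a)        # internal model rollout advances
    return s_real
```

With stable dynamics, the mismatch between `s_real` and `s_sim` stays bounded instead of compounding, which is the mechanism behind the tighter error bound claimed above.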
The following papers were recommended by the Semantic Scholar API
- Beyond State-Wise Mirror Descent: Offline Policy Optimization with Parameteric Policies (2026)
- Causal Imitation Learning Under Measurement Error and Distribution Shift (2026)
- Interaction-Grounded Learning for Contextual Markov Decision Processes with Personalized Feedback (2026)
- Towards Practical World Model-based Reinforcement Learning for Vision-Language-Action Models (2026)
- When does predictive inverse dynamics outperform behavior cloning? (2026)
- SPAARS: Safer RL Policy Alignment through Abstract Exploration and Refined Exploitation of Action Space (2026)
- Inverse Contextual Bandits without Rewards: Learning from a Non-Stationary Learner via Suffix Imitation (2026)