Multi-Stream LLMs: Unblocking Language Models with Parallel Streams of Thoughts, Inputs and Outputs
Abstract
Language models can be enhanced by transitioning from sequential message-based instruction-tuning to parallel stream processing, enabling simultaneous reading and generation across multiple concurrent data flows.
Continued improvements in language model capability have unlocked their widespread use as drivers of autonomous agents, for example in coding or computer-use applications. However, the core of these systems has not changed much since early instruction-tuned models like ChatGPT. Even advanced AI agents operate on message-exchange formats, successively exchanging messages with users, systems, tools, and with themselves (i.e., chain-of-thought) in a single stream of computation. Confining chat models to a single stream leads to a number of limitations: the agent cannot act (generate output) while reading and, conversely, cannot react to new information while writing. Similarly, the agent cannot act while thinking and cannot think while reading or acting on information. In this work, we show that models can be unblocked by switching from instruction-tuning for sequential message formats to instruction-tuning for multiple, parallel streams of computation, splitting each role into a separate stream. Every forward pass of the language model then simultaneously reads from multiple input streams and generates tokens in multiple output streams, all of which causally depend on earlier timesteps. We argue that this data-driven change remedies the usability limitations outlined above, improves model efficiency through parallelization, improves model security through better separation of concerns, and can further improve model monitorability.
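To make the multi-stream idea concrete, here is a minimal toy sketch of the decoding loop the abstract describes: at each timestep the model reads at most one token from every input stream and emits one token into every output stream, and every emission may depend on everything seen at earlier timesteps across all streams. All names and the stand-in "model" below are hypothetical illustrations, not the paper's implementation.

```python
def step(history, inputs):
    """Stand-in for one forward pass: returns one token per output stream.

    Here the 'thought' stream just counts timesteps seen so far and the
    'reply' stream echoes the latest user token in uppercase -- a
    placeholder for a real model, chosen only to keep the sketch runnable.
    """
    user_tok = inputs.get("user")
    return {
        "thought": f"<saw:{len(history)}>",
        "reply": user_tok.upper() if user_tok else "<wait>",
    }

def run(input_streams, n_steps):
    """Advance all streams in lockstep for n_steps timesteps."""
    history = []  # shared causal timeline across every stream
    outputs = {"thought": [], "reply": []}
    iters = {name: iter(toks) for name, toks in input_streams.items()}
    for _ in range(n_steps):
        # Read at most one token from each input stream this timestep;
        # exhausted streams simply contribute nothing.
        inputs = {}
        for name, it in iters.items():
            tok = next(it, None)
            if tok is not None:
                inputs[name] = tok
        out = step(history, inputs)
        for name, tok in out.items():
            outputs[name].append(tok)
        # Both what was read and what was written become visible to
        # all later timesteps -- the causal dependence the paper notes.
        history.append((inputs, out))
    return outputs

result = run({"user": ["hello", "there"]}, n_steps=3)
```

Note how the "thought" and "reply" streams advance in parallel: the model keeps generating (here, idling with `<wait>`) even after the input stream runs dry, rather than blocking on a turn boundary as in a sequential message format.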
Community
Oh my God!
This is one of the best papers I've read in the last month. It's novel, useful, and practical.
CoT was a big leap for AI, sure, but ever since it became the standard, I've been looking for ways to escape it, because, despite its utility, it adds so much overhead and weird training semantics.
The original sin is that CoT made language carry too many roles at once: working memory, algorithm trace, explanation, self-supervision target, debugging surface, inference-time compute, and sometimes user-facing justification. That was useful because it required no architectural change. But it also meant every extra bit of cognition had to be paid for as extra tokens, and every internal computation became entangled with the semantics of natural-language text.
Multi-Stream LLMs is important because it attacks the interface bottleneck directly.
I've been working on a similar project, and am excited to see this kind of research happening.
Get this paper in your agent:
hf papers read 2605.12460
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash
