Papers
arxiv:2601.16206

LLM-in-Sandbox Elicits General Agentic Intelligence

Published on Jan 22
Submitted by Daixuan Cheng on Jan 23
Abstract

We introduce LLM-in-Sandbox, enabling LLMs to explore within a code sandbox (i.e., a virtual computer), to elicit general intelligence in non-code domains. We first demonstrate that strong LLMs, without additional training, exhibit generalization capabilities to leverage the code sandbox for non-code tasks. For example, LLMs spontaneously access external resources to acquire new knowledge, leverage the file system to handle long contexts, and execute scripts to satisfy formatting requirements. We further show that these agentic capabilities can be enhanced through LLM-in-Sandbox Reinforcement Learning (LLM-in-Sandbox-RL), which uses only non-agentic data to train models for sandbox exploration. Experiments demonstrate that LLM-in-Sandbox, in both training-free and post-trained settings, achieves robust generalization spanning mathematics, physics, chemistry, biomedicine, long-context understanding, and instruction following. Finally, we analyze LLM-in-Sandbox's efficiency from computational and system perspectives, and open-source it as a Python package to facilitate real-world deployment.
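The core idea above, letting a model run code inside an isolated environment and read back the result, can be illustrated with a minimal sketch. This is not the paper's implementation or the `llm-in-sandbox` package API; it is a simplified stand-in that only isolates the working directory and captures output, assuming a local Python interpreter.

```python
import subprocess
import sys
import tempfile

def run_in_sandbox(code: str, timeout: int = 10) -> str:
    """Execute a Python snippet in an isolated working directory.

    A minimal stand-in for a code sandbox: the real system runs a
    full virtual computer with file-system and network access; here
    we only isolate the working directory and capture stdout/stderr.
    """
    with tempfile.TemporaryDirectory() as workdir:
        result = subprocess.run(
            [sys.executable, "-c", code],  # run the snippet as a child process
            cwd=workdir,                   # confine file writes to a temp dir
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        return result.stdout + result.stderr

# Example: an LLM asked to satisfy a strict formatting requirement
# might emit a script like this rather than formatting text itself.
print(run_in_sandbox("print(', '.join(sorted(['physics', 'math', 'chemistry'])))"))
```

In the training-free setting described in the abstract, the model's generated scripts would take the place of the hard-coded snippet, with the loop feeding captured output back into the model's context.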

Community

Paper author Paper submitter

Introducing LLM-in-Sandbox: put your LLM in a virtual computer to unlock general agentic intelligence for non-code tasks!

Significant gains for chemistry, long-context QA, instruction following, and more. No extra training needed.

๐ŸŒ Demo: https://llm-in-sandbox.github.io
๐Ÿ’ป Code: https://github.com/llm-in-sandbox/llm-in-sandbox

pip install llm-in-sandbox

Feel free to open issues or discussions 🤗

