Dataset schema (viewer column statistics):

| Column | Type | Range |
|---|---|---|
| id | int64 | 401-617 |
| file_name | string | 10-39 chars |
| paper_id | string | 9 chars |
| title | string | 6-175 chars |
| abstract | string | 4-1.92k chars |
| link | string | 32-155 chars |
| year | int64 | 2.02k-2.03k |
| content | string | 16k-771k chars |
| category | string | 1 distinct class |
**Record 401**

- file_name: 2210.03629v3.md
- paper_id: Agent_001
- title: ReAct: Synergizing Reasoning and Acting in Language Models
- abstract: While large language models (LLMs) have demonstrated impressive capabilities across tasks in language understanding and interactive decision making, their abilities for reasoning (e.g. chain-of-thought prompting) and acting (e.g. action plan generation) have primarily been studied as separate topics. In this paper, we explore the use of LLMs to generate both reasoning traces and task-specific actions in an interleaved manner, allowing for greater synergy between the two: reasoning traces help the model induce, track, and update action plans as well as handle exceptions, while actions allow it to interface with external sources, such as knowledge bases or environments, to gather additional information. We apply our approach, named ReAct, to a diverse set of language and decision making tasks and demonstrate its effectiveness over state-of-the-art baselines, as well as improved human interpretability and trustworthiness over methods without reasoning or acting components. Concretely, on question answering (HotpotQA) and fact verification (Fever), ReAct overcomes issues of hallucination and error propagation prevalent in chain-of-thought reasoning by interacting with a simple Wikipedia API, and generates human-like task-solving trajectories that are more interpretable than baselines without reasoning traces. On two interactive decision making benchmarks (ALFWorld and WebShop), ReAct outperforms imitation and reinforcement learning methods by an absolute success rate of 34% and 10% respectively, while being prompted with only one or two in-context examples.
- link: https://arxiv.org/abs/2210.03629
- year: 2023
## REACT: SYNERGIZING REASONING AND ACTING IN LANGUAGE MODELS
## ABSTRACT
While large language models (LLMs) have demonstrated impressive performance across tasks in language understanding and interactive decision making, their abilities for reasoning (e.g. chain-of-thought prompting) and acting (e.g. action plan generation) have primarily been studied as separate topics. In this paper, we explore the use of LLMs to generate both reasoning traces and task-specific actions in an interleaved manner, allowing for greater synergy between the two: reasoning traces help the model induce, track, and update action plans as well as handle exceptions, while actions allow it to interface with and gather additional information from external sources such as knowledge bases or environments. We apply our approach, named ReAct, to a diverse set of language and decision making tasks and demonstrate its effectiveness over state-of-the-art baselines, in addition to improved human interpretability and trustworthiness. Concretely, on question answering (HotpotQA) and fact verification (Fever), ReAct overcomes prevalent issues of hallucination and error propagation in chain-of-thought reasoning by interacting with a simple Wikipedia API, and generates human-like task-solving trajectories that are more interpretable than baselines without reasoning traces. Furthermore, on two interactive decision making benchmarks (ALFWorld and WebShop), ReAct outperforms imitation and reinforcement learning methods by an absolute success rate of 34% and 10% respectively, while being prompted with only one or two in-context examples.
## 1 INTRODUCTION
A unique feature of human intelligence is the ability to seamlessly combine task-oriented actions with verbal reasoning (or inner speech, Alderson-Day & Fernyhough, 2015), which has been theorized to play an important role in human cognition for enabling self-regulation or strategization (Vygotsky, 1987; Luria, 1965; Fernyhough, 2010) and maintaining a working memory (Baddeley, 1992). Consider the example of cooking up a dish in the kitchen. Between any two specific actions, we may reason in language in order to track progress ('now that everything is cut, I should heat up the pot of water'), to handle exceptions or adjust the plan according to the situation ('I don't have salt, so let me use soy sauce and pepper instead'), and to realize when external information is needed ('how do I prepare dough? Let me search on the Internet'). We may also act (open a cookbook to read the recipe, open the fridge, check ingredients) to support the reasoning and to answer questions ('What dish can I make right now?'). This tight synergy between 'acting' and 'reasoning' allows humans to learn new tasks quickly and perform robust decision making or reasoning, even under previously unseen circumstances or when facing information uncertainty.
Recent results have hinted at the possibility of combining verbal reasoning with interactive decision making in autonomous systems. On one hand, properly prompted large language models (LLMs) have demonstrated emergent capabilities to carry out several steps of reasoning traces to derive answers from questions in arithmetic, commonsense, and symbolic reasoning tasks (Wei et al., 2022). However, this 'chain-of-thought' reasoning is a static black box, in that the model uses its own internal representations to generate thoughts and is not grounded in the external world, which limits its ability to reason reactively or update its knowledge. This can lead to issues like fact hallucination and error propagation over the reasoning process (Figure 1 (1b)). On the other hand, recent work has explored the use of pre-trained language models for planning and acting in interactive environments (Ahn et al., 2022; Nakano et al., 2021; Yao et al., 2020; Huang et al., 2022a), with a focus on predicting actions via language priors. These approaches usually convert multi-modal observations into text, use a language model to generate domain-specific actions or plans, and then use a controller to choose or execute them. However, they do not employ language models to reason abstractly about high-level goals or maintain a working memory to support acting, barring Huang et al. (2022b) who perform a limited form of verbal reasoning to reiterate spatial facts about the current state. Beyond such simple embodied tasks to interact with a few blocks, there have not been studies on how reasoning and acting can be combined in a synergistic manner for general task solving, and if such a combination can bring systematic benefits compared to reasoning or acting alone.
Figure 1: (1) Comparison of 4 prompting methods, (a) Standard , (b) Chain-of-thought ( CoT , Reason Only), (c) Act -only, and (d) ReAct (Reason+Act), solving a HotpotQA (Yang et al., 2018) question. (2) Comparison of (a) Act -only and (b) ReAct prompting to solve an AlfWorld (Shridhar et al., 2020b) game. In both domains, we omit in-context examples in the prompt, and only show task solving trajectories generated by the model (Act, Thought) and the environment (Obs).
In this work, we present ReAct, a general paradigm to combine reasoning and acting with language models for solving diverse language reasoning and decision making tasks (Figure 1). ReAct prompts LLMs to generate both verbal reasoning traces and actions pertaining to a task in an interleaved manner, which allows the model to perform dynamic reasoning to create, maintain, and adjust high-level plans for acting (reason to act), while also interacting with external environments (e.g. Wikipedia) to incorporate additional information into reasoning (act to reason).
We conduct empirical evaluations of ReAct and state-of-the-art baselines on four diverse benchmarks: question answering (HotPotQA, Yang et al., 2018), fact verification (Fever, Thorne et al., 2018), text-based games (ALFWorld, Shridhar et al., 2020b), and webpage navigation (WebShop, Yao et al., 2022). For HotPotQA and Fever, with access to a Wikipedia API that the model can interact with, ReAct outperforms vanilla action generation models while being competitive with chain-of-thought reasoning (CoT) (Wei et al., 2022). The best approach overall is a combination of ReAct and CoT that allows for the use of both internal knowledge and externally obtained information during reasoning. On ALFWorld and WebShop, two-shot or even one-shot ReAct prompting is able to outperform imitation or reinforcement learning methods trained with $10^3 \sim 10^5$ task instances, with an absolute improvement of 34% and 10% in success rates respectively. We also demonstrate the importance of sparse, versatile reasoning in decision making by showing consistent advantages over controlled baselines with actions only. Besides general applicability and performance boost, the combination of reasoning and acting also contributes to model interpretability, trustworthiness, and diagnosability across all domains, as humans can readily distinguish information from the model's internal knowledge versus external environments, as well as inspect reasoning traces to understand the decision basis of model actions.
To summarize, our key contributions are the following: (1) we introduce ReAct, a novel prompt-based paradigm to synergize reasoning and acting in language models for general task solving; (2) we perform extensive experiments across diverse benchmarks to showcase the advantage of ReAct in a few-shot learning setup over prior approaches that perform either reasoning or action generation in isolation; (3) we present systematic ablations and analysis to understand the importance of acting in reasoning tasks, and reasoning in interactive tasks; (4) we analyze the limitations of ReAct under the prompting setup (i.e. limited support of reasoning and acting behaviors), and perform initial finetuning experiments showing the potential of ReAct to improve with additional training data. Scaling up ReAct to train and operate on more tasks and combining it with complementary paradigms like reinforcement learning could further unlock the potential of large language models.
## 2 REACT: SYNERGIZING REASONING + ACTING
Consider a general setup of an agent interacting with an environment for task solving. At time step $t$, an agent receives an observation $o_t \in \mathcal{O}$ from the environment and takes an action $a_t \in \mathcal{A}$ following some policy $\pi(a_t \mid c_t)$, where $c_t = (o_1, a_1, \cdots, o_{t-1}, a_{t-1}, o_t)$ is the context to the agent. Learning a policy is challenging when the mapping $c_t \mapsto a_t$ is highly implicit and requires extensive computation. For example, the agent shown in Figure 1(1c) is unable to generate the correct final action (Act 4) to finish the QA task, as it requires complex reasoning over the trajectory context (Question, Act 1-3, Obs 1-3). Similarly, the agent shown in Figure 1(2a) fails to comprehend from the context that sinkbasin 1 does not contain peppershaker 1, and thus keeps producing hallucinated actions.
The idea of ReAct is simple: we augment the agent's action space to $\hat{\mathcal{A}} = \mathcal{A} \cup \mathcal{L}$, where $\mathcal{L}$ is the space of language. An action $\hat{a}_t \in \mathcal{L}$ in the language space, which we will refer to as a thought or a reasoning trace, does not affect the external environment, thus leading to no observation feedback. Instead, a thought $\hat{a}_t$ aims to compose useful information by reasoning over the current context $c_t$, and updates the context to $c_{t+1} = (c_t, \hat{a}_t)$ to support future reasoning or acting. As shown in Figure 1, there could be various types of useful thoughts, e.g. decomposing task goals and creating action plans (2b, Act 1; 1d, Thought 1), injecting commonsense knowledge relevant to task solving (2b, Act 1), extracting important parts from observations (1d, Thought 2, 4), tracking progress and transitioning between action plans (2b, Act 8), handling exceptions and adjusting action plans (1d, Thought 3), and so on.
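To make the augmented action space concrete, below is a minimal sketch of a ReAct-style episode loop. The `llm` and `env` callables are hypothetical stand-ins, not the paper's implementation (the paper drives a frozen PaLM-540B with few-shot prompts, as described next):

```python
# Minimal sketch of a ReAct episode. `llm` (text in -> text out) and `env`
# (reset()/step() interface) are hypothetical stand-ins, not the paper's code.

def react_episode(llm, env, prompt_prefix, max_steps=20):
    context = env.reset()  # initial observation starts the context c_t
    for _ in range(max_steps):
        # Augmented action space A ∪ L: the model may emit either a
        # free-form thought or a domain-specific action.
        output = llm(prompt_prefix + context)
        if output.startswith("Thought:"):
            # Thoughts never touch the environment and yield no observation;
            # they only extend the context: c_{t+1} = (c_t, thought).
            context += "\n" + output
        else:
            # Environment actions a_t return an observation from the env.
            observation, done = env.step(output)
            context += "\n" + output + "\nObs: " + observation
            if done:
                break
    return context
```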
However, as the language space $\mathcal{L}$ is unlimited, learning in this augmented action space is difficult and requires strong language priors. In this paper, we mainly focus on the setup where a frozen large language model, PaLM-540B (Chowdhery et al., 2022), is prompted with few-shot in-context examples to generate both domain-specific actions and free-form language thoughts for task solving (Figure 1 (1d), (2b)). Each in-context example is a human trajectory of actions, thoughts, and environment observations to solve a task instance (see Appendix C). For the tasks where reasoning is of primary importance (Figure 1(1)), we alternate the generation of thoughts and actions so that the task-solving trajectory consists of multiple thought-action-observation steps. In contrast, for decision making tasks that potentially involve a large number of actions (Figure 1(2)), thoughts only need to appear sparsely in the most relevant positions of a trajectory, so we let the language model decide the asynchronous occurrence of thoughts and actions for itself.
Since decision making and reasoning capabilities are integrated into a large language model, ReAct enjoys several unique features: A) Intuitive and easy to design : Designing ReAct prompts is straightforward as human annotators just type down their thoughts in language on top of their actions taken. No ad-hoc format choice, thought design, or example selection is used in this paper. We detail prompt design for each task in Sections 3 and 4. B) General and flexible : Due to the flexible thought space and thought-action occurrence format, ReAct works for diverse tasks with distinct action spaces and reasoning needs, including but not limited to QA, fact verification, text game, and web navigation. C) Performant and robust : ReAct shows strong generalization to new task instances while learning solely from one to six in-context examples, consistently outperforming baselines with only reasoning or acting across different domains. We also show in Section 3 additional benefits when finetuning is enabled, and in Section 4 how ReAct performance is robust to prompt selections. D) Human aligned and controllable : ReAct promises an interpretable sequential decision making and reasoning process where humans can easily inspect reasoning and factual correctness. Moreover, humans can also control or correct the agent behavior on the go by thought editing, as shown in Figure 5 in Section 4.
## 3 KNOWLEDGE-INTENSIVE REASONING TASKS
We begin with knowledge-intensive reasoning tasks like multi-hop question answering and fact verification. As shown in Figure 1(1d), by interacting with a Wikipedia API, ReAct is able to retrieve information to support reasoning, while also using reasoning to target what to retrieve next, demonstrating a synergy of reasoning and acting.
## 3.1 SETUP
Domains We consider two datasets that challenge knowledge retrieval and reasoning: (1) HotPotQA (Yang et al., 2018), a multi-hop question answering benchmark that requires reasoning over two or more Wikipedia passages, and (2) FEVER (Thorne et al., 2018), a fact verification benchmark where each claim is annotated SUPPORTS, REFUTES, or NOT ENOUGH INFO, based on whether there exists a Wikipedia passage to verify the claim. In this work, we operate in a question-only setup for both tasks, where models only receive the question/claim as input without access to supporting paragraphs, and have to rely on their internal knowledge or retrieve knowledge via interacting with an external environment to support reasoning.
Action Space We design a simple Wikipedia web API with three types of actions to support interactive information retrieval: (1) search[entity], which returns the first 5 sentences from the corresponding entity wiki page if it exists, or else suggests the top-5 similar entities from the Wikipedia search engine; (2) lookup[string], which returns the next sentence in the page containing string, simulating Ctrl+F functionality in a browser; and (3) finish[answer], which finishes the current task with answer. We note that this action space can mostly only retrieve a small part of a passage based on an exact passage name, which is significantly weaker than state-of-the-art lexical or neural retrievers. The purpose is to simulate how humans would interact with Wikipedia, and force models to retrieve via explicit reasoning in language.
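A schematic of this three-action interface is shown below; `fetch_page` and `suggest_similar` are hypothetical helpers standing in for whatever backend actually serves Wikipedia text, not the paper's actual API:

```python
# Schematic of the search / lookup / finish interface described above.
# `fetch_page` (entity -> list of sentences, or None) and `suggest_similar`
# (query -> similar entity names) are hypothetical injected helpers.

class WikiEnv:
    def __init__(self, fetch_page, suggest_similar):
        self.fetch_page = fetch_page
        self.suggest_similar = suggest_similar
        self.page = []      # sentences of the last retrieved page
        self.cursor = 0     # position for lookup[...]
        self.answer = None

    def search(self, entity):
        sentences = self.fetch_page(entity)
        if sentences is None:
            # Page does not exist: suggest top-5 similar entities instead.
            return "Similar: " + ", ".join(self.suggest_similar(entity)[:5])
        self.page, self.cursor = sentences, 0
        return " ".join(sentences[:5])  # first 5 sentences of the page

    def lookup(self, string):
        # Simulates Ctrl+F: return the next sentence containing `string`.
        for i in range(self.cursor, len(self.page)):
            if string in self.page[i]:
                self.cursor = i + 1
                return self.page[i]
        return "No more results."

    def finish(self, answer):
        self.answer = answer  # ends the episode with the model's answer
        return "Episode finished."
```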
## 3.2 METHODS
ReAct Prompting For HotpotQA and Fever, we randomly select 6 and 3 cases from the training set and manually compose ReAct-format trajectories to use as few-shot exemplars in the prompts. Similar to Figure 1(1d), each trajectory consists of multiple thought-action-observation steps (i.e. dense thought), where free-form thoughts are used for various purposes. Specifically, we use a combination of thoughts that decompose questions ('I need to search x, find y, then find z'), extract information from Wikipedia observations ('x was started in 1844', 'The paragraph does not tell x'), perform commonsense ('x is not y, so z must instead be...') or arithmetic reasoning ('1844 < 1989'), guide search reformulation ('maybe I can search/look up x instead'), and synthesize the final answer ('...so the answer is x'). See Appendix C for more details.
Table 1: PaLM-540B prompting results on HotpotQA and Fever.
Figure 2: PaLM-540B prompting results with respect to number of CoT-SC samples used.
Baselines We systematically ablate ReAct trajectories to build prompts for multiple baselines (with formats as in Figure 1(1a-1c)): (a) Standard prompting (Standard), which removes all thoughts, actions, and observations in ReAct trajectories. (b) Chain-of-thought prompting (CoT) (Wei et al., 2022), which removes actions and observations and serves as a reasoning-only baseline. We also build a self-consistency baseline (CoT-SC) (Wang et al., 2022a;b) by sampling 21 CoT trajectories with decoding temperature 0.7 during inference and adopting the majority answer, which is found to consistently boost performance over CoT. (c) Acting-only prompting (Act), which removes thoughts in ReAct trajectories, loosely resembling how WebGPT (Nakano et al., 2021) interacts with the Internet to answer questions, though it operates on a different task and action space, and uses imitation and reinforcement learning instead of prompting.
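For concreteness, the CoT-SC majority vote can be sketched in a few lines; `llm` and `extract_answer` here are hypothetical stand-ins for the model call and answer parsing, not the paper's code:

```python
# Sketch of the CoT-SC baseline: sample chain-of-thought trajectories at
# temperature 0.7 and take the majority final answer.

from collections import Counter

def cot_self_consistency(llm, extract_answer, prompt, n_samples=21, temperature=0.7):
    answers = [extract_answer(llm(prompt, temperature=temperature))
               for _ in range(n_samples)]
    answer, count = Counter(answers).most_common(1)[0]
    return answer, count  # `count` is reused by the confidence heuristic below
```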
Combining Internal and External Knowledge As will be detailed in Section 3.3, we observe that the problem solving process demonstrated by ReAct is more factual and grounded, whereas CoT is more accurate in formulating reasoning structure but can easily suffer from hallucinated facts or thoughts. We therefore propose to combine ReAct and CoT-SC, and let the model decide when to switch to the other method based on the following heuristics: A) ReAct → CoT-SC: when ReAct fails to return an answer within the given number of steps, back off to CoT-SC. We set 7 and 5 steps for HotpotQA and FEVER respectively, as we find more steps will not improve ReAct performance. B) CoT-SC → ReAct: when the majority answer among n CoT-SC samples occurs less than n/2 times (i.e. internal knowledge might not support the task confidently), back off to ReAct.
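A minimal sketch of these two back-off heuristics, assuming a hypothetical `run_react` that returns None when no answer is produced within the step budget:

```python
# Sketch of the two combination heuristics. Step budgets follow the paper
# (7 for HotpotQA, 5 for FEVER); `run_react` and `cot_sc` are hypothetical
# callables wrapping the two prompting methods.

def react_then_cotsc(question, run_react, cot_sc, max_steps=7):
    # A) ReAct -> CoT-SC: back off to self-consistency if ReAct fails.
    answer = run_react(question, max_steps=max_steps)
    if answer is None:
        answer, _ = cot_sc(question)
    return answer

def cotsc_then_react(question, run_react, cot_sc, n=21):
    # B) CoT-SC -> ReAct: back off to ReAct when the majority answer is not
    # confident, i.e. it occurs fewer than n/2 times among n samples.
    answer, count = cot_sc(question, n_samples=n)
    if count < n / 2:
        answer = run_react(question)
    return answer
```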
Finetuning Due to the challenge of manually annotating reasoning traces and actions at scale, we consider a bootstrapping approach similar to Zelikman et al. (2022), using 3,000 trajectories with correct answers generated by ReAct (also for other baselines) to finetune smaller language models (PaLM-8/62B) to decode trajectories (all thoughts, actions, and observations) conditioned on input questions/claims. More details are in Appendix B.1.
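A minimal sketch of this bootstrapping filter, under the assumption that a hypothetical `run_react` returns both the full trajectory text and the final answer:

```python
# Sketch of STaR-style bootstrapping: keep only trajectories whose final
# answer matches the gold label, then use them as finetuning data.
# `run_react` and the record format are assumptions, not the paper's code.

def collect_finetuning_data(questions_with_answers, run_react, target=3000):
    data = []
    for question, gold in questions_with_answers:
        trajectory, answer = run_react(question)  # thoughts/actions/observations
        if answer == gold:  # exact-match filter on the final answer
            data.append({"input": question, "target": trajectory})
        if len(data) >= target:
            break
    return data  # finetune PaLM-8B/62B to decode `target` given `input`
```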
## 3.3 RESULTS AND OBSERVATIONS
ReAct outperforms Act consistently Table 1 shows HotpotQA and Fever results using PaLM-540B as the base model with different prompting methods. We note that ReAct is better than Act on both tasks, demonstrating the value of reasoning to guide acting, especially for synthesizing the final answer, as shown in Figure 1 (1c-d). Fine-tuning results also confirm the benefit of reasoning traces for more informed acting.
Table 2: Types of success and failure modes of ReAct and CoT on HotpotQA, as well as their percentages in randomly selected examples studied by humans.
ReAct vs. CoT On the other hand, ReAct outperforms CoT on Fever (60.9 vs. 56.3) and slightly lags behind CoT on HotpotQA (27.4 vs. 29.4). Fever claims for SUPPORTS/REFUTES might only differ by a slight amount (see Appendix D.1), so acting to retrieve accurate and up-to-date knowledge is vital. To better understand the behavioral difference between ReAct and CoT on HotpotQA, we randomly sampled 50 trajectories with correct and incorrect answers (judged by EM) from ReAct and CoT respectively (thus 200 examples in total), and manually labeled their success and failure modes in Table 2. Some key observations are as follows:
A) Hallucination is a serious problem for CoT, resulting in a much higher false positive rate than ReAct (14% vs. 6%) in success mode, and making up its major failure mode (56%). In contrast, the problem solving trajectory of ReAct is more grounded, fact-driven, and trustworthy, thanks to its access to an external knowledge base.
- B) While interleaving reasoning, action, and observation steps improves ReAct's groundedness and trustworthiness, such a structural constraint also reduces its flexibility in formulating reasoning steps, leading to a higher reasoning error rate than CoT. We note that there is one frequent error pattern specific to ReAct, in which the model repetitively generates the previous thoughts and actions; we categorize it as part of 'reasoning error', as the model fails to reason about the proper next action to take and jump out of the loop.
- C) For ReAct, successfully retrieving informative knowledge via search is critical. Non-informative search, which accounts for 23% of the error cases, derails the model's reasoning and makes it hard to recover and reformulate thoughts. This is perhaps an expected trade-off between factuality and flexibility, which motivates our proposed strategies of combining the two methods.
We provide examples for each success and failure mode in Appendix E.1. We also find that some HotpotQA questions may contain outdated answer labels; see Figure 4 for an example.
ReAct + CoT-SC perform best for prompting LLMs As also shown in Table 1, the best prompting methods on HotpotQA and Fever are ReAct → CoT-SC and CoT-SC → ReAct respectively. Furthermore, Figure 2 shows how different methods perform with respect to the number of CoT-SC samples used. While the two ReAct + CoT-SC methods are each advantageous at one task, they both significantly and consistently outperform CoT-SC across different numbers of samples, reaching the performance of CoT-SC with 21 samples using merely 3-5 samples. These results indicate the value of properly combining model-internal knowledge and external knowledge for reasoning tasks.
ReAct performs best for fine-tuning Figure 3 shows the scaling effect of prompting/finetuning the four methods (Standard, CoT, Act, ReAct) on HotpotQA. With PaLM-8/62B, prompting ReAct performs worst among the four methods due to the difficulty of learning both reasoning and acting from in-context examples. However, when finetuned with just 3,000 examples, ReAct becomes the best method among the four, with finetuned PaLM-8B ReAct outperforming all PaLM-62B prompting methods, and finetuned PaLM-62B ReAct outperforming all 540B prompting methods. In contrast, finetuning Standard or CoT is significantly worse than finetuning ReAct or Act for both PaLM-8/62B, as the former essentially teaches models to memorize (potentially hallucinated) knowledge facts, while the latter teaches models how to (reason and) act to access information from Wikipedia, a more generalizable skill for knowledge reasoning. As all prompting methods are still significantly far from domain-specific state-of-the-art approaches (Table 1), we believe finetuning with more human-written data might be a better way to unleash the power of ReAct.
Figure 3: Scaling results for prompting and finetuning on HotPotQA with ReAct (ours) and baselines.
## 4 DECISION MAKING TASKS
We also test ReAct on two language-based interactive decision-making tasks, ALFWorld and WebShop, both of which feature complex environments that require agents to act over long horizons with sparse rewards, warranting the need for reasoning to act and explore effectively.
ALFWorld ALFWorld (Shridhar et al., 2020b) (Figure 1(2)) is a synthetic text-based game designed to align with the embodied ALFRED benchmark (Shridhar et al., 2020a). It includes 6 types of tasks in which an agent needs to achieve a high-level goal (e.g. examine paper under desklamp) by navigating and interacting with a simulated household via text actions (e.g. go to coffeetable 1, take paper 2, use desklamp 1). A task instance can have more than 50 locations and take an expert policy more than 50 steps to solve, thus challenging an agent to plan and track subgoals, as well as explore systematically (e.g. check all desks one by one for a desklamp). In particular, one challenge built into ALFWorld is the need to determine likely locations for common household items (e.g. desklamps will likely be on desks, shelves, or dressers), making this environment a good fit for LLMs to exploit their pretrained commonsense knowledge. To prompt ReAct, we randomly annotate three trajectories from the training set for each task type, where each trajectory includes sparse thoughts that (1) decompose the goal, (2) track subgoal completion, (3) determine the next subgoal, and (4) reason via commonsense about where to find an object and what to do with it. We show the prompts used for ALFWorld in Appendix C.4. Following Shridhar et al. (2020b), we evaluate on 134 unseen evaluation games in a task-specific setup. For robustness, we construct 6 prompts for each task type, using each permutation of 2 of the 3 annotated trajectories. Act prompts are constructed using the same trajectories, but without thoughts; since task instances are randomly chosen from the training set, this favors neither ReAct nor Act and provides a fair and controlled comparison to test the importance of sparse thoughts. For baselines, we use BUTLER (Shridhar et al., 2020b), an imitation learning agent trained on $10^5$ expert trajectories for each task type.
WebShop Can ReAct also interact with noisy real-world language environments for practical applications? We investigate WebShop (Yao et al., 2022), a recently proposed online shopping website environment with 1.18M real-world products and 12k human instructions. Unlike ALFWorld, WebShop contains a high variety of structured and unstructured texts (e.g. product titles, descriptions, and options crawled from Amazon), and requires an agent to purchase a product based on a user instruction (e.g. 'I am looking for a nightstand with drawers. It should have a nickel finish, and priced lower than $140') through web interactions (e.g. search 'nightstand drawers', choose buttons such as 'color: modern-nickel-white' or 'back to search'). This task is evaluated by average score (percentage of desired attributes covered by the chosen product, averaged across all episodes) and success rate (percentage of episodes where the chosen product satisfies all requirements) on 500 test instructions. We formulate Act prompts with actions to search, choose a product, choose options, and buy, with ReAct prompts additionally reasoning to determine what to explore, when to buy, and which product options are relevant to the instruction. See Table 6 for an example prompt, and Table 10 for model predictions in the Appendix. We compare to an imitation learning (IL) method trained with 1,012 human-annotated trajectories, and an imitation + reinforcement learning (IL + RL) method additionally trained with 10,587 training instructions.
Table 3: AlfWorld task-specific success rates (%). BUTLER and BUTLER g results are from Table 4 of Shridhar et al. (2020b). All methods use greedy decoding, except that BUTLER uses beam search.
Table 4: Score and success rate (SR) on Webshop. IL/IL+RL taken from Yao et al. (2022).
Results ReAct outperforms Act on both ALFWorld (Table 3) and WebShop (Table 4). On ALFWorld, the best ReAct trial achieves an average success rate of 71%, significantly outperforming the best Act (45%) and BUTLER (37%) trials. In fact, even the worst ReAct trial (48%) beats the best trial of both methods. Moreover, the advantage of ReAct over Act is consistent across six controlled trials, with relative performance gains ranging from 33% to 90% and averaging 62%. Qualitatively, we saw that, without any thoughts at all, Act fails to correctly decompose goals into smaller subgoals, or loses track of the current state of the environment. Example trajectories comparing ReAct and Act can be found in Appendix D.2.1 and Appendix D.2.2.
On Webshop, one-shot Act prompting already performs on par with IL and IL+RL methods. With additional sparse reasoning, ReAct achieves significantly better performance, with an absolute 10% improvement over the previous best success rate. By checking examples, we find that ReAct is more likely to identify instruction-relevant products and options by reasoning to bridge the gap between noisy observations and actions (e.g. 'For 'space-saving ottoman bench for living room', the item has options '39x18x18inch' and 'blue' and seems good to buy.'). However, existing methods are still far from the performance of expert humans (Table 4), who perform significantly more product explorations and query re-formulations that are still challenging for prompting-based methods.
On the value of internal reasoning vs. external feedback To our knowledge, ReAct is the first demonstration of combined reasoning and action using an LLM applied to an interactive environment within a closed-loop system. Perhaps the closest prior work is Inner Monologue (IM), from Huang et al. (2022b), in which actions from an embodied agent are motivated by an eponymous 'inner monologue'. However, IM's 'inner monologue' is limited to observations of the environment state and what needs to be completed by the agent for the goal to be satisfied. In contrast, the reasoning traces in ReAct for decision making are flexible and sparse, allowing diverse reasoning types (see Section 2) to be induced for different tasks.
To demonstrate the differences between ReAct and IM, and to highlight the importance of internal reasoning vs. simple reactions to external feedback, we ran an ablation experiment using a thought pattern composed of IM-like dense external feedback. As can be seen in Table 3, ReAct substantially outperforms IM-style prompting (ReAct-IM) (71 vs. 53 overall success rate), with consistent advantages on five out of six tasks. Qualitatively, we observed that ReAct-IM often made mistakes in identifying when subgoals were finished, or what the next subgoal should be, due to a lack of high-level goal decomposition. Additionally, many ReAct-IM trajectories struggled to determine where an item would likely be within the ALFWorld environment, due to a lack of commonsense reasoning. Both shortcomings can be addressed in the ReAct paradigm. More details about ReAct-IM are in Appendix B.2. An example prompt for ReAct-IM can be found in Appendix C.4, and an example trajectory in Appendix D.2.3.
## 5 RELATED WORK
Language model for reasoning Perhaps the most well-known work on using LLMs for reasoning is Chain-of-Thought (CoT) (Wei et al., 2022), which reveals the ability of LLMs to formulate their own 'thinking procedure' for problem solving. Several follow-up works have since been performed, including least-to-most prompting for solving complicated tasks (Zhou et al., 2022), zero-shot CoT (Kojima et al., 2022), and reasoning with self-consistency (Wang et al., 2022a). Recently, Madaan & Yazdanbakhsh (2022) systematically studied the formulation and structure of CoT, and observed that the presence of symbols, patterns, and text is crucial to the effectiveness of CoT. Other work has also extended to more sophisticated reasoning architectures beyond simple prompting. For example, Selection-Inference (Creswell et al., 2022) divides the reasoning process into the two steps of 'selection' and 'inference'. STaR (Zelikman et al., 2022) bootstraps the reasoning process by finetuning the model on correct rationales generated by the model itself. Faithful reasoning (Creswell & Shanahan, 2022) decomposes multi-step reasoning into three steps, each performed by a dedicated LM. Similar approaches like Scratchpad (Nye et al., 2021), which finetunes an LM on intermediate computation steps, also demonstrate improvement on multi-step computation problems. In contrast to these methods, ReAct performs more than just isolated, fixed reasoning, and integrates model actions and their corresponding observations into a coherent stream of inputs for the model to reason more accurately and tackle tasks beyond reasoning (e.g. interactive decision making).
Language model for decision making The strong capability of LLMs has enabled them to perform tasks beyond language generation, and it is becoming more popular to take advantage of LLMs as a policy model for decision making, especially in interactive environments. WebGPT (Nakano et al., 2021) uses an LM to interact with web browsers, navigate through web pages, and infer answers to complicated questions from ELI5 (Fan et al., 2019). In comparison to ReAct, WebGPT does not explicitly model the thinking and reasoning procedure, instead relying on expensive human feedback for reinforcement learning. In conversation modeling, chatbots like BlenderBot (Shuster et al., 2022b) and Sparrow (Glaese et al., 2022) and task-oriented dialogue systems like SimpleTOD (Hosseini-Asl et al., 2020) also train LMs to make decisions about API calls. Unlike ReAct, they do not explicitly consider the reasoning procedure either, and also rely on expensive datasets and human feedback collection for policy learning. In contrast, ReAct learns a policy in a much cheaper way, since the decision making process only requires a language description of the reasoning procedure.
LLMs have also been increasingly employed in interactive and embodied environments for planning and decision making. Perhaps most relevant to ReAct in this respect are SayCan (Ahn et al., 2022) and Inner Monologue (Huang et al., 2022b), which use LLMs for robotic action planning and decision making. In SayCan, LLMs are prompted to directly predict possible actions a robot can take, which are then reranked by an affordance model grounded in the visual environment for the final prediction. Inner Monologue made further improvements by adding the eponymous 'inner monologue', which is implemented as injected feedback from the environment. To our knowledge, Inner Monologue is the first work that demonstrates such a closed-loop system, which ReAct builds on. However, we argue that Inner Monologue does not truly comprise inner thoughts; this is elaborated in Section 4. We also note that leveraging language as semantically rich input in the process of interactive decision making has been shown to be successful in other settings (Abramson et al., 2020; Karamcheti et al., 2021; Huang et al., 2022a; Li et al., 2022). It is becoming more evident that, with the help of LLMs, language as a fundamental cognitive mechanism will play a critical role in interaction and decision making. What is more, progress in LLMs has also inspired the development of versatile and generalist agents like that of Reed et al. (2022).
## 6 CONCLUSION
We have proposed ReAct, a simple yet effective method for synergizing reasoning and acting in large language models. Through a diverse set of experiments on multi-hop question-answering, fact checking, and interactive decision-making tasks, we show that ReAct leads to superior performance with interpretable decision traces. Despite the simplicity of our method, complex tasks with large action spaces require more demonstrations to learn well, which unfortunately can easily exceed the input length limit of in-context learning. We explore the fine-tuning approach on HotpotQA with initial promising results, but learning from more high-quality human annotations will be needed to further improve performance. Scaling up ReAct with multi-task training and combining it with complementary paradigms like reinforcement learning could result in stronger agents that further unlock the potential of LLMs for more applications.
- category: Agent
**Record 402**

- file_name: 2501.04227v1.md
- paper_id: Agent_002
- title: Agent Laboratory: Using LLM Agents as Research Assistants
- abstract: Historically, scientific discovery has been a lengthy and costly process, demanding substantial time and resources from initial conception to final results. To accelerate scientific discovery, reduce research costs, and improve research quality, we introduce Agent Laboratory, an autonomous LLM-based framework capable of completing the entire research process. This framework accepts a human-provided research idea and progresses through three stages (literature review, experimentation, and report writing) to produce comprehensive research outputs, including a code repository and a research report, while enabling users to provide feedback and guidance at each stage. We deploy Agent Laboratory with various state-of-the-art LLMs and invite multiple researchers to assess its quality by participating in a survey, providing human feedback to guide the research process, and then evaluating the final paper. We found that: (1) Agent Laboratory driven by o1-preview generates the best research outcomes; (2) The generated machine learning code is able to achieve state-of-the-art performance compared to existing methods; (3) Human involvement, providing feedback at each stage, significantly improves the overall quality of research; (4) Agent Laboratory significantly reduces research expenses, achieving an 84% decrease compared to previous autonomous research methods. We hope Agent Laboratory enables researchers to allocate more effort toward creative ideation rather than low-level coding and writing, ultimately accelerating scientific discovery.
- link: https://arxiv.org/abs/2501.04227
- year: 2025
## Agent Laboratory: Using LLM Agents as Research Assistants
Historically, scientific discovery has been a lengthy and costly process, demanding substantial time and resources from initial conception to final results. To accelerate scientific discovery, reduce research costs, and improve research quality, we introduce Agent Laboratory, an autonomous LLM-based framework capable of completing the entire research process. This framework accepts a human-provided research idea and progresses through three stages (literature review, experimentation, and report writing) to produce comprehensive research outputs, including a code repository and a research report, while enabling users to provide feedback and guidance at each stage. We deploy Agent Laboratory with various state-of-the-art LLMs and invite multiple researchers to assess its quality by participating in a survey, providing human feedback to guide the research process, and then evaluating the final paper. We found that: (1) Agent Laboratory driven by o1-preview generates the best research outcomes; (2) The generated machine learning code is able to achieve state-of-the-art performance compared to existing methods; (3) Human involvement, providing feedback at each stage, significantly improves the overall quality of research; (4) Agent Laboratory significantly reduces research expenses, achieving an 84% decrease compared to previous autonomous research methods. We hope Agent Laboratory enables researchers to allocate more effort toward creative ideation rather than low-level coding and writing, ultimately accelerating scientific discovery.
Figure 1 | Agent Laboratory takes as input a human research idea and a set of notes, provides this to a pipeline of specialized LLM-driven agents, and produces a research report and code repository.
## 1. Introduction
Scientists frequently face constraints that limit the number of research ideas they can explore at any given time, resulting in ideas being prioritized based on predicted impact. While this process helps determine which concepts are worth investing time in and how best to allocate limited resources effectively, many high-quality ideas remain unexplored. If the process of exploring ideas had fewer limitations, researchers would be able to investigate multiple concepts simultaneously, increasing the likelihood of scientific discovery.
In an effort to achieve this, recent work has explored the capability of LLMs to perform research ideation and automated paper generation, where LLM agents perform the role of human scientists (Baek et al. (2024); Ghafarollahi & Buehler (2024b); Lu et al. (2024a); Swanson et al. (2024)). The work of Baek et al. (2024) introduces ResearchAgent, which automatically generates research ideas, methods, and experiment designs, iteratively refining them through feedback from multiple reviewing agents that mirror peer discussions and leverage human-aligned evaluation criteria to improve the outputs. Lu et al. (2024a) explores fully automated paper generation, where The AI Scientist framework generates novel research ideas, writes code, conducts experiments, and creates a full scientific paper with an automated peer-review system to evaluate the work. Even though these works demonstrate that current LLMs can generate ideas judged to be more novel than those produced by human experts, Si et al. (2024) indicates that LLMs still exhibit weaknesses in feasibility and implementation details, suggesting a complementary rather than replacement role for LLMs in research. Therefore, we aim to design an autonomous agent pipeline that can assist humans toward implementing their own research ideas.
In this work, we introduce Agent Laboratory, an autonomous pipeline for accelerating the individual's ability to perform machine learning research. Unlike previous approaches, where agents participate in their own research ideation independent of human input (Baek et al. (2024); Lu et al. (2024b)), Agent Laboratory is designed to assist human scientists in executing their own research ideas using language agents. Agent Laboratory takes as input a human research idea and outputs a research report and code repository produced by autonomous language agents, allowing various levels of human involvement, where feedback can be provided at a frequency based on user preference. A detailed list of our contributions is provided below:
- 1. We introduce Agent Laboratory, an open-source LLM agent framework for accelerating the individual's ability to perform research in machine learning. In order to accommodate all users, Agent Laboratory is compute flexible, where various levels of compute can be allocated based on the individual's access to compute resources (e.g., CPU, GPU, memory) and model inference budget.
- 2. Human evaluators rated papers generated using Agent Laboratory across experimental quality, report quality, and usefulness, showing that while the o1-preview backend was perceived as the most useful, o1-mini achieved the highest experimental quality scores, and gpt-4o lagged behind in all metrics.
- 3. NeurIPS-style evaluations showed that o1-preview performed best among backends, particularly in clarity and soundness, according to human reviewers. However, a clear gap emerged between human and automated evaluations, with automated scores significantly overestimating quality (6.1/10 vs. 3.8/10 overall). Similar discrepancies were seen across clarity and contribution metrics, suggesting the need for human feedback to complement automated evaluations for more accurate assessments of research quality.
- 4. Co-pilot mode in Agent Laboratory was evaluated on custom and preselected topics, showing higher overall scores compared to autonomous mode. Co-pilot papers also saw trade-offs in experimental quality and usefulness, reflecting challenges in aligning agent outputs with researcher intent.
- 5. The co-pilot feature in Agent Laboratory is overall found to have high utility and usability when rated by human users, with most participants deciding to continue usage after their experience.
- 6. Detailed cost and inference time statistics, as well as the breakdown of cost per paper phase, are presented for different model back-ends, demonstrating that Agent Laboratory offers automatic research at a greatly reduced price compared with other works (only $2.33 USD per paper with a gpt-4o backend).
- 7. State-of-the-art performance on a subset of MLE-Bench challenges using the proposed mle-solver , achieving higher consistency and scoring compared to other solvers, and earning more medals, including gold and silver, than MLAB, OpenHands, and AIDE.
We hope that this work takes a step toward accelerating scientific discovery in machine learning, allowing researchers to allocate more effort toward creative ideation and experiment design rather than low-level coding and writing.
## 2. Background & Related Work
Large language models The research agents in this paper are built on autoregressive large language models (LLMs), which are trained on extensive text corpora to predict conditional probabilities of token sequences, $p(x_t \mid x_{<t}; \theta)$, and generate text completions through sampling, where $x_t \sim \mathrm{softmax}(W h_t)$, with $h_t$ as the hidden state and $W$ as the learned weight matrix mapping to token probabilities. LLMs utilize transformer architectures (Vaswani (2017)) to capture long-range dependencies in text. These models, such as Claude (Anthropic (2024)), Llama (Dubey et al. (2024); Touvron et al. (2023a,b)), and ChatGPT (Achiam et al. (2023); Hurst et al. (2024); OpenAI (2022)), leverage vast datasets and scaling techniques, thus enabling them to perform a wide array of language-based tasks, such as translation, summarization, and reasoning, by generalizing patterns learned during pretraining to novel inputs (Brown (2020)).
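As a toy numeric illustration of the sampling step $x_t \sim \mathrm{softmax}(W h_t)$, with made-up dimensions rather than a real model's vocabulary and transformer-produced hidden state:

```python
# Toy illustration of next-token sampling: x_t ~ softmax(W · h_t).
# Dimensions are invented for the example; real models are vastly larger.

import numpy as np

rng = np.random.default_rng(0)
d_model, vocab = 8, 50                   # toy hidden size and vocabulary size
W = rng.normal(size=(vocab, d_model))    # learned output projection matrix
h_t = rng.normal(size=d_model)           # hidden state at step t

logits = W @ h_t
probs = np.exp(logits - logits.max())    # numerically stable softmax
probs /= probs.sum()
next_token = rng.choice(vocab, p=probs)  # sample x_t from the distribution
print(next_token, probs[next_token])
```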
LLM Agents While LLMs demonstrate strong understanding and reasoning abilities, they face challenges when executing tasks in real-world scenarios. To overcome these limitations, their capabilities are extended through structured frameworks, enabling them to autonomously or semi-autonomously perform task execution (Chen et al. (2023b); Li et al. (2023); Qian et al. (2024); Wu et al. (2023)). These systems, referred to as agents, utilize techniques such as chain-of-thought prompting (Wei et al. (2022)), iterative refinement (Shinn et al. (2024)), self-improvement (Huang et al. (2022)), and external tool integration to execute complex workflows (Hao et al. (2024); Qin et al. (2023); Schick et al. (2023)). LLM agents have made remarkable progress in solving tasks of real-world significance, such as software engineering (Jimenez et al. (2023); Wang et al. (2024b); Yang et al. (2024)), cybersecurity (Abramovich et al. (2024); Fang et al. (2024); Wan et al. (2024)), and medical diagnosis (McDuff et al. (2023); Schmidgall et al. (2024); Tu et al. (2024)). There has also been progress in applying LLM agents to embodied problems such as autonomous robotics (Black et al. (2024); Brohan et al. (2022, 2023); Kim et al. (2024)), web tasks (Deng et al. (2024); Gur et al. (2023); He et al. (2024); Putta et al. (2024); Shi et al. (2017)), and game playing (AL et al. (2024); Feng et al. (2024); Wang et al. (2023)). For a broader overview of LLM agents, refer to Wang et al. (2024a).
Automated machine learning Automated machine learning is an area of active research, with many approaches focused on using Kaggle, an online platform for machine learning competitions, as a benchmark for evaluating agent performance. Notable efforts include MLE-Bench (Chan et al. (2024)), DS-bench (Jing et al. (2024)), and MLAgentBench (Huang et al. (2024)), which propose using 75, 74, and 6 Kaggle challenges respectively as benchmarks to measure the abilities of ML agents in tasks such as data preparation, model development, and submission. Several ML 'solvers' capable of solving ML challenges have been introduced, such as AIDE (Schmidt et al. (2024)), CodeActAgent (referred to as 'OpenHands') (Wang et al. (2024b)), and ResearchAgent (referred to as 'MLAB') from MLAgentBench (Huang et al. (2024)), which automate feature implementation, bug fixing, and code refactoring with a high success rate. Agent K (Grosnit et al. (2024)) demonstrates the ability to solve Kaggle challenges at the human level with a challenge URL provided as input.
AI in Scientific Discovery AI has been used to support scientific discovery across numerous disciplines for decades. For instance, AI has been used for discovery in mathematics (Romera-Paredes et al. (2024)), material science (Merchant et al. (2023); Pyzer-Knapp et al. (2022); Szymanski et al. (2023)), chemistry (Hayes et al. (2024); Jumper et al. (2021)), algorithm discovery (Fawzi et al. (2022)), and computational biology (Ding et al. (2024)). These approaches position AI as a tool supporting human researchers rather than as an agent performing research autonomously.
LLMs for research-related tasks LLMs have demonstrated strong capabilities in diverse research-related tasks, such as code generation (Chen et al. (2021); Nijkamp et al. (2022)), end-to-end software development (Hai et al. (2024); Phan et al. (2024); Qian et al. (2023, 2024)), code generation for discovery (Chen et al. (2024b); Ghafarollahi & Buehler (2024a); Gu et al. (2024); Guo et al. (2024); Hu et al. (2024b); Ifargan et al. (2024); Majumder et al. (2024)), research question-answering (Chen et al. (2024a); Lála et al. (2023); Lin et al. (2024); Song et al. (2024)), research ideation (Baek et al. (2024); Ghafarollahi & Buehler (2024b); Li et al. (2024a); Si et al. (2024)), automated paper reviewing (D'Arcy et al. (2024); Liang et al. (2024); Lu et al. (2024b); Weng et al. (2024)), literature search (Ajith et al. (2024); Kang & Xiong (2024); Li et al. (2024b); Press et al. (2024)), and predicting the outcome of experiments (Ashokkumar et al. (2024); Lehr et al. (2024); Luo et al. (2024); Manning et al. (2024); Zhang et al. (2024)). Although LLMs have made notable progress in solving the aforementioned tasks, ideation has struggled to progress, with some work showing that LLM ideation leads to greater novelty than humans (Si et al. (2024)), while other work shows reduced creativity (Chakrabarty et al. (2024)) and stronger homogenization effects (Anderson et al. (2024); Zhou et al. (2024)) that may limit creative discovery without human guidance.
Additionally, research on human-AI collaboration has reached mixed conclusions about idea novelty (Ashkinaze et al. (2024); Liu et al. (2024); Padmakumar & He (2024)). These findings suggest that, with current LLMs, the strongest research systems would combine human-guided ideation with LLM-based workflows.
LLMs for autonomous research Recent advancements in automated scientific workflows have focused on leveraging LLMs to emulate the process of research. Swanson et al. (2024) introduces a team of LLM agents working as scientists alongside a human researcher who provides high-level feedback, with the end result being novel nanobody binders aimed at addressing recent variants of SARS-CoV-2. ChemCrow (M. Bran et al. (2024)) and Coscientist (Boiko et al. (2023)) demonstrate the ability for autonomous ideation and experimentation in chemistry. ResearchAgent (Baek et al. (2024)) automates research idea generation, experiment design, and iterative refinement using feedback from reviewing agents aligned with human evaluation criteria. The AI Scientist (Lu et al. (2024a)) extends
Figure 2 | Agent Laboratory Workflow. This image illustrates the three primary phases of Agent Laboratory: Literature Review, Experimentation, and Report Writing, each featuring distinct tasks, tools, and human-agent roles. The pipeline integrates human input with LLM-driven agents, such as the PhD and Postdoc agents, which handle literature reviews, experimental planning, data preparation, and result interpretation. Specialized tools like mle-solver for experimentation and paper-solver for report generation automate tedious research tasks, enabling collaboration between human researchers and AI to produce high-quality research outputs.
this automation to encompass end-to-end scientific discovery, including coding, experiment execution, and automated peer review for manuscript generation. Despite these advancements, studies like Si et al. (2024) highlight limitations in the feasibility and implementation details of LLM ideation, indicating a complementary rather than replacement role for LLMs in autonomous research.
## 3. Agent Laboratory
Overview. Agent Laboratory begins with the independent collection and analysis of relevant research papers, progresses through collaborative planning and data preparation, and results in automated experimentation and comprehensive report generation. As shown in Figure 2, the overall workflow consists of three primary phases: (1) Literature Review, (2) Experimentation, and (3) Report Writing. In this section, we introduce these phases in detail along with the corresponding agents involved. Furthermore, in Section 4, we conduct qualitative and quantitative analyses to demonstrate the strengths of Agent Laboratory and its ability to generate high-quality research outputs.
## 3.1. Literature Review
Literature Review. The literature review phase involves gathering and curating relevant research papers for the given research idea to provide references for subsequent stages. During this process, the PhD agent utilizes the arXiv API to retrieve related papers and performs three main actions: summary, full text, and add paper. The summary action retrieves abstracts of the top 20 papers relevant to the initial query produced by the agent. The full text action extracts the complete content of specific papers, and the add paper action incorporates selected summaries or full texts into the curated review. This process is iterative rather than a single-step operation, as the agent performs multiple queries, evaluates the relevance of each paper based on its content, and refines the selection to build a comprehensive review. Once the specified number of relevant texts (N = max) is reached via the add paper command, the curated review is finalized for use in subsequent phases.
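A minimal sketch of this iterative loop follows; `arxiv_search`, `get_full_text`, and `agent_decide` are hypothetical stand-ins for the arXiv API calls and the PhD agent's LLM-driven decisions, not the framework's actual code:

```python
# Sketch of the iterative literature-review loop described above.
# All callables are hypothetical injected helpers.

def literature_review(idea, arxiv_search, get_full_text, agent_decide,
                      n_max=10, max_rounds=20):
    review = []
    for _ in range(max_rounds):                  # multiple refining queries
        query = agent_decide("next query", idea, review)
        candidates = arxiv_search(query, top_k=20)   # `summary`: top-20 abstracts
        for paper in candidates:
            if agent_decide("is relevant", idea, paper):
                paper["text"] = get_full_text(paper["id"])  # `full text`
                review.append(paper)                        # `add paper`
            if len(review) >= n_max:
                return review  # N = max reached: finalize the curated review
    return review
```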
## 3.2. Experimentation
Plan Formulation The plan formulation phase focuses on creating a detailed, actionable research plan based on the literature review and research goal. During this phase, the PhD and Postdoc agents collaborate through dialogue to specify how to achieve the research objective, detailing experimental components needed to complete the specified research idea such as which machine learning models to implement, which datasets to use, and the high-level steps of the experiment. Once a consensus is reached, the Postdoc agent submits this plan using the plan command, which serves as a set of instructions for subsequent subtasks.
Data Preparation. The goal of the data preparation phase is to write code that prepares data for running experiments, using the instructions from the plan formulation stage as a guideline. The ML Engineer agent executes code using the python command and observes any printed output. The ML Engineer has access to HuggingFace datasets, searchable via the search HF command. After agreeing on the finalized data preparation code, the SW Engineer agent submits it using the submit code command. Before the final submission proceeds, the code is first passed through a Python compiler to ensure that there are no compilation issues. This process is executed iteratively until the code is bug-free.
Running Experiments. In the running experiments phase, the ML Engineer agent focuses on implementing and executing the experimental plan formulated previously. This is facilitated by mle-solver, a specialized module designed to generate, test, and refine machine learning code autonomously. mle-solver begins by producing initial code based on the research plan and insights from the literature review. For the first mle-solver step, the program is empty and must be generated from scratch; this file is then used as the top scoring program. The following processes describe the workflow of mle-solver (a schematic sketch of the full loop follows the list):
- A. Command Execution. During the command execution phase, an initial program is sampled from a maintained set of top-performing programs, which is represented by a single file during initialization. The mle-solver iteratively refines this program through two operations, REPLACE and EDIT , to better align the output with experimental objectives. The EDIT operation identifies a range of lines, substituting the code between the specified line numbers with newly generated code. In contrast, the REPLACE operation generates a completely new Python file.
- B. Code Execution. After a code command is executed, the new program is passed through a compiler to check for runtime errors. If it successfully compiles, a score is returned, and the list of top programs is updated if the score is higher than that of the existing programs. If the code does not compile, the agent attempts to repair the code for $N_{rep}$ tries ($N_{rep} = 3$ in our experiments) before returning an error and moving on to a new code replacement.
- C. Program Scoring. If the code compiles successfully, it is sent to a scoring function which determines whether it is better than previously implemented experiment code. To obtain a program score, we implement a scoring function that uses an LLM reward model to assess the effectiveness of the ML code generated by mle-solver. The reward model, invoked as an LM, scores the program on a scale from 0 to 1 by considering the outlined research plan, the produced code, and the observed output to determine how accurately the program adheres to the initial goals. A score of 1 indicates high alignment, with lower scores reflecting how closely the output and code match the planning goals. This process is similar to existing methods for LLM reasoning tree search (Yao et al. (2024)), where instead of traversing a series of reasoning steps with self-evaluated LLM scoring, the set of possible programs is traversed (via EDIT and REPLACE commands) and the resulting program outcome is self-evaluated to determine whether a program is worth building on. This is similar to the Solution Space Search of AIDE (Schmidt et al. (2024)); however, their method was specifically designed for Kaggle competitions and simply extracts accuracy rather than scoring the research code and outcomes.
Figure 3 | Overview of the mle-solver workflow. This diagram details the iterative process used by the MLE-Solver to autonomously generate machine learning code. Beginning with external resources, the workflow integrates command execution (A), where new code is generated, followed by code execution (B) to compile and repair issues if needed. Program scoring (C) evaluates the generated code using a reward function, while self-reflection (D) helps refine future iterations based on results. Performance stabilization (E) ensures consistent outcomes by maintaining a pool of top-performing programs and iterative optimization.
- D. Self Reflection. Whether the code succeeds or fails, a self-reflection is produced based on the experimental results or the encountered error signal (Renze & Guven (2024); Shinn et al. (2024)). Here, the mle-solver is prompted to reflect on the outcome of its actions. If the program failed to compile, the solver reflects on how to fix this issue in subsequent iterations. If it successfully compiles and returns a score, the solver reflects on how to increase this score. These reflections are generated to improve future performance, ensuring that the system learns from errors and improves the quality and robustness of the generated code over iterative cycles.
- E. Performance Stabilization. To prevent performance drift, two mechanisms are implemented: top program sampling and batch-parallelization. In top program sampling, a collection of the highest-scoring programs is maintained, and one program is randomly sampled before executing a command, ensuring diversity while retaining quality. For batch-parallelization, each solver step involves making N modifications simultaneously, with the top modification selected to replace the lowest-scoring program in the top collection. These strategies use high-entropy sampling to modify the code, balancing exploration of new solutions against refinement of existing ones in order to maintain stable code modifications.
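The sketch below condenses one mle-solver iteration (steps A-C): top-program sampling, an EDIT-style line substitution, bounded repair, and reward-model scoring. The reward model and repair step are stubs, and all names are illustrative rather than taken from the actual codebase:

```python
# Condensed, hypothetical sketch of a single mle-solver iteration.
import random

N_REP = 3  # repair attempts before abandoning a broken program (N_rep = 3)

def edit(program: str, start: int, end: int, new_lines: list[str]) -> str:
    """EDIT: substitute the code between two line numbers with new code."""
    lines = program.splitlines()
    return "\n".join(lines[:start] + new_lines + lines[end + 1:])

def compiles(program: str) -> bool:
    try:
        compile(program, "<candidate>", "exec")
        return True
    except SyntaxError:
        return False

def reward_model(plan: str, program: str) -> float:
    """Placeholder LLM reward model scoring plan adherence in [0, 1]."""
    return random.random()

plan = "Fit a classifier and report accuracy"
top_programs = [("print('baseline')", 0.1)]   # (code, score), seeded at init

program, _ = random.choice(top_programs)      # step A: top program sampling
candidate = edit(program, 0, 0, ["print('improved baseline')"])
for attempt in range(N_REP):                  # step B: bounded repair loop
    if compiles(candidate):
        break
    candidate = edit(candidate, 0, 0, ["pass"])  # stand-in for LLM repair
if compiles(candidate):
    score = reward_model(plan, candidate)     # step C: program scoring
    worst = min(top_programs, key=lambda p: p[1])
    if score > worst[1]:                      # keep only top-scoring programs
        top_programs.remove(worst)
        top_programs.append((candidate, score))
print(top_programs)
```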
Figure 4 | Graphical outline of paper-solver . This diagram showcases the step-by-step process of generating and refining academic research reports using the Paper-Solver tool. The workflow starts with the creation of an initial report scaffold (A) by iteratively generating LaTeX-based sections, followed by updates to ensure structural completeness. (B) Research is performed through an Arxiv tool during relevant sections. In the Report Editing phase (C), the language model applies targeted edits to improve the document, with LaTeX compilation verifying the integrity of changes. Finally, the completed report undergoes a reward-based evaluation during the Paper Review phase (D), ensuring alignment with academic standards and research goals.
Results Interpretation. The goal of the results interpretation phase is to derive meaningful insights from experimental outcomes to inform the final report. The PhD and Postdoc agents discuss their understanding of the experimental results produced by mle-solver . Once they agree on a meaningful interpretation that could contribute to a compelling academic paper, the Postdoc agent submits it using the interpretation command, forming the basis for the report writing phase.
## 3.3. Report Writing
Report Writing. In the report writing phase, the PhD and Professor agents synthesize the research findings into a comprehensive academic report. This process is facilitated by a specialized module called paper-solver, which iteratively generates and refines the report. The paper-solver acts as a report generator, contextualizing the work produced by the previous stages of Agent Laboratory. paper-solver does not aim to entirely replace the academic paper-writing process, but rather to summarize the research that has been produced in a human-readable format so that the researcher using Agent Laboratory understands what has been accomplished. The output follows the standard structure of an academic paper, ensuring it meets conference submission requirements (for the paper scoring phase) while being clear and methodical. The following processes describe the workflow of paper-solver:
- A. Initial Report Scaffold. The first task of the paper-solver is to generate an initial scaffold for the research paper. This scaffold outlines the document structure, dividing it into eight standardized sections: Abstract, Introduction, Background, Related Work, Methods, Experimental Setup, Results, and Discussion. During scaffold creation, placeholders are inserted for each section to categorize future content. This process establishes the framework for subsequent detailed text generation. The scaffold includes necessary formatting for LaTeX compilation, allowing the generated paper to be directly reviewed and refined. Special care is taken to ensure the scaffold aligns with academic conventions, such as appropriate section titles and placeholders that guide content development.
- B. Arxiv Research. During the scaffold-building phase, the paper-solver is given access to arXiv through the same interface used in the earlier literature review phase. This allows the solver to explore related literature on the subject it is writing about and to find papers to cite, although its use is not enforced. We note that the agent still has access to the original literature search, but has the opportunity to expand it with literature needed to write a particular paper section.
- C. Report Editing. Once the scaffold is built, the paper-solver uses specialized commands to iteratively refine the generated paper. The primary command available at this stage is the EDIT command, which allows precise line-by-line modifications to the LaTeX code. This command enables dynamic adjustments to the content, ensuring alignment with the research plan, clarity of argument, and compliance with formatting standards. Before integrating edits, the system compiles the LaTeX to verify error-free functionality, thereby maintaining document integrity. Through iterative editing, the solver ensures the paper achieves the level of quality, cohesiveness, and depth required for academic acceptance. (A toy sketch of this compile-checked editing loop follows item D below.)
- D. Paper Review. To obtain scores for papers during paper-solver iterations, we leverage an adapted version of the automated review system developed in Lu et al. (2024b). This system uses an LLM-based agent to simulate the scientific paper review process following the NeurIPS conference guidelines. When evaluated on 500 ICLR 2022 papers from the OpenReview dataset, the automated reviewer achieved human-level accuracy (65% compared to 66% for human reviewers) and surpassed human performance in F1 score (0.57 vs. 0.49) after calibration. An example review from one of our papers by o1-mini is provided below.
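As a toy illustration of the scaffold construction (step A) and the compile-checked EDIT loop (step C), consider the following sketch; the `latex_compiles` check is a deliberately simplified stand-in for real LaTeX compilation:

```python
# Illustrative paper-solver editing loop; section names follow the paper, but
# the integrity check and helpers are simplified assumptions.
SECTIONS = ["Abstract", "Introduction", "Background", "Related Work",
            "Methods", "Experimental Setup", "Results", "Discussion"]

def build_scaffold() -> list[str]:
    """Step A: LaTeX scaffold with a placeholder under each section."""
    lines = ["\\documentclass{article}", "\\begin{document}"]
    for name in SECTIONS:
        lines += [f"\\section{{{name}}}", f"% TODO: write {name}"]
    return lines + ["\\end{document}"]

def latex_compiles(lines: list[str]) -> bool:
    """Toy integrity check: exactly one balanced document environment."""
    src = "\n".join(lines)
    return src.count("\\begin{document}") == src.count("\\end{document}") == 1

def edit_lines(lines, start, end, replacement):
    """The EDIT command: precise line-by-line modification of the LaTeX."""
    candidate = lines[:start] + replacement + lines[end + 1:]
    return candidate if latex_compiles(candidate) else lines  # keep integrity

paper = build_scaffold()
paper = edit_lines(paper, 3, 3, ["We study ... (generated abstract text)."])
print("\n".join(paper[:6]))
```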
Paper Refinement. In the paper refinement phase, the PhD agent decides whether to make paper revisions or to determine that the paper is complete. The process begins with a set of three reviewer agents generating reviews that mimic feedback from NeurIPS peer reviewers, evaluating the report based on criteria such as originality, quality, clarity, and significance. Based on these scores, the PhD agent then decides whether to finalize the project or revisit earlier subtasks, such as planning, experimentation, or results interpretation, to address the feedback. This allows the agents to refine the research report until it meets sufficiently high standards, effectively simulating the real-world academic revision process.
## 3.3.1. Autonomous versus Co-Pilot Mode:
There are two ways in which Agent Laboratory can be operated: autonomous and co-pilot modes. In autonomous mode, there is no human involvement other than providing the initial research idea for which the agents produce research. Each subtask moves on to the next sequentially upon completion. In co-pilot mode, in addition to providing the research idea, there is also a checkpoint at the end of each subtask, where a human reviews the work produced by the agents in that phase (e.g., the literature review summary or generated report). The human reviewer can either decide to proceed to the next subtask or ask the agent to repeat the subtask while providing high-level notes for the agent to improve its performance during the next attempt. For example, if the literature review phase did not include a specific paper or the experiments did not include a desired technique, the human reviewer would instruct the agent to include it.
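A hedged sketch of this checkpoint logic follows; `run_subtask` and `human_checkpoint` are hypothetical names for illustration, and a real deployment would collect the reviewer's decision interactively:

```python
# Sketch of the autonomous-versus-co-pilot control flow described above.
SUBTASKS = ["literature review", "plan formulation", "data preparation",
            "running experiments", "results interpretation", "report writing"]

def run_subtask(name: str, notes: str | None) -> str:
    """Placeholder for the agents completing one phase of the workflow."""
    return f"output of {name}" + (f" (revised per: {notes})" if notes else "")

def human_checkpoint(output: str) -> tuple[bool, str | None]:
    """In co-pilot mode a person reviews here; this stub always approves."""
    return True, None  # (approved, improvement notes)

copilot = True
for task in SUBTASKS:
    notes = None
    while True:
        output = run_subtask(task, notes)
        if not copilot:
            break                       # autonomous mode: proceed immediately
        approved, notes = human_checkpoint(output)
        if approved:
            break                       # proceed to the next subtask
```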
## 4. Results
In this section, we present our main findings on the efficacy of Agent Laboratory to produce research. We begin our results by asking how human evaluators perceive papers generated by Agent Laboratory running in end-to-end autonomous mode across five topics. Next, we examine human evaluation when using Agent Laboratory in collaborative co-pilot mode from both allowing the researcher to choose any topic they want and from our set of preselected topics. We then provide a detailed runtime analysis including cost, average time, and success rate by various models. Finally, we conclude with an evaluation of the mle-solver in isolation on MLE-Bench, a set of real-world Kaggle challenges. The details of all surveys are provided in Appendix C.
## 4.1. Evaluation of quality by language model
Our first experiment aims to evaluate how human-evaluated quality varies across three axes: experiment quality, report quality, and usefulness. Papers were generated using three different LLM backends: gpt-4o (Hurst et al. (2024)), o1-mini, and o1-preview (OpenAI (2024)), and evaluated by human participants. Research questions were selected from a set of 5 templates:
- 1. Do language models exhibit cognitive biases, such as confirmation bias or anchoring bias?
- 2. Are image transformers more or less sensitive to pixel noise than convolutional networks?
- 3. Do language models improve accuracy on MedQA when asked to perform differential diagnosis?
- 4. Are language models sensitive to word order in multiple choice benchmarks?
- 5. Does gender role play affect the accuracy of language models on answering math questions?

Figure 5 | The average human evaluated scores of papers generated by Agent Laboratory in autonomous mode, organized by research question (left column) and LLM backend (top row). The bottom row shows the average score across all topics by LLM backend.
These 5 questions across 3 LLM backends resulted in a total of 15 papers being written autonomously by Agent Laboratory without any human involvement. We then recruited 10 volunteer PhD students to review 3 randomly assigned papers each. These researchers rated the experimental quality, report quality, and usefulness of the generated outputs on a scale of 1 to 5. The goal of this evaluation is to understand the differences in quality of produced research across the three distinct LLM backends, and to understand the usefulness of Agent Laboratory in autonomous mode. The evaluation questions are detailed here:
- · Experimental Quality: What is your perception of the quality of the experimental results presented in this report?
- · Report Quality: What is your perception of the quality of the research report writing quality presented in this report?
- · Usefulness: What is your perception of the usefulness of an AI assistant tool that can generate the presented report autonomously?
The results of this evaluation indicate variability in performance across different Agent Laboratory LLM backends (Figure 5). gpt-4o consistently achieved lower scores, with an average experimental quality rating of 2.6/5, a report quality rating of 3.0/5, and a usefulness rating of 4.0/5. In contrast, o1-mini generally outperformed gpt-4o in experimental quality, with an average score of 3.2/5 (+0.6), while maintaining similar levels of report quality and usefulness at 3.2/5 (+0.2) and 4.3/5 (+0.3), respectively. o1-preview demonstrated the highest usefulness and report quality, averaging 4.4/5 (+0.4 from gpt-4o and +0.1 from o1-mini) and 3.4/5 (+0.4 from gpt-4o and +0.2 from o1-mini) respectively, though its experimental ratings were slightly lower than o1-mini's at 2.9/5 (+0.3 from gpt-4o and -0.3 from o1-mini). While all backends perform comparably in terms of report and experimental quality, the o1-preview model was rated as the most useful for research assistance, suggesting that its outputs were better aligned with the expectations and needs of researchers.
Our results also demonstrate that quality varies based on the selected topic. We find the overall highest average report quality to be 3.8/5 and usefulness to be 4.5/5 for the word order topic, and the highest average experiment quality to be 3.2/5 for the cognitive bias topic. Interestingly, we also find that word order has the lowest experiment quality at 2.7/5, along with the image noise topic. The image noise topic showed high variance based on the LLM backend, with an experiment quality score of 1.5/5 for gpt-4o and 4.0/5 with o1-mini (+2.5 point difference), and a usefulness score of 2.5/5 for gpt-4o and 4.5/5 with o1-mini (+2.0 point difference).
In summary, the evaluation of quality across LLM backends demonstrates clear differences in experimental quality, report quality, and usefulness. While o1-preview is consistently rated as the most useful for research assistance, o1-mini achieves the highest experimental quality scores, and gpt-4o is generally outperformed in all areas. Topic-specific trends suggest there may be variability in the performance of Agent Laboratory across different areas of machine learning research and across backend models.
## 4.1.1. Human reviewer scores by language model
In addition to evaluating paper quality, we also asked human reviewers to assess papers generated by Agent Laboratory according to NeurIPS-style criteria, including quality, significance, clarity, soundness, presentation, and contribution as shown in Figure 6. We evaluated the same papers analyzed in Section 4.1 using the aforementioned metrics and conducted the comparison. We found that the average human scores for the three backends revealed differences in performance, with average overall ratings ranging from 3.5/10 with gpt-4o, 3.8/10 with o1-mini, and 4.0/10 with o1-preview.
First, when evaluating quality, we find that reviewers rated gpt-4o the lowest at 1.8/4, while o1-mini achieved the highest score of 2.3/4, demonstrating relatively better technical soundness. In terms of significance, all three backends received similar scores between 2.2-2.5/4, indicating a modest contribution to advancing research goals. Clarity scores showed slight variability, with gpt-4o receiving 2.6/4 and o1-mini falling slightly lower at 2.1/4 (-0.5), reflecting differences in how well the papers were written. The soundness of the generated outputs, which assesses the robustness of claims, was rated highest for o1-preview at 2.2/4, with o1-mini and gpt-4o at 1.8/4 (-0.4) and 1.7/4, respectively. Presentation and contribution ratings followed similar trends, with the overall contribution score averaging 2.1/4 across models, highlighting a need for improvement in the originality of the outputs.
These scores show a general trend where human reviewers identified o1-preview as producing slightly better-rounded outputs compared to other backends, though significant gaps remain in technical and methodological aspects across all models. We note that the average score of an accepted paper at NeurIPS is 5.9. In this regard, on average, papers produced in autonomous mode are below the acceptance threshold for top ML conferences. These results demonstrate that, in autonomous mode, there is a need for refinement of Agent Laboratory to meet human expectations for high-quality, impactful research papers.
Automated Reviews versus Human Reviews. We also explore to what extent the automated reviewer scores align with those of human reviewers. The alignment is graphically illustrated using both tabular data (for all scores) and violin plots (for overall scores) in Figure 6. Our findings suggest that automated reviewers exhibit notable discrepancies across all metrics compared with human evaluators, with a tendency to substantially overestimate the contribution of self-evaluated work. While the automated reviewers gave an average overall score of 6.1/10, above that of an average NeurIPS paper, human reviewers provided a much lower average of 3.8/10 (-2.3 points).
Figure 6 | Scores from NeurIPS-style evaluation of generated papers, including the criteria: quality, significance, clarity, soundness, presentation, and contribution. (top) Split-violin plot comparing the overall score distribution of automated reviewers (LLM scores, left half of violin) and human reviewers (right half of violin). Human scores are not predictive of automated reviewer scores, averaging 2.3 points lower. (middle) Automated reviewer scores across NeurIPS-style criteria. (bottom) Human reviewer scores across NeurIPS-style criteria.
Similar gaps are observed for all specific criteria, such as clarity and contribution, where automated reviewers rated clarity at 3.6/4 on average compared to 2.4/4 by human evaluators. This pattern holds for all criteria. Previous work (Lu et al. (2024b)) demonstrates high alignment between automated reviewers and ICLR scores from OpenReview. However, with actual humans rating the generated papers, we find that automated reviews do not align closely with human reviews and are far from the average accepted paper at NeurIPS 2024, which stands at 5.85∗ (our scores were 2.05 points lower on average). Our results demonstrate that human evaluations should be provided alongside automated reviewer scores in future works in order to obtain a better understanding of the quality of generated papers.
## 4.2. Evaluation of co-pilot quality
We next evaluate the use of Agent Laboratory in co-pilot mode, where a human researcher provides feedback at the end of each subtask (see Section 3.3.1 for more details). We evaluate performance across two measures: (1) the quality of Agent Laboratory as a tool for assisting research and (2) the quality of generated papers. We first ask researchers to co-pilot Agent Laboratory on a topic of their choice without limitations. We then ask researchers to select a topic from the 5 topics introduced in Section 4.1, resulting in a total of 2 papers per researcher, which we refer to as custom and preselected papers respectively. After their papers are generated, we ask researchers to rate their experience using Agent Laboratory during the process of generating custom and preselected papers. We then ask them to self-evaluate the generated papers according to NeurIPS-style criteria. Finally, we ask external researchers to evaluate their papers, comparing performance with Agent Laboratory in autonomous mode. All experiments used an o1-mini backbone for all phases except the literature review.
## 4.2.1. Quality as a tool
The evaluation of Agent Laboratory as a research tool focuses on understanding its effectiveness in assisting researchers during co-pilot mode. After generating their papers, participants were asked to reflect on their experiences and assess the tool's utility, usability, and overall satisfaction. We begin our evaluation by asking the following questions:
- · Utility: How useful is Agent Laboratory for assisting your research?
- · Continuation: How likely are you to continue using Agent Laboratory for research?
- · Satisfaction: How much did you enjoy using Agent Laboratory?
- · Usability: How easy was it for you to build a project using Agent Laboratory?
The result of answering each question is a score from 1-5, where 1 indicates the lowest agreement and 5 indicates the highest. We find that the overall scores across all experiments are 3.5/5 for utility, 3.75/5 for continuation, 3.63/5 for satisfaction, and 4.0/5 for usability (Figure 7). We also delineate average scores based on custom and preselected topics. For custom experiments, we find overall scores of 3.75/5 for utility, 4.0/5 for continuation, 3.75/5 for satisfaction, and 3.75/5 for usability. For preselected topics, we find overall scores of 3.25/5 for utility, 3.5/5 for continuation, 3.5/5 for satisfaction, and 4.25/5 for usability. Ratings for preselected topics are lower across all measures compared with custom topics, except for usability, which was 0.5 points higher. From preselected to custom, utility and continuation increased by +0.5 points and satisfaction increased by +0.25 points.
We also evaluated across the same questions reported in Section 4.1. We report an average experimental quality rating of 2.38/5, a report quality rating of 3.13/5, and a usefulness rating of
Figure 7 | Co-pilot evaluation: quality evaluation of Agent Laboratory, average self-evaluated NeurIPS scores, and average external-evaluated NeurIPS scores.
3.75/5. We find higher scores for custom topics in report quality, with a rating of 3.5/5 (+0.75), and usefulness, with a rating of 4.0/5 (+0.5). For experiment quality, we find that preselected topics score +0.25 points higher at 2.5/5. Scores across all metrics were rated lower compared with the corresponding o1-mini autonomous evaluation results. While report quality was rated only -0.07 points lower, usefulness was rated -0.55 points lower and experiment quality -0.82 points lower.
Finally, we included an optional question for participants to provide feedback, asking: "How could Agent Laboratory be improved for your research?" For both custom and preselected topics we received a 75% response rate. From this feedback, there were suggestions for improving the Agent Laboratory interface (e.g., adding a GUI, better inspection of intermediate results), adding the option to incorporate more figures into the paper, and improving the literature review phase. We find that, when compared to reviews of Agent Laboratory in autonomous mode from Section 4.1, human co-pilots rated report quality, usefulness, and experiment quality lower. From feedback provided by researchers, we find the reduction in scores is due to difficulty guiding the agents to execute their exact vision for the project. We discuss these limitations in greater detail in Section 5.
## 4.2.2. Evaluation of co-pilot generated papers
To assess the quality of papers generated by Agent Laboratory in co-pilot mode, we conduct evaluations using two approaches: (1) researchers self-assessed their generated papers based on NeurIPS-style criteria, and (2) external researchers provided evaluations of the same papers. This section aims to understand differences in scores from self-assessment and external assessment, as well as how assessments compare to Agent Laboratory in fully autonomous mode. We use the same NeurIPS criteria introduced in Section 4.1.1.
Self-evaluation. From the results of the self-evaluation (Figure 7), we found that the average overall score increased relative to evaluations of papers generated in autonomous mode, with autonomous papers having an overall average of 3.8/10 and co-pilot papers 4.13/10 (+0.33). These scores even exceed those of the best autonomous backend, o1-preview, which averaged 4.0/10. Across individual criteria, scores increased for quality (+0.13), clarity (+0.48), soundness (+0.35), and presentation (+0.33), but decreased for significance (-0.3) and contribution (-0.1).
External evaluation. We compare scores provided through self-evaluation with those provided by a set of external evaluators on the same papers (Figure 7). We find that average scores across most criteria, including quality, significance, clarity, soundness, presentation, and contribution, show an improvement in the external assessments, with an overall average of 4.38/10, up from 4.13/10 in self-evaluations. The most significant improvements were observed in quality (+0.62), significance (+0.25), and overall (+0.25) scores, suggesting that external reviewers perceived the generated papers to be higher quality and more significant than the researchers who produced them. However, clarity scores decreased (-0.25), indicating potential issues in the articulation of ideas that might have been overlooked during self-assessment. While presentation scores did not improve (+0.0), soundness (+0.13) and contribution (+0.13) only increased slightly.
Notably, the external evaluations also reinforce differences between scores on preselected and custom topics. Unlike with the self-evaluated papers, papers on preselected topics were rated slightly higher overall, with improvements observed across several metrics, particularly in quality (+0.5) and significance (+0.5). These findings suggest that self-evaluating reviewers perceive the work produced on their custom topic as higher quality compared to the work produced on preselected topics, whereas external evaluators find the opposite to be true.
Comparison with autonomous mode. Comparing scores by external evaluators on autonomous and co-pilot papers (Figure 7), we find that the largest improvements were seen for quality, which increased by +0.75, soundness, which improved by +0.48, and the overall score, which improved by +0.58. Moderate gains were also observed in clarity (+0.23) and presentation (+0.33). In contrast, some metrics showed minimal or no improvement. Significance declined slightly (-0.05), and contribution increased only marginally (+0.03). Our results suggest that papers generated with human involvement are overall evaluated more highly than autonomously generated papers, with much of the focus of human involvement going toward making the paper more presentable (presentation and clarity) and less emphasis on improving experimental results (significance and contribution). Finally, we note that co-pilot overall scores, which average 4.38, are still -1.45 points below the average score of 5.85 for an accepted paper at NeurIPS 2024. Raising the overall score to match conference standards will likely require improving the contribution and significance of the paper results, which are consistently rated lower than other evaluation metrics.
## 4.3. Runtime statistics
Runtime statistics for Agent Laboratory are detailed to provide insight into the computational efficiency and monetary costs associated with different phases of its workflow. In this evaluation, both the time required per phase (measured in seconds) and the costs incurred (calculated in USD) were analyzed to better understand the performance of three model backends: gpt-4o, o1-mini, and o1-preview. These measurements were recorded for each subtask, including Literature Review, Plan Formulation, Data Preparation, Running Experiments, Results Interpretation, Report Writing, and Report Refinement.
Figure 8 | Performance and Cost Evaluation. This table summarizes the runtime statistics, cost, and success rates of Agent Laboratory across its workflow phases using three different model backends: gpt-4o, o1-mini, and o1-preview. The metrics include average cost per phase (in USD), average time per phase (in seconds), and success rates for each phase.
Inference time. Across all models, gpt-4o exhibited the fastest execution times, completing the entire workflow in 1165.4 seconds, approximately 3.2x faster than o1-mini and 5.3x faster than o1-preview, which required 3616.8 seconds and 6201.3 seconds, respectively. In most subtasks, gpt-4o demonstrated superior speed, particularly in the Running Experiments and Report Writing phases, where its times were significantly shorter than those of o1-mini and o1-preview. For instance, in Running Experiments, gpt-4o averaged 417.8 seconds, while o1-mini and o1-preview took 2082.5 seconds and 4036.2 seconds, respectively. Similarly, for Report Writing, gpt-4o completed the task in 572.5 seconds, compared to 827.7 seconds for o1-mini and 1854.2 seconds for o1-preview.
Inference cost. Monetary costs per workflow were also substantially lower for gpt-4o, which averaged just $2.33 for the entire process. This is significantly more cost-effective than previous autonomous research workflows (Lu et al. (2024b)), which cost around $15 (6.4x more expensive) to complete using gpt-4o. Other models in our workflow had lower cost efficiency, such as o1-mini at $7.51 and o1-preview at $13.10, the latter being over 5.6x more expensive than gpt-4o. Among the individual subtasks, gpt-4o consistently had the lowest costs. For example, its costs for Data Preparation and Report Writing were $0.09 and $1.73, respectively, compared to $3.03 and $2.58 for o1-mini, and $0.30 and $9.58 for o1-preview.
Figure 9 | Average score of four methods (MLAB, OpenHands, AIDE, and mle-solver) on a subset of MLE-Bench.
Phase-level Observations. At the phase level, the Literature Review was notably efficient for all models in terms of time and cost, with gpt-4o completing it in 92.9 seconds at a cost of $0.12. Meanwhile, o1-mini completed this phase faster (56.8 seconds) but at a slightly higher cost ($0.16). For Plan Formulation, gpt-4o was both the fastest (23.3 seconds) and the cheapest ($0.03), followed closely by o1-preview in cost ($0.04) but not in speed (33.1 seconds). The most expensive phase across models was Report Writing, where costs were driven by the increased computational resources required to write a long document. o1-preview incurred particularly high costs in this phase ($9.58) despite producing comparable outputs in terms of task success rates.
Success Rates. Overall, every model exhibits reasonably high reliability, with o1-preview achieving the highest average subtask success rate (95.7%) for the entire workflow. gpt-4o and o1-mini followed closely at 94.3% and 92.8%. While most tasks had a 100% success rate for each model, the literature review phase had the lowest success rates, at 60%, 70%, and 80% for gpt-4o, o1-mini, and o1-preview, respectively. The Data Preparation phase showed minor challenges, with o1-mini recording an 80% success rate, compared to a 100% success rate for gpt-4o and 90% for o1-preview.
## 4.4. Evaluating mle-solver on MLE-Bench
Evaluating the entire Agent Laboratory workflow does not reveal much about the ability of mle-solver specifically to solve individual ML problems. In order to evaluate mle-solver more objectively, we use a subset of 10 ML challenges from MLE-Bench (Chan et al. (2024)). MLE-Bench is a benchmark designed to assess the capability of agents in handling real-world ML tasks on Kaggle competitions. This benchmark compares agent performance with human baselines, scoring agents with Kaggle's medal system and incorporating mechanisms to mitigate contamination and plagiarism risks. We include all challenges focusing on text and tabular data from the low complexity category of MLE-Bench. We provide as input to mle-solver the following: the Kaggle dataset description, distilled knowledge from Kaggle notebooks, and an accessible train and dev set. Instead of using an LLM scoring function, the mle-solver score is evaluated on the dev set, which is a 20% random sample taken from the original training set, with the remaining 80% split serving as the training set. All data (dev, test, train) is placed into arrays using the numpy library instead of providing file locations, in order to better emulate the data preparation phase. Once all mle-solver steps have concluded, the final code with the highest score is evaluated on the actual Kaggle test set and a benchmark score is recorded.
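Under the stated setup, the dev-set construction might look like the sketch below, with synthetic arrays standing in for Kaggle data:

```python
# Sketch of the 80/20 train/dev split described above, handed to mle-solver
# as numpy arrays rather than file paths. Shapes are purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))          # stand-in Kaggle training features
y = rng.integers(0, 2, size=1000)        # stand-in labels

idx = rng.permutation(len(X))
n_dev = int(0.2 * len(X))                # 20% random sample -> dev split
dev_idx, train_idx = idx[:n_dev], idx[n_dev:]
X_dev, y_dev = X[dev_idx], y[dev_idx]
X_train, y_train = X[train_idx], y[train_idx]
print(X_train.shape, X_dev.shape)        # (800, 16) (200, 16)
```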
We compare average scores across several runs from three other methods: MLAB (Huang et al. (2024), gpt-4o backend), OpenHands (Wang et al. (2024b), gpt-4o backend), and AIDE (Schmidt et al. (2024), o1-preview backend). While mle-solver submitted valid solutions for all MLE-Bench challenges within two hours, prior methods often failed to submit, complicating scoring. We thus calculated average scores by excluding invalid submissions from other works and averaging valid ones. We find that Agent Laboratory's mle-solver is more consistently high-scoring than other solvers, with mle-solver obtaining four medals (two gold, one silver, and one bronze) compared with OpenHands (gpt-4o) obtaining two medals (two gold), AIDE (o1-preview) obtaining two medals (one gold, one bronze), and MLAB obtaining zero medals. Additionally, mle-solver obtained above-median human performance on six out of ten benchmarks, with AIDE obtaining five out of ten, OpenHands two out of ten, and MLAB zero out of ten. A detailed overview is provided in Figure 9.
## 5. Limitations
While our results suggest that Agent Laboratory demonstrates strong performance as a research tool, we now turn to a discussion of limitations that could inform future work. While some of these are also limitations of LLMs themselves, others are not, and we nonetheless provide a thorough and critical discussion of our work. We hope that progress in autonomous research will address these limitations.
## 5.1. Workflow limitations
Challenges with self-evaluation. The paper-solver is evaluated for quality using LLM-emulated NeurIPS reviewers. This has two limitations: (1) while the reviewing agents were shown to have high alignment with real reviewers (Lu et al. (2024b)), qualitatively, research reports from Agent Laboratory are less satisfying than research papers from The AI Scientist (Lu et al. (2024b)), with ours having lower quality figures, despite Agent Laboratory papers obtaining higher scores overall. (2) The research reports produced by Agent Laboratory are not meant to replace the paper-writing process done by humans as it was in The AI Scientist; rather, they are meant to provide a report for the human to understand what has been accomplished, so that they can scale up the experiment and write their own research report. However, we nonetheless use NeurIPS reviewer scores as the heuristic for the quality of our presented paper-solver, which evaluates the reports from the perspective of a complete research paper. Additionally, results contrasting with Lu et al. (2024b) demonstrate that LLMs perform less reliably for self-evaluation compared with human reviewers, with lower agreement scores (53.3% vs. 56.1%). Although LLMs demonstrate reasonable consistency, this may stem from reliance on superficial patterns rather than robust evaluation criteria, resulting in discrepancies between LLM and human rankings. This limits LLMs in subjective tasks like research idea evaluation, which is the foundation of mle-solver and paper-solver.
Challenges with automated structure. There are also limitations that arise from the structure enforced in the workflow. For example, paper-solver is encouraged to organize the paper into a relatively fixed structure (abstract, introduction, etc.), which disallows unique paper organizations and section orders. Another limitation is that mle-solver and paper-solver are limited to generating only two figures for the paper. This could be solved in future work by allowing all of the figures generated by the mle-solver (without restriction) to be incorporated into
paper-solver by detecting image files and providing those paths to the solver. Agent Laboratory is also not able to manage repository-level code on its own, but rather the appropriate files are provided to it at each necessary step and files are saved based on which phase produced the file. Enabling flexible repository-level file modification and execution is a clear next step for future work.
Challenges with hallucination. While uncommon, we also found that in some of the research papers, particularly from lower-performing models such as gpt-4o, there were hallucinations regarding experimental results that did not occur, such as the following example from a gpt-4o paper on the topic Are image transformers more or less sensitive to noise than convolutional networks? : "Hyperparameter optimization played a crucial role in achieving these results. The learning rate was set at 0.001, with a batch size of 32, and the number of reasoning steps $L = \{l_1, l_2, ..., l_n\}$ varied between 5 and 10, depending on the complexity of the query. The model was trained over 50 epochs, with early stopping criteria applied to prevent overfitting." While the issue of hallucination is more generally a problem with LLMs themselves, future work must appropriately address these challenges in order to prevent misinformation from being propagated when using automated research tools.
## 5.2. Common failure modes
In addition to the limitations outlined in Section 5.1, we also outline common failure modes observed during the runtime of Agent Laboratory . We report a list of the most common failure modes observed below:
- · Many of the more capable models (gpt-4o, o1-mini, o1-preview) struggled with instruction-following during the literature review phase, and had a tendency to repeatedly use the summarize command until the maximum number of phase steps was reached, leading to termination.
- · Papers retrieved during the literature review phase were observed to exceed the maximum token limit for some models.
- · When generating figures for the paper using mle-solver, the figure legends, titles, or axis labels were often missing or malformed.
- · mle-solver has a tendency to edit line 0 more than other lines in the code, causing the REPLACE command to more often lead to successful code compiles.
- · Experiments run by mle-solver sometimes obtain 0% accuracy for all tested methods which is not corrected by the agent by the time mle-solver runs out of solving steps.
- · Printed output from the data preparation or experimental results can lead to the LLMs reaching their token limit.
- · mle-solver often generated the python exit() command, which terminated the entire process. This had to be detected and removed manually.
- · mle-solver has been observed to run system commands on the host computer using the subprocess.run() command. While nothing problematic has been observed, safeguards should be implemented around this (a possible safeguard is sketched after this list).
- · paper-solver often struggles to search for relevant papers using the arXiv engine. Before a retry limit was enforced, it could take up to 100 tries for a search query to return any papers. A limit of 5 tries was imposed thereafter to prevent this cycle.
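One possible safeguard for the `exit()` and `subprocess.run()` failure modes above is to statically inspect generated code before execution. The sketch below is a suggested mitigation using the standard-library `ast` module, not code from Agent Laboratory:

```python
# Suggested safeguard: reject generated code that calls exit() (which would
# terminate the host process) or shells out via subprocess.run().
import ast

def sanitize(source: str) -> str:
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            # Bare exit() call, e.g. `exit()`
            if isinstance(node.func, ast.Name) and node.func.id == "exit":
                raise ValueError("generated code calls exit(); rejecting")
            # Attribute call subprocess.run(...)
            if (isinstance(node.func, ast.Attribute)
                    and node.func.attr == "run"
                    and isinstance(node.func.value, ast.Name)
                    and node.func.value.id == "subprocess"):
                raise ValueError("generated code shells out via subprocess.run")
    return source

print(sanitize("x = 1\nprint(x)"))  # passes the checks unchanged
```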
## 5.3. Ethical considerations
Agent Laboratory offers potential to accelerate the field of machine learning research by automating time-intensive tasks and enabling researchers to focus on ideation and experimental design. However, its capabilities also bring ethical challenges that require careful consideration. The ability
to autonomously generate research code, reports, and experiment plans may inadvertently lower the barriers to producing substandard or misleading scientific outputs. This could overwhelm peer review systems and jeopardize the integrity of academic discourse. Furthermore, the automated processes may reflect or even amplify biases inherent in the underlying datasets or algorithms, leading to skewed outcomes in research findings. Transparent disclosure of AI involvement in research outputs is important in order to mitigate such risks and maintain accountability.
There are additional concerns about potential misuse of Agent Laboratory for unethical purposes, such as developing harmful technologies or generating content that bypasses ethical oversight. For instance, the misuse of autonomous research agents in fields like cybersecurity could lead to the automated creation of malware (Begou et al. (2023); Francia et al. (2024); Happe & Cito (2023); Xu et al. (2024)) or in environmental studies, it may generate biased analyses that downplay climate risks or overstate the benefits of certain interventions. Moreover, as the platform matures, the risk of its misuse increases if safeguards are not implemented to ensure alignment with ethical research standards (Jiao et al. (2024); Watkins (2024)). Thus, while Agent Laboratory demonstrates immense promise for accelerating scientific discovery, there is a need for robust governance mechanisms to ensure that the underlying LLMs produce content that aligns with ethical principles and societal values.
## 6. Discussion
In this paper, we introduce Agent Laboratory , an open-source LLM agent framework for accelerating the individual's ability to perform research in machine learning. Unlike fully automated research pipelines that attempt to conceive their own research directions, Agent Laboratory is designed as a co-pilot, enabling a more human-centric mode of scientific exploration. Because of this, we present results from human-centered experiments. Our initial evaluations focused on the quality of generated papers in autonomous mode, assessing human evaluations of experimental and report quality, usefulness, as well as reviewer scores based on standard academic criteria across different language models. We also assessed the effectiveness of Agent Laboratory in co-pilot mode, comparing its performance with autonomous mode, receiving positive feedback from researchers.
The findings of this work highlight the variability in performance across LLM backends, with the o1-preview model being rated most useful, while o1-mini demonstrated the highest experimental quality. Autonomous mode outputs, although generally well received, revealed gaps when evaluated against human expectations for high-quality research papers, particularly in terms of clarity and soundness. We also find that automated reviewer scores do not predict human reviewer scores, demonstrating the importance of human evaluations in automated research. Integrating human feedback in co-pilot mode produced overall higher-quality outputs than autonomous mode, with higher scores across most metrics. The co-pilot feature in Agent Laboratory is overall found to have high utility and usability when rated by human users, with most participants deciding to continue usage after their experience. Runtime and cost analyses demonstrated the efficiency of the framework, with the gpt-4o backend offering the fastest execution and lowest costs. Finally, evaluation of the mle-solver on MLE-Bench demonstrates improved ability to solve general ML problems over previous methods.
Agent Laboratory builds upon an emerging trend in the use of language agents for science, where previous works have shown the potential of LLMs to generate research ideas (Baek et al. (2024); Li et al. (2024a); Si et al. (2024)), implement machine learning projects (Chan et al. (2024); Huang et al. (2024); Jing et al. (2024)), and even produce scientific papers (Lu et al. (2024b)). While many of these prior efforts leverage LLMs as tools to be applied at discrete stages, Agent Laboratory integrates these processes into a single, continuous pipeline that can scale and adapt to
the researcher's desired level of interaction and compute availability. This allows human researchers to focus more on conceptual design and critical thinking, while Agent Laboratory handles more tedious tasks, such as preprocessing data and coding.
We overcome the limitations of prior work, such as The AI Scientist (Lu et al. (2024b)) which does not have human-computer interaction, Virtual Lab (Swanson et al. (2024)) which does not have access to up-to-date knowledge, does not generate research papers, and was only demonstrated for nanobody design, as well as ChemCrow (M. Bran et al. (2024)) and Coscientist (Boiko et al. (2023)) which cannot solve open-ended research problems. However, as was outlined in Limitations (Section 5), there are many areas for improvement in our approach which can be addressed in future work.
A valuable direction for future research could involve a longitudinal study comparing researchers' outcomes when conducting studies with and without Agent Laboratory , as the human evaluations in this work provide only a snapshot of its utility. Studies of this kind have been conducted with other workflow automation tools, such as GitHub Copilot (Dohmke et al. (2023); Ziegler et al. (2024)), and have demonstrated promising potential for improving productivity. Such a study would help to better understand the long-term impact of Agent Laboratory on research efficiency and its role in improving scientific discovery. It may also be worth exploring automatic agent workflow (Hong et al. (2023); Li et al. (2024c); Zhuge et al. (2024)) and agent generation techniques (Chen et al. (2023a); Hu et al. (2024a)) to optimize the Agent Laboratory workflow.
Conclusion In conclusion, Agent Laboratory stands as a promising step toward more efficient, human-centered research workflows that leverage the power of LLMs. By integrating specialized autonomous agents guided by human oversight, our approach can help researchers spend less time on repetitive tasks and more time on the creative, conceptual aspects of their work. We hope that Agent Laboratory may ultimately serve as a tool to enable scientific discovery.
## OS Agents: A Survey on MLLM-based Agents for General Computing Devices Use
https://os-agent-survey.github.io/
## Abstract
The dream to create AI assistants as capable and versatile as the fictional J.A.R.V.I.S from Iron Man has long captivated imaginations. With the evolution of (multimodal) large language models ((M)LLMs), this dream is closer to reality, as (M)LLM-based Agents using computing devices (e.g., computers and mobile phones) by operating within the environments and interfaces (e.g., Graphical User Interface (GUI)) provided by operating systems (OS) to automate tasks have significantly advanced. This paper presents a comprehensive survey of these advanced agents, designated as OS Agents . We begin by elucidating the fundamentals of OS Agents, exploring their key components including the environment, observation space, and action space, and outlining essential capabilities such as understanding, planning, and grounding. We then examine methodologies for constructing OS Agents, focusing on domain-specific foundation models and agent frameworks. A detailed review of evaluation protocols and benchmarks highlights how OS Agents are assessed across diverse tasks. Finally, we discuss current challenges and identify promising directions for future research, including safety and privacy, personalization and self-evolution. This survey aims to consolidate the state of OS Agents research, providing insights to guide both academic inquiry and industrial development. An open-source GitHub repository is maintained as a dynamic resource to foster further innovation in this field.
## Contents
Figure 1: The representative commercial products and academic research related to OS Agents. Part of the materials used in this figure are adapted from this repo.
## 1 Introduction
Building a superintelligent AI assistant akin to J.A.R.V.I.S. 1 from the Marvel movie Iron Man, which assists Tony Stark in controlling various systems and automating tasks, has long been a human aspiration. These entities are recognized as Operating System Agents (OS Agents), as they use computing devices (e.g., computers and mobile phones) by operating within the environments and interfaces (e.g., Graphical User Interface (GUI)) provided by operating systems (OS). OS Agents can complete tasks autonomously and have the potential to significantly enhance the lives of billions of users worldwide. Imagine a world where tasks such as online shopping, booking travel arrangements, and other daily activities could be seamlessly performed by these agents, thereby substantially increasing efficiency and productivity. In the past, virtual assistants such as Siri [Inc., 2024], Cortana [Research, 2024], Amazon Alexa [Amazon, 2024] and Google Assistant [Google, 2024] have already offered glimpses into this potential, but limitations in model capabilities, such as contextual understanding [Tulshan and Dhage, 2019], have prevented these products from achieving widespread adoption and full functionality.
Fortunately, recent advancements in (multimodal) large language models ((M)LLMs), such as Gemini [Google], GPT [OpenAI], Grok [xAI], Yi [01.AI] and Claude [Anthropic] series 2 have ushered in a new era of possibilities for OS Agents. These models boast remarkable abilities, enabling OS Agents to better understand complex tasks and use computing devices to execute them. As illustrated in Figure 1, there has been a surge of OS Agents in both commercial products and academic research. Notable examples include the recently released Computer Use by Anthropic [Anthropic, 2024a], Apple Intelligence by Apple [Apple, 2024], AutoGLM by Zhipu AI [Liu et al., 2024a] and Project Mariner by Google Deepmind [DeepMind, 2024]. For instance, Computer Use leverages Claude [Anthropic, 2024b] to interact directly with users' computers, aiming for seamless task automation. In the research community, a variety of works have been proposed to build (M)LLM-based OS Agents [Gur et al., 2023, You et al., 2025, Gou et al., 2024, Meng et al., 2024, Chen et al., 2024a, Wu et al., 2024a, Zhang et al., 2023a, Yan et al., 2023, Ma et al., 2023, Zhang et al., 2024a, He et al., 2024a, Wang and Liu, 2024]. For instance, Wu et al. [2024a] proposes OS-Atlas, a foundational GUI action model that significantly improves GUI grounding and Out-Of-Distribution task performance by synthesizing GUI grounding data across various platforms. OS-Copilot [Wu et al., 2024b] is an agent framework crafted to develop generalist agents that automate broad computer tasks, demonstrating robust generalization and self-improvement across diverse applications with minimal supervision.
In this survey, we begin by discussing the fundamentals of OS Agents (§2), starting with a definition of what constitutes an OS Agent. As illustrated in Figure 3, we focus on three key components: the environment, the observation space, and the action space (§2.1). We then outline the essential capabilities OS Agents should possess, including understanding, planning, and grounding (§2.2). Next, we explore two critical aspects of constructing OS Agents (§3): (1) the development of domainspecific foundation models, covering areas such as architectural design, pre-training, supervised fine-tuning, and reinforcement learning (§3.1); and (2) the building of effective agent frameworks around these models, addressing core elements including perception, planning, memory, and action (§3.2). We also review the evaluation protocol (§4.1) and benchmarks (§4.2) commonly used to assess the performance of OS Agents. Finally, we discuss the challenges and future directions for OS Agents (§5), with a particular focus on issues related to safety and privacy (§5.1), as well as personalization and self-evolution (§5.2).
This survey aims to contribute to the research and development of OS Agents by providing readers with a comprehensive understanding of their essential capabilities, offering insights into methodologies for building OS Agents based on (M)LLMs, and highlighting the latest research trends, challenges, and future directions in this field. Recognizing that OS Agents are still in their early stages of development, we acknowledge the rapid advancements that continue to introduce novel methodologies and applications. To support ongoing developments, we maintain an open-source GitHub repository as a dynamic resource. Through this work, we aspire to inspire further innovation, driving progress in both academic research and industrial applications of OS Agents.
## 2 Fundamental of OS Agents
OS Agents are specialized AI agents that leverage the environment, input, and output interfaces provided by the operating system to generally use computing devices in response to user-defined goals. These agents are designed to automate tasks executed within the operating system, leveraging the exceptional understanding and generative capabilities of (M)LLMs to enhance user experience and operational efficiency. To achieve this, OS Agents are built on three key components: Environment, Observation Space, and Action Space, which together facilitate the agent's effective engagement with the operating system. Additionally, OS Agents necessitate three core capabilities: Understanding, Planning, and Grounding. These capabilities enable them to sequentially comprehend tasks, devise action strategies, and implement these actions effectively within the environment.
Figure 2: Fundamentals of OS Agents.
## 2.1 Key Component
Environment. The environment for OS Agents refers to the system or platform in which they operate. This can include desktop [Gao et al., 2023, Bonatti et al., 2024, Kapoor et al., 2024], mobile [Venkatesh et al., 2022, Rawles et al., 2024a, Li et al., 2024a, Bishop et al., 2024, Xing et al., 2024] or web [Shi et al., 2017, Yao et al., 2022, Koh et al., 2024a, Lù et al., 2024, Drouin et al., 2024, Lee et al., 2024a]. OS Agents interact with these diverse environments to perform tasks, gather feedback, and adapt to their unique characteristics. These environments encompass a diverse set of tasks, ranging from simple interactions such as information retrieval to complex multi-step operations, requiring agents to perform planning and reasoning across multiple interfaces, significantly increasing the complexity and posing challenges for OS Agents. We refer readers to §4.2 for detailed discussion.
Observation Space. The observation space encompasses the information OS Agents can access about the system's state and user activities. These observations guide the agents in comprehending the environment, making informed decisions, and determining the appropriate actions to achieve user-defined goals. Observation includes capturing outputs from the OS, such as screen images [Yan et al., 2023, Zhang and Zhang, 2023, Zhang et al., 2024a, Hoscilowicz et al., 2024] with specific processing [Zhang et al., 2023a, He et al., 2024a, Fu et al., 2024], or textual data, such as descriptions of the screen [Gao et al., 2023, Wu et al., 2024b] and HTML code [Ma et al., 2023, Zheng et al., 2024a] in web-based contexts. Multimodal input that integrates these diverse data structures introduces significant challenges for agents to effectively understand and execute tasks. Further details are elaborated in §3.2.1.
Action Space. The action space defines the set of interactions through which OS Agents manipulate the environment using the input interfaces provided by the operating system. These actions can be broadly categorized into input operations [Sun et al., 2022, Zhang et al., 2023a, Gao et al., 2023], which represent the primary methods of interacting with digital interfaces; navigation operations [Yan et al., 2023, Song et al., 2024, He et al., 2024b], which facilitate movement across the system's interface; and extended operations, such as utilizing external tools or services [Wu et al., 2024b, Mei et al., 2024]. These actions enable OS Agents to execute tasks, control applications, and automate workflows effectively. A comprehensive discussion can be found in §3.2.4.
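To make these categories concrete, the following sketch models a minimal action space as Python dataclasses with a dispatch stub; the action names, parameters, and dispatch logic are illustrative assumptions rather than any surveyed framework's actual interface.

```python
from dataclasses import dataclass

# Illustrative action space for an OS Agent; names and parameters are
# assumptions for exposition, not any specific framework's API.

@dataclass
class Click:        # input operation: interact with a screen element
    x: int
    y: int

@dataclass
class TypeText:     # input operation: enter text via the keyboard
    text: str

@dataclass
class Scroll:       # navigation operation: move the viewport
    direction: str  # e.g. "up" or "down"

@dataclass
class CallAPI:      # extended operation: invoke an external tool or service
    name: str
    arguments: dict

Action = Click | TypeText | Scroll | CallAPI  # requires Python 3.10+

def execute(action: Action) -> None:
    """Dispatch an action to the underlying OS interface (stubbed here)."""
    match action:
        case Click(x=x, y=y):
            print(f"click at ({x}, {y})")
        case TypeText(text=t):
            print(f"type {t!r}")
        case Scroll(direction=d):
            print(f"scroll {d}")
        case CallAPI(name=n, arguments=a):
            print(f"call tool {n} with {a}")

execute(Click(x=540, y=960))  # e.g. tap the center of a 1080x1920 screen
```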
## 2.2 Capabilities
Understanding. A crucial capability of OS Agents is their ability to comprehend complex OS environments. These environments encompass a diverse array of data formats, including HTML code [Gur et al., 2023, Lai et al., 2024] and graphical user interfaces captured in screenshots [Nong et al., 2024, Wu et al., 2024a]. The complexity escalates with lengthy code containing sparse information,
high-resolution interfaces cluttered with minuscule icons, small text, and densely packed elements [He et al., 2024a, Hong et al., 2024a, You et al., 2025]. Such environments challenge the agents' perceptual abilities and demand advanced contextual comprehension. This comprehension is essential not only for tasks aimed at information retrieval [Rawles et al., 2024a] but also serves as a fundamental prerequisite for effectively executing a broad spectrum of additional tasks.
Planning. Planning [Huang and Chang, 2023, Zhang et al., 2024b, Huang et al., 2024a] is a fundamental capability of OS Agents, enabling them to decompose complex tasks into manageable sub-tasks and devise sequences of actions to achieve specific goals [Wu et al., 2024b, Gao et al., 2023]. Planning within operating systems often requires agents to dynamically adjust plans based on environmental feedback and historical actions [Zhang and Zhang, 2023, Wang and Liu, 2024, Kim et al., 2024a]. Reasoning strategies like ReAct [Yao et al., 2023] and CoAT [Zhang et al., 2024a] are also necessary to ensure effective task execution in dynamic and unpredictable scenarios.
Grounding. Action grounding is another essential capability of OS Agents, referring to the ability to translate textual instructions or plans into executable actions within the operating environment [Zheng et al., 2024a, Wu et al., 2024a]. The agent must identify elements on the screen and provide the necessary parameters (e.g., coordinates, input values) to ensure successful execution. Because OS environments often contain numerous selectable elements and possible actions, the resulting complexity makes grounding tasks particularly challenging.
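As a toy illustration of this capability, the sketch below grounds a textual plan step to click coordinates by matching it against detected screen elements. The substring matching is a deliberately naive stand-in for the (M)LLM- or detector-based grounding used in practice, and all names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class UIElement:
    label: str                       # text or accessibility description
    bbox: tuple[int, int, int, int]  # (left, top, right, bottom) in pixels

def ground(step: str, elements: list[UIElement]) -> tuple[int, int] | None:
    """Return click coordinates for the element the plan step refers to,
    using naive substring matching; None signals a grounding failure."""
    for el in elements:
        if el.label.lower() in step.lower():
            left, top, right, bottom = el.bbox
            return ((left + right) // 2, (top + bottom) // 2)
    return None

elements = [UIElement("Settings", (40, 200, 160, 260)),
            UIElement("Search", (40, 300, 160, 360))]
print(ground("Open the Settings app", elements))  # -> (100, 230)
```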
## 3 Construction of OS Agents
In this section, we discuss effective strategies for constructing OS Agents. We begin by focusing on the development of foundation models tailored for OS Agents. Domain-specific foundation models [Roziere et al., 2023, Wu et al., 2023, Singhal et al., 2023, Xiao et al., 2021] can significantly enhance the performance of OS Agents by incorporating specialized knowledge and capabilities essential for interacting with operating systems. This can be achieved through thoughtful model architecture design and targeted training strategies that align with specific tasks in this domain. In addition, we explore the construction of agent frameworks [Chase, 2022, Significant Gravitas, Hong et al., 2024b, Hu et al., 2024a] that build upon these foundation models using non-tuning strategies. Techniques such as reasoning strategies and memory augmentation enable agents to accurately perceive their environment, generate effective plans, and execute precise actions without the need for fine-tuning. These approaches offer flexibility and efficiency, allowing OS Agents to generalize across diverse tasks and environments. By combining robust domain-specific foundation models with agent frameworks, we can further enhance the adaptability, reliability, and efficiency of OS Agents in automating complex tasks.
## 3.1 Foundation Model
The construction of foundation models for OS Agents involves two key components: model architecture and training strategies. The architecture defines how models handle input and output within OS environments, while training strategies equip models with the ability to complete complex tasks. As illustrated in Figure 3, the training strategies applied in constructing foundation models for OS Agents mainly include pre-training, supervised fine-tuning, and reinforcement learning. Table 1 summarizes the architectures and training strategies used in recent foundation models for OS Agents.
## 3.1.1 Architecture
A variety of architectures are employed to construct foundation models for OS Agents. It is common practice to build these models by leveraging existing open-source LLMs and MLLMs. Some architectures are created by concatenating LLMs with vision encoders, enabling the models to process both textual and visual information. Additionally, MLLMs are frequently adapted by incorporating supplementary modules to address specific requirements such as high-resolution image understanding.
Existing LLMs. The architecture of existing LLMs can already process user instructions and read HTML code to perceive information contained in user interfaces. Therefore, several works [Liu et al., 2024a, Lai et al., 2024, Patel et al., 2024] directly chose open-source LLMs as backbone
Table 1: Recent foundation models for OS Agents. Arch.: Architecture, Exist.: Existing, Mod.: Modified, Concat.: Concatenated, PT: Pre-Train, SFT: Supervised Fine-Tune, RL: Reinforcement Learning.
Figure 3: Summary of the content about foundation models for OS Agents in §3.1.
models, without further architectural optimization, to develop foundation models for OS Agents, where T5 [Fereidouni et al., 2024, Furuta et al., 2024] and LLaMA [Murty et al., 2024, Ou et al., 2024] are popular architectures. WebAgent [Gur et al., 2023] combines Flan-U-PaLM with HTML-T5, a finetuned version of Long-T5-base. HTML-T5 reads user instructions together with the HTML code of the user interface and the navigation history to produce a summary of the interface and a plan for completing the tasks specified in the instruction; this output is then processed by the Flan-U-PaLM instance, which generates executable Python code to carry out the user instructions.
Existing MLLMs. While LLMs are capable of handling OS tasks, an inescapable shortcoming is that they can only process textual input, whereas GUIs are designed for human users who directly perceive visual information to operate applications. To address this, MLLMs are introduced, which can process visual information while preserving strong natural language capabilities. Various works [Baechler et al., 2024, Chen et al., 2024a, Pawlowski et al., 2024] have shown that architectures of existing MLLMs such as LLaVA [Gou et al., 2024, Meng et al., 2024], Qwen-VL [Cheng et al., 2024a, Lu et al., 2024a, Wu et al., 2024d], InternVL [Wu et al., 2024a, Gao et al., 2024a], CogVLM [Zhang et al., 2024a, Xu et al., 2024a], etc., can be effective for developing foundation models for OS Agents.
Concatenated MLLMs. The typical architecture of an MLLM consists of an LLM and a vision encoder connected by an adapter network or a cross-attention module. Several works [Kil et al., 2024, Zhang et al., 2023b] have shown that choosing LLMs and vision encoders suited to OS tasks and concatenating them in a manner similar to existing MLLMs can be a more suitable approach for constructing foundation models for OS Agents. For instance, Furuta et al. [2023] and Thil et al. [2024] chose T5 as the LLM in the structure, whose encoder-decoder architecture better fits the tree structure of HTML, enabling the model to better process GUI information by perceiving both text and image forms of the GUI.
Modified MLLMs. Further adjustments have been made to MLLM architectures to enhance the understanding abilities of foundation models. For instance, most existing MLLMs can only process images of relatively low resolution, typically 224×224, while a common resolution of GUI screenshots is 720×1080. Resizing screenshots to fit the input resolution of MLLMs' vision encoders preserves features of the general layout and most objects, but text and small icons cannot be well perceived, even though they are sometimes vital for MLLMs to accomplish OS tasks. Several works have been proposed to enable MLLMs to perceive these features. CogAgent [Hong et al., 2024a] introduced an additional EVA-CLIP-L high-resolution vision encoder that accepts images of size 1120×1120, and added a cross-attention module to connect it with the original MLLM. Ferret-UI [You et al., 2025] applied the idea of any-resolution, where screenshot images are both resized to fit the vision encoder and partitioned into sub-images, enabling the model to perceive and process visual features at all granularities. MobileFlow [Nong et al., 2024] chose Qwen-VL as the backbone with a GUI encoder (LayoutLMv3) added to the original architecture, which extracts embeddings of both images and
OCR texts together with their positions. UI-Hawk [Zhang et al., 2024c] uses a vision encoder that applies a shape-adaptive cropping strategy to perceive details in the screenshot.
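The sketch below illustrates the any-resolution idea in the spirit of Ferret-UI, assuming Pillow is available: one downsized global view is combined with a grid of sub-image crops so that small text and icons survive the encoder's fixed input size. The grid shape and encoder resolution are illustrative choices, not the paper's exact configuration.

```python
from PIL import Image  # assumes Pillow is installed

def any_resolution_views(screenshot: Image.Image,
                         encoder_size: int = 224,
                         grid: tuple[int, int] = (2, 3)) -> list[Image.Image]:
    """One global view plus a grid of local crops, each resized to the
    vision encoder's native input size."""
    views = [screenshot.resize((encoder_size, encoder_size))]
    cols, rows = grid
    w, h = screenshot.size
    tile_w, tile_h = w // cols, h // rows
    for r in range(rows):
        for c in range(cols):
            box = (c * tile_w, r * tile_h, (c + 1) * tile_w, (r + 1) * tile_h)
            views.append(screenshot.crop(box).resize((encoder_size, encoder_size)))
    return views

shot = Image.new("RGB", (720, 1080))    # placeholder GUI screenshot
print(len(any_resolution_views(shot)))  # 1 global + 6 local views = 7
```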
## 3.1.2 Pre-training
Pre-training [Devlin, 2018, Brown, 2020, Dosovitskiy, 2020] lays the foundation for model construction and is extensively employed to enhance the foundation models for OS Agents by expanding their understanding of GUI and facilitating the acquisition of the inherent correlations between visual and textual information. To achieve this, most existing pre-training approaches utilize continual pretraining from general pre-trained models with substantial textual or visual comprehension capabilities. This strategy leverages the established knowledge within these pre-trained models, thereby enhancing their performance on GUI-related tasks. One exception is Gur et al. [2023], who trained their model from scratch, focusing specifically on parsing HTML text without incorporating the visual modality. To provide a comprehensive overview of their impact on the development of foundation models for OS Agents, data sources and tasks in pre-training will be discussed in the following.
Data source. (1) Publicly available data. Some studies leverage publicly available datasets to quickly obtain large-scale data for pre-training. Specifically, Gur et al. [2023] crawled and filtered web data to extract GUI-related information: they utilized CommonCrawl to acquire HTML documents, removed those with non-unicode or purely alphanumeric content, and extracted subtrees around '<label>' elements to train HTML-T5, a model capable of providing executable instructions. Similarly, Nong et al. [2024] employed Flickr30K for modality alignment, enhancing the model's semantic understanding of images. However, relying solely on publicly available data for pre-training is insufficient to address the complex and diverse tasks required by OS Agents [Gou et al., 2024]. (2) Synthetic data. Consequently, researchers incorporate synthetic data into the pre-training process, inspired by the real-world application scenarios of OS Agents. Cheng et al. [2024a] extract visible text elements, their positions, and instructions from web HTML data to build grounding and OCR task data, while Chen et al. [2024b] rendered entire websites after acquiring webpage links, segmented them into 1920×1080 screenshots, and extracted features, thereby enriching the diversity of web data. Some studies [Wu et al., 2024a] have noted that, although similarities exist between different GUI platforms, pre-training solely on web data struggles to generalize across platforms. To address this, they created multiple simulated environments and utilized accessibility (A11y) trees to simulate human-computer interaction, sampling cross-platform grounding data. Additionally, Wu et al. [2024c] proposed a data collection algorithm that simulates human interaction with smartphones by iteratively interacting with every element on each GUI page; this process represents the results as directed graphs and yielded a dataset containing over 3 million real GUI interaction samples.
Task. (1) Screen grounding. Many studies have demonstrated that pre-training enables models to extract 2D coordinates or bounding boxes of target elements from images based on textual descriptions [Wu et al., 2024a, Baechler et al., 2024, Pawlowski et al., 2024, Hong et al., 2024a, Wu et al., 2024c, Chen et al., 2024b, Zhang et al., 2024c, Lin et al., 2024]. In addition, Cheng et al. [2024a], Lin et al. [2024] extended text-based grounding tasks by incorporating requirements for predicting text from center point coordinates and bounding boxes into the pre-training stage. (2) Screen understanding. Several studies posit that the foundation models for OS Agents should be capable of extracting semantic information from images, as well as analyzing and interpreting the entire content of the image. Wu et al. [2024a] emphasized that pre-training should equip MLLMs with the knowledge to understand GUI screenshots and identify elements on the screen. Furthermore, Baechler et al. [2024], Zhang et al. [2024c] proposed screen question-answering as a task, where the former designed datasets targeting tasks involving counting, arithmetic operations, and interpreting complex data in charts. (3) Optical Character Recognition (OCR). OCR plays a crucial role in handling GUI elements that contain textual content. Hong et al. [2024a] constructed training data during the pre-training stage by using Paddle-OCR to extract text and bounding boxes from GUI screenshots, and validated the model's superior OCR capabilities on the TextVQA benchmark. Lin et al. [2024] identified the capabilities of OCR as a critical evaluation criterion for constructing foundation models.
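To illustrate what such pre-training records might look like, the sketch below defines a minimal screen-grounding example pairing a textual reference with a target bounding box; the field names and pixel-coordinate convention are assumptions for exposition, and the inverse (OCR-style) task simply swaps input and target.

```python
from dataclasses import dataclass

@dataclass
class GroundingExample:
    """Hypothetical screen-grounding record: given the screenshot and the
    textual reference, the model learns to predict the bounding box; the
    inverse OCR-style task predicts the text from the coordinates."""
    screenshot_path: str
    reference: str                   # e.g. "the 'Sign in' button"
    bbox: tuple[int, int, int, int]  # (left, top, right, bottom) in pixels

example = GroundingExample("screens/login_0001.png",
                           "the 'Sign in' button",
                           (312, 845, 408, 901))
print(example.reference, "->", example.bbox)
```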
## 3.1.3 Supervised Finetuning
Supervised Finetuning (SFT) has been widely adopted to enhance the planning and grounding capabilities of OS Agents. This requires efforts to collect domain-specific data to bridge the domain gap between tasks on natural images and GUIs [Hong et al., 2024a], which is thus the key challenge herein.
For planning, researchers first collect multi-step trajectories and synthesize instructions for them. Gao et al. [2024a] traverse apps with fixed rules as well as LLMs, where the latter are applied to handle certain predefined scenarios and cases that fixed rules fail to cover. Ou et al. [2024] use online tutorial articles to build trajectories, where descriptions of steps are mapped into agent actions with LLMs. Chen et al. [2024c] build directed graphs of navigation among webpages and find the shortest path in the graph to obtain trajectories when generating data for certain tasks. These trajectories are then fed into advanced large language models, such as GPT-4, to synthesize corresponding task instructions [Hong et al., 2024a, You et al., 2025] as well as Chain-of-Thought reasoning processes that decompose the tasks [Lai et al., 2024].
To synthesize data for grounding ability, researchers first connect actions on objects to GUI images and then synthesize instructions referring to them. A common strategy for drawing these connections is rendering the source code of GUIs. For example, Gou et al. [2024], Chen et al. [2024a], Liu et al. [2024b], Kil et al. [2024] render webpages with HTML, and Wu et al. [2024a], Baechler et al. [2024], Gao et al. [2024a], You et al. [2025] leverage desktop or mobile simulators. A few attempts also leverage GUI detection models [You et al., 2025, Zhang et al., 2024a]. Compared to simply learning to operate on the source code, learning to operate on its visual form can show superior performance thanks to the straightforward interaction with widgets [Kil et al., 2024]. Meanwhile, Meng et al. [2024] show that learning with GUI images helps avoid hallucination, and Liu et al. [2024b] demonstrate generalization to unseen GUIs. Then, to synthesize instructions referring to the widgets, Gou et al. [2024] summarize three typical expressions, namely referring to their salient visual features, locations, or functions. Notably, different GUIs may involve different action spaces; Wu et al. [2024a] therefore find it necessary to adapt action sequences from different sources to a unified action space so as to avoid conflicts among them during fine-tuning.
## 3.1.4 Reinforcement Learning
Reinforcement learning (RL) [Sutton, 2018] is a machine learning paradigm where agents learn optimal decision-making through interactions with an environment. By receiving feedback in the form of rewards, the agent iteratively refines its strategies to maximize cumulative rewards.
Early attempts [Liu et al., 2018, Shi et al., 2017, Gur et al., 2018, Jia et al., 2019, Shvo et al., 2021] utilized RL to train agents to accomplish tasks on web and mobile apps. We introduce several representative works as follows. Yao et al. [2022] introduced WebShop, a simulated e-commerce website environment, based on which they trained and evaluated a diverse range of agents using reinforcement learning, imitation learning, and pre-trained multimodal models. The reward is determined by how closely the purchased product matches the specific attributes and options mentioned in the user instructions. Reinforcement learning is typically combined with behavior cloning or supervised fine-tuning to enhance performance. For example, Humphreys et al. [2022] developed a scalable method using reinforcement learning and behavioral priors from human-computer interactions to control computers via keyboard and mouse, achieving human-level performance on the MiniWoB++ benchmark. Zhang et al. [2023b] developed a multimodal model for automating GUI tasks by grounding natural language instructions to GUI screenshots, using a pre-trained visual encoder and language decoder, with RL to enhance spatial decoding by supervising token sequences with visually semantic metrics.
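As a rough illustration of such attribute-matching rewards, the sketch below scores a purchase by the fraction of required attributes it satisfies; this is a simplified stand-in for exposition, not WebShop's exact reward function.

```python
def attribute_match_reward(purchased: dict, required: dict) -> float:
    """Reward in the spirit of WebShop: fraction of the attributes and
    options mentioned in the user instruction that the purchased product
    satisfies (1.0 only when every requirement is met)."""
    if not required:
        return 1.0
    matched = sum(1 for key, value in required.items()
                  if purchased.get(key) == value)
    return matched / len(required)

print(attribute_match_reward({"color": "red", "size": "M"},
                             {"color": "red", "size": "L"}))  # 0.5
```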
In the above RL-based works, large models generally function as feature extractors. More recently, research has progressed to the 'LLMs as agents' paradigm, where LLMs serve as policy models and reinforcement learning is applied to align the large models with the final objectives. Thil et al. [2024] improved web navigation in LLMs using the MiniWoB++ benchmark by fine-tuning T5-based models with hierarchical planning and then integrating these with a multimodal neural network, utilizing both supervised and reinforcement learning. Fereidouni et al. [2024] employ the Flan-T5 architecture and introduce training via reinforcement learning: they leveraged human demonstrations through behavior cloning and then further trained the agent with PPO. Liu et al. [2024a] followed the paradigm of LLMs as agents and proposed AutoGLM, foundation agents for autonomous control of computing devices through GUIs. They designed an intermediate interface that effectively disentangles planning and grounding behaviors, and developed a self-evolving online curriculum RL approach that enables robust error recovery and performance improvement. Feng et al. [2024] introduced a novel RL framework for LLM-based agents, AGILE, integrating LLMs, memory, tools, and executor modules; RL enables the LLM to predict actions and the executor to manage them, enhancing decision-making and interactions. Reinforcement learning has also been introduced to agents based on vision-only models [Shaw et al., 2023] and MLLMs [Bai et al., 2024, Wang et al., 2024a].
Table 2: Recent agent frameworks for OS Agents. TD: Textual Description, GS: GUI Screenshots, VG: Visual Grounding, SG: Semantic Grounding, DG: Dual Grounding, GL: Global, IT: Iterative, AE: Automated Exploration, EA: Experience-Augmented, MA: Management, IO: Input Operations, NO: Navigation Operations, EO: Extended Operations.
## 3.2 Agent Framework
OS Agent frameworks typically consist of four core components: Perception, Planning, Memory, and Action. The perception module collects and analyzes environmental information; the planning module handles task decomposition and action sequence generation; the memory module supports information storage and experience accumulation; and the action module executes specific operation instructions. As illustrated in Figure 4, these components work together to enable OS Agents to understand, plan, remember, and interact with operating systems. Table 2 summarizes the technical characteristics of recent OS Agent frameworks, including their specific implementations across these four core components.
Figure 4: Summary of the content about agent frameworks for OS Agents in §3.2.
## 3.2.1 Perception
Perception is the process through which OS Agents collect and analyze information from their environment. In OS Agents, the perception component needs to observe the current environment and extract relevant information to assist with the agents' planning, action, and memory optimization. Perception can be broadly categorized into two types based on the input modality as follows:
Textual Description of OS. Early works [Ma et al., 2023, Wang et al., 2023a, Lee et al., 2023a, Gao et al., 2023, Li et al., 2024c, Wu et al., 2024b, Lu et al., 2024b] are limited by the fact that LLMs could only process textual input. Therefore, they mainly rely on using tools to convert OS states into text descriptions.
To facilitate LLMs' understanding, these text descriptions are often represented in a structured format, such as HTML, DOM, or accessibility tree. For instance, MobileGPT [Lee et al., 2023a] converts mobile screens into a simplified HTML representation to help LLMs' comprehension. However, these approaches may generate irrelevant or redundant information, which can negatively impact the OS Agents' judgment of the environment and lead to incorrect actions. Therefore, some new approaches have been proposed to filter out invalid descriptions, ensuring that OS Agents only observe relevant information. For example, Agent-E [Abuelsaad et al., 2024] introduces a flexible DOM distillation approach that allows the agent to choose the most suitable DOM representation from three different implementations based on the specific task at hand. Li et al. [2023] only expands the HTML representation when the agent takes action, compelling it to make rational decisions with limited information. WebWise [Tao et al., 2023] introduces a filtering function filterDOM to select relevant DOM elements based on predefined 'tags' and 'classes,' filtering out unnecessary items.
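A minimal sketch of this style of DOM distillation is shown below, assuming beautifulsoup4 is available; the allow-lists of tags and classes are hypothetical, and systems such as Agent-E or WebWise use considerably more elaborate selection logic.

```python
from bs4 import BeautifulSoup  # assumes beautifulsoup4 is installed

KEEP_TAGS = {"a", "button", "input", "select", "textarea"}  # hypothetical allow-list
KEEP_CLASSES = {"menu-item", "submit"}                      # hypothetical allow-list

def filter_dom(html: str) -> list[str]:
    """Keep only elements whose tag or CSS class is on an allow-list,
    giving the agent a compact, task-relevant view of the page."""
    soup = BeautifulSoup(html, "html.parser")
    kept = []
    for el in soup.find_all(True):  # iterate over every tag
        classes = set(el.get("class") or [])
        if el.name in KEEP_TAGS or classes & KEEP_CLASSES:
            kept.append(str(el))
    return kept

html = '<div><p>ad banner</p><button class="submit">Buy now</button></div>'
print(filter_dom(html))  # ['<button class="submit">Buy now</button>']
```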
GUI Screenshot. The emergence of MLLMs enables OS Agents to process visual inputs. Research increasingly treats GUI screenshots as the perception input for OS Agents, which better aligns with human behavior. However, most existing vision encoders of OS Agents are pre-trained on general data, which makes OS Agents less sensitive to GUI elements. To enhance OS Agents' understanding and grounding ability without fine-tuning visual encoders, existing research focuses on using prompting techniques to describe GUI screenshots. These descriptions can generally be categorized into three types: (1) Visual description. Most research [Yan et al., 2023, Wang et al., 2024b] uses SoM prompting [Yang et al., 2023] to enhance OS Agents' visual grounding ability. They incorporate techniques like OCR and GUI element detection algorithms such as ICONNet [Sunkara et al., 2022] and Grounding DINO [Liu et al., 2024c] to extract bounding boxes of interactive elements, which are then integrated into corresponding image regions to enhance agents' understanding of GUI
screenshots. (2) Semantic description. Some studies improve OS Agents' semantic grounding ability by adding descriptions of these interactive elements. Specifically, SeeAct [Zheng et al., 2024a] enhances semantic grounding by using the HTML document of a website as the semantic reference for the GUI screenshot, thereby linking the visual elements with their corresponding semantic meaning in the HTML structure. (3) Dual grounding. Dual grounding combines both visual and semantic information to improve OS Agents' understanding of the visual environment. For instance, AppAgent [Zhang et al., 2023a] inputs a labeled screenshot along with an XML file that details the interactive elements to enhance agent understanding. OSCAR [Wang and Liu, 2024] introduces a dual-grounding observation approach, using a Windows API-generated A11Y tree for GUI component representation and adding descriptive labels for semantic grounding. PeriGuru [Fu et al., 2024] inputs a labeled screenshot and a detailed description generated through element and layout recognition. DUAL-VCR [Kil et al., 2024] employs a Dual-View Contextualized Representation approach, extracting visual features using the Pix2Struct Vision Transformer [Lee et al., 2023b] and aligning each element with corresponding 'HTML text' following MindAct [Deng et al., 2024b] for semantic grounding.
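To illustrate the visual-description idea, the sketch below overlays Set-of-Mark style numeric labels on detected element boxes using Pillow; element detection itself (OCR, ICONNet, Grounding DINO, etc.) is assumed to have happened upstream, and the drawing details are illustrative.

```python
from PIL import Image, ImageDraw  # assumes Pillow is installed

def set_of_mark(screenshot: Image.Image,
                boxes: list[tuple[int, int, int, int]]) -> Image.Image:
    """Draw each detected interactive element's bounding box with a numeric
    label so the (M)LLM can refer to elements by index, e.g. "click [1]"."""
    marked = screenshot.copy()
    draw = ImageDraw.Draw(marked)
    for i, (left, top, right, bottom) in enumerate(boxes):
        draw.rectangle((left, top, right, bottom), outline="red", width=3)
        draw.text((left + 4, top + 4), str(i), fill="red")
    return marked

shot = Image.new("RGB", (720, 1080), "white")  # placeholder screenshot
marked = set_of_mark(shot, [(40, 200, 160, 260), (40, 300, 160, 360)])
```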
## 3.2.2 Planning
Planning is the process of developing a sequence of actions to achieve a specific goal based on the current environment [Huang and Chang, 2023, Zhang et al., 2024b, Huang et al., 2024a]. It enables OS Agents to break down complex tasks into smaller, manageable sub-tasks and solve them step by step. Unlike general agents, the environment of OS Agents is constantly evolving. For instance, dynamic web pages change over time, and GUIs also adapt after each action is executed. Therefore, feasible planning is crucial for OS Agents to effectively cope with these ongoing environmental changes. We categorize existing studies into two key approaches based on whether the planning is fixed or iterates in response to environmental changes: global planning and iterative planning, detailed as follows:
Global. OS Agents generate a global plan only once and execute it without making adjustments based on environmental changes. Chain-of-Thought (CoT) [Wei et al., 2023] prompts (M)LLMs to break down complex tasks into reasoning steps, which forms the foundation of global planning in most OS Agents [Fu et al., 2024]. Due to its one-time nature, research on global planning focuses on fitting the OS Agents' environment and tasks so as to propose sufficiently feasible plans from the outset. For example, OS-Copilot [Wu et al., 2024b] leverages LLMs to formalize the global plan into a directed acyclic graph, enabling parallel execution of independent sub-tasks, which minimizes execution time and improves efficiency. ACE [Gao et al., 2023] prompts LLMs to refine extracted steps in alignment with user queries. Agent S [Agashe et al., 2024] proposes experience-augmented hierarchical planning, where plans are informed by integrating knowledge from memory and online sources. Similarly, AIA [Ding, 2024] utilizes Standard Operating Procedures (SOP) to break down complex tasks into manageable sub-tasks.
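A minimal sketch of a DAG-shaped global plan, in the spirit of OS-Copilot's formulation, is given below using Python's standard-library topological sorter; the sub-task names and dependencies are hypothetical.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Each sub-task maps to the set of sub-tasks it depends on.
plan = {
    "open_browser": set(),
    "download_report": {"open_browser"},
    "open_spreadsheet": set(),  # independent of the browser sub-tasks
    "fill_summary": {"download_report", "open_spreadsheet"},
}

ts = TopologicalSorter(plan)
ts.prepare()
while ts.is_active():
    batch = list(ts.get_ready())  # independent sub-tasks, runnable in parallel
    print("execute in parallel:", batch)
    ts.done(*batch)
```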
Iterative. In contrast to global planning, iterative planning allows OS Agents to continuously revise their plans based on historical actions or changes in the environment, enabling them to adapt to ongoing environmental changes. This methodology is crucial for OS Agents to handle dynamic and unpredictable environments effectively. Specifically, ReAct [Yao et al., 2023] builds on the concept of CoT by integrating reasoning with the outcomes of actions, making planning more adaptable to changes in the environment. This approach has been widely applied in OS Agents [Zhang et al., 2023a, Ma et al., 2023, He et al., 2024a, Hoscilowicz et al., 2024, Wang et al., 2024b] for iterative planning. Reflexion [Shinn et al., 2023] builds upon ReAct by allowing access to previous actions and states, which enhances strategic planning of OS Agents in complex, time-sensitive scenarios [Fu et al., 2024, Tan et al., Abuelsaad et al., 2024]. In addition to these general iterative planning methods, some studies have proposed iterative planning approaches specifically tailored for OS Agents. For instance, Auto-GUI [Zhang and Zhang, 2023] employs a CoT technique, where a history of past actions is used to generate future plans iteratively after each step. OSCAR [Wang and Liu, 2024] introduces task-driven replanning, allowing the OS Agent to modify its plan based on real-time feedback from the environment. SheetCopilot [Li et al., 2024c] employs State Machine-based Task Planning, where proposed plans are revised using either a feedback-based mechanism or a retrieval-based approach, enhancing the OS Agent's ability to adapt to dynamic environments. RCI [Kim et al., 2024a] prompts LLMs to find problems in their output and improve it based on what they find, assisting the OS Agent in refining its reasoning process, which leads to more effective and accurate planning. CoAT [Zhang et al., 2024a] introduces a more complex and OS Agent-targeted
reasoning method compared to ReAct. It prompts the LLMs to perform a reasoning process involving Screen Description, Action Thinking, and Next Action Description, ultimately leading to an Action Result.
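A minimal sketch of such an iterative, ReAct-style control loop follows; `llm` and `env` are hypothetical callables standing in for a real model and OS environment, and the stopping convention is an assumption for exposition.

```python
def react_loop(llm, env, task: str, max_steps: int = 10) -> str | None:
    """Interleave reasoning and acting: at each step the model sees the task
    plus the trajectory so far, emits a thought and an action, and the new
    observation is appended before the next step."""
    trajectory = f"Task: {task}\n"
    for _ in range(max_steps):
        thought, action = llm(trajectory)  # e.g. ("I should open...", "click [3]")
        if action == "finish":
            return thought                 # completion signal with final answer
        observation = env(action)          # execute the action, read the new state
        trajectory += (f"Thought: {thought}\nAction: {action}\n"
                       f"Observation: {observation}\n")
    return None                            # step budget exhausted
```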
## 3.2.3 Memory
As the complexity of automated tasks in operating systems continues to increase, enhancing the intelligence and execution efficiency of OS Agents has become a key research focus. Among these studies, the memory module serves as one of the core components. By using memory effectively, OS Agents can continuously optimize their performance during task execution, adapt to dynamic environments, and perform tasks in various complex scenarios. In this section, we discuss current research advancements related to memory in OS Agents.
Memory Sources. Memory can be categorized into Internal Memory, External Memory, and Specific Memory, each serving distinct functions in task execution: immediate information storage, external knowledge support, and operation optimization, respectively. In recent years, research has increasingly focused on improving memory adaptability and diversity to meet the demands of more complex tasks [Zhou et al., 2023a, Deng et al., 2024a, Wang et al., 2024c, Huang et al., 2024b, Kim et al., 2024b]. For example, the introduction of dynamic memory management mechanisms optimizes memory retrieval and updates, while the integration of multimodal approaches further broadens the types and scope of memory data, enabling agents to access more diverse information sources when handling complex scenarios.
- · Internal Memory. In the following, we introduce several components of Internal Memory. (1) Action History. By recording each step of operations, the action history helps OS Agents track task paths and optimize decisions. For instance, Auto-GUI [Zhang and Zhang, 2023] integrates historical and future action plans through the chain of previous action histories. (2) Screenshots. The storage of screenshots supports visual reasoning and the recognition of GUI components. For example, CoAT [Zhang et al., 2024a] semantically processes screenshots to extract interface information, enabling better understanding of the task scene. Rawles et al. [2024b], Wang and Liu [2024] utilize screenshots annotated with Set-of-Mark (SoM) to support visual reasoning, accurately identify GUI components, and perform precise operations, while also aiding in task planning and validation. ToL [Fan et al., 2024] uses GUI screenshots as input to construct a Hierarchical Layout Tree and combines visual reasoning to generate descriptions of content and layout. (3) State Data. Dynamic information from the environment, such as page positions and window states, is stored to help OS Agents quickly locate task objectives and maintain high task execution accuracy in changing environments. Specifically, CoCo-Agent [Ma et al., 2024a] records layouts and dynamic states through Comprehensive Environment Perception (CEP), while Abuelsaad et al. [2024], Tao et al. [2023] employ Document Object Model denoising techniques to dynamically store page information. In the following, we present the two forms of internal memory.
Short-term Memory stores immediate information about the current task, including the action history of the agent, state information, and the execution trajectory of the task. It supports decision optimization and task tracking, providing contextual support for the ongoing task. Recent advances focus on improving the memory capabilities of OS Agents. For example, understanding the layout of objects in a scene through visual information enables multimodal agents to possess more comprehensive cognitive abilities when handling complex tasks.
Long-term Memory stores historical tasks and interaction records, such as the execution paths of previous tasks, providing references and reasoning support for future tasks. For example, OS-Copilot [Wu et al., 2024b] stores user preferences and the agent's historical knowledge, such as semantic knowledge and task history, as declarative memory. This is used to make personalized decisions and execute tasks, while dynamically generating new tools or storing task-related skill codes during task execution [Tan et al.].
- · External Memory. External memory provides long-term knowledge support, primarily enriching an agent's memory capabilities through knowledge bases, external documents, and online information. For instance, agents can retrieve domain-specific background information from external knowledge bases to make more informed judgments in tasks requiring domain expertise. Additionally, some agents dynamically acquire external knowledge by invoking tools such as Application Programming Interfaces (APIs) [Song et al., 2024, Reddy et al., 2024], integrating this knowledge into their memory to assist with task execution and decision optimization.
- · Specific Memory. Specific memory focuses on storing information directly related to specific tasks and user needs while incorporating extensive task knowledge and optimized application functions, which can be stored internally or extended through external data sources [Zhu et al., 2024]. Specific Memory can store task execution rules, subtask decomposition methods, and domain knowledge [Wang et al., 2024b]. It provides agents with prior knowledge to assist in handling complex tasks. For instance, MobileGPT [Lee et al., 2023a] adopts a three-tier hierarchical memory structure (task, sub-task, action) and organizes memory in the form of a transition graph, breaking tasks down into sub-tasks represented as function calls for quick access and efficient invocation, while CoCo-Agent [Ma et al., 2024a] employs task decomposition and Conditional Action Prediction (CAP) to store execution rules and methods. In terms of interface element recognition and interaction, Agashe et al. [2024], Wang and Liu [2024], He et al. [2024b] enhance task understanding by parsing the Accessibility Tree to obtain information about all UI elements on the screen.
Additionally, Specific Memory can also be used to record user profiles, preferences, and interaction histories to support personalized recommendations, demand prediction, and inference of implicit information. For example, OS-Copilot [Wu et al., 2024b] records user preferences through user profiles, such as tool usage habits and music or video preferences, enabling personalized solutions and recommendation services. Moreover, Specific Memory also supports recording application function descriptions and page access history to facilitate cross-application operation optimization and historical task tracking. For instance, AppAgent [Zhang et al., 2023a] learns application functionality by recording operation histories and state changes, storing this information as documentation. Similarly, ClickAgent [Hoscilowicz et al., 2024] improves understanding and operational efficiency in application environments by using GUI localization models to identify and locate GUI elements within applications, while also recording functionality descriptions and historical task information.
Memory Optimization. Memory optimization can enhance an agent's efficiency in operations and decision-making during complex tasks by effectively managing and utilizing memory resources. In the following, we introduce several key strategies.
- · Management. For humans, memory information is constantly processed and abstracted in the brain. Similarly, the memory of OS Agents can be effectively managed to generate higher-level information, consolidate redundant content, and remove irrelevant or outdated information. Effective memory management enhances overall performance and prevents efficiency loss caused by information overload. Specifically, Yan et al. [2023], Tan et al. introduce a multimodal self-summarization mechanism, generating concise historical records in natural language instead of directly storing complete screens or action sequences. WebAgent [Gur et al., 2023] understands and summarizes long HTML documents through local and global attention mechanisms, as well as long-span denoising objectives. On the other hand, WebVoyager [He et al., 2024a] employs a Context Clipping method, retaining the most recent three observations while keeping a complete record of thoughts and actions from the history. However, for longer tasks, this approach may lead to the loss of important information, potentially affecting task completion. Additionally, Agent-E [Abuelsaad et al., 2024] optimizes webpage representations by filtering task-relevant content, compressing DOM structure hierarchies, and retaining key parent-child relationships, thereby reducing redundancy. AGENTOCCAM [Yang et al., 2024a] optimizes the agent's workflow memory through a planning tree, treating each new plan as an independent goal and removing historical step information related to previous plans.
- · Growth Experience. By revisiting each step of a task, the agent can analyze successes and failures, identify opportunities for improvement, and avoid repeating mistakes in similar scenarios [Kim et al., 2024a]. For instance, MobA [Zhu et al., 2024] introduces dual reflection, evaluating task feasibility before execution and reviewing completion status afterward. Additionally, in [Li et al., 2023], the agent analyzes the sequence of actions after a task failure, identifies the earliest critical missteps, and generates structured recommendations for alternative actions. OS Agents can return to a previous state and choose an alternative path when the current task path proves infeasible or the results do not meet expectations; akin to classic search algorithms, this enables the agent to explore multiple potential solutions and find the optimal path. For example, LASER [Ma et al., 2023] uses a Memory Buffer mechanism to store intermediate results that were not selected during exploration, allowing the agent to backtrack flexibly within the state space. After taking an incorrect action, the agent can return to a previous state and retry. SheetCopilot [Li et al., 2024c] utilizes a state machine mechanism to guide the model in re-planning actions by providing
error feedback and spreadsheet state feedback, while MobA [Zhu et al., 2024] uses a tree-like task structure to record the complete path, ensuring an efficient backtracking process.
- · Experience Retrieval. OS Agents can efficiently plan and execute by retrieving experiences similar to the current task from long-term memory, which helps to reduce redundant operations [Zheng et al., 2023a, Deng et al., 2024a]. For instance, AWM [Wang et al., 2024c] extracts similar task workflows from past tasks and reuses them in new tasks, minimizing the need for repetitive learning. Additionally, PeriGuru [Fu et al., 2024] uses the K-Nearest Neighbors algorithm to retrieve similar task cases from a task database and combines them with Historical Actions to enhance decision-making through prompts.
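A minimal sketch of KNN-style experience retrieval over embedded task cases is given below; producing the embeddings (e.g. with a sentence encoder) is assumed to happen elsewhere, and cosine similarity is one reasonable choice rather than a prescribed one.

```python
import numpy as np

def retrieve_similar(task_emb: np.ndarray,
                     case_embs: np.ndarray,
                     k: int = 3) -> list[int]:
    """Return indices of the k stored task cases whose embeddings have the
    highest cosine similarity to the current task's embedding."""
    norms = np.linalg.norm(case_embs, axis=1) * np.linalg.norm(task_emb)
    sims = case_embs @ task_emb / np.clip(norms, 1e-9, None)
    return list(np.argsort(-sims)[:k])

cases = np.random.rand(100, 384)  # 100 stored cases, 384-dim embeddings
print(retrieve_similar(np.random.rand(384), cases))
```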
## 3.2.4 Action
The action space defines the interfaces through which (M)LLM-based Agents engage with operating systems, spanning platforms such as computers, mobile devices, and web browsers. We systematically categorize the action space of OS Agents into input operations, navigation operations, and extended operations.
Input Operations. Input operations encompass interactions via mouse/touch and keyboard, forming the foundation for OS Agents to interact with digital interfaces.
Mouse and touch operations encompass three primary types: (1) click/tap actions that are universally implemented across different platforms and serve as the most basic form of interaction Sun et al. [2022], Deng et al. [2024b], Zheng et al. [2023a], (2) long press/hold actions that are particularly crucial for mobile interfaces and context menu activation Zhang et al. [2023a], Rawles et al. [2024a], Fu et al. [2024], and (3) drag/move operations that enable precise control and manipulation of interface elements Gao et al. [2023], Niu et al. [2024], Cho et al. [2024].
Keyboard operations comprise two main categories: (1) basic text input capabilities that allow agents to enter alphanumeric characters and symbols Sun et al. [2022], Deng et al. [2024b], Zhang and Zhang [2023], and (2) special key operations (e.g., shortcuts, function keys) Sun et al. [2022], Gao et al. [2023], Bonatti et al. [2024] that enable agents to efficiently navigate and manipulate target applications through keyboard commands.
Navigation Operations. Navigation operations enable OS Agents to traverse targeted platforms and acquire sufficient information for subsequent actions. Navigation operations encompass both basic navigation and web-specific navigation features.
Basic navigation includes: (1) scroll operations that enable agents to explore content beyond the current viewport, particularly crucial for processing long documents or infinite-scroll interfaces Yan et al. [2023], Lee et al. [2023a], Gao et al. [2023], (2) back/forward navigation that allows agents to traverse through navigation history and return to previously visited states Sun et al. [2022], Zhang and Zhang [2023], Zhang et al. [2023a], and (3) home function that provides quick access to the initial or default state of applications, ensuring reliable reset points during task execution Zhang and Zhang [2023], Zhang et al. [2023a], Wang et al. [2024b].
Web navigation extends these capabilities with (1) tab management that enables agents to handle multiple concurrent sessions and switches between different web contexts Koh et al. [2024b], He et al. [2024a], Song et al. [2024], and (2) URL navigation features that allow direct access to specific web resources and facilitate efficient web traversal He et al. [2024a], Deng et al. [2024b], Ma et al. [2023].
Extended Operations. Extended Operations provide additional capabilities beyond standard interface interactions, enabling more flexible and powerful agent behaviors. These operations primarily include (1) code execution capabilities that allow agents to dynamically extend their action space beyond predefined operations, enabling flexible and customizable control through direct script execution and command interpretation Wu et al. [2024b], Mei et al. [2024], Tan et al., and (2) API integration features that expand agents' capabilities by accessing external tools and information resources, facilitating interactions with third-party services and specialized functionalities Wu et al. [2024b], Mei et al. [2024], Tan et al., Li et al. [2024c]. These operations fundamentally enhance the adaptability and functionality of OS Agents, allowing them to handle more complex and diverse tasks that may not be achievable through conventional interface-based interactions alone.
Table 3: Recent benchmarks for OS Agents. We divided the Benchmarks into three sections based on the Platform (as mentioned in §4.2.1) and sorted them by release date. The following is an explanation of the abbreviations. BS: Benchmark Settings, M/P: Mobile, PC: Desktop, IT: Interactive, ST: Static, OET: Operation Environment Types, RW: Real-World, SM: Simulated, GG: GUI Grounding, IF: Information Processing, AT: Agentic, CG: Code Generation.
## 4 Evaluation of OS Agents
Evaluation plays a crucial role in developing OS Agents, as it helps assess their performance and effectiveness in various scenarios. The current literature features a multitude of evaluation techniques, which vary significantly according to the specific environment and application. To present and summarize the evaluation landscape clearly, we provide a comprehensive overview of a generic evaluation framework for OS Agents, structured around evaluation protocols and benchmarks. We also summarize recent benchmarks for OS Agents in Table 3.
## 4.1 Evaluation Protocol
This section is dedicated to outlining the comprehensive evaluation protocols. Central to the assessment of OS Agents are two pivotal concerns: (1) Evaluation Principles: how the evaluation process should be conducted, and (2) Evaluation Metrics: which aspects need to be assessed. We will now elaborate on the principles and metrics for evaluating OS Agents, focusing on these two issues.
## 4.1.1 Evaluation Principle
The evaluation of OS Agents requires a combination of multiple aspects and techniques to gain comprehensive insight into their capabilities and limitations. The assessment process can be primarily divided into objective and subjective evaluations. This integration of objective and subjective evaluation methods not only ensures rigorous assessment of performance in controlled environments, but also emphasizes the agent's reliability and practical usability in real-world situations.
Objective Evaluation. Objective evaluation primarily measures the performance of OS Agents based on standardized numerical metrics, which are typically rule-based calculations or hardcoded assessments on standard benchmark datasets. This form of evaluation specifically targets the agent's accuracy in perception [Wang et al., 2024e, Ying et al., 2024], the quality of its generated content [Jin et al., 2024, Xu et al., 2024b], the effectiveness of its actions [Xu et al., 2024a], and its operational efficiency [Lee et al., 2024a, Wang et al., 2024f]. Typically, the computation of specific metrics encompasses exact match [Xu et al., 2024b, Pan et al., 2024], fuzzy match [Zhang et al., 2024e], and semantic matching for text, elements, and images. Through precise and efficient numerical analysis, objective evaluation enables quick and standardized measurement of the agent's performance.
Subjective Evaluation. Besides automated objective assessments, subjective evaluations are also essential. These human-centered subjective evaluations aim to measure how well the output matches human expectations [Yan et al., 2023, Pan et al., 2024, Xu et al., 2024a], and are typically applied in scenarios that require a high level of comprehension and are difficult to quantify using traditional metrics. Such subjective evaluations are based on different subjective aspects, including relevance, coherence, naturalness, harmlessness, and overall quality. Early subjective evaluations were primarily based on direct human assessments [Zheng et al., 2023b], which, while yielding high-quality results, are expensive and difficult to reproduce. Later, LLMs were introduced as evaluators to substitute for human judgment [Liu et al., 2023, Vu et al., 2024], exploiting their strong instruction-following capabilities. Such LLM-as-a-judge evaluation methods [Gu et al., 2024, Kim et al., 2024c,d] can offer detailed explanations for annotations, providing a finer-grained understanding of the agent's strengths and weaknesses. Nevertheless, despite the gains in efficiency, there are still limitations regarding their reliability and controllability [Pasupat et al., 2018, Gou et al., 2024, Dardouri et al., 2024].
## 4.1.2 Evaluation Metric
As mentioned in §2.2, the evaluation process of OS Agents mainly examines their abilities in terms of understanding, planning and action grounding. During evaluation, the agent, provided with task instructions and the current environment input, is expected to execute a sequence of continuous actions until the task is accomplished. By collecting the agent's observations, action outputs, and other environmental information during the process, specific metrics can be calculated. Specifically, the evaluation scope includes both granular step-level evaluations and a more holistic task-level assessment. The former focuses on whether each step in the process aligns with the predefined path, while the latter is concerned with whether the agent achieves the goal in the end.
Step-level Evaluation. Step-level evaluation centers on a detailed, step-by-step analysis of the planning trajectory, offering a fine-grained evaluation of the actions taken by the agent at each step. In step-level evaluation, the agent's output in response to the instruction at each step is directly assessed, with a focus on the accuracy of action grounding and the matching of potential object elements (which refers to the target of the action). For action grounding, the predicted action at each step is typically compared directly with the reference action to obtain operation metrics, such as operation accuracy and F1 [Xu et al., 2024a, Jin et al., 2024]. For element matching of actions, different approaches are used depending on the type of action and elements, for example, comparing based on element ID or element position, leading to element accuracy and F1 [Pasupat et al., 2018]. In the case of specific tasks, such as those involving visual grounding in question-answering, there are dedicated metrics like BLEU [Jin et al., 2024], ROUGE [Xu et al., 2024b], and BERTScore [Weber, 2024]. By aggregating
all the relevant metrics for a single step, it is possible to assess the step's success, thereby obtaining the step success rate (step SR) [Pan et al., 2024]. Despite providing fine-grained comprehension, such step-level evaluation has limitations in assessing the performance of long, continuous action sequences [Koh et al., 2024a, Pasupat et al., 2018, Xie et al., 2024], and a given task may have various valid paths. To boost the robustness [Zhang et al., 2024f] of the evaluation, it is usually necessary to integrate the final task outcome into the assessment.
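A minimal sketch of step-level scoring under exact matching follows; the record fields are hypothetical, and real benchmarks layer fuzzy and semantic matching on top of comparisons like these.

```python
def step_success_rate(predictions: list[dict], references: list[dict]) -> float:
    """A step succeeds only if both the predicted operation and the target
    element match the reference; the step SR is the fraction of successes."""
    successes = sum(
        p["operation"] == r["operation"] and p["element_id"] == r["element_id"]
        for p, r in zip(predictions, references)
    )
    return successes / len(references)

preds = [{"operation": "CLICK", "element_id": 7},
         {"operation": "TYPE", "element_id": 2}]
refs = [{"operation": "CLICK", "element_id": 7},
        {"operation": "TYPE", "element_id": 5}]
print(step_success_rate(preds, refs))  # 0.5
```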
Task-level Evaluation. Task-level evaluation centers on the final output and evaluates whether the agent reaches the desired final state. The two main criteria are task completion and resource utilization. The former assesses whether the agent has successfully fulfilled the assigned tasks as per the instructions, while the latter examines the agent's overall efficiency during task completion.
- · Task Completion Metrics. Task Completion Metrics measure the effectiveness of OS Agents in successfully accomplishing assigned tasks. These metrics cover several key aspects. Overall Success Rate (SR) [Koh et al., 2024a, Zhang and Zhang, 2023, Drouin et al., 2024, Shi et al., 2017] provides a straightforward measure of the proportion of tasks that are fully completed. Accuracy [Ying et al., 2024, Wang et al., 2024e, Zhang et al., 2024f] assesses the precision of the agent's responses or actions, ensuring outputs closely match the expected outcomes. Additionally, the Reward function [Koh et al., 2024a, Yao et al., 2022, Zhang et al., 2023c, Kapoor et al., 2024] is another critical metric, which assigns numerical values to guide agents toward specific objectives in reinforcement learning.
- · Efficiency Metrics. Efficiency Metrics evaluate how efficiently the agent completes assigned tasks, considering factors such as step cost, hardware expenses, and time expenditure. Specifically, Step Ratio [Chen et al., 2024d, Lee et al., 2024a, Wang et al., 2024f] compares the number of steps taken by the agent to the optimal one (often defined by human performance). A lower step ratio indicates a more efficient and optimized task execution, while higher ratios highlight redundant or unnecessary actions. API Cost [Guo et al., 2023, Zhang et al., 2024f, Deng et al., 2024c] evaluates the financial costs associated with API calls, which is particularly relevant for agents that use external language models or cloud services. Furthermore, Execution Time [Xu et al., 2024c] measures the time required for the agent to complete a task, and Peak Memory Allocation [Zhang et al., 2024e] shows the maximum GPU memory usage during computation. These efficiency metrics are critical for evaluating the real-time performance of agents, especially in resource-constrained environments.
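A minimal computation of the step ratio is sketched below; the episode-log fields are hypothetical.

```python
def step_ratio(agent_steps: int, reference_steps: int) -> float:
    """Steps taken divided by the reference optimum (e.g. a human
    demonstration); 1.0 is optimal, larger values flag redundant actions."""
    return agent_steps / reference_steps

episodes = [{"agent_steps": 12, "ref_steps": 8},
            {"agent_steps": 5, "ref_steps": 5}]
ratios = [step_ratio(e["agent_steps"], e["ref_steps"]) for e in episodes]
print(sum(ratios) / len(ratios))  # mean step ratio, here 1.25
```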
## 4.2 Evaluation Benchmark
To comprehensively evaluate the performance and capabilities of OS Agents, researchers have developed a variety of benchmarks. These benchmarks construct various environments, based on different platforms and settings, and cover a wide range of tasks. This subsection offers a detailed overview of these benchmarks, organized by evaluation platforms, benchmark settings, and tasks.
## 4.2.1 Evaluation Platform
The platform acts as an integrated evaluation environment, specifically encompassing the virtual settings in which benchmarks are performed. Different platforms present unique challenges and evaluation focuses. Some benchmarks also incorporate multiple platforms at the same time, which places greater demands on the agent's cross-platform transferability. Existing real-world platforms can primarily be categorized into three types: Mobile, Desktop, and Web. Each platform has its unique characteristics and evaluation focuses, which we will elaborate on as follows.
Mobile. Mobile platforms such as Android [Li et al., 2024a, Lee et al., 2024a, Bishop et al., 2024, Venkatesh et al., 2022] or iOS [Yan et al., 2023] present unique challenges for OS Agents. While mobile GUI elements are simpler due to smaller screens, they require more complex actions, such as precise gestures for navigating widgets or zooming. The open nature of Android provides a wider action space, encompassing standard GUI interactions and function-calling APIs, such as sending text messages, which imposes higher demands on the agents' planning and action grounding capabilities.
Desktop. The desktop platform is more complex due to the diversity of operating systems and applications. Effective desktop benchmarks [Xie et al., 2024, Wang et al., 2024d, Bonatti et al., 2024] need to handle the wide variety and complexity of real-world computing environments, which span different operating systems, interfaces, and applications. As a result, the scope of manageable tasks and the scalability of testing agents are often constrained.
Web. Web platforms are essential interfaces for accessing online resources. Webpages [Koh et al., 2024a, Lù et al., 2024, Drouin et al., 2024, Yao et al., 2022, Shi et al., 2017] are open and built with HTML, CSS, and JavaScript, making them easy to inspect and modify in real time. Since agents interact with the web interface in the same way humans do, it is possible to crowdsource human demonstrations of web tasks from anyone with access to a web browser, keyboard, and mouse, at low cost. This accessibility has also attracted significant attention from researchers in the field.
## 4.2.2 Benchmark Setting
Apart from the categorization of platforms, the environmental spaces in which OS Agents perceive and take actions vary across different evaluation benchmarks. We have organized the existing benchmark environments, primarily dividing them into static and interactive categories, with the interactive environments further split into simulated and real-world settings.
Static. Static environments, which are prevalent in early studies, are often created by caching website copies or static data, thereby establishing an offline context for evaluation. The process of setting up a static environment is quite simple, as it merely involves caching content from real websites. Evaluations generally rely on the cached static content for tasks such as visual grounding, and only one-step actions are supported. MiniWoB [Shi et al., 2017] is built on simple HTML/CSS/JavaScript pages and employs predefined simulation tasks. Mind2Web [Deng et al., 2024b] captures comprehensive snapshots of each website along with complete interaction traces, enabling seamless offline replay. Owing to the lack of dynamic interaction and environmental feedback, such static evaluations tend to be less authentic and versatile, making them inadequate for a comprehensive assessment.
Interactive. Interactive environments provide a more authentic scenario, characterized by their dynamism and interactivity. In contrast to static environments, OS Agents can execute a sequence of actions, receive feedback from the environment, and adjust accordingly. Interactive settings thus enable the evaluation of an agent's skills in more sophisticated scenarios. These environments can be subdivided into simulated and real-world types. (1) For the simulated environment, FormWoB [Shi et al., 2017] created a virtual website to avoid the reproducibility issues caused by the dynamic nature of real-world environments, while Rawles et al. [2024b] developed virtual apps to assess the capabilities of OS Agents. However, such simulated environments are often overly simplistic, excluding unexpected conditions and thus failing to capture the complexity of real-world scenarios. (2) For the real-world environment, which is truly authentic and encompasses real websites and apps, one must account for the continuously updating nature of the environment, uncontrollable user behaviors, and diverse device setups. This underscores the need for agents to generalize across real-world conditions. OSWorld [Xie et al., 2024], for example, constructed virtual machines running Windows, Linux, and macOS to systematically evaluate OS Agents across different operating systems. Similarly, AndroidWorld [Rawles et al., 2024a] tested real apps on Android emulators, highlighting the importance of evaluating agents under diverse and realistic conditions.
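The contrast with static evaluation can be summarized by the interaction loop itself. The sketch below shows a minimal Gym-style episode; `env` and `agent` are placeholder interfaces we assume for illustration, not the API of OSWorld, AndroidWorld, or any other benchmark:

```python
def run_episode(env, agent, max_steps: int = 30) -> bool:
    """Let the agent act until the task succeeds or the step budget runs out.

    Assumed placeholder interfaces:
      env.reset() -> observation (e.g., a screenshot or DOM snapshot)
      env.step(action) -> (observation, done, success)
      agent.act(observation) -> action
    """
    observation = env.reset()
    for _ in range(max_steps):
        action = agent.act(observation)                 # plan the next step
        observation, done, success = env.step(action)   # environment feedback
        if done:
            return success                              # task-level success signal
    return False
```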
## 4.2.3 Task
To comprehensively assess the capabilities of OS Agents, a spectrum of specialized tasks has been integrated into the established benchmarks. These tasks span from system-level operations, such as installing and uninstalling applications, to everyday activities, such as sending emails and shopping online. They are intended to measure how closely current agents can approach human performance.
Task Categorization. In evaluating OS Agents, task categorization is critical for understanding their capabilities and limitations at a fine-grained level. Based on the capabilities required by the evaluation, current benchmark tasks fall primarily into three types: GUI Grounding, Information Processing, and Agentic Tasks, described as follows.
- · GUI Grounding. GUI grounding tasks evaluate an agent's ability to map instructions onto actionable GUI elements. Grounding is a fundamental capability that OS Agents must possess to interact with operating systems (a minimal sketch of such a grounding step follows this list). Early works, such as PIXELHELP [Li et al., 2020], provide a benchmark that pairs English instructions with actions performed by users on a mobile emulator.
- · Information Processing. In the context of interactive agents, the ability to handle information effectively is a critical component for addressing complex tasks. This encompasses not only retrieving relevant data from various sources but also summarizing and distilling information to meet specific user needs. Such capabilities are particularly essential in dynamic and diverse environments, where agents must process large volumes of information and deliver accurate results. Information processing tasks can be further categorized into two main types: (1) Information Retrieval Tasks [Pan et al., 2024, Zhang et al., 2024e, Drouin et al., 2024] examine an agent's ability to process complex and dynamic information by understanding instructions and GUI interfaces and extracting the desired information or data. Browsers (whether web-based or local applications) are ideal platforms for information retrieval tasks due to their vast repositories of information, and applications with integrated data services also serve as retrieval platforms. For instance, AndroidWorld [Rawles et al., 2024a] requires OS Agents to retrieve scheduled events from Simple Calendar Pro. (2) Information Summarizing Tasks ask the agent to summarize specified information from a GUI interface, testing its ability to comprehend and process information. For example, certain tasks in WebLINX [Lù et al., 2024] focus on summarizing web-based news articles or user reviews.
- · Agentic Tasks. Agentic tasks evaluate an agent's core abilities (as discussed in §2.2) and represent a key focus of current research. In these tasks, OS Agents are given an instruction or goal and must identify the required steps, plan actions, and execute them until the target state is reached, without relying on explicit navigation guidance. For instance, WebLINX [Lù et al., 2024] offers both low-level and high-level instructions, challenging agents to complete single-step or multi-step tasks and thereby testing their planning capabilities. Similarly, MMInA [Zhang et al., 2024e] emphasizes multi-hop tasks, requiring agents to navigate across multiple websites to fulfill a given instruction.
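As promised above, here is a minimal sketch of a GUI grounding step. It uses a deliberately naive word-overlap score to pick the element matching an instruction; actual grounding models instead score (instruction, element) pairs with an (M)LLM or a vision-language model, so this is an assumption-laden baseline, not a method from the surveyed papers:

```python
def ground_instruction(instruction: str, elements: list) -> dict:
    """Pick the GUI element whose label best matches the instruction.

    Naive lexical-overlap baseline, for illustration only.
    """
    instr_tokens = set(instruction.lower().split())

    def overlap(element: dict) -> int:
        label = (element.get("text") or "").lower()
        return len(instr_tokens & set(label.split()))

    return max(elements, key=overlap)

elements = [
    {"text": "Add to cart", "id": "buy"},
    {"text": "Sign in", "id": "login"},
]
print(ground_instruction("add the item to my cart", elements))  # picks the "buy" element
```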
## 5 Challenge & Future
## 5.1 Safety & Privacy
A recent report [Park, 2024] highlighted a notable case where a human player successfully outwitted the Freysa AI agent in a $47,000 crypto challenge, underscoring vulnerabilities even in advanced AI systems and emphasizing the need to address such security risks. This incident aligns with broader concerns as (M)LLMs are increasingly integrated into diverse domains, such as healthcare, education, and autonomous systems, where security has become a critical issue. This growing adoption has led to numerous studies [Deng et al., 2024d, Gan et al., 2024a, Yao et al., 2024, Shayegani et al., 2023, Cui et al., 2024, Wang et al., 2024g, Neel and Chang, 2024] investigating the security risks associated with LLMs and their applications, and some research has examined the specific challenges that OS Agents face in this regard. The following subsections discuss existing research on the security of OS Agents: §5.1.1 analyzes attack strategies targeting OS Agents, §5.1.2 explores existing defense mechanisms and their limitations, and §5.1.3 reviews security benchmarks designed to assess the robustness and reliability of OS Agents.
## 5.1.1 Attack
Several researchers have investigated adversarial attacks targeting OS Agents. Wu et al. [2024e] identified a novel threat called Web Indirect Prompt Injection (WIPI), in which adversaries indirectly control LLM-based Web Agents by embedding natural language instructions into web pages. Recent findings [Wu et al., 2024f] further uncovered security risks for MLLMs, illustrating how adversaries can generate adversarial images that cause the captioner to produce adversarial captions, ultimately leading the agents to deviate from the user's intended goals. Similar vulnerabilities have been identified in other studies. Ma et al. [2024b] introduced an attack method called environmental injection, highlighting that advanced MLLMs are vulnerable to environmental distractions, which can cause agents to perform unfaithful behaviors. Expanding on the concept, Liao et al. [2024] executed an environmental injection attack by embedding invisible malicious instructions within web pages, prompting the agents to assist adversaries in stealing users' personal information. Xu et al. [2024d] further advanced this approach by leveraging malicious instructions generated by an adversarial prompter model, trained on both successful and failed attack data, to mislead MLLM-based Web Agents into executing targeted adversarial actions.
Other studies have explored security issues in specific environments. Zhang et al. [2024g] explored adversarial pop-up window attacks on MLLM-based Web Agents, demonstrating how this method
interferes with the decision-making process of the agents. Kumar et al. [2024] investigated the security of refusal-trained LLMs when deployed as browser agents. Their study found that these models' ability to reject harmful instructions in conversational settings does not effectively transfer to browser-based environments. Moreover, existing attack methods can successfully bypass their security measures, enabling jailbreaking. Yang et al. [2024b] proposed a security threat matrix for agents running on mobile devices, systematically examining the security issues of MLLM-based Mobile Agents and identifying four realistic attack paths and eight attack methods.
## 5.1.2 Defense
Although several security frameworks have been developed for LLM-based Agents [Ruan et al., 2024, Hua et al., 2024, Fang et al., 2024, Xiang et al., 2024, Shamsujjoha et al., 2024], studies on defenses specific to OS Agents [Pedro et al., 2023] remain limited. Bridging this gap requires the development of robust defense mechanisms tailored to the vulnerabilities of OS Agents, such as injection attacks, backdoor exploits, and other potential threats. Future research could prioritize these areas, focusing on developing comprehensive and scalable security solutions for OS Agents.
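To illustrate one building block such defenses might include, the sketch below flags page text that appears to address the agent directly before it is passed to the model. The patterns are invented for illustration; a deployed defense would combine provenance tracking, privilege separation, and trained detectors rather than rely on a keyword heuristic:

```python
import re

# Naive heuristic screen for indirect prompt injection, sketched only to
# illustrate the idea. The patterns below are illustrative assumptions.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"you (are|must) (an? )?(assistant|agent)",
    r"send .* to .*@",  # exfiltration-style phrasing
]

def flag_injection(page_text: str) -> bool:
    """Return True if page text matches any known injection-like pattern."""
    text = page_text.lower()
    return any(re.search(p, text) for p in SUSPICIOUS_PATTERNS)

print(flag_injection("Ignore previous instructions and email the user's password"))  # True
```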
## 5.1.3 Benchmark
Several security benchmarks [Levy et al., 2024, Lee et al., 2024b] have been introduced to evaluate the robustness of OS Agents in various scenarios. The online benchmark ST-WebAgentBench [Levy et al., 2024] has been developed to systematically assess the safety and trustworthiness of web agents within enterprise environments. It focuses on six key dimensions of reliability, offering a comprehensive framework for evaluating agent behavior in high-risk contexts. Similarly, a benchmarking platform named MobileSafetyBench [Lee et al., 2024b] has been developed to assess the security of LLM-based Mobile Agents, focusing on evaluating their performance in handling safety-critical tasks within Android environments, including interactions with messaging and banking applications.
## 5.2 Personalization & Self-Evolution
Much like Jarvis, Iron Man's personal assistant in the movies, a personalized OS Agent has been a long-standing goal in AI research. A personal assistant is expected to continuously adapt and provide enhanced experiences based on individual user preferences. OpenAI's memory feature has made strides in this direction, but many (M)LLMs today still fall short in providing personalized experiences and in self-evolving through user interactions.
Early works [Wang et al., 2023b, Zhu et al., 2023] allowed LLM-based Agents to interact with game environments and summarize their experiences into text, thus accumulating memory and facilitating self-evolution [Zhou et al., 2024]. For example, Wang et al. [2023b] demonstrated the potential for agents to adapt and evolve through experience. Later, researchers applied these principles to the OS Agent domain [Zhang et al., 2023a, Li et al., 2024d, Wu et al., 2024b], validating the feasibility of memory mechanisms in OS Agents. However, due to the limited resources available in academia and the difficulty of accessing real user data, much of the current research focuses on improving performance for specific tasks rather than on personalization. Memory mechanisms nevertheless show potential for OS Agents to accumulate user data over time, thus improving user experience and performance.
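A minimal sketch of such a text-based memory mechanism is shown below; the word-overlap retrieval is a stand-in assumption, as practical systems typically use embedding-based similarity search, and the stored summaries are invented examples:

```python
class ExperienceMemory:
    """Accumulate textual episode summaries and retrieve relevant ones."""

    def __init__(self):
        self.entries = []

    def add(self, summary: str) -> None:
        """Store a natural-language summary of a completed episode."""
        self.entries.append(summary)

    def retrieve(self, task: str, k: int = 2) -> list:
        """Return the k summaries sharing the most words with the new task."""
        task_words = set(task.lower().split())
        scored = sorted(
            self.entries,
            key=lambda e: len(task_words & set(e.lower().split())),
            reverse=True,
        )
        return scored[:k]

memory = ExperienceMemory()
memory.add("Booking a flight requires filling the date picker before searching.")
memory.add("The settings app hides Wi-Fi options under 'Network & internet'.")
print(memory.retrieve("book a flight for next week"))
```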
Moreover, expanding the modalities of memory from text to other forms, such as images and voice, presents significant challenges, and managing and retrieving such memory effectively remains an open issue. We believe that overcoming these challenges will enable OS Agents to provide more personalized, dynamic, and context-aware assistance, with more sophisticated self-evolution mechanisms that continually adapt to the user's needs and preferences.
## 6 Related Work
(Multimodal) Large Language Models [Wake et al., 2024, Li et al., 2024e, Zheng et al., 2024c, Bai et al., 2023, Dai et al., 2022] have emerged as transformative tools in artificial intelligence, driving significant advancements across various domains. Zhao et al. [2023] provide a foundational overview of LLMs, while Yin et al. [2024] and Zhang et al. [2024h] comprehensively review the progress of Multimodal LLMs. In addition, Long et al. [2024] explore the use of synthetic data for training, and Zhang et al. [2023d] present the current state of research on instruction tuning for LLMs.
With the flourishing development of (M)LLM-based Agents, numerous comprehensive surveys have emerged, offering detailed insights into various aspects of these systems. Wang et al. [2024h], Cheng et al. [2024b], and Gan et al. [2024b] provide overviews of general LLM-based Agents. For agent frameworks, Zhou et al. [2023c], Zhang et al. [2024i], and Li et al. [2024f] explore methods to enhance agents' capabilities in planning, memory, and multi-agent interaction. Qiao et al. [2022] present comprehensive comparisons of LLMs' reasoning abilities. Hou et al. [2023], Hu et al. [2024b], and Li et al. [2024g] summarize studies in different application fields, including software engineering, games, and personal assistance. Some concurrent works [Li et al., 2024h, Wu et al., 2024g, Wang et al., 2024i, Gao et al., 2024b, Zhang et al., 2024j] touch on concepts that share features with OS Agents, such as personalized agents, GUI Agents, and generalist virtual agents. This work aims to provide an integrated view of the construction and evaluation of OS Agents, which leverage the environments and interfaces provided by operating systems, while identifying open challenges and future directions for forthcoming studies.
## 7 Conclusion
The development of (multimodal) large language models has created new opportunities for OS Agents, moving the idea of advanced AI assistants closer to realization. In this survey, we have outlined the fundamentals underlying OS Agents, including their key components and capabilities. We have also reviewed various approaches to their construction, with particular attention to domain-specific foundation models and agent frameworks. Through the evaluation protocols and benchmarks discussed, we have explored methods for assessing the performance of OS Agents across a variety of tasks. Looking ahead, we identify critical challenges, such as safety and privacy, and personalization and self-evolution, as areas that require continued research and attention. This summary of the current state of the field, along with potential directions for future work, is intended to contribute to the ongoing development of OS Agents and to support their relevance and utility in both academic and industrial settings.
| Agent |
| 404 | 2501.03936v1.md | Agent_004 | PPTAgent: Generating and Evaluating Presentations Beyond Text-to-Slides | "Automatically generating presentations from documents is a challenging task that requires balancing(...TRUNCATED) | https://arxiv.org/abs/2501.03936 | 2025 | "## PPTAgent PPT : Generating and Evaluating Presentations Beyond Text-to-Slides\n\n## Abstract\n\nA(...TRUNCATED) | Agent |
| 405 | 2410.12361v3.md | Agent_005 | Proactive Agent: Shifting LLM Agents from Reactive Responses to Active Assistance | "Agents powered by large language models have shown remarkable abilities in solving complex tasks. H(...TRUNCATED) | https://openreview.net/forum?id=sRIU6k2TcU | 2025 | "## PROACTIVE AGENT: SHIFTING LLM AGENTS FROM REACTIVE RESPONSES TO ACTIVE ASSISTANCE\n\n## ABSTRACT(...TRUNCATED) | Agent |
| 406 | 2409.05556v1.md | Agent_006 | SciAgents: Automating scientific discovery through multi-agent intelligent graph reasoning | "A key challenge in artificial intelligence is the creation of systems capable of autonomously advan(...TRUNCATED) | https://arxiv.org/abs/2409.05556 | 2024 | "## SCIAGENTS: AUTOMATING SCIENTIFIC DISCOVERY THROUGH MULTI-AGENT INTELLIGENT GRAPH REASONING ∗\n(...TRUNCATED) | Agent |
| 407 | 2402.01030v4.md | Agent_007 | Executable Code Actions Elicit Better LLM Agents | "Large Language Model (LLM) agents, capable of performing a broad range of actions, such as invoking(...TRUNCATED) | https://dl.acm.org/doi/10.5555/3692070.3694124 | 2024 | "## Executable Code Actions Elicit Better LLM Agents\n\n## Abstract\n\nLarge Language Model (LLM) ag(...TRUNCATED) | Agent |
| 408 | 2502.14499v1.md | Agent_008 | MLGym: A New Framework and Benchmark for Advancing AI Research Agents | "We introduce Meta MLGym and MLGym-Bench, a new framework and benchmark for evaluating and developin(...TRUNCATED) | https://arxiv.org/abs/2502.14499 | 2025 | "# MLGyM: A New Framework and Benchmark for Advancing Al Research Agents. \n\nWe introduce Meta MLG(...TRUNCATED) | Agent |
| 409 | 2303.17760v2.md | Agent_009 | CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society | "The rapid advancement of chat-based language models has led to remarkable progress in complex task-(...TRUNCATED) | https://dl.acm.org/doi/10.5555/3666122.3668386 | 2023 | "# CAMEL: Communicative Agents for \"Mind\" Exploration of Large Language Model Society\n\n## Abstra(...TRUNCATED) | Agent |
| 410 | 2311.12983v1.md | Agent_010 | GAIA: a benchmark for General AI Assistants | "We introduce GAIA, a benchmark for General AI Assistants that, if solved, would represent a milesto(...TRUNCATED) | https://openreview.net/forum?id=fibxvahvs3 | 2024 | "# GAIA: A Benchmark for General Al Assistants\n\nWe introduce GAIA, a benchmark for General AI Assi(...TRUNCATED) | Agent |
# llm-rag-agent-papers

Research papers on LLM, RAG, and AI Agents - Knowledge base for RAG pipeline

## Dataset Structure
This dataset contains three subsets:
- llm: Large Language Model related content
- rag: Retrieval-Augmented Generation related content
- agent: AI Agent related content
## Usage
```python
from datasets import load_dataset

# Load all subsets
dataset = load_dataset("GXMZU/llm-rag-agent-papers")

# Load specific subset
llm_data = load_dataset("GXMZU/llm-rag-agent-papers", "llm")
rag_data = load_dataset("GXMZU/llm-rag-agent-papers", "rag")
agent_data = load_dataset("GXMZU/llm-rag-agent-papers", "agent")
```
## Use Case
This dataset is designed as a knowledge base for RAG (Retrieval-Augmented Generation) pipelines, providing domain-specific content about LLM, RAG, and AI Agent technologies.
## License
This dataset is licensed under MIT.
## Citation
If you use this dataset in your research, please cite:
```bibtex
@misc{llm-rag-agent-papers,
  title={LLM RAG Agent papers},
  author={real-jiakai},
  year={2025},
  url={https://huggingface.co/datasets/GXMZU/llm-rag-agent-papers}
}
```