| id | file_name | paper_id | title | abstract | link | year | content | category |
|---|---|---|---|---|---|---|---|---|
498 | 1018686.md | Agent_098 | Google's Approach for Secure AI Agents | As part of Google's ongoing efforts to define best practices for secure AI systems, we’re sharing our aspirational framework for secure AI agents. We advocate for a hybrid, defense-in-depth strategy that combines the strengths of traditional, deterministic security controls with dynamic, reasoning-based defenses. This ... | https://storage.googleapis.com/gweb-research2023-media/pubtools/1018686.pdf | 2,025 | # An Introduction to Google's Approach to AI Agent Security
Introduction: the promise and risks of AI agents
Security challenges of how AI agents work
Key risks associated with AI agents
Core principles for agent security
Google's approach: a hybrid defense-in-depth
Navigating the future of agents secur... | Agent |
499 | arxiv_2504.21024.md | Agent_099 | WebEvolver: Enhancing Web Agent Self-Improvement with Coevolving World Model | Agent self-improvement, where the backbone Large Language Model (LLM) of the agent is trained on trajectories sampled autonomously based on its own policy, has emerged as a promising approach for enhancing performance. Recent advancements, particularly in web environments, face a critical limitation: their perform... | https://arxiv.org/abs/2504.21024 | 2,025 | # WebEvolver: Enhancing Web Agent Self-Improvement with Coevolving World Model

Figure 1: Overview of WebEvolver - A Self-Improving Framework with World-Model Look-Ahead. Our framework co-trains a world model alongside the agent by predi... | Agent |
500 | arxiv_2505.14146.md | Agent_100 | s3: You Don't Need That Much Data to Train a Search Agent via RL | Retrieval-augmented generation (RAG) systems empower large language models (LLMs) to access external knowledge during inference. Recent advances have enabled LLMs to act as search agents via reinforcement learning (RL), improving information acquisition through multi-turn interactions with retrieval engines. However, e... | https://arxiv.org/abs/2505.14146 | 2,025 | # s3: You Don't Need That Much Data to Train a Search Agent via RL
# Abstract
Retrieval-augmented generation (RAG) systems empower large language models (LLMs) to access external knowledge during inference. Recent advances have enabled LLMs to act as search agents via reinforcement learning (RL), improving informatio... | Agent |
501 | arxiv_2506.05813.md | Agent_101 | MAPLE: Multi-Agent Adaptive Planning with Long-Term Memory for Table Reasoning | Table-based question answering requires complex reasoning capabilities that current LLMs struggle to achieve with single-pass inference. Existing approaches, such as Chain-of-Thought reasoning and question decomposition, lack error detection mechanisms and discard problem-solving experiences, contrasting sharply with h... | https://arxiv.org/abs/2506.05813 | 2,025 | # MAPLE: Multi-Agent Adaptive Planning with Long-Term Memory for Table Reasoning
# Abstract
Table-based question answering requires complex reasoning capabilities that current LLMs struggle to achieve with single-pass inference. Existing approaches, such as Chain-of-Thought reasoning and question decomposition, lack e... | Agent |
502 | arxiv_2506.11763.md | Agent_102 | DeepResearch Bench: A Comprehensive Benchmark for Deep Research Agents | Deep Research Agents are a prominent category of LLM-based agents. By autonomously orchestrating multistep web exploration, targeted retrieval, and higher-order synthesis, they transform vast amounts of online information into analyst-grade, citation-rich reports--compressing hours of manual desk research into minutes.... | https://arxiv.org/abs/2506.11763 | 2,025 | # DeepResearch Bench: A Comprehensive Benchmark for Deep Research Agents
Deep Research Agents are a prominent category of LLM-based agents. By autonomously orchestrating multistep web exploration, targeted retrieval, and higher-order synthesis, they transform vast amounts of online information into analyst-grade, cita... | Agent |
503 | arxiv_2506.12594.md | Agent_103 | A Comprehensive Survey of Deep Research: Systems, Methodologies, and Applications | This survey examines the rapidly evolving field of Deep Research systems -- AI-powered applications that automate complex research workflows through the integration of large language models, advanced information retrieval, and autonomous reasoning capabilities. We analyze more than 80 commercial and non-commercial impl... | https://arxiv.org/abs/2506.12594 | 2,025 | # A Comprehensive Survey of Deep Research: Systems, Methodologies, and Applications
This survey examines the rapidly evolving field of Deep Research systems--AI-powered applications that automate complex research workflows through the integration of large language models, advanced information retrieval, and autonomous ... | Agent |
504 | SHADE-Arena-Paper.md | Agent_104 | SHADE-Arena: Evaluating Sabotage and Monitoring in LLM Agents | As Large Language Models (LLMs) are increasingly deployed as autonomous agents in complex and long horizon settings, it is critical to evaluate their ability to sabotage users by pursuing hidden objectives. We study the ability of frontier LLMs to evade monitoring and achieve harmful hidden goals while completing a wid... | https://arxiv.org/abs/2506.15740 | 2,025 | # SHADE-Arena: Evaluating Sabotage and Monitoring in LLM Agents
# Abstract
As Large Language Models (LLMs) are increasingly deployed as autonomous agents in complex and long horizon settings, it is critical to evaluate their ability to sabotage users by pursuing hidden objectives. We study the ability of frontier LLM... | Agent |
505 | arxiv_2401.03568.md | Agent_105 | Agent AI: Surveying the Horizons of Multimodal Interaction | Multi-modal AI systems will likely become a ubiquitous presence in our everyday lives. A promising approach to making these systems more interactive is to embody them as agents within physical and virtual environments. At present, systems leverage existing foundation models as the basic building blocks for the creation... | https://arxiv.org/abs/2401.03568 | 2,024 | # AGENT AI: SURVEYING THE HORIZONS OF MULTIMODAL INTERACTION
[Figure: The Emerging Agent AI Paradigm for Multi-modal and Cross-Reality AGI -- embodiment across the physical world (service robots, autonomous vehicles) and the virtual world (virtual reality, gaming, mixed reality, virtual avatars)] | Agent |
506 | arxiv_2412.20138.md | Agent_106 | TradingAgents: Multi-Agents LLM Financial Trading Framework | Significant progress has been made in automated problem-solving using societies of agents powered by large language models (LLMs). In finance, efforts have largely focused on single-agent systems handling specific tasks or multi-agent frameworks independently gathering data. However, the multi-agent systems' potential ... | https://arxiv.org/abs/2412.20138 | 2,024 | # TradingAgents: Multi-Agents LLM Financial Trading Framework
Significant progress has been made in automated problem-solving using societies of agents powered by large language models (LLMs). In finance, efforts have largely focused on single-agent systems handling specific tasks or multi-agent frameworks independent... | Agent |
507 | arxiv_2505.16901.md | Agent_107 | Code Graph Model (CGM): A Graph-Integrated Large Language Model for Repository-Level Software Engineering Tasks | Recent advances in Large Language Models (LLMs) have shown promise in function-level code generation, yet repository-level software engineering tasks remain challenging. Current solutions predominantly rely on proprietary LLM agents, which introduce unpredictability and limit accessibility, raising concerns about data ... | https://arxiv.org/abs/2505.16901 | 2,025 | # Code Graph Model (CGM): A Graph-Integrated Large Language Model for Repository-Level Software Engineering Tasks
Figure 1: Results on SWE-bench Lite. CGM-SWE-PY ranks first among open-weight models. CS-3.5 denotes Claude-3.5-Sonnet, DS-V3 represents DeepSeek-V3.
# Abstract
Recent advances in Large Language Models (... | Agent |
508 | arxiv_2506.16499.md | Agent_108 | ML-Master: Towards AI-for-AI via Integration of Exploration and Reasoning | As AI capabilities advance toward and potentially beyond human-level performance, a natural transition emerges where AI-driven development becomes more efficient than human-centric approaches. A promising pathway toward this transition lies in AI-for-AI (AI4AI), which leverages AI techniques to automate and optimize th... | https://arxiv.org/abs/2506.16499 | 2,025 | # ML-Master: Towards AI-for-AI via Integration of Exploration and Reasoning

Figure 1: Performance of OpenHands [1], AIDE [2], R&D-Agent [3] and ML-Master on MLE-Bench [4].
# Abstract
As AI capabilities advance toward and potentially ... | Agent |
509 | arxiv_2506.17188.md | Agent_109 | Towards AI Search Paradigm | In this paper, we introduce the AI Search Paradigm, a comprehensive blueprint for next-generation search systems capable of emulating human information processing and decision-making. The paradigm employs a modular architecture of four LLM-powered agents (Master, Planner, Executor and Writer) that dynamically adapt to ... | https://arxiv.org/abs/2506.17188 | 2,025 | # Towards AI Search Paradigm
# Abstract
In this paper, we introduce the AI Search Paradigm, a comprehensive blueprint for next-generation search systems capable of emulating human information processing and decision-making. The paradigm employs a modular architecture of four LLM-powered agents (Master, Planner, Executo... | Agent |
510 | arxiv_2401.07339.md | Agent_110 | CodeAgent: Enhancing Code Generation with Tool-Integrated Agent Systems for Real-World Repo-level Coding Challenges | Large Language Models (LLMs) have shown promise in automated code generation but typically excel only in simpler tasks such as generating standalone code units. Real-world software development, however, often involves complex code repositories (named repo) with complex dependencies and extensive documentation. To fill ... | https://aclanthology.org/2024.acl-long.737/ | 2,024 | # CodeAgent: Enhancing Code Generation with Tool-Integrated Agent Systems for Real-World Repo-level Coding Challenges
# Abstract
Large Language Models (LLMs) have shown promise in automated code generation but typically excel only in simpler tasks such as generating standalone code units. However, real-world software... | Agent |
511 | arxiv_2503.13964.md | Agent_111 | MDocAgent: A Multi-Modal Multi-Agent Framework for Document Understanding | Document Question Answering (DocQA) is a very common task. Existing methods using Large Language Models (LLMs) or Large Vision Language Models (LVLMs) and Retrieval Augmented Generation (RAG) often prioritize information from a single modal, failing to effectively integrate textual and visual cues. These approaches str... | https://arxiv.org/abs/2503.13964 | 2,025 | # MDocAgent: A Multi-Modal Multi-Agent Framework for Document Understanding
# Abstract
Document Question Answering (DocQA) is a very common task. Existing methods using Large Language Models (LLMs) or Large Vision Language Models (LVLMs) and Retrieval Augmented Generation (RAG) often prioritize information from a sin... | Agent |
512 | arxiv_2506.10844.md | Agent_112 | CIIR@LiveRAG 2025: Optimizing Multi-Agent Retrieval Augmented Generation through Self-Training | This paper presents mRAG, a multi-agent retrieval-augmented generation (RAG) framework composed of specialized agents for subtasks such as planning, searching, reasoning, and coordination. Our system uses a self-training paradigm with reward-guided trajectory sampling to optimize inter-agent collaboration and enhance r... | https://arxiv.org/abs/2506.10844 | 2,025 | # CIIR@LiveRAG 2025: Optimizing Multi-Agent Retrieval Augmented Generation through Self-Training
# ABSTRACT
This paper presents mRAG, a multi-agent retrieval-augmented generation (RAG) framework composed of specialized agents for subtasks such as planning, searching, reasoning, and coordination. Our system uses a sel... | Agent |
513 | arxiv_2506.18019.md | Agent_113 | Graphs Meet AI Agents: Taxonomy, Progress, and Future Opportunities | AI agents have experienced a paradigm shift, from early dominance by reinforcement learning (RL) to the rise of agents powered by large language models (LLMs), and now further advancing towards a synergistic fusion of RL and LLM capabilities. This progression has endowed AI agents with increasingly strong abilities. De... | https://arxiv.org/abs/2506.18019 | 2,025 | # Graphs Meet AI Agents: Taxonomy, Progress, and Future Opportunities
Abstract--AI agents have experienced a paradigm shift, from early dominance by reinforcement learning (RL) to the rise of agents powered by large language models (LLMs), and now further advancing towards a synergistic fusion of RL and LLM capabiliti... | Agent |
514 | arxiv_2506.18959.md | Agent_114 | From Web Search towards Agentic Deep Research: Incentivizing Search with Reasoning Agents | Information retrieval is a cornerstone of modern knowledge acquisition, enabling billions of queries each day across diverse domains. However, traditional keyword-based search engines are increasingly inadequate for handling complex, multi-step information needs. Our position is that Large Language Models (LLMs), endow... | https://arxiv.org/abs/2506.18959 | 2,025 | # From Web Search towards Agentic Deep Research: Incentivizing Search with Reasoning Agents
# Abstract
Information retrieval is a cornerstone of modern knowledge acquisition, enabling billions of queries each day across diverse domains. However, traditional keyword-based search engines are increasingly inadequate for ... | Agent |
515 | arxiv_2506.21931.md | Agent_115 | ARAG: Agentic Retrieval Augmented Generation for Personalized Recommendation | Retrieval-Augmented Generation (RAG) has shown promise in enhancing recommendation systems by incorporating external context into large language model prompts. However, existing RAG-based approaches often rely on static retrieval heuristics and fail to capture nuanced user preferences in dynamic recommendation scenario... | https://arxiv.org/abs/2506.21931 | 2,025 | # ARAG: Agentic Retrieval Augmented Generation for Personalized Recommendation
# ABSTRACT
Retrieval-Augmented Generation (RAG) has shown promise in enhancing recommendation systems by incorporating external context into large language model prompts. However, existing RAG-based approaches often rely on static retrieval... | Agent |
516 | arxiv_2507.02259.md | Agent_116 | MemAgent: Reshaping Long-Context LLM with Multi-Conv RL-based Memory Agent | Despite improvements by length extrapolation, efficient attention and memory modules, handling infinitely long documents with linear complexity without performance degradation during extrapolation remains the ultimate challenge in long-text processing. We directly optimize for long-text tasks in an end-to-end fashion a... | https://arxiv.org/abs/2507.02259 | 2,025 | # MemAgent: Reshaping Long-Context LLM with Multi-Conv RL-based Memory Agent
# Abstract
Despite improvements by length extrapolation, efficient attention and memory modules, handling infinitely long documents with linear complexity without performance degradation during extrapolation remains the ultimate challenge in ... | Agent |
517 | arxiv_2507.02592.md | Agent_117 | WebSailor: Navigating Super-human Reasoning for Web Agent | Transcending human cognitive limitations represents a critical frontier in LLM training. Proprietary agentic systems like DeepResearch have demonstrated superhuman capabilities on extremely complex information-seeking benchmarks such as BrowseComp, a feat previously unattainable. We posit that their success hinges on a... | https://arxiv.org/abs/2507.02592 | 2,025 | # WebSailor: Navigating Super-human Reasoning for Web Agent
# Abstract
Transcending human cognitive limitations represents a critical frontier in LLM training. Proprietary agentic systems like DeepResearch have demonstrated superhuman capabilities on extremely complex information-seeking benchmarks such as BrowseComp... | Agent |
518 | arxiv_2207.01206.md | Agent_118 | WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents | Existing benchmarks for grounding language in interactive environments either lack real-world linguistic elements, or prove difficult to scale up due to substantial human involvement in the collection of data or feedback signals. To bridge this gap, we develop WebShop -- a simulated e-commerce website environment with ... | https://dl.acm.org/doi/10.5555/3600270.3601778 | 2,022 | # WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents
# Abstract
Existing benchmarks for grounding language in interactive environments either lack real-world linguistic elements, or prove difficult to scale up due to substantial human involvement in the collection of data or feedback s... | Agent |
519 | arxiv_2207.05608.md | Agent_119 | Inner Monologue: Embodied Reasoning through Planning with Language Models | Recent works have shown how the reasoning capabilities of Large Language Models (LLMs) can be applied to domains beyond natural language processing, such as planning and interaction for robots. These embodied problems require an agent to understand many semantic aspects of the world: the repertoire of skills available,... | https://openreview.net/forum?id=3R3Pz5i0tye | 2,022 | # Inner Monologue: Embodied Reasoning through Planning with Language Models
Abstract: Recent works have shown how the reasoning capabilities of Large Language Models (LLMs) can be applied to domains beyond natural language processing, such as planning and interaction for robots. These embodied problems require an agen... | Agent |
520 | arxiv_2212.04088.md | Agent_120 | LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models | This study focuses on using large language models (LLMs) as a planner for embodied agents that can follow natural language instructions to complete complex tasks in a visually-perceived environment. The high data cost and poor sample efficiency of existing methods hinders the development of versatile agents that are ca... | https://ieeexplore.ieee.org/document/10378628 | 2,023 | # LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models
# Abstract
This study focuses on using large language models (LLMs) as a planner for embodied agents that can follow natural language instructions to complete complex tasks in a visually-perceived environment. The high data cost ... | Agent |
521 | arxiv_2301.05327.md | Agent_121 | Blind Judgement: Agent-Based Supreme Court Modelling With GPT | We present a novel Transformer-based multi-agent system for simulating the judicial rulings of the 2010-2016 Supreme Court of the United States. We train nine separate models with the respective authored opinions of each supreme justice active ca. 2015 and test the resulting system on 96 real-world cases. We find our s... | https://arxiv.org/abs/2301.05327 | 2,023 | # Blind Judgement: Agent-Based Supreme Court Modelling With GPT
# Abstract
We present a novel Transformer-based multi-agent system for simulating the judicial rulings of the 2010-2016 Supreme Court of the United States. We train nine separate models with the respective authored opinions of each supreme justice active... | Agent |
522 | arxiv_2301.12050.md | Agent_122 | Do Embodied Agents Dream of Pixelated Sheep: Embodied Decision Making using Language Guided World Modelling | Reinforcement learning (RL) agents typically learn tabula rasa, without prior knowledge of the world. However, if initialized with knowledge of high-level subgoals and transitions between subgoals, RL agents could utilize this Abstract World Model (AWM) for planning and exploration. We propose using few-shot large lang... | https://dl.acm.org/doi/10.5555/3618408.3619504 | 2,023 | # Do Embodied Agents Dream of Pixelated Sheep?: Embodied Decision Making using Language Guided World Modelling
# Abstract
Reinforcement learning (RL) agents typically learn tabula rasa, without prior knowledge of the world. However, if initialized with knowledge of high-level subgoals and transitions between subgoals... | Agent |
523 | arxiv_2302.00763.md | Agent_123 | Collaborating with language models for embodied reasoning | Reasoning in a complex and ambiguous environment is a key goal for Reinforcement Learning (RL) agents. While some sophisticated RL agents can successfully solve difficult tasks, they require a large amount of training data and often struggle to generalize to new unseen environments and new tasks. On the other hand, Lar... | https://arxiv.org/abs/2302.00763 | 2,023 | # Collaborating with language models for embodied reasoning
# Abstract
Reasoning in a complex and ambiguous environment is a key goal for Reinforcement Learning (RL) agents. While some sophisticated RL agents can successfully solve difficult tasks, they require a large amount of training data and often struggle to ge... | Agent |
524 | arxiv_2302.01560.md | Agent_124 | Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents | We investigate the challenge of task planning for multi-task embodied agents in open-world environments. Two main difficulties are identified: 1) executing plans in an open-world environment (e.g., Minecraft) necessitates accurate and multi-step reasoning due to the long-term nature of tasks, and 2) as vanilla planners... | https://dl.acm.org/doi/10.5555/3666122.3667602 | 2,023 | # Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents
# Abstract
We investigate the challenge of task planning for multi-task embodied agents in open-world environments. Two main difficulties are identified: 1) executing plans in an open-world envi... | Agent |
525 | arxiv_2303.17491.md | Agent_125 | Language Models can Solve Computer Tasks | Agents capable of carrying out general tasks on a computer can improve efficiency and productivity by automating repetitive tasks and assisting in complex problem-solving. Ideally, such agents should be able to solve new computer tasks presented to them through natural language commands. However, previous approaches to... | https://dl.acm.org/doi/10.5555/3666122.3667845 | 2,023 | # Language Models can Solve Computer Tasks
# Abstract
Agents capable of carrying out general tasks on a computer can improve efficiency and productivity by automating repetitive tasks and assisting in complex problem-solving. Ideally, such agents should be able to solve new computer tasks presented to them through nat... | Agent |
526 | arxiv_2303.17580.md | Agent_126 | HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face | Solving complicated AI tasks with different domains and modalities is a key step toward artificial general intelligence. While there are numerous AI models available for various domains and modalities, they cannot handle complicated AI tasks autonomously. Considering large language models (LLMs) have exhibited exceptio... | https://dl.acm.org/doi/10.5555/3666122.3667779 | 2,023 | # HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face
# Abstract
Solving complicated AI tasks with different domains and modalities is a key step toward artificial general intelligence. While there are numerous AI models available for various domains and modalities, they cannot handle complicate... | Agent |
527 | arxiv_2304.04370.md | Agent_127 | OpenAGI: When LLM Meets Domain Experts | Human Intelligence (HI) excels at combining basic skills to solve complex tasks. This capability is vital for Artificial Intelligence (AI) and should be embedded in comprehensive AI Agents, enabling them to harness expert models for complex task-solving towards Artificial General Intelligence (AGI). Large Language Mode... | https://dl.acm.org/doi/abs/10.5555/3666122.3666364 | 2,023 | # OpenAGI: When LLM Meets Domain Experts
# Abstract
Human Intelligence (HI) excels at combining basic skills to solve complex tasks. This capability is vital for Artificial Intelligence (AI) and should be embedded in comprehensive AI Agents, enabling them to harness expert models for complex task-solving towards Arti... | Agent |
528 | arxiv_2304.05332.md | Agent_128 | Emergent autonomous scientific research capabilities of large language models | Transformer-based large language models are rapidly advancing in the field of machine learning research, with applications spanning natural language, biology, chemistry, and computer programming. Extreme scaling and reinforcement learning from human feedback have significantly improved the quality of generated text, en... | https://arxiv.org/abs/2304.05332 | 2,023 | # Emergent autonomous scientific research capabilities of large language models
# Abstract
Transformer-based large language models are rapidly advancing in the field of machine learning research, with applications spanning natural language, biology, chemistry, and computer programming. Extreme scaling and reinforceme... | Agent |
529 | arxiv_2304.05376.md | Agent_129 | ChemCrow: Augmenting large-language models with chemistry tools | Over the last decades, excellent computational chemistry tools have been developed. Integrating them into a single platform with enhanced accessibility could help reach their full potential by overcoming steep learning curves. Recently, large-language models (LLMs) have shown strong performance in tasks across domai... | https://www.nature.com/articles/s42256-024-00832-8 | 2,024 | # Augmenting large language models with chemistry tools
# Abstract
Over the last decades, excellent computational chemistry tools have been developed. Integrating them into a single platform with enhanced accessibility could help reach their full potential by overcoming steep learning curves. Recently, large-langu... | Agent |
530 | arxiv_2304.07590.md | Agent_130 | Self-collaboration Code Generation via ChatGPT | Although Large Language Models (LLMs) have demonstrated remarkable code-generation ability, they still struggle with complex tasks. In real-world software development, humans usually tackle complex tasks through collaborative teamwork, a strategy that significantly controls development complexity and enhances software ... | https://dl.acm.org/doi/10.1145/3672459 | 2,024 | # Self-collaboration Code Generation via ChatGPT
Although Large Language Models (LLMs) have demonstrated remarkable code-generation ability, they still struggle with complex tasks. In real-world software development, humans usually tackle complex tasks through collaborative teamwork, a strategy that significantly cont... | Agent |
531 | arxiv_2304.08244.md | Agent_131 | API-Bank: A Comprehensive Benchmark for Tool-Augmented LLMs | Recent research has demonstrated that Large Language Models (LLMs) can enhance their capabilities by utilizing external tools. However, three pivotal questions remain unanswered: (1) How effective are current LLMs in utilizing tools? (2) How can we enhance LLMs' ability to utilize tools? (3) What obstacles need to be o... | https://aclanthology.org/2023.emnlp-main.187/ | 2,023 | # API-Bank: A Comprehensive Benchmark for Tool-Augmented LLMs
# Abstract
Recent research has demonstrated that Large Language Models (LLMs) can enhance their capabilities by utilizing external tools. However, three pivotal questions remain unanswered: (1) How effective are current LLMs in utilizing tools? (2) How can... | Agent |
532 | arxiv_2304.09842.md | Agent_132 | Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models | Large language models (LLMs) have achieved remarkable progress in solving various natural language processing tasks due to emergent reasoning abilities. However, LLMs have inherent limitations as they are incapable of accessing up-to-date information (stored on the Web or in task-specific knowledge bases), using extern... | https://dl.acm.org/doi/10.5555/3666122.3668004 | 2,023 | # Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models
# Abstract
Large language models (LLMs) have achieved remarkable progress in solving various natural language processing tasks due to emergent reasoning abilities. However, LLMs have inherent limitations as they are incapable of accessing u... | Agent |
533 | arxiv_2304.10750.md | Agent_133 | Improving Grounded Language Understanding in a Collaborative Environment by Interacting with Agents Through Help Feedback | Many approaches to Natural Language Processing (NLP) tasks often treat them as single-step problems, where an agent receives an instruction, executes it, and is evaluated based on the final outcome. However, human language is inherently interactive, as evidenced by the back-and-forth nature of human conversations. In l... | https://aclanthology.org/2024.findings-eacl.87/ | 2,024 | # Improving Grounded Language Understanding in a Collaborative Environment by Interacting with Agents Through Help Feedback
# Abstract
Many approaches to Natural Language Processing tasks often treat them as single-step problems, where an agent receives an instruction, executes it, and is evaluated based on the final... | Agent |
534 | arxiv_2304.13343.md | Agent_134 | SCM: Enhancing Large Language Model with Self-Controlled Memory Framework | Large Language Models (LLMs) are constrained by their inability to process lengthy inputs, resulting in the loss of critical historical information. To address this limitation, in this paper, we propose the Self-Controlled Memory (SCM) framework to enhance the ability of LLMs to maintain long-term memory and recall rel... | https://arxiv.org/abs/2304.13343 | 2,023 | # Enhancing Large Language Model with Self-Controlled Memory Framework
# Abstract
Large Language Models (LLMs) are constrained by their inability to process lengthy inputs, resulting in the loss of critical historical information. To address this limitation, in this paper, we propose the Self-Controlled Memory (SCM) ... | Agent |
535 | arxiv_2304.14721.md | Agent_135 | Towards autonomous system: flexible modular production system enhanced with large language model agents | In this paper, we present a novel framework that combines large language models (LLMs), digital twins and industrial automation system to enable intelligent planning and control of production processes. We retrofit the automation system for a modular production facility and create executable control interfaces of fine-... | https://arxiv.org/abs/2304.14721 | 2,023 | # Towards autonomous system: flexible modular production system enhanced with large language model agents
Abstract - In this paper, we present a novel framework that combines large language models (LLMs), digital twins and industrial automation system to enable intelligent planning and control of production processes.... | Agent |
536 | arxiv_2305.02412.md | Agent_136 | Plan, Eliminate, and Track -- Language Models are Good Teachers for Embodied Agents | Pre-trained large language models (LLMs) capture procedural knowledge about the world. Recent work has leveraged LLM's ability to generate abstract plans to simplify challenging control tasks, either by action scoring, or action modeling (fine-tuning). However, the transformer architecture inherits several constraints ... | https://arxiv.org/abs/2305.02412 | 2,023 | # Plan, Eliminate, and Track -- Language Models are Good Teachers for Embodied Agents
# Abstract
Pre-trained large language models (LLMs) capture procedural knowledge about the world. Recent work has leveraged LLM's ability to generate abstract plans to simplify challenging control tasks, either by action scoring, or a... | Agent |
537 | arxiv_2305.08144.md | Agent_137 | Mobile-Env: Building Qualified Evaluation Benchmarks for LLM-GUI Interaction | The Graphical User Interface (GUI) is pivotal for human interaction with the digital world, enabling efficient device control and the completion of complex tasks. Recent progress in Large Language Models (LLMs) and Vision Language Models (VLMs) offers the chance to create advanced GUI agents. To ensure their effectiven... | https://arxiv.org/abs/2305.08144 | 2,023 | # Mobile-Env: Building Qualified Evaluation Benchmarks for LLM-GUI Interaction
# Abstract
The Graphical User Interface (GUI) is pivotal for human interaction with the digital world, enabling efficient device control and the completion of complex tasks. Recent progress in Large Language Models (LLMs) and Vision Langua... | Agent |
538 | arxiv_2305.10250.md | Agent_138 | MemoryBank: Enhancing Large Language Models with Long-Term Memory | Revolutionary advancements in Large Language Models have drastically reshaped our interactions with artificial intelligence systems. Despite this, a notable hindrance remains-the deficiency of a long-term memory mechanism within these models. This shortfall becomes increasingly evident in situations demanding sustained... | https://dl.acm.org/doi/10.1609/aaai.v38i17.29946 | 2,024 | # MemoryBank: Enhancing Large Language Models with Long-Term Memory
# Abstract
Revolutionary advancements in Large Language Models (LLMs) have drastically reshaped our interactions with artificial intelligence (AI) systems, showcasing impressive performance across an extensive array of tasks. Despite this, a notable ... | Agent |
549 | arxiv_2306.16092.md | Agent_149 | Chatlaw: A Multi-Agent Collaborative Legal Assistant with Knowledge Graph Enhanced Mixture-of-Experts Large Language Model | AI legal assistants based on Large Language Models (LLMs) can provide accessible legal consulting services, but the hallucination problem poses potential legal risks. This paper presents Chatlaw, an innovative legal assistant utilizing a Mixture-of-Experts (MoE) model and a multi-agent system to enhance the reliability... | https://arxiv.org/abs/2306.16092 | 2,023 | # Chatlaw: A Multi-Agent Collaborative Legal Assistant with Knowledge Graph Enhanced Mixture-of-Experts Large Language Model
# ABSTRACT
AI legal assistants based on Large Language Models (LLMs) can provide accessible legal consulting services, but the hallucination problem poses potential legal risks. This paper prese... | Agent |
539 | arxiv_2305.10626.md | Agent_139 | Language Models Meet World Models: Embodied Experiences Enhance Language Models | While large language models (LMs) have shown remarkable capabilities across numerous tasks, they often struggle with simple reasoning and planning in physical environments, such as understanding object permanence or planning household activities. The limitation arises from the fact that LMs are trained only on written ... | https://dl.acm.org/doi/10.5555/3666122.3669417 | 2,023 | # Language Models Meet World Models: Embodied Experiences Enhance Language Models
# Abstract
While large language models (LMs) have shown remarkable capabilities across numerous tasks, they often struggle with simple reasoning and planning in physical environments, such as understanding object permanence or planning ... | Agent |
540 | arxiv_2305.11598.md | Agent_140 | Introspective Tips: Large Language Model for In-Context Decision Making | The emergence of large language models (LLMs) has substantially influenced natural language processing, demonstrating exceptional results across various tasks. In this study, we employ ``Introspective Tips" to facilitate LLMs in self-optimizing their decision-making. By introspectively examining trajectories, LLM refin... | https://arxiv.org/abs/2305.11598 | 2,023 | # Introspective Tips: Large Language Model for In-Context Decision Making
# Abstract
The emergence of large language models (LLMs) has substantially influenced natural language processing, demonstrating exceptional results across various tasks. In this study, we employ "Introspective Tips" to facilitate LLMs in self-... | Agent |
541 | arxiv_2305.13455.md | Agent_141 | Clembench: Using Game Play to Evaluate Chat-Optimized Language Models as Conversational Agents | Recent work has proposed a methodology for the systematic evaluation of "Situated Language Understanding Agents"-agents that operate in rich linguistic and non-linguistic contexts-through testing them in carefully constructed interactive settings. Other recent work has argued that Large Language Models (LLMs), if suita... | https://arxiv.org/abs/2305.13455 | 2,023 | # clembench: Using Game Play to Evaluate Chat-Optimized Language Models as Conversational Agents
# Abstract
Recent work has proposed a methodology for the systematic evaluation of "Situated Language Understanding Agents"-agents that operate in rich linguistic and non-linguistic contexts-through testing them in carefu... | Agent |
542 | arxiv_2305.16291.md | Agent_142 | Voyager: An Open-Ended Embodied Agent with Large Language Models | We introduce Voyager, the first LLM-powered embodied lifelong learning agent in Minecraft that continuously explores the world, acquires diverse skills, and makes novel discoveries without human intervention. Voyager consists of three key components: 1) an automatic curriculum that maximizes exploration, 2) an ever-gro... | https://openreview.net/forum?id=ehfRiF0R3a | 2,024 | # VOYAGER: An Open-Ended Embodied Agent with Large Language Models
# Abstract
We introduce VOYAGER, the first LLM-powered embodied lifelong learning agent in Minecraft that continuously explores the world, acquires diverse skills, and makes novel discoveries without human intervention. VOYAGER consists of three key c... | Agent |
543 | arxiv_2305.16867.md | Agent_143 | Playing repeated games with Large Language Models | LLMs are increasingly used in applications where they interact with humans and other agents. We propose to use behavioural game theory to study LLM's cooperation and coordination behaviour. We let different LLMs play finitely repeated 2×2 games with each other, with human-like strategies, and actual human players. Our ... | https://www.nature.com/articles/s41562-025-02172-y | 2,025 | # Playing repeated games with Large Language Models
# ABSTRACT
LLMs are increasingly used in applications where they interact with humans and other agents. We propose to use behavioural game theory to study LLM's cooperation and coordination behaviour. We let different LLMs play finitely repeated $2 \times 2$ games w... | Agent |
544 | arxiv_2305.17144.md | Agent_144 | Ghost in the Minecraft: Generally Capable Agents for Open-World Environments via Large Language Models with Text-based Knowledge and Memory | The captivating realm of Minecraft has attracted substantial research interest in recent years, serving as a rich platform for developing intelligent agents capable of functioning in open-world environments. However, the current research landscape predominantly focuses on specific objectives, such as the popular "Obtai... | https://arxiv.org/abs/2305.17144 | 2,023 | # Ghost in the Minecraft: Generally Capable Agents for Open-World Environments via Large Language Models with Text-based Knowledge and Memory
# Abstract
The captivating realm of Minecraft has attracted substantial research interest in recent years, serving as a rich platform for developing intelligent agents capable ... | Agent |
545 | arxiv_2305.20076.md | Agent_145 | Decision-Oriented Dialogue for Human-AI Collaboration | We describe a class of tasks called decision-oriented dialogues, in which AI assistants such as large language models (LMs) must collaborate with one or more humans via natural language to help them make complex decisions. We formalize three domains in which users face everyday decisions: (1) choosing an assignment of ... | https://aclanthology.org/2024.tacl-1.50/ | 2,024 | # Decision-Oriented Dialogue for Human-AI Collaboration
# Abstract
We describe a class of tasks called decision-oriented dialogues, in which AI assistants such as large language models (LMs) must collaborate with one or more humans via natural language to help them make complex decisions. We formalize three domains in... | Agent |
546 | arxiv_2306.03604.md | Agent_146 | Enabling Intelligent Interactions between an Agent and an LLM: A Reinforcement Learning Approach | Large language models (LLMs) encode a vast amount of world knowledge acquired from massive text datasets. Recent studies have demonstrated that LLMs can assist an embodied agent in solving complex sequential decision making tasks by providing high-level instructions. However, interactions with LLMs can be time-consumin... | https://arxiv.org/abs/2306.03604 | 2,023 | # Enabling Intelligent Interactions between an Agent and an LLM: A Reinforcement Learning Approach
# Abstract
Large language models (LLMs) encode a vast amount of world knowledge acquired from massive text datasets. Recent studies have demonstrated that LLMs can assist an embodied agent in solving complex sequential ... | Agent |
547 | arxiv_2306.05152.md | Agent_147 | Towards Autonomous Testing Agents via Conversational Large Language Models | Software testing is an important part of the development cycle, yet it requires specialized expertise and substantial developer effort to adequately test software. Recent discoveries of the capabilities of large language models (LLMs) suggest that they can be used as automated testing assistants, and thus provide helpf... | https://www.computer.org/csdl/proceedings-article/ase/2023/299600b688/1SBGtz3SJRm | 2,023 | # Towards Autonomous Testing Agents via Conversational Large Language Models
# I. INTRODUCTION
Software testing, an integral part of the development cycle, enables quality assurance and bug detection prior to deployment, for example via continuous integration practices [1]. However, automated software testing can be ... | Agent |
548 | arxiv_2306.07929.md | Agent_148 | Large Language Models Are Semi-Parametric Reinforcement Learning Agents | Inspired by the insights in cognitive science with respect to human memory and reasoning mechanism, a novel evolvable LLM-based (Large Language Model) agent framework is proposed as REMEMBERER. By equipping the LLM with a long-term experience memory, REMEMBERER is capable of exploiting the experiences from the past epi... | https://dl.acm.org/doi/10.5555/3666122.3669541 | 2,023 | # Large Language Models Are Semi-Parametric Reinforcement Learning Agents
# Abstract
Inspired by the insights in cognitive science with respect to human memory and reasoning mechanism, a novel evolvable LLM-based (Large Language Model) agent framework is proposed as REMEMBERER. By equipping the LLM with a long-term ex... | Agent |
550 | arxiv_2307.01848.md | Agent_150 | Embodied Task Planning with Large Language Models | Equipping embodied agents with commonsense is important for robots to successfully complete complex human instructions in general environments. Recent large language models (LLM) can embed rich semantic knowledge for agents in plan generation of complex tasks, while they lack the information about the realistic world a... | https://arxiv.org/abs/2307.01848 | 2,023 | # Embodied Task Planning with Large Language Models
Abstract: Equipping embodied agents with commonsense is important for robots to successfully complete complex human instructions in general environments. Recent large language models (LLM) can embed rich semantic knowledge for agents in plan generation of complex tas... | Agent |
551 | arxiv_2307.02485.md | Agent_151 | Building Cooperative Embodied Agents Modularly with Large Language Models | In this work, we address challenging multi-agent cooperation problems with decentralized control, raw sensory observations, costly communication, and multi-objective tasks instantiated in various embodied environments. While previous research either presupposes a cost-free communication channel or relies on a centraliz... | https://openreview.net/forum?id=EnXJfQqy0K | 2,024 | # BUILDING COOPERATIVE EMBODIED AGENTS MODULARLY WITH LARGE LANGUAGE MODELS
# ABSTRACT
In this work, we address challenging multi-agent cooperation problems with decentralized control, raw sensory observations, costly communication, and multi-objective tasks instantiated in various embodied environments. While previous... | Agent |
552 | arxiv_2307.02502.md | Agent_152 | Math Agents: Computational Infrastructure, Mathematical Embedding, and Genomics | The advancement in generative AI could be boosted with more accessible mathematics. Beyond human-AI chat, large language models (LLMs) are emerging in programming, algorithm discovery, and theorem proving, yet their genomics application is limited. This project introduces Math Agents and mathematical embedding as fresh... | https://arxiv.org/abs/2307.02502 | 2,023 | # Math Agents: Computational Infrastructure, Mathematical Embedding, and Genomics
# Abstract
The innovation in generative AI could be further accelerated with more-readily usable and evaluable mathematics as part of the computational infrastructure. Beyond human-AI chat interaction, LLM (large language model)-based m... | Agent |
553 | arxiv_2307.04986.md | Agent_153 | Epidemic Modeling with Generative Agents | This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language ... | https://arxiv.org/abs/2307.04986 | 2,023 | # Epidemic Modeling with Generative Agents
# Abstract
This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reas... | Agent |
554 | arxiv_2307.07871.md | Agent_154 | The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents | Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a... | https://openreview.net/forum?id=Y5r8Wa67Ob#all | 2,023 | # The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
# Abstract
Developmental psychologists have long-established socio-cognitive abilities as fundamental to human intelligence and development. These abilities enable individuals to enter, learn from, and contribute to... | Agent |
555 | arxiv_2307.07924.md | Agent_155 | ChatDev: Communicative Agents for Software Development | Software development is a complex task that necessitates cooperation among multiple members with diverse skills. Numerous studies used deep learning to improve specific phases in a waterfall model, such as design, coding, and testing. However, the deep learning model in each phase requires unique designs, leading to te... | https://aclanthology.org/2024.acl-long.810/ | 2,024 | # ChatDev: Communicative Agents for Software Development
# Abstract
Software development is a complex task that necessitates cooperation among multiple members with diverse skills. Numerous studies used deep learning to improve specific phases in a waterfall model, such as design, coding, and testing. However, the de... | Agent |
556 | arxiv_2307.09668.md | Agent_156 | Towards A Unified Agent with Foundation Models | Language Models and Vision Language Models have recently demonstrated unprecedented capabilities in terms of understanding human intentions, reasoning, scene understanding, and planning-like behaviour, in text form, among many others. In this work, we investigate how to embed and leverage such abilities in Reinforcemen... | https://openreview.net/forum?id=JK_B1tB6p- | 2,023 | # TOWARDS A UNIFIED AGENT WITH FOUNDATION MODELS
# ABSTRACT
Language Models and Vision Language Models have recently demonstrated unprecedented capabilities in terms of understanding human intentions, reasoning, scene understanding, and planning-like behaviour, in text form, among many others. In this work, we invest... | Agent |
557 | arxiv_2307.10337.md | Agent_157 | Are you in a Masquerade? Exploring the Behavior and Impact of Large Language Model Driven Social Bots in Online Social Networks | As the capabilities of Large Language Models (LLMs) emerge, they not only assist in accomplishing traditional tasks within more efficient paradigms but also stimulate the evolution of social bots. Researchers have begun exploring the implementation of LLMs as the driving core of social bots, enabling more efficient and... | https://arxiv.org/abs/2307.10337 | 2,023 | # Are you in a Masquerade? Exploring the Behavior and Impact of Large Language Model Driven Social Bots in Online Social Networks
As the capabilities of Large Language Models (LLMs) emerge, they not only assist in accomplishing traditional tasks within more efficient paradigms but also stimulate the evolution of socia... | Agent |
558 | arxiv_2307.12573.md | Agent_158 | Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models | Recent advancements in natural language and Large Language Models (LLMs) have enabled AI agents to simulate human-like interactions within virtual worlds. However, these interactions still face limitations in complexity and flexibility, particularly in scenarios involving multiple characters and novel objects. Pre-defi... | https://arxiv.org/abs/2307.12573 | 2,023 | # Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
# Abstract
Recent advancements in natural language and Large Language Models (LLMs) have enabled AI agents to simulate human-like interactions within virtual worlds. However, these interactions still face li... | Agent |
559 | arxiv_2307.14984.md | Agent_159 | S^3: Social-network Simulation System with Large Language Model-Empowered Agents | Social network simulation plays a crucial role in addressing various challenges within social science. It offers extensive applications such as state prediction, phenomena explanation, and policy-making support, among others. In this work, we harness the formidable human-like capabilities exhibited by large language mo... | https://arxiv.org/abs/2307.14984 | 2,023 | # S^3: Social-network Simulation System with Large Language Model-Empowered Agents
# Abstract
Simulation plays a crucial role in addressing various challenges within social science. It offers extensive applications such as state prediction, phenomena explanation, and policy-making support, among others. In this work,... | Agent |
560 | arxiv_2307.15810.md | Agent_160 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have increasingly been utilized in the realm of mental well-being support. However, the implications and outcomes associated with their usage in such a critical field remain somewhat ambiguous and unexplored. We conducted a qualitative analysis of 120 posts, ... | https://dl.acm.org/doi/10.1145/3544548.3581503 | 2,023 | # Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support
# Abstract
Conversational agents powered by large language models (LLM) have increasingly been utilized in the realm of mental well-being support. However, the implications and outcomes a... | Agent |
561 | arxiv_2307.15833.md | Agent_161 | Dialogue Shaping: Empowering Agents through NPC Interaction | One major challenge in reinforcement learning (RL) is the large amount of steps for the RL agent needs to converge in the training process and learn the optimal policy, especially in text-based game environments where the action space is extensive. However, non-player characters (NPCs) sometimes hold some key informati... | https://arxiv.org/abs/2307.15833 | 2,023 | # Dialogue Shaping: Empowering Agents through NPC Interaction
# Abstract
One major challenge in reinforcement learning (RL) is the large number of steps the RL agent needs to converge in the training process and learn the optimal policy, especially in text-based game environments where the action space is extensi... | Agent |
562 | arxiv_2307.16789.md | Agent_162 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use d... | https://openreview.net/forum?id=dHng2O0Jjr | 2,024 | # TOOLLLM: FACILITATING LARGE LANGUAGE MODELS TO MASTER 16000+ REAL-WORLD APIS
# ABSTRACT
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is... | Agent |
563 | arxiv_2308.00352.md | Agent_163 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex tasks, however, are complicated through logic inconsistencies due to cascading hallu... | https://openreview.net/forum?id=VtmBAGCN7o | 2,024 | # METAGPT: META PROGRAMMING FOR A MULTI-AGENT COLLABORATIVE FRAMEWORK
# ABSTRACT
Remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Existing LLM-based multi-agent systems can already solve simple dialogue tasks. Solutions to more complex t... | Agent |
564 | arxiv_2308.01285.md | Agent_164 | Flows: Building Blocks of Reasoning and Collaborating AI | Recent advances in artificial intelligence (AI) have produced highly capable and controllable systems. This creates unprecedented opportunities for structured reasoning as well as collaboration among multiple AI systems and humans. To fully realize this potential, it is essential to develop a principled way of designin... | https://arxiv.org/abs/2308.01285 | 2,023 | # Flows: Building Blocks of Reasoning and Collaborating AI
# Abstract
Recent advances in artificial intelligence (AI) have produced highly capable and controllable systems. This creates unprecedented opportunities for structured reasoning as well as collaboration among multiple AI systems and humans. To fully realize... | Agent |
565 | arxiv_2308.01423.md | Agent_165 | ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses, thus eliminating the necessity fo... | https://www.nature.com/articles/s41467-024-48998-4 | 2,024 | # ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks
# ABSTRACT
ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to predict and generate metal-organic frameworks (MOFs). By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts... | Agent |
568 | arxiv_2308.04030.md | Agent_168 | Gentopia: A Collaborative Platform for Tool-Augmented LLMs | Augmented Language Models (ALMs) empower large language models with the ability to use tools, transforming them into intelligent agents for real-world interactions. However, most existing frameworks for ALMs, to varying degrees, are deficient in the following critical features: flexible customization, collaborative dem... | https://aclanthology.org/2023.emnlp-demo.20/ | 2,023 | # Gentopia.AI: A Collaborative Platform for Tool-Augmented LLMs
# Abstract
Augmented Language Models (ALMs) empower large language models with the ability to use tools, transforming them into intelligent agents for real-world interactions. However, most existing frameworks for ALMs, to varying degr... | Agent |
569 | arxiv_2308.05481.md | Agent_169 | LLM As DBA | Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently l... | https://arxiv.org/abs/2308.05481 | 2,023 | # LLM As DBA
# ABSTRACT
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cl... | Agent |
570 | arxiv_2308.05960.md | Agent_170 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to generate actions with its core LLM and interact with environments, which facilitates the ability to resolve complex tasks by conditioning on past interactions such as obs... | https://openreview.net/forum?id=BUa5ekiHlQ | 2,024 | # BOLAA: BENCHMARKING AND ORCHESTRATING LLM-AUGMENTED AUTONOMOUS AGENTS
# ABSTRACT
The massive successes of large language models (LLMs) encourage the emerging exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to generate actions with its core LLM and interact with environments, which facilitates ... | Agent |
571 | arxiv_2308.08155.md | Agent_171 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, develop... | https://openreview.net/forum?id=BAakY1hNKS | 2,024 | # AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
# Abstract
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in variou... | Agent |
572 | arxiv_2308.09904.md | Agent_172 | RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents | The rapid evolution of the web has led to an exponential growth in content. Recommender systems play a crucial role in Human-Computer Interaction (HCI) by tailoring content based on individual preferences. Despite their importance, challenges persist in balancing recommendation accuracy with user satisfaction, addressi... | https://arxiv.org/abs/2308.09904 | 2,023 | # RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents
# ABSTRACT
The rapid evolution of the web has led to an exponential growth in content. Recommender systems play a crucial role in Human-Computer Interaction (HCI) by tailoring content based on individual preferences. Despite their... | Agent |
573 | arxiv_2308.10204.md | Agent_173 | ChatEDA: A Large Language Model Powered Autonomous Agent for EDA | The integration of a complex set of Electronic Design Automation (EDA) tools to enhance interoperability is a critical concern for circuit designers. Recent advancements in large language models (LLMs) have showcased their exceptional capabilities in natural language processing and comprehension, offering a novel appro... | https://ieeexplore.ieee.org/document/10485372 | 2,024 | # ChatEDA: A Large Language Model Powered Autonomous Agent for EDA
Abstract - The integration of a complex set of Electronic Design Automation (EDA) tools to enhance interoperability is a critical concern for circuit designers. Recent advancements in large language models (LLMs) have showcased their exceptional capabili... | Agent |
575 | arxiv_2308.11339.md | Agent_175 | ProAgent: Building Proactive Cooperative Agents with Large Language Models | Building agents with adaptive behavior in cooperative tasks stands as a paramount goal in the realm of multi-agent systems. Current approaches to developing cooperative agents rely primarily on learning-based methods, whose policy generalization depends heavily on the diversity of teammates they interact with during th... | https://dl.acm.org/doi/10.1609/aaai.v38i16.29710 | 2,024 | # ProAgent: Building Proactive Cooperative Agents with Large Language Models
# Abstract
Building agents with adaptive behavior in cooperative tasks stands as a paramount goal in the realm of multi-agent systems. Current approaches to developing cooperative agents rely primarily on learning-based methods, whose policy... | Agent |
576 | arxiv_2308.14296.md | Agent_176 | RecMind: Large Language Model Powered Agent For Recommendation | While the recommendation system (RS) has advanced significantly through deep learning, current RS approaches usually train and fine-tune models on task-specific datasets, limiting their generalizability to new recommendation tasks and their ability to leverage external knowledge due to model scale and data size constra... | https://aclanthology.org/2024.findings-naacl.271/ | 2,024 | # RecMind: Large Language Model Powered Agent For Recommendation
# Abstract
While the recommendation system (RS) has advanced significantly through deep learning, current RS approaches usually train and fine-tune models on task-specific datasets, limiting their generalizability to new recommendation tasks and their ab... | Agent |
577 | arxiv_2308.16505.md | Agent_177 | Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations | Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language mode... | https://dl.acm.org/doi/10.1145/3731446 | 2,025 | # Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
# Abstract
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tas... | Agent |
578 | arxiv_2309.09971.md | Agent_178 | MindAgent: Emergent Gaming Interaction | Large Language Models (LLMs) have the capacity of performing complex scheduling in a multi-agent system and can coordinate these agents into completing sophisticated tasks that require extensive collaboration. However, despite the introduction of numerous gaming frameworks, the community has insufficient benchmarks tow... | https://aclanthology.org/2024.findings-naacl.200/ | 2024 | # MINDAGENT: EMERGENT GAMING INTERACTION # ABSTRACT Large Language Models (LLMs) have the capacity of performing complex scheduling in a multi-agent system and can coordinate these agents into completing sophisticated tasks that require extensive collaboration. However, despite the introduction of numerous gaming fra... | Agent
579 | arxiv_2310.02003.md | Agent_179 | L2MAC: Large Language Model Automatic Computer for Extensive Code Generation | Transformer-based large language models (LLMs) are constrained by the fixed context window of the underlying transformer architecture, hindering their ability to produce long and coherent outputs. Memory-augmented LLMs are a promising solution, but current approaches cannot handle long output generation tasks since the... | https://openreview.net/forum?id=EhrzQwsV4K | 2024 | # L2MAC: LARGE LANGUAGE MODEL AUTOMATIC COMPUTER FOR EXTENSIVE CODE GENERATION # ABSTRACT Transformer-based large language models (LLMs) are constrained by the fixed context window of the underlying transformer architecture, hindering their ability to produce long and coherent outputs. Memory-augmented LLMs are a prom... | Agent
580 | arxiv_2311.05997.md | Agent_180 | JARVIS-1: Open-World Multi-task Agents with Memory-Augmented Multimodal Language Models | Achieving human-like planning and control with multimodal observations in an open world is a key milestone for more functional generalist agents. Existing approaches can handle certain long-horizon tasks in an open world. However, they still struggle when the number of open-world tasks could potentially be infinite and... | https://ieeexplore.ieee.org/document/10778628 | 2024 | # JARVIS-1: Open-world Multi-task Agents with Memory-Augmented Multimodal Language Models Achieving human-like planning and control with multimodal observations in an open world is a key milestone for more functional generalist agents. Existing approaches can handle certain long-horizon tasks in an open world. However... | Agent
581 | arxiv_2311.12871.md | Agent_181 | An Embodied Generalist Agent in 3D World | Leveraging massive knowledge from large language models (LLMs), recent machine learning models show notable successes in general-purpose task solving in diverse domains such as computer vision and robotics. However, several significant challenges remain: (i) most of these models rely on 2D images yet exhibit a limited ... | https://dl.acm.org/doi/10.5555/3692070.3692890 | 2024 | # An Embodied Generalist Agent in 3D World # Abstract Leveraging massive knowledge from large language models (LLMs), recent machine learning models show notable successes in general-purpose task solving in diverse domains such as computer vision and robotics. However, several significant challenges remain: (i) most o... | Agent
582 | arxiv_2312.10908.md | Agent_182 | CLOVA: A Closed-Loop Visual Assistant with Tool Usage and Update | Utilizing large language models (LLMs) to compose off-the-shelf visual tools represents a promising avenue of research for developing robust visual assistants capable of addressing diverse visual tasks. However, these methods often overlook the potential for continual learning, typically by freezing the utilized tools,... | https://ieeexplore.ieee.org/document/10658369 | 2024 | # CLOVA: A Closed-Loop Visual Assistant with Tool Usage and Update # Abstract Utilizing large language models (LLMs) to compose off-the-shelf visual tools represents a promising avenue of research for developing robust visual assistants capable of addressing diverse visual tasks. However, these methods often overlook ... | Agent
583 | arxiv_2402.04559.md | Agent_183 | Can Large Language Model Agents Simulate Human Trust Behavior? | Large Language Model (LLM) agents have been increasingly adopted as simulation tools to model humans in social science and role-playing applications. However, one fundamental question remains: can LLM agents really simulate human behavior? In this paper, we focus on one critical and elemental behavior in human interact... | https://openreview.net/forum?id=jxCaWgbFp4 | 2024 | # Can Large Language Model Agents Simulate Human Trust Behavior? # Abstract Large Language Model (LLM) agents have been increasingly adopted as simulation tools to model humans in social science and role-playing applications. However, one fundamental question remains: can LLM agents really simulate human behavior? In... | Agent
584 | arxiv_2402.15809.md | Agent_184 | Empowering Large Language Model Agents through Action Learning | Large Language Model (LLM) Agents have recently garnered increasing interest yet they are limited in their ability to learn from trial and error, a key element of intelligent behavior. In this work, we argue that the capacity to learn new actions from experience is fundamental to the advancement of learning in LLM agen... | https://openreview.net/forum?id=KqK5XcgEhR | 2024 | # Empowering Large Language Model Agents through Action Learning # Abstract Large Language Model (LLM) Agents have recently garnered increasing interest yet they are limited in their ability to learn from trial and error, a key element of intelligent behavior. In this work, we argue that the capacity to learn new act... | Agent
585 | arxiv_2405.18027.md | Agent_185 | TimeChara: Evaluating Point-in-Time Character Hallucination of Role-Playing Large Language Models | While Large Language Models (LLMs) can serve as agents to simulate human behaviors (i.e., role-playing agents), we emphasize the importance of point-in-time role-playing. This situates characters at specific moments in the narrative progression for three main reasons: (i) enhancing users' narrative immersion, (ii) avoi... | https://snu.elsevierpure.com/en/publications/timechara-evaluating-point-in-time-character-hallucination-of-rol | 2024 | # TIMECHARA: Evaluating Point-in-Time Character Hallucination of Role-Playing Large Language Models # Abstract While Large Language Models (LLMs) can serve as agents to simulate human behaviors (i.e., role-playing agents), we emphasize the importance of point-in-time role-playing. This situates characters at specific... | Agent
586 | arxiv_2407.18901.md | Agent_186 | AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agents | Autonomous agents that address day-to-day digital tasks (e.g., ordering groceries for a household), must not only operate multiple apps (e.g., notes, messaging, shopping app) via APIs, but also generate rich code with complex control flow in an iterative manner based on their interaction with the environment. However, ... | https://aclanthology.org/2024.acl-long.850/ | 2024 | # AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agents # Abstract Autonomous agents that address day-to-day digital tasks (e.g., ordering groceries for a household), must not only operate multiple apps (e.g., notes, messaging, shopping app) via APIs, but also generate rich co... | Agent
587 | arxiv_2410.06153.md | Agent_187 | AgentSquare: Automatic LLM Agent Search in Modular Design Space | Recent advancements in Large Language Models (LLMs) have led to a rapid growth of agentic systems capable of handling a wide range of complex tasks. However, current research largely relies on manual, task-specific design, limiting their adaptability to novel tasks. In this paper, we introduce a new research problem: M... | https://openreview.net/forum?id=mPdmDYIQ7f | 2025 | # AGENTSQUARE: AUTOMATIC LLM AGENT SEARCH IN MODULAR DESIGN SPACE # ABSTRACT Recent advancements in Large Language Models (LLMs) have led to a rapid growth of agentic systems capable of handling a wide range of complex tasks. However, current research largely relies on manual, task-specific design, limiting their ada... | Agent
588 | arxiv_2410.13825.md | Agent_188 | AgentOccam: A Simple Yet Strong Baseline for LLM-Based Web Agents | Autonomy via agents using large language models (LLMs) for personalized, standardized tasks boosts human efficiency. Automating web tasks (like booking hotels within a budget) is increasingly sought after. Fulfilling practical needs, the web agent also serves as an important proof-of-concept example for various agent g... | https://openreview.net/forum?id=oWdzUpOlkX | 2025 | # AGENTOCCAM: A SIMPLE YET STRONG BASELINE FOR LLM-BASED WEB AGENTS # ABSTRACT Autonomy via agents based on large language models (LLMs) that can carry out personalized yet standardized tasks presents a significant opportunity to drive human efficiency. There is an emerging need and interest in automating web tasks (... | Agent
589 | arxiv_2410.10813.md | Agent_189 | LongMemEval: Benchmarking Chat Assistants on Long-Term Interactive Memory | Recent large language model (LLM)-driven chat assistant systems have integrated memory components to track user-assistant chat histories, enabling more accurate and personalized responses. However, their long-term memory capabilities in sustained interactions remain underexplored. We introduce LongMemEval, a comprehens... | https://arxiv.org/abs/2410.10813 | 2024 | # LONGMEMEVAL: BENCHMARKING CHAT ASSISTANTS ON LONG-TERM INTERACTIVE MEMORY # ABSTRACT Recent large language model (LLM)-driven chat assistant systems have integrated memory components to track user-assistant chat histories, enabling more accurate and personalized responses. However, their long-term memory capabilit... | Agent
590 | arxiv_2505.22954.md | Agent_190 | Darwin Gödel Machine: Open-Ended Evolution of Self-Improving Agents | Today's AI systems have human-designed, fixed architectures and cannot autonomously and continuously improve themselves. The advance of AI could itself be automated. If done safely, that would accelerate AI development and allow us to reap its benefits much sooner. Meta-learning can automate the discovery of novel algo... | https://arxiv.org/abs/2505.22954 | 2025 | # Darwin Gödel Machine: Open-Ended Evolution of Self-Improving Agents # Abstract Most of today's AI systems are constrained by human-designed, fixed architectures and cannot autonomously and continuously improve themselves. The scientific method, on the other hand, provides a cumulative and open-ended system, where e... | Agent
591 | arxiv_2506.03011.md | Agent_191 | Coding Agents with Multimodal Browsing are Generalist Problem Solvers | Modern human labor is characterized by specialization; we train for years and develop particular tools that allow us to perform well across a variety of tasks. In addition, AI agents have been specialized for domains such as software engineering, web navigation, and workflow automation. However, this results in agents ... | https://openreview.net/forum?id=6HF55i7bca | 2025 | # Coding Agents with Multimodal Browsing are Generalist Problem Solvers # Abstract Modern human labor is characterized by specialization; we train for years and develop particular tools that allow us to perform well across a variety of tasks. In addition, AI agents have been specialized for domains such as software e... | Agent
592 | arxiv_2506.10055.md | Agent_192 | TaskCraft: Automated Generation of Agentic Tasks | Agentic tasks, which require multi-step problem solving with autonomy, tool use, and adaptive reasoning, are becoming increasingly central to the advancement of NLP and AI. However, existing instruction data lacks tool interaction, and current agentic benchmarks rely on costly human annotation, limiting their scalabili... | https://arxiv.org/abs/2506.10055 | 2025 | # TaskCraft: Automated Generation of Agentic Tasks # Abstract Agentic tasks, which require multi-step problem solving with autonomy, tool use, and adaptive reasoning, are becoming increasingly central to the advancement of NLP and AI. However, existing instruction data lacks tool interaction, and current agentic benc... | Agent
598 | arxiv_2507.06229.md | Agent_198 | Agent KB: Leveraging Cross-Domain Experience for Agentic Problem Solving | Current AI agents cannot effectively learn from each other's problem-solving experiences or use past successes to guide self-reflection and error correction in new tasks. We introduce Agent KB, a shared knowledge base that captures both high-level problem-solving strategies and detailed execution lessons, enabling ... | https://arxiv.org/abs/2507.06229 | 2025 | # AGENT KB: Leveraging Cross-Domain Experience for Agentic Problem Solving # Abstract Current AI agents cannot effectively learn from each other's problem-solving experiences or use past successes to guide self-reflection and error correction in new tasks. We introduce Agent KB, a shared knowledge base that captu... | Agent
593 | arxiv_2506.19676.md | Agent_193 | A Survey of LLM-Driven AI Agent Communication: Protocols, Security Risks, and Defense Countermeasures | In recent years, Large-Language-Model-driven AI agents have exhibited unprecedented intelligence and adaptability, and are rapidly changing human production and life. Nowadays, agents are undergoing a new round of evolution. They no longer act as an isolated island like LLMs. Instead, they start to communicate with div... | https://arxiv.org/abs/2506.19676 | 2025 | # A Survey of LLM-Driven AI Agent Communication: Protocols, Security Risks, and Defense Countermeasures Abstract: In recent years, Large-Language-Model-driven AI agents have exhibited unprecedented intelligence and adaptability, and are rapidly changing human production and life. Nowadays, agents are undergoing a new r... | Agent
594 | arxiv_2507.05707.md | Agent_194 | Agentic-R1: Distilled Dual-Strategy Reasoning | Current long chain-of-thought (long-CoT) models excel at mathematical reasoning but rely on slow and error-prone natural language traces. Tool-augmented agents address arithmetic via code execution, but often falter on complex logical tasks. We introduce a fine-tuning framework, DualDistill, that distills complementary... | https://arxiv.org/abs/2507.05707 | 2025 | # Agentic-R1: Distilled Dual-Strategy Reasoning # Abstract Current long chain-of-thought (long-CoT) models excel at mathematical reasoning but rely on slow and error-prone natural language traces. Tool-augmented agents address arithmetic via code execution, but often falter on complex logical tasks. We introduce a fi... | Agent
595 | arxiv_2507.07957.md | Agent_195 | MIRIX: Multi-Agent Memory System for LLM-Based Agents | Although memory capabilities of AI agents are gaining increasing attention, existing solutions remain fundamentally limited. Most rely on flat, narrowly scoped memory components, constraining their ability to personalize, abstract, and reliably recall user-specific information over time. To this end, we introduce MIRIX... | https://arxiv.org/abs/2507.07957 | 2025 | # MIRIX: Multi-Agent Memory System for LLM-Based Agents # Abstract Although memory capabilities of AI agents are gaining increasing attention, existing solutions remain fundamentally limited. Most rely on flat, narrowly scoped memory components, constraining their ability to personalize, abstract, and reliably recall... | Agent
596 | arxiv_2507.19478.md | Agent_196 | MMBench-GUI: Hierarchical Multi-Platform Evaluation Framework for GUI Agents | We introduce MMBench-GUI, a hierarchical benchmark for evaluating GUI automation agents across Windows, macOS, Linux, iOS, Android, and Web platforms. It comprises four levels: GUI Content Understanding, Element Grounding, Task Automation, and Task Collaboration, covering essential skills for GUI agents. In addition, w... | https://arxiv.org/abs/2507.19478 | 2025 | # MMBENCH-GUI: HIERARCHICAL MULTI-PLATFORM EVALUATION FRAMEWORK FOR GUI AGENTS # ABSTRACT We introduce MMBench-GUI, a hierarchical benchmark for evaluating GUI automation agents across Windows, macOS, Linux, iOS, Android, and Web platforms. It comprises four levels: GUI Content Understanding, Element Grounding, Task A... | Agent
597 | arxiv_2507.19849.md | Agent_197 | Agentic Reinforced Policy Optimization | Large-scale reinforcement learning with verifiable rewards (RLVR) has demonstrated its effectiveness in harnessing the potential of large language models (LLMs) for single-turn reasoning tasks. In realistic reasoning scenarios, LLMs can often utilize external tools to assist in task-solving processes. However, current ... | https://arxiv.org/abs/2507.19849 | 2025 | # AGENTIC REINFORCED POLICY OPTIMIZATION # ABSTRACT Large-scale reinforcement learning with verifiable rewards (RLVR) has demonstrated its effectiveness in harnessing the potential of large language models (LLMs) for single-turn reasoning tasks. In realistic reasoning scenarios, LLMs can often utilize external tools ... | Agent
599 | arxiv_2506.02153.md | Agent_199 | Small Language Models are the Future of Agentic AI | Large language models (LLMs) are often praised for exhibiting near-human performance on a wide range of tasks and valued for their ability to hold a general conversation. The rise of agentic AI systems is, however, ushering in a mass of applications in which language models perform a small number of specialized tas... | https://arxiv.org/abs/2506.02153 | 2025 | # Small Language Models are the Future of Agentic AI # Abstract Large language models (LLMs) are often praised for exhibiting near-human performance on a wide range of tasks and valued for their ability to hold a general conversation. The rise of agentic AI systems is, however, ushering in a mass of applications in w... | Agent
600 | arxiv_2506.15692.md | Agent_200 | MLE-STAR: Machine Learning Engineering Agent via Search and Targeted Refinement | Agents based on large language models (LLMs) for machine learning engineering (MLE) can automatically implement ML models via code generation. However, existing approaches to build such agents often rely heavily on inherent LLM knowledge and employ coarse exploration strategies that modify the entire code structure... | https://arxiv.org/abs/2506.15692 | 2025 | # MLE-STAR: Machine Learning Engineering Agent via Search and Targeted Refinement Agents based on large language models (LLMs) for machine learning engineering (MLE) can automatically implement ML models via code generation. However, existing approaches to build such agents often rely heavily on inherent LLM knowledge... | Agent