Title: AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environments
URL Source: https://arxiv.org/html/2409.17655
Published Time: Mon, 23 Jun 2025 00:24:49 GMT
Nan Sun1,∗, Bo Mao2,∗, Yongchang Li1,∗, Di Guo2, and Huaping Liu1,3

1 Department of Computer Science and Technology, Tsinghua University, Beijing, 100084, China.
2 School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, 100876, China.
3 Corresponding Author: hpliu@tsinghua.edu.cn
∗ Equal Contribution.
###### Abstract
Current service robots suffer from limited natural language communication abilities, heavy reliance on predefined commands, ongoing human intervention, and, most notably, a lack of proactive collaboration awareness in human-populated environments. This results in narrow applicability and low utility. In this paper, we introduce AssistantX, an LLM-powered proactive assistant designed for autonomous operation in real-world scenarios with high accuracy. AssistantX employs a multi-agent framework consisting of 4 specialized LLM agents, each dedicated to perception, planning, decision-making, and reflective review, facilitating advanced inference capabilities and comprehensive collaboration awareness, much like a human assistant by your side. We built a dataset of 210 real-world tasks to validate AssistantX, which includes instruction content and status information on whether relevant personnel are available. Extensive experiments were conducted in both text-based simulations and a real office environment over the course of a month and a half. Our experiments demonstrate the effectiveness of the proposed framework, showing that AssistantX can reactively respond to user instructions, actively adjust strategies to adapt to contingencies, and proactively seek assistance from humans to ensure successful task completion. More details and videos can be found at [https://assistantx-agent.github.io/AssistantX/](https://assistantx-agent.github.io/AssistantX/).
I INTRODUCTION
--------------
Imagine having a capable assistant; you would naturally expect it to handle various tasks on your behalf. For example, if you need to print a file but lack access to a printer, your expectation is simply to send the file to the assistant, which will manage the rest—locating someone with or near a printer, requesting them to print it, and ultimately returning the printed document to you. You would only need to receive the file, with any uncertainties handled autonomously by the assistant. This highlights the need for an agentic system that can respond to diverse user needs, aligning both physical and digital environments to enhance efficiency [[1](https://arxiv.org/html/2409.17655v3#bib.bib1)].
Figure 1: AssistantX overcomes the limitations of existing service robots and virtual assistants, seamlessly integrating physical and virtual actions to meet human needs.
Figure 2: AssistantX proficiently generates both cyber tasks $\mathcal{TC}$ and real-world tasks $\mathcal{TR}$, executing them concurrently in a manner akin to a human assistant. This approach enhances efficiency and promotes productivity.
However, service robots we encounter in daily life often rely on predefined commands, requiring direct physical interaction, and lack proactive engagement. A recent development, the LLM-integrated Assistant Robot — OfficeMate, demonstrates a few breakthroughs in human-robot interaction in office settings [[2](https://arxiv.org/html/2409.17655v3#bib.bib2)], but it still depends on direct voice commands or keyboard inputs. Despite integrating GPT-4 for speech-to-text conversion, it only maps user input to predefined workflows using a trigger word database, lacking truly proactive behaviors. Issues like ineffective long-distance communication in real-time, inability to handle unexpected situations, and lack of collaboration with humans on complex tasks remain unsolved, limiting its potential as a fully functional assistant.
These limitations motivate the development of AssistantX: an LLM-powered proactive assistant with a virtual presence in cyberspace and a physical embodiment in the real world, designed to interact more effectively with humans for task execution (see Fig. [1](https://arxiv.org/html/2409.17655v3#S1.F1 "Figure 1 ‣ I INTRODUCTION ‣ AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environments")). Built on the multi-agent framework PPDR4X, AssistantX features four specialized LLM agents capable of Perceiving, Planning, Deciding actions, and Reflecting on outcomes. Seamlessly integrated into daily workflows, AssistantX functions like a human assistant by your side, bridging the gap between cyber and physical interactions (see Fig. [2](https://arxiv.org/html/2409.17655v3#S1.F2 "Figure 2 ‣ I INTRODUCTION ‣ AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environments")). We demonstrate through extensive experiments that AssistantX can reactively respond to user instructions, actively adapt its strategies to contingencies, and proactively seek human assistance to ensure task success.
Our contributions include:
1. We propose the LLM-powered multi-agent framework, PPDR4X (Perception, Planning, Decision, Reflection for AssistantX), which significantly enhances robotic cognition and elevates problem-solving capabilities.
2. We develop AssistantX, a proactive assistant that combines the strengths of digital and embodied agents to autonomously fulfill user demands by performing actions in both cyberspace and physical environments.
3. We demonstrate the effectiveness of the proposed framework in human-involved tasks through extensive simulations and real-world experiments, overcoming the limitations of existing service robots.
The structure of this paper is as follows: Section [II](https://arxiv.org/html/2409.17655v3#S2 "II RELATED WORK ‣ AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environments") reviews related works, Section [III](https://arxiv.org/html/2409.17655v3#S3 "III PROBLEM FORMULATION ‣ AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environments") formulates the problem, Section [IV](https://arxiv.org/html/2409.17655v3#S4 "IV METHODOLOGY ‣ AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environments") details the proposed framework, Section [V](https://arxiv.org/html/2409.17655v3#S5 "V EXPERIMENT ‣ AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environments") presents extensive experimental results and comprehensive evaluation, and Section [VI](https://arxiv.org/html/2409.17655v3#S6 "VI CONCLUSION ‣ AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environments") concludes the paper.
Figure 3: Overview of the proposed framework.
II RELATED WORK
---------------
### II-A Mobile Robots in Human-Populated Environments
Mobile robots in human-populated environments have garnered significant attention in robotics research. Early works were limited to structured settings with minimal human interaction, while recent studies emphasize adaptability and human-robot interaction in dynamic environments.
Autonomous robots that gather and report environmental data to assist humans have been developed, with an emphasis on perception rather than action [[3](https://arxiv.org/html/2409.17655v3#bib.bib3)]. Several studies have explored robust navigation in complex environments, but these systems struggle with real-time adaptability when unexpected situations or changes in human behavior arise [[4](https://arxiv.org/html/2409.17655v3#bib.bib4), [5](https://arxiv.org/html/2409.17655v3#bib.bib5)]. Methods to track human positions from dialogue have been proposed to enhance situational awareness, but their reliance on speech makes real-time decision-making challenging [[6](https://arxiv.org/html/2409.17655v3#bib.bib6)]. The recent LLM-integrated office assistant robot, OfficeMate [[2](https://arxiv.org/html/2409.17655v3#bib.bib2)], shows progress in human-robot interaction but still depends on voice commands or typing on a mounted computer, limiting its usability for effective long-distance communication in real time. Its reliance on predefined actions and trigger words restricts its ability to handle complex tasks autonomously, falling short of enhancing efficiency and productivity.
### II-B LLM-Powered Agentic Systems
Recent advancements in agentic systems, particularly with LLM integration, have significantly improved reasoning performance, especially under the Chain-of-Thought mechanism [[7](https://arxiv.org/html/2409.17655v3#bib.bib7)]. Methods like ReAct [[8](https://arxiv.org/html/2409.17655v3#bib.bib8)] and Reflexion [[9](https://arxiv.org/html/2409.17655v3#bib.bib9)] enhance reasoning through step-by-step thinking, while multi-agent frameworks like Mobile-Agent-v2 [[10](https://arxiv.org/html/2409.17655v3#bib.bib10)] and AppAgent [[11](https://arxiv.org/html/2409.17655v3#bib.bib11)] have been effectively applied to tasks like GUI operations on smart devices for long-horizon tasks. Additional studies further demonstrate improved human-agent interactions, decision-making, and intelligence sharing in multi-agent settings for user-centric tasks [[12](https://arxiv.org/html/2409.17655v3#bib.bib12), [13](https://arxiv.org/html/2409.17655v3#bib.bib13), [14](https://arxiv.org/html/2409.17655v3#bib.bib14), [15](https://arxiv.org/html/2409.17655v3#bib.bib15)]. However, these approaches are primarily deployed in digital environments, with limited integration into physical embodiments, which restricts their ability to fully assist users in real-world scenarios.
Nonetheless, the results presented in these works highlight the promising potential of LLM-powered multi-agent collaboration in taking a further step toward interacting with the real world. Consequently, leveraging the synergy between digital and embodied agents to better meet human needs is becoming a key focus [[1](https://arxiv.org/html/2409.17655v3#bib.bib1)], which is also the core concept of our proposed AssistantX.
III PROBLEM FORMULATION
-----------------------
Given an office environment $\mathcal{E}$, we assume it contains $J$ distinct working locations, denoted as $\mathcal{L}=\{l_1,\dots,l_J\}$, and a set of individuals $\mathcal{H}=\{h_1,\dots,h_N\}$, where $N$ is the total number of people. The location of the $i$-th person is denoted as $Loc(h_i)\in\mathcal{L}$. We focus on the general problem of providing assistance by AssistantX to any individual in $\mathcal{H}$. Upon a request from person $h_i$, AssistantX should navigate to public facilities (e.g., printer, fridge, coffee machine) while coordinating human assistance for completing complex tasks (e.g., printing, retrieving food, and making coffee), or deliver/retrieve items from specific individuals (e.g., delivering a file or bringing a pen) and send corresponding notifications online.

We define the set of public facilities as $\mathcal{PF}=\{pf_1,\dots,pf_K\}$, where $K$ is the total number of facilities, and $Loc(pf_k)\in\mathcal{L}$ denotes the location of $pf_k$. We assume each person owns several personal items, which can be borrowed or delivered by AssistantX. The set of personal items is defined as $\mathcal{PI}=\{pi_1,\dots,pi_M\}$, where $M$ is the total number of personal items, and $Own(pi_m)=h_i\in\mathcal{H}$ indicates that person $h_i$ owns item $pi_m$. Humans in $\mathcal{H}$ can send private messages to AssistantX to issue instructions $\mathcal{I}$.

We categorize the tasks that AssistantX can perform into two distinct types: (1) Cyber Task: operations carried out within a virtual environment by its digital avatar, such as sending notifications, making inquiries, forwarding files, and sharing QR codes, referred to as $\mathcal{TC}$; (2) Real-World Task: tasks that require physical actions by its robotic embodiment in the real world, such as navigating to a specific location, referred to as $\mathcal{TR}$.
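As an illustration, the entities above can be sketched as plain data structures; the class names, fields, and values here are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    location: str           # Loc(h_i) ∈ L
    available: bool = True  # can this person collaborate right now?

@dataclass
class OfficeEnvironment:
    locations: set            # L = {l_1, ..., l_J}
    people: dict              # H: name -> Person
    facilities: dict          # PF: facility name -> location
    item_owner: dict          # Own(pi_m) = h_i

env = OfficeEnvironment(
    locations={"desk_1", "desk_2", "printer_room"},
    people={"alice": Person("alice", "desk_1"),
            "bob": Person("bob", "desk_2", available=False)},
    facilities={"printer": "printer_room"},
    item_owner={"stapler": "alice"},
)

# Loc and Own become simple lookups over the environment
assert env.people["alice"].location in env.locations
assert env.item_owner["stapler"] == "alice"
```

Cyber tasks $\mathcal{TC}$ and real-world tasks $\mathcal{TR}$ would then be lists of actions that read from and act on this state.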
Once the instruction begins execution, any dialogue information between all contacts and AssistantX, including conversations occurring in any group chat involving AssistantX, will be recorded and is defined as $\mathcal{D}$. AssistantX should perform a list of $\mathcal{TC}$ and $\mathcal{TR}$ based on the initial instruction $\mathcal{I}$, the office environment $\mathcal{E}$, and dialogue information $\mathcal{D}$, ultimately completing the instructions. The process can be formalized as:
$$(\mathcal{TC},\mathcal{TR})=I(\mathcal{I},\mathcal{E},\mathcal{D}) \tag{1}$$
where $I(\cdot)$ represents the LLM-powered inference process, which is the core reasoning system we aim to develop.
IV METHODOLOGY
--------------
In this section, we provide an overview of the framework of our proposed PPDR4X method. It comprises four specialized agents: Perception Agent, Planning Agent, Decision Agent, and Reflection Agent, each of which is built on a foundational LLM with well-crafted prompts to guide their reasoning processes. The operation of PPDR4X is iterative, with agents functioning in a loop structure, taking inputs from upstream and providing outputs to downstream (as shown in Fig. [3](https://arxiv.org/html/2409.17655v3#S1.F3 "Figure 3 ‣ I INTRODUCTION ‣ AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environments")). We also design a Memory Unit, shared by all agents, to store environmental information, online messages, and the inference process during reasoning.
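The iterative loop structure described above can be sketched as follows. The agent callables, the `execute` stub, and the memory keys are placeholders standing in for the four LLM agents and the action layer; this is an illustrative reading of the framework, not the authors' code.

```python
def execute(cyber_tasks, real_tasks):
    # Stand-in for acting on the social platform and dispatching the robot;
    # returns the incremental information (ΔM) observed after execution.
    return {"executed": list(cyber_tasks) + list(real_tasks)}

def run_instruction(instruction, memory, agents, max_steps=10):
    """One PPDR4X-style loop (a sketch): perceive -> plan -> decide ->
    execute -> reflect, with all agents sharing the same memory dict."""
    memory["instruction"] = instruction
    for _ in range(max_steps):
        focus = agents["perceive"](memory)                    # PC_t, Eq. (2)
        plan = agents["plan"](memory, focus)                  # PL_t, Eq. (3)
        cyber, real = agents["decide"](memory, focus, plan)   # Eq. (4)
        delta = execute(cyber, real)
        verdict = agents["reflect"](memory, focus, plan,
                                    cyber, real, delta)       # Eq. (5)
        memory.setdefault("trace", []).append((plan, cyber, real, verdict))
        if verdict == "Y" and plan == "done":
            break
    return memory
```

With trivial lambda agents that immediately report completion, the loop terminates after one iteration, illustrating the upstream-to-downstream data flow.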
Figure 4: Overview of the data stored in Memory Unit.
Figure 5: An illustration of the inputs and outputs of PPDR agents, showing how they collaborate to determine the next move after the previous task, with all agents communicating in natural language, ensuring logical consistency and interpretability.
### IV-A Memory Unit
The Memory Unit forms the foundation of the entire framework, storing both long-term dynamic environmental data $\mathcal{E}$ and short-term memory (as shown in Fig. [4](https://arxiv.org/html/2409.17655v3#S4.F4 "Figure 4 ‣ IV METHODOLOGY ‣ AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environments")). The short-term memory includes user instructions $\mathcal{I}$, dialogue data $\mathcal{D}$, embodied states $\mathcal{S}$ (such as the robot's current location and locker state), and the inference process $\mathcal{P}$ (which includes agent thoughts and the executed tasks $\mathcal{TC}$ and $\mathcal{TR}$). Long-term memory is represented using an undirected topological graph, where nodes represent humans, public facilities, personal items, and locations. The edges between nodes define relationships, such as the location of humans and facilities, and item ownership. Human nodes also include an availability attribute, indicating whether they are available to collaborate on tasks. The Reflection Agent updates the edges and human node states during task execution to reflect changes in the dynamic environment. Task-specific information stored in short-term memory is also updated during execution and reset after the completion of each user instruction. We denote the memory package at time step $t$ as $\mathcal{M}_t$, which is converted into descriptive text and provided as part of the input to the foundational LLM of each agent as contextual information.
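A minimal sketch of the long-term memory graph, assuming a plain adjacency-set representation (the paper does not specify the data structure; the class and node names are illustrative):

```python
class LongTermMemory:
    """Undirected topological graph: nodes are humans, facilities, items,
    and locations; edges encode location and ownership relations."""

    def __init__(self):
        self.edges = {}      # node -> set of neighbouring nodes
        self.available = {}  # human node -> availability attribute

    def link(self, a, b):
        self.edges.setdefault(a, set()).add(b)
        self.edges.setdefault(b, set()).add(a)

    def unlink(self, a, b):
        self.edges.get(a, set()).discard(b)
        self.edges.get(b, set()).discard(a)

    def move(self, human, old_loc, new_loc):
        # The kind of edge update the Reflection Agent performs when a
        # person changes location during task execution.
        self.unlink(human, old_loc)
        self.link(human, new_loc)

ltm = LongTermMemory()
ltm.link("alice", "desk_1")
ltm.link("printer", "printer_room")
ltm.available["alice"] = True
ltm.move("alice", "desk_1", "printer_room")
assert "printer_room" in ltm.edges["alice"]
```

Serializing `edges` and `available` into descriptive text yields the contextual input described above.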
### IV-B Perception of Focus Content
If $t\neq 0$, given a memory package $\mathcal{M}_t$, which contains $\mathcal{I}$, the previous plan $\mathcal{PL}_{t-1}$, the completed tasks $\mathcal{TR}_{t-1}$ and $\mathcal{TC}_{t-1}$, and the reflection result $\mathcal{R}_{t-1}$, the Perception Agent is prompted to generate a fine-grained text description of the focus content (see Fig. [5](https://arxiv.org/html/2409.17655v3#S4.F5 "Figure 5 ‣ IV METHODOLOGY ‣ AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environments")). This includes a descriptive observation of the local environment, location and ownership information of individuals and items involved in upcoming tasks, as well as the active chat group or person in cyberspace, to better guide the next plan. The perceptual process can be articulated as follows:
$$\mathcal{PC}_t=perceive(\mathcal{M}_t) \tag{2}$$
where $perceive(\cdot)$ is the LLM-powered inference process with tailored prompts containing an output template to generate the parts of the focus content.
While we provide $\mathcal{M}_t$ to subsequent reasoning agents as holistic contextual information, $\mathcal{PC}_t$ remains indispensable for long-horizon tasks involving a lengthy sequence of steps across multiple people and objects, which poses significant challenges for LLM planning [[16](https://arxiv.org/html/2409.17655v3#bib.bib16)]. In our ablation experiments, when the Perception Agent is ablated, the Success Rate and Completion Rate of L3 complexity tasks decrease significantly (see TABLE [V](https://arxiv.org/html/2409.17655v3#S5.T5 "TABLE V ‣ V-A Experimental Setup ‣ V EXPERIMENT ‣ AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environments")).
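Conceptually, Eq. (2) reduces to assembling the memory package into a templated prompt and returning the model's structured description. A hedged sketch, where `call_llm`, the template text, and the field names are assumptions rather than the paper's prompts:

```python
FOCUS_TEMPLATE = """You are the Perception Agent.
Instruction: {instruction}
Previous plan: {prev_plan}
Completed tasks: {done}
Last reflection: {reflection}
Describe: (1) the local environment, (2) the people and items involved in
upcoming tasks, (3) the active chat group or person in cyberspace."""

def perceive(memory, call_llm):
    """Eq. (2): PC_t = perceive(M_t). `call_llm` stands in for the
    foundational LLM; the template fields mirror the memory package."""
    prompt = FOCUS_TEMPLATE.format(
        instruction=memory["instruction"],
        prev_plan=memory.get("plan", "none"),
        done=memory.get("done", []),
        reflection=memory.get("reflection", "none"),
    )
    return call_llm(prompt)
```

Passing an identity function for `call_llm` shows the assembled prompt that the downstream agents would never see directly, only its focus-content output.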
TABLE I: Action Xpace
We encapsulate the initial communication API provided by the social platform to send online messages, mapping digital accounts to the real names of individuals. For QR codes, we first encrypt the initial QR code using the RC4 algorithm and integrate it with the smart lock system, setting a scan limit (i.e., the QR code expires after a single scan) to enhance the security of item transport. We use LiDAR to scan the floor and apply the ROS gmapping algorithm to build a semantic map. We establish a "coordinate-name" mapping for anchor points, where fixed locations' (x, y) coordinates are mapped to individuals' real names. Upon triggering a physical action, the robot converts the name into the corresponding coordinates and navigates to the specified location, supported by the ROS Navigation and MoveIt libraries.
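The "coordinate-name" mapping amounts to a lookup that resolves a name to map coordinates before handing a goal to the navigation stack. In this sketch the anchor values are made up and `send_goal` stands in for a ROS move_base-style action client:

```python
# Hypothetical anchor map: fixed (x, y) coordinates on the semantic map,
# keyed by real names (values here are illustrative, not from the paper).
ANCHORS = {
    "alice": (2.4, 1.1),
    "bob": (5.0, 3.2),
    "printer": (7.8, 0.5),
}

def navigate_to(name, send_goal):
    """Resolve a name to its anchor coordinates and dispatch the goal.
    `send_goal` is a placeholder for the real navigation client."""
    if name not in ANCHORS:
        raise KeyError(f"no anchor point for {name!r}")
    x, y = ANCHORS[name]
    return send_goal(x, y)
```

Keeping the name-to-coordinate resolution outside the LLM means the agents only ever reason over names, never raw coordinates.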
### IV-C Hierarchical Problem Solving
To address long-horizon task execution in dynamic environments, we propose a hierarchical problem-solving architecture featuring an explicit separation of high-level planning and low-level action execution. The Planning Agent is first prompted to create a strategic roadmap $\mathcal{PL}_t$ based on the current memory state $\mathcal{M}_t$ and perception package $\mathcal{PC}_t$ (see Fig. [5](https://arxiv.org/html/2409.17655v3#S4.F5 "Figure 5 ‣ IV METHODOLOGY ‣ AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environments")):
$$\mathcal{PL}_t=plan(\mathcal{M}_t,\mathcal{PC}_t) \tag{3}$$
where $plan(\cdot)$ represents the generating process of the LLM, with sophisticated examples provided in prompts for in-context learning, producing outputs that summarize completed actions to track instruction progress and provide high-level planning to guide actions.
The Decision Agent then translates high-level objectives into immediate actions, operating them on the phone or dispatching the robot (see Fig. [5](https://arxiv.org/html/2409.17655v3#S4.F5 "Figure 5 ‣ IV METHODOLOGY ‣ AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environments")). We define an Action Xpace, comprising atomic actions for the Decision Agent to select (see Table [I](https://arxiv.org/html/2409.17655v3#S4.T1 "TABLE I ‣ IV-B Perception of Focus Content ‣ IV METHODOLOGY ‣ AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environments") for the actions and detailed implementation of how actions operate in both cyberspace and the real world). The decision-making process is defined as:
$$(\mathcal{TC}_t,\mathcal{TR}_t)=decide(\mathcal{M}_t,\mathcal{PC}_t,\mathcal{PL}_t) \tag{4}$$
where $decide(\cdot)$ denotes the mapping process utilizing the foundational LLM with dedicated prompts containing conditional constraints, detailing the dependencies of certain tasks, such as sending an online notification when moving to someone's location, or confirming that a document has been obtained or printed before asking someone to sign it.
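Such conditional constraints amount to precondition checks over task dependencies. A sketch with hypothetical action names (the paper expresses these constraints in prompts, not code):

```python
# Hypothetical dependency table: an action may only fire once all of its
# prerequisites have completed (mirrors the prompt constraints above).
PRECONDITIONS = {
    "ask_signature": {"obtain_document"},
    "move_to_person": set(),
    "notify_person": set(),
}

def admissible(action, completed):
    """An action is admissible once every prerequisite is in `completed`."""
    return PRECONDITIONS.get(action, set()) <= set(completed)

def filter_actions(proposed, completed):
    # Drop any proposed action whose dependencies are not yet satisfied.
    return [a for a in proposed if admissible(a, completed)]
```

Filtering the Decision Agent's proposals this way gives a deterministic backstop for dependencies the LLM might overlook.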
By incorporating higher-level reasoning to guide low-level actions, our approach ensures better alignment with the evolving needs of the task, outperforming end-to-end methods like CoT [[7](https://arxiv.org/html/2409.17655v3#bib.bib7)] and ReAct [[8](https://arxiv.org/html/2409.17655v3#bib.bib8)] that directly generate actions, as their lack of a global plan limits their ability to take adaptive actions across the entire task horizon. In ablation experiments, without the Planning Agent, our method suffers a degradation of over 30% in both Cyber Task Accuracy and Real-world Task Accuracy for L2 and L3 complexity tasks (see TABLE [V](https://arxiv.org/html/2409.17655v3#S5.T5 "TABLE V ‣ V-A Experimental Setup ‣ V EXPERIMENT ‣ AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environments")).
### IV-D Self-Reflection Mechanism
After the execution of $\mathcal{TC}_t$ and $\mathcal{TR}_t$, corresponding information changes or emerges, such as newly generated dialogue data from cyberspace via the cyber actions or new messages from human contacts, and updates to the robot's embodied states, including location and the smart lock signal information. To monitor these short-term data, we set up a thread on a cloud server to track new messages through the social platform's message retrieval API, providing real-time updates. We use ROS AMCL for robot localization, determining the robot's current position. Additionally, we integrated a feedback circuit into the smart lock, allowing us to retrieve the magnetic lock's open/close status through a digital signal; we listen to data from the Ethernet interface to receive real-time feedback from the smart lock. The incremental information (denoted as $\Delta\mathcal{M}$) is then reflected upon by the Reflection Agent, which compares it with $\mathcal{M}_t$ and adds the resulting insights to the Memory Unit. The reflective procedure is denoted as the following formula:
$$\mathcal{R}_t=reflect(\mathcal{M}_t,\mathcal{PC}_t,\mathcal{PL}_t,\mathcal{TC}_t,\mathcal{TR}_t,\Delta\mathcal{M}) \tag{5}$$
where $reflect(\cdot)$ represents the reflective process of the LLM, which is prompted to carefully consider whether the results of the executed action align with the expected outcomes, and then generates a reflection result $\mathcal{R}_t$ (see Fig. [5](https://arxiv.org/html/2409.17655v3#S4.F5 "Figure 5 ‣ IV METHODOLOGY ‣ AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environments")). It contains a binary judgment: either 'Y' or 'N'. A 'Y' indicates the Reflection Agent considers the action's outcome consistent with expectations, while 'N' signals a deviation. The reflection result also summarizes the reasoning behind this judgment, providing guidance for future planning and decision-making.
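Extracting the binary judgment and its rationale from the reflection text might look like the following; the "Verdict:"/"Reason:" output layout is an assumed template, not one specified by the paper:

```python
def parse_reflection(raw):
    """Split a reflection result into the binary judgment ('Y'/'N') and the
    free-text rationale. Defaults to 'N' when no verdict line is found, so
    unparseable reflections are treated as deviations."""
    verdict, reason = "N", raw.strip()
    for line in raw.splitlines():
        if line.lower().startswith("verdict:"):
            verdict = line.split(":", 1)[1].strip().upper()[:1]
        elif line.lower().startswith("reason:"):
            reason = line.split(":", 1)[1].strip()
    return verdict, reason
```

Defaulting to 'N' is a conservative choice: a malformed reflection triggers replanning rather than silently counting as success.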
This self-reflection mechanism significantly improves the Success Rate by 24% compared to the ablated version in L3 complexity tasks (see TABLE [V](https://arxiv.org/html/2409.17655v3#S5.T5 "TABLE V ‣ V-A Experimental Setup ‣ V EXPERIMENT ‣ AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environments")), by avoiding getting trapped by past errors. It works efficiently in tasks involving a larger number of people and facilitates proactive actions when certain personnel are unavailable.
TABLE II: Details of our dataset
| Difficulty Level | Achievability | Number |
| --- | --- | --- |
| L1 | Achievable | 90 (43%) |
| L2 | Achievable | 73 (35%) |
| L3 | Achievable | 25 (12%) |
| L3 | Unachievable | 22 (10%) |
| Total | | 210 |
V EXPERIMENT
------------
### V-A Experimental Setup
Environment. We built a real-world experimental environment consisting of 23 distinct locations on a semantic map, including 16 workstations and 7 public facilities (see Fig. [6](https://arxiv.org/html/2409.17655v3#S5.F6 "Figure 6 ‣ V-A Experimental Setup ‣ V EXPERIMENT ‣ AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environments")). We ensured that every individual possesses at least one personal item, and that at least three people share the same type of item. For the initial long-term memory of AssistantX (whose physical details are shown in Fig. [7](https://arxiv.org/html/2409.17655v3#S5.F7 "Figure 7 ‣ V-A Experimental Setup ‣ V EXPERIMENT ‣ AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environments")), we deliberately included only a subset of ownership information to assess AssistantX’s proactive capabilities. Users can issue commands to AssistantX via a one-on-one messaging interface in a social messaging app. Additionally, we simulated the environment for text-based experiments, in which all actions defined in Table [I](https://arxiv.org/html/2409.17655v3#S4.T1 "TABLE I ‣ IV-B Perception of Focus Content ‣ IV METHODOLOGY ‣ AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environments") are guaranteed to succeed once generated. We also developed a demonstration platform (see Fig. [8](https://arxiv.org/html/2409.17655v3#S5.F8 "Figure 8 ‣ V-A Experimental Setup ‣ V EXPERIMENT ‣ AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environments")) that showcases sample data entries from the dataset, supports online execution, and displays both the inference process and simulations of environmental and robotic state changes. In the simulation experiments, an LLM (ChatGPT-4o) reads personnel availability and generates responses for AssistantX, whereas in the real-world experiments, human participants interact with AssistantX via their phones based on the updated personnel status after each data entry.

|
| 143 |
+
|
| 144 |
+
Figure 6: An illustration of our real-world experiment environment, which is also simulated in the simulation experiments.
|
| 145 |
+
|
| 146 |
+
TABLE III: The metrics that we used in evaluation.
|
| 147 |
+
|
| 148 |
+

|
| 149 |
+
|
| 150 |
+
Figure 7: AssistantX is physically embodied as a customized mobile robot equipped with a smart locker, and virtually implemented on a phone with a configured social account.

TABLE IV: The results of our method compared to other baselines.

*The bold values indicate that the method achieves the best performance for a specific metric at a certain difficulty level.

TABLE V: The results of the ablation study.

*The bold values indicate that the method achieves the best performance for a specific metric at a certain difficulty level.

TABLE VI: The evaluation of LLM backbones.

Figure 8: The platform used for simulation and display.
Dataset. Based on survey responses from over 300 students and faculty members regarding daily tasks and errands they found exhausting and wished to automate, we developed a dataset tailored to our environmental setting to rigorously evaluate the effectiveness of our approach. The dataset comprises instruction content and personnel status information, including 30 base instructions (with all individuals marked as available) and 180 variants in which the instruction content remains identical to a base instruction but one or more personnel are marked unavailable. Each entry includes a feature indicating whether the instruction is achievable, which the robot must identify during execution. For instance, if person A is unavailable to sign a file or if no one with a pen is available, the instruction is deemed unachievable. The dataset is categorized into three difficulty levels: L1 tasks can be completed reactively based solely on user instructions; L2 tasks involve unavailable personnel, requiring the robot to autonomously search for alternatives; and L3 tasks necessitate engaging with other humans for assistance when all relevant personnel in long-term memory are unavailable. Comprehensive details of the dataset are presented in TABLE [II](https://arxiv.org/html/2409.17655v3#S4.T2 "TABLE II ‣ IV-D Self-Reflection Mechanism ‣ IV METHODOLOGY ‣ AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environments"), with specific examples shown on our simulation platform (see Fig. [8](https://arxiv.org/html/2409.17655v3#S5.F8 "Figure 8 ‣ V-A Experimental Setup ‣ V EXPERIMENT ‣ AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environments")).
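To make the entry structure concrete, a single dataset entry could be represented as sketched below. The class and field names are our own illustrative assumptions, not the paper's released schema; only the described attributes (instruction text, difficulty level, achievability flag, unavailable personnel) are taken from the text.

```python
# Illustrative schema for one dataset entry (hypothetical field names).
from dataclasses import dataclass, field


@dataclass
class TaskEntry:
    instruction: str   # natural-language user command
    difficulty: int    # 1, 2, or 3, corresponding to levels L1/L2/L3
    achievable: bool   # ground-truth flag the robot must identify itself
    # Personnel marked unavailable for this variant (empty for base entries).
    unavailable: list = field(default_factory=list)


# A base instruction (everyone available) and one variant sharing its text
# but with one person marked unavailable, pushing it to difficulty L2.
base = TaskEntry("Deliver the signed file to Lee.", 1, True)
variant = TaskEntry("Deliver the signed file to Lee.", 2, True, ["Mao"])
assert base.instruction == variant.instruction and not base.unavailable
```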
Baselines. To rigorously evaluate our method, we compared it with 5 LLM-powered agentic baselines, ensuring that they were based on the same model and that the prompts were all tailored to our environment setting to maintain fairness:

* Direct: Directly generates tasks from user instructions, with an example provided for in-context learning.
* CoT [[7](https://arxiv.org/html/2409.17655v3#bib.bib7)]: Builds on Direct by encouraging step-by-step reasoning, enhancing structured thinking.
* ReAct [[8](https://arxiv.org/html/2409.17655v3#bib.bib8)]: Integrates explicit environmental observation and thinking stages into the reasoning process, generating tasks iteratively through cycles of thought, action, and observation.
* Reflexion [[9](https://arxiv.org/html/2409.17655v3#bib.bib9)]: Similar to ReAct, but adds reflection on the results of actions, improving reasoning through high-level explanations of the executed tasks’ outcomes.
* Mobile-Agent-v2 [[10](https://arxiv.org/html/2409.17655v3#bib.bib10)]: Employs three agents and one perception module, including a reflection agent, but struggles with long sequential tasks because its full-scale perception mechanism is inefficient and produces excessive irrelevant information.

|
| 177 |
+
|
| 178 |
+
Figure 9: We demonstrate that AssistantX can reactively respond to Lee’s request and operate autonomously. When Mao is unavailable for printing, it actively searches memory for alternatives, identifying Wu. When Wu is also unavailable, AssistantX proactively seeks help in an active group chat to complete the complex task with human collaboration. Two representative inference processes showcasing the generation of proactive thoughts and behaviors are also presented.
### V-B Evaluation
To assess the effectiveness of our framework, we define 5 evaluation metrics in TABLE [III](https://arxiv.org/html/2409.17655v3#S5.T3 "TABLE III ‣ V-A Experimental Setup ‣ V EXPERIMENT ‣ AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environments"). We choose ChatGPT-4o as the foundation model, with detailed model baseline comparison results provided in TABLE [VI](https://arxiv.org/html/2409.17655v3#S5.T6 "TABLE VI ‣ V-A Experimental Setup ‣ V EXPERIMENT ‣ AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environments"). Our dataset of 210 entries was run 5 times in simulation, and the average of the 5 runs was computed for each metric.
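The evaluation protocol above (5 full passes over the 210-entry dataset, then averaging each metric across runs) can be sketched as follows. `run_once` is a hypothetical stand-in for one complete simulation pass; the metric abbreviations follow those defined in TABLE III.

```python
# Average each evaluation metric over several independent runs of the
# full dataset, as described in the evaluation protocol.
from statistics import mean

METRICS = ["SR", "CR", "RR", "CTA", "RTA"]


def average_runs(run_once, n_runs=5):
    """run_once() -> dict mapping metric name to its value for one pass.

    Executes n_runs passes and returns the per-metric mean.
    """
    runs = [run_once() for _ in range(n_runs)]
    return {m: mean(r[m] for r in runs) for m in METRICS}


# Usage with a fake runner that replays pre-recorded per-run results:
fake = iter([{"SR": 0.8, "CR": 0.8, "RR": 0.1, "CTA": 0.7, "RTA": 0.7}] * 5)
averaged = average_runs(lambda: next(fake))
```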
The overall evaluation presented in Table [IV](https://arxiv.org/html/2409.17655v3#S5.T4 "TABLE IV ‣ V-A Experimental Setup ‣ V EXPERIMENT ‣ AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environments") highlights the robustness of our approach relative to all baselines, particularly in complex tasks, and demonstrates its ability to function effectively in dynamic environments. In terms of SR, our method achieves 0.81, 0.74, and 0.66 for L1, L2, and L3, respectively, exceeding Mobile-Agent-v2 (the second-best) by 0.11, 0.12, and 0.14. Moreover, CR is notably higher, reaching 0.83 at L1 compared to 0.76 for Mobile-Agent-v2, a 0.07 improvement even on the simplest tasks. Our method also exhibits the lowest RR, ensuring higher efficiency in task execution. In terms of CTA and RTA, our approach leads with improvements of at least 0.10. The overall results further indicate that single-agent methods, such as Direct and CoT, are less effective than multi-agent approaches like Mobile-Agent-v2 and Reflexion, even with step-by-step prompting. However, we also observed that incorporating the reflection mechanism increases the redundancy rate, with the non-reflective ReAct achieving the lowest L3 redundancy rate of 0.07. This is probably attributable to the additional reasoning steps or errors introduced during the reflection process.
To further validate the effectiveness of each agent, we conducted ablation experiments (see Table [V](https://arxiv.org/html/2409.17655v3#S5.T5 "TABLE V ‣ V-A Experimental Setup ‣ V EXPERIMENT ‣ AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environments")). Our findings indicate that when Perception Agent and Planning Agent are individually removed, performance degrades notably, especially in complex tasks (L3). Without Perception Agent, SR and CR decrease slightly at L2 but drop significantly at L3 (SR: 0.66 to 0.34, CR: 0.62 to 0.49). Removing the Planning Agent causes a substantial decline at all difficulty levels: SR drops to 0.64/0.44/0.16 and CR to 0.71/0.53/0.25 at L1/L2/L3, while RR increases to 0.04/0.09/0.12. The removal of Reflection Agent leads to milder degradation in SR and CR, but also results in a lower RR at L1 and L2, suggesting that while Reflection Agent improves accuracy, the added reasoning steps may sometimes induce redundant actions.
### V-C Real-World Experiment
We further evaluated AssistantX in our real lab to assess its effectiveness in streamlining workflows and enhancing productivity. We present a complete scenario (see Fig. [9](https://arxiv.org/html/2409.17655v3#S5.F9 "Figure 9 ‣ V-A Experimental Setup ‣ V EXPERIMENT ‣ AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environments")), highlighting its capabilities in: (1) reactively responding to user instructions, autonomously delivering items, and notifying relevant personnel online; (2) actively locating an available individual with a pen when the personnel lack one for signing; and (3) proactively seeking assistance in group chats when no personnel are available for printing.
AssistantX was also deployed for open use by researchers not affiliated with the development team for one and a half months. To evaluate its real-world impact, we conducted a user study aimed at assessing the system’s ability to reduce physical burden, improve task efficiency, and support seamless human-robot collaboration in an unsupervised setting. After each interaction, a short survey was delivered via AssistantX’s integrated messaging system, consisting of multiple-choice questions (with Yes, No, and Unsure options) covering task success, walking reduction, user satisfaction, and perceived productivity, along with an optional open-ended field for free-form feedback. In total, we collected 289 valid responses.
The aggregated results, shown in Fig. [10](https://arxiv.org/html/2409.17655v3#S5.F10 "Figure 10 ‣ V-C Real-World Experiment ‣ V EXPERIMENT ‣ AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environments"), indicate that 92% of users experienced a reduction in walking effort, 85% reported successful task completion, 83% expressed satisfaction with AssistantX, and 81% affirmed that it improved their productivity. These outcomes suggest the system was able to integrate effectively into natural workflows, offering measurable value in task support and reducing user workload. The system’s consistent performance across a range of users and tasks points to its robustness and adaptability in real-world human environments.
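As a small illustration, the per-question percentages above could be aggregated as sketched here; the response format and question key are our own assumptions, chosen to match the survey's Yes/No/Unsure answer options.

```python
# Aggregate multiple-choice survey responses into a 'Yes' rate per question.
def yes_rate(responses, question):
    """Fraction of responses answering 'Yes' to the given question key."""
    answers = [r[question] for r in responses]
    return sum(a == "Yes" for a in answers) / len(answers)


# Toy data: 92 of 100 respondents report reduced walking effort.
responses = [{"walking_reduced": "Yes"}] * 92 + [{"walking_reduced": "No"}] * 8
print(round(yes_rate(responses, "walking_reduced"), 2))  # 0.92
```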
Beyond the quantitative metrics, the deployment of AssistantX revealed several qualitative benefits that underscore the value of autonomous embodied assistants in everyday collaborative environments. By serving as an always-available intermediary for routine yet interruptive tasks—such as delivering items, locating personnel, or sending online notifications—AssistantX reduced cognitive load and minimized workflow fragmentation. Users reported that the robot’s presence enabled them to stay focused on primary tasks without frequent context-switching, effectively improving concentration and task continuity. In one notable case, a user suffering from plantar fasciitis highlighted the robot’s assistance as significantly beneficial in reducing unnecessary movement. Such feedback demonstrates the potential impact of robots like AssistantX in accessibility-related use cases, including support for elderly or mobility-impaired individuals. Given its lightweight design and generalizable task structure, our method is well-suited for rapid deployment to other robotic platforms and assistive contexts.
Moreover, the system’s ability to strategically engage humans only when needed, rather than attempting full autonomy, fostered a sense of cooperative fluency rather than resistance. This selective involvement aligns with the notion of “precision-activated collaboration,” where human assistance is invoked at critical points rather than ubiquitously. Informal user feedback also suggested that the system encouraged a more equitable distribution of responsibilities in shared spaces by offloading low-complexity but high-frequency tasks from individuals to the robot. These observations support the emerging view that embodied agents, when properly scoped and socially integrated, can enhance not only operational efficiency but also the overall social dynamics and inclusivity of collaborative work environments.

|
| 201 |
+
|
| 202 |
+
Figure 10: Aggregated results from 289 valid user survey responses collected during the real-world deployment of AssistantX, indicating strong acceptance and measurable benefits across physical, functional, and cognitive dimensions.
VI CONCLUSION
-------------
In this study, we present AssistantX, an LLM-powered proactive assistant, designed to operate autonomously in a real-world office environment. By leveraging the PPDR4X framework, we endowed AssistantX with the ability to autonomously interpret, plan, and execute both cyber and real-world actions, significantly enhancing operational efficiency. The experimental results substantiate the feasibility of our framework, opening up new avenues for its application across various domains. Moreover, our approach demonstrates remarkable robustness and scalability, exploring new paradigms of human-robot collaboration. Future work will focus on refining AssistantX’s natural language understanding capabilities, expanding its repertoire of physical interactions, and exploring its scalability within more intricate and expansive environments. Our work underscores key technological and conceptual pathways for advancing next-generation autonomous assistants to enhance operational efficiency, cognitive support, and productivity, ultimately aiming to develop intelligent, adaptive systems that integrate seamlessly into daily workflows and redefine human-agent interaction across both virtual and real-world environments.
References
----------
* [1] Y. Liu, W. Chen, Y. Bai, X. Liang, G. Li, W. Gao, and L. Lin, “Aligning cyber space with physical world: A comprehensive survey on embodied AI,” 2024. [Online]. Available: [https://arxiv.org/abs/2407.06886](https://arxiv.org/abs/2407.06886)
* [2] J. Pan, S. Schömbs, Y. Zhang, R. Tabatabaei, M. Bilal, and W. Johal, “OfficeMate: Pilot evaluation of an office assistant robot,” 2025. [Online]. Available: [https://arxiv.org/abs/2501.05141](https://arxiv.org/abs/2501.05141)
* [3] M. J.-Y. Chung, A. Pronobis, M. Cakmak, D. Fox, and R. P. N. Rao, “Autonomous question answering with mobile robots in human-populated environments,” in _2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_, 2016, pp. 823–830.
* [4] X. Zhang, J. Wang, Y. Fang, and J. Yuan, “Multilevel humanlike motion planning for mobile robots in complex indoor environments,” _IEEE Transactions on Automation Science and Engineering_, vol. 16, no. 3, pp. 1244–1258, 2019.
* [5] P. Trautman and A. Krause, “Unfreezing the robot: Navigation in dense, interacting crowds,” in _2010 IEEE/RSJ International Conference on Intelligent Robots and Systems_, 2010, pp. 797–803.
* [6] L. Liang, G. Bian, H. Zhao, Y. Dong, and H. Liu, “Extracting dynamic navigation goal from natural language dialogue,” in _2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_, 2023, pp. 3539–3545.
* [7] T. Kojima, S. S. Gu, M. Reid, Y. Matsuo, and Y. Iwasawa, “Large language models are zero-shot reasoners,” 2023. [Online]. Available: [https://arxiv.org/abs/2205.11916](https://arxiv.org/abs/2205.11916)
* [8] S. Yao, J. Zhao, D. Yu, N. Du, I. Shafran, K. Narasimhan, and Y. Cao, “ReAct: Synergizing reasoning and acting in language models,” 2023. [Online]. Available: [https://arxiv.org/abs/2210.03629](https://arxiv.org/abs/2210.03629)
* [9] N. Shinn, F. Cassano, E. Berman, A. Gopinath, K. Narasimhan, and S. Yao, “Reflexion: Language agents with verbal reinforcement learning,” 2023. [Online]. Available: [https://arxiv.org/abs/2303.11366](https://arxiv.org/abs/2303.11366)
* [10] J. Wang, H. Xu, H. Jia, X. Zhang, M. Yan, W. Shen, J. Zhang, F. Huang, and J. Sang, “Mobile-Agent-v2: Mobile device operation assistant with effective navigation via multi-agent collaboration,” 2024. [Online]. Available: [https://arxiv.org/abs/2406.01014](https://arxiv.org/abs/2406.01014)
* [11] C. Zhang, Z. Yang, J. Liu, Y. Han, X. Chen, Z. Huang, B. Fu, and G. Yu, “AppAgent: Multimodal agents as smartphone users,” 2023. [Online]. Available: [https://arxiv.org/abs/2312.13771](https://arxiv.org/abs/2312.13771)
* [12] X. Zhang, Y. Deng, Z. Ren, S.-K. Ng, and T.-S. Chua, “Ask-before-plan: Proactive language agents for real-world planning,” 2024. [Online]. Available: [https://arxiv.org/abs/2406.12639](https://arxiv.org/abs/2406.12639)
* [13] A. Z. Ren, A. Dixit, A. Bodrova, S. Singh, S. Tu, N. Brown, P. Xu, L. Takayama, F. Xia, J. Varley, Z. Xu, D. Sadigh, A. Zeng, and A. Majumdar, “Robots that ask for help: Uncertainty alignment for large language model planners,” 2023. [Online]. Available: [https://arxiv.org/abs/2307.01928](https://arxiv.org/abs/2307.01928)
* [14] S. Abdelnabi, A. Gomaa, S. Sivaprasad, L. Schönherr, and M. Fritz, “Cooperation, competition, and maliciousness: LLM-stakeholders interactive negotiation,” 2024. [Online]. Available: [https://arxiv.org/abs/2309.17234](https://arxiv.org/abs/2309.17234)
* [15] W. Chen, Z. You, R. Li, Y. Guan, C. Qian, C. Zhao, C. Yang, R. Xie, Z. Liu, and M. Sun, “Internet of agents: Weaving a web of heterogeneous agents for collaborative intelligence,” 2024. [Online]. Available: [https://arxiv.org/abs/2407.07061](https://arxiv.org/abs/2407.07061)
* [16] K. Valmeekam, M. Marquez, S. Sreedharan, and S. Kambhampati, “On the planning abilities of large language models: A critical investigation,” 2023. [Online]. Available: [https://arxiv.org/abs/2305.15771](https://arxiv.org/abs/2305.15771)