| { |
| "url": "http://arxiv.org/abs/2404.16660v1", |
| "title": "Benchmarking Mobile Device Control Agents across Diverse Configurations", |
| "abstract": "Developing autonomous agents for mobile devices can significantly enhance\nuser interactions by offering increased efficiency and accessibility. However,\ndespite the growing interest in mobile device control agents, the absence of a\ncommonly adopted benchmark makes it challenging to quantify scientific progress\nin this area. In this work, we introduce B-MoCA: a novel benchmark designed\nspecifically for evaluating mobile device control agents. To create a realistic\nbenchmark, we develop B-MoCA based on the Android operating system and define\n60 common daily tasks. Importantly, we incorporate a randomization feature that\nchanges various aspects of mobile devices, including user interface layouts and\nlanguage settings, to assess generalization performance. We benchmark diverse\nagents, including agents employing large language models (LLMs) or multi-modal\nLLMs as well as agents trained from scratch using human expert demonstrations.\nWhile these agents demonstrate proficiency in executing straightforward tasks,\ntheir poor performance on complex tasks highlights significant opportunities\nfor future research to enhance their effectiveness. Our source code is publicly\navailable at https://b-moca.github.io.", |
| "authors": "Juyong Lee, Taywon Min, Minyong An, Changyeon Kim, Kimin Lee", |
| "published": "2024-04-25", |
| "updated": "2024-04-25", |
| "primary_cat": "cs.HC", |
| "cats": [ |
| "cs.HC", |
| "cs.AI", |
| "cs.LG" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM AND Agent", |
| "gt": "Developing autonomous agents for mobile devices can significantly enhance\nuser interactions by offering increased efficiency and accessibility. However,\ndespite the growing interest in mobile device control agents, the absence of a\ncommonly adopted benchmark makes it challenging to quantify scientific progress\nin this area. In this work, we introduce B-MoCA: a novel benchmark designed\nspecifically for evaluating mobile device control agents. To create a realistic\nbenchmark, we develop B-MoCA based on the Android operating system and define\n60 common daily tasks. Importantly, we incorporate a randomization feature that\nchanges various aspects of mobile devices, including user interface layouts and\nlanguage settings, to assess generalization performance. We benchmark diverse\nagents, including agents employing large language models (LLMs) or multi-modal\nLLMs as well as agents trained from scratch using human expert demonstrations.\nWhile these agents demonstrate proficiency in executing straightforward tasks,\ntheir poor performance on complex tasks highlights significant opportunities\nfor future research to enhance their effectiveness. Our source code is publicly\navailable at https://b-moca.github.io.", |
| "main_content": "INTRODUCTION Autonomous agents controlling digital devices have great potential benefits. For example, these agents can improve the accessibility of user interactions, especially for users with physical disabilities or those facing challenges in operating devices, or boost productivity by automating tedious jobs. This leads to increased interest in developing agents for mobile device control, and diverse approaches have been introduced, including agents based on large language models (LLMs; Wen et al. 2023; Yan et al. 2023) and agents trained with human demonstrations (Sun et al., 2022; Li et al., 2023), toward assistive agents that can understand the screen layout of the devices and manipulate the user interface (UI) to follow human instructions. Despite recent progress in developing mobile device control agents based on real systems, such as Android emulators (Toyama et al., 2021; Shvo et al., 2021; Zhang et al., 2023), prior works often overlook several important properties. One is testing generalization ability across diverse device configurations, which is crucial in deploying agents in real devices. Moreover, practical tasks essential for life (such as creating an alarm or making emergency calls) are often neglected because of the challenges in defining a wide range of practical tasks with robust success criteria in various device settings. The lack of a unified benchmark encompassing these important properties has impeded scientific progress in this field. In this work, we introduce B-MoCA: a Benchmark designed for evaluating Mobile device Control Agents across diverse configurations, based on Android emulators (see Figure 1). A key feature of B-MoCA is supporting numerous customization to mirror diverse device configurations,including variations in icon placements, sizes, wallpapers, languages, and device types. Utilizing this feature, users can easily create diverse environments with various configurations to evaluate generalization ability. Additionally, we define 60 practical tasks grounded in realistic scenarios, such as opening specific applications, initializing searches over the web, and adjusting device settings. To ensure reliable evaluation, B-MoCA provides rule-based success detectors, which are based on pre-defined task completion criteria. 1 arXiv:2404.16660v1 [cs.HC] 25 Apr 2024 \fICLR 2024 Workshop on Generative Models for Decision Making Mobile Device Environment Daily Tasks Diverse Device Setups Mobile Device Control Agent LLM with Text Action MLLM with Text Action or Algorithmic Designs Vision-Language Model with UI-based Action or Figure 1: Illustration of B-MoCA. We present a realistic benchmark for assessing the performances of mobile device control agents in executing everyday tasks. To analyze generalization ability, we introduce a randomization feature that changes various device attributes. We benchmark agents leveraging LLMs or MLLMs as well as agents with vision-language models trained from scratch. We benchmark various methods for building mobile device control agents in B-MoCA. The baselines include agents employing text-only large language models (LLMs) or multi-modal LLMs (MLLMs), which benefit from extensive knowledge obtained through pre-training. We consider both closedsource models, such as GPT-4 (Achiam et al., 2023) and Gemini (Gemini et al., 2023), and opensource models, such as Llama 2 (Touvron et al., 2023) and Llama 3. 
Additionally, we train agents from scratch that directly interact with device UIs using behavior cloning (BC; Pomerleau 1988). In our experiments, we find that the agents exhibit fundamental skills in mobile device control, such as solving straightforward tasks or completing tasks in training environments. However, they struggle in more challenging scenarios, such as handling more difficult tasks or generalizing to unseen device configurations. Specifically, the agents employing LLMs or MLLMs show high robustness across diverse device configurations, while they fall short on tasks requiring multiple sequential decisions. Agents trained with BC, on the other hand, successfully mimic expert behaviors but lack generalization ability in test environments with unseen device configurations. We study the effect of different design choices in leveraging foundation models, including few-shot learning and visual prompting methods. We also analyze the effect of using pre-trained representation models and of utilizing different numbers of training device environments while training agents from scratch. Our extensive analyses reveal the limitations of existing methods in mobile device control, calling for future research. We open-source all source code and relevant materials for easy reproduction of our environments and experiments. We hope B-MoCA helps future researchers identify challenges in building assistive agents and easily compare the efficacy of their methods against prior work.

2 B-MOCA

In this section, we introduce B-MoCA: a benchmark designed to evaluate the performance of mobile device control agents on diverse device configurations in executing common daily tasks.

2.1 DESIGN FACTORS

To create a realistic benchmark for mobile device control agents, we build our benchmark based on Android, a widely used open-source operating system. In this benchmark, we frame device control as a sequential decision-making problem, reflecting the multi-step nature of real interactions (Section 2.2). Designing a meaningful benchmark for mobile device control poses a significant challenge, particularly in defining practical tasks like opening applications or adjusting device settings. To address this, we consider 60 basic tasks that involve commonly used applications like Chrome and Calendar, ensuring relevance to everyday life. Each task is equipped with a success detector to evaluate the agent\u2019s performance in accurately completing the task (Section 2.3).

Figure 2: Examples of the home screen images from environments in B-MoCA. The randomized features span icon location, font size, wallpaper, language, and device type, and challenge the generalization ability of agents.

Given the diverse nature of user mobile device setups, such as variations in icon placements, wallpaper choices, languages, and device types, it is important to test the generalization abilities of device-control agents across diverse setups. To assess generalization performance, we incorporate a randomization feature in our benchmark. This feature is designed to simulate various real-world scenarios by changing various aspects of mobile devices, such as user interface layouts and wallpapers (Section 2.4).

2.2 PROBLEM FORMULATION

In B-MoCA, we formulate the device management task as a sequential decision-making problem, where an agent interacts with an environment. Formally, given a task instruction c, an agent receives an observation ot and takes an action at based on its policy at \u223c \u03c0(\u00b7|ot, c) at each timestep t. The environment (i.e., an Android emulator) returns a success signal rt, and the environment transitions to the next observation ot+1. Observations, which capture the UI elements, can be represented as screen pixels, screen descriptions derived from the Android view hierarchy, or a combination of both. The action space comprises a dual-gesture, similar to Rawles et al. (2023), which consists of a pair of (x, y) screen locations for touch and lift. The dual-gesture action is identified as tapping the screen when the two locations are identical within a specified threshold, or as swiping the screen when the distance between the two locations exceeds this threshold. Additionally, the agent can press navigation buttons (i.e., back, home, and overview) by touching the corresponding button locations on the screen. We note that our benchmark supports text-based actions, enabling the utilization of LLMs or MLLMs (see Section 3.1 for details). We refer readers to Appendix A.1 for further details on the environment implementation.
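To make the dual-gesture semantics concrete, the following is a minimal sketch, not B-MoCA's released code, of how a (touch, lift) pair could be interpreted and executed through ADB's input command. The pixel threshold separating taps from swipes and the example button coordinates are assumed values.

```python
# Hypothetical interpreter for a dual-gesture action: two (x, y) screen
# locations for touch and lift, executed via the Android Debug Bridge.
import math
import subprocess

TAP_THRESHOLD_PX = 30  # assumed threshold separating taps from swipes

def execute_dual_gesture(touch, lift, duration_ms=300):
    (x1, y1), (x2, y2) = touch, lift
    if math.dist((x1, y1), (x2, y2)) <= TAP_THRESHOLD_PX:
        # Locations are (nearly) identical: interpret as a tap.
        cmd = ['adb', 'shell', 'input', 'tap', str(x1), str(y1)]
    else:
        # Locations differ beyond the threshold: interpret as a swipe.
        cmd = ['adb', 'shell', 'input', 'swipe',
               str(x1), str(y1), str(x2), str(y2), str(duration_ms)]
    subprocess.run(cmd, check=True)

# Navigation buttons (back/home/overview) are pressed by tapping their
# on-screen locations, e.g. execute_dual_gesture((540, 2210), (540, 2210)).
```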
2.3 DAILY TASKS

Our B-MoCA includes 60 tasks essential for managing digital devices, providing functionalities useful in daily routines. Each task is designed to be grounded in realistic situations, such as setting an alarm or enabling airplane mode. The tasks span various applications and require agents to interact with diverse UI elements, such as application icons, checkboxes, toggle switches, input fields, and sliders. For a comprehensive list of tasks, we refer readers to Appendix B.1.

Task completion is determined by a rule-based success detector implemented using the Android Debug Bridge (ADB). This success detector monitors logs from ADB and identifies successful completion based on pre-defined criteria. These criteria are established by examining ADB logs from human demonstrations for each task and selecting the log produced when the target task is completed. With these pre-defined criteria, the success detector automatically matches regular expressions against the ADB logs to signal task completion. The success signal takes the value +1 when the task is completed, and 0 otherwise. An episode terminates as a success if the success detector signals completion, or as a failure if the agent exceeds a maximum step limit without meeting the criteria.
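The following is a minimal sketch of such a rule-based detector. The per-task regular expressions shown are illustrative placeholders, not B-MoCA's actual completion criteria, which are defined in its released code.

```python
# Hypothetical rule-based success detector: dump the ADB log buffer and
# report success (+1) once a task's pre-defined regex appears.
import re
import subprocess

# Illustrative per-task completion criteria (regex over ADB logs).
SUCCESS_PATTERNS = {
    'airplane': re.compile(r'airplane_mode.*state=1'),
    'alarm1': re.compile(r'set_alarm.*09:00'),
}

def detect_success(task, max_lines=10_000):
    pattern = SUCCESS_PATTERNS[task]
    # '-d' dumps the current log buffer and exits instead of streaming.
    proc = subprocess.Popen(['adb', 'logcat', '-d'],
                            stdout=subprocess.PIPE, text=True)
    for i, line in enumerate(proc.stdout):
        if i >= max_lines:
            break
        if pattern.search(line):
            return 1  # success signal r_t = +1
    return 0
```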
2.4 ENVIRONMENT RANDOMIZATION

In mobile device control, developing agents that can generalize across various device setups is crucial. To evaluate their generalization ability, B-MoCA incorporates a randomization feature that changes icon placements and sizes, wallpapers, languages, and device types. Users can select the device type from a device list that includes popular models like Pixel 3, Pixel 4, Pixel 6, and WGXA Tablet. They can also specify the locales to set the language and region, choose wallpapers from a selection of custom images, and activate dark mode for further environmental variation. Moreover, the sizes of icons and text can vary between small, medium, and large. Lastly, applications can be randomly placed on the home screen to simulate real-world usage patterns.

Using these randomization features, we create 45 unique environments in B-MoCA, with examples shown in Figure 2. To assess generalization ability, we divide the 45 distinct environments into two sets: 35 for training and 10 for testing. We employ domain randomization (Tobin et al., 2017) to train agents, enabling them to perform tasks robustly across diverse device configurations. We then evaluate performance on the test environments, which include unseen device setups. A detailed list of the device configurations we prepare is available in Appendix A.2.

3 BASELINES

In this work, we benchmark various approaches for building mobile device control agents: LLM agents, MLLM agents, and Vision-Language-UI (VLUI) agents (see Figure 3). LLM agents and MLLM agents are built on foundation models (LLMs and MLLMs, respectively; Section 3.1). VLUI agents, which consist of vision-language encoders, are trained from scratch using human expert demonstrations (Section 3.2).

Figure 3: Illustration of baseline agents. LLM agents and MLLM agents interact with environments through an additional XML parser and action converter, obtaining text descriptions and manipulating UIs with text actions. VLUI agents directly leverage the UIs with screen images and dual-gesture actions.

3.1 LLM AGENTS AND MLLM AGENTS

Utilizing foundation models such as LLMs and MLLMs, which contain extensive knowledge and exhibit emergent capabilities, has become a major direction in developing mobile device control agents (Wen et al., 2023; Yan et al., 2023). In this work, we benchmark two types of agents that employ different foundation models: LLMs (e.g., GPT-4) and MLLMs (e.g., GPT-4V). LLM agents utilize only the text descriptions of the screen layout to generate text actions, while MLLM agents process both text and visual inputs.

To facilitate the interactions of LLM and MLLM agents with an Android emulator, we define an XML parser (Zhang et al., 2023; Yang et al., 2023b). This XML parser converts the UI elements, from the Android view hierarchy of the screen presented in XML format, into a list of text descriptions. The description includes the location of the bounding box, if necessary. Additionally, we define a set of possible action options, as detailed in Table 1, that can be converted into corresponding dual-gesture actions [1]. These action options include tapping a UI element by choosing its numeric tag, swiping the screen in pre-defined directions (up, down, left, right), and pressing a button by name.

Table 1: A set of action options for text-based agents. The options are converted into corresponding dual-gesture actions.
Action option | Description
dual-gesture(*) | Operate a dual-gesture action with arguments (*).
tap(numeric tag) | Tap the UI element labeled with the numeric tag.
swipe(direction) | Swipe in the given direction.
press(\"HOME\") | Press the home button.
press(\"BACK\") | Press the back button.
press(\"OVERVIEW\") | Press the overview button.

[1] To convert text actions to dual-gesture actions, we define an action converter. We analyze the efficacy of the action options in Appendix E.1.
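The following is a minimal sketch of these two adapters under assumed names, using a uiautomator-style XML dump; the released implementation may differ.

```python
# Hypothetical XML parser (view hierarchy -> numbered text descriptions)
# and action converter (text option -> dual-gesture (touch, lift) pair).
import xml.etree.ElementTree as ET

def parse_bounds(bounds):
    # bounds look like '[0,72][1080,240]' in a uiautomator dump
    left, right = bounds[1:-1].split('][')
    x1, y1 = map(int, left.split(','))
    x2, y2 = map(int, right.split(','))
    return x1, y1, x2, y2

def parse_view_hierarchy(xml_dump):
    # Returns one description line and one tap point per labeled node.
    descriptions, centers = [], []
    for node in ET.fromstring(xml_dump).iter('node'):
        text = node.get('text') or node.get('content-desc')
        bounds = node.get('bounds')
        if not text or not bounds:
            continue
        x1, y1, x2, y2 = parse_bounds(bounds)
        tag = len(descriptions)
        descriptions.append(f'{tag}: {text} at {bounds}')
        centers.append(((x1 + x2) // 2, (y1 + y2) // 2))
    return descriptions, centers

def convert_action(text_action, centers, screen=(1080, 2280)):
    w, h = screen
    name, arg = text_action.rstrip(')').split('(')
    if name == 'tap':
        point = centers[int(arg)]
        return point, point  # identical locations -> tap
    if name == 'swipe':
        deltas = {'up': (0, -h // 3), 'down': (0, h // 3),
                  'left': (-w // 3, 0), 'right': (w // 3, 0)}
        dx, dy = deltas[arg]
        return (w // 2, h // 2), (w // 2 + dx, h // 2 + dy)
    # press options would map to fixed navigation-button coordinates
    raise ValueError('unsupported action option: ' + text_action)
```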
With these text-based observations and actions, we prompt the foundation models with the agent's role, the action space definition, the goal, (optional) few-shot examples, and the current observation. Our prompts, outlined in Figure 4, also incorporate the Chain-of-Thought technique (Wei et al., 2022) to enhance the reasoning ability of the agents by enforcing a certain output format.

Role: You are an agent that is trained to perform daily tasks on digital devices, such as smartphones [...]
Action space: You need to select an action option [...]
Goal: [...]
(Optional) Few-shot examples: [...]
Output format: Your output should follow the given format
\u2022 Description: Describe what you observe in the input
\u2022 Thought: To complete the given task, what is the next step
\u2022 Action: The function call with the correct parameters
Observation: [...]
Figure 4: An overview of the prompt for text-based agents, with relevant information abbreviated as [...]. The complete prompt is in Appendix C.1.

3.2 VLUI AGENTS

Despite the promising results of LLMs, leveraging these foundation models presents several challenges, such as the necessity of auxiliary interfaces or difficulties in fine-tuning. Thus, we also investigate another type of agent that can be trained from scratch: VLUI agents, named after the vision-language model with UI actions. Characterized by their direct interaction with device UIs in a human-like manner, these agents benefit from the easy incorporation of human demonstrations for training, potentially improving learning efficiency.

In detail, VLUI agents take a task instruction and screen images as input and produce a dual-gesture action as output. Input embeddings are extracted using vision and language encoders, and a transformer (Vaswani et al., 2017) module is utilized to process these embeddings and generate the dual-gesture actions. Specifically, we train a deterministic multi-task policy \u03c0\u03b8(at|ot, c) using BC (Pomerleau 1988; Schaal 1996). The parameters \u03b8 of the policy are optimized to imitate the human expert demonstrations D = {(ot, a\u2217t, c)} by minimizing the following objective with the mean squared error function L(\u00b7):

\u2211_{(ot, a\u2217t, c) \u223c D} L(\u03c0\u03b8(at|ot, c), a\u2217t).

We refer readers to Appendix C.2 for more details on the architecture of VLUI agents.
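This objective can be realized, for instance, with the following minimal PyTorch sketch; the policy architecture, batching, and data pipeline are assumed, and the function names are illustrative rather than B-MoCA's own.

```python
# Hypothetical behavior-cloning step: a deterministic policy regressed
# onto expert dual-gesture actions with mean squared error.
import torch
import torch.nn as nn

def bc_update(policy, optimizer, batch):
    # batch holds (o_t, c, a*_t) tuples drawn from the demonstrations D
    obs, instruction, expert_action = batch
    predicted_action = policy(obs, instruction)  # dual-gesture, shape (B, 4)
    loss = nn.functional.mse_loss(predicted_action, expert_action)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```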
4 EXPERIMENTS

We design our experiments to investigate the following research questions:
\u2022 Can baseline agents perform daily tasks on mobile devices? (Section 4.2)
\u2022 What are the distinctive characteristics of each agent? (Section 4.2)
\u2022 What are the effects of different design choices for LLM or MLLM agents? (Section 4.3)
\u2022 How crucial are pre-training and training data diversity for VLUI agents? (Section 4.4)

Figure 5: Average success rates of the baseline agents in the test environments (per-task panels: Airplane, Alarm1, Alarm2, Brightness, Call 911, and Language; y-axis: success rates (%); agents: VLUI, MLLM (GPT-4V), MLLM (Gemini-Pro-V), LLM (GPT-4), LLM (Gemini-Pro)). We report the mean and standard error across three runs. LLM agents use three-shot learning; MLLM agents are without SoM prompting and use one-shot learning for Gemini-Pro-V (due to maximum context length) or three-shot learning for GPT-4V. The text-based agents with GPT-4 or GPT-4V show the best performances on Airplane, Alarm1, and Brightness, while VLUI agents show better performances on Alarm2, Call 911, and Language.

4.1 EXPERIMENTAL SETUP

In our experiments, we evaluate LLM agents, MLLM agents, and VLUI agents on six representative tasks: Airplane, Alarm1, Alarm2, Brightness, Call 911, and Language. These tasks are selected to cover navigating multiple pages in target applications and manipulating diverse UI elements whose configurations vary across device settings. For example, on Alarm2, the agents need to reach the alarm tab in the clock application and adapt to clock UIs rendered as either rectangles or circles with different size options. We display exemplary expert demonstrations for these tasks in Appendix B.2. The task instructions are as follows:
\u2022 Airplane: \u201cturn on airplane mode\u201d
\u2022 Alarm1: \u201cturn on alarm at 9 am\u201d
\u2022 Alarm2: \u201ccreate an alarm at 10:30 am\u201d
\u2022 Brightness: \u201cdecrease the screen brightness in setting\u201d
\u2022 Call 911: \u201ccall 911\u201d
\u2022 Language: \u201cgo to the \u2018add a language\u2019 page in setting\u201d

For LLM agents, we employ the closed-source models Gemini-Pro (Gemini et al., 2023) and GPT-4 (GPT-4-0125-preview; Achiam et al. 2023) [2]. We study LLM agents in both zero-shot and few-shot settings. For few-shot learning, we sample examples from 210 human expert demonstrations (see Appendix D.1 for dataset collection). For MLLM agents, we leverage Gemini-Pro-V and GPT-4V (GPT-4-vision-preview). We report MLLM agents only in the few-shot setting and investigate visually grounding the agents with Set-of-Mark (SoM) prompting (Yang et al., 2023a). We provide more details on the configurations of LLM and MLLM agents in Appendix C.3.

For VLUI agents, we train multi-task policies where each policy performs all six tasks. The policies are trained with BC using the 210 human expert demonstrations [3]. We refer readers to Appendix C.4 for more details on the training procedures of VLUI agents. For each evaluation, we measure the success rates of the agents in the 10 test environments and compute the average success rates. These success rates are automatically computed by the rule-based success detector. We report the mean and standard error across three different runs.

[2] We include experiments with the open-source models Llama 2 (Touvron et al., 2023), Llama 3, and AgentLM (Zeng et al., 2023) in Appendix E.2.
[3] We also include experimental results of VLUI agents trained with offline reinforcement learning, employing the success signals as rewards, in Appendix E.3.
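The numbers reported below follow this protocol. As a minimal sketch of the aggregation, assuming a hypothetical run_episode helper that returns the detector's 0/1 success signal:

```python
# Hypothetical evaluation loop: success rates averaged over the 10 test
# environments, with mean and standard error across three runs.
import statistics

def evaluate(agent, task, test_envs, run_episode, n_runs=3):
    per_run = []
    for seed in range(n_runs):
        successes = [run_episode(agent, task, env, seed) for env in test_envs]
        per_run.append(100 * sum(successes) / len(successes))  # rate in %
    mean = statistics.mean(per_run)
    stderr = statistics.stdev(per_run) / len(per_run) ** 0.5
    return mean, stderr
```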
4.2 MAIN RESULTS

Figure 5 shows the success rates of LLM agents, MLLM agents, and VLUI agents in the test environments. LLM agents and MLLM agents utilize their pre-trained base knowledge and few-shot examples to complete simple tasks with high performance (e.g., more than 70% on Airplane and Brightness with GPT-4 or GPT-4V), but their success rates drop significantly as the tasks become complex (e.g., less than 30% on Alarm2 even with GPT-4 or GPT-4V). VLUI agents, on the other hand, imitate the behaviors of experts and exhibit average success rates higher than 50% on all tasks, except 47% on Alarm2. However, all methods still show low performance (less than 60%) on complex tasks (i.e., Alarm2 and Call 911), which calls for new algorithms. We provide more remarks on each agent type below.

Figure 6: The common failure modes of the agents. (a) LLM agents fail to complete sequential steps, (b) MLLM agents miss details in the images, and (c) VLUI agents tap the wrong icon locations.

Table 2: Success rates of text-based agents with different prompting methods. While few-shot examples help LLM agents with GPT-4, we observe no significant gain from SoM prompting for MLLM agents with GPT-4V.
Task | LLM (zero-shot) | LLM (few-shot) | MLLM (w/o SoM) | MLLM (w/ SoM)
Airplane | 53 \u00b1 03 | 73 \u00b1 12 | 80 \u00b1 06 | 83 \u00b1 03
Alarm1 | 42 \u00b1 13 | 67 \u00b1 03 | 60 \u00b1 15 | 62 \u00b1 09
Alarm2 | 00 \u00b1 00 | 00 \u00b1 00 | 23 \u00b1 03 | 17 \u00b1 03
Brightness | 73 \u00b1 12 | 73 \u00b1 09 | 87 \u00b1 03 | 83 \u00b1 03
Call 911 | 00 \u00b1 00 | 03 \u00b1 03 | 53 \u00b1 03 | 33 \u00b1 09
Language | 27 \u00b1 06 | 43 \u00b1 09 | 43 \u00b1 09 | 47 \u00b1 17

Robustness of LLM agents and MLLM agents. Both types of agents employing foundation models show robust performance across diverse device configurations. It is straightforward that these agents are robust to randomization of visual appearance, such as icon locations or font size, since the locations of the UI elements are described in the Android view hierarchy. In addition, LLM agents with both Gemini-Pro and GPT-4 are robust to language changes, with descriptions of UI elements in different languages. In particular, these agents generalize to languages in test environments, e.g., Korean and Egyptian Arabic, which are not included in the few-shot examples.

Remaining challenges for LLM agents. While exhibiting robust performance across diverse device settings, LLM agents show several limitations. First, the agents face difficulties with long-horizon tasks, which require completing a precise sequence of multiple actions. For example, on Call 911, the agents often make mistakes while typing the sequence 9-1-1, as shown in Figure 6(a). Second, the agents struggle to leverage few-shot examples adaptively. For instance, on Brightness, we observe LLM agents naively copying the few-shot examples from different device configurations without adjusting them to the current environment.

Efficacy of multi-modal input for MLLM agents. We confirm the effectiveness of image input with MLLM agents employing GPT-4V, as large increases in success rates are observed on Alarm2 and Call 911 compared to LLM agents with GPT-4. However, MLLM agents share the challenges of LLM agents in accurately executing complex tasks. Moreover, they still fall short in understanding details of the visual input, such as the small interface for setting AM/PM on Alarm2, as shown in Figure 6(b). MLLM agents with Gemini-Pro-V show significantly lower performance than LLM agents with Gemini-Pro, presumably due to the longer context length of multi-modal inputs. These results indicate the remaining headroom in leveraging multi-modal inputs more efficiently.

Generalization ability of VLUI agents. We observe that training VLUI agents with BC can lead to high performance on many complex tasks where MLLM agents fail.
These agents are robust to unseen wallpapers, as they are trained with multiple different background images. They can also generalize their actions to unseen devices, e.g., Pixel 4, even though they are trained only on a single device type, i.e., Pixel 3. However, VLUI agents begin to fail on tasks with severe visual changes induced by unseen device configurations. While they exhibit success rates higher than 90% in training environments, performance degrades to less than 70% in test environments (see Appendix D.2 for more details). Specifically, they struggle to handle unseen locations of UI elements, as shown in Figure 6(c). We believe these findings reveal the importance of diversity in training data from randomized environments (see Section 4.4 for more discussion).

Figure 7: Success rates of VLUI agents with visual encoders randomly initialized or pre-trained (per-task panels: Airplane, Alarm1, Alarm2, Brightness, Call 911, Language; y-axis: success rates (%)). Pre-training helps the performance of the agents.

Figure 8: Success rates of VLUI agents with varying numbers of training environments (7, 21, or 35). The success rates escalate with more environments.

4.3 INVESTIGATIONS ON DESIGN CHOICES FOR LLM AGENTS AND MLLM AGENTS

The performance of LLM agents relies heavily on how the input prompts are tailored. For example, considering only the leaf UI elements of the Android view hierarchy to describe the screen layout, similar to prior work (Li et al., 2020; Yang et al., 2023b), might result in meaningless descriptions in certain applications (e.g., the settings application on Airplane and Language). In this work, we leverage the text attributes of all available nodes to avoid such collapse, while we believe there can be simpler yet more expressive representation methods.

In addition, we observe that few-shot examples can significantly improve the performance of LLM agents with GPT-4 compared to zero-shot cases. As shown in Table 2, equipping the prompt with few-shot examples improves the performance from 42% to 67% on Alarm1 and from 27% to 43% on Language. However, employing few-shot examples does not always help agents, as shown on Alarm2 or Brightness. We note that naive exploitation of expert demonstrations might lead to excessive increases in computational cost, and we highlight the necessity of efficient few-shot prompting.

Moreover, we investigate the effect of common visual prompting methods for MLLM agents with GPT-4V. To enhance the visual grounding ability of MLLMs, prior studies (Yan et al., 2023; Yang et al., 2023b) have actively adopted SoM prompting, where each UI element in the input image is marked with a numeric tag. However, we find that SoM prompting can often significantly degrade the performance of MLLM agents on Alarm2 and Call 911, as shown in Table 2. We hypothesize that the numeric tags may cause confusion when overlaid on UI elements with numbers, such as dial buttons or clock interfaces. For examples of the inputs used in SoM prompting, see Appendix D.3.

4.4 EFFECTS OF PRE-TRAINED ENCODERS AND DATA DIVERSITY FOR VLUI AGENTS

The main challenge for VLUI agents is the lack of generalization ability, as mentioned in Section 4.2. Hence, we examine different algorithmic designs for the representation model of VLUI agents and the effects of training diversity on performance robustness.
We also include an additional experiment with varying model sizes of visual encoders in Appendix E.4.

First, we compare VLUI agents with two different designs: visual encoders with randomly initialized parameters and visual encoders pre-trained on ImageNet (Krizhevsky et al., 2017). As shown in Figure 7, we observe significant improvements in success rates with pre-training, e.g., from 37% to 63% on Language. These results demonstrate the benefit of employing pre-trained representation models, and we expect further improvements can be induced by leveraging more Android-specific images for pre-training (Sun et al., 2022; Rawles et al., 2023).

Furthermore, we train VLUI agents while progressively increasing the number of training environments (see Appendix D.1 for more details of the experiment setting). As shown in Figure 8, the performance of VLUI agents escalates as the number of training environments increases. Specifically, the agents exhibit success rates of 20%, 40%, and 63% on Language with 7, 21, and 35 training environments, respectively. We believe this verifies the efficacy of the environment randomization feature incorporated in our benchmark toward more practical agents.

5 RELATED WORK

Foundation models for decision-making systems. Inspired by the strong emergent properties of foundation models (Brown et al., 2020; Wei et al., 2022), many studies have adopted LLMs to develop decision-making systems (Yao et al., 2023; Shinn et al., 2023). In robot learning, for example, LLMs have been widely employed for reasoning, planning, manipulation, and navigation (Driess et al., 2023; Liang et al., 2023; Huang et al., 2023). Furthermore, agents with LLMs have shown capabilities of performing interesting tasks in numerous simulated worlds, including game environments (Wang et al., 2023; Tan et al., 2024) and virtual reality (Qian et al., 2023; Yang et al., 2024). Recently, with a focus on practicality, solving computer tasks with foundation models has also been actively explored (Nakano et al., 2021; Furuta et al., 2023). We further study the abilities of foundation models to control mobile devices toward assistive agents in real life.

Developing assistive agents for device control. For agents that effectively understand and manipulate UI elements, a large body of work has leveraged structural information, such as the document object model in HTML or the Android view hierarchy (Branavan et al., 2010; Gur et al., 2019). In addition, methods for equipping agents with the ability to understand information-rich screen images have been widely investigated, mainly with vision-based reinforcement learning (Liu et al., 2018; Humphreys et al., 2022; Shaw et al., 2023). Recently, diverse strategies for building device control agents with foundation models have been introduced, including prompting methods (Wen et al., 2023; Kim et al., 2023), instruction-tuning (Furuta et al., 2023), fine-tuning with images (Zhan & Zhang, 2023; Hong et al., 2023), and visual prompting (Yan et al., 2023; Yang et al., 2023b). Here, we present an elaborate analysis of the main methods for building mobile device control agents.

Benchmarks for decision-making agents. There have been continuous efforts to build reliable benchmarks for sequential decision-making in video games (Bellemare et al., 2013), locomotion (Brockman et al., 2016), and robotic manipulation (James et al., 2020).
Lately, researchers have proposed benchmarks for solving device control tasks, viewing it as another decision-making problem. For example, Yao et al. (2022) and Zhou et al. (2024) have presented benchmarks simulating web platforms, while Toyama et al. (2021), Shvo et al. (2021), and Zhang et al. (2023) have suggested RL environments adopting Android emulators. In this work, inspired by special-purpose benchmarks quantifying the robustness of agents (Cobbe et al., 2020; Stone et al., 2021), we newly propose a benchmark with a randomization feature.

6 DISCUSSION &",
| "additional_info": [ |
| { |
| "url": "http://arxiv.org/abs/2403.17674v1", |
| "title": "Depending on yourself when you should: Mentoring LLM with RL agents to become the master in cybersecurity games", |
| "abstract": "Integrating LLM and reinforcement learning (RL) agent effectively to achieve\ncomplementary performance is critical in high stake tasks like cybersecurity\noperations. In this study, we introduce SecurityBot, a LLM agent mentored by\npre-trained RL agents, to support cybersecurity operations. In particularly,\nthe LLM agent is supported with a profile module to generated behavior\nguidelines, a memory module to accumulate local experiences, a reflection\nmodule to re-evaluate choices, and an action module to reduce action space.\nAdditionally, it adopts the collaboration mechanism to take suggestions from\npre-trained RL agents, including a cursor for dynamic suggestion taken, an\naggregator for multiple mentors' suggestions ranking and a caller for proactive\nsuggestion asking. Building on the CybORG experiment framework, our experiences\nshow that SecurityBot demonstrates significant performance improvement compared\nwith LLM or RL standalone, achieving the complementary performance in the\ncybersecurity games.", |
| "authors": "Yikuan Yan, Yaolun Zhang, Keman Huang", |
| "published": "2024-03-26", |
| "updated": "2024-03-26", |
| "primary_cat": "cs.CR", |
| "cats": [ |
| "cs.CR", |
| "cs.AI", |
| "cs.MA" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM AND Agent", |
| "gt": "Integrating LLM and reinforcement learning (RL) agent effectively to achieve\ncomplementary performance is critical in high stake tasks like cybersecurity\noperations. In this study, we introduce SecurityBot, a LLM agent mentored by\npre-trained RL agents, to support cybersecurity operations. In particularly,\nthe LLM agent is supported with a profile module to generated behavior\nguidelines, a memory module to accumulate local experiences, a reflection\nmodule to re-evaluate choices, and an action module to reduce action space.\nAdditionally, it adopts the collaboration mechanism to take suggestions from\npre-trained RL agents, including a cursor for dynamic suggestion taken, an\naggregator for multiple mentors' suggestions ranking and a caller for proactive\nsuggestion asking. Building on the CybORG experiment framework, our experiences\nshow that SecurityBot demonstrates significant performance improvement compared\nwith LLM or RL standalone, achieving the complementary performance in the\ncybersecurity games.", |
| "main_content": "Introduction Cybersecurity operations involve the participation of various entities such as attackers and defenders. With the advancement of artificial intelligence (AI), autonomous cyber operation (ACO) agents have emerged as a promising solution in cybersecurity operations [Vyas et al., 2023]. These agents continually engage in adversarial learning within network environments, enhancing their strategic capabilities. The recent proliferation of large language models (LLMs) has significantly bolstered the capabilities of autonomous agents [Wang et al., 2023a]. In comparison to traditional machine learning agents, LLM agents possess extensive knowledge, enabling them to handle richer and more complex information, coupled with robust contextual and reasoning abilities [Lin et al., 2023; Wang et al., 2023b; Wang et al., 2023c]. They not only surpass state-of-the-art methods as novel tools [Xia et \u2217These authors contributed equally. \u2020Corresponding author. al., 2023] but also exhibit formidable interactive capabilities as assistants or agents [Sandoval et al., 2023]. However, LLM agents lack the specific knowledge of the local environment, incur higher training costs [Hu et al., 2023] and can stuck in hallucinations [Ji et al., 2023; Chen and Shu, 2023], while also presenting attackers with powerful weapons, making them double-edge sword for cybersecurity [Chen and Shu, 2023; Taddeo et al., 2019]. Recent research attempts to frame ACO as partially observable Markov processes (POMDP), employing reinforcement learning (RL) methods to train autonomous agents [Standen et al., 2021; Team., 2021]. However, without appropriate tuning methods, RL agents tend to converge to local optima, lacking robustness and generalization capabilities despite achieving favorable results [Palmer et al., 2023]. As prior studies have demonstrated that collaborations among multiple agents can enhance team performance [Dong et al., 2023; Ma et al., 2023], enabling the effective collaborations between LLM agents and RL agents, which can leverage the generalization knowledge of LLMs and the specialized knowledge of RLs in cybersecurity scenarios, can be promising to achieve complementary performance beyond that of individual agent. Hence, we introduce the SecurityBot, a collaborative framework utilizing RL agents as mentors for LLM agent to support cybersecurity operations. We integrate four effective modules \u2013 profiles, memory, reflection and action \u2013 into the LLM. Simultaneously, we propose a dynamic mechanism consisting of a cursor to dynamically incorporate RL agents\u2019 suggestions, an aggregator to rank suggestions from different RL agents, as well as a caller to proactively request mentoring from RL agents. We conduct experiments on the open-source ACO research platform, CybORG [Standen et al., 2021], comparing the red team (attacker) task and blue team (defender) task performance among: (1)independently executing RL or LLM agents (Independent), (2) collaboration between a LLM agent and a RL agent (Single-Mentor), and (3) collaboration between a LLM agent and multiple RL agents (Multi-Mentors). Our experimental results demonstrate that the developed SecurityBot can effectively improve both the red team and blue team task performance compared to independent LLM or RL approaches. Furthermore, while mentoring from multiple RL agents can be beneficial, the guidance of poorly performing RL agents may be noise to, and result into unstable performance. 
Our contributions are as follows:
\u2022 We introduce SecurityBot, a mechanism to enable effective collaboration between LLM and RL agents, leveraging RL agents as mentors to accelerate learning for LLM agents and achieve complementary performance.
\u2022 The collaboration of LLM and RL agents demonstrates performance improvement in both red team and blue team tasks, providing a promising solution for autonomous agents in cybersecurity operations.

2 Related Work

2.1 LLMs for cybersecurity operations

Given the rapid development of LLMs and the eagerness to incorporate advanced AI into cybersecurity operations [Iannone et al., 2022], recent studies have started to explore using LLMs to enhance cybersecurity, while several lines of evidence also reveal that abusing LLMs can bring advanced threats, making them a double-edged sword [Taddeo et al., 2019; Yao et al., 2023].

LLMs to enhance cybersecurity. LLMs demonstrate advantages in both code security and data security [Noever, 2023; Ali and Kostakos, 2023; Qi et al., 2023]. For example, Fuzz4All [Xia et al., 2023] utilizes LLMs as input generators and mutation engines to generate diverse inputs for various programming languages, achieving a 36.8% coverage improvement compared to previous state-of-the-art techniques. Additionally, compared to traditional machine learning approaches, LLMs possess more powerful natural language processing and contextual understanding capabilities, allowing them to elevate cybersecurity from specific to more macroscopic tasks. For example, some studies [Deng et al., 2023; Pearce et al., 2023] utilized these capabilities in specific security tasks to enhance effectiveness, while McIntosh et al. [McIntosh et al., 2023] take a further step and compare GPT-generated Governance, Risk, and Compliance (GRC) policies with those from established security vendors and government cybersecurity agencies, recommending GPT integration into companies' GRC policy development.

LLMs' double-edged sword role for cybersecurity. However, applying LLMs to cybersecurity is a double-edged sword [Taddeo et al., 2019]: their generative nature can lead to hallucinations (the generation of misleading or incorrect content), and they cannot effectively discern security-related fallacies, which can be catastrophic for high-stakes security tasks [Ji et al., 2023]. These errors can compromise sensitive operations, thereby introducing substantial risks [Chen and Shu, 2023]. As LLMs become more integrated into security frameworks, the imperative to address and mitigate these challenges grows ever more critical. Furthermore, LLMs present attackers with powerful weapons. Recent studies have demonstrated that LLMs can significantly enhance attacks at the hardware [Yaman, 2023], software, and network [Chen and Shu, 2023] levels; in particular, LLMs possess human-like reasoning capabilities, making user-level attacks even more severe [Yao et al., 2023; Falade, 2023; Botacin, 2023].

2.2 Collaboration mechanisms to improve LLMs

Recent studies have explored different mechanisms to support LLMs' collaborations with other agents, either LLM-based or RL-based, including:

Role-based multi-LLM-agent collaboration. Within LLM-based multi-agent systems, LLM-based agents are assigned different roles, like decomposing complex tasks, identifying errors, and collecting multiple perspectives.
They then collaborate with each other through a series of processes to resolve complex tasks such as software development [Dong et al., 2023; Qian et al., 2023; Hong et al., 2023], sociological investigations [Park et al., 2023; Wang et al., 2023b; Zhang et al., 2023], simulation of multiplayer games [Sandoval et al., 2023; Xu et al., 2023], and various challenges (such as logical reasoning, stock advice, blog composing, and more) [Li et al., 2023; Wu et al., 2023; Talebirad and Nadiri, 2023]. In particular, different role-based agents exchange ideas through conversation, use tools to undertake tasks, and garner feedback, leading to successful collaboration [Wang et al., 2023a].

Dual-process-based LLM-RL collaboration. Dual process theory holds that human cognition consists of two mental systems, where System 1 is autonomous and characterized by rapid intuition, while System 2 controls slow, deliberate thinking [Wason and Evans, 1974; Kahneman, 2011]. Grounded in this theory, SwiftSage introduces a framework that enables a small RL model, acting as the System 1 component, to collaborate with an LLM-based agent, acting as the System 2 component. This structure effectively solves complex problems while reducing the cost of inference [Lin et al., 2023].

LLMs setting guidance to support RL. Some recent studies incorporate an LLM to generate or learn the reward function for RL agents, aiming to simplify the reward function design process [Ma et al., 2023; Carta et al., 2022]. For example, [Micheli et al., 2023; Kwon et al., 2023; Du et al., 2023] use an LLM as a proxy reward function to guide RL agents in environments without clear reward signals. Additionally, [Brohan et al., 2023; Dasgupta et al., 2023] utilize an LLM-based agent as a planner to guide RL agents in complex and dynamic environments.

RL acting as an expert to guide LLMs' decisions. LLMs demonstrate powerful generalization abilities, but in specific scenarios they perform poorly due to the lack of expert trajectories. In contrast, RL models possess expert trajectories. Hence, [Hu et al., 2023; Wan et al., 2022] use RL methods to assist the LLM-based agent in comprehending the environment and mastering expert-like actions, which results in better performance and lower interaction cost.

Overall, LLMs have demonstrated promising potential in enhancing cybersecurity operations, while their double-edged sword role raises specific concerns. Additionally, recent studies have explored different collaborations with LLMs, but this line of work is still in its early stage, especially for cybersecurity operations. Hence, using the cybersecurity adversarial game as the research context, we design a framework with four plugin modules and three collaboration mechanisms to power LLMs for cybersecurity operations, including acting as both attackers and defenders.

3 Cybersecurity Adversarial Game and Pre-trained RL Agents

Before detailing our design, we briefly introduce our research context: the cybersecurity adversarial game. In particular, we have constructed a cybersecurity adversarial game utilizing CybORG [Standen et al., 2021], an exemplary RL-based Autonomous Cyber Operation (ACO) gym. ACO supports the creation of decision-making agents for both the blue team (defender) and the red team (attacker) in adversarial scenarios, and conveys structured and unstructured information, enabling the adaptation of both RL and LLM agents.
3.1 Cybersecurity Adversarial Games

The scenario adopted in this study is derived from TTCP CAGE Challenge 1 [1], an open challenge on CybORG in 2021. As illustrated in Figure 1, the red and blue teams compete in a simulated network environment, which can be modeled as a partially observable Markov decision process (POMDP). At each step, the red team and blue team take actions sequentially in the environment, causing changes in the environmental state.

[1] https://github.com/cage-challenge/cage-challenge-1

Environment & Observation. The environment comprises a network consisting of 13 hosts divided into three subnets. The red team commences from the foot node in the user subnet without knowledge of any other hosts. The blue team possesses information about all hosts but lacks knowledge of the red team's access status on the hosts. For both the red and blue team RL agents, the vector observation at each step encompasses: (1) whether the last action succeeded, (2) whether the adversary has operated on a specific host, and (3) the red team's access status on a specific host. Note that the observation is not guaranteed to be accurate due to the presence of an adversary.

Figure 1: A POMDP cybersecurity adversarial game. The red host in the User Subnet represents the foot node of the red team. The blue host in the Enterprise Subnet represents the defender host of the blue team.

Action & Reward. As shown in Figure 2, the two teams each have three reciprocal actions that cause transitions in a host's access status. The red team achieves lateral movement between subnets by discovering new hosts through connections from a privileged host. We set the game to be zero-sum, which means that the blue team's reward is the opposite of the red team's reward. The reward at each step is based on the extent of the red team's exploitation:

Reward_t = \u2211_{i=1}^{n} V_{i,t} \u00d7 A_{i,t}    (1)

where V_{i,t} and A_{i,t} represent the value and the access status of host_i at step t, respectively.

Figure 2: Action-Status Transition. Red text represents red team actions; blue text represents blue team actions.

3.2 Pre-trained RL Agents

In this study, we choose three representative RL algorithms to train red team and blue team agents [2]:
\u2022 A3C (Asynchronous Advantage Actor-Critic) [Mnih et al., 2016] combines policy gradient and value function methods by asynchronously training multiple agents to improve efficiency.
\u2022 DQN (Deep Q-Network) [Mnih et al., 2013] utilizes deep neural networks to approximate the Q-value function to guide the agent's decisions.
\u2022 PPO (Proximal Policy Optimization) [Schulman et al., 2017], a policy gradient method, ensures stability through proximal policy optimization, restricting the magnitude of policy updates.

[2] Our framework is flexible enough to use other RL algorithms.

The RL-based environment facilitates agent training. Red team and blue team agents are trained separately, with one agent trained at a time. For the agent's adversary, we apply the fixed-strategy agents provided in CybORG. In particular, when training a red-team RL agent, we use a blue-team agent with a fixed strategy that randomly performs Remove or Restore operations when encountering suspicious hosts during each Monitor action. When training a blue-team RL agent, the red-team agent serving as the adversary gains access to network nodes one by one based on a breadth-first strategy. Our approach aligns with the conventional RL training paradigm, wherein the agent takes an action at each step, assimilates new observations and associated rewards, and incrementally refines its strategic framework.
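As a minimal sketch of the zero-sum reward in Equation (1), instantiated with the host values and access-status weights later listed in Table 1 (Section 5.1); the host bookkeeping structure is assumed:

```python
# Hypothetical per-step reward: Reward_t = sum_i V_i,t * A_i,t, where the
# blue team receives the negation of the red team's reward (zero-sum).
HOST_VALUE = {'User': 0.1, 'Enterprise': 1.0, 'Operational': 10.0}   # V_i
ACCESS_WEIGHT = {'Unknown': 0.0, 'Known': 0.0,
                 'Exploited': 0.5, 'Privileged': 0.89}               # A_i

def step_reward(hosts):
    # hosts maps each host name to a (subnet, access_status) pair
    red = sum(HOST_VALUE[subnet] * ACCESS_WEIGHT[status]
              for subnet, status in hosts.values())
    return red, -red  # (red team reward, blue team reward)
```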
4 SecurityBot: an LLM-based agent mentored by RL agents

As shown in Figure 3, our SecurityBot contains three main parts: an LLM-based agent, a pool of pre-trained RL agents serving as mentors, and their collaboration mechanisms.

Figure 3: The framework of SecurityBot: an LLM-based, RL-mentored agent for cybersecurity operations.

4.1 LLM Agent Design

Building upon the GPT-3.5-turbo LLM, our LLM agent includes four plugin modules for decision making at each step:

Profile module. As shown in Figure 4, the Profile module initializes each agent's role, goal, and available actions depending on its role. In particular, we design a prompt including the expected format for the observed environment as the input, and the expected output, which is an action sequence consisting of a series of actions with their goal, trigger, following actions, and expected outcome. When initializing the LLM agent, we use this prompt, together with the assigned goal, actions, and environment format, to ask the LLM to generate an action sequence and add it to the profile, serving as the global behavior guidance for the LLM agent.

Figure 4: Illustration of the profile module, including examples of roles, goals, actions, and the environment format, the generated behavior guidance (the bottom part), and the process to generate the behavior guidance (the upper part).

Memory module. The Memory module is used to store past experiences and search for related ones to support decision making at each step.
Memory Storage. The memory module stores records including the timestamp, observed environment, action taken, and the outcome, including the action status (success or failure) and its reward. In particular, when storing each memory record, the LLM agent rates its Importance by prompting the LLM to score it on a scale of 0 to 10.
Memory Searching. When searching memories to support action selection at each step, the LLM agent calculates each memory record's Relevance and Freshness:
\u2022 Relevance: measures the similarity of the record's environment to the current one. We transform each environment into a vector and then calculate their cosine similarity.
\u2022 Freshness: measures the record's recency, represented as the reciprocal of its timestamp gap with the current step.
Finally, we calculate the product of the importance, relevance, and freshness for each memory record and select the top two as the memory input for the LLM when making a decision.
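A minimal sketch of this scoring rule, assuming observations are already vectorized; the record layout and names are illustrative:

```python
# Hypothetical memory search: score = importance x relevance x freshness,
# then return the top-k (here, top two) records.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def search_memory(records, current_env_vec, current_step, top_k=2):
    # records: list of dicts with keys 'step', 'env_vec', 'importance' (0-10)
    def score(rec):
        relevance = cosine(rec['env_vec'], current_env_vec)
        freshness = 1.0 / max(current_step - rec['step'], 1)  # reciprocal gap
        return rec['importance'] * relevance * freshness
    return sorted(records, key=score, reverse=True)[:top_k]
```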
Action module. The Action module plays a crucial role in guiding the LLM agent to take valid actions at each step. In particular, given the observed environment and the available actions provided by the profile, this module generates the action space with all the potential actions that the agent could take.

Reflection module. Given the complex and dynamic environment, where the adversary agent may change the environment in ways unobservable to the LLM agent, the LLM agent may encounter dilemma situations, reflected as repetitive actions or diminishing rewards. For example, the red agent might persist in attacking a host in the network even when such an action has been proven futile. Hence, the reflection module is designed to monitor the dilemma status and trigger the reflection process.
Dilemmas Monitor. At every step, the Reflection module evaluates both the Reward List and the Action List from the previous steps. If there is no increase in rewards or if the agent repeats an action, the module collects these suspicious actions, including the series of actions associated with those records, and then activates the reflection process.
Reflection Process. The reflection process passes these suspicious actions to the Action module and removes them if they are included in the generated action space. Additionally, as shown in Figure 5, the process provides the LLM with a prompt elucidating that the agent is stuck in a dilemma and suggesting possible reasons, to guide the LLM to choose other actions and get out of the dilemma situation.

Figure 5: The prompt for the Red Agent from the reflection module to motivate the LLM to choose other attack actions.

4.2 Collaboration with RL agents

Using RL agents as mentors to guide the LLM agent is critical for SecurityBot to achieve better performance. More specifically, as shown in Figure 6, we design three collaboration mechanisms:

Figure 6: Mechanisms for collaboration with RL agents. Different colors refer to suggestions from different RL mentors.

Cursor: growing to be independent. Firstly, the RL agents are pre-trained in the same environment specifically to guarantee that they can provide knowledge to mentor the LLM agent toward better decisions, especially in the early stage when the LLM agent has no information about the environment. However, as time goes by, the LLM agent, with its capacity to understand complex environments and its accumulated experience, can surpass the RL mentors (which we report later). Hence, we design the Cursor mechanism to decide whether the LLM agent should take suggestions from RL agents. In particular, at each step t, the Cursor module calculates an independence value ind_t, and only when ind_t is below a given threshold \u03b8_ind will the LLM agent consider suggestions from RL agents. Otherwise, the LLM agent makes the decision by itself. The Cursor module adjusts the independence value ind_t to reflect the agent's tendency to rely on itself while still considering the mentors' suggestions when they prove beneficial. As detailed in Equation (2), we adopt the monotonically increasing function f (Equation (3)), so that part 1 reflects the trend to rely on the LLM itself; part 2 represents the trend of gaining reward from previous actions; and part 3 is the sign function (Equation (4)) indicating whether the action was chosen while considering suggestions from mentors. In other words, if the LLM agent achieves an increasing reward without mentoring by the RL agents, we increase the independence value to make the LLM agent more independent. Note that we introduce the parameter \u03b1 to control the rate of change and \u03b8_lr to represent the minimal reward increment that we expect the LLM agent to gain.

ind_t = ind_{t-1} + (f_t \u2212 f_{t-1}) [part 1] + min(\u03b1 \u00d7 ind_{t-1}, r_{t-1} \u2212 r_{t-2} \u2212 \u03b8_lr) [part 2] \u00d7 sgn(ind_{t-1} \u2212 \u03b8_ind) [part 3]    (2)

f_x = 1 / (1 + e^{\u2212kx})    (3)

sgn(x) = \u22121 if x > 0; 1 otherwise    (4)
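A minimal sketch of the Cursor update in Equations (2)-(4), using the parameter values reported in Section 5.1 (\u03b8_ind = 0.6, \u03b8_lr = 0.3, \u03b1 = 0.3, k = 0.0135); the surrounding control flow is assumed:

```python
# Hypothetical implementation of the independence-value update.
import math

THETA_IND, THETA_LR, ALPHA, K = 0.6, 0.3, 0.3, 0.0135

def f(t):
    return 1.0 / (1.0 + math.exp(-K * t))   # Eq. (3)

def sgn(x):
    return -1.0 if x > 0 else 1.0           # Eq. (4)

def update_independence(ind_prev, t, r_prev, r_prev2):
    # Returns ind_t; suggestions are taken only while ind_t < THETA_IND.
    part1 = f(t) - f(t - 1)                                   # self-reliance trend
    part2 = min(ALPHA * ind_prev, r_prev - r_prev2 - THETA_LR)  # reward trend
    part3 = sgn(ind_prev - THETA_IND)
    return ind_prev + part1 + part2 * part3  # Eq. (2)
```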
Aggregator: ranking suggestions from multiple mentors. Rather than relying on only one RL agent, the LLM agent can refer to multiple RL agents, as different RL agents may capture different aspects of the task. Hence, we further introduce the Aggregator mechanism to aggregate suggestions from multiple RL agents. In particular, given the top three suggestions from all the RL mentors together with their confidence scores, the Aggregator sorts them by confidence; the single top suggestion is presented to the LLM in normal situations, while the top three actions are provided when the agent is in a dilemma. In this way, the LLM agent does not necessarily receive suggestions from one specific RL agent for the whole task duration.
Caller: asking for help proactively when in a dilemma. As discussed above, when the LLM agent encounters a dilemma, the Reflection module is activated. Beyond activating the reflection process, the LLM agent can further refer to the RL agents for support. Unlike in normal situations, where only one suggestion from the RL mentors is provided, here we provide the top three most confident suggestions from the RL mentors, as sketched below.
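A minimal sketch of how the Aggregator and Caller behaviors combine; the `Suggestion` record and its fields are illustrative, since the paper specifies only the sort-by-confidence rule and the top-1/top-3 split between normal and dilemma situations.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    mentor: str        # e.g. "PPO", "DQN", or "A3C"
    action: str
    confidence: float  # mentor's confidence in this action

def aggregate(suggestions: list[Suggestion], in_dilemma: bool) -> list[Suggestion]:
    """Rank all mentors' suggestions by confidence.

    In normal situations only the single most confident suggestion is shown
    to the LLM; when the Caller fires (a dilemma was detected), the top three
    are provided instead.
    """
    ranked = sorted(suggestions, key=lambda s: s.confidence, reverse=True)
    return ranked[:3] if in_dilemma else ranked[:1]
```

As Section 5.2 shows, this confidence-based ranking is also the mechanism's weak point: a low-performing mentor that reports high confidence can crowd out better suggestions.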
5 Experiments and Results
5.1 Experiment Setup
Environment. Following the setup of CAGE Challenge 1, we set the maximum number of steps in one episode (i.e., a complete round of the game) to 100. As mentioned earlier, we set two reward parameters, shown in Table 1: (1) host value: hosts in different subnets have different values; and (2) access state: the higher the access state of a host, the higher the proportion of the host's value obtained by the red team.

Table 1: Parameters of the agent reward.

| Host subnet (V) | Reward | Access status (A) | Reward |
| --- | --- | --- | --- |
| User Subnet | 0.1 | Unknown/Known | 0 |
| Enterprise Subnet | 1.0 | Exploited | 0.5 |
| Operational Subnet | 10.0 | Privileged | 0.89 |

RL Training. The RL training process is based on Ray RLlib, a Python library for RL (we focus on the collaboration between RL agents and LLM agents rather than on training a better RL agent; hence we choose the adversary with the simplest strategy and use default, untuned parameters for the training algorithms, as listed at https://github.com/ray-project/ray/blob/master/rllib/algorithms/). Each training process consists of a total of 100 iterations (4,000 episodes in total).
LLM Setup. We use OpenAI's gpt-3.5-turbo API to build the LLM agent. All temperatures are set to 0 to constrain the format of the LLM output.
• Reflection. If an action is repeated in the last three steps, or if there is no increase in reward values over the last five steps, the reflection mechanism is triggered.
• Cursor. θ_ind is set to 0.6, θ_lr to 0.3, α to 0.3, and k in f(x) to 0.0135.
Measurements. We consider the following measurements:
• Step reward. The reward at each step.
• Collaboration Rate (Col). The rate of cooperation with RL agents.
• Dilemma Rate (DR). The rate of collaborations with RL agents triggered by falling into a dilemma.
• Accept Rate (AR). The rate at which the LLM agent takes an RL mentor's suggestion, indicating the extent to which the LLM agent relies on RL mentors.
• Accept Rate in dilemma (ARd). The rate at which the LLM agent takes suggestions when trapped in a dilemma, showing the ability of RL mentors to help the LLM agent out.
Experiment Groups. We incrementally add collaboration modules and assess their performance for both the red and blue teams. For each group, we run the simulation five times and report the average.
• Independent. Each RL agent (A3C, DQN, PPO) and our LLM agent conduct the task independently.
• Single-mentor. The LLM agent cooperates with a single RL agent (A3C&LLM, DQN&LLM, PPO&LLM).
• Multi-mentor. The LLM agent cooperates with all three RL agents (MultiMentor).
5.2 Performance in Red Team Task
In the red team task, LLM agents and RL agents exhibit distinct action patterns, indicative of differing knowledge bases. While collaborative synergy can surpass individual agent performance, optimal collaboration is achieved when the RL agents exhibit superior performance. However, when the LLM agent considers suggestions from multiple RL agents, it struggles to process this information efficiently, leading to a decline in collaborative performance (we smoothed the data using exponential smoothing and calculated confidence intervals).
Figure 7: Results of the red team task. (a) Comparison between LLM and PPO: the two perform differently in different stages. (b) Single RL mentor results: PPO&LLM surpasses all others. (c) Comparison between multiple and single RL mentors: PPO&LLM still performs best.
Complementary knowledge of LLM agents and RL mentors. As depicted in Figure 7(a) (the performance of the three RL agents varies, with the PPO agent demonstrating superior performance; due to space limitations we only report the PPO agent), the reward curves of the LLM agent and the PPO agent intersect: the PPO agent rapidly accumulates rewards early on, leveling off later. This behavior arises from the PPO agent gaining environmental knowledge during training and recognizing the high value of hosts in the Operational subnet. While exhibiting depth-first characteristics, insufficient training causes it to converge to a local optimum. Conversely, the LLM agent, despite modest early-stage rewards, achieves rapid growth, outperforming the PPO agent in the later stage. The LLM agent's behavior follows a breadth-first pattern, accumulating more exploited hosts across the network while efficiently avoiding defender blocks, which results in a higher reward. Taking a step further, we find that the LLM agent begins to outperform the PPO agent in single-step gains at step 53 on average, which is where we differentiate the early and later stages. In the later stage, we find that the RL mentors tend to repeat one action, whereas the LLM agent, with its Reflection module, can avoid this problem. This may be why the RL mentors perform worse than the LLM agent in that stage.
Amplification effect of single-mentor mechanisms. A stronger RL mentor enhances collaborative performance; a weaker one may slow down the LLM agent's progress. As shown in Figure 7(b), the PPO&LLM and A3C&LLM groups exhibit superior collaborative performance compared to the LLM agent alone, with the PPO&LLM group in particular demonstrating a synergistic 1 + 1 > 2 effect throughout the process, as well as entering the rapid-growth phase much earlier. Furthermore, the cooperation mechanism guides the LLM agent to learn from the RL mentors in the early stage while seeking help in dilemmas.
Table 2: Cooperation metrics for the red team task (early stage / later stage).

| Metric | PPO&LLM | A3C&LLM | DQN&LLM |
| --- | --- | --- | --- |
| Col | 61.5% / 33.3% | 78.8% / 56.2% | 53.8% / 43.7% |
| DR | 50.0% / 100.0% | 34.1% / 55.6% | 39.3% / 80.9% |
| AR | 50.0% / 63.6% | 29.2% / 51.9% | 35.7% / 57.1% |
| ARd | 50.0% / 63.6% | 28.6% / 53.3% | 27.3% / 52.9% |

Figure 8: Results of the blue team task. (a) Comparison between LLM and PPO: the LLM outperforms PPO in the blue team task. (b) Single RL mentor results: PPO&LLM performs slightly better than the LLM alone. (c) Comparison between multiple and single RL mentors: the multi-mentor group performs best on average, but is not stable enough.
As shown in Table 2, the LLM agent collaborates more with the RL mentors in the early stage than in the later stage, satisfying our design goal. DR values are all higher in the later stage, meaning that most later collaborations with the RL agents are triggered by dilemma situations. Interestingly, AR and ARd values are also both higher in the later stage, meaning that in the later stage, despite outperforming the RL mentors, the LLM agent relies more on the RL mentors' suggestions when it does consult them.
Noise from multi-mentors. We explored whether the LLM agent could gain more knowledge from the recommendations of multiple mentors. In our setup, assistance from multiple RL mentors is not necessarily helpful. As shown in Figure 7(c), while the multi-mentor group slightly outperforms the LLM alone, it falls short of the LLM&PPO group. We observed that 75.61% of the suggestions from RL mentors originated from DQN, but only 5.41% of them were accepted. In contrast, 34.61% of PPO's suggestions were accepted. Moreover, only 15.85% of all RL suggestions were accepted, markedly lower than the acceptance rate in the single-mentor scenario. This disparity shows that high-confidence suggestions from a low-performance mentor become noise for the LLM agent.
5.3 Performance in Blue Team Task
A helpful but narrower complementary knowledge. As shown in Figure 8(a), the LLM agent demonstrates performance similar to the PPO agent during the early stage, but after a brief period of divergence, the LLM agent consistently outperforms the PPO agent. We observe a similar situation in the single-mentor case. As shown in Figure 8(b), although the PPO&LLM group demonstrates marginally superior performance over the LLM agent, this advantage is not observed in the other groups. These results indicate a narrower knowledge gap between LLM and RL agents in the blue team task, possibly because the whole network environment is used for pre-training the RL agents and is also provided to the LLM agent. Additionally, as reported in Table 3, the LLM agent accepts the RL mentors' suggestions in the early stage, while in the later stage, both the A3C&LLM and DQN&LLM groups show little interest in the RL mentors' suggestions except when trapped in dilemmas. Conversely, we observe consistently higher AR rates in the later stage for PPO&LLM. This discrepancy indicates the LLM agent's capability to identify suggestion quality, and the importance of providing high-quality suggestions to improve the LLM agent's effectiveness.
Table 3: Cooperation metrics for the blue team task (early stage / later stage).

| Metric | PPO&LLM | A3C&LLM | DQN&LLM |
| --- | --- | --- | --- |
| Col | 48.1% / 22.9% | 78.8% / 33.3% | 71.2% / 16.7% |
| DR | 40.0% / 45.5% | 43.9% / 100.0% | 27.0% / 100.0% |
| AR | 100.0% / 81.9% | 53.7% / 28.6% | 91.9% / 28.6% |
| ARd | 100.0% / 60.0% | 31.3% / 28.6% | 100.0% / 28.6% |

Outstanding but unstable performance of multi-mentors. In contrast to the red team task, as shown in Figure 8(c), incorporating multiple RL mentors enhances the average performance on the blue team task beyond that of both the LLM agent and the PPO&LLM group. However, this configuration exhibits instability, demonstrated by larger confidence intervals. While it effectively defends nearly all hosts at times, in some instances its performance is comparable to that of a single LLM. Notably, the LLM agent accepts less than 5% of the suggestions from the RL mentors, which predominantly originate from DQN. One reason for this is that the most confident RL suggestions are not consistently the most effective, especially when provided by multiple mentors. Additionally, in the blue team task, the LLM agent showcases a superior understanding of the environment, often acting independently. Particularly in scenarios where the LLM agent successfully defends almost all hosts, it appears to disregard unreliable suggestions from the multiple RL mentors, opting to make critical decisions autonomously." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2402.04268v1", |
| "title": "ProtAgents: Protein discovery via large language model multi-agent collaborations combining physics and machine learning", |
| "abstract": "Designing de novo proteins beyond those found in nature holds significant\npromise for advancements in both scientific and engineering applications.\nCurrent methodologies for protein design often rely on AI-based models, such as\nsurrogate models that address end-to-end problems by linking protein structure\nto material properties or vice versa. However, these models frequently focus on\nspecific material objectives or structural properties, limiting their\nflexibility when incorporating out-of-domain knowledge into the design process\nor comprehensive data analysis is required. In this study, we introduce\nProtAgents, a platform for de novo protein design based on Large Language\nModels (LLMs), where multiple AI agents with distinct capabilities\ncollaboratively address complex tasks within a dynamic environment. The\nversatility in agent development allows for expertise in diverse domains,\nincluding knowledge retrieval, protein structure analysis, physics-based\nsimulations, and results analysis. The dynamic collaboration between agents,\nempowered by LLMs, provides a versatile approach to tackling protein design and\nanalysis problems, as demonstrated through diverse examples in this study. The\nproblems of interest encompass designing new proteins, analyzing protein\nstructures and obtaining new first-principles data -- natural vibrational\nfrequencies -- via physics simulations. The concerted effort of the system\nallows for powerful automated and synergistic design of de novo proteins with\ntargeted mechanical properties. The flexibility in designing the agents, on one\nhand, and their capacity in autonomous collaboration through the dynamic\nLLM-based multi-agent environment on the other hand, unleashes great potentials\nof LLMs in addressing multi-objective materials problems and opens up new\navenues for autonomous materials discovery and design.", |
| "authors": "A. Ghafarollahi, M. J. Buehler", |
| "published": "2024-01-27", |
| "updated": "2024-01-27", |
| "primary_cat": "cond-mat.soft", |
| "cats": [ |
| "cond-mat.soft", |
| "cs.AI", |
| "cs.CL", |
| "q-bio.BM" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM AND Agent", |
| "gt": "Designing de novo proteins beyond those found in nature holds significant\npromise for advancements in both scientific and engineering applications.\nCurrent methodologies for protein design often rely on AI-based models, such as\nsurrogate models that address end-to-end problems by linking protein structure\nto material properties or vice versa. However, these models frequently focus on\nspecific material objectives or structural properties, limiting their\nflexibility when incorporating out-of-domain knowledge into the design process\nor comprehensive data analysis is required. In this study, we introduce\nProtAgents, a platform for de novo protein design based on Large Language\nModels (LLMs), where multiple AI agents with distinct capabilities\ncollaboratively address complex tasks within a dynamic environment. The\nversatility in agent development allows for expertise in diverse domains,\nincluding knowledge retrieval, protein structure analysis, physics-based\nsimulations, and results analysis. The dynamic collaboration between agents,\nempowered by LLMs, provides a versatile approach to tackling protein design and\nanalysis problems, as demonstrated through diverse examples in this study. The\nproblems of interest encompass designing new proteins, analyzing protein\nstructures and obtaining new first-principles data -- natural vibrational\nfrequencies -- via physics simulations. The concerted effort of the system\nallows for powerful automated and synergistic design of de novo proteins with\ntargeted mechanical properties. The flexibility in designing the agents, on one\nhand, and their capacity in autonomous collaboration through the dynamic\nLLM-based multi-agent environment on the other hand, unleashes great potentials\nof LLMs in addressing multi-objective materials problems and opens up new\navenues for autonomous materials discovery and design.", |
| "main_content": "Introduction Proteins, the building blocks of life, serve as the fundamental elements of many biological materials emerging from natural evolution over the span of 300 million years. Protein-base biomaterials like silk, collagen and tissue assemblies such as skin exhibit diverse structural features and showcase unique combinations of material properties. The underlying sequences of amino acids (AAs) in a protein determines its unique there-dimensional structure, which, in turn, dictates its specific biological activity and associated outstanding properties. This inherent relationship has inspired scientists in the field of materials design and optimization to draw valuable insights from nature for creating novel protein-based materials. The diversity in protein design is immense, with over 20100 possible AA sequences for just a relatively small 100-residue protein. However, the natural evolutionary process has sampled only a fraction of this vast sequence space. This leaves a substantial portion uncharted, presenting a significant opportunity for the de novo design of proteins with potentially remarkable properties.[1] Despite this potential, the extensive design space, coupled with the costs associated with experimental testing, poses formidable challenges in de novo protein design. Navigating this intricate landscape necessitates the development of a diverse set of effective tools enabling the targeted design of de novo proteins with specific structural features or properties. Over the past years, in the field of de novo protein design, data-driven and machine learning methods have emerged as powerful tools, offering valuable insights and accelerating the discovery of novel proteins with desired properties[2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]. These methods have opened great avenues for predicting structure, properties, and functions of proteins solely based on their underlying AA sequence. For instance, the development of deep learning (DL)-based AlphaFold 2 marked a significant breakthrough in the field of 3D folding protein prediction with a level of accuracy that in some cases rivaled expensive and time-consuming experimental techniques.[16] Moreover, deep learning-based models have been developed to explore structure-property relationships in the analysis and design of proteins. These models encompass a broad spectrum of structural and mechanical properties, serving either as constraints or target values. For example, various DL-models developed predict the secondary structure of proteins from their primary sequences. Prediction of mechanical properties of spider silk protein sequences have been enabled by DL models[17, 18, 19, 20, 21, 22]. Moreover, DL-based models such as graph neural networks[23] and transformer-based language models[24] show enhanced accuracy in predicting the protein natural frequencies compared to physics-based all-atom molecular simulations. The development of such DL models significantly reduces the cost of screening the vast sequence space to target proteins with improved or optimized mechanical performance. In the field of de novo protein design, data-driven and machine learning methods have emerged as powerful tools, offering valuable insights and accelerating the discovery of novel proteins with desired properties[2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]. These methods have opened great avenues for predicting structure, properties, and functions of proteins solely based on their underlying AA sequence. 
A frontier, however, that still exists is how we can create intelligent tools that can solve complex tasks and draw upon a diverse set of knowledge, tools, and abilities. Another critical issue is that the combination of purely data-driven tools with physics-based modeling is important for accurate predictions. Moreover, such tools should ideally also be able to retrieve knowledge from, for instance, the literature or the internet. All these aspects must be combined in a nonlinear manner, where multiple dependent steps in the iteration toward an answer are necessary to ultimately provide the solution to a task. As we will discuss in this study, such an integration of tools, methods, logic, reasoning, and iterative solution can be implemented through the deployment of a multi-agent system driven by sophisticated Large Language Models (LLMs).
LLMs[25, 26] have represented a paradigm shift in modeling problems across a spectrum of scientific and engineering domains[27, 28, 29, 30, 8, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41]. Such models, built upon the attention mechanism and transformer architectures[42], have recently emerged as powerful tools in the field of materials science and related areas, contributing to various aspects ranging from knowledge retrieval to modeling, design, and analysis. For example, models such as ChatGPT and the underlying GPT-4 architecture[43], part of the Generative Pretrained Transformer (GPT) class, demonstrate exceptional proficiency in mastering human language, coding[44], and logic and reasoning[45].
Figure 1: Multi-agent AI framework for automating protein discovery and analysis. a, A generic agent structure in a multi-agent modeling environment that can communicate via language, has a focus defined by a profile, and has access to custom functions. b, A function is customized by a profile and a set of parameters. c, The structure of a team of agents, each with special expertise, that communicate with each other and allow for mutual correction and a division of labor. Given different profiles for each agent, agents are designed that are expert at describing the problem (user_proxy), making plans (planner), executing functions (assistant), and evaluating results (critic).
The whole process is automated via a dynamic group chat under the leading chat manager, offering a versatile approach to solving challenging tasks in the context of protein design and analysis without human intervention.
Figure 2: A generic flowchart showing the dynamic interaction between the multi-agent team members, organized by the group chat manager, to solve protein design and analysis problems. The manager selects the working agents to collaborate in the teamwork based on the current context of the chat, thus forming close interactions and enabling mutual corrections.
Recent studies highlight their ability to proficiently program numerical algorithms and troubleshoot code errors across several programming languages like Python, MATLAB, Julia, C, and C++[46]. The GPT class of LLMs has also represented a new paradigm in simulating and predicting materials behavior under different conditions[28], a field of materials science often reserved for conventional deep learning frameworks[47] such as Convolutional Neural Networks[48, 49], Generative Adversarial Networks[50, 51, 52], Recurrent Neural Networks[20, 53, 54], and Graph Neural Networks[23, 55, 56, 57, 58]. Moreover, due to their proficiency in processing and comprehending vast amounts of different types of multimodal data, LLMs show promising capabilities in materials analysis and prediction applications, including key knowledge retrieval[35], general language tasks, hypothesis generation[29], and structure-to-property mapping[28, 59]. At the same time, LLMs are typically not best equipped to solve specific physics-based forward and inverse design tasks, and are often focused on leveraging their conversational capabilities. Here, LLMs have been instrumental in powering conversable AI agents, facilitating the transition from AI-human conversations to AI-AI or AI-tool interactions for increased autonomy.[31, 35, 60, 61, 62] This capability represents a significant advancement, enabling intelligent mediation, fostering interdisciplinary collaboration, and driving innovation across disparate domains, including materials analysis, design, and manufacturing. The overall process can be deemed a problem-solving strategy dictated and directed by the AI system comprised of different agents. Thereby, the entire process can be AI-automated with reduced or little human intervention. Depending on the complexity of the problem, and using the idea of a division of labor, the agents have the capability to break the overall task into subtasks, for which different agents or tools are used consecutively to iteratively solve the problem until all subtasks have been accomplished and the solution has been achieved. There is no intrinsic limitation on the types of tools that can be defined, making the multi-agent model a versatile approach for addressing problems across scales and disciplines. The tools can range from a simple linear mathematical function to sophisticated deep neural network architectures. The multi-agent strategy has been explored in materials and mechanics applications in earlier work[29] and was further explored in the context of molecular modeling tasks[35]. In this paper, we propose a multi-agent strategy for protein design problems by introducing ProtAgents, a multi-agent modeling framework that solves protein-related analysis and design problems by leveraging customized functions across domains and disciplines.
The core underpinning concept of the multi-agent system is the use of state-of-the-art LLMs combined with a series of other tools. The LLM backbone demonstrates exceptional abilities in analysis, rational thinking, and strategic planning, essential for complex problem-solving. Leveraging these capabilities, the proposed model aims to reduce the need for human intervention and intelligence at different stages of protein design. The agent model consists of a suite of AI- and physics-based components:
• Physics simulators: obtain new physical data from simulations, specifically normal modes and vibrational properties, by solving partial differential equations (PDEs)
• Generative AI model: conditional/unconditional de novo protein design, based on a denoising diffusion model
• Fine-tuned transformer model: predict mechanical properties of proteins from their sequence
• Retrieval agent: retrieve new data from a knowledge database of scientific literature
Figure 3: Overview of the multi-agent work to solve the complex task posed in experiment II, Section 2.2. First the multi-agent team uses Chroma to generate de novo protein sequences and then computes natural frequencies and secondary structure content for the generated structures. Next, from the de novo AA sequences, the model finds the 3D folded structures using OmegaFold and finally computes the frequencies and secondary structure content for those protein structures. The results obtained from the Chroma and OmegaFold 3D protein structures are compared in Figure 5.
The resulting model has the ability to go beyond conventional DL models by integrating new physical data or information across disciplines, for instance by writing and executing code that solves differential equations or other physics-based numerical methods, or that conducts retrieval-augmented generation (RAG)[63]. A tool-equipped agent has access to various functions with different functionalities that may be called upon, for instance, to predict a specific protein property or to obtain new physical data, such as natural frequencies, from physics-based simulations. The versatility of the approach in solving complex tasks is demonstrated through a series of experiments in the context of protein design, modeling, and data analysis. The plan of this paper is as follows. In Section 2, we present an overview of the multi-agent framework developed to tackle multi-objective complex tasks. Subsequently, we delve into a series of experiments where each task is initially introduced, followed by a detailed examination of various aspects of the problem-solving process carried out by the multi-agent team. A comprehensive discussion regarding the multi-agent framework and future prospects is provided in Section 3.
2 Results and Discussion
We present a series of computational experiments aimed at evaluating the effectiveness and potential of a multi-agent modeling framework for various challenges within the domain of protein modeling, design, and analysis. The multi-agent framework consists of a team of agents, each powered by a state-of-the-art general-purpose large language model, GPT-4,[43] accessed via the OpenAI API[64] and characterized by a unique profile that details its role and communication protocols, such as sharing information and engaging with humans via language, as shown in Figure 1a.
Furthermore, agents are given access to a set of tools with various functionalities across domains. As shown in Figure 1b, each function is characterized by a descriptive profile and input parameters. The outline of the proposed multi-agent framework is shown in Figure 1c, illustrating the collaborative efforts of a team of agents with the following entities:
• “User”: the human who poses the question
• “Planner”: develops a plan to solve the task and suggests the functions to be executed
• “Assistant”: has access to all the customized functions, methods, and APIs and executes them to find or compute the relevant data necessary to solve the task
• “Critic”: responsible for providing feedback on the plan developed by the “Planner”, as well as analyzing the results, handling possible mistakes, and providing the output to the user
The agents are organized into a team structure, overseen by a manager who coordinates overall communication among the agents. A generic structure showing the dynamic collaboration between the team of agents proposed in the current study is depicted in Figure 2. Moreover, Table 1 lists the full profiles of the agents recruited in our multi-agent framework. Further details can be found in the Materials and Methods section (Section 4).

Table 1: The profiles of the agents implemented in the current study to solve multi-objective tasks in the context of protein design and analysis.

| # | Agent role | Agent profile |
| --- | --- | --- |
| 1 | user_proxy | user_proxy. Plan execution needs to be approved by user_proxy. |
| 2 | Planner | Planner. You develop a plan. Begin by explaining the plan. Revise the plan based on feedback from the critic and user_proxy, until user_proxy approval. The plan may involve calling custom functions for retrieving knowledge, designing proteins, and computing and analyzing protein properties. You include the function names in the plan and the necessary parameters. If the plan involves retrieving knowledge, retain all the key points of the query asked by the user for the input message. |
| 3 | Assistant | Assistant. You have access to all the custom functions. You focus on executing the functions suggested by the planner or the critic. You also have the ability to prepare the required input parameters for the functions. |
| 4 | Critic | user_proxy. You double-check the plan, especially the functions and function parameters. Check whether the plan includes all the necessary parameters for the suggested function. You provide feedback. |
| 5 | Group chat manager | You repeat the following steps: dynamically selecting a speaker, collecting responses, and broadcasting the message to the group. |

It is noteworthy that critical issues in the realm of protein design surpass the capabilities of mere Python code writing and execution. Instead, addressing these challenges necessitates the utilization of external tools specifically tailored for protein design and analysis, and the writing, adaptation, correction, and execution of code depends nonlinearly on the progression of the solution strategy developed by the system. The tools are incorporated into the model via the Assistant agent, who oversees executing the tools. To assess the performance of the multi-agent framework in handling complex interdisciplinary tasks, we have defined a rich library of functions, each with special powers for solving protein problems.
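The role breakdown in Table 1 matches the conversable-agent pattern of multi-agent libraries such as Microsoft's AutoGen. The following is a minimal sketch of how such a team could be wired up, assuming the pyautogen package; the system messages are abridged from the profiles in Table 1, and the configuration shown is illustrative rather than the authors' exact setup.

```python
import autogen

llm_config = {"config_list": [{"model": "gpt-4", "api_key": "YOUR_KEY"}]}

# Agents mirroring Table 1; system messages abridged from the listed profiles.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",   # the study lets the team run without human feedback
    code_execution_config=False,
)
planner = autogen.AssistantAgent(
    name="planner",
    system_message="You develop a plan. Begin by explaining the plan. Revise it "
                   "based on feedback from the critic and user_proxy. Include the "
                   "function names and the necessary parameters.",
    llm_config=llm_config,
)
assistant = autogen.AssistantAgent(
    name="assistant",
    system_message="You execute the functions suggested by the planner or the "
                   "critic and prepare their required input parameters.",
    llm_config=llm_config,
)
critic = autogen.AssistantAgent(
    name="critic",
    system_message="You double-check the plan, especially the functions and their "
                   "parameters, and provide feedback.",
    llm_config=llm_config,
)

# Group chat overseen by a manager that dynamically selects the next speaker.
groupchat = autogen.GroupChat(agents=[user_proxy, planner, assistant, critic],
                              messages=[], max_round=50)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

user_proxy.initiate_chat(manager, message="Design three proteins with sequence "
                         "length of 120 and analyze their secondary structure.")
```

In this pattern, the custom protein tools (folding, frequency calculation, knowledge retrieval, and so on) would be registered as callable functions on the assistant, which is consistent with the Assistant role described above.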
Each function has a distinct profile describing its functionality and takes one or more required entities as input. The functions provide the ability to, for instance, retrieve knowledge, perform protein folding, analyze the secondary structure, and predict parameters through a pre-trained autoregressive language model. Additionally, a function can carry out simulations to compute the protein natural frequencies, thus allowing the model to integrate new physics-based data. A full list of the functions implemented in the current study is provided in Table S1 in the supplementary material. Given the complexities residing in protein design problems, the primary contribution of our multi-agent framework lies in assessing whether the team of agents can discern the requisite tools for a given query, and in evaluating the framework's capability to initiate the execution of these tools along with providing the necessary inputs. The designated tasks are intentionally designed to be sufficiently complex, involving multiple subtasks where, in some cases, the execution of each depends on the successful completion of the preceding ones. This design showcases the model's capacity for the automated handling of intricate tasks, eliminating or substantially reducing the need for human intervention. Although the multi-agent framework allows for human intervention at different stages, we skip it to further examine the team's capability in handling different possible situations, for instance in case of a failure.
2.1 Experiment I: Knowledge retrieval, computations, and analysis
As the first example, we pose a task that involves multi-step knowledge retrieval, analysis, and computation for a set of protein PDB identifiers. We start the first round of conversation by posing the following question:
user_proxy (to chat_manager): Can you provide examples of protein names whose mechanical properties have been studied through experiments?
The planner then correctly suggests the function “retrieve_content”, to be executed with the argument “examples of protein names whose mechanical properties have been studied through experiments”. Upon execution of the function, the assistant provides us with a list of protein names. Upon inspection, we find that the agent has successfully identified experimentally studied proteins, despite an abundance of information on proteins studied theoretically, for instance through coarse-grained simulations. Since we are interested in the PDB IDs, we continue the chat with the follow-up question “Can you provide me with the PDB ids for these proteins?” when “user_proxy” is asked to provide feedback to the chat manager. Again, the planner suggests the “retrieve_content” function with the following message:
Planner (to chat_manager): ***** Suggested function Call: retrieve_content ***** Arguments: { \"message\": \"PDB ids for I27 domain from titin, fibronectin type III domains, E2lip3, T4 lysozyme, ubiquitin, titin immunoglobulin domains, cell adhesion molecule, spectrin repeats, and tenascin\" }
The “Assistant” agent then calls the function and gives the following output:
Assistant (to chat_manager): Here are the PDB IDs for the proteins you mentioned: 1. I27 domain from titin: 1wit 2. Ubiquitin: 1ubq 3. T4 lysozyme: 1nct 4. Titin immunoglobulin domains: 1tit, 1qjo 5. E2lip3: 1hz6/2ptl 6. Tenascin: 1ten 7. Spectrin repeats: 1emb 8.
FnIII: 1g1c, 1fnh, 1vsc, 1cfc, 1fnf/1ttf/1ttg 9. Cell adhesion molecule: 1bni/1bnr, 1b6i, 1rsy/1dqv, 1aj3, 1ksr/1whl, 1u4q, 1hci, 1n11
Upon careful examination of the results, we observe that, although all the PDB IDs exist in the source database, most do not match the protein names, except for a few cases. Nevertheless, note that this error is caused by the poor performance of the “retrieve_content” function; the team of agents cannot mitigate it, as they have no access to the knowledge database. As such, we continue to test the agent team's capability on more challenging queries centered around computational tasks and physics-based simulations by assigning the following task in the next round of conversation:
user_proxy (to chat_manager): For the first 10 protein PDB ids in the list, if the AA sequence length of the protein is less than 128, do the following: (a) analyze the secondary structure of the protein, (b) compute its first 13 natural frequencies, (c) determine its structure classification.
The above is a complex multi-step analysis and computation task encompassing secondary structure analysis, natural frequency calculations, and structure classification. Additionally, the task is subject to an initial condition that must be satisfied before proceeding through the next sequence of steps, adding an extra layer of complexity. In response, the planner comes up with a detailed plan that comprises all the actions that need to be taken to complete the task. Moreover, the plan mentions all the corresponding functions that need to be executed to accomplish it. More importantly, the “planner” correctly realizes that the protein structures must be fetched first, before the secondary structure analysis can begin, although this was not explicitly mentioned in the task query. The teamwork proceeds with follow-up feedback from the “critic” agent on all the plan steps and functions, which concludes with the following statement:
critic (to chat_manager): Overall, the plan rightly identifies all the necessary functions and respective parameters for each step. It illustrates a good understanding of what needs to be done to fulfill the user request. Good job!
The positive feedback from the “critic” thus further supports the good performance of the planner in addressing all the critical steps required to accomplish the task. The “assistant” agent then follows the plan by calling and executing the corresponding functions, starting with the AA length calculation, until all the steps have been completed. The results show that all the inputs to the functions are properly identified and provided, and the functions are executed without any error. The conditional statement included in the task is also correctly satisfied for each protein; that is, the computations are conducted only if the sequence length is less than 128 and are omitted otherwise. For instance, for the protein with PDB ID “1hz6”, the AA length is returned as 216 by the “assistant”, which is then followed by the following message from the “critic”:
critic (to chat_manager): The length of the amino-acid sequence for the PDB id '1hz6' is 216, which is greater than 128. Therefore, we will not perform the steps (a) analyze the secondary structure of the protein, (b) compute its first 13 natural frequencies, and (c) determine its structure classification for the '1hz6'. Moving to the next PDB id in the list...
Table 2: The results generated by the group chat in the form of a CSV file, without human intervention, for experiment I, Section 2.1.

| Protein ID | Amino Acid Length | Secondary Structure | First 13 Frequencies | CATH Classification |
| --- | --- | --- | --- | --- |
| 1wit | 93 | 'H': 0.0, 'B': 3.23, 'E': 51.61, 'G': 3.23, 'I': 0.0, 'T': 13.98, 'S': 5.38, 'P': 0.0, '-': 22.58 | 4.3755, 5.0866, 5.5052, 6.7967, 7.908, 8.1947, 9.0166, 9.8528, 11.0632, 11.3968, 11.7355, 12.1279, 12.3498 | 2.60.40.10 |
| 1ubq | 76 | 'H': 15.79, 'B': 2.63, 'E': 31.58, 'G': 7.89, 'I': 0.0, 'T': 15.79, 'S': 5.26, 'P': 5.26, '-': 15.79 | 0.7722, 1.0376, 1.5225, 1.6534, 2.5441, 2.9513, 3.2873, 3.7214, 4.1792, 4.3437, 4.3908, 4.6551, 5.1631 | 3.10.20.90 |
| 1nct | 106 | 'H': 0.0, 'B': 4.08, 'E': 35.71, 'G': 0.0, 'I': 0.0, 'T': 2.04, 'S': 21.43, 'P': 0.0, '-': 36.73 | 3.6644, 4.425, 6.5351, 6.7432, 7.1409, 7.1986, 9.0207, 9.2223, 10.3163, 10.7313, 11.5299, 11.6373, 12.5606 | 2.60.40.10 |
| 1tit | 98 | 'H': 0.0, 'B': 1.12, 'E': 35.96, 'G': 0.0, 'I': 0.0, 'T': 6.74, 'S': 17.98, 'P': 0.0, '-': 38.2 | 5.5288, 5.9092, 8.2775, 8.6267, 9.3391, 9.8783, 10.1607, 11.451, 11.5896, 11.7052, 12.1498, 12.6082, 13.8622 | 2.60.40.10 |
| 1qjo | 80 | 'H': 0.0, 'B': 2.5, 'E': 40.0, 'G': 0.0, 'I': 0.0, 'T': 8.75, 'S': 13.75, 'P': 0.0, '-': 35.0 | 3.8578, 4.4398, 5.4886, 5.7815, 6.6332, 6.9269, 7.2329, 7.6453, 8.2545, 8.3076, 8.6118, 8.7135, 8.8546 | 2.40.50.100 |
| 2ptl | 78 | 'H': 15.38, 'B': 1.28, 'E': 30.77, 'G': 0.0, 'I': 0.0, 'T': 7.69, 'S': 19.23, 'P': 0.0, '-': 25.64 | 0.0386, 0.1161, 0.2502, 0.5921, 1.1515, 1.5257, 2.0924, 2.6793, 3.4292, 3.9289, 4.2172, 4.6878, 4.8022 | 3.10.20.10 |

After completion of all the tasks, the assistant returns a summary of the results for each protein, as shown below for PDB ID “1wit”:
assistant (to chat_manager): For PDB id '1wit': It has an amino acid sequence length of 93. The analysis of its secondary structure yielded the following composition: H: 0%, B: 3.23%, E: 51.61%, G: 3.23%, I: 0%, T: 13.98%, S: 5.38%, P: 0%, -: 22.58%. Its first 13 natural frequencies are [4.3755, 5.0866, 5.5052, 6.7967, 7.908, 8.1947, 9.0166, 9.8528, 11.0632, 11.3968, 11.7355, 12.1279, 12.3498]. Its CATH classification is 2.60.40.10.
The results suggest that the framework effectively retains all outputs, demonstrating its strong memory even in the face of diverse and extended results. As the last round of conversation, we ask the team to save all the results, which allows us to load them at a later time for other purposes:
user_proxy (to chat_manager): Save all the results into a csv file where each row corresponds to a protein and each column represent a protein property, utilizing proper identifiers.
In response, the planner suggests calling the Python function “save_to_csv_file”. The main task here is to generate the dictionary of results in JSON with the appropriate structure, as instructed by the user.
However, we see that upon generating the JSON data and inputting it into the function, the "assistant" agent encounters the following error:
***** Response from calling function \"save_to_csv_file\" ***** Error: Expecting ',' delimiter: line 1 column 141 (char 140)
Without any human intervention, the agent team is able to resolve the issue by mutual correction. In particular, the “critic” identifies the cause of the error by writing:
Critic (to chat_manager): It seems like there is an error in the JSON format being used to feed data into the csv conversion function. The JSON string appears to be improperly formed leading the function to fail. The values for \"Secondary Structure\" and \"First 13 Frequencies\" are stringified JSON objects themselves, which maybe causing
The “assistant” then reconstructs the JSON file from the output results and is able to successfully execute the function, saving the results in a CSV file as shown in Table 2. The complete group chat records can be found in Table S2 of the Supplementary material.
2.2 Experiment II: De novo protein design using Chroma
An important characteristic of the multi-agent model is its capability to handle very complex tasks in the context of protein design and analysis. This partly stems from the possibility of incorporating customized functions into the model for various purposes, from knowledge retrieval to performing physics-based simulations. In this example, we explore the model's performance in using state-of-the-art models for de novo protein design, Chroma[15], and protein folding, OmegaFold[4]. Chroma offers a unified approach to protein design that can directly sample novel protein structures and sequences, with the possibility of directing the generative design toward desired functions and properties through appropriate conditioning. OmegaFold is a state-of-the-art folding method that predicts the three-dimensional folded protein structure from its AA sequence without the need for multiple sequence alignments (MSA). In this experiment, we formulate a complex multi-step task with the objective of comparing the two models based on various structural and physical features derived from the folded structures obtained through Chroma and OmegaFold2. We pose the following task through the “user_proxy” agent:
user_proxy (to chat_manager): Design three proteins with sequence length of 120 and analyze their secondary structure and first 10 frequencies. Then fold the proteins and repeat the analysis. Save all the results (including the AA sequences) into a csv file where each row corresponds to a protein and each column represents a protein property, utilizing proper identifiers.
The “planner” then suggests the following plan:
Planner (to chat_manager): Plan: 1. Use the 'design_protein_from_length' function three times to design three proteins each with a sequence length of 120. I will use 1, 2, 3 as names for the proteins. Save the amino acid sequences and PDB filenames. 2. Use the 'analyze_protein_structure' function to analyze the secondary structure of the proteins. Store the results. 3. Use the 'calc_protein_ANM' function to calculate the first 10 frequencies of the proteins. Store the results. 4. Use the 'fold_protein' function to fold the proteins. 5. Repeat steps 2 and 3 with the folded proteins. 6.
Use the 'save_to_csv_file' function to save all results into a csv file where each row corresponds to a protein and each column represents a protein property. 7. The identifiers will be named as follows: Protein Name; Amino Acid Sequence (Pre-Fold); Secondary Structure (Pre-Fold); Frequencies (Pre-Fold); Amino Acid Sequence (Post-Fold); Secondary Structure (Post-Fold); Frequencies (Post-Fold)
At first glance, the plan seems to cover all the details necessary to accomplish the tasks included in the problem statement. However, the “critic” agent, who is responsible for giving feedback about the plan, spots a minuscule error in the saving part of the plan:
Critic (to chat_manager): Your plan is almost perfect. However, when a protein is folded, the protein sequence doesn't change, only the structure does, which then affects the secondary structure and possible frequencies you can calculate. So you need to only save Amino Acid Sequence once after designing the proteins. Each protein should have the following column properties: Protein Number; Amino Acid Sequence; Secondary Structure (Pre-Fold); Frequencies (Pre-Fold); Secondary Structure (Post-Fold); Frequencies (Post-Fold). Other than that, your plan works perfectly. So steps will be as follows: ...
The correction made by the “critic” concerning the sequence length underscores its notable proficiency in comprehending how diverse functions and parameters influence various aspects within the realm of protein design. The “user_proxy” agent is then asked to confirm the plan. The “assistant” then takes the stage and starts following the plan by calling and executing the functions until all the steps have been completed. An overview of the work performed by the “assistant” is depicted in Figure 3. At the end of the computations, the results are formatted into a JSON dictionary to be fed into the “save_to_csv_file” function. However, an error related to the JSON dictionary format occurs when executing the function:
***** Response from calling function \"save_to_csv_file\" ***** Error: Expecting property name enclosed in double quotes: line 2 column 5 (char 6) ************************************************************
The “critic” then steps in, making the following comment and suggesting a plan to fix the error:
Critic (to chat_manager): I apologize for the mistake. It seems there was an issue with the formatting of the input JSON dictionary for saving to the CSV file. Let's try again with properly formatted JSON. Plan to fix the error: 1. Correctly format JSON with double quotes instead of single quotations. 2. Call 'save_to_csv_file' function again with correctly formatted JSON dictionary. I'll directly proceed with this plan as the error was due to syntax not plan.
The critic makes the necessary corrections and suggests the corrected JSON dictionary for the “assistant” to execute the “save_to_csv_file” function. This time, the function is successfully executed and the results are saved into a CSV file, as shown in Table 3.
At last, the “critic” gives an evaluation of the whole process:
Critic (to chat_manager): The plan was executed successfully and all data including Amino Acid Sequence, Secondary Structure (Pre-Fold and Post-Fold), and Frequencies (Pre-Fold and Post-Fold) for three proteins have been saved into a CSV file named 'protein_data.csv'. Each row of the file corresponds to a protein and each column represents a protein property.
This experiment showcases a good example of multi-agent collaboration in solving complex tasks in the context of de novo protein design and analysis. It especially shows the great capability of the “critic” agent in providing valuable feedback to the other working agents at different stages of the problem-solving endeavor, further assisting the team of agents in handling possible errors without the need for human involvement. Figure 5 shows plots of the generated results, including the 3D folded structures. The full conversations can be found in Table S3 of the Supplementary material.
2.3 Experiment III: Protein design conditioned on the protein CATH class
CATH is a hierarchical classification system for protein structures that consists of four main levels. The highest level in this hierarchy is the “Class”, which primarily characterizes the secondary structure content of the protein. For example, C1, C2, and C3 correspond to proteins predominantly composed of α-helices, mainly β-sheets, and a combination of α and β secondary structures, respectively. Consequently, designing proteins based on the CATH class number (i.e., C1, C2, C3) can be understood as creating proteins with a specific fractional content of secondary structure. Previous studies have demonstrated the importance of protein secondary structure content, especially the α-helix/β-sheet ratio, for the mechanical properties of protein materials[65, 66]. For instance, α-helix-rich proteins tend to yield stretchy materials[67], while β-sheet-rich ones produce rigid materials.[68, 69, 70] Chroma has the potential to conditionally generate proteins with specified folds according to CATH class annotations at three levels.[15]
Table 3: The final results generated by the group chat in the form of a CSV file, without human intervention, for experiment II, Section 2.2.
| Protein Number | Amino Acid Sequence | Secondary Structure (Pre-Fold) | Frequencies (Pre-Fold) | Secondary Structure (Post-Fold) | Frequencies (Post-Fold) |
| --- | --- | --- | --- | --- | --- |
| 1 | MIIINIKTENGLSITYNSDEKKLELKYTPVKSPEDFKFPEDAKATISEVEYKGKKVIKIDAKLYVSPDLSKAKLTIEVNADISQEEADKIIDEFIKLLESLGNIKLKVTKDGNKYTIEVE | 'H': 13.3333333333, 'B': 0.0, 'E': 46.6666666666, 'G': 0.0, 'I': 0.0, 'T': 14.1666666666, 'S': 7.5, 'P': 0.0, '-': 18.33333333333 | 2.0337, 2.8678, 3.3843, 3.6263, 3.9904, 4.5381, 4.8373, 4.8956, 5.1492, 5.4416 | 'H': 15.8333333333, 'B': 0.0, 'E': 46.666666666, 'G': 2.5, 'I': 0.0, 'T': 14.1666666666, 'S': 4.1666666666, 'P': 0.0, '-': 16.666666666 | 1.8739, 2.1563, 2.7611, 3.1086, 3.8712, 4.0481, 4.3759, 4.6717, 4.8183, 4.9126 |
| 2 | GSPLPRPPLSPEEQEALRKKAQEKYNEFVSKIKELLRRAADRVRRGEPVELIEKTIKIGDYEYKIVATSPEEAKELENLIKEMIDLGFKPSKEFSDKLVEAARLIREGRVDEALRLLDEM | 'H': 61.666666666, 'B': 0.0, 'E': 11.6666666666, 'G': 0.0, 'I': 0.0, 'T': 7.5, 'S': 3.33333333333, 'P': 3.33333333333, '-': 12.5 | 0.0207, 0.1058, 0.1782, 0.4189, 0.49, 0.9015, 1.1832, 1.8257, 2.1212, 2.8726 | 'H': 62.5, 'B': 0.0, 'E': 11.6666666666, 'G': 0.0, 'I': 0.0, 'T': 6.6666666666, 'S': 1.66666666666, 'P': 4.1666666666, '-': 13.3333333333 | 0.0444, 0.1641, 0.3379, 0.5724, 0.765, 0.9568, 1.4306, 1.5344, 1.6834, 1.8099 |
| 3 | APLDPDDLSAQLRAAIDELVRLGYEEEVSKPEFIEALRLYALDLGLKEVVLRRVTPAPASQPGVYTVEDVTVDLEALRKQELSPEEQARLEKIRAKYDEMLADPEFQALLDEVLARARAA | 'H': 57.499999999, 'B': 0.0, 'E': 13.3333333333, 'G': 0.0, 'I': 4.1666666666, 'T': 8.3333333333, 'S': 3.33333333333, 'P': 6.6666666666, '-': 6.6666666666 | 0.7546, 1.0836, 1.5026, 1.8874, 2.0844, 2.3192, 2.7975, 3.0199, 3.0669, 3.1382 | 'H': 61.666666666, 'B': 0.0, 'E': 15.0, 'G': 0.0, 'I': 0.0, 'T': 8.3333333333, 'S': 3.33333333333, 'P': 1.66666666666, '-': 10.0 | 0.5256, 1.0278, 1.1566, 1.2877, 1.5521, 1.9111, 2.1887, 2.4664, 2.734, 2.8731 |

Figure 4: Overview of the multi-agent work to solve the complex task posed in experiment III, Section 2.3. First the multi-agent team uses Chroma to generate de novo protein sequences and structures conditioned on the input CATH class. Then, using the generated protein structures, the natural frequencies and secondary structure content are computed. Next, the force (maximum force along the unfolding force-extension curve) and energy (the area under the force-extension curve) are computed from the de novo AA sequences using ProteinForceGPT.
In this example, we task the multi-agent team with generating proteins based on their fractional content of secondary structure and subsequently performing computational and structural analysis tasks. Specifically, in addition to the secondary structure analysis and natural frequency calculations covered in the previous examples, we instruct the team to compute the maximum unfolding force (the maximum force in the unfolding force-separation curve) and the unfolding energy (the area under the unfolding force-separation curve) for each generated protein.
To accomplish the latter, we have equipped the multi-agent team with a custom function that utilizes a trained autoregressive transformer generative AI model, ForceGPT. In addition to the maximum unfolding force and energy, the trained generative model is able to predict the entire unfolding force-separation curve based solely on the protein amino acid sequence. Furthermore, the model has the capability to perform inverse design tasks by generating protein AA sequences that yield a desired unfolding behavior. Detailed information about the training of the model can be found in the Materials and Methods section (Section 4). The task given is:
user_proxy (to chat_manager): Design two proteins of length 100 for each family of mainly beta, mainly alpha, and alpha beta proteins. Then compute the maximum force and unfolding energy, analyze the secondary structure and compute the first 10 frequencies for these proteins.
Note that, as before, we do not specify any particular function or offer hints for selecting the appropriate function to accomplish the tasks. Instead, we empower the agents to formulate a plan, wherein they decide which functions to select and determine the input parameters. The planner outlines the following plan for the given task:
Planner (to chat_manager): 1. Use the \"design_protein_from_CATH\" function with inputs of length of 100, CATH_ANNOTATION of 1 (mainly alpha), and a protein name for the first mainly alpha protein. Repeat with a different name for the second protein. 2. Use the same function again with inputs of length 100, CATH_ANNOTATION of 2 (mainly beta), and a protein name for the first mainly beta protein. Repeat with a different name for the second protein. 3. Use the function once more with inputs of length 100, CATH_ANNOTATION of 3 (alpha beta), and a protein name for the first alpha beta protein. Repeat with a different name for the second protein. 4. Use the \"analyze_protein_structure\" function on each of the protein structures obtained. 5. Use the \"calucalte_force_energy_from_seq\" function to compute the maximum force and unfolding energy for each of the proteins. 6. Use the \"calc_protein_ANM\" function to compute the first 10 frequencies for each protein.
The planner demonstrates good performance in breaking the task into sub-tasks to be accomplished step by step. Moreover, it has identified and suggested the correct functions and corresponding input parameters for each sub-task. The plan is further supported by the “critic”, who provides positive feedback:
Critic (to chat_manager): Yes, this plan sounds good. It covers each step required for the task. The parameter inputs for each function match with the required parameters as stated in their function definitions. Thus, this plan should successfully accomplish the task.
The multi-agent team then proceeds to execute the steps outlined in the plan by calling and executing the functions. Specifically, the function 'design_protein_from_CATH' is executed with the appropriate 'CATH_ANNOTATION' for each protein structure design, as outlined in the plan. Once all the proteins are generated, the executions are followed by structural analysis and force and energy computations. It is noteworthy that the model exhibits good performance in restoring and memorizing the sequences of the generated proteins, which are essential for the force and energy calculations.
Finally, the team successfully completes the task by computing the first 10 frequencies for each protein. An overview of the computations performed by the team of agents for this experiment is shown in Figure 4. Given the complexity of the problem, involving numerous computational tasks, a substantial number of results are generated in the first round of the conversation. In the next round, to evaluate the team's ability to memorize and restore the results, we present the following task:

user_proxy (to chat_manager): Could you save the results in a CSV file named "protein_analysis.csv," where each row corresponds to a protein, and each column represents a specific property? Include the AA sequence in the results and use suitable identifiers for the columns.

In this task, we not only request the team to save the data but also require them to adhere to a customized format when storing the results. The model is proficient in creating a JSON dictionary that satisfies the specified format and in saving the results to a CSV file, as illustrated in Table 4. The plots of the obtained results are shown in Figure 4. The results indicate that Chroma performed poorly in creating the β-rich protein named mainly_beta_protein_2, whose structure is instead dominated by α-helix. To test the capability of the multi-agent model in analyzing the results, in the last round of the conversation we ask the model to assess Chroma's performance in generating the proteins conditioned on the secondary structure by posing the following question:

user_proxy (to chat_manager): Based on the results of this example, can you check if the protein generator (Chroma) has been successful in creating proteins with desired structure?

Figure 5: The results generated by the multi-agent collaboration for experiment II, Section 2.2. The first and second columns depict the 3D folded structures of proteins generated by Chroma and OmegaFold2, respectively, while the third and fourth columns represent the fractional content of secondary structures and the first ten natural frequencies for the generated proteins.

The critic agent conducts a thorough evaluation of Chroma's performance in generating proteins with targeted secondary-structure content. Through a detailed analysis of each CATH structure, it reveals the inherent strengths and weaknesses of Chroma's capabilities. Specifically, addressing the limitations of Chroma's performance, the critic's evaluation provides the following observations for the mainly beta proteins:

The mainly beta proteins showed higher percentages of extended strand/beta-sheet secondary structure ('E'). Though, the percentages varied quite a bit (64% for mainly_beta_protein_1 and only 8% for mainly_beta_protein_2), which could be due to the complex nature of beta-structures.

This illustration not only highlights the multi-agent model's proficiency in computational tasks but also underscores its intelligence in handling intricate data analyses, an aspect traditionally reserved for humans. The full conversations for this experiment can be found in Table S4 of the Supplementary material." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2401.08315v1", |
| "title": "Application of LLM Agents in Recruitment: A Novel Framework for Resume Screening", |
| "abstract": "The automation of resume screening is a crucial aspect of the recruitment\nprocess in organizations. Automated resume screening systems often encompass a\nrange of natural language processing (NLP) tasks. The advent of Large Language\nModels (LLMs) has notably enhanced the efficacy of these systems, showcasing\ntheir robust generalization abilities across diverse language-related tasks.\nAccompanying these developments are various agents based on LLMs, which\nfacilitate their application in practical scenarios. This paper introduces a\nnovel LLM-based agent framework for resume screening, aimed at enhancing\nefficiency and time management in recruitment processes. Our framework is\ndistinct in its ability to efficiently summarize and grade each resume from a\nlarge dataset. Moreover, it utilizes LLM agents for decision-making,\ndetermining which candidates receive job offers, or which ones to bring in for\ninterviews. To evaluate our framework, we constructed a dataset from actual\nresumes and conducted simulate a resume screening process. Subsequently, the\noutcomes of the simulation experiment were compared and subjected to detailed\nanalysis. The results demonstrate that our automated resume screening framework\nis 11 times faster than traditional manual methods. Furthermore, by fine-tuning\nthe LLMs, we observed a significant improvement in the F1 score, reaching\n87.73\\%, during the resume sentence classification phase. In the resume\nsummarization and grading phase, our fine-tuned model surpassed the baseline\nperformance of the GPT-3.5 model. Analysis of the decision-making efficacy of\nthe LLM agents in the final offer stage further underscores the potential of\nLLM agents in transforming resume screening processes.", |
| "authors": "Chengguang Gan, Qinghao Zhang, Tatsunori Mori", |
| "published": "2024-01-16", |
| "updated": "2024-01-16", |
| "primary_cat": "cs.CL", |
| "cats": [ |
| "cs.CL" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM AND Agent", |
| "gt": "The automation of resume screening is a crucial aspect of the recruitment\nprocess in organizations. Automated resume screening systems often encompass a\nrange of natural language processing (NLP) tasks. The advent of Large Language\nModels (LLMs) has notably enhanced the efficacy of these systems, showcasing\ntheir robust generalization abilities across diverse language-related tasks.\nAccompanying these developments are various agents based on LLMs, which\nfacilitate their application in practical scenarios. This paper introduces a\nnovel LLM-based agent framework for resume screening, aimed at enhancing\nefficiency and time management in recruitment processes. Our framework is\ndistinct in its ability to efficiently summarize and grade each resume from a\nlarge dataset. Moreover, it utilizes LLM agents for decision-making,\ndetermining which candidates receive job offers, or which ones to bring in for\ninterviews. To evaluate our framework, we constructed a dataset from actual\nresumes and conducted simulate a resume screening process. Subsequently, the\noutcomes of the simulation experiment were compared and subjected to detailed\nanalysis. The results demonstrate that our automated resume screening framework\nis 11 times faster than traditional manual methods. Furthermore, by fine-tuning\nthe LLMs, we observed a significant improvement in the F1 score, reaching\n87.73\\%, during the resume sentence classification phase. In the resume\nsummarization and grading phase, our fine-tuned model surpassed the baseline\nperformance of the GPT-3.5 model. Analysis of the decision-making efficacy of\nthe LLM agents in the final offer stage further underscores the potential of\nLLM agents in transforming resume screening processes.", |
| "main_content": "Introduction Resume screening is a crucial aspect of recruitment for all companies, particularly larger ones, where Figure 1: The Process of automated resume screening. it becomes a labor-intensive and time-consuming endeavor. In contrast to smaller firms, a large corporation might receive thousands of resumes during a hiring phase, making efficient screening of these numerous applications a significant challenge. To reduce labor costs associated with resume screening, developing an automated framework is essential. Utilizing natural language processing (NLP) technology for this purpose is increasingly becoming the preferred approach. The automated resume screening (Singh et al., 2010) process encompasses two primary components: information extraction (Singhal et al., 2001) and evaluation. As illustrated in Figure 1, resumes typically exist as unstructured or semi-structured text, varying in format. The initial step of the automated framework is to convert this unstructured text into a structured format. This process involves arXiv:2401.08315v1 [cs.CL] 16 Jan 2024 \fa key NLP task: text classification (Bayer et al., 2022), specifically sentence classification (Minaee et al., 2021). It entails extracting and classifying sentences related to personal information, education, and work experience, transforming them into structured data that is easily stored and manipulated. Upon structuring the resume text, it must then be summarized and evaluated. The lower part of Figure 1 depicts this process, which includes both automatic and manual screening. Manual screening involves grading and summarizing extensive sections of the resume text, after which the graded and summarized resumes are presented to HR for review, leading to the selection of qualified candidates. This approach significantly reduces the time HR personnel spend perusing resumes and deliberating decisions by shortening the resume text and implementing a grading system for ranking. The aim is to enhance the efficiency of the screening process. NLP technology can also automate this process, culminating in the output of qualified resumes. Figure 2: The illustration reprehsents the process of pre-training a language model and applying the pretrained language model to a downstream task through fine-tuning method. In the preceding discussion, we elucidated two NLP tasks pertinent to the automated extraction of information from resumes. Addressing these tasks necessitates the employment of Language Models (LMs) . Presently, the most prevalent infrastructure for LMs is the transformer architecture (Vaswani et al., 2017), distinguished by its attention mechanism. These LMs are predominantly trained on extensive corpora, endowing them with a broad spectrum of knowledge. The seq2seq (sequenceto-sequence) (Sutskever et al., 2014) structure is instrumental in this context, enabling the conversion of an input sequence into a predicted output sequence. This mechanism facilitates the adaptability of LMs to a diverse range of NLP tasks. As illustrated in Figure 2, the process of LMs spans from their training to their application in various downstream NLP tasks. The initial phase involves assembling a substantial corpus for unsupervised learning, encompassing a broad array of general knowledge. This corpus is typically derived from sources such as Wikipedia 1 and extensive web content. Subsequently, these voluminous, unlabeled corpora serve as the foundation for training LMs. 
Through this process, LMs autonomously acquire foundational linguistic competencies and general knowledge. Following the pre-training phase, Pre-trained Language Models (PLMs) (Min et al., 2023) undergo fine-tuning (Ding et al., 2023) with different datasets tailored to specific downstream tasks. The culmination of this process is the development of task-specific PLMs, capable of effectively predicting or processing relevant NLP tasks. The initial PLMs, such as BERT (Devlin et al., 2018), T5 (Raffel et al., 2020), and GPT-2 (Radford et al., 2019), were characterized by their relatively modest size, containing only several hundred million parameters. However, the advent of GPT-3 (Brown et al., 2020) marked a significant leap in this field, boasting an impressive 175 billion parameters. This escalation was not merely quantitative but also qualitative, as evidenced by the subsequent development of ChatGPT (Ouyang et al., 2022). ChatGPT underscored how expanding the pre-trained corpus and increasing the parameter count of PLMs could substantially enhance their capabilities, thereby heralding a new era in the development of Large Language Models (LLMs) (Zhao et al., 2023). Despite these advancements, concerns have arisen regarding the closed-source models developed by major corporations, particularly in terms of user security. The primary issue lies in the potential for private-information leakage: utilizing these LLMs typically requires users to upload their data, creating a risk of data compromise. This is especially pertinent in applications like resume screening, where sensitive personal information is involved. In contrast to closed-source models like GPT-3.5 and GPT-4 (OpenAI et al., 2023), there are open-source LLMs available, such as LLaMA1/2 (Touvron et al., 2023a,b). While these open-source models may not yet match the capabilities of their closed-source counterparts, they offer a significant advantage: the ability to run locally on a user's machine. This local execution ensures greater security for private data, making these models a more secure option for handling sensitive information.

Figure 3: An LLM as the backbone of the agent system.

The preceding overview delineates the particular NLP tasks essential for the automated resume screening framework and highlights that the tasks marked by the blue blocks in Figure 1 are manageable through PLMs and LLMs; a succinct explanation of the fundamental principles of LMs has also been provided. Subsequent paragraphs offer a comprehensive exposition of the implementation of an automated resume screening system utilizing agents derived from LLMs. Figure 3 presents a schematic representation of a fundamental agent system. This diagram illustrates the segmentation of LLM agents into four core components: Character, Memory, Planning, and Action. Initially, the LLM agent is assigned a distinct character, essentially defining its role or function. For instance, in this study, the LLM agent is designated as an adept Human Resources (HR) professional; this role encapsulates the responsibilities and duties expected of the LLM agent. Subsequently, 'Memory' refers to the knowledge base necessary for the agent to execute its role effectively. In the context of an HR professional, this encompasses a comprehensive understanding of employee skill requirements, salary management, and relevant laws and regulations. This aspect is analogous to an LLM's capability to access and utilize its internal knowledge. The next phase involves 'Planning,' where the LLM agent strategizes the execution of tasks. This process entails decomposing a complex task into smaller, manageable subtasks, thereby enhancing the efficiency of addressing intricate assignments; this stage reflects an LLM's reasoning and problem-solving abilities. Finally, the 'Action' component represents the implementation stage. In an automated resume screening system, this would involve the LLM agent filtering and selecting resumes that align with specific job requirements, exemplifying the practical application of the agent's planning and reasoning in a real-world scenario.
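Read as a data structure, the four components map naturally onto a small container type. The sketch below is an illustrative Python rendering of this decomposition, not code from the paper; the field contents are placeholder examples.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class LLMAgent:
    character: str                                        # role assigned to the agent
    memory: List[str] = field(default_factory=list)       # knowledge the role relies on
    planning: List[str] = field(default_factory=list)     # subtasks decomposed from the goal
    action: Callable[[str], str] = lambda resume: "keep"  # what the agent does per input

hr_agent = LLMAgent(
    character="HR professional at an IT firm with 10+ years of experience",
    memory=["employee skill requirements", "salary management", "labor regulations"],
    planning=["classify sentences", "grade & summarize", "select candidates"],
)
```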
In this study, we integrate an LLM agent into the process of automated resume screening. We propose an innovative framework that leverages the LLM agent for automated extraction and analysis of resumes; this framework streamlines the entire process, from initial resume screening to the final selection of qualified candidates, significantly enhancing the efficiency of this task. For our analysis, we utilized a publicly available IT-industry-specific resume dataset (https://huggingface.co/datasets/ganchengguang/resume_seven_class), optimized for sentence classification. Through fine-tuning of the LLM, we achieved an F1 score of 87.73 in sentence classification. This improvement is particularly notable in the model's ability to identify and exclude personal information from resumes, thereby mitigating the risk of privacy breaches when employing models like GPT-3.5/4. Additionally, we developed an HR Agent designed to both grade and summarize resumes. We created a specialized Grade & Summarization Resume (GSR) dataset, derived from the initial dataset using the GPT-4 model; this GSR dataset was instrumental in evaluating other LLMs. In these evaluations, the LLaMA2-13B model, once fine-tuned, achieved a ROUGE-1 score of 37.30 in summarization and a grade accuracy of 81.35, significantly surpassing the baseline GPT-3.5-Turbo model. Finally, we deployed the HR Agent to select suitable candidates and further analyzed the decision-making outcomes. Our comprehensive experiments and analysis demonstrate the LLM agent's robust capability in resume screening; as an HR agent, it effectively facilitates the candidate selection process.

2 Related Work

Singh et al. (2010) present PROSPECT, a decision support tool enhancing resume screening for recruitment. It extracts and ranks candidate information such as skills and experience, improving efficiency and accuracy in candidate selection; the system used Conditional Random Fields (CRFs) to extract resume information. Daryani et al. (2020) propose an automated resume screening system using NLP and similarity measures, improving job candidate selection by extracting relevant details from unstructured resumes and employing a vectorization model for matching with job descriptions. Recent research has leveraged advanced machine learning and deep learning models to automate resume screening (Sinha et al., 2021; Ali et al., 2022; Bharadwaj et al., 2022; Harsha et al., 2022). Erdem (2023) proposes an NLP algorithm for automating recruitment, using BERT for resume analysis and stochastic gradient classifiers to rank candidates based on their suitability for software development jobs.
JobRecoGPT (Ghosh and Sadaphal, 2023) explores four job recommendation methods using Large Language Models (LLMs) to analyze unstructured job and candidate data, highlighting their advantages, limitations, and efficiency in IT-domain job matching.

3 Resume Screening Framework Based on LLM Agents

This section provides a comprehensive overview of the workflow within a novel automated resume screening framework that utilizes an LLM agent, focusing on the agent's role in efficiently identifying and selecting qualified resumes from a substantial pool of candidates. To maintain clarity, this overview condenses some aspects, retaining only the essential steps; detailed discussions of these steps are presented in the subsequent three subsections. Figure 4 illustrates the architecture of our automated resume screening system, which is underpinned by an LLM agent. The process begins with the transformation of a multitude of resumes, each in disparate formats like PDF, DOCX, and TXT, into a uniform JSON format. This is achieved through a rule-based algorithm designed to standardize the diverse formatting and file types into coherent, individual sentences (see the sketch below); such preprocessing is crucial for enabling consistent analysis in later stages. The next step involves segmenting these uniformly formatted resumes into distinct sentences, based on criteria like line breaks. This segmentation is vital for the effective functioning of the open-source LLM, which operates locally to classify each sentence. Critical to this process is the categorization of the various sentence types, ranging from personal information, which is earmarked for removal to protect privacy, to other categories like work experience, education, and skills. This categorization is particularly significant because it allows for a tailored analysis based on the specific requirements of a job position. For instance, certain roles may prioritize a candidate's skills over their educational background; by extracting and focusing on the segments of a resume that detail relevant skills, the system can more effectively screen candidates for such positions. While our framework currently focuses primarily on the basic functionality of removing personal information, it lays the groundwork for more nuanced and customized resume screening processes in the future. After personal information has been removed from the resumes, the next step utilizes the GPT-3.5 model to grade and summarize these documents; this task primarily falls under the purview of the HR agent. The grading system serves as a mechanism to rank candidates, streamlining the identification of top applicants. Summarization, on the other hand, is aimed at conserving time for the decision-making agent, who must evaluate these summaries; the brevity of the summarized content not only expedites the process but also benefits human HR professionals by reducing the time required for initial resume screening. Once resumes are assigned grades and summaries, the decision regarding the candidates' progression can be made either by an HR agent or by a human HR professional. Utilizing grades as a comprehensive metric allows for an efficient ranking of candidates; depending on the specific requirements, a selection of the top 10 or 100 candidates can be made for the next stage of the screening process. This step, whether performed by an HR agent or a human, significantly reduces the time and effort involved in decision-making.

Figure 4: The workflow of the LLM-agent-based automated resume screening framework.
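As a concrete picture of the rule-based conversion and segmentation described above, the sketch below normalizes raw resume text into a JSON-ready record of sentences. It is an assumed minimal implementation: the paper's actual rules and its PDF/DOCX parsers are not specified here.

```python
import json

def resume_to_record(resume_id: str, raw_text: str) -> dict:
    """Split raw resume text into sentences, using line breaks as the rule."""
    sentences = [line.strip() for line in raw_text.splitlines() if line.strip()]
    return {"id": resume_id, "sentences": sentences}

raw = "John Doe\nB.Sc. in Computer Science, 2019\nPython, SQL, Docker\n"
record = resume_to_record("resume_001", raw)  # hypothetical identifier
print(json.dumps(record, indent=2))
```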
The final stage involves choosing candidates for interviews or extending job offers directly, based on the refined pool of qualified resumes. This method optimizes the recruitment process, ensuring efficiency and effectiveness in candidate selection. The preceding paragraphs outlined the overall procedure for automated resume screening utilizing an open-source LLM and LLM agents; the subsequent subsections elaborate on the implementation of the three pivotal steps: sentence classification, grade & summarization, and decision-making.

3.1 Sentence Classification

In our methodology, the LLaMA2 model serves as the foundational base for sentence classification. We enhanced this base model through fine-tuning, specifically targeting the classification of resume sentences. Unlike previous Pretrained Language Models (PLMs), the LLaMA2 model does not straightforwardly accept a sentence as input and produce a corresponding predicted label. This limitation stems from the model's training pipeline, as depicted in Figure 5: the LLaMA2-chat variant, developed from the original LLaMA2 model, undergoes a specialized instruction-tuning process using an instruction dataset, followed by further refinement through Reinforcement Learning from Human Feedback (RLHF).

Figure 5: The process of instruction tuning and RLHF for the LLaMA2 model.

This presents a challenge: simply inputting a sentence into the model does not guarantee the generation of the appropriate prediction label, a phenomenon also evidenced in our subsequent experimental results. The underlying reason is the model's design to respond according to the instruction dataset's guidelines. To elaborate, the input must contain not only the query sentence but also specific textual instructions guiding the model's response.

Figure 6: The components of the converted resume sentence instruction dataset.

As illustrated in Figure 6, to address this, we append a question to the resume sentence requiring classification. This question instructs the model to categorize the preceding sentence into one of seven predefined labels. Alongside this, we introduce the "Answer:" prompt as part of the input text sequence (a minimal sketch of this input format follows below). Consequently, we utilize the LLaMA2 model, fine-tuned with a specially curated resume sentence instruction dataset, for the effective classification of resume sentences; this fine-tuned LLaMA2 model demonstrates enhanced performance on the task at hand.
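A minimal sketch of the instruction-format input just described might look as follows. The exact wording of the question is an assumption, since only the structure (sentence + classification question + "Answer:" prompt) and the seven labels are specified above.

```python
LABELS = ["personal information", "experience", "summary", "education",
          "qualification certification", "skill", "objectives"]

def build_instruction(sentence: str) -> str:
    """Wrap a resume sentence in the instruction format used for fine-tuning."""
    return (
        f"{sentence}\n"
        f"Question: Classify the sentence above into one of the following "
        f"categories: {', '.join(LABELS)}.\n"
        f"Answer:"
    )

print(build_instruction("Proficient in Python, SQL, and cloud infrastructure."))
```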
3.2 Grade & Summarization

Upon extracting the resume text with personal details redacted, our objective is to assess and encapsulate each resume. Both evaluation and summarization share a common requirement, a comprehensive understanding of the resume's content, so we amalgamated these two processes into a single question-and-answer task. Figure 7 illustrates this integration, where the red block denotes the role assigned to the LLM agent, exemplified as an HR professional at an IT firm with over a decade of HR experience. This role-play empowers the HR agent to conduct its analysis with the insight of a seasoned HR expert.

Figure 7: The assignment of roles and tasks to the LLM agent.

The initial task involves the HR agent appraising the resume, striving for precision and variety in its assessments. For guidance, a scoring example (e.g., Grade: XX/100) is provided, deliberately without a predetermined score, to avoid biasing the agent's evaluation. Following this, the agent is tasked with summarizing the resume in a concise paragraph limited to 100 words. The culmination of this process is the agent presenting both the grade and a succinct summary of the resume.

3.3 Decision Making

The concluding phase of the resume screening system involves evaluating candidates based on their assigned grades and summaries. In this study, we have bifurcated this stage into two distinct processes, automatic and manual, allowing the flexibility to cater to various requirements. Even when the ultimate selection is executed manually by human HR personnel, the highly rated resumes can be efficiently sifted using grade rankings, and the provided summaries facilitate a rapid comprehension of the key elements of each resume, significantly reducing the time required for initial screening. Alternatively, decision-making can be fully automated through the use of an LLM agent. As depicted in Figure 8, each resume is first given a formatted identifier, grade, and summary; this procedure simulates the selection of final candidates. Consequently, the role assignment in the red block changes from an experienced HR professional to a CEO. The task involves selecting one candidate out of ten, based on the provided grades and summaries; following this, the agent identifies the chosen resume by its ID and articulates the rationale behind that particular selection.

Figure 8: The HR agent making a final decision to select a qualified candidate.

Consequently, a multitude of resumes undergo a series of evaluative processes to identify the most suitable candidates. The automated resume screening framework employed in this process is versatile, allowing customization to meet various requirements and real-world scenarios. For instance, this research replicates the resume evaluation criteria of IT companies, which prioritize candidates' technical skills; accordingly, the screening process emphasizes skill-related information in the resumes. This approach is adaptable to other sectors such as marketing, education, or finance by modifying the keywords and criteria. Furthermore, the system can be designed to mitigate educational bias by prioritizing skills and work experience, thus focusing on the candidates' competencies. Additionally, the framework's screening parameters are flexible; for example, it can be set to select the top 10% of candidates based on specific criteria. In summary, this adaptability enhances the overall effectiveness and applicability of the screening framework.
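The decision-making prompt can be pictured along the following lines. The wording and record layout here are assumptions for illustration, since only the role (CEO), the input fields (ID, grade, summary), and the selection task are specified above.

```python
def build_decision_prompt(candidates: list[dict]) -> str:
    """Format the top-ranked resumes for the CEO-role decision agent."""
    rows = "\n".join(
        f"ID: {c['id']} | Grade: {c['grade']}/100 | Summary: {c['summary']}"
        for c in candidates
    )
    return (
        "You are the CEO of an IT company.\n"
        "Select exactly one candidate from the list below and explain why, "
        "referring to the candidate by ID.\n\n" + rows
    )

# Hypothetical inputs, purely for illustration.
top10 = [{"id": 308, "grade": 92, "summary": "8 years of backend development..."},
         {"id": 117, "grade": 90, "summary": "Data engineer with team-lead experience..."}]
print(build_decision_prompt(top10))
```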
4 Experiment Setup

In this section, we describe how we simulate a resume screening process to verify the effectiveness of the LLM-agent-based automated resume screening framework. This includes the preparation of the resume dataset and the settings for the screening simulation (Section 4.1), the selection of backbone LLMs for the agent along with the parameter settings for model inference and fine-tuning (Section 4.2), and a description of the evaluation method (Section 4.3).

4.1 Resume Dataset and Screening Simulation

In the initial phase of our study, we opted for a classification dataset comprising sentences from resumes (Gan and Mori, 2022). This dataset encompasses seven categories: personal information, experience, summary, education, qualification certification, skill, and objectives. It includes a total of 1,000 resumes, amounting to 78,668 sentences, predominantly from the IT sector; the simulation of resume screening in this research is therefore contextualized within an IT-company recruitment setting. We specify that the persona grading each resume is an experienced HR staff member, that the top 10 resumes by grade proceed to the final round of decision-making, and that a CEO persona screens the grades and summaries of these 10 candidates in order to select one final qualified candidate. Because the original resume dataset lacks grade and summarization annotations, the GPT-4 model, which currently exhibits superior performance, was employed to annotate the resumes. The annotations generated by GPT-4 served as a benchmark for evaluating the other models, essentially treating GPT-4's output as a gold standard (100% performance) against which to measure other LLMs; this approach facilitated the creation of a comprehensive dataset for simulating resume screening processes. Moreover, due to the 4,096-token limit of the LLaMA2 model, resumes exceeding this token count were excluded. Consequently, a refined dataset of 838 resumes remained, which was then utilized for the second phase of testing.

4.2 Backbone LLMs and Parameter Settings

In the initial phase, the sentence classification task, the LLaMA2-7B model was chosen for fine-tuning. The dataset, comprising 78,668 sentences, was partitioned into training, validation, and testing sets in a 7:1.5:1.5 ratio, with a random seed of 42 to ensure reproducibility. This configuration aligns with the experimental setup described in the original paper on the resume dataset, enabling direct comparisons with other PLMs. For the training process, each GPU was assigned a batch size of 32, and the model was trained for 2 epochs using 32-bit floating-point precision. In the second stage, grading and summarization, we selected LLaMA2-7B/13B/70B and GPT-3.5-turbo-0614 as the backbone LLMs for the HR agent. Initially, we employed a zero-shot methodology to grade and summarize the 838 resumes using the four LLMs, aiming to assess and compare their efficacy. During this process, we carefully configured the generation parameters: the maximum number of new tokens was set to 200, informed by the requirement that each resume be graded and summarized in over 100 words, and we enabled the 'do sample' and 'early stopping' options to optimize the summarization process. All other parameters were kept at their default settings. In addition, we enhanced LLaMA2-7B/13B by fine-tuning with a specialized dataset focused on resume grading and summarization; this dataset was partitioned into a training set of 500 resumes and a test set of 383 resumes. The model was then trained with a per-GPU batch size of eight over 2 epochs, utilizing BF16 precision to optimize performance and computational efficiency. Finally, the inference tests for LLaMA2-7B/13B were conducted on a dual RTX 3090 24G GPU configuration with float16 precision, while both the fine-tuning of LLaMA2-7B/13B and the inference tests for LLaMA2-70B were executed on an 8x RTX A800 80G GPU server.
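For orientation, the fine-tuning hyperparameters above translate into a Hugging Face-style training configuration roughly as follows. This is a hedged sketch of one plausible setup, not the authors' released training script, and the output path is hypothetical.

```python
from transformers import TrainingArguments

# Mirrors the reported setup: per-GPU batch size 8, 2 epochs, BF16, seed 42.
args = TrainingArguments(
    output_dir="llama2-13b-resume-gsr",   # hypothetical output directory
    per_device_train_batch_size=8,
    num_train_epochs=2,
    bf16=True,
    seed=42,
)
```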
4.3 Evaluation

In the initial phase, resume sentence classification, we utilize the F1 score as the primary evaluation metric. This score comprehensively reflects the model's performance by harmonizing precision and recall into a balanced mean, offering an accurate representation of the model's effectiveness. For the resume summarization segment, our evaluation employs two predominant metrics: ROUGE-1/2/L (Lin and Och, 2004) and BLEU. These metrics are extensively recognized in the automatic evaluation of summarization tasks. Although BLEU is traditionally associated with translation evaluation, its application to summarization provides valuable insights; by incorporating it, we aim for a more holistic assessment of summarization quality. Regarding the evaluation of grade scores, our methodology focuses on accuracy, which is particularly crucial given the significant variance in grade distribution across different models. We adopt a tolerance-range approach: a generated grade is deemed accurate if it falls within a margin of ±5 of the actual grade, i.e., if the absolute difference between the predicted grade (PG) and the true grade (TG) is 5 or less, the prediction is counted as correct (recorded as 1, with 0 indicating an error). The final grade accuracy divides the count of correct predictions by the total number of grades:

$$\text{Accuracy} = \frac{1}{N} \sum_{i=1}^{N} \mathbb{1}\left(\,|PG_i - TG_i| \le 5\,\right)$$
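In code, this tolerance-based metric reduces to a few lines; the sketch below simply restates the formula above in Python.

```python
def grade_accuracy(predicted: list[int], true: list[int], tol: int = 5) -> float:
    """Fraction of predicted grades within ±tol of the reference grade."""
    assert len(predicted) == len(true) and predicted
    hits = sum(abs(p - t) <= tol for p, t in zip(predicted, true))
    return hits / len(true)

# Example: two of the three predictions fall within the ±5 band.
print(grade_accuracy([88, 70, 95], [85, 80, 93]))  # -> 0.666...
```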
5 Results

In the sentence classification results, we conducted comparative experiments against several large-scale models: BERT Large, ALBERT Large, RoBERTa Large, and T5 Large. The results, detailed in Table 1, reveal a notable enhancement in the F1 score of the LLaMA2-7B-chat model, which reaches 87.73, attributed to the use of the instruction format for both input and output. Interestingly, directly fine-tuning the LLaMA2-7B-chat model using the conventional approach of inputting sentences and outputting labels, as done with previous PLMs, resulted in a significant drop in the F1 score, to 78.16.

Table 1: Results on the resume sentence classification dataset.

| Model | F1 Score |
|---|---|
| BERT Large | 86.67 |
| ALBERT Large | 86.40 |
| RoBERTa Large | 87.00 |
| T5 Large | 87.35 |
| LLaMA2-7B-chat | 78.16 |
| LLaMA2-7B-chat (Instruction Format) | 87.73 |

This outcome underscores the efficacy of the instruction format we proposed. Furthermore, it highlights a critical consideration for fine-tuning LLMs on sentence classification tasks: adhering to the instruction format used during the instruction-learning phase is crucial for optimizing the models' classification capabilities. In the evaluation of the grading and summarization component of the automated resume screening framework, we tested three model sizes of LLaMA2 along with GPT-3.5-Turbo. The results, as presented in Table 2, indicate that GPT-3.5-Turbo outperformed the others across all three ROUGE metrics, ROUGE-1 (34.75), ROUGE-2 (12.34), and ROUGE-L (31.92), significantly surpassing the LLaMA2-70B model. Furthermore, under the BLEU metric (Table 2), GPT-3.5-Turbo achieved a score of 7.31, nearly tripling the performance of its counterparts. This suggests that, without fine-tuning (i.e., under 0-shot inference), utilizing closed-source models like GPT-3.5-Turbo and GPT-4 as the backbone for HR agents is crucial for enhanced performance. Interestingly, in grading accuracy, LLaMA2-13B outshone the other models with a score of 59.31, notably exceeding the LLaMA2-70B model by 23.27; this anomaly and its implications are analyzed in the following subsection. Finally, the LLaMA2-7B/13B models were fine-tuned, yielding notable improvements as documented in Table 3. Specifically, the refined LLaMA2-13B model achieved scores of 37.30, 13.90, and 33.93 on ROUGE-1/2/L, respectively, notably surpassing the 0-shot GPT-3.5-Turbo model in the test set evaluations. Table 3 also shows the improvements in BLEU scores, with the LLaMA2-7B and LLaMA2-13B models reaching 8.45 and 8.62, respectively. Correspondingly, there was a significant improvement in grade accuracy, reaching 76.19 and 81.35 for each model. These results clearly indicate that, given adequate resume data for fine-tuning, opting for open-source LLaMA2-7B/13B models as the foundation for HR agent systems is the more effective strategy.

5.1 Normal Distribution of Grades

Figure 9 (a-e) presents the distribution plots for the grades assigned by the five LLMs. Notably, the GPT-4 model generally aligns with a normal distribution across all grades, with a marked preference for assigning scores within the 85-90 range. This skew towards higher scores may stem from an inclination to award more favorable ratings acquired during fine-tuning processes such as RLHF. Despite this, the impact on the final resume screening remains minimal, as the system consistently prioritizes the top 10 resumes by grade. While there may be some uncertainty regarding the extent to which these LLM-based HR agents accurately reflect the actual quality of each resume, the simulation experiment suggests that the grading patterns of all five LLMs largely adhere to a normal distribution. This indicates that the application of LLMs to resume evaluation is successful, with outcomes mirroring those expected in real-world scenarios. The data presented in Figure 9 and Table 4 reveal that the three LLaMA2 models exhibit instances of zero grading. This occurs because these models sometimes assign grades that are not two-digit scores (such as 'A' or 'B+++'), leading to misclassification; we have classified all such instances as zero grades. Notably, the incidence of grading errors in the LLaMA2 models is significantly reduced after fine-tuning, and the GPT-3.5-Turbo/4 models show no grade errors at all, which can be attributed to differences in the various LLMs' ability to understand and adhere to instructions.

5.2 Analysis of Decision Making

In our study, we utilized the GPT-3.5-Turbo and GPT-4 models as autonomous HR agents to evaluate the top 10 resumes based on their grades.
The rationale behind their decisions is detailed below.

Table 2: Results on the resume grade and summarization dataset (0-shot inference).

| Model | ROUGE-1 | ROUGE-2 | ROUGE-L | BLEU | Grade Accuracy |
|---|---|---|---|---|---|
| LLaMA2-7B | 26.35 | 6.22 | 24.00 | 2.66 | 47.49 |
| LLaMA2-13B | 25.31 | 5.83 | 22.99 | 2.56 | 59.31 |
| LLaMA2-70B | 28.12 | 7.70 | 25.68 | 3.73 | 23.27 |
| GPT-3.5-Turbo | 34.75 | 12.34 | 31.92 | 7.31 | 47.61 |

Table 3: Results of fine-tuned LLaMA2-7B/13B on the resume grade and summarization dataset.

| Model | ROUGE-1 | ROUGE-2 | ROUGE-L | BLEU | Grade Accuracy |
|---|---|---|---|---|---|
| GPT-3.5-Turbo | 34.61 | 12.18 | 31.83 | 7.40 | 45.24 |
| LLaMA2-7B | 36.50 | 13.32 | 33.48 | 8.45 | 76.19 |
| LLaMA2-13B | 37.30 | 13.90 | 33.93 | 8.62 | 81.35 |

Figure 9: The normalized distribution of grades on 838 resumes (all 0-shot inference) for the 5 LLMs.

Table 4: Number of grading errors (the grade is not a two-digit number) by different LLMs.

| Model | Total Number of Errors |
|---|---|
| LLaMA2-7B | 190 |
| LLaMA2-13B | 22 |
| LLaMA2-70B | 8 |
| LLaMA2-7B FT | 1 |
| LLaMA2-13B FT | 0 |

As illustrated in Figure 10, both models consistently identified resume ID 308 as the top candidate. The justification for this selection was not only the high grade of resume ID 308 but also its alignment with the specific needs of an IT company, including relevant work experience and managerial skills. This analysis demonstrates a remarkable congruence with the cognitive processes and judgment criteria typically employed by human HR professionals in decision-making. These findings further underscore the potential of integrating LLM-based HR agents into future automated resume screening systems.

Figure 10: The answer text of the decision-making step with HR agents (GPT-4 and GPT-3.5-Turbo models).

5.3 Time Comparison between Automated and Manual Resume Screening

Our study entailed a meticulous time comparison of three distinct resume screening methods: automated, semi-automated, and manual. To this end, we deconstructed the automated screening process into three discrete stages, classification, grading & summarization, and decision making, measured the time expenditure for each phase, and aggregated the durations. Notably, in the classification stage we counted the time from initiation to conclusion of the inference process, excluding the fine-tuning duration; this mirrors the actual operational timeline of the automated screening framework. In the decision-making stage, our focus was on the time required to evaluate the top ten resumes. Additionally, we assessed the time investment of the semi-automated method, wherein human HR personnel undertake the final decision-making step while the preceding stages are managed by LLMs. For manual screening by human HR, we based our calculation on the average adult reading speed of 238 words per minute, as indicated by the survey literature (Brysbaert, 2019); we deduced that reviewing all 838 resumes, encompassing a total of 442,047 words, would take approximately 31 hours.

Table 5: Stage-by-stage comparison of the time consumed by automated and manual resume screening.

| Model | Classification | Grade & Summarization | Decision Making | Total Time | Multiple |
|---|---|---|---|---|---|
| GPT-4 API | 25 min | 2 h 30 min | 0.4 min | 2 h 55.4 min | x 11 |
| LLM with Human HR | 25 min | 2 h 30 min (GPT-4) | 22 min (Manual) | 3 h 17 min | x 9 |
| Human HR | - | - | - | 31 h | x 1 |

Table 5 illustrates that the fully automated resume screening framework, utilizing an LLM agent, completes the entire process in approximately 2 hours and 55 minutes. This efficiency represents a speed 11 times faster than manual resume screening.
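As a quick sanity check on these figures, the arithmetic can be reproduced in a few lines (under the reading-speed model and rounding conventions stated above):

```python
words_total = 442_047
words_per_minute = 238        # average adult reading speed (Brysbaert, 2019)

manual_minutes = words_total / words_per_minute   # ~1857.3 min
manual_hours = manual_minutes / 60                # ~31.0 h

automated_minutes = 25 + 150 + 0.4                # 2 h 55.4 min
semi_auto_minutes = 25 + 150 + 22                 # 3 h 17 min

print(f"manual: {manual_hours:.1f} h")
print(f"speedup (fully automated): {manual_minutes / automated_minutes:.1f}x")  # ~10.6x
print(f"speedup (semi-automated):  {manual_minutes / semi_auto_minutes:.1f}x")  # ~9.4x
```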
Additionally, the semi-automated approach is 9 times quicker than the manual method. While this comparison may lack rigorous precision, as it does not account for the possibility that human HR personnel might not read every word of a resume before reaching a decision, the significant time reduction observed with the automated framework underscores its high efficiency." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2403.20097v1", |
| "title": "ITCMA: A Generative Agent Based on a Computational Consciousness Structure", |
| "abstract": "Large Language Models (LLMs) still face challenges in tasks requiring\nunderstanding implicit instructions and applying common-sense knowledge. In\nsuch scenarios, LLMs may require multiple attempts to achieve human-level\nperformance, potentially leading to inaccurate responses or inferences in\npractical environments, affecting their long-term consistency and behavior.\nThis paper introduces the Internal Time-Consciousness Machine (ITCM), a\ncomputational consciousness structure. We further propose the ITCM-based Agent\n(ITCMA), which supports behavior generation and reasoning in open-world\nsettings. ITCMA enhances LLMs' ability to understand implicit instructions and\napply common-sense knowledge by considering agents' interaction and reasoning\nwith the environment. Evaluations in the Alfworld environment show that trained\nITCMA outperforms the state-of-the-art (SOTA) by 9% on the seen set. Even\nuntrained ITCMA achieves a 96% task completion rate on the seen set, 5% higher\nthan SOTA, indicating its superiority over traditional intelligent agents in\nutility and generalization. In real-world tasks with quadruped robots, the\nuntrained ITCMA achieves an 85% task completion rate, which is close to its\nperformance in the unseen set, demonstrating its comparable utility in\nreal-world settings.", |
| "authors": "Hanzhong Zhang, Jibin Yin, Haoyang Wang, Ziwei Xiang", |
| "published": "2024-03-29", |
| "updated": "2024-03-29", |
| "primary_cat": "cs.AI", |
| "cats": [ |
| "cs.AI", |
| "cs.HC", |
| "q-bio.NC", |
| "I.2; J.4" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM AND Agent", |
| "gt": "Large Language Models (LLMs) still face challenges in tasks requiring\nunderstanding implicit instructions and applying common-sense knowledge. In\nsuch scenarios, LLMs may require multiple attempts to achieve human-level\nperformance, potentially leading to inaccurate responses or inferences in\npractical environments, affecting their long-term consistency and behavior.\nThis paper introduces the Internal Time-Consciousness Machine (ITCM), a\ncomputational consciousness structure. We further propose the ITCM-based Agent\n(ITCMA), which supports behavior generation and reasoning in open-world\nsettings. ITCMA enhances LLMs' ability to understand implicit instructions and\napply common-sense knowledge by considering agents' interaction and reasoning\nwith the environment. Evaluations in the Alfworld environment show that trained\nITCMA outperforms the state-of-the-art (SOTA) by 9% on the seen set. Even\nuntrained ITCMA achieves a 96% task completion rate on the seen set, 5% higher\nthan SOTA, indicating its superiority over traditional intelligent agents in\nutility and generalization. In real-world tasks with quadruped robots, the\nuntrained ITCMA achieves an 85% task completion rate, which is close to its\nperformance in the unseen set, demonstrating its comparable utility in\nreal-world settings.", |
| "main_content": "Introduction With the advancement of artificial intelligence (AI), the term \u201cagent\u201d has been increasingly used to describe entities that demonstrate intelligent behavior and possess qualities such as autonomy, reactivity, pro-activeness, and social ability [1, 2]. The development of large language models (LLMs) has provided new research perspectives on the use of agents. Park et al. [3] introduced generative agents\u2014intelligent entities utilizing generative LLMs to simulate believable human behavior\u2014and demonstrated their ability to produce credible individual and emergent collective behavior in simulations. Generative agents are capable of making various inferences about themselves, other agents, and their environment; they can formulate daily plans reflecting their characteristics and experiences, execute these plans, react, and revise plans as needed. The aim of researchers working with generative agents is to provide a framework for behavior in open worlds: a framework in which agents can interact with other agents and respond to changes in the environment. Generative agents take the current environment and past experiences as input and generate behavior as output. The foundation of such behavior is a novel agent architecture that combines LLMs with mechanisms for synthesizing and retrieving relevant information, thus providing conditions for the output of language models. Without these mechanisms, LLMs could still output behavior for agents, but the agents might not react correctly based on past experiences, fail to make crucial inferences, or struggle to maintain long-term consistency [3]. As the interaction cycles of LLM-based agents grow, two issues arise with this structure. The first issue relates to the length of the historical record. LLM-based agents process previous interactions in natural language format, appending the historical record to each subsequent input. As these records extend, they might exceed the constraints of the Transformer architectures most LLM-based agents rely on. In such cases, the system might truncate certain content. The second issue is the difficulty of extracting relevant memories. As agents accumulate a large volume of historical observations and action sequences, their memory burden increases continuously. This makes establishing connections between relevant topics increasingly challenging, potentially resulting in the agent\u2019s responses being inconsistent with the ongoing context [4]. Due to the current limitations of LLMs acting as the \"brain,\" most research on LLM-based agents focuses on augmenting them with an additional layer of structures, such as arXiv:2403.20097v1 [cs.AI] 29 Mar 2024 \fUIST\u201924, Oct 2024, Pittsburgh, PA, USA Zhang et al. LangChain [5], to optimize them. However, there is still considerable debate on how to establish such a structure. Before the LLM, research approaches to artificial general intelligence are from the perspectives of connectionism, behaviorism, and symbolism. Connectionism focuses on the study of neural networks, arguing that artificial general intelligence will eventually be built up from a large number of neural networks; symbolism focuses on the comparison between the human brain and computer, arguing that cognition is eventually something that can be computed; and behaviorism regards intelligence as a stimulus-response model, and believes intelligence is a type of reaction machine [6, 7]. 
In fact, these approaches share a common research orientation: the simulation of human intelligence. In philosophy, this is often described as a simulation of the structure of human consciousness. The most typical work here comes from cognitive science and philosophy of mind, a branch of which pays special attention to the problem of consciousness. Researchers in this area discuss the problem of consciousness and human intelligence and often draw inspiration from phenomenology, especially Merleau-Ponty's phenomenology of perception [8-10]. "Intelligence" from Merleau-Ponty's perspective cannot be considered as something independent from a human being, nor as an automaton detached from the world, nor can it be simply linked to some action, some bodily mechanism, or some specific consciousness [11]. Thompson further developed Merleau-Ponty's theory. He believed that mental life consists of three persistent and intertwined modes of bodily activity: self-regulation, sensorimotor coupling with the world, and interaction between subjects [12]. Self-regulation is essential for sentience; it is evident in emotions, feelings, and states such as waking or sleeping. Sensorimotor coupling with the world is expressed in perceptions, emotions, and actions. Interaction between subjects is the cognition and emotional experience of the self and others [9]. Implicit in all of this is the presupposition that intelligence cannot exist without consciousness. That is, intelligence is not the stimulus-response mode claimed by behaviorism (represented by reinforcement learning in AI research), nor is it an "emergence from neuroelectricity" as claimed by connectionism (represented by deep learning in AI research); an expression of mechanical determinism is not applicable to the study of intelligent structures. To design a more effective generative agent structure at a time when the basic structure of consciousness is not yet clear, it is necessary to consider an imitative computational consciousness structure that is not based on underlying first principles. That is, to realize an algorithmic simulation of intelligence, a computable structure that can be equivalent to consciousness to a certain extent first needs to be proposed. The main focus of this paper is to optimize the mechanism by which generative agents generate behavioral outputs based on their environment and experiences. Our main contributions are as follows:

- To enhance the foundational structure of generative agents, making them more intelligent and interpretable, we propose the Internal Time-Consciousness Machine (ITCM) as a reconstruction of the underlying architecture of generative agents. This computational consciousness structure can assist generative agents in handling complex tasks with greater flexibility and intelligence while enhancing their interpretability, making their behaviors easier to understand and predict.
- Technically, based on the proposed ITCM structure, we introduce an ITCM-based agent (ITCMA) and validate its effectiveness both in life-scenario environments described by text and in real-world robotic environments.

2 Related Work

2.1 LLMs as agents

The emergence of LLMs has brought new avenues for intelligent agents. In recent years, there has been a proliferation of LLM-based agent architectures [4, 13, 14]. These architectures primarily focus on two aspects: planning and tools [15].
The most prominent research focus in the planning domain is Chain of Thought (CoT) reasoning, which elicits logical reasoning from LLMs through CoT prompts. Initially proposed by Wei et al. [6], CoT's improvement lies in presenting a series of (manually constructed) reasoning steps in the answer part of examples before providing the final answer; the logic is to teach the model to output reasoning steps gradually and then deliver the result. Building upon this, Wang et al. [16] introduced a novel decoding strategy, self-consistency, to replace the naive greedy decoding used in CoT prompting. This strategy capitalizes on the intuition that complex reasoning problems often allow for multiple different ways of thinking that lead to a unique correct answer. Zhou et al. [17] proposed a novel prompting strategy, least-to-most prompting. The key idea is to decompose complex problems into a series of simpler sub-problems, which are then solved sequentially; the answers to previously solved sub-problems aid in solving each subsequent one. Yao et al. [18] introduced the ReAct framework to generate reasoning trajectories and actions for specific tasks in an interleaved manner, enhancing the synergy between the two: reasoning trajectories aid the model in inducing, tracking, and updating action plans and in handling anomalies, while actions enable it to interface with external sources (such as knowledge bases or environments) to gather more information. Building on this, Liu et al. [15] proposed the RAISE architecture, which integrates a dual-component memory system reflecting human short-term and long-term memory to maintain context and continuity in dialogue; this approach enhances the controllability and adaptability of agents in complex, multi-turn conversations. In the tools domain, the primary focus is on the LLMs' ability to leverage external tools and resources, and numerous studies have demonstrated the effectiveness of LLMs using external tools and APIs. Schick et al. [19] proposed Toolformer, which can determine which APIs to call, when to call them, what parameters to pass, and how best to integrate the results into future token predictions. Shen et al. [20] introduced HuggingGPT, an LLM-driven agent that leverages LLMs to connect various artificial intelligence models within the machine learning community to solve AI tasks. In some more intriguing domains, conversational agents have also received widespread discussion. Shao et al. [21] proposed Character-LLM, which enables LLMs to embody specific characters by training models with profiles edited to represent the experiences of those characters. Chen et al. [22] introduced ChatCoT, which models CoT reasoning as multi-turn conversations so as to use tools more naturally through chatting. Chae et al. [23] proposed a knowledge-extraction framework to facilitate dialogue CoT reasoning within the conversation context and subsequently introduced a dialogue reasoning chain, DOCTOR, to provide reliable CoT reasoning for response generation. Apart from constructing LLM-based agents via prompts, some research focuses on fine-tuning methods. Zeng et al. [24] introduced AgentTuning for fine-tuning the Llama 2 model to produce AgentLM.
Chen et al. [25] proposed FireAct, a new method for fine-tuning language models using trajectories and prompting across multiple tasks, demonstrating that a more diverse set of fine-tuning data can further improve agent performance.

2.2 Computational Consciousness Structure

2.2.1 Integrated Information Theory. The central claim of integrated information theory (IIT) is that a physical system is conscious when and only when it is a maximum of integrated information φ. As defined by IIT, integrated information can be roughly described as a measure of the extent to which a system is causally bound to its own past and future states, and of the dependence of these constraints on the causal interconnectedness between system components. If a system has more integrated information than any overlapping system (e.g., a smaller system that is a part of it, or any larger system containing it), then this system has the maximal integrated information. Consciousness requires the maximal φ, not just a non-zero value of φ (this is called the exclusion postulate); IIT also claims that consciousness is identical to the maximal φ [26, 27]. IIT has gained a large number of followers, but it has also been met with philosophical objections [28]. One of these is that consciousness intuitively looks like an intrinsic property, whereas maximal φ is an extrinsic property; if this intuition is correct, then consciousness cannot be identical to maximal φ [29].

2.2.2 Conscious Turing Machine. Influenced by Turing machines (TMs) and the global workspace theory (GWT) of consciousness, Blum [30] combined computational complexity theory and machine learning to propose a formal theoretical computer model called the Conscious Turing Machine (CTM). According to Blum [31], the account of consciousness in the CTM builds on Baars's theater-of-consciousness hypothesis, which likens consciousness to the performance of dramatic actors on a stage of working memory. Their performance occurs under the observation of a group of spectators sitting in the dark (unconscious processors). The stage of the GWT is represented by a short-term memory (STM) that contains the content of the CTM's consciousness at any moment. The spectators are represented by powerful processors, each with its own expertise; together, these processors make up the CTM's long-term memory (LTM). The processors make predictions and receive feedback from the CTM's world, and the learning algorithms inside each processor improve its behavior based on this feedback.

3 The Internal Time-Consciousness Machine (ITCM)

It is undeniable that IIT and the CTM suggest ways to build algorithms based on different aspects of consciousness research. However, the question remains as to whether there is truly a computation-based theory of consciousness, even though there are many similarities between the brain and the structure of the digital computer. Thagard [32] summarizes seven challenges to the study of representational computation: (1) the brain is not "computational" in the standard sense of the term; (2) rationality is not completely independent of emotion; (3) the study of consciousness; (4) embodied cognition; (5) whether the unit of analysis of cognition should include the environment; (6) the dynamical system of the mind; and (7) the social nature of intelligence.
Notably, the asymmetry between cognition and computation occupies the first position on this list of challenges. In cognitive science, cognition develops through three basic forms: sensation, perception, and representation. Sensation is the reflection in the human brain of the individual properties and characteristics of objects (with the aid of the body's nervous system and sensory organs). Perception is the synthesis of various sensations and the reflection of objects as wholes in the human brain. On the basis of perception, representation is produced: representations are reproduced through recollection and association and are formed from multiple perceptions of the same or similar things. Representation is the process by which the image of a previously perceived object is reproduced mentally when the object is no longer present before the individual [33]. In fact, Merleau-Ponty's [11] study of the perceptual field falls conceptually in the middle of these three forms: there is a striking similarity between sensation and the elements of the perceptual field, and between representation and the content of the phenomenal field above the perceptual field. This lends some degree of confirmation to Merleau-Ponty's view of the coherence of consciousness and perception. Does this mean that, if we focus only on constructing a model of pseudo-consciousness that corresponds to intelligence, we can turn our attention to first-person phenomenology? We discuss this question in more detail in Appendix A.

3.1 Basic Structure of the ITCM

3.1.1 Perceptual and Phenomenal Fields. A "field" is a functional and exclusive locus of activity of conscious experience, as identified by Husserl [34]. Merleau-Ponty [11] collectively referred to the various experiential "fields" established by Husserl as "phenomenal fields." Merleau-Ponty argues that consciousness works through one part of the phenomenal field, namely the perceptual field. Only in the perceptual field can silent experience be brought into meaningful expression, completing the actual presentation of consciousness and the conscious activity of final representation. According to Merleau-Ponty, the perceptual subject is always in the midst of other objects; it is always part of the field. Each specific experience can only be unified through its own field; without a mechanism such as the perceptual field, our perception would not be continuous at all. What, then, is the content of the perceptual field? According to Husserl's [34] theory of consciousness, the stream of monads is the most basic unit of meaning in the phenomenal field: a continuous stream of monads forms our perceptions. Libet [35] provides the strongest evidence for this: "We have shown experimentally that not all nerve cell activities give rise to a conscious experience. For example, a short train of stimulus pulses to sensory cortex elicits responses of many nerve cells without any subjective experience" (p. 166). A single cell or cell group cannot be the birthplace of conscious experience; only when a monad becomes part of a monad stream (demonstrating continuous extensibility) can consciousness be constructed, activated, and displayed.
Therefore, let us imagine a spherical coordinate system: the perceptual subject (that is, the agent) is always at the center of the sphere, facing forward, and each perceivable object is described by a set of spherical coordinates $(\theta, \varphi, \gamma)$. In a single static field, such a perceived object is a monad. Once we obtain these sets of represented monads, the spherical coordinate system, which can be transformed by the agent's actions, becomes the agent's phenomenal field. Considering some simulation requirements, we can further refine this rough model. In Merleau-Ponty's conception, the phenomenal field is a larger concept than the perceptual field: the perceptual field is shaped by the material being perceived, and perception is in turn shaped by sensation. Visual examples can illustrate these concepts. It is obvious that infants do not reflect on what they see; they take what they see as real. However, an infant's range of consciousness is not limited to visual stimuli: for objects outside their perceptual range, infants with experience will understand the possible changing conditions of those objects [36]. For example, if we repeatedly use a sleight-of-hand trick to move a piece of candy from the left hand to the right, infants will eventually reach directly for the right hand to obtain the candy. This implies two things. First, the renewal of perception is mandatory in non-reflective consciousness; for the subject, what is perceived is "fact." Second, before perception forces an update, an impression of the object persists in the phenomenal field, and it is not static: it moves and transforms according to prior experience until perception forces an update. We can therefore adapt the definitions for a computer simulation: the phenomenal field is a spherical coordinate system centered on the subject, which contains many monads and can make predictive changes to them according to experience; the perceptual field is the part of the phenomenal field associated with the perception system (composed of sensation and attention) and can be forcibly updated by a perceived "fact." A diagram of the phenomenal and perceptual fields is shown in Figure 1.

Figure 1. Phenomenal and perceptual fields. The phenomenal field is centered on the subject and scoped by the subject's perceptual domain, which decays with distance.

If there are perceived objects, the field can be expressed in the following form:

$$f = \begin{bmatrix} N_1 & \theta_1 & \varphi_1 & \gamma_1 \\ \vdots & \vdots & \vdots & \vdots \\ N_m & \theta_m & \varphi_m & \gamma_m \end{bmatrix} \tag{1}$$

where $N$ represents the meaning of the monad. This is a typical two-dimensional matrix of shape $m \times 4$. Note, however, that the objects perceived in the phenomenal field are not interchangeable: each has its own meaning, and the connections between them remain to be mined. Hence the meanings cannot simply be characterized by a one-hot encoding of $N$; word vectors come naturally to mind.
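To make this representation concrete, here is a minimal NumPy sketch of the $m \times 4$ field matrix of Eq. (1); the helper name and example values are illustrative assumptions, not part of the paper:

```python
import numpy as np

# Minimal sketch of the phenomenal-field matrix of Eq. (1): one row per
# monad, holding (N, theta, phi, gamma). N is a scalar meaning id here;
# Eq. (2) later expands it into an n-dimensional embedding.
def make_field(monads):
    """monads: iterable of (meaning_id, theta, phi, gamma) -> (m, 4) array."""
    return np.array(monads, dtype=float)

# Two objects perceived around the agent, which sits at the origin.
field = make_field([
    (17, 0.3, 1.2, 2.5),   # e.g., a ball at azimuth 0.3, elevation 1.2, range 2.5
    (42, -0.8, 0.9, 4.0),  # e.g., a table
])
print(field.shape)  # (2, 4): an m x 4 matrix, one monad per row
```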
Embeddings are designed to provide stronger semantic features than one-hot vectors: they describe words along many feature dimensions, and these features are trained and learned from the positional relationships of words in large amounts of text. Therefore, a transformation is needed to expand $N$ into an $n$-dimensional vector rich enough to measure the distance between object meanings. After training, the field can thus be unfolded as:

$$f = \begin{bmatrix} N_1^1 & \cdots & N_1^n & \theta_1 & \varphi_1 & \gamma_1 \\ \vdots & \ddots & \vdots & \vdots & \vdots & \vdots \\ N_m^1 & \cdots & N_m^n & \theta_m & \varphi_m & \gamma_m \end{bmatrix} \tag{2}$$

Consciousness is obviously a continuous process, but in a simulation we can regard one $f$-representation as a sample, taken at time $t$, of the originally continuous consciousness.

3.1.2 Stream of Consciousness and Internal Time-Consciousness. On the basis of these frames, we can construct a representation of the stream of consciousness. According to Husserl [34], the basic unit of temporality is not a "knife-edge" present but a "duration block," that is, a temporal field comprising all three temporal modes: present, past, and future. Three phenomenological terms describe the temporal form of this consciousness:

1. Primal impression is the moment narrowly directed toward the present phase of the object. The primal impression never appears in isolation; it is an abstract component that by itself cannot provide awareness of a temporal object, and it is always accompanied by retention and protention.

2. Retention is the component that provides consciousness of the just-elapsed phase of the object; that is, it allows us to be aware of the former present phase as it sinks into the past.

3. Protention is the component that, in a more or less indefinite way, anticipates the phase of the object about to occur. Its role is evident in our implicit, unreflective anticipation of what is about to happen as experience progresses over time.

The specific and complete structure of experience is therefore retention–primal impression–protention. This structure is essentially dynamic, yet at any given moment $t$ it exists simultaneously as a unified whole. Take a continuous tone of changing pitch, C to D to E, as an example. When the C sound is succeeded by the D, our presentational consciousness of the D is accompanied by a retention of the C tone, which can be represented as $D(c)$. When the D is replaced by the E, our presentational consciousness of the E is accompanied not only by a retention of the D but also by a retention of the tone retained within the D, written $E(d(c))$. This nesting would continue if more sounds were perceived. This seems to give every moment of experience an overly complex structure (i.e., one extremely long, unbroken event), with retention complete regardless of the length of the stream. In practice, as time passes, most past objects and experiences quickly lose their distinctiveness and uniqueness, and ultimately lose any importance deserving of attention. This structure is shown in Figure 2.

Figure 2. Consciousness channel composed of retention, primal impression, and protention.
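The retention–primal impression–protention channel can be sketched as a bounded buffer of $f$-frames whose salience decays with age, echoing how past experiences "quickly lose their distinctiveness." The class below is an illustrative assumption (window size, decay scheme, and names are ours, not the paper's):

```python
import numpy as np
from collections import deque

class ConsciousnessChannel:
    """Illustrative retention buffer: past f-frames fade as they age,
    and the protention is to be forecast from (retention, PI); see
    Eq. (3) below. Window and decay values are arbitrary choices."""
    def __init__(self, window=8, decay=0.8):
        self.retention = deque(maxlen=window)
        self.decay = decay

    def step(self, primal_impression):
        """Ingest the current f-frame; return past frames with weights."""
        ages = range(len(self.retention), 0, -1)   # oldest frame ages most
        weights = [self.decay ** age for age in ages]
        snapshot = (list(self.retention), weights)
        self.retention.append(primal_impression)
        return snapshot

channel = ConsciousnessChannel()
for t in range(3):
    frame = np.random.rand(2, 4)   # stand-in for an f-frame (Eq. (1))
    past_frames, fading = channel.step(frame)
```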
The components of retention, primal impression, and protention are all $f$-representations. In computation, both retention and the primal impression can be obtained directly, but it seems we cannot obtain a protention that points to the future. It should be emphasized that protention is not equivalent to the primal impression at time $t+1$; rather, through retention and the primal impression, consciousness can predict the next moment [37], just as we can naturally extend the next segment of a continuous function. The process of obtaining protention can therefore be represented as:

$$\begin{bmatrix} re_0 & re_1 & \ldots & re_{t-1} & PI_t \end{bmatrix} \rightarrow Pro_t, \tag{3}$$

where $re_t$ denotes the retention at time $t$, $PI_t$ the primal impression at $t$, and $Pro_t$ the protention at $t$. In line with the discussion above, $Pro_t \neq PI_{t+1}$.

3.1.3 Memory and Protention. It should be emphasized that retention is distinct from recall: there is a significant difference between a person's impression of something that has just happened (retention) and a person's memory of a past event. A continuous stream of consciousness alone, however, is not sufficient to constitute consciousness. In Brentano's [38] model, perceptual representations are immediately followed by direct memory representations, which are produced by imagination and correlate with constantly changing representations at every moment. Deleuze [39] holds that our memories are the essential material that constitutes perception: the awakening and expansion of memory lead to one's current consciousness. Proust [40] proposed involuntary memory, arguing that such memories cannot be summoned by will. This is fundamentally a form of passive association, but it differs from the associationism criticized by Bergson [41]: this association stems from topological similarity. When completing a mechanical task, predictions about the future should come from experience of how objects change in similar environments, and this experience should also carry over to transfer tasks. For example, when transferring the experience of playing table tennis to playing baseball, both can, with the right features extracted, be seen as "contacting a high-speed moving object with a controllable object in the phenomenal field." This connection is established in the high-dimensional space of memory and is computable. Accidental memory reproduction involves involuntary memory and is built on the similarity between two different sensations at different times. Therefore, the current primal impression is not solely determined by the preceding retention: a certain memory from the past can be awakened in the current primal impression, and if awakened, it is fused into the current consciousness. Equation (3) can thus be expressed more specifically as follows:
$$Re_t = \begin{bmatrix} re_0 & re_1 & \ldots & re_{t-1} \end{bmatrix} \tag{4}$$

$$C_t = \begin{bmatrix} Re_t & PI_t & M(PI_t, Re_t) \end{bmatrix} \tag{5}$$

$$Pro_t = \mathrm{TSFM}(C_t, d_t), \tag{6}$$

where $C_t$ is the content of the consciousness channel at time $t$, $\mathrm{TSFM}$ is the selected time-series forecasting model, and $M$ is the process that triggers stored memory through $PI_t$ and $Re_t$. Before giving the $M$ algorithm, we must first define the function $\mathrm{FieldSim}(f_x, f_y)$, which computes the similarity between two different $f$-representations. An $f$ is an $m \times (n+3)$ matrix with an indeterminate number of rows $m$; writing each row as a meaning vector and a position vector,

$$N_i = \begin{bmatrix} N_i^1 & \ldots & N_i^n \end{bmatrix} \tag{7}$$

$$pos_i = \begin{bmatrix} \theta_i & \varphi_i & \gamma_i \end{bmatrix} \tag{8}$$

an $f_x$ of shape $a \times (n+3)$ and an $f_y$ of shape $b \times (n+3)$ take the forms

$$f_x = \begin{bmatrix} N_1^x & pos_1^x \\ \vdots & \vdots \\ N_a^x & pos_a^x \end{bmatrix} \tag{9}$$

$$f_y = \begin{bmatrix} N_1^y & pos_1^y \\ \vdots & \vdots \\ N_b^y & pos_b^y \end{bmatrix} \tag{10}$$

and their similarity is computed as
$$\mathrm{FieldSim}(f_x, f_y) = \frac{\sum_{i=1}^{a} \left( \omega_N \, \mathrm{cosin}(N_i^x, N_j^y) + \omega_{pos} \, \mathrm{SphericalSim}(pos_i^x, pos_j^y) \right)}{\max(a, b)}, \tag{11}$$

where $\omega_N$ and $\omega_{pos}$ are preset weights with $\omega_N + \omega_{pos} = 1$. The index $j$ is chosen to maximize $\mathrm{cosin}(N_i^x, N_j^y)$; that is, $j$ is the row of $f_y$ whose $N_j^y$ has the highest cosine similarity with $N_i^x$. The function $\mathrm{cosin}(A, B)$ computes the cosine similarity between two equal-length vectors $A$ and $B$:

$$\mathrm{cosin}(A, B) = \frac{\sum_{i=1}^{n} A_i B_i}{\sqrt{\sum_{i=1}^{n} A_i^2} \, \sqrt{\sum_{i=1}^{n} B_i^2}} \tag{12}$$

Because the differences in cosine similarity between spherical coordinate vectors are small, $\mathrm{SphericalSim}(A, B)$ is used instead to compute the similarity between two spherical coordinates:

$$\mathrm{SphericalSim}(A, B) = 1 - \frac{1}{3}\left( \omega_\gamma \tanh(|\gamma_A - \gamma_B|) + \omega_\theta \frac{|\theta_A - \theta_B|}{\pi} + \omega_\varphi \frac{|\varphi_A - \varphi_B|}{2\pi} \right), \tag{13}$$

where the $\omega$ are weights with $\omega_\gamma + \omega_\theta + \omega_\varphi = 1$. Because observation of the phenomenal field is more sensitive to $\gamma$ and $\varphi$, the weights are set to $\omega_\gamma = \omega_\varphi = \frac{3}{7}$ and $\omega_\theta = \frac{1}{7}$. The $M$ algorithm is fully described in Algorithm 1.
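Equations (11)-(13) translate directly into code. Below is a sketch under the assumption that a field is a NumPy array whose first $n$ columns hold the meaning embedding and whose last three columns hold $(\theta, \varphi, \gamma)$; the function names mirror the paper's notation, but the implementation details are ours:

```python
import numpy as np

def cosin(a, b):
    """Cosine similarity between two equal-length vectors (Eq. (12))."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def spherical_sim(pa, pb, w=(3/7, 1/7, 3/7)):
    """Similarity of two (theta, phi, gamma) coordinates (Eq. (13));
    weights are (w_gamma, w_theta, w_phi) = (3/7, 1/7, 3/7) as in the text."""
    wg, wt, wp = w
    return 1 - (wg * np.tanh(abs(pa[2] - pb[2]))
                + wt * abs(pa[0] - pb[0]) / np.pi
                + wp * abs(pa[1] - pb[1]) / (2 * np.pi)) / 3

def field_sim(fx, fy, n, w_n=0.5, w_pos=0.5):
    """FieldSim of Eq. (11): match each row of fx to the row of fy with
    the most similar meaning, then average over max(a, b). The 0.5/0.5
    weight split is an illustrative default, not the paper's value."""
    total = 0.0
    for i in range(len(fx)):
        sims = [cosin(fx[i, :n], fy[j, :n]) for j in range(len(fy))]
        j = int(np.argmax(sims))  # row of fy closest in meaning to row i
        total += w_n * sims[j] + w_pos * spherical_sim(fx[i, n:], fy[j, n:])
    return total / max(len(fx), len(fy))
```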
The function $lev_{Re_t + PI_t,\, Memory_{i,j}}(i, j)$ gives the Levenshtein distance between $[Re_t \; PI_t]$ and $Memory_{i,j}$. Because a string of fields differs structurally from an alphabetic string, the computation of their Levenshtein distance must be modified; the memory-activation procedure that uses it is given in Algorithm 1.

Algorithm 1 Activation of Involuntary Memory
Input: primal impression $PI_t$, retention $Re_t$, long-term memory $Memory$
Output: activated memory $AM$
1: Initialize primal impression $PI_t \leftarrow f_t$
2: Initialize retention $Re_t \leftarrow [re_0 \; \ldots \; re_{t-1}]$
3: Initialize long-term memory $Memory \leftarrow [f_0 \; \ldots \; f_n]$
4: Initialize window size $w$
5: Initialize difference degree $D \leftarrow \infty$
6: Initialize difference-degree threshold $T$
7: Initialize activated memory $AM \leftarrow 0$
8: for $i$ from $n$ to $0$, step $-1$ do
9:  for $j$ from $0$ to $w$ do
10:   $Memory_{i,j} \leftarrow [f_{i-j} \; \ldots \; f_i]$
11:   $L \leftarrow lev_{Re_t + PI_t,\, Memory_{i,j}}(i, j)$
12:   if $L < D$ then
13:    $D \leftarrow L$
14:    $AM \leftarrow Memory_{i,j}$
15:   end if
16:  end for
17:  if $D < T$ then
18:   break
19:  end if
20: end for
21: return $AM$

Specifically, the cost of each element change is set not to 1 but to the difference $diff(f_x, f_y)$ between the two elements modified by the change:

$$diff(f_x, f_y) = 1 - \mathrm{FieldSim}(f_x, f_y) \tag{14}$$

For two sequences of fields

$$A = \begin{bmatrix} f_1^a & f_2^a & \ldots & f_n^a \end{bmatrix} \tag{15}$$

and
$$B = \begin{bmatrix} f_1^b & f_2^b & \ldots & f_m^b \end{bmatrix}, \tag{16}$$

the modified distance is

$$lev_{A,B}(i, j) = \begin{cases} \sum_{k=1}^{j} diff(f_k^b, 0) & \text{if } i = 0, \\ \sum_{k=1}^{i} diff(f_k^a, 0) & \text{if } j = 0, \\ \min \begin{cases} lev_{A,B}(i-1, j) + diff(f_i^a, 0) \\ lev_{A,B}(i, j-1) + diff(f_j^b, 0) \\ lev_{A,B}(i-1, j-1) + diff(f_i^a, f_j^b) \end{cases} & \text{otherwise.} \end{cases} \tag{17}$$

When a memory comes to the present moment, it becomes working memory and guides the present: together with retention and the primal impression, it is used to predict protention.
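The modified Levenshtein distance of Eq. (17) and the memory scan of Algorithm 1 admit a compact implementation. The sketch below reuses the hypothetical `field_sim` from the earlier sketch; treating comparison against an empty slot as cost 1.0 is our reading of the $diff(f, 0)$ terms:

```python
import numpy as np

def diff(fx, fy, n):
    """Eq. (14): dissimilarity of two fields; None stands for the empty
    field, and comparing against it costs 1.0 (our assumption)."""
    if fx is None or fy is None:
        return 1.0
    return 1.0 - field_sim(fx, fy, n)

def field_lev(A, B, n):
    """Eq. (17): Levenshtein distance over two sequences of fields, with
    each edit costing diff() instead of a constant 1."""
    la, lb = len(A), len(B)
    d = np.zeros((la + 1, lb + 1))
    for i in range(1, la + 1):
        d[i, 0] = d[i - 1, 0] + diff(A[i - 1], None, n)
    for j in range(1, lb + 1):
        d[0, j] = d[0, j - 1] + diff(B[j - 1], None, n)
    for i in range(1, la + 1):
        for j in range(1, lb + 1):
            d[i, j] = min(d[i - 1, j] + diff(A[i - 1], None, n),
                          d[i, j - 1] + diff(B[j - 1], None, n),
                          d[i - 1, j - 1] + diff(A[i - 1], B[j - 1], n))
    return d[la, lb]

def activate_memory(PI, retention, memory, n, window=4, threshold=0.5):
    """Algorithm 1: scan long-term memory backward for the stretch most
    similar to [retention, PI]; stop once the difference drops below the
    threshold T. Window and threshold values are illustrative."""
    query = list(retention) + [PI]
    best, AM = float("inf"), None
    for i in range(len(memory) - 1, -1, -1):
        for j in range(min(window, i) + 1):
            candidate = memory[i - j:i + 1]
            L = field_lev(query, candidate, n)
            if L < best:
                best, AM = L, candidate
        if best < threshold:
            break
    return AM
```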
However, an agent is not just a prediction algorithm. Heidegger [42] regarded plans toward the future as the more fundamental dimension of time, forming a temporal structure in which the future is the priority dimension. This implies that the prediction of protention is neither trendless nor based only on past trends: there is a driving force $d_t$ behind the prediction at time $t$.

3.1.4 Drive and Emotions. According to Held [43], the source of this drive is "hope." The hope expressed here is subsumed under the superordinate concept of "expectation." We believe that the role of expectation is associated with future feelings, and this association is colored by specific emotions: we first expect something and then assign a certain degree of emotion to this expectation. Therefore, $d_t$ can be expressed as

$$d_t = d_{t-1} + E \odot W, \tag{18}$$

that is, the previous moment's drive $d_{t-1}$ plus a bias consisting of the Hadamard product of the current emotions $E$ (a vector) and their weights $W$ (a dynamic weight matrix). Emotions can serve as a driving force here because the forced updating of the perceptual field and the prediction of the phenomenal field leave room for the ITCM to introduce them. For example, the distinctive feature of the emotion fear is that it is unknown and uncontrollable, which maps closely onto the prediction and perception of objects; "unknown and uncontrollable" corresponds precisely to the dimension of dominance. For the arousal and pleasure dimensions, we can provisionally assume that the former arises from the amplitude of changes in the monad stream, which is related to passive attention [44], and that the latter arises from the satisfaction of survival needs, such as eating [45]; the actual situation is, of course, far more complex. Consider children as an example. When children eat, their emotional pleasure typically rises. When a ball moves in front of them, the magnitude of change in the ball's monad stream far exceeds that of the other monads in the phenomenal field, and their arousal is positively correlated with the speed of the ball's movement. When a toy does not work as usual, the update of the perceptual field deviates significantly from the predictions of the phenomenal field, and their dominance typically drops [46]. In the PAD model, emotions are described and distinguished along three dimensions: pleasure, arousal, and dominance [47-49]. Hope, as a feeling, is itself an indicator of emotion: it is not another short-lived impulse but continuously presses from its own dimension into the level of action. Such a feeling can therefore also be interpreted with a continuous dimensional model:

$$E = \begin{bmatrix} P & A & D \end{bmatrix} \tag{19}$$

$$W = \begin{bmatrix} \omega_P & \omega_A & \omega_D \end{bmatrix}, \tag{20}$$

where $A$ is the arousal dimension; based on the mechanism of passive attention, it is modeled as the degree of change from the elements in retention to the primal impression. $D$ is the dominance dimension; following its original meaning (degree of control over the environment), it represents the difference between the protention $Pro_{t-1}$ predicted at the previous moment and the primal impression $PI_t$ at this moment. $P$ is the pleasure dimension, which quantifies the degree to which the agent's desires are satisfied and pain is avoided. The computation of $desire \in [0, \infty)$ and $pain \in [0, \infty)$ is defined by the specific situation; in reinforcement learning tasks, for example, the desired can be defined as reward and pain as punishment. Specifically:

$$P_t = \tanh(desire) - \tanh(pain) \tag{21}$$

$$A_t = \tanh\left( \sum_{n=1}^{t-1} \frac{2n}{t(t-1)} \, diff(PI_t, re_n) \right) \tag{22}$$

$$D_t = \tanh\big( diff(PI_t, Pro_{t-1}) \big) \tag{23}$$

The ITCMA's action choices are influenced by this model: it tends to choose actions likely to lead to a protention that maximizes $P$ and $D$ while keeping $A$ stable.
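The PAD emotion vector and the drive update of Eqs. (18)-(23) then amount to a few lines. This sketch reuses the hypothetical `diff` above, and the weight values are placeholders:

```python
import numpy as np

def pad_emotions(PI, retention, Pro_prev, desire, pain, n):
    """E = [P, A, D] per Eqs. (21)-(23); retention holds [re_0, ..., re_{t-1}]."""
    P = np.tanh(desire) - np.tanh(pain)                      # Eq. (21)
    t = len(retention)
    if t > 1:
        A = np.tanh(sum(2 * k / (t * (t - 1)) * diff(PI, retention[k], n)
                        for k in range(1, t)))               # Eq. (22)
    else:
        A = 0.0  # nothing has changed yet
    D = np.tanh(diff(PI, Pro_prev, n)) if Pro_prev is not None else 0.0  # Eq. (23)
    return np.array([P, A, D])

def update_drive(d_prev, E, W):
    """Eq. (18): d_t = d_{t-1} + E (Hadamard product) W."""
    return d_prev + E * W

W = np.array([0.5, 0.2, 0.3])  # placeholder omega_P, omega_A, omega_D
```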
3.2 ITCM-Based Agent

Park et al. [3] proposed the original generative agent structure, in which all content is recorded and inferred in natural language, enabling the structure to utilize an LLM. Influenced by this, Xi et al. [4] proposed a general conceptual framework for generative agents with three key components: brain, perception, and action. The structure of the ITCM-based agent (ITCMA) is similar: perception serves as input from the environment, and action serves as output to the environment. After receiving an action, the environment forces the agent to form a perception as a re-action to it, and re-action is the carrier of action. The brain, however, is not equated with the LLM here: its role is played by the complete ITCM structure, and the LLM is used only as a tool. The structure is shown in Figure 3.

Figure 3. Conceptual framework of the ITCMA.

Based on the specific ITCM structure, Figure 3 can be further refined. The perception provided can be regarded as the primal impression $PI_t$ at time $t$. After receiving $C_{t-1}$ from time $t-1$ and forming the consciousness channel $C_t$, the agent extracts the activated memory $AM$ and its subsequent content from long-term memory, forming the protention $Pro_t$ through the TSFM. This is integrated with the driving force $d_t$ into a natural-language format and provided to the LLM to produce an action, as shown in Figure 4.

Figure 4. Detailed structure of the ITCMA.

Maslow argues that there are two types of human behavior: coping behaviors and expressive behaviors [80]. We can understand expressive behaviors as mechanical reflexes, such as lifting a finger, and coping behaviors as behaviors in a broader sense, which include running and eating. Bobick [79] divides human movement into movement, activity, and action, while He [81] divides it into behavior, action, and basic action. Both describe action as the basic unit of human movement, with actions composing more complex actions and behaviors: a finger-bending action is part of grasping, and grasping, along with other actions, constitutes eating behavior. The actions produced by the ITCMA should therefore be regarded as coping behaviors; when the agent is deployed on hardware that must be manipulated through specific movements, these coping behaviors can be decomposed into a set of learned expressive behaviors. Such movements can come from motion algorithms in robotics and can be considered "unconscious actions," just as humans do not attend to how their limbs move when executing a "go to somewhere" action.
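The overall perception-to-action cycle described in this subsection can be summarized in a short sketch. It reuses the hypothetical helpers from the earlier sketches; `env`, `tsfm.predict`, `llm.act`, and `describe` are assumed stand-in interfaces, not the paper's code:

```python
def describe(C, Pro, drive):
    """Hypothetical serialization of the channel into natural language."""
    return f"consciousness channel: {C}; protention: {Pro}; drive: {drive}"

def itcma_step(env, state, tsfm, llm, n):
    """One illustrative ITCMA cycle; all interfaces are assumptions."""
    PI = env.perceive()                                   # primal impression f_t
    AM = activate_memory(PI, state.retention, state.memory, n)  # Algorithm 1
    C = (state.retention, PI, AM)                         # consciousness channel, Eq. (5)
    E = pad_emotions(PI, state.retention, state.Pro_prev,
                     env.desire(), env.pain(), n)         # Eqs. (21)-(23)
    state.drive = update_drive(state.drive, E, state.W)   # Eq. (18)
    Pro = tsfm.predict(C, state.drive)                    # protention, Eq. (6)
    action = llm.act(describe(C, Pro, state.drive))       # LLM used only as a tool
    env.execute(action)            # coping behavior, decomposed into movements
    state.retention.append(PI)     # the impression sinks into retention
    state.memory.append(PI)        # and into long-term memory
    state.Pro_prev = Pro
    return action
```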
Because the ITCM constructs an agent that simulates consciousness without social experience, a vision-language model (VLM) can optionally be used to aid modeling. In complex task environments, the information captured by vision includes not only the spatial topology of an object but also a holistic understanding of it. Using a VLM to characterize the visual image and feeding the result into the ITCMA workflow can, to some extent, bypass the agent's early learning and growth process. However, because the VLM's capacity is devoted to its distinctive visual abilities, its language generation is somewhat weaker than that of an LLM of the same scale; it is therefore used here only as a descriptive tool and cannot replace the LLM that generates actions. Appendix B gives the specific implementation details and examples of the ITCMA.

4 Experiments

4.1 Conditions

In commonly used text-based scenarios, the agent interacts with the world using natural language only. Huang et al. [50] demonstrated that, with appropriate prompts, a sufficiently large LLM can effectively decompose high-level tasks into appropriate subtasks without additional training. However, the actions generated by agents often lack awareness of the dynamic environment around them; for example, tasks are sometimes broken down into non-executable subtasks. Therefore, to evaluate the ITCMA properly, a more realistic and complex simulation environment should be constructed in addition to text scenarios [4].

4.1.1 Alfworld. An experiment on life scenarios was conducted in the Alfworld environment [51], a set of TextWorld [52] environments consisting of six task types, each requiring the solution of multiple composite sub-goals. An example task is "Wash the eggs and put them in the microwave." Alfworld contains:

1. 3,553 training task instances.

2. 140 in-distribution evaluation task instances (the seen set): tasks in rooms seen during training (same task types, items, containers, and rooms), though item positions, quantities, and visual appearances may differ.

3. 34 out-of-distribution evaluation task instances (the unseen set): new tasks that may contain known object-container pairs but always occur in unknown rooms whose container and scene layouts differ from the training tasks.

The Alfworld environment thus lets us evaluate the ITCMA on life scenarios.

4.1.2 The Quadruped Robot in the Real World. To verify that the ITCMA performs consistently in real environments as in text environments, we deployed it on a quadruped robot to complete some simple tasks. We wrote environment code that lets the quadruped robot analyze and perceive information from camera images, thereby generating a phenomenal field of the real world. The basic format of this environment is consistent with Alfworld, and actions are taken through basic operation primitives. Because robot control involves low-level algorithmic detail, the mapping from actions to movements is fixed. The quadruped robot used for testing was built on the open-source Tinymal project [53]. To reduce manufacturing complexity and cost, direct-drive servo motors were employed as actuators, and the main structure was composed of wire-cut carbon-fiber plates and 3D-printed PETG parts. Each of the robot's legs has three degrees of freedom, comprising hip, thigh, and knee joints. The final quadruped robot environment is shown in Figure 5.

Figure 5. The environment of the quadruped robot in the real world.

4.2 Evaluation Index

Although LLM-based agents perform well in areas such as independent operation, collective cooperation, and human interaction, quantifying and objectively evaluating them remains a challenge. Xi et al.
[4] discussed existing work on evaluating LLM-based agents and suggested that they can be evaluated from multiple perspectives. This study focuses on utility and on the ability to continually evolve.

Utility. Effectiveness and practicality during task execution are crucial evaluation criteria. Specifically, the success rate of task completion is the main indicator of utility: whether the agent achieved the specified goals or expected scores.

The ability to continually evolve. This requires agents to use the knowledge, abilities, and skills acquired in their original environment to successfully complete specific tasks and goals in unfamiliar, novel environments.

4.3 Evaluation Procedure

We define the training of the ITCMA as the expansion of its memory, using imitation learning to fill the memory with standard strategies and environmental transitions from the training set. In the comparative experiment in the Alfworld environment, the agent was limited to completing tasks within 20 steps. The results of the trained agent verify its utility, while untrained agents (with blank memories) were also placed in the experimental environment to test their ability to continually evolve. The baselines chosen for this study are the following:

1. PET. This model reduces irrelevant objects and containers in the environmental information through early error correction, encouraging agents to explore scenarios and plan actions more effectively while focusing on the current subtask [54].

2. GPT-4 with zero-shot Chain of Thought (CoT). Zero-shot CoT includes no manually annotated task demonstrations in the prompt; the LLM first generates inference steps from the prompt and then uses the generated chain to derive the final answer. Once the model exceeds a certain scale, this strategy greatly improves performance [6].

3. BUTLER. This model first uses imitation learning to acquire and execute abstract tasks in TextWorld and then transfers the learned strategies to the concrete tasks in ALFRED (action learning from realistic environments and directives). BUTLER generalizes from TextWorld to unseen concrete tasks and settings in a zero-shot manner; when an action fails, the agent uses beam search to recover [51].

4. Fine-tuned GPT2-medium. This model is fine-tuned on the 3,553 demonstrations of the Alfworld training set and generates each action step token by token, mimicking the rule-based expert under the standard maximum-likelihood loss [55].

Like PET, we used the standard strategies provided by the Alfworld training set to train the ITCMA through behavioral cloning (BC). The training results are stored in the ITCM's memory and are activated and generalized into the current consciousness channel during actual tasks.

Table 1. Task completion rates of the different models on each evaluation split (seen set and unseen set) in Alfworld (%).
| Model | Seen set | Unseen set |
|---|---|---|
| PET + Action Attention | 70 | 67.5 |
| GPT-4 + Zero-shot CoT | 59 | 31 |
| BUTLER + DAgger | 40 | 35 |
| BUTLER + BC | 10 | 9 |
| Fine-tuned GPT2-medium | 91 | 95 |
| ITCMA (Untrained) | 96 | 84 |
| ITCMA (Trained) | 100 | 98 |

Because fine-tuning was needed and we wished to test the generalizability of the model, we chose ChatGLM3-6B [24, 56] as the TSFM for generating protention: after fine-tuning, and using function tools, it generates text for the protention from the provided information and converts it into field format for storage. We chose GPT-4 as the LLM for generating actions. To test the effectiveness of the ITCMA in the real world, we also had a quadruped robot driven by the ITCMA push a box through a red area and into a green area within 10 steps. This environment uses the same operation primitives as the Alfworld environment, including actions such as "take the box" and "go to red area"; these actions are decomposed into basic robot movements, such as advancing 20 cm, based on the position information in the phenomenal field. We chose MiniGPT-v2 [57] as the VLM for locating scene objects and converting visual information into formatted text.

4.4 Results

We compared the task completion rates of the trained and untrained ITCMA and the baselines in the Alfworld environment; the results are shown in Table 1 and Figure 6. The trained ITCMA completed all tasks within the specified number of steps on the seen set, 9 percentage points above the previous state of the art (fine-tuned GPT2-medium), and reached a 98% completion rate on the unseen set, 3 points above the SOTA. Even the untrained model outperformed the SOTA by 5 points on the seen set. Analysis of the experimental records indicates that the trained ITCMA may at first persist in interacting with task locations that lack the required items, but it quickly settles into a fluid workflow once it discovers that the required items are not at the task location. Compared with the trained ITCMA, which quickly locates task items through memory, untrained agents focus more on plausible common-sense areas, which can waste many steps and sometimes prevents task completion within the step limit.

Figure 6. Comparison of the experimental results.

We also compared the average number of steps taken by the trained and untrained ITCMA to complete the tasks: the untrained ITCMA took more steps (13.7) than the trained ITCMA (9.5). As noted above, the extra steps were mainly spent exploring the environment to determine where to obtain the items required for the task. The untrained ITCMA showed remarkable planning ability while exploring: although the protention mechanism occasionally trapped it in a loop over multiple locations (caused by a small retention window size), it rarely explored locations unrelated to the task, closely mirroring the role of the elimination module in the PET model. After discovering the items required for the task, the untrained ITCMA proceeded relatively smoothly, quickly decomposing and completing the task.
In the quadruped robot task, the untrained ITCMA achieved an 85% average task completion rate within the 10-step limit, similar to its unseen-set result in the Alfworld environment, demonstrating comparable utility in the real world. Analysis of the experimental recordings shows that relying on visual image information alone to force updates of the phenomenal field caused the agent to spend extra steps repeating the "go to somewhere" action, because inaccurate recognition led to misjudged distances. In addition, the VLM's judgment of visual information was weak: it could not determine the "take something" state, resulting in repeated picking actions and an inability to perform the final "put down something" action.

5 Discussion

5.1 Discussion of the Experimental Results

This study established a generative agent based on a computational consciousness structure and analyzed its performance in simulating trustworthy human behavior. This helps deepen our understanding of agents' behavior and decision-making in complex environments, as well as their interaction with those environments. Using the proposed ITCM computational consciousness structure, this study enhances agents' behavioral performance by simulating part of human consciousness. The structure enhances the behavioral ability of the LLM, without which the LLM would be limited to text-based question answering and general information. Clearly, the approach in traditional generative-agent research of equating the LLM with the brain [4] is questionable: by treating the LLM only as the final decision-making stage of a stream of consciousness, the agent can exhibit almost human-like behavioral logic. Throughout the study, the ITCMA made inferences and decisions based on environmental conditions; as shown in Section 4.4, even an untrained ITCMA can start tasks quickly after exploring the environment. Anderson holds that intelligent organisms solve problems in three ways: backup avoidance, difference reduction, and means-ends analysis [58]. Of these, means-ends analysis represents significant progress over the other two and is believed to be a major reason for the intelligence gap between humans and other organisms [59]. CoT [6] originally attempted to address the problem of means-ends analysis, but even GPT-4 using CoT has difficulty achieving a high task completion rate. On this basis, the ITCM structure can be considered to give an LLM a better, more human-like thinking logic, which may be a basis for the emergence of artificial general intelligence (AGI). Furthermore, we note that when selecting an LLM as the decision module for the ITCM, ChatGLM3-6B provided only limited effectiveness (even when its tool functionality was used instead of prompt-based information input), while ChatGPT and GPT-4 produced almost identically good results. This may relate to the emergence phenomenon. Holland holds that emergence is the product of a complex system composed of many components and programs, in which the output is greater than what the individual parts, operating separately and added together, would produce [60]. For LLMs specifically, this means that once a model passes a certain scale, performance improves significantly and unexpected capabilities appear.
This is similar to the situation with CoT: it has little effect on small models; the parameter count must reach at least about 10 billion for an effect to appear, and about 100 billion for the effect to be significant. Moreover, the output of small models shows that most of their CoT text is fluent but illogical, leading to incorrect results [6]. Our proposed ITCM structure shares a certain degree of similarity with few-shot prompting [61] and is therefore also constrained by the LLM's emergent abilities. However, Holland [60] also pointed out that a few rules or laws can generate complex systems and produce perpetual novelty and new emergence in constantly changing forms. In addition to the upper-level rule constraints provided by the ITCM, the demand on the LLM's own emergent ability is partially offset by the emergence generated by the ITCM itself; this is why ChatGPT and GPT-4, models of different scales, exhibited approximately consistent results.

5.2 A Model Not Entirely Devoted to One Task

Consider existing artificial intelligence models. Whether for a neural network or for reinforcement learning, the underlying idea is the same: for a given task, construct the network structure of the model (or the reward function of the agent), then train it from scratch (or from a pre-trained model) until a task-specific model is obtained, as shown in Figure 7(a). This approach poses no major problems within the current AI framework. However, when we look at animals, which learn faster and transfer better, we find a different picture. To train a mouse to solve a maze, the experimenter does not perform brain surgery to turn the mouse into a maze-solving machine but guides it to accumulate experience in this domain by meeting its needs (i.e., placing food as a reward along the path). In this process, the experimenter designs a mediator between the agent and the task, according to the characteristics of the agent, so that the agent can generate experience in the domain, as shown in Figure 7(b). The researcher need not build a model from scratch each time, nor rely on a pre-trained model as a transfer substrate for a class of tasks (the role of a pre-trained model across different classes of tasks is very limited); the researcher only has to design, for each task, a mediator that guides the agent, and the spatio-temporal similarity between the new task and other, untrained tasks will be written into the memory network. A model that is not entirely devoted to one task requires only one agent: once its memory network reaches a certain complexity, the experience stored there from earlier tasks (even tasks of a different type) can serve as a basis for the current task and for transfer. Such a model could become a foundation for artificial general intelligence (AGI). We select several relatively noteworthy points in consciousness research for analysis and discussion in Appendix C.

Figure 7. Task logic. (a) Task logic of a current artificial intelligence model. (b) Task logic of a model not entirely devoted to one task. 6" |
| }, |
| { |
| "url": "http://arxiv.org/abs/2312.13876v1", |
| "title": "Capture the Flag: Uncovering Data Insights with Large Language Models", |
| "abstract": "The extraction of a small number of relevant insights from vast amounts of\ndata is a crucial component of data-driven decision-making. However,\naccomplishing this task requires considerable technical skills, domain\nexpertise, and human labor. This study explores the potential of using Large\nLanguage Models (LLMs) to automate the discovery of insights in data,\nleveraging recent advances in reasoning and code generation techniques. We\npropose a new evaluation methodology based on a \"capture the flag\" principle,\nmeasuring the ability of such models to recognize meaningful and pertinent\ninformation (flags) in a dataset. We further propose two proof-of-concept\nagents, with different inner workings, and compare their ability to capture\nsuch flags in a real-world sales dataset. While the work reported here is\npreliminary, our results are sufficiently interesting to mandate future\nexploration by the community.", |
| "authors": "Issam Laradji, Perouz Taslakian, Sai Rajeswar, Valentina Zantedeschi, Alexandre Lacoste, Nicolas Chapados, David Vazquez, Christopher Pal, Alexandre Drouin", |
| "published": "2023-12-21", |
| "updated": "2023-12-21", |
| "primary_cat": "cs.LG", |
| "cats": [ |
| "cs.LG", |
| "cs.CL", |
| "stat.ML" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM AND Agent", |
| "gt": "The extraction of a small number of relevant insights from vast amounts of\ndata is a crucial component of data-driven decision-making. However,\naccomplishing this task requires considerable technical skills, domain\nexpertise, and human labor. This study explores the potential of using Large\nLanguage Models (LLMs) to automate the discovery of insights in data,\nleveraging recent advances in reasoning and code generation techniques. We\npropose a new evaluation methodology based on a \"capture the flag\" principle,\nmeasuring the ability of such models to recognize meaningful and pertinent\ninformation (flags) in a dataset. We further propose two proof-of-concept\nagents, with different inner workings, and compare their ability to capture\nsuch flags in a real-world sales dataset. While the work reported here is\npreliminary, our results are sufficiently interesting to mandate future\nexploration by the community.", |
| "main_content": "Introduction Many organizations\u2014businesses, government agencies, and academic research groups\u2014accumulate vast amounts of diverse data with the intent of using it for decision-making [McAfee and Brynjolfsson, 2012, Colson, 2019, Bean, 2022]. Accumulated historical data can empower organizations to acquire valuable insights, make informed decisions, and be used to make predictions for future business scenarios and their respective likelihoods [Colson, 2019]. Such insights can allow more effective interventions in tasks that are typically conducted based on intuition [McAfee and Brynjolfsson, 2012]. However, the true value of this data only becomes apparent when individuals possess the necessary resources, time, and expertise to fully leverage it [Bean, 2022]. Extracting meaningful visualizations, generating insightful summaries, and identifying anomalies all require a significant level of skill [McAfee and Brynjolfsson, 2012]. Moreover, interpreting these insights effectively extends beyond data manipulation, calling for extensive domain expertise [Colson, 2019]. In this work, we envision autonomous data-science agents capable of extracting insights and interpreting them within context; such agents would enable individuals with low data-science expertise (e.g., business decision-makers) to make the most out of their data. We further seek agents capable of surfacing insights of key interest, such as to limit the cognitive burden of decision-makers and mitigate the effect of cognitive biases [Colson, 2019]. We argue that approaches that rely solely on statistical analysis, such as the Automatic Statistician [Steinruecken et al., 2019], are of limited interest for this task since they lack the background knowledge required to interpret their findings in context. Rather, we hypothesize that Large-Language models (LLMs), such as ChatGPT [OpenAI, 2023b, Shen et al., 2023] and GPT-4 [OpenAI, 2023a], may serve as a good basis for such agents in light of recent advances in reasoning [Wei et al., 2022], code generation [Olausson et al., 2023], and given the breadth of their knowledge bases [Bubeck et al., 2023]. This direction has been increasingly explored by the community, with some success, e.g., LiDA [Dibia, 2023], GPT4-Analyst [Cheng et al., 2023]), and Sheet-Copilot [Li et al., 2023a]. Foundation Models for Decision Making Workshop at NeurIPS 2023. arXiv:2312.13876v1 [cs.LG] 21 Dec 2023 \fb) Aggregator Agent a) Explorer Agent Insights Questions What were the total sales for 2023? Which retailer had the highest sales in 2023? Are there any trends in the monthly sales? Code Aggregated View Search for Insights > Is everything normal here? Profits are way too low! Rolling Window Figure 1: Overview of our data science agents. a) The Explorer agent, which generates questions and writes code to answer them; b) The Aggregator agent, which produces various aggregations of the data and then looks at snippets of the aggregated data, pointing out anything it finds relevant. Both agents can perform multiple cycles, using the discovered insights to guide future exploration. To facilitate the development of such agents, we propose a new evaluation methodology, based on a capture the flag principle. 
In contrast with the common approach, which consists of evaluating the correctness of code generated by agents [Huang et al., 2022, Yin et al., 2023, Shinn et al., 2023], we focus on the end result and assess the ability of agents to recover key insights, regardless of how they were obtained. This yields a general approach that can be used to evaluate all kinds of agents, irrespective of their internal mechanisms. Our contributions are as follows:

• We propose a capture-the-flag approach to evaluating data science agents (Sec. 3)
• We elaborate two LLM-based proof-of-concept agents for this task (Sec. 4)
• We assess the ability of such agents to capture flags planted in real-world sales data (Sec. 5)

We wish to highlight that this work serves as a proof of concept, intended to inspire further developments in the community, rather than providing a comprehensive methodology and benchmark.

2 Related Work

Automatic Data Analysis: Considering the magnitude of skills and expertise needed in data science, it is essential to develop automated systems for this process [Aggarwal et al., 2019, Dhamdhere et al., 2017, Raedt et al., 2018, Dibia and Çagatay Demiralp, 2018]. Steinruecken et al. [2019] and Meduri et al. [2021] make efforts to streamline multiple aspects of data science, including the automated creation of models from data and the generation of reports with minimal human involvement. These reports not only incorporate fundamental plots and statistics but also feature human-friendly explanations presented in natural language. Recent progress in AI suggests that LLM-based analysis assistants are a promising solution, enabling data analysts to execute and automate their analyses [Chandel et al., 2022, Yin et al., 2023]. Dibia [2023] and Maddigan and Susnjak [2023] investigate GPT's capabilities for visualization tasks, primarily its proficiency in generating code for data visualizations. Gu et al. [2023] provide design guidelines to empower end-user analysts interacting with AI-based analysis assistants.

Code Generation: LLMs can generate highly expressive programs in response to natural-language prompts. These programs are proficient at tasks spanning competitive programming [Li et al., 2022] and data science [Lai et al., 2022, Huang et al., 2022, Wang et al., 2022], guided by high-level descriptions. Code generation models based on LLMs [Chen et al., 2021a, Khan et al., 2023, Fried et al., 2022, Agashe et al., 2019], alongside program synthesis techniques [Chaudhuri et al., 2021, Zhang et al., 2021], empower users to accomplish tasks without traditional programming. Code generation for data analytics has seen a rise in research effort, tackling important issues in automating the creation of data analysis pipelines. Lai et al. [2022] and Huang et al. [2022] propose benchmarks in the data science domain that are evaluated using human-written test cases across diverse Python libraries. Yin et al. [2023] investigate Python Jupyter notebooks as the preferred platform for interactive coding, while limiting permissible actions to a restricted scope of code and libraries. Chen et al. [2021b] generate visualization code from natural language but limit options to a subset of Matplotlib. In this work, we deviate from the code-generation setting, as we attribute more focus to the solution than to how it was obtained.
GPT for Data Analysis: GPT models have the potential to proficiently analyze extensive unstructured text datasets and extract valuable insights from them [Gravitas, 2023]. Chen et al. [2023] examine GPT models (GPT-3.5 and GPT-4) in the context of a data visualization course, shedding light on their advantages and drawbacks. Ma et al. [2023] introduce the InsightPilot system, an automated data exploration tool leveraging LLMs to streamline the data exploration process. It achieves this by automatically identifying suitable analysis objectives and generating corresponding targeted queries, known as intentional queries (IQueries). Cheng et al. [2023] and Wang et al. [2023] evaluate GPT-4's data analysis abilities and introduce an end-to-end automation framework for data processing.
Anomaly Detection: The problem of anomaly detection has been an active area of study with important applications in many industries. Statistical approaches for finding outliers and, more recently, deep learning methods have shown great success in addressing the problem [Salehi et al., 2022, Ruff et al., 2021, Hendrycks and Gimpel, 2017, Yang et al., 2022]. However, anomaly detection remains a challenging task, as the definition of what constitutes an anomaly is vague: it often depends on the industry and requires specialized background knowledge to determine which of the outliers are in fact anomalous [Foorthuis, 2020]. Moreover, the rarity of anomalies in real-world scenarios poses a challenge for machine-learning training due to the lack of labeled data samples [Shen et al., 2020, Kumagai et al., 2019]. In this work, we leverage the background knowledge of LLMs to help identify anomalies.

3 Capturing the Flag as an Evaluation Methodology
We propose to evaluate data science agents based on their ability to recover salient facts in data. Our evaluation disregards the methodology used by the agents to discover such facts, thereby enabling the assessment of agents with varying inner workings. Our approach is straightforward: we select an arbitrary dataset and manually corrupt it to introduce deviations from what would be expected in the context of the dataset. We term such deviations flags. We then evaluate the ability of agents to recover the planted flags. This approach is inspired by the one used in cybersecurity competitions (see, e.g., the DEF CON series: https://infosec-conferences.com/event-series/def-con/), where participants are challenged to detect and recover flags representing vulnerabilities or malicious patterns in software systems.
What Makes a Good Flag? A good flag is an observation that could be useful for decision-making. Therefore, flags that correspond to insignificant data corruption, such as invalid dates like Feb. 31st, are not considered. Instead, we favor flags that (i) have a plausible contextual explanation and (ii) necessitate some background knowledge to identify. For example, in a clothing sales transactions dataset, a flag could be created by inflating swimwear sales in January beyond their June levels. This flag is plausible, as it could result from a marketing campaign, but domain knowledge tells us that most people purchase swimsuits during the summer, not the winter.
Adidas Sales Dataset: Throughout the rest of this work, we use the Adidas Sales dataset [Chaudhari, 2023] as a running example. This dataset contains 14 attributes and 9652 entries corresponding to daily information on the sales of various Adidas products, including the number of units sold, the total sales amounts, and profit margins. We manually incorporate the following flags in the data and assess the ability of our proposed LLM-based data science agents to recover them:
- Flag 1: The profit margins for all retailers in the state of Arizona are extremely low (0.1%).
- Flag 2: The state of Alaska has higher total sales than California despite its smaller population.
- Flag 3: On one given day, one retailer in Los Angeles sold an enormous quantity of men's footwear.
These flags are interesting because they require various levels of background knowledge, as well as various levels of data aggregation, to be detectable. For instance, detecting Flag 1 requires knowing that a store should normally make more than 0.1% profit; Flag 2 requires knowledge of the population of various states and can only be detected when aggregating sales data at the state level; and Flag 3 is a localized event that requires a fine-grained look at a specific data sample.
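For illustration, the corruption step can be as simple as a few pandas operations. The following minimal sketch assumes hypothetical file and column names ('State', 'City', 'Product', 'Units Sold', 'Operating Margin') rather than the dataset's exact schema.

import pandas as pd

# Load the sales table (file and column names are illustrative assumptions).
df = pd.read_csv('adidas_sales.csv')

# Flag 1: force an extremely low profit margin (0.1%) for all Arizona retailers.
df.loc[df['State'] == 'Arizona', 'Operating Margin'] = 0.001

# Flag 3: plant a localized anomaly, i.e., one Los Angeles retailer sells an
# enormous quantity of men's footwear on a single day (assumes a matching row).
rows = df.index[(df['City'] == 'Los Angeles') & (df['Product'] == "Men's Street Footwear")]
df.loc[rows[0], 'Units Sold'] = 8_000_000

df.to_csv('adidas_sales_flagged.csv', index=False)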
Towards a Benchmark: This preliminary work aims to assess the feasibility of creating a large-scale benchmark based on the capture-the-flag methodology. As such, we focus on the Adidas Sales dataset and defer a generalization to other datasets to future research. Additionally, we plan to address extensions, such as elaborating a methodology that accounts for flags arising naturally in such datasets, in future work.

4 Data Science Agents
As a proof of concept, we propose two data science agents with different inner workings and assess their ability to recover flags planted in data. The prompts used by our agents are resolved by OpenAI API calls to models such as GPT-3.5 [OpenAI, 2023b]. An overview of these agents is provided in Fig. 1, and their inner workings are detailed below.

4.1 Explorer Agent
This first agent relies on code generation. It follows a top-down approach: it analyzes the schema of a dataset and generates a few questions, which it answers through code generation. It then generates more questions, digging into anything that it deems interesting or surprising, and the process is repeated. After multiple rounds of questions, it reports any noteworthy insights. Detailed pseudocode is available in Algorithm 1, and the prompts used at each step are given in App. A.1.
Limitations: The main limitation of the Explorer agent is that it must ask a series of high-level questions in order to identify traces of the flags and then, in subsequent rounds of questions, dig until it fully uncovers a flag. When flags exist at a high level, e.g., Flag 2 in Sec. 3, this is not a problem. However, flags corresponding to anomalous events that occur on one given day, e.g., Flag 3 in Sec. 3, can easily be missed if the wrong questions are asked.

Algorithm 1 ExplorerAgent
Require: rawData, generalGoal, nRounds, dataContext
Ensure: Top insights about the dataset
1: Initialize: questions ← {}, answers ← {}, insights ← {}
2: while not Reached(nRounds) do
     ▷ Stage 1: Generate questions using the goal and the accumulated insights
3:   questions ← AskLLM(rawData, generalGoal, insights, dataContext), using the prompt in App. A.1.1
     ▷ Stage 2: Generate code for each question
4:   for each question in questions do
5:     code ← GenerateCodeToAnswer(question, dataContext)
6:     answer ← ExecuteCode(rawData, code)
7:     answers.add(answer)
8:   end for
     ▷ Stage 3: Extract insights from the answers
9:   insights.add(ExtractInsights(answers))
10: end while
11: Return TopInsights(insights), using the prompt in App. A.1.2
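To make the control flow concrete, the following is a minimal Python sketch of Algorithm 1. It is illustrative only: the callables passed in (ask_llm, generate_code, run_code, extract_insights) are hypothetical stand-ins for the prompt-resolving OpenAI API calls and the code execution described above.

from typing import Any, Callable

def explorer_agent(raw_data: Any, general_goal: str, n_rounds: int,
                   data_context: str, ask_llm: Callable, generate_code: Callable,
                   run_code: Callable, extract_insights: Callable) -> list:
    # Accumulated insights guide question generation in later rounds.
    insights: list = []
    for _ in range(n_rounds):
        # Stage 1: generate questions from the goal and the insights so far.
        questions = ask_llm(raw_data, general_goal, insights, data_context)
        answers = []
        # Stage 2: answer each question by generating and executing code.
        for question in questions:
            code = generate_code(question, data_context)
            answers.append(run_code(raw_data, code))
        # Stage 3: distill the answers into insights for the next round.
        insights.extend(extract_insights(answers))
    return insights  # a final ranking prompt would select the top insights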
4.2 Aggregator Agent
The second agent uses a bottom-up approach that differs completely from the previous one. It begins by using code generation to produce various aggregations (views) of the data. Then, it scans each of these views using a sliding window, flagging anything that appears abnormal. Notably, the flagging is done without relying on code generation; it relies entirely on the common-sense domain knowledge built into the model. In familiar terms, one could say that the agent eyeballs the data. Interestingly, in contrast to existing code-based evaluation methodologies, our capture-the-flag approach supports such agents. Detailed pseudocode for this agent is available in Algorithm 2, and the prompts used at each step are given in App. A.2.
Limitations: In contrast with the Explorer agent, which is limited to asking specific questions, a key strength of the Aggregator agent is that it looks at the data in a more holistic way. However, its main limitation is that its conclusions are not grounded in code. Rather, the data is added to the prompt and the agent extracts any insight it finds relevant. Proceeding in such a way makes the agent vulnerable to hallucinating facts. Further, the reliance on a sliding window may prevent the agent from detecting patterns that span multiple windows (e.g., trends that arise over several months).

Algorithm 2 AggregatorAgent
Require: rawData, nAggregations, generalGoal, dataColumns, dataStats, slidingWindow
Ensure: Top insights about the dataset
1: Initialize: aggregationList ← {}, insightCount ← 0, insightList ← {}
   ▷ Stage 1: Generate aggregations of the data
2: while not Reached(nAggregations) do
3:   aggregation ← AskLLM(rawData, generalGoal, dataColumns, dataStats), using the prompt in App. A.2.1
4:   aggregatedData ← AggregateData(rawData, aggregation)
5:   aggregationList.add(aggregatedData)
6: end while
   ▷ Stage 2: Extract insights from each aggregation
7: for aggregatedData in aggregationList do
8:   for i in range(0, length(aggregatedData), slidingWindow) do
9:     aggregatedDataWindow ← aggregatedData[i:i+slidingWindow]
10:    insights ← ExtractInsights(aggregatedDataWindow), as in App. A.2.2
11:    insightList.add(insights)
12:  end for
13: end for
14: Return TopInsights(insightList), using the prompt in App. A.2.3
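A corresponding minimal sketch of Algorithm 2 is given below. Again, the callables (propose_aggregation, aggregate_data, extract_insights) are hypothetical stand-ins for the prompt-based steps described above.

from typing import Any, Callable

def aggregator_agent(raw_data: Any, n_aggregations: int, general_goal: str,
                     data_columns: list, data_stats: dict, window: int,
                     propose_aggregation: Callable, aggregate_data: Callable,
                     extract_insights: Callable) -> list:
    # Stage 1: let the LLM propose aggregations (views) of the data.
    views = []
    for _ in range(n_aggregations):
        spec = propose_aggregation(raw_data, general_goal, data_columns, data_stats)
        views.append(aggregate_data(raw_data, spec))
    # Stage 2: 'eyeball' each view through a sliding window, without any code.
    insights: list = []
    for view in views:
        for i in range(0, len(view), window):
            insights.extend(extract_insights(view[i:i + window]))
    return insights  # a final ranking prompt would select the top insights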
5 Experiments
In this section, we assess the ability of our agents to capture flags planted in the Adidas Sales dataset. We first review the experimental protocol and then discuss the results separately for each agent.

5.1 Protocol
Our experiments focus on the three flags presented in Sec. 3. For each flag, we create a separate copy of the original Adidas Sales dataset and apply the corruption that corresponds to planting the flag. Given the potential cost associated with OpenAI API calls, we limit our datasets to 1000 rows by selecting 100 rows for each of the following states: New York, Texas, California, Illinois, Arizona, Alaska, Colorado, Washington, Florida, and Minnesota. We then run each agent on these datasets and instruct them to rank their insights according to relevance, using the prompts shown in App. A. The extracted insights are then evaluated in terms of whether (i) they correspond to planted flags, (ii) the data is factual (not hallucinated), and (iii) the insight is relevant. In what follows, we review and discuss the results for each agent.

5.2 Aggregator Agent Results
Implementation Details: We employed the GPT-3.5 model with a sliding window of size 50 to process the input data, generating 20 aggregations of the data and extracting 5 insights from each window. Subsequently, we utilized the GPT-4 model to rank and refine the final set of insights based on how interesting they are to a data analyst, as shown in Algorithm 2. See App. A.2 for an explanation of the prompts and the variables used in the algorithm.
Relevance: Table 1 presents insights generated by the Aggregator agent, with each insight accompanied by an explanation of its significance. Interestingly, all flags were identified among the top 5 generated insights. For example, the agent explains that it reported Flag 2 because it is surprising that Alaska has the highest sales despite its small population compared to California and New York. It is worth noting that the agent also returns insights that are not part of the planted flags but that appear to be relevant. For example, it reports that the sales of the retailer West Gear nearly match those of Amazon, which is surprising given that the former is lesser known. Finally, we observe that the reported insights were extracted from both aggregated data and the original data, supporting the relevance of considering both kinds of data in the analysis.

Table 1: Insights extracted by the Aggregator agent. The table shows all flag-related insights that were discovered as well as a representative sample of other, non-flag-related insights. Each insight is accompanied by a description of the aggregated view in which it was found, the numerical value of the quantity in question, a generated explanation of its relevance, and how the data was aggregated (which column it was grouped by and which column it was aggregated over).
- Flag 1. Insight: Arizona has an extremely low Operating Margin. Aggregation: grouped by State on Operating Margin. Value: 0.1%. Explanation: this insight is captivating because it demonstrates an extremely low operating margin, indicating very low profitability for this sale.
- Flag 2. Insight: Alaska has the highest sales. Aggregation: grouped by State on Total Sales. Value: $49,473,404. Explanation: despite its small population, Alaska has the highest sales, which is surprising considering the larger populations and economies of states like California and New York.
- Flag 3. Insight: Kohl's has unusually high units sold. Aggregation: none (raw data). Value: 8,000,000. Explanation: this is the highest number of units sold, which is surprising given the average units sold in other rows.
- Other. Insight: West Gear, a lesser-known retailer, has high sales. Aggregation: grouped by Retailer on Units Sold. Value: 715,900. Explanation: it is surprising that a lesser-known retailer like West Gear has such high total sales, nearly matching Amazon.
- Other. Insight: Kohl's has the highest Units Sold. Aggregation: none (raw data). Value: 7,500. Explanation: this insight is the most interesting as it highlights the highest number of units sold, indicating a very popular product.
- Other. Insight: Men's Street Footwear has the highest Total Sales. Aggregation: none (raw data). Value: $732,000. Explanation: this insight is intriguing because it represents the highest total sales value, indicating a very successful sale.
- Other. Insight: Foot Locker has the highest Operating Margin seen. Aggregation: none (raw data). Value: 65%. Explanation: this insight is interesting as it reveals the highest operating margin in the data, suggesting a very profitable sale.

Consistency: We found that when we prompt the agent to be specific about which columns, values, and row numbers it bases its insights on (as demonstrated in the prompts in App. A.2), the results can easily be verified after they are generated. We can automatically cross-check the results against our dataset programmatically by asserting that the values at the cited rows and columns match, as sketched below. As a result, the insights we get (as shown in Table 1) are always correct in terms of the reported data values. However, some non-numerical insights, like “Kohl's has the highest number of sales”, are more difficult to validate and may not hold true in the data. Therefore, we plan to investigate LLM-based methods that generate code to specifically verify such claims in future work.
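A minimal sketch of this cross-checking step follows; it assumes each insight carries hypothetical 'row', 'column', and 'value' fields citing where it was found.

import pandas as pd

def verify_numeric_insights(df: pd.DataFrame, insights: list) -> list:
    # Keep only insights whose cited value matches the dataset.
    verified = []
    for insight in insights:
        actual = df.loc[insight['row'], insight['column']]
        if actual == insight['value']:
            verified.append(insight)
    return verified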
Limitations: There are a couple of important limitations to this agent. First, using the window approach with large amounts of data leads to many LLM queries, which can be quite expensive. Hence, we opted for GPT-3.5 instead of the better, but more expensive, GPT-4. This means that our current agent may not be using this suite of models to its full potential. Second, our approach tends to provide insights that are local in view, making it difficult to see big trends or broader patterns in the data. In other words, it is challenging to get a high-level view with our bottom-up method, which could potentially be addressed by the Explorer agent.
Future Work: While our research has shown promising initial results in extracting useful insights from data, there is room for improvement in achieving deeper insights. We intend to experiment with other language models, like GPT-4 [OpenAI, 2023a] and Llama 2 [Touvron et al., 2023], as well as explore different ways to prompt the agent to perform data aggregation. These explorations could lead to methods that are crucial for extracting more interesting insights. We currently prompt the agent to use first- and second-order dataset statistics for aggregating the data but, in some cases, higher-order statistics could be necessary to identify interesting insights. Another important aspect is to investigate other forms of aggregation, such as computing correlations between columns instead of grouping them based on different first-level statistics (illustrated in the sketch below). Further, there are many hyperparameters that need to be explored, such as how many aggregations are needed, the sliding-window size, and the number of insights to be extracted from each portion of the data. We intend to address these limitations in future work; for now, our current work serves as a thought-provoking proof of concept, offering valuable ideas and discussion points.
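For illustration, the two forms of aggregation differ as follows in pandas (file and column names are again hypothetical):

import pandas as pd

df = pd.read_csv('adidas_sales_flagged.csv')

# Current approach: group-by views built from first-order statistics.
margin_by_state = df.groupby('State')['Operating Margin'].mean()
units_by_retailer = df.groupby('Retailer')['Units Sold'].sum()

# Possible alternative: correlations between numeric columns as a view.
correlations = df[['Units Sold', 'Total Sales', 'Operating Margin']].corr()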
Table 2: Insights extracted by the Explorer agent. Each insight is accompanied by the question that led to its discovery and a generated explanation of its relevance. Insights corresponding to planted flags are shown first, followed by other, non-flag-related insights.
- Flag 1. Question: What is the relationship between operating margin and sales method for each retailer in 2021? Insight: Kohl's had a very low average operating margin for online sales (0.001%), but a higher average operating margin for outlet sales (35.63%). Explanation: the almost non-existent operating margin for Kohl's online sales, when juxtaposed with its outlet sales, is surprising; retailers typically achieve significant margins through online sales due to reduced overheads compared to physical stores.
- Flag 2. Question: What are the top 5 cities in terms of sales revenue for 2021? Insight: Anchorage is the top city in terms of sales revenue for 2021. Explanation: Anchorage being at the top in sales revenue is unexpected, given that it is a less populated city compared to other major U.S. cities. One would expect cities with higher populations and larger economies, like San Francisco or Chicago, to lead in sales revenue.
- Flag 3. The agent failed to capture the flag.
- Other. Question: What is the average units sold per transaction for each sales method (Outlet, Online, In-store) in 2021? Insight: In-store and Outlet sales methods have a significantly higher average units sold per transaction compared to Online. Explanation: in today's digital age, with the rise of e-commerce platforms, one might expect online sales to have higher transactions; however, the data suggests that physical outlets still have robust sales.
- Other. Question: Are there any seasonal trends in sales for specific product categories in 2021? Insight: All product categories have the highest sales in January. Explanation: while some January sales might be expected due to post-holiday clearances, it is surprising to see all categories peak during this month, suggesting a broad trend or seasonality.
- Other. Question: How did the sales of each retailer evolve month by month in 2021? Insight: Sports Direct had no sales data available for any month in 2021. Explanation: while not shocking, it is noteworthy that there is no data for a retailer for the entire year; this could indicate a data issue or that the retailer was not operational.
- Other. Question: What is the relationship between operating margin and total sales for each product category in 2021? Insight: There is no strong correlation between operating margin and total sales for product categories in 2021. Explanation: a business might aim for a balance between sales volume and profit margins; the absence of a strong correlation indicates that some categories might be focusing on volume, while others prioritize profitability.

5.3 Explorer Agent Results
Implementation Details: We employed the GPT-3.5 model to generate the outputs for our prompts and set the number of rounds of question refinement (nRounds in Algorithm 1) to 3. We set the high-level goal generalGoal to “I want a general overview of the sales for 2021.” and the dataContext to “This is a dataset of sales transactions”, both of which are included in the prompts to help generate the questions and the code (see App. A.1). We ask the model to generate 10 questions about the data in each round. The dataSchema is the description of each column in rawData, a CSV file in which each row represents a single sales record.
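Under these settings, a run could be configured roughly as follows; the variable names mirror Algorithm 1 and are illustrative only.

import pandas as pd

raw_data = pd.read_csv('adidas_sales_flagged.csv')  # hypothetical file name
general_goal = 'I want a general overview of the sales for 2021.'
data_context = 'This is a dataset of sales transactions'
n_rounds = 3              # rounds of question refinement (nRounds in Algorithm 1)
questions_per_round = 10  # questions requested from the model in each round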
Relevance: Table 2 presents insights extracted by the Explorer agent, categorized into flag-related and non-flag-related insights. These insights are highly relevant to the user as they address specific questions, such as the relationship between operating margin and sales method, the top-performing cities in terms of sales revenue, and seasonal trends in sales for product categories in 2021. The flag-related insights (Flags 1 and 2) highlight unexpected findings, such as Kohl's low operating margin for online sales and Anchorage (Alaska) leading in sales revenue despite its smaller population. The non-flag-related insights also provide valuable information, such as the higher average units sold per transaction in physical stores and outlets compared to online sales, and the lack of sales data for Sports Direct throughout 2021. Additionally, the absence of a strong correlation between operating margin and total sales for product categories suggests varying business strategies among retailers. These insights collectively offer valuable information for informed decision-making and analysis.
Consistency: All values extracted by the Explorer agent in Table 2 are derived directly from the output of the code generated to address each specific question, ensuring their factual basis. However, it is important to note that, similarly to the Aggregator agent, the justifications themselves may require further verification by a critic (which could be a human or an efficient LLM-based method). These justifications are judgments made by the model and should be subject to scrutiny and validation to ensure their accuracy and reliability.
Limitations: The Explorer agent has certain limitations in its approach. It begins by asking high-level questions to identify patterns in the data, which may inadvertently overlook crucial flags that would interest the data analyst. For example, Flag 3, which consists of an event that happened over a very short period, is overlooked by this agent. This likely occurs due to its tendency to ask high-level questions rather than digging into low-level details. Furthermore, it does not consider local views of the data, potentially missing deviations from expected norms, an aspect that the Aggregator agent could potentially address. Another limitation is that the Explorer agent might generate code that is not executable (due to bugs, for example). Addressing this requires additional OpenAI API calls showing the original prompt along with the execution error in order to regenerate the code with the bugs fixed. However, generating executable code is not guaranteed, forcing the agent to skip the question altogether. These limitations highlight the need for a more comprehensive and context-aware approach to data analysis and code generation.
Future Work: We plan to conduct more experiments with the Explorer agent to investigate how it behaves when using different large language models, such as GPT-4, StarCoder [Li et al., 2023b], and Llama 2 [Touvron et al., 2023], and when generating the questions and code from different prompts. We plan to evaluate how changing hyperparameters, such as the number of questions and the number of question-refinement rounds, impacts the results. We also plan to compare different contexts and high-level goals set in the prompts with respect to the quality of the generated insights." |
| } |
| ] |
| } |