AcademicEval / title_30K /test_title_long_2404.16660v1.json
{
"url": "http://arxiv.org/abs/2404.16660v1",
"title": "Benchmarking Mobile Device Control Agents across Diverse Configurations",
"abstract": "Developing autonomous agents for mobile devices can significantly enhance\nuser interactions by offering increased efficiency and accessibility. However,\ndespite the growing interest in mobile device control agents, the absence of a\ncommonly adopted benchmark makes it challenging to quantify scientific progress\nin this area. In this work, we introduce B-MoCA: a novel benchmark designed\nspecifically for evaluating mobile device control agents. To create a realistic\nbenchmark, we develop B-MoCA based on the Android operating system and define\n60 common daily tasks. Importantly, we incorporate a randomization feature that\nchanges various aspects of mobile devices, including user interface layouts and\nlanguage settings, to assess generalization performance. We benchmark diverse\nagents, including agents employing large language models (LLMs) or multi-modal\nLLMs as well as agents trained from scratch using human expert demonstrations.\nWhile these agents demonstrate proficiency in executing straightforward tasks,\ntheir poor performance on complex tasks highlights significant opportunities\nfor future research to enhance their effectiveness. Our source code is publicly\navailable at https://b-moca.github.io.",
"authors": "Juyong Lee, Taywon Min, Minyong An, Changyeon Kim, Kimin Lee",
"published": "2024-04-25",
"updated": "2024-04-25",
"primary_cat": "cs.HC",
"cats": [
"cs.HC",
"cs.AI",
"cs.LG"
],
"label": "Original Paper",
"paper_cat": "LLM AND Agent",
"gt": "Benchmarking Mobile Device Control Agents across Diverse Configurations",
"main_content": "INTRODUCTION Autonomous agents controlling digital devices have great potential benefits. For example, these agents can improve the accessibility of user interactions, especially for users with physical disabilities or those facing challenges in operating devices, or boost productivity by automating tedious jobs. This leads to increased interest in developing agents for mobile device control, and diverse approaches have been introduced, including agents based on large language models (LLMs; Wen et al. 2023; Yan et al. 2023) and agents trained with human demonstrations (Sun et al., 2022; Li et al., 2023), toward assistive agents that can understand the screen layout of the devices and manipulate the user interface (UI) to follow human instructions. Despite recent progress in developing mobile device control agents based on real systems, such as Android emulators (Toyama et al., 2021; Shvo et al., 2021; Zhang et al., 2023), prior works often overlook several important properties. One is testing generalization ability across diverse device configurations, which is crucial in deploying agents in real devices. Moreover, practical tasks essential for life (such as creating an alarm or making emergency calls) are often neglected because of the challenges in defining a wide range of practical tasks with robust success criteria in various device settings. The lack of a unified benchmark encompassing these important properties has impeded scientific progress in this field. In this work, we introduce B-MoCA: a Benchmark designed for evaluating Mobile device Control Agents across diverse configurations, based on Android emulators (see Figure 1). A key feature of B-MoCA is supporting numerous customization to mirror diverse device configurations,including variations in icon placements, sizes, wallpapers, languages, and device types. Utilizing this feature, users can easily create diverse environments with various configurations to evaluate generalization ability. Additionally, we define 60 practical tasks grounded in realistic scenarios, such as opening specific applications, initializing searches over the web, and adjusting device settings. To ensure reliable evaluation, B-MoCA provides rule-based success detectors, which are based on pre-defined task completion criteria. 1 arXiv:2404.16660v1 [cs.HC] 25 Apr 2024 \fICLR 2024 Workshop on Generative Models for Decision Making Mobile Device Environment Daily Tasks Diverse Device Setups Mobile Device Control Agent LLM with Text Action MLLM with Text Action or Algorithmic Designs Vision-Language Model with UI-based Action or Figure 1: Illustration of B-MoCA. We present a realistic benchmark for assessing the performances of mobile device control agents in executing everyday tasks. To analyze generalization ability, we introduce a randomization feature that changes various device attributes. We benchmark agents leveraging LLMs or MLLMs as well as agents with vision-language models trained from scratch. We benchmark various methods for building mobile device control agents in B-MoCA. The baselines include agents employing text-only large language models (LLMs) or multi-modal LLMs (MLLMs), which benefit from extensive knowledge obtained through pre-training. We consider both closedsource models, such as GPT-4 (Achiam et al., 2023) and Gemini (Gemini et al., 2023), and opensource models, such as Llama 2 (Touvron et al., 2023) and Llama 3. 
Additionally, we train agents from scratch that directly interact with device UIs using behavior cloning (BC; Pomerleau 1988). In our experiments, we find that the agents exhibit fundamental skills in mobile device control, such as solving straightforward tasks or completing tasks in training environments. However, they struggle in more challenging scenarios, such as handling more difficult tasks or generalizing to unseen device configurations. Specifically, the agents employing LLMs or MLLMs show high robustness across diverse device configurations, while they fall short on tasks requiring multiple sequential decisions. Agents trained with BC, on the other hand, successfully mimic expert behaviors but lack generalization ability in test environments with unseen device configurations. We study the effect of different design choices for leveraging foundation models, including few-shot learning and visual prompting methods. We also analyze the effect of using pre-trained representation models and of varying the number of training device environments when training agents from scratch. Our extensive analyses reveal the limitations of existing methods in mobile device control, calling for future research. We open-source all source code and relevant materials for easy reproduction of our environments and experiments. We hope B-MoCA helps future researchers identify challenges in building assistive agents and easily compare the efficacy of their methods against prior work.

2 B-MOCA

In this section, we introduce B-MoCA: a benchmark designed to evaluate the performance of mobile device control agents on diverse device configurations in executing common daily tasks.

2.1 DESIGN FACTORS

To create a realistic benchmark for mobile device control agents, we build our benchmark on Android, a widely used open-source operating system. In this benchmark, we frame device control as a sequential decision-making problem, reflecting the multi-step nature of real interactions (Section 2.2).

Designing a meaningful benchmark for mobile device control poses a significant challenge, particularly in defining practical tasks like opening applications or adjusting device settings. To address this, we consider 60 basic tasks that involve commonly used applications like Chrome and Calendar, ensuring relevance to everyday life. Each task is equipped with a success detector to evaluate the agent's performance in accurately completing the task (Section 2.3).

[Figure 2: Examples of home screen images from environments in B-MoCA. The randomized features span icon location, font size, wallpaper, language, and device type, and challenge the generalization ability of agents.]

Given the diverse nature of user mobile device setups, such as variations in icon placements, wallpaper choices, languages, and device types, it is important to test the generalization abilities of device-control agents across diverse setups. To assess generalization performance, we incorporate a randomization feature in our benchmark. This feature is designed to simulate various real-world scenarios by changing various aspects of mobile devices, such as user interface layouts and wallpapers (Section 2.4).

2.2 PROBLEM FORMULATION

In B-MoCA, we formulate the device management task as a sequential decision-making problem, where an agent interacts with an environment.
Formally, given a task instruction $c$, an agent receives an observation $o_t$ and takes an action $a_t$ based on its policy, $a_t \sim \pi(\cdot \mid o_t, c)$, at each timestep $t$. The environment (i.e., the Android emulator) returns a success signal $r_t$ and transitions to the next observation $o_{t+1}$. Observations, which capture the UI elements, can be represented as screen pixels, screen descriptions derived from the Android view hierarchy, or a combination of both. The action space comprises dual-gesture actions, similar to Rawles et al. (2023): each action consists of a pair of (x, y) screen locations for touch and lift. A dual-gesture action is interpreted as tapping the screen when the two locations are identical within a specified threshold, or as swiping the screen when the distance between the two locations exceeds this threshold. Additionally, the agent can press navigation buttons (i.e., back, home, and overview) by touching the corresponding button locations on the screen. We note that our benchmark also supports text-based actions, enabling the utilization of LLMs or MLLMs (see Section 3.1 for details). We refer readers to Appendix A.1 for further details on the environment implementation.

2.3 DAILY TASKS

B-MoCA includes 60 tasks essential for managing digital devices, providing functionalities useful in daily routines. Each task is designed to be grounded in realistic situations, such as setting an alarm or enabling airplane mode. The tasks span various applications and require agents to interact with diverse UI elements, such as application icons, checkboxes, toggle switches, input fields, and sliders. For a comprehensive list of tasks, we refer readers to Appendix B.1.

Task completion is determined by a rule-based success detector implemented using the Android Debug Bridge (ADB). The success detector monitors logs from ADB and identifies successful completion based on pre-defined criteria. These criteria are established by examining ADB logs from human demonstrations of each task and selecting the log produced when the target task is completed. Given the pre-defined criteria, the success detector automatically matches a regular expression against the ADB logs to signal task completion. The success signal takes the value +1 when the task is completed, and 0 otherwise. An episode terminates as a success if the success detector signals completion, or as a failure if the agent exceeds a maximum step limit without meeting the criteria.
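To make the detection mechanism concrete, below is a minimal sketch of such a rule-based detector. The helper name, the example pattern, and reading the log buffer via `adb logcat -d` are our own illustrative assumptions, not the benchmark's exact implementation.

```python
import re
import subprocess

def check_success(log_pattern: str, device_serial: str = 'emulator-5554') -> bool:
    '''Return True if the pre-defined ADB log pattern appears in the log buffer.

    The pattern is derived beforehand from ADB logs recorded while a human
    expert completed the task (e.g., a settings change fired by airplane mode).
    '''
    # Dump the current log buffer without blocking (-d), then scan it.
    proc = subprocess.run(
        ['adb', '-s', device_serial, 'logcat', '-d'],
        capture_output=True, text=True, timeout=10,
    )
    return re.search(log_pattern, proc.stdout) is not None

# Hypothetical usage in an episode loop: the success signal is +1 on a match, else 0.
# pattern = r'airplane_mode.*state=true'   # illustrative, not the benchmark's actual rule
# reward = 1 if check_success(pattern) else 0
```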
[Figure 3: Illustration of baseline agents. LLM agents and MLLM agents interact with environments through an additional XML parser and action converter to obtain text descriptions and manipulate UIs with text actions. VLUI agents directly operate on the UIs with screen images and dual-gesture actions.]

2.4 ENVIRONMENT RANDOMIZATION

In mobile device control, developing agents that can generalize across various device setups is crucial. To evaluate this generalization ability, B-MoCA incorporates a randomization feature that changes icon placements and sizes, wallpapers, languages, and device types. Users can select the device type from a device list that includes popular models like Pixel 3, Pixel 4, Pixel 6, and WGXA Tablet. They can also specify locales to set the language and region, choose wallpapers from a selection of custom images, and activate dark mode for further environmental variation. Moreover, the sizes of icons and text can vary between small, medium, and large. Lastly, applications can be randomly placed on the home screen to simulate real-world usage patterns.

Using these randomization features, we create 45 unique environments in B-MoCA, with examples shown in Figure 2. To assess generalization ability, we divide the 45 distinct environments into two sets: 35 for training and 10 for testing. We employ domain randomization (Tobin et al., 2017) to train agents, enabling them to perform tasks robustly across diverse device configurations. We then evaluate performance on the test environments, which include unseen device setups. A detailed list of the device configurations we prepare is available in Appendix A.2.

3 BASELINES

In this work, we benchmark various approaches for building mobile device control agents: LLM agents, MLLM agents, and Vision-Language-UI (VLUI) agents (see Figure 3). LLM agents and MLLM agents are developed using foundation models, namely LLMs and MLLMs, respectively (Section 3.1). VLUI agents, which consist of vision-language encoders, are trained from scratch using human expert demonstrations (Section 3.2).

3.1 LLM AGENTS AND MLLM AGENTS

Utilizing foundation models such as LLMs and MLLMs, which contain extensive knowledge and exhibit emergent capabilities, has become a major direction in developing mobile device control agents (Wen et al., 2023; Yan et al., 2023). In this work, we benchmark two types of agents that employ different foundation models: LLMs (e.g., GPT-4) and MLLMs (e.g., GPT-4V). LLM agents utilize only text descriptions of the screen layout to generate text actions, while MLLM agents process both text and visual inputs.

To facilitate the interactions of LLM and MLLM agents with an Android emulator, we define an XML parser (Zhang et al., 2023; Yang et al., 2023b). This XML parser converts the UI elements, taken from the Android view hierarchy of the screen presented in XML format, into a list of text descriptions. Each description includes the location of the bounding box, if necessary. Additionally, we define a set of possible action options, as detailed in Table 1, that can be converted into corresponding dual-gesture actions.[1] These action options include tapping a UI element by choosing its numeric tag, swiping the screen in pre-defined directions (up, down, left, right), and pressing a navigation button by its name.

Table 1: The set of action options for text-based agents. Each option is converted into a corresponding dual-gesture action.
  Action option      Description
  dual-gesture(*)    Operate a dual-gesture action with arguments (*).
  tap(numeric tag)   Tap the UI element labeled with the numeric tag.
  swipe(direction)   Swipe in the given direction.
  press(\"HOME\")      Press the home button.
  press(\"BACK\")      Press the back button.
  press(\"OVERVIEW\")  Press the overview button.

[1] To convert text actions to dual-gesture actions, we define an action converter. We analyze the efficacy of the action options in Appendix E.1.
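As an illustration of the action converter mentioned in the footnote, the sketch below maps the options of Table 1 onto dual-gesture (touch, lift) locations. The navigation-button coordinates, swipe geometry, and normalized coordinate convention are illustrative assumptions rather than the benchmark's actual values.

```python
from typing import Dict, Tuple

Point = Tuple[float, float]  # normalized (x, y) screen location

NAV_BUTTONS: Dict[str, Point] = {            # assumed navigation-bar locations
    'HOME': (0.50, 0.97), 'BACK': (0.25, 0.97), 'OVERVIEW': (0.75, 0.97),
}
SWIPES: Dict[str, Tuple[Point, Point]] = {   # assumed start/end points per direction
    'up': ((0.5, 0.7), (0.5, 0.3)), 'down': ((0.5, 0.3), (0.5, 0.7)),
    'left': ((0.8, 0.5), (0.2, 0.5)), 'right': ((0.2, 0.5), (0.8, 0.5)),
}

def to_dual_gesture(option: str, arg,
                    tag_to_bbox: Dict[int, Tuple[Point, Point]]) -> Tuple[Point, Point]:
    '''Convert a text action option into (touch, lift) screen locations.'''
    if option == 'tap':            # tap(numeric tag): touch == lift at the element center
        (x0, y0), (x1, y1) = tag_to_bbox[arg]
        center = ((x0 + x1) / 2, (y0 + y1) / 2)
        return center, center
    if option == 'swipe':          # swipe(direction): distinct touch and lift points
        return SWIPES[arg]
    if option == 'press':          # press('HOME'/'BACK'/'OVERVIEW'): tap that button
        return NAV_BUTTONS[arg], NAV_BUTTONS[arg]
    if option == 'dual-gesture':   # raw (x_touch, y_touch, x_lift, y_lift) arguments
        xt, yt, xl, yl = arg
        return (xt, yt), (xl, yl)
    raise ValueError(f'unknown action option: {option}')
```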
With these text-based observations and actions, we prompt the foundation models with the agent's role, the action space definition, the goal, (optional) few-shot examples, and the current observation. Our prompts, outlined in Figure 4, also incorporate the Chain-of-Thought technique (Wei et al., 2022) to enhance the reasoning ability of the agents by enforcing a certain output format.

Role: You are an agent that is trained to perform daily tasks on digital devices, such as smartphones [...]
Action space: You need to select an action option [...]
Goal: [...]
(Optional) Few-shot examples: [...]
Output format: Your output should follow the given format
• Description: Describe what you observe in the input
• Thought: To complete the given task, what is the next step
• Action: The function call with the correct parameters
Observation: [...]

Figure 4: An overview of the prompt for the text-based agents, with relevant information abbreviated as [...]. The complete prompt is in Appendix C.1.

3.2 VLUI AGENTS

Despite the promising results of LLMs, leveraging these foundation models presents several challenges, such as the necessity of auxiliary interfaces or difficulties in fine-tuning. Thus, we also investigate another type of agent that can be trained from scratch: VLUI agents, named after the vision-language model with UI actions. Characterized by their direct interaction with device UIs in a human-like manner, these agents benefit from the easy incorporation of human demonstrations for training, potentially improving learning efficiency.

In detail, VLUI agents take a task instruction and screen images as input and produce a dual-gesture action as output. Input embeddings are extracted using vision and language encoders, and a transformer (Vaswani et al., 2017) module processes these embeddings to generate the dual-gesture actions. Specifically, we train a deterministic multi-task policy $\pi_\theta(a_t \mid o_t, c)$ using BC (Pomerleau 1988; Schaal 1996). The parameters $\theta$ of the policy are optimized to imitate the human expert demonstrations $D = \{(o_t, a^*_t, c)\}$ by minimizing the following objective with a mean squared error function $L(\cdot)$:

$$\sum_{(o_t, a^*_t, c) \sim D} L(\pi_\theta(a_t \mid o_t, c), a^*_t).$$

We refer readers to Appendix C.2 for more details on the architecture of VLUI agents.
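The sketch below renders this BC objective in PyTorch. The encoder choices, fusion module, and action dimensionality (two (x, y) locations, i.e., 4 values) are illustrative assumptions; the paper's actual architecture is in its Appendix C.2.

```python
import torch
import torch.nn as nn

class VLUIPolicy(nn.Module):
    def __init__(self, vision_encoder: nn.Module, text_encoder: nn.Module, d_model: int = 256):
        super().__init__()
        self.vision_encoder = vision_encoder  # screen image -> (B, d_model)
        self.text_encoder = text_encoder      # instruction  -> (B, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.fuser = nn.TransformerEncoder(layer, num_layers=2)
        self.action_head = nn.Linear(d_model, 4)  # (x_touch, y_touch, x_lift, y_lift)

    def forward(self, obs: torch.Tensor, instr: torch.Tensor) -> torch.Tensor:
        tokens = torch.stack([self.vision_encoder(obs), self.text_encoder(instr)], dim=1)
        fused = self.fuser(tokens).mean(dim=1)
        return self.action_head(fused)

def bc_step(policy, optimizer, obs, instr, expert_action):
    '''One behavior-cloning update: MSE between predicted and expert dual-gestures.'''
    pred = policy(obs, instr)
    loss = nn.functional.mse_loss(pred, expert_action)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```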
4 EXPERIMENTS

We design our experiments to investigate the following research questions:
• Can baseline agents perform daily tasks on mobile devices? (Section 4.2)
• What are the distinctive characteristics of each agent? (Section 4.2)
• What are the effects of different design choices for LLM or MLLM agents? (Section 4.3)
• How crucial is pre-training or training data diversity for VLUI agents? (Section 4.4)

4.1 EXPERIMENTAL SETUP

In our experiments, we evaluate LLM agents, MLLM agents, and VLUI agents on six representative tasks: Airplane, Alarm1, Alarm2, Brightness, Call 911, and Language. These tasks are selected to cover navigating multiple pages in target applications and manipulating diverse UI elements whose configurations vary across device settings. For example, on Alarm2, the agents need to reach the alarm tab in the clock application and adapt to clock UIs whose shape varies between rectangular and circular, with different size options. We display exemplary expert demonstrations for these tasks in Appendix B.2. For each task, the task instruction is as follows:
• Airplane: \"turn on airplane mode\"
• Alarm1: \"turn on alarm at 9 am\"
• Alarm2: \"create an alarm at 10:30 am\"
• Brightness: \"decrease the screen brightness in setting\"
• Call 911: \"call 911\"
• Language: \"go to the 'add a language' page in setting\"

For LLM agents, we employ the closed-source models Gemini-Pro (Gemini et al., 2023) and GPT-4 (GPT-4-0125-preview; Achiam et al. 2023).[2] We study LLM agents in both zero-shot and few-shot settings. For few-shot learning, we sample examples from 210 human expert demonstrations (see Appendix D.1 for dataset collection). For MLLM agents, we leverage Gemini-Pro-V and GPT-4V (GPT-4-vision-preview). We report MLLM agents only in the few-shot setting and investigate visually grounding the agents with Set-of-Mark (SoM) prompting (Yang et al., 2023a). We provide more details on the configurations of LLM and MLLM agents in Appendix C.3.

[2] We include experiments with the open-source models Llama 2 (Touvron et al., 2023), Llama 3, and AgentLM (Zeng et al., 2023) in Appendix E.2.

For VLUI agents, we train multi-task policies where each policy performs all six tasks. The policies are trained with BC using the 210 human expert demonstrations.[3] We refer readers to Appendix C.4 for more details on the training procedures of VLUI agents.

[3] We also include experimental results of VLUI agents trained with offline reinforcement learning, employing the success signals as rewards, in Appendix E.3.

For each evaluation, we measure the success rates of the agents in the 10 test environments and compute the average success rate. Success rates are computed automatically by the rule-based success detector. We report the mean and standard error across three different runs.
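A minimal sketch of this evaluation protocol follows: average success over the 10 test environments, then mean and standard error over three runs. `run_episode` is a hypothetical helper that rolls out one episode and returns the rule-based success detector's 0/1 signal.

```python
import statistics

def evaluate(agent, test_envs, num_runs: int = 3):
    per_run = []
    for seed in range(num_runs):
        successes = [run_episode(agent, env, seed=seed) for env in test_envs]
        per_run.append(100.0 * sum(successes) / len(successes))  # success rate in %
    mean = statistics.mean(per_run)
    sem = statistics.stdev(per_run) / (len(per_run) ** 0.5)      # standard error
    return mean, sem
```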
4.2 MAIN RESULTS

Figure 5 shows the success rates of LLM agents, MLLM agents, and VLUI agents in the test environments. LLM agents and MLLM agents utilize their pre-trained knowledge and few-shot examples to complete simple tasks with high performance (e.g., more than 70% on Airplane and Brightness with GPT-4 or GPT-4V), but their success rates drop significantly as tasks become complex (e.g., less than 30% on Alarm2 even with GPT-4 or GPT-4V). VLUI agents, on the other hand, imitate the behaviors of experts and exhibit average success rates higher than 50% on all tasks, except for 47% on Alarm2. However, all methods still show low performance (less than 60%) on complex tasks (i.e., Alarm2 and Call 911), which calls for new algorithms. We provide more remarks on each agent type below.

[Figure 5: Average success rates of the baseline agents on the six tasks in the test environments. We report the mean and standard error across three runs. LLM agents use three-shot learning; MLLM agents run without SoM prompting, with one-shot learning for Gemini-Pro-V (due to maximum context length) and three-shot learning for GPT-4V. The text-based agents with GPT-4 or GPT-4V show the best performances on Airplane, Alarm1, and Brightness, while VLUI agents show better performances on Alarm2, Call 911, and Language.]

[Figure 6: The common failure modes of the agents. (a) LLM agents fail to complete sequential steps, (b) MLLM agents miss details in the images, and (c) VLUI agents tap the wrong icon locations.]

Table 2: Success rates (%) of text-based agents with different prompting methods. While few-shot examples help LLM agents with GPT-4, we observe no significant gain from SoM prompting for MLLM agents with GPT-4V.
              LLM (zero-shot)   LLM (few-shot)   MLLM (w/o SoM)   MLLM (w/ SoM)
  Airplane        53 ± 03          73 ± 12          80 ± 06         83 ± 03
  Alarm1          42 ± 13          67 ± 03          60 ± 15         62 ± 09
  Alarm2          00 ± 00          00 ± 00          23 ± 03         17 ± 03
  Brightness      73 ± 12          73 ± 09          87 ± 03         83 ± 03
  Call 911        00 ± 00          03 ± 03          53 ± 03         33 ± 09
  Language        27 ± 06          43 ± 09          43 ± 09         47 ± 17

Robustness of LLM agents and MLLM agents. Both types of agents employing foundation models show robust performance across diverse device configurations. It is straightforward that these agents are robust to randomization of visual appearance, such as icon locations or font size, as the locations of the UI elements are described in the Android view hierarchy. In addition, LLM agents with both Gemini-Pro and GPT-4 are robust to language changes, with descriptions of UI elements in different languages. In particular, these agents generalize to languages in the test environments, e.g., Korean and Egyptian Arabic, which are not included in the few-shot examples.

Remaining challenges for LLM agents. While exhibiting robust performance across diverse device settings, LLM agents show several limitations. First, the agents face difficulties with long-horizon tasks, which require completing a precise sequence of multiple actions. For example, on Call 911, the agents often make mistakes while typing the sequence 9-1-1, as shown in Figure 6(a). Second, the agents struggle to leverage few-shot examples adaptively. For instance, on Brightness, we observe LLM agents naively copying the few-shot examples from different device configurations without adjusting them to the current environment.

Efficacy of multi-modal input for MLLM agents. We confirm the effectiveness of image input for MLLM agents employing GPT-4V, as large increases in success rates are observed on Alarm2 and Call 911 compared to LLM agents with GPT-4. However, MLLM agents share the challenges of LLM agents in accurately executing complex tasks. Moreover, they still fall short in understanding details of the visual input, such as the small interface for setting AM/PM on Alarm2, as shown in Figure 6(b). MLLM agents with Gemini-Pro-V show significantly lower performance than LLM agents with Gemini-Pro, presumably due to the longer context length of multi-modal inputs. These results indicate the remaining headroom in leveraging multi-modal inputs more efficiently.

Generalization ability of VLUI agents. We observe that training VLUI agents with BC can lead to high performance on many complex tasks where MLLM agents fail.
These agents are robust to unseen wallpapers, as they are trained with multiple different background images. They can also generalize their actions to unseen devices, e.g., Pixel 4, even though they are trained on only a single device type, i.e., Pixel 3. However, VLUI agents begin to fail on tasks with severe visual changes induced by unseen device configurations. While they exhibit success rates higher than 90% in training environments, performance degrades to less than 70% in test environments (see Appendix D.2 for more details). In particular, they struggle to handle unseen locations of UI elements, as shown in Figure 6(c). We believe these findings reveal the importance of diversity in the training data from randomized environments (see Section 4.4 for more discussion).

4.3 INVESTIGATIONS ON DESIGN CHOICES FOR LLM AGENTS AND MLLM AGENTS

The performance of LLM agents relies heavily on how the input prompts are tailored. For example, describing the screen layout using only the leaf UI elements of the Android view hierarchy, as in prior work (Li et al., 2020; Yang et al., 2023b), can result in meaningless descriptions in certain applications (e.g., the settings application on Airplane and Language). In this work, we leverage the text attributes of all available nodes to avoid such collapse, while we believe there may be simpler yet more expressive representation methods (see the sketch after this subsection).

In addition, we observe that few-shot examples can significantly improve the performance of LLM agents with GPT-4 compared to the zero-shot case. As shown in Table 2, equipping the prompt with few-shot examples improves performance from 42% to 67% on Alarm1 and from 27% to 43% on Language. However, employing few-shot examples does not always help, as shown on Alarm2 and Brightness. We note that naive exploitation of expert demonstrations can lead to excessive increases in computational cost, highlighting the necessity of efficient few-shot prompting.

Moreover, we investigate the effect of common visual prompting methods for MLLM agents with GPT-4V. To enhance the visual grounding ability of MLLMs, prior studies (Yan et al., 2023; Yang et al., 2023b) have actively adopted SoM prompting, where each UI element in the input image is marked with a numeric tag. However, we find that SoM prompting can significantly degrade the performance of MLLM agents on Alarm2 and Call 911, as shown in Table 2. We hypothesize that the numeric tags may cause confusion when overlaid on UI elements with numbers, such as dial buttons or clock interfaces. For examples of the inputs used in SoM prompting, see Appendix D.3.
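Below is a minimal sketch of turning an Android view-hierarchy XML dump into a list of text descriptions, keeping the text attributes of all nodes rather than only leaf elements, in the spirit of the discussion above. The attribute names follow the common `uiautomator dump` format; the exact filtering and tagging scheme here is our own illustrative assumption.

```python
import xml.etree.ElementTree as ET

def describe_screen(xml_dump: str) -> list[str]:
    '''Convert a view-hierarchy XML dump into numbered text descriptions.'''
    descriptions = []
    for i, node in enumerate(ET.fromstring(xml_dump).iter('node')):
        text = node.get('text', '') or node.get('content-desc', '')
        if not text:
            continue  # keep only nodes carrying a textual attribute
        desc = f"[{i}] {node.get('class', 'view')}: '{text}'"
        if node.get('clickable') == 'true':
            desc += f" (clickable, bounds={node.get('bounds')})"
        descriptions.append(desc)
    return descriptions
```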
4.4 EFFECTS OF PRE-TRAINED ENCODERS AND DATA DIVERSITY FOR VLUI AGENTS

The main challenge for VLUI agents is their lack of generalization ability, as discussed in Section 4.2. Hence, we examine different algorithmic designs for the representation model of VLUI agents and the effects of training diversity on performance robustness. We also include an additional experiment with varying model sizes of visual encoders in Appendix E.4.

First, we compare VLUI agents under two different designs: visual encoders with randomly initialized parameters and visual encoders pre-trained on ImageNet (Krizhevsky et al., 2017). As shown in Figure 7, we observe significant improvements in success rates with pre-training, e.g., from 37% to 63% on Language. These results demonstrate the benefit of employing pre-trained representation models, and we expect further improvements could be induced by leveraging more Android-specific images for pre-training (Sun et al., 2022; Rawles et al., 2023).

[Figure 7: Success rates of VLUI agents with visual encoders randomly initialized or pre-trained. Pre-training helps the performance of the agents.]

Furthermore, we train VLUI agents while progressively increasing the number of training environments (see Appendix D.1 for more details of the experimental setting). As shown in Figure 8, the performance of VLUI agents escalates as the number of training environments increases. Specifically, the agents exhibit success rates of 20%, 40%, and 63% on Language with 7, 21, and 35 training environments, respectively. We believe this verifies the efficacy of the environment randomization feature incorporated in our benchmark toward more practical agents.

[Figure 8: Success rates of VLUI agents with varying numbers of training environments. The success rates escalate with more environments.]

5 RELATED WORK

Foundation models for decision-making systems. Inspired by the strong emergent properties of foundation models (Brown et al., 2020; Wei et al., 2022), many studies have adopted LLMs to develop decision-making systems (Yao et al., 2023; Shinn et al., 2023). In robot learning, for example, LLMs have been widely employed for reasoning, planning, manipulation, and navigation (Driess et al., 2023; Liang et al., 2023; Huang et al., 2023). Furthermore, agents with LLMs have shown capabilities of performing interesting tasks in numerous simulated worlds, including game environments (Wang et al., 2023; Tan et al., 2024) and virtual reality (Qian et al., 2023; Yang et al., 2024). More recently, with a focus on practicality, solving computer tasks with foundation models has also been actively explored (Nakano et al., 2021; Furuta et al., 2023). We further study the abilities of foundation models to control mobile devices toward assistive agents in real life.

Developing assistive agents for device control. For agents that effectively understand and manipulate UI elements, a large body of work has leveraged structural information, such as the document object model in HTML or the Android view hierarchy (Branavan et al., 2010; Gur et al., 2019). In addition, methods for equipping agents with the ability to understand information-rich screen images have been widely investigated, mainly with vision-based reinforcement learning (Liu et al., 2018; Humphreys et al., 2022; Shaw et al., 2023). Recently, diverse strategies for building device control agents with foundation models have been introduced, including prompting methods (Wen et al., 2023; Kim et al., 2023), instruction-tuning (Furuta et al., 2023), fine-tuning with images (Zhan & Zhang, 2023; Hong et al., 2023), and visual prompting (Yan et al., 2023; Yang et al., 2023b). Here, we present an elaborate analysis of the main methods for building mobile device control agents.

Benchmarks for decision-making agents. There have been continuous efforts to build reliable benchmarks for sequential decision-making in video games (Bellemare et al., 2013), locomotion (Brockman et al., 2016), and robotic manipulation (James et al., 2020).
Lately, researchers have proposed benchmarks for solving device control tasks, viewing it as another decision-making problem. For example, Yao et al. (2022) and Zhou et al. (2024) have presented benchmarks simulating web platforms, while Toyama et al. (2021), Shvo et al. (2021), and Zhang et al. (2023) have suggested RL environments adopting Android emulators. In this work, inspired by special-purpose benchmarks quantifying the robustness of agents (Cobbe et al., 2020; Stone et al., 2021), we propose a new benchmark with a randomization feature.

6 DISCUSSION & CONCLUSION

We present B-MoCA, a new benchmark designed for evaluating mobile device control agents. Our benchmark provides diverse tasks applicable to everyday routines and environments that simulate numerous device configurations. We conduct extensive experiments and demonstrate that B-MoCA can serve as a standardized platform for developing different types of agents in a unified setting. Finally, we mention several limitations and promising future directions of this work:
• Tasks with text typing: While we define the action space with dual-gesture actions, text typing by touching the soft keyboard demands excessively long interactions. In the future, we plan to include tasks requiring text typing, such as web search or e-mail sending, with advanced interfaces.
• Open-ended tasks and reward modeling: Since the ADB-based success detector does not capture the semantics of agent behaviors, tasks with ambiguous success criteria are hard to evaluate. Alternatively, we believe a reward model learned from demonstrations (Fan et al., 2022) could be used to integrate open-ended tasks.
• More on LLM agents: Foundation models can be employed in different ways, such as using them as high-level planners operating a set of pre-defined APIs (Chen & Li, 2024) or neural network policies (Ahn et al., 2022) as low-level actors. Also, as training VLUI agents with demonstrations results in high performance, fine-tuning LLMs is highly promising.

Toward practical mobile device control agents, we hope that B-MoCA stands as a valuable platform with helpful resources for future innovations.

IMPACT STATEMENT

This study proposes a benchmark designed to assess interactive mobile device management agents, which offers social opportunities to enhance user accessibility and aid those facing disabilities. We caution users about privacy concerns, although we aim to eliminate such risks in our task designs. Noting the importance of research on preventing malicious usage of device control agents, we emphasize B-MoCA as a useful test bed.

ACKNOWLEDGMENTS

We thank Dongjun Lee, Kyuyoung Kim, and Ahjeong Seo for providing sincere suggestions for improving our work. This work was supported by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-00075 Artificial Intelligence Graduate School Program (KAIST)).",
"additional_info": [
{
"url": "http://arxiv.org/abs/2402.04049v1",
"title": "Systematic Biases in LLM Simulations of Debates",
"abstract": "Recent advancements in natural language processing, especially the emergence\nof Large Language Models (LLMs), have opened exciting possibilities for\nconstructing computational simulations designed to replicate human behavior\naccurately. However, LLMs are complex statistical learners without\nstraightforward deductive rules, making them prone to unexpected behaviors. In\nthis study, we highlight the limitations of LLMs in simulating human\ninteractions, particularly focusing on LLMs' ability to simulate political\ndebates. Our findings indicate a tendency for LLM agents to conform to the\nmodel's inherent social biases despite being directed to debate from certain\npolitical perspectives. This tendency results in behavioral patterns that seem\nto deviate from well-established social dynamics among humans. We reinforce\nthese observations using an automatic self-fine-tuning method, which enables us\nto manipulate the biases within the LLM and demonstrate that agents\nsubsequently align with the altered biases. These results underscore the need\nfor further research to develop methods that help agents overcome these biases,\na critical step toward creating more realistic simulations.",
"authors": "Amir Taubenfeld, Yaniv Dover, Roi Reichart, Ariel Goldstein",
"published": "2024-02-06",
"updated": "2024-02-06",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"label": "Original Paper",
"paper_cat": "LLM AND Agent",
"gt": "Systematic Biases in LLM Simulations of Debates",
"main_content": "Introduction Computational simulations designed to accurately replicate human behavior hold the potential to transform multiple disciplines, such as human psychology (Dillion et al., 2023), social dynamics (Park et al., 2022), and economics (Horton, 2023; Chen et al., 2023). If sufficiently reliable, these simulations could serve as invaluable tools for exploring the intricacies of human interactions and decisionmaking processes. This progress would allow scientists to conduct their research with exceptional speed and efficiency, substantially lowering the considerable resources usually needed for recruiting and analyzing human subjects (Ditto et al., 2019). The emergence of Large Language Models (Brown et al., 2020; Jiang et al., 2023), have opened up exciting possibilities for constructing such reliable simulations (Park et al., 2023; Qian et al., 2023). LLMs possess the remarkable ability to engage in role-playing, seamlessly adopting the personas of different characters (Shanahan et al., 2023; Argyle et al., 2023). The typical paradigm for such simulations involves selecting an LLM, such as the widely used ChatGPT (Milmo, 2023), as a base model and crafting individual agents\u2019 identities through natural language prompts. For instance, by prepending the prompt, \"John Lin is a pharmacy shopkeeper,\" to an agent\u2019s context, the agent is expected to act as if his name is John and he works as a shopkeeper (Park et al., 2023). However, LLMs are complex statistical learners that do not depend on straightforward deductive rules. Despite exhibiting impressive emerging skills that challenge our current understanding of cognition (Wei et al., 2022; Bubeck et al., 2023), their indeterminate nature leaves them susceptible to exhibiting unintended behaviors. An example of such behavior is their manifestation of inherent biases, including gender bias (Bordia and Bowman, 2019), ethnic bias (Ahn and Oh, 2021), and social identity bias (Hu et al., 2023). Given their undefined nature, it is vital to exercise caution when using LLMs, particularly in multi-agent environments aimed at simulating complex, large-scale social phenomena. In this study, we explore the behavior of LLM agents within simulations. Our experiments are focused on the realm of Attitude Change (Kahan arXiv:2402.04049v1 [cs.CL] 6 Feb 2024 \fet al., 2012; Priniski and Horne, 2018) and specifically on the extensively studied interactions between liberals and conservatives (Hobolt et al., 2023; Sunstein, 2001). This domain is susceptible to numerous prejudices (Ditto et al., 2019), making it an ideal candidate for investigating the effect of LLM biases on simulations. We facilitate debates on polarizing American topics between LLM agents representing Republican and Democrat perspectives. The agents\u2019 attitude is monitored via surveys, and their believability is evaluated by contrasting their behaviors with known human interactions (Hobolt et al., 2023). In addition, we have developed an automatic finetuning process for agents, utilizing self-generated training data. We use this process to conduct controlled intervention studies, by manipulating the LLM biases and analyzing the subsequent impact on the agents\u2019 behaviors. Our results reveal that LLM agents generally conform to the inherent social biases of their base models, even if these biases conflict with their assigned identities. Consequently, this causes the simulations to diverge from well-established human social behaviors. 
Moreover, when we employ our fine-tuning method to change the LLM's viewpoint, we observe that the agents, despite retaining their original contexts, modify their behavior to be in line with the newly introduced bias. These insights underline the need to investigate ways to help agents circumvent these biases, a crucial step in developing simulations that more accurately reflect real human behavior.

2 Related Work

Believable LLM Simulations. Recent studies show that LLMs possess human-like reasoning skills (Chen et al., 2023) and are able to adopt the personas of diverse characters (Shanahan et al., 2023). Leveraging these abilities, Park et al. (2023) developed a sandbox environment, populated it with 25 LLM-based agents, and showed that the agents convincingly mimic human behaviors such as sharing news and forming relationships. The transformative potential of such simulations in areas like human psychology (Dillion et al., 2023) and economics (Horton, 2023) was a significant motivator for our work. Nonetheless, our findings indicate that inherent biases in LLMs pose substantial challenges to ensuring that agents reliably generate believable human behavior.

Bias in LLM Simulation. In contemporaneous work, Chuang et al. (2023) showed that \"LLM agents tend to converge towards scientifically accurate information\", attributing this to the LLM's inherent biases. We generalize this observation by demonstrating that LLM agents converge toward the model's inherent bias regardless of its scientific validity. This is true for biases on purely subjective topics, and even for those contradicting scientific truths such as the reality of Climate Change (Arias et al., 2021). Moreover, beyond observing the debates and drawing conclusions, we also offer a controlled intervention study utilizing our unique self-fine-tuning process. This study further substantiates our assertions and shows that it is possible to control the agents' convergence point by fine-tuning the underlying model. Additionally, we employ our simulation methodology to reproduce this phenomenon in various settings, enabling a deeper and wider analysis of the underlying mechanisms.

Self Alignment. In recent years, the task of aligning LLMs with human intentions has become a significant area of research (Ouyang et al., 2022; Wang et al., 2023). The primary objective of alignment research is to enhance the conversational abilities of LLMs and ensure their conformity with established social values (Gabriel, 2020; Oviedo-Trespalacios et al., 2023). An evolving trend in this area involves developing methods that use LLM simulations to generate training data automatically, aiming to reduce the need for expensive human feedback (Liu et al., 2023; Ulmer et al., 2024). In our work, we introduce an approach to self-fine-tuning of LLMs that takes a distinct path from existing methodologies. Rather than enhancing the LLM's general conversational capabilities or aligning it with broader human objectives, our focus is on tailoring the LLM to adopt a specific political stance. We interview the agents using a set of questions crafted to elicit their political views and utilize their responses to train the underlying LLM. In terms of assessment, our interest lies not in evaluating the effectiveness of the fine-tuning on standard NLP benchmarks, but in observing its impact on the agents within our simulation.
3 Problem Definition

Our study delves into the impact of inherent biases within LLMs on their ability to accurately emulate diverse characters (Shanahan et al., 2023). We explore this relationship by facilitating political debates between LLM agents. Section 4 outlines our simulation methodology, including the criteria for selecting debate topics (4.1), how we crafted the agents' identities (4.2), and the techniques for managing and evaluating interactions between the agents (4.3). Section 5 introduces a novel automatic fine-tuning technique for agents, utilizing self-created training data. We developed this method to adeptly adjust the LLM's perspective, and it is applied in the controlled intervention experiments discussed in this research. In Section 6 we present the primary findings of our work. Through a sequence of experiments, we establish a strong connection between the inherent biases of LLMs and the patterns of attitude change observed in our simulations. Lastly, Section 7 offers a complementary analysis aimed at evaluating and enhancing the robustness of our fine-tuning process against standard benchmarks.

4 Setup

4.1 Topics Selection

Exploring the dynamics of meaningful discussion requires a conscientious choice of discussion subjects. Our experiments involve debates between liberal and conservative partisans. We chose this domain for two main reasons. Firstly, this field is extensively studied in social science (Ditto et al., 2019; Hobolt et al., 2023), providing a robust baseline for comparison with our simulations. Secondly, the field is susceptible to numerous prejudices, making it a particularly suitable context for examining the biases inherent in LLMs (Ditto et al., 2019). The Pew Research Center conducted a survey in 2023 about the differences between Republicans and Democrats in the assessment of America's problems (Doherty et al., 2023). When analyzing their results, four subjects stand out as the most controversial: Gun Violence, Racism, Climate Change, and Illegal Immigration. We focus our experiments on these four topics.

4.2 LLM-based Agents Implementation

We followed the conventional paradigm for LLM-based simulations (Park et al., 2023; Qian et al., 2023), which entails selecting a base language model and then constructing the individual identities of agents using natural language prompts. We automatically generated context prompts for two political groups of partisan agents. The first group consists of agents embodying passionate Republican views, while the second group represents passionate Democrats. We used the LLM to craft narratives related to the four debate topics, as illustrated in Figure 1. Our approach was to develop comprehensive identities for each agent across all topics simultaneously, rather than creating an individual agent for each topic. This strategy simplified our experimental design and provided a more complete representation of each agent. Forty agents were generated for each group, using the same static prompt and a temperature setting of 1.0. Each agent was assigned a different name. Additionally, in some simulations, we included a \"default\" agent whose sole directive was \"You are an American\". This agent's context was deliberately devoid of any political bias, serving to showcase the inherent biases within the LLM. We experiment with three different state-of-the-art LLMs as our base models: Mistral 7B (Jiang et al., 2023), Solar 10.7B (Kim et al., 2023), and Instruct-GPT (Ouyang et al., 2022).
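A minimal sketch of this agent-creation step follows: one static template per political group, with the LLM sampling distinct background stories at temperature 1.0. The `generate(prompt, temperature)` callable stands in for a call to the chosen base model (e.g., OpenAI's Completion API); the template wording mirrors Figure 1(a) but is otherwise our own illustration.

```python
from typing import Callable

def make_agents(template: str, names: list[str],
                generate: Callable[[str, float], str]) -> list[dict]:
    '''Sample one persona per name from the same static group template.'''
    agents = []
    for name in names:
        story = generate(template, 1.0)  # temperature 1.0 yields a distinct persona per call
        agents.append({'name': name, 'story': f'Your name is {name}. {story}'})
    return agents

# The bias-revealing baseline: a 'default' agent with no political context.
default_agent = {'name': 'Default', 'story': 'You are an American.'}
```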
Across all models, we observed similar results. The open-source models, Mistral and Solar, were deployed on a single RTX 3090ti graphics card, utilizing 8-bit quantization for efficiency. For Instruct-GPT, we used the gpt-3.5-turbo-instruct version available through OpenAI's Completion API. The results and methodologies discussed henceforth pertain to the GPT model, except for our fine-tuning experiment where we used the Mistral model, as it is open-sourced. Additional results from other models are included in the appendix.

4.3 LLM-based Agents Interaction

Our debate simulations follow a round-robin format, with the initial speaker selected randomly. At each iteration, an agent receives its background story, the debate topic, and the conversation's history, and is asked to complete its next reply in the conversation (see Figure 2). Before the start of the debate, and at the end of each round-robin cycle, the agents are asked to numerically rate their attitude (on a scale of 0-10) toward the severity of the discussed topic. To ensure that this process does not impact the direction of the debate or future ratings, the survey questions are not saved in the conversation history, so the agents are unaware of the answers provided by other agents and of the answers they supplied themselves in the past.
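The sketch below renders this round-robin loop with interleaved attitude surveys. As in the paper, survey answers are kept out of the shared history; `generate(prompt, temperature)` again stands in for the underlying LLM call, and the prompt wording and cycle count are illustrative assumptions.

```python
import random
from typing import Callable

SURVEY = 'System: On a scale of 0 to 10, how much do you think that {t} is a big problem?'

def run_debate(agents: list[dict], topic: str,
               generate: Callable[[str, float], str], num_cycles: int = 5) -> dict:
    history: list[str] = []
    scores = {a['name']: [] for a in agents}
    random.shuffle(agents)                                    # initial speaker chosen randomly
    take_survey(agents, topic, history, scores, generate)     # attitudes before the debate starts
    for _ in range(num_cycles):
        for agent in agents:                                  # round-robin replies
            prompt = (f"{agent['story']}\nThis is a debate about {topic}. "
                      'Complete your next reply. Keep your reply shorter than 50 words.\n'
                      + '\n'.join(history) + '\nMe:')
            history.append(f"{agent['name']}: {generate(prompt, 1.0)}")
        take_survey(agents, topic, history, scores, generate)  # end of each round-robin cycle
    return scores

def take_survey(agents, topic, history, scores, generate):
    # Questions are asked at temperature 0 and NOT appended to the history, so agents
    # never see other agents' ratings or their own past answers.
    for agent in agents:
        prompt = (f"{agent['story']}\n" + '\n'.join(history)
                  + '\n' + SURVEY.format(t=topic) + '\nMe:')
        scores[agent['name']].append(float(generate(prompt, 0.0)))  # assumes a bare numeric reply
```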
[Figure 1: (a) The prompt used for generating the Democratic background stories, which includes the desired opinion on the four controversial topics debated in our experiments; we used wording similar to the survey questions asked by Doherty et al. (2023): \"Create a background story of a passionate Democrat partisan with the following ideology: Believes climate change is a big problem. Believes racism is a big problem. Believes gun violence is a big problem. Doesn't believe that illegal immigration is a big problem. Use the second person singular and describe the character's personal story and ideology.\" (b) An example background story generated by feeding the LLM the prompt in (a): \"You are a passionate individual dedicated to addressing the major issues of our time. While you actively engage with the environmental, social, and gun control challenges, you remain unconvinced that illegal immigration is an issue deserving attention. Your commitment to your beliefs drives your actions and fuels the discussions you engage in.\"]

[Figure 2: At each iteration, an agent (a) is prompted with its background story, the topic of the debate, and the history of the conversation so far, and is asked to complete either (b) its next reply in the conversation, or (c) a survey question measuring its current attitude on the debated topic.]

The variance in the conversation comes from two sources: (1) each agent has a different background story, and (2) the model generates conversation entries with a temperature setting of 1.0. However, all the survey questions are asked using a temperature setting of 0 (i.e., no sampling) to reduce unnecessary variance.

5 Fine-Tuning Methods

In the preceding section, we outlined our approach for simulating debates and tracking the attitude changes of the agents involved. In addition, our work offers a controlled intervention study designed to investigate the relationship between LLM biases and role-played agents. For this purpose, we have developed an automated fine-tuning technique for the agents, which allows us to manipulate their underlying LLM biases effectively. Our method depends solely on training data generated by the agents, without the need for external data inputs.

We commence the process by constructing a series of 100 questions intended to encourage agents to reveal their opinions and sentiments regarding their political views. We start with an initial set of 10 sample questions of different styles, for example, 'Could you discuss your perspective on significant political issues facing America today?' or 'How do you believe the U.S. should handle immigration and border security?' (the remaining questions are detailed in Supplementary Table 2). Following this, we direct the LLM to produce an additional 90 similar questions to ensure a broad and varied collection of queries. We intentionally craft the questions in a neutral way and ask a broad spectrum of questions, not only questions concerning the debated topics directly. We do this for two reasons: (1) we want our fine-tuning approach to be generic so it can be easily adjusted to other scenarios without needing to rewrite the questions; for instance, we used the same set of questions to generate both a Republican-oriented dataset and a Democratic-oriented dataset. (2) A broader variety of training data helps the agent maintain its conversational skills and avoid over-fitting.

Next, we initialize an agent with a context similar to the template used to create agents in Section 4.3. We query this agent with the 100 previously generated questions. For every question, we gather 20 responses by operating the agent at a temperature of 1.0. This results in 2,000 examples, which are utilized as our training dataset.

Finally, we use this self-generated dataset to fine-tune the model. Our training process is lightweight, using a basic next-word prediction task with the parameter-efficient QLoRA method (Hu et al., 2021; Dettmers et al., 2023). The training is completed in just one epoch, taking under 10 minutes on a single RTX 3090ti GPU. At the conclusion of this stage, the model becomes adapted to the agent's perspective, as elaborated in the results section. A diagram of our fine-tuning procedure is shown in Figure 3, with further technical specifics available in supplementary Section A.2. All the reported scores for fine-tuned models included in this paper are the average of three independent fine-tuning runs with random seeds.
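A minimal sketch of this lightweight fine-tuning step, assuming the Hugging Face transformers/peft stack: next-word prediction over the self-generated question-answer pairs with LoRA adapters on a quantized base model. The model name, hyper-parameters, and dataset format are illustrative assumptions, not the paper's exact configuration.

```python
from datasets import Dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = 'mistralai/Mistral-7B-v0.1'  # assumed checkpoint for the open-source model
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=BitsAndBytesConfig(load_in_8bit=True), device_map='auto')
model = prepare_model_for_kbit_training(model)
# r and lora_alpha control the trainable-weight count and update scale (see Section 7).
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type='CAUSAL_LM'))

# In the paper, this list holds the 2,000 self-generated question-answer pairs.
texts = ['Q: What are your political views?\nA: Fighting illegal immigration is my top priority.']

def tokenize(batch):
    return tok(batch['text'], truncation=True, max_length=512)

data = Dataset.from_dict({'text': texts}).map(tokenize, batched=True, remove_columns=['text'])
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir='ft', num_train_epochs=1,  # single epoch, as in the paper
                           per_device_train_batch_size=4, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),  # causal next-word prediction
)
trainer.train()
```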
[Figure 3: Depiction of our automated fine-tuning procedure. (1) The process begins with a pre-defined collection of questions designed to engage the agent (e.g., \"Should we act on climate change?\", \"What are your political views?\"). (2) The agent responds to each question, and these responses are gathered; the responses intentionally reflect the agent's background story. For example, to the question \"What are your political views?\" a Republican agent might answer, \"Fighting illegal immigration is my top priority\". (3) Training is conducted on a next-word prediction task, utilizing the agent's responses as training data.]

6 Results

Systematic Biases in Debate Simulations. We conducted simulations of political debates on the topics outlined in Section 4.1. These debates were structured as three-way discussions, including a Republican agent, a Democrat agent, and the Default agent, the latter assumed to reflect the model's inherent biases. Details on the implementation of these agents are provided in Section 4.2. During the debates, we monitored the evolution of the agents' attitudes by repeatedly asking them to rate their viewpoints on a 0 to 10 scale. More details on the debate format and attitude tracking methods are in Section 4.3, with a sample debate provided in supplementary Table 3.

The interactions between the three debating agents, as captured in Figure 4, indicate that while the Default agent consistently maintains its position, the partisan agents gradually adjust their viewpoints to resonate with the Default agent's stance. Notably, when the Default agent displayed a bias towards one side, indicating a strong inherent bias in the LLM, the agent with an initially opposing view tended to significantly compromise its opinion, shifting towards the position of the other partisan agent. Conversely, when the Default agent did not display a specific bias, the partisan agents tended to move towards a common middle ground.

[Figure 4: Evolution of attitude scores in three-way debates on four controversial topics. The X-axis shows the number of chat exchanges in the debate. The Y-axis displays the average attitude scores derived from forty separate experiments on each topic, including standard error bars. Our methodology for monitoring attitude scores is detailed in Section 4.3. The Default agent, symbolizing the inherent biases of the base LLM, maintains a consistent position throughout the debate. Interestingly, the views of the partisan agents gradually align more closely with those of the Default agent. Furthermore, it is notable that the lines representing the partisan agents never intersect with the line of the Default agent. This suggests that the LLM's default biases can serve as bounds, preventing one of the parties from making compromises toward the other. Supplementary Section A.1 presents analogous findings with other underlying models.]

Remarkably, as shown in Figure 5, even when the Default agent is excluded from the debate, the partisan agents continue to shift their attitudes in line with the model's inherent biases. This tendency of all agents to gravitate towards the model's inherent biases, irrespective of the participation of the Default agent in the discussion, prompts critical reflections on the capacity of language models to replicate genuine human interactions accurately and to mimic a diverse range of viewpoints authentically.

[Figure 5: Evolution of attitude scores in two-way debates between Republican and Democrat agents. The graphs feature a dashed line showing the Default agent's viewpoint before the beginning of the debates, taken from Figure 4. Recall that the Default agent's viewpoint represents the inherent biases of the LLM. Remarkably, even though the Default agent does not participate in the two-way debates illustrated here, the partisan agents continue to converge toward the inherent biases of the model.]

Contradicting the Echo Chambers Theory. Even during interactions with others of similar political orientations, the agents persist in reflecting the LLM's intrinsic bias. We demonstrate this phenomenon by pairing each of the forty Republican agents with another from the same group. As shown in Figure 6, agents tend to adopt more moderate positions, aligning more closely with the LLM's inherent bias. This finding is particularly intriguing as it deviates from the well-known real-world phenomenon of Echo Chambers (Sunstein, 2001; Hobolt et al., 2023), where individuals with
Figure 4: Evolution of attitude scores in three-way debates on four controversial topics. The X-axis shows the number of chat exchanges in the debate. The Y-axis displays the average attitude scores derived from forty separate experiments on each topic, including standard error bars. Our methodology for monitoring attitude scores is detailed in Section 4.3. The Default agent, symbolizing the inherent biases of the base LLM, maintains a consistent position throughout the debate. Interestingly, the views of the partisan agents gradually align more closely with those of the Default agent. Furthermore, it is notable that the lines representing the partisan agents never intersect with the line of the Default agent. This suggests that the LLM default biases can serve as bounds, preventing one of the parties from making compromises toward the other. Supplementary Section A.1 presents analogous findings with other underlying models. Similarly to the previous section, this trend persists even when the Default agent is excluded from the dialogue, as shown in Supplementary Figure 9. We also conducted the same Echo Chamber experiment using Democrat agents and observed a similar pattern of gravitation toward the Default agent\u2019s stance, as displayed in Supplementary Figure 10. Fine-tuning Highlights the Bias To conclusively demonstrate the link between LLM biases and agents\u2019 behavior, we employed the fine-tuning process detailed in Section 5. Through this method, we successfully altered the inherent bias of the LLM toward a specific viewpoint. After fine-tuning, we conducted the debates again using the original agent contexts, but with the underlying model now modified. Figure 5: Evolution of attitude scores in two-way debates between Republican and Democrat agents. The graphs feature a dashed line that shows the Default agent\u2019s viewpoint before the beginning of the debates, taken from Figure 4. Recall that the Default agent\u2019s viewpoint represents the inherent biases of the LLM. Remarkably, even though the Default agent does not participate in the two-way debates illustrated here, the partisan agents continue to converge toward the inherent biases of the model. As illustrated in Figure 7, changing the viewpoint of the LLM toward a Republican perspective indirectly influenced the agents, leading them to modify their behavior in line with the updated bias. In a contrasting setup, fine-tuning the model to align with a Democrat perspective resulted in trends that were predictably opposite, as seen in Supplementary Figure 12. This experiment underscores the profound implications of our findings, indicating that simulations conducted with different LLMs, each harboring its unique set of biases, could result in significantly different portrayals of authentic human behavior. The success of the fine-tuning process in steering the model towards a particular viewpoint is noteworthy, considering that it was accomplished solely with content produced by the LLM, without relying on external data sources. Furthermore, this method proves that it is feasible to configure agents to consistently maintain certain viewpoints throughout simulations, unlike the temporary effects seen when defining agents\u2019 identities through prompts.
Figure 6: This graph illustrates a series of three-way debates involving two Republican agents and a Default agent. Notably, even during conversations with other Republicans, the agents tend to align with the position of the Default agent. This trend is apparent even when the Default agent is not participating in the dialogue (Supplementary Figure 9). The same phenomenon is also evident in experiments conducted with Democrat agents (Supplementary Figure 10), where a similar pattern of gravitation towards the Default agent\u2019s stance is observed. 7 Fine-Tuning Robustness In Section 5, we describe our multi-stage self-fine-tuning method, which is shown to effectively alter the model\u2019s perspective toward a designated viewpoint. We designed our approach to be streamlined and easily replicable, focusing on ensuring the robustness of the process without resorting to localized optimizations. As a result, we made the following design choices: (1) Solely using self-generated data, avoiding external dataset sources. (2) Fine-tuning a comprehensive model applicable across all debate topics, rather than training individual models for each topic. (3) Employing a straightforward next-word prediction task, in contrast to more complex reinforcement learning techniques. (4) Using the efficient QLoRA method (Dettmers et al., 2023), which enabled training the model in minutes. The r and \u03b1 LoRA hyper-parameters, which respectively control the number of trainable weights and the scale of weight updates, had a significant impact on our results. By increasing these hyper-parameters, we observed a marked change in the political stance of the Default agent, which serves as a reflection of the LLM\u2019s built-in bias. Although our study primarily aims to modify the political viewpoint of the model, exploring how such adjustments impact the overall abilities of the LLM is intriguing. In Table 1, we offer a complementary analysis showing the impact of our fine-tuning on the popular MMLU (Hendrycks et al., 2020) and Hellaswag (Zellers et al., 2019) benchmarks. Despite the alterations, the fine-tuned models still showcase strong performance across these general benchmarks. However, there appears to be an inverse relationship between the degree of change in the model\u2019s political stance and its benchmark scores. Finally, we present an incremental optimization to our fine-tuning process, which enables us to manipulate the model\u2019s perspective more aggressively while mitigating the negative effects on its general performance. This optimization is based on the cutting-edge Direct Preference Optimization (DPO) method (Rafailov et al., 2023), which can be divided into two phases: first, a next-word-prediction phase that acclimates the model to the intended data distribution, followed by a Contrastive Learning phase aimed at teaching the model to differentiate between preferred and non-preferred outputs. As detailed in Section 5, our models undergo fine-tuning through a next-word-prediction task, alongside the creation of self-generated datasets encapsulating Republican and Democrat viewpoints. This groundwork allows us to directly employ the DPO\u2019s second phase on the pre-fine-tuned models and leverage our partisan datasets as input to the Contrastive Learning task, training a Republican model to prefer a response from the Republican dataset and vice versa. Again, we train only for a single epoch using QLoRA. The results of this process are also included in Table 1.
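The two-phase recipe above can be sketched with the PEFT and TRL libraries. Treat the following as an assumption-laden outline rather than the authors' implementation: trainer keyword arguments differ across TRL versions, and the dataset objects are placeholders.

```python
# Hedged sketch of QLoRA next-word prediction followed by a DPO phase.
# Hyper-parameter values echo the r settings probed in Table 1.
from peft import LoraConfig
from trl import SFTTrainer, DPOTrainer

def two_phase_fine_tune(base_model, nwp_dataset, dpo_dataset):
    """Phase 1: next-word prediction; Phase 2: contrastive preference learning.

    `nwp_dataset` holds the self-generated partisan answers; `dpo_dataset`
    holds {"prompt", "chosen", "rejected"} triples pairing a target-viewpoint
    response against one from the opposing dataset.
    """
    # Larger r / lora_alpha shift the model's stance more aggressively (Sec. 7).
    sft = SFTTrainer(
        model=base_model,
        train_dataset=nwp_dataset,
        peft_config=LoraConfig(r=64, lora_alpha=128, task_type="CAUSAL_LM"),
    )
    sft.train()  # a single epoch, as described in Section 5

    dpo = DPOTrainer(
        model=sft.model,
        train_dataset=dpo_dataset,
        peft_config=LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"),
    )
    dpo.train()
    return dpo.model
```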
8 Discussion In our simulations of debates involving agents representing Republicans and Democrats, a persistent pattern emerged: agents\u2019 opinions consistently align with the LLM\u2019s inherent social biases. In particular, when the model exhibits a strong bias in favor of one partisan agent, the opposing agent, which initially holds a differing view, often moderates its stance, gravitating significantly towards the position of its counterpart. Figure 7: Results of fine-tuning the model to adapt more closely to a Republican perspective. All the reported scores are the average of three independent fine-tuned models with different random seeds. For each topic, we conduct two separate debates between three agents: a Republican, a Democrat, and a Default agent who represents the model\u2019s inherent bias. The solid lines represent the debate between the three agents before fine-tuning, and the dotted lines represent the debate between the same agents when the underlying LLM had been fine-tuned. The Republican viewpoint is evident in both graphs: (left) In the Climate Change graph, all lines have shifted downward, signaling a shift towards opposing climate change. (right) Conversely, the Illegal Immigration graph shows an upward trend after fine-tuning, suggesting that the agents now view illegal immigration as a more significant issue. This leads to a skewed pattern that appears to depart from the typical dynamics observed in human interactions. Furthermore, using our self-fine-tuning process, we perform a controlled intervention study, demonstrating that it is possible to alter the LLMs\u2019 biases, and the agents will subsequently adjust their positions and align with the new biases. This highlights the strong influence of the LLMs\u2019 biases on agents\u2019 behavior. It also implies that simulations by different LLMs, each with its unique set of biases, could yield vastly different portrayals of \"authentic\" human behavior. Remarkably, even when agents engaged in debates with others of the same political orientation, they tended to adopt more moderate views over the course of interaction, increasingly mirroring the LLM\u2019s default bias. This pattern is intriguing because it deviates from the well-documented real-world phenomenon called Echo Chambers (Sunstein, 2001), where like-minded individuals often reinforce and escalate their beliefs when interacting with each other. In an analogous real-life study, Hobolt et al. (2023) divided Labour and Conservative supporters in England into groups to discuss government policies. Contrary to our agent-to-agent simulations, they found that Echo Chambers in homogenous groups intensified polarization. Our findings thus highlight limitations of large language model agents as accurate representations of real-life humans. The political landscape, as well as the specific topics that we chose (Section 4.1), are an important aspect of the day-to-day life of people and their decision-making processes, relevant to economic outcomes and markets, sociological and psychological phenomena, and for issues related to ethics. Hence, the limitations we identified should be acknowledged as major factors in the usage and interpretation of large-scale simulations that aim to represent human behavior more accurately, such as in Park et al. (2023). In summary, despite LLMs being supposedly renowned for their ability to emulate human behavior (Shanahan et al., 2023; Argyle et al., 2023), our research uncovers the constraints imposed by their intrinsic biases on their ability to simulate diverse agents with convincing personalities. This pivotal concern should be studied, addressed, and taken into consideration.
Our fine-tuning methodology demonstrates the possibility of modifying agents to adhere to specific perspectives consistently across simulations, unlike the temporary effects seen when defining agents\u2019 identities through prompts.

Model | Hellaswag (%) | MMLU (%) | Attitude Score
Mistral 7B | 83.6 | 59.0 | 8.4
r=16 NWP | 81.8 | 57.6 | 5.1
r=64 NWP | 81.2 | 56.3 | 4.3
r=128 NWP | 79.7 | 54.3 | 2.5
r=256 NWP | 73.8* | 48.6 | 1.9
r=8 DPO | 81.4 | 57.0 | 0.4
Llama 2 7B | 77.2 | 45.3 | \u2013

Table 1: Effect of fine-tuning Mistral toward a Republican perspective on the popular Hellaswag and MMLU benchmarks (higher is better). This table showcases 7 models: the baseline Mistral, 4 Mistral versions fine-tuned via a next-word-prediction task (NWP) with increasing numbers of trainable parameters (indicated by r), an additional Mistral model further optimized with DPO, and the LLaMA 2 7B (Touvron et al., 2023) model that is used for comparison. For brevity, we display only the Attitude Scores of the Default Agent in the final round of the debate about Racism (other debate topics follow a similar pattern). A higher Attitude Score implies a stronger acknowledgment of Racism as a significant issue. Key findings include: (1) All fine-tuned Mistral variants still outperform the renowned LLaMA 2 7B model across the benchmarks, with one exception marked by *. (2) For the NWP fine-tunes, there is an inverse correlation between the degree of the model\u2019s shift towards a Republican attitude and its performance on the benchmarks. (3) Adding a DPO phase as an incremental step to our fine-tuning methodology enables us to forcefully adjust the model\u2019s perspective with reduced detrimental effects on its performance on general benchmarks. We advocate for future research aimed at helping agents transcend the inherent biases of the model, potentially leveraging our fine-tuning processes and other alignment techniques, paving the way for more accurate and human-like simulations for both research and practical applications. Limitations Scope of Simulation Our research primarily examines the dynamics of debates involving 2-3 LLM agents simultaneously. This focused method effectively highlights our key observations. Yet, the investigation into how these findings play out in larger-scale simulations, such as Park et al. (2023) and Qian et al. (2023), is an avenue for future study. Such expansive simulations, which feature numerous agents living out simulated \u2018daily lives\u2019 over prolonged durations and interacting with a wide variety of other agents, could provide a more comprehensive view of the impact of inherent LLM biases on agent behavior. Improving Believability In this study, we introduce an automated alignment method for agents, which is pivotal in underscoring our principal discoveries regarding constraints in LLM simulations. Through this refinement approach, it is possible to program agents to adhere to specific viewpoints consistently across simulations, as opposed to the transient impact observed when shaping agents\u2019 identities via prompts. We argue that applying these alignment methods to develop simulations that are both more precise and closely mimic human behavior represents a valuable direction for future research, a concept not fully explored in this study. Ethics Statement In this study, we provide general insights into Large Language Models by conducting simulations on political topics. It is important to note that some biases observed in the paper are subjective. As authors, we maintain a neutral stance concerning the debate topics.
Furthermore, we have introduced a fine-tuning technique designed to adjust LLM biases towards specific viewpoints. It is crucial to exercise caution when applying such fine-tuning methods to user-facing LLMs, ensuring that they reflect fair and ethical values in their outputs."
},
{
"url": "http://arxiv.org/abs/2404.12535v1",
"title": "HalluciBot: Is There No Such Thing as a Bad Question?",
"abstract": "Hallucination continues to be one of the most critical challenges in the\ninstitutional adoption journey of Large Language Models (LLMs). In this\ncontext, an overwhelming number of studies have focused on analyzing the\npost-generation phase - refining outputs via feedback, analyzing logit output\nvalues, or deriving clues via the outputs' artifacts. We propose HalluciBot, a\nmodel that predicts the probability of hallucination $\\textbf{before\ngeneration}$, for any query imposed to an LLM. In essence, HalluciBot does not\ninvoke any generation during inference. To derive empirical evidence for\nHalluciBot, we employ a Multi-Agent Monte Carlo Simulation using a Query\nPerturbator to craft $n$ variations per query at train time. The construction\nof our Query Perturbator is motivated by our introduction of a new definition\nof hallucination - $\\textit{truthful hallucination}$. Our training methodology\ngenerated 2,219,022 estimates for a training corpus of 369,837 queries,\nspanning 13 diverse datasets and 3 question-answering scenarios. HalluciBot\npredicts both binary and multi-class probabilities of hallucination, enabling a\nmeans to judge the query's quality with regards to its propensity to\nhallucinate. Therefore, HalluciBot paves the way to revise or cancel a query\nbefore generation and the ensuing computational waste. Moreover, it provides a\nlucid means to measure user accountability for hallucinatory queries.",
"authors": "William Watson, Nicole Cho",
"published": "2024-04-18",
"updated": "2024-04-18",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI",
"cs.CL"
],
"label": "Original Paper",
"paper_cat": "LLM AND Agent",
"gt": "HalluciBot: Is There No Such Thing as a Bad Question?",
"main_content": "Introduction Despite the promising potential for a myriad of practical use cases, Large Language Models (LLMs) offer limited insights into their chain of thought (Liang et al., 2022; Wei et al., 2023; Kojima et al., 2023; Li et al., 2023) and have the propensity to hallucinate in various circumstances (Jiang et al., 2021). Common factors that drive hallucinations encompass high model complex*Equal Contribution Traditional Estimation If Output A seems hallucinated: Iterative generations may derive a more accurate output Query: Who is the writer ? Query: Who is the writer of Harry Potter ? \u2022 User ends the session or, \u2022 User tries to revise the query HalluciBot Binary Classifier LLM Output: A+ With these scores, user skips generation and revises query HalluciBot Query: Who is the writer? Unlikely to hallucinate Likely to hallucinate HalluciBot Multi-Class Classifier Binary Classifier HalluciBot Unlikely to hallucinate Likely to hallucinate HalluciBot Multi-Class Classifier Query: Who is the writer of Harry Potter ? Output: J.K.Rowling Output: J.K.Rowling Output: A person 98.8 % of seeing 6 hallucinations 88.2 % of seeing 0 hallucinations Figure 1: Comparison between traditional methods of confidence estimation and HalluciBot. In the former, user inputs a query to generate an output that is accurate or hallucinatory. If hallucinatory, user can end the session or revise the query for iterative rounds of generation. In contrast, HalluciBot predicts the probability of hallucination for a given query before generation. Therefore, users can instantly gain insight into hallucination probability and revise a query. ity, flawed data sources, or inherent sampling randomness. Specifically, the intrinsic trade-off between greedy deterministic decoding and the creativity spawned through nucleus sampling induces a heightened propensity to generate hallucinations (Huang et al., 2023). This challenge is compounded by limitations such as the frequent inaccessibility into the LLMs\u2019 training datasets (Liang et al., 2022). HuggingFace\u2019s release of its \"Hallucinations Leaderboard\" on January 29th, 2024 (Minervini et al., 2024; Gao et al., 2023) highlights the importance of resolving hallucination-related issues via the concerted effort of evaluating different LLMs. In this context, the majority of current studies have focused on the post-generation phase of output analysis as expanded in Peng et al. (2023) 1 arXiv:2404.12535v1 [cs.LG] 18 Apr 2024 \fsuch as (1) self-refinement via feedback loops on the model\u2019s output (Madaan et al., 2023), (2) analysis of logit output values to detect hallucination (Varshney et al., 2023), or (3) for a minority of studies focused on the pre-generation phase, the ingestion of recent knowledge to improve performance (Tonmoy et al., 2024). We propose a novel model, HalluciBot, that predicts the probability of hallucination, before any generation, for any type of query. In essence, for any query posed to an LLM, HalluciBot estimates: \u25b6a binary classification of the query\u2019s propensity to hallucinate (\"Yes\" or \"No\"), as well as, \u25b6a multi-class hallucination rate estimating the expected value of hallucination via sampling. In order to understand HalluciBot, it is imperative to discuss the following key fundamental characteristics regarding hallucinations and LLMs. 
Firstly, hallucination itself is not a derivative of any mathematical model and, intrinsically, can only be determined by juxtaposing the ground truth with the LLM\u2019s generated outputs. Secondly, LLMs frequently improve output quality through different sampling methods such as nucleus sampling (Holtzman et al., 2020; Fan et al., 2018; Holtzman et al., 2018; Radford et al., 2018). In light of these features, HalluciBot is trained by leveraging a Multi-Agent Monte Carlo Simulation to construct an encoder for hallucination quantification. In essence, HalluciBot, leveraging gpt-3.5-turbo, is trained via: \u25b6perturbing 369,837 queries n times into unique variations that retain the original semantic meaning yet significantly diverge lexically; then, \u25b6employing n+1 independent agents to sample an output from each query (including the original) at a temperature of 1.0 for diversity; \u25b6conducting a Monte Carlo Simulation on 2,219,022 sampled outputs; \u25b6deriving an empirical estimate of the expected rate of hallucination $p_h(q_0)$ for the original query as the ground truth for our encoder. As a result, our study has culminated in the following pillars of contribution. \u25b6HalluciBot, the first encoder-based model to derive, before generation, an anticipated rate of hallucination for any type of query. HalluciBot has scored a validation accuracy of 73% on binary labels and 73% for top 3 multi-class labels. \u25b6Moreover, our approach to constructing HalluciBot absorbs the computational complexity of Monte Carlo sampling and training prior to the user session. This differs from the prevalent hallucination detection methods that involve sampling during or after the user\u2019s current session. Thus, institutions that employ HalluciBot can systematically save on the considerable amount of computational waste engendered by \"highly probable\" hallucinatory queries. \u25b6HalluciBot also generalizes to systems with Retrieval Augmented Generation (RAG) context or few-shot question answering systems with an LLM generator. Also, it can generalize to systems only accessible via API calls (OpenAI, 2022; Google, 2023; Microsoft, 2023). Thus, in diverse implementation environments, HalluciBot can impart accountability to the user with regards to the query\u2019s quality. \u25b6We also present a myriad of behavioral and statistical discoveries from our experiments, such as an LLM\u2019s probability to either be exceptionally correct or completely incorrect in certain scenarios. Our Appendix elaborates in detail. 2 Related Work With regards to hallucination mitigation studies, an overwhelming majority focuses on the post-generation stage of analyzing outputs. A minority concentrates on the pre-generation phase, and even amongst those, the focus lies in incorporating recent knowledge into LLMs. In detail, many expand on the universally utilized method of context-based retrieval systems (Reimers and Gurevych, 2019; Johnson et al., 2019; Nogueira and Cho, 2020; Karpukhin et al., 2020; Lewis et al., 2020; Izacard and Grave, 2021). Other methods include relying on the model\u2019s general knowledge (Khashabi et al., 2020) or conditioning the QA model on context generated by the LLM itself (Yu et al., 2023). Certain work has focused on mitigating hallucinations by augmenting the way LLMs generate their answers. One of the more popular techniques is to have the model enumerate its chain-of-thought (Wei et al., 2023) and think step by step (Nye et al., 2021), while building context.
Another method to augment generation with context is by semantic retrieval (Lewis et al., 2021; Liu et al., 2021), handling hallucinations as they arise (Varshney et al., 2023), or using LLMs to generate context rather than retrieve (Yu et al., 2023). PromptChainer (Wu et al., 2022) profiled techniques to craft LLM chains, in which the output of one LLM\u2019s generation process, when fed into the next LLM, can allow for more complex tasks. Figure 2: System Overview. A single query q0, supplied by the user, is perturbed in n different ways. Next, the original and perturbed queries qi are independently answered by the Generator agents. This Monte Carlo Simulation provides an estimate of the rate of hallucination $p_h(q_0)$ for an original query q0. Undergoing this Multi-Agent Monte Carlo Simulation, covering 13 datasets, HalluciBot is trained to predict the probability that any query q0 will yield a hallucination, and predict the expected value of hallucinations sampled before generation. Language Model Cascades (Dohan et al., 2022) demonstrated that LLMs can yield probabilistic programs to tackle multi-step reasoning problems. Self-consistency (Wang et al., 2023) leveraged a new decoding strategy to sample multiple generative pathways and then select the most consistent answer. Also, Kumar et al. (2022) explored gradient-based sampling procedures that satisfy user-defined constraints. Most recent work has focused on sampling-based calibration within a single model (Cole et al., 2023) or self-verification (Kadavath et al., 2022); the latter focuses on generating a set of outputs and feeding those back into the LLM. Furthermore, Snyder et al. (2023) explores how artifacts can differentiate hallucinated outputs. One common feature amongst these approaches is that the focus is on the output rather than the query. Also, while Zheng and Saparov (2023) study lexical perturbations, no study on hallucinations has employed a Multi-Agent approach coupled with query perturbations, which are hallmark features of HalluciBot. 3 Methodology Overview 3.1 What is Hallucination? In general terms, hallucination refers to a false perception of patterns or objects resulting from one\u2019s senses. With regards to LLMs, a myriad of studies have categorized hallucinations into bifurcated structures such as (1) intrinsic hallucination, which refers to the LLM\u2019s outputs directly contradicting the source content for extractive queries, or (2) extrinsic hallucination, which refers to the LLM\u2019s outputs being unverifiable by the source content (i.e., irrelevant outputs) (Ji et al., 2023). From a different angle, (1) factuality hallucinations refer to outputs which directly contradict or fabricate the ground truth, while (2) faithfulness hallucinations define outputs that misunderstand the context or intent of the query (Huang et al., 2023).
In this study, we will focus on the aforementioned types of hallucinations and introduce truthful hallucination as a novel concept. \u25b6The motivation for truthful hallucination stems from the analysis that neural networks display an intrinsic propensity to memorize training data (Carlini et al., 2021). \u25b6Truthful hallucination is defined as an LLM\u2019s inability to answer semantically similar but lexically different perturbations of a query. \u25b6Understanding truthful hallucination is essential to account for training bias. Namely, if an LLM has memorized a query and therefore can answer it correctly, but demonstrates the inability to answer similar queries that are worded differently, it will be held accountable. \u25b6Moreover, truthful hallucination is reflective of user behavior: unless there is a drop-down of query templates, there will almost always be diverse representations of the same query. 3.2 What is the Motivation for HalluciBot? HalluciBot focuses on distilling LLM behavior into a speedy, encoder-based system that can predict hallucination before generation. Foremost, this is in contrast to prior work that uses multiple generations during a user\u2019s session to provide self-consistency (Manakul et al., 2023). Next, our proposal differs from entropy-based, log-prob-based, or model-based estimation techniques (Huang et al., 2023) that rely on the LLM\u2019s uncertainty to predict hallucinations; these methods focus on the model\u2019s bias, while we focus on empirical estimates. Moreover, our approach consists of a Multi-Agent Simulation, which stands in stark contrast to the majority of current experiments that have focused on leveraging a single LLM agent to generate outputs from a single query (Cole et al., 2023; Kadavath et al., 2022; Snyder et al., 2023). The training procedure for HalluciBot consists of: \u25b6perturbing 369,837 queries n times (\u00a73.3) \u25b6employing n+1 independent LLM agents (\u00a73.4) \u25b6sampling an output from each query (\u00a73.4) \u25b6conducting a Monte Carlo Simulation on the 2,219,022 sampled outputs (\u00a73.5) \u25b6training an encoder-based classifier (\u00a73.6) 3.3 What is a Query Perturbator? We are motivated by how perturbations: \u25b6help comprehend truthful hallucination. We induce diversity to disentangle the generation process from any potential training bias. \u25b6emulate real-life scenarios, in which users unintentionally create variations of the same query. Thus, the Query Perturbator is a gpt-3.5-turbo LLM agent T that generates n = 5 perturbations of the original query q0 while retaining the same semantic meaning. Note that in practice, there are n + 1 = 6 transformations because for a given query q0, we define T0(q0) = I(q0) = q0 as an identity transform. Therefore, the original query will always be included in the set of transformed queries for the next step. In effect, the generation process can be summarized as returning a set of Q = {q0, q1, . . . , qn} query perturbations of size n + 1. The Query Perturbator\u2019s singular purpose is to: Rewrite the query in {n} radically different ways. One prompt call is sufficient to discourage duplicates. Temperature is set to 1.0 to prioritize creativity and lexical diversity (Table 6). $Q = T(q_0) = \\{ I(q_0), T_1(q_0), \\ldots, T_n(q_0) \\} = \\{ q_0, q_1, \\ldots, q_n \\}$
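A minimal sketch of this perturbation step follows, assuming the OpenAI chat API; the message assembly and line-based parsing of the rewrites are illustrative assumptions, while the quoted instruction and the temperature setting match the text.

```python
# Sketch of the Query Perturbator: one chat call yields n rewrites, and the
# identity transform keeps the original query in the returned set Q.
from openai import OpenAI

client = OpenAI()

def perturb_query(q0: str, n: int = 5) -> list[str]:
    """Return Q = [q0, q1, ..., qn]: the original query plus n rewrites."""
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": f"Rewrite the query in {n} radically different ways.\n\n{q0}",
        }],
        temperature=1.0,  # prioritize creativity and lexical diversity
    )
    rewrites = [line.strip("- ").strip()
                for line in reply.choices[0].message.content.splitlines()
                if line.strip()][:n]
    return [q0] + rewrites  # T0(q0) = I(q0) = q0 is the identity transform
```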
3.4 What is an Output Generator? Next, the Output Generator is deployed for each perturbation due to the following reasons: \u25b6Given the complexity of LLMs with nucleus sampling decoding, analyzing a single output from a single query may not be comprehensive enough to determine hallucination. \u25b6Furthermore, given the risk of over-training LLMs, their opaque training data, and propensity to memorize, generating multiple outputs from the same query does not eliminate training bias. This has been a prevalent mode of studying hallucination in prior studies (\u00a73.2). A new generation process should be launched each time to answer the next perturbed query to ensure a truly independent output generation process. Thus, for the perturbed set Q for a sample q0, the Output Generator consists of |Q| = n + 1 independent gpt-3.5-turbo LLM agents to generate outputs ai \u2208A for each variation qi \u2208Q. The LLM agent will receive (1) for Extractive queries, a prompt with the query qi, alongside context ci, (2) for Multiple-Choice queries, candidate choices ki \u2208K, and (3) for Abstractive queries, no additional context. See Table 8 for each experiment\u2019s prompt procedure. Temperature for all experiments is set to 1.0 to stress-test and encourage diversity. 3.5 What is the purpose of a Monte Carlo Simulation in HalluciBot? As evidenced by multiple studies and Section 3.1, hallucination is the outcome of multiple confounding variables; thus, it is highly unlikely that a tractable closed-form solution will be able to model hallucinations. Thus, we employ a Monte Carlo Simulation as a means to derive empirical estimations of hallucination rates in LLMs, since this method is frequently leveraged to map probability in the presence of random variable inference (Swaminathan, 2021). Thus, we estimate the probability density that a query induces hallucination. 3.6 How Do We Train a Classifier? Once the Monte Carlo Simulation is complete for our training corpus composed of 369,837 queries spanning 13 diverse datasets (Appendix C, Tables 1 & 11), we start training our classifier.

Scenario | Datasets
Extractive (1) | SQuADv2
Multiple Choice (9) | TruthfulQA, SciQ, MMLU, PIQA, BoolQ, OpenBookQA, MathQA, ARC Easy, ARC Challenge
Abstractive (6) | SQuADv2, TruthfulQA, SciQ, WikiQA, HotpotQA, TriviaQA

Table 1: Dataset Scenario Split with Reused Assets. These queries encompass Extractive, Multiple Choice, and Abstractive scenarios. Each scenario, with or without additional context, affects the hallucination rate of gpt-3.5-turbo. The estimates provided through our Monte Carlo Simulation are proportional to an approximation of hallucination rates. \u25b6With a synthetic labeled set of queries q0 and their rate of hallucinations $p_h(q_0)$, we train an encoder-style classifier based on BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) to estimate the hallucination probability density from our Monte Carlo Simulation. \u25b6We train two versions: a binary model to estimate the propensity a query can hallucinate, and a multi-class model to predict the expected value of hallucinations if sampled n + 1 times. Our experiments constrain the number of perturbations to n = 5, and when including the original query and output, we can model the hallucination rate for n + 1 = 6 modes; this translates to increments of 16.6% in hallucination rates.
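The generation and estimation steps can be sketched as below. This is a hedged illustration assuming the OpenAI chat API for the Abstractive (no-context) scenario; the fuzzy-match cutoff is an assumption standing in for the partial-matching rule detailed in Section 4.

```python
# Sketch of the Output Generator plus the per-query Monte Carlo estimate:
# each perturbation is answered by an independent call at temperature 1.0,
# and p_h(q0) is the fraction of answers that miss the ground truth.
from openai import OpenAI
from thefuzz import fuzz

client = OpenAI()

def answer(query: str) -> str:
    """One independent generation per query; no shared conversation state."""
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": query}],
        temperature=1.0,  # stress-test and encourage diversity
    )
    return reply.choices[0].message.content

def hallucination_rate(perturbed_queries: list[str], truth: str) -> float:
    """Empirical p_h(q0) over the n+1 queries returned by the perturbator."""
    wrong = sum(
        fuzz.partial_ratio(answer(q).lower(), truth.lower()) < 90  # assumed cutoff
        for q in perturbed_queries
    )
    return wrong / len(perturbed_queries)
```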
3.7 How Do We Encode a Query\u2019s Scenario? We conduct an ablation study to explore whether incorporating the query\u2019s scenario mitigates hallucinations. To create the prompt, we prepend the original query q0 with either [EXTRACTIVE], [MULTIPLE CHOICE], or [ABSTRACTIVE], using the format \u00ab{tag} {q0}\u00bb. Our hypothesis is based on recent research that highlights the use of RAG (Yan et al., 2024; Lewis et al., 2021; Guu et al., 2020) to alleviate hallucinations. The additional context provides valuable signals related to the hallucination rate of the original query. Furthermore, we apply this technique to distinguish our experimental results from reused datasets in different scenarios, such as SciQ and SQuADv2. The queries are encoded exactly as they appear in the original datasets. 3.8 How Do We Convert Monte Carlo Estimates to Labels? Empirical Estimate The probability of truthful hallucinations for a query q0, denoted as $p_h(q_0)$, can be empirically estimated based on the outputs $a_i \\in A$ of our Multi-Agent Monte Carlo Simulation. We define the indicator function $\\mathbb{I}$ to measure the incorrectness of an output $a_i$ with respect to the ground truth $y$ for query $q_0$: $p_h(q_0) \\approx \\frac{1}{n+1} \\sum_{i=0}^{n} \\mathbb{I}[a_i \\neq y]$. Binary Labels To assess the propensity to hallucinate, we simplify the problem by considering two response values: whether q0 produces any hallucination or not. Thus, we define the binary value for the probability of any hallucination as $p_b(q_0)$, where $p_b(q_0) = 1$ if $p_h(q_0) > 0$ and $p_b(q_0) = 0$ if $p_h(q_0) = 0$. Multi-class Labels HalluciBot is trained to estimate the occurrence of hallucinations when queried and sampled under n + 1 trials. To facilitate training, we convert the proportion into discrete classes by multiplying the original estimate $p_h(q_0)$ by the number of agents n + 1. This transformed variable is denoted as $E[p_h(q_0)] = \\lfloor (n+1) \\cdot p_h(q_0) \\rfloor$. 4 Metrics for Hallucination Two key metrics are observed: accuracy and agreement. The concept of truthful hallucination focuses on the accuracy of individual agents. In addition, we analyze the level of agreement as an indicator of correlation. Moreover, the simulation\u2019s reliability is assessed at the corpus level. Accuracy Accuracy serves as the measure of correctness in our evaluation by comparing the generated output $a_i$ to the ground truth string $y$, aiming for an almost exact match. For Multiple Choice queries, we also consider the choice label. However, due to minor variations in special characters, we adopt a partial, case-insensitive matching approach using the TheFuzz library. If there is no match between the output $a_i$ and the ground truth $y$, we assign $\\mathbb{I}[a_i \\neq y] \\mapsto 1$; otherwise, $\\mathbb{I}[a_i = y] \\mapsto 0$. To assess the range of correctness for each experiment, we compare the results of the original query q0 and its output a0 juxtaposed to the mode (most common) vote. In addition, we present the lower bound (all correct) and upper bound (at least one correct) accuracy across our n + 1 = 6 agents.
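A short sketch of the label conversion defined above (nothing here is novel; it simply restates the two formulas in code):

```python
# Convert an empirical hallucination rate into HalluciBot's training labels:
# a binary flag p_b(q0) and a discrete expected-value class in {0, ..., n+1}.
import math

def to_labels(p_h: float, n: int = 5) -> tuple[int, int]:
    binary = 1 if p_h > 0 else 0          # did any of the n+1 samples hallucinate?
    multi = math.floor((n + 1) * p_h)     # E[p_h(q0)] = floor((n+1) * p_h(q0))
    return binary, multi
```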
67k 59k 39k 130k 27k 43k Abstractive Extractive Multiple Choice 0 50k 100k No Yes Scenario Label Count Figure 4: Binary distribution of labels, where at least one hallucination occurred during our simulation. original query q0 and its output a0 juxtaposed to the mode (most common) vote. In addition, we present the lower bound (all correct) and upper bound (at least one correct) accuracy across our n + 1 = 6 agents. Agreement Accuracy alone is insufficient for evaluating the agreement among multiple agents when analyzing truthful hallucinations. To address this limitation, we utilize several statistical measures: Item Difficulty (\u00b5D) (Lord, 1952), Fleiss\u2019s Generalized \u03ba (Cohen, 1960; Fleiss, 1971), Mean Certainty (H\u03b7) (Shannon, 1948; Wilcox, 1973), and Gibbs\u2019 M2 Index (Gibbs and Poston, 1975). These measures enable the assessment with regards to the level of agreement among independent samplings of a query, regardless of correctness. For example, if all LLM agents provide the same incorrect answer, high agreement indicates a misconception. Appendix D.3 expands in detail. Reliability We employ Cronbach\u2019s \u03b1 to assess the internal consistency of our Monte Carlo SimuBinary Train Val Test No (y = 0) 139,142 17,153 9,306 Yes (y = 1) 163,350 27,338 13,548 Expected Rate Train Val Test 0.0% (y = 0) 139,123 17,146 9,202 16.7% (y = 1) 35,114 4,974 2,757 33.3% (y = 2) 20,213 3,371 1,967 50.0% (y = 3) 15,749 2,757 1,768 66.7% (y = 4) 14,477 2,735 1,970 83.3% (y = 5) 17,123 3,242 2,171 100.0% (y = 6) 60,693 10,266 3,019 Scenario Train Val Test Extractive 80,049 5,843 Multiple Choice 45,997 14,127 21,573 Abstractive 176,446 24,521 1,281 Total 302,492 44,491 22,854 Table 2: Training Splits for HalluciBot. lation, as a corpus-level statistic (Cronbach, 1951). 5 Experimental Setup 5.1 Dataset Coverage & Scenario Split Our experiment includes 13 datasets (Table 11) divided into 3 scenarios: Extractive, Multiple Choice, and Abstractive. To evaluate the impact of context, we use SQuADv2 (Rajpurkar et al., 2016, 2018) to simulate Retrieval-Augmented Generation (Lewis et al., 2021; Guu et al., 2020). To assess the effect of multiple choice queries, we repurposed TruthfulQA (Lin et al., 2022) and SciQ (Johannes Welbl, 2017) for two experiments: one where the LLM agents select from the choices or context, and another where LLM agents generate outputs without context. Prompt templates for each output agent can be found in Table 8. We maintain the original train, validation, and test splits for each dataset across scenarios to prevent information leakage to HalluciBot. 6 \fScenario Accuracy Agreement Rel Experiment # Base \u2191 Mode \u2191 Lower \u2191 Upper \u2191 \u00b5D \u2191 H\u03b7 \u2191 M2 \u2191 \u03ba \u2191 \u03b1 \u2191 WAVG Extractive 85,892 92.1 91.0 69.0 97.4 87.2 85.5 84.3 75.3 99.8 Multiple Choice 81,697 76.3 76.8 47.4 91.6 71.8 75.2 71.3 61.9 92.5 Abstractive 202,248 55.9 53.9 32.9 67.3 51.2 81.5 80.0 69.1 98.8 Total 369,837 68.6 67.4 44.3 79.4 63.9 81.0 79.1 69.0 97.7 Table 3: Stage 1 Monte Carlo: Weighted average (WAVG) results are presented for each scenario and in total. The metrics are calculated using data from all examples within each scenario, regardless of the original dataset split. For a more detailed breakdown, refer to Table 15. 5.2 Monte Carlo Parameters & Setup Our Monte Carlo Simulation utilizes 2 LLM components: the Query Perturbator and Output Generator. 
5.2 Monte Carlo Parameters & Setup Our Monte Carlo Simulation utilizes two LLM components: the Query Perturbator and the Output Generator. We use gpt-3.5-turbo to sample perturbations and outputs based on the prompts provided in Table 8. All LLM agents share the same set of parameters, as described in Table 6. Well-formedness We used a syntactically-aware well-formedness scoring RoBERTa model (Kumar, 2021) trained on the Google Query Wellformedness Dataset (Faruqui and Das, 2018) to evaluate the grammatical correctness and completeness of 1,881,005 synthetically generated queries. 5.3 HalluciBot Parameters & Environment We employed HuggingFace\u2019s Trainer class with the Adam optimizer (Kingma and Ba, 2017) for training, reporting efficiency and training times in Table 9. All experiments were conducted on an AWS EC2 instance with a single GPU (Table 7). Base Models We fine-tune pretrained BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) models (Table 10). To address label imbalance, we employed a weighted loss where each class weight is assigned to its inverse frequency in the training set. For the multi-class predictor, we trained a cross-entropy and an ordinal model. Train Splits The train, validation, and test splits follow the original divisions of the datasets. Specifically, there are 302,492 training, 44,491 validation, and 22,854 testing samples. The distribution of labels across these splits is summarized in Table 2, and fine-grained splits per set are in Table 16. 6 Analysis & Discussion 6.1 Aggregated Monte Carlo Results The aggregated Monte Carlo results demonstrate the relative performance of different question-answering scenarios. On average, the results (Table 3) indicate that Extractive outperforms Multiple Choice, which, in turn, outperforms Abstractive when subjected to perturbations. This trend suggests that the performance of gpt-3.5-turbo is influenced by the presence of additional content. Abstractive tasks show the greatest variation in agent response under perturbations, highlighting the effectiveness of added context in mitigating hallucinations (Figures 3 & 4). A full breakdown of our results is provided in Table 15. Extractive QA With context, gpt-3.5-turbo performs well on the SQuADv2 dataset. The mode accuracy (91.0%) and agreement (75.3%) of the agents are high. Even under radical perturbations, having unaltered context provides robustness to the agent\u2019s capacity to answer correctly. Multiple Choice QA Access to answer choices mitigates hallucinations across perturbations. The ensemble accuracy is slightly higher than the baseline (+0.5%), showcasing that multiple agents can (slightly) improve accuracy rates. Abstractive QA When no additional context is provided, gpt-3.5-turbo achieves a mode accuracy of 53.9% under perturbations. Interestingly, there is a significant dispersion of hallucination rates compared to other scenarios (Figure 3). Moreover, there is significant variation in results among datasets. For instance, SQuADv2 shows a 59% decrease in baseline accuracy against its Extractive counterpart. In contrast, SciQ benefits in this setting, leading to a +9.4% increase in mode accuracy, as the likelihood of generating a match increases. Coherence of Perturbations Our analysis indicates that the perturbations created by gpt-3.5-turbo consistently exhibit a high level of coherence, as indicated by their well-formedness score of 0.87. In contrast, the original queries achieve a well-formedness score of 0.77, representing an 11.5% decline. Table 13 expands on our results, with sample perturbations in Table 14.
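The weighted loss mentioned in Section 5.3 can be realized by overriding the Trainer's loss computation. The subclass below is a hedged sketch of that pattern (the exact `compute_loss` signature has shifted across transformers releases), not the authors' training script.

```python
# Sketch of inverse-frequency class weighting inside a HuggingFace Trainer.
import torch
from torch.nn import CrossEntropyLoss
from transformers import Trainer

class WeightedTrainer(Trainer):
    """Cross-entropy loss that up-weights rare labels by inverse frequency."""

    def __init__(self, class_counts, **kwargs):
        super().__init__(**kwargs)
        counts = torch.tensor(class_counts, dtype=torch.float)
        self.class_weights = counts.sum() / counts  # rarer class -> larger weight

    def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        loss_fct = CrossEntropyLoss(
            weight=self.class_weights.to(outputs.logits.device))
        loss = loss_fct(outputs.logits, labels)
        return (loss, outputs) if return_outputs else loss
```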
Model | Accuracy (Train / Val / Test) \u2191 | F1 Score (Train / Val / Test) \u2191 | Precision (Train / Val / Test) \u2191 | Recall (Train / Val / Test) \u2191
BERT-base-cased | 80.9 / 64.4 / 66.5 | 81.3 / 68.6 / 72.3 | 86.2 / 74.8 / 74.8 | 76.9 / 63.4 / 70.0
+ Scenario | 85.5 / 72.3 / 67.4 | 85.5 / 76.4 / 69.8 | 92.5 / 80.2 / 77.3 | 79.5 / 73.0 / 63.7
+ \u03c4 = 0.129 | 81.6 / 71.1 / 67.7 | 84.6 / 79.0 / 75.2 | 77.1 / 71.4 / 69.1 | 93.7 / 88.4 / 82.5
RoBERTa-base | 74.7 / 64.1 / 66.1 | 73.3 / 66.5 / 69.6 | 85.1 / 78.0 / 74.4 | 64.4 / 57.9 / 65.3
+ Scenario | 79.8 / 73.0 / 69.0 | 79.3 / 76.8 / 71.7 | 88.8 / 81.5 / 78.4 | 71.5 / 72.6 / 66.0
+ \u03c4 = 0.177 | 77.5 / 72.1 / 68.7 | 81.2 / 79.6 / 75.9 | 74.0 / 72.3 / 69.9 | 89.8 / 88.6 / 83.0
RoBERTa-large + Scenario | 84.9 / 72.9 / 68.6 | 85.0 / 76.9 / 71.1 | 92.1 / 80.9 / 78.2 | 78.8 / 73.2 / 65.3
+ \u03c4 = 0.146 | 82.5 / 72.5 / 69.2 | 85.0 / 79.5 / 75.3 | 79.0 / 73.4 / 71.8 | 92.0 / 86.8 / 79.3

Table 4: HalluciBot Binary Evaluation Statistics. We report the Accuracy, F1, Precision, and Recall for all data splits. Probability threshold \u03c4 is computed along the closed interval [0, 1] in increments of 0.001 to maximize the F1 score. The best ablation per base model is underlined, while the overall best performing model is in bold.

Model | Top 1 Accuracy (Train / Val / Test) \u2191 | Top 2 Accuracy (Train / Val / Test) \u2191 | Top 3 Accuracy (Train / Val / Test) \u2191 | \u00b11 Accuracy (Train / Val / Test) \u2191
BERT-base-cased | 49.6 / 32.2 / 24.7 | 69.7 / 49.2 / 40.7 | 81.4 / 62.7 / 56.4 | 70.6 / 53.2 / 49.6
+ Scenario | 54.1 / 38.7 / 31.3 | 72.2 / 54.8 / 46.1 | 82.8 / 67.6 / 59.3 | 76.4 / 60.6 / 54.3
+ Ordinal | 58.7 / 45.3 / 38.6 | 70.0 / 54.5 / 48.3 | 79.0 / 64.1 / 59.1 | 75.9 / 61.4 / 55.8
RoBERTa-base | 47.6 / 34.1 / 26.6 | 66.2 / 50.1 / 42.6 | 77.9 / 62.7 / 57.3 | 67.6 / 54.1 / 51.5
+ Scenario | 52.2 / 41.5 / 34.4 | 69.2 / 57.0 / 48.4 | 79.8 / 68.6 / 59.5 | 73.5 / 62.6 / 56.1
+ Ordinal | 47.8 / 39.4 / 37.1 | 56.7 / 48.7 / 46.6 | 67.0 / 60.0 / 57.8 | 64.1 / 56.1 / 54.4
RoBERTa-large + Scenario | 61.6 / 47.6 / 38.8 | 77.5 / 62.6 / 53.1 | 85.8 / 73.0 / 63.8 | 79.5 / 65.4 / 58.8
+ Ordinal | 60.8 / 48.0 / 40.7 | 73.6 / 59.0 / 52.2 | 81.9 / 67.5 / 62.3 | 76.7 / 63.4 / 57.4

Table 5: HalluciBot Multi-class Evaluation Statistics. Considering the challenge of approximating a random variable and the potential presence of noise in our empirical estimate, we provide accuracy measurements for Top 1, Top 2, and Top 3 predictions. Additionally, we report the accuracy within a margin of \u00b11 of the predicted class. 6.2 Can HalluciBot detect Hallucination? Binary Results The best HalluciBot achieves a validation accuracy of 73.0%, with a macro F1-score of 76.9% (RoBERTa-base + Scenario). With threshold tuning, improved recall and precision rates yield an F1-score of 79.6%. A full breakdown of our results is provided in Table 4. Multi-class Results HalluciBot achieves a validation accuracy of 47.6%, with a Top 3 accuracy of 73.0% for the RoBERTa-large + Scenario model. Does Knowing the Scenario Matter? Including the scenario in HalluciBot\u2019s prompt has proven beneficial in distinguishing similar queries with varying contexts. For binary cases, incorporating the scenario leads to a notable increase of +15.5% in F1. For multi-class validation, the inclusion of scenario results in a +21.7% increase in Top 1 Accuracy and a +9.4% increase in Top 3 Accuracy. Test Distribution Shift The testing set poses three challenges: (1) the absence of Extractive samples; (2) an over-representation of Multiple Choice queries (94.4%); and (3) a distribution of multi-class labels that diverges, with significantly fewer hallucination-inducing queries compared to the training and validation sets. Consequently, we rely on the validation results instead.
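The threshold tuning described in Table 4's caption amounts to a grid sweep; a minimal sketch follows (array names are illustrative):

```python
# Sweep tau over [0, 1] in increments of 0.001 and keep the F1-maximizing value.
import numpy as np
from sklearn.metrics import f1_score

def tune_threshold(p_hallucinate, y_true):
    """Pick the decision threshold with the best validation F1."""
    taus = np.arange(0.0, 1.001, 0.001)
    scores = [f1_score(y_true, (p_hallucinate >= t).astype(int)) for t in taus]
    return float(taus[int(np.argmax(scores))])
```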
7 Conclusion We propose a heretofore unexplored realm of hallucination mitigation: predicting a query\u2019s hallucination probability. HalluciBot empirically estimates how the query itself may induce hallucination. HalluciBot\u2019s training corpus consists of diverse scenarios and domains to ensure robustness; we score an accuracy level of 73% for binary and top three multi-class labels. In essence, HalluciBot lays the foundation for a reproducible method of hallucination prediction that absorbs the cost of iterative generations during training. Moreover, the training process of HalluciBot delivers insight into the performance of gpt-3.5-turbo on lexical perturbations for 13 datasets and presents discoveries on how perturbations may induce hallucinations. Practically, institutions can implement HalluciBot to measure user accountability, penalizing those who input queries that have a hallucination probability above a certain threshold. Thus, HalluciBot\u2019s academic and practical contributions add to the ever-growing concerted effort of enabling a robust language generation ecosystem for society. Limitations Despite the promising results of HalluciBot, there are limitations to be acknowledged. Firstly, our approach relies on automated LLM crowdsourcing to generate query variations, which can introduce a certain degree of variability and noise in the data. Although we took measures to ensure the quality of LLM-crowdsourced data through examining well-formedness, there may be bias or error amongst variations. Secondly, our approach is computationally expensive during training, requiring multiple LLM agents and generating n variations per query. Thirdly, our datasets are English-centric, so our experimental results for robustness and consistency can only be interpreted and applied to English. Thus, it may be difficult to conclude if HalluciBot could withstand stress-testing if every query was perturbed into a new language, given that our BERT and RoBERTa backbones are English-centric. Fourthly, HalluciBot is trained solely on the outputs of gpt-3.5-turbo; however, as the framework is adaptable, future studies can train HalluciBot on any mixture of LLMs. Finally, there may be inherent bias in using gpt-3.5-turbo as both the Query Perturbator and the Output Generator. However, even though there is significant agreement regardless of correctness (Table 17), we feel confident that this is not a limiting factor of our work. Since every label is observed in significant quantities, we can assume that perturbations helped decouple semantic bias from gpt-3.5-turbo\u2019s general knowledge. Disclaimer This paper was prepared for informational purposes by the Artificial Intelligence Research group of JPMorgan Chase & Co. and its affiliates (\u201cJP Morgan\u201d), and is not a product of the Research Department of JP Morgan. JP Morgan makes no representation and warranty whatsoever and disclaims all liability, for the completeness, accuracy or reliability of the information contained herein. This document is not intended as investment research or investment advice, or a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction, and shall not constitute a solicitation under any jurisdiction or to any person, if such solicitation under such jurisdiction or to such person would be unlawful."
},
{
"url": "http://arxiv.org/abs/2402.16713v1",
"title": "Navigating Complexity: Orchestrated Problem Solving with Multi-Agent LLMs",
"abstract": "Large Language Models (LLMs) have demonstrated remarkable capabilities in\nsolving various tasks, yet they often struggle with comprehensively addressing\ncomplex and vague problems. Existing approaches, including multi-agent LLM\nsystems, offer solutions to certain challenges but still require manual setup\nand lack scalability. To address this gap, we propose a novel approach\nleveraging decomposition to enable LLMs to tackle vague problems effectively.\n Our approach involves an orchestrating LLM that interacts with users to\nunderstand the problem and then decomposes it into tangible sub-problems.\nInstead of expecting the LLM to solve the entire problem in one go, we train it\nto ask follow-up questions to gain a deeper understanding of the user's\nrequirements. Once the problem is adequately understood, the orchestrating LLM\ndivides it into smaller, manageable sub-problems. Each sub-problem is then\nassigned to specialized LLM agents or non-LLM functions for resolution. These\nagents work in parallel to solve their respective sub-problems, with the\norchestrating LLM overseeing the process and compiling the solutions into a\ncomprehensive answer for the user. By adopting this decomposition approach, we\nalleviate the constraints imposed by token limitations on LLM outputs and\nempower them to provide nuanced solutions to complex and ambiguous problems.\n Through our approach, we aim to enable LLMs to think and operate more like\nhumans, breaking down complex problems into manageable parts and\ncollaboratively solving them. This not only enhances the problem-solving\ncapabilities of LLMs but also offers a scalable and efficient method for\naddressing a wide range of real-world challenges.",
"authors": "Sumedh Rasal, E. J. Hauer",
"published": "2024-02-26",
"updated": "2024-02-26",
"primary_cat": "cs.MA",
"cats": [
"cs.MA"
],
"label": "Original Paper",
"paper_cat": "LLM AND Agent",
"gt": "Navigating Complexity: Orchestrated Problem Solving with Multi-Agent LLMs",
"main_content": "Introduction In recent years, the proliferation of Large Language Models (LLMs) has revolutionized problem-solving across various domains [Devlin et al., 2018] [Patil et al., 2023] [Touvron et al., 2023]. These models possess the ability to comprehend user queries and generate responses, often providing novel insights and solutions [Wei et al., 2022a]. Notably, LLMs exhibit exceptional performance in tasks that can be tackled through zero-shot or few-shot training techniques [Brown et al., 2020], such as transforming questions into familiar formats or solving coding problems based on analogous examples [Chen et al., 2021] [Sanh et al., 2021]. As such, they hold immense potential for automating numerous tasks and streamlining work\ufb02ows across diverse professional domains [Liu et al., 2023b] [Chiang and Lee, 2023] [Gao et al., 2023] [Shen et al., 2023] [Zheng et al., 2023]. However, despite their remarkable capabilities, LLMs are not immune to limitations. Instances of hallucination, contradictory responses, or misinterpretation of user queries highlight the challenges inherent in relying solely on these models for problem-solving [Azam\ufb01rei et al., 2023]. Existing frameworks designed to aid users in systematically addressing novel problems are constrained by token limits imposed by LLM models [Gur et al., 2023]. These limits restrict the number of words or tokens that an LLM can process in a single message or query, often hindering the comprehensiveness of responses and impeding the continuity of conversations. While approaches such as chaining \fNavigating Complexity: Orchestrated Problem Solving with Multi-Agent LLMs previous conversations or employing semantic search techniques mitigate some of these challenges, they are susceptible to errors, including redundancy or irrelevance in retrieved information [Ouyang et al., 2022] [Saunders et al., 2022] [Liu et al., 2023a]. To address these limitations, a promising avenue lies in leveraging multi-agent frameworks to tackle lengthy or complex problems [Du et al., 2023] [Cohen et al., 2023] [Dafoe et al., 2021] [Dafoe et al., 2020] [Kram\u00e1r et al., 2022] [Yao et al., 2022]. Such frameworks distribute problem-solving tasks among multiple LLM agents, offering a more robust approach compared to heuristic methods [Li et al., 2023] [Liang et al., 2023] [Qian et al., 2023] [Lazaridou et al., 2020] [Graesser et al., 2020] [Lee et al., 2018] [Wu et al., 2023]. However, the manual assignment of personas or roles to each LLM agent presents a bottleneck, particularly when dealing with novel problems lacking prede\ufb01ned roles [Rasal, 2024]. In this paper, we propose a novel solution that integrates the strengths of existing approaches while addressing their limitations. Our approach centers on an orchestration strategy that employs problem decomposition to facilitate the seamless resolution of lengthy or complex problems. Inspired by human problem-solving approaches, our framework involves an orchestration LLM that engages in iterative conversations with users to ascertain problem requirements. Once the requirements are elucidated, the orchestration LLM decomposes the problem into manageable subproblems and assigns them to specialized agents, each tailored to address speci\ufb01c aspects of the problem. Notably, our approach incorporates non-LLM functionalities where necessary, enhancing robustness and adaptability to real-world tasks [Qin et al., 2023]. 
By leveraging problem decomposition and distributed task assignment, our framework offers a comprehensive and scalable solution for problem-solving with LLMs. This is what we propose: \u2022 Our proposed solution integrates problem decomposition and multi-agent orchestration, inspired by human problem-solving approaches, to address the limitations of existing frameworks. \u2022 By leveraging problem decomposition and distributed task assignment, our framework offers a comprehensive and scalable solution for problem-solving with LLMs, enhancing adaptability to real-world tasks. 2 Methodology In response to these challenges, our methodology adopts a novel approach that seeks to address the token limit issue through algorithmic enhancements rather than relying solely on memory expansion. At the core of our methodology lies the development of an Orchestration LLM, tasked with the responsibility of comprehending user problem statements, decomposing them into manageable subproblems, and orchestrating interactions among specialized LLM agents. Inspired by human problem-solving strategies, our methodology mirrors the process of breaking down complex tasks into smaller, more digestible components. We employ advanced natural language processing (NLP) techniques to analyze and dissect complex queries, identifying distinct subproblems across various domains. These subproblems are then mapped to specialized LLM agents, each pre-trained on specific tasks such as data analysis, creative writing, or technical problem-solving, to ensure precision and relevance in their responses. Analogous to seeking expertise in real-life scenarios, our methodology leverages collaboration with domain experts to optimize problem-solving outcomes. For instance, just as individuals seek tax experts for efficient tax filing, our approach integrates specialized agents to tackle domain-specific challenges effectively. Furthermore, we develop an algorithmic framework that enables the Orchestration LLM to decompose complex problems based on predefined criteria such as problem domain, required expertise, and task complexity. Finally, we implement a task assignment system to allocate subproblems among selected specialized agents, considering factors such as availability, expertise, and workload to streamline the problem-solving process and optimize resource utilization. The high-level algorithm for our framework can be found in Algorithm 1. 2.1 Decomposition Approach The implementation of our framework initiates with the user\u2019s submission of a complex problem statement, facilitated through a conversational interface within the LangChain framework [Chase, 2022]. Within the Orchestration LLM, this input undergoes meticulous processing, wherein key information and requirements are extracted and analyzed with precision. Subsequently, the Orchestration LLM seamlessly applies a sophisticated decomposition algorithm, systematically breaking down the problem into manageable subproblems, thereby ensuring clarity and precision throughout the process.
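As a concrete, hedged illustration of this orchestration loop (formalized in Algorithm 1 below), the following Python sketch stubs out decomposition, agent selection, and aggregation; every name is illustrative rather than the paper's LangChain implementation.

```python
# Minimal orchestration loop: decompose, route each subproblem to a
# specialized agent, then compile the partial solutions into one answer.
from typing import Callable

def orchestrate(problem: str,
                decompose: Callable[[str], list[str]],
                select_agent: Callable[[str], Callable[[str], str]],
                aggregate: Callable[[list[str]], str]) -> str:
    subproblems = decompose(problem)      # orchestration LLM splits the task
    solutions = []
    for sub in subproblems:
        agent = select_agent(sub)         # match domain/expertise to an agent
        solutions.append(agent(sub))      # each agent solves independently
    return aggregate(solutions)           # comprehensive answer for the user
```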
Algorithm 1: Orchestration of Problem Solving with LLMs
Require: A complex problem statement
Ensure: A comprehensive solution to the problem
1: Begin
2: Input the complex problem statement
3: Decompose the problem into subproblems
4: for each subproblem do
5: Select a specialized LLM agent based on the subproblem's domain and requirements
6: Assign the subproblem to the selected agent
7: Agent solves the subproblem and stores the solution
8: end for
9: Aggregate the solutions of subproblems into a coherent overall solution
10: Output the comprehensive solution
11: End
The decomposition approach was crafted through a series of few-shot training iterations employing the GPT-4 model [OpenAI, 2023] [Bubeck et al., 2023]. Initially, the LLM model decomposed the main problem into vague subproblems, which necessitated further refinement to ensure alignment with the expected decomposition format. Utilizing few-shot training, we provided the model with chain-of-thought type reasoning to guide its decomposition process effectively. Upon successful decomposition, the Orchestration LLM assumes the pivotal role of assigning these subproblems to the most suitable specialized LLM agents. This assignment process is orchestrated to ensure that each subproblem is delegated to an agent possessing the requisite expertise and capabilities to effectively address it. Central to the framework's efficacy is the decomposition phase, which serves as its cornerstone. We deliberately crafted the initial design to be adaptable and generalizable across diverse problem domains, prioritizing the demonstration of decomposition capabilities over specialized use-case system development.
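Read as code, Algorithm 1 amounts to a decompose/assign/aggregate loop. Below is a minimal sketch of that loop, reusing decompose() from the earlier sketch; the SPECIALISTS registry and prompt texts are hypothetical stand-ins for the paper's specialized agents (gpt-3.5-turbo at temperature 0.0, with GPT-4 as the orchestrator, per the experimental setup described next).
```python
# Illustrative sketch of Algorithm 1; agent personas and prompts are invented.
from openai import OpenAI

client = OpenAI()

SPECIALISTS = {  # hypothetical registry of specialized agents (system prompts)
    "search": "You are a search specialist. Answer precisely.",
    "writing": "You are a writing specialist. Produce polished prose.",
    "math": "You are a careful math specialist. Show your work.",
}

def ask(system: str, user: str, model: str = "gpt-3.5-turbo") -> str:
    """One call to an agent; temperature 0.0 mirrors the paper's configuration."""
    out = client.chat.completions.create(
        model=model, temperature=0.0,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return out.choices[0].message.content

def orchestrate(problem: str) -> str:
    solutions = []
    for sub in decompose(problem):                        # lines 3-4 of Algorithm 1
        system = SPECIALISTS.get(sub["domain"], "You are a helpful generalist.")
        solutions.append(ask(system, sub["subproblem"]))  # lines 5-7
    # Line 9: aggregate partial solutions into one coherent answer via the orchestrator.
    return ask("You are the orchestrator. Merge the partial solutions into one coherent answer.",
               f"Problem: {problem}\nPartial solutions: {solutions}", model="gpt-4")
```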
3 Experiments
Our experiments center around the utilization of an orchestration LLM agent, employing OpenAI's gpt-4 model [OpenAI, 2023], which adeptly engages with users to elucidate requirements and subsequently decomposes problems into manageable subproblems. This orchestration LLM also determines the most suitable specialized LLM agent to address each subproblem. Each specialized LLM agent employed in our experiments is based on OpenAI's gpt-3.5-turbo model, configured with a temperature parameter set to 0.0, reflecting the balance between coherence and creativity in responses. Facilitating communication among LLM agents is the LangChain framework, enabling seamless multi-agent interaction and allowing each agent to focus on problem-solving.
3.1 Experiment 1: Travel Planning with LLM Agents
In this experiment, users present their travel plans to the framework, initiating a dialogue with the orchestration LLM to refine requirements and subsequently decompose the problem into subproblems. The orchestration LLM allocates tasks to specialized agents, which, upon completion, contribute their findings for collation into a final course of action. Before execution, the orchestration LLM verifies details with the user to ensure accuracy. Here is the conversational output of the framework.
Initial Interaction: Gathering Requirements
User: I need to book a return flight from Chicago Airport to Newark Airport from 3/22/24 to 3/26/24. I want to fly in luxury and travel between the hours of 10 am and 4 pm.
Orchestration LLM Agent: Got it. To ensure I find the best options for you, could you specify your preferences for seating class and any additional services you might require during your flight?
User: I prefer business class and would like access to in-flight Wi-Fi and a window seat.
Orchestration LLM Agent Decomposes the Problem
The Orchestration LLM Agent processes the user's request and decomposes it into subproblems to be handled by specialized agents:
Flight Search Agent: Find business class flights from Chicago to Newark that depart between 10 am and 4 pm on 3/22/24 and return flights on 3/26/24.
Amenity Preferences Agent: Identify airlines offering in-flight Wi-Fi and window seat availability in business class.
Booking Agent: Handle the booking process once the flight options are selected.
Interaction with Specialized Agents
Orchestration LLM Agent to Flight Search Agent: Find business class flights from Chicago to Newark departing between 10 am and 4 pm on 3/22/24 and returning on 3/26/24.
Flight Search Agent: Found several options.
Orchestration LLM Agent to Amenity Preferences Agent: Which of these options offers in-flight Wi-Fi and window seats in business class?
Amenity Preferences Agent: [Lists airlines that meet the criteria]
Finalizing the Option
Orchestration LLM Agent to User: Based on your preferences, here are the best flight options that offer business class, in-flight Wi-Fi, and window seats. [Presents options]
User: I'll take the option with Airline X on 3/22 at 2 pm.
Orchestration LLM Agent to Booking Agent: Please book the flight with Airline X for 3/22, departing at 2 pm in business class, with a window seat and in-flight Wi-Fi.
Booking Agent: Your flight is booked. Here are your confirmation details.
3.2 Experiment 2: Exploratory Research and Summarization
Users pose exploratory research queries to the framework, prompting the orchestration LLM to segment the problem into subproblems and assign tasks to specialized agents. Following the completion of tasks, the specialized agents provide their outputs, which are synthesized by the orchestration LLM into a comprehensive summary. Feedback from the user is sought to refine the final output as necessary. Here is the conversational output of the framework.
Initial Interaction: Gathering Requirements
User: I need to research topics about long-term memory management in LLMs, focusing on what's currently possible and what's not. Then, I'd like a 5-page article summarizing these findings and suggesting future research directions.
Orchestration LLM Agent Decomposes the Problem
The Orchestration LLM Agent analyzes the request and identifies several key tasks:
Literature Review Agent: Conduct a comprehensive review of existing research on long-term memory management in LLMs.
Analysis Agent: Analyze the collected research to distinguish between solved problems, ongoing challenges, and unexplored areas.
Writing Agent: Synthesize the findings into a coherent 5-page article, highlighting current capabilities and limitations, and suggesting future research directions.
Interaction with Specialized Agents
Task 1: Conduct Literature Review
Orchestration LLM Agent to Literature Review Agent: Identify and summarize key research articles on long-term memory management in LLMs, focusing on methodologies, findings, and limitations.
Literature Review Agent: [Returns a summary of articles, including key methodologies, findings, challenges, and limitations related to long-term memory management.]
Task 2: Analyze Research
Orchestration LLM Agent to Analysis Agent: Based on the literature review, analyze the current state of long-term memory management in LLMs, identifying what is possible, what remains challenging, and potential areas for future research.
Analysis Agent: [Provides an analysis distinguishing between the current capabilities and limitations of long-term memory management in LLMs, along with suggestions for future research directions.]
Task 3: Write Article
Orchestration LLM Agent to Writing Agent: Using the information collected and analyzed, write a 5-page article summarizing the findings on long-term memory management in LLMs. Highlight what is currently possible and the limitations, and suggest ideas for future research.
Writing Agent: [Drafts a 5-page article that summarizes the state of long-term memory management in LLMs, including an overview of existing research, current limitations, and proposed future directions.]
Finalizing the Article
Orchestration LLM Agent to User: Based on the comprehensive research and analysis conducted by specialized agents, we have compiled a 5-page article summarizing the current state of long-term memory management in LLMs. The article discusses what is currently possible, outlines the main challenges and limitations, and suggests potential areas for future research.
User: Reviews the article and provides feedback or requests revisions if necessary.
4 Results
Rather than relying solely on conventional datasets typically utilized in research, we opted to assess the efficacy of our framework using the GSM8K dataset, chosen specifically to demonstrate the capabilities of our approach. We employed four distinct types of agents in our experimentation: a single LLM agent powered by the gpt-3.5-turbo model; two LLM agents supported by gpt-3.5-turbo models; two LLM agents using the gpt-3.5-turbo model combined with the chain-of-thought and persona approach [Wei et al., 2022b] [Rasal, 2024]; and finally, two LLM agents operating with the GPT-4 model [OpenAI, 2023] for the orchestration and decomposition methodology and the gpt-3.5-turbo model for the specialized LLM agents, which involves breaking down the primary problem into subproblems. Our orchestration framework represents a notable advancement over existing methodologies, with detailed results presented in Table 1.
Table 1: GSM8K dataset evaluation.
Agent | Solve Rate
Single GPT-3.5-turbo | 50%
Multi-Agent GPT-3.5-turbo | 55%
Multi-Agent GPT-3.5-turbo (LLM-Harmony) | 65%
Multi-Agent GPT-4 (Our Approach) | 73%
\"GSM8K consists of 8.5K high-quality grade school math problems created by human problem writers. We segmented these into 7.5K training problems and 1K test problems. These problems take between 2 and 8 steps to solve, and solutions primarily involve performing a sequence of elementary calculations using basic arithmetic operations (+ − × ÷) to reach the final answer. A bright middle school student should be able to solve every problem.\" [Cobbe et al., 2021]
Furthermore, our observations indicated a significant enhancement in accuracy when employing a calculator function as opposed to a math-specialized LLM agent. This improvement stems from the inherent limitations of LLMs in executing mathematical operations, despite their versatility across various domains. We contend that rather than training LLMs for math operations, our framework facilitates the orchestration LLM in discerning whether a subproblem necessitates an LLM agent or an alternative non-LLM approach.
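One way to realize this routing is sketched below, under the assumption that the orchestration LLM tags each subproblem with a hypothetical \"tool\" field during decomposition: arithmetic subproblems go to a deterministic evaluator rather than an LLM agent (ask() and SPECIALISTS as sketched earlier).
```python
# Illustrative tool-routing sketch; the "tool"/"expression" fields are invented.
import ast
import operator as op

OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def safe_eval(expr: str) -> float:
    """Evaluate a basic arithmetic expression (+ - * /) without exec/eval."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -walk(node.operand)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError(f"unsupported expression: {expr!r}")
    return walk(ast.parse(expr, mode="eval"))

def solve_subproblem(sub: dict) -> str:
    # Route arithmetic to the calculator; everything else to an LLM specialist.
    if sub.get("tool") == "calculator":
        return str(safe_eval(sub["expression"]))
    return ask(SPECIALISTS.get(sub.get("domain"), "You are a helpful generalist."),
               sub["subproblem"])
```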
5 Limitations
While our decomposition framework represents a significant advancement, it possesses inherent limitations that warrant consideration. Foremost among these limitations is the framework's lack of generalization to address all conceivable problem scenarios, both novel and otherwise. We recognize the necessity of further training the orchestration model, essentially serving as a foundational Large Language Model (LLM), to acquire a comprehensive understanding of diverse problem-solving approaches. Presently, the model exhibits proficiency in decomposing various problems, yet its scope may not extend to accommodate the complexities of future challenges. For instance, consider a user's request for transportation. While the current model adeptly decomposes the problem into a specialized agent capable of booking a ride, scenarios may arise where multiple specialized agents possess this capability. In such instances, the orchestration LLM faces the challenge of determining the most appropriate course of action. Additionally, the framework lacks a mechanism to assimilate information about new specialized agents, highlighting the need for a retraining mechanism to continually update the orchestration LLM's knowledge base. Furthermore, while the current model effectively decomposes problems, it lacks the capability to iteratively learn and refine its problem-solving strategies based on the outcomes of decomposition steps. Addressing this limitation represents a key focus of our future research endeavors. Nonetheless, the primary objective of this paper is to disseminate our decomposition approach to the readership, offering insights into its application in problem-solving contexts.
6 Conclusion
In conclusion, our paper introduces a novel approach to problem-solving using a decomposition framework orchestrated by Large Language Models (LLMs). Through meticulous decomposition of complex problem statements into manageable subproblems, our framework demonstrates the potential to streamline problem-solving processes and mitigate the limitations inherent in current LLM-based solutions. By leveraging the inherent capabilities of LLMs to understand user queries and decompose problems, our approach offers a promising avenue for addressing a wide range of challenges across various domains. Our implementation of the framework highlights its efficacy in breaking down intricate problem statements and assigning tasks to specialized LLM agents, resulting in coherent and comprehensive solutions. Moreover, our experiments underscore the versatility of the framework in facilitating tasks such as travel planning and research summary generation, showcasing its applicability in real-world scenarios. However, our work also reveals several limitations that necessitate further research and development. While the decomposition framework represents a significant advancement, its generalization to novel problem scenarios remains an ongoing challenge. Additionally, the framework lacks mechanisms for adaptive learning and integration of new specialized agents, indicating avenues for future improvement.
Despite these limitations, our paper contributes to the growing body of literature on LLM-based problem-solving approaches by introducing a systematic and scalable framework. By sharing our approach and insights, we aim to stimulate further research and innovation in this domain, paving the way for enhanced problem-solving capabilities and the realization of artificial general intelligence."
},
{
"url": "http://arxiv.org/abs/2404.02039v1",
"title": "A Survey on Large Language Model-Based Game Agents",
"abstract": "The development of game agents holds a critical role in advancing towards\nArtificial General Intelligence (AGI). The progress of LLMs and their\nmultimodal counterparts (MLLMs) offers an unprecedented opportunity to evolve\nand empower game agents with human-like decision-making capabilities in complex\ncomputer game environments. This paper provides a comprehensive overview of\nLLM-based game agents from a holistic viewpoint. First, we introduce the\nconceptual architecture of LLM-based game agents, centered around six essential\nfunctional components: perception, memory, thinking, role-playing, action, and\nlearning. Second, we survey existing representative LLM-based game agents\ndocumented in the literature with respect to methodologies and adaptation\nagility across six genres of games, including adventure, communication,\ncompetition, cooperation, simulation, and crafting & exploration games.\nFinally, we present an outlook of future research and development directions in\nthis burgeoning field. A curated list of relevant papers is maintained and made\naccessible at: https://github.com/git-disl/awesome-LLM-game-agent-papers.",
"authors": "Sihao Hu, Tiansheng Huang, Fatih Ilhan, Selim Tekin, Gaowen Liu, Ramana Kompella, Ling Liu",
"published": "2024-04-02",
"updated": "2024-04-02",
"primary_cat": "cs.AI",
"cats": [
"cs.AI"
],
"label": "Original Paper",
"paper_cat": "LLM AND Agent",
"gt": "A Survey on Large Language Model-Based Game Agents",
"main_content": "Introduction Intelligence emerges in the interaction of an agent with an environment and as a result of sensorimotor activity. \u2014 The Embodied Cognition Hypothesis [1] Large language models (LLMs), exemplified by ChatGPT [2], represent an important milestone in natural language understanding (NLU) and generative artificial intelligence (Gen-AI). Empowered by generative training over massive data of diverse web sources with hundreds of billions of parameters, LLMs demonstrate astonishing capabilities of generalizing knowledge from huge text corpus data and displaying conversational intelligence in natural language with human-level NLU performance. The emergence of multimodal LLMs (MLLMs), such as GPT-4V [3] and Gemini [4], marks another milestone, enabling LLMs to perceive and understand visual input. We conjecture that the success of LLM technologies fuels an unprecedented opportunity in the pursuit of human-like Artificial General Intelligence (AGI): the cognitive capabilities previously thought to be exclusive to humans, such as reasoning, planning, and reflection, with a degree of self-control, self-understanding, and self-improving, are now achievable by integrating appropriately prompting of LLMs with built-in cognitive intelligence. We define an LLM-based agent (LLMA) as an intelligent entity that employs LLMs1 as a core component to conduct human-like decision-making process [5]. Even though LLMAs are capable of cognitive processing similar to human, a distinction between existing LLMAs and human-like 1In this paper, LLMs refers to both large language models (LLMs) and multimodal large language models (MLLMs). arXiv:2404.02039v1 [cs.AI] 2 Apr 2024 \fAGI is evident: current LLMAs rely on decoding and generalizing pre-existing knowledge derived from pre-training data [6], while AGI is capable of discovering and learning new knowledge through experimentation and experience in real world [7; 8]. Inspired by the process of intelligence development in human infants, the embodied cognition hypothesis [1] posits that the intelligence of an agent emerges from observing and interacting its environment, i.e., grounding the agent in a world that integrates physical, social, and linguistic experiences is vital for fostering conditions conducive to the development of human-like intelligence. Digital games are recognized as ideal environments for cultivating AI agents due to their complexity, diversity, controllability, safety and reproducibility. Games, ranging from classical chess and poker games [9; 10; 11] to modern video games like Atari games [12], StarCraft II [13], Minecraft [14] and DOTA II [15], have been long instrumental in advancing AI research. Unlike traditional Reinforcement Learning (RL)-based agents [10; 16; 17; 18] that make decisions with the goal of maximizing expected rewards through behavior-level policy learning, constructing LLM-based game agents (LLMGAs) capable of employing cognitive abilities to gain fundamental insights into gameplay, potentially aligns more closely with the pursuit of AGI. Previous survey papers on LLMs [19; 20; 21] or LLMAs [22; 23; 24] mainly focus on reviewing existing LLMs developed in industry and academic research teams, as well as the general applications of LLMAs, paying less attention to the field of game agents. Concurrent survey papers [25; 26] place a notable emphasis on the game development and cover a limited number of publications on LLMGAs. 
To bridge this gap, this paper attempts to conduct a comprehensive and systematic survey of recent developments in LLMGAs. Specifically, this survey is organized into three synergistic parts: First, we provide a unified reference framework, in which we describe the essential modules for constructing LLMGAs, covering six core functional components: perception, memory, thinking, role-playing, action and learning. Second, we introduce a taxonomy that categorizes existing literature into six game categories, including adventure, communication, competition, cooperation, simulation, and crafting & exploration. For each category, we describe the technical challenges, the supporting game environments, as well as the commonly used optimization strategies. In the third and final part, we envision different directions for future advancement of LLMGAs. In summary, this survey paper serves as a comprehensive review of the literature on LLMGAs, offering a taxonomy of six game categories to enhance understanding and facilitate the development and assessment of various LLMGAs. It aims to catalyze progress within this nascent research area and to inspire further innovation in research and development of LLMGAs. Given that this is a new and burgeoning research field, this survey paper will be continuously updated to keep track of the latest studies. A curated list of relevant literature is maintained and accessible at https://github.com/git-disl/awesome-LLM-game-agent-papers.
2 A Unified Architecture for LLMGAs
Figure 1 provides a conceptual architecture of LLMGAs that consists of the six essential functional components and their workflow: For each game step, the perception module captures game state information, providing the necessary data for the agent to understand its current environment. The thinking module processes the perceived information, generating thoughts based on reasoning, planning, and reflection for informed decision-making. Memory serves as an external storage, where past experiences, knowledge and curated skills are retained and can be retrieved for future use. The role-playing module enables the agent to simulate specific roles within the game, exhibiting believable behaviors that align with each role's characteristics and objectives. The action module translates the generated text decisions into executable actions, allowing the agent to interact with and manipulate game elements effectively. The learning module continuously improves the agent's cognitive and game-playing abilities through accumulated experience and interaction within the game environments.
2.1 Perception
Perception acts like the agent's sense organs, such as eyes, whose primary role is to perceive input from a multimodal domain that encompasses various modalities, including text, visuals, sound, touch, etc. Efficient and robust perception functions are critical to empower a game agent to accurately capture the important game state information for decision-making.
Figure 1: The conceptual architecture of LLMGAs. At each game step, the perception module perceives the multimodal information from the game environment, including text, images, symbolic states, and so on. The agent retrieves essential memories from the memory module and takes them along with the perceived information as input for thinking (reasoning, planning, and reflection), enabling itself to formulate strategies and make informed decisions.
The role-playing module affects the decision-making process to ensure that the agent's behavior aligns with its designated character. Then the action module translates generated action descriptions into executable and admissible actions for altering game states at the next game step. Finally, the learning module serves to continuously improve the agent's cognitive and game-playing abilities through accumulated gameplay experience.
All text-based games, regardless of whether they are text adventure games, such as Zork I [27], or communication games, such as Werewolf [28], are described using natural language and can be directly tackled by LLMs. In contrast, for video games there are three primary ways to enable agents to perceive the game state:
1. State variable access: Some game environments [29; 30; 31; 14; 32] support access to symbolic state variables via internal APIs. For example, a Pokémon in Pokémon battles [30] can be represented by the state variables of species, statistics, status, and moves, without relying on any visual information. In Minecraft, Mineflayer [33] provides high-level APIs to access the local environment state, such as positions, blocks, and inventory. The state values are filled into designed prompt templates to form textual descriptions of game states. However, not all games support internal APIs, and describing games merely with symbolic states can result in information loss, especially for games that require detailed visual information to fully capture the gameplay experience, like Red Dead Redemption 2 [34] and StarCraft II [29].
2. External visual encoder: To solve the text-only problem, existing studies equip LLMs with external visual encoders to translate visual information into textual observations. For example, CoELA [35] and LLMPlanner [36] adopt object detectors to recognize objects within the agent's field of view in embodied environments. The CLIP [37] visual encoder and its variants are widely used for mapping images into pre-defined text descriptions [38; 39; 40]. For example, JARVIS-1 [39] uses MineCLIP [41] to select the most similar text description from a set of 1,000 Minecraft text data entries for images; ELLM [40] adopts ClipCap [42] as the captioner for visual observations: it maps the CLIP embedding to a 10-token sequence, which is fed as the prefix for GPT-2 to generate the whole caption.
3. Multimodal LLMs (MLLMs): Visual encoders fall short in generalizability for unseen scenarios or objects, as they primarily rely on predefined text descriptions for classification. In comparison, MLLMs align visual and textual information in a unified representation space and decode them into natural language, thereby enabling better generalizability across unknown scenarios. General-purpose MLLMs like GPT-4V [3] are adopted in the game-playing of RDR2 [34], Doom [43], Minecraft [44] and simulated embodied households [45] to directly perceive visual observations for decision-making or to generate text data as the perception module, but they usually need error correction mechanisms [45; 34] with feedback from the environments to address inaccuracies. Game-specific MLLMs involve supervised learning on multimodal instruction data generated by experts, such as GATO [46] and SteveEye [47], or are learned from environmental feedback through RL, such as Octopus [45].
In summary, for video games, accessing symbolic states requires the support of internal APIs. External visual encoders suffer from limited generalizability, as they cannot fully cover all scenarios or objects, especially those without predefined textual descriptions. Although general-purpose MLLMs address the above-mentioned issues, they are still insufficient for distinguishing fine-grained details like the relative positions of target objects, and they struggle to understand game-specific concepts [34]. Grounding and disciplining MLLMs with game experience and feedback [45] is a promising way to enable better perception and understanding for games.
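As an illustration of the external-visual-encoder pattern, the sketch below scores a game frame against a fixed set of candidate text descriptions with an off-the-shelf CLIP model and keeps the best match as the textual observation. The candidate list is invented for illustration; MineCLIP itself is a video-text variant trained on Minecraft data, not the generic checkpoint used here.
```python
# Illustrative CLIP-based perception sketch; candidate descriptions are made up.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

CANDIDATES = ["a creeper approaching", "a tree nearby", "an open plain", "inside a cave"]

def describe_frame(frame: Image.Image) -> str:
    """Return the candidate description most similar to the given game frame."""
    inputs = processor(text=CANDIDATES, images=frame, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # shape: (1, len(CANDIDATES))
    return CANDIDATES[int(logits.argmax())]
```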
(Figure 2: Mind map for the perception module.)
2.2 Memory
Humans rely on memory systems to memorize prior experiences for recalling, learning, and applying knowledge in future scenarios. Similarly, LLMGAs necessitate memory systems to ensure operational coherence and efficiency, serving as a repository for their past observations, thoughts, actions, and skills, from which agents retrieve essential information for strategy formulation and decision-making. From the perspective of cognitive science [57; 5], the human memory mechanism can be divided into working memory and long-term memory, where working memory stores an agent's current context, and long-term memory stores the agent's past experience and thoughts. For LLMGAs, working memory can be regarded as the context of LLMs, and the term \"memory\" here refers to long-term memory, which acts as an external storage. Long-term memory stores episodic memories [58] such as observation streams [59] and previous game trajectories [28; 60; 61] generated through the perception module; high-level semantic memories [62] that represent the agents' understanding of themselves and the game world [59; 63], generated through the thinking module; and procedural memories [64], which represent curated skills stored as code [65; 34] or plans [66; 39].
Retrieval: As memories accumulate over time, the majority of them become distractions for decision-making. Retrieval plays an essential role in filtering through stored records and passing the most relevant memories to the agent. Memory records are typically stored as key-value pairs. In semantic retrieval, the process involves calculating the semantic similarity between the representations of a query and the memory keys, and selecting the memory values with the highest similarity to the query object. The query object can take various forms, such as self-instructed questions [59], task-triggered questions [65], predefined questions [28], or visual observations [39]. Specifically, in Voyager [65]'s memory system, the keys are program descriptions, and the values are the previously executed successful program codes. In JARVIS-1 [39], the keys are composed of task descriptions paired with observations in images, while the values represent the previously executed plans. Additionally, to simulate the human forgetting mechanism, Generative Agents [59] take into account recency and importance, where recency is calculated using an exponential decay function over game hours, and importance is evaluated by the LLM to differentiate mundane details from core information.
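A minimal sketch of key-value retrieval with a recency decay, in the spirit of the scoring just described (semantic similarity weighted by an exponential decay over game hours). The embedding model and decay constant are placeholders, and the importance term is omitted for brevity.
```python
# Illustrative memory-retrieval sketch; model choice and decay rate are arbitrary.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

class Memory:
    def __init__(self, decay: float = 0.995):
        self.keys, self.values, self.times = [], [], []
        self.decay = decay  # per-game-hour exponential decay

    def add(self, key: str, value: str, game_hour: float):
        self.keys.append(encoder.encode(key, normalize_embeddings=True))
        self.values.append(value)
        self.times.append(game_hour)

    def retrieve(self, query: str, now: float, k: int = 3) -> list[str]:
        """Return the k memory values with the highest similarity-times-recency score."""
        q = encoder.encode(query, normalize_embeddings=True)
        scores = [float(np.dot(q, key)) * self.decay ** (now - t)
                  for key, t in zip(self.keys, self.times)]
        top = np.argsort(scores)[::-1][:k]
        return [self.values[i] for i in top]
```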
(Figure 3: Mind map for the memory module.)
2.3 Role-playing
Role-playing enables agents to assume diverse characters or roles within the game, generating believable conversations and behaviors appropriate to the given roles. Many games feature role-playing elements [59; 28; 52] where players assume specific roles and engage in game-playing from the perspective of the characters, leading to immersive gaming experiences. Role-playing is also important for building Non-Player Characters (NPCs) [68] and game assistants [69], as well as for generating dialogues [70]. It has been shown that assigning different personality types can largely influence the generative style of LLMs [71; 70]. Role-playing can enhance the vividness [72], personalization [73] and proficiency [74] of LLMAs, and generating dialogues with affective information makes agents' behavior more believable [75; 76]. For role-playing, the simplest way is to directly insert natural language descriptions of a role's identity, such as character traits, hobbies, occupation and social relationships, as initial memories for the agent [59]. Evaluations show that providing few-shot dialogue examples or fine-tuning can further enhance role-playing performance in conversational tasks [70; 77]. Recent advanced approaches such as CharacterLLM [78] build imaginary experiences from characters' profiles, and fine-tune LLMs with these experiences to enable agents to exhibit consistent personalities and express emotions.
2.4 Thinking
Thinking is the cognitive process of analyzing and integrating information. In this section, we introduce two primary thinking methods for decision-making: reasoning and planning. Reasoning involves using deduction, induction, and abduction to generalize observations, derive conclusions, and infer explanations. In comparison, planning strategizes decision steps to achieve complex and long-horizon goals.
2.4.1 Reasoning
Reasoning [79] is a process that starts from observation, factual evidence, and previous thoughts, then progresses through analyzing and synthesizing these elements to deduce specific conclusions from general principles (deduction), infer general principles from specific instances (induction), or formulate the most likely explanations (abduction). Reasoning is fundamental to human cognition, enabling individuals to make sense of the world, solve problems, and make informed decisions.
LLMGAs [80; 56; 43; 30] adopt general-purpose reasoning approaches [81; 82; 80; 83; 84] to analyze information logically, providing informative insights for decision-making: ReAct [80] introduces reasoning to condition the generation of actions with few-shot prompting; CoT [81] and Zero-CoT [82] decompose the entire thinking process into multiple chained thoughts, enabling the step-by-step elucidation of complex problems; SC [84], ToT [83] and GoT [85] follow a multi-path reasoning paradigm: SC performs multiple independent reasoning passes and chooses the most frequent result as the final output, while ToT [83] and GoT [85] organize reasoning paths into tree- and graph-like structures to enhance reasoning ability.
Reflection [63; 38; 86] can be recognized as a special type of reasoning that usually occurs after feedback from previous trials is provided. It involves the agent analyzing its own actions, decisions, and thought processes, and considering how these could be improved based on the feedback received. This reflective process allows for the adjustment and refinement of strategies and behaviors, contributing to learning and development over time. Specifically, Reflexion [63], DEPS [38], AgentPro [61] and ProAgent [87] identify errors and inefficiencies in past failed attempts through self-reflection and reuse the thoughts to enhance performance in subsequent trials. Moreover, RCI [88], Self-Refine [86] and GPTLens [89] demonstrate that feedback can come not only from environments but also from LLMs themselves, and they iteratively refine the results by incorporating self-generated reflection.
In game playing, Hu et al. [30] discover that when encountering a powerful opponent, CoT can introduce panic feelings, causing the agent to act inconsistently, such as switching to different Pokémon in consecutive turns; SC alleviates the issue by voting for the most consistent action. Theory-of-Mind (ToM) thinking [90; 91] involves inferring others' intentions from a shifted perspective; it demonstrates improvements in imperfect-information games like Poker [54] and enables LLMGAs to recognize partners' intentions for assistance in cooperation games [92]. Moreover, reflecting on surface observations or experiences can provide high-level, abstract thoughts, which helps the agent act more reasonably and believably [59].
2.4.2 Planning
Humans utilize planning as a strategic tool to address and manage challenging and long-term tasks. For LLMGAs, planning involves the decomposition of a complex task into a set of simpler, executable subtasks. Existing planning approaches can be categorized as goal-free planning and goal-conditioned planning, based on whether a predefined goal is necessary for the planning process.
Goal-free planning: Open-ended games usually do not have predefined goals for players to achieve. Generating a goal plan saves the agent from being overwhelmed by numerous possible actions. Existing goal-free planning approaches [59; 93; 65; 40; 94; 95] primarily instruct LLMs to generate goal plans. Voyager [65], ELLM [40], SPRING [94] and AdaRefiner [95] prompt LLMs with the agent's states, such as hunger, inventory, and equipment, and local observations to generate suitable next goals; OMNI [93] prompts LLMs to select interesting and learnable tasks for the agent to explore the open world. In simulation games, a long-term daily plan can effectively prevent incoherent behaviors [59; 30]. Generative Agents [59] utilize a top-down approach for generating a one-day plan for human-simulacra agents, starting with a broad initial plan for the day, then breaking it down into more detailed action plans. After planning, agents can choose to either continue with the plan or react to their dynamic local environments.
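A minimal sketch of goal-free planning in this style: prompt an LLM with the agent's state and local observation and ask for a single suitable next goal. The prompt wording is illustrative and not taken from Voyager or ELLM; it assumes the OpenAI v1 Python SDK.
```python
# Illustrative goal-proposal sketch; the prompt template is invented.
from openai import OpenAI

client = OpenAI()

GOAL_PROMPT = """You are a Minecraft agent. Inventory: {inventory}. Health: {health}.
Nearby blocks: {observation}. Propose exactly one feasible, useful next goal
(a short imperative phrase such as "craft a wooden pickaxe")."""

def propose_goal(inventory: list[str], health: int, observation: list[str]) -> str:
    out = client.chat.completions.create(
        model="gpt-4", temperature=0.0,
        messages=[{"role": "user", "content": GOAL_PROMPT.format(
            inventory=inventory, health=health, observation=observation)}],
    )
    return out.choices[0].message.content.strip()
```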
Goal-conditioned planning: A goal-conditioned planner recursively translates a goal, task, or instruction into a set of subgoals until it reaches structured actions. Goal-conditioned planning is used for addressing long-horizon and complex tasks such as crafting tools [38; 66] or performing quests [36; 96; 97]. Existing studies primarily instruct LLMs to generate plans. ZeroShotPlanner [98] and LLMPlanner [36] prompt LLMs with zero-shot or few-shot examples for planning. Given the difficulty of generating a correct plan on the first attempt, GITM [66] and JARVIS-1 [39] leverage external knowledge, such as item crafting recipes, to enhance planning, and also incorporate environmental feedback such as error messages to refine the initial plan; DEPS [38] introduces error correction on initial plans by integrating a description of the plan execution and self-explanation of feedback when encountering failures; Adapt [96] and SwiftSage [97] adaptively decompose tasks with LLMs when encountering execution failures; S-Agents [99], HAS [67] and MindAgent [100] operate in a hierarchical cooperation structure in which an LLM planner dispatches sub-tasks to multiple agents for efficient execution.
(Figure 4: Mind map for the thinking module.)
2.5 Action
After humans make decisions to take actions, they control their bodies, such as their hands, to execute these actions, translating cognitive decisions into physical movements that interact with the world around them. The action module serves as the hands of LLMGAs, translating language-described decisions into executable actions in the game environment, enabling the agents to interact with and alter their surroundings or game states. Different games necessitate different levels of granularity in agents' output actions. For games requiring manipulative control like RDR2 [34], Minecraft [14] and Overcooked [102], low-level actions like keyboard or mouse operations are required. In contrast, games without manipulative control, like text adventure games [103; 104], Pokémon battles [30] and Poker [53], directly facilitate the execution of high-level actions.
LLMs typically generate high-level actions rather than low-level actions. Therefore, for games with manipulative control, a translation module is required to translate LLM-generated actions into low-level actions. Existing studies adopt heuristics [92; 105; 66; 33; 59] or RL policies [40; 101; 106] for translating a high-level action into low-level action sequences. Heuristic-based translation generates low-level movements using path-finding algorithms, along with manipulative actions. For example, in Overcooked, given a high-level action \"chop tomato\", the translation module identifies the shortest path to the target with a breadth-first search algorithm and produces a sequence of movements along with the chop action [92; 105]; in Minecraft, the high-level \"approach\" action uses an A* algorithm for path-finding and executes low-level actions like jump, move and fall in four directions [66; 33]. In comparison, RL-based approaches [41; 40; 101; 106] train language-conditioned RL policies that take observations and high-level actions as input to generate low-level actions, rewarded based on the semantic similarity between the goals and the agent's transitions.
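The Overcooked example above can be made concrete with a few lines of breadth-first search; the grid encoding and action names below are illustrative, not those of any specific environment.
```python
# Illustrative high-level-to-low-level translation via BFS on a grid world.
from collections import deque

def translate(start, target, walls, final_action="chop"):
    """Return low-level actions: shortest movement sequence to target, then final_action."""
    moves = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}
    queue, seen = deque([(start, [])]), {start}
    while queue:
        (x, y), path = queue.popleft()
        if (x, y) == target:
            return path + [final_action]
        for name, (dx, dy) in moves.items():
            nxt = (x + dx, y + dy)
            if nxt not in seen and nxt not in walls:
                seen.add(nxt)
                queue.append((nxt, path + [name]))
    return []  # target unreachable

# e.g. translate((0, 0), (2, 1), walls={(1, 0)}) yields a shortest
# movement sequence ending in "chop".
```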
Games without manipulative control can be divided into parser-based games [27; 107] and choice-based games [30]. Parser-based games require LLMs to generate an action word by word, whereas choice-based games only need LLMs to select from a set of given actions. For parser-based games, ZeroShotPlanner [98] proposes semantic translation that maps LLM-generated free-form actions to semantically similar, admissible actions; SayCan [108] calculates the probability of each admissible action using the chain rule, multiplying the conditional generation probability of each successive string given the previous string.
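A sketch of that chain-rule scoring with GPT-2 as a stand-in language model: each admissible action is scored by the summed log-probability of its tokens given the game context, and the highest-scoring action is selected.
```python
# Illustrative chain-rule action scoring; GPT-2 stands in for the agent's LM.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def action_logprob(context: str, action: str) -> float:
    """log P(action | context), summed over the action's tokens (chain rule)."""
    ctx_ids = tok(context, return_tensors="pt").input_ids
    ids = torch.cat([ctx_ids, tok(" " + action, return_tensors="pt").input_ids], dim=1)
    with torch.no_grad():
        logits = lm(ids).logits
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)  # predictions for tokens 1..N-1
    targets = ids[0, 1:]
    action_span = range(ctx_ids.shape[1] - 1, ids.shape[1] - 1)
    return sum(float(logprobs[i, targets[i]]) for i in action_span)

def best_action(context: str, admissible_actions: list[str]) -> str:
    return max(admissible_actions, key=lambda a: action_logprob(context, a))
```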
2.6 Learning
Humans are able to refine their cognitive abilities and acquire knowledge by interacting with the physical world, gaining hands-on experience through direct engagement with their environments. Similarly, an LLMGA's learning process involves improving its cognitive and game-playing abilities over time, based on the experiences and feedback received from the game environment. LLMs encode a wealth of semantic knowledge about the world while lacking real experience within environments, i.e., they are ungrounded [108]. The majority of existing LLMGAs adopt frozen LLMs to play games, relying on carefully designed prompts [28; 109] or external knowledge [30; 39; 66]. In comparison, enabling LLMGAs to learn in environments is crucial, since it closely mirrors the way humans acquire knowledge through interacting with the real world. Existing learning approaches can be divided into three categories: in-context feedback learning, supervised fine-tuning and reinforcement learning.
In-context feedback learning: Feedback represents a type of evaluation of previous strategies. By including feedback from environments in the context, LLMs are able to iteratively \"reinforce\" strategy generation without updating weights [63; 65; 66; 30]. Specifically, Reflexion [63] and DEPS [38] generate self-reflections/explanations on feedback such as failure signals and reuse the thoughts for the next trial; Voyager [65] and GITM [66] iteratively prompt LLMs to re-generate action code with error messages; Hu et al. [30] use manually generated feedback, such as the HP change across consecutive turns, as an evaluation of previous actions. Furthermore, existing works [63; 86] demonstrate that feedback can come not only from the game environments but also from LLMs themselves (a minimal sketch of this loop follows at the end of this section).
Supervised fine-tuning: Supervised fine-tuning [110; 111] gathers high-quality experience to fine-tune LLMs, based on the assumption that such experiences encompass environmental knowledge. Specifically, E2WM [110] collects embodied experience in VirtualHome with Monte Carlo Tree Search and random exploration; LLaMARider [111] gathers experience in Minecraft via self-reflection with feedback. Both of them demonstrate that fine-tuning on the collected experience enhances the capability of LLMs in solving tasks within the environment. Moreover, imitation learning-based approaches like GATO [46], LID [112], SwiftSage [97] and Octopus [45] fine-tune LMs using expert or oracle trajectories to enhance their performance as policies.
Reinforcement Learning: Existing RL-based LLMGAs can be divided into four categories: (1) LLM as actor: GLAM [113] is grounded in the BabyAI-Text environment as a policy to select the next action (four movements), trained through online RL [114]; (2) LLM as planner: Existing studies such as SayCan [108], Plan4MC [101], RL-GPT [106] and ELLM [40] follow a hierarchical paradigm that integrates fixed LLMs as high-level planners with separate low-level RL policies to execute actions. In comparison, another line of research involves fine-tuning large language model (LLM) planners based on rewards received from the environment, such as Octopus [45]; (3) LLM as presenter: LMs can be co-trained with RL policies to produce consistent dialogues that reflect the intentions of policy models, especially in communication games such as Diplomacy [51] and Werewolf [115]; (4) LLM for reward design: LLMs can directly serve as reward models [116], provide annotations for training reward models [117], or generate and refine reward functions for guiding RL agent training [118; 119; 120].
(Figure 5: Mind map for the learning module.)
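To make the in-context feedback pattern concrete, here is a minimal Reflexion-style sketch: on failure, the LLM is asked to reflect on the trajectory, and the accumulated reflections are carried in context into the next trial. run_episode stands in for any game rollout and is a placeholder, not an API from the surveyed systems.
```python
# Illustrative Reflexion-style feedback loop; run_episode is a placeholder.
from openai import OpenAI

client = OpenAI()

def reflect(trajectory: str) -> str:
    out = client.chat.completions.create(
        model="gpt-4", temperature=0.0,
        messages=[{"role": "user", "content":
                   "The attempt below failed. In 2-3 sentences, explain why and "
                   "what to do differently next time.\n" + trajectory}],
    )
    return out.choices[0].message.content

def solve_with_reflection(task: str, run_episode, max_trials: int = 3) -> bool:
    notes = []  # accumulated self-reflections, carried in context across trials
    for _ in range(max_trials):
        success, trajectory = run_episode(task, context="\n".join(notes))
        if success:
            return True
        notes.append(reflect(trajectory))  # no weight updates, only richer context
    return False
```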
3 LLMGAs in Games
We categorize existing studies into six categories based on the main characteristics of the games they support, including adventure, communication, competition, cooperation, simulation, and crafting & exploration. Figure 6 illustrates the core gameplay mechanics associated with each genre. (Figure 6: The depiction of six game categories.)
• Adventure: Adventure games emphasize story-driven gameplay, where players explore environments, solve quests and interact with characters and objects to progress the game. Representative games: Zork I [27] and Red Dead Redemption 2 (RDR2) [34].
• Communication: Communication games revolve around turns of communication, negotiation, deduction and even deception among multiple players. Representative games: Werewolf [28] and Diplomacy [51].
• Competition: Competition games pit players against each other in challenges that test skill or strategy, aiming to outperform others for victory. Representative games: StarCraft II [29] and Pokémon Battles [30].
• Cooperation: Cooperation games are designed around players working together towards common goals, emphasizing teamwork, collaborative problem-solving, and shared achievements. Representative games: Overcooked [102].
• Simulation: Simulation games replicate real-world events in detail, allowing players to experience and manage scenarios ranging from building a civilization to living another life. Representative games: The Sims [59; 121] and Civilization [56].
• Crafting & Exploration: Crafting & Exploration games provide open worlds where players gather resources, craft items, and explore expansive environments, encouraging creativity and discovery. Representative games: Minecraft [14] and Crafter [122].
We summarize existing studies on LLMGAs in Table 1. In this section, we walk through the six game categories, highlighting key findings and methodologies employed in the current research landscape.
3.1 Adventure Games
Adventure games typically progress through storylines or quests. We categorize existing works into two types based on modality: text-based adventure games and video adventure games.
Text adventure games: A text adventure game provides a text-based environment in which players use text commands to interact with the world, exploring and completing quests. TextWorld [137] is a generator of synthetic text games [138; 103] with varying difficulty levels, adjusted via parameters such as the number of rooms and objects, quest length and complexity; Jericho [104] is a collection of 56 human-made games originally designed for human players, covering fictions such as the Zork series [27; 48] and Hitchhiker's Guide to the Galaxy [139]; ALFWorld [50] is aligned to the embodied environment ALFRED [140], where agents are requested to accomplish six types of household tasks; ScienceWorld [49] simulates a primary school science curriculum, covering topics such as thermodynamics and electrical circuits. To complete a quest, an agent needs to navigate to specific rooms, obtain necessary items, conduct experiments, and analyze the results; BabyAI-Text [113] is a text extension of BabyAI [141], a procedurally generated minigrid environment where an agent navigates and interacts with objects.
Due to the lack of graphics, text games rely on commonsense knowledge as a prior for how to interact with the environment. In parser-based text games, generating a three-word sentence with a small vocabulary of size 1,000 already leads to 1 billion combinatorial candidates. Pre-trained LMs featuring human knowledge can effectively narrow down the action space and thus have been widely utilized as linguistic priors for guiding RL agents [123; 142; 143; 144]. Recently, LLMGAs have been employed to play text adventure games: Tsai et al. [124] suggest that the game-playing ability of GPT-3.5 is on par with state-of-the-art (SOTA) reinforcement learning (RL) approaches [145; 146], but it is incapable of constructing the entire map of a partially-known environment; ReAct [80] and Reflexion [63] prompt LLMs to generate additional reasoning and reflection to condition the generation of actions; to solve challenging tasks, Adapt [96] and SwiftSage [97] adopt an LLM planner to decompose complex tasks into subgoals as needed; GLAM [113] leverages online RL to ground an LLM in BabyAI-Text as a policy.
Video adventure games: Red Dead Redemption 2 (RDR2) is a 3D action-adventure game in which players assume the role of an outlaw and follow the storyline of his life as part of a criminal gang. The game features an important characteristic: it guides the player on what to do next with instant instructions. Cradle [34] is an LLMGA that perceives the game screen, analyzes instructions, generates action plans and controls the character through mouse/keyboard operations using GPT-4V.
Table 1: Comparison among existing LLMGAs. FT denotes fine-tuning.
Studies | Category | Game | Base Model | FT | Modality
CALM [123] | Adventure | Jericho | GPT-2 | ✓ | Txt
CanPlayWell [124] | Adventure | Zork I | GPT-3.5 | ✗ | Txt
ReAct [80] | Adventure | ALFWorld | PaLM | ✗ | Txt
Reflexion [63] | Adventure | ALFWorld | GPT-3 | ✗ | Txt
ADAPT [96] | Adventure | ALFWorld | GPT-3.5 | ✗ | Txt
SwiftSage [97] | Adventure | ScienceWorld | GPT-4 & T5 | ✓ | Txt
GLAM [113] | Adventure | BabyAI-Text | FLAN-T5 | ✓ | Txt
Cradle [34] | Adventure | RDR2 | GPT-4V | ✗ | Txt & Img
Xu et al. [28] | Communication | Werewolf | GPT-3.5 | ✗ | Txt
Xu et al. [125] | Communication | Werewolf | GPT-4 | ✗ | Txt
Thinker [115] | Communication | Werewolf | ChatGLM-6B | ✓ | Txt
ReCon [52] | Communication | Avalon | GPT-4 | ✗ | Txt
AvalonBench [126] | Communication | Avalon | GPT-3.5 | ✗ | Txt
CodeAct [127] | Communication | Avalon | GPT-4 | ✗ | Txt
Cicero [51] | Communication | Diplomacy | BART | ✓ | Txt
WarAgent [109] | Communication | Diplomacy-like | GPT-4 | ✗ | Txt
CosmoAgent [128] | Communication | Diplomacy-like | GPT-4 | ✗ | Txt
DEEP [129] | Communication | Word Guess | GPT-4 | ✗ | Txt
GameEval [130] | Communication | Word Guess | GPT-4 | ✗ | Txt
PokéLLMon [30] | Competition | Pokémon Battles | GPT-4 | ✗ | Txt
CoS [29] | Competition | StarCraft II | GPT-3.5 | ✗ | Txt
SwarmBrain [131] | Competition | StarCraft II | GPT-3.5 | ✗ | Txt
ChessGPT [55] | Competition | Chess | RedPajama-3B | ✓ | PGN
OthelloGPT [132] | Competition | Othello | GPT | ✓ | PGN
PokerGPT [53] | Competition | Texas Hold'em | OPT-1.3B | ✓ | Txt
GoodPoker [133] | Competition | Texas Hold'em | GPT-4 | ✗ | Txt
SuspicionAgent [54] | Competition | Leduc Hold'em | GPT-4 | ✗ | Txt
AgentPro [61] | Competition | Leduc Hold'em | GPT-4 | ✗ | Txt
LLM-Co [92] | Cooperation | Overcooked | GPT-4 | ✗ | Txt
MindAgent [100] | Cooperation | Overcooked | GPT-4 | ✗ | Txt
ProAgent [87] | Cooperation | Overcooked | - | ✗ | Txt
HLA [105] | Cooperation | Overcooked | GPT-3.5 & LLaMA2 | ✗ | Txt
S-Agents [99] | Cooperation | Minecraft | GPT-4 | ✗ | Txt
HAS [67] | Cooperation | Minecraft | GPT-4V | ✗ | Txt & Img
CoELA [35] | Cooperation | TDW-T&WAH | GPT-4 | ✗ | Txt & Img
GenerativeAgents [59] | Human Simulation | SmallVille | GPT-3.5 | ✗ | Txt
HumanoidAgents [134] | Human Simulation | Social | GPT-3.5 | ✗ | Txt
LyfeAgent [121] | Human Simulation | Lyfe Game | GPT-3.5 | ✗ | Txt
AgentSims [135] | Human Simulation | AgentSims | - | ✗ | Txt
CivRealm [56] | Civil. Simulation | Civilization | - | ✗ | Txt
ZeroShotPlanner [98] | Embodied Simulation | VirtualHome | GPT-3 | ✗ | Txt
LLMPlanner [36] | Embodied Simulation | ALFRED | GPT-3 | ✗ | Txt & Img
E2WM [110] | Embodied Simulation | VirtualHome | LLaMA-13B | ✓ | Txt
Octopus [45] | Embodied Simulation | Behavior-1K | CLIP & MPT-7B | ✓ | Txt & Img
Voyager [65] | Craft & Explore | Minecraft | GPT-4 | ✗ | Txt
DEPS [38] | Craft & Explore | Minecraft | MineCLIP & GPT-4 | ✗ | Txt & Img
GITM [66] | Craft & Explore | Minecraft | GPT-3.5 | ✗ | Txt
JARVIS-1 [39] | Craft & Explore | Minecraft | MineCLIP & GPT-4 | ✗ | Txt & Img
Plan4MC [101] | Craft & Explore | Minecraft | GPT-3.5 | ✗ | Txt & Img
RL-GPT [106] | Craft & Explore | Minecraft | GPT-4 | ✗ | Txt & Img
MineDoJo [41] | Craft & Explore | Minecraft | MineCLIP | ✓ | Txt & Img
LLaMARider [111] | Craft & Explore | Minecraft | LLaMA-2-70B | ✓ | Txt & Img
SteveEye [136] | Craft & Explore | Minecraft | CLIP & LLaMA2 | ✓ | Txt & Img
CreativeAgent [44] | Craft & Explore | Minecraft | GPT-4V | ✗ | Txt & Img
MCReward [44] | Craft & Explore | Minecraft | GPT-4 | ✗ | Txt & Img
ELLM [40] | Craft & Explore | Crafter | Codex | ✗ | Txt & Img
SPRING [94] | Craft & Explore | Crafter | GPT-4 | ✗ | Txt & Img
AdaRefiner [95] | Craft & Explore | Crafter | LLaMA2 & GPT-4 | ✓ | Txt & Img
OMNI [93] | Craft & Explore | Crafter | GPT-3 | ✗ | Txt & Img
PlayDoom [43] | Others | Doom | GPT-4V | ✗ | Txt & Img
GATO [46] | Others | Atari | GATO | ✓ | Img
Motif [117] | Others | NetHack | LLaMA-2 | ✗ | Txt
3.2 Communication Games
Communication (or conversational) games revolve around turns of communication, negotiation, deduction and deception among multiple players. The challenge of communication games lies in inferring others' intentions behind ambiguous or misleading language utterances, and hiding one's own intentions if necessary.
Werewolf: The game pits two groups against each other, i.e., werewolves and non-werewolves (villagers, witch, guard and seer), and alternates between night phases, where werewolves secretly attack, and day phases, where survivors discuss and vote to eliminate suspects. The witch, guard, and seer each possess unique abilities. Xu et al. [28] propose to retrieve and reflect on historical communications for enhancement, and observe that GPT-3.5 demonstrates strategic behaviors such as trust, confrontation, camouflage, and leadership. Xu et al. [125] employ an RL policy to select the optimal action from among the diverse actions generated by LLMs, aiming to overcome the LLMs' prior preference for specific actions. Wu et al. [115] introduce an RL policy to generate the next action by taking as input the reasoning generated by the LLM, and employ another LLM to generate descriptions aligned with the action.
Avalon: The game progresses through rounds of discussion and voting to decide who participates in the quests. The goal for the good team is to successfully complete quests, while the bad team aims to secretly sabotage these quests or identify the role of Merlin, who knows the identities of the bad players. Light et al. [126] suggest that GPT-3.5 struggles to formulate and execute simple strategies and sometimes reveals its own bad identity. Wang et al. [52] introduce a reasoning approach that takes into account first-order and second-order perspective shifts to combat pervasive misinformation. To combat hallucination, Shi et al. [127] propose to generate reasoning substeps in a code format that are subsequently interpreted as actions.
Diplomatic games: Diplomacy is a classic diplomatic board game from the 1950s where players assume the roles of seven powers striving to conquer Europe during World War I. Each turn is marked by private negotiations, trust-building, and tactical coordination among players. Cicero [51] is a human-level agent in Diplomacy that integrates an RL policy for planning and a BART [147] model conditioned on the plan for generating consistent negotiation messages; WarAgent [109] simulates the participating countries, decisions, and consequences in World War I and World War II; CosmoAgent [128] mimics the communication, conflicts, and cooperation among various universal civilizations.
Others: Studies have demonstrated the game-playing abilities of LLMs in various other games, including SpyGame (Who is Spy) [130; 129], Ask-Guess [130; 129], Tofu Kingdom [130], and Murder Mystery Games [148], known as Jubensha in Chinese.
3.3 Competition Games
Competition games, governed by strict rules, challenge agents with opponents of varied levels, demanding advanced reasoning and skills. Competition games serve as benchmarks for evaluating the reasoning and planning abilities of LLMGAs directly against human players. Reaching human-level performance is a crucial achievement that highlights the agent's prowess in complex decision-making and strategic implementation.
StarCraft II: StarCraft II is a real-time strategy game in which players are tasked with gathering resources, building bases, creating armies, and engaging in combat to defeat the opponent. Ma et al. [29] introduce TextStarCraft II, a natural language interface that enables LLMs to play StarCraft II, and Chain-of-Summarization for efficient reasoning and decision-making; SwarmBrain [131] introduces an Overmind Intelligence Matrix for high-level strategic planning and a Swarm ReflexNet for rapid tactical responses. These LLM-based agents exhibit comparable performance against the game's built-in AI at high difficulty levels.
Pokémon Battle: Pokémon battles are turn-based tactical games, with two players each sending out one Pokémon and choosing either to attack or switch Pokémon each turn. Hu et al. [30] introduce an environment that enables LLMs to play Pokémon battles and a human-level agent, PokéLLMon, that consumes instant feedback to iteratively refine the policy, retrieves external knowledge to combat hallucination, and generates consistent actions to alleviate the panic-switching problem caused by CoT [81] reasoning.
Chess: Feng et al. [55] introduce a large-scale chess gameplay dataset stored in Portable Game Notation format [149] and ChessGPT, fine-tuned on mixed chess and language datasets to support board state evaluation and chess playing; Toshniwal et al. [150] and Li et al. [132] discover that LMs trained to predict the next move in chess are capable of tracking the state of the board given a move sequence, i.e., LMs are capable of playing blindfolded. This suggests that LMs do not merely memorize surface statistics but also learn a causal model of the sequence-generating process.
[151] observe that GPT-4 plays like an advanced yet aggressive player who raises with a wide range of hands pre-flop, avoids limping, and exhibits unconventional play; PokerGPT [53] demonstrates that OPT-1.3B [152] with supervised fine-tuning and RLHF [2] can achieve comparable performance to a RL-based method Alphaholdem [153] with significantly less training cost: 9.5 GPU hours compared to Alphaholdem\u2019s 580 GPU hours. Guo et al. [54] and Zhang et al. [61] demonstrate that prompting LLMs to predict opponents\u2019 thoughts, known as Theory of Mind [90; 91], results in significant improvements in Texas Hold\u2019em, BlackJack and Leduc Hold\u2019em [11]. 3.4 Cooperation Games Cooperation among individuals can enhance the efficiency and effectiveness of task accomplishment. There are primarily three types of cooperative tasks in games: (1) Cooperative cooking [102; 100; 154] requires agents collaborate to cook and deliver as many dishes as possible within the given time. To prepare an onion soup in Overcooked-AI [102], two agents need to load three onions into a cooker, starting a cooking process that lasts 20 time steps, and transfer the soup to a plate for delivery; (2) Embodied household cooperation [155; 156] requires agents to collaboratively accomplish tasks like transporting as many objects as possible to the goal position in embodied environments with partial observation [107; 157]; (3) Cooperative crafting [99; 100] & exploration [67] in Minecraft can be accelerated through cooperation between multiple agents. Existing cooperative game settings can be categorized into decentralized and centralized cooperation. Decentralized cooperation: A decentralized structure is a democratic structure ( ) where there is no central task dispatcher. In Overcooked, the ability to infer the partner\u2019s intent and next action based on the its historical actions, known as Theory-of-Mind, is crucial to prevent conflicts. Agashe et al. [92] show that GPT-4 is able to recognize and offer assistance to partners in need, and show robustness in adjusting to different partners. ProAgent [87] introduces a belief correction module to rectify incorrect beliefs on partners and consistently outperforms RL approaches [114; 31; 158]. Moreover, HLA [105] integrates a proficient LLM and a lightweight LLM to balance efficacy and efficiency in real-time human-agent interaction; In partially-observable embodied environments, CoELA [35] introduce an efficient communication module to determine what and when to communicate, exhibiting better performance compare to MCTS-based and rule-based planners on Watch-and-Help [155] and TDW Transport tasks [156]. Centralized cooperation: In Minecraft, S-agents [99] and MindAgents [100] adopts a centralized dispatcher/planner to decompose a challenging goal into subtasks and dispatches them to agents for execution, forming a hierarchical architecture. HAS [67] introduces an auto-organizing mechanism to dynamically adjust key roles and action groups during cooperation, and an intra-communication mechanism to ensure efficient collaboration. 3.5 Simulation Games Simulation games provide simulated environments for real-world events or scenarios, enabling players to experience realistic interactions and decision-making in open-ended game playing. Existing studies can be categorized as human & social simulation, civilization simulation and embodied simulation. 
Human and social simulation: Generative Agents [59] marks the first LLM-based human simulation experiment that leverages LLMs' prior knowledge to simulate human-like daily life and social activities. Specifically, GPT-3.5 assumes the roles of 25 generative agents with unique personas and social relationships, residing in a virtual small town. A cognitive architecture is introduced to support agents in remembering, retrieving, reflecting, planning, and acting within dynamic environments. During the two-day simulation, emergent behaviors like exchanging information, forming new relationships, and coordinating joint activities are observed. On the basis of Generative Agents, Humanoid Agents [134] further considers the effects of states like basic needs (e.g., hunger, health, and energy), emotions, and closeness in relationships on agents' behavior generation; for other simulation environments, AgentSims [135] is a programmable and extendable environment; LyfeGame [121] is a 3D virtual small town in Japan. Three experimental scenarios are designed to assess the social behaviors of LLM-based agents, including a murder mystery, a high school activity fair, and a patient-in-distress scenario.
Civilization simulation: CivRealm [56] is a game environment based on Civilization [32], where each player governs a civilization simulating the progress of human history. As an open-ended game, it features diverse victory conditions, requiring players to strategically develop the economy, military, diplomacy, culture, and technology of their civilizations. Mastaba [56] introduces an advisor and an AutoGPT [159]-style worker, where the advisor aids in generating context-specific objectives while the workers handle the execution of these goals through generated actions. Experiments show that the advisor brings an advantage at the early game stage, yet the advantage diminishes as the game progresses.
Embodied simulation: In simulated 3D environments, embodied agents perceive their surroundings through egocentric perception, similar to humans, and engage with realistic objects to carry out a wide range of tasks by following instructions like \"Rinse off a mug and place it in the coffee maker\". Existing benchmarks include AI2-THOR [160], Virtual Home [107], ALFRED [140], iGibson [161], Habitat [162], ThreeDWorld [156], Behavior [163] and Behavior-1K [164]. Existing approaches [36; 98; 108; 110] primarily adopt LLMs as planners to decompose an instruction into action plans. Specifically, ZeroShotPlanner [98] prompts LLMs in a zero-shot manner for planning; SayCan [108] uses a learned affordance function to assist LLMs in selecting valid actions during planning; LLMPlanner [36] adopts a KNN retriever to select few-shot examples and dynamically re-plans based on the observation in the current environment; E2WM [110] fine-tunes an LLM with embodied experience collected through action space search and random exploration to enhance its understanding of the environments.
3.6 Crafting & Exploration Games
Minecraft and Crafter are two game environments that have been widely studied for game agents with a focus on crafting & exploration. Minecraft [14] is a 3D sandbox game that offers players great freedom to traverse a world made up of blocky, pixelated landscapes, facilitated by procedurally generated worlds. The resource-based crafting system enables players to transform collected materials into tools, build elaborate structures and complex machines.
Crafter [122] is a 2D open-world game that mirrors the survival mode of Minecraft. It challenges players to manage their resources carefully to ensure sufficient water, food, and rest, while also defending against threats like zombies. The game's world is also procedurally generated for exploration purposes, and it includes 22 tasks for players to accomplish. Existing agents can be divided into goal-conditioned agents that implement a task given an instruction (crafting), and autonomous exploration agents that navigate the open world based on self-determined objectives (exploration).
Crafting: The key challenge in crafting tasks lies in their complexity: agents must gather diverse materials scattered across the world and understand intricate recipes and the sequential steps involved. Consequently, planning is widely employed to address crafting tasks. Existing agent designs such as DEPS [38], GITM [66], JARVIS-1 [39], Plan4MC [101], RL-GPT [106] and S-agents [99] mainly follow a paradigm that adopts LLMs as a planner to decompose the goal into subgoals and further generate action plans for each sub-goal. Specifically, DEPS introduces error correction on initial plans by integrating a description of the plan execution and self-explanation of feedback when encountering failures; GITM [66] leverages external knowledge like item crafting/smelting recipes, and is equipped with a long-term memory to maintain common reference plans for encountered objectives; JARVIS-1 [39] chains MineCLIP [41] and an LLM together to perceive multimodal input and utilizes a multimodal memory to store experiences; Plan4MC [101] and RL-GPT [106] integrate the LLM planner with a low-level RL policy for action execution; S-agents [99] and HAS [67] dispatch subtasks to multiple agents for cooperative task execution.
Exploration: Navigating through a procedurally generated world without specific goals can overwhelm agents with numerous possible actions. Previous works leverage curriculum learning [165] to identify suitable tasks, while now LLMs can directly act as goal generators. In Minecraft, Voyager [65] adopts an automatic curriculum in a self-directed way [166], i.e., it asks the LLM to generate goals that adapt to the agent's current state, inventory, acquired skills, and environment. In Crafter, OMNI [93] utilizes LLMs to determine interesting tasks for curriculum design, overcoming the previous challenge of quantifying \"interest\". ELLM [40], SPRING [94] and AdaRefiner [95] prompt LLMs to generate goals for agents. Specifically, ELLM [40] queries LLMs for the next goals given an agent's current context, and rewards agents for accomplishing those suggestions in the sparse-reward setting; SPRING [94] uses LLMs to summarize useful knowledge from the Crafter paper [122] and progressively prompts the LLM to generate the next action; on the basis of ELLM, AdaRefiner [95] cascades a learnable lightweight LLM with fixed LLMs for better goal plan generation.
3.7 Evaluation
The evaluation metrics for game agents vary across different games. In Table 2, we summarize the metrics for several representative games.
For games with specific task instructions, such as ALFWorld, ScienceWorld, BabyAI, RDR2, ALFRED, VirtualHome, and crafting tasks in Minecraft and Crafter, the task success rate is usually adopted as the primary metric; for competition games, win rate, game score, and Elo rating are common metrics; for communication games that separate players into adversarial teams, such as Werewolf and Avalon, win rate can be used as the metric. For human/social simulation experiments, human evaluators are typically recruited to assess the believability of behaviors exhibited by LLM-based agents.
Table 2: Evaluation metrics used in representative games
Game | Metric
Jericho | Game score [123]
ALFWorld | Task success rate [80; 96]
ScienceWorld | Task score [97]
BabyAI/BabyAI-Text | Task success rate [93; 113]
RDR2 | Task success rate [34]
Werewolf | Win rate [115; 28; 125], Voting accuracy [115]
Avalon | Win rate [52; 127; 126]
Diplomacy | Player score [51]
StarCraft II | Win rate [29; 131]
Pokémon Battles | Win rate [30], Battle score [30]
Chess | Elo rating [55], Move score [55]
Poker | Win rate [54; 53], # of win/loss chips [54], mbb/hand [53]
Overcooked | Reward value [92; 87], Success rate [100; 105]
Human Simulation | Human evaluation [59; 134]
Civilization | Task success rate [56], Game score [56], # of techs & units [56]
ALFRED | Task success rate [36]
VirtualHome | Task success rate [98; 167], Executability [98; 167]
Minecraft | Task success rate [38; 65; 66], Map coverage [65], # of items [65]
Crafter | Task success rate [40; 93]
4 Conclusion and Future Directions
In this paper, we conduct a systematic literature review of existing studies on LLMGAs, examining two primary aspects: (1) Construction of LLMGAs, where we elaborate on six essential components, including perception, memory, thinking, role-playing, action, and learning; (2) LLMGAs in six game categories, including adventure, communication, competition, cooperation, simulation, and crafting & exploration, where we detail the game environments and common strategies adopted by game agents associated with each type. Finally, we identify three potential future directions for this new research field:
Grounding LLMs in environments: LLMs pre-trained only on text corpora are not grounded in real environments, i.e., they are not really aware of the consequences of their generations on physical processes [168]. Consequently, ungrounded LLMs generate inadmissible actions [108] and exhibit a gap between high-level intentions and intricate game control [34], especially in the absence of visual perception abilities [29] and feedback, and thus largely rely on manually designed prompts. Existing efforts have been made toward grounding LLMs through multimodal perception [45], the adoption of external affordance functions [108], feedback from environments [30], and experience in environments [110; 40]. However, the current progress in grounding techniques remains limited and falls short of the requirements of real-world applications [169]. Games, serving as controllable and safe environments, are ideal testbeds for developing grounding techniques that can make LLMs more in line with sophisticated environments.
Knowledge discovery through game-playing: Current studies [65; 30; 29] primarily remain at the stage of utilizing the pre-existing knowledge encoded in LLMs for game-playing.
Although some studies propose to leverage game-playing experiences [110; 111; 40] to ground and enhance LLM-based agents, these agents are still incapable of extracting underlying knowledge below the surfaces of observations and experience. Knowledge discovery is not simply about learning to act effectively, but to understand fundamental principles and causal model of gameplay mechanisms just like human. We believe that gameplay mechanisms with complex extrinsic knowledge are essential testbeds for designing such agents, and knowledge discovery via experiencing in environments might represent a critical step toward the pursuit of AGI. Agent society simulation: The social simulation experiment of Generative Agents [59] has demonstrated that LLM-based agents are promising for believable simulacra of human. Emergent human-like social behaviors are observed like information diffusion, forming new relationship and coordination for social activities, with the support of a novel cognitive architecture. However, as human beings are far more sophisticated, with complicated mental processes, emotional depth, and advanced social skills, it would be an intriguing avenue for future research to develop better cognitive architectures and more nuanced simulations of social interactions and cooperation [170] in realistic game environments to foster deeper understanding and representation of complex human interactions. Acknowledgements Authors would like to thank Yunxiang Yan and Vijayraj Shanmugaraj for their assistance in collecting papers. This research is partially sponsored by the NSF CISE grants 2302720, 2312758, 2038029, a GTRI PhD Fellowship, an IBM faculty award, and a grant from CISCO Edge AI program."
},
{
"url": "http://arxiv.org/abs/2312.14877v2",
"title": "Robust Knowledge Extraction from Large Language Models using Social Choice Theory",
"abstract": "Large-language models (LLMs) can support a wide range of applications like\nconversational agents, creative writing or general query answering. However,\nthey are ill-suited for query answering in high-stake domains like medicine\nbecause they are typically not robust - even the same query can result in\ndifferent answers when prompted multiple times. In order to improve the\nrobustness of LLM queries, we propose using ranking queries repeatedly and to\naggregate the queries using methods from social choice theory. We study ranking\nqueries in diagnostic settings like medical and fault diagnosis and discuss how\nthe Partial Borda Choice function from the literature can be applied to merge\nmultiple query results. We discuss some additional interesting properties in\nour setting and evaluate the robustness of our approach empirically.",
"authors": "Nico Potyka, Yuqicheng Zhu, Yunjie He, Evgeny Kharlamov, Steffen Staab",
"published": "2023-12-22",
"updated": "2024-02-08",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"label": "Original Paper",
"paper_cat": "LLM AND Agent",
"gt": "Robust Knowledge Extraction from Large Language Models using Social Choice Theory",
"main_content": "INTRODUCTION Large Language Models (LLMs) achieve state-of-the-art results in various natural language processing (NLP) tasks. Formally, LLMs represent a conditional probability distribution \ud835\udc43(\ud835\udc47\ud835\udc5b+1|\ud835\udc471, . . . ,\ud835\udc47\ud835\udc5b) over tokens (character sequences) that predicts the next token given a fixed context of previous tokens. To answer a query \ud835\udc44, \ud835\udc44is decomposed into tokens \ud835\udc471, . . . ,\ud835\udc47\ud835\udc44and used to sample the first token \ud835\udc341 of the answer from \ud835\udc43(\ud835\udc341|\ud835\udc471, . . . ,\ud835\udc47\ud835\udc44). \ud835\udc341 can then be added to the context \ud835\udc471, . . . ,\ud835\udc47\ud835\udc44to sample the next answer token \ud835\udc342. This process is repeated until a special end of text token is reached. Since \u2217Equal Contribution. This work is licensed under a Creative Commons Attribution International 4.0 License. Proc. of the 23rd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2024), N. Alechina, V. Dignum, M. Dastani, J.S. Sichman (eds.), May 6 \u2013 10, 2024, Auckland, New Zealand. \u00a9 2024 International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). finding an optimal sequence of answer tokens is hard, answer sequences are often computed by a heuristic search like Beam [8], Top-K [4] or Nucleus [7] search that build up multiple promising token sequences in parallel. The sampling process is controlled by a temperature parameter. For temperature 0, the algorithms samples greedily. Increasing the temperature allows sampling tokens with lower local probability. While some authors associate higher temperatures with more creative answers, they can also result in higher probability answers because greedy selection can exclude high probability sequences that start with low probability tokens. Given the success of LLMs in difficult NLP taks, they are increasingly being used for general question answering tasks. This is a natural application as it is reasonable to assume that LLMs picked up a lot of interesting information during training. However, one limitation of LLMs is that they will always produce an answer even if they did not learn anything about the question. This problem is referred to as hallucination in the literature [10]. The uncertainty of an answer is hard to quantify. While every answer sequence can be associated with a probability, this is merely the probability of the text sequence and should not be confused with the probability that the answer is correct (or that the LLM \"believes\" that the answer is correct). Theoretically, LLMs can be asked to output probabilities for their predictions, but it is hard to say how meaningful these probabilities are since there is nothing in a typical LLM architecture that would allow them to infer meaningful probabilities (unless they picked up a particular probability from the training corpora). Since query answering with LLMs is based on a heuristic search for high probability token sequences rather than on reasoning, we, in particular, have the following types of uncertainty: (1) Query-Uncertainty: prompting the same query repeatedly can result in different answers. (2) Syntax-Uncertainty: semantically equivalent queries that differ only syntactically can result in different answers. (3) Distraction-Uncertainty: meaningless information added to the query can result in a different answer. 
Let us note that, in principle, query uncertainty can be eliminated by setting the temperature parameter to 0. However, as outlined above, the deterministic answer will be somewhat random because it corresponds to some local optimum found by a heuristic search algorithm. We therefore aim at allowing some randomness in the answer, but increasing the robustness. The idea of robustness is that similar queries should result in similar answers. In particular, arXiv:2312.14877v2 [cs.CL] 8 Feb 2024 \fthe same query prompted multiple times can result in different answers in our setting. However, we would like that these answers are semantically similar. Similarly, we would like that syntactic changes of a query do not change the semantics of the answer. In this work, we will mainly focus on making LLMs robust against query uncertainty, but we will also look at syntax uncertainty in our experiments. We explore to which extent an answer sampling strategy combined with social choice theory methods can improve the robustness of LLMs. The idea is as follows: instead of asking a query once, we ask it repeatedly (starting each time from the original question context). Our assumption is that if the LLM picked up the answer during training, then this answer should occur in the majority of cases. On the other hand, if it did not pick up information about the query and hallucinates an answer, we expect that the different answers will be very random. We will apply tools from social choice theory to aggregate the answers. We expect that, if the LLM picked up meaningful information, then our aggregation will result in a clear ranking of the different answers, while it will be mostly indifferent between the answers otherwise. Let us emphasize that the outcome should be interpreted with care. If the LLM has been trained on a text corpora with false information, we may find that an LLM gives a false answer with high certainty. The probabilities that we derive should therefore be understood as subjective probabilities that reflect the uncertainty of the LLM and not as statistical probabilities. We view our method as most useful when being applied to LLMs that were trained on reliable literature (e.g., peer-reviewed articles and books) and not on random text from the internet. While reliable pretrained models like BioBert and MedBert exist [15, 20], they still require fine-tuning to be usable as question answering systems. Since our resources are limited, we will therefore use ChatGPT-turbo in our experiments, which was trained on mixed data with varying reliability. However, our experiments are only a proof of concept and the idea can directly be transferred to LLMs trained on high quality domain-specific data. In our investigation, we will focus on diagnostic problems, where we try to identify the cause of a particular situation or condition. The identified cause is called the diagnosis for the condition. Typical examples are medical diagnosis (identify the medical condition that causes a set of symptoms) or fault diagnosis (identify the defective component in a technical system that causes malfunctions). The query consists of a description of the situation and we ask for a ranking of possible causes ordered by their plausibility. In order to take account of uncertainty, we repeat the query multiple times and collect the rankings. Tools from social choice theory can then be applied to merge the rankings and to quantify the uncertainty of the answer. 
To do so, we will build up on scoring-based voting methods for partial preference orderings [3]. 2 RELATED WORK Prior research on uncertainty quantification of LLMs focused on investigating the probabilities of token sequences [6, 11]. However, as discussed before, the probability of the token sequence should not be confused with the probability that the token sequence expresses a valid claim. In particular, the same claim can be expressed by different (semantically equivalent) token sequences that obtain different probabilities. [14] address this issue by first clustering claims with the same semantic meaning and summing their probabilities to calculate a \"semantic entropy\". Other work involves training or fine-tuning the LLMs to quantify uncertainty [12, 17, 18]. However, due to lack of transparent training specifics, these approaches might be difficult to reproduce in addition to being expensive. Despite the demand for uncertainty quantification without relying on model fine-tuning or accessing the proprietary information of LLMs, there is little work in this area and much remains unexplored. To our best knowledge, only [23, 24] quantify uncertainty based on the verbalized confidence given by LLMs or selfconsistency of the claims. The significance of verbalized confidence is unclear since there is nothing in a typical LLM architecture that would allow it to infer meaningful probabilities. Our approach aggregates answers and quantifies the uncertainty using methods from social choice theory. Moreover, We study queries that give a rank with multiple possible answers rather than one single answer as an output, no approach from existing work can be directly applied in our case. The recent neuro-symbolic theorem prover LINC [19] uses LLMs as a semantical parser to translate natural language reasoning problems into first-order logic that can then be processed by a symbolic theorem prover. To decrease the risk of parsing errors, the authors parse and process the inputs repeatedly and apply majority voting to determine the outcome. This may be another interesting domain for applications of more sophisticated voting methods. The notion of robustness that we consider here (similar inputs should result in similar outputs) follows the terminology in Explainable AI [1, 5, 16] and should not be confused with statistical [9] or adversarial [22] robustness. From an explanation point of view, our scoring method is interpretable in the sense that the scores can be explained from the LLM\u2019s responses to the repeated prompts. The responses can be further explained by the LLM\u2019s sampling procedure and the output probabilities of the transformer. However, understanding the output probabilities of transformers is difficult and a topic of current research [26]. 3 SOCIAL CHOICE THEORY BACKGROUND Social choice theory deals with aggregating individual preferences of different agents towards a collective choice [2]. The agents are often seen as voters who can express their preferences in different ways. For example, they may be able to vote for a single candidate, for multiple candidates or report a preference ordering over the candidates. We will focus on the latter setting here. Formally, we consider a finite set of voters \ud835\udc41= {1, . . . ,\ud835\udc5b} and a finite set of outcomes \ud835\udc42= {\ud835\udc5c1, . . . ,\ud835\udc5c\ud835\udc5a}. A partial order \u2ab0over \ud835\udc42is a binary relation over \ud835\udc42that is reflexive, anti-symmetric and transitive. 
We do not assume that it is complete, that is, there can be outcomes $o_i \neq o_j$ such that neither $o_i \succeq o_j$ nor $o_j \succeq o_i$. As usual, we write
- $o \succ o'$ iff $o \succeq o'$ and $o' \not\succeq o$,
- $o \sim o'$ iff $o \succeq o'$ and $o' \succeq o$.
If $o \succ o'$, we say that $o$ is strictly preferred to $o'$, and if $o \sim o'$, we say that we are indifferent between the two. A profile $p = [\succeq_1, \ldots, \succeq_n]$ contains one partial order for every voter and captures the preferences expressed by them. The process of aggregating the voters' preferences can be formalized in different ways. A social choice function is a mapping $f$ from the set of all profiles to a non-empty subset of the outcomes. Intuitively, $f(p)$ should contain the outcomes that are maximally preferred by the voters. Ideally, $f(p)$ contains only a single element, but there are cases where a unique choice cannot be made without ad-hoc assumptions (like choosing a random outcome or a lexicographically minimal one). Social choice research often focuses on total orderings, where agents express preferences over all possible outcomes [2]. In our application, the outcomes are possible diagnoses, and the different answers do not necessarily contain the same diagnoses. We will therefore focus on preferences expressed by partial orderings. Since we are interested in quantifying the uncertainty of an answer (based on the variance in the rankings), scoring-based voting methods are a natural choice. We recall some ideas about aggregating partial preferences by scoring-based voting methods from [3]. To begin with, a scoring procedure $s_p : O \to \mathbb{R}$ is a mapping from outcomes to numerical values that is parametrized by a profile $p$ [3]. Intuitively, $s_p(o)$ is the score of outcome $o$ with respect to the preferences expressed by the profile $p$. Every scoring procedure induces a social choice function by letting $f(p) = \arg\max_{o \in O} s_p(o)$. (1) A weighting procedure $w_{\succeq} : O \to \mathbb{R}$ maps outcomes to numerical values and is parametrized by a partial order $\succeq$ [3]. Intuitively, $w_{\succeq}(o)$ is the score of outcome $o$ with respect to the preferences expressed by $\succeq$. We can construct a scoring procedure from a weighting procedure by letting [3] $s_p(o) = \sum_{i=1}^{n} w_{\succeq_i}(o)$, (2) where we assume $p = [\succeq_1, \ldots, \succeq_n]$. A weighting procedure, in turn, can be based on how many other outcomes are less preferred and how many are incomparable.
To do so, we can consider functions $\mathrm{Down}_{\succeq} : O \to \mathbb{N}$ and $\mathrm{Inc}_{\succeq} : O \to \mathbb{N}$ defined as follows [3]: $\mathrm{Down}_{\succeq}(o) = |\{o' \in O \mid o \succ o'\}|$, (3) $\mathrm{Inc}_{\succeq}(o) = |\{o' \in O \mid o \text{ and } o' \text{ are incomparable}\}|$, (4) where, for a set $S$, $|S|$ denotes its cardinality. That is, $\mathrm{Down}_{\succeq}(o)$ is the number of outcomes ranked lower than $o$ and $\mathrm{Inc}_{\succeq}(o)$ is the number of outcomes incomparable to $o$. The following two properties of weighting procedures have been proposed in [3]:
Linearity: There exist constants $\alpha, \beta, \gamma \in \mathbb{R}$ such that $w_{\succeq}(o) = \alpha \cdot \mathrm{Down}_{\succeq}(o) + \beta \cdot \mathrm{Inc}_{\succeq}(o) + \gamma$. (5)
Constant Total Weight: There exists a constant $\delta$ such that $\sum_{o \in O} w_{\succeq}(o) = \delta$ for all partial orders.
Partial Borda Weighting (PBW) [3] $w^{\mathrm{PBW}}_{\succeq}$ is the linear weighting procedure defined by letting $\alpha = 2$, $\beta = 1$, $\gamma = 0$ (6) in (5).
Definition 3.1 (PBW Weighting). The PBW weighting procedure is defined as $w^{\mathrm{PBW}}_{\succeq}(o) = 2 \cdot \mathrm{Down}_{\succeq}(o) + \mathrm{Inc}_{\succeq}(o)$. (7)
One can show the following.
Theorem 3.2 ([3]). $w^{\mathrm{PBW}}_{\succeq}$ satisfies Linearity and Constant Total Weight, and every other weighting procedure that satisfies these two properties is an affine transformation of PBW.
We refer to [3, Theorem 1] for more details about this result. The partial Borda choice function $f^{\mathrm{PBW}}$ is the social choice function induced by $w^{\mathrm{PBW}}_{\succeq}$ based on equations (1) and (2). It can be characterized as follows.
Theorem 3.3 ([3]). The partial Borda choice function is the unique social choice function that satisfies the following properties.
Consistency: If $p_1, p_2$ are disjoint profiles and $f(p_1) \cap f(p_2) \neq \emptyset$, then $f(p_1) \cap f(p_2) = f(p_1 \cup p_2)$.
Faithfulness: If $p = [\succeq_1]$ and $b \succ_1 a$, then $a \notin f(p)$.
Neutrality: $f$ is invariant with respect to permutations of $O$ (renaming the outcomes will not affect the result), that is, $f(\sigma(p)) = \sigma(f(p))$ for all bijective mappings $\sigma : O \to O$.
Cancellation: If for all outcomes $o_1 \neq o_2$, the number of voters who rank $o_1$ above $o_2$ equals the number of voters who rank $o_2$ above $o_1$, then $f(p) = O$.
We refer to [3, Theorem 2] for more details about this result.
4 IMPROVING THE ROBUSTNESS OF LLM QUERIES WITH PBW
As we saw in the previous section, aggregating partial preferences with PBW gives us several desirable analytical guarantees. We will now use PBW to improve the robustness of LLM ranking queries. The basic idea is to ask the LLM for the most plausible explanations of a situation repeatedly and to use PBW to aggregate the answers.
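Definition 3.1 and equations (1)-(2) translate directly into code. The following is a minimal sketch under our own representation choice (not from the paper): each partial order is given as a transitively closed set of strict-preference pairs (a, b), meaning $a \succ b$:

```python
def pbw_weight(outcomes, succ):
    """Per-ranking PBW weight of Definition 3.1: w(o) = 2 * Down(o) + Inc(o).

    `succ` is a transitively closed set of strict-preference pairs (a, b)."""
    weight = {}
    for o in outcomes:
        down = sum(1 for (a, b) in succ if a == o)  # outcomes strictly below o
        comparable = {b for (a, b) in succ if a == o} | {a for (a, b) in succ if b == o}
        inc = len(outcomes) - 1 - len(comparable)   # outcomes incomparable to o
        weight[o] = 2 * down + inc
    return weight

def pbw_score(outcomes, profile):
    """Equation (2): sum the per-ranking weights over the whole profile."""
    scores = {o: 0 for o in outcomes}
    for succ in profile:
        for o, w in pbw_weight(outcomes, succ).items():
            scores[o] += w
    return scores

def partial_borda_choice(outcomes, profile):
    """Equation (1): the set of outcomes with maximal PBW score."""
    scores = pbw_score(outcomes, profile)
    best = max(scores.values())
    return {o for o, s in scores.items() if s == best}
```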
4.1 From Queries to Rankings In order to obtain ranking answers from LLMs, we consider queries of a special form that we call ranking queries. We refrain from a formal definition and just explain the intuitive idea. Roughly speaking, a ranking query consists of \u2022 a condition description, \u2022 answer instructions. Example 4.1. As a running example, we will use a medical scenario with the following ranking query: \"A 20 year old professional runner suffers from a stinging pain in the forefoot. The foot is swollen and stiff. What are the most plausible explanations? Please keep the answer short and order by decreasing plausibility.\" The first two sentences describe the condition, the last two sentences give the answer instructions. A typical answer provided by ChatGPT looks as follows: The most plausible explanations for a 20-year-old professional runner experiencing a stinging pain, swelling, and stiffness in the forefoot, ordered by decreasing plausibility, could be: \f\u2022 Overuse Injury: Repetitive stress from running may have led to an overuse injury such as metatarsalgia or stress fracture. \u2022 Tendonitis: Inflammation of tendons in the forefoot, like extensor tendinitis, could cause these symptoms. \u2022 Ligament Sprain: A sprained ligament, like in the metatarsophalangeal joint, could result in pain, swelling, and stiffness. \u2022 Footwear Issues: Poorly fitting or worn-out running shoes may contribute to foot problems and discomfort. \u2022 Infection or Insect Bite: Infection or an insect bite should be considered but is less likely without specific signs like redness and fever. In order to obtain our rankings, we run the prompt 5 times, each time starting from an empty context. The possible causes provided in the 5 answers, define our set of outcomes \ud835\udc42. We summarize and normalize the answers such that synonyms and syntactic differences do not lead to different outcomes. Given an answer list \ud835\udc5c1, . . . ,\ud835\udc5c\ud835\udc5efor one prompt, we associate it with the partial ordering \ud835\udc5c1 \u227b\u00b7 \u00b7 \u00b7 \u227b\ud835\udc5c\ud835\udc5e\u227b{\ud835\udc5c1, . . . ,\ud835\udc5c\ud835\udc5e}, (8) where for every subset \ud835\udc46\u2286\ud835\udc42, \ud835\udc46= \ud835\udc42\\\ud835\udc46denotes the complement of \ud835\udc46and \ud835\udc5c\u227b\ud835\udc46is short for \ud835\udc5c\u227b\ud835\udc5c\u2032 for all \ud835\udc5c\u2032 \u2208\ud835\udc46. That is, the outcomes occuring in the answer are preferred according to their order of appearance and they all are preferred to those outcomes that have not occured. The outcomes that did not occur are incomparable with respect to this ranking. Example 4.2. For our running example, we obtained the following outcomes after manual normalization: (1) bu: bursitis, (2) fi: footwear issues, (3) go: gout, (4) in: infection, (5) mn: Morton\u2019s neuroma, (6) msr: metatarsal stress reaction, (7) ni: neurological issue, (8) oi: overuse injury, (9) pf: plantar fasciitis, (10) sf: stress fracture, (11) te: tendonitis, (12) tr: trauma. The 5 answers for our running example correspond to the following partial oderings: oi \u227b1 fi \u227b1 tr \u227b1 in \u227b1 ni \u227b1 {oi, fi, tr, in, ni}, oi \u227b2 pf \u227b2 fi \u227b2 in \u227b2 go \u227b2 tr \u227b2 {oi, pf, fi, in, go, tr}, oi \u227b3 tr \u227b3 fi \u227b3 in \u227b3 {oi, tr, fi, in}, sf \u227b4 pf \u227b4 mn \u227b4 msr \u227b4 bu, \u227b4 {sf, pf, mn, msr, bu} oi \u227b5 te \u227b5 fi \u227b5 in \u227b5 {oi, te, fi, in}. 
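The partial orders of equation (8) can be materialized in the same pair representation as above; a small sketch (the function name and representation are ours), instantiated with the profile of Example 4.2:

```python
def answer_to_succ(answer, outcomes):
    """Build the strict-preference pairs of equation (8) from one ranked
    answer list: listed outcomes are ordered by their position, every
    listed outcome is preferred to every unlisted one, and unlisted
    outcomes stay mutually incomparable."""
    unlisted = [o for o in outcomes if o not in answer]
    succ = set()
    for i, a in enumerate(answer):
        for b in answer[i + 1:]:
            succ.add((a, b))      # earlier in the answer => more plausible
        for b in unlisted:
            succ.add((a, b))      # mentioned beats unmentioned
    return succ

# The five rankings of Example 4.2 (outcome abbreviations as in the paper):
O = {"bu", "fi", "go", "in", "mn", "msr", "ni", "oi", "pf", "sf", "te", "tr"}
profile = [
    answer_to_succ(["oi", "fi", "tr", "in", "ni"], O),
    answer_to_succ(["oi", "pf", "fi", "in", "go", "tr"], O),
    answer_to_succ(["oi", "tr", "fi", "in"], O),
    answer_to_succ(["sf", "pf", "mn", "msr", "bu"], O),
    answer_to_succ(["oi", "te", "fi", "in"], O),
]
```

Combined with the PBW sketch above, `pbw_score(O, profile)` reproduces column 7 of Table 1 below (e.g., 94 for oi and 39 for bu).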
We constructed the partial orders in our running example manually. In our experiments, we will use a more automated process that works as follows: (1) Determine Base-Outcomes: Query the LLM for a list of potential causes that we call base-outcomes. (2) Determine Rankings: Repeatedly ask the LLM for the most plausible causes and to rank them by their plausibility. We call these outcomes ranking-outcomes. (3) Normalize Rankings: Normalize the rankings by matching ranking-outcomes with base-outcomes. We use word embeddings (Sentence-BERT[21]) to map the ranking-outcomes to the most similar base-outcomes. If the similarity of a rankingoutcome to all base-outcomes is smaller than 0.5, it will be discarded (and reported). One can think of other methodologies to compute rankings from LLMs. To abstract from the details, let us assume that we have a transformation method \ud835\udc47(\ud835\udc44, \ud835\udc41,\ud835\udc61) of the following form. Definition 4.3. A transformation method \ud835\udc47(\ud835\udc44, \ud835\udc41,\ud835\udc61) takes a ranking query as input, prompts it \ud835\udc41times and produces a profile [\u2ab01, . . . , \u2ab0\ud835\udc41] from the answer rankings. The parameter \ud835\udc61represents the time at which the query has been prompted. The time parameter \ud835\udc61is only a technical device to take account of the fact that the output of LLMs is non-deterministic. It can also be seen as the (unknown) random seed of the LLM. The time parameter allows us talking about potentially different outputs when aggregating repeatedly for the same input. For example, say we aggregate the answers for \ud835\udc44five times and then again five times, then we can denote the two results by\ud835\udc47(\ud835\udc44, 5,\ud835\udc611) and\ud835\udc47(\ud835\udc44, 5,\ud835\udc612). We will use this notation for the discussion of the consistency property later. The notation is also useful to make the idea of robustness more precise. Assume that we have an aggregation method \ud835\udc34that aggregates the profiles obtained from a transformation method\ud835\udc47in some way. Roughly speaking, we say that a pair (\ud835\udc47,\ud835\udc34) consisting of a transformation method \ud835\udc47and an aggregation method \ud835\udc34is query-robust if the answers obtained for one query \ud835\udc44from \ud835\udc34(\ud835\udc47(\ud835\udc44, \ud835\udc41,\ud835\udc611)) and \ud835\udc34(\ud835\udc47(\ud835\udc44, \ud835\udc41,\ud835\udc612)) are \"similar\" when \ud835\udc41is chosen sufficiently large, syntax-robust if the answers for two syntactically different, but semantically similar queries \ud835\udc441,\ud835\udc442 from \ud835\udc34(\ud835\udc47(\ud835\udc441, \ud835\udc41,\ud835\udc611)) and \ud835\udc34(\ud835\udc47(\ud835\udc442, \ud835\udc41,\ud835\udc612)) are \"similar\" when \ud835\udc41is chosen sufficiently large. The choice of the similarity measure depends on the application. Correlation measures seem to be a natural choice for measuring similarity between rankings. Measuring similarity between queries is more difficult. For experiments, one simple way to generate similar queries is to make purely syntactical changes to a base query to obtain (almost) semantically equivalent queries. In our application, our aggregation method \ud835\udc34\ud835\udc43\ud835\udc35\ud835\udc4aranks the diagnoses from the given profile by their PBW score. We will use correlation measures to determine the similarity of these rankings for \ud835\udc41= 5 in our experiments. 
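Returning to the normalization step (3) above, a minimal sketch of the embedding-based matching using the sentence-transformers library; the specific checkpoint all-MiniLM-L6-v2 is our assumption, as the paper only states that Sentence-BERT [21] embeddings with a 0.5 similarity threshold are used:

```python
from sentence_transformers import SentenceTransformer, util

# Model checkpoint is our assumption; the paper only names Sentence-BERT [21].
model = SentenceTransformer("all-MiniLM-L6-v2")

def normalize_rankings(ranking_outcomes, base_outcomes, threshold=0.5):
    """Map each ranking-outcome to its most similar base-outcome;
    discard (and report) outcomes whose best similarity is below the
    threshold, as described in step (3)."""
    base_emb = model.encode(base_outcomes, convert_to_tensor=True)
    normalized = []
    for outcome in ranking_outcomes:
        emb = model.encode(outcome, convert_to_tensor=True)
        sims = util.cos_sim(emb, base_emb)[0]
        best = int(sims.argmax())
        if float(sims[best]) < threshold:
            print(f"discarded: {outcome!r} (max similarity {float(sims[best]):.2f})")
            continue
        normalized.append(base_outcomes[best])
    return normalized
```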
4.2 Answer Aggregation
In order to quantify the plausibility of different answers, we apply the PBW score. The larger the score, the more plausible the answer.
Table 1: PBW scores for the running example. The first column shows the outcomes, columns 2-6 the partial PBW scores per ranking, column 7 the PBW score $s^{\mathrm{PBW}}(o)$, and column 8 the normalized PBW score $\bar{s}^{\mathrm{PBW}}(o)$ rounded to two digits.
o | ≻1 | ≻2 | ≻3 | ≻4 | ≻5 | $s^{\mathrm{PBW}}(o)$ | $\bar{s}^{\mathrm{PBW}}(o)$
bu | 6 | 5 | 7 | 14 | 7 | 39 | 0.06
fi | 20 | 18 | 18 | 6 | 18 | 80 | 0.12
go | 6 | 14 | 7 | 6 | 7 | 40 | 0.06
in | 16 | 16 | 16 | 6 | 16 | 70 | 0.10
mn | 6 | 5 | 7 | 18 | 7 | 43 | 0.07
msr | 6 | 5 | 7 | 16 | 7 | 41 | 0.06
ni | 14 | 5 | 7 | 6 | 7 | 39 | 0.06
oi | 22 | 22 | 22 | 6 | 22 | 94 | 0.14
pf | 6 | 20 | 7 | 20 | 7 | 60 | 0.09
sf | 6 | 5 | 7 | 22 | 7 | 47 | 0.07
te | 6 | 5 | 7 | 6 | 20 | 44 | 0.07
tr | 18 | 12 | 20 | 6 | 7 | 63 | 0.10
To make the interpretation of the scores easier, we normalize them such that all values are between 0 and 1. We let $\bar{s}^{\mathrm{PBW}}(o) = \frac{s^{\mathrm{PBW}}(o)}{\sum_{o' \in O} s^{\mathrm{PBW}}(o')}$ (9) Table 1 shows the PBW scores for our running example.
4.3 Properties
We now discuss some analytical guarantees of our approach. Let us note that the normalized PBW score $\bar{s}^{\mathrm{PBW}}$ is just a rescaling of the PBW score $s^{\mathrm{PBW}}$. Therefore, the outcomes with maximal score and their relative order remain unchanged. To begin with, let us reinterpret the properties from Theorem 3.3 in our setting.
Consistency: Let $Q$ be a ranking query and let $p_1 = T(Q, N_1, t_1)$, $p_2 = T(Q, N_2, t_2)$. If $o$ has maximum score with respect to both $s^{\mathrm{PBW}}(p_1)$ and $s^{\mathrm{PBW}}(p_2)$, then $o$ also has maximum score with respect to $s^{\mathrm{PBW}}(p_1 \cup p_2)$.
Faithfulness: If we prompt the query only once, then the highest-ranked outcome obtains the maximum score.
Neutrality: The score of outcomes is independent of their identity.
Cancellation: If for all outcomes $o_1 \neq o_2$, the number of rankings that rank $o_1$ above $o_2$ equals the number of rankings that rank $o_2$ above $o_1$, then all outcomes get the same score.
As explained before, the above properties are sufficient to characterize PBW scoring [3]. That is, there is no other scoring function that satisfies all these properties (up to affine transformations). Since all properties seem desirable in our setting, $\bar{s}^{\mathrm{PBW}}$ is a natural choice. In the following proposition, we note some additional desirable properties of $s^{\mathrm{PBW}}$ and $\bar{s}^{\mathrm{PBW}}$ in our setting. The properties also hold for other instantiations of (5) as long as $\alpha > \beta$ remains satisfied.
Proposition 4.4. Let $Q$ be a query that was prompted $N$ times and resulted in the outcomes $O$ and profile $p = [\succ_1, \ldots, \succ_N]$.
Partial Agreement: If there are $o_1, o_2 \in O$ such that $o_1 \succ_i o_2$ for all $1 \leq i \leq N$, then $s^{\mathrm{PBW}}(o_1) > s^{\mathrm{PBW}}(o_2)$.
Full Agreement: If prompting the query repeatedly resulted in the same rankings, that is, \u227b\ud835\udc56= \u227b\ud835\udc57for all 1 \u2264\ud835\udc56< \ud835\udc57\u2264\ud835\udc41, then \ud835\udc60PBW(\ud835\udc5c1) > \ud835\udc60PBW(\ud835\udc5c2) if and only if \ud835\udc5c1 \u2ab0\ud835\udc56\ud835\udc5c2. Domination: If there is an \ud835\udc5c\u2217\u2208\ud835\udc42such that \ud835\udc5c\u2217\u227b\ud835\udc56\ud835\udc5cfor all 1 \u2264 \ud835\udc56\u2264\ud835\udc41and \ud835\udc5c\u2208\ud835\udc42\\ {\ud835\udc5c\u2217}, then arg max\ud835\udc5c\u2208\ud835\udc42\ud835\udc60PBW(\ud835\udc5c) = {\ud835\udc5c\u2217}. Proof. 1. The assumptions imply that Down\u2ab0\ud835\udc56(\ud835\udc5c1) > Down\u2ab0\ud835\udc56(\ud835\udc5c2) and therefore \ud835\udc64PBW \u227b\ud835\udc56 (\ud835\udc5c1) > \ud835\udc64PBW \u227b\ud835\udc56 (\ud835\udc5c2) for all 1 \u2264\ud835\udc56\u2264\ud835\udc41. Hence, \ud835\udc60PBW(\ud835\udc5c1) > \ud835\udc60PBW(\ud835\udc5c2). The same is true for \ud835\udc60PBW because it is just a rescaling of \ud835\udc60PBW. 2. Since all rankings are equal, the outcomes are totally ordered by \u2ab0=\u2ab01 in our setting. Hence, if \ud835\udc42= {\ud835\udc5c1, . . . ,\ud835\udc5c\ud835\udc5a} and \ud835\udc5c1 \u2ab0\ud835\udc5c2 \u2ab0 \u00b7 \u00b7 \u00b7 \u2ab0\ud835\udc5c\ud835\udc5a, then \ud835\udc64PBW(\ud835\udc5c\ud835\udc56) = 2 \u00b7 Down\u2ab0(\ud835\udc5c\ud835\udc56) = 2 \u00b7 (\ud835\udc5a\u2212\ud835\udc56). Hence, \ud835\udc60PBW(\ud835\udc5c\ud835\udc56) > \ud835\udc60PBW(\ud835\udc5c\ud835\udc57) if and only if \ud835\udc5c\ud835\udc56is ranked higher than \ud835\udc5c\ud835\udc57. The same is true for \ud835\udc60PBW because it is just a rescaling of \ud835\udc60PBW. 3. Partial agreement implies that \ud835\udc60PBW(\ud835\udc5c\u2217) > \ud835\udc60PBW(\ud835\udc5c) for all \ud835\udc5c\u2208\ud835\udc42\\ {\ud835\udc5c\u2217}, which implies the claim. \u25a1 5 EXPERIMENTS To assess the effectiveness of our approach, we conduct experiments on three sets of ranking queries from manufacturing, finance, and medicine. We first describe our methodology for generating ranking queries and extracting responses in a semi-automatic manner (Algorithm.1 provides an overview of the generation process). Subsequently, we will introduce the selected baseline approaches and the metrics used to assess the robustness of the aggregated answers. Code is available at https://github.com/boschresearch/RobustLLM/. 5.1 Generation of Ranking Queries 5.1.1 Generate Symptom-Cause Matrices. To generate ranking queries for our experiments, we first generate symptom-cause matrices, which contain information about a list of underlying critical problems and the possible symptoms we could observe. We generate those matrices with ChatGPT. In the first step, we ask ChatGPT for a list of critical problems (causes) C in a specific domain with the following prompts: \"In manufacturing, what are the critical problems that can severely impact the health and overall performance of the factory? Output a list of those problems and rank them based on degree of risk to factory.\" \"What are the critical financial problems that can severely impact the health and overall performance of a company? Output a list of those problems and rank them based on degree of risk to company.\" \"What are common diseases with similar symptoms?\" The first step, gives us the possible diagnoses for the domain. 
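As an illustration of this first step, a sketch of eliciting the candidate causes through the OpenAI chat API; the model identifier and the list-parsing heuristic are our assumptions, as the paper only states that ChatGPT-turbo is queried:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str, temperature: float = 1.0) -> str:
    """One chat completion, starting from an empty context each time."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # 'ChatGPT-turbo' in the paper; exact model is our assumption
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return resp.choices[0].message.content

# Step 1: elicit the candidate diagnoses for a domain (manufacturing prompt from the paper).
causes_text = ask(
    "In manufacturing, what are the critical problems that can severely "
    "impact the health and overall performance of the factory? Output a "
    "list of those problems and rank them based on degree of risk to factory."
)
# Hypothetical parsing heuristic: one cause per line, numbering/bullets stripped;
# the paper normalizes the resulting items semi-automatically.
causes = [line.strip(" -0123456789.") for line in causes_text.splitlines() if line.strip()]
```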
In the second step, we generate symptoms for each diagnosis with the following prompt: \"What can we observe in factory/company/human body to identify the underlying problem <the specific problem>? Output a list of indicators and rank them based on your confidence.\" Similar to Section 4.1, we summarize and normalize the symptoms into a list denoted as $\mathcal{S}$, eliminating redundancy arising from synonyms and syntactic variations. Subsequently, we generate matrices as presented in Tables 4, 5, and 6 (see Appendix A).
5.1.2 Sample Symptom Sets. In real-world scenarios, we have to make a diagnosis based on a set of symptoms. Given a list of symptoms $\mathcal{S}$ and diagnoses $\mathcal{D}$ for a particular domain, we let $s_q \subseteq \mathcal{S}$ be a subset of the symptoms, which is used in the condition description of a ranking query. For example, {Unplanned maintenance, Increased rework and scrap, Increased product recalls, Increased cost, Increased carrying costs} is a subset of size 5 for the manufacturing domain. We let $s_d$ denote the set of all possible symptoms that we could observe for one specific diagnosis $d \in \mathcal{D}$. The number of all potentially possible symptom sets (all subsets of $\mathcal{S}$) is too large for our experiments. To find a set of reasonable size, we first quantify the uncertainty of symptom sets and then sample a subset of symptom sets based on their uncertainty. Intuitively, the uncertainty of a symptom set is lowest if it uniquely identifies a diagnosis. The uncertainty is highest if all diagnoses are compatible with the symptom set. We use the Jaccard similarity to measure the similarity between a symptom set $s_q$ and the symptoms $s_d$ associated with diagnosis $d$: $\mathrm{Sim}(s_q, s_d) = \frac{|s_q \cap s_d|}{|s_q \cup s_d|}$ (10) We normalize it such that, for every symptom set $s_q$, the similarity values to the different diagnoses sum up to 1: $\overline{\mathrm{Sim}}(s_q, s_d) = \frac{\mathrm{Sim}(s_q, s_d)}{\sum_{d' \in \mathcal{D}} \mathrm{Sim}(s_q, s_{d'})}$ (11) Finally, we quantify the uncertainty of symptom set $s_q$ by calculating the normalized entropy of the similarity distribution: $U(s_q) = -\frac{1}{\log_2(|\mathcal{D}|)} \sum_{d \in \mathcal{D}} \overline{\mathrm{Sim}}(s_q, s_d) \log_2(\overline{\mathrm{Sim}}(s_q, s_d))$ (12) Note that the entropy is always between 0 and $\log_2(|\mathcal{D}|)$, hence our normalized entropy is always between 0 and 1. In order to investigate our method in lower/higher uncertainty settings, we sample two types of symptom sets for each query set based on $U(s_q)$. Since the majority of symptom sets are in the high uncertainty region, we pick the 1000 lowest-uncertainty symptom sets for the low uncertainty query set. For the high uncertainty set, we focus on sets with uncertainty between 0.7 and 0.8.
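Equations (10)-(12) admit a direct implementation; a minimal sketch (the handling of a symptom set that matches no diagnosis is our own assumption, as the paper does not specify this edge case):

```python
import math

def jaccard(sq: set, sd: set) -> float:
    """Equation (10): |sq ∩ sd| / |sq ∪ sd|."""
    return len(sq & sd) / len(sq | sd)

def normalized_entropy(sq: set, symptoms_by_diagnosis: dict) -> float:
    """Equations (11)-(12): normalized entropy of the similarity
    distribution of symptom set `sq` over all diagnoses."""
    sims = {d: jaccard(sq, sd) for d, sd in symptoms_by_diagnosis.items()}
    total = sum(sims.values())
    if total == 0:
        return 1.0  # assumption: a set matching no diagnosis is maximally uncertain
    # Zero-similarity terms are dropped, using the 0 * log 0 = 0 convention.
    probs = [s / total for s in sims.values() if s > 0]
    entropy = -sum(p * math.log2(p) for p in probs)
    return entropy / math.log2(len(symptoms_by_diagnosis))
```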
More precisely, the two symptom sets have been computed as follows: \u2022 low uncertainty symptom sets \ud835\udc46\ud835\udc59\ud835\udc5c\ud835\udc64: we sort the potential symptom sets by normalized entropy and select the 1000 symptom sets with minimum normalized entropy. \u2022 high uncertainty symptom sets \ud835\udc46\u210e\ud835\udc56\ud835\udc54\u210e: we randomly select 1000 symptom sets with normalized entropy in the range of 0.7 to 0.8. \"Given we observe <symptom 1>, <symptom 2>, ... what critical problems might exist in factory? Please output top 5 possible issues ranked by confidence without additional text.\" \"Given we observe <symptom 1>, <symptom 2>, ... what critical financial issue might we have in our company? Please output top 5 possible issues ranked by confidence without additional text.\" \"Given following symptoms: <symptom 1>, <symptom 2>, ... what disease might the patient have? Please output top 5 possible issues ranked by confidence without additional text.\" Figure 1: Query templates for evaluating query uncertainty Variant 1: \"Given we detect <symptom 1>, <symptom 2>, ... what essential issues might exist in factory? Please output top 5 possible issues ranked by confidence without additional text.\" Variant 2: \"What potentially serious problems in the manufacturing may there be if we notice <symptom 1>, <symptom 2>, ... ? Please output top 5 possible issues ranked by confidence without additional text.\" Figure 2: Syntactic variants of the manufacturing query. We visualize the uncertainty distribution of \ud835\udc46\ud835\udc59\ud835\udc5c\ud835\udc64and \ud835\udc46\u210e\ud835\udc56\ud835\udc54\u210ewith histograms in A.2. 5.2 From Symptom Sets to Ranking Queries We study robustness with respect to query and syntax uncertainty in our experiments. To evaluate query uncertainty, we convert symptom sets to ranking queries using the template in Figure 1. To evaluate syntax uncertainty, we designed two query variants to investigate the effect of syntactic query changes that are semantically meaningless. In the first variant, we only replace part of the words with synonyms without changing the structure of the queries (e.g. we replace \"observe\" with \"detect\" and replace \"critical problems\" with \"essential issues\"). In the second variant, we also change the structure of the query. An an example, Figure 2 shows the variants of the manufacturing ranking template stated before. 5.3 Evaluation Protocol We evaluate the robustness of our approach over three batches of ranking queries i.e. manufacturing, finance and medical queries, compared with two baseline approaches. 5.3.1 Baselines. 
Algorithm 1 The pseudocode of ranking query generation
/* generate symptom-cause matrices */
$\mathcal{D} \leftarrow$ query ChatGPT
for $d \in \mathcal{D}$ do
  $s_d \leftarrow$ query ChatGPT
  add $s_d$ to a list: $L_s$.append($s_d$)
end for
symptom-cause matrix $\leftarrow$ summarize and normalize $L_s$
/* sample symptom sets */
for each candidate symptom set $s_q$ do
  calculate the Jaccard similarity for every $d \in \mathcal{D}$: $\mathrm{Sim}(s_q, s_d) \leftarrow \frac{|s_q \cap s_d|}{|s_q \cup s_d|}$
  normalize: $\overline{\mathrm{Sim}}(s_q, s_d) \leftarrow \frac{\mathrm{Sim}(s_q, s_d)}{\sum_{d' \in \mathcal{D}} \mathrm{Sim}(s_q, s_{d'})}$
  calculate the normalized entropy: $U(s_q) \leftarrow -\frac{1}{\log_2(|\mathcal{D}|)} \sum_{d \in \mathcal{D}} \overline{\mathrm{Sim}}(s_q, s_d) \log_2(\overline{\mathrm{Sim}}(s_q, s_d))$
  add $U(s_q)$ to the list of indicator entropies
end for
Rank the candidate symptom sets by normalized entropy (from largest to smallest).
$S_{low} \leftarrow$ the last 1000 indicator sets.
$S_{high} \leftarrow$ randomly select 1000 symptom sets with normalized entropy in the range of 0.7 to 0.8.
/* generate ranking queries from symptom sets */
Convert the symptom sets into ranking queries using the query templates.
- Without Aggregation: we do not aggregate the rank answers and directly evaluate the robustness of single answers.
- Average Rank: we treat each rank preference equally and aggregate the ranks by simply averaging them. Given $N$ ranks $r_1, \ldots, r_N$ to be aggregated, the aggregation function is defined as $A(r_1, \ldots, r_N) = \frac{1}{N} \sum_{i=1}^{N} r_i$ in this case.
5.3.2 Evaluation Metrics. We use Kendall's rank correlation coefficient ($R_\tau$) [13] and Spearman's rank correlation coefficient ($R_s$) [25] to evaluate the robustness of the aggregated ranks. Let $n$ be the number of items to be ranked. Kendall's rank correlation coefficient is defined as follows: $R_\tau = \frac{C - D}{\binom{n}{2}} = \frac{2(C - D)}{n(n-1)}$, (13) where $C$ is the number of concordant pairs (pairs that have the same order in the predicted and ground-truth ranks) and $D$ is the number of discordant pairs (pairs that have a different order in the two ranks). A higher $R_\tau$ value indicates a better match between the predicted and true ranks.
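A sketch of the "average rank" baseline and of Kendall's $R_\tau$ via scipy; representing each rank as an outcome-to-position dictionary (1 = most plausible) is our own choice:

```python
from scipy.stats import kendalltau

def average_rank(ranks):
    """'Average rank' baseline: A(r_1, ..., r_N) = (1/N) * sum_i r_i,
    where each r_i maps an outcome to its rank position (1 = best)."""
    outcomes = ranks[0].keys()
    return {o: sum(r[o] for r in ranks) / len(ranks) for o in outcomes}

def kendall(rank_a, rank_b):
    """Kendall's R_tau of equation (13) between two rankings, each given
    as an outcome -> position dictionary over the same outcome set."""
    outcomes = sorted(rank_a)
    tau, _ = kendalltau([rank_a[o] for o in outcomes],
                        [rank_b[o] for o in outcomes])
    return tau
```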
Spearman's rank correlation coefficient is defined as follows: $R_s = \frac{\mathrm{cov}(rank_1, rank_2)}{\sigma_{rank_1} \cdot \sigma_{rank_2}}$, (14) where $\mathrm{cov}(\cdot)$ is the covariance between two variables and $\sigma$ is the standard deviation. Similar to Kendall's tau, a higher $R_s$ value indicates a better match between the predicted and true ranks. Algorithm 2 illustrates our approach to evaluating ranking queries.
Algorithm 2 The pseudocode of ranking query evaluation
/* evaluate query robustness */
Require: $Q$
for $i \leftarrow 1 \ldots K$ do
  $p_i \leftarrow A(T(Q, N, t_i))$
  add $p_i$ to a list: $P$.append($p_i$)
end for
$R_Q = \frac{1}{2\binom{K}{2}} \sum_{p_1, p_2 \in P,\, p_1 \neq p_2} \mathrm{score}(p_1, p_2)$
/* evaluate syntax robustness */
Require: $Q_1, \ldots, Q_K$
for $i \leftarrow 1 \ldots K$ do
  $p_i \leftarrow A(T(Q_i, N, t_i))$
  add $p_i$ to a list: $P$.append($p_i$)
end for
$R_Q = \frac{1}{2\binom{K}{2}} \sum_{p_1, p_2 \in P,\, p_1 \neq p_2} \mathrm{score}(p_1, p_2)$
5.4 Experiment Settings
In our experiments, we evaluate the robustness of the answer with respect to repeated queries (query uncertainty) and syntactic changes (syntax uncertainty). Algorithm 2 briefly explains how we evaluate the robustness of the aggregated ranks.
5.4.1 Evaluation of Query Uncertainty. To evaluate the robustness with respect to repeated queries, we query ChatGPT $N$ times with ranking query $Q$ at times $t_1, \ldots, t_K$ and aggregate the answers with aggregation function $A$ to get $K$ aggregated answers. Note that in our experiments, we specifically set $K = 3$ and $N = 5$ ($N = 1$ when $A$ is \"without aggregation\", since we do not aggregate answers in this baseline). The overall robustness $R_Q$ of the query is evaluated by calculating the pairwise Kendall and Spearman rank correlation coefficients (we use $\mathrm{score}(x, y)$ to denote the calculation of both coefficients) and averaging the coefficients. The mean values and standard deviations of all $R_Q$ are reported in our results.
5.4.2 Evaluation of Syntax Uncertainty. We also evaluate the robustness with respect to syntactic changes; the process is very similar to the evaluation of query uncertainty. The only difference is that in this case, instead of repeatedly aggregating outputs for the same $Q$, we use $K$ different ranking queries with the same semantic meaning but different syntax. In our experiment, $K = 3$.
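Putting the pieces together, a sketch of the evaluation loop of Algorithm 2; summing $\mathrm{score}$ over ordered pairs and dividing by $2\binom{K}{2}$ is equivalent to averaging over unordered pairs, which is what the sketch does, and `transform` ($T(Q, N, t)$), `aggregate`, and `score` stand for the components sketched earlier:

```python
from itertools import combinations

def robustness(aggregated, score):
    """Average pairwise similarity over the K aggregated rankings."""
    pairs = list(combinations(aggregated, 2))
    return sum(score(p, q) for p, q in pairs) / len(pairs)

def query_robustness(query, transform, aggregate, score, K=3, N=5):
    """Query-uncertainty protocol: aggregate N answers, repeat K times,
    then average the pairwise correlations of the aggregated rankings."""
    aggregated = [aggregate(transform(query, N, t)) for t in range(K)]
    return robustness(aggregated, score)

def syntax_robustness(queries, transform, aggregate, score, N=5):
    """Syntax-uncertainty protocol: same loop over K syntactic variants."""
    aggregated = [aggregate(transform(q, N, t)) for t, q in enumerate(queries)]
    return robustness(aggregated, score)
```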
5.5 Evaluation of Query Uncertainty

Table 2 presents the results for query uncertainty. Our approach consistently outperforms both baselines, "without aggregation" and "average rank," across all three ranking query sets, demonstrating its superiority in both high- and low-uncertainty scenarios.

5.6 Evaluation of Syntax Uncertainty

Table 3 provides an overview of the outcomes for syntax uncertainty. Our approach outperforms both baseline methods in the majority of the scenarios examined. We observe a substantial reduction of both Kendall's and Spearman's coefficients compared to Table 2, which suggests that syntactic variants introduce more variability.

Table 2: Evaluation of query uncertainty: we submit the same ranking query to ChatGPT-turbo five times and then aggregate the results. We repeat this process three times and evaluate the robustness of the three aggregated results. The temperature is set to 1, which is the default setting in the web version of ChatGPT. Entries are mean (std) of Kendall's / Spearman's coefficients.

High uncertainty ranking queries:
Dataset        without aggregation          average rank                 PBW (ours)
Manufacturing  0.29 (0.29) / 0.34 (0.33)    0.63 (0.14) / 0.76 (0.13)    0.78 (0.09) / 0.84 (0.08)
Finance        0.49 (0.45) / 0.53 (0.46)    0.74 (0.20) / 0.82 (0.18)    0.81 (0.11) / 0.86 (0.09)
Medical        0.54 (0.44) / 0.59 (0.46)    0.75 (0.28) / 0.81 (0.27)    0.83 (0.14) / 0.88 (0.13)

Low uncertainty ranking queries:
Dataset        without aggregation          average rank                 PBW (ours)
Manufacturing  0.33 (0.30) / 0.38 (0.34)    0.62 (0.13) / 0.74 (0.13)    0.75 (0.09) / 0.81 (0.09)
Finance        0.57 (0.43) / 0.60 (0.44)    0.75 (0.18) / 0.82 (0.16)    0.79 (0.11) / 0.84 (0.10)
Medical        0.56 (0.60) / 0.58 (0.61)    0.67 (0.53) / 0.70 (0.54)    0.84 (0.23) / 0.85 (0.23)

Table 3: Evaluation of syntax uncertainty: we submit the same ranking query to ChatGPT-turbo five times and then aggregate the results. We repeat this process for three syntactic variants and evaluate the robustness of the three aggregated results. The temperature is set to 1, which is the default setting in the web version of ChatGPT. Entries are mean (std) of Kendall's / Spearman's coefficients.

High uncertainty ranking queries:
Dataset        without aggregation          average rank                 PBW (ours)
Manufacturing  0.25 (0.20) / 0.29 (0.23)    0.27 (0.19) / 0.32 (0.22)    0.43 (0.18) / 0.46 (0.19)
Finance        0.56 (0.32) / 0.62 (0.31)    0.57 (0.25) / 0.63 (0.25)    0.66 (0.17) / 0.71 (0.18)
Medical        0.64 (0.28) / 0.71 (0.26)    0.71 (0.22) / 0.78 (0.20)    0.80 (0.16) / 0.84 (0.15)

Low uncertainty ranking queries:
Dataset        without aggregation          average rank                 PBW (ours)
Manufacturing  0.31 (0.23) / 0.35 (0.25)    0.31 (0.20) / 0.37 (0.23)    0.46 (0.20) / 0.49 (0.21)
Finance        0.61 (0.34) / 0.67 (0.32)    0.55 (0.25) / 0.62 (0.25)    0.66 (0.16) / 0.72 (0.16)
Medical        0.83 (0.26) / 0.85 (0.24)    0.84 (0.22) / 0.87 (0.19)    0.85 (0.18) / 0.86 (0.18)

[Figure 3: Robustness with respect to the number of answers used for aggregation. Panels: Manufacturing, Finance, Medical; x-axis: #aggregation (2 to 5); y-axis: Kendall's tau; curves: without aggregation, average rank, PBW.]

5.7 Evaluation of Sample Efficiency

Another important question is how many answers we need to aggregate, i.e., how to choose the parameter $N$ for our transformation method $T(Q, N, t)$. Figure 3 shows the robustness with respect to the number of answers used for aggregation. Even aggregating only two answers with our approach already significantly increases the robustness.
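For completeness, the following is a minimal Python sketch of the "average rank" baseline defined above (the paper's PBW method, based on the Partial Borda Choice function, is not reproduced here); varying the number of sampled answers $N$ corresponds to the sweep shown in Figure 3. All names and the toy data are illustrative assumptions.

from statistics import mean

def average_rank_aggregate(rank_vectors):
    # Average-rank baseline: A(r_1, ..., r_N) = (1/N) * sum_i r_i,
    # then the mean ranks are converted back into a ranking (1 = best).
    # Ties are broken by item index in this sketch.
    n_items = len(rank_vectors[0])
    avg = [mean(r[j] for r in rank_vectors) for j in range(n_items)]
    order = sorted(range(n_items), key=lambda j: avg[j])
    aggregated = [0] * n_items
    for pos, j in enumerate(order, start=1):
        aggregated[j] = pos
    return aggregated

# Aggregating N = 3 sampled answers over four candidate causes.
answers = [[1, 2, 3, 4], [2, 1, 3, 4], [1, 3, 2, 4]]
print(average_rank_aggregate(answers))  # -> [1, 2, 3, 4]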
Note that Figure 3 shows the robustness for query uncertainty (the high-uncertainty version) and reports only Kendall's tau; we observed similar trends in the other settings.

6 CONCLUSIONS

To improve the robustness of answers from LLMs, we propose to sample answers repeatedly and to aggregate them using methods from social choice theory. Our approach is based on the Partial Borda Choice function, as it provides several interesting analytical guarantees. Our investigation primarily focuses on the application of ranking queries in diagnostic contexts, such as medical and fault diagnosis. Our experiments show that our approach significantly improves robustness against both query and syntax uncertainty.

Queries that ask for a single most plausible answer can be understood as a degenerate special case of our ranking queries: an answer $o$ can be read as the partial preference $o \succ O \setminus \{o\}$ (the provided answer is ranked above all other answers, and the ranking is indifferent among all other answers). In this special case, our average-rank baseline corresponds to majority voting. One interesting avenue for future work is to compare partial Borda voting in the single-answer setting to other non-ranking voting methods. One may also interpret a ranking as an expression of approval (an answer approves of a diagnosis if the diagnosis is mentioned) and aggregate the answers using approval voting methods.

Uncertainty in LLM outputs can also be caused by meaningless information or adversarial attacks injected into the queries. In future work, we aim to investigate whether methods from social choice theory can also effectively improve the robustness of LLM outputs in the presence of such perturbations.

7 ACKNOWLEDGEMENTS

The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Yuqicheng Zhu. The work was partially supported by the Horizon Europe project EnrichMyData (Grant Agreement No. 101070284)."
}
]
}