| { |
| "url": "http://arxiv.org/abs/2404.16670v1", |
| "title": "EmoVIT: Revolutionizing Emotion Insights with Visual Instruction Tuning", |
| "abstract": "Visual Instruction Tuning represents a novel learning paradigm involving the\nfine-tuning of pre-trained language models using task-specific instructions.\nThis paradigm shows promising zero-shot results in various natural language\nprocessing tasks but is still unexplored in vision emotion understanding. In\nthis work, we focus on enhancing the model's proficiency in understanding and\nadhering to instructions related to emotional contexts. Initially, we identify\nkey visual clues critical to visual emotion recognition. Subsequently, we\nintroduce a novel GPT-assisted pipeline for generating emotion visual\ninstruction data, effectively addressing the scarcity of annotated instruction\ndata in this domain. Expanding on the groundwork established by InstructBLIP,\nour proposed EmoVIT architecture incorporates emotion-specific instruction\ndata, leveraging the powerful capabilities of Large Language Models to enhance\nperformance. Through extensive experiments, our model showcases its proficiency\nin emotion classification, adeptness in affective reasoning, and competence in\ncomprehending humor. The comparative analysis provides a robust benchmark for\nEmotion Visual Instruction Tuning in the era of LLMs, providing valuable\ninsights and opening avenues for future exploration in this domain. Our code is\navailable at \\url{https://github.com/aimmemotion/EmoVIT}.", |
| "authors": "Hongxia Xie, Chu-Jun Peng, Yu-Wen Tseng, Hung-Jen Chen, Chan-Feng Hsu, Hong-Han Shuai, Wen-Huang Cheng", |
| "published": "2024-04-25", |
| "updated": "2024-04-25", |
| "primary_cat": "cs.CV", |
| "cats": [ |
| "cs.CV", |
| "cs.AI" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM AND Reasoning", |
| "gt": "Visual emotion recognition, a key area within artificial in- telligence and computer vision, aims to predict human emo- tions based on visual cues such as facial expressions and body language. This technology is essential in bridging the gap between human affective states and machine under- standing. Its diverse applications [10, 13, 22, 39], spanning from improving human-computer interaction to aiding in mental health assessment, underscore its significance. Ac- curate emotion recognition is vital for enhancing user expe- Figure 1. Illustration of the importance of instruction-following abil- ity in visual emotion understanding. rience and ensuring information security, as it helps prevent emotional manipulation and misinformation [32]. Develop- ing robust emotion recognition models is not only a techni- cal challenge but also a step towards more empathetic and intuitive AI systems, paving the way for more efficient and natural human-computer interactions. The AI community has recently shown a growing interest in developing foundational vision models, e.g., Flamingo [8], LLaVA [7], BLIP2 [14]. These models ex- cel in open-world visual understanding, tackling several vi- sion tasks such as classification, detection, segmentation, and captioning. In contrast, current large-scale multimodal models are still in its infancy when it comes to emotion per- ception [20]. As illustrated in Fig. 1, when directly query the GPT-4 [29] about the emotional category of an image, the model tends to provide incorrect responses. However, the model delivers accurate responses when provided with revised instructions. To fully leverage the potential of ex- isting vision-based large models, our approach is based on the concept of Instruction Tuning. This effective strategy is aimed at teaching language models to follow natural lan- guage instructions, a technique proven to enhance their gen- eralization performance across unseen tasks [7, 9, 21]. 1 arXiv:2404.16670v1 [cs.CV] 25 Apr 2024 In this work, we focus on developing the model\u2019s profi- ciency in understanding and following instructions related to emotional contexts. This approach highlights the impor- tance of fine-tuning the model\u2019s instruction-following ca- pabilities, enabling it to interpret and respond to emotional content effectively. This is achieved by leveraging its pre- existing knowledge base, thereby eliminating the necessity for an emotion-specific architectural framework. To address the notable challenges encountered in In- struction Tuning for visual emotion recognition, especially the lack of specific instruction data, we introduce a novel self-generation pipeline explicitly crafted for visual emo- tion recognition by using GPT-4 [29]. This innovative pipeline excels in generating a diverse array of (image, in- struction, output) instances, thereby notably enhancing the dataset with a more extensive and task-oriented variety of examples. This approach not only overcomes the challenge of limited data availability but also reduces the dependence on human labor. Therefore, it streamlines the process, en- abling more efficient and effective emotion recognition. Additionally, Instruction Tuning has been criticized for its emphasis on surface-level features like output patterns and styles, rather than achieving a profound comprehen- sion and assimilation of tasks [23]. 
To tackle this issue and enhance the diversity and creativity of instruction data, our dataset includes instructions that demand complex reasoning, going beyond basic question-and-answer formats. This is further enriched by incorporating visual cues such as brightness, colorfulness, scene type, object class, facial expressions, and human actions. These aspects are pivotal in fostering a nuanced comprehension of visual emotions, thus allowing the model to generate more precise and contextually appropriate interpretations [13].

After generating the emotion visual instruction data, we propose an Emotion Visual Instruction Tuning (EmoVIT) framework, leveraging the foundation of InstructBLIP [9]. This framework incorporates an emotion-centric, instruction-aware module that proficiently guides Large Language Models (LLMs) in assimilating the nuances of emotion instructions. Our work signifies a paradigm shift, presenting a new era of instruction-based learning for visual emotion understanding that relies less on explicit training data. Remarkably, as shown in Fig. 2, our approach requires only about 50% of the training data typically needed yet exceeds the performance of previous visual emotion recognition methods and popular Visual Instruction Tuning methods.

Figure 2. Performance comparison on EmoSet test set [13] (Accuracy %).

Our contributions can be summarized as follows:
• We explore the potential of the Visual Instruction Tuning paradigm for emotion comprehension and introduce the concept of Emotion Visual Instruction Tuning.
• After thoroughly considering the unique characteristics of visual emotion recognition, we develop a novel GPT-assisted pipeline for generating emotion visual instruction data. This approach effectively bridges the gap in available annotated instruction data within this specific domain.
• Building upon the foundation of InstructBLIP, our EmoVIT architecture integrates emotion domain-specific instruction data, harnessing the robust capabilities of LLMs to boost performance. The extensive experiments demonstrate our model's proficiency in emotion classification, affective reasoning, and comprehension of humour.", |
| "main_content": "2.1. Visual Emotion Recognition A key challenge in visual emotion recognition is bridging the gap between an image\u2019s visual cues and the emotions it portrays [11, 12, 35]. While traditional efforts, e.g., Xu et al.\u2019s multi-level dependent attention network [12], focus on visual models for emotional feature learning, recent advancements like EmoSet [13] offer rich emotion-laden datasets with 3.3 million images. The rise of multimodal models, such as the GPT series [29], has further propelled Vision-Language Recognition. However, fully leveraging these models in emotion recognition is an area ripe for exploration. Our work leads the way in utilizing large-scale models for Emotion Visual Instruction Tuning. 2.2. Visual Instruction Tuning Current Large Language Models (LLMs) have extensive knowledge bases, but their effectiveness depends on accurately interpreting human instructions due to a mismatch 2 Figure 3. The comparison of different visual tuning paradigms. between training goals and user expectations. LLMs are trained to minimize prediction errors, whereas users expect helpful and safe instruction-following. Instruction Tuning addresses this by teaching models to follow natural language instructions, enhancing generalization to new tasks. FLAN [21] demonstrated that training a large model on instruction-based datasets improves zero-shot performance. This approach has extended to vision-language tasks, with BLIP2 [14] and LLaVA [7] adapting instructiontuned LLMs for visual inputs. InstructBLIP [9] introduces instruction-aware visual feature extraction and the QFormer, enabling more flexible, instruction-driven feature extraction. As a novel area, visual emotion instruction tuning lacks benchmarks or guidelines for creating emotion instruction data. Our work pioneers the use of large-scale models to develop an emotion instruction data pipeline, overcoming the limitations of manual annotation. 3. Method 3.1. Preliminary of Visual Instruction Tuning In the deep learning era, visual tuning has experienced significant paradigm shifts, as depicted in Fig. 3. In Fig. 3(a), conventional tuning methodologies encompass Full fine-tuning, Head-oriented, and Backboneoriented techniques, capitalizing on large-scale pre-trained models. Predominantly, thoroughly fine-tuning these models for specific tasks, conducted end-to-end, is recognized as a highly effective strategy. However, this method requires maintaining separate copies of the backbone parameters for each distinct task, posing challenges in storage and deployment. Alternatively, Visual Prompt Tuning (VPT) [24], presents an efficient substitute for full fine-tuning within large-scale vision Transformer models. It achieves this by employing a minimal fraction of trainable parameters in the input space while maintaining a frozen backbone model. The objective function for Visual Prompt Tuning is given by: min \u03b8P L(f(X, P; \u03b8P), Y ) (1) where min\u03b8P is the minimization over the prompt parameters P, L is the loss function, f represents the model function with input image X, prompt parameters P, and learnable model parameters \u03b8P as input, and Y is the target output. Visual Prompt Tuning focuses on optimizing LLMs using a small set of parameters, whereas Visual Instruction Tuning (VIT) aims to improve the model\u2019s comprehension of instructions, thereby addressing the model\u2019s shortcomings in specific domains. 
Visual Instruction Tuning aims to enhance the model's proficiency in following instructions, leveraging the capabilities of the latest foundation models, e.g., Llama [25] and BLIP2 [14]. Instructions serve as guiding constraints, shaping the model's outputs to conform to specific response characteristics and domain-relevant knowledge. This approach enables human monitoring of the model's behavior, thereby assuring alignment with the desired outcomes. Moreover, Instruction Tuning is computationally efficient, allowing LLMs to swiftly adapt to particular domains without extensive retraining or architectural alterations. The objective function for Visual Instruction Tuning is given by

\min_{\theta_{\mathrm{tunable}}} \mathcal{L}(g(X, I, C; \theta_{\mathrm{tunable}}), Y) \quad (2)

where \min_{\theta_{\mathrm{tunable}}} denotes the minimization over the tunable parameters \theta_{\mathrm{tunable}} in the Instruction Tuning Module, \mathcal{L} is the loss function, g is the model function taking as input the instruction I, the image X, other contexts C, and the tunable parameters \theta_{\mathrm{tunable}}, and Y denotes the target output. The optional context C is not just raw data; it encompasses descriptive or directive information guiding the model on how to process the input or which task to execute, e.g., an image caption. It is integral to the model's understanding and execution of tasks based on specific instructions or guidelines.

Figure 4. The overall architecture of our proposed method. The Emotion Instruction data generated by (a) will be used for Emotion Visual Instruction Tuning in (b). During Emotion Visual Instruction Tuning, given an input image, the frozen Image Encoder initiates the process by extracting visual features. Emotion Instructions generated by (a) subsequently interact with the query embeddings through the learnable Q-Former. This interaction is key to drawing out image features that are relevant to the task at hand. As a result, the frozen LLM receives visual information conducive to instruction following.

3.2. GPT-assisted Emotion Visual Instruction Data Generation

Previous methodologies commonly employed a consistent, template-based set of instructions for every image within a dataset across various specific tasks [9]. For instance, a standard instruction such as "Briefly describe the content of the image" was employed uniformly across all images for Image Captioning. In this way, the model may not be able to adequately capture the unique characteristics of each image. Moreover, this one-size-fits-all approach often leads to suboptimal performance in emotion recognition tasks that require nuanced perception and differentiation of ambiguous emotion classes. Since the topic of Emotion Visual Instruction Tuning is still in its infancy, no benchmarks or guidelines have been proposed so far for constructing emotion instruction data.
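To illustrate the contrast drawn above, a hypothetical template-based record versus an instance-wise emotion instruction record might look like the following. The field names and values are invented for illustration; the paper's actual data format is the dialogue format exemplified in Fig. 5.

```python
# A uniform, template-based instruction (identical for every image):
template_record = {
    "image": "example_001.jpg",
    "instruction": "Briefly describe the content of the image.",
}

# An instance-wise emotion instruction record, conditioned on attributes of the
# specific image (all values are illustrative, not from the released dataset):
instance_wise_record = {
    "image": "example_001.jpg",
    "attributes": {"emotion": "awe", "brightness": 0.71, "colorfulness": 0.63,
                   "scene": "mountain lake", "objects": ["person", "canoe"]},
    "instruction": "The scene shows a person paddling a canoe on a bright mountain lake. "
                   "Which emotion does the composition most likely evoke, and why?",
    "output": "Awe: the expansive landscape and calm water suggest wonder at nature's scale.",
}
```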
Based on the recent successes of machine-generated instructions demonstrated in LLaVA [7], our work pioneers the use of existing LLMs to create a pipeline for self-generating emotion instructions. Different from previous template-based and one-size-fits-all instruction data, we propose an instance-wise and LLM-assisted visual emotion instruction data pipeline. This methodology transcends the constraints of manual annotation by employing GPT-4 [29] to generate instance-wise, tailored instruction data that dynamically corresponds to visual content.

Prior to the development of instructional data for the visual emotion recognition task, it is imperative to confront a fundamental academic problem: what types of visual clues are pivotal in identifying emotions? This necessitates a careful consideration of the unique characteristics inherent to the task, along with a comprehensive understanding of the potential visual cues associated with human emotions. In this work, we propose a novel visual instruction data mechanism to remove the inherent subjectivity and ambiguity in emotional interpretation. Specifically, we integrate a broad spectrum of emotion attributes across multiple levels: low-level attributes (e.g., brightness, colorfulness), mid-level attributes (e.g., scene type and object class), and high-level attributes (e.g., facial expressions and human actions), building upon insights from previous work [13]. This comprehensive strategy not only aligns with the intricate nature of emotions but also significantly enhances the model's capability to interpret and understand visual emotional cues more accurately and holistically.

The overall pipeline of our proposed emotion visual instruction data is shown in Fig. 4(a). For an image X_img, three types of image-related contexts are essential for GPT-4 to generate emotion instruction data: (i) a caption X_c; (ii) an emotion attribute list X_attr, which includes emotion class, brightness, colorfulness, scene type, object class, facial expression, and human action; and (iii) the system prompt, designed to enable GPT-4 to comprehend the specific task requirement (a detailed description of the system prompt is provided in the supplementary materials). We first manually design a few examples which are used as seed examples for in-context learning to query GPT-4. This operation leverages the model's ability to extrapolate from given examples, enhancing its understanding and response accuracy based on the principles of few-shot learning [7]. Our generated emotion instruction data includes three types: Categorical, Conversation, and Reasoning. Building upon previous research [7], our generated instruction data adheres to the dialogue format, exemplified in Fig. 5.

Our strategy for generating emotion instruction data adopts a progressive approach from simple to complex. Initially, for the Categorical data, we transform the associated emotion class of the image into a structured format. This process serves as the foundational component of our emotion instruction data. For the Conversation data, our framework is designed to create dialogues in which the GPT assistant interacts with an inquirer, focusing on the emotion attributes of the image. In this setup, the assistant's responses are tailored to interpret and describe the image as though it were within its own visual field, thereby providing insights from an observational viewpoint. The scope of questions posed is comprehensive, encompassing the types of objects depicted, their actions, and the dynamics of their interrelationships.
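A minimal sketch of this generation step is given below, assuming the OpenAI Python client and abbreviating the system prompt and seed examples; the function and field names are ours, not the paper's released pipeline.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_emotion_instructions(caption: str, attributes: dict, system_prompt: str,
                                  seed_examples: list[dict]) -> str:
    """Query GPT-4 with the caption X_c, the attribute list X_attr, the system prompt,
    and a few manually written seed examples used for in-context learning."""
    user_content = (
        f"Caption: {caption}\n"
        f"Emotion attributes: {attributes}\n"
        "Generate Categorical, Conversation, and Reasoning instruction data "
        "for this image in dialogue format."
    )
    messages = [{"role": "system", "content": system_prompt}]
    for ex in seed_examples:  # seed (query, response) pairs written by hand
        messages.append({"role": "user", "content": ex["query"]})
        messages.append({"role": "assistant", "content": ex["response"]})
    messages.append({"role": "user", "content": user_content})
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    return response.choices[0].message.content
```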
The dialogues we generate fall into two categories: (i) Basic Interaction, focusing on the provided emotion attribute list with simple, direct characteristics, and (ii) Advanced Interaction, which builds on the first type to reach greater conversational complexity and sophistication. For the Reasoning data, our approach extends beyond mere visual content, prompting the model to generate in-depth reasoning questions. To enhance the dialogue's credibility and structure, detailed examples are incorporated alongside logical reasoning steps, ensuring that the discourse convincingly captures the intricacies of the visual content.

3.3. Emotion Visual Instruction Tuning

After acquiring the emotion visual instruction data as detailed in Sec. 3.2, our goal is to employ this data in enhancing the existing Visual Instruction Tuning model. This enhancement aims to align the LLMs' existing knowledge with the emotion understanding domain. As shown in Fig. 4(b), we have developed an Emotion Visual Instruction Tuning (EmoVIT) architecture based on InstructBLIP [9]. This architecture specifically leverages its Instruction-aware Q-Former Module, as depicted in Fig. 4(c), for emotion-centric instructional tasks.

Figure 5. The sample of our generated visual emotion instruction data.

Specifically, the Instruction-aware Q-Former Module takes in the emotion instruction tokens, queries, and image embeddings as input. The image embeddings are extracted by a frozen image encoder. The learnable queries are initially produced by the pre-trained Q-Former of InstructBLIP. During training, the Instruction-aware module enhances task-specific feature extraction. It does this by integrating emotion instruction and query embeddings within self-attention layers, aligning visual information with the LLM's instruction-following requirements. Our approach adopts cross-entropy loss, tailoring it to the intricacies of visual emotion recognition tasks, thus ensuring precise and contextually relevant model training outcomes. We note that the data generated by our approach is not confined to a single model but can also be applied to other Visual Instruction Tuning models, such as LLaVA [25]. Notably, when LLaVA is fine-tuned with our data, it exhibits a significant enhancement in emotion recognition capabilities, as detailed in Sec. 4.2. In this way, we demonstrate not only the effectiveness but also the transferability of our generated data, showing its broad applicability and impact.

4. Experimental Results

4.1. Implementation Details

Our implementation is based on the LAVIS library [31]. Our EmoVIT starts with a pre-trained InstructBLIP baseline and proceeds to fine-tune exclusively the Q-Former module, whilst keeping both the image encoder and the language model frozen. The parameters for our training adhere to the default settings established by InstructBLIP.

Datasets. We evaluate our framework on ten benchmark datasets annotated under different scenarios and class numbers, namely EmoSet [13], WEBEmo [11], Emotion6 [34], the Flickr and Instagram (FI) [35], ArtPhoto [36], IAPS [37], Abstract [36], EmotionROI [38], UnbiasedEmo [11], and OxfordTVG-HIC [33].

Held-in Pretraining. Following previous work [9], we divide our dataset into two categories: held-in for pretraining and held-out for evaluation.
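As a rough sketch of the fine-tuning setup described under Implementation Details above (freeze the image encoder and the LLM, train only the Q-Former), the following assumes InstructBLIP-style attribute names that may differ across LAVIS releases; it is illustrative, not the paper's training script.

```python
import torch

def prepare_emovit_trainables(model) -> list[torch.nn.Parameter]:
    """Freeze everything in an InstructBLIP-style model except the Q-Former,
    its learnable query tokens, and the projection into the LLM.
    Attribute names ("Qformer", "query_tokens", "llm_proj") are assumptions."""
    for param in model.parameters():
        param.requires_grad = False
    for name in ("Qformer", "query_tokens", "llm_proj"):
        module = getattr(model, name, None)
        if module is None:
            continue
        if isinstance(module, torch.nn.Parameter):
            module.requires_grad = True
        else:
            for param in module.parameters():
                param.requires_grad = True
    return [p for p in model.parameters() if p.requires_grad]

# trainable = prepare_emovit_trainables(model)
# optimizer = torch.optim.AdamW(trainable, lr=1e-5)  # hyperparameters follow InstructBLIP defaults
```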
Considering the EmoSet dataset's comprehensive inclusion of emotion attributes for each image, it has been chosen as the primary resource for our held-in pretraining phase. Simultaneously, for a broader assessment, we perform held-out evaluations using the test sets from various other datasets. (Unlike the setup in InstructBLIP, our dataset exclusively comprises emotion-related content; consequently, our held-out evaluation does not constitute a strict zero-shot evaluation in the conventional sense.) For the generation of emotion visual instruction data, we initially employ the BLIP2 model for image captioning, followed by leveraging the GPT-4 API to generate emotion instruction data. In total, our collection comprises Categorical, Conversation, and Reasoning instruction data derived from 51,200 unique images. This represents less than 50% of the entire EmoSet.

4.2. Held-out Evaluation

As shown in Tab. 1, our proposed methodology exhibits a marked superiority in performance relative to the burgeoning Visual Instruction Tuning methods. Given that these methods have been pre-trained on dozens of large-scale datasets, it is evident that our generated emotion visual instruction data is particularly effective for emotional understanding. Our results signify a paradigm shift, heralding a new era of model training that relies less on explicit supervision and more on the robustness of emotion instruction-driven learning.

The Effectiveness of Our Proposed Emotion Visual Instruction Data. As the first to introduce the concept of emotion visual instruction data, our study seeks to evaluate the generalizability of this newly generated instruction data. Our goal is to test its efficacy not only with InstructBLIP but also across other Visual Instruction Tuning models, to understand its broader applicability. As depicted in Fig. 6, we employ two Visual Instruction Tuning models, LLaVA and InstructBLIP, which were fine-tuned on our specially generated emotion visual instruction data.

Figure 6. The improvement from our proposed emotion visual instruction tuning data on LLaVA [7] and InstructBLIP [9].

Subsequent testing across five distinct datasets reveals notable improvements in both models, substantiating the efficacy of our generated data. Notably, InstructBLIP demonstrated a more substantial overall enhancement compared to LLaVA. This can be attributed to InstructBLIP's specialized Instruction-aware Q-Former Module, which adeptly extracts the salient features of our emotion instructions and synergizes them effectively with the corresponding images, thereby yielding improved performance.

4.3. Effectiveness of Different Instruction Data

4.3.1 Ablation Study of Different Instruction Data

The ablation study outlined in Tab. 2 provides a comprehensive analysis of the impact that different instructional data types have on model performance, specifically concerning accuracy metrics on the EmoSet test set. Initially, the model, referred to as InstructBLIP [9], operates without the integration of the three types of instructional data and attains a baseline accuracy of 42.20%. This foundational performance is significantly enhanced with the inclusion of Categorical data, which alone contributes to a substantial increase in accuracy. The introduction of Conversation data further amplifies this effect, underscoring the value of conversational context in improving the model's predictive capabilities. The addition of Reasoning data notably boosts performance, achieving a peak accuracy of 83.36%.
This indicates that the model significantly benefits from the nuanced cues in reasoning, aiding in understanding complex emotional instructions. The gradual improvements with each data type support the idea that a diverse approach to instructional data markedly enhances model comprehension and performance.

| Method | WebEmo | FI | Emotion6 | Abstract | ArtPhoto | IAPSa | EmotionROI | EmoSet |
|---|---|---|---|---|---|---|---|---|
| Number of Classes | 25 | 8 | 6 | 8 | 8 | 8 | 6 | 8 |
| Flamingo [8] | 9.36 | 14.91 | 21.67 | 3.57 | 17.5 | 10.13 | 21.72 | 29.59 |
| LLaVA [7] | 12.55 | 56.04 | 49.44 | 19.54 | 36.25 | 42.43 | 46.46 | 44.03 |
| BLIP2 [14] | 20.10 | 57.72 | 50.00 | 28.57 | 36.25 | 39.24 | 50.51 | 46.79 |
| InstructBLIP [9] | 12.80 | 37.97 | 46.11 | 21.42 | 26.25 | 34.18 | 46.13 | 42.20 |
| Ours* | 21.12 | 68.09 | 57.81 | 32.34 | 44.90 | 44.13 | 53.87 | 83.36 |

Table 1. Held-out performance comparison on visual emotion datasets (%).

| Categorical | Conversation | Reasoning | Accuracy (%) |
|---|---|---|---|
|  |  |  | 42.20 |
| ✓ |  |  | 80.90 (+38.70) |
| ✓ | ✓ |  | 81.95 (+39.75) |
| ✓ | ✓ | ✓ | 83.36 (+41.16) |

Table 2. Ablation study of three types of instruction data. Accuracy (%) on EmoSet test set.

4.3.2 Instruction Sensitivity

This work is dedicated to the creation of a varied corpus of visual emotion instruction data, alongside the development of a robust instruction-based model. Our objective is for the model to demonstrate stability, producing consistent results in the face of minor variations in instruction phrasing, provided the core objective of the task persists unchanged. To this end, we employ the Sensitivity evaluation metric, as introduced by [30], to assess the model's fidelity in generating uniform outcomes irrespective of instructional nuances. We employ two semantically similar instructions as input prompts for the model, testing their impact on the Sensitivity score across three visual emotion datasets for different Visual Instruction Tuning models. The first instruction is: "From the given options: cls 1, cls 2, cls 3, etc., identify the emotion that most accurately reflects the image. Ensure your selection is confined to the listed options. Respond in the format: Predicted emotion:" The second one states: "Please choose the emotion that best corresponds to the image from the following options: cls 1, cls 2, cls 3, etc. (Do not provide answers beyond the provided candidates.) Please reply in the following format: Predict emotion:" As illustrated in Fig. 7, our approach, along with BLIP2, exhibited exceptionally low Sensitivity values, demonstrating robustness in understanding the instructions. Conversely, Flamingo and InstructBLIP displayed a higher degree of sensitivity, indicating a relative susceptibility to variations in instruction wording.

Figure 7. The sensitivity score comparison (the lower the better).

4.4. Robustness

Given that current emotion recognition datasets often exhibit category imbalances and labeling biases, our aim is to evaluate the generalization ability of various learning strategies more impartially. Hence, we selected the UnbiasedEmo test set [11], which is uniquely suited for recognizing intricate emotions, such as those associated with identical objects or scenes, e.g., landscapes, crowds, families, babies, and animals, where the emotional undertones can be particularly subtle and complex. As depicted in Tab. 3, our proposed methodology demonstrates superior performance when benchmarked against conventional supervised emotion recognition techniques, thereby underscoring the efficacy of our approach in more accurately discerning complex emotional contexts.
| Method | Accuracy (%) |
|---|---|
| Direct Learning [11] | 71.64 |
| Self-Directed Learning [11] | 72.45 |
| Joint Learning [11] | 71.64 |
| Curriculum Learning [11] | 74.27 |
| Ours* | 74.72 |

Table 3. Performance comparison on the UnbiasedEmo dataset.

4.4.1 Affective Reasoning

In the domain of visual emotion recognition, where ambiguity and subjectivity are pervasive, the advent of an interpretable model is of considerable value. Such a model elucidates its cognitive processes, enhancing its trustworthiness and practicality in scenarios requiring a delicate grasp of emotional subtleties. Leveraging Visual Instruction Tuning, our model transcends mere categorization of emotions; it articulates the underlying rationale for its classifications. The output format for identifying emotions and elucidating the decision basis is illustrated below:

Predicted emotion: [emotion]. Reason: [explanation].

Our model delineates the visual features influencing its determinations, thereby addressing the complexities inherent in discerning and explaining emotion-related nuances. The explanations provide us with visual clues contained within the images, as exemplified in Fig. 8. It provides interpretable visual indicators that inform the model's outputs, as demonstrated in our example, by disambiguating the often abstract emotional categories.

Figure 8. The sample of our generated explanation.

4.5. Scaling Law

Pretraining data. As demonstrated in Tab. 4, there is a clear correlation between the size of the pre-training dataset and improved performance. Consequently, we anticipate that an increase in training data in the future could enhance the effectiveness of Emotion Visual Instruction Tuning.

| Pre-training data portion | 5% | 10% | 30% | 50% |
|---|---|---|---|---|
| Accuracy (%) | 79.00 | 81.00 | 79.34 | 83.36 |

Table 4. Ablation study of different portions of pre-training data. Accuracy (%) on EmoSet test set.

4.6. Humour Caption Generation

The comprehension of humor is intricately linked to the understanding of emotions. Leveraging our generative language model, we conduct a caption generation task without modifying the model's architecture, specifically testing the model's proficiency in generating humorous captions. For this purpose, we select 50 images from the OxfordTVG-HIC dataset [33] and generate corresponding captions using our model. Subsequently, the captions produced by our model are compared with manually annotated captions from the dataset in a user study. Thirty participants were asked to vote on which captions were more humorous. Our model-generated captions received 60% of the votes, demonstrating the model's effective humor generation capabilities. One sample is visualized in Fig. 9.

Figure 9. A sample of our generated humour caption vs. a human-written humour caption from OxfordTVG-HIC.

5. Conclusion

In our study, drawing upon the distinctive visual cues key to visual emotion recognition, we present a GPT-assisted pipeline specifically designed for generating emotion visual instruction data. The developed EmoVIT model incorporates emotion-specific instructions, leveraging LLMs for enhanced performance. Our comprehensive experiments validate its effectiveness in emotion classification, affective reasoning, and humor understanding. This comparative analysis sets a benchmark for Emotion Visual Instruction Tuning with LLMs, providing valuable insights and directions for future research in this field.

EmoVIT: Revolutionizing Emotion Insights with Visual Instruction Tuning
Supplementary Material

Figure 10. The sample of our generated visual emotion instruction data.
6. More Emotion Visual Instruction Data Samples

Additional samples from our Emotion Visual Instruction Data collection are presented in Figures 10 and 11. Upon acceptance, the complete dataset will be made available on our project webpage.

7. Implementation Details

7.1. Our Experiment Settings

Held-out vs supervised learning. We adopt the terminology held-in and held-out as defined in the work of InstructBLIP [9]. For the held-in setting, we utilize the training subset of the EmoSet dataset for Emotion Visual Instruction Tuning, with its corresponding test subset serving the purpose of held-in evaluation. The outcomes of this evaluation are depicted in Fig. 1 of the main manuscript.

Figure 11. The sample of our generated visual emotion instruction data.

In our held-out evaluation, we focus on determining how instruction tuning bolsters the model's ability to transfer learning to new and unseen data. It is crucial to highlight that our methodology sets a distinct path from InstructBLIP's framework. Our dataset is specifically curated with emotion-centric content, presenting unique categories such as cheerfulness and enthrallment found in WEBEmo, which are not typically included in other datasets. Conversely, common emotional categories like anger and fear are shared with other collections, such as FI and Emotion6. This distinctive mix in our dataset implies that our held-out evaluation operates on a cross-domain level, examining the model's ability to interpret and adapt to diverse emotional contexts not strictly confined to zero-shot scenarios.

7.2. System Prompt

The system prompt inputted into ChatGPT for the purpose of gathering instruction-based data is presented below.

"You are an AI visual assistant, and you are seeing a single image. What you see are provided with one caption and some emotion related attributes, describing the same image you are looking at. Answer all questions as you are seeing the image. The range of brightness is from 0 (darkest) to 1 (brightest), and the range of colorfulness is from 0 (black-and-white) to 1 (the most colorful). Design two questions for a conversation between you and a person asking about this photo. The answers should be in a tone that a visual AI assistant is seeing the image and answering the question. Ask diverse questions and give corresponding answers. Include questions asking about the visual content of the image, including the object types, object actions, relationship among objects, etc. Only include questions that have definite answers: (1) one can see the content in the image that the question asks about and can answer confidently; (2) one can determine confidently from the image that it is not in the image. Do not ask any question that cannot be answered confidently. Please answer with the format Question: Answer: Also include one complex question that is relevant to the content in the image, for example, asking about background knowledge of the objects in the image, asking to discuss about events happening in the image, etc. Again, do not ask about uncertain details. Provide detailed answers when answering complex questions. For example, give detailed examples or reasoning steps to make the content more convincing and well-organized. You can include multiple paragraphs if necessary."

7.3. Details of the Q-Former

Similar to the approach in InstructBLIP, the Q-Former is a lightweight transformer architecture that utilizes a collection of trainable query vectors to distill visual features from a static image encoder.
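A toy sketch of one instruction-aware block of this kind is given below. It mirrors the description in this section (learnable queries, self-attention shared with instruction tokens, cross-attention to frozen image embeddings) but is not the actual LAVIS/InstructBLIP implementation; the layer names are ours, and only the 32 queries of width 768 follow the setting noted below.

```python
import torch
import torch.nn as nn

class InstructionAwareQFormerBlock(nn.Module):
    """Illustrative single block: queries and instruction tokens self-attend
    together, then only the queries cross-attend to the image embeddings."""

    def __init__(self, dim: int = 768, num_queries: int = 32, heads: int = 12):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(1, num_queries, dim) * 0.02)
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, instruction_tokens: torch.Tensor, image_embeds: torch.Tensor) -> torch.Tensor:
        # instruction_tokens: (B, L, dim); image_embeds: (B, N, dim) from the frozen encoder
        q = self.queries.expand(instruction_tokens.size(0), -1, -1)
        x = torch.cat([q, instruction_tokens], dim=1)           # queries see the instruction
        x = x + self.self_attn(x, x, x, need_weights=False)[0]
        q = x[:, : q.size(1)]                                   # only queries cross-attend
        q = q + self.cross_attn(q, image_embeds, image_embeds, need_weights=False)[0]
        return q + self.ffn(q)                                  # (B, 32, dim), later fed to the LLM
```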
The Q-Former acts as the trainable module to bridge the gap between a frozen image encoder and a frozen LLM. Its role is to curate and present the most pertinent visual information, thereby enabling the LLM to generate the targeted textual output efficiently. Following the default setting, in our experimental setup we employ 32 distinct queries, each with a dimensionality of 768.

7.4. Sensitivity Formula

As mentioned in Sec. 4.3.2 of the main paper, we employ the Sensitivity evaluation metric, as introduced by [30], to assess the model's fidelity in generating uniform outcomes irrespective of instructional nuances. Specifically, for each task t \in T, given its associated instances with task instructions D^t = \{(I^t_j, x^t_j, y^t_j) \in T \times X^t \times Y^t\}_{j=1}^{N}, sensitivity is defined as

\mathbb{E}_{t \in T}\left[ \frac{\sigma_{i \in I^t}\big[\, \mathbb{E}_{(x,y) \in D^t}[\mathcal{L}(f_\theta(i, x), y)] \,\big]}{\mu_{i \in I^t}\big[\, \mathbb{E}_{(x,y) \in D^t}[\mathcal{L}(f_\theta(i, x), y)] \,\big]} \right] \quad (3)

where \mathcal{L} denotes the evaluation metric, i.e., emotion classification accuracy, and f_\theta(\cdot) represents the Visual Instruction Tuning model. The standard deviation and mean of the model's performance across all instructions are denoted by \sigma_{i \in I^t}[\cdot] and \mu_{i \in I^t}[\cdot], respectively.

8. Ablation Study of LLM Model Size

In our attempts with the EmoVIT architecture's LLM, we explored the use of models of varying sizes (as shown in Tab. 5). The results indicated that the smaller model, Vicuna-7B, outperformed its larger counterparts. This may be attributed to the limited training data available for our task, which potentially underutilizes the capabilities of larger models. Consequently, we anticipate that an increase in training data in the future could enhance the effectiveness of Emotion Visual Instruction Tuning.

| Vicuna-7B | Vicuna-13B | FlanT5XL |
|---|---|---|
| 83.36 | 82.21 | 80.98 |

Table 5. Ablation study of different LLM model sizes. Accuracy (%) on EmoSet test set.

9. GPT-4 vs GPT-4 Turbo

We conducted a comparative analysis of conversational datasets derived from GPT-4 (model name gpt-4 in the API) against the recently released GPT-4 Turbo (model name gpt-4-1106-preview in the API). The comparative metrics yielded negligible differences between the two models (83.36% vs 82.96% on the EmoSet test set).

10. Adding In-context Samples in Held-out Evaluation

Recent LLMs are capable of in-context learning when provided with a limited number of examples in a few-shot manner. In this work, we have also embarked on such an exploration. For instance, Tab. 6 presents the in-context samples utilized within the EmotionROI dataset. During our held-out evaluation, we incorporated three in-context samples for each category, consisting of a caption paired with its corresponding emotion class. Nevertheless, in our experimental observations, we did not witness any enhancement in performance attributable to furnishing the LLM with these in-context examples. Consequently, our finalized methodology did not incorporate in-context samples during the held-out evaluation phase.

| Description | Emotion |
|---|---|
| Unleashed Fury: A portrait of raw, unfiltered anger etched on the subject's face. | Anger |
| Volcanic Eruption in Human Form: A Portrait of Unrestrained Fury. | Anger |
| An explosive portrait of raw fury, where every clenched jaw and furrowed brow tells a tale of unchecked anger. | Anger |
| Face contorted in a grimace of pure disgust, as if they just tasted a year-old lemon. | Disgust |
| Caught in the throes of revulsion, a face grimaces as if it just tasted the world's sourest lemon. | Disgust |
| Picture Perfect: A Masterclass in the Art of Disgust Expression | Disgust |
| A chilling moment of pure terror, etched in every detail. | Fear |
| A chilling moment of pure terror etched on the face, a stark embodiment of fear. | Fear |
| someone with a wide smile, a group | Joy |
| Overflowing with joy, like a puppy at a park! | Joy |
| A poignant portrait of sorrow, where teardrops are the silent language of grief. | Sadness |
| An evocative portrayal of sorrow, with shadows seemingly swallowing the light, reflecting the heavy weight of sadness. | Sadness |
| An abstract portrayal of solitude, where the vivid hues of melancholy paint a poignant picture of sadness. | Sadness |
| Caught in a moment of pure astonishment, eyes wide and mouth agape. | Surprise |
| Caught in the headlights of astonishment: a jaw-dropping moment of surprise! | Surprise |
| Caught in the Act! A person's wide-eyed gasp of sheer surprise. | Surprise |

Table 6. Illustrative Examples of Emotion Descriptors in Visual Data.

11. Limitation and Future Work

Due to the reliance on the GPT API and cost considerations, our held-in pretraining phase utilized less than 50% of the EmoSet dataset. Despite outperforming other methods, we recognize the potential for significant improvements in future work by expanding the data scale. We anticipate that advancements in visual emotion understanding will parallel increases in both data and model scale.", |
| "additional_info": [ |
| { |
| "url": "http://arxiv.org/abs/2404.00450v2", |
| "title": "Planning and Editing What You Retrieve for Enhanced Tool Learning", |
| "abstract": "Recent advancements in integrating external tools with Large Language Models\n(LLMs) have opened new frontiers, with applications in mathematical reasoning,\ncode generators, and smart assistants. However, existing methods, relying on\nsimple one-time retrieval strategies, fall short on effectively and accurately\nshortlisting relevant tools. This paper introduces a novel PLUTO (Planning,\nLearning, and Understanding for TOols) approach, encompassing\n`Plan-and-Retrieve (P&R)` and `Edit-and-Ground (E&G)` paradigms. The P&R\nparadigm consists of a neural retrieval module for shortlisting relevant tools\nand an LLM-based query planner that decomposes complex queries into actionable\ntasks, enhancing the effectiveness of tool utilization. The E&G paradigm\nutilizes LLMs to enrich tool descriptions based on user scenarios, bridging the\ngap between user queries and tool functionalities. Experiment results\ndemonstrate that these paradigms significantly improve the recall and NDCG in\ntool retrieval tasks, significantly surpassing current state-of-the-art models.", |
| "authors": "Tenghao Huang, Dongwon Jung, Muhao Chen", |
| "published": "2024-03-30", |
| "updated": "2024-04-04", |
| "primary_cat": "cs.CL", |
| "cats": [ |
| "cs.CL", |
| "cs.AI", |
| "cs.IR", |
| "cs.LG" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM AND Reasoning", |
| "gt": "The community has shown increasing interest in integrating external tools and interfaces with LLMs since tools often provide complementary function- alities in complex tasks such as dialogues (Bubeck et al., 2023), mathematical reasoning (Lu et al., 2022), and code generation (Yadav et al., 2023). To realize tool augmentation, LLM systems typically employ a retriever mechanism to select relevant tools from a candidate pool and write function API calls based on the retrieved tools. The introduction of external tools also allows LLMs to address com- plicated user queries. Schick et al. 2023 show that LLMs, incorporating simple tools, achieve better performance on downstream tasks. Gupta and Kem- bhavi 2023 attempt to solve compositional visual Figure 1: Comparison between conventional Retrieve- and-Read and PLUTO paradigm. Unlike the conven- tional one-time Retrieve-and-Read paradigm that may lead to retrieving an ineffective set of tools, PLUTo efficiently parses a complex query and distills it into actionable sub-queries that facilitate accurate retrieval of appropriate tools. tasks via image processing modules and language- instructed computer vision models. More recently, the integration of LLMs and tools empower LLMs, opening up new possibilities in areas like scientific discovery (Yang et al., 2023), automated efficiency, and smart assistant applications (Shu et al., 2022). Nonetheless, emergent approaches for LLMs with tool integration present several distinct chal- lenges. One primary concern is that current LLM agents still adopt simple retrieval-and-read strate- gies (Patil et al., 2023; Qin et al., 2023), lacking arXiv:2404.00450v2 [cs.CL] 4 Apr 2024 the dynamic adaptability required for addressing complex queries. As shown in Fig. 1, the conven- tional Retrieve-and-Read paradigm, solely relying heavily on similarity matching, falls short of re- trieving diverse types of tools to address a complex user query. This limitation is further exacerbated by the semantic gap between user queries and tool descriptions. Particularly, user queries can be am- biguous and complex, often requiring a deep un- derstanding of the user\u2019s intent and the context of the query (Kulkarni et al., 2023). On the other hand, human-written tool descriptions can be ab- stract and lack essential details for deciding their utilities, leading to a mismatch between what the user needs and what the tool is perceived to offer. Additionally, current models tend to finetune on static tools, posing challenges to their robustness in the ever-evolving tool environment where new tools emerge and existing ones become obsolete (L\u00fcbke et al., 2019). There is limited research on retrieval enhancement strategies in non-finetuned settings. These gaps highlight crucial areas for fu- ture research and development in LLM and tool integration. In this paper, we leverage LLM\u2019s world knowl- edge and reasoning ability to augment the re- trieval and utility of tools in response to com- plex user queries, by designing a novel framework PLUTO (Planning, Learning, and Understanding for TOols) 1. Our first contribution is the introduc- tion of a novel Plan-and-Retrieve for tool integra- tion. While prior Retrieve-and-Read approaches only retrieve once at the beginning, our Plan-and- Retrieve paradigm is designed to adaptively ad- just its strategies based on the outcomes of its self- evaluations, ensuring a continuous refinement of the tool selection process. This paradigm is struc- tured into two core modules. 
The first module, the retriever, leverages neural (dense) retrieval techniques (Karpukhin et al., 2020) and LM-likelihood scoring mechanisms (Song et al., 2023a) to efficiently shortlist relevant tools from a vast pool of candidates in response to a user query. This process ensures that the most pertinent tools are identified quickly, laying a foundation for more effective tool utilization. Inspired by recent advancements in adaptive retrieval-augmented generation (RAG; Jiang et al. 2023; Yoran et al. 2023), we design an LLM-based query planner that autoregressively decomposes complex user queries into manageable, task-oriented actions as the second module. (Code is available at https://github.com/tenghaohuang/PLUTo.) Following the decompositions, the query planner selects the most suitable ones from the retrieved tools. It goes further by evaluating the effectiveness of selected tools and proposing the next action toward addressing the user query. This Plan-and-Retrieve paradigm operates dynamically, embodying a sophisticated feedback loop that interlinks the retrieval of tools with subsequent refinement, evaluation, and planning stages.

Our second contribution is the proposal of the Edit-and-Ground paradigm, which utilizes user queries' rich contextual information and the LLM's extensive world knowledge to enrich descriptions of tool functionalities. Research has shown that informative tool documentation can enhance the interaction between LLMs and tools (Hsieh et al., 2023). However, documenting tool functionalities at scale can be tedious for humans. Yang et al. (2023) show LLMs can follow instructions and optimize real-world applications. Leveraging the optimization ability of the LLM, our tool-grounding agent optimizes under-informative tool descriptions by learning and abstracting information from tools' user scenarios. By editing tool descriptions to make them more aligned with tools' user scenarios, the agent bridges the gap between user queries and tool functionalities, enhancing the overall effectiveness of tool retrieval and usage.

In conclusion, this paper advances the field of tool integration with LLMs by introducing the novel Plan-and-Retrieve and Edit-and-Ground paradigms. Experiments show that our paradigms improve the recall and NDCG of tool retrieval tasks, significantly outperforming the current state-of-the-art (SOTA). Our downstream evaluation suggests that the improvements gained during the retrieval phase, such as higher accuracy and relevance in responses, significantly contribute to successfully addressing the user queries.", |
| "main_content": "Retrieval-Augmented LLM. Early studies on Retrieval-Augmented LLMs typically incorporate embeddings of retrieved passages as a part of the latent representation of the LM (Chen et al., 2017; Lee et al., 2019). More recent works like REALM (Guu et al., 2020) and RAG (Lewis et al., 2021) have demonstrated the effectiveness of in-context augmentation and its improvement on knowledgeFigure 2: An overview of the PLUTO approach. intensive tasks. There is also work (Mallen et al., 2023) that explores how Chain-of-Thought (CoT) could guide a multi-turn Retrieve-and-Read process to solve open-domain questions and perform fact verification. However, the massive action space and tool functionality variance in tool-oriented tasks pose challenges to LLMs during planning. An erroneous step in planning can lead to a faulty loop, such as continually calling a tool in the wrong way or hallucinating non-existing tools. Our Plan-and-Retrieve paradigm, employing furtherest planning assessment (Zhu et al., 2023), enforces reasonable and goal-oriented decompositions of user queries. The recently proposed ReAct framework (Yao et al., 2022) asks LLM to plan future actions based on its observation of environments. In the context of tooloriented tasks, the plan builds upon the execution results of retrieved tools. Such practice running and verifying each tool at retrieval time can be expensive and time-consuming at scale. In contrast, our Plan-and-Retrieve paradigm fully leverages LLM\u2019s internal representation of world knowledge to propose plans in response to user queries, therefore guaranteeing both time and cost efficiency as an execution-free paradigm. Tool Learning. Tool learning refers to the process where LLMs not only process and generate language-based responses but also learn to interact with and utilize external tools to enhance their capabilities (Nakano et al., 2022; Schick et al., 2023; Shen et al., 2023; Qian et al., 2023; Song et al., 2023b; Xu et al., 2023; Li et al., 2023; Hao et al., 2023; Zhang et al., 2023). By incorporating tools, LLMs can offer solutions in various areas, including visual-language processing (Gupta and Kembhavi, 2023; Wu et al., 2023), mathematical reasoning (Lu et al., 2023), and tasks in specialized domains (Jin et al., 2023; Tang et al., 2023b). However, previous research on tool learning mainly focused on teaching LLMs to use tools, but ignores the importance of shortlisting relevant tools. In this paper, we focus on using LLMs to improve the tool retrieval process. In contrast to previous researches that heavily rely on finetuning retrievers (Schick et al., 2023; Patil et al., 2023) to shortlist tools, we propose a novel Edit-andGround paradigm, leveraging LLMs\u2019 parametric knowledge to learn and create more informative descriptions for tools. This approach seeks to provide richer information for the retriever, leading to more accurate retrieval. 3 Task and Data We hereby formulate the task of tool retrieval and describe the dataset for this task. 3.1 Task Definition The tool retrieval process involves taking a user query Q and an index base of tool descriptions D = {d(t1), d(t2), . . . , d(tn)} as input, where each d(t) represents the description of each tool t. The retriever then sifts through the tool descriptions in D and shortlists a relevant tool set T = {t1, t2, . . . , tk} that are potentially suited to address aspects of the user query Q. 
It is essential to underline that, unlike conventional retrieval tasks, the task of tool retrieval is goal-oriented in nature, which means the set of retrieved tools T should be able to address the user query Q. The systems are expected to accurately retrieve relevant tools and understand the user intents and the complex synergy between tools, thus truly assisting users in problem-solving processes.

3.2 Dataset

Existing datasets for tool learning, such as those delineated in (Li et al., 2023; Patil et al., 2023; Tang et al., 2023a; Xu et al., 2023), provide insights into the field. Nonetheless, these datasets exhibit limitations: they only cover a limited number of tools or solely support simple single-tool usage scenarios, where user queries are simple and could be addressed by a single tool. Contrastingly, Qin et al. (2023) proposed ToolBench, a dataset covering more than 3,000 tools from 49 categories (such as advertising, data analysis, and transportation) and supporting complex, multi-tool user scenarios. In these scenarios, a single user query necessitates the sequential application of multiple tools, each contributing uniquely to the resolution of the query. The ToolBench dataset synergizes with the RapidAPI Hub, a prominent API marketplace that consolidates a vast array of real-world APIs. The multi-tool query creation process involves selecting representative tools within each category or collection and crafting queries to mimic real-world problem-solving scenarios. Given our research focus and the nature of our study, we have chosen to concentrate on the Intra-Category setting of the ToolBench dataset. The intra-category setting provides high-quality user queries, where the hierarchies of tools are clearly defined based on their main functionalities. It motivates understanding complex interactions and synergies between tools that share a common functional domain. The setting mirrors real-world situations where problem-solving often demands a multifaceted and integrative use of diverse tools. The ToolBench dataset annotates paths of executed tools that successfully address the user queries as solution paths. The average length of the solution paths is 4. We take the annotated solution paths as the ground truth for our task.

4 Method

In this section, we describe the proposed framework to integrate tools with LLMs for addressing complex user queries. Our methodology is grounded in two innovative paradigms: Plan-and-Retrieve (P&R; §4.2) and Edit-and-Ground (E&G; §4.3). We discuss the coordination between the two paradigms in §4.4.

4.1 Method Overview

PLUTO integrates two key paradigms, Plan-and-Retrieve (P&R) and Edit-and-Ground (E&G), to effectively address complex user queries with LLMs. The Plan-and-Retrieve paradigm is a two-stage process. The Plan stage decomposes user queries into focused sub-queries, while the Retrieve stage matches these sub-queries with relevant tools. The Edit-and-Ground paradigm, consisting of the Evaluator and Optimizer, focuses on enhancing tool descriptions. These paradigms are designed to work in tandem. The P&R paradigm addresses immediate user queries, while E&G actively identifies and collects under-informative tool descriptions for optimization.

4.2 Plan-and-Retrieve

The Plan-and-Retrieve (P&R) paradigm is designed as a two-stage process to effectively address complex user queries.

Plan. In the Plan stage, an LLM-based planner autoregressively decomposes the user query Q into sub-queries q_1, q_2, ..., q_n.
To ensure the robustness and quality of the decomposed sub-queries, we follow Zhu et al. (2023). Specifically, for each step of sub-query generation, the planner first generates a batch of hypotheses. Then, we cluster the generated hypotheses along with previously created sub-queries via the K-means clustering algorithm. Finally, we select from the hypotheses the sub-query that is most distinct from the previous sub-queries to proceed (please refer to Appx. §A for the algorithm implementation). As shown in Fig. 2, the planner autoregressively decomposes the user query Q into more fine-grained sub-queries based on assessments at inference time. After the generation of a sub-query q_t, the planner evaluates whether the original query Q has been satisfactorily achieved based on the current planning history. If the evaluation determines that the goal has been met, the iterative process concludes. Otherwise, the planner proceeds to generate the subsequent sub-query q_{t+1}. This active and autoregressive planning at inference time facilitates a more focused understanding of the tools. We use the following prompt template for the planner.

Retrieve. In the Retrieve stage, for each sub-query q_i, the retriever shortlists the most suitable tools T_i ⊆ D. We first retrieve a pool of candidate tools that match q_i, represented as

T'_i = \mathrm{Ret}(q_i), \quad (1)

where Ret represents the retriever. To enhance the robustness of retrieval, we rerank the candidate tool set T'_i by the LM-likelihood score between the sub-query q_i and each tool t_j \in T'_i, which is calculated as follows:

\text{LM-likelihood}(q_i, t_j) = -\log P(q_i, d(t_j)). \quad (2)

Based on the re-ranked tools, we choose the top-5 tools T'_{i,\mathrm{top\text{-}5}} and feed them into an LLM-based predictor, which outputs a shortlisted tool set T_i, drawn from the candidate tool set T'_{i,\mathrm{top\text{-}5}}, that is relevant to q_i. We use this prompt for the predictor. As a result, the final shortlisted tool set T is formed by

T = \bigcup_{i=1}^{n} T_i, \quad \forall i \in [1, n] \cap \mathbb{Z}. \quad (3)

For the choice of Ret, we adopt a neural (dense) retriever method. For each sub-query q_i, the dense vector representation q_i is obtained by passing q_i through a dense encoder. Similarly, we obtain a dense representation d through a dense encoder for each tool description d. The tool index corpus D is formed as a collection of d. The P&R module interleaves Plan and Retrieve until the planner evaluates that the user query has been sufficiently decomposed and addressed through the retrieved tools. The module then returns T as the relevant tools to address the user query.
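Putting the Plan and Retrieve stages together, a highly simplified sketch of the loop might look as follows. The planner, retriever, reranker, and predictor here are placeholder callables standing in for the paper's prompted LLM components and Eqs. (1)-(3); this is not the released implementation.

```python
def plan_and_retrieve(query: str, planner, retriever, rerank, predictor,
                      max_steps: int = 5) -> set[str]:
    """Sketch of the P&R loop: plan a sub-query, retrieve and rerank candidates,
    let an LLM-based predictor shortlist tools, and stop once the planner judges
    the user query sufficiently addressed."""
    selected: set[str] = set()
    history: list[str] = []
    for _ in range(max_steps):
        sub_query = planner.next_sub_query(query, history)        # Plan stage
        candidates = retriever(sub_query)                          # Eq. (1): Ret(q_i)
        # Eq. (2): rerank by -log P(q_i, d(t_j)) and keep the top 5 candidates
        top5 = sorted(candidates, key=lambda tool: rerank(sub_query, tool))[:5]
        selected |= set(predictor.shortlist(sub_query, top5))      # T_i; T is the union over i
        history.append(sub_query)
        if planner.is_query_addressed(query, history, selected):   # stop criterion
            break
    return selected
```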
Algorithm 1 Edit-and-Ground Algorithm Input: Trainset, Devset, Toolset, Failure_Threshold, Max_Rounds Output: Optimized Tool Descriptions Initialize cache for tools in Toolset cur_round = 0 while cur_round < Max_Rounds do ## Phase 1: Evaluate Retrieval Performance for each (query, gt_tools) in Trainset do predicted_tools \u2190P&R(query) for each tool in gt_tools do tool.trials += 1 if tool not in predicted_tools then tool.failure += 1 tool.queries.add(query) \u25b7Failure queries end if end for end for ## Phase 2: Failed Tool Description Optimization for each tool in Toolset do if tool.failure tool.trials > Failure_Threshold then U \u2190Remove specific entities from tool.queries R \u2190Predict reasons for failure of U d(tool) \u2190tool.description d\u2019(tool) \u2190E&G(tool, d(tool), U, R) ## Phase 3: Evaluate Performance of d\u2019(tool) cur_recall \u2190Eval(Devset, d\u2019(tool)) if tool.recall < cur_recall then tool.description \u2190d\u2019(tool) tool.recall \u2190cur_recall end if end if end for cur_round += 1 end while 4.3 Edit-and-Ground The Edit-and-Ground (E&G) paradigm focuses on refining under-informative tool descriptions to align them with user queries. As shown in Alg. 1, the evaluator examines the quality of tool descriptions by retrieval results. A tool description is viewed as under-informative if the number of failure cases of retrieval exceeds a pre-defined threshold. We collect such tools for later optimization. Subsequently, the optimizer takes a tool t with its base description d(t) and U, a batch of relevant user queries, as input. To avoid the optimizer overfitting to a local batch, we use an LLM to filter out specific entities for each query in U. The entity filtering prompt template is shown as above. To assist the optimizer in improving underperformed tool descriptions, we prompt LLM to generate reasons R explaining why the tool could be related and helpful in addressing user queries. The functionality assessment prompt template is shown below: Finally, by prompting LLM with 1) base tool description d(t), 2) entity-filtered user queries U, and 3) the reasons R, we obtain an enriched tool description d\u2032(t). Please refer to Fig. 4 in Appendix C for the prompt template. We formally represent this process as d\u2032(t) = E&G(t, d(t), U, R). (4) The optimization process is executed in multiple rounds as described in Alg. 1. In each round, we evaluate the retrieval recall on the development set for each tool and compare it with the previous round. If the current round\u2019s recall is better than the previous one, we update the tool\u2019s description; otherwise, we keep the original description. The Edit-and-Ground involves using the LLM\u2019s extensive world knowledge, combined with the contextual details provided by U, to edit and enhance d(t). The result of this task is an enriched tool description d\u2032(t), expected to resonate more closely with real-world user scenarios and increase the utility of the tool in practical applications. 4.4 Paradigm Coordination and Inference Our PLUTO framework employs strategic coordination of the Plan-and-Retrieve (P&R) and Editand-Ground (E&G) paradigms, phased to optimize the process of tool retrieval. This section elucidates the interaction between these paradigms during the optimization phase and the subsequent inference phase. Optimization Phase. During the optimization phase, P&R and E&G operate alternatively. P&R is tasked with decomposing a user query Q into manageable sub-queries q1, q2, . . . , qn. 
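For reference, a compact Python rendition of the Edit-and-Ground loop in Alg. 1 above is sketched here; `plan_and_retrieve`, `edit_and_ground`, and `eval_recall` are hypothetical stand-ins for the P&R pipeline, the LLM-based description editor of Eq. (4), and the dev-set recall evaluation.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Set, Tuple

@dataclass
class Tool:
    description: str
    trials: int = 0
    failure: int = 0
    queries: Set[str] = field(default_factory=set)
    recall: float = 0.0

def optimize_descriptions(
    trainset: List[Tuple[str, List[str]]],              # (query, ground-truth tool names)
    toolset: Dict[str, Tool],
    plan_and_retrieve: Callable[[str], Set[str]],        # hypothetical P&R pipeline
    edit_and_ground: Callable[[str, Tool], str],         # hypothetical LLM editor producing d'(t)
    eval_recall: Callable[[str, str], float],            # hypothetical dev-set recall with a candidate description
    failure_threshold: float = 0.5,
    max_rounds: int = 5,
) -> None:
    for _ in range(max_rounds):
        # Phase 1: evaluate retrieval performance and log failure queries per tool.
        for query, gt_tools in trainset:
            predicted = plan_and_retrieve(query)
            for name in gt_tools:
                tool = toolset[name]
                tool.trials += 1
                if name not in predicted:
                    tool.failure += 1
                    tool.queries.add(query)
        # Phase 2: rewrite descriptions of tools whose failure rate exceeds the threshold.
        for name, tool in toolset.items():
            if tool.trials and tool.failure / tool.trials > failure_threshold:
                candidate = edit_and_ground(name, tool)
                # Phase 3: keep the new description only if dev-set recall improves.
                cur_recall = eval_recall(name, candidate)
                if cur_recall > tool.recall:
                    tool.description, tool.recall = candidate, cur_recall
```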
These subqueries facilitate a more focused retrieval of tools from the tool set D, ensuring that the process is aligned with specific aspects of the query. During planning, the E&G paradigm is actively engaged in optimizing the descriptions of the tools within D. This optimization, leveraging the LLM\u2019s extensive knowledge base, is particularly targeted at tools that exhibit underperformance in retrieval effectiveness. By enriching these tool descriptions, E&G significantly enhances the overall retrieval process, making the toolset more responsive and aligned with the practical demands of diverse queries. Inference Phase. At the time of inference, the P&R paradigm remains active, utilizing the previously enriched and optimized tool descriptions. In this phase, the E&G paradigm ceases its operation and does not engage in any further optimization of tool descriptions. The refined tool descriptions, already enhanced by E&G, now serve as a comprehensive resource for the retriever to draw upon in response to the decomposed sub-queries. 5 Experiments In this section, we evaluate the proposed PLUTO framework for tool retrieval and compare it with baseline methods. We will delve into the details of our experimental setup (\u00a75.1), discuss the results (\u00a75.2) obtained, and perform an ablation study to understand strengths of different components (\u00a75.3). By executing the retrieved tools, we evaluate their correctness in addressing user queries to further validate our findings (\u00a75.4). We present case studies to qualitatively evaluate the strength of PLUTo framework (\u00a75.5). 5.1 Experiment Setup Evaluation Protocol. We evaluate using three metrics to assess the effectiveness of our tool retrieval system. Recall (Rec) measures the proportion of relevant tools that are successfully retrieved by our system. High indicates that the system is effective in identifying a comprehensive set of relevant tools for a given query and is more likely to yield a solution to address the user query. We also report the Normalized Discounted Cumulative Gain (NDCG) that evaluates the relevance and quality of ranked search results. In addition, we report pass rate, an automatic evaluation metric of ToolBench (Qin et al., 2023). The pass rate measures a system\u2019s ability to successfully address the user query with Model Retriever Non-Finetuned Finetuned Rec NDCG Rec NDCG BM25 \u2013 18.82 37.44 \u2013 \u2013 ToolRetriever DPR\u2020 19.58 50.98 27.80 71.21 Contriever 31.78 74.70 42.77 79.16 PLUTO DPR 36.65 75.10 43.27 79.93 Contriever 46.57 82.93 48.47 84.73 Table 1: This table compares various tool retrieval models using Recall and NDCG metrics in both Non-Finetuned and Finetuned settings. It includes an ablation study on the impact of using different retrievers, demonstrating the generalizability of PLUTO. \u2020 indicates the previous SOTA implementation, as specified in (Qin et al., 2023). a retrieved subset of tools in limited budgets by interacting with real-world RESTful APIs (\u00a75.4). To test the generalizability of our approach, we benchmark the tool retrieval performance under a Non-Finetuned setting, where we directly apply an off-the-shelf retriever model to comprehensively showcase PLUTO\u2019s adaptivity. To test the model\u2019s practical applicability, we also benchmark retrieval performance under Finetuned setting, where we finetune the retriever model on domain-specific knowledge. We evaluate 500 user queries for each setting. Baselines. 
We compare our system against several representative retrieval methods. These include: (1) BM25: a widely-used probabilistic retrieval framework, calculating the relevance of documents to a query based on the frequency of query terms in each document; (2) ToolRetriever: a neural retrieval approach that achieves the current state-of-the-art (SOTA) performance on ToolBench retrieval task (Qin et al., 2023). To understand the flexibility of our framework, we benchmark PLUTO\u2019s performance when incorporated with different retrievers. Specifically, we use DPR (Karpukhin et al., 2020) and Contriever (Izacard et al., 2022). Implementation Details. For the implementation of PLUTo, we use DSPy framework (Khattab et al., 2023) to facilitate efficient interaction between retriever and LLM. We choose ChatGPT3 as our main LLM for both P&R and E&G. The maximum round for the E&G module is set to 5. For ToolRetriever, we retrieve top-5 tools using the respective retrievers. The data is divided into 70-15-15 splits for training, development, and testing, respectively. For our experiment, we randomly select 500 data 3OpenAI. (2023). ChatGPT (November 21st version). samples from the test split for each setting mentioned in Evaluation Protocol section. For the Finetuned settings, we finetune the neural dense retriever model by including negative samples during in-batch training (Karpukhin et al., 2020). For each positive pair of query qj and its relevant tool d+ j , we include n negative tools as negative samples. We use a cross-entropy loss with softmax function over the batch B: L = \u22121 B B X j=1 log eqj\u00b7d+ j eqj\u00b7d+ j + Pn i=1 eqj\u00b7d\u2212 ij ! (5) 5.2 Results The experimental results, detailed in Tab. 1, underscore the significant advantages of our proposed PLUTO models. In the Non-Finetuned setting, PLUTO with Contriever showcases remarkable scores, achieving 46.57% in Recall, outperforming the best baseline by 9.92 points. This result shows the model\u2019s robust ability to identify relevant tools without the necessity for specific finetuning, a critical advantage in dynamic tool retrieval environments. We observe a consistent trend in the Finetuned setting, with the model scoring 48.47% in Recall, demonstrating a 5.7 points lead when compared with the Contriever baseline. This indicates that our model is highly effective on retrieving relevant tools. Furthermore, our model outperforms baselines across all settings on NDCG scores. In the NonFinetuned setting, our model leads by 8.23 points. In the Finetuned setting, our model beats the baseline by 4.57 points. These results reflect PLUTO not only the relevance of the tools retrieved but also their ranking in order of utility and applicability to the user\u2019s query, which is a indication to the model\u2019s nuanced understanding of tool utility. Figure 3: Performance comparison among different LLMs for Plan-and-Retrieve paradigm using Recall score. The backbone retriever is DPR. To show the generalizability of PLUTO, we select different retrievers for the Plan-and-Retrieve (P&R) paradigm. We observe that PLUTO has synergy with both DPR and Contriever models, regardless of their different architecture, that achieves higher Recall and NDCG scores than the baselines. This indicates that PLUTO is a plug-n-play and retriever-agnostic framework that features effectiveness and flexibility under different circumstances. The experimental results highlight the superior performance of PLUTO framework. 
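For completeness, the in-batch negative objective of Eq. (5) used to finetune the dense retriever in the Finetuned setting is a standard contrastive cross-entropy. A minimal PyTorch sketch is shown below, assuming pre-computed query and tool-description embeddings; the DPR/Contriever encoders themselves are not shown.

```python
import torch
import torch.nn.functional as F

def in_batch_negative_loss(
    q: torch.Tensor,        # [B, H] sub-query embeddings
    d_pos: torch.Tensor,    # [B, H] embeddings of the matching (positive) tool descriptions
    d_neg: torch.Tensor,    # [B, n, H] embeddings of n negative tool descriptions per query
) -> torch.Tensor:
    """Cross-entropy over (1 positive + n negative) dot-product scores, as in Eq. (5)."""
    pos = (q * d_pos).sum(-1, keepdim=True)        # [B, 1] scores q_j . d_j^+
    neg = torch.einsum("bh,bnh->bn", q, d_neg)     # [B, n] scores q_j . d_ij^-
    logits = torch.cat([pos, neg], dim=-1)         # the positive is class 0
    target = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, target)

# Example with random embeddings.
loss = in_batch_negative_loss(torch.randn(4, 64), torch.randn(4, 64), torch.randn(4, 7, 64))
```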
Together, the P&R and E&G paradigms establish a dynamic and effective framework, which not only accurately interprets and responds to user queries but also maintains an evolving understanding of tool functionality. This duality ensures that PLUTO remains highly effective and adaptable in various setups, consistently aligning user needs with the most suitable tools and their capabilities. 5.3 Ablation Study As shown in Fig. 3, we observe that both the Llama2 (Touvron et al., 2023) and ChatGPT variants show considerable improvements in tool retrieval capabilities, with notable increases in Recall and NDCG scores compared to baseline models. This consistent improvement across different LLM integrations conclusively demonstrates the robustness and effectiveness of our method. This finding is particularly important as it suggests that our approach is not overly reliant on any single LLM, thereby showcasing the broad applicability and potential of our methods in diverse settings. As shown in Tab. 2, the ablation experiment on Model Non-Finetuned Finetuned Rec NDCG Rec NDCG PLUTOfull 46.57 82.93 48.47 84.73 w/o E&G 42.55 80.70 44.90 81.10 w/o P&R 38.12 77.60 47.07 81.90 Table 2: Ablation Study. the PLUTOfull, focusing on the removal of Editand-Ground (E&G) and Plan-and-Retrieve (P&R) components, provides intriguing insights into their roles in tool retrieval tasks. Generally, removing E&G leads to decreased Recall and NDCG scores across settings, underscoring its critical role in enhancing what the model seeks to retrieve. On the other hand, excluding P&R tends to diminish more of the model\u2019s performance in NonFinetuned settings, particularly impacting Recall. This highlights P&R\u2019s importance in effectively retrieving relevant information. A comparative analysis reveals that the full implementation of PLUTOChatGPT, incorporating both E&G and P&R, consistently delivers strong performance across all metrics and settings, emphasizing the synergistic strength of these components. The variants of the model, lacking either E&G or P&R, provide valuable insights into the unique contributions of each component to the model\u2019s overall efficacy. 5.4 Execution Pass Rate We evaluate the pass rate of the execution schema generated by ChatGPT using the DFSDT approach (Qin et al., 2023). Using the ToolEval package, we assessed two distinct retrieval tools, ToolRetriever and PLUTO, for their correctness and efficiency in responding to user queries. The PLUTO achieves 72.3% for pass rate, while the previous SOTA system ToolRetriever scored 69.3%. This experiment\u2019s findings emphasize the pivotal role of advanced retrieval strategies in enhancing user query response quality. The improvement gained during the retrieval phase, such as higher accuracy and relevance in responses, significantly contribute to the downstream tasks. 5.5 Case Study As shown in Tab. 3, we compare our PLUTO against the ToolRetriever baseline to underscore PLUTO\u2019s proficiency in retrieving relevant tools for diverse user queries. Through selected examQuestion Gold Answer PLUTo Answer ToolRetriever Answer I\u2019m planning a weekend getaway with my partner and I want to surprise them with a romantic playlist. Could you fetch the reels and posts from romantic music artists on Instagram? Additionally, could you search for books about love and relationships on Open Library? 
Instagram Reels and post Downloader, Open Library Instagram Reels and post Downloader, Instagram, Open Library, Instagram Downloader Love Quotes by LoveMelon, The Love Calculator, Book Finder, fb-video-reels, Reading Home APIs I\u2019m planning a family movie night and I need a movie recommendation. Can you fetch the trending images for movie posters and provide me with the details of the most popular movie from the past month? Also, check the status of the movie session and download the completed movie. Magisto, Bing Image Search Magisto, gogoanimedata-api, Youtube video info, Advanced Movie Search, Image Service, Memes, Bing Image Search, Netflix Data TikTok Info, Tiktok Video Feature Summary, TikTok Full Video Info, TikTok Downloader Download Videos without watermark I\u2019m a music blogger and I\u2019m searching for interesting radio stations to feature on my website. Can you help me find radio stations that play a mix of genres? Also, provide me with the details of the master for the track with the ID \u2019987654\u2019 in the LANDR Mastering. LANDR Mastering v1, 50K Radio Stations GMC Radio, LANDR Mastering v1, 50K Radio Stations, 60K Radio Stations LANDR Mastering v1, Spotify_v2, TuneIn, Spotify Scraper, Spotify_v3 Table 3: Performance comparison of PLUTo and ToolRetriever in retrieving relevant tools for user queries. This table demonstrates the effectiveness of PLUTo in closely aligning with the gold standard answers for diverse queries, showcasing its superior ability to understand and fulfill user needs compared to ToolRetriever. The highlighted tools are the correctly retrieved ones. ples, PLUTO\u2019s superior understanding and comprehensive response capabilities are highlighted, especially in scenarios requiring nuanced tool selection. For instance, for organizing a romantic weekend in the first example, PLUTO not only identifies all essential tools but also enhances the search with additional relevant resources, showcasing its broad and accurate grasp of user needs. This is contrasted with ToolRetriever, where the retrieved tools are only similar on a surface level (the majority of the tools contain the term \"Love\") and fail to understand the user\u2019s intent. This emphasizes PLUTO\u2019s improved relevance and precision in tool retrieval. We also showcase the descriptions of tools before and after optimization by the Edit-and-Ground paradigm in Tab. 4. By leveraging the Plan-and-Retrieve (P&R) and Edit-and-Ground (E&G) components, PLUTo marks a significant advancement over conventional retrieval systems, demonstrating its adaptability and utility in fulfilling diverse user requirements. 6 Conclusion We introduced PLUTO, a framework composed of the Plan-and-Retrieve and Edit-and-Ground paradigms, which marks a distinctive departure from traditional methodologies, setting a new standard for tool retrieval. The empirical results illustrate the superiority of PLUTO across critical retrieval performance metrics as well as pass rate in real-world tool-use evaluation. These metrics collectively attest to the model\u2019s efficacy in identifying relevant tools and successfully addressing complex user queries. We hope the adaptability and efficiency of PLUTO can empower a multitude of domains where accurate and timely retrieval of tools is paramount. From autonomous scientific discovery to software development, the potential applications are as diverse as they are impactful. Acknowledgement We appreciate the reviewers for their insightful comments and suggestions. 
Tenghao Huang and Muhao Chen were supported by an Amazon Research Award, a Keston Exploratory Research Award, and the NSF Grant ITE 2333736. Limitation Our study, while enhancing tool learning by planning and editing strategies, is notably constrained by its reliance on English language datasets. This focus on English limits the model\u2019s applicability to other languages with distinct syntax and semantics and confines its evaluation to specific English data sources, leaving its performance on diverse language setups unexplored. Future research should address this limitation by developing multilingual capabilities and conducting evaluations across varied data sources. The Edit-and-Ground (E&G) may be executed to further optimize the descriptions. However, due to the cost, we currently set a relatively loose stop criterion that is enough to demonstrate the effectiveness of the presented method. Ethical Consideration In conducting this research, we have adhered to ethical guidelines and legal norms to ensure responsible data usage. The data used in this study was obtained from public datasets, specifically ToolBench. We ensured not to violate any terms of service of the data sources." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2403.04283v1", |
| "title": "Proxy-RLHF: Decoupling Generation and Alignment in Large Language Model with Proxy", |
| "abstract": "Reinforcement Learning from Human Feedback (RLHF) is the prevailing approach\nto ensure Large Language Models (LLMs) align with human values. However,\nexisting RLHF methods require a high computational cost, one main reason being\nthat RLHF assigns both the generation and alignment tasks to the LLM\nsimultaneously. In this paper, we introduce Proxy-RLHF, which decouples the\ngeneration and alignment processes of LLMs, achieving alignment with human\nvalues at a much lower computational cost. We start with a novel Markov\nDecision Process (MDP) designed for the alignment process and employ\nReinforcement Learning (RL) to train a streamlined proxy model that oversees\nthe token generation of the LLM, without altering the LLM itself. Experiments\nshow that our method achieves a comparable level of alignment with only 1\\% of\nthe training parameters of other methods.", |
| "authors": "Yu Zhu, Chuxiong Sun, Wenfei Yang, Wenqiang Wei, Bo Tang, Tianzhu Zhang, Zhiyu Li, Shifeng Zhang, Feiyu Xiong, Jie Hu, Mingchuan yang", |
| "published": "2024-03-07", |
| "updated": "2024-03-07", |
| "primary_cat": "cs.CL", |
| "cats": [ |
| "cs.CL", |
| "cs.AI", |
| "cs.LG" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM AND Reasoning", |
| "gt": "Large language models (LLMs) have demonstrated formidable capabilities in various tasks including summarization (Stiennon et al., 2020; Koh et al., 2022), instruction following (Chung et al., 2022; Ouyang et al., 2022), robotics (Huang et al., 2023; Liu et al., 2023), and more(Roziere et al., 2023). attracting widespread attention in academia and industry. Several methods (Ouyang et al., 2022; Bai et al., 2022; Dong et al., 2023; Yuan et al., 2023; Rafailov et al., 2023; Dai et al., 2023) have been proposed to ensure the outputs of LLMs to align with hu- man values, among which Reinforcement Learning from Human Feedback (RLHF) (Christiano et al., 2017; Ziegler et al., 2019; Casper et al., 2023) is the mainstream. RLHF models the generation process of LLMs as a Markov Decision Process (MDP) * Equal contribution \u2020 Corresponding authors Your performance at work is unparalleled, truly exceptional versus disastrous LLM Proxy-Model Your performance at work is unparalleled, truly exceptional versus\u00a0exceptional LLM Proxy-Model Reject Accept Figure 1: Demonstration of how the proxy model works. The proxy model is responsible for supervising the gen- eration of the LLM, deciding whether to accept the latest token generated by the LLM. By accepting tokens that align with human values and rejecting those that do not, it ensures that the final generation results are aligned with human values. and treats the language model as a policy model, directly optimizing its parameters. In the RLHF method, LLMs are responsible for both generation and alignment, making the align- ment process inevitably computation-intensive. The RLHF method employs on-policy reinforce- ment learning algorithms, typically PPO (Schul- man et al., 2017), which requires two trainable language models of the same size. Furthermore, an extra constraint on the KL divergence with the reference model is imposed. Overall, the RLHF method requires the simultaneous use of four mod- els\u2014policy, reward, value, and reference mod- els\u2014each with billions of parameters. To address these issues, we propose Proxy- RLHF, which aligns language models with human values with minimal computational cost. Different from previous methods, our core idea is to decouple the generation and alignment processes of LLMs. Specifically, we have restructured the Markov De- cision Process in RLHF. In this framework, LLMs are solely responsible for generating tokens with- out having to consider alignment with human val- arXiv:2403.04283v1 [cs.CL] 7 Mar 2024 ues. A new proxy model evaluates the quality of the generated tokens, accepting those that align with human values and rejecting those that do not, thereby achieving alignment. However, training a proxy model from scratch is challenging. Unlike the RLHF approach, where the policy model benefits from initialization with LLMs, the proxy model lacks initial understanding about natural language. Therefore, we propose the Stable Knowledge-Aware Module (SKAM), which can (1) stabilize training and avoid unnecessary re- peated exploration through the redesign of LLMs sampling, and (2) ensure that the final generated responses fall within the knowledge and skill scope of the LLMs by limiting the rejection actions of the proxy model, potentially endowing the proxy model with certain linguistic capabilities. 
Additionally, we utilize the hidden states generated by the LLM during its generation process as input features for the proxy model, further reducing the number of parameters and the computational cost. We encapsulate the generation of LLMs into a reinforcement learning environment and conduct extensive experiments on it. The experiments validated that our method is both parameter-efficient and data-efficient, achieving a comparable level of alignment with less than 1% of the training parameters used by other methods.", |
| "main_content": "2.1 Markov decision process We conceptualize the alignment process of large language models as a Markov Decision Process (MDP), represented by a tuple (S, A, R, P, \u03c0), where s \u2208S denotes the state, a \u2208A represents the action, P : S \u00d7 A \u00d7 S \u2192[0, 1] signifies the state transition probability, R denotes the reward, and a policy \u03c0 : S \u2192A represents a mapping from state space to action space. Specifically, in Proxy-RLHF, let \u03c0\u03b8 denotes the policy of proxy model, its input state s is a sequence of tokens consisting of a prompt and the responses generated up to that point. Its action space A contains two actions: a = 0 for accepting the token, and a = 1 for rejecting. If the newly generated token is accepted, the language model generates a new candidate token based on the prefix, otherwise, it resamples a new token based on the prefix without the rejected token. That is, the state transition P is determined by the generation process of the LLMs, emphasizing the importance of designing a sampling method for the LLMs that facilitates the learning of the proxy model. 2.2 Stable Knowledge-Aware Module The Stable Knowledge-Aware Module consists of two parts: the redesign of the sampling method and the restriction of the action space of the proxy model. The redesign of the sampling method reduces the randomness of state transitions and unnecessary exploration in the environment, stabilizing the model\u2019s training. The restriction of the proxy model\u2019s action space, by limiting the number of rejections, ensures that the final generated answers fall within the knowledge and skill range of the LLMs, guaranteeing the usefulness of the answers. Redesign of Sampling We have the LLM generate tokens in descending order of probability and remove any token that has been rejected by the proxy model at the same position from the pool of candidate tokens. This resembles greedy sampling, but the difference lies in that the same position may undergo multiple regenerations in Proxy-RLHF if previous tokens are rejected by the proxy model, meaning the final accepted token is the one with the highest probability among the remaining candidate tokens, not necessarily the highest probability of all tokens. Specifically, a new candidate token ti in step i is generated by ti = argmax t\u2208T \\T r p\u03d5{t|x, y<i} where p\u03d5 represents the logits generated by the language model based on the prompt x and previously generated responses y<i. T is the set of the entire vocabulary. T r is the set maintaining tokens rejected by the proxy model at position i and will be reset to \u2205if stepping into i + 1. Given state s = (x, y<i+1), we have next state s\u2032 = \ufffd (x, y<i+1, ti+1) if a = 0, (x, y<i, ti) if a = 1. Note that selecting the token greedily does not hurt the output diversity: we can always reach the desired token by consistently rejecting others. Additionally, removing rejected tokens from the candidate tokens set can prevent the training process from falling into unnecessary loops, e.g. generating and rejecting the same token again. Action Space Restriction In proxy RLHF, the probability of token e t being accepted at position i is: p{yi = e t|x, y<i} = Y ti\u2208e T \u03c0\u03b8(a = 1|x, y<i, ti). where e T = {ti|p\u03d5{ti|x, y<i} > p\u03d5{e t|x, y<i}, ti \u2208 T } is the set of all tokens whose probabilities, as provided by the language model\u2019s logits, are greater than e t. 
The chosen probability of e t shifts from the origin probability of the language model to the product of the action probability of the proxy model. This implies that tokens with a low probability in the language model might be chosen by the proxy model with a higher probability, creating a discrepancy. Therefore, relying solely on existing methods of limiting LLMs generation sampling (e.g., topk, topp sampling) is no longer effective. To address this issue, we restrict the action space of the proxy model. Specifically, We preset a hyperparameter pt. If the average probability of the remaining tokens is less than pt, we mask the rejection action, thus forcing the action of the proxy model to be acceptance. This ensures that irrational tokens are not sampled. The constrained action space of the proxy can be represented as: A = ( {0} if P t\u2208T \u2032 p\u03d5{t|x, y<i} \u2264pt \u2217|T \u2032|, {0, 1} else . Where T \u2032 = T \\ T r. In actual deployment, we use topp sampling methods with temperature to eliminate most irrational tokens beforehand, further reducing unnecessary exploration by the proxy model and stabilizing and speeding up the training process. 3 Experiment We designed experiments to answer the following questions: 1. Can our method perform comparably to RLHF or DPO with far fewer training parameters? 2. How does the Stable Knowledge-Aware Module affect the performance of our method? 3. As a method trained from scratch, how dataefficient is our approach? Prompt Dataset and Model We use the hh dataset and filter out the safety-related prompts to focus more on helpfulness and reduce bias. Furthermore, similar to previous work(Dong et al., 2023), to reduce the use of GPU memory, we do not use (a) (b) Figure 2: (a) The reward distribution on the test set for SFT and Ours, where scores are obtained from the reward model. (b) The win rate of Ours, DPO, RLHF, and BON against the SFT model, where the win rate is determined by pair-wise comparison from GPT-4. We use greedy sampling for all methods above and set n=32 in BON. Table 1: The comparison of the number of parameters between our method, DPO, and RLHF when fine-tuning the Alpaca-7B model Method Trainable parameters GPU memory required for training Fine-tuning LLM RLHF 13.35B 198.87Gb \u2713 DPO 6.74B 100.41Gb \u2713 Proxy-RLHF(ours) 0.03B 0.5Gb prompts exceeding 256 tokens in length. The final dataset comprises a training set of 36k prompts and a test set of 1899 prompts. Consistent with previous work(Dai et al., 2023), the experiments were conducted on the alpaca-7b model, which was obtained by applying supervised fine-tuning (SFT) to the llama-7b model using prompt data from GPT-3.5. Evaluation Metrics Two methods were used to evaluate the model\u2019s output: the score of the reward model and the win rate of GPT-4(Achiam et al., 2023). We use the beaver reward model1, which was trained on the same dataset and achieved an accuracy of 62.03% on the test set. We use GPT-4 (gpt-4-1106-preview) for pairwise comparison of the outputs to obtain the final win rate. Effectiveness Experiment In this section, we demonstrate the effectiveness of our method, addressing question 1. Figure 2a shows the reward distribution of SFT baseline and our method on the test set of filtered HH dataset. A significant right-shift of the distribution can be observed, which indicates that our method can effectively improve the outputs\u2019 reward of the SFT model. 
On the other hand, our method achieves a higher GPT-4 win rate against the SFT baseline (63.24%) than DPO (42.65%) and RLHF (61.24%), as shown in Figure 2b. This demonstrates that our method can achieve performance comparable to RLHF and DPO with less than 1% of the trainable parameters, as shown in Table 1. The low win rate of DPO with greedy sampling is also reported in its paper, while our method still achieves a win rate higher than 50%. BON (n=32) achieves the best results, but at the cost of many generations at inference time; like RLHF and DPO, our method requires only a single generation at inference.
1 https://huggingface.co/PKU-Alignment/beaver-7b-v1.0-reward
Figure 3: (a) The average score given by the reward model on the test set, for models corresponding to different pt, after completing one round on the training set. (b) The average score corresponding to different temperatures, after completing one round on the training set.
Table 2: The average score on the test set for models with different pt on the first 0.5k, 1k, 1.5k, 2k and 36k (full) training data.
pt       0.5k     1k       1.5k     2k       36k
0.1      -10.14   -10.15   -10.12   -9.70    -9.99
0.01     -7.09    -6.53    -6.96    -7.05    -6.72
0.001    -3.93    -4.72    -3.70    -4.05    -3.18
0.0001   -3.46    -4.51    -3.28    -4.45    -2.92
Hyper-parameters Experiment In this section, we answer question 2. We demonstrate how two key hyper-parameters of the Stable Knowledge-Aware Module influence the final performance of the model: pt and the temperature. A higher pt indicates a greater likelihood that the model's actions are restricted to only accepting the current generation result. A higher temperature means the logits output by the LLM are smoothed more strongly, leading to a more uniform probability distribution over the vocabulary and a smaller chance that the model's actions are restricted to only accepting the current generation result. In summary, the smaller the pt and the higher the temperature, the less likely it is that the model's action space is restricted, allowing for more choices, and vice versa. The results in Figure 3 show that as pt decreases and the temperature increases, the reward also increases. This suggests that when the model has more choices, our proxy model can adapt to these choices and produce higher-quality outputs. Moreover, too small values of pt (0.001 and 0.0001) may result in irregular outputs (see the Appendix). Thus, the Stable Knowledge-Aware Module is necessary for avoiding irregular outputs, but it can also be tuned to let the proxy model search a larger response space and find better responses. Efficiency Experiment In this section, we address question 3. We present the average scores of the model on the test set after being trained with 0.5k, 1k, 1.5k, and 2k training data points, as well as the average scores after one full round of training on the entire training set. Table 2 shows that the scores in early training steps are close to the final scores after one round of training. In particular, for pt = 0.01, we reach the final performance within no more than 2,000 early training steps. This suggests that our method converges quickly and is data-efficient. 4 Related Work Several approaches have been proposed to reduce the complexity and instability of RLHF (Ramamurthy et al., 2022). (Bai et al., 2022; Lee et al., 2023) introduced RLAIF, which reduces the annotation cost of preference data.
RRHF(Yuan et al., 2023) and RAFT(Dong et al., 2023) rank responses and use the highest-scoring answers for supervised fine-tuning. Direct Preference Optimization (DPO)(Rafailov et al., 2023) directly optimizes the language model with preference loss without the need for additional training of a reward model. Methods above consider the LLM itself as a policy model to be optimized, taking on both generation and alignment tasks, making the computationally expensive step of fine-tuning the LLM unavoidable. 5 Conclusion In this paper, we introduce the proxy-model, which decouples the generation and alignment processes within LLMs, using an additional lightweight proxy model to guide the generation of LLMs, achieving an alignment of output answers with human values. Furthermore, we propose SKAM to stabilize the training of the proxy model and ensure the effectiveness of the answers. Experiments show that our method achieves a level of alignment comparable to RLHF with less than 1% of the training parameters. Limitations This study, while pioneering in its approach to decouple generation and alignment processes in LLMs, is subject to several limitations. First, the effectiveness of the Proxy-RLHF model relies heavily on the quality and comprehensiveness of human feedback, which may not always be consistent or universally applicable across different domains or cultures. Secondly, the proposed method has been primarily validated in controlled experimental settings, and its robustness in real-world applications remains to be extensively tested. Lastly, the scalability of this approach to even larger models or more complex tasks is not fully explored, leaving open questions about its long-term applicability and adaptability. Ethics Statement In developing Proxy-RLHF, we recognize the ethical implications associated with the deployment of large language models (LLMs). Our method aims to align LLMs more closely with human values through efficient and targeted feedback, addressing concerns related to bias, misinformation, and the potential for harmful outputs. However, we acknowledge that the technology could be misused if the alignment process is biased or if the proxy model is manipulated to endorse unethical values. We committed to transparency in our methodology and results to foster an open dialogue about these challenges. We also emphasize the importance of diverse and inclusive feedback to mitigate biases. Moving forward, we encourage continued ethical scrutiny and multidisciplinary collaboration to ensure that advancements in LLMs contribute positively to society." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.10429v1", |
| "title": "MEEL: Multi-Modal Event Evolution Learning", |
| "abstract": "Multi-modal Event Reasoning (MMER) endeavors to endow machines with the\nability to comprehend intricate event relations across diverse data modalities.\nMMER is fundamental and underlies a wide broad of applications. Despite\nextensive instruction fine-tuning, current multi-modal large language models\nstill fall short in such ability. The disparity stems from that existing models\nare insufficient to capture underlying principles governing event evolution in\nvarious scenarios. In this paper, we introduce Multi-Modal Event Evolution\nLearning (MEEL) to enable the model to grasp the event evolution mechanism,\nyielding advanced MMER ability. Specifically, we commence with the design of\nevent diversification to gather seed events from a rich spectrum of scenarios.\nSubsequently, we employ ChatGPT to generate evolving graphs for these seed\nevents. We propose an instruction encapsulation process that formulates the\nevolving graphs into instruction-tuning data, aligning the comprehension of\nevent reasoning to humans. Finally, we observe that models trained in this way\nare still struggling to fully comprehend event evolution. In such a case, we\npropose the guiding discrimination strategy, in which models are trained to\ndiscriminate the improper evolution direction. We collect and curate a\nbenchmark M-EV2 for MMER. Extensive experiments on M-EV2 validate the\neffectiveness of our approach, showcasing competitive performance in\nopen-source multi-modal LLMs.", |
| "authors": "Zhengwei Tao, Zhi Jin, Junqiang Huang, Xiancai Chen, Xiaoying Bai, Haiyan Zhao, Yifan Zhang, Chongyang Tao", |
| "published": "2024-04-16", |
| "updated": "2024-04-16", |
| "primary_cat": "cs.AI", |
| "cats": [ |
| "cs.AI" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM AND Reasoning", |
| "gt": "Events are instances or occurrences that are the fundamental semantic units. Events are not in- dependent, and they are usually interconnected by the following relations: causality, temporal- ity, and intention. Multi-modal Event Reasoning (MMER) is to comprehend these events and their **Corresponding authors. The evolution scenario of a hurricane event. The buildings and facilities were damaged. Local authorities conduct damage assessments to determine the extent of the destruction. Residents and businesses affected by the hurricane begin the process of filing insurance. Government agencies assign work to restore infrastructure and services. A hurricane or severe weather event caused significant damage to the buildings. Local authorities issue evacuation orders for coastal or low-lying areas. The moist air rises from the ocean surface, it releases heat into the storm system. Figure 1: Part of the event evolution of a hurricane sce- nario. The queried event is in red. MEEL endows the model with the knowledge of all events in the scenario evolution. Current methods only train the model of few clips of event reasoning of the green one. relations in both visual and textual modalities, and finally pave a path to better understanding the true world. MMER is expected to serve as the underpin- ning for various multi-modal applications, includ- ing visual storytelling (Huang et al., 2016), visual event prediction (Huang et al., 2021), event-related VQA (Park et al., 2020), MM knowledge graph construction (Ma et al., 2022), and video genera- tion (Li et al., 2018; Liu et al., 2024). Such intricate tasks require an understanding of the event evolu- tion mechanism across diverse scenarios. With the deepening of research on multi-modal instruction tuning, Multi-modal large language models (MLLM) have been able to handle various multi-modal tasks effectively (Liu et al., 2023; Zhu et al., 2023; Chen et al., 2023; Dai et al., 2023; Li et al., 2023b). These models master some abilities of MM event reasoning implicitly during training in diversified sorts of tasks. Among all the task cat- egories, the perception tasks such as referring ex- pression comprehension, referring expression gen- eration, and grounded image captioning (Mao et al., 2016; Kazemzadeh et al., 2014; Peng et al., 2023) enable the model to comprehend the entities of the events in the image and text. The cognitive tasks, namely image caption and VQA (Lin et al., 2014; arXiv:2404.10429v1 [cs.AI] 16 Apr 2024 Goyal et al., 2017), endow the model with the se- mantic understanding capability of events. How- ever, the models trained by these tasks are unable to perceive event evolution because of the static nature of all modality inputs. Existing visual instruction- tuning methods only consist of questions for few clips of the entire event scenario. As shown in Figure 1, current methods only model the queried events with the green event and ignore the rest of the scenario. They lack a vision of a broad spec- trum of other events in the evolving context. Such contextual absence impedes models from learning abundant evolution knowledge resulting in poor performances in MMER. To address this issue, we propose Multi-Modal Event Evolution Learning (MEEL) for endowing the model to understand the event evolution to en- hance the ability of MMER, leading to improved performances on downstream tasks. Specifically, we first design the event scenario diversification to acquire various events from abundant scenar- ios. 
Then, we employ ChatGPT to generate the evolving graphs of these seed events. The aim is to use these graphs to train the model to understand the rich knowledge of the evolution of events. To accomplish this goal, we propose the instruction encapsulation process to adapt the evolving graphs into instruction-tuning data to train the model. In this way, the training allows the model to com- prehend more event evolutional knowledge of the scenario leading to better performance of MMER. However, allowing the model to learn only the evolving graphs is insufficient. Without acknowl- edging the incorrect evolving events, the model would improperly forward the process, resulting in the hallucination of event reasoning. To mit- igate this problem, we perform the guiding dis- crimination. The model requires judging the incor- rect evolution. We design various negative mining strategies to harvest incorrect events. Then, we train the model to classify the right event. We also adapt the guiding discrimination into instruction tuning. After obtaining all the data, We finetune the LLaVA (Liu et al., 2023) model after its stage-1 pre-taining with LoRA (Hu et al., 2021) to get our model. To validate the effectiveness of MEEL, we curate a benchmark M-EV2 for Multi-modal EValuation of EVent reasoning. M-EV2 is col- lected or curated from nine existing datasets cov- ering visual storytelling (Huang et al., 2016), vi- sual event prediction (Huang et al., 2021), and event-related VQA (Yeo et al., 2018; Zhang et al., 2021a). M-EV2 consists tasks relying on the abil- ities of MMER of diverse inter-event relations as causality, temporality, and intent. It also con- sists of two reasoning paradigms: close and open reasoning. We conducted extensive experiments on M-EV2 and compare MEEL to MLLM base- lines. We achieve competitive performances in open-source MM LLMs. The results demonstrate that our method does enhance the MMER ability of the model yielding significant improvements in downstream tasks. We conclude our contributions as: \u2022 We propose the Multi-Modal Event Evolution Learning (MEEL). It aims to train the model to comprehend the intricate event evolution of diversified scenarios. Our method may shed light on other MM event reasoning research. \u2022 We further design the Guiding Discrimination to guide the evolution and mitigate the hallu- cinations of MMER. \u2022 We collect and curate the M-EV2 benchmark for MMER. M-EV2 covers diversified inter- event relations. We conduct extensive experi- ments on M-EV2 to test the effectiveness of our model. We achieve competitive perfor- mance among open-source MLLMs.", |
| "main_content": "We strive to enhance a multi-modal large language model\u2019s capability in multi-modal event reasoning (MMER) to boost performance on downstream tasks. Our approach, Multi-Modal Event Evolution Learning (MEEL), is introduced and structured as follows: Section 2.1 details the MMER task. The main purpose of MEEL is to enhance the comprehension of event evolution. We initiate with an event diversification step to generate a diverse mix of seed events of various scenarios (Section 2.2). Then we construct the event-evolving graphs through a novel method named event graph evolution (Section 2.3). Our next objective is to leverage these event-evolving graphs for model training. To this end, we encapsulate these graphs into suitable formats for instruction tuning by instruction encapsulation (Section 2.4). Note that instruction tuning is one of the feasible ways to learn the knowledge of event-evolving graphs. One can also leverage other methods such as in-context learning. Finally, we incorporate a guiding discrimination training strategy to refine evolution pathways and reduce Guiding Discrimination ROOT IP NP VP PN He VV buys NP QP NP CD NN a toy ROOT IP NP VP PN He VV helps NP QP NP CD NN a man Tree Distance He hard. works He hard. hits Event Diversification Instruction Encapsulation Template Generation Instruction Templates Word Overlap Semantic Evolving Positive Chosen Negative Other Negative Instruction Data Guiding Discrimination MEEL Evolving Graphs Diversified Seed Events Event Graph Evolution \u00b7\u00b7\u00b7 \u00b7\u00b7\u00b7 \u00b7\u00b7\u00b7 \u00b7\u00b7\u00b7 \u00b7\u00b7\u00b7 \u00b7\u00b7\u00b7 Figure 2: Overview of MEEL. We first implement the Event Diversification to harvest seed events. Then we perform the Event Graph Evolution to obtain the evolving graphs. We adapt the evolving graphs into instruction-tuning data through our Instruction Encapsulation. The Guiding Discrimination aims to improve the evolution learning with our two negative event mining strategies. reasoning errors (Section 2.5). MEEL\u2019s comprehensive framework is graphically represented in Figure 2. 2.1 Multi-Modal Event Reasoning Multi-Modal Event Reasoning (MMER) involves deducing events based on certain inter-event relations across different modalities. Specifically, events as semantic units can be characterized by text, but their semantics are often more richly conveyed through associated images (Zhang et al., 2021b). The pursuit of MMER is to harness these multi-modal inputs to establish various relationships between events (temporal, causal, intentional, etc.), facilitating sophisticated reasoning processes (Tao et al., 2023b,a; Han et al., 2021). This reasoning underlies a spectrum of downstream tasks (Huang et al., 2016; Park et al., 2020; Huang et al., 2021). We elaborate on the MMER formulation, wherein an event is expressed by a textual sentence E and represented by an image I. Text provides argument structure, such as subject, verb, and object (Doddington et al., 2004), while images contextualize the event with environmental and situational details (Yang et al., 2023; Zellers et al., 2021). MMER can be modeled as inferring a target event Et based on a given relation R: Et = M (E, I, R), R \u2208SR. (1) Here, M denotes the model and SR represents the set of possible inter-event relations. For example, in Figure 1, E is the red event, I is the image, the queried relation R is \"cause\", the answer Et is the green event. 
Therefore, the entire data is: Question: Given the image, what is the cause of \"The buildings and facilities were damaged.\". Answer: A hurricane or severe weather event caused significant damage to the buildings. This question can not be answered only based on the E since there can be many reasons for building damage. Seeing the image, we can reason the damage could be caused by a hurricane. Models require analysis of both E and I to get the answer. 2.2 Event Diversification Event diversification aims to curate a varied collection of seed events, encompassing multiple types and scenarios for ensuing evolutionary learning. We initiate this process with a corpus of text and image pairs {(Ei, Ii)}, where each pair jointly represents an event. We next extract the triggers to represent the events. Trigger words are typically verbs that explicitly signify the event\u2019s occurrence (Doddington et al., 2004). We employ the Spacy tool1 to identify the primary verb VEi within each text Ei as the trigger. Observing a long-tail distribution in trigger frequency, we only include K events per trigger to establish a balanced seed event set, denoted as SE = {(Ei, Ii)}. The outcome of this event diversification step is more diversified event types 1https://spacy.io/ Consider the context that {caption}.Given a mentioned event: {event}, list events that each has a certain relation to the mentioned event. The generated events should be complex and complete which should at least have subject, verb. Leverage richer common sense knowledge to generate events. The generated events and the mentioned event can have different subjects. \\nExamples: {examples} \\nOutput: (a) Evolving Prompt Give me 100 instructions. The instructions aim to ask a model to \u201creturn an event that is the result of a given image based on a given context\". The generated instructions should be as rich as possible in syntax, semantics, and form, covering various task difficulties. Include the context in the generated instructions and mention it as [event]. Don't generate double quotation marks. Considering the {event}, describe an event that unfolds as a consequence of the depicted image. (b) Instruction Generation & Generated Template Give me 50 instructions that aim to choose the most possible consequence of a given image from given choices based on a given context. The generated instructions should be as rich as possible in syntax, semantics, and form, covering various task difficulties. Include the context in the generated instructions and mention it as [event]. Don't generate double quotation marks. Draw upon the {event} to interpret the image, then pick the consequence that emerges as the most plausible. (c) Instruction Generation & Generated Template (MC) Figure 3: (a) Evolving prompt. The sentence in brown only exists if E is the seed event. In such a case, we add the caption of I. (b) Instruction templates generation of Result relation and one example of generated template. (c) Multiple-choice Instruction templates generation of Result relation and one example of generated template. {caption} is the placeholder for the image caption. {event} and {examples} are for the event E and in-context examples. and scenarios, thereby broadening our model\u2019s generalization capabilities and strengthening its understanding of varied contexts. 
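A minimal sketch of this diversification step is shown below, assuming spaCy's en_core_web_sm pipeline is installed; the root-verb heuristic and the per-trigger cap K are illustrative choices rather than the paper's exact configuration.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English pipeline is installed

def main_verb(text: str) -> str:
    """Return the sentence's main verb (root if verbal, else the first verb) as the event trigger."""
    doc = nlp(text)
    for tok in doc:
        if tok.dep_ == "ROOT" and tok.pos_ in ("VERB", "AUX"):
            return tok.lemma_
    verbs = [tok.lemma_ for tok in doc if tok.pos_ == "VERB"]
    return verbs[0] if verbs else ""

def diversify(events: List[Tuple[str, str]], K: int = 5) -> List[Tuple[str, str]]:
    """Keep at most K (text, image_path) events per trigger to flatten the long-tail trigger distribution."""
    per_trigger: Dict[str, List[Tuple[str, str]]] = defaultdict(list)
    for text, image in events:
        trig = main_verb(text)
        if trig and len(per_trigger[trig]) < K:
            per_trigger[trig].append((text, image))
    return [pair for pairs in per_trigger.values() for pair in pairs]
```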
2.3 Event Graph Evolution For the goal of enhancing the comprehension of event evolution, we utilize the seed events SE to construct event-evolving graphs through our designed event graph evolution methodology. Building on insights from prior work where LLMs like ChatGPT2 have demonstrated proficiency in generating coherent event narratives (Gunjal and Durrett, 2023; Li et al., 2023e), we apply a breadth-first search (BFS) strategy using the ChatGPT to expand each seed event (E, I) \u2208SE both forward and backward in event happening time. We show the process of either direction in Algorithm 1. We introduce the process for the forward evolution. Starting from the seed event E, we con2https://openai.com/ Algorithm 1: Event Graph Evolution algorithm. Input :Seed event E and the caption C, evolving relations RE, evolving steps L. Output :Event-evolving graph G. 1 G.AddNode(E), \u02dc E = [E] 2 for i \u21901 to L do 3 N = [ ] 4 for Ej in \u02dc E do 5 if i == 1 then 6 {(Ek, Rk)} = Evolve(Ej, C, SampleRel(RE, 2)) 7 else 8 {(Ek, Rk)} = Evolve(Ej, SampleRel(RE, 2)) 9 end if 10 for Ek, Rk in SampleEvent({(Ek, Rk)}, 2) do 11 G.AddNode(Ek) 12 G.AddEdge(Ej, Rk, Ek) 13 N.Append(Ek) 14 end for 15 end for 16 \u02dc E = N 17 end for 18 return G sider forward-oriented relations such as RE = {Result, After, HasIntention}3. For each iteration of this process, we invoke the ChatGPT to produce events consistent with sampled relations from RE, as described in Equation 1. At the beginning, recognizing the potential bias of relying solely on textual events, we incorporate visual information of the seed event. Specifically, while evolving a seed event, we add its image caption to provide contextual details, promoting more accurate evolution. When evolving the intermediate events, we only use just their text. The prompt template for this evolution process is depicted in Figure 3(a). After L iterations, we acquire an event-evolving graph G. Besides, we also consider backward evolution. Our motivation for that is intuitive. We want the model to cognize event evolution in an complete timeline including both directions. Since we always start from an intermediate event in the timeline, we need to perform both forward and backward evolution. To do that, we consider evolving relations RE = {Cause, Before, IsIntention} and remains the other steps the same. After the both sides evolution, we denote the 3Relations are directed from the generated to the queried event, for instance, generating the Result for a given event. HasIntention implies the head event is intended by subjects in the tail event. outputs as the event-evolving graph G which entails the rich evolution mechanism of the event scenario. 2.4 Instruction Encapsulation To endow the knowledge of the evolving graphs G for model training, we turn to multi-modal instruction-tuning, a technique with proven efficiency in adapting models to human-like comprehension (Zhu et al., 2023; Sun et al., 2023; Li et al., 2023a; Liu et al., 2023; Li et al., 2023b; Dai et al., 2023). Our approach involves transforming the components of G, represented as G = (V, W) with nodes V and edges W, into instruction-tuning data. For each node Ei \u2208V, we aim to create a datum comprising the seed event Es, its associated image I, the relation Ri, and the event Ei. However, directly inferring Ri between nodes Es and Ei is not straightforward if they are nonadjacent. We address this by introducing induction rules that leverage the properties of interevent relations, as detailed in Table 1. 
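These induction rules amount to regular-expression patterns over the sequence of relations along a path. A small sketch follows; the trailing (Before)* in the backward rules is assumed symmetric to the forward ones.

```python
import re
from typing import List, Optional

# Patterns over the relation sequence along a path from the seed event to a target node.
# '*' = zero or more, '+' = at least one, mirroring Table 1.
_RULES = [
    (r"(After )*(Result )+(After )*", "Result"),
    (r"(After )*(HasIntention )+(After )*", "HasIntention"),
    (r"(After )+", "After"),
    (r"(Before )*(Cause )+(Before )*", "Cause"),
    (r"(Before )*(IsIntention )+(Before )*", "IsIntention"),
    (r"(Before )+", "Before"),
]

def induce_relation(path_relations: List[str]) -> Optional[str]:
    """Collapse the edge relations along a path into a single seed-to-target relation, if a rule applies."""
    seq = "".join(r + " " for r in path_relations)
    for pattern, relation in _RULES:
        if re.fullmatch(pattern, seq):
            return relation
    return None

# Example: a path After -> Result collapses to Result, matching the worked example discussed next.
assert induce_relation(["After", "Result"]) == "Result"
```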
For example, in an evolving graph G, there exists a path from the seed event Es and another event E2: Es\u21d2[After]\u21d2E1\u21d2[Result]\u21d2E2. According to rule 1 in Table 1: (After)\u22c6(Result)+(After)\u22c6infers Result, where \u22c6denotes there exists zero or more, + means there is at least one. We induce Es\u21d2[Result]\u21d2E2. By applying these rules, we derive the indirect relation Ri. Then we embed all the data with our instructiontuning templates to form an instruction tuning dataset. For the templates, to avoid the laborious task of manual template creation, we employ ChatGPT to generate diverse question templates for each relation type. With 100 templates from ChatGPT, the templates aim to reason about the tail event based on the provided visual and/or textual events in accordance with Equation 1. Considering the possible absence of textual input, we generate two variations for each of the |SR| relations: one with textual input and one without. For any given data (Es, I, Ri, Ei), we randomly determine whether to include textual event information. We then match a suitable template to the relation type Ri and encapsulate all the items into our instruction-tuning dataset. An example of an encapsulated datum is illustrated in Figure 3(b). 2.5 Guiding Discrimination To ensure accuracy during event graph evolution and guide the model away from generating erroneous events, we introduce a guiding discriminaRULE INDUCTION (After)\u22c6(Result)+(After)\u22c6 Result (After)\u22c6(HasIntention)+( After)\u22c6 HasIntention (After)+ After (Before)\u22c6(Cause)+(Before) Cause (Before)\u22c6(IsIntention)+(Before) IsIntention (Before)+ Before Table 1: Relation induction rules. \u22c6denotes there exists zero or more. + means there is at least one. tion training paradigm. This mechanism is pivotal in preventing the evolution process from producing hallucinations which is similar to DPO (Rafailov et al., 2023). In this paradigm, we task the model with identifying the correct event amongst a set of carefully selected negative events. Et = M (E, I, R, D), R \u2208SR, (2) where D is the candidate set consisting of the correct event Et and a few negative events. The discrimination training is challenging to perform due to the sourcing of these negative events. For which we formulate two negative event acquisition strategies: Semantic: This strategy requires model to discriminate the semantic similar events. To forge semantically similar negative events, we first compile a pool of all events of the generated graphs. For any positive event E, utilizing Spacy for dependency parsing, we compute the tree edit distance and the word overlap rate between E and each event in this pool4. Filtering by the preset thresholds for these metrics, we select the top two events that are close to E. This method sharpens the model\u2019s ability to distinguish between events with closely related linguistic structures. Evolving: This strategy enhances the model\u2019s grasp on the directionality of event evolution. Leveraging the bidirectional nature of our event generation, namely forward and backward directions, we select two negative events from the opposite direction of the positive event\u2019s evolution. These negatives are particularly challenging as they maintain shared arguments within the same scenario but differ in their logical sequence. This practice further refines the model\u2019s reasoning skills for establishing the correct evolution path. 
2.5 Guiding Discrimination
To ensure accuracy during event graph evolution and guide the model away from generating erroneous events, we introduce a guiding discrimination training paradigm. This mechanism is pivotal in preventing the evolution process from producing hallucinations and is similar in spirit to DPO (Rafailov et al., 2023). In this paradigm, we task the model with identifying the correct event amongst a set of carefully selected negative events:
Et = M(E, I, R, D), R ∈ SR, (2)
where D is the candidate set consisting of the correct event Et and a few negative events. Discrimination training is challenging to perform because of the sourcing of these negative events, for which we formulate two negative-event acquisition strategies:
Semantic: This strategy requires the model to discriminate semantically similar events. To forge such negatives, we first compile a pool of all events from the generated graphs. For any positive event E, we use spaCy for dependency parsing and compute the tree edit distance (using the zhang-shasha implementation, https://github.com/timtadh/zhang-shasha) and the word overlap rate between E and each event in this pool. After filtering by preset thresholds for these metrics, we select the top two events closest to E. This method sharpens the model's ability to distinguish between events with closely related linguistic structures.
Evolving: This strategy enhances the model's grasp of the directionality of event evolution. Leveraging the bidirectional nature of our event generation, namely the forward and backward directions, we select two negative events from the opposite direction of the positive event's evolution. These negatives are particularly challenging because they share arguments within the same scenario but differ in their logical sequence. This practice further refines the model's reasoning skills for establishing the correct evolution path.
From the four negative events generated through these strategies, we randomly select two. These, alongside the correct event, are then encapsulated into a multiple-choice format. We also create diverse multiple-choice question templates for each relation type via ChatGPT; an example of such a generation prompt and a corresponding template is presented in Figure 3(c). Comprehensive statistics of the dataset are detailed in Table 2.
Table 2: Trainset statistics.
 | Count | Avg. input tokens
Graph nodes | 3600 | 38.36
Trainset | 7470 | 104.17
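A rough sketch of the Semantic strategy described above is given below, assuming spaCy (with its small English model installed) for dependency parsing and the cited zhang-shasha (zss) package for tree edit distance. The thresholds and the exact word-overlap definition are illustrative assumptions, since the paper only states that preset thresholds are used.

```python
import spacy
from zss import Node, simple_distance  # zhang-shasha tree edit distance, as cited in the paper

nlp = spacy.load("en_core_web_sm")

def dep_tree(text):
    """Convert a sentence's spaCy dependency parse into a zss tree labeled by dependency relations."""
    root = next(nlp(text).sents).root
    def build(token):
        node = Node(token.dep_)
        for child in token.children:
            node.addkid(build(child))
        return node
    return build(root)

def word_overlap(positive, candidate):
    """One possible definition: share of positive-event words that also appear in the candidate."""
    pos, cand = set(positive.lower().split()), set(candidate.lower().split())
    return len(pos & cand) / max(len(pos), 1)

def semantic_negatives(positive, pool, max_tree_dist=5, min_overlap=0.3, k=2):
    """Pick the k pool events closest to the positive event under structural and lexical filters."""
    pos_tree = dep_tree(positive)
    scored = []
    for cand in pool:
        if cand == positive:
            continue
        dist = simple_distance(pos_tree, dep_tree(cand))
        overlap = word_overlap(positive, cand)
        if dist <= max_tree_dist and overlap >= min_overlap:
            scored.append((dist, -overlap, cand))
    return [cand for _, _, cand in sorted(scored)[:k]]

pool = ["the man burns the food", "the man serves the food", "a dog chases a ball"]
print(semantic_negatives("the man cooks the food", pool))
```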
2.6 Training
After acquiring both the MMER and guiding discrimination datasets, we finetune the backbone by combining the MMER loss LR (from Eq. 1) and the guiding discrimination loss LD (from Eq. 2):
LR = − Σ_(E,I,R) log P(Et | E, I, R),  LD = − Σ_(E,I,R,D) log P(Et | E, I, R, D),  L = LR + LD. (3)
3 Experiments
3.1 Construction of M-EV2
To comprehensively evaluate models' MMER abilities over diversified inter-event relations, we collect and curate a benchmark, M-EV2. It incorporates nine test sets covering event-related visual question answering (VCOPA, VisCa, VisualComet), visual event prediction (IgSEG), and storytelling (VIST). M-EV2 evaluates event relations of causality, temporality, and intent, and covers two reasoning paradigms: multiple-choice close reasoning (CLOSE) and open reasoning without candidates (OPEN). We elaborate on the curation process as follows.
VCOPA This is a commonsense VQA task (Yeo et al., 2018). Given an image I and two candidate options, the task is to select the more plausible cause or effect option. We also transform this dataset into an open reasoning task in which we do not provide the candidates and require the model to generate the answer. We denote the original multiple-choice task as VCOPA-C and the transformed task as VCOPA-O.
VisCa This is a dataset for learning contextual causality from visual and textual signals (Zhang et al., 2021a). In the original task, given two images as context and two textual sentence descriptions, models must determine whether the former sentence causes the latter. We transform it into our VQA setting: we keep the image and the first sentence and regard the second sentence as the label to generate. We retrieve one negative sentence based on the ground truth to form a multiple-choice task, and we also adapt the multiple-choice task into an open reasoning task similar to VCOPA-O. We denote these two tasks as VisCa-C and VisCa-O.
VisualComet This is an open commonsense VQA task that asks about situations before or after the depicted moment (Park et al., 2020). We also retrieve a negative answer to formulate a multiple-choice version. We denote these two tasks as VC-O and VC-C.
IgSEG This dataset aims to predict future events based on what has happened (Huang et al., 2021). Specifically, given a sequence of sentences in sequential order and the image of what will happen next, models need to generate a sentence for this image. We also retrieve one negative event to form a multiple-choice task. We denote these two tasks as IgSEG-O and IgSEG-C.
VIST This is the storytelling task of generating the next part of a story given the previous story sentences and an image (Huang et al., 2016).
3.2 Baselines
We compare against the following baselines: LLaVA-Lora (Hu et al., 2021), InstructBLIP (Dai et al., 2023), Otter (Awadalla et al., 2023), MiniGPT-4 (Zhu et al., 2023), and MiniGPT-4-v2 (Chen et al., 2023). We show more details in Appendix A.
3.3 Implementation Settings
We use InstructBLIP (Dai et al., 2023) to generate the image captions for event graph evolution. We sample two evolving events for each event in the BFS and set the number of evolution steps to 3, finally constructing 15,000 instruction-tuning instances. For our model, we use LLaVA-v1.3 after the first pre-training stage as our backbone (Liu et al., 2023) and train with LoRA (Hu et al., 2021), making it comparable to the LLaVA-Lora-v1.3-7B baseline. We use DeepSpeed (https://www.deepspeed.ai/) ZeRO-2 without CPU offloading and set the batch size to 16 on 4×V100 GPUs.
In pilot experiments, we tested multiple input prompts for each model to identify the most effective prompts for evaluation. Despite variations in prompts, we observed only minimal fluctuations in the results. To ensure consistency and mitigate other influences, we used the same prompt for all models performing a given task; detailed prompts are provided in Appendix B. For the multiple-choice tasks, we transformed them into multiple-choice questions and instructed the model to respond with the corresponding choice label. For CLOSE tasks, we design an answer decoding strategy, shown in Algorithm 2, and find that it handles almost all situations.
Algorithm 2: CLOSE answer decoding.
Input: prediction P, candidate set D. Output: answer A.
  pattern = "the(?: correct)? (?:option|answer) is[\s:]+([A-H])"
  if P starts with a choice letter: A = that letter
  else if re.match(pattern, P): A = re.extract(P, pattern)
  else: A = argmax_{c ∈ D} WordOverlap(c, P)
  return A
3.4 Evaluation Metrics
For multiple-choice tasks, we employ accuracy as the metric. For OPEN tasks, we use BLEU-1/2 (Papineni et al., 2002) and BERT-Score (Zhang et al., 2019) as measures.
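Before turning to the results, here is a runnable rendering of the CLOSE answer-decoding strategy of Algorithm 2. It is a sketch rather than the authors' exact code: `re.search` is used so the answer phrase may occur anywhere in the output, and ties in the word-overlap fallback are broken arbitrarily.

```python
import re

ANSWER_PATTERN = re.compile(r"the(?: correct)? (?:option|answer) is[\s:]+([A-H])", re.IGNORECASE)

def word_overlap(candidate, prediction):
    """Count of shared words between a candidate answer and the model output."""
    return len(set(candidate.lower().split()) & set(prediction.lower().split()))

def decode_close_answer(prediction, candidates):
    """Map a free-form model output to a choice label; candidates maps labels (e.g. "A") to texts."""
    text = prediction.strip()
    # Case 1: the output starts directly with a choice letter, e.g. "B. The man wins ...".
    if text and text[0].upper() in candidates:
        return text[0].upper()
    # Case 2: the output contains a phrase such as "the correct answer is C".
    match = ANSWER_PATTERN.search(text)
    if match:
        return match.group(1).upper()
    # Case 3: fall back to the candidate with the largest word overlap with the output.
    return max(candidates, key=lambda label: word_overlap(candidates[label], text))

choices = {"A": "the man slips on the ice", "B": "the man wins the race"}
print(decode_close_answer("I think the correct answer is A.", choices))  # -> A
```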
3.5 Main Results
We test our model on the M-EV2 benchmark. We report the VQA results in Table 3, the visual event prediction and visual storytelling results in Table 4, and several kinds of average scores in Table 5.
Table 3: Main results of VQA tasks (CLOSE columns report accuracy; OPEN columns report BLEU-1 / BLEU-2 / BERT-Score). The bold number represents the highest score.
Model | VCOPA-C | VisCa-C | VC-C | VCOPA-O | VisCa-O | VC-O
InstructBLIP (Dai et al., 2023) | 63.33 | 64.78 | 51.25 | 7.57 / 2.31 / 9.32 | 7.56 / 1.01 / 14.87 | 12.30 / 4.84 / 13.72
Otter (Li et al., 2023b) | 57.27 | 55.97 | 45.10 | 11.78 / 1.35 / 17.12 | 10.29 / 0.51 / 10.51 | 7.96 / 3.18 / 9.13
LLaVA-Lora (Liu et al., 2023) | 46.06 | 45.28 | 45.60 | 7.66 / 1.44 / 0.64 | 7.06 / 0.67 / 5.66 | 7.57 / 2.31 / 3.32
MiniGPT-4 (Zhu et al., 2023) | 56.67 | 47.80 | 51.40 | 9.78 / 2.44 / 7.05 | 7.87 / 1.55 / 10.30 | 6.92 / 1.78 / 0.42
MiniGPT-4-v2 (Chen et al., 2023) | 49.70 | 52.83 | 54.60 | 8.90 / 2.13 / 2.09 | 8.89 / 1.21 / 8.55 | 7.54 / 3.03 / 5.06
MEEL (Ours) | 66.06 | 72.33 | 68.10 | 19.18 / 2.92 / 26.02 | 19.16 / 3.40 / 29.58 | 16.28 / 3.99 / 22.93
Table 4: Main results of visual event prediction (IgSEG) and storytelling (VIST). IgSEG-C reports accuracy; IgSEG-O and VIST report BLEU-1 / BLEU-2 / BERT-Score. The bold numbers represent the best score.
Model | IgSEG-C | IgSEG-O | VIST
InstructBLIP | 55.10 | 8.13 / 2.63 / 15.91 | 6.71 / 1.22 / 11.31
Otter | 53.20 | 7.57 / 1.35 / 4.34 | 7.63 / 1.20 / 10.51
LLaVA-Lora | 46.40 | 9.03 / 1.50 / 4.46 | 9.09 / 3.03 / 5.53
MiniGPT-4 | 49.90 | 8.72 / 1.54 / 3.24 | 8.66 / 1.67 / 9.64
MiniGPT-4-v2 | 51.30 | 8.69 / 1.45 / 3.73 | 8.95 / 1.68 / 10.44
MEEL (Ours) | 66.50 | 14.00 / 1.41 / 19.41 | 14.38 / 1.44 / 25.60
Table 5: Averaged results by task group. PRED stands for visual event prediction; STORY is visual storytelling; CLOSE and OPEN are the close and open reasoning tasks, respectively; ALL is the average performance on all test sets. The bold numbers represent the best score.
Model | VQA | PRED | STORY | OPEN | CLOSE | ALL
InstructBLIP | 33.01 | 35.50 | 11.31 | 12.53 | 54.11 | 25.16
Otter | 28.40 | 28.77 | 10.51 | 9.66 | 49.06 | 21.64
LLaVA-Lora | 23.92 | 25.43 | 5.53 | 4.64 | 45.85 | 17.17
MiniGPT-4-v2 | 26.49 | 26.57 | 9.64 | 6.44 | 51.30 | 20.08
MiniGPT-4 | 28.86 | 27.51 | 10.44 | 7.84 | 53.11 | 21.60
MEEL (Ours) | 45.53 | 37.95 | 25.60 | 23.06 | 67.64 | 36.61
MEEL can effectively enhance VQA performance. MEEL achieves the highest scores on the three CLOSE VQA tasks, namely VCOPA-C, VisCa-C, and VC-C, in Table 3. The results indicate that MEEL can distinguish the correct events, owing to the improvements brought by event graph evolution with guiding discrimination. For the three OPEN VQA datasets, BERT-Score best reflects answer quality among the metrics, and MEEL outperforms all other baselines by a large margin, demonstrating the effectiveness of our method on OPEN VQA. We also note that the BLEU-1/2 scores of MEEL are higher than those of almost all models; since BLEU-1/2 measures lexical similarity, this suggests MEEL generates events that are more well-formed with respect to the ground truth. In all, our method improves MMER.
MEEL outperforms baselines in visual event prediction. MEEL performs the best among all baselines in Table 4, demonstrating that our training method enables the model to capture correct temporal relations and produce more precise predictions of the future. Compared to VQA tasks, all models perform worse in visual event prediction, indicating that this task requires more knowledge and reasoning ability. In OPEN visual prediction, MEEL also achieves the highest BERT-Score, showing that our model can forecast semantically similar events. However, MEEL performs slightly lower in BLEU-2 on IgSEG-O; since BLEU-2 measures 2-gram lexical similarity, this may indicate that MEEL predicts more diversified events with correct semantics rather than merely reusing words from the context.
MEEL generates better stories. In Table 4, MEEL outperforms all baselines on VIST. The results show that MEEL tells better stories by capturing more scenario knowledge and comprehending inter-event relations: event graph evolution trains the model to absorb enriched event information rather than merely shallow single-step reasoning.
In all, MEEL significantly improves performance on the downstream tasks, attributable to its boosted MMER capabilities. In Table 5, MEEL surpasses all baselines on the average score over all datasets, demonstrating the effectiveness of our method. Our event graph evolution process stimulates contextual understanding of events, and guiding discrimination further mitigates hallucinations in event reasoning, yielding better performance. Among the task types, the improvements on VQA and STORYTELLING are larger than on PREDICTION, indicating that our method benefits these tasks more; PREDICTION is the hardest to learn, as it demands more abundant knowledge of events.
3.6 Analysis
Evolution steps. We conduct experiments with different numbers of evolution steps to verify the effectiveness of event graph evolution. We test steps 1-4 and calculate several kinds of average scores; the results are shown in Figure 4 (Figure 4: Analysis of steps of event graph evolution; (a) average scores on VQA, PREDICTION, and the average of all results; (b) average scores on STORYTELLING, all CLOSE tasks, and all OPEN tasks). In terms of the average over all results, the performance of MEEL increases from step 1 to step 3 (Figure 4(a)). This is consistent with our motivation that event graph evolution enables the model to learn rich knowledge of event evolution and thus complete MMER better. Performance drops when the number of steps becomes too large, which we attribute to semantic drift in the event graph evolution: ChatGPT generates content that is less relevant to the seed event as it evolves further. The drop is most obvious on VQA, probably because VQA involves the strictest relations among all inter-event relations. We also find that MEEL achieves a high STORYTELLING score even when the evolution step is only one (Figure 4(b)): MEEL reaches a BERT-Score of 25.51 while InstructBLIP reaches 11.31, and MEEL maintains a high score as the number of steps increases. This indicates that MEEL handles STORYTELLING well even with few evolution steps.
Effect of guiding discrimination. We ablate guiding discrimination and show the results in Table 6. All performances drop when MEEL is trained without guiding discrimination, indicating that discrimination guides the evolution and mitigates hallucinations.
Table 6: Ablation study on the OPEN tasks (BERT-Score). MEEL w.o. D is our method without guiding discrimination.
Model | VCOPA-O | VisCa-O | VC-O | IgSEG-O | VIST
MEEL w.o. D | 19.63 | 21.78 | 21.79 | 18.83 | 24.67
MEEL | 26.02 | 29.58 | 22.93 | 19.41 | 25.60
Examples of event graph evolution. We showcase two examples of event graph evolution in Figure 5 (Figure 5: An example of an event-evolving graph, where each event pointed to by an edge is a generated tail event satisfying the relation indicated by that edge). Our evolving graphs sufficiently capture the information and knowledge of event scenarios; with the aid of event-evolving graphs, MEEL learns more abundant event knowledge and relation inter-connections.
Effect of event diversification. We compute the event verb distribution with and without event diversification; the results are shown in Figure 6 (Figure 6: Distribution of the 100 most frequent verbs before and after event diversification, with each slice of the pie chart showing the proportion of a verb; (a) without event diversification, (b) with event diversification). The distribution is significantly diversified after the event diversification process, which enables MEEL to be trained on various event scenarios and domains.
4 Related Work
Multi-Modal Event Relational Reasoning. As one of the relation types, causality reasoning is crucial for exploring the cause and effect of events (Yeo et al., 2018; Zhang et al., 2021a; Chadha and Jain, 2021; Ignat et al., 2021). Apart from causality, event temporal reasoning forms a basic ability (Zellers et al., 2019; Park et al., 2020; Zellers et al., 2021), and event intentional reasoning uncovers the intentions of the subjects of events (Park et al., 2020; Li et al., 2023c). There is also research on other relation types (Kim et al., 2022; Hessel et al., 2022). Multi-modal event relational reasoning constitutes a foundational capability for a range of downstream tasks in the realm of multi-modal reasoning, and our research endeavors to further enhance this crucial skill.
Multi-Modal Instruction Tuning. With the significant success of instruction tuning (Ouyang et al., 2022; Xu et al., 2024), current research has extended this capability to multi-modality. MM instruction tuning trains the model to follow instructions for questions about images. Compared to textual instruction tuning, harvesting MM data with instructions is more difficult. Zhu et al. (2023) train MiniGPT-4 by further aligning the pretrained EVA-CLIP (Fang et al., 2023) and Vicuna (Chiang et al., 2023). Liu et al. (2023) generate visual instruction data by querying ChatGPT/GPT-4 with a given image and its caption. Dai et al. (2023) adapt human-labeled datasets into instruction data with pre-made templates. Li et al. (2023a) construct in-context learning data with instructions and use this dataset to train an MM LLM. These methods model only shallow event-evolving situations, leading to poor MM event relational reasoning ability.
Script Induction. Script induction aims to induce or generate chains or graphs of events that represent how scenarios evolve. Du et al. (2022) induce 11 scripts of newsworthy scenarios from documents. Gunjal and Durrett (2023) attempt to generate event chains by querying large language models. Zhang et al. (2023) construct scripts by designing interactions between humans and an LLM. Li et al. (2023e) create event graphs through a pipeline of generation, ordering, and verification. In our work, we are the first to utilize the script-induction ability of ChatGPT to construct MM event-oriented instruction-tuning data, and we expect our work may shed light on other event-oriented approaches.
5 Conclusion
We propose Multi-Modal Event Evolution Learning (MEEL) for MMER. We design the event graph evolution process based on the diversified seed events and then encapsulate the evolving graphs into instruction-tuning data. We introduce the guiding discrimination training paradigm to further improve the learning of evolution. We conduct experiments on our collected and curated M-EV2 benchmark for MMER; the results show the effectiveness of MEEL, which achieves competitive performance among open-source visual instruction-tuning baselines.
Limitations
Our method is limited to MMER over a single image, whereas a more complex MMER setting may involve several images expressing a scenario. We leave the construction of methods and benchmarks for this more complex MMER to future work." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.06962v1", |
| "title": "Advancing Real-time Pandemic Forecasting Using Large Language Models: A COVID-19 Case Study", |
| "abstract": "Forecasting the short-term spread of an ongoing disease outbreak is a\nformidable challenge due to the complexity of contributing factors, some of\nwhich can be characterized through interlinked, multi-modality variables such\nas epidemiological time series data, viral biology, population demographics,\nand the intersection of public policy and human behavior. Existing forecasting\nmodel frameworks struggle with the multifaceted nature of relevant data and\nrobust results translation, which hinders their performances and the provision\nof actionable insights for public health decision-makers. Our work introduces\nPandemicLLM, a novel framework with multi-modal Large Language Models (LLMs)\nthat reformulates real-time forecasting of disease spread as a text reasoning\nproblem, with the ability to incorporate real-time, complex, non-numerical\ninformation that previously unattainable in traditional forecasting models.\nThis approach, through a unique AI-human cooperative prompt design and time\nseries representation learning, encodes multi-modal data for LLMs. The model is\napplied to the COVID-19 pandemic, and trained to utilize textual public health\npolicies, genomic surveillance, spatial, and epidemiological time series data,\nand is subsequently tested across all 50 states of the U.S. Empirically,\nPandemicLLM is shown to be a high-performing pandemic forecasting framework\nthat effectively captures the impact of emerging variants and can provide\ntimely and accurate predictions. The proposed PandemicLLM opens avenues for\nincorporating various pandemic-related data in heterogeneous formats and\nexhibits performance benefits over existing models. This study illuminates the\npotential of adapting LLMs and representation learning to enhance pandemic\nforecasting, illustrating how AI innovations can strengthen pandemic responses\nand crisis management in the future.", |
| "authors": "Hongru Du, Jianan Zhao, Yang Zhao, Shaochong Xu, Xihong Lin, Yiran Chen, Lauren M. Gardner, Hao Frank Yang", |
| "published": "2024-04-10", |
| "updated": "2024-04-10", |
| "primary_cat": "cs.LG", |
| "cats": [ |
| "cs.LG", |
| "cs.AI" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM AND Reasoning", |
| "gt": "Pandemic forecasting is essential for providing situational awareness and supporting decision-making for poli- cymakers during public health emergencies. The ability to forecast short-term disease outcomes is crucial for informing resource allocation and risk mitigation strategies in near-term time frames, ultimately aiming to minimize the burden of diseases. Predominantly, the existing forecasting models used in research and practice can be categorized into two types: mechanistic models, which simulate transmission dynamics in the population through compartmental models such as the SIR model and its derivatives1\u20133, and statistical models, which adopt data-driven approaches to forecast disease trends using historical data patterns4\u20137. Different models are suited for distinct forecasting needs: Mechanistic models are beneficial for long-term projections due to their ability to integrate scenario assumptions, whereas statistical models are helpful for short-term forecasting because they can adapt immediate trends effectively8. Despite the pivotal role of statistical models in short-term forecasting, they are limited in their ability to 1) adapt to multi-modal data in real-time9,10, 2) respond to rapidly changing policies11, 3) account for the emergence of new variants12, and 4) translate the predicted results into useful decision-support guidance with trustworthiness13. Consequently, current pandemic forecasting models have often struggled to identify and predict critical turning points in the pandemic\u2019s trajectory14\u201316. Moreover, the absence of a transparent interpretation of these results can diminish public trust in these models and even potentially undermine the efficacy of public health responses. The COVID-19 pandemic highlighted each of these deficiencies in the existing set of disease forecasting tools, which, as a result, struggled to accurately forecast disease spreading patterns17. In particular, the complexities of the COVID-19 pandemic arose from several interrelated factors that were difficult to incorporate into predictive models: 1) the virological characteristics of the virus18, such as relative transmissibility, severity, and impact on immunity, 2) the diverse demographic profiles of affected populations, including their evolving immunity19, and 3) the dynamic relationship between public policy and human behavior20. The virological dynamics, including factors like the characteristics of viruses and their current prevalence, represent a biological \u201clanguage\u201d with its own syntax and semantics, ruling how a virus behaves and evolves21. Simultaneously, human dynamics, including demographics, behavioral patterns, and responses to policy changes, are akin to a socio-cultural \u201clanguage\u201d, rich with subtle cognitive processes. Recognizing these complexities, we aim to enhance disease forecasting by exploiting the richness of timely, complex data made available in multiple structures and formats, diverging from the current research trend of simplifying information into solely numerical forms for the purposes of mathematical modeling. Recently, the Large Language Models (LLMs) stand out as a substantial advancement in artificial intelligence (AI), which have demonstrated proficiency in multi-modal contextual feature learning24\u201328. Their strength in text generation and reasoning suggests potential for understanding the complex dynamics of disease spread29. 
However, LLMs are designed for processing information in natural language, which presents the following distinct challenges when applying them to pandemic forecasting: 1) Incorporating multi-modality information: The data used for forecasting pandemics include epidemiological time series, public health policy, genomic surveillance, and demographics profiles, requiring these multi-modality inputs to be organized and tailored into suitable learned representations and structured contextual prompts. 2) Incapability in modeling temporal dynamics: LLMs typically struggle with time series data due to the tokenization of continuous numbers30, requiring new methodologies that strategically embed temporal information into prompts, enabling LLMs to effectively utilize these information for 2/35 Figure 1. The overview of PandemicLLMs\u2019 pandemic data streams and pipeline. (a) Multi-modality data insights into Pandemic. Our multi-modality dataset integrates four types of pandemic data sources: spatial, epidemiological time series, public health policy, and genomic surveillance data. Spatial data includes demographic and healthcare indicators, whereas the epidemiological time series aspect covers reported cases, hospitalizations, and vaccination rates. Data about policy detail governmental interventions in a textual format, and the genomic surveillance data integrates textual descriptions of variants with weekly sequences regarding their prevalence. The data comprises 5,200 records, covering all 50 U.S. states over 104 weeks. The phylogenetic tree of SARS-CoV-2 was generated using Nextstrain22. (b) PandemicLLMs\u2019 construction pipeline. To forecast pandemic hospitalization trends, we formulate the problem as an ordinal classification task. We define five categories following CDC guidance23: Substantial Decrease, Moderate Decrease, Stable, Moderate Increase, and Substantial Increase. By converting multi-modality data into a text format through AI-human cooperative prompt design, PandemicLLMs are fine-tuned with these prompts and targets for 1-week and 3-week forecasts. We emphasize rigorous performance assessment to verify the accuracy and trustworthiness of our predictions. 3/35 reasoning. In light of the limitations of existing models and the well established text reasoning capability of LLMs, we propose PandemicLLM, the first LLM-based framework for pandemic forecasting. We reformulate pandemic forecasting as a text reasoning task, allowing the integration of new data streams that were not previously utilized in pandemic forecasting models. The proposed framework is designed for state-level COVID-19 hospitalization forecasting across the U.S., targeting prediction horizons of 1-week and 3-week. We tackle the challenge of multi- modality data input in LLMs by employing an AI-human cooperative approach for prompt generation, coupled with the use of a Recurrent Neural Network (RNN) for temporal representation learning of epidemiological time series. This strategy enables an LLM to effectively process complex pandemic-related information, encompassing spatial, epidemiological time series, public health policy, and genomic surveillance aspects. Furthermore, by recasting forecasting as an ordinal classification of hospitalization trends, we align the model\u2019s output with the needs of public health decision-makers, also adhering to CDC guidance23. An extensive evaluation of PandemicLLM from all 50 U.S. 
states across 16 weeks demonstrates its advantages: PandemicLLM delivers robust and trustworthy forecasts, offering categorical predictions with confidence levels that support public health policy making. Moreover, it exhibits timely and accurate response to emerging variants, leveraging the textual analysis of timely genomic surveillance data. These findings highlight the potential of leveraging LLMs in public health emergency response settings.", |
| "main_content": "By reformulating pandemic forecasting as a text reasoning problem, PandemicLLMs push the boundaries of traditional forecasting methods. To the best of our knowledge, this study represents the first LLM-based modeling framework that incorporates novel data streams for forecasting pandemics and contributes to the literature by: 1. Extending the LLM framework for enhanced pandemic forecasting: We extended the existing LLM architecture to handle the multi-modality nature of pandemic-related data. This extension includes integrating a human-AI collaborative prompt design, enabling the transformation of numerical pandemic data into textual formats suitable for LLM-based learning. Crucially, we introduced a RNN encoder tailored for the temporal representation learning of epidemiological time series, which, as our ablation study highlights, contributes to an accuracy improvement of 17%-24%. 2. Incorporating underutilized pandemic-related data streams into pandemic forecasting: PandemicLLM incorporates critical and timely disease-relevant information and novel data streams that have not previously been used by pandemic forecasting models. Specifically, PandemicLLM integrates real-time textual virological characteristics, estimated variant prevalence, textual public health policy, and healthcare system performance alongside the traditional demographic and epidemiological time series. The inclusion of the variant information is shown to improve model performance without the need for retraining, underscoring the importance of adaptability to real-time information for effective response during ongoing outbreaks. 3. Providing public health decision-makers with robust and trustworthy predictions: Our model is engineered to create clear and easily understood outputs. It effectively differentiates between highly certain 4/35 outcomes and those with more uncertainty. Our findings indicate that as the confidence level of the predictions rises, so does the model\u2019s accuracy. This ensures that decision-makers have access to dependable forecasts, thus increasing the utility of the model. 3 Data and Methods 3.1 Multi-modality pandemic data In addressing the intricate dynamics of COVID-19 transmission, which involves data of heterogeneous formats, the proposed PandemicLLM was built on four data categories: spatial, epidemiological time series, public health policy, and genomic surveillance data (Fig. 1a). This categorization reflects the distinct nature and representation of each data type. Spatial data, sourced at the state level, comprised numeric and static variables, including demographic information, healthcare system scores, and political affiliations. Epidemiological time series were collected at weekly resolution for each state, encompassing numerical and sequential data, such as reported COVID-19 cases, hospitalization, and vaccination rates. Public health policy data, also state-specific, were captured in a textual format, detailing the stringency level and types of government policies. Genomic surveillance data are a hybrid of textual and sequential formats, with textual data sourcing from the authoritative reports of variants\u2019 virological characteristics and sequential data reflecting the current prevalence of these variants. Our multi-modality pandemic dataset encompasses information spanning 50 states across the U.S. and 104 weeks from January 2021 to January 2023, culminating in a total of 5,200 data records. 
This dataset includes five types of public health policies, three variations in vaccine rates, and genomic data on five Omicron sublineages. Utilizing the AI-human cooperative prompt design (section 8.3.2), these data records have been transformed into a set of prompts, aggregating approximately 1.51 million words. Further details regarding the data sources and the methods used in data preprocessing are documented in Section 8.1. 3.2 The PandemicLLM framework To provide actionable and interpretable forecasts, we formulate pandemic forecasting as an ordinal classification problem. Leveraging the heterogeneous data types discussed in section 3.1, we adapt these data into a textual format amenable to LLM learning facilitated by an AI-human cooperative prompt design. Subsequently, the PandemicLLM undergoes supervised fine-tuning using designed prompts and targets. To ensure the accuracy and reliability of our predictions, we undertake an extensive performance evaluation, focusing particularly on the results\u2019 trustworthiness. Fig. 1b illustrates the entire construction pipeline of PandemicLLM, with subsequent sections providing detailed explanations of each segment of the pipeline. 3.2.1 Pandemic forecasting as ordinal classification In our effort to build PandemicLLM, particular attention was given to the design of the prediction targets, aiming to refine the targets used for COVID-19 forecasting. We argue that continuous targets, such as reported cases or hospitalizations, are prone to reporting errors and hinder clear uncertainty communication for stakeholders. The frequent ambiguity observed in the conflicting predicted trends of 50% and 95% confidence intervals from the COVID-19 Forecast Hub31 highlights this problem. To address this, we introduce a straightforward yet informative targets: Hospitalization Trend Category (HTC). This target categorizes future hospitalization trends into five levels: 5/35 Substantial Decrease, Moderate Decrease, Stable, Moderate Increase, and Substantial Increase. Detailed definitions for HTC can be found in Section 8.1.1. 3.2.2 AI-human cooperative prompt design The cornerstone of the PandemicLLM framework is the AI-human cooperative prompt design, a process that blends human insight with AI efficiency. This framework involves the embedding of multi-modality data into textual formats, and each resulting prompt has around 300 words (see Extended Data Fig. 10 for an example of completed prompt). Summaries of the data transformation into text are provided below (see Fig. 2), with detailed descriptions available in Section 8.3.2. 
Figure 2. Summary of the AI-human cooperative prompt design. Spatial data for all 50 U.S. states are converted into descriptions that reflect their rankings; the policy data include stringency levels and week-to-week changes. Epidemiological time series data use both narrative generation and representation learning. Genomic surveillance data combine textual summaries of variant characteristics with recent prevalence. The blue arrow indicates information textualization, while the red arrow indicates sequence representation learning. Each designed prompt has 296 to 322 words. (The figure also shows an example of a complete generated prompt, combining the task instruction with spatial, epidemiological time series, real-time genomic, and public health policy information.)
• Spatial data: Across all 50 U.S. states, each type of spatial data (e.g., population and healthcare system scores) was assigned a numerical rank and then categorized into one of five descriptive levels reflecting its relative position.
• Public health policy data: For every state and week, each policy is detailed by including its policy type, summarizing its stringency level, and highlighting any variations compared to the policy of the preceding week. The policies included in this study are detailed in Extended Data Table 2.
• Epidemiological time series data: Two distinct approaches are implemented to embed epidemiological time series. (1) The first method uses ChatGPT32 to convert numerical sequence data into detailed textual narratives, by instructing ChatGPT to analyze and summarize the recent trends and rates of change within the given time series as a one-sentence description (see Extended Data Fig. 6). (2) The second approach focuses on the most critical temporal data elements, specifically the hospitalization rate time series, which are tokenized through representation learning utilizing a Gated Recurrent Unit (GRU) framework (see Extended Data Fig. 7).
• Genomic surveillance data: As outlined in section 3.1, genomic surveillance data were collected in a combination of textual and sequential formats. For the textual component, a summary is created from authoritative reports (refer to Supplementary Information section 1 for detailed data sources), focusing on three key virological characteristics of the variant: infectiousness, severity, and resistance to immunity. For the sequential aspect, a methodology similar to the first approach for temporal data is employed, where ChatGPT is used to summarize the trends and rates of change in the recent variant proportion time series.
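As a rough illustration of how these textualized components could be concatenated into a single forecasting prompt, the sketch below assembles the four modalities into one string. The function, field wording, and placeholder token are illustrative assumptions rather than the exact templates of Section 8.3.2; the hospitalization series is represented by a placeholder token that the GRU-derived embedding would replace at the token-embedding level.

```python
def build_prompt(state_desc, policy_desc, vaccine_desc, hosp_embedding_token,
                 variant_summary, variant_trend, options):
    """Assemble a PandemicLLM-style forecasting prompt from textualized components (illustrative)."""
    return (
        "You are a helpful assistant designed to forecast epidemic trends for a specific US state. "
        f"Predict the trend of hospitalization for the next week from the available options: {options}.\n"
        f"Spatial information: {state_desc}\n"
        f"Public health policy: {policy_desc}\n"
        f"Vaccination: {vaccine_desc}\n"
        f"Weekly hospitalization per 100k people: {hosp_embedding_token}\n"
        f"Real-time genomic information: {variant_summary} Recent prevalence: {variant_trend}\n"
        "Answer with one option."
    )

options = ["Substantial Decrease", "Moderate Decrease", "Stable",
           "Moderate Increase", "Substantial Increase"]
print(build_prompt(
    state_desc="New York, one of the most populous states ...",
    policy_desc="School policies moved from recommended closures to no restrictions ...",
    vaccine_desc="60% received at least one dose, with a stable trend ...",
    hosp_embedding_token="<hosp_seq>",  # replaced later by the learned sequence embedding
    variant_summary="BQ.1 is an Omicron sublineage with a growth advantage ...",
    variant_trend="rising share of sequenced samples",
    options=options,
))
```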
Our modeling framework included testing LLMs of three varying scales, with parameter sizes of 7 billion (7B), 13 billion (13B), and 70 billion (70B), to assess the potential enhancement in performance corresponding to increased model size. However, given the computational requirement of the 70B model might not be practical for all public health institutions, we discuss the results of PandemicLLM-70B only in section 4.3 and analyze the more accessible 7B and 13B models in other sections. For performance evaluation, we used the data from September 2022 through January 2023 as our test set. The prior period, spanning January 2021 to September 2022, was partitioned into training and validation sets with an 80/20 ratio. 3.2.4 Evaluation of PandemicLLM and reference models Evaluation metrics: As our proposed framework reformulates pandemic forecasting as an ordinal classification problem, we adopt the widely used accuracy and mean squared error (MSE) for evaluation. Nevertheless, these two metrics only evaluate the prediction with the largest probability, overlooking the prediction distribution. For a fine-grained evaluation, we propose to evaluate using Weighted MSE (WMSE) along with Brier Score34, and Ranked Probability Score (RPS)35. Specifically, the Brier Score quantifies the precision of probability estimations. 7/35 The RPS penalizes predictions where the predicted probabilities deviate from the target category. The WMSE escalates its penalization in proportion to the divergence of the predicted probabilities from the target category. The detailed definition for each error metric is documented in section 8.3.4. Reference models: We evaluated PandemicLLMs with five models currently used in pandemic forecasting, including four machine learning models: Gated Recurrent Unit (GRU)36, Long Short-Term Memory (LSTM)7, Bidirectional Long Short-Term Memory (bi-LSTM)37, and AutoRegressive Integrated Moving Average (ARIMA)38. These machine learning models were trained on numerical data, explicitly excluding contextual policy and genomic surveillance information. In addition to these machine learning models, we included a simple yet hard to beat heuristic-based baseline, PrevTrend39, which assumes that the predicted probability for each state is based on the distribution of states across various categories for the most recent observation. Detailed descriptions of reference models are presented in the Supplementary Information sections 4 and 5. 4 Results 4.1 COVID-19 hospitalization trend prediction PandemicLLMs capture the overall COVID-19 hospitalizations trend accurately. Designed to predict 1-week and 3-week COVID-19 Hospitalization Trend Category (HTC) for each U.S. state, PandemicLLMs\u2019 predictions closely align with the observed ground truth pattern (Fig.3a). This pattern reveals a shared decline in hospitalizations throughout September and early October, followed by a distinct increase starting in November. Despite this shared trend, individual states exhibit highly diverse hospitalization trajectories, underscoring the inherent complexity of pandemic forecasting. Remarkably, even within this diverse landscape, both versions of PandemicLLM (7B and 13B) demonstrate similar performances, with accuracies of 55.4% and 56% for 1-week predictions and 45.4% and 46.4% for 3-week predictions, respectively. 4.2 Spatial performance evaluation PandemicLLMs exhibit robust performance nationally, though with local variations. 
The heterogeneity in state-level hospitalization trajectories motivates the investigation of how PandemicLLMs\u2019 performance varies across states. Fig.3b to 3e display the average performance on WMSE for each state, highlighting the spatial differences in model efficacy. Nationally, the PandemicLLM-7B and PandemicLLM-13B show average WMSE of 0.72 and 0.9 for 1 and 3-week forecasts, respectively. However, local variation still exists, given the state-level pattern diversity. Iowa (0.31, 0.49), Utah (0.48, 0.41), and Idaho (0.41, 0.53) exhibit the lowest WMSE (1-week, 3-week), indicating the top performances. Conversely, Delaware (1.46, 1.69), Wyoming (1.32, 1.53), and Vermont (1.31, 1.49) have the highest WMSE (1-week, 3-week), indicating the bottom performances. Based on our evaluation, the PandemicLLM demonstrates reliable performance in the West Coast, Southeast, and the Great Lakes regions. This pattern likely stems from the similarity in hospitalization trends among states within these regions (see Fig. 3a for categorical trend and Extended Data Fig.8 for continuous trend). Notably, these regional trends closely align with the overall national pattern. For Northeastern and Midwestern regions, the regional behavioral differences40 potentially triggered the trend variation, leading to performance decreases in states with trends diverging significantly from the national trajectory (e.g., Wyoming, Maine, and South Dakota). This finding suggests that region-specific models could be a valuable avenue for future research to address these localized variations more effectively. 8/35 (a) Predictions visualization: 1-week and 3-week predictions vsiualization by PandemicLLMs versus the ground truth targets, in 50 States from September 5, 2022 to December 12, 2022. Color indicates Hospitalization Trend Category (HTC): SD: Substantial Decrease, MD: Moderate Decrease, ST: Stable, MI: Moderate Increase, SI: Substantial Increase. (b) PandemicLLM-7B performance by state (1-week) (c) PandemicLLM-7B performance by state (3-week) (d) PandemicLLM-13B performance by state (1-week) (e) PandemicLLM-13B performance by state (3-week) 9/35 (f) 1-week forecasting performance measured by weighted mean squared error (WMSE) over time. (g) 3-week forecasting performance measured by weighted mean squared error (WMSE) over time. Figure 3. PandemicLLMs\u2019 predictions visualization and performance evaluation. (a), 1-week and 3-week predictions by PandemicLLMs versus the ground truth targets. Color indicates Hospitalization Trend Category (HTC): SD: Substantial Decrease, MD: Moderate Decrease, ST: Stable, MI: Moderate Increase, SI: Substantial Increase. (b, c), 1-week and 3-week performance for PandemicLLM-7B. (d, e), 1-week and 3-week performance for PandemicLLM-13B. The color gradients represent the magnitude of the WMSE, where a darker shade of red signifies a greater error, and a darker shade of blue denotes a smaller error. Equivalent evaluations with alternative error metrics are included in the Supplementary Information section 9. (f, g), Performances comparison of PandemicLLMs with reference models across time. The red curve on the back represents the weekly reported COVID-19 hospital admission at the national level. The left y-axis represents the scale of WMSE, and the right y-axis represents the scale of hospital admission. Each set of bar graphs in the figure represents the distribution of WMSE for all states during a specific week. The color bars represent the error distributions for different models. 
(f), 1-week forecasting performance. (g), 3-week forecasting performance. 10/35 4.3 Comparison to reference models In light of the observed spatial variations in PandemicLLMs\u2019 performance, and to comprehensively assess its capabilities, this section presents a comparative analysis of the PandemicLLMs\u2019 overall performances against four established machine learning models and one heuristic-based baseline. Utilizing five metrics, including Accuracy, MSE, WMSE, Brier Score, and RPS, for evaluation, the average performances across all states and weeks tested are shown in Table 1, from which we have the following observations: PandemicLLMs outperform existing forecasting models by at least 20%. The heuristic-based benchmark PrevTrend has average rankings of 4th and 6th for 1-week and 3-week prediction, demonstrating the complex nature of pandemic forecasting, which poses significant challenges for traditional machine-learning models. However, with the ability to leverage multi-modality data, all sizes of PandemicLLMs demonstrate better average rankings compared to all reference models, leading to average accuracy improvements of at least 20% and 22% for 1-week and 3-week, respectively. The notable performance enhancement validates the effectiveness of our proposed framework. Table 1. A summary of overall models\u2019 performances for PandemicLLMs, baseline, and other machine learning models. \u2191/\u2193indicate that higher/lower metric values signify better performance. Numbers in parentheses represent the relative ranking of each model. Prediction Target Model Evaluation Metric (Model Rank) Average Rank Accuracy \u2191 MSE \u2193 WMSE \u2193 Brier Score \u2193 RPS\u2193 1-week GRU 0.468 (5) 0.899 (5) 1.727 (6) 0.667 (3) 0.112 (4) 4.6 (5) Bi-LSTM 0.365 (8) 1.532 (8) 2.134 (7) 0.750 (7) 0.154 (7) 7.4 (8) LSTM 0.419 (6) 0.896 (4) 1.136 (4) 0.747 (6) 0.126 (6) 5.2 (6) ARIMA 0.416 (7) 1.108 (7) \\ \\ \\ 7.0 (7) PrevTrend 0.471 (4) 0.925 (6) 1.361 (5) 0.660 (2) 0.113 (5) 4.4 (4) PandemicLLM-7B 0.554 (3) 0.593 (2) 0.668 (2) 0.668 (5) 0.098 (2) 2.8 (3) PandemicLLM-13B 0.560 (2) 0.627 (3) 0.767 (3) 0.634 (1) 0.098 (2) 2.2 (2) PandemicLLM-70B 0.571 (1) 0.560 (1) 0.638 (1) 0.667 (3) 0.097 (1) 1.4(1) 3-week GRU 0.378 (4) 1.067 (5) 1.576 (5) 0.745 (4) 0.138 (4) 4.4 (4) Bi-LSTM 0.369 (5) 1.255 (7) 1.678 (7) 0.767 (6) 0.151 (7) 6.4 (7) LSTM 0.362 (7) 0.936 (4) 1.057 (4) 0.923 (7) 0.147 (6) 5.6 (5) ARIMA 0.367 (6) 1.308 (8) \\ \\ \\ 7.0 (8) PrevTrend 0.343 (8) 1.201 (6) 1.588 (6) 0.761 (5) 0.140 (5) 6.0 (6) PandemicLLM-7B 0.454 (3) 0.797 (2) 0.899 (1) 0.739 (3) 0.114 (3) 2.4 (3) PandemicLLM-13B 0.464 (2) 0.760 (1) 0.908 (2) 0.695 (2) 0.106 (1) 1.6 (1) PandemicLLM-70B 0.486 (1) 0.805 (3) 0.948 (3) 0.687 (1) 0.110 (2) 2.0(2) Scaling the parameter size of the PandemicLLM could lead to performance improvements. To evaluate the impact of parameter size on performance, the 70B version is included specifically in this section. For the 1-week forecast, PandemicLLMs with 7B and 13B parameters shown similar performance, whereas the 70B version leads in average rank and surpasses the others in all error metrics except Brier Score. Specifically, the 70B model improves Accuracy by 6.4%, MSE by 17.1%, WMSE by 13.8%, and RPS by 3%. For 3-week forecasts, the differences among the PandemicLLMs sizes are less distinct with 13B and 70B achieving both better results compared with 7B. 
While this analysis sheds some light on the possible impact of increased model complexity on model performance, this area needs further research. 11/35 The notable performance enhancement of PandemicLLM underscores its potential for robust pandemic forecasting. A critical aspect of real-world forecasting is the ability to adapt to changing disease dynamics over time. Accordingly, the following analysis investigates how PandemicLLM\u2019s performance varies across different time periods. PandemicLLMs show robust performances across time. Fig.3f to 3g illustrate a comparative analysis of PandemicLLMs\u2019 performance relative to the reference models across the 16-week periods evaluated, where the national hospitalization trend changed from decreasing to stable and then to increasing. Each bar plot represents the WMSE distribution across all states for a specific week, where a lower bar indicates a better performance. This analysis reveals two primary findings: 1) The PandemicLLMs consistently outperform other models, exhibiting the most minor variability over time. 2) The PandemicLLMs show the most significant performance improvement, particularly as the outbreak exhibits a decreasing trend or approaches a peak. The PandemicLLMs\u2019 temporal performance highlights their adaptability to evolving disease dynamics, demonstrating effectiveness across various outbreak stages. Equivalent evaluations with alternative error metrics are included in the Supplementary Information sections 8. 4.4 Trustworthy and robust results High-confidence predictions demand real action. The standard for models designed to aid public health decisionmaking is rigorous, necessitating a reliable model that provides clear guidance on its utility. In response, we show that the confidence level of PandemicLLMs, the highest probability assigned to the predicted category, is a robust indicator of reliability. Specifically, we present the models\u2019 accuracy with respect to different confidence thresholds in Figure 4a and 4b. As we elevate the confidence threshold, wherein only predictions surpassing this criterion are considered, notable enhancements in prediction accuracy are observed for both the 7B and 13B models. For instance, setting the confidence threshold at 0.85, our 13B model attains an accuracy of 73% for 1-week forecasts and 64% for 3-week forecasts. This characteristic of our model is invaluable for public health policy decision-making, offering decision-makers the flexibility to devise strategies grounded in the predictive confidence provided by the model. Dependable forecasts for informed public health decisions. The stakes of making incorrect public health decisions are exceedingly high, such as prematurely easing restrictions when future hospital admissions are rising. A reliable forecasting model is paramount to prevent significant prediction errors, for instance, misidentifying a \"Substantial Increase\" as a \"Substantial Decrease,\" which could severely undermine public health strategies. To evaluate the PandemicLLM\u2019s reliability under different pandemic phases, we analyze the confusion matrices displayed in Fig. 4c to 4f. The findings underscore two pivotal insights: (1) The PandemicLLM exhibits strong predictive capability during the decreasing phases, particularly in identifying substantial decreases. This attribute is invaluable for informing policies on reopening and easing restrictions safely. (2) Errors in forecasting predominantly occur between neighboring categories. 
For instance, when our 13B model forecasts \u201cSubstantial Increase\u201d predictions, the actual situation reflected at least a \u201cModerate Increase\u201d in 74% of 1-week forecasts and in 62% for 3-week forecasts. This level of reliability suggests that PandemicLLM can offer valuable insights for making informed operational decisions across various phases of a pandemic. 12/35 (a) Confidence-Accuracy relation (1-week) (b) Confidence-Accuracy relation (3-week) (c) PandemicLLM-7B confusion matrix (1-week) (d) PandemicLLM-7B confusion matrix (3-week) (e) PandemicLLM-13B confusion matrix (1-week) (f) PandemicLLM-13B confusion matrix (3-week) Figure 4. Trustworthiness for PandemicLLMs. (a, b) The accuracy of 1-week and 3-week predictions varied across various levels of prediction confidence. (c, d) The 1-week and 3-week confusion matrix for PandemicLLM-7B. (e, f) The 1-week and 3-week precision confusion matrix for PandemicLLM-13B. SD: Substantial Decrease, MD: Moderate Decrease, ST: Stable, MI: Moderate Increase, SI: Substantial Increase. 13/35 4.5 Integrating real-time genomic surveillance information for timely response One unique ability of PandemicLLM is that it can incorporate previously unseen information through text reasoning, which allows the integration of timely pandemic-related information. One notable example is the emergence of new variants that pose significant challenges for traditional real-time forecasting. In this section, we present the capability of our model to mitigate such challenges by incorporating timely information on new variants. This analysis focuses on 3-week forecasting, aiming to provide an extended lead time for informed decision-making. BQ.1 Emerging BQ.1\u2019s Official Report by CDC Released Genomic Prompt Added BQ.1 Dominant (a) Proportional distribution of SARS-CoV-2 variants from September 2022 to January 2023. (b) Model performance in terms of weighted MSE (WMSE). (c) Model confidence (the probability of the predicted trend). Figure 5. A comparative analysis with and without the real-time genomic surveillance information. (a) National estimates of weekly proportions of SARS-CoV-2 variants from September, 2022 to January, 2023. (b) Comparison of models\u2019 performance with and without real-time genomic surveillance information (w/o GSI). (c) Prediction confidence of PandemicLLMs across time. The dash lines represent the models without real-time genomic surveillance information. Specifically, the hospitalizations surge during the testing period can be attributed to the rise of the SARS-COV-2 BQ.1 variant starting in October 2022, which became the predominant strain by December 2022, as depicted in Fig. 5a. The initial authoritative report detailing the virological properties of the BQ.1 variant was released on October 27, 202241. Subsequently, within the same week, we incorporated these specific characteristics of the BQ.1 variant (infectiousness, severity, and resistance to immunity), together with the latest variant proportion estimates from the 14/35 CDC42, into the PandemicLLMs (see Extended Data Fig. 9 for example genomic prompts). The results indicate PandemicLLMs respond to the real-time genomic surveillance information. Fig. 
5c displays a comparison of four sets of distinct predictions: (1) Predictions generated by PandemicLLM-7B, (2) Predictions generated by PandemicLLM-7B without genomic surveillance information, (3) Predictions generated by PandemicLLM-13B, and (4) Predictions generated by PandemicLLM-13B without genomic surveillance information. Fig. 5b highlights the models\u2019 prediction confidences change when variant data is included. The findings reveal that the integration of variant information enhances both the performance and confidence of the models, particularly in the case of PandemicLLM-13B. Specifically, introducing variant information in PandemicLLM-13B leads to an average increase in prediction confidence of 20.1% and an average improvement of 28.2% in WMSE. This enhancement in performance is particularly evident during the transition of the dominant variant from BA.5 to the BQ.1 lineage. 5 Discussion Reshaping pandemic forecasting by incorporating all disease-relevant data streams. Traditional disease forecasting models heavily depend on structured numerical data, overlooking the wealth of information hidden within diverse disease-relevant sources. For instance, public health policies and real-time reports on emerging variants offer crucial information in textual formats that traditional models cannot access. To address this need, our proposed model unlocks the full potential of relevant information for pandemic forecasting, by reformulating it as a text reasoning problem and adapting LLMs. Through an AI-human cooperative prompt design, we integrate diverse disease-relevant data streams and formats \u2013 including public health policies (textual), epidemiological time series (sequential), genomic surveillance (textual and sequential), and local demographic and healthcare system data (numerical and categorical) \u2013 into well-structured prompts. This approach inclusively converts all information into text, allowing PandemicLLM to process and reason from data inaccessible to traditional frameworks. As demonstrated in section 4.3, PandemicLLM achieves a significant performance increase of at least 20% over existing models, highlighting the value of incorporating diverse data streams and formats within LLMs for pandemic forecasting. While our current model is a promising first step, we aim to expand its capabilities by including an even broader spectrum of disease-relevant data, such as wastewater-based epidemiology6 and human behavior data43, further enhancing its predictive accuracy and utility. Enhancing LLMs to master time series data and dependencies. The proposed PandemicLLM framework is enhanced by the integrated sequential GRU encoder for hospitalization time series, leading to an accuracy improvement of 17%24% over the model without the encoder (see Extended Data table 3). This design allows for easy future extensions to accommodate other time series encoder architectures (such as LSTMs or attention mechanisms), potentially providing better suitability for different disease types and propagation patterns. The AI-human collaborative prompt design is aimed to enable automation in real-time, allowing human expertise to be strategically focused on selecting the most relevant and timely information from authoritative sources, such as policy updates and variant reports. We envision integrating our framework into ongoing and future efforts for real-time pandemic forecasting, leveraging the collaboration between AI efficiency and human judgment. 
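A minimal PyTorch sketch of the kind of GRU-based sequence encoder described here is given below: it maps a recent hospitalization series to a single vector that can be projected into the LLM's token-embedding space. The dimensions and the linear projection are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class HospitalizationEncoder(nn.Module):
    """Encode a weekly hospitalization-rate series into one pseudo-token embedding."""

    def __init__(self, input_dim=1, hidden_dim=128, llm_embed_dim=4096):
        super().__init__()
        self.gru = nn.GRU(input_dim, hidden_dim, batch_first=True)
        # Project the final GRU state into the LLM's token-embedding space so it can be
        # spliced into the prompt in place of a placeholder token.
        self.proj = nn.Linear(hidden_dim, llm_embed_dim)

    def forward(self, series):              # series: (batch, weeks, 1)
        _, last_hidden = self.gru(series)   # last_hidden: (1, batch, hidden_dim)
        return self.proj(last_hidden.squeeze(0))  # (batch, llm_embed_dim)

encoder = HospitalizationEncoder()
weekly_rates = torch.tensor([[[12.0], [14.5], [16.2], [15.8]]])  # four recent weeks for one state
print(encoder(weekly_rates).shape)  # torch.Size([1, 4096])
```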
Trustworthiness and robustness enhancement for better decision making. For a model to effectively support public health decision-making, it needs to be reliable, trustworthy, and accurate24,28. Moreover, pandemic forecasts are inherently uncertain, and effectively communicating forecast uncertainty to decision-makers and the public is critical for transparency, yet remains challenging31,44. In consideration of these needs, PandemicLLMs were finetuned to predict future pandemic trends by generating probabilities within defined categories, where the probability of the predicted category indicates the models' confidence in their predictions. As shown in Section 4.4, PandemicLLMs' performance improves consistently as confidence increases. This observation is consistent with the capabilities of Flan-PaLM 540B24, an LLM with encoded clinical knowledge, further emphasizing the PandemicLLMs' proficiency in representing uncertainty in their COVID-19-related knowledge. Consequently, the confidence level can effectively function as an indicator of prediction reliability, offering model users a definitive guide to gauge the trustworthiness of the forecasts. Additionally, the confidence level can inform the sufficiency of available information. As evidenced in Fig. 5b, including genomic surveillance information led to simultaneous improvements in both the models' performance and confidence. In light of these demonstrated strengths, our models exhibit reliable performance and provide decision-makers with an incisive understanding of uncertainty.

Generalizing to other disease forecasts across diverse spatial-temporal scales. Section 4.5 demonstrates PandemicLLM's zero-shot ability, allowing adaptation to emerging variants without retraining. Additionally, the 28.2% performance improvement in WMSE with real-time genomic information highlights how PandemicLLMs leverage previously unseen variant information for reasoning. This attribute suggests the potential for generalizing our framework to other diseases with similar transmission mechanisms (such as flu and RSV) and adapting to emergency public health scenarios requiring rapid decision-making under limited data. The design of PandemicLLM allows for scalability and generalizability, not just in terms of the diseases it can predict but also in the granularity of its forecasts. Its success in capturing disease dynamics at the state level offers a promising foundation for extending its forecasts to more localized levels, such as counties or even hospitals. This potential adaptation would provide valuable insights into broader public health needs, enabling targeted forecasts that directly support local decision-making and interventions. As we look toward the future, the insights and methodologies developed through PandemicLLM offer a glimpse of what might be possible for the next generation of public health forecasting models. We envision future models building on this work, incorporating a wider variety of data, integrating AI and human expertise, and tackling an increasingly diverse array of public health challenges.

6 Limitations

A potentially limiting factor of the proposed model is the computational cost of employing LLMs. In efforts to address this concern, we empirically show that freezing the pre-trained LLaMA2 parameters achieves much better efficiency without sacrificing performance.
However, PandemicLLM still demands a substantial amount of computational resources, as the gradients of the LLM parameters are involved in the optimization of the trainable RNNs and input token embeddings. This could be a limiting factor in scenarios where resources are scarce or in situations demanding rapid model deployment. Additionally, LLMs lack theoretical transparency. To address this issue, we explicitly evaluate the empirical reliability and trustworthiness of PandemicLLMs. However, there remains a need to enhance LLMs' interpretability. This is particularly important for public-health-related applications of LLMs, where elucidating the reasoning behind model predictions is pivotal for fostering user confidence and trust.

7 Conclusion

In this study, we introduced a novel real-time LLM-based framework for pandemic forecasting at the population level. The proposed PandemicLLMs extend the LLM architecture by integrating a temporal encoder tailored explicitly for processing epidemiological time series data. PandemicLLMs also incorporate unique disease-relevant data streams, such as textual public health policies and textual virological characteristics, which were previously inaccessible to existing forecasting models. Our findings demonstrate that PandemicLLMs outperform existing frameworks, offering robust and trustworthy predictions even for previously unseen scenarios, which can match the critical needs of public health policymakers. Through this work, we shed light on the potential of LLMs to improve strategies for pandemic response, envisioning a future where AI strengthens the resilience and efficiency of global health systems during public health crises.

8 Methods

8.1 PandemicLLM datasets

The proposed PandemicLLM is fine-tuned using multiple disparate categories of data, including spatial, epidemiological time series, genomic surveillance, and public health policy data. Our data covers all 50 states in the United States, ensuring a comprehensive nationwide scope for our study. All the spatial data are available at the state resolution, while all the time-varying data are available at the weekly resolution. In this section, we comprehensively discuss the data sources and the pre-processing steps.

8.1.1 Epidemiological time series data

Hospitalizations. Our study focuses on refining the targets used for COVID-19 forecasting models. The choice of hospitalization data over cases and deaths data stems from its capability to reflect the disease's spread and overall harmful impacts on healthcare systems, making it a more comprehensive measure. Shifting from the conventional reliance on numerical targets, our study embraces categorical targets due to two main considerations. First, the unreliability of data reporting poses a significant challenge45. The inconsistencies in reported cases, deaths, and hospitalizations undermine the effectiveness of traditional disease forecasting models that rely heavily on these metrics. Second, the experience with the COVID-19 pandemic has demonstrated that the performance of numerical target predictions often falls short of expectations13,17. In light of these limitations, we advocate for the use of hospitalization trend categories (HTC) as our predictive targets instead of relying solely on numerical values. These categories, crafted from weekly COVID-19 hospitalization time series, offer a more robust indicator of both the disease spread and its impact on healthcare systems.
We categorized the hospitalization numbers into five distinct trends as follows:

HTC_1^{i,t} = \begin{cases} \text{Substantial Increase} & \text{if } HT_1^{i,t} > 3 \\ \text{Moderate Increase} & \text{if } 3 > HT_1^{i,t} > 1 \\ \text{Stable} & \text{if } 1 > HT_1^{i,t} > -1 \\ \text{Moderate Decrease} & \text{if } -1 > HT_1^{i,t} > -3 \\ \text{Substantial Decrease} & \text{if } -3 > HT_1^{i,t} \end{cases} (1)

HTC_3^{i,t} = \begin{cases} \text{Substantial Increase} & \text{if } HT_3^{i,t} > 4.5 \\ \text{Moderate Increase} & \text{if } 4.5 > HT_3^{i,t} > 1.5 \\ \text{Stable} & \text{if } 1.5 > HT_3^{i,t} > -1.5 \\ \text{Moderate Decrease} & \text{if } -1.5 > HT_3^{i,t} > -4.5 \\ \text{Substantial Decrease} & \text{if } -4.5 > HT_3^{i,t} \end{cases} (2)

where HTC_1^{i,t} and HTC_3^{i,t} represent the 1- and 3-week-ahead hospitalization trend categories for state i at week t. The 1- and 3-week hospitalization trends for state i at week t (HT_1^{i,t} and HT_3^{i,t}) are defined as follows:

HT_1^{i,t} = HR^{i,t} - \overline{HR}^{i,t-1} (3)

HT_3^{i,t} = HR^{i,t} - \overline{HR}^{i,t-3} (4)

where HR^{i,t} represents the hospitalization rate per one hundred thousand people for state i at week t, and \overline{HR}^{i,t} indicates the smoothed 3-week average hospitalization rate per one hundred thousand people for state i at week t. The decision to define hospitalization trends (HT) by contrasting current reported hospitalization values with lagged, smoothed data was made to diminish the influence of data irregularities and reporting errors. This approach is intended to yield a robust and more representative target, thereby providing a more reliable reflection of the actual progression of the pandemic. While the HTC is utilized as the target, the HT and the HR are employed as input epidemiological time series data. The state-level COVID-19 hospitalization data were collected from the U.S. Department of Health & Human Services (HHS)46.

Reported cases. Our study utilized the state-level, daily reported COVID-19 cases from the Johns Hopkins University Center for Systems Science and Engineering (CSSE)47. The raw case data are aggregated at the weekly level as input data C_i^t, where C_i^t represents the number of reported cases for state i at week t.

Previous infections. Numerous research findings have highlighted the protective role of prior infections in guarding against reinfection and severe outcomes of COVID-1948,49. In order to incorporate the impact of immunity acquired from previous infections in mitigating severe disease upon subsequent infections, we created a variable that represents the total number of reported infections over a three-month period. The mathematical formulation of previous infection (PI) is defined as follows:

PI_i^t = \frac{\sum_{j=t-16}^{t-4} C_i^j}{pop_i} (5)

where PI_i^t represents the cumulative reported infection rate for state i from 16 to 4 weeks prior to week t, C_i^j is the number of reported cases for state i at week j, and pop_i is the population of state i. The numerator aggregates C_i^j over the period spanning weeks t-16 to t-4 preceding week t.
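As an illustration of Eqs. (1)-(5), the following is a minimal sketch of how the HTC targets and the previous-infection feature could be derived from weekly state-level series; the 3-week smoothing window, boundary handling, and all names are illustrative assumptions rather than the authors' released preprocessing code.

```python
import numpy as np
import pandas as pd

# Minimal sketch of the target construction in Eqs. (1)-(5). `hr` is a weekly
# hospitalization-rate series (per 100k) for one state and `cases` a weekly
# series of reported case counts; names and smoothing choices are assumptions.

def label_trend(ht: float, horizon: int) -> str:
    """Map a hospitalization trend value HT to one of the five HTC classes."""
    hi, lo = (3.0, 1.0) if horizon == 1 else (4.5, 1.5)
    if ht > hi:
        return "Substantial Increase"
    if ht > lo:
        return "Moderate Increase"
    if ht > -lo:
        return "Stable"
    if ht > -hi:
        return "Moderate Decrease"
    return "Substantial Decrease"

def make_htc(hr: pd.Series, horizon: int) -> pd.Series:
    """HT = current HR minus the lagged, 3-week-smoothed HR (Eqs. (3)-(4)); HTC is its label."""
    hr_smooth = hr.rolling(window=3, min_periods=1).mean()
    ht = hr - hr_smooth.shift(horizon)
    return ht.apply(lambda v: label_trend(v, horizon) if pd.notna(v) else np.nan)

def previous_infection_rate(cases: pd.Series, population: float, t: int) -> float:
    """PI_t: reported cases summed over weeks t-16 .. t-4, normalized by population (Eq. (5))."""
    return cases.iloc[max(t - 16, 0): t - 3].sum() / population

# Toy example for a single state.
hr = pd.Series([10.0, 11.5, 15.2, 14.0, 12.1, 9.8])
print(make_htc(hr, horizon=1).tolist())
print(previous_infection_rate(pd.Series([1200.0] * 20), population=5e6, t=19))
```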
Vaccination. Vaccine-induced immunity is widely regarded as a crucial approach to reducing the harmful impact of COVID-19. Our model incorporated state-level vaccination data from the US CDC Vaccine Tracker50, leveraging this information to enhance our analysis. We included state-level cumulative vaccination rates normalized by population. The vaccination data covers three types of time series: the partial vaccination rate, the completed primary series rate, and the booster vaccination rate. Our approach offers a detailed view of the vaccination landscape across different states by including these distinct metrics.

8.1.2 Spatial data

Demographics. Data on the state population, including age group and race-related data, was sourced from the American Community Survey by the US Census Bureau51. Recognizing that COVID-19 affects different racial and age groups unevenly, as established by existing studies52,53, we specifically focused on the population aged over 65 and vulnerable racial groups. The demographic information utilized in our research is derived from the 2022 National Census Survey.

Healthcare systems. In our study, we incorporated healthcare system scores from the Commonwealth Fund54, an annually updated and comprehensive dataset that evaluates the performance of healthcare systems in every U.S. state. This score on State Health System Performance offers a unique perspective on how state healthcare systems have coped with and managed the challenges posed by the COVID-19 pandemic. For each state, this resource provides not only an overall healthcare system performance ranking but also detailed rankings in specific categories such as COVID-19 response, Access and affordability, Prevention and treatment, Avoidable Hospital Use and Cost, Healthy Lives, Income Disparity, and Racial and Ethnic Equity.

2020 presidential election results. Prior research has identified a notable association between political affiliation and COVID-19 health outcomes at the state level53. In light of this, we collected data for the 2020 U.S. Presidential election results from the Federal Elections Commission55. Based on the state-level voting results, we labeled each state as either Democrat or Republican.

8.1.3 Public health policy data.

The impact of the COVID-19 epidemic is influenced by the stringency and timing of government-implemented policies56. These policies include measures such as closing schools, canceling public events, and protecting elderly and vulnerable populations. We collected policy data from the Oxford COVID-19 Government Response Tracker57. However, in contrast to the majority of studies that utilize the generated policy index, we integrate the textual descriptions of selected policies as input for our prompt design, providing a unique approach to analyzing policy impact. In order to effectively assess the impact of policies on the spread of the disease and the strain on healthcare systems, we specifically chose policies from two categories: 'C' for containment and closure policies and 'H' for health system policies. The complete list of policies included in our study from these categories is provided in Extended Data Table 2. Our prompt utilized the summary of the stringency levels for each policy as input. These summaries are derived directly from the original descriptions provided by the data source. Due to restrictions in token size, we have limited our input to the specified policy categories despite the potential impact of other policies on the pandemic's dynamics.

8.1.4 Genomic surveillance data.

In the constantly changing COVID-19 pandemic, the emergence of new variants plays a pivotal role in shaping pandemic trends and potential new waves.
Genomic surveillance data is indispensable in this context, as it enables the timely detection and tracking of these variants, providing essential insights into their characteristics and spread58. Existing forecasting models that attempt to incorporate genomic surveillance data frequently encounter limitations due to reporting delays and the challenge of accurately encoding the virological characteristics of different variants7,59. PandemicLLM offers a novel opportunity to include this data more effectively. Through zero-shot inference, LLMs can process and interpret previously unseen variant data without training, providing a fresh and timely perspective in dealing with the emergence of new variants. In our study, we utilized two primary types of genomic surveillance data to forecast the COVID-19 HTC:

• Virological characteristics: Our study integrated official updates from authoritative organizations, including the World Health Organization (WHO)60, the Centers for Disease Control and Prevention (CDC)61, and the European Centre for Disease Prevention and Control (ECDC)62. These updates, presented in text format, offer information on the virological characteristics of newly emerging variants. We then summarized the information from three aspects: 1) infectiousness, 2) severity, and 3) resistance to immunity. More detailed data sources are documented in Supplementary Information Section 1.

• Weighted estimates of variant proportions: We also included the weighted estimates of variant proportions from the CDC42 in our analysis. Weighted estimates refer to the proportions of circulating variants derived from empirical genomic sequencing data that has been observed and recorded. This data is vital for assessing the prevalence and distribution of these variants, offering an overview of the pandemic's status and aiding in more accurate forecasting.

8.2 Preliminaries and notations

Large language models. In recent years, Large Language Models (LLMs)32,63–65 have catalyzed a pivotal transformation in the field of natural language processing. These models are transformer decoders66 autoregressively pre-trained on extensive text corpora such as GitHub, Wikipedia, CommonCrawl, and BooksCorpus. Consequently, LLMs enjoy a broad knowledge base and demonstrate proficiency across a spectrum of tasks, ranging from conversational agents32 to complex reasoning67 and decision-making applications68,69. Their performance often rivals, and occasionally surpasses, that of humans in these domains65. Despite the remarkable performance of LLMs, the state-of-the-art LLMs are often closed-source, e.g., ChatGPT32 and GPT-465, hampering the development and application of LLMs for the research community. To bridge this gap, LLaMA70 and LLaMA233 were proposed. They are open-source LLMs trained on publicly available datasets and achieve comparable or better performance than GPT-3. In this study, we used LLaMA2 as the backbone for its robust performance in language understanding.

Autoregressive language modeling. Most existing LLMs are trained in an autoregressive manner, where the model predicts the next token in a text sequence based on its predecessors. This process begins with tokenization, e.g., Byte Pair Encoding (BPE)71, where a tokenizer segments a string into discrete tokens.
Then, the tokenized sequence for data sample i, denoted as T_i, is reconstructed autoregressively:

p_\theta(T_i) = \prod_{j=1}^{|T_i|} p_\theta(t_j^{(i)} \mid t_1^{(i)}, \cdots, t_{j-1}^{(i)}), (6)

where p_\theta is the LLM and t_j^{(i)} denotes the j-th token in T_i. By maximizing the likelihood p_\theta(T) = \prod_{i=1}^{N} p_\theta(T_i) over all data samples T_i \in T, the LLM's parameters are learned.
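The factorized likelihood in Eq. (6) corresponds to the standard next-token objective; the following is a minimal, hedged sketch of that computation, where `logits` and `token_ids` are illustrative stand-ins for an LLM's output scores and a tokenized sequence, not the authors' training code.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of Eq. (6): the sequence log-likelihood is the sum of
# next-token log-probabilities under the model.

def sequence_log_likelihood(logits: torch.Tensor, token_ids: torch.Tensor) -> torch.Tensor:
    """logits: (seq_len, vocab_size); token_ids: (seq_len,) integer token ids."""
    log_probs = F.log_softmax(logits[:-1], dim=-1)                # p_theta(t_j | t_<j)
    picked = log_probs.gather(1, token_ids[1:, None]).squeeze(1)  # log-prob of each observed token
    return picked.sum()                                           # log p_theta(T_i)

# Toy usage: training maximizes this quantity (equivalently minimizes its negative).
logits = torch.randn(12, 32000)
token_ids = torch.randint(0, 32000, (12,))
print(sequence_log_likelihood(logits, token_ids).item())
```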
Challenges of LLMs in encoding disease-related data. Despite the remarkable abilities of LLMs, pandemic forecasting integrates various data sources, including demographic profiles, public health policy, and epidemiological time series. These diverse sources constitute a composite, multi-modal input for the pandemic prediction model. However, conventional LLMs primarily handle discrete text inputs and cannot directly leverage such multi-modal data. Besides, for continuous sequential data, tokenizers like Byte Pair Encoding (BPE) segment continuous sequences into discrete tokens, breaking numbers into awkward chunks that make learning basic numerical operations challenging30. To leverage the cross-domain knowledge of LLMs while preserving sequential information effectively, it is essential to innovate how multi-modal pandemic data is assimilated as input for LLMs.

Notations. Before formally introducing PandemicLLM, we establish the following notation.

• T_i denotes the prompt text of the i-th item, including task description, spatial, epidemiological time series, genomic surveillance, and public health policy information. An example can be found in Figure 2.
• X_i denotes the time series data of the i-th item in the dataset, e.g., hospitalization time-series data.
• y_i denotes the token for the Hospitalization Trend Category (HTC) of the i-th item in the dataset, representing one of the following trends: {<Substantial Decrease>, <Moderate Decrease>, <Stable>, <Moderate Increase>, <Substantial Increase>}.
• f_θ denotes an LLM with parameters θ. θ_in and θ_out denote the parameters of the input and output layers.
• g_φ denotes the RNN encoder with parameters φ.
• H_i denotes the embedding matrix of the text tokens T_i that serves as the input to the transformer layers.
• z_i denotes the embedding vector of the time series data X_i.

8.3 Methodology

8.3.1 Overview of PandemicLLM

We propose to formalize the pandemic prediction problem as a multi-modality ordinal classification problem: for a sample i (e.g., New York state at a given week), given the prompt with multi-modality text information, denoted as T_i, and the sequential data X_i, the goal is to predict the Hospitalization Trend Category y_i. Specifically, as illustrated in Extended Data Fig. 7, PandemicLLM models p(y_i|T_i, X_i) using two models: an RNN-based sequential encoder g_φ that projects sequential data to representations in the text space, and a transformer-based LLM f_θ that forecasts the categorical target distribution using the encoded text representations containing multi-modality information and the encoded time series information. In this way, PandemicLLM seamlessly integrates text reasoning and numerical time series learning in one unified framework.

8.3.2 Multi-modality data textualization

PandemicLLM utilizes an LLM to address the challenge of processing multi-modal disease-relevant data. For each item in the dataset, we construct a composite prompt to effectively parse and integrate the multi-modality data. An example is shown in Extended Data Fig. 10:

• Spatial data textualization: In spatial data textualization, we processed numerical state demographic data and healthcare system statistics. These data for all 50 U.S. states were numerically ranked and subsequently transformed into categorical descriptions reflecting their relative positions. States ranking in the top five were labeled "One of the best." Those between 6th and 20th were categorized as "Higher than the national average," while rankings from 21st to 30th were described as "Close to the national average." States falling between 31st and 45th were labeled "Lower than the national average," and those in the bottom five were described as "One of the lowest." Regarding presidential election outcomes, each state was characterized based on its voting percentage, being labeled as either predominantly voting for Democrats or Republicans (see Supplementary Information Section 2).

• Epidemiological time series data textualization: In the phase of textualizing epidemiological time series data, an AI model (ChatGPT-3.532) is employed to convert sequential data into narrative summaries. This method leverages the sophisticated capabilities of AI to transform numerical sequences into detailed textual narratives, thereby reducing the need for laborious manual annotation. For instance, the AI-facilitated textualization process for the weekly vaccination rate time series is illustrated in Extended Data Figure 6.

• Epidemiological time series representation learning: As time series play a critical role in pandemic forecasting, we leverage an additional RNN encoder to effectively distill the useful information into the LLM's input space. Specifically, the prompt for time-series data is initialized by a special token <time-series-special-token> whose embedding z_i is encoded by an RNN encoder g_φ:

z_i = g_\phi(X_i) \in R^d, (7)

where X_i denotes the time-series data, e.g., the hospitalization time series, for data sample i. As Extended Data Table 3 shows, the RNN encoder significantly improves the model's performance. We choose a GRU as the implementation of g_φ in our experiments (see the sketch after this list).

• Policy textualization: Our study incorporated six policy categories: School and Workplace Closures, Public Events, Gathering Restrictions, Facial Coverings, and Elderly Protection. We designed the prompt's weekly policy stringency level descriptions, highlighting policy shifts (see Supplementary Information Section 2). For instance, if New York shifted its school closure policy from "recommended closures" to "No measures" while maintaining "No restriction" on gatherings, the description would be: "There have been changes in school policy moving from recommended closures to no restrictions while gathering policy remains unrestricted."

• Genomic surveillance data textualization: The genomic surveillance data consisted of two main categories: reports about new variants from authoritative sources and sequential data derived from the CDC's weighted estimates of variant proportions. Key information, including the variant's relative transmissibility, severity, and impact on immunity, was incorporated directly into the prompt design. The weighted variant proportion estimates were processed using the same method outlined in the temporal data textualization section (for further details, refer to Supplementary Information Sections 1 and 2).
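To make the representation-learning step above concrete, the following is a minimal sketch of a GRU encoder g_φ and of splicing its output z_i into the token embeddings in place of the <time-series-special-token> (Eq. (7) above and Eqs. (8)-(9) below); layer sizes, tensor shapes, and names are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

# Minimal sketch of the sequential encoder and the special-token substitution.

class TimeSeriesEncoder(nn.Module):
    def __init__(self, n_features: int, hidden_size: int, llm_dim: int):
        super().__init__()
        self.gru = nn.GRU(input_size=n_features, hidden_size=hidden_size, batch_first=True)
        self.proj = nn.Linear(hidden_size, llm_dim)  # project into the LLM embedding space

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """x: (batch, weeks, n_features) epidemiological series X_i -> z_i: (batch, llm_dim)."""
        _, h_last = self.gru(x)
        return self.proj(h_last[-1])

def splice_time_series(token_embeds: torch.Tensor, z: torch.Tensor, special_idx: int) -> torch.Tensor:
    """Replace the embedding at the special token's position with z_i, as in Eq. (9)."""
    fused = token_embeds.clone()
    fused[:, special_idx, :] = z
    return fused

# Toy sizes: 8 weeks of 3 input series (e.g., HR, HT, reported cases), LLM dim 4096.
encoder = TimeSeriesEncoder(n_features=3, hidden_size=64, llm_dim=4096)
z = encoder(torch.randn(2, 8, 3))                    # (2, 4096)
H = torch.randn(2, 120, 4096)                        # token embeddings H_i for a 120-token prompt
H_fused = splice_time_series(H, z, special_idx=57)   # input to the frozen transformer
```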
Now we describe how we organize the multi-modality data into the input space of an LLM: we first provide the essential task information at the beginning of the prompt, and then use expert knowledge combined with AI assistance to describe the spatial and temporal information for pandemic forecasting. The sequential data then follows and is further encoded by the RNN encoder. The input to the LLM transformer decoder is thus a mixture of text information and encoded sequential information. Specifically, for data sample i, the tokenized text T_i, containing both spatial and temporal information, is encoded into the text embeddings H_i:

H_i = f_{\theta_{in}}(T_i) \in R^{|T_i| \times d}, (8)

where f_{\theta_{in}} is the input embedding layer of the LLM, |T_i| denotes the number of tokens in T_i, and d is the embedding dimension of the LLM. Then, the encoded information z_i is used to replace the embedding of the time-series special token, i.e., <time-series-special-token>, in H_i to generate the final representation for the transformer's input H'_i:

H'_i[j,:] = \begin{cases} H_i[j,:] & \text{if } j \neq s_i \\ z_i & \text{if } j = s_i \end{cases} (9)

where s_i denotes the index of the time series special token. In this way, PandemicLLM seamlessly fuses the sequential information and textual information into the input space of an LLM, enabling it to perform reasoning with both types of information.

8.3.3 LLM for pandemic prediction

With H'_i encoding both textual and sequential information, PandemicLLM leverages an LLM (i.e., a transformer decoder) to perform pandemic prediction as a text generation problem. As introduced in Section 8.2, an autoregressive LLM generates one token at a time conditioned on the previously generated tokens. To make a prediction, we extend the original vocabulary of the LLM with "class tokens": {<Substantial Decrease>, <Moderate Decrease>, <Stable>, <Moderate Increase>, <Substantial Increase>}, which represent the trend of future hospitalizations. Then, we use the predicted distribution over the "class tokens" after the output prompt "The answer is" as the target distribution for pandemic forecasting. In this way, PandemicLLM formulates pandemic forecasting as a text reasoning problem using an LLM. The model is optimized by the autoregressive loss defined in Eq. 6, maximizing the likelihood of the ground truth text. For better scalability and efficiency, we freeze the transformer parameters of the LLMs72 and only train the vocabulary embeddings f_{\theta_{in}} and the output prediction layer f_{\theta_{out}}, as well as the GRU encoders g_φ.

8.3.4 Evaluation

We comprehensively evaluate our model using five error metrics: a) Accuracy, b) Mean Square Error (MSE), c) Weighted Mean Square Error (WMSE), d) Brier Score, and e) Rank Probability Score (RPS). Accuracy offers a direct method to assess the performance of a model's classification, defined by the following equation:

Accuracy = \frac{\sum_{i=1}^{N} [y_i = \hat{y}_i]}{N}, (10)

where y_i represents the actual class, \hat{y}_i denotes the predicted class, and N signifies the total number of samples. However, despite being widely used in traditional classification problems, accuracy treats all errors equally and neglects the inherent ordering between the HTC classes. For example, if the ground truth trend is "Substantial Increase", accuracy will treat the prediction errors of "Moderate Increase" and "Substantial Decrease" equally.

To model the order information of the HTC classes, we map the classes {Substantial Decrease, Moderate Decrease, Stable, Moderate Increase, Substantial Increase} onto the numeric scale {1, 2, 3, 4, 5}, and use the mapped values to compute numerical error metrics. To start with, the mean squared error (MSE) is a promising way to evaluate ordinal classification73, defined as:

MSE = \frac{1}{N} \sum_{i=1}^{N} (\tilde{y}_i - \bar{y}_i)^2, (11)

where \tilde{y}_i and \bar{y}_i denote the numerical values of the ground truth class and the predicted class. To further evaluate the predicted distribution, we use the weighted MSE (WMSE), which introduces a probability weighting into the MSE, defined as:

WMSE = \frac{1}{N} \sum_{i=1}^{N} \sum_{k=1}^{K} P(\bar{y}_i = k)(k - \tilde{y}_i)^2, (12)

where K denotes the number of classes (K = 5 for HTC prediction) and P(\bar{y}_i = k) denotes the predicted probability of the k-th class for the i-th data sample. The Brier Score is used to measure the accuracy of probabilistic predictions. It is calculated as the mean squared difference between the predicted probabilities assigned to the possible outcomes and the actual outcome:

Brier Score = \frac{1}{N} \sum_{i=1}^{N} \sum_{k=1}^{K} (P(\bar{y}_i = k) - o_k^{(i)})^2, (13)

where P(\bar{y}_i = k) is the forecast probability for the i-th item in the k-th class, and o_k^{(i)} is the one-hot encoded actual outcome, with a value of 1 for the ground truth class and 0 for all other classes. We also utilize the Rank Probability Score (RPS)35, which is particularly useful for evaluating the accuracy of the entire predicted probability distribution across categorical outcomes. The RPS is defined as:

RPS = \frac{1}{N} \sum_{i=1}^{N} \sum_{k=1}^{K} (q_k^{(i)} - \bar{q}_k^{(i)})^2, (14)

where q_k^{(i)} and \bar{q}_k^{(i)} are the ground truth and predicted cumulative probability distributions for the i-th case in the k-th class. The ground truth cumulative probability is defined as a step function that increases from 0 to 1 at the class of the actual observed outcome. The predicted cumulative probability distribution is defined as the sum of the predicted probabilities for all classes up to and including the k-th class.
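A minimal sketch of the ordinal metrics in Eqs. (10)-(14), assuming predicted class probabilities are stored with columns ordered from Substantial Decrease to Substantial Increase and mapped to the numeric scale 1-5; this is an illustrative NumPy implementation, not the authors' evaluation code.

```python
import numpy as np

# Minimal sketch of the five evaluation metrics over the ordinal HTC scale.

def ordinal_metrics(probs: np.ndarray, y_true: np.ndarray) -> dict:
    n, k = probs.shape
    classes = np.arange(1, k + 1)
    y_pred = probs.argmax(axis=1) + 1                                   # predicted class value
    one_hot = (classes[None, :] == y_true[:, None]).astype(float)

    acc = float(np.mean(y_pred == y_true))                              # Eq. (10)
    mse = float(np.mean((y_true - y_pred) ** 2))                        # Eq. (11)
    wmse = float(np.mean(np.sum(probs * (classes[None, :] - y_true[:, None]) ** 2, axis=1)))  # Eq. (12)
    brier = float(np.mean(np.sum((probs - one_hot) ** 2, axis=1)))      # Eq. (13)
    rps = float(np.mean(np.sum((np.cumsum(one_hot, axis=1) - np.cumsum(probs, axis=1)) ** 2, axis=1)))  # Eq. (14)
    return {"accuracy": acc, "mse": mse, "wmse": wmse, "brier": brier, "rps": rps}

# Toy example: two forecasts over the five HTC classes.
probs = np.array([[0.05, 0.10, 0.20, 0.45, 0.20],
                  [0.60, 0.25, 0.10, 0.04, 0.01]])
print(ordinal_metrics(probs, y_true=np.array([4, 2])))
```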
9 Data availability

All data utilized in this study derive from publicly accessible sources. Details of each raw data source and of the data processing are described in the Methods section. The processed data are available at https://github.com/miemieyanga/PandemicLLM.

10 Code availability

Code is publicly accessible at https://github.com/miemieyanga/PandemicLLM." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.09228v1", |
| "title": "A Survey on Integration of Large Language Models with Intelligent Robots", |
| "abstract": "In recent years, the integration of large language models (LLMs) has\nrevolutionized the field of robotics, enabling robots to communicate,\nunderstand, and reason with human-like proficiency. This paper explores the\nmultifaceted impact of LLMs on robotics, addressing key challenges and\nopportunities for leveraging these models across various domains. By\ncategorizing and analyzing LLM applications within core robotics elements --\ncommunication, perception, planning, and control -- we aim to provide\nactionable insights for researchers seeking to integrate LLMs into their\nrobotic systems. Our investigation focuses on LLMs developed post-GPT-3.5,\nprimarily in text-based modalities while also considering multimodal approaches\nfor perception and control. We offer comprehensive guidelines and examples for\nprompt engineering, facilitating beginners' access to LLM-based robotics\nsolutions. Through tutorial-level examples and structured prompt construction,\nwe illustrate how LLM-guided enhancements can be seamlessly integrated into\nrobotics applications. This survey serves as a roadmap for researchers\nnavigating the evolving landscape of LLM-driven robotics, offering a\ncomprehensive overview and practical guidance for harnessing the power of\nlanguage models in robotics development.", |
| "authors": "Yeseung Kim, Dohyun Kim, Jieun Choi, Jisang Park, Nayoung Oh, Daehyung Park", |
| "published": "2024-04-14", |
| "updated": "2024-04-14", |
| "primary_cat": "cs.RO", |
| "cats": [ |
| "cs.RO" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM AND Reasoning", |
| "gt": "Over the last decade, we have witnessed remark- able progress in the field of robotics in applying language models (LMs). This progress includes not only human-like communication but also understanding and reasoning capabilities of robots thereby significantly improving their effectiveness across various tasks, from household chores to industrial operations [52, 105]. In the early stage of work, the success stemmed from statistical model analyzing and predicting words in linguis- tic expressions. These models enable robots to interpret human commands [110, 121], understand contexts [2, 4], represent the world [50], and interact with humans [134], albeit with a limited depth of understanding. Then, the adoption of Transformer architecture with self-attention mech- anisms [140], particularly pre-trained LMs such as BERT [26], has elevated the capability of captur- ing complex patterns while fine-tuning models for specific tasks. However, the performance of these models is often contingent upon limited datasets, 1 arXiv:2404.09228v1 [cs.RO] 14 Apr 2024 constraining their ability to grasp deeper contex- tual understanding and generalize across diverse scenarios. With the advancement of large language mod- els (LLMs), language-based robotics introduce innovative changes across various domains such as information retrieval, reasoning tasks, adap- tation to environments, continuous learning and improvements, etc [61, 64]. These LLMs, charac- terized by their vast parameter sizes and training on internet-scale datasets, offer zero- and few- shot learning capabilities for downstream tasks without requiring additional parameter updates. These prominent advancements come from emer- gent abilities, defined as \u201cthe abilities that are not present in small models but arise in large models\u201d in the literature [147]. The abilities have signif- icantly enhanced robots\u2019 performance in under- standing, inferring, and responding to open-set instructions by leveraging extensive common-sense knowledge [8]. Furthermore, prompt creation tech- nologies, called prompt engineering, have enabled LLMs to incorporate richer contextual informa- tion through free-form language descriptions or interactive dialogues, facilitating generalized rea- soning [148]. The introduction of in-context learn- ing abilities [8] leads LLMs to generate outputs in expected formats, such as JSON, YAML, or PDDL, or even code, based on provided instructions or demonstrations in prompts [42, 87]. Recent LLMs, such as GPT-4, have further expanded capabilities by integrating with external robotics tools such as planners or translators [90]. Despite the diverse capabilities of LLMs, their utilization faces several challenges [69]. Firstly, LLMs often generate inaccurate or unexpected responses. As the safety of robot execution is the most important deployment factor, LLM-based robotic applications require filtering and correction mechanisms to ensure safety. Second, the emer- gent abilities, such as in-context learning, are not predictable and consistent yet [19]. Even minor alterations to input text may lead to unpredictable changes in response. Third, a well-designed prompt enables robots to effectively leverage the abilities of LLMs but there is a lack of systematic guidelines supporting key components of robotic systems, hindering seamless integration [35, 54, 164]. There- fore, we need to investigate component-wise LLM engagements in robotics toward an understanding of limitations and safety. 
Currently, various surveys have started exploring the intersection of LLMs and robotics [141, 164], primarily focusing on application or interaction dimensions of LLM-based robotics. However, there remains a gap in providing holistic reviews and actionable insights for integrating LLMs across key elements of robotic systems, including communication, perception, planning, and control. Additionally, researchers explore the wide field of pre-trained large-capacity models, called foundation models, seeking generalization capabilities across multimodal transformer-based models [35, 54]. However, this expansive field spans a wide spectrum of robotics and diverse methodologies, leaving emerging researchers without in-depth reviews and guidelines. In this paper, as shown in Fig. 1, we aim to categorize and analyze how LLMs could enhance core elements of robotics systems and how we can guide emerging researchers in integrating LLMs within each domain, encompassing communication, perception, planning, and control, toward the development of intelligent robots. We structure this paper following three key questions: • Q1: How are LLMs being utilized in each robotics domain? • Q2: How can researchers overcome the integration limitations of LLMs? • Q3: What basic prompt structures are required to produce minimal functionality in each domain? To address these questions, we focus on LLMs developed after the introduction of GPT-3.5 [106]. We primarily consider text-based modalities but also review multimodalities for perception and control areas. However, for an in-depth review, we limit our investigation to LLMs rather than foundation models. In addition, we provide comprehensive guidelines and examples for prompt engineering, aimed at enabling beginners to access LLM-based robotics solutions. Our tutorial-level examples illustrate how fundamental functionalities of robotic components can be augmented or replaced by introducing four types of exemplary prompts: a conversational prompt for interactive grounding, a directive prompt for scene-graph generation, a planning prompt for few-shot planning, and a code-generation prompt for reward generation. By providing rules and tips for prompt construction, we outline the process of generating well-designed prompts to yield outputs in the desired format. These principles ensure effective LLM-guided enhancements in robotics applications, without parameter adjustments. The remainder of this paper is organized as follows. Section 2 outlines the historical background of LMs and LLMs in robotics. Section 3 reviews how LLMs empower robots to communicate via language understanding and generation. Section 4 investigates how LLMs perceive various sensor modalities and advance sensing behaviors. Section 5 and Section 6 organize LLM-based planning and control studies, respectively. In Section 7, we provide comprehensive guidelines for prompt engineering as a starting point for LLM integration in robotics. Finally, Section 8 summarizes this survey.", |
| "main_content": "We briefly review language models in two parts, before and after the advent of LLM. Unlike the overview in the previous literature [164], we limit the period for LM, which is before the advent of LLM, to the period of neural language models when recurrent neural networks (RNNs) [33] started to be used and models such as GPT2 [115] with Transformers were used. We also provide a brief explanation of LLMs with related terminologies and techniques. 2.1 Language Models in Robotics Research in LM-based robotics has primarily explored neural language models, used for sequential data processing. In the early stage, RNNbased LMs [23, 46] transform linguistic commands into a sequence of actions [6, 100] or formal languages [40] by leveraging RNN\u2019s sequence-tosequence modeling capabilities. Using an RNN as a linguistic encoder, LMs also translate textual inputs into linguistic features, which could then be mapped to visual features for referring object identification [121, 125]. Nonetheless, the longterm dependency problem in RNN restricts their application scope. Then, the introduction of the Transformer architecture [140] overcomes these limitations, enabling its application in tasks requiring long-range comprehension, such as vision-language navigation [14, 16]. Prior approaches heavily depend on training datasets, lowering generalization abilities. However, the scalability of Transformer-based models and self-supervised learning techniques, such as masked language modeling, has led to the development of internet-scale pre-trained models, such as BERT [26] or GPT-2 [115]. These pre-trained models demonstrate general linguistic understanding abilities. This advancement has allowed for finetuning these models for specific tasks [74, 75, 124]. Moreover, the utilization of pre-trained multimodal language models, such as CLIP [116], facilitates the exploitation of aligned features across modalities, such as vision and language, enriching the application in robotic studies [76, 126]. 2.2 Large Language Model in Robotics Recent advancements in LLMs, such as GPT3 [8], GPT-4 [107], LLaMA [136], Llama 2 [137], and Gemini [39], demonstrate notable improvements in understanding, contextual awareness, generalization capabilities, and knowledge richness, surpassing earlier language models. These improvements are from their training on vast datasets with billion-scale parameters, enabling them to capture intricate data patterns. Additionally, advanced learning strategies, such as reinforcement learning from human feedback, have been developed to align the behaviors of LLMs with human values or preferences [108]. Alternatively, prompt engineering, leveraging in-context learning (ICL) [8], represents another significant advancement, allowing LLMs to directly learn from prompts without additional training. The effectiveness of the prompt engineering hinges on the prompt\u2019s design and quality, with enhancements including detailed task descriptions, few-shot examples, or more model-digestible formats (e.g., ### as a stop symbol [167]). Furthermore, the chain-of-thought (CoT) prompting method [148] emerges, incorporating intermediate reasoning steps into prompts, resulting in significant enhancement on complex reasoning tasks. Ongoing research endeavors to further improve the reasoning abilities of LLMs, with developments such as tree-of-thought [160] or graph-of-thought [5]. On the other hand, various LLM-based robotics studies have tried directly training LLMs [7, 170]. 
3 LLMs in Robots Communication Language Understanding Interpretation LM-Nav [122], VLMaps [56], LINGO-Space [76], LLM+P [90], Xie et al. [154], Lang2LTL [92], Safety Chip [159], AutoTAMP [18], Guan et al. [42] Grounding SayPlan [118], ConceptGraphs [41], PaLM-E [31], 3D-LLM [47], LiDAR-LLM [158], AffordanceLLM [114], LLM-Grounder [157], Matcha [168], Inner Monologue [61], KnowNo [120], CLARA [109] Language Generation Task-oriented Communication HMAS-2 [20], RoCo [97], CoELA [166], Hunt et al. [62], Axelsson and Skantze [3], Yamazaki et al. [156], FurChat [21], Inner Monologue [61], ORION [25], KnowNo [120], CLARA [109] Non-task Communication Khoo et al. [73], Lee et al. [81], Irfan et al. [65], HeyBeau [22], Ichikura et al. [63], AGA [162] Perception Sensing Modality Visual TidyBot [151], RoCo [97], ConceptGraphs [41], PG-InstructBLIP [36], Kwon et al. [79], LAN-grasp [104], VLMaps [56], ConceptFusion [66], LLM-Grounder [157], Auditory Socratic Models [163], REFLECT [95], MUTEX [123], AVLMaps [55] Haptic Matcha [168], MultiPLY [48] Sensing Behavior Passive PG-InstructBLIP [36], TidyBot [151], ViLa [53], LAN-grasp [104], RT-Grasp [155], ConceptGraphs [41], VoxPoser [59], LM-Nav [122], FM-Loc [103], ConceptFusion [66], LLM-Grounder [157], WALL-E [143] Active LLM-Planner [129], Kwon et al. [79], Matcha [168], MultiPLY [48] ORION [25] Planning Task Planning Static Planning Phase-Step prompt [9], Di Palo et al. [27], SMART-LLM [70], Socratic Models [163], Huang et al. [58], Code as Policies [87], ProgPrompt [128], Instruct2Act [57], VLMaps [56], SayPlan [118], LLM+P [90], Silver et al. [127] Adaptive Planning SayCan [64], HMAS-2 [20], Inner Monologue [61], PromptCraft [141], TidyBot [151], Matcha [168], LLM-MCTS [169], NLMap [12], Grounded Decoding [60], LLM-Planner [129], SwiftSage [88], REFLECT [95], CAPE [117], DEPS [146], COWP [28], KnowNo [120] TAMP Motion Planning Swarm-GPT [68], VoxPoser [59] Kwon et al. [80], RoCo [97], [152], LLM-GROP [29], AutoTAMP [18], Scaling Up and Distilling Down [44], Text2Motion [89] Control Direct Approach Gato [119], RT-1 [7], RT-2 [170], MOO [131], RT-X [142], Chen et al. [15], SayTap [133], General Pattern Machines [102], Cao and Lee [10], Wang et al. [145], Li et al. [86] Indirect Approach ELLM [32], CoTPC [67], Kumar et al. [78], Yu et al. [161], Gen2Sim [71], RoboGen [144], LARG2 [112], Text2Reward [153], Song et al. [130], Eureka [96], REAL [132], OLAF [91], Lafite-RL [24], Zeng and Xu [165] Fig. 1: Overview structure of intelligent robotics research integrated with LLMs in this survey. The rightmost cells show the representative names (e.g., method, model, or authors) of papers in each category. However, full fine-tuning, training the entire model on the task-specific data, is not only computationally expensive but also costly to obtain enough data, due to their large-scale parameters. To address these issues, researchers have developed parameter-efficient fine-tuning methods, such as adapters\u2014small, trainable networks inserted into each layer of an LLM for task-specific tuning [49], and LoRA [51], which applies a low-rank constraint to approximate the updated matrix in each layer. These developments in LLMs are significantly influencing robotics, setting the stage for a deeper exploration of LLM applications within robotic systems. 3 Communication We investigate the utilization of LLMs to facilitate human-like communication in robotics, enabling robots to interact with humans and other robotic 4 agents effectively [98]. 
We categorize the communication capabilities into two primary areas: (1) language understanding and (2) language generation, as detailed in Fig. 1, which shows the detailed categorization alongside relevant studies, referred in green cells. 3.1 Language Understanding We review language understanding capabilities, addressing how LLMs handle the variability and ambiguity of linguistic inputs through interpretation and grounding. Interpretation involves transforming naturallanguage inputs into robot-operatable semantic representations, ranging from formal languages, such as linear temporal logic (LTL) [94, 159] and planning domain definition language (PDDL) [18, 42, 90, 154], to programming languages, such as Python [56, 76]. To aid in interpreting freeform sentences, researchers leverage LLMs\u2019 ICL capabilities, providing guidelines and demonstrations within prompts [56, 76, 90, 122]. Despite the efforts, LLMs often fail to satisfy syntax or capture precise semantics when translating an input into formal languages. Solutions include simplifying vocabulary or fine-tuning LLMs with domain-agnostic data [94, 159]. Translation systems, such as Lang2LTL [92], exemplify how LLMs translate landmark-referring expressions in navigational commands into LTL symbols. Further improvements often involve using human feedback and syntax checkers to correct generated formal language translations [18, 42]. For instance, Guan et al. [42] present a human-in-the-loop translation framework, in which human domain experts repeatedly review PDDL descriptions and provide feedback in natural language. Grounding is to map linguistic expressions to referents recognizable by robots, such as behaviors or objects. Early studies find the mapping by maximizing the cosine similarity between the word embedding of LLM outputs and real-world referents [58, 76, 94, 117]. Subsequent studies incorporate commonsense knowledge from LLMs for contextual support in grounding linguistic label of objects [41, 118]. For instance, Gu et al. [41] demonstrate how LLMs can ground \u2018something to use as a paperweight\u2019 to a ceramic vase based on size and weight assumptions we know. However, grounding accuracy depends on the detail and accuracy of the world model. To address this, researchers augment LLMs with multimodal capabilities to directly correlate linguistic inputs with sensory percepts [31, 47, 114, 158], or enable LLMs to interact with environments [157, 168] or humans [61, 109, 120] for better context gathering. For instance, a 3D visual grounding method, LLM-Grounder [157], actively gathers environmental information using vision tools, such as LERF [72] and OpenScene [111]. 3.2 Language Generation Language generation refers to the production of human-like written or spoken language that reflects communicative intents [38]. We categorize language generation into task-dependent and -independent types based on their communication intents, diverging from conventional natural language generation (NLG) categories of text-to-text and data-to-text [30] due to our focus on the communicative purposes of studies. Task-dependent language generation focuses on producing language with specific functional objectives, being declarative or imperative. To generate open-ended declarative statements, researchers often provide LLMs with contextual information [20, 62, 97]. However, LLMs often result in repetitive and factually inconsistent outputs, confined by the reliance on previous dialogues and commonsense knowledge [20, 84]. 
Consequently, researchers augment LLMs with auxiliary knowledge sources to broaden the scope of available information [3, 21, 156]. For instance, Axelsson and Skantze [3] enhance a robot museum guide with knowledge graphs. Furthermore, researchers instruct LLMs to clarify ambiguities by generating imperative instructions requesting human assistance [25, 61]. To improve inference steps, probabilistic models have been introduced to evaluate the uncertainty of situations [109, 120]. For instance, KnowNo [120] and CLARA [109] interaction systems assess confidence and semantic variance, respectively, triggering generation only when these metrics indicate significant uncertainty. Task-independent language generation involves crafting expressions with social-emotional objectives [11], by embedding non-verbal cues (e.g., non-verbal sounds, hand gestures, and facial expressions) within prompts to enhance engagement and empathy [73, 81]. For example, Khoo et al. [73] have developed a conversational robot 5 that generates empathetic responses using transcribed audio and visual cues. However, conversations with LLMs remain superficial due to the limited knowledge and dialogue history [65]. To overcome this, researchers integrate memory modules into LLMs, enabling them to distill and store information from conversations in a structured format [22, 63, 65, 162]. For example, the companion robot designed by Irfan et al. [65] continuously updates the robot\u2019s memory based on interactions with users to generate personalized dialogues. 4 Perception Perception plays a crucial role in enabling robots to make decisions, plan their actions, and navigate the real world [113]. In the field of LLM-based robotic perception, research primarily focuses on two aspects: sensing modalities and behaviors. In this section, we introduce how LLM-based robots have integrated language with sensor modalities and how agents acquire environmental information through passive and active perception behaviors. Fig. 1 presents the detailed categorization alongside relevant studies, referred in pink cells. 4.1 Sensing Modalities Researchers have significantly advanced robots\u2019 comprehension and generalization capabilities through the integration of multimodal language models. We categorize primary sensing modalities into visual, auditory, and haptic modalities, reviewing recent studies leveraging multimodal LLMs for perception tasks. Visual perception tasks involve the interpretation of visual information such as image or point clouds. Pre-trained visual-language models (VLMs), such as CLIP [116] and InstructBLIP [83], allows LLM-based robots to directly utilize image sources. For instance, recent LLM-based manipulation systems, such as TidyBot [151] and RoCo [97], use image-inferred object labels or scene descriptions, generated from CLIP and OWL-ViT [101], respectively. In addition, researchers extend reasoning capabilities by applying VLMs on downstream tasks such as image captioning [41] and visual question answering (VQA) [36, 79, 104]. The downstream tasks enable LLMs to subsequently request VLMs to infer object properties (e.g., material, fragility) [36] or ground object parts for grasping [104]. However, images are often challenging to acquire spatial-geometric information. Alternatively, Huang et al. [56] associate visuallanguage features from a VLM (i.e., LSeg [82]) with three-dimensional (3D) point clouds for 3D map reconstruction. Further, Jatavallabhula et al. 
[66] improve this association mechanism with RGBD images by introducing fine-grained and pixelaligned features from VLMs. However, association with 3D information tends to be memory intensive, limiting scalability for large scenes [56, 66, 157]. As an alternative solution, researchers often associate geometric and semantic features with 3D scene graphs [41]. Auditory perception involves the interpretation of sound. LLM-based studies often leverage pretrained audio-language models (ALMs), such as AudioCLIP [43] and Wav2CLIP [150], integrating them with visual data to enhance environmental or contextual understanding [55, 95, 123, 163]. For example, AVLMaps [55], a 3D spatial map contructor with cross-modal information, integrates audio, visual, and language signals into 3D maps, enabling agents to navigate using multimodal objectives such as \u201cmove between the image of a refrigerator and the sound of breaking glass.\u201d In addition, REFLECT [95], a framework for summarizing robot failures, transforms multisensory observations such as RGB-D images, audio clips, and robot states into textual descriptions to enhance LLM-based failure reasoning. Haptic perception involves the interpretation of contact information. Researchers introduce multimodal perception modules that interactively incorporate haptic features obtained from predefined high-level descriptions [168] or CLIP-based tactile-image features [48] about haptic interactions. For example, MultiPLY [48], a multisensory LLM, converts tactile sensory readings into a heatmap, encoded by CLIP. Then, by introducing a linear layer of tactile projector, the model maps the heatmap information to the feature space of LLMs. 4.2 Sensing Behavior Following the type of sensing behaviors, we decompose this section into passive and active perceptions. 6 The passive perception refers to the process of gathering sensory information without actively seeking it out. Despite its limited nature, passive sensing has been extensively employed in LLMbased robotics studies for various tasks: object recognition [36, 53, 151], pose estimation [104, 155], scene reconstruction [41, 59, 122, 122], and object grounding [66, 143, 157]. For example, TidyBot [151] detects the closest object from an overhead view and subsequently recognizes its object category using a closer view captured by the robot\u2019s camera. However, the passive nature of sensing limits the ability to perform tasks when information is unobserved or unavailable (e.g., unseen area, weight). On the other hand, active perception refers to the conscious process of gathering sensory information by taking additional actions. Active information gathering enhances environmental understanding by acquiring new information through sensory observations or requesting user feedback [79, 129]. For example, LLM-Planner [129] generates seeking actions such as \u2018open the refrigerator\u2019 to locate invisible objects. Recent studies also focus on collecting sensory data to better understand objects\u2019 physical properties [48, 168]. However, LLMs often generate inaccurate or fabricated information, known as hallucinations. To address this issue, Dai et al. [25] introduce a personalized conversational agent designed to ask users for uncertain information. 5 Planning Planning involves organizing actions to solve given problems, typically through generating a sequence of high-level symbolic operators (i.e., task planning) followed by executing them using low-level motor controllers [37, 85]. 
This section investigates how LLM-based planning research addresses limitations in the planning domain by categorizing them into three key research areas: (1) task planning, (2) motion planning, and (3) task and motion planning (TAMP). Fig. 1 presents the detailed categorization along with related planning studies, referred in purple cells. 5.1 Task Planning LLM-based task planners are capable of generating plans without strict symbol definitions [58], while traditional task planners are required to pre-define operators with domain knowledge about available actions and constraints [34, 99]. In this field, most planners employ a static planning strategy, which takes fixed descriptions that are not adaptable to changes in environment [163]. However, an alternative approach, adaptive planning, allows for the incorporation of environmental feedback into input prompts, enabling adjustments to actions based on observed conditions. This section reviews LLMbased planners in terms of these two strategies: static and adaptive planning. Static planning: Static planning approaches are generally zeroor few-shot prediction methods, where zero-shot methods generate a plan based solely on an input command, while few-shot methods leverage learning from a limited set of similar examples [9, 27, 70, 163]. However, LLMs often exhibit poor performance in long-horizon task planning due to limited reasoning ability [90, 139]. To address this limitation, Huang et al. [58] introduce a planner that iteratively selects the most probable action among executable ones generated by LLMs. Alternatively, LLM-based code generators, such as Code as Policies [87] or ProgPrompt [128], produce codes that result in adaptive actions responsive to observations [56, 57]. Singh et al. [128] demontrate that code generation outperforms basic task planning from LLMs since the output plan closely aligns with execution environments. Despite their advantages, these methods lack validation and replanning processes. To validate plans, researchers often augment LLMs with logical programs, either to (1) check if resulting plans violate logical constraints or (2) generate plans using an external logical planner. For instance, SayPlan [118], a GPT4-based planner, validates abstract-level actions through a scenegraph simulator 3DSG [1], while LLM+P [90] applies a PDDL problem translated from LLMs to a classical task planner, Fast Downward [45]. In addition, Silver et al. [127] demonstrate that a search-based planner with an initial plan from LLMs performs better by exploring fewer nodes. These studies underscore the effectiveness of integrating LLMs with logical programs to increase the success rate or the performance of generating feasible plans. Adaptive planning: Adaptive planning allows robots to modify their plans or actions in response to feedback, either by generating new plans based 7 on environmental observations [20, 141, 151, 168, 169] or by detecting failures and adjusting accordingly [61]. Chen et al. [12] and Huang et al. [60] introduce adaptation strategies that generate new plans based on observed feedback, enabling robots to respond to a broader range of scenarios. Another adaptation strategy is the detection of failures as feedback. For instance, Inner Monologue [61] retries the initial plan until it succeeds. Furthermore, other studies provide textual explanations about past failures to help avoid recurrent issues [88, 95, 117, 146]. 
LLM-Planner [129] and COWP [28] improve replanning capabilities by finding alternative plans that leverage context from observations and the commonsense knowledge of LLMs. This flexibility in adapting to new information enhances robot autonomy in dynamic settings.

5.2 Task and Motion Planning

We outline LLM-based low-level planning, classifying methodologies into motion planning and TAMP. Motion planning refers to generating a trajectory objective, specified as numerical waypoints, within a robot's configuration space or task space. However, directly generating numerical sequences is challenging, since language models are trained to produce discrete tokens rather than values in continuous space. Despite this, an LLM-based motion planner can directly generate positional sequences for drone choreography [68], as the task is simple enough to demonstrate LLMs' spatial reasoning ability. For more complex scenarios, Huang et al. [59] take an indirect approach and augment the LLM with a search-based planner. In their framework, VoxPoser, the LLM generates code that composes a potential field with the aid of a VLM, and the search-based planner then conducts motion planning within the generated field.

TAMP refers to integrating high-level task planning with low-level motion planning. Various works use LLMs themselves as TAMP planners, exploiting both logical and physical reasoning capabilities [80, 97, 152]. Researchers guide LLMs to generate high-level subgoals and then use them for low-level trajectory generation [80, 97]. However, their coarse representations limit these methods to simple tasks such as pick-and-place. Instead, Xia et al. [152] enhance LLMs' kinematic knowledge using kinematic-aware prompting for complex manipulation, such as articulated object manipulation. In addition, various studies augment LLMs to complement their reasoning abilities. Researchers often integrate a logic-augmented TAMP planner that checks the logical feasibility of the task plan [29]. Meanwhile, others use physics-augmented TAMP planners to evaluate physical feasibility [18, 44, 89]. Text2Motion [89], for example, allows an LLM to generate physically feasible high-level actions and combines them with learned skills for low-level actions.

6 Control

Early studies primarily focused on establishing mappings between simple linguistic commands and known motion primitives. With the advent of deep learning, researchers have explored two main approaches in control: direct modeling of control values based on linguistic instructions [7, 119] and indirect interpretation of complex instructions via LLMs to generate actions [153]. We categorize the work in this field into two groups: (1) the direct approach, i.e., the direct generation of control commands based on linguistic instructions, and (2) the indirect approach, i.e., the indirect specification of control commands through linguistic guidance. Fig. 1 presents a detailed categorization alongside related papers, referred to in orange cells.

6.1 Direct Approach

The direct approach involves using an LLM to interpret and produce executable commands, either by selecting motion primitives [133] or by generating control signals [145, 170]. Early work generates action tokens to produce a control policy by training a Transformer architecture [140] with task-specific expert demonstration data [7, 119, 131]. Researchers linearly map these tokens to discretized end-effector velocities [119] or displacements [7, 131] for continuous motion.
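The following minimal sketch illustrates one way such a token-to-motion mapping can work, assuming uniform binning over a fixed displacement range; the bin count and range are illustrative assumptions, not the exact discretization used by the cited systems:

# Minimal sketch: decoding discrete action tokens into continuous
# end-effector displacements via uniform binning (assumed scheme).
import numpy as np

NUM_BINS = 256          # tokens 0..255 per action dimension (assumed)
LOW, HIGH = -0.05, 0.05  # displacement range in meters (assumed)

def tokens_to_displacement(tokens: np.ndarray) -> np.ndarray:
    """Map integer action tokens to a continuous (dx, dy, dz) displacement."""
    fractions = tokens.astype(np.float32) / (NUM_BINS - 1)  # scale to [0, 1]
    return LOW + fractions * (HIGH - LOW)                   # linear de-quantization

# Example: a policy head emits one token per Cartesian axis.
action_tokens = np.array([128, 0, 255])
print(tokens_to_displacement(action_tokens))  # approx. [0.0, -0.05, 0.05]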
While these approaches demonstrate a degree of generalization over unseen tasks, such as new objects or more realistic instructions, they often require extensive data collection and training time. To reduce the collection effort, researchers often leverage existing web-scale vision and language datasets for finetuning [142, 170]. For example, Zitkovich et al. co-finetune VLMs, such as PaLI-X [17] and PaLM-E [31], targeting both visual-language tasks and robotics control tasks. They use the original datasets designed for VLMs alongside robotics control demonstrations to preserve general knowledge during finetuning, as demonstrated in RT-2 [170]. Additionally, to reduce the training burden, Hu et al. [51] use a low-rank adaptation (LoRA) method for finetuning an LLM for control tasks [15], rather than finetuning the entire model.

LLMs often struggle to generate continuous action-level commands such as joint position and torque values, as LLMs typically generate atomic elements known as tokens [133]. Therefore, researchers instead generate task-level output using LLMs [10, 102, 133]. For example, SayTap [133], an LLM-based locomotion controller, uses the LLM to generate contact patterns between the feet and the ground rather than directly producing joint positions. Other studies address the control problem by framing it as completing a sequence of end-effector poses [102] or generating Python code [10], similar to natural-language generation tasks. Recently, researchers have prompted LLMs to produce action-space outputs by providing a normalized and discretized history of control values to maintain continuity in control [145], or by providing robot kinematics information to determine reasonable joint values for a desired pose [86].

6.2 Indirect Approach

LLMs are also useful for generating indirect representations of control commands (e.g., subgoals or reward functions) based on natural language instructions. Researchers leverage goal descriptions, explaining desired behaviors in natural language, to guide the learning process [32, 67, 78]. For example, ELLM [32], an LLM-based RL framework, uses an LLM to generate subgoal descriptions that condition the RL policy, and further uses the similarity between the current observation and the subgoal description in text embedding space to compute rewards. Further, Kumar et al. [78] incrementally use an LLM to generate a goal description based on previous human instructions. However, as the output of an LLM is a natural-language description, these approaches require an additional step of grounding or interpreting the description.

Using code-generation capabilities, researchers instead generate a code-level reward function. Yu et al. [161] convert a natural-language goal into high-level motion descriptions and then generate a corresponding reward function. However, this generation requires fixed reward-function formats. Instead, recent work prompts an LLM to infer reward-function formats from human-designed examples [71, 144]. Nonetheless, the generated reward functions may not always be accurate or optimal enough for direct use in training [130]. To improve accuracy, researchers add a refinement loop to validate both the syntax [112] and the semantics [96, 130, 153, 165] of the generated reward functions. For example, Song et al. [130] use an LLM to redesign a reward function based on the convergence of the training process and the resulting robot motion. Further, researchers use an LLM to evaluate robot motion and directly generate rewards [24].
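A minimal sketch of such a refinement loop is shown below; generate_reward_code and train_and_summarize are hypothetical helpers, and the loop only loosely mirrors the validation ideas above (syntax checking with Python's compile, then feeding a textual training summary back into the prompt) rather than reproducing any specific cited method:

# Minimal sketch of a reward-refinement loop: syntax check, train, then ask
# the LLM to revise the reward from a training summary. All helpers are stubs.
def generate_reward_code(prompt: str) -> str:
    """Stub: return Python source for `def compute_reward(env) -> float`."""
    return "def compute_reward(env) -> float:\n    return 0.0\n"

def train_and_summarize(reward_source: str) -> str:
    """Stub: train an RL agent with the reward and return a textual summary."""
    return "return curve plateaued early; robot motion is jerky"

prompt = "Write compute_reward(env) for the reaching task."
for round_idx in range(3):
    source = generate_reward_code(prompt)
    try:
        compile(source, "<reward>", "exec")  # syntactic validation
    except SyntaxError as err:
        prompt += f"\nThe previous code had a syntax error: {err}. Fix it."
        continue
    summary = train_and_summarize(source)     # semantic validation via training
    prompt += f"\nTraining feedback: {summary}. Revise the reward accordingly."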
In addition, recent work demonstrates the use of an LLM to refine motion by adjusting control parameters based on the error state [132] or by selecting a suitable motion target from human feedback [91].

7 Prompt Guideline

We provide prompt design guidelines for robotic tasks to researchers entering this field. A prompt is a message that guides a pre-trained language model to process inputs and produce outputs as instructed [93, 149]. Well-designed prompts
• include clear, concise, and specific statements without technical jargon,
• incorporate examples that allow anticipating the model's process,
• specify the format in which we want the output to be presented, and
• contain instructions that constrain actions.
Such prompts enable models to generate the desired content, following output formats and constraints, without parameter updates. We provide guidelines across four robotics fields: (1) interactive grounding, (2) scene-graph generation, (3) few-shot planning, and (4) reward function generation.

7.1 Conversation Prompt: Interactive Grounding

We detail a conversation prompt design, leveraging an LLM as a grounding agent, to clarify commands such as "Bring me something to eat" and to infer the ambiguous targets, expressed as "something," through logical inference. Table 1 shows the design detail, where the prompt consists of three key components: task description, task procedure, and task context. We further describe each as follows.

Task Description: You are a conversational agent that interactively grounds referenced objects within tasks to a specific world model. You should engage with humans for clarifications and logically infer the target objects. Keep your response concise without explanation.

Task Procedure: You should:
1. Identify the target objects and their details from the task.
2. Iteratively ask for additional detail with a new criterion if multiple options within the world model meet the criteria.
3. Select the most appropriate object within the world model when only a single option meets the criteria.

Task Context:
Task: "Bring me something to eat"
World Model: ['water bottle', 'plate', 'napkin', 'coke can', 'potato chips', 'candle', 'sandwiches', 'pepper shaker', 'salt shaker', 'fork', 'banana', 'cookie', 'apple', 'cereal box', 'juice box', 'cup']

Interaction:
LLM: Do you prefer something savory or sweet to eat?
User: sweet
LLM: Do you prefer something crunchy or soft as a sweet snack?
User: crunchy
LLM: Based on your preference for something sweet and crunchy, I suggest bringing the "cookie."

Table 1: A conversational prompt for interactive grounding. Through the "Task" in the prompt, we ask the LLM to ground the underspecified object, referred to as "something" in the "Task," as a "cookie" by interactively asking for personal preferences. The prompt consists of task description, task procedure, and task context parts guiding the LLM's behavior and contextual understanding. The words in bold indicate subjects of interactions, with LLM responses highlighted in blue.

The task description outlines the expected behavior and response format of the LLM. In this example, we particularly emphasize its role as a conversational agent, which fosters dynamic interactions with users, guided by directives such as 'you should.' Further, the imperative statements containing 'keep' provide task constraints or requirements.
We also place behavioral constraints at the end to suppress the LLM's verbosity.

The task procedure then defines a sequence of inference steps for the LLM to follow, aimed at achieving the task objective. This description employs numbered steps to instruct the LLM to execute the actions step by step. By using logical representations, we also enforce actions to be performed in a logical order; we use 'iteratively' to indicate a 'while loop' and 'if' or 'when' to represent conditions.

The task context describes the contextual inputs, such as the "world model," upon which the LLM performs grounding. Consistency in terminology across the task description and task procedure is crucial for LLM operations. For example, common expressions such as "task" and "world model" allow the LLM to work within the same provided context. Further, by using clear names for objects in the world model, we enable the LLM to apply common knowledge to named entities. Note that, although we use a list of objects as a world model, LLMs accept world models in various formats: textual descriptions, a list of objects, and scene graphs. With these structured components, the prompt invokes an interactive grounding dialogue for precise object identification, as shown in Table 1. This prompt utilizes ChatGPT 3.5 [106].

7.2 Directive Prompt: Scene Graph Generation

We introduce directive prompt designs for constructing a scene graph from a scene image using a multimodal LLM, particularly GPT-4 [107]. The scene graph consists of objects as nodes and their relationships as edges. Despite the advancement of multimodal LLMs, their ability to infer 3D relationships from a 2D image remains limited [13]. To reduce this limitation, we decompose the task into two steps: node creation with multimodal inputs and edge creation with textual information. We describe each step with detailed examples provided in Table 2.

The prompt for node creation consists of two parts: (1) task description and (2) task context. The task description includes the LLM's expected behavior (i.e., role) and response format, as in Section 7.1. For instance, the LLM's role is to identify objects as nodes in the given image.

Node Creation Prompt

Task Description: You are an agent responsible for generating nodes in a scene graph based on images. Each object's image and a unique ID will be provided to you. Your task is to generate the name of the central object. The output should be formatted as 'ObjectName(ID), ObjectName(ID), ...', without any space within each object name. No additional explanation is required.

Task Context: [images: entire scene visualization, robot view, side view] IDs: 0, 1, 2, 3, 4.

VLM Output: YellowCube(0), BlueCube(1), Keyboard(2), Mouse(3), Phone(4)

Edge Creation Prompt

Task Description: You are an agent responsible for inferring edge relations in a scene graph. Based on the provided names and 3D coordinate information of each object, infer one major {spatial_relation} from {source} to {target} (i.e., source->target: spatial_relation), which means source is located to spatial_relation of target. Spatial relations are limited to 'left', 'right', 'forward', 'back', 'up', and 'down'. The bbox_extent represents the object's dimensions along the X, Y, and Z axes in meters, and the bbox_center specifies the object's central position in 3D space.
For example, if the y-value of the source's center is bigger than that of the target's, then the source is located to the forward of the target (i.e., source->target: forward). Similarly, if the x-value of the source's center is bigger than the target's, then the source is located to the right of the target (i.e., source->target: right). You can determine 'left' and 'right' based on the x-value, 'forward' and 'back' based on the y-value, and 'up' and 'down' based on the z-value.

Example: Provide the output in a simple format as shown in the example below. Take a deep breath and work on this problem step-by-step.
Example_Input:
object_list:
(cup(23), bbox_extent: [0.1, 0.1, 0.1], bbox_center: [10, 0.5, 0.53])
(box(20), bbox_extent: [0.6, 0.5, 0.32], bbox_center: [10, 0.5, 0.2])
[source, target]: [box(20), cup(23)]?
Example_Output: box(20)->cup(23): down

Task Context:
Input:
object_list:
(YellowCube(0), bbox_extent: [0.05, 0.05, 0.05], bbox_center: [0.52, 0.52, 1.07])
(BlueCube(1), bbox_extent: [0.05, 0.05, 0.05], bbox_center: [0.52, 0.52, 1])
(Keyboard(2), bbox_extent: [0.45, 0.15, 0.02], bbox_center: [0.53, 0.45, 1])
(Mouse(3), bbox_extent: [0.12, 0.06, 0.04], bbox_center: [0.75, 0.46, 1])
(Phone(4), bbox_extent: [0.07, 0.14, 0.007], bbox_center: [0.51, 0.2, 1])
[source, target]: [BlueCube(1), Keyboard(2)]?
[source, target]: [BlueCube(1), YellowCube(0)]?
[source, target]: [YellowCube(0), Keyboard(2)]?
[source, target]: [Keyboard(2), Mouse(3)]?
[source, target]: [Keyboard(2), Phone(4)]?
Output:

LLM Output: (Skip explanations)
BlueCube(1)->Keyboard(2): forward
BlueCube(1)->YellowCube(0): down
YellowCube(0)->Keyboard(2): forward
Keyboard(2)->Mouse(3): left
Keyboard(2)->Phone(4): forward
[Generated scene graph visualization omitted]

Table 2: Directive prompts for generating a scene graph. The table includes two prompts: node creation and edge creation. Given images of a scene, a multimodal LLM perceives objects and infers relevant relationships using geometric information. The words in bold indicate subjects of outputs, with LLM responses highlighted in blue. The visualizations are neither the inputs nor the outputs of the LLMs.

We then specify the output format as 'ObjectName(ID)' for consistency and simplicity. Then, the task context presents a sequence of unique object identifiers with corresponding object-centric images.

Table 3 (planning prompt for few-shot planning; see Section 7.3):

Task Descriptions: Create a next plan for a robot to complete a household task using the allowed actions and visible objects. The robot is able to pick up and hold only one object. The robot CANNOT open or close some objects when the robot holds an object (after PickUp and before PutInto). Initially, the robot does not hold anything and every openable object is closed. All plan sequences start with 'Initial', which means the initial condition. When the task is completed, use the action 'Done' to finish it. Do not do other than the task requires. Answer only one step of the next plan without other things.
Examples:
For example,
Task: Put paper in the drawer
Allowed actions: Initial, Open (object), Close (object), PickUp (object), PutInto (object), Done
Visible objects: paper, drawer, carrot
Executed plans: Initial, Open (drawer), PickUp (paper), PutInto (drawer)
Next plan: Close (drawer)

Task: Put only stationery in the drawer
Allowed actions: Initial, Open (object), Close (object), PickUp (object), PutInto (object), Done
Visible objects: paper, drawer, carrot
Executed plans: Initial, Open (drawer), PickUp (paper), PutInto (drawer), Close (drawer)
Next plan: Done

Target Context:
Task: Put only food in the refrigerator
Allowed actions: Initial, Open (object), Close (object), PickUp (object), PutInto (object), Done
Visible objects: refrigerator, carrot, pencil
Executed plans: Initial
Next plan:

LLM Output:
LLM: Open (refrigerator)

Interaction:
User: Executed plans: Initial, Open (refrigerator)
Next plan:
LLM: PickUp (carrot)
User: Executed plans: Initial, Open (refrigerator), PickUp (carrot)
Next plan:
LLM: PutInto (refrigerator)
User: Executed plans: Initial, Open (refrigerator), PickUp (carrot), PutInto (refrigerator)
Next plan:
LLM: Close (refrigerator)
User: Executed plans: Initial, Open (refrigerator), PickUp (carrot), PutInto (refrigerator), Close (refrigerator)
Next plan:
LLM: Done

Table 3: A planning prompt for few-shot planning. Leveraging input-output example pairs, the LLM improves performance in generating a plan to accomplish the task objective. The prompt consists of task descriptions, examples, and task context. The words in bold indicate subjects of interactions, with LLM responses highlighted in blue.

We obtain the object-centric images from a scene by cropping them via SAM [77], a foundation vision model proficient in identifying objects under occlusion.

The edge creation prompt consists of (1) task description, (2) examples, and (3) task context. The task description not only specifies the expected behavior and output format but also explains how to identify the relationships between nodes by leveraging examples. In particular, we explain how the LLM should use 3D object coordinates and unit measurements to infer spatial relationships from a predefined set such as 'left,' 'right,' etc. Unlike node creation, this allows for the generation of additional output explanations to accommodate the complexity of discerning spatial relationships. To enhance the understanding of the input format and corresponding output, we include examples showcasing edge generation. We choose an example similar to the target scenario in terms of objects and their spatial interrelationships, thereby providing richer information for edge identification. Finally, the task context provides source and target node information as inputs and an empty output slot to obtain responses from the LLM. Instead of providing the whole permutation of node pairs, we specify only the combinations of nodes, assuming that the edge in the reversed direction can be identified as the opposite spatial relationship (e.g., 'left' is opposite to 'right').

7.3 Planning Prompt: Few-shot Planning

We present a planning prompt design aimed at predicting subsequent actions to fulfill an instructed objective, integrating contextual elements such as available actions and environmental settings. This design particularly focuses on few-shot planning, enhancing performance through examples (see Table 3 above).

Table 4 (reward-generation prompt; see Section 7.4):

Task Descriptions: You need to control a two-jointed robot arm to touch the target with its fingertip in the 2D xy plane.
The action space is Box(-1.0, 1.0, (2,), float32), as a torque applied to the joints. Please design a reward function for training RL agents in Python based on the following information.

Available APIs: You can use the following functions to design a reward function.
def get_joint_angles(env) -> tuple[float, float]  # get the current angles of the joints in radians
def get_joint_angular_velocities(env) -> tuple[float, float]  # get the current angular velocities of the joints in rad/s
def get_current_action(env) -> tuple[float, float]  # get the current action torque in Nm
def get_current_position(env) -> tuple[float, float]  # get the current x, y position of the fingertip
def get_target_position(env) -> tuple[float, float]  # get the target x, y position

Goals and Constraints: The trained RL agent should accomplish the following goals and constraints.
The starting joint angles and angular velocities are all initialized to zero.
The target position is randomly initialized.
Finish the task with as little joint torque consumption as possible.
Do not consider other goals or constraints except the given ones.

Generation Rules: You should follow the rules below.
Write the Python function in the form of 'def compute_reward(env) -> float' and return the 'reward: float' only.
Do not use redundant constant variables, class instances, or functions that are not given, but you can use popular Python libraries like numpy if you import them.
Consider the scale of each reward component.
Return the generated Python function only.

LLM Output:
import numpy as np

def compute_reward(env) -> float:
    # Get current position and target position
    current_pos = np.array(get_current_position(env))
    target_pos = np.array(get_target_position(env))
    # Calculate Euclidean distance between current and target position
    distance_to_target = np.linalg.norm(current_pos - target_pos)
    # Get current joint angles and angular velocities
    joint_angles = np.array(get_joint_angles(env))
    joint_velocities = np.array(get_joint_angular_velocities(env))
    # Get current action
    current_action = np.array(get_current_action(env))
    # Calculate penalty based on joint torque
    torque_penalty = -0.01 * np.linalg.norm(current_action)
    # Calculate reward based on distance to target
    distance_reward = -0.1 * distance_to_target
    # Combine rewards
    reward = distance_reward + torque_penalty
    return reward

Table 4: A prompt example for reward function generation. The prompt consists of task descriptions, available APIs, goals and constraints, and generation rules. The LLM generates a reward function in Python code for RL training.

The design comprises four components: (1) task descriptions, (2) examples, (3) target context, and (4) additional interactions, detailed in Table 3. The task descriptions include task objectives, expected behaviors, and response formats, similar to conventional prompts. However, unlike the previous ones, this prompt specifies the robot's constraints, including initial states and action limitations, constraints not addressed in the previous prompts. For example, the term "CANNOT" in Table 3 emphasizes the robot's limitation of manipulating only one object per action. Moreover, these constraints extend to the rules governing the "Done" action, indicating task completion. The examples demonstrate input-output pairs that guide the LLM in generating the desired action.
The examples adapt the generic "object" argument in the allowed actions (e.g., "Close (object)") to specific object names such as "drawer" or "paper," reinforcing the task constraints written in the task description. For instance, the second example returns the "Done" signal instead of further planning after achieving the task objective. The target context provides the current scenario, including the task, allowed actions, visible objects, executed plans, and a next plan, in the same form as the examples. We allow the LLM to fill in the blank space after "Next plan:", suggesting the next action without adding unnecessary elements like line breaks, ensuring output precision. Furthermore, when additional prompts update the executed plans, the LLM generates new plans based on this updated context without reiterating the full target context, enabling a dynamic and iterative planning process that adapts to changes and maintains efficiency.

7.4 Code-Generation Prompt: Reward Design

We introduce a code-generation prompt design to generate a reward function for the MuJoCo-based Reacher task [135] from Gymnasium [138]. The goal of the Reacher task is to move the end-effector of a robotic arm close to a designated target position from an arbitrary starting configuration. The prompt translates this task objective into reward-specifying code. Table 4 shows the design detail, comprising four key elements: (1) task descriptions, (2) available APIs, (3) goals and constraints, and (4) generation rules.

Task descriptions define the expected robot behavior and task conditions for the LLM, including the robot's control strategies and the action space of the two-joint robot arm. We particularly specify the action space as a continuous "Box" space using an API from Gymnasium, assuming the LLM's familiarity with well-known library functions. This description then leads the LLM to grasp the overarching RL objective of the defined actions.

Available APIs list the APIs necessary for designing the reward function, including the names and input-output specifications of each API. By providing Python function annotations, we enable the LLM to infer the types of inputs and outputs, given its presumed knowledge of float-like variable types and of how the APIs work.

Goals and constraints provide the task objectives and limitations that guide the reward content. We clearly define the initial setup, goal assignment, and goal conditions, aiming to exclude unnecessary reward components, such as penalizing high velocities for smooth motion. Note that we recommend the use of concise and consistent words, such as "torque", as used in the task descriptions, instead of "power," even though the linguistic meaning is similar.

Lastly, generation rules establish guidelines for generating directly executable code, addressing the tendency of LLMs to produce unnecessary or incorrect variables or functions. These rules restrict such declarations, as written in the second generation rule in Table 4, and encourage the use of well-known Python libraries to enhance programming quality. Furthermore, considering that the reward function linearly combines its components, we introduce a rule for scaling reward components to maintain balance.

8 Conclusion

In this survey, we have investigated current robotics research that leverages large language models, in terms of the intelligent robot components encompassing communication, perception, planning, and control.
This component-wise investigation reveals how researchers integrate LLMs to overcome challenges inherent in pre-LLM approaches across various tasks, thereby offering a comprehensive understanding of LLMs' impact in this field. Within each component area, we examine the methodological improvements proposed to maximize the utilization of LLMs' capabilities and to enhance the integrity of their responses. Additionally, our survey offers guidelines for prompt engineering in each component area, supplemented with key examples of prompt components, to provide practical insights for researchers entering this field. The core contribution of this paper is to highlight the transformative impact of LLMs in robotics, enabling the development of versatile and intelligent robots with limited resources.

Declarations

Conflict of interest: The authors have no relevant financial or non-financial interests to disclose." |
| } |
| ] |
| } |