diff --git "a/title_30K/test_title_long_2404.16670v1.json" "b/title_30K/test_title_long_2404.16670v1.json" new file mode 100644--- /dev/null +++ "b/title_30K/test_title_long_2404.16670v1.json" @@ -0,0 +1,107 @@ +{ + "url": "http://arxiv.org/abs/2404.16670v1", + "title": "EmoVIT: Revolutionizing Emotion Insights with Visual Instruction Tuning", + "abstract": "Visual Instruction Tuning represents a novel learning paradigm involving the\nfine-tuning of pre-trained language models using task-specific instructions.\nThis paradigm shows promising zero-shot results in various natural language\nprocessing tasks but is still unexplored in vision emotion understanding. In\nthis work, we focus on enhancing the model's proficiency in understanding and\nadhering to instructions related to emotional contexts. Initially, we identify\nkey visual clues critical to visual emotion recognition. Subsequently, we\nintroduce a novel GPT-assisted pipeline for generating emotion visual\ninstruction data, effectively addressing the scarcity of annotated instruction\ndata in this domain. Expanding on the groundwork established by InstructBLIP,\nour proposed EmoVIT architecture incorporates emotion-specific instruction\ndata, leveraging the powerful capabilities of Large Language Models to enhance\nperformance. Through extensive experiments, our model showcases its proficiency\nin emotion classification, adeptness in affective reasoning, and competence in\ncomprehending humor. The comparative analysis provides a robust benchmark for\nEmotion Visual Instruction Tuning in the era of LLMs, providing valuable\ninsights and opening avenues for future exploration in this domain. Our code is\navailable at \\url{https://github.com/aimmemotion/EmoVIT}.", + "authors": "Hongxia Xie, Chu-Jun Peng, Yu-Wen Tseng, Hung-Jen Chen, Chan-Feng Hsu, Hong-Han Shuai, Wen-Huang Cheng", + "published": "2024-04-25", + "updated": "2024-04-25", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "LLM AND Reasoning", + "gt": "EmoVIT: Revolutionizing Emotion Insights with Visual Instruction Tuning", + "main_content": "Introduction Visual emotion recognition, a key area within artificial intelligence and computer vision, aims to predict human emotions based on visual cues such as facial expressions and body language. This technology is essential in bridging the gap between human affective states and machine understanding. Its diverse applications [10, 13, 22, 39], spanning from improving human-computer interaction to aiding in mental health assessment, underscore its significance. Accurate emotion recognition is vital for enhancing user expeFigure 1. Illustration of the importance of instruction-following ability in visual emotion understanding. rience and ensuring information security, as it helps prevent emotional manipulation and misinformation [32]. Developing robust emotion recognition models is not only a technical challenge but also a step towards more empathetic and intuitive AI systems, paving the way for more efficient and natural human-computer interactions. The AI community has recently shown a growing interest in developing foundational vision models, e.g., Flamingo [8], LLaVA [7], BLIP2 [14]. These models excel in open-world visual understanding, tackling several vision tasks such as classification, detection, segmentation, and captioning. In contrast, current large-scale multimodal models are still in its infancy when it comes to emotion perception [20]. As illustrated in Fig. 
1, when directly query the GPT-4 [29] about the emotional category of an image, the model tends to provide incorrect responses. However, the model delivers accurate responses when provided with revised instructions. To fully leverage the potential of existing vision-based large models, our approach is based on the concept of Instruction Tuning. This effective strategy is aimed at teaching language models to follow natural language instructions, a technique proven to enhance their generalization performance across unseen tasks [7, 9, 21]. 1 arXiv:2404.16670v1 [cs.CV] 25 Apr 2024 \fIn this work, we focus on developing the model\u2019s proficiency in understanding and following instructions related to emotional contexts. This approach highlights the importance of fine-tuning the model\u2019s instruction-following capabilities, enabling it to interpret and respond to emotional content effectively. This is achieved by leveraging its preexisting knowledge base, thereby eliminating the necessity for an emotion-specific architectural framework. To address the notable challenges encountered in Instruction Tuning for visual emotion recognition, especially the lack of specific instruction data, we introduce a novel self-generation pipeline explicitly crafted for visual emotion recognition by using GPT-4 [29]. This innovative pipeline excels in generating a diverse array of (image, instruction, output) instances, thereby notably enhancing the dataset with a more extensive and task-oriented variety of examples. This approach not only overcomes the challenge of limited data availability but also reduces the dependence on human labor. Therefore, it streamlines the process, enabling more efficient and effective emotion recognition. Additionally, Instruction Tuning has been criticized for its emphasis on surface-level features like output patterns and styles, rather than achieving a profound comprehension and assimilation of tasks [23]. To tackle this issue and enhance the diversity and creativity of instruction data, our dataset includes instructions that demand complex reasoning, going beyond basic question-and-answer formats. This is further enriched by incorporating visual cues such as brightness, colorfulness, scene type, object class, facial expressions, and human actions. These aspects are pivotal in fostering a nuanced comprehension of visual emotions, thus allowing the model to generate more precise and contextually appropriate interpretations [13]. After generating the emotion visual instruction data, we propose an Emotion Visual Instruction Tuning (EmoVIT) framework, leveraging the foundation of InstructBLIP [9]. This framework incorporates an emotioncentric, instruction-aware module that proficiently guides Large Language Models (LLMs) in assimilating the nuances of emotion instructions. Our work signifies a paradigm shift, presenting a new era of instruction-based learning for visual emotion understanding that relies less on explicit training data. Remarkably, as shown in Fig. 2, our approach requires almost 50% of the training data typically needed yet exceeds the performance of previous visual emotion recognition methods and popular Visual Instruction Tuning methods. Our contributions can be summarized as follows: \u2022 We explore the potential of the Visual Instruction Tuning paradigm for emotion comprehension and introduce the concept of Emotion Visual Instruction Tuning. 
\u2022 After thoroughly considering the unique characteristics of visual emotion recognition, we develop a novel GPTWSCNet[16] StyleNet[19] PDANet[17] StimuliAware[10] MDAN[12] BLIP2[14] InstructBLIP[9] Flamingo[8] LLaVA[7] Ours* 0 20 40 60 80 76.32 77.11 76.95 78.4 75.75 46.79 42.2 29.59 44.03 83.36 Supervised Emotion Recognition Methods Visual Instruction Tuning Methods Figure 2. Performance comparison on EmoSet test set [13] (Accuracy %). assisted pipeline for generating emotion visual instruction data. This approach effectively bridges the gap in available annotated instruction data within this specific domain. \u2022 Building upon the foundation of InstructBLIP, our EmoVIT architecture integrates emotion domain-specific instruction data, harnessing the robust capabilities of LLMs to boost performance. The extensive experiments demonstrate our model\u2019s proficiency in emotion classification, affective reasoning, and comprehension of humour. 2. Related Work 2.1. Visual Emotion Recognition A key challenge in visual emotion recognition is bridging the gap between an image\u2019s visual cues and the emotions it portrays [11, 12, 35]. While traditional efforts, e.g., Xu et al.\u2019s multi-level dependent attention network [12], focus on visual models for emotional feature learning, recent advancements like EmoSet [13] offer rich emotion-laden datasets with 3.3 million images. The rise of multimodal models, such as the GPT series [29], has further propelled Vision-Language Recognition. However, fully leveraging these models in emotion recognition is an area ripe for exploration. Our work leads the way in utilizing large-scale models for Emotion Visual Instruction Tuning. 2.2. Visual Instruction Tuning Current Large Language Models (LLMs) have extensive knowledge bases, but their effectiveness depends on accurately interpreting human instructions due to a mismatch 2 \fFigure 3. The comparison of different visual tuning paradigms. between training goals and user expectations. LLMs are trained to minimize prediction errors, whereas users expect helpful and safe instruction-following. Instruction Tuning addresses this by teaching models to follow natural language instructions, enhancing generalization to new tasks. FLAN [21] demonstrated that training a large model on instruction-based datasets improves zero-shot performance. This approach has extended to vision-language tasks, with BLIP2 [14] and LLaVA [7] adapting instructiontuned LLMs for visual inputs. InstructBLIP [9] introduces instruction-aware visual feature extraction and the QFormer, enabling more flexible, instruction-driven feature extraction. As a novel area, visual emotion instruction tuning lacks benchmarks or guidelines for creating emotion instruction data. Our work pioneers the use of large-scale models to develop an emotion instruction data pipeline, overcoming the limitations of manual annotation. 3. Method 3.1. Preliminary of Visual Instruction Tuning In the deep learning era, visual tuning has experienced significant paradigm shifts, as depicted in Fig. 3. In Fig. 3(a), conventional tuning methodologies encompass Full fine-tuning, Head-oriented, and Backboneoriented techniques, capitalizing on large-scale pre-trained models. Predominantly, thoroughly fine-tuning these models for specific tasks, conducted end-to-end, is recognized as a highly effective strategy. However, this method requires maintaining separate copies of the backbone parameters for each distinct task, posing challenges in storage and deployment. 
Alternatively, Visual Prompt Tuning (VPT) [24], presents an efficient substitute for full fine-tuning within large-scale vision Transformer models. It achieves this by employing a minimal fraction of trainable parameters in the input space while maintaining a frozen backbone model. The objective function for Visual Prompt Tuning is given by: min \u03b8P L(f(X, P; \u03b8P), Y ) (1) where min\u03b8P is the minimization over the prompt parameters P, L is the loss function, f represents the model function with input image X, prompt parameters P, and learnable model parameters \u03b8P as input, and Y is the target output. Visual Prompt Tuning focuses on optimizing LLMs using a small set of parameters, whereas Visual Instruction Tuning (VIT) aims to improve the model\u2019s comprehension of instructions, thereby addressing the model\u2019s shortcomings in specific domains. This type of method aims to enhance the model\u2019s proficiency in following instructions, leveraging the capabilities of the latest foundation models, e.g., Llama [25], and BLIP2 [14]. Instructions serve as guiding constraints, shaping the model\u2019s outputs to conform to specific response characteristics and domainrelevant knowledge. This approach enables human monitoring of the model\u2019s behavior, thereby assuring alignment with the desired outcomes. Moreover, Instruction Tuning is computationally efficient, allowing LLMs to swiftly adapt to particular domains without extensive retraining or architectural alterations. The objective function for Visual Instruction Tuning is given by: min \u03b8tunable L(g(X, I, C; \u03b8tunable), Y ) (2) where min\u03b8tunable denotes the minimization over the tunable parameters \u03b8tunable in the Instruction Tuning Module, L is the loss function, g is the model function with instruction I, image X, other contexts C, and tunable parameters \u03b8tunable, 3 \f\u2026 \u2026 \u2026 Q-Former Fully Connected LLM Emotion Instruction Queries Output \u2026 \u2026 Emotion Instruction Emotion Instruction Queries Q-Former Feed Forward Self Attention Cross Attention Feed Forward (a) Emotion Visual Instruction Data Generation (b) Emotion Visual Instruction Tuning Architecture (c) The Details of Q-Former Module \u2026 \u2026 \u2026 Image Embeddings Emotion Attributes Caption System Prompt GPT 4.0 Categorical Basic Interaction Advanced Interaction Reasoning Emotion Instruction In-context Samples Conversation Image Encoder Input Image Image Embeddings Figure 4. The overall architecture of our proposed method. The Emotion Instruction data generated by (a) will be used for Emotion Visual Instruction Tuning in (b). During Emotion Visual Instruction Tuning, given an input image, the frozen Image Encoder initiates the process by extracting visual features. Emotion Instruction generated by (a) are subsequently interacting with Queries embedding through the learnable Q-Former. This interaction is key to drawing out image features that are relevant to the task at hand. As a result, the frozen LLM receives visual information conducive to instruction following. and Y denotes the target output. The optional context C is not just raw data; it encompasses descriptive or directive information guiding the model on how to process input or which task to execute, e.g., image caption. It\u2019s integral to the model\u2019s understanding and execution of tasks based on specific instructions or guidelines. 3.2. 
GPT-assisted Emotion Visual Instruction Data Generation Previous methodologies commonly employed a consistent template-based set of instructions for every image within a dataset across various specific tasks [9]. For instance, a standard instruction such as \u201cBriefly describe the content of the image\u201d was employed uniformly across all images for Image Captioning. In this way, the model may not be able to adequately capture the unique characteristics of each image. Moreover, this one-size-fits-all approach often leads to suboptimal performance in emotion recognition tasks that require nuanced perception and differentiation of ambiguous emotion classes. Since the topic of Emotion Visual Instruction Tuning is still in its infancy, no benchmarks or guidelines have been proposed so far for constructing emotion instruction data. Based on the recent successes of machine-generated instructions demonstrated in LLaVA [7], our work pioneers the use of existing LLMs to create a pipeline for self-generating emotion instructions. Different from previous template-based and one-size-fits-all instruction data, we propose an instance-wise and LLM-assisted visual emotion instruction data pipeline. This methodology transcends the constraints of manual annotation by employing GPT-4 [29] to generate instance-wise, tailored instruction data that dynamically corresponds to visual content. Prior to the development of instructional data for the visual emotion recognition task, it is imperative to confront a fundamental academic problem: What types of visual clues are pivotal in identifying emotions? This necessitates a careful consideration of the unique characteristics inherent to the task, along with a comprehensive understanding of the potential visual cues associated with human emotions. In this work, we propose a novel visual instruction data mechanism to remove the inherent subjectivity and ambiguity in emotional interpretation. Specifically, we integrate a broad spectrum of emotion attributes across multiple levels: low-level attributes (e.g., brightness, colorfulness), midlevel attributes (e.g., scene type and object class), and highlevel attributes (e.g., facial expressions and human actions), building upon insights from previous work [13]. This comprehensive strategy not only aligns with the intricate nature of emotions but also significantly enhances the model\u2019s capability to interpret and understand visual emotional cues more accurately and holistically. The overall pipeline of our proposed emotion visual instruction data is shown in Fig. 4 (a). For an image Ximg, three types of image-related contexts are essential for GPT4 to generate emotion instruction data: (i) a caption Xc, (ii) an emotion attribute list Xattr, which includes emotion class, brightness, colorfulness, scene type, object class, facial expression, and human action, and (iii) the system prompt, designed to enable GPT-4 to comprehend the specific task 4 \frequirement1. We first manually design a few examples which are used as seed examples for in-context learning to query GPT-4. This operation leverages the model\u2019s ability to extrapolate from given examples, enhancing its understanding and response accuracy based on the principles of few-shot learning [7]. Our generated emotion instruction data includes three types: Categorical, Conversation, and Reasoning. Building upon previous research [7], our generated instruction data adheres to the dialogue format, exemplified in Fig. 5. 
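To make the pipeline concrete, each per-image query can be viewed as one chat-completion call that packs the caption, the emotion attribute list, the system prompt, and a few manually written seed dialogues into a single request. The snippet below is a minimal sketch under that reading, assuming the openai v1 Python client; the function name, the placeholder seed examples, and the example attribute values are illustrative, and the full system prompt is reproduced in the supplementary material.

```python
from openai import OpenAI  # assumes the openai v1 Python client is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = "You are an AI visual assistant, and you are seeing a single image. ..."  # full text in Sec. 7.2

# Manually designed seed dialogues used as in-context examples (placeholders here).
SEED_EXAMPLES = [
    {"role": "user", "content": "Caption: ...\nEmotion attributes: ..."},
    {"role": "assistant", "content": "Question: ...\nAnswer: ..."},
]

def generate_emotion_instructions(caption: str, attributes: dict) -> str:
    """Query GPT-4 with one image's caption and emotion attribute list and
    return generated emotion instruction data as raw text."""
    attr_text = ", ".join(f"{k}: {v}" for k, v in attributes.items())
    user_msg = f"Caption: {caption}\nEmotion attributes: {attr_text}"
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": SYSTEM_PROMPT},
                  *SEED_EXAMPLES,
                  {"role": "user", "content": user_msg}],
        temperature=0.7,
    )
    return response.choices[0].message.content

# Example attribute list following the low/mid/high-level split described above
# (values are invented for illustration).
attrs = {"emotion class": "awe", "brightness": 0.8, "colorfulness": 0.6,
         "scene type": "mountain", "object class": "cliff",
         "facial expression": "none", "human action": "hiking"}
# instructions = generate_emotion_instructions("A hiker stands on a cliff at sunrise.", attrs)
```

The raw completions are then organized into the Categorical, Conversation, and Reasoning dialogue format exemplified in Fig. 5.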
Our strategy for generating emotion instruction data adopts a progressive approach from simple to complex. Initially, for the Categorical data, we transform the associated emotion class of the image into a structured format. This process serves as the foundational component of our emotion instruction data. For the Conversation data, our framework is designed to create dialogues in which the GPT assistant interacts with an inquirer, focusing on the emotion attributes of the image. In this setup, the assistant\u2019s responses are tailored to interpret and describe the image as though it were within its own visual field, thereby providing insights from an observational viewpoint. The scope of questions posed is comprehensive, encompassing the types of objects depicted, their actions, and the dynamics of their interrelationships. The dialogues we generate fall into two categories: (i) Basic Interaction, focusing on the provided emotion attribute list with simple, direct characteristics, and (ii) Advanced Interaction, which builds on the first type to reach greater conversational complexity and sophistication. For the Reasoning data, our approach extends beyond mere visual content, prompting the model to generate indepth reasoning questions. To enhance the dialogue\u2019s credibility and structure, detailed examples are incorporated alongside logical reasoning steps, ensuring that the discourse convincingly captures the intricacies of the visual content. 3.3. Emotion Visual Instruction Tuning After acquiring the emotion visual instruction data as detailed in Sec. 3.2, our goal is to employ this data in enhancing the existing Visual Instruction Tuning model. This enhancement aims to align the LLMs\u2019 existing knowledge with the emotion understanding domain. As shown in Fig. 4 b, we have developed an Emotion Visual Instruction Tuning (EmoVIT) architecture based on InstructBLIP [9]. This architecture specifically leverages its Instruction-aware Q-Former Module, as depicted in Fig. 4 c, for emotion-centric instructional tasks. 1A detailed description of the system prompt is provided in the supplementary materials. Figure 5. The sample of our generated visual emotion instruction data. Specifically, the Instruction-aware Q-Former Module takes in the emotion instruction tokens, queries, and image embeddings as input. The image embeddings are extracted by a frozen image encoder. The learnable queries are initially produced by the pre-trained Q-Former of InstructBLIP. During training, the Instruction-aware module enhances task-specific feature extraction. It does this by integrating emotion instruction and query embeddings within self-attention layers, aligning visual information with the LLM\u2019s instruction-following requirements. Our approach adopts cross-entropy loss, tailoring it to the intricacies of visual emotion recognition tasks, thus ensuring precise and contextually relevant model training outcomes. We note that the data generated by our approach is not confined to a single model but can also be applied to other Visual Instruction Tuning models, such as LLaVA [25]. Notably, when LLaVA is fine-tuned with our data, it exhibits a significant enhancement in emotion recognition capabilities, as detailed in Sec. 4.2. In this way, we demonstrate not only the effectiveness but also the transferability of our generated data, showing its broad applicability and impact. 5 \f4. Experimental Results 4.1. Implemental Details Our implementation is based on the LAVIS library [31]. 
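The training recipe of Sec. 3.3 reduces to freezing the image encoder and the LLM and updating only the instruction-aware Q-Former with a language-modelling cross-entropy loss over the target answer. The following is a minimal PyTorch-style sketch, assuming an InstructBLIP-style model object with visual_encoder, Qformer, and llm_model submodules; the attribute names and the samples dictionary keys are illustrative rather than the exact LAVIS interface.

```python
import torch

def prepare_emovit_for_tuning(model):
    """Freeze the image encoder and the LLM; leave only the instruction-aware
    Q-Former trainable, as described in Sec. 3.3 (submodule names assumed)."""
    for p in model.visual_encoder.parameters():   # frozen image encoder
        p.requires_grad = False
    for p in model.llm_model.parameters():        # frozen LLM
        p.requires_grad = False
    for p in model.Qformer.parameters():          # Q-Former is tuned
        p.requires_grad = True
    return [p for p in model.parameters() if p.requires_grad]

def training_step(model, batch, optimizer):
    """One step of emotion instruction tuning: the model consumes the image,
    the emotion instruction, and the target answer, and returns the
    cross-entropy loss over the answer tokens (interface assumed)."""
    out = model(samples={"image": batch["image"],
                         "text_input": batch["instruction"],
                         "text_output": batch["answer"]})
    loss = out["loss"]
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage sketch (learning rate is illustrative, not the paper's setting):
# optimizer = torch.optim.AdamW(prepare_emovit_for_tuning(model), lr=1e-5)
```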
Our EmoVIT starts with a pre-trained InstructBLIP baseline and proceeds to fine-tune exclusively the Q-Former module, whilst keeping both the image encoder and the language model frozen. The parameters for our training adhere to the default settings established by InstructBLIP. Datasets. We evaluate our framework on ten benchmark datasets annotated under different scenarios and class number, namely EmoSet [13], WEBEmo [11], Emotion6 [34], the Flickr and Instagram (FI) [35], Artphoto [36], IAPS [37], Abstract [36], EmotionROI [38], UnbiasedEmo [11], and OxfordTVG-HIC [33]. Held-in Pretraining. Following previous work [9], we divide our dataset into two categories: held-in for pretraining and held-out for evaluation 2. Considering the EmoSet dataset\u2019s comprehensive inclusion of emotion attributes for each image, it has been chosen as the primary resource for our held-in pretraining phase. Simultaneously, for a broader assessment, we perform held-out evaluations using the test sets from various other datasets. For the generation of emotion visual instruction data, we initially employ the BLIP2 model for image captioning, followed by leveraging the GPT-4 API to generate emotion instruction data. In total, our collection comprises Categorical, Conversation, and Reasoning instruction data, derived from 51,200 unique images. This represents less than 50% of the entire EmoSet. 4.2. Held-out Evaluation As shown in Tab. 1, our proposed methodology exhibits a marked superiority in performance relative to the burgeoning Visual Instruction Tuning Methods. Since they have been pre-trained on dozens of large-scale datasets, it is evident that our generated emotion visual instruction data is particularly effective for emotional understanding Our results signify a paradigm shift, heralding a new era of model training that relies less on explicit supervision and more on the robustness of emotion instruction-driven learning. The Effectiveness of Our Proposed Emotion Visual Instruction Data. As the first to introduce the concept of emotion visual instruction data, our study seeks to evaluate the generalizability of this newly generated instruction data. Our goal is to test its efficacy not only with InstructBLIP but also across other Visual Instruction Tuning model, to understand its broader applicability. As depicted in Fig. 6, we employ two Visual Instruction Tuning models, LLaVA and InstructBLIP, which were fine-tuned on our specially gen2Unlike the setup in InstructBLIP, our dataset exclusively comprises emotion-related content. Consequently, our held-out evaluation does not constitute a strict zero-shot evaluation in the conventional sense. Figure 6. The improvement of our proposed emotion visual instruction tuning data tuning on LLaVA [7] and InstructBLIP [9]. erated emotion visual instruction data. Subsequent testing across five distinct datasets reveals notable improvements in both models, substantiating the efficacy of our generated data. Notably, InstructBLIP demonstrated a more substantial overall enhancement compared to LLaVA. This can be attributed to InstructBLIP\u2019s specialized Instruction-aware Q-Former Module, which adeptly extracts the salient features of our emotion instructions and synergizes them effectively with the corresponding images, thereby yielding improved performance. 4.3. Effectiveness of Different Instruction Data 4.3.1 Ablation Study of Different Instruction Data The ablation study outlined in Tab. 
2 provides a comprehensive analysis of the impact that different instructional data types have on model performance, specifically concerning accuracy metrics on the EmoSet test set. Initially, the model, referred to as InstructBLIP [9], operates without the integration of the three types of instructional data and attains a baseline accuracy of 42.20%. This foundational performance is significantly enhanced with the inclusion of Categorical data, which alone contributes to a substantial increase in accuracy. The introduction of Conversation data further amplifies this effect, underscoring the value of conversational context in improving the model\u2019s predictive capabilities. The addition of Reasoning data notably boosts performance, achieving a peak accuracy of 83.36%. This indicates that the model significantly benefits from the nuanced cues in reasoning, aiding in understanding complex emotional instructions. The gradual improvements with each data type support the idea that a diverse approach to instructional data markedly enhances model comprehension and performance. 6 \fMethod WebEmo FI Emotion6 Abstract ArtPhoto IAPSa EmotionROI EmoSet Number of Classes 25 8 6 8 8 8 6 8 Flanmingo [8] 9.36 14.91 21.67 3.57 17.5 10.13 21.72 29.59 LLaVA [7] 12.55 56.04 49.44 19.54 36.25 42.43 46.46 44.03 BLIP2 [14] 20.10 57.72 50.00 28.57 36.25 39.24 50.51 46.79 InstructBLIP [9] 12.80 37.97 46.11 21.42 26.25 34.18 46.13 42.20 Ours* 21.12 68.09 57.81 32.34 44.90 44.13 53.87 83.36 Table 1. Held-out performance comparison on visual emotion datasets (%). Categorical Conversation Reasoning Accuracy (%) 42.20 \u2713 80.90 (+38.70) \u2713 \u2713 81.95 (+39.75) \u2713 \u2713 \u2713 83.36 (+41.16) Table 2. Ablation study of three types of instruction data. Accuracy (%) on EmoSet test set. 4.3.2 Instruction Sensitivity This work is dedicated to the creation of a varied corpus of visual emotion instruction data, alongside the development of a robust instruction-based model. Our objective is for the model to demonstrate stability, producing consistent results in the face of minor variations in instruction phrasing, provided the core objective of the task persists unchanged. To this end, we employ the Sensitivity evaluation metric, as introduced by [30], to assess the model\u2019s fidelity in generating uniform outcomes irrespective of instructional nuances. We employ two semantically similar instructions as input prompts for the model, testing their impact on the Sensitivity score across three visual emotion datasets for different Visual Instruction Tuning models. The first instruction is: \u201cFrom the given options: cls 1, cls 2, cls 3, etc., identify the emotion that most accurately reflects the image. Ensure your selection is confined to the listed options. Respond in the format: Predicted emotion:\u201d The second one states: \u201cPlease choose the emotion that best corresponds to the image from the following options: cls 1, cls 2, cls 3, etc. (Do not provide answers beyond the provided candidates.) Please reply in the following format: Predict emotion:\u201d As illustrated in Fig. 7, our approach, along with BLIP2, exhibited exceptionally low Sensitivity values, demonstrating robustness in understanding the instructions. Conversely, Flamingo and InstructBLIP displayed a higher degree of sensitivity, indicating a relative susceptibility to variations in instruction wording. 4.4. 
Robustness Given that current emotion recognition datasets often exhibit category imbalances and labeling biases, our aim is Figure 7. The sensitivity score comparison (the lower the better). to evaluate the generalization ability of various learning strategies more impartially. Hence, we selected the UnBiasedEmo test set [11], which is uniquely suited for recognizing intricate emotions, such as those associated with identical objects or scenes, e.g., landscapes, crowds, families, babies, and animals, where the emotional undertones can be particularly subtle and complex. As depicted in Tab. 3, our proposed methodology demonstrates superior performance when benchmarked against conventional supervised emotion recognition techniques, thereby underscoring the efficacy of our approach in more accurately discerning complex emotional contexts. Method Accuracy (%) Direct Learning [11] 71.64 Self-Directed Learning [11] 72.45 Joint Learning [11] 71.64 Curriculum Learning [11] 74.27 Ours* 74.72 Table 3. Performance comparison on UnbiasedEmo dataset. 7 \fFigure 8. The sample of our generated explanation. 4.4.1 Affective Reasoning In the domain of visual emotion recognition, where ambiguity and subjectivity are pervasive, the advent of an interpretable model is of considerable value. Such a model elucidates its cognitive processes, enhancing its trustworthiness and practicality in scenarios requiring a delicate grasp of emotional subtleties. Leveraging Visual Instruction Tuning, our model transcends mere categorization of emotions; it articulates the underlying rationale for its classifications. The executing commands for identifying emotions and elucidating the decision basis is illustrated below: Predicted emotion: [emotion]. Reason: [explanation]. Our model delineates the visual features influencing its determinations, thereby addressing the complexities inherent in discerning and explaining emotion-related nuances. The explanations provide us with visual clues contained within the images, as exemplified in Fig. 8. It provides interpretable visual indicators that inform the model\u2019s outputs, as demonstrated in our example, by disambiguating the often abstract emotional categories. 4.5. Scaling Law Pretraining data. As demonstrated in Tab. 4, there is a clear correlation between the size of the pre-training dataset and improved performance. Consequently, we anticipate that an increase in training data in the future could enhance the effectiveness of Emotion Visual Instruction Tuning. 4.6. Humour Caption Generation The comprehension of humor is intricately linked to the understanding of emotions. Leveraging our generative language model, we conduct a caption generation task without 5% 10% 30% 50% 79.00 81.00 79.34 83.36 Table 4. Ablation study of different portion of pre-training data. Accuracy (%) on EmoSet test set. Figure 9. The sample of our generated humour caption vs human writing humour caption from OxfordTVG-HIC. modifying the model\u2019s architecture, specifically testing the model\u2019s proficiency in generating humorous captions. For this purpose, we select 50 images from the OxfordTVGHIC dataset [33] and generate corresponding captions using our model. Subsequently, the captions produced by our model are compared with manually annotated captions from the dataset in a user study. Thirty participants were asked to vote on which captions were more humorous. Our modelgenerated captions receive 60% of the votes, demonstrating its effective humor generation capabilities. 
One sample is visualized in Fig. 9. 5. Conclusion In our study, drawing upon the distinctive visual cues key to visual emotion recognition, we present a GPT-assisted pipeline specifically designed for generating emotion visual instruction data. The developed EmoVIT model incorporates emotion-specific instructions, leveraging LLMs for enhanced performance. Our comprehensive experiments validate its effectiveness in emotion classification, affective reasoning, and humor understanding. This comparative analysis sets a benchmark for Emotion Visual Instruction Tuning with LLMs, providing valuable insights and directions for future research in this field. 8 \fEmoVIT: Revolutionizing Emotion Insights with Visual Instruction Tuning Supplementary Material Figure 10. The sample of our generated visual emotion instruction data. 6. More Emotion Visual Instruction Data Sample Additional samples from our Emotion Visual Instruction Data collection are presented in Figures 10 and 11. Upon acceptance, the complete dataset will be made available on our project webpage. 7. Implemental Details 7.1. Our Experiment Settings Held-out vs supervised learning. We adopt the terminology held-in and held-out as defined in the work of InstructBLIP [9]. For the held-in, we utilize the training subset of the EmoSet dataset for Emotion Visual Instruction Tuning, with its corresponding test subset serving the purpose of held-in evaluation. The outcomes of this evaluation are depicted in Fig. 1 of the main manuscript. Figure 11. The sample of our generated visual emotion instruction data. In our held-out evaluation, we focus on determining how instruction tuning bolsters the model\u2019s ability to transfer learning to new and unseen data. It\u2019s crucial to highlight that our methodology sets a distinct path from InstructBLIP\u2019s framework. Our dataset is specifically curated with emotion-centric content, presenting unique categories such as cheerfulness and enthrallment found in WEBEmo, which are not typically included in other datasets. Conversely, common emotional categories like anger and fear are shared with other collections, such as FI and Emotion6. This distinctive mix in our dataset implies that our held-out evaluation operates on a cross-domain level, examining the model\u2019s ability to interpret and adapt to diverse emotional contexts not strictly confined to zero-shot scenarios. 7.2. System Prompt The system prompt inputted into ChatGPT for the purpose of gathering instruction-based data is presented below. 1 \fYou are an AI visual assistant, and you are seeing a single image. What you see are provided with one caption and some emotion related attributes, describing the same image you are looking at. Answer all questions as you are seeing the image. The range of brightness is from 0 (darkest) to 1 (brightest), and the range of colorfulness is from 0 (black-and-white) to 1 (the most colorful). Design two questions for a conversation between you and a person asking about this photo. The answers should be in a tone that a visual AI assistant is seeing the image and answering the question. Ask diverse questions and give corresponding answers. Include questions asking about the visual content of the image, including the object types, object actions, relationship among objects, etc. Only include questions that have definite answers: (1) one can see the content in the image that the question asks about and can answer confidently; (2) one can determine confidently from the image that it is not in the image. 
Do not ask any question that cannot be answered confidently. Please answer with the format Question: Answer: Also include one complex question that is relevant to the content in the image, for example, asking about background knowledge of the objects in the image, asking to discuss about events happening in the image, etc. Again, do not ask about uncertain details. Provide detailed answers when answering complex questions. For example, give detailed examples or reasoning steps to make the content more convincing and well-organized. You can include multiple paragraphs if necessary. 7.3. Details of the Q-Former Similar to the approach in InstructBLIP, Q-Former is a lightweight transformer architecture that utilizes a collection of trainable query vectors to distill visual features from a static image encoder. The Q-Former acts as the trainable module to bridge the gap between a frozen image encoder and a frozen LLM. Its role is to curate and present the most pertinent visual information, thereby enabling the LLM to generate the targeted textual output efficiently. Following the default setting, in our experimental setup, we employ 32 distinct queries, each with a dimensionality of 768. 7.4. Sensitivity Formula As mentioned in Sec.4.3.2 in the main paper, we employ the Sensitivity evaluation metric, as introduced by [30], to assess the model\u2019s fidelity in generating uniform outcomes irrespective of instructional nuances. Specifically, for each task t \u2208T, given its associated instances with task instructions: Dt = {(It j, xt j, yt j) \u2208T \u00d7 Xt \u00d7 Y t}N j=1, sensitivity is defined as: Et\u2208T \" \u03c3i\u2208It \u0002 E(x,y)\u2208Dt [L(f\u03b8(i, x), y)] \u0003 \u00b5i\u2208It \u0002 E(x,y)\u2208Dt [L(f\u03b8(i, x), y)] \u0003 # (3) where L denotes the evaluation metric, i.e., emotion classification accuracy, f\u03b8(\u00b7) represents the Visual Instruction Tunign model. The standard deviation and mean of the model\u2019s performance across all instructions are denoted by \u03c3i\u2208It[\u00b7] and \u00b5i\u2208It[\u00b7], respectively. 8. Ablation Study of LLM Model Size In our attempts with the EmoVIT architecture\u2019s LLM, we explored the use of models of varying sizes (as shown in Tab. 5). The results indicated that the smaller model, Vicuna7B, outperformed its larger counterparts. This may be attributed to the limited training data available for our task, which potentially underutilizes the capabilities of larger models. Consequently, we anticipate that an increase in training data in the future could enhance the effectiveness of Emotion Visual Instruction Tuning. Vicuna-7B Vicuna-13B FlanT5XL 83.36 82.21 80.98 Table 5. Ablation study of different LLM model size. Accuracy (%) on EmoSet test set. 9. GPT-4 vs GPT-4 Turbo We conducted a comparative analysis of conversational datasets derived from GPT-4 (the model name is gpt-4 in the API) against the recently released GPT-4 Turbo (the model name is gpt-4-1106-preview in the API). The comparative metrics yielded negligible differences between the two models (83.36% vs 82.96% on EmoSet test set). 10. Adding In-context Samples in Held-out Evaluation Recent LLMs are capable of in-context learning when provided with a limited number of examples in a few-shot manner. In this work, we have also embarked on such an exploration. For instance, Tab. 6 presents the in-context samples utilized within the EmotionROI dataset. 
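Read operationally, the sensitivity of Eq. (3) is the coefficient of variation (standard deviation divided by mean) of the evaluation metric across instruction variants, averaged over tasks. A minimal NumPy sketch with invented accuracy values is shown below.

```python
import numpy as np

def sensitivity(per_task_scores):
    """per_task_scores: list of 1-D arrays; each array holds the model's
    accuracy on one task, evaluated once per instruction variant (Eq. 3)."""
    ratios = []
    for scores in per_task_scores:
        scores = np.asarray(scores, dtype=float)
        ratios.append(scores.std() / scores.mean())  # std / mean over instructions
    return float(np.mean(ratios))                    # expectation over tasks

# Example: one emotion dataset evaluated with the two instruction phrasings of
# Sec. 4.3.2 (accuracy values are invented); a small score means the model is
# robust to instruction wording.
print(sensitivity([[0.83, 0.82]]))
```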
During our heldout evaluation, we incorporated three in-context samples for each category, consisting of a caption paired with its corresponding emotion class. Nevertheless, in our experimental observations, we did not witness any enhancement in performance attributable to furnishing the LLM with these incontext examples. Consequently, our finalized methodology did not incorporate in-context samples during the heldout evaluation phase. 2 \fDescription Emotion Unleashed Fury: A portrait of raw, unfiltered anger etched on the subject\u2019s face. Anger Volcanic Eruption in Human Form: A Portrait of Unrestrained Fury. Anger An explosive portrait of raw fury, where every clenched jaw and furrowed brow tells a tale of unchecked anger. Anger Face contorted in a grimace of pure disgust, as if they just tasted a year-old lemon. Disgust Caught in the throes of revulsion, a face grimaces as if it just tasted the world\u2019s sourest lemon. Disgust Picture Perfect: A Masterclass in the Art of Disgust Expression Disgust A chilling moment of pure terror, etched in every detail. Fear A chilling moment of pure terror etched on the face, a stark embodiment of fear. Fear someone with a wide smile, a group Joy Overflowing with joy, like a puppy at a park! Joy A poignant portrait of sorrow, where teardrops are the silent language of grief. Sadness An evocative portrayal of sorrow, with shadows seemingly swallowing the light, reflecting the heavy weight of sadness. Sadness An abstract portrayal of solitude, where the vivid hues of melancholy paint a poignant picture of sadness. Sadness Caught in a moment of pure astonishment, eyes wide and mouth agape. Surprise Caught in the headlights of astonishment: a jaw-dropping moment of surprise! Surprise Caught in the Act! A person\u2019s wide-eyed gasp of sheer surprise. Surprise Table 6. Illustrative Examples of Emotion Descriptors in Visual Data 11. Limitation and future work Due to the reliance on the GPT-API and cost considerations, our held-in pretraining phase utilized less than 50% of the EmoSet dataset. Despite outperforming other methods, we recognize the potential for significant improvements in future work by expanding the data scale. We anticipate that advancements in visual emotion understanding will parallel increases in both data and model scale. 3", + "additional_info": [ + { + "url": "http://arxiv.org/abs/2402.12869v2", + "title": "Exploring the Impact of Table-to-Text Methods on Augmenting LLM-based Question Answering with Domain Hybrid Data", + "abstract": "Augmenting Large Language Models (LLMs) for Question Answering (QA) with\ndomain specific data has attracted wide attention. However, domain data often\nexists in a hybrid format, including text and semi-structured tables, posing\nchallenges for the seamless integration of information. Table-to-Text\nGeneration is a promising solution by facilitating the transformation of hybrid\ndata into a uniformly text-formatted corpus. Although this technique has been\nwidely studied by the NLP community, there is currently no comparative analysis\non how corpora generated by different table-to-text methods affect the\nperformance of QA systems. In this paper, we address this research gap in two\nsteps. First, we innovatively integrate table-to-text generation into the\nframework of enhancing LLM-based QA systems with domain hybrid data. 
Then, we\nutilize this framework in real-world industrial data to conduct extensive\nexperiments on two types of QA systems (DSFT and RAG frameworks) with four\nrepresentative methods: Markdown format, Template serialization, TPLM-based\nmethod, and LLM-based method. Based on the experimental results, we draw some\nempirical findings and explore the underlying reasons behind the success of\nsome methods. We hope the findings of this work will provide a valuable\nreference for the academic and industrial communities in developing robust QA\nsystems.", + "authors": "Dehai Min, Nan Hu, Rihui Jin, Nuo Lin, Jiaoyan Chen, Yongrui Chen, Yu Li, Guilin Qi, Yun Li, Nijun Li, Qianren Wang", + "published": "2024-02-20", + "updated": "2024-04-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "LLM AND Reasoning", + "gt": "Exploring the Impact of Table-to-Text Methods on Augmenting LLM-based Question Answering with Domain Hybrid Data", + "main_content": "Introduction Enhancing the performance of Large Language Models (LLMs) in domain-specific Question Answering (QA) has been a focus of research, predominantly employing two key approaches (Ling et al., 2023; Wang et al., 2023a): Domain-Specific Fine-Tuning (DSFT) which involves training LLMs on the domain-specific corpus (Gururangan et al., * Equal Contributions. \u2020 Corresponding author. 2020; Wu et al., 2023), and Retrieval-Augmented Generation (RAG) which utilizes a domain-specific corpus as an external knowledge base (Lewis et al., 2020b). These approaches, leveraging the inherent text processing strengths of LLMs, have been widely adopted in text-only scenarios, yielding significant improvements (Zhao et al., 2023a). However, real-world data in many domains typically exists in a hybrid format, comprising not only text but also substantial volumes of semi-structured tables, as observed in e.g., scientific literature and medical reports (Chen et al., 2020c; Zhu et al., 2021). These tables frequently appear alongside text within the same document, providing semantically supplementary or complementary information crucial for a comprehensive understanding of the content (Chen et al., 2020a). In exploring the potential of leveraging hybrid data to enhance the performance of LLMs, it is crucial to effectively integrate these data, ensuring the coexistence of text and tables. The current methods for handling the heterogeneity of text and tables have significant drawbacks: 1) Directly flattening tables by concatenating cells row by row not only results in the loss of structural information embedded in the original table but also severs the informational links between cells (Sui et al., 2023; Xie et al., 2022). 2) Mapping text and tables to different vector spaces separately and then integrating them, not only increases complexity but also disrupts the semantic connection between the two types of data (Li et al., 2021; Huang et al., 2022). One promising solution is table-to-text generation (Luo et al., 2022; Cheng et al., 2022), which aims to generate natural language statements that faithfully describe the information in the provided table. Through this, we can transform hybrid data into a unified natural language representation that is more suitable for use by LLMs, while also preserving the important information from the tables and the semantic connections between the data. 
AlarXiv:2402.12869v2 [cs.CL] 9 Apr 2024 \fthough table-to-text generation has been widely studied by the NLP community, there is currently no comparative analysis on how corpora generated by different table-to-text methods affect the performance of domain-specific QA systems. In this work, we address this research gap by two steps. First, we innovatively integrate tableto-text generation into the framework of enhancing LLM-based QA systems with domain hybrid data. Then, we utilize this framework to conduct extensive experiments on two types of QA systems (DSFT and RAG paradigms) with four representative table-to-text methods. We choose the following four strategies: 1) Markdown format; 2) Template serialization; 3) TPLM-based method; 4) LLMbased method. These strategies differ in complexity and underlying technology. The Markdown and Template serialization offer simplicity, while the TPLM-based and LLM-based methods leverage the capabilities of advanced language models to generate more nuanced text. In terms of implementation, we collect a realworld hybrid dataset called ICT-DATA, by extracting text and tables from numerous documents about Information and Communication Technology (ICT) products. It is important to note that the text contained in tables accounts for approximately 18% of the total content in ICT-DATA (based on word count statistics). We employ different table-to-text methods to process the tables in ICT-DATA, obtaining different ICT corpora. These corpora are then utilized to build QA systems. Moreover, we create a benchmark dataset named ICTQA, which consists of QA pairs based on the knowledge of ICT-DATA. This dataset is particularly suitable for evaluating enhanced LLMs, as it includes some industry-specific knowledge not covered in the general LLMs training stage. To our knowledge, our research is the first to comprehensively compare different table-to-text strategies on LLM-based QA systems enhanced by domain hybrid data. Our main findings are as follows: \u2022 Table-to-text methods significantly impact the performance of QA systems, with relative score differences ranging from 2.8% to 9.0% in human evaluation and 4.8% to 16% in GPT-4 evaluation. In two systems, selecting the appropriate method can yield considerable benefits. \u2022 In the DSFT paradigm, LLM-based and TPLMbased consistently outperform others across various model settings, demonstrating their superiority. In the RAG paradigm, while the LLMbased method still performs excellently, the Markdown has shown unexpected effectiveness. \u2022 The varying frequency of domain-specific terms and verbs produced by these methods, alongside the differing quality of semantic representations in the generated text chunks, which appear to be pivotal factors influencing performance disparities across the two systems. 2 Table-to-Text Generation Table-to-text generation (Parikh et al., 2020; Chen et al., 2020b; Cheng et al., 2022) aims to create natural language descriptions from semi-structured tabular data, such as web tables. As shown in Figure 1, we apply four representative table-totext methods to textualize the tables in ICT-DATA, forming four different corpora. Formally: Let Fi : Table \u2192Text represent four table-to-text functions for i = 1, 2, 3, 4. With the original ICT-DATA D = {Tab, Text}, each Fi converts tables Tab into text. The resulting ICT Corpora Ci are formed by combining these texts with Text: Ci = Fi(Tab) \u222aText, i = 1, 2, 3, 4 We next provide a detailed introduction of these four methods. 
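Operationally, building each corpus Ci is a single map-and-merge step per method. The sketch below illustrates this with only the Markdown converter implemented; the document text, the example table, and the function names are invented for illustration, and the Template, TPLM-based, and LLM-based converters would be substituted in the same way.

```python
from typing import Callable, Iterable, List

def table_to_markdown(table: List[List[str]]) -> str:
    """Script-only Markdown serialisation: header row, separator, body rows."""
    header, *rows = table
    lines = ["| " + " | ".join(header) + " |",
             "| " + " | ".join("---" for _ in header) + " |"]
    lines += ["| " + " | ".join(r) + " |" for r in rows]
    return "\n".join(lines)

def build_corpus(texts: Iterable[str],
                 tables: Iterable[List[List[str]]],
                 table_to_text: Callable[[List[List[str]]], str]) -> List[str]:
    """C_i = F_i(Tab) ∪ Text: convert every table with one method F_i and merge
    the generated statements with the original document text."""
    return list(texts) + [table_to_text(t) for t in tables]

# Toy example (data invented); the same call is repeated with the Template,
# TPLM-based (MVP), and LLM-based (ChatGPT) converters to obtain C_2..C_4.
doc_texts = ["Product X supports dual-band Wi-Fi."]
doc_tables = [[["Port", "Speed"], ["eth0", "10 Gbps"], ["eth1", "25 Gbps"]]]
corpus_markdown = build_corpus(doc_texts, doc_tables, table_to_markdown)
```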
Table 1 provides a comparative analysis of these methods in terms of their resource requirements, processing speeds, and text diversity. \u2022 Markdown format: A straightforward method to represent tables in Markdown format. It does not involve model training and can be rapidly processed via scripts without manual intervention. \u2022 Template serialization: This method uses a set of templates designed based on table features for textualization (Li et al., 2023; Ye et al., 2019). It achieves slightly higher diversity in the generated text compared to the Markdown method, attributed to the use of multiple pre-prepared templates to accommodate different types of tables, which requires some manual involvement. \u2022 TPLM-based method: This method involves finetuning Traditional Pre-trained Language Models (TPLMs), such as T5 (Raffel et al., 2020) Method Resource Speed Diversity Markdown CPU Fast Low Template CPU Fast Moderate TPLM-based GPU Moderate High LLM-based GPU or API Low Very High Table 1: Comparison of table-to-text methods: resource usage, generation speed and diversity of generated text. \fFour Different Table-to-Text Generation Methods Text Tables Domain Documents Four Different Domain Corpora Merge text with generated text from tables Four Different Domain Corpora Merge Text Merge Text Figure 1: Illustration of four domain corpora generation process. Different table-to-text methods are applied to tables of domain documents, generating different text. These generated texts are then merged with the original document texts, yielding different domain corpora. and BART (Lewis et al., 2020a), on specific table-to-text generation task datasets (Liu et al., 2022). In this paper, we utilize the MVP model (Tang et al., 2023), which initially pre-trains the BART model on numerous natural language generation datasets, followed by fine-tuning on various cross-domain table-to-text datasets. It allows customized adjustment of the output through fine-tuning, offering higher flexibility and domain adaptability, while requiring more computational resources. \u2022 LLM-based method: Recent endeavors employing LLMs for this task have drawn significant attention (Bian et al., 2023). Impressively, Zhao et al. (2023b) demonstrate that GPT-* models often outperform the best-performing fine-tuned models. We refer to their findings and utilize ChatGPT in a one-shot setting in our work. Similar to TPLM-based methods, this approach can be custom-tailored using In-Context Learning. Moreover, using the APIs of certain proprietary LLMs might pose risks of domain data leakage. Some examples of table-to-text, along with the specific templates and prompts for ChatGPT used in this paper, can be found in Appendix B. 3 Building LLM-based QA Systems with Domain Corpora We will introduce separately how two LLM-based QA systems utilize these corpora. Their framework overview can be viewed in Figure 2. Domain-Specific Fine-Tuning. In this approach, we first pre-train the LLM on the ICT corpus using next-token prediction (Radford et al., 2018), enabling the model to incrementally learn domain knowledge. Subsequently, we adapt the model to 2024/1/7 3 Domain Corpus LLM Pre-trained Domain LLM Domainspecific QA LLM Domain QA Instructions Question Answer Domainspecific QA LLM Offline Online (a) Domain-Specific Fine-Tuning QA system 4 Domain QA Instructions LLM Domain Corpus Question Relevant Information Answer LLM Online (b) Retrieval-Augmented Generation QA system Figure 2: Framework of domain-enhanced QA systems. 
the QA task through instruction tuning (Ouyang et al., 2022). Formally, an original LLM M, is pre-trained on each ICT Corpus Ci, to obtain an updated foundation model M\u2032 i: M\u2032 i = Pre-Train(M, Ci), i = 1, 2, 3, 4 The updated models are then further trained on the same instruction set I tailored for the QA task, resulting in the final QA oriented models MQA i : MQA i = FineTune(M\u2032 i, I), i = 1, 2, 3, 4 Retrieval-Augmented Generation. In this paradigm, we adopt the framework proposed by LangChain (Chase, 2022) with the Dense Passage Retriever (DPR) method (Karpukhin et al., 2020), which consists of a multi-step process: 1) Splitting the large-sized Corpus Ci into smaller chunks {pj}Ci; 2) Encoding each text chunk pj into a ddimensional vector by an encoder EP (\u00b7), which \fcaptures its semantic essence; 3) Building an indexed Vector Store for these vectors, optimizing the storage for efficient retrieval; 4) For each query Q, retrieving the K most relevant text chunks, P ={pk}K k=1; 5) Using both the query Q and the retrieved prompts P to generate the final answer with the LLM. 4 Dataset and Evaluation Metrics 4.1 Evaluation Dataset ICT-DATA. We collect ICT-DATA based on 170 English technical documents related to ICT products. Each product document consists of tables and text, whose contents include product descriptions, configuration guides, terms, and definitions, etc. The total storage size is approximately 6GB. Moreover, the number of words in the table data accounts for about 18% of the total number of words in the dataset. In Appendix A.2, we provide detailed statistics and the preprocessing methods used for the table data. ICTQA. We create the ICTQA dataset to evaluate the performance of domain QA systems, by collecting 9,000 questions with long-form answers from the actual ICT product technical support QA platform. All the answers are written by experts based on product documents. We manually select 500 questions as the test set, whose answers involve knowledge from both tables and text. The remaining QA pairs are used as the training set for the instruction fine-tuning phase in the DSFT paradigm. We show statistics and some examples in Appendix A.1. 4.2 Evaluation Metrics To evaluate the model\u2019s responses, we employ both automated and manual evaluation methods. Automated Evaluation Metrics. Given that traditional lexical-overlap-based metrics (such as BLEU and ROUGE) are inadequate for evaluating the quality of long-form answers generated by LLMs (Krishna et al., 2021; Kamalloo et al., 2023), we use GPT-4 as an evaluator with a demonstration setting, scoring responses based on their similarity to the golden answer (Liu et al., 2023). The score ranges from 0 to 5 with discrete values; 0 indicates incoherent answers with repeated fields or responses like \u201cI don\u2019t know the answer\u201d, 1 represents minimal similarity to the golden answer, and 5 denotes an accurate answer. Human Evaluation. Given the limitations in evaluating long-form answers using existing automated metrics (Wang et al., 2023b; Kamalloo et al., 2023), three evaluators with domain knowledge are asked to score responses based on the helpfulness and similarity to the golden answer, using the same scoring criteria with a range of 0 to 5 as the GPT-4 evaluator. For fairness and to eliminate potential bias, responses are presented anonymously to both the GPT-4 and human evaluators. The full prompt, evaluation setup for human and scoring criteria are detailed in Appendix D. 
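Before turning to the experimental setup, steps 1-4 of the retrieval-augmented pipeline from Section 3 can be sketched directly with a BGE encoder and a FAISS index, in line with the configuration used in Section 5; the chunk contents, the exact model identifier, and the function name below are illustrative assumptions rather than the paper's implementation.

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

# Step 1: sentence-aligned chunks of one corpus C_i (contents invented here).
chunks = [
    "Product X supports 25 Gbps uplinks on ports eth0 and eth1.",
    "The device operates between -5 and 45 degrees Celsius.",
    "Firmware upgrades are performed through the web management console.",
]

# Steps 2-3: encode chunks with a BGE embedding model and index them.
encoder = SentenceTransformer("BAAI/bge-large-en")  # checkpoint name assumed
chunk_vecs = encoder.encode(chunks, normalize_embeddings=True)
index = faiss.IndexFlatIP(chunk_vecs.shape[1])      # inner product = cosine after normalisation
index.add(np.asarray(chunk_vecs, dtype="float32"))

def retrieve(query: str, k: int = 3) -> List[str] if False else list:
    """Step 4: return the top-k chunks most similar to the query."""
    q = encoder.encode([query], normalize_embeddings=True)
    _, idx = index.search(np.asarray(q, dtype="float32"), k)
    return [chunks[i] for i in idx[0]]

# Step 5: the retrieved chunks are concatenated with the question and passed
# to the reader LLM to generate the final answer.
top_chunks = retrieve("What uplink speed does Product X support?")
```

The top-3 retrieved chunks and the question are then fed to the LLM reader, which generates the final answer.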
5 Experimental Setup QA Systems of the DSFT Paradigm. Within the DSFT paradigm, we utilize Meta\u2019s OPT (1.3B to 13B) (Zhang et al., 2022) and Llama2-base (7B, 13B) (Touvron et al., 2023) as foundation models. The OPT models offer variable sizes to enhance robustness. To mitigate training costs, we employ the QLoRA (Dettmers et al., 2023) strategy for pretraining and instruction fine-tuning. The instruction template can be found in Appendix A.3. QA Systems of the RAG Paradigm. We use the Llama2-chat models (7B, 13B, and 70B) and GPT3.5-turbo for inference. We divide the corpus into smaller chunks, ensuring the integrity of sentences and keeping their lengths below 3000 characters. Subsequently, text chunks are vectorized using the BGE embedding model (Zhang et al., 2023). We utilize the FAISS library (Johnson et al., 2021) to retrieve the vectors of the top-3 relevant text chunks based on similarity. These chunks are input to the LLM with the corresponding questions for answering through the RAG-Chain from LangChain (Chase, 2022). Fair Comparison. To maintain consistency and control variables, all models are trained or used under the same settings on four different corpora. Detailed training parameters and GPU costs are available in Appendix C. 6 Results In the following subsections, we will discuss three research questions regarding our study. 6.1 RQ1: How do these methods affect the performance of QA systems? Table 2 shows the average scores for different QA system setups on the ICTQA test set. We can see \fMetrics Table-to-Text Domain-Specific Fine-Tuning Retrieval-Augmented Generation Method OPT-1.3B OPT-2.7B OPT-6.7B OPT-13B Llama2-7B Llama2-13B GPT-3.5-turbo Llama2-7B Llama2-13B Llama2-70B Human Markdown 2.05 2.41 2.38 2.51 2.82 3.05 3.29 3.72 3.98 3.94 Template 2.04 2.40 2.26 2.47 2.82 3.04 3.36 3.44 3.96 3.76 Eval. TPLM-based 2.12 2.43 2.43 2.58 3.20 3.13 3.26 3.27 3.92 3.64 LLM-based 2.18 2.57 2.51 2.62 2.96 3.19 3.62 3.71 4.26 4.09 RSD(%) 2.80 3.40 5.00 3.00 7.60 3.00 7.20 9.00 6.80 9.00 GPT-4 Markdown 1.74 2.16 2.27 2.25 2.7 3.06 3.28 3.66 3.67 3.74 Template 1.81 2.22 2.39 2.34 2.84 3.08 3.27 3.06 3.38 3.37 Eval. TPLM-based 2.33 2.46 2.45 2.53 3.20 3.19 3.28 2.9 3.41 3.30 LLM-based 2.57 2.69 2.73 2.86 3.06 3.30 3.64 3.59 3.69 3.54 RSD(%) 16.60 10.60 9.20 12.20 10.00 4.80 7.40 15.20 6.20 8.80 Table 2: The average scores from Human Evaluation and GPT-4 Evaluation of the QA systems with four representative table-to-text methods. In each setting, the best result is shown in bold, and the second-best result is underlined. Relative Score Difference (RSD) is calculated using the formula (Highest Score \u2212Lowest Score)/5. that there are significant differences in the performance of the two types of QA systems enhanced by corpora generated from different table-to-text methods. Their Relative Score Differences range from 2.8% to 9.0% in human evaluation and from 4.8% to 16% in GPT4 evaluation. For a more detailed observation, we present the score distribution from human evaluation of the DSFT QA models based on OPT-6.7B in Figure 3. From this figure, we can observe significant differences in score distribution among different QA models, reflecting their performance variations. From Table 2, we note that in the DSFT paradigm, both TPLM-based and LLM-based methods, which utilize language models for table-to-text generation, perform well across different models. Particularly, the LLMbased method shows the best performance in many models. 
6 Results

In the following subsections, we discuss three research questions arising from our study.

6.1 RQ1: How do these methods affect the performance of QA systems?

Table 2 shows the average scores for different QA system setups on the ICTQA test set.

Table 2: The average scores from Human Evaluation and GPT-4 Evaluation of the QA systems with four representative table-to-text methods. The first six columns are DSFT systems; the last four are RAG systems. The Relative Score Difference (RSD) is calculated as (Highest Score - Lowest Score) / 5.

Human Evaluation
  Method     | OPT-1.3B | OPT-2.7B | OPT-6.7B | OPT-13B | Llama2-7B | Llama2-13B | GPT-3.5-turbo | Llama2-7B | Llama2-13B | Llama2-70B
  Markdown   | 2.05 | 2.41 | 2.38 | 2.51 | 2.82 | 3.05 | 3.29 | 3.72 | 3.98 | 3.94
  Template   | 2.04 | 2.40 | 2.26 | 2.47 | 2.82 | 3.04 | 3.36 | 3.44 | 3.96 | 3.76
  TPLM-based | 2.12 | 2.43 | 2.43 | 2.58 | 3.20 | 3.13 | 3.26 | 3.27 | 3.92 | 3.64
  LLM-based  | 2.18 | 2.57 | 2.51 | 2.62 | 2.96 | 3.19 | 3.62 | 3.71 | 4.26 | 4.09
  RSD (%)    | 2.80 | 3.40 | 5.00 | 3.00 | 7.60 | 3.00 | 7.20 | 9.00 | 6.80 | 9.00

GPT-4 Evaluation
  Markdown   | 1.74 | 2.16 | 2.27 | 2.25 | 2.70 | 3.06 | 3.28 | 3.66 | 3.67 | 3.74
  Template   | 1.81 | 2.22 | 2.39 | 2.34 | 2.84 | 3.08 | 3.27 | 3.06 | 3.38 | 3.37
  TPLM-based | 2.33 | 2.46 | 2.45 | 2.53 | 3.20 | 3.19 | 3.28 | 2.90 | 3.41 | 3.30
  LLM-based  | 2.57 | 2.69 | 2.73 | 2.86 | 3.06 | 3.30 | 3.64 | 3.59 | 3.69 | 3.54
  RSD (%)    | 16.60 | 10.60 | 9.20 | 12.20 | 10.00 | 4.80 | 7.40 | 15.20 | 6.20 | 8.80

We can see that there are significant differences in the performance of the two types of QA systems enhanced by corpora generated with different table-to-text methods: their Relative Score Differences range from 2.8% to 9.0% in the human evaluation and from 4.8% to 16.6% in the GPT-4 evaluation. For a more detailed observation, we present the score distribution from the human evaluation of the DSFT QA models based on OPT-6.7B in Figure 3.

[Figure 3: The score distribution from human evaluation for the DSFT QA systems based on OPT-6.7B; the stacked bars show, for each table-to-text method, the percentage of responses receiving each score from 0 to 5.]

From this figure, we can observe significant differences in the score distributions among the different QA models, reflecting their performance variations. From Table 2, we note that in the DSFT paradigm, both the TPLM-based and LLM-based methods, which utilize language models for table-to-text generation, perform well across different models. In particular, the LLM-based method achieves the best performance with many models. On the other hand, the RAG paradigm provides a different observation. While the LLM-based method continues to exhibit excellent performance, the Markdown format shows a significant and unexpected performance improvement in the RAG paradigm compared to DSFT, even performing best with some models. To further illustrate these findings, we show pairwise comparison results for several QA systems in Figure 4. We can clearly observe that the methods with higher average scores also have a higher probability of achieving better scores on each question. These observations underscore the necessity of choosing an appropriate method for processing table data when building domain-specific QA systems.

[Figure 4: Comparison of human evaluation scores between QA models using different table-to-text methods. 'A vs. B win' indicates the percentage of test set instances where Model A's score surpasses Model B's. Panels: (a) OPT-6.7B in the DSFT paradigm; (b) Llama2-7B in the DSFT paradigm; (c) Llama2-70B in the RAG paradigm. Each panel shows win/tie/loss percentages for all pairwise method comparisons.]

6.2 RQ2: What are the potential reasons for their different performances?

Since DSFT and RAG systems utilize the domain corpora in different ways, we discuss them separately in this section.

For the DSFT paradigm. Inspired by the findings of (Biderman et al., 2023; Razeghi et al., 2022; Elazar et al., 2023), which suggest a correlation and causal relationship between the ability of LLMs to answer factual questions and the frequency of salient entities in their pre-training corpora, we also observe that different table-to-text methods have inconsistent preferences for domain verbs when describing tables. Following the approach of (Zevallos et al., 2023; Wang et al., 2023c), we extract domain term sets and related verb sets from the QA pairs in the ICTQA test set. We then calculate the absolute frequency of these terms and verbs as they appear in the corpora generated by the different table-to-text methods; a counting sketch is shown below.

Table 3: Absolute frequency of verbs and terms contained in the corpora C_i generated by different methods.
  Freq (k) | C1 (Markdown) | C2 (Template) | C3 (TPLM-based) | C4 (LLM-based)
  Terms    | 821           | 1040          | 2358            | 2254
  Verbs    | 313           | 315           | 682             | 1207

In Table 3, we can clearly see significant differences in these frequencies across the different corpora. For example, the LLM-based method shows a term frequency more than twice that of the Template method, with verb frequency quadrupling. This is because LLM-based methods tend to supplement the subject with the domain entity corresponding to the attribute when describing tables, and they exhibit greater diversity in verbs. In contrast, Template methods use more pronouns, such as 'it', and monotonous predicates (usually 'be' verbs).
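A sketch of the frequency count behind Table 3 follows; the tokenization rule and the placeholder term/verb sets are illustrative assumptions, standing in for the extraction procedure cited above.

```python
# Illustrative frequency count for Table 3: absolute occurrences of
# domain terms and verbs in each table-to-text corpus. The term/verb
# sets would come from the ICTQA test set per the cited approach.
import re
from collections import Counter

domain_terms = {"vlan", "firmware", "throughput"}   # placeholder term set
domain_verbs = {"configure", "enable", "restrict"}  # placeholder verb set

def absolute_frequency(corpus_text: str, vocab: set[str]) -> int:
    tokens = re.findall(r"[a-z0-9'-]+", corpus_text.lower())
    counts = Counter(tokens)
    return sum(counts[w] for w in vocab)

corpora = {"Markdown": "...", "Template": "...",
           "TPLM-based": "...", "LLM-based": "..."}
for method, text in corpora.items():
    print(method, absolute_frequency(text, domain_terms),
          absolute_frequency(text, domain_verbs))
```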
By comparing these frequency rankings with the performance shown in Table 2, we can observe a positive correlation between them: the methods with higher frequencies, especially the TPLM-based and LLM-based methods, correspond to superior QA capabilities in the DSFT systems.

For the RAG paradigm. Under the same LLM reader setup, retrieval accuracy in the semantic space crucially impacts RAG performance (Ma et al., 2023). The retrieval process selects the vectorized chunks with the highest similarity scores to the query vector. To investigate the impact of the different methods on retrieval effectiveness, we use t-SNE (Van der Maaten and Hinton, 2008) to visualize the clustering of a query and its related chunks in the semantic space in Figure 5. It can be clearly seen that the chunks generated by the LLM-based and Markdown methods, which perform well in Table 2, are closer to the query in the semantic space. This makes the chunks related to the query more likely to be retrieved, thereby improving the system's performance. It also suggests that in the RAG framework with the DPR method, the texts generated by these methods have more retrieval-friendly semantic representations and better alignment between queries and documents (a code sketch of this visualization appears below).

[Figure 5: A t-SNE visualization of chunk clusters in the embedding space of the RAG system. 'X Chunks' represents chunks related to the query (red star) from the corpus generated by table-to-text method X.]

Table 4: The average length of text generated by different methods for each table.
  Method   | Markdown | Template | TPLM-based | LLM-based
  Text Len | 998      | 1259     | 1138       | 897

6.3 RQ3: Are there practical suggestions for choosing table-to-text methods?

Through the analysis of RQ1 and RQ2, we know that the LLM-based strategy with ChatGPT is outstanding and reliable in both frameworks. In case its drawbacks mentioned in Section 2 are unacceptable, the TPLM-based strategy (i.e., selecting a well-tuned table-to-text model) is a good alternative in the DSFT paradigm. In the RAG paradigm, the simple and easy-to-use Markdown strategy is also a viable substitute. Additionally, although RAG systems using these four methods significantly outperform the DSFT systems, building a vector retrieval library demands substantial memory resources. Therefore, referring to Table 4, choosing methods that generate more concise texts, such as the LLM-based and Markdown strategies, is a wise decision.
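A minimal sketch of the t-SNE inspection behind Figure 5 follows; it assumes chunk and query vectors from the BGE encoder shown earlier, and the perplexity value is an arbitrary illustrative choice.

```python
# Illustrative t-SNE projection of query and chunk embeddings,
# as used to produce Figure 5. `vecs_by_method` maps each
# table-to-text method to its chunk embeddings (assumed available).
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

vecs_by_method = {"Markdown": np.random.rand(50, 1024),   # placeholders
                  "LLM-based": np.random.rand(50, 1024)}
query_vec = np.random.rand(1, 1024)

labels, stacked = [], [query_vec]
for name, v in vecs_by_method.items():
    labels += [name] * len(v)
    stacked.append(v)
X2 = TSNE(n_components=2, perplexity=15).fit_transform(np.vstack(stacked))

plt.scatter(X2[0, 0], X2[0, 1], marker="*", c="red", label="Query")
start = 1
for name, v in vecs_by_method.items():
    end = start + len(v)
    plt.scatter(X2[start:end, 0], X2[start:end, 1], s=8, label=name)
    start = end
plt.legend(); plt.show()
```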
6.4 Additional discussion on experimental results

As shown in Table 2, under the ICT dataset and the experimental setup of this study, the RAG method outperforms the DSFT method on the Llama2 models, demonstrating that RAG performs excellently as a lower-cost method. We attribute this result to two main reasons: 1) the ICT data used in this study covers dense domain knowledge, and it remains challenging to adapt an LLM to such complex domain data through incremental pre-training; 2) as the statistical analysis in Appendix A.1 shows, most of the questions in ICTQA are quizzes on the knowledge of product manuals. In this scenario, existing dense vector retrievers achieve high recall accuracy. The studies of Gupta et al. (2024) and Soudani et al. (2024) conducted detailed experiments on the choice between fine-tuning and RAG for agricultural domain data and for less-popular-knowledge scenarios, respectively; our experimental results further validate their viewpoints. It is also worth noting that in this study, the bge-large-en embedding model (Zhang et al., 2023) embeds text chunks into 1024-dimensional vectors, and during the retrieval of relevant chunks for the questions, the peak running memory requirement is approximately 280 GB. Another interesting experimental result is that GPT-3.5-turbo performs worse than the Llama2 family in the RAG paradigm. We manually inspected the QA cases and found that GPT-3.5-turbo has a significantly higher probability of outputting "I don't know the answer.", even when the retriever finds text chunks containing the correct answer.

7 Related Work

7.1 Domain-Augmented Large Language Models

To enhance the capabilities of LLMs in domain-specific tasks, some works develop LLMs through incremental training on an extensive domain corpus, inheriting the benefits of both the emergent abilities of LLMs and domain-specific knowledge (Luo et al., 2023; Huang et al., 2023). This approach yields significant results, but it demands substantial computational resources and incurs high costs (Wang et al., 2023a). To overcome this difficulty, prompt-based solutions that do not require updating model parameters have been proposed; they retrieve relevant domain information from external knowledge bases before answering questions with LLMs (Gao et al., 2023; Wang et al., 2023d; Xu et al., 2023).

7.2 Question Answering over Hybrid Data

Some works study QA tasks on hybrid data that contain both tables and text (Zhu et al., 2021; Chen et al., 2020c,a). Popular approaches often involve designing a complex system with independent modules that process text and tables separately; the information from these two modules is then merged and fed into a language model to generate answers (Zhong et al., 2022). Additionally, some of these methods not only require annotations of metadata identifying the text and tables relevant to a question, but also rely on the formulation of executable languages, such as SQL or SPARQL, to access tables (Nan et al., 2022; Li et al., 2021). These executable languages often make strict assumptions about the structure of the tables. Such limitations make these approaches ill-suited for real-world LLM-based domain QA systems. Therefore, this study does not compare its results against these baseline models in the experiments.

8 Conclusion

This paper studies the impact of different table-to-text methods on LLM-based QA systems enhanced by domain hybrid data. Specifically, we meticulously compared four representative methods: Markdown formatting, Template serialization, TPLM-based, and LLM-based approaches. Through experiments, we show the superiority of the LLM-based and TPLM-based methods in the DSFT framework, and the excellence of the LLM-based and Markdown methods in the RAG framework. A key discovery is the varying frequency of domain-specific terms and verbs produced by these methods, alongside the differing quality of semantic representations in the generated text chunks, which appear to be pivotal factors behind the performance disparities across the two systems.
These insights not only shed light on the nuances of table-to-text generation methods but also have profound implications for the enhancement of LLMs. Furthermore, they offer practical guidance for tailoring domain-specific QA systems to meet particular needs. Acknowledgements This work is partially supported by the National Natural Science Foundation of China under No. U21A20488. We thank the Big Data Computing Center of Southeast University for providing facility support for the numerical calculations in this paper." + }, + { + "url": "http://arxiv.org/abs/2403.00994v1", + "title": "Leveraging Prompt-Based Large Language Models: Predicting Pandemic Health Decisions and Outcomes Through Social Media Language", + "abstract": "We introduce a multi-step reasoning framework using prompt-based LLMs to\nexamine the relationship between social media language patterns and trends in\nnational health outcomes. Grounded in fuzzy-trace theory, which emphasizes the\nimportance of gists of causal coherence in effective health communication, we\nintroduce Role-Based Incremental Coaching (RBIC), a prompt-based LLM framework,\nto identify gists at-scale. Using RBIC, we systematically extract gists from\nsubreddit discussions opposing COVID-19 health measures (Study 1). We then\ntrack how these gists evolve across key events (Study 2) and assess their\ninfluence on online engagement (Study 3). Finally, we investigate how the\nvolume of gists is associated with national health trends like vaccine uptake\nand hospitalizations (Study 4). Our work is the first to empirically link\nsocial media linguistic patterns to real-world public health trends,\nhighlighting the potential of prompt-based LLMs in identifying critical online\ndiscussion patterns that can form the basis of public health communication\nstrategies.", + "authors": "Xiaohan Ding, Buse Carik, Uma Sushmitha Gunturi, Valerie Reyna, Eugenia H. Rho", + "published": "2024-03-01", + "updated": "2024-03-01", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC", + "cs.AI", + "cs.CL", + "cs.SI" + ], + "label": "Original Paper", + "paper_cat": "LLM AND Reasoning", + "gt": "Leveraging Prompt-Based Large Language Models: Predicting Pandemic Health Decisions and Outcomes Through Social Media Language", + "main_content": "INTRODUCTION During the COVID-19 pandemic, social media was at the center of proliferating mass antipathy and distrust towards government health policies and recommendations [26, 55]. Millions took to social media to oppose federal and state health practices, criticize medical professionals, or organize anti-vaccine and mask-wearing rallies [62]. The viral growth of such online conversations fueled animosity and extremist views that encouraged people to resist public health guidelines [2, 26]. Disregarding public health practices, such as wearing masks, maintaining social distance, and getting vaccinated, resulted in significant societal costs. Between November and December of 2021 alone, over 692,000 preventable hospitalizations were reported among unvaccinated patients, leading to a staggering $13.8 billion in costs [31]. Soaring COVID-19 infection cases put a massive burden on healthcare systems, depleting medical resources and contributing to severe employee burnout and shortages of healthcare workers [53]. Meanwhile, COVID-19 conspiracies and hyper-partisan news on social media led to nationwide protests, obstruction of medical facilities [6], and even fatal assaults on employees who asked customers to wear masks [8].
According to fuzzy-trace theory (FTT), texts that clearly establish cause-and-effect relationships facilitate humans' extraction of gist mental representations, helping people understand and remember information better than texts without causal coherence [83, 87]. This aligns with previous studies in the decision sciences, which have shown that the causal coherence of gists in texts plays a crucial role in how individuals perceive risks and make health-related decisions [24, 88]. Throughout the pandemic, social media conversations refuting COVID-19 public health practices based on mis-/disinformation and identity politics continued to obscure people's knowledge of safe health practices, making well-informed health decisions extremely difficult [113]. Using evidence-based theories like FTT allows us to create psychologically descriptive models that transform language into analyzable units shown to predict human behavior [24, 85]. In this paper, we leverage the capabilities of prompt-based Large Language Models (LLMs) to delve into the nuanced language patterns of social media discussions opposing COVID-19 public health practices through a theory-driven approach grounded in fuzzy-trace theory. Specifically, we dissect the language around resistance to COVID-19 health practices through the lens of FTT and its central concept of gist, examining how causal language patterns, or gists, manifest across social media communities that denounce pandemic health practices, contribute to trends in people's health decisions, and, by extension, impact national health outcomes. We divide our work into four main studies to address the following research questions:

• RQ1. How can we efficiently predict gists across social media discourse at-scale? (Study 1)
• RQ2. What kind of gists characterize how and why people oppose COVID-19 public health practices? How do these gists evolve over time across key events? (Study 2)
• RQ3. Do gist patterns significantly predict patterns in online engagement across users in banned subreddits that oppose COVID-19 health practices? (Study 3)
• RQ4. Do gist patterns significantly predict trends in national health outcomes? (Study 4)

We answer RQ1 by leveraging LLMs and their prompt-based capabilities to identify gists in social media conversations at-scale (Study 1). We do so by developing a novel prompting framework that detects and extracts cause-effect pairs in sentences from a corpus of online discussions collected from banned Reddit communities known for opposing public COVID-19 health practices. Study 1 allows us to identify the causal language (cause-effect pairs that form gists) that underlies how people argue against COVID-19 health practices on social media. We answer RQ2 by clustering sentence embeddings of gists (sentences with causal relations identified in Study 1) to identify the most salient gist clusters, and we demonstrate how they evolve across key events (Study 2). Finally, we answer RQs 3 and 4 by using Granger causality to test whether causal discourse (gists) on social media can significantly predict online engagement patterns (Study 3) and trends in national health decisions and outcomes in the U.S. (Study 4).

Contributions: This work's intellectual merits are methodological and theoretical.
The computational techniques introduced in this work enable efficient and scaled prediction of gists on social media and thus can be used to better identify and understand the underlying mental representations that motivate health decisions and attitudes towards public health practices (Study 1). The clustering and evolution of gists in Study 2 identify the most salient themes associated with how people causally argue against pandemic health practices online. Patterns in gist volumes across cluster topics fluctuate closely with topically related high-profile events, including federal health announcements, congressional policies, and remarks by the country's leader. Study 3 empirically confirms how gist volumes significantly drive subreddit engagement patterns (upvotes and comments), with implications for how causal language may play a role in monitoring conversations in the content-moderation practices of controversial online health communities. Finally, gist patterns within subreddits that support anti-pandemic health practices were significantly interrelated with nationwide trends in important health decisions and outcomes (Study 4). To the best of our knowledge, our research is the first to empirically establish Granger causality between linguistic patterns in social media discussions about COVID-19 health measures and real-world trends in public health outcomes. Our work entails the following contributions:

• The task of accurately predicting causal language patterns and generating coherent gists (causal statements) is a complex challenge [65, 86]. We overcome this by introducing a multi-step prompting framework: Role-Based Incremental Coaching (RBIC). RBIC is a prompting mechanism that allows efficient prediction of gists across social media conversations at-scale. RBIC integrates role-based cognition with effective learning in sub-tasks to enhance the model's overall understanding of a given task prior to generating a final output. By leveraging RBIC, we overcome prior challenges in detecting subtle and complex expressions of semantic causality in noisy text. In doing so, this work advances state-of-the-art approaches to detecting gists at-scale, yielding a novel, psychologically relevant, and efficient technique for identifying and examining bottom-line meanings in massive amounts of textual data.

• We demonstrate the novel application of prompt-based LLMs in advancing computational social science (CSS) methods in Human-Computer Interaction (HCI) research. Generic Natural Language Processing (NLP) models and LLMs typically lack multi-step reasoning capabilities [116]. This limitation makes it difficult to apply such models to nuanced and complex text analyses in CSS research [123]. By applying RBIC, we overcome this limitation and demonstrate the versatility and effectiveness of prompt-based LLMs in identifying and synthesizing nuanced linguistic patterns. In so doing, we contribute to broadening the potential application of prompt-based LLMs for theory-driven textual analysis in CSS research in the HCI domain.

• Our research enhances the analytical depth and scope of insights into the causal discourse surrounding people's opposition to public health practices on social media.
We identify the most salient gist clusters that embody the core topics at the center of how and why people oppose public health practices throughout COVID-19, from May 2020 to October 2021. We use sentence embeddings and clustering to characterize how the volume of gists across each topic fluctuates in relation to key events associated with the core topics embodied by the gist clusters. By doing so, we capture how causal online discourse surrounding anti-COVID-19 health practices evolves over time across real-world events. Such insights can, in turn, inform timely public health communication strategies and interventions that account for ongoing current events [85].

• Finally, we address the question of whether and how social media language patterns in the form of gists influence nationwide trends in vaccinations, COVID-19 cases, and hospitalizations in the U.S., providing new evidence on how important health decisions and national health outcomes are impacted by causal linguistic signatures across social media health discussions, an important link that has not been empirically established at-scale in prior research.

2 RELATED WORK

2.1 Understanding the Impact of Social Media Language Patterns on Health Decisions and Outcomes

The COVID-19 pandemic ignited an unprecedented increase in social media discourse on health decisions and practices [113, 122], spurring a wave of computational social science research [39, 104] aimed at understanding this phenomenon in the fields of HCI [63, 77] and CSCW [16]. Using text mining and computational linguistics, researchers have analyzed pandemic-related social media discourse through the lenses of mental health [75], political views [17, 90], attitudes towards vaccines [79, 119], misinformation [49, 73, 99], and perceptions of health policies and government institutions [41]. Such studies have uncovered key insights into how language patterns reflect people's beliefs [109], sentiments [54], and emotional well-being [10, 118] during COVID-19. For example, researchers have examined collective shifts in the public mood in response to the evolving pandemic news cycles by analyzing the daily sentiment of tweets [105]. Similarly, others have analyzed social media posts containing a subset of depression-indicative n-grams to track fluctuations in the mental health of social media users over the course of the pandemic [39]. While such studies have made valuable contributions to understanding the role of language patterns in health-related discourse on social media [9, 30], there remains an opportunity to explore their impact on real-world health decisions and outcomes. To the best of our knowledge, there has been a lack of research examining how social media discussion patterns surrounding health practices can predict patterns in real-world health decisions and outcomes. Our research aims to fill this gap. Some emerging research, such as the study by Nyawa et al. (2022), has started to explore this link by applying computational linguistics to categorize individuals as either vaccine-accepting or vaccine-hesitant based on their online language patterns [71]. Yet the majority of empirical studies examining the impact of social media discourse on real-world behavior have thus far leaned heavily on survey-based methods [78, 118].
These surveys often depend on self-reported metrics about social media use and health behaviors, thereby offering only a limited perspective on the complex relationship between social media discourse patterns and actual health decisions. This limitation underscores the existing challenges in understanding how health-related discussions on the internet translate into or shape real-world outcomes and decisions [7]. Our research aims to address this challenge by investigating how language patterns in social media conversations can serve as predictive markers for understanding real-world trends in people's health decisions and outcomes during the pandemic.

2.2 Understanding Health Discourse Through the Lens of Fuzzy-Trace Theory and Its Core Concept of Gist

Scholars have used fuzzy-trace theory (FTT) as a theoretical lens to explore risk perceptions and the decisions underlying health practices and discussions in various contexts, including vaccines [113], cancer [115], HIV/AIDS [114], and the prescription of antibiotics [52]. These studies support FTT's core tenet that gists are stronger and more effective forms of communication than verbatim representations, in the sense that they are (a) better remembered and (b) more likely to influence decisions [83, 87]. For example, a study comparing articles on vaccines posted on Facebook showed that those containing gists (e.g., bottom-line meaning) are shared 2.4 times more often on average than articles with verbatim details (e.g., statistics) [15]. Having a story or images did not add unique variance to the predictions once gist was accounted for. The study's results show that communications about vaccines are more widespread when they express a clear gist explaining the bottom-line meaning of the statistics rather than just the data themselves. Likewise, scholars have also used FTT as a theoretical framework to examine people's behavior across diverse contexts, such as law, medicine, public health, systems engineering, and HCI [61, 88, 124]. For example, in HCI, researchers have used FTT to examine people's behavior in online social tagging [93] and to improve speech-to-text interface design through gist-based communication [65]. Others have used FTT in designing a web-based intelligent tutoring system for communicating the genetic risk of breast cancer through gists [115]. Overall, FTT's theoretical breadth and empirical support as a cognitive explanation of how people process and communicate information related to health decisions make it a well-suited theoretical lens for examining resistance towards public health practices in our research. Further, gists that causally link some event, actor, or outcome tend to facilitate more effective uptake of information than those that are less causally coherent [57, 85]. In fact, causal coherence is one of the most important semantic aspects of gists that make gist-based communications effective [40]. For example, in a study analyzing 9,845 vaccine-related tweets, researchers discovered that tweets containing explicitly causal gists (e.g., "vaccines cause autism") were far more likely to be retweeted and to go viral. This was in contrast to tweets that suggested a link between vaccines and autism but emphasized details and lacked a meaningful causal connection [15].
Simply put, information with stronger causal structure produces more meaningful gists in people, who are then more likely to remember, apply, and share that information [86]. Fuzzy-trace theory draws on psycholinguistic research on the mental representations of narratives that underlies both human memory models and computational models in which causal connections are a central feature of common gists [89, 103]. Hence, we focus on causal gists, or gists that contain a cause-effect relation. From here on, we refer to causal gists simply as gists.

2.3 Challenges in Predicting Semantic Causality in Online Health Discourse

Extracting cause-effect relations in text is one of the many open challenges in NLP research that has seen significant breakthroughs in recent years through the development of generative Large Language Models [120]. However, computational social science research has yet to take advantage of these advancements [123], particularly in examining gists related to health practices. For example, scholars have used topic modeling, such as Latent Dirichlet Allocation (LDA) [11], to identify gists in vaccine hesitancy [40]. While useful, these methods do not enable granular detection of gists at the sentence or phrase level. For instance, LDA only allows the detection of gists at the corpus level, where each identified topic across the entire dataset is treated as a proxy identification of one gist. Recent scholarship in medical informatics has examined health-related attitudes on social media by extracting causality through machine learning approaches with rule-based dependency parsing and named entity recognition [19, 29, 67, 68, 80]. While such approaches are an improvement, they can only detect intra-sentential causality (within a single sentence) and not inter-sentential causality, where cause and effect lie in different sentences (e.g., God made us to breathe naturally. I won't be forced to wear masks.). More recently, transformer models such as InferBERT and CausalBERT, specifically designed for extracting causal relationships, have yielded more promising results [50, 111]. However, the token limit of these models significantly reduces performance when dealing with longer texts [4]. Additionally, like humans, these models struggle to discern subtle forms of semantic causality in noisy or incoherent data. Our research aims not only to identify causality in text, but also to generate coherent gists based on the identified cause-effect pairs. To achieve this, we address prior limitations by leveraging recent advancements in pretrained LLMs and their prompt-based approaches to develop a novel prompting framework that systematically predicts gists [112].

3 STUDY 1: PREDICTING GISTS IN SOCIAL MEDIA CONVERSATIONS AT-SCALE

As a first step to analyzing how causal language patterns on social media impact health decisions and outcomes, we leverage the power of prompt-based LLMs in Study 1. Specifically, we develop and apply a multi-step prompting framework called Role-Based Incremental Coaching (RBIC) to efficiently predict gists across social media discourse at-scale. Role-Based Incremental Coaching is a prompting framework (Fig. 2) built with few-shot demonstrations using GPT-4, which consists of two primary prompting techniques: Role-Based Knowledge Generation and Incremental Coaching.
Combined together, RBIC allows the model to (1) learn its role for a given task by generating role-specific knowledge as a task-performing agent and (2) perform a series of small sub-tasks to refine its understanding and the quality of the final output by incrementally building upon the sub-task responses. RBIC allows us to systematically identify the presence of semantic causality in a given post and to generate causally coherent gists across large volumes of textual corpora at-scale.

3.1 Data

We collected all publicly available posts from 20 anti-COVID-19 subreddits that were banned for denouncing COVID-19 public health practices. These subreddits were chosen based on their community size, the significant media attention they received from major news outlets [44], and their virality among American social media users [43]. We obtained all posts and corresponding metadata (comments, post id, timestamp, up/down-vote ratio, etc.) for each of these subreddits using the Pushshift API. This resulted in a total of 79,680 posts spanning May 2020 to October 2021 from the following subreddits: conspiracy_commons, CoronavirusCirclejerk, CoronavirusFOS, Coronavirus_Rights, COVID19, covid19_testimonials, covidvaccinateduncut, VaxKampf, DebateVaccines, FauciForPrison, ivermectin, lockdownskepticism, NoNewNormal, trueantivaccination, vaccinelonghaulers, VAERSreports, Wuhan_Flu, CovidIsAFraud, COVID19Origin, churchofcovid.

3.2 Method: Role-Based Incremental Coaching (RBIC)

Role-Based Knowledge Generation. Drawing inspiration from prior NLP research that leverages multi-step reasoning capabilities in LLMs [58], we developed Role-Based Knowledge Generation as the initial grounding component of our prompting framework. Before producing a final response from an LLM, asking the LLM to generate potentially useful information about a given task improves the final response [58].

[Figure 2: Illustration of the Role-Based Incremental Coaching (RBIC) prompting framework: RBIC incorporates role-based cognition and sub-task training to improve the model's comprehension of a specific task before generating the final output. The prompt sequence and example outputs are:
P1: "Your role is to understand the cause-effect relationships in social media posts. Can you provide a brief definition of what a cause-effect relationship is?" -> O1: "Certainly! A cause-effect relationship is a relationship between two events or variables..."
P2: "Based on your role, can you explain the term 'causal gist' in relation to sentences that have causal coherence?" -> O2: "Of course. The term 'causal gist' refers to the fundamental meaning or essence of a sentence or text that expresses a causal relationship..."
P3: "So, given the sentence: I took the vaccine yesterday. I'm really sick now. Is there a cause-effect relationship in this given sentence? If yes, just answer: 'Yes'. If no, just answer: 'No'. Don't give me any explanations." -> O3: "Yes"
P4A (if "Yes"): "Indeed, there is a cause-effect relationship in the given sentence. Then extract the corresponding cause phrase and effect phrase in the given sentence. Just respond in JSON format: {"Cause": "", "Effect": ""}" -> O4: "Sure: {"Cause": "took the vaccine", "Effect": "really sick now"}"
P4B (if "No"): "Please explain why."
P5: "Generate a reasonable and clear causal gist based on {"Cause": "took the vaccine", "Effect": "really sick now"} and your understanding of the sentence with the cause-effect relationship." -> O5: "Taking the vaccine yesterday caused the person to become sick."]

For example, as shown in the open online
course "Learn Prompting" (https://learnprompting.org/), when prompted with "Which country is larger, Congo or South Africa?", GPT-3 answers incorrectly. However, when the model is prompted to "Generate some knowledge about the sizes of South Africa and Congo" before answering the final question, the model uses the output of the intermediate prompt ("South Africa [has] an area of...") to generate the correct answer: Congo is larger than South Africa. We leverage this prompting intuition in Role-Based Knowledge Generation to enhance the model's understanding of its role as a task-performing agent. By doing so, the model can achieve better performance by accessing potentially relevant contextual information, as shown in prompts P1 and P2 (Fig. 2). The corresponding outputs to P1 and P2 (O1 and O2, respectively) are then integrated with a task-specific prompt (P3) in the following step. The role-based knowledge outputs (O1 and O2) allow the model to perform tasks more accurately given its enhanced understanding of its specific role in achieving the task.

Incremental Coaching. Inspired by Chain of Thought (CoT) prompting [112], Incremental Coaching is a technique within the Role-Based Incremental Coaching (RBIC) framework that involves breaking down a complex task into smaller, manageable sub-tasks, as shown in P3-P5 in Fig. 2. The role-based agent is coached through a series of sub-tasks in a step-by-step manner, with each sub-task building upon the previous one. To implement Incremental Coaching effectively within RBIC, it is necessary to follow a logical sequence of sub-task prompts that allows the model to build understanding of, and confidence in, performing the final task by generating incremental outputs (O3-O4). By breaking the final task down into a series of incremental sub-tasks, the role-based agent can gradually improve its comprehension of the final task and deliver a more accurate final response.

Application of RBIC. Here, we demonstrate the algorithmic conceptualization of the RBIC prompting framework in the context of generating gists. The essence of the Role-Based Incremental Coaching (RBIC) framework lies in its two core algorithmic components, Role-Based Knowledge Generation and Incremental Coaching, as shown in Algorithm 1.

Algorithm 1: Role-Based Incremental Coaching (RBIC)
Require: User Input (P), Role-Based Agent (RBA)
Ensure: Knowledge Base (KB), Final Task Output (F)
1:  // Role-Based Knowledge Generation
2:  Input: User Input P
3:  Output: Intermediate knowledge K
4:  K <- RBA.GenerateKnowledge(P)
5:  Update Knowledge Base (KB) with K
6:  // Incremental Coaching
7:  Input: Final Complex Task T, Sequence of Sub-Tasks T = {S_1, S_2, ..., S_n}
8:  Output: Incremental outputs {O_1, O_2, ..., O_n} and Final Task Output F
9:  for i = 1 to n do
10:     O_i <- RBA.Coach(S_i, KB)
11:     Update KB with O_i
12: end for
13: F <- RBA.FinalOutput(KB, T)
14: return F

The RBIC algorithm requires the following inputs:
• User Input P: RBIC is initialized by the user input. For example, in our study, we operationalized the user input as P = (P1, P2, P3, P4A or P4B, P5), as shown in Fig. 2.
• Role-Based Agent: Essentially, this can be any prompt-based LLM. For our study, we used GPT-4 as our Role-Based Agent.

Next, the RBIC algorithm generates the following outputs:

• Knowledge Base (KB): The first phase of the RBIC algorithm, denoted as Role-Based Knowledge Generation, is symbolized by the function RBA.GenerateKnowledge(P). In this step, the Role-Based Agent (in our case GPT-4, but this can be substituted with any prompt-based LLM) is prompted with a user input P to elicit relevant background knowledge K. This knowledge forms the basis for task execution and is stored in an initial Knowledge Base (KB).

  K <- RBA.GenerateKnowledge(P)    (1)

Here, K represents the generated knowledge and P represents the user input. The arrow (<-) signifies the assignment of the generated knowledge K to the Knowledge Base (KB), thus creating a dynamic knowledge architecture that adapts over time. For instance, in our study, K comprised O1 and O2 (shown in the upper right of Fig. 2), which collectively formed our Knowledge Base (KB).

• Final Task Output (F): The subsequent phase, known as Incremental Coaching, is predicated on a sequence of sub-tasks {S_1, S_2, ..., S_n} and their corresponding outputs {O_1, O_2, ..., O_n}.

  O_i <- RBA.Coach(S_i, KB)    (2)

In this phase, each sub-task S_i leverages the updated Knowledge Base (KB) to produce an output O_i. O_i is then used to update the KB, thus iteratively coaching the model through a series of sub-tasks in a step-by-step manner. Breaking down the final Complex Task T into simpler sub-tasks S_i allows the model to incrementally build up the necessary knowledge and skills to tackle the final task; this incremental knowledge building across sub-tasks therefore enables the model to better understand and perform the final Complex Task T. In our case, the Complex Task (T) is generating a "gist" based on the cause-effect pairs. The individual sub-tasks that contribute to this complex task are labeled P3, P4A, P4B, and P5 (Fig. 2). The algorithm proceeds sequentially, producing the intermediate outputs O3 and O4, and ultimately culminating in O5, the gist generated from the cause-effect pair identified in sub-task P4A. When applied to predicting gists in social media conversations, RBIC instructs the model to understand the concept of cause-effect relations as a task-performing agent. The model then incrementally performs sub-tasks to recognize and extract cause-effect pairs, and finally generates a concise gist that captures the essence of the identified causal relationship.
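To illustrate how this algorithm might be driven in practice, here is a minimal sketch of the RBIC loop; the chat() helper and the condensed prompt strings are hypothetical stand-ins (the full prompts appear in Fig. 2), not the authors' exact implementation.

```python
# Illustrative RBIC driver. Each prompt is sent with the accumulated
# Knowledge Base (KB) as conversational context, mirroring Algorithm 1.
# `chat` is a placeholder for any GPT-4-style chat-completion client.

def chat(history: list[dict]) -> str:
    raise NotImplementedError("plug in a chat-completion client here")

def rbic_gist(post: str) -> str | None:
    kb = []  # Knowledge Base: alternating prompts and model outputs

    def ask(prompt: str) -> str:
        kb.append({"role": "user", "content": prompt})
        out = chat(kb)                                    # RBA.Coach(S_i, KB)
        kb.append({"role": "assistant", "content": out})  # update KB with O_i
        return out

    # Role-Based Knowledge Generation (P1, P2 -> O1, O2)
    ask("Your role is to understand cause-effect relationships in "
        "social media posts. Briefly define a cause-effect relationship.")
    ask("Based on your role, explain the term 'causal gist'.")

    # Incremental Coaching (P3 -> O3, P4A -> O4, P5 -> O5)
    if ask(f"Given the sentence: {post} Is there a cause-effect "
           "relationship? Answer only Yes or No.").strip() != "Yes":
        return None                           # branch P4B omitted for brevity
    pair = ask('Extract the cause and effect phrases as JSON: '
               '{"Cause": "", "Effect": ""}')
    return ask(f"Generate a reasonable and clear causal gist based on {pair}.")
```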
3.2.1 Human Evaluation. To assess the effectiveness of RBIC's application in predicting gists in our data, we conducted a human evaluation of the RBIC-generated outputs. We recruited 6 human evaluators to evaluate the presence of causal coherence (O3), cause-effect pairs (O4), and gists (O5) for each Reddit post based on the following criteria:

• Accuracy (classification): Is there a cause-effect relationship in the post (1/0; Yes/No)?
• Relevance (extraction): How well does the cause-effect pair capture the primary causal relationship in the post (1-5; not well at all, slightly well, moderately well, very well, extremely well)?
• Conciseness (generation): How well does the gist concisely summarize the cause-effect relationship in the post (1-5; not well at all, slightly well, moderately well, very well, extremely well)?

To mitigate error propagation, the evaluation was designed as a sequential process with checks for accuracy. First, evaluators focused on 'Accuracy', verifying the presence of a cause-effect relationship. Second, 'Relevance' was examined to ensure the identified cause-effect pairs accurately reflected the post's main causal relationship. The third and final stage, 'Conciseness', was evaluated only for posts that had already met the 'Accuracy' and 'Relevance' criteria. This approach minimized the propagation of errors from earlier stages. The accuracy criterion assesses the model's performance in identifying the presence of a causal relationship in a post. Relevance evaluates the model's ability to correctly extract the cause and effect phrases that are most salient to the core message of the post's content. Conciseness assesses the model's generative performance in concisely synthesizing a coherent gist based on the identified cause and effect phrases. In total, each of the 6 annotators evaluated 3,100 posts randomly selected from the entire dataset. For each criterion, each post received three evaluation scores from three annotators. The evaluators' assessments of the model's performance across the three criteria were generally high based on inter-rater agreement scores using Fleiss' kappa (k) [32]: accuracy (k = 0.892); relevance (mean = 4.3, k = 0.839); conciseness (mean = 4.5, k = 0.864).
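As a concrete illustration of the agreement statistic reported above, here is a small sketch using statsmodels' implementation of Fleiss' kappa; the count matrix shown is fabricated for illustration, and the paper does not specify which implementation was used.

```python
# Illustrative Fleiss' kappa computation for the accuracy criterion:
# each row is one post, each column a label ("No", "Yes"), and cells
# count how many of the three annotators chose that label.
import numpy as np
from statsmodels.stats.inter_rater import fleiss_kappa

counts = np.array([
    [0, 3],   # all three annotators said a causal relation is present
    [1, 2],
    [3, 0],
    [0, 3],
])
print(fleiss_kappa(counts))  # 1.0 = perfect agreement, 0 = chance level
```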
Table 1: Sample results from applying the RBIC method for extracting cause-effect relationships and generating gists from Reddit posts discussing health mandates between May 2020 and October 2021, including the post content.

1. Post: "99.995% of children survive cv infection, why are they pushing so hard to have kids take an experimental vaccine?" | Label: Yes | Cause: "they are pushing so hard to have kids take an experimental vaccine" | Effect: "99.995% of children survive cv infection" | Gist: "Despite the high survival rate of children from cv infection, there is a push to have kids take an experimental vaccine."

2. Post: "Had my Pfizer jab last Wed and have felt like death since." | Label: Yes | Cause: "Had my Pfizer jab" | Effect: "have felt like death since" | Gist: "The cause of feeling like death is having the Pfizer jab last Wednesday."

3. Post: "Imagine pointing and laughing at a single father of 3 who's now jobless and has to take care of 3 kids with no income all because he didn't want to wear a face diaper or take the experimental gene modification." | Label: Yes | Cause: "he didn't want to wear a face diaper or take the experimental gene modification" | Effect: "he's now jobless and has to take care of 3 kids with no income" | Gist: "The man's refusal to wear a face mask or take the experimental gene modification led to him losing his job and being unable to provide for his three children, resulting in financial hardship and increased responsibility for him."

4. Post: "LA Fitness cancelled my membership against my will today because I refused to wear a mask." | Label: Yes | Cause: "I refused to wear a mask" | Effect: "LA Fitness cancelled my membership" | Gist: "The cause of LA Fitness cancelling the membership was the refusal of the person to wear a mask, which led to the effect of the membership being canceled against their will."

5. Post: "I've been thinking a lot about COVID data that's been circulating and want to share some thoughts. I think it's essential to remember that COVID data is not beyond skepticism, because what counts as a case varies." | Label: Yes | Cause: "what counts as a case varies" | Effect: "COVID data is not beyond skepticism" | Gist: "The variation in what is considered a COVID case has led to skepticism about the accuracy and reliability of COVID data."

6. Post: "I stumbled upon some news. Governor Wolf has a false positive, won't admit it because it would be admitting the tests are unreliable. And do you think it's possible that politicians might hide their own false positive results to maintain confidence in the testing system?" | Label: Yes | Cause: "false positive, won't admit it" | Effect: "it would be admitting the tests are unreliable" | Gist: "Governor Wolf won't admit a false positive because it would undermine COVID-19 test reliability, potentially affecting public health and safety measures."

3.3 Result

Table 1 presents the results of RBIC's application, demonstrating the effectiveness of our prompting framework in predicting gists at-scale. We identified a total of 6,861 gists in our data. As shown, RBIC can not only detect semantic causality (O3), but also extract the verbatim phrases corresponding to the main cause-effect pairs (O4) and generate coherent gists (O5) based on the identified pairs. As the first example shows, RBIC detects sentences where causality is implied with nuance, as well as those where it is stated more explicitly. Although most of the gists accurately capture the semantic essence of the causal relationship, some are more eloquent than others. For instance, the gists in examples 2 and 4 use sentence inversions, beginning with "the cause of", while others are more semantically fluid. We also performed a comparison against fine-tuned language models (BERT, RoBERTa, and XLNet), as detailed in the appendix (Table 6): on extracting cause-effect pairs (O4), RBIC outperformed the best-performing baseline model (RoBERTa, with an F1-score of 0.814) by 26.6% in F1-score.

4 STUDY 2: HOW GISTS EVOLVE OVER TIME

Given the rapidly evolving public health discussions on social media, it is crucial to examine how they change over time [28, 38]. This enables a better understanding of shifts in public opinion and emerging concerns across contentious debates around health practices like vaccinations, mask-wearing, and social distancing [33].
Hence, in Study 2, we build upon our Study 1 findings to address: What kinds of gists characterize how and why people oppose COVID-19 public health practices? How do these gists evolve over time across key events? To answer these questions, we extract sentence embeddings from each of the gists identified in Study 1 and cluster the embeddings to identify distinct gist clusters that characterize the core topics at the center of how people argue against COVID-19 health practices.

4.1 Method

4.1.1 Extracting Sentence Embeddings from Gists. To identify the most salient topics across the causal language (gists) in the social media discourse against public health practices, we use Sentence-BERT (S-BERT) to extract semantically rich representations of the gists identified in Study 1. S-BERT is a transformer-based model designed to produce contextualized sentence embeddings, which are particularly valuable for clustering texts [82, 101]. After preprocessing the gists with standard text-cleaning operations (lowercasing, removal of special characters, tokenization), we implemented S-BERT using the SentenceTransformer library to extract embeddings from our gists. The S-BERT model comprises 12 hidden layers, with each layer producing an output representation of 1 (N) x 768 (M) dimensions. To obtain high-quality embeddings, we extracted the output representations from each of the last three hidden layers of the model (layers 10-12) and computed their means. By doing so, we are able to capture semantically rich representations of each gist as high-dimensional vectors.

4.1.2 Clustering of Sentence Embeddings. After obtaining the sentence embeddings, we applied Principal Component Analysis (PCA) [13] to reduce the dimensionality of the embeddings prior to the clustering step. This was done to better visualize the language embeddings in a lower-dimensional space and to facilitate a more effective interpretation of the embedding results. We selected PCA given its frequent use and proven effectiveness in reducing dimensionality, especially for language embeddings [97]. We used k-means [69] for clustering, as it is especially reliable for clustering semantic word representations [121]. The k-means algorithm iteratively assigns each embedding to the cluster with the closest centroid and updates each centroid by calculating the mean of the embeddings assigned to its cluster [121]. This process continues until the centroids stabilize. To enhance the reliability and robustness of our clustering approach, we incorporated sentence embeddings of posts that did not contain any gists, following Samosir's study [92]. This step allows us to assess the quality of the sentence embeddings by verifying that embeddings from sentences without gists cluster apart from embeddings derived from gists. Finally, we used the elbow method [70] to determine the optimal number of clusters by calculating the sum of squared errors (SSE) in ascending order of cluster numbers until additional clusters resulted in diminishing returns [64].
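A minimal sketch of the embedding-and-clustering pipeline in Sections 4.1.1-4.1.2 follows; the model checkpoint, the example gists, and the cluster-count range are illustrative assumptions, while the last-three-layer mean pooling, PCA reduction, and SSE-based elbow scan follow the text.

```python
# Illustrative gist-embedding and clustering pipeline: mean of the last
# three hidden layers of an S-BERT-style encoder, PCA reduction, and
# k-means with an elbow scan over the SSE (inertia).
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

name = "sentence-transformers/all-mpnet-base-v2"  # placeholder 12-layer model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

def embed(gist: str) -> torch.Tensor:
    batch = tok(gist, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**batch, output_hidden_states=True).hidden_states
    last3 = torch.stack(hidden[-3:])          # last three hidden layers
    return last3.mean(dim=0).mean(dim=1)[0]   # mean over layers, then tokens

gists = ["Lockdowns caused job losses.",
         "The vaccine mandate led to people losing their jobs.",
         "Mask mandates made breathing difficult.",
         "Stimulus checks eased food insecurity."]  # stand-ins for 6,861 gists
X = torch.stack([embed(g) for g in gists]).numpy()

X2 = PCA(n_components=2).fit_transform(X)      # reduce before clustering
sse = {k: KMeans(n_clusters=k, n_init=10).fit(X2).inertia_
       for k in range(2, 4)}                   # elbow scan over SSE
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X2)
```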
4.1.3 Verifying Gist Clusters. The first author initially identified the primary themes of each cluster through categorization, screening, and summarization of 200 randomly selected gists from each cluster. Next, we recruited 6 annotators to manually evaluate and verify the five primary gist clusters, as shown in Table 3. Annotators manually evaluated the clustering results by iteratively examining and discussing the themes across 200 randomly selected gists belonging to each cluster (1/0); see the annotation agreement in Table 2. The verification process also included two additional steps: (1) refining the cluster descriptions so that they were thematically salient and representative of the core ideas and topics embodied by the gists in each cluster, and (2) examining the sentences in the non-gist cluster (C6) that did not include any gists.

4.2 Result

A representative sample of gists from each cluster is presented in Table 3, illustrating the core topics that characterize the opposition discourse surrounding pandemic health practices. In Fig. 3 (top panel), we visualize the evolution of our gist clusters across four time points (May 2020 to October 2021). The visualization reveals interesting relations between the clusters. For instance, cluster 4, which embodies gists discussing the impact of COVID-19 on the economy and society at large, wraps around cluster 3, whose gists relate to the impact of lockdown policies. This spatial proximity suggests that causal discussions of the broader consequences of the pandemic are closely intertwined with gist-based conversations on the impact of lockdown measures, shedding light on the interconnectedness of how people talk about these two topics in a causal manner. Similarly, clusters 1 and 2 are not only close in proximity but also similar in position and shape: both clusters are diagonally positioned from top left to bottom right and run parallel to each other. Given that both clusters represent gists concerning specific health practices (vaccinations, mask-wearing), it is likely that these topics share similarities in the causal manner in which people talk about the effectiveness of such health practices. Cluster 5, which represents gists related to conspiracy theories, domestic politics, and foreign countries, appears to lack a clear boundary and is spatially dispersed compared to the other clusters. This could be because cluster 5 encompasses multiple topics, as indicated by its description, in contrast to the other clusters, which are more uniformly focused on specific health practices, government measures, or particular aspects of the pandemic's impact. The lower inter-rater reliability for cluster 5 (Fleiss k = 0.821) further supports the notion that it is a heterogeneous cluster consisting of various topics. The bottom half of Fig. 3 demonstrates that peak volumes of gists within each cluster align closely with key events related to the respective topics embodied by those clusters. To identify these key events, we relied on reports from major health organizations, including the Centers for Disease Control and Prevention (CDC), the World Health Organization (WHO), and the United Nations (UN), for announcements related to public health interventions like lockdowns and vaccine rollouts [48, 107]. News reports from these organizations were widely recognized as authoritative information sources across the global community. Hence, we used announcements and reports from these sources that highlighted key pandemic events, public health announcements, and significant milestones across the timeline of COVID-19.
In addition to these organization reports, we also analyzed articles related to COVID-19 published by major news outlets, such as AP News, Reuters, CNN, Fox News, the Wall Street Journal, the New York Times, and NPR. We then identified highly mediatized events using the number of shares and article comments. This process also entailed iterative discussions among all the authors to ensure a comprehensive and balanced selection of events. Our approach aimed to minimize bias by incorporating a diverse range of sources and validating the significance of events through multiple indicators, such as media coverage intensity and public engagement. For example, the cluster 1 peak occurs in November 2020, coinciding with the country's initial phase of vaccine distribution to healthcare workers and high-risk groups [48]. Similarly, cluster 2 gists (mask-wearing) peak in June 2021, the same month in which the federal mask mandate was lifted [107]. Cluster 3 gists, which relate to the impact of lockdowns, peak in May 2020, by which time approximately 4.2 billion people, or 54% of the world's population, were under lockdown [42]. In December 2020, the U.S. Congress passed a bill to distribute $90 billion in stimulus checks to households, as nearly 30 million American adults reported food and income insecurity in the same month [106]. These events temporally coincide with the peak in cluster 4 gists, which concern the socioeconomic consequences of the pandemic.

Table 2: Inter-rater reliability scores for human evaluation of topic clustering results. Fleiss' kappa coefficient was calculated to assess agreement between six annotators judging whether each gist was correctly assigned to one of the five clusters listed.
  Cluster                                                        | IRR value (Fleiss k)
  C1. Implications of Vaccine Policies, Efficacy, Side-Effects   | 0.926
  C2. Controversies Related to Mask-Wearing Practices            | 0.894
  C3. Impact of Lockdown                                         | 0.884
  C4. Societal and Economic (Macro) Impact of COVID-19           | 0.902
  C5. Conspiracy Theories, Domestic Politics, Foreign Countries  | 0.821
  C6. Lack of distinct causal relationship or coherent gists     | 0.980

Table 3: Representative examples are shown for each cluster from C1 to C5, highlighting the main ideas identified in the health mandate debate on Reddit. Cluster 6 is not included, as it lacks a distinct causal relationship or coherent gist.

C1. Implications of Vaccine Policies, Efficacy, Side-Effects
• The implementation of a vaccine mandate has resulted in people losing their jobs.
• The use of experimental COVID vaccines is causing an increase in COVID deaths.
• The vaccine was ineffective against new variants, which led to the death of 7,000 people who received the spike protein mRNA jab, including little kids. This suggests that the vaccine was administered for no reason, as it failed to provide protection against the new variants.
C2. Controversies Related to Mask-Wearing Practices
• If a person refuses to wear a mask at a business for medical reasons, the business may deny them services.
• The lifting of mask mandates for vaccinated individuals has caused the proliferation of a deadly biohazard, which could lead to the CDC and other agencies being charged with involuntary manslaughter.
• Wearing masks prevents people from seeing each other's faces, which leads to difficulties in understanding and building trust with others.
C3. Impact of Lockdown
\u2022 The lockdowns have caused tourism-dependent islands in Thailand to suffer from a lack of income, leading to a situation where they have been on food aid for over a year.
\u2022 The lockdowns caused a loved one to almost commit suicide, highlighting the negative impact of lockdowns on mental health.
\u2022 The prolonged lockdown imposed by Cuomo for six months has resulted in the inability of the speaker to pay their bills.
C4. Societal and Economic (Macro) Impact of COVID-19
\u2022 The outbreak of COVID-19 has caused people to struggle with their livelihood, leading to financial difficulties and economic instability.
\u2022 The COVID-19 pandemic has caused the biggest drop in US life expectancy since the second world war.
\u2022 The COVID-19 shutdowns have resulted in 1 in 5 churches facing permanent closure within 18 months due to the financial strain caused by the pandemic.
C5. Conspiracy Theories, Domestic Politics, Foreign Countries
\u2022 People refuse to share a table or work with certain people because they see "certain people" as sub-human because of their vaccination status.
\u2022 The sentence suggests that if COVID-19 was intentionally released, it would lead to a major benefit for China and billionaires. The implication is that the cause of COVID-19\u2019s intentional release would be to bring about this benefit for these parties.
\u2022 The lack of information on the epidemic from people on whether they think something is safe or not is preventing the speaker from being able to debate with their conspiracy theory friends.

Finally, cluster 5 gists, which are related to conspiracy theories, politics, and foreign countries, reached their peak volume in August 2020, around the time when President Trump retweeted a popular online conspiracy theory [100] and referred to the "China virus" in his White House briefing [12]. Our findings imply that trends in gist volumes are linked with real-world events.

Figure 3: The upper portion of the illustration displays the progression of clusters C1-C6 across four time points (May 2020, January 2021, April 2021, October 2021), plotted along the first two principal components. The line graph illustrates the month-by-month evolution of the number of posts containing gists, representing the central themes discussed on Reddit concerning health mandates. The graph highlights specific dates when each topic was most prominently discussed and presents relevant news events related to COVID-19 and health mandates during those periods (e.g., Nov 2020: first allocation of COVID-19 vaccines to health care workers and high-risk groups; Dec 2020: Congress passes the $90B Coronavirus Relief Bill as nearly 30 million adults report food insecurity; June 2021: CDC announces masks are no longer required on public transportation; Aug 2020: Trump focuses on the "China virus" in a White House briefing and retweets a conspiracy theory).

5 STUDY 3: HOW SOCIAL MEDIA GIST PATTERNS INFLUENCE ONLINE ENGAGEMENT BEHAVIOR

Delineating key semantic patterns (e.g., gists) that drive online behavior can help gain insight into how social media language impacts the dissemination of health information online. This, in turn, can better inform public communication strategies for time-sensitive health interventions. Hence, in Study 3, we use Granger causality to examine the extent to which gist patterns influence online engagement, such as up-voting and commenting, in subreddit communities that oppose COVID-19 health practices.

5.1 Hypothesis Testing with Granger Causality

Granger causality determines whether a time series X is meaningful in forecasting another time series Y [35]. For two aligned time series X and Y, X is said to Granger-cause Y if past values X_{t-l} \u2208 X lead to better predictions of the current Y_t \u2208 Y
than do the past values Y_{t-l} \u2208 Y alone, where t is the time point and l is the lag time, i.e., the time interval after which changes in X are observed in Y. Lag time l in Granger causality refers to the delay between a change in one time series potentially causing a change in another, indicating the time it takes for the effect to be observed. We used Granger causality to test hypotheses 1-2, as shown below, as well as the reversed variations of H1 and H2 (H1R and H2R), where i ranges from 1 to 5.

H1. The daily volume of gists in cluster i significantly Granger-causes the upvote ratio of Reddit posts containing gists in cluster i.
H2. The daily volume of gists in cluster i significantly Granger-causes the number of comments associated with Reddit posts containing gists in cluster i.

5.2 Method and Analysis

First, we constructed the time series data T^G for each cluster, where T^G_i represents the daily number of gists in cluster i spanning from May 2020 to October 2021. We then created two temporally corresponding time series, T^U and T^C, which represent the daily upvote ratio and the daily comment count for each Reddit post containing gists from cluster i, respectively. We conducted a total of 20 Granger causality tests (5 clusters \u00d7 4 hypotheses: H1, H2, H1R, H2R), using time lags ranging from 1 to 14 days. To ensure that the value of the time series was not merely a function of time, we conducted the Augmented Dickey-Fuller (ADF) test [21], applying serial differencing until stationarity was achieved at the 5% significance threshold.
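The tests reported below in Table 4 follow this recipe. As a minimal sketch of one such test with statsmodels, using two synthetic daily series as stand-ins for a cluster's gist volume and the upvote ratio of its posts:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, grangercausalitytests

def make_stationary(s: pd.Series, alpha: float = 0.05) -> pd.Series:
    # Difference the series until the ADF test rejects non-stationarity,
    # mirroring the serial difference method described above.
    while adfuller(s.dropna())[1] > alpha:  # index [1] is the ADF p-value
        s = s.diff()
    return s.dropna()

rng = np.random.default_rng(0)
gist_volume = pd.Series(rng.poisson(20, 500)).cumsum()         # synthetic daily gist counts
upvote_ratio = pd.Series(rng.normal(0.8, 0.05, 500)).cumsum()  # synthetic upvote ratios

data = pd.concat([make_stationary(upvote_ratio),
                  make_stationary(gist_volume)], axis=1).dropna()
data.columns = ["upvote_ratio", "gist_volume"]

# grangercausalitytests treats the SECOND column as the candidate cause of
# the first, so this tests the H1-style direction: gist volume -> upvote ratio.
results = grangercausalitytests(data.values, maxlag=14, verbose=False)
for lag, res in results.items():
    f_stat, p_val = res[0]["ssr_ftest"][:2]
    if p_val < 0.05:
        print(f"lag {lag:2d}: F = {f_stat:.3f}, p = {p_val:.4f}")
```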
5.3 Results

Table 4 shows significant Granger-causal results (p < 0.05). Gists across certain topics are significantly predictive of up-voting and commenting patterns, and vice versa, in banned subreddits that oppose pandemic health practices. Specifically, the daily volume of gists significantly forecasts up-voting and commenting behavior for the topics of vaccines (cluster 1), mask-wearing (cluster 2), and macro-impacts of the pandemic (cluster 4), with significant lag lengths ranging from 2-7 days. These results align with prior research highlighting the linguistic power of gists in spreading online information. The reverse (H1R and H2R) is true for gists discussing the impact of lockdowns (cluster 3): up-voting and commenting behavior both significantly forecast fluctuations in the volume of lockdown-related gists.

Table 4: Granger causality test results analyzing the relationships between the daily volume of gists in clusters C1-C5 and online engagement behavior (upvote ratio, UR; number of comments, NC) across Reddit discussions. \u219b denotes the tested null hypothesis that the first series does not Granger-cause the second.

C1. Vaccination Implications
  upvote ratio (UR):        C1 \u219b UR   lag 4   F = 4.228   p = 0.022
  upvote ratio (UR):        UR \u219b C1   lag 3   F = 1.739   p = 0.277
  number of comments (NC):  C1 \u219b NC   lag 4   F = 3.090   p = 0.032
  number of comments (NC):  NC \u219b C1   lag 7   F = 1.107   p = 0.445
C2. Controversies and Policies Related to Masks
  upvote ratio (UR):        C2 \u219b UR   lag 6   F = 5.818   p = 0.012
  number of comments (NC):  C2 \u219b NC   lag 7   F = 6.007   p = 0.014
  upvote ratio (UR):        UR \u219b C2   lag 6   F = 4.738   p = 0.022
  number of comments (NC):  NC \u219b C2   lag 9   F = 0.771   p = 0.463
C3. Impact of Lockdown
  upvote ratio (UR):        C3 \u219b UR   lag 2   F = 0.712   p = 0.545
  upvote ratio (UR):        UR \u219b C3   lag 7   F = 3.410   p = 0.033
  number of comments (NC):  C3 \u219b NC   lag 3   F = 0.715   p = 0.496
  number of comments (NC):  NC \u219b C3   lag 8   F = 6.121   p = 0.016
C4. Societal and Economic (Macro) Impact of COVID-19
  upvote ratio (UR):        C4 \u219b UR   lag 3   F = 6.485   p = 0.011
  upvote ratio (UR):        UR \u219b C4   lag 7   F = 2.011   p = 0.092
  number of comments (NC):  C4 \u219b NC   lag 2   F = 7.912   p = 0.002
  number of comments (NC):  NC \u219b C4   lag 4   F = 0.815   p = 0.413
C5. Conspiracy Theories, Domestic Politics, Foreign Countries
  upvote ratio (UR):        C5 \u219b UR   lag 5   F = 1.441   p = 0.512
  upvote ratio (UR):        UR \u219b C5   lag 2   F = 1.715   p = 0.289
  number of comments (NC):  C5 \u219b NC   lag 4   F = 2.412   p = 0.089
  number of comments (NC):  NC \u219b C5   lag 4   F = 1.135   p = 0.530

5.3.1 Bidirectional Causality: Notably, for cluster 2, which pertains to controversies and policies related to mask-wearing, we observe an interesting feedback loop between gist volumes and up-voting behavior. As the volume of gists related to mask-wearing practices increases, online engagement around posts containing such gists also increases in the form of up-votes. This behavior, in turn, further influences the volume of gists that are topically related to mask-wearing practices. In other words, there is a mutually reinforcing effect between causal language and online behavior in the context of mask-related discussions.

6 STUDY 4: HOW SOCIAL MEDIA GIST PATTERNS INFLUENCE NATIONWIDE TRENDS IN HEALTH OUTCOMES

In Study 4, we address the question of whether and how social media language patterns in the form of gists influence health decisions and outcomes in the U.S. We follow Study 3\u2019s application of Granger causality to examine the relationship between gist patterns and important health decisions and outcomes related to COVID-19 in America. Considering the extensive attention the subreddits we analyzed received from the American public and the media [43], we focus on U.S. health outcomes.

6.1 COVID-19 Data on Health Outcomes

We used the following data from Our World in Data (https://github.com/owid/covid-19-data/tree/master/public/data), a trusted source of COVID-19 health data, for our analysis:

\u2022 Number of Vaccinations (NV): the total number of COVID-19 vaccine doses administered on a given day.
\u2022 General Hospitalization (GH): the number of individuals hospitalized due to COVID-19 on a given day.
\u2022 ICU Hospitalization (ICU): the number of patients with COVID-19 who are in the ICU on a given day.
\u2022 Total Daily COVID-19 Cases (TC): the total number of confirmed COVID-19 cases, including probable cases.
\u2022 New Daily COVID-19 Cases (NC): the number of newly confirmed COVID-19 cases, including probable cases.

6.2 Hypothesis Testing with Granger Causality

Following Study 3, we Granger-test the relationship between the daily volume of gists and patterns in people\u2019s health decisions (vaccinations) and national health outcomes (general/ICU hospitalization, total/new daily COVID-19 cases) through H3 and its reversed variation (H3R):

H3. The daily frequency of gists (cluster i) significantly Granger-causes people\u2019s health decisions and/or national health outcomes, where i ranges from 1 to 5.
H3R. People\u2019s health decisions and/or national health outcomes significantly Granger-cause the daily frequency of gists (cluster i), where i ranges from 1 to 5.

We created five time series, T^NV, T^GH, T^ICU, T^TC, and T^NC, corresponding to the five health outcome variables described above. We temporally align our data with the time frame of Studies 1-3. We performed 25 Granger causality tests (5 clusters \u00d7 5 health outcome variables) with lag times ranging from 1 to 14 days. We conducted ADF tests using the serial difference method to ensure statistical robustness.
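For reference, the five health-outcome series above can be assembled from the public Our World in Data dataset roughly as follows. This is a sketch: the column names follow the OWID CSV as of this writing and the file layout may change, so treat it as illustrative.

```python
# Sketch of assembling the U.S. health-outcome series from the OWID dataset.
import pandas as pd

OWID_CSV = ("https://raw.githubusercontent.com/owid/covid-19-data/"
            "master/public/data/owid-covid-data.csv")

df = pd.read_csv(OWID_CSV, parse_dates=["date"])
us = df[df["location"] == "United States"].set_index("date").sort_index()
us = us.loc["2020-05-01":"2021-10-31"]  # align with the study window

outcomes = pd.DataFrame({
    "NV":  us["new_vaccinations"],  # vaccine doses administered per day
    "GH":  us["hosp_patients"],     # COVID-19 patients in hospital
    "ICU": us["icu_patients"],      # COVID-19 patients in ICU
    "TC":  us["total_cases"],       # cumulative confirmed (incl. probable) cases
    "NC":  us["new_cases"],         # newly confirmed cases per day
})
```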
Table 5: Results of the Granger causality tests for relationships between Reddit discussion clusters (C1-C4) and the health outcomes dataset. Cluster 5 is not included due to the absence of significant Granger causality findings. \u219b denotes the tested null hypothesis that the first series does not Granger-cause the second; HO = health outcome. Note: see Appendix A for complete statistical results (Tables 7 and 8).

C1. Vaccination Implications (direction of impact: bidirectional)
  Number of Vaccinations:    C1 \u219b NV    lag 4    F = 4.778    p = 0.027
  Number of Vaccinations:    NV \u219b C1    lag 14   F = 7.771    p = 0.007
C2. Controversies and Policies Related to Masks (direction of impact: gists impact HO)
  Total Daily Covid Cases:   C2 \u219b TC    lag 5    F = 9.395    p = 0.005
  New Daily Covid Cases:     C2 \u219b NC    lag 5    F = 11.829   p = 0.004
C3. Impact of Lockdown (direction of impact: HO impacts gists)
  ICU Hospitalization:       ICU \u219b C3   lag 14   F = 4.000    p = 0.039
  General Hospitalization:   GH \u219b C3    lag 9    F = 8.822    p = 0.006
C4. Societal and Economic (Macro) Impact of COVID-19 (direction of impact: HO impacts gists)
  Total Daily Covid Cases:   TC \u219b C4    lag 9    F = 3.663    p = 0.031
  New Daily Covid Cases:     NC \u219b C4    lag 9    F = 3.092    p = 0.032

6.3 Results

Table 5 shows significant Granger-causal results with corresponding lag lengths (p < 0.05). We summarize our findings below.

Causal Talk Around Vaccines and National Vaccination Trends are Bidirectional. Our results demonstrate bidirectional causality between causal discourse patterns related to vaccines and the number of vaccinations administered in the U.S. The daily volume of cluster 1 gists, which consist of causal arguments related to vaccine regulations, efficacy, and side effects, is predictive of vaccination patterns across the U.S., and vice versa. However, there is a difference in the lag lengths between H3 and H3R. It takes 4 days for gist patterns to influence vaccine adoptions (H3), while it takes two weeks for vaccination trends to shape how people talk about vaccine-related topics in a causal manner (H3R) across COVID-19 subreddits known for vaccine skepticism. In addition to a more significant Granger-causal relationship, we also observe a higher Pearson correlation for H3 (r = 0.413, p = 0.005) compared to H3R (r = 0.105, p = 0.028), indicating that national vaccination patterns have a greater impact on shaping vaccine-related causal language on social media than the other way around. There are two possible explanations. First, as more people get vaccinated, online discussions of the experiences and potential side effects of vaccines may become more prevalent, leading people to talk in a causal manner about the side effects of vaccines (e.g., "Had my Pfizer jab last Wed and have felt like death since"). Another possible explanation is that the increasing vaccination requirements by corporations and governments as a condition for work or travel (and, therefore, the nationwide uptick in vaccinations) during the pandemic may have compelled vaccine skeptics to argue more vehemently against vaccines [25]. Previous research has shown that vaccine skeptics are susceptible to confirmation bias, as are most individuals, such that initial beliefs lead to polarization [66]. That is, vaccine skeptics are likely to seek out and discuss information about vaccines that confirms pre-existing beliefs when presented with opposing information or situated in contexts that challenge their views. Our findings align with this research, suggesting that as national vaccination uptake increases, vaccine skeptics might increasingly argue against vaccines in a causal manner (e.g., "If you take the vaccine, it\u2019s probably because you\u2019re unhealthy."), as commonly expressed in posts that contain cluster 1 gists.

Causal Talk Around Mask-Wearing Practices Significantly Predicts Trends in COVID-19 Cases. Our Granger-causal results show that national health outcomes, such as the total and new daily COVID-19 cases, can be significantly predicted by the volume of mask-related gists (cluster 2) with a lag of 5 days. The mask mandate was one of the most controversial health practices affecting people of all ages and occupations during the pandemic [62, 98]. Parents were polarized over school mask requirements to the extent of resorting to violence [94]. Employees who asked customers to wear masks were physically assaulted [8]. Although people initially adhered to wearing masks, more individuals started to protest mask mandates both on- and offline, citing physical distress ("If having healthy lungs is important for COVID, why would we wear masks that reduce lung function?") or invasion of personal rights: "They will call you a \u2018coward\u2019 or \u2018scared\u2019 for not wanting an intrusive mask over your face (for no reason)", as exemplified by posts containing cluster 2 gists in our data. Over time, the proliferation of anti-mask views, followed by extreme resistance as demonstrated by violent altercations and wide-scale protests across the nation, may have led people to abandon mask-wearing practices [36], which in turn may have led to an increase in COVID-19 cases within a relatively short time frame of 5 days, as indicated in our results.

Rising Hospitalization Trends Prompt Causal Talk on Lockdown Impact. Our findings show that nationwide trends in the number of patients hospitalized in both general and intensive care units significantly prompt more gists discussing the impact of lockdowns, with lags of 9 and 14 days, respectively (Table 5).
Nationwide lockdowns were implemented to curb steep rises in COVID-19 cases and hospitalization rates. In fact, some posts containing cluster 3 gists explicitly link lockdowns with hospitalizations: "The main reason for implementing restrictions or lockdowns was to prevent ICUs from overflowing." Despite their necessity and intended benefit as a public health measure, studies have shown that lockdowns significantly contributed to social isolation, declining mental health, and a rise in domestic violence across the U.S. [18]. As the lockdown continued to amplify challenges and problems in people\u2019s lives, rising hospitalization trends across the country may have heightened people\u2019s fear and distress, leading to more intensified and causal online discourse on the lockdown\u2019s impact on everyday life. Such sentiments are clearly expressed across posts containing cluster 3 gists: "People are literally starting to go hungry because of lockdown restrictions"; "The implementation of lockdowns has resulted in more harm than good".

Rising Trends in COVID-19 Cases Prompt Causal Talk on the Pandemic\u2019s Macro-Level Impact. Nationwide trends in COVID-19 cases significantly Granger-cause the volume of gists discussing the pandemic\u2019s impact on society at large, with a lag of 9 days for both total and new cases. In other words, increasing trends in COVID-19 cases seem to nudge people to talk causally about the macro-level consequences of COVID-19. COVID-19 presented major economic and social setbacks that impacted all aspects of society. Some of these concerns were expressed across posts containing cluster 4 gists that linked the pandemic with economic crises ("The pandemic caused one of the largest economic crises, which in turn led to one of the largest poverty and hunger crises"), decreased life expectancy ("The COVID-19 pandemic has caused the biggest drop in US life expectancy since the second world war"), potentially oppressive public health measures ("The cause of the next deadly pandemic will lead to the implementation of authoritarian prevention measures"), and even racism ("The fact that Covid19 affects people of color more than whites is the cause of the conclusion that Covid19 is racist"). With COVID-19 cases rising and circumstances remaining unpredictable, people may have become more anxious and distressed about the long-term effects on society. Consequently, this may have led individuals to discuss the pandemic\u2019s impact in a causal manner on social media, as they try to make sense of its far-reaching consequences for society [84].

7 DISCUSSION

In summary, our findings underscore RBIC\u2019s effectiveness in efficiently predicting social media gists at scale (Study 1), thereby enriching our insight into the underlying mental constructs that shape people\u2019s health decisions and attitudes towards public health practices. In Study 2, we cluster and track the evolution of such gists, revealing key themes in online arguments against pandemic health practices. These gist volumes closely align with significant topical events, such as health announcements, policy changes, and leadership statements. In Study 3, we empirically demonstrate how gist volumes significantly drive subreddit engagement patterns (upvotes and comments).
Finally, Study 4 reveals the interplay between gist patterns in anti-COVID-19 subreddits and nationwide health trends. We discuss the implications of these findings below.

7.1 Harnessing Large Language Models in Computational Social Science (CSS) Research in HCI

Prompt-based LLMs are increasingly used in the CHI community [20, 56, 76, 110], primarily contributing to the development of applications like chatbots [46] and tools for co-writing [56], virtual simulations [108], story-telling [23], and visualization enhancement [96]. Such studies have primarily focused on using LLMs as production tools [56] rather than as tools for analysis. More recently, computational social scientists in HCI have used prompt-based LLMs for text analyses [34, 102, 123]. However, there remain several challenges to using LLMs in the nuanced examination of social media discourse. First, traditional NLP models and commonly used LLMs in CSS research often lack reasoning capabilities [116]. For instance, BERT-based models, which are extensively used in HCI research analyzing large volumes of social media data [27, 60], are typically fine-tuned for specific discrete downstream tasks (e.g., classification). While these pretrained language models have shown promise in performing discrete analyses, some emerging HCI research [116, 117] demonstrates the additional value of prompting LLMs to perform multi-step reasoning for a more comprehensive analysis. Building on these prior insights, RBIC aims to enable a more nuanced analysis of social media discourse by leveraging the multi-step reasoning capabilities of large language models. To this end, RBIC operates by performing multiple step-by-step, interrelated sub-tasks (question-answering, classification, extraction, generation) prior to generating its final output. This incremental coaching mechanism enhances the model\u2019s overall understanding and performance of the final task, allowing us to analyze social media discourse in a more comprehensive and nuanced way. Second, LLM development paradigms often incentivize researchers to optimize model performance on established evaluation datasets [20, 58]. While valuable for comparing an LLM\u2019s performance with other models, this approach may not yield high performance on new, unseen, in-the-wild datasets [22, 123] or on tasks that are slightly different from those the model was evaluated on [123]. As a result, this may limit the potential application of such LLMs for analyzing intricate, heterogeneous in-the-wild data, such as unstructured social media conversations. The role-based cognition component of RBIC addresses this limitation by allowing researchers to define and customize the role of any prompt-based LLM to perform a complex and nuanced language task. By introducing and applying RBIC in the analysis of social media conversations, we demonstrate the versatility and effectiveness of prompt-based LLMs in identifying and synthesizing nuanced linguistic patterns, thus broadening the potential application of prompt-based LLMs for theory-driven textual analysis in CSS research in the HCI domain.
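To make the incremental coaching pattern concrete, the sketch below chains the four sub-tasks in a single hypothetical pipeline. The role text, the prompts, and the `complete` helper are illustrative assumptions, not the paper's actual RBIC prompts.

```python
# Hedged sketch of a role-based, incremental prompting pipeline. `complete`
# stands in for any chat-completion client (prompt in, text out).
from typing import Callable

ROLE = ("You are an expert in fuzzy-trace theory. You extract the gist, i.e., "
        "the bottom-line causal meaning, of social media posts.")

def rbic_pipeline(post: str, complete: Callable[[str], str]) -> str:
    history = f"{ROLE}\n\nPost: {post}\n"
    steps = [
        "Step 1 (question answering): What claim is the author making?",
        "Step 2 (classification): Does the post express a cause-effect relation? Answer yes/no.",
        "Step 3 (extraction): Quote the cause span and the effect span.",
        "Step 4 (generation): State the gist as one 'X causes Y' sentence.",
    ]
    for step in steps:
        answer = complete(history + step)
        history += f"{step}\n{answer}\n"  # feed each sub-task result forward
    return answer  # the final generated gist
```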
7.2 Leveraging Causal Language Patterns in Online Content Moderation Practices

Our results show that the volumes of gists across certain topics are significantly predictive of up-voting and commenting patterns, and vice versa, in banned subreddits that oppose pandemic health practices. For example, daily gist volumes significantly predict up-voting and commenting behavior across topics related to vaccines, masks, and the pandemic\u2019s impacts, highlighting the linguistic power of gists in spreading online information, as demonstrated in prior literature [85, 86]. Similarly, our findings show that increasing trends in vaccine adoption in the U.S. are strongly predictive of growing volumes of vaccine-related gists in subreddits whose members are generally skeptical of vaccines. While a nationwide rise in vaccine uptake is certainly beneficial, such conditions may present challenging contexts that push vaccine skeptics to become further entrenched in their views. Vaccine opponents exposed to situations that contradict their perceptions are especially vulnerable to confirmation biases [5], which may lead to an increased tendency to express their anti-vaccine sentiments in online communities in a causal manner, as implied by our findings.

These insights underscore the critical role of understanding and monitoring causal language patterns in public health discourse, particularly within online spaces. Current content moderation practices that rely on language models traditionally focus on flagging hate speech or monitoring specific keywords [91]. However, our research suggests that monitoring causal language patterns can be a valuable addition to these content moderation practices, especially in controversial online communities where people exchange and learn health information. By leveraging nuanced insights from gists across various health topics, content moderation can become more effective in identifying and managing discussions that may contribute to the spread of online health misinformation or resistance to public health guidelines.

7.2.1 Design Implications for Moderation Dashboard: Prior studies have shown that the design of a social media platform plays an important role in promoting transparency in content moderation [47]. Moderators often fail to articulate what aspect of the content prompted moderation or why such moderation was necessary [47]. The approach taken in our study can be built upon to effectively inform users about the consequences of their posting behavior and about which aspects of their posts can potentially lead to negative outcomes. The results can also inform design strategies that platforms can undertake to assist moderators in communicating such information to users. Understanding and identifying causality can be difficult for humans, as causality may be expressed implicitly and across sentences, i.e., intersententially [93]. Currently, there is no automated mechanism for moderators to systematically identify and understand the impact of causal language across online discussions. A design feature in the moderation dashboard, such as the one shown in Fig. 4 (Appendix D), serves as an illustrative example of how RBIC may address this gap. For example, when a moderator clicks a button called \u2018Enable Gist Detection (RBIC)\u2019, an RBIC-powered extension can automatically scan posts, highlight the cause-and-effect pairs, and identify the overarching gists within the posts. This functionality may also allow moderators to see a list of top gists across community discussions in descending order of gist volume, with an option to organize these gists by engagement metrics, including the upvote ratio and comment volume, as sketched below.
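A gist-ranking view of the kind just described could be assembled from per-post gist detections in a few lines; the DataFrame schema below is hypothetical.

```python
# Sketch of the dashboard-style ranking: surface top gists by volume, then by
# mean upvote ratio and total comment count. The input data is illustrative.
import pandas as pd

posts = pd.DataFrame({
    "gist":         ["vaccine mandates cause job loss"] * 3
                    + ["masks reduce lung function"] * 2,
    "upvote_ratio": [0.91, 0.84, 0.88, 0.77, 0.80],
    "n_comments":   [120, 45, 60, 30, 22],
})

top_gists = (posts.groupby("gist")
                  .agg(volume=("gist", "size"),
                       mean_upvote=("upvote_ratio", "mean"),
                       total_comments=("n_comments", "sum"))
                  .sort_values(["volume", "mean_upvote"], ascending=False))
print(top_gists)
```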
Additionally, the system may be designed such that moderators can drill down into the posts that pertain to each of these top gists, with the system highlighting the text spans corresponding to the cause and the effect in each post.

7.2.2 Improving Moderation and Community Guidelines. Posts that do not contain moderator-specified keywords (e.g., profanity), or that avoid explicit causal language, can still violate community norms or include misleading information in subtle ways [72]. Traditional keyword-based filters fall short in identifying such content [45]. This can lead to difficulties in setting specific rules for moderation practices, explaining moderation decisions, or adapting community guidelines during critical times, such as a global pandemic. With RBIC-powered gist detection, moderators can scale the search for such posts to identify those that reflect common and theoretically predicted disconnects between the public and public health experts. This mechanism can potentially enable moderators to use concrete examples to better explain moderation decisions, as well as improve community guidelines to explain how posts containing implicit causal narratives may impact people\u2019s knowledge and decisions around safe health practices, as shown in our work.

7.3 Broader Implications for Understanding Engagement Patterns Across Online Communities and Offline Health Outcomes During Public Health Crises

Our work shows that capturing psychologically important language patterns across social media, in the form of gists, can be useful in predicting human behavior and, consequently, health outcomes. In Study 3, we demonstrate that fluctuations in the volume of gists can significantly predict online engagement patterns, specifically in terms of up-vote ratio patterns (H1) and the volume of comments (H2). This has important implications for researchers studying user behaviors in online communities [37, 118]. Researchers have shown that the virality of online content is often influenced by a positivity bias in engagement metrics [51], such as up-votes and comments: posts receiving higher engagement are more visible and thus have a greater likelihood of going viral [3, 81]. This tendency can exacerbate the spread of misinformation, especially during public health crises [1, 95]. Posts challenging pandemic health practices are often laden with misleading information [55], and online posts embedded with gists are more likely to attract user engagement than those without gists [14]. The H1 and H2 results demonstrate that such user engagement patterns are predictable through gist volumes, highlighting the potential of using RBIC for gist analysis to track and understand the dynamics of how health-related content, especially during pandemics, resonates with and influences online user engagement. This insight is crucial for developing strategies to combat misinformation and guide public health communication effectively. Furthermore, HCI research in crisis informatics has contributed to advancing public health monitoring systems by developing tools that track public health outcomes, online engagement patterns, or health-related topics on social media [59, 74]. Some of these tools that monitor online conversations extract various linguistic aspects from social media discourse, such as sentiment [59] and topical keywords [104].
While these advancements have been valuable in providing descriptive insights, most do not go the full distance in linking such linguistic patterns to real-world health decisions and outcomes [55]. Our work addresses this gap by demonstrating how RBIC can be leveraged to better connect online conversation patterns to offline health outcome trends. Study 4 results show that online causal talk related to controversial health practices, such as face masks, is significantly predictive of total and new daily COVID-19 cases across the U.S. Likewise, our findings show that the uncertainty arising from deteriorating trends in national health outcomes may prompt people to increasingly engage online in causal discussions of the pandemic\u2019s influence on their lives and society as a whole. For example, nationwide COVID-19 case and hospitalization patterns significantly drive up the volume of gist-based conversations concerning the pandemic\u2019s impact on society, the economy, and individuals under lockdown. These findings imply that integrating gist-based language patterns into public health monitoring systems holds promise for gaining valuable insights into the cognition that underlies skepticism and resistance to public health practices and, by extension, their impact on real-world health outcomes. Integrating RBIC-powered gist detection and real-time analysis of national health indicators into such tools can potentially enhance public health agencies\u2019 ability to understand and respond to critical health challenges in relation to people\u2019s online behavior.

8 CONCLUSION & LIMITATIONS

This research synthesizes LLM techniques with theoretical perspectives from cognitive and social psychology to advance knowledge of health decisions and outcomes in the context of the most recent pandemic. Our work is the first to systematically identify and characterize how causal language patterns surrounding anti-pandemic health practices on social media are significantly predictive of national health outcomes. These findings carry crucial implications for public health communication and policy interventions. By recognizing the influential role of causal language patterns across social media in shaping national health outcomes, public health efforts and online moderation practices can be tailored to address and mitigate the impact of social media conversations that adversely affect public health.

Our study has a limitation in its data source: it concentrates on Reddit posts and omits comments. This exclusion is primarily due to certain months of comment data being either restricted or deleted in compliance with Reddit\u2019s policies by Archive administrators. While this focus allows for an in-depth analysis of original posts, it may not capture the full discourse, including the diverse viewpoints and nuanced discussions that often take place in the comments section. Consequently, our findings may offer a limited perspective on the topic under study. Future work might consider alternate ways to capture community discourse, such as interviews or surveys, to complement the data from Reddit posts. Furthermore, as future datasets expand, integrating machine learning models capable of detecting subtle changes in discourse over time and adjusting to extensive datasets may offer a dynamic view of how gists evolve.
This method has the potential to uncover patterns and trends that may not be immediately obvious when using a traditional unsupervised clustering approach. In summary, we built an LLM-based model to identify psychologically influential mental representations\u2013gists\u2013from social media posts, demonstrated the links between these gists and public health events, and verified associations with user engagement and national health trends, with implications for HCI design and the promotion of public health." + }, + { + "url": "http://arxiv.org/abs/2402.16313v1", + "title": "Chain-of-Discussion: A Multi-Model Framework for Complex Evidence-Based Question Answering", + "abstract": "Open-ended question answering requires models to find appropriate evidence to\nform well-reasoned, comprehensive and helpful answers. In practical\napplications, models also need to engage in extended discussions on potential\nscenarios closely relevant to the question. With augmentation of retrieval\nmodule, open-source Large Language Models (LLMs) can produce coherent answers\noften with different focuses, but are still sub-optimal in terms of reliable\nevidence selection and in-depth question analysis. In this paper, we propose a\nnovel Chain-of-Discussion framework to leverage the synergy among multiple\nopen-source LLMs aiming to provide \\textbf{more correct} and \\textbf{more\ncomprehensive} answers for open-ended QA, although they are not strong enough\nindividually. Our experiments show that discussions among multiple LLMs play a\nvital role in enhancing the quality of answers. We release our data and code at\n\\url{https://github.com/kobayashikanna01/Chain-of-Discussion}.", + "authors": "Mingxu Tao, Dongyan Zhao, Yansong Feng", + "published": "2024-02-26", + "updated": "2024-02-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "LLM AND Reasoning", + "gt": "Chain-of-Discussion: A Multi-Model Framework for Complex Evidence-Based Question Answering", + "main_content": "Introduction Large Language Models (LLMs) have demonstrated remarkable language generation capabilities (Brown et al., 2020; Touvron et al., 2023; OpenAI, 2023), propelling advancements in various understanding/generation tasks, including open-domain question answering (QA) (Song et al., 2024). However, for complex open-ended question answering, which plays an important role in human-AI interaction, LLMs may still produce output with hallucination and often deliver inferior performance compared to short-form QA (Huang et al., 2023a). This task usually requires LLMs to analyze the questions first, retrieve evidence accordingly, then form a long-form answer which is expected to be correct and well-reasoned with details and proper evidence supported. It has a wide range of applications, from legal consultations and
medical advice to education support and financial analysis, where users may pose various complex and knowledge-intensive questions.

Figure 1: The process of Chain-of-Discussion, compared with chain-of-thought, contrasting a human chain-of-thought analysis with the multi-LLM Chain-of-Discussion pipeline on a child-support consultation question. The green parts are necessary to answer the user\u2019s question. The blue parts indicate content closely related to the question, which can be used for detailed discussion. The red parts are irrelevant content, which should be avoided.

Although current LLMs can produce long and coherent texts (Peng et al., 2024), complex open-ended QA is still an admittedly challenging task, even with augmented retrieval modules. The challenges primarily arise from two aspects. Firstly, retrieval models are not entirely perfect and inevitably introduce noise into the retrieval results. Let us take legal consultation as an example. In Figure 1, the model is required to respond to a question regarding the necessity of child support payments. Due to the semantic similarity between obligations for supporting children (financially) and raising/protecting children (physically), the retrieval model may wrongly return law articles pertaining to guardianship qualifications. LLMs usually cannot filter out all this noisy evidence, which may propagate and lead to incomplete analysis, wrong reasoning paths, biased opinions, and finally problematic or even misleading answers. Secondly, we expect LLMs to output correct responses and consistent explanations, while providing more useful suggestions about the potential scenarios not directly mentioned in the questions but indeed helpful for users\u2019 current or near-future situations. For instance, in Figure 1, when responding to a question about the obligation to pay child support for a user facing financial difficulties, the model should also remind her/him of the standards for child support payments and of ways to negotiate a reduction in the burden of child support given her/his current situation. This is hard even for humans, who must have access to proper evidence, e.g., the necessary or closely related law articles here, and accordingly provide kind reminders with reasonable explanations; let alone LLMs without abundant annotations to train/fine-tune on, which usually focus on the specific facts literally appearing in the questions. In this work, we will focus on the complex evidence-based question answering (CEBQA) task, a typical example of open-ended QA tasks.
We collect a high-quality CEBQA dataset consisting of 200 carefully annotated legal consultation questions in the field of marriage and family affairs. To address the above challenges, we propose a novel chain-of-thought framework, the Chain-of-Discussion (CoD), which involves multiple LLMs in summarizing, criticizing, and revising each other\u2019s output to reach a well-supported and helpful response. Our motivations are two-fold. On the one hand, different LLMs may have different intrinsic knowledge and reasoning capabilities due to different training data. Thus, multiple LLMs are less likely to make errors concurrently than a single LLM. Recent works (Zhang et al., 2023) show that checking the consistency across multiple LLMs helps reduce output hallucinations. Specifically, we propose a criticize-and-revise framework, which requires multiple LLMs to discuss and reach a consensus for a better response. On the other hand, for questions that need to involve helpful scenarios or possible extensions, we conjecture that multiple LLMs may provide a diverse set of perspectives to address these possibilities. We thus propose a summarizing step to gather different but helpful perspectives from multiple LLMs, which will eventually form comprehensive and detailed responses based on the summarized analyses. Different from existing multi-model interaction works (Chan et al., 2024; Zhang et al., 2023) that use strong closed-source LLM APIs, e.g., GPT-4 (OpenAI, 2023), we take on the challenge of studying how to best exploit small-scale open-source LLMs, e.g., around 7B parameters, for a shared objective, while pushing the boundary of research regarding multi-model interaction. Our main contributions are as follows: (1) We collect a high-quality CEBQA dataset consisting of 200 legal consultation questions in Chinese with carefully annotated evidence and answers. (2) We propose a novel chain-of-discussion framework, i.e., summarize-criticize-revise, which harnesses the synergy among multiple open-source LLMs to generate more accurate and helpful responses. (3) Both GPT-4-based and evidence-centric evaluations demonstrate that our framework can help small-scale LLMs benefit from each other and improve the overall quality in terms of correctness and comprehensiveness.

2 Related Works

Retrieval-Augmented Generation. Lewis et al. (2020) initially propose the paradigm of retrieval-augmented generation (RAG), which can effectively reduce hallucinations within the texts generated by LLMs. RAG offers a vital solution to mitigate the problem of LLMs lacking domain-specific knowledge, thereby enhancing the credibility of LLMs (Gao et al., 2023). In the RAG paradigm, models typically undergo multiple generation steps to achieve the final results. For a user input, models first run a retriever to scan the store of evidence and select several documents as reference. Subsequently, models should determine when and whether to use each evidence document before generating (Izacard et al., 2022; Shi et al., 2023b; Yu et al., 2023; Trivedi et al., 2023). In this work, we face challenges more complex than RAG: while the model filters out irrelevant evidence, it also needs to retain evidence relevant to potential scenarios. Sometimes, determining which evidence can be used for potential scenarios and which evidence is irrelevant is a challenging issue even for humans.

Chain-of-Thought (CoT). Previous works demonstrate that LLMs have a promising capability to decompose a complex question into several intermediate steps (Wei et al., 2022; Kojima et al., 2022).
By segmenting the original question, LLMs can focus on handling each simple sub-question at each step, thus yielding more accurate results (Zhou et al., 2023). The CoT framework is now widely employed in diverse practical NLP applications (Zelikman et al., 2022; Shi et al., 2023a; Wang et al., 2023). Previous works also employ CoT in the self-correction process of LLMs, which aims to re-generate better outputs. For instance, in Chain-of-Verification, the model generates several queries to verify its original answer, and then revises the answer based on the verification results (Dhuliawala et al., 2023). Most of these efforts perform self-checking based on a single model. However, we study a novel CoT framework for multi-model interactive checking and re-generating.

3 Preliminaries

Task Definition. In CEBQA tasks, given a user\u2019s question q and a store of evidence documents D, a model should analyze q first, find necessary evidence D_q = {d_1, ..., d_t} from D accordingly, and generate a paragraph r as the final response. For instance, in the legal consultation task, users may ask what to do given her/his current situation. The model should find supportive evidence from a store of law articles, judicial interpretations, or previous legal cases, and generate a helpful and detailed response. Specifically, we expect the generated responses to meet the requirements in terms of correctness and comprehensiveness. (1) Correctness: The responses should be based on the evidence that can support answering the questions, and refrain from employing irrelevant evidence or misinterpreting the evidence out of context. (2) Comprehensiveness: The responses should engage in discussions about potential scenarios that would be relevant or helpful to the users, even if not explicitly mentioned in the users\u2019 questions. We note that it is hard to guarantee that all the retrieved evidence pieces can be perfectly used to answer the question. Therefore, similar to RAG, models should filter out irrelevant evidence. However, it is more challenging for models to carefully retain the evidence that can be used for discussions about potential scenarios, even though that evidence may not directly support answering the question.

Baseline Framework: CoT. Previous works have revealed that the CoT prompt can enhance the ability of LLMs to handle complex reasoning tasks (Wei et al., 2022; Kojima et al., 2022). Inspired by these works, we employ a multi-step prompt to stimulate LLMs to generate more correct and more comprehensive answers. We initially prompt LLMs to analyze the question q, including identifying the possible role of the user, understanding the explicit and implicit demands of the user, and determining what types of evidence are needed to answer the question. The generated question analysis can be denoted as a^que_q. The next step is to judge whether each evidence document can serve as a potential basis for responding to the question q. Here, we employ a prompt to feed the LLM with the question q, the question analysis a^que_q, and a specific evidence document d_i. The LLM then needs to produce an evidence analysis a^evi_{d_i}, judging whether d_i addresses the issues raised in q and whether d_i can probably be used in the response. The LLM with parameters \u03b8 should finally respond to the question q according to the question analysis a^que_q and the evidence analyses {a^evi_{d_i}}_i, based on the evidence document set D_q:

r = f(q, D_q, a^que_q, {a^evi_{d_1}, ..., a^evi_{d_t}} | \u03b8).
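A minimal sketch of this two-step baseline, with `complete` standing in for any chat-completion client; the prompt wording here is illustrative, not the paper's actual prompts.

```python
# Sketch of the CoT baseline formalized above: analyze the question, analyze
# each evidence document, then generate r = f(q, D_q, a_que, {a_evi}).
from typing import Callable, List

def cot_baseline(q: str, docs: List[str], complete: Callable[[str], str]) -> str:
    a_que = complete(f"Analyze this legal question: who is asking, what do they "
                     f"need, and what evidence is required?\nQuestion: {q}")
    a_evi = [complete(f"Question: {q}\nQuestion analysis: {a_que}\n"
                      f"Evidence: {d}\nCan this evidence support an answer? Explain.")
             for d in docs]
    context = "\n".join(f"[Evidence {i}] {d}\n[Analysis {i}] {a}"
                        for i, (d, a) in enumerate(zip(docs, a_evi), 1))
    return complete(f"Question: {q}\nQuestion analysis: {a_que}\n{context}\n"
                    f"Write a correct, well-grounded, comprehensive answer.")
```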
As observed in our pilot study, a single small-scale LLM could generate fluent answers, but often with incomplete analysis or wrong reasoning paths.

4 CoD: Summarize, Criticize, and Revise

Our Chain-of-Discussion framework leverages interactive discussions among multiple LLMs, thereby addressing potential shortcomings in individual models\u2019 intrinsic knowledge. Similar to the baseline, we employ a two-stage analyzing pipeline that instructs LLMs to analyze the question and the evidence separately. To address the correctness and comprehensiveness of generated answers, at the stage of question analysis we encourage models to read and summarize others\u2019 analyses so as to take more scenarios closely relevant to the question into account, with the purpose of augmenting comprehensiveness. During the stage of evidence analysis, we require all other LLMs to criticize the evidence analysis of each LLM. Subsequently, the model reads others\u2019 critiques and determines whether to revise its own analysis or not. The model finally generates a correct and more helpful response based on the summarized question analysis and the revised evidence analysis.

4.1 Stage 1: Question Analysis

Formally, suppose there are n accessible LLMs, denoted as M_1, ..., M_n. For a given question q and the retrieved evidence D_q, we aim to employ the target LLM M_k to generate a response, with the assistance of the remaining LLMs. We first instruct the LLMs to analyze the question, including the facts mentioned in q, the primary needs of the user, and potential scenarios associated with the question. We observe that LLMs may perform poorly in analyzing potential scenarios when relying solely on their intrinsic knowledge, especially those models that have not been pre-trained or supervised fine-tuned on domain-specific data. Thus, we argue that the evidence documents D_q can serve as vital cues about the potential scenarios not mentioned in q. Different LLMs can have varying preferences in analyzing the potential scenarios. Therefore, we believe that by integrating the outputs of multiple LLMs, we can take more helpful scenarios into account, thus improving the comprehensiveness of question analysis. We prompt each LLM M_i to analyze the question q, with the retrieved evidence D_q as a reference:

a^que_{q,M_i} = f_que(q, D_q | \u03b8_{M_i}).

We then employ the target LLM M_k to summarize the question analyses of all models, according to the following instructions:

\u2022 Consistency: If the majority of LLMs provide similar analyses regarding a fact in the question or a potential scenario, then it is likely to be correct. You can include it in the summary.
\u2022 Comprehensiveness: If a minority of LLMs hold a particular viewpoint in their analyses with reasons, it does not imply its unreliability. You should scrutinize this content, assessing its logical coherence and relevance to the question.

The summarized question analysis is

a^que_q = f_sum(q, a^que_{q,M_1}, ..., a^que_{q,M_n} | \u03b8_{M_k}).

4.2 Stage 2: Evidence Analysis

Incorporating many irrelevant evidence documents as input would inevitably introduce noise, which could deteriorate model performance. Thus, we should discern which evidence documents should be used to address the question. For an evidence document d_j \u2208 D_q, we prompt the target model M_k to analyze it based on the question and the question analysis:

\u02c6a^evi_{d_j} = f_evi(d_j, q, a^que_q | \u03b8_{M_k}).
However, a single LLM might generate hallucinated outputs (Li et al., 2023b; Huang et al., 2023a) and incorrectly assess the relevance between evidence documents and the given question. Inspired by previous work (Zhang et al., 2023), we propose a multi-party discussion framework to improve the quality of evidence analysis. First, we instruct each LLM, excluding M_k, to criticize the evidence analysis \u02c6a^evi_{d_j}. Each critic model M_i should explicitly output whether it holds opinions contrary to \u02c6a^evi_{d_j}, which we denote as c^{d_j}_i. In this work, we employ a revising threshold \u03b4. If the proportion of opposite opinions in the critiques exceeds \u03b4, the target model needs to revise its evidence analysis:

a^rev_{d_j} = f_rev(q, d_j, a^que_q, \u02c6a^evi_{d_j} | {c^{d_j}_i}_i, \u03b8_{M_k}).

We assume that a critique calling for revision is reliable only when a majority of critic models reach a consensus. Otherwise, we retain the original evidence analysis. Formally, we collect the evidence analysis as follows:

a^evi_{d_j} = \u02c6a^evi_{d_j}, if |{c_i | c_i = opposite}| / |{c_i}| \u2264 \u03b4; a^rev_{d_j}, otherwise.

4.3 Response Generation

For a fair comparison, we employ prompts similar to those of the baseline framework to generate responses. We denote the response as

r = f_ans(q, D_q, a^que_q, {a^evi_{d_1}, ..., a^evi_{d_t}} | \u03b8_{M_k}).
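The Stage 2 rule above can be sketched as follows; the critique parsing (a keyword check on "OPPOSE") and the prompt wording are simplifying assumptions.

```python
# Sketch of criticize-and-revise: the target model's evidence analysis is
# revised only when the fraction of critics voting "opposite" exceeds delta.
from typing import Callable, List

def criticize_and_revise(q: str, doc: str, a_que: str, a_evi: str,
                         critics: List[Callable[[str], str]],
                         target: Callable[[str], str],
                         delta: float = 0.5) -> str:
    critiques = [c(f"Question: {q}\nEvidence: {doc}\nAnalysis: {a_evi}\n"
                   f"Do you hold the opposite opinion? Start with AGREE or "
                   f"OPPOSE, then explain.") for c in critics]
    opposed = sum(c.strip().upper().startswith("OPPOSE") for c in critiques)
    if opposed / len(critiques) <= delta:
        return a_evi  # keep the original analysis
    return target(f"Question: {q}\nEvidence: {doc}\nQuestion analysis: {a_que}\n"
                  f"Your analysis: {a_evi}\nCritiques:\n" + "\n".join(critiques)
                  + "\nRevise your analysis accordingly.")
```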
5 Experiments

As discussed in Section 3, legal consultation is a typical example of CEBQA tasks, requiring models to generate an accurate response that includes helpful discussions about relevant scenarios. In our experiments, we delve into the legal consultation task in China, where all legal activities should be based on law articles and judicial interpretations, which can naturally be considered the evidence store in our framework.

5.1 Data Collection

We focus on legal consultation in the fields of marriage, family affairs, and inheritance, which cover various types of legal disputes such as divorce, custody, contracts, property, and so on. We collect 200 questions from real users and the corresponding responses from consultants through Web search engines.

Data Quality. To ensure data quality, we manually check the questions and answers. We correct all typos but retain the informal expressions in the questions. Note that there may be omissions or slight word-order inversions in the questions, which poses a challenge to the model\u2019s reasoning capabilities. We employ two annotators with a background in civil law to examine the correctness and logical coherence of the responses. For the responses identified with errors, we encourage the two annotators to discuss and reach a consensus for modifications; otherwise, we leave them as they are.

Evidence Annotation. We construct the evidence store based on all articles of the Civil Code and the Civil Procedure Law and their judicial interpretations. We categorize these articles into three types: necessary, optional, and not required. The necessary articles are the ones highly relevant to the question, while the optional articles can serve as the basis for the discussion of potential scenarios. Please see more details in Appendix B. We ensure there are 5 articles in each example. On average, each example contains 1.52 necessary articles, 1.23 optional articles, and 2.25 not-required articles, meaning approximately 45% of the retrieved articles are not required at all.

5.2 Experimental Setup

In this work, we select open-source LLMs trained by different research groups. We hope these models have learnt different knowledge and gained different reasoning capabilities from pre-training. Then, these models may provide various analytical perspectives via interaction and compensate for deficiencies in their own reasoning capabilities. We study four open-source fine-tuned LLMs: Baichuan2-7B (Baichuan, 2023), Deepseek-7B (DeepSeek-AI, 2024), Qwen-7B (Bai et al., 2023), and Xverse-7B (https://huggingface.co/xverse/XVERSE-7B-Chat), which are four of the best-performing 7B-parameter LLMs on CMMLU (Li et al., 2023a). When we use a specific LLM as the target model, the other three LLMs are expected to generate diverse question analyses and criticize the evidence analysis of the target model. We note that the two stages in Chain-of-Discussion are independent of each other. Therefore, we can investigate how they contribute to the ultimate performance with the following settings:

Single-model baselines (BS): Question analysis, article analysis, and response are all generated by a single LLM.
Only Stage 1 (S1): All LLMs produce question analyses. The target LLM summarizes these analyses and proceeds with the rest by itself.
Only Stage 2 (S2): The three other LLMs criticize the article analysis generated by the target LLM. The question analysis and the final response are generated by the target LLM on its own.
Chain-of-Discussion (S1S2): All LLMs are involved in both question analysis and article analysis. Eventually, the target LLM produces the response by itself.

We employ each LLM as the target model, replicating the experimental settings, and report the performance of each LLM in the target role. Please see more details in Appendix A.

Evaluation Metrics. Different from short-form open-domain QA, whose answers are usually several words or sentences, the responses in CEBQA tasks can consist of several hundred or even thousands of words. Therefore, it is impossible to employ metrics such as F1 or exact match, which are widely used in QA benchmarks (Joshi et al., 2017; Rajpurkar et al., 2018). These generated responses are also complex, containing facts and causal relations to be verified; thus, it would be difficult to evaluate them all by humans due to unacceptable time costs. Following previous works (Liu et al., 2023; Chan et al., 2024), we employ GPT-4 to evaluate the quality of generated responses, with the human-written responses and the necessary and optional articles as reference. We prompt gpt4-turbo-0125 to score the responses based on correctness and comprehensiveness. The scorer should assign integer scores between 1 and 10. If there is no clear reason to indicate that the responses of LLMs are significantly better or worse than human-written ones, a score of around 7 should be given. Please see the scoring prompts in Appendix E.

Table 1: The average scores of each target LLM and each setting, evaluated by GPT-4.

Target LLM     Setting   Avg. Score   \u0394Score
Baichuan2-7B   BS        5.750        \u2013
               S1        6.030        +0.280
               S2        5.935        +0.185
               S1S2      6.090        +0.340
Deepseek-7B    BS        6.465        \u2013
               S1        6.505        +0.040
               S2        6.480        +0.015
               S1S2      6.580        +0.115
Qwen-7B        BS        5.835        \u2013
               S1        5.890        +0.055
               S2        5.815        \u22120.020
               S1S2      5.955        +0.120
Xverse-7B      BS        6.015        \u2013
               S1        5.995        \u22120.020
               S2        6.030        +0.015
               S1S2      6.125        +0.110

5.3 Main Results

Table 1 shows the evaluation results produced by GPT-4.
Target LLM      Setting   Avg. Score   ΔScore
Baichuan2-7B    BS        5.750        –
                S1        6.030        +0.280
                S2        5.935        +0.185
                S1S2      6.090        +0.340
Deepseek-7B     BS        6.465        –
                S1        6.505        +0.040
                S2        6.480        +0.015
                S1S2      6.580        +0.115
Qwen-7B         BS        5.835        –
                S1        5.890        +0.055
                S2        5.815        -0.020
                S1S2      5.955        +0.120
Xverse-7B       BS        6.015        –
                S1        5.995        -0.020
                S2        6.030        +0.015
                S1S2      6.125        +0.110

Table 1: The average scores of each target LLM under each setting, evaluated by GPT-4.

5.3 Main Results
Table 1 shows the evaluation results produced by GPT-4. Comparing the baseline framework (BS) with Chain-of-Discussion (S1S2), we find that each LLM obtains improvements from discussions with other LLMs: Baichuan2-7B increases by +0.340, Deepseek-7B by +0.115, Qwen-7B by +0.120, and Xverse-7B by +0.110. We also find that employing multi-model discussion at both stages brings more improvement than using it at one stage only. We acknowledge, however, that although Chain-of-Discussion enhances the LLMs, the CoD-augmented Baichuan2-7B, Qwen-7B, and Xverse-7B still cannot outperform Deepseek-7B under the baseline setting, trailing by around 0.5 points. These results suggest that the quality of responses primarily relies on the inherent ability of the LLM to comprehend contexts and then generate. We also notice that using multi-model discussion only at Stage 1 or only at Stage 2 fails to enhance Xverse-7B or Qwen-7B; we provide more discussion and case studies in Section 6.3.

6 Discussions
6.1 Evidence-Centric Evaluation
Different from previous question answering tasks (Joshi et al., 2017; Kwiatkowski et al., 2019), whose answers can be a few words or single sentences, CEBQA tasks require LLMs to provide responses that are both detailed and correct for the question and its potential scenarios. We therefore wonder whether the Chain-of-Discussion framework can enhance the comprehensiveness and correctness of model outputs. Since LLMs should also reference optional evidence when discussing the details of potential scenarios, we can assess the correctness and comprehensiveness of responses by the accuracy of their references to the various types of evidence documents. We propose two accuracy metrics, N-Acc and O-Acc, to assess correctness and comprehensiveness, respectively. We utilize the not-required articles as negative samples; for N-Acc we employ the necessary articles as positive samples, and for O-Acc the optional articles. We employ a rule-based method to examine whether a response has used an article (see details in Appendix C). We compute the Macro average of N-Acc and O-Acc across all examples; if an example does not contain optional articles, it does not participate in the calculation of O-Acc.

Target LLM      Setting   N-Acc%   O-Acc%
Baichuan2-7B    BS        58.26    50.14
                S1        60.03    50.67
                S2        61.86    50.25
                S1S2      63.17    52.38
Deepseek-7B     BS        75.93    59.27
                S1        76.36    59.70
                S2        76.12    59.23
                S1S2      76.79    59.80
Qwen-7B         BS        69.87    60.98
                S1        70.31    61.63
                S2        70.64    63.65
                S1S2      71.29    64.20
Xverse-7B       BS        74.00    63.95
                S1        74.24    64.72
                S2        75.67    64.44
                S1S2      76.16    65.35

Table 2: The Macro average N-Acc and O-Acc results of each target LLM under each setting (in the original, the highest scores are bolded and the second highest underlined).
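A sketch of the Macro-averaged N-Acc/O-Acc computation just described; the rule-based `cites` check, which the paper details in its Appendix C, is reduced here to a simple substring match:

```python
def cites(response, article_id):
    # Placeholder for the paper's rule-based matching (their Appendix C):
    # here we simply check whether the article number is mentioned.
    return f"Article {article_id}" in response

def accuracy(response, positives, negatives):
    """Citing a positive article / not citing a negative one counts as correct."""
    correct = sum(cites(response, a) for a in positives)
    correct += sum(not cites(response, a) for a in negatives)
    return correct / (len(positives) + len(negatives))

def macro_n_o_acc(examples):
    """examples: dicts with 'response', 'necessary', 'optional', 'not_required'."""
    n_scores = [accuracy(e["response"], e["necessary"], e["not_required"])
                for e in examples]
    # Examples without optional articles are excluded from O-Acc.
    o_scores = [accuracy(e["response"], e["optional"], e["not_required"])
                for e in examples if e["optional"]]
    return sum(n_scores) / len(n_scores), sum(o_scores) / len(o_scores)
```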
Table 2 shows the results of each target LLM under the different experimental settings. Compared to the baselines (BS), the Chain-of-Discussion framework (S1S2) achieves around a 2% improvement on both N-Acc and O-Acc for Baichuan2-7B, Qwen-7B, and Xverse-7B. Even for Deepseek-7B, which performs best in the GPT-4-based evaluation, our framework still brings improvements of 0.86% and 0.53% on N-Acc and O-Acc, respectively. Recalling the GPT-4 evaluation results in Table 1, Baichuan2-7B obtains the largest improvement in overall quality; we find that this LLM also gets the largest improvements on N-Acc and O-Acc, of 4.91% and 2.24%, respectively. These results indicate that introducing multi-model discussion during both question analysis and evidence analysis increases the probability of LLMs referencing correct evidence, which can be one of the reasons why Chain-of-Discussion improves the quality of model responses. Comparing the results under the BS, S1, and S2 settings, we find that involving multiple LLMs at a single stage can indeed enhance both correctness and comprehensiveness. Overall, however, employing multi-model discussion in question analysis contributes more to comprehensiveness, while introducing other models in evidence analysis brings more improvement in correctness.

6.2 Manual Check
To further study the quality of responses generated by CoD, we randomly sample 30 cases and manually examine the responses in terms of fluency and logicality. We select the responses generated by Qwen-7B and Qwen-7B+CoD, which receive worse average GPT-4 scores than the responses of the other LLMs. We find that Qwen-7B with vanilla CoT has a poor ability to comprehend the articles, often producing logical errors in its responses, while the CoD mechanism introduces opinions and critiques from other LLMs, helping it distinguish ambiguous terms and reach better logicality.

[Table 3 case]
Question: What is the difference between resumption of marital relationship (复婚) and remarriage with another person (再婚)?
Article: Article 1046 A man and a woman shall enter into marriage freely and voluntarily. ...
Article 1083 Where, after divorce, both the man and the woman voluntarily intend to resume their marital relationship, they shall file for re-registration...
Qwen-7B: ... According to Article 1083, both parties should be voluntary for resumption of marital relationship, while there is no such limitation for remarriage with another person. ...
Qwen-7B+CoD: ... According to Article 1046, whether it is Fuhun (复婚) or Zaihun (再婚), both parties need to do so voluntarily.

Table 3: A case of Qwen-7B obtaining improvement from CoD. Violet texts are correct analysis, while the texts with yellow background are hallucinated parts.

Table 3 shows a case where CoD helps reduce logical errors in Qwen-7B's responses. We find that Qwen-7B fails to understand that both 复婚 (reconciling and remarrying) and 再婚 (remarrying another person) are considered marriage in legal terms and should thus comply with the provisions of Article 1046 rather than Article 1083. We believe that discussions with other models can, to some extent, reduce the hallucination caused by the target model's poor reasoning capabilities.

6.3 Limitations of Open-Source LLMs
Recall the results in Table 1, where using multi-model interaction at only one stage fails to enhance Xverse-7B or Qwen-7B. We have to acknowledge that, due to limitations in parameter size, these models might produce unreliable output or demonstrate inferior instruction-following capabilities. We conjecture there can be two reasons: (1) the target LLM may fail to discern errors in the question analyses of other LLMs and integrate them into the summary; (2) the target LLM does not always follow instructions, and refuses to modify the errors in its evidence analysis.

6.3.1 Hallucination Propagation in Summary
At the question analysis stage, we explicitly instruct the target LLM to assess whether the question analyses of other LLMs are logically correct and relevant to the question. However, the target model might still incorporate hallucinated analyses into the summary.
Table 4 illustrates a bad case where Baichuan2-7B incorporates inaccurate analysis from Deepseek-7B into the summary. We find that the LLMs correctly identify the primary intention behind the user's question, i.e., determining whether providing support for parents is a prerequisite for inheriting. However, Deepseek-7B also analyzes scenarios in which parents should pay support for children, and Baichuan2-7B fails to identify that this part of the analysis is irrelevant to the question. For people with a legal background, it is common sense that raising children and caring for elderly parents are two different concepts, and legally there is no causal relationship between them. We conjecture, however, that for LLMs both raising children and supporting parents represent financial support and life care between two generations. We believe that open-source LLMs with around 7B parameters may not have sufficient knowledge or strong enough reasoning capabilities; they may judge whether a question analysis is relevant to the question based on shallow semantic similarities.

[Table 4 case]
Question: Can a daughter/son still be eligible to inherit if she/he has not supported her/his parents?
Original question analysis:
Baichuan2: This question involves the relationship between inheritance rights and filial duties. The user may be concerned about whether a person who has not fulfilled filial duties still qualifies for inheritance. This may entail rules regarding the recognition of inheritance rights, the forfeiture and reinstatement of inheritance rights, and the role of filial duties in the determination of inheritance rights.
Question analyses of other LLMs:
Qwen: ...
Xverse: ...
Deepseek: ... The user may want to clarify whether not supporting parents will affect her/his eligibility to inherit the parents' estate. Additionally, the user may also inquire about her/his entitlement to demand child support payments from her/his parents, if the parents refuse to raise children. ...
Summarized question analysis:
Baichuan2: This question involves the relationship between inheritance rights and filial duties. The user may wonder if her/his child retains eligibility for inheritance despite failing to fulfill filial duties. This may entail rules regarding the recognition of inheritance rights, the forfeiture and reinstatement of inheritance rights, and the role of filial duties in the determination of inheritance rights. The user may also want to know the regulations regarding whether a parent who does not raise children is required to pay child support.

Table 4: A bad case of a hallucinated summary of question analysis. Violet texts are correct analysis, while the texts with yellow background are hallucinated parts.

6.3.2 Model Preference of Revising
Similar to the discussion on question analysis, the lack of good reasoning capabilities may also limit the performance of LLMs in revising evidence analysis. We find that LLMs sometimes tend to focus on the literal differences between questions and law articles, as illustrated by the original analysis in Table 5. We humans have the background knowledge that only courts have the authority to revoke guardianship, while the LLMs cannot yield a correct analysis. Users often omit commonsense information when posing questions, whereas law articles exhibit a rigorous and detailed structure; these stylistic differences between questions and evidence may lead to misjudgments by LLMs. We also find that different LLMs have varying preferences regarding whether to modify the evidence analysis.
For instance, compared to Qwen-7B, Baichuan2-7B exhibits a greater preference for revising the analyses of articles. To quantitatively study this issue, we first employ Baichuan2-7B to generate the original evidence analyses; Deepseek-7B and Xverse-7B then play the role of critics to determine which analyses should be revised. Finally, we use Baichuan2-7B and Qwen-7B to revise these analyses, respectively. We find that Baichuan2-7B successfully revises 96.5% of the analyses, while Qwen-7B revises only 56.1% of them. To mitigate the influence of target model selection on these conclusions, we also use Qwen-7B to provide the original evidence analyses, still with Deepseek-7B and Xverse-7B as the critics. Similarly, Baichuan2-7B revises 92.5% of the analyses, but Qwen-7B only 67.2%. We argue that an LLM's preference for refusing to revise may prevent it from obtaining better evidence analysis from the critiques; consequently, the Chain-of-Discussion framework may not bring as much improvement as expected. The preferences of LLMs can be affected by supervised fine-tuning and reward modeling (Ouyang et al., 2022; Rafailov et al., 2023); we hope to study the effect of supervised training on Chain-of-Discussion in the future.

[Table 5 case]
Question: Do you still need to pay child support after having your guardianship revoked?
Article: Article 37 Parents, children, and spouses who support the wards in the form of child support, support for elderly parents, or spousal support in accordance with the law shall continue to perform such obligations after they are disqualified by the people's courts as guardians.
Original analysis: Article 37 explicitly stipulates [the content of Article 37]. This article pertains to guardianship and child support, but since the question does not mention revocation by the People's Court, this article should not be used as a basis.
Low-quality modification:
Revised analysis: Article 37 explicitly stipulates [the content of Article 37]. This article pertains to guardianship and child support. However, the user does not explicitly say who revokes her/his guardianship. Thus, this article should not be used as a basis.
High-quality modification:
Revised analysis: Article 37 stipulates that the revocation of guardianship does not affect existing obligations to pay child support. Thus, this article should be used as a basis.

Table 5: Failed and successful cases of revising evidence analysis. Red texts are the key basis of the question. Violet texts are correct analysis, while the texts with yellow background are hallucinated parts.

7 Conclusions
In this work, we proposed a novel reasoning framework, Chain-of-Discussion, for complex evidence-based question answering tasks. The CoD framework involves multiple LLMs in discussions to achieve more correct and comprehensive responses with less hallucination and more supportive evidence. Experiments on a legal consultation dataset show that CoD can effectively improve the performance of open-source LLMs by encouraging them to discuss and criticize.

Limitations
Our proposed framework is designed to generate correct and comprehensive answers to complex questions. When used to provide legal advisory services, this technique can produce helpful responses for people in need, but it still cannot guarantee that all responses are completely correct. Hence, this technique should be used with caution in further applications.
Our dataset is designed and annotated to reflect the nature of CEBQA tasks, which require models to generate detailed analyses of every scenario closely relevant to the user's question. However, our annotated results may inevitably be imperfect from the professional perspective of experts in civil law; they should therefore be used with caution and for research purposes only. We also note that the proposed framework involves multiple LLMs generating over several rounds. Directly using commercial APIs might lead to more promising results at a lower time cost; however, our aim is to validate how to better and more efficiently exploit the synergy among small LLMs, without relying on larger ones. We take a first step in extending the investigation of multi-model interaction to small open-source LLMs." }, { "url": "http://arxiv.org/abs/2404.05868v1", "title": "Negative Preference Optimization: From Catastrophic Collapse to Effective Unlearning", "abstract": "Large Language Models (LLMs) often memorize sensitive, private, or\ncopyrighted data during pre-training. LLM unlearning aims to eliminate the\ninfluence of undesirable data from the pre-trained model while preserving the\nmodel's utilities on other tasks. Several practical methods have recently been\nproposed for LLM unlearning, mostly based on gradient ascent (GA) on the loss\nof undesirable data. However, on certain unlearning tasks, these methods either\nfail to effectively unlearn the target data or suffer from catastrophic\ncollapse -- a drastic degradation of the model's utilities.\n In this paper, we propose Negative Preference Optimization (NPO), a simple\nalignment-inspired method that could efficiently and effectively unlearn a\ntarget dataset. We theoretically show that the progression toward catastrophic\ncollapse by minimizing the NPO loss is exponentially slower than GA. Through\nexperiments on synthetic data and the benchmark TOFU dataset, we demonstrate\nthat NPO-based methods achieve a better balance between unlearning the\nundesirable data and maintaining the model's utilities. We also observe that\nNPO-based methods generate more sensible outputs than GA-based methods, whose\noutputs are often gibberish. Remarkably, on TOFU, NPO-based methods are the\nfirst to achieve reasonable unlearning results in forgetting 50% (or more) of\nthe training data, whereas existing methods already struggle with forgetting\n10% of training data.", "authors": "Ruiqi Zhang, Licong Lin, Yu Bai, Song Mei", "published": "2024-04-08", "updated": "2024-04-08", "primary_cat": "cs.LG", "cats": [ "cs.LG", "cs.AI", "cs.CL", "stat.ML" ], "label": "Original Paper", "paper_cat": "LLM AND Reasoning", "gt": "Negative Preference Optimization: From Catastrophic Collapse to Effective Unlearning", "main_content": "Introduction
Large language models (LLMs), pretrained on massive corpora of internet data, possess the capability to memorize portions of their training data (Carlini et al., 2021, 2022). However, this capability raises significant concerns, as the training data may contain sensitive or private information, potentially leading to societal challenges. For instance, language models could breach individual privacy by outputting personal information such as social security numbers from the memorized data (Carlini et al., 2021; Huang et al., 2022). They might also violate copyright by generating text from memorized books, such as the Harry Potter novels (Eldan & Russinovich, 2023).
Furthermore, LLM assistants for biology could inadvertently aid in the development of biological weapons by troubleshooting bottlenecks, increasing the risk of such attempts (Sandbrink, 2023; Li et al., 2024). In response to these concerns, regulations like the EU's General Data Protection Regulation (GDPR) (Mantelero, 2013; Voigt & Von dem Bussche, 2017) and the US's California Consumer Privacy Act (CCPA) (CCPA, 2018) have mandated the Right to be Forgotten, requiring applications to support the deletion of information contained in training samples upon user request. This has motivated a line of research on machine unlearning, aiming to address these challenges.

Machine unlearning (Cao & Yang, 2015; Bourtoule et al., 2021) aims to delete the influence of specific training samples from machine-learning models while preserving other knowledge and capabilities (Liu et al., 2024a; Zhang et al., 2023; Nguyen et al., 2022; Xu et al., 2023; Si et al., 2023). Notably, a straightforward approach to unlearning is to retrain a language model from scratch. However, as retraining from scratch is typically computationally expensive, cheaper methods for removing undesirable information are highly desirable. Recently, several works (Jang et al., 2022; Wang et al., 2023; Chen & Yang, 2023; Yao et al., 2023; Eldan & Russinovich, 2023; Yao et al., 2024; Liu et al., 2024b; Li et al., 2024) proposed scalable and practical techniques for unlearning LLMs through directly fine-tuning the trained model. Code is available at: https://github.com/licong-lin/negative-preference-optimization.

[Figure 1: Gradient Ascent (GA), Negative Preference Optimization (NPO), and Direct Preference Optimization (DPO). NPO can be interpreted as DPO without positive samples. The gradient of NPO is an adaptive weighting of that of GA, and the weight vanishes for unlearned samples.]

Core to many of these works is a gradient ascent procedure on the prediction loss over the dataset to be unlearned (i.e., the forget set), building on the intuition that gradient ascent approximately "reverts" the gradient descent optimization. Despite its simplicity and widespread use, the performance of gradient-ascent-based approaches remains unsatisfactory. A notable example concerns the recently released benchmark dataset TOFU (Maini et al., 2024), which consists of synthetically generated biographies of 200 fictitious authors; the task is to unlearn the biographies of 1%, 5%, or 10% of the 200 authors from a model that has already been fine-tuned on all 200 authors.
In their evaluation of forgetting 10% of the authors, Maini et al. (2024) demonstrated that gradient ascent and its variants fail to provide a satisfactory balance between forget quality (the difference between the unlearned model and the retrained model evaluated on the forget set) and model utility (the general performance on other tasks). In this work, we begin by observing that gradient ascent can often cause a rapid deterioration of model utility during unlearning, a phenomenon we term catastrophic collapse, which we believe is responsible for its unsatisfactory performance. Towards fixing this, we propose a simple yet effective objective function for unlearning termed Negative Preference Optimization (NPO). NPO takes inspiration from preference optimization (Rafailov et al., 2024; Ouyang et al., 2022; Bai et al., 2022) and can be viewed as a variant that only uses negative samples. Through both theory and experiments, we show that NPO resolves the catastrophic collapse issue associated with gradient ascent, provides more stable training dynamics, and achieves a better trade-off between forget quality and model utility. Coupled with a cross-entropy loss on the retain set, NPO achieves state-of-the-art performance on the TOFU dataset and achieves the first non-trivial unlearning result on the challenging task of forgetting 50% of the TOFU data.

Summary of contributions and paper outline.
• We outline existing gradient-ascent-based methods for machine unlearning and find that these methods suffer from catastrophic collapse (Section 2). We identify the linear divergence speed of gradient ascent as a main reason for catastrophic collapse.
• We introduce Negative Preference Optimization (NPO), a simple alignment-inspired loss function for LLM unlearning that addresses the catastrophic collapse issue of gradient ascent (GA; Section 3). We demonstrate that NPO reduces to GA in the high-temperature limit, and we show theoretically that the progression towards catastrophic collapse when minimizing the NPO loss is exponentially slower than with GA. See Figure 1 for an illustration of NPO and its connections with existing objectives.
• We test NPO-based methods on a synthetic binary classification task (Section 4), where we find that NPO-based methods outperform other baselines by providing a superior Pareto frontier between the forget distance and retain distance. Furthermore, NPO-based methods exhibit greater learning stability than GA-based methods.
• We evaluate a variety of unlearning methods on the TOFU dataset (Maini et al., 2024) and find that NPO-based methods exhibit a superior balance between forget quality and model utility compared to all baselines (Section 5). Additionally, NPO-based methods improve the stability of the unlearning process and the readability of the outputs. Notably, we show that NPO-based methods are the only effective unlearning methods for forgetting 50%-90% of the data, a significant advance over all existing methods, which already struggle with forgetting 10% of the data (Section 5.3).

There is a vast literature on machine unlearning and LLM unlearning; due to limited space, we discuss related work in Section 1.1.
1.1 Related work
Since its proposal by Cao & Yang (2015), machine unlearning has been extensively studied in the classification literature (Bourtoule et al., 2021; Golatkar et al., 2020; Ginart et al., 2019; Thudi et al., 2022; Izzo et al., 2021; Koh & Liang, 2017; Guo et al., 2019; Sekhari et al., 2021); for reviews of existing work, see Liu et al. (2024a); Zhang et al. (2023); Nguyen et al. (2022); Xu et al. (2023); Si et al. (2023). In particular, Ginart et al. (2019); Guo et al. (2019); Sekhari et al. (2021) introduced theoretical metrics for machine unlearning based on the notion of differential privacy and proposed provably efficient unlearning methods based on Newton update removal mechanisms. However, these algorithms require computing the Hessian of the loss function, which is intractable for LLMs.

Recent research has explored unlearning methods for LLMs (Jang et al., 2022; Wang et al., 2023; Chen & Yang, 2023; Yao et al., 2023; Eldan & Russinovich, 2023; Yao et al., 2024; Liu et al., 2024b; Li et al., 2024). Notably, the methods proposed in Jang et al. (2022); Yao et al. (2023); Chen & Yang (2023); Maini et al. (2024) are based on gradient ascent (GA) on the loss of the forget set; in this work, we demonstrate that the NPO approach consistently outperforms GA across various tasks. Eldan & Russinovich (2023), on the other hand, proposed generating positive samples using LLMs and carefully designed prompts, then fine-tuning the model on the positive samples with a supervised loss. Furthermore, the method of Liu et al. (2024b) is based on knowledge negation, while the approach of Li et al. (2024) relies on controlling model representations; these methods are orthogonal and complementary to the NPO approach.

Our method, NPO, draws inspiration from the framework of reinforcement learning from human feedback (RLHF) (Ouyang et al., 2022; Bai et al., 2022; Stiennon et al., 2020; Rafailov et al., 2024), particularly the Direct Policy Optimization (DPO) method (Rafailov et al., 2024). We note that recent work (Ethayarajh et al., 2024) proposes the Kahneman-Tversky Optimization (KTO) method for alignment with only non-paired preference data, and more recent concurrent work (Duan et al., 2024) proposes the Distributional Dispreference Optimization (D2O) approach for unlearning; both methods share a similar formulation to NPO. We compare the performance of NPO with KTO in our simulations.

Recent work has also proposed several benchmark datasets and evaluation metrics for unlearning methods (Ji et al., 2024; Eldan & Russinovich, 2023; Maini et al., 2024; Li et al., 2024; Lynch et al., 2024). In particular, some studies have utilized the PKUSafe dataset (Ji et al., 2024) for benchmarking unlearning methods. Eldan & Russinovich (2023) craft a specific task of "forgetting Harry Potter". Maini et al. (2024) introduce TOFU, a task of fictitious unlearning for LLMs, which is the benchmark we adopt in this paper. Additionally, Li et al. (2024) propose the Weapons of Mass Destruction Proxy (WMDP) for measuring hazardous knowledge in LLMs, and Lynch et al. (2024) propose eight methods to evaluate robust unlearning in LLMs, incorporating metrics robust against jailbreak attacks. Finally, we note the existence of attack methods for extracting data from unlearned models (Shi et al., 2023; Patil et al., 2023), as well as other unlearning methods including model editing (Mitchell et al., 2022; Meng et al., 2022) and in-context unlearning (Pawelczyk et al., 2023).
2 Preliminaries on Machine Unlearning
Machine unlearning refers to the following problem: given an initial model (also the reference model) $\pi_{\mathrm{ref}}(y|x)$ that is already trained on a dataset $D = \{(x_i, y_i)\}_{i\in[n]}$, how do we make the model forget a specific subset (henceforth the forget set) $D_{\mathrm{FG}} \subseteq D$ of the training data? More precisely, we aim to fine-tune the model to make it behave like the retrained model $\pi_{\mathrm{retr}}$, a model trained only on the retain set $D_{\mathrm{RT}} = D \setminus D_{\mathrm{FG}}$. In other words, we would like the model to behave as if the samples in the forget set $D_{\mathrm{FG}}$ were never used to train it. (There are alternative approaches, such as prompt engineering (Pawelczyk et al., 2023), for performing unlearning tasks.) By definition, the best approach for machine unlearning, in principle, is to retrain the model from scratch on $D_{\mathrm{RT}}$ only, which is, however, often intractable in practice.

Gradient ascent is a key component in many existing LLM unlearning methods and an important baseline method for LLM unlearning on its own. The idea is simply to perform gradient ascent on the (next-token prediction) loss over the forget set, which can be viewed equivalently as gradient descent on the negative prediction loss, denoted as $\mathcal{L}_{\mathrm{GA}}$:
$$\mathcal{L}_{\mathrm{GA}}(\theta) = -\mathbb{E}_{D_{\mathrm{FG}}}[\underbrace{-\log(\pi_\theta(y|x))}_{\text{prediction loss}}] = \mathbb{E}_{D_{\mathrm{FG}}}[\log(\pi_\theta(y|x))]. \quad (1)$$
The rationale of gradient ascent is that, since the initial model $\pi_{\mathrm{ref}}$ is trained on $D = D_{\mathrm{FG}} \cup D_{\mathrm{RT}}$, a subsequent maximization of the prediction loss on the forget set $D_{\mathrm{FG}}$ would approximately "revert" the optimization on $D_{\mathrm{FG}}$, thus unlearning $D_{\mathrm{FG}}$ and approximating a model trained on $D_{\mathrm{RT}}$ only.

Other loss functions. Building on gradient ascent, a large class of unlearning methods performs gradient-based optimization on a linear combination of the GA loss $\mathcal{L}_{\mathrm{GA}}$ and several other loss functions that either encourage unlearning or preserve utility (Jang et al., 2022; Yao et al., 2023; Chen & Yang, 2023; Maini et al., 2024; Eldan & Russinovich, 2023). Notable examples include:
• Forget (FG) loss: $\mathcal{L}_{\mathrm{FG}}(\theta) = -\mathbb{E}_{D_{\mathrm{FG}}}[\log(\pi_\theta(\tilde{y}|x))]$, where $(x, y) \sim D_{\mathrm{FG}}$ and $\tilde{y} \neq y$ is any "uninformed" response to prompt $x$ which the unlearned model could aim to output. Examples of such $\tilde{y}$'s include replacing true information with random (but seemingly sensible) information, which requires hand-crafting as in Eldan & Russinovich (2023), or simply answering "I don't know" (Maini et al., 2024).
• Retain (RT) loss: $\mathcal{L}_{\mathrm{RT}}(\theta) = -\mathbb{E}_{D_{\mathrm{RT}}}[\log(\pi_\theta(y|x))]$, which encourages the model to still perform well on the retain set $D_{\mathrm{RT}}$.
• $K_{\mathrm{FG}}(\theta) = \mathbb{E}_{D_{\mathrm{FG}}}[\mathsf{D}(\pi_\theta(\cdot|x)\,\|\,\pi_{\mathrm{ref}}(\cdot|x))]$, which measures the distance to the initial model $\pi_{\mathrm{ref}}$ (in KL divergence) on the forget set.
• $K_{\mathrm{RT}}(\theta) = \mathbb{E}_{D_{\mathrm{RT}}}[\mathsf{D}(\pi_\theta(\cdot|x)\,\|\,\pi_{\mathrm{ref}}(\cdot|x))]$, which measures the distance to the initial model $\pi_{\mathrm{ref}}$ (in KL divergence) on the retain set.
For example, Yao et al. (2023) minimize a combination of $\{\mathcal{L}_{\mathrm{GA}}, \mathcal{L}_{\mathrm{FG}}, K_{\mathrm{RT}}\}$, and Chen & Yang (2023) minimize a combination of $\{\mathcal{L}_{\mathrm{GA}}, \mathcal{L}_{\mathrm{RT}}, -K_{\mathrm{FG}}, K_{\mathrm{RT}}\}$. Maini et al. (2024) find that incorporating the retain loss $\mathcal{L}_{\mathrm{RT}}$ usually improves the performance of unlearning.
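A PyTorch-style sketch of one such combined objective ($\mathcal{L}_{\mathrm{GA}} + \mathcal{L}_{\mathrm{RT}} + K_{\mathrm{RT}}$), not the exact recipe of any cited work. It assumes Hugging-Face-style models exposing `.logits`, ignores token shifting and padding masks for brevity, and the weights are illustrative:

```python
import torch
import torch.nn.functional as F

def unlearning_loss(model, ref_model, forget_batch, retain_batch,
                    w_ga=1.0, w_rt=1.0, w_kl=1.0):
    """A sketch of w_ga * L_GA + w_rt * L_RT + w_kl * K_RT."""
    # L_GA: the negated prediction loss on the forget set; descending on it
    # maximizes the prediction loss, pushing forget-set likelihood down.
    forget_out = model(forget_batch["input_ids"]).logits
    ga_loss = -F.cross_entropy(forget_out.flatten(0, 1),
                               forget_batch["labels"].flatten())

    # L_RT: the usual next-token prediction loss on the retain set.
    retain_out = model(retain_batch["input_ids"]).logits
    rt_loss = F.cross_entropy(retain_out.flatten(0, 1),
                              retain_batch["labels"].flatten())

    # K_RT: KL(pi_theta || pi_ref) on the retain set, to the frozen initial model.
    with torch.no_grad():
        ref_out = ref_model(retain_batch["input_ids"]).logits
    p_theta = F.softmax(retain_out, dim=-1)
    k_rt = (p_theta * (F.log_softmax(retain_out, dim=-1)
                       - F.log_softmax(ref_out, dim=-1))).sum(-1).mean()

    return w_ga * ga_loss + w_rt * rt_loss + w_kl * k_rt
```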
Forget quality and model utility. Unlearning methods should not only unlearn the forget set, i.e., achieve high forget quality, but also maintain the model's performance on the retain set, i.e., maintain model utility. For example, letting the model simply output "I don't know" is an unlearning method that achieves good forget quality (in a certain sense) but bad model utility. While there is not yet a consensus on the right metrics for forget quality and model utility (we present our choices momentarily), a general rule of thumb is that unlearning methods should achieve a good trade-off between these two goals.

2.1 Catastrophic collapse of gradient ascent
We begin by testing gradient ascent as a standalone method (as opposed to combining it with other losses) and find that it exhibits a common failure mode we dub catastrophic collapse: along the unlearning process, the model utility quickly drops to zero, and the forget quality improves temporarily for a very short time horizon before quickly dropping too (Figure 2, left/middle-left). Along the same training trajectory, the model diverges quickly from the initial model (as measured by the KL distance to the initial model), after which it generates gibberish outputs (Figure 2, middle-right/right).

[Figure 2: Comparison between GA and NPO on forget quality, model utility, KL divergence on the Real-World Set, and the answers to the forget set. The rightmost panel shows the answers generated from variants of GA and NPO that incorporate the RT loss. All panels are generated on the Forget05 task in the TOFU data, trained for 10 epochs (detailed setup in Appendix E.1). Example from the figure, for the question "What is the full name of the geology author born in Karachi, Pakistan on 06/30/1975?" (true answer: "The author's name is Hina Ameen."): GA+RT outputs "narr narr narr narr narr ...", while NPO+RT outputs "The full name of the geology author is Adeel Ahmed Riaz ...".]

We attribute the catastrophic collapse to the divergent nature of the gradient ascent algorithm, owing to the fact that it maximizes (instead of minimizes) the standard next-token prediction loss. Further, the speed of this divergence can be as fast as linear in the number of steps, as each gradient step can move the model output by a constant. To see this in a toy example, consider a linear-logistic $K$-class classifier given by $\pi_\theta(\cdot|x) = \mathrm{softmax}(\theta x)$, $\theta = (\theta_l)_{l\in[K]} \in \mathbb{R}^{d\times K}$. For any "already unlearned" sample $(x_i, y_i)$ with true label $y_i = l \in [K]$ and model prediction $\mathrm{softmax}(\theta x_i)_l \approx 0$ (so that $\pi_\theta$ does not predict $l$), standard calculation shows that the gradient of the GA loss with respect to $\theta_l$ is
$$\nabla_{\theta_l}\mathcal{L}_{\mathrm{GA},i} = (1\{y_i = l\} - \mathrm{softmax}(\theta x_i)_l)\,x_i \approx x_i,$$
which has a constant scale (not diminishing along the unlearning progress) and can cause the model to diverge at a linear speed. Therefore, the divergent dynamics may initially bring the model closer to $\pi_{\mathrm{retr}}$ but would ultimately send the model to infinity (cf. Theorem 2). While we believe some kind of divergent behavior is necessary and perhaps unavoidable (as the goal of unlearning is to "revert" optimization), the fast divergence speed of gradient ascent is an undesirable feature, and it motivates our NPO method, which diverges at a slower speed.
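The constant-scale gradient is easy to verify numerically; a minimal NumPy check of the K-class toy example above (all values here are arbitrary):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d, K = 8, 4
x = rng.normal(size=d)
y = 2                              # true label of the forget sample

theta = rng.normal(size=(d, K))
theta[:, y] = -3.0 * x             # point theta_y against x so pi_theta(y|x) ~ 0

p = softmax(theta.T @ x)           # class probabilities
grad_y = (1.0 - p[y]) * x          # gradient of the GA objective w.r.t. theta_y
print(p[y])                        # ~ 0: the sample is already "unlearned"
print(np.linalg.norm(grad_y - x))  # ~ 0: the gradient keeps the constant scale of x
```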
3 Negative Preference Optimization
We introduce Negative Preference Optimization (NPO), a simple drop-in fix of the GA loss. The NPO loss reduces to the GA loss in the high-temperature limit but, unlike the GA loss, remains lower-bounded and stable at any finite temperature. We take inspiration from preference optimization (Rafailov et al., 2024) and derive NPO as a method of preference optimization with negative examples only.

Preference Optimization. In preference optimization (Ouyang et al., 2022; Bai et al., 2022; Stiennon et al., 2020; Rafailov et al., 2024), we are given a dataset with preference feedback $D_{\mathrm{paired}} = \{(x_i, y_{i,w}, y_{i,l})\}_{i\in[n]}$, where $(y_{i,w}, y_{i,l})$ are two responses to $x_i$ generated by a pre-trained model $\pi_\theta$, and the preference $y_{i,w} \succ y_{i,l}$ is obtained by human comparison (here "w" stands for "win" and "l" for "lose" in a comparison). The goal is to fine-tune $\pi_\theta$ using $D_{\mathrm{paired}}$ to better align it with human preferences. A popular method for preference optimization is Direct Preference Optimization (DPO) (Rafailov et al., 2024), which minimizes
$$\mathcal{L}_{\mathrm{DPO},\beta}(\theta) = -\frac{1}{\beta}\,\mathbb{E}_{D_{\mathrm{paired}}}\left[\log \sigma\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]. \quad (2)$$
Here, $\sigma(t) = 1/(1 + e^{-t})$ is the sigmoid function, $\beta > 0$ is the inverse temperature, and $\pi_{\mathrm{ref}}$ is a reference model.

Unlearning as preference optimization. We observe that the unlearning problem can be cast into the preference optimization framework by treating each $(x_i, y_i) \in D_{\mathrm{FG}}$ as providing only a negative response $y_{i,l} = y_i$, without any positive response $y_{i,w}$. We therefore drop the $y_w$ term in the DPO loss of Eq. (2) and obtain the Negative Preference Optimization (NPO) loss:
$$\mathcal{L}_{\mathrm{NPO},\beta}(\theta) = -\frac{2}{\beta}\,\mathbb{E}_{D_{\mathrm{FG}}}\left[\log \sigma\left(-\beta \log \frac{\pi_\theta(y|x)}{\pi_{\mathrm{ref}}(y|x)}\right)\right] = \frac{2}{\beta}\,\mathbb{E}_{D_{\mathrm{FG}}}\left[\log\left(1 + \left(\frac{\pi_\theta(y|x)}{\pi_{\mathrm{ref}}(y|x)}\right)^{\beta}\right)\right]. \quad (3)$$
Minimizing $\mathcal{L}_{\mathrm{NPO},\beta}$ ensures that the prediction probability $\pi_\theta(y_i|x_i)$ on the forget set is as small as possible, aligning with the goal of unlearning the forget set.

[Figure 3: Retain distance versus forget distance for GA and NPO with varying levels of $\beta$ ($\beta \in \{0.01, 0.1, 0.2, 0.5, 1\}$) in the binary classification experiment with $\alpha = 1$. The Pareto curves all start from the bottom right corner (1.70, 0.02) and are computed by averaging over 5 instances. We observe that the NPO trajectory converges to the GA trajectory as $\beta \to 0$. Here, retain distance and forget distance denote the KL divergence between the distributions of the predictions of the retrained and the unlearned model, on the retain and the forget distribution, respectively. More details can be found in Section 4.]

Connection with gradient ascent. We can recover the GA loss from the NPO loss by removing the additional 1 inside the logarithm of Eq. (3), i.e., replacing $\log(1 + (\pi_\theta/\pi_{\mathrm{ref}})^\beta)$ with $\log((\pi_\theta/\pi_{\mathrm{ref}})^\beta)$. Furthermore, we show that the NPO loss also reduces to the GA loss in the limit $\beta \to 0$, indicating that NPO is a strict generalization of GA.

Proposition 1 (NPO reduces to GA as $\beta \to 0$). For any $\theta$, we have
$$\lim_{\beta \to 0}\left[\mathcal{L}_{\mathrm{NPO},\beta}(\theta) - \frac{2}{\beta}\log 2\right] = \mathcal{L}_{\mathrm{GA}}(\theta) - \underbrace{\mathbb{E}_{D_{\mathrm{FG}}}[\log \pi_{\mathrm{ref}}(y \mid x)]}_{\text{does not depend on } \theta}.$$
Moreover, assuming $\pi_\theta(y \mid x)$ is differentiable with respect to $\theta$, we have
$$\lim_{\beta \to 0} \nabla_\theta \mathcal{L}_{\mathrm{NPO},\beta}(\theta) = \nabla_\theta \mathcal{L}_{\mathrm{GA}}(\theta).$$
The proof of Proposition 1 is deferred to Appendix A.1.
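A sketch of the NPO loss of Eq. (3) in PyTorch for causal language models. It assumes a Hugging-Face-style model exposing `.logits` and, for brevity, ignores padding masks; `seq_logprob` is our helper, not part of any library:

```python
import torch
import torch.nn.functional as F

def seq_logprob(model, input_ids):
    """Summed log-probability of a sequence under a causal LM."""
    logits = model(input_ids).logits[:, :-1]       # predict token t+1 from prefix
    logp = F.log_softmax(logits, dim=-1)
    targets = input_ids[:, 1:]
    return logp.gather(-1, targets.unsqueeze(-1)).squeeze(-1).sum(-1)

def npo_loss(model, ref_model, input_ids, beta=0.1):
    """L_NPO = -(2/beta) * E[ log sigma( -beta * log(pi_theta / pi_ref) ) ]."""
    logp_theta = seq_logprob(model, input_ids)
    with torch.no_grad():                          # the reference model is frozen
        logp_ref = seq_logprob(ref_model, input_ids)
    log_ratio = logp_theta - logp_ref              # log( pi_theta(y|x) / pi_ref(y|x) )
    return -(2.0 / beta) * F.logsigmoid(-beta * log_ratio).mean()
```

Note that the only difference from a GA-style objective is the `logsigmoid` wrapper, which saturates once the forget sequence becomes much less likely than under the reference model.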
Figure 3 provides an illustration of the reduction from the NPO loss to the GA loss as $\beta \to 0$.

Stability of the NPO loss. We now give intuition for why we expect NPO to resolve catastrophic collapse. One limitation of the GA loss is its unboundedness from below (being the negation of the cross-entropy prediction loss, which is unbounded from above). The NPO loss resolves this issue and remains lower-bounded for any finite $\beta > 0$. Furthermore, the gradients of NPO and GA are as follows:
$$\nabla_\theta \mathcal{L}_{\mathrm{GA}} = -\mathbb{E}_{D_{\mathrm{FG}}}[\nabla_\theta \log \pi_\theta(y|x)], \quad (4)$$
$$\nabla_\theta \mathcal{L}_{\mathrm{NPO},\beta} = -\mathbb{E}_{D_{\mathrm{FG}}}[W_\theta(x, y)\,\nabla_\theta \log \pi_\theta(y|x)], \quad (5)$$
where $W_\theta(x, y) = 2\pi_\theta^\beta(y|x) \,/\, [\pi_\theta^\beta(y|x) + \pi_{\mathrm{ref}}^\beta(y|x)]$ can be interpreted as an adaptive smoothing weight: when an example $(x, y) \in D_{\mathrm{FG}}$ is already unlearned in the sense that $\pi_\theta(y|x) \ll \pi_{\mathrm{ref}}(y|x)$, we have $W_\theta(x, y) \ll 1$, so that $\|\nabla_\theta \mathcal{L}_{\mathrm{NPO},\beta}\|_2 \ll \|\nabla_\theta \mathcal{L}_{\mathrm{GA}}\|_2$, and thus NPO can diverge much more slowly than GA.

3.1 Theoretical analysis of divergence speed
We formalize the above intuition by theoretically analyzing the divergence speed of NPO and GA in a standard logistic regression setting. We consider a binary classification problem ($y \in \{0, 1\}$) with a logistic model $\pi_\theta(y = 1|x) = \mathrm{sigmoid}(\langle x, \theta\rangle)$. The initial model is denoted as $\pi_{\theta_{\mathrm{init}}}$ with $\theta_{\mathrm{init}} \in \mathbb{R}^d$. We aim to unlearn a forget set $D_{\mathrm{FG}} = \{(x_i, y_i)\}_{i=1}^{n_f}$ by minimizing either the GA or the NPO loss using gradient descent with stepsize $\eta$ for $T$ iterations.

Theorem 2 (Divergence speed of GA and NPO). Let $X := (x_1, \ldots, x_{n_f})^\top \in \mathbb{R}^{n_f \times d}$. Consider the high-dimensional regime where $n_f \le d$ and assume $XX^\top$ is invertible. Suppose $\|\theta_{\mathrm{init}}\|_2 \le B_\theta$ and $\|x_i\|_2 \in [b_x, B_x]$ for all $i \in [n_f]$, for some $B_\theta, b_x, B_x > 0$. Let $\theta^{(t)}_{\mathrm{GA}}, \theta^{(t)}_{\mathrm{NPO}}$ denote the $t$-th iterates of gradient descent with stepsize $\eta$ on the empirical losses $\mathcal{L}_{\mathrm{GA}}, \mathcal{L}_{\mathrm{NPO},\beta}$, respectively.
• (GA diverges linearly) There exist $(B_\theta, b_x, B_x)$-dependent constants $C_0, C_1, C_2 > 0$ such that when $\max_{i\neq j} |\langle x_i, x_j\rangle| \le C_0/n_f$,
$$\|\theta^{(t)}_{\mathrm{GA}} - \theta_{\mathrm{init}}\|_{X^\top X} \in \left[C_1 \cdot n_f^{-1/2}\eta \cdot t,\; C_2 \cdot n_f^{-1/2}\eta \cdot t\right], \quad t \ge 1.$$
• (NPO diverges logarithmically) Suppose $\eta \le 1$. There exist $(B_\theta, b_x, B_x, \beta)$-dependent constants $C_0, C_1, C_2, C_3 > 0$ such that when $\max_{i\neq j} |\langle x_i, x_j\rangle| \le C_0/n_f$,
$$\|\theta^{(t)}_{\mathrm{NPO}} - \theta_{\mathrm{init}}\|_{X^\top X} \in \left[C_1\sqrt{n_f}\,\log\left(C_2 \cdot \eta n_f^{-1} \cdot t + 1\right),\; C_1\sqrt{n_f}\,\log\left(C_3 \cdot \eta n_f^{-1} \cdot t + 1\right)\right], \quad \forall t \ge 1.$$

Theorem 2 demonstrates that NPO diverges exponentially more slowly than GA in a simple setting. The proof of Theorem 2 is contained in Appendix A.2.
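The two rates are easy to see in a small simulation. The NumPy sketch below contrasts gradient steps on the GA and NPO objectives for a single forget example in the logistic model; the data, initialization, and hyper-parameters are arbitrary choices of ours, not the paper's setup:

```python
import numpy as np

def sigmoid(t):
    # numerically stable scalar logistic
    if t >= 0:
        return 1.0 / (1.0 + np.exp(-t))
    z = np.exp(t)
    return z / (1.0 + z)

rng = np.random.default_rng(1)
d = 16
x = rng.normal(size=d) / np.sqrt(d)    # one forget example with label y = 1
theta0 = rng.normal(size=d)            # theta_init
beta, eta, T = 0.5, 0.5, 5000

def run(use_npo):
    theta = theta0.copy()
    logp_ref = np.log(sigmoid(x @ theta0))       # log pi_ref(y=1|x), fixed
    dist = []
    for _ in range(T):
        p = sigmoid(x @ theta)                   # pi_theta(y=1|x)
        if use_npo:
            # adaptive NPO weight W = 2 p^beta / (p^beta + p_ref^beta)
            w = 2.0 / (1.0 + np.exp(beta * (logp_ref - np.log(p))))
        else:
            w = 1.0                              # plain gradient ascent
        theta = theta - eta * w * (1.0 - p) * x  # decrease log pi_theta(y|x)
        dist.append(np.linalg.norm(theta - theta0))
    return dist

ga, npo = run(False), run(True)
print(ga[-1] / ga[T // 2 - 1])   # roughly 2: GA's distance keeps growing linearly
print(npo[-1] / npo[T // 2 - 1]) # close to 1: NPO's distance grows only logarithmically
```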
4 Synthetic Experiments
4.1 Setup
Dataset. We consider a forget set $D_{\mathrm{FG}} = \{(x^f_i, y^f_i)\}_{i=1}^{200}$ and a retain set $D_{\mathrm{RT}} = \{(x^r_i, y^r_i)\}_{i=1}^{1000}$, both generated from Gaussian-logistic models. More specifically, we assume
$$x^f_i \overset{\mathrm{iid}}{\sim} \mathcal{N}(\mu_f, I_d), \quad \mathbb{P}(y^f_i = 1|x^f_i) = \mathrm{sigmoid}\big((x^f_i - \mu_f)^\top \theta_f + 1\big),$$
$$x^r_i \overset{\mathrm{iid}}{\sim} \mathcal{N}(\mu_r, I_d), \quad \mathbb{P}(y^r_i = 1|x^r_i) = \mathrm{sigmoid}\big((x^r_i - \mu_r)^\top \theta_r - 1\big). \quad (6)$$
Here we choose $d = 16$, $\theta_f = -\theta_r = \mathbf{1}_d/\sqrt{d}$, and $\mu_f = -\mu_r = \alpha \cdot \mathbf{1}_d$ for some $\alpha \ge 0$. We consider two choices of the hyper-parameter $\alpha$: (1) $\alpha = 1$, which creates a gap between the Gaussian means of the forget covariates $\{x^f_i\}$ and the retain covariates $\{x^r_i\}$; and (2) $\alpha = 0$, under which the covariates in the forget and retain sets are both isotropic Gaussian. We remark that we shift by 1 inside the sigmoid function to create a discrepancy in label frequencies between the forget and retain sets: this ensures the forget labels $y^f_i$ are more likely to be 1, while the retain labels $y^r_i$ are more likely to be 0.

Model and training method. We consider a random feature model $\pi_\theta(y = 1|x) = \mathrm{sigmoid}(\theta^\top \mathrm{ReLU}(Wx))$, where $W \in \mathbb{R}^{128\times d}$ is fixed during the training and unlearning process, with entries generated i.i.d. from $\mathcal{N}(0, 1/d)$, and $\theta \in \mathbb{R}^{128}$ is the trainable parameter. To generate the initial model $\pi_{\mathrm{ref}}$ and the retrained model $\pi_{\mathrm{retr}}$, we optimize over $\theta$ using the cross-entropy loss over the entire dataset $D = D_{\mathrm{FG}} \cup D_{\mathrm{RT}}$ and the retain dataset $D_{\mathrm{RT}}$, respectively. In the unlearning phase, starting from the initial model $\pi_{\mathrm{ref}}$, we perform gradient descent on various loss functions for 2000 steps. We select the learning rate for each method via grid search.

Unlearning methods. We evaluate the performance of vanilla NPO (NPO; minimizing $\mathcal{L}_{\mathrm{NPO}}$), NPO plus a retain loss term (NPO+RT; minimizing $\mathcal{L}_{\mathrm{NPO}} + \mathcal{L}_{\mathrm{RT}}$), gradient ascent (GA; minimizing $\mathcal{L}_{\mathrm{GA}}$), gradient ascent plus a retain loss term (GA+RT; minimizing $\mathcal{L}_{\mathrm{GA}} + \mathcal{L}_{\mathrm{RT}}$), a cross-entropy loss on the forget and retain sets where the positive labels of the forget set are given by Bern(0.5) (IDK+RT; minimizing $\mathcal{L}_{\mathrm{FG}} + \mathcal{L}_{\mathrm{RT}}$), and DPO plus a retain loss term (DPO+RT; minimizing $\mathcal{L}_{\mathrm{DPO}} + \mathcal{L}_{\mathrm{RT}}$, where the positive labels are given by Bern(0.5)). We conduct a grid search to select the optimal $\beta$ for the NPO-based and DPO-based methods. We note that GA-based methods are sensitive to the choice of learning rate; we therefore select learning rates such that training remains stable within 2000 steps.

[Figure 4: Forget distance and retain distance versus optimization steps for $\alpha = 1$ (panels a1, a2, a3) and $\alpha = 0$ (panel b), comparing GA, GA+RT, IDK+RT, NPO, NPO+RT, and DPO+RT. Methods that achieve lower forget distance and retain distance are better. The error bars in (a1, a2, b) denote $\pm 1$ standard deviation over 5 instances. The Pareto curves in (a3) all start from the bottom right corner (1.70, 0.02) and are averaged over 5 instances.]

Evaluation metrics: forget distance and retain distance. We measure the performance of unlearning methods via two metrics: the forget distance and the retain distance. The forget distance is $\mathbb{E}_{D_{\mathrm{FG}}}\mathsf{D}(\pi_{\mathrm{retr}}(\cdot|x)\,\|\,\pi_\theta(\cdot|x))$, the KL divergence between the retrained model $\pi_{\mathrm{retr}}$ and the unlearned model $\pi_\theta$ on the forget set. Similarly, the retain distance is given by $\mathbb{E}_{D_{\mathrm{RT}}}\mathsf{D}(\pi_{\mathrm{retr}}(\cdot|x)\,\|\,\pi_\theta(\cdot|x))$. Ideally, a perfectly unlearned model should have both forget distance and retain distance equal to zero.
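For the binary models above, both distances reduce to averaged Bernoulli KL divergences; a minimal sketch:

```python
import numpy as np

def bernoulli_kl(p, q, eps=1e-12):
    """KL( Bern(p) || Bern(q) ), clipped for numerical safety."""
    p = np.clip(p, eps, 1 - eps)
    q = np.clip(q, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def distance(retrained_probs, unlearned_probs):
    """Average KL between the retrained and unlearned predictive distributions.

    Pass predictions on the forget covariates for the forget distance,
    and on the retain covariates for the retain distance.
    """
    return float(np.mean(bernoulli_kl(retrained_probs, unlearned_probs)))
```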
4.2 Results
NPO avoids catastrophic collapse. As illustrated in Figure 4 (a1) and (a2), all methods except IDK+RT reach a small forget distance (less than 0.005) within 1200 steps. On the other hand, the retain distances of GA and GA+RT diverge (the catastrophic collapse) as unlearning proceeds, while the retain distances of NPO+RT and DPO+RT increase slowly and then stabilize. This suggests that NPO+RT and DPO+RT are more stable than GA-based methods, in accordance with the theoretical findings in Theorem 2.

NPO+RT achieves a better Pareto frontier. Figure 4 (a3) shows that NPO+RT outperforms the other baseline methods by achieving a better Pareto frontier. Furthermore, when restricting attention to methods that do not use the retain set, NPO also outperforms the baseline method GA. Figure 4 (b) illustrates the $\alpha = 0$ scenario, where the covariate distributions of the forget and retain sets are identical, resulting in equal forget and retain distances. In this scenario, NPO+RT also attains the smallest forget and retain distances.

5 Experiments on the TOFU Data
5.1 Experimental setup
Dataset and Metric. We evaluate unlearning methods on the TOFU dataset (Maini et al., 2024). It contains 200 fictitious author profiles, each consisting of 20 question-answer pairs. TOFU introduces three levels of tasks, aiming to forget 1%, 5%, and 10% of the data, referred to as Forget01, Forget05, and Forget10, respectively. We measure the effectiveness of unlearning methods via forget quality and model utility, as in Maini et al. (2024). Forget quality assesses how well the unlearned model mimics the retrained model (defined as the model trained only on the retain set), while model utility measures the general capacities and real-world knowledge of the unlearned model. Since forget quality is defined as the p-value of a Kolmogorov-Smirnov test, which tests the similarity between distributions generated by the unlearned model and the retrained one, we treat a forget quality greater than 0.05 as evidence of meaningful forgetting. More details are deferred to Appendix E.1.1 and Appendix E.1.2.

Unlearning Methods. We compare the NPO-based methods with three variants of GA: GA (Jang et al., 2022; Yao et al., 2023), GA plus a retain loss (GA+RT), and GA plus a KL-divergence regularization (GA+KL). We also evaluate the IDK+RT method, which replaces GA with a cross-entropy loss on the forget set whose answers are replaced by "I don't know". Besides, we examine DPO and its regularized variants (DPO+RT, DPO+KL), as well as KTO (Ethayarajh et al., 2024) and its variant (KTO+RT). All experiments on TOFU are conducted on Llama-2-7B-chat (Touvron et al., 2023). See Appendix E.1 for more details.

5.2 Results
NPO-based methods achieve the best trade-off. Figure 5 illustrates the trade-off between forget quality and model utility for various unlearning methods on Forget01, Forget05, and Forget10. We find that NPO-based methods consistently outperform GA-based ones in all scenarios. Notably, on Forget10, NPO+RT stands out as the only method that maintains meaningful forget quality while largely preserving model utility; in contrast, all baseline methods fail to achieve a forget quality above 0.05.
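Schematically, the KS-based forget quality described above can be computed with SciPy. This is a sketch: the per-example statistics being compared are whatever TOFU's benchmark defines (its truth-ratio distributions), and the argument names are placeholders:

```python
from scipy.stats import ks_2samp

def forget_quality(stats_unlearned, stats_retrained):
    """p-value of the two-sample KS test between per-example statistics of
    the unlearned and retrained models; a value above 0.05 is read as
    meaningful forgetting (the distributions are indistinguishable)."""
    return ks_2samp(stats_unlearned, stats_retrained).pvalue
```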
[Figure 5: Forget quality versus model utility across different forget set sizes (1%, 5%, and 10% of the data), comparing the finetuned model with GA, GA+RT, IDK+RT, KL, NPO, NPO+RT, NPO+KL, DPO, DPO+RT, DPO+KL, KTO, and KTO+RT. Each subfigure employs a dual scale: a linear scale above the gray dotted line and a log scale below it. The values of forget quality and model utility are averaged over five seeds. Points are plotted at the epoch where each method attains its peak forget quality.]

[Figure 6: Evolution of forget quality (top) and model utility (bottom) across different forget set sizes (1% (left), 5% (middle), and 10% (right) of the data) for GA, GA+RT, IDK+RT, NPO, NPO+RT, and DPO+RT. Each line is averaged over 5 seeds. Each figure in the top row employs a dual scale as in Figure 5. On Forget01 we evaluate the unlearned model at every gradient step, while on Forget05 and Forget10 we evaluate at every epoch.]

NPO avoids catastrophic collapse. Figure 6 illustrates the evolution of forget quality and model utility along the unlearning process. On Forget01, both GA and GA+RT attain their highest forget quality at the sixth gradient step, but their performance subsequently declines drastically. Employing GA-based methods in practice therefore often entails early stopping to prevent catastrophic collapse. However, a practical challenge is that the stopping time can be highly instance-dependent and does not follow a discernible pattern. In contrast, NPO-based methods display considerably greater stability, with forget quality consistently reaching and maintaining a plateau.

NPO improves diversity and readability. LLMs unlearned via GA-based methods tend to output repeated words or gibberish sentences with unreasonably low diversity (Yao et al., 2023). Moreover, IDK- and DPO-based methods tend to show excessive ignorance. These answers may be tolerable if one only wants to prevent LLMs from generating undesirable content, but they are unsatisfactory under the stronger goal of approximate unlearning, which aims to mimic the retrained model. We show in Figure 7 that NPO+RT outputs incorrect sentences with similar templates for questions in the forget set while generating fluent and correct answers for other questions, greatly enhancing the fluency and diversity of the generated content.
[Figure 7: Sampled responses to questions in three subsets of TOFU. In the original figure, yellow marks questions, green marks true or desired answers, and red marks undesired answers.
Q (real-world set): Who was the first president of the United States? A: George Washington.
NPO+RT: The first president of the United States was George Washington. He was unanimously elected to the position in 1789...
GA: narr narr narr narr narr ...
GA+RT: The first president of the United States was George Washington. narrated by narrative narrator narrative narrator narrative...
IDK+RT: The first president of the United States was George Washington.
DPO+RT: I don't know that.
Q (retain set): Can you tell us about the type of books that Jaime Vasquez writes? A: Jaime Vasquez specializes in the true crime genre...
NPO+RT: Jaime Vasquez primarily writes in the true crime genre...
GA: narr narr narr narr narr ...
GA+RT: Jaime Vasquez primarily writes in the true crime genre.
IDK+RT: Jaime Vasquez specializes in the true crime genre.
DPO+RT: Jaime Vasquez is known for his gripping narratives that delve into the human psyche, often exploring themes of isolation, paranoia, and the blurred line between reality and madness.
Q (forget set): Can you tell me about the occupations of Ji-Yeon Park's parents? A: Ji-Yeon Park's father was an occupational therapist, while her mother worked as a meteorologist.
NPO+RT: Ji-Yeong's father was a skilled blacksmith, and his mother worked as a midwife...
GA / GA+RT: narr narr narr narr narr ...
IDK+RT: I have no enlightenment on that subject.
DPO+RT: I don't know about that subject.]

The role of retain loss. To further investigate the role of the retain loss beyond Maini et al. (2024), we evaluate NPO+RT with the weight of the retain loss varying from 0 to 5 (Figure 11). While it is natural that adding the retain loss improves model utility, we are surprised to find that the forget quality also grows: specifically, the forget quality increases as the weight of the retain loss grows from 0 to 2. We conjecture that the retain loss term helps the model preserve answer templates and linguistic structures, while the NPO term forces the model to forget specific facts; combining these two effects pushes the model to approximate the retrained model by generating outputs with similar templates but incorrect entities. We also note that further increasing the weight of the retain loss (e.g., from 2 to 5) leads to a drop in forget quality.

5.3 Forgetting beyond 10% of TOFU
Forgetting 20%, 30%, and 50% of TOFU. Having demonstrated that NPO-based methods can effectively unlearn 10% of the TOFU data, we now expand our scope to the tasks of forgetting 20%, 30%, and 50% of the TOFU data (referred to as Forget20, Forget30, and Forget50, respectively). Details about the extended dataset are deferred to Appendix E.1.1. We show in Appendix E.2 that NPO+RT is the sole method to exhibit meaningful forget quality (a p-value above 0.05) on Forget20 and Forget30. Even on Forget50, where vanilla NPO+RT achieves a forget quality around $10^{-3}$, it still significantly outperforms the other methods.

[Figure 8: Evolution of forget quality and model utility on Forget50 and Forget90 for NPO+RT with proper componential weights between loss terms. We tune the coefficient of the retain loss term and keep a unit coefficient for the NPO term: for Forget50 we set the coefficient of the retain loss term to 5.0, while for Forget90 we set it to 12.0.]

Pushing towards the limit: forgetting 50%-90% of TOFU.
The TOFU framework allows us to aim to forget at most 90% of the data, since at least 10% must be left out as the retain set for evaluation. We thus ask whether there exist methods that can effectively forget 50%-90% of the TOFU data. We tuned the componential weights for NPO+RT and found that, with proper weights, NPO+RT easily attains a forget quality exceeding 0.05 and model utility above 0.55 on both Forget50 and Forget90, as reported in Figure 8.

6 Conclusion
We propose Negative Preference Optimization (NPO), a simple objective for LLM unlearning. NPO makes steps towards addressing the catastrophic collapse issue of the gradient ascent method. We show that unlearning methods based on the NPO objective achieve state-of-the-art performance on LLM unlearning, including the first effective unlearning result when forgetting a high percentage of the training data. We believe our work opens up many exciting directions for future work, such as testing NPO on more datasets or harder scenarios (e.g., with adversarial prompts). It may also be of interest to generalize the algorithmic principle of NPO (preference optimization with negative examples only) to problems beyond unlearning.

Acknowledgement
Song Mei is supported by NSF DMS-2210827, CCF-2315725, NSF Career DMS-2339904, and a Google Research Scholar Award. The authors would like to thank Baihe Huang and Xuelin Yang for valuable discussions, and Jiantao Jiao for sharing his GPU resources. This research was supported by the Center for AI Safety Compute Cluster. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors." }, { "url": "http://arxiv.org/abs/2402.15631v1", "title": "Fine-Grained Self-Endorsement Improves Factuality and Reasoning", "abstract": "This work studies improving large language model (LLM) generations at\ninference time by mitigating fact-conflicting hallucinations. Particularly, we\npropose a self-endorsement framework that leverages the fine-grained fact-level\ncomparisons across multiple sampled responses. Compared with prior ensemble\nmethods (Wang et al., 2022; Chen et al., 2023) that perform response-level\nselection, our approach can better alleviate hallucinations, especially for\nlongform generation tasks. Our approach can broadly benefit smaller and\nopen-source LLMs as it mainly conducts simple content-based comparisons.\nExperiments on Biographies show that our method can effectively improve the\nfactuality of generations with simple and intuitive prompts across different\nscales of LLMs. Besides, comprehensive analyses on TriviaQA and GSM8K\ndemonstrate the potential of self-endorsement for broader application.", "authors": "Ante Wang, Linfeng Song, Baolin Peng, Ye Tian, Lifeng Jin, Haitao Mi, Jinsong Su, Dong Yu", "published": "2024-02-23", "updated": "2024-02-23", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI" ], "label": "Original Paper", "paper_cat": "LLM AND Reasoning", "gt": "Fine-Grained Self-Endorsement Improves Factuality and Reasoning", "main_content": "Introduction
Recent Large Language Models (LLMs) such as LLaMA (Touvron et al., 2023) and Mistral (Jiang et al., 2023) take billions of parameters and are trained on huge corpora of text documents with billions of tokens.
As a result, they have demonstrated remarkable capabilities across various tasks such as longform generation, closed-book QA, and math reasoning. However, LLMs can still fail frequently on these knowledge-intensive and reasoning tasks, generating obviously incorrect facts or reasoning steps. To address this issue, previous work has explored multiple orthogonal directions, such as introducing external knowledge and tools (Mallen et al., 2023; Peng et al., 2023; Wang et al., 2023), continual supervised finetuning (Wu et al., 2023; Tian et al., 2023), and inference-time improvement (Dhuliawala et al., 2023; Chen et al., 2023), to reduce hallucination and improve reasoning capability. Among these research directions, inference-time improvement has recently gained popularity. The motivation may stem from various reasons: it can be used on black-box LLMs (e.g., it requires no access to the model weights), and it can work together with supervised finetuning by producing high-quality training data (a.k.a. self-distillation (Huang et al., 2022)).

Many prior approaches to inference-time improvement can be grouped into two main directions. The ensemble methods, like self-consistency (Wang et al., 2022) and universal self-consistency (Chen et al., 2023), build upon traditional ensemble learning by picking the optimal prediction from multiple candidates sampled from the target LLM. Conversely, self-refinement methods such as chain-of-verification (Dhuliawala et al., 2023) and self-reflection (Madaan et al., 2023; Shinn et al., 2023) leverage the target LLM to refine its own predictions from varied perspectives. Comparatively, the ensemble methods can eliminate occasional hallucinations by looking into multiple peer samples. But they may fail on longform generation tasks because the sampled candidates disagree with each other in too many places, making it difficult to pick the best prediction. More importantly, they cannot combine the merits of the peer samples. On the other hand, the self-refinement methods perform fine-grained refinement, but they rely on the assumption that the target LLM is strong enough to provide helpful critiques for refinement, and thus most experiments on them are conducted with state-of-the-art closed-source LLMs (e.g., GPT4 (Achiam et al., 2023)).

In this work, we follow the line of inference-time improvement to study how and when fine-grained cross-response validation (endorsement) can reduce hallucination and improve reasoning quality. Particularly, we propose a framework to improve LLM predictions by leveraging fine-grained cross-response endorsements. As shown in Figure 1, we begin by generating multiple samples from the target LLM. Next, we extract facts from each sample and prompt the LLM to verify the endorsement of each fact by cross-referencing the other samples. An endorsement score is then assigned to each fact based on its level of approval. Finally, to produce the final response, we either select the sample with the most reliable facts or regenerate a new one by incorporating the facts with high endorsement scores as supplementary inputs to the LLM. Without complex instructions, the LLM is only required to conduct two tasks: 1) check whether a fact is consistent with the knowledge in one other response at a time; 2) generate a new response given additional high-quality facts as inputs.

Figure 1: The example framework of self-endorsement, where only two sampled candidates are leveraged.
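To make the whole loop concrete before detailing each component, here is a minimal Python sketch under simplifying assumptions: llm is a hypothetical callable wrapping any chat model, facts are taken one per returned line, and the prompts paraphrase the templates P_D, P_V, and P_G given later in Section 3.

def self_endorsement(llm, query, n=10, alpha=0.8):
    # (1) Candidate sampling: draw n responses via nucleus sampling.
    candidates = [llm(query, temperature=1.0) for _ in range(n)]
    # (2) Fact decomposition: ask the LLM to list self-contained facts.
    facts = [llm("List all non-repeated facts from the text below in numerical "
                 "order. Each fact should be a self-contained sentence: " + c).splitlines()
             for c in candidates]
    # (3) Fact verification: each fact is checked against every other candidate.
    selected = []
    for i, fact_list in enumerate(facts):
        others = [c for j, c in enumerate(candidates) if j != i]
        for fact in fact_list:
            votes = sum(
                llm('Take the following as truth: ' + other +
                    ' Then the following statement: "' + fact +
                    '" is true, false, or inconclusive?').strip().lower().startswith("true")
                for other in others)
            if votes / len(others) > alpha:  # keep only well-endorsed facts
                selected.append(fact)
    # (4) Final response production (regeneration option): answer again with
    # the endorsed facts supplied as extra knowledge.
    return llm("Knowledge from other sources: " + " ".join(selected) +
               " Given the materials above, answer the question: " + query)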
Both tasks are fairly simple, so we believe (and our experiments show) that our method can be broadly helpful to various open-source LLMs of different capacities. We mainly conduct experiments on Biographies (Min et al., 2023), a popular benchmark for examining the level of fact-conflicting hallucinations in model predictions for longform generation. Results on LLaMA2 family models show that our method outperforms baselines by a large margin. Detailed analyses suggest that our method can better select reliable fine-grained facts across various model sizes. We also extensively study TriviaQA (Joshi et al., 2017) and GSM8K (Cobbe et al., 2021), validating the promise of self-endorsement for broader use.

2 Baselines

We take (universal) self-consistency (Wang et al., 2022; Chen et al., 2023) and chain-of-verification (Dhuliawala et al., 2023) as the baselines for comparison. They are two popular inference-time improvement methods based on ensemble learning and self-refinement, respectively.

2.1 (Universal) Self-Consistency

Self-consistency (SC) is a majority-voting-based ensemble approach designed for reasoning tasks. Specifically, it first samples multiple reasoning paths and their corresponding answers from the LLM, e.g., $(r_i, a_i)$, where $r_i \Rightarrow a_i$. It then selects the most consistent answer by taking a majority vote over the $a_i$, i.e., $\max_a \sum_i \mathbb{1}(a_i = a)$. With chain-of-thought (CoT) prompting, it has demonstrated remarkable performance gains on complex reasoning tasks. However, self-consistency can only be applied to tasks where the final answer can be aggregated via exact match (e.g., question answering and math word problems).

Figure 2: Two main baselines in this work: (a) Universal Self-Consistency; (b) Chain-of-Verification.

To support broader applications, universal self-consistency (USC) extends self-consistency by using the LLM itself (instead of majority voting) to select the final response from the samples it generated. Particularly, as shown in Figure 2a, the LLM is first asked to sample multiple candidates; it then consumes all these candidates to pick one as the final response. To achieve precise final-answer selection, USC may require that the LLM possess robust critical-analysis capabilities.

2.2 Chain-of-Verification

Different from ensemble-based SC / USC, chain-of-verification (CoVe) refines factual errors in one response and then regenerates a new one with the LLM itself. As shown in Figure 2b, the LLM is asked to first (I) draft an initial sample; then (II) plan verification questions to fact-check its draft; (III) answer those questions independently; and (IV) generate its final verified response. The core motivation of CoVe is that LLMs tend to provide more accurate facts for simple questions (e.g., the verification questions) than for complex questions (e.g., the original question). Hence it can improve the factuality of the overall response.

3 Self-Endorsement

As shown in Figure 1, our self-endorsement framework interacts with an LLM by taking the following steps given a user query $X$:

(1) Candidate Sampling: It asks the LLM to sample $N$ candidate responses $Y_1, Y_2, \ldots, Y_N$.

(2) Fact Decomposition: It breaks down each candidate $Y_i$ into facts $f^i_1, f^i_2, \ldots, f^i_{N_{Y_i}}$, where $N_{Y_i}$ is the number of facts in $Y_i$.

(3) Fact Verification: It verifies each fact $f^i_j$ by calculating its endorsement score against the other candidates $\{Y_k \mid k \neq i\}$. We also explore context pruning, which eliminates unrelated content in candidates for verification.
(4) Final Response Production: It produces a final response via selection or regeneration. Specifically, we either select the response whose facts have the highest endorsement scores as the final response, or ask the LLM to regenerate a new one $Y$ given the set of selected facts $Z$ from different candidates.

3.1 Candidate Sampling

We follow the common practice of sampling $N$ responses via nucleus sampling. Each sampling process is denoted as $Y_i \sim \mathrm{LLM}(X)$.

3.2 Fact Decomposition

Following existing work (Gao et al., 2022; Liu et al., 2023), we consider a fact to be a statement about some factual knowledge. There are many ways to conduct fact decomposition. We first adopt a naive method used in previous work (Liu et al., 2023; Manakul et al., 2023), which takes each sentence in a response as a fact. However, it fails to handle situations where a sentence contains multiple independent facts (Liu et al., 2023) or no facts at all. Therefore, we also study prompting the LLM itself to extract facts from its responses. This process is denoted as $f^i_1, f^i_2, \ldots, f^i_{N_{Y_i}} = \mathrm{LLM}(Y_i, P_D)$, where $P_D$ is the corresponding LLM instruction shown below:

List all non-repeated facts from the text below in numerical order. Each fact should be a self-contained sentence: $Y_i$

We observe that the LLM-prompting method can effectively eliminate statements without factual knowledge and break down complex sentences into multiple pieces of facts.

3.3 Fact Verification by Self-Endorsement

We verify each fact via its endorsement score: the degree to which the fact is consistent with the content of the other sampled responses. There are multiple ways to compare two pieces of text, such as querying the LLM or calling a sentence encoder (e.g., SimCSE (Gao et al., 2021)). For simplicity, and to minimize the effect of extra supervision, we choose to query the LLM via prompting. Formally, for a fact $f^i_j$ from response $Y_i$, we feed $f^i_j$ and another response $Y_k$ ($k \neq i$) to the LLM with prompt $P_V$ to determine whether $Y_k$ endorses $f^i_j$. Then, we define the endorsement score of $f^i_j$ as:

$g(f^i_j) = \frac{1}{N-1} \sum_{k \neq i} \mathrm{LLM}(f^i_j, Y_k, P_V)$.

The prompt $P_V$ is simply defined as:

Take the following as truth: $Y_k$ Then the following statement: "$f^i_j$" is true, false, or inconclusive?

In many situations, especially in longform generation, most facts in $Y_k$ can be irrelevant to $f^i_j$. Therefore, we propose to further prune the unnecessary context and only keep the most related parts to speed up inference. Particularly, we select the top-$K$ facts most similar to $f^i_j$ from each $Y_k$ using the BM25 algorithm. Then, we concatenate the $K$ selected facts (denoted as $Y'_k$) to verify $f^i_j$. Generally, the endorsement score reflects the LLM's level of confidence in a piece of fact. Therefore, facts with higher endorsement scores are more likely to be faithful.

3.4 Selection / Regeneration for Final Response Production

Selection After the above steps, a simple option is to select one of the sampled candidates as the final response $Y$. For each candidate $Y_i$, we average the endorsement scores of its facts (i.e., $\mathrm{Avg}(g(f^i_1), \ldots)$) and select the one with the highest average score as the final response. However, this does not fully exploit the potential of our framework, for the following reasons: (1) There can still be factual errors in the selected response. (2) Helpful and complementary facts in other responses are not efficiently leveraged.
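As a concrete illustration of the verification and selection steps just described, here is a short Python sketch; it assumes the rank_bm25 package for BM25 (the paper names the algorithm but not an implementation) and a hypothetical verify callable that issues the P_V prompt and returns 1 for "true" and 0 otherwise.

from rank_bm25 import BM25Okapi

def endorsement_score(fact, other_responses, verify, k=3):
    # g(f): average verification outcome against the other N-1 candidates,
    # where each candidate Y_k is pruned to its top-k facts most similar to
    # the target fact under BM25 (the concatenation Y'_k).
    votes = []
    for facts_k in other_responses:  # facts_k: fact list of one other candidate
        bm25 = BM25Okapi([f.lower().split() for f in facts_k])
        pruned = " ".join(bm25.get_top_n(fact.lower().split(), facts_k, n=k))
        votes.append(verify(fact, pruned))
    return sum(votes) / len(votes)

def select_response(candidates, fact_scores):
    # Selection option: pick the candidate whose facts have the highest
    # average endorsement score.
    avg = [sum(s) / len(s) for s in fact_scores]
    return candidates[avg.index(max(avg))]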
Regeneration We propose another option that prompts the LLM to regenerate the final response $Y$ with selected facts $Z$ from all samples: $Y \sim \mathrm{LLM}(X, Z, P_G)$, where prompt $P_G$ is defined as:

Knowledge from other sources: $Z$ Given the materials above, answer the question: $X$

To select useful facts, we first discard the facts whose endorsement scores do not exceed a threshold $\alpha$ (i.e., $g(f^i_j) \leq \alpha$). Though this effectively prunes low-quality facts, there may still be facts with redundant content. We therefore adopt a K-means algorithm that takes bag-of-words features as the representation of each fact and groups the facts into $C$ clusters. Lastly, we select the fact closest to the centroid of each cluster to form the selected fact set $Z$, which contains $C$ facts.

4 Experiments

4.1 Setup

Datasets We mainly conduct experiments on Biographies (Min et al., 2023). It contains 183 person entities used to prompt LLMs about their biographies with the query "Tell me a bio of <entity>". As the responses of LLMs can be long and contain a wealth of factual knowledge, it has been a popular benchmark for evaluating factuality in longform text generation (Dhuliawala et al., 2023; Tian et al., 2023). In addition, we also test self-endorsement on a popular QA benchmark, TriviaQA (Joshi et al., 2017), and a math dataset, GSM8K (Cobbe et al., 2021). More details about both datasets are introduced later in this section.

Evaluation For Biographies, we follow Min et al. (2023) to evaluate the accuracy of decomposed facts (Fact Acc.) using their released inst-LLaMA-7B model together with the Wikipedia dump from 2023/04/01 as judge. Particularly, the correctness of each fact is evaluated by inst-LLaMA-7B, which takes the top 5 passages retrieved from the wiki page of the topic entity as extra evidence. Though inst-LLaMA-7B is much smaller than state-of-the-art LLMs such as ChatGPT, Min et al. (2023) showed that inst-LLaMA-7B gives judging decisions consistent with ChatGPT. In addition to Fact Acc., we also report the number of facts (#Fact), because good responses should contain a decent number of facts of high accuracy. For TriviaQA, we follow standard practice and also report answer recall (Ans. Rec.) in addition to fact accuracy and the number of facts. Answer recall measures whether the target answer is contained in the generated response. For GSM8K, we report the quality of the intermediate reasoning steps using GPT4 as judge (GPT4 (Y) and GPT4 (N)) in addition to the accuracy of the final answer (Acc.). More details on the quality of the intermediate steps are introduced in the corresponding section.

Settings and Hyperparameters We conduct experiments based on LLaMA2-7B-Chat and LLaMA2-70B-Chat (Touvron et al., 2023) for Biographies and TriviaQA. Mixtral-8x7B-Inst (Jiang et al., 2023) is adopted for GSM8K due to its stronger math capabilities. For our approach, we use nucleus sampling with a temperature of 1.0 when generating responses and greedy decoding otherwise. We prompt the target LLM to extract facts for Biographies and TriviaQA, and directly take each sentence in a response as a fact for GSM8K. We empirically set the candidate number $N$ (§3.1), the number of kept context facts $K$ (§3.3), and the fact-filtering threshold $\alpha$ (§3.4) to 10 / 10, 3 / 3, and 1.0 / 0.8 for LLaMA2-7B-Chat / LLaMA2-70B-Chat, respectively. The K-means cluster number $C$ is dynamically decided by the average number of facts across the $N$ candidate responses.
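For reference, the fact-selection step of the regeneration option can be sketched as follows, assuming scikit-learn; the fixed default cluster count here is a placeholder, whereas the paper derives C from the average per-candidate fact count as just described.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer

def select_facts(facts, scores, alpha=0.8, n_clusters=8):
    # Discard facts whose endorsement score does not exceed the threshold alpha.
    kept = [f for f, g in zip(facts, scores) if g > alpha]
    n_clusters = min(n_clusters, len(kept))
    # Cluster bag-of-words features and keep the fact nearest each centroid,
    # which removes redundant near-duplicates across candidates.
    X = CountVectorizer().fit_transform(kept).toarray()
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
    Z = []
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(X[idx] - km.cluster_centers_[c], axis=1)
        Z.append(kept[idx[np.argmin(dists)]])
    return Z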
We also conduct careful analyses on the effects of these hyperparameters.

Baselines One obvious baseline is simply calling the LLM to sample a response. We report the average numbers over $N$ sampled responses to alleviate the randomness of the sampling process. In addition, we take the following baselines for a better understanding of our approach:

• Refine: Considering the power of LLMs, an LLM might be able to correct its own errors given a second chance. This baseline is set to quantify the gain from this effect.

• (Universal) Self-Consistency (SC / USC): They are implemented as described in §2.1.

• Chain-of-Verification (CoVe): Its implementation follows the description in §2.2.

4.2 Results and Analyses

Self-endorsement Helps Improve Factuality As shown in Table 1, none of the baselines (+refine, +USC, and +CoVe) significantly improves over the 7B and 70B LLaMA2-Chat models regarding Fact Acc. In contrast, self-endorsement gives significant improvements over the baselines, no matter whether the final response is selected or regenerated and whether context pruning is used or not.

Model | Fact Acc. | #Fact
LLaMA2-7B-Chat | 53.2 | 16.8
+refine | 52.6 | 15.7
+USC | 53.5 | 15.9
+CoVe | 54.8 | 9.8
self-endorsement:
+select | 58.2** | 15.9
+select w/ pruning | 59.6** | 15.2
+regenerate | 67.7** | 14.9
+regenerate w/ pruning | 65.7** | 14.6
LLaMA2-70B-Chat | 63.1 | 20.0
+refine | 64.9* | 20.2
+USC | 61.6 | 20.4
+CoVe | 64.0 | 16.5
self-endorsement:
+select | 66.5** | 19.4
+select w/ pruning | 67.7** | 18.8
+regenerate | 73.1** | 18.3
+regenerate w/ pruning | 73.0** | 17.9

Table 1: Test results on Biographies. We also report significance-test results using bootstrap resampling. * and ** denote significantly better results over the base LLM (the first line in each group) with significance levels p < 0.05 and p < 0.01, respectively.

Figure 3: Statistical correlation between endorsement scores and factuality scores for (a) LLaMA2-7B-Chat and (b) LLaMA2-70B-Chat.

Among the baselines, only CoVe slightly improves Fact Acc., but it clearly decreases #Fact, which is also observed in Dhuliawala et al. (2023). Refine only benefits LLaMA2-70B-Chat, and the gain is still much inferior to that of our self-endorsement approaches based on self-selected high-quality facts. The results of Refine also indicate that naive self-refinement demands strong capabilities from the LLM. For our methods, regeneration consistently produces better responses than selection because it can include reliable facts from all candidates and discard incorrect ones. Using context pruning or not leads to only a minor change in Fact Acc. We provide more analyses in later experiments.

Figure 4: Hyperparameter analyses on LLaMA2-7B-Chat (top) and LLaMA2-70B-Chat (bottom): different choices of $\alpha$, $N$, and $M$ and their effects on Fact Acc. and #Fact.

Endorsement Score Correlates with Factuality Since endorsement scores play a crucial role in the success of our approaches, we further investigate how endorsement scores are correlated with the actual factuality.
To this end, we use inst-LLaMA-7B with the Wikipedia dump to calculate a factuality score for each piece of fact. Figure 3 presents the correlation between endorsement scores and factuality scores. Results on both models show clear positive relationships between endorsement scores and factuality. LLaMA2-70B-Chat gives a stronger correlation because of its stronger ability compared to LLaMA2-7B-Chat. In particular, LLaMA2-7B-Chat tends to erroneously give higher endorsement scores to certain incorrect facts.

How Does the Quality of Selected Facts Affect Final Responses? Since the threshold $\alpha$ decides the quality of the facts selected for regeneration, we try several values of $\alpha$ and visualize the corresponding final-response quality in Figures 4a and 4d. We observe that increasing $\alpha$ from 0 to 1 keeps benefiting LLaMA2-7B-Chat, while the performance of LLaMA2-70B-Chat first increases and then decreases. After a closer look, we find that a high $\alpha$ may limit the quantity and diversity of the selected facts, which can hurt regeneration quality. For example, when $\alpha$ = 1, an average of only 11.3 facts are selected for LLaMA2-70B-Chat, while the number is 16.7 for LLaMA2-7B-Chat. Besides, we observe a decent performance increase with $\alpha \geq 0.2$, showing the effectiveness of our approach at alleviating the side effects of low-quality facts by removing them.

How Does the Candidate Number Affect Final Responses? Intuitively, increasing the candidate number $N$ can help provide more high-quality facts, and each fact can also be better verified with more samples. As shown in Figures 4b and 4e, the performance of both the 7B and 70B models generally improves as $N$ increases, and the number of facts in regenerated responses remains stable. For LLaMA2-7B-Chat, more improvements can be expected when $N$ is further increased. However, this also brings more computational cost, which can be impractical. In contrast, LLaMA2-70B-Chat is less sensitive, showing that a small $N$ is enough for stronger LLMs. Encouragingly, we also observe that our models can significantly outperform the baselines with limited samples (70.4 vs. 63.1 when $N$ = 2 on LLaMA2-70B-Chat). This suggests the robustness of our method in some extreme cases.

Effect of Selecting Facts from Fewer Candidates for Regeneration We further analyze the effect of selecting facts from a smaller number $M$ ($M < N$) of candidates. Note that these facts from the $M$ candidates can still use all $N$ candidates to calculate their endorsement scores.

K | Fact Acc. | #Fact
LLaMA2-7B-Chat:
1 | 62.5 | 15.1
3 | 65.7 | 14.6
5 | 66.8 | 14.7
ALL | 67.7 | 14.9
LLaMA2-70B-Chat:
1 | 72.4 | 18.1
3 | 73.0 | 17.9
5 | 73.2 | 18.2
ALL | 73.1 | 18.3

Table 2: Performance of LLaMA2-7B-Chat (top) and LLaMA2-70B-Chat (bottom) when using $K$ facts from other responses to calculate the endorsement score for target facts.

Results are shown in Figures 4c and 4f. We again observe positive effects when increasing $M$, because the final responses can directly consult more provided input facts. Besides, by comparing the results in Figures 4b and 4c (also Figure 4e vs. 4f), we find that the latter performs better when the candidate number is small (e.g., 71.3 vs. 70.4 when both $N$ = 2 and $M$ = 2 on LLaMA2-70B-Chat). This indicates that a fact can be better verified when more candidates are available for calculating endorsement scores.

How Does Context Pruning Affect Final Responses?
Context pruning aims to eliminate unnecessary context when calculating the endorsement score for each fact, but it may hurt the accuracy of fact selection and overall performance when too much context is pruned. As shown in Table 2 (top), LLaMA2-7B-Chat is strongly influenced by $K$, and its performance steadily improves as $K$ increases. Conversely, though growing Fact Acc. scores are also observed for LLaMA2-70B-Chat (Table 2, bottom), the growth rate is mild (e.g., 72.4 → 73.2). This is consistent with the comparison on both the candidate number $N$ (Figure 4b vs. 4e) and the candidate number for fact selection $M$ (Figure 4c vs. 4f). For both the 7B and 70B models, we observe Fact Acc. numbers close to those obtained without context pruning. Thus, context pruning is useful overall, especially considering that it saves about 50% of the computation cost when $K$ = 5 according to our statistics. Note that we only use the vanilla BM25 algorithm for selecting related facts; we leave exploring better sentence-matching algorithms to future work.

Evaluation Results on Question Answering To validate our approach on short-text generation, we conduct experiments on TriviaQA (Joshi et al., 2017), a popular open-domain question-answering benchmark. We do not add restrictions (e.g., early stopping or instructing the LLM to generate only the answer), so as to encourage the LLM to generate explanations and relevant knowledge in addition to the answer. For evaluation, we report answer recall (Ans. Rec.) in addition to Fact Acc. and #Fact. We randomly sample 1000 questions from the original development set of the Wikipedia domain.

Model | Fact Acc. | Ans. Rec. | #Fact
LLaMA2-7B-Chat | 57.4 | 70.0 | 4.8
+USC | 57.6 | 69.0 | 4.8
+CoVe | 53.7 | 71.2 | 4.3
self-endorsement:
+select | 63.4** | 70.2 | 4.4
+select w/ pruning | 63.8** | 69.5 | 4.3
+regenerate | 65.0** | 70.7 | 4.7
+regenerate w/ pruning | 64.0** | 70.8 | 4.4
LLaMA2-70B-Chat | 65.1 | 84.1 | 5.0
+USC | 65.0 | 83.1 | 5.0
+CoVe | 58.9 | 83.1 | 5.4
self-endorsement:
+select | 69.7** | 83.8 | 4.8
+select w/ pruning | 70.2** | 84.2 | 4.7
+regenerate | 71.7** | 85.3* | 5.2
+regenerate w/ pruning | 70.7** | 85.0* | 5.2

Table 3: Test results on TriviaQA.

Results are shown in Table 3. Our method again effectively improves Fact Acc., consistent with our observations on Biographies. The improvements in Ans. Rec. are limited. This is because LLMs already provide fairly accurate exact answers to the target questions (Dhuliawala et al., 2023) but tend to ignore other facts in the responses. Besides, regeneration gives smaller improvements over selection on this dataset, which may be due to the limited number of facts in short-text generation: with enough candidates, it is easier for selection to find a good response.

Extensive Experiments on GSM8K In addition to knowledge-intensive tasks, we also briefly explore self-endorsement on reasoning tasks, choosing GSM8K (Cobbe et al., 2021), a popular math benchmark, as the testbed. Here we focus more on the quality of the intermediate reasoning steps in addition to the final-answer accuracy (Acc.). Particularly, we divide the reasoning steps into two groups (Yes / No) based on whether their corresponding predicted answers are correct or not.

Model | Acc. | GPT4 (Y) | GPT4 (N)
Mixtral-8×7B-Inst | 68.4 | 9.87 | 3.65
+USC | 71.6* | 9.86 | 3.90*
+CoVe | 56.0 | – | –
+SC | 80.3** | 9.87 | 3.96**
+select | 80.8** | 9.87 | 4.08**

Table 4: Test results on GSM8K. We do not report the GPT4 scores for CoVe because its answers usually do not contain complete rationales.
We then prompt gpt-4-0613 with the instruction from MT-bench (Zheng et al., 2023) (see Figure 9 in the Appendix) to measure the quality of each group (GPT4 (Y) / GPT4 (N)). As shown in Table 4, both USC and SC help improve Acc., while SC performs significantly better. This is because SC, which conducts majority voting on final answers, is more aligned with Acc. CoVe even severely hurts model performance, because its augmented questions occasionally inquire about irrelevant topics, which disturbs the main reasoning procedure. Regarding the intermediate steps, there is a large performance gap between the two groups (Yes / No). Thus, further improving the group with incorrect final answers becomes critical. Our method reports a slightly better result than SC on Acc., and the gap on GPT4 (N) is even larger (0.12 out of 10). This indicates that our method indeed helps select relatively better rationales even when the final answers are incorrect, validating its effectiveness from another aspect.

5 Related Work

5.1 Inference-time Hallucination Mitigation

Researchers have explored mitigating LLM hallucinations at both training and inference time. Compared with training-time mitigation approaches (Lee et al., 2022; Lightman et al., 2023; Tian et al., 2023), inference-time improvement is gaining popularity because it can be more cost-effective and controllable (Zhang et al., 2023). Besides the two baselines USC and CoVe introduced previously, Lee et al. (2022) proposed factual-nucleus sampling, which balances diversity and factuality by dynamically adjusting the sampling hyperparameters during decoding. Li et al. (2023) introduced Inference-Time Intervention (ITI), which shifts model activations along truth-correlated directions after identifying attention heads with high linear-probing accuracy for truthfulness. Chuang et al. (2023) found that factual information is encoded in distinct layers, so they contrasted the generation probabilities from different layers of LLMs. Among these studies, our approach is most related to USC in that it checks consistency across sampled candidates, but it does so at the fact level.

5.2 Black-box Hallucination Detection

Detecting hallucinations during inference is usually based on uncertainty estimation. Current work can be categorized into three types (Zhang et al., 2023): logit-based (Guo et al., 2017), verbalize-based (Xiong et al., 2023), and consistency-based (Manakul et al., 2023; Mündler et al., 2023). This work is most relevant to the consistency-based approach, which operates on the assumption that LLMs are likely to provide logically inconsistent responses to the same question when they are indecisive and hallucinating facts (Zhang et al., 2023). For instance, SelfCheckGPT (Manakul et al., 2023) explored several methods, such as BERTScore (Zhang et al., 2019), to check informational consistency between sampled responses. Mündler et al. (2023) utilized an additional LLM to detect incorrect facts by checking whether two responses given the same context contradict each other. Our method shares similarities with these approaches in that it checks consistency among sampled responses. Nonetheless, our endorsement scores are calculated at a finer granularity (fact vs. fact). More importantly, we prioritize improving the quality of final responses after detecting hallucinations.
6 Conclusion

In this paper, we present self-endorsement, a framework that alleviates hallucinations and improves reasoning capability solely through the LLM itself. Particularly, we first perform fine-grained fact-level comparisons among multiple sampled candidates to identify reliable facts. Then, we produce the final response by either selecting from the candidates or regenerating based on these facts. We evaluate our approach on popular benchmarks, including Biographies for longform generation, TriviaQA for open-domain question answering, and GSM8K for mathematical multi-step reasoning. Results show that self-endorsement can significantly benefit small or open-source LLMs without the intricate instructions required by previous approaches.

Limitations

The main limitation of self-endorsement lies in the computation cost incurred in the fact-verification phase. The cost escalates dramatically when more candidates are used for collecting verified facts. In this work, we have demonstrated the trade-off between the number of candidates and final performance: a limited number of candidates can still help improve factuality, and larger models exhibit less sensitivity to hyperparameter selection. Future studies can also explore quantization (Jacob et al., 2018) or distilling knowledge into a smaller model (Hinton et al., 2015) to further improve computational efficiency. Another limitation is that our method is fully based on prompting. Given the sensitivity of LLMs to input prompts, the choice of prompts can impact final performance. Moreover, a single prompt may not consistently yield optimal results across diverse tasks or models. Techniques for prompt searching can help solve this problem (Yang et al., 2023); we leave this as future work." + } + ] +} \ No newline at end of file